Consider the system given by \(Ax = b\) where \(A = \begin{bmatrix} 0 & 0 & 0 & -4 & 1 \\ 1 & 0 & 0 & -1 & 0 \\ 0 & 1 & 1 & 3 & 0 \\ \end{bmatrix}\), \(x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix}\), and \(b = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix}\). Do you see a solution right away?

The system written out in full is \begin{eqnarray} -4x_4 + x_5 & = & 2 \\ x_1 - x_4 & = & 4 \\ x_2 + x_3 + 3x_4 & = & 6, \end{eqnarray} which can be rewritten to the equivalent system \begin{eqnarray} x_5 & = & 2 + 4x_4 \\ x_1 & = & 4 + x_4 \\ x_3 & = & 6 - x_2 - 3x_4. \end{eqnarray} We can now obtain solutions by choosing whatever values we like for \(x_2\) and \(x_4\) and setting \(x_1, x_3,\) and \(x_5\) to the values given in terms of \(x_2\) and \(x_4\) as shown in the system above. For example, setting \(x_2\) and \(x_4\) to \(0\) gives \(x_5 = 2\), \(x_1 = 4\), and \(x_3 = 6\). Note that the values assigned to \(x_5, x_1,\) and \(x_3\) are precisely the right-hand side values of the original system. Is this merely a coincidence?
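To make this concrete, here is a quick numerical check (a sketch using NumPy; the variable names are my own) that the solution obtained from \(x_2 = x_4 = 0\), as well as one obtained from a different choice of \(x_4\), indeed satisfies \(Ax = b\):

```python
import numpy as np

A = np.array([[0, 0, 0, -4, 1],
              [1, 0, 0, -1, 0],
              [0, 1, 1,  3, 0]])
b = np.array([2, 4, 6])

# Choice x2 = 0, x4 = 0 gives x1 = 4, x3 = 6, x5 = 2.
x = np.array([4, 0, 6, 0, 2])
print(A @ x)    # [2 4 6], matching b

# A different choice, x2 = 0, x4 = 1, gives x1 = 5, x3 = 3, x5 = 6.
y = np.array([5, 0, 3, 1, 6])
print(A @ y)    # [2 4 6] again
```

Every choice of \(x_2\) and \(x_4\) produces a different solution this way, so the system has infinitely many solutions.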

If you look closely at the original system, you will notice that in each equation, there is a variable that appears in no other equation and it has a coefficient of \(1\). In the first equation, \(x_5\) is the only such variable; in the second, it is \(x_1\). But in the third equation, both \(x_2\) and \(x_3\) have that property.

In general, if in each equation of a system you can choose a variable that appears in no other equation and has coefficient \(1\), you can easily obtain a solution by setting each chosen variable to the corresponding right-hand side value and the remaining variables to \(0\). This is really great. But the question is, “Can we always convert a given system into such an equivalent system?”
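For the original system this recipe can be carried out in two ways, since the third equation offers both \(x_2\) and \(x_3\). A quick NumPy check (a sketch) of both choices:

```python
import numpy as np

A = np.array([[0, 0, 0, -4, 1],
              [1, 0, 0, -1, 0],
              [0, 1, 1,  3, 0]])
b = np.array([2, 4, 6])

# Choose x5 (eq. 1), x1 (eq. 2), x3 (eq. 3); set the rest to 0.
x_via_x3 = np.array([4, 0, 6, 0, 2])
# Choose x2 for the third equation instead; set the rest to 0.
x_via_x2 = np.array([4, 6, 0, 0, 2])

print(A @ x_via_x3)   # [2 4 6]
print(A @ x_via_x2)   # [2 4 6]
```

Both choices give a valid solution, which is why the recipe only requires that *some* such variable exist in each equation.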

Before we address this question, let us reorder things a bit in the original system and write it as \[\begin{array}{cccccccccccc} x_1 & & & & & & & - & x_4 & = & 4 \\ & & x_2 & + & x_3 & & & + & 3x_4 & = & 6 \\ & & & & & & x_5 & - & 4x_4 & = & 2 \end{array}\] In matrix form, with the columns corresponding to the variables in the order \(x_1, x_2, x_3, x_5, x_4\), the coefficient matrix would be \(\begin{bmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & 1 & 0 & 3 \\ 0 & 0 & 0 & 1 & -4 \end{bmatrix}\).

This matrix satisfies a number of properties.
First, the left-most nonzero entry in each row is \(1\).
Such a \(1\) is called a **leading 1**.
Second, all the entries above and below any leading \(1\) are \(0\).
Also, a leading \(1\) in a given row appears to the right of any leading
\(1\)'s in the rows above.
There is one more property that this matrix satisfies vacuously:
any row consisting of only \(0\)'s (called a zero row) appears below
rows with at least one nonzero entry. The matrix above satisfies this
condition vacuously because it does not contain any zero row.
Any matrix that satisfies the properties listed above is said
to be in **reduced row-echelon form**.

A matrix is in **reduced row-echelon form** (RREF) if it satisfies the following:

1. In each row, the left-most nonzero entry is \(1\) and the column that contains this \(1\) has all other entries equal to \(0\). This \(1\) is called a leading \(1\).

2. The leading \(1\) in the second row or beyond is to the right of the leading \(1\) in the row just above.

3. Any row containing only \(0\)'s is at the bottom.
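These three properties translate directly into a short check. Below is a minimal sketch (the function name `is_rref` and the use of NumPy are my own choices, not from the text):

```python
import numpy as np

def is_rref(M):
    """Check the three RREF properties for a matrix M."""
    M = np.asarray(M, dtype=float)
    last_lead = -1           # column of the previous row's leading 1
    seen_zero_row = False
    for row in M:
        nonzero = np.flatnonzero(row)
        if nonzero.size == 0:           # a zero row
            seen_zero_row = True
            continue
        if seen_zero_row:               # nonzero row below a zero row: property 3 fails
            return False
        lead = nonzero[0]
        if row[lead] != 1:              # left-most nonzero entry must be 1 (property 1)
            return False
        if lead <= last_lead:           # leading 1 must move right (property 2)
            return False
        if np.count_nonzero(M[:, lead]) != 1:   # leading 1 alone in its column (property 1)
            return False
        last_lead = lead
    return True
```

For example, `is_rref([[1, 0, 0, 0, -1], [0, 1, 1, 0, 3], [0, 0, 0, 1, -4]])` returns `True` for the coefficient matrix above.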

For example, the following matrices are **not** in RREF:

\(\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & -1 \\ 0 & 0 & 0\end{bmatrix}\) (The left-most nonzero entry in the second row is not equal to \(1\), thus violating property 1 stated above.)

\(\begin{bmatrix} 0& 1 \\ 1 & 0 \end{bmatrix}\) (The leading \(1\) in the second row is to the left of the leading \(1\) in the first row, thus violating property 2 stated above.)

\(\begin{bmatrix} 1 & 0 & -3 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}\) (The leading \(1\) in the third row is not the only nonzero entry in the column containing it, thus violating property 1 stated above.)

If you have a system \(Ax = b\) where \(A\) is in RREF, you can quickly tell if it has a solution and give one when it does. If \(A\) has a zero row, say row \(i\), and \(b_i\) is nonzero, then there is no solution because the \(i\)th equation would read “\(0 = b_i\)”, which is impossible. For example, if \(A = \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}\), \(x = \begin{bmatrix} x_1\\x_2\end{bmatrix}\), and \(b = \begin{bmatrix} 3 \\ 4\end{bmatrix}\), the system written out in full is \begin{eqnarray} x_1 + 2x_2 & = & 3 \\ 0 & = & 4 \end{eqnarray} Clearly, no matter what \(x_1\) and \(x_2\) are, the second equation cannot be satisfied.

If \(A\) does not have a zero row whose corresponding right-hand side is nonzero, then one can obtain a solution by setting all the variables corresponding to the leading \(1\)'s to the right-hand side values and the remaining variables to \(0\). (What if you want to obtain all solutions?)
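The two cases above, detecting inconsistency or reading off a solution, can be sketched as a short function; the name `particular_solution` is illustrative, and NumPy is assumed:

```python
import numpy as np

def particular_solution(A, b):
    """Given A in RREF, return one solution of Ax = b, or None if inconsistent."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])
    for i, row in enumerate(A):
        nonzero = np.flatnonzero(row)
        if nonzero.size == 0:        # zero row: consistent only if b_i = 0
            if b[i] != 0:
                return None
            continue
        x[nonzero[0]] = b[i]         # variable of the leading 1 gets b_i
    return x                         # remaining (free) variables stay 0

A = np.array([[1, 0, 0, 0, -1],
              [0, 1, 1, 0,  3],
              [0, 0, 0, 1, -4]])
print(particular_solution(A, np.array([4, 6, 2])))   # [4. 6. 0. 2. 0.]

A2 = np.array([[1, 2], [0, 0]])
print(particular_solution(A2, np.array([3, 4])))     # None
```

Note that this produces only one solution; obtaining all solutions requires treating the free variables as parameters, as was done with \(x_2\) and \(x_4\) earlier.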

For each of the following matrices, determine if it is in reduced row-echelon form.

\(\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0\end{bmatrix}\)

\(\begin{bmatrix} 0 & 1 & 2 & 3 \end{bmatrix}\)

\(\begin{bmatrix} 1 & 2 & 0 & 4 \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}\)

\(\begin{bmatrix} 1 & 0 & 0 & 4 \\ 0 & 1 & -1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}\)

\(\begin{bmatrix} 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\)