Find all real numbers \(x, y, z\) satisfying the following system of linear
equations:
\[\begin{array}{r}
3 x - 2y - z = 1 \\
2z + x - y = 5
\end{array}\]
The system can be written as
\(
\begin{bmatrix}
3 & -2 & -1 \\
1 & -1 & 2
\end{bmatrix}
\begin{bmatrix} x\\ y\\ z \end{bmatrix} =
\begin{bmatrix} 1\\ 5 \end{bmatrix}. \) (Beware that the variables
in the second equation are given out of order; rewriting it as
\(x - y + 2z = 5\) gives the second row of the coefficient matrix.)
The augmented matrix of the system is
\[
\left[\begin{array}{rrr|r}
3 & -2 & -1 & 1\\
1 & -1 & 2 & 5
\end{array}\right]
\]
Performing \(R_1 \leftrightarrow R_2\) gives
\[
\left[\begin{array}{rrr|r}
1 & -1 & 2 & 5 \\
3 & -2 & -1 & 1
\end{array}\right]
\]
Performing \(R_2 \leftarrow R_2 - 3 R_1\) gives
\[
\left[\begin{array}{rrr|r}
1 & -1 & 2 & 5 \\
0 & 1 & -7 & -14
\end{array}\right]
\]
Performing \(R_1 \leftarrow R_1 + R_2\) gives
\[
\left[\begin{array}{rrr|r}
1 & 0 & -5 & -9 \\
0 & 1 & -7 & -14
\end{array}\right]
\]
The third column contains no pivot, so \(z\) is a free variable.
Setting \(z = s\), the second row gives \(y - 7s = -14\)
and the first row gives \(x - 5s = -9\).
Hence, the solutions are \(\begin{bmatrix} x\\y\\z \end{bmatrix}
= \begin{bmatrix} -9 + 5s \\ -14 + 7s \\ s\end{bmatrix}\)
for all \(s \in \mathbb{R}\).
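As a quick check (not part of the original solution), we can substitute the
parametric solution back into both equations; the sketch below assumes the
Python library sympy is available.
\begin{verbatim}
from sympy import symbols, simplify

s = symbols('s')

# Parametric solution found above: x = -9 + 5s, y = -14 + 7s, z = s.
x, y, z = -9 + 5*s, -14 + 7*s, s

# Both residuals should simplify to 0 for every value of s.
print(simplify(3*x - 2*y - z - 1))   # expected: 0
print(simplify(2*z + x - y - 5))     # expected: 0
\end{verbatim}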
Example 2
Find all the solutions to the following system defined over \(GF(2)\):
\begin{eqnarray*}
x_1 + x_2 + x_3 + x_4 & = & 1 \\
x_1 + x_2 + x_3 & = & 1 \\
x_2 + x_3 + x_4 & = & 0 \\
x_1 + x_2 + x_4 & = & 0
\end{eqnarray*}
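Since \(GF(2)\) has only the two elements \(0\) and \(1\), one way to find
every solution is to test all \(2^4 = 16\) assignments, with addition taken
modulo 2. The brute-force sketch below is not part of the original notes;
it uses only the Python standard library.
\begin{verbatim}
from itertools import product

# Test every (x1, x2, x3, x4) in GF(2)^4; addition in GF(2) is addition mod 2.
solutions = [
    (x1, x2, x3, x4)
    for x1, x2, x3, x4 in product((0, 1), repeat=4)
    if (x1 + x2 + x3 + x4) % 2 == 1
    and (x1 + x2 + x3) % 2 == 1
    and (x2 + x3 + x4) % 2 == 0
    and (x1 + x2 + x4) % 2 == 0
]
print(solutions)   # expected: [(1, 1, 1, 0)]
\end{verbatim}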
Example 3
Let \(A = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 0\end{bmatrix}\).
Let \(x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4\end{bmatrix}\) be a tuple
of variables. Find all solutions to \(A x = 0\).
Observe that \(A\) is already in RREF. The free variables are
\(x_2\) and \(x_4\). Hence, we may set \(x_2 = s\) and
\(x_4 = t\) where \(s,t \in \mathbb{R}\).
Note that the system \(Ax = 0\) is
\begin{eqnarray*}
x_1 + 2x_2 + x_4 & = & 0 \\
x_3 & = & 0.
\end{eqnarray*}
Hence, \(x_1 = -2s - t\) and \(x_3 = 0\).
Thus, the solutions are
\(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4\end{bmatrix}
= \begin{bmatrix} -2s -t \\ s \\ 0 \\t \end{bmatrix}
= s\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0\end{bmatrix} + t
\begin{bmatrix} -1 \\ 0 \\ 0 \\ 1\end{bmatrix}\)
for all possible choices of real numbers \(s\) and \(t\).
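As a sanity check (not part of the original solution), multiplying \(A\) by the
parametric vector should give the zero vector for every \(s\) and \(t\); the
sketch below assumes sympy.
\begin{verbatim}
from sympy import Matrix, symbols

s, t = symbols('s t')

A = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 0]])

# Parametric solution found above.
x = Matrix([-2*s - t, s, 0, t])

print(A * x)   # expected: Matrix([[0], [0]])
\end{verbatim}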
Example 4
Let \(A = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & -6 \\1 & 3 & 5\end{bmatrix}\).
Let \(B = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & -2 \\0 & 1 & 5\end{bmatrix}\).
Show that \(A\) can be transformed into \(B\) via a sequence
of two elementary row operations.
Observe that \(A\) and \(B\) differ in two rows; namely, row 2 and row 3.
The requirement that \(A\) be transformed into \(B\)
with only two elementary row operations forces us to
perform one operation that alters the second row and another that
alters the third row.
Note that the third row of \(A\) has a nonzero entry in the first column, and
the only other row of \(A\) with this property is the first row.
Thus, the only elementary row operation that eliminates the leading \(1\) in the
third row is \(R_3 \leftarrow R_3 - R_1\). The result is
\(\begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & -6 \\0 & 1 & 5\end{bmatrix}\).
Let us call this matrix \(A'\).
Notice that \(A'\) and \(B\) differ only in the second row. In fact,
row 2 of \(A'\) is simply 3 times row 2 of \(B\). Thus,
\(A' \xrightarrow{R_2 \leftarrow \frac{1}{3} R_2} B\).
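The two operations can also be checked mechanically; the sketch below (not part
of the original solution, assuming sympy) applies \(R_3 \leftarrow R_3 - R_1\)
and then \(R_2 \leftarrow \frac{1}{3} R_2\) to \(A\) and compares the result
with \(B\).
\begin{verbatim}
from sympy import Matrix, Rational

A = Matrix([[1, 2, 0],
            [0, 3, -6],
            [1, 3, 5]])
B = Matrix([[1, 2, 0],
            [0, 1, -2],
            [0, 1, 5]])

A[2, :] = A[2, :] - A[0, :]          # R3 <- R3 - R1
A[1, :] = Rational(1, 3) * A[1, :]   # R2 <- (1/3) R2

print(A == B)   # expected: True
\end{verbatim}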
Example 5
Find all real numbers \(a\) such that the system of linear equations
\begin{eqnarray*}
x - y + z & = & 1 \\
-x + 2y + z & = & -1 \\
y + az & = & 0
\end{eqnarray*}
has infinitely many solutions. Justify your answer.
The augmented matrix for the system is
\[\left[\begin{array}{rrr|r}
1 & -1 & 1 & 1 \\
-1 & 2 & 1 & -1 \\
0 & 1 & a & 0
\end{array}\right].\]
Performing \(R_2 \leftarrow R_2 + R_1\) gives
\[\left[\begin{array}{rrr|r}
1 & -1 & 1 & 1 \\
0 & 1 & 2 & 0 \\
0 & 1 & a & 0
\end{array}\right].\]
Performing \(R_3 \leftarrow R_3 - R_2\) gives
\[\left[\begin{array}{rrr|r}
1 & -1 & 1 & 1 \\
0 & 1 & 2 & 0 \\
0 & 0 & a-2 & 0
\end{array}\right].\]
Performing \(R_1 \leftarrow R_1 + R_2\) gives
\[\left[\begin{array}{rrr|r}
1 & 0 & 3 & 1 \\
0 & 1 & 2 & 0 \\
0 & 0 & a-2 & 0
\end{array}\right].\]
If \(a \neq 2\), then \(a-2 \neq 0\), and so the third row gives
\((a-2)z = 0\), implying that \(z = 0\). Substituting this into the
equation corresponding to the second row gives \(y = 0\), and the
first row then gives \(x = 1\).
So there is a unique solution in this case.
If \(a = 2\), then \(a-2 = 0\) and the augmented matrix is in RREF.
We have \(z\) as a free variable and we get
infinitely many solutions with \(y = -2z\) and \(x = 1 - 3z\).
In summary, the system has infinitely many solutions if and only if \(a = 2\).
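To see the two cases concretely (not part of the original solution), we can
solve the system for one value of \(a\) on each side of the dichotomy, say
\(a = 3\) and \(a = 2\); the sketch below assumes sympy.
\begin{verbatim}
from sympy import Matrix, linsolve, symbols

x, y, z = symbols('x y z')

def solve_for(a):
    # Augmented matrix of the system for a given value of a.
    aug = Matrix([[1, -1, 1, 1],
                  [-1, 2, 1, -1],
                  [0, 1, a, 0]])
    return linsolve(aug, x, y, z)

print(solve_for(3))   # a != 2: exactly one solution, {(1, 0, 0)}
print(solve_for(2))   # a == 2: a one-parameter family of solutions
\end{verbatim}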
Example 6
Consider the following system of linear equations defined over the
field of real numbers:
\begin{eqnarray*}
x + y - 4z & = & 1 \\
-2x + y - z & = & -1 \\
x - y + 2z & = & 0.
\end{eqnarray*}
How many solutions to the system are there?
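One way to settle the question is to compute the reduced row-echelon form of
the augmented matrix; the sketch below (not part of the original notes, and
assuming sympy) does this. The reduction produces a row of the form
\([0 \; 0 \; 0 \mid 1]\), so the system has no solutions.
\begin{verbatim}
from sympy import Matrix

# Augmented matrix of the system.
aug = Matrix([[1, 1, -4, 1],
              [-2, 1, -1, -1],
              [1, -1, 2, 0]])

rref_matrix, pivot_columns = aug.rref()
print(rref_matrix)      # last row is [0, 0, 0, 1]
print(pivot_columns)    # the last (augmented) column is a pivot column
\end{verbatim}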
Example 7
A system of linear equations is said to be inconsistent
if it has no solutions. Otherwise, it is said to be consistent.
Is it possible for a system of linear equations defined
over a field to be inconsistent if there are more variables
than there are equations?
The answer is “yes”. But before we look at the solution,
let us first consider how one could approach such a question.
Unless you have seen this question before, you probably would not know
which way the answer goes. You might have a guess, but it has to be
verified.
Since there are more variables than there are equations, we know right away
that not every column corresponding to a variable can be a pivot column in the
reduced row-echelon form of the augmented matrix. Therefore, there must be at
least one free variable, and we are tempted to conclude that the system always
has a solution, answering the question in the negative.
However, we must be careful here and ask, “Does the existence of
a free variable imply that there must be a solution?”
The criterion for being inconsistent is for the row-reduced augmented matrix
to have a row of the form \([0 \cdots 0 \mid 1]\). It does not
involve the relationship between the number of variables and the
number of equations.
Hence, if our system is simply
\begin{eqnarray*}
x + y + z & = &0 \\
0x +0 y +0 z & = &1,
\end{eqnarray*}
then we have a system with more variables than equations that is inconsistent.
If we don't like the look of the second equation, we can simply add the
first equation to the second to obtain the system
\begin{eqnarray*}
x + y + z & = & 0 \\
x + y + z & = & 1.
\end{eqnarray*}
Clearly, \(x+y+z\) cannot equal both \(0\) and \(1\) at the same time, and so
the system is inconsistent. This system also happens to have
more variables than equations.
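A solver reaches the same conclusion; the sketch below (not part of the original
notes, assuming sympy) asks for the solutions of this two-equation system in
three variables and gets back the empty set.
\begin{verbatim}
from sympy import Eq, linsolve, symbols

x, y, z = symbols('x y z')

# Two equations, three variables, yet no solutions.
solutions = linsolve([Eq(x + y + z, 0), Eq(x + y + z, 1)], x, y, z)
print(solutions)   # expected: EmptySet
\end{verbatim}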
Example 8
Let \(A\) be an \(m\times n\) matrix with entries from some field.
Prove that no matter how you transform \(A\) using elementary row
operations to a matrix in RREF, you always end up with the same matrix.
This result means that one can talk about the reduced row-echelon
form of \(A\).
Suppose that \(A\) can be row reduced to matrices \(B\) and \(C\)
in RREF via different sequences of elementary row operations.
We want to show that \(B = C\).
First, note that the homogeneous systems \(Bx = 0\) and \(Cx = 0\) must
have the same solutions as \(Ax = 0\), since elementary row operations do not
change the solution set of a system.
Suppose that the
earliest column at which \(B\) and \(C\) differ is column \(i\).
Because \(B\) and \(C\) are in RREF and agree on columns \(1,\ldots,i-1\),
\(B_i\) (column \(i\) of \(B\)) and \(C_i\) cannot both be pivot columns:
they would then both equal the same standard basis vector, contradicting
that \(B\) and \(C\) differ at column \(i\).
Without loss of generality, we may assume that \(C_i\) is not a pivot column.
Hence, \(x_i\) is a free variable in \(Cx = 0\).
Now, solving for \(x\) in \(Cx = 0\) after
setting \(x_i = 1\) and all the remaining free variables to \(0\),
we obtain a solution \(x'\) such that \(x'_i = 1\) and \(x'_j = 0\)
for all \(j > i\).
Since \(Cx' = 0\) and \(B_j = C_j\) for \(j = 1,\ldots,i-1\),
we have
\[Bx' = Bx' - Cx' =
\sum_{j=1}^n B_jx'_j-\sum_{j=1}^n C_jx'_j =
\sum_{j=1}^n (B_j-C_j)x'_j =
\sum_{j=1}^i (B_j-C_j)x'_j =
(B_i - C_i)x'_i = B_i - C_i \neq 0.\]
So \(x'\) is not a solution of \(Bx = 0\),
contradicting that \(Bx = 0\) and \(Cx = 0\) have the same solutions.
(The above proof is an example of a minimality argument.
Choosing the smallest column index \(i\) at which the two matrices differ
guarantees that \(B\) and \(C\) agree on all earlier columns, which is what
collapses the sum above to the single term \(B_i - C_i\). The technique is
often useful for proving that counterexamples cannot exist: one considers a
smallest counterexample and derives a contradiction.)
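The statement can also be illustrated on a small example (this is not a
replacement for the proof): the sketch below, assuming sympy, reduces the same
matrix to RREF by two different sequences of elementary row operations and
checks that the results agree.
\begin{verbatim}
from sympy import Matrix, Rational

A = Matrix([[2, 4, 2],
            [1, 3, 2]])

# Sequence 1: scale, then eliminate.
B = A.copy()
B[0, :] = Rational(1, 2) * B[0, :]   # R1 <- (1/2) R1
B[1, :] = B[1, :] - B[0, :]          # R2 <- R2 - R1
B[0, :] = B[0, :] - 2 * B[1, :]      # R1 <- R1 - 2 R2

# Sequence 2: swap, then eliminate.
C = A.copy()
C.row_swap(0, 1)                     # R1 <-> R2
C[1, :] = C[1, :] - 2 * C[0, :]      # R2 <- R2 - 2 R1
C[1, :] = Rational(-1, 2) * C[1, :]  # R2 <- (-1/2) R2
C[0, :] = C[0, :] - 3 * C[1, :]      # R1 <- R1 - 3 R2

print(B == C)             # expected: True
print(B == A.rref()[0])   # expected: True
\end{verbatim}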