Let \(p_1 = -x+1\), \(p_2 = x+2\), and \(p_3 = x^2+1\).
Show that \(\{p_1,p_2,p_3\}\) is a basis for \(P_2\),
the vector space
of polynomials in \(x\) with real coefficients having
degree at most \(2\).
Write \(x^2 + 2x\) as a linear combination of \(p_1,p_2,p_3\).
We first show that the span of \(\{p_1,p_2,p_3\}\) is \(P_2\).
To this end,
let \(a,b,c\in \mathbb{R}\). We need to show that there
exist \(\alpha,\beta,\gamma\in \mathbb{R}\) such
that \[\alpha p_1 + \beta p_2 + \gamma p_3 =
ax^2 + bx + c.\]
Comparing coefficients on both sides, we get
\begin{eqnarray*}
\gamma & = & a \\
-\alpha + \beta & = & b \\
\alpha + 2\beta + \gamma & = & c.
\end{eqnarray*}
Adding the last two equations gives \(3\beta + \gamma = b+c\).
Since \(\gamma = a\), we have \(\beta = \frac{-a+b+c}{3}\).
Hence, from the second equation, \(\alpha = \beta - b = \frac{-a - 2b + c}{3}\).
To obtain \(x^2 + 2x\), we need to set \(a = 1\), \(b=2\), and \(c=0\).
Therefore, \(x^2 + 2x = -\frac{5}{3}p_1 + \frac{1}{3}p_2 + p_3\).
Note that to get the identically zero polynomial, one needs
\(a = b = c = 0\). But this forces \(\alpha = \beta = \gamma = 0\).
Thus, \(\{p_1,p_2,p_3\}\) is a linearly independent set and is therefore
a basis for \(P_2\).
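As a quick check, the coefficient computation can be reproduced with sympy. The sketch below (assuming sympy is available; the symbol names are ours, not part of the notes) solves the same system for \(\alpha\), \(\beta\), \(\gamma\).

```python
import sympy as sp

x, al, be, ga = sp.symbols('x alpha beta gamma')
p1, p2, p3 = -x + 1, x + 2, x**2 + 1

# Coefficients of alpha*p1 + beta*p2 + gamma*p3 - (x^2 + 2x) as a polynomial
# in x; setting them all to zero recovers the system solved above.
eqs = sp.Poly(al*p1 + be*p2 + ga*p3 - (x**2 + 2*x), x).all_coeffs()
print(sp.solve(eqs, [al, be, ga]))  # {alpha: -5/3, beta: 1/3, gamma: 1}
```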
Example 2
Let \(A = \begin{bmatrix} 1 & 1\\ 0 & 2 \\ -1 & 0 \end{bmatrix}\) be
a matrix with real entries.
Is \(\begin{bmatrix} 1\\1\\1\end{bmatrix}\) in the column space of
\(A\)?
Note that
\(\begin{bmatrix} 1\\1\\1\end{bmatrix}\) is in the column
space of \(A\) if and only
if there exist real numbers \(\alpha\) and \(\beta\) such that
\[
\begin{bmatrix} 1\\1\\1\end{bmatrix} =
\alpha \begin{bmatrix} 1 \\ 0 \\ -1\end{bmatrix}
+ \beta \begin{bmatrix} 1 \\ 2 \\ 0\end{bmatrix},
\]
or equivalently,
\[
\begin{bmatrix} 1\\1\\1\end{bmatrix} =
\begin{bmatrix} \alpha + \beta \\ 2\beta \\ -\alpha\end{bmatrix}.
\]
Comparing the second and third entries of both sides,
we must have \(\alpha = -1\) and \(\beta = \frac{1}{2}\).
However, \(\alpha + \beta = -\frac{1}{2} \neq 1\). Hence, there do not exist
\(\alpha\) and \(\beta\) such that the two sides are equal.
Therefore,
\(\begin{bmatrix} 1\\1\\1\end{bmatrix}\) is not in the column
space of \(A\).
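The inconsistency can be confirmed with sympy's linsolve, which returns the empty set for an unsolvable system. A minimal sketch, assuming sympy is installed:

```python
import sympy as sp

A = sp.Matrix([[1, 1], [0, 2], [-1, 0]])
b = sp.Matrix([1, 1, 1])
al, be = sp.symbols('alpha beta')

# linsolve returns EmptySet when A [alpha, beta]^T = b has no solution.
print(sp.linsolve((A, b), al, be))  # EmptySet -> b is not in C(A)
```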
Example 3
Let \(A = \begin{bmatrix}
1 & -1 & 1 & 0\\
0 & 1 & 1 & 0\\
-2 & 2 & 0 & 1
\end{bmatrix}\).
Give a basis for each of \(N(A)\), \({\cal C}(A)\), and \({\cal R}(A)\).
The RREF of \(A\) is
\(R = \begin{bmatrix}
1 & 0 & 0 & -1\\
0 & 1 & 0 & -\frac{1}{2}\\
0 & 0 & 1 & \frac{1}{2}
\end{bmatrix}\).
Since every row of \(R\) is nonzero (each contains a pivot), the rank of
\(A\) is \(3\); hence the three rows of \(A\) (equivalently, the rows of
\(R\)) form a basis for \({\cal R}(A)\).
Since only the first three columns of \(R\) are pivot columns, the first three
columns of the original matrix \(A\) give a basis for
\({\cal C}(A)\).
Since the fourth column of \(R\) is the only non-pivot column,
all the solutions to the system \(Ax = 0\) are of the form
\(\begin{bmatrix} x_1\\x_2\\x_3\\x_4\end{bmatrix}
=\begin{bmatrix} s \\ \frac{s}{2} \\ -\frac{s}{2} \\ s\end{bmatrix}
=s\begin{bmatrix} 1 \\ \frac{1}{2} \\ -\frac{1}{2} \\ 1\end{bmatrix}\)
where \(s \in \mathbb{R}\).
Hence, a basis for \(N(A)\) is given by the single vector
\(\begin{bmatrix} 1 \\ \frac{1}{2} \\ -\frac{1}{2} \\ 1\end{bmatrix}\).
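All three computations can be checked with sympy, whose rref, columnspace, and nullspace methods implement exactly the pivot-based recipes used above. A verification sketch (assuming sympy):

```python
import sympy as sp

A = sp.Matrix([[1, -1, 1, 0], [0, 1, 1, 0], [-2, 2, 0, 1]])
R, pivots = A.rref()
print(R, pivots)        # the RREF above; pivot columns (0, 1, 2)
print(A.columnspace())  # the first three columns of A: a basis for C(A)
print(A.nullspace())    # [Matrix([[1], [1/2], [-1/2], [1]])]
```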
Example 4
Let \(A \in \mathbb{R}^{2\times 4}\). What is the smallest possible
value for the nullity of \(A\)?
Using the rank-nullity theorem, we have
\(\operatorname{rank}(A) + \operatorname{nullity}(A) = 4\).
For a \(2\times 4\) matrix, the rank is at most \(2\) since in the RREF,
there can be at most two pivots.
Thus \(2 + \operatorname{nullity}(A) \geq
\operatorname{rank}(A) + \operatorname{nullity}(A)\), implying that
\(2 + \operatorname{nullity}(A) \geq 4\). Hence, the nullity of \(A\)
is at least \(2\).
Note that the nullity can indeed equal \(2\); for example, take \(A = \begin{bmatrix}
1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}\).
Therefore, 2 is the smallest possible value for the nullity of \(A\).
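A short sympy sketch (an illustration, assuming sympy) confirms that the example matrix attains the bound:

```python
import sympy as sp

A = sp.Matrix([[1, 0, 0, 0], [0, 1, 0, 0]])
print(A.rank())            # 2
print(len(A.nullspace()))  # 2, matching rank + nullity = 4
```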
Example 5
Let \(A = \begin{bmatrix}
1 & -2 & 1 \\
-2 & 4 & -2 \\
\end{bmatrix}\) and
\(B = \begin{bmatrix}
1 & 2 & 1\\
1 & 1 & 0\\
1 & 0 & -1
\end{bmatrix}\) be matrices defined over the real numbers.
Determine if \(N(A) = {\cal C}(B)\).
First note that \(AB = 0\).
Hence, every column of \(B\) is in \(N(A)\),
implying that \({\cal C}(B)\) is a subspace of \(N(A)\).
The RREF of \(B\) is \(\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1\\0 & 0 & 0
\end{bmatrix}\). Hence, the rank of \(B\) is 2, meaning that
the column space of \(B\) has dimension 2.
The RREF of \(A\) is \(\begin{bmatrix} 1 & - 2 & 1\\0 & 0 & 0\end{bmatrix}\).
Hence, the rank of \(A\) is 1 and by the rank-nullity theorem,
the nullity of \(A\) is 2.
Hence, the dimension of \({\cal C}(B)\) is the same as the dimension
of \(N(A)\). Since the former is a subspace of the latter, they must
be equal.
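The three facts used above, \(AB = 0\), \(\operatorname{rank}(B) = 2\), and \(\operatorname{nullity}(A) = 2\), can be checked with sympy. A minimal sketch:

```python
import sympy as sp

A = sp.Matrix([[1, -2, 1], [-2, 4, -2]])
B = sp.Matrix([[1, 2, 1], [1, 1, 0], [1, 0, -1]])
print(A * B == sp.zeros(2, 3))  # True: C(B) is contained in N(A)
print(B.rank())                 # 2 = dim C(B)
print(len(A.nullspace()))       # 2 = dim N(A); equal dimensions force equality
```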
Example 6
Let \(A = \begin{bmatrix}
1 & 0 & 1 & 0 & 1\\
0 & 1 & 1 & 0 & 1\\
0 & 1 & 0 & 1 & 1\\
\end{bmatrix}\) be defined over \(GF(2)\).
Give a basis for \(N(A)\).
We first find all the solutions to \(Ax = 0\),
where \(x = \begin{bmatrix} x_1\\ \vdots \\ x_5\end{bmatrix}\),
by row-reducing \(A\):
\begin{eqnarray*}
\begin{bmatrix}
1 & 0 & 1 & 0 & 1 \\
0 & 1 & 1 & 0 & 1\\
0 & 1 & 0 & 1 & 1\\
\end{bmatrix}
& \stackrel{R_2 \leftarrow R_2 + R_3}{\longrightarrow}
\begin{bmatrix}
1 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 1 & 1\\
\end{bmatrix} \\
& \stackrel{R_2 \leftrightarrow R_3}{\longrightarrow}
\begin{bmatrix}
1 & 0 & 1 & 0 & 1 \\
0 & 1 & 0 & 1 & 1\\
0 & 0 & 1 & 1 & 0 \\
\end{bmatrix} \\
& \stackrel{R_1 \leftarrow R_1 + R_3}{\longrightarrow}
\begin{bmatrix}
1 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 1\\
0 & 0 & 1 & 1 & 0 \\
\end{bmatrix}
\end{eqnarray*}
Since the fourth and fifth columns are the only nonpivot columns,
the solutions to \(Ax = 0\), after setting \(x_4 = s\) and
\(x_5 = t\), are given by
\[\begin{bmatrix} s \\ s \\s \\s \\0\end{bmatrix} +
\begin{bmatrix} t \\ t\\ 0 \\ 0 \\t \end{bmatrix}
=
s\begin{bmatrix} 1 \\ 1 \\1 \\1 \\0\end{bmatrix} +
t\begin{bmatrix} 1 \\ 1\\ 0 \\ 0 \\1 \end{bmatrix}
\]
where \(s, t \in GF(2)\).
Hence, a basis for \(N(A)\) is
\(\left\{\begin{bmatrix} 1 \\ 1 \\1 \\1 \\0\end{bmatrix},
\begin{bmatrix} 1 \\ 1\\ 0 \\ 0 \\1 \end{bmatrix}\right \}\).
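Since sympy's built-in rref works over the rationals rather than \(GF(2)\), a simple way to check the answer is to multiply \(A\) by each claimed basis vector and reduce modulo \(2\). A numpy sketch (assuming numpy; this verifies membership in \(N(A)\), not the row reduction itself):

```python
import numpy as np

A = np.array([[1, 0, 1, 0, 1],
              [0, 1, 1, 0, 1],
              [0, 1, 0, 1, 1]])
v1 = np.array([1, 1, 1, 1, 0])
v2 = np.array([1, 1, 0, 0, 1])
for v in (v1, v2):
    print((A @ v) % 2)  # [0 0 0] both times: each vector lies in N(A)
```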
Example 7
Let \(\mathbb{F}\) denote a field. Let \(m\) and \(n\) be
positive integers.
Let \(A \in \mathbb{F}^{m \times n}\).
Let \(b \in \mathbb{F}^m\).
Show that \(b \in {\cal C}(A)\) if and only if
the system of linear equations \(Ax = b\) has a solution
where \(x =\begin{bmatrix} x_1\\ \vdots \\ x_n\end{bmatrix}\).
By definition, \(b \in {\cal C}(A)\) if and only if
there exist scalars \(x_1,\ldots, x_n \in \mathbb{F}\)
such that
\[b = x_1 A_1 + \cdots + x_n A_n\]
where \(A_j\) denotes the \(j\)th column of \(A\).
But
\(x_1 A_1 + \cdots + x_n A_n\)
can be written as \(A \begin{bmatrix} x_1\\
\vdots \\x_n\end{bmatrix}.\)
Thus, \(b \in {\cal C}(A)\) if and only if
the system \(Ax = b\) has a solution.
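The identity \(Ax = x_1 A_1 + \cdots + x_n A_n\) is easy to illustrate numerically. A small numpy sketch with a made-up matrix and vector (both ours, not from the example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([2.0, -1.0])

# A @ x equals the linear combination of the columns of A with weights x.
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
print(np.allclose(A @ x, combo))  # True
```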
Example 8
Let
\(\Gamma = \left ( \begin{bmatrix} 1 \\ 2\end{bmatrix},
\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right )\)
be an ordered basis for \(\mathbb{R}^2\).
Let \(u = \begin{bmatrix} 0 \\ -2 \end{bmatrix}\).
What is \([u]_{\Gamma}\)?
We need to write \(u\) as a linear combination of the elements in the
ordered basis.
Hence, we seek \(\alpha, \beta \in \mathbb{R}\) such that
\[u = \alpha \begin{bmatrix} 1 \\ 2 \end{bmatrix}+
\beta \begin{bmatrix} 1 \\ 0 \end{bmatrix}.\]
Then, \([u]_{\Gamma} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}.\)
Note that the equation can be rewritten as
\[\begin{bmatrix} 0 \\ -2\end{bmatrix}
= \begin{bmatrix} \alpha + \beta \\ 2\alpha \end{bmatrix}. \]
Comparing the second components on both sides, we obtain \(\alpha = -1\).
Thus, \(\beta = 1\).
Hence, \([u]_{\Gamma} = \begin{bmatrix} -1 \\ 1\end{bmatrix}\).
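Equivalently, \([u]_{\Gamma}\) solves the linear system whose coefficient matrix has the basis vectors as columns. A sympy sketch (assuming sympy):

```python
import sympy as sp

M = sp.Matrix([[1, 1], [2, 0]])  # columns: the ordered basis Gamma
u = sp.Matrix([0, -2])
print(M.solve(u))                # Matrix([[-1], [1]]) = [u]_Gamma
```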
Example 9
Let
\(\Gamma = ( x^2-1, x+1, x)\)
be an ordered basis for the vector space of
polynomials in \(x\) with real coefficients having degree at most \(2\).
Let \(u = x-1\). What is \([u]_{\Gamma}\)?
We need to write \(u\) as a linear combination of the elements in the
ordered basis.
Hence, we seek \(\alpha, \beta, \gamma \in \mathbb{R}\) such that
\[u = \alpha (x^2-1) + \beta (x+1) + \gamma x.\]
Then, \([u]_{\Gamma} = \begin{bmatrix} \alpha \\ \beta \\
\gamma \end{bmatrix}.\)
Note that the equation can be rewritten as
\[x-1
= \alpha x^2 + (\beta+\gamma)x + (-\alpha + \beta). \]
Comparing coefficients on both sides, we get
\begin{eqnarray*}
0 & = & \alpha \\
1 & = & \beta + \gamma \\
-1 & = & -\alpha + \beta.
\end{eqnarray*}
Thus, \(\alpha = 0\).
Then, from the third equation, we obtain that \(\beta = -1\).
The second equation then gives \(\gamma = 1-\beta = 2\).
Hence, \([u]_{\Gamma} = \begin{bmatrix} 0 \\ -1 \\ 2\end{bmatrix}\).
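The same coefficient comparison can be automated with sympy by extracting the coefficients of the difference polynomial. A sketch (assuming sympy; the symbol names are ours):

```python
import sympy as sp

x, al, be, ga = sp.symbols('x alpha beta gamma')
expr = al*(x**2 - 1) + be*(x + 1) + ga*x - (x - 1)

# Each coefficient of expr (as a polynomial in x) must vanish.
eqs = sp.Poly(expr, x).all_coeffs()
print(sp.solve(eqs, [al, be, ga]))  # {alpha: 0, beta: -1, gamma: 2}
```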
Example 10
Determine the dimension of the subspace of \(\mathbb{R}^{2\times 2}\)
given by \(\left \{ \begin{bmatrix} a & b \\ c & d \end{bmatrix} :
\begin{array}{r}
a + b + c + d = 0 \\ b - c + 2d = 0 \end{array} \right \}\).
From the second equation, we see that \(b = c - 2d\). Substituting
this into the first equation, we obtain \(a = -2c + d\).
Hence, the subspace, call it \(W\), can be written as
\(\left \{ \begin{bmatrix} -2c+d & c-2d \\ c & d \end{bmatrix} : c,d \in
\mathbb{R}\right \}\).
Hence, every matrix in \(W\) can be written as
\(c \begin{bmatrix} -2 & 1 \\ 1 & 0\end{bmatrix} +
d\begin{bmatrix} 1 & -2 \\ 0 & 1\end{bmatrix}\).
Thus, \(W\) is given by the span of
\(\left\{ \begin{bmatrix} -2 & 1 \\ 1 & 0\end{bmatrix},
\begin{bmatrix} 1 & -2 \\ 0 & 1\end{bmatrix}\right\}\).
Since the two matrices are not scalar multiples of each other, they form
a linearly independent set and thus give a basis for \(W\). Therefore, the
dimension of \(W\) is 2.
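Equivalently, \(W\) corresponds to the null space of the constraint matrix acting on \((a,b,c,d)\), so its dimension is that matrix's nullity. A sympy check (a sketch, assuming sympy):

```python
import sympy as sp

C = sp.Matrix([[1, 1, 1, 1],    # a + b + c + d = 0
               [0, 1, -1, 2]])  # b - c + 2d = 0
print(len(C.nullspace()))       # 2 = dim W
```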
Example 11
Let \(t,u,v,w \in \mathbb{R}^3\) be given by
\(t = \begin{bmatrix} 1 \\ 0 \\ -2\end{bmatrix}\),
\(u = \begin{bmatrix} 1 \\ -1\\ 2\end{bmatrix}\),
\(v = \begin{bmatrix} -3 \\ 2 \\ -2\end{bmatrix}\),
\(w = \begin{bmatrix} 2 \\ -1 \\ 0\end{bmatrix}\).
Let \(W\) be a subspace of \(\mathbb{R}^3\) given
by the span of \(\{t,u,v,w\}\).
Show that \(W\) is a proper subspace of \(\mathbb{R}^3\).
To show that \(W\) is a proper subspace of \(\mathbb{R}^3\), we
need to show that there exists some \(x \in \mathbb{R}^3\) that
is not in \(W\).
However, we will not attempt to directly find such an \(x\).
Instead, we show that the dimension of \(W\) is less than 3.
Since the dimension of \(\mathbb{R}^3\) is 3, it follows that
\(W\) cannot possibly be \(\mathbb{R}^3\).
Recall that the span of \(\{t,u,v,w\}\) is given by the set of
linear combinations of \(t\), \(u\), \(v\), and \(w\). Note that if
we let \(A\) be the matrix \([t~u~v~w]\), then the column space
of \(A\) is precisely the set of linear combinations of
\(t\), \(u\), \(v\), and \(w\).
Thus, the dimension of the span of \(\{t,u,v,w\}\)
is given by the rank of \(A\), which is the matrix
\[\begin{bmatrix} 1 & 1 & -3 & 2\\ 0 &-1 & 2 & -1 \\ -2 & 2 & -2 & 0\end{bmatrix}.\]
Row-reducing the above matrix to RREF, we obtain
\[\begin{bmatrix} 1 & 0 & -1 & 1\\ 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 0\end{bmatrix}.\]
Since there are only two pivot columns, the rank is \(2\).
Thus, the dimension of the span of \(\{t,u,v,w\}\) is \(2\).
Hence, \(W\) has dimension less than 3.
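A sympy sketch (assuming sympy) reproducing the rank computation:

```python
import sympy as sp

A = sp.Matrix([[1, 1, -3, 2],
               [0, -1, 2, -1],
               [-2, 2, -2, 0]])
print(A.rank())     # 2 < 3, so W is a proper subspace of R^3
print(A.rref()[0])  # the RREF shown above
```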
Example 12
Let \(A\) denote the matrix
\(\begin{bmatrix} 1 & -1 & 0 & 2\\ 0 & 0 & 1 & -3 \end{bmatrix}.\)
Give an orthonormal basis for \(N(A)\).
We first find a description of \(N(A)\), which is the set of solutions to
\(A x = 0\).
As \(A\) is already in RREF, we can obtain the general solution by
setting the free variables \(x_2 = s\) and \(x_4 = t\) and then solving
for \(x_1\) and \(x_3\) to obtain that
\[\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} =
s \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} +
t \begin{bmatrix} -2 \\ 0 \\ 3 \\ 1 \end{bmatrix}.\]
As \(\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}\) and
\(\begin{bmatrix} -2 \\ 0 \\ 3 \\ 1 \end{bmatrix}\) are not scalar multiples
of each other, they are linearly independent and therefore form a basis,
though not an orthonormal one, for \(N(A)\).
We now find a nonzero vector \(u \in N(A)\) that is orthogonal to
\(\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}\).
To this end, we solve
\begin{align*}
\begin{bmatrix} 1 & -1 & 0 & 2\\ 0 & 0 & 1 & -3 \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} & =
\begin{bmatrix} 0 \\ 0 \end{bmatrix} \\
\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \cdot
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} & =
0,
\end{align*}
which is equivalent to
\[
\begin{bmatrix} 1 & -1 & 0 & 2\\ 0 & 0 & 1 & -3 \\ 1 & 1 & 0 & 0 \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ 0\end{bmatrix}
\]
since
\(\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \cdot
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} =
\begin{bmatrix} 1 & 1 & 0 & 0 \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix}.\)
Row-reducing the coefficient matrix gives
\(\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -3\end{bmatrix}.\)
Setting \(u_4 = 1\), we obtain the solution
\(\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} =
\begin{bmatrix} -1 \\ 1 \\ 3 \\ 1 \end{bmatrix},\) which is not a scalar
multiple of \(\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}\).
Hence,
\(\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}\) and
\(\begin{bmatrix} -1 \\ 1 \\ 3 \\ 1 \end{bmatrix}\) form an orthogonal basis
for \(N(A)\). Dividing each vector by its norm, we obtain the
orthonormal basis
\(\left\{\frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix},
\frac{1}{2\sqrt{3}}\begin{bmatrix} -1 \\ 1 \\ 3 \\ 1 \end{bmatrix}\right\}\).
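The same orthonormal basis can be produced mechanically: take any basis of \(N(A)\) and apply Gram-Schmidt. A sympy sketch (assuming sympy; this is an alternative to the augmented-system approach used above):

```python
import sympy as sp

A = sp.Matrix([[1, -1, 0, 2], [0, 0, 1, -3]])
basis = A.nullspace()  # [(1,1,0,0)^T, (-2,0,3,1)^T]
ortho = sp.GramSchmidt(basis, orthonormal=True)
for v in ortho:
    print(v.T)  # (1/sqrt(2))*(1,1,0,0) and (1/(2*sqrt(3)))*(-1,1,3,1)
print(all(A * v == sp.zeros(2, 1) for v in ortho))  # True: still in N(A)
```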