Example 1

Write \(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\) as a
linear combination of \(\begin{bmatrix} 1 \\1 \end{bmatrix}\) and
\(\begin{bmatrix} -1 \\1 \end{bmatrix}\).

We want to find real numbers \(\alpha\) and \(\beta\) satisfying the following:
\begin{equation*}\begin{bmatrix} 1 \\ 0\end{bmatrix} = \alpha \begin{bmatrix} 1
\\ 1\end{bmatrix} + \beta \begin{bmatrix} -1 \\ 1
\end{bmatrix}.\end{equation*}
In other words, we want to solve the system
\begin{align*} 1 & = \alpha - \beta \\ 0 & = \alpha + \beta\end{align*}
for \(\alpha\) and \(\beta\).

Adding the equations gives \(2\alpha = 1\), implying that \(\alpha =
\frac{1}{2}.\) It follows that \(\beta = -\frac{1}{2}\).
Hence,
\begin{equation*}\begin{bmatrix} 1 \\ 0\end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1
\\ 1\end{bmatrix} + \left(-\frac{1}{2}\right)\begin{bmatrix} -1 \\ 1
\end{bmatrix}.\end{equation*}
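
As a quick numerical check of this computation (a sketch assuming NumPy is available):

```python
import numpy as np

# Columns are the vectors (1, 1) and (-1, 1); we solve for (alpha, beta).
A = np.array([[1.0, -1.0],
              [1.0,  1.0]])
target = np.array([1.0, 0.0])

coeffs = np.linalg.solve(A, target)  # alpha = 0.5, beta = -0.5
```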

Example 2

Show that \(\operatorname{span}\left(\left \{
\begin{bmatrix} i \\ 0 \end{bmatrix},
\begin{bmatrix} 1 \\ 1-i \end{bmatrix} \right \}\right) = \mathbb{C}^2\).

It suffices to show that
for every choice of \(z_1, z_2\in \mathbb{C}\),
there exist \(\alpha,\beta\in\mathbb{C}\) such that
\(\alpha \begin{bmatrix} i \\ 0 \end{bmatrix}
+ \beta \begin{bmatrix} 1 \\ 1-i \end{bmatrix} =
\begin{bmatrix} z_1 \\ z_2\end{bmatrix}\).

This vector equation can be written in matrix form as
\( \begin{bmatrix} i & 1 \\ 0 & 1-i\end{bmatrix}
\begin{bmatrix} \alpha \\ \beta \end{bmatrix}
= \begin{bmatrix} z_1 \\ z_2\end{bmatrix}.\)

The augmented matrix for the system is
\( \left [ \begin{array}{cc|c} i & 1 & z_1 \\ 0 & 1-i & z_2 \end{array}
\right] \).

One could proceed with row reduction, but row 2 already represents
the equation \((1-i)\beta = z_2\). Hence,
\(\beta = \frac{z_2}{1-i} = \frac{1+i}{2}z_2\).

Putting this into the equation corresponding to row 1 gives
us that \(i\alpha + \frac{1+i}{2}z_2 = z_1\),
implying that
\(\alpha = -i\left(z_1 - \frac{1+i}{2}z_2\right) = -iz_1 + \frac{-1+i}{2}z_2\).
Hence, there is a solution no matter what \(z_1, z_2\) are.

(There is actually no need to solve for \(\alpha\) and \(\beta\)
explicitly. The coefficient matrix is square and has determinant
\(i(1-i) = 1 + i \neq 0\), so it is invertible.
Hence, a unique solution exists, and that is all we need to
make our conclusion.)
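
The closed-form solution can be checked against a direct numerical solve for sample values of \(z_1, z_2\) (a sketch assuming NumPy; the sample values are arbitrary):

```python
import numpy as np

# Coefficient matrix of the system, with complex entries.
M = np.array([[1j, 1.0],
              [0.0, 1.0 - 1j]])

# Its determinant i(1 - i) = 1 + i is nonzero, so M is invertible.
det = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]
assert det == 1 + 1j

# Compare the closed-form solution with a direct solve for sample z1, z2.
z1, z2 = 2.0 + 3.0j, -1.0 + 1.0j
alpha = -1j * z1 + (-1 + 1j) / 2 * z2
beta = (1 + 1j) / 2 * z2
direct = np.linalg.solve(M, np.array([z1, z2]))
```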

Example 3

Is \(\begin{bmatrix} 1& 0 \\ 0 & 1\end{bmatrix}\) in the span
of \(\left \{
\begin{bmatrix} 3 & 1 \\ 1 & -1 \end{bmatrix},
\begin{bmatrix} 0 & 1 \\ 1 & 2 \end{bmatrix},
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}
\right \}\)? Assume that we are working over the real numbers.

We want to determine if there exist \(a,b,c \in\mathbb{R}\)
such that
\[
\begin{bmatrix} 1& 0 \\ 0 & 1\end{bmatrix}
=
a\begin{bmatrix} 3 & 1 \\ 1 & -1 \end{bmatrix}
+b\begin{bmatrix} 0 & 1 \\ 1 & 2 \end{bmatrix}
+c\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}.
\]
We can rewrite the right-hand side to obtain
\[
\begin{bmatrix} 1& 0 \\ 0 & 1\end{bmatrix}
= \begin{bmatrix} 3a + c & a + b + c \\ a + b + c & -a + 2b \end{bmatrix}.
\]

Comparing the entries on both sides, we get the system
\begin{eqnarray*}
3a + c & = & 1 \\
a + b + c & = & 0 \\
-a + 2b & = & 1
\end{eqnarray*}
Note that the top-right and bottom-left entries give the same equation
\(a+b+c=0\).

The third equation gives \(a = 2b-1\). Substituting this into the first
equation gives \(6b+c = 4\).
Adding the last two equations gives \(3b +c = 1\). Subtracting this
from \(6b+c=4\) gives \(3b = 3\), implying that \(b = 1\).
Hence, \(a = 1\) and \(c = -2\). So there is a solution and the answer
to the question is “yes”.
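
The coefficients found above can be verified numerically (a sketch assuming NumPy is available):

```python
import numpy as np

M1 = np.array([[3, 1], [1, -1]])
M2 = np.array([[0, 1], [1, 2]])
M3 = np.array([[1, 1], [1, 0]])

# The coefficients found above should reproduce the identity matrix.
a, b, c = 1, 1, -2
combo = a * M1 + b * M2 + c * M3
```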

Example 4

Let \(A,B,C \in \mathbb{R}^{2\times 2}\) be given by
\(A = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}\),
\(B = \begin{bmatrix} -1 & 0 \\ 1 & 0 \end{bmatrix}\), and
\(C = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\).
Give a description of the span of \(\{A,B,C\}\) that is as simple as possible.

Clearly, the span is simply the set of all linear combinations of the three
matrices. But we need to give as simple a description as possible. So
we should look at what kind of \(2\times 2\) matrices we are getting.

The span is given by
\begin{eqnarray*}
&& \left \{ \alpha A + \beta B + \gamma C : \alpha,\beta, \gamma \in \mathbb{R}
\right \} \\
& = & \left \{
\begin{bmatrix}
\alpha - \beta & \alpha + \gamma \\
\alpha + \beta + \gamma & 0
\end{bmatrix}
: \alpha,\beta, \gamma \in \mathbb{R}
\right \}.
\end{eqnarray*}
We claim that the set is the same as
\(\left \{
\begin{bmatrix}
r & s \\
t & 0
\end{bmatrix}
: r, s, t \in \mathbb{R}
\right \}\).

To establish the claim, we show that the system
\[\begin{array}{rl}
\alpha - \beta = r & (1)\\
\alpha + \gamma = s & (2)\\
\alpha + \beta + \gamma = t & (3)
\end{array}\]
has a solution for any choice of \(r,s,t\in \mathbb{R}\).

Now \((1)+(3)\) gives \(2\alpha + \gamma = r + t\). Subtracting \((2)\)
from this gives \(\alpha = r - s + t\).
Substituting this into \((1)\) gives \(\beta = -s + t\) and
substituting into \((2)\) gives \(\gamma = -r + 2s -t\).
Hence, there is a solution and so the span is given by
matrices of the form \(\begin{bmatrix}
r & s \\
t & 0
\end{bmatrix}\) where \(r, s, t \in \mathbb{R}\).
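
The formulas for \(\alpha\), \(\beta\), and \(\gamma\) can be checked numerically for sample values of \(r, s, t\) (a sketch assuming NumPy; the sample values are arbitrary):

```python
import numpy as np

A = np.array([[1, 1], [1, 0]])
B = np.array([[-1, 0], [1, 0]])
C = np.array([[0, 1], [1, 0]])

# For arbitrary r, s, t, the formulas derived above should give a
# combination equal to [[r, s], [t, 0]].
r, s, t = 4.0, -7.0, 2.5
alpha = r - s + t
beta = -s + t
gamma = -r + 2 * s - t
combo = alpha * A + beta * B + gamma * C
```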

Example 5

Consider the polynomials \(x\) and \(x^2 - 1\) with coefficients
from the set of real numbers. Show that the span of these vectors
is a proper subspace of \(P_2\), the vector space of
polynomials in \(x\) with real coefficients having degree at most \(2\).

It suffices to exhibit a vector in \(P_2\) that cannot be written
as a linear combination of \(x\) and \(x^2 - 1\).

We claim that \(x^2\) is such a vector.
Suppose that there exist real numbers
\(\alpha\) and \(\beta\) such that
\(x^2 = \alpha x + \beta (x^2 - 1)\).
Hence, \[x^2 = \beta x^2 + \alpha x - \beta.\]
Comparing the coefficients of \(x^2\) on both sides gives \(\beta = 1\).
Now, the left-hand side has a \(0\) constant term but the constant
term on the right-hand side is \(-\beta\), implying that
\(\beta = 0\). Hence, \(\beta\) must be \(1\) and \(0\) at the same
time, which is impossible.
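
Representing polynomials by their coefficient vectors, the same conclusion can be reached numerically: appending \(x^2\) to the two spanning vectors raises the rank, so the corresponding linear system has no solution (a sketch assuming NumPy is available):

```python
import numpy as np

# Coefficient vectors (constant, x, x^2) for x and x^2 - 1.
M = np.array([[0.0, -1.0],
              [1.0,  0.0],
              [0.0,  1.0]])
target = np.array([0.0, 0.0, 1.0])  # x^2

# Appending the target raises the rank, so the system has no solution
# and x^2 is not in the span.
aug = np.column_stack([M, target])
rank_M = np.linalg.matrix_rank(M)
rank_aug = np.linalg.matrix_rank(aug)
```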

Example 6

Determine if \(x^2 + 1\) is in the span of \(\{x^2 + x + 1, x+2\}\)
where the scalars are the real numbers.

By definition, \(x^2 + 1\) is in the span of \(\{x^2 + x + 1, x+2\}\)
if and only if there exist real numbers \(\alpha\) and \(\beta\)
such that \[x^2 + 1 = \alpha (x^2 + x + 1) + \beta (x+2).\]
Rewriting the right-hand side, we obtain
\[x^2 + 1 = \alpha x^2 + (\alpha+ \beta) x + (\alpha + 2\beta).\]
Now, comparing the coefficients of \(x^2\), \(x\), and the constant term
on both sides, we get that
\begin{eqnarray*}
1 & = & \alpha \\
0 & = & \alpha + \beta \\
1 & = & \alpha + 2\beta.
\end{eqnarray*}
So \(\alpha = 1\). The second equation then gives \(\beta = -1\).
But that means \(\alpha + 2\beta = -1 \neq 1\).
Hence, \(\alpha\) and \(\beta\) cannot exist and
so \(x^2 + 1\) is not in the span of \(\{x^2 + x + 1, x+2\}\).
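
The inconsistency can also be seen numerically: a least-squares solve of the overdetermined system leaves a nonzero residual, so no exact solution exists (a sketch assuming NumPy is available):

```python
import numpy as np

# Coefficient vectors (constant, x, x^2).
p1 = np.array([1.0, 1.0, 1.0])      # x^2 + x + 1
p2 = np.array([2.0, 1.0, 0.0])      # x + 2
target = np.array([1.0, 0.0, 1.0])  # x^2 + 1

# Least squares on the 3x2 system; a nonzero residual means no exact
# solution exists, i.e. x^2 + 1 is not in the span.
M = np.column_stack([p1, p2])
coeffs, residual, rank, _ = np.linalg.lstsq(M, target, rcond=None)
```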

Example 7

Let \(W\) denote the span of
\( \left\{
\begin{bmatrix} 1 \\ -1 \\ 2\end{bmatrix},
\begin{bmatrix} 2 \\ 1 \\ 1\end{bmatrix},
\begin{bmatrix} 1 \\ 2 \\ -1\end{bmatrix},
\begin{bmatrix} 0 \\ 1 \\ -1\end{bmatrix}
\right\}\). Show that \(W\) is a proper subspace of \(\mathbb{R}^3\).

Clearly, \(W \subseteq \mathbb{R}^3\). To show
that \(W\) is a proper subspace of \(\mathbb{R}^3\), we find
a vector in \(\mathbb{R}^3\) that is not in \(W\).
That is, we want to find \(\alpha,\beta,\gamma\in \mathbb{R}\)
so that
\(\begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix}\) cannot
be written as a linear combination of
\(\begin{bmatrix} 1 \\ -1 \\ 2\end{bmatrix}\),
\(\begin{bmatrix} 2 \\ 1 \\ 1\end{bmatrix}\),
\(\begin{bmatrix} 1 \\ 2 \\ -1\end{bmatrix}\),
and
\(\begin{bmatrix} 0 \\ 1 \\ -1\end{bmatrix}\).

In other words, we seek \(\alpha,\beta,\gamma\in \mathbb{R}\)
so that the following has no solution:
\begin{align*}
\lambda_1 \begin{bmatrix} 1 \\ -1 \\ 2\end{bmatrix}
+ \lambda_2 \begin{bmatrix} 2 \\ 1 \\ 1\end{bmatrix}
+ \lambda_3 \begin{bmatrix} 1 \\ 2 \\ -1\end{bmatrix}
+ \lambda_4 \begin{bmatrix} 0 \\ 1 \\ -1\end{bmatrix}
= \begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix}.
\end{align*}
The system can be rewritten as
\begin{align*}
\begin{bmatrix}
1 & 2 & 1 & 0 \\
-1 & 1 & 2 & 1\\
2 & 1 & -1 & -1
\end{bmatrix}
\begin{bmatrix}
\lambda_1 \\ \lambda_2 \\ \lambda_3 \\ \lambda_4
\end{bmatrix}
= \begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix}.
\end{align*}

The augmented matrix for the system is therefore
\[\left[\begin{array}{cccc|c}
1 & 2 & 1 & 0 & \alpha \\
-1 & 1 & 2 & 1 & \beta \\
2 & 1 & -1 & -1 & \gamma
\end{array}\right]\]
Adding row one to row two and subtracting two times row one
from row three gives
\[\left[\begin{array}{cccc|c}
1 & 2 & 1 & 0 & \alpha \\
0 & 3 & 3 & 1 & \alpha + \beta \\
0 & -3 & -3 & -1 & -2 \alpha + \gamma
\end{array}\right]\]
Adding row two to row three gives
\[\left[\begin{array}{cccc|c}
1 & 2 & 1 & 0 & \alpha \\
0 & 3 & 3 & 1 & \alpha + \beta \\
0 & 0 & 0 & 0 & -\alpha + \beta + \gamma
\end{array}\right]\]
If we choose \(\alpha = \beta = 0\) and \(\gamma = 1\), the system
will have no solution. Thus, we have shown that
\(\begin{bmatrix} 0 \\ 0 \\ 1\end{bmatrix}\) is not in \(W\)
and so \(W\) is a proper subspace of \(\mathbb{R}^3\).

Remark. In fact, every vector \(\begin{bmatrix}
\alpha \\ \beta \\ \gamma \end{bmatrix}\) such that
\(-\alpha + \beta + \gamma \neq 0\) will not be in \(W\).
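
The rank computation below confirms the row reduction: the four spanning vectors have rank \(2 < 3\), so they cannot span all of \(\mathbb{R}^3\) (a sketch assuming NumPy is available):

```python
import numpy as np

# The four spanning vectors as columns.
V = np.array([[1, 2, 1, 0],
              [-1, 1, 2, 1],
              [2, 1, -1, -1]])

# Rank 2 < 3 confirms that W is a proper subspace of R^3; the vector
# (0, 0, 1) violates -alpha + beta + gamma = 0, so it lies outside W.
rank = np.linalg.matrix_rank(V)
```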

Example 8

Let \(W\) be a subspace of \(\mathbb{R}^2\).

Show that if \(W\) contains two nonzero vectors that
are not scalar multiples of each other, then \(W = \mathbb{R}^2\).

(One can see the answer by considering the plane representation of
\(\mathbb{R}^2\): two such vectors are scalar multiples of each other
if and only if they lie on the same line through the origin. So if we
have two nonzero vectors that do not lie on the same line, we can
obtain the entire plane.)

Let \(u, v \in W\) be nonzero vectors such that
there does not exist \(\lambda \in \mathbb{R}\) with \(v = \lambda u\).
Pick an arbitrary element \(w \in \mathbb{R}^2\).

Write \(u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}\) and
\(v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}\), and let
\(A = \begin{bmatrix} u_1 & v_1\\ u_2 & v_2\end{bmatrix}\).
We claim that \(Ax = w\) always has a solution. This will establish
that every vector in \(\mathbb{R}^2\) can be written as a linear
combination of \(u\) and \(v\).
Note that it is sufficient to show that \(\det(A)\neq 0\), for then
\(A\) is invertible.

Suppose that \(\det(A) = u_1 v_2 - u_2 v_1 = 0.\)
By assumption, not both \(u_1\) and \(u_2\) are \(0\).
Without loss of generality, assume that \(u_1 \neq 0\).
Let \(\lambda = v_1/u_1\).
Then \(u_1 v_2 - u_2 v_1 = 0\) implies that
\(v_2 = \lambda u_2\).
But \(v_1 = \lambda u_1\). So \(v = \lambda u\), contradicting
that \(v\) is not a scalar multiple of \(u\).
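
The argument can be illustrated numerically with any two non-parallel vectors (a sketch assuming NumPy; the sample vectors are arbitrary):

```python
import numpy as np

# Two nonzero vectors that are not scalar multiples of each other
# (sample values chosen for illustration).
u = np.array([1.0, 2.0])
v = np.array([3.0, 1.0])

A = np.column_stack([u, v])
det = A[0, 0] * A[1, 1] - A[1, 0] * A[0, 1]
assert det != 0  # so A is invertible

# Any w in R^2 is then a linear combination of u and v.
w = np.array([5.0, -4.0])
coeffs = np.linalg.solve(A, w)
```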

Describe geometrically all the possibilities for \(W\).

There are only three possibilities:

If \(W\) has no nonzero vector, then \(W\) is just a point, the origin.

If \(W\) has two nonzero vectors that are not scalar multiples of each
other, then \(W\) is the entire plane.

Otherwise, \(W\) is given by all the scalar multiples of a nonzero vector
and so it is a line through the origin.