Let \(S\) be a structure on which addition and scalar multiplication (on the left) by scalars from some set \(\mathbb{F}\) are defined, and suppose \(S\) is closed under these operations. In other words, for all \(x,y \in S\) and \(\alpha \in \mathbb{F}\), \(x+y\) and \(\alpha x\) are elements of \(S\).
Let \(v_1,\ldots, v_k\in S\) and \(\alpha_1,\ldots,\alpha_k \in \mathbb{F}\). Then \(\alpha_1 v_1+\cdots + \alpha_k v_k\) is called a linear combination of \(v_1,\ldots,v_k\).
\(2\begin{bmatrix} 1\\2\end{bmatrix} + 3\begin{bmatrix} 3\\4\end{bmatrix}\) is a linear combination of the 2-tuples \(\begin{bmatrix} 1\\2\end{bmatrix}\) and \(\begin{bmatrix} 3\\4\end{bmatrix}\).
\(3\begin{bmatrix} 1&0\\0& 2\end{bmatrix} + (-1) \begin{bmatrix} 2 &3\\4 &0\end{bmatrix}\) is a linear combination of the matrices \(\begin{bmatrix} 1&0\\0& 2\end{bmatrix}\) and \(\begin{bmatrix} 2 &3\\4 &0\end{bmatrix}.\) (Note that one often writes \(3\begin{bmatrix} 1&0\\0& 2\end{bmatrix} - \begin{bmatrix} 2 &3\\4 &0\end{bmatrix}\) instead.)
The polynomial \(ax^2 + bx + c\) where \(a,b,c\) are real numbers can be viewed as a linear combination of the polynomials \(x^2\), \(x\), and \(1\).
The span of the set \(\{v_1,\ldots,v_k\}\) (or the elements \(v_1,\ldots,v_k\)), denoted by \(\operatorname{span}(\{v_1,\ldots,v_k\})\), is the set of all the linear combinations of \(v_1,\ldots,v_k\).
Without more information on \(\mathbb{F}\) and \(S\), these notions are not that useful. Often, one studies linear combinations and spans in the context of vector spaces. In fact, the two notions are central to the subject of vector spaces.
In the context of vector spaces, the span of an empty set is defined to be the vector space consisting of just the zero vector. This definition is sometimes needed for technical reasons to simplify exposition in certain proofs.
The span of the matrices \(\begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix}\) and \(\begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix}\) is the set of all matrices of the form \(\begin{bmatrix} a & b \\ b & 0\end{bmatrix}\) since every linear combination of the given matrices can be written as \(a \begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix}+ b\begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix}\).
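This form can also be verified numerically; here is a minimal sketch in Python using NumPy (the library choice and the sample scalars are assumptions made for illustration):

```python
import numpy as np

M1 = np.array([[1, 0],
               [0, 0]])
M2 = np.array([[0, 1],
               [1, 0]])

a, b = 3, -2                 # arbitrary scalars
combo = a * M1 + b * M2      # every combination has the form [[a, b], [b, 0]]
print(combo)
```

Whatever scalars are chosen, the (1,2) and (2,1) entries agree and the (2,2) entry is zero, matching the form above.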
The span of \(\left\{\begin{bmatrix} 1\\0\end{bmatrix}, \begin{bmatrix} 0 \\1\end{bmatrix}\right\}\) with real numbers as scalars is all of \(\mathbb{R}^2\) because \(\begin{bmatrix} x\\y\end{bmatrix}\) can be written as \(x\begin{bmatrix} 1\\0\end{bmatrix}+ y\begin{bmatrix} 0\\1\end{bmatrix}\).
Here is a more general version of the previous example. Let \(n \in \mathbb{N}\) and \(\mathbb{F}\) be a field. Let \(e_k\) denote the tuple in \(\mathbb{F}^n\) having \(1\) in component \(k\) and \(0\) everywhere else. Then using elements of \(\mathbb{F}\) as scalars, the span of \(\{e_1,\ldots, e_n\}\) is \(\mathbb{F}^n\).
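As a small numerical illustration of this with \(\mathbb{F} = \mathbb{R}\) and \(n = 3\) (a sketch using NumPy; the library and the sample tuple are assumptions, not part of the text):

```python
import numpy as np

n = 3
e = np.eye(n)     # rows e[0], ..., e[n-1] play the role of e_1, ..., e_n

v = np.array([7.0, -2.0, 5.0])              # an arbitrary tuple in R^3
# The components of v are exactly the scalars in the linear combination
combo = sum(v[k] * e[k] for k in range(n))
assert np.allclose(combo, v)
```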
The polynomial \(x^2 + 2\) can be written as a linear combination of \(x^2 + x\), \(x^2 + 1\), and \(x + 1\). To see this, we try to find real numbers \(\alpha\), \(\beta\), and \(\gamma\) such that \begin{align*} x^2 + 2 & = \alpha(x^2 + x) + \beta(x^2+1) + \gamma(x+1) \end{align*} Rewriting the right-hand side by collecting like terms, we obtain \begin{align*} x^2 + 2 & = (\alpha + \beta) x^2 + (\alpha+\gamma)x + (\beta+\gamma) \end{align*} Comparing the coefficients of the different powers of \(x\) on both sides, we get that \begin{align*} \alpha + \beta & = 1 \\ \alpha + \gamma & = 0 \\ \beta + \gamma & = 2 \end{align*} The first equation comes from the coefficients of \(x^2\), the second from those of \(x\), and the third from the constant terms. This system has a unique solution with \(\alpha = -\dfrac{1}{2}\), \(\beta = \dfrac{3}{2}\), and \(\gamma = \dfrac{1}{2}\). In other words, \[x^2 + 2 = -\dfrac{1}{2}(x^2+x) + \dfrac{3}{2}(x^2+1) + \dfrac{1}{2}(x+1).\] One can now simplify the right-hand side to see that equality holds.
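A system like this can also be solved mechanically. Here is a minimal sketch using Python's SymPy (assuming the library is installed; the symbol names are chosen only to match the text):

```python
from sympy import symbols, Eq, solve

alpha, beta, gamma = symbols("alpha beta gamma")

# One equation per power of x: coefficients of x^2, of x, and the constants
system = [Eq(alpha + beta, 1), Eq(alpha + gamma, 0), Eq(beta + gamma, 2)]
print(solve(system, [alpha, beta, gamma]))
# {alpha: -1/2, beta: 3/2, gamma: 1/2}
```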
We show that \(\operatorname{span}\left(\left\{\begin{bmatrix} 1\\1\end{bmatrix}, \begin{bmatrix} 1 \\-1\end{bmatrix}\right\}\right)=\mathbb{R}^2\).
What we need to show is that every \(\displaystyle\begin{bmatrix} x\\y\end{bmatrix} \in \mathbb{R}^2\) can be written as a linear combination of \(\begin{bmatrix} 1\\1\end{bmatrix}\) and \(\begin{bmatrix} 1 \\-1\end{bmatrix}\). In other words, we want to determine if there exist scalars \(\alpha\) and \(\beta\) such that \(\displaystyle\begin{bmatrix} x\\y\end{bmatrix} = \alpha\begin{bmatrix} 1\\1\end{bmatrix}+ \beta\begin{bmatrix} 1\\-1\end{bmatrix}\), or equivalently, \(\displaystyle\begin{bmatrix} x\\y\end{bmatrix} = \begin{bmatrix} \alpha + \beta\\\alpha - \beta\end{bmatrix}\).
Thus, we need \begin{align*} \alpha + \beta & = x \\ \alpha - \beta & = y. \end{align*} Solving this system gives \(\alpha = \frac{x+y}{2}\) and \(\beta = \frac{x-y}{2}\). Hence, \(\begin{bmatrix} x\\ y\end{bmatrix}\) can be written as a linear combination of \(\begin{bmatrix} 1\\1\end{bmatrix}\) and \(\begin{bmatrix} 1 \\-1\end{bmatrix}\).
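These formulas can be checked for a particular target vector; a brief sketch with NumPy (the sample values \(x = 5\), \(y = 3\) are arbitrary):

```python
import numpy as np

# Columns of A are the spanning vectors [1, 1] and [1, -1]
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])

x, y = 5.0, 3.0                                    # an arbitrary vector in R^2
alpha, beta = np.linalg.solve(A, np.array([x, y]))
print(alpha, beta)                                 # (x+y)/2 = 4.0, (x-y)/2 = 1.0
assert np.allclose(alpha * np.array([1.0, 1.0]) + beta * np.array([1.0, -1.0]),
                   [x, y])
```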
In fact, there are many sets whose span is also \(\mathbb{R}^2\).
It can be seen from the definition of a vector space that if \(v_1,\ldots,v_k\) are vectors from a vector space \(V\), then \(V\) contains every linear combination of \(v_1,\ldots,v_k\). So \(V\) contains the span of \(\{v_1,\ldots,v_k\}\). (To see this for the case when \(k=2\), let \(v_1,v_2\in V\) and let \(\alpha\) and \(\beta\) be scalars. Since \(V\) is closed under scalar multiplication, we have \(\alpha v_1, \beta v_2 \in V\). Now using that \(V\) is closed under addition, we get that \(\alpha v_1+\beta v_2 \in V\). Hence, \(V\) contains the span of \(\{v_1, v_2\}\).)
One can check that the span of \(\{v_1,\ldots,v_k\}\) is again a vector space, giving a subspace of \(V\).
For example, let \(V\) be given by the span of \(\begin{bmatrix} 1 \\ 2\end{bmatrix}\) with real numbers as scalars. Then \(V\) gives a vector space in which every vector is of the form \(\begin{bmatrix} a \\ 2a\end{bmatrix}\) where \(a\) can be any real number. Note that every vector in \(V\) is also a vector in \(\mathbb{R}^2\) but \(V \neq \mathbb{R}^2\). So \(V\) is a proper subspace of \(\mathbb{R}^2\).
When a given vector space \(V\) is equal to \(\operatorname{span}(\{v_1,\ldots,v_k\})\) for some \(v_1,\ldots,v_k\in V\), we have a succinct and concrete description of all the vectors in \(V\) because every vector in \(V\) is a linear combination of these \(k\) vectors. This is a rather attractive property that will be explored in more detail. However, we will see that not every vector space can be written as the span of a finite set of vectors.
Let \(V\) be a subspace of \(\mathbb{R}^3\) given by the span of the set \(\left\{\begin{bmatrix} 1 \\ 0\\1\end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0\end{bmatrix}\right\}\) with real numbers as scalars. Show that \(V\) is a proper subspace of \(\mathbb{R}^3\). (Hint: Find a vector that cannot be written as a linear combination of the two vectors in the given set.)
Write \(x-2\) as a linear combination of \(x+2\) and \(2x - 1\).
For each of the following sets, determine its span with real numbers as scalars. Give the answer in as succinct a form as possible.
\(\left\{\begin{bmatrix} 2\\1\\0\end{bmatrix}, \begin{bmatrix} 0 \\-1\\0 \end{bmatrix}, \begin{bmatrix} 0 \\0\\1 \end{bmatrix}\right\}\)
\(\{ x-1, x+2\}\)
\(\left\{\begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix}\right\}\)
Pick a point on the plane and call it \(\cal{O}\). Consider the arrows having \(\cal{O}\) as tails. Pick two such arrows that do not lie on the same line, and call them \(u\) and \(v\).
For any \(\alpha \in \mathbb{R}\) with \(\alpha \geq 0\), let \(\alpha u\) denote the arrow having the same direction as \(u\) but with length \(\alpha\) times the length of \(u\). (Hence, \(2u\) is an arrow pointing in the same direction as \(u\) that is twice as long.) We define \(\alpha v\) similarly.
For any \(\alpha \in \mathbb{R}\) with \(\alpha < 0\), let \(\alpha u\) denote the arrow having the opposite direction as \(u\) but with length \(-\alpha\) times the length of \(u\). We define \(\alpha v\) similarly.
Let \(\alpha, \beta \in \mathbb{R}\). Define \(\alpha u + \beta v\) to be the arrow having \(\cal{O}\) as tail and \(\cal{P}\) as head where \(\cal{P}\) is the head of \(\beta v\) after repositioning its tail (without any rotation) at the head of \(\alpha u\).
What are the arrows that \(\alpha u + \beta v\) gives as \(\alpha\) and \(\beta\) range over all possible real numbers?