
Now that we have seen that a linear transformation \(T:V \rightarrow W\) can be represented by a matrix when \(V\) and \(W\) are finite-dimensional vector spaces, is there a good reason to bother with the definition of a linear transformation when we can just work with matrices?

Recall that in our definition of a linear transformation \(T:V \rightarrow W\), the spaces \(V\) and \(W\) are not required to be finite-dimensional. When either \(V\) or \(W\) is infinite-dimensional, representing \(T\) by a matrix of finite size is not possible. (Note that there is such a thing as an infinite matrix.) Hence, the question we might want to address is: are there linear transformations whose domain or codomain is infinite-dimensional? The answer is “yes”.

Before we get to the infinite-dimensional example, let us look at a special case for motivation.

Let \(P_k\) denote the vector space of polynomials in \(x\) of degree at most \(k\) with real coefficients, for \(k = 1,2,\ldots\). Hence, \[P_k = \{ a_k x^k + a_{k-1}x^{k-1}+\cdots +a_1 x + a_0 : a_k,a_{k-1},\ldots,a_1,a_0\in\mathbb{R}\}.\] Note that \(\dim(P_k) = k+1\) for all \(k\).

Let \(D_1:P_2 \rightarrow P_1\) be given by \[ D_1(ax^2 + bx + c) = 2ax + b. \] Then \(D_1\) is a linear transformation. Indeed, if \(u = a_1x^2 + b_1 x + c_1\) and \(v = a_2 x^2 + b_2 x + c_2\), \begin{eqnarray*} D_1(u + v) & = & D_1( (a_1+a_2)x^2 + (b_1+b_2)x + (c_1+c_2) ) \\ & = & 2(a_1+a_2)x + (b_1+b_2) \\ & = & (2a_1x + b_1) + (2a_2x + b_2) \\ & = & D_1(u) + D_1(v). \end{eqnarray*}

Furthermore, if \(u = ax^2 + bx + c\) and \(\gamma \in \mathbb{R}\), \begin{eqnarray*} D_1(\gamma u) & = & D_1( (\gamma a)x^2 + (\gamma b)x + \gamma c ) \\ & = & 2 (\gamma a)x + \gamma b \\ &= & \gamma (2ax + b) = \gamma D_1(u).\end{eqnarray*}
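For readers who like to verify such computations numerically, here is a minimal Python sketch; the function name `D1` and the coefficient-tuple encoding are our own illustrative choices, not part of the text. An element \(ax^2+bx+c\) of \(P_2\) is stored as the tuple `(a, b, c)`, and the two linearity conditions are checked on sample inputs.

```python
# Represent a*x^2 + b*x + c in P_2 by the coefficient tuple (a, b, c),
# and 2*a*x + b in P_1 by the tuple (2*a, b).

def D1(p):
    """Apply D_1 to a polynomial in P_2 given as (a, b, c)."""
    a, b, c = p
    return (2 * a, b)

def add(p, q):
    """Vector addition of coefficient tuples."""
    return tuple(x + y for x, y in zip(p, q))

def scale(gamma, p):
    """Scalar multiplication of a coefficient tuple."""
    return tuple(gamma * x for x in p)

u = (1.0, -2.0, 5.0)   # x^2 - 2x + 5
v = (3.0, 0.5, -1.0)   # 3x^2 + 0.5x - 1
gamma = 4.0

# D_1(u + v) == D_1(u) + D_1(v)
assert D1(add(u, v)) == add(D1(u), D1(v))
# D_1(gamma * u) == gamma * D_1(u)
assert D1(scale(gamma, u)) == scale(gamma, D1(u))
```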

More generally, for each positive integer \(k\), \(D_k:P_{k+1} \rightarrow P_{k}\) given by \[D_k(a_{k+1}x^{k+1} + a_k x^k +\cdots + a_1 x + a_0) =(k+1)a_{k+1}x^k + k a_k x^{k-1} + \cdots + a_1\] is a linear transformation.
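For instance, with \(k = 2\), the map \(D_2 : P_3 \rightarrow P_2\) sends \(5x^3 + x^2 - 4x + 7\) to \(15x^2 + 2x - 4\).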

If you have taken differential calculus before, you have probably recognized that the output of \(D_k\) is simply the derivative of the input.

We now remove the degree restriction and consider \(P\), the vector space of all polynomials in \(x\) with real coefficients. Recall that \(P\) is infinite-dimensional.

If \(D:P \rightarrow P\) is given by \[ D( a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0) = na_nx^{n-1} + \cdots + a_1\] for all integers \(n \geq 0\) and \(a_n,\ldots,a_0 \in \mathbb{R}\), then one can check that \(D\) is a linear transformation, and it cannot be represented by a matrix of finite size.
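To see how a single rule covers every degree at once, here is a short Python sketch; the list encoding and the name `D` are again our own conventions. A polynomial \(a_nx^n + \cdots + a_1x + a_0\) is stored as the list `[a_n, ..., a_1, a_0]`, and one function differentiates inputs of any length, something no single finite matrix could accept.

```python
def D(coeffs):
    """Differentiate a_n*x^n + ... + a_1*x + a_0, given as [a_n, ..., a_1, a_0]
    (highest degree first); returns the coefficients of the derivative."""
    n = len(coeffs) - 1          # degree n of the input polynomial
    if n == 0:
        return [0.0]             # the derivative of a constant is the zero polynomial
    # coeffs[i] is the coefficient of x^(n-i); differentiating that term
    # contributes (n - i) * coeffs[i] to the coefficient of x^(n-i-1).
    return [(n - i) * a for i, a in enumerate(coeffs[:-1])]

# D(x^3 + 2x + 7) = 3x^2 + 2
print(D([1.0, 0.0, 2.0, 7.0]))   # [3.0, 0.0, 2.0]
```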

In fact, differentiation is a linear transformation over more general vector spaces of functions. For instance, we can replace \(P\) with the vector space of all differentiable functions. Vector spaces of differentiable functions appear quite often in signal processing and advanced calculus.
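As a quick illustration of this broader setting, the following sketch uses the third-party sympy library (an assumption on our part; the text itself does not rely on any software) to check the linearity identities symbolically on differentiable functions that lie outside \(P\).

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)        # differentiable functions that are not polynomials
g = sp.exp(x)
gamma = sp.Integer(3)

# Differentiation respects addition ...
assert sp.diff(f + g, x) == sp.diff(f, x) + sp.diff(g, x)
# ... and scalar multiplication.
assert sp.simplify(sp.diff(gamma * f, x) - gamma * sp.diff(f, x)) == 0
```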

Exercises

  1. Let \(\Gamma = ( x^2, x, 1)\) be an ordered basis for \(P_2\) and let \(\Omega = ( x , 1)\) be an ordered basis for \(P_1\). Find \([D_1]_{\Gamma}^{\Omega}\).

  2. Let \(k\) be a positive integer. Let \(\Gamma = ( x^{k+1}, x^k, \ldots, x, 1)\) be an ordered basis for \(P_{k+1}\) and let \(\Omega = ( x^k , x^{k-1},\ldots,x, 1)\) be an ordered basis for \(P_k\). Find \([D_k]_{\Gamma}^{\Omega}\).

  3. Let \(F\) denote the set of functions \(\displaystyle\sum_{k = 0}^\infty (\alpha_k \sin( kx) + \beta_k \cos(kx))\) such that \(\alpha_k,\beta_k \in \mathbb{R}\), for all \(k = 0,1,\ldots,\) and only a finite number of the \(\alpha_k\)'s and \(\beta_k\)'s are nonzero. (In other words, only a finite number of terms in the infinite sum could be nonzero.)

    1. Show that \(F\) is an infinite-dimensional vector space. (Hint: Show that \(\sin(x), \sin(2x), \sin(4x),\ldots, \sin(2^ix),\ldots\) are linearly independent.)

    2. Show that if \(f\) is in \(F\), then the derivative of \(f\) is also in \(F\).