
Introducing the left inverse of a square matrix

Consider the system \(Ax = b\) where \(A = \begin{bmatrix} 1 & 0 & 2\\ -2 & 0 & -3 \\ 0 & 2 & 0 \end{bmatrix}\), \(x = \begin{bmatrix} x_1\\x_2\\x_3\end{bmatrix}\), and \(b = \begin{bmatrix} -1\\1\\-2\end{bmatrix}\). As we have seen, one way to solve this system is to transform the augmented matrix \([A\mid b]\) to one in reduced row-echelon form using elementary row operations. In the table below, each row shows the current matrix and the elementary row operation to be applied to give the matrix in the next row. The elementary matrix corresponding to the operation is shown in the right-most column.

Matrix Elementary row operation Elementary matrix
\(\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ -2 & 0 & -3 & 1 \\ 0 & 2 & 0 & -2 \end{array}\right]\) \(R_2 \leftarrow R_2 + 2 R_1\) \(M_1 = \begin{bmatrix}1 & 0 & 0\\ 2 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}\)
\(\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 0 & 1 & -1 \\ 0 & 2 & 0 & -2 \end{array}\right]\) \(R_2 \leftrightarrow R_3\) \(M_2 = \begin{bmatrix}1 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0\end{bmatrix}\)
\(\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 2 & 0 & -2 \\ 0 & 0 & 1 & -1 \end{array}\right]\) \(R_2 \leftarrow \frac{1}{2}R_2\) \(M_3 = \begin{bmatrix}1 & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & 1\end{bmatrix}\)
\(\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{array}\right]\) \(R_1 \leftarrow R_1 + (-2)R_3\) \(M_4 = \begin{bmatrix}1 & 0 & -2\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}\)
\(\left[\begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{array}\right]\)

This table tells us that \(\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 0 & 1 & -1 \\ 0 & 2 & 0 & -2 \end{array}\right] = [M_1A \mid M_1b]\), \(\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 2 & 0 & -2 \\ 0 & 0 & 1 & -1 \end{array}\right] = M_2\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 0 & 1 & -1 \\ 0 & 2 & 0 & -2 \end{array}\right] = [M_2(M_1A) \mid M_2(M_1b)]\), \(\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{array}\right] = M_3\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 2 & 0 & -2 \\ 0 & 0 & 1 & -1 \end{array}\right] = [M_3(M_2(M_1A)) \mid M_3(M_2(M_1b))]\), and \(\left[\begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{array}\right] = M_4\left[\begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{array}\right] = [M_4(M_3(M_2(M_1A))) \mid M_4(M_3(M_2(M_1b)))]\).

Looking at the last set of equalities, we see that \(M_4(M_3(M_2(M_1A))) = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}\). The left-hand side is rather messy. But we know that there is a single matrix \(M\) such that \(MA = M_4(M_3(M_2(M_1A)))\). Can we obtain \(M\) from \(M_1,\ldots,M_4\)? The answer is “yes” because of the associativity of matrix multiplication: For matrices \(P,Q,R\) such that the product \(P(QR)\) is defined, \(P(QR) = (PQ)R\).

Therefore, \(M_4(M_3(M_2(M_1A))) = (M_4(M_3(M_2M_1)))A\). So we can first compute \(M_2M_1\), then compute \(M_3(M_2M_1)\), and then \(M_4(M_3(M_2M_1))\), which gives us \(M\).
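This accumulation can be carried out by hand, or sketched in a few lines of Python. The snippet below uses exact rational arithmetic via the standard library's `fractions` module; the helper `matmul` is our own, not a library function.

```python
from fractions import Fraction as F

def matmul(P, Q):
    """Multiply two matrices given as lists of rows."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))]
            for i in range(len(P))]

# Elementary matrices from the table above.
M1 = [[1, 0, 0], [2, 1, 0], [0, 0, 1]]        # R2 <- R2 + 2 R1
M2 = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]        # R2 <-> R3
M3 = [[1, 0, 0], [0, F(1, 2), 0], [0, 0, 1]]  # R2 <- (1/2) R2
M4 = [[1, 0, -2], [0, 1, 0], [0, 0, 1]]       # R1 <- R1 + (-2) R3

# Associativity lets us accumulate the product from the right:
M = matmul(M2, M1)   # M2 M1
M = matmul(M3, M)    # M3 (M2 M1)
M = matmul(M4, M)    # M4 (M3 (M2 M1))
```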

Performing the calculations gives \(M = \begin{bmatrix} -3 & -2 & 0\\ 0 & 0 & \frac{1}{2}\\2 & 1 & 0 \end{bmatrix}\). One can verify that \(MA = I_3\) and \(Mb = \begin{bmatrix}1\\-1\\-1\end{bmatrix}\).
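That verification can also be sketched in Python with exact fractions (again, `matmul` is a hypothetical helper, not part of any library):

```python
from fractions import Fraction as F

def matmul(P, Q):
    """Multiply two matrices given as lists of rows."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))]
            for i in range(len(P))]

A = [[1, 0, 2], [-2, 0, -3], [0, 2, 0]]
b = [[-1], [1], [-2]]           # b as a column vector (3x1 matrix)
M = [[-3, -2, 0], [0, 0, F(1, 2)], [2, 1, 0]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

MA = matmul(M, A)   # equals I3
Mb = matmul(M, b)   # the solution column (1, -1, -1)
AM = matmul(A, M)   # also equals I3 (see the remark on AM below)
```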

Remark: If one does not need to specify each of the elementary matrices, one can obtain \(M\) directly by applying the same sequence of elementary row operations to the \(3\times 3\) identity matrix. (Try this.)
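The remark can be tried out directly. The sketch below applies the same four row operations, in order, to \(I_3\), updating the rows in place:

```python
from fractions import Fraction as F

# Start from the 3x3 identity matrix (exact fractions).
R = [[F(1) if i == j else F(0) for j in range(3)] for i in range(3)]

# R2 <- R2 + 2 R1
R[1] = [R[1][j] + 2 * R[0][j] for j in range(3)]
# R2 <-> R3
R[1], R[2] = R[2], R[1]
# R2 <- (1/2) R2
R[1] = [F(1, 2) * x for x in R[1]]
# R1 <- R1 + (-2) R3
R[0] = [R[0][j] - 2 * R[2][j] for j in range(3)]

# R now equals M.
```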

The matrix \(M\) is called a left inverse of \(A\) because multiplying it on the left of \(A\) yields the identity matrix. Incidentally, if you multiply \(M\) on the right of \(A\), i.e. compute \(AM\) instead of \(MA\), you also get the identity matrix. This is not a coincidence.

The above example illustrates a couple of ideas.

First, performing a sequence of elementary row operations corresponds to applying a sequence of linear transformations to both sides of \(Ax=b\), which in turn can be written as a single linear transformation, since the composition of linear transformations is again a linear transformation. The matrix \(M\) represents this single linear transformation.

Second, whenever row reduction of a square matrix \(A\) ends in the identity matrix, the matrix corresponding to the linear transformation that encapsulates the entire sequence of operations is a left inverse of \(A\). This means that left inverses of square matrices can be found via row reduction.
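This recipe amounts to row reducing the augmented block \([A \mid I]\): if the left block reaches the identity, the right block is a left inverse. A general sketch in Python follows; the function name `left_inverse` and the naive pivot search are our choices, not a standard routine.

```python
from fractions import Fraction as F

def left_inverse(A):
    """Row reduce [A | I]. If the left block reaches the identity,
    return the right block (a left inverse of A); otherwise None."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact fractions.
    aug = [[F(x) for x in row] + [F(1) if i == j else F(0) for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        # Find a row at or below `col` with a nonzero entry in this column.
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            return None  # row reduction cannot reach the identity
        aug[col], aug[pivot] = aug[pivot], aug[col]   # swap rows
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]          # scale pivot row to 1
        for r in range(n):                            # eliminate elsewhere
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [aug[r][j] - f * aug[col][j] for j in range(2 * n)]
    return [row[n:] for row in aug]

A = [[1, 0, 2], [-2, 0, -3], [0, 2, 0]]
M = left_inverse(A)
```

On the worked example above this returns the same \(M\) as before; the exercises below can be checked the same way.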

Exercises

Find a left inverse of each of the following matrices.

  1. \(\begin{bmatrix} 3 & 4 \\ 2 & 3 \end{bmatrix}\)

  2. \(\begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 2 & 1\end{bmatrix}\)