Working with simplex tableaux can be inconvenient. The revised simplex method instead works directly with the problem data, with no tableau to maintain.
Let \(\mathbf{A} \in \mathbb{R}^{m \times n}\) with rank \(m\). Let \(\mathbf{b} \in \mathbb{R}^m\), \(\mathbf{c} \in \mathbb{R}^n\), and \(\mathbf{x} = \begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix}\) be a tuple of \(n\) real variables. Let \((P)\) denote the following linear programming problem: \[\begin{array}{rl} \min & \mathbf{c}^\mathsf{T} \mathbf{x} \\ \mbox{s.t.} & \mathbf{A}\mathbf{x} = \mathbf{b} \\ & \mathbf{x}\geq \mathbf{0}. \end{array}\] For a subset \(B \subseteq \{1,\ldots, n\}\), let \(\mathbf{A}_B\) denote the submatrix of \(\mathbf{A}\) consisting of the columns indexed by the elements of \(B\) and let \(\mathbf{c}_B\) denote the subvector of \(\mathbf{c}\) consisting of only the components indexed by the elements of \(B\). The revised simplex method is as follows:
Input: The problem \((P)\) and a basis \(B\)
determining the basic feasible solution \(\mathbf{x}^*\).
Steps:
1. Let \(N = \{1,\ldots,n\} \backslash B\).
2. Solve \(\mathbf{y}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\) where \(\mathbf{y} = [y_1,\ldots,y_m]^\mathsf{T}\).
3. Choose \(k \in N\) such that \(\mathbf{y}^\mathsf{T}\mathbf{A}_k \gt c_k\). If no such \(k\) exists, then stop; \(\mathbf{x}^*\) is an optimal solution.
4. Find \(\mathbf{d} = [d_1,\ldots,d_n]^\mathsf{T}\) such that \(d_k = 1\), \(d_j = 0\) for all \(j \in N\backslash \{k\}\), and \(\mathbf{A}_B \mathbf{d}_B = -\mathbf{A}_k\).
5. If \(\mathbf{d} \geq \mathbf{0}\), then stop; the problem is unbounded.
6. Set \(\lambda = \min\left\{\dfrac{x^*_i}{-d_i} : i \in B \mbox{ with } d_i \lt 0 \right\}\) and choose \(r \in B\) such that \(d_r \lt 0\) and \(\dfrac{x^*_r}{-d_r} = \lambda\).
7. Replace \(\mathbf{x}^*\) with \(\mathbf{x}^* + \lambda \mathbf{d}\).
8. Replace \(B\) with \((B\cup \{k\})\backslash \{r\}\) and go to step 1.
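These steps translate directly into code. The following is a minimal sketch in Python with NumPy, assuming a feasible starting basis is supplied; the name `revised_simplex` and the tolerance `tol` are illustrative choices, not part of the method itself. Indices are 0-based, and the two linear solves correspond to steps 2 and 4.

```python
import numpy as np

def revised_simplex(A, b, c, B, tol=1e-9):
    """Minimize c^T x subject to A x = b, x >= 0.

    B is a list of 0-based column indices forming a feasible basis.
    Returns (x, B) at optimality; raises ValueError if unbounded.
    """
    n = A.shape[1]
    B = sorted(B)
    x = np.zeros(n)
    x[B] = np.linalg.solve(A[:, B], b)            # basic feasible solution
    while True:
        N = [j for j in range(n) if j not in B]   # step 1
        y = np.linalg.solve(A[:, B].T, c[B])      # step 2: y^T A_B = c_B^T
        # Step 3: first (smallest) nonbasic index with y^T A_k > c_k.
        k = next((j for j in N if y @ A[:, j] > c[j] + tol), None)
        if k is None:
            return x, B                           # y is dual feasible: optimal
        d = np.zeros(n)                           # step 4: direction d
        d[k] = 1.0
        d[B] = np.linalg.solve(A[:, B], -A[:, k])
        if np.all(d >= -tol):                     # step 5: d >= 0
            raise ValueError("problem is unbounded")
        # Step 6: ratio test; min over tuples breaks ties by smallest index.
        lam, r = min((x[i] / -d[i], i) for i in B if d[i] < -tol)
        x = x + lam * d                           # step 7
        B[B.index(r)] = k                         # step 8: k enters, r leaves
        B.sort()
```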
Note that if \(\mathbf{x}^*_B \gt \mathbf{0}\), step 2 computes precisely the \(\mathbf{y}\) that satisfies complementary slackness with \(\mathbf{x}^*\); this \(\mathbf{y}\) is unique since \(\mathbf{A}_B\) is invertible. If \(\mathbf{y}\) turns out to be feasible for the dual problem (checked in step 3), then \(\mathbf{x}^*\) and \(\mathbf{y}\) are optimal solutions to the primal and dual problems, respectively. Otherwise, we find a nonbasic variable that can be increased to give a lower objective function value.
Observe that the \(\mathbf{d}\) computed in step 4 is a solution to the homogeneous system \(\mathbf{A} \mathbf{x} = \mathbf{0}\). Hence, if \(\mathbf{d} \geq \mathbf{0}\) (discovered in step 5), then \(\mathbf{x}^* + \lambda \mathbf{d}\) is a feasible solution for all positive values of \(\lambda\), with objective function value \(\mathbf{c}^\mathsf{T}\mathbf{x}^* + \lambda (c_k d_k + \mathbf{c}_B^\mathsf{T}\mathbf{d}_B) = \mathbf{c}^\mathsf{T}\mathbf{x}^* + \lambda(c_k - \mathbf{c}_B^\mathsf{T}\mathbf{A}_B^{-1}\mathbf{A}_k) = \mathbf{c}^\mathsf{T}\mathbf{x}^* + \lambda(c_k- \mathbf{y}^\mathsf{T} \mathbf{A}_k)\rightarrow -\infty\) as \(\lambda \rightarrow \infty\), since \(c_k - \mathbf{y}^\mathsf{T}\mathbf{A}_k \lt 0\) by the choice of \(k\) in step 3.
The value \(\lambda\) computed in step 6 is the maximum value that the variable \(x_k\) can be set to. As with the tableau method, if \(\mathbf{x}^*_B \gt \mathbf{0}\) in every iteration, then the objective function value decreases by a positive amount in each iteration. Therefore, the basis \(B\) is different at the end of each iteration. As there are finitely many bases, the algorithm terminates.
What if it is not true that all components of \(\mathbf{x}^*_B\) are positive? In this case, we have a degenerate solution. The presence of degeneracy leads to the possibility of nontermination by visiting the same basis again and again. Such a phenomenon is called cycling. We will look at cycling in more detail in the next segment. In the meantime, we remark that with a careful choice of the indices in steps 3 and 6, cycling can be avoided. In particular, a simple anti-cycling rule due to Robert Bland is to always choose the smallest valid index in both steps 3 and 6. If we make such choices in every iteration, cycling cannot occur.
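In the sketch above, both selections already follow this rule: step 3 scans \(N\) in increasing order, and step 6 breaks ratio-test ties by the smallest index. Spelled out explicitly (reusing the hypothetical names from the sketch, and assuming exact arithmetic for the equality test):

```python
# Step 3 with Bland's rule: smallest nonbasic index whose reduced
# cost is negative, i.e. y^T A_j > c_j.
k = min(j for j in N if y @ A[:, j] > c[j])

# Step 6 with Bland's rule: among the indices attaining the minimum
# ratio x_i / (-d_i), take the smallest. (In floating point one would
# compare with a tolerance instead of exact equality.)
lam = min(x[i] / -d[i] for i in B if d[i] < 0)
r = min(i for i in B if d[i] < 0 and x[i] / -d[i] == lam)
```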
In an iteration of the simplex method, the variable \(x_k\) is called the entering variable because it becomes basic and the variable \(x_r\) is called the leaving variable because it becomes nonbasic.
We conclude this segment with an example illustrating the steps of the revised simplex method.
Example. Solve the problem \(\min\{ \mathbf{c}^\mathsf{T}\mathbf{x} : \mathbf{A}\mathbf{x} = \mathbf{b},~\mathbf{x} \geq \mathbf{0}\}\) where \(\mathbf{A} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 2 & -1 \end{bmatrix},~ \mathbf{b} = \begin{bmatrix} 4 \\ 2\end{bmatrix},~ \mathbf{c} = \begin{bmatrix} 1\\3\\-1\\3\end{bmatrix}.\)
Given: \(B = \{1,2\}\) is a basis determining the basic feasible solution \(\mathbf{x}^*=\begin{bmatrix} 3\\1\\0\\0\end{bmatrix}\).
Iteration 1:
\(N = \{3,4\}\).
Solve \(\mathbf{y}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\); i.e. \(\begin{bmatrix} y_1 & y_2\end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix}1 & 3\end{bmatrix}\). We obtain \(y_1 = 2,~y_2 = -1\).
Choose \(k = 3\) since \(\mathbf{y}^\mathsf{T}\mathbf{A}_3 = \begin{bmatrix}2 & -1\end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = 0 \gt -1 = c_3\).
Find \(\mathbf{d} = \begin{bmatrix} d_1&d_2&d_3&d_4\end{bmatrix}^\mathsf{T}\) such that \(d_3 = 1\), \(d_4 = 0\), and \[\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} d_1\\d_2\end{bmatrix} = \begin{bmatrix} -1\\ -2\end{bmatrix}.\] Solving, we obtain \(d_1 = -\dfrac{3}{2}\), \(d_2 = \dfrac{1}{2}\).
It is not true that \(\mathbf{d} \geq \mathbf{0}\).
As only \(d_1 \lt 0\), we have \(\lambda = \dfrac{x_1^*}{-d_1} = \dfrac{3}{3/2} = 2\) and we have to choose \(r = 1\).
New \(\mathbf{x}^* = \begin{bmatrix}3\\1\\0\\0\end{bmatrix} + 2\begin{bmatrix}-\dfrac{3}{2}\\\dfrac{1}{2}\\1\\0\end{bmatrix} = \begin{bmatrix}0\\2\\2\\0\end{bmatrix}\).
New \(B = \{2,3\}\).
Iteration 2:
\(N = \{1,4\}\).
Solve \(\mathbf{y}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\); i.e. \(\begin{bmatrix} y_1 & y_2\end{bmatrix}\begin{bmatrix} 1 & 1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 3 & -1\end{bmatrix}\). We obtain \(y_1 = \dfrac{5}{3},~y_2 = -\dfrac{4}{3}\).
\(\mathbf{y}^\mathsf{T}\mathbf{A}_N = \begin{bmatrix}\dfrac{5}{3} & -\dfrac{4}{3}\end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix}\dfrac{1}{3} & 3\end{bmatrix} \leq \begin{bmatrix} c_1 & c_4\end{bmatrix}\). Hence \(\mathbf{x}^* = \begin{bmatrix}0\\2\\2\\0\end{bmatrix}\) is an optimal solution.
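As a sanity check, the sketch given after the algorithm reproduces this computation (indices are 0-based in the code, so the starting basis \(\{1,2\}\) becomes `[0, 1]`):

```python
import numpy as np

A = np.array([[1., 1., 1., 1.],
              [1., -1., 2., -1.]])
b = np.array([4., 2.])
c = np.array([1., 3., -1., 3.])

x, B = revised_simplex(A, b, c, [0, 1])
print(x)      # [0. 2. 2. 0.]
print(B)      # [1, 2], i.e. B = {2, 3} in the 1-based notation above
print(c @ x)  # optimal objective value: 4.0
```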
Exercise. Consider the following linear programming problem: \[\begin{array}{rrcrcrcrcl} \min & 3x_1 & + & 2x_2 & - & x_3 & & & \\ (P)~~~~~\mbox{s.t.} & x_1 & - & 2x_2 & + & x_3 & - & x_4 & = & 0 \\ & x_1 & + & x_2 & + & 2x_3 & - & x_4 & = & 1 \\ & x_1 & , & x_2 & , & x_3 & , & x_4 & \geq & 0. \end{array}\] Apply the revised simplex method starting with the basis \(B = \{1,2\}\).
Exercise. Consider the linear programming problem \(\min\{\mathbf{c}^\mathsf{T} \mathbf{x} : \mathbf{A}\mathbf{x} = \mathbf{b},~ \mathbf{x} \geq \mathbf{0}\}\) where \(\mathbf{A} = \begin{bmatrix} 1 & 1 & -2 & 2 \\ 1 & -1 & 2 & 0 \end{bmatrix}\), \(\mathbf{b} = \begin{bmatrix} 3 \\ 1\end{bmatrix}\), and \(\mathbf{c} = \begin{bmatrix} 2 & 0 & -1 & 2\end{bmatrix}^\mathsf{T}\). Apply the revised simplex method starting with the basis \(B = \{1,2\}\).