We discuss a method to prevent cycling by making a “perturbation” to the original problem so that every basic feasible solution is nondegenerate. Obviously, for this method to work, the perturbed problem cannot be too far off from the original problem. It turns out that when the perturbation is small enough, solving the perturbed problem will solve the original problem.
Consider the following tableau from the cycling examples associated with the basis \(B = \{3,4\}\): \[ \small \begin{array}{ccccccccccccccccccl} z & - & 2x_1 & - & 2x_2 & & & & & + & x_5 & & & + & 3 x_7 & - & \frac{5}{2} x_8 & = & 0 \\ & & x_1 & + & 4x_2 & + & x_3 & & & + & x_5 & -& 4x_6 & - & \frac{9}{2}x_7 & + & \frac{5}{2}x_8 & = & 0 \\ & & & & 4x_2 & & & + & x_4 & + & 2 x_5 & - & 4 x_6 & - & 4 x_7 & + & 2x_8 & = & 0. \end{array} \] As seen before, if \(x_5\) is the entering variable, we have a choice for the leaving variable between \(x_3\) and \(x_4\).
Let \(\epsilon\) be a very small positive number. We modify the tableau by replacing the right-hand side of the second equation by \(\epsilon\) and the right-hand side of the third equation by \(\epsilon^2\) to obtain: \[ \small \begin{array}{ccccccccccccccccccl} z & - & 2x_1 & - & 2x_2 & & & & & + & x_5 & & & + & 3 x_7 & - & \frac{5}{2} x_8 & = & 0 \\ & & x_1 & + & 4x_2 & + & x_3 & & & + & x_5 & -& 4x_6 & - & \frac{9}{2}x_7 & + & \frac{5}{2}x_8 & = & \epsilon \\ & & & & 4x_2 & & & + & x_4 & + & 2 x_5 & - & 4 x_6 & - & 4 x_7 & + & 2x_8 & = & \epsilon^2. \end{array} \] With \(x_5\) as the entering variable, the leaving variable is chosen depending on the value of \(\min \left\{ \epsilon, \frac{\epsilon^2}{2}\right\}\). As \(\epsilon\) is assumed to be very small, we have \(\frac{\epsilon^2}{2} = \min \left\{ \epsilon, \frac{\epsilon^2}{2}\right\}\). Thus, \(x_4\) is chosen as the leaving variable. After pivoting, we obtain: \[ \small \begin{array}{ccccccccccccccccccl} z & - & 2x_1 & - & 4x_2 & & & - & \frac{1}{2}x_4 & & & + & 2x_6 & + & 5 x_7 & - & \frac{7}{2} x_8 & = & -\frac{1}{2}\epsilon^2 \\ & & x_1 & + & 2x_2 & + & x_3 & - &\frac{1}{2}x_4 & & & -& 2x_6 & - & \frac{5}{2}x_7 & + & \frac{3}{2}x_8 & = & \epsilon-\frac{1}{2}\epsilon^2 \\ & & & & 2x_2 & & & + & \frac{1}{2}x_4 & + & x_5 & - & 2 x_6 & - & 2 x_7 & + & x_8 & = & \frac{1}{2}\epsilon^2. \end{array} \] Now, whether we choose \(x_6\) or \(x_7\) to be the entering variable, unboundedness will be detected. Note that if we set \(\epsilon\) to \(0\), the above tableau will be the tableau associated with the basis \(B = \{3,5\}\) for the original problem.
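The pivot above is easy to check mechanically. The following is a minimal SymPy sketch (not part of the text, and assuming SymPy is available) that stores each row of the perturbed tableau as a list of coefficients and redoes the pivot symbolically.

```python
# A minimal sketch of the pivot above, assuming SymPy is available.
# Each row is [coefficients of x1,...,x8, right-hand side]; the z-row omits z itself.
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
half = sp.Rational(1, 2)

z_row  = [-2, -2, 0, 0, 1,  0,       3, -5*half, 0]       # z - 2x1 - 2x2 + x5 + 3x7 - (5/2)x8 = 0
row_x3 = [ 1,  4, 1, 0, 1, -4, -9*half,  5*half, eps]     # basic variable x3, RHS = epsilon
row_x4 = [ 0,  4, 0, 1, 2, -4,      -4,       2, eps**2]  # basic variable x4, RHS = epsilon^2

# x5 enters (index 4); the ratio test gives min{epsilon/1, epsilon**2/2} = epsilon**2/2, so x4 leaves.
col = 4
pivot = sp.S(row_x4[col])
m3, mz = row_x3[col], z_row[col]
row_x5 = [sp.S(a) / pivot for a in row_x4]                       # new row for the basic variable x5
row_x3 = [sp.expand(a - m3 * b) for a, b in zip(row_x3, row_x5)]
z_row  = [sp.expand(a - mz * b) for a, b in zip(z_row,  row_x5)]

print(z_row)    # z-row of the pivoted tableau; RHS is -epsilon**2/2
print(row_x3)   # second row of the pivoted tableau; RHS is epsilon - epsilon**2/2
print(row_x5)   # third row of the pivoted tableau; RHS is epsilon**2/2

# Setting epsilon = 0 recovers the tableau associated with B = {3,5} for the original problem.
print([sp.S(a).subs(eps, 0) for a in row_x5])
```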
The example above illustrates two ideas. First, the amount of perturbation (i.e., the value of \(\epsilon\)) need not be specified, provided that we can compare values of the form \(d_0 + d_1 \epsilon + \cdots + d_k \epsilon^k\) under the assumption that \(\epsilon\) is positive and very small, as the proposition below shows. Second, if \(\epsilon\) is sufficiently small, then setting it to \(0\) gives a feasible tableau for the unperturbed problem.
Proposition 5.1. Let \(m\) be a positive integer. Let \(\epsilon \gt 0\). Let \(d_0,d_1,\ldots,d_m \in \mathbb{R}\). If \(D = \displaystyle\sum_{k = 0}^m d_k \epsilon^k\) and \(\epsilon\) is sufficiently small, then
\(D = 0\) if and only if \(d_k = 0\) for all \(k = 0,\ldots, m\);
\(D \gt 0\) if and only if there exists \(\ell \in \{0,\ldots,m\}\) such that \(d_\ell \gt 0\) and \(d_k = 0\) for all \(k = 0,\ldots,\ell-1\);
\(D \lt 0\) if and only if there exists \(\ell \in \{0,\ldots,m\}\) such that \(d_\ell \lt 0\) and \(d_k = 0\) for all \(k = 0,\ldots,\ell-1\).
If \(\epsilon \gt 0\) is sufficiently small, then \(1 - 2\epsilon + 10000\epsilon^3 \lt 1 - 2\epsilon + \epsilon^2\).
If \(\epsilon \gt 0\) is sufficiently small, then \(1000\epsilon^2 + \epsilon^3 \lt \epsilon\) since \(1000\epsilon^2 + \epsilon^3 = 0\epsilon + 1000\epsilon^2 + \epsilon^3 \lt 1\epsilon + 0\epsilon^2 + 0\epsilon^3 = \epsilon\) by Proposition 5.1.
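As a quick illustration of the proposition, the following small Python sketch (not part of the text) reads off the sign of \(D\) from its first nonzero coefficient and checks the two comparisons above.

```python
# A minimal sketch of Proposition 5.1: for sufficiently small epsilon > 0, the sign of
# D = d_0 + d_1*eps + ... + d_m*eps**m equals the sign of its first nonzero coefficient.
def sign_for_small_eps(coeffs):
    """coeffs = [d_0, d_1, ..., d_m]; return -1, 0, or 1."""
    for d in coeffs:
        if d != 0:
            return 1 if d > 0 else -1
    return 0

# (1 - 2*eps + 10000*eps**3) - (1 - 2*eps + eps**2) has coefficients [0, 0, -1, 10000]:
print(sign_for_small_eps([0, 0, -1, 10000]))  # -1, so 1 - 2*eps + 10000*eps**3 is smaller
# (1000*eps**2 + eps**3) - eps has coefficients [0, -1, 1000, 1]:
print(sign_for_small_eps([0, -1, 1000, 1]))   # -1, so 1000*eps**2 + eps**3 < eps
```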
We now state the perturbation method in the context of the revised simplex method. Let \(\mathbf{A}\) be an \(m \times n\) matrix having rank \(m\), \(\mathbf{b} \in \mathbb{R}^m\), \(\mathbf{c} \in \mathbb{R}^n\), \(\mathbf{x} = \begin{bmatrix} x_1 \\ \vdots\\ x_n\end{bmatrix}\) be a tuple of \(n\) real variables, and \(\mathbf{e} = \begin{bmatrix} \epsilon \\ \epsilon^2 \\ \vdots \\ \epsilon^m\end{bmatrix}\) where \(\epsilon \gt 0\). Let \((P)\) denote the linear programming problem: \[\begin{array}{rl} \min & \mathbf{c}^\mathsf{T}\mathbf{x} \\ \mbox{s.t.} & \mathbf{A}\mathbf{x} = \mathbf{b} \\ & \mathbf{x}\geq \mathbf{0}. \end{array}\] Let \(B\) be a basis determining a basic feasible solution. Let \((P')\) denote the linear programming problem: \[\begin{array}{rl} \min & \mathbf{c}^\mathsf{T}\mathbf{x} \\ \mbox{s.t.} & \mathbf{A}\mathbf{x} = \mathbf{b}' \\ & \mathbf{x}\geq \mathbf{0} \end{array}\] where \(\mathbf{b}' = \mathbf{b} + \mathbf{A}_B \mathbf{e}\). Then, provided that \(\epsilon\) is sufficiently small, \(B\) determines a nondegenerate basic feasible solution to \(\mathbf{A}\mathbf{x} = \mathbf{b}',~\mathbf{x} \geq \mathbf{0}\). Furthermore, if \(B'\) is any other basis determining a basic feasible solution to \(\mathbf{A}\mathbf{x} =\mathbf{b}',~\mathbf{x} \geq \mathbf{0}\), then \(B'\) also determines a basic feasible solution to \(\mathbf{A}\mathbf{x} = \mathbf{b},~\mathbf{x}\geq \mathbf{0}\). We call \((P')\) the perturbed problem with respect to the basis \(B\).
For example, consider \[\begin{array}{rrcrcrcl} \min & x_1 & + & 2x_2 & - & 3x_3 \\ \mbox{s.t.} & x_1 & - & 2x_2 & + & x_3 & = & 0 \\ & x_1 & + & x_2 & - & 2x_3 & = & 3 \\ & x_1 & , & x_2 & , & x_3 & \geq & 0. \end{array}\] where \(B = \{1,2\}\) is a basis determining the basic feasible solution \(\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} =\begin{bmatrix} 2 \\ 1 \\ 0\end{bmatrix}\).
Here, \(\mathbf{A}_B = \begin{bmatrix} 1 & -2 \\ 1 & 1 \end{bmatrix}\). The perturbed problem is then \[\begin{array}{rrcrcrcl} \min & x_1 & + & 2x_2 & - & 3x_3 \\ \mbox{s.t.} & x_1 & - & 2x_2 & + & x_3 & = & \epsilon - 2\epsilon^2 \\ & x_1 & + & x_2 & - & 2x_3 & = & 3 + \epsilon + \epsilon^2 \\ & x_1 & , & x_2 & , & x_3 & \geq & 0. \end{array}\]
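The construction of \(\mathbf{b}'\) for this example can be verified with a short SymPy sketch (not part of the text, and assuming SymPy is available); the matrices below are exactly those of the example.

```python
# A short check of b' = b + A_B e for the example above, assuming SymPy is available.
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
A = sp.Matrix([[1, -2,  1],
               [1,  1, -2]])
b = sp.Matrix([0, 3])
B = [1, 2]                                   # the basis {1, 2} (1-based column indices)
A_B = A.extract([0, 1], [j - 1 for j in B])  # columns of A indexed by B
e = sp.Matrix([eps, eps**2])

b_prime = b + A_B * e
print(b_prime.T)                             # right-hand sides: epsilon - 2*epsilon**2 and 3 + epsilon + epsilon**2

# The basic solution for B in the perturbed problem is A_B^{-1} b' = A_B^{-1} b + e,
# i.e. the original basic values (2, 1) shifted by (epsilon, epsilon**2), hence nondegenerate.
print((A_B.inv() * b_prime).expand().T)      # 2 + epsilon and 1 + epsilon**2
```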
It is not difficult to show that for a sufficiently small positive \(\epsilon\), every basic feasible solution to the perturbed problem is nondegenerate. Hence, the revised simplex method applied to \((P')\) will terminate after a finite number of iterations. Furthermore, if it terminates with a basis \(B'\) determining an optimal basic feasible solution of \((P')\), then \(B'\) also determines an optimal basic feasible solution of \((P).\) If it terminates with the conclusion that \((P')\) is unbounded, then \((P)\) is unbounded.
By Proposition 5.1, it is not necessary to fix a value for \(\epsilon\): comparing two polynomials in \(\epsilon\), for sufficiently small \(\epsilon \gt 0\), amounts to comparing their tuples of coefficients of powers of \(\epsilon\) lexicographically. This simplification leads to what is historically known as the lexicographic rule.
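For instance, in the ratio test carried out earlier, the candidate ratios \(\epsilon\) and \(\frac{1}{2}\epsilon^2\) correspond to the coefficient tuples \((0,1,0)\) and \((0,0,\tfrac{1}{2})\). Since Python's built-in tuple ordering is lexicographic, the comparison can be done directly, as in this tiny sketch (not part of the text):

```python
# Candidate ratios from the earlier ratio test, stored as coefficient tuples (d_0, d_1, d_2).
ratios = {"x3 leaves": (0, 1, 0),      # ratio epsilon
          "x4 leaves": (0, 0, 0.5)}    # ratio epsilon**2 / 2
# Python compares tuples lexicographically, matching the rule above.
print(min(ratios, key=ratios.get))     # 'x4 leaves'
```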
Assuming that \(\epsilon\) is positive and sufficiently small, which of the following is the least?
\(1+100\epsilon^2\)
\(1+3\epsilon\)
\(2-4001\epsilon\)
Suppose that in a given iteration of the revised simplex method using the method of perturbation, the basis is \(B = \{1,2,4\}\), the basic feasible solution is \(\mathbf{x}^* = \begin{bmatrix} 2 - \epsilon^2 \\ 2\epsilon + 3\epsilon^2 \\ 0 \\ 4\epsilon - 5\epsilon^3 \\ 0\end{bmatrix}\), and \(\mathbf{d} = \begin{bmatrix} -3 \\ -1 \\ 0 \\ -2 \\ 1\end{bmatrix}\). What is \(r\) in Step 6?
Let \(m\) and \(n\) be positive integers. Let \(\mathbf{A} \in \mathbb{R}^{m \times n}\) with rank \(m\). Let \(\mathbf{b} \in \mathbb{R}^m\). Let \(\epsilon \gt 0\). Let \(\mathbf{b}' = \mathbf{b} + \mathbf{P} \mathbf{e}\) where \(\mathbf{e} = \begin{bmatrix} \epsilon \\ \epsilon^2 \\ \vdots \\ \epsilon^m\end{bmatrix}\) and \(\mathbf{P} \in\mathbb{R}^{m\times m}\) is invertible. Prove that if \(\epsilon\) is sufficiently small, then \(\mathbf{A}_B^{-1} \mathbf{b}'\) has no zero entry for every basis \(B\) with respect to \(\mathbf{A}\).