
Revised dual simplex method

Let \(m\) and \(n\) be positive integers. Let \(\mathbf{A}\in \mathbb{R}^{m\times n}\) with rank \(m\), \(\mathbf{b} \in \mathbb{R}^m\), \(\mathbf{c} \in \mathbb{R}^n\), and \(\mathbf{x} = \begin{bmatrix} x_1\\ \vdots\\ x_n\end{bmatrix}\) be a tuple of variables. Consider the following linear programming problem: \[\begin{array}{rl} \min & \mathbf{c}^\mathsf{T}\mathbf{x} \\ (P)~~~~\mbox{s.t.} & \mathbf{A}\mathbf{x} = \mathbf{b} \\ & \mathbf{x}\geq \mathbf{0}. \end{array}\] The dual problem of \((P)\) is \[\begin{array}{rl} \max & \mathbf{y}^\mathsf{T}\mathbf{b} \\ (D)~~~~\mbox{s.t.} & \mathbf{y}^\mathsf{T}\mathbf{A} \leq \mathbf{c}^\mathsf{T}. \end{array}\] Recall that a basis \(B\) of \(\mathbf{A}\) is said to be dual feasible with respect to \((P)\) if \(\mathbf{y}^*\) satisfying \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\) is a feasible solution to \((D)\).
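Numerically, dual feasibility of a basis \(B\) can be checked by solving \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\) for \(\mathbf{y}^*\) and then testing \({\mathbf{y}^*}^\mathsf{T}\mathbf{A} \leq \mathbf{c}^\mathsf{T}\). The following is a minimal NumPy sketch of such a check; the function name is illustrative and column indices are 0-based.

```python
import numpy as np

def is_dual_feasible(A, c, B, tol=1e-9):
    """Return (flag, y) where flag says whether the basis B is dual feasible.

    Solves y^T A_B = c_B^T and tests y^T A <= c^T componentwise.
    B is a list of 0-based column indices.
    """
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    y = np.linalg.solve(A[:, B].T, c[B])      # y^T A_B = c_B^T
    return bool(np.all(y @ A <= c + tol)), y
```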

The following algorithm solves \((P)\) starting with a dual feasible basis.

Revised Dual Simplex Method with Bland's anti-cycling rule

Input: The problem \((P)\), a dual feasible basis \(B\) and \(\mathbf{y}^*\) satisfying \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\).
Steps:

  1. Let \(N = \{1,\ldots,n\} \backslash B\).

  2. Find the basic solution \(\mathbf{x}^*\) of \(\mathbf{A}\mathbf{x} = \mathbf{b}\) determined by \(B\).

  3. Choose the smallest \(r \in B\) such that \(x^*_r \lt 0\). If no such \(r\) exists, then stop; \(\mathbf{x}^*\) is an optimal solution to \((P)\) and \(\mathbf{y}^*\) is an optimal solution to \((D)\).

  4. Find \(\mathbf{d} = \begin{bmatrix}d_1\\ \vdots\\ d_m\end{bmatrix}\) such that \(\mathbf{d}^\mathsf{T}\mathbf{A}_r = -1\) and \(\mathbf{d}^\mathsf{T}\mathbf{A}_i = 0\) for all \(i \in B\backslash \{r\}\).

  5. Set \(\mathbf{u}_N^\mathsf{T} = \mathbf{d}^\mathsf{T}\mathbf{A}_N\). If \(\mathbf{u}_N \leq \mathbf{0}\), stop; \((D)\) is unbounded, implying that \((P)\) is infeasible.

  6. Set \(\lambda = \min\left\{\dfrac{c_j-{\mathbf{y}^*}^\mathsf{T}\mathbf{A}_j}{u_j} : j \in N,~ u_j \gt 0\right\}\) and choose the smallest \(k \in N\) such that \(u_k \gt 0\) and \(\dfrac{c_k - {\mathbf{y}^*}^\mathsf{T}\mathbf{A}_k}{u_k} = \lambda\).

  7. Replace \(\mathbf{y}^*\) with \(\mathbf{y}^* + \lambda \mathbf{d}\).

  8. Replace \(B\) with \((B\cup \{k\})\backslash \{r\}\) and go to step 1.
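The following is a minimal NumPy sketch of the method above, assuming 0-based column indices and using dense linear solves in place of the basis-factorization updates a genuinely revised implementation would maintain. The function name and return convention are illustrative. For simplicity it recomputes \(\mathbf{y}^*\) from the current basis at the start of each round, which, by Remark 5 below, gives the same vector as the update in step 7.

```python
import numpy as np

def revised_dual_simplex(A, b, c, B, tol=1e-9):
    """Sketch of the revised dual simplex method with Bland's rule.

    A, b, c: data of  min c^T x  s.t.  Ax = b, x >= 0.
    B: list of 0-based column indices forming a dual feasible basis.
    Returns ("optimal", x, y) or ("infeasible", None, y).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    m, n = A.shape
    B = sorted(B)
    while True:
        N = [j for j in range(n) if j not in B]
        A_B = A[:, B]
        # y* with y*^T A_B = c_B^T (recomputed each round; by Remark 5 this
        # agrees with the update y* + lambda*d of step 7)
        y = np.linalg.solve(A_B.T, c[B])
        # Step 2: basic solution x* determined by B
        x = np.zeros(n)
        x[B] = np.linalg.solve(A_B, b)
        # Step 3 (Bland's rule): smallest leaving index r in B with x_r < 0
        neg = [r for r in B if x[r] < -tol]
        if not neg:
            return "optimal", x, y
        r = min(neg)
        # Step 4: d with d^T A_r = -1 and d^T A_i = 0 for all other i in B
        e = np.zeros(m)
        e[B.index(r)] = 1.0
        d = np.linalg.solve(A_B.T, -e)
        # Step 5: u_N^T = d^T A_N
        u = A[:, N].T @ d
        if np.all(u <= tol):
            return "infeasible", None, y    # (D) unbounded, so (P) infeasible
        # Step 6: ratio test, smallest entering index k attaining the minimum
        ratios = {j: (c[j] - y @ A[:, j]) / u[i]
                  for i, j in enumerate(N) if u[i] > tol}
        lam = min(ratios.values())
        k = min(j for j, v in ratios.items() if abs(v - lam) <= tol)
        # Step 8: replace r by k in the basis and repeat
        B = sorted((set(B) - {r}) | {k})
```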

Remarks.

  1. In step 5, \(\mathbf{u}_N\) is an element of \(\mathbb{R}^{N}\), the set of tuples with \(|N|\) real entries indexed by the elements of \(N\). It is not obtained by restricting some larger vector \(\mathbf{u}\) with entries indexed outside \(N\).

  2. Note that \(\mathbf{y}^*\) and the \(\mathbf{x}^*\) computed in step 2 together satisfy the complementary slackness conditions, though \(\mathbf{x}^*\) is not necessarily a feasible solution to \((P)\). If \(\mathbf{x}_B^* \geq \mathbf{0}\) in step 3, then \(\mathbf{x}^*\) is a feasible solution to \((P)\), and since \(\mathbf{x}^*\) and \(\mathbf{y}^*\) together satisfy the complementary slackness conditions, \(\mathbf{x}^*\) is an optimal solution to \((P)\) and \(\mathbf{y}^*\) is an optimal solution to \((D)\). So the stopping criterion in step 3 is correct. In casual terms, the dual simplex method maintains dual feasibility and complementary slackness and strives to attain primal feasibility.

  3. In step 5, if we have \(\mathbf{u}_N \leq \mathbf{0}\), then \(\mathbf{y}^* + \lambda \mathbf{d}\) is a feasible solution to \((D)\) for all \(\lambda \geq 0\) because \[(\mathbf{y}^*+\lambda\mathbf{d})^\mathsf{T}\mathbf{A} = \begin{bmatrix}{\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B & {\mathbf{y}^*}^\mathsf{T}\mathbf{A}_N\end{bmatrix} + \lambda\begin{bmatrix} \mathbf{d}^\mathsf{T}\mathbf{A}_B & \mathbf{d}^\mathsf{T}\mathbf{A}_N\end{bmatrix} \leq \begin{bmatrix} \mathbf{c}_B^\mathsf{T} & \mathbf{c}_N^\mathsf{T}\end{bmatrix}\] and its objective function value is \[(\mathbf{y}^*+\lambda\mathbf{d})^\mathsf{T}\mathbf{b} = {\mathbf{y}^*}^\mathsf{T}\mathbf{b}+\lambda\mathbf{d}^\mathsf{T}\mathbf{b} = {\mathbf{y}^*}^\mathsf{T}\mathbf{b}+\lambda\mathbf{d}^\mathsf{T} \mathbf{A}_B\mathbf{x}^*_B = {\mathbf{y}^*}^\mathsf{T}\mathbf{b}-\lambda x^*_r\] which tends to \(\infty\) as \(\lambda \rightarrow \infty\) since \(x^*_r \lt 0\). So the stopping criterion in step 5 is correct.

  4. It is easy to check that if \(\lambda\) is as computed in step 6, then \(\mathbf{y}^* + \lambda\mathbf{d}\) is a feasible solution to \((D)\) having objective function value \({\mathbf{y}^*}^\mathsf{T}\mathbf{b}-\lambda x^*_r\). As \(x^*_r \lt 0\), if \(\lambda \gt 0\), the objective function value of the new \(\mathbf{y}^*\) is greater than that of the old one. If \(\lambda = 0\), we have a degenerate iteration.

  5. Let \(B'\) denote the new basis in step 8. It is easy to check that the new \(\mathbf{y}^*\) satisfies \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_{B'} = \mathbf{c}_{B'}^\mathsf{T}\).

    To check that \(B'\) is a basis, it is enough to show that \(\mathbf{A}_k\) cannot be expressed as a linear combination of the columns of \(\mathbf{A}_{B\backslash \{r\}}\). This follows from the fact that \(\mathbf{d}^\mathsf{T}\mathbf{A}_{B\backslash \{r\}} = \mathbf{0}^\mathsf{T}\) but \(\mathbf{d}^\mathsf{T}\mathbf{A}_k = u_k \gt 0\).

Example. We consider the example from an earlier part and change \(b_2\) to \(-1\). So the problem \((P)\) is \(\min\{ \mathbf{c}^\mathsf{T}\mathbf{x} : \mathbf{A}\mathbf{x} = \mathbf{b},~\mathbf{x} \geq \mathbf{0}\}\) where \(\mathbf{A} = \begin{bmatrix} 2 & -1 & 1 & 2 & 0 \\ 1 & 1 & -1 & -2 & 0 \\ 1 & 2 & 0 & 0 & 1 \end{bmatrix}\), \(\mathbf{b} = \begin{bmatrix} 0 \\ -1 \\ 5 \end{bmatrix}\), \(\mathbf{c} = \begin{bmatrix} -4\\-2\\0\\ 1\\3\end{bmatrix}.\)

As we have seen, \(B = \{1,2,3\}\) is a dual feasible basis with \(\mathbf{y}^* = \begin{bmatrix} -1\\-1\\-1\end{bmatrix}\).

Iteration 1:

  1. \(N = \{4,5\}\).

  2. The basic solution determined by \(B\) can be found by solving \(\mathbf{A}_B\mathbf{x}_B^* = \mathbf{b}\), \(\mathbf{x}_N^* = \mathbf{0}\). This gives \(\mathbf{x}^* = \begin{bmatrix} -\frac{1}{3}\\\frac{8}{3}\\\frac{10}{3}\\0\\0\end{bmatrix}\).

  3. Choose \(r = 1\) since \(x^*_1 \lt 0\).

  4. Solving \(\begin{bmatrix} d_1& d_2& d_3\end{bmatrix} \begin{bmatrix} 2 & -1 & 1 \\ 1 & 1 & -1 \\ 1 & 2 & 0 \end{bmatrix} = \begin{bmatrix}-1& 0& 0\end{bmatrix}\) gives \(d_1 = d_2 = -\frac{1}{3}\), \(d_3 = 0\).

  5. Set \(\begin{bmatrix}u_4 & u_5\end{bmatrix} = \begin{bmatrix} -\frac{1}{3} & -\frac{1}{3} & 0\end{bmatrix} \begin{bmatrix} 2 & 0 \\ -2 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix}0& 0\end{bmatrix}\). As \(\mathbf{u}_N \leq \mathbf{0}\), stop; \((P)\) is infeasible.

Note that \(\mathbf{y}^* + \lambda\mathbf{d} = \begin{bmatrix} -1-\frac{1}{3}\lambda\\ -1-\frac{1}{3}\lambda\\ -1\end{bmatrix}\) is a feasible solution to the dual problem for all \(\lambda \geq 0\) with objective function value \(-4+\frac{1}{3}\lambda\), which tends to \(\infty\) as \(\lambda \rightarrow \infty\).
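The computations in this iteration can be checked with a few lines of NumPy (0-based indices, so the basis \(\{1,2,3\}\) becomes \([0,1,2]\) and \(r = 1\) sits in position \(0\)):

```python
import numpy as np

A = np.array([[2., -1.,  1.,  2., 0.],
              [1.,  1., -1., -2., 0.],
              [1.,  2.,  0.,  0., 1.]])
b = np.array([0., -1., 5.])
B, N = [0, 1, 2], [3, 4]

x_B = np.linalg.solve(A[:, B], b)                           # [-1/3, 8/3, 10/3]
d = np.linalg.solve(A[:, B].T, -np.array([1., 0., 0.]))     # [-1/3, -1/3, 0]
u_N = A[:, N].T @ d                                         # [0, 0] <= 0, so (P) is infeasible
print(x_B, d, u_N)
```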

Handling the addition of new inequalities

The dual simplex method can also be used to handle addition of new inequalities. Consider the example from an earlier part. As we have seen, \(B = \{1,2,3\}\) is an optimal basis determining the basic feasible solution \(\mathbf{x}^*=\begin{bmatrix}1\\2\\0\\0\\0\end{bmatrix}\) and \(\mathbf{y}^* = \begin{bmatrix}-1\\-1\\-1\end{bmatrix}\) satisfies \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\).

Suppose we add the inequality \(x_1 - x_2 \leq -2\) to the problem. First, we convert this to an equality constraint by introducing a slack variable \(x_6\geq 0\): \(x_1 - x_2 + x_6 = -2\). The new problem data are \(\mathbf{A} = \begin{bmatrix} 2 & -1 & 1 & 2 & 0 & 0\\ 1 & 1 & -1 & -2 & 0 & 0\\ 1 & 2 & 0 & 0 & 1 & 0 \\ 1 & -1 & 0 & 0 & 0 & 1 \end{bmatrix}\), \(\mathbf{b} = \begin{bmatrix} 0 \\ 3 \\ 5 \\ -2 \end{bmatrix}\), \(\mathbf{c} = \begin{bmatrix} -4\\-2\\0\\ 1\\3\\0\end{bmatrix}.\)

As \(x_6\) appears only in the added constraint, adding \(6\) to \(B\) gives a basis of \(\mathbf{A}\). In fact, the basis \(B = \{1,2,3,6\}\) is dual feasible: \({\mathbf{y}^*} = \begin{bmatrix}-1\\-1\\-1\\0\end{bmatrix}\) satisfies \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\) and is clearly a feasible solution to the dual of the new problem. We now apply the dual simplex method beginning with \(B = \{1,2,3,6\}\) and \({\mathbf{y}^*} = \begin{bmatrix}-1\\-1\\-1\\0\end{bmatrix}\).

Iteration 1:

  1. \(N = \{4,5\}\).

  2. The basic solution determined by \(B\) is \(\mathbf{x}^* = \begin{bmatrix}1\\2\\0\\0\\0\\-1\end{bmatrix}\).

  3. Choose \(r = 6\) since \(x^*_6 \lt 0\).

  4. Solving \(\begin{bmatrix}d_1&d_2&d_3&d_4\end{bmatrix}\begin{bmatrix} 2 & -1 & 1 & 0\\ 1 & 1 & -1 & 0\\ 1 & 2 & 0 & 0\\ 1 & -1 & 0 & 1 \end{bmatrix} = \begin{bmatrix}0&0&0&-1\end{bmatrix}\) gives \(d_1 = d_2 = \frac{1}{2}\), \(d_3 = -\frac{1}{2}\), \(d_4 = -1\).

  5. Set \(\begin{bmatrix} u_4&u_5\end{bmatrix} = \begin{bmatrix} \frac{1}{2}&\frac{1}{2}&-\frac{1}{2}&-1\end{bmatrix} \begin{bmatrix} 2 & 0 \\ -2 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix}0 & -\frac{1}{2}\end{bmatrix}\). As \(\mathbf{u}_N \leq \mathbf{0}\), stop; the problem with the added constraint is infeasible.
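Again the numbers can be verified with NumPy (0-based indices, so \(B = \{1,2,3,6\}\) becomes \([0,1,2,5]\) and \(r = 6\) sits in position \(3\)):

```python
import numpy as np

A = np.array([[2., -1.,  1.,  2., 0., 0.],
              [1.,  1., -1., -2., 0., 0.],
              [1.,  2.,  0.,  0., 1., 0.],
              [1., -1.,  0.,  0., 0., 1.]])
b = np.array([0., 3., 5., -2.])
B, N = [0, 1, 2, 5], [3, 4]

x_B = np.linalg.solve(A[:, B], b)                               # [1, 2, 0, -1]; x_6 < 0, so r = 6
d = np.linalg.solve(A[:, B].T, -np.array([0., 0., 0., 1.]))     # [1/2, 1/2, -1/2, -1]
u_N = A[:, N].T @ d                                             # [0, -1/2] <= 0, so infeasible
print(x_B, d, u_N)
```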

Finding basic feasible solutions

The dual simplex method can be used to find a basic feasible solution of the system \(\mathbf{A}\mathbf{x} = \mathbf{b},~\mathbf{x} \geq \mathbf{0}\) where \(\mathbf{A} \in \mathbb{R}^{m\times n}\) has rank \(m\), \(\mathbf{b} \in \mathbb{R}^m\), and \(\mathbf{x} = \begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix}\) is a tuple of variables. The reason is that every basis \(B\) of \(\mathbf{A}\) is a dual feasible basis with respect to \[\begin{array}{rl} \min & 0 \\ (P)~~~~\mbox{s.t.} & \mathbf{A}\mathbf{x} = \mathbf{b} \\ & \mathbf{x}\geq \mathbf{0}, \end{array}\] since \(\mathbf{c} = \mathbf{0}\) and so \(\mathbf{y}^* = \mathbf{0}\) satisfies \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\) and is a feasible solution to the dual problem.
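As a usage sketch, a basic feasible solution can therefore be sought by running the revised_dual_simplex sketch given earlier with a zero objective, starting from any basis of \(\mathbf{A}\). The data below are made up purely for illustration, and the block assumes that sketch is in scope.

```python
import numpy as np

# Made-up data; since c = 0, every basis of A is dual feasible.
A = np.array([[1.,  2., 1., 0.],
              [1., -1., 0., 1.]])
b = np.array([4., -1.])
c = np.zeros(4)

# Columns 2 and 3 form an identity basis, so they are a convenient start.
status, x, y = revised_dual_simplex(A, b, c, B=[2, 3])
print(status, x)   # "optimal" here means a basic feasible solution was found
```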

The first worked example below illustrates this method.

Worked examples

  1. Let \(\mathbf{A} = \begin{bmatrix} 1 & 1 & -1\\ 1 & -1 & 2 \end{bmatrix}\), \(\mathbf{b} = \begin{bmatrix} 2\\ 3 \end{bmatrix}\), and \(\mathbf{c} = \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}\). Solve the problem \(\min\{\mathbf{c}^\mathsf{T}\mathbf{x} : \mathbf{A}\mathbf{x} = \mathbf{b},~\mathbf{x}\geq\mathbf{0} \}\) using the revised dual simplex method starting with the dual feasible basis \(B = \{1, 2\}\).

  2. Let \(\mathbf{A} = \begin{bmatrix} 3 & -1 & -1 & 0\\ -1 & 2 & 1 & -1 \end{bmatrix}\), \(\mathbf{b} = \begin{bmatrix} 1\\ 3 \end{bmatrix}\), and \(\mathbf{c} = \begin{bmatrix} 4\\ 3\\ 0\\ -1 \end{bmatrix}\). Solve the problem \(\min\{\mathbf{c}^\mathsf{T}\mathbf{x} : \mathbf{A}\mathbf{x} = \mathbf{b},~\mathbf{x}\geq\mathbf{0} \}\) using the revised dual simplex method starting with the dual feasible basis \(B = \{3, 4\}\).

  3. Consider the problem \((P)\) with \(\mathbf{A} = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0\\ 2 & 1 & 1 & 0 & 0 & 0\\ -1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 \end{bmatrix}\), \(\mathbf{b} = \begin{bmatrix} 6 \\ 10 \\ 4 \\ 5 \end{bmatrix}\), \(\mathbf{c} = \begin{bmatrix} -3\\-4\\0\\ 0\\0\\0\end{bmatrix}.\) You are given that \(B = \{1,2,3,6\}\) is a dual feasible basis. Solve \((P)\) using the dual simplex method starting with the basis \(B\).

  4. Let \(B\) be a dual feasible basis with respect to \((P)\). Suppose that \((P)\) is modified by adding an equality constraint \(\mathbf{a}^\mathsf{T}\mathbf{x} + x_{n+1} = b_{m+1}\) where \(x_{n+1}\) is a new variable required to be nonnegative. Prove that \(B' = B \cup \{n+1\}\) is a dual feasible basis with respect to the modified problem.