We consider LP problems in standard (equality) form: \[\begin{array}{rlcrl} \min & \mathbf{c}^\mathsf{T}\mathbf{x} \\ (P)~~~~~\mbox{s.t.} & \mathbf{A}\mathbf{x} = \mathbf{b} \\ & \mathbf{x} \geq \mathbf{0}. \end{array} \] As before, \(\mathbf{A} \in \mathbb{R}^{m\times n}\) with \(\operatorname{rank}(\mathbf{A}) = m\), \(\mathbf{b} \in \mathbb{R}^m\), \(\mathbf{c} \in \mathbb{R}^n\), and \(\mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n\end{bmatrix}\) is a tuple of variables.
In practice, the entries in any of \(\mathbf{A}\), \(\mathbf{b}\), and \(\mathbf{c}\) might come from measurements and might be inexact. Sensitivity analysis deals with how much change in the problem data we can tolerate without changing the optimality of the current basis. In what follows, we consider a selection of such questions; the selection is by no means exhaustive. Before we look at these questions, we list a few definitions.
A basis \(B\) is called a feasible basis if it determines a basic feasible solution.
A basis \(B\) is called a dual feasible basis if \(\mathbf{y}^*\) satisfying \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\) is a feasible solution to the dual problem of \((P)\). (Note that a dual feasible basis is not required to determine a basic feasible solution.)
A basis \(B\) is called an optimal basis if it is a feasible basis and a dual feasible basis. In this case, if \(\mathbf{x}^*\) is the basic feasible solution determined by \(B\) and \(\mathbf{y}^*\) satisfies \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\), then it can be seen from the revised simplex method (or simply from complementary slackness) that \(\mathbf{x}^*\) and \(\mathbf{y}^*\) are optimal solutions to \((P)\) and the dual of \((P)\), respectively.
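To spell out the last claim: since \(x^*_j = 0\) for all \(j \notin B\) and \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\), \[\mathbf{c}^\mathsf{T}\mathbf{x}^* = \mathbf{c}_B^\mathsf{T}\mathbf{x}^*_B = {\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B\mathbf{x}^*_B = {\mathbf{y}^*}^\mathsf{T}\mathbf{A}\mathbf{x}^* = {\mathbf{y}^*}^\mathsf{T}\mathbf{b},\] so the primal and dual objective values coincide, and weak duality then forces both \(\mathbf{x}^*\) and \(\mathbf{y}^*\) to be optimal.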
Consider \((P)\) with \(\mathbf{A} = \left[\begin{array}{rrrrr} 2 & -1 & 1 & 2 & 0 \\ 1 & 1 &-1 &-2 & 0 \\ 1 & 2 & 0 & 0 & 1 \end{array}\right],~\mathbf{b} = \left[\begin{array}{r} 0 \\ 3\\ 5 \end{array}\right],~\mathbf{c} = \left[\begin{array}{r} -4 \\ -2\\ 0\\ 1 \\ 3 \end{array}\right].\) It is known that \(B = \{1,2,3\}\) is an optimal basis determining the basic feasible solution \(\mathbf{x}^* = \begin{bmatrix} 1 & 2 & 0 & 0 & 0\end{bmatrix}^\mathsf{T}\). Let \(N = \{1,\ldots,5\}\setminus B\).
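Throughout this example, it can help to verify the arithmetic numerically. The following is a minimal NumPy sketch (not part of the original notes; the variable names are ours) that confirms \(B = \{1,2,3\}\) is an optimal basis by checking primal feasibility of \(\mathbf{x}^*\) and dual feasibility of \(\mathbf{y}^*\).

```python
# Minimal NumPy check that B = {1, 2, 3} is an optimal basis for this instance of (P).
import numpy as np

A = np.array([[2., -1.,  1.,  2., 0.],
              [1.,  1., -1., -2., 0.],
              [1.,  2.,  0.,  0., 1.]])
b = np.array([0., 3., 5.])
c = np.array([-4., -2., 0., 1., 3.])

B = [0, 1, 2]   # 0-based indices for the basis {1, 2, 3}
N = [3, 4]      # 0-based indices for the nonbasic variables {4, 5}

x_B = np.linalg.solve(A[:, B], b)        # basic part of x*; expect [1, 2, 0] >= 0
y = np.linalg.solve(A[:, B].T, c[B])     # y* from y^T A_B = c_B^T; expect [-1, -1, -1]
reduced = c[N] - y @ A[:, N]             # nonbasic reduced costs; expect [1, 4] >= 0

print(x_B, y, reduced)   # primal and dual feasibility together certify optimality
```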
We consider various types of changes in the problem data. Changes to \(c_j\) when \(j \notin B\) are the easiest to handle.
Suppose that \(c_4\) is changed from \(1\) to \(\theta\) and \(c_5\) is changed from \(3\) to \(\lambda\). For what values of \(\theta\) and \(\lambda\) does \(B\) remain an optimal basis?
Note that \(\mathbf{x}^*\) remains a basic feasible solution as \(\mathbf{A}\) and \(\mathbf{b}\) are not affected. Hence, \(B\) remains an optimal basis if and only if \(\mathbf{y}^*\) satisfying \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\) remains a dual feasible solution after the changes.
Now, \(\mathbf{y}^* = \begin{bmatrix}-1 & -1 & -1\end{bmatrix}^\mathsf{T}\). So \(\mathbf{y}^*\) remains a dual feasible solution after changing \(c_4\) to \(\theta\) and \(c_5\) to \(\lambda\) if and only if \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_N \leq \begin{bmatrix} \theta & \lambda \end{bmatrix}\); or equivalently, \(0 \leq \theta\) and \(-1 \leq \lambda\). Hence, \(B\) remains an optimal basis for all \(\theta\) and \(\lambda\) satisfying \(\theta \geq 0\) and \(\lambda \geq -1\).
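For concreteness, a small hypothetical helper (reusing `A`, `c`, `B`, `N`, and `y` from the sketch above) performs exactly this check for given values of \(\theta\) and \(\lambda\):

```python
# Hypothetical check: B stays optimal after c_4 -> theta, c_5 -> lambda
# iff y^T A_N <= [theta, lambda]; reuses A, c, B, N, y from the sketch above.
def stays_optimal(theta, lam):
    c_new = c.copy()
    c_new[3], c_new[4] = theta, lam        # entries 4 and 5 in 1-based indexing
    # y* is unchanged because c_B is untouched, so only dual feasibility can fail
    return bool(np.all(y @ A[:, N] <= c_new[N] + 1e-9))

print(stays_optimal(0.0, -1.0))   # True: the boundary case theta = 0, lambda = -1
print(stays_optimal(-0.5, 2.0))   # False: theta < 0 violates the first inequality
```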
Note that if \(\theta\) and \(\lambda\) do not satisfy these inequalities, \(B\) will no longer be an optimal basis. However, we do not need to solve the problem from scratch: \(B\) is still a feasible basis, so we can simply continue the simplex method from the basis \(B\).
Now, suppose that \(c_2\) is changed from \(-2\) to \(\theta\). For what values of \(\theta\) does \(B\) remain an optimal basis?
Again, \(\mathbf{x}^*\) remains a basic feasible solution. As \(2 \in B\), we need to solve for \(\mathbf{y}\) in \(\mathbf{y}^\mathsf{T}\mathbf{A}_B = {\mathbf{c}'_B}^\mathsf{T}\) where \(\mathbf{c}' = \begin{bmatrix} -4\\ \theta\\ 0\\ 1\\ 3\end{bmatrix}\): \[ [y_1,~y_2,~y_3] \begin{bmatrix} 2 & -1 & 1 \\ 1 & 1 &-1 \\ 1 & 2 & 0 \end{bmatrix} = [-4,~\theta,~0].\]
Solving gives \(y_1 = y_2 = \dfrac{-8-\theta}{6},\) \(y_3 = \dfrac{\theta}{2}.\) So \(B\) remains an optimal basis if and only if \begin{align*} \mathbf{y}^\mathsf{T}\mathbf{A}_N \leq {\mathbf{c}_N'}^\mathsf{T} & \iff \begin{bmatrix} \dfrac{-8-\theta}{6}&\dfrac{-8-\theta}{6}&\dfrac{\theta}{2} \end{bmatrix} \begin{bmatrix} 2 & 0 \\ -2 & 0 \\ 0 & 1 \end{bmatrix} \leq \begin{bmatrix}1 & ~3\end{bmatrix} \\ & \iff \begin{bmatrix}0 & ~\dfrac{\theta}{2}\end{bmatrix} \leq \begin{bmatrix} 1 & ~3\end{bmatrix} \\ & \iff \theta \leq 6. \end{align*}
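As a sanity check, one can exploit the fact that the dual solution depends affinely on \(\theta\). The sketch below (again reusing the setup from the first sketch) computes the constant and linear parts of the reduced costs numerically.

```python
# Sanity check (same setup as above): with c_2 replaced by theta, the dual solution
# is y(theta) = y0 + theta * y1, so the nonbasic reduced costs are affine in theta.
y0 = np.linalg.solve(A[:, B].T, np.array([-4., 0., 0.]))   # c'_B at theta = 0
y1 = np.linalg.solve(A[:, B].T, np.array([0., 1., 0.]))    # coefficient of theta

const = c[N] - y0 @ A[:, N]    # expect [1, 3]
slope = -(y1 @ A[:, N])        # expect [0, -0.5]
print(const, slope)            # need 1 >= 0 and 3 - theta/2 >= 0, i.e. theta <= 6
```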
Suppose that \(b_2\) is changed from \(3\) to \(\theta\). For what values of \(\theta\) does \(B\) remain an optimal basis?
Note that changes in \(\mathbf{b}\) affect neither \(\mathbf{A}\) nor \(\mathbf{c}\), so the constraints of the dual problem are unchanged. Hence \(\mathbf{y}^*\) satisfying \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\) is still a feasible solution to the dual problem after modifying \(\mathbf{b}\), and \(B\) remains an optimal basis as long as it determines a basic feasible solution. Let \(\mathbf{b}' = \begin{bmatrix} 0\\\theta\\5\end{bmatrix}\). By definition, the basic solution \(\mathbf{x}^*\) to \(\mathbf{A}\mathbf{x} = \mathbf{b}'\) determined by \(B\) satisfies \(\mathbf{A}\mathbf{x}^* = \mathbf{b}',~x_j^* = 0\) for all \(j \notin B\); or equivalently, \(\mathbf{A}_B\mathbf{x}^*_B = \mathbf{b}',~x^*_j = 0\) for all \(j \notin B\). So we need to solve \[\begin{bmatrix} 2 & -1 & 1 \\ 1 & 1 &-1 \\ 1 & 2 & 0 \end{bmatrix} \begin{bmatrix} x^*_1\\x^*_2\\x^*_3 \end{bmatrix}= \begin{bmatrix} 0\\ \theta\\ 5 \end{bmatrix}.\] We get \(x^*_1 = \dfrac{\theta}{3}\), \(x^*_2 = \dfrac{-\theta}{6}+\dfrac{5}{2}\), \(x^*_3 = \dfrac{-5\theta}{6}+\dfrac{5}{2}\). So \(B\) determines an optimal basic feasible solution if and only if \begin{align*} x^*_1 & \geq 0 \\ x^*_2 & \geq 0 \\ x^*_3 & \geq 0. \end{align*} In terms of \(\theta\), the system is \begin{align*} \dfrac{\theta}{3} & \geq 0 \\ \dfrac{-\theta}{6}+\dfrac{5}{2} & \geq 0 \\ \dfrac{-5\theta}{6}+\dfrac{5}{2} & \geq 0, \end{align*} which simplifies to \begin{align*} \theta & \geq 0 \\ \theta & \leq 15 \\ \theta & \leq 3. \end{align*} So the range of values of \(\theta\) for which \(B\) remains an optimal basis is \(0 \leq \theta \leq 3\).
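The same idea as before gives a quick numerical sketch (assumptions as in the earlier sketches): the basic solution depends affinely on \(\theta\), and the bounds can be read off componentwise.

```python
# Sanity check (same setup as above): with b_2 replaced by theta, the basic solution
# is x_B(theta) = x0 + theta * x1, and B stays optimal while x_B(theta) >= 0.
x0 = np.linalg.solve(A[:, B], np.array([0., 0., 5.]))   # expect [0, 2.5, 2.5]
x1 = np.linalg.solve(A[:, B], np.array([0., 1., 0.]))   # expect [1/3, -1/6, -5/6]
print(x0, x1)
# Componentwise x0 + theta * x1 >= 0 gives theta >= 0, theta <= 15, theta <= 3,
# so B remains an optimal basis exactly when 0 <= theta <= 3, as derived above.
```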
If \(\theta\) is changed to a value outside this range, \(B\) will no longer determine a basic feasible solution. However, \(B\) is still a dual feasible basis: \(\mathbf{y}^*\) satisfying \({\mathbf{y}^*}^\mathsf{T}\mathbf{A}_B = \mathbf{c}_B^\mathsf{T}\) remains a feasible solution to the dual problem because the dual constraints are not affected by changes to \(\mathbf{b}\). Next, we introduce the dual simplex method, which works with such bases.
Consider the linear programming problem \((P)\) with \(\mathbf{A} = \begin{bmatrix} 1 & 2 & -1 \\ 1 & 1 & 1 \end{bmatrix}\), \(\mathbf{b} = \begin{bmatrix} 4 \\ 3 \end{bmatrix}\), and \(\mathbf{c} = \left[\begin{array}{r} 0 \\ 0 \\ 1 \end{array}\right].\) Let \(B\) denote the basis \(\{1,2\}\).
Show that \(B\) is an optimal basis.
Suppose that the first entry of \(\mathbf{b}\) is changed from \(4\) to \(\theta\). For what values of \(\theta\) does \(B\) remain an optimal basis?