Fundamental Theorem of Linear Programming

We now give a proof of the Fundamental Theorem of Linear Programming, which states that if a linear programming (LP) problem is neither infeasible nor unbounded, then it has an optimal solution.

Proof of Theorem 1.1. Without loss of generality, we may assume that the LP problem is of the form \[\begin{array}{rlcrl} \min & \mathbf{c}^\mathsf{T}\mathbf{x} \\ (P)~~~~~\mbox{s.t.} & \mathbf{A}\mathbf{x} \geq \mathbf{b} \end{array}\] where \(m\) and \(n\) are positive integers, \(\mathbf{A} \in \mathbb{R}^{m\times n}\), \(\mathbf{b} \in \mathbb{R}^m\), \(\mathbf{c} \in \mathbb{R}^n\), and \(\mathbf{x}= \begin{bmatrix} x_1\\\vdots \\ x_n\end{bmatrix}\) is a tuple of variables. (Indeed, any LP problem can be converted to an LP problem in the form of \((P)\) having the same feasible region and optimal solution set. To see this, note that a constraint of the form \(\mathbf{a}^\mathsf{T}\mathbf{x} \leq \beta\) can be written as \(-\mathbf{a}^\mathsf{T}\mathbf{x} \geq -\beta\); a constraint of the form \(\mathbf{a}^\mathsf{T}\mathbf{x} = \beta\) can be written as a pair of constraints \(\mathbf{a}^\mathsf{T}\mathbf{x} \geq \beta\) and \(-\mathbf{a}^\mathsf{T}\mathbf{x} \geq -\beta\); and a maximization problem is equivalent to the problem that minimizes the negative of the objective function subject to the same constraints.)
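To make this conversion concrete, here is a minimal sketch in Python with NumPy (the function `to_geq_form` is our own illustrative name, not a library routine) that assembles \(\leq\), \(=\), and \(\geq\) constraints into a single system \(\mathbf{A}\mathbf{x} \geq \mathbf{b}\):

```python
import numpy as np

def to_geq_form(A_le=None, b_le=None, A_eq=None, b_eq=None,
                A_ge=None, b_ge=None):
    """Collect <=, =, >= constraints into one system A x >= b.

    a^T x <= beta  becomes  -a^T x >= -beta;
    a^T x  = beta  becomes   a^T x >=  beta  and  -a^T x >= -beta.
    (A maximization of c^T x is handled separately as -min (-c)^T x.)
    """
    rows, rhs = [], []
    if A_ge is not None:
        rows.append(np.asarray(A_ge, dtype=float))
        rhs.append(np.asarray(b_ge, dtype=float))
    if A_le is not None:
        rows.append(-np.asarray(A_le, dtype=float))
        rhs.append(-np.asarray(b_le, dtype=float))
    if A_eq is not None:
        E = np.asarray(A_eq, dtype=float)
        f = np.asarray(b_eq, dtype=float)
        rows.extend([E, -E])   # equality = two opposite inequalities
        rhs.extend([f, -f])
    return np.vstack(rows), np.concatenate(rhs)

# Example: x1 + x2 <= 4 and x1 - x2 = 1 become a 3-row ">=" system.
A, b = to_geq_form(A_le=[[1, 1]], b_le=[4], A_eq=[[1, -1]], b_eq=[1])
```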

Suppose that \((P)\) is not infeasible. Let \((S)\) denote the system \[ \begin{array}{r} && ~~z- \mathbf{c}^\mathsf{T}\mathbf{x} \geq 0\\ & & -z+ \mathbf{c}^\mathsf{T}\mathbf{x} \geq 0 \\ && ~~~~~~\mathbf{A}\mathbf{x} \geq \mathbf{b}. \end{array} \] Solving \((P)\) is equivalent to finding among all the solutions to \((S)\) one that minimizes \(z\), if it exists. Eliminating the variables \(x_1,\ldots,x_n\) (in any order) using Fourier-Motzkin elimination gives a system of linear inequalities \((S')\) containing at most the variable \(z\). By scaling, we may assume that the coefficients of \(z\) in \((S')\) are \(1\), \(-1\), or \(0\). Note that any \(z\) satisfying \((S')\) can be extended to a solution to \((S)\) and the \(z\) value from any solution to \((S)\) must satisfy \((S')\).
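The following is a minimal sketch of a single Fourier-Motzkin elimination step, assuming the system is stored row-wise as \(\mathbf{A}\mathbf{x} \geq \mathbf{b}\); the helper `fm_eliminate` is our own construction for illustration, not a standard library function:

```python
import numpy as np

def fm_eliminate(A, b, k):
    """One Fourier-Motzkin step on A x >= b: return an equivalent
    system in which variable k no longer appears."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    pos  = [i for i in range(len(b)) if A[i, k] > 0]
    neg  = [i for i in range(len(b)) if A[i, k] < 0]
    keep = [i for i in range(len(b)) if A[i, k] == 0]
    rows = [(A[i].copy(), b[i]) for i in keep]
    # Scale so the coefficient of x_k is +1 (pos rows) or -1 (neg rows);
    # adding one row of each kind then cancels x_k.
    for i in pos:
        for j in neg:
            ai, bi = A[i] / A[i, k], b[i] / A[i, k]
            aj, bj = A[j] / -A[j, k], b[j] / -A[j, k]
            rows.append((ai + aj, bi + bj))
    if not rows:  # x_k appeared with only one sign: no constraint remains
        return np.zeros((0, A.shape[1])), np.zeros(0)
    return np.array([a for a, _ in rows]), np.array([v for _, v in rows])
```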

That \((P)\) is not unbounded implies that \((S')\) must contain an inequality of the form \(z \geq \beta\) for some \(\beta \in \mathbb{R}\). (Why?) Let the inequalities in which the coefficient of \(z\) is positive be \[z \geq \beta_i\] where \(\beta_i \in \mathbb{R}\) for \(i = 1,\ldots,p\) for some positive integer \(p\). Let \(\gamma = \max\{\beta_1,\ldots,\beta_p\}\). Then for any solution \((\mathbf{x}, z)\) to \((S)\), \(z\) is at least \(\gamma\). Moreover, we can set \(z = \gamma\) and extend it to a solution to \((S)\). Hence, we obtain an optimal solution for \((P)\), and \(\gamma\) is the optimal value. This completes the proof of the theorem.
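Continuing the sketch above (reusing the `fm_eliminate` helper) on a tiny instance of our own choosing, \(\min\, x_1 + x_2\) subject to \(x_1 \geq 1\) and \(x_2 \geq 2\), eliminating \(x_1\) and then \(x_2\) from \((S)\) leaves inequalities in \(z\) alone, and the largest lower bound is \(\gamma\):

```python
# Variables of (S) are ordered (z, x1, x2); here c = (1, 1).
S_A = np.array([[ 1.0, -1.0, -1.0],   #  z - c^T x >= 0
                [-1.0,  1.0,  1.0],   # -z + c^T x >= 0
                [ 0.0,  1.0,  0.0],   #  x1 >= 1
                [ 0.0,  0.0,  1.0]])  #  x2 >= 2
S_b = np.array([0.0, 0.0, 1.0, 2.0])

A1, b1 = fm_eliminate(S_A, S_b, 1)   # eliminate x1
A2, b2 = fm_eliminate(A1, b1, 2)     # eliminate x2
# Rows with a positive z-coefficient give the lower bounds z >= beta_i.
betas = [b2[i] / A2[i, 0] for i in range(len(b2)) if A2[i, 0] > 0]
gamma = max(betas)
print(gamma)  # 3.0
```

Here the surviving inequality \(z \geq 3\) is exactly the bound the proof extracts from \((S')\), and \(\gamma = 3\) is attained at \(\mathbf{x} = (1,2)\).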

Remark. It is possible to construct multipliers to infer the inequality \(\mathbf{c}^\mathsf{T}\mathbf{x} \geq \gamma\) from the system \(\mathbf{A}\mathbf{x} \geq \mathbf{b}\). Because we obtained the inequality \(z \geq \gamma\) using Fourier-Motzkin elimination, there must exist real numbers \(\alpha, \beta, y^*_1,\ldots, y^*_m\geq 0\) such that \[ \begin{bmatrix}\alpha & \beta & y^*_1 & \cdots &y^*_m\end{bmatrix} \begin{bmatrix} 1 & -\mathbf{c}^\mathsf{T} \\ -1 & \mathbf{c}^\mathsf{T} \\ 0 & \mathbf{A} \end{bmatrix} \begin{bmatrix} z \\ \mathbf{x} \end{bmatrix} \geq \begin{bmatrix}\alpha & \beta & y^*_1 & \cdots &y^*_m\end{bmatrix} \begin{bmatrix} 0 \\ 0\\ \mathbf{b} \end{bmatrix} \] is identically \(z \geq \gamma\). Note that we must have \(\alpha-\beta = 1\) and \[\mathbf{y}^* \geq \mathbf{0},~{\mathbf{y}^*}^\mathsf{T}\mathbf{A} = \mathbf{c}^\mathsf{T},~\mbox{and } {\mathbf{y}^*}^\mathsf{T}\mathbf{b} = \gamma\] where \(\mathbf{y}^* = [y^*_1,\ldots,y^*_m]^\mathsf{T}\). Hence, \(y^*_1,\ldots,y^*_m\) are the desired multipliers.
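Such a certificate is easy to verify numerically. For the tiny instance used above, the multipliers \(\mathbf{y}^* = (1,1)\) (found by inspection here, purely for illustration) prove \(\mathbf{c}^\mathsf{T}\mathbf{x} \geq 3\) for every feasible \(\mathbf{x}\):

```python
import numpy as np

# Tiny instance: c = (1, 1), A = I_2, b = (1, 2), optimal value 3.
c = np.array([1.0, 1.0])
A = np.eye(2)
b = np.array([1.0, 2.0])
y_star = np.array([1.0, 1.0])

assert np.all(y_star >= 0)         # nonnegative multipliers
assert np.allclose(y_star @ A, c)  # y*^T A = c^T
gamma = y_star @ b                 # y*^T b = 3, the optimal value
print(gamma)
```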

The significance of the fact that we can infer \(\mathbf{c}^\mathsf{T}\mathbf{x} \geq \gamma\), where \(\gamma\) is the optimal value, will be discussed in more detail when we look at duality theory for linear programming.

Worked examples

  1. Show that \(\mathbf{x}^*\) is an optimal solution to \[\begin{array}{rl} \min & \mathbf{c}^\mathsf{T} \mathbf{x} \\ \text{s.t.} & \mathbf{A}\mathbf{x} \geq \mathbf{b} \end{array}\] if and only if \((z,\mathbf{x}) = (\mathbf{c}^\mathsf{T}\mathbf{x}^*, \mathbf{x}^*)\) is an optimal solution to \[\begin{array}{rl} \min & z \\ \text{s.t.} & z- \mathbf{c}^\mathsf{T} \mathbf{x} \geq 0 \\ & \mathbf{A}\mathbf{x} \geq \mathbf{b} \end{array}\]