
Resolution theorem

We now establish a generalization of Theorem 9.1.

Theorem 12.1. Let \(P \subseteq \mathbb{R}^n\) be a pointed polyhedron. Let \(\mathbf{x}^1,\ldots,\mathbf{x}^k\) be the extreme points of \(P\). Let \(\mathbf{d}^1,\ldots, \mathbf{d}^\ell\) be a complete set of extreme rays of \(P\). Then \[P = \left\{ \sum_{i=1}^k \lambda_i \mathbf{x}^i + \sum_{j=1}^\ell \gamma_j \mathbf{d}^j : \sum_{i=1}^k \lambda_i = 1,\, \lambda_i \geq 0~\forall i \in \{1,\ldots,k\},\, \gamma_j \geq 0~\forall j \in \{1,\ldots,\ell\}\right\}.\]

Proof. Let \(Q\) denote \(\left\{ \sum_{i=1}^k \lambda_i \mathbf{x}^i + \sum_{j=1}^\ell \gamma_j \mathbf{d}^j : \sum_{i=1}^k \lambda_i = 1,\, \lambda_i \geq 0~\forall i \in \{1,\ldots,k\},\, \gamma_j \geq 0~\forall j \in \{1,\ldots,\ell\}\right\}\).

We first show that \(Q \subseteq P\). Let \(\mathbf{x} \in Q\) be given by \(\mathbf{x} = \sum_{i=1}^k \lambda_i \mathbf{x}^i + \sum_{j=1}^\ell \gamma_j \mathbf{d}^j\) for some nonnegative scalars \(\lambda_i, i = 1,\ldots,k\), and \(\gamma_j, j = 1,\ldots,\ell\), with \(\sum_{i=1}^k \lambda_i = 1\). Then \(\mathbf{u} = \sum_{i=1}^k \lambda_i \mathbf{x}^i\) is a convex combination of elements of \(P\), and so \(\mathbf{u} \in P\) since \(P\) is convex. In addition, \(\mathbf{d} = \sum_{j=1}^\ell \gamma_j \mathbf{d}^j \in \operatorname{cone}\left( \left\{\mathbf{d}^1,\ldots, \mathbf{d}^\ell\right\}\right) = \operatorname{recc}(P)\). It now follows from the definition of the recession cone that \(\mathbf{x} = \mathbf{u} + \mathbf{d} \in P\).

It remains to show that \(P \subseteq Q\). Suppose, for the sake of contradiction, that there exists \(\mathbf{u} \in P\setminus Q\). In other words, the following system is infeasible: \begin{align*} \sum_{i=1}^k \lambda_i \mathbf{x}^i + \sum_{j=1}^\ell \gamma_j \mathbf{d}^j & = \mathbf{u} \\ \sum_{i=1}^k \lambda_i & = 1 \\ \lambda_i & \geq 0~& \forall i \in \{1,\ldots,k\},\\ \gamma_j & \geq 0~& \forall j \in \{1,\ldots,\ell\}. \end{align*} By Corollary 2.2, there exist \(\mathbf{y}\in \mathbb{R}^n\) and \(\mu \in \mathbb{R}\) such that \begin{align*} \mathbf{y}^\mathsf{T}\mathbf{x}^i + \mu & \geq 0 & \forall i\in\{1,\ldots,k\},\\ \mathbf{y}^\mathsf{T}\mathbf{d}^j & \geq 0 & \forall j\in\{1,\ldots,\ell\}, \\ \mathbf{y}^\mathsf{T}\mathbf{u} + \mu & \lt 0. \end{align*}
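To see how Corollary 2.2 applies here, it may help to write the infeasible system in matrix form (this restatement is not part of the notes; the notation below is introduced only for this remark): letting \(\mathbf{X} = \begin{bmatrix}\mathbf{x}^1 & \cdots & \mathbf{x}^k\end{bmatrix}\), \(\mathbf{D} = \begin{bmatrix}\mathbf{d}^1 & \cdots & \mathbf{d}^\ell\end{bmatrix}\), \(\boldsymbol{\lambda} = (\lambda_1,\ldots,\lambda_k)^\mathsf{T}\), and \(\boldsymbol{\gamma} = (\gamma_1,\ldots,\gamma_\ell)^\mathsf{T}\), the system reads \[ \begin{bmatrix} \mathbf{X} & \mathbf{D} \\ \mathbf{1}^\mathsf{T} & \mathbf{0}^\mathsf{T} \end{bmatrix} \begin{bmatrix} \boldsymbol{\lambda} \\ \boldsymbol{\gamma} \end{bmatrix} = \begin{bmatrix} \mathbf{u} \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} \boldsymbol{\lambda} \\ \boldsymbol{\gamma} \end{bmatrix} \geq \mathbf{0}. \] The pair \((\mathbf{y},\mu)\) then certifies infeasibility: pairing \(\begin{bmatrix}\mathbf{y} \\ \mu\end{bmatrix}\) with the columns \(\begin{bmatrix}\mathbf{x}^i \\ 1\end{bmatrix}\) and \(\begin{bmatrix}\mathbf{d}^j \\ 0\end{bmatrix}\) gives the first two families of inequalities, and pairing it with the right-hand side gives the strict inequality.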

Consider the linear programming problem \begin{align*} \min \ & \mathbf{y}^\mathsf{T}\mathbf{x} \\ \text{s.t.} \ & \mathbf{x} \in P. \end{align*} Since \(P \neq \emptyset\), this problem either has an optimal solution or is unbounded. If it has an optimal solution, then by Theorem 8.4, it has one that is an extreme point of \(P\), say \(\mathbf{x}^{\bar{i}}\). Since \(\mathbf{u} \in P\), we have that \(\mathbf{y}^\mathsf{T}\mathbf{x}^{\bar{i}} \leq \mathbf{y}^\mathsf{T}\mathbf{u}\), implying that \(\mathbf{y}^\mathsf{T}\mathbf{x}^{\bar{i}} + \mu \leq \mathbf{y}^\mathsf{T}\mathbf{u} + \mu \lt 0\), a contradiction. If it is unbounded, then by Theorem 10.5, there is an extreme ray \(\mathbf{d}^{\bar{j}}\) of \(\operatorname{recc}(P)\) such that \(\mathbf{y}^\mathsf{T}\mathbf{d}^{\bar{j}} \lt 0\), which again is a contradiction. Hence, no such \(\mathbf{u}\) exists, so \(P \subseteq Q\) and therefore \(P = Q\).

Using the Minkowski sum for sets (that is, \(U+V = \{ \mathbf{x}+\mathbf{y} : \mathbf{x} \in U,~\mathbf{y} \in V\}\) for \(U,V \subseteq \mathbb{R}^n\)), we can write \[P = \left\{ \sum_{i=1}^k \lambda_i \mathbf{x}^i + \sum_{j=1}^\ell \gamma_j \mathbf{d}^j : \sum_{i=1}^k \lambda_i = 1,\, \lambda_i \geq 0~\forall i \in \{1,\ldots,k\},\, \gamma_j \geq 0~\forall j \in \{1,\ldots,\ell\}\right\}\] more compactly as \[P = \operatorname{conv}\left(\left\{\mathbf{x}^1,\ldots,\mathbf{x}^k \right\}\right) + \operatorname{cone}\left(\left\{\mathbf{d}^1,\ldots,\mathbf{d}^\ell \right\}\right).\]
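For a concrete illustration (not taken from the notes), consider \(P = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{R}^2 : x_1 \geq 0,\, x_2 \geq 0,\, x_1 + x_2 \geq 1 \right\}\). Its extreme points are \(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\) and \(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\), and \(\operatorname{recc}(P)\) is the nonnegative orthant, whose extreme rays are generated by \(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\) and \(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\), so Theorem 12.1 gives \[P = \operatorname{conv}\left(\left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}\right) + \operatorname{cone}\left(\left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}\right).\] For instance, \(\begin{bmatrix} 2 \\ 1 \end{bmatrix} \in P\) decomposes as \(\frac{2}{3}\begin{bmatrix} 1 \\ 0 \end{bmatrix} + \frac{1}{3}\begin{bmatrix} 0 \\ 1 \end{bmatrix} + \frac{4}{3}\begin{bmatrix} 1 \\ 0 \end{bmatrix} + \frac{2}{3}\begin{bmatrix} 0 \\ 1 \end{bmatrix}\).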

The following more general result holds. However, its proof is beyond the scope of these notes.

Theorem 12.2. Let \(P \subseteq \mathbb{R}^n\) be a nonempty polyhedron. Then there exist \(\mathbf{x}^1,\ldots,\mathbf{x}^k \in \mathbb{R}^n\) and \(\mathbf{d}^1,\ldots, \mathbf{d}^\ell \in \mathbb{R}^n\) such that \[P = \operatorname{conv}\left(\left\{\mathbf{x}^1,\ldots,\mathbf{x}^k \right\}\right) + \operatorname{cone}\left(\left\{\mathbf{d}^1,\ldots,\mathbf{d}^\ell \right\}\right).\]
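As a quick illustration (again not from the notes), the half-plane \(P = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{R}^2 : x_2 \geq 0 \right\}\) is not pointed and has no extreme points, yet it admits such a representation: \[P = \operatorname{conv}\left(\left\{ \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\}\right) + \operatorname{cone}\left(\left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}\right).\] Note that, in contrast with Theorem 12.1, the points \(\mathbf{x}^1,\ldots,\mathbf{x}^k\) in Theorem 12.2 need not be extreme points of \(P\).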

Worked examples

  1. Let \(P = \left \{ \mathbf{x} \in \mathbb{R}^3 : \mathbf{A} \mathbf{x} \geq \mathbf{b}\right \}\) where \(\mathbf{A} = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 2 & -1 \\ 0 & 4 & 1 \\ 1 & -1 & -1 \end{bmatrix}\) and \(\mathbf{b} = \begin{bmatrix} 0 \\ 1 \\ -2 \\ -1 \end{bmatrix}\). Find extreme points \(\mathbf{x}^1,\ldots,\mathbf{x}^k\) of \(P\) and extreme rays \(\mathbf{d}^1,\ldots, \mathbf{d}^\ell\) of \(\operatorname{recc}(P)\) such that \(P = \operatorname{conv}\left(\left\{\mathbf{x}^1,\ldots,\mathbf{x}^k \right\}\right) + \operatorname{cone}\left(\left\{\mathbf{d}^1,\ldots,\mathbf{d}^\ell \right\}\right).\) (A brute-force enumeration sketch for this search appears after these examples.)

  2. Let \(C \subseteq\mathbb{R}^2\) be a polyhedral cone, not necessarily pointed. Consider \(C_1 = C \cap \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} : x_1, x_2 \geq 0\right\}\), \(C_2 = C \cap \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} : x_1 \leq 0, x_2 \geq 0\right\}\), \(C_3 = C \cap \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} : x_1, x_2 \leq 0 \right\}\), and \(C_4 = C \cap \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} : x_1 \geq 0, x_2 \leq 0 \right\}\).

    1. Show that \(C_i\) is a pointed polyhedral cone for \(i = 1,\ldots,4\), and that if \(\mathbf{d}^{(i,1)},\ldots,\mathbf{d}^{(i,r_i)}\) form a complete set of extreme rays of \(C_i\) for \(i = 1,\ldots,4\), then \[C = \operatorname{cone} \left( \bigcup_{i=1}^4 \left\{ \mathbf{d}^{(i,1)},\ldots,\mathbf{d}^{(i,r_i)} \right\} \right).\]

    2. Let \(C = \left\{\begin{bmatrix} x_1\\x_2\end{bmatrix} : 3x_1 + 2x_2 \geq 0\right\}\). Use the result in the previous part to find nonzero \(\mathbf{d}^1,\ldots,\mathbf{d}^\ell \in C\), where \(\ell\) is a positive integer, such that \(C = \operatorname{cone}\left( \left\{\mathbf{d}^1,\ldots,\mathbf{d}^\ell\right\} \right)\).
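The following is a minimal brute-force sketch for worked example 1, assuming NumPy and itertools are available; nothing in it comes from the notes. It enumerates candidate extreme points of \(P\) by setting each choice of three linearly independent rows of \(\mathbf{A}\mathbf{x} \geq \mathbf{b}\) to equality and keeping the feasible solutions, and it enumerates candidate extreme rays of \(\operatorname{recc}(P) = \{\mathbf{d} : \mathbf{A}\mathbf{d} \geq \mathbf{0}\}\) by setting pairs of rows to equality. It is intended only as a check on a hand computation.

    import itertools
    import numpy as np

    # Data from worked example 1.
    A = np.array([[1, 2, 1],
                  [2, 2, -1],
                  [0, 4, 1],
                  [1, -1, -1]], dtype=float)
    b = np.array([0, 1, -2, -1], dtype=float)

    def extreme_points(A, b, tol=1e-9):
        """Feasible solutions of n linearly independent rows of A x >= b set to equality."""
        n = A.shape[1]
        points = []
        for rows in itertools.combinations(range(A.shape[0]), n):
            A_sub, b_sub = A[list(rows)], b[list(rows)]
            if np.linalg.matrix_rank(A_sub) < n:
                continue
            x = np.linalg.solve(A_sub, b_sub)
            if np.all(A @ x >= b - tol) and not any(np.allclose(x, p) for p in points):
                points.append(x)
        return points

    def extreme_rays(A, tol=1e-9):
        """Nonzero d with A d >= 0 and n - 1 linearly independent rows tight."""
        n = A.shape[1]
        rays = []
        for rows in itertools.combinations(range(A.shape[0]), n - 1):
            A_sub = A[list(rows)]
            if np.linalg.matrix_rank(A_sub) < n - 1:
                continue
            # The null space of the selected rows is one-dimensional; take a spanning vector.
            d = np.linalg.svd(A_sub)[2][-1]
            for cand in (d, -d):  # try both orientations of the spanning vector
                if np.all(A @ cand >= -tol) and not any(np.allclose(cand, r) for r in rays):
                    rays.append(cand)
        return rays

    print("extreme points of P:", extreme_points(A, b))
    print("extreme ray candidates of recc(P):", extreme_rays(A))

Each printed ray candidate is determined only up to a positive scalar multiple, so the output may list rescaled versions of the rays found by hand.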