A set \(S \subseteq \mathbb{R}^n\) is said to be bounded if there exists \(M \gt 0\) such that \(\|\mathbf{x} \| \leq M\) for all \(\mathbf{x} \in S\).
A polyhedron \(P \subseteq \mathbb{R}^n\) is called a polytope if it is bounded. For example, the polyhedron \(P = \left \{\begin{bmatrix} x_1 \\ x_2\end{bmatrix} : -x_1-x_2\geq -1, x_1,x_2 \geq 0\right\}\) is a polytope in \(\mathbb{R}^2\).
One can see this in two ways. The first is to sketch \(P\) and observe that it is indeed bounded. Unfortunately, sketching a polyhedron is not always possible, especially when it lives in dimension higher than three.
The second way is to show that there exists a positive real number \(\gamma\) such that \(\lvert x_i\rvert \leq \gamma\) for \(i = 1,\ldots,n\) whenever \(\mathbf{x} \in P\). Then \(\|\mathbf{x}\| = \sqrt{\sum_{i=1}^n x_i^2 } \leq \sqrt{n\gamma^2} = \sqrt{n}\,\gamma\), implying that \(P\) is bounded.
In our example, we already have the inequalities \(x_1,x_2 \geq 0\). Hence, \(x_1\) and \(x_2\) are bounded below by 0. Now, adding the inequalities \(-x_1-x_2 \geq -1\) and \(x_2 \geq 0\) gives \(-x_1\geq -1\), which is equivalent to \(x_1 \leq 1\). Similarly, adding the inequalities \(-x_1-x_2 \geq -1\) and \(x_1 \geq 0\) gives \(-x_2\geq -1\), which is equivalent to \(x_2 \leq 1\). Thus, \(\lvert x_1\rvert, \lvert x_2\rvert \leq 1\), so \(\|\mathbf{x}\| \leq \sqrt{2}\) for all \(\mathbf{x} \in P\); hence \(P\) is a bounded polyhedron and therefore a polytope.
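When the inequalities are too numerous for this kind of hand manipulation, one can at least check boundedness computationally: a nonempty polyhedron is bounded if and only if minimizing and maximizing each coordinate over it gives finite optimal values, which is exactly the \(\lvert x_i\rvert \leq \gamma\) criterion above. The following is a minimal sketch of this check for our example, assuming SciPy's linprog is available; it is an illustration, not a substitute for the argument.

```python
# Numerical sanity check (not a proof): a nonempty polyhedron is bounded
# exactly when every coordinate has a finite minimum and maximum over it.
# The data below encode our example x1 + x2 <= 1, x1, x2 >= 0.
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[1.0, 1.0]])        # x1 + x2 <= 1  (same as -x1 - x2 >= -1)
b_ub = np.array([1.0])
bounds = [(0, None), (0, None)]      # x1 >= 0, x2 >= 0

for i in range(2):
    lo = linprog(np.eye(2)[i], A_ub=A_ub, b_ub=b_ub, bounds=bounds)   # min x_i
    hi = linprog(-np.eye(2)[i], A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # max x_i
    if lo.status == 3 or hi.status == 3:     # status 3 means the LP is unbounded
        print(f"x_{i+1} is unbounded over P, so P is not a polytope")
    else:
        print(f"x_{i+1} lies in [{lo.fun:.1f}, {-hi.fun:.1f}]")       # here [0.0, 1.0]
```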
Note that every nonempty polytope is pointed because, being bounded, it cannot contain a line. Therefore, a nonempty polytope has at least one extreme point. In fact, we can say more.
Proposition 12.3. If \(P\) is a nonempty polytope in \(\mathbb{R}^n\), then \(P\) is the convex hull of its extreme points.
Our proof will use the following lemma.
Lemma 12.4. Let \(\mathbf{v},\mathbf{u}^1,\ldots, \mathbf{u}^k \in \mathbb{R}^n\) where \(n\) and \(k\) are positive integers. If \(\mathbf{v}\notin \operatorname{conv}(\{ \mathbf{u}^1,\ldots, \mathbf{u}^k \}),\) then there exist \(\mathbf{a} \in \mathbb{R}^n\) and \(\beta \in \mathbb{R}\) such that \(\mathbf{a}^\mathsf{T}\mathbf{v} \lt \beta\) and \(\mathbf{a}^\mathsf{T}\mathbf{u}^i \geq \beta\) for \(i = 1,\ldots, k\). (This lemma is a special case of the more general Separating Hyperplane Theorem.)
Proof. Because \(\mathbf{v}\notin \operatorname{conv}(\{ \mathbf{u}^1,\ldots, \mathbf{u}^k \})\), the following linear programming problem is infeasible: \[\begin{array}{c} \min 0\lambda_1 + \cdots + 0\lambda_k \\ \text{s.t.} \\ \begin{bmatrix} 1 & \cdots & 1 \\ \mathbf{u}^1 & \cdots & \mathbf{u}^k \end{bmatrix} \begin{bmatrix} \lambda_1 \\ \vdots \\ \lambda_k \end{bmatrix} = \begin{bmatrix} 1 \\ \mathbf{v} \end{bmatrix} \\ \lambda_1,\ldots, \lambda_k \geq 0. \end{array}\] Its dual problem is \[\begin{array}{c} \max y_0 + \begin{bmatrix} y_1 & \cdots & y_n\end{bmatrix}\mathbf{v} \\ \text{s.t.} \\ y_0 + \begin{bmatrix} y_1 & \cdots & y_n\end{bmatrix}\mathbf{u}^i \leq 0~~~~\text{for } i = 1,\ldots, k. \end{array}\] The dual problem is feasible (e.g. with \(y_0 = y_1 = \cdots = y_n = 0\)). It cannot have an optimal solution, for otherwise strong duality would give an optimal solution to the infeasible problem above; being feasible, it must therefore be unbounded. Hence, there exist \(y^*_0,y^*_1,\ldots,y^*_n\) satisfying \(y^*_0 + \begin{bmatrix} y^*_1 & \cdots & y^*_n\end{bmatrix}\mathbf{u}^i \leq 0\) for \(i = 1,\ldots,k\) and \(y^*_0 + \begin{bmatrix} y^*_1 & \cdots & y^*_n\end{bmatrix}\mathbf{v} \gt 0\). Setting \(\mathbf{a} = \begin{bmatrix} -y^*_1 \\ \vdots \\ -y^*_n\end{bmatrix}\) and \(\beta = y^*_0\) gives the desired result: the first set of inequalities rearranges to \(\mathbf{a}^\mathsf{T}\mathbf{u}^i \geq \beta\) for \(i = 1,\ldots,k\), and the second rearranges to \(\mathbf{a}^\mathsf{T}\mathbf{v} \lt \beta\).
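The proof above is constructive in spirit: a separating pair \((\mathbf{a},\beta)\) can be read off from any feasible dual solution with positive objective value. One way to compute such a solution in practice (a sketch under my own normalization, not a method from the text) is to solve the dual with its variables restricted to a box: since the dual constraints are homogeneous, a feasible solution with positive objective can always be rescaled to fit the box, so a strictly positive optimum certifies that \(\mathbf{v}\) is outside the convex hull. The Python sketch below uses SciPy's linprog and made-up data.

```python
# A computational companion to Lemma 12.4 (illustrative data, not from the text).
# To separate v from conv({u^1, ..., u^k}), solve the dual LP with its variables
# restricted to a box; the constraints are homogeneous, so a feasible solution
# with positive objective survives rescaling, and it yields a = -y*, beta = y_0*.
import numpy as np
from scipy.optimize import linprog

U = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # rows are u^1, u^2, u^3
v = np.array([1.0, 1.0])                            # point to separate
k, n = U.shape

c = -np.concatenate(([1.0], v))           # maximize y_0 + y^T v  (linprog minimizes)
A_ub = np.hstack([np.ones((k, 1)), U])    # y_0 + y^T u^i <= 0 for each i
b_ub = np.zeros(k)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-1, 1)] * (n + 1))

if -res.fun > 1e-9:                       # strictly positive optimum: v is outside
    y0, y = res.x[0], res.x[1:]
    a, beta = -y, y0
    print("a =", a, " beta =", beta)                         # here a = [-1, -1], beta = -1
    print("a^T v =", a @ v, " min_i a^T u^i =", (U @ a).min())
else:
    print("v lies in the convex hull of the u^i")
```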
Proof of Proposition 12.3. Let \(Q\) denote the convex hull of the extreme points of \(P\). Note that \(Q\) is nonempty because \(P\), being nonempty and pointed, has at least one extreme point. Moreover, since \(P\) is convex and contains all of its extreme points, \(P \supseteq Q\).
Suppose that there exists \(\mathbf{v} \in P \backslash Q\). By Corollary 11.2, the number of extreme points of \(P\) is finite. Hence, by the lemma above, there exist \(\mathbf{a} \in \mathbb{R}^n\) and \(\beta \in \mathbb{R}\) such that \(\mathbf{a}^\mathsf{T}\mathbf{u} \geq \beta\) for every extreme point \(\mathbf{u}\) of \(P\) but \(\mathbf{a}^\mathsf{T}\mathbf{v} \lt \beta\). Let \((LP)\) denote the linear programming problem \(\min \{ \mathbf{a}^\mathsf{T} \mathbf{x} : \mathbf{x} \in P\}\). Since \(P\) is a nonempty polytope, \((LP)\) is feasible and cannot be unbounded, so it must have an optimal solution. By Theorem 11.4, it must have an optimal solution that is an extreme point of \(P\). But \(\mathbf{a}^\mathsf{T}\mathbf{u} \geq \beta\) for every extreme point \(\mathbf{u}\) of \(P\), so the optimal value must be at least \(\beta\). However, \(\mathbf{v}\) is a feasible solution with objective function value \(\mathbf{a}^\mathsf{T}\mathbf{v} \lt \beta\), giving a contradiction. Hence \(P \backslash Q\) is empty, so \(P \subseteq Q\) and therefore \(P = Q\).
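To see Proposition 12.3 in action, one can take a point of a polytope and recover multipliers expressing it as a convex combination of the extreme points by solving the feasibility problem from the proof of Lemma 12.4. The sketch below, again assuming SciPy, does this for the triangle from the start of this section, whose extreme points are \(\begin{bmatrix} 0 \\ 0\end{bmatrix}\), \(\begin{bmatrix} 1 \\ 0\end{bmatrix}\), and \(\begin{bmatrix} 0 \\ 1\end{bmatrix}\); the point chosen is arbitrary.

```python
# Illustrating Proposition 12.3 (a sketch, not part of the proof): recover
# multipliers lambda >= 0 with sum(lambda) = 1 writing a point of the triangle
# P = {x : x1 + x2 <= 1, x >= 0} as a convex combination of its extreme points.
import numpy as np
from scipy.optimize import linprog

U = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # extreme points of P (rows)
v = np.array([0.25, 0.5])                            # an arbitrary point of P

A_eq = np.vstack([np.ones((1, 3)), U.T])   # sum(lambda) = 1 and U^T lambda = v
b_eq = np.concatenate(([1.0], v))
res = linprog(np.zeros(3), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)

print("lambda =", res.x)                   # here [0.25, 0.25, 0.5]
print("reconstructed point =", U.T @ res.x)
```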
Is \(P = \left\{ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} : \begin{array}{c} x_1 - x_2 + 2x_3 \geq 2\\ -x_1+2x_2-3x_3 \geq 0\\ x_2+x_3 \geq 1\\ x_3 \geq 0 \end{array}\right\}\) a polytope? Justify your answer.
Let \(P = \left \{ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} : \begin{array}{c} -x_1 - 2x_2 - 3x_3 \geq -6 \\ x_1, x_2, x_3 \geq 0 \end{array} \right \}\). Write \(P\) as the convex hull of its extreme points.
Let \(P\) be the convex hull of \( \left\{ \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} -1 \\ -2 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix} \right \}\). Let \(\mathbf{u} = \begin{bmatrix} 1 \\ 1 \\ 1\end{bmatrix}\). Show that \(\mathbf{u} \notin P\) by giving \(\mathbf{a} \in \mathbb{R}^3\) and \(\beta \in \mathbb{R}\) such that \(\mathbf{a}^\mathsf{T}\mathbf{u} \lt \beta\) but \(\mathbf{a}^\mathsf{T}\mathbf{x} \geq \beta\) for all \(\mathbf{x} \in P\).