
Dantzig-Wolfe decomposition

We consider linear programming problems of the form
\[
\begin{array}{rrl}
\text{(P)} \quad & \min & c^{(1)\mathsf{T}}x^{(1)} + c^{(2)\mathsf{T}}x^{(2)} + \cdots + c^{(k)\mathsf{T}}x^{(k)} \\
& \text{s.t.} & A^{(1)}x^{(1)} + A^{(2)}x^{(2)} + \cdots + A^{(k)}x^{(k)} = b \\
& & x^{(i)} \in P_i \quad \text{for } i = 1,\ldots,k,
\end{array}
\]
where $b \in \mathbb{R}^m$ for some positive integer $m$, and for $i = 1,\ldots,k$, $n_i$ is a positive integer, $A^{(i)} \in \mathbb{R}^{m \times n_i}$, $c^{(i)} \in \mathbb{R}^{n_i}$, and $P_i \subseteq \mathbb{R}^{n_i}$ is a pointed polyhedron.

For each $i = 1,\ldots,k$, let $x^{(i,j)}$, $j = 1,\ldots,p_i$, be the extreme points of $P_i$ and let $d^{(i,j)}$, $j = 1,\ldots,r_i$, give a complete set of extreme rays of $P_i$, where $p_i$ and $r_i$ are nonnegative integers.

By Theorem 12.1, every $x^{(i)} \in P_i$ can be written as
\[
x^{(i)} = \sum_{j=1}^{p_i} \lambda^{(i,j)} x^{(i,j)} + \sum_{j=1}^{r_i} \gamma^{(i,j)} d^{(i,j)}
\]
for some $\lambda^{(i,j)} \geq 0$, $j = 1,\ldots,p_i$, and $\gamma^{(i,j)} \geq 0$, $j = 1,\ldots,r_i$, with $\sum_{j=1}^{p_i} \lambda^{(i,j)} = 1$.

Then (P) can be reformulated as
\[
\begin{array}{rrll}
\text{(DW)} \quad & \min & \displaystyle\sum_{i=1}^{k} \Big( \sum_{j=1}^{p_i} \hat{c}^{(i,j)} \lambda^{(i,j)} + \sum_{j=1}^{r_i} \tilde{c}^{(i,j)} \gamma^{(i,j)} \Big) & \\
& \text{s.t.} & \displaystyle\sum_{i=1}^{k} \Big( \sum_{j=1}^{p_i} \lambda^{(i,j)} \hat{a}^{(i,j)} + \sum_{j=1}^{r_i} \gamma^{(i,j)} \tilde{a}^{(i,j)} \Big) = b & \\
& & \displaystyle\sum_{j=1}^{p_i} \lambda^{(i,j)} = 1 & i = 1,\ldots,k, \\
& & \lambda^{(i,j)} \geq 0 & i = 1,\ldots,k,\ j = 1,\ldots,p_i, \\
& & \gamma^{(i,j)} \geq 0 & i = 1,\ldots,k,\ j = 1,\ldots,r_i,
\end{array}
\]
where for $i = 1,\ldots,k$ we have
\[
\hat{c}^{(i,j)} = c^{(i)\mathsf{T}} x^{(i,j)}, \qquad \hat{a}^{(i,j)} = A^{(i)} x^{(i,j)} \quad \text{for } j = 1,\ldots,p_i,
\]
and
\[
\tilde{c}^{(i,j)} = c^{(i)\mathsf{T}} d^{(i,j)}, \qquad \tilde{a}^{(i,j)} = A^{(i)} d^{(i,j)} \quad \text{for } j = 1,\ldots,r_i.
\]

Example. Consider
\[
\begin{array}{rrl}
\text{(P)} \quad & \min & x_1 + 2x_2 + 3y_1 + 4y_2 \\
& \text{s.t.} & -x_1 + 5x_2 + 7y_1 - 6y_2 = 1 \\
& & \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in P, \quad \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \in Q,
\end{array}
\]
where
\[
P = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} : x_1 + x_2 \geq 1,\ -x_1 + 2x_2 \geq 0,\ x_2 \geq 1 \right\}
\quad\text{and}\quad
Q = \left\{ \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} : 3y_1 - y_2 \geq 2,\ y_1 + 2y_2 \geq 3 \right\}.
\]
Solving this directly gives an optimal solution
\[
\begin{bmatrix} x_1 \\ x_2 \\ y_1 \\ y_2 \end{bmatrix}
= \begin{bmatrix} 2 \\ 1 \\ \tfrac{14}{11} \\ \tfrac{20}{11} \end{bmatrix}
\]
with optimal value $\tfrac{166}{11}$.
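The direct solve can be reproduced with an off-the-shelf LP solver. Below is a minimal sketch using SciPy's `linprog` (the variable ordering and the conversion of the $\geq$ constraints to $\leq$ form are mine, not part of the notes):

```python
# Solve the example (P) directly as one LP over z = (x1, x2, y1, y2).
from scipy.optimize import linprog

c = [1, 2, 3, 4]                    # objective: x1 + 2 x2 + 3 y1 + 4 y2
A_eq = [[-1, 5, 7, -6]]             # coupling: -x1 + 5 x2 + 7 y1 - 6 y2 = 1
b_eq = [1]
# P: x1 + x2 >= 1, -x1 + 2 x2 >= 0, x2 >= 1;  Q: 3 y1 - y2 >= 2, y1 + 2 y2 >= 3.
# linprog expects <= constraints, so each >= row is negated.
A_ub = [[-1, -1,  0,  0],
        [ 1, -2,  0,  0],
        [ 0, -1,  0,  0],
        [ 0,  0, -3,  1],
        [ 0,  0, -1, -2]]
b_ub = [-1, 0, -1, -2, -3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * 4)  # all four variables are free
print(res.x, res.fun)  # optimal solution (2, 1, 14/11, 20/11), value 166/11
```

Note that `bounds` must be set explicitly: `linprog` defaults to nonnegative variables, while $P$ and $Q$ here are not contained in the nonnegative orthant.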

Now, $P$ has extreme points $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$ and a complete set of extreme rays $\left\{ \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \end{bmatrix} \right\}$, whereas $Q$ has one extreme point $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and a complete set of extreme rays $\left\{ \begin{bmatrix} 1 \\ 3 \end{bmatrix}, \begin{bmatrix} 2 \\ -1 \end{bmatrix} \right\}$. Hence, (P) can be reformulated as
\[
\begin{array}{rl}
\min & 2\lambda^{(1,1)} + 4\lambda^{(1,2)} + \gamma^{(1,1)} + 4\gamma^{(1,2)} + 7\lambda^{(2,1)} + 15\gamma^{(2,1)} + 2\gamma^{(2,2)} \\
\text{s.t.} & 5\lambda^{(1,1)} + 3\lambda^{(1,2)} + 6\gamma^{(1,1)} + 3\gamma^{(1,2)} + \lambda^{(2,1)} - 11\gamma^{(2,1)} + 20\gamma^{(2,2)} = 1 \\
& \lambda^{(1,1)} + \lambda^{(1,2)} = 1 \\
& \lambda^{(2,1)} = 1 \\
& \lambda^{(1,1)}, \lambda^{(1,2)}, \gamma^{(1,1)}, \gamma^{(1,2)}, \lambda^{(2,1)}, \gamma^{(2,1)}, \gamma^{(2,2)} \geq 0.
\end{array}
\]
Solving this directly gives an optimal solution $\lambda^{(1,2)} = \lambda^{(2,1)} = 1$, $\gamma^{(2,1)} = \tfrac{3}{11}$, and the remaining variables equal to $0$. The optimal value is again $\tfrac{166}{11}$, and the solution gives the point $\lambda^{(1,2)} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$ in $P$ and $\lambda^{(2,1)} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \gamma^{(2,1)} \begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 1 + \tfrac{3}{11} \\ 1 + \tfrac{9}{11} \end{bmatrix} = \begin{bmatrix} \tfrac{14}{11} \\ \tfrac{20}{11} \end{bmatrix}$ in $Q$.
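Since (DW) is small here, it can also be solved explicitly and the original variables recovered from the optimal weights. A sketch (variable ordering and names are mine):

```python
# Solve the example's (DW) explicitly, then map the weights back to P and Q.
import numpy as np
from scipy.optimize import linprog

# Variable order: (lam11, lam12, gam11, gam12, lam21, gam21, gam22).
c = [2, 4, 1, 4, 7, 15, 2]
A_eq = [[5, 3, 6, 3, 1, -11, 20],   # coupling constraint
        [1, 1, 0, 0, 0,   0,  0],   # convexity constraint for P
        [0, 0, 0, 0, 1,   0,  0]]   # convexity constraint for Q
b_eq = [1, 1, 1]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7)

# Recover points in P and Q from the extreme points and rays.
lam11, lam12, gam11, gam12, lam21, gam21, gam22 = res.x
x = (lam11 * np.array([0, 1]) + lam12 * np.array([2, 1])
     + gam11 * np.array([-1, 1]) + gam12 * np.array([2, 1]))
y = (lam21 * np.array([1, 1]) + gam21 * np.array([1, 3])
     + gam22 * np.array([2, -1]))
print(res.fun, x, y)  # 166/11, then [2, 1] in P and [14/11, 20/11] in Q
```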

Note that (DW) is a linear programming problem in standard form, but it can have far more variables than (P). To solve (DW), we use the simplex method with column generation. We illustrate the idea on a generic standard-form problem
\[
\begin{array}{rrl}
\text{(SP)} \quad & \min & c^{\mathsf{T}} x \\
& \text{s.t.} & Ax = b, \\
& & x \geq 0.
\end{array}
\]

Suppose that $B$ is a basis determining a basic feasible solution. Recall that the only part of $A$ needed to compute the associated basic feasible solution is the submatrix $A_B$. To determine whether the basic feasible solution is optimal, we solve for $y$ in $y^{\mathsf{T}} A_B = c_B^{\mathsf{T}}$ and check whether $y$ is feasible for the dual; i.e., whether $y^{\mathsf{T}} A_j \leq c_j$ for all $j \notin B$. If not, we have some $k \notin B$ such that $y^{\mathsf{T}} A_k > c_k$, and $x_k$ is the entering variable. The leaving variable can be obtained knowing only $A_{B \cup \{k\}}$, $c_{B \cup \{k\}}$, and $b$. Therefore, the only step that requires knowledge of the entire matrix $A$ is finding $k$.
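This pricing step can be sketched in a few lines of NumPy. The helper name `find_entering` is mine, and the demonstration reuses the (DW) data of the example; one basis prices out a column, the other is optimal:

```python
# Sketch of the pricing step: given a basis B, solve y^T A_B = c_B^T,
# then scan the columns for some j with y^T A_j > c_j.
import numpy as np

def find_entering(A, c, basis, tol=1e-9):
    """Return an index j outside `basis` whose dual constraint is violated
    (y^T A_j > c_j), or None if the current basis is dual feasible."""
    y = np.linalg.solve(A[:, basis].T, c[basis])  # y^T A_B = c_B^T
    for j in range(A.shape[1]):
        if j not in basis and y @ A[:, j] > c[j] + tol:
            return j
    return None

# Constraint matrix and costs of the example's (DW); columns ordered as
# (lam11, lam12, gam11, gam12, lam21, gam21, gam22).
A = np.array([[5, 3, 6, 3, 1, -11, 20],
              [1, 1, 0, 0, 0,   0,  0],
              [0, 0, 0, 0, 1,   0,  0]], dtype=float)
c = np.array([2, 4, 1, 4, 7, 15, 2], dtype=float)

print(find_entering(A, c, [0, 5, 4]))  # 1: the lam12 column prices out
print(find_entering(A, c, [1, 5, 4]))  # None: this basis is optimal
```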

In the case of (DW), this check can be performed without explicit knowledge of the entire constraint matrix. To see this, write such a $y$ for the dual of (DW) as $\begin{bmatrix} u \\ \zeta_1 \\ \vdots \\ \zeta_k \end{bmatrix}$, where $u$ corresponds to the coupling constraints and $\zeta_i$ to the $i$th convexity constraint. We therefore need to check, for each $i \in \{1,\ldots,k\}$, whether there exists $j \in \{1,\ldots,p_i\}$ such that $u^{\mathsf{T}} \hat{a}^{(i,j)} + \zeta_i > \hat{c}^{(i,j)}$, or there exists $j \in \{1,\ldots,r_i\}$ such that $u^{\mathsf{T}} \tilde{a}^{(i,j)} > \tilde{c}^{(i,j)}$.

These inequalities can be rewritten as $\zeta_i > (c^{(i)\mathsf{T}} - u^{\mathsf{T}} A^{(i)}) x^{(i,j)}$ and $0 > (c^{(i)\mathsf{T}} - u^{\mathsf{T}} A^{(i)}) d^{(i,j)}$, respectively. We can handle both sets of inequalities for a particular $i \in \{1,\ldots,k\}$ by solving the subproblem
\[
\begin{array}{rl}
\min & (c^{(i)\mathsf{T}} - u^{\mathsf{T}} A^{(i)}) x \\
\text{s.t.} & x \in P_i.
\end{array}
\]
Suppose that it has an optimal solution. By Theorem 8.7, there is one that is an extreme point $x^{(i)}$ of $P_i$. If the optimal value is less than $\zeta_i$, then $\zeta_i > (c^{(i)\mathsf{T}} - u^{\mathsf{T}} A^{(i)}) x^{(i)}$, and so we have found an entering variable for (DW).
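For the example, the subproblem for the block $P$ can itself be solved with an off-the-shelf LP solver. In the sketch below, the dual values $u$ and $\zeta_1$ are hard-coded for illustration (they correspond to a non-optimal basis of the example's (DW)); the subproblem then reveals that the extreme point $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$ prices out:

```python
# Pricing subproblem min (c^(1)T - u^T A^(1)) x over the block P.
import numpy as np
from scipy.optimize import linprog

c_i = np.array([1.0, 2.0])       # c^(1)
A_i = np.array([[-1.0, 5.0]])    # A^(1), this block's part of the coupling row
# P in <= form: -(x1 + x2) <= -1, x1 - 2 x2 <= 0, -x2 <= -1.
G = np.array([[-1, -1], [1, -2], [0, -1]], dtype=float)
h = np.array([-1, 0, -1], dtype=float)

u = np.array([-15/11])           # dual values for the coupling constraints
zeta_1 = 97/11                   # dual value for P's convexity constraint

red = c_i - u @ A_i              # subproblem objective c^(1)T - u^T A^(1)
sub = linprog(red, A_ub=G, b_ub=h, bounds=[(None, None)] * 2)
if sub.status == 3:              # linprog status 3 means unbounded
    print("unbounded: some extreme ray of P gives an entering gamma column")
elif sub.fun < zeta_1 - 1e-9:
    print("extreme point", sub.x, "gives an entering lambda column")
else:
    print("no improving column from this block")
```

Here the optimal value is $\tfrac{89}{11} < \zeta_1 = \tfrac{97}{11}$, attained at the extreme point $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$, so the column of $\lambda^{(1,2)}$ enters.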

Suppose instead that the subproblem is unbounded. By Theorem 10.5, there is an extreme ray $d^{(i)}$ of $P_i$ with $0 > (c^{(i)\mathsf{T}} - u^{\mathsf{T}} A^{(i)}) d^{(i)}$. Again, we have found an entering variable for (DW).

If neither case applies for any $i = 1,\ldots,k$, then (DW) has been solved to optimality.

The computational advantage of solving (DW) this way rather than solving (P) directly is that when $k$ is large, (P) has many constraints, namely all those describing the polyhedra $P_1,\ldots,P_k$, whereas column generation keeps only the $m$ coupling constraints and the $k$ convexity constraints explicit. So when $m$ is small and each subproblem is easy to solve, each iteration of the simplex method has only modest memory requirements, making it possible to solve very large instances of (P).