
All problems are basically the same


We’ve explained how to use the Simplex Method to solve systems of linear inequalities and to find optimal strategies for matrix games. In both cases this was accomplished by taking the problem we wanted to solve and using it to create a related linear program whose solution would also tell us how to solve the starting problem. This means that not only the Simplex Method, but any method for solving LPs can be used for both those tasks. In particular, linear programming is at least as hard as those other problems: an easy method for solving linear programs would give us an easy method for these other problems. But is linear programming equally hard, or is it harder?

We’ll compare the following three problems:

  1. LP: Solving linear programs.
  2. Ineq: Solving systems of linear inequalities (with some equations allowed too).
  3. Games: Finding optimal strategies for matrix games.

To show two such problems, say A and B, are “equally hard” we have to reduce A to B and vice versa. Reducing A to B means explaining:

  1. how to convert any problem of type A into a problem of type B, and
  2. how to interpret a solution of the resulting type B problem as a solution of the original type A problem.

If we can do both of these things, any method of solving problems of type B can be used to solve problems of type A too. So solving problems of type A can’t really be harder than solving problems of type B.

Warm up: LP and LPstd

As an easy warm up let’s consider the tasks LP of solving linear programs and LPstd of solving linear programs in standard form. Let’s be clear about what a solution to an LP means in this context: it means that we determine whether the LP is infeasible, unbounded or has an optimal solution; if it is unbounded we find a 1-parameter unbounded family of feasible solutions, and if it has optimal solutions we find one.

Clearly it’s not harder to solve standard form LPs than to solve arbitrary LPs! So LPstd reduces to LP trivially: given a standard form LP you don’t have to do anything to convert it to an LP, and if you have a solution for it thought of as an LP that is obviously also a solution for it thought of as a standard form LP.

The interesting reduction is to go the other way: reduce solving arbitrary LPs to solving standard form LPs. This is one of the first things we studied: converting LPs to standard form. We gave a simple procedure for doing so, that came with a corresponding procedure to interpret the solution of the standard form LP as a solution to the original LP.

The only reason I bring up this simple example of comparing two problems is to point out that when you convert an LP, say P, to standard form, say Pstd, you really do have to do something to interpret the solution of Pstd as a solution to P. It’s not hard, but it’s not nothing either. Here’s a sketch of the procedure:

P is infeasible, unbounded, or has an optimal solution according to whether Pstd is infeasible, unbounded, or has an optimal solution. In the infeasible case that’s all there is to say; in the other cases we need to know how the variables of the two LPs correspond. If P had any free variables, say \(x\), then Pstd has in place of \(x\) two variables \(x'\) and \(x''\); a solution for Pstd will include values for \(x'\) and \(x''\), and to get the value of \(x\) in the corresponding solution of P we set \(x:=x'-x''\). Also, if the objective function of P had a constant term, for example, if P started with “maximize \(x_1+2x_2-x_3+7\)”, then Pstd has objective function \(x_1+2x_2-x_3\), without the 7, so the optimal value of P is the optimal value of Pstd plus 7. Even if P had no constant term, if it was a minimization question, the optimal value of P is not the same as the optimal value of Pstd: they differ in sign.
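Concretely, here is what that bookkeeping looks like on a small made-up example of my own (not from the text), assuming scipy is available; the numbers and variable names are purely illustrative.

```python
# Sketch of the standard-form bookkeeping, assuming scipy is available.
# Made-up example LP:
#   maximize  x + 2y + 7   subject to  x + y <= 4,  x >= 0,  y free.
# Standard form: split the free variable y = y1 - y2 and drop the constant 7:
#   maximize  x + 2*y1 - 2*y2   s.t.  x + y1 - y2 <= 4,  x, y1, y2 >= 0.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective of the maximization problem.
res = linprog(c=[-1.0, -2.0, 2.0],
              A_ub=[[1.0, 1.0, -1.0]], b_ub=[4.0],
              bounds=[(0, None)] * 3)
assert res.success

x, y1, y2 = res.x
y = y1 - y2            # recover the free variable: y = y' - y''
value = -res.fun + 7   # undo the negation and add the constant term back
```

The point is exactly the one made above: the solver only hands back values for the standard-form variables and the standard-form objective, and the two lines at the end are the (small but necessary) interpretation step.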

OK, enough about that; let’s pass to more interesting comparisons.

LP and Ineq

Let’s start by being clear about what it should mean to solve a system of linear inequalities: we should be able to decide whether they have a simultaneous solution and if they do, to find one.

We already discussed a reduction in one direction between LP and Ineq. Pause a bit to make sure you know which direction and what we called it.

The reduction of Ineq to LP is basically Phase 1 of the Simplex Method! Not quite, because we also need to convert to standard form to use the exact construction we used in Phase 1. Let’s review it:

To solve a system of linear inequalities, you first convert it to standard form \(A \mathbf{x} \le \mathbf{b}\), \(\mathbf{x} \ge 0\), just like you do for the constraints of an LP. Once it is in that form you make the auxiliary LP:

Maximize \(-x_0\)

subject to \(\left\{\begin{aligned} A \mathbf{x} - \begin{pmatrix} x_0 & x_0 & \cdots & x_0 \end{pmatrix}^\top & \le \mathbf{b} \\ x_0, \mathbf{x} & \ge 0 \\ \end{aligned}\right.\)

How do you interpret the solution to this LP as a solution to the system of inequalities? Well, this LP always has an optimal solution and all optimal solutions have the same value of \(x_0\).¹ If \(x_0=0\) then the system of inequalities has solutions and, for the standard form version, a solution is given by the vector \(\mathbf{x}\). If \(x_0>0\) then the system of inequalities has no solution.
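This reduction is easy to carry out mechanically. Here is a sketch on two tiny systems of my own invention, assuming scipy is available; `solve_inequalities` is a hypothetical helper name, not anything from the text.

```python
# Sketch of the Ineq-to-LP reduction (assuming scipy is available):
# solve the auxiliary LP
#   maximize -x0   s.t.   A x - x0*(1,...,1)^T <= b,   x0 >= 0, x >= 0,
# then read off feasibility from the optimal value of x0.
import numpy as np
from scipy.optimize import linprog

def solve_inequalities(A, b):
    """Return a solution x of A x <= b, x >= 0, or None if there is none."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables are (x0, x); linprog minimizes, and min x0 = max -x0.
    A_aux = np.hstack([-np.ones((m, 1)), A])
    res = linprog(c=[1.0] + [0.0] * n, A_ub=A_aux, b_ub=b,
                  bounds=[(0, None)] * (n + 1))
    x0, x = res.x[0], res.x[1:]
    return x if x0 < 1e-9 else None

# x1 >= 1 and x1 <= 3, written in standard form: feasible.
sol = solve_inequalities([[-1.0], [1.0]], [-1.0, 3.0])
# x1 >= 2 and x1 <= 1: infeasible, so the optimal x0 is positive.
bad = solve_inequalities([[-1.0], [1.0]], [-2.0, 1.0])
```

In the infeasible example the auxiliary LP still has an optimal solution; its optimal \(x_0\) is \(1/2 > 0\), which is how the reduction reports “no solution”.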

What about the other direction? Does it sound plausible that you can reduce LP to Ineq? That having the ability to solve systems of inequalities means you also have the ability to solve any linear program? It does, by the following clever trick!

Take any LP. We may as well assume it is in standard form, say

Maximize \( \mathbf{c} \cdot \mathbf{x} \)

subject to \(\left\{\begin{aligned} A \mathbf{x} & \le \mathbf{b} \\ \mathbf{x} & \ge 0 \\ \end{aligned}\right.\)

We need to find out if it is infeasible, unbounded or has an optimal solution, and we are assuming we have the ability to solve systems of linear inequalities. We can first check if the LP is feasible simply by checking if the system of inequalities given by the constraints has a solution. If not, we are done: the LP is infeasible.

So now assume the LP is feasible. Look at the dual:

Minimize \( \mathbf{b} \cdot \mathbf{y} \)

subject to \(\left\{\begin{aligned} A^\top \mathbf{y} & \ge \mathbf{c} \\ \mathbf{y} & \ge 0 \\ \end{aligned}\right.\)

We can use our powers of inequality solving to find out if the dual is feasible. If not, the primal is unbounded: we already know the primal is feasible, and a feasible LP with an infeasible dual must be unbounded, since if it had an optimal solution strong duality would produce a feasible dual solution.

So now assume both primal and dual are feasible. At this stage we know the primal must have an optimal solution, but we still need to find one. To do that we can form the following system of inequalities:

\[\begin{aligned} A \mathbf{x} & \le \mathbf{b} \\ \mathbf{x} & \ge 0 \\ A^\top \mathbf{y} & \ge \mathbf{c} \\ \mathbf{y} & \ge 0 \\ \mathbf{c} \cdot \mathbf{x} & \ge \mathbf{b} \cdot \mathbf{y} \end{aligned}\]

We put in the constraints of both the primal and the dual, and also the constraint that the objective function of the primal is greater than or equal to that of the dual. If this system of inequalities has a solution, then \(\mathbf{x}\) is feasible for the primal, \(\mathbf{y}\) is feasible for the dual and the objective functions agree: indeed, the system includes the inequality \(\mathbf{c} \cdot \mathbf{x} \ge \mathbf{b} \cdot \mathbf{y}\) and the opposite, \(\mathbf{c} \cdot \mathbf{x} \le \mathbf{b} \cdot \mathbf{y}\) is true by weak duality. Then, again by duality theory, \(\mathbf{x}\) is optimal for the primal.
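As a sanity check, here is the trick carried out numerically on a small LP of my own invention, assuming scipy is available; we solve the combined system as a pure feasibility problem by handing `linprog` a zero objective.

```python
# Sketch of the LP-to-Ineq trick (assuming scipy is available) on the LP
#   maximize 3*x1 + 2*x2  s.t.  x1 + x2 <= 4,  x1 <= 2,  x >= 0,
# whose optimal value is 10 at x = (2, 2).  We feed the combined system
#   A x <= b,  A^T y >= c,  c.x >= b.y,  x >= 0,  y >= 0
# to linprog with a zero objective, i.e. as a feasibility problem.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.0])
c = np.array([3.0, 2.0])
m, n = A.shape

# Stack the variables as (x, y) and write every constraint as "<=".
A_ub = np.vstack([
    np.hstack([A, np.zeros((m, m))]),     # A x <= b
    np.hstack([np.zeros((n, n)), -A.T]),  # -A^T y <= -c, i.e. A^T y >= c
    np.hstack([-c, b])[None, :],          # b.y - c.x <= 0, i.e. c.x >= b.y
])
b_ub = np.concatenate([b, -c, [0.0]])

res = linprog(c=np.zeros(n + m), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n + m))
x, y = res.x[:n], res.x[n:]
# Any solution of this system has x optimal for the primal and y for the dual.
```

Note that the objective passed to the solver is identically zero: the optimality of \(\mathbf{x}\) comes entirely from the inequalities, exactly as the argument above explains.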

Notice that in this reduction we reduced solving an LP to solving three systems of inequalities. We can easily save one: just try the last system of inequalities first. That will tell us whether the LP has an optimal solution. If not, try the constraints of the LP: that will tell us whether the LP was infeasible or unbounded.

Suppose we had defined “solution of an LP” to mean only “find an optimal solution if there is one, otherwise just say there is no optimal solution”; call that modified problem LPopt. Then we would only need the last system of inequalities, the one with both \(\mathbf{x}\) and \(\mathbf{y}\) in it.

LP and Games

We’ve already seen how to use linear programming to find optimal strategies of games, that is we saw a reduction of Games to LP. Can we go the other way? Almost, but again the way to do it is a clever trick.

Say you have an LP, that we may as well assume is in standard form:

Maximize \( \mathbf{c} \cdot \mathbf{x} \)

subject to \(\left\{\begin{aligned} A \mathbf{x} & \le \mathbf{b} \\ \mathbf{x} & \ge 0 \\ \end{aligned}\right.\)

We’ll build a payoff matrix out of it as follows: \[M = \begin{pmatrix} 0 & -\mathbf{c}^{\top} & \mathbf{b}^{\top} \\ \mathbf{c} & 0 & -A^{\top} \\ -\mathbf{b} & A & 0 \\ \end{pmatrix}\]

Notice that \(M = -M^{\top}\) so that \(v(M)=0\). We won’t quite be able to interpret an arbitrary optimal strategy for \(M\) as a solution for the LP we started with, so this technically won’t reduce LP to Games. Here’s what we can prove:

The LP above has an optimal solution if and only if there is an optimal strategy for \(M\) that uses the first move with positive probability. If \(\mathbf{u} = \begin{pmatrix} s \\ \mathbf{x}' \\ \mathbf{y}' \end{pmatrix}\) is such an optimal strategy with \(s>0\), then \(\mathbf{x} := \mathbf{x}'/s\) is an optimal primal solution and \(\mathbf{y} := \mathbf{y}'/s\) is an optimal dual solution.

Before we get to the proof: why does this talk about optimal strategy without specifying the player? Well, since \(M = -M^{\top}\), we have \((\mathbf{u}^{\top} M)^{\top} = M^{\top} \mathbf{u} = - M\mathbf{u}\), so the minimum entry of \(\mathbf{u}^{\top} M\) is \(v(M)=0\) if and only if the maximum entry of \(M\mathbf{u}\) is \(0\). This shows that \(\mathbf{u}\) is optimal for player one if and only if it is optimal for player two.

Assume first that \(\mathbf{u} = \begin{pmatrix} s \\ \mathbf{x}' \\ \mathbf{y}' \end{pmatrix}\) is an optimal strategy. Then we know that the maximum entry of \(M \mathbf{u} = \begin{pmatrix} - \mathbf{c} \cdot \mathbf{x}' + \mathbf{b} \cdot \mathbf{y}' \\ s \mathbf{c} - A^\top \mathbf{y}' \\ - s \mathbf{b} + A \mathbf{x}' \end{pmatrix}\) is \(v(M)=0\). In particular all of the entries of that vector must be \(\le 0\). If \(s>0\), we can divide all of those inequalities by \(s\) to get that \(\mathbf{x} := \mathbf{x}'/s\) is a feasible primal solution, that \(\mathbf{y} := \mathbf{y}'/s\) is a feasible dual solution and that \(\mathbf{c} \cdot \mathbf{x} \ge \mathbf{b} \cdot \mathbf{y}\), which together with weak duality implies both are optimal.

Conversely, if the primal has an optimal solution \(\mathbf{x}\) and the dual has an optimal solution \(\mathbf{y}\) we can set \(s := 1/\left(\sum_j x_j + \sum_i y_i + 1\right)\), \(\mathbf{x}' := s \mathbf{x}\), \(\mathbf{y}' := s \mathbf{y}\) and \(\mathbf{u} := \begin{pmatrix} s \\ \mathbf{x}' \\ \mathbf{y}' \end{pmatrix} \). It is easy to check that \(\mathbf{u}\) is stochastic and, reversing the work above, that \(M \mathbf{u} \le 0\) and that the first entry is \(- \mathbf{c} \cdot \mathbf{x}' + \mathbf{b} \cdot \mathbf{y}' = s (- \mathbf{c} \cdot \mathbf{x} + \mathbf{b} \cdot \mathbf{y})\), which is \(0\) by strong duality. So the maximum entry of \(M\mathbf{u}\) is \(v(M)=0\) and thus \(\mathbf{u}\) is an optimal strategy in which \(s > 0\).
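To make the construction concrete, here is a numeric check on a small LP of my own (numpy only); the primal and dual optima \(\mathbf{x} = (2,2)\) and \(\mathbf{y} = (2,1)\) for this example were computed by hand.

```python
# Numeric check of the game construction on a made-up LP:
#   maximize 3*x1 + 2*x2  s.t.  x1 + x2 <= 4,  x1 <= 2,  x >= 0,
# with optimal primal solution x = (2, 2) and dual solution y = (2, 1).
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.0])
c = np.array([3.0, 2.0])
x = np.array([2.0, 2.0])   # optimal primal solution (by hand)
y = np.array([2.0, 1.0])   # optimal dual solution (by hand)
n, m = len(c), len(b)

# The payoff matrix M = [[0, -c^T, b^T], [c, 0, -A^T], [-b, A, 0]].
M = np.block([
    [np.zeros((1, 1)), -c[None, :],      b[None, :]],
    [c[:, None],       np.zeros((n, n)), -A.T],
    [-b[:, None],      A,                np.zeros((m, m))],
])
assert np.allclose(M, -M.T)         # skew-symmetric, so v(M) = 0

# Build the strategy u = (s, s*x, s*y) with s = 1/(sum x + sum y + 1).
s = 1.0 / (x.sum() + y.sum() + 1.0)
u = np.concatenate([[s], s * x, s * y])
assert np.isclose(u.sum(), 1.0)     # u is stochastic
assert np.all(M @ u <= 1e-9)        # max entry of M u is v(M) = 0
```

Dividing the middle block of `u` by its first entry recovers the optimal primal solution, just as the statement above promises.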

So we didn’t quite reduce LP to Games for two reasons:

  • We can only tell if the LP has an optimal solution or not. In case it doesn’t, this game doesn’t seem to tell us whether the LP is infeasible or unbounded.
  • We weren’t using just the ability of finding optimal strategies for matrix games, but rather the souped up ability to tell whether or not some optimal strategy uses the first move and if so to find one.

So, to be more precise we reduced LPopt to Games’, where:

  • LPopt is the problem of determining whether an LP has an optimal solution and finding one if it does, and
  • Games’ is the problem of determining whether or not a game has an optimal strategy for player one that assigns the first move a non-zero probability and finding such an optimal strategy if it does.

¹ Exercise: Why was that again?

Omar Antolín Camarena