Linear Programming Algorithms
6.854 Notes #14

David Karger

1 LP Solving Algorithms

So far we have learned a lot about the structure of LPs. But how do we actually solve them? Can we do it faster than by just enumerating all basic feasible solutions (BFS)?
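
To see why plain enumeration is hopeless: for a standard-form LP \(\min\{cx : Ax = b,\ x \ge 0\}\) with \(A\) an \(m \times n\) matrix, every BFS is determined by a choice of \(m\) basic columns, so there are at most
\[
\binom{n}{m} \;\le\; \left(\frac{en}{m}\right)^{m}
\]
of them, which is exponential in general.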

2 Simplex

3 Interlude: Vertices in Standard Form/Bases

Summary: \(x\) is a vertex of \(P = \{x : Ax = b,\ x \ge 0\}\) if, for some basis \(B\) (a set of \(m\) column indices with \(A_B\) nonsingular), \(x_B = A_B^{-1}b \ge 0\) and \(x_j = 0\) for all \(j \notin B\).
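
As a small illustration of this correspondence (a sketch, not code from the notes; the names are mine), the candidate BFS determined by a basis can be computed directly:

```python
import numpy as np

def bfs_from_basis(A, b, basis):
    """Return the candidate BFS for a basis: x_B = A_B^{-1} b, zero elsewhere.

    `basis` is a list of m column indices with A_B nonsingular.  The point is a
    genuine (feasible) BFS only if x_B >= 0.  Illustrative sketch only.
    """
    m, n = A.shape
    x = np.zeros(n)
    x[basis] = np.linalg.solve(A[:, basis], b)   # solve A_B x_B = b
    return x

# Example: {x : x_1 + x_2 = 1, x >= 0} has bases {0} and {1}.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(bfs_from_basis(A, b, [0]))   # [1. 0.], i.e. the vertex (1, 0)
```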

4 Back to the Simplex Method
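
To make one step of the method concrete, here is a minimal sketch of a single revised-simplex pivot on a standard-form LP. It follows the usual textbook conventions rather than the exact procedure from lecture, and it ignores degeneracy, cycling, and Phase I.

```python
import numpy as np

def simplex_pivot(A, b, c, basis, eps=1e-9):
    """One pivot of the revised simplex method on  min c.x  s.t. Ax = b, x >= 0.

    `basis` is a list of m column indices with A_B nonsingular.  Returns
    ("optimal", basis), ("unbounded", None), or ("pivoted", new_basis).
    Illustrative sketch only.
    """
    m, n = A.shape
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)                # values of the basic variables
    y = np.linalg.solve(B.T, c[basis])         # simplex multipliers (dual estimate)
    reduced = c - A.T @ y                      # reduced costs (zero on the basis)
    entering = next((j for j in range(n)
                     if j not in basis and reduced[j] < -eps), None)
    if entering is None:
        return "optimal", basis                # no improving direction: BFS is optimal
    d = np.linalg.solve(B, A[:, entering])     # how x_B shrinks as the entering var grows
    if np.all(d <= eps):
        return "unbounded", None               # entering variable can grow forever
    ratios = [(x_B[i] / d[i], i) for i in range(m) if d[i] > eps]
    _, leave_pos = min(ratios)                 # ratio test: first basic variable to hit 0
    new_basis = list(basis)
    new_basis[leave_pos] = entering
    return "pivoted", new_basis
```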

4.1 Potential Problems

4.2 Simplex and Duality

4.3 Polynomial Time Bounds

We know a lot about structure, and we have seen how to verify optimality in polynomial time; the certificate is recalled just below. Now we turn to the question: can we actually solve an LP in polynomial time?
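
Such a certificate is provided by duality: if \(x\) is feasible for \(\min\{cx : Ax = b,\ x \ge 0\}\) and \(y\) is feasible for \(\max\{yb : yA \le c\}\), then weak duality gives \(yb = y(Ax) = (yA)x \le cx\). So exhibiting a pair with
\[
cx = yb
\]
proves both optimal, and checking feasibility and this equality clearly takes only polynomial time.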

Yes, sort of (Khachiyan 1979):

5 Ellipsoid Method

Basic idea: reduce optimization to feasibility checking. (Recall how this works with a polytope \(P\).)
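
One standard way to carry out this reduction is binary search on the objective value: repeatedly ask whether \(P\) intersected with the extra constraint \(cx \le t\) is nonempty. A minimal sketch, where `feasible` is a hypothetical feasibility oracle and the bracketing bounds and tolerance are assumptions:

```python
def minimize_via_feasibility(feasible, c, lo, hi, tol=1e-6):
    """Reduce  min c.x over P  to feasibility checks by binary search.

    `feasible(extra)` is a hypothetical oracle reporting whether P, together
    with the extra linear constraints in `extra`, is nonempty.  `lo`/`hi`
    bracket the optimal value.  Illustrative sketch only.
    """
    while hi - lo > tol:
        t = (lo + hi) / 2
        if feasible([("<=", c, t)]):   # is there a point of P with c.x <= t ?
            hi = t                     # yes: the optimum is at most t
        else:
            lo = t                     # no: the optimum exceeds t
    return hi
```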

5.1 Feasibility vs. Separation vs. Optimization

Notice that in the ellipsoid method, we are only using one (violated) constraint at a time.
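
Concretely, all an ellipsoid iteration needs from a separation oracle is the normal \(a\) of one constraint violated by the current center. A minimal sketch of the standard central-cut update (the formulas are standard; the variable names are mine):

```python
import numpy as np

def ellipsoid_step(x, B, a):
    """One central-cut ellipsoid update.

    Current ellipsoid: E = { z : (z - x)^T B^{-1} (z - x) <= 1 }.  The separation
    oracle reports that the feasible region lies in the half-space
    { z : a.(z - x) <= 0 }.  Returns the center and matrix of the smallest
    ellipsoid containing that half of E.  Illustrative sketch; numerics ignored.
    """
    n = len(x)
    Ba = B @ a
    b_hat = Ba / np.sqrt(a @ Ba)                     # normalize a in the ellipsoid metric
    x_new = x - b_hat / (n + 1)                      # shift the center away from the cut
    B_new = (n**2 / (n**2 - 1)) * (B - (2 / (n + 1)) * np.outer(b_hat, b_hat))
    return x_new, B_new
```

Each such step shrinks the ellipsoid's volume by a factor of \(e^{-1/(2(n+1))}\) or better, which is what yields the polynomial bound on the number of iterations.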

Needing only a separation oracle, rather than an explicit list of constraints, is very useful in many applications, e.g. network design.

One can also show that optimization implies separation, so the two problems are polynomially equivalent.

6 Interior Point

The ellipsoid method has problems in practice (its \(O(n^6)\) running time, for one), so people developed a different approach that has been extremely successful.

What goes wrong with simplex?

Primal-Dual Method

6.1 Potential Reduction

Potential function:
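
One standard choice, given here only as an illustration (the exact potential used in lecture may differ), is
\[
\Phi(x, s) \;=\; q \ln\!\big(x^{\mathsf T} s\big) \;-\; \sum_{j=1}^{n} \ln\!\big(x_j s_j\big), \qquad q > n,
\]
for a primal-dual feasible pair with \(x, s > 0\). Since \(\sum_j \ln(x_j s_j) \le n \ln(x^{\mathsf T} s / n)\), we have \(\Phi \ge (q - n)\ln(x^{\mathsf T} s) + n \ln n\), so driving \(\Phi \to -\infty\) forces the duality gap \(x^{\mathsf T} s\) to \(0\), while the logarithmic terms penalize iterates that approach the boundary.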

Early methods used gradient descent on the potential function.

A more recent, arguably cleaner approach is the central path.

Characterizing the central path
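
One standard characterization (modulo notation): for each \(\mu > 0\), the central-path point \((x(\mu), y(\mu), s(\mu))\) is the unique solution of
\[
Ax = b,\; x > 0, \qquad A^{\mathsf T} y + s = c,\; s > 0, \qquad x_j s_j = \mu \ \text{ for all } j.
\]
Equivalently, \(x(\mu)\) minimizes the log-barrier objective \(cx - \mu \sum_j \ln x_j\) over \(\{x : Ax = b,\ x > 0\}\). On the path the duality gap is \(x^{\mathsf T} s = n\mu\), so as \(\mu \to 0\) the path converges to an optimal solution.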

How do we discretize?

Termination

What is the improvement direction?

Can we solve this?

How far?
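
Tying the last few questions together, here is a minimal sketch of one discretized step along the central path, under standard primal-dual path-following conventions (not necessarily the lecture's exact scheme): shrink the target \(\mu\) by a factor \(\sigma\), compute the improvement direction by solving the linearized central-path (Newton) system, and step as far as possible while keeping \(x, s > 0\).

```python
import numpy as np

def path_following_step(A, b, c, x, y, s, sigma=0.5):
    """One damped Newton step toward the central-path target mu = sigma * (x.s)/n.

    Standard-form LP  min c.x  s.t. Ax = b, x >= 0, with dual variables (y, s);
    `x` and `s` must be strictly positive.  Illustrative sketch only.
    """
    m, n = A.shape
    mu = sigma * (x @ s) / n                     # shrink the centering target

    # Residuals of the central-path conditions Ax = b, A^T y + s = c, x_j s_j = mu.
    r_p = b - A @ x
    r_d = c - A.T @ y - s
    r_c = mu * np.ones(n) - x * s

    # Improvement direction: solve the linearized (Newton) system for (dx, dy, ds).
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)       ],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)      ],
    ])
    d = np.linalg.solve(K, np.concatenate([r_p, r_d, r_c]))
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]

    # How far?  The largest (damped) step that keeps x and s strictly positive.
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < -1e-12
        if neg.any():
            alpha = min(alpha, 0.9 * float(np.min(-v[neg] / dv[neg])))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds
```

Iterating this until the duality gap \(x^{\mathsf T} s\) is small enough gives the termination criterion; standard analyses then round the final iterate to an exact optimal vertex.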

Interior point methods have recently been revolutionizing max-flow algorithms.