Last Time
- Analyzed augmenting flow via shortest augmenting paths.
- Each search for such a path takes $O(m)$ time (via BFS).
- Intuition: Killing/augmenting by shortest augmenting path should increase source to sink distance in the residual graph.
- Proved that the distance from the source can only increase.
$\Rightarrow$ There can be at most $\frac{mn}{2}$ such augmentations.
- Led to a strongly polynomial bound of $O(m^2n)$.
- But: Aren't we a bit wasteful here?
$\Rightarrow$ Each search for a path (i.e., BFS) gives us the whole shortest path tree but we use only one path.
$\Rightarrow$ Can we use this information to kill/augment by more than one shortest augmenting path?
- Yes! This is made precise by the blocking flow primitive.
Blocking Flows
Extension of shortest augmenting path.
Dinic's algorithm:
- In each iteration, consider the residual graph layered by running a BFS from the source. That is, each layer corresponds to a different distance from the source.
- Define:
- Admissible edges: Edges in the residual graph that point away from the source, i.e., are a part of some shortest path from the source. These edges “hop” exactly one BFS layer (all the other edges either stay in the same layer or go back).
- Admissible graph: The graph consisting only of admissible edges.
- Admissible path: Path in the admissible graph, i.e., path consisting solely of admissible edges.
- Goal: Find a blocking flow: a flow in the admissible graph, i.e., a flow supported only on admissible edges, that saturates an edge on every admissible path. (Note: For unit-capacity graphs (only!), this corresponds to finding a maximal collection of edge-disjoint admissible paths.) A code sketch of the layering and the admissible graph appears a few lines below.
- Key observation: Augmenting the flow using an admissible path does not create new admissible edges.
$\Rightarrow$ When we saturate an edge on an admissible path, we can just discard it, as the reverse arc is never admissible.
$\Rightarrow$ This is very much not the case for maximum flows!
- So, blocking flow $\neq$ maximum flow in the admissible graph.
(For unit capacity graphs, this corresponds to the difference between the collection of edge-disjoint admissible paths being maximal and maximum.)
$\Rightarrow$ Computing a blocking flow is much easier.
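To make the layering and admissibility definitions above concrete, here is a minimal sketch (assuming a simple adjacency-map representation of the residual graph, with every vertex appearing as a key; neither the representation nor the function names come from the lecture) of computing the BFS layers and extracting the admissible graph:

```python
from collections import deque

def bfs_layers(residual, s):
    """residual: dict u -> {v: residual capacity}; every vertex is assumed to
    appear as a key (possibly with an empty map). Returns dist[u] = BFS layer of u."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v, cap in residual[u].items():
            if cap > 0 and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def admissible_graph(residual, dist):
    """Keep only edges with positive residual capacity that hop exactly one layer."""
    adm = {u: {} for u in residual}
    for u, nbrs in residual.items():
        if u not in dist:               # unreachable vertices have no admissible edges
            continue
        for v, cap in nbrs.items():
            if cap > 0 and dist.get(v, -1) == dist[u] + 1:
                adm[u][v] = cap
    return adm
```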
- What is the benefit of finding a blocking flow?
- Claim: After augmenting our flow with a blocking flow, the source to sink distance in the residual graph increases by at least $1$.
- Proof:
- Let $f$ (resp. $f'$) be the flow before (resp. after) augmentation via blocking flow.
- Every edge on a shortest source-sink path in the residual graph $G_{f}$ had to “hop” (exactly) one BFS layer. That is, it had to be admissible.
- Augmentation does not create in $G_{f'}$ any edges that would be admissible in $G_{f}$.
$\Rightarrow$ No new arcs in $G_{f'}$ would “hop” a BFS layer in $G_{f}$. (Note that we are still looking at the BFS layers in $G_f$ here.)
- As the blocking flow saturated (killed) an edge on every admissible path in $G_{f}$, every source-sink path in $G_{f'}$ has to use at least one edge that would not be admissible in $G_{f}$.
$\Rightarrow$ That edge would not hop a BFS layer in $G_{f}$.
- But, every edge in $G_{f'}$ (as well as $G_f$) can hop at most one BFS layer in $G_f$.
$\Rightarrow$ No way for a source-sink path in $G_{f'}$ to make up for not hopping a $G_{f}$ BFS layer (at least) once.
- All such paths had to have length larger than the source-to-sink distance in $G_f$.
$\Rightarrow$ The source to sink distance in the new graph had to increase by at least one.
- So, after at most $n$ blocking flow computations we will have a max flow. (Recall: if the source and sink are at distance $\geq n$ in the residual graph, the current flow has to be optimal.)
- Key remaining question: How to find a blocking flow?
Blocking Flows in Unit-capacity Graphs
- Let's consider first the case of unit capacities.
- Use a DFS/greedy approach (“greedy” is usually the best strategy when the problem asks for a “maximal” -- as opposed to “maximum” -- collection):
- Start a DFS from $s$.
- In each step, advance along yet unexplored outgoing admissible edge.
- If you reach $t$, retrace the path back to $s$ and block that path, by adding it to the blocking flow.
$\Rightarrow$ You can discard all the edges on this path now.
- If you reach a vertex $v\neq t$ with no outgoing admissible edges, retreat along the edge you arrived through and discard that edge.
- Once there are no admissible edges leaving $s$, the blocking flow construction is complete. (A code sketch of this procedure appears after the running time analysis below.)
- Seems much like just searching for an augmenting path but it is not.
- Key difference: We can save information about which parts of the graph we have already explored. Crucially, since we never add new admissible arcs, once a vertex is a dead end, it stays that way. This is why it is fine for us to discard an edge upon retreat.
- What is the running time of this procedure?
- We have three types of operations: advance, retreat, and block.
- There are at most $m$ retreats, as we always discard an edge when we retreat.
$\Rightarrow$ Total cost of retreats: $O(m)$.
- Each advance can be charged to the retreat or block that later traverses that edge back (and discards it).
- Each blocking of a path $P$ takes $O(|P|)$ time, but also discards $|P|$ edges.
$\Rightarrow$ $O(1)$ per edge, amortized.
$\Rightarrow$ Total time for blocks: $O(m)$.
- So, total cost of blocking flow is $O(m)$.
$\Rightarrow$ The resulting maximum flow algorithm runs in $O(mn)$.
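A minimal sketch of this advance/retreat/block procedure for the unit-capacity case, assuming the admissible graph is given as adjacency lists (an assumed representation, not the lecture's notation); edges are “discarded” implicitly by advancing a per-vertex pointer, so each edge is examined only a constant number of times:

```python
from collections import defaultdict

def unit_blocking_flow(adm, s, t):
    """adm: dict mapping a vertex to the list of its admissible out-neighbors
    (unit capacities). Returns the blocking flow as a list of blocked s-t paths."""
    ptr = defaultdict(int)              # next unexplored admissible edge out of each vertex
    blocked_paths = []
    path = [s]                          # current DFS path, kept as a stack of vertices
    while path:
        u = path[-1]
        out = adm.get(u, [])
        if u == t:                      # block: record the path; its edges are used up
            blocked_paths.append(list(path))
            path = [s]                  # restart the search from the source
        elif ptr[u] < len(out):         # advance along the next unexplored edge
            v = out[ptr[u]]
            ptr[u] += 1                 # this edge is now consumed ("discarded")
            path.append(v)
        else:                           # retreat: u is a dead end, and stays one
            path.pop()                  # the edge we came through was consumed on advance
    return blocked_paths
```

The returned paths together form the blocking flow, with each path carrying one unit of flow.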
Wait a minute: This does not sound too impressive. We already knew how to do that! (Our first algorithm ran in time $O(mnU)$.) Why bother?
- This bound holds even if we allow parallel edges. (But this is a weak motivation.)
- Much more important: Can actually get significantly nicer bounds, both for unit-capacity and general capacities.
- Also, much better in practice (in large part because of the above).
How to get a better bound for unit-capacities?
- Suppose we already did $k$ blocking flows.
- Consider the maximum flow in the residual graph.
- If we decompose it into paths, the number of these paths will be exactly the value of that remaining flow (as capacities are unit).
- After $k$ blocking flows, the source to sink distance in the residual graph $\geq k$.
$\Rightarrow$ Each flow path has length $\geq k$.
- Key observation: These flow paths are edge-disjoint, so their total volume (number of edges) is at most $m$.
$\Rightarrow$ The number of flow paths $\leq \frac{m}{k}$!
- Each blocking flow increases the value of the flow by at least one. (As there always is at least one admissible path.)
$\Rightarrow$ At most $\frac{m}{k}$ additional blocking flows (or even just augmenting path computations) suffice!
- Total time: $O(km+m^2/k)=O(m^{3/2})$, if we set $k=\sqrt{m}$. (Note that the algorithm itself never needs to know $k$; it appears only in the analysis.)
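- Spelling out the balancing: the first $k$ blocking flow computations cost $k\cdot O(m)$, and the remaining at most $\frac{m}{k}$ augmentations cost $\frac{m}{k}\cdot O(m)$, for a total of $O(km+\frac{m^2}{k})$; the two terms balance at $k=\sqrt{m}$, giving $O(m^{3/2})$.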
- On the homework: Show how similar argument gives a bound of $O(mn^{2/3})$.
- Also, can get an $O(m\sqrt{n})$ bound for bipartite matchings. (Note this is always better than $O(m^{3/2})$.)
- Recall that we could reduce this problem to a unit-capacity flow instance.
- Key observation: The initial and residual graphs are unit graphs, i.e., every vertex (other than the source and sink) has in-degree at most $1$ or out-degree at most $1$.
- After we did $k$ blocking flows, decompose the remaining flow into flow paths, as above.
- Note that these flow paths are now vertex-disjoint (apart from sharing the source and sink), instead of only edge-disjoint.
$\Rightarrow$ There can be at most $\frac{n}{k}$ (instead of $\frac{m}{k}$) flow paths, and after at most $\frac{n}{k}$ additional blocking flow computations we are done.
$\Rightarrow$ Total time: $O(km+\frac{mn}{k})=O(m\sqrt{n})$, if we set $k=\sqrt{n}$.
Blocking Flows in General Graphs
What breaks when we try the above analysis in general graphs?
- The basic idea of advance/retreat/block still valid.
- Every advance is still paid for by costs of corresponding retreat or block.
- Also, still $O(m)$ time for all retreats.
- Key problem: Augmentation might saturate only one (bottlenecking) edge on the path.
$\Rightarrow$ Each block costs $O(n)$ now, and we must charge this whole $O(n)$ work to that single saturated edge.
- A blocking flow computation thus takes $O(mn)$ time now. (A code sketch of this general-capacity variant appears at the end of this section.)
- Gives an $O(mn^2)$ overall time maximum flow.
- Still better than shortest augmenting path algorithm!
- Intuitively: Using blocking flows enables us to “change” $m$ into $n$ because of the resulting more efficient search for augmenting paths.
- In fact, we can make this intuition even stronger:
- Each block corresponds to finding a new augmenting path.
$\Rightarrow$ Each block increases the value of our flow by at least $1$.
- So, if value of max flow is $|f^*|$, the total cost of blocks is $O(n|f^*|)$.
- In other words, blocking flow approach takes $O(n)$ time “per unit of flow routed”, while the basic/naive augmenting path approach took $O(m)$ time. (This is true even if capacities are not unit.)
- Thus, the running time of blocking flow algorithm in general graphs is actually $O(mn+n|f^*|)=O((m+|f^*|)n)$. (Compare to the $O(m|f^*|)$ running time of the naive augmenting path approach.)
- Note: Retreats (and the advances charged to them) are accounted per blocking flow computation, giving the $O(mn)$ term, while blocks (and their advances) are amortized over all the blocking flow computations, giving the $O(n|f^*|)$ term.
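Here is a minimal sketch of the general-capacity variant of the advance/retreat/block procedure (assuming the acyclic admissible graph is given as a nested capacity map, which is not the lecture's notation); note how a block only removes the saturated edges on the path, which is exactly why a single block may cost $O(n)$ while discarding just one edge:

```python
def blocking_flow(adm, s, t):
    """adm: dict u -> dict v -> remaining admissible capacity (mutated in place);
    assumed acyclic, with every vertex appearing as a key.
    Returns the blocking flow as a dict (u, v) -> amount pushed."""
    flow = {}
    path = [s]                               # current DFS path, as a stack of vertices
    while path:
        u = path[-1]
        if u == t:
            # block: push the bottleneck capacity along the path
            edges = list(zip(path, path[1:]))
            c = min(adm[a][b] for a, b in edges)
            for a, b in edges:
                flow[(a, b)] = flow.get((a, b), 0) + c
                adm[a][b] -= c
                if adm[a][b] == 0:           # only saturated edges get discarded
                    del adm[a][b]
            path = [s]                       # restart the search from the source
        elif adm.get(u):                     # advance along some admissible edge
            path.append(next(iter(adm[u])))
        else:                                # retreat: u is a dead end for good
            path.pop()
            if path:
                del adm[path[-1]][u]         # discard the edge we came through
    return flow
```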
Scaling Blocking Flows
- Blocking flows enable us to find the maximum flow in $O((m+|f^*|)n)$ time.
- This is good when $|f^*|$ is large, but not when it is too large.
- How to capitalize on that even further?
- Use scaling!
- As before, start with all edge capacities rounded down to zero, and then shift in the bits of the true capacities one at a time, starting with the most significant bit (the outer loop is sketched at the end of this section).
- Extend the current flow to the optimal one after each bit shift-in.
- Recall: After the bit shift-in (and doubling the current flow), finding the new optimum boils down to solving max flow in a graph with max flow value $|f^*|\leq m$.
- This takes $O((m+|f^*|)n)=O(mn)$ time.
- As there are at most $\log U$ bit shift-ins, the total running time of the algorithm is $O(mn\log U)$. Big improvement over what we had before! (Modulo not being strongly polynomial anymore.)
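A minimal sketch of this scaling outer loop; `max_flow_blocking` is a hypothetical subroutine (not defined here) that is assumed to extend the given feasible flow to a maximum flow for the given capacities using blocking flow phases:

```python
def scaling_max_flow(capacity, s, t, max_flow_blocking):
    """capacity: dict (u, v) -> nonnegative integer capacity.
    max_flow_blocking(scaled, flow, s, t) is an assumed subroutine that extends
    `flow` to a maximum flow under the capacities `scaled`."""
    U = max(capacity.values(), default=0)
    flow = {e: 0 for e in capacity}
    for i in reversed(range(U.bit_length())):               # most significant bit first
        scaled = {e: c >> i for e, c in capacity.items()}   # shift in the i-th bit
        flow = {e: 2 * f for e, f in flow.items()}          # doubled flow stays feasible
        # the remaining flow value here is at most m, so this call costs O(mn)
        flow = max_flow_blocking(scaled, flow, s, t)
    return flow
```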
Dynamic Trees/Link-Cut Trees
- Our best strongly polynomial time algorithm runs in $O(mn^2)$.
- The best weakly polynomial time algorithm is $O(mn\log U)$.
- Do we need to pay this extra factor of $n$ for being strongly polynomial?
- Maybe there is a better way to implement blocking flows in general graphs?
- Recall the key bottleneck: a block might take $\Theta(n)$ time but result in discarding only a single edge of the whole admissible path found.
- However, the pieces of this path that were not discarded remain useful.
- Can we somehow take advantage of this fact and maintain these pieces for efficient access later?
- Is there a data structure that can help us here?
- Yes! Dynamic trees (sometimes called link/cut trees) designed by (surprise, surprise) Sleator and Tarjan.
- Maintain a forest of arborescences, i.e., directed trees with all edges pointing towards a single root, made up of “blockable” (i.e., non-saturated) admissible edges.
- Initially all vertices are isolated in this forest, i.e., there are no edges.
- When we do our DFS search for a new source-sink path to be blocked, the “current” vertex is always a root of the arborescence that contains the source $s$.
- To advance on a new outgoing admissible edge:
- “Link” the current (root) vertex to the head of that edge.
$\Rightarrow$ This merges our current arborescence with the arborescence that contains the head; the new root is the root of the arborescence that contained the head.
- Jump to the root of the (new) current arborescence.
- To retreat from a vertex (root) that has no outgoing admissible edges:
- “Cut” the arborescence into separate pieces corresponding to removing its root.
$\Rightarrow$ Tail of each cut edge becomes the new root of corresponding (new) arborescence.
- To block a source-sink path once found:
- Find the bottleneck capacity $c$ on the arborescence path from source to sink. (Note that both the source and sink have to be in the same (current) arborescence at this point, as per our definition of advance operation.)
- Decrease all edge capacities on this path by $c$.
- Cut all the edges whose capacity drops to $0$ (and discard them).
- Overall, we use four operations: link, cut, bottleneck-path-capacity, and decrease-path-capacity. (The sketch below shows how the blocking flow search uses them.)
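Purely as an illustration of how the search uses these four operations, here is a sketch on top of a hypothetical link-cut tree interface `lct` (none of these method names come from the lecture): `find_root(v)`; `link(v, w, c)`, making `w` the parent of `v` via an edge of capacity `c`; `cut(v)`, removing the edge from `v` to its parent; `path_min(v)`, returning the minimum capacity on the `v`-to-root path together with the minimizing edge closest to the root; `path_add(v, d)`, adding `d` to every edge capacity on that path; and `children(v)`, extra bookkeeping listing `v`'s current tree children.

```python
def blocking_flow_with_lct(adm, s, t, lct):
    """adm: dict u -> dict v -> admissible residual capacity (edges are removed
    from it as they get discarded); the admissible graph is assumed acyclic."""
    while True:
        u = lct.find_root(s)                    # the "current" vertex of the search
        if u == t:                              # block
            c, _ = lct.path_min(s)              # bottleneck on the s-to-t tree path
            lct.path_add(s, -c)                 # push c units of flow along it
            while lct.find_root(s) != s:        # cut and discard saturated edges,
                cap, (a, b) = lct.path_min(s)   # starting from the one closest to t
                if cap > 0:
                    break
                lct.cut(a)
                del adm[a][b]
        elif adm.get(u):                        # advance
            v, cap = next(iter(adm[u].items()))
            lct.link(u, v, cap)
        elif u == s:                            # no admissible edge leaves s: done
            return
        else:                                   # retreat: remove the dead-end root u
            for child in lct.children(u):
                lct.cut(child)
                del adm[child][u]               # these edges can never become useful
```

Recovering the actual flow values (from the difference between the initial and remaining capacities, including for edges still linked in trees when the procedure ends) is omitted for brevity.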
- Can be implemented using (even bigger surprise) Splay trees.
- To get a high-level idea of the implementation, let us consider only the representation of an arborescence that is just a path:
- Maintain an ordered list of the path's vertices in a splay tree (though any balanced BST would do).
- Key trick: Instead of storing the current residual capacity of each edge, store only “deltas”. This way, the true value of the residual capacity of an edge is obtained by summing all the “deltas” on the path from that edge node to the root of the tree.
- This representation is easy to maintain under rotations.
- To subtract a capacity of $x$ on the segment of the path from some vertex $v$ to the root of the path, splay the successor of $v$ to the root of the splay tree and subtract $x$ from the “delta” stored at the root of its left subtree.
- Similarly, one can maintain at each node the minimum capacity within its subtree (to support the bottleneck-path-capacity operation). Both ideas are illustrated in the sketch below.
$\Rightarrow$ Gives an $O(mn\log n)$ time algorithm!
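The lecture's implementation is built on splay trees; purely to illustrate the “store deltas, keep a subtree minimum” bookkeeping on a single path, here is a sketch that stores the path's edge capacities in a plain recursive segment-tree-style structure (an assumed stand-in: it supports the two path operations on a fixed path but, unlike splay/link-cut trees, not link and cut):

```python
class PathCapacities:
    """Edge capacities of a fixed path, supporting range-add and range-min
    via per-node "delta" values and per-node subtree minima."""

    def __init__(self, caps):
        self.n = len(caps)
        self.delta = [0] * (4 * self.n)   # lazy "delta" applied to a whole subtree
        self.mn = [0] * (4 * self.n)      # minimum within the subtree (excluding ancestor deltas)
        self._build(1, 0, self.n - 1, caps)

    def _build(self, node, lo, hi, caps):
        if lo == hi:
            self.mn[node] = caps[lo]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, caps)
        self._build(2 * node + 1, mid + 1, hi, caps)
        self.mn[node] = min(self.mn[2 * node], self.mn[2 * node + 1])

    def add(self, l, r, x, node=1, lo=0, hi=None):
        """Add x to every edge capacity in positions l..r (inclusive)."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:           # fully covered: just record the delta
            self.delta[node] += x
            self.mn[node] += x
            return
        mid = (lo + hi) // 2
        self.add(l, r, x, 2 * node, lo, mid)
        self.add(l, r, x, 2 * node + 1, mid + 1, hi)
        self.mn[node] = min(self.mn[2 * node], self.mn[2 * node + 1]) + self.delta[node]

    def minimum(self, l, r, node=1, lo=0, hi=None):
        """Minimum capacity among positions l..r (the bottleneck query)."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return float("inf")
        if l <= lo and hi <= r:
            return self.mn[node]
        mid = (lo + hi) // 2
        return self.delta[node] + min(self.minimum(l, r, 2 * node, lo, mid),
                                      self.minimum(l, r, 2 * node + 1, mid + 1, hi))
```

Here `add(l, r, -c)` plays the role of decrease-path-capacity and `minimum(l, r)` of bottleneck-path-capacity on the stored path segment.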
Beyond the Flow Decomposition Barrier
- Dynamic trees enabled us to get an $O(mn\log n)$ algorithm, which is strongly polynomial and comparable to $O(mn\log U)$ we had before.
- In fact, by doing some tweaks, one can get $O(mn \log_{m/n} n)$ (which is $O(mn)$ for dense graphs).
- Recently, Orlin showed an $O(mn)$ algorithm.
- This meets a so-called “flow decomposition” barrier for maximum flow algorithms: any algorithm that implicitly computes a decomposition of the flow into flow paths has to run in $\Omega(mn)$ time, as this is the worst-case size of such decomposition. (Recall our proof of the flow decomposition lemma.)
- All the algorithms we discussed so far fall into this category.
- Except the dynamic tree-based implementation of the blocking flows. (In that implementation we do not keep track of individual flow paths; we just represent the flow via net edge flows. This is one of the aspects that makes this implementation so efficient.)
- In 1999, Goldberg and Rao combined this blocking flow primitive with a (very sophisticated) form of scaling to get an $O(\min\{m^{3/2}, mn^{2/3}\}\log U)$ algorithm, thus matching our bound for the unit-capacity case and breaking the above flow decomposition barrier!
- Is this the end of the story?
- No!
- Recent results:
- Madry '13, '16: an $O(m^{10/7}U^{1/7})$ time max flow algorithm. It breaks, for sparse graphs with small capacities, the $O(n^{3/2})$ barrier that all the blocking flow based analyses suffered from. In particular, it gives an $O(m^{10/7})$ time algorithm for the bipartite matching problem.
- Lee-Sidford '14: $O(m\sqrt{n}\log U)$ -- directly improving over Goldberg-Rao!
- The techniques are very different from what we have seen: they use continuous optimization and linear system solving instead of combinatorial techniques. (We might see a glimpse of that later in the class.)
- If one is interested only in a $(1+\varepsilon)$-approximation for undirected graphs, then something even faster can be obtained.
- Kelner-Lee-Sidford-Orecchia '14 and Peng '15: compute a $(1+\varepsilon)$-approximation in $O(m\varepsilon^{-2} \log^{O(1)}n)$, i.e., nearly-linear, time.
- Also, Karger-Levine '02: one can find an undirected maximum flow in (expected) $O(m+n|f^*|)$ time using clever flow path sampling!
- In practice: So-called push-relabel algorithms perform very well. (Much better than their theoretical analysis would suggest.) In particular, “empirically” maximum flow can be solved in close to linear time. This is one more reason why maximum flow is such an important problem in applied contexts.