1 Introduction

The minimum cut problem is one of the most fundamental and well-studied optimization problems in theoretical computer science. Given an undirected edge-weighted graph \(G=(V,E)\), the problem asks to find a subset of vertices S such that the total weight of all edges between S and \(V \setminus S\) is minimized. The vast literature on the minimum cut problem can be classified into three main approaches:

The maximum-flow approach

The minimum cut problem was originally solved by computing the maximum st-flow [6] for all pairs of vertices s and t. In 1961, Gomory and Hu [14] showed that only O(n) maximum st-flow computations are required, and in 1994 Hao and Orlin [15] showed that in fact a single maximum st-flow computation suffices. A maximum st-flow can be found in \(O(mn\log (n^2/m))\) time using the Goldberg-Tarjan algorithm [13], and the fastest algorithm to date takes O(mn) time [27, 37]. Faster maximum st-flow algorithms are known (see e.g. [5, 12, 30, 31, 41] and references therein) when the graph is unweighted or when the maximum edge weight W is not extremely large. However, even with a linear O(m) time maximum st-flow algorithm, computing the maximum flow for all pairs of vertices takes O(mn) time. Very recently [1], it was shown how to break this barrier and compute the maximum flow for all pairs of vertices in \(\tilde{O}(n^2)\) time.

The edge-contraction approach

An alternative method is edge contraction. If we can identify an edge that does not cross the minimum cut, then we can contract this edge without affecting the minimum cut. Nagamochi and Ibaraki [34, 35] showed how to deterministically find a contractible edge in O(m) time, leading to an \(O(mn+n^2\log n)\)-time minimum cut algorithm. Karger [19] showed that randomly choosing the edge to contract works well with high probability. In particular, Karger and Stein [25] showed that this leads to an improved \(O(n^2 \log ^3 n)\) Monte Carlo algorithm.
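To make the contraction approach concrete, here is a minimal Python sketch (all names ours) of a single run of randomized edge contraction on a weighted graph; it is illustrative only and does not reflect the refinements of [19, 25], in particular the recursive contractions of Karger and Stein.

```python
import random

def one_contraction_run(n, edges, rng=random.Random(0)):
    """One run of randomized edge contraction: repeatedly contract an
    edge chosen with probability proportional to its weight until two
    super-vertices remain, then report the total weight of the edges
    between them (an upper bound on the minimum cut, tight with
    probability at least 1 / binom(n, 2)).

    `edges` is a list of (u, v, w) triples over vertices 0..n-1.
    """
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    while components > 2:
        alive = [(u, v, w) for (u, v, w) in edges if find(u) != find(v)]
        u, v, _ = rng.choices(alive, weights=[w for _, _, w in alive])[0]
        parent[find(u)] = find(v)      # contract the chosen edge
        components -= 1
    return sum(w for u, v, w in edges if find(u) != find(v))

# Repeating the run Theta(n^2 log n) times and taking the minimum yields the
# minimum cut with high probability; the Karger-Stein recursion shares work
# between runs to obtain the O(n^2 log^3 n) bound quoted above.
```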

The tree-packing approach

In 1961, Nash-Williams [36] proved that, in unweighted graphs, any graph with minimum cut c contains a set of c/2 edge-disjoint spanning trees. Gabow’s algorithm [7] can be used to find such a tree-packing with c/2 trees in \(O(mc\log n)\) time. Karger [23] observed that the c edges of a minimum cut must be partitioned among these c/2 spanning trees, hence the minimum cut 1- or 2-respects some tree in the packing. That is, one of the trees is such that at most two of its edges cross the minimum cut (these edges are said to determine the cut). We can therefore find the minimum cut by examining each tree and finding the minimum cut that 1- or 2-respects it.

Several obstacles need to be overcome in order to translate this idea into an efficient minimum cut algorithm for weighted graphs:

  1. we need a weighted version of tree-packing,
  2. finding the packing (even in unweighted graphs) takes time proportional to c (and c may be large),
  3. checking all trees takes time proportional to c, and
  4. one needs an efficient algorithm that, given a spanning tree T of G, finds the minimum cut in G that 2-respects T (finding a minimum cut that 1-respects T can be easily done in \(O(m+n)\) time, see e.g. [23, Lemma 5.1]).

In a seminal work, Karger [23] overcame all four obstacles: First, he converts G into an unweighted graph by conceptually replacing an edge of weight w by w parallel edges. Then, he uses his random sampling from [19, 21] combined with Gabow’s algorithm [7] to reduce the packing time to \(O(m + n \log ^3 n)\) and the number of trees in the packing to \(O(\log n)\). Finally, he designs a deterministic \(O(m \log ^2 n)\) time algorithm that, given a spanning tree T of G, finds the minimum cut in G that 2-respects T. Together, this gives an \(O(m \log ^3 n)\) time randomized algorithm for minimum cut. Karger also showed how to improve the running time by a \(\log \log n\) factor, at the cost of an additive \(O(n\log ^{6}n)\) term, by observing that 1-respecting cuts can be found faster than 2-respecting cuts, and tweaking the parameters of the tree packing.

Karger’s \(O(m \log ^2 n)\) algorithm for the 2-respecting problem finds, for each edge \(e\in T\), the edge \(e'\in T\) that minimizes the cut determined by \(\{e,e'\}\). He used link-cut trees [39] to efficiently keep track of the sizes of cuts as the candidate edges e of T are processed in a certain order (a bough decomposition). The processing consists of \(O(\log n)\) iterations, and Karger guarantees that the number of dynamic tree operations is O(m) per iteration. Since each link-cut tree operation takes \(O(\log n)\) time, the total running time for solving the 2-respecting problem is \(O(m \log ^2 n)\).

In a very recent paper, Lovett and Sandlund [3] proposed a simplified version of Karger’s algorithm. Their algorithm has the same \(O(m\log ^3 n)\) running time as Karger’s. To solve the 2-respecting problem they use top trees [2] rather than link-cut trees, and use heavy path decomposition [16, 39] to guide the order in which edges of T are processed. A property of heavy path decomposition is that, for every edge \((u,v) \notin T\), the u-to-v path in T intersects \(O(\log n)\) paths of the decomposition. This property implies that the contribution of each non-tree edge to the cut changes \(O(\log n)\) times along the entire process. See also [10], who use the fact that the bough decomposition, implicitly used by Karger, also satisfies the above property. The idea of traversing a tree according to a heavy path decomposition, i.e., first processing the smaller subtree and then the larger one, has been used quite a few times in similar problems on trees; see e.g. [4]. While the ideas of Lovett and Sandlund [3] do not improve on Karger’s bound, their paper has drawn our attention to this problem.

1.1 Our Result and Techniques

In this paper, we present a deterministic \(O(m \log n)\) time algorithm that, given a spanning tree T of G, finds the minimum cut in G that 2-respects T. Using Karger’s framework, this implies an \(O(m \log ^2 n)\) time randomized algorithm for minimum cut in weighted graphs. We also show (see Section 4) that Karger’s \(\log \log n\) speedup approach can be used to obtain an \(O(m \log ^2 n / \log \log n + n\log ^{3+\delta } n)\) time algorithm for any \(\delta >0\).

Like prior algorithms for this problem, our algorithm finds, for each edge \(e\in T\), the edge \(e' \in T\) that minimizes the cut determined by \(\{e,e'\}\). The difficult case to handle is when e and \(e'\) are such that neither of them is an ancestor of the other. In Karger’s solution, handling each edge \(e=(u,v)\in T\) was done using amortized \(O(d\log n)\) operations on Sleator and Tarjan’s link-cut trees [39], where d is the number of non-tree edges incident to u. Since operations on link-cut trees require \(O(\log n)\) amortized time, the time to handle all edges is \(O(m\log ^2 n)\) (implying an \(O(m \log ^3 n)\) time algorithm for the minimum cut problem). As an open problem, Karger [23] asked (more than 20 years ago) whether the required link-cut tree operations can be done in constant amortized time per operation (implying an \(O(m \log ^2 n)\) time algorithm for the minimum cut problem). Karger even pointed out why one could perhaps achieve this: “We are not using the full power of dynamic trees (in particular, the tree we are operating on is static, and the sequence of operations is known in advance).” In this paper, we manage to achieve exactly that. We show how to order the link-cut tree operations so that they can be handled efficiently in batches. We call such a batch a bipartite problem (see Definition 4).

Perhaps a reason that the running time of Karger’s algorithm has not been improved in more than two decades is that it is not at all apparent that these bipartite problems can indeed be solved more efficiently. Coming up with an efficient solution to the bipartite problem requires a combination of several additional ideas. Like [3], we use heavy path decomposition, but in a different way. We develop a new decomposition of a tree that combines heavy path decomposition with biased divide and conquer, and use this decomposition in conjunction with a compact representation which we call topologically induced subtrees (see Definition 3). This compact representation turns out to be crucial not only for solving the bipartite problem, but also to the reduction from the original problem to a collection of bipartite problems.

1.2 Application to Unweighted Graphs

Karger’s method is inherently randomized and obtaining a deterministic (or at least Las Vegas) near-linear time algorithm for the minimum cut in undirected weighted graphs is an interesting problem that has been solved very recently [17] (the exact exponent in the polylog is not stated in [17]). For simple unweighted undirected graphs, a deterministic near-linear time algorithm was provided by Kawarabayashi and Thorup [26]. Later, Henzinger, Rao, and Wang [18] designed a faster \(O(m\log ^{2} n(\log \log n)^{2})\) time algorithm. Ghaffari, Nowicki and Thorup [11] introduced a new technique of random 2-out contractions and applied it to design an \(O(\min \{m+n\log ^{3}n,m\log n\})\) time randomized algorithm that finds a minimum cut with high probability. We stress that the faster algorithms of Henzinger et al. and Ghaffari et al. work only for simple unweighted graphs, that is, for edge connectivity. Interestingly, the latter uses Karger’s \(O(m\log ^{3}n)\) time algorithm as a black box, and by plugging in our faster method one immediately obtains an improved running time of \(O(\min \{m+n\log ^{2}n,m\log n\})\) for simple unweighted graphs.

1.3 Independent Work and Summary of State-of-the-art

Independently of our work, Mukhopadhyay and Nanongkai [33] came up with an \(O(m\log n + n\log ^4 n)\) time algorithm for finding a minimum 2-respecting cut (see also a simplified and faster \(O(m \log n + n\log ^2 n)\) implementation in [9]). While this improves Karger’s bound for sufficiently dense graphs, it does not improve it for all graphs, and is randomized. Our algorithm uses a different (deterministic and simple) approach. There are, however, benefits to the approach of [33]: it can be improved [9] to obtain an \(O(m\log n + n^{1+\epsilon })\) time min-cut algorithm in weighted graphs for any fixed \(\epsilon >0\), which is the state of the art for dense graphs with \(m = \Omega (n^{1+\epsilon })\). Furthermore, it can be used to obtain an algorithm that requires \(\tilde{O}(n)\) cut queries to compute the min-cut, and a streaming algorithm that requires \(\tilde{O}(n)\) space and \(O(\log n)\) passes to compute the min-cut.

The following table summarizes the state-of-the-art for randomized algorithms for minimum cut in undirected weighted graphs.

\(O(m \log ^2 n)\) (here)

\(O(m\log ^{2}n/\log \log n+n\log ^{3+\delta }n)\) (here)

\(O(m \log n + n^{1+\epsilon })\) ([9])

2 Preliminaries

2.1 Karger’s Algorithm

At a high level, Karger’s algorithm [23] has two main steps. The input is a weighted undirected graph G. The first step produces a set \(\{T_1,\dots , T_s\}\) of \(s=O(\log n)\) spanning trees such that, with high probability, the minimum cut of G 1- or 2-respects at least one of them. The second step deterministically computes for each \(T_i\) the minimum cut in G that 1-respects \(T_i\) and the minimum cut in G that 2-respects \(T_i\). The minimum cut in G is the minimum among the cuts found in the second step.

Karger shows [23, Theorem 4.1] that producing the trees \(\{T_1,\dots , T_s\}\) in the first step can be done in \(O(m+n\log ^3 n)\) time, and that finding the minimum 2-respecting cut for all the \(T_i\)’s in the second step can be done in \(O(m\log ^3 n)\) time. We will show that each of the steps can be implemented in \(O(m \log ^2 n)\) time. Showing this for the second step is the main result of the paper, and is presented in Section 3. For the first step, we essentially use Karger’s proof. Since the first step was not the bottleneck in Karger’s paper, proving a bound of \(O(m+n\log ^3 n)\) was sufficient for his purposes. Karger’s concluding remarks suggest that he knew that the first step could be implemented in \(O(m\log ^2 n)\) time. For completeness, we prove the \(O(m\log ^2 n)\) bound by slightly modifying Karger’s arguments and addressing a few issues that were not important in his proof. Readers proficient with Karger’s algorithm can safely skip the proof.

Definition 1

(2-respecting and 2-constraining) Given a spanning tree T and a cut \((S,\bar{S})\), we say that the cut 2-respects T and that T 2-constrains the cut if at most 2 edges of T cross the cut.

Definition 2

(weighted tree packing) Let G be an unweighted undirected graph. Let \(\mathcal T\) be a set of spanning trees of G, where each tree \(T\in \mathcal T\) is assigned a weight w(T). We say that the load of an edge e of G (w.r.t. \(\mathcal T\)) is \(\ell (e) = \sum _{T \in \mathcal T : e \in T} w(T)\). We say that \(\mathcal T\) is a weighted tree packing if no edge has load exceeding 1. The weight of the packing \(\mathcal T\) is \(\tau = \sum _{T \in \mathcal T} w(T)\).
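A hedged reading of Definition 2 in code (function and variable names are ours): given the edges of G and a family of weighted trees over those edges, compute the loads and check the packing condition.

```python
def check_tree_packing(num_edges, weighted_trees):
    """weighted_trees is a list of (edge_ids, weight) pairs, where edge_ids
    indexes into the edge list of G.  Returns the load of every edge,
    whether the family is a weighted tree packing (no load exceeds 1),
    and the packing weight tau."""
    load = [0.0] * num_edges
    for edge_ids, w in weighted_trees:
        for i in edge_ids:
            load[i] += w
    tau = sum(w for _, w in weighted_trees)
    return load, all(l <= 1 for l in load), tau
```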

Theorem 1

Given a weighted undirected graph G, in \(O(m \log ^2 n)\) time, we can construct a set \(\mathcal T\) of \(O(\log n)\) spanning trees such that, with high probability, the minimum cut 2-respects at least one of the trees in \(\mathcal T\).

Proof

Let \((S,\bar{S})\) be the partition of the vertices of G that forms a minimum cut, and let c be the weight of the minimum cut in G. We precompute a constant factor approximation of c in \(O(m \log ^2 n)\) time using Matula’s algorithm [32]. See Appendix A for details. We assume that all weights are integers, each fitting in a single memory word. Since edges with weight greater than c never cross the minimum cut, we contract all edges with weight greater than our estimate for c, so that now the total weight of edges of G is O(mc). For the sake of presentation we think of an unweighted graph \(\tilde{G}\), obtained from G by replacing an edge of weight w by w parallel edges. We stress that \(\tilde{G}\) is never actually constructed by the algorithm. Let \(\tilde{m}\) denote the number of edges of \(\tilde{G}\). By the argument above, \(\tilde{m} = O(mc)\). Let \(p=\Theta (\log n/c)\). Let H be the unweighted multigraph obtained by sampling \(\lceil p\tilde{m} \rceil \) edges of \(\tilde{G}\) (\(H=\tilde{G}\) if \(c<\log n\)). Clearly, the expected value of every cut in H is p times the value of the same cut in G. By [21, Lemma 5.1], choosing the appropriate constants in the sampling probability p guarantees that, with high probability, the value of every cut in H is at least 64/65 times its expected value, and no more than 66/65 times its expected value. It follows that, with high probability, (i) the minimum cut in H has value \(c' = \Theta (\log n)\), and that (ii) the value of the cut in H defined by \((S,\bar{S})\) is at most \(33c'/32\). The conceptual process for constructing H can be carried out by randomly selecting \(\lceil p\tilde{m} \rceil \) edges of G (with replacement) with probability proportional to their weights. Since the total edge weight of G is O(mc), each selection can be easily performed in \(O(\log (mc))\) time, assuming that our model allows generating a random word in constant time. Using a standard technique, each selection can actually be done in \(O(\log m)\) time, even under a weaker assumption that we can only generate a random bit in constant time.

Lemma 1

Given n non-negative integers \(x_{1},x_{2},\ldots ,x_{n}\), we can build in O(n) time a structure that supports the following operation in \(O(\log n)\) time, assuming that we can generate a random bit in constant time: choose \(i\in \{1,2,\ldots ,n\}\) with probability \(p_{i}=x_{i}/X\), where \(X=\sum _{i=1}^{n}x_{i}\), and then with high probability return i or fail otherwise.
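The following is a simplified Python sketch of the sampling primitive behind Lemma 1 (names ours). The lemma's actual proof, in Appendix B, works with individual random bits and allows a small failure probability; the sketch uses Python's integer sampling and never fails, but it has the same O(n) preprocessing and O(log n) query structure.

```python
import bisect
import random
from itertools import accumulate

class WeightedSampler:
    """O(n) preprocessing, O(log n) per sample: draw index i with
    probability x_i / X by binary searching a prefix-sum table."""

    def __init__(self, xs, rng=random.Random(0)):
        assert all(x >= 0 for x in xs) and sum(xs) > 0
        self.prefix = list(accumulate(xs))   # prefix[i] = x_0 + ... + x_i
        self.rng = rng

    def sample(self):
        r = self.rng.randrange(1, self.prefix[-1] + 1)   # uniform in {1, ..., X}
        return bisect.bisect_left(self.prefix, r)        # first i with prefix[i] >= r

# Example: WeightedSampler([3, 1, 6]).sample() returns index 2 with probability 0.6.
```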

Using the above lemma (see proof in Appendix B), the time to construct H is \(O(p\tilde{m} \log m) = O(m\log ^2 n)\). If any of the invocations fails, we simply terminate the whole algorithm. We emphasize that H is an unweighted multigraph with \(m' = O(m \log n)\) edges, and note that we can assume no edge of H has multiplicity greater than \(c'\) (we can just delete extra copies).

Next, we apply the following specialized instantiation [40, Theorem 2] of Young’s variant [44] of the Lagrangian packing technique of Plotkin, Shmoys, and Tardos [38]. It is shown [40, 44] that for an unweighted graph H with \(m'\) edges and minimum cut of size \(c'\), the following algorithm finds a weighted tree packing of weight \(3c'/8 \le \tau \le c'\).

(Algorithm figure not reproduced: the weighted tree-packing procedure of [40, 44]; a sketch follows.)
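As a stand-in for the missing figure, here is a hedged Python reconstruction based only on the surrounding description (names ours): each iteration computes a minimum spanning tree with respect to the current loads, gives it weight \(1/(96\ln m')\), and increases the loads of its edges by the same amount. The exact stopping rule and constants are those of [40, 44]; the version below simply stops before any load would exceed 1, and Kruskal's algorithm stands in for the O(m)-per-iteration MST computation discussed after the figure.

```python
import math

def greedy_tree_packing(n, edges):
    """Greedy fractional tree packing on an unweighted multigraph with
    vertices 0..n-1 and edge list `edges` (pairs (u, v)).  Returns a list
    of (tuple_of_edge_ids, weight) pairs whose loads never exceed 1."""
    m = len(edges)
    eps = 1.0 / (96 * math.log(max(m, 2)))
    load = [0.0] * m
    packing = []

    def min_spanning_tree():             # Kruskal's algorithm on the current loads
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        tree = []
        for i in sorted(range(m), key=lambda i: load[i]):
            u, v = edges[i]
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree.append(i)
        return tree

    while True:
        tree = min_spanning_tree()
        if len(tree) != n - 1:
            raise ValueError("input multigraph must be connected")
        if max(load[i] for i in tree) + eps > 1.0:   # adding eps would overload an edge
            return packing
        for i in tree:
            load[i] += eps
        packing.append((tuple(tree), eps))
```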

Karger [23, Lemma 2.3] proves that for a graph H with minimum cut \(c'\), any tree packing of weight at least \(3c'/8\), and any cut \((S,\bar{S})\) of H of value at most \(33c'/32\), at least a 1/8 fraction of the trees (by weight) 2-constrain the cut \((S,\bar{S})\). Thus, a tree chosen at random from the packing according to the weights 2-constrains the cut \((S,\bar{S})\) with probability at least 1/8. Choosing \(\Theta (\log n)\) trees guarantees that, with high probability, one of them 2-constrains the cut \((S,\bar{S})\), which is the minimum cut in G.

It remains to bound the running time of the packing algorithm. Observe that the algorithm increases the weight of some tree by \(1/(96\ln m')\) at each iteration. Since the weight \(\tau \) of the resulting packing is bounded by \(c'\), there are at most \(96 c' \ln m' = O(\log ^2 n)\) iterations. The bottleneck in each iteration is the time to compute a minimum spanning tree in H. We argue that this can be done in O(m) time even though \(m' = O(m\log n)\). To see this, first note that since H is a subgraph of G, H has at most m edges (ignoring multiplicities of parallel edges). Next note that it suffices to invoke the MST algorithm on a subgraph of H that includes just the edge with minimum load among any set of parallel edges. Since the algorithm always increases the load of edges by a fixed amount, the edge with minimum load in each set of parallel edges can be easily maintained in O(1) time per load increase: we keep each set of parallel edges in a cyclic ordered list and move to the next element of the list whenever the load of the current element is incremented (all copies start with load zero and are incremented by the same fixed amount, so this round-robin selection keeps the current element at minimum load). Hence, we can invoke the randomized linear-time MST algorithm [24] on a simple subgraph of H of size O(m). It follows that the running time of the packing algorithm, and hence of the entire procedure, is \(O(m\log ^2 n)\).\(\square \)

2.2 Link-cut Trees

In our algorithm we will repeatedly use a structure that maintains a rooted tree T with costs on the edges under the following operations:

  1. \(T.\textsc {add}(u,\Delta )\) adds \(\Delta \) to the cost of every edge on the path from u to the root,
  2. \(T.\textsc {path}(u)\) finds the minimum cost of an edge on the path from u to the root,
  3. \(T.\textsc {subtree}(u)\) finds the minimum cost of an edge in the subtree rooted at u.

All three operations can be supported with a link-cut tree [39] in amortized \(O(\log |T|)\) time. We note that we only require these three operations and do not actually use the link and cut functionality of link-cut trees. Other data structures might also be suitable. See, e.g., the use of top-trees in [3].
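Since only these three operations are needed, the following naive Python class (names ours) pins down their semantics; each operation runs in O(n) time here, whereas a link-cut tree [39] or top tree achieves \(O(\log n)\) amortized.

```python
class PathSubtreeTree:
    """Naive reference for the three operations used in the paper
    (add, path, subtree)."""

    def __init__(self, parent, cost):
        # parent[v] = parent of v (parent[root] = root);
        # cost[v]   = cost of the edge (parent[v], v), unused for the root.
        self.parent, self.cost = list(parent), list(cost)
        self.root = next(v for v in range(len(parent)) if parent[v] == v)
        self.children = [[] for _ in parent]
        for v, p in enumerate(parent):
            if v != self.root:
                self.children[p].append(v)

    def add(self, u, delta):        # T.add(u, delta)
        while u != self.root:
            self.cost[u] += delta
            u = self.parent[u]

    def path(self, u):              # T.path(u): min edge cost on the u-to-root path
        best = float('inf')
        while u != self.root:
            best = min(best, self.cost[u])
            u = self.parent[u]
        return best

    def subtree(self, u):           # T.subtree(u): min edge cost inside the subtree of u
        best, stack = float('inf'), list(self.children[u])
        while stack:
            v = stack.pop()
            best = min(best, self.cost[v])
            stack.extend(self.children[v])
        return best
```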

2.3 Topologically Induced Subtrees

For a rooted tree T and a node v we denote by \(T_v\) the subtree of T rooted at v. For an edge e of T we denote by \(T_e\) the subtree of T rooted at the lower endpoint of e.

Let T be a binary tree with edge-costs and n nodes, equipped with a data structure that can answer lowest common ancestor (LCA) queries on T in constant time [16]. Let \(\Lambda = \{w_1,w_2,\ldots ,w_s\}\) be a subset of nodes of T. We define a smaller tree \(T^\Lambda \) that is equivalent to T in the following sense (Fig. 1):

Definition 3

(topologically induced tree) We say that a tree \(T^\Lambda \) is topologically induced on T by \(\Lambda \) if for every \(S \subseteq \Lambda \), the minimum cost edge \(f\in T^\Lambda \) with \(T^\Lambda _f \cap \Lambda = S\) has the same cost as the minimum cost edge \(e\in T\) with \(T_e \cap \Lambda = S\).

To be clear, the above definition implies that, for any \(S \subseteq \Lambda \), there is an edge \(e \in T\) with \(T_e \cap \Lambda = S\), if and only if there is an edge \(f \in T^\Lambda \) with \(T^\Lambda _f \cap \Lambda = S\). The term topologically induced tree will be justified by the construction in the following lemma.

Fig. 1 On the left: a tree and (in red) a set \(\Lambda =\{w_{1},w_{2},\ldots ,w_{6}\}\) sorted according to their preorder numbers. On the right: the corresponding topologically induced tree.

Lemma 2

There exists an algorithm that, given a binary tree T with edge costs, equipped with a link-cut data structure, and a list \(\Lambda = \{w_1,w_2,\ldots ,w_s\}\) of nodes of T, ordered according to their visit time in a preorder traversal of T, constructs, in \(O(\min \{|T|, s\log |T|\})\) time, a tree \(T^\Lambda \) of size O(s) that is topologically induced on T by \(\Lambda \).

Proof

We define the tree \(T^\Lambda \) to be a tree over all nodes \(w_i \in \Lambda \), together with the root and the lowest common ancestor in T of every pair of nodes \(w_i\) and \(w_j\). For any two nodes \(u,v\in T^\Lambda \), u is an ancestor of v in \(T^\Lambda \) if and only if u is an ancestor of v in T. Thus, each edge (u, v) of \(T^\Lambda \) corresponds to the u-to-v path in T. The edges on this path in T are exactly the edges e of T with \(T_e \cap \Lambda = T^\Lambda _{(u,v)} \cap \Lambda \). Hence, the paths of T corresponding to distinct edges of \(T^\Lambda \) are edge disjoint. We define the cost of the edge (u, v) of \(T^\Lambda \) to be the minimum cost of an edge on the corresponding path in T. It follows that for every \(\emptyset \ne S \subseteq \Lambda \), the minimum cost edge \(f\in T^\Lambda \) with \(T^\Lambda _f \cap \Lambda = S\) has the same cost as the minimum cost edge \(e\in T\) with \(T_e \cap \Lambda = S\). To guarantee that this condition holds for \(S = \emptyset \) as well, we choose the edge e of T with the minimum cost such that \(T_e \cap \Lambda = \emptyset \) and proceed as follows if such an edge exists. We create a new node v and change the root of \(T^\Lambda \) to v by making the old root of \(T^{\Lambda }\) a child of v via an edge with infinite cost. We then add a new edge incident to v, whose cost is set to the cost of e. This transformation does not change \(T^\Lambda _f \cap \Lambda \) for any edge f of \(T^\Lambda \), but now the condition with \(S=\emptyset \) is satisfied for the new edge incident to the root.

We now turn to proving the construction time. We first prove that \(T^\Lambda \) consists of at most 2s nodes. This is because \(T^\Lambda \) consists only of the root, the nodes \(w_i\) and LCA\((w_i,w_{i+1})\). To see this, consider two nodes \(w_i\) and \(w_j\) with \(i<j\) such that their lowest common ancestor u is different than \(w_i\) and \(w_j\). Let \(u_\ell \) (\(u_r\)) be the left (right) child of u. Then, \(w_i\) is a descendant of \(u_\ell \) and \(w_j\) a descendant of \(u_r\). Let \(i'\) be the largest index such that \(w_{i'}\) is in the subtree rooted at \(u_\ell \). Then \(u=\)LCA\((w_{i'},w_{i'+1})\).

We next prove that \(T^\Lambda \) (without its edge costs) can be constructed in O(s) time. We use a method similar to constructing the Cartesian tree [8, 42] of a sequence: we scan \(w_1,w_2,\ldots ,w_s\) from left to right while maintaining the subtree of \(T^\Lambda \) induced by \(w_0 = \) LCA\((w_1,w_s)\) and \(w_1,w_2,\ldots ,w_i\). Initially, the subtree of \(T^\Lambda \) induced by \(w_0\) and \(w_1\) is just a single edge \((w_0,w_1)\). We keep the rightmost path of the subtree of \(T^\Lambda \) induced by \(w_0, w_1, \dots , w_i\) on a stack, with the bottommost edge on the top. To process \(w_{i+1}\), we first find \(x = \) LCA\((w_i,w_{i+1})\). Then, we pop from the stack all edges (u, v) such that u and v are both below (or equal to) x in T. Finally, we possibly split the edge on the top of the stack into two and push a new edge onto the stack. The amortized complexity of every step is constant, so the total time is O(s).
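The scan described above is essentially the classical "virtual tree" construction. The Python sketch below (names ours) builds the topology of \(T^\Lambda\) in this way; it uses naive preorder and LCA helpers in place of the constant-time LCA structure, and it omits the edge costs and the extra root edge handled in the rest of the proof.

```python
def induced_topology(parent, root, marked):
    """Vertex set and parent pointers of the tree topologically induced on T
    by `marked`: the marked nodes, the root, and all pairwise LCAs.
    T is given by parent pointers (parent[root] == root)."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for v in range(n):
        if v != root:
            children[parent[v]].append(v)

    pre, depth = [0] * n, [0] * n          # preorder numbers and depths
    counter, stack = 0, [root]
    while stack:
        v = stack.pop()
        pre[v] = counter
        counter += 1
        for c in reversed(children[v]):
            depth[c] = depth[v] + 1
            stack.append(c)

    def lca(u, v):                         # naive O(depth) LCA
        while depth[u] > depth[v]:
            u = parent[u]
        while depth[v] > depth[u]:
            v = parent[v]
        while u != v:
            u, v = parent[u], parent[v]
        return u

    lam = sorted(set(marked), key=lambda v: pre[v])
    verts = set(lam) | {root} | {lca(a, b) for a, b in zip(lam, lam[1:])}
    verts = sorted(verts, key=lambda v: pre[v])

    # Stack-based scan in preorder: the parent of v in the induced tree is
    # the deepest previously processed vertex that is an ancestor of v.
    par, st = {}, []
    for v in verts:
        while st and lca(st[-1], v) != st[-1]:   # pop non-ancestors of v
            st.pop()
        if st:
            par[v] = st[-1]
        st.append(v)
    return verts, par
```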

Once \(T^\Lambda \) is constructed, we set the cost of every edge (u, v) in \(T^\Lambda \) to be the minimum cost of an edge on the u-to-v path in T. This can be done in \(O(\log |T|)\) time per edge of \(T^\Lambda \) by first calling \(T.\textsc {add}(u,\infty )\), then \(T.\textsc {path}(v)\) to retrieve the answer, and finally \(T.\textsc {add}(u,-\infty )\), for a total of \(O(s \log |T|)\) time. Alternatively, we can explicitly go over the edges of the corresponding paths of T for every edge of \(T^\Lambda \). We argued above that these paths are edge disjoint, so this takes O(|T|) time in total.

We also need to compute the cost of the edge e of T with the minimum cost such that \(T_e \cap \Lambda = \emptyset \). To this end, for each \(v \in \Lambda \) we add \(\infty \) to the cost of all edges on the path from v to the root of T. This takes \(O(\min \{|T|,s \log |T|\})\) time by either a bottom-up computation on T, or using \(T.\textsc {add}(v,\infty )\) for every \(v\in \Lambda \). We then retrieve the edge with minimum cost in the entire tree in \(O(\log |T|)\) time by a call to \(\textsc {subtree}\) for the root of T, and finally subtract \(\infty \) from the cost of all edges on the path from v to the root of T for every \(v\in \Lambda \).\(\square \)

We will use the fact that the operation of taking the topologically induced subtree is composable in the following sense.

Proposition 1

Let T be a binary tree with edge-costs. Let \(\Lambda _2 \subseteq \Lambda _1\) be subsets of nodes of T. Let \(T_1\) be topologically induced on T by \(\Lambda _{1}\) and \(T_{2}\) be topologically induced on \(T_{1}\) by \(\Lambda _{2}\). Then \(T_{2}\) is topologically induced on T by \(\Lambda _{2}\).

3 Finding a Minimum 2-respecting Cut

Given a graph G and a spanning tree T of G, a cut in G is said to 2-respect the tree T if at most two edges \(e,e'\) of T cross the cut (these edges are said to determine the cut). In this section we prove the main theorem of this paper:

Theorem 2

Given an edge-weighted graph G with n vertices and m edges and a spanning tree T, the minimum (weighted) cut in G that 2-respects T can be found in \(O(m\log n)\) time.

The minimum cut determined by every single edge can be easily found in \(O(m+n)\) time [23, Lemma 5.1]. We therefore focus on finding the minimum cut determined by two edges. Observe that the cut determined by \(\{e,e'\}\) is unique and consists of all edges \((u,v) \in G\) such that the u-to-v path in T contains exactly one of \(\{e,e'\}\).

We begin by transforming T (in linear time) into a binary tree. This is standard and is done by replacing every node of degree d with a binary tree of size O(d) where internal edges have weight \(\infty \) and edges incident to leaves have their original weight. We also add an artificial root to T and connect it to the original root with an edge of weight \(\infty \). From now on we will be working with binary trees only.
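A sketch of one way to carry out this binarization (names ours): each node with more than two children is expanded into a chain of dummy nodes joined by infinite-weight edges, which is one binary gadget of size O(d) with infinite internal weights as described in the text; the artificial root is added as well.

```python
INF = float('inf')

def binarize(parent, root, weight):
    """Rebuilds T so that every node has at most two children, using a chain
    of dummy nodes with infinite-weight edges, and adds an artificial root
    above the original root.  weight[v] is the weight of the edge
    (parent[v], v).  Original vertices keep their ids; dummy vertices and the
    artificial root get fresh ids.  Returns new parent/weight arrays and the
    id of the artificial root."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for v in range(n):
        if v != root:
            children[parent[v]].append(v)

    new_parent, new_weight = list(parent), list(weight)

    def new_node(par, w):
        new_parent.append(par)
        new_weight.append(w)
        return len(new_parent) - 1

    for v in range(n):
        remaining = list(children[v])
        hook = v                             # current attachment point
        while len(remaining) > 2:
            c = remaining.pop(0)
            new_parent[c] = hook             # c keeps its original edge weight
            hook = new_node(hook, INF)       # a dummy node continues the chain
        for c in remaining:                  # at most two children left
            new_parent[c] = hook

    art = len(new_parent)                    # artificial root
    new_parent.append(art)                   # it is its own parent
    new_weight.append(0)                     # no edge above the artificial root
    new_parent[root] = art
    new_weight[root] = INF
    return new_parent, new_weight, art
```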

3.1 Descendant Edges

We first describe an \(O(m\log n)\) time algorithm for finding the minimum cut determined by all pairs of edges \(\{e,e'\}\) where \(e'\) is a descendant of e in T (i.e. \(e'\) is in the subtree of T rooted at the lower endpoint of e). To this end, we shall efficiently find, for each edge e of T, the descendant edge \(e'\) that minimizes the weight of the cut determined by \(\{e,e'\}\), and return the pair minimizing the weight of the cut.

For a given edge e of T, let \(T_e\) denote the subtree of T rooted at the lower endpoint of e. We associate with every node x a list of all edges (u, v) such that x is the lowest common ancestor of u and v. Note that all these lists can be computed in linear time, and form a partition of the edges of G. We compute for every node x the total weight of edges incident to x, and the total weight of edges (u, v) such that LCA(u, v) is x. This also takes O(m) time. We then compute, using an O(m)-time bottom-up computation on T, for every edge e of T, the total weight A(e) of all edges with exactly one endpoint in \(T_e\) (in fact, this is the information computed by Karger’s algorithm for the 1-respecting case). To see how this is done, observe that for an edge e whose lower endpoint is v and whose children in T are \(e_\ell \) and \(e_r\), A(e) is given by \(A(e_\ell ) + A(e_r)\) minus twice the total weight of edges whose lowest common ancestor is v plus the total weight of edges incident to v. Note that A(e) includes the weight of e.
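A self-contained Python sketch of this computation (names ours); it follows the recurrence in the text, with a naive LCA in place of the linear-time machinery.

```python
def one_respecting_values(parent, root, edges):
    """A[v] = total weight of graph edges with exactly one endpoint in the
    subtree of v, i.e. the weight of the cut determined by the tree edge
    above v alone.  `edges` lists all edges of G as (u, v, w), tree edges
    included."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for v in range(n):
        if v != root:
            children[parent[v]].append(v)

    depth, order = [0] * n, [root]
    for v in order:                         # top-down order (parents first)
        for c in children[v]:
            depth[c] = depth[v] + 1
            order.append(c)

    def lca(u, v):                          # naive O(depth) LCA
        while depth[u] > depth[v]:
            u = parent[u]
        while depth[v] > depth[u]:
            v = parent[v]
        while u != v:
            u, v = parent[u], parent[v]
        return u

    incident = [0] * n                      # total weight of edges touching x
    at_lca = [0] * n                        # total weight of edges whose LCA is x
    for u, v, w in edges:
        incident[u] += w
        incident[v] += w
        at_lca[lca(u, v)] += w

    A = [0] * n
    for v in reversed(order):               # bottom-up
        A[v] = incident[v] - 2 * at_lca[v] + sum(A[c] for c in children[v])
    return A                                # A[root] is unused (no edge above the root)
```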

Using a link-cut tree we maintain a score for every edge e of T. The scores will be such that, when an edge e is first encountered by a depth first scan of T, the weight of the cut determined by e and any descendant \(e'\) of e is A(e) plus the score of \(e'\). This will allow us to find the best \(e'\) for e using a single call to \(T.\textsc {subtree}\). We next explain the details.

All scores are first initialized to zero. Then, for every edge (u, v) of G, we increase the score of all edges on the u-to-v path in T by the weight w(u, v) of (u, v). This takes \(O(\log n)\) time per edge (u, v) by calling \(T.\textsc {add}(u,w(u,v))\), \(T.\textsc {add}(v,w(u,v))\) and \(T.\textsc {add}(\text {LCA}(u,v),-2w(u,v))\). This initialization takes \(O(m \log n)\) time. We then perform an Euler tour of T. When the tour first descends below a node x, for every edge (u, v) in the list of x, we decrease the score of all edges on the u-to-v path in T by 2w(u, v). Then, when the tour ascends above x, for every edge (u, v) in the list of x, we increase the score of all edges on the u-to-v path in T by 2w(u, v). Note that, at any point during this traversal, each edge (u, v) either contributes w(u, v) or \(-w(u,v)\) to the score of every edge on the u-to-v path in T, depending on whether the tour is yet to descend below LCA(u, v) or has already done so. As above, each update can be implemented in \(O(\log n)\) time. Since every edge appears in exactly one list, the total time to perform all the updates is \(O(m \log n)\).

Lemma 3

Consider the point in time when the Euler tour had just encountered an edge e for the first time. At that time, for every descendant edge \(e'\) of e, the weight of the cut determined by \(\{e,e'\}\) is A(e) plus the score of \(e'\).

Proof

Observe that the weight of the cut determined by \(\{e,e'\}\) is the sum of weights of all edges with (1) one endpoint in \(T_e - T_{e'}\) and the other not in \(T_e\), or (2) one endpoint in \(T_e - T_{e'}\) and the other in \(T_{e'}\). Note that the edges satisfying (1) have exactly one endpoint in \(T_e\), and hence their weight is accounted for in A(e). However, A(e) also counts the weight of edges (u, v) with one endpoint in \(T_{e'}\) and the other not in \(T_e\) (see Fig. 2). Such edges do not cross the cut. Note that for such edges both e and \(e'\) are on the u-to-v path in T. The fact that e is on the u-to-v path implies that the traversal has already descended below LCA(u, v). Hence, (u, v) currently contributes \(-w(u,v)\) to the score of \(e'\), offsetting its contribution to A(e). Next note that the edges satisfying (2) are edges (u, v) with both u and v in \(T_e\), which means that they are not accounted for in A(e), and that the traversal did not yet descend below LCA(u, v). Hence the contribution of such an edge (u, v) to the score of \(e'\) is indeed the weight of (u, v). \(\square \)

Fig. 2 A tree T and two descendant edges \(e,e'\). Edges \(e_1\) and \(e_3\) cross the cut determined by \(\{e,e'\}\). Edge \(e_2\) does not cross the cut. The edge \(e_1\) is not counted in A(e), while the edges \(e_2,e_3\) are. The edge \(e_1\) contributes \(w(e_1)\) to the score of \(e'\), the edge \(e_2\) contributes \(-w(e_2)\) to the score of \(e'\), and the edge \(e_3\) does not contribute to the score of \(e'\).

By the lemma, the descendant edge \(e'\) of e that minimizes the weight of the cut determined by \(\{e,e'\}\) is the edge with minimum score in the subtree of e at that time. The score of this edge \(e'\) can be found in \(O(\log n)\) time by calling \(T.\textsc {subtree}(x)\), where x is the lower endpoint of e.
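Putting Section 3.1 together, here is a hedged Python sketch (names ours) in which the link-cut tree is replaced by naive path updates and subtree scans, so each operation costs O(n) instead of \(O(\log n)\) but the structure of the procedure is the same. The array A holds the 1-respecting values, e.g. as computed by the earlier sketch.

```python
import sys

def min_cut_two_descendant_edges(parent, root, edges, A):
    """Minimum weight of a cut determined by a pair {e, e'} with e' a
    descendant of e.  `edges` lists all graph edges (u, v, w), tree edges
    included; A[v] is the 1-respecting value of the edge above v."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for v in range(n):
        if v != root:
            children[parent[v]].append(v)
    depth, order = [0] * n, [root]
    for v in order:
        for c in children[v]:
            depth[c] = depth[v] + 1
            order.append(c)

    def lca(u, v):
        while depth[u] > depth[v]:
            u = parent[u]
        while depth[v] > depth[u]:
            v = parent[v]
        while u != v:
            u, v = parent[u], parent[v]
        return u

    score = [0] * n                       # score[x] = score of the edge above x

    def add_path(u, top, delta):          # add delta to the edges on the u-to-top path
        while u != top:
            score[u] += delta
            u = parent[u]

    def subtree_min(c):                   # min score of an edge inside the subtree of c
        best, stack = float('inf'), list(children[c])
        while stack:
            d = stack.pop()
            best = min(best, score[d])
            stack.extend(children[d])
        return best

    by_lca = [[] for _ in range(n)]       # the per-node lists of the text
    for u, v, w in edges:
        by_lca[lca(u, v)].append((u, v, w))
    for u, v, w in edges:                 # initialization: +w along the u-to-v path
        top = lca(u, v)
        add_path(u, top, w)
        add_path(v, top, w)

    best = float('inf')
    sys.setrecursionlimit(max(10000, 2 * n + 100))

    def tour(x):                          # Euler tour of T
        nonlocal best
        for u, v, w in by_lca[x]:         # descending below x: -2w on the u-to-v path
            add_path(u, x, -2 * w)
            add_path(v, x, -2 * w)
        for c in children[x]:             # first encounter of the tree edge (x, c)
            best = min(best, A[c] + subtree_min(c))
            tour(c)
        for u, v, w in by_lca[x]:         # ascending above x: restore +2w
            add_path(u, x, 2 * w)
            add_path(v, x, 2 * w)

    tour(root)
    return best
```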

3.2 Independent Edges

We now describe an \(O(m\log n)\) time algorithm for finding the minimum cut determined by all pairs of edges \(\{e,e'\}\) where e is independent of \(e'\) in T (i.e. e is not a descendant of \(e'\) and \(e'\) is not a descendant of e). We begin by showing that the problem can be reduced to the following bipartite problem:

Definition 4

(The bipartite problem) Given two trees \(T_1\) and \(T_2\) with costs on the edges and a list of non-tree edges \(L=\{(u,v) : u\in T_1, v\in T_2 \}\) where each non-tree edge has a cost, find a pair of edges \(e\in T_1\) and \(e'\in T_2\) that minimize the sum of costs of e, of \(e'\), and of all non-tree edges \((u,v)\in L\) where u is in \({T_1}_e\), and v is in \({T_2}_{e'}\). The size of such a problem is defined as the number of non-tree edges in L plus the sizes of \(T_1\) and \(T_2\).
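To fix intuition (and as a correctness reference for the algorithm of Lemma 5), here is a direct brute-force solver for the bipartite problem; names are ours, and each tree is given as a parent array with cost[v] being the cost of the edge above v.

```python
def bipartite_brute_force(T1, T2, L):
    """T1 and T2 are (parent, root, cost) triples; L is a list of (u, v, c)
    with u a vertex of T1, v a vertex of T2, and c the cost of the non-tree
    edge.  Tries every pair of tree edges (identified by their lower
    endpoints) and returns (best_value, e, e_prime)."""
    def in_subtree(parent, anc, v):       # is v in the subtree rooted at anc?
        while True:
            if v == anc:
                return True
            if parent[v] == v:            # reached the root without meeting anc
                return False
            v = parent[v]

    (p1, r1, c1), (p2, r2, c2) = T1, T2
    best = (float('inf'), None, None)
    for e in range(len(p1)):
        if e == r1:
            continue
        for f in range(len(p2)):
            if f == r2:
                continue
            val = c1[e] + c2[f] + sum(c for (u, v, c) in L
                                      if in_subtree(p1, e, u) and in_subtree(p2, f, v))
            if val < best[0]:
                best = (val, e, f)
    return best
```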

Lemma 4

Given an edge-weighted graph G with n vertices and m edges and a spanning tree T, finding the minimum cut among those determined by a pair of independent edges \(\{e,e'\}\) can be reduced in \(O(m\log n)\) time to multiple instances of the bipartite problem of total size O(m).

Proof

Recall that every node w of T has at most two children. We create a separate bipartite problem for every node w of T that has exactly two children (x and y). This bipartite problem will be responsible for finding the minimum cut determined by all pairs of independent edges \(\{e,e'\}\) where e is in \(T_x\) (or possibly (wx)) and \(e'\) is in \(T_y\) (or possibly (wy)).

Throughout our description, note the distinction between edge weights and edge costs. The input graph G has edge weights, and the goal is to find the cut with minimum weight. The bipartite problems we define have edge costs, which are derived from the weights of edges in the input graph.

We initialize the cost of every edge of T to be zero. Then, for every edge \(f=(u,v)\) of G, we add the weight of f to the cost of every edge on the u-to-v path in T. We maintain the costs in a link-cut tree so each f is handled in \(O(\log n)\) time. Now consider any node w with exactly two children x and y, and any pair of independent edges \(\{e,e'\}\) where e is in \(T_x\) and \(e'\) is in \(T_y\). Observe that the edges crossing the cut determined by \(\{e,e'\}\) are exactly the edges \(f=(u,v)\) with one endpoint in \(T_e\) or in \(T_{e'}\), and the other endpoint not in \(T_e\) nor in \(T_{e'}\). Hence, the weight of the cut determined by \(\{e,e'\}\) equals the sum of the cost of e plus the cost of \(e'\) minus twice the total weight of all non-tree edges \(f=(u,v)\) such that u is in \(T_e\) and v is in \(T_{e'}\).

We therefore define the bipartite problem for w as follows: (1) \(T_1\) is composed of the edge (w, x) and the subtree rooted at x with costs as described above, (2) \(T_2\) is composed of the edge (w, y) and the subtree rooted at y with the costs as described above, and (3) for every non-tree edge \(f=(u,v)\) with weight c such that LCA\((u,v)=w\), the list of non-tree edges L includes (u, v) with cost \(-2c\). By construction, the solution to this bipartite problem is the pair of independent edges \(e,e'\) with \(e \in T_x\) and \(e' \in T_y\) that minimize the weight of the cut in G defined by e and \(e'\).

The only issue with the above bipartite problem is that the overall size of all bipartite problems (over all nodes w) might not be O(m). This is because the edges of T might appear in the bipartite problems defined for more than a single node w. In order to guarantee that the overall size of all bipartite problems is O(m), we construct a compact bipartite problem using the topologically induced trees of Lemma 2.

We construct in O(m) time a constant-time LCA data structure [16] for T. In overall \(O(m\log n)\) time, we construct, for each node \(w\in T\) with exactly two children x and y:

  1. A list \(L_w\) of all non-tree edges (u, v) with LCA\((u,v) = w\).
  2. A list \(\Lambda _x = \{w,x\} \cup \{u : (u,v) \in L_w \text { and } u \in T_x\}\), sorted according to their visit time in a preorder traversal of T.
  3. A list \(\Lambda _y = \{w,y\} \cup \{v : (u,v) \in L_w \text { and } v \in T_y\}\), sorted according to their visit time in a preorder traversal of T.

These lists require total O(m) space and can be easily computed in \(O(m\log n)\) time by going over the non-tree edges, because each non-tree edge is in the list \(L_w\) of a unique node w.

The list L for the compact bipartite problem is identical to the list L for the non-compact problem. The tree \(T^\circ _1\) (\(T^\circ _2\)) for the compact bipartite problem of w is the tree topologically induced on \((w,x) \cup T_x\) (\((w,y) \cup T_y\)) by \(\Lambda _x\) (\(\Lambda _y\)). This is done in \(O(|\Lambda _x| \log n)\) (\(O(|\Lambda _{y}|\log n)\)) time by invoking Lemma 2. It follows that the total time for constructing all compact bipartite problems is \(O(m\log n)\) and that their total space is O(m). It remains to argue that the solution to the compact bipartite problem is identical to the solution to the non-compact one. Observe that the cost of a solution \(e,e'\) for the non-compact bipartite problem is the cost of e plus the cost of \(e'\) plus the cost of all edges in L with one endpoint in \({T_1}_e \cap \Lambda \) and the other endpoint in \({T_2}_{e'} \cap \Lambda \).

Consider now any pair of edges e and \(e'\) for the non-compact bipartite problem. By definition of topologically induced trees, the minimum cost edge \(f \in T^\circ _1\) with \({T^\circ _1}_f \cap \Lambda = {T_1}_e \cap \Lambda \), has cost not exceeding that of e. An analogous argument holds for \(e'\) and an edge \(f'\) of \(T^\circ _2\). Hence, the cost of the optimal solution for the compact problem is not greater than that of the non-compact problem. Conversely, for any edge f in \(T^\circ _1\), there exists an edge e in \(T_1\) with cost not exceeding that of f and \({T^\circ _1}_f \cap \Lambda = {T_1}_e \cap \Lambda \). Hence, the cost of the optimal solution for the compact problem is not less than that of the non-compact problem. It follows that the two solutions are the same.\(\square \)

The proof of Theorem 2 follows from the above reduction and the following solution to the bipartite problem:

Lemma 5

A bipartite problem of size m can be solved in \(O(m\log m)\) time.

Proof

Recall that in the bipartite problem we are given two trees \(T_1\) and \(T_2\) with edge-costs and a list of non-tree edges \(L=\{(u,v) : u\in T_1, v\in T_2\}\) where each non-tree edge has a cost. To prove the lemma, we describe a recursive \(O(m\log m)\) time algorithm that finds, for every edge \(e\in T_1\), the best edge \(e'\in T_2\) (i.e. the edge \(e'\) that minimizes the sum of costs of \(e'\) and of all non-tree edges \((u,v)\in L\) where \(u \in {T_1}_e\) and \(v \in {T_2}_{e'}\)).

We begin by applying a standard heavy path decomposition [16] to \(T_1\), guided by the number of non-tree edges: The heavy edge of a node of \(T_1\) is the edge leading to the child (called the heavy child) whose subtree has the largest number of incident non-tree edges in L (breaking ties arbitrarily). The other edges are called light. The maximal sets of connected heavy edges define a decomposition of the nodes into heavy paths.
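A sketch of this decomposition step (names ours): given the endpoints in \(T_1\) of the non-tree edges, it computes, for every node, the child carrying the largest number of such endpoints in its subtree.

```python
def heavy_children(parent, root, endpoints_in_T1):
    """heavy[v] = the child of v whose subtree contains the largest number of
    endpoints of non-tree edges (None for leaves); ties broken arbitrarily.
    `endpoints_in_T1` lists, with multiplicity, the T1-endpoint of every
    non-tree edge in L."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for v in range(n):
        if v != root:
            children[parent[v]].append(v)
    order = [root]
    for v in order:
        order.extend(children[v])

    cnt = [0] * n                          # non-tree endpoints in the subtree of v
    for u in endpoints_in_T1:
        cnt[u] += 1
    heavy = [None] * n
    for v in reversed(order):              # bottom-up
        for c in children[v]:
            cnt[v] += cnt[c]
        if children[v]:
            heavy[v] = max(children[v], key=lambda c: cnt[c])
    return heavy
```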

We define a fragment of the tree \(T_1\) to be a subtree of \(T_1\) formed by a contiguous subpath \(u_1-u_2-\cdots -u_k\) of some heavy path of \(T_1\), together with all subtrees hanging from this subpath via light edges. Given a fragment f, let \(L(f) = \{(x_1,y_1), (x_2,y_2), \dots , (x_\ell ,y_\ell )\}\) be the set of edges (x, y) of L with \(x \in f\). We define the induced subtree \(T_2(f)\) to be the tree topologically induced on \(T_2\) by the root of \(T_2\) and \(\{y_1, y_2, \dots , y_{\ell }\}\). The size of \(T_2(f)\) is \(|T_2(f)| = O(|L(f)|)\). We also define a modified induced subtree \(T'_2(f)\) as follows. Let \(L_\downarrow (f)\) be the set of edges \((x,y)\in L \) with x in the subtree rooted at the heavy child of the last node \(u_k\) of the fragment f (if such a heavy child exists). Consider the tree \(T_2\), where the cost of each edge \(e'\) of \(T_2\) is increased by the total cost of all edges \((x,y)\in L_\downarrow (f)\), where y is in \({T_2}_{e'}\). The modified induced subtree \(T'_2(f)\) is defined as the tree topologically induced by the root of \(T_2\) and \(\{y_1, y_2, \dots , y_\ell \}\) on this modified \(T_2\) (Fig. 3).

Fig. 3 On the left: A fragment (in light gray) in the tree \(T_1\), defined by the top node \(u_{1}\) and the bottom node \(u_{k}\), both laying on the same heavy path (solid edges). The triangles (in dark gray) are the subtrees hanging from the heavy paths via light edges (dashed). On the right: The tree \(T_2\) (black) connected to the fragment via three non-tree edges (blue). The endpoints \(w_1,w_2,w_3\) of these edges define the topologically induced tree (in red).

We are now ready to describe the recursion. The input to a recursive call is a fragment f of \(T_1\) and the list \((x_1,y_1),(x_2,y_2),\ldots ,(x_\ell ,y_\ell )\) of all non-tree edges in L with \(x_i\) in f, together with \(T_{2}(f)\) and \(T'_{2}(f)\). A fragment f is specified by the top node (\(u_1\)) and the bottom node (\(u_k\)) of the corresponding subpath of a heavy path of \(T_1\). In the first call, f is specified by the root of \(T_1\) and the leaf ending the heavy path of \(T_1\) that contains the root. That is, in the first call f is the entire tree \(T_1\). The list of non-tree edges for the first call is the entire list L. The recursion works by selecting the middle node of the subpath, defined as follows: We define the light size \(s_i\) of node \(u_i\) as the number of non-tree edges \((x,y)\in L\) where either \(x=u_i\) or x is in the subtree rooted at the light child of \(u_i\). Note that \(s_1+s_2+\ldots +s_k = |L(f)|\). If \(s_1 > |L(f)|/2\) then the middle node is defined as \(u_1\). Otherwise, the middle node is defined as the node \(u_i\) such that \(s_1+\ldots +s_{i-1} \le |L(f)|/2\) but \(s_1+\ldots +s_i > |L(f)|/2\). We keep, for every heavy path P of \(T_1\) a list of the nodes of P with non-zero light size, ordered according to their order on P. We find the middle node \(u_i\) in O(|L(f)|) time by going over the nodes in this list one after the other until we encounter the middle node.
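A short sketch of the middle-node selection (names ours), given the nodes of the fragment's subpath in order and their light sizes \(s_i\) (here as a mapping indexed by node):

```python
def middle_node(subpath, light_size):
    """Returns the first node u_i on the subpath at which the prefix sum of
    light sizes exceeds half of the fragment's total |L(f)|, matching the
    rule in the text; both the strict prefix before u_i and the suffix after
    it then carry at most half of the non-tree edges."""
    total = sum(light_size[u] for u in subpath)
    prefix = 0
    for u in subpath:
        prefix += light_size[u]
        if prefix > total / 2:
            return u
    return subpath[-1]                     # only reached when total == 0
```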

After identifying the middle node \(u_i\) we apply recursion on the following three fragments: the fragment defined by subpath \(u_1-\cdots -u_{i-1}\), the fragment defined by subpath \(u_{i+1}-\cdots -u_k\), and the fragment consisting of the entire subtree rooted at the light child of \(u_i\). Before a recursive call to fragment g, we construct the appropriate \(T_2(g)\) and \(T'_2(g)\). The induced tree \(T_2(g)\) can be computed from \(T_2(f)\) in \(O(|T_2(f)|) = O(|L(f)|)\) time by invoking Lemma 2 on \(T_2(f)\) with \(\Lambda _g = r \cup \{y : (x,y)\in L(g)\}\), where r is the root of \(T_2\). Note that we had defined \(T_2(g)\) as the topologically induced tree on \(T_2\) by \(\Lambda _g\), not on \(T_2(f)\) by \(\Lambda _g\). However, since \(\Lambda _g \subseteq \Lambda _f\), by Proposition 1, the two definitions are equivalent.

For constructing \(T'_2(g)\) from \(T'_2(f)\), we first need to increase the cost of each edge \(\tilde{e}\) of \(T'_2(f)\) by the total cost of edges in \(L_\downarrow (g) \setminus L_\downarrow (f)\) that are incident to \(T'_2(f)_{\tilde{e}}\). This can be done in a single bottom-up traversal of \(T'_2(f)\) in \(O(|T'_2(f)|)\) time. Then, we invoke Lemma 2 on \(T'_2(f)\) with \(\Lambda _g\) to obtain \(T'_2(g)\). To summarize, constructing the trees \(T_2(g)\) and \(T'_2(g)\) for all three recursive subproblems takes O(|L(f)|) time.

The three recursive calls will take care of finding the best edge \(e'\in T_2\) for every edge \(e \in T_1\) included in one of the recursive problems. It only remains to handle the three edges that do not belong to any of the recursive problems: the edge between \(u_i\) and its light child, the edge \((u_{i-1},u_i)\), and the edge \((u_i,u_{i+1})\). For each such edge e we describe a procedure that finds its best \(e'\in T_2(f)\) in time \(O(|T_2(f)|)\).

Recall that, by definition of the bipartite problem, the best edge \(e'\) for e is the edge \(e'\) of \(T_2\) minimizing the cost of \(e'\) plus the cost of all non-tree edges \((x,y)\in L\) with \(x \in {T_1}_e\), and \(y \in {T_2}_{e'}\). For the case where e is the edge between \(u_i\) and its light child, \({T_1}_e = T_1(f)_e\). We therefore mark all non-tree edges \((x,y)\in L(f)\), where x is in \(T_1(f)_e\). A non-efficient solution would work directly on \(T_2\) by propagating, in a bottom-up traversal of \(T_2\), the cost of all marked edges so that, after the propagation, the cost of every edge \(e'\) in \(T_2\) has been increased by the total cost of all non-tree edges \((x,y) \in L\) with \(x \in T_1(f)_e\), and with \(y \in {T_2}_{e'}\). Then we can take the edge \(e' \in T_2\) with the minimum cost. However, this would take \(O(|T_2|) = O(m)\), which is too slow. Instead, we perform the propagation in \(T_2(f)\). Namely, in a bottom-up traversal of \(T_2(f)\), we propagate the cost of all marked edges so that, after the propagation, the cost of every edge \(e'\) in \(T_2(f)\) has been increased by the total cost of all non-tree edges \((x,y) \in L(f)\) with \(x \in T_1(f)_e\), and with \(y \in T_2(f)_{e'}\). This takes \(O(|T_2(f)|) = O(|L(f)|)\) time. Since the propagation process affects all the edges \(\tilde{e}\) with the same \({T_2}_{\tilde{e}} \cap \Lambda _f\) in the same way, the definition of topologically induced tree guarantees that the edges with the minimum cost in \(T_2\) and in \(T_2(f)\) have the same cost, so using \(T_2(f)\) instead of \(T_2\) is correct.

The procedure for the cases where e is the edge \((u_{i-1},u_i)\) or \((u_i,u_{i+1})\) is identical, except that we apply it with \(T'_2(f)\) instead of \(T_2(f)\). This difference stems from the fact that applying the above procedure on \(T_2(f)\) only considers the costs of the non-tree edges in L(f), but not the costs of the non-tree edges in \(L_\downarrow (f)\), which might also cross cuts involving the edges \((u_{i-1},u_i)\) or \((u_i,u_{i+1})\). The definition of the costs of edges in \(T'_2(f)\) takes into account the contribution of costs of non-tree edges in \(L_\downarrow (f)\). The rest of the propagation procedure and the proof of its correctness remain unchanged.

To analyze the overall running time, let T(m) be the time to handle a fragment f corresponding to a whole heavy path, and \(T'(m)\) be the time to handle a fragment f corresponding to a proper subpath of some heavy path, where \(m=|L(f)|\). Then \(T(m)=T'(m_{1})+T(m_{2})+T'(m_{3})\) for some \(m_{1},m_{2},m_{3}\), where \(m_{1}+m_{2}+m_{3}=m\) since the subproblems are disjoint, \(m_{1},m_{3}\le m/2\) by the choice of the middle node, and \(m_{2}\le m/2\) by the definition of a heavy path. This is since the light child of \(u_i\) does not have more incident non-tree edges in its subtree than the number of non-tree edges incident to the subtree of the heavy child \(u_{i+1}\). When f corresponds to a whole heavy path the number of non-tree edges incident to the subtree of \(u_{i+1}\) is exactly \(m_3\). Similarly, for the case where f does not correspond to a whole heavy path, \(T'(m) = T'(m_{1})+T(m_{2})+T'(m_{3})\) for some \(m_{1},m_{2},m_{3}\), where \(m_{1}+m_{2}+m_{3}=m\) and \(m_{1},m_{3}\le m/2\) (but now we cannot guarantee that \(m_{2}\le m/2\)). Considering the tree describing the recursive calls, on any path from the root (corresponding to the fragment consisting of the whole \(T_{1}\)) to a leaf, we have the property that the value of m decreases by at least a factor of 2 every two steps. Hence, the depth of the recursion is \(O(\log m)\). It follows that the total time to handle a bipartite problem of size m is \(O(m \log m)\).\(\square \)

To conclude, we have shown how to find the minimum cut determined by two descendant edges and the minimum cut determined by two independent edges. If we wish to find for every edge e of T the edge \(e'\) that minimizes the weight of the cut determined by \(\{e,e'\}\), then we need to run every bipartite problem twice (switching the roles of \(T_1\) and \(T_2\)). This does not incur any asymptotic overhead.

4 A \(\log \log n\) Speedup

Karger modified his \(O(m\log ^{3}n)\) time minimum cut algorithm to work in \(O(m\log ^{3}n/\log \log n+n\log ^{6}n)\) time by observing that 1-respecting cuts can be found faster than 2-respecting cuts, and tweaking the parameters of the tree packing. In this section we explain how to apply this idea, together with our new \(O(m\log n)\) time algorithm for the 2-respecting problem, to derive a new \(O(m\log ^{2}n/\log \log n+n\log ^{3+\delta }n)\) time minimum cut algorithm, for any \(\delta >0\).

As in Karger’s implementation [23, Section 9.1], we start with an initial sampling step, except that instead of \(\epsilon =\frac{1}{4\log n}\) we use \(\epsilon =\frac{1}{4\log ^{\gamma }n}\), for some \(\gamma \in (0,1)\) to be fixed later. This produces in linear time a graph H with \(m'=O(n/\epsilon ^{2}\log n)\) edges and minimum cut \(c'=O(\epsilon ^{-2}\log n)\) such that the minimum cut in G corresponds to a \((1+\epsilon )\)-times minimum cut in H. We find a packing in H of weight \(c'/2\) with Gabow’s algorithm [7] in \(O(m'c'\log n)=O(n/\epsilon ^{4}\log ^{3}n)\) time. We choose \(4\log ^{1+\gamma }n\) trees at random from the packing and for each tree we find the minimum cut that 1-respects it. This takes \(O(m\log ^{1+\gamma }n)\) time in total. Then, we choose \(\log n/\log (\log ^{\gamma }n)=O(\log n/\log \log n)\) trees at random from the packing and for each tree we find the minimum cut that 2-respects it. This takes \(O(m\log ^{2}n/\log \log n)\) time in total using our new algorithm.

Let \(\rho \ge c'/2\) be the weight of the packing, let \(\alpha \rho \) be the total weight of trees that 1-respect the minimum cut, and let \(\beta \rho \) be the total weight of trees that 2-respect the minimum cut. As observed by Karger, \(\beta \ge 1-2\epsilon -2\alpha \). Thus, either \(\alpha > \frac{1}{4\log ^{\gamma }n}\) and choosing \(4\log ^{1+\gamma }n\) trees guarantees that none of them 1-respects the minimum cut with probability at most

$$\begin{aligned} (1-\alpha )^{4\log ^{1+\gamma }n} \le \left( 1-\frac{1}{4\log ^{\gamma }n}\right) ^{4\log ^{\gamma }n\cdot \log n} < 1/n, \end{aligned}$$

or \(\beta \ge 1-1/\log ^{\gamma }n\) and choosing \(\log n/\log (\log ^{\gamma }n)\) trees guarantees that none of them 2-respects the minimum cut with probability at most

$$\begin{aligned} (1-\beta )^{\log n/\log (\log ^{\gamma }n)} \le \left( \frac{1}{\log ^{\gamma }n}\right) ^{\log n/\log (\log ^{\gamma }n)} = 1/n. \end{aligned}$$

The overall complexity is \(O(n/\epsilon ^{4}\log ^{3}n+m\log ^{1+\gamma }n+m\log ^{2}n/\log \log n)= O(m\log ^{2}n/\log \log n+n\log ^{3+4\gamma }n)\). By adjusting \(\gamma =\delta /4\) we obtain the claimed complexity of \(O(m\log ^{2}n/\log \log n+n\log ^{3+\delta }n)\).