1 Introduction

In the dynamic setting our task is to maintain a 'good' solution for some computational problem as the input undergoes updates [1,2,3,4,5,6]. Our goal is to minimize the update time needed to repair the output after each change to the input. One of the most extensively studied computational problems in the dynamic setting is approximate maximum matching. Here the task is to maintain an \(\alpha \)-approximate matching M in G, that is, a matching satisfying \(|M| \cdot \alpha \ge \mu (G)\) (where \(\mu (G)\) denotes the size of a maximum matching of graph G). Due to the conditional lower bound of [7], maintaining an exact maximum matching (a 1-approximate maximum matching) requires polynomial update time. Hence, a long line of work has focused on the approximation ratio-update time trade-offs achievable for \(\alpha > 1\) [8,9,10,11,12,13,14,15].

If a dynamic algorithm computes the updated output in at most O(T) time after any single change to the input, we say that its update time is worst-case O(T). A slight relaxation of this bound requires only that the algorithm spend at most \(O(T \cdot k)\) total time maintaining the output over any \(k>0\) consecutive updates to the input; in this case the update time of the algorithm is amortized O(T).

A number of dynamic algorithms in the literature utilize varying degrees of randomization [16,17,18,19,20,21]. However, all currently known techniques for proving update time lower bounds fail to differentiate between randomized and deterministic dynamic algorithms [7, 22,23,24,25]. Hence, understanding the power of randomization in the dynamic setting is an important research agenda. In the case of dynamic matching, getting rid of randomization has proven to be difficult within the realm of \({\tilde{O}}(1)\) update time. While a randomized algorithm with \({\tilde{O}}(1)\) update time was found as early as the influential work of Onak and Rubinfeld [26], the first deterministic algorithm with the same update time was only shown much later by Bhattacharya et al. [27]. For achieving \((2 + \epsilon )\)-approximation with worst-case update time there is still an O(poly(n)) factor gap between the fastest randomized and deterministic implementations ([17, 18] and [28], respectively).

While amortized update time bounds tell us nothing about worst-case update time, some problems in the dynamic setting have proven difficult to solve efficiently without amortization. Notably, for the dynamic connectivity problem the first deterministic amortized update time solution by Holm et al. [29] long preceded the first worst-case update time implementation of Kapron et al. [30], which required randomization.

Both of the algorithms presented in this paper offer the best of both worlds, as they are deterministic and provide new worst-case update time bounds.

Many dynamic algorithms, such as [31, 32], rely on the robustness of their output. To see this in the context of matching, observe that if a matching M is \(\alpha \)-approximate, it remains \((\alpha \cdot (1 + O(\epsilon )))\)-approximate even after \(\epsilon \cdot |M|\) edge updates. Hence, if we rebuild M after the updates, we can amortize its reconstruction cost over \(\epsilon \cdot |M|\) time steps. However, such an approach inherently results in an amortized update time bound. In some cases, with additional technical effort, de-amortization was shown to be achievable for these algorithms [31, 33]. A natural question to ask is whether an amortized update time bound can always be avoided for amortized rebuild based dynamic algorithms.
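Before addressing this question, let us make the amortized rebuild pattern concrete. The following minimal sketch (our own illustration, not an algorithm from the papers cited above) maintains a maximal, hence 2-approximate, matching and rebuilds it from scratch once \(\epsilon \cdot |M|\) updates have accumulated:

    class AmortizedRebuildMatching:
        # Toy amortized-rebuild scheme: maintain a maximal (2-approximate)
        # matching and recompute it statically after eps * |M| updates.
        def __init__(self, edges, eps=0.1):
            self.eps, self.edges = eps, {frozenset(e) for e in edges}
            self._rebuild()

        def _rebuild(self):
            # O(m) greedy scan; this cost is amortized over eps * |M| steps.
            self.matching, matched = set(), set()
            for e in self.edges:
                if not (e & matched):
                    self.matching.add(e)
                    matched |= e
            self.since_rebuild = 0

        def update(self, u, v, insertion):
            e = frozenset((u, v))
            self.edges.add(e) if insertion else self.edges.discard(e)
            if not insertion:
                self.matching.discard(e)  # keep M a valid matching
            self.since_rebuild += 1
            # M stays (2 * (1 + O(eps)))-approximate for eps * |M| updates,
            # so the O(m) rebuild amortizes to O(m / (eps * |M|)) per update.
            if self.since_rebuild > self.eps * max(1, len(self.matching)):
                self._rebuild()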

To answer this question we explicitly present a versatile framework for improving the update time bounds of amortized rebuild based algorithms to worst-case while incurring only an \({\tilde{O}}(1)\) blowup in update time. Our framework was implicitly shown in Bernstein et al. [33] and Nanongkai and Saranurak [34]. To demonstrate the framework we present two new results:

Theorem 1

There is a deterministic algorithm for maintaining a \((2 + \epsilon )\)-approximate matching in a fully dynamic graph with worst-case update time \(O_\epsilon (\log ^7(n)) = {\tilde{O}}(1)\) (where \(O_{\epsilon }\) hides \(O(poly(1/\epsilon ))\) factors).

For the approximation ratio of \((2 + \epsilon )\), the best known worst-case update time of \({\tilde{O}}(\sqrt{n})\) was shown recently in [28], while \({\tilde{O}}(1)\) amortized update time algorithms were shown earlier in [27, 32]. We show that an O(poly(n)) blowup in update time is not necessary to improve these amortized bounds to worst-case.

Theorem 2

There is a fully dynamic algorithm for maintaining a \((3/2 + \epsilon )\)-approximate maximum matching with deterministic worst-case update time \({\hat{O}}\left( \frac{m}{n \cdot \beta } + \beta \right) \) (for our choice of \(\beta \)), that is, \({\hat{O}}(\sqrt{n})\) (where \({\hat{O}}\) hides \(O(poly(n^{o(1)}, 1/\epsilon ))\) factors).

For achieving a better-than-2 approximation, the fastest known worst-case update time of \({\tilde{O}}(\sqrt{n}\root 8 \of {m})\) was shown in [28]. Similarly to the case of \((2 + \epsilon )\)-approximation, an algorithm achieving the same approximation ratio but faster by an \({\tilde{O}}(poly(n))\) factor was shown in [14] using amortization. We again show that such a large blowup is not necessary in order to achieve worst-case update times.

In order to derive the latter result we first show an amortized rebuild based algorithm for maintaining the widely utilized [14, 35,36,37,38,39,40,41] matching sparsifier EDCS, introduced by Bernstein and Stein [35]. At the core of every amortized rebuild based algorithm there is a static algorithm for efficiently recomputing the underlying data-structure. As the EDCS matching sparsifier (as far as we are aware) does not admit a deterministic near-linear time static algorithm, we introduce a relaxed version of the EDCS which we refer to as a 'damaged EDCS'. For constructing a damaged EDCS we show a deterministic \({\tilde{O}}(m)\) time static algorithm. Say that a matching sparsifier (or matching) H is \((\alpha , \delta )\)-approximate if \(\mu (H) \cdot \alpha + n \cdot \delta \ge \mu (G)\). A damaged EDCS is a \((3/2 + \epsilon , \delta )\)-approximate matching sparsifier, as opposed to the EDCS, which is \((3/2+\epsilon )\)-approximate. To counter this we show new reductions from \((\alpha + \epsilon )\)- to \((\alpha , \delta )\)-approximate dynamic matching algorithms based on ideas of [42, 43]. Previous such reductions relied on the oblivious adversary assumption, namely that the input sequence is independent of the choices of the algorithm and is fixed beforehand. Our reductions work against an adaptive adversary, whose decisions may depend on the decisions and random bits of the algorithm. The update time blowup required by the reductions is \({\tilde{O}}(1)\) or \({\hat{O}}(1)\) if the reduction step is randomized or deterministic, respectively. These reductions and the static algorithm for constructing a damaged EDCS might be of independent research interest. Using the randomized reduction we obtain the following corollary:

Corollary 3

The update time bound of Theorem 2 can be improved to \({\tilde{O}}\left( \frac{m}{n \cdot \beta } + \beta \right) \) (or \({\tilde{O}}(\sqrt{n})\)) if we allow for randomization against an adaptive adversary (where \({\tilde{O}}\) hides \(O(poly(\log (n), 1/\epsilon ))\) factors).

1.1 Techniques

We base our approach for improving an amortized rebuild based algorithm to worst-case update time on an observation implicitly stated in Bernstein et al. [33] (Lemma 6.1). Take an arbitrary input sequence of changes I for a dynamic problem and partition it arbitrarily into k contiguous sub-sequences \(I_i: i \in [k]\). If a dynamic algorithm with update time O(T) can (knowing the partitioning) process the input sequence such that the total time spent processing sub-sequence \(I_i\) is \(O(|I_i| \cdot T)\), we call it k batch-dynamic. Note that the update time guarantee of a batch-dynamic algorithm is stronger than that of an amortized update time algorithm but weaker than a worst-case update time bound.

Building on the framework of [33], we show that an \(O(\log (n))\) batch-dynamic algorithm Alg can be used to maintain \({\tilde{O}}(1)\) parallel output tapes with worst-case update time such that at all times at least one output tape contains a valid output of Alg, while only incurring a blowup of \({\tilde{O}}(1)\) in update time. If Alg is an \(\alpha \)-approximate dynamic matching algorithm, then each of the \(O(\log (n))\) output tapes contains a matching. Therefore, the union of the output tapes is an \(\alpha \)-approximate matching sparsifier with maximum degree \(O(\log (n))\), on which we can run the algorithm of Gupta and Peng [31] to maintain an \((\alpha + \epsilon )\)-approximate matching.

Therefore, in order to obtain new worst-case update time dynamic matching algorithms we only have to find batch-dynamic algorithms. We show a framework (building on [33]) for transforming amortized rebuild based dynamic algorithms into batch-dynamic algorithms. On a high level, an amortized rebuild based algorithm allows for a slack of an \(\epsilon \) factor of damage to its underlying data-structure before commencing a rebuild. To make such an algorithm k batch-dynamic, during the processing of the i-th batch we ensure a slack of \(\frac{i \cdot \epsilon }{k}\) instead. This way, once the algorithm finishes processing a batch, it has an \(\frac{\epsilon }{k}\) factor of slack left before it must commence a rebuild, meaning that the next rebuild operation is expected to happen well into the succeeding batch.
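The following minimal sketch shows this growing slack schedule (our own illustration; static_rebuild, apply_update and the size attribute are placeholder hooks for the concrete data-structure):

    def process_batches(batches, static_rebuild, apply_update, eps):
        # k batch-dynamic slack schedule: during batch i we tolerate an
        # (i * eps / k) fraction of damage before rebuilding, so every batch
        # boundary releases a fresh eps / k of slack and the first rebuild
        # inside a batch can be charged to that batch's own updates.
        k = len(batches)
        structure = static_rebuild()
        damage = 0  # updates applied since the last rebuild
        for i, batch in enumerate(batches, start=1):
            threshold = (i * eps / k) * structure.size
            for update in batch:
                apply_update(structure, update)
                damage += 1
                if damage > threshold:
                    structure = static_rebuild()  # expensive static recomputation
                    damage = 0
                    threshold = (i * eps / k) * structure.size
        return structure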

With this general method and some technical effort we show a batch-dynamic version of the \((2 + \epsilon )\)-approximate dynamic matching algorithm of [32] and prove Theorem 1.

In order to obtain a batch-dynamic algorithm for maintaining a \((3/2 + \epsilon )\)-approximate maximum matching more work is required, as the algorithms currently present in the literature for this approximation ratio are not conveniently amortized rebuild based. We introduce a relaxed version of the matching sparsifier EDCS (which first appeared in [35]) called a 'damaged EDCS'. We show that a damaged EDCS can be found in \({\tilde{O}}(m)\) time, that it is robust against \({\tilde{O}}(n \cdot \beta )\) edge updates, and that it has maximum degree \(\beta \), for our choice of \(\beta \). This means we can maintain a damaged EDCS in \({\tilde{O}}\left( \frac{m}{n \cdot \beta }\right) \) amortized update time with periodic rebuilds. We can then run the algorithm of [31] to maintain a matching in the damaged EDCS in \({\tilde{O}}(\beta )\) update time.

1.2 Independent Work

Independently of our work, Grandoni et al. [36] presented a dynamic algorithm for maintaining a \((3/2 + \epsilon )\)-approximate matching with deterministic worst-case update time \(O_\epsilon (m^{1/4})\), where \(O_\epsilon \) hides an \(O(poly(1/\epsilon ))\) dependency.

2 Notations and Preliminaries

Throughout this paper we let \(G = (V,E)\) denote the input graph; n stands for |V| and m stands for the maximum of |E| as the graph undergoes edge updates. \(\deg _E(v)\) denotes the degree of vertex v in edge set E, while \(N_E(v)\) denotes the set of neighbouring vertices of v in edge set E. We will sometimes refer to \(\deg _E(u) + \deg _E(v)\) as the degree of edge (u,v) in E. A matching M of graph G is a subset of vertex disjoint edges of E. \(\mu (G)\) refers to the size of a maximum cardinality matching of G. A matching M is an \(\alpha \)-approximate maximum matching if \(\alpha \cdot |M| \ge \mu (G)\). Define a matching to be \((\alpha , \delta )\)-approximate if \(|M| \cdot \alpha + \delta \cdot n \ge \mu (G)\).

In the dynamic maximum matching problem the task is to maintain a large matching while the graph undergoes edge updates. In this paper we focus on the fully dynamic setting, where the graph undergoes both edge insertions and deletions over time. An algorithm is said to be a dynamic \(\alpha \) (or \((\alpha , \delta )\))-approximate maximum matching algorithm if it maintains an \(\alpha \) (or \((\alpha , \delta )\))-approximate matching at all times. A sub-graph \(H \subseteq E\) is said to be an \(\alpha \) (or \((\alpha , \delta )\))-approximate matching sparsifier if it contains an \(\alpha \) (or \((\alpha , \delta )\))-approximate matching. We will regularly refer to the following influential result from the literature:

Lemma 4

Gupta and Peng [31]: There is a \((1 + \epsilon )\)-approximate maximum matching algorithm for a fully dynamic graph G with deterministic worst-case update time \(O(\Delta /\epsilon ^2)\), given that the maximum degree of G is at most \(\Delta \) at all times.

Throughout the paper the notations \({\tilde{O}}(), {\hat{O}}()\) and \(O_\epsilon ()\) hide \(O(poly(\log (n), 1/\epsilon ))\), \(O(poly(n^{o(1)}, 1/\epsilon ))\) and \(O(poly(1/\epsilon ))\) factors from running times, respectively.

The update time of a dynamic algorithm is worst-case O(T) if it takes at most O(T) time to update the output each time the input undergoes a change. An algorithm's update time is said to be amortized O(T) if, for any integer \(k>0\), over k consecutive changes to the input the algorithm takes \(O(k \cdot T)\) time steps to maintain the output. The recourse of a dynamic algorithm measures the number of changes the algorithm makes to its output per change to the input. Similarly to update time, recourse can be amortized or worst-case.

We call a dynamic algorithm k batch-dynamic with update time O(T) if, for any partitioning of the input sequence I into k contiguous sub-sequences \(I_i: i \in [k]\), the algorithm can process each sub-sequence \(I_i\) in \(O(T \cdot |I_i|)\) total update time. Note that this implies that the worst-case update time during the processing of \(I_i\) is \(O(T \cdot |I_i|)\). The definition is based on [33]. A k batch-dynamic algorithm provides slightly better update time bounds than an amortized update time algorithm, as we can select the k sub-sequences over which the update time is amortized.

We will furthermore be referring to the following recent result from Solomon and Solomon [44]:

Lemma 5

Theorem 1.3 of Solomon and Solomon [44] (phrased slightly differently and trivially generalized to \((\alpha , \delta )\)-approximate matchings): Any fully dynamic \(\alpha \) (or \((\alpha , \delta )\))-approximate maximum matching algorithm with update time O(T) can be transformed into an \((\alpha + \epsilon )\) (or \((\alpha + \epsilon ,\delta )\))-approximate maximum matching algorithm with \(O(T + \frac{\alpha }{\epsilon })\) update time and worst-case recourse of \(O(\frac{\alpha }{\epsilon })\) per update. The update time of the new algorithm is worst-case if that of the underlying matching algorithm is.

Definition 6

Random variables \(X_1,\ldots ,X_n\) are said to be negatively associated if for any non-decreasing functions g, h and disjoint subsets \(I,J \subseteq [n]\) we have that:

$$\begin{aligned} \text {Cov}(g(X_i: i \in I), h(X_j: j \in J)) \le 0 \end{aligned}$$

We will make use of the following influential result bounding the probability that a sum of negatively associated random variables falls far below its expectation.

Lemma 7

(Chernoff bound for negatively associated random variables [45]): Let \({\bar{X}} = \sum _{i \in [n]} X_i\) where \(X_i: i \in [n]\) are negatively associated and \(\forall i \in [n]: X_i \in [0,1]\). Then for all \(\delta \in (0,1)\):

$$\begin{aligned} \Pr [{\bar{X}} \le (1 - \delta ) \cdot {\mathbb {E}}[{\bar{X}}]] \le \exp \left( -\frac{{\mathbb {E}}[{\bar{X}}] \cdot \delta ^2}{2}\right) \end{aligned}$$

3 Batch Dynamic to Worst Case Update Time

3.1 Improving a Batch-Dynamic Algorithm to Amortized Update Time

Lemma 8

Suppose an \(\alpha \)-approximate (or \((\alpha , \epsilon )\)-approximate) dynamic matching algorithm Alg is \(O(\log (n))\) batch-dynamic with update time O(T(n)), and let G be a dynamic graph undergoing edge insertions and deletions. Then there is an algorithm \(Alg'\) which maintains \(O(\log (n))\) matchings of G such that, at all times while processing an input sequence of arbitrarily large polynomial length, one of the matchings is \(\alpha \)-approximate (or \((\alpha , \epsilon )\)-approximate). The update time of \(Alg'\) is worst-case \(O(T(n) \cdot \log ^3(n))\), and \(Alg'\) is deterministic if Alg is deterministic.

As this lemma was implicitly stated in [33] and [34] in a less general setting we defer the proof to Appendix A.

Corollary 9

If there exists an \(\alpha \) (or \((\alpha , \delta )\))-approximate dynamic matching algorithm Alg (where \(\alpha = O(1)\)) which is \(O(\log (n))\) batch-dynamic with update time O(T(n)), then there is an \((\alpha + \epsilon )\) (or \((\alpha + \epsilon , \delta )\))-approximate matching algorithm \(Alg'\) with worst-case update time \(O \left( \frac{T(n) \cdot \log ^3 (n)}{\epsilon ^3}\right) \). If Alg is deterministic, so is \(Alg'\).

Proof

Maintain \(O(\log (n))\) parallel matchings of G using the algorithm from Lemma 8 in \(O(T(n) \cdot \log ^3(n))\) worst-case update time. Their union, say H, is a graph with maximum degree \(O(\log (n))\); it is an \(\alpha \) (or \((\alpha , \delta )\))-approximate matching sparsifier and a union of the outputs of \(O(\log (n))\) dynamic matching algorithms with worst-case update time \(O(T \cdot \log ^2(n))\). By Lemma 5 ([44]) these approximate matching algorithms can be transformed into \((\alpha + \epsilon /2)\) (or \((\alpha + \epsilon /2, \delta )\))-approximate matching algorithms with \(O(T \cdot \log ^2(n) + \frac{\alpha }{\epsilon })\) update time and \(O(\frac{\alpha }{\epsilon })\) worst-case recourse. This bounds the total recourse of the sparsifier by \(O \left( \frac{\log (n) \cdot \alpha }{\epsilon }\right) \). Therefore, with slack parameter \(\frac{\epsilon }{2 \cdot \alpha }\) we can run the algorithm of Lemma 4 ([31]) to maintain an \((\alpha + \epsilon )\) (or \((\alpha + \epsilon , \delta )\))-approximate matching in the sparsifier with worst-case update time \(O \left( T \cdot \log ^3(n) + \frac{\log (n) \cdot \alpha }{\epsilon } + \frac{\log ^2(n) \cdot \alpha ^2}{\epsilon ^3}\right) = O \left( \frac{T \cdot \log ^3(n)}{\epsilon ^3}\right) \). \(\square \)

Observe that the framework outlined by Lemma 8 does not exploit any property of the underlying batch-dynamic algorithm other than the nature of its running time. This allows for a more general formulation of Lemma 8.

Corollary 10

If there is an \(O(\log (n))\) batch-dynamic algorithm Alg with deterministic (randomized) update time O(T(n)) processing a poly(n) length input update sequence I, then there is an algorithm \(Alg'\) such that:

  • The update time of \(Alg'\) is worst-case deterministic (randomized) \(O(T(n) \cdot \log ^3(n))\)

  • \(Alg'\) maintains \(\log (n)\) parallel outputs, and after processing update sequence \(I[0,\tau )\) one of \(Alg'\)'s maintained outputs is equivalent to the output of Alg after processing \(I[0,\tau )\) partitioned into at most \(\log (n)\) batches

4 Vertex Set Sparsification

An \((\alpha , \delta )\)-approximate matching sparsifier H satisfies \(\mu (H) \cdot \alpha + n \cdot \delta \ge \mu (G)\). Selecting \(\delta = \frac{\epsilon \cdot \mu (H)}{n}\) results in an \((\alpha + \epsilon )\)-approximate sparsifier. The algorithm we present in this paper has a polynomial dependence on \(1/\delta \); therefore we cannot simply select the \(\delta \) value required for an \((\alpha + \epsilon )\)-approximate sparsifier when \(\mu (H)\) is significantly smaller than n. To get around this problem we sparsify the vertex set to size \({\hat{O}}(\mu (H))\) while ensuring that the sparsified graph contains a matching of size \((1 - O(\epsilon )) \cdot \mu (G)\).
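To spell out the choice of \(\delta \): substituting \(\delta = \frac{\epsilon \cdot \mu (H)}{n}\) into the definition gives

$$\begin{aligned} \mu (H) \cdot \alpha + n \cdot \frac{\epsilon \cdot \mu (H)}{n} = \mu (H) \cdot (\alpha + \epsilon ) \ge \mu (G), \end{aligned}$$

so H is indeed \((\alpha + \epsilon )\)-approximate; however, \(1/\delta = \frac{n}{\epsilon \cdot \mu (H)}\) may be polynomially large when \(\mu (H) \ll n\), which is exactly the case vertex set sparsification is designed to avoid.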

Let \(V^k\) be a partitioning of the vertices of \(G = (V,E)\) into k sets \(v^i: i \in [k]\). Define the concatenation of G based on \(V^k\) to be the graph \(G_{V^k}\) on k vertices corresponding to the vertex subsets \(v^i\), where there is an edge between vertices \(v^i\) and \(v^j\) if and only if there are \(u \in v^i\) and \(w \in v^j\) such that \((u,w) \in E\). Note that maintaining \(G_{V^k}\) as G undergoes edge changes can be done in constant time per change. Also note that, given a matching \(M_{V^k}\) of \(G_{V^k}\) maintained under edge changes to \(G_{V^k}\), we can maintain a matching of the same size in G in constant update time per change to \(M_{V^k}\).
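A minimal sketch of the concatenation operation (our own illustration; the block-assignment function part is arbitrary); storing one witness edge per block pair is what allows a matching of the concatenation to be lifted back to G:

    def concatenate(edges, part):
        # Build the concatenation: vertex u maps to block part(u); two blocks
        # are adjacent iff G has an edge between them. Each original vertex
        # lies in exactly one block, so a matching of the concatenated graph
        # lifts to a vertex-disjoint set of witness edges in G.
        contracted, witness = set(), {}
        for (u, w) in edges:
            i, j = part(u), part(w)
            if i != j:
                key = (min(i, j), max(i, j))
                contracted.add(key)
                witness.setdefault(key, (u, w))
        return contracted, witness

    # Example with blocks given by part(v) = v % 4: the edge (2, 6) is lost,
    # as both of its endpoints land in block 2 and it becomes a self-loop.
    g_conc, wit = concatenate([(0, 1), (1, 2), (4, 5), (2, 6)], lambda v: v % 4)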

4.1 Vertex Sparsification Against an Oblivious Adversary

Assume we are aware of \(\mu (G)\) (note that we can guess \(\mu (G)\) within a \(1+\epsilon \) multiplicative factor by running \(O(\frac{\log (n)}{\epsilon })\) parallel copies of the algorithm). Choose a partitioning \(V'\) of the vertices of G into \(O(\mu (G)/\epsilon )\) vertex subsets uniformly at random. Define \(G'\) to be the concatenation of G based on \(V'\).

Consider a maximum matching \(M^*\) of G. Its edges have \(2 \cdot \mu (G)\) endpoints. Fix a specific endpoint v. With probability \(\left( 1 - \frac{\epsilon }{\mu (G)}\right) ^{2 \cdot \mu (G) - 1} \ge 1 - O(\epsilon )\) it falls into a vertex set of \(V'\) that no other endpoint of \(M^*\) falls into. Hence, in expectation \(2 \cdot \mu (G) \cdot (1 - O(\epsilon ))\) endpoints of \(M^*\) fall into vertex subsets of \(V'\) which are unique with respect to other endpoints. This also implies that, in expectation, \(\mu (G) \cdot (1 - O(\epsilon ))\) edges of \(M^*\) have both of their endpoints falling into unique vertex sets of \(V'\); hence \(\mu (G') \ge \mu (G) \cdot (1 - O(\epsilon ))\). This observation motivates the following lemma, which can be concluded from [31] and [42, 43].

Lemma 11

Assume there is a dynamic algorithm Alg which maintains an \((\alpha , \delta )\)-approximate maximum matching, where \(\alpha = O(1)\), in graph \(G = (V,E)\) with update time \(O(T(n, \delta ))\). Then there is a randomized dynamic algorithm \(Alg'\) which maintains an \((\alpha +\epsilon )\)-approximate maximum matching in update time \(O \left( T(n, \epsilon ^2) \cdot \frac{\log ^2(n)}{\epsilon ^4}\right) \). If the running time of Alg is worst-case (amortized), so is the running time of \(Alg'\).

(Stated without proof as it follows from [31, 42, 43])

4.2 Vertex Set Sparsification Using \((k,\epsilon )\) Matching Preserving Partitionings

A slight disadvantage of the method described above is that if the adversary is aware of our selection of \(V'\), they might insert a maximum matching within the vertices of a single subset of \(V'\), which would be completely lost after concatenation. In order to counter this we do the following: we choose some L different partitionings of the vertices in such a way that for any matching M of G, most of M's vertices fall into unique subsets in at least one partitioning.

Definition 12

Call a set \(\mathcal {V}\) of partitionings of the vertices of graph \(G = (V,E)\) into d vertex subsets \((k,\epsilon )\) matching preserving if for any matching of size k in G there is a partitioning \(V^{d}_i \in \mathcal {V}\) such that the concatenation \(G'\) of G based on \(V^{d}_i\) satisfies \(\mu (G') \ge (1 - \epsilon ) \cdot k\).

We will show that using randomization we can generate a \((k,\epsilon )\) matching preserving set of partitionings of size \(O \left( \frac{\log (n)}{\epsilon ^2}\right) \) into \(O(k/\epsilon )\) vertex subsets in polynomial time. Furthermore, we will show how to find a \((k,\epsilon )\) matching preserving set of partitionings of size \(O(n^{o(1)})\) into \(O(k \cdot n^{o(1)})\) vertex subsets deterministically in polynomial time.

Lemma 13

Assume there exists a dynamic matching algorithm \(Alg_M\) maintaining an \((\alpha , \delta )\)-approximate matching in update time \(O(T(n,\delta ))\) for \(\alpha = O(1)\), as well as an algorithm \(Alg_S\) generating a \((k,\epsilon )\) matching preserving set of L vertex partitionings into \(O(k \cdot C)\) vertex subsets. Then there exists an algorithm Alg maintaining an \((\alpha + \epsilon )\)-approximate matching with update time \(O \left( T(n,\epsilon /C) \cdot \frac{L^2 \cdot \log ^2(n)}{\epsilon ^4}\right) \). If both \(Alg_S\) and \(Alg_M\) are deterministic, then so is Alg. If \(Alg_M\) is randomized against an adaptive adversary, then so is Alg. If the update time of \(Alg_M\) is worst-case, then so is that of Alg. Alg makes a single call to \(Alg_S\).

The proof of the lemma is deferred to Appendix C.2. The intuition is as follows: by running \(O \left( \frac{\log (n)}{\epsilon }\right) \) parallel copies of the algorithm, guess \(\mu (G)\) within a \(1 + \epsilon \) factor. Knowing \(\mu (G)\), run \(Alg_M\) on the L concatenations of G generated with \(Alg_S\). Each of these concatenated sub-graphs has \(O(\mu (G) \cdot C)\) vertices and maximum matching size \((1 - O(\epsilon )) \cdot \mu (G)\). Therefore, running \(Alg_M\) with \(\delta \)-parameter \(\Theta (\epsilon /C)\) yields an \((\alpha + O(\epsilon ))\)-approximate matching in one of these L graphs. Using the algorithm of [31], find an approximate maximum matching in the union of the \(O_\epsilon (L \cdot \log (n))\) concatenated graphs. Note that with an application of Lemma 5 ([44]) the update time can be improved to \(O \left( \frac{T(n,\epsilon /C) \cdot L \cdot \log (n)}{\epsilon } + \frac{L^2 \cdot \log ^2(n)}{\epsilon ^5}\right) \), as shown in the appendix.

4.3 Generating Matching Preserving Partitionings Through Random Sampling

Lemma 14

There is a randomized polynomial-time algorithm, succeeding with probability \(1 - 1/poly(n)\), for generating a \((k,\epsilon )\) matching preserving set of \(O \left( \frac{\log (n)}{\epsilon ^2}\right) \) partitionings of graph G into \(O(k/\epsilon )\) vertex subsets.

We defer the proof to Appendix C.2. Essentially, \(O \left( \frac{\log (n)}{\epsilon ^2}\right) \) randomly chosen vertex partitionings into \(O(k/\epsilon )\) vertex subsets are \((k,\epsilon )\) matching preserving with high probability. Note that in unbounded time we can find an appropriate set of partitionings deterministically, as we can iterate through all possible sets of partitionings and test each separately.

4.4 Generating Matching Preserving Partitionings Using Expanders

We define expander graphs as follows; such expanders are sometimes called unbalanced or lossless expanders in the literature.

Definition 15

Define a \((k, d, \epsilon )\)-expander graph as a bipartite graph \(G = ((L,R),E)\) such that \(\forall v \in L: deg_E(v) = d\) and for any \(S \subseteq L\) such that \(|S| \le k\) we have that \(|N_E(S)| \ge (1 - \epsilon ) \cdot d \cdot |S|\).

Graph expanders are extensively researched and have found a number of different applications. We now show how an expander graph can be used as the basis of a \((k,\epsilon )\) matching preserving set of partitionings.

Lemma 16

Assume there exists an algorithm Alg which outputs a \((k, d, \epsilon )\)-expander \(G_{exp} = ((L,R),E)\) in \(O(T(k,d,\epsilon ))\) time. Then there is an algorithm \(Alg'\) which outputs a \((k,\epsilon )\) matching preserving set of d vertex partitionings of a vertex set of size |L| into |R| subsets, with running time \(O(T(k, d, \epsilon ))\). \(Alg'\) is deterministic if Alg is deterministic.

Proof

Take graph \(G = (V,E)\) and a bipartite \((2 \cdot k,d,\epsilon /2)\)-expander graph \(G_{Exp} = ((V,R),E')\) such that the vertices of the left partition of \(G_{Exp}\) correspond to the vertices of V. For each \(v \in V\) define an arbitrary ordering of its neighbours in R according to \(E'\), and let \(N_{E'}(v)_i\) be its i-th neighbour according to this ordering (\(i \in [d]\)). For each \(i \in [d]\) and \(v \in R\) define \(V_{i,v} \subseteq V\) to be the set of vertices in V whose i-th neighbour is v (that is, \(V_{i,v} = \{v' \in V: N_{E'}(v')_i = v \}\)).

Define the set of vertex partitionings \(\mathcal {V}=\{ V_i^{|R|}: i \in [d]\}\), where \(V_i^{|R|}\) contains the vertex sets \(V_{i,v}: v \in R\). Fix a matching M in G of size k and call its endpoints \(V_{M}\). By the definition of the expander we have that \(|N_{E'}(V_{M})| \ge (1 - \epsilon /2) \cdot d \cdot 2 k\). Hence, by the pigeonhole principle, we have that \(|N_{E'}(V_{M})_i| \ge (1 - \epsilon /2) \cdot 2 k\) for some \(i \in [d]\). Define \(G'\) as the concatenation of G based on \(V_i^{|R|}\). By the definition of \(V_i^{|R|}\), at least \((1 - \epsilon /2) \cdot 2 k\) endpoints of M are concatenated into vertices of \(G'\) containing exactly one vertex of \(V_M\). Therefore, \((1 - \epsilon ) \cdot k\) edges of M will have both of their endpoints concatenated into unique vertices of \(G'\) with respect to \(V_M\). Hence, \(\mu (G') \ge (1 - \epsilon ) \cdot k\) and \(\mathcal {V}\) is a \((k,\epsilon )\) matching preserving set of partitionings. \(\square \)
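A minimal sketch of this construction (our own illustration; expander_neighbours is an assumed black box returning, for each \(v \in V\), the list of its d neighbours in R):

    def partitionings_from_expander(V, d, expander_neighbours):
        # Partitioning i places vertex v into the block indexed by its i-th
        # expander neighbour, i.e. V_{i,r} = { v in V : N_{E'}(v)_i = r },
        # yielding the d partitionings used in the proof of Lemma 16.
        partitionings = []
        for i in range(d):
            blocks = {}
            for v in V:
                r = expander_neighbours(v)[i]  # i-th neighbour of v in R
                blocks.setdefault(r, []).append(v)
            partitionings.append(blocks)
        return partitionings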

Lemma 17

(Theorem 7.3 of [46] and Proposition 7 of [47]): Given \(n \ge k\) and \(\epsilon > 0\), there exists a \((k,d,\epsilon )\)-expander graph \(G_{exp} = ((L,R),E)\) such that \(|L| = n\), \(|R| = \frac{k \cdot 2^{O(\log ^3(\log (n)/\epsilon ))}}{poly(\epsilon )} = {\hat{O}}(k)\), \(d = 2^{O(\log ^3(\log (n)/\epsilon ))} = {\hat{O}}(1)\), which can be deterministically computed in \({\hat{O}}(n)\) time.

4.5 Black-Box Implications

The following black-box statements can be concluded from this section.

Corollary 18

[42, 43]: If there is a dynamic algorithm for maintaining an \((\alpha , \delta )\)-approximate maximum matching for dynamic graphs in update time \(O(T(n, \delta ))\) then there is a randomized algorithm (against oblivious adversaries) for maintaining an \((\alpha + \epsilon )\)-approximate maximum matching with update time \(O \left( T(n,\epsilon ^2) \cdot \frac{\log ^2(n)}{\epsilon ^4}\right) \).

Corollary 19

If there is a dynamic algorithm for maintaining an \((\alpha , \delta )\)-approximate maximum matching for dynamic graphs in update time \(O(T(n, \delta ))\) then there is a randomized algorithm for maintaining an \((\alpha + \epsilon )\)-approximate maximum matching with update time \(O \left( T(n, \epsilon ^2) \cdot \frac{\log ^4(n)}{\epsilon ^8}\right) \) which works against adaptive adversaries given the underlying algorithm also does.

Proof

Follows from Lemmas 13 and 14. \(\square \)

Corollary 20

If there is a dynamic algorithm for maintaining an \((\alpha , \delta )\)-approximate maximum matching for dynamic graphs in update time \(O(T(n, \delta ))\), then there is an algorithm for maintaining an \((\alpha + \epsilon )\)-approximate maximum matching with update time \({\hat{O}}\left( T \left( n,\frac{poly(\epsilon )}{n^ {o(1)}}\right) \right) \) which is deterministic given that the underlying matching algorithm is deterministic.

Proof

Follows from Lemmas 13, 17 and 16. \(\square \)

5 \((3/2 + \epsilon )\)-Approximate Fully Dynamic Matching in \({\hat{O}}(\sqrt{n})\) Worst-Case Deterministic Update Time

5.1 Algorithm Outline

In this section we present an amortized rebuild based algorithm for maintaining a locally relaxed EDCS which we refer to as a 'damaged EDCS'. The following definition and key property originate from [35] and [38].

Definition 21

From Bernstein and Stein [35]:

Given graph \(G = (V,E)\), \(H \subseteq E\) is a \((\beta , \lambda )\)-EDCS of G if it satisfies the following:

  • \(\forall e \in H: deg_H(e) \le \beta \)

  • \(\forall e \in E\setminus H: deg_H(e) \ge \beta \cdot (1 - \lambda )\)
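As a quick companion to this definition, the following minimal sketch (our own illustration; edges are assumed to be consistently oriented tuples) checks the two EDCS conditions, where the degree of an edge (u,v) in H means \(deg_H(u) + deg_H(v)\) as defined in Sect. 2:

    from collections import Counter

    def is_edcs(E, H, beta, lam):
        # H is a (beta, lambda)-EDCS of G = (V, E) iff every edge of H has
        # edge degree at most beta and every edge of E \ H has edge degree
        # at least beta * (1 - lambda) in H.
        deg = Counter()
        for (u, v) in H:
            deg[u] += 1
            deg[v] += 1
        return (all(deg[u] + deg[v] <= beta for (u, v) in H) and
                all(deg[u] + deg[v] >= beta * (1 - lam)
                    for (u, v) in E if (u, v) not in H))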

Lemma 22

From Assadi and Stein [38]:

If \(\epsilon < 1/2\), \(\lambda \le \frac{\epsilon }{32}\), \(\beta \ge 8 \cdot \lambda ^{-2} \cdot \log (1/\lambda )\) and H is a \((\beta , \lambda )\)-EDCS of G, then \(\mu (G) \le \mu (H) \cdot (\frac{3}{2} + \epsilon )\).

The intuition behind the algorithm is as follows: take a \((\beta , \lambda )\)-EDCS H and relax its parameter bounds slightly by observing that H is also a \((\beta \cdot (1 + \lambda ), 4 \lambda )\)-EDCS. As H is a \((\beta , \lambda )\)-EDCS, for every edge e, \(\Theta (\beta \cdot \lambda )\) edge updates may occur in its local neighbourhood in an arbitrary fashion before either of the two edge degree bounds of a \((\beta \cdot (1 + \lambda ), 4 \lambda )\)-EDCS is violated on e.

Therefore, after \({\tilde{\Theta }}(n \cdot \beta )\) edge updates the properties of a \((\beta \cdot (1 + \lambda ), 4 \lambda )\)-EDCS can only be violated in the local neighbourhood of \(O(\delta \cdot n)\) vertices, for some small \(\delta \) of our choice. At this point the EDCS is locally 'damaged' and its approximation ratio as a matching sparsifier degrades to \((3/2 + O(\epsilon ), \delta )\). However, the reductions appearing in Sect. 4 allow us to improve this approximation ratio back to \((3/2 + O(\epsilon ))\). At this point we commence a rebuild, the cost of which can be amortized over \({\tilde{\Theta }}(n \cdot \beta )\) edge updates.
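The counting behind this is worth making explicit (a rough sketch of the argument made precise in the appendix): each edge update touches the neighbourhoods of two vertices, and a vertex only becomes damaged after \(\Omega (\beta \cdot \lambda )\) updates within its neighbourhood, so after t edge updates the number of damaged vertices is at most

$$\begin{aligned} \frac{2 \cdot t}{\Omega (\beta \cdot \lambda )} = O(\delta \cdot n) \quad \text {whenever} \quad t = O(n \cdot \delta \cdot \lambda \cdot \beta ). \end{aligned}$$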

We then proceed to turn this amortized rebuild based algorithm into a batch-dynamic algorithm which we improve to worst-case update time using Lemma 8.

5.2 Definition and Properties of a \((\beta , \lambda , \delta )\)-Damaged EDCS

In order to base an amortized rebuild based dynamic algorithm on the EDCS matching sparsifier we need an efficient static algorithm for constructing an EDCS. As far as we are aware, there is no known deterministic algorithm for constructing an EDCS in \({\hat{O}}(m)\) time. In order to get around this we introduce a locally relaxed version of the EDCS.

Definition 23

For graph \(G = (V,E)\), a \((\beta , \lambda , \delta )\)-damaged EDCS is a subset of edges \(H \subseteq E\) such that there is a subset of 'damaged' vertices \(V_D \subseteq V\) for which the following properties hold:

  • \(|V_D| \le \delta \cdot |V|\)

  • \(\forall e \in H: deg_H(e) \le \beta \)

  • All \(e \in E {\setminus } H\) such that \(e \cap V_D = \emptyset \) satisfy \(deg_H(e) \ge \beta \cdot (1 - \lambda )\)

Lemma 24

If \(\epsilon < 1/2\), \(\lambda \le \frac{\epsilon }{32}\), \(\beta \ge 8 \lambda ^{-2} \log (1 / \lambda )\) and H is a \((\beta , \lambda ,\delta )\)-damaged EDCS of graph \(G = (V,E)\), then H is a \((3/2 + \epsilon , \delta )\)-approximate matching sparsifier.

Proof

Define the following edge set: \(E' = \{e \in E: e \cap V_D = \emptyset \}\). Observe that H is a \((\beta ,\lambda )\)-EDCS of \(E' \cup H\). Fix a maximum matching \(M^*\) of G. At least \(\mu (G) - |V_D| \ge \mu (G) - \delta \cdot |V|\) edges of \(M^*\) appear in \(E'\), as each vertex of \(V_D\) can appear on at most one edge of \(M^*\). Therefore, \(\mu ((V,E')) \ge \mu (G) - |V| \cdot \delta \). Now the lemma follows from Lemma 22. \(\square \)

5.3 Constructing a Damaged EDCS in Near-linear Time

Algorithm 1: StaticDamagedEDCS

Lemma 25

Algorithm 1 returns \(H_{fin}\), a \((\beta , \lambda , \delta )\)-damaged EDCS of G.

The potential function \(\Phi \) used in the proof of the following lemma is based on [14].

Lemma 26

Algorithm 1 runs in deterministic \(O \left( \frac{m}{\delta \cdot \lambda ^2}\right) \) time.

The proofs of these lemmas are deferred to Appendix B. The intuition is the following: at the start of each iteration we add to H all edges of the graph with \(deg_H(e) < \beta \cdot (1 - \lambda /2)\). If we fail to add at least \(\Omega (\lambda \cdot \delta \cdot \beta \cdot n)\) such edges we terminate, returning H stripped of some edges. At the end of each iteration we remove all edges with \(deg_H(e) > \beta \). Consider what happens if we fail to add \(\Omega (\lambda \cdot \delta \cdot \beta \cdot n)\) edges in an iteration: only in the local neighbourhood of \(\Theta (\delta \cdot n)\) 'damaged' vertices could we have added \(\Omega (\beta \cdot \lambda )\) edges in the last iteration. We strip away the edges around these damaged vertices to obtain the output. The running time argument is based on a potential function \(\Phi \) from [14]: initially it is 0, it is bounded above by \(O(n \cdot \beta ^2)\), and it grows by at least \(\Omega (n \cdot \beta ^2 \cdot \lambda ^2 \cdot \delta )\) in each iteration, bounding the number of iterations by \(O(\frac{1}{\delta \cdot \lambda ^2})\).
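Based on this description, a rough sketch of the construction could look as follows (our reconstruction of the intuition above, not the paper's exact Algorithm 1; the threshold and the damage criterion are indicative, and the precise versions live in Appendix B):

    from collections import Counter

    def static_damaged_edcs(V, E, beta, lam, delta):
        # Sketch of StaticDamagedEDCS; the edge degree of (u, v) in H is
        # deg_H(u) + deg_H(v).
        H, deg, n = set(), Counter(), len(V)
        while True:
            gain, added = Counter(), 0
            for (u, v) in E:  # add every underfull edge to H
                if (u, v) not in H and deg[u] + deg[v] < beta * (1 - lam / 2):
                    H.add((u, v)); deg[u] += 1; deg[v] += 1
                    gain[u] += 1; gain[v] += 1
                    added += 1
            if added < lam * delta * beta * n:  # progress has stalled
                # Only vertices that gained Omega(beta * lam) edges in this
                # iteration can still violate the EDCS bounds; strip them.
                damaged = {w for w, g in gain.items() if g >= beta * lam}
                H_fin = {(u, v) for (u, v) in H
                         if u not in damaged and v not in damaged}
                return H_fin, damaged
            for (u, v) in list(H):  # remove every overfull edge
                if deg[u] + deg[v] > beta:
                    H.discard((u, v)); deg[u] -= 1; deg[v] -= 1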

5.4 Maintaining a Damaged EDCS in \({\tilde{O}}(\frac{m}{n \cdot \beta })\) Update Time with Amortized Rebuilds

Algorithm 2: DynamicDamagedEDCS

Note that \(E_D\) is defined for the purposes of the analysis.

Lemma 27

The sparsifier H maintained by Algorithm 2 is a \((\beta , \lambda , \delta )\)-damaged EDCS of G whenever the algorithm halts.

Lemma 28

The amortized update time of Algorithm 2 over a series of \(\alpha \) updates is \(O \left( \frac{m}{n \cdot \beta \cdot \lambda ^3 \cdot \delta ^2}\right) \) and the sparsifier H undergoes \(O(\frac{1}{\lambda \cdot \delta })\) amortized recourse.

The lemmas are proven in the appendix. On a high level, a \((\beta , O(\lambda ), O(\delta ))\)-damaged EDCS will gain \(O(n \cdot \delta )\) damaged vertices over the span of \(O(n \cdot \delta \cdot \lambda \cdot \beta )\) edge updates, as for a vertex to become damaged there have to be \(\Omega (\beta \cdot \lambda )\) edge updates in its local neighbourhood. At this point we can call a rebuild of the EDCS in \({\tilde{O}}(m)\) time to get an amortized update time of \({\tilde{O}}\left( \frac{m}{\beta \cdot n}\right) \).
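A schematic update handler in this spirit (our own illustration, reusing the hypothetical static_damaged_edcs sketch from Sect. 5.3; the rebuild threshold is indicative up to constants):

    class DynamicDamagedEDCS:
        # Process each edge update in O(1) time and rebuild the sparsifier
        # once Theta(n * delta * lam * beta) updates have accumulated, i.e.
        # once Theta(delta * n) vertices may have become damaged.
        def __init__(self, V, E, beta, lam, delta):
            self.V, self.E = V, set(E)
            self.beta, self.lam, self.delta = beta, lam, delta
            self._rebuild()

        def _rebuild(self):  # O~(m) cost, amortized over the updates
            self.H, _ = static_damaged_edcs(self.V, self.E, self.beta,
                                            self.lam, self.delta)
            self.since_rebuild = 0

        def update(self, u, v, insertion):
            e = (u, v)
            self.E.add(e) if insertion else self.E.discard(e)
            if not insertion:
                self.H.discard(e)  # deleted edges must leave the sparsifier
            self.since_rebuild += 1
            if self.since_rebuild > (len(self.V) * self.delta *
                                     self.lam * self.beta):
                self._rebuild()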

5.5 k Batch-Dynamic Algorithm for Maintaining an Approximate EDCS

Lemma 29

Given a fully dynamic graph G with n vertices and m edges, there is a k batch-dynamic algorithm which maintains a \((\beta , \lambda , \delta )\)-damaged EDCS of G with deterministic update time \(O \left( \frac{k \cdot m}{n \cdot \beta \cdot \delta ^2 \cdot \lambda ^3}\right) \) and recourse \(O \left( \frac{k}{\delta \cdot \lambda }\right) \).

Proof

Define an alternative version of Algorithm 2 where \(\alpha \) is set to \(\alpha _i = i \cdot \frac{\alpha }{k}\) during the processing of the i-th batch. Observe that in the proof of Lemma 27 the only detail which depends on the choice of \(\alpha \) is the size of \(V_{E_D} \cup V_{E_I}\). At any point in this batch-modified version of the algorithm \(\alpha _i \le \alpha \); therefore the correctness of the algorithm follows.

This change affects the running time of the algorithm. As every edge update is processed in constant time, the running time is dominated by calls to StaticDamagedEDCS. By definition, within every batch at least \(\alpha /k\) edge updates occur between the start of the batch and the first rebuild (if there is one), yielding an amortized update time of at most \(O \left( \frac{k \cdot m}{n \cdot \beta \cdot \delta ^2 \cdot \lambda ^3}\right) \) up to the first rebuild (due to Lemma 28). After the first rebuild the algorithm simply proceeds to run with \(\alpha \)-parameter \(\alpha _i\); therefore the amortized update time for the remainder of batch i is \(O \left( \frac{i \cdot m}{n \cdot \beta \cdot \delta ^2 \cdot \lambda ^3}\right) = O \left( \frac{k \cdot m}{n \cdot \beta \cdot \delta ^2 \cdot \lambda ^3}\right) \). \(\square \)

Corollary 30

For fully dynamic graph G there is a deterministic k batch-dynamic algorithm for maintaining a \((3/2+\epsilon , \delta )\)-approximate maximum matching with update time \({\tilde{O}}\left( \frac{k \cdot m}{n \cdot \beta } + k \cdot \beta \right) \).

Proof

Set \(\lambda = \frac{\epsilon }{128}\) and \(\beta \) large enough to satisfy the requirements of Lemma 24, so that the resulting sparsifier is \((3/2 + \epsilon /4, \delta )\)-approximate, and use the algorithm of Lemma 29. The resulting damaged EDCS sparsifier has maximum degree \(O(\beta )\), undergoes \({\tilde{O}}(k)\) recourse per update and takes \({\tilde{O}}\left( \frac{k \cdot m}{n \cdot \beta }\right) \) time to maintain. By Lemma 24 it is a \((3/2 + \epsilon /4, \delta )\)-approximate matching sparsifier. Hence, if we apply the algorithm of Lemma 4 to maintain a \((1 + \epsilon /4)\)-approximate maximum matching within the sparsifier, we can maintain a \((3/2 + \epsilon , \delta )\)-approximate matching in \({\tilde{O}}\left( \frac{m \cdot k}{n \cdot \beta } + \beta \cdot k \right) \) update time and recourse. \(\square \)

5.6 Proof of Theorem 2

Proof

Take the algorithm of Corollary 30. Set \(k = \log (n)\) and apply Corollary 9 to obtain a deterministic \((3/2 + \epsilon , \delta )\)-approximate dynamic matching algorithm with worst-case update time \({\tilde{O}}\left( \frac{m}{n \cdot \beta } + \beta \right) \). Finally, transform this algorithm into a \((3/2 + \epsilon )\)-approximate matching algorithm using either Corollary 19 or Corollary 20. \(\square \)

6 \((2+\epsilon )\)-Approximate Fully Dynamic Maximum Matching in \({\tilde{O}}(1)\) Worst-Case Update Time

In the appendix we present a deterministic worst-case \(O(poly(\log (n), 1/\epsilon ))\) update time \((2 + \epsilon )\)-approximate fully dynamic matching algorithm. Currently, the only deterministic \(O(poly(\log (n), 1/\epsilon ))\) update time algorithms [27, 32] have amortized update time bounds, while the fastest worst-case algorithm, from [28], runs in \({\tilde{O}}(\sqrt{n})\) update time. We first improve the running time bounds of the algorithm presented in [32] to k batch-dynamic using the same technique as before. [32] similarly bases its algorithm on amortized rebuilds, which are triggered when an \(\epsilon \) factor of change occurs within the data-structure. In order to make the update time batch-dynamic we define \(\epsilon _i = \frac{i \cdot \epsilon }{k}\) to be the slack parameter during the processing of batch i. Firstly, this ensures that \(\epsilon _i \le \epsilon \) during every batch, guaranteeing the approximation ratio. Secondly, whenever a new batch begins the slack parameter increases by \(\frac{\epsilon }{k}\), which ensures that there are enough time steps before the next rebuild occurs to amortize the rebuild time over.

Lemma 31

There is a deterministic k batch-dynamic \((2+\epsilon )\)-approximate fully dynamic matching algorithm with update time \(O_\epsilon (k \cdot \log ^4(n))\).

Proof

Due to length constraints, the proof of this lemma is only available in the online version of the paper. \(\square \)

Theorem 1 follows from Lemma 31 and Corollary 9.

7 Open Questions

Worst-Case Update Time Improvement Through Batch-Dynamization: We have shown two applications where batch-dynamization can be used to improve the update time of amortized rebuild based algorithms to worst-case. As amortized rebuilding is a popular method for dynamizing data-structures, not just in the context of matching, it would be interesting to see whether the batch-dynamization based framework has further applications.

\((\alpha , \delta )\)-Approximate Dynamic Matching: In the current dynamic matching literature most algorithms focus on maintaining an \(\alpha \)-approximate matching or matching sparsifier, both for the integral and the fractional version of the problem. However, the more relaxed notion of an \((\alpha , \delta )\)-approximate matching algorithm, combined with the reductions presented in this paper (or in [42, 43]), allows for the general assumption that \(\mu (G) = \Theta (n)\) at all times. This assumption has proven useful in other settings of the matching problem, such as the stochastic setting ([42, 43]), but seems largely unexplored in the dynamic setting.

Damaged EDCS: The EDCS matching sparsifier [35] has found use in a number of different settings of the matching problem [14, 28, 37,38,39,40,41]. In contrast with the EDCS (as far as we are aware), a damaged EDCS admits a deterministic near-linear time static algorithm. This might lead to new results in related settings.