Deterministic Dynamic Matching In Worst-Case Update Time

We present deterministic algorithms for maintaining a $(3/2 + \epsilon)$- and $(2 + \epsilon)$-approximate maximum matching in a fully dynamic graph with worst-case update times $\hat{O}(\sqrt{n})$ and $\tilde{O}(1)$ respectively. The fastest known deterministic worst-case update time algorithms achieving approximation ratios $(2 - \delta)$ (for any $\delta>0$) and $(2 + \epsilon)$ were both shown by Roghani et al. [2021], with update times $O(n^{3/4})$ and $O_\epsilon(\sqrt{n})$ respectively. We close the gap between worst-case and amortized algorithms for the two approximation ratios, as the best deterministic amortized update times for the problem are $O_\epsilon(\sqrt{n})$ and $\tilde{O}(1)$, shown by Bernstein and Stein [SODA'2021] and Bhattacharya and Kiss [ICALP'2021] respectively. In order to achieve both results we explicitly state a method implicitly used in Nanongkai and Saranurak [STOC'2017] and Bernstein et al. [arXiv'2020] which allows one to transform dynamic algorithms capable of processing the input in batches into dynamic algorithms with worst-case update time. \textbf{Independent Work:} Independently and concurrently to our work, Grandoni et al. [arXiv'2021] have presented a fully dynamic algorithm for maintaining a $(3/2 + \epsilon)$-approximate maximum matching with deterministic worst-case update time $O_\epsilon(\sqrt{n})$.

The algorithm achieving (3/2 + ε)-approximation builds on the EDCS concept introduced by the influential paper of Bernstein and Stein [ICALP'2015]. Say that H is an (α, δ)-approximate matching sparsifier if at all times H satisfies µ(H) · α + δ · n ≥ µ(G) (define (α, δ)-approximation similarly for matchings). We show how to maintain a locally damaged version of the EDCS which is a (3/2 + ε, δ)-approximate matching sparsifier. We further show how to reduce the maintenance of an α-approximate maximum matching to the maintenance of an (α, δ)-approximate maximum matching, based on an observation of Assadi et al. [EC'2016]. Our reduction requires an update time blow-up of Ô(1) or Õ(1) and is deterministic or randomized against an adaptive adversary, respectively.
To achieve (2 + ε)-approximation we improve on the update time guarantee of an algorithm of Bhattacharya and Kiss [ICALP'2021].

Introduction
In the dynamic setting our task is to maintain a 'good' solution for some computational problem as the input undergoes updates [1,10,15,16,19,32]. Our goal is to minimize the update time we need to spend in order to update the output when the input undergoes updates. One of the most extensively studied computational problems in the dynamic setting is approximate maximum matching. Our task is to maintain an α-approximate matching M in G, that is, a matching satisfying |M| · α ≥ µ(G) (where µ(G) represents the size of a maximum matching of graph G). Due to the conditional lower bound of [2] the maintenance of an exact maximum matching (a 1-approximate maximum matching) requires polynomial update time. Hence, a long line of papers has focused on the approximation ratio-update time trade-offs achievable for α > 1 [26,30,33,42,20,21,17,50].
If a dynamic algorithm computes the updated output after at most O(T) time following any single change in the input, we say that its update time is worst-case O(T). A slight relaxation of this bound is that the algorithm takes at most O(T · k) total time to maintain the output over k > 0 consecutive updates to the input, for any k; in this case the update time of the algorithm is amortized O(T).
A number of dynamic algorithms in the literature utilize different levels of randomization [47,3,51,8,11,35]. However, currently all known techniques for proving update time lower bounds fail to differentiate between randomized and deterministic dynamic algorithms [2,36,39,40,44]. Hence, understanding the power of randomization in the dynamic setting is an important research agenda. In the case of dynamic matching, getting rid of randomization has proven to be difficult within the realm of Õ(1) update time. While a randomized algorithm with Õ(1) update time was found as early as the influential work of Onak and Rubinfeld [43], the first deterministic algorithm with the same update time was only shown by Bhattacharya et al. [22]. For achieving (2 + ε)-approximation with worst-case update time there is still an O(poly(n)) factor difference between the fastest randomized and deterministic implementations ([3,51] and [46] respectively).
While amortized update time bounds don't tell us anything about worst-case update time, some problems in the dynamic setting have proven to be difficult to solve efficiently without amortization. Notably, for the dynamic connectivity problem the first deterministic amortized update time solution by Holm et al. [37] long preceded the first worst-case update time implementation of Kapron et al. [38], which required randomization.
Both of the algorithms presented in this paper carry the best of both worlds, as they are deterministic and provide new worst-case update time bounds.
Many dynamic algorithms such as [34,48] rely on the robustness of their output. To see this in the context of matching, observe that if a matching M is α-approximate it remains (α · (1 + O(ε)))-approximate even after ε · |M| edge updates. Hence, if we rebuild M after the updates we can amortize the reconstruction cost over ε · |M| time steps. However, such an approach inherently results in an amortized update time bound. In some cases, with additional technical effort, de-amortization was shown to be achievable for these algorithms [34,24]. A natural question to ask is whether an amortized update time bound is avoidable for all amortized rebuild based dynamic algorithms.
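To make the robustness claim concrete, here is a toy back-of-envelope calculation (ours, not the paper's algorithm): each of the d = ε·|M| updates destroys at most one matched edge, so the matching maintained by an α-approximate algorithm stays roughly α/(1 − ε)-approximate between rebuilds.

```python
def worst_ratio_after_deletions(m_size: int, mu: int, eps: float) -> float:
    """Pessimistic approximation ratio after eps * m_size edge deletions.

    m_size: size of the maintained alpha-approximate matching M;
    mu:     maximum matching size of G before the deletions.
    Each deletion removes at most one edge of M, and mu(G) cannot grow
    under deletions, so mu / (m_size - d) upper-bounds the new ratio.
    """
    d = int(eps * m_size)
    return mu / (m_size - d)
```

For a 2-approximate matching of size 100 in a graph with µ(G) = 200 and ε = 0.1, the bound is 200/90 ≈ 2.22 ≤ 2/(1 − 0.1), consistent with the α · (1 + O(ε)) claim above.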
To answer this question we explicitly present a versatile framework for improving the update time bounds of amortized rebuild based algorithms to worst-case while incurring only a Õ(1) blowup in update time. Our framework was implicitly shown in Bernstein et al. [24] and Nanongkai and Saranurak [41]. To demonstrate the framework we present two new results:

Theorem 1.1. There is a deterministic algorithm for maintaining a (2 + ε)-approximate matching in a fully dynamic graph with worst-case update time O_ε(log⁷(n)) = Õ(1) (where O_ε hides O(poly(1/ε)) factors).
For the approximation ratio of (2 + ε) the best known worst-case update time bound of Õ(√n) was shown recently in [46]. However, Õ(1) amortized update time algorithms were previously shown by [22], [48]. We show that an O(poly(n)) blowup in update time is not necessary to improve these bounds to worst-case.

Theorem 1.2. There is a deterministic algorithm for maintaining a (3/2 + ε)-approximate matching in a fully dynamic graph with worst-case update time Ô_ε(√n) (where Ô hides O(poly(n^{o(1)}, 1/ε)) factors).
For achieving better than 2-approximation the fastest known worst-case update time of Õ(√n · ⁸√m) was shown in [46]. Similarly to the case of (2 + ε)-approximation, there is an O(poly(n))-factor faster algorithm achieving the same approximation ratio shown in [17] using amortization. We again show that such a large blowup is not necessary in order to achieve worst-case update times.
In order to derive the latter result we first show an amortized rebuild based algorithm for maintaining the widely utilized [18,17,31,4,6,13,14,5] matching sparsifier EDCS introduced by Bernstein and Stein [18]. At the core of an amortized rebuild based algorithm there is a static algorithm for efficiently recomputing the underlying data-structure. As the EDCS matching sparsifier (as far as we are aware) doesn't admit a deterministic near-linear time static algorithm, we introduce a relaxed version of the EDCS we refer to as a "damaged EDCS". For constructing a damaged EDCS we show a deterministic Õ(m) static algorithm. A damaged EDCS is a (3/2 + ε, δ)-approximate matching sparsifier, as opposed to the EDCS which is (3/2 + ε)-approximate. To counter this we show new reductions from (α + ε) to (α, δ)-approximate dynamic matching algorithms based on ideas of [9], [7]. Previous such reductions relied on the oblivious adversary assumption, namely that the input sequence is independent of the choices of the algorithm and is fixed beforehand. Our reductions work against an adaptive adversary whose decisions may depend on the decisions and random bits of the algorithm. The update time blowup required by the reductions is Õ(1) or Ô(1) if the reduction step is randomized or deterministic, respectively. These reductions and the static algorithm for constructing a damaged EDCS might be of independent research interest. Using the randomized reduction we obtain a (3/2 + ε)-approximate maximum matching algorithm with worst-case update time Õ(√n) if we allow for randomization against an adaptive adversary (where Õ hides O(poly(log(n), 1/ε)) factors).

Techniques
We base our approach for improving an amortized rebuild based algorithm to worst-case update time on an observation implicitly stated in Bernstein et al. [24] (Lemma 6.1). Take an arbitrary input sequence of changes I for a dynamic problem and arbitrarily partition it into k continuous sub-sequences I_1, ..., I_k. If a dynamic algorithm with update time O(T) is such that (knowing the partitioning) it can process the input sequence so that the total time spent processing sub-sequence I_i is O(T · |I_i|), we call it k batch-dynamic. Note that the update time guarantee of a batch-dynamic algorithm is stronger than that of an amortized update time algorithm, but it is weaker than a worst-case update time bound.
Building on the framework of [24] we show that an O(log(n)) batch-dynamic algorithm Alg can be used to maintain Õ(1) parallel output tapes with worst-case update time such that at all times at least one output tape contains a valid output of Alg, while only incurring a blowup of Õ(1) in update time. If Alg is an α-approximate dynamic matching algorithm then each of the O(log(n)) output tapes contains a matching. Therefore, the union of the output tapes is an α-approximate matching sparsifier with maximum degree O(log(n)), on which we can run the algorithm of Gupta and Peng [34] to maintain an (α + ε)-approximate matching.
Therefore, in order to find new worst-case update time dynamic matching algorithms we only have to find batch-dynamic algorithms. We show a framework (building on [24]) for transforming amortized rebuild based dynamic algorithms into batch-dynamic algorithms. On a high level, an amortized rebuild based algorithm allows for a slack of an ε factor of damage to its underlying data-structure before commencing a rebuild. To turn such an algorithm k batch-dynamic, during the processing of the i-th batch we ensure a slack of i · ε/k instead. This way, once the algorithm finishes processing a batch it has an ε/k factor of slack it is allowed to take before commencing a rebuild, meaning that the next rebuild operation is expected to happen well into the succeeding batch.
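The slack schedule can be sketched as follows (a minimal toy simulation with hypothetical parameter names, not the paper's data-structure): the damage threshold grows to (i · ε/k) · n during batch i, so every batch boundary frees up an extra ε/k · n units of slack before the next rebuild.

```python
class BatchedRebuildSchedule:
    """Toy model of an amortized rebuild based structure made k batch-dynamic."""

    def __init__(self, n: int, eps: float, k: int):
        self.n, self.eps, self.k = n, eps, k
        self.damage = 0        # updates absorbed since the last rebuild
        self.rebuilds = []     # (batch index, position inside the batch)

    def process_batch(self, i: int, batch_len: int) -> None:
        threshold = (i * self.eps / self.k) * self.n  # slack for batch i
        for t in range(batch_len):
            self.damage += 1
            if self.damage > threshold:   # too much damage accumulated
                self.rebuilds.append((i, t))
                self.damage = 0
```

Running four batches of 50 updates with n = 100, ε = 0.4, k = 4 triggers the first rebuild only after 11 updates of batch 1, and later batches rebuild less often as their thresholds grow.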
With this general method and some technical effort we show a batch-dynamic version of the (2 + ε)-approximate dynamic matching algorithm of [48] and prove Theorem 1.1.
In order to generate a batch-dynamic algorithm for maintaining a (3/2 + ε)-approximate maximum matching more work is required, as algorithms currently present in the literature for this approximation ratio are not conveniently amortized rebuild based. We introduce a relaxed version of the matching sparsifier EDCS (which initially appeared in [18]) called a 'damaged EDCS'. We further show that a damaged EDCS can be found in Õ(m) time. We show that a damaged EDCS is robust against Õ(n · β) edge updates and has maximum degree β for our choice of β. This means we can maintain the damaged EDCS in Õ(m/(n · β)) amortized update time with periodic rebuilds. We can then run the algorithm of [34] to maintain a matching in the damaged EDCS in Õ(β) update time.

Notations and Preliminaries
Throughout this paper, we let G = (V, E) denote the input graph, n stand for |V|, and m stand for the maximum of |E| as the graph undergoes edge updates. deg_E(v) will stand for the degree of vertex v in edge set E, while N_E(v) stands for the set of neighbouring vertices of v in edge set E.

In the maximum dynamic matching problem the task is to maintain a large matching while the graph undergoes edge updates. In this paper we will be focusing on the fully dynamic setting, where the graph undergoes both edge insertions and deletions over time. An algorithm is said to be a dynamic α (or (α, δ))-approximate maximum matching algorithm if it maintains an α (or (α, δ))-approximate matching at all times. A sub-graph H ⊆ E is said to be an α (or (α, δ))-approximate matching sparsifier if it contains an α (or (α, δ))-approximate matching. We will regularly be referring to the following influential result from the literature:

Lemma 2.1 (Gupta and Peng [34]). There is a (1 + ε)-approximate maximum matching algorithm for a fully dynamic graph G with deterministic worst-case update time O(∆/ε²), given that the maximum degree of G is at most ∆ at all times.
Throughout the paper the notations Õ(), Ô() and O_ε() hide O(poly(log(n), 1/ε)), O(poly(n^{o(1)}, 1/ε)) and O(poly(1/ε)) factors from running times, respectively. The update time of a dynamic algorithm is worst-case O(T) if it takes at most O(T) time to update the output each time the input undergoes a change. An algorithm's update time is said to be amortized O(T) if there is some integer k > 0 such that over k consecutive changes to the input the algorithm takes O(k · T) time to maintain the output. The recourse of a dynamic algorithm measures the number of changes the algorithm makes to its output per change to the input. Similarly to update time, recourse can be amortized or worst-case.
We call a dynamic algorithm k batch-dynamic with update time O(T) if for any partitioning of the input sequence I into k sub-sequences I_i : i ∈ [k], the algorithm can process input sub-sequence I_i in O(T · |I_i|) total update time. Note that this implies an amortized update time of O(T) over each sub-sequence. The definition is based on [24]. A k batch-dynamic algorithm provides slightly better update time bounds than an amortized update time algorithm, as we can select the k sub-sequences to amortize the update time over.
We will furthermore be referring to the following recent result from Solomon and Solomon [49]:

Lemma 2.2 (Theorem 1.3 of Solomon and Solomon [49], phrased slightly differently and trivially generalized to (α, δ)-approximate matchings). Any fully dynamic α (or (α, δ))-approximate maximum matching algorithm with update time O(T) can be transformed into an (α + ε) (or (α + ε, δ))-approximate maximum matching algorithm with O(T + α/ε) update time and worst-case recourse of O(α/ε) per update. The update time of the new algorithm is worst-case if so is the underlying matching algorithm.

Definition 2.3. Random variables X_1, ..., X_n are said to be negatively associated if for any non-decreasing functions f, g and disjoint subsets I, J ⊆ [n] we have that E[f(X_i : i ∈ I) · g(X_j : j ∈ J)] ≤ E[f(X_i : i ∈ I)] · E[g(X_j : j ∈ J)].

We will make use of the following influential result bounding the probability of a sum of negatively associated random variables falling far from its expectation.

Lemma 2.4 (Chernoff bound for negatively associated random variables [27]). Let X = Σ_{i ∈ [n]} X_i be the sum of negatively associated random variables X_i ∈ [0, 1] with µ = E[X]. Then for any δ ∈ (0, 1), Pr[|X − µ| ≥ δ · µ] ≤ 2 · exp(−δ² · µ/3).

Batch-Dynamic To Worst-Case Update Time

Lemma 3.1. If there exists an O(log(n)) batch-dynamic algorithm Alg with update time O(T), then there is an algorithm maintaining O(log(n)) parallel output tapes with worst-case update time O(T · log²(n)) such that at all times at least one output tape contains a valid output of Alg.

As this lemma was implicitly stated in [24] and [41] in a less general setting, we defer the proof to Appendix A.

Corollary 3.2. If there exists an α (or (α, δ))-approximate O(log(n)) batch-dynamic matching algorithm with update time O(T), then there is an (α + ε) (or (α + ε, δ))-approximate dynamic matching algorithm with worst-case update time Õ(T + poly(α/ε)).

Proof. By Lemma 3.1 the union of the output tapes is an α (or (α, δ))-approximate matching sparsifier and is a union of the outputs of O(log(n)) dynamic matching algorithms with worst-case update time O(T · log²(n)). By Lemma 2.2 ([49]) these approximate matching algorithms can be transformed into (α + ε/2) (or (α + ε/2, δ))-approximate matching algorithms with O(T · log²(n) + α/ε) update time and O(α/ε) worst-case recourse. This bounds the total recourse of the sparsifier at O(log(n) · α/ε). Therefore, with slack parameter ε/(2α) we can run the algorithm of Lemma 2.1 ([34]) to maintain an (α + ε) (or (α + ε, δ))-approximate matching in the sparsifier with the stated worst-case update time.
Observe that the framework outlined by Lemma 3.1 has not exploited any property of the underlying batch-dynamic algorithm other than the nature of its running time. This allows for a more general formulation of Lemma 3.1.
Corollary 3.3. If there is an O(log(n)) batch-dynamic algorithm Alg with deterministic (randomized) update time O(T(n)) and a poly(n) length input update sequence I, then there is an algorithm Alg′ such that:

• Alg′ maintains log(n) parallel outputs, and after processing update sequence I[0, τ) one of Alg′'s maintained outputs is equivalent to the output of Alg after processing I[0, τ) partitioned into at most log(n) batches;

• Alg′ has deterministic (randomized) worst-case update time O(T(n) · log²(n)).

Vertex Set Sparsification
An (α, δ)-approximate matching sparsifier H satisfies µ(H) · α + δ · n ≥ µ(G), hence choosing δ = Θ(ε · µ(H)/n) results in an (α + ε)-approximate sparsifier. The algorithm we present in this paper has a polynomial dependence on 1/δ, therefore we can't select the required δ value to receive an (α + ε)-approximate sparsifier if µ(H) is significantly lower than n. To get around this problem we sparsify the vertex set to a size of Ô(µ(H)) while ensuring that the sparsified graph contains a matching of size (1 − O(ε)) · µ(G). Given a partitioning of V into vertex subsets V_k = {v_1, ..., v_k}, define the concatenated graph G_{V_k} on vertex set V_k where there is an edge between vertices v_i and v_j if and only if there is u ∈ v_i and w ∈ v_j such that (u, w) ∈ E. Note that maintaining G_{V_k} as G undergoes edge changes can be done in constant time. Also note that, given a matching M_{V_k} of G_{V_k} maintained under edge changes to G_{V_k} in constant update time per edge change to M_{V_k}, we can maintain a matching of the same size in G.
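The concatenation step can be illustrated with a short sketch (helper names are ours): each block of the partition becomes one super-vertex, each super-edge remembers a witness edge of G, and a matching on super-vertices lifts back to an equally large matching of G.

```python
def concatenate(edges, block_of):
    """Build the concatenated graph. block_of maps each vertex of G to its
    block id; the result maps each super-edge to one witness edge of G."""
    witness = {}
    for (u, w) in edges:
        a, b = block_of[u], block_of[w]
        if a != b:  # edges inside a single block vanish in the concatenation
            witness.setdefault((min(a, b), max(a, b)), (u, w))
    return witness


def lift(super_matching, witness):
    """Map a matching on super-vertices back to edges of G (same size)."""
    return [witness[(min(a, b), max(a, b))] for (a, b) in super_matching]
```

Since distinct super-edges of a matching touch disjoint blocks, their witness edges are vertex-disjoint in G, so the lifted edge set is again a matching.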

Vertex Sparsification Against An Oblivious Adversary
Assume we are aware of µ(G) (note that we can guess µ(G) within a 1 + ε multiplicative factor through running O(log(n)/ε) parallel copies of the algorithm). Choose a partitioning of G's vertices into O(µ(G)/ε) subsets uniformly at random. This observation motivates the following lemma, which can be concluded from [7], [9] and [34].
Lemma 4.1. Assume there is a dynamic algorithm Alg which maintains an (α, δ)-approximate maximum matching, where α = O(1), in graph G = (V, E) with update time O(T(n, δ)). Then there is a randomized dynamic algorithm Alg′ which maintains an (α + ε)-approximate maximum matching in update time Õ(T(n, Θ(ε²))). If the running time of Alg is worst-case (amortized) then so is the running time of Alg′.

Vertex Set Sparsification Using (k, ε) Matching Preserving Partitionings
A slight disadvantage of the method described above is that if the adversary is aware of our selection of V′ they might insert a maximum matching within the vertices of a single subset of V′, which would be completely lost after concatenation. In order to counter this we will do the following: we will choose some L different partitionings of the vertices in such a way that for any matching M of G most of M's vertices fall into unique subsets in at least one partitioning.
Definition 4.2. Call a set of partitionings V of the vertices of graph G = (V, E) into d vertex subsets (k, ε) matching preserving if for any matching of size k in G there is a partitioning in V under which at least (1 − ε) · k edges of the matching have their endpoints in pairwise distinct vertex subsets.

We will show that using randomization we can generate a (k, ε) matching preserving set of partitionings of size O(log(n)/ε²) into O(k/ε) vertex subsets in polynomial time. Furthermore, we will show how to find a (k, ε) matching preserving set of partitionings of size Ô(1) into Ô(k) vertex subsets deterministically in polynomial time.
Lemma 4.3. Assume there is an algorithm Alg_S generating a (k, ε) matching preserving set of partitionings of size L into C vertex subsets in polynomial time, and a dynamic algorithm Alg_M maintaining an (α, δ)-approximate maximum matching with update time O(T(n, δ)). Then there is a dynamic algorithm Alg maintaining an (α + O(ε))-approximate maximum matching with update time O(T(n, Θ(ε/C)) · L · log(n)/ε). If both Alg_S and Alg_M are deterministic then so is Alg. If Alg_M is randomized against an adaptive adversary then so is Alg. If the update time of Alg_M is worst-case then so is that of Alg. Alg makes a single call to Alg_S.
The proof of the lemma is deferred to Appendix B.2. The intuition is as follows: through running O(log(n)/ε) parallel copies of the algorithm, guess µ(G) within a 1 + ε factor. In the knowledge of µ(G), run Alg_M on the L concatenations of G we generate with Alg_S. Each of these concatenated sub-graphs is of size O(µ(G) · C) and has maximum matching size (1 − O(ε)) · µ(G). Therefore, running Alg_M with δ parameter Θ(ε/C) yields an (α + O(ε))-approximate matching in one of these L graphs. Using the algorithm of [34] we find an approximate maximum matching in the union of the O_ε(L · log(n)) concatenated graphs. Note that with an application of Lemma 2.2 ([49]) the update time can be further improved, as shown in the appendix.
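The µ(G)-guessing step admits a one-line sketch (ours): copy j of the algorithm assumes µ(G) ≈ (1 + ε)^j, so O(log(n)/ε) parallel copies suffice for some copy's guess to be within a (1 + ε) factor of the true value at all times.

```python
def matching_size_guesses(n: int, eps: float):
    """Geometric guesses 1, (1+eps), (1+eps)^2, ... covering [1, n/2]."""
    guesses, g = [], 1.0
    while g <= n / 2:
        guesses.append(g)
        g *= 1 + eps
    return guesses
```

The list has length O(log(n)/ε), and every µ ∈ [1, n/2] lies within [g, g · (1 + ε)] for some guess g.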

Generating Matching Preserving Partitionings Through Random Sampling
Lemma 4.4. There is a randomized algorithm generating a (k, ε) matching preserving set of partitionings of size O(log(n)/ε²) into O(k/ε) vertex subsets, running in polynomial time.

We defer the proof to Appendix B.2. Essentially, O(log(n)/ε²) randomly chosen vertex partitionings into O(k/ε) vertex subsets are (k, ε) matching preserving. Note that in unbounded time we can find an appropriate set of partitionings deterministically, as we can iterate through all possible sets of partitionings and test each separately.
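The sampling argument can be sanity-checked with a small simulation (the constants below are illustrative, not the ones from the proof):

```python
import math
import random


def random_partitionings(n: int, k: int, eps: float, seed: int = 0):
    """Sample O(log(n)/eps^2) uniform partitionings into O(k/eps) subsets."""
    rng = random.Random(seed)
    parts = max(1, int(2 * k / eps))              # subsets per partitioning
    copies = max(1, int(math.log(n) / eps ** 2))  # number of partitionings
    return [[rng.randrange(parts) for _ in range(n)] for _ in range(copies)]


def preserved_edges(matching, part):
    """Greedily count matching edges whose endpoints land in fresh,
    pairwise distinct subsets of the partitioning `part`."""
    used, kept = set(), 0
    for (u, w) in matching:
        a, b = part[u], part[w]
        if a != b and a not in used and b not in used:
            used.update((a, b))
            kept += 1
    return kept
```

For a fixed matching of size k, with high probability at least one of the sampled partitionings preserves (1 − O(ε)) · k of its edges.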

Generating Matching Preserving Partitionings Using Expanders
We will define expander graphs as follows: a bipartite graph G = (L ∪ R, E) in which every vertex of L has degree d is a (k, ε)-expander if for every subset S ⊆ L of size at most k we have |N(S)| ≥ (1 − ε) · d · |S|. Such expanders are sometimes called unbalanced or lossless expanders in the literature.
Graph expanders are extensively researched and have found a number of different applications. We will now show how an expander graph can be used as the basis of a (k, ε) matching preserving set of partitionings.
Fix a matching M in G of size k and call its endpoints V_M. By the definition of the expander the neighbourhood of V_M in the expander is of size at least (1 − ε) · d · |V_M|; hence, by the pigeonhole principle, in at least one of the d partitionings derived from the expander most vertices of V_M fall into distinct subsets. This yields a (k, ε) matching preserving set of partitionings which can be deterministically computed in Ô(n) time.

Black-Box Implications
The following statements are black-box statements which can be concluded based on this section.

Corollary 4.8 ([7], [9]). If there is a dynamic algorithm for maintaining an (α, δ)-approximate maximum matching for dynamic graphs in update time O(T(n, δ)), then there is a randomized algorithm (against oblivious adversaries) for maintaining an (α + ε)-approximate maximum matching with update time Õ(T(n, ε²)).

Corollary 4.9. If there is a dynamic algorithm for maintaining an (α, δ)-approximate maximum matching for dynamic graphs in update time O(T(n, δ)), then there is a randomized algorithm for maintaining an (α + ε)-approximate maximum matching with update time O(T(n, ε²) · log⁴(n)/ε⁸) which works against adaptive adversaries, given that the underlying algorithm also does.
Proof. Follows from Lemma 4.3 and Lemma 4.4.
Corollary 4.10. If there is a dynamic algorithm for maintaining an (α, δ)-approximate maximum matching for dynamic graphs in update time O(T(n, δ)), then there is an algorithm for maintaining an (α + ε)-approximate maximum matching with update time Ô(T(n, poly(ε)/n^{o(1)})) which is deterministic, given that the underlying matching algorithm is also deterministic.

Algorithm Outline
In this section we present an amortized rebuild based algorithm for maintaining a locally relaxed EDCS we refer to as a 'damaged EDCS'. The following definition and key property originate from [18] and [6].
Therefore, after Θ(n · β) edge updates the properties of a (β · (1 + λ), 4λ)-EDCS should only be violated in the local neighbourhood of O(δ · n) vertices for some small δ of our choice. At this point the EDCS is locally 'damaged' and its approximation ratio as a matching sparsifier is reduced to (3/2 + O(ε), δ). However, the reductions appearing in Section 4 allow us to improve this approximation ratio to (3/2 + O(ε)). At this point we commence a rebuild, the cost of which can be amortized over the Θ(n · β) edge updates.
We then proceed to turn this amortized rebuild based algorithm into a batch-dynamic algorithm which we improve to worst-case update time using Lemma 3.1.

Definition and Properties of (β, λ, δ)-Damaged EDCS
In order to base an amortized rebuild based dynamic algorithm on the EDCS matching sparsifier we need an efficient algorithm for constructing an EDCS. As far as we are aware there is no known deterministic algorithm for constructing an EDCS in near-linear time. In order to get around this we introduce a locally relaxed version of the EDCS.

Definition 5.3. For graph G = (V, E) a (β, λ, δ)-damaged EDCS is a subset of edges H ⊆ E such that there is a subset of 'damaged' vertices V_D ⊆ V with |V_D| ≤ δ · |V| and the following properties hold: for every edge (u, v) ∈ H we have deg_H(u) + deg_H(v) ≤ β · (1 + λ), and for every edge (u, v) ∈ E where neither u nor v is in V_D we have deg_H(u) + deg_H(v) ≥ β · (1 − λ).

Lemma 5.4. If λ = O(ε) and β is chosen suitably large, and H is a (β, λ, δ)-damaged EDCS of graph G = (V, E), then H is a (3/2 + ε, δ)-approximate matching sparsifier.
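As a sanity check, the damaged EDCS properties can be verified mechanically. The inequalities below follow our reading of the definition (the usual EDCS degree conditions, relaxed on the damaged vertex set V_D); the exact constants may differ from the paper's.

```python
from collections import Counter


def is_damaged_edcs(E, H, beta, lam, delta, n, damaged):
    """Check the three (beta, lambda, delta)-damaged EDCS properties."""
    deg = Counter()
    for (u, v) in H:
        deg[u] += 1
        deg[v] += 1
    if len(damaged) > delta * n:                     # |V_D| <= delta * n
        return False
    for (u, v) in H:                                 # upper bound on H-edges
        if deg[u] + deg[v] > beta * (1 + lam):
            return False
    Hset = {frozenset(e) for e in H}
    for (u, v) in E:                                 # lower bound off V_D
        if frozenset((u, v)) in Hset or u in damaged or v in damaged:
            continue
        if deg[u] + deg[v] < beta * (1 - lam):
            return False
    return True
```

On the path 0-1-2, the single edge (0, 1) forms a valid damaged EDCS for β = 2, λ = 1/2 with no damaged vertices, while the empty edge set does not, unless vertex 1 is declared damaged.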
Proof. Define E′ ⊆ E to be the set of edges with no endpoint in V_D. Then µ((V, E′)) ≥ µ(G) − |V| · δ, as each vertex of V_D can appear on at most one edge of a maximum matching M* of G. Now the lemma follows from Lemma 5.2.

Constructing A Damaged EDCS in Near-Linear Time
The potential function Φ used in the proof of the following lemma is based on [17]:

Maintaining a Damaged EDCS in Õ(m/(n · β)) Amortized Update Time
Note that E_D is defined for the purposes of the analysis.

k Batch-Dynamic Algorithm For Maintaining An Approximate EDCS
Lemma 5.9. Given a fully dynamic graph G with n vertices and m edges, there is a k batch-dynamic algorithm which maintains a (β, λ, δ)-damaged EDCS of the graph with deterministic update time Õ(k · m/(n · β)).

Proof. Define an alternative version of Algorithm 2 where α is simply set to α_i = i · α/k during the processing of the i-th batch. Observe that in the proof of Lemma 5.7 the only detail which depends on the choice of α is the size of V_{E_D} ∪ V_{E_I}. At any point in this batch-modified version of the algorithm α_i ≤ α, therefore the correctness of the algorithm follows.
The running time of the algorithm will be affected by this change. As every edge update is processed in constant time by the algorithm, the running time is dominated by calls to StaticDamagedEDCS. By definition, for every batch at least α/k edge updates will occur between the start of the batch and the first rebuild (if there is one), yielding the claimed amortized update time over the first rebuild (due to Lemma 5.8). After the first rebuild the algorithm simply proceeds to run with α-parameter α_i, therefore the amortized update time for the remainder of batch i follows similarly.

Corollary 5.10. For a fully dynamic graph G there is a deterministic k batch-dynamic algorithm for maintaining a (3/2 + ε, δ)-approximate maximum matching.

Proof (sketch). Set λ = ε/128 and β large enough to satisfy the requirements of Lemma 5.4 such that the resulting sparsifier is (3/2 + ε/4, δ)-approximate. Use the algorithm of Lemma 5.9. The resulting damaged EDCS sparsifier will have maximum degree O(β) and undergo Õ(k) recourse per update.

(2 + ε)-Approximate Fully Dynamic Maximum Matching in Õ(1) Worst-Case Update Time

In the appendix we present a deterministic worst-case O(poly(log(n), 1/ε))-update time (2 + ε)-approximate fully dynamic matching algorithm. Currently, the only deterministic O(poly(log(n), 1/ε))-update time algorithms [22], [48] have amortized update time bounds, while the fastest worst-case algorithm runs in Õ(√n) update time [46]. We will first improve the running time bounds of the algorithm presented in [48] to k batch-dynamic using the same technique as presented previously. [48] similarly bases the algorithm on amortized rebuilds which are triggered when an ε factor of change occurs within the data-structure. In order to improve the update time to batch-dynamic we define ε_i = i · ε/k to be the slack parameter during the processing of batch i. Firstly, this ensures that ε_i ≤ ε during any of the batches processed, guaranteeing the approximation ratio. Secondly, whenever a new batch begins the slack parameter increases by ε/k, which ensures that there will be enough time steps before the next rebuild occurs to amortize the rebuild time over.

Acknowledgements
We would like to thank Sayan Bhattacharya and Thatchaphol Saranurak for helpful discussions. Further, we would like to thank Thatchaphol Saranurak for suggesting the use of lossless expanders to deterministically generate ε-matching preserving partitionings.

Open Questions
Worst-Case Update Time Improvement Through Batch-Dynamization: We have shown two applications of how batch-dynamization can be used to improve amortized rebuild based algorithm update times to worst-case. As amortized rebuild is a popular method for dynamizing a data-structure, not just in the context of matching, it would be interesting to see if the batch-dynamization based framework has any more applications.
(α, δ)-Approximate Dynamic Matching: In current dynamic matching literature most algorithms focus on maintaining an α-approximate matching or matching sparsifier, both for the integral and fractional versions of the problem. However, a more relaxed (α, δ)-approximate matching algorithm, using the reductions presented in this paper (or [7], [9]), allows for the general assumption that µ(G) = Θ(n) at all times. This assumption has proven to be useful in other settings for the matching problem, such as the stochastic setting ([7], [9]), but largely seems to be unexplored in the dynamic setting.
Damaged EDCS: The EDCS matching sparsifier [18] has found use in a number of different settings for the matching problem [17,13,5,46,6,14,4]. In contrast with the EDCS (as far as we are aware), a damaged EDCS admits a deterministic near-linear time static algorithm. This might lead to new results in related settings.
A Proof of Lemma 3.1

We restate the lemma for the reader's convenience.
At a given point in time let |B_i^j| refer to the number of input elements instance A_i has processed as its j-th batch. Note that we will assume that in update time O(T(n) · |B_i^j|) instance A_i can revert back to a state where input batch B_i^j was empty, given that the elements of B_i^j were the last elements of I processed by A_i.
Represent the input elements of I as k-long k-ary strings starting from {0}^k. Choose I[λ] such that λ's k-ary representation ends with the digit i followed by γ > 0 zeros and contains a single digit i. We will now describe what instance A_i will be doing while Alg′ is processing input elements I[λ, λ + k^γ). We will call this process the resetting of batches B_i^{γ+1}, ..., B_i^1.

Resetting the Contents of Batches B_i^{γ+1}, ..., B_i^1: With a slight overload of notation, partition the input sub-sequence I[λ, λ + k^γ) into sub-sequences I^γ, ..., I^0.

• While Alg′ is processing input elements I^γ, instance A_i will revert to the state it was in before processing the contents of the batches B_i^{γ+1}, ..., B_i^1. Then it proceeds to place all these elements into B_i^{γ+1} as a single batch.
• While Alg′ is processing input elements I^j for γ > j > 0, instance A_i will process input elements I^{j+1} as batch B_i^{j+1}.
• While Alg′ is processing the input element I^0, instance A_i will process input elements I^1 ∪ I^0.

If A_i is not resetting batches it is processing single elements of the input string.

Processing Single Elements of The Input String:
If the first k − 1 digits of the k-ary representation of λ don't contain a single digit i, then while Alg′ is processing I[λ] instance A_i will extend its last batch B_i^1 with input element I[λ]. These two cases describe the behaviour of A_i over the whole of I. If A_i is processing a single input element, at any point in time its output is an α (or (α, δ))-approximate matching. Also observe that for any λ there is a digit i ∈ [k] in its k-ary representation which is not one of its first k − 1 digits. By definition, this implies that A_i will be processing I[λ] as a single input element. Hence, the output of A_i will be an α (or (α, δ))-approximate matching for some i at all time steps.

Claim A.1. At all times, for all j ∈ [k] and i ∈ {0, ..., k − 1}, it holds that |B_i^j| ≤ (j + 1) · k^j.
Proof. We will prove the claim through induction on $j$. Fix $i$. Whenever the contents of $B_i^1$ are reset it will be set to contain exactly $k$ input elements. If the contents of $B_i^1$ are not reset, it is extended by at most one input element per element progressed by $Alg'$. However, over the course of $k$ consecutive input elements being progressed by $Alg'$, batch $B_i^1$ must be reset. Therefore, $B_i^1$ will never contain more than $2 \cdot k - 1$ elements.
Assume that $|B_i^j| \le (j + 1) \cdot k^j$ at all times as an inductive hypothesis. Consider how many elements $B_i^{j+1}$ may contain. Whenever $B_i^{j+1}$ is reset it will be set to contain exactly $(k - 1) \cdot k^j$ elements. Furthermore, whenever $B_i^j$ is reset, $B_i^{j+1}$ is extended by the contents of $B_i^j, \ldots, B_i^1$. These are the only cases when $B_i^{j+1}$ may be extended by any input elements.
Proof. To bound worst-case running times we differentiate two cases. Firstly, if $I[\lambda]$ is progressed as a single input element by $A_i$, then $A_i$ will extend its smallest batch $B_i^1$.

Fix $\lambda$ as described previously, such that its $k$-ary representation contains a single digit $i$ followed by $\gamma > 0$ zeroes, so that $A_i$ will be resetting batches $B_i^\gamma, \ldots, B_i^1$ while $Alg'$ is processing $I[\lambda, \lambda + k^\gamma)$. Define $I^j$ for $\gamma \ge j \ge 0$ as before. While $Alg'$ is processing $I^\gamma$, instance $A_i$ has to revert to the state before processing any of $B_i^{\gamma+1}, \ldots, B_i^1$ and progress their contents as a single batch into $B_i^{\gamma+1}$. This concerns the backtracking and processing of $O(k^{\gamma+1} \cdot \gamma)$ input elements by Claim A.1. The computational work required to complete this can be distributed evenly over the time period $Alg'$ is handling $I^\gamma$, as this computation doesn't require $A_i$ to know the contents of $I^\gamma$. Similarly, over the course of $Alg'$ processing $I^j$, which consists of $\Theta(k^j)$ elements, we can distribute the $O(T(n) \cdot k^{j+1})$ total work of processing $I^{j+1}$ into batch $B_i^{j+1}$ evenly, resulting in $O(T(n) \cdot k)$ worst-case update time. Finally, instance $A_i$ processing $I^1 \cup I^0$ while $Alg'$ progresses $I^0$ will take

11: $M^* \leftarrow$ Maintain a $(1 + \epsilon/(8\alpha))$-approximate maximum matching of $(V, E')$ with Lemma 2.1

Claim B.1. Algorithm 3 maintains an $(\alpha + \epsilon)$-approximate maximum matching.

B.2 Proof of Lemma 4.4
Proof. For graph $G = (V, E)$ generate $L = \frac{512 \cdot \log(n)}{\epsilon^2}$ vertex partitionings into $d = \frac{4 \cdot (2k)}{\epsilon}$ sets at random. Call the set of partitionings $V = \{V^j : j \in [L]\}$ and let $V_i^j$ stand for the $i$-th vertex set of the $j$-th partitioning. Fix $2k$ vertices $S$ of $V$ arbitrarily to represent the endpoints of a matching of size $k$ in $G$. Let $Y_l^{j,i}$ be the indicator variable of the $l$-th vertex of $S$ falling into the $i$-th subset $V_i^j$. This turns the random variables into the well-known balls and bins experiment. By [28] (this can also be considered a folklore fact), these random variables are negatively associated. By Theorem 2 of [29], monotonically increasing functions defined on disjoint subsets of a set of negatively associated random variables are negatively associated. As max is monotonically increasing, this implies that the variables $X_i^j : i \in [d]$ (where $X_i^j$ indicates $S \cap V_i^j \neq \emptyset$) are also negatively associated.

$E[X_i^j] = 1 - (1 - 1/d)^{2k}$, hence

$$E[X^j] = d \cdot \left(1 - (1 - 1/d)^{2k}\right) \ge 2k - \frac{(2k)^2}{2d} = 2k \cdot (1 - \epsilon/8).$$

Therefore, $E[X^j] \ge 2k \cdot (1 - \epsilon/8)$. Now we apply Chernoff's inequality for negatively associated random variables to get that:

$$\Pr\left[X^j \le 2k \cdot (1 - \epsilon/4)\right] \le e^{-\frac{2k \cdot \epsilon^2}{128}}.$$
This implies that $\Pr[\forall j \in [L] : X^j \le 2k \cdot (1 - \epsilon/4)] \le e^{-\frac{2k \cdot \epsilon^2 \cdot L}{128}} = n^{-8k}$. Further applying a union bound over the $n^{2k}$ possible choices of $S$ yields that, regardless of the choice of $S$, with probability $1 - 1/\text{poly}(n)$ there is some $j \in [L]$ where at least $2k \cdot (1 - \epsilon/4)$ of the vertex sets of $V^j$ contain a vertex of $S$. This implies that there can be at most $2k \cdot (\epsilon/2)$ vertices of $S$ sharing a vertex set of $V^j$ with another vertex of $S$. Furthermore, if $S$ represents the endpoints of a matching of size $k$, at least $k \cdot (1 - \epsilon)$ of its edges will have both their endpoints assigned to unique vertex sets of $V^j$ with respect to $S$. This implies that the concatenation of $G$ based on $V^j$ will preserve a $1 - \epsilon$ fraction of any matching of size $k$ from $G$. Therefore, $V$ is a $(k, \epsilon)$ matching preserving set of partitionings for $G$.
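As a quick numerical sanity check of the expectation bound $E[X^j] \ge 2k \cdot (1 - \epsilon/8)$ used above, one can evaluate the exact expected number of nonempty bins when $2k$ balls land in $d = 8k/\epsilon$ bins (the concrete values of $k$ and $\epsilon$ below are arbitrary):

```python
def expected_nonempty_bins(balls, bins):
    # E[# nonempty bins] when `balls` balls land uniformly at random in `bins` bins
    return bins * (1.0 - (1.0 - 1.0 / bins) ** balls)

# For d = 8k/eps bins and 2k balls the expectation is at least 2k * (1 - eps/8).
for k in (10, 100, 1000):
    for eps in (0.5, 0.25, 0.125):
        d = int(8 * k / eps)  # exact for these sample values
        assert expected_nonempty_bins(2 * k, d) >= 2 * k * (1 - eps / 8)
```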
Note that while we can simply sample the partitionings randomly in polynomial time, we could also consider all possible sets of partitionings and check whether any of them is $(k, \epsilon)$ matching preserving for all possible choices of $S \subseteq V$. From the fact that a random sampling based approach succeeds with positive probability we know that there is a set of $(k, \epsilon)$ matching preserving partitionings, therefore we will eventually find one deterministically.

C Missing Proofs of Section 5
C.1 Proof of Lemma 5.5

Proof. Let $E'_{fin}$ represent the state of $E'$ at termination. First let's argue that $\forall e \in H_{fin} : \deg_{H_{fin}}(e) \le \beta$. At the end of the penultimate iteration of the outer loop, $H$ must have maximum edge degree of $\beta \cdot (1 - \lambda/4)$. $H$ is then extended with edges of $E' \setminus E_D$, which has a maximum degree of

Take an edge $e \in E \setminus H_{fin}$ which doesn't intersect $V_D$. As all such edges with lower than $\beta \cdot (1 - \lambda/2)$ edge degree in $E$ were added to

C.2 Proof of Lemma 5.6

Proof. Observe that every iteration of the repeat loop runs in $O(m)$ time, as each iteration can be executed over a constant number of passes over the edge set. Define the potential function $\Phi$ (following [17]). Initially $\Phi(H) = 0$, and at all times $\Phi(H) \le \beta^2 \cdot n$. We will show that $\Phi(H)$ monotonically increases over the run of the algorithm and that each iteration of the repeat loop (except for the last one) increases it by at least $\Omega(n \cdot \beta^2 \cdot \lambda^2 \cdot \delta)$.

C.3 Proof of Lemma 5.7
Proof. Every time $H$ is rebuilt through StaticDamagedEDCS the lemma statement is satisfied (by Lemma 5.5). Focus on one period of $\alpha$ updates after a rebuild. Define $E_D$ and $E_I$ to be the sets of edges deleted and inserted over these updates respectively (note that $E_D \cap E_I$ might not be empty). As after a call to Algorithm 1 the sparsifier $H$ is a $(\frac{\beta}{1 + \lambda/4}, \lambda/4, \delta/2)$-damaged EDCS (this follows from Lemma 5.5) and the following holds for some

Step 3: Maintain a $(1 + \epsilon)$-approximate integral maximum matching in $E'$.
In this process Step 1 is executed using the algorithm of [23], while Step 3 uses the algorithm of [45]. Both of these algorithms have worst-case update time bounds, therefore they are also $k$ batch-dynamic for any $k$. Step 2 of this process involves amortization, which we will relax into a $k$ batch-dynamic guarantee.
Step 2 is executed in two phases. Firstly, in phase A), $w$ is discretized. This is achieved through defining a fractional matching $w_r$ such that $w_r(e)$ is the largest power of $(1 + \epsilon)^{-1}$ smaller than $w(e)$ (edges with very small, less than $\epsilon/n^2$, weight are ignored). By definition $w_r$ is the union of $O(\frac{\log(n)}{\epsilon})$ uniform fractional matchings, and $\text{size}(w_r) \cdot (1 + \epsilon) \ge \text{size}(w)$. The uniform fractional matchings are then maintained separately and their union is returned as the sparsifier.
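A minimal sketch of phase A)'s rounding (hypothetical function and container names; the actual algorithm maintains these weight classes dynamically rather than recomputing them):

```python
import math

def discretize(w, eps, n):
    """Group edges by the power of (1+eps)^-1 their weight rounds down to.
    Each group is a uniform fractional matching; there are O(log(n)/eps)
    nonempty groups, since all kept weights lie in [eps/n^2, 1]."""
    classes = {}
    for e, weight in w.items():
        if weight < eps / n ** 2:
            continue  # negligible weight, ignored
        # smallest r with (1+eps)^-r <= weight, i.e. round the weight down
        r = math.ceil(math.log(1.0 / weight, 1.0 + eps))
        classes.setdefault(r, []).append(e)
    return classes
```

For example, with $\epsilon = 0.5$ an edge of weight $0.3$ lands in class $r = 3$, since $1.5^{-3} \approx 0.296 \le 0.3 < 1.5^{-2}$; rounding loses at most a $(1+\epsilon)$ factor per edge.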
In phase B) the algorithm sparsifies the $O(\frac{\log(n)}{\epsilon})$ uniform fractional matchings in parallel. Specifically, assume $w_\lambda : E_\lambda \to \lambda$ is one of these uniform fractional matchings. Taking $w_\lambda$ as an input, Algorithms 4 to 9 maintain $w'_\lambda : E'_\lambda \to [0, 1]$ such that the stated properties hold for constants $\lambda, \beta$ (by Lemma 4.4 of [48]). Furthermore, the arboricity of $E'_\lambda$ is bounded, and $w'_\lambda$ is maintained in $O(\epsilon^{-1} \cdot \log(\frac{\beta}{\lambda}))$ amortized update time (Lemma 4.3 and Lemma 4.5 of [48]). As phase A), the discretization of $w$, can be done in $O(1)$ worst-case update time, we will be focusing on the $k$ batch-dynamization of phase B). We will show that by simply adjusting the slack parameters we are able to guarantee all of the properties of $E'_\lambda$ and $w'_\lambda$ above, while changing the running time from amortized $O(\epsilon^{-1} \cdot \log(\frac{\beta}{\lambda}))$ to $k$ batch amortized $O(k \cdot \epsilon^{-1} \cdot \log(\frac{\beta}{\lambda}))$. Substituting the modified version of phase B) into the framework of [48], we therefore get a $k$ batch algorithm with an update time which is an $O(k)$-factor slower.
The only modifications we need to make concern the lines highlighted in red in Algorithm 7 and Algorithm 9. Partition the input sequence $I$ into batches $I_i : i \in [k]$. Administer the following modification (highlighted with blue text): while progressing batch $i$, exchange the slack parameter $\epsilon$ with $\epsilon_i = \frac{\epsilon \cdot i}{k}$. As the modified algorithm will be operating with strictly tighter slack parameters at all times, all requirements of Lemmas 4.3 and 4.4 of [48] will be enforced at all times. The only challenge is to argue about the update time of the algorithm.
Firstly, let's consider how much time handling edge insertions through Algorithm 7 will take. Assume that the progressing of input batch $I_{i-1}$ has finished at time $\tau_0$ and the algorithm currently halts before starting to progress the elements of input batch $I_i$. By the description of Algorithm 7, at

D.1 Sub-Routines From [48]

These sub-routines are copies from [48] and are included just for the reader's convenience. The changes made are to the rows highlighted with red in Algorithm 7 and Algorithm 9.

Lemma 4.3. Assume there exists a dynamic matching algorithm $Alg_M$ maintaining an $(\alpha, \delta)$-approximate matching in update time $O(T(n, \delta))$ for $\alpha = O(1)$, as well as an algorithm $Alg_S$ generating a $(k, \epsilon)$ matching preserving set of vertex partitionings into $O(k \cdot C)$ vertex subsets of size $L$. Then there exists an algorithm $Alg$ maintaining an $(\alpha + \epsilon)$-approximate matching with update time

Lemma 4.4. There is a randomized algorithm succeeding with $1 - 1/\text{poly}(n)$ probability for generating a $(k, \epsilon)$ matching preserving set of partitionings of graph $G$ into $O(k/\epsilon)$ vertex subsets of size $O(\log(n) \cdot \ldots)$

Lemma 4.6. Assume there exists an algorithm $Alg$ which outputs a $(k, d, \epsilon)$-expander $G_{exp} = ((L, R), E)$ in $O(T(k, d, \epsilon))$ time. Then there is an algorithm $Alg'$ which outputs a set of $(k, \epsilon)$ matching preserving vertex partitionings of a vertex set of size $|L|$ into $|R|$ subsets of size $d$ with running time $O(|R| \cdot \ldots)$

at least $(1 - \epsilon/2) \cdot 2k$ endpoints of $M$ are concatenated into vertices of $G'$ containing exactly one vertex of $V_M$. Therefore, $(1 - \epsilon) \cdot k$ edges of $M$ will have both their endpoints concatenated into unique vertices of $G'$ within $M$. Hence, $\mu(G') \ge (1 - \epsilon) \cdot k$ and $V$ is a $(k, \epsilon)$ matching preserving set of partitionings.

Lemma 5.6. Algorithm 1 runs in deterministic $O(\frac{m}{\delta \cdot \lambda^2})$ time.

The proofs of the lemmas are deferred to Appendix C. The intuition is the following: at the start of each iteration we add to $H$ all edges of the graph which have $\deg_H(e) < \beta \cdot (1 - \lambda/2)$. If we fail to add at least $\Omega(\lambda \cdot \delta \cdot \beta \cdot n)$ such edges we terminate, with $H$ stripped of some edges. At the end of each iteration we remove all edges such that $\deg_H(e) > \beta$. Consider what happens if we fail to add $\Omega(\lambda \cdot \delta \cdot \beta \cdot n)$ edges in an iteration. That means that only in the local neighbourhood of $\Theta(\delta \cdot n)$ 'damaged' vertices could we have added $\Omega(\beta \cdot \lambda)$ edges in the last iteration. We strip away the edges around damaged vertices to get $H$. The running time argument is based on a potential function $\Phi$ from [17]. Initially it is $0$, it has an upper bound of $O(n \cdot \beta^2)$, and it grows by at least $\Omega(n \cdot \beta^2 \cdot \lambda^2 \cdot \delta)$ in each iteration, bounding the number of iterations by $O(\frac{1}{\delta \cdot \lambda^2})$.

Lemma 5.8. The amortized update time of Algorithm 2 over a series of $\alpha$ updates is $O(\frac{m}{n \cdot \beta \cdot \lambda^3 \cdot \delta^2})$ and the sparsifier $H$ undergoes $O(\frac{1}{\lambda \cdot \delta})$ amortized recourse.

The lemmas are proven in the appendix. On a high level, a $(\beta, O(\lambda), O(\delta))$-damaged EDCS will gain $O(n \cdot \delta)$ damaged vertices in the span of $O(n \cdot \delta \cdot \lambda \cdot \beta)$ edge updates, as for a vertex to become damaged there have to be $\Omega(\beta \cdot \lambda)$ edge updates in its local neighbourhood. At this point we can call a rebuild of the EDCS in $\tilde{O}(m)$ time to get an amortized update time of $\tilde{O}(\frac{m}{\beta \cdot n})$.

Lemma 6.1. There is a deterministic $k$ batch amortized $O_\epsilon(k \cdot \log^4(n))$ update time $(2 + \epsilon)$-approximate fully dynamic matching algorithm.

Proof. Deferred to Appendix D, as most of the argument is taken from [48].

Theorem 1.1 follows from Lemma 6.1 and Corollary 3.2.

$B_i^1$ with $I[\lambda]$. As at all times $|B_i^1| \le 2 \cdot k$ due to Claim A.1, this can be done in worst-case update time $O(T(n) \cdot k)$.

$O(T(n) \cdot k)$ worst-case update time. As there are $k$ instances of $Alg$ running as described in parallel, this takes a total of $O(T(n) \cdot k^2)$ worst-case update time. It remains to select $k = O(\log(n))$ so that the algorithm can progress an input of length $k^k = \log(n)^{\log(n)} = n^{\log(\log(n))}$, that is, input sequences of arbitrarily large polynomial length for large enough $n$.

B Missing Proofs from Section 4

B.1 Proof of Lemma 4.3

Algorithm 3: Vertex Sparsification
Input: $G = (V, E)$, $Alg_M$, $Alg_S$
Output:

Proof. The maintenance of $M_j^i$ will take $O(T(n, \epsilon/C))$ update time for specific values of $i, j$. As $\alpha = O(1)$, $i$ will range in $[O(\frac{\log(n)}{\epsilon})]$. Therefore, the algorithm maintains $O(\frac{L \cdot \log(n)}{\epsilon})$ matchings in parallel using $Alg_M$. This means $E'$ has maximum degree $O(\frac{L \cdot \log(n)}{\epsilon})$ and can be maintained in update time $O(T(n, \epsilon/C) \cdot \frac{L \cdot \log(n)}{\epsilon})$, and it may undergo the same amount of recourse. Hence, with the invocation of the algorithm from Lemma 2.1 the total update time is $O(T(n, \epsilon/C) \cdot \frac{L^2 \cdot \log^2(n)}{\epsilon^4})$. The two claims conclude Lemma 4.3. Do note that the update time can be slightly improved to $O(T(n, \epsilon/C) \cdot \ldots)$.

$\Phi(H)$ may change at times when edges are added to or removed from $H$. Whenever $e$ is removed from $H$ we know that $\deg_H(e) > \beta \cdot (1 - \lambda/4)$ (before the deletion). This means that $\Phi_1(H)$ decreases by $2\beta \cdot (1 - \lambda/4) - 1$, but $\Phi_2(H)$ also decreases by at least $2 \cdot \beta \cdot (1 - \lambda/4)$. This is because $\deg_H(e)$ disappears from the sum of $\Phi_2(H)$, $\deg_H(e) - 2$ elements of the sum (degrees of edges neighbouring $e$) reduce by $1$, and $\deg_H(e) \ge \beta \cdot (1 - \lambda/4) + 1$. Hence, $\Phi(H)$ increases by at least $1$. Whenever an edge $e$ is added to $H$ we know that $\deg_H(e) < \beta \cdot (1 - \lambda/2)$ (before the insertion). Due to the insertion $\Phi_1(H)$ increases by exactly $2 \cdot \beta - 1$, while $\Phi_2(H)$ increases by at most $2 \cdot \beta \cdot (1 - \lambda/2)$, as a term of at most $\beta \cdot (1 - \lambda/2) + 1$ is added to its sum and at most $\beta \cdot (1 - \lambda/2) - 1$ elements of its sum increase by $1$. Therefore, $\Phi(H)$ increases by at least $\lambda \cdot \beta$. In every iteration but the last one of the repeat loop at least $\frac{\lambda \cdot \beta \cdot \delta \cdot n}{16}$ edges were added to $H$. This means every iteration increases $\Phi(H)$ by at least $\Omega(\lambda^2 \cdot \beta^2 \cdot \delta \cdot n)$.
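Writing $\Phi = \Phi_1 - \Phi_2$ (the sign convention the deltas above suggest), the two cases combine as follows; note the insertion bound comes out as $\lambda\beta - 1$, which is $\Omega(\lambda \cdot \beta)$ whenever $\lambda \cdot \beta = \omega(1)$:

```latex
\Delta\Phi_{\mathrm{del}} \;\ge\; -\bigl(2\beta(1-\lambda/4)-1\bigr) \;+\; 2\beta(1-\lambda/4) \;=\; 1,
\qquad
\Delta\Phi_{\mathrm{ins}} \;\ge\; (2\beta-1) \;-\; 2\beta(1-\lambda/2) \;=\; \lambda\beta - 1.
```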

this point in time $|E^{(p)}| \le \frac{\epsilon \cdot (i-1)}{k} \cdot |E^{(a)}|$, as enforced while progressing batch $I_{i-1}$. The next time Algorithm 7 is triggered to do more than $O(1)$ work is when $|E^{(p)}|$ exceeds $\frac{\epsilon \cdot i}{k} \cdot |E^{(a)}|$; call this a critical event. As during a single edge update either $|E^{(p)}|$ increases by one (in case of an insertion) or $|E^{(a)}|$ decreases by one (in case of a deletion), there will be at least $\Omega(|E^{(a)}| \cdot \frac{\epsilon}{k})$ edge updates before a critical event. Hence, if such an event doesn't occur during the progressing of batch $I_i$, then the handling of insertions will require $O(|I_i|)$ time over the whole batch. If such an event occurs at time $\tau$, then a call to Static-Uniform-Sparsify and Clean-Up will be made. Both of these subroutines will run in $O(|E^{(a)}|)$ total time (by Lemma 3.3 of [48] and by observation). Therefore, to handle updates from time $\tau_0$ to $\tau$ the algorithm will require $O(\tau - \tau_0 + |E^{(a)}|) = O((\tau - \tau_0) \cdot \frac{k}{\epsilon})$ time. After the update at $\tau$ is progressed, the invariant $|E^{(a)}| \cdot \frac{\epsilon \cdot (i-1)}{k} \ge |E^{(p)}|$ is again satisfied (as $E^{(p)}$ is emptied by Clean-Up), therefore we can inductively argue that the processing of the whole batch will take $O(|I_i| \cdot \frac{k}{\epsilon})$ time.

Similarly, let's consider how much update time handling deletions through Algorithm 9 takes. Again assume that at time $\tau_0$ batch $i - 1$ has just been progressed. Note that $L = O(\log(\frac{\beta}{\lambda}))$. Firstly, given the deletion of edge $e$ we might need to add it to the edge sets $D^{(\ge i)}$, which can take up to $O(L)$ time. The time required to handle deletions will be dominated by the time it takes to run the Clean-Up and Rebuild subroutines when the If statement (highlighted with blue) is satisfied. The If statement considers $L + 1$ different layers. Focus on a specific layer $j$. At time $\tau_0$ we are guaranteed that $|D^{(\ge j)}| \le \frac{\epsilon \cdot (i-1)}{k} \cdot |E^{(\ge j)}|$. An edge deletion may increase $|D^{(\ge j)}|$ by one, or reset $|D^{(\ge j)}|$ to $0$ through a rebuild on another layer. This means that there will be at least $\Omega(\frac{\epsilon}{k} \cdot |E^{(\ge j)}|)$ edge changes before the
If statement of level $j$ may be satisfied. At that point Clean-Up and Rebuild will take $O(|E^{(\ge j)}|)$ total time to complete (by Lemma 3.3 of [48]), and $|D^{(\ge j)}| \le \frac{\epsilon \cdot (i-1)}{k} \cdot |E^{(\ge j)}|$ is again satisfied, as $D^{(\ge j)}$ is emptied. Hence, we can again argue that it will take at most $O(|I_i| \cdot \frac{k}{\epsilon})$ total time to satisfy the If condition on the $j$-th layer during input batch $I_i$. Therefore, the total update time of the algorithm over batch $I_i$ is in $O(L \cdot \frac{k}{\epsilon} \cdot |I_i|)$, as required. This finishes the proof.
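The insertion-side threshold argument can be illustrated with a toy counter simulation (hypothetical concrete numbers; $|E^{(a)}|$ is frozen for simplicity and the clean-up cost is charged as a lump $|E^{(a)}|$):

```python
def batch_insert_cost(E_a, eps, k, i, batch_len):
    """Toy model of Algorithm 7 during batch i: pending edges E^(p) accumulate
    until |E^(p)| > (eps*i/k)*|E^(a)|, at which point a clean-up costing
    |E^(a)| runs and empties E^(p)."""
    pending = int(eps * (i - 1) / k * E_a)  # invariant inherited from batch i-1
    work = 0
    for _ in range(batch_len):
        pending += 1
        work += 1  # O(1) bookkeeping per insertion
        if pending > eps * i / k * E_a:  # critical event
            work += E_a  # Static-Uniform-Sparsify + Clean-Up
            pending = 0
    return work
```

For the sample parameters below the total work stays within the $O(|I_i| \cdot \frac{k}{\epsilon})$ budget: clean-ups of cost $|E^{(a)}|$ are separated by at least $\frac{\epsilon}{k} \cdot |E^{(a)}|$ insertions.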
1. Given an $(\alpha)$-approximate (or $(\alpha, \epsilon)$-approximate) dynamic matching algorithm $Alg$ which is $O(\log(n))$ batch-dynamic with update time $O(T(n))$, and a dynamic graph $G$ undergoing edge insertions and deletions, there is an algorithm $Alg'$ which maintains $O(\log(n))$ matchings of $G$ such that at all times, during the processing of an input sequence of arbitrarily large polynomial length, one of the matchings is $(\alpha)$-approximate (or $(\alpha, \epsilon)$-approximate). The update time of $Alg'$ is worst-case $O(T(n) \cdot \log^3(n))$, and it is deterministic if $Alg$ is deterministic.
Proof. Fix some integer $k = O(\log(n))$. $Alg'$ will be running $k$ instances of $Alg$ in parallel on graph $G$; call them $A_i : i \in \{0, \ldots, k - 1\}$. Assume that $Alg$'s running time is $k$ batch-dynamic. We will describe what state each instance of $Alg$ will take during the processing of specific parts of the input, then argue that at least one of them will be outputting an $\alpha$ (or $(\alpha, \delta)$)-approximate matching at all times. Assume that the input sequence $I$ is $k^k$ long. Let $I[i]$ represent the $i$-th element of the input sequence and

$4 \cdot \log^2(n) \cdot k$ number of ways. Fix a specific vertex partitioning $V^j$ with vertex sets $V_i^j : i \in [d]$. Let the random variable $X_i^j : i \in [d]$ be an indicator variable of $S \cap V_i^j \neq \emptyset$ and let $X^j = \sum_{i \in [d]} X_i^j$.