
Deterministic Dynamic Matching in O(1) Update Time

Abstract

We consider the problems of maintaining an approximate maximum matching and an approximate minimum vertex cover in a dynamic graph undergoing a sequence of edge insertions/deletions. Starting with the seminal work of Onak and Rubinfeld (in: Proceedings of the ACM symposium on theory of computing (STOC), 2010), this problem has received significant attention in recent years. Very recently, extending the framework of Baswana et al. (in: Proceedings of the IEEE symposium on foundations of computer science (FOCS), 2011), Solomon (in: Proceedings of the IEEE symposium on foundations of computer science (FOCS), 2016) gave a randomized dynamic algorithm for this problem that has an approximation ratio of 2 and an amortized update time of O(1) with high probability. This algorithm requires the assumption of an oblivious adversary, meaning that the future sequence of edge insertions/deletions in the graph cannot depend in any way on the algorithm’s past output. A natural way to remove the assumption of an oblivious adversary is to give a deterministic dynamic algorithm for the same problem in O(1) update time. In this paper, we resolve this question. We present a new deterministic fully dynamic algorithm that maintains an O(1)-approximate minimum vertex cover and maximum fractional matching, with an amortized update time of O(1). Previously, the best deterministic algorithm for this problem was due to Bhattacharya et al. (in: Proceedings of the ACM-SIAM symposium on discrete algorithms (SODA), 2015); it had an approximation ratio of \((2+\varepsilon )\) and an amortized update time of \(O(\log n/\varepsilon ^2)\). Our result can be generalized to give a fully dynamic \(O(f^3)\)-approximate algorithm with \(O(f^2)\) amortized update time for the hypergraph vertex cover and fractional hypergraph matching problem, where every hyperedge has at most f vertices.

Introduction

Computing a maximum cardinality matching is a fundamental problem in computer science with applications, for example, in operations research, computer science, and computational chemistry. In many of these applications the underlying graph can change. Thus, it is natural to ask how quickly a maximum matching can be maintained after a change in the graph. As nodes usually change less frequently than edges, dynamic matching algorithms usually study the problem where edges are inserted and deleted, which is called the (fully) dynamic matching problem. The goal of a dynamic matching algorithm is to maintain either an actual matching (called the matching version) or the value of the matching (called the value version) as efficiently as possible.

Unfortunately, the problem of maintaining even just the value of the maximum cardinality matching is hard: There is a conditional lower bound that shows that no (deterministic or randomized) algorithm can achieve at the same time an amortized update time of \(O(m^{1/2-\varepsilon })\) and a query (for the size of the matching) time of \(O(m^{1-\varepsilon })\) for any small \(\varepsilon >0\) [15] (see [1] for conditional lower bounds using different assumptions). The best upper bound is Sankowski’s randomized algorithm [21] that solves the value problem in time \(O(n^{1.495})\) per update and O(1) per query. Thus, it is natural to study the dynamic approximate maximum matching problem, and there has been a large body [3, 7, 8, 14, 17, 18, 22] of work on it and its dual, the approximate vertex cover problem, in the last few years.

Dynamic algorithms can be further classified into two types: algorithms that require an oblivious (aka non-adaptive) adversary, i.e., an adversary that does not base future updates and queries on the answers to past queries, and algorithms that work even for an adaptive adversary. Obviously, the former kind of algorithms is less general than the latter. Unfortunately, the known randomized dynamic approximate matching and vertex cover algorithms do not work with an adaptive adversary [3, 18, 22]. Solomon [22] gives the best such randomized algorithm: it achieves O(1) amortized update time (with high probability) and O(1) query time for maintaining a 2-approximate maximum matching and a 2-approximate minimum vertex cover. He also extends this result to the dynamic distributed setting (à la Parter, Peleg, and Solomon [19]) with the same approximation ratio and update cost.

Our Result In this paper we present the first deterministic algorithm that maintains an O(1)-approximation to the size of the maximum matching in O(1) amortized update time and O(1) query time. We also maintain an O(1)-approximate vertex cover in the same update time. Note that this is the first deterministic dynamic algorithm with constant update time for any non-trivial dynamic graph problem. This is significant as for other dynamic problems such as the dynamic connectivity problem or the dynamic planarity testing problem there are non-constant lower bounds in the cell probe model on the time per operation [16, 20]. Thus, our result shows that no such lower bound can exist for the dynamic approximate matching problem.

Previous Work There has been prior work on deterministic algorithms for dynamic approximate matching, but they all have \(\Omega ({\text {poly}}(\log n))\) update time. One line of work concentrated on reducing the approximation ratio as much as possible, or at least below 2: Neiman and Solomon [17] achieved an update time of \(O(\sqrt{m})\) for maintaining a 3/2-approximate maximum matching and a 2-approximate minimum vertex cover. This result was improved by Gupta and Peng [14], who gave an algorithm with update time \(O(\sqrt{m}/\varepsilon ^2)\) for maintaining a \((1+\varepsilon )\)-approximate maximum matching. Recently, Bernstein and Stein [4] gave an algorithm with \(O(m^{1/4}/\varepsilon ^2)\) amortized update time for maintaining a \((3/2+\varepsilon )\)-approximate maximum matching. Another line of work, into which this paper fits, concentrated on getting a constant approximation while reducing the update time to polylogarithmic: Bhattacharya, Henzinger and Italiano [7] achieved an \(O(\log n/\varepsilon ^2)\) update time for maintaining a \((2+\varepsilon )\)-approximate maximum fractional matching and a \((2+\varepsilon )\)-approximate minimum vertex cover. Note that any fractional matching algorithm solves the value version of the dynamic matching problem while degrading the approximation ratio by a factor of 3/2 [8]. Thus, the algorithm in [7] maintains a \((3+\varepsilon )\)-approximation of the value of the maximum matching. The fractional matching in this algorithm was later “deterministically rounded” by Bhattacharya, Henzinger and Nanongkai [8] to achieve an \(O(\text {poly} (\log n, 1/\varepsilon ))\) update time for maintaining a \((2+\varepsilon )\)-approximate maximum matching. Very recently, a third line of work has focused on getting a small constant approximation in \(O(\text {poly} \log n)\) worst-case update time.
Bhattacharya, Henzinger and Nanongkai [9] showed how to maintain a \((2+\varepsilon )\)-approximate maximum fractional matching and a \((2+\varepsilon )\)-approximate minimum vertex cover in \(O(\log ^3 n/\varepsilon ^4)\) worst-case update time. Subsequently, Charikar and Solomon [12] and Arar, Chechik, Cohen, Stein and Wajc [2] showed how to maintain a \((2+\varepsilon )\)-approximate maximum (integral) matching in \(O(\text {poly} \log (n, 1/\varepsilon ))\) worst-case update time.

Our method also generalizes to the hypergraph vertex (set) cover and hypergraph fractional matching problem, which was considered by [6]. In this problem the hyperedges of a hypergraph are inserted and deleted over time, and f denotes the maximum cardinality of any hyperedge. The objective is to maintain a hypergraph vertex cover, that is, a set of vertices that hits every hyperedge. Similarly, a fractional matching in the hypergraph is a fractional assignment of weights to the hyperedges so that the total weight faced by any vertex is at most 1. We give an \(O(f^3)\)-approximate algorithm with amortized \(O(f^2)\) update time; compare this with the \(O(f^2)\)-approximate algorithm due to Bhattacharya et al. [6], whose amortized update time was \(O(f\cdot \log (m+n))\); we trade off O(f) in the approximation factor for \(O(\log n)\) in the update time.

Subsequent Work Very recently, after the publication of the conference version of our paper, Bhattacharya and Kulkarni [10] have shown how to maintain a \((2+\varepsilon )\)-approximate minimum vertex cover deterministically in a dynamic graph in \(O(1/\varepsilon ^2)\) amortized update time. When \(f = 2\), this improves upon the approximation guarantee of our result (which is a large constant). Arguably, however, our analysis in this paper is more intuitive and easy to follow, compared to the analysis in [10] which uses an extremely intricate potential function. Also, for technical reasons, the analysis in [10] holds only for simple graphs (with \(f = 2\)). In other words, the result in [10] does not subsume our result on hypergraph vertex (set) cover and fractional matching.

Our Techniques

Our algorithm builds on and simplifies the framework of hierarchical partitioning of vertices proposed by Onak and Rubinfeld [18], which was later enhanced by Bhattacharya, Henzinger and Italiano [7] to give a deterministic fully dynamic \((2+\delta )\)-approximate vertex cover and maximum matching in \(O(\log n/\delta ^2)\) amortized update time. The hierarchical partition divides the vertices into \(L := O(\log n)\) levels and maintains a fractional matching and vertex cover. To prove that the approximation factor is good, Bhattacharya et al. [7] also maintain approximate complementary slackness conditions. An edge insertion or deletion can disrupt these conditions (and indeed, at times, the feasibility of the fractional matching), and a fixing procedure maintains various invariants. To argue that the update time is bounded, [7] gives a rather involved potential function argument which proves that the update time is bounded by O(L), the number of levels, and is thus \(O(\log n)\). It is unclear whether that update time can be argued to be constant.

Our algorithm follows the same approach as Bhattacharya et al. [7]: it maintains invariants for the set of edges incident to a node in the hierarchical graph decomposition, calls nodes that violate them dirty, and fixes all dirty nodes after each edge insertion or deletion. To fix dirty nodes the algorithm moves them up or down in the hierarchy. All the work in the algorithm consists of inserting and deleting edges and moving nodes up and down the hierarchy. Our algorithm and analysis, which we sketch below, are very different from, and simpler than, the potential-function based analysis in [7].

To achieve an amortized constant running time, however, we need to develop a number of technical innovations:

  • We introduce a new state, called Super-Clean, that fulfills much stricter invariants, not only on the total set of edges incident to a node, but also on the subset of these edges whose other endpoint lies at a higher level in the hierarchy. The fact that an upper bound on the weight of this new type of edges (a) can be achieved by the update algorithms and (b) can be exploited to show a constant bound on the amortized running time is the crucial new insight in this paper. Accordingly, we suitably modify the subroutines that move nodes up and down. We show that one of these subroutines, namely the one that moves nodes down the hierarchy, ensures that the strict invariants of the Super-Clean state are fulfilled after it runs; however, the strict invariants need not hold when a node moves up. To analyze the running time, we show that the total time of the “down-movements” is linear in the number T of update operations; arguing about the “up-movements” is trickier. Intuitively, one down-movement might drop a vertex from some level i down to level 0, but it might take i up-movements for the node to reach level i again. Thus, there might be L times as many up-movements as there are down-movements, and hence a naive argument would count the total cost of all up-movements as up to L times the total cost of all down-movements. But this would only result in an O(LT) total time, which would only give a logarithmic amortized time. We show, however, by using the stricter conditions of the Super-Clean state, that this is not the case, and that we can indeed bound the total cost of all up-movements up to level i by the cost of the single down-movement from i.

  • To analyze the up-movements, we partition the sequence of operations we perform into epochs. Note that Solomon [22] also used epochs, but with a very different definition: in [22] an epoch of an edge is a maximal time interval during which the edge remains matched. For us, an epoch of a vertex is a maximal time period during which the vertex remains at the same level of the hierarchy. Since a vertex only becomes Super-Clean when it moves down, we need to group together consecutive epochs where the vertex moves only up. We call such a consecutive sequence of epochs a phase, and the crucial part of our analysis is to bound the cost of the algorithm in each phase. At a high level it works as follows: the number of phases is linear in the number of down-movements plus the number of edges that exist at the end. Additionally, the way the invariants are set up, the costs of the epochs in a given phase are geometrically increasing, and thus the total cost of a phase is dominated by the last epoch in that phase. We then show that the cost of the last epochs of all phases can be “charged” to the cost of all down-movements.

As these techniques easily generalize to the hypergraph setting, we state all our results in the hypergraph setting. It would be interesting to see other scenarios where approximation ratios can be slightly traded in for huge improvements in the update time.

Remark

Independently of our work, Gupta et al. [13] have obtained an \(O(f^3)\)-approximation algorithm for maximum fractional matching and minimum vertex cover in a hypergraph in \(O(f^2)\) amortized update time. Here, the symbol f denotes the maximum number of nodes that can be incident on a hyperedge. Their algorithm and analysis are also very similar to ours, using an asymmetry between how one treats UP and DOWN moves, and using the notion of epochs as we do.

Notations and Preliminaries

Since the hypergraph result implies the graph result, henceforth we consider the former problem. The input hypergraph \({\mathscr {G}} = (V, E)\) has \(|V| = n\) nodes. Initially, the set of hyperedges is empty, i.e., \(E = \emptyset \). Subsequently, an adversary inserts or deletes hyperedges in the hypergraph \({\mathscr {G}} = (V, E)\). The node-set V remains unchanged with time. Each hyperedge contains at most f nodes. We say that f is the maximum frequency of a hyperedge. If a hyperedge e has a node v as one of its endpoints, then we write \(v \in e\). For every node \(v \in V\), we let \(E_v = \{ e \in E : v \in e \}\) denote the set of hyperedges that are incident on v.

A matching is a subset of hyperedges \(M \subseteq E\) that do not share any common endpoint. The size of a matching M is given by |M|. In a fractional matching, each hyperedge \(e \in E\) receives a (possibly fractional) weight \(w(e) \ge 0\) subject to the following constraint: The total weight \(W_v = \sum _{e \in E_v} w(e)\) received by every node \(v \in V\) from all the hyperedges incident on v is at most 1. The size of a fractional matching is given by the sum of the weights of all the hyperedges in \({\mathscr {G}}\). Note that if \(w(e) \in \{0,1\}\) for all \(e \in E\), then the fractional matching becomes a simple matching \(M \subseteq E\), where M is the subset of hyperedges e with \(w(e) = 1\). Finally, a vertex cover is a subset of nodes \(V' \subseteq V\) such that every hyperedge \(e \in E\) has at least one endpoint in \(V'\). The size of the vertex cover is given by \(|V'|\). From LP-duality, it follows that in any hypergraph \({\mathscr {G}} = (V, E)\) the size of the maximum fractional matching is at most the size of the minimum vertex cover (see Appendix A).
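The definitions above can be illustrated with a minimal Python sketch (this is our illustration, not the paper's data structures): hyperedges are frozensets of nodes, a weight map defines a fractional matching, and a node set defines a vertex cover.

```python
# Checking the definitions of a fractional matching and a vertex cover
# on a small hypergraph. Hyperedges are frozensets of nodes; w maps each
# hyperedge to its fractional weight w(e) >= 0.

def is_fractional_matching(edges, w):
    """Every node's total incident weight W_v must be at most 1."""
    nodes = set().union(*edges) if edges else set()
    return all(sum(w[e] for e in edges if v in e) <= 1 for v in nodes)

def is_vertex_cover(edges, cover):
    """Every hyperedge must have at least one endpoint in the cover."""
    return all(e & cover for e in edges)

edges = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5})]
w = {edges[0]: 0.5, edges[1]: 0.5, edges[2]: 0.5}
assert is_fractional_matching(edges, w)
assert is_vertex_cover(edges, {3, 4})
# LP duality: the matching size (1.5) is at most the cover size (2).
assert sum(w.values()) <= len({3, 4})
```

Here node 3 faces weight 0.5 + 0.5 = 1, exactly at the constraint, while the cover {3, 4} hits all three hyperedges.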

We consider a fully dynamic setting, where the hypergraph \({\mathscr {G}} = (V, E)\) is getting updated via a sequence of edge insertions/deletions. In this setting, our goal is to maintain an approximate maximum fractional matching and an approximate minimum vertex cover in \({\mathscr {G}}\). Our main result is stated in Theorem 2.1.

Theorem 2.1

We can maintain an \(O(f^3)\) approximate maximum fractional matching and an \(O(f^3)\) approximate minimum vertex cover in the input hypergraph \({\mathscr {G}} = (V, E)\) in \(O(f^2)\) amortized update time.

Throughout the rest of this paper, we fix two parameters \(\alpha \) and \(\beta \) as follows.

$$\begin{aligned} \beta = 6, \text { and } \alpha = 1 + 28 f^2 \beta ^2. \end{aligned}$$
(1)

We will maintain a hierarchical partition of the node-set V into \(L +1\) levels \(\{0, \ldots , L\}\), where \(L = \left\lceil f \cdot \log _{\beta } n\right\rceil +1\). We let \(\ell (v) \in \{0, \ldots , L\}\) denote the level of a node \(v \in V\). We define the level of a hyperedge \(e \in E\) to be the maximum level among its endpoints, i.e., \(\ell (e) = \max _{v \in e}\ell (v)\). The levels of nodes (and therefore hyperedges) induce the following weights on hyperedges: \(w(e) := \beta ^{-\ell (e)}\) for every hyperedge \(e \in E\). For all nodes \(v \in V\), let \(W_v := \sum _{e \in E_v} w(e)\) be the total weight received by v from its incident hyperedges. We will satisfy the following invariant after processing a hyperedge insertion or deletion.

Invariant 2.2

Every node \(v \in V\) at level \(\ell (v) > 0\) has weight \(1/(\alpha \beta ^2)< W_v < 1\). Every node \(v \in V\) at level \(\ell (v) = 0\) has weight \(0 \le W_v \le 1/\beta ^2\).
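As a concrete illustration, the following Python sketch (ours, not the paper's implementation) computes the level-induced weights \(w(e) = \beta ^{-\ell (e)}\) and checks Invariant 2.2; the value of f and the example levels are arbitrary choices for the demo.

```python
# Level-induced hyperedge weights and Invariant 2.2, with the parameters
# of Eq. (1): beta = 6 and alpha = 1 + 28 f^2 beta^2 (here f = 2).

f = 2
beta = 6
alpha = 1 + 28 * f**2 * beta**2

def edge_level(e, level):
    # l(e) = max over the endpoints' levels
    return max(level[v] for v in e)

def node_weight(v, edges, level):
    # W_v = sum of beta^{-l(e)} over hyperedges incident on v
    return sum(beta ** -edge_level(e, level) for e in edges if v in e)

def satisfies_invariant(v, edges, level):
    W = node_weight(v, edges, level)
    if level[v] > 0:
        return 1 / (alpha * beta**2) < W < 1
    return 0 <= W <= 1 / beta**2

level = {1: 2, 2: 1, 3: 1}
edges = [frozenset({1, 2}), frozenset({2, 3})]
# l({1,2}) = max(2, 1) = 2, so w = 6^-2; l({2,3}) = 1, so w = 6^-1.
assert node_weight(2, edges, level) == 6**-2 + 6**-1
assert all(satisfies_invariant(v, edges, level) for v in level)
```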

Corollary 2.3

Under Invariant 2.2, the nodes in levels \(\{1, \ldots , L\}\) form a vertex cover in \({\mathscr {G}}\).

Proof

Suppose that there is a hyperedge \(e \in E\) with \(\ell (v) = 0\) for all \(v \in e\). Then we also have \(\ell (e) = 0\) and \(w(e) = 1/\beta ^{\ell (e)} = 1/\beta ^0 = 1\). So for every node \(v \in e\), we get: \(W_v \ge w(e) = 1\). This violates Invariant 2.2 as \(\beta > 1\). \(\square \)

Invariant 2.2 ensures that w(e)’s form a fractional matching satisfying approximate complementary slackness conditions with the vertex cover defined in Corollary 2.3. This gives the following theorem.

Theorem 2.4

In our algorithm, the hyperedge weights \(\{ w(e) \}\) form a \(f\alpha \beta ^2\)-approximate maximum fractional matching, and the nodes in levels \(\{1, \ldots , L \}\) form a \(f\alpha \beta ^2\)-approximate minimum vertex cover.

Proof

Consider the set of nodes \(T = \{v \in V : \ell (v) > 0\}\). By Corollary 2.3, the set T forms a vertex cover in \(\mathscr {G}\). We now compare the size of this vertex cover with the size of the fractional matching \(\{w(e)\}\) as follows.

$$\begin{aligned} |T| &= | \{v \in V : \ell (v) > 0\}| \\ &\le \alpha \beta ^2 \cdot \sum _{v \in T} W_v \\ &\le \alpha \beta ^2 \cdot \sum _{v \in V} W_v \\ &\le f\alpha \beta ^2 \cdot \sum _{e \in E} w(e). \end{aligned}$$

In the above derivation, the first inequality holds since \(W_v > 1/(\alpha \beta ^2)\) for all \(v \in T\) under Invariant 2.2. The second inequality holds since \(T \subseteq V\) and the weights \(W_v\) are non-negative. The third inequality holds since every hyperedge in \({\mathscr {G}}\) has at most f endpoints, so that \(\sum _{v \in V} W_v \le f \cdot \sum _{e \in E} w(e)\). Thus, the set T forms a valid vertex cover in \({\mathscr {G}}\) and the weights \(\{w(e)\}\) form a valid fractional matching in \({\mathscr {G}}\) such that the size of the vertex cover T is at most \(f\alpha \beta ^2\) times the size of the fractional matching \(\{w(e)\}\). Since the size of the maximum fractional matching in \(\mathscr {G}\) is upper bounded by the size of the minimum vertex cover in \({\mathscr {G}}\), we conclude that the fractional matching \(\{w(e)\}\) is a \(f\alpha \beta ^2\)-approximate maximum fractional matching in \({\mathscr {G}}\), and furthermore, the set of nodes T forms a \(f\alpha \beta ^2\)-approximate minimum vertex cover in \({\mathscr {G}}\). \(\square \)

We introduce some more notation. For any vertex v, let \(W_v^+ := \sum _{e \in E_v : \ell (e) > \ell (v)} w(e)\) be the total up-weight received by v, that is, the weight from those incident hyperedges whose levels are strictly greater than \(\ell (v)\). For all levels \(i \in \{0,1,\ldots , L\}\), we let \(W_{v\rightarrow i}\) and \(W_{v\rightarrow i}^+\) respectively denote the values of \(W_v\) and \(W_v^+\) if the node v were to go to level i and the levels of all the other nodes were to remain unchanged. More precisely, for every hyperedge \(e \in E\) and node \(v \in e\), we define \(\ell _v(e) = \max _{u \in e : u \ne v} \ell (u)\) to be the maximum level among the endpoints of e that are distinct from v. Then we have: \(W_{v\rightarrow i} := \sum _{e \in E_v} \beta ^{-\max (\ell _v(e), i)}\) and \(W_{v\rightarrow i}^+ := \sum _{e \in E_v : \ell _v(e) > i} \beta ^{-\ell _v(e)}\). Our algorithm maintains a notion of time such that in each time step the algorithm performs one elementary operation. We let \(W_v(t)\) denote the weight faced by v right before the operation at time t. Similarly define \(W_{v\rightarrow i}(t), W^+_v(t)\), and \(W^+_{v\rightarrow i}(t)\).
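The "what-if" quantities \(W_{v\rightarrow i}\) and \(W^+_{v\rightarrow i}\) can be made concrete with a small Python sketch (ours, for illustration only), directly transcribing the formulas above with \(\beta = 6\).

```python
# W_{v->i}: the weight v would face at level i, all other levels fixed;
# W^+_{v->i}: the corresponding up-weight. l_v(e) is the maximum level
# among e's endpoints other than v.

beta = 6

def lv(e, v, level):
    return max((level[u] for u in e if u != v), default=0)

def W_to(v, i, edges, level):
    return sum(beta ** -max(lv(e, v, level), i) for e in edges if v in e)

def W_plus_to(v, i, edges, level):
    return sum(beta ** -lv(e, v, level)
               for e in edges if v in e and lv(e, v, level) > i)

level = {1: 0, 2: 3, 3: 1}
edges = [frozenset({1, 2}), frozenset({1, 3})]
# At i = 2: edge {1,2} has l_1 = 3 > 2 (weight 6^-3); edge {1,3} has
# l_1 = 1, so its level is capped at max(1, 2) = 2 (weight 6^-2) and
# it contributes no up-weight.
assert W_to(1, 2, edges, level) == 6**-3 + 6**-2
assert W_plus_to(1, 2, edges, level) == 6**-3
```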

Different States of a Node Before the insertion/deletion of a hyperedge in \({\mathscr {G}}\), all nodes satisfy Invariant 2.2. When a hyperedge is inserted (resp. deleted), it increases (resp. decreases) the weights faced by its endpoints. Accordingly, one or more endpoints can violate Invariant 2.2 after the insertion/deletion of a hyperedge. Our algorithm fixes these nodes by changing their levels, which may lead to new violations, and so on and so forth. To describe the algorithm, we need to define certain states of the nodes.

Definition 2.5

A node \(v \in V\) is Down-Dirty iff \(\ell (v) > 0\) and \(W_v \le 1/(\alpha \beta ^2)\). A node \(v \in V\) is Up-Dirty iff either {\(\ell (v) = 0, W_v > 1/\beta ^2\)} or {\(\ell (v) > 0, W_v \ge 1\)}. A node is Dirty if it is either Down-Dirty or Up-Dirty.

A node that is not Dirty is Clean. Invariant 2.2 is satisfied if and only if no node is Dirty. Whenever a node becomes Dirty, we use one of two subroutines to fix the node, i.e., to make it clean. However, for the running time analysis to go through, we need that the weight of node v after it was fixed has a “slack” (or gap) both with regard to the upper and the lower bound of the allowed weight. More specifically, right after a node v was fixed, it holds that \(1/\beta ^2 < W_v \le 1/\beta \). But this fact alone still does not suffice to achieve constant running time. Instead we show that when a node was fixed by one of our subroutines, an even stronger condition is fulfilled, and we call a node fulfilling that condition to be a Super-Clean node.

Definition 2.6

A node \(v \in V\) is Super-Clean iff one of the following conditions hold: (1) We have \(\ell (v) = 0\) and \(W_v \le 1/\beta ^2\), or (2) We have \(\ell (v) > 0\), \(1/\beta ^2 < W_v \le 1/\beta \), and \(W_v^+ \le 1/\beta ^2\).

Note that a Super-Clean node v with \(\ell (v) > 0\) fulfills a stronger upper bound on the weight \(W_v\) than a Clean node, and additionally it fulfills an upper bound on the up-weight \(W^+_v\).
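Definitions 2.5 and 2.6 can be summarized by a small classifier, sketched below in Python (our illustration; the paper maintains these states via indicator variables). The classifier returns the strictest applicable label, so Super-Clean nodes, which are in particular Clean, are reported as Super-Clean.

```python
# Classify a node's state from its level l, weight W, and up-weight W_plus,
# per Definitions 2.5 and 2.6; alpha and beta as in Eq. (1), with f = 2.

f = 2
beta = 6
alpha = 1 + 28 * f**2 * beta**2

def state(l, W, W_plus):
    if l > 0 and W <= 1 / (alpha * beta**2):
        return "Down-Dirty"
    if (l == 0 and W > 1 / beta**2) or (l > 0 and W >= 1):
        return "Up-Dirty"
    if (l == 0 and W <= 1 / beta**2) or \
       (l > 0 and 1 / beta**2 < W <= 1 / beta and W_plus <= 1 / beta**2):
        return "Super-Clean"
    return "Clean"

assert state(0, 0.5, 0.0) == "Up-Dirty"          # level 0, W > 1/beta^2
assert state(3, 1 / beta**1.5, 0.5) == "Clean"   # weight ok, up-weight too big
assert state(3, 1 / beta**1.5, 0.01) == "Super-Clean"
```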

Data Structures For all nodes \(v \in V\) and levels \(i \in \{0,1,\ldots ,L\}\), let \(E_{v, i} := \{ e \in E_v : \ell (e) = i\}\) denote the set of hyperedges incident on v that are at level i. Note that \(E_{v, i} = \emptyset \) for all \(i < \ell (v)\). We will maintain the following data structures.

  1. For every level \(i\in \{0,1,\ldots ,L\}\) and node \(v \in V\), we store the set of hyperedges \(E_{v, i}\) as a doubly linked list, and also maintain a counter that stores the number of hyperedges in \(E_{v, i}\).

  2. For every node \(v \in V\), we store the weights \(W_v\) and \(W_v^+\), its level \(\ell (v)\) and an indicator variable for each of the states Down-Dirty, Up-Dirty, Dirty and Super-Clean.

  3. For each hyperedge \(e \in E\), we store the values of its level \(\ell (e)\) and therefore its weight w(e). Finally, using appropriate pointers, we ensure that a hyperedge can be inserted into or deleted from any given linked list in constant time.
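A minimal sketch of these per-node structures in Python (ours; the class name and fields are our choices): dicts and sets stand in for the doubly linked lists, since both support constant-time insertion and deletion by key.

```python
# Per-node storage: E_{v,i} grouped by level i with a counter, plus the
# maintained quantities W_v, W_v^+, and l(v).

from collections import defaultdict

class NodeStore:
    def __init__(self):
        self.by_level = defaultdict(set)  # i -> hyperedges in E_{v,i}
        self.count = defaultdict(int)     # i -> |E_{v,i}|
        self.W = 0.0                      # weight W_v
        self.W_plus = 0.0                 # up-weight W_v^+
        self.level = 0                    # l(v)

    def add_edge(self, e, edge_level, weight, above):
        """Record hyperedge e at level edge_level; above means l(e) > l(v)."""
        self.by_level[edge_level].add(e)
        self.count[edge_level] += 1
        self.W += weight
        if above:
            self.W_plus += weight

store = NodeStore()
store.add_edge(frozenset({1, 2}), edge_level=1, weight=1/6, above=True)
assert store.count[1] == 1 and store.W == 1/6 and store.W_plus == 1/6
```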

We now state two lemmata that will be useful in analysing the update time of our algorithm.

Lemma 2.7

Suppose that a node v is currently at level \(\ell (v) = i \in [0, L-1]\) and we want to move it to some level \(j \in [i+1, L]\). Then it takes \(O(f \cdot |\{ e \in E_v : \ell _v(e) < j\}|)\) time to update the relevant data structures.

Proof

If a hyperedge e is not incident on the node v, then the data structures associated with e are not affected as v moves up from level i to level j. Further, among the hyperedges \(e \in E_v\), only the ones with \(\ell _v(e) < j\) get affected (i.e., the data structures associated with them need to be changed) as v moves up from level i to level j. Note that just before the node v moves up from level i to level j, the set of hyperedges \(e \in E_v\) with \(\ell _v(e) < j\) is given by \(\bigcup _{k=i}^{j-1} E_{v,k}\). This holds since \(i < j\). Finally, for every hyperedge that gets affected, we need to spend O(f) time to update the data structures for its f endpoints. \(\square \)

Lemma 2.8

Suppose that a node v is currently at level \(\ell (v) = i \in [1,L]\) and we want to move it down to some level \(j \in [0,i-1]\). Then it takes \(O(f \cdot |\{ e \in E_v : \ell _v(e) \le i \} |)\) time to update the relevant data structures.

Proof

If a hyperedge e is not adjacent to the node v, then the data structures associated with e are not affected as v moves down from level i to level j. Further, among the hyperedges \(e \in E_v\), only the ones with \(\ell _v(e) \le i\) get affected (i.e., the data structures associated with them need to be changed) as v moves down from level i to level j. Finally, for every hyperedge that gets affected, we need to spend O(f) time to update the data structures for its f endpoints. \(\square \)

The algorithm: Handling the Insertion/Deletion of a Hyperedge

Initially, the graph \({\mathscr {G}}\) is empty, every node is at level 0, and Invariant 2.2 holds. By induction, we will ensure that the following property is satisfied just before the insertion/deletion of a hyperedge.

Property 3.1

No node \(v \in V\) is Dirty.

Insertion of a hyperedge e When a hyperedge e is inserted into the input graph, it is assigned a level \(\ell (e) = \max _{v \in e}\ell (v)\) and a weight \(w(e) = \beta ^{-\ell (e)}\). The hyperedge gets inserted into the linked lists \(E_{v, \ell (e)}\) for all nodes \(v \in e\). Furthermore, for every node \(v \in e\), the weight \(W_v\) increases by w(e). For every endpoint \(v \in e\), if \(\ell (v) < \ell (e)\), then the weight \(W_v^+\) increases by w(e). As a result of these operations, one or more endpoints of e can now become Up-Dirty and Property 3.1 might no longer be satisfied. Hence, in order to restore Property 3.1 we call the subroutine described in Fig. 1.

Deletion of a hyperedge e When a hyperedge e is deleted from the input graph, we erase all the data structures associated with it. We remove the hyperedge from the linked lists \(E_{v, \ell (e)}\) for all \(v \in e\), and erase the values w(e) and \(\ell (e)\). For every node \(v \in e\), the weight \(W_v\) decreases by w(e). Further, for every endpoint \(v \in e\), if \(\ell (v) < \ell (e)\), then we decrease the weight \(W_v^+\) by w(e). As a result of these operations, one or more endpoints of e can now become Down-Dirty, and Property 3.1 might get violated. Hence, in order to restore Property 3.1 we call the subroutine described in Fig. 1.
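The bookkeeping for both update types can be sketched in a few lines of Python (ours; `apply_update` and its signature are our naming): the hyperedge gets level \(\max _{v \in e}\ell (v)\) and weight \(\beta ^{-\ell (e)}\), and each endpoint's \(W_v\) (and, for endpoints below \(\ell (e)\), \(W_v^+\)) changes by w(e).

```python
# Weight bookkeeping on insertion (+1) or deletion (-1) of a hyperedge e.
# level maps nodes to l(v); W and W_plus accumulate W_v and W_v^+.

beta = 6

def apply_update(e, level, W, W_plus, sign):
    le = max(level[v] for v in e)    # l(e)
    we = beta ** -le                 # w(e)
    for v in e:
        W[v] = W.get(v, 0.0) + sign * we
        if level[v] < le:            # e lies strictly above v
            W_plus[v] = W_plus.get(v, 0.0) + sign * we
    return le, we

level = {1: 0, 2: 2}
W, W_plus = {}, {}
le, we = apply_update(frozenset({1, 2}), level, W, W_plus, +1)
assert (le, we) == (2, 6**-2)
assert W[1] == 6**-2 and W_plus[1] == 6**-2 and W_plus.get(2, 0.0) == 0.0
apply_update(frozenset({1, 2}), level, W, W_plus, -1)  # deletion undoes it
assert W[1] == 0.0
```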

Fig. 1: FIX-DIRTY(\(\cdot \))

The algorithm is simple: as long as some Dirty node remains, it runs either FIX-UP-DIRTY or FIX-DOWN-DIRTY to take care of Up-Dirty and Down-Dirty nodes, respectively. One crucial aspect is that we prioritize Up-Dirty nodes over Down-Dirty ones, as this allows us to upper bound the weight of every node by 2: any single weight increase raises the weight of a node by at most 1, and a call to FIX-UP-DIRTY does not increase the weight of any node, while FIX-DOWN-DIRTY might do so. Thus, as long as we fix Up-Dirty nodes as soon as they become Up-Dirty, no node will ever have weight 2 or larger, a fact that we will exploit in the running time analysis.
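The priority rule in FIX-DIRTY can be sketched as a small Python loop (ours; the `fix_up`/`fix_down` callables are hypothetical stand-ins for the paper's subroutines, and this toy does not model the fact that fixing a node may create new Dirty nodes).

```python
# While any Dirty node remains, fix an Up-Dirty node before any Down-Dirty
# one, mirroring the priority in Fig. 1.

def fix_dirty(dirty_nodes, is_up_dirty, fix_up, fix_down):
    order = []
    while dirty_nodes:
        up = [v for v in dirty_nodes if is_up_dirty(v)]
        v = up[0] if up else dirty_nodes[0]
        (fix_up if up else fix_down)(v)
        order.append(v)
        dirty_nodes.remove(v)
    return order

# Toy run: 'a' and 'b' are Down-Dirty, 'c' is Up-Dirty; 'c' is fixed first.
order = fix_dirty(['a', 'b', 'c'], lambda v: v == 'c',
                  lambda v: None, lambda v: None)
assert order[0] == 'c'
```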

FIX-DOWN-DIRTY(v) Suppose that \(\ell (v) = i\) when the subroutine is called at time t. By definition, we have \(i > 0\) and \(W_v(t) \le 1/(\alpha \beta ^2)\). We need to increase the value of \(W_v\) if we want to ensure that v no longer remains Dirty. This means that we should decrease the level of v, so that some of the hyperedges incident on v can increase their weights. Accordingly, we find the largest possible level \(j \in \{1,\ldots ,(i-1)\}\) such that \(W_{v\rightarrow j}(t) > 1/\beta ^2\), and move the node v down to this level j. If no such level exists, that is, if even \(W_{v\rightarrow 1}(t) \le 1/\beta ^2\), then we move the node v down to level 0. Note that in this case there is no hyperedge \(e \in E_v\) with \(\ell _v(e) = 0\), for such a hyperedge would have weight \(w(e) = \beta ^{-1} > 1/\beta ^2\) when v is moved to level 1. In particular, we get \(W_{v\rightarrow 0}(t) = W_{v\rightarrow 1}(t)\).
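The level search inside FIX-DOWN-DIRTY can be sketched as follows (our Python illustration; `fix_down_target` is our name, and `W_to` transcribes the definition of \(W_{v\rightarrow j}\) from Sect. 2).

```python
# Find the largest j in {1, ..., i-1} with W_{v->j} > 1/beta^2,
# and fall back to level 0 if no such j exists.

beta = 6

def lv(e, v, level):
    return max((level[u] for u in e if u != v), default=0)

def W_to(v, j, edges, level):
    return sum(beta ** -max(lv(e, v, level), j) for e in edges if v in e)

def fix_down_target(v, i, edges, level):
    for j in range(i - 1, 0, -1):        # scan from i-1 down to 1
        if W_to(v, j, edges, level) > 1 / beta**2:
            return j
    return 0

level = {1: 4, 2: 0, 3: 0}
edges = [frozenset({1, 2}), frozenset({1, 3})]
# At j = 3 node 1 would face 2 * 6^-3 <= 1/36, so the search continues;
# at j = 2 it would face 2 * 6^-2 = 1/18 > 1/36, so v drops to level 2.
assert fix_down_target(1, 4, edges, level) == 2
```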

Claim 3.2

FIX-DOWN-DIRTY(v) makes the node v Super-Clean.

Proof

Suppose node v was at level i when FIX-DOWN-DIRTY(v) was called at time t and it ended up in level \(j < i\). If \(j = 0\), then \(W_{v\rightarrow 0}(t) \le 1/\beta ^2\), and so v becomes Super-Clean after time t. Henceforth assume \(j > 0\). Since \(j\in \{1,\ldots ,i-1\}\) is the maximum level where \(W_{v\rightarrow j}(t) > 1/\beta ^2\), we have \(W_{v\rightarrow (j+1)}(t) \le 1/\beta ^2\). Now note that \(W_{v\rightarrow j}(t) \le \beta \cdot W_{v\rightarrow (j+1)}(t)\), since weights of hyperedges can increase by at most a factor \(\beta \) when one endpoint drops exactly one level. This implies \(W_{v\rightarrow j}(t) \le 1/\beta \). Together we get that after time t, when v is fixed at level j, we have \(1/\beta ^2 < W_v \le 1/\beta \).

Now we argue about the up-weights. Note that every hyperedge e that contributes to \(W^+_{v\rightarrow j}(t)\) must have \(\ell _v(e) \ge (j+1)\). The weight of such a hyperedge remains unchanged as v moves from level \((j+1)\) to j. We infer that \(W^+_{v\rightarrow j}(t) \le W_{v\rightarrow (j+1)}(t) \le 1/\beta ^2\). Therefore after time t when v is fixed at level j, we have \(W^+_v \le 1/\beta ^2.\) In sum, v becomes Super-Clean after time t. \(\square \)

FIX-UP-DIRTY(v) Suppose that \(\ell (v) = i\) when the subroutine is called at time t. At this stage, we have either {\(i = 0, W_v(t) > 1/\beta ^2\)} or {\(i > 0, W_v(t) \ge 1\)}. We need to increase the level of v so as to reduce the weight faced by it. Accordingly, we find the smallest possible level \(j \in \{i+1,\ldots , L\}\) where \(W_{v\rightarrow j}(t) \le 1/\beta \) and move v up to level j. Such a level j always exists because \(W_{v\rightarrow L}(t) \le n^f \cdot \beta ^{-L} \le 1/\beta \).
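The corresponding upward search can be sketched symmetrically (again our Python illustration; `fix_up_target` is our name): find the smallest \(j \in \{i+1,\ldots ,L\}\) with \(W_{v\rightarrow j}(t) \le 1/\beta \).

```python
# Find the smallest j in {i+1, ..., L} with W_{v->j} <= 1/beta.
# The text guarantees such a j exists, since W_{v->L} <= n^f beta^{-L} <= 1/beta.

beta = 6

def lv(e, v, level):
    return max((level[u] for u in e if u != v), default=0)

def W_to(v, j, edges, level):
    return sum(beta ** -max(lv(e, v, level), j) for e in edges if v in e)

def fix_up_target(v, i, L, edges, level):
    for j in range(i + 1, L + 1):        # scan upward from i+1
        if W_to(v, j, edges, level) <= 1 / beta:
            return j
    return L

level = {1: 0, 2: 0, 3: 0, 4: 0}
edges = [frozenset({1, 2}), frozenset({1, 3}), frozenset({1, 4})]
# Node 1 faces weight 3 at level 0 (each edge has weight 1). At j = 1 it
# would face 3/6 > 1/6; at j = 2 it faces 3/36 = 1/12 <= 1/6, so it rises to 2.
assert fix_up_target(1, 0, 10, edges, level) == 2
```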

Claim 3.3

After a call to the subroutine FIX-UP-DIRTY(v) at time t, we have \(1/\beta ^2 < W_v \le 1/\beta \).

Proof

Suppose that the node v moves up from level i to level \(j > i\). We now consider four possible cases.

  • Case 1. We have \(i > 0\).

    Since \(j \in \{i+1,\ldots ,L\}\) is the minimum possible level where \(W_{v\rightarrow j}(t) \le 1/\beta \), and since \(W_{v \rightarrow i}(t) \ge 1 > 1/\beta \), we infer that \(W_{v\rightarrow (j-1)}(t) > 1/\beta \). As the node v moves up from level \((j-1)\) to level j, the weight it faces can drop by at most a factor of \(\beta \). Hence, we get: \(W_{v\rightarrow j}(t) \ge (1/\beta ) \cdot W_{v\rightarrow (j-1)}(t) > 1/\beta ^2\). Therefore after time t when the node v moves to level j, we have \(1/\beta ^2 < W_v \le 1/\beta \).

  • Case 2. We have \(i = 0\), and there is an edge \(e \in E_v\) with \(\ell _v(e) = 0\) at time t. In this case we have \(W_{v \rightarrow i}(t) \ge 1\) in the beginning of time-step t, since the edge \(e \in E_v\) with \(\ell _v(e) = 0\) has weight \(w(e) = 1\). The rest of the proof is similar to Case 1.

  • Case 3. We have \(i = 0\), there is no edge \(e \in E_v\) with \(\ell _v(e) = 0\) at time t, and \(j = 1\).

    The value of \(W_v\) does not change as v moves up from level \(i = 0\) to level \(j = 1\). Thus, we get: \(W_{v \rightarrow j}(t) = W_{v \rightarrow 0}(t) > 1/\beta ^2\), since the node v is Up-Dirty at level \(i = 0\) at time t. Since the node does not move further up than level j, we get: \(W_{v \rightarrow j}(t) \le 1/\beta \).

  • Case 4. We have \(i = 0\), there is no edge \(e \in E_v\) with \(\ell _v(e) = 0\) at time t, and \(j > 1\).

    Since the node v does not stop at level 1, we get: \(W_{v \rightarrow 1}(t) > 1/\beta \). Hence, we infer that \(j \in \{2, \ldots , L\}\) is the minimum possible level where \(W_{v \rightarrow j}(t) \le 1/\beta \). Since the node v had to rise above level \(j-1\), we infer that \(W_{v\rightarrow (j-1)}(t)>1/\beta \). Next, since the weight of the node v can drop by at most a factor of \(\beta \) when it rises from level \(j-1\) to level j, we get: \(W_{v\rightarrow j}(t) \ge (1/\beta ) \cdot W_{v \rightarrow (j-1)}(t) > 1/\beta ^2\). Therefore, we have \(1/\beta ^2 < W_{v \rightarrow j}(t) \le 1/\beta \). \(\square \)

It is clear that if and when FIX-DIRTY() terminates, we are in a state which satisfies Invariant 2.2. In the next section we show that after T hyperedge insertions and deletions, the total update time is indeed \(O(f^2 \cdot T)\) and so our algorithm has \(O(f^2)\)-amortized update time.

Analysis of the Algorithm

Starting from an empty graph \({\mathscr {G}} = (V, E)\), fix any sequence of T updates. The term “update” refers to the insertion or deletion of a hyperedge in \({\mathscr {G}}\). We show that the total time taken by our algorithm to handle this sequence of updates is \(O(f^2 \cdot T)\). We also show that our algorithm has an approximation ratio of \(O(f^3)\).

Relevant Counters We define three counters \(C^{up}, C^{down}\) and \(I^{down}\). The first two counters account for the time taken to update the data structures, while the third accounts for the time taken to find the index j in both FIX-DOWN-DIRTY(v) and FIX-UP-DIRTY(v). Initially, when the input graph is empty, all three counters are set to zero. Subsequently, we increment these counters as follows.

  1.

    Suppose node v moves from level i to level \(j>i\) upon a call of FIX-UP-DIRTY(v). Then for every hyperedge \(e \in E_v\) with \(\ell _v(e) \le j-1\), we increment \(C^{up}\) by one.

  2.

    Suppose node v moves from level i to level \(j < i\) upon a call of FIX-DOWN-DIRTY(v). Then for every hyperedge \(e \in E_v\) with \(\ell _v(e) \le i\), we increment the value of \(C^{down}\) by one. Furthermore, we increment the value of \(I^{down}\) by \(\beta ^{i-2}/\alpha \).

Remark

We will see in Sect. 4.5 that the time complexity of finding the appropriate index j during FIX-UP-DIRTY is subsumed by the increment in \(I^{down}\) during FIX-DOWN-DIRTY (we will crucially exploit the fact that each sequence of up moves is eventually followed by a down move, since the final graph is the empty graph).

We now explain why we increment \(I^{down}\) during a call to FIX-DOWN-DIRTY which moves a node v down from level i to level \(j < i\). Note that in order to compute the new level of v, it suffices to process all the hyperedges whose other endpoints are at levels \(\le i\). Each of these hyperedges has weight \(\beta ^{-i}\). Furthermore, given that the node was Down-Dirty, its current weight is less than \(1/(\alpha \beta ^2)\). Hence, the number of hyperedges we need to scan to find the level j is upper bounded by \(\beta ^{i-2}/\alpha \).
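As a concrete illustration of this bookkeeping, the counter increments on a jump can be sketched as follows. This is our own sketch, not the paper's code; `edges_at_or_below(v, i)` is an assumed helper returning the hyperedges \(e \in E_v\) with \(\ell _v(e) \le i\).

```python
# Hypothetical bookkeeping for the three counters C^up, C^down, I^down.
# `edges_at_or_below(v, i)` is an assumed helper returning the hyperedges
# e in E_v with l_v(e) <= i.

def on_up_jump(counters, v, i, j, edges_at_or_below):
    # C^up grows by one per hyperedge e in E_v with l_v(e) <= j - 1.
    counters["C_up"] += len(edges_at_or_below(v, j - 1))

def on_down_jump(counters, v, i, j, edges_at_or_below, alpha, beta=6):
    # C^down grows by one per hyperedge with l_v(e) <= i, and I^down pays
    # beta^(i-2)/alpha, the maximum number of hyperedges scanned to locate j.
    counters["C_down"] += len(edges_at_or_below(v, i))
    counters["I_down"] += beta ** (i - 2) / alpha
```

For instance, a down-jump from level 5 with four incident hyperedges at levels \(\le 5\) adds 4 to \(C^{down}\) and \(\beta ^3/\alpha \) to \(I^{down}\).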

The next lemma upper bounds the total time taken by our algorithm in terms of the values of these counters. The proof of Lemma 4.1 appears in Sect. 4.5.

Lemma 4.1

Our algorithm takes \(O(f \cdot (C^{up} + C^{down} + T) + f^2 \cdot I^{down})\) time to handle a sequence of T updates.

We will show that \(C^{up} = O(f) \cdot T\) and \(C^{down}+I^{down} = O(1) \cdot T\), which will imply an amortized update time of \(O(f^2)\) for our algorithm. Towards this end, we now prove three lemmata that relate the values of these three counters. The proofs can be found in the next subsections.

The first lemma follows immediately from the fact that at most \(\beta ^{i-2}/\alpha \) hyperedges fulfill the condition \(\ell _v(e) \le i\) when node v moves down from level i, and thus, \(C^{down}\) is incremented by at most as much as \(I^{down}\).

Lemma 4.2

We have: \(C^{down} \le I^{down}\).

The next lemma exploits the gap between the weight of a node v when it reaches a level i and when it moves down: When v is placed on level i we have \(W_v> 1/\beta ^2\), whereas when it is Down-Dirty it holds that \(W_v \le 1/(\alpha \beta ^2)\). This “slack” suffices to upper bound \(I^{down}\) by a function linear in T and \(C^{up}\). Thus, this proof compares the weight of node v at the beginning and at the end of a time interval during which v remains at the same level. Note that we picked \(\alpha \) such that the resulting constant \(f/(\alpha - 1)\) is much smaller than 1, indeed a function of 1/f.

Lemma 4.3

We have: \(I^{down} \le \frac{f}{\alpha - 1}\cdot (T + C^{up})\).

Note that in prior work such as [7], where the “slack” was exploited by comparing the weight of v at two different points in time, node v had to be at the same level from time \(t_1\) until time t, and the whole weight of v on this level was used in the comparison.

In order to prove the next lemma we compare the weight of v at two points in time even though v is at different levels at these points in time. More specifically, we use the following three new ideas:

  (1)

    We break, for each node v, the sequence of operations into subsequences, and then argue over a maximal subsequence of operations during which the level of v only increases, called a phase. When we compare the weight of v, we use as \(t_1\) the beginning of the phase and as t a later point in time during the phase. Thus, v will be at different levels at these two points in time. (Note that both the definition of a phase and its subintervals, called epochs, are different from the epochs introduced in [22].)

  (2)

    We use a reference level \(i^*\), on which v might never be during the phase, and compare the weight that v would face at level \(i^*\) at time \(t_1\) with the weight it would face at level \(i^*\) at time t.

  (3)

    In this comparison we use, however, not the complete weight of v, but only the weight of v towards upper levels, i.e. we use the weight \(W_{v \rightarrow i^*}^+\). When doing so, we use the strong properties that are guaranteed by the Super-Clean state.

These ideas enable us finally to upper bound \(C^{up}\) by a function linear in T and \(C^{down}\).

Lemma 4.4

We have: \(C^{up} \le 7 f \beta ^2 \cdot (T+ C^{down})\).

The proofs of lemmata 4.2, 4.3 and 4.4 appear in Sects. 4.2, 4.3 and 4.4 respectively, using the concepts of epochs, jumps and phases which are defined in Sect. 4.1. Combining their statements, we can upper bound all three counters by a function that is linear in T, which was our goal, as it shows that the number of edges that are updated during all calls to FIX-DOWN-DIRTY and FIX-UP-DIRTY is linear in the number of update operations in the graph.

Lemma 4.5

(Corollary to Lemmata 4.2, 4.3, and 4.4.) We have: \(C^{up} = O(f) \cdot T\) and \(C^{down} + I^{down} = O(1)\cdot T\).

Proof

Replacing \(C^{down}\) in the RHS of Lemma 4.4 by the upper bounds from lemmata 4.2 and 4.3, we get:

$$\begin{aligned} C^{up}\le & {} (7f\beta ^2) \cdot T + (7f\beta ^2) \cdot C^{down} \\\le & {} (7f\beta ^2) \cdot T + (7f\beta ^2) \cdot I^{down} \\\le & {} (7f\beta ^2) \cdot T + \frac{(7f\beta ^2) f}{(\alpha - 1)} \cdot (T+ C^{up}) \\\le & {} (7f\beta ^2) \cdot T + (1/4) \cdot T + (1/4) \cdot C^{up} \qquad \text {(see equation (1))} \end{aligned}$$

Rearranging the terms in the above inequality, we get: \((3/4) \cdot C^{up} \le (7f \beta ^2 + 1/4) \cdot T = (28f\beta ^2+1) \cdot (T/4)\). Multiplying both sides by (4/3), we get: \(C^{up} \le (28f\beta ^2/3+1/3) \cdot T \le (10f\beta ^2) T\). Since \(\beta = 6\), we get:

$$\begin{aligned} C^{up} = O(f) \cdot T \end{aligned}$$
(2)

Since \(\alpha = \Theta (f^2)\), lemmata 4.2 and 4.3 and Eq. (2) imply that:

$$\begin{aligned} C^{down} \le I^{down} = O(1) \cdot T \end{aligned}$$
(3)

\(\square \)

The main result of our paper (see Theorem 2.1) now follows from Theorem 2.4, Lemma 4.1 and Lemma 4.5.

Epochs, Jumps and Phases

Fix any node \(v \in V\). An epoch of this node is a maximal time-interval during which the node stays at the same level. An epoch ends when either (a) the node v moves up to a higher level due to a call to FIX-UP-DIRTY, or (b) the node v moves down to a lower level due to a call to the subroutine FIX-DOWN-DIRTY. These events are called jumps. Accordingly, there are Up-Jumps and Down-Jumps. Next, we define a phase of a node to be a maximal sequence of at least two consecutive epochs during which the level of v keeps on increasing, i.e., where all but the last epoch end in an Up-Jump. We use \(\Phi _v\) to denote a phase of a node v, and we will analyse each such phase. Suppose that a phase \(\Phi _v\) consists of \(k >1\) consecutive epochs of v at levels \(i_1, \ldots , i_k \in \{0,1,\ldots , L\}\). Then we have: \(i_1< i_2< \cdots < i_k\). By definition, the epoch immediately before \(\Phi _v\) must have level larger than \(i_1\), implying that FIX-DOWN-DIRTY(v) landed v at level \(i_1\). Similarly, the epoch immediately after \(\Phi _v\) has level smaller than \(i_k\), implying that FIX-DOWN-DIRTY(v) is called again.
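If one records the level of a node once per epoch, the decomposition into phases is just the decomposition of this level sequence into maximal strictly increasing runs of length at least two. A small sketch of this decomposition (our own illustration, not part of the algorithm):

```python
def phases(levels):
    """Split the per-epoch level sequence of one node into phases:
    maximal runs of at least two strictly increasing levels."""
    out, run = [], []
    for lvl in levels:
        if run and lvl <= run[-1]:  # a Down-Jump ends the current run
            if len(run) >= 2:       # a run of length 1 is not a phase
                out.append(run)
            run = []
        run.append(lvl)
    if len(run) >= 2:
        out.append(run)
    return out
```

For example, the level sequence 5, 2, 3, 6, 4, 1, 2 decomposes into the phases (2, 3, 6) and (1, 2); the epochs at levels 5 and 4 belong to no phase.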

Proof of Lemma 4.2

Suppose that a node v moves down from level i to level \(j < i\) at time t due to a call to the subroutine FIX-DOWN-DIRTY(v). Let \(\Delta ^{down}\) and \(\Delta _I^{down}\) respectively denote the increase in the counters \(C^{down}\) and \(I^{down}\) due to this event. We will show that \(\Delta ^{down} \le \Delta _I^{down}\), which will conclude the proof of the lemma. By definition, we have:

$$\begin{aligned} \Delta _{I}^{down} = \beta ^{i-2}/\alpha \end{aligned}$$
(4)

Let \(X = \{ e \in E_v : \ell _v(e) \le i \}\) denote the set of hyperedges incident on v that contribute to the increase in \(C^{down}\) due to the Down-Jump of v at time t. Note that if a hyperedge e belongs to the set X, then it must be the case that \(\ell _v(e) = i\). Specifically, we have: \(|X| = \Delta ^{down}\). Each edge \(e \in X\) contributes a weight \(\beta ^{-i}\) towards the node-weight \(W_{v \rightarrow i}(t)\). Thus, we get: \(|X| \cdot \beta ^{-i} \le W_{v \rightarrow i}(t) \le 1/(\alpha \beta ^2)\). The last inequality holds since v is Down-Dirty in the beginning of time-step t. Rearranging the terms, we get: \(\Delta ^{down} = |X| \le \beta ^{i-2}/\alpha \). The lemma now follows from Eq. (4).

Proof of Lemma 4.3

Suppose we call FIX-DOWN-DIRTY(v) at some time \(t_2\). Let \(\ell (v) = i\) just before the call, and let \([t_1,t_2]\) be the epoch with level of v being i. Let \(X := \{e \in E_v : \ell _v(e) \le i\}\) at time \(t_2\). By definition, \(I^{down}\) increases by \(\beta ^{i-2}/\alpha \) during the execution of FIX-DOWN-DIRTY(v); let us call this increase \(\Delta _I^{down}\). Thus, we have:

$$\begin{aligned} \Delta _I^{down} = \beta ^{i-2}/\alpha \end{aligned}$$
(5)

Consider the time-interval \([t_1,t_2]\) and let us examine how \(W_v\) can decrease during this interval while v’s level is fixed at i. Either some hyperedge incident on v is deleted, or some hyperedge \(e \in E_v\) incident on v decreases its weight. In the latter case, the level \(\ell (e)\) of such a hyperedge e must increase above i. Let \(\Delta ^T\) denote the number of hyperedge deletions incident on v during the time-interval \([t_1, t_2]\). Let \(\Delta ^{up}\) denote the increase in the value of \(C^{up}\) during the time-interval \([t_1, t_2]\) due to the hyperedges incident on v. Specifically, at time \(t_1\), we have \(\Delta ^T = \Delta ^{up} = 0\). Subsequently, during the time-interval \([t_1, t_2]\), we increase the value of \(\Delta ^{up}\) by one each time we observe that a hyperedge \(e \in E_v\) increases its level \(\ell (e)\) to something larger than i. Note that \(\ell (v) = i\) throughout the time-interval \([t_1, t_2]\). Hence, each time we observe a unit increase in \(\Delta ^T + \Delta ^{up}\), this decreases the value of \(W_v\) by at most \(\beta ^{-i}\). Just before time \(t_1\), the node v made either an Up-Jump or a Down-Jump. Hence, Claims 3.3 and 3.2 imply that \(W_{v\rightarrow i}(t_1) > 1/\beta ^2\). As \(W_v(t_2) \le 1/(\alpha \beta ^2)\) at time \(t_2\), we infer that \(W_v\) has dropped by at least \((1-1/\alpha ) \cdot \beta ^{-2}\) during the time-interval \([t_1, t_2]\). In order to account for this drop in \(W_v\), the value of \(\Delta ^T + \Delta ^{up}\) must have increased by at least \((1-1/\alpha ) \cdot \beta ^{-2}/\beta ^{-i} = (1-1/\alpha ) \cdot \beta ^{i-2}\). Since \(\Delta ^T = \Delta ^{up} = 0\) at time \(t_1\), at time \(t_2\) we get: \(\Delta ^T + \Delta ^{up} \ge (1-1/\alpha ) \cdot \beta ^{i-2}\). Hence, (5) gives us:

$$\begin{aligned} \Delta _I^{down} \le (\alpha -1)^{-1} \cdot (\Delta ^T + \Delta ^{up}) \end{aligned}$$
(6)

Each time the value of \(I^{down}\) increases due to FIX-DOWN-DIRTY on some node, inequality (6) applies. If we sum all these inequalities for all nodes v, then the left hand side (LHS) will be exactly equal to the final value of \(I^{down}\), and the right hand side (RHS) will be at most \((\alpha -1 )^{-1} \cdot (f \cdot T + (f-1) \cdot C^{up})\). The factor f appears in front of T because each hyperedge deletion can contribute f times to the sum \(\sum \Delta ^T\), once for each of its endpoints. Similarly, the factor \((f-1)\) appears in front of \(C^{up}\) because whenever the level of an hyperedge e moves up due to the increase in the level \(\ell (v)\) of some endpoint \(v \in e\), this contributes at most \((f-1)\) times to the sum \(\sum \Delta ^{up}\), once for every other endpoint \(u \in e, u \ne v\). Since LHS \(\le \) RHS, we get: \(I^{down} \le (\alpha - 1)^{-1} \cdot (f \cdot T + (f-1) \cdot C^{up}) \le (f/(\alpha - 1)) \cdot (T+C^{up})\). This concludes the proof of the lemma.

Proof of Lemma 4.4

To upper bound \(C^{up}\) by a function linear in T and \(C^{down}\), we analyse the increase in \(C^{up}\) caused by each node v and each phase of v individually. Thus, we first need to define suitable variables. Fix a node v and consider a phase \(\Phi _v\) where v goes through levels \(i_1< i_2< \cdots < i_k\). Thus, the node v enters the level \(i_1\) at time \(t_1\) (say) due to a call to FIX-DOWN-DIRTY(v). For \(r \in [2, k]\), the node v performs an Up-Jump at time \(t_r\) (say) from the level \(i_{r-1}\) to the level \(i_r\), due to a call to FIX-UP-DIRTY(v). This implies that \(t_1< t_2< \cdots < t_k\). The phase ends, say, at time \(t_{k+1} > t_k\), when the node v again performs a Down-Jump from the level \(i_k\) due to a call to FIX-DOWN-DIRTY(v). As this proof argues about the weight of v at various levels, we use the notation \(W_{v\rightarrow i_1}({t_1})\) to denote the weight of v at level \(i_1\) at time \(t_1\), as this notation makes explicit the three parameters that the weight depends on (v, \(i_1\), and \(t_1\)), even though we could simply write \(W_v(t_1)\), since the level of v at time \(t_1\) is \(i_1\).

Let \(\Delta ^{up}\) denote the total increase in the value of the counter \(C^{up}\) during the phase \(\Phi _v\), that happens because of the node v. For \(r \in [2, k]\), let \(\Delta ^{up}_r\) denote the increase in the value of the counter \(C^{up}\) due to the Up-Jump of v at time \(t_r\). Thus, we have:

$$\begin{aligned} \Delta ^{up} = \sum _{r = 2}^k \Delta ^{up}_r \end{aligned}$$
(7)

We define two more counters: \(\Delta ^T\), \(\Delta ^{down}\). The former counter equals the number of hyperedge insertions/deletions incident on v during the time-interval \([t_1, t_k]\). The latter counter equals the increase in the value of \(C^{down}\) due to the hyperedges incident on v during the time-interval \([t_1, t_k]\). Thus, \(\Delta ^T\) and \(\Delta ^{down}\) are functions of v, \(t_1\) and \(t_k\), but we omit these parameters to simplify the notation in this proof. (Alternately, these two counters can be defined as follows. At time \(t_1\), we set \(\Delta ^T \leftarrow 0\) and \(\Delta ^{down} \leftarrow 0\). Subsequently, whenever at any time \(t \in [t_1, t_k]\), a hyperedge incident on v gets inserted into or deleted from the input graph, we increment the value of \(\Delta ^T\) by one. Further, whenever at any time \(t \in [t_1, t_k]\), a hyperedge e incident on v gets its level decreased because of a Down-Jump of some node \(u \in e, u \ne v\), we increment the value of \(\Delta ^{down}\) by one.)

Before proceeding with the proof we first state a crucial property of \(W_{v\rightarrow i_1}({t_1})\) that we will use repeatedly below. Since v enters the level \(i_1\) at time \(t_1\) due to a call to FIX-DOWN-DIRTY(v), Claim 3.2 implies that the node v is Super-Clean at level \(i_1\) at time \(t_1\). We now recall the definition of a Super-Clean node from Definition 2.6. There are two cases to consider, depending on the value of \(i_1\).

Case 1. \(i_1 > 0\). In this case, Definition 2.6 implies that \(W_{v \rightarrow i_1}(t_1) \le 1/\beta \) and \(W^+_{v \rightarrow i_1}(t_1) \le 1/\beta ^2\).

Case 2. \(i_1 = 0\). In this case, Definition 2.6 implies that \(W_{v \rightarrow i_1}(t_1) \le 1/\beta ^2 \le 1/\beta \). Furthermore, Definition 2.6 also implies that \(W^+_{v \rightarrow i_1}(t_1) \le W_{v \rightarrow i_1}(t_1) \le 1/\beta ^2\).

We therefore obtain the following bounds on \(W_v\):

$$\begin{aligned} W_{v\rightarrow i_1}({t_1}) \le 1/\beta \qquad \text { and } \qquad W^+_{v\rightarrow i_1}({t_1}) \le 1/\beta ^2 \end{aligned}$$
(8)

Our main goal is to upper bound \(\Delta ^{up}\) in terms of the final values of the counters \(\Delta ^T\) and \(\Delta ^{down}\). To achieve this, we upper bound \(\Delta ^{up}_r\) by \(\beta ^{i_r - 1}\) in the next claim, and in the claim after that we lower bound \(\Delta ^T +\Delta ^{down}\) in terms of \(\beta ^{i_{k-1}}\). In the proof of the next claim we use the fact that we have an upper bound of \(1/\beta \) on \(W_v\) right after an Up-Jump, and that we know the exact weight of every hyperedge that contributes to \(\Delta ^{up}_r\) for the Up-Jump. As these hyperedges also contribute to \(W_{v \rightarrow i_r}(t_r+1)\), the upper bound on \(W_{v \rightarrow i_r}(t_r+1)\) bounds the number of such hyperedges and thus \(\Delta ^{up}_r\).

Claim 4.6

For \(2\le r \le k\), \(\Delta ^{up}_r \le \beta ^{i_r - 1}\).

Proof

By Claim 3.3 we have \(W_{v\rightarrow i_r}(t_r+1) \le 1/\beta \), that is, the total weight incident on v after it has gone through FIX-UP-DIRTY at time \(t_r\) is at most \(1/\beta \). Now, each hyperedge \(e \in E_v\) which contributes to \(\Delta ^{up}_r\) has weight, right after time \({t_r}\), precisely \(\beta ^{-\ell (v)} = \beta ^{-i_r}\). As \(\Delta ^{up}_r \cdot \beta ^{-i_r} \le W_{v\rightarrow i_r}(t_r+1) \le 1/\beta \), we get \(\Delta ^{up}_r \le \beta ^{i_r - 1}\). \(\square \)

Using the slack provided by Eq. (8) and the fact that v performs an Up-Jump from level \(i_{k-1}\) at time \(t_{k}\), we can lower bound the increase in \(W_{v \rightarrow i_{k-1}}\) during the time interval \([t_1,t_k]\) by \((1 - 1/\beta )\). As this weight increase can only happen through insertions of hyperedges incident to v or through hyperedges incident to v increasing their weights, and each such event increases the weight of v by at most \(\beta ^{-i_{k-1}}\), we can lower bound the sum \(\Delta ^T + \Delta ^{down}\) by \(\beta ^{i_{k-1}}/2\). Combining this lower bound with the preceding claim shows the desired upper bound on the sum of all \(\Delta ^{up}_r\) but the last, \(\Delta ^{up}_k\), by a linear function of \(\Delta ^T + \Delta ^{down}\).

Claim 4.7

For \(k > 2\) we have \(\Delta ^T + \Delta ^{down} > \beta ^{i_{k-1}}/2 \) and for \(k > 1\) we have \(\sum _{r=2}^{k-1} \Delta ^{up}_r < 2 (\Delta ^T + \Delta ^{down})\).

Proof

If \(k = 2\), then we have an empty sum \(\sum _{r = 2}^{k-1} \Delta ^{up}_r = 0\), and hence the claim is trivially true.

For the rest of the proof, we suppose that \(k > 2\), which implies that \(i_{k-1} \ge i_2 > i_1 \ge 0\). Thus, we get:

$$\begin{aligned} k> 2 \text { and } i_{k-1} > 0. \end{aligned}$$
(9)

Summing over the inequalities from Claim 4.6, we get:

$$\begin{aligned} \sum _{r = 2}^{k-1} \Delta ^{up}_r \le \beta ^{i_{k-1}} \end{aligned}$$
(10)

Since the node v performs an Up-Jump at time \(t_{k}\) from level \(i_{k-1} > 0\) (see Eq. (9)), the node must be Up-Dirty at that time. It follows that \(W_{v\rightarrow i_{k-1}}(t_{k}) \ge 1\). From Eq. (8), we have \(W_{v \rightarrow i_{k-1}}(t_1) \le W_{v\rightarrow i_{1}}(t_1) \le 1/\beta \). Thus, during the time interval \([t_1,t_k]\) the value of \(W_{v \rightarrow i_{k-1}}\) increases by at least \((1-1/\beta )\). This can be either due to (a) some hyperedge incident to v being inserted, or (b) some hyperedge \(e \in E_v\) gaining weight because of some endpoint \(u \in e, u \ne v,\) going down. The former increases \(\Delta ^T\) and the latter increases \(\Delta ^{down}\). Furthermore, the increase in \(W_{v \rightarrow i_{k-1}}\) due to every such hyperedge is at most \(\beta ^{-i_{k-1}}\). This gives us the following lower bounds:

$$\begin{aligned} \Delta ^T + \Delta ^{down}\ge & {} (1 - 1/\beta )\cdot \beta ^{i_{k-1}} > \beta ^{i_{k-1}}/2 ~~ \text { and, with equation (10),} \nonumber \\ \sum _{r=2}^{k-1} \Delta ^{up}_r< & {} 2(\Delta ^T + \Delta ^{down}) \end{aligned}$$
(11)

The claim follows from Eq. (11). \(\square \)

It now remains to upper bound \(\Delta ^{up}_k\). This is done in Claim 4.8, whose proof appears in Sect. 4.4.1.

Claim 4.8

We have: \(\Delta ^{up}_k < (6\beta ^2) \cdot (\Delta ^T + \Delta ^{down})\).

Putting it all together from Eq. (7) and Claims 4.7, 4.8, we get:

$$\begin{aligned} \Delta ^{up} \le (6\beta ^2 + 2)(\Delta ^T + \Delta ^{down}) \le (7\beta ^2) \cdot (\Delta ^T + \Delta ^{down}) \end{aligned}$$
(12)

Using Eq. (12), we can now prove Lemma 4.4. For every phase of a node v, as per Eq. (12) we can charge the increase in \(C^{up}\) to the increase in \((T + C^{down})\) corresponding to hyperedges incident on v. Summing up over all nodes and phases, the LHS gives \(C^{up}\) while the RHS gives \((7\beta ^2) \cdot (f \cdot T + (f-1) \cdot C^{down})\). The coefficient f before T comes from the fact that every hyperedge insertion can contribute f times to the RHS, once for each of its endpoints. The coefficient \((f-1)\) before \(C^{down}\) comes from the fact that whenever the level of a hyperedge e decreases due to the Down-Jump of a node \(u \in e\), this event contributes at most \((f-1)\) times to the RHS: once for every other endpoint \(v \in e, v \ne u\). Thus, we get:

$$\begin{aligned} C^{up} \le (7\beta ^2) \cdot (f \cdot T + (f-1) \cdot C^{down}) \le 7f\beta ^2 \cdot (T+ C^{down}) \end{aligned}$$

Proof of Claim 4.8

To prove an upper bound of \((6\beta ^2) \cdot (\Delta ^T + \Delta ^{down})\) on \(\Delta ^{up}_k\), we consider two cases: one arises if the levels \(i_k\) and \(i_{k-1}\) are “close”, the other when they are far apart. The first case follows the proof pattern of Claim 4.7, which bounded \(\sum _{r=2}^{k-1} \Delta ^{up}_r\). This is possible since the weight of the hyperedges contributing to \(\Delta ^{up}_k\) is within a factor \(\beta ^3\) of the weight of the hyperedges contributing to \(\Delta ^{up}_{k-1}\), as \(i_k\) and \(i_{k-1}\) are “close”. Thus, the same arguments as in Claim 4.7 can be used to show the desired bound for \(\Delta ^{up}_k\). If, however, \(i_k\) and \(i_{k-1}\) are far apart, a new approach is needed, which we explain in Case 2 below.

Case 1: \(i_k \le i_{k-1} + 3\).

Since \(i_k \le i_{k-1}+3\), Claim 4.6 implies that:

$$\begin{aligned} \Delta ^{up}_k \le \beta ^{i_k - 1} \le \beta ^{i_{k-1}} \cdot \beta ^2 \end{aligned}$$
(13)

In the proof of Claim 4.7, we derived Eq. (11), which gives us:

$$\begin{aligned} \Delta ^T + \Delta ^{down} \ge \beta ^{i_{k-1}}/2 \end{aligned}$$
(14)

Equations (13) and (14) imply that \(\Delta ^{up}_k < 2\beta ^2 \cdot (\Delta ^T + \Delta ^{down}) \le 6\beta ^2 \cdot (\Delta ^T + \Delta ^{down})\). This concludes the proof of Claim 4.8 for this case.

Case 2: \(i_k > i_{k-1} + 3\).

The proof in this case consists of three steps:

  (1)

    First we show that at any point in time the weight of any node is less than 2. This holds due to our subroutine FIX-DIRTY.

  (2)

    Using this upper bound on \(W_v\), we show that \(W_{v \rightarrow (i_k - 3)}^+(t_k) > 1/(2\beta )\). It is interesting that we can show this bound even though v is not at level \(i_k - 3\) at time \(t_k\). We prove the claim by contradiction: if that weight were at most \(1/(2\beta )\), then the weight of v at level \(i_k - 1\) would be at most \(1/\beta \), since, as we show, the difference between \(W_{v\rightarrow (i_k - 1)}(t_k)\) and \(W_{v \rightarrow (i_k - 3)}^+(t_k)\) is at most \(2/\beta ^2\). But then the Up-Jump would have placed v on level \(i_k - 1\) and not on level \(i_k\).

  (3)

    Using the previous bound, we show that \(W_{v \rightarrow (i_k - 3)}^+\) increases from \(t_1\) to \(t_k\) by at least \(1/(2\beta ) - 1/\beta ^2\). Here we use the fact that v was Super-Clean at time \(t_1\), which implies that \(W_{v \rightarrow i_1}^+(t_1) \le 1/\beta ^2\), and the fact that \(W_{v \rightarrow (i_k - 3)}^+(t_1) \le W_{v \rightarrow i_1}^+(t_1)\). In previous situations there were two reasons why the weight of v could increase over a certain time interval; one of them can be attributed to an increase in \(\Delta ^T\) and the other to an increase in \(\Delta ^{down}\). Here, however, there are three such cases, and the third one can be attributed neither to an increase in \(\Delta ^T\) nor to an increase in \(\Delta ^{down}\). However, we can show that all events of the third type together increase \(W_{v \rightarrow (i_k - 3)}^+\) by at most \(1/\beta ^2\) during the time interval \([t_1,t_k]\). Thus, at least \(1/(2\beta ) - 2/\beta ^2\) of the increase of \(W_{v \rightarrow (i_k - 3)}^+\) can be attributed to either an increase in \(\Delta ^T\) or an increase in \(\Delta ^{down}\), and the same arguments as in Claim 4.7 show the desired bound.

We start by noting that the weight of a node is less than 2 at all times.

Claim 4.9

We have: \(W_v(t) < 2\) at every time t.

Proof

The crucial observation is that fixing an Up-Dirty node u never increases the weight of any node. Furthermore, a Down-Dirty node gets fixed only if no other node is Up-Dirty (see Fig. 1).

In the beginning of time-step \(t = 0\), the input graph is empty, and we clearly have \(W_v(t) = 0 < 1\). By induction, suppose that \(W_v(t) < 1\) in the beginning of some time-step t. Now, during time-step t, the weight \(W_v\) can increase only if one of the following events occur:

  • A hyperedge containing v gets inserted into the graph. This increases the value of \(W_v\) by at most one. Thus, we have \(W_v(t+1) < 2\).

  • We call the subroutine FIX-DOWN-DIRTY(u) for some node u. Note that fixing a Down-Dirty node u can increase the weight \(W_u\) by at most one, and hence this can increase the weight of a neighbour of u also by at most one. It again follows that \(W_v(t+1) < 2\).

If \(W_v(t+1) < 1\), then we are back in the same situation as in time-step t. Otherwise, if \(1 \le W_v(t+1) < 2\), then v is Up-Dirty in the beginning of time-step \(t+1\). In this case, no Down-Dirty node gets fixed (and no hyperedge gets inserted) until we ensure that \(W_v\) becomes smaller than one. Hence, the value of \(W_v\) always remains smaller than 2. \(\square \)

Claim 4.10

We have: \(W_{v \rightarrow (i_k - 3)}^+(t_k) > 1/(2\beta )\).

Proof

Suppose that the claim does not hold. Then we get:

$$\begin{aligned} W_{v\rightarrow (i_k - 1)}(t_k) ~~\le & {} ~~ W^+_{v\rightarrow (i_k - 3)}(t_k) + \frac{W_{v\rightarrow (i_k - 3)}(t_k) - W^+_{v\rightarrow (i_k - 3)}(t_k)}{\beta ^2} \nonumber \\\le & {} ~~ 1/(2\beta ) + 2/\beta ^2 \le 1/\beta \end{aligned}$$
(15)

The first inequality holds since the weights of the hyperedges \(e \in E_v\) with \(\ell _v(e) \le i_k - 3\) get scaled by at least a factor of \(1/\beta ^2\) when v moves from level \(i_k - 3\) to \(i_k - 1\), and the rest can only go down. The second inequality holds since \(W_{v\rightarrow (i_k - 3)}(t_k) \le W_{v \rightarrow i_{k-1}}(t_k) < 2\) by Claim 4.9 and the assumption \(W^+_{v\rightarrow (i_k - 3)}(t_k) \le 1/(2\beta )\). The last inequality holds since \(\beta = 6\) by Eq. (1).

Note, however, that \(W_{v \rightarrow (i_{k} - 1)} (t_k) > 1/\beta \), since node v does not stop at level \(i_k -1\) while making the Up-Jump at time \(t_k\). Thus, assuming that the claim does not hold leads to a contradiction. \(\square \)

Claim 4.10 states that \(W^+_{v\rightarrow (i_k - 3)}(t_k) > 1/(2\beta )\). Since \(i_{k-1} < i_k - 3\), Eq. (8) implies that \(W^+_{v\rightarrow (i_k - 3)}(t_1) \le W^+_{v\rightarrow i_{k - 1}}(t_1)\le W^+_{v \rightarrow i_{1}}(t_1) \le 1/\beta ^2\). Thus during the time-interval \([t_1,t_k]\), the value of \(W^+_{v\rightarrow (i_k - 3)}\) increases by at least \(1/(2\beta ) - 1/\beta ^2\). This increase can occur in three ways:

  1.

    a hyperedge \(e \in E_v\) is inserted with \(\ell _v(e) > i_k - 3\) before the Up-Jump at time \(t_k\) (which contributes to \(\Delta ^T\)),

  2.

    some hyperedge \(e \in E_v\) gains weight due to a Down-Jump of some node (say) \(u \in e, u \ne v\), and \(\ell _v(e) > i_k - 3\) after the Down-Jump (which contributes to \(\Delta ^{down}\)), and

  3.

    some hyperedge \(e \in E_v\) had \(\ell _v(e) \le i_k - 3\) at \(t_1\) (i.e. does not contribute to \(W^+_{v\rightarrow (i_k - 3)}(t_1))\) but \(\ell _v(e) > i_k - 3\) at \(t_k\) (i.e. contributes to \(W^+_{v\rightarrow (i_k - 3)}(t_k)\)).

Note that the total weight of the hyperedges of type (3) incident on v, evaluated at level \(i_k - 3\) at time \(t_1\), is at most \(1/\beta \); this follows from (8). Therefore, when \(\ell _v(e)\) for such an edge e rises above \(i_k - 3\), its weight decreases by at least a factor of \(\beta \). Hence the total increase in \(W^+_{v \rightarrow (i_k - 3)}\) due to type (3) hyperedges is at most \(1/\beta ^2\), and the weight increase of at least \(1/(2\beta ) - 2/\beta ^2\) must come from hyperedges of type (1) and type (2). However, each such hyperedge e can contribute at most \(\beta ^{-(i_k - 2)}\) to the weight (since \(\ell _v(e) > i_k - 3\)). Therefore, we get (recall that \(\beta = 6\) by Eq. (1)):

$$\begin{aligned} \left( \Delta ^T + \Delta ^{down}\right) \cdot \beta ^{-(i_k - 2)}\ge & {} \frac{1}{2\beta } - \frac{2}{\beta ^2} \\= & {} \frac{1}{6\beta } ~~~ \text { implying } ~~~ \left( \Delta ^T + \Delta ^{down}\right) \ge \frac{\beta ^{i_k - 1}}{6\beta ^2} \end{aligned}$$

Claim 4.6 gives \(\Delta ^{up}_k \le \beta ^{i_k - 1}\), and therefore we get:

$$\begin{aligned} \Delta ^{up}_k \le 6\beta ^2\left( \Delta ^T + \Delta ^{down}\right) \end{aligned}$$
(16)

This concludes the proof of Claim 4.8 for this case.

Proof of Lemma 4.1

For technical reasons, we assume that we end with the empty graph as well. This is without loss of generality for the following reason. Suppose we made T updates and the current graph is G. At this point, the graph has \(T' \le T\) edges. Suppose the time taken by our algorithm till now is \(T_1\). Now delete all the \(T'\) edges, and let the time taken by our algorithm to handle these \(T'\) updates be \(T_2\). If \(T_1 + T_2 = O(f^2(T+T')) = O(f^2 T)\), then \(T_1 = O(f^2 T)\) as well. Therefore, we may assume that the process ends with an empty graph.

When a hyperedge e is inserted into or deleted from the graph, we take O(f) time to update the relevant data structures for its f endpoints. The rest of the time is spent in implementing the While loop in Fig. 1. We take care of the two subroutines separately.

Case 1. The subroutine FIX-DOWN-DIRTY(v) is called, which moves the node v from level i to some level \(j < i\). We need to account for the time to find the relevant index j and the time taken to update the relevant data structures. By Lemma 2.8, the time taken for the latter is \(\Theta (f\cdot \Delta C^{down})\), where \(\Delta C^{down}\) denotes the increase in the value of the counter \(C^{down}\) due to this event. Further, the value of \(C^{up}\) remains unchanged. For finding the index \(j < i\), it suffices to focus on the edges \(E_{v,i} = \{e\in E_v: \ell _v(e) \le i\}\), since these are the only edges that can potentially change weight as v goes down. Therefore, this takes \(\Theta (|E_{v,i}|)\) time. Since each of these edges had \(w(e) = \beta ^{-i}\) and since \(W_v \le \frac{1}{\alpha \beta ^2}\) before the FIX-DOWN-DIRTY(v) call, we have \(|E_{v,i}| \le \beta ^{i-2}/\alpha \), which is precisely \(\Delta I^{down}\), the increase in the value of the counter \(I^{down}\) due to this event. Therefore, the time taken to find the index j is \(O(\Delta I^{down})\).
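The counting argument in Case 1 boils down to simple arithmetic; a minimal sketch, not from the paper, where the sample values of `alpha`, `beta` and `i` are hypothetical (the paper fixes \(\beta = 6\) and \(\alpha = \Theta (f^2)\) in Eq. (1)):

```python
from fractions import Fraction

def max_low_edges(alpha, beta, i):
    """Upper bound on |{e in E_v : l_v(e) <= i}| just before FIX-DOWN-DIRTY(v).

    Each such edge has weight beta^{-i} and their total weight is at most
    W_v <= 1/(alpha * beta^2), so at most beta^{i-2}/alpha edges can fit.
    """
    return Fraction(beta)**(i - 2) / alpha

# Sanity check with hypothetical values: the bound is tight exactly when
# count * beta^{-i} equals the weight cap 1/(alpha * beta^2).
alpha, beta, i = 36, 6, 5
count = max_low_edges(alpha, beta, i)
assert count * Fraction(beta)**(-i) == Fraction(1, alpha * beta**2)
```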

Case 2. The subroutine FIX-UP-DIRTY(v) is called which moves the node v from level i to level \(j > i\), say. Once again, we need to account for the time to find the relevant index j and the time taken to update the relevant data structures, and once again by Lemma 2.7 the time taken for the latter is \(\Theta (f\cdot \Delta C^{up})\). Further, the value of \(C^{down}\) remains unchanged. We now account for the time taken to find the index j.

Claim 4.11

j can be found in time \(\Theta (j-i)\).

Proof

To see this, note that for \(k\ge i\),

$$\begin{aligned} W_{v\rightarrow k}(t) = \sum _{\ell \ge k} \sum _{e\in E_{v,\ell }} w(e) + \sum _{\ell < k} \frac{1}{\beta ^{k-\ell }} \sum _{e\in E_{v,\ell }} w(e), \end{aligned}$$

where, throughout this proof, \(E_{v,\ell }\) denotes the set of hyperedges \(e \in E_v\) with \(\ell _v(e)\) exactly equal to \(\ell \). This holds since (a) edges not incident on v are immaterial, (b) the edges incident on v whose levels are already \(\ge k\) do not change their weight, and (c) edges whose levels are \(\ell < k\) have their weight go from \(\beta ^{-\ell }\) to \(\beta ^{-k}\). In particular, every edge at level \(\le k\) contributes exactly \(\beta ^{-k}\) to \(W_{v\rightarrow k}(t)\) and exactly \(\beta ^{-(k+1)}\) to \(W_{v\rightarrow (k+1)}(t)\), while the edges at levels \(> k\) contribute the same amount to both. The above implies that for \(k\ge i\),

$$\begin{aligned} W_{v\rightarrow (k+1)} (t) = W_{v\rightarrow k} (t) - \left( 1-\frac{1}{\beta }\right) \beta ^{-k}\cdot \sum _{\ell \le k} |E_{v,\ell }| \end{aligned}$$

That is, \(W_{v\rightarrow (k+1)}(t)\) can be evaluated from \(W_{v\rightarrow k}(t)\) in \(\Theta (1)\) time, since we store each \(|E_{v,\ell }|\) in our data structures and can maintain the running count \(\sum _{\ell \le k} |E_{v,\ell }|\) as k increases. The claim follows. \(\square \)
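The scan in Claim 4.11 can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: `counts[l]` plays the role of the stored number of hyperedges \(e \in E_v\) at level exactly \(\ell \), and `target` is a stand-in for the threshold that FIX-UP-DIRTY uses to decide the destination level j.

```python
def find_up_level(counts, i, beta, target):
    """Scan levels k = i, i+1, ... and return the first k with W_{v->k} <= target.

    counts: dict mapping a level l to the number of hyperedges e in E_v
    with l_v(e) exactly l.  Each refinement of W_{v->k} into W_{v->(k+1)}
    costs O(1), matching the Theta(j - i) bound of Claim 4.11.
    """
    max_level = max(counts, default=i)
    # W_{v->i}: levels >= i keep their weight beta^{-l}; lower levels rise to beta^{-i}.
    w = sum(n * beta**(-l) for l, n in counts.items() if l >= i)
    below = sum(n for l, n in counts.items() if l < i)  # running count of levels < k
    w += below * beta**(-i)
    k = i
    while w > target and k <= max_level:
        below += counts.get(k, 0)
        # Every hyperedge at level <= k loses a (1 - 1/beta) fraction of beta^{-k}.
        w -= (1 - 1 / beta) * beta**(-k) * below
        k += 1
    return k
```

For instance, with \(\beta = 2\) and one hyperedge each at levels 0 and 2, the scan starting at \(i = 1\) computes \(W_{v\rightarrow 1} = 3/4\), then refines it to \(W_{v\rightarrow 2} = 1/2\), and so on, one constant-time step per level.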

Note that in Claim 4.11 the time taken to find the index j can be as large as \(\Theta (\log n)\). To account for this time, we again fix a vertex v and a phase \(\Phi _v\) during which the level of v changes from \(i_1\) to, say, \(i_k\). The total time for finding indices in this phase is \(\Theta (i_k - i_1)\). After this, there must be a DOWN-JUMP due to a call to FIX-DOWN-DIRTY(v), since the final graph is empty. Thus, we can charge the time taken in finding indices in this phase \(\Phi _v\) to \(\Delta I^{down}\) in the FIX-DOWN-DIRTY(v) call right at the end of this phase. We can do so since \(\Delta I^{down} = \beta ^{i_k - 2}/\alpha = \Omega (i_k/f^2)\): indeed, \(\beta ^{i_k - 2} = \Omega (i_k)\) since \(\beta = 6\), and \(\alpha = \Theta (f^2)\) by (1). Therefore, the total time taken to find indices in all the FIX-UP-DIRTY(v) calls is at most \(O(f^2 \cdot I^{down})\).

To summarize, the time taken to initialize and update the data structures is \(O\left( f\cdot \left( C^{up} + C^{down} + T\right) \right) \), and the total time taken to find indices is \(O(f^2\cdot I^{down})\). This proves Lemma 4.1.

Notes

  1. Node updates are usually handled through the insertion and deletion of isolated nodes, but there has also been some work on the node insertions-only or node deletions-only problem [11].

References

  1. Abboud, A., Williams, V.V.: Popular conjectures imply strong lower bounds for dynamic problems. In: Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS) (2014)

  2. Arar, M., Chechik, S., Cohen, S., Stein, C., Wajc, D.: Dynamic matching: reducing integral algorithms to approximately-maximal fractional algorithms. CoRR (2017). arXiv:1711.06625

  3. Baswana, S., Gupta, M., Sen, S.: Fully dynamic maximal matching in \(O(\log n)\) update time. In: Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS) (2011)

  4. Bernstein, A., Stein, C.: Faster fully dynamic matchings with small approximation ratios. In: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA) (2016)

  5. Bhattacharya, S., Chakrabarty, D., Henzinger, M.: Deterministic dynamic matching in \(O(1)\) update time. In: Proceedings of the MPS Symposium on Integer Programming and Combinatorial Optimization (IPCO) (2017)

  6. Bhattacharya, S., Henzinger, M., Italiano, G.F.: Design of dynamic algorithms via primal-dual method. In: Proceedings of the International Colloquium on Automata, Languages and Programming (ICALP) (2015)

  7. Bhattacharya, S., Henzinger, M., Italiano, G.F.: Deterministic fully dynamic data structures for vertex cover and matching. In: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA) (2015)

  8. Bhattacharya, S., Henzinger, M., Nanongkai, D.: New deterministic approximation algorithms for fully dynamic matching. In: Proceedings of the ACM Symposium on Theory of Computing (STOC) (2016)

  9. Bhattacharya, S., Henzinger, M., Nanongkai, D.: Fully dynamic approximate maximum matching and minimum vertex cover in \(O(\log ^3 n)\) worst case update time. In: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA) (2017)

  10. Bhattacharya, S., Kulkarni, J.: Deterministically maintaining a \((2 +\epsilon )\)-approximate minimum vertex cover in \(O(1/\epsilon ^2)\) amortized update time. In: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1872–1885 (2019)

  11. Bosek, B., Leniowski, D., Sankowski, P., Zych, A.: Online bipartite matching in offline time. In: Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS) (2014)

  12. Charikar, M., Solomon, S.: Fully dynamic almost-maximal matching: breaking the polynomial barrier for worst-case time bounds. CoRR (2017). arXiv:1711.06883

  13. Gupta, A., Krishnaswamy, R., Kumar, A., Panigrahi, D.: Online and dynamic algorithms for set cover. In: Proceedings of the ACM Symposium on Theory of Computing (STOC) (2017)

  14. Gupta, M., Peng, R.: Fully dynamic \((1+\epsilon )\)-approximate matchings. In: Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS) (2013)

  15. Henzinger, M., Krinninger, S., Nanongkai, D., Saranurak, T.: Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In: Proceedings of the ACM Symposium on Theory of Computing (STOC) (2015)

  16. Henzinger, M.R., Fredman, M.L.: Lower bounds for fully dynamic connectivity problems in graphs. Algorithmica 22(3), 351–362 (1998)

  17. Neiman, O., Solomon, S.: Simple deterministic algorithms for fully dynamic maximal matching. In: Proceedings of the ACM Symposium on Theory of Computing (STOC) (2013)

  18. Onak, K., Rubinfeld, R.: Maintaining a large matching and a small vertex cover. In: Proceedings of the ACM Symposium on Theory of Computing (STOC) (2010)

  19. Parter, M., Peleg, D., Solomon, S.: Local-on-average distributed tasks. In: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA) (2016)

  20. Patrascu, M.: Lower bounds for dynamic connectivity. In: Encyclopedia of Algorithms, pp. 1162–1167. Springer (2016)

  21. Sankowski, P.: Faster dynamic matchings and vertex connectivity. In: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA) (2007)

  22. Solomon, S.: Fully dynamic maximal matching in constant update time. In: Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS) (2016)


Author information

Correspondence to Sayan Bhattacharya.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

An extended abstract of this paper appeared as [5] in Proceedings of IPCO 2017.

D. Chakrabarty: Work done while the author was at Microsoft Research, India.

M. Henzinger: The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement no. 340506.

A Duality Between Maximum Fractional Matching and Minimum Vertex Cover


Let \(x_v \in \{0,1\}\) be an indicator variable which denotes whether the node \(v \in V\) is picked in the vertex cover. Relaxing the integrality constraint to \(x_v \ge 0\), we obtain the following LP-relaxation for minimum vertex cover.

$$\begin{aligned} \text {Minimize }&\sum _{v \in V} x_v \\ \text {such that: } \qquad&\sum _{v \in e} x_v \ge 1 \text { for all } e\in E. \\&x_v \ge 0 \text { for all } v \in V. \end{aligned}$$

In the above LP, the first constraint encodes the fact that for every hyperedge we have to pick at least one node incident on it. The dual of the above LP is the following.

$$\begin{aligned} \text {Maximize }&\sum _{e \in E} y_e \\ \text {such that: } \qquad&\sum _{e \in E : v \in e} y_e \le 1 \text { for all } v \in V. \\&y_e \ge 0 \text { for all } e \in E. \end{aligned}$$

Note that the dual LP precisely encodes the maximum fractional matching problem: we have to assign a weight \(y_e \ge 0\) to every hyperedge \(e \in E\) in such a way that the total weight received by any node v from all its incident hyperedges is at most 1. The goal is to maximize the total weight assigned to all the hyperedges. Hence, by weak LP duality, the size of the maximum fractional matching in a hypergraph is at most the size of its minimum vertex cover.
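As an illustration of this weak-duality inequality, the following sketch checks it by brute force on a small hypothetical hypergraph with \(f = 3\) (the instance is invented for illustration only):

```python
from itertools import combinations

# A toy hypergraph on four nodes where every hyperedge has at most f = 3 vertices.
V = [0, 1, 2, 3]
E = [{0, 1, 2}, {1, 2, 3}, {0, 3}]

def min_vertex_cover_size(nodes, edges):
    """Smallest set of nodes hitting every hyperedge, by brute force."""
    for k in range(len(nodes) + 1):
        for S in combinations(nodes, k):
            if all(e & set(S) for e in edges):
                return k

# A feasible fractional matching: give every hyperedge weight 1/3 and check
# that no node receives total weight more than 1 from its incident hyperedges.
y = {i: 1.0 / 3 for i in range(len(E))}
for v in V:
    assert sum(y[i] for i, e in enumerate(E) if v in e) <= 1

# Weak duality: the fractional matching value is at most the vertex cover size.
assert sum(y.values()) <= min_vertex_cover_size(V, E)
```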

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Bhattacharya, S., Chakrabarty, D. & Henzinger, M. Deterministic Dynamic Matching in O(1) Update Time. Algorithmica 82, 1057–1080 (2020). https://doi.org/10.1007/s00453-019-00630-4


Keywords

  • Dynamic algorithms
  • Data structures
  • Graph algorithms
  • Matching
  • Vertex cover