Dynamic Kernels for Hitting Sets and Set Packing

Computing small kernels for the hitting set problem is a well-studied computational problem where we are given a hypergraph with n vertices and m hyperedges, each of size d for some small constant d, and a parameter k. The task is to compute a new hypergraph, called a kernel, whose size is polynomial with respect to the parameter k and which has a size-k hitting set if, and only if, the original hypergraph has one. State-of-the-art algorithms compute kernels of size k^d (which is a polynomial as d is a constant), and they do so in time m · 2^d poly(d) for a small polynomial poly(d) (which is linear in the hypergraph size for d fixed). We generalize this task to the dynamic setting where hyperedges may continuously be added or deleted and one constantly has to keep track of a size-k^d kernel.
This paper presents a deterministic solution with worst-case time 3^d poly(d) for updating the kernel upon insertions and time 5^d poly(d) for updates upon deletions. These bounds nearly match the time 2^d poly(d) needed by the best static algorithm per hyperedge. Let us stress that for constant d our algorithm maintains a hitting set kernel with constant, deterministic, worst-case update time that is independent of n, m, and the parameter k. As a consequence, we also get a deterministic dynamic algorithm for keeping track of size-k hitting sets in d-hypergraphs with update times O(1) and query times O(c^k), where c = d − 1 + O(1/d) equals the best base known for the static setting.


Introduction
The hitting set problem is a fundamental combinatorial problem that asks, given a hypergraph, whether there is a small vertex subset that intersects ("hits") each hyperedge. Many interesting problems reduce to it. A dominating set of a graph is just a hitting set in the hypergraph that for every vertex v contains a hyperedge consisting of the closed neighborhood of v. For any fixed graph H, the question of whether we can delete k vertices from a graph G in order to make G an H-free graph can be reduced to the hitting set problem for the hypergraph to which each occurrence of H in G contributes one hyperedge; and this problem in turn generalizes problems such as triangle-deletion and cluster-vertex-deletion [1]. The hitting set problem also finds applications in the area of descriptive complexity, as a fragment of first-order logic can be reduced to it [2].
The hitting set problem is NP-complete [3] and its parameterized version p_k-hitting-set is W[2]-complete [4]. However, if we restrict the size of hyperedges to at most some constant d, the resulting problem p_k-d-hitting-set lies in FPT [5] and even has polynomial kernels. In particular, d = 2 is the vertex cover problem, which is still NP-complete, but one of the best-investigated parameterized problems. Already the jump from d = 2 to d = 3 turns out to be nontrivial in this setting. In detail, the inputs for our algorithms are a hypergraph H = (V, E) and an upper bound k for the size of a wanted hitting set X (a set for which e ∩ X ≠ ∅ holds for all e ∈ E). We think of the numbers n = |V| and m = |E| as large numbers, of k as a (relatively small) parameter, and of d = max_{e∈E} |e| as a small constant (already the cases d = 3 and d = 4 are of high interest).
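To make these definitions concrete, the following minimal Python sketch (the function names are ours, not from the paper) spells out the hitting set property and a brute-force test for the existence of a size-k hitting set; it serves only as a specification, since it runs in exponential time:

```python
from itertools import combinations

def is_hitting_set(edges, X):
    """X is a hitting set iff it intersects ("hits") every hyperedge e."""
    return all(set(e) & set(X) for e in edges)

def has_size_k_hitting_set(vertices, edges, k):
    """Brute-force check for a hitting set X with |X| <= k."""
    return any(is_hitting_set(edges, X)
               for r in range(k + 1)
               for X in combinations(vertices, r))
```

For instance, the hypergraph with hyperedges {1,2}, {2,3}, {4,5} has the size-2 hitting set {2,4} but no size-1 hitting set.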
Parameterized algorithms for the hitting set problem proceed in two steps: First, the input (H, k) is kernelized, which means that we quickly compute a (small) new hypergraph K such that H has a size-k hitting set iff K has one. Afterwards the problem is solved on K using an expensive algorithm based on search trees or iterative compression. The currently best algorithm for computing a kernel with respect to the number of kernel edges is due to Fafianie and Kratsch [6], see also [7,8] for some recent developments. The cited algorithm outputs a kernel of size k^d (meaning that K has at most k^d hyperedges) in time m · 2^d poly(d) (meaning that time 2^d poly(d) is needed on average per hyperedge of H). The best algorithms for solving the hitting set problem on the computed kernel K run in time O(c^k), where the exact value of c = d − 1 + O(1/d) is a subject of ongoing research [9, Section 6] and [10][11][12][13][14]. In summary, on input (H, k) one can solve the hitting set problem in time O(2^d poly(d) · m + c^k).
Our objective in this paper is to transfer (only) the first part of solving the hitting set problem (namely the computation of the kernel K ) into the dynamic setting. Instead of a single hypergraph H being given at the beginning, there is a sequence H 0 , H 1 , H 2 , H 3 , . . . of hypergraphs each of which differs from the previous one by a single edge being either added or deleted. One continuously has to keep track of hitting set kernels K 0 , K 1 , K 2 , K 3 , . . . for the current H i (including moments when H i has no size-k hitting set). Our aim is to compute the updated kernel K i+1 from K i in constant time based solely on the knowledge which edge was added to or deleted from H i in order to obtain H i+1 .
Doing the necessary bookkeeping to dynamically manage a hitting set kernel is not easy. As an example, consider two hypergraphs H and H' with disjoint vertex sets, where H is a clear no-instance (like a matching of size k + 1) while H' is a hard, borderline case that can only be reduced to a relatively large kernel K'. A dynamic kernel algorithm that works on H ∪ H' must be able to cope with the situation that we first add all the edges of H (at which point a natural kernel would be a trivial size-1 no-instance K), followed by all the edges of H' (which even underlines the fact that the trivial no-instance K is a correct kernel for the ever-larger hypergraph), followed by a deletion of the edges from H. At some point during these deletions, a dynamic kernel algorithm must switch from the constant-size K to the large kernel K'. Previous work from the literature [15] shows that it is already tricky to achieve this switch in time polynomial in the size of the kernels K and K'. The challenge we address is to do the updates in constant worst-case time, which forces our dynamic algorithm to spread the necessary changes over time while neither resorting to amortization nor to randomization.
Note that we only give a dynamic algorithm for keeping the kernel up-to-date with constant update times; we make no claims concerning the time needed to actually compute a hitting set for the current kernel K_i (and, thus, for the current H_i). Phrased in terms of dynamic complexity theory, there are two different problems for which we present algorithms with differing update times (the time needed for updating internal data structures) and query times (the time needed to construct an output upon request): For the first problem of (just) computing hitting set kernels K for inputs H, we present a dynamic algorithm with constant update time and zero query time (since the current kernel K_i is explicitly stored in memory as an adjacency matrix at all times). For the second problem of computing size-k hitting sets X for inputs H, our dynamic algorithm also has constant update time (to keep track of kernels K_i), but has a query time of c^k (to compute X_i from K_i, i.e., a hitting set for H_i). Since in both cases our update times are constant and since it is not hard to see that one cannot improve the query times beyond the time needed by the fastest static algorithm, these bounds are optimal.

Main Result: A Fully Dynamic Hitting Set Kernel
In the fully dynamic case where edges may be inserted and deleted over time, the hypergraph may repeatedly switch between having and not having a size-k hitting set. This turns out to be a big obstacle for updating a kernel in just a few steps. Dynamic kernels have already been constructed by Alman, Mnich, and Williams [15]. They present a p_k-vertex-cover kernel with O(k) worst-case update time and O(1) amortized update time. For p_k-d-hitting-set they achieve a kernel of size (d − 1)! · k · (k + 1)^(d−1) with update time (d!)^d · k^O(d^2).
In this paper, for each fixed number d we present a fully dynamic algorithm that maintains a p_k-d-hitting-set kernel of size O(k^d) with constant update time.

Theorem 1 For every d ≥ 2 there is a deterministic, fully dynamic kernel algorithm for the problem p_k-d-hitting-set that maintains at most ∑_{i=0}^{d} k^i ≤ (k + 1)^d hyperedges in the kernel, has worst-case insertion time 3^d poly(d), and worst-case deletion time 5^d poly(d). As d is a constant, the algorithm performs both insertions and deletions in time that is constant and independent of the input and the parameter k.
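The bound on the number of kernel hyperedges stated in Theorem 1 is a direct consequence of the binomial theorem, since every binomial coefficient is at least 1:

```latex
\sum_{i=0}^{d} k^i \;\le\; \sum_{i=0}^{d} \binom{d}{i}\, k^i \;=\; (k+1)^d .
```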

Corollary 1 There is a fully dynamic algorithm for p_k-d-hitting-set with update time O(1) and query time O(c^k), where c = d − 1 + O(1/d) equals the best base known for the static setting.
In order to achieve update times independent of k, this paper makes three major improvements on the general sunflower approach [1]. First, relevant objects are handled hierarchically. This allows an inductive construction and an analysis that improves the bounds on the kernel size as well as the update time. Second, we replace the notion of strong edges (see [15]) by needed edges to be defined later. Whenever a flower is formed, the replacement of its petals can be handled much more easily this way. Finally, the use of b-flowers (see also [6]) instead of generalized sunflowers [15] decreases the size of the kernel.
Our kernel is a full kernel [16]: It preserves all size-k solutions. Therefore, we can use the kernel for counting and enumeration problems; and we can use it as an approximate solution. The kernel size is optimal insofar as p_k-d-hitting-set has no kernel of size O(k^(d−ε)) for any ε > 0 unless coNP ⊆ NP/poly [17]. Note that if we feed the hyperedges of a static hypergraph to our algorithm one at a time, we compute a static kernel in time 3^d poly(d) · m. Since the currently best algorithms [6,7,18] run in time 2^d poly(d) · m, our algorithm is not far from the best static runtime: the difference just lies in the constant factor 3^d versus 2^d.
Extension to Set Packing: Our hitting set kernelization can be adapted for p_k-matching and the more general p_k-d-set-packing: The input (H, k) is as before, but the question is whether there is a packing P ⊆ E with |P| ≥ k (that is, e ∩ f = ∅ for any two different e, f ∈ P).

Related Work: A sequence of improved upper bounds on the kernel size for p_k-d-hitting-set is due to Flum and Grohe [5], van Bevern [18], and Fafianie and Kratsch [6]. Damaschke studied full kernels for the problem, which are kernels that contain all small solutions [16]. There are also optimized algorithms for specific values of d, for instance the algorithm by Buss and Goldsmith [19] for d = 2, or those by Niedermeier and Rossmanith [12] and Abu-Khzam [1] for d = 3.
Dynamic algorithms can be used in a variety of monitoring applications such as maintaining a minimum spanning tree [20] or connected components [21]. There is also a recent trend in studying dynamic approximation algorithms, for instance for vertex-cover [22]. Algorithms that maintain a solution for a dynamically changing input can also be studied using descriptive complexity, as suggested by Patnaik and Immerman [23]. A recent break-through result in this area is that reachability is contained in DynFO [24].
Iwata and Oka [25] were the first to combine kernelization and dynamic algorithms by studying a dynamic quadratic kernel for p k -vertex-cover. Their dynamic kernel algorithm requires O(k 2 ) update time and works in a promise model where at all times it is guaranteed that there actually is a size-k vertex cover in the input graph.
Alman, Mnich, and Williams extended this line of research by studying dynamic parameterized algorithms for a broad range of problems [15]. Among others, they provided a p k -vertex-cover kernel with O(k) worst-case update time and O(1) amortized update time that works in the fully dynamic model. Their generalization to a fully dynamic algorithm for p k -d-hitting-set with a slightly larger kernel size and noninstant update time has already been mentioned above. Recent advances in dynamic FPT-algorithms were achieved by a dynamic data structure that maintains an optimum-height elimination forest for a graph of bounded treedepth [26].
Organization of This Paper: After a short introduction to dynamic algorithms, data structures, and parameterized complexity in Sect. 2, we first illustrate the algorithm for the special case of p k -vertex-cover in Sect. 3. Then, in Sect. 4, we generalize the algorithm to p k -d-hitting-set. In Sect. 5 we argue that with slight modifications, the same algorithm can be used to maintain a polynomial kernel for p k -d-set-packing.

A Framework for Parametrized Dynamic Algorithms
Our aim is to dynamically maintain kernels with minimal update time. To formalize this, let us begin with the definition of kernels and then explain properties of dynamic kernels. Since we are interested in constant update times, some remarks on standard data structures will also be of interest.
A uniform d-hypergraph has |e| = d for all e ∈ E; e.g., a graph is a uniform 2-hypergraph. We use V_d to denote the set { e ⊆ V | |e| = d } of all size-d hyperedges and let V_{≤d} = { e ⊆ V | |e| ≤ d }. Parameterized hypergraph problems are sets Q ⊆ Σ* × N, where instances (H, k) ∈ Σ* × N consist of a hypergraph H and a parameter k. The p_k-d-hitting-set and p_k-d-set-packing problems from the introduction are examples. Note that in both cases k is the parameter while d is fixed; the special cases for d = 2 are exactly p_k-vertex-cover and p_k-matching. A parameterized problem is in FPT if the question (H, k) ∈ Q can be decided in time f(k) · (|V| + |E|)^c for some computable function f and constant c. It is known that Q ∈ FPT holds iff kernels can be computed for Q in polynomial time [27]. Kernels of polynomial size are of special interest: For a polynomial σ, a σ-kernel for an instance (H, k) ∈ Σ* × N of a problem Q is another instance (K, k') such that (H, k) ∈ Q holds if, and only if, (K, k') ∈ Q holds, and such that the size of K is bounded by σ(k). Kernel algorithms normally ensure k' ≤ k and we always have k' = k. Polynomial kernels for p_k-d-hitting-set can be computed in polynomial time; our objective is to maintain such kernels in a dynamic setting.
Dynamic Hypergraphs and Dynamic Kernels: One might consider several properties of hypergraphs that change in a dynamic way. We consider as fixed and immutable the bound d on the hyperedge sizes, the vertex set V, and also the parameter k. That means only the most specific one, the hyperedge set E, will change dynamically. We assume that initially it is the empty set; for convenience (and without loss of generality) we assume that only missing hyperedges are inserted and only existing ones deleted. A dynamic hypergraph algorithm gets the update sequence of a dynamic hypergraph as input and has to output a sequence of solutions, one for each H_i. Crucially, the solution for H_i must be generated before the next operation o_{i+1} is read. While after each update we could solve the problem from scratch for H_i, we may do better by taking into account that the difference between H_{i−1} and H_i is small. With the help of an internal auxiliary data structure A_i that the algorithm updates alongside the graphs, one might be able to solve the original problem faster after each update. The problem we wish to solve dynamically is to compute for each H_i a kernel K_i (as opposed to the problem of solving the parameterized problem Q itself).

Definition 2 (Dynamic Kernel Algorithm)
Let Q be a parameterized problem and σ: N → N be a bound. A dynamic kernel algorithm Algo for Q with kernel size σ(k) has three methods:
1. Algo.init(n, k) gets the size n of V and the parameter k as inputs, neither of which will change during a run of the algorithm, and must initialize an auxiliary data structure A_0 and a kernel K_0 for (H_0, k) and Q and σ (observe that H_0 = (V, ∅) holds).
2. Algo.insert(e) gets a hyperedge e to be added to H_{i−1} and must update A_{i−1} and K_{i−1} to A_i and K_i with, again, K_i being a kernel for (H_i, k) and Q and σ.
3. Algo.delete(e) removes an edge instead of adding it.
One could also require that only the data structure A_i is updated in each step, while a kernel K_i would only need to be computed upon a query request. This would allow us to differentiate between update times and query times for computing kernels. By requiring that the kernel K_i is explicitly computed at each step alongside A_i, our definition implies a query time of zero for computing K_i. However, solving the query "(H_i, k) ∈ Q?" using K_i may take time exponential in k. Concerning the update times, an efficient dynamic kernel algorithm should of course compute A_i and K_i faster than a static kernelization that processes H_i completely. The best one could hope for is constant time for the initialization and per update, even independent of the parameter k, and this is exactly what we achieve in this paper.

Data Structures for Dynamic Algorithms:
The A_i rely on data structures such as objects and arrays. We additionally use a novel data structure called a relevance list: an ordinary list equipped with a relevance bound ρ ∈ N, where the first ρ elements are said to be relevant, while the others are irrelevant. This data structure supports insertion and deletion, querying the relevance status of an element, and querying the last relevant element, each in O(1) time. For concrete implementations and an analysis, see the appendix.
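One way to obtain these O(1) bounds is a doubly linked list that additionally maintains the number of relevant nodes and a pointer to the last relevant node; since insertions append at the end, the relevant elements always form a prefix of the list. The following Python sketch is our own reconstruction from the description above (the appendix implementation may differ in details):

```python
class _Node:
    """Doubly linked list node carrying a relevance flag."""
    __slots__ = ("val", "prev", "nxt", "relevant")

    def __init__(self, val):
        self.val, self.prev, self.nxt, self.relevant = val, None, None, False


class RelevanceList:
    """List with relevance bound rho: the first rho elements are relevant."""

    def __init__(self, rho):
        self.rho = rho
        self.head = _Node(None)          # sentinel before the first element
        self.tail = _Node(None)          # sentinel after the last element
        self.head.nxt, self.tail.prev = self.tail, self.head
        self.last_rel = None             # last relevant node, None if none
        self.num_rel = 0                 # current number of relevant nodes

    def insert(self, val):
        """Append val; it is relevant iff fewer than rho nodes are relevant."""
        node = _Node(val)
        node.prev, node.nxt = self.tail.prev, self.tail
        self.tail.prev.nxt = node
        self.tail.prev = node
        if self.num_rel < self.rho:
            node.relevant = True
            self.num_rel += 1
            self.last_rel = node
        return node

    def delete(self, node):
        """Unlink node; promote the first irrelevant node if needed."""
        node.prev.nxt, node.nxt.prev = node.nxt, node.prev
        if node.relevant:
            self.num_rel -= 1
            if node is self.last_rel:
                self.last_rel = node.prev if node.prev is not self.head else None
            succ = self.last_rel.nxt if self.last_rel else self.head.nxt
            if succ is not self.tail:    # first irrelevant node becomes relevant
                succ.relevant = True
                self.num_rel += 1
                self.last_rel = succ

    def is_relevant(self, node):
        return node.relevant

    def last_relevant(self):
        return self.last_rel
```

On deletion of a relevant node, the first irrelevant node, which is the successor of the last relevant one, gets promoted, so the prefix property is preserved in constant time.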

Dynamic Vertex Cover with Constant Update Time
In order to better explain the ideas behind our dynamic kernel algorithm, we first tackle the case d = 2 in this section and show how we can maintain kernels of size O(k^2) for the vertex cover problem with update time O(1). The idea is based on a well-known static kernel: Buss [19] noticed that in order to cover all edges of a graph G = (V, E) with k vertices, we must pick any vertex with more than k neighbors (let us call such vertices heavy). If there are more than k^2 edges after all heavy vertices have been picked and removed, no vertex cover of size k is possible, since each light vertex can cover at most k edges (light vertices being all but the heavy ones).
To turn this idea into a dynamic kernel, let us first consider only insertions. Initially, new edges can simply be added to the kernel; but at some point a vertex v "becomes heavy." In the static setting one would remove v from the graph and decrease the parameter by 1. In the dynamic setting, however, removing v with its adjacent edges would take time O(k) rather than O(1). Instead, we leave v in the graph, but do not add further edges containing v to the kernel once v becomes heavy. We call the first k + 1 edges relevant for the vertex and the rest irrelevant. By putting the relevant edges of a heavy vertex in the kernel, we ensure that this vertex still must be chosen for any vertex cover. By leaving out the irrelevant edges, we ensure a kernel size of at most O(k^2). More precisely, if the kernel size now threatens to exceed k^2 + k + 1, then any additional edges will be irrelevant for the kernel since the already inserted edges form a proof that no size-k vertex cover exists.
Being relevant for a vertex is a "local" property: For an edge e = {u, v}, the vertex u may consider e to be relevant, while v may consider it to be irrelevant. An edge only "makes it to the kernel" when it is relevant for both endpoints; then it will be called needed. It is not obvious that this is how the case of a "disagreement" should be resolved and that this is the right notion of "needed edges", but Lemma 3 shows that it leads to a correct kernel.
A Dynamic Vertex Cover Kernel Algorithm: We turn the sketched ideas into a formal algorithm in the sense of Definition 2. The initialization sets up the auxiliary data structures for a hypergraph with n vertices and a parameter k: One relevance list L_v with relevance bound k + 1 per vertex v (to keep track of the edges that are relevant for v) and one relevance list L (to keep track of the edges that are relevant for the kernel). The code violates the requirement that the initialization procedure should run in constant time, but a trick [28] for ensuring this will be discussed in the general hitting set case.

The delete operation for an edge e is more complex: When e = {u, v} is removed from the lists L_u, L_v, and L, formerly irrelevant edges may suddenly become relevant from the point of view of these three lists and, thus, possibly also needed. Fortunately, we know which edge may suddenly have become relevant for a list: After the removal of e, the edge e' that is now the last relevant edge stored in the list is the (only) one that may have become relevant, and relevance lists keep track of the last relevant element.

Correctness and Kernel Size: The relevant edges in L clearly have some properties that we would expect of a kernel: First, there are at most k^2 + k + 1 of them (for the simple reason that L caps the number of relevant edges in line 4), which is exactly the size that a kernel should have. Second, it is also easy to see from the code of the algorithm that all operations run in time O(1). Two lemmas make these observations precise, where R(L) denotes the set of relevant edges in a list L and E(L) denotes all edges in L; and where we say that a dynamic algorithm maintains an invariant if that invariant holds for its auxiliary data structure right after the init method has been called and after every call to insert and delete.
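The insertion path of the algorithm can be sketched as follows. This is a deliberately simplified, insertion-only illustration in which plain Python lists play the role of the relevance lists (so the O(1) bounds are not preserved); the class and method names are ours, not the paper's:

```python
class DynKernelVCSketch:
    """Insertion-only sketch of the dynamic vertex cover kernel.

    An edge is "relevant" for a list iff it is among the first `bound`
    entries; it is "needed" iff it is relevant for both of its endpoints."""

    def __init__(self, n, k):
        self.k = k
        self.L_v = {v: [] for v in range(n)}  # per-vertex lists, bound k+1
        self.L = []                           # kernel list, bound k^2+k+1

    def _is_relevant(self, lst, bound, e):
        return e in lst[:bound]

    def insert(self, e):
        u, v = e
        self.L_v[u].append(e)
        self.L_v[v].append(e)
        # e enters L only when both endpoints consider it relevant
        if self._is_relevant(self.L_v[u], self.k + 1, e) and \
           self._is_relevant(self.L_v[v], self.k + 1, e):
            self.L.append(e)

    def kernel(self):
        # the relevant edges of L form the kernel R(L)
        return self.L[: self.k ** 2 + self.k + 1]
```

With k = 1, inserting the star edges (0,1), (0,2), (0,3) puts only the first two into L: the third is irrelevant for the heavy center vertex 0 and hence not needed.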

Proof of Lemma 1
The relevance list L is set up in line 4 to have at most the claimed number of relevant elements.

Proof of Lemma 2
The code itself clearly needs only time O(1) and calls only operations on relevance lists, all of which run in constant time.
The crucial, much less obvious property of the algorithm is stated in the next lemma, whose proof contains a non-trivial recursive analysis showing that irrelevant edges must already be covered by relevant edges inserted earlier.

Proof of Lemma 3 One direction is trivial since R(L) ⊆ E.
For the other direction, consider a size-k vertex cover X of R(L), that is, a set X with |X| = k and X ∩ e ≠ ∅ for all e ∈ R(L). We need to show that X ∩ e ≠ ∅ holds for all e ∈ E. We distinguish three cases: e ∈ R(L), e ∈ E(L) − R(L), and e ∈ E − E(L).
Case 1: The edge is in L and is relevant. The first case is trivial: If e ∈ R(L), then by assumption we have X ∩ e ≠ ∅ as claimed.
Case 2: The edge is in L, but is irrelevant. For the second case, we need an observation:

Claim 1 Every vertex v ∈ V is contained in at most k + 1 edges of R(L).

Proof Consider any v ∈ V. All edges in R(L) that contain v must be relevant edges with respect to L_v since the function check_if_needed only allows such edges to enter L. However, the init method sets up L_v to contain at most k + 1 relevant edges.

Using this observation, we see that the second case (e ∈ E(L) − R(L)) cannot happen: L can only have an irrelevant edge if there are already k^2 + k + 1 relevant edges in R(L). However, by the claim, each of the k many x ∈ X covers at most k + 1 edges in R(L), implying that X covers at most k(k + 1) = k^2 + k edges of R(L). In particular, contrary to the assumption, one edge of R(L) is not covered by X.
Case 3: The edge is not even in L. For the third case, let e ∈ E − E(L), that is, let e = {u, v} be an edge that "did not make it into the L list." This can only happen because it was irrelevant for L u or L v (or both).
Recall that when e is irrelevant for a list L_u, this means that u has more than k + 1 adjacent edges in E and, hence, u must be present in any vertex cover of G = (V, E). If all the relevant edges of u are also present in R(L), then u has exactly k + 1 neighbors in the graph (V, R(L)) and, in particular, the vertex cover X must include u. Unfortunately, it may happen that even though a vertex u has some irrelevant adjacent edges in E, not all relevant edges of L_u make it into L: After all, the other endpoint v of an edge e' = {u, v} may also have irrelevant adjacent edges and e' may happen to be one of them. We can now try to apply the same argument to v, but may again find yet another edge e'' and another vertex w that causes v to have a degree less than k + 1 in R(L). Fortunately, it turns out that after a finite number of steps, we arrive at a vertex that must be present in X. Furthermore, starting from this vertex, we can track back to show that eventually we must have u ∈ X. The details are as follows.

Claim 2
There is an ordering u_1, ..., u_q of the vertices of degree at least k + 1 in E such that, for every i, at most i − 1 of the k + 1 relevant edges of L_{u_i} are missing from R(L).

Proof In the current graph G, each edge e has a time t_e when it entered the graph and these times define a total order on the edges in E. For each vertex v, let l(v) be the last relevant edge of L_v, that is, the edge returned by L_v.last_relevant. Order the vertices of degree at least k + 1 in E according to the following rule: For i < j we must have that t_{l(u_i)} ≤ t_{l(u_j)} (if two vertices u_i and u_j happen to have the same last relevant edge, they can be ordered arbitrarily). Consider any u_i. Then all edges from u_i to any vertex v ∉ {u_1, ..., u_{i−1}} are relevant for L_v since the last relevant edge of L_v is an edge that came later than the edge {u_i, v} and, hence, {u_i, v} is relevant for L_v. However, this means that the only edges of R(L_{u_i}) that do not get passed to L can be those of the form {u_i, u_j} for some j ∈ {1, ..., i − 1}. Clearly, since L_{u_i} has k + 1 relevant edges and only i − 1 do not get passed, we get the claim.
We can use this claim to show {u_1, ..., u_q} ⊆ X: We show by induction on i that u_i ∈ X holds (if u_i ∉ X, the at least k + 1 − (i − 1) edges of R(L_{u_i}) in R(L) that avoid u_1, ..., u_{i−1} would have to be covered by as many distinct other endpoints in X, while by the induction hypothesis only k − (i − 1) vertices of X remain available). This concludes the third case: If e ∈ E − E(L), then one or both elements of e must be one of the u_i, and we just saw that all of them are in X. Hence, X ∩ e ≠ ∅.
Put together, we get the following special case of Theorem 1:

Theorem 3 DynKernelVC is a dynamic kernel algorithm for p_k-vertex-cover with update time O(1) and kernel size k^2 + k + 1.

Proof of Theorem 3
Lemmas 1, 2, and 3 together state that at all times during a run of the algorithm DynKernelVC the graph (V, R(L)) has at most k^2 + k + 1 edges and has the same size-k vertex covers as the current graph. Thus, (V, R(L)) is almost a kernel, except that R(L) is actually a linked list of edges (with potentially large vertex identifiers). However, we can simultaneously keep track of an adjacency matrix of a graph K with the vertex set V_K = {1, ..., 2(k^2 + k + 1)} and with an edge set E_K that is always isomorphic to R(L). In particular, K has a size-k vertex cover if, and only if, G has one.
The update times are constant. The time needed for DynKernelVC.init(n, k) can be made constant with the already mentioned trick [28], see Lemma 16.

Dynamic Hitting Set Kernels
The hitting set problem is a generalization of vertex-cover to hypergraphs. However, allowing larger hyperedges introduces considerable complications into the algorithmic machinery. Nevertheless, we still seek and prove an update time that is constant. More precisely, it is independent of n = |V|, m = |E|, and the parameter k, while it does depend on d (in fact even exponentially). Such an exponential dependency on d seems unavoidable, since a direct consequence of our dynamic algorithm is a static algorithm with running time 3^d poly(d) · m, and the currently best static algorithm runs in time 2^d poly(d) · m.
The first core idea of our algorithm concerns a replacement notion for the "heavy vertices" from the previous section. Sunflowers [29] are usually a stand-in (see [5, Section 9.1] and [8,30]), but they are hard to find and especially hard to manage dynamically. Instead, we use an idea first proposed by Fafianie and Kratsch [6], but adapted to our dynamic setting: a generalization of sunflowers, which we call b-flowers for different parameters b ∈ N, that will be easier to keep track of dynamically. The second core idea is to recursively reduce each case d to the case d − 1: For a fixed d > 2, we compute a set of hyperedges relevant for the kernel (the set R(L), but now called R(L_d[∅]) in the more general case), but additionally we dynamically keep track of an instance of p_k-(d − 1)-hitting-set and merge the dynamic kernel for this instance (which we get from the recursion) with the list of hyperedges relevant for the kernel.

From High-Degree Vertices in Graphs to Flowers in Hypergraphs
For example, the edges adjacent to a heavy vertex v form a (large) sunflower with core {v}. In general, any size-k hitting set has to intersect the core of a sunflower if the sunflower has more than k edges, which means that replacing large sunflowers by their cores is a reduction rule for p_k-d-hitting-set. This rule yields a kernel since the Sunflower Lemma [29] states that every d-hypergraph with more than k^d · d! hyperedges contains a sunflower of size k + 1.
Unfortunately, it is not easy to find sunflowers for larger d in the first place, let alone to keep track of them in a dynamic setting with constant update times. Rather than trying to find all sunflowers, we use a more general concept called b-flowers. We remark that we introduce this notation here to illustrate the general idea of the algorithm, but we will only need the definition later in Lemma 10 to prove the kernel properties.

Definition 3 For a hypergraph H = (V, E) and a number b ∈ N, a b-flower with core c ⊆ V is a set F ⊆ E of hyperedges that each contain c as a subset, such that every vertex v ∈ V − c lies in at most b of the hyperedges of F.
Note that a 1-flower is exactly a sunflower and, thus, b-flowers are in fact a generalization of sunflowers, see Fig. 1 for an example.
The following property of b-flowers is essential for our dynamic kernelization strategy (it implies that we can replace large flowers by their cores):

Lemma 4 Let F be a b-flower with core c in H and let X be a size-k hitting set of H. If |F| > b · k, then X ∩ c ≠ ∅ ("X must hit c").
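The underlying counting argument can be sketched as follows (a sketch based on the flower property above, not the paper's verbatim proof):

```latex
% If $X \cap c = \emptyset$, every $x \in X$ lies outside the core and,
% by the $b$-flower property, lies in at most $b$ hyperedges of $F$. Hence
\[
  \bigl|\{\, e \in F : e \cap X \neq \emptyset \,\}\bigr|
  \;\le\; \sum_{x \in X} b \;\le\; b \cdot k \;<\; |F| ,
\]
% so some hyperedge of $F$ is not hit by $X$, contradicting that $X$
% is a hitting set. Thus $X \cap c \neq \emptyset$.
```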

Dynamic Hitting Set Kernels: A Recursive Approach
As previously mentioned, the core idea behind our main algorithm is to recursively reduce the case d to d − 1. To better explain this idea, we illustrate how the (already covered) case d = 2 can be reduced to d = 1 and how this in turn can be reduced to d = 0. Following this, we present the complete recursive algorithm, prove its correctness, and analyze its runtime.
Recall that DynKernelVC adds up to k + 1 edges per vertex v into the kernel R(L) to ensure that v "gets hit." In the recursive hitting set scenario we ensure this differently: When we notice that v is "forced" into all hitting sets, we add a new hyperedge {v} to an internal 1-hypergraph used exclusively to keep track of the forced vertices (clearly the only way to hit {v} is to include v in the hitting set). When, later on after a deletion, we notice that a singleton hyperedge is no longer forced, we remove it from the internal 1-hypergraph once more. Since we have to ensure that not too many new hyperedges make it into the final kernel, we keep track of a dynamic kernel of the internal 1-hypergraph (using a dynamic kernel algorithm for d = 1) and then join this kernel with R(L).
Using a hypergraph to track the forced vertices allows us to change the relevance bounds of the algorithm: For the lists L_v these were k + 1, but since we explicitly "force" {v} into the solution by generating a new hyperedge, it is enough to set the bound to k. Similarly, the bound for the original list L was set to k^2 + k + 1 since this constitutes a proof that no size-k vertex cover exists. In the new setting with the relevance bound for L_v lowered to k, we can also lower the relevance bound for L to k^2: All vertices v ∈ V have a degree of at most k in R(L) and, thus, k vertices can hit at most k^2 hyperedges. If L contains more elements, we consider the (unhittable) empty hyperedge as forced and add it to the 1-hypergraph.
In order to dynamically keep track of a kernel for the internal 1-hypergraph, we proceed similarly: We simply put all its hyperedges (which have size 1 or 0) in a list (called L_1[∅] in the algorithm). If the number of hyperedges in this list exceeds k, we immediately know that no hitting set of size k exists; and we "recursively remember this" by inserting the empty set into yet another internal 0-hypergraph, which is the recursive call to d = 0.
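To make the base of the recursion concrete, here is a minimal Python sketch of the d = 1 bookkeeping just described (our own toy code with invented names, not the paper's pseudocode): all hyperedges of size 0 or 1 live in a single list with relevance bound k, and overflowing this bound forces the empty set into the internal 0-hypergraph.

```python
class DynKernel1:
    """Toy sketch (our naming, not the paper's code) of a dynamic
    kernel for 1-hypergraphs: all hyperedges of size 0 or 1 go into a
    single list with relevance bound k. If the list holds more than k
    hyperedges, no size-k hitting set exists, which we record by
    "forcing" the empty set into the internal 0-hypergraph."""

    def __init__(self, k):
        self.k = k
        self.edges = []            # plays the role of L_1[emptyset]
        self.forced_empty = False  # stands in for the 0-hypergraph

    def insert(self, e):
        self.edges.append(frozenset(e))
        self.forced_empty = len(self.edges) > self.k

    def delete(self, e):
        self.edges.remove(frozenset(e))
        self.forced_empty = len(self.edges) > self.k

    def kernel(self):
        return self.edges[: self.k]   # the relevant prefix of the list
```
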
Managing Needed and Forced Hyperedges: In the general setting (now for arbitrary d), we need a uniform way to keep track of lists like the L_v and L for the many different internal hypergraphs. We do this using arrays L_i whose entries are lists L_i[s], one for each potential core s. In the following, we develop code that ensures that the Need Invariant and the Force Invariant hold at all times. We will show that this is the case both for insert operations and for delete operations. Then we show that the invariants imply that K = ⋃_{i=0}^{d} R(L_i[∅]) is a kernel for the hitting set problem. Finally, we analyze the runtimes.
Initialization: The initialization creates the arrays L_i and the relevance lists. The important point is that both allocation and pre-filling can be done in constant time using the standard trick of working with uninitialized memory [28].
Independently of the time needed for the allocation, observe that the amount of memory we allocate is about O(n^d), which is already too much in almost any practical setting for d = 3, see [31, Chapter 5] for a discussion of experimental findings. However, we will only use a very small fraction of the allocated memory: The only lists L_i[s] that are non-empty at any point during a run of the algorithm are those where s ⊆ e holds for some e ∈ E. This means that we actually only need space O(2^d · |E|) to manage the non-empty lists if we use hash tables. Of course, this entails a typically non-constant overhead per access for managing the hash tables, which is why our analysis is only for the wasteful implementation above. For a clever way around this problem in the static setting, see [8].
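A minimal sketch of this hash-table variant (our own illustration with invented names, not the paper's implementation): since only subsets of inserted hyperedges are ever used as indices, a dictionary keyed by frozensets stores at most 2^d entries per hyperedge.

```python
from itertools import combinations

def subsets(e):
    """All subsets of hyperedge e, as frozensets (2^|e| many)."""
    e = list(e)
    for r in range(len(e) + 1):
        for c in combinations(e, r):
            yield frozenset(c)

class SparseLists:
    """Store the lists L_i[s] only for sets s that occur as subsets of
    inserted hyperedges; space is O(2^d * |E|) instead of O(n^d)."""
    def __init__(self):
        self.lists = {}   # key (i, s) -> list of hyperedges

    def add(self, i, s, e):
        self.lists.setdefault((i, frozenset(s)), []).append(frozenset(e))

    def get(self, i, s):
        return self.lists.get((i, frozenset(s)), [])

# Inserting one hyperedge touches only its 2^|e| subsets:
store = SparseLists()
e = {1, 2, 3}
for s in subsets(e):
    store.add(3, s, e)
```
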

Lemma 5 The Need and Force Invariant hold after the init method has been called.
Proof of Lemma 5 All lists are empty after the initialization.
Insertion of e = {u, v, w} now yields:
Observe that the property of being needed is "upward closed": if s is needed in L_i[p], it is also needed in all L_i[s*] with p ⊆ s* ⊆ s. This implies that by processing the subsets s′ in descending order of size (line 20), s will be needed for L_i[s′] iff s is needed for all the hyperedges t = s′ ∪ {v} that are one element larger than s′. This is exactly what we test.

Lemma 7 The Need and Force Invariant are maintained by the insert method.
Proof of Lemma 7 For the Need Invariant, observe that whenever the fix force method adds an edge s to L_i[s] in line 12, it calls fix needs(s, s, i). By Lemma 6, this ensures that s is inserted exactly into those L_i[s′] with s′ ⊆ s where it is needed. For the Force Invariant, observe that we only add elements to lists of L_i, which means that sets can only become forced; they cannot lose this status through the addition of an edge. However, after any insertion of s into any list of L_i (in lines 12 and 23) we immediately call fix forced, which inserts s into L_{i−1}[s] if s is forced.
Deletions: The delete operation has to delete an edge e from all places where it might have been inserted, which is just from all lists L_d[s] with s ⊆ e. However, removing e from such a list can have two side effects: First, it can cause L_d[s] to lose its last irrelevant element, changing the status of s from "forced" to "not forced"; we then need to "unforce" s (remove it from L_{d−1}[s]), which may recursively entail new deletions. Second, removing e from L_d[s] may make a previously irrelevant hyperedge (the first irrelevant hyperedge of L_d[s]) relevant. Then one has to fix the needs for this hyperedge once more, which may entail new inserts and forcings, but no new deletions (see Table 2).
Deletion of e = {u, v, w} now yields:
Crucially, observe that both the Need Invariant and the Force Invariant now hold for all hyperedges whose relevance status may have changed, namely s (as shown earlier) and all f in line 43. No other hyperedges in L_i change their relevance status (and for L_j with j < i the invariants hold by the inductive assumption).
Kernel: As stated earlier, the kernel maintained by DynKernelHS is the set K = ⋃_{i=0}^{d} R(L_i[∅]).
Correctness: We have already established that the algorithm maintains the Need Invariant and the Force Invariant. Our objective is now to show that DynKernelHS does maintain a kernel at all times. We start with the size:
Lemma 9 DynKernelHS maintains the invariant |K| ≤ k^d + k^{d−1} + · · · + k + 1.

Proof of Lemma 9
The init method installs a relevance bound of k^i for L_i[∅] for all i ∈ {0, . . . , d}.
Lemma 12 shows the crucial property that the current K has a size-k hitting set iff the current hypergraph does. The proof hinges on the following two lemmas on "flower properties".

Proof of Lemma 12
For the first direction, let X be a size-k hitting set of H = (V, E). Consider any hyperedge s ∈ E(L_i[s]). By Lemma 11, this means that X hits s. Since X hits all hyperedges in all lists, it also hits all hyperedges in the kernel, which is just a union of such lists.
For the second direction, let X be a size-k hitting set of K. Let e ∈ V^{≤d} be an arbitrary hyperedge (not necessarily in E). We show by induction on i that if e ∈ E(L_i[e]), then e gets hit by X. This will show that X hits all of H: The insert method ensures that for all e ∈ E we have e ∈ E(L_d[e]) and, hence, they all get hit by X.
The case i = 0 is trivial since we can only have e ∈ L_0[e] for e = ∅ and L_0[∅] is part of the kernel K and all its elements get hit by assumption (actually, ∅ ∈ K means that the assumption that X hits the kernel is never satisfied; the implication is true anyway). Next, consider a larger i and a hyperedge e ∈ E(L_i[e]).
Run-Time Analysis: It remains to bound the run-times of the insert and delete operations.

Proof of Lemma 13
The call DynKernelHS.insert(e) will result in at least one call of insert(s, i): The initial call is for s = e and i = d, but the method fix force may cause further calls for different values. However, observe that all subsequently triggered calls have the property s ⊊ e and i < d. Furthermore, observe that insert(s, i) returns immediately if s has already been inserted. We will establish a time bound t_insert(|s|, i) on the total time needed by a call of insert(s, i) and a time bound t*_insert(|s|, i) where we do not count the time needed by the recursive calls (made to insert in line 28), that is, for a "stripped" version of the method where no recursive calls are made. We can later account for the missing calls by summing up over all calls that could possibly be made (but we count each only once, as we just observed that subsequent calls for the same parameters return immediately). In a similar fashion, we establish time bounds t_fix(|s′|, i) and t*_fix(|s′|, i) on the time needed (including or excluding the time needed by calls to insert) by a call to the method fix needs downward(s, s′, i) (note that, indeed, these times are largely independent of s and its size; it is the size of s′ that matters).
The starred versions are easy to bound: We have t*_insert(|s|, i) = O(1) + t*_fix(|s|, i) as we call fix needs downward for s′ = s. We have t*_fix(|s′|, i) = 2^{|s′|} · poly(|s′|) since the run-time is clearly dominated by the loop in line 20, which iterates over all subsets of s′. For each of these 2^{|s′|} many sets, we run a test in line 22 that needs time O(|s′|), yielding a total run-time of t*_fix(|s′|, i) = O(|s′| · 2^{|s′|}). For the unstarred version we get

t_insert(|s|, i) ≤ t*_insert(|s|, i) + Σ_{c=0}^{|s|} C(|s|, c) · Σ_j t*_insert(c, j),

where C(|s|, c) is the binomial coefficient, that is, the number of s′ ⊆ s with |s′| = c, and the inner sum ranges over the at most d − c levels j for which insert(s′, j) can be called. Plugging in the bound 2^c · poly(c) for t*_insert(c, j), we get that everything following the binomial coefficient can be bounded by (d − c) · 2^c · poly(c) = 2^c · poly(c). This means that the main sum we need to bound is Σ_{c=0}^{|s|} C(|s|, c) · 2^c. The latter is equal to 3^{|s|}, which yields the claim.
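The binomial identity used here (and again in the deletion analysis) is Σ_{c=0}^{n} C(n, c) · x^c = (1 + x)^n; a quick numerical sanity check:

```python
from math import comb

def binom_power_sum(n, x):
    # sum_{c=0}^{n} C(n, c) * x^c, which equals (1 + x)^n by the
    # binomial theorem
    return sum(comb(n, c) * x**c for c in range(n + 1))

for n in range(10):
    assert binom_power_sum(n, 2) == 3**n   # insert analysis: 3^{|s|}
    assert binom_power_sum(n, 3) == 4**n   # delete analysis: 4^{|s|}
    assert binom_power_sum(n, 4) == 5**n   # delete analysis: 5^{|s|}
```
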

Proof of Lemma 14
Similarly to the analysis of the insert method, let t_delete(|s|, i) denote the run-time needed by delete(s, i) and let t*_delete(|s|, i) be the time needed excluding the recursive calls made to delete(s′, i − 1) inside this method. In other words, we do not count the (huge) time actually needed in line 39, where a recursive call is made, and will once more later account for this time by summing over all calls; but t*_delete(|s|, i) does include the run-time needed for the second loop, starting at line 42, where we (possibly) fix the Need Invariant for many f (this loop does not involve any recursive calls to the delete method). Note that, as in the insertion case, if there are multiple calls of delete(s, i) for the same s and i, we only need to count one of them since all subsequent ones return immediately (and could be suppressed).
A call to delete(s, i) clearly spends at most time 2^{|s|} · poly(|s|) in the first loop (starting on line 36) if we ignore the recursive calls. For the second loop, we iterate over all s′ ⊆ s and for each of them we call fix needs downward(f, s′, i): With the bound of 3^{|s′|} · poly(|s′|) established in the proof of Lemma 13 for t_fix(|s′|, i), we can focus on bounding Σ_{s′ ⊆ s} 3^{|s′|} = Σ_{c=0}^{|s|} C(|s|, c) · 3^c, and this is equal to 4^{|s|}. Therefore, we have t*_delete(|s|, i) = 4^{|s|} · poly(|s|). We can now bound the total run-time of the delete method by summing over all recursive calls: Plugging in 4^{|s′|} · poly(|s′|) for t*_delete(|s′|, j), we get that the crucial sum is Σ_{c=0}^{|s|} C(|s|, c) · 4^c = 5^{|s|}, yielding t_delete(|s|, i) = 5^{|s|} · poly(|s|) as claimed.

Dynamic Set Packing Kernels
Like the static kernel [1], the dynamic kernel algorithm we have developed in the previous section also works, after a slight modification, for the set packing problem, which is the "dual" of the hitting set problem: Instead of trying to "cover" all hyperedges using as few vertices as possible, we must now "pack" as many hyperedges as possible. These superficially quite different problems allow similar kernelization algorithms because the correctness of the dynamic hitting set kernel algorithm hinges on Lemma 4, which states that every size-k hitting set X must hit the core of any b-flower F with |F| > b · k. This leads to the central idea behind the complex management of the lists L_i[s]: The lists L_i[s] are all b-flowers for different values of b by construction, and the moment one of them gets larger than b · k, we stop adding hyperedges to its relevant part and instead "switch over to the core s" by adding s to L_{i−1}[s]. It turns out that a similar lemma also holds for set packings:

Lemma 15 Let F be a b-flower with core c in a d-hypergraph H = (V, E) and let |F| > b · d · (k − 1). If E ∪ {c} has a packing of size k, so does E.

Proof of Lemma 15
Let P be the size-k packing of E ∪ {c}. If c ∉ P, we are done, so assume c ∈ P. For each p ∈ P − {c}, consider the hyperedges e ∈ F with p ∩ e ≠ ∅. Since p has at most d elements v and since each such v lies in at most b different hyperedges of the b-flower F, we conclude that p intersects with at most d · b hyperedges in F. However, this means that the k − 1 different p ∈ P − {c} can intersect with at most (k − 1) · b · d hyperedges in F. In particular, there is a hyperedge f ∈ F with f ∩ p = ∅ for all p ∈ P − {c}, and (P − {c}) ∪ {f} is then a size-k packing of E.

Keeping this lemma in mind, suppose we modify the relevance bounds of the lists L_i[s] as follows: Instead of setting them to k^{i−|s|}, we set them to (d(k − 1))^{i−|s|}. Then all lists are b-flowers for a value of b such that whenever more than b · d(k − 1) hyperedges are in L_i[s], the set s gets forced into L_{i−1}[s]. Lemma 15 now essentially tells us that instead of considering the flower E(L_i[s]), it suffices to consider the core s. Thus, simply by replacing line 5 inside the init method as follows, we get a dynamic kernel algorithm for p_k-d-set-packing:

We now prove the claim by proving two directions. The first direction is easy: Consider a packing P of E. By the above observations, for every p ∈ P there is a set s_p ∈ K with s_p ⊆ p. Then {s_p | p ∈ P} is a packing of size k in K.
For the second direction, let P be a packing of K of size k. Since P is a packing of K ⊆ B_0, we know that B_0 has a size-k packing. We show by induction on i that all B_i have a size-k packing and, hence, in particular B_d = E as claimed.
For the inductive step, let B_{i−1} have a size-k packing and let P be one of these with the minimum number of elements that do not already lie in B_i (that is, which lie only in A_{i−1}). If this number is zero, P is already a size-k packing of B_i; so let p ∈ P − B_i. By the force property, a hyperedge p can lie in A_{i−1} only because it was forced, that is, because L_i[p] has an irrelevant hyperedge. This means that E(L_i[p]) ⊆ B_i is a (d(k − 1))^{i−|p|−1}-flower (by Lemma 10, where we clearly just have to replace k by d(k − 1)). Since |E(L_i[p])| is larger than the relevance bound of (d(k − 1))^{i−|p|}, Lemma 15 tells us that there is a set f ∈ E(L_i[p]) such that (P − {p}) ∪ {f} is also a packing of B_i. Since f ∈ B_i, this violates the assumed minimality of P. Thus, P must be a size-k packing of B_i.
Clearly, the analysis of the kernel size and of the runtimes is identical to the hitting set case, yielding the claim.

Conclusion
We introduced a fully dynamic algorithm that maintains a p_k-d-hitting-set kernel of size Σ_{i=0}^{d} k^i ≤ (k + 1)^d with update time 5^d · poly(d), which is a constant, deterministic, worst-case bound, and zero query time. Since p_k-d-hitting-set has no kernel of size O(k^{d−ε}) unless coNP ⊆ NP/poly [17], and since the currently best static algorithm requires time |E| · 2^d · poly(d) [18], this paper essentially settles the dynamic complexity of computing hitting set kernels. While it seems possible that the update time can be bounded even tighter with an amortized analysis, we remark that this could, at best, yield an improvement from the already constant worst-case time 5^d · poly(d) to an amortized time of 2^d · poly(d).
Our algorithm has the useful property that any size-k hitting set of the kernel is a size-k hitting set of the input hypergraph. Therefore, we can also dynamically provide the following "gap" approximation with constant query time: Given a dynamic hypergraph H and a number k, at any time the algorithm either correctly concludes that there is no size-k hitting set, or provides a hitting set of size at most Σ_{i=0}^{d} k^i. With a query time that is linear with respect to the kernel size, we can also greedily obtain a solution of size at most d · k, which gives a simple d-approximation. A "real" dynamic approximation algorithm, however, should combine the concept of α-approximate pre-processing algorithms [32, 33] with dynamic updates of the hypergraph. This seems manageable if we allow only edge insertions, but a solution for the general case is not obvious to us.
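The greedy step mentioned above can be sketched as follows (our own illustration, not code from the paper): repeatedly pick a still-unhit hyperedge of the kernel and take all of its at most d vertices. The picked hyperedges are pairwise disjoint, so each must be hit by a distinct vertex of any hitting set, and more than k rounds certify that no size-k hitting set exists.

```python
def greedy_hitting_set(kernel_edges, k):
    """Greedy d-approximation on the kernel: each round picks one
    still-unhit hyperedge and adds all its at most d vertices. If a
    size-k hitting set exists, at most k rounds occur (the picked
    edges are disjoint), so the result has size at most d*k.
    Returns None if more than k rounds are needed."""
    hitting_set = set()
    rounds = 0
    for e in kernel_edges:
        if not (set(e) & hitting_set):   # e is not yet hit
            hitting_set |= set(e)
            rounds += 1
            if rounds > k:
                return None              # no size-k hitting set exists
    return hitting_set

# Example: a 2-uniform instance (a graph) with k = 2, so d = 2
edges = [{1, 2}, {1, 3}, {4, 5}]
hs = greedy_hitting_set(edges, k=2)
```
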
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A Appendix: Implementation Details of Data Structures
The dynamic kernel algorithms that we present in this paper internally employ different standard dynamic data structures like linked lists or small arrays that allow update operations in time O(1). In the following, for completeness, we sketch how these basic data structures can be implemented so that all basic operations work in constant time.
Objects: We will often store and treat mathematical entities like edges or sequences as objects in the sense of object-oriented programming. As is customary, they are just blocks of memory storing the object's current attributes and the object can be referenced with a pointer to the start of the memory block. For an object X we write X .attribute for the current value of an attribute.
Arrays: By arrays we refer to the usual notion of arrays that store a value for each index number from an immutable domain D = {1, . . . , r}. We write A[i] for the value stored at position i ∈ D, write A[i] ← v to indicate that we store the value v (typically an object or a number) at the ith position in A, and we write A[i] = ⊥ to indicate that nothing is stored at an address i. In order to allocate a new array, we write A ← new array(D) initialized with f(s) for s ∈ D, whose semantics is the following:
Of course, implemented this way, while the allocation of an uninitialized memory block in line 1 will take constant time on a normal operating system, filling the array with initial values in the for-loop will take time O(|D| · t_f), where t_f is the time needed to compute f. However, a trick [28] allows us to perform this initialization in constant time:

Lemma 16 Let the time needed to allocate an uninitialized array of size |D| (so the initial content can be random) be constant and let t_f be the time needed to compute the function f. Then A ← new array(D) initialized with f(s) for s ∈ D can be implemented in such a way that it runs in time O(1) and such that it subsequently takes extra time at most O(t_f) each time an element A[i] is accessed for the first time.
In particular, if t_f is constant (as is the case in our paper, where this is the time needed to initialize an empty relevance list), the creation of the array takes only constant time and only a constant time overhead is incurred for later accesses.
Proof We only sketch the proof; the idea is from [28, Section III.8]. For this paper, we just assume that in whatever way arrays are really implemented, reading and writing from arrays can be done in time O(1).
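For illustration, here is a Python sketch of the folklore trick (our own code; Python has no truly uninitialized memory, so creation here is not actually O(1), but the constant-time validity check is the point): a cell i counts as initialized iff its back-pointer selects a stack entry that points back to i.

```python
class LazyArray:
    """Array with lazy initialization: cells are filled with f(i) on
    first access. With genuinely uninitialized memory for `data` and
    `back`, creation would take O(1); the stack of initialized indices
    lets us tell valid cells from garbage in O(1) per access."""
    def __init__(self, size, init):
        self.init = init               # f: index -> initial value
        self.data = [None] * size      # simulates uninitialized payload
        self.back = [0] * size         # simulates uninitialized pointers
        self.stack = []                # indices initialized so far

    def _is_initialized(self, i):
        b = self.back[i]
        # valid iff back[i] points into the stack and the stack agrees
        return b < len(self.stack) and self.stack[b] == i

    def __getitem__(self, i):
        if not self._is_initialized(i):
            self.back[i] = len(self.stack)
            self.stack.append(i)
            self.data[i] = self.init(i)   # first access: pay O(t_f)
        return self.data[i]

    def __setitem__(self, i, v):
        self[i]                 # mark the cell as initialized
        self.data[i] = v
```
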
Maps: Maps (also known as associative arrays) are similar to arrays, but may be indexed by keys k, which can be arbitrary objects, and not just by numbers from a small domain. We still write M[k] for the value v stored at the key k (and M[k] = ⊥ if nothing is stored) and write M[k] ← v to indicate that we store the value v for the key k, possibly replacing any previous value stored for k. Implementing maps is normally much trickier than implementing arrays, but we will only need and use maps that store values for a constant number of keys. In this case, even if we implement accesses using just a linear search in a normal array, all reading and writing can be done in time O(1).
Lists: We will use the standard data structure of doubly-linked lists a lot, which we will just refer to as lists. We consider lists L to be objects that store pointers to the first and last cell of the list. Each cell stores pointers to the next and the previous cell in the list plus a pointer to an object, called the payload of the cell. For a list L, we write L.append(x) to indicate that a new cell c gets created with x as its payload and then c is added to the list at the end (and the last and, possibly, first cells stored in L are updated appropriately).
Somewhat less standard, when creating a cell c for a list L, we also store the cell c in x: We assume that x has an attribute lists that is a map and we execute x.lists[L] ← c. In other words, inside x, we store a back-pointer to the cell c. This allows us to perform the operation L.delete(x) without being given the cell c: We first look up the cell c that has x as its payload in the map x.lists and can then easily remove the cell from the doubly-linked list in constant time. Storing back-pointers in objects allows us to remove elements in time O(1) provided that (i) no element is added more than once to a list (this will always be the case) and (ii) each element is added only to a constant number of lists (this will also always be the case).
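The back-pointer scheme can be sketched in a few lines of Python (our own minimal version; names like Item are invented for the example):

```python
class Cell:
    def __init__(self, payload):
        self.payload, self.prev, self.next = payload, None, None

class Item:
    """A payload object carrying the `lists` map from the text."""
    def __init__(self, name):
        self.name, self.lists = name, {}

class LinkedList:
    """Doubly-linked list whose payload objects carry back-pointers
    to their cells, so delete(x) runs in O(1) without searching."""
    def __init__(self):
        self.first = self.last = None

    def append(self, x):
        c = Cell(x)
        if self.last is None:
            self.first = self.last = c
        else:
            c.prev, self.last.next, self.last = self.last, c, c
        x.lists[self] = c          # back-pointer stored inside x

    def delete(self, x):
        c = x.lists.pop(self)      # O(1) lookup of x's cell
        if c.prev: c.prev.next = c.next
        else:      self.first = c.next
        if c.next: c.next.prev = c.prev
        else:      self.last = c.prev
```
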
Relevance Lists: The next data structure is more specific to the needs of the present paper: relevance lists. These are normal lists with a parameter ρ ∈ N in which the first ρ elements are "more relevant" than later elements (with respect to the order of the elements inside the list). Once a relevance list L for a bound ρ has been allocated by the call new relevance list(ρ), we wish the following to hold for its elements: If there are only ρ or fewer elements in L, all of them are relevant; but if there are more, all elements after the ρth element are irrelevant. The operations we wish to support (in time O(1)) in addition to the normal list operations append and delete are L.is relevant(x), which should return whether x is one of the first ρ elements in L, and L.last relevant, which should return the last relevant element of the list. For convenience, we will also use L.first irrelevant, which is the successor of L.last relevant (and thus ⊥ if there are no irrelevant elements), and L.has irrelevant elements, which just checks whether L.last relevant has a successor.
Note that it is not immediately clear how the two additional operations of relevance list can be implemented in time O(1): The relevance status of an element can change when far-away elements get added or deleted. The following lemma shows how this can be achieved:

Proof of Lemma 17
A relevance list object L stores the immutable bound ρ as an attribute. It also stores the length of the list using an additional attribute (just increment or decrement it as needed). To keep track of which elements x are relevant with respect to L and which one is the last of them, we use two kinds of "trackers": First, we store one bit of information in each element x as follows. In x we have, in addition to the map attribute lists mentioned earlier, another attribute relevances. It is also a map and we set x.relevances[L] ← true for relevant x and set x.relevances[L] ← false otherwise. Clearly, if we can keep these values up-to-date, we can implement L.is relevant(x) simply as returning x.relevances[L]. Second, in L we store a pointer to the last relevant element in an attribute last relevant. Once more, if we can keep this pointer up-to-date, we can trivially access it in time O(1).
To keep the introduced trackers up-to-date, first consider the operation L.append(x): Before we insert x, we check whether the length of L is at most ρ − 1. If so, after x has been appended, it is flagged as relevant (x.relevances[L] ← true) and L.last relevant ← x; otherwise it is flagged as irrelevant and L.last relevant is not changed. Next, consider the operation L.delete(x). If x is not relevant (x.relevances[L] = false), we can simply delete it. However, if x is relevant, deleting x will make the first irrelevant element (if it exists) relevant: before deleting x, if L.last relevant has a successor s, we set L.last relevant ← s and s.relevances[L] ← true. As a special case, if x happens to be the last relevant element and has no successor, we set L.last relevant to the predecessor of x.
Note that all operations needed to keep the trackers up-to-date can be implemented to run in time O(1), yielding the claim.
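The proof can be turned directly into code. The following Python sketch is our own rendition of the argument (for brevity the elements themselves serve as list cells, which is a simplification of the cell/back-pointer setup above):

```python
class Elem:
    def __init__(self):
        self.relevances = {}           # per-list relevance bit
        self.prev = self.next = None   # elements double as list cells

class RelevanceList:
    """Relevance list with immutable bound rho: the first rho elements
    are relevant. We track the length and a pointer to the last
    relevant element, updating both in O(1) per operation."""
    def __init__(self, rho):
        self.rho, self.length = rho, 0
        self.first = self.last = self.last_relevant = None

    def append(self, x):
        was_short = self.length <= self.rho - 1
        if self.last is None:
            self.first = self.last = x
        else:
            x.prev, self.last.next, self.last = self.last, x, x
        self.length += 1
        x.relevances[self] = was_short      # relevant iff list was short
        if was_short:
            self.last_relevant = x

    def delete(self, x):
        if x.relevances[self]:              # x is relevant
            s = self.last_relevant.next     # first irrelevant element
            if s is not None:               # it becomes relevant
                s.relevances[self] = True
                self.last_relevant = s
            elif x is self.last_relevant:
                self.last_relevant = x.prev
        # unlink x from the doubly-linked list
        if x.prev: x.prev.next = x.next
        else:      self.first = x.next
        if x.next: x.next.prev = x.prev
        else:      self.last = x.prev
        x.prev = x.next = None
        self.length -= 1
        del x.relevances[self]

    def is_relevant(self, x):
        return x.relevances[self]
```
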
Dense Adjacency Matrices: The final data structure that we introduce addresses a specific subtlety of kernelizations: In our algorithms, we keep track of linked lists of hyperedges e ∈ V^{≤d} that, collectively, form a kernel. However, a kernel should be a mathematical object whose encoding only depends on the parameter k, while encoding the lists takes something like O(k^d · d · log n) bits since we need O(log n) bits to encode a vertex number and the lists are "scattered around in memory." Furthermore, the whole idea behind kernelizations is, of course, that we wish to perform further computations on the kernel once it has been determined. Thus, we should not insist on constant update times in our algorithms while then allowing time O(k^d) to transform lists into something "usable" when we actually use the kernel to find a solution. Fortunately, it turns out that we can keep track of a "real" kernel in the form of a dense adjacency matrix with only constant extra time per update. For simplicity, we only describe the case of graphs; the generalization to hypergraphs is straightforward.
For a set E of edges, let ⋃E = {x | ∃e ∈ E : x ∈ e} denote the set of all vertices mentioned in any edge of E. For two graphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2) let us write G_1 ∼ G_2 if G_1 and G_2 are isomorphic. For two edge sets E_1 and E_2, let us write E_1 ∼ E_2 if (⋃E_1, E_1) ∼ (⋃E_2, E_2) (so vertices that are not involved in any edges are ignored).
Our objective is the following: Suppose we already have a dynamic algorithm that keeps track of a set F of edges for the vertex set V (in our dynamic vertex cover kernel algorithm we have F = R(L); in the dynamic hitting set kernel algorithm we have F = ⋃_{i=0}^{d} R(L_i[∅])) and suppose that we have a bound ρ such that |F| ≤ ρ always holds. In the following lemma we show that we can then dynamically manage the adjacency matrix of a graph K = (V_K, E_K) with the fixed vertex set V_K = {1, . . . , 2ρ} such that we always have E_K ∼ F with only constant additional update time. (In the statement of the lemma, by "gets a dynamically changing set as input" we mean that whenever an edge e is added to F, the method insert(e) of the algorithm gets called, and whenever e is removed from F, a call of delete(e) is triggered.)

Lemma 18
There is an algorithm DynamicDenseAdjacencyMatrix that takes as input a dynamically changing edge set F over the vertex set V and a bound ρ with |F| ≤ ρ, and keeps track of the adjacency matrix of K = (V_K, E_K) with the fixed vertex set V_K = {1, . . . , 2ρ} such that we have E_K ∼ F. The update times are O(1).

Proof of Lemma 18
The algorithm starts by setting up the following auxiliary data structures:
1. the adjacency matrix of K = (V_K, E_K), in which all entries are initially 0,
2. a mapping I that stores for each vertex of ⋃F to which vertex in V_K it corresponds (and I(x) = ⊥ for x ∉ ⋃F),
3. an array D that stores for each v ∈ V_K the degree of v in K, and
4. a list Z of zero-degree vertex intervals in K. Each element of the list is a pair (a, b) of numbers from V_K that stands for the interval [a, b]. The semantics is that the union of the intervals should be exactly the set of vertices in V_K that have degree 0 in K. Clearly, we can initialize Z with the single interval [1, 2ρ] to ensure that this holds at the beginning.
Translated to code, we get:
Let us now see how these auxiliary data structures allow us to keep track of E_K such that E_K ∼ F holds when edges enter or leave F. Suppose e = {u, v} is about to enter F and this triggers a call of the following insert(e) method, which should now update the matrix E_K. First, we test whether u ∉ ⋃F holds (by testing whether I[u] = ⊥ holds). In this case, consider the first interval [a, b] in Z (such an interval must exist since there will never be more than 2ρ vertices in ⋃F by assumption and, hence, there is always a vertex of degree 0 in K when a new vertex is about to enter ⋃F). If a = b, we remove this interval from Z, otherwise we replace it by [a + 1, b]. We think of this as "allocating" a and will store in I that u gets mapped to a. Next, if v ∉ ⋃F holds, we allocate a vertex from V_K for it. Then both u and v have corresponding vertices in V_K and we store an edge between them in E_K and adjust the values in D accordingly: Observe that after the above steps and after e has been added to F, we have E_K ∼ F and all auxiliary data structures hold the proper values.
Now suppose e = {u, v} is about to be deleted from F. The code for this case is simple: Once more, E_K ∼ F holds after the updates and all auxiliary data structures have also been updated correctly.
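A Python sketch of the whole construction for graphs (our own simplification: a free-list stack of zero-degree rows replaces the interval list Z, which still gives O(1) per update, since intervals are only needed for the O(1) initialization; likewise, allocating the matrix here takes O(ρ²) time rather than O(1)):

```python
class DenseKernelMatrix:
    """Maintains an adjacency matrix over the fixed vertex set
    {0, ..., 2*rho - 1} isomorphic to the tracked edge set F
    (|F| <= rho is assumed). Vertex names of F are mapped to matrix
    rows on demand and rows are recycled once their degree drops to 0."""
    def __init__(self, rho):
        self.n = 2 * rho
        self.adj = [[0] * self.n for _ in range(self.n)]
        self.I = {}                    # vertex of F -> matrix row
        self.deg = [0] * self.n        # degrees in K (the array D)
        self.free = list(range(self.n - 1, -1, -1))  # zero-degree rows

    def _row(self, u):
        if u not in self.I:
            self.I[u] = self.free.pop()   # "allocate" a zero-degree row
        return self.I[u]

    def insert(self, e):
        u, v = e
        a, b = self._row(u), self._row(v)
        self.adj[a][b] = self.adj[b][a] = 1
        self.deg[a] += 1
        self.deg[b] += 1

    def delete(self, e):
        u, v = e
        a, b = self.I[u], self.I[v]
        self.adj[a][b] = self.adj[b][a] = 0
        self.deg[a] -= 1
        self.deg[b] -= 1
        for x, r in ((u, a), (v, b)):
            if self.deg[r] == 0:          # vertex left F: recycle its row
                del self.I[x]
                self.free.append(r)
```
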