Reoptimization of parameterized problems

Parameterized complexity allows us to analyze the time complexity of problems with respect to a natural parameter depending on the problem. Reoptimization looks for solutions or approximations for problem instances when given solutions to neighboring instances. We combine both techniques in order to better classify the complexity of problems in the parameterized setting. Specifically, we see that some problems in the class of compositional problems, which do not have polynomial kernels under standard complexity-theoretic assumptions, do have polynomial kernels under the reoptimization model for some local modifications. We also observe that, for some other local modifications, these same problems do not have polynomial kernels unless NP ⊆ coNP/poly. We find examples of compositional problems whose reoptimization versions do not have polynomial kernels under any of the considered local modifications. Finally, in another negative result, we prove that the reoptimization version of Connected Vertex Cover does not have a polynomial kernel unless Set Cover has a polynomial compression. In a different direction, looking at problems with polynomial kernels, we find that the reoptimization version of Vertex Cover has a polynomial kernel of size 2k + 1 using crown decompositions only, which improves the size of the kernel achievable with this technique in the classic problem.


Introduction
In this paper, we combine the techniques of reoptimization and parameterization in order to gain a better understanding of what makes a problem hard from a parameterized complexity point of view. The goal is, given a solution for an instance of a parameterized problem, to look at local modifications and see whether the problem becomes easier or stays in the same complexity class. For this, we look at classical problems in parameterized complexity whose complexity is well understood and classified.
While the connections between reoptimization and parameterization have not been systematically explored up to now, some links have already been discovered. The technique of iterative compression, introduced by Reed, Smith, and Vetta [26], has been used very successfully to design parameterized algorithms; see the textbook by Cygan et al. [10] for an overview. It is closely related to common design techniques for reoptimization algorithms. Abu-Khzam et al. [1] looked at the parameterized complexity of dynamic, reoptimization-related versions of dominating set and other problems, albeit in a slightly different model of reoptimization as introduced by Shachnai et al. [27]. Very recently, Alman, Mnich, and Williams [2] considered dynamic parameterized problems, which can be seen as a generalization of reoptimization problems.
We start by introducing the main concepts of parameterized complexity and reoptimization that we are going to use in our results.

Parameterized Complexity
Classical complexity theory classifies problems by the amount of time or space that is required by algorithms solving them. Usually, the time or space in these problems is measured in terms of the input size. However, measuring complexity only in terms of the input size ignores any structural information about the input instances, making problems sometimes appear more difficult than they actually are.
Parameterized complexity was developed by Downey and Fellows in a series of articles in the early 1990s [13,14]. Parameterized complexity theory provides a theory of intractability and of fixed-parameter tractability that relaxes the classical notion of tractability, namely polynomial-time computability, by allowing non-polynomial computations depending only on a parameter independent of the instance size. For a deeper introduction to parameterized complexity we refer the reader to Downey et al. [15,18].
We now introduce the formal framework for parameterized complexity that we use throughout the paper. Let Σ denote a finite alphabet and N the set of natural numbers. A decision problem L is a subset of Σ*. We call the strings x ∈ Σ* inputs of L, regardless of whether x ∈ L. A parameterized problem is a subset L ⊆ Σ* × N. An input (x, k) to a parameterized problem consists of two parts, where the second part is the parameter. A parameterized problem L is fixed-parameter tractable if there exists an algorithm that, given an input (x, k) ∈ Σ* × N, decides whether (x, k) ∈ L in f(k)·p(n) time, where f is an arbitrary computable function depending solely on k, and p is a polynomial in the total input length n = |x| + k. FPT is the class of parameterized problems that are fixed-parameter tractable.
A kernelization for a parameterized problem L ⊆ Σ* × N is an algorithm that, given (x, k) ∈ Σ* × N, outputs in p(n) time a pair (x′, k′) ∈ Σ* × N such that (x, k) ∈ L if and only if (x′, k′) ∈ L and |x′|, k′ ≤ f(k), where p is a polynomial and f is an arbitrary computable function; f is referred to as the size of the kernel. If, for a problem L, the size f of the kernel is polynomial in k, we say that L has a polynomial kernel. PK is the class of parameterized problems that have polynomial kernels. A Turing kernelization is a procedure consisting of two parameterized problems L_1 and L_2 (typically L_1 = L_2) and a polynomial g together with an oracle for L_2, such that, on an input (x, k) ∈ Σ* × N, the procedure outputs in polynomial time the answer whether (x, k) ∈ L_1 by querying the oracle for L_2 with questions of the form "Is (x_2, k_2) ∈ L_2?" for |x_2|, k_2 ≤ g(k). Essentially, a Turing kernelization allows us to use an oracle for small instances in order to solve L_1 on a larger instance (x, k). A polynomial Turing kernelization is a Turing kernelization querying the oracle only a number of times polynomial in k. PTK is the class of parameterized problems that have polynomial Turing kernelizations.
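As a concrete illustration of this definition, the following sketch implements a classical kernelization, Buss's rules for Vertex Cover. This is a textbook example, not one of this paper's results: any vertex of degree larger than k must belong to every cover of size at most k, and once no such vertex remains, a yes-instance can have at most k² edges.

```python
def buss_kernel(edges, k):
    """Buss's kernelization for Vertex Cover (a classical example of
    the definition above, not a result of this paper).  Returns
    ("yes", None), ("no", None), or ("kernel", (edges', k')) where the
    kernel has at most k'^2 edges."""
    E = {frozenset(e) for e in edges}
    while True:
        deg = {}
        for e in E:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = [v for v, d in deg.items() if d > k]
        if not high:
            break
        # A vertex of degree > k must be in every cover of size <= k,
        # so put it into the cover and remove its edges.
        E = {e for e in E if high[0] not in e}
        k -= 1
        if k < 0:
            return ("no", None)
    if not E:
        return ("yes", None)
    if len(E) > k * k:
        # k cover vertices of degree <= k cover at most k^2 edges.
        return ("no", None)
    return ("kernel", (sorted(tuple(sorted(e)) for e in E), k))
```

On a triangle with k = 1 the rules answer "no", while a star is resolved to "yes" by a single application of the high-degree rule.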
The problem classes we defined up to now satisfy PK ⊆ PTK ⊆ FPT. There are well-known problems, however, that are not known to be in FPT. For example, k-Clique, the problem of deciding whether a graph G contains a clique of size k, is not contained in FPT under standard complexity-theoretic assumptions. Neither are the complementary problem k-Independent Set, the problem of deciding whether a graph G contains an independent set of size k, nor the k-Set Cover problem, where, given a set S and a family F of subsets of S, we are asked to determine whether there is a subfamily of F of size k whose union contains every element of S. For these problems outside FPT there is a further classification of their hardness in terms of the so-called W-hierarchy. For the definition of these classes and the theory behind them see [15]. In this paper, we will only use the classes W[1] and W[2]. For them we have the following characterizations in terms of complete problems: k-Clique and k-Independent Set are complete for W[1], and k-Set Cover is complete for W[2].

Reoptimization
Often, one has to solve multiple instances of one optimization problem which might be somehow related. Consider the example of a timetable for some railway network. Assume that we have spent a lot of effort and resources to compute an optimal or near-optimal timetable satisfying all given requirements. Now, a small local change occurs, e.g., the closing of a station due to construction work. This leads to a new instance of our timetable problem that is closely related to the old one. Such a situation naturally raises the question whether it is necessary to compute a new solution from scratch or whether the known old solution can be of any help. The framework of reoptimization tries to address this question: We are given an optimal or nearly optimal solution to some instance of a hard optimization problem, then a small local change is applied to the instance, and we ask whether we can use the knowledge of the old solution to facilitate computing a reasonable solution for the locally modified instance. It turns out that, for different problems and different kinds of local modifications, the answer to this question might be completely different. Generally speaking, we should not expect that solving the problem on the modified instance optimally can be done in polynomial time, but, in some cases, the approximability might improve a lot. This notion of reoptimization was mentioned for the first time by Schäffter [28] in the context of a scheduling problem. Archetti et al. [4] used it for designing an approximation algorithm for the metric traveling salesman problem (∆TSP) with an improved running time, but still the same approximation ratio as for the original problem. But the real power of the reoptimization concept lies in its potential to improve the approximation ratio compared to the original problem. This was observed for the first time by Böckenhauer et al. [6] for the ∆TSP, considering the change of one edge weight as a local modification. Independently and at the same time, Ausiello et al. [5] proved similar results for TSP reoptimization under the local modification of adding or removing vertices.
Intuitively, the additional information that is given in a reoptimization setup seems to be rather powerful. Intriguingly, many reoptimization variants of NP-hard optimization problems are also NP-hard. A general approach towards proving the NP-hardness of reoptimization problems uses a sequence of reductions and can, on a high level, be described as follows [7]: Consider an NP-hard optimization problem L, a local modification lm, and the resulting reoptimization problem lm-L. Moreover, suppose we are able to transform an efficiently solvable instance x′ of L into any instance x of L by a polynomial number of local modifications of type lm. Then, any efficient algorithm for lm-L could be used to efficiently solve L; thus the NP-hardness of L implies the hardness of lm-L.
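This scheme can be illustrated with a small sketch of ours (not taken from [7]): we use Vertex Cover under edge addition as a toy example and let an exhaustive search stand in for the hypothetical efficient reoptimization algorithm, since the point of the argument is only the chain of local modifications starting from a trivially solvable instance.

```python
from itertools import combinations

def brute_force_vc(vertices, edges):
    """Exhaustive minimum vertex cover; this stands in for the assumed
    efficient algorithm for the reoptimization problem lm-L."""
    for size in range(len(vertices) + 1):
        for cand in combinations(vertices, size):
            s = set(cand)
            if all(u in s or v in s for u, v in edges):
                return s
    return set(vertices)

def solve_via_reoptimization(vertices, edges):
    """Solve Vertex Cover by starting from the edgeless (trivially
    solvable) instance and applying one edge addition at a time,
    handing the previous optimal solution to the next step."""
    current, solution = [], set()
    for e in edges:
        current.append(e)
        # An efficient e+-VC reoptimization algorithm would receive
        # `solution` here; the sketch falls back to brute force.
        solution = brute_force_vc(vertices, current)
    return solution
```

If the per-step computation were polynomial, the whole loop would solve the NP-hard original problem in polynomial time, which is exactly the hardness argument sketched above.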

Our Contribution
In this paper, we use reoptimization techniques to solve parameterized problems or to compute better approximations for them. In particular, we show in Section 2 that some compositional parameterized problems [8], which do not have polynomial kernels under standard complexity-theoretic assumptions, do have polynomial kernels in a reoptimization setting.
We then show in Section 3 that for the reoptimization version of the vertex cover problem, the crown decomposition technique yields a kernel of size 2k.
Section 4 contains a reduction from Set Cover to Connected Vertex Cover that shows that reoptimization techniques will not help to find a Turing kernelization for Connected Vertex Cover, unless k-Set Cover is in PK.
Then, in Section 5, we show a similar reduction from Set Cover to Connected Vertex Cover that provides an FPT algorithm for Set Cover with respect to the size s of the base set as a parameter.

Kernels for Compositional Problems and AND-Compositional Problems
Bodlaender et al. [8] define the concept of a compositional parameterized problem and its complement, for both of which no polynomial kernel exists under standard complexity-theoretic assumptions. In this section, we see that some of these problems do indeed have polynomial kernels in a reoptimization setting, where an optimal solution or a polynomial kernel is given for a locally modified instance.

Preliminaries
A characterization of compositional graph problems is the following.

Definition 1 ([8]). Let L be a parameterized graph problem such that, for any pair of graphs G_1 and G_2 and any integer k ∈ N, we have (G_1 ∪ G_2, k) ∈ L if and only if (G_1, k) ∈ L or (G_2, k) ∈ L, where G_1 ∪ G_2 denotes the disjoint union of G_1 and G_2. Then L is called compositional.
Now, if we define the complement of a problem, an analogous characterization can be given that identifies the problems whose complement is compositional, the so-called AND-compositional problems.
Definition 2. Let L be a parameterized decision problem. The complement of L, denoted L̄, is the decision problem resulting from swapping the yes- and no-answers.

Definition 3 ([8]). Let L be a parameterized graph problem such that, for any pair of graphs G_1 and G_2 and any integer k ∈ N, we have (G_1 ∪ G_2, k) ∈ L if and only if (G_1, k) ∈ L and (G_2, k) ∈ L, where G_1 ∪ G_2 denotes the disjoint union of G_1 and G_2. Then L is called AND-compositional.
Bodlaender et al. [8] showed the following result.

Theorem 1 ([8]). NP-hard compositional problems do not have polynomial kernels, unless PH = Σ_p^3, i.e., the polynomial hierarchy collapses to the third level.
Moreover, Drucker [16] was able to show the following.

Theorem 2 ([16]). NP-hard AND-compositional problems do not have polynomial kernels, unless NP ⊆ coNP/poly.
We prove in this section that reoptimization versions of some compositional or AND-compositional problems have polynomial kernels. Let us now see which local modifications provide these results.
When we talk about graph problems in a reoptimization setting, four local modifications come to mind immediately, namely edge addition and deletion, and vertex addition and deletion. We now define them formally.
Given a graph G = (V, E) and a pair of non-neighboring vertices u, v ∈ V, we denote an edge addition (V, E ∪ {{u, v}}) by G + {u, v}, or G + e where e = {u, v}. Analogously, for edge deletion, given an edge e ∈ E, G − e is the graph (V, E − {e}). Furthermore, for vertex deletion, given a vertex v ∈ V, G − v is the graph (V − {v}, E′), where E′ is E without the edges incident to v. Finally, in the case of vertex addition, given a new vertex v and a set E_v of edges between v and vertices of V, G + v is the graph (V ∪ {v}, E ∪ E_v). Given a graph problem L, we call the reoptimization version of L under edge addition, edge deletion, vertex addition, and vertex deletion e+-L, e−-L, v+-L, and v−-L, respectively. We now give an example of a compositional FPT problem that is in PK under reoptimization conditions. We want to see which conditions allow us to find a kernel in this setting.
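The four local modifications can be written down directly; the following sketch fixes a graph representation as a pair (V, E), with E a set of two-element frozensets (our choice for illustration only).

```python
def edge_addition(G, u, v):
    """G + {u, v}: add an edge between two non-adjacent vertices.
    A graph is a pair (V, E) with E a set of frozensets."""
    V, E = G
    return (set(V), E | {frozenset((u, v))})

def edge_deletion(G, u, v):
    """G - e for e = {u, v}: remove one edge."""
    V, E = G
    return (set(V), E - {frozenset((u, v))})

def vertex_deletion(G, v):
    """G - v: remove v together with all edges incident to v."""
    V, E = G
    return (V - {v}, {e for e in E if v not in e})

def vertex_addition(G, v, neighbors):
    """G + v: add a new vertex v and a set of edges between v and
    existing vertices."""
    V, E = G
    return (V | {v}, E | {frozenset((v, u)) for u in neighbors})
```

These helpers are reused conceptually in the kernelization sketches later in the section.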

Internal Vertex Subtree in General Graphs
A subtree T of a graph G is a (not necessarily induced) subgraph of G that is also a tree. Let us consider the following parameterized decision problem, called the Internal Vertex SubTree problem: given a graph G and an integer k, determine whether G contains a subtree with at least k internal vertices.
If we consider as input only pairs (G, k) where G is connected, this problem has a polynomial kernel of size 3k using the crown lemma [19]. However, the general version of this problem does not have a polynomial kernel, unless PH = Σ_p^3. Let us see this.
Theorem 3. Internal Vertex SubTree in general graphs does not have a polynomial kernel unless PH = Σ_p^3.
Proof. Observe first that Internal Vertex SubTree is compositional. As required by Definition 1, given two connected graphs G_1 and G_2, if one of them has a subtree with k internal vertices, then their disjoint union, i.e., the graph with the two connected components G_1 and G_2, will also have one, namely the same one that was in G_1 or G_2. This argument easily extends to arbitrary graphs G_1 and G_2. Moreover, Internal Vertex SubTree is NP-complete (in particular, NP-hard), because there is a straightforward reduction from Hamiltonian Path (see [25]), which is well known to be NP-complete.
Finally, we see that, by Theorem 1, Internal Vertex SubTree does not have a polynomial kernel unless PH = Σ_p^3.

⊓ ⊔
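The compositionality argument in this proof can be checked mechanically on tiny instances. The following brute-force sketch (ours, usable only on very small graphs) decides the Internal Vertex SubTree property and exhibits the disjoint-union behaviour of Definition 1.

```python
from itertools import combinations

def has_k_internal_subtree(edges, k):
    """Brute force: does the graph contain a subtree with at least k
    internal vertices (degree >= 2 within the subtree)?"""
    if k <= 0:
        return True
    edges = [tuple(e) for e in edges]
    for size in range(1, len(edges) + 1):
        for sub in combinations(edges, size):
            verts = {v for e in sub for v in e}
            if len(sub) != len(verts) - 1:
                continue          # a tree on n vertices has n - 1 edges
            start = next(iter(verts))
            seen, stack = {start}, [start]
            while stack:          # connectivity check
                x = stack.pop()
                for a, b in sub:
                    if x == a and b not in seen:
                        seen.add(b); stack.append(b)
                    if x == b and a not in seen:
                        seen.add(a); stack.append(a)
            if seen != verts:
                continue          # not connected, hence not a tree
            deg = {}
            for a, b in sub:
                deg[a] = deg.get(a, 0) + 1
                deg[b] = deg.get(b, 0) + 1
            if sum(1 for d in deg.values() if d >= 2) >= k:
                return True
    return False

def disjoint_union(edges1, edges2):
    """Disjoint union: relabel the second graph so the vertex sets
    cannot clash, then join the edge lists."""
    return list(edges1) + [((a, "#2"), (b, "#2")) for a, b in edges2]
```

On a path with three vertices and a single edge, the union is a yes-instance for k = 1 exactly because one of the parts is, matching the OR-behaviour in Definition 1.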
Now, we are going to prove that e+-Internal Vertex SubTree has a polynomial kernel.
Theorem 4. e+-Internal Vertex SubTree has a polynomial kernel of size 3k.

Generalization
To begin with, we observe that, in order for a problem to behave analogously to the problem above, it is important that the property defining the problem is maintained under the considered local modification. For instance, a subtree of a graph G is also a subtree of the same graph with an added edge, G + e. However, the same does not hold for edge deletion, because the deleted edge might be part of the chosen subtree for G. In order to formalize this, we define the following.

Definition 4 (Increasingly Monotone Graph Property). A graph property is called increasingly monotone if it is closed under addition of edges and vertices.
Definition 5 (Decreasingly Monotone Graph Property). A graph property is called decreasingly monotone if it is closed under removal of edges and vertices.
If a graph property P is either increasingly or decreasingly monotone, we say that P is monotone. We see in this subsection how to construct polynomial kernels for the reoptimization versions of some compositional graph problems with monotone properties that are not in PK.
We realize that, in order to get results similar to the examples above, we need the following conditions. Let L be a graph problem based on a property P:
1. L is compositional and NP-hard.
2. L has a polynomial kernel on a local environment of a given vertex or edge.
3. The property P is increasingly (or decreasingly) monotone.
The first condition ensures that the considered problem does not have a polynomial kernel unless PH = Σ_p^3, which makes results on reoptimization interesting. The second condition allows us to find kernels locally. The third condition ensures that the modification only affects the solution locally, whereas other modifications could potentially require looking at the whole instance for a solution. Let us formalize this.
We define a local environment of an edge e or a vertex v in a graph G + e or G + v as the connected component that contains e or v. For edge and vertex deletions, we say that the local environment of e or v in a graph G − e or G − v consists of the connected components that are modified or generated when e or v is deleted from G.
Theorem 5. Let L be a parameterized compositional graph problem based on an increasingly monotone property P. If, for every instance (G, k), we can compute a polynomial kernel for a local environment of any edge e ∈ E, then e+-L is in PK.
Proof. Given a solution to an instance (G, k) of a parameterized graph problem L, we find a polynomial kernel for the instance (G + e, k) as follows.
If (G, k) has a solution S, then, since the property that S is based on is increasingly monotone, S is still a valid solution for G + e.
Otherwise, take the local environment of e in G + e; we can compute a polynomial kernel for this component. Let us argue, by means of a contradiction, that this is a kernel for the whole graph G + e. If there were a solution S′ in G + e that could not be computed via the kernel, then, due to the fact that L is compositional, this solution would need to be in another component of the graph, thus not using the edge e. However, except for the new edge e, the graph has not been modified, so S′ would also be a solution for G, a contradiction to the assumption.

⊓ ⊔
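The proof translates into a short generic procedure. In the sketch below (ours), `local_kernel` stands for the assumed problem-specific polynomial kernelization for a local environment, and graphs are pairs (V, E) with E a set of frozensets.

```python
def connected_component(V, E, start):
    """All vertices reachable from `start` in the graph (V, E)."""
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for e in E:
            if x in e:
                for y in e:
                    if y not in seen:
                        seen.add(y); stack.append(y)
    return seen

def reopt_kernel_edge_addition(G, k, old_is_yes, e, local_kernel):
    """Kernel for e+-L under the hypotheses of Theorem 5.  For a
    yes-instance the old solution stays valid (increasing
    monotonicity); otherwise any solution of G + e must use the new
    edge (L is compositional), so kernelizing the connected component
    of e suffices."""
    if old_is_yes:
        return ("yes", None)
    V, E = G
    u, v = e
    E2 = E | {frozenset((u, v))}
    comp = connected_component(V, E2, u)
    E_comp = {ed for ed in E2 if ed <= comp}
    return ("kernel", local_kernel((comp, E_comp), k))
```

Note that the procedure never inspects components not touched by the modification, which is exactly why the compositionality condition is needed.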
The same can be said for the vertex addition local modification:

Theorem 6. Let L be a parameterized compositional graph problem based on an increasingly monotone property P. If, for every instance (G, k), we can compute a polynomial kernel for a local environment of any vertex v ∈ V, then v+-L is in PK.
Proof. Completely analogous to the proof of Theorem 5, but rooting the kernel at the new vertex v if necessary.

⊓ ⊔
In the case of deletion reoptimization problems based on decreasingly monotone properties, we have the same results, but we have to be a bit more careful when considering what the theorems are really stating. In this case, the number of newly generated connected components can be as high as the degree of the deleted vertex v. It is important to point out that, in this case, in order to have a polynomial kernel for a local environment, or a polynomial kernel for the rooted cases, it might not be enough that L is in PK when restricted to connected instances (which was true for the previous cases).
In general, polynomial kernels cannot be built for compositional hard problems because an instance might have many connected components and one can only build a polynomial kernel for each connected component. With problems based on monotone properties and their analogous local modifications, one can ensure that existing solutions persist and that instances without solutions only require looking at the newly generated connected components. This is not true if the properties and modifications go in opposite directions. For example, if the local modification were decreasing and the property increasingly monotone, then, given an instance with a solution, we would need to make sure that the deletion did not destroy the considered solution; moreover, there would also be the possibility that an alternative solution could be found in any other component if the modified instance does not admit the same solution anymore. Thus, we would need to look at the whole graph, for which maybe no polynomial kernel exists.
If we now come back to the Internal Vertex SubTree problem from Subsection 2.2, we realize that the conditions are satisfied not only to apply Theorem 5, but also Theorem 6. Thus, we conclude that v+-Internal Vertex SubTree is in PK.
For problems based on looking for trees, the rooted version of the problem consists in looking for a solution tree with a given root v ∈ V. In rooted problems, finding local kernels might not require looking at the local environment as a whole; it may be sufficient to look for solutions rooted around the local modification.
A problem where this applies is k-Leaf Out Tree: given a directed graph D and an integer k, we are asked to decide whether D contains an out-tree with at least k leaves. We first observe that this problem is based on an increasingly monotone property. The rooted version of this problem is in PK with a quadratic kernel, as was shown by Daligault and Thomassé [11], while k-Leaf Out Tree itself has no polynomial kernel unless coNP ⊆ NP/poly, as pointed out by Fernau et al. [17], due to the fact that the problem is compositional and NP-hard. Then, applying Theorems 5 and 6, we deduce that e+-k-Leaf Out Tree and v+-k-Leaf Out Tree are in PK.
Yet another problem that falls into this category is Clique on graphs of maximum degree d (d-Clique). This problem has a polynomial kernel for the rooted case; it is compositional and does not have a polynomial kernel in the general case, as observed by Hermelin et al. [21]. Thus we can apply Theorems 5 and 6, and hence e+-d-Clique and v+-d-Clique are in PK.

Now, we can think about the complementary versions of the problems described, where a property is required in every component in order for a solution to exist. Again, we can state four analogous theorems based on the considered local modifications.
Theorem 9. Let L be a parameterized AND-compositional graph problem based on an increasingly monotone property P. If, for every instance (G, k), we can compute a polynomial kernel for a local environment of any edge e ∈ E, then e−-L is in PK.
Proof. Given a solution to an instance (G, k) of L, we find a polynomial kernel for the instance (G − e, k) as follows.
If (G, k) has a solution S, then S might not be a valid solution for G − e. However, because L is AND-compositional, the required property is already satisfied in every component of G − e except possibly in the local environment of e. This means that checking whether the local environment of e is in L is enough to ensure that G − e ∈ L.
Otherwise, if (G, k) has no solution, then (G − e, k) certainly does not have one either: if (G − e, k) were in L, then any solution for (G − e, k) would also be a solution for (G, k), because L is based on an increasingly monotone property. ⊓ ⊔

Theorem 10. Let L be a parameterized AND-compositional graph problem based on an increasingly monotone property P. If, for every instance (G, k), we can compute a polynomial kernel for a local environment of any vertex v ∈ V, then v−-L is in PK.
Proof. Completely analogous to the proof of Theorem 9.
⊓ ⊔

Analogously to Theorems 5 and 6, here we have to be careful about how many connected components are generated when deleting vertices and edges. In the edge case, it is sufficient to have a problem for which polynomial kernels exist in the connected case. However, this might not be sufficient in the vertex case.
Theorem 11. Let L be a parameterized AND-compositional graph problem based on a decreasingly monotone property P. If, for every instance (G, k), we can compute a polynomial kernel for a local environment of any edge e ∈ E, then e+-L is in PK.
Proof. Completely analogous to the proof of Theorem 9.
⊓ ⊔

Theorem 12. Let L be a parameterized AND-compositional graph problem based on a decreasingly monotone property P. If, for every instance (G, k), we can compute a polynomial kernel for a local environment of any newly added vertex v, then v+-L is in PK.
Proof. Completely analogous to the proof of Theorem 9. ⊓ ⊔

Reoptimization and Vertex Cover
Another case where we observe the power of reoptimization for parameterized problems is the Vertex Cover (VC) problem. Vertex Cover is a problem in PK whose best known polynomial kernel is of size 2k, obtained using linear programming [24]. However, using crown decompositions only allows us to achieve a kernel of size 3k in the classical setting [3] (see also [18]). Very recently and independently of our work, there have been attempts to reduce the size of the crown decomposition kernel by refining the method, as in [23]. We present here a way to achieve a kernel of size 2k using reoptimization and crown decompositions. First, we define the problem. A vertex cover of a graph G = (V, E) is a subset A ⊆ V such that every edge is covered, i.e., every edge e ∈ E is incident to a vertex v ∈ A. As a parameterized problem, we say (G, k) ∈ VC if there exists a vertex cover of G of size k or smaller.
The crown decomposition is a structure in a graph that can be defined as follows (it is shown schematically in Fig. 1).

Definition 6. Let G = (V, E) be a graph. A crown decomposition of G is a partition of V into three sets C, H, and R satisfying the following properties:
1. C is a non-empty independent set in G,
2. there are no edges between C and R,
3. the set of edges between C and H contains a matching M of size |H|; we also say that M saturates H.
We call C the crown, H the head, and R the rest of the crown decomposition. The crown lemma tells us under which conditions crown decompositions exist and is the basis for kernelization using crown decompositions.

Lemma 1 ([3]). Let G be a graph without isolated vertices and with at least 3k + 1 vertices. There is a polynomial-time algorithm that either finds a matching of size k + 1 in G or finds a crown decomposition of G.
This lemma allows us to reduce any Vertex Cover instance to size at most 3k. Indeed, given a graph with more than 3k vertices, we either find a matching of size k + 1, in which case (G, k) is a no-instance, or a crown decomposition of G. Given a crown decomposition of G into C, H, and R, take the matching between C and H that saturates H. This matching proves that any vertex cover of G needs at least |H| vertices to cover the matching edges. Thus, we may reduce an instance (G, k) to the instance (G − (H ∪ C), k − |H|).
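The reduction rule in this paragraph is a one-liner once a crown decomposition (C, H, R) is available. The sketch below (ours) applies it to a graph given as vertex and edge sets, leaving the computation of the decomposition itself (Lemma 1) aside.

```python
def crown_reduction(V, E, C, H, k):
    """Apply the crown rule: delete H and C and decrease the parameter
    by |H|.  This is safe because the matching saturating H forces |H|
    cover vertices, and taking H itself covers every edge incident to
    the crown C."""
    removed = set(C) | set(H)
    V2 = set(V) - removed
    E2 = {e for e in E if not (set(e) & removed)}
    return V2, E2, k - len(H)
```

On a small example with crown C = {c1, c2}, head H = {h}, and rest R = {r1, r2}, the rule removes three vertices and lowers the parameter by one.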
Let us consider the Vertex Cover problem under edge addition, i.e., e+-VC. Given an (optimal) k-vertex cover for a graph G, we will give a kernel of size 2k for (G + e, k) using crown decompositions. First of all, let A ⊆ V be the optimal vertex cover of G, and B ⊆ V the rest of the vertices of G. Then, |A| = k and |B| = n − k. Observe that there might be edges between vertices of A but not between vertices of B, as otherwise A would not be a vertex cover.
Fig. 2. Partition of G. Bold edges belong to the matching, solid edges indicate that there exist edges between the two subsets, and dashed edges indicate that edges might exist between the two subsets. Any non-edge between A and B indicates that, by construction, no edges exist between the two subsets.
Let us pick M to be a maximal matching between A and B. We will now partition A and B further into subsets according to their adjacencies (see Fig. 2). First, we consider the vertices of A and B that are not part of the matching; let us call these subsets A_M̄ and B_M̄, respectively. The vertices of these two subsets do not share edges, because otherwise M would not be maximal. The vertices of A and B that are part of the matching will be A_M and B_M, respectively. Now let A_M^1 be the matched vertices in A that have an alternating path to at least one vertex in B_M̄, and B_M^1 the vertices matched to those from A_M^1. Let then B_M^2 be the matched vertices in B that have an alternating path to at least one vertex in A_M̄, and A_M^2 the vertices matched to those from B_M^2. These four subsets have no intersection, because otherwise there would be an augmenting path starting in B_M̄, through an alternating path to a vertex v ∈ A_M^1 ∩ A_M^2, through its matched vertex in B_M^2, and through another alternating path to A_M̄, contradicting the maximality of M. Let then A_M^3 and B_M^3 be the rest of the matched vertices in A and B, respectively.
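The first steps of this partition can be computed directly. The sketch below (our illustration) builds a maximal matching greedily and performs the alternating-path search from the unmatched side B_M̄, producing A_M^1 and B_M^1 as described; the sets A_M^2 and B_M^2 would be obtained symmetrically from A_M̄.

```python
def partition_after_matching(A, B, edges):
    """Greedy maximal matching M between cover A and independent set B,
    the unmatched sets (A_Mbar, B_Mbar), and the sets (A1, B1) of
    matched vertices reachable from B_Mbar by alternating paths.
    `edges` is a list of pairs (a, b) with a in A and b in B."""
    M = {}                                  # partner map, both directions
    for a, b in edges:
        if a not in M and b not in M:
            M[a], M[b] = b, a
    A_Mbar = {a for a in A if a not in M}
    B_Mbar = {b for b in B if b not in M}
    A1, B1 = set(), set()
    frontier = set(B_Mbar)
    while frontier:
        # a non-matching edge into A, then the matching edge back to B
        new_A = {a for a, b in edges
                 if b in frontier and a in M and a not in A1}
        A1 |= new_A
        frontier = {M[a] for a in new_A} - B1
        B1 |= frontier
    return M, A_Mbar, B_Mbar, A1, B1
```

For instance, with A = {0, 1}, B = {2, 3, 4} and edges (0,2), (1,3), (1,4), the greedy matching pairs 0–2 and 1–3, vertex 4 stays unmatched, and the alternating search picks up A_M^1 = {1} and B_M^1 = {3}.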
Observe that the following is a valid crown decomposition for G: C = B_M̄ ∪ B_M^1, H = A_M^1, and R the rest of the vertices. This is true because B is an independent set by construction, and there are no edges between C and R because B_M̄ and B_M^1 have no edges to either A_M̄ or A_M^2, and none to A_M^3 either. If H is empty, this means that there is a set of isolated vertices in G of size at least n − 2k, and we can erase them. If the new edge is incident to an isolated vertex, we can use the following reduction: if G + e minus the isolated vertices contains a leaf, add the vertex adjacent to the leaf to the cover and use the rest of the graph as a (2k − 1)-sized kernel. Thus, except in this special case, we may assume that the new edge is not incident to an isolated vertex.

Now we consider reoptimization under edge addition. If the new edge e is incident to any vertex in A, the optimal vertex cover for G is also an optimal vertex cover for G + e and thus the problem is solved. If e is incident to two vertices u and v in B, we make the following case distinction.
The new edge e provides the matching between u and v, so H, C and R as defined are a crown decomposition.
This provides a valid crown decomposition, as e will be left inside R.
There is always an alternating path between the vertex matched to u and B_M̄. This path provides an alternative maximum matching M′ that does not use u. Thus, we are in Case 3.
Using the same technique as in Case 3, we can assume v ∈ B_M̄. If there is an alternating path from u to B_M̄ − v, we can again use the same technique and we are in Case 3. Otherwise, every alternating path from u to B_M̄ leads exclusively to v, meaning that there is a set of vertices with which we have a valid crown decomposition.
For every crown decomposition we defined, the set R contains at most 2k vertices; thus these decompositions provide a kernel of size 2k for e+-Vertex Cover.
If we consider Vertex Cover with other local modifications, we observe that it is easy to use the same technique. Adding vertices and deleting vertices or edges allows us to use exactly the same crown decompositions and similar techniques to find kernels of size at most 2k.

Reduction from Set Cover to Connected Vertex Cover
One of the problems where reoptimization does not help us to achieve any improvement with respect to the classical parameterization techniques is the Connected Vertex Cover (CVC) problem. A connected vertex cover of a graph G = (V, E) is a subset A ⊆ V that is a vertex cover of G and such that the subgraph induced by A is connected. Connected Vertex Cover is FPT with respect to the solution size [9]. Moreover, it is conjectured that Connected Vertex Cover does not have a polynomial Turing kernel [21]. On the other hand, Set Cover does not have a polynomial kernel [12]; in fact, it is complete for W[2].
We prove through a reduction that, if a polynomial Turing kernel could be constructed for Connected Vertex Cover, then Set Cover would be in PK.
Given a nontrivial Set Cover instance F = {F_1, . . . , F_t} with a base set S of size |S| = s, we want to check whether it has an optimal solution of size k. We construct an instance of Connected Vertex Cover as shown in Fig. 3.
We create a grid of vertices u_{i,j}, where i = 1, . . . , k and j = 0, . . . , s. Each of these vertices has an attached leaf u′_{i,j}. The columns of the grid represent the base set of the Set Cover instance.
We add a row of vertices F_1, . . . , F_t, in which each vertex u_{i,j} of column j is connected to F_ℓ if and only if j ∈ F_ℓ. We also add a vertex E which is connected to the first column, that is, to u_{i,0} for all i = 1, . . . , k, except for u_{k,0}. The edge (E, u_{k,0}) is the one added in the reoptimization step (dashed edge in Fig. 3).
We add a column of vertices v_1, . . . , v_k, in which each vertex u_{i,j} of row i is connected to v_i, as represented in Fig. 3. Finally, we add two vertices F and V: F neighbors F_ℓ, for all ℓ = 1, . . . , t, as well as V, and V additionally neighbors v_i, for all i = 1, . . . , k. Now, to select a connected vertex cover in the graph, first observe that, if a grid vertex u_{i,j} is not part of the cover, then its leaf u′_{i,j} must be; but the leaf's only neighbor is u_{i,j}, so the cover would not be connected. Thus, we select all the vertices u_{i,j} of the grid (i.e., k(s + 1) vertices in total). This covers all leaf edges (u_{i,j}, u′_{i,j}) and all edges (u_{i,j}, F_ℓ) and (u_{i,j}, v_i). It does not cover, however, the edges (F_ℓ, F), (v_i, V), and (V, F), and it is not connected.
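For concreteness, the construction described above can be generated programmatically. The sketch below builds the edge list and singles out the dashed reoptimization edge; the vertex names are illustrative ('Fhub' and 'Vhub' stand for the vertices F and V of the text, renamed to avoid clashing with the row F_1, . . . , F_t).

```python
def build_cvc_instance(sets, s, k):
    """Build the CVC instance of Fig. 3 as an edge list.
    `sets` is a list of subsets of {1, ..., s}.
    Returns (edges, dashed_edge), where dashed_edge = (E, u[k][0]) is the
    edge added in the reoptimization step."""
    edges = []
    # Grid u[i][j] with attached leaves, i = 1..k, j = 0..s.
    for i in range(1, k + 1):
        for j in range(s + 1):
            edges.append((('u', i, j), ('u_leaf', i, j)))
    # Row F_1..F_t: u[i][j] -- F_l iff j is in F_l (columns 1..s encode S).
    for l, f in enumerate(sets, start=1):
        for j in f:
            for i in range(1, k + 1):
                edges.append((('u', i, j), ('F', l)))
    # Vertex E: adjacent to the whole first column except u[k][0].
    for i in range(1, k):
        edges.append(('E', ('u', i, 0)))
    dashed_edge = ('E', ('u', k, 0))
    # Column v_1..v_k: v_i is adjacent to every u[i][j] of row i.
    for i in range(1, k + 1):
        for j in range(s + 1):
            edges.append((('v', i), ('u', i, j)))
    # Hubs: F adjacent to every F_l and to V; V adjacent to every v_i.
    for l in range(1, len(sets) + 1):
        edges.append(('Fhub', ('F', l)))
    edges.append(('Fhub', 'Vhub'))
    for i in range(1, k + 1):
        edges.append(('Vhub', ('v', i)))
    return edges, dashed_edge
```

The resulting graph has 2k(s + 1) + t + k + 3 vertices: the grid and its leaves, the row F_1, . . . , F_t, the column v_1, . . . , v_k, and E, F, V.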
The vertex F needs to be taken to cover the edges (F_ℓ, F) because, again, taking every F_ℓ instead could make the vertex cover exponential in s. Moreover, to connect the u_{i,j}, we have two options:

1. Take all v_i and V: with these vertices we cover the remaining edges and obtain a CVC of size k(s + 1) + k + 2 (see Fig. 4).
2. Take a selection of the F_ℓ that covers all columns, and also take v_k in order to connect the vertex u_{k,0}. Finally, also take V to cover the edges (v_i, V). This makes a total of k(s + 1) + SCsol + 1 + 3, where SCsol + 1 stands for the size of a Set Cover solution for (S, F) together with the vertex E, and the +3 stands for F, v_k, and V (see Fig. 5).

Mixing these two strategies is not a good option because, for any column j missed by the set cover, connecting every vertex u_{i,j} to the rest of the cover would require taking all vertices v_i, 1 ≤ i ≤ k, into the cover, rendering the selection of any F_ℓ pointless. Both solutions have the same size if and only if the size of the optimal set cover for F is k − 2.
Once the dashed edge is added to the graph, the connected vertex cover that uses the set cover can be reduced by one vertex by not taking v_k: this vertex is no longer needed, because E now connects u_{k,0} to the rest of the cover. The cover using the v_i, however, cannot be reduced by the addition of this new edge. This means that only the solution using the set cover is optimal once the new edge is added.
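The size accounting above can be checked with a few lines; the function and argument names are illustrative, and sc_size stands for the size of an optimal set cover of (S, F).

```python
def cover_sizes(s, k, sc_size):
    """Sizes of the two candidate connected vertex covers of Section 4.
    Returns (option 1, option 2, option 2 after the dashed edge is added)."""
    grid = k * (s + 1)                # all grid vertices u[i][j] are forced
    via_rows = grid + k + 2           # option 1: all v_i, plus V and F
    via_sets = grid + sc_size + 4     # option 2: SC sets + E, plus F, v_k, V
    # After adding the dashed edge (E, u[k][0]), v_k is no longer needed:
    via_sets_after = via_sets - 1
    return via_rows, via_sets, via_sets_after
```

The two options tie exactly when sc_size = k − 2, and after the edge addition the set-cover option becomes strictly smaller, as the argument above requires.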
We can conclude that, if we could get an optimal solution in one query to the oracle after the edge addition, knowing one previous solution, we would need to send a reduced version of the new CVC input to the oracle. This reduced version would contain only polynomially (in k) many vertices. Hence, if we only knew the previous solution using the v_i's, the algorithm would have to find a reduction containing sufficient information to solve the SC instance without any previous knowledge. A polynomial kernel could then be found for any SC instance by performing this construction and then the reduction. Therefore, a reoptimization Turing kernelization algorithm cannot exist, unless we have a polynomial kernel for Set Cover.

Reduction for FPT
The reduction presented in Section 4 is only so involved because we are reducing under reoptimization. If we only want to use this reduction to find a fast algorithm for Set Cover, we can get rid of most of the construction.
First of all, we know that CVC is FPT with respect to the solution size [9]. However, Set Cover is W[2]-hard with respect to its own solution size. We want to find a parameter with respect to which Set Cover is FPT. In order to keep the parameter as small as possible, we need a reduction in which the size of the connected vertex cover and the size of the set cover do not differ too much. We consider one vertex v_1, . . . , v_s for each element of the base set and, for each of these vertices, an attached leaf v′_i. Then we consider vertices F_1, . . . , F_t, one for each element of F. There is an edge (v_i, F_j) in the graph if and only if v_i ∈ F_j (see Fig. 6). Finally, we add a vertex F which is adjacent to every vertex F_j.

Because of the leaves v′_i, all of the vertices v_i must belong to any optimal connected cover. Furthermore, all vertices v_i together with F provide a cover; if F is not taken into the cover, then all vertices F_j must be. Moreover, the only way to connect the selected vertex cover is to take some vertices F_j into the solution, and the minimum number of them that connects the whole cover together is exactly an optimal set cover for the original instance. This yields a CVC of size s + k + 1, where k is the size of the optimal set cover. Because CVC can be solved in time O(2^sol · n^{O(1)}), we can solve Set Cover in time O(2^{s+k} · n^{O(1)}), which means that Set Cover is FPT with respect to s + k. Moreover, taking into consideration that k ≤ s, we can say that Set Cover is FPT with respect to s.
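The simpler construction can be sketched in edge-list form as follows; the vertex names are illustrative ('Fhub' stands for the hub vertex F of the text).

```python
def build_fpt_instance(sets, s):
    """Build the reduction of Fig. 6 as an edge list.
    An optimal connected vertex cover of this graph has size s + k + 1,
    where k is the size of an optimal set cover of the instance."""
    edges = []
    # One vertex per base element, each with a pendant leaf forcing it in.
    for i in range(1, s + 1):
        edges.append((('v', i), ('v_leaf', i)))
    # One vertex per set, adjacent to the elements it contains.
    for j, f in enumerate(sets, start=1):
        for i in f:
            edges.append((('v', i), ('F', j)))
    # Hub F adjacent to every set vertex, so any selection of sets that
    # covers all elements connects the cover.
    for j in range(1, len(sets) + 1):
        edges.append(('Fhub', ('F', j)))
    return edges
```

The graph has s leaves, s element vertices, t set vertices, and the hub, i.e., 2s + t + 1 vertices and s + Σ|F_j| + t edges.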

Conclusions and Further Research
We presented examples of problems that do not have polynomial kernels under standard complexity-theoretic assumptions but whose reoptimization versions do. We also presented an example where a kernel using the same technique is smaller in the reoptimization version of the problem. Finally, we presented a reduction proving that there are problems and local modifications for which the complexity does not decrease under reoptimization.
In conclusion, there are problems that become easier under reoptimization conditions and problems that do not. We hope that further research will help us better understand how much information neighboring solutions provide and when this information is helpful.

Theorem 7. Let L be a parameterized compositional graph problem based on a decreasingly monotone property P. If, for every instance (G, k), we can compute a polynomial kernel for a local environment of any edge e ∈ E, then e⁻-L is in PK.

Proof. Completely analogous to the proof of Theorem 5. ⊓⊔

Observe that, in the case of an edge deletion, given an instance G and an edge e, the local environment of e might contain two connected components. If the considered problem L has polynomial kernels for connected graph instances, it will have a polynomial kernel for the graph G − e, because one polynomial kernel for each component (or one polynomial kernel rooted in each of the vertices incident to e) is still polynomial in k. Moreover, none of the other components are modified; thus, any solution found for G − e must lie in one of the newly generated components.

Theorem 8. Let L be a parameterized compositional graph problem based on a decreasingly monotone property P. If, for every instance (G, k), we can compute a polynomial kernel for a local environment of any new vertex v ∈ V, then v⁻-L is in PK.

Proof. Completely analogous to the proof of Theorem 5.

Fig. 1. Example of a crown decomposition of a graph.

Fig. 3. Reduction of the Set Cover instance to an instance of CVC. The leaves u′_{i,j}

Fig. 5. Second CVC option using a SC for F.

Fig. 6. Reduction of the Set Cover instance to an instance of CVC. The leaves v′_i