The Power of Linear-Time Data Reduction for Maximum Matching

Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well known to be solvable in O(m √ n) time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction (used as preprocessing) can alleviate the situation. More specifically, we focus on linear-time kernelization. We start a deeper and systematic study both for general graphs and for bipartite graphs. Our data reduction algorithms easily comply (in form of preprocessing) with every solution strategy (exact, approximate, heuristic), thus making them attractive in various settings.


Introduction
"Matching is a powerful piece of algorithmic magic" [24]. In the Maximum Matching problem, given an undirected graph, one has to compute a maximum-cardinality set of nonoverlapping edges. Maximum Matching is arguably among the most fundamental graph-algorithmic primitives allowing for a polynomial-time algorithm. More specifically, on an n-vertex and m-edge graph a maximum matching can be found in O(m √ n) time [21]. Improving this upper time bound has resisted decades of research. Recently, however, Duan and Pettie [8] presented a linear-time algorithm that computes a (1 − ε)-approximate maximum-weight matching, where the running time dependency on ε is ε −1 log(ε −1 ). For the unweighted case, the O(m √ n) algorithm of Micali and Vazirani [21] implies a linear-time (1 − ε)-approximation, where in this case the running time dependency on ε is ε −1 [8]. We take a different route: First, we do not give up the quest for optimal solutions. Second, we focus on efficient, more specifically linear-time executable, data reduction rules, that is, rules that do not solve an instance but significantly shrink its size before the problem is actually solved. In the context of decision problems and parameterized algorithmics this approach is known as kernelization; it is a particularly active area of algorithmic research on NP-hard problems.
The spirit behind our approach is thus closer to the identification of efficiently solvable special cases of Maximum Matching. There is quite some body of work in this direction. For instance, since an augmenting path can be found in linear time [10], the standard augmenting-path-based algorithm runs in O(s(n + m)) time, where s is the number of edges in a maximum matching. Yuster [26] developed an O(rn 2 log n)-time algorithm, where r is the difference between the maximum and minimum degree of the input graph. Moreover, there are linear-time algorithms for computing maximum matchings in special graph classes, including convex bipartite [25], strongly chordal [7], chordal bipartite [6], and cocomparability graphs [20].
All this and the more general spirit of "parameterization for polynomial-time solvable problems" (also referred to as "FPT in P" or "FPTP" for short) [12] forms the starting point of our research. Remarkably, Fomin et al. [9] recently developed an algorithm to compute a maximum matching in graphs of treewidth k in O(k 4 n log 2 n) randomized time. Afterwards, Iwata, Ogasawara, and Ohsaka [16] provided an elegant algorithm computing a maximum matching in graphs of treedepth d in O(d ⋅ m) time. This implies an O(k 2 n log n)-time algorithm where k is the treewidth, since m ∈ O(kn) and d ≤ (k + 1) log n [23]. Recently, Kratsch and Nelles [19] presented an O(r 2 log(r) n + m)-time algorithm where r is the modular-width.
Following the paradigm of kernelization, that is, provably effective and efficient data reduction, we provide a systematic exploration of the power of not only polynomial-time but actually linear-time data reduction for Maximum Matching. Thus, our aim (fitting within FPTP) is to devise problem kernels that are computable in linear time. In other words, the fundamental question we pose is whether there is a very efficient preprocessing that provably shrinks the input instance, where the effectiveness is measured in terms of some parameter. The philosophy behind this is that if we can design linear-time data reduction algorithms, then we may employ them for free before running any super-linear-time solving algorithm. We believe that this sort of question deserves deeper investigation, and we initiate it based on the Maximum Matching problem. In fact, in follow-up work we demonstrated that such linear-time data reduction rules can significantly speed up state-of-the-art solvers for Matching [18].
As kernelization is usually defined for decision problems, we use in the remainder of the paper the decision version of Maximum Matching, which we simply call Matching. In a nutshell, a kernelization of a decision problem instance is an algorithm that produces an equivalent instance whose size can be upper-bounded solely by a function of the parameter (preferably a polynomial function). The focus on decision problems is justified by the fact that all our results, although formulated for the decision version, extend in a straightforward way to the corresponding optimization version (as also done in our follow-up work [18]).
(Maximum-Cardinality) Matching
Input: An undirected graph G = (V, E) and a nonnegative integer s.
Question: Is there a size-s subset M ⊆ E of nonoverlapping (i.e. pairwise disjoint) edges?
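To make the decision version concrete, the following Python sketch checks a tiny instance by brute force. The function name has_matching_of_size is ours, and the exponential search over edge subsets is purely an illustration of the problem statement, not a solving algorithm.

```python
from itertools import combinations

def has_matching_of_size(edges, s):
    """Brute-force check for the Matching decision problem.

    Tries all s-subsets of the edge list and tests whether the chosen
    edges are pairwise disjoint.  Exponential in s: for illustrating the
    problem only, not for solving it.
    """
    for subset in combinations(edges, s):
        vertices = [v for e in subset for v in e]
        if len(vertices) == len(set(vertices)):  # edges pairwise disjoint
            return True
    return False

# A path on four vertices has a matching of size 2 ...
assert has_matching_of_size([(1, 2), (2, 3), (3, 4)], 2)
# ... but none of size 3.
assert not has_matching_of_size([(1, 2), (2, 3), (3, 4)], 3)
```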
Note that, for any polynomial-time solvable problem, solving the given instance and returning a trivial yes- or no-instance always produces a constant-size kernel in polynomial time. Hence, we are looking for kernelization algorithms that are faster than the algorithms solving the problem. The best we can hope for is linear time. For NP-hard problems, each polynomial-time kernelization algorithm is faster than any solution algorithm, unless P=NP. While the focus of classical kernelization for NP-hard problems is mostly on improving the size of the kernel, we particularly emphasize that for polynomially solvable problems it is mandatory to also focus on the running time of the kernelization algorithm. Indeed, we can consider linear-time kernelization as the holy grail, and this drives our research when studying kernelization for Matching.
Our contributions. We present three kernels for Matching (see Table 1 for an overview). All our parameterizations can be categorized as "distance to triviality"; in practice, our kernelization algorithms might achieve much better upper bounds on real-world input instances than the stated worst-case guarantees. In fact, in experiments using the kernelization with respect to the feedback edge number, the observed kernels were always significantly smaller than the theoretical bound [18]. As a technical side remark, we emphasize that in order to achieve a linear-time kernelization algorithm, we often need to use suitable data structures and to carefully design the data reduction rules to be exhaustively applicable in linear time, making this form of "algorithm engineering" much more relevant than in the classical setting of mere polynomial-time data reduction rules.

Preliminaries and Basic Observations
Notation and Observations. We use standard notation from graph theory. A feedback vertex (edge) set of a graph G is a set X of vertices (edges) such that G − X is a forest. The feedback vertex (edge) number denotes the size of a minimum feedback vertex (edge) set. All paths we consider are simple paths. Two paths in a graph are called internally vertex-disjoint if they are either completely vertex-disjoint or they overlap only in their endpoints. A matching in a graph is a set of pairwise disjoint edges. Let G = (V, E) be a graph and let M ⊆ E be a matching in G. The degree of a vertex v is denoted by deg(v). A vertex v ∈ V is called matched with respect to M if there is an edge in M containing v; otherwise v is called free with respect to M. If the matching M is clear from the context, then we omit "with respect to M". An alternating path with respect to M is a path in G such that every second edge of the path is in M. An augmenting path is an alternating path whose endpoints are free. It is well known that a matching M is maximum if and only if there is no augmenting path for it. Let M ⊆ E and M ′ ⊆ E be two matchings in G.

Kernelization for Matching on General Graphs
In this section, we investigate the possibility of efficient and effective preprocessing for Matching. As a warm-up, we first present in Sect. 3.1 a simple, linear-size kernel for Matching with respect to the parameter feedback edge number. Exploiting the data reduction rules and ideas used for this kernel, we then present in Sect. 3.2 the main result of this section: an exponential-size kernel for the almost always significantly smaller parameter feedback vertex number.

Warm-Up: Parameter Feedback Edge Number
We provide a linear-time computable linear-size kernel for Matching parameterized by the feedback edge number, that is, the size of a minimum feedback edge set. Observe that a minimum feedback edge set can be computed in linear time via a simple depth-first search or breadth-first search. The kernel is based on the next two simple data reduction rules due to Karp and Sipser [17]. These rules deal with vertices of degree at most two.
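The computation of a minimum feedback edge set is elementary: the non-tree edges of any spanning forest form one. The following Python sketch (the function name feedback_edge_set and the adjacency-list representation are our choices) illustrates this with an iterative depth-first search; since a spanning forest of a graph with n vertices and c components has exactly n − c edges, exactly m − n + c edges are returned.

```python
def feedback_edge_set(n, edges):
    """Return the non-tree edges of a DFS spanning forest.

    The returned edges form a minimum feedback edge set: removing them
    leaves a forest, and no smaller edge set suffices, because a spanning
    forest of a graph with c components has exactly n - c edges.
    """
    adj = {v: [] for v in range(n)}
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    visited, used = [False] * n, [False] * len(edges)
    for root in range(n):
        if visited[root]:
            continue
        visited[root] = True
        stack = [root]
        while stack:
            u = stack.pop()
            for v, i in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    used[i] = True  # tree edge of the DFS forest
                    stack.append(v)
    return [e for i, e in enumerate(edges) if not used[i]]

# A triangle plus a pendant vertex: exactly one edge must be removed.
assert len(feedback_edge_set(4, [(0, 1), (1, 2), (2, 0), (2, 3)])) == 1
```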
Reduction Rule 1
If v is a vertex of degree at most one, then delete v; if v has a neighbor, also delete this neighbor and decrease the solution size s by one (v is matched with its neighbor).

Reduction Rule 2
Let v be a vertex of degree two and let u, w be its neighbors. Then remove v, merge u and w, and decrease the solution size s by one.
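The exhaustive application of Reduction Rules 1 and 2 can be sketched as follows. The dictionary-of-sets representation and the function name are our choices; for clarity the merge in Rule 2 copies neighbor sets, so this simple sketch does not meet the linear-time bound of the carefully engineered algorithm from [2].

```python
def karp_sipser_reduce(adj, s):
    """Exhaustively apply Reduction Rules 1 and 2.

    adj maps each vertex to a set of neighbors; s is the sought matching
    size.  Returns the reduced graph and the residual solution size.
    """
    queue = [v for v in adj if len(adj[v]) <= 2]
    while queue:
        v = queue.pop()
        if v not in adj or len(adj[v]) > 2:
            continue
        if len(adj[v]) == 0:           # isolated vertices are irrelevant
            del adj[v]
        elif len(adj[v]) == 1:         # Rule 1: match v with its neighbor
            (u,) = adj[v]
            for w in adj[u] - {v}:
                adj[w].discard(u)
                queue.append(w)
            del adj[v], adj[u]
            s -= 1
        else:                          # Rule 2: merge v's two neighbors
            u, w = adj[v]
            adj[u].discard(v)
            adj[w].discard(v)
            for x in adj[w] - {u}:     # redirect w's edges to u
                adj[x].discard(w)
                adj[x].add(u)
                adj[u].add(x)
                queue.append(x)        # collapsing edges may lower degrees
            adj[u].discard(w)          # a u-w edge would become a self-loop
            del adj[v], adj[w]
            s -= 1
            queue.append(u)
    return adj, s

# A path on four vertices reduces to nothing, certifying a matching of size 2.
graph = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert karp_sipser_reduce(graph, 2) == ({}, 0)
```

Note how each degree decrement re-inserts the affected vertex into the queue, so the rules are applied until no vertex of degree at most two remains.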
Proof Let G be the input graph with feedback edge number k. First we apply Reduction Rule 1 to G exhaustively, obtaining graph G 1 = (V 1 , E 1 ), and then we apply Reduction Rule 2 to G 1 exhaustively, obtaining graph G 2 = (V 2 , E 2 ). Note that both G 1 and G 2 can be computed in linear time [2]. We will prove that G 2 has at most 6k vertices and 7k edges. Denote with X 1 ⊆ E 1 and X 2 ⊆ E 2 minimum feedback edge sets for G 1 and G 2 , respectively. Note that |X 2 | ≤ |X 1 | ≤ k. For any graph H, denote with V 1 H , V 2 H , and V ≥3 H the vertices of H that have degree one, two, and more than two, respectively (in our case H will be G 1 − X 1 or G 2 − X 2 ). Observe that all vertices in G 1 have degree at least two, since G 1 is reduced with respect to Reduction Rule 1. Thus |V 1 G 1 −X 1 | ≤ 2k, as each leaf in G 1 − X 1 has to be incident to an edge in X 1 . Next, since G 1 − X 1 is a forest, it contains fewer vertices of degree at least three than leaves, and thus |V ≥3 G 1 −X 1 | ≤ 2k as well. Note that the number of degree-two vertices in G 1 cannot be upper-bounded by a function of k. However, the exhaustive application of Reduction Rule 2 to G 1 removes all vertices that have degree two in G 1 and possibly merges some of the remaining vertices. Thus, G 2 contains no vertices of degree at most two. Consequently, every vertex of G 2 that has degree at most two in the forest G 2 − X 2 must be incident to an edge of X 2 , and there are at most 2|X 2 | ≤ 2k such vertices; together with |V 1 G 2 −X 2 | + |V ≥3 G 2 −X 2 | ≤ 4k this yields at most 6k vertices in G 2 . Since G 2 − X 2 is a forest, G 2 has at most 6k − 1 + k ≤ 7k edges. ◻ Applying the O(m √ n)-time algorithm for Matching [21] on the above kernel yields the following.

Parameter Feedback Vertex Number
We next provide for Matching a kernel of size 2 O(k) computable in O(kn) time, where k is the feedback vertex number. Using a known linear-time factor-4 approximation algorithm [1], we can compute an approximate feedback vertex set and use it in our kernelization algorithm.
Roughly speaking, our kernelization algorithm extends the linear-time computable kernel with respect to the parameter feedback edge set. Thus, Reduction Rules 1 and 2 play an important role in the kernelization. Compared to the other kernels presented in this paper, the kernel presented here comes at the price of a higher running time O(kn) and a bigger kernel size (exponential). It remains open whether Matching parameterized by the feedback vertex number admits a linear-time computable kernel (possibly of exponential size), or whether it admits a polynomial kernel computable in O(kn) time.
Subsequently, we describe our kernelization algorithm which keeps in the kernel all vertices of the given feedback vertex set X and shrinks the size of G − X. Before doing so, we need some further notation. In this section, we assume that each tree is rooted at some arbitrary (but fixed) vertex such that we can refer to the parent and children of a vertex. A leaf in G − X is called a bottommost leaf either if it has no siblings or if all its siblings are also leaves. (Here, bottommost refers to the subtree with the root being the parent of the considered leaf.) The outline of the algorithm is as follows (we assume throughout this section that k < log n since otherwise the input instance is already a kernel of size O(2 k )):
1. Apply Reduction Rules 1 and 2 exhaustively.
2. Compute a maximum matching M G−X in G − X (where X is a feedback vertex set that is computed by the linear-time 4-approximation algorithm [1]).
3. Modify M G−X such that only leaves of G − X are free with respect to M G−X .
4. Upper-bound the number of edges between X and V⧵X and thus the number of free leaves in G − X.
5. Upper-bound the number of bottommost leaves in G − X.
6. Handle the remaining matched leaves in G − X.
Whenever we reduce the graph at some step, we also show that the reduction is correct. That is, the given instance is a yes-instance if and only if the reduced one is a yes-instance. The correctness of our kernelization algorithm then follows by the correctness of each step. We discuss in the following the details of each step.

Steps 1-4
In this subsection, we first discuss the straightforward Steps 1 to 3 and then turn to Step 4. Steps 1-3. As in Sect. 3.1, we perform Step 1 in linear time by first applying Reduction Rule 1 and then Reduction Rule 2 using the algorithm due to Bartha and Kresz [2]. By Lemma 1 this step is correct.
A maximum matching M G−X in Step 2 can be computed by repeatedly matching a free leaf to its neighbor and by removing both vertices from the graph (thus effectively applying Reduction Rule 1 to G − X ). Clearly, this can be done in linear time.
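Step 2 can be sketched as follows, assuming the forest is given as a dictionary of adjacency sets (our representation). Matching a free leaf to its unique neighbor is exactly Reduction Rule 1, so the greedy result is a maximum matching of the forest.

```python
def forest_maximum_matching(adj):
    """Maximum matching in a forest by repeatedly matching a leaf.

    Matching a free leaf to its unique neighbor is always safe (it is
    Reduction Rule 1 applied to the forest), so the greedy result is
    maximum.  adj maps each vertex to a set of neighbors and is consumed.
    """
    matching = []
    leaves = [v for v in adj if len(adj[v]) == 1]
    while leaves:
        v = leaves.pop()
        if v not in adj or len(adj[v]) != 1:
            continue
        (u,) = adj[v]
        matching.append((v, u))
        for w in adj[u] - {v}:         # u's other neighbors lose an edge
            adj[w].discard(u)
            if len(adj[w]) == 1:
                leaves.append(w)
        del adj[v], adj[u]
    return matching

# A path on five vertices has a maximum matching of size 2.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert len(forest_maximum_matching(path)) == 2
```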
Step 3 can be done in O(n) time by traversing each tree in G − X in a BFS manner starting from the root: If a visited inner vertex v is free, then observe that all children are matched since M G−X is maximum. Pick an arbitrary child u of v and match it with v. The vertex w that was previously matched to u is now free and since it is a child of u, it will be visited in the future. Observe that Steps 2 and 3 do not change the graph but only the auxiliary matching M G−X , and thus the first three steps are correct. The next observation summarizes the short discussion above.
Observation 3 Steps 1 to 3 are correct and can be applied in linear time.
Step 4. Our goal is to upper-bound the number of edges between vertices of X and V⧵X , since we can then use a simple analysis as for the parameter feedback edge set. Furthermore, recall that by Observation 2 the size of any maximum matching in G is at most k plus the size of M G−X . Now, the crucial observation here is that, if a vertex x ∈ X has at least k neighbors {v 1 , … , v k } in V⧵X which are free wrt. M G−X , then there exists a maximum matching where x is matched to one of {v 1 , … , v k } since at most k − 1 can be "blocked" by other matching edges. Indeed, consider otherwise a maximum matching M in which x is not matched with any of {v 1 , … , v k } . Then, since |X| = k , note that at most k − 1 vertices among {v 1 , … , v k } are matched in M with a vertex in X; suppose without loss of generality that v k is not matched with any vertex in X (and thus v k is not matched at all in M). If x is unmatched in M, then the matching M ∪ {{x, v k }} has greater cardinality than M, a contradiction. Otherwise, if x is matched in M with a vertex z, then M ∪ {{x, v k }}⧵{{x, z}} is another maximum matching of G, in which x is matched with a vertex among {v 1 , … , v k } . Formalizing this idea, we obtain the following data reduction rule.
Reduction Rule 3 Let G = (V, E) be a graph, let X ⊆ V be a vertex subset of size k, and let M G−X be a maximum matching for G − X . If there is a vertex x ∈ X with at least k free neighbors V x = {v 1 , … , v k } ⊆ V⧵X , then delete all edges from x to vertices in V⧵V x .
We first show the correctness and then the running time of Reduction Rule 3.

Lemma 2 Reduction Rule 3 is correct.
Proof Denote by s the size of a maximum matching in the input graph G = (V, E) and by s ′ the size of a maximum matching in the new graph G ′ = (V ′ , E ′ ) , where some edges incident to x are deleted. We need to show that s = s ′ . Since any matching in G ′ is also a matching in G, we easily obtain s ≥ s ′ .
It remains to show s ≤ s ′ . To this end, let M G ∶= M max G (M G−X ) be a maximum matching for G with the maximum overlap with M G−X (see Sect. 2). If x is free wrt. M G or if x is matched to a vertex v that is also a neighbor of x in G ′ , then M G is also a matching in G ′ ( M G ⊆ E ′ ) and thus we have s ≤ s ′ . Hence, consider the remaining case where x is matched to some vertex v such that {v, x} ∉ E ′ , that is, the edge {v, x} was deleted by Reduction Rule 3. Hence, x has k neighbors v 1 , … , v k in V⧵X such that each of these neighbors is free wrt. M G−X and none of the edges {v i , x}, i ∈ [k] , was deleted. Observe that by the choice of M G , the graph G(M G−X , M G ) (the graph over vertex set V and the edges that are either in M G−X or in M G , see Sect. 2) contains exactly s − |M G−X | paths of length at least one. Each of these paths is an augmenting path for M G−X . By Observation 2, we have s − |M G−X | ≤ k. Observe that {v, x} is an edge in one of these augmenting paths; denote this path with P. Thus, there are at most k − 1 augmenting paths in G(M G−X , M G ) that do not contain x. Also, each of these paths contains exactly two vertices that are free wrt. M G−X : the endpoints of the path. This means that no vertex in X is an inner vertex on such a path. Furthermore, since M G−X is a maximum matching, it follows that for each path at most one of these two endpoints is in V⧵X . Hence, at most k − 1 vertices of v 1 , … , v k are contained in the k − 1 augmenting paths of G(M G−X , M G ) other than P. Consequently, one of these vertices, say v i , is free wrt. M G and can be matched with x. Thus, by reversing the augmentation along P and adding the edge {v i , x} we obtain another matching M ′ G of size s. Observe that M ′ G is a matching for G and for G ′ and thus we have s ≤ s ′ . This completes the proof of correctness. ◻

Lemma 3 Reduction Rule 3 can be exhaustively applied in O(n + m) time.
Proof We exhaustively apply the data reduction rule as follows. First, initialize for each vertex x ∈ X a counter with zero. Second, iterate over all free vertices in G − X in an arbitrary order. For each free vertex v ∈ V⧵X iterate over its neighbors in X.
For each neighbor x ∈ X do the following: if the counter is less than k, then increase the counter by one and mark the edge {v, x} (initially all edges are unmarked). Third, iterate over all vertices in X. If the counter of the currently considered vertex x is k, then delete all unmarked edges incident to x. This completes the algorithm. Clearly, it deletes edges incident to a vertex x ∈ X if and only if x has k free neighbors in V⧵X and the edges to these k neighbors are kept. The running time is O(n + m) : When iterating over all free vertices in V⧵X we consider each edge at most once. Furthermore, when iterating over the vertices in X, we again consider each edge at most once. ◻
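The counting-and-marking scheme from this proof can be sketched as follows; the representation (adjacency sets, a set of marked edges) and the function name apply_rule_3 are our choices.

```python
def apply_rule_3(adj, X, free, k):
    """Exhaustive application of Reduction Rule 3 via counting and marking.

    adj: adjacency sets; X: feedback vertex set (as a set); free: the
    free (wrt the maximum matching of G - X) vertices of V minus X;
    k = |X|.  For every x in X, edges to the first k free neighbors
    encountered are marked; once the counter of x reaches k, all other
    edges incident to x are deleted.
    """
    counter = {x: 0 for x in X}
    marked = set()                     # kept edges, stored as frozensets
    for v in free:
        for x in adj[v] & X:
            if counter[x] < k:
                counter[x] += 1
                marked.add(frozenset((v, x)))
    for x in X:
        if counter[x] == k:
            for v in list(adj[x]):     # copy: we mutate adj[x] below
                if frozenset((v, x)) not in marked:
                    adj[x].discard(v)
                    adj[v].discard(x)
    return adj
```

Each edge is inspected at most twice (once while counting, once while deleting), matching the O(n + m) bound of Lemma 3.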

To finish Step 4, we exhaustively apply Reduction Rule 3 in linear time. Afterwards, there are at most k 2 free (wrt. M G−X ) leaves in G − X that have at least one neighbor in X, since each of the k vertices in X is adjacent to at most k free leaves. Thus, applying Reduction Rule 1 we can remove the remaining free leaves that have no neighbor in X. However, since for each degree-one vertex also its neighbor is removed, we might create new free leaves in G − X and need to again apply Reduction Rule 3 and update the matching (see Step 3). This process of alternatingly applying Reduction Rules 1 and 3 stops after at most k rounds since the neighborhood of each vertex in X can be changed by Reduction Rule 3 at most once. This shows the running time O(k(n + m)). We next show how to improve this to O(n + m). In doing so, we arrive at the central proposition of this subsection, stating that Steps 1 to 4 can be performed in linear time.
Proposition 1 Given a Matching instance (G, s) and a feedback vertex set X, Algorithm 1 computes in linear time an instance (G ′ , s ′ ) with feedback vertex set X and a maximum matching M G ′ −X in G ′ − X such that the following holds.

-There is a matching of size s in G if and only if there is a matching of size s ′ in G ′ .
-There are at most k 2 free leaves in G ′ − X.
Before proving Proposition 1, we explain Algorithm 1, which reduces the graph with respect to Reduction Rules 1 and 3 and updates the matching M G−X as described in Step 3. The algorithm performs in Lines 1 and 2 Steps 1 to 3. This can be done in linear time (see Observation 3). Next, Reduction Rule 3 is applied in Lines 8 to 15 using the approach described in the proof of Lemma 3: For each vertex x ∈ X a counter c(x) is maintained. When iterating over the free leaves in G − X , these counters are updated. If a counter c(x) reaches k, then the algorithm knows that x has k fixed free neighbors, and according to Reduction Rule 3 the edges to all other vertices can be deleted (see Line 10). Observe that once the counter c(x) reaches k, the vertex x will never be considered again by the algorithm, since its only remaining neighbors are free leaves in G − X that have already been popped from the stack L. The only difference from the description in the proof of Lemma 3 is that the algorithm reacts if some leaf v in G − X lost its last neighbor in X (see Line 15). If v is free, then add v to the stack L of unmatched degree-one vertices and defer dealing with v to a second stage of the algorithm (Lines 16 to 24). (If v is matched, then we deal with v in Step 6.) We next discuss this second stage in Lines 16 to 24 (see Fig. 1 for an illustration): Let u be an entry in L such that u has degree one in Line 16, that is, u is a free leaf in G − X and has no neighbors in X. Then, following Reduction Rule 1, delete u and its neighbor v and decrease the solution size s by one (see Lines 18 and 19). Let w denote the previously matched neighbor of v. Since v was removed, w is now free. If w is a leaf in G − X , then we can simply add it to L and deal with it later. If w is not a leaf, then we need to update M G−X since only leaves are allowed to be free. To this end, augment along an arbitrary alternating path from w to a leaf in the subtree with root w (see Lines 20 to 23).
This is done as follows: Pick an arbitrary child w ′ of w. Let w ′′ be the matched neighbor of w ′ . Observe that w ′′ has to exist: if w ′ were free, then {w, w ′ } could be added to M G−X , a contradiction to the maximality of M G−X . Since w is the parent of w ′ , it follows that w ′′ is a child of w ′ . Now, remove {w ′ , w ′′ } from M G−X , add {w ′ , w} , and repeat the procedure with w ′′ taking the role of w. Observe that the endpoint of this found alternating path, after augmentation, is always a free leaf. Thus, this free leaf needs to be pushed to L. This completes the algorithm description.
The correctness of Algorithm 1 (stated in the next lemma) follows in a straightforward way from the above discussion. For the formal proofs we introduce some notation. We denote by G i (respectively M i ) the intermediate graph (respectively matching) stored by Algorithm 1 before the i th iteration of the while loop in Line 6, that is, G 1 is the input graph and M 1 is the initial matching computed in Line 2. The following observation is easy to see but useful in our proofs.

Observation 4
For each i ∈ {1, … , q} , where q is the number of iterations of the while loop in Line 6, we have that M i is a maximum matching for G i − X and that G i+1 is a subgraph of G i .

Lemma 4 Algorithm 1 is correct, that is, given a Matching instance (G, s) and a feedback vertex set X, it computes an instance (G ′ , s ′ ) with feedback vertex set X and a maximum matching M G ′ −X in G ′ − X such that:

1. There is a matching of size s in G if and only if there is a matching of size s ′ in G ′ .
2. Only leaves in G ′ − X are free wrt. M G ′ −X .
3. There are at most k 2 free leaves in G ′ − X.
Proof Observation 4 implies that the returned graph G ′ is a subgraph of the input graph G. Thus, X is a feedback vertex set for both these graphs. Moreover, by Observation 4, M G ′ −X is a maximum matching for G ′ − X.
As to 1, observe that Algorithm 1 obtains G ′ from G by deleting edges in Line 14 according to Reduction Rule 3 and by deleting vertices in Line 18 according to Reduction Rule 1. Thus, 1 follows from the correctness of these data reduction rules (see Lemmas 1 and 2).
As to 2, observe that G − X is changed if and only if the matching M G−X is changed accordingly (see Lines 16 to 24 ). That is, after each deletion of vertices, the algorithm ensures that only leaves are free. Moreover, during the algorithm M G−X is always a maximum matching for G − X.
As to 3, observe that any free leaf in G − X that is not removed needs to have a neighbor in X (see Line 16). As Reduction Rule 3 is applied in Lines 8 to 15, there are at most k 2 such free leaves. ◻

We next show that Algorithm 1 runs in linear time. To this end, we need a further technical statement.

Lemma 5
In G i , let P be an even-length alternating path wrt. M i from a free leaf r to a matched inner vertex t of G i − X . Let u be the matched neighbor of t. Then for each j ∈ {1, … , i} there exists in G j an even-length alternating path P ′ from t to a free leaf r ′ such that the neighbor of t on P ′ is either u or a vertex not contained in G i .

Proof We prove the statement of the lemma by induction on i. The base case i = 1 is trivial since G 1 = G and thus P ′ = P. Now assume the statement is true for G i−1 , i ≥ 2 . We show that it holds for G i as well. By Observation 4, G i is a subgraph of G i−1 (and of G). Thus, the path P is also contained in G i−1 (and in G). If r is a leaf in G i−1 − X and if M i contains the same edges of P as M i−1 , then P is an even-length alternating path in G i−1 and the statement of the lemma follows from applying the induction hypothesis and Observation 4. Thus, assume that (a) r is not a leaf in G i−1 − X or (b) M i does not contain the same edges of P as M i−1 (or both).
We start with case (a) assuming that r is not a leaf in G i−1 − X . Then in the (i − 1) st iteration of the while loop in Line 6, Algorithm 1 deleted the child r ′ of r and the child r ′′ of r ′ in Line 18. Moreover, M i−1 contained the edge {r, r � } and r ′′ was a free leaf in G i−1 − X . Thus, extending P by the two vertices r ′ , r ′′ yields in G i−1 an even-length alternating path P * from t to the free leaf r ′′ such that the neighbor of t on P * is u. Hence, the statement of the lemma follows from the induction hypothesis and Observation 4.
We next consider case (b), assuming that M i and M i−1 do not contain the same edges of P. Thus, in the (i − 1) st iteration of the while loop in Line 6, Algorithm 1 augmented along some alternating path in Lines 20 to 23. Denote with Q this alternating path and let w Q be the starting point of Q, that is, w Q is the vertex w in Line 17.
Let v Q , u Q be the two vertices deleted in Line 18. Let r Q be the other endpoint of Q, that is, r Q is a leaf in G i−1 and thus a free leaf in G i . Since M i and M i−1 differ on P, the two paths Q and P overlap. Let z be the vertex on both P and Q which is closest to r. If z = r = r Q , then P is a subpath of Q and in G i−1 there is an alternating path P * from t to the free leaf u Q . (Here, P * is the part of Q that is not contained in P.) Since the alternating path built in Lines 20 to 23 is only extended by selecting child vertices, this implies that w Q = t or w Q is an ancestor of t. Thus, the neighbor of t on P * is either t's parent or v Q , that is, a child of t not contained in G i . Hence, the statement of the lemma follows from the induction hypothesis and Observation 4.
It remains to consider the case that z ≠ r . Let z Q (respectively z P ) be the neighbor of z that is on Q but not on P (respectively on P but not on Q); similarly let z PQ be the neighbor of z that is on both P and Q. Since Q is an alternating path, either {z, z Q } or {z, z PQ } is in M i−1 . First consider the case that {z, z Q } is in M i−1 . Then, since both the subpath of Q from z to u Q and the subpath of P from z to r are alternating, we obtain an augmenting path from u Q over z to r. This is a contradiction to the maximality of M i−1 .
Second, consider the case that {z, z PQ } is in M i−1 . Thus, after augmenting along Q, the edge {z, z PQ } is not in M i . Moreover, as {z, z Q } is in M i , the edge {z, z P } is also not in M i . This contradicts the fact that P is an alternating path. ◻

Lemma 6 Algorithm 1 runs in O(n + m) time.

Proof By Observation 3, Steps 1 to 3 in Lines 1 and 2 can be executed in linear time. Moreover, it is easy to execute Lines 3 to 5 in one sweep over the graph, that is, in linear time. It remains to show that Lines 6 to 24 run in linear time. To this end, we prove that each edge in E is processed at most twice in Lines 6 to 24. Start with the edges with at least one endpoint in X. These edges will be inspected at most twice by the algorithm: once when the edge is marked (see Line 9), and a second time when the edge is checked and possibly deleted in Lines 13 and 14. This shows that the first part (Lines 8 to 15) runs in linear time.
It remains to consider the edges within G − X . To this end, observe that the algorithm performs two actions on the edges: deleting the edges (Line 18) and finding and augmenting along an alternating path (Lines 20 to 23). Clearly, after deleting an edge it will no longer be considered, so it remains to show that each edge is part of at most one alternating path in Line 22. Assume toward a contradiction that the algorithm augments along an edge twice or more. From all the edges that are augmented twice or more let e ∈ E be one that is closest to the root of the tree containing e, that is, there is no edge closer to a root. Let P 1 and P 2 be the first two augmenting paths containing e. Assume without loss of generality that the algorithm augmented along P 1 in iteration i 1 and along P 2 in iteration i 2 of the while loop in Line 6 with i 1 ≤ i 2 . Let w 1 and w 2 be the two start points (the respective vertex w in Line 17) of P 1 and P 2 respectively. Let u 1 and v 1 ( u 2 and v 2 ) be the vertices deleted in Line 18 which in turn made w 1 ( w 2 ) free.
Observe that e does not contain any of these four vertices u 1 , v 1 , u 2 , v 2 since before augmenting P 1 ( P 2 ) the vertices u 1 and v 1 ( u 2 and v 2 ) are deleted in Line 18. Since e is contained in both paths, either w 1 is an ancestor of w 2 or vice versa (or w 1 = w 2 ).
Assume first that w 2 is an ancestor of w 1 . Consider G i 2 and M i 2 before the augmentation along P 2 . Clearly, in G i 2 there is an alternating path of length two from w 2 to the free leaf u 2 . Thus, by Lemma 5, in G i 1 there is an alternating path Q 1 from w 2 to a free leaf r such that r and w 1 are not in the same subtree of w 2 . Moreover, by choice of e the two matchings M i 1 and M i 2 contain the same edges on the path from w 1 to w 2 in G − X . Hence, there is an alternating path Q 2 from w 1 to w 2 in G i 1 . There is also an alternating path Q 3 from w 1 to the free leaf u 1 in G i 1 (see Line 17). Combining Q 1 , Q 2 , Q 3 gives an augmenting path from u 1 to r in G i 1 ; a contradiction to the maximality of M i 1 (see Observation 4).
Next, consider the case that w 1 = w 2 . By choice of e we have that e = {w 1 , w ′ } with w ′ being a child of w 1 in G i 2 and w ′ ≠ v 2 . Thus, after the augmentation along P 1 the edge e is matched (see Line 21). This is a contradiction to the choice of P 2 and the fact that {w 2 , v 2 } ∈ M i 2 (see Line 17).
Finally, consider the case that w 1 is an ancestor of w 2 . By the choice of e we have that e = {w 2 , w ′ 2 } with w ′ 2 being a child of w 2 in G i 2 and w ′ 2 ≠ v 2 . From the argumentation used in the case w 1 = w 2 above, we can infer that after augmenting P 1 the edge e is not matched, thus e ∉ M i 1 +1 and e ∈ M i 1 . Observe that in G i 2 there is a length-two alternating path from w 2 to the free leaf u 2 . Thus, by Lemma 5, there is an even-length alternating path P from w 2 to a free leaf in G i 1 . Moreover, by the choice of e the (matched) edges of M i 1 and M i 2 agree on the path from w 1 to w 2 , so there is an alternating path from w 1 to w 2 in G i 1 . Combining this path with P and the alternating path from w 1 to the free leaf u 1 (see Line 17) yields an augmenting path in G i 1 ; a contradiction to the maximality of M i 1 (see Observation 4). ◻

Proposition 1 now follows from Lemmas 4 and 6.

Step 5
In this step we reduce the graph in O(kn) time so that at most 2k 2 (2 k + 1) bottommost leaves will remain in the forest G − X . We will restrict ourselves to leaves that are matched with their parent vertex in M G−X and that do not have a sibling. We call these bottommost leaves interesting. Any sibling of a bottommost leaf is by definition also a leaf. Thus, at most one of these leaves (the bottommost leaf or one of its siblings) is matched with respect to M G−X and all other leaves are free. Recall that in the previous step we upper-bounded the number of free leaves with respect to M G−X by k 2 . Hence, there are at most 2k 2 bottommost leaves that are not interesting (each free leaf can itself be a bottommost leaf or be the sibling of a bottommost leaf that is matched to the parent). Our general strategy for this step is to extend the idea behind Reduction Rule 3: We want to keep for each pair of vertices x, y ∈ X at most k different internally vertex-disjoint augmenting paths from x to y. In this step, we only consider augmenting paths of the form x, u, v, y , where v is an interesting bottommost leaf, u is its parent, x ∈ N(u) ∩ X , and y ∈ N(v) ∩ X . Observe that in this case any augmenting path starting with the two vertices x and u has to continue to v and end in a neighbor of v. Thus, the edge {x, u} can only be used in augmenting paths of length three. Furthermore, for different parent vertices u ≠ u ′ the length-three augmenting paths are clearly internally vertex-disjoint. If we do not need the edge {x, u} because we already kept k augmenting paths from x to each neighbor y ∈ N(v) ∩ X , then we can delete {x, u} . Furthermore, if we deleted the last edge from u to X (or u had no neighbors in X in the beginning), then u is a degree-two vertex in G and can be removed by applying Reduction Rule 2. As the child v of u is a leaf in G − X , it follows that v has at most k + 1 neighbors in G. We show below (Lemma 7) that the application of Reduction Rule 2 to remove u takes O(k) time. As we remove at most n vertices, at most O(kn) time is spent on Reduction Rule 2 in this step.
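The structure of these length-three paths can be illustrated as follows (a minimal sketch in Python with our own hypothetical names, not the paper's pseudocode; it merely enumerates the paths x, u, v, y described above):

```python
# Hypothetical illustration: v is an interesting bottommost leaf matched to
# its parent u, so any augmenting path using the edge {x, u} with x in X
# must continue over the matched edge {u, v} and end at some y in N(v) ∩ X.

def length_three_paths(x, u, v, neighbors_of_v, X):
    """All augmenting paths through {x, u}: exactly the paths x, u, v, y."""
    return [(x, u, v, y) for y in neighbors_of_v if y in X and y != x]
```

Since v is a leaf in G − X, it has at most k + 1 neighbors in G, so this enumeration takes O(k) time per edge {x, u}.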
We now show that, after a simple preprocessing, one application of Reduction Rule 2 in the algorithm above can indeed be performed in O(k) time (Lemma 7).

Proof The preprocessing is to simply create a partial adjacency matrix for G with the vertices in X in one dimension and V in the other dimension. This adjacency matrix has size O(kn) and can clearly be computed in O(kn) time.
Now apply Reduction Rule 2 to v. Deleting v takes constant time. To merge u and w iterate over all neighbors of u. If a neighbor u ′ of u is already a neighbor of w, then decrease the degree of u ′ by one, otherwise add u ′ to the neighborhood of w. Then, relabel w to be the new merged vertex uw.
Since u is a leaf in G − X and its only neighbor in G − X , namely v, is deleted, it follows that all remaining neighbors of u are in X. Thus, using the above adjacency matrix, one can check in constant time whether u ′ is a neighbor of w. Hence, the above algorithm runs in O(k) time. ◻

The above ideas are used in Algorithm 2, which we use for this step (Step 5). The algorithm is explained in the proof of the following proposition stating the correctness and the running time of Algorithm 2.
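The merge just analyzed can be sketched as follows (our own data layout and helper names, not the paper's pseudocode; as in Lemma 7, we assume that every remaining neighbor of u lies in X):

```python
# Sketch of the O(k)-time application of Reduction Rule 2 via a partial
# adjacency matrix indexed by X times V. Notation as in Lemma 7's proof:
# v is the degree-two vertex, u and w are its two neighbors, and all
# neighbors of u other than v are assumed to lie in X.

def preprocess(X, V, adj):
    """Partial adjacency matrix: in_X[x][v] iff {x, v} is an edge.
    Size O(k*n), computable in O(k*n) time."""
    return {x: {v: v in adj[x] for v in V} for x in X}

def merge_degree_two(v, u, w, adj, in_X):
    """Delete the degree-two vertex v and merge u into w (w keeps its
    name and plays the role of the merged vertex uw)."""
    adj[u].discard(v)
    adj[w].discard(v)
    del adj[v]                     # deleting v takes constant time
    for x in list(adj[u]):         # O(k) neighbors, all assumed to be in X
        adj[x].discard(u)
        if not in_X[x][w]:         # constant-time adjacency test
            adj[x].add(w)
            adj[w].add(x)
            in_X[x][w] = True      # x is now adjacent to the merged vertex
    del adj[u]
    return w
```

The matrix lookup `in_X[x][w]` replaces a scan over w's (possibly large) neighborhood, which is what brings one application down to O(k) time.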
Proposition 2 Let (G = (V, E), s) be a Matching instance, let X ⊆ V be a feedback vertex set, and let M G−X be a maximum matching for G − X with at most k 2 free vertices in G − X that are all leaves. Then, Algorithm 2 computes in O(kn) time an instance (G ′ , s ′ ) with feedback vertex set X and a maximum matching M G ′ −X in G ′ − X such that the following holds:

-There is a matching of size s in G if and only if there is a matching of size s ′ in G ′ .
-There are at most k 2 free vertices in G ′ − X and they are all leaves.
Proof We start with describing the basic idea of the algorithm. To this end, let {u, v} ∈ E be an edge such that v is an interesting bottommost leaf, that is, v has no siblings and is matched to its parent u by M G−X . Counting for each pair x ∈ N(u) ∩ X and y ∈ N(v) ∩ X one augmenting path in a simple worst-case analysis gives O(k 2 ) time per edge, which is too slow for our purposes. Instead, we count for each pair consisting of a vertex x ∈ N(u) ∩ X and a set Y = N(v) ∩ X one augmenting path. In this way, we know that for each y ∈ Y there is one augmenting path from x to y without iterating through all y ∈ Y . This comes at the price of considering up to k2 k such pairs. However, we will show that we can do the computations in O(k) time per considered edge in G − X . The main reason for this improved running time is a simple preprocessing that allows for a bottommost vertex v to determine N(v) ∩ X in constant time.
The preprocessing is as follows (see Lines 1 to 3): First, fix an arbitrary bijection f between the set of all subsets of X and the numbers {1, 2, … , 2 k } . This can be done for example by representing a set Y ⊆ X = {x 1 , … , x k } by a length-k binary string (a number) whose i th position is 1 if and only if x i ∈ Y . Given a set Y ⊆ X , such a number can be computed in O(k) time in a straightforward way. Thus, Lines 1 to 3 can be performed in O(kn) time. Furthermore, since we assume that k < log n (otherwise the input instance already has at most 2 k vertices), we have that f (Y) < n for each Y ⊆ X . Thus, reading and comparing these numbers can be done in constant time. Furthermore, in Line 3 the algorithm precomputes for each vertex the number corresponding to its neighborhood in X.
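The subset encoding can be sketched as follows (our own helper names; as a sketch we use the range {0, … , 2 k − 1} instead of {1, … , 2 k }, which serves the same purpose):

```python
# Sketch of the set-to-number encoding of the preprocessing: a subset Y of
# X = [x_1, ..., x_k] becomes a k-bit number, so two neighborhoods in X can
# later be compared in constant time.

def encode_subset(Y, X):
    """f(Y): bit i is set iff X[i] is in Y; O(k) time per call."""
    return sum(1 << i for i, x in enumerate(X) if x in Y)

def precompute_codes(X, adj):
    """For each vertex v, the number encoding N(v) ∩ X; O(k*n) time total."""
    Xset = set(X)
    return {v: encode_subset(set(adj[v]) & Xset, X) for v in adj}
```

Under the assumption k < log n these codes fit into a machine word, so storing and comparing them is a constant-time operation, exactly as the text argues.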
After the preprocessing, the algorithm uses a table Tab where it counts an augmenting path from a vertex x ∈ X to a set Y ⊆ X whenever a bottommost leaf v has exactly Y as neighborhood in X and the parent of v is adjacent to x (see Lines 4 to 18). To do this in O(kn) time, the algorithm proceeds as follows: First, it computes in Line 5 the set P which contains all parents of interesting bottommost leaves. Clearly, this can be done in linear time. Next, the algorithm processes the vertices in P. Observe that further vertices might be added to P (see Line 18) during this processing. Let u be the currently processed vertex of P, let v be its child vertex, and let Y be the neighborhood of v in X. For each neighbor x ∈ N(u) ∩ X , the algorithm checks whether there are already k augmenting paths between x and Y with a table lookup in Tab (see Line 10). If not, then the table entry is incremented by one (see Line 11) since u and v provide another augmenting path. If yes, then the edge {x, u} is deleted in Line 13 (we show below that this does not change the maximum matching size). If u has degree two after processing all neighbors of u in X, then, by applying Reduction Rule 2, we can remove u and merge its two neighbors v and w. It follows from Lemma 7 that this application of Reduction Rule 2 can be done in O(k) time. Hence, one iteration of the while loop requires O(k) time and thus Algorithm 2 runs in O(kn) time.
Recall that all vertices in G − X that are free wrt. M G−X are leaves. Thus, the changes to M G−X by applying Reduction Rule 2 in Line 15 are as follows: First, the edge {u, v} is removed and, second, the edge {w, q} is replaced by {vw, q} for some q ∈ V . Hence, the matching M G−X after running Algorithm 2 still has at most k 2 free vertices and all of them are leaves.
It remains to prove that (a) deleting the edge {x, u} in Line 13 does not change the maximum matching size and (b) the resulting instance has at most 2k 2 (2 k + 1) bottommost leaves. We start with (a). Assume toward a contradiction that all maximum matchings for G have to contain {x, u} . Thus, each maximum matching M G for G contains for some y ∈ Y the edge {v, y} .
Observe that Algorithm 2 deletes {x, u} only if there are at least k other interesting bottommost leaves v 1 , … , v k in G − X such that their respective parent is adjacent to x and N(v i ) ∩ X = Y (see Lines 9 to 13). Since |Y| ≤ k , it follows by the pigeonhole principle that at least one of these vertices, say v i , is not matched to any vertex in Y. Thus, since v i is an interesting bottommost leaf, it is matched to its only remaining neighbor: its parent u i in G − X . This implies that there is another maximum matching that does not contain {x, u} ; a contradiction to the assumption that all maximum matchings for G have to contain {x, u}.

We next show (b) that the resulting instance has at most 2k 2 (2 k + 1) bottommost leaves. To this end, recall that there are at most 2k 2 bottommost leaves that are not interesting (see the discussion at the beginning of this subsection). Hence, it remains to upper-bound the number of interesting bottommost leaves. Observe that each parent u of an interesting bottommost leaf has to be adjacent to a vertex in X since otherwise u would have been deleted in Line 15. Furthermore, after running Algorithm 2, each vertex x ∈ X is adjacent to at most k2 k parents of interesting bottommost leaves (see Lines 10 to 13). Thus, the number of interesting bottommost leaves is at most k 2 2 k . Hence, the number of bottommost leaves is upper-bounded by 2k 2 (2 k + 1) . ◻

Step 6
In this subsection, we provide the final step of our kernelization algorithm. Recall that in the previous steps we have upper-bounded the number of bottommost leaves in G − X by O(k 2 2 k ) . We also computed a maximum matching M G−X for G − X such that at most k 2 vertices are free wrt. M G−X and all free vertices are leaves in G − X .
Using this, we next show how to reduce G to a graph of size O(k 3 2 k ) . To this end we need some further notation. A leaf in G − X that is not bottommost is called a pendant. We define T to be the pendant-free tree (forest) of G − X , that is, the tree (forest) obtained from G − X by removing all pendants. The next observation shows that G − X is not much larger than T. This allows us to restrict ourselves to giving an upper bound on the size of T instead of G − X.
Observation 5 Let G − X be as described above with vertex set V⧵X and let T be the pendant-free tree (forest) of G − X with vertex set V T . Then, |V⧵X| ≤ 2|V T | + k 2 .
Proof Observe that V⧵X is the union of all pendants in G − X and V T . Thus, it suffices to show that G − X contains at most |V T | + k 2 pendants. To this end, recall that we have a maximum matching for G − X with at most k 2 free leaves. Thus, there are at most k 2 leaves in G − X that have a sibling which is also a leaf, since of two leaves with the same parent at most one can be matched. Hence, all but at most k 2 pendants in G − X have pairwise different parent vertices. Since all these parent vertices are in V T , it follows that the number of pendants in G − X is at most |V T | + k 2 . ◻

We use the following observation to provide an upper bound on the number of leaves of T.

Observation 6
Let F be a forest, let F ′ be the pendant-free forest of F, and let B be the set of all bottommost leaves in F. Then, the set of leaves in F ′ is exactly B.
Proof First observe that each bottommost leaf of F is a leaf of F ′ since no bottommost leaf is removed and F ′ is a subgraph of F. Thus, it remains to show that each leaf v in F ′ is a bottommost leaf in F.
We distinguish two cases based on whether or not v is a leaf in F: First, assume that v is not a leaf in F. Thus, all of its child vertices have been removed. Since we only remove pendants to obtain F ′ from F and since each pendant is a leaf, it follows that v is in F the parent of one or more leaves u 1 , … , u ℓ . Thus, by definition, all these leaves u 1 , … , u ℓ are bottommost leaves; a contradiction to the fact that they were deleted when creating F ′ .
Second, assume that v is a leaf in F. If v is a bottommost leaf, then we are done. Thus, assume that v is not a bottommost leaf and hence a pendant. However, since we remove all pendants to obtain F ′ from F, it follows that v is not contained in F ′ ; a contradiction. ◻

From Observation 6 it follows that the set B of bottommost leaves in G − X is exactly the set of leaves in T. In the previous step we reduced the graph such that |B| ≤ 2k 2 (2 k + 1) (see Proposition 2). Thus, T has at most 2k 2 (2 k + 1) vertices of degree one and, since T is a tree (a forest), T also has at most 2k 2 (2 k + 1) vertices of degree at least three. Let V 2 T be the vertices of degree two in T and let V ≠2 T be the remaining vertices in T. From the above it follows that |V ≠2 T | ≤ 4k 2 (2 k + 1) . Hence, it remains to upper-bound the size of V 2 T . To this end, we will upper-bound the degree of each vertex in X by O(k 2 2 k ) and then use Reduction Rules 1 and 2. We will check for each edge {x, v} ∈ E with x ∈ X and v ∈ V⧵X whether we "need" it. This check will use the idea from the previous subsection where each vertex in X needs to reach each subset Y ⊆ X at most k times via an augmenting path. Similarly to the previous subsection, we want to keep "enough" of these augmenting paths. However, this time the augmenting paths might be long and different augmenting paths might overlap. To still use the basic approach, we use the following lemma stating that we can nevertheless replace augmenting paths.
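Observation 6 can be checked on small examples with the following sketch (our own encoding of a rooted forest as a child-to-parent map; we read the definitions as: a leaf is bottommost if and only if every child of its parent is a leaf, and every other leaf is a pendant):

```python
# Sketch illustrating Observation 6: the leaves of the pendant-free forest
# are exactly the bottommost leaves of the original forest.

def children(parent):
    """Child lists of a forest given as a child -> parent map (None = root)."""
    ch = {v: [] for v in parent}
    for v, p in parent.items():
        if p is not None:
            ch[p].append(v)
    return ch

def bottommost_and_pendants(parent):
    """Split the leaves into bottommost leaves and pendants."""
    ch = children(parent)
    leaves = {v for v in parent if not ch[v]}
    bottom = {v for v in leaves
              if parent[v] is not None and all(c in leaves for c in ch[parent[v]])}
    return bottom, leaves - bottom

def pendant_free(parent):
    """Remove all pendants; the parent map of the pendant-free forest."""
    _, pendants = bottommost_and_pendants(parent)
    return {v: p for v, p in parent.items() if v not in pendants}
```

In the small example below, leaf a is a pendant (its sibling b is not a leaf), while d and e are bottommost; after removing a, the leaves of the remaining tree are exactly d and e.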

Lemma 8
Let M G−X be a maximum matching in the forest G − X . Let P uv be an augmenting path for M G−X in G from u to v. Let P wx , P wy , and P wz be three internally vertex-disjoint augmenting paths from w to x, y, and z, respectively, such that P uv intersects all of them. Then, there exist two vertex-disjoint augmenting paths with endpoints u, v, w, and one of the three vertices x, y, and z.
Proof Label the vertices in P uv alternately as odd or even with respect to P uv so that no two consecutive vertices have the same label, u is odd, and v is even. Analogously, label the vertices in P wx , P wy , and P wz as odd and even with respect to P wx , P wy , and P wz , respectively, so that w is always odd. Since all these paths are augmenting, it follows that each edge from an even vertex to its succeeding odd vertex is in the matching M G−X and each edge from an odd vertex to its succeeding even vertex is not in the matching. Observe that P uv intersects each of the other paths in at least two consecutive vertices, since every second edge must be an edge in M G−X . Since G − X is a forest and all vertices in X are free with respect to M G−X , it follows that the intersection of two augmenting paths is connected and thus a path. Since P uv intersects the three augmenting paths from w, it follows that at least two of these paths, say P wx and P wy , have a "fitting parity", that is, in the intersections of P uv with P wx and with P wy the even vertices with respect to P uv are either even or odd with respect to both P wx and P wy .
Assume without loss of generality that in the intersections of the paths the vertices have the same label with respect to the three paths (if the labels differ, then revert the ordering of the vertices in P uv , that is, exchange the names of u and v and change all labels on P uv to their opposite). Denote with v 1 s and v 1 t the first and the last vertex in the intersection of P uv and P wx . Analogously, denote with v 2 s and v 2 t the first and the last vertex in the intersection of P uv and P wy . Assume without loss of generality that P uv intersects first with P wx and then with P wy . Observe that v 1 s and v 2 s are even vertices and v 1 t and v 2 t are odd vertices since the intersections have to start and end with edges in M G−X (see Fig. 2 for an illustration). For an arbitrary path P and for two arbitrary vertices p 1 , p 2 of P, denote by p 1 − P − p 2 the subpath of P from p 1 to p 2 . Observe that u − P uv − v 1 t − P wx − x and w − P wy − v 2 t − P uv − v are vertex-disjoint augmenting paths. ◻

Fig. 2 The situation in the proof of Lemma 8. The augmenting path from u to v intersects the two augmenting paths P wx and P wy from w to x and y, respectively. Bold edges indicate edges in the matching, dashed edges indicate odd-length alternating paths starting with the first and last edge not being in the matching. The gray paths in the background highlight the different augmenting paths: the initial paths from u to v, w to x, and w to y as well as the new paths from u to x and w to v as postulated by Lemma 8

Fig. 3 Illustration of the graph exploration of the function Keep-Edge in Algorithm 3: The vertices x and y are vertices in the feedback vertex set X. The vertices v 1 , … , v 10 are part of G − X where v 10 is a free leaf. The matching M G−X is denoted by the thick edges. Three alternating paths are highlighted; each path represents an exploration of Keep-Edge from x that returns true: First, the path via v 3 ends in v 1 , a vertex with degree more than two in G − X (see Line 14). The second path via v 4 ends in v 5 , a vertex connected to two vertices in X (here we assume that there are fewer than 6k 2 paths from x to vertices adjacent to y and z). The third path via v 8 ends in the free leaf v 10 (see Line 14)

Algorithm description. We now provide the algorithm for Step 6 (see Algorithm 3 for pseudocode). The algorithm uses the same preprocessing (see Lines 1 to 3) as Algorithm 2. Thus, the algorithm can determine whether two vertices have the same neighborhood in X in constant time. As in Algorithm 2, Algorithm 3 uses a table Tab which has an entry for each vertex x ∈ X and each set Y ⊆ X . The table is filled in such a way that the algorithm detected for each y ∈ Y at least Tab [x, Y] internally vertex-disjoint augmenting paths from x to y. The main part of the algorithm is the boolean function 'Keep-Edge' in Lines 13 to 22, which decides whether to delete an edge {x, v} for v ∈ V⧵X and x ∈ X . The function works as follows for an edge {x, v} : Starting at v, the graph is explored along possible augmenting paths until a "reason" for keeping the edge {x, v} is found or no further exploration is possible (see Fig. 3 for an illustration).

If the vertex v is free wrt. M G−X , then {x, v} is an augmenting path and we keep {x, v} (see Line 14). Observe that in Step 4 (see Proposition 1) we upper-bounded the number of free vertices by k 2 and all these vertices are leaves. Thus, we keep a bounded number of edges incident to x because the corresponding augmenting paths can end at a free leaf. We provide the exact bound below when discussing the size of the graph returned by Algorithm 3. In Line 14, the algorithm also stops exploring the graph and keeps the edge {x, v} if v has degree at least three in T. The reason is to keep the graph exploration simple by following only degree-two vertices in T. This ensures that the running time for exploring the graph from x does not exceed O(n).
Since the number of vertices in T with degree at least three is bounded (see discussion after Observation 6), it follows that only a bounded number of such edges {x, v} are kept.
If v is not free wrt. M G−X , then it is matched with some vertex w. If w is adjacent to some leaf u in G − X that is free wrt. M G−X , then the path x, v, w, u is an augmenting path. Thus, the algorithm keeps in this case the edge {x, v} , see Line 16. Again, since the number of free leaves is bounded, only a bounded number of edges incident to x will be kept. If w has degree at least three in T, then the algorithm stops the graph exploration here and keeps the edge {x, v} , see Line 16. Again, this is to keep the running time at O(kn) overall.
Let Y ⊆ X denote the neighborhood of w in X. Thus the partial augmenting path x, v, w can be extended to each vertex in Y. Thus, if the algorithm did not yet find 6k 2 paths from x to vertices whose neighborhood in X is also Y, then the table entry Tab [x, f X (w)] (where f X (w) encodes the set Y = N(w) ∩ X ) is increased by one and the edge {x, v} will be kept (see Lines 18 and 19). (Here we need 6k 2 paths since these paths might be long and intersect with many other augmenting paths, see proof of Proposition 3 for the details of why 6k 2 is enough.) If the algorithm already found 6k 2 "augmenting paths" from x to Y, then the neighborhood of w in X is irrelevant for x and the algorithm continues.
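The bounded counting of Lines 18 and 19 can be sketched as follows (a minimal sketch with our own hypothetical names; `limit` stands in for the threshold 6k 2 and `code` for the precomputed number f X (w)):

```python
# Minimal sketch of the bounded counting in Lines 18 and 19: keep the edge
# (return True) only while fewer than `limit` paths from x to the
# neighborhood class encoded by `code` have been recorded.

def record_path(tab, x, code, limit):
    if tab.get((x, code), 0) < limit:
        tab[(x, code)] = tab.get((x, code), 0) + 1
        return True       # keep the edge {x, v}
    return False          # threshold reached: the edge may be deleted
```

Because `code` is a precomputed machine-word-sized number, each check-and-increment is a constant-time operation, which is what keeps one iteration of the exploration cheap.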
In Line 20, none of the above cases for keeping the edge {x, v} applies, and the algorithm extends the partial augmenting path x, v, w by considering the neighbors of w except v. Since the algorithm dealt with possible extensions to vertices in X in Lines 17 and 19 and with extensions to free vertices in G − X in Line 14, it follows that the next vertex on this path has to be a vertex u that is matched wrt. M G−X . Furthermore, since we want to extend a partial augmenting path from x, we require that u is not adjacent to x: otherwise the length-one path x, u would be another, shorter partial augmenting path from x to u and we would not need the currently stored partial augmenting path.
Statements on Algorithm 3. To show that Algorithm 3 indeed performs Step 6, we need further lemmas. For each edge {x, z} with x ∈ X and z ∈ V⧵X we denote by P(x, z) the induced subgraph of G − X on the vertices that are explored in the function Keep-Edge when called in Line 9 with x and z. More precisely, we initialize P(x, z) ∶= ∅ . Whenever the algorithm reaches Line 14, we add v to P(x, z). Furthermore, whenever the algorithm reaches Line 17, we add w to P(x, z). Similarly, when the recursive call in Line 21 returns true, then we add u to P(x, z) in the recursive call (with u taking the role of v).
We next show that P(x, z) is a path with at most one additional pendant.

Lemma 9

Let x ∈ X and z ∈ V⧵X be two vertices such that {x, z} ∈ E . Then, P(x, z) is either a path or a tree with exactly one vertex z ′ having more than two neighbors in P(x, z). Furthermore, z ′ has degree exactly three and z is a neighbor of z ′ .

Proof
We first show that all vertices in P(x, z) except z and its neighbor z ′ have degree at most two in P(x, z). Observe that having more vertices than z and z ′ in P(x, z) requires Algorithm 3 to reach Line 20. Let w be the currently last vertex when Algorithm 3 continues the graph exploration in Line 20. Observe that the algorithm therefore dealt with the case that w has degree at least three in the pendant-free tree T in Line 16. Thus, w is either a pendant leaf in G − X or w ∉ V ≥3 T (that is, w has degree at most two in T). In the first case, there is no candidate to continue and the graph exploration stops. In the second case, w has degree at most two in T. We next show that any candidate u for continuing the graph exploration in Line 21 is not a leaf in G − X . Assume toward a contradiction that u is a leaf in G − X . Since the parent w of u is matched with some vertex v ≠ u (this is how w is chosen, see Line 15), it follows that u is not matched. This implies that the function 'Keep-Edge' would have returned true in Line 16 and would not have reached Line 20, a contradiction. Thus, the graph exploration follows only vertices in T. Furthermore, the above argumentation implies that w is not adjacent to a leaf unless this leaf is its predecessor v in the graph exploration.
We now have two cases: Either w is not adjacent to a leaf in G − X or v = z is a leaf and w = z ′ is its matched neighbor. In the first case, w has at most one neighbor u ≠ v since w ∉ V ≥3 T . Hence, w has degree two in P(x, z). In the second case, w = z ′ has at most two neighbors u ≠ v and u ′ ≠ v . Thus, z ′ has degree at most three. ◻

For x ∈ X let P x be the set of all induced subgraphs that Algorithm 3 explores from x.

Lemma 10
There exists a partition of P x into P x = P A x ∪ P B x such that the graphs within P A x and the graphs within P B x are pairwise vertex-disjoint.
Proof Since G − X is a tree (or forest), G − X is also bipartite. Let A and B be its two color classes (so A ∪ B = V⧵X ). We define the two parts P A x and P B x as follows: a subgraph of P x belongs to P A x if its first vertex (the neighbor of x at which the exploration starts) is in A, and to P B x otherwise. We show that all subgraphs in P A x and in P B x are pairwise vertex-disjoint. To this end, assume toward a contradiction that two graphs P, Q ∈ P A x share some vertex. (The case P, Q ∈ P B x is completely analogous.) Let p 1 and q 1 be the first vertices of P and Q, respectively, that is, p 1 and q 1 are adjacent to x in G. Observe that p 1 ≠ q 1 . Let u ≠ x be the first vertex that is in P and in Q. By Lemma 9, P and Q are paths or trees with at most one vertex of degree more than two, and this vertex has degree three and is the neighbor of p 1 or q 1 , respectively. This implies together with p 1 , q 1 ∈ A that either u = p 1 or u = q 1 . Assume without loss of generality that u = p 1 . Since p 1 ∈ A and q 1 ∈ A and u is a vertex in Q, it follows that Algorithm 3 followed u in the graph exploration from q 1 in Line 21. However, this is a contradiction since the algorithm checks in Line 20 whether the new vertex u in the path is not adjacent to x. Thus, all subgraphs in P A x and in P B x are pairwise vertex-disjoint. ◻

We next show that if Tab [x, f (Y)] = 6k 2 for some x ∈ X and Y ⊆ X (recall that f maps Y to a number, see Line 1), then there exist at least 3k 2 internally vertex-disjoint augmenting paths from x to Y.

Lemma 11
If in Line 17 of Algorithm 3 it holds for x ∈ X and Y ⊆ X that Tab [x, f (Y)] = 6k 2 , then there exist in G wrt. M G−X at least 3k 2 alternating paths from x to vertices v 1 , … , v 3k 2 such that all these paths are pairwise vertex-disjoint (except for x) and N(v i ) ∩ X = Y for all i.

Proof Note that each time Tab [x, f (Y)] is increased by one (see Line 18), the algorithm found a vertex w such that there is an alternating path P from x to w and N(w) ∩ X = Y . Furthermore, since the function Keep-Edge returns true in this case, the edge from x to its neighbor on P is not deleted in Line 10. Thus, there exist at least 6k 2 alternating paths from x to vertices whose neighborhood in X is exactly Y. By Lemma 10, it follows that at least half of these 6k 2 paths are vertex-disjoint. ◻

The following proposition (Proposition 3) states that Algorithm 3 is correct and runs in O(kn) time.

Proof We split the proof into three claims: one for the correctness of the algorithm, one for the returned kernel size, and one for the running time.

Claim 1 The input instance (G, s) is a yes-instance if and only if the instance (G � , s � ) produced by Algorithm 3 is a yes-instance.
Proof Observe that the algorithm changes the input graph only in two lines: Lines 10 and 11. By Lemma 1, applying Reduction Rules 1 and 2 yields an equivalent instance. Thus, it remains to show that deleting the edges in Line 10 is correct, that is, it does not change the size of a maximum matching. To this end, observe that deleting edges does not increase the size of a maximum matching. Thus, we need to show that the size of the maximum matching does not decrease. Assume toward a contradiction that it does. Let {x, v} be the edge whose deletion decreased the maximum matching size. Redefine G to be the graph before the deletion of {x, v} and G ′ to be the graph after the deletion of {x, v} . Recall that Algorithm 3 gets as additional input a maximum matching M G−X for G − X . Recall (see Sect. 2) that since P is a path in G M , it follows that P is an augmenting path for M G−X . Since all vertices in X are free wrt. M G−X , it follows that all vertices in P except the endpoints are in V⧵X . Let z be the second endpoint of this path P. We call a vertex on P an even (odd) vertex if it has an even (odd) distance to x on P.
(So x is an even vertex and v and z are odd vertices.) Observe that v is the only odd vertex in P adjacent to x: Otherwise there would be another augmenting path from x to z which only uses vertices from P. This would imply the existence of another maximum matching that does not use {x, v} ; a contradiction.
Let u be the neighbor of z in P. Since no odd vertex on P except v is adjacent to x, it follows that the graph exploration in the function Keep-Edge starting from x and v in Line 9 either reached u or returned true before. If z ∈ V⧵X , then in both cases the function Keep-Edge would have returned true in Line 9 and Algorithm 3 would not have deleted {x, v} ; a contradiction. Thus, assume that z ∈ X . Therefore, the function Keep-Edge considered the vertex u in Line 17 but did not keep the edge {x, v} . Thus, when considering u, it holds that Tab [x, f (Y)] = 6k 2 for Y = N(u) ∩ X . By Lemma 11, it follows that there are 3k 2 pairwise vertex-disjoint (except for x) alternating paths from x to vertices u 1 , … , u 3k 2 with N(u i ) ∩ X = Y . Thus, there is a set Q of 3k 2 internally vertex-disjoint paths from x to z in G. If one of the paths Q ∈ Q does not intersect any path in G M , then reverting the augmentation along P and augmenting along Q would result in another maximum matching not containing {x, v} ; a contradiction. Thus, assume that each path in Q intersects at least one path in G M .
For each two paths Q 1 , Q 2 ∈ Q that intersect the same path P ′ in G M it holds that each further path P ′′ in G M can intersect at most one of Q 1 and Q 2 : Assume toward a contradiction that P ′′ does intersect both Q 1 and Q 2 . Since no path in G M except P contains x and z it follows that all intersections between the paths are within G − X . Since P ′ and P ′′ are vertex-disjoint and Q 1 and Q 2 are internally vertex-disjoint, it follows that there is a cycle in G − X , a contradiction to the fact that X is a feedback vertex set.
Since 3k 2 > 3k + k 2 , it follows from the pigeonhole principle that there is a path P ′ ∈ G M that intersects at least three paths Q 1 , Q 2 , Q 3 ∈ Q such that no further path in G M intersects them. We can now apply Lemma 8 and obtain two vertex-disjoint augmenting paths Q and Q ′ . Thus, reverting the augmentation along P and P ′ and augmenting along Q and Q ′ yields another maximum matching for G which does not contain {x, v} ; a contradiction. ◻

Claim 2 The graph G ′ returned by Algorithm 3 has O(k 3 2 k ) vertices and edges.
Proof We first show that each vertex x ∈ X has degree O(k 2 2 k ) in G ′ . To this end, we need to count the number of neighbors v ∈ N(x)⧵X where the function Keep-Edge returns true in Line 9. By Lemma 9, the function Keep-Edge explores the graph along one or two paths (essentially growing from one starting point into two directions). Recall that P x denotes the subgraphs induced by the graph exploration of Keep-Edge for the neighbors of x. By Lemma 10 there is a partition of P x into P A x and P B x such that within each part the subgraphs are pairwise vertex-disjoint. We consider the two parts independently. We start with bounding the number of graphs in P A x where the function 'Keep-Edge' returned true (the analysis is completely analogous for P B x ).
Since all explored subgraphs are disjoint and all free vertices in G − X wrt. M G−X are leaves, it follows that Algorithm 3 returned at most k 2 times true in Line 16 due to w being adjacent to a free leaf in G − X . Also, the algorithm returns at most k 2 times true in Line 14 due to v being free. Furthermore, the algorithm returns at most 6k 2 ⋅ 2 k times true in Line 19. Finally, we show that the algorithm returns at most 8k 2 (2 k + 1) times true in Lines 14 and 16 combined due to v or w being a vertex in V ≥3 T . It follows from the discussion below Observation 6 that T, the pendant-free tree of G − X , has at most 2k 2 (2 k + 1) leaves (denoted by V 1 T ) and 2k 2 (2 k + 1) vertices of degree at least three (denoted by V ≥3 T ). Let V T be the vertices of T. Since T is a tree (or forest), it has more vertices than edges and hence ∑ v∈V T deg(v) ≤ 2|V T | − 2 , which implies |V ≥3 T | < |V 1 T | . Thus, Algorithm 3 returns at most 2 ⋅ |V ≥3 T | + |V 1 T | < 6k 2 (2 k + 1) times true in Line 16 due to w being a vertex in V ≥3 T . Also, Algorithm 3 returns at most |V ≥3 T | ≤ 2k 2 (2 k + 1) times true in Line 14 due to v being a vertex in V ≥3 T . Summarizing, considering the graph explorations in P A x , Algorithm 3 returns O(k 2 2 k ) times true in the function Keep-Edge. Analogously, considering the graph explorations in P B x , Algorithm 3 also returns O(k 2 2 k ) times true. Hence, each vertex x ∈ X has degree O(k 2 2 k ) in G ′ .

We now show that the exhaustive application of first Reduction Rule 1 and then Reduction Rule 2 indeed results in a kernel of the claimed size. To this end, denote with V 1 G ′ −X , V 2 G ′ −X , and V ≥3 G ′ −X the vertices that have degree one, two, and at least three in G ′ − X . We have |V 1 G ′ −X | ∈ O(k 3 2 k ) since each vertex in X has degree O(k 2 2 k ) and G ′ is reduced wrt. Reduction Rule 1. Next, since G ′ − X is a forest (or tree), we have |V ≥3 G ′ −X | < |V 1 G ′ −X | ∈ O(k 3 2 k ) . Finally, each degree-two vertex in G ′ needs at least one neighbor of degree at least three since G ′ is reduced with respect to Reduction Rule 2.
Thus, each vertex in V^2_{G′−X} is either adjacent to a vertex in X or adjacent to one of the O(k^3 ⋅ 2^k) vertices in G′ − X that have degree at least three. Thus, |V^2_{G′−X}| ∈ O(k^3 ⋅ 2^k). Summarizing, G′ contains O(k^3 ⋅ 2^k) vertices and edges.
Applying Reduction Rule 1 in O(n + m) time is straightforward, and Bartha and Kresz [2] showed how to apply Reduction Rule 2 in O(n + m) time. Thus, it remains to show that each iteration of the foreach-loop in Line 7 can be done in O(n) time. By Lemma 10, the graphs P_x explored from x can be partitioned into two parts such that within each part all subgraphs are vertex-disjoint. Thus, each vertex in G − X is visited at most twice during the execution of the function Keep-Edge. Furthermore, observe that in Lines 17 and 18 the table can be accessed in constant time. Moreover, the function Keep-Edge checks only once whether a vertex in V⧵X has a neighbor in X, namely in Line 20; this single check can be done in constant time. Since the rest of the computation is done on G − X, which has fewer than |V⧵X| edges, it follows that each iteration of the foreach-loop in Line 7 can indeed be done in O(n) time.
This completes the proof of Proposition 3. ◻

This completes the description of Step 6. Combining Steps 1 to 6, we obtain our kernelization algorithm for the parameter feedback vertex number.

Theorem 2 Matching parameterized by the feedback vertex number k admits a kernel of size 2^{O(k)}. It can be computed in O(kn) time.
Proof First, using the linear-time factor-four approximation of Bar-Yehuda et al. [1], we compute an approximate feedback vertex set X with |X| ≤ 4k. Then, we apply Steps 1 to 6 using Algorithms 1 to 3. By Propositions 1 to 3, this can be done in O(kn) time and results in a kernel of size O((4k)^3 ⋅ 2^{4k}) ⊆ 2^{O(k)}. ◻

Kernelization for Matching on Bipartite Graphs
In this section, we investigate the possibility of efficient and effective preprocessing for Bipartite Matching. More specifically, we show a linear-time computable polynomial-size kernel with respect to the parameter distance to chain graphs. In the first part of this section, we provide the definition of chain graphs and describe how to compute the parameter. In the second part, we discuss the kernelization algorithm.
Definition and computation of the parameter. We first define chain graphs, which form a subclass of bipartite graphs with special monotonicity properties.

Definition 1 [4] Let G = (A, B, E) be a bipartite graph. Then G is a chain graph if each of its two color classes A, B admits a linear ordering wrt. neighborhood inclusion, that is, an ordering a_1 ≺ a_2 ≺ ⋯ of A with N(a_1) ⊆ N(a_2) ⊆ ⋯ (and analogously for B).
Observe that if the graph G contains twins, then there is more than one linear ordering wrt. neighborhood inclusion. To avoid ambiguities, we fix for the vertices of the color class A (resp. B) in a chain graph G = (A, B, E) one such ordering ≺_A (resp. ≺_B), so that v ≺ w implies N(v) ⊆ N(w). In the remainder of the section we consider a bipartite representation of a given chain graph G = (A, B, E) where the vertices of A (resp. B) are ordered according to ≺_A (resp. ≺_B) from left to right (resp. from right to left), as illustrated in Fig. 4.
For simplicity of notation, we use in the following ≺ to denote the orderings ≺_A and ≺_B whenever the color class is clear from the context. Note that we use the directions left/right to indicate the ordering ≺. That is, for a vertex a′ ∈ A to the right (left) of a ∈ A we have a ≺ a′ (a′ ≺ a); in contrast, for a vertex b′ ∈ B to the right (left) of b ∈ B we have b′ ≺ b (b ≺ b′), since B is ordered from right to left. We next show that there is a constant-factor approximation, running in linear time, for the parameter and the corresponding vertex subset. To this end, we use the following characterization of chain graphs. Here, 2K_2 denotes the 1-regular graph on four vertices (two disjoint edges).
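Definition 1 can be checked directly: neighborhood inclusion forces a degree ordering, and in a chain graph equal degrees imply equal neighborhoods, so sorting one color class by degree and verifying consecutive inclusions suffices (checking one class is enough, since in a bipartite graph an inclusion ordering of A implies one of B). A small sketch under these assumptions; the function name is ours, not from the paper:

```python
def is_chain_graph(A, B, edges):
    """Check whether the bipartite graph (A, B, edges) is a chain graph,
    i.e. whether the neighborhoods of color class A are linearly
    ordered by inclusion."""
    nbrs = {a: set() for a in A}
    for a, b in edges:
        nbrs[a].add(b)
    # Sorting A by degree yields a candidate inclusion ordering: in a
    # chain graph, N(a) must be contained in N(a') for consecutive a, a'.
    order = sorted(A, key=lambda a: len(nbrs[a]))
    return all(nbrs[order[i]] <= nbrs[order[i + 1]]
               for i in range(len(order) - 1))
```

For example, a single induced 2K_2 (two disjoint edges) already makes the check fail.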

Lemma 13 There is a linear-time factor-4 approximation for the problem of deleting a minimum number of vertices in a bipartite graph in order to obtain a chain graph.
Proof Let G = (A, B, E) be a bipartite graph. We compute a set S ⊆ A ∪ B such that G − S is a chain graph and S is at most four times larger than a minimum-size such set. The algorithm iteratively tries to find a 2K_2 and deletes the four corresponding vertices until no further 2K_2 is found. Since in each 2K_2, by Lemma 12, at least one vertex needs to be removed, the algorithm yields the claimed factor-4 approximation.

Fig. 4 A chain graph. Note that the ordering ≺_A of the vertices in A goes from left to right while the ordering ≺_B of the vertices in B goes from right to left. The reason for these two orderings being drawn in different directions is that a maximum matching can be drawn as parallel edges, see e. g. the bold edges. In fact, Algorithm 4 computes such matchings with the matched edges being parallel to each other.
The details of the algorithm are as follows: First, it initializes S = ∅ and sorts the vertices in A and in B by their degree; the vertices in A = {a_1, …, a_{|A|}} in increasing order and the vertices in B = {b_1, …, b_{|B|}} in decreasing order, that is, deg(a_1) ≤ … ≤ deg(a_{|A|}) and deg(b_1) ≥ … ≥ deg(b_{|B|}). Since the degree of each vertex is at most max{|A|, |B|}, this can be done in linear time with, e. g., Bucket Sort. At any stage the algorithm deletes all vertices of degree zero and all vertices which are adjacent to all vertices in the other color class. The deleted vertices are not added to S since these vertices cannot participate in a 2K_2. Next, the algorithm iteratively processes the vertices in A in nondecreasing order of their degrees. Let a ∈ A be a minimum-degree vertex and let b ∈ B be a neighbor of a. Since b is not adjacent to all vertices in A (otherwise b would have been deleted), there is a vertex a′ ∈ A that is not adjacent to b. Since deg(a) ≤ deg(a′), it follows that a′ has a neighbor b′ that is not adjacent to a. Hence, the four vertices a, a′, b, b′ induce only the two edges {a, b} and {a′, b′} and thus form a 2K_2. Thus, the algorithm adds these four vertices to S, deletes them from the graph, and continues with a vertex in A that has minimum degree.
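The deletion procedure just described can be sketched as follows. This reference version ignores the bucket sorting and the data structures needed for the O(n + m) bound (so it is not linear-time), and all names are ours:

```python
def chain_deletion_approx(A, B, edges):
    """Greedy factor-4 approximation: repeatedly find an induced 2K2
    and delete its four vertices; returns the deletion set S."""
    nbrs = {v: set() for v in list(A) + list(B)}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    A, B, S = set(A), set(B), set()

    def remove(v):
        for u in nbrs[v]:
            nbrs[u].discard(v)
        nbrs[v] = set()
        (A if v in A else B).discard(v)

    while True:
        # Degree-zero vertices and vertices adjacent to the whole other
        # class cannot lie in a 2K2; drop them without adding them to S.
        trivial = [v for v in A | B
                   if not nbrs[v] or nbrs[v] == (B if v in A else A)]
        if trivial:
            for v in trivial:
                remove(v)
            continue
        if not A:
            return S
        a = min(A, key=lambda v: len(nbrs[v]))        # min-degree vertex in A
        b = next(iter(nbrs[a]))                       # any neighbor of a
        a2 = next(v for v in A if b not in nbrs[v])   # exists: b not universal
        b2 = next(v for v in nbrs[a2] if v not in nbrs[a])  # deg(a) <= deg(a2)
        for v in (a, b, a2, b2):                      # these induce a 2K2
            S.add(v)
            remove(v)
```

On a graph that is already a chain graph the trivial deletions exhaust all vertices and the returned set is empty.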
As to the running time, we now show that, after the initial sorting, the algorithm considers each edge only twice: Selecting a and b as described above can be done in O(1) time. To select a′, the algorithm simply iterates over all vertices in A until it finds a vertex that is not adjacent to b; in this way at most deg(b) + 1 vertices are considered. Similarly, by iterating over the neighbors of a′, one finds b′. Hence, the edges incident to a, a′, b, and b′ are used once to find the vertices and a second time when these vertices are deleted. Thus, using appropriate data structures, the algorithm runs in O(n + m) time. ◻

Kernelization overview. In the rest of this section, we provide a linear-time computable kernel for Bipartite Matching with respect to the parameter vertex deletion distance k to chain graphs. On a high level, our kernelization algorithm consists of two steps: First, we upper-bound by O(k) the number of neighbors of each vertex in the deletion set. Second, we mark O(k^2) special vertices and use the monotonicity properties of chain graphs to upper-bound the number of vertices that lie between any two consecutive marked vertices, thus bounding the total size of the reduced graph by O(k^3) vertices.
Step 1. Let G = (A, B, E) be the bipartite input graph, where V = A ∪ B, and let X ⊆ V be a vertex subset such that G − X is a chain graph. By Lemma 13, we can compute an approximate X in linear time. Our kernelization algorithm uses a specific maximum matching M_{G−X} ⊆ E in G − X, computed with Algorithm 4, in which all edges of M_{G−X} are "parallel" and all matched vertices are consecutive in the orderings ≺_A and ≺_B, see also Fig. 4. Since matching is linear-time solvable in convex graphs [25] and convex graphs are a superclass of chain graphs, this can be done in O(n + m) time. We use M_{G−X} in our kernelization algorithm to obtain some local information about possible augmenting paths. For example, each augmenting path has at least one endpoint in X. Turning this into a data reduction rule, with s denoting the sought matching size, yields the following.

Reduction Rule 4
If |M_{G−X}| ≥ s, then return a trivial yes-instance; if s > |M_{G−X}| + k, then return a trivial no-instance.
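Given |M_{G−X}|, s, and k, Reduction Rule 4 is a constant-time check; a minimal sketch (function name ours):

```python
def apply_reduction_rule_4(m_gx_size, s, k):
    """Constant-time check of Reduction Rule 4.
    Returns "yes"/"no" for a trivially decided instance,
    or None if the instance must be processed further."""
    if m_gx_size >= s:
        return "yes"   # M_{G-X} itself certifies a matching of size s
    if s > m_gx_size + k:
        return "no"    # each augmenting path needs an endpoint in X
    return None
```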
The correctness of Reduction Rule 4 follows from Observation 2. We will show next that there is a maximum matching M_G for G in which each vertex in X is either matched with another vertex in X or with a "small-degree vertex" in G − X. This means that an augmenting path starting at some vertex in X will "enter" the chain graph G − X at a small-degree vertex. We now formalize this consideration.

Lemma 14 There is a maximum matching M_G for G such that every vertex x ∈ X is either matched to a vertex in X or to a vertex in N^{small}_{V⧵X}(x).

Proof Assume, towards a contradiction, that there is no such matching M_G. Let M′_G be a maximum matching for G that maximizes the number of vertices x ∈ X that are matched to a vertex in N^{small}_{V⧵X}(x). ◻

Based on Lemma 14, we can provide our next data reduction rule.
Reduction Rule 5 Let (G, s) be an instance reduced with respect to Reduction Rule 4 and let x ∈ X. Then delete all edges between x and V⧵N^{small}_{V⧵X}(x).
Clearly, Reduction Rule 5 can be exhaustively applied in O(n + m) time by one iteration over A and B in the ordering ≺.
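Algorithm 4 and the linear-time convex-graph matching algorithm [25] are not reproduced here. As a stand-in for experimentation, any maximum bipartite matching algorithm yields a maximum matching of G − X, though not necessarily with the "parallel" structure of Algorithm 4; a compact sketch using Kuhn's augmenting-path method (O(|V| ⋅ |E|), names ours):

```python
def max_bipartite_matching(A, B, edges):
    """Kuhn's augmenting-path algorithm. A slow but simple stand-in for
    the linear-time chain/convex-graph matching (Algorithm 4)."""
    nbrs = {a: [] for a in A}
    for a, b in edges:
        nbrs[a].append(b)
    match = {}  # b -> a

    def try_augment(a, seen):
        # Try to match a, recursively re-matching blocked partners.
        for b in nbrs[a]:
            if b in seen:
                continue
            seen.add(b)
            if b not in match or try_augment(match[b], seen):
                match[b] = a
                return True
        return False

    for a in A:
        try_augment(a, set())
    return {(a, b) for b, a in match.items()}
```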
Step 2. For the second step of our kernelization algorithm, we first mark a set K of O(k^2) vertices that are kept in the graph (and thus will end up in the kernel): Keep all vertices of X. For each vertex x ∈ X keep all vertices in N^{small}_{V⧵X}(x) and, if a kept vertex is matched wrt. M_{G−X}, then keep also the vertex with which it is matched. Observe that exhaustively applying Reduction Rule 5 ensures that K is of size at most 2k^2.
Next, we use the monotonicity properties of the chain graph to show that it suffices to keep for each vertex v ∈ K at most k vertices to the right and to the left of v. Consider an augmenting path P = x, a_1, b_1, …, a_ℓ, b_ℓ, y from a vertex x ∈ B ∩ X to a vertex y ∈ A ∩ X. Observe that if a_1 ≺ a_ℓ, then also {b_1, a_ℓ} ∈ E and thus P′ = x, a_1, b_1, a_ℓ, b_ℓ, y is an augmenting path (see Fig. 5 for a visualization). Furthermore, the vertices of the augmenting path P′ are a subset of K ∪ X and, thus, by keeping these vertices (and the edges between them), we also keep the augmenting path P′ in our kernel. Hence, it remains to consider the more complicated case that a_ℓ ≺ a_1 (see Fig. 6). To this end, we next show that in certain "areas" of the chain graph G − X the number of augmenting paths "passing through" such an area is upper-bounded. To specify an "area", we need the following definition.

Definition 2 Let G = (A, B, E) be a chain graph and let M be a matching in G. Furthermore, let a ∈ A and b ∈ B with {a, b} ∈ M. Then #lmv(b, M) (resp. #rmv(a, M)) is the number of neighbors of b to the left of a (resp. the number of neighbors of a to the right of b).

With these definitions, we can show a limit on the number of augmenting paths that can "cross" an edge in M_{G−X} (the symmetric statement has the roles of a and b switched). Let #aug denote the number of vertex-disjoint alternating paths from {a′ ∈ A | a′ ≺ a} to {b′ ∈ B | b′ ≺ b} such that the first and the last edge are not in M (see Fig. 6 for an example with #aug = 1 for a = a_4 and b = b_4). Furthermore, let a^b_1, …, a^b_{#lmv(b,M)} be the neighbors of b that are to the left of a. Lemma 16 states that the number of augmenting paths passing through the "area" between a_1 and a_2 is bounded. Using this, we want to replace this area by a gadget with O(k) vertices. To this end, we need further notation: for each kept vertex v ∈ K, we may also keep some bounded number of vertices to the right and to the left of v. We call these vertices the left buffer (right buffer) of v.
Note that in Definition 3 each of the sets B^r(a_1, M), B^ℓ(a_2, M), B^r(b_1, M), and B^ℓ(b_2, M) depends on all four vertices a_1, a_2, b_1, b_2; we omit these dependencies from the names for the sake of brevity.
The basic idea is now to delete vertices "outside" these buffers. See Fig. 7 for an illustrating example of the following data reduction rule formalizing this idea.

Reduction Rule 6
Let (G, s) be an instance reduced with respect to Reduction Rule 4. Let a_1, a_2 ∈ K ∩ A with a_1 ≺ a_2 and {a_1, b_1}, {a_2, b_2} ∈ M_{G−X} such that A′ ∶= {a ∈ A | a_1 ≺ a ≺ a_2} is of size at least 2 ⋅ min{#lmv(b_1, b_2, M), k} + 1 and A′ ∩ K = ∅. Then delete all vertices between a_1 and a_2 and between b_1 and b_2 that do not belong to the buffers of a_1, a_2, b_1, and b_2, and add all edges between B^r(a_1, M_{G−X}) and B^ℓ(b_2, M_{G−X}).

Since we assumed some vertices of P^M_i to be in V⧵V′, it follows that at least one vertex v_i of P^M_i was deleted; furthermore, since by assumption no vertex between a_1 and a_2 or between b_1 and b_2 is in K, v_i lies strictly between a_1 and a_2 or between b_1 and b_2. This means that in G^M there is, for each i ∈ [t], an alternating path from a_{P^M_i} to b_{P^M_i} starting and ending with non-matched edges, and all these paths are pairwise vertex-disjoint. We show that also in G′ there are pairwise vertex-disjoint alternating paths from a_{P^M_i} to b_{P^M_i}. Assume without loss of generality that the successor of a_{P^M_i} on P^M_i is to the right of b_1; it follows that a_{P^M_i} has at least i neighbors to the right of b_1. Since the right buffer of b_1 contains the #lmv(b_1, b_2, M_{G−X}) ≥ t (see Lemma 16) vertices to the right of b_1, we have {a_{P^M_i}, b^r_i} ∈ E. By symmetry, we have {b_{P^M_i}, a^ℓ_i} ∈ E. Recall that M_{G−X} forms a perfect matching between B^r(b_1, M_{G−X}) and B^r(a_1, M_{G−X}) as well as between B^ℓ(a_2, M_{G−X}) and B^ℓ(b_2, M_{G−X}). Since Reduction Rule 6 added all edges between B^r(a_1, M_{G−X}) and B^ℓ(b_2, M_{G−X}) to E′, it follows that each path P^M_i can be completed in G′. We show that there are t pairwise vertex-disjoint augmenting paths starting in {a^r_{i_1}, …, a^r_{i_t}}; note, however, that these paths are not necessarily from a^r_{i_r} to b_{j_r} for r ∈ [t]. To this end, recall that, by definition of #lmv(b_1, b_2, M_{G−X}), each vertex b ∈ B with b_2 ≺ b ≺ b_1 has at least #lmv(b_1, b_2, M_{G−X}) neighbors to the left of its matched neighbor. This allows us to iteratively find augmenting paths as follows: to create the q-th augmenting path P_q, start with some vertex b_{j_q}.
Denote by v the last vertex added to P_q (in the beginning we have v = b_{j_q}). If v ∈ A, then add to P_q the neighbor matched to v.
If v ∈ B, then do the following: if v is adjacent to a vertex a ∈ {a^r_{i_1}, …, a^r_{i_t}}, then add a to P_q; otherwise, add the leftmost neighbor of v to P_q. Repeat this process until P_q contains a vertex from {a^r_{i_1}, …, a^r_{i_t}}. After finding P_q, remove all vertices of P_q from G. If q < t, then continue with P_{q+1}. Observe that any two vertices of P_q that are in A have at least #lmv(b_1, b_2, M_{G−X}) − 1 other vertices of A between them (in the ordering of the vertices of A, see Fig. 4). Thus, after a finite number of steps, P_q reaches a vertex in {a^r_{i_1}, …, a^r_{i_t}}. Furthermore, removing the vertices of P_q decreases #lmv(b_1, b_2, M_{G−X}) by exactly one: P_q contains, for each vertex b ∈ B, at most one vertex among the #lmv(b_1, b_2, M_{G−X}) neighbors of b that are directly to the left of its matched neighbor in M_{G−X}. Thus, in each iteration we have #lmv(b_1, b_2, M_{G−X}) > 0. It follows that the above procedure constructs t vertex-disjoint augmenting paths starting in {a^r_{i_1}, …, a^r_{i_t}}. Hence, G contains a matching of size s and thus (G, s) is a yes-instance. ◻

The correctness of the data reduction rule follows from the previous two claims. It remains to prove the running time. To this end, observe that the matching M_{G−X} is given. Computing all degrees of G can be done in O(n + m) time.

For the last data reduction rule of this step, define

A_free ∶= {a ∈ A | a is free with respect to M_{G−X}} and A^k_free ∶= {a ∈ A_free | |{a′ ∈ A_free | a ⪯ a′}| ≤ k},

and define B_free and B^k_free analogously.

Reduction Rule 7 Let (G, s) be an instance reduced with respect to Reduction Rule 4. Then delete all vertices in A_free⧵(K ∪ A^k_free) and all vertices in B_free⧵(K ∪ B^k_free).

Lemma 18 Reduction Rule 7 is correct and can be applied in O(n + m) time.

Proof The running time is clear. It remains to show the correctness. Let (G, s) be the input instance reduced with respect to Reduction Rule 4 and let (G′, s) be the instance produced by Reduction Rule 7. We show that deleting the vertices in A_free⧵(K ∪ A^k_free) yields an equivalent instance; it then follows by symmetry that deleting the vertices in B_free⧵(K ∪ B^k_free) also yields an equivalent instance. We first show that if (G, s) is a yes-instance, then so is the produced instance (G′, s). Let (G, s) be a yes-instance and let M_G be a maximum matching for G. Clearly, |M_G| ≥ s.

Observe that for each removed vertex a ∈ A_free⧵(K ∪ A^k_free) every vertex a′ ∈ A^k_free is to the right of a, that is, a ≺ a′ and thus N_{G−X}(a) ⊆ N_{G−X}(a′). Since (G, s) is reduced with respect to Reduction Rule 4, it follows that |M_{G−X}| ≥ |M_G| − k. Thus, there exist at most k augmenting paths for M_{G−X} in G. If none of these augmenting paths ends in a vertex a ∈ A_free⧵(K ∪ A^k_free), then all augmenting paths also exist in G′ and thus (G′, s) is a yes-instance. If one of these augmenting paths, say P, ends in a, then at least one vertex a′ ∈ A^k_free is not an endpoint of any of these augmenting paths. Since a ∉ K, it follows from the definition of K that the neighbor b of a on P is indeed in B⧵X. Since N_{G−X}(a) ⊆ N_{G−X}(a′), it follows that {a′, b} ∈ E and thus we can replace a by a′ in the augmenting path. By exhaustively applying this exchange argument, we may assume that none of the augmenting paths uses a vertex in A_free⧵(K ∪ A^k_free). Thus, all augmenting paths are also contained in G′ and hence the resulting instance (G′, s) is still a yes-instance.

Finally, observe that if (G′, s) is a yes-instance, then so is (G, s): any matching of size s in G′ is also a matching in G since G′ is a subgraph of G. ◻

We now have all statements needed to show our second main result.

Theorem 3 Matching on bipartite graphs admits a cubic-vertex kernel with respect to the vertex deletion distance to chain graphs. The kernel can be computed in linear time.
Proof Let (G, s) be the input instance with G = (V, E), the two color classes V = A ∪ B, and X ⊆ V such that G − X is a chain graph. If X is not given explicitly, then use the linear-time factor-four approximation provided in Lemma 13 to compute X. The kernelization is as follows: First, compute the matching M_{G−X} in linear time with Algorithm 4. Next, compute the set of kept vertices K. Then, apply Reduction Rules 5 to 7. By Lemmas 17 and 18, this can be done in linear time. Let b^ℓ_K be the leftmost vertex in K ∩ B and a^r_K the rightmost vertex in A ∩ K, and let a^ℓ_K and b^r_K be their matched neighbors. Since we reduced the instance with respect to Reduction Rule 5, we have |K| ≤ 2k^2. Moreover, as we reduced the instance with respect to Reduction Rule 6, the number of vertices between a^ℓ_K and a^r_K as well as the number of vertices between b^ℓ_K and b^r_K is at most 4k^3, respectively. Furthermore, there are at most 2k free vertices left in V⧵X since we reduced the instance with respect to Reduction Rule 7. It remains to upper-bound the number of matched vertices left of b^ℓ_K and right of a^r_K (see Fig. 8). Observe that all vertices left of b^ℓ_K are matched with respect to M_{G−X}. If there are more than 2k vertices to the left of b^ℓ_K, then do the following: Add four vertices a_ℓ, b_ℓ, x_a, x_b to V. The idea is that {a_ℓ, b_ℓ} should be an edge in M_{G−X} such that a_ℓ ∈ A and b_ℓ ∈ B are in K and there is no vertex left of b_ℓ. This means we add these vertices to simulate the situation where the leftmost vertex in B⧵X is also in K. To ensure that a_ℓ and b_ℓ are in K and that they are not matched with some vertices of the original graph G, we add x_a and x_b to X and make x_a (resp. x_b) their sole neighbors. In this way, we ensure that there is a maximum matching in the new graph that is exactly two edges larger than a maximum matching in the old graph. In this new graph we can then apply Reduction Rule 6 to reduce the number of vertices between b_ℓ and b^ℓ_K.
Formally, we add the following edges: Add {a_ℓ, x_a} and {b_ℓ, x_b} to E, and add all edges between b_ℓ and the vertices in A⧵X. Let a^r be the rightmost vertex in A^k_free. Moreover, the vertices a^r_K, a^ℓ_K, b^r_K, and b^ℓ_K are all adjacent to vertices in X. It remains to upper-bound the number of vertices right of a^r_K and left of b^ℓ_K.

The multiplicative dependence of the running time on the parameter k can now be made an additive one. For instance, in this way the running time for Bipartite Matching parameterized by vertex deletion distance to chain graphs "improves" from O(k(n + m)) to O(k^{7.5} + n + m).
We conclude with some questions and tasks for future research. Can the size or the running time of the kernel with respect to feedback vertex set (see Sect. 3) be improved? In particular, can the exponential upper bound on the kernel size be decreased to a polynomial upper bound? Is there a linear-time computable kernel for Matching parameterized by the treedepth (assuming that a corresponding decomposition is given)? This would complement the recent O( m)-time algorithm [16]. Can one extend the kernel of Sect. 4 from Bipartite Matching to Matching parameterized by the distance to chain graphs?