1 Introduction

Given a graph G, a feedback vertex set of G is a set of vertices \(S\subseteq V(G)\) such that \(G\setminus S\) is a forest. The Feedback Vertex Set problem (FVS) asks to find a feedback vertex set of minimum size. This is a classic NP-hard problem that has been studied extensively from both the complexity-theoretic and the algorithmic point of view [1].

In this work, we take the point of view of parameterized complexity, where every instance I of the problem at hand is accompanied by a parameter k, intended to represent the complexity of that instance. We ask for a fixed-parameter algorithm (FPT algorithm for short) that solves an instance I with parameter k in time \(f(k)|I|^{c}\) for some computable function f and a constant c. That is, the exponential blow-up in the running time bound, likely unavoidable for NP-hard problems, is confined to a function of the parameter only. For more on parameterized complexity, we refer to a recent textbook [7].

In the context of the parameterized complexity of FVS, there is a long line of work improving the running time of FPT algorithms for the standard parameterization by the solution size [4, 5, 6, 11, 12, 14, 17, 18] (i.e., the input consists of a graph G and a parameter k, and the goal is to find a feedback vertex set of size at most k or show that no such set exists). The fastest randomized FPT algorithm for FVS, which runs in time \(\mathcal {O}^{\ast }(3^{k})\), is due to Cygan et al. [8]. If one asks for a deterministic FPT algorithm, the current champion runs in \(\mathcal {O}^{\ast }(3.619^{k})\) time and is due to Kociumaka and Pilipczuk [18].

At the same time, many variants of FVS have received significant attention, including Subset FVS [10, 16, 21], Group FVS [9, 13, 16, 19], Connected FVS [23], and Simultaneous FVS [3].

In this paper, we focus on the parameterized version of the Independent Feedback Vertex Set problem (IFVS), which is to decide if there exists a feedback vertex set S of size at most k such that no two vertices of S are adjacent in G. Misra et al. gave the first FPT algorithm running in time \(\mathcal {O}(5^{k}n^{\mathcal {O}(1)})\) and an \(\mathcal {O}(k^{3})\) kernel for IFVS [22]. Agrawal et al. presented an improved FPT algorithm running in time \(\mathcal {O}^{\ast }(4.1481^{k})\) for IFVS [2]. In this paper, we propose a faster FPT algorithm.

Theorem 1

The Independent Feedback Vertex Set problem, parameterized by the solution size, can be solved in \(\mathcal {O}^{\ast }((1+\varphi ^{2})^{k})\leq \mathcal {O}^{\ast }(3.619^{k})\) time, where \(\varphi =\frac {1+\sqrt {5}}{2}<1.619\) is the golden ratio.

We remark here that Theorem 1 is not “just another” improvement in the base of the exponential function, but in some sense “the end of the road”. The exponential function of the time bound of Theorem 1 matches the one of the algorithm of Kociumaka and Pilipczuk [18] for the classic FVS problem. Since FVS trivially reduces to IFVS (subdivide each edge once), any (deterministic) improvement to the base of the exponential function of Theorem 1 would give a similar improvement for FVS.

On the technical side, we follow the standard approach of iterative compression, as in Agrawal et al. [2], to reduce to a “disjoint” version of the problem. Here our approach diverges from that of Agrawal et al. [2]: we use a modified measure for the subsequent branching process, somewhat inspired by the work of Kociumaka and Pilipczuk [18]. This improved measure, together with a number of new notions (the generalized W-degree, potential nice vertices, and potential tents), allows us to simplify the algorithm and its analysis as compared to [2].

2 Preliminaries

The graphs in our paper are all undirected and may contain multiple edges or loops. For a graph G, we denote its vertex set by V(G) and its edge multiset by E(G). For a vertex \(v\in V(G)\), we use \(N(v)=\{u\in V(G) : uv\in E(G)\}\) to denote the neighborhood of v; note that N(v) is a set, containing a vertex u only once even in the presence of multiple edges uv. We define the closed neighborhood of v as \(N[v]=N(v)\cup \{v\}\). For a vertex set \(A\subseteq V(G)\), the neighborhood of A is \(N(A)=\bigcup _{v\in A}N(v) \setminus A\). For a vertex set \(X\subseteq V(G)\), we denote the subgraph induced by X as G[X]. For simplicity, we use \(G\setminus X\) to denote \(G[V(G) \setminus X]\). For a vertex set \(X\subseteq V(G)\) and \(v\in V(G)\), we define the X-degree of v as the number of edges with one endpoint being v and the other lying in X, and we denote it by \(\deg _{X}(v)\). Note that the X-degree counts edges with multiplicities. A connected component is a maximal connected subgraph. Contracting a connected subgraph H is the operation of replacing the subgraph H with a single vertex \(v_{H}\) and replacing every edge xy with \(x\in V(H)\) and \(y\in V(G)\setminus V(H)\) with an edge \(v_{H}y\) (keeping multiplicities).
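To make these conventions concrete, the following minimal Python sketch implements the neighborhood and X-degree primitives on a networkx-style multigraph; the representation and all names are ours and purely illustrative, not part of the paper.

```python
import networkx as nx

def N(G, v):
    """Open neighborhood N(v): each neighbor appears once, even with parallel edges."""
    return set(G.neighbors(v))

def N_closed(G, v):
    """Closed neighborhood N[v] = N(v) plus v itself."""
    return N(G, v) | {v}

def N_of_set(G, A):
    """N(A): union of N(v) over v in A, minus A itself."""
    return (set().union(*(N(G, v) for v in A)) - set(A)) if A else set()

def deg_X(G, v, X):
    """X-degree of v: edges from v into X, counted with multiplicities."""
    return sum(1 for _, u in G.edges(v) if u in X)

# Example: a triangle on a, b, c with a doubled edge ab.
G = nx.MultiGraph([("a", "b"), ("a", "b"), ("a", "c"), ("b", "c")])
assert N(G, "a") == {"b", "c"}
assert deg_X(G, "a", {"b"}) == 2   # parallel edges are counted
```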

In the realm of parameterized complexity, every instance I of a problem at hand is accompanied by a parameter k. A fixed-parameter algorithm (FPT algorithm for short) solves an instance I with parameter k in time \(f(k)|I|^{c}\) for some computable function f and a constant c. A kernel of size g(k), for some computable function g, is a polynomial-time procedure that reduces an instance I with parameter k to an equivalent instance with both size and parameter value bounded by g(k).

3 An Algorithm for Independent Feedback Vertex Set

Given an instance (G,k), we first invoke the \(\mathcal {O}^{\ast }((1+\varphi ^{2})^{k})\)-time FPT algorithm for the classic FVS problem [18]. If this algorithm returns NO, we conclude that there is no independent feedback vertex set of size at most k, since an independent feedback vertex set is in particular a feedback vertex set. Otherwise, the algorithm returns a feedback vertex set Z such that |Z| ≤ k; clearly, \(F = G\setminus Z\) is a forest.

Suppose there is a solution S for the input instance (G,k). The algorithm branches into \(2^{|Z|}\) directions, guessing a subset \(Z^{\prime }\) of Z such that \(S\cap Z=Z^{\prime }\). Let \(W = Z \setminus Z^{\prime }\). If \(G[Z^{\prime }]\) is not an independent set or G[W] is not a forest, the algorithm rejects this guess. Hence, we can assume that \(G[Z^{\prime }]\) is an independent set and G[W] is a forest. Let \(R=N(Z^{\prime })\cap V(F)\). Since the solution S is an independent set and \(Z^{\prime } \subseteq S\), we have \(R\cap S=\emptyset \). The algorithm then tries to find an independent feedback vertex set \(S^{\prime }\subseteq V(F)\) for \(G\setminus Z^{\prime }\) such that \(S^{\prime }\cap R=\emptyset \) and \(|S^{\prime }|\leq k-|Z^{\prime }|\). Note that at this point we cannot delete the vertices of W or R from the graph: albeit undeletable, they take part in the structure of the cycles in G that we are to break. Following Agrawal et al. [2], we call this subproblem Disjoint Independent Feedback Vertex Set (DIS-IFVS for short); we give a faster FPT algorithm for it in the next section. The algorithm tries every possible \(Z^{\prime }\subseteq Z\) and solves the corresponding DIS-IFVS instance. If some such instance is a YES instance, the algorithm returns YES for the instance (G,k) of IFVS. Otherwise, if every \(Z^{\prime }\subseteq Z\) yields a NO instance of DIS-IFVS, it reports that (G,k) is a NO instance.
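The overall control flow of this section can be summarized by the following sketch (assuming networkx-style multigraphs). Here fvs_mod_k stands for the FVS algorithm of [18], assumed to return a feedback vertex set of size at most k or None, and solve_dis_ifvs for the DIS-IFVS routine of the next section; both are placeholders, not implementations from the paper.

```python
from itertools import chain, combinations
import networkx as nx

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def solve_ifvs(G, k, fvs_mod_k, solve_dis_ifvs):
    Z = fvs_mod_k(G, k)                       # feedback vertex set of size <= k, or None
    if Z is None:
        return False                          # no FVS of size <= k, hence no independent one
    F = set(G.nodes) - set(Z)
    for Zp in map(set, powerset(Z)):          # guess Z' = S ∩ Z
        W = set(Z) - Zp
        if any(G.has_edge(u, v) for u in Zp for v in Zp if u != v):
            continue                          # G[Z'] must be an independent set
        if W and not nx.is_forest(G.subgraph(W)):
            continue                          # G[W] must be a forest
        R = {v for v in F if any(G.has_edge(v, z) for z in Zp)}
        H = G.subgraph(set(G.nodes) - Zp).copy()
        if solve_dis_ifvs(H, W, R, k - len(Zp)):
            return True
    return False
```

Note that this sketch only decides the YES/NO question; recovering an actual solution requires standard bookkeeping on top of it.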

3.1 Disjoint Independent Feedback Vertex Set

We start with a formal definition of the problem.

Disjoint Independent Feedback Vertex Set

Input: An undirected (multi)graph G, a feedback vertex set W of G, a set \(R\subseteq V(G)\setminus W\), and an integer k.

Question: Is there an independent feedback vertex set \(X\subseteq V(G)\setminus (W\cup R)\) for G such that |X| ≤ k?

Let \(F = V(G)\setminus W\). Obviously, G[F] is a forest since W is a feedback vertex set of G. A vertex \(v\in F\setminus R\) is a nice vertex if \(\deg _{W}(v)=2\) and v has no neighbors in F. A vertex \(v\in F\setminus R\) is a tent if \(\deg _{W}(v)=3\) and v has no neighbors in F.

As mentioned earlier, we rely on a measure different from the one in [2]. The measure μ of an instance (G,W,R,k) is defined as

$$ \mu=k+\rho-(\eta+\tau). $$

Here, ρ denotes the number of connected components of G[W], η is the number of nice vertices in \(F\setminus R\), and τ is the number of tents in \(F\setminus R\).
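As a sanity check, the quantities ρ, η, τ and the measure μ can be computed directly from the definitions. The sketch below assumes the same networkx-style multigraph as in the earlier snippet, with W and R given as vertex sets; all names are illustrative only.

```python
import networkx as nx

def deg_into(G, v, X):
    """Number of edges from v into X, counted with multiplicities."""
    return sum(1 for _, u in G.edges(v) if u in X)

def is_nice(G, v, W, R):
    """Nice vertex: lies outside W and R, has W-degree exactly 2 and no neighbor in F."""
    return (v not in W and v not in R and deg_into(G, v, W) == 2
            and all(u in W for u in G.neighbors(v)))

def is_tent(G, v, W, R):
    """Tent: lies outside W and R, has W-degree exactly 3 and no neighbor in F."""
    return (v not in W and v not in R and deg_into(G, v, W) == 3
            and all(u in W for u in G.neighbors(v)))

def measure(G, W, R, k):
    """The measure mu = k + rho - (eta + tau)."""
    rho = nx.number_connected_components(G.subgraph(W))
    eta = sum(1 for v in G.nodes if is_nice(G, v, W, R))
    tau = sum(1 for v in G.nodes if is_tent(G, v, W, R))
    return k + rho - (eta + tau)
```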

We remark that the distinction between the sets W and R is made purely for the sake of the running time analysis. The set of feasible solutions to a DIS-IFVS instance (G,W,R,k) would remain the same if we moved vertices from R to W. However, the notions of tents and nice vertices and the measure μ strongly depend on the distinction between W and R. The algorithm maintains this distinction to ensure the promised running time bound.

Our main technical result is the following.

Lemma 1

A Disjoint Independent Feedback Vertex Set instance I with measure μ can be solved in time \(\mathcal {O}^{\ast }(\varphi ^{\mu })\), where \(\varphi =\frac {1+\sqrt {5}}{2}\) is the golden ratio.

Theorem 1 follows by standard analysis as in [2]:

Proof of Theorem 1

The algorithm for FVS of [18] runs in time \(\mathcal {O}^{\ast }((1+\varphi ^{2})^{k})\). In a branch with a set \(Z^{\prime } \subseteq Z\), the routine for DIS-IFVS is passed an instance with parameter \(k-|Z^{\prime }|\) and with \(W=Z\setminus Z^{\prime }\) of size at most \(k-|Z^{\prime }|\) (as |Z| ≤ k); hence its measure is bounded by \(2(k-|Z^{\prime }|)\). Since the algorithm for DIS-IFVS runs in time \(\mathcal {O}^{\ast }(\varphi ^{\mu })\), the total running time of all its applications is bounded by

$$ \sum\limits_{i=0}^{k} \binom{k}{i} \mathcal{O}^{\ast}(\varphi^{2(k-i)}) = \mathcal{O}^{\ast}((1+\varphi^{2})^{k}) \leq \mathcal{O}^{\ast}(3.619^{k}). $$
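Here the equality is simply the binomial theorem, and the numeric bound follows from \(\varphi ^{2}=\varphi +1\):

$$ \sum\limits_{i=0}^{k} \binom{k}{i}\, 1^{i}\, \left(\varphi^{2}\right)^{k-i} = (1+\varphi^{2})^{k}, \qquad 1+\varphi^{2} = 2+\varphi = \frac{5+\sqrt{5}}{2} \approx 3.6180 < 3.619. $$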

This completes the proof. □

The remainder of this section is devoted to the proof of Lemma 1. We start by showing that μ is nonnegative on YES instances.

Lemma 2

Let I = (G,W,R,k) be a YES instance of DIS-IFVS. Then μ ≥ 0.

Proof

Let X be a solution to the instance I. Thus \(G^{\prime }=G\setminus X\) is a forest. Let \(N\subseteq V(G)\setminus (W\cup R)\) be the set of nice vertices and \(T\subseteq V(G)\setminus (W\cup R)\) be the set of tents. Since \(X\cap W=\emptyset \), the graph \(H := G[W \cup (N\setminus X) \cup (T\setminus X)]\) is a subgraph of \(G^{\prime }\) and hence a forest. Now we contract each component of H[W] into a single vertex and obtain a forest \(\tilde {H}\). Since \(\tilde {H}\) has at most \(\rho +|N\setminus X|+|T\setminus X|\) vertices, it has at most \(\rho +|N\setminus X|+|T\setminus X|-1\) edges. By the definition of tents and nice vertices, \((N\cup T)\setminus X\) is an independent set. Moreover, every vertex of \(N\setminus X\) has degree 2 and every vertex of \(T\setminus X\) has degree 3 in \(\tilde {H}\), so we get the following inequality:

$$ 2|N\setminus X|+3|T\setminus X|\leq |E(\tilde{H})|\leq \rho+|N\setminus X|+|T\setminus X|-1. $$

It follows that:

$$ |N\setminus X|+|T\setminus X|\leq |N\setminus X|+2|T\setminus X| \leq \rho. $$

Hence, as |X|≤ k,

$$ |N|+|T| \leq \rho+k. $$

As a result, μ = ρ + k − (η + τ) ≥ 0. □

A small comment is in order. Our measure μ differs from the one of [2], namely \(\mu ^{\prime }=2k+\rho -(\eta +2\tau )\). The change in the measure is one of the critical insights in this paper: while it sometimes leads to weaker branching vectors than those in [2], the “starting value” in the application in the above proof of Theorem 1 is \(2(k-|Z^{\prime }|)\), not \(3(k-|Z^{\prime }|)\) as in [2]. Thus, to obtain the promised running time bound, we are fine with branching vectors of the form (1,2); that is, with branching steps in two directions, where in one direction the measure drops by at least one and in the other by at least two. The change in the measure is similar to the one made in the work of Kociumaka and Pilipczuk for FVS [18], as compared to the previous champion algorithm of Cao, Chen, and Liu [5].

We now introduce some definitions that will help us streamline later arguments. Let (G,W,R,k) be an instance of DIS-IFVS and let \(F = V(G) \setminus W\). We say that \(u\in F\setminus R\) is a potential nice vertex, or P-nice, if u is of degree 2 and exactly one of its incident edges has its second endpoint in W. For a vertex v of G[F], we define the nice degree of v, denoted by Ndeg(v), as the number of P-nice neighbors of v. The generalized W-degree of v is \(\text {Gdeg}_W(v)=\text {Ndeg}(v)+\deg _{W}(v)\). We say that \(u\in F\setminus R\) is a potential tent, or P-tent, if \(\text {Gdeg}_W(u) = 2\) and \(\deg (u) = 3\). For a vertex v in F, we define the tent degree of v, denoted by Tdeg(v), as the number of neighbors of v that are P-tents. See Fig. 1 for an illustration.

Fig. 1: A nice vertex, a tent, a P-tent, and a P-nice vertex
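In the same spirit as the earlier sketches (and reusing the deg_into helper defined there), these auxiliary notions can be written down as simple predicates; again, the names are ours and illustrative only.

```python
def is_P_nice(G, v, W, R):
    """P-nice: outside W and R, total degree 2, exactly one incident edge into W."""
    return (v not in W and v not in R and G.degree(v) == 2
            and deg_into(G, v, W) == 1)

def Ndeg(G, v, W, R):
    """Nice degree: number of P-nice neighbors of v."""
    return sum(1 for u in G.neighbors(v) if is_P_nice(G, u, W, R))

def Gdeg_W(G, v, W, R):
    """Generalized W-degree: Ndeg(v) + deg_W(v)."""
    return Ndeg(G, v, W, R) + deg_into(G, v, W)

def is_P_tent(G, v, W, R):
    """P-tent: outside W and R, generalized W-degree 2, total degree 3."""
    return (v not in W and v not in R and G.degree(v) == 3
            and Gdeg_W(G, v, W, R) == 2)

def Tdeg(G, v, W, R):
    """Tent degree: number of P-tent neighbors of v."""
    return sum(1 for u in G.neighbors(v) if is_P_tent(G, u, W, R))
```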

3.2 Reduction Rules for DIS-IFVS

Now we introduce some reduction rules for DIS-IFVS. We always apply the lowest-numbered applicable reduction rule. First, let us recall five reduction rules from [2].

Reduction Rule 1: Delete any vertex of degree at most one.

Reduction Rule 2: Let u, v be two adjacent vertices of \(G\setminus W\) that are both not nice and both of degree 2 in G. Let x be the second neighbor of u and let y be the second neighbor of v (x and y may coincide). If neither u nor v is in R, or both are in R, then delete one vertex of {u,v} arbitrarily, say u, and connect the two neighbors of u (i.e., v and x) with a new edge. If exactly one of u and v is in R, say \(v\in R\), then delete v and add an edge between its neighbors (i.e., an edge uy).

Reduction Rule 3: If k < 0 or μ < 0, return that the input instance is a NO instance.

Reduction Rule 4: If there is a vertex \(v\in R\) such that v has two incident edges with their second endpoints in the same connected component of G[W], then return that the input instance is a NO instance.

Reduction Rule 5: If there is a vertex \(v\in F\setminus R\) such that v has at least two incident edges with their second endpoints in the same connected component of G[W], then remove v from G and add all vertices of \(N(v)\cap F\) to R. In this case, k decreases by one.
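For concreteness, Rules 1, 4 and 5 can be sketched as follows, under the same assumptions and helpers as the earlier snippets (Rules 2 and 3 are omitted for brevity). Each function performs a single application, and a driver loop is assumed to re-run the rules exhaustively; this is our illustration, not code from the paper.

```python
import networkx as nx

def rule_1(G):
    """Reduction Rule 1: delete vertices of degree at most one (one round)."""
    low = [v for v in G.nodes if G.degree(v) <= 1]
    G.remove_nodes_from(low)
    return bool(low)

def hits_same_W_component(G, v, W):
    """True if v has two incident edges into the same component of G[W]."""
    comp = {}
    for i, C in enumerate(nx.connected_components(G.subgraph(W))):
        comp.update({w: i for w in C})
    hits = [comp[u] for _, u in G.edges(v) if u in W]   # multiplicities included
    return len(hits) != len(set(hits))

def rule_4(G, W, R):
    """Reduction Rule 4: returns True if the instance must be rejected."""
    return any(hits_same_W_component(G, v, W) for v in R)

def rule_5(G, W, R, k):
    """Reduction Rule 5: one application; returns the (possibly decreased) budget k."""
    F = set(G.nodes) - set(W)
    for v in F - set(R):
        if hits_same_W_component(G, v, W):
            R.update(set(G.neighbors(v)) & F)
            G.remove_node(v)
            return k - 1
    return k
```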

It is not difficult to verify the safeness of Reduction Rules 1–5, as shown in [2]. However, when analyzing Reduction Rules 1 and 5 we need to be careful, since we use a different measure μ = k + ρ − (η + τ). In Reduction Rule 1, if one deletes a neighbor \(w\in W\) of a tent or a nice vertex v, then v stops being a tent or a nice vertex (so η + τ may decrease by one), but {w} also stops being a connected component of G[W] (decreasing ρ by one). For Reduction Rule 5, it may happen that v is a tent or a nice vertex, and its deletion decreases η + τ by one; however, the removal of v also decreases k by one. Thus μ does not increase.

Now we introduce two new reduction rules (see Figs. 2 and 3).

Fig. 2: Reduction Rule 6

Fig. 3: Reduction Rule 7

Reduction Rule 6: If there is a vertex \(v\in R\) such that GdegW(v) ≥ 1 or Tdeg(v) ≥ 1, then remove v from R and add it to W.

Reduction Rule 7: If there is a vertex \(v\in F\setminus R\) such that every neighbor \(w\in N(v) \setminus (W\cup R)\) is of degree 2, and at least one such neighbor exists, then put \(N(v) \setminus (W\cup R)\) into R.

We first show their safeness.

Claim

Reduction Rules 6 and 7 are safe.

Proof

The safeness of Reduction Rule 6 is straightforward. For the safeness of Reduction Rule 7, let (G,W,R,k) be an input instance, let v be a vertex satisfying the condition of Reduction Rule 7, and let (G,W,R ∪ (N(v) ∩ F),k) be the instance obtained after applying the rule. We claim that (G,W,R,k) is a YES instance if and only if (G,W,R ∪ (N(v) ∩ F),k) is one. The “if” direction is straightforward, since we only enlarged the set R.

For the “only if” direction, let X be a solution of size at most k to the instance (G,W,R,k). If \(X\cap N(v)=\emptyset \), then X is also a solution to (G,W,R ∪ (N(v) ∩ F),k). Otherwise, consider the set \(X^{\prime }=(X\cup \{v\})\setminus (N(v)\cap F)\). Obviously \(|X^{\prime }|\leq k\). We show that \(X^{\prime }\) is a solution to (G,W,R ∪ (N(v) ∩ F),k). Clearly, \(X^{\prime }\) is disjoint from \(W\cup R\cup (N(v)\cap F)\) and independent, as it is disjoint from N(v). To see that \(X^{\prime }\) is a feedback vertex set of G, observe that since every vertex \(w\in N(v) \setminus (W\cup R)\) is of degree 2, every cycle passing through such a vertex w also passes through v; hence every cycle hit by X is hit by \(X^{\prime }\) as well. □
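Continuing the earlier sketches (and reusing their helpers Gdeg_W and Tdeg), the two new rules admit a direct implementation; as before, each function performs a single application, the sets W and R are modified in place, and the code is illustrative only.

```python
def rule_6(G, W, R):
    """Reduction Rule 6: move some v in R with Gdeg_W(v) >= 1 or Tdeg(v) >= 1 to W."""
    for v in list(R):
        if Gdeg_W(G, v, W, R) >= 1 or Tdeg(G, v, W, R) >= 1:
            R.discard(v)
            W.add(v)
            return True
    return False

def rule_7(G, W, R):
    """Reduction Rule 7: if every neighbor of some v outside W and R has degree 2
    (and at least one such neighbor exists), move those neighbors into R."""
    F = set(G.nodes) - W
    for v in F - R:
        outside = [u for u in G.neighbors(v) if u not in W and u not in R]
        if outside and all(G.degree(u) == 2 for u in outside):
            R.update(outside)
            return True
    return False
```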

Since Reduction Rule 7 only moves vertices to R, its application does not change the measure; note that the vertices moved to R are degree-2 vertices with a neighbor in F (namely v), so they are neither nice vertices nor tents. However, the situation is not that easy for Reduction Rule 6, and we need to show that its application does not increase μ. To this end, we prove a number of generic observations on how the measure μ changes if we modify a neighbor of a P-nice vertex or a P-tent; we refer to Fig. 4 for an illustration.

Observation 1

Let \(v\in F\) be a vertex with a P-nice neighbor w. Consider the operation of moving v to W. Then the vertex w becomes nice and η increases by at least one.

Observation 2

Let \(v\in F\) be a vertex with a P-tent neighbor w such that v is not P-nice. Consider the operation of putting v into the solution: deleting it from G and putting N(v) ∩ F into R. Then the application of reduction rules on w and its (possible) other neighbors in F decreases μ by at least one.

Proof

The operation moves w to R and decreases its degree to 2. Since w is a P-tent and v is not a P-nice vertex, every neighbor \(u \in (N(w) \cap F) \setminus \{v\}\) is a P-nice vertex. Consequently, Reduction Rule 2 reduces \((N[w] \cap F) \setminus \{v\}\) to a single vertex \(w^{\prime }\), which is in R precisely when \((N(w) \cap F) \setminus \{v\}\) is empty, that is, when \(w^{\prime } = w\). Furthermore, \(\deg (w^{\prime }) = \deg _{W}(w^{\prime }) = 2\). We remark that the above discussion also covers the degenerate case \(N(w) \cap F = \{v\}\), in which all other edges of w already go to W.

Fig. 4: Observations 1–3

If \(w^{\prime }\) has both of its incident edges going to the same connected component of G[W], then either Reduction Rule 4 rejects the instance or Reduction Rule 5 decreases k by one. Otherwise, if \(w^{\prime } \in R\), Reduction Rule 6 moves \(w^{\prime }\) to W, decreasing ρ by one. If \(w^{\prime } \notin R\), then \(w^{\prime }\) becomes a nice vertex, increasing η by one. Thus, in all cases, μ decreases by at least one. □

Observation 3

Let \(v\in F\) be a vertex with a P-tent neighbor w such that v is not P-nice. Consider the operation of moving v into W. Then the application of reduction rules on w and its (possible) other neighbors in F decreases μ by at least one.

Proof

Since w is a P-tent and v is not P-nice, every neighbor \(u \in (N(w) \cap F) \setminus \{v\}\) is P-nice. Consider such a vertex u; note that \(u\in F\setminus R\) by the definition of P-nice. Then Reduction Rule 7 is applicable to w; this rule moves u to R, after which Reduction Rule 6 moves u to W and, since u is P-nice and thus already has a neighbor in W, this does not create a new connected component of G[W]. Along this process, Reduction Rule 5 can be triggered on w, deleting w and decreasing k by one.

If this does not happen, then at the end of this process we have \(\deg (w) = \deg _{W}(w) = 3\); note that we are already in this situation at the very beginning if \(N(w) \cap F = \{v\}\). Since w is a P-tent at the beginning, \(w\notin R\) at the end of the process. Then w becomes a tent, increasing τ by one. Thus, in all cases, μ decreases by at least one. □

We remark that Observations 2 and 3 concern the measure drop after the respective operation on v has been applied; they do not account for how the operation on v itself affects the measure.

Armed with the above observations (see Fig. 4), we can now show that Reduction Rule 6 on its own does not increase the measure.

Claim

An application of Reduction Rule 6 does not increase the measure.

Proof

Since \(v\in R\), v is neither a tent nor a nice vertex. Thus, moving v to W decreases neither η nor τ.

If \(\deg _{W}(v)\geq 1\), ρ does not increase by moving v to W. Hence, μ does not increase.

We are left with the case \(\deg _{W}(v)=0\), when moving v to W increases ρ by one. If GdegW(v) ≥ 1 but \(\deg _{W}(v)=0\), then v has a P-nice neighbor w. Then, after v is moved to W, Observation 1 asserts that w becomes a nice vertex, so η increases by at least one, offsetting the increase of ρ. Otherwise, Tdeg(v) ≥ 1, and v has a neighbor w that is a P-tent. Then, after v is moved to W, Observation 3 asserts that the subsequent application of reduction rules on w and its possible other neighbors in F causes a measure decrease of at least one, again offsetting the increase of ρ. This finishes the proof. □

3.3 Branching for DIS-IFVS

Now we are ready to describe the branching algorithm. We assume that all reduction rules have been applied exhaustively. As a branching pivot, we pick a vertex \(v\in F\) that is neither a nice vertex, nor a tent, nor a P-nice vertex, and that satisfies one of the following three cases:

Case A: GdegW(v) ≥ 3.

Case B: GdegW(v) ≥ 1 and Tdeg(v) ≥ 1.

Case C: Tdeg(v) ≥ 2.

If more than one vertex of F satisfies one of the above cases, we pick a vertex v satisfying the earliest applicable case; within one case, we break ties arbitrarily.

First, note that the non-applicability of Reduction Rule 6 implies that the chosen branching pivot v does not lie in R.

No matter which case the chosen branching pivot v satisfies, we branch in two directions. In one branch, we include v in the solution: we delete v from the graph, include N(v) ∩ F into R, and decrease k by one. In the other branch, we move v to W.

We now show that in each of the cases, the branching gives a branching vector (1,2) or better with respect to the measure μ. That is, in one of the branches the measure drops by at least one, and in the other by at least two.
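Schematically, the pivot selection and the two-way branch can be written as follows, reusing the helpers from the earlier sketches; solve is a hypothetical recursive entry point that is assumed to re-apply all reduction rules exhaustively before branching again, and is not code from the paper.

```python
def pick_pivot(G, W, R):
    """Return a branching pivot according to Cases A, B, C (preferring earlier cases)."""
    F = set(G.nodes) - W
    cases = [lambda v: Gdeg_W(G, v, W, R) >= 3,                              # Case A
             lambda v: Gdeg_W(G, v, W, R) >= 1 and Tdeg(G, v, W, R) >= 1,    # Case B
             lambda v: Tdeg(G, v, W, R) >= 2]                                # Case C
    for case in cases:
        for v in F:
            if (not is_nice(G, v, W, R) and not is_tent(G, v, W, R)
                    and not is_P_nice(G, v, W, R) and case(v)):
                return v
    return None

def branch(G, W, R, k, solve):
    """Two-way branch on the pivot: either take v into the solution or move it to W."""
    v = pick_pivot(G, W, R)
    if v is None:
        return None                      # base case: handled by Lemmas 3 and 4
    # Branch 1: delete v, move N(v) ∩ F into R, and decrease k by one.
    G1 = G.copy()
    R1 = set(R) | (set(G.neighbors(v)) - set(W))
    G1.remove_node(v)
    if solve(G1, set(W), R1, k - 1):
        return True
    # Branch 2: declare v undeletable by moving it to W.
    return solve(G.copy(), set(W) | {v}, set(R), k)
```

Note that, once the reduction rules have been applied exhaustively, the returned pivot never lies in R, as otherwise Reduction Rule 6 would have fired.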

Case A: GdegW(v) ≥ 3.

(i) Branch where v is deleted and all vertices of N(v) ∩ F are added to R. Then k decreases by one and ρ stays the same. Moreover, neither η nor τ decreases: v itself is neither a nice vertex nor a tent, and nice vertices and tents have no neighbors in F, so none of them is affected by deleting v or by moving N(v) ∩ F to R. Thus, μ decreases by at least one.

(ii) Branch where v is moved from F to W. Then ρ decreases by \(\deg _{W}(v)-1\) (which may be − 1 if \(\deg _{W}(v)=0\)) and η increases by Ndeg(v). Since \(\text {Gdeg}_W(v)=\deg _{W}(v)+\text {Ndeg}(v)\geq 3\) and τ does not decrease, μ decreases by at least two.

Case B: GdegW(v) ≥ 1 and Tdeg(v) ≥ 1.

(i) Branch where v is deleted and all vertices of N(v) ∩ F are added to R. First, k decreases by one. Furthermore, v has a P-tent neighbor w, and Observation 2 asserts that the subsequent applications of reduction rules on w and its remaining neighbors in F decrease the measure by at least one more. Thus, in total μ decreases by at least two.

(ii) Branch where v is moved from F to W. For every P-tent neighbor w of v, Observation 3 asserts that the application of reduction rules on w and its remaining neighbors in F causes a measure decrease of at least one. If \(\deg _{W}(v) \geq 1\), then moving v to W does not increase ρ, and we are done. Otherwise \(\deg _{W}(v)=0\), and moving v to W increases ρ by one; however, the assumption GdegW(v) ≥ 1 then implies that v also has a P-nice neighbor \(w^{\prime }\), which by Observation 1 becomes a nice vertex, increasing η by at least one and offsetting the increase of ρ. Consequently, in this case we also have a measure drop of at least one.

Case C: Tdeg(v) ≥ 2.

(i) Branch where v is deleted and all vertices of N(v) ∩ F are added to R. First, k decreases by one. Furthermore, for every P-tent neighbor w of v, Observation 2 asserts that the application of reduction rules on w and its remaining neighbors in F causes a measure drop of at least one. Since Tdeg(v) ≥ 2, together with the decrease of k we have a total measure decrease of at least three.

(ii) Branch where v is moved from F to W. The move itself may increase ρ by one. For every P-tent neighbor w of v, Observation 3 asserts that the subsequent application of reduction rules on w and its remaining neighbors in F causes a measure drop of at least one. Since Tdeg(v) ≥ 2, in total the measure decreases by at least one.

We are left with analyzing what happens if no vertex of F satisfies any of the three cases for the choice of the branching pivot. As in [2], we rely on the following base case.

Lemma 3

[2] Let (G,W,R,k) be an instance of DIS-IFVS where every vertex in V (G) ∖ W is either a nice vertex or a tent. Then we can find an independent feedback vertex set \(X\subseteq V(G)\setminus (W\cup R)\) for G of the minimum size in polynomial time.

Lemma 3 follows from the observation by Cao et al. [5] and the fact that all nice vertices and tents form an independent set.

We show the following.

Lemma 4

If no reduction rule is applicable and no vertex of F satisfies any of the cases for the choice of the branching pivot, then the remaining instance of DIS-IFVS can be solved in polynomial time.

Proof

We claim that every vertex of F in the remaining graph G is either a tent or a nice vertex; the lemma then follows by Lemma 3.

For contradiction, suppose that there is a connected component D of G[F] that is not a singleton consisting of a tent or a nice vertex. Since no vertex of D falls into Case A, we have GdegW(v) ≤ 2 for every \(v\in D\). Note first that D cannot be a singleton at all: a singleton component {v} satisfies \(\deg (v)=\deg _{W}(v)\leq 2\), so either Reduction Rule 1 applies, or \(\deg _{W}(v)=2\) and then either Reduction Rule 6 applies (if \(v\in R\)) or v is a nice vertex — a contradiction in every case.

Hence, D contains at least two vertices. Let \(v\in D\) be a leaf of D, that is, a vertex with exactly one neighbor in D. If \(\deg _{W}(v) = 0\), then \(\deg (v) = 1\) and Reduction Rule 1 deletes v. Thus, since GdegW(v) ≤ 2, we have \(\deg _{W}(v) \in \{1,2\}\). In particular, every leaf of D has at least one neighbor in W and, since Reduction Rule 6 is not applicable, lies outside R.

Root the tree G[D] at an arbitrary vertex and consider a leaf \(v\in D\) that is furthest from the root in G[D]; among such leaves, choose one maximizing \(\deg _{W}(v)\). Recall that \(v\notin R\), as otherwise Reduction Rule 6 would move v to W.

First, assume \(\deg _{W}(v) = 2\). Since v is a leaf of D, it has exactly one neighbor \(u\in D\); as \(\deg (v)=3\) and GdegW(v) = 2, the vertex v is a P-tent. Hence, Tdeg(u) ≥ 1. If \(\deg (u) \leq 1\), then Reduction Rule 1 applies to u. If \(\deg (u) = 2\), then Reduction Rule 7 applies to v if \(u\notin R\), and once u is in R, Reduction Rule 6 applies to u, making v a tent. Consequently, \(\deg (u) \geq 3\). By the choice of v, either \(\deg _{W}(u) \geq 1\) or u has another child \(v^{\prime }\), which is then also a leaf of D. This implies that either

  • GdegW(u) ≥ 1, if \(\deg _{W}(u) \geq 1\) or \(v^{\prime }\) exists and \(\deg _{W}(v^{\prime }) = 1\), i.e., \(v^{\prime }\) is P-nice; or

  • Tdeg(u) ≥ 2, if \(v^{\prime }\) exists and \(\deg _{W}(v^{\prime }) = 2\), i.e., \(v^{\prime }\) is a P-tent.

Consequently, Case B or C applies to u.

Second, assume \(\deg _{W}(v) = 1\), and again let u be the unique neighbor of v in G[D]; note that v is then P-nice. If \(\deg (u) \leq 1\), then Reduction Rule 1 applies; if \(\deg (u) = 2\), then Reduction Rule 2 is applicable to u and v. By the choice of v, every other leaf \(v^{\prime }\) adjacent to u also satisfies \(\deg _{W}(v^{\prime }) = 1\); that is, every child of u is P-nice. If \(u\in R\), then Reduction Rule 6 applies to u. If GdegW(u) ≥ 3, then Case A applies to u. Hence, \(u\notin R\), \(\deg (u) = 3\), and GdegW(u) = 2 (every neighbor of u in D other than its possible parent is a P-nice child, so \(\deg (u)\leq \text {Gdeg}_W(u)+1\)): u has a parent x in G[D] and either one more child \(v^{\prime }\) that is P-nice or a neighbor in W. In particular, u is a P-tent, and Tdeg(x) ≥ 1.

If \(\deg (x) = 2\), then Reduction Rule 7 would apply to u and move v to R, and consequently Reduction Rule 6 would move v to W. If GdegW(x) ≥ 1, then Case B applies to x. Hence, x has another child \(u^{\prime }\) that is not P-nice. By the choice of v, the connected component of G[D] ∖{x} containing \(u^{\prime }\) is a star with \(u^{\prime }\) as a center. Furthermore, since in the choice of v we maximized \(\deg _{W}(v)\), every child w of \(u^{\prime }\) is P-nice (i.e., \(\deg _{W}(w) = 1\)). Since Case A is not applicable to \(u^{\prime }\), we have \(\text {Gdeg}_W(u^{\prime }) \leq 2\). If \(\deg (u^{\prime }) = 2\), then either \(u^{\prime }\) is P-nice (if \(\deg _{W}(u^{\prime }) = 1\)) or Reduction Rule 2 is applicable to \(u^{\prime }\) and its child (if \(\deg _{W}(u^{\prime }) = 0\)). We infer that \(\deg (u^{\prime }) = 3\) and \(\text {Gdeg}_W(u^{\prime }) = 2\). If \(u^{\prime } \in R\), then Reduction Rule 6 is applicable to \(u^{\prime }\). We infer that \(u^{\prime }\) is a P-tent. Hence, Tdeg(x) ≥ 2 and Case C applies to x. This completes the proof of the lemma. □

Every step of the reduction rules and of the branching can be executed in polynomial time, and in every branching case the branching vector is (1,2) or better. Thus we get the recurrence T(μ) ≤ T(μ − 1) + T(μ − 2), and consequently the running time of the algorithm for DIS-IFVS is \(\mathcal {O}^{\ast }(\varphi ^{\mu })\). This concludes the proof of Lemma 1 and thus of Theorem 1.
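For the record, φ enters the bound as the positive root of the characteristic polynomial of this recurrence:

$$ T(\mu) \leq T(\mu-1)+T(\mu-2) \quad\Longrightarrow\quad x^{2}=x+1, \qquad x=\frac{1+\sqrt{5}}{2}=\varphi, \qquad T(\mu)=\mathcal{O}(\varphi^{\mu}). $$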

4 Conclusion

In this paper, we presented a faster FPT algorithm for the Independent Feedback Vertex Set problem by using a different measure, introducing new reduction rules, and improving the branching algorithm for the Disjoint Independent Feedback Vertex Set problem. Moreover, we introduced the notions of generalized W-degree and tent degree, which make the reduction rules and the branching steps more concise. The running time of our algorithm is \(\mathcal {O}^{\ast }(3.619^{k})\), which matches the running time of the fastest known deterministic FPT algorithm for the Feedback Vertex Set problem. As IFVS is a more general problem than FVS, any improvement for IFVS would lead to an improvement for FVS. We conclude by reiterating an open problem of [23]: does there exist a kernel of size \(\mathcal {O}(k^{2})\) for IFVS, as is the case for FVS [15, 24]?