1 Introduction

Graph coloring is a central problem in discrete mathematics and computer science. In exponential time algorithmics [15], graph coloring is among the most well-studied problems, and it is an archetypical partitioning problem. Given a graph G and an integer k, the problem is to determine whether the vertex set of G can be partitioned into k independent sets. Already in 1976, Lawler [25] designed a dynamic programming algorithm for graph coloring and upper bounded its running time by \(O(2.4423^n)\), where n is the number of vertices of the input graph. This remained the best running time for graph coloring for 25 years, until Eppstein [10] improved it to \(O(2.4150^n)\) by using better bounds on the number of small maximal independent sets in a graph. Based on bounds on the number of maximal induced bipartite subgraphs and refined bounds on the number of size-constrained maximal independent sets, Byskov [7] improved the running time to \(O(2.4023^n)\). An algorithm based on fast matrix multiplication by Björklund and Husfeldt [3] improved the running time to \(O(2.3236^n)\). The current fastest algorithm for graph coloring, by Björklund et al. [2, 4] and [24], is based on the principle of inclusion–exclusion and Yates’ algorithm for the fast zeta transform. This breakthrough algorithm solves graph coloring in \(O^*(2^n)\) time, where the \(O^*\)-notation is similar to the O-notation but ignores polynomial factors.

A significant drawback of the aforementioned algorithms is that they use exponential space. Often, the space bound is the same as the time bound, up to polynomial factors. This exponential space bound is undesirable [30], certainly for modern computing devices. Polynomial-space algorithms for graph coloring have been studied extensively as well, with successive running times \(O^*(n!)\) [8], \(O((k/e)^n)\) (randomized) [11], \(O((2+ \log k)^n)\) [1], \(O(5.283^n)\) [6], \(O(2.4423^n)\) [3], and \(O(2.2461^n)\) [4]. The latter algorithm is an inclusion–exclusion algorithm relying on an \(O(1.2461^n)\)-time algorithm [16] for computing the number of independent sets in a graph as a subroutine. Their method transforms any polynomial-space \(O(c^n)\)-time algorithm for counting independent sets into a polynomial-space \(O((1+c)^n)\)-time algorithm for graph coloring. The running time bound for counting independent sets was subsequently improved by Fomin et al. [12] to \(O(1.2431^n)\) and by Wahlström [29] to \(O(1.2377^n)\). Wahlström’s algorithm is the current fastest published algorithm for counting independent sets of a graph; it uses polynomial space, and it works for the more general problem of computing the number of maximum-weight satisfying assignments of a 2-CNF formula. For a reduction from counting independent sets to counting maximum-weight satisfying assignments of a 2-CNF formula where the number of variables equals the number of vertices, see [9].
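To make the connection between counting independent sets and coloring concrete, the inclusion–exclusion identity underlying these algorithms states that G is k-colorable if and only if \(\sum _{X\subseteq V} (-1)^{|V|-|X|} s(X)^k > 0\), where s(X) is the number of independent sets of G[X]. The Python sketch below is a brute-force illustration of this identity, not the cited algorithms themselves: the \(O^*(2^n)\)-time algorithm computes all values s(X) simultaneously via the fast zeta transform, while the polynomial-space variant recomputes each s(X) with an \(O(c^{|X|})\)-time counting subroutine, which yields the \(O((1+c)^n)\) bound mentioned above. Function names and the graph representation (a dictionary of neighbor sets) are ours.

```python
from itertools import combinations

def count_independent_sets(adj, X):
    """Number of independent sets of G[X] (brute force; the cited algorithms
    replace this with a much faster polynomial-space counting subroutine)."""
    X = list(X)
    total = 0
    for size in range(len(X) + 1):
        for sub in combinations(X, size):
            S = set(sub)
            if all(u not in S for v in S for u in adj[v]):
                total += 1
    return total

def is_k_colorable(adj, k):
    """Inclusion-exclusion test: G is k-colorable iff
    sum_{X subseteq V} (-1)^(|V|-|X|) * s(X)^k > 0."""
    V = list(adj)
    n = len(V)
    total = 0
    for size in range(n + 1):
        for X in combinations(V, size):
            total += (-1) ** (n - size) * count_independent_sets(adj, X) ** k
    return total > 0

# A triangle is 3-colorable but not 2-colorable.
triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
assert not is_k_colorable(triangle, 2) and is_k_colorable(triangle, 3)
```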

We note that Junosza-Szaniawski and Tuczynski [23] present an algorithm for counting independent sets with running time \(O(1.2369^n)\) in a technical report that also strives to disconnect low-degree graphs. For graphs with maximum degree 3 that have no degree-3 vertex with all neighbors of degree 3, they present a new algorithm with running time \(2^{n_3/5+o(n)}\), where \(n_3\) is the number of degree-3 vertices, and the overall running time improvement comes from plugging this result into Wahlström’s [29] previously fastest algorithm for the problem. However, we note that the \(2^{n_3/5+o(n)}\) running time for counting independent sets can easily be obtained from previous results. Namely, the problem of counting independent sets is a polynomial constraint satisfaction problem (PCSP) with domain size 2, as shown in [27]. The algorithm of [20] for PCSPs preprocesses all degree-2 vertices, leaving a cubic graph on \(n_3\) vertices that is solved in \(2^{n_3/5+o(n)}\) time. Improving on this bound is challenging. Degree-3 vertices with all neighbors of degree 2 need special attention, since branching on them affects the degree-3 vertices of the graph in exactly the same way as for the much more general PCSP problem, whereas for other degree-3 vertices one can take advantage of the asymmetric nature of the typical independent set branching (i.e., we can delete the neighbors when counting the independent sets containing the vertex we branch on).

Our Results.

We present a polynomial-space algorithm computing the number of independent sets of any input graph G in time \(O(1.2356^n)\), where n is the number of vertices of G. Our algorithm is a branching algorithm that initially works similarly to Wahlström’s algorithm, where we slightly improve the analysis using potentials (as, e.g., in [19, 22, 28]) to amortize some of the worst branching cases with better ones. This algorithm uses a branching strategy that essentially ensures that neither the maximum degree nor the average degree of the graph increases. This makes it possible to divide the analysis of the algorithm into sections depending on what local structures can still occur in the graph, use a separate measure for the analysis of each section, and combine these measures into a compound (piecewise linear) measure for the analysis of the overall algorithm.

We also design a dedicated subroutine to handle instances where the maximum degree is 3 and no vertex has three neighbors of degree 3; it is designed and analyzed using the recently introduced Separate, Measure and Conquer technique [20]. In this subroutine, the average degree of the graph is at most \({8}/{3}\). It computes a small balanced separator of the graph and prefers to branch on vertices in the separator, adjusting the separator as needed by the analysis, and reaping a huge benefit when the separator is exhausted and the resulting connected components can be handled independently. The Separate, Measure and Conquer technique helps to amortize this sudden gain against the analysis of the previous branchings, for an overall improvement of the running time.

Since using a separator restricts our choice in the vertices to branch on, we use the structure of the graph and its separation to upper bound the number of unfavorable branching situations and adapt our measure accordingly. Namely, the algorithm avoids branching on degree-3 vertices in the separator with all neighbors of degree 2 as long as possible, often rearranging the separator to avoid this case. In our analysis we can then upper bound the number of unfavorable branchings and give the central vertex involved in such a branching a special weight and role in the analysis. We call these vertices spider vertices. Our meticulous analysis of this subroutine upper bounds its running time by \(O(1.0963^n)\). For graphs with maximum degree at most 3, we obtain a running time of \(O(1.1389^n)\). This improvement for small degree graphs is bootstrapped, using Wahlström’s compound measure analysis, to larger degrees, and gives a running time improvement to \(O(1.2356^n)\) for counting independent sets of arbitrary graphs and to \(O(2.2356^n)\) for graph coloring. Bootstrapping an exponential-space pathwidth-based \(O(1.1225^n)\) time algorithm [14] for cubic graphs instead, we obtain an exponential-space algorithm for counting independent sets with running time \(O(1.2330^n)\). A preliminary version of this paper appeared in the proceedings of COCOON 2017 [18].

2 Methods

Measure and Conquer. The analysis of our algorithm is based on the Measure and Conquer method [13]. A measure for a problem (or its instances) is a function from the set of all instances of the problem to the set of non-negative reals. Modern branching analyses often use a potential function as a measure, which gives a more fine-grained way of tracking the progress of a branching algorithm than a measure that merely counts the vertices or edges of the graph. The following lemma is at the heart of our analysis. It generalizes a similar lemma from [19] to the treatment of subroutines. The proof of the lemma (see Lemma 2.6 in [17]) is a simple inductive argument. It uses a measure \(\mu \) to upper bound the number of leaves of any search tree of an algorithm A, a measure \(\eta \) to upper bound the height of such a search tree, and a measure \(\mu _B\) that upper bounds the running time of an algorithm B that algorithm A may call as a subroutine instead of branching.

Lemma 1

(Lemma 2.6 in [17]) Let A be an algorithm for a problem P. Let B be an algorithm for a class \({\mathcal {C}}\) of instances of P, and let \(c \ge 0\) and \(r>1\) be constants. Let \(\mu (\cdot ), \mu _B(\cdot ), \eta (\cdot )\) be measures for P such that for any instance I from \({\mathcal {C}}\), \(\mu _B(I) \le \mu (I)\). Suppose that for any input instance I, A either solves P on \(I\in {\mathcal {C}}\) by invoking B with running time \(O(\eta (I)^{c+1} r^{\mu _B(I)})\), or reduces I to k instances \(I_1,\ldots ,I_k\), solves these recursively, and combines their solutions to solve I, using time \(O(\eta (I)^{c})\) for the reduction and combination steps (but not the recursive solves), such that

$$\begin{aligned} (\forall i) \quad \eta (I_i)&\le \eta (I)-1 \text {, and} \end{aligned}$$
(1)
$$\begin{aligned} \sum _{i=1}^k r^{\mu (I_i)}&\le r^{\mu (I)} , \end{aligned}$$
(2)

then A solves any instance I in time \(O(\eta (I)^{c+1} r^{\mu (I)})\).

When Algorithm A does not invoke Algorithm B, we have the usual Measure and Conquer analysis. Here, \(\mu \) is used to upper bound the number of leaves of the search tree and deserves the most attention, while \(\eta \) is usually a polynomial measure that is used to upper bound the depth of the search tree. For handling subroutines, it is crucial that the measure does not increase when Algorithm A hands over the instance to Algorithm B, which is why we require that \(\mu _B(I)\le \mu (I)\).

Compound analysis. We can view Wahlström’s compound analysis [29] as a repeated application of Lemma 1. For example, there is one subroutine \(A_3\) for when the maximum degree of the graph is 3. The algorithm then prefers to branch on a degree-3 vertex with all neighbors of degree 3. After all such vertices have been exhausted, the algorithm calls a new subroutine \(A_{8/3}\) that takes as input a graph with maximum degree 3 where no degree-3 vertex has only degree-3 neighbors. In this case the average degree of the graph is at most \({8}/{3}\), since every degree-3 vertex now has at least one neighbor of degree 2, and the algorithm prefers to branch on vertices of degree 3 that have two neighbors of degree 3, etc. The analysis requires that the measure for the analysis of \(A_{8/3}\) is at most the measure for \(A_{3}\) for the instance that is handed by \(A_3\) to \(A_{8/3}\). In an optimal analysis, we expect the measure for such an instance to be equal in the analysis of \(A_3\) and \(A_{8/3}\), and Wahlström actually imposes equality at the pivot point \({8}/{3}\).
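Concretely, writing \(n_2\) and \(n_3\) for the numbers of degree-2 and degree-3 vertices (degree-0 and degree-1 vertices are removed by simplification), every degree-3 vertex has at least one degree-2 neighbor and each degree-2 vertex can serve as such a neighbor for at most two degree-3 vertices; since the average degree is increasing in \(n_3\), this gives

$$\begin{aligned} n_3 \le 2 n_2 \quad \Longrightarrow \quad d(G) = \frac{3 n_3 + 2 n_2}{n_3 + n_2} \le \frac{3 \cdot 2 n_2 + 2 n_2}{2 n_2 + n_2} = \frac{8}{3}. \end{aligned}$$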

Separate, Measure and Conquer. In our case, the \(A_{8/3}\) algorithm is based on a technique from [20] known as Separate, Measure and Conquer. For small-degree graphs, we can compute small balanced separators in polynomial time. The algorithm then prefers to branch on vertices in the separator. The Separate, Measure and Conquer technique makes it possible to distribute the large gain obtained by disconnecting the instance over the previous branching vectors. While, often, the measure is made up of weights that are assigned to each vertex, this method assigns these weights only to the larger part of the graph that is separated from the rest by the separator, and somewhat larger weights to the vertices in the separator. See (3) on page 11 for the measure we use in the analysis. Thus, after exhausting the separator, the measure accurately reflects the “amount of work” left to do. We artificially increase the measure of very balanced instances by small penalty weights; this is done because branching on vertices can change the measure of the parts that are separated by the separator, and the branching strategy might not always be able to make most of its progress on the large side. Since we may exhaust the separators a logarithmic number of times, and computing a new separator might introduce a penalty term each time, the measure also includes a logarithmic term that counteracts these artificial increases in measure and will in the end only contribute a polynomial factor to the running time. For an in-depth treatment of the method we refer to [20]. Since we use the Separate, Measure and Conquer method when the average degree drops to at most \({8}/{3}\), we slightly generalize the separation computation from [20], where the bound on the separator size depended only on the maximum degree. A separation (L, S, R) of a graph G is a partition of the vertex set into (possibly empty) sets L, S, R such that every path from a vertex in L to a vertex in R contains a vertex from S.

Lemma 2

Let \(B\in {\mathbb {R}}\). Let \(\mu \) be a measure for graph problems such that for every graph \(G=(V,E)\), every \(R\subseteq V\), and every \(v\in V\), we have that \(|\mu (R\cup \{v\}) - \mu (R)| \le B\). Assume that \(\mu (R)\) can be computed in polynomial time. If a path decomposition of G of width at most k can be computed in polynomial time, then a separation (L, S, R) of G with \(|S|\le k\) and \(|\mu (L)-\mu (R)| \le B\) can also be computed in polynomial time.

Proof

The proof is the same as for the separation computation from [20], but we repeat it here for completeness. First, compute a path decomposition of width k in polynomial time. We view a path decomposition as a sequence of bags \((B_1, \dots , B_b)\), which are subsets of vertices such that for each edge of G there is a bag containing both endpoints, and for each vertex of G the bags containing this vertex form a non-empty consecutive subsequence. The width of a path decomposition is the maximum bag size minus one. We may assume that \(B_1=B_b=\emptyset \) and that every two consecutive bags \(B_i\), \(B_{i+1}\) differ by exactly one vertex; otherwise we insert between \(B_i\) and \(B_{i+1}\) a sequence of bags where the vertices from \(B_i \setminus B_{i+1}\) are removed one by one, followed by a sequence of bags where the vertices of \(B_{i+1} \setminus B_i\) are added one by one; this is the standard way to transform a path decomposition into a nice path decomposition of the same width whose number of bags is polynomial in the number of vertices [5]. Note that each bag is a separator: a bag \(B_i\) defines the separation \((L_i, B_i, R_i)\) with \(L_i = (\bigcup _{j=1}^{i-1} B_j)\setminus B_i\) and \(R_i = V \setminus (L_i\cup B_i)\). Between two consecutive bags, a single vertex either enters \(L_i\) or leaves \(R_i\), so \(\mu (L_i)-\mu (R_i)\) changes by at most B from one separation to the next. Since the first of these separations has \(L_1=\emptyset \) and the last one has \(R_b=\emptyset \), at least one of these separations has \(|\mu (L_i)-\mu (R_i)| \le B\). Finding such a bag can clearly be done in polynomial time. \(\square \)
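The separator computation of Lemma 2 is straightforward to implement. The sketch below is a minimal illustration (names and the representation of the decomposition as a list of vertex sets are ours), assuming a nice path decomposition with empty first and last bag and consecutive bags differing in one vertex, as in the proof above.

```python
def balanced_separation(bags, mu, B):
    """Return a separation (L, S, R) with S equal to some bag of the nice path
    decomposition `bags` and |mu(L) - mu(R)| <= B, as in Lemma 2.

    bags: list of sets of vertices; mu: measure on vertex sets; B: the bound
    on how much mu can change when a single vertex is added."""
    all_vertices = set().union(*bags)
    before = set()                   # union of the bags strictly to the left
    for bag in bags:
        L = before - bag             # L_i = (B_1 u ... u B_{i-1}) \ B_i
        R = all_vertices - L - bag   # R_i = V \ (L_i u B_i)
        if abs(mu(L) - mu(R)) <= B:
            return L, set(bag), R
        before |= bag
    raise ValueError("preconditions of Lemma 2 violated: no balanced bag found")
```

For example, with mu = len and B = 1 the lemma applies (adding one vertex changes the count by exactly one), and the returned bag splits the vertices outside it into two parts whose sizes differ by at most one.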

We will use the lemma for graphs with maximum degree 3 and graphs with maximum degree 3 and average degree at most \({8}/{3}\), for which path decompositions of width at most \({n}/{6}+o(n)\) and \({n}/{9}+o(n)\) can be computed in polynomial time, respectively [12, 14].

One disadvantage of using the Separate, Measure and Conquer method for \(A_{8/3}\) is that the algorithm needs to choose vertices for branching so that the size of the separator decreases in each branch. However, Wahlström’s algorithm defers branching on degree-3 vertices with all neighbors of degree 2 until this is no longer possible, since this case leads to the largest branching factor for degree 3. For our approach, we instead rearrange the separator in some cases until we are only left with spider vertices, a structure where our algorithm cannot avoid branching on a degree-3 vertex with all neighbors of degree 2. We give a special weight to these spider vertices and upper bound their number.

Potentials. To optimize the running time further, we also use potentials; see [19, 22, 28]. These are constant weights that are added to the measure if certain global properties of the instance hold. For instance, we may use them to slightly increase the measure when an unfavorable branching strategy needs to be used. The constraint (2) for this unfavorable case then becomes less constraining, while all branchings that can lead to this unfavorable case get tighter constraints. This allows us to amortize unfavorable cases with favorable ones.

3 Algorithm

We first introduce notation necessary to present the algorithm. Let V(G) and E(G) denote the vertex set and the edge set of the input graph G. For a vertex \(v \in V(G)\), its neighborhood, \(N_G(v)\), is the set of vertices adjacent to v. The closed neighborhood of a vertex v is \(N_G[v] = N_G(v) \cup \{v\}\). If G is clear from context, we use N(v) and N[v].

The degree of v is denoted \(d(v) = |N_G(v)|\). For two vertices u and v connected by a path \(P \subseteq V(G)\) (viewed as a set of vertices), if \(P \setminus \{u, v\}\) consists only of degree-2 vertices, then we call P a 2-path between u and v.

The maximum degree of G is denoted \(\varDelta (G)\) and \(d(G) = 2 |E(G)| / |V(G)|\) is its average degree. A cubic graph consists only of degree-3 vertices. A subcubic graph has maximum degree at most 3. A \((k_1, k_2,...,k_d)\) vertex is a degree-d vertex with neighbors of degree \(k_1, k_2, ... ,k_d\). A separation (L, S, R) of G is a partition of its vertex set into the three sets L, S, R such that no vertex in L is adjacent to any vertex in R. The sets L, S, and R are also known as the left set, separator, and right set. Using a notion from [20], a separation (L, S, R) of G is balanced with respect to a measure \(\mu \) and a constant B if \(|\mu (R) - \mu (L)| \le 2B\), and imbalanced if \(|\mu (R) - \mu (L)| > 2B\).

By convention, \(\mu (R) \ge \mu (L)\); otherwise, we swap L and R. We will now describe the algorithm #IS, which takes as input a graph G, a separation (L, S, R), and a cardinality function \({{\textbf {c}}} : \{0,1\} \times V(G) \rightarrow {\mathbb {N}}\), and computes the number of independent sets of G weighted by the cardinality function \({{\textbf {c}}}\). For brevity, let \({{\textbf {c}}}_{out}(v) = {{\textbf {c}}}(0,v)\) and \({{\textbf {c}}}_{in}(v) = {{\textbf {c}}}(1,v)\). More precisely, it computes

$$\begin{aligned} ind(G,{{\textbf {c}}}) = \sum _{\begin{array}{c} X\subseteq V(G) \\ X \text { is an independent set in } G \end{array}} \prod _{v\in X} {{\textbf {c}}}_{in}(v) \cdot \prod _{v\in V\setminus X} {{\textbf {c}}}_{out}(v). \end{aligned}$$

Note that for a cardinality function \({{\textbf {c}}}\) initialized to \({{\textbf {c}}}(0,v)={{\textbf {c}}}(1,v)=1\) for every vertex \(v\in V(G)\), we have that \(ind(G, {{\textbf {c}}})\) is the number of independent sets of G. Cardinality functions are dynamic, are used for bookkeeping during the branching process, and have been used in this line of work before.
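For reference, here is a brute-force evaluation of \(ind(G,{{\textbf {c}}})\) directly from the definition above. It is exponential time and only meant as a specification of what #IS computes; the graph representation (a dictionary of neighbor sets) and the function name are ours.

```python
from itertools import combinations

def ind(adj, c):
    """ind(G, c) = sum over independent sets X of
    prod_{v in X} c[1, v] * prod_{v not in X} c[0, v]."""
    vertices = list(adj)
    total = 0
    for size in range(len(vertices) + 1):
        for subset in combinations(vertices, size):
            X = set(subset)
            if any(u in X for v in X for u in adj[v]):
                continue  # X contains an edge, hence is not independent
            weight = 1
            for v in vertices:
                weight *= c[1, v] if v in X else c[0, v]
            total += weight
    return total

# With c(0, v) = c(1, v) = 1 this is the number of independent sets:
# a path a-b-c has 5 of them ({}, {a}, {b}, {c}, {a, c}).
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
ones = {(bit, v): 1 for v in path for bit in (0, 1)}
assert ind(path, ones) == 5
```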

The separation (L, S, R) is initialized to \((\emptyset , \emptyset , V(G))\) and will only come into play when G is subcubic and has no (3,3,3) vertex. In this case, the algorithm calls a subroutine #3IS, which constitutes the main contribution of this paper. #3IS computes a balanced separation of G, prefers to branch on vertices in the separator, readjusts the separator as needed, and is analyzed using the Separate, Measure and Conquer method.

Dragging refers to moving a vertex or a set of vertices of G from one part of the separation (L, S, R) to another, creating a new separation \((L', S', R')\).

Skeleton Graph. The skeleton graph \(\varGamma (G)\), or just \(\varGamma \), of a subcubic graph G is a graph whose vertices are in bijection with the degree-3 vertices of G. Two vertices in \(\varGamma \) are adjacent if the corresponding vertices are adjacent in G or there exists a 2-path between the corresponding vertices in G. If G has a separation (L, S, R), then \(L_{\varGamma }, S_{\varGamma }, R_{\varGamma }\) denote the degree-3 vertices of L, S, and R, respectively, viewed as vertices of \(\varGamma \).
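One possible way to compute the skeleton graph is to walk along 2-paths from every degree-3 vertex. The following is a minimal sketch under our own conventions (simple graph given as a dictionary of neighbor sets), not code from the paper.

```python
def skeleton(adj):
    """Skeleton graph of a subcubic graph: vertices are the degree-3 vertices
    of G, and two of them are adjacent if they are adjacent in G or joined by
    a 2-path (a path whose internal vertices all have degree 2)."""
    deg3 = {v for v, nb in adj.items() if len(nb) == 3}
    gamma = {v: set() for v in deg3}
    for v in deg3:
        for start in adj[v]:
            prev, cur = v, start
            # Follow degree-2 vertices until a vertex of another degree appears.
            while cur not in deg3 and len(adj[cur]) == 2:
                prev, cur = cur, next(u for u in adj[cur] if u != prev)
            if cur in deg3 and cur != v:
                gamma[v].add(cur)
                gamma[cur].add(v)
    return gamma
```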

Spider Vertices. As Wahlström’s [29] analysis showed, an unfavorable branching case occurs on (2, 2, 2) vertices. In our analysis we identify more specifically which kinds of (2, 2, 2) vertices are the most undesirable, and we amortize the weights of these vertices. We call these vertices spider vertices. Here \(\varGamma \) refers to the skeleton graph. A vertex s is a spider vertex if

  • \(s \in S\),

  • every neighbor of s is a (2,2,2) vertex, and

  • either:

    • \(|N_{\varGamma }(s) \cap L| = 2\) and \(N_{\varGamma }(s) \cap R = \{r\}\), where r is a (2,2,2) vertex; in this case we call s a left spider vertex,

    • \(|N_{\varGamma }(s) \cap R| = 2\) and \(N_{\varGamma }(s) \cap L = \{l\}\), where l is a (2,2,2) vertex; in this case we call s a right spider vertex, or

    • \(|N_{\varGamma }(s) \cap L| = 1\), \(|N_{\varGamma }(s) \cap R| = 1\), and \(N_{\varGamma }(s) \cap S = \{s'\}\), where \(s'\) is a (2,2,2) vertex; in this case we call both s and \(s'\) center spider vertices, which occur in pairs.

A left spider vertex \(s \in S\) can be dragged into L along with the 2-path from s to r. If this occurs, then r becomes a right spider vertex, and vice versa (Fig. 1).

Fig. 1 A left spider vertex s

Multiplier Reduction. We use a reduction called the multiplier reduction to efficiently simplify graphs that have a cut vertex. Suppose \(G = (V,E)\) has a separation \((V_1, \{x\}, V_2)\) and \(G_1 = G[V_1 \cup \{x\}]\) has measure at most a constant B. For any vertex set \(U \subseteq V\) and any subgraph \(G' = (V',E')\) of G, let \(G'[U]\) denote \(V' \cap U\), in other words, the vertices of U which remain in the subgraph \(G'\). The multiplier reduction can be applied to compute #IS(\(G, (L,S,R), {{\textbf {c}}}\)) as follows.

  1. Let:

    • \(G_{\text {out}} = G_1 \setminus \{x\}\)

    • \(G_{\text {in}} = G_1 \setminus N_{G_1}[x]\)

    • \(c_{\text {out}} = \)#IS(\(G_{\text {out}}\), (\(G_{\text {out}}[L],G_{\text {out}}[S],G_{\text {out}}[R]\)), \({{\textbf {c}}}\))

    • \(c_{\text {in}} = \)#IS(\(G_{\text {in}}\), (\(G_{\text {in}}[L],G_{\text {in}}[S],G_{\text {in}}[R]\)), \({{\textbf {c}}}\))

  2. Modify \({{\textbf {c}}}\) such that \({{\textbf {c}}}_{\text {in}}(x) = {{\textbf {c}}}_{\text {in}}(x) \cdot c_{\text {in}}\) and \({{\textbf {c}}}_{\text {out}}(x) = {{\textbf {c}}}_{\text {out}}(x) \cdot c_{\text {out}}\).

  3. Return #IS(\(G[V_2 \cup \{x\}]\), \((L \setminus V_1, S \setminus V_1, R \setminus V_1)\), \({{\textbf {c}}}\)).

Our measure will make sure that, for any instance whose measure is upper bounded by a constant, steps 1 and 2 can be performed in polynomial time. Since the measure of \(G_1\) is upper bounded by a constant, \(G_1\) is processed in polynomial time by the multiplier reduction.

Lazy 2-separator. Suppose there is a vertex x initially chosen to branch on, as well as two vertices \(\{y,z\} \subset V(G)\) with \(d(y) \ge 3\) and \(d(z) \ge 3\), such that x belongs to an induced subgraph of G of constant measure separated from the rest of the graph by the separator \(\{y,z\}\). We call \(\{y,z\}\) a lazy 2-separator for the vertex x. Similar to Wahlström’s elimination of size-2 separators in [28], if there exists a lazy 2-separator \(\{y,z\}\) for x, then line 15 of #IS (Algorithm 1) branches on y instead of x. A multiplier reduction will be performed on z in the recursive calls. Prioritizing lazy 2-separators allows us to exclude some unfavorable cases when branching on x.

Associated Average Degree. Similar to [29], we define the associated average degree of a vertex \(x \in V(G)\) in a graph G with average degree d(G) as \(\alpha (x) / \beta (x)\), where

$$\begin{aligned} \alpha (x)&= d(x) + | \{ y \in N(x) : d(y)< d(G)\} |, \text { and}\\ \beta (x)&= 1 + \sum _{ y \in N(x) \text {, } d(y) < d(G)} 1/d(y) . \end{aligned}$$

By selecting vertices with high associated average degree, our algorithm prioritizes branching on vertices with larger decreases in measure.
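In code, the definition reads as follows (a small sketch under our own conventions, with the graph given as a dictionary of neighbor sets; every neighbor y has \(d(y)\ge 1\), so the division is safe).

```python
def associated_average_degree(adj, x):
    """alpha(x) / beta(x) for a vertex x of a graph given as neighbor sets."""
    d_avg = sum(len(nb) for nb in adj.values()) / len(adj)  # d(G) = 2|E|/|V|
    low = [y for y in adj[x] if len(adj[y]) < d_avg]        # neighbors of below-average degree
    alpha = len(adj[x]) + len(low)
    beta = 1 + sum(1 / len(adj[y]) for y in low)
    return alpha / beta
```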

Branching. We now outline the branching routine used to recursively solve smaller instances of the problem. Suppose we have a graph G, a separation (L, S, R), and a cardinality function \({{\textbf {c}}}\). For a vertex x, we refer to the following steps as branching on x (a minimal code sketch of this step follows the list).

  1. Let:

    • \(G_{\text {out}} = G \setminus \{x\}\)

    • \(G_{\text {in}} = G \setminus N[x]\)

    • \(c_{\text {out}} = \) #IS(\(G_{\text {out}}\), (\(G_{\text {out}}[L],G_{\text {out}}[S],G_{\text {out}}[R]\)), \({{\textbf {c}}}\))

    • \(c_{\text {in}} = \) #IS(\(G_{\text {in}}\), (\(G_{\text {in}}[L],G_{\text {in}}[S],G_{\text {in}}[R]\)), \({{\textbf {c}}}\))

    • \(c'_{\text {out}} = {{\textbf {c}}}_{\text {out}}(x)\)

    • \(c'_{\text {in}} = {{\textbf {c}}}_{\text {in}}(x) \cdot \prod _{v \in N(x)} {{\textbf {c}}}_{\text {out}}(v)\)

  2. Return \(c'_{\text {out}}\cdot c_{\text {out}} + c'_{\text {in}}\cdot c_{\text {in}}\).
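The following minimal sketch implements just this combination step together with the trivial base case. It is an illustration only: the separation, the cardinality-function updates of the multiplier reduction, the simplification rules, and the careful choice of branching vertex of the real #IS are all omitted (the vertex choice below is a placeholder), but the recursion shows why the two branches combine to \(ind(G,{{\textbf {c}}})\).

```python
from math import prod

def remove(adj, S):
    """Induced subgraph on the vertices outside S."""
    return {v: adj[v] - S for v in adj if v not in S}

def branch_count(adj, c):
    """Compute ind(G, c) by branching on a vertex x: either x is out of the
    independent set (delete x), or x is in it (delete N[x] and charge the
    c_out-weights of its neighbors)."""
    if not adj:
        return 1
    x = max(adj, key=lambda v: len(adj[v]))  # placeholder choice of branching vertex
    c_out_branch = c[0, x] * branch_count(remove(adj, {x}), c)
    c_in_branch = (c[1, x] * prod(c[0, v] for v in adj[x])
                   * branch_count(remove(adj, {x} | adj[x]), c))
    return c_out_branch + c_in_branch

# Sanity check against direct enumeration: a triangle has 4 independent sets.
triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
ones = {(bit, v): 1 for v in triangle for bit in (0, 1)}
assert branch_count(triangle, ones) == 4
```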

Branching Vectors. Constraints of the type referred to in (2) are presented as branching vectors \((\delta _1, \delta _2)\), which equate to the constraint \(2^{-\delta _1} + 2^{-\delta _2} \le 1\).

[Algorithm listings 1–4: #IS, #3IS, simplify, and spider]

4 Running Time Analysis

This section describes the running time analysis for #IS and #3IS, conducted via compound measures [29] using Lemma 1. Compound measures are piecewise measures which apply a finer analysis to specific states during the execution of the algorithm. In our case, the piecewise nature of compound measures allows different analyses and running times to apply for different average degrees of the input graph.

4.1 Measures

Measure with no (3,3,3) vertex. When using the Separate, Measure and Conquer technique from [20], the measure of a subcubic graph instance G with no (3,3,3) vertex consists of additive components \(\mu _s\) and \(\mu _r\), the measure of the vertices in the separator and of those in L or R, respectively. Let \(S' \subseteq S\) be the set of all spider vertices; \(s_i\) and \(r_i\) denote the weight attributed to a degree-i vertex in the separator and to a degree-i vertex in R or L, respectively. Left and right spider vertices have weight \(s_3'\). In a center spider vertex pair s and \(s'\), one of them has weight \(s_3'\) while the other takes on an ordinary weight of \(s_3\). The weights \(s_3\) and \(s_3'\) attributed to spider vertices allow for amortization of the spider vertex cases against non-spider vertices. Define the measure \(\mu _{8/3}\) as

$$\begin{aligned} \mu _{{8}/{3}}&= \mu _s(S) + \mu _r(R) + \mu _o(L,S,R), \text { where}\nonumber \\ \mu _s(S)&= |S'| \cdot s_3' + \sum _{v \in S\setminus S'} s_{d(v)},\nonumber \\ \mu _r(R)&= \sum _{v \in R} r_{d(v)},\nonumber \\ B&= 6 s_3, \text { and}\nonumber \\ \mu _o(L,S,R)&= \max \left\{ 0, B - \frac{\mu _r(R) - \mu _r(L)}{2}\right\} + (1+B) \cdot \log _{1 + \epsilon }(\mu _r(R) + \mu _s(S)). \end{aligned}$$
(3)

We also require that \(s_i \ge s_{i-1}\) and \(r_i \ge r_{i-1}\) for \(i \in \{1,2,3\}\). The constant \(B=6s_3\) is larger than the maximum change in imbalance in each transformation in the analysis, except the separation transformation.

The constant \(\epsilon > 0\) is required to satisfy

$$\begin{aligned}\mu _r(R) + \mu _s(S) \ge (1+\epsilon )(\mu _r(R') + \mu _s(S'))\end{aligned}$$

which requires that re-separating \((\emptyset , S, R)\) into \((L',S',R')\) reduces \(\mu _r(R)+\mu _s(S)\) by a constant factor, namely \(1+\epsilon \).

We use the measure \(\mu _r\) defined in (3) to compute the separation in our algorithm using Lemma 2.

Lemma 3

For a balanced separation (L, S, R) of a graph G, computed by Lemma 2, with average degree \(d = d(G)\), maximum degree at most 3, and no (3,3,3) vertex, an upper bound for the measure \(\mu _{8/3}\) is:

$$\begin{aligned} \mu _{8/3}(d) \le {\left\{ \begin{array}{ll} \frac{n}{6} (d-2) s_3' + \frac{1}{2}\left( \frac{5n}{6} (d-2) r_3 + n(3-d) r_2\right) \\ +\mu _o(L,S,R) + o(n) &{} \text {if } 2 \le d \le \frac{28}{11} \\ \frac{n}{4} (8 - 3d) s_3' + \frac{n}{12} (11 d - 28) s_3 \\ +\frac{1}{2}\left( \frac{5n}{6} (d-2) r_3 + n(3-d) r_2 \right) + \mu _o(L,S,R) + o(n) &{} \text {if } \frac{28}{11} < d \le \frac{8}{3} \\ \end{array}\right. } \end{aligned}$$

which is maximised when \(d = \frac{8}{3}\) with the value

$$\begin{aligned}\mu _{8/3} \le \frac{n}{9}s_3 + \frac{1}{2}\left( \frac{5n}{9}r_3 + \frac{n}{3}r_2 \right) + \mu _o(L,S,R) + o(n)\end{aligned}$$

if constraints \(\frac{r_2}{2} \le \frac{s_3'}{11} + \frac{5r_3}{22} + \frac{5r_2}{22} \le \frac{s_3}{9}+ \frac{5r_3}{18} + \frac{r_2}{3}\) are satisfied.

Proof

Let \(d = d(G)\) be the average degree of G. For an appropriate upper bound of \(\mu _{8/3}\) we first consider the upper bound on the number of separator vertices,

$$\begin{aligned} \#\text {Spiders} \le |S| \le \frac{n_3}{6} + o(n_3) = \frac{n(d-2)}{6} + o(n) \end{aligned}$$
(4)

where \(n_3 = n(d - 2)\) is the number of degree-3 vertices in G, since a subcubic graph with \(n_3\) vertices of degree 3 has pathwidth at most \(\frac{n_3}{6} + o(n_3)\) [12].

As we have no (3,3,3) vertex, every degree-3 vertex is incident to an edge incident to a degree-2 vertex. Moreover, each spider vertex accounts for 4 more edges incident to degree-2 vertices. As the number of edges incident to degree-2 vertices, counted with multiplicity, is \(2n_2\), where \(n_2 = n(3 - d)\) is the number of degree-2 vertices in G, and at least \(n_3\) of these incidences are already accounted for by the degree-3 vertices, an upper bound on the number of spider vertices is:

$$\begin{aligned} \#\text {Spiders} \le \frac{2n_2 - n_3}{4} = n\left( 2 - \frac{3}{4}d \right) \end{aligned}$$
(5)

Since both upper bounds are valid for all \(2 \le d \le 8/3\), a more accurate upper bound can be found by taking the minimum of Eqs. (4) and (5). This results in:

$$\begin{aligned} \#\text {Spiders} \le {\left\{ \begin{array}{ll} \frac{n}{6} (d-2) + o(n) &{} \text { if } 2 \le d \le \frac{28}{11} \\ n\left( 2 - \frac{3}{4}d \right) &{} \text { if } \frac{28}{11} < d \le \frac{8}{3} \\ \end{array}\right. } \end{aligned}$$
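For reference, the pivot point \(d = \frac{28}{11}\) is exactly where the two upper bounds coincide (ignoring the \(o(n)\) term):

$$\begin{aligned} \frac{n(d-2)}{6} = n\left( 2 - \frac{3}{4}d\right) \quad \Longleftrightarrow \quad 2(d-2) = 24 - 9d \quad \Longleftrightarrow \quad d = \frac{28}{11}. \end{aligned}$$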

As \(|S| \le \frac{n}{6} (d-2) + o(n)\) for all \(2 \le d \le \frac{8}{3}\), and the weight \(s_3'\) for spider vertices is greater than the weight \(s_3\) for regular non-spider degree-3 vertices in the separator, an upper bound for \(\mu _{8/3}\) would have as many spider vertices in S as possible for a given average degree d. For \(2 \le d \le \frac{28}{11}\) it is possible for all vertices in S to be spider vertices, so this gives the greatest value of \(\mu _{8/3}\). For \(\frac{28}{11} < d \le \frac{8}{3}\), however, we use Eq. (4) to upper bound |S| and place in S as many spider vertices with weight \(s_3'\) as Eq. (5) allows, with the rest of the vertices in S having weight \(s_3\). Therefore,

$$\begin{aligned} \mu _{8/3} \le {\left\{ \begin{array}{ll} \frac{n}{6} (d-2) s_3' + \frac{1}{2}\left( \frac{5n}{6} (d-2) r_3 + n(3-d) r_2\right) \\ + \mu _o(L,S,R) + o(n) &{} \text {if } 2 \le d \le \frac{28}{11} \\ \frac{n}{4} (8 - 3d) s_3' + \frac{n}{12} (11 d - 28) s_3 + \\ \frac{1}{2}\left( \frac{5n}{6} (d-2) r_3 + n(3-d) r_2 \right) + \mu _o(L,S,R) + o(n) &{} \text {if } \frac{28}{11} < d \le \frac{8}{3}. \\ \end{array}\right. } \end{aligned}$$

Let \(f_1(d) = \frac{n}{6} (d-2) s_3' + \frac{1}{2}(\frac{5n}{6} (d-2) r_3 + n(3-d) r_2)\) and \(f_2(d) = \frac{n}{4} (8 - 3d) s_3' + \frac{n}{12} (11 d - 28) s_3 + \frac{1}{2}(\frac{5n}{6} (d-2) r_3 + n(3-d) r_2)\). We notice that \(f_1\) and \(f_2\) are both linear functions in d and \(f_1(\frac{28}{11}) = f_2(\frac{28}{11})\) meaning that the endpoints \(f_1(2), f_2\left( \frac{28}{11}\right) \), and \(f_2\left( \frac{8}{3}\right) \) are the only points of interest. For the piecewise measure to be valid, and not result in an increase of measure as d decreases, we consider the values of \(\mu _{8/3}\) at the endpoint values of \(2, \frac{28}{11}\), and \(\frac{8}{3}\) and require that \(f_1(2) \le f_2\left( \frac{28}{11}\right) \le f_2\left( \frac{8}{3}\right) \) which results in the constraints

$$\begin{aligned} \frac{r_2}{2} \le \frac{s_3'}{11} + \frac{5r_3}{22} + \frac{5r_2}{22} \le \frac{s_3}{9} + \frac{5r_3}{18} + \frac{r_2}{3}. \end{aligned}$$

The maximum value, achieved by \(f_2\) at average degree \(d = \frac{8}{3}\), is:

$$\begin{aligned} \mu _{8/3}\le & {} f_2\left( \frac{8}{3}\right) + \mu _o(L,S,R) + o(n)\\= & {} \frac{n}{9}s_3 + \frac{1}{2}\left( \frac{5n}{9}r_3 + \frac{n}{3}r_2 \right) + \mu _o(L,S,R) + o(n). \end{aligned}$$

\(\square \)

General Measure. In order to analyze higher-degree cases, we use a measure of the form

$$\begin{aligned} \mu _i(G) = \sum _{v \in V(G)} w_{d(v)} + \mu _o(L,S,R) \quad \text { where } \varDelta (G) = i \end{aligned}$$

for each part of the compound measure, as defined by the parameter i, which represents different branching cases. When branching, we always attempt to find the highest degree vertex, with neighbors of highest degrees. Once this kind of vertex is exhausted, we will never branch on this kind of vertex again, and transition to a different case and apply a different measure \(\mu _i(G)\). These cases, described in Table 1, are defined by their highest possible average degree.

The term \(\mu _o(L,S,R)\) is the same sub-linear term from the Separate, Measure and Conquer analysis in Eq. (3) which needs to be propagated into the higher degree analyses.

4.2 Degree 3 Analysis

The problem of counting independent sets, #IS, can be solved in polynomial time when \(\varDelta (G) \le 2\) [26]. However, already for subcubic graphs the problem becomes much harder: Greenhill [21] proves that counting independent sets in graphs of maximum degree 3 is #P-hard.
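For completeness, the easy case can be handled by a simple dynamic program: a graph of maximum degree 2 is a disjoint union of isolated vertices, paths, and cycles, the counts of the components multiply, and each component type has a two-line recurrence. The sketch below is unweighted and uses our own naming; the weighted version with a cardinality function is analogous.

```python
def count_is_path(k):
    """Independent sets of a path on k vertices (the empty set included)."""
    excluded, included = 1, 0   # last processed vertex excluded / included
    for _ in range(k):
        excluded, included = excluded + included, excluded
    return excluded + included

def count_is_cycle(k):
    """Independent sets of a cycle on k >= 3 vertices: fix one vertex and
    either exclude it (a path on k-1 vertices remains) or include it and
    exclude its two neighbors (a path on k-3 vertices remains)."""
    return count_is_path(k - 1) + count_is_path(k - 3)

assert count_is_path(3) == 5 and count_is_cycle(4) == 7
```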

Lemma 4

Algorithm #IS applied to a graph G with \(\varDelta (G) \le 3\) and no (3, 3, 3) vertex has running time \(O(1.0963^n)\).

Proof

Lemma 4 will be proved over the next few subsections. We will analyze the running time with respect to the measure \(\mu _{8/3}\) described in Eq. (3). Weights will be attributed to vertices depending on structural properties such as their degree and whether or not they are spider vertices. As suggested in [20], we will provide constraints that these vertex weights need to satisfy; the provided values minimize an upper bound of the measure of the form \(\alpha n\). The measure \(\mu _{8/3}\) can be viewed in two regimes: a balanced separation, where \(\mu _r(R) - \mu _r(L) \le 2B\), resulting in \(\mu _{8/3} = \mu _s(S) + \frac{1}{2}( \mu _r(R) - \mu _r(L)) + \mu _o(L,S,R)\), and an imbalanced separation, where \(\mu _r(R) - \mu _r(L) > 2B\), resulting in \(\mu _{8/3} = \mu _s(S) + \mu _r(R) + \mu _o(L,S,R)\). To characterize decreases in vertex degrees, let \(\varDelta s_i = s_i - s_{i-1}\) and \(\varDelta r_i = r_i - r_{i-1}\). Trivial constraints are

$$\begin{aligned} \begin{aligned} r_0 = r_1 = 0 \quad \quad&s_0 = s_1 = 0. \end{aligned} \end{aligned}$$

Our algorithm handles 2-paths as if they were single edges. This is due to the consideration of the skeleton graph, which is used when deciding how to branch. By using the skeleton graph in the algorithm, which treats 2-paths as single edges, we also set \(r_2 = 0\). This means that degree-2 vertices do not impact the measure, since we never need to branch on these kinds of vertices and only ever simplify them.

Constraints from #IS. The simplification rules in lines 2 to 8 of #IS take polynomial time. If we are given a graph G with \(\varDelta (G) \le 3\) and no (3, 3, 3) vertex and the lazy 2-separator rule in line 15 does not apply, then we enter the subroutine #3IS.

Constraints from simplify. The simplification rules in simplify either reduce the separator size by removing a vertex, or drag degree-2 vertices out of S so that S consists only of degree-3 vertices. For vertex dragging to R in line 2 of simplify, the most constraining instances are the balanced ones:

$$\begin{aligned} - s_d + r_d \le 0 \text { where } d \in \{2,3\} \text { and } -s_3' + r_3 \le 0 . \end{aligned}$$

However, for vertex dragging to L in line 4, the imbalanced instances are most constraining

$$\begin{aligned} - s_d + 1/2 \cdot r_d \le 0 \text { where } d \in \{2,3\} \text { and } -s_3' + 1/2 \cdot r_3 \le 0 \end{aligned}$$

but this is no more constraining than the constraints generated from line 2 of simplify.

Line 8 drags to R the degree-2 separator vertex s and a 2-path ending in a vertex l, which is either in S or has degree 3; in the latter case l is itself dragged into S. This is most constraining in the balanced case:

$$\begin{aligned} -s_2 + s_3' + \frac{1}{2} \cdot (r_2 - r_3) \le 0; \end{aligned}$$

In line 11 the most constraining case is

$$\begin{aligned} -s_2 + s_3' - r_3 \le 0 . \end{aligned}$$

The operations in line 16 drag neighbors and associated 2-paths from L into R, also removing \(s \in S\). Since \(r_2 = 0\) we can simplify the most constraining case, which is imbalanced, to:

$$\begin{aligned}-s_3 + 2 r_3\le 0.\end{aligned}$$

Line 21 is most constraining in the balanced case, which induces the constraint

$$\begin{aligned}-s_3 \le 0.\end{aligned}$$

Claim

After simplify has been applied to a graph G and its separation (L, S, R), for each \(s \in S\) there exists \(r \in N_{\varGamma }(s) \cap R\) such that \(N_{\varGamma }(r) \cap R \ne \emptyset \), and there also exists \(l \in N_{\varGamma }(s) \cap L\) such that \(N_{\varGamma }(l) \cap L \ne \emptyset \).

Proof

If there is a vertex s that does not satisfy the claim, then line 16 or 21 would trigger and remove s from S. \(\square \)

Constraints from spider. The first two conditions, in lines 3 and 7 of spider, aim to drag into the separator a (2,2,3) or (2,3,3) vertex on which it is more efficient to branch. In the worst case there is no change in measure, since s is replaced by r in the separator. Since the separation (L, S, R) is balanced, due to the context in which spider is called, moving \(P_r\) and r or \(P_l\) and l also does not change the measure, as L and R contribute equally to \(\mu _{8/3}\).

In line 14, s is a center spider vertex with attributed weight \(s_3'\). We branch on \(l \in L\), which is a skeleton neighbor of s. The for loop drags vertices which are skeleton neighbors of l, with no change in measure, so that when l is branched on, it obtains a decrease in measure of at least \(\frac{3}{2}r_3\) from its neighbors. We choose to branch on l because, in both subproblems, branching on l causes the removal of s from the separator, as s no longer has neighbors in L. This results in the branching constraints described in Fig. 2 (recall from page 9):

$$\begin{aligned} \left( s_3' + \frac{1}{2}(r_3 + 2\varDelta r_3), s_3' + \frac{1}{2}(r_3 + 2\varDelta r_3) \right) . \end{aligned}$$

Line 17 finds a valid left or right spider vertex and branches on it, resulting in the constraints

$$\begin{aligned} \left( s_3' + \frac{3}{2}\varDelta r_3, s_3' + \frac{3}{2}\varDelta r_3 \right) . \end{aligned}$$

Constraints from #3IS - Computing Separator. Much like in [20], computing a new separator in line 2 of #3IS imposes the constraint

$$\begin{aligned} \frac{s_3'}{6} + \frac{5}{12} r_3 < r_3, \quad \text {equivalently} \quad s_3' < \frac{7}{2} r_3. \end{aligned}$$

In line 5 the algorithm simplifies the graph G and its separation (L, S, R) through a call to simplify (Algorithm 3). This algorithm applies simplification rules which also impose new constraints to be satisfied in the analysis.

The constraints for the reduction rule in line 14 are the same as the constraints for line 11 in simplify. We now deal with branching on lazy 2-separators and regular branching, in both the imbalanced and the balanced case, separately. As decreasing a degree-3 vertex weight to a degree-2 vertex weight may result in the introduction of a spider vertex with weight \(s_3'\) instead of \(s_3\), let \(\delta = s_3' - s_3\) be the increase in measure from the creation of a spider vertex. This increase in measure via \(\delta \) is offset by either a \(\varDelta s_3\) or a \(\frac{1}{2}\varDelta r_3\) decrease in measure in the same constraint.

Fig. 2 Worst case configurations for spider vertex branching in spider

Constraints from #3IS - Balanced Lazy 2-Separator Branching. Suppose the instance is balanced and #3IS selects a vertex \(s \in S\), but s has a lazy 2-separator \(\{y,z\}\), on which line 10 of #3IS branches instead of s. Since the degree-3 vertices y, z and s are all removed in both branches of this problem and, due to Claim 4.2, for each of L and R there is another degree-3 vertex that is removed (see Fig. 3), we obtain the branching vector

$$\begin{aligned} \left( s_3 + \frac{1}{2}(2 r_3 + 2\varDelta r_3) - 2 \delta , s_3 + \frac{1}{2}(2 r_3 + 2\varDelta r_3) - 2 \delta \right) .\end{aligned}$$

The worst case contains measure increases of \(2 \delta \), since each of the two decreases of \(\frac{1}{2}\varDelta r_3\) could create a spider vertex, and there are at least 2 of them. We could have more \(\delta \) increases, but each additional one only occurs together with an additional \(\frac{1}{2} \varDelta r_3\) or \(\varDelta s_3\) decrease in the worst case. Since \(\delta \le \varDelta s_3 \le \frac{1}{2} \varDelta r_3\), the tightest constraint occurs at the smallest possible number of \(\delta \) terms.

Constraints from #3IS - Imbalanced Lazy 2-Separator Branching. Once again, we have a vertex \(s \in S\) and a lazy 2-separator \(\{y,z\}\), but the instance is imbalanced. First assume that at least one of y and z is in R. In this case, we remove s, one of y and z lying in R, as well as some other vertex \(r \in R\) due to Claim 4.2. At worst this results in the branching vector

$$\begin{aligned}(s_3 + 2 r_3, s_3 + 2 r_3).\end{aligned}$$

In the case where \(\{y,z\} \subseteq L\), we refer to Claim 4.2, which guarantees that there is a skeleton neighbor \(r \in N_{\varGamma }(s) \cap R\), which itself has a neighbor \(r' \in N_{\varGamma }(r) \cap R\). These two combined with s are removed in both branches; otherwise s cannot be removed and \(\{y,z\}\) is not a lazy 2-separator. This also results in the branching vector

$$\begin{aligned}(s_3 + 2 r_3, s_3 + 2 r_3).\end{aligned}$$
Fig. 3 Worst case configurations for non-spider vertex branching in #3IS

Constraints from #3IS - Balanced Branching: Neighbor in Separator. Consider the balanced branching case where we branch on \(s \in S\) and s has a neighbor \(s' \in S\). Let \(u \in R\) and \(v \in L\) denote the two other neighbors. In the worst case, u and v are both degree-2 vertices, meaning that in both branches we only reduce a vertex of weight \(r_3\) to \(r_2\), but never delete one. Since \(s'\) has its degree reduced in the first branch and is removed in the second branch, we get the following branching vector

$$\begin{aligned} \left( s_3 + \varDelta s_3 + \frac{1}{2}(2 \varDelta r_3) - 3 \delta , 2s_3 + \frac{1}{2}(2 \varDelta r_3) - 2 \delta \right) . \end{aligned}$$

4.2.1 Constraints from #3IS - Balanced Branching: No Neighbor in Separator

Next consider the balanced branching case where the algorithm branches on a non-spider vertex \(s \in S\) with no neighbors in the separator S. Let \(u, u' \in R\) and \(v \in L\) denote its neighbors. Since s is not a spider vertex, s is either a (2,2,3) or a (2,3,3) vertex.

We first consider the case where s is a (2,2,3) vertex. In the worst case, the single degree-3 neighbor, of weight \(\frac{r_3}{2}\), is in R or L. This leads to a decrease of \(\frac{\varDelta r_3}{2}\). Each of the two remaining neighbors is the start of a 2-path to another degree-3 vertex. These two endpoints cannot both be in S, so we have another decrease of at least \(\frac{\varDelta r_3}{2}\), leaving a decrease of \(\varDelta s_3\) for the last one.

In the second branch, we also get a decrease of \(\varDelta s_3 + \frac{\varDelta r_3}{2}\) from the degree-3 neighbor of s. This is due to Claim 4.2 forcing at least one of the neighbors to be in R. This results in a branching vector of

$$\begin{aligned} \left( s_3 + \varDelta s_3 + \frac{1}{2}( 2\varDelta r_3) - 3 \delta , s_3 + 2 \varDelta s_3 + \frac{1}{2}(r_3 + 2 \varDelta r_3 ) - 4 \delta \right) . \end{aligned}$$

Now if s is a (2,3,3) vertex, s has two degree-3 neighbors. In the worst case, the degree-2 neighbor of s is the start of a 2-path to another vertex in S, and we obtain the branching vector

$$\begin{aligned} \left( s_3 + \varDelta s_3 + \frac{1}{2}(2 \varDelta r_3) - 3 \delta , s_3 + 3 \varDelta s_3 + \frac{1}{2}(2 r_3 + 2 \varDelta r_3) - 5 \delta \right) . \end{aligned}$$

Constraints from #3IS - Imbalanced Branching: Neighbor in Separator. In the imbalanced instances of G, the measure \(\mu _{8/3}\) simplifies to \(\mu _{{8}/{3}} = \mu _s(S) + \mu _r(R) + \mu _o(L,S,R)\). Suppose we choose \(s \in S\) to branch on and s has a neighbor \(s' \in S\). By Claim 4.2, s has a skeleton neighbor \(r \in N_{\varGamma }(s) \cap R\). In the worst case, r is only a skeleton neighbor, and the actual neighbor \(r' \in N_{G}(s) \cap R\) is of degree 2. By considering the removal, or reduction of degree, of s, \(s'\), and \(r'\), we get the following worst case constraint

$$\begin{aligned}(s_3 + \varDelta s_3 + r_3 - 3 \delta , 2 s_3 + r_3 - 5 \delta ).\end{aligned}$$

The first branch has a \(3 \delta \) term since we get at most one decrease for each neighbor. The \(5 \delta \) term comes from the fact that the left neighbor \(l \in N_{G}(s) \cap L\) does not contribute any weight to \(\mu _{8/3}\), meaning it could have degree 3. Now \(s'\) is also of degree 3, so in the second branch, where we remove \(s'\) and l, these two could create 4 spider vertices. The last possible increase comes from r being reduced to a degree-2 vertex.

Constraints from #3IS - Imbalanced Branching: No Neighbors in Separator. There are two branching rules to consider in this case. The first branching occurs in line 17, where instead of branching on \(s \in S\) we branch on one of its skeleton neighbors in R. The other case occurs when we branch on s as normal in line 19.

In line 17, we are given the case where s has exactly one skeleton neighbor in R. This means that branching on s is not beneficial. However, similarly to line 15 of spider, if we branch on \(r \in N_{\varGamma }(s) \cap R\) such that \(N_{\varGamma }(r) \cap R \ne \emptyset \), then in both branches we are able to remove s entirely from the separator, due to the simplification rules in simplify. We get the following worst case constraint

$$\begin{aligned}(r_3 + s_3 + \varDelta r_3 + \varDelta s_3 - 3 \delta , r_3 + s_3 + \varDelta r_3 + \varDelta s_3 - 3 \delta ).\end{aligned}$$

Otherwise, we progress to line 19, which guarantees that we have 2 skeleton neighbors of s in R. This results in the following constraint

$$\begin{aligned}(s_3 + 2 \varDelta r_3 - 3 \delta , s_3 + 2 \varDelta r_3 - 4 \delta ).\end{aligned}$$

Weights and Results. After compiling all constraints induced by the steps of the algorithms, we can set up a constraint optimization problem that minimizes the measure subject to these constraints. Minimizing the measure over the combination of all constraints obtained in this way results in the measure \(\mu _{{8}/{3}} = 0.13262 \cdot n\), and since the running time is \(O(2^{\mu _{{8}/{3}}}) \subseteq O(2^{0.13262 n})\), this gives an upper bound of \(O(1.0963^n)\) on the running time. The specific weights after minimization are summarized below.

$$\begin{aligned}\begin{aligned} r_0 = 0&\quad r_1 = 0&r_2 = 0 \quad \quad \text { }&\quad r_3 = 0.2 + o(1)&\\ s_0 = 0&\quad s_1 = 0&\quad s_2 = 0.6352&\quad s_3 = 0.6784 \quad&s_3' = 0.7\\ \end{aligned}\end{aligned}$$

\(\square \)
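As an illustrative sanity check (not part of the optimization itself, and ignoring the lower-order term in \(r_3\)), one can substitute these weights into individual constraints. The short Python snippet below, written under our own naming, verifies constraint (2) with base 2 for the spider-vertex branching vector \(\left( s_3' + \frac{3}{2}\varDelta r_3, s_3' + \frac{3}{2}\varDelta r_3\right) \) and for the balanced lazy 2-separator branching vector derived above.

```python
r2, r3 = 0.0, 0.2
s2, s3, s3p = 0.6352, 0.6784, 0.7
dr3, ds3, delta = r3 - r2, s3 - s2, s3p - s3   # Delta r_3, Delta s_3, delta

def satisfies(branching_vector, tol=1e-9):
    """Constraint (2) with r = 2: sum_i 2^(-delta_i) <= 1."""
    return sum(2 ** -d for d in branching_vector) <= 1 + tol

# Spider-vertex branching: both branches decrease the measure by s_3' + 3/2 * Delta r_3.
assert satisfies((s3p + 1.5 * dr3, s3p + 1.5 * dr3))
# Balanced lazy 2-separator branching.
assert satisfies((s3 + 0.5 * (2 * r3 + 2 * dr3) - 2 * delta,) * 2)
```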

Lemma 5

Algorithm #IS applied to a graph G with \(d(G) \le 3\) has running time \(O(1.1389^n)\) and uses polynomial space.

The algorithm #IS uses the subroutine #3IS, whose measure and weights we have just analyzed. We now relate the Separate, Measure and Conquer weights to the weights of the measure \(\mu _3\), based on the compound analysis from Wahlström [29]. As Wahlström’s analysis only contains weights \(w_3'\) and \(w_2'\), for vertices of degree 3 and degree 2 respectively, the measure is

$$\begin{aligned} \mu _3(G) = \left( (d - 2) w_3' + (3 - d) w_2' \right) n + \mu _o(L,S,R) \end{aligned}$$

where \(d = d(G)\) is the average degree of the input graph, and \(\mu _o(L,S,R)\) is the sub-linear term left over from the average degree \({8}/{3}\) analysis.

In the case of a graph G with no (3,3,3) vertex, in order for Lemma 1 to apply, the values of \(w_3\) and \(w_2\) must satisfy the inequalities

$$\begin{aligned} \frac{1}{2}r_2&\le w_2, \\ \frac{s_3'}{11} + \frac{5 r_3}{22} + \frac{5 r_2}{22}&\le \frac{6 w_3}{11} + \frac{5 w_2}{11},\\ \frac{s_3}{9} + \frac{5 r_3}{18} + \frac{r_2}{3}&\le \frac{2 w_3}{3} + \frac{ w_2}{3}, \end{aligned}$$

induced when \(d = 2, \frac{28}{11}\), and \(\frac{8}{3}\) for \(\mu _{8/3}\) respectively. This results in the weights \(w_3 = 0.1973\) and \(w_2 = 0.0033\) when G has no (3,3,3) vertex.

We also let \(w_3' \ge 0\) and \(w_2' \ge 0\) be the weights associated with vertices of degree 3 and degree 2 respectively, for a subcubic graph G. Using the analysis by compound measures with \(\mu _3(G) = \sum _{i \in \{2,3\}} w_i' \cdot n_i\), the following constraint

$$\begin{aligned} \mu _{{8}/{3}}(G) \le \mu _{3}(G), \quad \text { when } d(G) = {8}/{3} \end{aligned}$$

is required for a valid compound measure. This can be rewritten as

$$\begin{aligned} 2 w_3 + w_2 \le 2 w_3' + w_2'. \end{aligned}$$

Branching on a (3,3,3) vertex, the only type of degree-3 vertex that #IS branches on, gives a branching vector of

$$\begin{aligned} (4w_3' - 3w_2', 8w_3' - 4w_2'). \end{aligned}$$

Setting the weights \(w_3' = 0.1876\) and \(w_2' = 0.0228\) satisfies the system of constraints described above and, using the measure \(\mu _3(G)\), results in a running time of \(O^*(1.1389^n)\).

4.3 Degree-4 Analysis

The notation \(\mathbbm {1}(\cdot )\) refers to an indicator function which returns 1 if its argument is true and 0 otherwise. For a graph with maximum degree 4, the analysis is done with a measure of

$$\begin{aligned} \mu _4 = \sum _{i \le 4} w_i \cdot n_i + \mathbbm {1} \left( \begin{array}{ll} G \text { has only degree-4 and degree-2}\\ \text {vertices and no degree-4 vertex has}\\ \text {a degree-4 neighbor} \end{array} \right) \psi + \mu _o(L,S,R) \end{aligned}$$

where \(w_i\) is the weight attributed to vertices of degree i, \(n_i\) is the number of vertices of degree i, and \(\psi \) is a potential.

We can ignore weights for degree-0 and degree-1 vertices, since such vertices are removed by simplification rules and thus effectively have weight 0.

Potentials in Degree-4 Analysis. In the degree-4 case, potentials are used when branching on a degree-4 vertex that has only degree-2 neighbors. In case (a), for any vertex v, all 2-paths starting from v have endpoints of degree 4. In case (b), there exists a vertex u with at least one 2-path from u that ends in a degree-3 vertex. The case where a 2-path from a degree-4 vertex ends in a degree-1 vertex does not occur, since the multiplier reduction could have been used to handle it.

Table 1 Possible cases when branching on a degree-4 vertex

Lemma 6

For a graph G with maximum degree 4, #IS can be solved in time \(O^*(1.2070^n)\).

Proof

The degree-4 analysis uses pivot points 3, 3.2, 3.5, 3.75 and 4, shown as different rows of Fig. 4. These pivot points refer to the highest possible average degree in the respective branching cases, which are defined by the highest possible degrees of the neighbors.

The pivot points generate multiple compound measures, with weights and constraints for each. By including the constraints generated from the branching cases in Table 1, we obtain satisfying weights for \(\mu _4\), shown in Fig. 4. This results in a running time upper bound of \(O(2^{\mu _4}) \subseteq O(2^{0.2713 n}) \subseteq O(1.2070^n) \) in the worst case for degree-4 graphs. \(\square \)

Fig. 4 Component measures \(\sum _i w_i \cdot n_i\) for maximum degree 4

4.4 Degree-5+ Analysis

The following two lemmas give, for graphs of degree 5 and higher, the generalized procedure for constructing branching vectors for a vertex v and all its possible combinations of neighbor degrees.

Lemma 7

Suppose a graph G is 3-connected, with all simplification rules applied. Let \(v \in V(G)\) be a vertex to be branched on in \(\texttt {\#IS}\) with \(d(v) \in \{5, 6\}\). Let out(v) represent the number of edges xy from N(v) with \(x \in N(v)\) and \(y \notin N[v]\). Then

$$\begin{aligned} out(v) = {\left\{ \begin{array}{ll} 3 &{} \text {If } d(v) = 5 \text { and } \sum _{u \in N(v)} d(u) = 0 \text { mod } 2 \text { or } \\ &{} {\quad } d(v) = 6 \text { and } \sum _{u \in N(v)} d(u) = 1 \text { mod } 2 \\ 4 &{} \text {If } d(v) = 5 \text { and } \sum _{u \in N(v)} d(u) = 1 \text { mod } 2 \text { or } \\ &{} {\quad } d(v) = 6 \text { and } \sum _{u \in N(v)} d(u) = 0 \text { mod } 2 \\ 5 &{} \text { If neighbors of } v \text { have degree (2, 2, 2, 2, 2) or (2, 2, 2, 2, 2, 3)} \\ 6 &{} \text { If neighbors of } v \text { have degree (2, 2, 2, 2, 2, 2)}, \\ \end{array}\right. } \end{aligned}$$

otherwise, if \(out(v) \le 2\), the graph G has constant size.

Proof

Let out(v) represent the number of outgoing edges xy from N(v) with \(x \in N(v)\) and \(y \notin N[v]\). Suppose we have a 3-connected graph with all simplification rules applied. This means that the multiplier reduction does not apply, and there are no lazy 2-separators.

If \(out(v) = 0\), then the component of v consists of \(N[v]\) only, so we have an instance of constant size, namely \(d(v)+1\) vertices with \(d(v)\) equal to 5 or 6. This instance cannot occur, and even if it did, we would be able to solve it in constant time. If \(out(v) = 1\) we can apply the multiplier reduction, which is a contradiction. Similarly, if \(out(v) = 2\) we have a lazy 2-separator, which is also a contradiction. Hence \(out(v) \ge 3\).

Suppose \(d(v) = 5\) and v has neighbors with degrees (2, 2, 2, 2, 2). Any edge between two neighbors of v means G can be reduced by the multiplier reduction (with v as the cut vertex), so \(out(v) = 5\). Similarly, if \(d(v) = 6\) and v has neighbors (2, 2, 2, 2, 2, 2), then \(out(v) = 6\).

Suppose \(d(v) = 6\) and v has neighbors (2, 2, 2, 2, 2, 3). There are 7 edges adjacent to N(v) but not to v. Suppose \(u \in N(v)\) and \(d(u) = 3\). If \(out(v) < 5\), then at least 3 of these 7 edges must connect two vertices in N(v), but at most two of them are adjacent to u. Thus there exists one edge \(\{a,b\}\) with \(d(a) = d(b) = 2\). But this means the multiplier reduction can be applied, hence \(out(v) = 5\).

Suppose \(d(v) = 5\) and \(\sum _{u \in N(v)} d(u) = 1 \mod 2\). We showed there are at least 3 outgoing edges from N(v). There are also 5 edges incident to N(v) and v which gives a total of at least 8 edges that are incident to N(v). Since having adjacent neighbors does not change the fact that \(\sum _{u \in N(v)} d(u)\) is odd, out(v) must be even.

Since any edge adjacent to two neighbors of v contributes a value of 2 to the sum, \(\sum _{u \in N(v)} d(u) = 1\) mod 2 implies \(out(v) = 4\). A similar parity argument is used for \(d(v) = 6\), except with the parity swapped. \(\square \)

Lemma 8

Let \(deg_2(v)\) denote the number of degree-2 vertices in N(v). Then v has a branching vector of

$$\begin{aligned} \left( w_{d(v)} + \sum _{u \in N(v)} w_{d(u)} + out(v) \cdot \varDelta w_{d(v)},\ w_{d(v)} + \sum _{u \in N(v)} \varDelta w_{d(u)} + deg_2(v) \cdot \varDelta w_{d(v)} \right) . \end{aligned}$$
(6)

Proof

The first component of the branching vector corresponds to removing the vertex v together with its neighbors; the second corresponds to removing just the vertex v. The reduction in measure on the graph G follows from the reduction rules, the measure \(\mu = \sum _{v \in V(G)}w_{d(v)} + \mu _o(L,S,R)\), and the definitions of out(v) and \(deg_2(v)\). \(\square \)

Theorem 1

#IS can be solved in time \(O^*(1.2356^n)\) and polynomial space.

Proof

If \(d(G) \ge 7\) we can perform a simple branching analysis in terms of n, considering whether a selected vertex v is inside the independent set or its neighbors are. In this situation, the branching vector is at worst (1, 8), with branching number less than 1.2321. So we only need to compute \(\mu _6(G)\) with compound measures using Eq. (6), for \(d(G) \le 6\), in order to find the worst case running time for #IS (Fig. 5). \(\square \)

Fig. 5 Weights and running time for \(\mu _6(G)\)
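For reference, the branching number associated with a branching vector such as the (1, 8) used for \(d(G) \ge 7\) above is the unique \(x > 1\) with \(\sum _i x^{-\delta _i} = 1\); it can be computed numerically, for instance by the small bisection sketch below (our own helper, not part of the algorithm).

```python
def branching_number(vector, tol=1e-12):
    """Unique x > 1 with sum_i x**(-d_i) = 1; the sum is decreasing in x."""
    f = lambda x: sum(x ** -d for d in vector)
    lo, hi = 1.0 + 1e-12, 2.0
    while f(hi) >= 1:          # grow the bracket until f(hi) < 1
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) >= 1:
            lo = mid
        else:
            hi = mid
    return hi

print(branching_number((1, 8)))   # ~1.23206, i.e. below the 1.2321 quoted above
```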

If we plug in a simple pathwidth-based subroutine [14] for graphs of maximum degree 3, we obtain the following exponential-space result.

Theorem 2

#IS can be solved in time \(O^*(1.2330^n)\).