New Tools and Connections for Exponential-Time Approximation

In this paper, we develop new tools and connections for exponential time approximation. In this setting, we are given a problem instance and an integer $r>1$, and the goal is to design an approximation algorithm with the fastest possible running time. We give randomized algorithms that establish an approximation ratio of $r$ for maximum independent set in $O^*(\exp(\tilde{O}(n/(r\log^2 r)+r\log^2 r)))$ time, $r$ for chromatic number in $O^*(\exp(\tilde{O}(n/(r\log r)+r\log^2 r)))$ time, $(2-1/r)$ for minimum vertex cover in $O^*(\exp(n/r^{\Omega(r)}))$ time, and $(k-1/r)$ for minimum $k$-hypergraph vertex cover in $O^*(\exp(n/(kr)^{\Omega(kr)}))$ time. (Throughout, $\tilde{O}$ and $O^*$ omit $\mathrm{polyloglog}(r)$ factors and factors polynomial in the input size, respectively.) The best known time bounds for all these problems were $O^*(2^{n/r})$ (Bourgeois et al. in Discret Appl Math 159(17):1954–1970, 2011; Cygan et al. in Exponential-time approximation of hard problems, 2008). For maximum independent set and chromatic number, these bounds were complemented by $\exp(n^{1-o(1)}/r^{1+o(1)})$ lower bounds under the Exponential Time Hypothesis (ETH) (Chalermsook et al. in Foundations of computer science, FOCS, pp. 370–379, 2013; Laekhanukit in Inapproximability of combinatorial problems in subexponential-time, Ph.D. thesis, 2014). Our results show that the natural-looking $O^*(2^{n/r})$ bounds are not tight for any of these problems. The key to these results is a sparsification procedure that reduces a problem to a bounded-degree variant, allowing the use of approximation algorithms for bounded-degree graphs. To obtain the first two results, we introduce a new randomized branching rule. Finally, we show a connection between PCP parameters and exponential-time approximation algorithms. This connection, together with our independent set algorithm, rules out the possibility of significantly reducing the size of Chan's PCP (Chan in J. ACM 63(3):27:1–27:32, 2016). It also implies that a (significant) improvement over our result would refute the Gap-ETH conjecture (Dinur in Electron Colloq Comput Complex (ECCC) 23:128, 2016; Manurangsi and Raghavendra in A birthday repetition theorem and complexity of approximating dense CSPs, 2016).


Introduction
The Independent Set, Vertex Cover, and Coloring problems are central problems in combinatorial optimization and have been studied extensively. Most of the classical results concern either approximation algorithms that run in polynomial time or exact algorithms that run in (sub)exponential time. While these algorithms are useful in many scenarios, they lack flexibility: sometimes we are willing to spend more running time for a better approximation ratio (e.g., on computationally powerful devices), and sometimes we want faster algorithms at the cost of accuracy. In such settings, trade-offs between running time and approximation ratio are needed.
Algorithmic results on the trade-off between approximation ratio and running time have already been studied in the literature in several settings, most notably in the context of polynomial-time approximation schemes (PTAS). For instance, in planar graphs, Baker's celebrated approximation scheme for several NP-hard problems [2] gives a $(1+\varepsilon)$-approximation for, e.g., Independent Set in time $O^*(\exp(O(1/\varepsilon)))$.
In graphs of small treewidth, Czumaj et al. [16] give an $O^*(\exp(tw/r))$-time algorithm that, given a graph along with a tree decomposition of it of width at most $tw$, finds an $r$-approximation for Independent Set. For general graphs, approximation results for several problems have been studied in several works (see, e.g., [6][7][8][13][14][15]). A basic building block behind many of these results is to partition the input instance into smaller parts in which the optimal (sub)solution can be computed quickly (or at least faster than in fully exponential time). For example, to obtain an $r$-approximation for Independent Set, one may arbitrarily partition the vertex set into $r$ blocks and restrict attention to independent sets that are subsets of these blocks, giving an $O^*(\exp(n/r))$-time $r$-approximation algorithm.
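The partition argument just described can be sketched as follows; this is a minimal illustration in Python (the adjacency-map representation and function names are our own, not from the papers cited):

```python
from itertools import combinations

def is_independent(adj, subset):
    """True if no edge of the graph has both endpoints in `subset`."""
    return all(v not in adj[u] for u, v in combinations(subset, 2))

def best_in_block(adj, block):
    """Brute-force the largest independent set inside one block."""
    for size in range(len(block), 0, -1):
        for cand in combinations(block, size):
            if is_independent(adj, cand):
                return list(cand)
    return []

def partition_approx_mis(adj, r):
    """Naive r-approximation: split V into r blocks and brute-force each
    block, which takes O*(2^{n/r}) time overall.  The optimum intersects
    some block in at least alpha(G)/r vertices, so the best block answer
    is an r-approximation.  `adj` maps each vertex to its neighbor set."""
    vertices = sorted(adj)
    best = []
    for i in range(r):
        cand = best_in_block(adj, vertices[i::r])
        if len(cand) > len(best):
            best = cand
    return best
```

On a 5-cycle with $r = 2$, for instance, the returned set has size 2, which equals $\alpha(C_5)$ here and in general is at least $\alpha(G)/r$.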
While at first sight one might think that such a naïve algorithm should be easily improvable via more advanced techniques, it was shown in [6,11] that almost linear-size PCPs with sub-constant error imply that $r$-approximating Independent Set [11] and Coloring [31] requires at least $\exp(n^{1-o(1)}/r^{1+o(1)})$ time, assuming the popular Exponential Time Hypothesis (ETH). In the setting of the more sophisticated Baker-style approximation schemes for planar graphs, Marx [34] showed that no $(1+\varepsilon)$-approximation algorithm for planar Independent Set can run in time $O^*(\exp((1/\varepsilon)^{1-\delta}))$ assuming ETH, which implies that the algorithm of Czumaj et al. cannot be improved to run in time $O^*(\exp(tw/r^{1+\varepsilon}))$.
These lower bounds, despite being interesting, do not say anything about lower-order terms and by no means answer the question of whether the known approximation trade-offs can be improved significantly; in fact, in many settings we are far from understanding the full power of exponential-time approximation. For example, until recently [10], we could not exclude (under any plausible complexity assumption) algorithms that 2-approximate k-Independent Set in time $n^{o(k)}$ (see also [30]), nor do we know approximation algorithms that run asymptotically faster than the fastest exact algorithm, which runs in time $n^{0.792k}$ [36].
In this paper, we aim to advance this understanding and study the question of designing fast (exponential-time) algorithms that guarantee the designated approximation ratios of r for Independent Set, Coloring and Vertex Cover in general (hyper)graphs. Ultimately, we wish to design approximation algorithms that are as fast as possible.

Our Results
For Independent Set, our result is the following. Here we use $\tilde{O}$ to omit $\log\log$ factors in $r$.

Theorem 1
There is a randomized algorithm that, given an n-vertex graph G and an integer r, outputs an independent set that, with constant positive probability, has size at least $\alpha(G)/r$, where $\alpha(G)$ denotes the maximum independent set size of G. The algorithm runs in expected time $O^*(\exp(\tilde{O}(n/(r\log^2 r) + r\log^2 r)))$.
To prove this result, we introduce a new randomized branching rule, which we now describe and put into context with respect to previous results. It builds on a sparsification technique that reduces the maximum degree to a given bound. This technique was already studied in the setting of exponential-time approximation algorithms for Independent Set by Cygan et al. (see [13, paragraph 'Search Tree Techniques']) and Bourgeois et al. (see [8, Section 2.1]), but these authors did not obtain running times sub-exponential in $n/r$. Specifically, the sparsification technique branches on vertices of sufficiently high degree: select such a vertex $v$ and recurse on both the possibility that $v$ is included in the independent set and the possibility that it is discarded. The key property is that if we decide to include $v$ in the independent set, we may discard all neighbors of $v$. If we keep branching on vertices of degree at least $d$ until the maximum degree is smaller than $d$, then at most roughly $\exp(n\log(d)/d)$ instances are created. In each such instance, the maximum independent set can easily be $d$-approximated by a greedy argument. Cygan et al. [13] note that this only gives running times worse than $O^*(2^{n/r})$.
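The greedy $d$-approximation used on the leaf instances can be sketched as follows; a hedged illustration (the adjacency-map representation is an assumption, and picking a minimum-degree vertex is a common refinement that is not needed for the $d$-approximation guarantee):

```python
def greedy_mis(adj):
    """Greedy independent set: repeatedly pick a minimum-degree vertex and
    discard its closed neighborhood.  On a graph of maximum degree < d,
    each pick removes at most d vertices, so the result has size at least
    n/d >= alpha(G)/d, i.e. a d-approximation.  `adj` maps each vertex to
    its neighbor set."""
    remaining = set(adj)
    chosen = []
    while remaining:
        # Any vertex works for the guarantee; minimum degree only helps.
        v = min(remaining, key=lambda u: len(adj[u] & remaining))
        chosen.append(v)
        remaining -= adj[v] | {v}
    return chosen
```

On a 5-cycle (maximum degree 2, so $d = 3$ suffices), the greedy set has size $2 \ge n/d$.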
Our algorithm works along this line but incorporates two (simple) ideas. Our first observation is that instead of solving each leaf instance by the greedy $d$-approximation algorithm, one can use a recent $\tilde{O}(d/\log^2 d)$-approximation algorithm by Bansal et al. [3] for Independent Set on bounded-degree graphs. If we choose $d \approx r\log^2 r$, this immediately gives an improvement: an $r$-approximation in time essentially $\exp(\frac{n}{r\log r})$. To improve this further, we present an additional (more innovative) idea that introduces randomization. It relies on the fact that in the sparsification step we have (unexploited) slack, as we aim for an approximation. Specifically, whenever we branch, we only consider the 'include' branch with probability $1/r$. This lowers the expected number of leaf instances produced in the sparsification step to $2^{n/d} \approx \exp(\frac{n}{r\log^2 r})$ and preserves the approximation factor with good probability.
Via fairly standard methods (see, e.g., [5]) we show this also gives a faster algorithm for coloring in the following sense:

Theorem 2 There is a randomized algorithm that, given an n-vertex graph G and an integer r > 0, outputs with constant positive probability a proper coloring of G using at most $r\cdot\chi(G)$ colors. The algorithm runs in time $O^*(\exp(\tilde{O}(n/(r\log r) + r\log^2 r)))$.
As a final indication that sparsification is a very powerful tool for obtaining fast exponential-time approximation algorithms, we show that a combination of a result of Halperin [22] and the Sparsification Lemma [25] gives the following result for the Vertex Cover problem in hypergraphs with edges of size at most k (a.k.a. the Set Cover problem with frequency at most k).
Theorem 3 For every k, there is an $r_0 := r_0(k)$ such that for every $r \ge r_0$ there is an $O^*(\exp(\frac{n}{(kr)^{\Omega(kr)}}))$-time randomized $(k - \frac{1}{r})$-approximation algorithm for the Vertex Cover problem in hypergraphs with edges of size at most k.
Note that for k = 2 (i.e., vertex cover in graphs), this gives an $O^*(\exp(\frac{n}{r^{\Omega(r)}}))$ running time, an exponential improvement (in the denominator of the exponent) over the $(2 - 1/r)$-approximation by Bonnet et al. [8] that runs in time $O^*(2^{n/r})$. It was recently brought to our attention that Williams and Yu [38] independently have unpublished results for (hypergraph) vertex cover and independent set using sparsification techniques similar to ours.

Connections to PCP parameters
The question of approximating the maximum independent set problem in sub-exponential time has close connections to the trade-off between three important parameters of PCPs: size, gap and free-bit. We discuss the implications of our algorithmic results in terms of these PCP parameters.
Roughly speaking, the gap parameter is the ratio of completeness to soundness, while the freeness parameter is the number of "locally" distinct proofs that would cause the verifier to accept; the free-bit is simply the logarithm of the freeness. For convenience, we will continue our discussion in terms of freeness instead of free-bit.
-Freebit versus Gap The dependency between freeness and gap has played an important role in hardness of approximation. Most notably, the existence of PCPs with freeness $g^{o(1)}$, where g is the gap parameter, is "equivalent" to $n^{1-o(1)}$-hardness of approximating maximum independent set [4,23]; this result is a building block for proving hardness of approximation for many other combinatorial problems, e.g., coloring [20], disjoint paths [1], induced matching [11], cycle packing [21], and pricing [11]. Arguably, the trade-off between these PCP parameters captures the approximability of many natural combinatorial problems.
Better parameter trade-offs imply stronger hardness results. The existence of a PCP with arbitrarily large gap, freeness 1 (the lowest possible), and completeness close to 1/2 is in fact equivalent to $(2-\varepsilon)$-inapproximability for Vertex Cover [4]. The best known trade-off is due to Chan [12]: for any g > 0, there is a polynomial-sized PCP with gap g (and completeness close to one) and freeness $O(\log g)$, yielding the best known NP-hardness of approximating maximum independent set in sparse graphs, i.e., $\Omega(d/\log^4 d)$ NP-hardness in degree-d graphs.
-Size, Freebit, and Gap When a polynomial-time approximation algorithm is the main concern, polynomial-size PCPs are all that matter. But when it comes to exponential-time approximability, another important parameter, the size of the PCP, comes into play. The trade-off between size, freebit, and gap tightly captures the (sub-)exponential-time approximability of many combinatorial problems. For instance, for any constant g > 0, Moshkovitz and Raz [35] construct PCPs of size $n^{1+o(1)}$ with freeness $2^{O(\sqrt{\log g})}$ and gap g; this implies that r-approximating Independent Set requires time $2^{n^{1-o(1)}/r^{1+o(1)}}$ [11].
Our exponential-time approximation result for Independent Set implies the following trade-off results.

Corollary 1 Unless the ETH fails, a freebit PCP on an n-variable SAT formula, with gap parameter g, freeness parameter F and size parameter S, must satisfy $S \cdot F = \Omega\!\big(\frac{n\log^2 g}{\mathrm{poly}(\log\log g)}\big)$.
In particular, this implies that (i) the size of Chan's PCP cannot be reduced to $o(n\log g)$ unless the ETH fails, and (ii) in light of the equivalence between gap-amplifying freebit PCPs with freeness 1 and $(2-\varepsilon)$-approximation for Vertex Cover, our result shows that such a PCP must have size at least $\Omega(n\log^2 g)$. We remark that no such trade-off results are known for polynomial-sized PCPs. To our knowledge, this is the first result of its kind.

Further Related Results
The best known results for Independent Set in the polynomial-time regime are an $O\big(\frac{n(\log\log n)^2}{\log^3 n}\big)$-approximation [18], and hardness of approximation within $n/\exp(O(\log^{3/4+o(1)} n))$ (which also holds for Coloring) [28]. For Vertex Cover, the best known hardness of approximation is $(\sqrt{2}-\varepsilon)$ NP-hardness [26,27] and $(2-\varepsilon)$-hardness assuming the Unique Games Conjecture [29]. All three problems (Independent Set, Coloring, and Vertex Cover) do not admit exact algorithms that run in time $2^{o(n)}$, unless ETH fails. Besides the aforementioned works [8,13], sparsification techniques for exponential-time approximation were studied by Bonnet and Paschos [7], but mainly hardness results were obtained.

Preliminaries
We first formally define the three problems that we consider in this paper.

Independent Set: Given a graph G = (V, E), we say that $J \subseteq V$ is an independent set if there is no edge with both endpoints in J. The goal of Independent Set is to output an independent set J of maximum cardinality. Denote by $\alpha(G)$ the cardinality of a maximum independent set.

Vertex Cover: Given a graph G = (V, E), we say that $J \subseteq V$ is a vertex cover of G if every edge is incident to at least one vertex in J. The goal of Vertex Cover is to output a vertex cover of minimum size. A generalization of vertex cover, called k-Hypergraph Vertex Cover (k-Vertex Cover), is defined as follows: given a hypergraph G = (V, E) where each hyperedge $h \in E$ has cardinality at most k, the goal is to find a collection of vertices $J \subseteq V$ such that each hyperedge is incident to at least one vertex in J, while minimizing |J|. The degree $\Delta(H)$ of a hypergraph H is the maximum frequency of an element.

Coloring: The goal of Coloring is to compute the minimum integer k > 0 such that G admits a (proper) k-coloring; this number is referred to as the chromatic number, denoted by $\chi(G)$. For $X \subseteq V$, we write G[X] for the subgraph of G induced by X. We use $\exp(x)$ to denote $2^x$ in order to avoid superscripts. We use the $O^*(\cdot)$ notation to suppress factors polynomial in the input size. We use $\tilde{O}$ and $\tilde{\Omega}$ to suppress factors polyloglog in r in upper and lower bounds, respectively, and write $\tilde{\Theta}$ for all functions that are in both $\tilde{O}$ and $\tilde{\Omega}$.

Maximum Independent Set
In this section, we prove Theorem 1. Below is our key lemma.

Lemma 1 Suppose there is an approximation algorithm dIS(G, r) that runs in time T(n, r) and outputs an independent set of G of size at least $\alpha(G)/r$ whenever G has maximum degree at most d(r), for some function d(r) with $d(r) \ge 2r$. Then there is a randomized algorithm IS(G, r) that runs in expected time $O^*(T(n, r)\cdot \exp(n\log(4d(r)/r)/d(r)))$ and outputs an independent set of expected size at least $\alpha(G)/r$.
Proof Consider the algorithm listed in Fig. 1.
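The branching procedure analyzed below can be sketched as follows; this is a hedged reconstruction from the textual description, not the authors' verbatim pseudocode of Fig. 1 (the graph representation and names are ours, and dIS is passed in as a black box):

```python
import random

def induced(G, removed):
    """Induced subgraph on the vertices outside `removed`."""
    return {v: G[v] - removed for v in G if v not in removed}

def IS(G, r, d, dIS):
    """Randomized branching sketch: branch on a vertex of degree >= d,
    always exploring the 'discard' branch and exploring the 'include'
    branch only with probability 1/r.  Once the maximum degree drops
    below d, the leaf instance is handed to the bounded-degree
    approximation dIS.  `G` maps vertices to neighbor sets."""
    v = next((u for u in G if len(G[u]) >= d), None)
    if v is None:
        return dIS(G, r)                       # leaf: max degree < d
    best = IS(induced(G, {v}), r, d, dIS)      # 'discard' branch
    if random.random() < 1.0 / r:              # 'include' branch, w.p. 1/r
        cand = [v] + IS(induced(G, G[v] | {v}), r, d, dIS)
        if len(cand) > len(best):
            best = cand
    return best
```

Including $v$ removes its whole closed neighborhood (at least $d+1$ vertices), which is what keeps the expected number of recursive calls small.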
For convenience, let us fix r and d := d(r). We start by analyzing the expected running time of this algorithm. Per recursive call, the algorithm clearly uses $O^*(T(n, r))$ time. It remains to bound the expected number of recursive calls R(n) made by IS(G, r) when G has n vertices. (Fig. 1: approximation algorithm for the maximum independent set problem using an approximation algorithm dIS that works on bounded-degree graphs.) We will bound $R(n) \le \exp(\lambda n)$ for $\lambda = \log(4d/r)/d$ by induction on n. Note that here $\lambda$ is chosen such that

$\exp(-\lambda\cdot d(r)) \le \lambda r/2$,   (1)

where we use $d/r \ge 2$ for the inequality. For the base case of the induction, note that if the condition at Line 1 does not hold, the algorithm makes no recursive calls and the statement is trivial, as $\lambda$ is clearly positive. For the inductive step, we see that

$R(n) \le R(n-1) + \tfrac{1}{r}R(n-d-1) \le \exp(\lambda(n-1)) + \tfrac{1}{r}\exp(\lambda(n-d-1)) \le \exp(\lambda n)\big(\exp(-\lambda) + \tfrac{1}{r}\exp(-\lambda d)\big) \le \exp(\lambda n)$,

using $\exp(-\lambda\cdot d(r)) \le \lambda r/2$ from (1) together with $\exp(-\lambda) \le 1 - \lambda/2$.
We continue by analyzing the output of the algorithm. It clearly returns a valid independent set, as all neighbors of v are discarded when v is included in Line 4, and an independent set is returned at Line 8. It remains to show that $E[|IS(G, r)|] \ge \alpha(G)/r$, which we do by induction on n. In the base case, in which no recursive call is made, note that on Line 8 we indeed obtain an r-approximation, as G has maximum degree at most d(r). For the inductive case, let X be a maximum independent set of G and let v be the vertex picked on Line 1. We distinguish two cases based on whether $v \in X$. If $v \notin X$, then $\alpha(G) = \alpha(G[V{\setminus}\{v\}])$ and the inductive step follows from the induction hypothesis. If $v \in X$, then $\alpha(G[V{\setminus}N[v]]) \ge \alpha(G) - 1$ and $\alpha(G[V{\setminus}\{v\}]) \ge \alpha(G) - 1$, so

$E[|IS(G, r)|] \ge \tfrac{1}{r}\big(1 + \tfrac{\alpha(G)-1}{r}\big) + \big(1-\tfrac{1}{r}\big)\tfrac{\alpha(G)-1}{r} = \tfrac{\alpha(G)}{r}$,

as required. Here the first inequality uses the induction hypothesis twice.

Proof of Theorem 1
We apply Lemma 1. By virtue of Theorem 4, dIS(G, r) runs in time $T(n, r) = O^*(\exp(\tilde{O}(r\log^2 r)))$ and outputs an independent set of size at least $\alpha(G)/r$ if G has maximum degree d(r), for some function $d(r) = \tilde{O}(r\log^2 r)$ with $d(r) \ge 2r$. We obtain an $O^*(\exp(\tilde{O}(n/(r\log^2 r) + r\log^2 r)))$ expected-time algorithm that outputs an independent set of expected size $\alpha(G)/r$.
To obtain the required probabilistic guarantee, we apply this algorithm with r/3 instead of r. Since the size of the output is upper bounded by $\alpha(G)$, we obtain an independent set of size at least $\alpha(G)/r$ with probability at least 1/(3r), and we may boost this to constant positive probability using O(r) repetitions. By Markov's inequality, these repetitions together run in $O^*(\exp(\tilde{O}(n/(r\log^2 r) + r\log^2 r)))$ time with probability 3/4. The theorem statement follows by a union bound: with probability at least 1/2, these O(r) repetitions run within the claimed running time and, simultaneously, some repetition finds an independent set of size at least $\alpha(G)/r$.

A deterministic algorithm
An interesting question here is whether our randomized branching algorithm can be derandomized. We show a deterministic r-approximation algorithm with a slightly worse running time of $\exp(\tilde{O}(\frac{n}{r\log r}))$. The algorithm utilizes Feige's algorithm [18] as a blackbox and is deferred to Sect. 6.1.

Graph Coloring
Now we use the approximation algorithm for Independent Set as a subroutine for an approximation algorithm for Coloring to prove Theorem 2 as follows (Fig. 2):

Proof of Theorem 2
The algorithm combines the approximation algorithm IS from Sect. 3.1 for Independent Set with an exact algorithm optcol for Coloring (see, e.g., [5]) as follows. In the algorithm, IS+ denotes the algorithm that makes n calls to IS and outputs the maximum-size independent set found, to boost the success probability. Specifically, IS+(G[V], r/ln(r log r)) finds an independent set of size $\alpha(G[V])\ln(r\log r)/r$ with probability at least $1 - \exp(-\Omega(n))$, and thus, with at least constant positive probability, it finds in each of the at most n iterations an independent set of size at least $\alpha(G[V])\ln(r\log r)/r$. We claim that CHR(G, r) returns, with high probability, a proper coloring of G using at most $(r + 2)\cdot\chi(G)$ colors. To prove the theorem, we invoke CHR(G, r − 2), which has the same asymptotic running time. First, note that in each iteration of the while loop (Line 2 of Algorithm 2), |V| is decreased by a multiplicative factor of at most $1 - \frac{\ln(r\log r)}{r\cdot\chi(G)}$, because G[V] must have an independent set of size at least $|V|/\chi(G)$ and therefore $|C_c| \ge \ln(r\log r)|V|/(r\cdot\chi(G))$. Before the last iteration, we have $|V| \ge n/(r\ln r)$. Thus, the number $\ell$ of iterations must satisfy

$\big(1 - \tfrac{\ln(r\log r)}{r\cdot\chi(G)}\big)^{\ell-1} \ge \tfrac{1}{r\ln r}.$

This implies that $(\ell - 1) \le r\cdot\chi(G)$. Consequently, the number of colors used in the first phase of the algorithm (Lines 1 to 5) is $c \le r\chi(G) + 1$.

The claimed upper bound of $(r+2)\cdot\chi(G)$ then follows because the number of colors used for G[V] in the second phase (Line 6) is clearly upper bounded by $\chi(G)$.
To upper bound the running time, note that Line 4 runs in time

$\exp\Big(\tilde{O}\Big(\frac{n\ln(r\log r)}{r\log^2(r/\ln(r\log r))} + r\log^2 r\Big)\Big) = \exp\Big(\tilde{O}\Big(\frac{n}{r\log r} + r\log^2 r\Big)\Big),$

and, implementing optcol(G = (V, E)) using the $O^*(2^{|V|})$-time algorithm from [5], Line 6 takes $O^*(2^{n/(r\log r)})$ time as well; the claimed running time follows.
Let us remark that Algorithm CHR uses $\exp(n/(r\log r))$ space, but by using a space-efficient alternative for optcol (see, e.g., [5]) this space usage can be reduced to poly(n).
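The two-phase scheme of CHR can be sketched as follows; a hedged illustration in which `approx_is`, `exact_coloring`, and `threshold` stand in for IS+, optcol, and the $n/(r\ln r)$ cutoff from the analysis (all of these names are ours, not the paper's pseudocode):

```python
def CHR(G, r, approx_is, exact_coloring, threshold):
    """Phase 1: peel off an (approximately largest) independent set of the
    remaining induced subgraph as one color class, until at most
    `threshold` vertices remain.  Phase 2: color the remainder exactly
    with fresh colors.  `G` maps vertices to neighbor sets;
    `exact_coloring` returns a vertex -> color dict."""
    coloring, V, c = {}, set(G), 0
    while len(V) > threshold:
        cls = approx_is({v: G[v] & V for v in V}, r)   # one color class
        for v in cls:
            coloring[v] = c
        V -= set(cls)
        c += 1
    rest = exact_coloring({v: G[v] & V for v in V})    # exact phase
    for v, col in rest.items():
        coloring[v] = c + col                          # fresh colors
    return coloring
```

Because each phase-1 class is independent in the current induced subgraph and the two phases use disjoint color ranges, the output is always a proper coloring; the analysis above bounds the number of classes.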

Vertex Cover and Hypergraph Vertex Cover
In this section, we show an application of the sparsification technique to Vertex Cover to obtain Theorem 3. Here the sparsification step is not applied explicitly. Instead, we utilize the Sparsification Lemma of Impagliazzo et al. [25] as a blackbox, and subsequently solve each low-degree instance using an algorithm of Halperin [22]. The Sparsification Lemma shows that an instance of the k-Hypergraph Vertex Cover problem can be reduced to a (sub-)exponential number of low-degree instances.

Lemma 2 (Sparsification Lemma, [9,25]) There is an algorithm that, given a hypergraph H = (V, E) with edges of size at most $k \ge 2$ and a real number $\varepsilon > 0$, produces set systems $H_i = (V, E_i)$ for $i = 1, \ldots, \ell$ in time $O^*(2^{\varepsilon n})$, such that a minimum vertex cover of H can be obtained as a smallest one among the minimum vertex covers of the $H_i$, and for each i the degree $\Delta(H_i)$ is at most $(k/\varepsilon)^{O(k)}$.

The next tool is an approximation algorithm for the k-Hypergraph Vertex Cover problem when the input hypergraph has low degree, due to Halperin [22].

PCP Parameters and Exponential-Time Approximation Hardness
Exponential-time approximation has connections to trade-off questions between three parameters of PCPs: size, freebit, and gap. To formally quantify this connection, we define new terms, formalizing ideas that have already been around in the literature. We define a class of languages FGPCP, which stands for Freebit and Gap-amplifiable PCP. Let c and g be positive reals, and let S, F be non-decreasing functions. A language L is in $FGPCP_c(S, F)$ if there is a constant $g_0 > 1$ such that, for all constants $g \ge g_0$, there is a verifier $V_g$ that, on input $x \in \{0,1\}^n$, has access to a proof $\pi$ with $|\pi| = O(S(n, g))$ and satisfies the following properties:

-The verifier runs in $2^{o(n)}$ time.
-If $x \in L$, then there is a proof $\pi$ such that $V_g^\pi(x)$ accepts with probability at least c.
-If $x \notin L$, then for every proof $\pi$, $V_g^\pi(x)$ accepts with probability at most c/g.
-For each x and each random string r, the verifier has at most F(g) accepting configurations.

The parameters g, S, and $\log F$ are referred to as the gap, size, and freebit of the PCP, respectively. For convenience, we call F(g) the freeness of the PCP. Intuitively, one may view FGPCP as a class of PCPs parameterized by the gap g. An interesting question in the PCP and hardness-of-approximation literature has been to find the smallest possible functions S and F. Roughly speaking, if one can construct a small PCP with small freeness, this can be turned into a stronger lower bound on the running time. The following theorem makes this intuition precise. Here the connection between PCP size and the running-time lower bound is captured by the term $S^{-1}$. When r is fixed and S is an increasing function, $S^{-1}(\cdot, r)$ is well-defined (e.g., the size function $S(n, r) = n\log r$ has inverse $S^{-1}(N, r) = N/\log r$).
We prove the theorem later in this section. Meanwhile, we argue that the trade-off result follows directly.

Corollary 2
Assuming that SAT has no $2^{o(n)}$-time randomized algorithm and that $SAT \in FGPCP_\delta(S, F)$, it must be the case that $S(n, g)\cdot F(g) = \Omega\!\big(\frac{n\log^2 g}{\mathrm{poly}(\log\log g)}\big)$.
Proof Assume otherwise, that such a PCP exists with parameters S and F satisfying $S(n, g)F(g) = o\big(\frac{n\log^2 g}{\mathrm{poly}(\log\log g)}\big)$. Denote |V(G)| by N. Notice that $S^{-1}(N, r) = \omega\big(\frac{N\cdot F(r)\,\mathrm{poly}(\log\log r)}{\log^2 r}\big)$, and Theorem 6 would then imply that no r-approximation for maximum independent set runs within the time bound of Theorem 1, contradicting the existence of our approximation algorithm for the maximum independent set problem.

Now let us phrase the known PCPs in our framework of FGPCP. Chan's PCPs [12] can be stated as $SAT \in FGPCP_{1-o(1)}(\mathrm{poly}, O(\log g))$. Applying our results, this means that if one wants to keep the freebit parameters given by Chan's PCPs, then the size must be at least $\Omega(n\log g)$. Another interesting consequence is a connection between Vertex Cover and freebit PCPs in the polynomial-time setting [4].

Theorem 7 ([4], Section 7 on the "reverse connection") Vertex Cover is $(2-\varepsilon)$-hard to approximate if and only if $SAT \in FGPCP_{1/2-\varepsilon}(\mathrm{poly}, 1)$.
The intended PCPs in Theorem 7 have arbitrarily small soundness while the freeness remains 1. Our Corollary 2 implies that such a PCP must have size at least $\Omega(n\log^2 g)$.

Proof of Theorem 6
First, we define standard terminology for constraint satisfaction problems (CSPs). An input to the general CSP problem is a collection of clauses $C_1, \ldots, C_m$ over n variables $x_1, \ldots, x_n$, where each clause is a predicate over some subset of the variables. Given a CSP $\varphi$, the value of $\varphi$, denoted by $val(\varphi)$, is the maximum number of clauses that can be simultaneously satisfied by an assignment. The goal of the problem is to compute the value of an input CSP. For each clause $C_i$, let $Y_i \subseteq \{x_1, \ldots, x_n\}$ be the set of variables appearing in $C_i$. The freeness of $\varphi$ is the maximum, over all clauses $C_i$, of the number of ways $C_i$ can be satisfied by different assignments to the variables in $Y_i$.
Step 1: Creating a hard CSP We will need the following lemma, which creates a "hard" CSP from an FGPCP. This CSP will be used later to construct a hard instance of Independent Set.

Proof Let g be any number and $V_g$ be the corresponding verifier. On input $\varphi$, we create a CSP $\varphi'$ as follows. For each proof bit i, we have a variable $x_i$; the set of variables is $X = \{x_1, \ldots, x_{S(n)}\}$. We perform $M = 10S(n)g/\delta$ iterations. In iteration j, the verifier picks a random string $r_j \in \{0,1\}^R$, where R is the number of random coins used by the verifier, and we create a predicate $P_j(x_{b_1}, \ldots, x_{b_q})$, where $b_1, \ldots, b_q$ are the proof bits read by the verifier $V_g$ on random string $r_j$. This predicate is true on an assignment $\gamma$ if and only if the verifier accepts the corresponding local assignment.

First, assume that $\varphi$ is satisfiable. Then there is a proof $\pi^*$ such that the verifier $V^{\pi^*}(\varphi)$ accepts with probability $\delta$. Let $\gamma: X \to \{0,1\}$ be the assignment that agrees with $\pi^*$. Then $\gamma$ satisfies each predicate $P_j$ with probability $\delta$, and therefore the expected number of satisfied predicates is $\delta M$. By Chernoff's bound, the probability that $\gamma$ satisfies fewer than $\delta M/2$ predicates is at most $2^{-\delta M/8} \le 2^{-n}$.

Next, assume that $\varphi$ is not satisfiable. For each assignment $\gamma: X \to \{0,1\}$, the fraction of random strings accepted with the corresponding proof $\pi_\gamma$ is at most $\delta/g$. When we pick a random string $r_j$, the probability that $V^{\pi_\gamma}(\varphi, r_j)$ accepts is then at most $\delta/g$. So, over all choices of the M strings, the expected number of satisfied predicates is at most $\delta M/g = 10S(n)$. By Chernoff's bound, the probability that $\gamma$ satisfies more than $2\delta M/g$ predicates is at most $2^{-10S(n)}$. By a union bound over all possible proofs of length S(n) (there are $2^{S(n)}$ such proofs), the probability that there is such a $\gamma$ is at most $2^{S(n)}\cdot 2^{-10S(n)} \le 2^{-S(n)}$.
Step 2: FGLSS reduction The FGLSS reduction is a standard reduction from CSP to Independent Set introduced by Feige et al. [19]. The reduction simply lists all accepting configurations (satisfying partial assignments) of each clause as vertices and adds an edge whenever two configurations conflict. In more detail, for each predicate $P_i$ and each partial assignment $\gamma$ such that $P_i(\gamma)$ is true, we have a vertex $v(i, \gamma)$. For each pair of vertices $v(i, \gamma)$, $v(i', \gamma')$ such that some variable $x_j$ appears in both $P_i$ and $P_{i'}$ with $\gamma(x_j) \ne \gamma'(x_j)$, we add an edge between $v(i, \gamma)$ and $v(i', \gamma')$.

Lemma 4 (FGLSS Reduction, [19]) There is an algorithm that, given an input CSP $\varphi$ with m clauses, n variables, and freeness F, produces a graph G = (V, E) such that (i) $|V(G)| \le mF$ and (ii) $\alpha(G) = val(\varphi)$, where $val(\varphi)$ denotes the maximum number of predicates of $\varphi$ that can be satisfied by an assignment.
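The FGLSS construction can be carried out directly; the following is a minimal Python illustration with our own CSP representation (each predicate is a pair of its variable tuple and the list of accepted assignments to those variables):

```python
from itertools import combinations

def fglss(predicates):
    """FGLSS reduction: one vertex per accepting configuration
    (clause index, satisfying partial assignment); an edge joins two
    configurations that assign some shared variable differently.
    Two distinct configurations of the same clause share all variables
    and differ somewhere, so they are automatically in conflict."""
    vertices = [(i, g) for i, (vs, sat) in enumerate(predicates)
                for g in sat]
    def conflict(a, b):
        (i, ga), (j, gb) = a, b
        vi, vj = predicates[i][0], predicates[j][0]
        shared = set(vi) & set(vj)
        return any(ga[vi.index(x)] != gb[vj.index(x)] for x in shared)
    edges = [(a, b) for a, b in combinations(vertices, 2) if conflict(a, b)]
    return vertices, edges
```

An independent set then corresponds to a consistent choice of one accepting configuration per satisfied clause, which is why $\alpha(G)$ equals the maximum number of simultaneously satisfiable predicates.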
Combining everything Assume that $SAT \in FGPCP_\delta(S, F)$. Let g > 0 be a constant and $V_g$ be the verifier of SAT that gives gap g. By invoking Lemma 3, we obtain a CSP $\varphi_1$ with S(n, g) variables and $100S(n, g)g/\delta$ clauses; moreover, the freeness and gap of $\varphi_1$ are F(g) and g, respectively. Applying the FGLSS reduction, we obtain a graph G with $N = |V(G)| = 100S(n, g)F(g)g/\delta = O(S(n, g)F(g)g)$. Now assume that we have an algorithm A that g-approximates maximum independent set in time $2^{o(S^{-1}(N, g))}$. Notice that $S^{-1}(N, g) \le O(ngF(g))$ (here we used the assumption that S grows at least linearly), and therefore algorithm A distinguishes between Yes- and No-instances in time $2^{o(n)}$, a contradiction.
Hardness under Gap-ETH. Dinur [17] and Manurangsi and Raghavendra [32] conjectured that SAT does not admit an approximation scheme that runs in $2^{o(n)}$ time. We observe a Gap-ETH hardness of $r$-approximating Independent Set in time $2^{n/r^c}$ for some constant $c$. The proof uses a standard amplification technique and is deferred to Sect. 6.2.

Further Research
Our work leaves ample opportunity for exciting research. An obvious open question is to derandomize our branching, e.g., whether Theorem 1 can be proved without randomized algorithms. While the probabilistic approximation guarantee can be easily derandomized using splitters, it seems harder to strengthen the expected running time bound to a worst-case running time bound.
Can we improve the running times of the other algorithms mentioned in the introduction that use the partition argument, possibly using the randomized branching strategy? Specifically, can we $(1+\varepsilon)$-approximate Independent Set on planar graphs in time $O^*(2^{(1/\varepsilon)/\log(1/\varepsilon)})$, or $r$-approximate Independent Set in time $O^*(2^{tw/(r \log r)})$? As mentioned in the introduction, a result of Marx [34] still leaves room for such lower-order improvements. Another open question in this category is how fast we can $r$-approximate $k$-Independent Set, where the goal is to find an independent set of size $k$. Recently, Chalermsook et al. [10] showed, under the Gap-ETH assumption, that finding a $k$-Independent Set takes time $n^{\Omega(k)}$, even assuming the existence of a $q$-clique for $q \gg k$. It remains open whether one can rule out such an algorithm under ETH. Finally, a big open question in the area is to find or exclude a $(2-\varepsilon)$-approximation for Vertex Cover in subexponential time for some fixed constant $\varepsilon > 0$. We remark that the dependence on the approximation parameter presented in our paper has recently been improved by Manurangsi and Trevisan [33].

A Deterministic Algorithm for Independent Set
In this section, we give a deterministic $r$-approximation algorithm that runs in time $2^{\tilde{O}(n/(r \log r))}$. The algorithm is a simple consequence of Feige's algorithm [18], which we restate below in a slightly different form.

Theorem 8 ([18]) Let $G$ be a graph with independence ratio $\alpha(G)/|V(G)| = 1/k$. Then, for any parameter $t \in \mathbb{N}$, one can find an independent set of size $t \cdot \log_k\left(\frac{n}{6kt}\right)$ in time $k^{O(t)} \cdot \mathrm{poly}(n)$.
Feige used the above theorem with parameters $k = \mathrm{poly}\log n$ and $t = \Theta(\log n/\log\log n)$, so he obtained an algorithm that runs in polynomial time. Here we will be using the power of his algorithm in (mildly) exponential time.
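As a quick check of that parameter choice (a routine calculation, not stated explicitly in [18]): with $k = \mathrm{poly}\log n$ we have $\log k = O(\log\log n)$, so

```latex
k^{O(t)} \cdot \mathrm{poly}(n)
  = 2^{O(t \log k)} \cdot \mathrm{poly}(n)
  = 2^{O\left(\frac{\log n}{\log\log n} \cdot \log\log n\right)} \cdot \mathrm{poly}(n)
  = \mathrm{poly}(n).
```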
- If $\alpha(G) < n/\log^2 r$, then we can enumerate all independent sets of size $n/(r \log^2 r)$ (this is an $r$-approximation) in time $\binom{n}{n/(r \log^2 r)} \le (e r \log^2 r)^{\frac{n}{r \log^2 r}} \le 2^{O(n/(r \log r))}$.
- Otherwise, the independence ratio is at least $1/k$ where $k = \log^2 r$. We choose $t = \lceil n/(r \log r) \rceil$, so Feige's algorithm finds an independent set of size at least
$$t \cdot \log_k \left(\frac{n}{6kt}\right) = \Omega\left(\frac{n}{r \log r} \cdot \log_k (r \log r)\right) = \Omega(n/(r \log\log r)).$$
The running time of this algorithm is $k^{O(t)} \cdot \mathrm{poly}(n) = 2^{O\left(\frac{n}{r \log r} \log\log r\right)}$. If we redefine $r' = r \log\log r$, then the algorithm is an $r'$-approximation algorithm that runs in time $2^{O(n (\log\log r')^2/(r' \log r'))}$.
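For completeness, the arithmetic behind the second case can be unfolded as follows (our own routine calculation, with $k = \log^2 r$ and $t = \lceil n/(r \log r) \rceil$):

```latex
\frac{n}{6kt} = \Theta\!\left(\frac{r}{\log r}\right),
\qquad
\log_k \frac{n}{6kt}
  = \Theta\!\left(\frac{\log(r/\log r)}{\log \log^2 r}\right)
  = \Theta\!\left(\frac{\log r}{\log\log r}\right),
```

so the returned independent set has size $\Omega\!\left(\frac{n}{r \log r} \cdot \frac{\log r}{\log\log r}\right) = \Omega\!\left(\frac{n}{r \log\log r}\right)$, while the running time is $k^{O(t)} = 2^{O(t \log k)} = 2^{O\left(\frac{n}{r \log r} \log\log r\right)}$.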

Gap-ETH Hardness of Independent Set (Sketch)
We now sketch the proof. We are given an $n$-variable 3-CNF-SAT formula $\phi$ with perfect completeness and soundness $1 - \epsilon$ for some $\epsilon > 0$. We first perform standard amplification and sparsification to get $\phi'$ with gap parameter $g$, number of clauses $ng$, and freeness $g^{O(1/\epsilon)}$. Then we perform the FGLSS reduction to get a graph $G$ such that $|V(G)| = n g^{O(1/\epsilon)}$. Therefore, a $g$-approximation in time $2^{o(|V(G)|/g^{O(1/\epsilon)})}$ would lead to an algorithm that satisfies more than a $(1-\epsilon)$ fraction of the clauses of a 3-CNF-SAT formula in time $2^{o(n)}$. In other words, any $2^{n/r^c}$-time algorithm that $r$-approximates Independent Set can be turned into a $(1 + O(1/c))$-approximation algorithm for 3-CNF-SAT that runs in subexponential time.
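The parameter translation in the last sentence can be spelled out as follows (a routine calculation under the parameters above): with $r = g$ and $|V(G)| = n\, g^{O(1/\epsilon)}$, an $r$-approximation in time $2^{|V(G)|/r^c}$ runs, in terms of $n$, in time

```latex
2^{\,|V(G)|/g^{c}} \;=\; 2^{\,n\, g^{O(1/\epsilon)}/g^{c}} \;=\; 2^{o(n)}
\quad \text{whenever } \epsilon = \Omega(1/c),
```

and distinguishing formulas of value $1$ from those of value below $1-\epsilon$ is exactly a $\frac{1}{1-\epsilon} = (1+O(\epsilon)) = (1+O(1/c))$-approximation for 3-CNF-SAT.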