
Improved Approximation Algorithms for Projection Games

Published in Algorithmica.

Abstract

The projection games (aka Label Cover) problem is of great importance to the field of approximation algorithms, since most of the NP-hardness of approximation results known today are obtained by reductions from Label Cover. In this paper we design several approximation algorithms for projection games: (1) a polynomial-time approximation algorithm that improves on the previous best approximation by Charikar et al. (Algorithmica 61(1):190–206, 2011); (2) a sub-exponential-time algorithm with a much tighter approximation for the case of smooth projection games; (3) a polynomial-time approximation scheme (PTAS) for projection games on planar graphs, together with a tight running-time lower bound for such approximation schemes. The conference version of this paper contained only the PTAS, not the running-time lower bound.

Fig. 1
Fig. 2


Notes

  1. Recall that, for \(\alpha \ge 1\), an \(\alpha \)-approximation algorithm for a maximization problem is an algorithm that, for every input, outputs a solution of value at least \(1/\alpha \) times the value of the optimal solution. \(\alpha \) is called the “approximation ratio” of the algorithm.

  2. Recall that the Exponential Time Hypothesis (ETH) states that 3-SAT cannot be solved in sub-exponential time.

  3. In both [6] and [10], the vertices, not edges, are partitioned. However, it is obvious that the lemma works for edges as well.

  4. In [17], the theorem is phrased as a running time lower bound for PTAS but it is clear from the proof that the theorem can be stated as our more specific version too.

References

  1. Arora, S., Barak, B., Steurer, D.: Subexponential algorithms for unique games and related problems. In: Proceedings of the 51st IEEE Symposium on Foundations of Computer Science, FOCS ’10, pp. 563–572. IEEE Computer Society, Washington, DC, USA (2010)

  2. Arora, S., Lund, C., Motwani, R., Sudan, M., Szegedy, M.: Proof verification and the hardness of approximation problems. J. ACM 45(3), 501–555 (1998)


  3. Arora, S., Safra, S.: Probabilistic checking of proofs: a new characterization of NP. J. ACM 45(1), 70–122 (1998)


  4. Babai, L., Fortnow, L., Levin, L.A., Szegedy, M.: Checking computations in polylogarithmic time. In: Proceedings of the 23rd ACM Symposium on Theory of Computing, STOC ’91, pp. 21–32. ACM, New York, NY, USA (1991)

  5. Babai, L., Fortnow, L., Lund, C.: Non-deterministic exponential time has two-prover interactive protocols. Comput. Complex. 1(1), 3–40 (1991)


  6. Baker, B.S.: Approximation algorithms for NP-complete problems on planar graphs. J. ACM 41(1), 153–180 (1994)


  7. Bellare, M., Goldreich, O., Sudan, M.: Free bits, PCPs, and nonapproximability: towards tight results. SIAM J. Comput. 27(3), 804–915 (1998)


  8. Charikar, M., Hajiaghayi, M., Karloff, H.: Improved approximation algorithms for label cover problems. Algorithmica 61(1), 190–206 (2011)


  9. Dinur, I., Steurer, D.: Analytical approach to parallel repetition. In: Proceedings of the 46th ACM Symposium on Theory of Computing, STOC ’14, pp. 624–633. ACM, New York, NY, USA (2014)

  10. Eppstein, D.: Diameter and treewidth in minor-closed graph families. Algorithmica 27(3), 275–291 (2000)


  11. Feige, U.: A threshold of \(\ln n\) for approximating set cover. J. ACM 45(4), 634–652 (1998)


  12. Garey, M., Johnson, D., Stockmeyer, L.: Some simplified NP-complete graph problems. Theor. Comput. Sci. 1(3), 237–267 (1976)


  13. Håstad, J.: Some optimal inapproximability results. J. ACM 48(4), 798–859 (2001)


  14. Holmerin, J., Khot, S.: A new PCP outer verifier with applications to homogeneous linear equations and max-bisection. In: Proceedings of the 36th ACM Symposium on Theory of Computing, STOC ’04, pp. 11–20. ACM, New York, NY, USA (2004)

  15. Khot, S.: Hardness results for coloring 3-colorable 3-uniform hypergraphs. In: Proceedings of the 43rd IEEE Symposium on Foundations of Computer Science, FOCS ’02, pp. 23–32. IEEE Computer Society, Washington, DC, USA (2002)

  16. Khot, S.: On the power of unique 2-prover 1-round games. In: Proceedings of the 34th ACM Symposium on Theory of Computing, STOC ’02, pp. 767–775. ACM, New York, NY, USA (2002)

  17. Marx, D.: On the optimality of planar and geometric approximation schemes. In: Proceedings of the 48th IEEE Symposium on Foundations of Computer Science, FOCS ’07, pp. 338–348. IEEE Computer Society, Washington, DC, USA (2007)

  18. Moshkovitz, D.: The projection games conjecture and the NP-hardness of \(\ln n\)-approximating set-cover. In: Gupta, A., Jansen, K., Rolim, J., Servedio, R. (eds.) Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques vol. 7408 of Lecture Notes in Computer Science, pp. 276–287. Springer, Berlin (2012)


  19. Moshkovitz, D., Raz, R.: Two-query PCP with subconstant error. J. ACM 57(5), 29:1–29:29 (2010)


  20. Peleg, D.: Approximation algorithms for the label-cover max and red-blue set cover problems. J. Discret. Algorithms 5(1), 55–64 (2007)


  21. Raz, R.: A parallel repetition theorem. SIAM J. Comput. 27(3), 763–803 (1998)



Author information

Correspondence to Pasin Manurangsi.

Additional information

Pasin Manurangsi—Part of this work was completed while the author was at Massachusetts Institute of Technology.

This material is based upon work supported by the National Science Foundation under Grant numbers 1218547 and 1452302.

Appendix

1.1 Polynomial-Time Approximation Algorithms for Projection Games with Nonuniform Preimage Sizes

In this section, we describe a polynomial-time \(O((n_A|\varSigma _A|)^\frac{1}{4})\)-approximation algorithm for satisfiable projection games, including those with nonuniform preimage sizes.

It is not hard to see that, if the \(p_e\)’s are not all equal, then the “know your neighbors’ neighbors” algorithm from Sect. 4.1 no longer necessarily satisfies an \(h_{max}/\overline{p}\) fraction of the edges. The reason is that, for a vertex a with large \(|N_2(a)|\) and any assignment \(\sigma _a \in \varSigma _A\) to that vertex, the preimage \(\pi _e^{-1}(\pi _{(a, b)}(\sigma _a))\) might be large for each neighbor b of a and each edge e with endpoint b. We resolve this issue by using, instead of all the edges, only “good” edges whose preimage sizes under the optimal assignment are at most a particular threshold. However, whether an edge is “good” depends not only on the edge itself but also on the assignment to the edge’s endpoint in B, so we need some extra definitions that generalize h and p, as follows.

\(\sigma _b^{max}\) :

for each \(b \in B\), denotes \(\sigma _b \in \varSigma _B\) that maximizes the value of \(\sum _{a \in N(b)} |\pi ^{-1}_{(a, b)}(\sigma _b)|\).

\(p^{max}_e\) :

for each edge \(e = (a, b)\), denotes \(\left| \pi ^{-1}_e(\sigma _b^{max})\right| \), the size of the preimage of e if b is assigned \(\sigma _b^{max}\).

\(\overline{p}^{max}\) :

denotes the average of \(p^{max}_e\) over all \(e \in E\), i.e. \(\frac{1}{|E|}\sum _{e \in E} p^{max}_e\). We will use \(2\overline{p}^{max}\) as a threshold for determining “good” edges as we shall see below.

E(S):

for each set of vertices S, denotes the set of edges with at least one endpoint in S, i.e. \(\{(u, v) \in E \mid u \in S \text { or } v \in S\}\).

\(E_N^{max}\) :

denotes the maximum number of edges coming out of N(a) over all \(a \in A\), i.e., \(\max _{a \in A}\{|E(N(a))|\}\).

\(\varSigma ^*_A(a)\) :

for each \(a \in A\), denotes the set of all assignments \(\sigma _a\) to a that, for every \(b \in B\), there exists an assignment \(\sigma _b\) such that, if a is assigned \(\sigma _a\), b is assigned \(\sigma _b\) and all a’s neighbors are assigned according to a, then there are still possible assignments left for all vertices in \(N_2(a) \cap N(b)\), i.e., \(\{\sigma _a \in \varSigma _A \mid \text { for each } b \in B, \text { there is } \sigma _b \in \varSigma _B \text { such that, for all } \) \( a' \in N_2(a) \cap N(b)\text {, } \left( \bigcap _{b' \in N(a') \cap N(a)} \pi ^{-1}_{(a', b')}(\pi _{(a, b')}(\sigma _a))\right) \cap \pi ^{-1}_{(a', b)}(\sigma _b) \ne \emptyset \}.\) Note that \(\sigma ^{OPT}_a \in \varSigma ^*_A(a)\). In other words, if we replace \(\varSigma _A\) with \(\varSigma ^*_A(a)\) for each \(a \in A\), then the resulting instance is still satisfiable.

\(N^*(a, \sigma _a)\) :

for each \(a \in A\) and \(\sigma _a \in \varSigma ^*_A(a)\), denotes \(\{b \in N(a) \mid |\pi ^{-1}_{(a', b)}(\pi _{(a, b)}(\sigma _a))|\) \(\le 2\overline{p}^{max} \text { for some } a' \in N(b)\}\). Provided that we assign \(\sigma _a\) to a, this set contains all the neighbors of a with at least one good edge as we discussed above. Note that \(\pi _{(a, b)}(\sigma _a)\) is the assignment to b corresponding to the assignment of a.

\(N^*_2(a, \sigma _a)\) :

for each \(a \in A\) and \(\sigma _a \in \varSigma ^*_A(a)\), denotes all the neighbors of neighbors of a with at least one good edge with another endpoint in N(a) when a is assigned \(\sigma _a\), i.e., \(\bigcup _{b \in N^*(a, \sigma _a)} \{a' \in N(b) \mid |\pi ^{-1}_{(a', b)}(\pi _{(a, b)}(\sigma _a))| \le 2\overline{p}^{max}\}\).

\(h^*(a, \sigma _a)\) :

for each \(a \in A\) and \(\sigma _a \in \varSigma ^*_A(a)\), denotes \(|E(N^*_2(a, \sigma _a))|\). In other words, \(h^*(a, \sigma _a)\) represents how well \(N^*_2(a, \sigma _a)\) spans the graph G.

\(E^*(a, \sigma _a)\) :

for each \(a \in A\) and \(\sigma _a \in \varSigma ^*_A(a)\), denotes \(\{(a', b) \in E \mid b \in N^*(a, \sigma _a),\) \(a' \in N^*_2(a, \sigma _a) \text { and } |\pi ^{-1}_{(a', b)}(\pi _{(a, b)}(\sigma _a))| \le 2\overline{p}^{max}\}\). When a is assigned \(\sigma _a\), this is the set of all good edges with one endpoint in N(a).

\(h^*_{max}\) :

denotes \(\max _{a \in A, \sigma _a \in \varSigma ^*_A(a)} h^*(a, \sigma _a)\).

\(E'\) :

denotes the set of all edges \(e \in E\) such that \(p_e \le 2\overline{p}^{max}\), i.e., \(E' = \{e \in E \mid p_e \le 2\overline{p}^{max}\}\). Recall that \(p_e\) was defined earlier as \(|\pi _e^{-1}(\sigma _b^{OPT})|\). Since \(E'\) depends on \(\sigma _b^{OPT}\), it will not be used in the algorithms below but only in the analyses. The same goes for all the notations defined below.

\(G'\) :

denotes a subgraph of G with its edges being \(E'\).

\(E'(S)\) :

for each set of vertices S, denotes the set of all edges in \(E'\) with at least one endpoint in S, i.e., \(\{(u, v) \in E' \mid u \in S \text { or } v \in S\}\).

\(E'_S\) :

for each set of vertices S, denotes the set of edges with both endpoints in S, i.e. \(E'_S = \{(a, b) \in E' \mid a \in S \text { and } b \in S\}\).

\(N'(u)\) :

for each vertex u, denotes the set of vertices that are neighbors of u in the graph \(G'\).

\(N'(U)\) :

for each set of vertices U, denotes the set of vertices that are neighbors of at least one vertex in U in the graph \(G'\).

\(N'_2(u)\) :

for each vertex u, denotes \(N'(N'(u))\), the set of neighbors of neighbors of u in \(G'\).

From the definitions above, we can derive two very useful observations as stated below.

Observation 2

\(|E'| \ge \frac{|E|}{2}\)

Proof

Suppose for the sake of contradiction that \(|E'| < \frac{|E|}{2}\). From the definition of \(E'\), this means that \(p_e > 2\overline{p}^{max}\) for more than \(\frac{|E|}{2}\) edges e, and hence \(\sum _{e \in E} p_e > \frac{|E|}{2} \cdot 2\overline{p}^{max} = |E|\overline{p}^{max}\). As a result, we can conclude that

$$\begin{aligned} |E|\overline{p}^{max}&< \sum _{e \in E} p_e \\&= \sum _{b \in B} \sum _{a \in N(b)} p_{(a, b)} \\&= \sum _{b \in B} \sum _{a \in N(b)} |\pi ^{-1}_{(a, b)}(\sigma _b^{OPT})| \\&\le \sum _{b \in B} \sum _{a \in N(b)} |\pi ^{-1}_{(a, b)}(\sigma _b^{max})| \\&= |E|\overline{p}^{max}. \end{aligned}$$

This is a contradiction. Hence, \(|E'| \ge \frac{|E|}{2}\).

Observation 3

If \(\sigma _a = \sigma _a^{OPT}\), then \(N^*(a, \sigma _a) = N'(a)\), \(N^*_2(a, \sigma _a) = N'_2(a)\) and \(E^*(a, \sigma _a) = E'(N'(a))\).

This observation is obvious since, when plugging in \(\sigma _a^{OPT}\), each pair of definitions, \(N^*(a, \sigma _a)\) and \(N'(a)\), \(N^*_2(a, \sigma _a)\) and \(N'_2(a)\), and \(E^*(a, \sigma _a)\) and \(E'(N'(a))\), coincides.

Note also that from its definition, \(G'\) is the graph with good edges when the optimal assignments are assigned to B. Unfortunately, we do not know the optimal assignments to B and, thus, do not know how to find \(G'\) in polynomial time. However, directly from the definitions above, \(\sigma _b^{max}, p_e^{max}, \overline{p}^{max}, E_N^{max}, \varSigma ^*_A(a),\) \(N^*(a, \sigma _a), N^*_2(a, \sigma _a)\), \(h^*(a, \sigma _a)\) and \(h^*_{max}\) can be computed in polynomial time. These notations will be used in the upcoming algorithms. Other defined notations we do not know how to compute in polynomial time and will only be used in the analyses.
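To make the computable quantities concrete, the following sketch (our own illustration, not part of the paper) computes \(\sigma _b^{max}\), \(p^{max}_e\) and \(\overline{p}^{max}\) directly from their definitions, under an assumed dictionary encoding in which `pi[(a, b)]` maps each label in \(\varSigma _A\) to its projection in \(\varSigma _B\):

```python
from collections import defaultdict

def max_preimage_stats(edges, Sigma_A, Sigma_B, pi):
    """Compute sigma_b^max, p_e^max and their average p-bar^max.

    edges : list of (a, b) pairs
    pi    : dict mapping each edge (a, b) to a dict Sigma_A -> Sigma_B
    """
    # Group edges by their right endpoint b.
    by_b = defaultdict(list)
    for (a, b) in edges:
        by_b[b].append((a, b))

    sigma_max = {}
    for b, es in by_b.items():
        # sigma_b^max maximizes the total preimage size over edges at b.
        def total_preimage(sigma_b):
            return sum(
                sum(1 for s in Sigma_A if pi[e][s] == sigma_b) for e in es
            )
        sigma_max[b] = max(Sigma_B, key=total_preimage)

    # p_e^max is the preimage size of e when b is assigned sigma_b^max.
    p_max = {
        (a, b): sum(1 for s in Sigma_A if pi[(a, b)][s] == sigma_max[b])
        for (a, b) in edges
    }
    p_bar_max = sum(p_max.values()) / len(edges)
    return sigma_max, p_max, p_bar_max
```

The remaining computable notations (\(E_N^{max}\), \(\varSigma ^*_A(a)\), etc.) can be derived analogously by enumerating labels and neighborhoods.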

For the nonuniform preimage sizes case, we use five algorithms, as opposed to the four algorithms used in the uniform case. We proceed to describe those five algorithms. In the end, by taking the best of these five, we obtain a polynomial-time \(O\left( (n_A|\varSigma _A|)^{1/4}\right) \)-approximation algorithm as desired.

We now list the algorithms along with their rough descriptions; detailed description and analysis of each algorithm will follow later on:

  1. 1.

    Satisfy one neighbor: \(|E|/n_B\)-approximation. Assign each vertex in A an arbitrary assignment. Each vertex in B is then assigned to satisfy one of its neighboring edges. This algorithm satisfies at least \(n_B\) edges.

  2. 2.

    Greedy assignment: \({|\varSigma _A|}/{\overline{p}^{max}}\)-approximation. Each vertex in B is assigned an assignment \(\sigma _b\in \varSigma _B\) that has the largest number of preimages across neighboring edges, \(\sum _{a \in N(b)} |\pi _{(a, b)}^{-1}(\sigma _b)|\). Each vertex in A is then assigned so that it satisfies as many edges as possible. This algorithm works well when \(\varSigma _B\) assignments have many preimages.

  3. 3.

    Know your neighbors: \(|E|/E_N^{max}\)-approximation. For a vertex \(a_0 \in A\), pick an element of \(\varSigma ^*_A(a_0)\) and assign it to \(a_0\). Assign its neighbors \(N(a_0)\) accordingly. Then, for each node in \(N_2(a_0)\), we find one assignment that satisfies all the edges between it and vertices in \(N(a_0)\).

  4. 4.

    Know your neighbors’ neighbors: \(O(|E|\overline{p}^{max}/h^*_{max})\)-approximation. For a vertex \(a_0 \in A\), we go over all possible assignments in \(\varSigma _A^*(a_0)\) to it. For each assignment, we assign its neighbors \(N(a_0)\) accordingly. Then, for each node in \(N_2(a_0)\), we keep only the assignments that satisfy all the edges between it and vertices in \(N(a_0)\).

    When \(a_0\) is assigned the optimal assignment, the number of choices for each node in \(N^*_2(a_0)\) is reduced to at most \(2\overline{p}^{max}\) possibilities. In this way, we can satisfy a \(1/(2\overline{p}^{max})\) fraction of the edges that touch \(N^*_2(a_0)\). This satisfies many edges when there exists \(a_0\in A\) such that \(N^*_2(a_0)\) spans many edges.

  5. 5.

    Divide and Conquer: \(O(n_A n_B (h^*_{max} + E_N^{max})/|E|^2)\)-approximation. For every \(a\in A\), we can fully satisfy \(N^*(a) \cup N^*_2(a)\) efficiently, and give up on satisfying the other edges that touch this subset. Repeating this process, we can satisfy an \(\varOmega (|E|^2/(n_A n_B (h^*_{max} + E_N^{max})))\) fraction of the edges.

Aside from the new “know your neighbors” algorithm, the main idea of each algorithm remains the same as in the uniform preimage sizes case. All the details of each algorithm are described below.

Satisfy One Neighbor Algorithm The algorithm is exactly the same as that of the uniform case.

Lemma 11

For satisfiable instances of projection games, an assignment that satisfies at least \(n_B\) edges can be found in polynomial time, which gives the approximation ratio of \(\frac{|E|}{n_B}\).

Proof

The proof is exactly the same as that of Lemma 1.
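As a concrete illustration (not from the paper), the satisfy-one-neighbor algorithm can be sketched as follows, under the same assumed encoding where `pi[(a, b)]` maps each label of a to the projected label of b; we also assume every vertex in B has at least one incident edge:

```python
def satisfy_one_neighbor(A, Sigma_A, edges, pi):
    """Give every a in A an arbitrary label, then give each b in B the
    label projected along one incident edge, satisfying >= n_B edges."""
    sigma = {a: Sigma_A[0] for a in A}          # arbitrary choice on the A side
    for (a, b) in edges:
        if b not in sigma:                      # first incident edge wins
            sigma[b] = pi[(a, b)][sigma[a]]
    satisfied = sum(1 for (a, b) in edges if pi[(a, b)][sigma[a]] == sigma[b])
    # At least one edge per right vertex is satisfied by construction.
    assert satisfied >= len({b for _, b in edges})
    return sigma, satisfied
```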

Greedy Assignment Algorithm The algorithm is exactly the same as that of the uniform case.

Lemma 12

There exists a polynomial-time \(\frac{|\varSigma _A|}{\overline{p}^{max}}\)-approximation algorithm for satisfiable instances of projection games.

Proof

The proof of this lemma differs only slightly from the proof of Lemma 2.

The algorithm works as follows:

  1. 1.

    For each \(b \in B\), assign it the \(\sigma ^*_b \in \varSigma _B\) that maximizes \(\sum _{a \in N(b)} |\pi _{(a, b)}^{-1}(\sigma _b)|\) over \(\sigma _b \in \varSigma _B\).

  2. 2.

    For each \(a \in A\), assign it the \(\sigma ^*_a \in \varSigma _A\) that maximizes the number of edges satisfied, \(|\{b \in N(a) \mid \pi _{(a, b)}(\sigma _a) = \sigma ^*_b\}|\), over \(\sigma _a \in \varSigma _A\).

Let \(e^*\) be the number of edges that get satisfied by this algorithm. We have

$$\begin{aligned} e^*&= \sum _{a \in A} |\{b \in N(a) \mid \pi _{(a, b)}(\sigma ^*_a) = \sigma ^*_b\}|. \end{aligned}$$

Due to the second step, for each \(a \in A\), the number of edges satisfied is at least an average of the number of edges satisfied over all assignments in \(\varSigma _A\). This can be written as follows.

$$\begin{aligned} e^*&= \sum _{a \in A} |\{b \in N(a) \mid \pi _{(a, b)}(\sigma ^*_a) = \sigma ^*_b\}| \\&\ge \sum _{a \in A} \frac{\sum _{\sigma _a \in \varSigma _A} |\{b \in N(a) \mid \pi _{(a, b)}(\sigma _a) = \sigma ^*_b\}|}{|\varSigma _A|} \\&= \sum _{a \in A} \frac{\sum _{b \in N(a)} |\pi ^{-1}_{(a, b)}(\sigma ^*_b)|}{|\varSigma _A|} \\&= \frac{1}{|\varSigma _A|} \sum _{a \in A} \sum _{b \in N(a)} \left| \pi ^{-1}_{(a, b)}(\sigma ^*_b)\right| . \end{aligned}$$

From the definition of \(\sigma _b^{max}\), we can conclude that \(\sigma _b^* = \sigma _b^{max}\) for all \(b \in B\). As a result, we can conclude that

$$\begin{aligned} e^*&\ge \frac{1}{|\varSigma _A|} \sum _{a \in A} \sum _{b \in N(a)} \left| \pi ^{-1}_{(a, b)}(\sigma ^*_b)\right| \\&= \frac{1}{|\varSigma _A|} \sum _{a \in A} \sum _{b \in N(a)} \left| \pi ^{-1}_{(a, b)}(\sigma ^{max}_b)\right| \\&= \frac{1}{|\varSigma _A|} \sum _{a \in A} \sum _{b \in N(a)} p^{max}_{(a, b)} \\&= \frac{1}{|\varSigma _A|} |E|\,\overline{p}^{max} \\&= \frac{\overline{p}^{max}}{|\varSigma _A|} |E|. \end{aligned}$$

Hence, this algorithm satisfies at least \(\frac{\overline{p}^{max}}{|\varSigma _A|}\) fraction of the edges, which concludes our proof.
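The two steps above can be sketched as follows (our own illustration, assuming the dictionary encoding of projections used throughout these sketches; `pi[(a, b)]` maps each label in \(\varSigma _A\) to its projection):

```python
from collections import defaultdict

def greedy_assignment(A, B, Sigma_A, Sigma_B, edges, pi):
    """Step 1: each b takes sigma_b^max (largest total preimage size);
    step 2: each a takes the label satisfying the most incident edges."""
    nbrs_of_a, nbrs_of_b = defaultdict(list), defaultdict(list)
    for (a, b) in edges:
        nbrs_of_a[a].append(b)
        nbrs_of_b[b].append(a)

    sigma = {}
    for b in B:
        # sigma_b^max maximizes the total preimage size over edges at b.
        sigma[b] = max(
            Sigma_B,
            key=lambda t: sum(
                sum(1 for s in Sigma_A if pi[(a, b)][s] == t)
                for a in nbrs_of_b[b]
            ),
        )
    for a in A:
        # Pick the label for a that satisfies the most incident edges.
        sigma[a] = max(
            Sigma_A,
            key=lambda s: sum(
                1 for b in nbrs_of_a[a] if pi[(a, b)][s] == sigma[b]
            ),
        )
    return sigma
```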

1.2 Know Your Neighbors Algorithm

The next algorithm shows that one can satisfy all the edges with one endpoint in the neighbors of a vertex \(a_0\in A\).

Lemma 13

For each \(a_0 \in A\), there exists a polynomial time \(\frac{|E|}{|E(N(a_0))|}\)-approximation algorithm for satisfiable instances of projection games.

Proof

The algorithm works as follows:

  1. 1.

    Pick any assignment \(\sigma _{a_0} \in \varSigma ^*_A(a_0)\) and assign it to \(a_0\).

  2. 2.

    Assign \(\sigma _b = \pi _{(a_0, b)}(\sigma _{a_0})\) to b for all \(b \in N(a_0)\).

  3. 3.

    For each \(a \in N_2(a_0)\), find the set of plausible assignments to a, i.e., \(S_a = \{\sigma _a\in \varSigma _A \mid \forall b \in N(a) \cap N(a_0), \pi _{(a, b)}(\sigma _a) = \sigma _b\}\). Pick one \(\sigma ^*_a\) from this set and assign it to a. Note that \(S_a \ne \emptyset \) from the definition of \(\varSigma ^*_A(a_0)\).

  4. 4.

    Assign any assignment to unassigned vertices.

  5. 5.

    Output the assignment \(\{\sigma ^*_{a}\}_{a\in A}\), \(\{\sigma ^*_b\}_{b\in B}\) from the previous step.

From step 3, we can conclude that all the edges in \(E(N(a_0))\) get satisfied. This yields the approximation ratio of \(\frac{|E|}{|E(N(a_0))|}\) as desired.
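A sketch of these five steps (our own illustration, under the assumed dictionary encoding of the projections; `sigma_a0` is assumed to come from \(\varSigma ^*_A(a_0)\), which guarantees that every candidate set \(S_a\) below is nonempty):

```python
def know_your_neighbors(A, Sigma_A, edges, pi, a0, sigma_a0):
    """Satisfy every edge in E(N(a0)), given sigma_a0 in Sigma_A^*(a0)."""
    sigma = {a0: sigma_a0}                          # step 1
    N_a0 = {b for (a, b) in edges if a == a0}
    for b in N_a0:                                  # step 2: project onto N(a0)
        sigma[b] = pi[(a0, b)][sigma_a0]
    for (a, b) in edges:                            # step 3: vertices of N_2(a0)
        if b in N_a0 and a not in sigma:
            S_a = [s for s in Sigma_A
                   if all(pi[(a, b2)][s] == sigma[b2]
                          for (a2, b2) in edges
                          if a2 == a and b2 in N_a0)]
            sigma[a] = S_a[0]                       # nonempty by assumption
    for (a, b) in edges:                            # steps 4-5: leftover vertices
        sigma.setdefault(a, Sigma_A[0])
        sigma.setdefault(b, pi[(a, b)][sigma[a]])
    return sigma
```

Here the leftover B-vertices are assigned by projecting from one neighbor, as in the satisfy-one-neighbor algorithm; the paper only requires an arbitrary assignment at that point.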

1.3 Know Your Neighbors’ Neighbors Algorithm

The next algorithm shows that if the neighbors of neighbors of a vertex \(a_0\in A\) expand, then one can satisfy many of the (many!) edges that touch the neighbors of \(a_0\)’s neighbors. While the core idea is similar to the uniform version, in this version, we will need to consider \(N^*_2(a_0, \sigma _{a_0})\) instead of \(N_2(a_0)\) in order to ensure that the number of possible choices left for each vertex in this set is at most \(2\overline{p}^{max}\).

Lemma 14

For each \(a_0 \in A\) and \(\sigma _{a_0} \in \varSigma _A^*(a_0)\), there exists a polynomial-time \(O\left( \frac{|E|\overline{p}^{max}}{h^*(a_0, \sigma _{a_0})}\right) \)-approximation algorithm for satisfiable instances of projection games.

Proof

To prove Lemma 14, we first fix \(a_0 \in A\) and \(\sigma _{a_0} \in \varSigma _A^*(a_0)\). We will describe an algorithm that satisfies \(\varOmega \left( \frac{h^*(a_0, \sigma _{a_0})}{\overline{p}^{max}}\right) \) edges, which implies the lemma.

The algorithm works as follows:

  1. 1.

    Assign \(\sigma _b = \pi _{(a_0, b)}(\sigma _{a_0})\) to b for all \(b \in N(a_0)\).

  2. 2.

    For each \(a \in A\), find the set of plausible assignments to a, i.e., \(S_a = \{\sigma _a \in \varSigma _A \mid \forall b \in N(a) \cap N(a_0), \pi _{(a, b)}(\sigma _a) = \sigma _b\}\). Note that \(S_a \ne \emptyset \) from the definition of \(\varSigma ^*_A(a_0)\).

  3. 3.

    For all \(b \in B\), pick an assignment \(\sigma ^*_b\) for b that maximizes the average number of satisfied edges over all assignments in \(S_a\) to vertices a in \(N(b) \cap N^*_2(a_0, \sigma _{a_0})\), i.e., maximizes \(\sum _{a \in N(b) \cap N^*_2(a_0, \sigma _{a_0})} |\pi ^{-1}_{(a, b)}(\sigma _b) \cap S_a|\).

  4. 4.

    For each vertex \(a \in A\), pick an assignment \(\sigma ^*_a \in S_a\) that maximizes the number of satisfied edges, \(|\{b \in N(a) \mid \pi _{(a, b)}(\sigma _a) = \sigma _b^*\}|\) over all \(\sigma _a \in S_a\).

We will prove that this algorithm indeed satisfies at least \(\frac{h^*(a_0, \sigma _{a_0})}{2\overline{p}^{max}}\) edges.

Let \(e^*\) be the number of edges satisfied by the algorithm. We have

$$\begin{aligned} e^*&= \sum _{a \in A} |\{b \in N(a) \mid \pi _{(a, b)}(\sigma ^*_a) = \sigma ^*_b\}|. \end{aligned}$$

Since for each \(a \in A\), the assignment \(\sigma ^*_a\) is chosen to maximize the number of edges satisfied, we can conclude that the number of edges satisfied by selecting \(\sigma ^*_a\) is at least the average of the number of edges satisfied over all \(\sigma _a \in S_a\).

As a result, we can conclude that

$$\begin{aligned} e^*&\ge \sum _{a \in A} \frac{\sum _{\sigma _a \in S_a} |\{b \in N(a) \mid \pi _{(a, b)}(\sigma _a) = \sigma ^*_b\}|}{|S_a|} \\&= \sum _{a \in A} \frac{\sum _{\sigma _a \in S_a} \sum _{b \in N(a)} 1_{\pi _{(a, b)}(\sigma _a) = \sigma ^*_b}}{|S_a|} \\&= \sum _{a \in A} \frac{\sum _{b \in N(a)} \sum _{\sigma _a \in S_a} 1_{\pi _{(a, b)}(\sigma _a) = \sigma ^*_b}}{|S_a|} \\&= \sum _{a \in A} \frac{\sum _{b \in N(a)} |\pi _{(a, b)}^{-1}(\sigma ^*_b) \cap S_a|}{|S_a|} \\&= \sum _{b \in B} \sum _{a \in N(b)} \frac{|\pi _{(a, b)}^{-1}(\sigma ^*_b) \cap S_a|}{|S_a|} \\&\ge \sum _{b \in B} \sum _{a \in N(b) \cap N^*_2(a_0, \sigma _{a_0})} \frac{ |\pi _{(a, b)}^{-1}(\sigma ^*_b) \cap S_a|}{|S_a|} \end{aligned}$$

From the definition of \(N^*_2(a_0, \sigma _{a_0})\), we can conclude that, for each \(a \in N^*_2(a_0, \sigma _{a_0})\), there exists \(b' \in N^*(a_0, \sigma _{a_0}) \cap N(a)\) such that \(|\pi ^{-1}_{(a, b')}(\sigma _{b'})| \le 2\overline{p}^{max}\). Moreover, from the definition of \(S_a\), we have \(S_a \subseteq \pi ^{-1}_{(a, b')} (\sigma _{b'})\). As a result, we arrive at the following inequalities.

$$\begin{aligned} |S_a|&\le |\pi ^{-1}_{(a, b')} (\sigma _{b'})| \\&\le 2\overline{p}^{max}. \end{aligned}$$

This implies that

$$\begin{aligned} e^*&\ge \frac{1}{2\overline{p}^{max}} \sum _{b \in B} \sum _{a \in N(b) \cap N^*_2(a_0, \sigma _{a_0})} |\pi _{(a, b)}^{-1}(\sigma ^*_b) \cap S_a|. \end{aligned}$$

From the definition of \(\varSigma _A^*(a_0)\), we can conclude that, for each \(b \in B\), there exists \(\sigma _b \in \varSigma _B\) such that \(\pi _{(a, b)}^{-1}(\sigma _b) \cap S_a \ne \emptyset \) for all \(a \in N_2(a_0) \cap N(b)\). Since \(N^*_2(a_0, \sigma _{a_0}) \subseteq N_2(a_0)\), we can conclude that \(|\pi _{(a, b)}^{-1}(\sigma _b) \cap S_a| \ge 1\) for all \(a \in N^*_2(a_0, \sigma _{a_0}) \cap N(b)\).

Since we pick the assignment \(\sigma ^*_b\) that maximizes \(\sum _{a \in N(b) \cap N^*_2(a_0, \sigma _{a_0})} |\pi ^{-1}_{(a, b)}(\sigma _b) \cap S_a|\) over \(\sigma _b \in \varSigma _B\) for each \(b \in B\), we can conclude that

$$\begin{aligned} e^*&\ge \frac{1}{2\overline{p}^{max}} \sum _{b \in B} \sum _{a \in N(b) \cap N^*_2(a_0, \sigma _{a_0})} |\pi _{(a, b)}^{-1}(\sigma ^*_b) \cap S_a|\\&\ge \frac{1}{2\overline{p}^{max}} \sum _{b \in B} \sum _{a \in N(b) \cap N^*_2(a_0, \sigma _{a_0})} |\pi _{(a, b)}^{-1}(\sigma _b) \cap S_a| \\&\ge \frac{1}{2\overline{p}^{max}} \sum _{b \in B} \sum _{a \in N(b) \cap N^*_2(a_0, \sigma _{a_0})} 1. \end{aligned}$$

The last term can be rewritten as

$$\begin{aligned} \frac{1}{2\overline{p}^{max}} \sum _{b \in B} \sum _{a \in N(b) \cap N^*_2(a_0, \sigma _{a_0})} 1&= \frac{1}{2\overline{p}^{max}} \sum _{a \in N^*_2(a_0, \sigma _{a_0})} \sum _{b \in N(a)} 1 \\&= \frac{1}{2\overline{p}^{max}} \sum _{a \in N^*_2(a_0, \sigma _{a_0})} d_a \\&= \frac{h^*(a_0, \sigma _{a_0})}{2\overline{p}^{max}}. \end{aligned}$$

As a result, we can conclude that this algorithm gives an assignment that satisfies at least \(\frac{h^*(a_0, \sigma _{a_0})}{2\overline{p}^{max}}\) edges out of all the |E| edges. Hence, this is a polynomial-time \(O\left( \frac{|E|\overline{p}^{max}}{h^*(a_0, \sigma _{a_0})}\right) \)-approximation algorithm as desired.
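The four steps of the algorithm can be sketched as follows (our own simplified illustration, under the assumed dictionary encoding; `sigma_a0` is assumed to lie in \(\varSigma _A^*(a_0)\) so that every \(S_a\) is nonempty, and, unlike step 3 of the algorithm, the maximization here sums over all neighbors of b rather than only \(N(b) \cap N^*_2(a_0, \sigma _{a_0})\)):

```python
from collections import defaultdict

def know_your_neighbors_neighbors(A, B, Sigma_A, Sigma_B, edges, pi,
                                  a0, sigma_a0):
    """Simplified sketch of the four steps for fixed a0 and sigma_a0."""
    nbrs_of_a, nbrs_of_b = defaultdict(list), defaultdict(list)
    for (a, b) in edges:
        nbrs_of_a[a].append(b)
        nbrs_of_b[b].append(a)

    # Step 1: project sigma_a0 onto all of a0's neighbors.
    sigma = {b: pi[(a0, b)][sigma_a0] for b in nbrs_of_a[a0]}

    # Step 2: plausible labels S_a for every a in A (nonempty by assumption).
    S = {a: [s for s in Sigma_A
             if all(pi[(a, b)][s] == sigma[b]
                    for b in nbrs_of_a[a] if b in sigma)]
         for a in A}

    # Step 3: every remaining b takes the label maximizing the number of
    # plausible preimages over its neighbors.
    for b in B:
        if b not in sigma:
            sigma[b] = max(
                Sigma_B,
                key=lambda t: sum(
                    sum(1 for s in S[a] if pi[(a, b)][s] == t)
                    for a in nbrs_of_b[b]
                ),
            )

    # Step 4: each a takes the plausible label satisfying the most edges.
    for a in A:
        sigma[a] = max(
            S[a],
            key=lambda s: sum(
                1 for b in nbrs_of_a[a] if pi[(a, b)][s] == sigma[b]
            ),
        )
    return sigma
```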

1.4 Divide and Conquer Algorithm

We will present an algorithm that separates the graph into disjoint subgraphs for which we can find the optimal assignments in polynomial time. We shall show below that, if \(h^*(a, \sigma _a)\) is small for all \(a \in A\) and \(\sigma _a \in \varSigma ^*_A(a)\), then we are able to find such subgraphs that contain most of the graph’s edges.

Lemma 15

There exists a polynomial-time \(O\left( \frac{n_An_B(h^*_{max} + E_N^{max})}{|E|^2}\right) \)-approximation algorithm for satisfiable instances of projection games.

Proof

To prove this lemma, we will present an algorithm that gives an assignment that satisfies \(\varOmega \left( \frac{|E|^3}{n_An_B(h^*_{max} + E_N^{max})}\right) \) edges.

We use \(\mathcal {P}\) to represent the collection of subgraphs we find. The family \(\mathcal {P}\) consists of disjoint sets of vertices. Let \(V_\mathcal {P}\) be \(\bigcup _{P \in \mathcal {P}} P\).

For any set S of vertices, define \(G_S\) to be the graph induced on S with respect to G. Moreover, define \(E_S\) to be the set of edges of \(G_S\). We also define \(E_\mathcal {P} = \bigcup _{P \in \mathcal {P}} E_P\). Note that \(E_S\) is similar to \(E'_S\) defined earlier in the appendix. The only difference is that \(E'_S\) is with respect to \(G'\) instead of G.

The algorithm works as follows.

  1. 1.

    Set \(\mathcal {P} \leftarrow \emptyset \).

  2. 2.

    While there exists a vertex \(a \in A\) and \(\sigma _a \in \varSigma ^*_A(a)\) such that

    $$\begin{aligned} |E^*(a, \sigma _a) \cap E_{(A \cup B) - V_\mathcal {P}}| \ge \frac{1}{16} \frac{|E|^2}{n_An_B}: \end{aligned}$$
    1. (a)

      Set \(\mathcal {P} \leftarrow \mathcal {P} \cup \{(N^*_2(a, \sigma _a) \cup N^*(a, \sigma _a)) - V_\mathcal {P}\}\).

  3. 3.

    For each \(P \in \mathcal {P}\), find in time \(\mathrm{poly}(|\varSigma _A|, |P|)\) an assignment to the vertices in P that satisfies all the edges spanned by P. This can be done easily by assigning \(\sigma _a\) to a and \(\pi _{(a, b)} (\sigma _a)\) to each \(b \in B \cap P\), and then assigning any plausible assignment to all the other vertices in \(A \cap P\).

We will divide the proof into two parts. First, we will show that when we cannot find a vertex a and an assignment \(\sigma _a \in \varSigma ^*_A(a)\) in step 2, \(\left| E_{(A \cup B) - V_\mathcal {P}} \right| \le \frac{3|E|}{4}\). Second, we will show that the resulting assignment from this algorithm satisfies \(\varOmega \left( \frac{|E|^3}{n_An_B(h^*_{max} + E_N^{max})}\right) \) edges.

We will start by showing that, if no vertex a and an assignment \(\sigma _a \in \varSigma ^*_A(a)\) in step 2 exist, then \(\left| E_{(A \cup B) - V_\mathcal {P}} \right| \le \frac{3|E|}{4}\).

Suppose that we cannot find a vertex a and an assignment \(\sigma _a \in \varSigma ^*_A(a)\) in step 2. In other words, \(|E^*(a, \sigma _a) \cap E_{(A \cup B) - V_\mathcal {P}}| < \frac{1}{16} \frac{|E|^2}{n_An_B}\) for all \(a \in A\) and \(\sigma _a \in \varSigma ^*_A(a)\).

Since \(\sigma ^{OPT}_a \in \varSigma ^*_A(a)\) for all \(a \in A\), we can conclude that

$$\begin{aligned} |E^*(a, \sigma ^{OPT}_a) \cap E_{(A \cup B) - V_\mathcal {P}}| < \frac{1}{16} \frac{|E|^2}{n_An_B}. \end{aligned}$$

From Observation 3, we have \(E^*(a, \sigma ^{OPT}_a) = E'(N'(a))\). As a result, we have

$$\begin{aligned} \frac{1}{16} \frac{|E|^2}{n_An_B}&> |E^*(a, \sigma ^{OPT}_a) \cap E_{(A \cup B) - V_\mathcal {P}}|\\&= |E'(N'(a)) \cap E_{(A \cup B) - V_\mathcal {P}}| \end{aligned}$$

for all \(a \in A\).

Since \(E'(N'(a)) = E'_{N'(a) \cup N'_2(a)}\), we can rewrite the last term as

$$\begin{aligned} |E'(N'(a)) \cap E_{(A \cup B) - V_\mathcal {P}}|&= |E'_{N'(a) \cup N'_2(a)} \cap E_{(A \cup B) - V_\mathcal {P}}|\\&= |E'_{N'(a) \cup N'_2(a) - V_\mathcal {P}}|. \end{aligned}$$

Consider \(\sum _{a \in A} |E'_{N'(a) \cup N'_2(a) - V_\mathcal {P}}|\). Since \(|E'_{N'(a) \cup N'_2(a) - V_\mathcal {P}}| < \frac{1}{16} \frac{|E|^2}{n_An_B}\) for all \(a \in A\), we have the following inequality:

$$\begin{aligned} \frac{|E|^2}{16n_B} > \sum _{a \in A} |E'_{N'(a) \cup N'_2(a) - V_\mathcal {P}}|. \end{aligned}$$

Let \(N^p(v) = N'(v) - V_\mathcal {P}\) and \(N_2^p(v) = N'_2(v) - V_\mathcal {P}\). Similarly, define \(N^p(S)\) for a subset \(S \subseteq A \cup B\). It is easy to see that \(N_2^p(v) \supseteq N^p(N^p(v))\). This implies that, for all \(a \in A\), we have \(|E'_{N^p(a) \cup N^p_2(a)}| \ge |E'_{N^p(a) \cup N^p(N^p(a))}|\). Moreover, it is easy to see that, for all \(a \in A - V_\mathcal {P}\), we have \(|E'_{N^p(a) \cup N^p(N^p(a))}| = \sum _{b \in N^p(a)} |N^p(b)|\).

Thus, the following holds:

$$\begin{aligned} \sum _{a \in A} |E'_{(N'(a) \cup N'_2(a)) - V_\mathcal {P}}|&= \sum _{a \in A} |E'_{N^p(a) \cup N^p_2(a)}| \\&\ge \sum _{a \in A - V_\mathcal {P}} |E'_{N^p(a) \cup N^p_2(a)}| \\&\ge \sum _{a \in A - V_\mathcal {P}} |E'_{N^p(a) \cup N^p(N^p(a))}| \\&= \sum _{a \in A - V_\mathcal {P}} \sum _{b \in N^p(a)} |N^p(b)| \\&= \sum _{b \in B - V_\mathcal {P}} \sum _{a \in N^p(b)} |N^p(b)| \\&= \sum _{b \in B - V_\mathcal {P}} |N^p(b)|^2. \end{aligned}$$

From Jensen’s inequality, we have

$$\begin{aligned} \sum _{a \in A} |E'_{(N'(a) \cup N'_2(a)) - V_\mathcal {P}}|&\ge \frac{1}{|B- V_\mathcal {P}|} \left( \sum _{b \in B - V_\mathcal {P}} |N^p(b)| \right) ^2 \\&= \frac{1}{|B- V_\mathcal {P}|} \left| E'_{\left( A \cup B\right) - V_\mathcal {P}}\right| ^2 \\&\ge \frac{1}{n_B} \left| E'_{\left( A \cup B\right) - V_\mathcal {P}}\right| ^2. \end{aligned}$$

Since \(\frac{|E|^2}{16n_B} > \sum _{a \in A} |E'_{(N'(a) \cup N'_2(a)) - V_\mathcal {P}}|\) and \(\sum _{a \in A} |E'_{(N'(a) \cup N'_2(a)) - V_\mathcal {P}}| \ge \frac{1}{n_B} \left| E'_{\left( A \cup B\right) - V_\mathcal {P}}\right| ^2\), we can conclude that

$$\begin{aligned} \frac{|E|}{4} \ge \left| E'_{\left( A \cup B\right) - V_\mathcal {P}}\right| . \end{aligned}$$

Consider \(E'_{\left( A \cup B\right) - V_\mathcal {P}}\) and \(E_{\left( A \cup B\right) - V_\mathcal {P}}\). We have

$$\begin{aligned} E'_{\left( A \cup B\right) - V_\mathcal {P}} \cup (E - E')&\supseteq E_{\left( A \cup B\right) - V_\mathcal {P}} \\ \left| E'_{\left( A \cup B\right) - V_\mathcal {P}}\right| + \left| E - E'\right|&\ge \left| E_{\left( A \cup B\right) - V_\mathcal {P}}\right| \\ \frac{|E|}{4} + \left| E - E'\right|&\ge \left| E_{\left( A \cup B\right) - V_\mathcal {P}}\right| . \end{aligned}$$

From Observation 2, we have \(|E'| \ge \frac{|E|}{2}\). Thus, we have

$$\begin{aligned} \frac{3|E|}{4}&\ge \left| E_{\left( A \cup B\right) - V_\mathcal {P}}\right| , \end{aligned}$$

which concludes the first part of the proof.

Next, we show that the assignment the algorithm finds satisfies at least \(\varOmega \left( \frac{|E|^3}{n_An_B(h^*_{max} + E_N^{max})}\right) \) edges. Since we have shown that \(\frac{3|E|}{4} \ge \left| E_{\left( A \cup B\right) - V_\mathcal {P}}\right| \) when the algorithm terminates, it suffices to prove that \(|E_\mathcal {P}| \ge \frac{|E|^2}{16n_An_B(h^*_{max} + E_N^{max})} \left( |E| - \left| E_{\left( A \cup B\right) - V_\mathcal {P}}\right| \right) \). Note that the algorithm is guaranteed to satisfy all the edges in \(E_\mathcal {P}\).

We prove this by induction, showing that at any point in the algorithm, \(|E_\mathcal {P}| \ge \frac{|E|^2}{16n_An_B(h^*_{max} + E_N^{max})} \left( |E| - \left| E_{\left( A \cup B\right) - V_\mathcal {P}}\right| \right) \).

Base Case. At the beginning, \(\mathcal {P}\) is empty, so \(E_{\left( A \cup B\right) - V_\mathcal {P}} = E\) and we have \(|E_\mathcal {P}| = 0 = \frac{|E|^2}{16n_An_B(h^*_{max} + E_N^{max})} \left( |E| - \left| E_{\left( A \cup B\right) - V_\mathcal {P}}\right| \right) \), which satisfies the inequality.

Inductive Step. The only step in the algorithm where any term in the inequality changes is step 2a. Let \(\mathcal {P}_{old}\) and \(\mathcal {P}_{new}\) be the set \(\mathcal {P}\) before and after step 2a is executed, respectively. Let a be the vertex selected in step 2. Suppose that \(\mathcal {P}_{old}\) satisfies the inequality.

Since \(|E_{\mathcal {P}_{new}}| = |E_{\mathcal {P}_{old}}| + |E_{(N^*(a, \sigma _a) \cup N^*_2(a, \sigma _a)) - V_{\mathcal {P}_{old}}}|\), we have

$$\begin{aligned} |E_{\mathcal {P}_{new}}|&= |E_{\mathcal {P}_{old}}| + |E_{(N^*(a, \sigma _a) \cup N^*_2(a, \sigma _a)) - V_{\mathcal {P}_{old}}}|\\&= |E_{\mathcal {P}_{old}}| + |E_{(N^*(a, \sigma _a) \cup N^*_2(a, \sigma _a))} \cap E_{(A \cup B) - V_{\mathcal {P}_{old}}}|. \end{aligned}$$

From the condition in step 2, we have \(|E^*(a, \sigma _a) \cap E_{(A \cup B) - V_{\mathcal {P}_{old}}}| \ge \frac{1}{16} \frac{|E|^2}{n_An_B}\). Moreover, \(E_{(N^*(a, \sigma _a) \cup N^*_2(a, \sigma _a))} \supseteq E^*(a, \sigma _a)\) holds. As a result, we have

$$\begin{aligned} |E_{\mathcal {P}_{new}}|&= |E_{\mathcal {P}_{old}}| + |E_{(N^*(a, \sigma _a) \cup N^*_2(a, \sigma _a))} \cap E_{(A \cup B) - V_{\mathcal {P}_{old}}}| \\&\ge |E_{\mathcal {P}_{old}}| + |E^*(a, \sigma _a) \cap E_{(A \cup B) - V_{\mathcal {P}_{old}}}| \\&\ge |E_{\mathcal {P}_{old}}| + \frac{1}{16} \frac{|E|^2}{n_An_B}. \end{aligned}$$

Now, consider \(\left( |E| - |E_{\left( A \cup B\right) - V_{\mathcal {P}_{new}}}|\right) - \left( |E| - |E_{\left( A \cup B\right) - V_{\mathcal {P}_{old}}}|\right) \). We have

$$\begin{aligned}&\left( |E| - |E_{\left( A \cup B\right) - V_{\mathcal {P}_{new}}}|\right) - \left( |E| - |E_{\left( A \cup B\right) - V_{\mathcal {P}_{old}}}|\right) \\&\quad = |E_{\left( A \cup B\right) - V_{\mathcal {P}_{old}}}| - |E_{\left( A \cup B\right) - V_{\mathcal {P}_{new}}}|. \end{aligned}$$

Since \(V_{\mathcal {P}_{new}} = V_{\mathcal {P}_{old}} \cup \left( N^*_2(a, \sigma _a) \cup N^*(a, \sigma _a)\right) \), we can conclude that

$$\begin{aligned} \left( (A \cup B) - V_{\mathcal {P}_{old}}\right) \subseteq \left( (A \cup B) - V_{\mathcal {P}_{new}}\right) \cup \left( N^*_2(a, \sigma _a) \cup N^*(a, \sigma _a)\right) . \end{aligned}$$

Thus, we can also derive

$$\begin{aligned} E_{(A \cup B) - V_{\mathcal {P}_{old}}}&\subseteq E_{\left( (A \cup B) - V_{\mathcal {P}_{new}}\right) \cup \left( N^*_2(a, \sigma _a) \cup N^*(a, \sigma _a)\right) } \\&\subseteq E_{(A \cup B) - V_{\mathcal {P}_{new}}} \cup \{(a', b') \in E \mid a' \in N^*_2(a, \sigma _a) \text { or } b' \in N^*(a, \sigma _a)\}. \end{aligned}$$

Moreover, we can write \(\{(a', b') \in E \mid a' \in N^*_2(a, \sigma _a) \text { or } b' \in N^*(a, \sigma _a)\}\) as \(\{(a', b') \in E \mid a' \in N^*_2(a, \sigma _a)\} \cup \{(a', b') \in E \mid b' \in N^*(a, \sigma _a)\}\). Since \(N^*(a, \sigma _a) \subseteq N(a)\), we can conclude that

$$\begin{aligned} \{(a', b') \in E \mid a' \in N^*_2(a, \sigma _a) \text { or } b' \in N^*(a, \sigma _a)\} \subseteq&\{(a', b') \in E \mid a' \in N^*_2(a, \sigma _a)\} \\&\cup \{(a', b') \in E \mid b' \in N(a)\}. \end{aligned}$$

Thus, we can conclude that

$$\begin{aligned} |\{(a', b') \in E \mid a' \in N^*_2(a, \sigma _a) \text { or } b' \in N^*(a, \sigma _a)\}|&\le |\{(a', b') \in E \mid a' \in N^*_2(a, \sigma _a)\}| \\&\quad + |\{(a', b') \in E \mid b' \in N(a)\}| \\&= h^*(a, \sigma _a) + |E(N(a))|. \end{aligned}$$

Hence, we can conclude that

$$\begin{aligned} \left| E_{(A \cup B) - V_{\mathcal {P}_{old}}}\right|&\le \left| E_{(A \cup B) - V_{\mathcal {P}_{new}}} \cup \{(a', b') \in E \mid a' \in N^*_2(a, \sigma _a) \text { or } b' \in N^*(a, \sigma _a)\}\right| \\&\le \left| E_{(A \cup B) - V_{\mathcal {P}_{new}}}\right| + \left| \{(a', b') \in E \mid a' \in N^*_2(a, \sigma _a) \text { or } b' \in N^*(a, \sigma _a)\}\right| \\&\le \left| E_{(A \cup B) - V_{\mathcal {P}_{new}}}\right| + h^*(a, \sigma _a) + |E(N(a))| \\&\le \left| E_{(A \cup B) - V_{\mathcal {P}_{new}}}\right| + h^*_{max} + E_N^{max}. \end{aligned}$$

This implies that \(\left( |E| - \left| E_{\left( A \cup B\right) - V_{\mathcal {P}}}\right| \right) \) increases by at most \(h^*_{max} + E_N^{max}\).

Hence, since \(\left( |E| - \left| E_{\left( A \cup B\right) - V_{\mathcal {P}}}\right| \right) \) increases by at most \(h^*_{max} + E_N^{max}\) while \(\left| E_\mathcal {P}\right| \) increases by at least \(\frac{1}{16}\frac{|E|^2}{n_An_B}\), the inductive hypothesis allows us to conclude that

$$\begin{aligned} |E_{\mathcal {P}_{new}}| \ge \frac{|E|^2}{16n_An_B(h^*_{max} + E_N^{max})} \left( |E| - \left| E_{\left( A \cup B\right) - V_{\mathcal {P}_{new}}}\right| \right) . \end{aligned}$$
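To spell out this last step, write \(C = \frac{|E|^2}{16n_An_B(h^*_{max} + E_N^{max})}\). Combining the increment bound, the inductive hypothesis and the bound on \(\left| E_{(A \cup B) - V_{\mathcal {P}_{old}}}\right| \) derived above gives

$$\begin{aligned} |E_{\mathcal {P}_{new}}|&\ge |E_{\mathcal {P}_{old}}| + \frac{1}{16}\frac{|E|^2}{n_An_B} = |E_{\mathcal {P}_{old}}| + C\left( h^*_{max} + E_N^{max}\right) \\&\ge C\left( |E| - \left| E_{(A \cup B) - V_{\mathcal {P}_{old}}}\right| \right) + C\left( h^*_{max} + E_N^{max}\right) \\&\ge C\left( |E| - \left| E_{(A \cup B) - V_{\mathcal {P}_{new}}}\right| \right) , \end{aligned}$$

where the final inequality uses \(\left| E_{(A \cup B) - V_{\mathcal {P}_{old}}}\right| \le \left| E_{(A \cup B) - V_{\mathcal {P}_{new}}}\right| + h^*_{max} + E_N^{max}\).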

Thus, the inductive step holds, and the inequality is maintained at every point during the execution of the algorithm.

When the algorithm terminates, since \(|E_\mathcal {P}| \ge \frac{|E|^2}{16n_An_B(h^*_{max} + E_N^{max})} \left( |E| - \left| E_{\left( A \cup B\right) - V_\mathcal {P}}\right| \right) \) and \(\frac{3|E|}{4} \ge \left| E_{\left( A \cup B\right) - V_\mathcal {P}}\right| \), we can conclude that \(|E_{\mathcal {P}}| \ge \frac{|E|^2}{16n_An_B(h^*_{max} + E_N^{max})} \cdot \frac{|E|}{4} = \frac{|E|^3}{64n_An_B(h^*_{max} + E_N^{max})}\). Since the algorithm is guaranteed to satisfy every edge in \(E_{\mathcal {P}}\), it yields an \(O\left( \frac{n_An_B(h^*_{max} + E_N^{max})}{|E|^2}\right) \) approximation ratio, which concludes our proof of Lemma 15.

1.4 Proof of Theorem 2

Proof

Using Lemma 14 with \(a_0\) and \(\sigma _{a_0}\) that maximize the value of \(h^*(a_0, \sigma _{a_0})\), i.e., \(h^*(a_0, \sigma _{a_0}) = h^*_{max}\), we can conclude that there exists a polynomial-time \(O\left( \frac{|E|\overline{p}^{max}}{h^*_{max}}\right) \)-approximation algorithm for satisfiable instances of projection games.

Similarly, from Lemma 13 with \(a_0\) that maximizes the value of \(|E(N(a_0))|\), i.e., \(|E(N(a_0))| = E_N^{max}\), there exists a polynomial-time \(\frac{|E|}{E_N^{max}}\)-approximation algorithm for satisfiable instances of projection games.

Moreover, from Lemmas 11, 12 and 15, there exists a polynomial-time \(\frac{|E|}{n_B}\)-approximation algorithm, a polynomial-time \(\frac{|\varSigma _A|}{\overline{p}^{max}}\)-approximation algorithm and a polynomial-time \(O\left( \frac{n_An_B(h^*_{max} + E_N^{max})}{|E|^2}\right) \)-approximation algorithm for satisfiable instances of the projection game.
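For reference, the five approximation ratios being combined, in the order they are referred to in the case analysis below (as can be read off from the geometric means), are: (1) \(\frac{|E|}{n_B}\) (Lemma 11), (2) \(\frac{|\varSigma _A|}{\overline{p}^{max}}\) (Lemma 12), (3) \(\frac{|E|}{E_N^{max}}\) (Lemma 13), (4) \(O\left( \frac{|E|\overline{p}^{max}}{h^*_{max}}\right) \) (Lemma 14), and (5) \(O\left( \frac{n_An_B(h^*_{max} + E_N^{max})}{|E|^2}\right) \) (Lemma 15).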

Consider the following two cases.

First, if \(h^*_{max} \ge E_N^{max}\), we have \(O(n_A n_B (h^*_{max} + E_N^{max})/|E|^2) = O(n_An_Bh^*_{max}/|E|^2)\). Using the best of the first, second, fourth and fifth algorithms, the smallest of the four approximation factors is at most as large as their geometric mean, i.e.,

$$\begin{aligned} O\left( \root 4 \of {\frac{|E|}{n_B}\cdot \frac{|\varSigma _A|}{\overline{p}^{max}}\cdot \frac{|E|\overline{p}^{max}}{h^*_{max}}\cdot \frac{n_A n_B h^*_{max}}{|E|^2}}\right) = O((n_A|\varSigma _A|)^{1/4}). \end{aligned}$$

Second, if \(E_N^{max} > h^*_{max}\), we have \(O(n_A n_B (h^*_{max} + E_N^{max})/|E|^2) = O(n_An_BE_N^{max}/|E|^2)\). Taking the best of the first, second, third and fifth algorithms, the smallest of the four approximation factors is again at most as large as their geometric mean, i.e.,

$$\begin{aligned} O\left( \root 4 \of {\frac{|E|}{n_B}\cdot \frac{|\varSigma _A|}{\overline{p}^{max}}\cdot \frac{|E|}{E_N^{max}} \cdot \frac{n_A n_B E_N^{max}}{|E|^2}}\right) = O\left( \left( \frac{n_A|\varSigma _A|}{\overline{p}^{max}}\right) ^{1/4}\right) . \end{aligned}$$
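In both cases the product under the fourth root telescopes; explicitly,

$$\begin{aligned} \frac{|E|}{n_B}\cdot \frac{|\varSigma _A|}{\overline{p}^{max}}\cdot \frac{|E|\overline{p}^{max}}{h^*_{max}}\cdot \frac{n_A n_B h^*_{max}}{|E|^2}&= n_A|\varSigma _A|, \\ \frac{|E|}{n_B}\cdot \frac{|\varSigma _A|}{\overline{p}^{max}}\cdot \frac{|E|}{E_N^{max}}\cdot \frac{n_A n_B E_N^{max}}{|E|^2}&= \frac{n_A|\varSigma _A|}{\overline{p}^{max}}, \end{aligned}$$

so the resulting bounds are \(O((n_A|\varSigma _A|)^{1/4})\) and \(O\left( \left( \frac{n_A|\varSigma _A|}{\overline{p}^{max}}\right) ^{1/4}\right) \), respectively.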

Since \(\overline{p}^{max} \ge 1\), we can conclude that in either case the approximation factor is at most \(O((n_A|\varSigma _A|)^{\frac{1}{4}})\).

This concludes the proof of Theorem 2 for the nonuniform preimage sizes case.


Manurangsi, P., Moshkovitz, D. Improved Approximation Algorithms for Projection Games. Algorithmica 77, 555–594 (2017). https://doi.org/10.1007/s00453-015-0088-5
