Collapsing and Separating Completeness Notions Under Average-Case and Worst-Case Hypotheses

Abstract

This paper presents the following results on sets that are complete for NP.

  (i) If there is a problem in NP that requires \(2^{n^{\Omega(1)}}\) time at almost all lengths, then every many-one NP-complete set is complete under length-increasing reductions that are computed by polynomial-size circuits.

  (ii) If there is a problem in co-NP that cannot be solved by polynomial-size nondeterministic circuits, then every many-one NP-complete set is complete under length-increasing reductions that are computed by polynomial-size circuits.

  (iii) If there exists a one-way permutation that is secure against subexponential-size circuits and there is a hard tally language in NP∩co-NP, then there is a Turing-complete language for NP that is not many-one complete.

Our first two results use worst-case hardness hypotheses, whereas earlier work that showed similar results relied on average-case or almost-everywhere hardness assumptions. The last result is notable for its use of average-case and worst-case hypotheses, as previous results obtaining the same consequence relied on almost-everywhere hardness assumptions.


References

  1. Arora, S., Barak, B.: Computational Complexity: A Modern Approach. Cambridge University Press, Cambridge (2009)

  2. Amir, A., Beigel, R., Gasarch, W.: Some connections between bounded query classes and non-uniform complexity. Inf. Comput. 186, 104–139 (2003)

  3. Agrawal, M.: Pseudo-random generators and structure of complete degrees. In: Proceedings of the Seventeenth Annual IEEE Conference on Computational Complexity, pp. 139–147 (2002)

  4. Ambos-Spies, K., Bentzien, L.: Separating NP-completeness notions under strong hypotheses. J. Comput. Syst. Sci. 61(3), 335–361 (2000)

  5. Agrawal, M., Watanabe, O.: One-way functions and the isomorphism conjecture. Technical report TR09-019, Electronic Colloquium on Computational Complexity (2009)

  6. Beigel, R.: Query-limited reducibilities. PhD thesis, Stanford University (1987)

  7. Berman, L.: Polynomial reducibilities and complete sets. PhD thesis, Cornell University (1977)

  8. Beigel, R., Feigenbaum, J.: On being incoherent without being very hard. Comput. Complex. 2(1), 1–17 (1992)

  9. Babai, L., Fortnow, L., Nisan, N., Wigderson, A.: BPP has subexponential time simulations unless EXPTIME has publishable proofs. Comput. Complex. 3, 307–318 (1993)

  10. Buhrman, H., Fortnow, L., Torenvliet, L.: Six hypotheses in search of a theorem. In: IEEE Conference on Computational Complexity, pp. 2–12 (1997)

  11. Berman, L., Hartmanis, J.: On isomorphism and density of NP and other complete sets. SIAM J. Comput. 6, 305–322 (1977)

  12. Buhrman, H., Hitchcock, J.M.: NP-hard sets are exponentially dense unless NP⊆coNP/poly. In: Proceedings of the 23rd Annual IEEE Conference on Computational Complexity, pp. 1–7. IEEE Comput. Soc., Los Alamitos (2008)

  13. Buhrman, H., Hescott, B., Homer, S., Torenvliet, L.: Non-uniform reductions. Theory Comput. Syst. 47(2), 317–341 (2010)

  14. Buhrman, H., Homer, S., Torenvliet, L.: Completeness for nondeterministic complexity classes. Math. Syst. Theory 24, 179–200 (1991)

  15. Cook, S.A.: The complexity of theorem proving procedures. In: Proceedings of the Third ACM Symposium on the Theory of Computing, pp. 151–158 (1971)

  16. Feigenbaum, J., Fortnow, L., Laplante, S., Naik, A.: On coherence, random-self-reducibility, and self-correction. Comput. Complex. 7, 174–191 (1998)

  17. Feigenbaum, J., Fortnow, L., Lund, C., Spielman, D.: The power of adaptiveness and additional queries in random-self-reductions. Comput. Complex. 4, 158–174 (1994)

  18. Fortnow, L., Santhanam, R.: Infeasibility of instance compression and succinct PCPs for NP. In: Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pp. 133–142 (2008)

  19. Ganesan, K., Homer, S.: Complete problems and strong polynomial reducibilities. SIAM J. Comput. 21(4), 733–742 (1992)

  20. Goldreich, O., Levin, L.A.: A hard-core predicate for all one-way functions. In: Proceedings of the Twenty-First ACM Symposium on the Theory of Computing, pp. 25–32 (1989)

  21. Goldreich, O., Levin, L., Nisan, N.: On constructing 1-1 one-way functions. Technical report TR95-029, ECCC (1995)

  22. Hemaspaandra, E., Naik, A., Ogiwara, M., Selman, A.: P-selective sets and reducing search to decision vs. self-reducibility. J. Comput. Syst. Sci. 53(2), 194–209 (1996)

  23. Hitchcock, J.M., Pavan, A.: Comparing reductions to NP-complete sets. Inf. Comput. 205(5), 694–706 (2007)

  24. Hitchcock, J.M., Pavan, A., Vinodchandran, N.V.: Partial Bi-immunity, scaled dimension, and NP-completeness. Theory Comput. Syst. 42(2), 131–142 (2008)

  25. Impagliazzo, R.: Hard-core distributions for somewhat hard problems. In: Proceedings of the 36th Annual Conference on Foundations of Computer Science, pp. 538–545 (1995)

  26. Impagliazzo, R., Wigderson, A.: P=BPP if E requires exponential circuits: derandomizing the XOR lemma. In: Proceedings of the 29th Symposium on Theory of Computing, pp. 220–229 (1997)

  27. Johnson, S.: A new upper bound for error correcting codes. IRE Trans. Inf. Theory 8(3), 203–207 (1962)

  28. Levin, L.: Universal sorting problems. Probl. Inf. Transm. 9, 265–266 (1973). English translation of original in Problemy Peredaci Informacii

  29. Ladner, R., Lynch, N., Selman, A.: A comparison of polynomial time reducibilities. Theor. Comput. Sci. 1, 103–123 (1975)

  30. Lutz, J.H., Mayordomo, E.: Measure, stochasticity, and the density of hard languages. SIAM J. Comput. 23(4), 762–779 (1994)

  31. Lutz, J.H., Mayordomo, E.: Cook versus Karp-Levin: separating completeness notions if NP is not small. Theor. Comput. Sci. 164(1–2), 141–163 (1996)

  32. Nisan, N., Wigderson, A.: Hardness vs randomness. J. Comput. Syst. Sci. 49, 149–167 (1994)

  33. Pavan, A.: Comparison of reductions and completeness notions. SIGACT News 34(2), 27–41 (2003)

  34. Pavan, A., Selman, A.: Separation of NP-completeness notions. SIAM J. Comput. 31(3), 906–918 (2002)

  35. Pavan, A., Selman, A.: Bi-immunity separates strong NP-completeness notions. Inf. Comput. 188, 116–126 (2004)

  36. Sudan, M., Trevisan, L., Vadhan, S.: Pseudorandom generators without the XOR lemma. J. Comput. Syst. Sci. 62(2), 236–266 (2001)

  37. Watanabe, O.: A comparison of polynomial time completeness notions. Theor. Comput. Sci. 54, 249–265 (1987)

  38. Yao, A.: Theory and applications of trapdoor functions. In: Proceedings of the 23rd IEEE Symposium on Foundations of Computer Science, pp. 80–91 (1982)

Acknowledgements

We thank the anonymous reviewers for several comments and suggestions that led to a vastly improved presentation of the paper.

Author information

Corresponding author

Correspondence to A. Pavan.

Additional information

A preliminary version of this paper appeared at the 27th International Symposium on Theoretical Aspects of Computer Science.

Part of the research of X.G. was done while the author was at Iowa State University. Research supported in part by NSF grants 0652569 and 0728806.

Research of J.M.H. supported in part by NSF grants 0515313 and 0652601 and by an NWO travel grant. Part of this research was done while this author was on sabbatical at CWI.

Research of A.P. supported in part by NSF grants 0830479 and 0916797.

Appendix

Here we provide a proof of Lemma 4.5.

Definition

Let \(x,y\in\{0,1\}^{m}\). Define \(\Delta(x,y) = \frac{1}{m}|\{ i \mid x[i]\neq y[i] \}|\). We say that \(E:\{0,1\}^{n}\rightarrow\{0,1\}^{m}\) is an error-correcting code (ECC) with distance \(\delta\) if for every \(x\neq y\in\{0,1\}^{n}\), \(\Delta(E(x),E(y))\geq\delta\).
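A minimal Python sketch (ours, not from the paper) of the relative Hamming distance \(\Delta\) and of the distance of a code, computed by brute force over an explicitly given list of codewords:

    def delta(x, y):
        # Relative Hamming distance between two equal-length bit strings.
        assert len(x) == len(y)
        return sum(a != b for a, b in zip(x, y)) / len(x)

    def code_distance(codewords):
        # Minimum relative distance over all pairs of distinct codewords;
        # this is the delta of the definition above.
        return min(delta(u, v)
                   for i, u in enumerate(codewords)
                   for v in codewords[i + 1:])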

Theorem 5.1

(Johnson Bound [27])

If \(E:\{0,1\}^{n}\rightarrow\{0,1\}^{m}\) is an ECC with distance at least \(\frac{1}{2}-\epsilon\), then for every \(x\in\{0,1\}^{m}\) and every \(\eta \geq \sqrt{\epsilon}\), there exist at most \(l\leq 1/(2\eta^{2})\) strings \(y_{1},\ldots,y_{l}\in E(\{0,1\}^{n})\) such that \(\Delta(x, y_{i})\leq \frac{1}{2}-\eta\) for every \(i\in[l]\).
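The theorem is applied below in a brute-force manner. Here is an illustrative Python sketch (ours, with a hypothetical encoder E and message list) that, given a received word x, collects every codeword within relative distance \(\frac{1}{2}-\eta\) of x; when the code has distance at least \(\frac{1}{2}-\eta^{2}\), Theorem 5.1 guarantees the list has at most \(1/(2\eta^{2})\) entries.

    def list_decode(x, messages, E, eta):
        # Brute-force "list decoding": encode every message and keep the
        # codewords whose relative distance from x is at most 1/2 - eta.
        close = []
        for m in messages:
            c = E(m)
            dist = sum(a != b for a, b in zip(x, c)) / len(x)
            if dist <= 0.5 - eta:
                close.append(c)
        # Johnson bound: the list is short.
        assert len(close) <= 1 / (2 * eta * eta)
        return close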

Consider an ECC \(E:\{0,1\}^{2n\log n}\rightarrow\{0,1\}^{n^{4}}\) with distance \(\frac{1}{2}-\frac{1}{2n}\). An explicit example of such an ECC is the code obtained by concatenating the Walsh-Hadamard code with the Reed-Solomon code. We will view strings of length \(2n\log n\) and \(n^{4}\) as truth tables of languages at certain lengths. For this, both \(2n\log n\) and \(n^{4}\) must be powers of 2. If n is of the form \(2^{2^{k-1}}\), then both \(2n\log n\) and \(n^{4}\) are powers of two.
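A quick sanity check (our own) of the length arithmetic in the preceding paragraph: for n of the form \(2^{2^{k-1}}\), we have \(2n\log n = 2^{2^{k-1}+k}\) and \(n^{4} = 2^{2^{k+1}}\), both powers of two.

    def is_power_of_two(m):
        return m > 0 and m & (m - 1) == 0

    for k in range(1, 6):
        n = 2 ** (2 ** (k - 1))
        log_n = 2 ** (k - 1)                          # log2(n), exact
        assert 2 * n * log_n == 2 ** (2 ** (k - 1) + k)
        assert n ** 4 == 2 ** (2 ** (k + 1))
        assert is_power_of_two(2 * n * log_n) and is_power_of_two(n ** 4)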

For any language L″, let L′ be fold(L″). It is clear that L″∈NEEE∩coNEEE if and only if L′∈NEEE∩coNEEE. The following fact is easy to verify.

Lemma 5.2

If L″∉EEE/O(log n), then L′∉EEE/O(log n), and for every EEE/O(log n) algorithm \(\mathcal{A}\), there are infinitely many lengths n of the form \(2^{k-1}+k\) such that \(\mathcal{A}\) fails to decide L′ correctly at length n.

Lemma 4.5

If there is a language L″∈NEEE∩coNEEE−EEE/O(log n), then there is a language L∈NEEE∩coNEEE such that for every EEE/O(log n) algorithm \(\mathcal{A}\), there exist infinitely many n such that \(\mathcal{A}\) correctly decides L on at most a \(\frac{1}{2}+\frac{1}{n}\) fraction of strings from \(\Sigma^{n}\).

Proof

Let L″∈NEEE∩coNEEE−EEE/O(log n). Let L′=fold(L″). Let L be the language such that for every k∈ℕ, \(L_{2^{k+1}} = E(L'_{2^{k-1}+k})\), where E is the WH-RS code. At all other lengths, L is empty. It is clear that L∈NEEE∩coNEEE, since we can compute the truth table of L′ in NEEE∩coNEEE.
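The following Python sketch (illustrative only; L_prime is a membership predicate and E is an assumed encoder as above) spells out this construction: the truth table of L at length \(2^{k+1}\) is the encoding, under E, of the truth table of L′ at length \(2^{k-1}+k\).

    from itertools import product

    def truth_table(membership, length):
        # Characteristic vector of a language on all strings of the given
        # length, listed in lexicographic order.
        return [int(membership("".join(bits)))
                for bits in product("01", repeat=length)]

    def truth_table_of_L(k, L_prime, E):
        tt_prime = truth_table(L_prime, 2 ** (k - 1) + k)  # length 2^(2^(k-1)+k)
        return E(tt_prime)                                  # length 2^(2^(k+1))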

Now, for the sake of contradiction, assume that there is an EEE/O(log n) algorithm \(\mathcal{A}\) that correctly decides L on more than a \(\frac{1}{2}+\frac{1}{n}\) fraction of inputs at every length n. We will first show that this yields an EEE/O(log n) algorithm for L. Since L is empty at lengths that are not of the form \(2^{k+1}\), we can decide L easily at those lengths. Thus we concentrate on lengths of the form \(2^{k+1}\).

Consider n of the form \(2^{k+1}\). By the definition of L, we know that \(L_{n} = E(L'_{2^{k-1}+k})\). Let \(N=2^{n}\). Recall that the distance of the code E is \(\frac{1}{2}-\epsilon\), where \(\epsilon= \frac{1}{2\sqrt[4]{N}}\). Let \(\eta= \frac{1}{n}=\frac{1}{\log N}> \sqrt{\epsilon}\).

By running \(\mathcal{A}\) on all strings of length n, we compute a candidate string \(x\in\{0,1\}^{N}\) for \(L_{n}\). By our assumption, \(\mathcal{A}\) is correct on more than a \(\frac{1}{2}+\frac{1}{n}\) fraction of strings from \(\Sigma^{n}\). Thus \(\Delta(x, L_{n}) \leq\frac{1}{2}-\frac{1}{n}\). By the Johnson bound (Theorem 5.1), there exist at most \(l \leq \frac{n^{2}}{2}\) strings \(y_{1},\ldots,y_{l}\) in \(E(\Sigma^{2^{k-1}+k})\) such that \(\Delta(x, y_{i}) \leq\frac{1}{2}-\frac{1}{n}\). By running E on all strings from \(\Sigma^{2^{k-1}+k}\), we can compute these \(y_{i}\)'s. Since the WH-RS code is computable in polynomial time, this computation can be done in triple-exponential time. Since \(L_{n} =E(L'_{2^{k-1}+k})\), there exists a j, \(1\leq j\leq l\), such that \(L_{n}=y_{j}\). Thus \(\log l=O(\log n)\) bits of advice suffice to identify the j for which \(L_{n}=y_{j}\). Thus L∈EEE/O(log n).
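As a sanity check on these parameters (our calculation, not part of the original proof): with \(\epsilon = \frac{1}{2\sqrt[4]{N}}\) and \(\eta = \frac{1}{n} = \frac{1}{\log N}\),

\[ \sqrt{\epsilon} \;=\; \frac{1}{\sqrt{2}\,N^{1/8}} \;\leq\; \frac{1}{\log N} \;=\; \eta \quad\text{for all sufficiently large } N, \qquad l \;\leq\; \frac{1}{2\eta^{2}} \;=\; \frac{n^{2}}{2}, \qquad \log l \;\leq\; 2\log n \;=\; O(\log n). \]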

Recall that \(L_{n} = E(L'_{2^{k-1}+k})\) when \(n=2^{k+1}\). Since L∈EEE/O(log n), by inverting the WH-RS code E we can decide L′ correctly at all lengths of the form \(2^{k-1}+k\) using an EEE/O(log n) algorithm. This contradicts Lemma 5.2. □

Cite this article

Gu, X., Hitchcock, J.M. & Pavan, A. Collapsing and Separating Completeness Notions Under Average-Case and Worst-Case Hypotheses. Theory Comput Syst 51, 248–265 (2012). https://doi.org/10.1007/s00224-011-9365-0
