
Increasing the Output Length of Zero-Error Dispersers

  • Ariel Gabizon
Chapter
Part of the book series Monographs in Theoretical Computer Science. An EATCS Series (EATCS)

Summary

Let \(\cal C\) be a class of probability distributions over a finite set Ω. A function \(D:\Omega \mapsto {\{0,1\}}^m\) is a disperser for \(\cal C\) with entropy threshold k and error ε if for any distribution X in \(\cal C\) such that X gives positive probability to at least \(2^k\) elements we have that the distribution D(X) gives positive probability to at least \((1-\epsilon)2^m\) elements. A long line of research is devoted to giving explicit (that is, polynomial-time computable) dispersers (and related objects called “extractors”) for various classes of distributions while trying to maximize m as a function of k.
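
To make the definition concrete, the following is a minimal brute-force check of the disperser property, written purely as an illustration (it is not part of the chapter's constructions). Since the condition depends only on the support of X, and enlarging the support can only enlarge the image, it suffices to test supports of the minimal admissible size. The names is_disperser and low_two_bits and all parameters are illustrative, and the check is feasible only for toy-sized Ω.

    from itertools import combinations
    from math import ceil

    def is_disperser(D, omega, m, k, eps):
        """Brute-force check: for every support S of size at least 2^k,
        the image D(S) must contain at least (1 - eps) * 2^m distinct values.
        Only minimal-size supports need to be tested, since a larger support
        can only produce a larger image."""
        s = ceil(2 ** k)
        need = (1 - eps) * (2 ** m)
        return all(len({D(x) for x in S}) >= need
                   for S in combinations(omega, s))

    def low_two_bits(x):
        # Toy function D : {0,...,15} -> {0,1}^2, keeping the two low bits.
        return x & 0b11

    # With error 1/2 the property holds; with zero error it fails, since an
    # 8-element support can avoid one of the four output values.
    print(is_disperser(low_two_bits, range(16), m=2, k=3, eps=0.5))  # True
    print(is_disperser(low_two_bits, range(16), m=2, k=3, eps=0))    # False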

In this chapter we are interested in explicitly constructing zero-error dispersers (that is, dispersers with error \(\epsilon=0\)). For several interesting classes of distributions there are explicit constructions in the literature of zero-error dispersers with “small” output length m, and we give improved constructions that achieve “large” output length, namely \(m=\Omega(k)\).

We achieve this by developing a general technique to improve the output length of zero-error dispersers (namely, to transform a disperser with short output length into one with large output length). This strategy works for several classes of sources and is inspired by the transformation that improves the output length of extractors used in previous chapters. However, we stress that this technique is different, and in particular gives nontrivial results in the errorless case.

Using our approach we construct improved zero-error dispersers for the class of 2-sources. More precisely, we show that for any constant \(\delta>0\) there is a constant \(\eta>0\) such that for sufficiently large n there is a polynomial-time computable function \(D:{\{0,1\}}^n \times {\{0,1\}}^n \mapsto {\{0,1\}}^{\eta n}\) such that for any two independent distributions \(X_1,X_2\) over \({\{0,1\}}^n\), each supported on at least \(2^{\delta n}\) elements, the output distribution \(D(X_1,X_2)\) has full support. This improves the output length of the previous constructions of [4] and has applications in Ramsey theory and in constructing certain data structures [24].
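
As a sanity check on what "full support" demands, here is a brute-force test of the 2-source zero-error property in the same toy spirit as above; it is only an illustration, not the construction of the chapter. The stand-in function, inner product modulo 2 (in the spirit of [14, 70]), plays the role of D at trivially small parameters; has_full_support, inner_product, and the chosen n, m, δ are all illustrative.

    from itertools import combinations, product
    from math import ceil

    def has_full_support(D, n, m, delta):
        """Brute-force check that D : {0,1}^n x {0,1}^n -> {0,1}^m hits every
        one of the 2^m output values whenever each of the two independent
        sources is supported on at least 2^(delta*n) strings.  As before,
        only minimal-size supports need to be tested.  Exponential in n."""
        universe = range(2 ** n)                 # n-bit strings as integers
        s = ceil(2 ** (delta * n))               # minimal support size
        for A in combinations(universe, s):
            for B in combinations(universe, s):
                image = {D(x1, x2) for x1, x2 in product(A, B)}
                if len(image) < 2 ** m:
                    return False
        return True

    def inner_product(x, y):
        # 1-bit output: inner product of the bit representations, mod 2.
        return bin(x & y).count("1") % 2

    # Toy instance: n = 3, m = 1, delta = 2/3, so each source must be
    # supported on at least 4 of the 8 strings.
    print(has_full_support(inner_product, n=3, m=1, delta=2 / 3))  # True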

We also use our techniques to give explicit constructions of zero-error dispersers for bit-fixing sources and affine sources over polynomially large fields. These constructions improve the best known explicit constructions due to [52, 25] and achieve \(m=\Omega(k)\) for bit-fixing sources and \(m=k-o(k)\) for affine sources.

This chapter is based on [27].


References

  3. B. Barak, R. Impagliazzo, and A. Wigderson. Extracting randomness using few independent sources. SIAM Journal on Computing, 36(4):1095–1118, 2006.
  4. B. Barak, G. Kindler, R. Shaltiel, B. Sudakov, and A. Wigderson. Simulating independence: New constructions of condensers, Ramsey graphs, dispersers, and extractors. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, pages 1–10, 2005.
  5. B. Barak, A. Rao, R. Shaltiel, and A. Wigderson. 2-source dispersers for sub-polynomial entropy and Ramsey graphs beating the Frankl–Wilson construction. In Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pages 671–680, 2006.
  9. J. Bourgain. More on the sum-product phenomenon in prime fields and its applications. International Journal of Number Theory, 1:1–32, 2005.
  10. J. Bourgain. On the construction of affine extractors. Geometric and Functional Analysis, 17(1):33–57, 2007.
  13. I. L. Carter and M. N. Wegman. Universal classes of hash functions. In Proceedings of the 9th Annual ACM Symposium on Theory of Computing, pages 106–112, 1977.
  14. B. Chor and O. Goldreich. Unbiased bits from sources of weak randomness and probabilistic communication complexity. SIAM Journal on Computing, 17(2):230–261, April 1988. Special issue on cryptography.
  15. B. Chor, O. Goldreich, J. Hastad, J. Friedman, S. Rudich, and R. Smolensky. The bit extraction problem or t-resilient functions. In Proceedings of the 26th Annual IEEE Symposium on Foundations of Computer Science, pages 396–407, 1985.
  19. Y. Dodis, A. Elbaz, R. Oliveira, and R. Raz. Improved randomness extraction from two independent sources. In RANDOM: International Workshop on Randomization and Approximation Techniques in Computer Science, pages 334–344, 2004.
  24. A. Fiat and M. Naor. Implicit O(1) probe search. SIAM Journal on Computing, 22, 1993.
  25. A. Gabizon and R. Raz. Deterministic extractors for affine sources over large fields. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, pages 407–418, 2005.
  26. A. Gabizon, R. Raz, and R. Shaltiel. Deterministic extractors for bit-fixing sources by obtaining an independent seed. SIAM Journal on Computing, 36(4):1072–1094, 2006.
  27. A. Gabizon and R. Shaltiel. Increasing the output length of zero-error dispersers. In APPROX-RANDOM, pages 430–443, 2008.
  31. O. Goldreich. A sample of samplers – a computational perspective on sampling (survey). In ECCC: Electronic Colloquium on Computational Complexity, Technical Reports, 1997.
  32. R. L. Graham, B. L. Rothschild, and J. H. Spencer. Ramsey Theory. Wiley, 1980.
  34. V. Guruswami, C. Umans, and S. P. Vadhan. Unbalanced expanders and randomness extractors from Parvaresh–Vardy codes. In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity, pages 96–108, 2007.
  42. C. Lu, O. Reingold, S. Vadhan, and A. Wigderson. Extractors: Optimal up to constant factors. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing, pages 602–611, 2003.
  48. N. Nisan and D. Zuckerman. Randomness is linear in space. Journal of Computer and System Sciences, 52(1):43–52, 1996.
  49. J. Radhakrishnan and A. Ta-Shma. Bounds for dispersers, extractors, and depth-two superconcentrators. SIAM Journal on Discrete Mathematics, 13(1):2–24, 2000.
  50. A. Rao. Extractors for a constant number of polynomially small min-entropy independent sources. In Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pages 497–506, 2006.
  52. A. Rao. Extractors for low weight affine sources. Manuscript, 2008.
  53. R. Raz. Extractors with weak random seeds. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, pages 11–20, 2005.
  63. R. Shaltiel. How to get more mileage from randomness extractors. In Proceedings of the 21st Annual IEEE Conference on Computational Complexity, pages 46–60, 2006.
  70. U. Vazirani. Strong communication complexity or generating quasi-random sequences from two communicating semi-random sources. Combinatorica, 7:375–392, 1987.
  75. A. C.-C. Yao. Should tables be sorted? Journal of the ACM, 28(3):615–628, 1981.

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  1. Dept. of Computer Science, University of Texas at Austin, Austin, USA
