On the Complexity of Computational Problems Regarding Distributions
We consider two basic computational problems regarding discrete probability distributions: (1) approximating the statistical difference (aka variation distance) between two given distributions, and (2) approximating the entropy of a given distribution. Both problems are considered in two different settings. In the first setting the approximation algorithm is only given samples from the distributions in question, whereas in the second setting the algorithm is given the “code” of a sampling device (for the distributions in question).
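For reference, the two quantities have the following standard definitions, which the abstract does not spell out (the base-2 logarithm is the usual convention in this line of work, not something the abstract fixes):

```latex
% Statistical difference (variation distance) between distributions X and Y:
\Delta(X,Y) \;=\; \frac{1}{2}\sum_{v}\bigl|\,\Pr[X=v]-\Pr[Y=v]\,\bigr|

% Shannon entropy of a distribution X:
\mathrm{H}(X) \;=\; \sum_{v}\Pr[X=v]\cdot\log_2\frac{1}{\Pr[X=v]}
```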
We survey the known results for both settings, noting that they are fundamentally different: the first setting is concerned with the number of samples required to determine the quantity in question, and is thus essentially information theoretic, whereas in the second setting the quantities in question are determined by the input, and the question is merely one of computational complexity. The focus of this survey is actually on the latter setting. In particular, the survey includes proof sketches of three central results regarding that setting, one of which has previously appeared only in the second author's PhD thesis.
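To make the first (sample-based) setting concrete, here is a minimal Python sketch of the naive "plug-in" estimators for both quantities, which simply substitute empirical frequencies for the true probabilities. This is an illustration only, not one of the sample-efficient estimators discussed in the survey, and the function names are ours:

```python
import math
from collections import Counter

def empirical_entropy(samples):
    """Naive plug-in estimate of H(X) from i.i.d. samples:
    substitute empirical frequencies for the true probabilities."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def empirical_statistical_difference(samples_x, samples_y):
    """Naive plug-in estimate of Delta(X, Y):
    half the L1 distance between the two empirical distributions."""
    nx, ny = len(samples_x), len(samples_y)
    cx, cy = Counter(samples_x), Counter(samples_y)
    # Counter returns 0 for values absent from one of the sample sets.
    return 0.5 * sum(abs(cx[v] / nx - cy[v] / ny) for v in set(cx) | set(cy))
```

Over a support of size n, such plug-in estimators need on the order of n samples; a central point of the sample-based results surveyed (e.g., those of Valiant and Valiant) is that entropy can in fact be estimated with Theta(n / log n) samples, and no fewer.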
Keywords: Approximation, Reductions, Entropy, Statistical Difference, Variation Distance, Sampleable Distributions, Zero-Knowledge, Promise Problems
References
- 1. Aiello, W., Håstad, J.: Perfect Zero-Knowledge Languages can be Recognized in Two Rounds. In: 28th FOCS, pp. 439–448 (1987)
- 2. Barak, B.: Non-Black-Box Techniques in Cryptography. PhD Thesis, Weizmann Institute of Science (January 2004)
- 4. Batu, T., Dasgupta, S., Kumar, R., Rubinfeld, R.: The Complexity of Approximating the Entropy. In: 34th STOC (2002)
- 5. Batu, T., Fischer, E., Fortnow, L., Kumar, R., Rubinfeld, R., White, P.: Testing random variables for independence and identity. In: 42nd FOCS (2001)
- 6. Batu, T., Fortnow, L., Rubinfeld, R., Smith, W.D., White, P.: Testing that distributions are close. In: 41st FOCS, pp. 259–269 (2000)
- 11. Fortnow, L.: The Complexity of Perfect Zero-Knowledge. In: 19th STOC, pp. 204–209 (1987)
- 12. Goldreich, O., Goldwasser, S., Ron, D.: Property testing and its connection to learning and approximation. JACM, 653–750 (July 1998)
- 15. Goldreich, O., Sahai, A., Vadhan, S.: Honest-Verifier Statistical Zero-Knowledge equals general Statistical Zero-Knowledge. In: 30th STOC, pp. 399–408 (1998)
- 17. Goldreich, O., Vadhan, S.: Comparing Entropies in Statistical Zero-Knowledge with Applications to the Structure of SZK. In: 14th IEEE Conference on Computational Complexity, pp. 54–73 (1999)
- 19. Okamoto, T.: On relationships between statistical zero-knowledge proofs. In: 28th STOC, pp. 649–658 (1996)
- 21. Sahai, A., Vadhan, S.: A Complete Promise Problem for Statistical Zero-Knowledge. In: 38th FOCS, pp. 448–457 (1997)
- 22. Vadhan, S.: A Study of Statistical Zero-Knowledge Proofs. PhD Thesis, Department of Mathematics, MIT (1999)
- 23. Valiant, G., Valiant, P.: A CLT and tight lower bounds for estimating entropy. In: ECCC, TR10-179 (2010)
- 24. Valiant, G., Valiant, P.: Estimating the unseen: A sublinear-sample canonical estimator of distributions. In: ECCC, TR10-180 (2010)
- 25. Valiant, P.: Testing symmetric properties of distributions. In: ECCC, TR07-135 (2007)