Hardness of Non-interactive Differential Privacy from One-Way Functions

  • Lucas Kowalczyk
  • Tal Malkin
  • Jonathan Ullman
  • Daniel Wichs
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10991)

Abstract

A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset \(D \in X^n\) consisting of a small number n of elements from some large data universe X, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family Q.
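
For context, here is a minimal sketch of the standard notions this setup refers to; the symbols \(M\), \(\varepsilon \), \(\delta \), \(\alpha \), and \(a_q\) are generic notation introduced only for illustration and are not quoted from the paper. A randomized algorithm \(M : X^n \rightarrow S\) is \((\varepsilon ,\delta )\)-differentially private if for all datasets \(D, D' \in X^n\) differing in a single element and every event \(T \subseteq S\),
\[ \Pr [M(D) \in T] \;\le\; e^{\varepsilon } \cdot \Pr [M(D') \in T] + \delta . \]
A statistical query is a predicate \(q : X \rightarrow \{0,1\}\) with \(q(D) = \frac{1}{n} \sum _{i=1}^{n} q(x_i)\), and a summary is \(\alpha \)-accurate for \(Q\) if, for every \(q \in Q\), it yields an estimate \(a_q\) with \(|a_q - q(D)| \le \alpha \).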

Ignoring computational constraints, this problem can be solved even when X and Q are exponentially large and n is just a small polynomial; however, all known algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation, no efficient differentially private algorithm exists when X and Q can be exponentially large. Prior to this work, however, there was no strong separation between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption.
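
For a sense of scale (a standard bound from the information-theoretic literature, stated here only roughly and up to lower-order factors; see, e.g., the mechanisms in [7, 34, 35]), \((\varepsilon ,\delta )\)-differentially private and \(\alpha \)-accurate summaries for an arbitrary family \(Q\) of statistical queries exist whenever
\[ n \;\gtrsim\; \frac{\sqrt{\log |X| \cdot \log (1/\delta )} \cdot \log |Q|}{\alpha ^2 \, \varepsilon }, \]
which is only polynomial in \(\log |X|\) and \(\log |Q|\) even when \(X\) and \(Q\) are exponentially large; the mechanisms achieving this, however, run in time polynomial in \(|X|\), i.e., exponential in the bit length of a data element.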

In this work we show that, if one-way functions exist, there is no general purpose differentially private algorithm that works when X and Q are exponentially large, and n is an arbitrary polynomial. In fact, we show that this result holds even if X is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey [52].

Notes

Acknowledgements

The authors are grateful to Salil Vadhan for many helpful discussions.

The first and second authors are supported in part by the Defense Advanced Research Projects Agency (DARPA) and Army Research Office (ARO) under Contract W911NF-15-C-0236, and NSF grants CNS-1445424 and CCF-1423306. Any opinions, findings and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency, Army Research Office, the National Science Foundation, or the U.S. Government. The first author is also supported by NSF grant CNS-1552932 and NSF Graduate Research Fellowship DGE-16-44869.

The third author is supported by NSF CAREER award CCF-1750640, NSF grant CCF-1718088, and a Google Faculty Research Award.

The fourth author is supported by NSF grants CNS-1314722 and CNS-1413964.

References

  1. Bafna, M., Ullman, J.: The price of selection in differential privacy. In: COLT 2017 - The 30th Annual Conference on Learning Theory (2017)
  2. Barrington, D.A.: Bounded-width polynomial-size branching programs recognize exactly those languages in \(NC^1\). In: Proceedings of the 18th ACM Symposium on Theory of Computing (STOC) (1986)
  3. Bassily, R., Nissim, K., Smith, A.D., Steinke, T., Stemmer, U., Ullman, J.: Algorithmic stability for adaptive data analysis. In: Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC) (2016)
  4. Bassily, R., Smith, A., Thakurta, A.: Private empirical risk minimization: efficient algorithms and tight error bounds. In: FOCS, pp. 464–473. IEEE, 18–21 October 2014
  5. Beimel, A., Nissim, K., Stemmer, U.: Private learning and sanitization: pure vs. approximate differential privacy. In: Raghavendra, P., Raskhodnikova, S., Jansen, K., Rolim, J.D.P. (eds.) APPROX/RANDOM 2013. LNCS, vol. 8096, pp. 363–378. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40328-6_26
  6. Blum, A., Dwork, C., McSherry, F., Nissim, K.: Practical privacy: the SuLQ framework. In: Symposium on Principles of Database Systems (PODS) (2005)
  7. Blum, A., Ligett, K., Roth, A.: A learning theory approach to noninteractive database privacy. J. ACM 60(2), 12 (2013)
  8. Boneh, D., Sahai, A., Waters, B.: Fully collusion resistant traitor tracing with short ciphertexts and private keys. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 573–592. Springer, Heidelberg (2006). https://doi.org/10.1007/11761679_34
  9. Boneh, D., Shaw, J.: Collusion-secure fingerprinting for digital data. IEEE Trans. Inf. Theory 44(5), 1897–1905 (1998)
  10. Boneh, D., Zhandry, M.: Multiparty key exchange, efficient traitor tracing, and more from indistinguishability obfuscation. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014. LNCS, vol. 8616, pp. 480–499. Springer, Heidelberg (2014)
  11. Brakerski, Z., Segev, G.: Function-private functional encryption in the private-key setting. J. Cryptol. 31(1), 202–225 (2018)
  12. Bun, M., Nissim, K., Stemmer, U., Vadhan, S.: Differentially private release and learning of threshold functions. In: IEEE Annual Symposium on Foundations of Computer Science (FOCS) (2015)
  13. Bun, M., Ullman, J., Vadhan, S.P.: Fingerprinting codes and the price of approximate differential privacy. In: STOC, pp. 1–10. ACM, 31 May–3 June 2014
  14. Bun, M., Zhandry, M.: Order-revealing encryption and the hardness of private learning. In: Kushilevitz, E., Malkin, T. (eds.) TCC 2016. LNCS, vol. 9562, pp. 176–206. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49096-9_8
  15. Chandrasekaran, K., Thaler, J., Ullman, J., Wan, A.: Faster private release of marginals on small databases. In: Innovations in Theoretical Computer Science (ITCS) (2014)
  16. Chor, B., Fiat, A., Naor, M.: Tracing traitors. In: Desmedt, Y.G. (ed.) CRYPTO 1994. LNCS, vol. 839, pp. 257–270. Springer, Heidelberg (1994). https://doi.org/10.1007/3-540-48658-5_25
  17. Daniely, A., Linial, N., Shalev-Shwartz, S.: From average case complexity to improper learning complexity. In: Symposium on Theory of Computing (STOC) (2014)
  18. Daniely, A., Shalev-Shwartz, S.: Complexity theoretic limitations on learning DNFs. In: COLT (2016)
  19. Dinur, I., Nissim, K.: Revealing information while preserving privacy. In: Principles of Database Systems (PODS). ACM (2003)
  20. Dodis, Y., Yu, Y.: Overcoming weak expectations. In: Sahai, A. (ed.) TCC 2013. LNCS, vol. 7785, pp. 1–22. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36594-2_1
  21. Dwork, C., Feldman, V., Hardt, M., Pitassi, T., Reingold, O., Roth, A.: Preserving statistical validity in adaptive data analysis. In: STOC. ACM (2015)
  22. Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 265–284. Springer, Heidelberg (2006). https://doi.org/10.1007/11681878_14
  23. Dwork, C., Naor, M., Reingold, O., Rothblum, G.N., Vadhan, S.P.: On the complexity of differentially private data release: efficient algorithms and hardness results. In: Symposium on Theory of Computing (STOC). ACM (2009)
  24. Dwork, C., Nikolov, A., Talwar, K.: Using convex relaxations for efficiently and privately releasing marginals. In: Symposium on Computational Geometry (SOCG) (2014)
  25. Dwork, C., Nissim, K.: Privacy-preserving datamining on vertically partitioned databases. In: Franklin, M. (ed.) CRYPTO 2004. LNCS, vol. 3152, pp. 528–544. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28628-8_32
  26. Dwork, C., Rothblum, G.N., Vadhan, S.P.: Boosting and differential privacy. In: Foundations of Computer Science (FOCS). IEEE (2010)
  27. Dwork, C., Smith, A., Steinke, T., Ullman, J.: Exposed! A survey of attacks on private data (2017)
  28. Dwork, C., Smith, A., Steinke, T., Ullman, J., Vadhan, S.: Robust traceability from trace amounts. In: FOCS. IEEE (2015)
  29. Dwork, C., Talwar, K., Thakurta, A., Zhang, L.: Analyze Gauss: optimal bounds for privacy-preserving principal component analysis. In: Symposium on Theory of Computing (STOC), pp. 11–20 (2014)
  30. Gorbunov, S., Vaikuntanathan, V., Wee, H.: Functional encryption with bounded collusions via multi-party computation. In: Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 162–179. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32009-5_11
  31. Goyal, R., Koppula, V., Waters, B.: Risky traitor tracing and new differential privacy negative results. Cryptology ePrint Archive, Report 2017/1117 (2017)
  32. Gupta, A., Hardt, M., Roth, A., Ullman, J.: Privately releasing conjunctions and the statistical query barrier. SIAM J. Comput. 42(4), 1494–1520 (2013)
  33. Gupta, A., Roth, A., Ullman, J.: Iterative constructions and private data release. In: Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194, pp. 339–356. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-28914-9_19
  34. Hardt, M., Ligett, K., McSherry, F.: A simple and practical algorithm for differentially private data release. In: Advances in Neural Information Processing Systems (NIPS) (2012)
  35. Hardt, M., Rothblum, G.N.: A multiplicative weights mechanism for privacy-preserving data analysis. In: Foundations of Computer Science (FOCS) (2010)
  36. Hardt, M., Rothblum, G.N., Servedio, R.A.: Private data release via learning thresholds. In: Symposium on Discrete Algorithms (SODA) (2012)
  37. Hardt, M., Ullman, J.: Preventing false discovery in interactive data analysis is hard. In: FOCS. IEEE (2014)
  38. Kearns, M.J.: Efficient noise-tolerant learning from statistical queries. In: Symposium on Theory of Computing (STOC). ACM (1993)
  39. Kilian, J.: Founding cryptography on oblivious transfer. In: Proceedings of the 20th ACM Symposium on Theory of Computing (STOC) (1988)
  40. Kowalczyk, L., Malkin, T., Ullman, J., Zhandry, M.: Strong hardness of privacy from weak traitor tracing. In: Hirt, M., Smith, A. (eds.) TCC 2016. LNCS, vol. 9985, pp. 659–689. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-53641-4_25
  41. Nikolov, A., Talwar, K., Zhang, L.: The geometry of differential privacy: the small database and approximate cases. SIAM J. Comput. 45(2), 575–616 (2016). https://doi.org/10.1137/130938943
  42. Pitt, L., Valiant, L.G.: Computational limitations on learning from examples. J. ACM 35(4), 965–984 (1988)
  43. Roth, A., Roughgarden, T.: Interactive privacy via the median mechanism. In: Symposium on Theory of Computing (STOC). ACM (2010)
  44. Sahai, A., Seyalioglu, H.: Worry-free encryption: functional encryption with public keys. In: Conference on Computer and Communications Security (CCS) (2010)
  45. Steinke, T., Ullman, J.: Interactive fingerprinting codes and the hardness of preventing false discovery. In: Proceedings of the 28th Conference on Learning Theory (COLT), pp. 1588–1628 (2015)
  46. Steinke, T., Ullman, J.: Tight lower bounds for differentially private selection. In: IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pp. 634–649 (2017)
  47. Tang, B., Zhang, J.: Barriers to black-box constructions of traitor tracing systems. In: Kalai, Y., Reyzin, L. (eds.) TCC 2017. LNCS, vol. 10677, pp. 3–30. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70500-2_1
  48. Thaler, J., Ullman, J., Vadhan, S.: Faster algorithms for privately releasing marginals. In: Czumaj, A., Mehlhorn, K., Pitts, A., Wattenhofer, R. (eds.) ICALP 2012. LNCS, vol. 7391, pp. 810–821. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-31594-7_68
  49. Ullman, J.: Private multiplicative weights beyond linear queries. In: PODS. ACM (2015)
  50. Ullman, J.: Answering \(n^{2+o(1)}\) counting queries with differential privacy is hard. SIAM J. Comput. 45(2), 473–496 (2016)
  51. Ullman, J., Vadhan, S.: PCPs and the hardness of generating private synthetic data. In: Ishai, Y. (ed.) TCC 2011. LNCS, vol. 6597, pp. 400–416. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-19571-6_24
  52. Vadhan, S.: The complexity of differential privacy. In: Lindell, Y. (ed.) Tutorials on the Foundations of Cryptography. ISC, pp. 347–450. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-57048-8_7

Copyright information

© International Association for Cryptologic Research 2018

Authors and Affiliations

  • Lucas Kowalczyk (1)
  • Tal Malkin (1)
  • Jonathan Ullman (2)
  • Daniel Wichs (2)

  1. Department of Computer Science, Columbia University, New York, USA
  2. College of Computer and Information Science, Northeastern University, Boston, USA
