Differential Privacy in Practice

  • Maryam Shoaran
  • Alex Thomo
  • Jens Weber
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7482)

Abstract

Differential privacy (DP) has attracted considerable attention as the method of choice for releasing aggregate query results while making it hard to infer information about individual records in the database. The most common way to achieve DP is to add noise drawn from the Laplace distribution. In this paper, we study differential privacy from a utility point of view for single and multiple queries. We examine the relationship between the cumulative probability of the noise and the privacy degree. Using this analysis and the notion of relative error, we show when, for a given problem, it is reasonable to employ a differentially private algorithm without losing a certain level of utility. For the case of multiple queries, we introduce a simple DP method called Differential (DIFF) that adds noise proportional to a query index used to express our preferences for having different noise scales for different queries. We also give an equation capturing when DIFF satisfies a user-given relative error threshold.
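
The Laplace mechanism referenced above is standard: a numeric query answer is perturbed with noise drawn from Laplace(0, Δ/ε), where Δ is the query's sensitivity and ε the privacy parameter. The Python sketch below illustrates that mechanism together with a hypothetical DIFF-style variant in which each query receives its own noise scale derived from a per-query weight (the "query index" mentioned above); the function names, the choice of weights, and the splitting of the privacy budget across queries are illustrative assumptions, not the paper's actual construction.

    import numpy as np

    def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
        # Standard Laplace mechanism: add noise drawn from Laplace(0, sensitivity/epsilon).
        rng = rng or np.random.default_rng()
        return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    def diff_style_answers(true_answers, sensitivity, epsilon, weights=None, rng=None):
        # Hypothetical DIFF-style variant (illustrative only): each query i gets noise
        # whose scale grows with an assumed per-query weight. The total budget epsilon
        # is split by sequential composition, so sum(eps_i) = epsilon.
        rng = rng or np.random.default_rng()
        k = len(true_answers)
        weights = np.asarray(weights if weights is not None
                             else np.arange(1, k + 1), dtype=float)  # assumed: weight = query index
        eps_i = epsilon * (1.0 / weights) / np.sum(1.0 / weights)    # larger weight -> smaller eps_i
        return [a + rng.laplace(loc=0.0, scale=sensitivity / e)      # -> larger Laplace scale
                for a, e in zip(true_answers, eps_i)]

For a single count query (sensitivity 1), laplace_mechanism(42, 1.0, 0.1) returns 42 plus noise of scale 10; the utility question studied in the paper is when such noise keeps the relative error of the released answers below a user-given threshold.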

Keywords

Statistical Databases · Differential Privacy · Utility

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Maryam Shoaran, University of Victoria, Victoria, Canada
  • Alex Thomo, University of Victoria, Victoria, Canada
  • Jens Weber, University of Victoria, Victoria, Canada
