Worst- and Average-Case Privacy Breaches in Randomization Mechanisms

  • Michele Boreale
  • Michela Paolini
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7604)


Abstract

In a variety of contexts, randomization is regarded as an effective technique for concealing sensitive information. We model randomization mechanisms as information-theoretic channels. Our starting point is a semantic notion of security that expresses the absence of any privacy breach above a given level of seriousness ε, irrespective of any background information, represented as a prior probability on the secret inputs. We first examine this notion along two dimensions: worst vs. average case, and single vs. repeated observations. In each case, we characterize the security level achievable by a mechanism in a simple fashion that depends only on the channel matrix, and specifically on certain measures of "distance" between its rows, such as norm-1 distance and Chernoff information. We next clarify the relation between our worst-case security notion and differential privacy (dp): we show that, while the former is in general stronger, the two coincide if one restricts attention to background information that can be factorised into the product of independent priors over individuals. We finally turn our attention to expected utility, in the sense of Ghosh et al., in the case of repeated independent observations. We characterize the exponential growth rate of any reasonable utility function. In the particular case where the mechanism provides ε-dp, we study the relation of the utility rate to ε: we offer either exact expressions or upper bounds for the utility rate that apply to practically interesting cases, such as the (truncated) geometric mechanism.
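The quantities the abstract refers to can be sketched concretely. Below is an illustrative Python sketch, not taken from the paper: a channel matrix is a list of rows (one output distribution per secret input), and we compute the norm-1 distance and Chernoff information between rows, the ε-dp level of the matrix (treating every pair of inputs as adjacent), and the channel matrix of the truncated geometric mechanism of Ghosh et al. The function names and the grid approximation of Chernoff information are our own choices.

```python
import math

def l1_distance(p, q):
    """Norm-1 distance between two rows (output distributions) of a channel matrix."""
    return sum(abs(a - b) for a, b in zip(p, q))

def chernoff_information(p, q, grid=1000):
    """Chernoff information C(p, q) = max_{0 < lam < 1} -log sum_y p(y)^lam q(y)^(1-lam),
    approximated by searching lam on a uniform grid."""
    best = 0.0
    for i in range(1, grid):
        lam = i / grid
        s = sum(a ** lam * b ** (1 - lam) for a, b in zip(p, q) if a > 0 and b > 0)
        best = max(best, -math.log(s))
    return best

def dp_level(channel):
    """Smallest eps such that every pair of rows satisfies the eps-dp ratio bound
    (all pairs of inputs treated as adjacent here):
    eps = max over rows p, q and columns y of log(p[y] / q[y])."""
    eps = 0.0
    for p in channel:
        for q in channel:
            for a, b in zip(p, q):
                if a > 0 and b > 0:
                    eps = max(eps, math.log(a / b))
    return eps

def truncated_geometric(n, alpha):
    """Channel matrix of the alpha-geometric mechanism truncated to {0, ..., n}:
    the mass that would fall outside the range is folded onto the endpoints."""
    rows = []
    for x in range(n + 1):
        row = []
        for z in range(n + 1):
            if z == 0:
                row.append(alpha ** x / (1 + alpha))
            elif z == n:
                row.append(alpha ** (n - x) / (1 + alpha))
            else:
                row.append((1 - alpha) / (1 + alpha) * alpha ** abs(z - x))
        rows.append(row)
    return rows

if __name__ == "__main__":
    M = truncated_geometric(2, 0.5)
    print(l1_distance(M[0], M[1]))        # norm-1 distance between adjacent rows
    print(chernoff_information(M[0], M[1]))
    print(dp_level(M))                    # for n=2, alpha=0.5 this is 2*ln(2)
```

With unit adjacency (rows differing by one) the geometric mechanism satisfies ε-dp for ε = ln(1/α); taking all pairs adjacent, as `dp_level` does, scales ε by the maximum distance between inputs.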


Keywords: Foundations of security · Quantitative information flow · Differential privacy · Utility · Information theory


References

  1. Alvim, M.S., Andrés, M.E., Chatzikokolakis, K., Degano, P., Palamidessi, C.: Differential Privacy: On the Trade-Off between Utility and Information Leakage. In: Barthe, G., Datta, A., Etalle, S. (eds.) FAST 2011. LNCS, vol. 7140, pp. 39–54. Springer, Heidelberg (2012)
  2. Alvim, M.S., Andrés, M.E., Chatzikokolakis, K., Palamidessi, C.: Quantitative Information Flow and Applications to Differential Privacy. In: Aldini, A., Gorrieri, R. (eds.) FOSAD 2011. LNCS, vol. 6858, pp. 211–230. Springer, Heidelberg (2011)
  3. Alvim, M.S., Andrés, M.E., Chatzikokolakis, K., Palamidessi, C.: On the Relation between Differential Privacy and Quantitative Information Flow. In: Aceto, L., Henzinger, M., Sgall, J. (eds.) ICALP 2011, Part II. LNCS, vol. 6756, pp. 60–76. Springer, Heidelberg (2011)
  4. Barthe, G., Köpf, B.: Information-theoretic Bounds for Differentially Private Mechanisms. In: 24th IEEE Computer Security Foundations Symposium, CSF 2011, pp. 191–204. IEEE Computer Society (2011)
  5. Boreale, M., Pampaloni, F., Paolini, M.: Asymptotic Information Leakage under One-Try Attacks. In: Hofmann, M. (ed.) FOSSACS 2011. LNCS, vol. 6604, pp. 396–410. Springer, Heidelberg (2011). Full version to appear in MSCS
  6. Boreale, M., Pampaloni, F., Paolini, M.: Quantitative Information Flow, with a View. In: Atluri, V., Diaz, C. (eds.) ESORICS 2011. LNCS, vol. 6879, pp. 588–606. Springer, Heidelberg (2011)
  7. Boreale, M., Paolini, M.: Worst- and Average-Case Privacy Breaches in Randomization Mechanisms. Full version of the present paper
  8. Braun, C., Chatzikokolakis, K., Palamidessi, C.: Quantitative Notions of Leakage for One-try Attacks. In: Proc. of MFPS 2009. Electr. Notes Theor. Comput. Sci., vol. 249, pp. 75–91 (2009)
  9. Chatzikokolakis, K., Palamidessi, C., Panangaden, P.: Anonymity protocols as noisy channels. Information and Computation 206(2–4), 378–401 (2008)
  10. Chatzikokolakis, K., Palamidessi, C., Panangaden, P.: On the Bayes risk in information-hiding protocols. Journal of Computer Security 16(5), 531–571 (2008)
  11. Chaum, D.: The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability. Journal of Cryptology 1(1), 65–75 (1988)
  12. Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. John Wiley & Sons (2006)
  13. Dwork, C.: Differential Privacy. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 1–12. Springer, Heidelberg (2006)
  14. Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating Noise to Sensitivity in Private Data Analysis. In: Proc. of the 3rd IACR Theory of Cryptography Conference (2006)
  15. Evfimievski, A., Gehrke, J., Srikant, R.: Limiting Privacy Breaches in Privacy Preserving Data Mining. In: Proc. of the ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (2003)
  16. Friedman, A., Schuster, A.: Data Mining with Differential Privacy. In: Proc. of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD (2010)
  17. Ganta, S.R., Kasiviswanathan, S.P., Smith, A.: Composition Attacks and Auxiliary Information in Data Privacy. In: Proc. of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD (2008)
  18. Ghosh, A., Roughgarden, T., Sundararajan, M.: Universally utility-maximizing privacy mechanisms. In: STOC 2009, pp. 351–360 (2009)
  19. Köpf, B., Smith, G.: Vulnerability Bounds and Leakage Resilience of Blinded Cryptography under Timing Attacks. In: CSF 2010, pp. 44–56 (2010)
  20. Leang, C.C., Johnson, D.H.: On the asymptotics of M-hypothesis Bayesian detection. IEEE Transactions on Information Theory 43, 280–282 (1997)
  21. McSherry, F.: Privacy Integrated Queries. In: Proc. of the 2009 ACM SIGMOD International Conference on Management of Data, SIGMOD (2009)
  22. McSherry, F., Talwar, K.: Mechanism Design via Differential Privacy. In: Proc. of the Annual IEEE Symposium on Foundations of Computer Science, FOCS (2007)
  23. Narayanan, A., Shmatikov, V.: Robust De-anonymization of Large Sparse Datasets. In: Proc. of IEEE Symposium on Security and Privacy (2008)
  24. Reiter, M.K., Rubin, A.D.: Crowds: Anonymity for Web Transactions. ACM Trans. Inf. Syst. Secur. 1(1), 66–92 (1998)
  25. Rényi, A.: On Measures of Entropy and Information. In: Proc. of the 4th Berkeley Symposium on Mathematics, Statistics, and Probability, pp. 547–561 (1961)
  26. Smith, G.: On the Foundations of Quantitative Information Flow. In: de Alfaro, L. (ed.) FOSSACS 2009. LNCS, vol. 5504, pp. 288–302. Springer, Heidelberg (2009)

Copyright information

© IFIP International Federation for Information Processing 2012

Authors and Affiliations

  1. Michele Boreale, Università di Firenze, Italy
  2. Michela Paolini, IMT - Institute for Advanced Studies, Lucca, Italy
