A Privacy Risk Model for Trajectory Data

  • Anirban Basu
  • Anna Monreale
  • Juan Camilo Corena
  • Fosca Giannotti
  • Dino Pedreschi
  • Shinsaku Kiyomoto
  • Yutaka Miyake
  • Tadashi Yanagihara
  • Roberto Trasarti
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 430)

Abstract

Time sequence data relating to users, such as medical histories and mobility traces, are good candidates for data mining but often contain highly sensitive information. Privacy-preserving data publishing methods are used to release such data so that individual records in the release cannot be re-linked to specific users with a high degree of certainty. These methods quote theoretical worst-case privacy risks as measures of the protection they offer. With many real-world datasets, however, the worst case is too pessimistic and does not give a realistic view of the privacy risk: the real probability of re-identification is often much lower than the theoretical worst-case bound. In this paper we propose a novel empirical risk model for privacy which, by relating risk to the cost of a privacy attack, better captures the practical risk associated with a privacy-preserving data release. We present a detailed evaluation of the proposed model using k-anonymised real-world mobility data.
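The paper's empirical risk model itself is not reproduced here, but the gap it exploits between the theoretical worst-case risk of k-anonymity (1/k) and the risk actually faced by an adversary with limited background knowledge can be illustrated with a minimal sketch. The function names and the toy four-record, 2-anonymous release below are hypothetical, introduced only for illustration:

```python
from itertools import combinations

def worst_case_risk(k):
    """Theoretical worst-case re-identification probability under k-anonymity."""
    return 1.0 / k

def empirical_risk(trajectories, h):
    """Average re-identification probability for an adversary who knows h
    locations visited by a user. For each released trajectory and each
    h-subset of its locations the adversary might know, the match
    probability is 1 / (number of released trajectories containing that
    subset); risks are averaged over subsets, then over records.
    """
    risks = []
    for traj in trajectories:
        subset_risks = []
        for known in combinations(traj, h):
            known = set(known)
            matches = sum(1 for t in trajectories if known <= set(t))
            subset_risks.append(1.0 / matches)
        risks.append(sum(subset_risks) / len(subset_risks))
    return sum(risks) / len(risks)

# A toy 2-anonymous release: every trajectory appears at least twice.
data = [("A", "B", "C"), ("A", "B", "C"), ("A", "D", "E"), ("A", "D", "E")]

print(worst_case_risk(2))       # theoretical bound: 0.5
print(empirical_risk(data, 1))  # lower than 0.5 here: a single known
                                # location often matches several records
```

With only one known location the empirical risk falls below the 1/k bound, because the shared prefix "A" matches all four records; the worst case is only reached when the adversary's knowledge pinpoints a full equivalence class.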

Keywords

privacy risk, utility model, anonymisation, sequential data



Copyright information

© IFIP International Federation for Information Processing 2014

Authors and Affiliations

  • Anirban Basu (1)
  • Anna Monreale (2)
  • Juan Camilo Corena (1)
  • Fosca Giannotti (3)
  • Dino Pedreschi (2)
  • Shinsaku Kiyomoto (1)
  • Yutaka Miyake (1)
  • Tadashi Yanagihara (4)
  • Roberto Trasarti (3)
  1. KDDI R&D Laboratories, Japan
  2. University of Pisa, Italy
  3. ISTI-CNR, Italy
  4. Toyota ITC, Japan
