Data Mining and Knowledge Discovery, Volume 29, Issue 6, pp 1733–1782

Discrimination- and privacy-aware patterns

  • Sara Hajian
  • Josep Domingo-Ferrer
  • Anna Monreale
  • Dino Pedreschi
  • Fosca Giannotti

DOI: 10.1007/s10618-014-0393-7

Cite this article as:
Hajian, S., Domingo-Ferrer, J., Monreale, A. et al. Data Min Knowl Disc (2015) 29: 1733. doi:10.1007/s10618-014-0393-7

Abstract

Data mining is gaining societal momentum due to the ever-increasing availability of large amounts of human data, easily collected by a variety of sensing technologies. We are therefore faced with unprecedented opportunities and risks: a deeper understanding of human behavior and of how our society works comes with a greater chance of privacy intrusion and of unfair discrimination based on the extracted patterns and profiles. Consider the case in which a set of patterns extracted from the personal data of a population of individuals is released for subsequent use in a decision-making process, e.g., granting or denying credit. First, the set of patterns may reveal sensitive information about individuals in the training population and, second, decision rules based on such patterns may lead to unfair discrimination, depending on what is represented in the training cases. Although methods that independently address privacy or discrimination in data mining have been proposed in the literature, we argue that in this context privacy and discrimination risks should be tackled together, and we present a methodology for doing so while publishing frequent pattern mining results. We describe a set of pattern sanitization methods, one for each discrimination measure used in the legal literature, to achieve fair publication of frequent patterns in combination with two possible privacy transformations: one based on \(k\)-anonymity and one based on differential privacy. Our proposed pattern sanitization methods based on \(k\)-anonymity yield both privacy- and discrimination-protected patterns, while introducing reasonable (controlled) pattern distortion. Moreover, they achieve a better trade-off between protection and data quality than the sanitization methods based on differential privacy. Finally, the effectiveness of our proposals is assessed by extensive experiments.
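To make the notion of a discrimination measure concrete, the following minimal Python sketch computes extended lift (elift), one standard measure from this line of work, defined as elift(A, B → C) = conf(A, B → C) / conf(B → C). The itemsets, support counts, and function names below are illustrative assumptions, not code or data from the paper.

# Minimal sketch (not the paper's code) of the extended-lift measure.
# A is a potentially discriminatory itemset (e.g., a protected group),
# B a context itemset, and C a (negative) decision.

def confidence(supp_body_head: int, supp_body: int) -> float:
    """conf(X -> C) = supp(X ∪ {C}) / supp(X)."""
    return supp_body_head / supp_body

def elift(supp_abc: int, supp_ab: int, supp_bc: int, supp_b: int) -> float:
    """How much adding the protected itemset A to context B raises the
    confidence of concluding C; values well above 1 flag potential
    discrimination."""
    return confidence(supp_abc, supp_ab) / confidence(supp_bc, supp_b)

# Hypothetical support counts for A = {gender=female}, B = {city=NYC},
# C = {credit=denied}: conf(A,B -> C) = 63/90 = 0.7, conf(B -> C) = 70/140 = 0.5
print(elift(supp_abc=63, supp_ab=90, supp_bc=70, supp_b=140))  # -> 1.4

A rule whose elift exceeds a chosen threshold would be flagged as potentially discriminatory and become a candidate for sanitization.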

Keywords

Frequent patterns · Anti-discrimination · Privacy · Data mining

Copyright information

© The Author(s) 2014

Authors and Affiliations

  • Sara Hajian (1)
  • Josep Domingo-Ferrer (1)
  • Anna Monreale (2)
  • Dino Pedreschi (2)
  • Fosca Giannotti (3)

  1. Department of Computer Engineering and Maths, UNESCO Chair in Data Privacy, Universitat Rovira i Virgili, Tarragona, Catalonia
  2. Dipartimento di Informatica, Università di Pisa, Pisa, Italy
  3. ISTI-CNR, Pisa, Italy
