Information Leakage in Optimal Anonymized and Diversified Data



To reconcile the demand for information dissemination with the preservation of privacy, a popular approach generalizes the attribute values in a dataset, for example by dropping the last digit of the postal code, so that the published dataset meets certain privacy requirements, such as the notions of k-anonymity and ℓ-diversity. On the other hand, the published dataset should remain useful and not be over-generalized. Hence, it is desirable to disseminate a database with high “usefulness”, as measured by a utility function. This leads to a generic framework whereby the dataset that is optimal with respect to the utility function, among all generalized datasets that meet the privacy requirements, is chosen for dissemination. In this paper, we observe that the very fact that a generalized dataset is optimal may leak information about the original. Thus, an adversary who is aware of how the dataset is generalized may be able to derive more information than the privacy requirements are intended to permit. This observation challenges the widely adopted approach that treats the generalization process as an optimization problem. We illustrate the observation with counter-examples in the context of k-anonymity and ℓ-diversity.
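The generalization step described above (truncating postal codes so that quasi-identifier combinations repeat) can be sketched as follows. This is a minimal illustration, not the paper's algorithm; the record layout, the `generalize` helper, and the sample data are all hypothetical assumptions for exposition.

```python
from collections import Counter

def generalize(record, drop=1):
    """Generalize a record by truncating the last `drop` digits of its postal code.
    Record layout (assumed for this sketch): (age bracket, postal code)."""
    age, postal = record
    return (age, postal[:-drop])

def is_k_anonymous(records, k):
    """A dataset is k-anonymous if every combination of quasi-identifier
    values appears in at least k records."""
    counts = Counter(records)
    return all(c >= k for c in counts.values())

# Hypothetical quasi-identifier tuples.
data = [("30-39", "10001"), ("30-39", "10002"), ("30-39", "10003")]

generalized = [generalize(r) for r in data]  # postal codes all become "1000"
print(is_k_anonymous(data, 3))         # False: each raw record is unique
print(is_k_anonymous(generalized, 3))  # True: all three share ("30-39", "1000")
```

The paper's point is that an adversary who knows the publisher picks the *least* generalized dataset satisfying such a check can rule out original datasets for which a finer generalization would already have passed, thereby narrowing the candidates beyond what k-anonymity alone suggests.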