Abstract
Differential privacy is a mathematical framework that provides strong theoretical privacy guarantees by ensuring the statistical indistinguishability of individuals in a dataset. It has become the de facto standard for privacy-preserving analysis of statistical datasets. However, the accuracy loss caused by the added noise remains a concern. First, we propose a new noise-adding mechanism that preserves \((\epsilon ,\delta )\)-differential privacy; the distribution of this mechanism can be viewed as a generalized truncated Laplacian distribution. We show that the proposed mechanism adds optimal noise in a global context, conditional upon technical lemmas, and that it outperforms the optimal Gaussian mechanism. In addition, we propose an \(\epsilon \)-differentially private mechanism that improves the utility of differential privacy by fusing multiple Laplace distributions. We derive closed-form expressions for the absolute expectation and the variance of the noise for both proposed mechanisms. Finally, we empirically evaluate the performance of the proposed mechanisms and show an increase in all utility measures considered, while preserving privacy.
Appendices
A Proof for Lemma 1 and Lemma 2
Here, we present the proof for Lemma 1 and Lemma 2 used in Sect. 4.2.
Proof
(Proof for Lemma 1). Since the probability density function f(x) is monotonically increasing when \(x < 0\) and monotonically decreasing when \(x \ge 0\),
We will first discuss the case when \(|A|\ge |B|\),
Plugging in \(M=\frac{1}{\lambda (2-e^{\frac{A}{\lambda }}-e^{-\frac{B}{\lambda }})}\) as in our definition for the generalized truncated Laplacian distribution, \(A = \lambda \ln \left[ 2+(\frac{1 - \delta }{\delta })e^{-\frac{B}{\lambda }} - (\frac{1}{\delta })e^{-\frac{B-\varDelta }{\lambda }}\right] \) as specified in Theorem 1, we have
We omit the computation for the case \(|A|\le |B|\), as the derivation closely parallels that of the case above.
Now, we will proceed to prove Lemma 2.
Proof
(Proof for Lemma 2). Given two neighboring datasets \(\mathcal {D}_1\sim \mathcal {D}_2\), we know that \(|q(\mathcal {D}_1)-q(\mathcal {D}_2)| \le \varDelta \), thus the condition \(\mathcal {P}(\mathcal {S})-e^\epsilon \mathcal {P}(\mathcal {S}+d)\le \delta \) for any \(|d|\le \varDelta \) is equivalent to
Hence, for any \(t\in \mathcal {S}\), the condition is equivalent to
which is the necessary condition for mechanism \(\mathcal {A}\) to preserve \((\epsilon ,\delta )\)-differential privacy.
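For reference, the absolute expectation of the generalized truncated Laplacian noise can be recovered directly from the density \(f(x) = Me^{-|x|/\lambda }\) on [A, B] with \(A< 0 < B\) and M as defined in the proof of Lemma 1. The following is a sketch of that computation (not the full derivation in the main text):

\[
\begin{aligned}
\mathbb {E}|X| &= M\left[ \int _A^0 (-x)\,e^{x/\lambda }\,dx + \int _0^B x\, e^{-x/\lambda }\,dx\right] \\
&= M\left[ \lambda ^2 - (\lambda |A| + \lambda ^2)e^{A/\lambda }\right] + M\left[ \lambda ^2 - (\lambda B + \lambda ^2)e^{-B/\lambda }\right] \\
&= M\left[ 2\lambda ^2 - (\lambda |A|+\lambda ^2)e^{A/\lambda } - (\lambda B+\lambda ^2)e^{-B/\lambda }\right] ,
\end{aligned}
\]

using the antiderivative identity \(\int _0^c t\, e^{-t/\lambda }\,dt = \lambda ^2 - (\lambda c + \lambda ^2)e^{-c/\lambda }\) on each half of the support.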
B Generalized Truncated Laplacian - Evaluation
We empirically show that the ratio of the noise amplitude \(L_1^*\) and noise power \(L_2^*\) of the generalized truncated Laplacian mechanism to those of the optimal Gaussian mechanism is always less than 1 for appropriate values of \(\delta \), A and B, as described in Sect. 4. Compared to the optimal Gaussian mechanism, the generalized truncated Laplacian mechanism therefore reduces both the noise power and the noise amplitude across all privacy regimes. The implementation is available at https://github.com/vaikkunth/DPMechanisms.
\(\epsilon \) | \(\delta \) | A | \(L_1^*\) | \(L_2^*\) |
---|---|---|---|---|
0.7 | 2.5e−06 | \(-17.46\) | 0.25 | 0.13 |
0.4 | 4.0e−06 | \(-27.57\) | 0.27 | 0.15 |
0.4 | 2.5e−06 | \(-28.74\) | 0.27 | 0.14 |
0.4 | 9.5e−06 | \(-25.4\) | 0.29 | 0.17 |
0.4 | 3.0e−06 | \(-28.29\) | 0.27 | 0.14 |
0.7 | 6.0e−06 | \(-16.21\) | 0.27 | 0.14 |
0.4 | 8.5e−06 | \(-25.68\) | 0.29 | 0.16 |
0.7 | 3.5e−06 | \(-16.98\) | 0.26 | 0.13 |
0.7 | 4.0e−06 | \(-16.79\) | 0.26 | 0.14 |
0.4 | 2.0e−06 | \(-29.3\) | 0.26 | 0.14 |
0.1 | 4.5e−06 | \(-93.66\) | 0.31 | 0.19 |
0.7 | 3.0e−06 | \(-17.2\) | 0.26 | 0.13 |
0.4 | 5.5e−06 | \(-26.77\) | 0.28 | 0.15 |
0.7 | 9.5e−06 | \(-15.55\) | 0.28 | 0.15 |
0.4 | 6.0e−06 | \(-26.55\) | 0.28 | 0.16 |
0.7 | 1.0e−06 | \(-18.77\) | 0.24 | 0.12 |
0.4 | 6.5e−06 | \(-26.35\) | 0.28 | 0.16 |
0.4 | 7.0e−06 | \(-26.17\) | 0.28 | 0.16 |
0.4 | 4.5e−06 | \(-27.27\) | 0.27 | 0.15 |
0.4 | 9.0e−06 | \(-25.54\) | 0.29 | 0.17 |
0.4 | 3.5e−06 | \(-27.9\) | 0.27 | 0.15 |
0.1 | 9.5e−06 | \(-86.19\) | 0.32 | 0.21 |
0.1 | 5.5e−06 | \(-91.66\) | 0.31 | 0.19 |
0.1 | 5.0e−06 | \(-92.61\) | 0.31 | 0.19 |
0.4 | 1.5e−06 | \(-30.02\) | 0.26 | 0.13 |
0.4 | 8.0e−06 | \(-25.83\) | 0.29 | 0.16 |
0.7 | 7.0e−06 | \(-15.99\) | 0.27 | 0.15 |
0.7 | 7.5e−06 | \(-15.89\) | 0.27 | 0.15 |
0.4 | 1.0e−06 | \(-31.03\) | 0.25 | 0.13 |
0.1 | 7.0e−06 | \(-89.24\) | 0.32 | 0.2 |
0.4 | 5.0e−06 | \(-27.01\) | 0.28 | 0.15 |
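The generalized truncated Laplacian noise underlying the table can be sampled by inverting its CDF piecewise on \([A, 0]\) and \([0, B]\). The sketch below uses the density \(f(x) = Me^{-|x|/\lambda }\) with M as defined in Appendix A; it is an illustrative reimplementation, not the evaluation pipeline from the repository, and the values of \(\lambda \), A, B in the usage line are arbitrary.

```python
import numpy as np

def sample_gtl(n, lam, A, B, rng=None):
    """Inverse-CDF sampling from the generalized truncated Laplacian
    f(x) = M * exp(-|x|/lam) on [A, B], with A < 0 < B and
    M = 1 / (lam * (2 - exp(A/lam) - exp(-B/lam)))."""
    rng = np.random.default_rng(rng)
    M = 1.0 / (lam * (2.0 - np.exp(A / lam) - np.exp(-B / lam)))
    u = rng.uniform(size=n)
    # probability mass accumulated on the left half [A, 0]
    F0 = M * lam * (1.0 - np.exp(A / lam))
    x = np.empty(n)
    left = u < F0
    # invert F(x) = M*lam*(exp(x/lam) - exp(A/lam)) on [A, 0]
    x[left] = lam * np.log(u[left] / (M * lam) + np.exp(A / lam))
    # invert F(x) = F0 + M*lam*(1 - exp(-x/lam)) on [0, B]
    x[~left] = -lam * np.log(1.0 - (u[~left] - F0) / (M * lam))
    return x

# Monte-Carlo estimates of noise amplitude E|X| and noise power E[X^2]
xs = sample_gtl(200_000, lam=1.0, A=-3.0, B=3.0, rng=0)
print("E|X| ~", np.abs(xs).mean(), "  E[X^2] ~", (xs ** 2).mean())
```

For a symmetric support (B = \(-A\)) the empirical mean is close to zero and all samples stay inside [A, B], which provides a quick check on the inversion.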
C Merging Laplacian Distributions - Evaluation
We evaluate the \(l^1\) and \(l^2\) cost for the Laplacian, the Merged Laplacian with one break point, and the Merged Laplacian with two break points. The cost for the Merged Laplacian with two break points is lower than that of the Laplacian mechanism; hence we achieve better utility for the same privacy loss. The implementation is available at https://github.com/vaikkunth/DPMechanisms.
(\(\epsilon _1\), \(\epsilon _2\), \(\epsilon _3\)) | (\(c_1\), \(c_2\)) | \(L_1^*\) (Laplacian, 1 break point, 2 break points) | \(L_2^*\) (Laplacian, 1 break point, 2 break points) |
---|---|---|---|
(0.2, 0.25, 0.33) | (1, 3) | (3.0, 3.03, 1.12) | (18.0, 18.22, 3.94) |
(0.17, 0.25, 0.33) | (1, 3) | (3.0, 3.03, 1.12) | (18.0, 18.22, 3.96) |
(0.17, 0.2, 0.33) | (1, 3) | (3.0, 3.05, 1.13) | (18.0, 18.35, 4.07) |
(0.14, 0.25, 0.33) | (1, 3) | (3.0, 3.03, 1.12) | (18.0, 18.22, 3.97) |
(0.14, 0.2, 0.33) | (1, 3) | (3.0, 3.05, 1.13) | (18.0, 18.35, 4.08) |
(0.14, 0.17, 0.33) | (1, 3) | (3.0, 3.06, 1.13) | (18.0, 18.43, 4.15) |
(0.14, 0.17, 0.2) | (1, 3) | (5.0, 5.01, 1.96) | (50.0, 50.15, 16.26) |
(0.12, 0.25, 0.33) | (1, 3) | (3.0, 3.03, 1.13) | (18.0, 18.22, 3.98) |
(0.12, 0.2, 0.33) | (1, 3) | (3.0, 3.05, 1.13) | (18.0, 18.35, 4.08) |
(0.12, 0.17, 0.33) | (1, 3) | (3.0, 3.06, 1.13) | (18.0, 18.43, 4.16) |
(0.12, 0.17, 0.2) | (1, 3) | (5.0, 5.01, 1.96) | (50.0, 50.15, 16.28) |
(0.12, 0.14, 0.33) | (1, 3) | (3.0, 3.07, 1.14) | (18.0, 18.49, 4.21) |
(0.12, 0.14, 0.2) | (1, 3) | (5.0, 5.02, 1.98) | (50.0, 50.26, 16.5) |
(0.11, 0.25, 0.33) | (1, 3) | (3.0, 3.03, 1.13) | (18.0, 18.22, 3.98) |
(0.11, 0.2, 0.33) | (1, 3) | (3.0, 3.05, 1.13) | (18.0, 18.35, 4.09) |
(0.11, 0.17, 0.33) | (1, 3) | (3.0, 3.06, 1.14) | (18.0, 18.43, 4.16) |
(0.11, 0.17, 0.2) | (1, 3) | (5.0, 5.01, 1.96) | (50.0, 50.15, 16.3) |
(0.11, 0.14, 0.33) | (1, 3) | (3.0, 3.07, 1.14) | (18.0, 18.49, 4.21) |
(0.11, 0.14, 0.2) | (1, 3) | (5.0, 5.02, 1.98) | (50.0, 50.26, 16.52) |
(0.11, 0.12, 0.33) | (1, 3) | (3.0, 3.08, 1.14) | (18.0, 18.53, 4.25) |
(0.11, 0.12, 0.2) | (1, 3) | (5.0, 5.03, 1.99) | (50.0, 50.34, 16.68) |
(0.11, 0.12, 0.14) | (1, 3) | (7.0, 7.01, 3.28) | (98.0, 98.12, 42.57) |
(0.2, 0.25, 0.33) | (1, 5) | (3.0, 3.03, 1.61) | (18.0, 18.22, 5.17) |
(0.17, 0.25, 0.33) | (1, 5) | (3.0, 3.03, 1.61) | (18.0, 18.22, 5.19) |
(0.17, 0.2, 0.33) | (1, 5) | (3.0, 3.05, 1.63) | (18.0, 18.35, 5.4) |
(0.14, 0.25, 0.33) | (1, 5) | (3.0, 3.03, 1.61) | (18.0, 18.22, 5.2) |
(0.14, 0.2, 0.33) | (1, 5) | (3.0, 3.05, 1.63) | (18.0, 18.35, 5.41) |
(0.14, 0.17, 0.33) | (1, 5) | (3.0, 3.06, 1.64) | (18.0, 18.43, 5.55) |
(0.14, 0.17, 0.2) | (1, 5) | (5.0, 5.01, 1.85) | (50.0, 50.15, 10.69) |
(0.12, 0.25, 0.33) | (1, 5) | (3.0, 3.03, 1.62) | (18.0, 18.22, 5.21) |
(0.12, 0.2, 0.33) | (1, 5) | (3.0, 3.05, 1.63) | (18.0, 18.35, 5.42) |
(0.12, 0.17, 0.33) | (1, 5) | (3.0, 3.06, 1.64) | (18.0, 18.43, 5.56) |
(0.12, 0.17, 0.2) | (1, 5) | (5.0, 5.01, 1.85) | (50.0, 50.15, 10.7) |
(0.12, 0.14, 0.33) | (1, 5) | (3.0, 3.07, 1.65) | (18.0, 18.49, 5.65) |
(0.12, 0.14, 0.2) | (1, 5) | (5.0, 5.02, 1.86) | (50.0, 50.26, 10.98) |
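The Laplacian baseline entries can be sanity-checked against the standard moments of the Laplace distribution: with scale b, the noise amplitude is \(\mathbb {E}|X| = b\) and the noise power is \(\mathbb {E}[X^2] = 2b^2\). The minimal Monte-Carlo check below assumes the common calibration \(b = \varDelta /\epsilon \) with sensitivity \(\varDelta = 1\); this parameterization is an assumption for illustration, not necessarily the exact one used in the table.

```python
import numpy as np

rng = np.random.default_rng(0)
for eps in (0.2, 0.33, 0.5):
    b = 1.0 / eps  # Laplace scale, assuming sensitivity 1 (illustrative)
    x = rng.laplace(scale=b, size=500_000)
    # analytic values: E|X| = b, E[X^2] = 2*b^2
    print(f"eps={eps}: E|X| ~ {np.abs(x).mean():.3f} (b={b:.3f}), "
          f"E[X^2] ~ {(x ** 2).mean():.2f} (2b^2={2 * b * b:.2f})")
```

The empirical estimates converge to the analytic values as the sample size grows, which makes this a convenient baseline when comparing against the merged variants.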
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Mugunthan, V., Xiao, W., Kagal, L. (2020). Utility-Enhancing Flexible Mechanisms for Differential Privacy. In: Domingo-Ferrer, J., Muralidhar, K. (eds.) Privacy in Statistical Databases. PSD 2020. Lecture Notes in Computer Science, vol. 12276. Springer, Cham. https://doi.org/10.1007/978-3-030-57521-2_6
Print ISBN: 978-3-030-57520-5
Online ISBN: 978-3-030-57521-2