Abstract
The essential task of differentially private data analysis is to extend existing non-private algorithms into differentially private ones. This extension can be realized through several frameworks, roughly categorized as Laplace/exponential frameworks and private learning frameworks. A Laplace/exponential framework incorporates the Laplace or exponential mechanism directly into a non-private analysis algorithm, for example by adding Laplace noise to its counting steps or by employing the exponential mechanism when making selections. A private learning framework treats data analysis as a learning problem in terms of optimization: the problem is solved by defining a series of objective functions. Compared with a Laplace/exponential framework, a private learning framework has a clearer target, and its results are easier to compare in terms of risk bounds or sample complexity. However, private learning frameworks can handle only a limited set of learning algorithms, whereas nearly all types of analysis algorithms can be implemented in a Laplace/exponential framework.
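The two building blocks of the Laplace/exponential framework mentioned above can be sketched in a few lines. The following is a minimal illustration, not code from the chapter: `laplace_count` adds Laplace noise scaled to a count query's sensitivity of 1, and `exponential_select` picks a candidate with probability proportional to the exponentiated, scaled quality score. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    # A count query has sensitivity 1 (adding or removing one record
    # changes the count by at most 1), so Laplace noise with scale
    # 1/epsilon yields epsilon-differential privacy.
    true_count = sum(1 for x in data if predicate(x))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

def exponential_select(candidates, scores, epsilon, sensitivity=1.0):
    # Exponential mechanism: choose candidate r with probability
    # proportional to exp(epsilon * q(r) / (2 * sensitivity)).
    scores = np.asarray(scores, dtype=float)
    # Subtract the max score before exponentiating for numerical stability;
    # this does not change the selection probabilities.
    weights = np.exp(epsilon * (scores - scores.max()) / (2.0 * sensitivity))
    probs = weights / weights.sum()
    return candidates[np.random.choice(len(candidates), p=probs)]

# Noisy count of records above a threshold, and a private selection.
records = [3, 7, 1, 9, 4, 8]
noisy = laplace_count(records, lambda x: x > 5, epsilon=0.5)
choice = exponential_select(["a", "b", "c"], [10.0, 2.0, 1.0], epsilon=1.0)
```

Smaller epsilon means more noise (stronger privacy) in `laplace_count` and a flatter selection distribution in `exponential_select`; a private learning framework would instead fold the privacy constraint into the objective function being optimized.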
© 2017 Springer International Publishing AG
Cite this chapter
Zhu, T., Li, G., Zhou, W., Yu, P.S. (2017). Differentially Private Data Analysis. In: Differential Privacy and Applications. Advances in Information Security, vol 69. Springer, Cham. https://doi.org/10.1007/978-3-319-62004-6_6
Print ISBN: 978-3-319-62002-2
Online ISBN: 978-3-319-62004-6
eBook Packages: Computer Science (R0)