
Security and Privacy Issues in Deep Learning: A Brief Review

  • Original Research
  • Published in SN Computer Science

Abstract

Deep learning now plays an increasingly important role in daily life, powering prediction and classification tasks in applications such as self-driving vehicles, product recommendation, advertising, and healthcare. A deep learning model that produces false predictions or misclassifications can therefore cause great harm, making robustness a crucial concern. In addition, deep learning models are trained on large amounts of data that often contain sensitive information, so the privacy of that data must be protected when the models are deployed in real-world applications. In this article, we briefly review threats against, and defenses for, the security of deep learning models and the privacy of the data they use, while considering how such methods affect model performance and accuracy. Finally, we discuss current challenges and future developments.
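As background for the security threats the review surveys, the following is a minimal sketch of an adversarial example in the style of the fast gradient sign method (FGSM): a small, sign-directed perturbation of the input flips a classifier's prediction. The toy linear classifier, its weights, the input, and the perturbation budget `eps` below are illustrative assumptions for this sketch, not values taken from the article.

```python
import numpy as np

# Toy linear classifier: predict class 1 if w.x + b > 0, else class 0.
# (Assumed weights for illustration only.)
w = np.array([1.0, -2.0, 3.0])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, y, eps):
    """FGSM-style attack: step in the sign of the loss gradient w.r.t. x.

    For a linear model with logistic loss, that gradient is
    (sigmoid(w.x + b) - y) * w; FGSM only uses its sign.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))     # predicted probability of class 1
    grad = (p - y) * w               # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad)

x = np.array([2.0, 0.5, 0.1])        # clean input, true label 1
x_adv = fgsm_perturb(x, y=1, eps=1.2)

print(predict(x))      # clean input: class 1
print(predict(x_adv))  # perturbed input: class 0 — the prediction flips
```

A bounded per-feature perturbation is enough to push the input across the decision boundary; on deep networks the same idea produces changes that are imperceptible to humans yet change the model's output, which is why such attacks, and the defenses against them, are a central subject of this review.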


[Figs. 1–14 appear in the full article; they are omitted in this preview.]



Acknowledgements

This work is supported by a project with the Department of Science and Technology, Ho Chi Minh City, Vietnam (contract with HCMUT No. 08/2018/HĐ-QKHCN, dated 16/11/2018).

Author information

Corresponding author

Correspondence to Tran Khanh Dang.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Software Technology and Its Enabling Computing Platforms” guest edited by Lam-Son Lê and Michel Toulouse.


About this article


Cite this article

Ha, T., Dang, T.K., Le, H. et al. Security and Privacy Issues in Deep Learning: A Brief Review. SN COMPUT. SCI. 1, 253 (2020). https://doi.org/10.1007/s42979-020-00254-4
