
Dissecting Membership Inference Risk in Machine Learning

  • Conference paper
  • First Online:
Cyberspace Safety and Security (CSS 2021)

Abstract

Membership inference attacks (MIAs) have been identified as a distinct threat to privacy when sensitive personal data are used to train machine learning (ML) models. This work aims to deepen our understanding of existing black-box MIAs while introducing a new label-only MIA model. The proposed model can successfully attack well-generalized models, challenging the conventional wisdom that generalization confers immunity to membership inference. Through systematic experimentation, we show that the proposed MIA model outperforms existing attack models while being more resilient to distortions of membership inference results caused by the selection of membership validation data.
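To make the threat model concrete, the simplest label-only membership inference baseline can be sketched as follows. This is an illustrative "gap attack" (guess "member" exactly when the target model labels a record correctly), not the authors' proposed attack; the toy data, the nearest-centroid target model, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data; half the records train the target model ("members").
n = 1000
X = np.vstack([rng.normal(-1, 1, (n, 5)), rng.normal(1, 1, (n, 5))])
y = np.repeat([0, 1], n)
idx = rng.permutation(2 * n)
mem, non = idx[:n], idx[n:]

# Hypothetical target model: nearest class centroid, fitted on members only.
centroids = np.stack([X[mem][y[mem] == c].mean(axis=0) for c in (0, 1)])

def predict(Xq):
    # Black-box, label-only access: the attacker sees predicted labels only.
    d = np.linalg.norm(Xq[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def gap_attack(Xq, yq):
    # Label-only signal: a correct prediction is taken as evidence of membership.
    return predict(Xq) == yq

guesses = np.concatenate([gap_attack(X[mem], y[mem]), gap_attack(X[non], y[non])])
truth = np.concatenate([np.ones(n), np.zeros(n)])
attack_acc = (guesses == truth).mean()
```

On a well-generalized target like this one, train and test accuracy nearly coincide, so the gap attack hovers around random guessing (0.5); the paper's point is that stronger label-only attacks can still succeed where this baseline fails.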



Author information

Corresponding author

Correspondence to Navoda Senavirathne.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Senavirathne, N., Torra, V. (2022). Dissecting Membership Inference Risk in Machine Learning. In: Meng, W., Conti, M. (eds) Cyberspace Safety and Security. CSS 2021. Lecture Notes in Computer Science, vol 13172. Springer, Cham. https://doi.org/10.1007/978-3-030-94029-4_3


  • DOI: https://doi.org/10.1007/978-3-030-94029-4_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-94028-7

  • Online ISBN: 978-3-030-94029-4

  • eBook Packages: Computer Science; Computer Science (R0)
