Abstract
Membership inference attacks (MIA) have been identified as a distinct threat to privacy when sensitive personal data are used to train machine learning (ML) models. This work aims to deepen our understanding of existing black-box MIAs while introducing a new label-only MIA model. The proposed model can successfully exploit well-generalized models, challenging the conventional wisdom that generalized models are immune to membership inference. Through systematic experimentation, we show that the proposed model outperforms existing attack models while being more resilient to manipulation of membership inference results caused by the selection of membership validation data.
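To make the threat model concrete, the following is a minimal sketch of the standard "gap attack" baseline that label-only MIAs build on: with access only to predicted labels (no confidence scores), an adversary guesses that a record was a training member iff the target model classifies it correctly. This is an illustrative baseline, not the paper's proposed attack; the function and variable names are hypothetical.

```python
import numpy as np

def label_only_baseline_mia(predict_fn, X, y_true):
    """Baseline label-only membership inference (gap attack).

    predict_fn: black-box access to the target model, returning only
                predicted class labels for a batch of records.
    X, y_true:  candidate records and their true labels.

    Returns an array of membership guesses (1 = inferred member),
    exploiting the generalization gap: members are classified
    correctly more often than non-members.
    """
    preds = np.asarray(predict_fn(X))
    return (preds == np.asarray(y_true)).astype(int)
```

More refined label-only attacks (e.g., the boundary-distance attacks cited below) replace the correctness test with a robustness measure, such as the perturbation magnitude needed to flip the predicted label.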
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Senavirathne, N., Torra, V. (2022). Dissecting Membership Inference Risk in Machine Learning. In: Meng, W., Conti, M. (eds) Cyberspace Safety and Security. CSS 2021. Lecture Notes in Computer Science, vol. 13172. Springer, Cham. https://doi.org/10.1007/978-3-030-94029-4_3
Print ISBN: 978-3-030-94028-7
Online ISBN: 978-3-030-94029-4