
Membership inference attacks against compression models

  • Regular Paper
  • Published in: Computing

Abstract

With the rapid development of artificial intelligence, privacy threats are receiving increasing attention. One of the most common is the membership inference attack (MIA). Existing MIAs can effectively probe the privacy leakage risks of deep neural networks (DNNs). However, DNNs are usually compressed for practical use, especially in edge computing, and existing MIAs fail because compression changes a DNN's structure or parameters. To address this problem, we propose CM-MIA, an MIA against compression models that can effectively assess their privacy leakage risks before deployment. Specifically, we first use a variety of compression methods to build shadow models for different target models. We then use these shadow models to construct sample features and identify abnormal samples by computing the distance between sample features. Finally, based on a hypothesis test, we determine whether each abnormal sample is a member of the training dataset. Because only abnormal samples are used for membership inference, time costs are reduced and attack efficiency is improved. Extensive experiments on 6 datasets evaluate CM-MIA's attack capacity. The results show that CM-MIA achieves state-of-the-art attack performance in most cases; compared with the baselines, its attack success rate is higher by 10.5% on average.
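The pipeline sketched in the abstract can be illustrated with a minimal toy implementation. This is a hedged sketch only: the function names (`select_abnormal`, `membership_test`), the use of per-shadow-model losses as the sample feature, and the simple one-sided z-test are all illustrative assumptions, not the paper's actual feature construction or hypothesis test.

```python
# Illustrative sketch of a CM-MIA-style pipeline (assumed details, not
# the paper's method): a sample's feature is its loss under each
# compressed shadow model; abnormal samples are those far from the rest;
# membership is inferred with a one-sided z-test against non-member loss
# statistics.
import math


def mean_pairwise_distance(feature, features):
    """Average Euclidean distance from one feature vector to all others."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    others = [f for f in features if f is not feature]
    return sum(dist(feature, f) for f in others) / max(len(others), 1)


def select_abnormal(features, top_k):
    """Keep the indices of the top_k samples farthest, on average,
    from the other samples; only these proceed to inference."""
    ranked = sorted(range(len(features)),
                    key=lambda i: mean_pairwise_distance(features[i], features),
                    reverse=True)
    return ranked[:top_k]


def membership_test(feature, nonmember_mean, nonmember_std, alpha=0.05):
    """One-sided z-test (an assumed stand-in for the paper's test):
    members tend to have lower loss than non-members, so a loss far
    below the non-member mean rejects the 'non-member' hypothesis."""
    avg_loss = sum(feature) / len(feature)
    z = (nonmember_mean - avg_loss) / max(nonmember_std, 1e-9)
    p = 0.5 * (1.0 - math.erf(z / math.sqrt(2)))  # P(Z >= z)
    return p < alpha  # reject -> infer "member"


# Usage with toy per-shadow-model losses (hypothetical numbers):
features = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
abnormal = select_abnormal(features, top_k=1)
for i in abnormal:
    is_member = membership_test(features[i], nonmember_mean=2.0,
                                nonmember_std=0.5)
```

Restricting the hypothesis test to the abnormal samples is what gives the efficiency gain the abstract describes: most samples are filtered out by the cheap distance step.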



Author information


Corresponding author

Correspondence to Yong Jin.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jin, Y., Lou, W. & Gao, Y. Membership inference attacks against compression models. Computing 105, 2419–2442 (2023). https://doi.org/10.1007/s00607-023-01180-y

