Practical Black Box Model Inversion Attacks Against Neural Nets

  • Conference paper
Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)

Abstract

Adversarial machine learning is a set of malicious techniques that aim to exploit the underlying mathematics of machine learning models. Model inversion is a particular type of adversarial machine learning attack in which an adversary attempts to reconstruct the target model’s private training data. Specifically, given black box access to a target classifier, the attacker aims to recreate a sample of a particular class using only the ability to query the model. Traditionally, these attacks have depended on the target classifier returning a confidence vector; the inversion process iteratively crafts an image that maximizes the target model’s confidence for a particular class. Our technique allows the attack to be performed when the target returns only a one-hot-encoded confidence vector. The approach begins with model extraction, i.e., training a local model to mimic the behavior of the target model. We then perform inversion on the local model, which is under our control. Through this combination, we introduce the first model inversion attack that can be performed in a true black box setting, i.e., without knowledge of the target model’s architecture and using only the output class labels. This is possible due to the transferability properties inherent in our model extraction approach, known as Jacobian Dataset Augmentation. Throughout this work, we train shallow Artificial Neural Nets (ANNs) to mimic deeper ANNs and CNNs. These shallow local models allow us to extend Fredrikson et al.’s inversion attack to invert more complex models than previously thought possible.
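
The two-stage pipeline described in the abstract (surrogate training via Jacobian Dataset Augmentation, followed by gradient-descent inversion on the surrogate in the style of Fredrikson et al.) can be illustrated with a minimal PyTorch sketch. This is not the authors' released code: the helper query_target (which sends one image to the black-box model and returns a hard class label), the step size lam, the image shape, and the optimizer settings are all illustrative assumptions rather than values from the paper.

import torch
import torch.nn.functional as F

def jacobian_augment(surrogate, seeds, query_target, lam=0.1):
    # One round of Jacobian dataset augmentation: perturb each seed along the
    # sign of the surrogate's gradient for the label the black-box target
    # assigns, then re-label the new points by querying the target again.
    # query_target is a hypothetical helper returning a hard label (int).
    new_points = []
    for x in seeds:
        x = x.clone().requires_grad_(True)
        label = query_target(x.detach())
        score = surrogate(x.unsqueeze(0))[0, label]
        grad, = torch.autograd.grad(score, x)
        new_points.append((x + lam * grad.sign()).detach())
    new_x = torch.stack(new_points)
    new_y = torch.tensor([query_target(x) for x in new_x])
    return new_x, new_y

def invert_class(surrogate, target_class, shape=(1, 1, 112, 92),
                 steps=500, lr=0.1):
    # Fredrikson-style inversion, run entirely on the local surrogate:
    # gradient-descend on the input image to minimize 1 - P(target_class),
    # i.e. maximize the surrogate's confidence for the chosen class.
    # Image shape, step count, and learning rate are illustrative only.
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        probs = F.softmax(surrogate(x), dim=1)
        loss = 1.0 - probs[0, target_class]
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep pixel values in a valid range
    return x.detach()

In the full attack, jacobian_augment would be applied for several substitute-training rounds (retraining the surrogate on the growing labeled set) before invert_class is run; that orchestration and the evaluation of the recovered image are omitted here.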

Supported by University of Colorado, Denver.

References

  1. Chi, C.-L., et al.: Individualized patient-centered lifestyle recommendations: an expert system for communicating patient specific cardiovascular risk information and prioritizing lifestyle options. J. Biomed. Inform. 45(6), 1164–1174 (2012)

  2. International Warfarin Pharmacogenetics Consortium: Estimation of the warfarin dose with clinical and pharmacogenetic data. N. Engl. J. Med. 360(8), 753–764 (2009)

  3. Taigman, Y., Yang, M., Ranzato, M.A., Wolf, L.: DeepFace: closing the gap to human-level performance in face verification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708 (2014)

  4. AWS. https://aws.amazon.com/rekognition/

  5. Google Cloud. https://cloud.google.com/healthcare/

  6. Fredrikson, M., et al.: Privacy in pharmacogenetics: an end-to-end case study of personalized warfarin dosing. In: 23rd USENIX Security Symposium (USENIX Security 2014) (2014)

  7. Jensen, C.A., et al.: Inversion of feedforward neural networks: algorithms and applications. Proc. IEEE 87(9), 1536–1549 (1999)

  8. Lee, S., Kil, R.M.: Inverse mapping of continuous functions using local and global information. IEEE Trans. Neural Netw. 5(3), 409–423 (1994)

  9. Mahendran, A., Vedaldi, A.: Understanding deep image representations by inverting them. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)

  10. Várkonyi-Kóczy, A.R.: Observer-based iterative fuzzy and neural network model inversion for measurement and control applications. In: Rudas, I.J., Fodor, J., Kacprzyk, J. (eds.) Towards Intelligent Engineering and Information Technology, pp. 681–702. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03737-5_49

  11. Hitaj, B., Ateniese, G., Perez-Cruz, F.: Deep models under the GAN: information leakage from collaborative deep learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (2017)

  12. Papernot, N., et al.: SoK: security and privacy in machine learning. In: 2018 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE (2018)

  13. Shokri, R., et al.: Membership inference attacks against machine learning models. In: 2017 IEEE Symposium on Security and Privacy (SP). IEEE (2017)

  14. Yang, Z., et al.: Neural network inversion in adversarial setting via background knowledge alignment. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (2019)

  15. Tramèr, F., et al.: Stealing machine learning models via prediction APIs. In: 25th USENIX Security Symposium (USENIX Security 2016) (2016)

  16. Jia, J., et al.: MemGuard: defending against black-box membership inference attacks via adversarial examples. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (2019)

  17. Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (2015)

  18. Papernot, N., et al.: Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security (2017)

  19. Papernot, N., McDaniel, P., Goodfellow, I.: Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277 (2016)

  20. Ng, H.-W., Winkler, S.: A data-driven approach to cleaning large face datasets. In: 2014 IEEE International Conference on Image Processing (ICIP). IEEE (2014)

  21. Batrinca, B., Treleaven, P.C.: Social media analytics: a survey of techniques, tools and platforms. AI Soc. 30(1), 89–116 (2014). https://doi.org/10.1007/s00146-014-0549-4

  22. Salem, A., et al.: ML-Leaks: model and data independent membership inference attacks and defenses on machine learning models. arXiv preprint arXiv:1806.01246 (2018)

  23. Hayes, J., et al.: LOGAN: membership inference attacks against generative models. arXiv preprint arXiv:1705.07663 (2017)

  24. Laskov, P.: Practical evasion of a learning-based classifier: a case study. In: 2014 IEEE Symposium on Security and Privacy. IEEE (2014)

  25. Zhang, Y., et al.: The secret revealer: generative model-inversion attacks against deep neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2020)

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Bekman, T., Abolfathi, M., Jafarian, H., Biswas, A., Banaei-Kashani, F., Das, K. (2021). Practical Black Box Model Inversion Attacks Against Neural Nets. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1525. Springer, Cham. https://doi.org/10.1007/978-3-030-93733-1_3

  • DOI: https://doi.org/10.1007/978-3-030-93733-1_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93732-4

  • Online ISBN: 978-3-030-93733-1
