Adversarial Vulnerability of Active Transfer Learning

  • Conference paper
Advances in Intelligent Data Analysis XIX (IDA 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12695)

Included in the following conference series:

  • International Symposium on Intelligent Data Analysis (IDA)

Abstract

Two widely used techniques for training supervised machine learning models on small datasets are Active Learning and Transfer Learning. The former helps to optimally use a limited budget to label new data. The latter uses large pre-trained models as feature extractors and enables the design of complex, non-linear models even on tiny datasets. Combining these two approaches is an effective, state-of-the-art method when dealing with small datasets.
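As a concrete illustration of this combination, the sketch below pairs a frozen pre-trained feature extractor with a small classifier and an uncertainty-based query strategy. The choice of a Keras ResNet50 extractor, logistic regression, and least-confidence sampling is an assumption for illustration, not necessarily the exact configuration evaluated in the paper.

    # Minimal sketch of active transfer learning (illustrative assumptions:
    # frozen Keras ResNet50 as feature extractor, logistic regression on top,
    # least-confidence query strategy).
    import numpy as np
    import tensorflow as tf
    from sklearn.linear_model import LogisticRegression

    # Transfer learning: a frozen, pre-trained network turns images into
    # 2048-dimensional feature vectors; only the small classifier is trained.
    extractor = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, pooling="avg")

    def embed(images):
        """Map a batch of 224x224 RGB images to feature vectors."""
        x = tf.keras.applications.resnet.preprocess_input(images.astype("float32"))
        return extractor.predict(x, verbose=0)

    def uncertainty_query(clf, pool_features, budget):
        """Active learning: return indices of the `budget` unlabelled
        instances the classifier is least confident about."""
        confidence = clf.predict_proba(pool_features).max(axis=1)
        return np.argsort(confidence)[:budget]

    # Hypothetical usage with a tiny labelled set and a large unlabelled pool:
    # clf = LogisticRegression(max_iter=1000).fit(embed(x_labelled), y_labelled)
    # query_idx = uncertainty_query(clf, embed(x_pool), budget=10)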

In this paper, we share an intriguing observation: the combination of these techniques is particularly susceptible to a new kind of data poisoning attack. By adding small adversarial noise to the input, it is possible to create a collision in the output space of the transfer learner. As a result, Active Learning algorithms no longer select the optimal instances, but almost exclusively the ones injected by the attacker. This allows an attacker to manipulate the active learner into selecting and including arbitrary images in the dataset, even against an overwhelming majority of unpoisoned samples. We show that a model trained on such a poisoned dataset performs significantly worse, dropping from 86% to 34% test accuracy. We evaluate this attack on both audio and image datasets and support our findings empirically. To the best of our knowledge, this weakness has not been described in the literature before.
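The core mechanism is a feature collision: small, bounded noise is added to an attacker-chosen input so that the frozen feature extractor maps it (almost) onto a target point in feature space, for example one the query strategy ranks as highly informative. The sketch below shows one way to compute such noise by gradient descent; the extractor interface, the L-infinity bound eps, the optimiser settings, and the assumption that images are scaled to [0, 1] are illustrative, not the paper's exact optimisation.

    # Illustrative feature-collision poisoning: find small noise `delta` such
    # that extractor(x_attack + delta) lies close to a target feature vector.
    import tensorflow as tf

    def feature_collision(extractor, x_attack, f_target,
                          eps=8.0 / 255.0, steps=200, lr=0.01):
        delta = tf.Variable(tf.zeros_like(x_attack))
        opt = tf.keras.optimizers.Adam(learning_rate=lr)
        for _ in range(steps):
            with tf.GradientTape() as tape:
                poisoned = tf.clip_by_value(x_attack + delta, 0.0, 1.0)
                f_poison = extractor(poisoned, training=False)
                # Pull the poisoned sample towards the target in feature space.
                loss = tf.reduce_mean(tf.square(f_poison - f_target))
            grads = tape.gradient(loss, [delta])
            opt.apply_gradients(zip(grads, [delta]))
            delta.assign(tf.clip_by_value(delta, -eps, eps))  # keep the noise small
        return tf.clip_by_value(x_attack + delta, 0.0, 1.0)

    # Because the poisoned samples collide with "informative" feature-space
    # targets, an uncertainty-based active learner tends to select them for
    # labelling, letting the attacker flood the training set.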


Notes

  1. https://drive.google.com/file/d/1JtXUu6degxnQ84Kggav8rgm9ktMhyAq0/view.

  2. 16-bit audio has a feature range of \(\pm[0, \dots, 2^{15}-1]\), where one bit is reserved for the sign.
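For reference, this value range corresponds to signed 16-bit PCM samples; a common preprocessing step (an assumption here, not taken from the paper) is to rescale them to floating point before feeding them to a pre-trained audio feature extractor.

    # Signed 16-bit PCM samples have magnitude on the order of 2**15;
    # dividing by 2**15 rescales the waveform to approximately [-1, 1].
    import numpy as np

    def pcm16_to_float(samples: np.ndarray) -> np.ndarray:
        return samples.astype(np.float32) / 2**15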


Author information

Corresponding author

Correspondence to Nicolas M. Müller.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Müller, N.M., Böttinger, K. (2021). Adversarial Vulnerability of Active Transfer Learning. In: Abreu, P.H., Rodrigues, P.P., Fernández, A., Gama, J. (eds) Advances in Intelligent Data Analysis XIX. IDA 2021. Lecture Notes in Computer Science, vol 12695. Springer, Cham. https://doi.org/10.1007/978-3-030-74251-5_10

  • DOI: https://doi.org/10.1007/978-3-030-74251-5_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-74250-8

  • Online ISBN: 978-3-030-74251-5

  • eBook Packages: Computer Science, Computer Science (R0)
