Abstract
Two widely used techniques for training supervised machine learning models on small datasets are Active Learning and Transfer Learning. The former helps to make optimal use of a limited labeling budget when acquiring new data. The latter uses large pre-trained models as feature extractors, which enables the design of complex, non-linear models even on tiny datasets. Combining these two approaches is an effective, state-of-the-art method when dealing with small datasets.
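To make this setting concrete, the following minimal sketch (our illustrative example, not the paper's exact pipeline) combines a frozen Keras ResNet50 feature extractor with entropy-based uncertainty sampling; `clf` is assumed to be a small softmax classifier trained on the extracted features:

```python
import numpy as np
import tensorflow as tf

# Transfer learning: a frozen pre-trained network serves as feature extractor.
extractor = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")

def query_indices(clf, x_pool, k=10):
    """Active learning via uncertainty sampling: return the indices of the
    k unlabeled pool samples whose predicted class distribution has the
    highest entropy; these are sent to the oracle for labeling."""
    feats = extractor.predict(x_pool, verbose=0)
    probs = clf.predict(feats, verbose=0)  # small softmax head on features
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-k:]
```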
In this paper, we share an intriguing observation: the combination of these techniques is particularly susceptible to a new kind of data poisoning attack. By adding small adversarial noise to the input, it is possible to create a collision in the output space of the transfer learner. As a result, active learning algorithms no longer select the optimal instances but almost exclusively the ones injected by the attacker. This allows an attacker to manipulate the active learner into selecting and including arbitrary images in the dataset, even against an overwhelming majority of unpoisoned samples. We show that a model trained on such a poisoned dataset suffers significantly deteriorated performance, dropping from 86% to 34% test accuracy. We evaluate this attack on both audio and image datasets and support our findings empirically. To the best of our knowledge, this weakness has not been described in the literature before.
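The attack itself can be viewed as a feature-collision optimization: find a small perturbation of the base input (the content the attacker wants labeled) such that the extractor maps it onto the features of a target instance the active learner is likely to query. The sketch below is a minimal illustration under assumptions of ours (a Keras ResNet50 extractor, images scaled to [0, 1], an L-infinity budget `eps`, and an Adam loop), not the authors' exact procedure:

```python
import tensorflow as tf

# Frozen feature extractor (preprocessing omitted for brevity).
extractor = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")
extractor.trainable = False

def poison(x_base, x_target, steps=200, lr=0.01, eps=8 / 255):
    """Perturb x_base with small adversarial noise so that its feature
    vector collides with that of x_target in the extractor's output space.
    x_base, x_target: float32 tensors of shape (1, 224, 224, 3) in [0, 1]."""
    target_feat = extractor(x_target, training=False)
    delta = tf.Variable(tf.zeros_like(x_base))
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            feat = extractor(x_base + delta, training=False)
            loss = tf.reduce_sum(tf.square(feat - target_feat))
        grad = tape.gradient(loss, delta)
        opt.apply_gradients([(grad, delta)])
        # Keep the noise small (L-infinity ball) and the image valid.
        delta.assign(tf.clip_by_value(delta, -eps, eps))
        delta.assign(tf.clip_by_value(x_base + delta, 0.0, 1.0) - x_base)
    return x_base + delta
```

Because the poisoned input is nearly indistinguishable from the target in feature space, an acquisition function that scores informativeness on those features treats it like the target and selects it for labeling.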
Notes
- 1.
- 2. 16-bit audio has a feature range of \(\pm[0, \dots, 2^{15}-1]\), where one bit is reserved for the sign (see the sketch below).
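Where adversarial noise is added to raw audio, this range also bounds the valid perturbation; a minimal sketch with a hypothetical helper, assuming NumPy int16 PCM samples:

```python
import numpy as np

def clip_to_pcm16(waveform):
    """Clamp a perturbed waveform to the valid 16-bit PCM range.
    int16 reserves one bit for the sign, so samples lie in
    [-2**15, 2**15 - 1]."""
    return np.clip(np.round(waveform), -2**15, 2**15 - 1).astype(np.int16)
```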
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Müller, N.M., Böttinger, K. (2021). Adversarial Vulnerability of Active Transfer Learning. In: Abreu, P.H., Rodrigues, P.P., Fernández, A., Gama, J. (eds) Advances in Intelligent Data Analysis XIX. IDA 2021. Lecture Notes in Computer Science, vol. 12695. Springer, Cham. https://doi.org/10.1007/978-3-030-74251-5_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-74250-8
Online ISBN: 978-3-030-74251-5