Adversarial Attack with KD-Tree Searching on Training Set

Abstract
With the emergence of adversarial examples, deep-learning-based target detection models have been shown to be vulnerable to carefully crafted input samples. Most adversarial examples are imperceptible to humans, yet they deliberately mislead the detection system into producing various errors. This paper proposes a new approach to adversarial example generation: an adversarial attack based on the training set data. We build a KD-tree over the target model's training set and use it to retrieve the reference image that is closest to the original image but belongs to a different class. By moving the original image toward this reference image, we construct adversarial examples with stronger attack capability. Experiments show that moving an image toward the most similar training image of another class achieves the attack effect quickly and with less perturbation.
This work is supported by the NSFC under Grant 61876130.
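To make the retrieval-and-perturbation procedure concrete, the following is a minimal sketch, not the authors' implementation: it assumes a KD-tree over raw flattened pixels (scipy.spatial.KDTree), a toy nearest-centroid classifier standing in for the target model, and illustrative names (predict_label, attack, eps) chosen here for exposition.

```python
# Sketch of the KD-tree attack described in the abstract (assumptions noted above).
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(0)

# Toy stand-in for a training set: 200 flattened "images", 2 classes,
# shifted apart so the classes are separable.
X_train = rng.random((200, 64)).astype(np.float32)
y_train = rng.integers(0, 2, size=200)
X_train[y_train == 1] += 1.0

# Stand-in classifier (nearest class centroid); replace with the real target model.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict_label(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# One KD-tree per class, so we can restrict the search to images of *other* classes.
trees = {c: KDTree(X_train[y_train == c]) for c in (0, 1)}
class_images = {c: X_train[y_train == c] for c in (0, 1)}

def attack(x, eps=0.2, max_steps=50):
    """Move x step by step toward its nearest differently-labeled training image."""
    src = predict_label(x)
    # Reference image: the nearest training sample from any other class.
    best_dist, ref = np.inf, None
    for c, tree in trees.items():
        if c == src:
            continue
        d, i = tree.query(x)
        if d < best_dist:
            best_dist, ref = d, class_images[c][i]
    direction = (ref - x) / np.linalg.norm(ref - x)
    adv = x.astype(np.float64).copy()
    for _ in range(max_steps):
        adv += eps * direction               # small step toward the reference image
        if predict_label(adv) != src:        # stop as soon as the prediction flips
            break
    return adv

x0 = X_train[0]
x_adv = attack(x0)
print(predict_label(x0), predict_label(x_adv), np.linalg.norm(x_adv - x0))
```

In this sketch the step size eps governs the trade-off the abstract alludes to: smaller steps stop closer to the decision boundary, yielding a smaller perturbation at the cost of more model queries.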