
Adversarial Attack with KD-Tree Searching on Training Set

  • Conference paper
  • In: Image and Graphics (ICIG 2021)

Abstract

With the emergence of adversarial examples, deep-learning-based target detection models have proven vulnerable to carefully crafted input samples. Most adversarial examples are imperceptible to humans, yet they deliberately mislead the target detection system into producing various errors. This paper offers a new idea for adversarial example generation: an attack method guided by the training set of the target model. We build a KD-tree over the target model's training set and use it to retrieve the reference image that is closest to the original image but belongs to a different category. By moving the original image toward this reference image, we construct adversarial examples with stronger attack ability. Experiments show that moving an image toward its most similar training image from another class achieves the attack effect quickly and with less perturbation.
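
The idea can be summarized in three steps: index the training images in a KD-tree, retrieve the nearest training image that carries a different label, and perturb the input toward that reference. The sketch below illustrates this; it assumes pixel-space Euclidean distance, NumPy image arrays scaled to [0, 1], and the SciPy cKDTree API. The function names, step size, and L-infinity budget are illustrative and not the authors' exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_kdtree(train_images, train_labels):
    """Index flattened training images for nearest-neighbor search."""
    flat = train_images.reshape(len(train_images), -1).astype(np.float32)
    return cKDTree(flat), flat, np.asarray(train_labels)

def nearest_other_class(tree, flat, labels, image, label, k=20):
    """Return the closest training image whose label differs from `label`."""
    dists, idx = tree.query(image.reshape(-1).astype(np.float32), k=k)
    for d, i in zip(np.atleast_1d(dists), np.atleast_1d(idx)):
        if labels[i] != label:             # first candidate from another class
            return flat[i].reshape(image.shape), float(d)
    return None, None                      # increase k if every neighbor matches

def move_toward_reference(image, reference, step=0.05, epsilon=0.1):
    """Shift the image toward the reference, clipped to an L-infinity budget."""
    perturbation = np.clip(step * (reference - image), -epsilon, epsilon)
    return np.clip(image + perturbation, 0.0, 1.0)
```

In an attack loop, one would query the target model after each call to move_toward_reference and stop as soon as the predicted label flips, which keeps the accumulated perturbation small.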

This work is supported by the NSFC under Grant 61876130.

Author information

Corresponding author

Correspondence to Yahong Han.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Guo, X., Jia, F., An, J., Han, Y. (2021). Adversarial Attack with KD-Tree Searching on Training Set. In: Peng, Y., Hu, S.-M., Gabbouj, M., Zhou, K., Elad, M., Xu, K. (eds.) Image and Graphics. ICIG 2021. Lecture Notes in Computer Science, vol. 12889. Springer, Cham. https://doi.org/10.1007/978-3-030-87358-5_11

  • DOI: https://doi.org/10.1007/978-3-030-87358-5_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87357-8

  • Online ISBN: 978-3-030-87358-5

  • eBook Packages: Computer Science, Computer Science (R0)
