
Adversarial attacks in computer vision: a survey

  • Review Paper
  • Journal of Membrane Computing

Abstract

Deep learning, a central topic in artificial intelligence, has been widely applied across many fields and has driven remarkable advances in computer vision tasks such as image classification and object detection. However, deep neural networks (DNNs) have been shown to be adversarially vulnerable: in image classification, carefully crafted perturbations added to clean images produce adversarial examples that change the predictions of DNNs. The existence of adversarial examples therefore poses a significant obstacle to deploying DNNs securely in practice and has attracted considerable attention from researchers. In this survey, we first introduce the relevant concepts and background. We then systematically review existing adversarial attack methods and research progress, organized by computer vision task. Finally, we summarize several common defense methods and discuss open challenges.
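To make the notion of a perturbation-based attack concrete, the sketch below shows a minimal gradient-sign attack in the spirit of FGSM. It is illustrative only and not a method proposed in this survey; it assumes a pretrained PyTorch classifier `model`, an image batch `images` scaled to [0, 1], ground-truth `labels`, and a perturbation budget `epsilon`, all of which are hypothetical names introduced here for the example.

```python
# Minimal sketch of a white-box, gradient-sign adversarial attack.
# Assumptions (not from the surveyed paper): `model` is a pretrained
# PyTorch classifier, inputs are in [0, 1], labels are class indices.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Craft adversarial examples with one signed-gradient step."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Move each pixel in the direction that increases the loss.
    adv_images = images + epsilon * images.grad.sign()
    # Keep the perturbed result a valid image.
    return adv_images.clamp(0.0, 1.0).detach()
```

Iterative and black-box attacks discussed in the survey refine this basic idea, for example by taking many small steps, adding momentum, or estimating gradients from queries alone.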





Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 62376202).

Author information


Contributions

Chao Li wrote the main manuscript text and Handing Wang, Wen Yao, and Tingsong Jiang revised it. All authors reviewed the manuscript.

Corresponding authors

Correspondence to Handing Wang or Wen Yao.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, C., Wang, H., Yao, W. et al. Adversarial attacks in computer vision: a survey. J Membr Comput (2024). https://doi.org/10.1007/s41965-024-00142-3

