
Adversarial defenses for object detectors based on Gabor convolutional layers

  • Original article
  • The Visual Computer

Abstract

Despite their many advantages and positive features, deep neural networks are extremely vulnerable to adversarial attacks, a weakness that substantially reduces the adversarial accuracy of visual object detectors. To make these object detectors robust to adversarial attacks, a new Gabor filter-based method is proposed in this paper. The method is applied to YOLOv3 with different backbones, to SSD with different input sizes, and to Faster R-CNN (FRCNN), yielding six robust object detector models. To evaluate the efficacy of these models, they were subjected to adversarial training with three types of targeted attacks (TOG-fabrication, TOG-vanishing, and TOG-mislabeling) and three types of untargeted random attacks (DAG, RAP, and UEA). The best average accuracy (49.6%) was achieved by the YOLOv3-d model on the PASCAL VOC dataset, far surpassing the best accuracy obtained in previous works (25.4%). Empirical results show that, while the presented approach improves the adversarial accuracy of the object detector models, it does not degrade their performance on clean data.
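The article page itself contains no code, so the following is an illustrative sketch only of the general idea named in the abstract: a convolutional layer whose kernels are constrained by a fixed bank of Gabor filters. The class name, parameter values, and the specific modulation scheme (a fixed Gabor bank elementwise-modulating learnable kernels) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (NOT the authors' code): a conv layer whose learnable
# kernels are elementwise modulated by a fixed bank of oriented Gabor filters.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_kernel(size, theta, sigma=2.0, lambd=4.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter with orientation `theta` (radians)."""
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    # Rotate coordinates by theta, then apply Gaussian envelope x cosine carrier.
    x_t = xs * math.cos(theta) + ys * math.sin(theta)
    y_t = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = torch.cos(2 * math.pi * x_t / lambd + psi)
    return envelope * carrier

class GaborConv2d(nn.Module):
    """Conv layer whose learnable kernels are modulated by fixed Gabor filters."""
    def __init__(self, in_ch, out_ch, kernel_size=5, n_orientations=4):
        super().__init__()
        assert out_ch % n_orientations == 0
        thetas = [i * math.pi / n_orientations for i in range(n_orientations)]
        bank = torch.stack([gabor_kernel(kernel_size, t) for t in thetas])
        self.register_buffer("bank", bank)  # fixed, not trained
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01
        )
        self.n_orientations = n_orientations

    def forward(self, x):
        # Assign each orientation's Gabor filter to a block of output channels.
        reps = self.weight.shape[0] // self.n_orientations
        gabor = self.bank.repeat_interleave(reps, dim=0)       # (out_ch, k, k)
        modulated = self.weight * gabor[:, None, :, :]         # broadcast in_ch
        return F.conv2d(x, modulated, padding=self.weight.shape[-1] // 2)
```

As a usage sketch, `GaborConv2d(3, 32)(torch.randn(1, 3, 416, 416))` would replace the first convolution of a detector backbone; the intuition, consistent with the abstract's claim, is that the fixed orientation-selective structure constrains the learned filters and thereby limits the effect of adversarial perturbations without hurting clean-data accuracy.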





Author information


Corresponding author

Correspondence to Abdollah Amirkhani.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Amirkhani, A., Karimi, M.P. Adversarial defenses for object detectors based on Gabor convolutional layers. Vis Comput 38, 1929–1944 (2022). https://doi.org/10.1007/s00371-021-02256-6

