
Defense against adversarial examples based on wavelet domain analysis

Published in Applied Intelligence 53, 423–439 (2023)

Abstract

In recent years, machine learning, and deep learning in particular, has shown powerful performance on a range of challenging tasks. However, research has shown that deep learning systems can be vulnerable to malicious inputs modified by perturbations crafted to be imperceptible to humans. These adversarial examples can fool a classifier into misclassifying them with high confidence, limiting the deployment of deep learning systems, especially where the security of the learning model must be guaranteed. In this paper, we propose a two-level defense against adversarial attacks consisting of an adversarial detection module and an input reconstruction module. The detector differentiates between normal and adversarial examples fed to a deep image classification model, and the reconstructor transforms detected adversarial images back to their corresponding normal samples. Both the detection and reconstruction modules are novel, fast, signal-processing-based techniques that rely on analyzing the attacks in the wavelet domain. We show that our defense is effective against prominent state-of-the-art attacks without modifying the protected classifier or relying on any deep learning model that could itself be exposed to attacks.
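To make the two-level pipeline concrete, the following is a minimal Python sketch of a wavelet-domain detect-then-reconstruct defense. It is an illustration only, not the paper's actual algorithms: the subband-energy detection rule in `highband_energy`, the calibration threshold `tau`, and the soft-thresholding (wavelet shrinkage) reconstruction are assumptions standing in for the authors' wavelet-domain analysis, and `classifier` is a hypothetical black-box image classifier that remains unmodified.

```python
# Minimal sketch of a wavelet-domain detect-then-reconstruct defense.
# NOTE: an illustration, not the paper's exact method. The subband-energy
# detector, the threshold `tau`, and the soft-thresholding reconstruction
# are assumptions. Requires numpy and PyWavelets (pywt); img is a 2-D
# (grayscale) float array.
import numpy as np
import pywt


def highband_energy(img, wavelet="db4", level=2):
    """Mean energy of the detail (high-frequency) subbands of a 2-D image."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    details = [d for triple in coeffs[1:] for d in triple]  # (cH, cV, cD) per level
    return float(np.mean([np.mean(d ** 2) for d in details]))


def detect_adversarial(img, tau):
    """Hypothetical detection rule: flag inputs whose high-frequency
    energy exceeds a threshold `tau` calibrated on clean images."""
    return highband_energy(img) > tau


def reconstruct(img, wavelet="db4", level=2):
    """Suppress candidate perturbations by soft-thresholding the detail
    coefficients (classic wavelet shrinkage, used here as a stand-in)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband (median/0.6745),
    # combined with the universal threshold sqrt(2 log N).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in triple)
        for triple in coeffs[1:]
    ]
    rec = pywt.waverec2(shrunk, wavelet)
    return rec[: img.shape[0], : img.shape[1]]  # crop padding for odd sizes


def defend(img, classifier, tau):
    """Route flagged inputs through the reconstructor before the
    unmodified classifier; clean inputs pass through untouched."""
    x = reconstruct(img) if detect_adversarial(img, tau) else img
    return classifier(x)
```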



Code Availability

The code implemented during the current study is available from the corresponding author on request.


Funding

The authors did not receive support from any organization for the submitted work.

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Armaghan Sarvar and Maryam Amirmazlaghani. The first draft of the manuscript was written by Armaghan Sarvar, and all authors read and approved the final manuscript.

Corresponding author

Correspondence to Armaghan Sarvar.

Ethics declarations

Conflict of Interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Availability of Data and Material

The datasets generated and analysed during the current study are available from the corresponding author on request.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Sarvar, A., Amirmazlaghani, M. Defense against adversarial examples based on wavelet domain analysis. Appl Intell 53, 423–439 (2023). https://doi.org/10.1007/s10489-022-03159-2

