DSCAE: a denoising sparse convolutional autoencoder defense against adversarial examples

  • Original Research
  • Journal of Ambient Intelligence and Humanized Computing

Abstract

Deep neural networks are the state-of-the-art method for many computer vision tasks. Yet imperceptible perturbations added to benign images can induce a deep learning model to make incorrect predictions, even though the perturbed images look unchanged to human eyes. Such adversarial examples threaten the safety of deep learning models in many real-world applications. In this work, we propose a method called the denoising sparse convolutional autoencoder (DSCAE) to defend against adversarial perturbations. DSCAE is a preprocessing module that works in front of the classification model and removes a substantial amount of the adversarial noise. The DSCAE defense has been evaluated against the FGSM, DeepFool, C&W, and JSMA attacks on the MNIST and CIFAR-10 datasets. The experimental results show that DSCAE defends against these state-of-the-art attacks effectively.
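
The core idea lends itself to a short sketch: train a convolutional autoencoder with an L1 sparsity penalty on its code layer to map noise-corrupted images back to their clean originals, then place it in front of the classifier at inference time. Below is a minimal illustrative sketch in tf.keras; the layer sizes, the sparsity weight (`sparsity_weight`), and the Gaussian noise level are assumptions chosen for illustration, not the paper's reported configuration.

```python
# A minimal sketch of a denoising sparse convolutional autoencoder.
# Architecture details and hyperparameters here are illustrative
# assumptions, not the exact DSCAE configuration from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers


def build_dscae(input_shape=(28, 28, 1), sparsity_weight=1e-5):
    """Encoder-decoder CNN with an L1 activity penalty on the code layer."""
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: compress the image into a sparse latent feature map.
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu",
                      activity_regularizer=regularizers.l1(sparsity_weight))(x)
    # Decoder: reconstruct a clean image from the sparse code.
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(input_shape[-1], 3, padding="same", activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model


# Denoising objective: corrupt clean images, reconstruct the originals.
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0
noisy = np.clip(x_train + 0.3 * np.random.normal(size=x_train.shape), 0.0, 1.0)

dscae = build_dscae()
dscae.fit(noisy, x_train, epochs=10, batch_size=128)

# At inference time, pass (possibly adversarial) inputs through the
# autoencoder before handing them to the classifier, e.g.:
#     logits = classifier(dscae(x_adversarial))
```

The denoising objective trains the autoencoder to project inputs back toward the clean data manifold, which is why the same module can strip adversarial perturbations it was never explicitly trained on.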

References

  • Akhtar N, Mian A (2018) Threat of adversarial attacks on deep learning in computer vision: a survey. arXiv e-prints arXiv:1801.00553

  • Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (NIPS), pp 1097–1105

  • Krizhevsky A, Nair V, Hinton G (2009) CIFAR-10 and CIFAR-100 datasets. https://www.cs.toronto.edu/~kriz/cifar.html. Accessed 11 July 2020

  • Bakhti Y, Fezza SA, Hamidouche W, Deforges O (2019) DDSA: a defense against adversarial attacks using deep denoising sparse autoencoder. IEEE Access 7:160397–160407. https://doi.org/10.1109/ACCESS.2019.2951526

  • Biggio B, Roli F (2018) Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recognit 84:317–331

  • Cao N, Li GF, Zhu PJ, Sun Q, Wang YY, Li J, Yan ML, Zhao YB (2019) Handling the adversarial attacks: a machine learning’s perspective. J Ambient Intell Hum Comput 10(8):2929–2943

  • Carlini N, Wagner D (2016) Towards evaluating the robustness of neural networks. arXiv e-prints arXiv:1608.04644

  • Gong Z (2018) Craft image adversarial samples with TensorFlow. https://github.com/gongzhitaao/tensorflow-adversarial/tree/v0.2.0. Accessed 11 July 2020

  • Goodfellow I, McDaniel P, Papernot N (2018) Making machine learning robust against adversarial inputs. Commun ACM 61:56–66. https://doi.org/10.1145/3134599

  • Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv e-prints arXiv:1412.6572

  • Gu S, Rigazio L (2014) Towards deep neural network architectures robust to adversarial examples. arXiv e-prints arXiv:1412.5068

  • Gu T, Dolan-Gavitt B, Garg S (2017) BadNets: identifying vulnerabilities in the machine learning model supply chain. arXiv e-prints arXiv:1708.06733

  • Hinton G, Deng L, Yu D, Dahl GE, Mohamed AR, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process Mag 29(6):82–97

  • LeCun Y (1998) The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/. Accessed 11 July 2020

  • Lee H, Han S, Lee J (2017) Generative adversarial trainer: defense to adversarial perturbations with GAN. arXiv e-prints arXiv:1705.03387

  • Li Y, Huang JB, Ahuja N, Yang MH (2016) Deep joint image filtering. In: Computer Vision – ECCV 2016, Lecture Notes in Computer Science, vol 9908. https://doi.org/10.1007/978-3-319-46493-0_10

  • Liu S, Pan J, Yang MH (2016a) Learning recursive filters for low-level vision via a hybrid neural network. In: Computer Vision – ECCV 2016, Lecture Notes in Computer Science, vol 9908, pp 560–576. https://doi.org/10.1007/978-3-319-46493-0_34

  • Liu W, Wang Z, Liu X, Zeng N, Liu Y, Alsaadi F (2016b) A survey of deep neural network architectures and their applications. Neurocomputing 234:11–26. https://doi.org/10.1016/j.neucom.2016.12.038

  • Melis M, Demontis A, Biggio B, Brown G, Fumera G, Roli F (2017) Is deep learning safe for robot vision? Adversarial examples against the iCub humanoid. arXiv e-prints arXiv:1708.06939

  • Moosavi-Dezfooli SM, Fawzi A, Frossard P (2015) DeepFool: a simple and accurate method to fool deep neural networks. arXiv e-prints arXiv:1511.04599

  • Bhagoji AN, Cullina D, Sitawarin C, Mittal P (2017) Enhancing robustness of machine learning systems via data transformations. arXiv e-prints arXiv:1704.02654

  • Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A (2015) The limitations of deep learning in adversarial settings. In: IEEE European Symposium on Security and Privacy (EuroS&P)

  • Shen X, Chen YC, Tao X, Jia J (2017) Convolutional neural pyramid for image processing. arXiv e-prints arXiv:1704.02071

  • Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2013) Intriguing properties of neural networks. arXiv e-prints arXiv:1312.6199

  • Xu H, Ma Y, Liu HC, Deb D, Liu H, Tang JL, Jain AK (2020) Adversarial attacks and defenses in images, graphs and text: a review. Int J Autom Comput 17(2):151–178

  • Xu W, Evans D, Qi Y (2017) Feature squeezing: detecting adversarial examples in deep neural networks. arXiv e-prints arXiv:1704.01155

Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grant No. 61966011 and the Project of the Educational Commission of Guangdong Province of China (2018GKTSCX114, 2018GKTSCX068).

Author information

Corresponding author

Correspondence to Xiaozhang Liu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Ye, H., Liu, X. & Li, C. DSCAE: a denoising sparse convolutional autoencoder defense against adversarial examples. J Ambient Intell Human Comput 13, 1419–1429 (2022). https://doi.org/10.1007/s12652-020-02642-3

