A Data Augmentation-Based Defense Method Against Adversarial Attacks in Neural Networks

Part of the Lecture Notes in Computer Science book series (LNTCS, volume 12453)

Abstract

Deep Neural Networks (DNNs) in Computer Vision (CV) are well known to be vulnerable to Adversarial Examples (AEs), i.e., imperceptible perturbations added maliciously to inputs in order to cause misclassification. This vulnerability poses a potential risk to real-life systems that deploy DNNs as core components. Numerous efforts have been devoted to protecting DNN models from AEs. However, no previous work can efficiently mitigate the effects of novel adversarial attacks while remaining compatible with real-life constraints. In this paper, we focus on developing a lightweight defense method that can efficiently invalidate full white-box adversarial attacks while satisfying real-life constraints. Starting from basic affine transformations, we integrate three transformations with randomized coefficients that are fine-tuned with respect to the amount of change applied to the defended sample. Compared with four state-of-the-art defense methods published in top-tier AI conferences in the past two years, our method demonstrates outstanding robustness and efficiency. It is worth highlighting that our method can withstand an advanced adaptive attack, namely BPDA with 50 rounds, helping the target model maintain an accuracy of around 80% while constraining the attack success rate to almost zero.
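
The following minimal Python sketch illustrates the general idea of such a randomized affine-transformation preprocessing defense; it is not the authors' implementation. The helper name randomized_affine_defense, the choice of rotation, shift, and shear as the three transformations, and the coefficient ranges are all assumptions made for illustration; the affine warp itself uses the standard Keras image utility apply_affine_transform (which relies on SciPy).

```python
import numpy as np
from tensorflow.keras.preprocessing.image import apply_affine_transform

def randomized_affine_defense(x, rng=None):
    """Apply one randomly parameterized affine transformation to a single image.

    x: H x W x C float array (the possibly adversarial input image).
    Returns a transformed copy intended to be fed to the classifier.
    """
    rng = rng if rng is not None else np.random.default_rng()
    theta = rng.uniform(-15.0, 15.0)           # rotation angle in degrees (assumed range)
    tx, ty = rng.uniform(-3.0, 3.0, size=2)    # shifts in pixels (assumed range)
    shear = rng.uniform(-10.0, 10.0)           # shear angle in degrees (assumed range)
    return apply_affine_transform(
        x, theta=theta, tx=tx, ty=ty, shear=shear, fill_mode='nearest'
    )

# Usage sketch: logits = model.predict(randomized_affine_defense(adv_image)[None, ...])
```

Because the coefficients are drawn afresh at every inference, an attacker cannot precompute the exact transformation, which is what makes this family of defenses resistant to full white-box and adaptive attacks such as BPDA.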

Keywords

  • Adversarial Examples
  • Deep learning
  • Security
  • Affine Transformation
  • Data augmentation

Author information

Corresponding author

Correspondence to Han Qiu.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Zeng, Y., Qiu, H., Memmi, G., Qiu, M. (2020). A Data Augmentation-Based Defense Method Against Adversarial Attacks in Neural Networks. In: Qiu, M. (ed.) Algorithms and Architectures for Parallel Processing. ICA3PP 2020. Lecture Notes in Computer Science, vol. 12453. Springer, Cham. https://doi.org/10.1007/978-3-030-60239-0_19
