
WaveTransform: Crafting Adversarial Examples via Input Decomposition

  • Conference paper
  • First Online:
Computer Vision – ECCV 2020 Workshops (ECCV 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12535)

Abstract

The frequency spectrum plays a significant role in learning unique and discriminating features for object recognition. Both the low- and high-frequency information present in images has been extracted and learnt by a host of representation learning techniques, including deep learning. Inspired by this observation, we introduce a novel class of adversarial attacks, namely ‘WaveTransform’, that creates adversarial noise corresponding to the low-frequency and high-frequency subbands, separately or in combination. The frequency subbands are analyzed using wavelet decomposition; the subbands are corrupted and then used to reconstruct an adversarial example. Experiments are performed on multiple databases and CNN models to establish the effectiveness of the proposed WaveTransform attack and to analyze the importance of individual frequency components. The robustness of the proposed attack is also evaluated through its transferability and its resiliency against a recent adversarial defense algorithm. The experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
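The abstract describes a decompose-corrupt-reconstruct pipeline: wavelet decomposition splits an image into low- and high-frequency subbands, selected subbands are perturbed, and the inverse transform rebuilds the adversarial image. The sketch below is only an illustration of that pipeline, not the paper's method: it assumes PyWavelets (pywt) for the 2D DWT and injects random sign noise with a hypothetical budget eps, whereas the actual attack optimizes the subband perturbation against a target CNN.

```python
# Minimal sketch of the decompose-perturb-reconstruct pipeline, assuming
# PyWavelets (pywt) and NumPy. The perturbation here is random sign noise with a
# hypothetical budget `eps`; the paper instead learns the subband noise via
# gradient-based optimization against the attacked CNN.
import numpy as np
import pywt

def perturb_subbands(channel, eps=0.02, wavelet="haar",
                     attack_high=True, attack_low=False):
    """Decompose one image channel, corrupt selected subbands, reconstruct."""
    # Single-level 2D DWT: cA is the low-frequency subband,
    # (cH, cV, cD) are the horizontal/vertical/diagonal high-frequency subbands.
    cA, (cH, cV, cD) = pywt.dwt2(channel, wavelet)

    if attack_low:
        cA = cA + eps * np.sign(np.random.randn(*cA.shape))
    if attack_high:
        cH = cH + eps * np.sign(np.random.randn(*cH.shape))
        cV = cV + eps * np.sign(np.random.randn(*cV.shape))
        cD = cD + eps * np.sign(np.random.randn(*cD.shape))

    # Inverse DWT rebuilds the (now perturbed) channel.
    adv = pywt.idwt2((cA, (cH, cV, cD)), wavelet)
    return np.clip(adv, 0.0, 1.0)

if __name__ == "__main__":
    # Toy usage on a random single-channel "image" with values in [0, 1].
    x = np.random.rand(224, 224)
    x_adv = perturb_subbands(x)
    print("max per-pixel change:", np.abs(x_adv - x).max())
```

In this sketch, attacking only the high-frequency subbands leaves the coarse image content largely intact while still changing the fine texture a CNN may rely on; attacking the low-frequency subband alters the overall appearance more visibly.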

Notes

  1. The original code provided by the authors is used to perform the experiments.


Acknowledgements

A. Agarwal was partly supported by the Visvesvaraya PhD Fellowship. R. Singh and M. Vatsa are partially supported through a research grant from MHA, India. M. Vatsa is also partially supported through the Swarnajayanti Fellowship from the Government of India.

Author information

Corresponding author

Correspondence to Mayank Vatsa.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Anshumaan, D., Agarwal, A., Vatsa, M., Singh, R. (2020). WaveTransform: Crafting Adversarial Examples via Input Decomposition. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science, vol 12535. Springer, Cham. https://doi.org/10.1007/978-3-030-66415-2_10


  • DOI: https://doi.org/10.1007/978-3-030-66415-2_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-66414-5

  • Online ISBN: 978-3-030-66415-2

  • eBook Packages: Computer Science (R0)
