
Jacobian Regularization for Mitigating Universal Adversarial Perturbations

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12894)

Abstract

Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They represent a significant class of threats, as they enable realistic, practical, and low-cost attacks on neural networks. In this work, we derive upper bounds on the effectiveness of UAPs based on norms of data-dependent Jacobians. We empirically verify that Jacobian regularization increases model robustness to UAPs by up to four times whilst maintaining clean performance. Our theoretical analysis also allows us to formulate a metric for the strength of shared adversarial perturbations between pairs of inputs. We apply this metric to benchmark datasets and show that it is highly correlated with the observed robustness. This suggests that realistic and practical universal attacks can be reliably mitigated without sacrificing clean accuracy, which shows promise for the robustness of machine learning systems.
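To make the defence concrete: to first order, f(x + δ) ≈ f(x) + J(x)δ for a small perturbation δ, where J(x) is the Jacobian of the network's output with respect to its input. The norm of J(x) therefore bounds how far any small perturbation, universal ones included, can move the output, and penalizing that norm during training is the core idea of Jacobian regularization. The sketch below is a minimal PyTorch illustration of such a penalty, not the authors' code; the per-logit Jacobian computation and the weight lam are assumptions made for clarity.

    import torch
    import torch.nn.functional as F

    def loss_with_jacobian_penalty(model, x, y, lam=0.01):
        # Cross-entropy plus a squared-Frobenius-norm Jacobian penalty.
        # lam is an illustrative weight, not a value from the paper.
        x = x.clone().requires_grad_(True)
        logits = model(x)                      # shape: (batch, classes)
        ce = F.cross_entropy(logits, y)
        jac_sq = 0.0
        # Exact Jacobian norm, accumulated one output logit at a time;
        # fine for small label sets. Random-projection estimators from
        # the Jacobian-regularization literature scale better.
        for k in range(logits.shape[1]):
            grad, = torch.autograd.grad(logits[:, k].sum(), x,
                                        create_graph=True)
            jac_sq = jac_sq + grad.pow(2).sum()
        return ce + lam * jac_sq / x.shape[0]  # batch-averaged penalty

In practice lam is tuned to trade clean accuracy against robustness; the paper's contribution is to show, via the bounds above, why shrinking the Jacobian norm specifically suppresses perturbations that transfer across many inputs.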

Kenneth T. Co is supported in part by the DataSpartan research grant DSRD201801.

Author information

Corresponding author

Correspondence to Kenneth T. Co.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Co, K.T., Rego, D.M., Lupu, E.C. (2021). Jacobian Regularization for Mitigating Universal Adversarial Perturbations. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. Lecture Notes in Computer Science, vol 12894. Springer, Cham. https://doi.org/10.1007/978-3-030-86380-7_17

  • DOI: https://doi.org/10.1007/978-3-030-86380-7_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86379-1

  • Online ISBN: 978-3-030-86380-7

  • eBook Packages: Computer Science (R0)
