
Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13681)

Abstract

It is a highly desirable property for deep networks to be robust against small input changes. One popular way to achieve this property is by designing networks with a small Lipschitz constant. In this work, we propose a new technique for constructing such Lipschitz networks that has a number of desirable properties: it can be applied to any linear network layer (fully-connected or convolutional), it provides formal guarantees on the Lipschitz constant, it is easy to implement and efficient to run, and it can be combined with any training objective and optimization method. In fact, our technique is the first one in the literature that achieves all of these properties simultaneously.

Our main contribution is a rescaling-based weight matrix parametrization that guarantees each network layer a Lipschitz constant of at most 1 and causes the learned weight matrices to be close to orthogonal. Hence we call such layers almost-orthogonal Lipschitz (AOL). Experiments and ablation studies in the context of image classification with certified robust accuracy confirm that AOL layers achieve results on par with most existing methods. Yet they are simpler to implement and more broadly applicable, because they do not require computationally expensive matrix orthogonalization or inversion steps as part of the network architecture.

We provide code at https://github.com/berndprach/AOL.
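The rescaling idea from the abstract can be illustrated with a minimal NumPy sketch. The specific formula below (rescale the columns of W by D with D_ii = (Σ_j |WᵀW|_ij)^(−1/2), so that P = WD has spectral norm at most 1) is our reading of the AOL parametrization, not a verbatim excerpt from the paper; see the linked repository for the authors' implementation.

```python
import numpy as np

def aol_rescale(W):
    # Diagonal rescaling: P = W @ D with D_ii = (sum_j |W^T W|_ij)^(-1/2).
    # The rescaled matrix P has spectral norm at most 1, so the linear map
    # x -> P x is 1-Lipschitz for any initial weight matrix W.
    T = np.abs(W.T @ W)               # entrywise absolute value of W^T W
    d = 1.0 / np.sqrt(T.sum(axis=1))  # per-column rescaling factors
    return W * d                      # scales column i of W by d_i

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
P = aol_rescale(W)
print(np.linalg.norm(P, ord=2))  # spectral norm, guaranteed <= 1
```

Because the rescaling is a closed-form function of W, it can be applied on every forward pass during training without any iterative orthogonalization or matrix inversion, which is what makes the approach cheap compared to orthogonality-enforcing alternatives.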



Author information

Corresponding author: Bernd Prach.

Electronic supplementary material

Supplementary material 1 (pdf 359 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Prach, B., Lampert, C.H. (2022). Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13681. Springer, Cham. https://doi.org/10.1007/978-3-031-19803-8_21

  • Print ISBN: 978-3-031-19802-1

  • Online ISBN: 978-3-031-19803-8