
Neural networks

Deconstructing the generalization gap

News & Views

From Nature Machine Intelligence

New research reveals a duality between neural network weights and neuron activities that enables a geometric decomposition of the generalization gap. The framework provides a way to interpret the effects of regularization schemes such as stochastic gradient descent and dropout on generalization — and to improve upon these methods.
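The full analysis sits behind the paywall, but the two ingredients named in this summary, the generalization gap and regularization by SGD and dropout, can be made concrete with a small sketch. The following is a minimal NumPy example, not the method discussed in the article; the toy two-layer network, synthetic data and hyperparameters are illustrative assumptions. It trains with mini-batch SGD, applies inverted dropout to the hidden-neuron activities, and reports the generalization gap as the difference between test and train loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task; the "generalization gap" below is simply the
# difference between test loss and train loss of the fitted model.
n_train, n_test, d_in, d_hidden = 64, 1000, 20, 50
w_true = rng.normal(size=d_in)
X_train = rng.normal(size=(n_train, d_in))
y_train = X_train @ w_true + 0.1 * rng.normal(size=n_train)
X_test = rng.normal(size=(n_test, d_in))
y_test = X_test @ w_true + 0.1 * rng.normal(size=n_test)

# Two-layer ReLU network with weight matrices W1 and W2.
W1 = rng.normal(size=(d_in, d_hidden)) / np.sqrt(d_in)
W2 = rng.normal(size=d_hidden) / np.sqrt(d_hidden)

def predict(X, W1, W2):
    """Deterministic forward pass (no dropout), used for evaluation."""
    return np.maximum(X @ W1, 0.0) @ W2

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

# Mini-batch SGD with inverted dropout on the hidden activities.
lr, drop_p, n_steps, batch = 0.01, 0.2, 3000, 16
for step in range(n_steps):
    idx = rng.choice(n_train, size=batch, replace=False)
    Xb, yb = X_train[idx], y_train[idx]
    pre = Xb @ W1                                   # pre-activations
    h = np.maximum(pre, 0.0)                        # ReLU neuron activities
    mask = (rng.random(h.shape) > drop_p) / (1.0 - drop_p)
    h_drop = h * mask                               # randomly silence neurons
    pred = h_drop @ W2
    err = (pred - yb) * (2.0 / batch)               # d(MSE)/d(pred)
    grad_W2 = h_drop.T @ err
    dh = np.outer(err, W2) * mask * (pre > 0.0)     # backprop through dropout and ReLU
    grad_W1 = Xb.T @ dh
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# Generalization gap = test loss minus train loss after training.
train_loss = mse(predict(X_train, W1, W2), y_train)
test_loss = mse(predict(X_test, W1, W2), y_test)
print(f"train MSE {train_loss:.3f} | test MSE {test_loss:.3f} | gap {test_loss - train_loss:.3f}")
```

Rerunning the sketch with drop_p = 0.0 gives a no-dropout baseline against which the effect of the regularizer on the gap can be compared on this toy problem.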

Fig. 1: Two methods to reduce the generalization gap.

References

  1. Neyshabur, B., Tomioka, R. & Srebro, N. Preprint at https://doi.org/10.48550/arXiv.1412.6614 (2014).

  2. Zhang, C., Bengio, S., Hardt, M., Recht, B. & Vinyals, O. In Int. Conf. Learning Representations (ICLR) https://openreview.net/forum?id=Sy8gdB9xx (2017).

  3. Feng, Y., Zhang, W. & Tu, Y. Nat. Mach. Intell. 5, 908–918 (2023).

Author information

Correspondence to Andrey Gromov.

Ethics declarations

Competing interests

The author declares no competing interests.

About this article

Cite this article

Gromov, A. Deconstructing the generalization gap. Nat Mach Intell 5, 1340–1341 (2023). https://doi.org/10.1038/s42256-023-00766-7
