
Early Stopping - But When?

  • Lutz Prechelt
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1524)

Abstract

Validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped before convergence to avoid the overfitting ("early stopping"). The exact criterion used for validation-based early stopping, however, is usually chosen in an ad-hoc fashion, or training is stopped interactively. This trick describes how to select a stopping criterion in a systematic fashion; it is a trick for either speeding up learning procedures or improving generalization, whichever is more important in the particular situation. An empirical investigation on multi-layer perceptrons shows that there exists a tradeoff between training time and generalization: from the given mix of 1296 training runs using 12 different problems and 24 different network architectures, I conclude that slower stopping criteria allow for small improvements in generalization (here: about 4% on average), but cost much more training time (here: about a factor of 4 longer on average).
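For concreteness, the following is a minimal Python sketch (not taken from the chapter itself) of a validation-based early-stopping loop with one simple stopping criterion of the kind investigated here: stop once the validation error has risen a fixed percentage above the best value observed so far. The callbacks train_epoch and validation_error, the threshold name alpha, and the epoch limit max_epochs are hypothetical placeholders introduced only for illustration.

    # Minimal sketch of validation-based early stopping (assumptions as noted above).
    # Criterion: stop when the generalization loss (the relative increase of the
    # validation error over the best value seen so far) exceeds alpha percent.
    def early_stopping_train(train_epoch, validation_error, alpha=5.0, max_epochs=1000):
        best_val = float("inf")   # lowest validation error observed so far
        best_epoch = 0            # epoch at which it was observed (checkpoint here)
        for epoch in range(1, max_epochs + 1):
            train_epoch()                     # hypothetical: run one epoch of training
            val = validation_error()          # hypothetical: error on the validation set
            if val < best_val:
                best_val, best_epoch = val, epoch
            # relative increase over the best value so far, in percent
            gl = 100.0 * (val / max(best_val, 1e-12) - 1.0)
            if gl > alpha:
                break                         # overfitting detected: stop training
        return best_epoch, best_val

A larger alpha corresponds to a slower criterion: training continues longer, which, in the tradeoff described above, may buy a small improvement in generalization at a substantial cost in training time.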

Keywords

Training Time, Neural Information Processing System, Generalization Error, Validation Error, Early Stopping
(These keywords were machine-generated, not chosen by the authors.)


References

  1. S. Amari, N. Murata, K.-R. Müller, M. Finke, and H. Yang. Statistical theory of overtraining: is cross-validation effective? In [23], pages 176–182, 1996.
  2. S. Amari, N. Murata, K.-R. Müller, M. Finke, and H. Yang. Asymptotic statistical theory of overtraining and cross-validation. IEEE Transactions on Neural Networks, 8(5):985–996, September 1997.
  3. P. Baldi and Y. Chauvin. Temporal evolution of generalization during learning in linear networks. Neural Computation, 3:589–603, 1991.
  4. J. D. Cowan, G. Tesauro, and J. Alspector, editors. Advances in Neural Information Processing Systems 6, San Mateo, CA, 1994. Morgan Kaufmann Publishers Inc.
  5. Y. Le Cun, J. S. Denker, and S. A. Solla. Optimal brain damage. In [22], pages 598–605, 1990.
  6. S. E. Fahlman. An empirical study of learning speed in back-propagation networks. Technical Report CMU-CS-88-162, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, September 1988.
  7. S. E. Fahlman and C. Lebiere. The Cascade-Correlation learning architecture. In [22], pages 524–532, 1990.
  8. E. Fiesler (efiesler@idiap.ch). Comparative bibliography of ontogenic neural networks. Submitted for publication, 1994.
  9. W. Finnoff, F. Hergert, and H. G. Zimmermann. Improving model selection by nonconvergent methods. Neural Networks, 6:771–783, 1993.
  10. S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4:1–58, 1992.
  11. S. J. Hanson, J. D. Cowan, and C. L. Giles, editors. Advances in Neural Information Processing Systems 5, San Mateo, CA, 1993. Morgan Kaufmann Publishers Inc.
  12. B. Hassibi and D. G. Stork. Second order derivatives for network pruning: Optimal Brain Surgeon. In [11], pages 164–171, 1993.
  13. A. Krogh and J. A. Hertz. A simple weight decay can improve generalization. In [16], pages 950–957, 1992.
  14. A. U. Levin, T. K. Leen, and J. E. Moody. Fast pruning using principal components. In [4], 1994.
  15. R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors. Advances in Neural Information Processing Systems 3, San Mateo, CA, 1991. Morgan Kaufmann Publishers Inc.
  16. J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors. Advances in Neural Information Processing Systems 4, San Mateo, CA, 1992. Morgan Kaufmann Publishers Inc.
  17. N. Morgan and H. Bourlard. Generalization and parameter estimation in feedforward nets: some experiments. In [22], pages 630–637, 1990.
  18. S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4):473–493, 1992.
  19. L. Prechelt. PROBEN1: A set of benchmarks and benchmarking rules for neural network training algorithms. Technical Report 21/94, Fakultät für Informatik, Universität Karlsruhe, Germany, September 1994. Anonymous FTP: /pub/papers/techreports/1994/1994-21.ps.gz on ftp.ira.uka.de.
  20. R. Reed. Pruning algorithms: a survey. IEEE Transactions on Neural Networks, 4(5):740–746, 1993.
  21. M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: the RPROP algorithm. In Proc. of the IEEE Intl. Conf. on Neural Networks, pages 586–591, San Francisco, CA, April 1993.
  22. D. S. Touretzky, editor. Advances in Neural Information Processing Systems 2, San Mateo, CA, 1990. Morgan Kaufmann Publishers Inc.
  23. D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors. Advances in Neural Information Processing Systems 8, Cambridge, MA, 1996. MIT Press.
  24. C. Wang, S. S. Venkatesh, and J. S. Judd. Optimal stopping and effective machine complexity in learning. In [4], 1994.
  25. A. S. Weigend, D. E. Rumelhart, and B. A. Huberman. Generalization by weight-elimination with application to forecasting. In [15], pages 875–882, 1991.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Lutz Prechelt
  1. Fakultät für Informatik, Universität Karlsruhe, Karlsruhe, Germany
