
Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7988)

Abstract

A machine learning model is said to overfit the training data, relative to a simpler model, if it is more accurate on the training data but less accurate on the test data. Overfitting control (selecting an appropriate model complexity) is a central problem in machine learning. Previous overfitting control methods include penalty methods, which penalize a model for its complexity; cross-validation methods, which empirically determine when overfitting occurs by evaluating models on held-out portions of the training data; and ensemble methods, which reduce the risk of overfitting by combining multiple models. These methods are all eager in that they attempt to control overfitting at training time, and they all aim to improve accuracy averaged over the test data. This paper presents an overfitting control method that is lazy: it attempts to control overfitting at prediction time, separately for each test case. Our results suggest that lazy methods perform well because they exploit the particulars of each test case at prediction time rather than averaging over all possible test cases at training time.
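To make the abstract's two ideas concrete, the sketch below (our illustration, not the authors' algorithm) first checks the overfitting criterion by comparing a deep and a shallow decision tree on training versus test accuracy, then shows one naive lazy alternative: for each test case, score a small set of candidate trees on held-out points near that case and predict with the locally best one. The dataset, depth grid, function names, and nearest-neighbor heuristic are all assumptions made for illustration.

```python
# Illustrative sketch only (our assumptions, not the paper's algorithm):
# eager overfitting check vs. lazy per-test-case complexity selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)
# Hold out part of the training data so candidate models can be scored
# on points they were not fit to.
X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train,
                                              test_size=0.3, random_state=0)

# One candidate model per complexity level (max_depth=None grows a full tree).
depths = [1, 2, 4, 8, None]
trees = {d: DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_fit, y_fit)
         for d in depths}

# The abstract's overfitting criterion: the full tree overfits relative to
# the depth-2 tree if it wins on the training data but loses on the test data.
deep, shallow = trees[None], trees[2]
print("train acc: deep=%.3f shallow=%.3f" %
      (deep.score(X_fit, y_fit), shallow.score(X_fit, y_fit)))
print("test acc:  deep=%.3f shallow=%.3f" %
      (deep.score(X_test, y_test), shallow.score(X_test, y_test)))

def lazy_predict(x, k=25):
    """Pick a complexity per test case: score every candidate tree on the
    k held-out points nearest to x, then predict with the local winner.
    (This neighborhood heuristic is our own assumption.)"""
    near = np.argsort(np.linalg.norm(X_val - x, axis=1))[:k]
    best = max(trees, key=lambda d: trees[d].score(X_val[near], y_val[near]))
    return trees[best].predict(x.reshape(1, -1))[0]

lazy_acc = np.mean([lazy_predict(x) == t for x, t in zip(X_test, y_test)])
print("lazy per-case selection, test acc: %.3f" % lazy_acc)
```

Whether such a local selector beats a single globally chosen complexity depends on how much the best complexity varies across the input space, which is exactly the premise the paper's lazy approach exploits.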





Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Prieditis, A., Sapp, S. (2013). Lazy Overfitting Control. In: Perner, P. (ed.) Machine Learning and Data Mining in Pattern Recognition. MLDM 2013. Lecture Notes in Computer Science (LNAI), vol. 7988. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39712-7_37


  • DOI: https://doi.org/10.1007/978-3-642-39712-7_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-39711-0

  • Online ISBN: 978-3-642-39712-7

