Minds and Machines, Volume 20, Issue 2, pp 291–301

Varieties of Justification in Machine Learning

David Corfield

Abstract

Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to introducing some of these techniques, and their justificatory guarantees, to philosophers, and to initiating a discussion of whether they must be treated separately or can instead be viewed consistently within a single framework.

Keywords

Induction · Guarantee · Kernel methods · Bayesian

Acknowledgments

The author thanks the Max Planck Society for supporting his research for this paper.

Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

SECL, University of Kent, Canterbury, UK