
Editors’ Introduction


Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2225)

Abstract

Learning theory is an active research area with contributions from several fields, including artificial intelligence, theoretical computer science, and statistics. Its main thrust is to model learning phenomena in precise ways and to study the mathematical properties of the resulting scenarios. In this way one hopes to gain a better understanding of each learning scenario and of what is possible, or, as we call it, learnable, within it. This naturally goes hand in hand with the study of algorithms that achieve the required performance. Learning theory thus aims to define reasonable models of learning phenomena and to find provably successful algorithms within each such model. To complete the picture, we also seek impossibility results showing that certain things are not learnable within a particular model, irrespective of the learning algorithms or methods employed.
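To make the notion of "learnable" concrete, the following is a minimal sketch in the spirit of the PAC model, using the toy class of threshold functions on [0, 1] (an illustrative example, not one drawn from the papers in this volume): a consistent learner outputs the tightest threshold agreeing with its sample, and its true error shrinks as the sample grows.

```python
import random

def pac_learn_threshold(sample):
    """Return the tightest threshold consistent with the labeled sample.

    Concepts are thresholds: c_t(x) = 1 iff x >= t. A consistent learner
    may output the smallest positive example it has seen (or 1.0 if it
    saw no positive examples at all).
    """
    positives = [x for x, label in sample if label == 1]
    return min(positives) if positives else 1.0

def true_error(h, t, n=100_000, seed=1):
    """Estimate Pr_x[h and t disagree] under the uniform distribution on [0, 1]."""
    rng = random.Random(seed)
    disagreements = sum((x >= h) != (x >= t) for x in (rng.random() for _ in range(n)))
    return disagreements / n

rng = random.Random(0)
t = 0.3  # the unknown target threshold
for m in (10, 1000):
    sample = [(x, int(x >= t)) for x in (rng.random() for _ in range(m))]
    h = pac_learn_threshold(sample)
    print(f"m={m}: hypothesis={h:.4f}, estimated true error={true_error(h, t):.4f}")
```

The hypothesis always satisfies h >= t (every positive example lies at or above the target), so the error region is the interval [t, h), whose probability mass shrinks roughly like 1/m; a matching impossibility result would lower-bound the sample size any learner needs to reach a given accuracy.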




Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Abe, N., Khardon, R., Zeugmann, T. (2001). Editors' Introduction. In: Abe, N., Khardon, R., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2001. Lecture Notes in Computer Science, vol 2225. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45583-3_1


  • Print ISBN: 978-3-540-42875-6

  • Online ISBN: 978-3-540-45583-7
