Stochastic Finite Learning

  • Conference paper

Stochastic Algorithms: Foundations and Applications (SAGA 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2264)

Abstract

Recently, we have developed a learning model, called stochastic finite learning, that establishes a connection between concepts from PAC learning and inductive inference. The motivation for this work is as follows. Within Gold’s (1967) model of learning in the limit, many important learning problems can be formalized, and it can be shown that they are algorithmically solvable in principle. However, since a limit learner is only required to converge, one never knows at any particular learning stage whether or not it has already been successful. Such uncertainty may not be acceptable in many applications. The present paper surveys this new approach to overcoming that uncertainty, which potentially has a wide range of applicability.
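
To make the contrast with limit learning concrete, the following minimal sketch (Python, illustrative only and not taken from the paper) shows one way a stochastic finite learner can wrap a given limit learner: assuming an a priori bound on the expected number of examples the limit learner needs before it converges on data drawn from the underlying distribution, Markov's inequality yields a sample size after which convergence has occurred with probability at least 1 − δ, so a single final hypothesis can be output. The names stochastic_finite_learn, limit_learner, and expected_sample_bound are assumptions made for this illustration.

```python
import math
from typing import Callable, Iterator, List, TypeVar

H = TypeVar("H")  # hypothesis type
X = TypeVar("X")  # example type


def stochastic_finite_learn(
    examples: Iterator[X],
    limit_learner: Callable[[List[X]], H],
    expected_sample_bound: float,
    delta: float,
) -> H:
    """Sketch of a stochastic finite learner (illustrative assumption, not the paper's algorithm).

    Assumes expected_sample_bound upper-bounds the expected number of randomly
    drawn examples after which limit_learner has converged, i.e. its hypothesis
    no longer changes.  By Markov's inequality, drawing
    m >= expected_sample_bound / delta examples suffices for convergence with
    probability at least 1 - delta, so the learner can commit to a single
    hypothesis instead of producing an infinite sequence of guesses.
    """
    if not 0.0 < delta < 1.0:
        raise ValueError("delta must lie strictly between 0 and 1")
    m = math.ceil(expected_sample_bound / delta)   # Markov-based sample size
    sample = [next(examples) for _ in range(m)]    # draw the required examples
    return limit_learner(sample)                   # output one final hypothesis
```

With sharper, exponentially decreasing tail bounds on the convergence time, as established for the pattern language and monomial learners cited below, the required sample size typically grows only logarithmically in 1/δ rather than linearly.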

References

  1. D. Angluin, Finding Patterns common to a Set of Strings, Journal of Computer and System Sciences 21 (1980), 46–62.

  2. A. Blumer, A. Ehrenfeucht, D. Haussler and M. Warmuth, Learnability and the Vapnik-Chervonenkis Dimension, Journal of the ACM 36 (1989), 929–965.

  3. J. Case, S. Jain, S. Lange and T. Zeugmann, Incremental Concept Learning for Bounded Data Mining, Information and Computation 152, No. 1, 1999, 74–110.

  4. R. Daley and C.H. Smith, On the Complexity of Inductive Inference, Information and Control 69 (1986), 12–40.

  5. T. Erlebach, P. Rossmanith, H. Stadtherr, A. Steger and T. Zeugmann, Learning one-variable pattern languages very efficiently on average, in parallel, and by asking queries, Theoretical Computer Science 261, No. 1–2, 2001, 119–156.

  6. E.M. Gold, Language identification in the limit, Information and Control 10 (1967), 447–474.

  7. S.A. Goldman, M.J. Kearns and R.E. Schapire, Exact identification of circuits using fixed points of amplification functions, SIAM Journal on Computing 22, 1993, 705–726.

  8. D. Haussler, Bias, version spaces and Valiant’s learning framework, in “Proc. 8th National Conference on Artificial Intelligence,” pp. 564–569, Morgan Kaufmann, 1987.

  9. D. Haussler, M. Kearns, N. Littlestone and M.K. Warmuth, Equivalence of models for polynomial learnability, Information and Computation 95 (1991), 129–161.

  10. M. Kearns and L. Pitt, A polynomial-time algorithm for learning k-variable pattern languages from examples, in “Proc. Second Annual ACM Workshop on Computational Learning Theory,” pp. 57–71, Morgan Kaufmann, 1989.

  11. S. Lange and R. Wiehagen, Polynomial-time inference of arbitrary pattern languages, New Generation Computing 8 (1991), 361–370.

  12. S. Lange and T. Zeugmann, Set-driven and Rearrangement-independent Learning of Recursive Languages, Mathematical Systems Theory 29 (1996), 599–634.

  13. S. Lange and T. Zeugmann, Incremental Learning from Positive Data, Journal of Computer and System Sciences 53 (1996), 88–103.

  14. A. Mitchell, A. Sharma, T. Scheffer and F. Stephan, The VC-dimension of Subclasses of Pattern Languages, in “Proc. 10th International Conference on Algorithmic Learning Theory” (O. Watanabe and T. Yokomori, Eds.), Lecture Notes in Artificial Intelligence, Vol. 1720, pp. 93–105, Springer-Verlag, Berlin, 1999.

  15. L. Pitt, Inductive Inference, DFAs and Computational Complexity, in “Proc. 2nd Int. Workshop on Analogical and Inductive Inference” (K.P. Jantke, Ed.), Lecture Notes in Artificial Intelligence, Vol. 397, pp. 18–44, Springer-Verlag, Berlin, 1989.

  16. R. Reischuk and T. Zeugmann, Learning One-Variable Pattern Languages in Linear Average Time, in “Proc. 11th Annual Conference on Computational Learning Theory (COLT’98),” July 24th–26th, Madison, pp. 198–208, ACM Press, 1998.

  17. R. Reischuk and T. Zeugmann, A Complete and Tight Average-Case Analysis of Learning Monomials, in “Proc. 16th International Symposium on Theoretical Aspects of Computer Science” (C. Meinel and S. Tison, Eds.), Lecture Notes in Computer Science, Vol. 1563, pp. 414–423, Springer-Verlag, Berlin, 1999.

  18. R. Reischuk and T. Zeugmann, An Average-Case Optimal One-Variable Pattern Language Learner, Journal of Computer and System Sciences 60, No. 2, 2000, 302–335.

  19. P. Rossmanith and T. Zeugmann, Stochastic Finite Learning of the Pattern Languages, Machine Learning 44, No. 1–2, 2001, 67–91.

  20. L.G. Valiant, A Theory of the Learnable, Communications of the ACM 27 (1984), 1134–1142.

  21. R. Wiehagen and T. Zeugmann, Ignoring Data may be the only Way to Learn Efficiently, Journal of Experimental and Theoretical Artificial Intelligence 6 (1994), 131–144.

  22. T. Zeugmann, Lange and Wiehagen’s Pattern Language Learning Algorithm: An Average-case Analysis with respect to its Total Learning Time, Annals of Mathematics and Artificial Intelligence 23, No. 1–2, 1998, 117–145.

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zeugmann, T. (2001). Stochastic Finite Learning. In: Steinhöfel, K. (eds) Stochastic Algorithms: Foundations and Applications. SAGA 2001. Lecture Notes in Computer Science, vol 2264. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45322-9_11

  • DOI: https://doi.org/10.1007/3-540-45322-9_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43025-4

  • Online ISBN: 978-3-540-45322-2
