Learning Recursive Concepts with Anomalies

  • Gunter Grieser
  • Steffen Lange
  • Thomas Zeugmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1968)

Abstract

This paper provides a systematic study of inductive inference of indexable concept classes in learning scenarios in which the learner is successful if its final hypothesis describes a finite variant of the target concept, henceforth called learning with anomalies. As usual, we distinguish between learning from positive data only and learning from positive and negative data.
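To make the notion of a "finite variant" concrete, the following minimal Python sketch checks whether two concepts over the natural numbers disagree on at most a given number of elements. It is purely illustrative and not taken from the paper: the function names, the restriction to a bounded test domain, and the example concepts are all assumptions made for this sketch.

```python
# Illustrative sketch only: concepts are 0/1 predicates over the naturals,
# and we test agreement on a bounded, finite domain.

def symmetric_difference(target, hypothesis, domain):
    """Elements of `domain` on which the two concepts disagree."""
    return [x for x in domain if target(x) != hypothesis(x)]

def is_finite_variant(target, hypothesis, domain, max_anomalies=None):
    """True if hypothesis and target differ on at most `max_anomalies`
    elements of the finite test domain; `None` stands for the case of a
    finite but not a priori bounded number of anomalies."""
    diff = symmetric_difference(target, hypothesis, domain)
    return True if max_anomalies is None else len(diff) <= max_anomalies

# Example: the target is the even numbers, the hypothesis additionally
# contains the single anomaly 3.
target = lambda x: x % 2 == 0
hypothesis = lambda x: x % 2 == 0 or x == 3
print(is_finite_variant(target, hypothesis, range(100), max_anomalies=1))  # True
```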

We investigate the following learning models: finite identification, conservative inference, set-driven learning, and behaviorally correct learning. In general, we focus our attention on the case that the number of allowed anomalies is finite but not a priori bounded. However, we also present a few sample results that address the special case of learning with an a priori bounded number of anomalies. We provide characterizations of the corresponding models of learning with anomalies in terms of finite tell-tale sets. The observed variations in the degree of recursiveness of the relevant tell-tale sets already suffice to quantify the differences between the corresponding models of learning with anomalies.
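For readers unfamiliar with tell-tale sets, the following Python sketch checks Angluin's finite tell-tale condition for a small, explicitly listed indexable family: a finite set T ⊆ L_i is a tell-tale for L_i if no member L_j of the family satisfies T ⊆ L_j ⊊ L_i. The representation of concepts as finite Python sets and the toy family are assumptions for illustration, not the paper's construction.

```python
# Illustrative sketch only: check the tell-tale condition within an
# explicitly listed family of finite sets.

def is_tell_tale(candidate, index, family):
    """True if `candidate` is a tell-tale set for family[index]."""
    L_i = family[index]
    if not candidate <= L_i:          # a tell-tale must be a subset of L_i
        return False
    for L_j in family:
        if candidate <= L_j and L_j < L_i:   # T ⊆ L_j and L_j is a proper subset of L_i
            return False
    return True

# Toy family of concepts: {0}, {0,1}, {0,1,2}
family = [{0}, {0, 1}, {0, 1, 2}]
print(is_tell_tale({0, 1, 2}, 2, family))  # True: no proper subset in the family contains it
print(is_tell_tale({0}, 2, family))        # False: {0} is contained in {0,1}, a proper subset of {0,1,2}
```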

In addition, we study variants of incremental learning and derive a complete picture of how all the models of learning with and without anomalies mentioned above relate to one another.
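The defining feature of incremental (iterative) learning is that the learner may consult only its previous hypothesis and the current datum, never the full history of examples. The sketch below is an assumed toy instance, not the paper's learners: it learns initial segments {0, ..., n} from positive data by remembering only the largest element seen so far.

```python
# Illustrative sketch only: an iterative learner whose next hypothesis
# depends solely on the previous hypothesis and the current example.

def iterative_learner(previous_hypothesis, example):
    """Conjecture the segment {0, ..., m} where m is the largest element seen."""
    return max(previous_hypothesis, example)

# Positive data for the target {0, ..., 7}, presented in arbitrary order.
hypothesis = 0
for x in [3, 0, 7, 2, 5]:
    hypothesis = iterative_learner(hypothesis, x)
print(hypothesis)  # 7, i.e. the learner conjectures the segment {0, ..., 7}
```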



Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Gunter Grieser (1)
  • Steffen Lange (2)
  • Thomas Zeugmann (3)
  1. Technische Universität Darmstadt, Fachbereich Informatik, Darmstadt, Germany
  2. Universität Leipzig, Institut für Informatik, Leipzig, Germany
  3. Medizinische Universität Lübeck, Institut für Theoretische Informatik, Lübeck, Germany
