On monotonic strategies for learning r.e. languages

  • Sanjay Jain
  • Arun Sharma
Algorithmic Learning Theory: Selected Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 872)

Abstract

Overgeneralization is a major issue in identification of grammars for formal languages from positive data. Different formulations of monotonic strategies have been proposed to address this problem and recently there has been a flurry of activity investigating such strategies in the context of indexed families of recursive languages.

The present paper studies the power of these strategies to learn recursively enumerable languages from positive data. In particular, the power of strong monotonic, monotonic, and weak monotonic strategies (together with their dual notions modeling specialization) is investigated for identification of r.e. languages. These investigations turn out to differ from the previous investigations on learning indexed families of recursive languages and at times require new proof techniques.
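For orientation, the monotonicity constraints named above can be stated in the form standard in this literature (the notation here is ours, not quoted from the paper: $h_i$ and $h_j$, with $i \le j$, are successive conjectures of the learner on a text for the target language $L$; $W_h$ is the language generated by grammar $h$; and $\sigma_j$ is the initial segment of the text seen when $h_j$ is output):

```latex
\begin{align*}
\text{strong monotonic:} &\quad W_{h_i} \subseteq W_{h_j} \\
\text{monotonic:}        &\quad W_{h_i} \cap L \subseteq W_{h_j} \cap L \\
\text{weak monotonic:}   &\quad \mathrm{content}(\sigma_j) \subseteq W_{h_i}
                           \;\Longrightarrow\; W_{h_i} \subseteq W_{h_j}
\end{align*}
```

The dual notions, modeling specialization, are obtained by reversing the inclusions; the paper's own definitions should be consulted for the precise variants studied.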

A complete picture is provided for the relative power of each of the strategies considered. An interesting consequence is that the power of weak monotonic strategies is equivalent to that of conservative strategies. This result parallels the scenario for indexed classes of recursive languages. It is also shown that any identifiable collection of r.e. languages can be identified by a strategy exhibiting the dual of the weak monotonic property.
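As a toy illustration of strong monotonicity (ours, not an example from the paper): a learner that conjectures exactly the finite set of data seen so far never retracts any element, so its conjectures grow under inclusion, and on any text for a finite language it converges to that language.

```python
def learner(text_prefix):
    """Conjecture the finite language consisting of the data seen so far.

    Successive conjectures grow under set inclusion (strong monotonicity),
    and on any text for a finite language L they converge to L.
    """
    return frozenset(text_prefix)

# Simulate learning on a text (an enumeration with repetitions) for L = {1, 2, 3}.
text = [1, 2, 1, 3, 2, 3, 3]
hyps = [learner(text[:n + 1]) for n in range(len(text))]

# Strong monotonicity: each conjecture contains the previous one.
assert all(hyps[i] <= hyps[i + 1] for i in range(len(hyps) - 1))
# Convergence in the limit: the final conjecture equals L.
assert hyps[-1] == {1, 2, 3}
```

Of course, this learner only handles finite languages; the interest of the paper lies in what such constraints permit and forbid over classes of r.e. languages.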

Keywords

Language Learning, Formal Language, Inductive Inference, Positive Data, Strong Monotonicity

References

  1. [Ang80]
    D. Angluin. Inductive inference of formal languages from positive data. Information and Control, 45:117–135, 1980.
  2. [BB75]
    L. Blum and M. Blum. Toward a mathematical theory of inductive inference. Information and Control, 28:125–155, 1975.
  3. [Blu67]
    M. Blum. A machine-independent theory of the complexity of recursive functions. Journal of the ACM, 14:322–336, 1967.
  4. [Ful90]
    M. Fulk. Prudence and other conditions on formal language learning. Information and Computation, 85:1–11, 1990.
  5. [Gol67]
    E. M. Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.
  6. [HU79]
    J. Hopcroft and J. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley Publishing Company, 1979.
  7. [Jan91]
    K. P. Jantke. Monotonic and non-monotonic inductive inference. New Generation Computing, 8:349–360, 1991.
  8. [JS89]
    S. Jain and A. Sharma. Recursion theoretic characterizations of language learning. Technical Report 281, University of Rochester, 1989.
  9. [JS94]
    S. Jain and A. Sharma. Characterizing language learning by standardizing operations. Journal of Computer and System Sciences, 1994. To appear.
  10. [Kap92]
    S. Kapur. Monotonic language learning. In Proceedings of the Third Workshop on Algorithmic Learning Theory. JSAI Press, 1992. Proceedings reprinted as Lecture Notes in Artificial Intelligence, Springer-Verlag.
  11. [Kap93]
    S. Kapur. Uniform characterizations of various kinds of language learning. In Proceedings of the Fourth International Workshop on Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence 744. Springer-Verlag, 1993.
  12. [KB92]
    S. Kapur and G. Bilardi. Language learning without overgeneralization. In Proceedings of the Ninth Annual Symposium on Theoretical Aspects of Computer Science, Lecture Notes in Computer Science 577. Springer-Verlag, Berlin, 1992.
  13. [Kin94]
    E. Kinber. Monotonicity versus efficiency for learning languages from texts. Technical Report 94-22, Department of Computer and Information Sciences, University of Delaware, 1994.
  14. [KS]
    E. Kinber and F. Stephan. Language learning from texts: Mind changes, limited memory and monotonicity. Private communication. Manuscript.
  15. [LZ92a]
    S. Lange and T. Zeugmann. Monotonic language learning on informant. Technical Report 11/92, GOSLER-Report, FB Mathematik und Informatik, TH Leipzig, 1992.
  16. [LZ92b]
    S. Lange and T. Zeugmann. Types of monotonic language learning and their characterization. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, Pennsylvania, pages 377–390. ACM Press, 1992.
  17. [LZ93a]
    S. Lange and T. Zeugmann. Language learning with bounded number of mind changes. In Proceedings of the Tenth Annual Symposium on Theoretical Aspects of Computer Science, Lecture Notes in Computer Science 665, pages 682–691. Springer-Verlag, Berlin, 1993.
  18. [LZ93b]
    S. Lange and T. Zeugmann. Monotonic versus non-monotonic language learning. In Proceedings of the Second International Workshop on Nonmonotonic and Inductive Logic, Lecture Notes in Artificial Intelligence 659, pages 254–269. Springer-Verlag, Berlin, 1993.
  19. [LZ93c]
    S. Lange and T. Zeugmann. On the impact of order independence to the learnability of recursive languages. Technical Report ISIS-RR-93-17E, Institute for Social Information Science Research Report, Fujitsu Laboratories Ltd., 1993.
  20. [LZK92]
    S. Lange, T. Zeugmann, and S. Kapur. Class preserving monotonic language learning. Technical Report 14/92, GOSLER-Report, FB Mathematik und Informatik, TH Leipzig, 1992.
  21. [MA93]
    Y. Mukouchi and S. Arikawa. Inductive inference machines that can refute hypothesis spaces. In Proceedings of the Fourth International Workshop on Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence 744, pages 123–136. Springer-Verlag, Berlin, 1993.
  22. [Muk92a]
    Y. Mukouchi. Characterization of finite identification. In Proceedings of the Third International Workshop on Analogical and Inductive Inference, Dagstuhl Castle, Germany, pages 260–267, October 1992.
  23. [Muk92b]
    Y. Mukouchi. Inductive inference with bounded mind changes. In Proceedings of the Third Workshop on Algorithmic Learning Theory, pages 125–134. JSAI Press, 1992. Proceedings reprinted as Lecture Notes in Artificial Intelligence, Springer-Verlag.
  24. [MY78]
    M. Machtey and P. Young. An Introduction to the General Theory of Algorithms. North-Holland, New York, 1978.
  25. [OSW82]
    D. Osherson, M. Stob, and S. Weinstein. Learning strategies. Information and Control, 53:32–51, 1982.
  26. [Rog58]
    H. Rogers. Gödel numberings of partial recursive functions. Journal of Symbolic Logic, 23:331–341, 1958.
  27. [Rog67]
    H. Rogers. Theory of Recursive Functions and Effective Computability. McGraw-Hill, New York, 1967. Reprinted, MIT Press, 1987.
  28. [TB70]
    B. Trakhtenbrot and J. M. Barzdin. Konetschnyje Awtomaty (Powedenie i Sintez) (in Russian). Nauka, Moscow, 1970. English translation: Finite Automata: Behavior and Synthesis, Fundamental Studies in Computer Science 1, North-Holland, Amsterdam, 1975.
  29. [Wie90]
    R. Wiehagen. A thesis in inductive inference. In Nonmonotonic and Inductive Logic, 1st International Workshop, Karlsruhe, Germany, pages 184–207. Springer-Verlag, 1990. Lecture Notes in Computer Science 543.

Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Sanjay Jain (1)
  • Arun Sharma (2)

  1. Department of Information Systems and Computer Science, National University of Singapore, Singapore, Republic of Singapore
  2. School of Computer Science and Engineering, The University of New South Wales, Sydney, Australia