Abstract
Learnability of families of recursive languages from positive data is studied in the Gold paradigm of inductive inference, where the learner obeys certain constraints motivated by work in inductive reasoning. Previously, various notions of monotonicity have been defined in the context of language learning; these constraints require that the learner's guesses monotonically ‘improve’ with respect to the target language. In this paper, the ideas from inductive reasoning are instantiated in alternative ways. Relationships are established among the new constraints themselves as well as between them and other well-known constraints, such as conservativeness. Exactly learnable families are characterized for prudent learners that obey various combinations of these constraints, and applications of these characterizations are shown.
This work was supported in part by ARO grant DAAL 03-89-C-0031, DARPA grant N00014-90-J-1863, NSF grant IRI 90-16592 and Ben Franklin grant 91S.3078C-1.
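For orientation, the monotonicity notions the abstract refers to are not defined on this page; the following is a sketch of the standard formalizations used in the cited work of Jantke and of Lange and Zeugmann, and the precise variants studied in the paper may differ. For a learner that outputs hypotheses h_1, h_2, ... on a text for a target language L, with L(h) the language described by hypothesis h:

\begin{align*}
\text{strong-monotonic:}\ & L(h_i) \subseteq L(h_j) \quad \text{for all } i \le j;\\
\text{monotonic:}\ & L(h_i) \cap L \subseteq L(h_j) \cap L \quad \text{for all } i \le j;\\
\text{weak-monotonic:}\ & \text{if every datum seen up to step } j \text{ lies in } L(h_i), \text{ then } L(h_i) \subseteq L(h_j), \text{ for all } i \le j;\\
\text{conservative:}\ & \text{if every datum seen up to step } n+1 \text{ lies in } L(h_n), \text{ then } h_{n+1} = h_n.
\end{align*}

Under these formulations, strong monotonicity implies the other two monotonicity notions, and a conservative learner is automatically weak-monotonic, which is in line with the links to conservativeness mentioned in the abstract.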
References
Dana Angluin. Inductive inference of formal languages from positive data. Information and Control, 45:117–135, 1980.
Robert Berwick. The Acquisition of Syntactic Knowledge. MIT Press, Cambridge, MA, 1985.
E. M. Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.
Klaus P. Jantke. Monotonic and non-monotonic inductive inference. New Generation Computing, 8:349–360, 1991.
Shyam Kapur. Computational Learning of Languages. PhD thesis, Cornell University, September 1991. Technical Report 91-1234.
Shyam Kapur and Gianfranco Bilardi. Language learning without overgeneralization. In Proceedings of the 9th Symposium on Theoretical Aspects of Computer Science (Lecture Notes in Computer Science 577), pages 245–256. Springer-Verlag, 1992.
Shyam Kapur and Gianfranco Bilardi. On uniform learnability of language families. Information Processing Letters, 1992. To appear.
Steffen Lange and Thomas Zeugmann. Monotonic versus non-monotonic language learning. In Proceedings of the 2nd International Workshop on Nonmonotonic and Inductive Logic (Lecture Notes in Artificial Intelligence Series), 1991.
Steffen Lange and Thomas Zeugmann. Types of monotonic language learning and their characterization. In Proceedings of the 5th Conference on Computational Learning Theory. Morgan Kaufmann, 1992.
Steffen Lange, Thomas Zeugmann, and Shyam Kapur. Class preserving monotonic language learning. In preparation, 1992.
Tatsuya Motoki. Consistent, responsive and conservative inference from positive data. In Proceedings of the LA Symposium, pages 55–60, 1990.
Yasuhito Mukouchi. Characterization of finite identification. To appear in AII'92, 1992.
Masako Sato and Kazutaka Umayahara. Inductive inferability for formal languages from positive data. In Proceedings of the Workshop on Algorithmic Learning Theory. JSAI, 1991.
E. Y. Shapiro. Inductive inference of theories from facts. Technical Report 192, Yale University, 1981.
Rolf Wiehagen. A thesis in inductive inference. In Proceedings of the 1st International Workshop on Nonmonotonic and Inductive Logic (Lecture Notes in Artificial Intelligence 543). Springer-Verlag, 1991.
Thomas Zeugmann, Steffen Lange, and Shyam Kapur. Characterizations of class preserving monotonic language learning. In preparation, 1992.
Copyright information
© 1993 Springer-Verlag Berlin Heidelberg
Cite this paper
Kapur, S. (1993). Monotonic language learning. In: Doshita, S., Furukawa, K., Jantke, K.P., Nishida, T. (eds) Algorithmic Learning Theory. ALT 1992. Lecture Notes in Computer Science, vol 743. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57369-0_35
DOI: https://doi.org/10.1007/3-540-57369-0_35
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-57369-2
Online ISBN: 978-3-540-48093-8