Co-learning of recursive languages from positive data
The present paper deals with the co-learnability of enumerable families L of uniformly recursive languages from positive data. This refers to the following scenario. A family L of target languages, as well as a hypothesis space for it, are specified. The co-learner is eventually fed all positive examples of an unknown target language L chosen from L. The target language L is successfully co-learned iff the co-learner can definitely delete all but one of the possible hypotheses, and the remaining one correctly describes L.
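The deletion scenario can be sketched in code. The following toy simulation is an illustrative assumption, not taken from the paper: it uses a small finite family of pairwise-incomparable languages, so that every wrong hypothesis is eventually refuted by some positive example, and the co-learner's deletions are definitive.

```python
# Toy sketch of co-learning (learning by erasing) from positive data.
# Assumed example: a finite, pairwise-incomparable family, so positive
# data alone eventually refutes every incorrect hypothesis.

def co_learn(family, positive_data):
    """Permanently delete every hypothesis contradicted by the data;
    succeed once exactly one hypothesis survives."""
    alive = set(family)              # indices of hypotheses not yet deleted
    for w in positive_data:
        # A positive example w refutes every language not containing w.
        for i in list(alive):
            if w not in family[i]:
                alive.discard(i)     # deletion is final: i never returns
        if len(alive) == 1:
            return next(iter(alive)) # the surviving hypothesis
    return None                      # not yet converged on this finite text

# Hypothetical family of three pairwise-incomparable languages.
family = {0: {"a", "x"}, 1: {"b", "x"}, 2: {"c", "x"}}
print(co_learn(family, ["x", "a"]))  # -> 0
```

Note that this naive "delete on contradiction" strategy fails for families containing proper inclusions (a superset of the target is never refuted by positive data), which is exactly why the choice of hypothesis space matters in the results below.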
The capabilities of co-learning are investigated depending on the choice of the hypothesis space, and compared to language learning in the limit, conservative learning, and finite learning from positive data. Three settings are distinguished: class preserving learning (L has to be co-learned with respect to some suitably chosen enumeration of all and only the languages from L), class comprising learning (L has to be co-learned with respect to some hypothesis space containing at least all the languages from L), and absolute co-learning (L has to be co-learned with respect to every class preserving hypothesis space for L).
Our results are manifold. First, it is shown that co-learning is exactly as powerful as learning in the limit, provided the hypothesis space is appropriately chosen. However, while learning in the limit is insensitive to the particular choice of the hypothesis space, the power of co-learning crucially depends on it. Therefore, the properties a hypothesis space should have in order to be suitable for co-learning are studied. Finally, a sufficient condition for absolute co-learnability is derived, and absolute co-learning is separated from finite learning.
Keywords: Language Learning, Target Language, Recursive Function, Inductive Inference, Positive Data