Learning Recursive Concepts with Anomalies
This paper provides a systematic study of inductive inference of indexable concept classes in learning scenarios in which the learner is successful if its final hypothesis describes a finite variant of the target concept, a setting henceforth called learning with anomalies. As usual, we distinguish between learning from positive data only and learning from positive and negative data.
We investigate the following learning models: finite identification, conservative inference, set-driven learning, and behaviorally correct learning. In general, we focus our attention on the case that the number of allowed anomalies is finite but not a priori bounded. However, we also present a few sample results concerning the special case of learning with an a priori bounded number of anomalies. We provide characterizations of the corresponding models of learning with anomalies in terms of finite tell-tale sets. The observed variations in the degree of recursiveness of the relevant tell-tale sets already suffice to distinguish the corresponding models of learning with anomalies from one another.
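To make the success criterion concrete, the following toy sketch (not from the paper; the class, learner, and names are illustrative assumptions) simulates a consistent learner processing positive data from a small hand-picked indexable class of finite concepts. The learner is judged successful if its final hypothesis agrees with the target up to a finite symmetric difference, optionally bounded by a given number of anomalies.

```python
# Toy illustration of "learning with anomalies" (all names hypothetical):
# an indexable class is modeled as a list of finite sets of naturals.
CONCEPTS = [
    {0, 1, 2},
    {0, 1, 2, 3, 4},
    {1, 3, 5, 7},
]

def learner(data_prefix):
    """Consistent learner: return the least index of a concept that
    contains every positive example seen so far (None if none fits)."""
    seen = set(data_prefix)
    for i, concept in enumerate(CONCEPTS):
        if seen <= concept:
            return i
    return None

def succeeds_with_anomalies(target, text, num_anomalies=None):
    """Feed the text (a presentation of positive data) to the learner.
    Success: the final hypothesis describes a finite variant of the
    target, i.e. the symmetric difference is finite -- and, if
    num_anomalies is given, contains at most that many elements."""
    hypothesis = None
    for t in range(1, len(text) + 1):
        hypothesis = learner(text[:t])
    if hypothesis is None:
        return False
    anomalies = CONCEPTS[hypothesis] ^ target  # symmetric difference
    return num_anomalies is None or len(anomalies) <= num_anomalies

# On the prefix [0, 1] the learner guesses index 0, then switches to
# index 1 once it sees 3 -- a mind change, permitted in limit learning
# but ruled out by finite identification.
text = [0, 1, 2, 3, 4]
print(succeeds_with_anomalies({0, 1, 2, 3, 4}, text))        # exact fit
print(succeeds_with_anomalies({0, 1, 2, 3, 4, 6}, text, 1))  # one anomaly
```

Note how the anomaly tolerance only relaxes the success criterion on the final hypothesis; the learner itself is unchanged, which mirrors how the models in the paper are varied.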
In addition, we study variants of incremental learning and derive a complete picture concerning the relation of all models of learning with and without anomalies mentioned above.
Keywords: Inductive Inference · Incremental Learning · Positive Data · Target Concept · Hypothesis Space