Monotonic and non-monotonic inductive inference of functions and patterns
Monotonic and non-monotonic reasoning are introduced into inductive inference, the mathematical theory of algorithmic learning from possibly incomplete information. In this setting, monotonicity means that hypotheses are constructed incrementally, whereas the necessity of non-monotonic reasoning indicates that considerable belief revisions may be required during hypothesis formation. It is therefore of particular interest to identify areas of inductive inference where monotonic construction of hypotheses is always possible. It turns out that in the inductive inference of total recursive functions, monotonicity can rarely be guaranteed. These results are compared to the problem of inductively inferring text patterns from finite samples. For this area, there is a universal weakly monotonic inductive inference algorithm. The computability of a stronger algorithm developed here depends on the decidability of the inclusion problem for pattern languages, which remains open. Unfortunately, the latter algorithm turns out to be inconsistent, i.e. it sometimes generates hypotheses unable to reflect the information they are built upon. Consistency and monotonicity can hardly be achieved simultaneously. This raises the question under which circumstances an inductive inference algorithm for learning text patterns can be both consistent and monotonic. The corresponding problem class is characterized by closedness under intersection.
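The interplay of consistency and monotonicity can be illustrated with a minimal sketch. This is a hypothetical toy learner, not the algorithm of the paper: it assumes all samples have equal length and that variables stand for single symbols. A position where all samples agree stays a constant; a disagreeing position is generalized to a fresh variable. Then every hypothesis generates all samples seen so far (consistency), and a new sample can only turn constants into variables, so the hypothesis language never shrinks (monotonicity).

```python
def infer_pattern(samples):
    """Naively infer a pattern (a list of tokens) from equal-length sample
    strings: a position where all samples agree becomes a constant; any
    disagreeing position becomes a fresh variable x1, x2, ...
    Hypothetical illustration only, under the restrictions stated above."""
    assert samples and all(len(s) == len(samples[0]) for s in samples)
    pattern, fresh = [], 0
    for column in zip(*samples):       # compare samples position by position
        if len(set(column)) == 1:      # all samples agree: keep the constant
            pattern.append(column[0])
        else:                          # disagreement: generalize to a variable
            fresh += 1
            pattern.append(f"x{fresh}")
    return pattern

# A new sample only generalizes the hypothesis, never revises it downward.
print(infer_pattern(["aba", "aca"]))         # ['a', 'x1', 'a']
print(infer_pattern(["aba", "aca", "abb"]))  # ['a', 'x1', 'x2']
```

Note that this learner succeeds only on a severely restricted class of pattern languages; in the general case treated in the paper, such consistent and monotonic behavior is exactly what is shown to be hard to obtain.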