Part of the Lecture Notes in Computer Science book series (LNCS, volume 961)

# Learning and consistency

• Rolf Wiehagen
• Thomas Zeugmann

## Abstract

In designing learning algorithms it seems quite reasonable to require that all data the algorithm has obtained so far be correctly and completely reflected in the hypothesis it outputs on these data. However, this approach may fail completely: it may render the learning problem unsolvable, or it may exclude any efficient solution of it.
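As a toy illustration (not taken from the paper), the consistency requirement can be phrased as follows: a hypothesis output after seeing the examples (x₀, f(x₀)), …, (xₙ, f(xₙ)) is consistent iff it reproduces every one of these values. A minimal sketch, assuming a small hypothesis space of Python functions enumerated by index:

```python
# Hypothetical sketch of consistent "identification by enumeration":
# output the index of the first hypothesis in a fixed enumeration
# that agrees with all data seen so far.

def is_consistent(hypothesis, data):
    """True iff the hypothesis reproduces every observed pair (x, y)."""
    return all(hypothesis(x) == y for x, y in data)

def consistent_learner(hypothesis_space, data):
    for i, h in enumerate(hypothesis_space):
        if is_consistent(h, data):
            return i  # the index plays the role of the output program
    return None  # no consistent hypothesis in the (finite) enumeration

# Toy hypothesis space, assumed for illustration only.
space = [lambda x: 0, lambda x: x, lambda x: x * x]

data = [(0, 0), (1, 1), (2, 4)]
print(consistent_learner(space, data))  # → 2, i.e. the squaring function
```

Searching for a consistent hypothesis in this brute-force way is one reason the requirement can be computationally expensive; the paper's point is that for some natural problems no efficient consistent learner exists at all.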

Therefore we study several types of consistent learning in recursion-theoretic inductive inference. We show that these types are not of universal power, give “lower bounds” on this power, and characterize these types by versions of decidability of consistency with respect to suitable “non-standard” spaces of hypotheses.

Then we investigate the problem of learning consistently in polynomial time. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to work inconsistently.

