Abstract
We present two phenomena discovered in pure recursion-theoretic inductive inference, namely inconsistent learning (learning strategies producing apparently “senseless” hypotheses can solve problems unsolvable by “reasonable” learning strategies) and learning from good examples (“much less” information can lead to much more learning power). Recently, it has been shown that these phenomena also hold in the world of polynomial-time algorithmic learning. Thus inductive inference can be understood and used as a source of potent ideas guiding both research and applications in algorithmic learning theory.
References
Angluin, D., On the complexity of minimum inference of regular sets. Information and Control 39 (1978) 337–350.
Angluin, D., Finding patterns common to a set of strings. Journal of Computer and System Sciences 21 (1980) 46–62.
Angluin, D., Computational learning theory: Survey and selected bibliography. Proc. ACM Symposium on Theory of Computing, ACM Press, 351–368, 1992.
Angluin, D. and Smith, C. H., Inductive inference: Theory and methods. Computing Surveys 15 (1983), 237–269.
Barzdin, J., Two theorems on the limiting synthesis of functions. In [28], vol. 1 (1974), 82–88 (in Russian).
Barzdin, J., Inductive inference of automata, functions and programs. Proc. Int. Congress of Mathematicians, 455–460, 1974.
Blum, L. and Blum, M., Toward a mathematical theory of inductive inference. Information and Control 28 (1975) 122–155.
Case, J. and Smith, C., Comparison of identification criteria for machine inductive inference. Theoretical Computer Science 25 (1983) 193–220.
Daley, R., On the error correcting power of pluralism in inductive inference. Theoretical Computer Science 24 (1983) 95–104.
Freivalds, R., Finite identification of general recursive functions by probabilistic strategies. Proc. Conf. Foundations of Computation Theory, Akademie-Verlag, 138–145, 1979.
Freivalds, R., Kinber, E. B. and Wiehagen, R., On the power of inductive inference from good examples. Theoretical Computer Science (to appear).
Fulk, M., Saving the phenomena: Requirements that inductive inference machines not contradict known data. Information and Computation 79 (1988) 193–209.
Garey, M. R. and Johnson, D. S., Computers and Intractability, Freeman and Company, 1979.
Gold, E. M., Language identification in the limit. Information and Control 10 (1967) 447–474.
Gold, E. M., Complexity of automaton identification from given data. Information and Control 37 (1978) 302–320.
Jain, S. and Sharma, A., Finite learning by a team. Proc. Third Annual Workshop on Computational Learning Theory, Morgan Kaufmann, 163–177, 1990.
Jantke, K. P. and Beick, H.-R., Combining postulates of naturalness in inductive inference. Journal of Information Processing and Cybernetics (EIK) 17 (1981) 465–484.
Kearns, M. and Pitt, L., A polynomial-time algorithm for learning k-variable pattern languages from examples. Proc. Second Annual Workshop on Computational Learning Theory, Morgan Kaufmann, 57–70, 1989.
Klette, R. and Wiehagen, R., Research in the theory of inductive inference by GDR mathematicians—a survey. Information Sciences 22 (1980) 149–169.
Ko, Ker-I, Marron, A. and Tzeng, W.-G., Learning string patterns and tree patterns from examples. Proc. Seventh Int. Conf. on Machine Learning, Morgan Kaufmann, 384–391, 1990.
Lange, S. and Wiehagen, R., Polynomial-time inference of arbitrary pattern languages. New Generation Computing 8 (1991) 361–370.
Osherson, D., Stob, M. and Weinstein, S., Systems that learn. MIT Press, 1986.
Pitt, L., Inductive inference, DFA's, and computational complexity. Proc. Int. Workshop on Analogical and Inductive Inference, Lecture Notes in Artificial Intelligence 397 (1989) 18–44.
Pitt, L. and Warmuth, M. K., The minimum consistent DFA problem cannot be approximated within any polynomial. Tech. Report UIUCDCS-R-89-1499, University of Illinois at Urbana-Champaign, Febr. 1989.
Podnieks, K. M., Comparing various concepts of function prediction, part I. In [28], vol. 1 (1974) 68–81 (in Russian).
Rogers, H. Jr., Theory of recursive functions and effective computability, McGraw-Hill, 1967.
Shinohara, T., Polynomial-time inference of extended regular pattern languages. Proc. RIMS Symp. on Software Science and Engineering, Lecture Notes in Computer Science 147 (1983) 115–127.
Barzdin, J., Ed., Theory of Algorithms and Programs, vol. 1, 2, 3. Latvian State University, Riga, 1974, 1975, 1977 (in Russian).
Trakhtenbrot, B. A. and Barzdin, J., Finite Automata: Behavior and synthesis. North-Holland, 1973.
Wiehagen, R., Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Journal of Information Processing and Cybernetics (EIK) 12 (1976) 93–99.
Wiehagen, R. and Zeugmann, T., Too much information can be too much for learning efficiently. Proc. Int. Workshop on Analogical and Inductive Inference, Lecture Notes in Artificial Intelligence, Oct. 1992.
Zeugmann, T., A-posteriori characterizations in inductive inference of recursive functions. Journal of Information Processing and Cybernetics (EIK) 19 (1983) 559–594.
© 1993 Springer-Verlag Berlin Heidelberg
Wiehagen, R. (1993). From inductive inference to algorithmic learning theory. In: Doshita, S., Furukawa, K., Jantke, K.P., Nishida, T. (eds) Algorithmic Learning Theory. ALT 1992. Lecture Notes in Computer Science, vol 743. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57369-0_24
Print ISBN: 978-3-540-57369-2
Online ISBN: 978-3-540-48093-8