Consistency Conditions for Inductive Inference of Recursive Functions

  • Yohji Akama
  • Thomas Zeugmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4384)


A consistent learner is required to correctly and completely reflect in its actual hypothesis all data received so far. Though this demand sounds quite plausible, it may lead to the unsolvability of the learning problem.

Therefore, in the present paper several variations of consistent learning are introduced and studied. These variations allow a so-called δ–delay, relaxing the consistency demand to all but the last δ data.

Additionally, we introduce the notion of coherent learning (again with δ–delay), requiring the learner to correctly reflect only the last datum seen (in the δ–delay case, only the (n − δ)th datum).

Our results are threefold. First, it is shown that all models of coherent learning with δ–delay are exactly as powerful as their corresponding consistent learning models with δ–delay. Second, we provide characterizations for consistent learning with δ–delay in terms of complexity. Finally, we establish strict hierarchies for all consistent learning models with δ–delay, depending on δ.
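The two central demands can be made concrete with a small sketch. The following toy code (not from the paper; the function names and data encoding are illustrative assumptions) models a hypothesis as a Python function and checks δ–delay consistency (the hypothesis reproduces all but the last δ data) and δ–delay coherence (it reproduces only the (n − δ)th datum):

```python
# Toy illustration of delta-delay consistency and coherence checks.
# A "hypothesis" is modelled as a Python function; the data seen so far
# are pairs (x, f(x)) presented in order x = 0, 1, 2, ...

def is_consistent_with_delay(hypothesis, data, delta):
    """True iff the hypothesis reproduces all but the last delta data."""
    prefix = data[:len(data) - delta] if delta > 0 else data
    return all(hypothesis(x) == y for x, y in prefix)

def is_coherent_with_delay(hypothesis, data, delta):
    """True iff the hypothesis reproduces the (n - delta)th datum only."""
    idx = len(data) - 1 - delta
    if idx < 0:
        return True  # fewer than delta + 1 data seen: nothing to check yet
    x, y = data[idx]
    return hypothesis(x) == y

data = [(x, x * x) for x in range(6)]                  # graph of f(x) = x^2
square = lambda x: x * x                               # correct hypothesis
off_by_one = lambda x: x * x + (1 if x == 5 else 0)    # wrong on last datum

print(is_consistent_with_delay(square, data, 0))       # True
print(is_consistent_with_delay(off_by_one, data, 0))   # False
print(is_consistent_with_delay(off_by_one, data, 1))   # True: last datum ignored
```

The last line shows the point of the relaxation: a hypothesis that contradicts only the most recent datum is rejected by plain consistency but accepted once a δ–delay of 1 is allowed.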





Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Yohji Akama (1)
  • Thomas Zeugmann (2)
  1. Mathematical Institute, Tohoku University, Sendai, Miyagi 980-8578, Japan
  2. Division of Computer Science, Hokkaido University, N-14, W-9, Sapporo 060-0814, Japan
