
Learning Recursive Functions Refutably

  • Sanjay Jain
  • Efim Kinber
  • Rolf Wiehagen
  • Thomas Zeugmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2225)

Abstract

Learning recursive functions refutably means that, for every recursive function, the learning machine either has to learn this function or to refute it, i.e., to signal that it is not able to learn it. We consider three modes of making the notion of refutation precise. We show that the corresponding types of refutable learning are of strictly increasing power, and that already the most stringent of them is of remarkable topological and algorithmic richness. All these types are closed under union, though to different degrees. These types also differ with respect to their intrinsic complexity: two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present characterizations for these types of refutable learning. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how, in general, it can be realized.
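
As a rough schematic (our notation, not the authors' exact definitions), write \(\mathcal{R}\) for the class of recursive functions, \(f[n]\) for the initial segment \(f(0),\dots,f(n)\), \(\varphi_p\) for the function computed by program \(p\), and \(\bot\) for a distinguished refutation symbol. The either-learn-or-refute requirement for a class \(C \subseteq \mathcal{R}\) can then be sketched as follows; the three modes considered in the paper make the refutation clause precise in different ways.

% Schematic only; not the paper's formal definition.
\[
M \text{ learns } C \text{ refutably} \;\Longleftrightarrow\; \forall f \in \mathcal{R}:\;
\begin{cases}
  f \in C    & \Rightarrow\; M(f[n]) \text{ converges to a program } p \text{ with } \varphi_p = f,\\
  f \notin C & \Rightarrow\; M \text{ eventually outputs the refutation symbol } \bot \text{ on } f.
\end{cases}
\]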

For refutable learning with anomalies, we show that several results from standard (non-refutable) learning carry over to the refutable setting. We then derive hierarchies for refutable learning. Finally, we show that stricter refutability constraints cannot be traded for more liberal learning criteria.
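
Here “learning with anomalies” is meant in the standard \(\mathrm{Ex}^a\) sense of the inductive inference literature (again our rendering, not necessarily the authors' exact formulation): the finally output program need only be a finite variant of the target function.

% Background notation only; schematic.
\[
M \ \mathrm{Ex}^{a}\text{-identifies } f \;\Longleftrightarrow\; M(f[n]) \text{ converges to some } p \text{ with } \varphi_p =^{a} f,
\]
where \(\varphi_p =^{a} f\) means that \(\varphi_p\) and \(f\) disagree on at most \(a\) arguments (\(a \in \mathbb{N} \cup \{*\}\), with \(*\) permitting any finite number of anomalies). The refutable variants additionally require that functions outside the learnable class be refuted as sketched above.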

Keywords

Learning Machine · Accumulation Point · Partial Function · Recursive Function · Inductive Inference

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Sanjay Jain (1)
  • Efim Kinber (2)
  • Rolf Wiehagen (3)
  • Thomas Zeugmann (4)
  1. School of Computing, National University of Singapore, Singapore
  2. Department of Computer Science, Sacred Heart University, Fairfield, USA
  3. Department of Computer Science, University of Kaiserslautern, Kaiserslautern, Germany
  4. Institut für Theoretische Informatik, Med. Universität zu Lübeck, Lübeck, Germany
