Learnability of Co-r.e. Classes

  • Ziyuan Gao
  • Frank Stephan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7183)


The object of investigation in this paper is the learnability of co-recursively enumerable (co-r.e.) languages, based on Gold's [11] original model of inductive inference. In particular, the following learning models are studied: finite learning, explanatory learning, vacillatory learning and behaviourally correct learning. The relative effects of imposing further learning constraints, such as conservativeness and prudence, on these various learning models are also investigated. Moreover, an extension of Angluin's [1] characterisation of identifiable indexed families of recursive languages to families of conservatively learnable co-r.e. classes is presented. In this connection, the paper considers the learnability of indexed families of recursive languages, uniformly co-r.e. classes, as well as other general classes of co-r.e. languages. A containment hierarchy of co-r.e. learning models is thereby established; while this hierarchy is quite similar to its r.e. analogue, there are some surprising collapses when using a co-r.e. hypothesis space: for example, vacillatory learning collapses to explanatory learning.
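To illustrate the underlying model: in Gold-style explanatory learning, a learner reads an ever-growing sequence of positive examples of an unknown language and outputs a hypothesis (an index) after each example; it succeeds if the hypotheses eventually converge to a single correct index. The sketch below is not from the paper — it is a minimal toy example, assuming the class of co-singleton sets {ℕ \ {n} : n ∈ ℕ}, where the index n stands for the language ℕ \ {n}.

```python
def learner(data_so_far):
    """Conjecture the least number not yet observed; the hypothesis n
    denotes the co-singleton language N \\ {n}."""
    seen = set(data_so_far)
    n = 0
    while n in seen:
        n += 1
    return n

# Simulate a text (positive-data presentation) for the language N \ {3}.
text = [x for x in range(20) if x != 3]
hypotheses = [learner(text[:i + 1]) for i in range(len(text))]
# Once 0, 1 and 2 have appeared, the learner conjectures 3 and never changes
# its mind again -- it identifies N \ {3} in the limit.
```

On this class the learner is also conservative: it only revises its hypothesis when the current conjecture is contradicted by the data seen so far.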






  1. Angluin, D.: Inductive inference of formal languages from positive data. Information and Control 45(2), 117–135 (1980)
  2. Baliga, G., Case, J., Jain, S.: The synthesis of language learners. Information and Computation 152, 16–43 (1999)
  3. Bārzdiņš, J., Freivalds, R.: On the prediction of general recursive functions. Soviet Mathematical Doklady 13, 1224–1228 (1972)
  4. Blum, L., Blum, M.: Towards a mathematical theory of inductive inference. Information and Control 28, 125–155 (1975)
  5. Carlucci, L., Case, J., Jain, S.: Learning correction grammars. The Journal of Symbolic Logic 74, 489–516 (2009)
  6. Case, J.: The power of vacillation in language learning. SIAM Journal on Computing 28(6), 1941–1969 (1999)
  7. Case, J., Jain, S., Sharma, A.: On learning limiting programs. International Journal of Foundations of Computer Science 3, 93–115 (1992)
  8. Case, J., Lynes, C.: Machine inductive inference and language identification. In: Nielsen, M., Schmidt, E.M. (eds.) ICALP 1982. LNCS, vol. 140, pp. 107–115. Springer, Heidelberg (1982)
  9. Freibergs, V., Tulving, E.: The effect of practice on utilization of information from positive and negative instances in concept identification. Canadian Journal of Psychology 15(2), 101–106 (1961)
  10. Fulk, M.A.: Prudence and other conditions on formal language learning. Information and Computation 85(1), 1–11 (1990)
  11. Gold, E.M.: Language identification in the limit. Information and Control 10, 447–474 (1967)
  12. Jain, S., Stephan, F., Ye, N.: Prescribed learning of r.e. classes. Theoretical Computer Science 410(19), 1796–1806 (2009)
  13. de Jongh, D., Kanazawa, M.: Angluin's theorem for indexed families of r.e. sets and applications. In: COLT, pp. 193–204 (1996)
  14. Krebs, M.J., Lovelace, E.A.: Disjunctive concept identification: stimulus complexity and positive versus negative instances. Journal of Verbal Learning and Verbal Behaviour 9, 653–657 (1970)
  15. Lange, S., Zeugmann, T.: Set-driven and rearrangement-independent learning of recursive languages. Mathematical Systems Theory 29(6), 599–634 (1996)
  16. Lange, S., Zeugmann, T., Kapur, S.: Characterizations of monotonic and dual monotonic language learning. Information and Computation 120(2), 155–173 (1995)
  17. Lange, S., Zeugmann, T., Zilles, S.: Learning indexed families of recursive languages from positive data: a survey. Theoretical Computer Science 397(1-3), 194–232 (2008)
  18. Osherson, D., Stob, M., Weinstein, S.: Learning strategies. Information and Control 53, 32–51 (1982)
  19. Popper, K.R.: Conjectures and refutations: the growth of scientific knowledge. Routledge and Kegan Paul, London (1972)
  20. Rogers Jr., H.: Theory of recursive functions and effective computability. MIT Press, Cambridge (1987)
  21. Zeugmann, T., Lange, S.: A guided tour across the boundaries of learning recursive languages. In: Lange, S., Jantke, K.P. (eds.) GOSLER 1994. LNCS, vol. 961, pp. 190–258. Springer, Heidelberg (1995)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Ziyuan Gao (1)
  • Frank Stephan (2)
  1. Department of Mathematics, National University of Singapore, Singapore, Republic of Singapore
  2. Department of Mathematics and Department of Computer Science, National University of Singapore, Singapore, Republic of Singapore
