Language learning with a bounded number of mind changes

  • Steffen Lange
  • Thomas Zeugmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 665)


We study the learnability of enumerable families \(\mathcal{L}\) of uniformly recursive languages depending on the number of allowed mind changes, i.e., with respect to a well-studied measure of efficiency. We distinguish between exact learnability (\(\mathcal{L}\) has to be inferred with respect to \(\mathcal{L}\) itself) and class preserving learning (\(\mathcal{L}\) has to be inferred with respect to some suitably chosen enumeration of all the languages in \(\mathcal{L}\)), as well as between learning from positive data and learning from both positive and negative data.

The measure of efficiency is applied to prove the superiority of class preserving learning algorithms over exact learning. We considerably improve previously obtained results and establish two infinite hierarchies. Furthermore, we separate exact and class preserving learning from positive data that avoids overgeneralization. Finally, language learning with a bounded number of mind changes is completely characterized in terms of recursively generable finite sets. These characterizations offer a new method for handling overgeneralization and resolve an open question posed by Mukouchi (1992).
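To make the efficiency measure concrete, the following sketch counts mind changes of a simple conservative learner on a toy family of finite languages. The family \(L_k = \{0, \ldots, k\}\), the text, and all function names are our own illustration, not taken from the paper; the learner always conjectures the minimal language of the family consistent with the positive examples seen so far.

```python
# Toy illustration (our own, not from the paper) of learning in the
# limit from positive data with a counted number of mind changes.
# Family: L_k = {0, 1, ..., k} for k = 0..4; the target below is L_3.

def learner(text, family):
    """After each positive example, yield the index of the minimal
    language in `family` containing all examples seen so far."""
    seen = set()
    for x in text:
        seen.add(x)
        for i, lang in enumerate(family):
            if seen <= lang:  # first (smallest) consistent language
                yield i
                break

family = [set(range(k + 1)) for k in range(5)]   # L_0, ..., L_4
text = [0, 2, 1, 2, 3, 0, 3]                     # positive data for L_3

hypotheses = list(learner(text, family))
mind_changes = sum(1 for a, b in zip(hypotheses, hypotheses[1:]) if a != b)
print(hypotheses)    # → [0, 2, 2, 2, 3, 3, 3]
print(mind_changes)  # → 2
```

The learner converges to index 3 and, on this particular text, revises its hypothesis twice; a bound on mind changes, as studied in the paper, limits such revisions uniformly over all admissible texts.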


Keywords: Language Learning, Recursive Function, Inductive Inference, Positive Data, Negative Data




  1. Angluin, D. (1980), Inductive inference of formal languages from positive data, Inf. and Control 45, 117–135.
  2. Angluin, D., and Smith, C.H. (1987), Formal inductive inference, in “Encyclopedia of Artificial Intelligence” (St.C. Shapiro, Ed.), Vol. 1, pp. 409–418, Wiley-Interscience Publication, New York.
  3. Barzdin, Ya.M., and Freivalds, R.V. (1972), On the prediction of general recursive functions, Sov. Math. Dokl. 13, 1224–1228.
  4. Blum, L., and Blum, M. (1975), Toward a mathematical theory of inductive inference, Inf. and Control 28, 122–155.
  5. Case, J. (1988), The power of vacillation, in “Proc. 1st Workshop on Computational Learning Theory” (D. Haussler and L. Pitt, Eds.), pp. 196–205, Morgan Kaufmann Publishers Inc.
  6. Case, J., and Lynes, C. (1982), Machine inductive inference and language identification, in “Proc. Automata, Languages and Programming, 9th Colloquium” (M. Nielsen and E.M. Schmidt, Eds.), Lecture Notes in Computer Science Vol. 140, pp. 107–115, Springer-Verlag, Berlin.
  7. Case, J., and Smith, C. (1983), Comparison of identification criteria for machine inductive inference, Theoretical Computer Science 25, 193–220.
  8. Fulk, M. (1990), Prudence and other restrictions in formal language learning, Inf. and Computation 85, 1–11.
  9. Gasarch, W.I., and Velauthapillai, M. (1992), Asking questions versus verifiability, in “Proc. 3rd International Workshop on Analogical and Inductive Inference” (K.P. Jantke, Ed.), Lecture Notes in Artificial Intelligence Vol. 642, pp. 197–213, Springer-Verlag, Berlin.
  10. Gold, E.M. (1967), Language identification in the limit, Inf. and Control 10, 447–474.
  11. Jain, S., and Sharma, A. (1989), Recursion theoretic characterizations of language learning, Univ. of Rochester, Dept. of Comp. Sci., TR 281.
  12. Kapur, S., and Bilardi, G. (1992), Language learning without overgeneralization, in “Proc. 9th Annual Symposium on Theoretical Aspects of Computer Science” (A. Finkel and M. Jantzen, Eds.), Lecture Notes in Computer Science Vol. 577, pp. 245–256, Springer-Verlag, Berlin.
  13. Lange, S., and Zeugmann, T. (1992), Types of monotonic language learning and their characterization, in “Proc. 5th Annual ACM Workshop on Computational Learning Theory,” pp. 377–390, ACM Press.
  14. Lange, S., Zeugmann, T., and Kapur, S. (1992), Class preserving monotonic language learning, GOSLER-Report 14/92, FB Mathematik und Informatik, TH Leipzig.
  15. Mukouchi, Y. (1992), Inductive inference with bounded mind changes, in “Proc. Algorithmic Learning Theory,” October 1992, Tokyo, Japan, JSAI.
  16. Osherson, D., Stob, M., and Weinstein, S. (1986), “Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists,” MIT Press, Cambridge, Massachusetts.
  17. Shinohara, T. (1990), Inductive inference from positive data is powerful, in “Proc. 3rd Annual Workshop on Computational Learning Theory” (M. Fulk and J. Case, Eds.), pp. 97–110, Morgan Kaufmann Publishers Inc.
  18. Wiehagen, R. (1977), Identification of formal languages, in “Proc. Mathematical Foundations of Computer Science” (J. Gruska, Ed.), Lecture Notes in Computer Science Vol. 53, pp. 571–579, Springer-Verlag, Berlin.
  19. Wiehagen, R., Freivalds, R., and Kinber, B. (1984), On the power of probabilistic strategies in inductive inference, Theoretical Computer Science 28, 111–133.
  20. Zeugmann, T. (1983), A-posteriori characterizations in inductive inference of recursive functions, J. of Inf. Processing and Cybernetics (EIK) 19, 559–594.

Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Steffen Lange (1)
  • Thomas Zeugmann (2)
  1. FB Mathematik und Informatik, TH Leipzig, Leipzig
  2. Institut für Theoretische Informatik, TH Darmstadt, Darmstadt
