Mathematical Systems Theory, Volume 29, Issue 6, pp. 599–634

Set-driven and rearrangement-independent learning of recursive languages

  • S. Lange
  • T. Zeugmann


This paper studies the impact of order independence on the learnability of indexed families \(\mathcal{L}\) of uniformly recursive languages from positive data. In particular, we consider set-driven and rearrangement-independent learners, i.e., learning devices whose output depends exclusively on the range of their input and on the range and length of their input, respectively. The impact of set-drivenness and rearrangement-independence on the learning power is studied in dependence on the hypothesis space the learners may use. We distinguish between exact learnability (\(\mathcal{L}\) has to be inferred with respect to \(\mathcal{L}\)), class-preserving learning (\(\mathcal{L}\) has to be inferred with respect to some suitably chosen enumeration of all the languages from \(\mathcal{L}\)), and class-comprising inference (\(\mathcal{L}\) has to be learned with respect to some suitably chosen enumeration of uniformly recursive languages containing at least all the languages from \(\mathcal{L}\)).
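To fix intuitions, the two order-independence notions admit the following standard formalization (a sketch in notation assumed here, not quoted from the paper): let \(M\) be a learner, let \(\sigma\) and \(\tau\) range over finite sequences of strings, let \(\mathrm{content}(\sigma)\) denote the set of strings occurring in \(\sigma\), and let \(|\sigma|\) denote its length. Then

\[
\begin{array}{ll}
M \text{ is set-driven} & \text{iff}\quad \mathrm{content}(\sigma)=\mathrm{content}(\tau)\ \Rightarrow\ M(\sigma)=M(\tau), \\[2pt]
M \text{ is rearrangement-independent} & \text{iff}\quad \mathrm{content}(\sigma)=\mathrm{content}(\tau)\ \wedge\ |\sigma|=|\tau|\ \Rightarrow\ M(\sigma)=M(\tau),
\end{array}
\]

for all finite sequences \(\sigma\) and \(\tau\). In particular, every set-driven learner is also rearrangement-independent.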

Furthermore, we consider the influence of set-drivenness and rearrangement-independence on learning devices that realize the subset principle to different extents. Thereby we distinguish between strong-monotonic, monotonic, and weak-monotonic or conservative learning; the usual formalizations are sketched below.
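For reference, the monotonicity constraints may be sketched as follows (standard formalizations in the spirit of [8] and [10]; the notation is assumed here for illustration). Suppose \(M\), when fed a text \(t\) for the target language \(L\), outputs the sequence of hypotheses \(j_0, j_1, j_2, \ldots\), where \(j_x\) is computed from the initial segment \(t_x\) of \(t\) and \(L(j)\) denotes the language described by hypothesis \(j\). Then, for all \(x\):

\[
\begin{array}{ll}
\text{strong-monotonic:} & L(j_x) \subseteq L(j_{x+1}), \\[2pt]
\text{monotonic:} & L(j_x) \cap L \subseteq L(j_{x+1}) \cap L, \\[2pt]
\text{weak-monotonic:} & \mathrm{content}(t_{x+1}) \subseteq L(j_x)\ \Rightarrow\ L(j_x) \subseteq L(j_{x+1}), \\[2pt]
\text{conservative:} & \mathrm{content}(t_{x+1}) \subseteq L(j_x)\ \Rightarrow\ j_{x+1} = j_x.
\end{array}
\]

In particular, a conservative learner never abandons a hypothesis that is consistent with the data seen so far.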

The results obtained are threefold. First, rearrangement-independent learning does not constitute a restriction except in the case of monotonic learning. Next, we prove that for all but two of the learning models considered, set-drivenness is a severe restriction. However, class-comprising set-driven conservative learning is exactly as powerful as unrestricted class-comprising conservative learning. Finally, the power of class-comprising set-driven learning in the limit is characterized by equating the collection of learnable indexed families with the collection of class-comprisingly conservatively inferable indexed families. These results considerably extend previous work done in the field (see, e.g., [20] and [5]).
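The last two of these results can be summarized symbolically (the class names used below are illustrative shorthand rather than the paper's own notation): writing \(\mathrm{LIM}\) for learning in the limit, \(\mathrm{CONSV}\) for conservative learning, the prefix \(s\text{-}\) for the restriction to set-driven learners, and the subscript \(cc\) for class-comprising hypothesis spaces, the statements read

\[
s\text{-}\mathrm{CONSV}_{cc} \;=\; \mathrm{CONSV}_{cc}
\qquad\text{and}\qquad
s\text{-}\mathrm{LIM}_{cc} \;=\; \mathrm{CONSV}_{cc},
\]

where each symbol denotes the collection of indexed families learnable in the corresponding model.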


Keywords: Target language, hypothesis space, monotonicity constraint, correct hypothesis, canonical text




References

[1] D. Angluin, Finding patterns common to a set of strings, Journal of Computer and System Sciences, 21 (1980), 46–62.
[2] D. Angluin, Inductive inference of formal languages from positive data, Information and Control, 45 (1980), 117–135.
[3] R. Berwick, The Acquisition of Syntactic Knowledge, MIT Press, Cambridge, MA, 1985.
[4] L. Blum and M. Blum, Toward a mathematical theory of inductive inference, Information and Control, 28 (1975), 122–155.
[5] M. Fulk, Prudence and other restrictions in formal language learning, Information and Computation, 85 (1990), 1–11.
[6] E. M. Gold, Language identification in the limit, Information and Control, 10 (1967), 447–474.
[7] J. E. Hopcroft and J. D. Ullman, Formal Languages and Their Relation to Automata, Addison-Wesley, Reading, MA, 1969.
[8] K. P. Jantke, Monotonic and non-monotonic inductive inference, New Generation Computing, 8 (1991), 349–360.
[9] S. Lange and R. Wiehagen, Polynomial-time inference of arbitrary pattern languages, New Generation Computing, 8 (1991), 361–370.
[10] S. Lange and T. Zeugmann, Types of monotonic language learning and their characterization, Proc. 5th Annual ACM Workshop on Computational Learning Theory, ACM Press, New York, 1992, pp. 377–390.
[11] S. Lange and T. Zeugmann, Monotonic versus non-monotonic language learning, Proc. 2nd International Workshop on Nonmonotonic and Inductive Logic (G. Brewka, K. P. Jantke, and P. H. Schmitt, Eds.), Lecture Notes in Artificial Intelligence, Vol. 659, Springer-Verlag, Berlin, 1993, pp. 254–269.
[12] S. Lange and T. Zeugmann, Language learning in dependence on the space of hypotheses, Proc. 6th Annual ACM Conference on Computational Learning Theory, ACM Press, New York, 1993, pp. 127–136.
[13] S. Lange and T. Zeugmann, The learnability of recursive languages in dependence on the hypothesis space, GOSLER-Report 20/93, FB Mathematik, Informatik und Naturwissenschaften, HTWK Leipzig, 1993.
[14] S. Lange and T. Zeugmann, Learning recursive languages with bounded mind changes, International Journal of Foundations of Computer Science, 4 (1993), 157–178.
[15] S. Lange and T. Zeugmann, Characterization of language learning from informant under various monotonicity constraints, Journal of Experimental and Theoretical Artificial Intelligence, 6 (1994), 71–94.
[16] S. Lange, T. Zeugmann, and S. Kapur, Monotonic and dual-monotonic language learning, Theoretical Computer Science, 155 (1996), 365–410.
[17] M. Machtey and P. Young, An Introduction to the General Theory of Algorithms, North-Holland, New York, 1978.
[18] Y. Mukouchi, Inductive inference with bounded mind changes, Proc. 3rd Workshop on Algorithmic Learning Theory (S. Doshita, K. Furukawa, K. P. Jantke, and T. Nishida, Eds.), Lecture Notes in Artificial Intelligence, Vol. 743, Springer-Verlag, Berlin, 1993, pp. 125–134.
[19] D. Osherson, M. Stob, and S. Weinstein, Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists, MIT Press, Cambridge, MA, 1986.
[20] G. Schäfer-Richter, Über Eingabeabhängigkeit und Komplexität von Inferenzstrategien [On input dependence and complexity of inference strategies], Dissertation, Rheinisch-Westfälische Technische Hochschule, Aachen, 1984.
[21] F. Stephan, Personal communication.
[22] K. Wexler, The subset principle is an intensional principle, in Knowledge and Language: Issues in Representation and Acquisition (E. Reuland and W. Abraham, Eds.), Vol. 1, From Orwell's Problem to Plato's Problem, Chapter 9, Kluwer, Dordrecht, 1993.
[23] K. Wexler and P. Culicover, Formal Principles of Language Acquisition, MIT Press, Cambridge, MA, 1980.
[24] R. Wiehagen, A thesis in inductive inference, Proc. First International Workshop on Nonmonotonic and Inductive Logic (J. Dix, K. P. Jantke, and P. H. Schmitt, Eds.), Lecture Notes in Artificial Intelligence, Vol. 543, Springer-Verlag, Berlin, 1991, pp. 184–207.
[25] R. Wiehagen and T. Zeugmann, Ignoring data may be the only way to learn efficiently, Journal of Experimental and Theoretical Artificial Intelligence, 6 (1994), 131–144.
[26] T. Zeugmann, Lange and Wiehagen's pattern language learning algorithm: an average-case analysis with respect to its total learning time, RIFIS Technical Report RIFIS-TR-CS-111, RIFIS, Kyushu University 33, 1995.
[27] T. Zeugmann and S. Lange, A guided tour across the boundaries of learning recursive languages, Algorithmic Learning for Knowledge-Based Systems (K. P. Jantke and S. Lange, Eds.), Lecture Notes in Artificial Intelligence, Vol. 961, Springer-Verlag, Berlin, 1995, pp. 193–262.

Copyright information

© Springer-Verlag New York Inc. 1996

Authors and Affiliations

  • S. Lange, FB Mathematik und Informatik, HTWK Leipzig, Leipzig, Germany
  • T. Zeugmann, Department of Informatics, Graduate School of Information Science and EE, Kyushu University, Fukuoka, Japan
