Learning by erasing

  • Steffen Lange
  • Rolf Wiehagen
  • Thomas Zeugmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1160)


Learning by erasing means the process of eliminating potential hypotheses from further consideration, thereby converging to the least hypothesis never eliminated; this hypothesis must be a solution to the actual learning problem.

The present paper deals with learnability by erasing of indexed families of languages, both from positive data and from positive and negative data. This refers to the following scenario. A family L of target languages and a hypothesis space for it are specified. The learner is eventually fed all positive examples (or all labeled examples, respectively) of an unknown target language L chosen from L. The target language L is learned by erasing if the learner erases some set of possible hypotheses and the least hypothesis never erased correctly describes L.
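As an informal illustration of this scenario (a toy sketch, not the paper's formal model), consider a finite, hypothetical indexed family given as Python sets. The learner reads positive examples of an unknown target and erases every hypothesis that fails to contain some example seen so far; its current guess is the least index not yet erased.

```python
def erasing_learner(hypotheses, text):
    """hypotheses: list of sets (a toy indexed family);
    text: iterable of positive examples of the target language.
    Returns the least index never erased after reading the text."""
    erased = set()
    guess = None
    for example in text:
        for i, lang in enumerate(hypotheses):
            # erase any hypothesis contradicted by the data seen so far
            if i not in erased and example not in lang:
                erased.add(i)
        # the current guess: the least hypothesis never erased
        guess = min(i for i in range(len(hypotheses)) if i not in erased)
    return guess

# Hypothetical chain L_0 ⊂ L_1 ⊂ L_2 and a text for L_1:
family = [{0}, {0, 1}, {0, 1, 2}]
print(erasing_learner(family, [0, 1, 0, 1]))  # least surviving index: 1
```

In this toy chain the least surviving index coincides with the target, which mirrors the convergence behavior described above; the paper's results concern when such convergence is achievable in general, over infinite indexed families and various hypothesis spaces.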

The capabilities of learning by erasing are investigated in dependence on which sets of hypotheses have to be, or may be, erased, and in dependence on the choice of the hypothesis space.

Class preserving learning by erasing (L has to be learned w.r.t. some suitably chosen enumeration of all and only the languages from L), class comprising learning by erasing (L has to be learned w.r.t. some hypothesis space containing at least all the languages from L), and absolute learning by erasing (L has to be learned w.r.t. all class preserving hypothesis spaces for L) are distinguished.

For all these models of learning by erasing, necessary and sufficient conditions for learnability are presented. A complete picture of all separations and coincidences of the learning by erasing models is derived. Learning by erasing is compared with standard models of language learning such as learning in the limit, finite learning, and conservative learning. The exact location of these types within the hierarchy of the learning by erasing models is established.







Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Steffen Lange¹
  • Rolf Wiehagen²
  • Thomas Zeugmann³
  1. FB Math. & Informatik, HTWK Leipzig, Leipzig, Germany
  2. FB Informatik, Universität Kaiserslautern, Kaiserslautern, Germany
  3. Department of Informatics, Kyushu University 33, Fukuoka, Japan
