ALT 1999: Algorithmic Learning Theory pp 118–131

On the Strength of Incremental Learning

  • Steffen Lange
  • Gunter Grieser
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1720)

Abstract

This paper provides a systematic study of incremental learning from noise-free and from noisy data, thereby distinguishing between learning from only positive data and from both positive and negative data. Our study relies on the notion of noisy data introduced in [22].

The basic scenario, named iterative learning, is as follows. In every learning stage, an algorithmic learner takes as input one element of an information sequence for a target concept and its previously made hypothesis and outputs a new hypothesis. The sequence of hypotheses has to converge to a hypothesis describing the target concept correctly.
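The protocol above can be sketched in a few lines. The toy concept class here (finite initial segments {0, ..., k} of the natural numbers, with the integer k as hypothesis) is a hypothetical illustration, not an example from the paper; the point is only that each stage sees one data element and the previous hypothesis, never the full history.

```python
# A minimal sketch of the iterative learning protocol.
# Toy concept class (hypothetical): the target is {0, ..., k},
# and a hypothesis is simply the integer k.

def iterative_learner(hypothesis, datum):
    """One learning stage: compute a new hypothesis from the previous
    hypothesis and a single data element -- no access to earlier data."""
    if hypothesis is None:          # no hypothesis made yet
        return datum
    return max(hypothesis, datum)

def run(information_sequence):
    """Feed an information sequence to the learner stage by stage."""
    h = None
    for x in information_sequence:
        h = iterative_learner(h, x)
    return h

# On any positive presentation of {0, ..., 5}, the hypothesis
# sequence converges to 5.
print(run([3, 0, 5, 2, 5, 1]))  # -> 5
```

Convergence in this toy case is immediate: once the largest element of the target appears in the sequence, the hypothesis never changes again.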

We study the following refinements of this scenario. Bounded example-memory inference generalizes iterative inference by allowing an iterative learner to additionally store an a priori bounded number of carefully chosen data elements, while feedback learning generalizes it by allowing the iterative learner to additionally ask whether or not a particular data element has already appeared in the data seen so far.
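A bounded example-memory learner can be sketched by extending the learner's state with a memory cell of fixed size. The concept class below (intervals {a, ..., b}) and all names are hypothetical illustrations under a memory bound of one; they show only the shape of the model, in which the learner carries its hypothesis plus at most one stored data element between stages.

```python
# Hypothetical sketch of a bounded example-memory learner (memory bound 1).
# Toy concept class: intervals {a, ..., b}; a hypothesis is the pair (a, b).

def memory_learner(state, datum):
    """One stage: state = (hypothesis, memory), where memory holds at
    most one carefully chosen past data element (here: the smallest
    element seen so far)."""
    hypothesis, memory = state
    lo = datum if memory is None else min(memory, datum)
    hi = datum if hypothesis is None else max(hypothesis[1], datum)
    return ((lo, hi), lo)  # keep the smallest element as the stored datum

def run(sequence):
    state = (None, None)   # no hypothesis, empty memory
    for x in sequence:
        state = memory_learner(state, x)
    return state[0]

print(run([4, 2, 7, 3]))  # -> (2, 7)
```

A feedback learner would instead keep no extra memory but could, at each stage, query whether a chosen data element occurred earlier in the sequence; the paper compares the power of these two ways of relaxing strict iterativeness.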

For the case of learning from noise-free data, we show that, where both positive and negative data are available, restrictions on the accessibility of the input data do not limit the learning capabilities if and only if the relevant iterative learners are allowed to query the history of the learning process or to store at least one carefully selected data element. This insight contrasts nicely with the fact that, in the case where only positive data are available, restrictions on the accessibility of the input data seriously affect the capabilities of all types of incremental learning (cf. [18]).

For the case of learning from noisy data, we present characterizations of all kinds of incremental learning in terms independent of learning theory; the relevant conditions are purely structural ones. Surprisingly, both when learning from noisy positive data alone and when learning from noisy positive and negative data, iterative learners are already exactly as powerful as unconstrained learning devices.

Keywords

Data Element, Initial Segment, Noisy Data, Incremental Learning, Iterative Learner


References

  1. D. Angluin. Inductive inference of formal languages from positive data. Information and Control, 45:117–135, 1980.
  2. J. Case and S. Jain. Synthesizing learners tolerating computable noisy data. In Proc. 9th ALT, LNAI 1501, pp. 205–219. Springer-Verlag, 1998.
  3. J. Case, S. Jain, S. Lange, and T. Zeugmann. Incremental concept learning for bounded data mining. Information and Computation, to appear.
  4. J. Case, S. Jain, and A. Sharma. Synthesizing noise-tolerant language learners. In Proc. 8th ALT, LNAI 1316, pp. 228–243. Springer-Verlag, 1997.
  5. J. Case, S. Jain, and F. Stephan. Vacillatory and BC learning on noisy data. In Proc. 7th ALT, LNAI 1160, pp. 285–289. Springer-Verlag, 1997.
  6. A. Cornuéjols. Getting order independence in incremental learning. In Proc. ECML, LNAI 667, pp. 196–212. Springer-Verlag, 1993.
  7. U.M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, editors. Advances in Knowledge Discovery and Data Mining. MIT Press, 1996.
  8. M. Fulk, S. Jain, and D.N. Osherson. Open problems in systems that learn. Journal of Computer and System Sciences, 49:589–604, 1994.
  9. R. Godin and R. Missaoui. An incremental concept formation approach for learning from databases. Theoretical Computer Science, 133:387–419, 1994.
  10. M.E. Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.
  11. J.E. Hopcroft and J.D. Ullman. Formal Languages and their Relation to Automata. Addison-Wesley, 1969.
  12. S. Jain. Program synthesis in the presence of infinite number of inaccuracies. In Proc. 5th ALT, LNAI 872, pp. 333–348. Springer-Verlag, 1994.
  13. K.P. Jantke and H.R. Beick. Combining postulates of naturalness in inductive inference. Journal of Information Processing and Cybernetics, 17:465–484, 1981.
  14. E. Kinber and F. Stephan. Mind changes, limited memory, and monotonicity. In Proc. 8th COLT, pp. 182–189. ACM Press, 1995.
  15. S. Lange and R. Wiehagen. Polynomial-time inference of arbitrary pattern languages. New Generation Computing, 8:361–370, 1991.
  16. S. Lange and T. Zeugmann. Language learning in dependence on the space of hypotheses. In Proc. 6th COLT, pp. 127–136. ACM Press, 1993.
  17. S. Lange and T. Zeugmann. Learning recursive languages with bounded mind changes. Int. Journal of Foundations of Computer Science, 4:157–178, 1993.
  18. S. Lange and T. Zeugmann. Incremental learning from positive data. Journal of Computer and System Sciences, 53:88–103, 1996.
  19. D. Osherson, M. Stob, and S. Weinstein. Systems that Learn, An Introduction to Learning Theory for Cognitive and Computer Scientists. MIT Press, 1986.
  20. S. Porat and J.A. Feldman. Learning automata from ordered examples. In Proc. 1st COLT, pp. 386–396. Morgan Kaufmann Publ., 1988.
  21. R. Rivest. Learning decision lists. Machine Learning, 2:229–246, 1988.
  22. F. Stephan. Noisy inference and oracles. In Proc. 6th ALT, LNAI 997, pp. 185–200. Springer-Verlag, 1995.
  23. F. Stephan. Noisy inference and oracles. Theoretical Computer Science, 185:129–157, 1997.
  24. L. Torgo. Controlled redundancy in incremental rule learning. In Proc. ECML, LNAI 667, pp. 185–195. Springer-Verlag, 1993.
  25. L.G. Valiant. A theory of the learnable. Communications of the ACM, 27:1134–1142, 1984.
  26. R. Wiehagen. Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Journal of Information Processing and Cybernetics, 12:93–99, 1976.
  27. T. Zeugmann and S. Lange. A guided tour across the boundaries of learning recursive languages. In Algorithmic Learning for Knowledge-Based Systems, LNAI 961, pp. 193–262. Springer-Verlag, 1995.

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Steffen Lange — Institut für Informatik, Universität Leipzig, Leipzig, Germany
  • Gunter Grieser — Fachbereich Informatik, Technische Universität Darmstadt, Darmstadt, Germany
