A New Incremental Learning Technique

  • Nick Dunkin
  • John Shawe-Taylor
  • Pascal Koiran
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

We present a new type of constructive algorithm for incremental learning. The algorithm overcomes many of the problems associated with standard back propagation, such as slow training and the difficulty of choosing an optimal network size in advance. We investigate the network's ability to learn and test its resulting generalisation.
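The abstract does not spell out the algorithm itself, but constructive incremental learners in general share one idea: start with a minimal network and add hidden units one at a time until the training error is acceptable, so the network size is determined by the data rather than fixed beforehand. As a minimal sketch only — not the authors' method — the following hypothetical example grows a radial-basis network greedily, centring each new unit on the worst-fit training point and refitting the output weights in closed form (all function names and parameters here are illustrative assumptions):

```python
import numpy as np

def rbf(X, centers, width=1.0):
    # Design matrix of Gaussian hidden-unit responses, plus a bias column.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.hstack([np.exp(-d2 / (2.0 * width ** 2)), np.ones((len(X), 1))])

def grow_network(X, y, tol=1e-3):
    # Generic constructive loop (an illustration, not the paper's algorithm):
    # add one hidden unit per iteration, centred on the training point with
    # the largest residual, and refit output weights by least squares.
    used = [0]                              # indices of points used as centres
    while True:
        H = rbf(X, X[used])
        w, *_ = np.linalg.lstsq(H, y, rcond=None)
        r = np.abs(y - H @ w)
        if r.max() < tol or len(used) == len(X):
            return X[used], w               # network is large enough
        r[used] = -1.0                      # never re-add an existing centre
        used.append(int(r.argmax()))

# XOR is not linearly separable, so units must be added incrementally.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
centers, w = grow_network(X, y)
pred = rbf(X, centers) @ w
```

Because each step only fits a linear output layer, there is no repeated gradient descent over the whole network, which is where such schemes gain speed over standard back propagation; the stopping test replaces the need to guess the network size.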

Keywords

Incremental learning · Neural networks · Back propagation



Copyright information

© Springer-Verlag London Limited 1997

Authors and Affiliations

  • Nick Dunkin (1)
  • John Shawe-Taylor (1)
  • Pascal Koiran (2)

  1. Department of Computer Science, Royal Holloway, University of London, UK
  2. LIP, ENS Lyon, Lyon, France
