
Meta-learning for Fast Incremental Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2714)

Abstract

Model-based learning systems usually face the problem of forgetting as a result of incrementally learning new instances. Normally, such systems must re-learn past instances to avoid this problem, but the re-learning process consumes substantial learning time. To reduce learning time, we propose a novel incremental learning system consisting of two neural networks: a main-learning module and a meta-learning module. The main-learning module approximates a continuous function from inputs to desired output values, while the meta-learning module predicts an appropriate change in the main-learning module's parameters for incremental learning. The meta-learning module acquires a learning strategy for modifying the current parameters that not only adjusts the main-learning module's behavior to new instances but also avoids forgetting previously learned skills.
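The two-module architecture described in the abstract can be illustrated with a minimal sketch: a main network that maps inputs to outputs, and a meta-module that, given the main network's current parameters and a new instance, proposes a parameter update to apply in a single step instead of re-training on all past data. All class and function names below are hypothetical illustrations, not the paper's implementation; the meta-module here is left untrained (a zero mapping) purely to show the interface.

```python
import numpy as np

rng = np.random.default_rng(0)

class MainLearner:
    """Single-hidden-layer network approximating y = f(x).
    Hypothetical stand-in for the paper's main-learning module."""
    def __init__(self, n_in=1, n_hidden=8):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def params(self):
        # Flatten all weights into one parameter vector.
        return np.concatenate([self.W1.ravel(), self.b1, self.W2, [self.b2]])

    def set_params(self, p):
        n_h, n_in = self.W1.shape
        i = n_h * n_in
        self.W1 = p[:i].reshape(n_h, n_in)
        self.b1 = p[i:i + n_h]
        self.W2 = p[i + n_h:i + 2 * n_h]
        self.b2 = p[-1]

    def predict(self, x):
        h = np.tanh(self.W1 @ np.atleast_1d(x) + self.b1)
        return self.W2 @ h + self.b2

class MetaLearner:
    """Maps (current parameters, new instance) -> parameter update.
    In the paper this module is itself a trained neural network; here it
    is an untrained linear (zero) mapping, shown only for the interface."""
    def __init__(self, n_params, n_in=1):
        n_feat = n_params + n_in + 1  # current params + x + y
        self.M = np.zeros((n_params, n_feat))

    def predict_update(self, params, x, y):
        feat = np.concatenate([params, np.atleast_1d(x), [y]])
        return self.M @ feat

def incremental_step(main, meta, x, y):
    """One-shot incremental learning: instead of re-learning past
    instances, apply the meta-module's predicted parameter change."""
    p = main.params()
    main.set_params(p + meta.predict_update(p, x, y))

main = MainLearner()
meta = MetaLearner(n_params=main.params().size)
incremental_step(main, meta, x=0.5, y=1.0)
```

The design point is that the cost of incorporating a new instance is one forward pass through the meta-module plus one parameter update, independent of how many past instances exist; the expense of avoiding forgetting is paid once, when the meta-module's update strategy is learned.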

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Oohira, T., Yamauchi, K., Omori, T. (2003). Meta-learning for Fast Incremental Learning. In: Kaynak, O., Alpaydin, E., Oja, E., Xu, L. (eds) Artificial Neural Networks and Neural Information Processing — ICANN/ICONIP 2003. Lecture Notes in Computer Science, vol 2714. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44989-2_20

  • DOI: https://doi.org/10.1007/3-540-44989-2_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40408-8

  • Online ISBN: 978-3-540-44989-8

  • eBook Packages: Springer Book Archive
