Abstract
Model-based learning systems usually face the problem of forgetting as a result of incrementally learning new instances. Normally, such systems must re-learn past instances to avoid this problem, but the re-learning process consumes substantial learning time. To reduce learning time, we propose a novel incremental learning system consisting of two neural networks: a main-learning module and a meta-learning module. The main-learning module approximates a continuous function between input and desired output values, while the meta-learning module predicts an appropriate change in the main-learning module's parameters for incremental learning. The meta-learning module acquires a learning strategy for modifying the current parameters that not only adjusts the main-learning module's behavior to fit new instances but also avoids forgetting previously learned skills.
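The two-module architecture described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the main-learning module is reduced to a linear model, and the meta-learning module is a placeholder whose proposed update mimics a plain gradient step (in the paper it is a trained neural network that learns updates which fit the new instance while limiting forgetting). All class and function names here are hypothetical.

```python
import numpy as np

class MainModule:
    """Main-learning module: approximates a continuous input-output mapping.
    A linear model stands in for the neural network used in the paper."""
    def __init__(self, dim):
        rng = np.random.default_rng(0)
        self.w = rng.normal(size=dim)
        self.b = 0.0

    def predict(self, x):
        return self.w @ x + self.b

class MetaModule:
    """Meta-learning module: maps (input, prediction error) to a parameter
    change for the main module. Placeholder behavior: a gradient-like step;
    the paper trains a network to produce forgetting-aware updates."""
    def __init__(self, step=0.1):
        self.step = step

    def propose_update(self, x, error):
        return self.step * error * x, self.step * error

def incremental_step(main, meta, x, target):
    """One incremental-learning step: the meta module predicts the parameter
    change and the main module applies it, with no replay of past instances."""
    error = target - main.predict(x)
    dw, db = meta.propose_update(x, error)
    main.w += dw
    main.b += db
    return error

main = MainModule(dim=3)
meta = MetaModule(step=0.1)
x = np.array([1.0, -0.5, 2.0])
err_before = abs(4.0 - main.predict(x))
incremental_step(main, meta, x, target=4.0)
err_after = abs(4.0 - main.predict(x))
```

With this gradient-like placeholder, a single step shrinks the error on the new instance; the point of the paper's trained meta-module is that its predicted updates additionally protect performance on past instances, which a plain gradient step does not.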
© 2003 Springer-Verlag Berlin Heidelberg
Oohira, T., Yamauchi, K., Omori, T. (2003). Meta-learning for Fast Incremental Learning. In: Kaynak, O., Alpaydin, E., Oja, E., Xu, L. (eds) Artificial Neural Networks and Neural Information Processing — ICANN/ICONIP 2003. ICANN ICONIP 2003 2003. Lecture Notes in Computer Science, vol 2714. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44989-2_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40408-8
Online ISBN: 978-3-540-44989-8
eBook Packages: Springer Book Archive