
Incremental Learning with Respect to New Incoming Input Attributes

  • Published in: Neural Processing Letters 14 (2001)

Abstract

Neural networks are generally exposed to dynamic environments in which training patterns or input attributes (features) are likely to be introduced into the current domain incrementally. This Letter considers the situation where a new set of input attributes must be incorporated into an existing neural network. The conventional approach is to discard the existing network and redesign one from scratch, which wastes the old knowledge and the previous training effort. To reduce computational time, improve generalization accuracy, and enhance the intelligence of the learned models, we present the ILIA algorithms (namely ILIA1, ILIA2, ILIA3, ILIA4 and ILIA5), which are capable of Incremental Learning in terms of Input Attributes. With the ILIA algorithms, when new input attributes are introduced into the original problem, the existing neural network is retained while a new sub-network is constructed and trained incrementally; the new sub-network and the old one are then merged to form a network for the changed problem. In addition, the ILIA algorithms can decide whether the new incoming input attributes are relevant to the output and consistent with the existing input attributes, and suggest accepting or rejecting them accordingly. Experimental results show that the ILIA algorithms are efficient and effective for both classification and regression problems.
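To make the retain-and-merge idea concrete, below is a minimal sketch, assuming a PyTorch implementation. It is an illustration under stated assumptions, not the paper's method: the class name ILIANet, the layer sizes, and the summed-output merging scheme are all hypothetical, and the abstract does not specify how the five ILIA variants differ.

```python
# Minimal sketch of incremental learning over input attributes,
# assuming a PyTorch implementation. All names (ILIANet, sub_net)
# and the summed-output merge are illustrative assumptions; the
# paper's actual ILIA1-ILIA5 constructions may differ.
import torch
import torch.nn as nn

class ILIANet(nn.Module):
    def __init__(self, old_net: nn.Module, n_new_attrs: int,
                 n_hidden: int, n_outputs: int):
        super().__init__()
        self.old_net = old_net
        # Retain the old knowledge: the existing network is frozen,
        # so no previous training effort is discarded.
        for p in self.old_net.parameters():
            p.requires_grad = False
        # The new sub-network sees only the newly introduced
        # attributes and is trained incrementally on its own.
        self.sub_net = nn.Sequential(
            nn.Linear(n_new_attrs, n_hidden),
            nn.Sigmoid(),
            nn.Linear(n_hidden, n_outputs),
        )

    def forward(self, x_old: torch.Tensor, x_new: torch.Tensor) -> torch.Tensor:
        # Merge the old network and the new sub-network into one model
        # for the changed problem; summing their output contributions
        # is one plausible merging scheme.
        return self.old_net(x_old) + self.sub_net(x_new)

# Only the sub-network's parameters are handed to the optimizer, so
# training on the extended attribute set leaves the old network intact.
old_net = nn.Sequential(nn.Linear(8, 16), nn.Sigmoid(), nn.Linear(16, 1))
model = ILIANet(old_net, n_new_attrs=3, n_hidden=16, n_outputs=1)
optimizer = torch.optim.SGD(model.sub_net.parameters(), lr=0.1)

x_old, x_new = torch.randn(32, 8), torch.randn(32, 3)
target = torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x_old, x_new), target)
optimizer.zero_grad()
loss.backward()
loss_value = loss.item()
optimizer.step()
```

A relevance check of the kind the abstract describes could, for instance, compare validation error with and without the merged sub-network and reject the new attributes when they do not improve it; the abstract does not state the actual acceptance criterion used by ILIA.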

Author information

Correspondence to Sheng-Uei Guan.



Cite this article

Guan, S.-U. and Li, S.: Incremental Learning with Respect to New Incoming Input Attributes, Neural Processing Letters 14 (2001), 241–260. https://doi.org/10.1023/A:1012799113953
