Abstract
The Outpost Vector model synthesizes new vectors from two classes of data at their boundary in order to preserve the shape of the current decision boundary and thereby increase classification accuracy. This paper presents an incremental learning preprocessor for the feed-forward neural network (FFNN) that uses the Outpost Vector model to improve classification accuracy on both new and old data. Depending on the specified parameters, the preprocessor generates outpost vectors from selected new samples, from selected prior samples, from both, or generates no outpost vectors at all; these vectors are then included in the final training set together with the selected new and prior samples. The final training set is used to train the FFNN, and the whole process is repeated whenever enough new samples have been collected, so that newer knowledge can be learned. The experiments are conducted on a two-dimensional partition problem in which training and test samples are distributed over a bounded region of a two-dimensional donut ring, and the context of the problem is assumed to shift 45° counterclockwise. There are two classes of data, labeled 0 and 1, and every pair of consecutive partitions is assigned different classes for both new and old data. The experimental results show that outpost vectors generated from selected new samples, from selected prior samples, or from both help improve classification accuracy on all data. A run-time complexity analysis shows that the overhead of the outpost vector generation process is insignificant and is compensated by the improved classification accuracy.
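The preprocessor flow described in the abstract (synthesize outpost vectors near the class boundary, merge them with selected new and prior samples, then retrain) can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the midpoint-based placement rule and the function names `generate_outpost_vectors` and `build_training_set` are assumptions for illustration only.

```python
import math

def generate_outpost_vectors(class_a, class_b):
    """For each sample in class_a, synthesize a vector toward its nearest
    neighbour in class_b, slightly on class_a's side of the midpoint, so
    that the shape of the class boundary is echoed in the training set.
    (Assumed placement rule; the paper's rule may differ.)"""
    outposts = []
    for x in class_a:
        nearest = min(class_b, key=lambda b: math.dist(x, b))
        outposts.append(tuple(xi + 0.49 * (bi - xi)
                              for xi, bi in zip(x, nearest)))
    return outposts

def build_training_set(new_samples, prior_samples,
                       use_new_outposts=True, use_prior_outposts=True):
    """Combine selected new samples, selected prior samples, and any
    requested outpost vectors into one final training set of
    (vector, label) pairs, per the specified parameters."""
    final = list(new_samples) + list(prior_samples)
    for flag, source in ((use_new_outposts, new_samples),
                         (use_prior_outposts, prior_samples)):
        if not flag:
            continue
        for label in (0, 1):            # the two classes, 0 and 1
            a = [v for v, y in source if y == label]
            b = [v for v, y in source if y != label]
            if a and b:
                final += [(v, label)
                          for v in generate_outpost_vectors(a, b)]
    return final
```

The returned set would then be passed to an ordinary FFNN training routine; when enough new samples accumulate, `build_training_set` is invoked again with the shifted data.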
Cite this article
Fuangkhon, P. An incremental learning preprocessor for feed-forward neural network. Artif Intell Rev 41, 183–210 (2014). https://doi.org/10.1007/s10462-011-9304-0