An Unsupervised Learning Rule for Class Discrimination in a Recurrent Neural Network
A number of well-known unsupervised feature extraction neural network models are present in the literature. The development of unsupervised pattern classification systems, although they share many principles with the aforementioned network models, has proven more elusive. This paper describes in detail a neural network capable of performing class separation through self-organizing, Hebbian-like dynamics; that is, the network autonomously discovers classes of patterns without help from any external agent. The model is built around a recurrent network performing winner-takes-all competition. Automatic labelling of input data samples is based on the activity pattern induced by presentation of the sample. Neurons compete against each other through recurrent interactions to code the input sample, and the resulting active neurons update their parameters to improve the classification process. Moreover, the learning dynamics are absolutely stable.
Keywords: Weight Vector, Independent Component Analysis, Input Pattern, Recurrent Neural Network
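The mechanism summarized in the abstract, winner-takes-all competition followed by a Hebbian-like update of the winning unit, can be sketched as follows. This is a generic competitive-learning sketch under assumed choices (learning rate, weight normalization, feedforward match scores in place of the paper's recurrent dynamics), not the paper's exact rule:

```python
import numpy as np

def wta_hebbian_step(W, x, lr=0.1):
    """One competitive-learning step: the unit whose weight vector best
    matches the input wins the competition and moves its weights toward
    the input. A sketch only; the paper resolves the competition through
    recurrent interactions rather than a direct argmax."""
    activations = W @ x                    # feedforward match scores
    winner = int(np.argmax(activations))   # winner-takes-all competition
    # Hebbian-like update of the winner, renormalized for stability
    W[winner] += lr * (x - W[winner])
    W[winner] /= np.linalg.norm(W[winner])
    return winner

# Usage: let two units self-organize on noisy samples from two directions
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))
W /= np.linalg.norm(W, axis=1, keepdims=True)
for _ in range(200):
    c = rng.integers(2)
    x = np.array([1.0, 0.0]) if c == 0 else np.array([0.0, 1.0])
    x += 0.1 * rng.normal(size=2)
    wta_hebbian_step(W, x)
```

After training, the index returned by `wta_hebbian_step` acts as the automatic class label for a sample, which is the role the induced activity pattern plays in the paper's network.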