An Unsupervised Learning Rule for Class Discrimination in a Recurrent Neural Network

  • Juan Pablo de la Cruz Gutiérrez
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4131)

Abstract

A number of well-known unsupervised feature-extraction neural network models have been described in the literature. The development of unsupervised pattern classification systems, although such systems share many principles with these models, has proven more elusive. This paper describes in detail a neural network capable of achieving class separation through self-organizing, Hebbian-like dynamics, i.e., the network autonomously discovers classes of patterns without help from any external agent. The model is built around a recurrent network performing winner-takes-all competition. Automatic labelling of input samples is based on the activity pattern induced after a sample is presented. Neurons compete against each other through recurrent interactions to code the input sample. The neurons that remain active update their parameters to improve the classification process. Moreover, the learning dynamics are absolutely stable.
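The abstract describes the pipeline only at a high level: recurrent winner-takes-all competition codes each sample, and the winning units then adapt via a Hebbian-like rule. The sketch below (Python) illustrates that general idea under our own assumptions; the functions wta_classify and hebbian_update, the soft lateral-inhibition recurrence, and the normalised weight update are hypothetical illustrations, not the paper's exact dynamics or learning rule.

```python
import numpy as np

def wta_classify(x, W, inhibition=0.5, steps=50, dt=0.1):
    """Simple recurrent winner-takes-all competition (illustrative only).

    Each unit is driven by the match between its weight vector and the
    input, and is suppressed by the summed activity of the other units.
    """
    a = W @ x                       # feed-forward drive
    r = np.maximum(a, 0.0)          # initial firing rates
    for _ in range(steps):
        lateral = inhibition * (r.sum() - r)            # inhibition from rivals
        r += dt * (-r + np.maximum(a - lateral, 0.0))   # relax toward a fixed point
    return r

def hebbian_update(W, x, r, lr=0.05):
    """Hebbian-like update of the winning unit, with weight normalisation."""
    winner = int(np.argmax(r))
    W[winner] += lr * (x - W[winner])       # move the winner toward the sample
    W[winner] /= np.linalg.norm(W[winner])  # keep the weight vector bounded
    return W, winner

# Toy usage: two Gaussian clusters, two competing units; the winner index
# acts as the automatically discovered class label.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(+1.0, 0.3, (50, 2)),
                  rng.normal(-1.0, 0.3, (50, 2))])
W = rng.normal(size=(2, 2))
W /= np.linalg.norm(W, axis=1, keepdims=True)

for x in rng.permutation(data):
    r = wta_classify(x, W)
    W, label = hebbian_update(W, x, r)
```

In this toy setting the weight vectors drift toward the two cluster centres, so the winner index becomes a consistent, unsupervised label for each cluster; the actual model in the paper additionally guarantees absolute stability of the learning dynamics.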

Keywords

Weight Vector · Independent Component Analysis · Input Pattern · Recurrent Neural Network

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Juan Pablo de la Cruz Gutiérrez
    1. Infineon Technologies AG, Neubiberg, Germany