
Biological Cybernetics, Volume 72, Issue 6, pp 533–541

Non-linear data structure extraction using simple Hebbian networks

  • Colin Fyfe
  • Roland Baddeley
Article

Abstract

We present a class of neural network algorithms based on simple Hebbian learning which allow the finding of higher-order structure in data. The neural networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing principal component analysis (PCA). In this paper, this is extended to exploratory projection pursuit (EPP), a statistical method for investigating structure in high-dimensional data sets. In contrast to previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related to both the statistical literature on EPP and the neural network literature on non-linear PCA.
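The paper's equations are not reproduced in this excerpt, but the negative-feedback scheme the abstract describes can be sketched as follows. The rule below (feedforward activation y = Wx, residual e = x − Wᵀy fed back, Hebbian update ΔW = η y eᵀ) is the standard negative-feedback network of this kind, which is algebraically equivalent to Oja's subspace rule; the synthetic data, dimensions, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 5-D data: most variance lies along the first two axes,
# so the true principal subspace is known in advance (illustrative choice).
n, d, k = 5000, 5, 2
X = rng.normal(size=(n, d)) * np.array([3.0, 2.0, 0.5, 0.5, 0.5])
X -= X.mean(axis=0)

# Negative-feedback Hebbian network: feedforward y = W x, residual
# e = x - W^T y after feedback, simple Hebbian update dW = eta * y e^T.
# No explicit weight normalisation, decay, or clipping is applied;
# the feedback term itself keeps the weights bounded.
W = 0.01 * rng.normal(size=(k, d))
eta = 0.01
for _ in range(5):                      # a few passes over the data
    for x in X:
        y = W @ x                       # output activations
        e = x - W.T @ y                 # residual after negative feedback
        W += eta * np.outer(y, e)       # Hebbian product of output and residual

# Compare the learned subspace with the true top-k principal subspace.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T @ Vt[:k]               # projector onto the PCA subspace
Q, _ = np.linalg.qr(W.T)
P_net = Q @ Q.T                         # projector onto the network's subspace
overlap = float(np.trace(P_pca @ P_net))  # equals k when the subspaces coincide
print(round(overlap, 3))
```

The EPP extension discussed in the paper replaces the linear output in the weight update with a non-linear function of y, steering the weights toward projections with interesting (non-Gaussian) structure rather than maximal variance.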

Keywords

Neural Network, Principal Component Analysis, Order Structure, Statistical Literature, High Order Structure



Copyright information

© Springer-Verlag 1995

Authors and Affiliations

  • Colin Fyfe, Department of Computer Science, University of Strathclyde, UK
  • Roland Baddeley, Department of Experimental Psychology, University of Oxford, Oxford, UK
