
Neural Processing Letters, Volume 6, Issue 1–2, pp. 33–41

A Neural Network for PCA and Beyond

  • Colin Fyfe

Abstract

Principal Component Analysis (PCA) has been implemented by several neural methods. We discuss a network which has previously been shown to find the Principal Component subspace, though not the actual Principal Components themselves. By introducing a constraint on the learning rule (we do not allow the weights to become negative), we cause the same network to find the actual Principal Components. We then use the network to identify individual independent sources when the signals from such sources are ORed together.

Keywords: independence, PCA
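
A minimal sketch of the abstract's idea, assuming a negative-feedback subspace learning rule with the weights clipped at zero after each update; the rule choice, the function name and parameters, and the toy OR-mixed data below are illustrative assumptions rather than the paper's exact setup.

    import numpy as np

    def constrained_subspace_net(X, n_components, lr=1e-3, n_epochs=20, seed=0):
        """Negative-feedback subspace network with non-negative weights.
        X is (n_samples, n_dims); for PCA it should be zero-mean."""
        rng = np.random.default_rng(seed)
        W = rng.uniform(0.0, 0.1, size=(n_components, X.shape[1]))  # small positive start
        for _ in range(n_epochs):
            for x in X:
                y = W @ x                 # feedforward outputs
                e = x - W.T @ y           # input minus negative feedback (residual)
                W += lr * np.outer(y, e)  # Hebbian update on the residual
                W = np.maximum(W, 0.0)    # the constraint: no negative weights
        return W

    # Toy demo of OR-mixed binary sources (hypothetical data, for illustration only)
    rng = np.random.default_rng(1)
    S = rng.integers(0, 2, size=(2000, 2))       # two independent binary sources
    M = np.array([[1, 1, 1, 0],
                  [0, 1, 1, 1]])                 # overlapping source-to-input map
    X = ((S @ M) > 0).astype(float)              # each input is the OR of its sources
    W = constrained_subspace_net(X - X.mean(axis=0), n_components=2)

Without the clipping step this is a plain subspace rule, which converges only to the Principal Component subspace; the abstract's point is that the non-negativity constraint drives the weights to the individual Principal Components and, for OR-mixed data such as the toy example above, toward the individual sources.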



Copyright information

© Kluwer Academic Publishers 1997

Authors and Affiliations

  • Colin Fyfe
    Department of Computing and Information Systems, The University of Paisley, UK
