Time-Oriented Hierarchical Method for Computation of Minor Components

  • M. Jankovic
  • H. Ogawa


This paper proposes a general method that transforms known neural-network minor subspace analysis (MSA) algorithms into minor component analysis (MCA) algorithms. The method uses two distinct time scales. On the faster time scale, a given MSA algorithm governs the “behavior” of all output neurons; on this scale the minor subspace is obtained. On the slower time scale, the output neurons compete to fulfill their “own interests”; on this scale the basis vectors of the minor subspace are rotated toward the minor eigenvectors. The result is a time-oriented hierarchical method. A simplified mathematical analysis and simulation results are presented.
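The two-time-scale separation described above can be illustrated with a small numerical sketch: a fast anti-Hebbian subspace step (with re-orthonormalization) tracks the minor subspace, while a slower rotation step aligns the basis with individual minor eigenvectors. The Python sketch below is a minimal illustration under stated assumptions; the fast rule (a plain anti-Hebbian update stabilized by QR) and the slow rotation (a per-epoch diagonalization of the projected covariance) are illustrative stand-ins, not the authors' exact learning rule, and the function name `toh_mca_sketch` is hypothetical.

```python
import numpy as np

def toh_mca_sketch(X, m, eta=0.005, epochs=20, seed=0):
    """Two-time-scale sketch of minor component extraction.

    Fast scale (every sample): an anti-Hebbian step followed by QR
    re-orthonormalization, so the columns of W track an orthonormal
    basis of the minor subspace.
    Slow scale (every epoch): the basis is rotated toward individual
    minor eigenvectors by diagonalizing the covariance projected onto
    the current subspace. Both rules are illustrative stand-ins, not
    the paper's exact learning rule.
    """
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[1], m)))

    for _ in range(epochs):
        for x in X:
            y = W.T @ x                  # output neuron activities
            W -= eta * np.outer(x, y)    # fast scale: anti-Hebbian step
            W, _ = np.linalg.qr(W)       # keep the basis orthonormal
        Z = X @ W                        # data in the learned subspace
        _, V = np.linalg.eigh(Z.T @ Z / len(X))  # ascending eigenvalues
        W = W @ V                        # slow scale: rotate the basis

    return W

# Usage: recover the two smallest-variance directions of synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 4)) @ rng.standard_normal((4, 4))
W = toh_mca_sketch(X, m=2)
_, E = np.linalg.eigh(X.T @ X / len(X))
print(np.abs(E[:, :2].T @ W))  # near the 2x2 identity if aligned
```

On synthetic data, the recovered columns of W align (up to sign) with the two smallest-eigenvalue eigenvectors of the sample covariance, mirroring the idea that the slow scale rotates the subspace basis toward the minor eigenvectors.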


Keywords: Weight Vector · Learning Rule · Output Neuron · Blind Signal · Hebbian Learning





Copyright information

© Springer-Verlag/Wien 2005

Authors and Affiliations

  • M. Jankovic (1)
  • H. Ogawa (2)
  1. Control Department, EE Institute “Nikola Tesla”, Serbia and Montenegro
  2. Department of Computer Science, Tokyo Institute of Technology, Japan
