On the Discrete-Time Dynamics of Cross-Coupled Hebbian Algorithm

Journal of Computer Science and Technology

Abstract

Principal/minor component analysis (PCA/MCA), generalized principal/minor component analysis (GPCA/GMCA), and singular value decomposition (SVD) algorithms are important techniques for feature extraction. In the convergence analysis of these algorithms, the deterministic discrete-time (DDT) method effectively reveals the dynamic behavior of PCA/MCA and GPCA/GMCA algorithms. However, the dynamic behavior of SVD algorithms has not been studied quantitatively because of their special structure. In this paper, for the first time, we exploit the advantages of the DDT method in analyzing PCA algorithms to study the dynamics of SVD algorithms. First, taking the cross-coupled Hebbian algorithm as an example, we concatenate the two cross-coupled variables into a single vector and thereby obtain a PCA-like DDT system. Second, based on the DDT method, we analyze the discrete-time dynamic behavior and stability of this PCA-like DDT system in detail and derive bounds on the weight vectors and the learning rate. Moreover, further discussion shows that the proposed method generalizes to the analysis of other SVD algorithms. As a result, the proposed method provides a new way to study the dynamical convergence properties of SVD algorithms.
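
To make the concatenation idea concrete, the following sketch (Python/NumPy, not the authors' implementation) iterates one common form of the cross-coupled Hebbian rule for the principal singular triplet of a cross-covariance matrix, and then repeats the same iteration over the concatenated vector w = [u; v] driven by the symmetric block matrix [[0, A], [A^T, 0]]; the block form is what gives the recursion its PCA-like structure. The matrix A, the constant learning rate eta, and this particular update rule are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 4
A = rng.standard_normal((m, n))       # plays the role of the cross-covariance matrix E[x(k) y(k)^T]
eta = 0.05                            # constant learning rate, assumed small

u0 = rng.standard_normal(m)
v0 = rng.standard_normal(n)
u0 /= np.linalg.norm(u0)              # unit-norm initial weight vectors
v0 /= np.linalg.norm(v0)

# (1) Cross-coupled Hebbian updates in their coupled form.
u, v = u0.copy(), v0.copy()
for _ in range(5000):
    s = u @ A @ v                     # running estimate of the principal singular value
    u_next = u + eta * (A @ v - s * u)
    v_next = v + eta * (A.T @ u - s * v)
    u, v = u_next, v_next

U, S, Vt = np.linalg.svd(A)
print("learned sigma_1:", u @ A @ v, "   true sigma_1:", S[0])
print("|<u, u_1>| =", abs(u @ U[:, 0]), "   |<v, v_1>| =", abs(v @ Vt[0]))

# (2) The same recursion over the concatenated vector w = [u; v]:
# with B = [[0, A], [A^T, 0]], both updates collapse into a single
# Oja/PCA-like step, which is the structure the DDT analysis can exploit.
B = np.block([[np.zeros((m, m)), A],
              [A.T, np.zeros((n, n))]])
w = np.concatenate([u0, v0])
for _ in range(5000):
    s = w[:m] @ A @ w[m:]             # same u^T A v term as above
    w = w + eta * (B @ w - s * w)

# Difference between the stacked iterate and the coupled iterates is essentially zero.
print("max |w - [u; v]| =", np.abs(w - np.concatenate([u, v])).max())
```

Because the stacked update is formally an Oja-type PCA recursion, DDT-style arguments about the boundedness of the weight vector and the admissible learning rate can be carried over to the SVD setting, which is the route the paper takes.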

Author information

Corresponding author

Correspondence to Xiang-Yu Kong.

Supplementary Information

ESM 1 (PDF 126 kb)

About this article

Cite this article

Feng, XW., Kong, XY., He, C. et al. On the Discrete-Time Dynamics of Cross-Coupled Hebbian Algorithm. J. Comput. Sci. Technol. 37, 252–265 (2022). https://doi.org/10.1007/s11390-021-0655-y
