Introduction


Abstract

Pattern recognition and data compression are two applications that rely critically on efficient data representation.
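The abstract's point, that both applications hinge on compact data representation, is the motivation for principal component analysis (PCA), the subject of this book. As an illustrative sketch (not code from the chapter), PCA projects centered data onto the leading eigenvectors of its sample covariance matrix, yielding a low-dimensional code that preserves most of the variance:

```python
# Minimal PCA sketch: compress data by projecting onto the top-k
# eigenvectors of the sample covariance matrix.
import numpy as np

def pca_compress(X, k):
    """Return the top-k principal directions W and the projected data Z.

    X: (n_samples, n_features) data matrix; k: number of components kept.
    """
    Xc = X - X.mean(axis=0)                 # center each feature
    C = Xc.T @ Xc / (len(X) - 1)            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :k]             # top-k eigenvectors as columns
    Z = Xc @ W                              # low-dimensional representation
    return W, Z

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
W, Z = pca_compress(X, 2)
print(W.shape, Z.shape)  # (5, 2) (200, 2)
```

This batch eigendecomposition is the baseline; the neural and adaptive algorithms surveyed in this book estimate the same subspaces recursively from streaming data instead of forming the covariance matrix explicitly.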



Author information

Corresponding author

Correspondence to Xiangyu Kong.

Copyright information

© 2017 Science Press, Beijing and Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Kong, X., Hu, C., Duan, Z. (2017). Introduction. In: Principal Component Analysis Networks and Algorithms. Springer, Singapore. https://doi.org/10.1007/978-981-10-2915-8_1

  • DOI: https://doi.org/10.1007/978-981-10-2915-8_1

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-2913-4

  • Online ISBN: 978-981-10-2915-8

  • eBook Packages: Engineering (R0)
