In this paper, we present a novel incremental algorithm for principal component analysis (PCA). The proposed algorithm is covariance-free: it finds the eigenvectors with less computation and storage than incremental PCA methods that maintain a covariance matrix. The major contributions of this paper are to deal explicitly with the changing mean of the data and to use Gram-Schmidt orthogonalization (GSO) to enforce the orthogonality of the eigenvectors. As a result, the algorithm finds more accurate eigenvectors than existing methods. Its performance is evaluated in experiments on data sets with various properties, which show that the proposed method finds eigenvectors closer to those of the batch algorithm than the other incremental methods do.





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Myoung Soo Park
  • Jin Young Choi

School of Electrical Engineering and Computer Science, ASRI, Seoul National University, Seoul, Korea
