Non-Euclidean Principal Component Analysis for Matrices by Hebbian Learning

  • Mandy Lange
  • David Nebel
  • Thomas Villmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8467)


Modern image data analysis frequently relies on matrix norms. Computing these norms, like matrix calculations in general, is often time consuming; complexity reduction is therefore a key concern in image analysis. In this paper we investigate Schatten-p-norms, matrix norms based on the matrix trace operator, under which the vector space of matrices becomes a Banach space. As the first main result we develop a semi-inner product for these Banach spaces which generates the respective norms. We then develop a mathematical theory of eigen-matrices for this setting and, as the second main result, give an online learning scheme for the iterative determination of those eigen-matrices with respect to a covariance operator defined for datasets of matrices/images, which can be used for complexity reduction.
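The Schatten-p-norm named in the abstract is the l_p-norm of a matrix's singular values, and the Hebbian scheme is in the spirit of Oja's learning rule applied to matrix data. The sketch below (assuming NumPy; the function names `schatten_norm` and `oja_matrix_pca` are illustrative, not from the paper) computes a Schatten-p-norm and runs an Oja-style update in the Euclidean special case p = 2, where the semi-inner product reduces to the Frobenius inner product. The paper's semi-inner-product generalization for p ≠ 2 is not reproduced here.

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten-p-norm: the l_p-norm of the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

def oja_matrix_pca(data, eta=0.01, epochs=200, seed=0):
    """Oja-style Hebbian estimate of the leading eigen-matrix of the
    covariance operator of a matrix dataset (Frobenius / p = 2 case)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal(data[0].shape)
    W /= np.linalg.norm(W)               # start from a unit Frobenius norm
    for _ in range(epochs):
        for A in data:
            y = np.sum(A * W)            # Frobenius inner product <A, W>
            W += eta * y * (A - y * W)   # Oja's rule: Hebbian term minus decay
    return W
```

For p = 2 the Schatten norm coincides with the Frobenius norm, and for p = 1 it is the nuclear norm; the Oja iteration converges (up to sign) to the unit-norm eigen-matrix with the largest eigenvalue of the dataset's covariance operator.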


Keywords: Banach space · Covariance operator · Matrix norm · Learning Vector Quantization · Machine learning research
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




References

  1. Biehl, M., Kästner, M., Lange, M., Villmann, T.: Non-Euclidean principal component analysis and Oja's learning rule – theoretical aspects. In: Estevez, P.A., Principe, J.C., Zegers, P. (eds.) Advances in Self-Organizing Maps. AISC, vol. 198, pp. 23–34. Springer, Heidelberg (2013)
  2. Bishop, C.: Pattern Recognition and Machine Learning. Springer Science+Business Media, New York (2006)
  3. Der, R., Lee, D.: Large-margin classification in Banach spaces. In: JMLR Workshop and Conference Proceedings. AISTATS, vol. 2, pp. 91–98 (2007)
  4. Giles, J.: Classes of semi-inner-product spaces. Transactions of the American Mathematical Society 129, 436–446 (1967)
  5. Gu, Z., Shao, M., Li, L., Fu, Y.: Discriminative metric: Schatten norms vs. vector norm. In: Proc. of the 21st International Conference on Pattern Recognition (ICPR 2012), pp. 1213–1216 (2012)
  6. Hein, M., Bousquet, O., Schölkopf, B.: Maximal margin classification for metric spaces. Journal of Computer and System Sciences 71, 333–359 (2005)
  7. Horn, R., Johnson, C.: Matrix Analysis, 2nd edn. Cambridge University Press (2013)
  8. Kaden, M., Lange, M., Nebel, D., Riedel, M., Geweniger, T., Villmann, T.: Aspects in classification learning – review of recent developments in Learning Vector Quantization. Foundations of Computing and Decision Sciences (accepted, 2014)
  9. Lange, M., Biehl, M., Villmann, T.: Non-Euclidean independent component analysis and Oja's learning. In: Verleysen, M. (ed.) Proc. of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2013), Louvain-La-Neuve, Belgium, pp. 125–130 (2013)
  10. Lange, M., Biehl, M., Villmann, T.: Non-Euclidean principal component analysis by Hebbian learning. Neurocomputing (in press, 2014)
  11. Lange, M., Villmann, T.: Derivatives of l_p-norms and their approximations. Machine Learning Reports 7 (MLR-04-2013), 43–59 (2013), ISSN 1865-3960
  12. Lumer, G.: Semi-inner-product spaces. Transactions of the American Mathematical Society 100, 29–43 (1961)
  13. Nie, F., Wang, H., Cai, X., Huang, H., Ding, C.: Robust matrix completion via joint Schatten p-norm and l_p-norm minimization. In: Zaki, M., Siebes, A., Yu, J., Goethals, B., Webb, G., Wu, X. (eds.) Proc. of the 12th IEEE International Conference on Data Mining (ICDM), Brussels, pp. 566–574. IEEE Press (2012)
  14. Oja, E.: Neural networks, principal components and subspaces. International Journal of Neural Systems 1, 61–68 (1989)
  15. Oja, E.: Nonlinear PCA: algorithms and applications. In: Proc. of the World Congress on Neural Networks, Portland, pp. 396–400 (1993)
  16. Sanger, T.: Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks 12, 459–473 (1989)
  17. Schatten, R.: A Theory of Cross-Spaces. Annals of Mathematics Studies, vol. 26. Princeton University Press (1950)
  18. Sonka, M., Hlavac, V., Boyle, R.: Image Processing, Analysis and Machine Vision, 2nd edn. Brooks Publishing (1998)
  19. von Luxburg, U., Bousquet, O.: Distance-based classification with Lipschitz functions. Journal of Machine Learning Research 5, 669–695 (2004)
  20. Zhang, H., Xu, Y., Zhang, J.: Reproducing kernel Banach spaces for machine learning. Journal of Machine Learning Research 10, 2741–2775 (2009)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Mandy Lange (1)
  • David Nebel (1)
  • Thomas Villmann (1)

  1. Computational Intelligence Group, University of Applied Sciences Mittweida, Mittweida, Germany
