
From Covariance Matrices to Covariance Operators: Data Representation from Finite to Infinite-Dimensional Settings

  • Hà Quang Minh
  • Vittorio Murino
Chapter
Part of the Advances in Computer Vision and Pattern Recognition book series (ACVPR)

Abstract

This chapter presents recent developments that generalize the data representation framework of finite-dimensional covariance matrices to infinite-dimensional covariance operators in Reproducing Kernel Hilbert Spaces (RKHS). We show that the proper mathematical setting for covariance operators is the infinite-dimensional Riemannian manifold of positive definite Hilbert–Schmidt operators, which generalize symmetric positive definite (SPD) matrices. We then give closed-form formulas for the affine-invariant and Log-Hilbert–Schmidt distances between RKHS covariance operators on this manifold, which generalize the affine-invariant and Log-Euclidean distances, respectively, between SPD matrices. The Log-Hilbert–Schmidt distance in particular can be used to design a two-layer kernel machine that can be applied directly to practical applications such as image classification. Experimental results are provided to illustrate the power of this new paradigm for data representation.
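
To make concrete what the operator distances in the abstract generalize, the following is a minimal sketch in Python/NumPy of their finite-dimensional counterparts: the affine-invariant and Log-Euclidean distances between SPD matrices. This is an illustration under stated assumptions, not the chapter's RKHS formulation; the function names, the regularization constant `eps`, and the synthetic features are our own choices.

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def regularized_covariance(features, eps=1e-6):
    """Covariance descriptor of a feature set (n_samples x n_features),
    with a small ridge so the result is strictly positive definite."""
    C = np.cov(features, rowvar=False)
    return C + eps * np.eye(C.shape[0])

def affine_invariant_distance(A, B):
    """d_AI(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F for SPD inputs."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), "fro")

def log_euclidean_distance(A, B):
    """d_LE(A, B) = || log(A) - log(B) ||_F for SPD inputs."""
    return np.linalg.norm(logm(A) - logm(B), "fro")

# Two covariance descriptors computed from synthetic "image features".
rng = np.random.default_rng(0)
A = regularized_covariance(rng.normal(size=(500, 5)))
B = regularized_covariance(rng.normal(size=(500, 5)))

print(affine_invariant_distance(A, B))
print(log_euclidean_distance(A, B))

# A Gaussian kernel on the Log-Euclidean distance,
# k(A, B) = exp(-gamma * d_LE(A, B)**2), is positive definite,
# so it can feed a standard kernel machine such as an SVM.
gamma = 0.1
print(np.exp(-gamma * log_euclidean_distance(A, B) ** 2))
```

The closing lines hint at the two-layer construction mentioned in the abstract: a first (kernel-induced) representation yields the covariance descriptors, and a Gaussian kernel defined on the distance between them then drives a standard kernel classifier. The chapter's contribution is to carry this out with infinite-dimensional RKHS covariance operators rather than the finite-dimensional matrices sketched here.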

Keywords

Covariance matrices · Covariance operators · Reproducing kernel Hilbert spaces · Hilbert–Schmidt operators · Kernel machines

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Pattern Analysis and Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT), Genova, Italy
