Riemannian Sparse Coding for Positive Definite Matrices

  • Anoop Cherian
  • Suvrit Sra
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8691)

Abstract

Inspired by the great success of sparse coding for vector-valued data, our goal is to represent symmetric positive definite (SPD) data matrices as sparse linear combinations of atoms from a dictionary, where each atom itself is an SPD matrix. Since SPD matrices follow a non-Euclidean (in fact, Riemannian) geometry, existing sparse coding techniques for Euclidean data cannot be directly extended. Prior works have approached this problem by defining a sparse coding loss function using either extrinsic similarity measures (such as the log-Euclidean distance) or kernelized variants of statistical measures (such as the Stein divergence or the Jeffreys divergence). In contrast, we propose to use the intrinsic Riemannian distance on the manifold of SPD matrices. Our main contribution is a novel mathematical model for sparse coding of SPD matrices; we also present a computationally simple algorithm for optimizing our model. Experiments on several computer vision datasets demonstrate superior classification and retrieval performance compared with state-of-the-art approaches.
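
To make the setup concrete, the following is a minimal sketch (not the authors' code) of what such a Riemannian sparse coding objective can look like. It uses the standard affine-invariant Riemannian distance d(X, Y) = ||log(X^{-1/2} Y X^{-1/2})||_F on SPD matrices and evaluates an l1-regularized loss for a nonnegative combination of SPD dictionary atoms; the exact formulation and the optimization algorithm in the paper may differ, and all names below are illustrative.

import numpy as np

def inv_sqrt_spd(X):
    # X^{-1/2} via eigendecomposition of the SPD matrix X
    w, V = np.linalg.eigh(X)
    return (V / np.sqrt(w)) @ V.T

def airm_distance(X, Y):
    # Affine-invariant Riemannian distance d(X, Y) = || log(X^{-1/2} Y X^{-1/2}) ||_F
    Xi = inv_sqrt_spd(X)
    ev = np.linalg.eigvalsh(Xi @ Y @ Xi)    # eigenvalues of the "whitened" matrix
    return np.linalg.norm(np.log(ev))       # Frobenius norm of its matrix logarithm

def sparse_coding_loss(alpha, atoms, X, lam=0.1):
    # Illustrative objective: 0.5 * d(X, sum_i alpha_i B_i)^2 + lam * ||alpha||_1.
    # Nonnegative alpha (not all zero) keeps the combination of SPD atoms B_i SPD.
    B_alpha = sum(a * B for a, B in zip(alpha, atoms))
    return 0.5 * airm_distance(X, B_alpha) ** 2 + lam * np.sum(alpha)

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
def rand_spd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

atoms = [rand_spd(5) for _ in range(10)]   # hypothetical dictionary of SPD atoms
X = rand_spd(5)                            # an SPD data matrix (e.g., a region covariance)
alpha = rng.uniform(0.1, 1.0, size=10)     # nonnegative coefficient vector
print(sparse_coding_loss(alpha, atoms, X))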

Keywords

Sparse coding · Riemannian distance · Region covariances



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Anoop Cherian (1)
  • Suvrit Sra (2)
  1. LEAR team, Inria Grenoble Rhône-Alpes, France
  2. Max Planck Institute for Intelligent Systems, Tübingen, Germany