Scalable Low-Rank Representation

  • Guangcan Liu
  • Shuicheng Yan


Abstract

While the optimization problem associated with LRR is convex and easy to solve, achieving high efficiency remains a big challenge, especially under large-scale settings. In this chapter we therefore address the problem of solving nuclear norm regularized optimization problems (NNROPs), a category of problems that includes LRR. Based on the fact that the optimal solution matrix to an NNROP is often low-rank, we revisit the classic mechanism of low-rank matrix factorization, and on this basis present an active subspace algorithm that solves NNROPs efficiently by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large-size solution matrix into the product of a small-size orthonormal matrix (active subspace) and another small-size matrix. Although such a transformation generally leads to non-convex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) [7] problem, which is a typical example of an NNROP, theoretical results verify the sub-optimality of the solution produced by our algorithm. For general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
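The identity that makes the transformation work can be checked numerically: when Q has orthonormal columns, the nuclear norm of X = QJ equals that of the small factor J, so a nuclear norm penalty on the large matrix can be evaluated on a much smaller one. The NumPy sketch below illustrates this with a one-shot SVD; the matrix sizes are illustrative, and the chapter's actual algorithm instead maintains Q iteratively on the Stiefel manifold inside an augmented Lagrange alternating direction loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# A large matrix X whose rank r is far smaller than its dimensions --
# the regime in which the active subspace transformation pays off.
m, n, r = 200, 150, 5
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Factor X = Q @ J with Q orthonormal (the "active subspace", m x r)
# and J a small r x n matrix. Here Q is read off a one-shot SVD purely
# for illustration.
U, s, _ = np.linalg.svd(X, full_matrices=False)
rank = int(np.sum(s > s[0] * 1e-10))   # numerical rank of X
Q = U[:, :rank]                        # orthonormal columns: Q.T @ Q = I
J = Q.T @ X                            # small factor, rank x n

# Because Q has orthonormal columns, ||X||_* = ||Q @ J||_* = ||J||_*,
# so the nuclear norm of the large X can be computed from the small J.
nuc_X = np.linalg.norm(X, ord='nuc')
nuc_J = np.linalg.norm(J, ord='nuc')
```

The payoff is that an SVD (the dominant cost in nuclear norm solvers) is needed only on the rank-by-n factor J rather than on the full m-by-n matrix.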


Keywords: Nuclear norm optimization · Active subspace · Matrix factorization · Stiefel manifold · Alternating direction method


References

  1. F. Bach, Consistency of trace norm minimization. J. Mach. Learn. Res. 9, 1019–1048 (2008)
  2. S. Burer, R. Monteiro, Local minima and convergence in low-rank semidefinite programming. Math. Program. 103, 427–444 (2005)
  3. J. Cai, S. Osher, Fast singular value thresholding without singular value decomposition. UCLA Technical Report (2010)
  4. J. Cai, E. Candès, Z. Shen, A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)
  5. E. Candès, Y. Plan, Matrix completion with noise. Proc. IEEE 98(6), 925–936 (2010)
  6. E. Candès, B. Recht, Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)
  7. E. Candès, X. Li, Y. Ma, J. Wright, Robust principal component analysis? J. ACM 58(3), 1–37 (2011)
  8. V. Chandrasekaran, S. Sanghavi, P. Parrilo, A. Willsky, Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim. 21(2), 572–596 (2011)
  9. A. Edelman, T. Arias, S. Smith, The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl. 20, 303–353 (1999)
  10. M. Fazel, Matrix rank minimization with applications. Ph.D. thesis, Stanford University (2002)
  11. N. Halko, P. Martinsson, J. Tropp, Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 53(2), 217–288 (2011)
  12. N. Higham, Matrix Procrustes problems (1995)
  13. M. Jaggi, M. Sulovský, A simple algorithm for nuclear norm regularized problems, in International Conference on Machine Learning (2010), pp. 471–478
  14. K.C. Lee, J. Ho, D. Kriegman, Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intell. 27(5), 684–698 (2005)
  15. Z. Lin, M. Chen, L. Wu, Y. Ma, The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical Report UILU-ENG-09-2215 (2009)
  16. Z. Lin, R. Liu, Z. Su, Linearized alternating direction method with adaptive penalty for low-rank representation, in Neural Inf. Process. Syst. (2011), pp. 612–620
  17. G. Liu, Z. Lin, Y. Yu, Robust subspace segmentation by low-rank representation, in Int. Conf. Mach. Learn. (2010), pp. 663–670
  18. G. Liu, S. Yan, Active subspace: toward scalable low-rank learning. Neural Comput. 24(12), 3371–3394 (2012)
  19. G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, Y. Ma, Robust recovery of subspace structures by low-rank representation. IEEE Trans. Pattern Anal. Mach. Intell., Preprint (2012)
  20. K. Min, Z. Zhang, J. Wright, Y. Ma, Decomposing background topics from keywords by principal component pursuit, in Conf. Inf. Knowl. Manag. (2010), pp. 269–278
  21. J. Nocedal, S. Wright, Numerical Optimization (Springer, New York, 2006)
  22. S. Shalev-Shwartz, A. Gonen, O. Shamir, Large-scale convex minimization with a low-rank constraint, in Int. Conf. Mach. Learn. (2011), pp. 329–336
  23. Y. Shen, Z. Wen, Y. Zhang, Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization. Technical Report (2011)
  24. N. Srebro, N. Alon, T. Jaakkola, Generalization error bounds for collaborative prediction with low-rank matrices, in Neural Inf. Process. Syst. (2005), pp. 5–27
  25. R. Tomioka, T. Suzuki, M. Sugiyama, H. Kashima, A fast augmented Lagrangian algorithm for learning low-rank matrices, in Int. Conf. Mach. Learn. (2010), pp. 1087–1094
  26. P. Tseng, On accelerated proximal gradient methods for convex–concave optimization. Submitted to SIAM J. Optim. (2008)
  27. M. Weimer, A. Karatzoglou, Q. Le, A. Smola, CoFi Rank: maximum margin matrix factorization for collaborative ranking, in Neural Inf. Process. Syst. (2007)
  28. C. Williams, M. Seeger, The effect of the input density distribution on kernel-based classifiers, in Int. Conf. Mach. Learn. (2000), pp. 1159–1166
  29. J. Wright, A. Ganesh, S. Rao, Y. Peng, Y. Ma, Robust principal component analysis: exact recovery of corrupted low-rank matrices via convex optimization, in Neural Inf. Process. Syst. (2009), pp. 2080–2088
  30. J. Yang, X. Yuan, An inexact alternating direction method for trace norm regularized least squares problem. Under review at Math. Comput. (2010)
  31. Y. Zhang, Recent advances in alternating direction methods: practice and theory. Tutorial (2010)
  32. Z. Zhang, X. Liang, A. Ganesh, Y. Ma, TILT: transform invariant low-rank textures. Int. J. Comput. Vis. 99(1), 314–328 (2012)
  33. G. Zhu, S. Yan, Y. Ma, Image tag refinement towards low-rank, content-tag prior and error sparsity, in ACM Multimedia (2010), pp. 461–470

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Cornell University, Ithaca, USA
  2. National University of Singapore, Kent Ridge, Singapore
