Optimum Subspace Learning and Error Correction for Tensors

  • Yin Li
  • Junchi Yan
  • Yue Zhou
  • Jie Yang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6313)


Confronted with high-dimensional, tensor-like visual data, we derive a method for decomposing an observed tensor into a low-dimensional structure plus unbounded but sparse irregular patterns. The optimal rank-(R1, R2, ..., RN) tensor decomposition model proposed in this paper automatically explores the low-dimensional structure of the tensor data, seeking the optimal dimension and basis for each mode while separating out the irregular patterns. Consequently, our method accounts for the implicit multi-factor structure of tensor-like visual data in an explicit and concise manner. In addition, the optimal tensor decomposition is formulated as a convex optimization problem through a relaxation technique. We then develop a block coordinate descent (BCD) based algorithm to solve the problem efficiently. In experiments, we show several applications of our method in computer vision, with promising results.
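The decomposition the abstract describes, a low-rank tensor structure plus a sparse error term, can be sketched with a convex-relaxation approach: nuclear-norm (singular value) thresholding on each mode unfolding for the low-rank part, and l1 soft-thresholding for the sparse part, alternated in BCD fashion. The sketch below is illustrative only; the function names, the simple unfolding-average scheme, and the parameter choices are assumptions for exposition, not the authors' exact algorithm.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` for a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    """Singular value thresholding: prox operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Soft thresholding: prox operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def tensor_rpca(X, lam=None, tau=1.0, n_iter=100):
    """Split X into a low-rank part L and a sparse part E by
    alternating proximal (BCD-style) updates."""
    N = X.ndim
    if lam is None:
        lam = 1.0 / np.sqrt(max(X.shape))  # common heuristic weight
    L = np.zeros_like(X)
    E = np.zeros_like(X)
    for _ in range(n_iter):
        # Low-rank update: average SVT over all mode unfoldings of X - E.
        L = sum(fold(svt(unfold(X - E, n), tau), n, X.shape)
                for n in range(N)) / N
        # Sparse update: soft-threshold the residual.
        E = shrink(X - L, lam * tau)
    return L, E
```

Each mode unfolding is thresholded separately and the results are averaged, so every mode's rank is penalized; the soft-threshold step absorbs the "unbounded but sparse" irregular entries that a plain low-rank fit cannot explain.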


Keywords: Irregular Pattern · Matrix Completion · Tensor Decomposition · Optimal Rank · Robust Principal Component Analysis



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Yin Li (1)
  • Junchi Yan (1)
  • Yue Zhou (1)
  • Jie Yang (1)
  1. Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University
