Local Sensitive Low Rank Matrix Approximation via Nonconvex Optimization

  • Chong-Ya Li
  • Wenzheng Bao
  • Zhipeng Li
  • Youhua Zhang
  • Yong-Li Jiang
  • Chang-An Yuan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10363)

Abstract

The problem of matrix approximation arises ubiquitously in recommender systems, computer vision, and text mining. The prevailing assumption is that the partially observed matrix is low-rank or can be well approximated by a low-rank matrix. This assumption, however, requires the observed matrix to be globally low-rank. In this paper, we propose a local sensitive formulation of matrix approximation that relaxes the global low-rank assumption and represents the observed matrix as a weighted sum of low-rank matrices. We solve the resulting problem by nonconvex optimization, which yields more accurate low-rank estimates than convex relaxation. Our experiments show improvements in prediction accuracy over classical approaches on recommendation tasks.
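The weighted-sum-of-local-low-rank-models idea described above (in the spirit of local low-rank matrix approximation, LLORMA) can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the function names, the index-based Epanechnikov-style kernel, and the gradient-descent factorizer are all assumptions introduced here for exposition.

```python
import numpy as np

def factorize(R, mask, W, rank=2, lr=0.01, reg=0.1, iters=200, seed=0):
    """Nonconvex weighted factorization: gradient descent on U, V to
    minimize sum of W * mask * (R - U V^T)^2 plus Frobenius regularization."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        E = mask * W * (U @ V.T - R)          # weighted residual on observed entries
        U -= lr * (E @ V + reg * U)
        V -= lr * (E.T @ U + reg * V)
    return U @ V.T

def local_low_rank_complete(R, mask, n_anchors=3, rank=2, h=2.0, seed=0):
    """Approximate R as a kernel-weighted sum of local low-rank models.
    Anchors are observed entries; here the kernel uses plain index distance
    as a stand-in for the latent-space distance a real method would use."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    rows, cols = np.nonzero(mask)
    idx = rng.choice(len(rows), size=n_anchors, replace=False)
    num = np.zeros_like(R, dtype=float)
    den = np.zeros_like(R, dtype=float)
    I, J = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
    for a in idx:
        ra, ca = rows[a], cols[a]
        d2 = ((I - ra) / m) ** 2 + ((J - ca) / n) ** 2
        W = np.maximum(0.0, 1.0 - d2 / h**2)  # Epanechnikov-style kernel weights
        num += W * factorize(R, mask, W, rank=rank, seed=int(a))
        den += W
    return num / np.maximum(den, 1e-12)       # normalized weighted combination
```

Each anchor fits its own low-rank model, weighted toward entries near the anchor, so the combined estimate can capture matrices that are only locally (not globally) low-rank.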

Keywords

Matrix completion · Low rank · Nonconvex optimization · Recommender systems

Acknowledgments

This work was supported by grants from the National Science Foundation of China (Nos. 61472280, 61472173, 61572447, 61373098, 61672382, 61672203, 61402334, 61520106006, 31571364, U1611265, and 61532008) and the China Postdoctoral Science Foundation (Grant No. 2016M601646).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Chong-Ya Li (1)
  • Wenzheng Bao (1)
  • Zhipeng Li (1)
  • Youhua Zhang (2)
  • Yong-Li Jiang (3)
  • Chang-An Yuan (4)
  1. School of Electronics and Information Engineering, Tongji University, Shanghai, China
  2. School of Information and Computer, Anhui Agricultural University, Hefei, China
  3. Ningbo Haisvision Intelligence System Co., Ltd., Ningbo, China
  4. Science Computing and Intelligent Information Processing of Guangxi Higher Education Key Laboratory, Guangxi Teachers Education University, Nanning, Guangxi, China
