International Conference on Algorithmic Learning Theory

Algorithmic Learning Theory, pp. 3–18

Efficient Matrix Sensing Using Rank-1 Gaussian Measurements

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9355)

Abstract

In this paper, we study the problem of low-rank matrix sensing, where the goal is to reconstruct a matrix exactly from a small number of linear measurements. Existing methods for the problem either rely on measurement operators such as random element-wise sampling, which cannot recover arbitrary low-rank matrices, or require the measurement operator to satisfy the Restricted Isometry Property (RIP). However, RIP-based linear operators are generally full rank and incur large computation and storage costs for both measurement (encoding) and reconstruction (decoding).

We propose simple rank-one Gaussian measurement operators for matrix sensing that are significantly less expensive, in terms of both memory and computation, for encoding and decoding. Moreover, we show that the matrix can be reconstructed exactly using either a simple alternating minimization method or a nuclear-norm minimization method. Finally, we demonstrate the effectiveness of the measurement scheme vis-à-vis existing RIP-based methods.
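The measurement model described above can be sketched numerically: each measurement is b_i = x_iᵀ M y_i for Gaussian vectors x_i, y_i (a rank-one operator x_i y_iᵀ), and recovery alternates least-squares solves for the two low-rank factors. The snippet below is an illustrative NumPy sketch under assumed dimensions and iteration counts, using a spectral-style initialization from the measurements; it is not the paper's exact algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 20, 20, 2
m = 400  # number of rank-one measurements (assumed ample for this toy size)

# Ground-truth rank-r matrix M = U* V*^T
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# Rank-one Gaussian measurements: b_i = x_i^T M y_i
X = rng.standard_normal((m, n1))
Y = rng.standard_normal((m, n2))
b = np.einsum('ij,jk,ik->i', X, M, Y)

# Spectral-style initialization: top-r right singular vectors of (1/m) sum_i b_i x_i y_i^T
M0 = (X * b[:, None]).T @ Y / m
V = np.linalg.svd(M0)[2][:r].T          # (n2, r)

# Alternating minimization: each subproblem is an ordinary least-squares solve
for _ in range(30):
    # Fix V, solve for U: b_i = sum_{j,k} x_i[j] U[j,k] (V^T y_i)[k]
    Z = Y @ V                            # rows are V^T y_i, shape (m, r)
    A = np.einsum('ij,ik->ijk', X, Z).reshape(m, n1 * r)
    U = np.linalg.lstsq(A, b, rcond=None)[0].reshape(n1, r)
    # Fix U, solve for V symmetrically
    W = X @ U                            # rows are U^T x_i, shape (m, r)
    A = np.einsum('ij,ik->ijk', Y, W).reshape(m, n2 * r)
    V = np.linalg.lstsq(A, b, rcond=None)[0].reshape(n2, r)

M_hat = U @ V.T
err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

Note that encoding with rank-one operators needs only the vectors x_i, y_i (O(m(n1 + n2)) storage), whereas a dense RIP-style operator would store m full n1 × n2 matrices.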

Keywords

Matrix sensing · Matrix completion · Inductive learning · Alternating minimization


Supplementary material

Supplementary material (PDF, 468 KB): 370963_1_En_1_MOESM1_ESM.pdf


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. University of Texas at Austin, Austin, USA
  2. Microsoft Research, Bangalore, India
