
Efficient Matrix Sensing Using Rank-1 Gaussian Measurements

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9355)

Abstract

In this paper, we study the problem of low-rank matrix sensing, where the goal is to reconstruct a matrix exactly from a small number of linear measurements. Existing methods for the problem either rely on measurement operators, such as random element-wise sampling, that cannot recover arbitrary low-rank matrices, or require the measurement operator to satisfy the Restricted Isometry Property (RIP). However, RIP-based linear operators are generally full rank and incur large computation and storage costs for both measurement (encoding) and reconstruction (decoding).

We propose simple rank-one Gaussian measurement operators for matrix sensing that are significantly less expensive in terms of memory and computation for both encoding and decoding. Moreover, we show that the matrix can be reconstructed exactly using either a simple alternating minimization method or a nuclear-norm minimization method. Finally, we demonstrate the effectiveness of the measurement scheme vis-à-vis existing RIP-based methods.
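The measurement and recovery pipeline the abstract describes can be sketched in NumPy. This is a minimal toy illustration, not the authors' implementation: the matrix dimensions, sample count, iteration budget, and the spectral initialization (which uses the fact that E[y_i u_i v_iᵀ] equals the target matrix for standard Gaussian u_i, v_i) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r, m = 20, 15, 2, 2000  # matrix size, rank, number of measurements

# Ground-truth rank-r matrix W* (illustrative sizes, not from the paper)
W_star = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# Rank-one Gaussian measurements: y_i = u_i^T W* v_i
U_m = rng.standard_normal((m, n1))
V_m = rng.standard_normal((m, n2))
y = np.einsum('ij,jk,ik->i', U_m, W_star, V_m)

# Spectral initialization: E[y_i u_i v_i^T] = W*, so the top right singular
# vectors of the empirical average give a rough basis for the row space.
M = (U_m * y[:, None]).T @ V_m / m
_, _, Vt = np.linalg.svd(M)
V = Vt[:r].T

# Alternating minimization: for fixed V, y_i = u_i^T U (V^T v_i) is linear
# in the entries of U, so each half-step is an ordinary least-squares solve.
for _ in range(30):
    A = np.einsum('ip,ik->ipk', U_m, V_m @ V).reshape(m, -1)
    U = np.linalg.lstsq(A, y, rcond=None)[0].reshape(n1, r)
    A = np.einsum('iq,ik->iqk', V_m, U_m @ U).reshape(m, -1)
    V = np.linalg.lstsq(A, y, rcond=None)[0].reshape(n2, r)

W_hat = U @ V.T
err = np.linalg.norm(W_hat - W_star) / np.linalg.norm(W_star)
print(f"relative recovery error: {err:.2e}")
```

With roughly 28x oversampling relative to the (n1 + n2)r degrees of freedom, the alternating least-squares iterations drive the relative error down rapidly in this toy setting; the paper's guarantees concern the exact-recovery regime for such rank-one Gaussian operators.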



Author information

Correspondence to Inderjit S. Dhillon.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material (pdf 468 KB)


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Zhong, K., Jain, P., Dhillon, I.S. (2015). Efficient Matrix Sensing Using Rank-1 Gaussian Measurements. In: Chaudhuri, K., Gentile, C., Zilles, S. (eds) Algorithmic Learning Theory. ALT 2015. Lecture Notes in Computer Science, vol. 9355. Springer, Cham. https://doi.org/10.1007/978-3-319-24486-0_1

  • DOI: https://doi.org/10.1007/978-3-319-24486-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-24485-3

  • Online ISBN: 978-3-319-24486-0

  • eBook Packages: Computer Science (R0)
