Compressed Sensing and Matrix Completion with Constant Proportion of Corruptions

Abstract

In this paper, we improve existing results in the field of compressed sensing and matrix completion when the sampled data may be grossly corrupted. We introduce three new theorems. (1) In compressed sensing, we show that if the m×n sensing matrix has independent Gaussian entries, then one can recover a sparse signal x exactly by tractable ℓ1 minimization even if a positive fraction of the measurements are arbitrarily corrupted, provided the number of nonzero entries in x is O(m/(log(n/m)+1)). (2) In the very general sensing model introduced in Candès and Plan (IEEE Trans. Inf. Theory 57(11):7235–7254, 2011), and assuming a positive fraction of corrupted measurements, exact recovery still holds if the signal now has O(m/log² n) nonzero entries. (3) Finally, we prove that one can recover an n×n low-rank matrix from m corrupted sampled entries by tractable optimization provided the rank is on the order of m/(n log² n); again, this holds when there is a positive fraction of corrupted samples.
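
To make the recovery program behind result (1) concrete, here is a minimal sketch in Python using the cvxpy package: the signal and the corruption are estimated jointly by minimizing a weighted sum of their ℓ1 norms subject to the measurement constraint. The dimensions, the roughly 10% corruption level, and the trade-off parameter lam are illustrative choices for this sketch, not values prescribed by the theorems.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m, k = 200, 120, 10                         # ambient dimension, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix

    # k-sparse ground-truth signal
    x_true = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x_true[support] = rng.standard_normal(k)

    # grossly corrupt a constant fraction (here ~10%) of the measurements
    e_true = np.zeros(m)
    corrupted = rng.choice(m, size=m // 10, replace=False)
    e_true[corrupted] = 10.0 * rng.standard_normal(corrupted.size)

    y = A @ x_true + e_true                        # observed, corrupted measurements

    # joint l1 minimization over the signal and the error term
    x = cp.Variable(n)
    e = cp.Variable(m)
    lam = 1.0                                      # illustrative trade-off parameter
    problem = cp.Problem(cp.Minimize(cp.norm1(x) + lam * cp.norm1(e)),
                         [A @ x + e == y])
    problem.solve()

    print("relative recovery error:",
          np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))

The matrix recovery result in (3) admits an analogous sketch in which the ℓ1 objective on x is replaced by a nuclear-norm penalty (cp.normNuc in cvxpy) on a matrix variable constrained on the sampled entries.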

References

  1. Agarwal, A., Negahban, S., Wainwright, M.: Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions. In: Proc. 28th Inter. Conf. Mach. Learn. (ICML), pp. 1129–1136 (2011)

  2. Ahlswede, R., Winter, A.: Strong converse for identification via quantum channels. IEEE Trans. Inf. Theory 48(3), 569–579 (2002)

  3. Baraniuk, R., Davenport, M., DeVore, R., Wakin, M.: A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28(3), 253–263 (2008)

  4. Candès, E., Plan, Y.: Matrix completion with noise. In: Proceedings of the IEEE (2009)

  5. Candès, E., Plan, Y.: Near-ideal model selection by ℓ1 minimization. Ann. Stat. 37(5A), 2145–2177 (2009)

  6. Candès, E., Plan, Y.: A probabilistic and RIPless theory of compressed sensing. IEEE Trans. Inf. Theory 57(11), 7235–7254 (2011)

  7. Candès, E., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6) (2009)

  8. Candès, E., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12) (2005)

  9. Candès, E., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)

  10. Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)

  11. Candès, E., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)

  12. Candès, E., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58(3) (2011)

  13. Chandrasekaran, V., Sanghavi, S., Parrilo, P., Willsky, A.: Sparse and low-rank matrix decompositions. In: 15th IFAC Symposium on System Identification (SYSID) (2009)

  14. Chandrasekaran, V., Sanghavi, S., Parrilo, P., Willsky, A.: Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim. 21(2), 572–596 (2011)

  15. Chen, S., Donoho, D., Saunders, M.: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998)

  16. Chen, Y., Jalali, A., Sanghavi, S., Caramanis, C.: Low-rank matrix recovery from errors and erasures. ISIT (2011)

  17. Davidson, K., Szarek, S.: Local operator theory, random matrices and Banach spaces. Handb. Geom. Banach Spaces I(8), 317–366 (2001)

  18. Donoho, D.: For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 59(6), 797–829 (2006)

  19. Donoho, D.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)

  20. Fazel, M.: Matrix rank minimization with applications. Ph.D. Thesis (2002)

  21. Gross, D.: Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inf. Theory 57(3), 1548–1566 (2011)

  22. Gross, D., Liu, Y.-K., Flammia, S., Becker, S., Eisert, J.: Quantum state tomography via compressed sensing. Phys. Rev. Lett. 105(15) (2010)

  23. Haupt, J., Bajwa, W., Rabbat, M., Nowak, R.: Compressed sensing for networked data. IEEE Signal Process. Mag. 25(2), 92–101 (2008)

  24. Hsu, D., Kakade, S., Zhang, T.: Robust matrix decomposition with sparse corruptions. IEEE Trans. Inf. Theory 57(11), 7221–7234 (2011)

  25. Keshavan, R., Montanari, A., Oh, S.: Matrix completion from a few entries. IEEE Trans. Inf. Theory 56(6), 2980–2998 (2010)

  26. Laska, J., Davenport, M., Baraniuk, R.: Exact signal recovery from sparsely corrupted measurements through the pursuit of justice. In: Asilomar Conference on Signals Systems and Computers (2009)

  27. Laska, J., Boufounos, P., Davenport, M., Baraniuk, R.: Democracy in action: quantization, saturation, and compressive sensing. Appl. Comput. Harmon. Anal. 31(3), 429–443 (2011)

  28. Li, Z., Wu, F., Wright, J.: On the systematic measurement matrix for compressed sensing in the presence of gross errors. In: Data Compression Conference, pp. 356–365 (2010)

  29. Nguyen, N., Tran, T.: Exact recoverability from dense corrupted observations via l1 minimization. Preprint (2011)

  30. Nguyen, N., Nasrabadi, N., Tran, T.: Robust lasso with missing and grossly corrupted observations. Preprint (2011)

  31. Recht, B.: A simpler approach to matrix completion. J. Mach. Learn. Res. 12, 3413–3430 (2011)

  32. Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3) (2010)

  33. Romberg, J.: Compressive sensing by random convolution. SIAM J. Imaging Sci. 2(4), 1098–1128 (2009)

  34. Rudelson, M.: Random vectors in the isotropic position. J. Funct. Anal. 164(1), 60–72 (1999)

  35. Rudelson, M., Vershynin, R.: On sparse reconstruction from Fourier and Gaussian measurements. Commun. Pure Appl. Math. 61(8), 1025–1045 (2008)

  36. Studer, C., Kuppinger, P., Pope, G., Bölcskei, H.: Recovery of sparsely corrupted signals. Preprint (2011)

  37. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58(1), 267–288 (1996)

  38. Tropp, J.: User-friendly tail bounds for sums of random matrices. Found. Comput. Math. (2011)

  39. Vershynin, R.: Introduction to the non-asymptotic analysis of random matrices. In: Eldar, Y., Kutyniok, G. (eds.) Compressed Sensing, Theory and Applications, pp. 210–268. Cambridge University Press, Cambridge (2012), Chap. 5

  40. Wright, J., Ma, Y.: Dense error correction via ℓ1-minimization. IEEE Trans. Inf. Theory 56(7), 3540–3560 (2010)

  41. Wright, J., Yang, A.Y., Ganesh, A., Sastry, S., Ma, Y.: Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 210–227 (2009)

  42. Wu, L., Ganesh, A., Shi, B., Matsushita, Y., Wang, Y., Ma, Y.: Robust photometric stereo via low-rank matrix completion and recovery. In: Proceedings of the 10th Asian Conference on Computer Vision, Part III (2010)

  43. Xu, H., Caramanis, C., Sanghavi, S.: Robust PCA via outlier pursuit. In: Adv. Neural Inf. Process. Syst. (NIPS), pp. 2496–2504 (2010)

Acknowledgements

I am grateful to my Ph.D. advisor, Emmanuel Candès, for his encouragement and his help in preparing this manuscript.

Author information

Corresponding author

Correspondence to Xiaodong Li.

Additional information

Communicated by Joel A. Tropp.

Cite this article

Li, X. Compressed Sensing and Matrix Completion with Constant Proportion of Corruptions. Constr Approx 37, 73–99 (2013). https://doi.org/10.1007/s00365-012-9176-9
