Abstract
In this paper, we improve existing results in compressed sensing and matrix completion when the sampled data may be grossly corrupted. We introduce three new theorems. (1) In compressed sensing, we show that if the m×n sensing matrix has independent Gaussian entries, then one can recover a sparse signal x exactly by tractable ℓ1 minimization even if a positive fraction of the measurements are arbitrarily corrupted, provided the number of nonzero entries in x is O(m/(log(n/m)+1)). (2) In the very general sensing model introduced in Candès and Plan (IEEE Trans. Inf. Theory 57(11):7235–7254, 2011), and assuming a positive fraction of corrupted measurements, exact recovery still holds if the signal now has O(m/log² n) nonzero entries. (3) Finally, we prove that one can recover an n×n low-rank matrix from m corrupted sampled entries by tractable optimization provided the rank is O(m/(n log² n)); again, this holds when there is a positive fraction of corrupted samples.
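The recovery program behind result (1) can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it recovers a sparse signal x from Gaussian measurements of which a few are grossly corrupted, by solving min ‖x‖₁ + ‖e‖₁ subject to Ax + e = y, recast as a linear program. The dimensions, the random seed, the equal weighting of the two ℓ1 terms, and the use of scipy.optimize.linprog are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k, s = 60, 100, 3, 2  # measurements, ambient dimension, sparsity, corruptions

# Sensing matrix with independent Gaussian entries, and a k-sparse signal.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
supp = rng.choice(n, size=k, replace=False)
x_true[supp] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))

# Grossly corrupt a small fraction of the measurements.
e_true = np.zeros(m)
corr = rng.choice(m, size=s, replace=False)
e_true[corr] = rng.choice([-1.0, 1.0], size=s) * (5.0 + 5.0 * rng.random(s))
y = A @ x_true + e_true

# Solve  min ||x||_1 + ||e||_1  s.t.  A x + e = y
# as an LP via the nonnegative splits x = x+ - x-, e = e+ - e-.
c = np.ones(2 * n + 2 * m)
A_eq = np.hstack([A, -A, np.eye(m), -np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
z = res.x
x_hat = z[:n] - z[n:2 * n]          # recovered signal
e_hat = z[2 * n:2 * n + m] - z[2 * n + m:]  # recovered corruption

print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

In this regime (k and s well below the thresholds the theorem allows), the LP typically recovers x exactly up to solver precision, without knowing which measurements were corrupted.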
References
Agarwal, A., Negahban, S., Wainwright, M.: Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions. In: Proc. 28th Inter. Conf. Mach. Learn. (ICML), pp. 1129–1136 (2011)
Ahlswede, R., Winter, A.: Strong converse for identification via quantum channels. IEEE Trans. Inf. Theory 48(3), 569–579 (2002)
Baraniuk, R., Davenport, M., DeVore, R., Wakin, M.: A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28(3), 253–263 (2008)
Candès, E., Plan, Y.: Matrix completion with noise. Proc. IEEE 98(6), 925–936 (2010)
Candès, E., Plan, Y.: Near-ideal model selection by ℓ1 minimization. Ann. Stat. 37(5A), 2145–2177 (2009)
Candès, E., Plan, Y.: A probabilistic and RIPless theory of compressed sensing. IEEE Trans. Inf. Theory 57(11), 7235–7254 (2011)
Candès, E., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6) (2009)
Candès, E., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12) (2005)
Candès, E., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)
Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)
Candès, E., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
Candès, E., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58(3) (2011)
Chandrasekaran, V., Sanghavi, S., Parrilo, P., Willsky, A.: Sparse and low-rank matrix decompositions. In: 15th IFAC Symposium on System Identification (SYSID) (2009)
Chandrasekaran, V., Sanghavi, S., Parrilo, P., Willsky, A.: Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim. 21(2), 572–596 (2011)
Chen, S., Donoho, D., Saunders, M.: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998)
Chen, Y., Jalali, A., Sanghavi, S., Caramanis, C.: Low-rank matrix recovery from errors and erasures. In: IEEE Int. Symp. Inf. Theory (ISIT) (2011)
Davidson, K., Szarek, S.: Local operator theory, random matrices and Banach spaces. Handb. Geom. Banach Spaces I(8), 317–366 (2001)
Donoho, D.: For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 59(6), 797–829 (2006)
Donoho, D.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)
Fazel, M.: Matrix rank minimization with applications. Ph.D. Thesis (2002)
Gross, D.: Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inf. Theory 57(3), 1548–1566 (2011)
Gross, D., Liu, Y.-K., Flammia, S., Becker, S., Eisert, J.: Quantum state tomography via compressed sensing. Phys. Rev. Lett. 105(15) (2010)
Haupt, J., Bajwa, W., Rabbat, M., Nowak, R.: Compressed sensing for networked data. IEEE Signal Process. Mag. 25(2), 92–101 (2008)
Hsu, D., Kakade, S., Zhang, T.: Robust matrix decomposition with sparse corruptions. IEEE Trans. Inf. Theory 57(11), 7221–7234 (2011)
Keshavan, R., Montanari, A., Oh, S.: Matrix completion from a few entries. IEEE Trans. Inf. Theory 56(6), 2980–2998 (2010)
Laska, J., Davenport, M., Baraniuk, R.: Exact signal recovery from sparsely corrupted measurements through the pursuit of justice. In: Asilomar Conference on Signals Systems and Computers (2009)
Laska, J., Boufounos, P., Davenport, M., Baraniuk, R.: Democracy in action: quantization, saturation, and compressive sensing. Appl. Comput. Harmon. Anal. 31(3), 429–443 (2011)
Li, Z., Wu, F., Wright, J.: On the systematic measurement matrix for compressed sensing in the presence of gross errors. In: Data Compression Conference, pp. 356–365 (2010)
Nguyen, N., Tran, T.: Exact recoverability from dense corrupted observations via l1 minimization. Preprint (2011)
Nguyen, N., Nasrabadi, N., Tran, T.: Robust lasso with missing and grossly corrupted observations. Preprint (2011)
Recht, B.: A simpler approach to matrix completion. J. Mach. Learn. Res. 12, 3413–3430 (2011)
Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3) (2010)
Romberg, J.: Compressive sensing by random convolution. SIAM J. Imaging Sci. 2(4), 1098–1128 (2009)
Rudelson, M.: Random vectors in the isotropic position. J. Funct. Anal. 164(1), 60–72 (1999)
Rudelson, M., Vershynin, R.: On sparse reconstruction from Fourier and Gaussian measurements. Commun. Pure Appl. Math. 61(8), 1025–1045 (2008)
Studer, C., Kuppinger, P., Pope, G., Bölcskei, H.: Recovery of sparsely corrupted signals. Preprint (2011)
Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58(1), 267–288 (1996)
Tropp, J.: User-friendly tail bounds for sums of random matrices. Found. Comput. Math. (2011)
Vershynin, R.: Introduction to the non-asymptotic analysis of random matrices. In: Eldar, Y., Kutyniok, G. (eds.) Compressed Sensing, Theory and Applications, pp. 210–268. Cambridge University Press, Cambridge (2012), Chap. 5
Wright, J., Ma, Y.: Dense error correction via ℓ1-minimization. IEEE Trans. Inf. Theory 56(7), 3540–3560 (2010)
Wright, J., Yang, A.Y., Ganesh, A., Sastry, S., Ma, Y.: Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 210–227 (2009)
Wu, L., Ganesh, A., Shi, B., Matsushita, Y., Wang, Y., Ma, Y.: Robust photometric stereo via low-rank matrix completion and recovery. In: Proceedings of the 10th Asian Conference on Computer Vision, Part III (2010)
Xu, H., Caramanis, C., Sanghavi, S.: Robust PCA via outlier pursuit. In: Adv. Neural Inf. Process. Syst. (NIPS), pp. 2496–2504 (2010)
Acknowledgements
I am grateful to my Ph.D. advisor, Emmanuel Candès, for his encouragement and his help in preparing this manuscript.
Communicated by Joel A. Tropp.
Cite this article
Li, X. Compressed Sensing and Matrix Completion with Constant Proportion of Corruptions. Constr Approx 37, 73–99 (2013). https://doi.org/10.1007/s00365-012-9176-9
Keywords
- Compressed sensing
- Matrix completion
- Robust PCA
- Convex optimization
- Restricted isometry property
- Golfing scheme