
Mathematical Programming Computation, Volume 8, Issue 3, pp 253–269

Revisiting compressed sensing: exploiting the efficiency of simplex and sparsification methods

  • Robert Vanderbei
  • Kevin Lin
  • Han Liu
  • Lie Wang
Full Length Paper

Abstract

We propose two approaches to solving large-scale compressed sensing problems. The first approach uses the parametric simplex method to recover very sparse signals by taking a small number of simplex pivots, while the second reformulates the problem using Kronecker products to achieve faster computation via a sparser problem formulation. In particular, we focus on the computational aspects of these methods in compressed sensing. For the first approach, if the true signal is very sparse and we initialize our solution to be the zero vector, then a customized parametric simplex method usually takes a small number of iterations to converge. Our numerical studies show that this approach is 10 times faster than state-of-the-art methods for recovering very sparse signals. The second approach can be used when the sensing matrix is the Kronecker product of two smaller matrices. We show that the best-known sufficient condition for the Kronecker compressed sensing (KCS) strategy to achieve perfect recovery is more restrictive than the corresponding condition for the first approach. However, KCS can be formulated as a linear program with a very sparse constraint matrix, whereas the first approach involves a completely dense constraint matrix. Hence, algorithms that benefit from a sparse problem representation, such as interior point methods (IPMs), are expected to have a computational advantage on the KCS problem. We numerically demonstrate that KCS combined with IPMs is up to 10 times faster than vanilla IPMs and state-of-the-art methods such as \(\ell_1\_\ell_s\) and Mirror Prox, regardless of the sparsity level or problem size.
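Both approaches ultimately solve the basis-pursuit problem \(\min \Vert x\Vert_1\) subject to \(Ax = b\). As a minimal illustration (ours, not the authors' implementation), the sketch below recasts it as a linear program by splitting \(x = u - v\) with \(u, v \ge 0\) and solves it with SciPy's linprog; the Gaussian sensing matrix and the problem sizes are illustrative assumptions. Note that the equality constraint matrix \([A, -A]\) is completely dense, which is the cost attributed above to the first approach.

    # Basis pursuit as an LP: minimize 1'u + 1'v  s.t.  Au - Av = b, u, v >= 0.
    # A hedged sketch; sizes and the Gaussian sensing matrix are illustrative.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n, k = 40, 200, 5                          # measurements, signal length, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # dense sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true

    c = np.ones(2 * n)                 # objective: sum of u and v, i.e. ||x||_1
    A_eq = np.hstack([A, -A])          # completely dense constraint matrix
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    x_hat = res.x[:n] - res.x[n:]
    print("recovery error:", np.linalg.norm(x_hat - x_true))

One way to see where the sparse KCS formulation comes from is the standard identity \((B \otimes C)\,\mathrm{vec}(X) = \mathrm{vec}(C X B^T)\): introducing an auxiliary variable \(W = CX\) lets the single dense block \(B \otimes C\) be replaced by the two sparse blocks \(I \otimes C\) and \(B \otimes I\). The following sketch (dimensions again illustrative, not taken from the paper) simply counts nonzeros to show the saving:

    # Kronecker sparsification: compare the dense kron(B, C) against the two
    # sparse factor blocks of the two-stage reformulation. A sketch only.
    import numpy as np
    from scipy.sparse import kron, identity, csr_matrix

    rng = np.random.default_rng(1)
    m1, n1, m2, n2 = 6, 10, 5, 8
    B = rng.standard_normal((m1, n1))
    C = rng.standard_normal((m2, n2))

    dense = np.kron(B, C)                        # m1*m2 x n1*n2, fully dense
    stage1 = kron(identity(n1), csr_matrix(C))   # enforces vec(W) = (I x C) vec(X)
    stage2 = kron(csr_matrix(B), identity(m2))   # enforces (B x I) vec(W) = b
    print("dense entries:", dense.size)
    print("sparse-formulation nonzeros:", stage1.nnz + stage2.nnz)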

Keywords

Linear programming · Compressed sensing · Parametric simplex method · Sparse signals · Interior-point methods

Mathematics Subject Classification

65K05 · 62P99

Acknowledgments

The authors would like to offer their sincere thanks to the referees and the editors, all of whom read earlier versions of the paper very carefully and made many excellent suggestions on how to improve it.


Copyright information

© Springer-Verlag Berlin Heidelberg and The Mathematical Programming Society 2016

Authors and Affiliations

  • Robert Vanderbei (1)
  • Kevin Lin (2)
  • Han Liu (1)
  • Lie Wang (3)

  1. Department of Operations Research and Financial Engineering, Princeton University, Princeton, USA
  2. Department of Statistics, Carnegie Mellon University, Pittsburgh, USA
  3. Department of Mathematics, Massachusetts Institute of Technology, Cambridge, USA
