For-All Sparse Recovery in Near-Optimal Time

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8572)

Abstract

An approximate sparse recovery system in the ℓ1 norm consists of parameters k, ε, N, an m-by-N measurement matrix Φ, and a recovery algorithm \(\mathcal{R}\). Given a vector x, the system approximates x by \(\widehat{\mathbf{x}} = \mathcal{R}(\Phi\mathbf{x})\), which must satisfy \(\|\widehat{\mathbf{x}}-\mathbf{x}\|_1 \leq (1+\epsilon)\|\mathbf{x}-\mathbf{x}_k\|_1\), where \(\mathbf{x}_k\) denotes the best k-term approximation of x. We consider the “for all” model, in which a single matrix Φ is used for all signals x. The best existing sublinear algorithm, by Porat and Strauss (SODA’12), uses \(O(\epsilon^{-3} k \log(N/k))\) measurements and runs in time \(O(k^{1-\alpha} N^{\alpha})\) for any constant α > 0.
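
To make the guarantee concrete, the following is a minimal check of the ℓ1/ℓ1 approximation bound, assuming NumPy; the helper names are illustrative and not from the paper.

```python
import numpy as np

def best_k_term_error_l1(x, k):
    """l1 tail error ||x - x_k||_1: the l1 mass of x outside its k largest-magnitude entries."""
    order = np.argsort(np.abs(x))[::-1]      # indices sorted by decreasing magnitude
    return np.abs(x[order[k:]]).sum()        # everything beyond the top k entries

def meets_l1_guarantee(x, x_hat, k, eps):
    """Check the recovery guarantee ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1."""
    return np.abs(x_hat - x).sum() <= (1 + eps) * best_k_term_error_l1(x, k)
```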

In this paper, we improve the number of measurements to \(O(\epsilon^{-2} k \log(N/k))\), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to \(O(k^{1+\beta}\,\mathrm{poly}(\log N, 1/\epsilon))\), under the modest restrictions that \(k \leq N^{1-\alpha}\) and \(\epsilon \leq (\log k/\log N)^{\gamma}\), for any constants α, β, γ > 0. With no restrictions on ε, we obtain an approximate recovery system with \(m = O\big(\tfrac{k}{\epsilon}\log(N/k)\big((\log N/\log k)^{\gamma} + 1/\epsilon\big)\big)\) measurements. The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages.
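
For intuition on the measurement count, here is a sketch of a generic hashing-based measurement matrix: roughly log(N/k) independent hashing stages, each spreading the N coordinates over about k/ε² signed buckets, for m ≈ ε⁻² k log(N/k) rows in total. This is an illustrative Count-Sketch-style construction, not the paper's encoding procedure; in the “for all” model the matrix is drawn once and then reused for every signal.

```python
import numpy as np

def build_hashing_matrix(N, k, eps, rng=None):
    """Draw one fixed Count-Sketch-style measurement matrix Phi (illustration only,
    not the paper's construction). Each hashing stage maps the N coordinates into
    ~k/eps**2 buckets with random signs; stacking the stages gives roughly
    (k/eps**2) * log2(N/k) rows."""
    rng = np.random.default_rng() if rng is None else rng
    buckets = max(1, int(np.ceil(k / eps ** 2)))
    stages = max(1, int(np.ceil(np.log2(max(2, N // max(k, 1))))))
    Phi = np.zeros((stages * buckets, N))
    for r in range(stages):
        h = rng.integers(0, buckets, size=N)     # bucket assigned to each coordinate
        s = rng.choice([-1.0, 1.0], size=N)      # random sign for each coordinate
        Phi[r * buckets + h, np.arange(N)] = s   # one nonzero per column in this stage
    return Phi                                    # measurements are y = Phi @ x
```

Once Phi is drawn, every signal is sketched with the same matrix, e.g. y = Phi @ x.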

Omitted details and proofs can be found at arXiv:1402.1726 [cs.DS].

References

  1. Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE T. Info. Theory 52(2), 489–509 (2006)

  2. Charikar, M., Chen, K., Farach-Colton, M.: Finding frequent items in data streams. In: Widmayer, P., Triguero, F., Morales, R., Hennessy, M., Eidenbenz, S., Conejo, R. (eds.) ICALP 2002. LNCS, vol. 2380, pp. 693–703. Springer, Heidelberg (2002)

  3. Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best k-term approximation. J. Amer. Math. Soc. 22(1), 211–231 (2009)

  4. Cormode, G., Muthukrishnan, S.: Combinatorial algorithms for compressed sensing. In: SIROCCO, pp. 280–294 (2006)

  5. Donoho, D.L.: Compressed sensing. IEEE T. Info. Theory 52(4), 1289–1306 (2006)

  6. Duarte, M.F., Davenport, M.A., Takhar, D., Laska, J.N., Kelly, K.F., Baraniuk, R.G.: Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine 25(2), 83–91 (2008)

  7. Friedman, J., Kahn, J., Szemerédi, E.: On the second eigenvalue of random regular graphs. In: STOC, pp. 587–598 (1989)

  8. Gilbert, A., Li, Y., Porat, E., Strauss, M.: Approximate sparse recovery: Optimizing time and measurements. SIAM J. Comput. 41(2), 436–453 (2012)

  9. Gilbert, A., Strauss, M., Tropp, J., Vershynin, R.: Algorithmic linear dimension reduction in the ℓ1 norm for sparse vectors. In: Allerton (2006)

  10. Gilbert, A., Strauss, M., Tropp, J., Vershynin, R.: One sketch for all: fast algorithms for compressed sensing. In: ACM STOC, pp. 237–246 (2007)

  11. Gilbert, A.C., Ngo, H.Q., Porat, E., Rudra, A., Strauss, M.J.: ℓ2/ℓ2-foreach sparse recovery with low risk. In: Fomin, F.V., Freivalds, R., Kwiatkowska, M., Peleg, D. (eds.) ICALP 2013, Part I. LNCS, vol. 7965, pp. 461–472. Springer, Heidelberg (2013)

  12. Guruswami, V., Umans, C., Vadhan, S.: Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes. J. ACM 56(4), 20:1–20:34 (2009)

  13. Indyk, P., Ngo, H.Q., Rudra, A.: Efficiently decodable non-adaptive group testing. In: SODA, pp. 1126–1142 (2010)

  14. Indyk, P., Ružić, M.: Near-optimal sparse recovery in the ℓ1 norm. In: FOCS, pp. 199–207 (2008)

  15. Lustig, M., Donoho, D., Pauly, J.M.: Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 58(6), 1182–1195 (2007)

  16. Nelson, J., Nguyễn, H.L., Woodruff, D.P.: On deterministic sketching and streaming for sparse recovery and norm estimation. In: Gupta, A., Jansen, K., Rolim, J., Servedio, R. (eds.) APPROX/RANDOM 2012. LNCS, vol. 7408, pp. 627–638. Springer, Heidelberg (2012)

  17. Parvaresh, F., Vardy, A.: Correcting errors beyond the Guruswami-Sudan radius in polynomial time. In: FOCS, pp. 285–294 (2005)

  18. Porat, E., Strauss, M.J.: Sublinear time, measurement-optimal, sparse recovery for all. In: SODA, pp. 1215–1227 (2012)

  19. Upfal, E.: Tolerating linear number of faults in networks of bounded degree. In: PODC, pp. 83–89 (1992)

Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gilbert, A.C., Li, Y., Porat, E., Strauss, M.J. (2014). For-All Sparse Recovery in Near-Optimal Time. In: Esparza, J., Fraigniaud, P., Husfeldt, T., Koutsoupias, E. (eds) Automata, Languages, and Programming. ICALP 2014. Lecture Notes in Computer Science, vol 8572. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-43948-7_45

  • DOI: https://doi.org/10.1007/978-3-662-43948-7_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-43947-0

  • Online ISBN: 978-3-662-43948-7

  • eBook Packages: Computer Science (R0)
