
ℓ2/ℓ2-Foreach Sparse Recovery with Low Risk

Conference paper
Automata, Languages, and Programming (ICALP 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7965)


Abstract

In this paper, we consider the “foreach” sparse recovery problem with failure probability p. The goal of the problem is to design a distribution over m × N matrices Φ and a decoding algorithm A such that for every x ∈ ℝ^N, we have with probability at least 1 − p

$$\|\mathbf{x}-A(\Phi\mathbf{x})\|_2\leqslant C\|\mathbf{x}-\mathbf{x}_k\|_2,$$

where $\mathbf{x}_k$ is the best $k$-sparse approximation of $\mathbf{x}$.
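To make the guarantee concrete, here is a minimal numerical sketch in Python/NumPy; the helper names best_k_sparse and check_guarantee are our own, not from the paper. It forms the best k-sparse approximation x_k by keeping the k largest-magnitude entries of x and tests whether a candidate recovery x_hat satisfies the ℓ2/ℓ2 bound for a given constant C.

    import numpy as np

    # Hypothetical illustration of the ell_2/ell_2 "foreach" guarantee; the
    # helper names below are ours and do not come from the paper.

    def best_k_sparse(x, k):
        # x_k: keep the k largest-magnitude entries of x, zero out the rest.
        xk = np.zeros_like(x)
        top = np.argsort(np.abs(x))[-k:]
        xk[top] = x[top]
        return xk

    def check_guarantee(x, x_hat, k, C):
        # Test ||x - x_hat||_2 <= C * ||x - x_k||_2 for a candidate recovery.
        tail = np.linalg.norm(x - best_k_sparse(x, k))
        return np.linalg.norm(x - x_hat) <= C * tail

    # Example: a signal with k dominant entries plus a small dense tail.
    rng = np.random.default_rng(0)
    k, N = 5, 1000
    x = np.zeros(N)
    x[:k] = 10.0
    x += 0.01 * rng.standard_normal(N)

    # Any decoder output can be tested the same way; here we test the
    # (trivially feasible) top-k truncation of x itself with C = 2.
    print(check_guarantee(x, best_k_sparse(x, k), k, C=2.0))   # True

Any actual scheme must produce x_hat from the measurements Φx alone; the check above only illustrates what the error bound demands of that output.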

Our two main results are: (1) We prove a lower bound on m, the number of measurements, of Ω(k log(N/k) + log(1/p)) for \(2^{-\Theta(N)} \leqslant p < 1\). Cohen, Dahmen, and DeVore [4] prove that this bound is tight. (2) We prove nearly matching upper bounds that also admit sub-linear time decoding. Previous such results were obtained only when p = Ω(1). One corollary of our result is an extension of the results of Gilbert et al. [6] for information-theoretically bounded adversaries.
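Restating the lower bound in its two extreme regimes (no claim beyond the formula above; t is simply our shorthand for log(1/p)):

$$m=\Omega\!\left(k\log\frac{N}{k}+\log\frac{1}{p}\right)\quad\Longrightarrow\quad
\begin{cases}
m=\Omega\!\left(k\log\frac{N}{k}\right), & p=\Omega(1),\\[4pt]
m=\Omega\!\left(k\log\frac{N}{k}+t\right), & p=2^{-t},\ t=O(N).
\end{cases}$$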



References

  1. Baraniuk, R.G., Candès, E., Nowak, R., Vetterli, M.: Compressive sampling. IEEE Signal Processing Magazine 25(2) (2008)

  2. Candès, E.J., Tao, T.: Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory 52(12), 5406–5425 (2006)

  3. Charikar, M., Chen, K., Farach-Colton, M.: Finding frequent items in data streams. In: Widmayer, P., Triguero, F., Morales, R., Hennessy, M., Eidenbenz, S., Conejo, R. (eds.) ICALP 2002. LNCS, vol. 2380, pp. 693–703. Springer, Heidelberg (2002)

  4. Cohen, A., Dahmen, W., DeVore, R.A.: Near optimal approximation of arbitrary vectors from highly incomplete measurements. Bericht, Inst. für Geometrie und Praktische Mathematik (2007)

  5. Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best k-term approximation. J. Amer. Math. Soc. 22(1), 211–231 (2009)

  6. Gilbert, A.C., Hemenway, B., Rudra, A., Strauss, M.J., Wootters, M.: Recovering simple signals. In: ITA, pp. 382–391 (2012)

  7. Gilbert, A.C., Indyk, P.: Sparse recovery using sparse matrices. Proceedings of the IEEE 98(6), 937–947 (2010)

  8. Gilbert, A.C., Li, Y., Porat, E., Strauss, M.J.: Approximate sparse recovery: Optimizing time and measurements. SIAM J. Comput. 41(2), 436–453 (2012)

  9. Gilbert, A.C., Ngo, H., Porat, E., Rudra, A., Strauss, M.J.: L2/L2-foreach sparse recovery with low risk. ArXiv e-prints, arXiv:1304.6232 (April 2013)

  10. Guruswami, V., Rudra, A.: Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Transactions on Information Theory 54(1), 135–150 (2008)

  11. Guruswami, V., Sudan, M.: Improved decoding of Reed-Solomon and algebraic-geometry codes. IEEE Transactions on Information Theory 45(6), 1757–1767 (1999)

  12. Indyk, P., Ružić, M.: Near-optimal sparse recovery in the ℓ1 norm. In: FOCS, pp. 199–207 (2008)

  13. Irony, D., Toledo, S., Tiskin, A.: Communication lower bounds for distributed-memory matrix multiplication. J. Parallel Distrib. Comput. 64(9), 1017–1026 (2004)

  14. Lapidoth, A., Narayan, P.: Reliable communication under channel uncertainty. IEEE Transactions on Information Theory 44, 2148–2177 (1998)

  15. Lehman, A.R., Lehman, E.: Network coding: does the model need tuning? In: SODA, pp. 499–504 (2005)

  16. Lipton, R.J.: A new approach to information theory. In: Enjalbert, P., Mayr, E.W., Wagner, K.W. (eds.) STACS 1994. LNCS, vol. 775, pp. 699–708. Springer, Heidelberg (1994)

  17. Loomis, L.H., Whitney, H.: An inequality related to the isoperimetric inequality. Bull. Amer. Math. Soc. 55, 961–962 (1949)

  18. Ngo, H.Q., Porat, E., Ré, C., Rudra, A.: Worst-case optimal join algorithms. In: PODS, pp. 37–48 (2012)

  19. Ngo, H.Q., Porat, E., Rudra, A.: Efficiently decodable compressed sensing by list-recoverable codes and recursion. In: STACS, pp. 230–241 (2012)

  20. Porat, E., Strauss, M.J.: Sublinear time, measurement-optimal, sparse recovery for all. In: SODA, pp. 1215–1227 (2012)

  21. Price, E., Woodruff, D.P.: (1 + ε)-approximate sparse recovery. In: FOCS, pp. 295–304 (2011)

  22. Rudra, A.: List Decoding and Property Testing of Error Correcting Codes. PhD thesis, University of Washington (2007)

  23. Tishby, N., Pereira, F.C., Bialek, W.: The information bottleneck method. In: The 37th Annual Allerton Conference on Communication, Control, and Computing, pp. 368–377 (1999)



Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gilbert, A.C., Ngo, H.Q., Porat, E., Rudra, A., Strauss, M.J. (2013). ℓ2/ℓ2-Foreach Sparse Recovery with Low Risk. In: Fomin, F.V., Freivalds, R., Kwiatkowska, M., Peleg, D. (eds) Automata, Languages, and Programming. ICALP 2013. Lecture Notes in Computer Science, vol 7965. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39206-1_39


  • DOI: https://doi.org/10.1007/978-3-642-39206-1_39

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-39205-4

  • Online ISBN: 978-3-642-39206-1

