
ℓ2/ℓ2-Foreach Sparse Recovery with Low Risk

  • Anna C. Gilbert
  • Hung Q. Ngo
  • Ely Porat
  • Atri Rudra
  • Martin J. Strauss
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7965)

Abstract

In this paper, we consider the “foreach” sparse recovery problem with failure probability p. The goal of the problem is to design a distribution over m × N matrices Φ and a decoding algorithm A such that for every x ∈ ℝ^N, with probability at least 1 − p,
$$\|\mathbf{x}-A(\Phi\mathbf{x})\|_2\leqslant C\|\mathbf{x}-\mathbf{x}_k\|_2,$$
where x_k is the best k-sparse approximation of x.
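For concreteness, here is a small worked example (the numbers are illustrative and not taken from the paper): with k = 2 and x = (5, 3, 0.1, 0.05), the best 2-sparse approximation keeps the two largest-magnitude entries, so
$$\mathbf{x}_k=(5,3,0,0),\qquad \|\mathbf{x}-\mathbf{x}_k\|_2=\sqrt{0.1^2+0.05^2}\approx 0.112,$$
and the guarantee asks that the decoder's output A(Φx) lie within ℓ2-distance C · 0.112 of x with probability at least 1 − p.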

Our two main results are: (1) We prove a lower bound of Ω(k log(N/k) + log(1/p)) on m, the number of measurements, for \(2^{-\Theta(N)} \leqslant p < 1\). Cohen, Dahmen, and DeVore [4] prove that this bound is tight. (2) We prove nearly matching upper bounds that also admit sub-linear time decoding. Previous such results were obtained only when p = Ω(1). One corollary of our result is an extension of the results of Gilbert et al. [6] for information-theoretically bounded adversaries.
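To make the “foreach” guarantee concrete, the following is a minimal, illustrative Python sketch of the classical Count-Sketch approach of Charikar et al. [3], which gives an ℓ2/ℓ2-type foreach guarantee with constant failure probability. It is not the measurement-optimal, low-risk, or sublinear-time construction of this paper (the decoder below runs in time linear in N), and the function name and parameter choices (rows, width) are illustrative assumptions.

import numpy as np

def count_sketch_recover(x, k, rows=7, width=None, seed=0):
    """Count-Sketch style foreach recovery: measure x linearly, then
    estimate every coordinate as a median of signed counters and keep
    the k largest-magnitude estimates. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    N = x.shape[0]
    if width is None:
        width = 6 * k  # heuristic number of buckets per repetition
    # One hash function and one sign function per repetition (row).
    buckets = rng.integers(0, width, size=(rows, N))
    signs = rng.choice([-1.0, 1.0], size=(rows, N))
    # Linear measurements "Phi x": rows * width counters in total.
    sketch = np.zeros((rows, width))
    for r in range(rows):
        np.add.at(sketch[r], buckets[r], signs[r] * x)
    # Decode: coordinate i is estimated by the median, over rows r,
    # of signs[r, i] * sketch[r, buckets[r, i]].
    est = np.median(signs * sketch[np.arange(rows)[:, None], buckets], axis=0)
    xhat = np.zeros(N)
    top = np.argsort(-np.abs(est))[:k]   # keep k largest-magnitude estimates
    xhat[top] = est[top]
    return xhat

# Empirically compare the recovery error against the best k-sparse error.
rng = np.random.default_rng(1)
N, k = 1000, 10
x = 0.1 * rng.standard_normal(N)          # small "tail"
x[:k] += 10.0 * np.arange(1, k + 1)       # k dominant entries
idx = np.argsort(-np.abs(x))[:k]
xk = np.zeros(N)
xk[idx] = x[idx]                          # best k-sparse approximation x_k
xhat = count_sketch_recover(x, k)
print(np.linalg.norm(x - xhat), np.linalg.norm(x - xk))

On signals with a few dominant entries and a small tail, the recovery error is typically within a small constant factor of ‖x − x_k‖_2. The point of this paper is stronger: achieving failure probability p with roughly k log(N/k) + log(1/p) measurements and sublinear decoding time, which naive repetition of a sketch like this does not give.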

Keywords

Failure Probability · Recovery Algorithm · Recovery Problem · Sparse Recovery · Heavy Hitter
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Baraniuk, R.G., Candès, E., Nowak, R., Vetterli, M.: Compressive sampling. IEEE Signal Processing Magazine 25(2) (2008)
  2. Candès, E.J., Tao, T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Transactions on Information Theory 52(12), 5406–5425 (2006)
  3. Charikar, M., Chen, K., Farach-Colton, M.: Finding frequent items in data streams. In: Widmayer, P., Triguero, F., Morales, R., Hennessy, M., Eidenbenz, S., Conejo, R. (eds.) ICALP 2002. LNCS, vol. 2380, pp. 693–703. Springer, Heidelberg (2002)
  4. Cohen, A., Dahmen, W., DeVore, R.A.: Near optimal approximation of arbitrary vectors from highly incomplete measurements. Bericht, Inst. für Geometrie und Praktische Mathematik (2007)
  5. Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best k-term approximation. J. Amer. Math. Soc. 22(1), 211–231 (2009)
  6. Gilbert, A.C., Hemenway, B., Rudra, A., Strauss, M.J., Wootters, M.: Recovering simple signals. In: ITA, pp. 382–391 (2012)
  7. Gilbert, A.C., Indyk, P.: Sparse recovery using sparse matrices. Proceedings of the IEEE 98(6), 937–947 (2010)
  8. Gilbert, A.C., Li, Y., Porat, E., Strauss, M.J.: Approximate sparse recovery: optimizing time and measurements. SIAM J. Comput. 41(2), 436–453 (2012)
  9. Gilbert, A.C., Ngo, H., Porat, E., Rudra, A., Strauss, M.J.: L2/L2-foreach sparse recovery with low risk. ArXiv e-prints, arXiv:1304.6232 (April 2013)
  10. Guruswami, V., Rudra, A.: Explicit codes achieving list decoding capacity: error-correction with optimal redundancy. IEEE Transactions on Information Theory 54(1), 135–150 (2008)
  11. Guruswami, V., Sudan, M.: Improved decoding of Reed-Solomon and algebraic-geometry codes. IEEE Transactions on Information Theory 45(6), 1757–1767 (1999)
  12. Indyk, P., Ruzic, M.: Near-optimal sparse recovery in the L1 norm. In: FOCS, pp. 199–207 (2008)
  13. Irony, D., Toledo, S., Tiskin, A.: Communication lower bounds for distributed-memory matrix multiplication. J. Parallel Distrib. Comput. 64(9), 1017–1026 (2004)
  14. Lapidoth, A., Narayan, P.: Reliable communication under channel uncertainty. IEEE Transactions on Information Theory 44, 2148–2177 (1998)
  15. Lehman, A.R., Lehman, E.: Network coding: does the model need tuning? In: SODA, pp. 499–504 (2005)
  16. Lipton, R.J.: A new approach to information theory. In: Enjalbert, P., Mayr, E.W., Wagner, K.W. (eds.) STACS 1994. LNCS, vol. 775, pp. 699–708. Springer, Heidelberg (1994)
  17. Loomis, L.H., Whitney, H.: An inequality related to the isoperimetric inequality. Bull. Amer. Math. Soc. 55, 961–962 (1949)
  18. Ngo, H.Q., Porat, E., Ré, C., Rudra, A.: Worst-case optimal join algorithms. In: PODS, pp. 37–48 (2012)
  19. Ngo, H.Q., Porat, E., Rudra, A.: Efficiently decodable compressed sensing by list-recoverable codes and recursion. In: STACS, pp. 230–241 (2012)
  20. Porat, E., Strauss, M.J.: Sublinear time, measurement-optimal, sparse recovery for all. In: SODA, pp. 1215–1227 (2012)
  21. Price, E., Woodruff, D.P.: (1 + ε)-approximate sparse recovery. In: FOCS, pp. 295–304 (2011)
  22. Rudra, A.: List decoding and property testing of error correcting codes. PhD thesis, University of Washington (2007)
  23. Tishby, N., Pereira, F.C., Bialek, W.: The information bottleneck method. In: The 37th Annual Allerton Conference on Communication, Control, and Computing, pp. 368–377 (1999)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Anna C. Gilbert (1)
  • Hung Q. Ngo (2)
  • Ely Porat (3)
  • Atri Rudra (2)
  • Martin J. Strauss (1)
  1. University of Michigan, USA
  2. University at Buffalo (SUNY), USA
  3. Bar-Ilan University, Israel
