For-All Sparse Recovery in Near-Optimal Time

  • Anna C. Gilbert
  • Yi Li
  • Ely Porat
  • Martin J. Strauss
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8572)

Abstract

An approximate sparse recovery system in the ℓ1 norm consists of parameters k, ε, N, an m-by-N measurement matrix Φ, and a recovery algorithm \(\mathcal{R}\). Given a vector x, the system approximates x by \(\widehat{\mathbf{x}} = \mathcal{R}(\Phi\mathbf{x})\), which must satisfy \(\|\widehat{\mathbf{x}}-\mathbf{x}\|_1 \leq (1+\epsilon)\|\mathbf{x}-\mathbf{x}_k\|_1\), where \(\mathbf{x}_k\) is the best k-term approximation of x. We consider the “for all” model, in which a single matrix Φ is used for all signals x. The best existing sublinear algorithm, by Porat and Strauss (SODA’12), uses \(O(\epsilon^{-3} k \log(N/k))\) measurements and runs in time \(O(k^{1-\alpha} N^{\alpha})\) for any constant α > 0.
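To make the guarantee concrete, the following is a minimal Python sketch, not the paper's measurement or recovery scheme: the k-sparse test signal, the noise level, and the stand-in "recovery" step are arbitrary illustrative choices. It computes the best k-term approximation \(\mathbf{x}_k\) and checks the ℓ1/ℓ1 bound \(\|\widehat{\mathbf{x}}-\mathbf{x}\|_1 \leq (1+\epsilon)\|\mathbf{x}-\mathbf{x}_k\|_1\).

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): illustrates what the l1/l1
# "for all" guarantee ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1 means,
# where x_k keeps the k largest-magnitude entries of x.

def best_k_term(x, k):
    """Return x_k: x with all but its k largest-magnitude entries zeroed."""
    xk = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    xk[idx] = x[idx]
    return xk

def satisfies_l1_guarantee(x, x_hat, k, eps):
    """Check the l1/l1 approximation guarantee stated in the abstract."""
    tail = np.linalg.norm(x - best_k_term(x, k), 1)
    return np.linalg.norm(x_hat - x, 1) <= (1 + eps) * tail

rng = np.random.default_rng(0)
N, k, eps = 1000, 10, 0.1                                     # arbitrary parameters
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
x += 1e-4 * rng.standard_normal(N)                            # small noise tail

x_hat = best_k_term(x, k)  # stand-in for R(Phi x); any candidate output can be checked
print(satisfies_l1_guarantee(x, x_hat, k, eps))
```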

In this paper, we improve the number of measurements to \(O(\epsilon^{-2} k \log(N/k))\), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to \(O(k^{1+\beta}\,\mathrm{poly}(\log N, 1/\epsilon))\), with the modest restriction that \(k \leq N^{1-\alpha}\) and \(\epsilon \leq (\log k/\log N)^{\gamma}\), for any constants α, β, γ > 0. With no restrictions on ε, we obtain an approximate recovery system with \(m = O((k/\epsilon)\log(N/k)((\log N/\log k)^{\gamma} + 1/\epsilon))\) measurements. The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages.
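For a rough sense of scale, the sketch below plugs sample values of k, N, and ε into the three measurement bounds quoted above: the earlier \(O(\epsilon^{-3} k \log(N/k))\) bound, the new \(O(\epsilon^{-2} k \log(N/k))\) bound, and the unrestricted-ε bound. The constant factors (taken as 1) and the choice γ = 0.5 are hypothetical; only the asymptotic formulas come from the abstract.

```python
import math

# Illustrative comparison of the measurement bounds quoted in the abstract,
# with assumed unit constants and an arbitrary gamma; not a statement about
# the actual constants in the constructions.

def m_porat_strauss(k, N, eps):
    return k * math.log(N / k) / eps**3          # O(eps^-3 k log(N/k))

def m_new(k, N, eps):
    return k * math.log(N / k) / eps**2          # O(eps^-2 k log(N/k))

def m_unrestricted(k, N, eps, gamma=0.5):        # gamma chosen arbitrarily
    return (k / eps) * math.log(N / k) * ((math.log(N) / math.log(k))**gamma + 1 / eps)

k, N, eps = 100, 10**6, 0.1
print(m_porat_strauss(k, N, eps), m_new(k, N, eps), m_unrestricted(k, N, eps))
```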

Keywords

Network Code · Measurement Matrix · Recovery Algorithm · Expander Graph · Weak System

References

  1. Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE T. Info. Theory 52(2), 489–509 (2006)
  2. Charikar, M., Chen, K., Farach-Colton, M.: Finding frequent items in data streams. In: Widmayer, P., Triguero, F., Morales, R., Hennessy, M., Eidenbenz, S., Conejo, R. (eds.) ICALP 2002. LNCS, vol. 2380, pp. 693–703. Springer, Heidelberg (2002)
  3. Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best k-term approximation. J. Amer. Math. Soc., 211–231 (2009)
  4. Cormode, G., Muthukrishnan, S.: Combinatorial algorithms for compressed sensing. In: SIROCCO, pp. 280–294 (2006)
  5. Donoho, D.L.: Compressed sensing. IEEE T. Info. Theory 52(4), 1289–1306 (2006)
  6. Duarte, M.F., Davenport, M.A., Takhar, D., Laska, J.N., Kelly, K.F., Baraniuk, R.G.: Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine 25(2), 83–91 (2008)
  7. Friedman, J., Kahn, J., Szemerédi, E.: On the second eigenvalue of random regular graphs. In: STOC, pp. 587–598 (1989)
  8. Gilbert, A., Li, Y., Porat, E., Strauss, M.: Approximate sparse recovery: Optimizing time and measurements. SIAM J. Comput. 41(2), 436–453 (2012)
  9. Gilbert, A., Strauss, M., Tropp, J., Vershynin, R.: Algorithmic linear dimension reduction in the ℓ1 norm for sparse vectors. In: Allerton (2006)
  10. Gilbert, A., Strauss, M., Tropp, J., Vershynin, R.: One sketch for all: Fast algorithms for compressed sensing. In: ACM STOC, pp. 237–246 (2007)
  11. Gilbert, A.C., Ngo, H.Q., Porat, E., Rudra, A., Strauss, M.J.: ℓ2/ℓ2-foreach sparse recovery with low risk. In: Fomin, F.V., Freivalds, R., Kwiatkowska, M., Peleg, D. (eds.) ICALP 2013, Part I. LNCS, vol. 7965, pp. 461–472. Springer, Heidelberg (2013)
  12. Guruswami, V., Umans, C., Vadhan, S.: Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes. J. ACM 56(4), 20:1–20:34 (2009)
  13. Indyk, P., Ngo, H.Q., Rudra, A.: Efficiently decodable non-adaptive group testing. In: SODA, pp. 1126–1142 (2010)
  14. Indyk, P., Ružić, M.: Near-optimal sparse recovery in the ℓ1 norm. In: FOCS, pp. 199–207 (2008)
  15. Lustig, M., Donoho, D., Pauly, J.M.: Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 58(6), 1182–1195 (2007)
  16. Nelson, J., Nguyễn, H.L., Woodruff, D.P.: On deterministic sketching and streaming for sparse recovery and norm estimation. In: Gupta, A., Jansen, K., Rolim, J., Servedio, R. (eds.) APPROX/RANDOM 2012. LNCS, vol. 7408, pp. 627–638. Springer, Heidelberg (2012)
  17. Parvaresh, F., Vardy, A.: Correcting errors beyond the Guruswami-Sudan radius in polynomial time. In: FOCS, pp. 285–294 (2005)
  18. Porat, E., Strauss, M.J.: Sublinear time, measurement-optimal, sparse recovery for all. In: SODA, pp. 1215–1227 (2012)
  19. Upfal, E.: Tolerating a linear number of faults in networks of bounded degree. In: PODC, pp. 83–89 (1992)

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Anna C. Gilbert (1)
  • Yi Li (2)
  • Ely Porat (3)
  • Martin J. Strauss (4)

  1. Department of Mathematics, University of Michigan, USA
  2. Max-Planck Institute for Informatics, Germany
  3. Department of Computer Science, Bar-Ilan University, Israel
  4. Department of Mathematics and Department of EECS, University of Michigan, USA