Mathematical Programming Computation, Volume 5, Issue 3, pp. 267–304

GPU accelerated greedy algorithms for compressed sensing

  • Jeffrey D. Blanchard
  • Jared Tanner
Full Length Paper

Abstract

For appropriate matrix ensembles, greedy algorithms have proven to be an efficient means of solving the combinatorial optimization problem associated with compressed sensing. This paper describes an implementation for graphics processing units (GPU) of hard thresholding, iterative hard thresholding, normalized iterative hard thresholding, hard thresholding pursuit, and a two-stage thresholding algorithm based on compressive sampling matching pursuit and subspace pursuit. The GPU acceleration of the former bottleneck, namely the matrix–vector multiplications, transfers a significant portion of the computational burden to the identification of the support set. The software solves high-dimensional problems in fractions of a second which permits large-scale testing at dimensions currently unavailable in the literature. The GPU implementations exhibit up to 70× acceleration over standard Matlab central processing unit implementations using automatic multi-threading.
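
To make concrete the support-set identification step that dominates the cost once the matrix–vector products are GPU accelerated, the following is a minimal sketch of the hard thresholding operator H_k(x), written against the Thrust library [23]. This is not the authors' GAGA code: the sort-based selection of the k-th largest magnitude, the function name hard_threshold, and the tie-breaking behaviour are illustrative assumptions only; the paper's implementations rely on faster k-selection strategies such as those in [1].

    // Illustrative sketch only (not the GAGA implementation): keep the k entries of
    // x with largest magnitude and zero the rest, entirely on the GPU via Thrust.
    // A full descending sort of |x| is used to find the k-th largest magnitude;
    // dedicated k-selection routines would avoid the cost of a complete sort.
    #include <thrust/device_vector.h>
    #include <thrust/transform.h>
    #include <thrust/sort.h>
    #include <thrust/functional.h>
    #include <cmath>

    struct abs_val {
      __host__ __device__ float operator()(float v) const { return fabsf(v); }
    };

    struct keep_above {
      float tau;
      __host__ __device__ keep_above(float t) : tau(t) {}
      __host__ __device__ float operator()(float v) const {
        // Ties at magnitude tau may keep slightly more than k entries.
        return fabsf(v) >= tau ? v : 0.0f;
      }
    };

    // Overwrite x with H_k(x): the best k-sparse approximation of x.
    void hard_threshold(thrust::device_vector<float>& x, int k) {
      thrust::device_vector<float> mag(x.size());
      thrust::transform(x.begin(), x.end(), mag.begin(), abs_val());    // mag = |x|
      thrust::sort(mag.begin(), mag.end(), thrust::greater<float>());   // sort magnitudes, largest first
      float tau = mag[k - 1];                                           // k-th largest magnitude
      thrust::transform(x.begin(), x.end(), x.begin(), keep_above(tau));// zero entries below tau
    }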

Keywords

Combinatorial optimization · Compressed sensing · Sparse approximation · Greedy algorithms · Graphics processing units · Parallel computing · IHT · NIHT · HTP · CoSaMP · Subspace pursuit

Mathematics Subject Classification

Primary: 15-04, 65Y05, 65F30; Secondary: 90C27, 94A12, 94A20

Acknowledgments

We thank Stephen Wright and Sangkyun Lee for allowing inclusion of their dct matrix–vector product code [25]. We also thank Erik Opavsky and Emircan Uysaler, Grinnell College students, for their dense matrix–vector product which is the foundation of our gen matrix–vector product.

References

  1. Alabi, T., Blanchard, J., Gordon, B., Steinbach, R.: Fast k-selection algorithms for graphics processing units. ACM J. Exp. Algorithmics 17(2), Article 4.2, 4.2:1–4.2:29 (2012)
  2. Allison, D., Noga, M.: Selection by distributive partitioning. Inf. Process. Lett. 11(1), 7–8 (1980)
  3. Beliakov, G.: Parallel calculation of the median and order statistics on GPUs with application to robust regression. Computing Research Repository abs/1104.2 (2011)
  4. Berg, E.v.d., Friedlander, M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008)
  5. Blanchard, J.D., Tanner, J.: GAGA: GPU Accelerated Greedy Algorithms, Version 1.0.0 (2013). www.gaga4cs.org
  6. Blanchard, J.D., Tanner, J.: Performance comparisons of greedy algorithms in compressed sensing (2013, submitted). www.math.grinnell.edu/~blanchaj/PCGACS_preprint.pdf
  7. Blumensath, T., Davies, M.E.: Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009)
  8. Blumensath, T., Davies, M.E.: Normalised iterative hard thresholding: guaranteed stability and performance. IEEE Select. Topics Signal Process. 4(2), 298–309 (2010)
  9. Candès, E.J.: Compressive sampling. In: International Congress of Mathematicians, vol. III, pp. 1433–1452. Eur. Math. Soc., Zürich (2006)
  10. Candès, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
  11. Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inform. Theory 51(12), 4203–4215 (2005)
  12. Cevher, V.: An ALPS view of sparse recovery. In: 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5808–5811. IEEE (2011)
  13. Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inform. Theory 55(5), 2230–2249 (2009)
  14. Donoho, D.L.: Neighborly polytopes and sparse solution of underdetermined linear equations. Technical report, Department of Statistics, Stanford University (2004)
  15. Donoho, D.L.: Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006)
  16. Donoho, D.L.: For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution. Commun. Pure Appl. Math. 59(7), 907–934 (2006)
  17. Donoho, D.L.: High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. Discrete Comput. Geom. 35(4), 617–652 (2006)
  18. Donoho, D.L., Tanner, J.: Sparse nonnegative solution of underdetermined linear equations by linear programming. Proc. Natl. Acad. Sci. USA 102(27), 9446–9451 (2005) (electronic)
  19. Donoho, D.L., Tsaig, Y.: Fast solution of ℓ1-minimization problems when the solution may be sparse. IEEE Trans. Inform. Theory 54(11), 4789–4812 (2008)
  20. Donoho, D.L., Tsaig, Y., Drori, I., Starck, J.L.: Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inform. Theory 58(2), 1094–1121 (2012)
  21. Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE Select. Topics Signal Process. 1(4), 586–597 (2007)
  22. Foucart, S.: Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)
  23. Hoberock, J., Bell, N.: Thrust: a parallel template library, Version 1.3.0 (2010). http://www.meganewtons.com/
  24. Kyrillidis, A., Cevher, V.: Recipes on hard thresholding methods. In: 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 353–356. IEEE (2011)
  25. Lee, S., Wright, S.J.: Implementing algorithms for signal and image reconstruction on graphical processing units (2008). http://pages.cs.wisc.edu/swright/GPUreconstruction/gpu_image.pdf
  26. Mallat, S., Zhang, Z.: Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 41(12), 3397–3415 (1993)
  27. Merrill, D., Grimshaw, A.: High performance and scalable radix sorting: a case study of implementing dynamic parallelism for GPU computing. Parallel Process. Lett. 21(2), 245–272 (2011)
  28. Monroe, L., Wendelberger, J., Michalak, S.: Randomized selection on the GPU. In: Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics, HPG '11, pp. 89–98. ACM, New York (2011)
  29. Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)
  30. Needell, D., Tropp, J.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)
  31. NVIDIA: CUDA Toolkit 4.0 (2011). http://developer.nvidia.com/cuda-toolkit-40
  32. Tropp, J.A.: Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inform. Theory 50(10), 2231–2242 (2004)
  33. Tropp, J.A., Gilbert, A.C.: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inform. Theory 53(12), 4655–4666 (2007)
  34. Wright, S.J., Nowak, R.D., Figueiredo, M.A.T.: Sparse reconstruction by separable approximation. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (2008)
  35. Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2013

Authors and Affiliations

  1. Department of Mathematics and Statistics, Grinnell College, Grinnell, USA
  2. Mathematics Institute, University of Oxford, Oxford, UK
