B-SMART: Bregman-Based First-Order Algorithms for Non-negative Compressed Sensing Problems

  • Stefania Petra
  • Christoph Schnörr
  • Florian Becker
  • Frank Lenzen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7893)


We introduce and study Bregman functions as objectives for non-negative sparse compressed sensing problems, together with a related first-order iterative scheme that employs non-quadratic proximal terms. The scheme yields closed-form multiplicative updates and handles non-negativity constraints implicitly. Unlike established state-of-the-art gradient-based methods, its analysis does not rely on global Lipschitz continuity, which makes it attractive for very large systems. We prove convergence and an O(1/k) rate. We also introduce an iterative two-step extension of the update scheme that accelerates convergence. Comparative numerical experiments for non-negativity and box constraints provide evidence for an O(1/k²) rate and reveal competitive, and in some cases superior, performance.
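The multiplicative, implicitly constraint-respecting update the abstract describes can be illustrated by a generic exponentiated-gradient (mirror-descent) iteration for non-negative least squares. This is a minimal sketch of the idea, not the authors' exact B-SMART iteration; the step size `step` and the test problem are assumptions for illustration only.

```python
import numpy as np

def multiplicative_update(A, b, x0, step, iters):
    """Exponentiated-gradient sketch for  min 0.5*||Ax - b||^2  s.t. x >= 0.

    The multiplicative form  x <- x * exp(-step * grad)  keeps every
    iterate strictly positive, so the non-negativity constraint is
    handled implicitly, without projection.
    """
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)   # gradient of the quadratic objective
        x = x * np.exp(-step * grad)
    return x

# Small consistent non-negative test system (illustrative only).
rng = np.random.default_rng(0)
A = rng.random((8, 20))
x_true = np.zeros(20)
x_true[:3] = rng.random(3)          # sparse non-negative ground truth
b = A @ x_true

x0 = np.full(20, 0.5)               # strictly positive starting point
x = multiplicative_update(A, b, x0, step=0.005, iters=2000)
```

With a sufficiently small step size the residual ‖Ax − b‖ decreases and the iterates stay positive throughout; the paper's contribution is a Bregman-based analysis of such schemes that avoids the global Lipschitz assumption and yields the stated rates.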


Keywords: multiplicative algebraic reconstruction · compressed sensing · underdetermined systems of non-negative linear equations · convergence rates · limited-angle tomography





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Stefania Petra (1)
  • Christoph Schnörr (1)
  • Florian Becker (1)
  • Frank Lenzen (1)

  1. IPA & HCI, Heidelberg University, Heidelberg, Germany
