
Bregman Methods for Large-Scale Optimization with Applications in Imaging

  • Living reference work entry
  • In: Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging

Abstract

In this chapter we review recent developments in research on Bregman methods, with particular focus on their potential for large-scale applications. We give an overview of several families of Bregman algorithms and discuss modifications such as accelerated Bregman methods, incremental and stochastic variants, and coordinate descent-type methods. We conclude the chapter with numerical examples in image and video decomposition, image denoising, and dimensionality reduction with auto-encoders.
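To give a flavor of the algorithm family the abstract describes, the following is a minimal NumPy sketch of the linearized Bregman iteration for equality-constrained sparse recovery, one of the classical Bregman methods covered in the chapter. The function name, default step size, and stopping rule are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def shrink(v, mu):
    """Soft-thresholding: the proximal map of mu * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=1.0, delta=None, tol=1e-10, max_iter=10000):
    """Linearized Bregman iteration for
        min  mu*||x||_1 + ||x||^2 / (2*delta)   subject to   A x = b.

    The iteration can be read as gradient ascent on the dual problem and
    converges for delta * ||A||_2^2 < 2; we default to delta = 1/||A||_2^2.
    """
    m, n = A.shape
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    v = np.zeros(n)  # accumulated subgradient variable, v^k = A^T y^k
    x = np.zeros(n)
    for _ in range(max_iter):
        x = delta * shrink(v, mu)      # primal update via soft-thresholding
        v = v + A.T @ (b - A @ x)      # Bregman / dual-ascent update
        if np.linalg.norm(A @ x - b) <= tol * max(1.0, np.linalg.norm(b)):
            break
    return x
```

For a consistent system the residual is driven to zero for any mu > 0, while larger mu promotes sparser iterates; this interplay between the constraint and the regularizer is what the chapter's scale-space and sparse-optimization sections analyze in detail.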



Acknowledgements

MB thanks Queen Mary University of London for their support. ESR acknowledges support from the London Mathematical Society.

Author information

Correspondence to Martin Benning.


Copyright information

© 2023 Springer Nature Switzerland AG

About this entry


Cite this entry

Benning, M., Riis, E.S. (2023). Bregman Methods for Large-Scale Optimization with Applications in Imaging. In: Chen, K., Schönlieb, CB., Tai, XC., Younes, L. (eds) Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging. Springer, Cham. https://doi.org/10.1007/978-3-030-03009-4_62-2


  • DOI: https://doi.org/10.1007/978-3-030-03009-4_62-2

  • Published: 07 December 2023

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03009-4

  • Online ISBN: 978-3-030-03009-4

  • eBook Packages: Springer Reference Mathematics, Reference Module Computer Science and Engineering


Chapter history

  1. Latest

    Bregman Methods for Large-Scale Optimization with Applications in Imaging
    Published:
    07 December 2023

    DOI: https://doi.org/10.1007/978-3-030-03009-4_62-2

  2. Original

    Bregman Methods for Large-Scale Optimisation with Applications in Imaging
    Published:
    27 May 2021

    DOI: https://doi.org/10.1007/978-3-030-03009-4_62-1