Abstract
The alternating direction method of multipliers (ADM or ADMM) breaks a complex optimization problem into much simpler subproblems. ADM algorithms are typically short and easy to implement, yet they exhibit (nearly) state-of-the-art performance on large-scale optimization problems.
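As a point of reference for the discussion below, here is a minimal sketch of the generic ADM iteration in its scaled form; the symbols \(f\), \(g\), \(\mathbf{A}\), \(\mathbf{B}\), \(\mathbf{b}\), the multiplier \(\boldsymbol{\lambda}\), and the penalty parameter \(\rho > 0\) are generic illustrations, not the chapter's notation. For the “ADM-ready” problem
\[
\mathop{\mathrm{minimize}}\limits_{\mathbf{x},\,\mathbf{z}}\; f(\mathbf{x}) + g(\mathbf{z}) \quad\text{subject to}\quad \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z} = \mathbf{b},
\]
one ADM iteration consists of
\[
\begin{aligned}
\mathbf{x}^{k+1} &= \mathop{\mathrm{arg\,min}}\limits_{\mathbf{x}}\; f(\mathbf{x}) + \tfrac{\rho}{2}\bigl\|\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z}^{k} - \mathbf{b} + \boldsymbol{\lambda}^{k}/\rho\bigr\|_2^2,\\
\mathbf{z}^{k+1} &= \mathop{\mathrm{arg\,min}}\limits_{\mathbf{z}}\; g(\mathbf{z}) + \tfrac{\rho}{2}\bigl\|\mathbf{A}\mathbf{x}^{k+1} + \mathbf{B}\mathbf{z} - \mathbf{b} + \boldsymbol{\lambda}^{k}/\rho\bigr\|_2^2,\\
\boldsymbol{\lambda}^{k+1} &= \boldsymbol{\lambda}^{k} + \rho\,\bigl(\mathbf{A}\mathbf{x}^{k+1} + \mathbf{B}\mathbf{z}^{k+1} - \mathbf{b}\bigr),
\end{aligned}
\]
so each subproblem involves only one of the two objective functions.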
To apply ADM, we first reformulate a given problem into an “ADM-ready” form, so the final algorithm depends on the chosen formulation. A problem such as \(\mathop{\mathrm{minimize}}\limits _{\mathbf{x}}u(\mathbf{x}) + v(\mathbf{C}\mathbf{x})\) has six different “ADM-ready” formulations. They can be in the primal or the dual form, and they differ in how dummy variables are introduced. To each “ADM-ready” formulation, ADM can be applied in two different orders, depending on which primal variable is updated first. Altogether, we get twelve different ADM algorithms! How do they compare to each other? Which algorithm should one choose? In this chapter, we show that many of these different ways of applying ADM are equivalent. Specifically, we show that ADM applied to a primal formulation is equivalent to ADM applied to its Lagrange dual, and that ADM is equivalent to a primal-dual algorithm applied to the saddle-point formulation of the same problem. These results are surprising because the primal and dual variables in ADM are seemingly treated very differently, and some previous works exhibit preferences for one over the other on specific problems.
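To make the counting concrete, one primal “ADM-ready” formulation of \(\mathop{\mathrm{minimize}}_{\mathbf{x}} u(\mathbf{x}) + v(\mathbf{C}\mathbf{x})\) introduces a dummy variable \(\mathbf{z}\) for \(\mathbf{C}\mathbf{x}\); the sketch below reuses the illustrative \(\boldsymbol{\lambda}\) and \(\rho\) from above and is not the chapter's notation:
\[
\mathop{\mathrm{minimize}}\limits_{\mathbf{x},\,\mathbf{z}}\; u(\mathbf{x}) + v(\mathbf{z}) \quad\text{subject to}\quad \mathbf{C}\mathbf{x} - \mathbf{z} = \mathbf{0}.
\]
Updating \(\mathbf{x}\) first gives the iteration
\[
\begin{aligned}
\mathbf{x}^{k+1} &= \mathop{\mathrm{arg\,min}}\limits_{\mathbf{x}}\; u(\mathbf{x}) + \tfrac{\rho}{2}\bigl\|\mathbf{C}\mathbf{x} - \mathbf{z}^{k} + \boldsymbol{\lambda}^{k}/\rho\bigr\|_2^2,\\
\mathbf{z}^{k+1} &= \mathop{\mathrm{arg\,min}}\limits_{\mathbf{z}}\; v(\mathbf{z}) + \tfrac{\rho}{2}\bigl\|\mathbf{C}\mathbf{x}^{k+1} - \mathbf{z} + \boldsymbol{\lambda}^{k}/\rho\bigr\|_2^2,\\
\boldsymbol{\lambda}^{k+1} &= \boldsymbol{\lambda}^{k} + \rho\,\bigl(\mathbf{C}\mathbf{x}^{k+1} - \mathbf{z}^{k+1}\bigr),
\end{aligned}
\]
while updating \(\mathbf{z}\) first gives a second algorithm; the remaining formulations arise from other ways of introducing dummy variables and from the Lagrange dual problem.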
In addition, when one of the two objective functions is quadratic, possibly subject to an affine constraint, we show that swapping the update order of the two primal variables in ADM yields the same algorithm. These results identify the few truly distinct ADM algorithms for a given problem. Since those algorithms generally have different subproblems, one can simply pick the one whose subproblems are the most computationally friendly.
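As an illustration of the quadratic case (a hypothetical instance; the symbols \(\mathbf{Q}\), \(\mathbf{q}\), \(\mathbf{A}\), \(\mathbf{b}\), and \(\boldsymbol{\nu}\) are not taken from the chapter), take \(u(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^{\top}\mathbf{Q}\mathbf{x} + \mathbf{q}^{\top}\mathbf{x}\) with \(\mathbf{Q}\succeq \mathbf{0}\), possibly restricted to the affine set \(\{\mathbf{x} : \mathbf{A}\mathbf{x} = \mathbf{b}\}\). In the sketch above, the \(\mathbf{x}\)-subproblem then reduces to the linear (KKT) system
\[
\begin{pmatrix} \mathbf{Q} + \rho\,\mathbf{C}^{\top}\mathbf{C} & \mathbf{A}^{\top} \\ \mathbf{A} & \mathbf{0} \end{pmatrix}
\begin{pmatrix} \mathbf{x}^{k+1} \\ \boldsymbol{\nu} \end{pmatrix}
=
\begin{pmatrix} -\mathbf{q} + \rho\,\mathbf{C}^{\top}\bigl(\mathbf{z}^{k} - \boldsymbol{\lambda}^{k}/\rho\bigr) \\ \mathbf{b} \end{pmatrix},
\]
and, by the result just stated, updating \(\mathbf{x}\) or \(\mathbf{z}\) first yields the same algorithm.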
Keywords
- Dual Problem
- Dual Variable
- Master Problem
- Augmented Lagrangian Method
- Matrix Vector Multiplication
Copyright information
© 2016 Springer International Publishing Switzerland
About this chapter
Cite this chapter
Yan, M., Yin, W. (2016). Self Equivalence of the Alternating Direction Method of Multipliers. In: Glowinski, R., Osher, S., Yin, W. (eds) Splitting Methods in Communication, Imaging, Science, and Engineering. Scientific Computation. Springer, Cham. https://doi.org/10.1007/978-3-319-41589-5_5
DOI: https://doi.org/10.1007/978-3-319-41589-5_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-41587-1
Online ISBN: 978-3-319-41589-5
eBook Packages: Mathematics and Statistics (R0)