Computational Optimization and Applications

Volume 68, Issue 2, pp 363–405

Approximate ADMM algorithms derived from Lagrangian splitting

  • Jonathan Eckstein
  • Wang Yao


Abstract

This paper presents two new approximate versions of the alternating direction method of multipliers (ADMM) derived by modifying the original “Lagrangian splitting” convergence analysis of Fortin and Glowinski. They require neither strong convexity of the objective function nor any restrictions on the coupling matrix. The first method uses an absolutely summable error criterion and resembles methods that may readily be derived from earlier work on the relationship between the ADMM and the proximal point method, but without any need for restrictive assumptions to make it practically implementable. It permits both subproblems to be solved inexactly. The second method uses a relative error criterion and the same kind of auxiliary iterate sequence that has recently been proposed to enable relative-error approximate implementation of non-decomposition augmented Lagrangian algorithms. It also allows both subproblems to be solved inexactly, although ruling out “jamming” behavior requires a somewhat complicated implementation. The convergence analyses of the two methods share extensive underlying elements.
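To make the notion of an absolutely summable error criterion concrete, the following is a schematic sketch of inexact ADMM applied to a lasso-type problem. It is illustrative only and is not the paper's algorithm: the tolerance schedule `eps_k = 1/(k+1)**2` (chosen so that the errors are summable), the inner gradient-descent solver, and all function names are assumptions made for this example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Schematic inexact ADMM for min 0.5||Ax - b||^2 + lam||z||_1 s.t. x = z.

    The x-subproblem is solved only approximately, by gradient descent,
    to a tolerance eps_k that is summable over k; the z-subproblem (an
    l1 prox) is cheap and solved exactly.
    """
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual variable
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.norm(AtA, 2) + rho  # Lipschitz constant of subproblem gradient
    for k in range(iters):
        eps_k = 1.0 / (k + 1) ** 2    # absolutely summable tolerance schedule
        # Inexact x-update: iterate until the subproblem gradient norm <= eps_k.
        for _ in range(100):
            grad = AtA @ x - Atb + rho * (x - z + u)
            if np.linalg.norm(grad) <= eps_k:
                break
            x = x - grad / L
        # Exact z-update and scaled dual update.
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z
```

With `A` equal to the identity, the problem decouples and the minimizer is the soft threshold of `b`, which gives a quick sanity check on the iteration.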


Keywords

Alternating direction method of multipliers · Convex programming · Decomposition methods

Mathematics Subject Classification

90C25 · 49M27


References

  1. Alves, M.M., Svaiter, B.F.: A note on Fejér-monotone sequences in product spaces and its applications to the dual convergence of augmented Lagrangian methods. Math. Program. 155(1–2), 613–616 (2016)
  2. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
  3. Birgin, E.G., Martínez, J.M.: Practical Augmented Lagrangian Methods for Constrained Optimization. SIAM, Philadelphia (2014)
  4. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
  5. Chambolle, A., DeVore, R.A., Lee, N.Y., Lucier, B.J.: Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 7(3), 319–335 (1998)
  6. Dettling, M., Bühlmann, P.: Finding predictive gene groups from microarray data. J. Multivar. Anal. 90(1), 106–131 (2004)
  7. Duarte, M.F., Davenport, M.A., Takbar, D., Laska, J.N., Sun, T., Kelly, K.F., Baraniuk, R.G.: Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras. IEEE Signal Process. Mag. 25(2), 83–91 (2008)
  8. Eckstein, J.: Splitting methods for monotone operators with applications to parallel optimization. Ph.D. thesis, Massachusetts Institute of Technology (1989)
  9. Eckstein, J.: Some saddle-function splitting methods for convex programming. Optim. Methods Softw. 4(1), 75–83 (1994)
  10. Eckstein, J.: A practical general approximation criterion for methods of multipliers based on Bregman distances. Math. Program. 96(1), 61–86 (2003)
  11. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55(3), 293–318 (1992)
  12. Eckstein, J., Silva, P.J.S.: A practical relative error criterion for augmented Lagrangians. Math. Program. 141(1–2), 319–348 (2013)
  13. Eckstein, J., Yao, W.: Approximate versions of the alternating direction method of multipliers. Tech. Rep. 2016-01-5276, Optimization Online (2016)
  14. Fan, R.E., Chen, P.H., Lin, C.J.: Working set selection using second order information for training support vector machines. J. Mach. Learn. Res. 6, 1889–1918 (2005)
  15. Fortin, M., Glowinski, R.: On decomposition-coordination methods using an augmented Lagrangian. In: Fortin, M., Glowinski, R. (eds.) Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, Studies in Mathematics and its Applications, vol. 15. North-Holland, Amsterdam (1983)
  16. Franklin, J.: The elements of statistical learning: data mining, inference and prediction. Math. Intell. 27(2), 83–85 (2005)
  17. Gabay, D.: Applications of the method of multipliers to variational inequalities. In: Fortin, M., Glowinski, R. (eds.) Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, Studies in Mathematics and its Applications, vol. 15, pp. 299–331. North-Holland, Amsterdam (1983)
  18. Guyon, I., Gunn, S., Ben-Hur, A., Dror, G.: Result analysis of the NIPS 2003 feature selection challenge. In: Saul, L., Weiss, Y., Bottou, L. (eds.) Advances in Neural Information Processing Systems 17, pp. 545–552. MIT Press, Cambridge (2005)
  19. He, B.: Inexact implicit methods for monotone general variational inequalities. Math. Program. 86(1), 199–217 (1999)
  20. He, B., Liao, L.Z., Han, D., Yang, H.: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 92(1), 103–118 (2002)
  21. Huang, Y., Liu, H.: A Barzilai–Borwein type method for minimizing composite functions. Numer. Algorithms 69(4), 819–838 (2015)
  22. Kogan, S., Levin, D., Routledge, B.R., Sagi, J.S., Smith, N.A.: Predicting risk from financial reports with regression. In: Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09, pp. 272–280. Association for Computational Linguistics, Stroudsburg, PA, USA (2009)
  23. Lichman, M.: UCI machine learning repository (2013)
  24. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45(3), 503–528 (1989)
  25. Ng, A.Y.: Feature selection, \(L_1\) vs. \(L_2\) regularization, and rotational invariance. In: Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004, pp. 615–622 (2004)
  26. Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, New York (2006)
  27. Polyak, B.T.: Introduction to Optimization. Optimization Software Inc., New York (1987)
  28. Rockafellar, R.T.: Local boundedness of nonlinear, monotone operators. Michigan Math. J. 16, 397–407 (1969)
  29. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
  30. Rockafellar, R.T.: Conjugate Duality and Optimization. SIAM, Philadelphia (1974)
  31. Solodov, M.V., Svaiter, B.F.: A hybrid projection-proximal point algorithm. J. Convex Anal. 6(1), 59–70 (1999)
  32. Solodov, M.V., Svaiter, B.F.: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Math. Oper. Res. 25(2), 214–230 (2000)
  33. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58(1), 267–288 (1996)
  34. Xie, J., Liao, A., Yang, X.: An inexact alternating direction method of multipliers with relative error criteria. Optim. Lett. 11(3), 583–596 (2017)
  35. Yuan, X.M.: The improvement with relative errors of He et al.'s inexact alternating direction method for monotone variational inequalities. Math. Comput. Model. 42(11–12), 1225–1236 (2005)

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Department of Management Science and Information Systems and RUTCOR, Rutgers University, Piscataway, USA
  2. RUTCOR, Rutgers University, Piscataway, USA