
Combining Lagrangian decomposition and excessive gap smoothing technique for solving large-scale separable convex optimization problems

  • Published in: Computational Optimization and Applications

Abstract

A new algorithm for solving large-scale convex optimization problems with a separable objective function is proposed. The basic idea is to combine three techniques: Lagrangian dual decomposition, excessive gap and smoothing. The main advantage of this algorithm is that it automatically and simultaneously updates the smoothness parameters, which significantly improves its performance. The convergence of the algorithm is proved under weak conditions imposed on the original problem. The rate of convergence is \(O(\frac{1}{k})\), where k is the iteration counter. In the second part of the paper, the proposed algorithm is coupled with a dual scheme to construct a switching variant in a dual decomposition framework. We discuss implementation issues and make a theoretical comparison. Numerical examples confirm the theoretical results.
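To illustrate the Lagrangian dual decomposition idea on which the paper builds (this is a generic sketch with made-up data, not the paper's algorithm): for a separable problem \(\min f_1(x_1)+f_2(x_2)\) s.t. \(x_1+x_2=b\), the dual function splits into independent subproblems, and a simple dual gradient ascent coordinates them through a single multiplier.

```python
# Toy separable problem:  min  f1(x1) + f2(x2)  s.t.  x1 + x2 = b,
# with f_i(x) = 0.5*(x - c_i)^2.  The Lagrangian subproblems decouple and
# have the closed-form solutions  x_i(y) = argmin_x f_i(x) + y*x = c_i - y.
# (c1, c2, b and the step size are illustrative values, not from the paper.)
c1, c2, b = 1.0, 3.0, 2.0
y = 0.0          # dual multiplier for the coupling constraint
step = 0.5       # dual gradient step size

for k in range(200):
    x1, x2 = c1 - y, c2 - y          # solve the two subproblems in parallel
    y += step * (x1 + x2 - b)        # dual ascent along the constraint residual

print(x1, x2, y)   # x1 + x2 ≈ b; optimal multiplier y* = (c1 + c2 - b)/2 = 1
```

The paper's contribution is precisely to avoid the slow, step-size-sensitive convergence of such plain dual schemes by smoothing the dual function and updating the smoothness parameters via the excessive gap condition.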

Figures 1–5 appear in the full article.


Notes

  1. Bit Error Rate function.

References

  1. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Prentice Hall, New York (1989)

  2. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)

  3. Chen, G., Teboulle, M.: A proximal-based decomposition method for convex minimization problems. Math. Program. 64, 81–101 (1994)

  4. Conejo, A.J., Mínguez, R., Castillo, E., García-Bertrand, R.: Decomposition Techniques in Mathematical Programming: Engineering and Science Applications. Springer, Berlin (2006)

  5. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  6. Duchi, J.C., Agarwal, A., Wainwright, M.J.: Dual averaging for distributed optimization: convergence analysis and network scaling. IEEE Trans. Autom. Control 57(3), 592–606 (2012)

  7. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)

  8. Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vols. 1–2. Springer, Berlin (2003)

  9. Goldfarb, D., Ma, S.: Fast multiple-splitting algorithms for convex optimization. SIAM J. Optim. 22(2), 533–556 (2012)

  10. Hamdi, A.: Two-level primal–dual proximal decomposition technique to solve large-scale optimization problems. Appl. Math. Comput. 160, 921–938 (2005)

  11. Han, S.P., Lou, G.: A parallel algorithm for a class of convex programs. SIAM J. Control Optim. 26, 345–355 (1988)

  12. Hariharan, L., Pucci, F.D.: Decentralized resource allocation in dynamic networks of agents. SIAM J. Optim. 19(2), 911–940 (2008)

  13. He, B.S., Tao, M., Xu, M.H., Yuan, X.M.: Alternating directions based contraction method for generally separable linearly constrained convex programming problems. Optimization (2011). doi:10.1080/02331934.2011.611885

  14. He, B.S., Yang, H., Wang, S.L.: Alternating directions method with self-adaptive penalty parameters for monotone variational inequalities. J. Optim. Theory Appl. 106, 349–368 (2000)

  15. He, B.S., Yuan, X.M.: On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50, 700–709 (2012)

  16. Holmberg, K.: Experiments with primal-dual decomposition and subgradient methods for the uncapacitated facility location problem. Optimization 49(5–6), 495–516 (2001)

  17. Holmberg, K., Kiwiel, K.C.: Mean value cross decomposition for nonlinear convex problems. Optim. Methods Softw. 21(3), 401–417 (2006)

  18. Kojima, M., Megiddo, N., Mizuno, S., et al.: Horizontal and vertical decomposition in interior point methods for linear programs. Technical report, Information Sciences, Tokyo Institute of Technology, Tokyo (1993)

  19. Lenoir, A., Mahey, P.: Accelerating convergence of a separable augmented Lagrangian algorithm. Technical report LIMOS/RR-07-14, pp. 1–34 (2007)

  20. Love, R.F., Kraemer, S.A.: A dual decomposition method for minimizing transportation costs in multifacility location problems. Transp. Sci. 7, 297–316 (1973)

  21. Mehrotra, S.: On the implementation of a primal–dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)

  22. Necoara, I., Suykens, J.A.K.: Applications of a smoothing technique to decomposition in convex optimization. IEEE Trans. Autom. Control 53(11), 2674–2679 (2008)

  23. Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54, 48–61 (2009)

  24. Nesterov, Y.: A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Dokl. Akad. Nauk SSSR 269, 543–547 (1983). (Translated as Soviet Math. Dokl.)

  25. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization, vol. 87. Kluwer Academic, Dordrecht (2004)

  26. Nesterov, Y.: Excessive gap technique in nonsmooth convex minimization. SIAM J. Optim. 16(1), 235–249 (2005)

  27. Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103(1), 127–152 (2005)

  28. Garg, N., Könemann, J.: Faster and simpler algorithms for multicommodity flow and other fractional packing problems. SIAM J. Comput. 37(2), 630–652 (2007)

  29. Ruszczyński, A.: On convergence of an augmented Lagrangian decomposition method for sparse convex optimization. Math. Oper. Res. 20, 634–656 (1995)

  30. Samar, S., Boyd, S., Gorinevsky, D.: Distributed estimation via dual decomposition. In: Proceedings of the European Control Conference (ECC), Kos, Greece, pp. 1511–1516 (2007)

  31. Spingarn, J.E.: Applications of the method of partial inverses to convex programming: decomposition. Math. Program., Ser. A 32, 199–223 (1985)

  32. Tran-Dinh, Q., Necoara, I., Savorgnan, C., Diehl, M.: An inexact perturbed path-following method for Lagrangian decomposition in large-scale separable convex optimization. Internal Report 12-181, ESAT-SISTA, KU Leuven, Belgium (2012). SIAM J. Optim., accepted

  33. Tseng, P.: Alternating projection-proximal methods for convex programming and variational inequalities. SIAM J. Optim. 7(4), 951–965 (1997)

  34. Tsiaflakis, P., Diehl, M., Moonen, M.: Distributed spectrum management algorithms for multi-user DSL networks. IEEE Trans. Signal Process. 56(10), 4825–4843 (2008)

  35. Tsiaflakis, P., Necoara, I., Suykens, J.A.K., Moonen, M.: Improved dual decomposition based optimization for DSL dynamic spectrum management. IEEE Trans. Signal Process. 58(4), 2230–2245 (2010)

  36. Vania, D.S.E.: Finding approximate solutions for large scale linear programs. Ph.D. Thesis, No. 18188, ETH Zurich (2009)

  37. Venkat, A.N.: Distributed model predictive control: theory and applications. Ph.D. Thesis, University of Wisconsin–Madison (2006)

  38. Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106(1), 25–57 (2006)

  39. Zhao, G.: A Lagrangian dual method with self-concordant barriers for multistage stochastic convex programming. Math. Program. 102, 1–24 (2005)


Acknowledgements

The authors would like to thank Dr. Ion Necoara and Dr. Michel Baes for useful comments on the text and for pointing out some interesting references. Furthermore, the authors are grateful to Dr. Paschalis Tsiaflakis for providing the problem data in the last numerical example.

Research supported by Research Council KUL: CoE EF/05/006 Optimization in Engineering (OPTEC), IOF-SCORES4CHEM, GOA/10/009 (MaNet), GOA /10/11, several PhD/postdoc and fellow grants; Flemish Government: FWO: PhD/postdoc grants, projects G.0452.04, G.0499.04, G.0211.05, G.0226.06, G.0321.06, G.0302.07, G.0320.08, G.0558.08, G.0557.08, G.0588.09, G.0377.09, G.0712.11, research communities (ICCoS, ANMMM, MLDM); IWT: PhD Grants, Belgian Federal Science Policy Office: IUAP P6/04; EU: ERNSI; FP7-HDMPC, FP7-EMBOCON, ERC-HIGHWIND, Contract Research: AMINAL. Other: Helmholtz-viCERP, COMET-ACCM.

Author information


Correspondence to Quoc Tran Dinh.

Appendix: The proofs of technical lemmas


This appendix provides the proofs of two technical lemmas stated in the previous sections.

Proof of Lemma 4

The proof of this lemma is very similar to that of Lemma 3 in [27].

Proof

Let \(\hat{y} := y^{*}(\hat{x};\beta_{2}) := \frac{1}{\beta_{2}}(A\hat{x}-b)\). Then it follows from (18) that:

(77)

By using the expression \(f(x;\beta_{2})=\phi(x)+\psi(x;\beta_{2})\), the definition of \(\bar{x}\), condition (26) and (77), we have:

which is indeed the condition (21). □

Proof of Lemma 7

Let us define \(\xi(t) := \frac{2}{\sqrt{1+4/t^{2}}+1}\). It is easy to show that ξ is increasing on (0,1). Moreover, \(\tau_{k+1}=\xi(\tau_{k})\) for all k≥0. Let us introduce u:=2/t. Then we can show that \(\frac{2}{u+2} < \xi(\frac{2}{u}) < \frac{2}{u+1}\). By using these inequalities and the fact that ξ is increasing on (0,1), we have:

$$ \frac{\tau_0}{1+\tau_0 k} \equiv\frac{2}{u_0 + 2k} < \tau_k < \frac{2}{u_0 + k} \equiv\frac{2\tau_0}{2+\tau_0 k}, \qquad u_0 := \frac{2}{\tau_0}. $$
(78)
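The bounds in (78) are easy to check numerically. The following sketch (a sanity check, not part of the paper; the starting value \(\tau_0\) is chosen arbitrarily) iterates \(\tau_{k+1}=\xi(\tau_k)\) and verifies that \(2/(u_0+2k) < \tau_k < 2/(u_0+k)\) with \(u_0 = 2/\tau_0\):

```python
import math

def xi(t):
    # tau_{k+1} = xi(tau_k), as defined in the proof of Lemma 7
    return 2.0 / (math.sqrt(1.0 + 4.0 / t**2) + 1.0)

tau0 = 0.5               # any tau0 in (0,1) works; 0.5 is an arbitrary choice
u0 = 2.0 / tau0
tau = tau0
for k in range(1, 101):
    tau = xi(tau)
    lower = 2.0 / (u0 + 2 * k)   # equals tau0/(1 + tau0*k)
    upper = 2.0 / (u0 + k)       # equals 2*tau0/(2 + tau0*k)
    assert lower < tau < upper, (k, lower, tau, upper)
print("bounds (78) hold for k = 1..100")
```

The key mechanism is visible in the variable u = 2/τ: each application of ξ increases u by at least 1 and at most 2, which is exactly what the two-sided estimate on ξ(2/u) encodes.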

Now, by the update rule (58), at each iteration k we update either \(\beta_{1}^{k}\) or \(\beta_{2}^{k}\), but not both. Hence:

$$ \begin{aligned}[c] \beta_1^k &= (1-\tau_0)(1-\tau_2)\cdots(1-\tau_{2\lfloor k/2\rfloor})\beta_1^0, \\ \beta_2^k &= (1-\tau_1)(1-\tau_3)\cdots(1-\tau_{2\lfloor k/2\rfloor-1})\beta_2^0, \end{aligned} $$
(79)

where ⌊x⌋ denotes the largest integer less than or equal to the positive real number x. On the other hand, since \(\tau_{i+1}<\tau_{i}\) for all i≥0, for any l≥0 we have:

$$ \begin{aligned}[c] &(1-\tau_0)\prod_{i=0}^{2l}(1-\tau_i) < \bigl[(1-\tau_0)(1-\tau_2)\cdots(1-\tau_{2l})\bigr]^2 < \prod_{i=0}^{2l+1}(1-\tau_i), \quad\text{and} \\ &\prod_{i=0}^{2l-1}(1-\tau_i) < \bigl[(1-\tau_1)(1-\tau_3)\cdots(1-\tau_{2l-1})\bigr]^2 < (1-\tau_0)^{-1}\prod_{i=0}^{2l}(1-\tau_i). \end{aligned} $$
(80)

Noting that \(\prod_{i=0}^{k}(1-\tau_{i}) = \frac{(1-\tau_{0})}{\tau_{0}^{2}}\tau_{k}^{2}\), it follows from (79) and (80) for k≥1 that:

By combining these inequalities with (78), and noting that \(\tau_{0}\in(0,1)\), we obtain (59). □
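The product identity \(\prod_{i=0}^{k}(1-\tau_{i}) = \frac{(1-\tau_{0})}{\tau_{0}^{2}}\tau_{k}^{2}\) used in the last step follows from the one-step relation \(\tau_{k+1}^{2} = (1-\tau_{k+1})\tau_{k}^{2}\), which ξ satisfies by construction; the product then telescopes. A quick numeric check (an illustration with an arbitrary \(\tau_0\), not part of the paper):

```python
import math

def xi(t):
    # tau_{k+1} = xi(tau_k); it satisfies tau'^2 = (1 - tau') * tau^2
    return 2.0 / (math.sqrt(1.0 + 4.0 / t**2) + 1.0)

tau0 = 0.7               # arbitrary starting value in (0,1)
tau, prod = tau0, 1.0 - tau0
for k in range(1, 50):
    tau = xi(tau)
    prod *= 1.0 - tau
    # prod = prod_{i=0}^{k} (1 - tau_i) should equal (1-tau0)/tau0^2 * tau_k^2
    target = (1.0 - tau0) / tau0**2 * tau**2
    assert abs(prod - target) < 1e-12, (k, prod, target)
print("product identity verified for k = 1..49")
```

To see the one-step relation directly: write \(\tau' = 2/(s+1)\) with \(s = \sqrt{1+4/\tau^2}\); then \(1-\tau' = (s-1)/(s+1)\) and \((1-\tau')\tau^2 = \tau^2(s^2-1)/(s+1)^2 = 4/(s+1)^2 = \tau'^2\).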


Cite this article

Tran Dinh, Q., Savorgnan, C. & Diehl, M. Combining Lagrangian decomposition and excessive gap smoothing technique for solving large-scale separable convex optimization problems. Comput Optim Appl 55, 75–111 (2013). https://doi.org/10.1007/s10589-012-9515-6
