Subgradient Methods for Saddle-Point Problems

Abstract

We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
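To make the approach concrete, below is a minimal sketch (in Python) of an Arrow–Hurwicz-type primal-dual subgradient iteration with iterate averaging, the kind of scheme the abstract describes. The toy problem, step size, and all names are illustrative assumptions, not the paper's actual setup or notation.

    # A minimal sketch of a primal-dual subgradient iteration with iterate
    # averaging for a convex-concave saddle-point problem
    #   min_x max_{mu >= 0} L(x, mu).
    # The toy Lagrangian
    #   L(x, mu) = (x - 2)**2 + mu * (x - 1)
    # comes from minimizing f(x) = (x - 2)^2 subject to x - 1 <= 0;
    # its saddle point is (x*, mu*) = (1, 2). All of this is assumed for
    # illustration and is not taken from the paper.

    def saddle_point_subgradient(steps=20000, alpha=0.005):
        x, mu = 0.0, 0.0            # primal and dual iterates
        x_sum, mu_sum = 0.0, 0.0    # running sums for the averaged iterates
        for _ in range(steps):
            g_x = 2.0 * (x - 2.0) + mu        # subgradient of L in x
            g_mu = x - 1.0                    # supergradient of L in mu
            x = x - alpha * g_x               # primal descent step
            mu = max(0.0, mu + alpha * g_mu)  # dual ascent step, projected onto mu >= 0
            x_sum += x
            mu_sum += mu
        # The averaged iterates, not the last ones, serve as the approximate
        # saddle point for which per-iteration rate estimates are proved.
        return x_sum / steps, mu_sum / steps

    x_avg, mu_avg = saddle_point_subgradient()
    print(f"averaged primal iterate: {x_avg:.3f}  (saddle point: 1.0)")
    print(f"averaged dual iterate:   {mu_avg:.3f}  (saddle point: 2.0)")

Averaging is the key design choice here: the raw primal-dual iterates of such methods tend to oscillate around the saddle point under a constant step size, while their running averages converge and admit explicit convergence rate bounds.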

Keywords

Saddle-point subgradient methods · Averaging · Approximate primal solutions · Primal-dual subgradient methods · Convergence rate

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, Urbana, USA
  2. Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, USA