A Deflected Subgradient Method Using a General Augmented Lagrangian Duality with Implications on Penalty Methods

Part of the Springer Optimization and Its Applications book series (SOIA, volume 47)


We propose a duality scheme for solving constrained nonsmooth and nonconvex optimization problems. Our approach uses a new variant of the deflected subgradient method to solve the dual problem. Our augmented Lagrangian function induces a primal–dual method with strong duality, that is, with zero duality gap. We prove that our method converges to a dual solution if and only if a dual solution exists. We also prove that all accumulation points of an auxiliary primal sequence are primal solutions. Our results apply, in particular, to classical penalty methods, since the penalty functions associated with these methods can be recovered as a special case of our augmented Lagrangians. Besides the classical 1- and 2-norm augmenting terms, terms of many other forms can be used in our Lagrangian function. Using a practical selection of the step-size parameters, as well as various choices of the augmenting term, we demonstrate the method on test problems. Our numerical experiments indicate that an augmenting term of an exponential form is more favourable than the classical 1- or 2-norm forms.
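To illustrate the kind of primal–dual scheme described above, the following is a minimal sketch (not the authors' exact algorithm) of a modified subgradient ascent on the dual of a sharp augmented Lagrangian with a 1-norm augmenting term. The toy problem, the fixed step size `s`, and the deflection/increment parameter `eps` are illustrative assumptions; here the inner minimization can be solved in closed form.

```python
# Toy problem:  min x1^2 + x2^2  s.t.  h(x) = x1 + x2 - 2 = 0  (solution x = (1, 1)).
# Sharp augmented Lagrangian:  L(x, u, c) = f(x) + c*|h(x)| - u*h(x).
# Dual ascent uses the subgradient (-h(x*), |h(x*)|) of the dual function at (u, c).

def inner_min(u, c):
    """Exactly minimize L(x, u, c) for the toy problem.
    With t = x1 + x2, the best split is x1 = x2 = t/2, so we minimize
    g(t) = t^2/2 + c*|t - 2| - u*(t - 2), a piecewise quadratic in t."""
    if u - c > 2:        # optimum on the smooth branch t > 2
        t = u - c
    elif u + c < 2:      # optimum on the smooth branch t < 2
        t = u + c
    else:                # the kink t = 2 is optimal
        t = 2.0
    x = (t / 2.0, t / 2.0)
    return x, t - 2.0    # primal point and constraint violation h(x)

u, c = 0.0, 0.0          # dual variables: multiplier and penalty parameter
s, eps = 0.5, 0.1        # step size and extra penalty increment (illustrative)
for k in range(50):
    x, h = inner_min(u, c)
    if abs(h) < 1e-10:   # feasible inner minimizer => dual solution found
        break
    u -= s * h                   # subgradient step in the multiplier
    c += (s + eps) * abs(h)      # strictly larger increase in the penalty

print(x, u, c)  # x approaches the primal solution (1, 1)
```

The strictly larger increase of the penalty parameter `c` relative to the multiplier step is what drives the zero duality gap in this family of methods; once `c` exceeds the exact-penalty threshold, the inner minimizer becomes feasible and the iteration stops.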


Keywords: Penalty function, accumulation point, dual solution, strong duality, subgradient method





Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

Mathematics and Statistics, University of South Australia, Mawson Lakes, Australia
