
Computational Optimization and Applications, Volume 51, Issue 3, pp. 941–965

Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization

  • Ernesto G. Birgin
  • J. M. Martínez

Abstract

At each outer iteration of standard Augmented Lagrangian methods, one tries to solve a box-constrained optimization subproblem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable, so the possibility of terminating the subproblem solution without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. In practice, however, one may be unable to solve the subproblem to the required precision, for several reasons. One of them is that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper, a practical strategy for decreasing the penalty parameter in situations like the one mentioned above is proposed. More generally, the different decisions that may be taken when, in practice, one cannot solve the Augmented Lagrangian subproblem are discussed. The result is an improved Augmented Lagrangian method that handles numerical difficulties satisfactorily while preserving a suitable convergence theory. Numerical experiments involving all the test problems of the CUTEr collection are presented.
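To make the abstract's idea concrete, the following is a minimal sketch of an Augmented Lagrangian outer loop with a nonmonotone penalty update: the penalty parameter is decreased when the inner box-constrained solver fails to reach the required precision, and increased when feasibility progress is too slow. This is an illustrative simplification, not the authors' actual algorithm or its ALGENCAN implementation; the function names (`augmented_lagrangian`, `projected_gradient`), the toy subsolver, and the specific update constants are all assumptions for the sketch.

```python
import numpy as np

def projected_gradient(g, x, lo, hi, tol, alpha=1e-2, iters=5000):
    # Toy projected-gradient subsolver for the box-constrained
    # subproblem (a stand-in for a GENCAN/LANCELOT-type solver).
    # Returns the final iterate and whether the tolerance was met;
    # the fixed stepsize alpha is purely illustrative.
    for _ in range(iters):
        x_new = np.clip(x - alpha * g(x), lo, hi)
        if np.linalg.norm(x_new - x, np.inf) <= tol * alpha:
            return x_new, True
        x = x_new
    return x, False

def augmented_lagrangian(grad_f, h, jac_h, x0, lo, hi,
                         rho0=10.0, tau=0.5, gamma=10.0,
                         tol=1e-6, max_outer=50):
    # Outer loop for: minimize f(x) s.t. h(x) = 0, lo <= x <= hi.
    # Nonmonotone penalty update: rho is *decreased* when the inner
    # solver cannot satisfy the stopping test, instead of only ever
    # being increased as in the classical scheme.
    x, lam, rho = x0.astype(float), np.zeros(h(x0).size), rho0
    best_infeas = np.inf
    for _ in range(max_outer):
        def grad_L(x, lam=lam, rho=rho):
            # Gradient of L(x) = f(x) + lam.h(x) + (rho/2)||h(x)||^2
            return grad_f(x) + jac_h(x).T @ (lam + rho * h(x))
        x, solved = projected_gradient(grad_L, x, lo, hi, tol)
        infeas = np.linalg.norm(h(x), np.inf)
        lam = lam + rho * h(x)  # first-order multiplier update
        if solved and infeas <= tol:
            break
        if not solved:
            rho *= 0.5     # subproblem too hard: relax the penalty
        elif infeas > tau * best_infeas:
            rho *= gamma   # insufficient feasibility progress
        best_infeas = min(best_infeas, infeas)
    return x
```

As a toy run, minimizing ||x||² subject to x₁ + x₂ = 1 over the box [0, 1]² drives the iterates to (0.5, 0.5).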

Keywords

Nonlinear programming · Augmented Lagrangian methods · Penalty parameters · Numerical experiments


References

  1. Andreani, R., Haeser, G., Martínez, J.M.: On sequential optimality conditions for smooth constrained optimization. Optimization (to appear)
  2. Andreani, R., Birgin, E.G., Martínez, J.M., Schuverdt, M.L.: On Augmented Lagrangian methods with general lower-level constraints. SIAM J. Optim. 18, 1286–1309 (2007)
  3. Andreani, R., Birgin, E.G., Martínez, J.M., Schuverdt, M.L.: Augmented Lagrangian methods under the Constant Positive Linear Dependence constraint qualification. Math. Program. 111, 5–32 (2008)
  4. Andreani, R., Martínez, J.M., Schuverdt, M.L.: On the relation between the Constant Positive Linear Dependence condition and quasinormality constraint qualification. J. Optim. Theory Appl. 125, 473–485 (2005)
  5. Andreani, R., Martínez, J.M., Svaiter, B.F.: A new sequential optimality condition for constrained optimization and algorithmic consequences. SIAM J. Optim. (to appear)
  6. Andretta, M., Birgin, E.G., Martínez, J.M.: Practical active-set Euclidian trust-region method with spectral projected gradients for bound-constrained minimization. Optimization 54, 305–325 (2005)
  7. Birgin, E.G., Fernández, D., Martínez, J.M.: On the boundedness of penalty parameters in an Augmented Lagrangian method with lower level constraints. Technical Report, Department of Applied Mathematics, State University of Campinas, Brazil
  8. Birgin, E.G., Martínez, J.M.: A box-constrained optimization algorithm with negative curvature directions and spectral projected gradients. Computing [Suppl.] 15, 49–60 (2001)
  9. Birgin, E.G., Martínez, J.M.: Large-scale active-set box-constrained optimization method with spectral projected gradients. Comput. Optim. Appl. 23, 101–125 (2002)
  10. Buys, J.D.: Dual algorithms for constrained optimization problems. Doctoral Dissertation, University of Leiden, Leiden, The Netherlands (1972)
  11. Byrd, R.H., Lu, P., Nocedal, J.: A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Stat. Comput. 16, 1190–1208 (1995)
  12. Coleman, T.F., Li, Y.: On the convergence of reflective Newton methods for large-scale nonlinear minimization subject to bounds. Math. Program. 67, 189–224 (1994)
  13. Coleman, T.F., Li, Y.: An interior trust region approach for nonlinear minimization subject to bounds. SIAM J. Optim. 6, 418–445 (1996)
  14. Conn, A.R., Gould, N.I.M., Sartenaer, A., Toint, Ph.L.: Convergence properties of an Augmented Lagrangian algorithm for optimization with a combination of general equality and linear constraints. SIAM J. Optim. 6, 674–703 (1996)
  15. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: Global convergence of a class of trust region algorithms for optimization with simple bounds. SIAM J. Numer. Anal. 25, 433–460 (1988)
  16. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: Testing a class of methods for solving minimization problems with simple bounds on the variables. Math. Comput. 50, 399–430 (1988)
  17. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: A globally convergent Augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 28, 545–572 (1991)
  18. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization (Release A). Springer Series in Computational Mathematics, vol. 17. Springer, New York (1992)
  19. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: Trust Region Methods. MPS/SIAM Series on Optimization. SIAM, Philadelphia (2000)
  20. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
  21. Gould, N.I.M., Orban, D., Toint, Ph.L.: CUTEr and SifDec: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29, 373–394 (2003)
  22. Hager, W.W., Zhang, H.: A new active set algorithm for box constrained optimization. SIAM J. Optim. 17, 526–557 (2006)
  23. Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)
  24. Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, New York (1969)
  25. Qi, L., Wei, Z.: On the constant positive linear dependence condition and its application to SQP methods. SIAM J. Optim. 10, 963–981 (2000)
  26. Rockafellar, R.T.: A dual approach to solving nonlinear programming problems by unconstrained optimization. Math. Program. 5, 354–373 (1973)
  27. Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106, 25–57 (2006)
  28. Zhu, C., Byrd, R.H., Lu, P., Nocedal, J.: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. ACM Trans. Math. Softw. 23, 550–560 (1997)

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo, São Paulo, Brazil
  2. Department of Applied Mathematics, Institute of Mathematics, Statistics and Scientific Computing, University of Campinas, Campinas, Brazil
