Computational Optimization and Applications, Volume 60, Issue 3, pp. 609–631

Optimality properties of an Augmented Lagrangian method on infeasible problems

Abstract

Sometimes the feasible set of an optimization problem that one aims to solve with a Nonlinear Programming algorithm is empty. In this case, two characteristics of the algorithm are desirable. On the one hand, the algorithm should converge to a minimizer of some infeasibility measure. On the other hand, one may wish to find a point of minimal infeasibility at which some optimality condition with respect to the objective function holds. Ideally, the algorithm should converge to a minimizer of the objective function subject to minimal infeasibility. In this paper, the behavior of an Augmented Lagrangian algorithm with respect to these properties is studied.
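
To make these two properties concrete, the following is a minimal formal sketch in our own notation (the symbols f, h, g, Ω and the measure Φ are illustrative assumptions, not necessarily the formulation used in the paper), for a standard nonlinear program with lower-level constraints x ∈ Ω and the squared infeasibility measure commonly used in the Augmented Lagrangian literature:

\[
\min_{x \in \Omega} f(x) \quad \text{s.t.} \quad h(x) = 0, \;\; g(x) \le 0,
\qquad
\Phi(x) = \tfrac{1}{2}\Bigl( \|h(x)\|_2^2 + \|\max\{0, g(x)\}\|_2^2 \Bigr).
\]

The first property asks that limit points of the algorithm solve \(\min_{x \in \Omega} \Phi(x)\); the ideal property asks that they solve

\[
\min_{x} \; f(x) \quad \text{s.t.} \quad x \in \operatorname*{arg\,min}_{z \in \Omega} \Phi(z),
\]

that is, that they minimize the objective among the points of minimal infeasibility. When the feasible set is nonempty, the latter problem reduces to the original one, since the minimal value of \(\Phi\) is then zero and its minimizers are exactly the feasible points.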

Keywords

Nonlinear programming · Infeasible domains · Augmented Lagrangians · Algorithms · Numerical experiments

Acknowledgments

This work was supported by PRONEX-CNPq/FAPERJ E-26/111.449/2010-APQ1, FAPESP 2010/10133-0, 2013/05475-7, and 2013/07375-0, Capes/MES-Cuba 226/2012, Capes/Procad NF 21/2009, and CNPq 474160/2013-0.

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • E. G. Birgin (1)
  • J. M. Martínez (2)
  • L. F. Prudente (3)
  1. Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo, São Paulo, Brazil
  2. Department of Applied Mathematics, Institute of Mathematics, Statistics, and Scientific Computing, University of Campinas, Campinas, Brazil
  3. Institute of Mathematics and Statistics, Federal University of Goiás, Goiânia, Brazil
