
Computational Optimization and Applications, Volume 46, Issue 2, pp. 279–304

Proximal methods for nonlinear programming: double regularization and inexact subproblems

  • Jonathan Eckstein
  • Paulo J. S. Silva
Article

Abstract

This paper describes the first phase of a project attempting to construct an efficient general-purpose nonlinear optimizer using an augmented Lagrangian outer loop with a relative error criterion, and an inner loop employing a state-of-the-art conjugate gradient solver. The outer loop can also employ double regularized proximal kernels, a fairly recent theoretical development that leads to fully smooth subproblems. We first enhance the existing theory to show that our approach is globally convergent in both the primal and dual spaces when applied to convex problems. We then present an extensive computational evaluation using the CUTE test set, showing that some aspects of our approach are promising, but some are not. These conclusions in turn lead to additional computational experiments suggesting where to next focus our theoretical and computational efforts.
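To make the architecture described above concrete, here is a minimal sketch of an augmented Lagrangian outer loop with inexactly solved subproblems, applied to a hypothetical toy problem (minimize (x1−1)² + (x2−2)² subject to x1 + x2 = 1). It uses the classical quadratic penalty kernel and a simple gradient-descent inner solver with a tolerance that tightens each outer iteration; the paper's double-regularized kernels, relative error criterion, and conjugate gradient inner solver are not reproduced here, and all function names are illustrative.

```python
# Toy equality-constrained problem (hypothetical, for illustration only):
#   minimize (x1 - 1)^2 + (x2 - 2)^2  subject to  x1 + x2 = 1.
# Optimal point: x = (0, 1), multiplier lam = 2.

def f_grad(x):
    """Gradient of the objective f(x) = (x1-1)^2 + (x2-2)^2."""
    return [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)]

def h(x):
    """Equality constraint h(x) = x1 + x2 - 1, enforced as h(x) = 0."""
    return x[0] + x[1] - 1.0

def auglag_grad(x, lam, c):
    """Gradient of L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2.

    Since grad h = (1, 1), the constraint terms add (lam + c*h(x))
    to each component of the objective gradient.
    """
    g = f_grad(x)
    factor = lam + c * h(x)
    return [g[0] + factor, g[1] + factor]

def inner_solve(x, lam, c, tol, step=0.04, max_it=10000):
    """Inexact subproblem solve: gradient descent until the gradient
    of the augmented Lagrangian is below tol (in the max norm)."""
    for _ in range(max_it):
        g = auglag_grad(x, lam, c)
        if max(abs(v) for v in g) <= tol:
            break
        x = [x[0] - step * g[0], x[1] - step * g[1]]
    return x

def auglag(x=(0.0, 0.0), lam=0.0, c=10.0, outer_it=30):
    """Augmented Lagrangian outer loop with inexact inner solves."""
    x = list(x)
    tol = 1.0
    for _ in range(outer_it):
        x = inner_solve(x, lam, c, tol)  # inexact inner minimization
        lam += c * h(x)                  # multiplier (dual) update
        tol *= 0.5                       # tighten the inner tolerance
    return x, lam
```

The tightening tolerance is a crude stand-in for the paper's relative error criterion, which instead ties the allowed inner inexactness to quantities measured during the iteration rather than to a fixed schedule.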

Keywords

Proximal algorithms · Augmented Lagrangians · Nonlinear programming



Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Department of Management Science and Information Systems and RUTCOR, Rutgers University, Piscataway, USA
  2. Department of Computer Science, University of São Paulo, São Paulo, Brazil
