Fruitful uses of smooth exact merit functions in constrained optimization
In this paper we are concerned with continuously differentiable exact merit functions as a means of solving constrained optimization problems, including problems of considerable dimension. To give a complete picture of the fundamental properties of exact merit functions, we first review the development of smooth exact merit functions. A recently proposed shifted barrier augmented Lagrangian function is then presented as a potentially powerful tool for solving large-scale constrained optimization problems. Rather than being minimized directly, this merit function can be used more fruitfully to globalize efficient local algorithms, thus yielding methods suitable for large-scale problems. Moreover, by carefully choosing the search directions and the linesearch strategy, it is possible to define algorithms that converge superlinearly towards points satisfying first- and second-order necessary optimality conditions. We propose a general scheme for an algorithm employing such a merit function.
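The idea of using a Lagrangian-based merit function can be illustrated with a minimal sketch. The code below is not the paper's shifted barrier algorithm; it implements the classical augmented Lagrangian L_c(x, λ) = f(x) + λ h(x) + (c/2) h(x)², on an assumed toy equality-constrained problem, with an inner gradient-descent minimization and the standard first-order multiplier update. Problem data, penalty parameter, and iteration counts are all illustrative choices.

```python
import numpy as np

# Toy problem (illustrative, not from the paper):
#   min f(x) = x1^2 + x2^2   s.t.  h(x) = x1 + x2 - 1 = 0
# Solution: x* = (0.5, 0.5), multiplier lam* = -1.

def f_grad(x):
    return 2.0 * x                      # gradient of f(x) = ||x||^2

def h(x):
    return x[0] + x[1] - 1.0            # single equality constraint

h_grad = np.array([1.0, 1.0])           # constant constraint gradient

def augmented_lagrangian_solve(c=10.0, outer=30, inner=500):
    """Minimize the merit function L_c(., lam) in an inner loop,
    then update the multiplier estimate in an outer loop."""
    x = np.zeros(2)
    lam = 0.0
    step = 1.0 / (2.0 + 2.0 * c)        # safe step: Hessian eigenvalues are 2 and 2+2c
    for _ in range(outer):
        for _ in range(inner):          # gradient descent on L_c(x, lam)
            g = f_grad(x) + (lam + c * h(x)) * h_grad
            x = x - step * g
        lam = lam + c * h(x)            # first-order multiplier update
    return x, lam
```

Each outer iteration drives the constraint violation toward zero while refining the multiplier estimate; exact merit functions, by contrast, incorporate multiplier information directly so that a single unconstrained minimization can suffice.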
Keywords: constrained optimization, continuously differentiable merit functions, primal-dual algorithms