Log-Sigmoid Multipliers Method in Constrained Optimization

Cite this article as: Polyak, R.A. Annals of Operations Research (2001) 101: 427. doi:10.1023/A:1010938423538

Abstract
In this paper we introduce and analyze the Log-Sigmoid (LS) multipliers method for constrained optimization. The LS method is to the recently developed smoothing technique what the augmented Lagrangian is to the penalty method, or the modified barrier to classical barrier methods. At the same time, the LS method has some specific properties that make it substantially different from other nonquadratic augmented Lagrangian techniques.
We establish convergence of the LS-type penalty method under very mild assumptions on the input data, and estimate the rate of convergence of the LS multipliers method under the standard second-order optimality conditions, for both exact and inexact minimization.
We also establish some important properties of the dual function and the dual problem based on the LS Lagrangian, and introduce the primal-dual LS method.
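As a rough, self-contained sketch of the general scheme (not necessarily the paper's exact formulation): the constraints c_i(x) ≥ 0 are rescaled by a log-sigmoid transform ψ(t) = 2 ln(2σ(t)), where σ(t) = 1/(1 + e^{-t}), so that ψ(0) = 0 and ψ'(0) = 1; each step minimizes the LS Lagrangian L(x, λ, k) = f(x) − (1/k) Σ λ_i ψ(k c_i(x)) in x and then updates λ_i ← λ_i ψ'(k c_i(x)). A minimal one-variable, one-constraint illustration in Python, with the transform and update formulas assumed as above rather than taken from the paper:

```python
import math

def psi(t):
    """Log-sigmoid transform psi(t) = 2*ln(2*sigma(t)); psi(0)=0, psi'(0)=1."""
    # numerically stable form of 2*(ln 2 - ln(1 + e^{-t}))
    return 2.0 * (math.log(2.0) + min(t, 0.0) - math.log1p(math.exp(-abs(t))))

def dpsi(t):
    """psi'(t) = 2*sigma(-t) = 2/(1 + e^t), computed without overflow."""
    if t >= 0.0:
        e = math.exp(-t)
        return 2.0 * e / (1.0 + e)
    return 2.0 / (1.0 + math.exp(t))

def ls_multipliers(f_grad, c, c_grad, lam=1.0, k=10.0, outer=20):
    """Sketch of an LS multipliers iteration for
        min f(x)  s.t.  c(x) >= 0   (one variable, one constraint).
    Each outer step minimizes L(x) = f(x) - (lam/k)*psi(k*c(x)) by
    bisection on L'(x) (L is convex here), then rescales the multiplier."""
    x = 0.0
    for _ in range(outer):
        grad_L = lambda y: f_grad(y) - lam * dpsi(k * c(y)) * c_grad(y)
        lo, hi = -10.0, 10.0          # bracket assumed to contain the minimizer
        for _ in range(80):           # bisection on the increasing L'
            mid = 0.5 * (lo + hi)
            if grad_L(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        x = 0.5 * (lo + hi)
        lam *= dpsi(k * c(x))         # multiplier update: lam <- lam*psi'(k*c(x))
    return x, lam

# Example: min (x-2)^2 s.t. 1 - x >= 0; KKT solution x* = 1, lam* = 2.
x, lam = ls_multipliers(f_grad=lambda x: 2.0 * (x - 2.0),
                        c=lambda x: 1.0 - x,
                        c_grad=lambda x: -1.0)
```

On this toy problem the multiplier sequence converges linearly to the KKT multiplier with the scaling parameter k held fixed, which is the kind of behavior the paper's rate estimates quantify under second-order optimality conditions.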
Keywords: log-sigmoid, multipliers method, duality, smoothing technique

References
A. Auslender, R. Cominetti and M. Haddou, Asymptotic analysis for penalty and barrier methods in convex and linear programming, Mathematics of Operations Research 22 (1) (1997) 43–62.
A. Ben-Tal, I. Uzefovich and M. Zibulevsky, Penalty/barrier multiplier methods for minmax and constrained smooth convex programs, Research Report, Optimization Laboratory, Technion, Israel (1992) pp. 1–16.
A. Ben-Tal and M. Zibulevsky, Penalty–barrier methods for convex programming problems, SIAM J. Optimization 7 (1997) 347–366.
D.P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods (Academic Press, New York, 1982).
M. Breitfeld and D. Shanno, Computational experience with modified log–barrier functions for nonlinear programming, Annals of Operations Research 62 (1996) 439–464.
C. Chen and O.L. Mangasarian, Smoothing methods for convex inequalities and linear complementarity problems, Mathematical Programming 71 (1995) 51–69.
A.V. Fiacco and G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, SIAM Classics in Applied Mathematics (SIAM, Philadelphia, PA, 1990).
A.N. Iusem, B. Svaiter and M. Teboulle, Entropy–like proximal methods in convex programming, Mathematics of Operations Research 19 (1994) 790–814.
B.W. Kort and D.P. Bertsekas, Multiplier methods for convex programming, in: Proceedings of the 1973 IEEE Conference on Decision and Control, San Diego, CA (1973) pp. 428–432.
A. Melman and R. Polyak, The Newton modified barrier method for quadratic programming problems, Annals of Operations Research 62 (1996) 465–519.
R. Polyak, Modified barrier functions (theory and methods), Mathematical Programming 54 (1992) 177–222.
R. Polyak, I. Griva and J. Sobieski, The Newton log-sigmoid method in constrained optimization, in: A Collection of Technical Papers, 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization 3 (1998) pp. 2193–2201.
R. Polyak and M. Teboulle, Nonlinear rescaling and proximal–like methods in convex optimization, Mathematical Programming 76 (1997) 965–984.
R.T. Rockafellar, Convex Analysis (Princeton University Press, Princeton, NJ, 1970).
M. Teboulle, On ψ-divergence and its applications, in: Systems and Management Science by Extremal Methods, eds. F.Y. Philips and J.J. Rousseau (Kluwer Academic, 1992) pp. 255–289.
M. Teboulle, Entropic proximal mappings with application to nonlinear programming, Mathematics of Operations Research 17 (1992) 670–690.
P. Tseng and D.P. Bertsekas, On the convergence of the exponential multiplier method for convex programming, Mathematical Programming 60 (1993) 1–19.
Copyright information
© Kluwer Academic Publishers 2001