On the method of multipliers for mathematical programming problems

Contributed Papers, Journal of Optimization Theory and Applications

Abstract

In this paper, the numerical solution of the basic problem of mathematical programming is considered. This is the problem of minimizing a function f(x) subject to a constraint ϕ(x) = 0. Here, f is a scalar, x is an n-vector, and ϕ is a q-vector, with q < n.

The approach employed is based on the introduction of the augmented penalty function W(x, λ, k) = f(x) + λ^T ϕ(x) + k ϕ^T(x) ϕ(x). Here, the q-vector λ is an approximation to the Lagrange multiplier, and the scalar k > 0 is the penalty constant.
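
As a concrete illustration of this construction, the following Python/NumPy sketch evaluates W on a hypothetical instance with n = 2 and q = 1; the particular f and ϕ are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical instance (n = 2, q = 1), chosen for illustration only:
# minimize f(x) = x1^2 + 2*x2^2  subject to  phi(x) = x1 + x2 - 1 = 0.
def f(x):
    return x[0]**2 + 2.0 * x[1]**2

def phi(x):
    return np.array([x[0] + x[1] - 1.0])

def W(x, lam, k):
    """Augmented penalty function: W = f + lam^T phi + k phi^T phi."""
    p = phi(x)
    return f(x) + lam @ p + k * (p @ p)

print(W(np.array([0.5, 0.5]), np.zeros(1), 10.0))  # W at a trial point
```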

Previously, the augmented penalty function W(x, λ, k) was used by Hestenes in his method of multipliers. In Hestenes' version, the method of multipliers involves cycles, in each of which the multiplier and the penalty constant are held constant. After the minimum of the augmented penalty function is achieved in any given cycle, the multiplier λ is updated, while the penalty constant k is held unchanged.
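
This cycle structure can be sketched as follows. At the inner minimum, ∇f + ϕ_x^T(λ + 2kϕ) = 0, so λ + 2kϕ(x) is the natural next multiplier estimate; with the penalty written as kϕ^Tϕ above, the update consistent with Hestenes' rule is λ ← λ + 2kϕ(x). The inner loop below uses plain gradient steps on W as a stand-in for the paper's ordinary-gradient algorithm; step sizes and iteration counts are illustrative assumptions.

```python
import numpy as np

# Same illustrative instance as above (hypothetical, not from the paper):
# minimize f(x) = x1^2 + 2*x2^2  subject to  phi(x) = x1 + x2 - 1 = 0.
def f(x):        return x[0]**2 + 2.0 * x[1]**2
def grad_f(x):   return np.array([2.0 * x[0], 4.0 * x[1]])
def phi(x):      return np.array([x[0] + x[1] - 1.0])
def jac_phi(x):  return np.array([[1.0, 1.0]])        # q x n Jacobian of phi

def grad_W(x, lam, k):
    # grad_x W = grad f + J^T lam + 2k J^T phi
    J, p = jac_phi(x), phi(x)
    return grad_f(x) + J.T @ lam + 2.0 * k * (J.T @ p)

def mm1(x, lam, k=10.0, cycles=20, inner=200, step=1e-2):
    """Method MM-1 as described: cycles of inner minimization of W with
    lam and k held fixed, then a multiplier update with k unchanged."""
    for _ in range(cycles):
        for _ in range(inner):                  # stand-in for the paper's
            x = x - step * grad_W(x, lam, k)    # ordinary-gradient algorithm
        lam = lam + 2.0 * k * phi(x)            # multiplier update
    return x, lam

x, lam = mm1(np.zeros(2), np.zeros(1))
print(x, lam, phi(x))   # x should approach (2/3, 1/3), lam -> -4/3
```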

In this paper, two modifications of the method of multipliers are presented in order to improve its convergence characteristics. The improved convergence is achieved by (i) increasing the updating frequency, so that the number of iterations in a cycle is shortened to ΔN = 1 for the ordinary-gradient algorithm and the modified-quasilinearization algorithm and to ΔN = n for the conjugate-gradient algorithm, (ii) imbedding Hestenes' updating rule for the multiplier λ into a one-parameter family and determining the scalar parameter β so that the error in the optimum condition is minimized, and (iii) updating the penalty constant k so as to cause some desirable effect in the ordinary-gradient algorithm, the conjugate-gradient algorithm, and the modified-quasilinearization algorithm. For the sake of identification, Hestenes' method of multipliers is called Method MM-1, the modification including (i) and (ii) is called Method MM-2, and the modification including (i), (ii), and (iii) is called Method MM-3.
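
The abstract specifies only the form of these modifications, so the following is a hedged reading rather than the paper's exact rules: if the one-parameter family is taken as λ(β) = λ + 2βkϕ(x), and the error in the optimum condition is measured by ‖∇f(x) + ϕ_x^T(x) λ(β)‖, then that error is quadratic in β and the minimizing β has a closed form. The k-update heuristic noted in the comments is likewise an assumption.

```python
import numpy as np

def beta_star(x, lam, k, grad_f, phi, jac_phi):
    """Choose beta to minimize || grad f(x) + J^T lam(beta) ||^2 for the
    family lam(beta) = lam + 2*beta*k*phi(x); the residual is g + beta*h,
    so the minimizer is available in closed form."""
    J, p = jac_phi(x), phi(x)
    g = grad_f(x) + J.T @ lam        # optimality-condition error at beta = 0
    h = 2.0 * k * (J.T @ p)          # how the error moves as beta varies
    denom = h @ h
    return -(g @ h) / denom if denom > 0.0 else 1.0  # beta = 1 recovers MM-1

# MM-2-style step (Delta N = 1 for gradient-type iterations):
#   beta = beta_star(x, lam, k, grad_f, phi, jac_phi)
#   lam  = lam + 2.0 * beta * k * phi(x)
#
# MM-3 additionally updates k; as an assumed heuristic (the paper's rule is
# not given in the abstract), k can be enlarged whenever ||phi(x)|| fails
# to decrease fast enough across cycles.
```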

Evaluation of the theory is accomplished with seven numerical examples. The first example pertains to a quadratic function subject to linear constraints. The remaining examples pertain to non-quadratic functions subject to nonlinear constraints. Each example is solved with the ordinary-gradient algorithm, the conjugate-gradient algorithm, and the modified-quasilinearization algorithm, which are employed in conjunction with Methods MM-1, MM-2, and MM-3.
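
As a usage sketch in the spirit of the first example (the paper's actual test problems and results are not reproduced here), the toy quadratic instance from the earlier snippets can be run through an MM-2-style loop with ΔN = 1:

```python
import numpy as np

# Toy instance of the first example's class (quadratic f, linear phi); this
# is NOT the paper's test problem. Reuses grad_f, phi, jac_phi, grad_W, and
# beta_star from the sketches above.
def mm2(x, lam, k=10.0, cycles=1000, step=1e-2):
    """MM-2-style loop: one gradient step per cycle (Delta N = 1),
    followed by the beta-scaled multiplier update."""
    for _ in range(cycles):
        x = x - step * grad_W(x, lam, k)
        beta = beta_star(x, lam, k, grad_f, phi, jac_phi)
        lam = lam + 2.0 * beta * k * phi(x)
    return x, lam

x, lam = mm2(np.zeros(2), np.zeros(1))
print("x =", x, "lam =", lam, "constraint error =", np.abs(phi(x)))
```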

The numerical results show that (a) for a given penalty constant k, Method MM-2 generally exhibits faster convergence than Method MM-1, (b) in both Methods MM-1 and MM-2, the number of iterations for convergence has a minimum with respect to k, and (c) the number of iterations for convergence of Method MM-3 is close to the minimum with respect to k of the number of iterations for convergence of Method MM-2. In this light, Method MM-3 has very desirable characteristics.


References

  1. Miele, A., Huang, H. Y., and Heideman, J. C., Sequential Gradient-Restoration Algorithm for the Minimization of Constrained Functions, Ordinary and Conjugate Gradient Versions, Journal of Optimization Theory and Applications, Vol. 4, No. 4, 1969.

  2. Miele, A., Heideman, J. C., and Levy, A. V., Combined Conjugate Gradient-Restoration Algorithm for Mathematical Programming Problems, Ricerche di Automatica, Vol. 2, No. 2, 1971.

  3. Miele, A., and Levy, A. V., Modified Quasilinearization and Optimal Initial Choice of the Multipliers, Part 1, Mathematical Programming Problems, Journal of Optimization Theory and Applications, Vol. 6, No. 5, 1970.

  4. Kelley, H. J., Method of Gradients, Optimization Techniques, Edited by G. Leitmann, Academic Press, New York, 1962.


  5. Bryson, A. E., Jr., and Ho, Y. C., Applied Optimal Control, Blaisdell Publishing Company, Waltham, Massachusetts, 1969.


  6. Fiacco, A. V., and McCormick, G. P., Nonlinear Programming: Sequential Unconstrained Minimization Techniques, John Wiley and Sons, New York, 1968.


  7. Hestenes, M. R., Multiplier and Gradient Methods, Journal of Optimization Theory and Applications, Vol. 4, No. 5, 1969.

  8. Miele, A., Moseley, P. E., and Cragg, E. E., Numerical Experiments on Hestenes' Method of Multipliers for Mathematical Programming Problems, Rice University, Aero-Astronautics Report No. 85, 1971.

  9. Miele, A., Coggins, G. M., and Levy, A. V., Updating Rules for the Penalty Constant Used in the Penalty Function Method for Mathematical Programming Problems, Rice University, Aero-Astronautics Report No. 90, 1972.

  10. Miele, A., Moseley, P. E., and Cragg, E. E., A Modification of the Method of Multipliers for Mathematical Programming Problems, Techniques of Optimization, Edited by A. V. Balakrishnan, Academic Press, New York, 1972.


  11. Tripathi, S. S., and Narendra, K. S., Constrained Optimization Using Multiplier Methods, Journal of Optimization Theory and Applications, Vol. 9, No. 1, 1972.

Additional information

This research was supported by the National Science Foundation, Grant No. GP-32453. The authors are indebted to Messieurs E. E. Cragg and A. Esterle for computational assistance.

Cite this article

Miele, A., Moseley, P.E., Levy, A.V. et al. On the method of multipliers for mathematical programming problems. J Optim Theory Appl 10, 1–33 (1972). https://doi.org/10.1007/BF00934960
