A globally and quadratically convergent algorithm for general nonlinear programming problems



Abstract

This paper presents an algorithm for the minimization of a nonlinear objective function subject to nonlinear inequality and equality constraints. The proposed method has two distinguishing properties: under weak assumptions it converges to a Kuhn-Tucker point of the problem, and under somewhat stronger assumptions the rate of convergence is quadratic. The method is similar to a recent method proposed by Rosen in that it begins by using a penalty function approach to generate a point in a neighborhood of the optimum and then switches to Robinson's method. The new method has two features not shared by Rosen's method. First, a correct choice of the penalty function parameters is constructed automatically, thus guaranteeing global convergence to a stationary point. Second, the linearly constrained subproblems solved by Robinson's method normally contain linear inequality constraints, whereas the method presented here requires only linear equality constraints. That is, in a certain sense, the new method “knows” which of the linear inequality constraints will be active in the subproblems. The subproblems may thus be solved in an especially efficient manner.

Preliminary computational results are presented.
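
The paper itself is not reproduced on this page, but the two-phase structure described in the abstract can be sketched schematically. The problem statement, the exact-penalty function P(x, r), the parameter r, and the active-set notation A_k below are illustrative assumptions chosen to match the abstract's description; they are not the authors' exact definitions.

% Schematic sketch only; the notation (P, r, A_k) is assumed, not taken from the paper.
% Problem class: minimize a nonlinear objective subject to nonlinear
% inequality and equality constraints.
\[
  \min_{x \in \mathbb{R}^n} f(x)
  \quad \text{s.t.} \quad
  g_i(x) \le 0 \ (i = 1,\dots,m), \qquad
  h_j(x) = 0 \ (j = 1,\dots,p).
\]
% Phase 1 (assumed form): minimize an exact penalty function, with the
% parameter r > 0 chosen automatically by the method, to reach a
% neighborhood of a Kuhn-Tucker point:
\[
  P(x, r) = f(x) + r \Bigl( \sum_{i=1}^{m} \max\{0,\, g_i(x)\}
            + \sum_{j=1}^{p} \lvert h_j(x) \rvert \Bigr).
\]
% Phase 2 (assumed form): Robinson-type iterations in which the inequality
% constraints predicted to be active (index set A_k) enter the subproblem
% only through linearized *equality* constraints at the current iterate x_k.
% (Robinson's subproblem objective also involves multiplier estimates;
% only the constraint structure is sketched here.)
\[
  \min_{x} f(x)
  \quad \text{s.t.} \quad
  g_i(x_k) + \nabla g_i(x_k)^{\top}(x - x_k) = 0 \ (i \in A_k), \qquad
  h_j(x_k) + \nabla h_j(x_k)^{\top}(x - x_k) = 0 \ (j = 1,\dots,p).
\]

Because each such subproblem carries only linear equality constraints, it can be handled by an equality-constrained solver (for example, a null-space or reduced-gradient step), which is the efficiency point made in the abstract.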

Summary

This paper describes an algorithm for minimizing a nonlinear function subject to nonlinear inequality and equality constraints. The proposed method has the property that, under weak assumptions, it converges to a Kuhn-Tucker point of the optimization problem under consideration and, under stronger assumptions, exhibits a quadratic rate of convergence. Like a method recently proposed by Rosen, the algorithm uses a penalty function to compute a point near the optimal solution and then switches to Robinson's method. The new method has two features that Rosen's procedure does not. First, the correct value of the penalty-function parameter is found automatically. Second, the subproblems solved with Robinson's method contain only linear equality constraints. The subproblems can therefore be solved especially easily.

Preliminary numerical results are reported.


References

  1. Asaadi, J.: A computational comparison of some non-linear programs. Mathematical Programming 4, 144–154 (1973).

  2. Best, M. J., Ritter, K.: A class of accelerated conjugate direction methods for linearly constrained minimization problems. Mathematics of Computation 30, 478–504 (1976).

  3. Bräuninger, J.: A modification of Robinson's algorithm for general nonlinear programming problems requiring only approximate solutions of subproblems with linear equality constraints. In: Optimization Techniques (Lecture Notes in Control and Information Sciences, Vols. 6, 7) (Stoer, J., ed.). Berlin-Heidelberg-New York: Springer 1977.

  4. Colville, A. R.: A comparative study on nonlinear programming codes. IBM New York Scientific Center, Report No. 320-2949, 1968.

  5. Himmelblau, D. M.: Applied nonlinear programming. New York: McGraw-Hill 1972.

  6. McCormick, G. P.: Penalty function versus nonpenalty function methods for constrained nonlinear programming problems. Mathematical Programming 1, 217–238 (1971).

  7. Murtagh, B. A., Saunders, M. A.: Nonlinear programming for large, sparse systems. Technical Report SOL 76-15, Systems Optimization Laboratory, Stanford, August 1976.

  8. Ritter, K.: A superlinearly convergent method for minimization problems with linear inequality constraints. Mathematical Programming 4, 44–71 (1973).

  9. Robinson, S. M.: A quadratically-convergent algorithm for general nonlinear programming problems. Mathematical Programming 3, 145–156 (1972).

  10. Rosen, J. B.: Two-phase algorithm for nonlinear constraint problems. Technical Report 77-8, Computer Science Department, University of Minnesota, 1977.


Additional information

Sponsored by the United States Army under Contract No. DAAG29-75-C-0024, by the National Research Council of Canada under Research Grant A8189, and by the National Science Foundation under Grant No. MCS74-20584 A02.


Cite this article

Best, M.J., Bräuninger, J., Ritter, K. et al. A globally and quadratically convergent algorithm for general nonlinear programming problems. Computing 26, 141–153 (1981). https://doi.org/10.1007/BF02241780

