Mathematical Programming, Volume 45, Issue 1–3, pp 547–566

A tolerant algorithm for linearly constrained optimization calculations

  • M. J. D. Powell


Two extreme techniques for choosing a search direction in a linearly constrained optimization calculation are to take account of all the constraints, or to use an active set method that satisfies selected constraints as equations while ignoring the rest. We prefer an intermediate method that treats all inequality constraints with "small" residuals as inequalities with zero right-hand sides, and that disregards the other inequality conditions. Thus the step along the search direction is not restricted by any constraints with small residuals, which can improve efficiency greatly, particularly when some constraints are nearly degenerate. We study the implementation, convergence properties and performance of an algorithm that employs this idea. The implementation considerations include the choice and automatic adjustment of the tolerance that defines the "small" residuals, the calculation of the search directions, and the updating of second derivative approximations. The main convergence theorem imposes no conditions on the constraints except boundedness of the feasible region. The numerical results indicate that a Fortran implementation of our algorithm is much more reliable than the software that was tested by Hock and Schittkowski (1981). Therefore the algorithm seems very suitable for general use, and it is particularly appropriate for semi-infinite programming calculations whose many linear constraints come from discretizations of continua.
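The tolerance idea described above can be sketched in a few lines: constraints whose residuals fall below the tolerance form the "tolerant" set, which is treated as inequalities with zero right-hand sides when the search direction is chosen, while only the remaining constraints restrict the step length. The following Python sketch is illustrative only; the function names, the dense representation of the constraints A·x ≥ b, and the fixed tolerance are assumptions for exposition, not details of Powell's TOLMIN implementation.

```python
import math

def partition_constraints(A, b, x, tol):
    """Split the inequality constraints A[i] . x >= b[i] by residual size.
    Indices with residual <= tol form the 'tolerant' set (treated as
    inequalities with zero right-hand sides when choosing the search
    direction); the rest are ignored at that stage."""
    residuals = [sum(aij * xj for aij, xj in zip(row, x)) - bi
                 for row, bi in zip(A, b)]
    small = [i for i, r in enumerate(residuals) if r <= tol]
    large = [i for i, r in enumerate(residuals) if r > tol]
    return residuals, small, large

def max_step(A, residuals, large, d):
    """Largest step t keeping A[i] . (x + t*d) >= b[i] for the
    large-residual constraints only; the small-residual constraints
    do not restrict the step."""
    t = math.inf
    for i in large:
        slope = sum(aij * dj for aij, dj in zip(A[i], d))
        if slope < 0.0:  # constraint i is approached along d
            t = min(t, -residuals[i] / slope)
    return t
```

In the algorithm itself the tolerance is adjusted automatically as the calculation proceeds, which is one of the implementation considerations the paper studies; this sketch only shows why a step along the search direction is never blocked by a constraint with a small residual.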

Key words

Convergence theory, degeneracies, linear constraints, matrix factorizations, nonlinear optimization, semi-infinite programming




References

  1. J. Bracken and G.P. McCormick, Selected Applications of Nonlinear Programming (Wiley, New York, 1968).
  2. R. Fletcher, Practical Methods of Optimization (Wiley, Chichester, 1988).
  3. P.E. Gill, W. Murray and M.H. Wright, Practical Optimization (Academic Press, New York, 1981).
  4. D. Goldfarb and A. Idnani, "A numerically stable dual method for solving strictly convex quadratic programs," Mathematical Programming 27 (1983) 1–33.
  5. W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Vol. 187 (Springer, Berlin, 1981).
  6. E. Polak, Computational Methods in Optimization: A Unified Approach (Academic Press, New York, 1971).
  7. M.J.D. Powell, "Updating conjugate directions by the BFGS formula," Mathematical Programming 38 (1987) 29–46.
  8. M.J.D. Powell, "On a matrix factorization for linearly constrained optimization problems," Report DAMTP/1988/NA9, University of Cambridge (Cambridge, UK, 1988).
  9. M.J.D. Powell, "TOLMIN: a Fortran package for linearly constrained optimization calculations," Report DAMTP/1989/NA2, University of Cambridge (Cambridge, UK, 1989).
  10. P. Wolfe, "Convergence conditions for ascent methods," SIAM Review 11 (1969) 226–235.

Copyright information

© North-Holland 1989

Authors and Affiliations

  • M. J. D. Powell
    Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England
