Abstract
This paper develops a reduced Hessian method for solving inequality constrained optimization problems. At each iteration, the proposed method solves a quadratic subproblem, made always feasible by introducing a slack variable, to generate a search direction; the steplength is then computed by a standard line search along that direction using the ℓ∞ penalty function. A new update criterion is proposed for generating the quasi-Newton matrices, whose dimensions may vary, that approximate the reduced Hessian of the Lagrangian. Global convergence is established under mild conditions, and local R-linear and superlinear convergence are shown under certain additional conditions.
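To make the line-search step concrete, the following is a minimal sketch (not the paper's actual algorithm) of an Armijo-type backtracking line search on an ℓ∞ penalty function, assuming a generic objective `f`, an inequality-constraint function `g` with the convention g(x) ≤ 0, a penalty parameter `mu`, and a predicted reduction `pred` supplied by the QP subproblem; all names and defaults here are illustrative assumptions.

```python
import numpy as np

def l_inf_penalty(f, g, x, mu):
    # phi(x) = f(x) + mu * max(0, max_i g_i(x)):
    # the l_inf penalty adds mu times the largest constraint violation.
    return f(x) + mu * max(0.0, float(np.max(g(x))))

def armijo_backtrack(f, g, x, d, mu, pred, beta=0.5, sigma=1e-4, max_iter=30):
    # Standard backtracking line search on the l_inf penalty function.
    # `d` is the search direction from the QP subproblem and `pred` is a
    # (negative) predicted reduction used as the descent measure.
    phi0 = l_inf_penalty(f, g, x, mu)
    alpha = 1.0
    for _ in range(max_iter):
        if l_inf_penalty(f, g, x + alpha * d, mu) <= phi0 + sigma * alpha * pred:
            return alpha
        alpha *= beta  # halve the steplength and try again
    return alpha
```

For example, minimizing f(x) = x₁² + x₂² subject to 1 − x₁ ≤ 0 from x = (2, 0) with direction d = (−1, 0) accepts the full step α = 1, since the penalty decreases from 4 to 1 while the iterate stays feasible.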
This work was partially supported by the National Natural Science Foundation of China via grant 10671060.
Liu, TW. A reduced Hessian SQP method for inequality constrained optimization. Comput Optim Appl 49, 31–59 (2011). https://doi.org/10.1007/s10589-009-9285-y