Abstract
We develop a new notion of second-order complementarity with respect to the tangent subspace associated with second-order necessary optimality conditions, via the introduction of so-called tangent multipliers. We prove that, around a local minimizer, a second-order stationarity residual can be driven to zero while controlling the growth of the Lagrange multipliers and tangent multipliers; this yields a new second-order optimality condition, free of constraint qualifications, that is stronger than previous conditions associated with the global convergence of algorithms. We prove that second-order variants of augmented Lagrangian methods (under an additional smoothness assumption based on the Łojasiewicz inequality) and of interior point methods generate sequences satisfying our optimality condition. We also present a companion minimal constraint qualification, weaker than those known for second-order methods, that ensures the usual global convergence results to a classical second-order stationary point. Finally, our optimality condition naturally suggests a definition of second-order stationarity suitable for the computation of iteration complexity bounds and for the definition of stopping criteria.
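For background, the classical second-order stationary point that the abstract refers to can be sketched as follows; this is the textbook definition for a smooth problem, not the paper's new condition, and the subspace form of the critical cone shown here is one common variant.

```latex
% Problem: minimize f(x) subject to h(x) = 0, g(x) <= 0, with f, h, g twice differentiable.
% First-order (KKT) conditions with Lagrange multipliers (\lambda, \mu):
\nabla f(x^*) + \sum_i \lambda_i \nabla h_i(x^*) + \sum_j \mu_j \nabla g_j(x^*) = 0,
\qquad \mu \ge 0, \qquad \mu_j \, g_j(x^*) = 0 \ \text{for all } j.
% Second-order condition: the Hessian of the Lagrangian is positive
% semidefinite on the tangent subspace of the active constraints:
d^{\top} \nabla^2_{xx} L(x^*, \lambda, \mu)\, d \ \ge\ 0
\quad \text{for all } d \ \text{such that} \
\nabla h_i(x^*)^{\top} d = 0 \ \text{for all } i
\ \text{and} \ \nabla g_j(x^*)^{\top} d = 0 \ \text{for all active } j.
```

The paper's contribution concerns how approximate versions of these conditions can be satisfied by algorithmic sequences without constraint qualifications.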
Acknowledgements
This work was supported by FAPESP (Grants 2013/05475-7 and 2016/02092-8) and CNPq.
Haeser, G. A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms. Comput Optim Appl 70, 615–639 (2018). https://doi.org/10.1007/s10589-018-0005-3