Abstract
We propose a primal-dual interior-point method (IPM) that converges to second-order stationary points (SOSPs) of nonlinear semidefinite optimization problems, abbreviated as NSDPs. To the best of our knowledge, existing algorithms for NSDPs ensure convergence only to first-order stationary points, such as Karush–Kuhn–Tucker points, and come without a worst-case iteration complexity. The proposed method generates a sequence approximating SOSPs while minimizing a primal-dual merit function for NSDPs by using scaled gradient directions and directions of negative curvature. Under some assumptions, the generated sequence accumulates at an SOSP with a worst-case iteration complexity. The same result is obtained for a primal IPM with a slight modification. Finally, our numerical experiments show the benefits of using directions of negative curvature in the proposed method.
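As a generic illustration of the negative-curvature ingredient (not the paper's actual algorithm), a direction of negative curvature can be extracted from the smallest eigenpair of a symmetrized Hessian. The following minimal NumPy sketch is our own illustrative construction; the function name and tolerance are hypothetical choices:

```python
import numpy as np

def negative_curvature_direction(H, grad, tol=1e-8):
    """Return a unit direction d with d^T H d < -tol, oriented so that
    d^T grad <= 0, or None if H is (nearly) positive semidefinite."""
    # Eigendecomposition of the symmetrized Hessian; eigh returns
    # eigenvalues in ascending order, so lam[0] is the smallest.
    lam, V = np.linalg.eigh((H + H.T) / 2)
    if lam[0] >= -tol:
        return None  # no usable negative curvature
    d = V[:, 0]
    # Flip the sign so the direction does not increase the objective.
    if d @ grad > 0:
        d = -d
    return d

# A saddle point of f(x, y) = y^2 - x^2 at the origin:
H = np.array([[-2.0, 0.0], [0.0, 2.0]])
d = negative_curvature_direction(H, grad=np.zeros(2))
print(d @ H @ d)  # negative: moving along d decreases f to second order
```

Interior-point methods of the kind proposed in the paper combine such directions with scaled gradient steps to escape saddle points of the merit function.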
Data availability statement
The source code used in the numerical experiments is available at https://github.com/Mathematical-Informatics-5th-Lab/Decreasing-NC-PDIPM. The authors declare no conflict of interest.
Notes
A scaled gradient direction means the steepest-descent direction premultiplied by a symmetric positive definite matrix.
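For instance (a generic illustration in standard notation, not the specific scaling used in the paper), with a symmetric positive definite matrix \(B \in \mathbb {S}^n_{++}\) a scaled gradient direction takes the form \(d = -B\nabla f(x)\); choosing \(B = I\) recovers steepest descent, while \(B = (\nabla ^2 f(x) + \lambda I)^{-1}\), with \(\lambda \) large enough to make the matrix positive definite, gives a Newton-like scaling.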
When \(\mathop {\text {Ker}}(X(\bar{x}))=\{0\}\), that is, when \(X(\bar{x})\in \mathbb {S}^m_{++}\), we write \(T_{\mathbb {S}^m_{+}}\left( {X(\bar{x})}\right) = \mathbb {S}^m\).
If \(\varLambda \ne O\), such \(\overline{U}_1\) exists.
The subscripts “PB” and “BC” abbreviate “primal-barrier” and “barrier-complementarity”, respectively.
We suppose that \(\psi _{\mu ,\nu } \left( {x_1, Z_1}\right) - \psi _{\mu ,\nu }^*\) does not converge to 0 as \(\mu \rightarrow 0\).
Acknowledgements
We are grateful to the two anonymous reviewers and the associate editor for their valuable comments and suggestions. We also thank Dr. Hiroshi Yamashita and Professor Hiroshi Yabe for helpful discussions.
This work was supported by the Japan Society for the Promotion of Science KAKENHI Grant Numbers 19H04069, 20K19748, 20H04145, and 23H03351. The work was conducted while the first author was a student at the University of Tokyo and is unrelated to his present affiliation.
Proofs of Lemma 5.2
1.1 Proof of Lemma 5.2 (1)
Proof
Notice that
Since \(\Vert X^{-1}\Vert _{\textrm{F}}\le \Vert X^{-1}_{\ell }\Vert _{\textrm{F}}\) follows from Proposition 5.1 (1), inequality (8) yields \(\bigl \Vert X^{-1}_{\ell }-X^{-1}\bigr \Vert _{\textrm{F}} \le 2 L_0 \Vert X^{-1}_{\ell }\Vert _{\textrm{F}}^2 \Vert x^{\ell }-x\Vert . \) The proof is complete. \(\square \)
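Remark: the estimate above rests on the standard inverse-perturbation identity, stated here for reference (inequality (8), not reproduced in this appendix, supplies a Lipschitz-type bound on \(\Vert X_{\ell }-X\Vert _{\textrm{F}}\)): \( X^{-1}_{\ell }-X^{-1} = X^{-1}_{\ell }\left( {X - X_{\ell }}\right) X^{-1}, \) so that, by submultiplicativity of the Frobenius norm, \( \bigl \Vert X^{-1}_{\ell }-X^{-1}\bigr \Vert _{\textrm{F}} \le \Vert X^{-1}_{\ell }\Vert _{\textrm{F}}\,\Vert X_{\ell }-X\Vert _{\textrm{F}}\,\Vert X^{-1}\Vert _{\textrm{F}}. \)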
1.2 Proof of Lemma 5.2 (2)
Proof
Using (17), we obtain
In what follows, we evaluate upper bounds of (A), (B), and (C).
Evaluation of (A) : From (3), we have
Evaluation of (B) :
where the third inequality follows from (9).
Evaluation of (C) : Let
Since \((C)\le (D) + (E)\), it suffices to bound (D) and (E) from above. With regard to (D), we obtain from (9) that
Similarly, it follows from (5) that
Therefore
Lastly, from (A-1)–(A-3), we obtain
The proof is complete. \(\square \)
1.3 Proof of Lemma 5.2 (3)
Proof
Note that
where the equality follows from (18). Since \(\Vert Z^{-1}\Vert _{\textrm{F}} \le 2\Vert Z_\ell ^{-1}\Vert _{\textrm{F}}\) from Proposition 5.1 (2), we have
The proof is complete. \(\square \)
1.4 Proof of Lemma 5.2 (4)
Proof
From (19), we have
where
In what follows, we evaluate \((A_{ij})\), \((B_{ij})\), and \((C_{ij})\).
Evaluation of \((A_{ij})\): From (4), we have
Evaluation of \((B_{ij})\): \((B_{ij})\) can be written as
Here, \((D_{ij})\) can be bounded as
In the above, \((F_{ij})\) can be bounded as
Taking the sum of \((F_{ij})\) over i, j together with (5) and (9) yields
For \((G_{ij})\), we obtain, from Lemma 5.2 (1),
which together with (5) yields
By combining inequalities (A-5) and (A-6), the sum of \((D_{ij})\) is bounded from above by
With regard to \((E_{ij})\), we have
By taking the sum of \(|(E_{ij})|\) over i, j and using (5), Lemma 5.2 (1), and Proposition 5.1 (1), we have
Combining inequalities (A-7) and (A-8) yields
Evaluation of \((C_{ij})\): Note that \((C_{ij})\) can be written as
By the Cauchy–Schwarz inequality, we have
With regard to the sum of \(|(K_{ij})|\),
where the inequality follows from (6), (7), and Lemma 5.2 (1). Moreover, combining (A-10) and (A-11) yields
Lastly, we have
which together with (A-4), (A-9), and (A-12) implies
The proof is complete. \(\square \)
Cite this article
Arahata, S., Okuno, T. & Takeda, A. Complexity analysis of interior-point methods for second-order stationary points of nonlinear semidefinite optimization problems. Comput Optim Appl 86, 555–598 (2023). https://doi.org/10.1007/s10589-023-00501-3
Keywords
- Nonlinear semidefinite programming
- Primal-dual interior-point method
- Negative curvature direction
- Second-order stationary points