IPRSDP: a primal-dual interior-point relaxation algorithm for semidefinite programming

Abstract

In this paper we propose an efficient primal-dual interior-point relaxation algorithm, called IPRSDP, based on a smoothing barrier augmented Lagrangian for solving semidefinite programming problems. The IPRSDP algorithm has three advantages over classical interior-point methods. First, IPRSDP does not require the iterates to be positive definite. Consequently, it can easily be combined with the warm-start techniques used in many combinatorial optimization applications, which require the solution of a sequence of semidefinite programming problems. Second, the search direction of IPRSDP is symmetric by construction, so no symmetrization procedure is required. Third, with the introduction of the smoothing barrier augmented Lagrangian function, IPRSDP provides an explicit form of the Schur complement matrix, and the cost of forming this matrix is comparable to or lower than that of many existing search directions. The global convergence of IPRSDP is established under suitable assumptions. Numerical experiments on the SDPLIB test set demonstrate the efficiency of IPRSDP.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgements

The second author was supported by the NSFC grants (nos. 12071108 and 11671116). The third author was supported by the Natural Science Foundation of China (nos. 12021001, 11991021, 11991020, and 11971372) and the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDA27000000).

Author information

Corresponding author

Correspondence to Yu-Hong Dai.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Some proofs

Proof of Lemma 3.5

"\(\Longleftarrow \)": First assume that \(X\succ 0,\, S\succ 0,\, XS = \mu I\), which implies that \(XS+SX=2\mu I\). Therefore,

$$\begin{aligned} (S-\rho X)^2+ 4\rho \mu I&= S^2-\rho (SX+XS)+\rho ^2X^2+4\rho \mu I \\ &= S^2+\rho (SX+XS)+\rho ^2 X^2\\ &= (S+\rho X)^2, \end{aligned}$$

and

$$\begin{aligned} Z(X,S;\mu ,\rho ) -X&= \dfrac{((S-\rho X)^2+4\rho \mu I)^{\frac{1}{2}}-(S+\rho X)}{2\rho } \\ &= \dfrac{S+\rho X-(S+\rho X)}{2\rho } = 0. \end{aligned}$$

"\(\Longrightarrow \)": Assume that \(Z(X,S;\mu ,\rho ) -X = 0\), which results in \(((S-\rho X)^2+4\rho \mu I)^{\frac{1}{2}}=S+\rho X\). After squaring both sides of the equation, we get

$$\begin{aligned} (S-\rho X)^2+4\rho \mu I = (S+\rho X)^2,\,\, S+\rho X \succ 0. \end{aligned}$$

This is equivalent to

$$\begin{aligned} XS+SX=2\mu I,\,\, S+\rho X \succ 0. \end{aligned}$$
(61)

Since X is symmetric, it admits an eigenvalue decomposition \(X=Q^{\top }\Omega Q\), where \(Q\in {\mathbb {R}}^{n\times n}\) is an orthogonal matrix and \(\Omega = \text {Diag}(\omega _1,\dots ,\omega _n)\). Then (61) can be reformulated as

$$\begin{aligned} \Omega QSQ^{\top }+QSQ^{\top }\Omega = 2\mu I,\,\, QSQ^{\top }+\rho \Omega \succ 0. \end{aligned}$$
(62)

Define \(\Xi = QSQ^{\top }=(\xi _{ij})_{1\le i,j \le n}\). Then (62) can be equivalently written as

$$\begin{aligned} \Omega \Xi +\Xi \Omega = 2\mu I,\,\, \Xi +\rho \Omega \succ 0, \end{aligned}$$
(63)

or written in a component form:

$$\begin{aligned} (\omega _{i}+\omega _{j})\,\xi _{ij}=\begin{cases} 2\mu , & \text {if } i = j,\\ 0, & \text {if } i \ne j, \end{cases} \quad \forall \, i,j= 1,\dots ,n, \qquad \Xi +\rho \Omega \succ 0. \end{aligned}$$
(64)

For \(i=j\), (64) gives \(2\omega _i \xi _{ii} = 2\mu \) and \(\rho \omega _i+ \xi _{ii} > 0\). Since \(\omega _i\xi _{ii} = \mu > 0\), the scalars \(\omega _i\) and \(\xi _{ii}\) have the same sign, and the positivity of \(\rho \omega _i+ \xi _{ii}\) then forces \(\omega _i > 0\) for all \(i = 1,\dots ,n\). As a result, X is a symmetric positive definite matrix. We can demonstrate that S is a symmetric positive definite matrix in a similar manner.

Next we prove that \(XS = \mu I\). Since \(\omega _i+\omega _j>0\) for all i and j, (64) implies \(\xi _{ij}=0\) for all \(i\ne j\), so the matrix \(\Xi \) is diagonal. Due to \(\Omega \Xi +\Xi \Omega = 2\mu I\), we get \(\Omega \Xi = \mu I\). Multiplying this equation on the left by \(Q^{\top }\) and on the right by Q yields \(Q^{\top } \Omega \Xi Q = Q^{\top } \Omega QSQ^{\top }Q = XS = \mu I\). \(\square \)
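
To illustrate Lemma 3.5 numerically, the following sketch (a minimal NumPy check, not the authors' code) evaluates \(Z(X,S;\mu ,\rho ) = \frac{1}{2\rho }\big (((S-\rho X)^2+4\rho \mu I)^{\frac{1}{2}}-(S-\rho X)\big )\), obtained by adding X to both sides of the first display of this proof, through an eigendecomposition of \(S-\rho X\), and verifies that \(Z=X\) whenever \(X,S\succ 0\) and \(XS=\mu I\).

import numpy as np

def smoothing_Z(X, S, mu, rho):
    """Evaluate Z(X, S; mu, rho) via an eigendecomposition of the symmetric matrix S - rho*X."""
    M = S - rho * X
    lam, D = np.linalg.eigh(M)                               # M = D diag(lam) D^T
    phi = (np.sqrt(lam**2 + 4.0 * rho * mu) - lam) / (2.0 * rho)
    return D @ np.diag(phi) @ D.T

rng = np.random.default_rng(0)
n, mu, rho = 5, 0.3, 2.0
A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)                                  # X symmetric positive definite
S = mu * np.linalg.inv(X)                                    # forces X S = S X = mu I
print(np.linalg.norm(smoothing_Z(X, S, mu, rho) - X))        # close to machine precision: Z coincides with X

Working with the eigendecomposition of the symmetric matrix \(S-\rho X\) avoids forming a general matrix square root and mirrors the eigenvalue formula used in the proof of Lemma 5.3 below.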

Proof of Theorem 4.2

The directional derivative of the merit function \(\phi _{(\mu ^{(k)},\rho ^{(k)})}(w)\) at the point \((\mu ^{(k)},w^{(k)})\) along the direction \((\Delta \mu ^{(k)},\Delta w^{(k)})\) is

$$\begin{aligned} \phi ^{'}_{(\mu ^{(k)},\rho ^{(k)})}(w^{(k)};\Delta \mu ^{(k)},\Delta w^{(k)}) =-2\phi _{(\mu ^{(k)},\rho ^{(k)})}(w^{(k)}). \end{aligned}$$

The Taylor expansion of \({\phi }_{(\mu ^{(k)}+\alpha \Delta \mu ^{(k)},\rho ^{(k)})}(w^{(k)}+\alpha \Delta w^{(k)})\) with respect to \(\alpha \) at \(\alpha = 0\) shows that

$$\begin{aligned} &\phi _{(\mu ^{(k)}+\alpha \Delta \mu ^{(k)},\rho ^{(k)})}(w^{(k)}+\alpha \Delta w^{(k)})-\phi _{(\mu ^{(k)},\rho ^{(k)})}(w^{(k)})\\ &\quad =\alpha \,\phi '_{(\mu ^{(k)},\rho ^{(k)})}(w^{(k)};\Delta \mu ^{(k)},\Delta w^{(k)})+o(\alpha )\\ &\quad =-2\tau \alpha \,\phi _{(\mu ^{(k)},\rho ^{(k)})}(w^{(k)})-2(1-\tau )\alpha \,\phi _{(\mu ^{(k)},\rho ^{(k)})}(w^{(k)})+o(\alpha ). \end{aligned}$$

Since \(\tau < 1\) and \(\phi _{(\mu ^{(k)},\rho ^{(k)})}(w^{(k)})>0\), the term \(-2(1-\tau )\alpha \phi _{(\mu ^{(k)},\rho ^{(k)})}(w^{(k)})\) dominates \(o(\alpha )\) for all sufficiently small \(\alpha > 0\), and hence the inequality (44) holds for all such \(\alpha \). \(\square \)
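
For intuition only, here is a generic backtracking sketch (not the authors' implementation). It assumes, purely for illustration, that inequality (44) is an Armijo-type sufficient-decrease test requiring the new merit value to be at most \((1-2\tau \alpha )\) times the old one, which is consistent with the splitting used in the expansion above; the merit function and search direction below are hypothetical placeholders.

def backtrack(phi, w, dw, tau=0.25, beta=0.5, alpha=1.0, max_iter=50):
    """Shrink alpha until the (assumed) sufficient-decrease test holds; Theorem 4.2
    guarantees acceptance once alpha is small enough, since the directional
    derivative of the merit function equals -2*phi(w)."""
    f0 = phi(w)
    for _ in range(max_iter):
        if phi(w + alpha * dw) <= (1.0 - 2.0 * tau * alpha) * f0:
            return alpha
        alpha *= beta
    return alpha

# Toy usage with the scalar merit function phi(w) = w**2 and the direction dw = -w.
print(backtrack(lambda w: w * w, 2.0, -2.0))                 # accepts alpha = 1.0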

Proof of Lemma 5.3

Due to Lemma 5.2, we have \(\phi _{(\mu ^{(k)},\rho ^{(k)})}(w^{(k)})\le \phi _{(\mu ^{(0)},\rho ^{(0)})}(w^{(0)})\) for all \(k = 1,2,\dots \). By the monotonicity of the merit function (27), we have

$$\begin{aligned} \frac{1}{2}\Vert Z^{(k)}-X^{(k)}\Vert _F^2\le \phi _{(\mu ^{(0)},\rho ^{(0)})}(w^{(0)}), \end{aligned}$$

which together with Theorem 5.1 implies that \(\{Z^{(k)}\}\) is bounded. Since \(S^{(k)}-\rho ^{(k)} X^{(k)}\) is symmetric, it admits an eigenvalue decomposition \(S^{(k)}-\rho ^{(k)} X^{(k)}=\displaystyle \sum _{i=1}^n\lambda _i^{(k)} d_i^{(k)}(d_i^{(k)})^\top \), where \(\Vert d_i^{(k)}\Vert = 1,\ i=1,\dots ,n\). Then

$$\begin{aligned} Y^{(k)}=\displaystyle \sum _{i=1}^n\dfrac{\sqrt{(\lambda _i^{(k)})^2+4\rho ^{(k)}\mu ^{(k)}}+\lambda _i^{(k)}}{2\rho ^{(k)}}d_i^{(k)}(d_i^{(k)})^{\top }. \end{aligned}$$

Therefore it is sufficient to prove that \(\dfrac{\sqrt{(\lambda _i^{(k)})^2+4\rho ^{(k)}\mu ^{(k)}}+\lambda _i^{(k)}}{2\rho ^{(k)}}, i =1,\dots ,n\) are bounded. For any \( i =1,\dots ,n\), we have

$$\begin{aligned} \dfrac{\sqrt{(\lambda _i^{(k)})^2+4\rho ^{(k)}\mu ^{(k)}}+\lambda _i^{(k)}}{2\rho ^{(k)}}&\le \dfrac{\vert \lambda _i^{(k)}\vert }{\rho ^{(k)}} + \sqrt{\dfrac{\mu ^{(k)}}{\rho ^{(k)}}}\\ &\le \dfrac{\Vert S^{(k)}-\rho ^{(k)} X^{(k)}\Vert _F}{\rho ^{(k)}} + \sqrt{\dfrac{\mu ^{(k)}}{\rho ^{(k)}}}\\ &\le \dfrac{\Vert S^{(k)}\Vert _F}{\rho ^{(k)}}+\Vert X^{(k)}\Vert _F+ \sqrt{\dfrac{\mu ^{(k)}}{\rho ^{(k)}}}\\ &\le \dfrac{\max \{1,\Vert X^{(k)}\Vert _F\}}{\sigma }+\Vert X^{(k)}\Vert _F+ \sqrt{\dfrac{\mu ^{(0)}}{\rho ^{(0)}}}. \end{aligned}$$

Thus, \(\dfrac{\sqrt{(\lambda _i^{(k)})^2+4\rho ^{(k)}\mu ^{(k)}}+\lambda _i^{(k)}}{2\rho ^{(k)}}, i=1,\dots ,n\) are bounded under Assumption 3.1, which implies that the sequence \(\{Y^{(k)}\}\) is bounded.

Since \(\rho ^{(k)}Y^{(k)}Z^{(k)}=\mu ^{(k)}I\), we get \(\Vert Y^{(k)}Z^{(k)}\Vert _F=\sqrt{n}\,\dfrac{\mu ^{(k)}}{\rho ^{(k)}}\). Combining this with the boundedness of \(\{Y^{(k)}\}\) and \(\{Z^{(k)}\}\) yields the desired inequalities. \(\square \)
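
The identity \(\rho ^{(k)}Y^{(k)}Z^{(k)}=\mu ^{(k)}I\) used above can be checked numerically from the eigenvalue formula for \(Y^{(k)}\): with \(M=S-\rho X\), that formula reads \(Y=\frac{1}{2\rho }\big ((M^2+4\rho \mu I)^{\frac{1}{2}}+M\big )\), while the proof of Lemma 3.5 gives \(Z=\frac{1}{2\rho }\big ((M^2+4\rho \mu I)^{\frac{1}{2}}-M\big )\), so Y and Z commute and \(\rho YZ=\mu I\), whence \(\Vert YZ\Vert _F=\sqrt{n}\,\mu /\rho \). The short sketch below (not from the paper) verifies this for arbitrary symmetric X and S, assuming \(Z^{(k)}=Z(X^{(k)},S^{(k)};\mu ^{(k)},\rho ^{(k)})\).

import numpy as np

rng = np.random.default_rng(1)
n, mu, rho = 6, 0.1, 1.5
X = rng.standard_normal((n, n)); X = (X + X.T) / 2           # arbitrary symmetric X
S = rng.standard_normal((n, n)); S = (S + S.T) / 2           # arbitrary symmetric S
lam, D = np.linalg.eigh(S - rho * X)                         # eigendecomposition of M = S - rho*X
root = np.sqrt(lam**2 + 4.0 * rho * mu)
Y = D @ np.diag((root + lam) / (2.0 * rho)) @ D.T            # eigenvalue formula for Y
Z = D @ np.diag((root - lam) / (2.0 * rho)) @ D.T            # eigenvalue formula for Z
print(np.linalg.norm(rho * Y @ Z - mu * np.eye(n)))          # close to machine precision
print(np.linalg.norm(Y @ Z), np.sqrt(n) * mu / rho)          # both equal sqrt(n)*mu/rho up to rounding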

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhang, RJ., Liu, XW. & Dai, YH. IPRSDP: a primal-dual interior-point relaxation algorithm for semidefinite programming. Comput Optim Appl 88, 1–36 (2024). https://doi.org/10.1007/s10589-024-00558-8
