
Domain-Driven Solver (DDS) Version 2.1: a MATLAB-based software package for convex optimization problems in domain-driven form

  • Full Length Paper
  • Published in: Mathematical Programming Computation

Abstract

Domain-Driven Solver (DDS) is a MATLAB-based software package for convex optimization. The current version of DDS accepts every combination of the following function/set constraints: (1) symmetric cones (LP, SOCP, and SDP); (2) quadratic constraints that are SOCP representable; (3) direct sums of an arbitrary collection of 2-dimensional convex sets defined as the epigraphs of univariate convex functions (including as special cases geometric programming and entropy programming); (4) generalized Koecher (power) cone; (5) epigraphs of matrix norms (including as a special case minimization of nuclear norm over a linear subspace); (6) vector relative entropy; (7) epigraphs of quantum entropy and quantum relative entropy; and (8) constraints involving hyperbolic polynomials. The infeasible-start primal-dual algorithms used for DDS rely heavily on duality theory and properties of Legendre-Fenchel conjugate functions, and are designed to rigorously determine the status of a given problem. We discuss some important implementation details and techniques we used to improve the robustness and efficiency of the software. The appendix contains many examples.

Data availability

Enquiries about data availability should be directed to the authors.

Code availability

The full code was made available for review. Reference [24] in this published article is the link to the publicly available code.

Notes

  1. For these results, the code provided in the Hypatia package for CBLIB using JuMP [14] is used, with the parameter default_tol_relax = 1. By changing this parameter to default_tol_relax = 1000, Hypatia solves the problems batch and enpro48 approximately, but still fails for isil01 and LogExpCR-n500-m1600.

References

  1. Amini, N., Brändén, P.: Non-representable hyperbolic matroids. Adv. Math. 334, 417–449 (2018)


  2. Boyd, S., Kim, S.J., Vandenberghe, L., Hassibi, A.: A tutorial on geometric programming. Optim. Eng. 8(1), 67–127 (2007)


  3. Brändén, P.: Polynomials with the half-plane property and matroid theory. Adv. Math. 216(1), 302–320 (2007)


  4. Brändén, P.: Obstructions to determinantal representability. Adv. Math. 226(2), 1202–1212 (2011)


  5. Brändén, P.: Hyperbolicity cones of elementary symmetric polynomials are spectrahedral. Optim. Lett. 8(5), 1773–1782 (2014)


  6. Burton, S., Vinzant, C., Youm, Y.: A real stable extension of the Vamos matroid polynomial. arXiv preprint arXiv:1411.2038 (2014)

  7. Chandrasekaran, V., Shah, P.: Relative entropy optimization and its applications. Math. Program. 161(1–2), 1–32 (2017)


  8. Chares, R.: Cones and interior-point algorithms for structured convex optimization involving powers and exponentials. Ph.D. thesis, Université Catholique de Louvain, Louvain-la-Neuve (2008)

  9. Choe, Y.B., Oxley, J.G., Sokal, A.D., Wagner, D.G.: Homogeneous multivariate polynomials with the half-plane property. Adv. Appl. Math. 32(1–2), 88–187 (2004)


  10. Coey, C., Kapelevich, L., Vielma, J.P.: Performance enhancements for a generic conic interior point algorithm. Math. Program. Comput. pp. 1–49 (2022)

  11. Coey, C., Kapelevich, L., Vielma, J.P.: Solving natural conic formulations with Hypatia.jl. INFORMS J. Comput. 34(5), 2686–2699 (2022)


  12. Dahl, J., Andersen, E.D.: A primal-dual interior-point algorithm for nonsymmetric exponential-cone optimization. Math. Program. 194(1–2), 341–370 (2022)


  13. Davis, C.: All convex invariant functions of Hermitian matrices. Arch. Math. 8(4), 276–278 (1957)


  14. Dunning, I., Huchette, J., Lubin, M.: JuMP: a modeling language for mathematical optimization. SIAM Rev. 59(2), 295–320 (2017). https://doi.org/10.1137/15M1020575


  15. Fang, S.C., Rajasekera, J.R., Tsao, H.S.J.: Entropy optimization and mathematical programming, vol. 8. Springer, Berlin (1997)


  16. Fawzi, H., Saunderson, J.: Optimal self-concordant barriers for quantum relative entropies. arXiv preprint arXiv:2205.04581 (2022)

  17. Fawzi, H., Saunderson, J., Parrilo, P.A.: Semidefinite approximations of the matrix logarithm. Found. Comput. Math. 19(2), 259–296 (2019)


  18. Faybusovich, L., Tsuchiya, T.: Matrix monotonicity and self-concordance: how to handle quantum entropy in optimization problems. Optim. Lett. 11, 1513–1526 (2017)


  19. Faybusovich, L., Zhou, C.: Long-step path-following algorithm in quantum information theory: Some numerical aspects and applications. arXiv preprint arXiv:1906.00037 (2020)

  20. Friberg, H.A.: CBLIB 2014: a benchmark library for conic mixed-integer and continuous optimization. Math. Program. Comput. 8(2), 191–214 (2016)


  21. Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming, version 2.2. (2020) http://cvxr.com/cvx

  22. Güler, O.: Hyperbolic polynomials and interior point methods for convex programming. Math. Oper. Res. 22(2), 350–377 (1997)


  23. Hiai, F., Petz, D.: Introduction to matrix analysis and applications. Springer, Berlin (2014)


  24. Karimi, M., Tunçel, L.: mehdi-karimi-math/DDS: DDS 2.1 (2023). https://doi.org/10.5281/zenodo.8339473

  25. Karimi, M., Tunçel, L.: Primal-dual interior-point methods for domain-driven formulations. Math. Oper. Res. 45(2), 591–621 (2020)


  26. Karimi, M., Tunçel, L.: Status determination by interior-point methods for convex optimization problems in Domain-Driven form. Math. Program. 194(1–2), 937–974 (2022)


  27. Lewis, A.S.: The mathematics of eigenvalue optimization. Math. Program. 97(1–2), 155–176 (2003)


  28. MOSEK ApS: The MOSEK optimization toolbox for MATLAB manual. Version 9.0. (2019). http://docs.mosek.com/9.0/toolbox/index.html

  29. Myklebust, T.G.J.: On primal-dual interior-point algorithms for convex optimisation. Ph.D. thesis, Department of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo (2015)

  30. Nemirovski, A., Tunçel, L.: Cone-free primal-dual path-following and potential reduction polynomial time interior-point methods. Math. Program. 102, 261–294 (2005)


  31. Nesterov, Y.: Lectures on convex optimization. Springer, Berlin (2018)


  32. Nesterov, Y., Nemirovski, A.: Interior-Point Polynomial Algorithms in Convex Programming. SIAM Series in Applied Mathematics, SIAM, Philadelphia (1994)

  33. Papp, D., Yildiz, S.: Sum-of-squares optimization without semidefinite programming. SIAM J. Optim. 29(1), 822–851 (2019)


  34. Papp, D., Yıldız, S.: Alfonso: Matlab package for nonsymmetric conic optimization. INFORMS J. Comput. 34(1), 11–19 (2021)


  35. Pataki, G., Schmieta, S.: The DIMACS library of semidefinite-quadratic-linear programs. Preliminary draft, Computational Optimization Research Center, Columbia University, New York, Tech. Rep. (2002)

  36. Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3), 471–501 (2010)


  37. Renegar, J.: Hyperbolic programs, and their derivative relaxations. Found. Comput. Math. 6(1), 59–79 (2006)


  38. Roy, S., Xiao, L.: On self-concordant barriers for generalized power cones. Optim. Lett. 16(2), 681–694 (2022)


  39. Skajaa, A., Ye, Y.: A homogeneous interior-point algorithm for nonsymmetric convex conic optimization. Math. Program. 150(2), 391–422 (2015)


  40. Toh, K.C., Todd, M.J., Tütüncü, R.H.: SDPT3: a MATLAB software package for semidefinite programming, version 1.3. Optim. Methods Softw. 11(1–4), 545–581 (1999)

  41. Tropp, J.A.: An introduction to matrix concentration inequalities. Foundations and Trends® in Machine Learning 8(1-2), 1–230 (2015)

  42. Tunçel, L.: Generalization of primal-dual interior-point methods to convex optimization problems in conic form. Found. Comput. Math. 1(3), 229–254 (2001)


  43. Tunçel, L., Nemirovski, A.: Self-concordant barriers for convex approximations of structured convex sets. Found. Comput. Math. 10(5), 485–525 (2010)


  44. Wagner, D.G., Wei, Y.: A criterion for the half-plane property. Discret. Math. 309(6), 1385–1390 (2009)


  45. Wang, W., Lütkenhaus, N.: OpenQKDSecurity platform. (2021) https://github.com/nlutkenhaus/openQKDsecurity

  46. Winick, A., Lütkenhaus, N., Coles, P.J.: Reliable numerical key rates for quantum key distribution. Quantum 2, 77 (2018)


  47. Zinchenko, Y.: On hyperbolicity cones associated with elementary symmetric polynomials. Optim. Lett. 2(3), 389–402 (2008)


Acknowledgements

The authors wish to thank the associate editor and the anonymous reviewers, whose insightful comments and careful reading helped improve the presentation. The first author, Mehdi Karimi, extends heartfelt gratitude to his wife, Mehrnoosh, for her loving support throughout the five-year journey of developing this code.

Funding

Research of the authors was supported in part by Discovery Grants from NSERC and by U.S. Office of Naval Research under award numbers: N00014-12-1-0049, N00014-15-1-2171 and N00014-18-1-2078.

Author information

Corresponding author

Correspondence to Mehdi Karimi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

How to add different function/set constraints

1.1 Linear programming (LP) and second-order cone programming (SOCP)

Suppose we want to add \(\ell \) LP constraints of the form

$$\begin{aligned} A_L^i x + b_L^i \ge 0, \ \ \ i\in \{1,\ldots ,\ell \}, \end{aligned}$$
(107)

where \(A_L^i\) is an \(m_L^i\)-by-n matrix, as the kth block of constraints. Then, we define

$$\begin{aligned}{} & {} \texttt {A\{k,1\}}=\left[ \begin{array}{c} A_L^1 \\ \vdots \\ A_L^\ell \end{array}\right] , \ \ \ \texttt {b\{k,1\}}=\left[ \begin{array}{c} b_L^1 \\ \vdots \\ b_L^\ell \end{array}\right] \nonumber \\{} & {} \texttt {cons\{k,1\}='LP'}, \ \ \ \texttt {cons\{k,2\}}=[m_L^1, \ldots , m_L^\ell ]. \end{aligned}$$
(108)

Similarly, to add \(\ell \) SOCP constraints of the form

$$\begin{aligned} \Vert A_S^i x + b_S^i\Vert \le (g_S^i)^\top x + d_S^i, \ \ \ i\in \{1,\ldots ,\ell \}, \end{aligned}$$
(109)

where \(A_S^i\) is an \(m_S^i\)-by-n matrix for \(i\in \{1,\ldots ,\ell \}\), as the kth block, we define

$$\begin{aligned}{} & {} \texttt {A\{k,1\}}=\left[ \begin{array}{c} (g_S^1)^\top \\ A_S^1 \\ \vdots \\ (g_S^\ell )^\top \\ A_S^\ell \end{array}\right] , \ \ \ \texttt {b\{k,1\}}=\left[ \begin{array}{c} d_S^1 \\ b_S^1 \\ \vdots \\ d_S^\ell \\ b_S^\ell \end{array}\right] \nonumber \\{} & {} \texttt {cons\{k,1\}='SOCP'}, \ \ \ \texttt {cons\{k,2\}}=[m_S^1, \ldots , m_S^\ell ]. \end{aligned}$$
(110)

Let us see an example:

Example 2

Suppose we are given the problem:

$$\begin{aligned}&\min&c^\top x \nonumber \\&\text {s.t.}&[-2,1] x \le 1, \nonumber \\{} & {} \left\| \left[ \begin{array}{cc} 2 &{} 1 \\ 1 &{} 3 \end{array} \right] x + \left[ \begin{array}{c} 3 \\ 4 \end{array} \right] \right\| _2 \le 2. \end{aligned}$$
(111)

Then we define

$$\begin{aligned}{} & {} \texttt {cons\{1,1\}='LP'}, \ \ \texttt {cons\{1,2\}=[1]}, \ \ \\{} & {} \quad \texttt {A\{1,1\}}=\left[ \begin{array}{cc} 2&-1 \end{array}\right] , \ \ \ \texttt {b\{1,1\}}=\left[ \begin{array}{c} 1 \end{array}\right] , \\{} & {} \texttt {cons\{2,1\}='SOCP'}, \ \ \texttt {cons\{2,2\}=[2]}, \ \ \\{} & {} \quad \texttt {A\{2,1\}}=\left[ \begin{array}{cc} 0 &{} 0 \\ 2 &{} 1 \\ 1 &{} 3 \end{array}\right] , \ \ \ \texttt {b\{2,1\}}=\left[ \begin{array}{c} 2 \\ 3 \\ 4 \end{array}\right] . \end{aligned}$$
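As a quick sanity check outside DDS, we can verify numerically that this data encodes (111). The following Python/NumPy snippet (illustrative only, not part of the package) rebuilds both constraints from the blocks above at the point \(x=(-1,-1)^\top \), which is feasible for both:

```python
import numpy as np

# Point to test; feasible for problem (111).
x = np.array([-1.0, -1.0])

# LP block: A{1,1} x + b{1,1} >= 0 must agree with [-2, 1] x <= 1.
A_lp, b_lp = np.array([[2.0, -1.0]]), np.array([1.0])
lp_slack = (A_lp @ x + b_lp)[0]
assert np.isclose(lp_slack, 1.0 - (-2.0 * x[0] + 1.0 * x[1]))  # same slack
assert lp_slack >= 0                                           # LP-feasible

# SOCP block: the first row carries (g, d) = (0, 2); the rest carry (A_S, b_S).
# Feasibility means ||A_S x + b_S|| <= g^T x + d.
A_socp = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0]])
b_socp = np.array([2.0, 3.0, 4.0])
v = A_socp @ x + b_socp
assert np.linalg.norm(v[1:]) <= v[0]
```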

DDS also accepts constraints defined by the rotated second order cones:

$$\begin{aligned} \{(z,t,s) \in {\mathbb {R}}^n \oplus {\mathbb {R}}\oplus {\mathbb {R}}: \Vert z\Vert ^2 \le ts, \ t \ge 0, \ s \ge 0\}. \end{aligned}$$
(112)

The abbreviation we use is ’SOCPR’. To add \(\ell \) rotated SOCP constraints of the form

$$\begin{aligned}{} & {} \Vert A_S^i x + b_S^i\Vert _2^2 \le ((g_S^i)^\top x + d_S^i)((\bar{g}_S^i)^\top x + {{\bar{d}}}_S^i), \ \ \ i\in \{1,\ldots ,\ell \}, \nonumber \\{} & {} (g_S^i)^\top x + d_S^i\ge 0, \ \ ({{\bar{g}}}_S^i)^\top x + {{\bar{d}}}_S^i \ge 0, \end{aligned}$$
(113)

where \(A_S^i\) is an \(m_S^i\)-by-n matrix for \(i \in \{1,\ldots ,\ell \}\), as the kth block, we define

$$\begin{aligned}{} & {} \texttt {A\{k,1\}}=\left[ \begin{array}{c} (g_S^1)^\top \\ (\bar{g}_S^1)^\top \\ A_S^1 \\ \vdots \\ (g_S^\ell )^\top \\ (\bar{g}_S^\ell )^\top \\ A_S^\ell \end{array}\right] , \ \ \ \texttt {b\{k,1\}}=\left[ \begin{array}{c} d_S^1 \\ {{\bar{d}}}_S^1 \\ b_S^1 \\ \vdots \\ d_S^\ell \\ {{\bar{d}}}_S^\ell \\ b_S^\ell \end{array}\right] \nonumber \\{} & {} \texttt {cons\{k,1\}='SOCPR'}, \ \ \ \texttt {cons\{k,2\}}=[m_S^1, \ldots , m_S^\ell ]. \end{aligned}$$
(114)
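Membership in the rotated cone (112) is easy to test directly; the following small Python helper (illustrative, with a hypothetical name) checks the defining inequalities:

```python
import numpy as np

def in_rotated_soc(z, t, s, tol=1e-12):
    """Check (z, t, s) against the rotated cone (112): ||z||^2 <= t*s, t,s >= 0."""
    return t >= -tol and s >= -tol and np.dot(z, z) <= t * s + tol

# ||(1,1)||^2 = 2 = 2*1, so this point lies on the boundary of the cone.
assert in_rotated_soc(np.array([1.0, 1.0]), 2.0, 1.0)
# With t = s = 1 the inequality 2 <= 1 fails.
assert not in_rotated_soc(np.array([1.0, 1.0]), 1.0, 1.0)
```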

1.2 Semidefinite programming (SDP)

Consider \(\ell \) SDP constraints in standard inequality (linear matrix inequality (LMI)) form:

$$\begin{aligned} F^i_0+x_1 F^i_1+ \cdots +x_n F^i_n \succeq 0, \ \ \ i\in \{1,\ldots ,\ell \}. \end{aligned}$$
(115)

Here, the \(F^i_j\)’s are \(n_i\)-by-\(n_i\) symmetric matrices. These constraints are given in matrix form; to fit them into our setup, we need to write them in vector form. DDS has two internal functions, sm2vec and vec2sm. sm2vec takes an n-by-n symmetric matrix and changes it into a vector in \({\mathbb {R}}^{n^2}\) by stacking its columns on top of one another in order. vec2sm changes a vector back into a symmetric matrix, such that

$$\begin{aligned} \texttt {vec2sm(sm2vec(X))=X}. \end{aligned}$$
(116)

By this definition, it is easy to check that for any pair of n-by-n symmetric matrices X and Y we have

$$\begin{aligned} \langle X,Y \rangle = \texttt {sm2vec(X)}^ \top \texttt {sm2vec(Y)}. \end{aligned}$$
(117)
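The semantics of these two helpers, and the identities (116) and (117), can be mirrored in a few lines of Python/NumPy (the names mimic the MATLAB functions; this is a sketch of the documented behavior, not DDS code):

```python
import numpy as np

def sm2vec(X):
    """Stack the columns of an n-by-n symmetric matrix into a vector in R^{n^2}."""
    return X.flatten(order='F')          # column-major, like MATLAB's X(:)

def vec2sm(v):
    """Inverse reshape: a vector of length n^2 back to an n-by-n matrix."""
    n = int(round(len(v) ** 0.5))
    return v.reshape((n, n), order='F')

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); X = M + M.T   # random symmetric matrices
N = rng.standard_normal((4, 4)); Y = N + N.T

# Identity (116): vec2sm inverts sm2vec.
assert np.allclose(vec2sm(sm2vec(X)), X)
# Identity (117): <X, Y> = Tr(XY) equals the dot product of the vectorizations.
assert np.isclose(np.trace(X @ Y), sm2vec(X) @ sm2vec(Y))
```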

To give (115) to DDS as the kth input block, we define:

$$\begin{aligned}{} & {} \texttt {A\{k,1\}}:=\left[ \begin{array}{c} \texttt {sm2vec}(F^1_1), \cdots , \texttt {sm2vec}(F^1_n) \\ \vdots \\ \texttt {sm2vec}(F^\ell _1), \cdots , \texttt {sm2vec}(F^\ell _n)\end{array} \right] ,\nonumber \\ {}{} & {} \ \ \ \texttt {b\{k,1\}}:=\left[ \begin{array}{c}\texttt {sm2vec}(F^1_0)\\ \vdots \\ \texttt {sm2vec}(F^\ell _0) \end{array} \right] , \nonumber \\{} & {} \texttt {cons\{k,1\}='SDP'}, \ \ \ \texttt {cons\{k,2\}}=[n_1, \ldots , n_\ell ]. \end{aligned}$$
(118)

The s.c. barrier used in DDS for SDP is the well-known function \(-\ln (\det (X))\) defined on the convex cone of symmetric positive definite matrices.

Example 3

Assume that we want to find scalars \(x_1\), \(x_2\), and \(x_3\) such that \(x_1+x_2+x_3 \ge 1\) and the maximum eigenvalue of \(A_0+x_1A_1+x_2A_2+x_3A_3\) is minimized, where

$$\begin{aligned}{} & {} A_0=\left[ \begin{array}{ccc}2&{} -0.5&{} -0.6 \\ -0.5 &{} 2 &{} 0.4 \\ -0.6 &{} 0.4 &{} 3 \end{array} \right] , \ A_1=\left[ \begin{array}{ccc}0&{} 1&{} 0 \\ 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 \end{array} \right] , \ A_2=\left[ \begin{array}{ccc}0&{} 0&{} 1 \\ 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 \end{array} \right] , \\{} & {} A_3=\left[ \begin{array}{ccc}0&{} 0&{} 0 \\ 0 &{} 0 &{} 1 \\ 0 &{} 1 &{} 0 \end{array} \right] . \end{aligned}$$

We can write this problem as

$$\begin{aligned}&\min&t \nonumber \\&s.t.&-1+x_1+x_2+x_3 \ge 0, \nonumber \\{} & {} tI-(A_0+x_1A_1+x_2A_2+x_3A_3) \succeq 0. \end{aligned}$$
(119)

To solve this problem, we define:

$$\begin{aligned}{} & {} \texttt {cons\{1,1\}='LP'}, \ \ \texttt {cons\{1,2\}}=[1], \ \ \\{} & {} \quad \texttt {cons\{2,1\}='SDP'}, \ \ \texttt {cons\{2,2\}}=[3], \\{} & {} \texttt {A\{1,1\}}=\left[ \begin{array}{cccc} 1&1&1&0 \end{array} \right] , \ \ \ \texttt {b\{1,1\}}=\left[ \begin{array}{c} -1 \end{array} \right] , \\{} & {} \texttt {A\{2,1\}}=\left[ \begin{array}{cccc} 0&{}0&{}0&{}1 \\ -1&{}0&{}0&{}0 \\ 0&{}-1&{}0&{}0 \\ -1&{}0&{}0&{}0 \\ 0&{}0&{}0&{}1 \\ 0&{}0&{}-1&{}0 \\ 0 &{} -1&{} 0&{}0 \\ 0&{}0&{}-1&{}0 \\ 0&{}0&{}0&{}1 \end{array} \right] , \ \ \ \texttt {b\{2,1\}}=\left[ \begin{array}{c} -2 \\ 0.5 \\ 0.6 \\ 0.5 \\ -2 \\ -0.4 \\ 0.6 \\ -0.4 \\ -3 \end{array} \right] , \\{} & {} \texttt {c}=(0,0,0,1)^\top . \end{aligned}$$

Then DDS(c,A,b,cons) gives the answer \(x=(1.1265,0.6,-0.4,3)^\top \), which means the minimum largest eigenvalue is 3.
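We can verify this answer numerically outside DDS. The Python/NumPy check below (illustrative only) confirms that at \(x=(1.1265,0.6,-0.4)\) the linear constraint holds and the largest eigenvalue of \(A_0+x_1A_1+x_2A_2+x_3A_3\) is 3:

```python
import numpy as np

A0 = np.array([[2.0, -0.5, -0.6], [-0.5, 2.0, 0.4], [-0.6, 0.4, 3.0]])
A1 = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
A2 = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
A3 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])

x = np.array([1.1265, 0.6, -0.4])
H = A0 + x[0] * A1 + x[1] * A2 + x[2] * A3
lam_max = np.linalg.eigvalsh(H).max()   # eigvalsh: eigenvalues of a symmetric matrix

assert x.sum() >= 1                     # LP constraint x1 + x2 + x3 >= 1
assert np.isclose(lam_max, 3.0)         # reported minimum largest eigenvalue
```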

1.3 Quadratic constraints

Suppose we want to add the following constraints to DDS:

$$\begin{aligned} x^\top A_i^\top Q_i A_i x + b_i^\top x + d_i \le 0, \ \ \ i \in \{1,\ldots ,\ell \}, \end{aligned}$$
(120)

where each \(A_i\) is \(m_i\)-by-n with rank n, and \(Q_i \in \mathbb S^{m_i}\). To give constraints in (120) as input to DDS as the kth block, we define

$$\begin{aligned}{} & {} \texttt {A\{k,1\}}=\left[ \begin{array}{c} b_1^\top \\ A_1 \\ \vdots \\ b_\ell ^\top \\ A_\ell \end{array}\right] , \ \ \ \texttt {b\{k,1\}}=\left[ \begin{array}{c} d_1 \\ 0 \\ \vdots \\ d_\ell \\ 0 \end{array}\right] \nonumber \\{} & {} \texttt {cons\{k,1\}='QC'}, \ \ \texttt {cons\{k,2\}}=[m_1,\ldots ,m_\ell ], \nonumber \\{} & {} \texttt {cons\{k,3,i\}}=Q_i, \ \ i \in \{1,\ldots ,\ell \}. \end{aligned}$$
(121)

If cons{k,3} is not given as the input, DDS takes all \(Q_i\)’s to be identity matrices.
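To see why the layout in (121) works, note that applying the stacked block \([b_i^\top ; A_i]\) to x and adding \([d_i; 0]\) yields \(b_i^\top x + d_i\) in the first entry and \(A_i x\) in the rest, from which the quadratic form can be rebuilt. A small Python/NumPy check (illustrative, with random data and the default \(Q=I\)):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))         # a 3-by-2 A_i (rank 2)
b = rng.standard_normal(2)
d, Q = 0.7, np.eye(3)

block_A = np.vstack([b, A])                    # rows fed to A{k,1} for this constraint
block_b = np.concatenate([[d], np.zeros(3)])   # rows fed to b{k,1}

x = rng.standard_normal(2)
v = block_A @ x + block_b                      # first entry: b'x + d; rest: A x
rebuilt = v[1:] @ Q @ v[1:] + v[0]             # x'A'QAx + b'x + d, from the blocks
direct = x @ A.T @ Q @ A @ x + b @ x + d       # the quadratic in (120) directly
assert np.isclose(rebuilt, direct)
```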

1.4 Generalized power cone

To add generalized power cone constraints to DDS, we use the abbreviation ’GPC’. Therefore, if the kth block of constraints is GPC, we define cons{k,1}=’GPC’. Assume that we want to input the following \(\ell \) constraints to DDS:

$$\begin{aligned} (A_s^i x + b_s^i, A_u^i x+b_u^i) \in K^{(m_i,n_i)}_{\alpha ^i}, \ \ \ i \in \{1,\ldots ,\ell \}, \end{aligned}$$
(122)

where \(A_s^i\), \(b_s^i\), \(A_u^i\), and \(b_u^i\) are matrices and vectors of appropriate sizes, and \(K^{(m,n)}_{\alpha }\) is defined in (27). Then, to input these constraints as the kth block, we define cons{k,2} as a MATLAB cell array of size \(\ell \)-by-2, where each row represents one constraint. We then define:

$$\begin{aligned} \texttt {cons\{k,2\}\{i,1\}}= & {} [m_i \ \ n_i], \nonumber \\ \texttt {cons\{k,2\}\{i,2\}}= & {} \alpha ^i, \ \ \ \ i \in \{1,\ldots ,\ell \}. \end{aligned}$$
(123)

For matrices A and b, we define:

$$\begin{aligned} \texttt {A\{k,1\}}=\left[ \begin{array}{c} A_s^1 \\ A_u^1 \\ \vdots \\ A_s^\ell \\ A_u^\ell \end{array}\right] , \ \ \ \texttt {b\{k,1\}}=\left[ \begin{array}{c} b_s^1 \\ b_u^1 \\ \vdots \\ b_s^\ell \\ b_u^\ell \end{array}\right] . \end{aligned}$$
(124)

Example 4

Consider the following optimization problem with ’LP’ and ’GPC’ constraints:

$$\begin{aligned}&\min&-x_1-x_2-x_3 \nonumber \\&s.t.&\Vert x\Vert \le (x_1+3)^{0.3} (x_2+1)^{0.3} (x_3+2)^{0.4}, \nonumber \\{} & {} x_1,x_2,x_3 \ge 3. \end{aligned}$$
(125)

Then we define:

$$\begin{aligned}{} & {} \texttt {cons\{1,1\}='GPC'}, \ \ \texttt {cons\{1,2\}}=\{[3, \ \ 3], \ \ [0.3; 0.3; 0.4] \} \\{} & {} \texttt {A\{1,1\}}=\left[ \begin{array}{cc} \texttt {eye(3)} \\ \texttt {eye(3)} \end{array} \right] , \ \ \texttt {b\{1,1\}}= [3;1;2;0;0;0] \\{} & {} \texttt {cons\{2,1\}='LP'}, \ \ \texttt {cons\{2,2\}}=[3] \\{} & {} \texttt {A\{2,1\}}=\left[ -\texttt {eye(3)} \right] , \ \ \texttt {b\{2,1\}}= [3;3;3] \\{} & {} c=[-1,-1,-1]. \end{aligned}$$
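The cone argument of the GPC constraint in (125) is \((x_1+3,\,x_2+1,\,x_3+2;\,x)\). A small Python/NumPy membership test (illustrative; it assumes the standard definition \(K^{(m,n)}_{\alpha }=\{(s,u): s\ge 0,\ \prod _i s_i^{\alpha _i}\ge \Vert u\Vert _2\}\) from (27), which is not reproduced here):

```python
import numpy as np

def in_gpc(s, u, alpha, tol=1e-12):
    """Assumed generalized power cone membership: s >= 0, prod s_i^a_i >= ||u||."""
    return bool(np.all(s >= -tol) and np.prod(s ** alpha) >= np.linalg.norm(u) - tol)

alpha = np.array([0.3, 0.3, 0.4])
x = np.ones(3)                                   # sample point for the cone check
s, u = x + np.array([3.0, 1.0, 2.0]), x          # (x1+3, x2+1, x3+2; x)
assert in_gpc(s, u, alpha)                       # 4^0.3 * 2^0.3 * 3^0.4 >= ||x||
```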

1.5 Epigraphs of matrix norms

Assume that we have constraints of the form

$$\begin{aligned}{} & {} X-UU^\top \succeq 0, \nonumber \\{} & {} X=A_0+\sum _{i=1}^\ell x_i A_i, \nonumber \\{} & {} U=B_0+\sum _{i=1}^\ell x_i B_i, \end{aligned}$$
(126)

where \(A_i\), \(i \in \{1,\ldots ,\ell \}\), are m-by-m symmetric matrices, and \(B_i\), \(i \in \{1,\ldots ,\ell \}\), are m-by-n matrices. DDS has two internal functions m2vec and vec2m for converting matrices (not necessarily symmetric) to vectors and vice versa. For an m-by-n matrix Z, m2vec(Z,n) changes the matrix into a vector, and vec2m(v,m) reshapes a vector v of proper size into a matrix with m rows. The abbreviation we use for the epigraph of a matrix norm is MN. If the kth input block is of this type, cons{k,2} is an \(\ell \)-by-2 matrix, where \(\ell \) is the number of constraints of this type and each row is of the form \([m \ \ n]\). For each constraint of the form (126), the corresponding parts of A and b are defined as

$$\begin{aligned} \texttt {A\{k,1\}}=\left[ \begin{array}{ccc} \texttt {m2vec}(B_1,n) &{} \cdots &{} \texttt {m2vec}(B_\ell ,n) \\ \texttt {sm2vec}(A_1) &{} \cdots &{} \texttt {sm2vec}(A_\ell ) \end{array} \right] , \ \ \texttt {b\{k,1\}}=\left[ \begin{array}{c} \texttt {m2vec}(B_0,n) \\ \texttt {sm2vec}(A_0) \end{array} \right] .\nonumber \\ \end{aligned}$$
(127)

Example 5

Assume that we have matrices

$$\begin{aligned} U_0=\left[ \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 1 \end{array} \right] , \ \ U_1=\left[ \begin{array}{ccc} -1 &{} -1 &{} 1 \\ 0 &{} 0 &{} 1 \end{array} \right] , \ \ U_2=\left[ \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \end{array} \right] , \end{aligned}$$
(128)

and our goal is to solve

$$\begin{aligned}&\min&t \nonumber \\&s.t.&U U^\top \preceq tI, \nonumber \\{} & {} U=U_0+x_1U_1+x_2U_2. \end{aligned}$$
(129)

Then the input to DDS is defined as

$$\begin{aligned}{} & {} \texttt {cons\{1,1\}='MN'}, \ \ \texttt {cons\{1,2\}}=[2 \ \ 3], \\{} & {} \texttt {A\{1,1\}}=\left[ \begin{array}{ccc} \texttt {m2vec}(U_1,3) &{} \texttt {m2vec}(U_2,3) &{} \texttt {zeros}(6,1) \\ \texttt {zeros}(4,1) &{} \texttt {zeros}(4,1) &{} \texttt {sm2vec}(I_{2\times 2}) \end{array} \right] ,\\{} & {} \quad \ \ \texttt {b\{1,1\}}=\left[ \begin{array}{c} \texttt {m2vec}(U_0,3) \\ \texttt {zeros}(4,1) \end{array} \right] , \nonumber \\{} & {} \texttt {c}=[0,0,1]. \end{aligned}$$

1.6 Minimizing nuclear norm

Consider the optimization problem

$$\begin{aligned}&\min&\Vert X \Vert _* \nonumber \\&s.t.&\text {Tr}(U_iX)=c_i, \ \ \ i \in \{1,\ldots ,\ell \}, \end{aligned}$$
(130)

where X is n-by-m. DDS can solve the dual problem by defining

$$\begin{aligned}{} & {} \texttt {A\{1,1\}}=\left[ \begin{array}{ccc} \texttt {m2vec}(U_1,n) &{} \cdots &{} \texttt {m2vec}(U_\ell ,n) \\ \texttt {zeros}(m^2,1) &{} \cdots &{} \texttt {zeros}(m^2,1) \end{array} \right] , \nonumber \\ {}{} & {} \ \ \texttt {b\{1,1\}}=\left[ \begin{array}{c} \texttt {zeros}(mn,1) \\ \texttt {sm2vec}(I_{m\times m}) \end{array} \right] , \nonumber \\{} & {} \texttt {cons\{1,1\}='MN'}, \ \ \texttt {cons\{1,2\}}=[m \ \ n]. \end{aligned}$$
(131)

Then, if we run [x,y]=DDS(c,A,b,cons) and define V:=(vec2m(y{1}(1:m*n),m))\(^{\top }\), then V is an optimal solution for (130). In Sect. 14.4, we present numerical results for solving problem (130) and show that in cases where \(n \gg m\), DDS can be more efficient than SDP-based solvers. Here is an example:

Example 6

We consider minimizing the nuclear norm over a subspace. Consider the following optimization problem:

$$\begin{aligned}&\min&\Vert X \Vert _* \nonumber \\&s.t.&\text {Tr}(U_1X)=1 \nonumber \\{} & {} \text {Tr}(U_2X)=2, \end{aligned}$$
(132)

where

$$\begin{aligned} U_1=\left[ \begin{array}{cccc} 1 &{} 0 &{} 0 &{} 0\\ 0 &{} 1 &{} 0 &{} 0 \end{array} \right] , \ \ U_2=\left[ \begin{array}{cccc} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{}1 \end{array} \right] . \end{aligned}$$
(133)

By using (37), the dual of this problem is

$$\begin{aligned}&\min&-u_1-2u_2 \nonumber \\&s.t.&\Vert u_1 U_1 + u_2 U_2\Vert \le 1. \end{aligned}$$
(134)

To solve this problem with our code, we define

$$\begin{aligned}{} & {} \texttt {cons\{1,1\}='MN'}, \ \ \texttt {cons\{1,2\}}=[2 \ \ 4], \\{} & {} \texttt {A\{1,1\}}=\left[ \begin{array}{cc} \texttt {m2vec}(U_1,4) &{} \texttt {m2vec}(U_2,4) \\ \texttt {zeros}(4,1) &{} \texttt {zeros}(4,1) \end{array} \right] , \ \ \texttt {b\{1,1\}}=\left[ \begin{array}{c} \texttt {zeros}(8,1) \\ \texttt {sm2vec}(I_{2\times 2}) \end{array} \right] , \nonumber \\{} & {} \texttt {c}=[-1,-2]. \end{aligned}$$

If we solve the problem using [x,y]=DDS(c,A,b,cons), the optimal value is \(-2.2360\). Now V:=(vec2m(y{1}(1:8),2))\(^{\top }\) is the solution of (132) with objective value 2.2360. We have

$$\begin{aligned} X^*:=V=\left[ \begin{array}{cc} 0.5 &{} 0 \\ 0 &{} 0.5 \\ 1 &{} 0 \\ 0 &{} 1 \end{array} \right] . \end{aligned}$$
(135)
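The reported solution can be verified independently of DDS: \(X^*\) satisfies both trace constraints of (132), and its nuclear norm is \(\sqrt{5}=2.2360\ldots \), matching the optimal value. A Python/NumPy check (illustrative only):

```python
import numpy as np

U1 = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
U2 = np.array([[0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
X = np.array([[0.5, 0.0], [0.0, 0.5], [1.0, 0.0], [0.0, 1.0]])

assert np.isclose(np.trace(U1 @ X), 1.0)      # first constraint of (132)
assert np.isclose(np.trace(U2 @ X), 2.0)      # second constraint of (132)
nuc = np.linalg.norm(X, ord='nuc')            # nuclear norm: sum of singular values
assert np.isclose(nuc, np.sqrt(5))            # = 2.2360..., the optimal value
```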

1.7 Epigraphs of convex univariate functions (geometric, entropy, and p-norm programming)

In this subsection, we show how to add constraints of the form (38). Let us assume that we want to add the following s constraints to our model

$$\begin{aligned} \sum _{type} \sum _{i=1}^{\ell _{type}^j} \alpha _i^{j,type} f_{type}((a_i^{j,type})^\top x + \beta _i^{j,type}) + g_{j}^ \top x + \gamma _{j} \le 0, \ \ \ \ j\in \{1,\ldots ,s\}.\nonumber \\ \end{aligned}$$
(136)

From now on, type indexes the rows of Table 2. The abbreviation we use for these constraints is TD. Hence, if the kth input block is the set of constraints in (136), then we have cons{k,1}=’TD’. cons{k,2} is a MATLAB cell array with s rows, where each row represents one constraint. For the jth constraint we have:

  • cons{k,2}{j,1} is a matrix with two columns: the first column shows the type of a function from Table 2 and the second column shows the number of that function in the constraint. Let us say that in the jth constraint, we have \(l_{2}^j\) functions of type 2 and \(l_{3}^j\) functions of type 3, then we have

    $$\begin{aligned} \texttt {cons\{k,2\}\{j,1\}} =\left[ \begin{array}{cc} 2 &{} l_{2}^j \\ 3 &{} l_3^j \end{array} \right] . \end{aligned}$$

    The types can be in any order, but the functions of the same type must be consecutive, and the order must match the rows of A and b.

  • cons{k,2}{j,2} is a vector with the coefficients of the functions in a constraint, i.e., \(\alpha _i^{j,type}\) in (136). Note that the coefficients must be in the same order as their corresponding rows in A and b. If in the jth constraint we have 2 functions of type 2 and 1 function of type 3, it starts as

    $$\begin{aligned} \texttt {cons\{k,2\}\{j,2\}}=[\alpha _1^{j,2}, \alpha _2^{j,2}, \alpha _1^{j,3}, \cdots ]. \end{aligned}$$

To add the rows to A, for each constraint j, we first add \(g_{j}\), then \(a_i^{j,type}\)’s in the order that matches cons{k,2}. We do the same thing for vector b (first \(\gamma _j\), then \(\beta _i^{j,type}\)’s). The part of A and b corresponding to the jth constraint is as follows if we have for example five types

$$\begin{aligned} \texttt {A}=\left[ \begin{array}{c} g_j^\top \\ a_1^{j,1} \\ \vdots \\ a_{l_1^j}^{j,1}\\ \vdots \\ a_1^{j,5} \\ \vdots \\ a_{l_5^j}^{j,5} \end{array}\right] , \ \ \ \texttt {b}=\left[ \begin{array}{c} \gamma _j \\ \beta _1^{j,1} \\ \vdots \\ \beta _{l_1^j}^{j,1} \\ \vdots \\ \beta _1^{j,5} \\ \vdots \\ \beta _{l_5^j}^{j,5} \end{array}\right] . \end{aligned}$$
(137)

Let us see an example:

Example 7

Assume that we want to solve

$$\begin{aligned}&\min&c^\top x \nonumber \\&\text {s.t.}&-1.2\ln (x_2+2x_3+55) + 1.8e^{x_1+x_2+1} + x_1 -2.1 \le 0, \nonumber \\{} & {} -3.5\ln (x_1+2x_2+3x_3-30) + 0.9e^{-x_3-3} -x_3 +1.2 \le 0, \nonumber \\{} & {} x \ge 0. \end{aligned}$$
(138)

For this problem, we define:

$$\begin{aligned}{} & {} \texttt {cons\{1,1\}='LP'}, \ \ \texttt {cons\{1,2\}}=[3], \\{} & {} \texttt {cons\{2,1\}='TD'}, \ \ \texttt {cons\{2,2\}} = \left\{ \left[ \begin{array}{cc} 1 &{} 1 \\ 2 &{} 1\end{array} \right] , [1.2 \ \ 1.8]; \ \ \left[ \begin{array}{cc} 1 &{}1 \\ 2 &{} 1\end{array} \right] , [3.5 \ \ 0.9] \right\} , \\{} & {} \texttt {A\{1,1\}}=\left[ \begin{array}{ccc}-1&{} 0&{} 0 \\ 0 &{} -1 &{} 0 \\ 0 &{} 0 &{} -1 \end{array}\right] , \ \ \ \texttt {b\{1,1\}}=\left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array}\right] , \\{} & {} \texttt {A\{2,1\}}=\left[ \begin{array}{ccc} 1 &{} 0 &{} 0\\ 0 &{} 1 &{} 2 \\ 1 &{} 1 &{} 0 \\ 0 &{} 0 &{} -1\\ 1 &{} 2 &{} 3\\ 0 &{} 0 &{} -1 \end{array}\right] , \ \ \ \texttt {b\{2,1\}}=\left[ \begin{array}{c} -2.1 \\ 55 \\ 1 \\ 1.2 \\ -30 \\ -3 \end{array}\right] . \end{aligned}$$
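To see how the rows of A{2,1} and b{2,1} encode the first constraint of (138), we can rebuild it in Python/NumPy (illustrative; it assumes, per Table 2, that type 1 is \(f(z)=-\ln (z)\) and type 2 is \(f(z)=e^{z}\)):

```python
import numpy as np

# Rows for the first TD constraint: g_1, then the ln argument, then the exp argument.
A = np.array([[1.0, 0.0, 0.0],     # g_1 (linear part x1)
              [0.0, 1.0, 2.0],     # argument of the ln term: x2 + 2*x3 + 55
              [1.0, 1.0, 0.0]],    # argument of the exp term: x1 + x2 + 1
             )
b = np.array([-2.1, 55.0, 1.0])    # gamma_1, then the beta's

def constraint1(x):
    v = A @ x + b
    # coefficients [1.2, 1.8] with type-1 f(z) = -ln(z) and type-2 f(z) = exp(z)
    return 1.2 * (-np.log(v[1])) + 1.8 * np.exp(v[2]) + v[0]

x = np.array([0.5, -0.3, 0.2])
direct = (-1.2 * np.log(x[1] + 2 * x[2] + 55)
          + 1.8 * np.exp(x[0] + x[1] + 1) + x[0] - 2.1)
assert np.isclose(constraint1(x), direct)
```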

Note: As mentioned earlier, modeling systems for convex optimization that are based on SDP solvers, such as CVX, have to approximate functions involving exp and \(\ln \). Approximation makes it hard to return dual certificates, especially when the problem is infeasible or unbounded.

1.8 Constraints involving power functions

The difference between these two types (4 and 5) and the others is that we also need to give the value of p for each function. To do that, we add another column to cons{k,2}.

Note: For TD constraints, cons{k,2} can have two or three columns: if we do not use types 4 and 5, it has two columns; otherwise it has three. cons{k,2}{j,3} is a vector containing the powers p for the functions of types 4 and 5. The powers are given in the same order as the coefficients in cons{k,2}{j,2}. If the constraint also has functions of other types, we must put 0 in place of the power.

Let us see an example:

Example 8

 

$$\begin{aligned}&\min&c^\top x \\&\text {s.t.}&2.2\exp (2x_1+3)+|x_1+x_2+x_3| ^2 \\ {}{} & {} + 4.5 |x_1 + x_2| ^{2.5} + |x_2 + 2x_3|^3 + 1.3x_1 -1.9 \le 0. \end{aligned}$$

For this problem, we define:

$$\begin{aligned}{} & {} \texttt {A\{1,1\}}=\left[ \begin{array}{ccc}1.3&{} 0&{} 0 \\ 2 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 \\ 1 &{} 1 &{} 0\\ 0 &{} 1 &{} 2 \end{array}\right] , \ \ \ \texttt {b\{1,1\}}=\left[ \begin{array}{c} -1.9 \\ 3 \\ 0 \\ 0 \\ 0 \end{array}\right] , \nonumber \\{} & {} \texttt {cons\{1,1\}='TD'}, \ \ \texttt {cons\{1,2\}}=\left\{ \left[ \begin{array}{cc} 2 &{} 1 \\ 4 &{} 3\end{array} \right] , \ [2.2 \ \ 1 \ \ 4.5 \ \ 1], \ [0 \ \ 2 \ \ 2.5 \ \ 3] \right\} . \end{aligned}$$
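The same kind of check applies here (illustrative Python/NumPy; it assumes from Table 2 that type 2 is \(f(z)=e^{z}\) and type 4 is \(f(z)=|z|^{p}\), which explains the powers \([0,\,2,\,2.5,\,3]\) with 0 as the placeholder for the exp term):

```python
import numpy as np

A = np.array([[1.3, 0, 0], [2, 0, 0], [1, 1, 1], [1, 1, 0], [0, 1, 2]], dtype=float)
b = np.array([-1.9, 3.0, 0.0, 0.0, 0.0])
coef = np.array([2.2, 1.0, 4.5, 1.0])      # cons{1,2}{1,2}
powers = np.array([0.0, 2.0, 2.5, 3.0])    # cons{1,2}{1,3}

x = np.array([0.2, -0.1, 0.3])
v = A @ x + b                              # linear part, then function arguments
rebuilt = v[0] + coef[0] * np.exp(v[1]) + sum(
    c * abs(z) ** p for c, z, p in zip(coef[1:], v[2:], powers[1:]))
direct = (2.2 * np.exp(2 * x[0] + 3) + abs(x.sum()) ** 2
          + 4.5 * abs(x[0] + x[1]) ** 2.5 + abs(x[1] + 2 * x[2]) ** 3
          + 1.3 * x[0] - 1.9)
assert np.isclose(rebuilt, direct)
```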

1.9 Vector relative entropy

The abbreviation we use for relative entropy is RE. So, for the kth block of s constraints of the form

$$\begin{aligned} f(A_u^i x + b_u^i, A_z^i x + b_z^i) + g_i^\top x + \gamma _i \le 0, \ \ i \in \{1,\ldots , s\}, \end{aligned}$$
(139)

we define cons{k,1} = ’RE’ and cons{k,2} as a vector of length s with the ith element equal to \(2\ell _i+1\), where \(\ell _i\) is the common length of the vectors \(A_u^i x + b_u^i\) and \(A_z^i x + b_z^i\). We also define:

$$\begin{aligned} \texttt {A}=\left[ \begin{array}{c} g_1^\top \\ A_u^1 \\ A_z^1 \\ \vdots \\ g_s^\top \\ A_u^s \\ A_z^s \end{array}\right] , \ \ \ \texttt {b}=\left[ \begin{array}{c} \gamma _1 \\ b_u^1 \\ b_z^1 \\ \vdots \\ \gamma _s \\ b_u^s \\ b_z^s \end{array}\right] . \end{aligned}$$
(140)

Example 9

Assume that we want to minimize a relative entropy function under a linear constraint:

$$\begin{aligned}&\min&(0.8x_1+1.3) \ln \left( \frac{0.8x_1+1.3}{2.1x_1+1.3x_2+1.9}\right) \nonumber \\{} & {} + (1.1x_1-1.5x_2-3.8) \ln \left( \frac{1.1x_1-1.5x_2-3.8}{3.9x_2}\right) \nonumber \\&s.t.&x_1 + x_2 \le \beta . \end{aligned}$$
(141)

We add an auxiliary variable \(x_3\) to model the objective function as a constraint. For this problem we define:

$$\begin{aligned}&\texttt {cons\{1,1\}='RE'}, \ \ \texttt {cons\{1,2\}}=\left[ 5 \right] , \\&\texttt {A\{1,1\}}=\left[ \begin{array}{ccc} 0 & 0 & -1 \\ 0.8 & 0 & 0 \\ 1.1 & -1.5 & 0 \\ 2.1 & 1.3 & 0 \\ 0 & 3.9 & 0 \end{array}\right] , \ \ \ \texttt {b\{1,1\}}=\left[ \begin{array}{c} 0 \\ 1.3 \\ -3.8 \\ 1.9 \\ 0 \end{array}\right] , \\&\texttt {cons\{2,1\}='LP'}, \ \ \texttt {cons\{2,2\}}=\left[ 1 \right] , \\&\texttt {A\{2,1\}}=[-1 \ \ -1 \ \ 0], \ \ \ \texttt {b\{2,1\}}=[\beta ]. \end{aligned}$$

If we solve this problem with DDS, the problem is infeasible for \(\beta = 2\), while for \(\beta = 7\) DDS returns an optimal solution \(x^* \approx (5.93,1.06)^{\top }\) with optimal objective value \(-7.259\).
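As a check independent of DDS, one can evaluate the vector relative entropy \(f(u,z)=\sum _j u_j \ln (u_j/z_j)\) at the reported (rounded) solution for \(\beta =7\). The following is a plain-Python sketch of ours, not part of the package:

```python
import math

def rel_entropy(u, z):
    # vector relative entropy f(u, z) = sum_j u_j * ln(u_j / z_j)
    return sum(uj * math.log(uj / zj) for uj, zj in zip(u, z))

x1, x2 = 5.93, 1.06  # rounded optimal solution reported by DDS for beta = 7
u = [0.8 * x1 + 1.3, 1.1 * x1 - 1.5 * x2 - 3.8]
z = [2.1 * x1 + 1.3 * x2 + 1.9, 3.9 * x2]

assert all(v > 0 for v in u + z)   # arguments lie in the domain of f
assert x1 + x2 <= 7                # the LP block is satisfied
obj = rel_entropy(u, z)
assert abs(obj - (-7.259)) < 0.05  # matches the reported value up to rounding
```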

1.10 Adding quantum entropy based constraints

Let \(qe_i: {\mathbb {S}}^{n_i} \rightarrow {\mathbb {R}}\cup \{+\infty \}\) be quantum entropy functions and consider \(\ell \) quantum entropy constraints of the form

$$\begin{aligned} qe_i(A^i_0+x_1 A^i_1+ \cdots +x_n A^i_n) \le g_i^\top x+d_i, \ \ \ i\in \{1,\ldots ,\ell \}. \end{aligned}$$
(142)

Here, each \(A^i_j\) is an \(n_i\)-by-\(n_i\) symmetric matrix. To input (142) into DDS as the kth block, we define:

$$\begin{aligned}&\texttt {cons\{k,1\}='QE'}, \ \ \texttt {cons\{k,2\}}=[n_1, \ldots ,n_\ell ], \\&\texttt {A\{k,1\}}:=\left[ \begin{array}{c} g_1^\top \\ \texttt {sm2vec}(A^1_1), \cdots , \texttt {sm2vec}(A^1_n) \\ \vdots \\ g_\ell ^\top \\ \texttt {sm2vec}(A^\ell _1), \cdots , \texttt {sm2vec}(A^\ell _n)\end{array} \right] , \ \ \ \texttt {b\{k,1\}}:=\left[ \begin{array}{c} d_1 \\ \texttt {sm2vec}(A^1_0)\\ \vdots \\ d_\ell \\ \texttt {sm2vec}(A^\ell _0) \end{array} \right] . \end{aligned}$$
(143)

Example 10

Assume that we want to find scalars \(x_1\), \(x_2\), and \(x_3\) such that \(2x_1+3x_2-x_3 \le 5\) and all the eigenvalues of \(H:=x_1A_1+x_2A_2+x_3A_3\) are at least 3, for

$$\begin{aligned} A_1=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] , \ A_2=\left[ \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array} \right] , \ A_3=\left[ \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right] , \end{aligned}$$

and such that the quantum entropy of \(H\) is minimized. We can write this problem as

$$\begin{aligned}&\min&t \nonumber \\&\text {s.t.}&qe(x_1A_1+x_2A_2+x_3A_3) \le t, \nonumber \\&&2x_1+3x_2-x_3 \le 5, \nonumber \\&&x_1A_1+x_2A_2+x_3A_3 \succeq 3I. \end{aligned}$$
(144)

For the objective function we have \(\texttt {c}=(0,0,0,1)^\top \). Assume that the first and second blocks are LP and SDP as before. We define the third block of constraints as:

$$\begin{aligned}&\texttt {cons\{3,1\}='QE'}, \ \ \texttt {cons\{3,2\}}=[3], \ \ \texttt {b\{3,1\}}:=\left[ \begin{array}{c} 0 \\ \texttt {zeros}(9,1) \end{array} \right] , \\&\texttt {A\{3,1\}}:=\left[ \begin{array}{cccc} 0&0&0&1\\ \texttt {sm2vec}(\texttt {A1}) & \texttt {sm2vec}(\texttt {A2}) & \texttt {sm2vec}(\texttt {A3}) & \texttt {sm2vec}(\texttt {zeros}(3)) \end{array} \right] . \end{aligned}$$

If we run DDS, the solution we get is \((x_1,x_2,x_3)=(4,-1,0)\), with quantum entropy \(qe(H)=14.63\).
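This solution can be verified by hand, independently of DDS: for \((x_1,x_2,x_3)=(4,-1,0)\), \(H = 4A_1 - A_2\) has eigenvalues \(3, 3, 5\), so \(H \succeq 3I\), the linear constraint is tight, and, assuming the convention \(qe(X)=\textrm{Tr}(X\ln X)\), \(qe(H)=6\ln 3+5\ln 5\approx 14.64\). A plain-Python check of ours (not DDS code), using hand-computed eigenpairs:

```python
import math

A1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A2 = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
A3 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
x = (4, -1, 0)  # solution reported by DDS

# H = x1*A1 + x2*A2 + x3*A3 = [[4,0,-1],[0,3,0],[-1,0,4]]
H = [[x[0] * A1[i][j] + x[1] * A2[i][j] + x[2] * A3[i][j] for j in range(3)]
     for i in range(3)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# hand-computed eigenpairs of H
eigpairs = [(3, [1, 0, 1]), (3, [0, 1, 0]), (5, [1, 0, -1])]
for lam, v in eigpairs:
    Hv = matvec(H, v)
    assert all(abs(Hv[i] - lam * v[i]) < 1e-12 for i in range(3))

assert min(lam for lam, _ in eigpairs) >= 3           # H >= 3I holds
assert 2 * x[0] + 3 * x[1] - x[2] <= 5                # LP constraint (tight)
qe = sum(lam * math.log(lam) for lam, _ in eigpairs)  # tr(H ln H)
assert abs(qe - 14.63) < 0.02
```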

1.11 Adding quantum relative entropy based constraints

The abbreviation we use for quantum relative entropy is QRE. Let \(qre_i: {\mathbb {S}}^{n_i}\oplus {\mathbb {S}}^{n_i} \rightarrow {\mathbb {R}}\cup \{+\infty \}\) be quantum relative entropy functions and consider \(\ell \) quantum relative entropy constraints of the form

$$\begin{aligned} qre_i(A^i_0+x_1 A^i_1+ \cdots +x_n A^i_n, \ B^i_0+x_1 B^i_1+ \cdots +x_n B^i_n) \le g_i^\top x+d_i, \ \ \ i\in \{1,\ldots ,\ell \}. \end{aligned}$$

Here, each \(A^i_j\) and \(B^i_j\) is an \(n_i\)-by-\(n_i\) symmetric matrix. To input these constraints into DDS as the kth block, we define:

$$\begin{aligned}&\texttt {cons\{k,1\}='QRE'}, \ \ \texttt {cons\{k,2\}}=[n_1, \ldots ,n_\ell ], \\&\texttt {A\{k,1\}}:=\left[ \begin{array}{c} g_1^\top \\ \texttt {sm2vec}(A^1_1), \cdots , \texttt {sm2vec}(A^1_n) \\ \texttt {sm2vec}(B^1_1), \cdots , \texttt {sm2vec}(B^1_n) \\ \vdots \\ g_\ell ^\top \\ \texttt {sm2vec}(A^\ell _1), \cdots , \texttt {sm2vec}(A^\ell _n) \\ \texttt {sm2vec}(B^\ell _1), \cdots , \texttt {sm2vec}(B^\ell _n) \end{array} \right] , \ \ \ \texttt {b\{k,1\}}:=\left[ \begin{array}{c} d_1 \\ \texttt {sm2vec}(A^1_0) \\ \texttt {sm2vec}(B^1_0) \\ \vdots \\ d_\ell \\ \texttt {sm2vec}(A^\ell _0) \\ \texttt {sm2vec}(B^\ell _0) \end{array} \right] . \end{aligned}$$
(145)
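For intuition: when the two matrix arguments commute (e.g. both are diagonal), the quantum relative entropy, with the standard convention \(qre(X,Y)=\textrm{Tr}(X\ln X - X\ln Y)\), reduces to the vector relative entropy of Sect. 1.9 applied to the eigenvalues. A quick plain-Python check of this reduction (ours, not DDS code):

```python
import math

# Diagonal X = Diag(xd), Y = Diag(yd) commute, so
# qre(X, Y) = Tr(X ln X - X ln Y) equals the vector relative entropy
# of the diagonal entries.
xd, yd = [2.0, 0.5, 1.5], [1.0, 1.0, 2.0]
qre_diag = sum(u * (math.log(u) - math.log(v)) for u, v in zip(xd, yd))
re_vec = sum(u * math.log(u / v) for u, v in zip(xd, yd))
assert abs(qre_diag - re_vec) < 1e-12
```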

Note: the matrices \(H_1,\ldots ,H_m\) appearing in the determinant representation of a hyperbolic polynomial (the \(\texttt {'determinant'}\) format of Sect. 1.12 below) must be nonzero symmetric matrices.

1.12 Adding constraints involving hyperbolic polynomials

Consider a hyperbolic polynomial constraint of the form

$$\begin{aligned} p(Ax+b) \ge 0. \end{aligned}$$
(146)

To input this constraint to DDS as the kth block, A and b are defined as before, and different parts of cons are defined as follows:

cons{k,1}=’HB’,

cons{k,2}= the number of variables in p(z),

cons{k,3}= the polynomial, which can be given in one of the three formats of Sect. 11.1,

cons{k,4}= the format of the polynomial: \(\texttt {'monomial'}\), \(\texttt {'straight\_line'}\), or \(\texttt {'determinant'}\),

cons{k,5}= the direction of hyperbolicity or a vector in the interior of the hyperbolicity cone.

Example 11

Assume that we want to give constraint (146) to DDS for \(p(x)=x_1^2-x_2^2-x_3^2\), using the monomial format. Then, the cons part is defined as

$$\begin{aligned}&\texttt {cons\{k,1\}='HB'}, \ \ \texttt {cons\{k,2\}}=[3], \\&\texttt {cons\{k,3\}}=\left[ \begin{array}{cccc} 2 & 0 & 0 & 1 \\ 0 & 2 & 0 & -1 \\ 0 & 0 & 2 & -1 \end{array} \right] , \\&\texttt {cons\{k,4\}='monomial'}, \ \ \texttt {cons\{k,5\}}=\left[ \begin{array}{c} 1\\ 0\\ 0\end{array}\right] . \end{aligned}$$
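One can check both the monomial encoding and the hyperbolicity of p along \(e=(1,0,0)\) numerically: \(p(te-x)=(t-x_1)^2-x_2^2-x_3^2\) has the real roots \(t=x_1\pm \sqrt{x_2^2+x_3^2}\) for every x. The sketch below (plain Python, independent of DDS; the helper name `p` is ours) evaluates p from the rows of cons{k,3}:

```python
import math

# monomial format: columns 1..n are the exponents, the last column is the coefficient
mono = [[2, 0, 0, 1],
        [0, 2, 0, -1],
        [0, 0, 2, -1]]

def p(x):
    # evaluate the polynomial encoded by the monomial rows
    return sum(row[-1] * math.prod(xi ** e for xi, e in zip(x, row[:-1]))
               for row in mono)

e = (1, 0, 0)  # direction of hyperbolicity
for x in [(1.0, 2.0, -3.0), (0.0, 0.5, 0.25), (-4.0, 1.5, 2.0)]:
    # p(t*e - x) = (t - x1)^2 - x2^2 - x3^2: its discriminant is
    # 4*(x2^2 + x3^2) >= 0, so all roots are real for every x.
    disc = (2 * x[0]) ** 2 - 4 * (x[0] ** 2 - x[1] ** 2 - x[2] ** 2)
    assert disc >= 0
    r = math.sqrt(x[1] ** 2 + x[2] ** 2)
    for t in (x[0] + r, x[0] - r):  # the two real roots
        y = tuple(t * ei - xi for ei, xi in zip(e, x))
        assert abs(p(y)) < 1e-9

assert p(e) == 1  # e lies in the interior of the hyperbolicity cone
```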

1.13 Equality constraints

To input a block of equality constraints \(Bx=d\), where \(B\) is an \(m_{EQ}\)-by-\(n\) matrix, as the kth block, we define

$$\begin{aligned} \texttt {cons\{k,1\}='EQ'}, \ \ \texttt {cons\{k,2\}}=m_{EQ}, \ \ \texttt {A\{k,1\}}=B, \ \ \texttt {b\{k,1\}}=d. \end{aligned}$$

Example 12

If for a given problem with \(x \in {\mathbb {R}}^3\), we have a constraint \(x_2-x_3=2\), then we can add it as the kth block by the following definitions:

$$\begin{aligned} \texttt {cons\{k,1\}='EQ'}, \ \ \texttt {cons\{k,2\}}=1, \ \ \texttt {A\{k,1\}}=[0 \ \ 1 \ -1], \ \ \texttt {b\{k,1\}}=[2]. \end{aligned}$$


Karimi, M., Tunçel, L. Domain-Driven Solver (DDS) Version 2.1: a MATLAB-based software package for convex optimization problems in domain-driven form. Math. Prog. Comp. 16, 37–92 (2024). https://doi.org/10.1007/s12532-023-00248-2
