Abstract
Domain-Driven Solver (DDS) is a MATLAB-based software package for convex optimization. The current version of DDS accepts every combination of the following function/set constraints: (1) symmetric cones (LP, SOCP, and SDP); (2) quadratic constraints that are SOCP representable; (3) direct sums of an arbitrary collection of 2-dimensional convex sets defined as the epigraphs of univariate convex functions (including as special cases geometric programming and entropy programming); (4) generalized Koecher (power) cone; (5) epigraphs of matrix norms (including as a special case minimization of nuclear norm over a linear subspace); (6) vector relative entropy; (7) epigraphs of quantum entropy and quantum relative entropy; and (8) constraints involving hyperbolic polynomials. The infeasible-start primal-dual algorithms used for DDS rely heavily on duality theory and properties of Legendre-Fenchel conjugate functions, and are designed to rigorously determine the status of a given problem. We discuss some important implementation details and techniques we used to improve the robustness and efficiency of the software. The appendix contains many examples.
Data availability
Enquiries about data availability should be directed to the authors.
Code availability
The full code was made available for review. Reference [24] in this published article is the link to the publicly available code.
Notes
For these results, the code provided in the Hypatia package for CBLIB using JuMP [14] is used, with the parameter default_tol_relax = 1. By changing this parameter to default_tol_relax = 1000, Hypatia solves the problems batch and enpro48 approximately, but still fails for isil01 and LogExpCR-n500-m1600.
References
Amini, N., Brändén, P.: Non-representable hyperbolic matroids. Adv. Math. 334, 417–449 (2018)
Boyd, S., Kim, S.J., Vandenberghe, L., Hassibi, A.: A tutorial on geometric programming. Optim. Eng. 8(1), 67–127 (2007)
Brändén, P.: Polynomials with the half-plane property and matroid theory. Adv. Math. 216(1), 302–320 (2007)
Brändén, P.: Obstructions to determinantal representability. Adv. Math. 226(2), 1202–1212 (2011)
Brändén, P.: Hyperbolicity cones of elementary symmetric polynomials are spectrahedral. Optim. Lett. 8(5), 1773–1782 (2014)
Burton, S., Vinzant, C., Youm, Y.: A real stable extension of the Vamos matroid polynomial. arXiv preprint arXiv:1411.2038 (2014)
Chandrasekaran, V., Shah, P.: Relative entropy optimization and its applications. Math. Program. 161(1–2), 1–32 (2017)
Chares, R.: Cones and interior-point algorithms for structured convex optimization involving powers and exponentials. Ph.D. thesis, Université Catholique de Louvain, Louvain-la-Neuve (2008)
Choe, Y.B., Oxley, J.G., Sokal, A.D., Wagner, D.G.: Homogeneous multivariate polynomials with the half-plane property. Adv. Appl. Math. 32(1–2), 88–187 (2004)
Coey, C., Kapelevich, L., Vielma, J.P.: Performance enhancements for a generic conic interior point algorithm. Math. Program. Comput., 1–49 (2022)
Coey, C., Kapelevich, L., Vielma, J.P.: Solving natural conic formulations with Hypatia.jl. INFORMS J. Comput. 34(5), 2686–2699 (2022)
Dahl, J., Andersen, E.D.: A primal-dual interior-point algorithm for nonsymmetric exponential-cone optimization. Math. Program. 194(1–2), 341–370 (2022)
Davis, C.: All convex invariant functions of Hermitian matrices. Arch. Math. 8(4), 276–278 (1957)
Dunning, I., Huchette, J., Lubin, M.: JuMP: a modeling language for mathematical optimization. SIAM Rev. 59(2), 295–320 (2017). https://doi.org/10.1137/15M1020575
Fang, S.C., Rajasekera, J.R., Tsao, H.S.J.: Entropy optimization and mathematical programming, vol. 8. Springer, Berlin (1997)
Fawzi, H., Saunderson, J.: Optimal self-concordant barriers for quantum relative entropies. arXiv preprint arXiv:2205.04581 (2022)
Fawzi, H., Saunderson, J., Parrilo, P.A.: Semidefinite approximations of the matrix logarithm. Found. Comput. Math. 19(2), 259–296 (2019)
Faybusovich, L., Tsuchiya, T.: Matrix monotonicity and self-concordance: how to handle quantum entropy in optimization problems. Optim. Lett. 11, 1513–1526 (2017)
Faybusovich, L., Zhou, C.: Long-step path-following algorithm in quantum information theory: Some numerical aspects and applications. arXiv preprint arXiv:1906.00037 (2020)
Friberg, H.A.: CBLIB 2014: a benchmark library for conic mixed-integer and continuous optimization. Math. Program. Comput. 8(2), 191–214 (2016)
Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming, version 2.2. (2020) http://cvxr.com/cvx
Güler, O.: Hyperbolic polynomials and interior point methods for convex programming. Math. Oper. Res. 22(2), 350–377 (1997)
Hiai, F., Petz, D.: Introduction to matrix analysis and applications. Springer, Berlin (2014)
Karimi, M., Tunçel, L.: mehdi-karimi-math/DDS: DDS 2.1 (2023). https://doi.org/10.5281/zenodo.8339473
Karimi, M., Tunçel, L.: Primal-dual interior-point methods for domain-driven formulations. Math. Oper. Res. 45(2), 591–621 (2020)
Karimi, M., Tunçel, L.: Status determination by interior-point methods for convex optimization problems in Domain-Driven form. Math. Program. 194(1–2), 937–974 (2022)
Lewis, A.S.: The mathematics of eigenvalue optimization. Math. Program. 97(1–2), 155–176 (2003)
MOSEK ApS: The MOSEK optimization toolbox for MATLAB manual. Version 9.0. (2019). http://docs.mosek.com/9.0/toolbox/index.html
Myklebust, T.G.J.: On primal-dual interior-point algorithms for convex optimisation. Ph.D. thesis, Department of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo (2015)
Nemirovski, A., Tunçel, L.: Cone-free primal-dual path-following and potential reduction polynomial time interior-point methods. Math. Program. 102, 261–294 (2005)
Nesterov, Y.: Lectures on convex optimization. Springer, Berlin (2018)
Nesterov, Y., Nemirovski, A.: Interior-Point Polynomial Algorithms in Convex Programming. SIAM Series in Applied Mathematics, SIAM, Philadelphia (1994)
Papp, D., Yıldız, S.: Sum-of-squares optimization without semidefinite programming. SIAM J. Optim. 29(1), 822–851 (2019)
Papp, D., Yıldız, S.: Alfonso: Matlab package for nonsymmetric conic optimization. INFORMS J. Comput. 34(1), 11–19 (2021)
Pataki, G., Schmieta, S.: The DIMACS library of semidefinite-quadratic-linear programs. Preliminary draft, Computational Optimization Research Center, Columbia University, New York, Tech. Rep. (2002)
Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3), 471–501 (2010)
Renegar, J.: Hyperbolic programs, and their derivative relaxations. Found. Comput. Math. 6(1), 59–79 (2006)
Roy, S., Xiao, L.: On self-concordant barriers for generalized power cones. Optim. Lett. 16(2), 681–694 (2022)
Skajaa, A., Ye, Y.: A homogeneous interior-point algorithm for nonsymmetric convex conic optimization. Math. Program. 150(2), 391–422 (2015)
Toh, K.C., Todd, M.J., Tütüncü, R.H.: SDPT3 – a MATLAB software package for semidefinite programming, version 1.3. Optim. Methods Softw. 11(1–4), 545–581 (1999)
Tropp, J.A.: An introduction to matrix concentration inequalities. Foundations and Trends® in Machine Learning 8(1-2), 1–230 (2015)
Tunçel, L.: Generalization of primal-dual interior-point methods to convex optimization problems in conic form. Found. Comput. Math. 1(3), 229–254 (2001)
Tunçel, L., Nemirovski, A.: Self-concordant barriers for convex approximations of structured convex sets. Found. Comput. Math. 10(5), 485–525 (2010)
Wagner, D.G., Wei, Y.: A criterion for the half-plane property. Discret. Math. 309(6), 1385–1390 (2009)
Wang, W., Lütkenhaus, N.: OpenQKDSecurity platform. (2021) https://github.com/nlutkenhaus/openQKDsecurity
Winick, A., Lütkenhaus, N., Coles, P.J.: Reliable numerical key rates for quantum key distribution. Quantum 2, 77 (2018)
Zinchenko, Y.: On hyperbolicity cones associated with elementary symmetric polynomials. Optim. Lett. 2(3), 389–402 (2008)
Acknowledgements
The authors wish to thank the associate editor and the anonymous reviewers, whose insightful comments and careful reading helped improve the presentation. The first author, Mehdi Karimi, extends heartfelt gratitude to his wife, Mehrnoosh, for her loving support throughout the five-year journey of developing this code.
Funding
Research of the authors was supported in part by Discovery Grants from NSERC and by U.S. Office of Naval Research under award numbers: N00014-12-1-0049, N00014-15-1-2171 and N00014-18-1-2078.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
How to add different function/set constraints
1.1 Linear programming (LP) and second-order cone programming (SOCP)
Suppose we want to add \(\ell \) LP constraints of the form
where \(A_L^i\) is an \(m_L^i\)-by-n matrix, as the kth block of constraints. Then, we define
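For concreteness, a block of two LP constraints might be assembled as in the following sketch; the matrix values and the exact layout here are illustrative assumptions, not data from the text.

```matlab
% Hypothetical sketch of a k-th block holding two LP constraints.
% cons{k,1} names the set type, cons{k,2} lists the row counts m_L^i,
% and A{k,1}, b{k,1} stack the constraint data in the same order.
k = 1;
cons{k,1} = 'LP';
cons{k,2} = [2; 3];                 % m_L^1 = 2 and m_L^2 = 3
A{k,1} = [ 1  0;
           0  1;                    % rows of A_L^1
           1  1;
           1 -1;
           0  2 ];                  % rows of A_L^2
b{k,1} = [0; 0; 1; 1; 2];           % stacked right-hand-side vectors
```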
Similarly, to add \(\ell \) SOCP constraints of the form
where \(A_S^i\) is an \(m_S^i\)-by-n matrix for \(i \in \{1,\ldots ,\ell \}\), as the kth block, we define
Let us see an example:
Example 2
Suppose we are given the problem:
Then we define
DDS also accepts constraints defined by the rotated second order cones:
The abbreviation we use is 'SOCPR'. To add \(\ell \) rotated SOCP constraints of the form
where \(A_S^i\) is an \(m_S^i\)-by-n matrix for \(i \in \{1,\ldots ,\ell \}\), as the kth block, we define
1.2 Semidefinite programming (SDP)
Consider \(\ell \) SDP constraints in standard inequality (linear matrix inequality (LMI)) form:
\(F^i_j\)’s are \(n_i\)-by-\(n_i\) symmetric matrices. The above optimization problem is in matrix form; to formulate it in our setup, we need to write it in vector form. DDS has two internal functions sm2vec and vec2sm. sm2vec takes an n-by-n symmetric matrix and changes it into a vector in \({\mathbb {R}}^{n^2}\) by stacking its columns on top of one another in order. vec2sm changes a vector into a symmetric matrix such that
By this definition, it is easy to check that for any pair of n-by-n symmetric matrices X and Y we have
To give (115) to DDS as the kth input block, we define:
The s.c. barrier used in DDS for SDP is the well-known function \(-\ln (\det (X))\) defined on the convex cone of symmetric positive definite matrices.
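As an illustration, a single LMI block can be vectorized with the internal sm2vec function described above; the matrices in this sketch are placeholders, not data from the text.

```matlab
% Hypothetical sketch: the LMI F_0 + x_1 F_1 + x_2 F_2 >= 0 as the k-th block.
% Each symmetric matrix is vectorized column-wise by the internal sm2vec.
k = 1;
F0 = eye(2);  F1 = [1 0; 0 -1];  F2 = [0 1; 1 0];
cons{k,1} = 'SDP';
cons{k,2} = [2];                    % side length n_1 of the single LMI
A{k,1} = [sm2vec(F1), sm2vec(F2)];  % one column per variable x_j
b{k,1} = sm2vec(F0);
```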
Example 3
Assume that we want to find scalars \(x_1\), \(x_2\), and \(x_3\) such that \(x_1+x_2+x_3 \ge 1\) and the maximum eigenvalue of \(A_0+x_1A_1+x_2A_2+x_3A_3\) is minimized, where
We can write this problem as
To solve this problem, we define:
Then DDS(c,A,b,cons) gives the answer \(x=(1.1265,0.6,-0.4,3)^\top \), which means the minimum largest eigenvalue is 3.
1.3 Quadratic constraints
Suppose we want to add the following constraints to DDS:
where each \(A_i\) is \(m_i\)-by-n with rank n, and \(Q_i \in \mathbb S^{m_i}\). To give constraints in (120) as input to DDS as the kth block, we define
If cons{k,3} is not given as the input, DDS takes all \(Q_i\)’s to be identity matrices.
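A block of this type might be declared as in the following sketch; the abbreviation 'QC' and all data shown are illustrative assumptions, since the text above does not reproduce them.

```matlab
% Hypothetical sketch: one quadratic constraint as the k-th block, with
% Q_1 supplied explicitly ('QC' and the data here are illustrative).
k = 1;
cons{k,1} = 'QC';
cons{k,2} = [3];                    % m_1: number of rows of A_1
cons{k,3} = {diag([1 2 1])};        % Q_1; omitting cons{k,3} defaults to identity
```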
1.4 Generalized power cone
To add generalized power cone constraints to DDS, we use the abbreviation 'GPC'. Therefore, if the kth block of constraints is GPC, we define cons{k,1}='GPC'. Assume that we want to input the following \(\ell \) constraints to DDS:
where \(A_s^i\), \(b_s^i\), \(A_u^i\), and \(b_u^i\) are matrices and vectors of proper size, and \(K^{(m,n)}_{\alpha }\) is defined in (27). Then, to input these constraints as the kth block, we define cons{k,2} as a MATLAB cell array of size \(\ell \)-by-2, where each row represents one constraint. We then define:
For matrices A and b, we define:
Example 4
Consider the following optimization problem with ’LP’ and ’GPC’ constraints:
Then we define:
1.5 Epigraphs of matrix norms
Assume that we have constraints of the form
where \(A_i\), \(i \in \{1,\ldots ,\ell \}\), are m-by-m symmetric matrices, and \(B_i\), \(i \in \{1,\ldots ,\ell \}\), are m-by-n matrices. DDS has two internal functions m2vec and vec2m for converting matrices (not necessarily symmetric) to vectors and vice versa. For an m-by-n matrix Z, m2vec(Z,n) changes the matrix into a vector. vec2m(v,m) reshapes a vector v of proper size into a matrix with m rows. The abbreviation we use for the epigraph of a matrix norm is 'MN'. If the kth input block is of this type, cons{k,2} is an \(\ell \)-by-2 matrix, where \(\ell \) is the number of constraints of this type, and each row is of the form \([m \ \ n]\). For each constraint of the form (126), the corresponding parts in A and b are defined as
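The vectorization round-trip and the block declaration can be sketched as follows; the matrix values are placeholders and serve only to illustrate the m2vec/vec2m mechanics described above.

```matlab
% Hypothetical sketch: round-tripping a 2-by-3 matrix through the internal
% m2vec/vec2m pair, and declaring one 'MN' constraint of that size.
k = 1;
cons{k,1} = 'MN';
cons{k,2} = [2 3];                  % one row [m n] per constraint
Z = [1 0 2;
     0 1 0];
v = m2vec(Z,3);                     % column-wise vectorization of Z
Zback = vec2m(v,2);                 % reshapes v back to a matrix with 2 rows
```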
Example 5
Assume that we have matrices
and our goal is to solve
Then the input to DDS is defined as
1.6 Minimizing nuclear norm
Consider the optimization problem
where X is n-by-m. DDS can solve the dual problem by defining
Then, if we run [x,y]=DDS(c,A,b,cons) and define V:=(vec2m(y{1}(1:m*n),m))\(^{\top }\), then V is an optimal solution for (130). In Sect. 14.4, we present numerical results for solving problem (130) and show that in cases where \(n \gg m\), DDS can be more efficient than SDP-based solvers. Here is an example:
Example 6
We consider minimizing the nuclear norm over a subspace. Consider the following optimization problem:
where
By using (37), the dual of this problem is
To solve this problem with our code, we define
If we solve the problem using [x,y]=DDS(c,A,b,cons), the optimal value is \(-2.2360\). Now V:=(vec2m(y{1}(1:8),2))\(^{\top }\) is the solution of (132) with objective value 2.2360. We have
1.7 Epigraphs of convex univariate functions (geometric, entropy, and p-norm programming)
In this subsection, we show how to add constraints of the form (38). Let us assume that we want to add the following s constraints to our model
From now on, type indexes the rows of Table 2. The abbreviation we use for these constraints is 'TD'. Hence, if the kth input block consists of the constraints in (136), then we have cons{k,1}='TD'. cons{k,2} is a MATLAB cell array with s rows, where each row represents one constraint. For the jth constraint we have:
-
cons{k,2}{j,1} is a matrix with two columns: the first column shows the type of a function from Table 2 and the second column shows the number of functions of that type in the constraint. Let us say that in the jth constraint we have \(l_{2}^j\) functions of type 2 and \(l_{3}^j\) functions of type 3; then we have
$$\begin{aligned} \texttt {cons\{k,2\}\{j,1\}} = \left[ \begin{array}{cc} 2 & l_{2}^j \\ 3 & l_{3}^j \end{array} \right] . \end{aligned}$$

The types can be in any order, but the functions of the same type are consecutive and the order must match the rows of A and b.
-
cons{k,2}{j,2} is a vector with the coefficients of the functions in a constraint, i.e., \(\alpha _i^{j,type}\) in (136). Note that the coefficients must be in the same order as their corresponding rows in A and b. If in the jth constraint we have 2 functions of type 2 and 1 function of type 3, it starts as
$$\begin{aligned} \texttt {cons\{k,2\}\{j,2\}}=[\alpha _1^{j,2}, \alpha _2^{j,2}, \alpha _1^{j,3}, \cdots ]. \end{aligned}$$
To add the rows to A, for each constraint j, we first add \(g_{j}\), then the \(a_i^{j,type}\)’s in the order that matches cons{k,2}. We do the same for the vector b (first \(\gamma _j\), then the \(\beta _i^{j,type}\)’s). The part of A and b corresponding to the jth constraint is as follows if we have, for example, five types:
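Putting the pieces together, a block with a single constraint could be encoded as in this sketch; the coefficients are placeholders, and the comment on the row layout restates the ordering rule above rather than reproducing the omitted display.

```matlab
% Hypothetical sketch: a 'TD' block with one constraint (j = 1) containing
% two functions of type 2 and one of type 3 from Table 2.
k = 1;
cons{k,1} = 'TD';
cons{k,2} = cell(1,2);
cons{k,2}{1,1} = [2 2;               % two functions of type 2
                  3 1];              % one function of type 3
cons{k,2}{1,2} = [1.5, 0.5, 2];      % alpha_1^{1,2}, alpha_2^{1,2}, alpha_1^{1,3}
% A{k,1} and b{k,1} then start with the row for g_1 and the entry gamma_1,
% followed by the rows a_i^{1,type} and entries beta_i^{1,type} in this order.
```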
Let us see an example:
Example 7
Assume that we want to solve
For this problem, we define:
Note: As we mentioned, modeling systems for convex optimization that are based on SDP solvers, such as CVX, have to use approximation for functions involving exp and \(\ln \). Approximation makes it hard to return dual certificates, specifically when the problem is infeasible or unbounded.
1.8 Constraints involving power functions
The difference between these two types (4 and 5) and the others is that we also need to give the value of the power p for each function. To do that, we add another column to cons{k,2}.
Note: For TD constraints, cons{k,2} can have two or three columns: if we do not use types 4 and 5, it has two columns; otherwise it has three. cons{k,2}{j,3} is a vector which contains the powers p for functions of types 4 and 5. The powers are given in the same order as the coefficients in cons{k,2}{j,2}. If the constraint also has functions of other types, we must put 0 in place of the power.
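The third column can be sketched as follows for a constraint mixing power and non-power functions; all coefficients and powers here are illustrative assumptions.

```matlab
% Hypothetical sketch: the j-th 'TD' constraint mixes one type-2 function
% with two type-4 (power) functions, so cons{k,2} gets a third column.
k = 1;  j = 1;
cons{k,1} = 'TD';
cons{k,2}{j,1} = [2 1; 4 2];         % one type-2 and two type-4 functions
cons{k,2}{j,2} = [1, 0.5, 0.5];      % coefficients, type-2 function first
cons{k,2}{j,3} = [0, 3, 1.5];        % p: 0 for the type-2 slot, then the powers
```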
Let us see an example:
Example 8
For this problem, we define:
1.9 Vector relative entropy
The abbreviation we use for relative entropy is 'RE'. So, for the kth block of s constraints of the form
we define cons{k,1} = ’RE’ and cons{k,2} is a vector of length s with the ith element equal to \(2\ell +1\). We also define:
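A single-constraint block of this type might be set up as in the following sketch; the sizes are illustrative, and the comment on the row layout is an assumption rather than a reproduction of the omitted display.

```matlab
% Hypothetical sketch: one 'RE' constraint with ell = 2, contributing
% 2*ell + 1 = 5 rows to A{k,1} and b{k,1}.
k = 1;
cons{k,1} = 'RE';
cons{k,2} = [5];                    % 2*ell + 1 for the single constraint
% The five rows of A{k,1} and b{k,1} carry, in order, the data for the
% epigraph part and the two argument vectors of the relative entropy.
```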
Example 9
Assume that we want to minimize a relative entropy function under a linear constraint:
We add an auxiliary variable \(x_3\) to model the objective function as a constraint. For this problem we define:
If we solve this problem by DDS, for \(\beta = 2\) the problem is infeasible, and for \(\beta = 7\) DDS returns an optimal solution \(x^*:=(5.93,1.06)^{\top }\) with optimal objective value \(-7.259\).
1.10 Adding quantum entropy based constraints
Let \(qe_i: {\mathbb {S}}^{n_i} \rightarrow {\mathbb {R}}\cup \{+\infty \}\) be quantum entropy functions and consider \(\ell \) quantum entropy constraints of the form
\(A^i_j\)’s are \(n_i\)-by-\(n_i\) symmetric matrices. To input (142) to DDS as the kth block, we define:
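A quantum entropy block can be sketched as follows; the matrices are placeholders, and the comment on how the columns of A{k,1} are assembled is an assumption, since the defining display is fixed above.

```matlab
% Hypothetical sketch: one quantum entropy constraint on 2-by-2 matrices
% as the k-th block, with the symmetric A^1_j vectorized by sm2vec.
k = 1;
A1 = [1 0; 0 2];  A2 = [0 1; 1 0];
cons{k,1} = 'QE';
cons{k,2} = [2];                    % side length n_1
% The columns of A{k,1} would collect sm2vec(A1), sm2vec(A2), together with
% the data for the epigraph variable, in the layout the display above fixes.
```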
Example 10
Assume that we want to find scalars \(x_1\), \(x_2\), and \(x_3\) such that \(2x_1+3x_2-x_3 \le 5\) and all the eigenvalues of \(H:=x_1A_1+x_2A_2+x_3A_3\) are at least 3, for
such that the quantum entropy of H is minimized. We can write this problem as
For the objective function we have \(\texttt {c}=(0,0,0,1)^\top \). Assume that the first and second blocks are LP and SDP as before. We define the third block of constraints as:
If we run DDS, the answer we get is \((x_1,x_2,x_3)=(4,-1,0)\) with \(f(H)=14.63\).
1.11 Adding quantum relative entropy based constraints
The abbreviation we use for quantum relative entropy is 'QRE'. Let \(qre_i: {\mathbb {S}}^{n_i}\oplus {\mathbb {S}}^{n_i} \rightarrow {\mathbb {R}}\cup \{+\infty \}\) be quantum relative entropy functions and consider \(\ell \) quantum relative entropy constraints of the form
\(A^i_j\)’s and \(B^i_j\)’s are \(n_i\)-by-\(n_i\) symmetric matrices. To input these constraints to DDS as the kth block, we define:
Note: \(H_1,\ldots ,H_m\) must be nonzero symmetric matrices.
1.12 Adding constraints involving hyperbolic polynomials
Consider a hyperbolic polynomial constraint of the form
To input this constraint to DDS as the kth block, A and b are defined as before, and different parts of cons are defined as follows:
cons{k,1}=’HB’,
cons{k,2}= number of variables in p(z).
cons{k,3} is the polynomial, which can be given in one of the three formats of Sect. 11.1.
cons{k,4} is the format of polynomial: \(\texttt {'monomial'}\), \(\texttt {'straight\_line'}\), or \(\texttt {'determinant'}\).
cons{k,5} is the direction of hyperbolicity or a vector in the interior of the hyperbolicity cone.
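The five cons fields above can be sketched as follows for the polynomial \(p(z)=z_1^2-z_2^2-z_3^2\); the row encoding of the monomial format shown here is an illustrative assumption (Sect. 11.1 fixes the actual formats), as is the chosen direction.

```matlab
% Hypothetical sketch: p(z) = z1^2 - z2^2 - z3^2 in a monomial-style format,
% with each row read as [coefficient, exponents of z1, z2, z3] (this row
% encoding is illustrative; Sect. 11.1 defines the actual formats).
k = 1;
cons{k,1} = 'HB';
cons{k,2} = 3;                      % number of variables in p(z)
cons{k,3} = [ 1  2 0 0;             % + z1^2
             -1  0 2 0;             % - z2^2
             -1  0 0 2 ];           % - z3^2
cons{k,4} = 'monomial';
cons{k,5} = [1; 0; 0];              % direction of hyperbolicity
```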
Example 11
Assume that we want to give constraint (146) to DDS for \(p(x)=x_1^2-x_2^2-x_3^2\), using the monomial format. Then, cons part is defined as
1.13 Equality constraints
To input a block of equality constraints \(Bx=d\), where B is an \(m_{EQ}\)-by-n matrix, as the kth block, we define
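A generic equality block might be set up as in this sketch; the abbreviation 'EQ' and the data shown are illustrative assumptions, since the defining display is not reproduced here.

```matlab
% Hypothetical sketch: a block of two equality constraints Bx = d on x in R^3
% (the abbreviation 'EQ' and the data are assumed for illustration).
k = 1;
cons{k,1} = 'EQ';
cons{k,2} = [2];                    % m_EQ
A{k,1} = [1 1  0;
          0 1 -1];                  % B
b{k,1} = [1; 2];                    % d
```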
Example 12
If for a given problem with \(x \in {\mathbb {R}}^3\), we have a constraint \(x_2-x_3=2\), then we can add it as the kth block by the following definitions:
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Karimi, M., Tunçel, L. Domain-Driven Solver (DDS) Version 2.1: a MATLAB-based software package for convex optimization problems in domain-driven form. Math. Prog. Comp. 16, 37–92 (2024). https://doi.org/10.1007/s12532-023-00248-2
Keywords
- Convex optimization
- Software package
- Self-concordant functions
- Interior-point methods
- Primal-dual algorithms
- Duality theory