2×2-Convexifications for convex quadratic optimization with indicator variables

In this paper, we study the convex quadratic optimization problem with indicator variables. For the 2×2 case, we describe the convex hull of the epigraph in the original space of variables, and also give a conic quadratic extended formulation. Then, using the convex hull description for the 2×2 case as a building block, we derive an extended SDP relaxation for the general case. This new formulation is stronger than other SDP relaxations proposed in the literature for the problem, including the optimal perspective relaxation and the optimal rank-one relaxation. Computational experiments indicate that the proposed formulations are quite effective in reducing the integrality gap of the optimization problems.



Introduction

We consider the convex quadratic optimization problem with indicator variables, referred to throughout as (QI).

Building strong convex relaxations of (QI) is instrumental in solving it effectively. A number of approaches for developing linear and nonlinear valid inequalities for (QI) have been considered in the literature. Dong and Linderoth [22] describe lifted linear inequalities from its continuous quadratic optimization counterpart with bounded variables. Bienstock and Michalka [13] derive valid linear inequalities for optimization of a convex objective function over a non-convex set based on gradients of the objective function. Valid linear inequalities for (QI) can also be obtained using the epigraph of bilinear terms in the objective [e.g. 14, 20, 30, 39]. In addition, several specialized results concerning optimization problems with indicator variables exist in the literature [6, 10, 11, 16, 19, 27, 28, 37, 40].

There is a substantial body of research on the perspective formulation of convex univariate functions with indicators [1, 21–23, 29, 33, 44]. When Q is diagonal, y⊤Qy is separable and the perspective formulation provides the convex hull of the epigraph of y⊤Qy with indicator variables by strengthening each term Qᵢᵢyᵢ² with its perspective counterpart Qᵢᵢyᵢ²/xᵢ individually. For the general case, however, convex relaxations based on the perspective reformulation may not be strong. The computational experiments in [25] demonstrate that as Q deviates from a diagonal matrix, the performance of the perspective formulation deteriorates.
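As a concrete illustration of the perspective strengthening of a single diagonal term (a plain-Python sketch, not code from the paper; the helper name is ours), compare the plain term Qᵢᵢyᵢ² with its perspective counterpart Qᵢᵢyᵢ²/xᵢ at fractional xᵢ, using the division-by-zero convention adopted later in the Notation section:

```python
def perspective(q, y, x):
    """Closure of the perspective of q*y^2, i.e., q*y^2/x with the
    convention y^2/0 = inf if y != 0 and y^2/0 = 0 if y = 0."""
    if x == 0:
        return 0.0 if y == 0 else float("inf")
    return q * y * y / x

q, y = 2.0, 0.5
for x in (1.0, 0.5, 0.25):
    plain = q * y * y            # bound without strengthening
    strong = perspective(q, y, x)  # perspective bound, >= plain for 0 < x <= 1
    print(x, plain, strong)
```

At x = 1 the two bounds coincide; as x decreases toward 0 the perspective bound grows, which is exactly the strengthening effect exploited by the perspective formulation.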

Beyond the perspective reformulation, which is based on the convex hull of the epigraph of a univariate convex quadratic function with one indicator variable, the convexification for the 2 × 2 case has received attention recently. Convex hulls of the univariate and 2 × 2 cases can be used as building blocks to strengthen (QI) by decomposing y⊤Qy into a sequence of low-dimensional terms. Castro et al. [17] study the convexification of a special class of two-term quadratic functions controlled by a single indicator variable. Jeon et al. [36] give conic quadratic valid inequalities for the 2 × 2 case. Frangioni et al. [25] combine the perspective reformulation with disjunctive programming and apply them to the 2 × 2 case. Atamtürk et al. [8] study the convex hull of the mixed-integer set

Z⁻ := {(x, y, t) ∈ I² × ℝ₊ : t ≥ d₁y₁² − 2y₁y₂ + d₂y₂²},

with coefficients d ∈ D := {d ∈ ℝ² : d₁ ≥ 0, d₂ ≥ 0, d₁d₂ ≥ 1}, which subsumes the case d₁ = d₂ = 1 considered in [4]. The conditions on the coefficients d₁, d₂ imply convexity of the quadratic function. Atamtürk and Gómez [5] study the case where the continuous variables are free and the rank of the coefficient matrix is one, in the context of sparse linear regression. Anstreicher and Burer [3] give an extended SDP formulation for the convex hull of the 2 × 2 bounded set {(y, yy⊤, xx⊤) : 0 ≤ y ≤ x ∈ {0,1}²}. Their formulation does not assume convexity of the quadratic function and contains PSD matrix variables X and Y as proxies for xx⊤ and yy⊤ as additional variables. De Rosa and Khajavirad [18] give the explicit convex hull description of the set {(y, yy⊤, xx⊤) : (x, y) ∈ I²}. Anstreicher and Burer [2] study computable representations of convex hulls of low-dimensional quadratic forms without indicator variables. More general convexifications for low-rank quadratic functions [7, 31] and quadratic functions with tridiagonal matrices [38] have also been proposed.

To design convex relaxations for (QI) based on convexifications of simpler substructures, a standard approach is to decompose the matrix Q as Q = R + Σ_{j∈J} Qʲ, for some index set J, where R, Qʲ ⪰ 0, j ∈ J. After writing problem (1) as

min a⊤x + b⊤y + y⊤Ry + Σ_{j∈J} t_j   (2a)
s.t. t_j ≥ y⊤Qʲy, j ∈ J,   (2b)
together with the constraints of (1),   (2c)

formulation (2) can then be strengthened based on convexifications of the simpler structures induced by constraints (2b) (e.g., matrices Qʲ are diagonal or 2 × 2). There are two main approaches to implement convexifications based on (2). On the one hand, one may choose fixed R, Qʲ, j ∈ J, a priori and treat them as parameters, as done in [7, 24, 25, 38, 45], resulting in simpler formulations (e.g., conic quadratic representable) that may be amenable to use with off-the-shelf solvers for mixed-integer optimization. On the other hand, one may treat the matrices R, Qʲ, j ∈ J, as decision variables that are chosen with the goal of obtaining the optimal relaxation bound after strengthening, as done in [5, 8, 21]. The resulting formulations with the second approach are stronger but typically more complex to represent. In general, neither approach is preferable to the other.
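As an illustration of the first approach (fixed matrices chosen a priori), the following sketch extracts one 2 × 2 PSD term per nonzero off-diagonal entry of a diagonally dominant Q, leaving a PSD diagonal residual; the function name and the specific block choice are ours, not the paper's:

```python
import numpy as np

def two_by_two_decompose(Q):
    """Decompose a diagonally dominant Q as Q = R + sum of 2x2 PSD blocks.

    Each block is |q|*[[1, s], [s, 1]] with s = sign(q), which is PSD;
    the residual R is diagonal with entries Q_ii - sum_j |Q_ij| >= 0
    whenever Q is diagonally dominant."""
    n = Q.shape[0]
    terms = []  # list of (i, j, 2x2 PSD block)
    R = Q.astype(float).copy()
    for i in range(n):
        for j in range(i + 1, n):
            if Q[i, j] != 0:
                q = Q[i, j]
                block = np.array([[abs(q), q], [q, abs(q)]])
                terms.append((i, j, block))
                R[i, i] -= abs(q)
                R[j, j] -= abs(q)
                R[i, j] = R[j, i] = 0.0
    return R, terms

Q = np.array([[3.0, 1.0, -1.0], [1.0, 2.0, 0.0], [-1.0, 0.0, 2.0]])
R, terms = two_by_two_decompose(Q)
S = R.copy()
for i, j, B in terms:
    S[np.ix_([i, j], [i, j])] += B  # reassemble Q from the decomposition
assert np.allclose(S, Q)
```

Each 2 × 2 block then yields an epigraph constraint of the form (2b) that can be convexified, while the diagonal residual R is handled by the perspective reformulation.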

Contributions

The contributions of this paper are two-fold.

1. 2 × 2 case: we describe the convex hull of the epigraph of a convex bivariate quadratic with a positive cross product and indicators. Consider

Z⁺ := {(x, y, t) ∈ I² × ℝ₊ : t ≥ d₁y₁² + 2y₁y₂ + d₂y₂²},

where d ∈ D. Observe that any bivariate convex quadratic with positive off-diagonals can be written as d₁y₁² + 2y₁y₂ + d₂y₂² by scaling appropriately. Therefore, Z⁺ is the complementary set to Z⁻ and, together, Z⁺ and Z⁻ model the epigraphs of all bivariate convex quadratics with indicators and nonnegative continuous variables.

In this paper, we propose conic quadratic extended formulations to describe cl conv(Z⁻) and cl conv(Z⁺). These extended formulations are more compact than alternatives previously proposed in the literature. More importantly, a distinguishing contribution of this paper is that we also give the explicit description of cl conv(Z⁺) in the original space of the variables. The corresponding convex envelope of the bivariate function is a four-piece function. While convexifications in the original space of variables are more difficult to implement using current off-the-shelf mixed-integer optimization solvers, they offer deeper insights on the structure of the convex hulls. Whereas the ideal formulations of Z⁻ can be conveniently described with two simple valid "extremal" inequalities [8], a similar result does not hold for Z⁺ (see Example 1 in Sect. 3). The derivation of ideal formulations for the more involved set Z⁺ differs significantly from the methods in [8]. The complementary results of this paper and [8] for Z⁻ complete the convex hull descriptions of bivariate convex functions with indicators and nonnegative continuous variables.


2. General case: we develop an optimal SDP relaxation based on 2 × 2 convexifications for (QI).

In order to construct a strong convex formulation for (QI), we extract a sequence of 2 × 2 PSD matrices from Q such that the residual term is a PSD matrix as well, and convexify each bivariate quadratic term utilizing the descriptions of cl conv(Z⁺) and cl conv(Z⁻). This approach works very well when Q is 2 × 2 PSD decomposable, i.e., when Q is scaled diagonally dominant [15]. Otherwise, a natural question is how to optimally decompose y⊤Qy into bivariate convex quadratics and a residual convex quadratic term so as to achieve the best strengthening.

We address this question by deriving an optimal convex formulation using SDP duality. The new SDP formulation dominates any formulation obtained through a 2 × 2-decomposition scheme. This formulation is also stronger than other SDP formulations in the literature, including the optimal perspective formulation [21] and the optimal rank-one convexification [5]. In addition, the proposed formulation is solved many orders of magnitude faster than the 2 × 2-decomposition approaches based on disjunctive programming [25], and delivers higher quality bounds than standard mixed-integer optimization approaches in difficult portfolio index tracking problems.


Outline

The rest of the paper is organized as follows. In Sect. 2 we review the convex hull results on Z⁻ and illustrate the structural difference between Z⁺ and Z⁻. In Sect. 3 we provide a conic quadratic formulation of cl conv(Z⁺) and cl conv(Z⁻) in an extended space and derive the explicit form of cl conv(Z⁺) in the original space. In Sect. 4, employing the results in Sect. 3, we give a strong convex relaxation for (QI) using SDP techniques. In Sect. 5, we compare the strength of the proposed SDP relaxation with others in the literature. In Sect. 6, we present computational results demonstrating the effectiveness of the proposed convex relaxations. Finally, in Sect. 7, we conclude with a few final remarks.


Notation

To simplify the notation throughout, we adopt the following convention for division by 0: given x ≥ 0, x²/0 = ∞ if x ≠ 0 and x²/0 = 0 if x = 0. Thus x²/z, the closure of the perspective of x², is a closed convex function (see [41], pages 67–68). For a set X ⊆ ℝⁿ, cl conv(X) denotes the closure of the convex hull of X. For a vector v, diag(v) denotes the diagonal matrix V with Vᵢᵢ = vᵢ for each i. Finally, Sⁿ₊ refers to the cone of n × n real symmetric PSD matrices.


Preliminaries

In this section, we review the existing results on convex hulls of the sets Z⁻, Z⁺, and their relaxation Z_f with free continuous variables:

Z_f := {(x, y, t) ∈ {0,1}² × ℝ³ : t ≥ d₁y₁² + 2y₁y₂ + d₂y₂², yᵢ(1 − xᵢ) = 0, i ∈ [2]}.

Note that when the continuous variables are free, the sign associated with the cross term 2y₁y₂ is irrelevant, since one can state it equivalently with the opposite sign by substituting ȳᵢ = −yᵢ. In contrast, if y ≥ 0, such a substitution is not possible; hence the need for separate analyses of the sets Z⁺ and Z⁻. We first point out that all three sets can be naturally seen as disjunctions of four convex sets corresponding to the four possible values of x ∈ {0,1}². Thus, a direct application of disjunctive programming yields similar (conic quadratic) representations of the three sets [25, 36], but such representations require several additional variables. While the disjunctive approach might suggest that Z_f, Z⁺, Z⁻ are similar, we now argue that the sign of the cross terms materially affects the complexity of the optimization problems as well as the structure of the convex hulls.


Optimization

The sign of the off-diagonals of matrix Q critically affects the complexity of the optimization problem (QI). We first state a result concerning optimization with Stieltjes matrices Q, first proven in [4].

Proposition 1 (Atamtürk and Gómez [4]) Problem (1) can be solved in polynomial time if Q ⪰ 0, Qᵢⱼ ≤ 0 for all i ≠ j, and b ≤ 0.

In contrast, an analogous result does not hold if the off-diagonal terms of matrix Q are nonnegative.

Proposition 2 Problem (1) is NP-hard if Q ⪰ 0, Qᵢⱼ ≥ 0 for all i ≠ j, and b ≤ 0.

Proof We show that (QI) includes the NP-hard subset sum problem as a special case under the assumptions of the proposition: given w ∈ ℤⁿ₊ and K ∈ ℤ₊, solve the equation

w⊤x = K, x ∈ {0,1}ⁿ.   (3)

Set Q = (I + qq⊤)/2 ⪰ 0, where q ∈ ℝⁿ₊₊ is a parameter to be specified later. Let pᵢ = qᵢ², i ∈ [n], b = −q, and a = γp for some γ > 0, to be specified later as well. For a vector z ∈ ℝⁿ and a matrix M ∈ Sⁿ₊, let z_S and M_S denote the subvector and principal submatrix defined by S ⊆ [n], respectively. Then (QI) reduces to

min_{S⊆[n]} min_{y_S} ½ y_S⊤(I_S + q_S q_S⊤)y_S − q_S⊤y_S + γ Σ_{i∈S} pᵢ   (4a)
= min_{S⊆[n]} 1/(2(1 + ‖q_S‖₂²)) + γ(1 + ‖q_S‖₂²) − γ − ½,   (4b)

where S := {i ∈ [n] : xᵢ = 1} and the equality follows from the Woodbury matrix identity and pᵢ = qᵢ². Note that the nonnegativity constraints are dropped in (4a) because they are trivially satisfied by the optimal solution y_S = (I_S − q_Sq_S⊤/(1 + ‖q_S‖₂²)) q_S = q_S/(1 + ‖q_S‖₂²) ≥ 0. After dropping the constant term −γ − ½, multiplying by 2, and setting qᵢ = √wᵢ, i ∈ [n], and γ = 1/(2(1 + K)²), (4b) simplifies to

min_{S⊆[n]} 1/(1 + w(S)) + (1 + w(S))/(1 + K)² ≥ 2/(1 + K),

where the lower bound is attained if and only if w(S) = K. Hence, the subset sum problem (3) has a solution if and only if the optimal value of (QI) as constructed above equals 2/(1 + K).
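The reduction can be sanity-checked numerically by brute force on a small instance; the helper below (our code, following the proof's construction) enumerates all x ∈ {0,1}ⁿ and uses the closed-form inner minimum of the unconstrained quadratic:

```python
import itertools
import math

import numpy as np

def qi_optimum(w, K):
    """Brute-force value of the (QI) instance built in Proposition 2,
    reported as min_S [1/(1+w(S)) + (1+w(S))/(1+K)^2]."""
    n = len(w)
    q = np.sqrt(np.array(w, dtype=float))
    gamma = 1.0 / (2.0 * (1.0 + K) ** 2)
    best = math.inf
    for x in itertools.product([0, 1], repeat=n):
        S = [i for i in range(n) if x[i] == 1]
        qS = q[S]
        # closed form: min_y 0.5*y'(I+qq')y - q'y = 1/(2(1+|q|^2)) - 1/2
        val = 1.0 / (2.0 * (1.0 + qS @ qS)) - 0.5 + gamma * (qS @ qS)
        best = min(best, val)
    # undo the dropped constant and the factor 2, as in the proof
    return 2.0 * (best + gamma + 0.5)

w, K = [3, 5, 2], 7  # 5 + 2 = 7, so a subset-sum solution exists
print(abs(qi_optimum(w, K) - 2.0 / (1.0 + K)) < 1e-9)
```

When no subset of w sums to K (e.g., w = [3, 5], K = 4), the computed value stays strictly above 2/(1 + K), matching the claimed equivalence.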

Propositions 1 and 2 suggest that convex hulls of sets with negative cross terms are substantially simpler than those with positive terms.


Rank-one results

It is convenient to formulate convex hulls of sets via conic quadratic constraints, as they are readily supported by modern mixed-integer optimization software. While such representations are easy to obtain via disjunctive programming, the resulting formulations generally have a prohibitive number of variables and constraints, which hampers the performance of solvers. Therefore, it is of interest to find the most compact conic quadratic formulations. In this regard, as well, Z⁺ is significantly more complex than Z_f and Z⁻. Consider the existing results for the simpler sets in the rank-one case, i.e., d₁ = d₂ = 1.
Proposition 3 (Atamtürk and Gómez [5]) cl conv(X_f) = {(x, y, t) ∈ [0,1]² × ℝ³ : t ≥ (y₁ ± y₂)², t(x₁ + x₂) ≥ (y₁ ± y₂)²}.

In particular, for the rank-one case with free continuous variables, cl conv(X_f) is conic quadratic representable in the original space of variables, without the need for additional variables.

Proposition 4 (Atamtürk and Gómez [5]) The set cl conv(X⁻) is described by t ≥ φ(x₁, x₂, y₁, y₂), where

φ(x₁, x₂, y₁, y₂) = (y₁ − y₂)²/x₁ if y₁ ≥ y₂, and (y₁ − y₂)²/x₂ if y₁ ≤ y₂.

In contrast to cl conv(X_f), since the constraints t ≥ (y₁ − y₂)²/xᵢ, i ∈ [2], are not valid for Z⁻, it is unclear how to reformulate cl conv(Z⁻) using conic quadratic constraints in the original space of variables. A conic quadratic representation with two additional variables is given in [8]. In Sect. 3, Corollary 2, we describe cl conv(Z⁺) in the original space for the rank-one case. This description is more complex than that of Z⁻, as it requires four pieces instead of two, and it is not conic quadratic representable. We also provide a compact extended formulation with three additional variables.
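A minimal plain-Python sketch (helper names ours) of evaluating the two-piece function φ above, using the division-by-zero convention from the Notation section:

```python
def phi(x1, x2, y1, y2):
    """Two-piece function phi from Proposition 4, with the x^2/0 convention."""
    def persp(num, den):
        if den == 0:
            return 0.0 if num == 0 else float("inf")
        return num / den

    d = (y1 - y2) ** 2
    return persp(d, x1) if y1 >= y2 else persp(d, x2)

# On integral points, phi coincides with (y1 - y2)^2:
print(phi(1, 1, 3.0, 1.0))    # equals (3-1)^2 = 4
print(phi(1, 0, 2.0, 0.0))    # y2 forced to 0 when x2 = 0
# At fractional x, phi dominates the plain quadratic:
print(phi(0.5, 1, 2.0, 1.0))  # 2.0 > (2-1)^2 = 1
```

The division by x₁ or x₂ is the perspective mechanism again: the bound tightens as the relevant indicator becomes fractional.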


Full-rank results

A description of cl conv(Z⁻) in the original space of variables is given in [8]. Interestingly, it can be expressed as two valid inequalities involving the function φ introduced in Proposition 4.

Proposition 5 (Atamtürk et al. [8]) Set cl conv(Z⁻) is described by two valid inequalities stated in terms of φ.

Convex hull description of Z⁺

In this section, we give ideal convex formulations for Z⁺. When d₁ = d₂ = 1, Z⁺ reduces to the simpler rank-one set X⁺ := {(x, y, t) ∈ I² × ℝ₊ : t ≥ (y₁ + y₂)²}. Set X⁺ is of special interest as it arises naturally in (QI) when Q is a diagonally dominant matrix; see the computations in Sect. 6.1 for details. As we shall see, the convex hulls of Z⁺ and X⁺ are significantly more complicated than those of their complementary sets Z⁻ and X⁻ studied earlier. In Sect. 3.1, we develop an SOCP-representable extended formulation of cl conv(Z⁺). Then, in Sect. 3.2, we derive the explicit form of cl conv(Z⁺) in the original space of variables.

Conic quadratic-representable extended formulation

We start by writing Z⁺ as the disjunction of four convex sets defined by the possible values of the indicator variables; that is, Z⁺ = Z¹₊ ∪ Z²₊ ∪ Z³₊ ∪ Z⁴₊ (6), where Zⁱ₊, i = 1, 2, 3, 4, are convex sets, one for each value of x ∈ {0,1}². Redefining variables in (6), we arrive at the following conic quadratic-representable extended formulation for cl conv(Z⁺) and its rank-one special case cl conv(X⁺).

Proposition 6 The set cl conv(Z⁺) admits a conic quadratic-representable extended formulation obtained from the disjunction (6).

Corollary 1 The set cl conv(X⁺) admits the corresponding rank-one specialization.

This extended formulation is smaller than the one given by Atamtürk et al. [8] for cl conv(Z⁻).


Description in the original space of variables x, y, t

The purpose of this section is to express cl conv(Z⁺) and cl conv(X⁺) in the original space.

Define g(λ) := min_{w∈ℝ²} G(λ, w). Note that as G is SOCP-representable, it is convex. We first prove an auxiliary lemma that will be used in the derivation. We now state and prove the main result in this subsection.
Remark 2 For further intuition, we now comment on the validity of each piece of the description over [0,1]² × ℝ³₊ for Z⁺. Because the first piece can be obtained by dropping the nonnegative cross product term y₁y₂ and then strengthening t ≥ y₁² + y₂² using the perspective reformulation, it is valid everywhere. When x₁ + x₂ < 1 and y₁, y₂ > 0,

yᵢ²/xᵢ + yⱼ²/(1 − xᵢ) > f*₊(x, y; 1, 1) for i ≠ j.

Therefore, the second and the third pieces are not valid on all of the domain [0,1]² × ℝ³₊. If d₁d₂ > 1, the last piece t ≥ f(x, y, x₁ + x₂ − 1; d) is not valid for cl conv(Z⁺) everywhere either, as can be seen by exhibiting a point (x, y, t) ∈ cl conv(Z⁺) that violates the inequality for a suitably small ε > 0 in the construction. At integral points, t ≥ f(x, y, x₁ + x₂ − 1; d) reduces to the original quadratic. Otherwise, although t ≥ f(x, y, x₁ + x₂ − 1; d) appears complicated, the next proposition implies that it is convex over its restricted domain and can, in fact, be stated as an SDP constraint. This result strongly indicates that SOCP-representable relaxations of (QI) may be inadequate to describe the convex hull of the relevant mixed-integer sets, unless a large number of additional variables are added. The proof of Proposition 8 can be found in the Appendix.

Rank-one approximations of Z⁺

We now consider valid inequalities analogous to the ones given in Proposition 5 for Z⁻. Consider the two decompositions of the bivariate quadratic function given by

d₁y₁² + 2y₁y₂ + d₂y₂² = d₁(y₁ + y₂/d₁)² + ((d₁d₂ − 1)/d₁)·y₂² = d₂(y₂ + y₁/d₂)² + ((d₁d₂ − 1)/d₂)·y₁².

Applying the perspective reformulation and Corollary 2 to the separable and pairwise quadratic terms, respectively, one can obtain two simple valid inequalities for Z⁺:

t ≥ d₁ f₁₊(x₁, x₂, y₁, y₂/d₁) + ((d₁d₂ − 1)/d₁)·y₂²/x₂,   (12a)
t ≥ d₂ f₁₊(x₁, x₂, y₁/d₂, y₂) + ((d₁d₂ − 1)/d₂)·y₁²/x₁.   (12b)

The following example shows that the inequalities above do not describe cl conv(Z⁺), highlighting the more complicated structure of cl conv(Z⁺) compared to its complementary set cl conv(Z⁻).
Example 1 Consider Z ).On the one hand, d,
x 1 + x 2 > 1 implies t = f * + (x, y) = f (2/

An SDP relaxation for (QI)
In this section, we give an extended SDP relaxation for (QI) utilizing the convex hull results obtained in the previous section. Introducing a symmetric matrix variable $Y$ standing for $yy^\top$, let us write (QI) with the linear objective $a^\top x + b^\top y + \langle Q, Y \rangle$; we refer to this lifted formulation as (13). Suppose that for a class of PSD matrices $\mathcal{P} \subseteq \mathbb{S}^n_+$ we have an underestimator $f_P(x, y)$ of $y^\top P y$ for each $P \in \mathcal{P}$. Then, since $\langle P, Y \rangle \ge y^\top P y$, we obtain the valid inequality

$$f_P(x, y) - \langle P, Y \rangle \le 0 \tag{14}$$

for (13). For example, if $\mathcal{P}$ is the set of diagonal PSD matrices and $f_P(x, y) = \sum_i P_{ii}\, y_i^2 / x_i$ for $P \in \mathcal{P}$, then inequality (14) is the perspective inequality. Furthermore, since (14) holds for any $P \in \mathcal{P}$, one can take the supremum over all $P \in \mathcal{P}$ to get an optimal valid inequality of the type (14):

$$\sup_{P \in \mathcal{P}} \; f_P(x, y) - \langle P, Y \rangle \le 0. \tag{15}$$

In the example of the perspective reformulation, inequality (15) becomes

$$\sup_{P \succeq 0 \text{ diagonal}} \; \sum_i P_{ii} \left( y_i^2 / x_i - Y_{ii} \right) \le 0,$$

which can be further reduced to the closed form

$$y_i^2 \le Y_{ii} x_i, \quad \forall i \in [n].$$

This leads to the optimal perspective formulation [21]

$$\begin{aligned} \min \;& a^\top x + b^\top y + \langle Q, Y \rangle & \text{(16a)}\\ \text{(OptPersp)} \quad \text{s.t.} \;& Y - yy^\top \succeq 0 & \text{(16b)}\\ & y_i^2 \le Y_{ii} x_i \quad \forall i \in [n] & \text{(16c)}\\ & 0 \le x \le 1, \; y \ge 0. & \text{(16d)} \end{aligned}$$
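As a quick sanity check on this reduction, the coordinate-separable supremum can be evaluated directly: it is $0$ exactly when every perspective inequality $y_i^2 \le Y_{ii} x_i$ holds, and grows without bound otherwise (by scaling the violated diagonal entry of $P$). A minimal numeric sketch; the function name is ours, not from the paper:

```python
import numpy as np

def sup_diagonal_gap(x, y, Y_diag, scale=1e6):
    # sup over diagonal P >= 0 of sum_i P_ii * (y_i^2 / x_i - Y_ii).
    # The sup separates by coordinate: it is 0 if y_i^2 <= Y_ii * x_i
    # for every i, and unbounded (here: 'scale' times the worst
    # violation) otherwise -- which is why (15) collapses to (16c).
    gaps = y**2 / x - Y_diag
    if np.all(gaps <= 1e-12):
        return 0.0
    return scale * gaps.max()

x = np.array([0.5, 1.0])
y = np.array([0.3, 0.4])
Y_ok = y**2 / x          # tight: Y_ii = y_i^2 / x_i
Y_bad = Y_ok - 0.1       # violates the perspective inequality
print(sup_diagonal_gap(x, y, Y_ok), sup_diagonal_gap(x, y, Y_bad))
```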
Han et al. [32] show that OptPersp is equivalent to Shor's SDP relaxation [42] for problem (1). Letting $\mathcal{P}$ be the class of $2 \times 2$ PSD matrices and $f_P(\cdot)$ the function describing the convex hull of the mixed-integer epigraph of $y^\top P y$, one can derive new valid inequalities for (QI). Specifically, using the extended formulations for $f^*_+(x, y; d)$ and $f^*_-(x, y; d)$ describing $\mathrm{cl\,conv}(Z_+)$ and $\mathrm{cl\,conv}(Z_-)$, we have
$$\begin{aligned} f^*_+(x, y; d) = \min_{z,\lambda} \;& \frac{d_1 (y_1 - z_1)^2}{x_1 - \lambda} + \frac{d_2 (y_2 - z_2)^2}{x_2 - \lambda} + \frac{d_1 z_1^2 + 2 z_1 z_2 + d_2 z_2^2}{\lambda} & \text{(17a)}\\ \text{s.t.} \;& z \ge 0 & \text{(17b)}\\ & \max\{0, x_1 + x_2 - 1\} \le \lambda \le \min\{x_1, x_2\}, & \text{(17c)} \end{aligned}$$
and
$$\begin{aligned} f^*_-(x, y; d) = \min_{z,\lambda} \;& \frac{d_1 (y_1 - z_1)^2}{x_1 - \lambda} + \frac{d_2 (y_2 - z_2)^2}{x_2 - \lambda} + \frac{d_1 z_1^2 - 2 z_1 z_2 + d_2 z_2^2}{\lambda} & \text{(18a)}\\ \text{s.t.} \;& z_1 \le y_1, \; z_2 \le y_2 & \text{(18b)}\\ & \max\{0, x_1 + x_2 - 1\} \le \lambda \le \min\{x_1, x_2\}. & \text{(18c)} \end{aligned}$$
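The extended formulation (18) can be evaluated numerically for fixed $(x, y, d)$ by minimizing over $(z, \lambda)$. The sketch below is an illustration only: it assumes $0 \le z \le y$ is the intended domain of (18b), guards the perspective denominators with a small $\varepsilon$, and uses a local solver, so it returns a numerical approximation of $f^*_-$ rather than a certified value:

```python
import numpy as np
from scipy.optimize import minimize

def f_minus(x, y, d, eps=1e-9):
    # Numerically evaluate the extended formulation (18) of f*_-(x, y; d)
    # for fixed (x, y, d): minimize over (z1, z2, lam) with the assumed
    # domain 0 <= z_i <= y_i and max{0, x1+x2-1} <= lam <= min{x1, x2}.
    x1, x2 = x; y1, y2 = y; d1, d2 = d

    def obj(v):
        z1, z2, lam = v
        return (d1 * (y1 - z1)**2 / max(x1 - lam, eps)
                + d2 * (y2 - z2)**2 / max(x2 - lam, eps)
                + (d1 * z1**2 - 2 * z1 * z2 + d2 * z2**2) / max(lam, eps))

    lo, hi = max(0.0, x1 + x2 - 1.0), min(x1, x2)
    res = minimize(obj, [y1 / 2, y2 / 2, (lo + hi) / 2],
                   bounds=[(0.0, y1), (0.0, y2), (lo, hi)],
                   method="L-BFGS-B")
    return float(res.fun)

val = f_minus((0.5, 0.5), (0.3, 0.4), (2.0, 1.0))
print(val)   # nonnegative, since d1 * d2 >= 1 makes the numerators PSD
```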
Since any $2 \times 2$ symmetric PSD matrix $P$ can be rewritten in the form

$$P = p \begin{pmatrix} d_1 & 1\\ 1 & d_2 \end{pmatrix} \quad \text{or} \quad P = p \begin{pmatrix} d_1 & -1\\ -1 & d_2 \end{pmatrix},$$

we can take $f_P(x, y) = p\, f^*_+(x, y; d)$ or $f_P(x, y) = p\, f^*_-(x, y; d)$, correspondingly. Since we have the explicit forms of $f^*_+(\cdot)$ and $f^*_-(\cdot)$, for any fixed $d$, (14) gives a nonlinear valid inequality that can be added to (13). Alternatively, (17) and (18) can be used to reformulate these inequalities as conic quadratic inequalities in an extended space. Moreover, maximizing over the inequalities gives the optimal valid inequalities among the class of $2 \times 2$ PSD matrices stated below. Recall that
$$D := \{d \in \mathbb{R}^2 : d_1 \ge 0, \; d_2 \ge 0, \; d_1 d_2 \ge 1\}.$$
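The normalization behind the set $D$ is easy to verify numerically: dividing a $2 \times 2$ PSD matrix with nonzero off-diagonal by $p = |P_{12}|$ yields one of the two signed forms above, and PSD-ness of $P$ is exactly $d_1 d_2 \ge 1$. A small sketch with a hypothetical helper name:

```python
import numpy as np

def two_by_two_form(P):
    # Normalize a 2x2 symmetric PSD matrix with nonzero off-diagonal:
    # P = p * [[d1, s], [s, d2]] with p = |P12| > 0, s = sign(P12);
    # PSD-ness of P (P11 * P22 >= P12^2) is exactly d1 * d2 >= 1, i.e. d in D.
    p = abs(P[0, 1])
    s = 1.0 if P[0, 1] > 0 else -1.0
    return p, s, (P[0, 0] / p, P[1, 1] / p)

P = np.array([[3.0, 1.5], [1.5, 2.0]])   # PSD: det = 3.75 > 0
p, s, (d1, d2) = two_by_two_form(P)
print(p, s, d1, d2, d1 * d2 >= 1.0)
```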
Proposition 9 For any pair of indices i < j, the following inequalities are valid for (QI):
$$\begin{aligned} & \max_{d \in D} \; f^*_+(x_i, x_j, y_i, y_j; d_1, d_2) - d_1 Y_{ii} - d_2 Y_{jj} - 2 Y_{ij} \le 0, & \text{(19a)}\\ & \max_{d \in D} \; f^*_-(x_i, x_j, y_i, y_j; d_1, d_2) - d_1 Y_{ii} - d_2 Y_{jj} + 2 Y_{ij} \le 0. & \text{(19b)} \end{aligned}$$
Optimal inequalities (19) may be employed effectively if they can be expressed explicitly. We will now show how to write inequalities (19) explicitly using an auxiliary $3 \times 3$ matrix variable $W$.

Proof of Lemma 2
The lemma is proved by means of conic duality. For brevity, the dual variables associated with each constraint are introduced in the formulation below. Writing $f^*_+$ as a conic quadratic minimization problem as in (17), we first express inequality (19a) via an inner minimization problem; taking the dual of the inner minimization, the inequality can be written in terms of the dual variables.
The proof of Lemma 3 is similar and is omitted for brevity. Since both (19a) and (19b) are valid, using (20) and (21) together, one can obtain an SDP relaxation of (QI). While the inequalities in (20) and (21) are quite similar, in general, $W_+$ and $W_-$ do not have to coincide. However, we show below that choosing $W_+ = W_-$, the resulting SDP formulation is still valid and it is at least as strong as the strengthening obtained from the valid inequalities (19).

Let $\mathcal{W}$ be the set of points $(x_1, x_2, y_1, y_2, Y_{11}, Y_{12}, Y_{22})$ such that there exists a $3 \times 3$ matrix $W \succeq 0$ satisfying

$$\begin{aligned} & W_{12} = Y_{12} & \text{(25a)}\\ & (Y_{11} - W_{11})(x_1 - W_{33}) \ge (y_1 - W_{31})^2, \quad W_{11} \le Y_{11} & \text{(25b)}\\ & (Y_{22} - W_{22})(x_2 - W_{33}) \ge (y_2 - W_{32})^2, \quad W_{22} \le Y_{22} & \text{(25c)}\\ & x_1 + x_2 - 1 \le W_{33}, \quad W_{33} \le x_1, \quad W_{33} \le x_2 & \text{(25d)}\\ & 0 \le W_{31} \le y_1, \quad 0 \le W_{32} \le y_2. & \text{(25e)} \end{aligned}$$

Then, using $\mathcal{W}$ for every pair of indices, we can define the strengthened SDP formulation
$$\begin{aligned} \min \;& a^\top x + b^\top y + \langle Q, Y \rangle & \text{(26a)}\\ \text{(OptPairs)} \quad \text{s.t.} \;& Y - yy^\top \succeq 0 & \text{(26b)}\\ & (x_i, x_j, y_i, y_j, Y_{ii}, Y_{ij}, Y_{jj}) \in \mathcal{W} \quad \forall i < j & \text{(26c)}\\ & 0 \le x \le 1, \; y \ge 0. & \text{(26d)} \end{aligned}$$
Proposition 10 OptPairs is a valid convex relaxation of (QI) and every feasible solution to it satisfies all valid inequalities (19).

Proof To see that OptPairs is a valid relaxation, consider a feasible solution $(x, y)$ of (QI) and let $Y = yy^\top$. For $i < j$, if $x_i = x_j = 1$, constraint (26c) is satisfied with
$$W = \begin{pmatrix} Y_{ii} & Y_{ij} & y_i\\ Y_{ij} & Y_{jj} & y_j\\ y_i & y_j & 1 \end{pmatrix}.$$
Otherwise, without loss of generality, one may assume x i = 0.

It follows that $Y_{ii} = y_i^2 = Y_{ij} = y_i y_j = 0$. Then, constraint (26c) is satisfied with $W = 0$. $\square$
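The membership certificate used in this proof for the case $x_i = x_j = 1$ can be checked mechanically against the constraints (25) as we read them; the helper below is illustrative, not from the paper:

```python
import numpy as np

def in_W(x1, x2, y1, y2, Y11, Y12, Y22, W, tol=1e-9):
    # Check the constraints (25) defining the set W for a candidate 3x3 W.
    ok = abs(W[0, 1] - Y12) <= tol
    ok &= (Y11 - W[0, 0]) * (x1 - W[2, 2]) >= (y1 - W[0, 2])**2 - tol
    ok &= (Y22 - W[1, 1]) * (x2 - W[2, 2]) >= (y2 - W[1, 2])**2 - tol
    ok &= W[0, 0] <= Y11 + tol and W[1, 1] <= Y22 + tol
    ok &= x1 + x2 - 1 - tol <= W[2, 2] <= min(x1, x2) + tol
    ok &= -tol <= W[0, 2] <= y1 + tol and -tol <= W[1, 2] <= y2 + tol
    ok &= np.linalg.eigvalsh(W).min() >= -tol       # W must be PSD
    return bool(ok)

# Integer point with x_i = x_j = 1: take Y = y y^T and W as in the proof
y1, y2 = 0.3, 0.7
W = np.array([[y1 * y1, y1 * y2, y1],
              [y1 * y2, y2 * y2, y2],
              [y1,      y2,      1.0]])
print(in_W(1.0, 1.0, y1, y2, y1 * y1, y1 * y2, y2 * y2, W))
```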

Comparison of convex relaxations
In this section, we compare the strength of OptPairs with other convex relaxations of (QI). The perspective relaxation and the optimal perspective relaxation OptPersp for (QI) are well known.

Proposition 11 OptPairs is at least as strong as OptPersp.
Proof Note that (26c) includes the constraints

$$\begin{pmatrix} Y_{ii} - W_{11} & y_i - W_{31}\\ y_i - W_{31} & x_i - W_{33} \end{pmatrix} \succeq 0, \qquad \begin{pmatrix} W_{11} & W_{31}\\ W_{31} & W_{33} \end{pmatrix} \succeq 0,$$

corresponding to (25b)-(25c). Summing the two matrices yields

$$\begin{pmatrix} Y_{ii} & y_i\\ y_i & x_i \end{pmatrix} \succeq 0.$$

Thus, the perspective constraints $Y_{ii} x_i \ge y_i^2$ are implied. $\square$
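The sum-of-PSD-blocks argument in this proof can be replayed numerically: whenever the two $2 \times 2$ blocks are PSD, their sum certifies the perspective inequality. A small illustration with a hypothetical helper:

```python
import numpy as np

def perspective_from_blocks(Yii, yi, xi, W11, W31, W33, tol=1e-9):
    # If both [[Yii - W11, yi - W31], [yi - W31, xi - W33]] and
    # [[W11, W31], [W31, W33]] are PSD, their sum [[Yii, yi], [yi, xi]]
    # is PSD, which is exactly the perspective inequality yi^2 <= Yii * xi.
    A = np.array([[Yii - W11, yi - W31], [yi - W31, xi - W33]])
    B = np.array([[W11, W31], [W31, W33]])
    assert np.linalg.eigvalsh(A).min() >= -tol   # block from (25b)
    assert np.linalg.eigvalsh(B).min() >= -tol   # block from W >= 0
    S = A + B
    return S[0, 1]**2 <= S[0, 0] * S[1, 1] + tol

print(perspective_from_blocks(Yii=0.5, yi=0.4, xi=0.6,
                              W11=0.2, W31=0.2, W33=0.3))
```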
In the context of linear regression, Atamtürk and Gómez [5] study the convex hull of the epigraph of a rank-one quadratic with indicators

$$X_f = \left\{(x, y, t) \in \{0, 1\}^n \times \mathbb{R}^{n+1} : t \ge \Big(\sum_{i=1}^n y_i\Big)^2, \; y_i (1 - x_i) = 0, \; i \in [n]\right\},$$

where the continuous variables are unrestricted in sign.
Their extended SDP formulation based on $\mathrm{cl\,conv}(X_f)$ leads to the following relaxation for (QI):

$$\begin{aligned} \min \;& a^\top x + b^\top y + \langle Q, Y \rangle & \text{(27a)}\\ \text{s.t.} \;& Y - yy^\top \succeq 0 & \text{(27b)}\\ & y_i^2 \le Y_{ii} x_i \quad \forall i & \text{(27c)}\\ \text{(OptRankOne)} \quad & \begin{pmatrix} x_i + x_j & y_i & y_j\\ y_i & Y_{ii} & Y_{ij}\\ y_j & Y_{ij} & Y_{jj} \end{pmatrix} \succeq 0 \quad \forall i < j & \text{(27d)}\\ & y \ge 0, \; 0 \le x \le 1. & \text{(27e)} \end{aligned}$$
With the additional constraints (27d), it is immediate that OptRankOne is stronger than OptPersp. The following proposition compares OptRankOne and OptPairs.


Proposition 12 OptPairs is at least as strong as OptRankOne.

Proof It suffices to show that for each pair $i < j$, constraint (26c) of OptPairs implies (27d) of OptRankOne.
Example 2 For $n = 2$, OptPairs is the ideal (convex) formulation of (QI). For the instance of (QI) with

$$a = \begin{pmatrix} 1\\ 5 \end{pmatrix}, \qquad b = \begin{pmatrix} -8\\ -5 \end{pmatrix},$$

each of the other convex relaxations has a fractional optimal solution, as demonstrated in Table 1. Notably, the fractional $x$ values for OptPersp and OptRankOne are far from their optimal integer values. A common approach to quickly obtain feasible solutions to NP-hard problems is to round a solution obtained from a suitable convex relaxation. This example indicates that feasible solutions obtained in this way from formulation OptPairs may be of higher quality than those obtained from weaker relaxations; our computations in Sect. 6.2 further corroborate this intuition.

An alternative way of constructing strong relaxations for (QI) is to decompose the quadratic function $y^\top Q y$ into a sum of univariate and bivariate convex quadratic functions and utilize the convex hull results of Sect. 3 for the $2 \times 2$ quadratics

$$q_{ij}(y_i, y_j) = \beta_{ij} y_i^2 \pm 2 y_i y_j + \gamma_{ij} y_j^2,$$

where $\alpha_{ij} > 0$, for each term; see [25] for such an approach. Specifically, let

$$y^\top Q y = y^\top D y + \sum_{(i,j) \in P} \alpha_{ij}\, q_{ij}(y_i, y_j) + \sum_{(i,j) \in N} \alpha_{ij}\, q_{ij}(y_i, y_j) + y^\top R y,$$

where $D$ is a diagonal PSD matrix, $P$/$N$ is the set of quadratics $q_{ij}(\cdot)$ with positive/negative off-diagonals, and $R$ is a PSD remainder matrix. Applying the convex hull description for each univariate and bivariate term, we obtain the following convex relaxation for (QI):
$$\begin{aligned} \min \;& a^\top x + b^\top y + \sum_{i=1}^n D_{ii}\, y_i^2 / x_i + \sum_{(i,j) \in P} \alpha_{ij}\, f^*_+(x_i, x_j, y_i, y_j; \beta_{ij}, \gamma_{ij}) \\ \text{(Decomp)} \quad & \; + \sum_{(i,j) \in N} \alpha_{ij}\, f^*_-(x_i, x_j, y_i, y_j; \beta_{ij}, \gamma_{ij}) + y^\top R y \\ \text{s.t.} \;& 0 \le x \le 1, \; y \ge 0. \end{aligned}$$
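One concrete (and deliberately naive) way to produce such a decomposition when $Q$ is diagonally dominant is to give every nonzero off-diagonal its own pairwise term with $\beta_{ij} = \gamma_{ij} = 1$ and $\alpha_{ij} = |Q_{ij}|$, absorb the leftover diagonal into $D$, and take $R = 0$. The sketch below verifies the identity on a sample matrix; the helper names are ours:

```python
import numpy as np

def decompose(Q):
    # Split y'Qy into diagonal + pairwise 2x2 quadratics (one of many
    # possible decompositions): alpha_ij = |Q_ij|, beta = gamma = 1,
    # D = diag(Q) - row sums of |off-diagonals|, R = 0. Requires Q to
    # be diagonally dominant so that D >= 0.
    n = Q.shape[0]
    pairs = []                      # (i, j, alpha, sign of Q_ij)
    diag_used = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            if Q[i, j] != 0:
                pairs.append((i, j, abs(Q[i, j]), np.sign(Q[i, j])))
                diag_used[i] += abs(Q[i, j])
                diag_used[j] += abs(Q[i, j])
    return np.diag(Q) - diag_used, pairs

def evaluate(D, pairs, y):
    val = float(D @ (y * y))
    for i, j, a, s in pairs:
        val += a * (y[i]**2 + 2 * s * y[i] * y[j] + y[j]**2)
    return val

Q = np.array([[2.0, -0.5, 1.0], [-0.5, 2.0, 0.3], [1.0, 0.3, 1.5]])
D, pairs = decompose(Q)
y = np.random.default_rng(0).random(3)
print(np.isclose(evaluate(D, pairs, y), y @ Q @ y))
```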
The next proposition shows that OptPairs dominates Decomp. Similar duality arguments were used in [21,25,45].

Proposition 13 OptPairs is at least as strong as Decomp. Moreover, there exists a decomposition for which Decomp is equivalent to OptPairs.

Proof We prove the result via the minimax theory of concave-convex programs and show that Decomp can be viewed as a dual formulation of OptPairs. To make the dual relationship more transparent, we define $z$, $\lambda_{ij} = W^{ij}_{33}$ and

$$\Theta = \left\{(x, y, z, \lambda) \in [0, 1]^n \times \mathbb{R}^n_+ \times \mathbb{R}^{n(n-1)} \times \mathbb{R}^{n(n-1)/2} : 0 \le z^{ij}, \ldots \right\}.$$

Then, OptPairs can be rewritten as

$$\begin{aligned} \min_{x,y,z,\lambda} \; \min_{Y,W} \;& a^\top x + b^\top y + \langle Q, Y \rangle & \text{(28a)}\\ \text{s.t.} \;& Y \succeq yy^\top & (R)\\ & Y_{ii} - \frac{\big(y_i - z^{ij}_i\big)^2}{x_i - \lambda_{ij}} \ge W^{ij}_{11} \ge \frac{\big(z^{ij}_i\big)^2}{\lambda_{ij}} \quad \forall i < j & (\ell^{ij}_i, u^{ij}_i)\\ & Y_{jj} - \frac{\big(y_j - z^{ij}_j\big)^2}{x_j - \lambda_{ij}} \ge W^{ij}_{22} \ge \frac{\big(z^{ij}_j\big)^2}{\lambda_{ij}} \quad \forall i < j & (\ell^{ij}_j, u^{ij}_j)\\ & W^{ij}_{12} = Y_{ij}, \; W^{ij} \succeq 0 \quad \forall i < j, & (Q^{ij}) \end{aligned}$$

together with the bounds defining $\Theta$. Taking the SDP dual with respect to the inner minimization problem, one arrives at

$$\begin{aligned} \min_{(x,y,z,\lambda) \in \Theta} \; \max_{R, Q^{ij}, \ell, u} \;& a^\top x + b^\top y + \sum_{i<j} \left[ \ell^{ij}_i \frac{\big(z^{ij}_i\big)^2}{\lambda_{ij}} + \ell^{ij}_j \frac{\big(z^{ij}_j\big)^2}{\lambda_{ij}} + u^{ij}_i \frac{\big(y_i - z^{ij}_i\big)^2}{x_i - \lambda_{ij}} + u^{ij}_j \frac{\big(y_j - z^{ij}_j\big)^2}{x_j - \lambda_{ij}} + \frac{1}{\lambda_{ij}}\, z^{ij\top} Q^{ij} z^{ij} \right] + \langle R, yy^\top \rangle & \text{(29a)}\\ \text{s.t.} \;& Q_{ii} = R_{ii} + \sum_{j > i} u^{ij}_i + \sum_{j < i} u^{ji}_i \quad \forall i & (Y_{ii})\\ & Q_{ij} = R_{ij} + Q^{ij}_{12} \quad \forall i < j & (Y_{ij})\\ & 0 = \ell^{ij}_i - u^{ij}_i + Q^{ij}_{11} \quad \forall i < j & (W^{ij}_{11})\\ & 0 = \ell^{ij}_j - u^{ij}_j + Q^{ij}_{22} \quad \forall i < j & (W^{ij}_{22})\\ & R \succeq 0, \; Q^{ij} \succeq 0, \; \ell^{ij} \ge 0, \; u^{ij} \ge 0 \quad \forall i < j. & \text{(29b)} \end{aligned}$$

Since one can take the diagonal elements of $Y$ and $W^{ij}$ large enough, there exists a strictly feasible solution to the inner minimization of (28), which implies that strong duality holds and, thus, (28) is equivalent to (29). Next, substituting out $u^{ij}$ in (29a), one obtains an equivalent problem in matrices $\tilde{Q}^{ij}$. By changing variables $Q^{ij} \leftarrow \tilde{Q}^{ij}$, one arrives at a problem (30), which is equivalent to (29). Notice that (30b) is, in fact, tight. Thus, (30b), (30c), and (30d) define a valid decomposition of $Q$. Moreover, $\|Q^{ij}\|_2, \|R\|_2 \le \mathrm{Trace}(Q)$ by (30b), which implies that the feasible region of the inner maximization problem is compact. Therefore, according to Von Neumann's Minimax Theorem [43], one can interchange max and min without loss of equivalence and arrive at

$$\max_{R, Q^{ij} : \text{(30b)-(30d)}} \; \min_{z^{ij}, \lambda_{ij}} \; \sum_{i<j} \left[ Q^{ij}_{11} \frac{\big(y_i - z^{ij}_i\big)^2}{x_i - \lambda_{ij}} + Q^{ij}_{22} \frac{\big(y_j - z^{ij}_j\big)^2}{x_j - \lambda_{ij}} + \cdots \right],$$

where the inner minimization problem is of the form Decomp from Proposition 6. $\square$

Computations
In this section, we report on computational experiments performed to test the effectiveness of the formulations derived in the paper. Section 6.1 is devoted to synthetic portfolio optimization instances, where the matrix $Q$ is diagonally dominant and the conic quadratic-representable extended formulations developed in Sect. 3 can be readily used in a branch-and-bound algorithm without the need for an SDP constraint. The instances here are generated similarly to [4], and serve to check the incremental value of convexifications based on $Z_+$ compared to those based only on $Z_-$. In Sect. 6.2, we use real instances derived from stock market returns and test the SDP relaxation OptPairs derived in Sect. 4, as well as mixed-integer optimization approaches based on decompositions of the quadratic matrices.


Synthetic instances-the diagonally dominant case

We consider a standard cardinality-constrained mean-variance portfolio optimization problem of the form

$$\min_{x,y} \; \left\{ y^\top Q y : b^\top y \ge r, \; \mathbf{1}^\top x \le k, \; 0 \le y \le x, \; x \in \{0,1\}^n \right\} \tag{31}$$
where $Q$ is the covariance matrix of returns, $b \in \mathbb{R}^n$ is the vector of the expected returns, $r$ is the target return, and $k$ is the maximum number of securities in the portfolio. All experiments are conducted using the Mosek 9.1 solver on a laptop with a 2.30 GHz Intel® Core™ i9-9880H CPU and 64 GB main memory. The time limit is set to one hour and all other settings are kept at their Mosek defaults.


Instance generation

We adopt the method used in [4] to generate the instances. The instances are designed to control the integrality gap and the effectiveness of the perspective formulation. Let $\rho \ge 0$ be a parameter controlling the ratio of the magnitude of the positive off-diagonal entries of $Q$ to the magnitude of the negative off-diagonal entries of $Q$.

Lower values of $\rho$ lead to higher integrality gaps. Let $\delta \ge 0$ be the parameter controlling the diagonal dominance of $Q$. The perspective formulation is more effective in closing the integrality gap for higher values of $\delta$. The following steps are followed to generate the instances:

• Construct an auxiliary matrix $\tilde{Q}$ by drawing a factor covariance matrix $G \in \mathbb{R}^{20 \times 20}$ uniformly from $[-1, 1]$, and generating an exposure matrix $H \in \mathbb{R}^{n \times 20}$ such that $H_{ij} = 0$ with probability 0.75, and $H_{ij}$ drawn uniformly from $[0, 1]$ otherwise. Let $\tilde{Q} = H G G^\top H^\top$.
• Construct the off-diagonal entries of $Q$: for $i \ne j$, set $Q_{ij} = \tilde{Q}_{ij}$ if $\tilde{Q}_{ij} < 0$, and set $Q_{ij} = \rho \tilde{Q}_{ij}$ otherwise. Positive off-diagonal elements of $Q$ are thus scaled by a factor of $\rho$.
• Construct the diagonal entries of $Q$: pick $\mu_i$ uniformly from $[0, \delta \bar{\sigma}]$, where $\bar{\sigma} = \frac{1}{n} \sum_{i \ne j} |Q_{ij}|$. Let $Q_{ii} = \sum_{j \ne i} |Q_{ij}| + \mu_i$. Note that if $\delta = 0$, then $\mu_i = 0$ and the matrix $Q$ is already diagonally dominant.
• Construct $b$, $r$, $k$: $b_i$ is drawn uniformly from $[0.5 Q_{ii}, 1.5 Q_{ii}]$, $r = 0.25 \sum_{i=1}^n b_i$, and $k = n/5$.
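The generation steps above can be sketched in code as follows; the helper name and the choice of integer rounding for $k = n/5$ are our assumptions:

```python
import numpy as np

def generate_instance(n, rho, delta, rng):
    G = rng.uniform(-1.0, 1.0, (20, 20))                 # factor covariance
    H = np.where(rng.random((n, 20)) < 0.75, 0.0,
                 rng.uniform(0.0, 1.0, (n, 20)))         # sparse exposures
    Q = H @ G @ G.T @ H.T                                # auxiliary matrix Q~
    Q = np.where(Q < 0, Q, rho * Q)                      # scale positives by rho
    np.fill_diagonal(Q, 0.0)
    off = np.abs(Q).sum(axis=1)                          # sum_{j != i} |Q_ij|
    sigma_bar = off.sum() / n
    mu = rng.uniform(0.0, delta * sigma_bar, n)
    np.fill_diagonal(Q, off + mu)                        # Q_ii = sum |Q_ij| + mu_i
    b = rng.uniform(0.5 * np.diag(Q), 1.5 * np.diag(Q))
    r = 0.25 * b.sum()
    k = n // 5                                           # assumed rounding of n/5
    return Q, b, r, k

Q, b, r, k = generate_instance(40, rho=0.3, delta=0.1,
                               rng=np.random.default_rng(1))
print(Q.shape, k)   # by construction Q is diagonally dominant, hence PSD
```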
Matrices $Q$ generated in this way have only 20.1% of the off-diagonal entries negative on average.

Formulations
With the above setting, the portfolio optimization problem can be rewritten as
$$\begin{aligned} \min \;& \sum_{i \in [n]} \mu_i z_i + \sum_{Q_{ij} < 0} |Q_{ij}|\, t_{ij} + \sum_{Q_{ij} > 0} |Q_{ij}|\, t_{ij} \\ \text{s.t.} \;& (x_i, y_i, z_i) \in X_0 \quad \forall i \in N,\\ & (x_i, x_j, y_i, y_j, t_{ij}) \in Z_- \quad \forall i > j : Q_{ij} < 0,\\ & (x_i, x_j, y_i, y_j, t_{ij}) \in Z_+ \quad \forall i > j : Q_{ij} > 0,\\ & b^\top y \ge r, \; \mathbf{1}^\top x \le k, \end{aligned} \tag{32}$$
where $Z_+$ and $Z_-$ are defined as before with $d_1 = d_2 = 1$. Four strong formulations are tested by replacing the mixed-integer sets with their convex hulls: (1) ConicQuadPersp, replacing $X_0$ with $\mathrm{cl\,conv}(X_0)$ using the perspective reformulation; (2) ConicQuadN, replacing $X_0$ and $Z_-$ with $\mathrm{cl\,conv}(X_0)$ and $\mathrm{cl\,conv}(Z_-)$ using the corresponding extended formulation; (3) ConicQuadP, replacing $X_0$ and $Z_+$ with $\mathrm{cl\,conv}(X_0)$ and $\mathrm{cl\,conv}(Z_+)$, respectively; and (4) ConicQuadP+N, replacing $X_0$, $Z_-$, and $Z_+$ with $\mathrm{cl\,conv}(X_0)$, $\mathrm{cl\,conv}(Z_-)$, and $\mathrm{cl\,conv}(Z_+)$, correspondingly.
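With $Q_{ii} = \sum_{j \ne i} |Q_{ij}| + \mu_i$, the objective in (32) rests on the exact identity $y^\top Q y = \sum_i \mu_i y_i^2 + \sum_{Q_{ij} < 0} |Q_{ij}| (y_i - y_j)^2 + \sum_{Q_{ij} > 0} |Q_{ij}| (y_i + y_j)^2$, i.e., pairwise terms with $d_1 = d_2 = 1$. A quick numeric check of this identity (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.uniform(-1.0, 1.0, (n, n)); A = (A + A.T) / 2    # mixed-sign off-diagonals
np.fill_diagonal(A, 0.0)
mu = rng.uniform(0.0, 0.5, n)
Q = A.copy()
np.fill_diagonal(Q, np.abs(A).sum(axis=1) + mu)          # Q_ii = sum |Q_ij| + mu_i

y = rng.random(n)
val = float(mu @ (y * y))                                # separable mu_i y_i^2 part
for i in range(n):
    for j in range(i + 1, n):
        if Q[i, j] < 0:
            val += abs(Q[i, j]) * (y[i] - y[j])**2       # Z_- pairwise terms
        elif Q[i, j] > 0:
            val += abs(Q[i, j]) * (y[i] + y[j])**2       # Z_+ pairwise terms
print(abs(val - y @ Q @ y))                              # numerically zero
```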

Results
Table 2 shows the results for matrices with varying diagonal dominance $\delta$ for $\rho = 0.3$. Each row in the table represents the average over five instances generated with the same parameters. Table 2 displays the dimension of the problem $n$, the initial gap (igap), the root gap improvement (rimp), the number of branch-and-bound nodes (nodes), the elapsed time in seconds (time), and the end gap reported by the solver at termination (egap). In addition, in brackets, we report the number of instances solved to optimality within the time limit. The initial gap is computed as $\text{igap} = \frac{\text{obj}_{\text{best}} - \text{obj}_{\text{cont}}}{|\text{obj}_{\text{best}}|} \times 100$, where $\text{obj}_{\text{best}}$ is the objective value of the best feasible solution found and $\text{obj}_{\text{cont}}$ is the objective value of the natural continuous relaxation of (31), i.e., the one obtained by dropping the integrality constraints; rimp is computed as $\text{rimp} = \frac{\text{obj}_{\text{relax}} - \text{obj}_{\text{cont}}}{\text{obj}_{\text{best}} - \text{obj}_{\text{cont}}} \times 100$, where $\text{obj}_{\text{relax}}$ is the objective value of the continuous relaxation of the corresponding formulation.
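The two gap measures can be written as one-liners; the function names are ours:

```python
def igap(obj_best, obj_cont):
    # initial integrality gap (%), relative to the best feasible value
    return (obj_best - obj_cont) / abs(obj_best) * 100.0

def rimp(obj_relax, obj_best, obj_cont):
    # root improvement (%): share of the initial gap closed at the root
    return (obj_relax - obj_cont) / (obj_best - obj_cont) * 100.0

print(igap(10.0, 8.0), rimp(9.5, 10.0, 8.0))   # 20.0 75.0
```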

In Table 2, as expected, ConicQuadPersp has the worst performance in terms of both root gap and end gap, as well as solution time. It can only solve instances with dimension $n = 40$ and some instances with dimension $n = 60$ to optimality. The rimp of ConicQuadPersp is less than 10% when the diagonal dominance is small. This reflects the fact that ConicQuadPersp provides strengthening only for the diagonal terms. ConicQuadN performs better than ConicQuadPersp, with rimp around 10%-25%, and it can solve all low-dimensional instances and most instances of dimension $n = 60$. However, ConicQuadN is still unable to solve high-dimensional instances effectively. ConicQuadP performs much better than ConicQuadN for the instances considered, resulting in significantly stronger root improvements (between 70% and 80% on average). Moreover, ConicQuadP can solve almost all instances to near-optimality for $n = 80$. For the instances that ConicQuadP is unable to solve to optimality, the average end gap is less than 5%. By strengthening both the negative and positive off-diagonal terms, ConicQuadP+N provides the best performance, with rimp above 90%. ConicQuadP+N can solve all instances, and most of them are solved within 10 min. Finally, observe that as the diagonal dominance increases, the performance of all formulations improves. Specifically, larger diagonal dominance results in more instances solved to optimality, smaller egap, and shorter solution time for all formulations. For these instances, on average, the gap improvement is raised from 50.69% to 92.90% by incorporating strengthening from the off-diagonal coefficients.

Table 3 displays the computational results for different values of $\rho$ with fixed $\delta = 0.1$. The relative comparison of the formulations is similar to that discussed before, with ConicQuadP+N resulting in the best performance. As $\rho$ increases, the performance of ConicQuadN deteriorates in terms of rimp while the performance of ConicQuadP improves, as expected. The performance of ConicQuadP+N also improves for high values of $\rho$, and it always results in a significant improvement compared to the other formulations for all instances. For these instances, on average, the gap improvement is raised from 9.77% to 85.38% by incorporating strengthening from the off-diagonal coefficients.

In summary, we conclude that the convexifications for $Z_+$ complement those previously obtained for $Z_-$, and together they result in significantly higher root gap improvement over the simpler perspective relaxation. For the experiments in this section, we use the results of Sect. 3 to convexify the pairwise quadratic terms, but do not utilize the more sophisticated SDP formulations of Sect. 4. For the instances in this section, the optimal perspective formulation [21,45] achieves close to 100% root improvement, and all the mixed-integer optimization problems are solved in a few seconds. Moreover, the new convex formulation OptPairs produces integer (thus optimal) solutions in all instances. In the next section, we consider these stronger conic formulations for the more realistic and challenging instances.


Real instances-the general case

Now using real stock market data, we consider the portfolio index tracking problem of the form

$$\begin{aligned} \min \;& (y - y_B)^\top Q (y - y_B) & \text{(IT)}\\ \text{s.t.} \;& \mathbf{1}^\top y = 1, \; \mathbf{1}^\top x \le k\\ & 0 \le y \le x, \; x \in \{0,1\}^n, \end{aligned}$$

where $y_B \in \mathbb{R}^n$ is a benchmark index portfolio, $Q$ is the covariance matrix of security returns, and $k$ is the maximum number of securities in the portfolio. The (continuous) conic formulations are solved using Mosek 9.1 and the mixed-integer formulations are solved using CPLEX 12.8. The experiments are conducted on a laptop with a 1.80 GHz Intel® Core™ i7 CPU and 16 GB main memory. The solver time limit is set to 1200 s and all other settings are kept at their default values.

Instance generation
We use the daily stock return data provided by Boris Marjanovic on Kaggle1 to compute the covariance matrix $Q$. Specifically, given a desired start date (either 1/1/2010 or 1/1/2015 in our computations), we compute the sample covariance matrix based on the stocks with available data in at least 99% of the days since the start date (returns for missing data are set to 0). The resulting covariance matrices are available at https://sites.google.com/usc.edu/gomez/data. We then generate the instances as follows:


• we randomly sample an $n \times n$ covariance matrix $Q$ corresponding to $n$ stocks, and
• we draw each element of $y_B$ uniformly from $[0, 1]$, and then scale $y_B$ so that $\mathbf{1}^\top y_B = 1$.



Convex relaxations
The natural convex relaxation of (IT) always yields a trivial lower bound of 0, as it is possible to set $x = y = y_B$. Thus, we do not report results concerning the natural relaxation. Instead, we consider the optimal perspective relaxation OptPersp of [21]:

$$\begin{aligned} \min_{x,y,Y} \;& y_B^\top Q y_B - 2 y_B^\top Q y + \langle Q, Y \rangle & \text{(34a)}\\ \text{s.t.} \;& Y - yy^\top \succeq 0 & \text{(34b)}\\ \text{(OptPersp)} \quad & y_i^2 \le Y_{ii} x_i \quad \forall i \in [n] & \text{(34c)}\\ & 0 \le x \le 1, \; y \ge 0 & \text{(34d)}\\ & \mathbf{1}^\top y = 1, \; \mathbf{1}^\top x \le k & \text{(34e)} \end{aligned}$$
and the proposed OptPairs exploiting the off-diagonal elements of $Q$:

$$\begin{aligned} \min_{x,y,Y,W} \;& y_B^\top Q y_B - 2 y_B^\top Q y + \langle Q, Y \rangle\\ \text{s.t.} \;& Y - yy^\top \succeq 0\\ & W^{ij} \succeq 0 \quad \forall i < j\\ & (Y_{ii} - W^{ij}_{11})(x_i - W^{ij}_{33}) \ge (y_i - W^{ij}_{31})^2, \quad W^{ij}_{11} \le Y_{ii} \quad \forall i < j\\ \text{(OptPairs)} \quad & (Y_{jj} - W^{ij}_{22})(x_j - W^{ij}_{33}) \ge (y_j - W^{ij}_{32})^2, \quad W^{ij}_{22} \le Y_{jj} \quad \forall i < j\\ & W^{ij}_{33} \ge x_i + x_j - 1, \quad W^{ij}_{33} \le x_i, \quad W^{ij}_{33} \le x_j \quad \forall i < j\\ & 0 \le W^{ij}_{31} \le y_i, \quad 0 \le W^{ij}_{32} \le y_j, \quad W^{ij}_{12} = Y_{ij} \quad \forall i < j\\ & 0 \le x \le 1, \; y \ge 0\\ & \mathbf{1}^\top y = 1, \; \mathbf{1}^\top x \le k. \end{aligned}$$
As pointed out in Example 2, formulation OptPairs may yield high-quality feasible solutions by rounding. Therefore, for each relaxation, we consider a simple rounding heuristic to obtain feasible solutions to (IT): given an optimal solution (x̄, ȳ) to the continuous relaxation, we fix x_i = 1 for the k largest values of x̄ and the remaining x_i = 0, and re-solve the continuous relaxation to compute y.
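A minimal sketch of this rounding heuristic follows. The re-solve step of the paper requires an optimization solver; here it is crudely approximated by restricting ȳ to the selected support and renormalizing, which is an illustrative placeholder only, not the authors' procedure:

```python
import numpy as np

def round_topk(x_bar, y_bar, k):
    """Fix x_i = 1 for the k largest entries of x_bar, 0 elsewhere."""
    x = np.zeros_like(x_bar)
    support = np.argsort(x_bar)[-k:]   # indices of the k largest values
    x[support] = 1.0

    # Placeholder for re-solving the continuous relaxation with x fixed:
    # restrict y_bar to the support and renormalize so that 1^T y = 1
    # and 0 <= y <= x still hold (illustrative only).
    y = np.where(x > 0, y_bar, 0.0)
    if y.sum() > 0:
        y /= y.sum()
    return x, y
```

For example, with x̄ = (0.9, 0.1, 0.7, 0.3) and k = 2, the heuristic selects stocks 1 and 3 and redistributes the corresponding weights of ȳ over that support.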

Exact mixed-integer optimization approaches
We also consider three mixed-integer optimization approaches, each associated with a different convex relaxation. The first one is the Natural relaxation corresponding to the mixed-integer quadratic formulation (IT).
The second one is the corresponding OptPersp formulation

min  y_B^T Q y_B − 2 y_B^T Q y + y^T R y + Σ_{i=1}^{n} D_ii t_i     (35a)
s.t. t_i x_i ≥ y_i^2,  i ∈ [n]                                      (35b)
     1^T y = 1,  1^T x ≤ k                                          (35c)
     0 ≤ y ≤ x,  x ∈ {0, 1}^n,                                      (35d)
where D + R = Q and R are the dual variables associated with constraint (34b). The third one is the OptPairs formulation based on the decomposition

min  y_B^T Q y_B − 2 y_B^T Q y + y^T R y + Σ_{i<j} t_ij     (36a)
s.t. t_ij ≥ Q^{ij}_ii y_i^2 + Q^{ij}_jj y_j^2 + 2 Q^{ij}_ij y_i y_j   ∀i < j   (36b)
     1^T y = 1,  1^T x ≤ k                                            (36c)
     0 ≤ y ≤ x,  x ∈ {0, 1}^n,                                        (36d)

where matrix R is the dual variable associated with constraint Y − y y^T ⪰ 0, and Q^{ij} are the dual variables associated with constraints W^{ij} ⪰ 0. The formulation is then obtained from the SOCP-representable convexification of constraints (36b) using Proposition 6 (if Q^{ij}_ij ≥ 0) or Remark 1 (if Q^{ij}_ij < 0):
t^{ij}_i λ^{ij} ≥ Q^{ij}_ii (y_i − z^{ij}_i)^2,  λ^{ij} ≤ x_i,  0 ≤ z^{ij}_i ≤ y_i   ∀i < j
t^{ij}_j (x_j − λ^{ij}) ≥ Q^{ij}_jj (y_j − z^{ij}_j)^2,  0 ≤ z^{ij}_j ≤ y_j   ∀i < j
t^{ij}_{ij} λ^{ij} ≥ Q^{ij}_ii (z^{ij}_i)^2 + Q^{ij}_jj (z^{ij}_j)^2 + 2 Q^{ij}_ij z^{ij}_i z^{ij}_j   ∀i < j
1^T y = 1,  1^T x ≤ k
0 ≤ y ≤ x,  x ∈ {0, 1}^n.
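The decomposition underlying (36) can be verified numerically: writing Q = R + Σ_{i<j} Q^{ij}, with each Q^{ij} supported on rows and columns {i, j}, gives y^T Q y = y^T R y + Σ_{i<j} y^T Q^{ij} y term by term. A small sketch with assumed (not dual-optimal) pairwise matrices:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n = 4
y = rng.uniform(size=n)

# Assumed pairwise PSD pieces Q^{ij}, each supported on rows/cols {i, j}.
pieces = {}
total = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    B = rng.standard_normal((2, 2))
    S = B @ B.T                                # 2x2 PSD block
    Qij = np.zeros((n, n))
    Qij[np.ix_([i, j], [i, j])] = S            # embed on coordinates {i, j}
    pieces[(i, j)] = Qij
    total += Qij

R = np.diag(rng.uniform(0.1, 1.0, size=n))     # assumed PSD remainder
Q = R + total                                  # Q = R + sum of Q^{ij}

lhs = y @ Q @ y
rhs = y @ R @ y + sum(y @ Qij @ y for Qij in pieces.values())
```

Since each Q^{ij} and R are PSD, Q is PSD as well; the actual matrices used in OptPairs come from the SDP duals, whereas here they are merely sampled to exhibit the additive structure.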
In practice, one may use solutions obtained from rounding the SDP relaxations as warm starts for the mixed-integer optimization solvers for improved performance. However, in the experiments, our goal is to compare the bounds obtained from the SDP rounding approach with the branch-and-bound approach. Therefore, we do not use solutions from one method in the other, in order to properly compare the two approaches.

Results
In these experiments, the solution time limit is set to 20 min, which includes the time required to solve the SDP relaxations to find suitable decompositions. Tables 4 and 5 present the results using historical data since 2010 and 2015, respectively. They show, for different values of n and k, and for each conic relaxation: the time required to solve the convex relaxations in seconds, the lower bound (LB) corresponding to the optimal objective value of the continuous relaxation, the upper bound (UB) corresponding to the objective value of the heuristic, and the gap between these two values, computed as Gap = (UB − LB)/UB; they also show the best objective found at termination, and the associated gap, number of nodes explored, time spent in branch-and-bound in seconds, and number of instances that could be solved to optimality within the time limit (#). The lower bounds, upper bounds from the convex relaxations, and objectives from branch-and-bound are scaled so that the best upper bound found for a given instance is 100. Each row represents an average of five instances generated with the same parameters.
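The gap and scaling conventions used in the tables can be stated compactly. The helper names below are hypothetical, introduced only to make Gap = (UB − LB)/UB and the scaling to 100 explicit:

```python
def gap(ub, lb):
    """Relative optimality gap, Gap = (UB - LB) / UB."""
    return (ub - lb) / ub

def scale_to_best_ub(values, best_ub):
    """Scale bounds so that the best upper bound maps to 100."""
    return [100.0 * v / best_ub for v in values]
```

For instance, a lower bound of 90 against an upper bound of 100 corresponds to a 10% gap; after scaling, the best upper bound of every instance reads exactly 100, making bounds comparable across instances.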

We first summarize our conclusions, then discuss in depth the relative performance of the mixed-integer optimization formulations, and finally discuss the performance of the conic formulations (which, we argue, perform best for this class of problems).


• Summary The perspective reformulation (35) remains the best approach for solving the problem to optimality with current off-the-shelf MISOCP solvers, as MIP solvers struggle with the more sophisticated formulations. However, the stronger formulations are very effective in producing comparable or better solutions (especially in challenging instances with poor natural convex relaxations) via rounding the convex relaxation solutions in a fraction of the computational time.

• Comparison of mixed-integer optimization approaches For instances with n = 50, we see that, among the mixed-integer optimization approaches, the one based on OptPersp is arguably the best, solving 22/30 instances to optimality (compared with Natural: 15/22, and OptPairs: 8/22). The Natural mixed-integer optimization formulation is able to explore more nodes, but the relaxations are weaker, ultimately leading to inferior performance. In contrast, the stronger mixed-integer formulation based on OptPairs needs more time to process each node (by orders of magnitude) due to the increased complexity of the relaxations, resulting in poor performance overall. Nonetheless, for instances where it can prove optimality (e.g., n = 50, k = 5), it does so with substantially fewer nodes, illustrating the power of the stronger relaxations. Interestingly, in the more challenging instances with data from 2010, n = 50 and k = 10, OptPairs is able to prove the best optimality gap of 9.6% (compared with OptPersp: 20.5%, and Natural: 4.8%).

For larger instances with n = 100, all mixed-integer optimization formulations struggle. Formulations based on OptPairs result in gaps well above 100%, that is, the best lower bound achieved by branch-and-bound is negative; for instances with data since 2015 and k = 20, the root node relaxations cannot be fully processed in 20 min, and the branch-and-bound solver terminates without an incumbent solution. Indeed, MISOCP solvers based on outer approximations struggle to solve highly nonlinear instances with a large number of variables and exhibit pathological behavior, e.g., see [4,7,31] for similar documented results. Formulations based on Natural produce the best incumbent solutions, due to the large number of nodes explored, but terminate with optimality gaps close to 100% in all cases. Formulations based on OptPersp achieve a middle ground, producing reasonably good solutions with moderate gaps, although the optimality gaps of 50% are still quite high.

• Discussion of conic formulations First, note that the continuous conic formulation OptPairs produces better lower bounds and upper bounds (via the rounding heuristic) than the continuous OptPersp: in particular, gaps are on average reduced by 66%; see Fig. 1 for a summary of the gaps across all instances. The better performance comes at the expense of computational times increased by a factor of three, which does not depend on the dimension of the problem. For the instances considered, the additional computation time is at most 30 s, which is negligible compared with the cost of solving the mixed-integer optimization problem.

We now compare rounding the OptPairs solution with the mixed-integer optimization based on OptPersp, henceforth referred to as MIO, which produced the best results among branch-and-bound approaches. For instances that MIO solves to optimality (typically requiring between one and ten minutes), OptPairs produces optimality gaps under 2% in less than four seconds, indicating the effectiveness of rounding the strong OptPairs solutions. More importantly, in all other instances, OptPairs invariably produces much better gaps than MIO in a fraction of the time. For example, in Table 4 with n = 100, OptPairs produces optimality gaps under 2% in one minute, whereas MIO terminates with gaps above 40% after 20 min of branch-and-bound. While the improved gaps are mostly caused by considerably better lower bounds, in many cases the rounding heuristic based on OptPairs delivers better primal bounds than MIO: for example, in Table 4, with n = 100 and k = 20, OptPairs produces feasible solutions with an average objective value of 100.4, whereas MIO results in incumbents with an average value of 109.7.


Conclusions

In this paper, we describe the convex hull of the mixed-integer epigraph of bivariate convex quadratic functions with nonnegative variables and off-diagonals, both via an SOCP-representable extended formulation and in the original space of variables. Furthermore, we develop a new technique for constructing an optimal convex relaxation from elementary valid inequalities. Using this technique, we develop a new strong SDP relaxation for (QI), based on the convex hull descriptions of the bivariate cases as building blocks. Moreover, the computational results with synthetic and real portfolio optimization instances indicate that the proposed formulations provide substantial improvement over existing alternatives in the literature.



Unable to fully process the root node in the time limit


Fig. 1 Distribution of gaps for OptPersp and OptPairs





Table 2 Experiments with varying diagonal

Table 4 Results with stock return data since 2010