Polyhedral approximations of the semidefinite cone and their application

We develop techniques to construct a series of sparse polyhedral approximations of the semidefinite cone. Motivated by the semidefinite (SD) bases proposed by Tanaka and Yoshise (Ann Oper Res 265:155–182, 2018), we propose a simple expansion of SD bases so as to keep the sparsity of the matrices composing it. We prove that the polyhedral approximation using our expanded SD bases contains the set of all diagonally dominant matrices and is contained in the set of all scaled diagonally dominant matrices. We also prove that the set of all scaled diagonally dominant matrices can be expressed using an infinite number of expanded SD bases. We use our approximations as the initial approximation in cutting plane methods for solving a semidefinite relaxation of the maximum stable set problem. It is found that the proposed methods with expanded SD bases are significantly more efficient than methods using other existing approximations or solving semidefinite relaxation problems directly.


Introduction
A semidefinite optimization problem (SDP) is an optimization problem in variables in the space of symmetric matrices with a linear objective function and linear constraints over the semidefinite cone. We denote the space of symmetric matrices as S n ∶= {X ∈ ℝ n×n | X i,j = X j,i (1 ≤ i < j ≤ n)} and the semidefinite cone as S n + ∶= {X ∈ S n | d T Xd ≥ 0 for any d ∈ ℝ n } . Accordingly, we can readily define an SDP in the standard form as

min ⟨C, X⟩ s.t. ⟨A j , X⟩ = b j (j = 1, 2, … , m), X ∈ S n + , (1)

where C ∈ S n , A j ∈ S n , b j ∈ ℝ ( j = 1, 2, … , m ), and ⟨A, B⟩ ∶= Trace(A T B) = ∑ n i,j=1 A i,j B i,j is the inner product over S n . SDPs are powerful tools that provide convex relaxations for combinatorial and nonconvex optimizations, such as the max-cut problem (e.g., [12,19]) and the k-equipartition problem (e.g., [23,46]). Some of these relaxations can even attain the optimum, as shown in [24,31]. Interested readers may find details about SDPs and their relaxations in [32,42,46].
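The trace inner product and membership in S n + used above are easy to verify numerically. The following minimal numpy sketch (the variable names are ours, not from the paper) checks that ⟨A, B⟩ = Trace(A T B) coincides with the elementwise sum, and tests semidefiniteness via the smallest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random symmetric matrices A, B in S^n
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = (M1 + M1.T) / 2, (M2 + M2.T) / 2

# <A, B> = Trace(A^T B) coincides with the elementwise sum of products
ip_trace = np.trace(A.T @ B)
ip_sum = np.sum(A * B)
assert np.isclose(ip_trace, ip_sum)

# Membership in S^n_+ can be tested via the smallest eigenvalue
X = A @ A.T  # Gram matrices are always positive semidefinite
assert np.linalg.eigvalsh(X).min() >= -1e-9
```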
A cone K ⊂ n is called proper if it has a non-empty interior and is closed, pointed (i.e., K ∩ −K = {O} ), and convex. It is known that the SDP cone is a proper cone [9]. By replacing the semidefinite constraint X ∈ S n + with a general conic constraint X ∈ K in (1) (say, a proper cone K ⊂ n ), one can obtain a general class of problems, namely, conic optimization problems. The class of conic optimization problems has been an active field of study because it contains many popular classes of problems, including linear optimization problems (LPs), second-order cone programs (SOCPs), SDPs, and copositive programs. Copositive programs have been shown capable of providing tight lower bounds for combinatorial and quadratic optimization problems, as described in the survey paper by Dür [17] and the recent work of Arima et al. [3,4,25], etc. It has been shown that a copositive relaxation sometimes gives a highly accurate approximate solution for some combinatorial problems under certain conditions [5,11]. However, the copositive program and its dual problem are both NP-hard (see, e.g., [16,36]).
SDPs are also attractive because they can be solved in polynomial time to any desired precision. There are state-of-the-art solvers, such as SDPA [47], SeDuMi [40], SDPT3 [43], and Mosek [35], but their computations become difficult when the size of the SDP becomes large. To overcome this deficiency, for example, one may use preprocessing to reduce the size of the SDPs, which leads to facial reduction methods [37,38,44]. As another idea, one may generate relaxations of SDPs and solve them as easily handled optimization problems, e.g., LPs and SOCPs, which leads to cutting plane methods. We will focus on these latter methods.
The cutting plane method solves an SDP by transforming it into an optimization problem (e.g., an LP or an SOCP) and adding cutting planes at each iteration to cut the current approximate solution out of the feasible region in the next iterations and to get close to the optimal value. The cutting plane method was first used on the traveling-salesman problem by Dantzig et al. [13,14] in 1954. It was used in 1958 by Gomory [20] to solve integer linear programming problems. As SDPs became popular, it came to be used on them as well; see, for instance, Krishnan and Mitchell [28-30] and Konno et al. [27]. Kobayashi and Takano [26] applied it to a class of mixed-integer SDPs. In [2], Ahmadi et al. applied it to nonconvex polynomial optimization problems and copositive programs.
In the above-mentioned cutting plane methods for SDPs, the semidefinite constraint X ∈ S n + in (1) is first relaxed to X ∈ K out , where S n + ⊆ K out ⊆ n , and an initial relaxation of the SDP is obtained. If K out is polyhedral, the initial relaxation may give an LP; if K out is given by second-order constraints, the initial relaxation becomes an SOCP. To improve the performance of these cutting plane methods, we consider generating initial relaxations for SDPs that are both tight and computationally efficient and focus on approximations of S n + . Many approximations of S n + have been proposed on the basis of its well-known properties. Kobayashi and Takano [26] used the fact that the diagonal elements of semidefinite matrices are nonnegative. Konno et al. [27] imposed an assumption that all diagonal elements of the variable X in the SDPs appearing in their iterative algorithm are bounded by a constant. The sets of diagonally dominant matrices and scaled diagonally dominant matrices are known to be cones contained in S n + (see e.g., [2,22] for details). The inclusive relation among them has been studied in, e.g., [7,8]. Ahmadi et al. [1,2] used these sets as initial approximations of their cutting plane method. Boman et al. [10] defined the factor width of a semidefinite matrix, and Permenter and Parrilo used it to generate approximations of S n + , which they applied to facial reduction methods in [38].
Tanaka and Yoshise defined various bases of S n , wherein each basis consists of n(n + 1)∕2 semidefinite matrices, called semidefinite (SD) bases, and used them to devise approximations of S n + [41]. They showed that the conical hull of SD bases and its dual cone give inner and outer polyhedral approximations of S n + , respectively. On the basis of the SD bases, they also developed techniques to determine whether a given matrix is in the semidefinite plus nonnegative cone S n + + N n , which is the Minkowski sum of S n + and the nonnegative matrices cone N n . In this paper, we focus on the fact that SD bases are sometimes sparse, i.e., the number of nonzero elements in a matrix is relatively small, and hence it is not so computationally expensive to solve problems polyhedrally approximated with such SD bases. We call such an approximation a sparse polyhedral approximation and propose efficient sparse approximations of S n + . The goal of this paper is to construct tight and sparse polyhedral approximations of S n + by using SD bases in order to solve hard conic optimization problems, e.g., doubly nonnegative (DNN, or S n + ∩ N n ) and semidefinite plus nonnegative ( S n + + N n ) optimization problems. The contributions of this paper are summarized as follows.
• This paper gives the relation between the conical hull of sparse SD bases and the set of diagonally dominant matrices. We propose a simple expansion of SD bases without losing the sparsity of the matrices and prove that one can generate a sparse polyhedral approximation of S n + that contains the set of diagonally dominant matrices and is contained in the set of scaled diagonally dominant matrices.
• The expanded SD bases are used in cutting plane methods for a semidefinite relaxation of the maximum stable set problem. It is found that the proposed methods with expanded SD bases are significantly more efficient than methods using other approximations or solving semidefinite relaxation problems directly.
The organization of this paper is as follows. Various approximations of S n + are introduced in Sect. 2, including those based on the factor width by Boman et al. [10], diagonal dominance by Ahmadi et al. [2], and SD bases by Tanaka and Yoshise [41]. The main results of this paper, i.e., an expansion of SD bases and an analysis of its theoretical properties, are provided in Sect. 3. In Sect. 4, we introduce the cutting plane method using different approximations of S n + for calculating upper bounds of the maximum stable set problem. We also describe the results of numerical experiments and evaluate the efficiency of the proposed method with expanded SD bases.

Factor width approximation
In [10], Boman et al. defined a concept called the factor width.

Definition 1 (Definition 1 in [10]) The factor width of a real symmetric matrix A ∈ S n is the smallest integer k such that there exists a real matrix V ∈ ℝ n×m where A = VV T and each column of V contains at most k nonzero elements.

For k ∈ {1, 2, … , n} , we can also define FW(k) ∶= {X ∈ S n | X has a factor width of at most k}. It is obvious that the factor width is only defined for semidefinite matrices, because for every matrix A in Definition 1, the decomposition A = VV T implies that A ∈ S n + . Therefore, for every k ∈ {1, 2, … , n} , the set of matrices with a factor width of at most k gives an inner approximation of S n + : FW(k) ⊆ S n + .
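The inclusion FW(2) ⊆ S n + can be illustrated numerically: any matrix assembled as VV T , where each column of V has at most two nonzero entries, must be positive semidefinite. A small sketch (our own construction, not from [10]):

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
# Build V whose columns each have at most 2 nonzero entries
cols = []
for i in range(n):
    for j in range(i, n):
        v = np.zeros(n)
        v[i], v[j] = rng.standard_normal(), rng.standard_normal()
        cols.append(v)
V = np.column_stack(cols)

A = V @ V.T  # by construction, A has factor width at most 2, so A is in FW(2)
# FW(2) is a subset of S^n_+: A must be positive semidefinite
assert np.linalg.eigvalsh(A).min() >= -1e-9
```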

Diagonal dominance approximation
In [1,2], the authors approximated the cone S n + with the set of diagonally dominant matrices and the set of scaled diagonally dominant matrices.

Definition 2
The set of diagonally dominant matrices DD n and the set of scaled diagonally dominant matrices SDD n are defined as follows:

DD n ∶= {A ∈ S n | A i,i ≥ ∑ j≠i |A i,j | for all i = 1, … , n},
SDD n ∶= {A ∈ S n | DAD ∈ DD n for some positive diagonal matrix D}.
It is easy to see that DD n is a convex cone and SDD n is a cone in S n . As a consequence of the Gershgorin circle theorem [18], we have the relation DD n ⊆ SDD n ⊆ S n + . Ahmadi et al. [2] defined U n,k as the set of vectors in ℝ n with at most k nonzeros, each equal to 1 or −1 . They also defined a set of matrices U n,k ∶= {uu T | u ∈ U n,k } . Barker and Carlson [6] proved the following theorem.

Theorem 1 (Barker and Carlson [6]) DD n = cone(U n,2 ).

The conical hull of a given set K ⊆ S n is defined as cone(K) ∶= {∑ m i=1 λ i X i | X i ∈ K, λ i ≥ 0 (i = 1, … , m), m ∈ ℤ + } , where ℤ + is the set of nonnegative integers. A cone generated in this way by a finite number of elements is called finitely generated. Theorem 1 implies that DD n has n 2 extreme rays; thus, it is a finitely generated cone. The following theorem follows from the results of Minkowski [34] and Weyl [45].
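Theorem 1 is constructive: a diagonally dominant matrix can be written explicitly as a nonnegative combination of matrices uu T with u ∈ U n,2 . The following sketch (the decomposition routine and its name are ours) reconstructs a sample DD matrix from such terms:

```python
import numpy as np

def dd_decompose(A, tol=1e-9):
    """Write a diagonally dominant A as a nonnegative combination of
    matrices uu^T, where u has at most 2 nonzero entries in {+1, -1}."""
    n = A.shape[0]
    terms = []  # (coefficient, u) pairs
    for i in range(n):
        for j in range(i + 1, n):
            if abs(A[i, j]) > tol:
                u = np.zeros(n)
                u[i], u[j] = 1.0, np.sign(A[i, j])
                terms.append((abs(A[i, j]), u))
    for i in range(n):
        # leftover diagonal weight; nonnegative by diagonal dominance
        slack = A[i, i] - (np.sum(np.abs(A[i, :])) - abs(A[i, i]))
        assert slack >= -tol
        u = np.zeros(n)
        u[i] = 1.0
        terms.append((slack, u))
    return terms

A = np.array([[3.0, -1.0, 1.0],
              [-1.0, 2.0, 0.5],
              [1.0, 0.5, 2.0]])  # diagonally dominant
terms = dd_decompose(A)
rebuilt = sum(c * np.outer(u, u) for c, u in terms)
assert np.allclose(rebuilt, A)
assert all(c >= -1e-9 for c, _ in terms)
```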

Theorem 2 (Minkowski-Weyl theorem, see Corollary 7.1a in [39]) A convex cone is polyhedral if and only if it is finitely generated.
The above theorem ensures that DD n is a polyhedral cone. Using the expression in Theorem 1, Ahmadi et al. proved that optimization problems over DD n can be solved as LPs. They also proved that optimization problems over SDD n can be solved as SOCPs. They designed a column generation method using DD n and SDD n to obtain a series of inner approximations of S n + . As for the relation between the factor width and diagonal dominance, useful results were presented in [1,10], which give a relation between SDD n and the set of matrices with a factor width of at most 2.
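The gap between DD n and SDD n can be seen on a tiny example: a positive semidefinite matrix that is not diagonally dominant may still become diagonally dominant after a positive diagonal scaling. A sketch (the example matrix and the scaling D = diag(1, 3.5) are our own choices):

```python
import numpy as np

def is_dd(A, tol=1e-9):
    """Check diagonal dominance: A_ii >= sum_{j != i} |A_ij| for all i."""
    off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.diag(A) >= off - tol))

# A is positive semidefinite but NOT diagonally dominant ...
A = np.array([[4.0, 1.0],
              [1.0, 0.3]])
assert np.linalg.eigvalsh(A).min() >= 0
assert not is_dd(A)

# ... yet it is *scaled* diagonally dominant: DAD lies in DD^n for this D
D = np.diag([1.0, 3.5])
assert is_dd(D @ A @ D)
```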

Corollary 1
SDD n = FW(2). In particular, SDD n is a convex cone.
Definition 3 (Definitions 1 and 2 in [41]) Let e i ∈ ℝ n denote the vector with a 1 at the ith coordinate and 0 elsewhere, and let I = (e 1 , … , e n ) ∈ S n be the identity matrix. Then

B + ∶= {(e i + e j )(e i + e j ) T | 1 ≤ i ≤ j ≤ n}

is called an SD basis of Type I, and

B − ∶= {(e i + e i )(e i + e i ) T | 1 ≤ i ≤ n} ∪ {(e i − e j )(e i − e j ) T | 1 ≤ i < j ≤ n}

is called an SD basis of Type II. Matrices in SD bases of Types I and II are denoted as B + i,j ∶= (e i + e j )(e i + e j ) T and B − i,j ∶= (e i − e j )(e i − e j ) T . As shown in [41], B + and B − are subsets of S n + and bases of S n . Given a set K ⊆ S n , we define the dual cone of K as (K) * ∶= {A ∈ S n | ⟨A, B⟩ ≥ 0 for any B ∈ K} . The conical hull of B + ∪ B − and its dual give an inner and an outer polyhedral approximation of S n + , as follows.

Definition 4
Let I = (e 1 , … , e n ) ∈ S n be the identity matrix. The inner and outer approximations of S n + by using SD bases are defined as

S in ∶= cone(B + ∪ B − ) and S out ∶= (cone(B + ∪ B − )) * .

By Definition 3, we know that B + , B − ⊆ S n + . Since S n + is a convex cone, we have S in ⊆ cone(S n + ) = S n + . By Lemma 1.7.3 in [32], we know that S n + is self-dual; that is, S n + = (S n + ) * . Accordingly, we can conclude that S in ⊆ S n + ⊆ S out .
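The two defining properties of the SD bases (every member lies in S n + , and each family contains n(n + 1)∕2 linearly independent matrices) can be checked directly. A sketch for n = 3, assuming the Type I/II constructions as stated in Definition 3:

```python
import numpy as np
from itertools import combinations, combinations_with_replacement

n = 3
e = np.eye(n)

# SD basis of Type I: (e_i + e_j)(e_i + e_j)^T for i <= j
B_plus = [np.outer(e[i] + e[j], e[i] + e[j])
          for i, j in combinations_with_replacement(range(n), 2)]
# SD basis of Type II: diagonal members plus (e_i - e_j)(e_i - e_j)^T for i < j
B_minus = [np.outer(2 * e[i], 2 * e[i]) for i in range(n)] + \
          [np.outer(e[i] - e[j], e[i] - e[j]) for i, j in combinations(range(n), 2)]

# Every member lies in S^n_+
for B in B_plus + B_minus:
    assert np.linalg.eigvalsh(B).min() >= -1e-9

# Each family has n(n+1)/2 linearly independent matrices, i.e., a basis of S^n
for fam in (B_plus, B_minus):
    vecs = np.array([B.reshape(-1) for B in fam])
    assert np.linalg.matrix_rank(vecs) == n * (n + 1) // 2
```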

Remark 1
In [41], B + and B − are defined as B + (P) and B − (P) using an orthogonal matrix P instead of the identity matrix I. In fact, for any orthogonal matrix P, PB + P T and PB − P T also give other bases and generalizations of B + and B − . However, as we will see in Sect. 4, we use the matrices in the bases in optimization problems of the form (2), which is equivalent, via the change of variables Y ∶= P T XP , to a problem in which the bases with P = I are used. Therefore, we consider that the generalizations PB + P T and PB − P T are not essential throughout this paper and omit those descriptions from subsequent sections to simplify the presentation.

Expansion of SD bases
When we use the SD bases for approximating S n + , the sparsity of the matrices in those bases is quite important in terms of computational efficiency. As we mentioned in Remark 1, for any orthogonal matrix P, PB + P T and PB − P T give generalizations of the SD bases. However, it is hard to choose an appropriate orthogonal matrix P (except for the identity matrix I) to keep the sparsity of the matrices PCP T and PAP T in (2). In this section, we try to extend the definition of the SD bases in order to obtain various sparse SD bases which will lead us to sparse polyhedral approximations of S n + .

SD bases and their relations with S n + and DD n
First, we give a lemma that provides an expression of S n + by using SD bases. The lemma is a direct corollary of the fact that any X ∈ S n + has nonnegative eigenvalues and a corresponding orthogonal basis of eigenvectors.

Lemma 2
S n + = ⋃ P∈O n cone(B + (P)) , where O n is the set of orthogonal matrices in ℝ n×n . Lemma 2 gives a way to approximate S n + by changing the matrix P = (p 1 , … , p n ) ∈ O n when creating SD bases. However, a dense matrix P ∈ O n may lead to a dense formulation of the approximation using SD bases, which is unattractive from the standpoint of computational efficiency.
Note that we can easily see that the set cone(B + ∪ B − ) , the conical hull of the sparse SD bases B + and B − , is equivalent to cone(U n,2 ) . Thus, we obtain the following proposition as a corollary of Theorem 1.

Proposition 1 cone(B + ∪ B − ) = cone(U n,2 ) = DD n .

Expansion of SD bases without losing sparsity
The previous section shows that we can obtain a sparse polyhedral approximation of S n + by using the SD bases. In this section, we try to extend the definition of the SD bases in order to obtain various sparse polyhedral approximations of S n + .

Definition 5 Let I = (e 1 , … , e n ) ∈ S n be the identity matrix. Define the expansion of the SD basis with one parameter α ∈ ℝ as

B i,j (α) ∶= (e i + α e j )(e i + α e j ) T (1 ≤ i ≤ j ≤ n), B(α) ∶= {B i,j (α) | 1 ≤ i ≤ j ≤ n}.

The proposition below ensures that the expansion of the SD bases also gives bases of S n .

Proposition 2 Let I = (e 1 , … , e n ) ∈ S n be the identity matrix. For any α ∈ ℝ ⧵ {0, −1} , B(α) is a set of n(n + 1)∕2 linearly independent matrices and thus a basis of S n .
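Proposition 2, including the role of the excluded values α ∈ {0, −1} , can be checked numerically by vectorizing the members of B(α) and computing the rank. A sketch, assuming the form of B i,j (α) given in Definition 5:

```python
import numpy as np
from itertools import combinations_with_replacement

def expanded_sd_basis(n, alpha):
    """B(alpha): B_ij(alpha) = (e_i + alpha*e_j)(e_i + alpha*e_j)^T, i <= j."""
    e = np.eye(n)
    return [np.outer(e[i] + alpha * e[j], e[i] + alpha * e[j])
            for i, j in combinations_with_replacement(range(n), 2)]

def rank_of(basis):
    return np.linalg.matrix_rank(np.array([B.reshape(-1) for B in basis]))

n = 4
full = n * (n + 1) // 2
# For alpha outside {0, -1}, B(alpha) is a basis of S^n (Proposition 2)
assert rank_of(expanded_sd_basis(n, 0.5)) == full
assert rank_of(expanded_sd_basis(n, 1.0)) == full   # B(1) = B^+
# The excluded values break linear independence
assert rank_of(expanded_sd_basis(n, 0.0)) < full    # B_ij(0) = e_i e_i^T
assert rank_of(expanded_sd_basis(n, -1.0)) < full   # B_ii(-1) = O
```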

The above leads us to conclude that {B i,j (α)} = B(α) is a set of n(n + 1)∕2 linearly independent matrices. ◻

If we let α = 1 , then it is straightforward that B(1) = B + . If we let α be other real numbers, we may obtain different SD bases. The following proposition gives the condition for generating different expanded SD bases.

From the last inequality in (14), we have either case (i) or case (ii). For case (i), from the first and second inequalities of (14), we have α 2 − α 1 ≥ 0 and α 1 − α 2 ≥ 0 , which implies α 2 = α 1 and contradicts the assumption α 2 ≠ α 1 . A similar contradiction is obtained for case (ii). Thus, we have B i,j (α 2 ) ∉ cone(B(α 1 )) . ◻

Expression of SDD n with expanded SD bases
As we have seen in Corollary 1, the set SDD n = FW(2) is a convex cone. This fact ensures that, as a corollary of Theorem 1, the conical hull of the union of the expanded SD bases B(α) over α ∈ ℝ coincides with FW(2) and hence with the set of scaled diagonally dominant matrices SDD n :

SDD n = FW(2) = cone ( ⋃ α∈ℝ B(α) ).
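The key identity behind this expression is that any rank-one matrix vv T whose vector v = a e i + b e j has at most two nonzero entries is a nonnegative multiple of an expanded SD basis matrix: vv T = a 2 B i,j (b∕a) when a ≠ 0 . A quick numerical check of this identity (indices and values are arbitrary choices):

```python
import numpy as np

n = 4
i, j = 1, 3
a, b = 2.0, -0.6
e = np.eye(n)

# v has two nonzero entries, so vv^T has factor width at most 2
v = a * e[i] + b * e[j]
target = np.outer(v, v)

# vv^T = a^2 * B_ij(alpha) with alpha = b / a
alpha = b / a
B_ij = np.outer(e[i] + alpha * e[j], e[i] + alpha * e[j])
assert np.allclose(target, a**2 * B_ij)
```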

Notes on the parameter α

Here, we discuss the choice of the parameter α to increase the "volume" of the polyhedral approximation cone(B(α)) of the semidefinite cone S n + . For any α ∈ ℝ and 1 ≤ i < j ≤ n , by Definition 5, we can calculate the Frobenius norm of B i,j (α) :

‖B i,j (α)‖ F = 1 + α 2 .

According to Proposition 3, by changing α , one can obtain different polyhedral approximations. However, we can see that B i,j (0) = B + i,i ∕4 , B i,j (1) = B + i,j and B i,j (−1) = B − i,j , and by Definitions 3 and 5, we have

lim |α|→∞ B i,j (α)∕‖B i,j (α)‖ F = B + j,j ∕‖B + j,j ‖ F . (15)

This shows that, if |α| → ∞ or α ∈ {0, 1, −1} , the new matrix B i,j (α) will become close to the existing matrices, e.g., B + i,i , B + j,j , B + i,j and B − i,j , and the "volume" of the polyhedral approximation cone(B(α) ∪ B + ∪ B − ) of the semidefinite cone S n + will also be close to the "volume" of the existing inner approximation cone(B + ∪ B − ) of S n + .
To give an illustrative explanation of the above discussion, here we consider the specific case n = 2 and draw some figures in ℝ 3 with coordinates a, b and c. Figure 1a shows the set S 2 + in ℝ 3 . The red arrows in Fig. 1b show the extreme rays {λB i,j (α) | λ ≥ 0} with |α| → ∞ and α ∈ {0, 1, −1} . The conical hull of these extreme rays is cone(B + ∪ B − ) and its cross section with {X ∈ S 2 | ⟨X, I⟩ = 1} is illustrated as the blue area. To avoid generating a new matrix B i,j (α) that is close to the existing matrices, we should choose an α such that the angles between B i,j (α) and the existing matrices are equal, as illustrated in Fig. 1c.
We expand this idea to the case of generating a matrix B i,j (α) ∈ S n . Given an α ∈ ℝ , we can define the angles θ 1 (α), θ 2 (α), θ 3 (α) and θ 4 (α) between matrices in the expanded SD bases and the matrices B + i,i , B + j,j , B + i,j and B − i,j in SD bases of Types I and II for every 1 ≤ i < j ≤ n . In general, to obtain a large enough inner approximation with limited parameters, we prefer an α that makes θ 1 (α) = θ 3 (α) , which means that the new matrix B i,j (α) will be in the middle of B + i,i and B + i,j on the boundary of S n + . Similarly, we can obtain α by calculating θ 2 (α) = θ 3 (α) , θ 1 (α) = θ 4 (α) and θ 2 (α) = θ 4 (α) . By solving these equalities, we find that α = ±1 ± √2 . The expansions with these parameters are expected to provide generally large inner approximations for S n + .
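The norm formula and the angle equalities at α = 1 + √2 can be verified numerically with the Frobenius inner product; a sketch (the θ indexing follows our reading of the text, with θ 1 , θ 2 , θ 3 , θ 4 measured against B + i,i , B + j,j , B + i,j , B − i,j ):

```python
import numpy as np

def cos_angle(A, B):
    """Cosine of the angle under the Frobenius inner product."""
    return np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))

n, i, j = 3, 0, 1
e = np.eye(n)
Bij = lambda t: np.outer(e[i] + t * e[j], e[i] + t * e[j])
B_ii = 4 * np.outer(e[i], e[i])                   # B+_{i,i}
B_jj = 4 * np.outer(e[j], e[j])                   # B+_{j,j}
B_pij = Bij(1.0)                                  # B+_{i,j}
B_mij = np.outer(e[i] - e[j], e[i] - e[j])        # B-_{i,j}

alpha = 1 + np.sqrt(2)
M = Bij(alpha)
# ||B_ij(alpha)||_F = 1 + alpha^2
assert np.isclose(np.linalg.norm(M), 1 + alpha**2)
# at alpha = 1 + sqrt(2): theta_2 = theta_3 and theta_1 = theta_4
assert np.isclose(cos_angle(M, B_jj), cos_angle(M, B_pij))
assert np.isclose(cos_angle(M, B_ii), cos_angle(M, B_mij))
```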

Cutting plane methods for the maximum stable set problem
Conic optimization problems, including SDPs and copositive programs, have been shown to provide tight bounds for NP-hard combinatorial and nonconvex optimization problems. Here, we consider applying approximations of S n + to one of those NP-hard problems, the maximum stable set problem. A stable set of a graph G(V, E) is a set of vertices in V such that there is no edge connecting any pair of vertices in the set. The maximum stable set problem aims to find the stability number, i.e., the number of vertices of the largest stable set of G, namely α(G).
De Klerk and Pasechnik [15] proposed a copositive programming formulation to obtain the exact stability number of a graph G with n vertices:

α(G) = max ⟨ee T , X⟩ s.t. ⟨A + I, X⟩ = 1, X ∈ C * n , (16)
where e is the all-ones vector, A is the adjacency matrix of graph G, and C * n is the dual cone of the copositive cone C n ∶= {X ∈ S n | d T Xd ≥ 0 for any d ∈ ℝ n with d ≥ 0} .
Although problem (16) is a conic optimization problem, it is still difficult, since determining whether X ∈ C * n or not is NP-hard [16]. A natural approach is to relax this problem to a more tractable optimization problem. From the definition of each cone, we can see the validity of the inclusion C * n ⊆ S n + ∩ N n . By replacing C * n with S n + ∩ N n , one can obtain an SDP relaxation of (16):

max ⟨ee T , X⟩ s.t. ⟨A + I, X⟩ = 1, X ∈ S n + ∩ N n . (17)

Solving this SDP is not as easy as it seems; in fact, we could not obtain a useful result for (17) after 6 hours of calculation using the state-of-the-art SDP solver Mosek for a randomly generated problem with n = 300 . Combining the expanded SD bases with the cutting plane method, we apply the approximations of S n + to (17) and solve it by calculating a series of more tractable problems.
Let P n satisfy S n + ⊆ P n ⊆ S n and replace X ∈ S n + by X ∈ P n in (17). Then, we obtain a relaxation of (17):

max ⟨ee T , X⟩ s.t. ⟨A + I, X⟩ = 1, X ∈ P n ∩ N n . (18)

Usually, the relaxed problem (18) is expected to be easier to solve and to give us an upper bound of problem (17) from its optimal solution X * . To get a better upper bound, we select some eigenvectors with negative eigenvalues of an optimal solution X * of problem (18), say d 1 , … , d k , add the corresponding cutting planes to (18), and obtain a new optimization problem

max ⟨ee T , X⟩ s.t. ⟨A + I, X⟩ = 1, X ∈ P n ∩ N n , ⟨d i d i T , X⟩ ≥ 0 (i = 1, … , k). (19)

Notice that the optimal solution X * of problem (18) is cut from the feasible region of problem (19), since ⟨d i d i T , X * ⟩ < 0 ( i = 1, … , k ). On the other hand, since S n + = {X ∈ S n | ⟨dd T , X⟩ ≥ 0 for all d ∈ ℝ n } ⊆ P n , every feasible solution of (17) is feasible for (19), and hence problem (19) is a relaxation of problem (17). These facts ensure that problem (19) is a tighter relaxation of problem (17) than problem (18). By repeating this procedure, we are able to obtain a series of nonincreasing upper bounds of (17). Since the eigenvectors are usually dense, we add only the eigenvectors corresponding to the smallest and second-smallest eigenvalues to {d i } at every iteration, which increases computational efficiency.
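The cut-generation step above amounts to an eigendecomposition of the current solution. A minimal numpy sketch of this step (the function name and the sample matrix are ours; the LP/SOCP solve that produces X * is omitted):

```python
import numpy as np

def eigenvector_cuts(X, max_cuts=2, tol=1e-9):
    """Return eigenvectors d of X with negative eigenvalues (up to max_cuts,
    starting from the smallest), each giving a cutting plane <dd^T, X> >= 0."""
    w, V = np.linalg.eigh(X)  # eigenvalues in ascending order
    return [V[:, k] for k in range(min(max_cuts, len(w))) if w[k] < -tol]

# A symmetric "solution" that is NOT positive semidefinite
X_star = np.array([[1.0, 2.0, 0.0],
                   [2.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
cuts = eigenvector_cuts(X_star)
assert len(cuts) == 1  # exactly one negative eigenvalue (-1)
d = cuts[0]
# the cut <dd^T, X> >= 0 separates X_star from S^n_+
assert d @ X_star @ d < 0
```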
As for the selection of the initial relaxation P n , we are ready to use the approximations of S n + based on the expanded SD bases. Let H ∶= {±1, ±1 ± √2} be the set of parameters calculated in Sect. 3.4, and let SDB n denote the conical hull of the expanded SD bases using H:

SDB n ∶= cone ( ⋃ α∈H B(α) ).

Then, as has been described in the previous sections, we have

DD n ⊆ SDB n ⊆ SDD n ⊆ S n + . (20)

If SDB * n or DD * n is selected to be P n , the corresponding relaxed problem in the cutting plane procedure becomes an LP, which allows us to use powerful state-of-the-art LP solvers, such as Gurobi [21]. Ahmadi et al. [2] showed that when SDD * n is selected, the relaxations turn out to be SOCPs. Although SDD * n provides a tighter relaxation than either DD * n or SDB * n , the latter two relaxations are expected to have a lower computational cost. In addition, in [2], Ahmadi et al. also proposed an SOCP-based cutting plane approach, named SDSOS, which adds SOCP cuts at every iteration. We conducted experiments to compare the efficiencies of those cutting plane methods using different approximations and SDSOS. The specifications of the experimental methods are summarized in Table 1.
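To illustrate how an LP arises when P n = DD * n , one can dualize Theorem 1: X ∈ DD * n is equivalent to the linear inequalities ⟨uu T , X⟩ ≥ 0 for u ∈ U n,2 , i.e., X ii ≥ 0 and X ii + X jj ± 2X ij ≥ 0 . The following sketch solves the resulting relaxation (18) for the 5-cycle C 5 ( α(C 5 ) = 2 ) with scipy; the graph, solver choice, and variable layout are our own, not the paper's setup:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

# 5-cycle C5: stability number alpha(C5) = 2
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Decision variables: upper-triangular entries x_ij (i <= j) of symmetric X
pairs = [(i, j) for i in range(n) for j in range(i, n)]
idx = {p: k for k, p in enumerate(pairs)}
m = len(pairs)

# Objective: maximize <ee^T, X> (off-diagonal entries count twice)
c = np.array([-1.0 if i == j else -2.0 for i, j in pairs])

# Equality constraint: <A + I, X> = 1
a_eq = np.array([(A[i, j] + (i == j)) * (1.0 if i == j else 2.0)
                 for i, j in pairs]).reshape(1, -1)

# X in DD*_n: with X >= 0 entrywise (X in N_n), only the cuts
# X_ii + X_jj - 2*X_ij >= 0 remain binding.
rows = []
for i, j in combinations(range(n), 2):
    row = np.zeros(m)
    row[idx[(i, i)]] = -1.0
    row[idx[(j, j)]] = -1.0
    row[idx[(i, j)]] = 2.0   # -X_ii - X_jj + 2*X_ij <= 0
    rows.append(row)

res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
              A_eq=a_eq, b_eq=[1.0], bounds=(0, None))
bound = -res.fun
assert res.status == 0
# The LP value is a valid upper bound on alpha(C5) = 2 and cannot exceed n
assert 2.0 - 1e-6 <= bound <= 5.0 + 1e-6
```

Replacing the linear cuts by the SOCP description of SDD * n would tighten this initial bound at a higher per-iteration cost, which is exactly the trade-off studied in the experiments below.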
We tested these methods on the Erdős–Rényi graphs ER(n, p), randomly generated by Ahmadi et al. in [2], where n is the number of vertices and every pair of vertices has an edge with probability p. All experiments were performed with MATLAB 2018b on a Windows PC with an Intel(R) Core(TM) i7-6700 CPU running at 3.4 GHz and 16 GB of RAM. The LPs were solved using Gurobi Optimizer 8.0.0 [21], and the SOCPs and SDPs were solved using Mosek Optimizer 9.0 [35]. Figure 2 shows the result for an instance with n = 250 and p = 0.8 . The x-axis is the number of iterations, and the y-axis is the gap between the upper bounds of each method and the SDP bound obtained by (17); the gap is computed from the obtained upper bound f k at the kth iteration and the SDP bound f * obtained by solving (17) directly.
As can be seen in this figure, the accuracy of CPDD is the worst among the four methods at each iteration. CPSDB achieves almost the same upper bounds as CPSDD and SDSOS, which shows that the proposed polyhedral approximation SDB n is promising for obtaining a solution close to the non-polyhedral approximation SDD n of S n + . Although SDSOS adds an extra SOCP cut at every iteration and takes longer to solve, the accuracy of SDSOS does not seem to be affected and is not so different from the accuracy of CPSDD at each iteration. Figure 3 shows the relation between the computation time and the gap of each method for the same instance. Although its accuracy is not necessarily the best at every iteration, CPSDB appears to be the most efficient method. CPSDB attains an upper bound whose gap is 2 within 30 s, while CPSDD and SDSOS attain upper bounds whose gap is 4 after the same amount of time. The difference might come from the fact that the subproblems of CPSDB are sparse LPs at earlier iterations, whose computations are relatively cheaper than those of CPSDD and SDSOS, whose subproblems are SOCPs. Tables 2 and 3 give the bounds of the iterative methods and the SDP bound for all the instances. In Table 2, the CPSDD 0 /SDSOS 0 column shows the first upper bound obtained by CPSDD and SDSOS, i.e., the upper bound obtained by solving the same SOCP before adding any cutting plane. The (5 min) and (10 min) columns of CPSDD (SDSOS) show the upper bounds obtained after 5 min and after 10 min of the CPSDD (SDSOS) computation, respectively. The SDP column shows the SDP bound obtained by solving (17). In Table 3, the CPDD 0 and CPSDB 0 columns show the first upper bounds obtained by CPDD and CPSDB, respectively, before adding any cutting plane. The (5 min) and (10 min) columns of CPDD (CPSDB) show the upper bounds obtained after 5 min and after 10 min of the CPDD (CPSDB) computation, respectively.
Note that we failed to solve the SDPs (17) for instances having n = 300 nodes within our time limit of 20,000 s. In Table 2, the Value and Time (s) columns of SDP with n = 300 show the results obtained in [2] for these two instances, as a reference.
As can be seen in Tables 2 and 3, for all instances, the values of CPSDD 0 /SDSOS 0 are better than the values of CPSDB 0 and CPDD 0 . These results correspond to the inclusion relationship of the initial approximations (20). We can also see that the values of CPSDB 0 are almost the same as those of CPSDD 0 /SDSOS 0 for all instances, while the values of CPDD 0 are much worse than the others. For all instances, CPSDB seems to be significantly more efficient than all other methods. For example, for the instance with n = 250 and p = 0.3 , after 10 min of calculation, CPSDB obtained an upper bound of 73.24, while CPSDD and SDSOS got upper bounds greater than 90 and CPDD got a bound of more than 146. At present, solving a large SDP, e.g., one with n = 300 or more nodes, requires a significant amount of computational time. The cutting plane method CPSDB with our polyhedral approximation SDB n is a promising way of obtaining upper bounds of such large SDPs in a moderate amount of time.

Concluding remarks
We developed techniques to construct a series of sparse polyhedral approximations of the semidefinite cone. We provided a way to approximate the semidefinite cone by using SD bases and proved that the set of diagonally dominant matrices can be expressed with sparse SD bases. We proposed a simple expansion of SD bases that keeps the sparsity of the matrices that compose it. We gave the conditions for generating linearly independent matrices in expanded SD bases as well as for generating an expansion different from the existing one. We showed that the polyhedral approximation using our expanded SD bases contains the set of diagonally dominant matrices and is contained in the set of scaled diagonally dominant matrices. We also proved that the set of scaled diagonally dominant matrices can be expressed using an infinite number of expanded SD bases.
The polyhedral approximations were applied to the cutting plane method for solving a semidefinite relaxation of the maximum stable set problem. The results of the numerical experiments showed that the method with our expanded SD bases is more efficient than other methods (see Fig. 3); improving the efficiency of our method remains an important issue for future study.