A New Extension of Chubanov’s Method to Symmetric Cones

We propose a new variant of Chubanov's method for solving the feasibility problem over the symmetric cone by extending Roos's method (2018) for the feasibility problem over the nonnegative orthant. The proposed method considers a feasibility problem associated with a norm induced by the maximum eigenvalue of an element and uses a rescaling focusing on the upper bound for the sum of eigenvalues of any feasible solution to the problem. Its computational bound is (i) equivalent to that of Roos's original method (2018) and superior to that of Lourenço et al.'s method (2019) when the symmetric cone is the nonnegative orthant, (ii) superior to that of Lourenço et al.'s method (2019) when the symmetric cone is a Cartesian product of second-order cones, (iii) equivalent to that of Lourenço et al.'s method (2019) when the symmetric cone is the simple positive semidefinite cone, and (iv) superior to that of Pena and Soheili's method (2017) for any simple symmetric cone under the feasibility assumption of the problem imposed in Pena and Soheili's method (2017). We also conduct numerical experiments that compare the performance of our method with existing methods on strongly feasible (but ill-conditioned) instances. For all of these instances, the proposed method is considerably more efficient than the existing methods in terms of accuracy and execution time.


Introduction
Recently, Chubanov [2,3] proposed a new polynomial-time algorithm for solving the problem

P(A): find x ∈ R^n such that Ax = 0, x > 0,

where A is a given integer (or rational) m × n matrix with rank(A) = m and 0 is an n-dimensional vector of 0s. The method explores the feasibility of the following problem P_S1(A), which is equivalent to P(A) and given by

P_S1(A): find x ∈ R^n such that Ax = 0, 0 < x ≤ 1,

where 1 is an n-dimensional vector of 1s. Chubanov's method consists of two ingredients, the "main algorithm" and the "basic procedure." Note that the alternative problem D(A) of P(A) is given by

D(A): find x ∈ R^n such that x ∈ range A^⊤, x ≥ 0, x ≠ 0,

where range A^⊤ is the orthogonal complement of ker A. The structure of the method is as follows: in the outer iteration, the main algorithm calls the basic procedure, which generates a sequence in R^n using projection onto the set ker A := {x ∈ R^n | Ax = 0}. The basic procedure terminates in a finite number of iterations returning one of the following: (i) a solution of problem P(A), (ii) a solution of problem D(A), or (iii) a cut of P(A), i.e., an index j ∈ {1, 2, . . . , n} for which 0 < x_j ≤ 1/2 holds for any feasible solution of problem P_S1(A).
If result (i) or (ii) is returned by the basic procedure, then the feasibility of problem P(A) can be determined and the main algorithm stops. If result (iii) is returned, then the main algorithm generates a diagonal matrix D ∈ R^{n×n} whose (j, j) element is 2 and whose other diagonal elements are 1, and rescales the matrix as AD^{-1}. Then, it calls the basic procedure with the rescaled matrix. Chubanov's method checks the feasibility of P(A) by repeating the above procedures.
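For the nonnegative-orthant case, the projection onto ker A that drives the basic procedure can be sketched as follows. This is a minimal Python/NumPy illustration, not taken from the paper; `kernel_projector` is a hypothetical name, and A is assumed to have full row rank:

```python
import numpy as np

def kernel_projector(A):
    """Return a function x -> P_A(x) = x - A^T (A A^T)^{-1} A x.

    Assumes A has full row rank; the Cholesky factor of A A^T is
    computed once and reused for every projection.
    """
    L = np.linalg.cholesky(A @ A.T)            # A A^T = L L^T
    def P_A(x):
        w = np.linalg.solve(L.T, np.linalg.solve(L, A @ x))
        return x - A.T @ w
    return P_A
```

Factoring A A^T once is the same idea the paper later uses to bound the per-iteration cost of the basic procedure.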
For problem P(A), Roos [15] proposed a tighter cut criterion for the basic procedure than the one used in [3]. Chubanov [3] used the fact that x_j ≤ √n ‖z‖₂ / y_j holds for any y ∈ R^n satisfying Σ_{i=1}^n y_i = 1, y ≥ 0 and y ∉ range A^⊤, where z ∈ R^n is obtained by projecting this y onto ker A and x ∈ R^n is any feasible solution of P_S1(A); the basic procedure is terminated if a y is found for which √n ‖z‖₂ / y_j ≤ 1/2 holds for some index j. On the other hand, Roos [15] showed that, for v = y − z, x_j ≤ min(1, 1^⊤[−v/v_j]_+), where [−v/v_j]_+ is the projection of −v/v_j ∈ R^n onto the nonnegative orthant and 1 is the vector of ones; the basic procedure is terminated if a y is found for which min(1, 1^⊤[−v/v_j]_+) ≤ 1/2 holds for some index j. Chubanov's method has also been extended to the feasibility problem over the second-order cone [9] and over the symmetric cone [13,10]. The feasibility problem over the symmetric cone is of the form

P(A): find x such that A(x) = 0, x ∈ int K,

where A is a linear operator, K is a symmetric cone, and int K is the interior of the set K. As proposed in [13,10], for problem P(A), the structure of Chubanov's method remains the same; i.e., the main algorithm calls the basic procedure, and the basic procedure returns one of the following in a finite number of iterations: (i) a solution of problem P(A), (ii) a solution of the alternative problem of problem P(A), or (iii) a recommendation for scaling problem P(A). If result (i) or (ii) is returned by the basic procedure, then the feasibility of problem P(A) can be determined and the main algorithm stops. If result (iii) is returned, the problem is scaled appropriately and the basic procedure is called again.
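The two cut criteria can be compared on a toy orthant instance. The following Python/NumPy sketch (illustrative names and data, not from the paper's experiments; A is assumed to have full row rank) computes Chubanov's bound √n ‖z‖₂ / y_j and Roos's bound min(1, 1^⊤[−v/v_j]_+) for each index:

```python
import numpy as np

def cut_bounds(A, y):
    """Upper bounds on x_j over feasible x of P_S1(A), orthant case.

    y: nonnegative with entries summing to 1 and y not in range(A^T);
    z is the projection of y onto ker A and v = y - z.
    """
    n = y.size
    z = y - A.T @ np.linalg.solve(A @ A.T, A @ y)
    v = y - z
    chubanov = np.sqrt(n) * np.linalg.norm(z) / y            # Chubanov's criterion
    roos = np.array([min(1.0, np.maximum(-v / v[j], 0.0).sum())
                     if v[j] != 0.0 else 1.0
                     for j in range(n)])                      # Roos's criterion
    return chubanov, roos

A = np.array([[1.0, -1.0, 0.0]])       # toy instance (illustrative)
y = np.array([0.5, 0.25, 0.25])
chub, roos = cut_bounds(A, y)
```

For this instance Roos's bound equals 1 for every index while Chubanov's bound exceeds 2, illustrating that the former criterion can be sharper.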
The authors of [10] focused on the volume of the feasible region of P_{S1,∞}(A) and devised a rescaling method that shrinks this volume. Their method stops when either the feasibility of problem P_{S1,∞}(A) is determined or it is established that the minimum eigenvalue of any feasible solution of P_{S1,∞}(A) is less than ε.
The aim of this paper is to devise a new variant of Chubanov's method for solving P(A) by extending Roos's method [15] to the following feasibility problem (P_S∞(A)) over the symmetric cone K:

P_S∞(A): find x ∈ int K such that A(x) = 0, ‖x‖∞ ≤ 1,

where ‖x‖∞ is the maximum absolute eigenvalue of x. Throughout this paper, we assume that K is the Cartesian product of p simple symmetric cones K_1, . . . , K_p, i.e., K = K_1 × · · · × K_p. Here, we should mention an important issue about Lemma 4.2 in [15], which is one of the main results of [15]: the proof of Lemma 4.2 given in [15] is incorrect, and a correct proof is provided in [19]; this study derives its theoretical results without referring to the lemma. A feature of our method is that the main algorithm works while keeping information about the minimum eigenvalue of any feasible solution of P_S∞(A); in this sense, it is closely related to Lourenço et al.'s method [10]. Using the norm ‖·‖∞ in problem P_S∞(A) makes it possible to
• calculate the upper bound for the minimum eigenvalue of any feasible solution of P_S∞(A),
• quantify the feasible region of P(A), and hence,
• determine whether there exists a feasible solution of P(A) whose minimum eigenvalue is greater than ε, as in [10].
Note that symmetric cone optimization includes several types of problems (linear, second-order cone, and semidefinite optimization problems) with various settings, and the computational bound of an algorithm depends on these settings. As we will describe in section 6, the theoretical computational bound of our method is
• equivalent to that of Roos's original method [15] and superior to that of Lourenço et al.'s method [10] when the symmetric cone is the nonnegative orthant,
• superior to that of Lourenço et al.'s method when the symmetric cone is a Cartesian product of second-order cones,
• equivalent to that of Lourenço et al.'s method when the symmetric cone is the simple positive semidefinite cone, under the assumption that the costs of computing the spectral decomposition and the minimum eigenvalue are of the same order for any given symmetric matrix, and
• superior to that of Pena and Soheili's method [13] for any simple symmetric cone under the feasibility assumption of the problem imposed in [13].
Another aim of this paper is to give comprehensive numerical comparisons of the existing algorithms and our method. As described in section 7, we generate strongly feasible but ill-conditioned instances, i.e., instances with ker A ∩ int K ≠ ∅ for which any x ∈ ker A ∩ int K has positive but small eigenvalues, for the simple positive semidefinite cone K, and conduct numerical experiments.
The paper is organized as follows: Section 2 contains a brief description of Euclidean Jordan algebras and their basic properties. Section 3 gives a collection of propositions which are necessary to extend Roos's method to problem P S∞ (A) over the symmetric cone. In sections 4 and 5, we explain the basic procedure and the main algorithm of our variant of Chubanov's method. Section 6 compares the theoretical computational bounds of Lourenço et al.'s method [10], Pena and Soheili's method [13] and our method. In section 7, we conduct numerical experiments comparing our variant with the existing methods. The conclusions are summarized in section 8.

Euclidean Jordan algebras and their basic properties
In this section, we briefly introduce Euclidean Jordan algebras and symmetric cones. For more details, see [5]. In particular, the relation between symmetric cones and Euclidean Jordan algebras is given in Chapter III (the Koecher–Vinberg theorem) of [5].

Euclidean Jordan algebras
Let E be a finite-dimensional real vector space equipped with an inner product ⟨·, ·⟩ and a bilinear operation • : E × E → E, and let e ∈ E be the identity element, i.e., x • e = e • x = x holds for any x ∈ E. (E, •) is called a Euclidean Jordan algebra if it satisfies

x • y = y • x, x • (x² • y) = x² • (x • y), ⟨x • y, z⟩ = ⟨y, x • z⟩

for all x, y, z ∈ E, where x² := x • x. We denote by x^{-1} the element y ∈ E satisfying x • y = e, if such a y exists. An element c ∈ E is called an idempotent if it satisfies c • c = c, and an idempotent c is called primitive if it cannot be written as a sum of two nonzero idempotents. A set of primitive idempotents c_1, c_2, . . . , c_k is called a Jordan frame if it satisfies

c_i • c_j = 0 (i ≠ j) and c_1 + · · · + c_k = e.

For x ∈ E, the degree of x is the smallest integer d such that the set {e, x, x², . . . , x^d} is linearly dependent. The rank of E is the maximum of the degree of x over all x ∈ E. The following properties are known.

Symmetric cone
A proper cone is symmetric if it is self-dual and homogeneous. It is known that the set of squares {x • x : x ∈ E} of a Euclidean Jordan algebra (E, •) is a symmetric cone K [5]. Any x ∈ E admits a spectral decomposition x = Σ_{i=1}^r λ_i c_i, where c_1, . . . , c_r is a Jordan frame and λ_1, . . . , λ_r are the eigenvalues of x (Proposition 2.1). Moreover (Proposition 2.2), (i) x ∈ K if and only if λ_j ≥ 0 (j = 1, 2, . . . , r), and (ii) x ∈ int K if and only if λ_j > 0 (j = 1, 2, . . . , r).
From Propositions 2.1 and 2.2, for any x ∈ E, its projection P_K(x) onto K can be written as the operation that rounds all negative eigenvalues of x to 0, i.e., P_K(x) = Σ_{i=1}^r [λ_i]_+ c_i, where [·]_+ denotes the projection onto the nonnegative orthant. Using P_K, we can decompose any x ∈ E as follows.

Lemma 2.3. Let x ∈ E, and let K be the symmetric cone corresponding to E. Then, x can be decomposed as x = P_K(x) − P_K(−x) with ⟨P_K(x), P_K(−x)⟩ = 0.

Proof. From Proposition 2.1, let x be given as x = Σ_{i=1}^r λ_i c_i. Let I_1 be the set of indices such that λ_i ≥ 0 and I_2 be the set of indices such that λ_i < 0. Then, we have P_K(x) = Σ_{i∈I_1} λ_i c_i and P_K(−x) = Σ_{i∈I_2} (−λ_i) c_i, so that x = P_K(x) − P_K(−x); the orthogonality ⟨P_K(x), P_K(−x)⟩ = 0 follows from c_i • c_j = 0 (i ≠ j) for the Jordan frame.

A Euclidean Jordan algebra (E, •) is called simple if it cannot be written as a Cartesian product of non-zero Euclidean Jordan algebras. If the Euclidean Jordan algebra (E, •) associated with a symmetric cone K is simple, then we say that K is simple. In this paper, we consider K given by a Cartesian product of p simple symmetric cones K_ℓ, K := K_1 × · · · × K_p, whose ranks and identity elements are r_ℓ and e_ℓ (ℓ = 1, . . . , p). The rank r and the identity element e of K are given by r = Σ_{ℓ=1}^p r_ℓ and e = (e_1, . . . , e_p). In what follows, x_ℓ stands for the ℓ-th block element of x ∈ K, i.e., x = (x_1, . . . , x_p) ∈ K_1 × · · · × K_p. For each ℓ = 1, . . . , p, we define λ_min(x_ℓ) := min{λ_1, . . . , λ_{r_ℓ}}, where λ_1, . . . , λ_{r_ℓ} are the eigenvalues of x_ℓ. The minimum eigenvalue λ_min(x) of x ∈ K is given by λ_min(x) = min{λ_min(x_1), . . . , λ_min(x_p)}.
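For K = S^n_+, Lemma 2.3 can be checked numerically: projecting rounds negative eigenvalues to zero, and x = P_K(x) − P_K(−x) with the two parts orthogonal. A small NumPy sketch (illustrative, not part of the paper's algorithms):

```python
import numpy as np

def proj_psd(X):
    """P_K for K = S^n_+: round the negative eigenvalues of X to 0."""
    lam, U = np.linalg.eigh(X)
    return (U * np.maximum(lam, 0.0)) @ U.T    # sum_i [lam_i]_+ u_i u_i^T

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
X = (B + B.T) / 2                              # an arbitrary element of S^4
Xp, Xm = proj_psd(X), proj_psd(-X)             # x = P_K(x) - P_K(-x)
```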
Next, we consider the quadratic representation Q_x := 2L_x² − L_{x²}, where L_x denotes the linear map given by L_x y = x • y. Letting I_ℓ be the identity operator of the Euclidean Jordan algebra (E_ℓ, •_ℓ) associated with the cone K_ℓ, we have Q_{e_ℓ} = I_ℓ for ℓ = 1, . . . , p. The following properties can also be retrieved from the results in [5], as in Proposition 3 of [10].

Proposition 2.4 (cf. Proposition 3, [10]). For any v ∈ int K, Q_v is invertible, Q_v(K) = K and Q_v(int K) = int K.

It is also known that the following relations hold for the quadratic representation Q_v and det(·) [5].
Proposition 2.5 (cf. Propositions II.3.3 and III.4.2-(i), [5]). For any v, x ∈ E, Q_{Q_v x} = Q_v Q_x Q_v and det(Q_v x) = (det v)² det x.

More detailed descriptions, including concrete examples of symmetric cone optimization, can be found in, e.g., [5,6,16,1]. Here, we use concrete examples of symmetric cones to illustrate the bilinear operation, the identity element, the inner product, the eigenvalues, the primitive idempotents, the projection onto the symmetric cone, and the quadratic representation.

Example 2.6 (K is the semidefinite cone S^n_+). Let S^n be the set of n × n symmetric matrices. The semidefinite cone S^n_+ is given by S^n_+ = {X ∈ S^n : X ⪰ O}. For any symmetric matrices X, Y ∈ S^n, define the bilinear operation • and inner product as X • Y = (XY + YX)/2 and ⟨X, Y⟩ = tr(XY) = Σ_{i=1}^n Σ_{j=1}^n X_ij Y_ij, respectively. For any X ∈ S^n, perform the eigenvalue decomposition X = Σ_{i=1}^n λ_i u_i u_i^⊤, where u_1, . . . , u_n are the normalized eigenvectors corresponding to the eigenvalues λ_1, . . . , λ_n. The eigenvalues of X in the Jordan algebra are λ_1, . . . , λ_n and the primitive idempotents are c_1 = u_1 u_1^⊤, . . . , c_n = u_n u_n^⊤, which implies that the rank of the semidefinite cone S^n_+ is r = n. The identity element is the identity matrix I, and the projection onto S^n_+ is given by P_{S^n_+}(X) = Σ_{i=1}^n [λ_i]_+ u_i u_i^⊤.

Example 2.7 (K is the second-order cone L^n). The second-order cone is given by L^n = {x ∈ R^n : x_1 ≥ ‖x̄‖₂}, where x̄ := (x_2, . . . , x_n)^⊤. For any x, y ∈ R^n, define the bilinear operation • and the inner product as x • y = (x^⊤y, (x_1 ȳ + y_1 x̄)^⊤)^⊤ and ⟨x, y⟩ = 2 Σ_{i=1}^n x_i y_i, respectively. For any x ∈ R^n, by the decomposition x = λ_1 c_1 + λ_2 c_2, we obtain the eigenvalues and the primitive idempotents as follows: λ_{1,2} = x_1 ± ‖x̄‖₂ and c_{1,2} = (1/2)(1, ±z^⊤)^⊤, where z = x̄/‖x̄‖₂ if x̄ ≠ 0, and z ∈ R^{n−1} is an arbitrary vector satisfying ‖z‖₂ = 1 otherwise. The above implies that the rank of the second-order cone L^n is r = 2. The identity element is given by e = (1, 0^⊤)^⊤ ∈ R^n. The projection P_{L^n}(x) onto L^n is given by P_{L^n}(x) = [λ_1]_+ c_1 + [λ_2]_+ c_2. Letting I_{n−1} be the identity matrix of order n − 1, the quadratic representation Q_v of v ∈ R^n is Q_v = 2vv^⊤ − det(v) diag(1, −I_{n−1}), where det(v) = v_1² − ‖v̄‖₂².
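The formulas of Example 2.7 are easy to verify numerically. The sketch below (Python/NumPy, hypothetical helper names, not part of the paper) computes the spectral decomposition and the projection on L^n:

```python
import numpy as np

def soc_spectral(x):
    """Eigenvalues and primitive idempotents of x on the second-order cone L^n."""
    xbar = x[1:]
    nrm = np.linalg.norm(xbar)
    if nrm > 0:
        u = xbar / nrm
    else:
        u = np.zeros_like(xbar); u[0] = 1.0    # any unit vector works when xbar = 0
    lam = np.array([x[0] + nrm, x[0] - nrm])   # lam_{1,2} = x_1 +/- ||xbar||
    c1 = 0.5 * np.concatenate(([1.0], u))
    c2 = 0.5 * np.concatenate(([1.0], -u))
    return lam, c1, c2

def proj_soc(x):
    """P_{L^n}(x): round the negative eigenvalues to 0."""
    lam, c1, c2 = soc_spectral(x)
    return max(lam[0], 0.0) * c1 + max(lam[1], 0.0) * c2
```

Note that the whole decomposition costs time linear in n, which is why spectral decompositions are cheap on products of second-order cones.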

Notation
This subsection summarizes the notation used in this paper. For any x, y ∈ E, we define the inner product ⟨·, ·⟩ and the norm ‖·‖_J as ⟨x, y⟩ := trace(x • y) and ‖x‖_J := √⟨x, x⟩, respectively. For any x ∈ E having the decomposition x = Σ_{i=1}^r λ_i c_i as in Proposition 2.1, we also define ‖x‖_1 := |λ_1| + · · · + |λ_r| and ‖x‖_∞ := max{|λ_1|, . . . , |λ_r|}. For x ∈ K, we obtain the equivalent representations ‖x‖_1 = ⟨e, x⟩ and ‖x‖_∞ = λ_max(x). The following is a list of other definitions and frequently used symbols in the paper.
• d: the dimension of the Euclidean space E corresponding to K
• P_A(·): the projection map onto ker A
• P_K(·): the projection map onto K
• λ(x) ∈ R^r: the r-dimensional vector composed of the eigenvalues of x ∈ K
• λ(x_ℓ) ∈ R^{r_ℓ}: the r_ℓ-dimensional vector composed of the eigenvalues of x_ℓ ∈ K_ℓ (ℓ = 1, . . . , p)
• c(x_ℓ)_i ∈ K_ℓ: the i-th primitive idempotent of x_ℓ ∈ E_ℓ; when K is simple, it is abbreviated as c_i
3 Extension of Roos's method to the symmetric cone problem

Outline of the extended method
We focus on the feasibility of the following problem P_S∞(A), which is equivalent to P(A):

P_S∞(A): find x ∈ int K such that A(x) = 0, ‖x‖∞ ≤ 1.

The alternative problem D(A) of P(A) is

D(A): find y such that y ∈ range A*, y ∈ K, y ≠ 0,

where range A* is the orthogonal complement of ker A. As mentioned in section 2.2, we assume that K is given by a Cartesian product of p simple symmetric cones K_ℓ (ℓ = 1, . . . , p), i.e., K = K_1 × · · · × K_p. In our method, the upper bound for the sum of eigenvalues of a feasible solution of P_S∞(A) plays a key role, whereas the existing work focuses on the volume of the feasible region [10] or the condition number of a feasible solution [13]. Before describing the theoretical results, let us outline the proposed algorithm when K is simple. The algorithm repeats two steps: (i) find a cut for P_S∞(A); (ii) scale the problem to an isomorphic problem equivalent to P_S∞(A) such that the region narrowed by the cut is efficiently explored. For a constant 0 < ξ < 1, our method first searches for a Jordan frame {c_1, . . . , c_r} and a nonempty index set H ⊆ {1, . . . , r} such that ⟨c_i, x⟩ ≤ ξ (i ∈ H) holds for any feasible solution x of P_S∞(A). In this case, instead of P_S∞(A), we may consider the problem P_S∞^Cut(A) obtained by adding these cuts. Here, we define the set SR_Cut = {x ∈ E : x ∈ int K, ‖x‖∞ ≤ 1, ⟨c_i, x⟩ ≤ ξ (i ∈ H)} as the search range for the solutions of the problem P_S∞^Cut(A). The proposed method then creates a problem equivalent and isomorphic to P_S∞(A) such that SR_Cut, the region narrowed by the cut, can be searched efficiently; such a problem is the scaled problem P_S∞(AQ_g) described in Proposition 3.5. In the succeeding sections, we explain how the cut for P_S∞(A) is obtained from some v ∈ range A*; we also explain the scaling method for the problem in detail. To simplify our discussion, we will assume that K is simple, i.e., p = 1, in section 3.2. Then, in section 3.3, we will generalize our discussion to the case of p ≥ 2.

Simple symmetric cone case
Let us consider the case where K is simple. It is obvious that, for any feasible solution x of P_S∞(A), the constraint ‖x‖∞ ≤ 1 implies that ⟨e, x⟩ ≤ r, since x ∈ K. In Proposition 3.3, we show that this bound may be improved to ⟨e, x⟩ < r by using a point v ∈ range A* \ {0}. To prove Proposition 3.3, we need the following Lemma 3.1 and Proposition 3.2.

Lemma 3.1. For any y ∈ E,

max { ⟨y, x⟩ : 0 ≤ λ(x) ≤ 1 } = ⟨P_K(y), e⟩.  (2)

Proof. Using the decomposition y = Σ_{i=1}^r λ_i c_i obtained by Proposition 2.1, we see that ⟨y, x⟩ = Σ_{i=1}^r λ_i ⟨c_i, x⟩. Noting that x ∈ K and e − x ∈ K from 0 ≤ λ(x) ≤ 1, and since each c_i ∈ K is a primitive idempotent, we find that ⟨c_i, x⟩ ≥ 0 and ⟨c_i, e − x⟩ ≥ 0, which implies that 0 ≤ ⟨c_i, x⟩ ≤ 1. Thus, letting I_1 be the set of indices for which λ_i ≤ 0 and I_2 be the set of indices for which λ_i > 0, if there exists an x satisfying

⟨c_i, x⟩ = 0 (i ∈ I_1), ⟨c_i, x⟩ = 1 (i ∈ I_2),  (3)

then such an x is an optimal solution of (2). In fact, if we define x* = Σ_{i∈I_2} c_i, then by the definition of the Jordan frame, x* satisfies (3) and 0 ≤ λ(x*) ≤ 1 and becomes an optimal solution of (2). In this case, the optimal value of (2) turns out to be Σ_{i∈I_2} λ_i = ⟨P_K(y), e⟩.

Considering the problem max{⟨c, x⟩ : A(x) = 0, 0 ≤ λ(x) ≤ 1} and its Lagrangian dual yields the following.

Proposition 3.2. Let c ∈ E. Then, for any feasible solution x of P_S∞(A) and any v ∈ range A*,

⟨c, x⟩ ≤ ⟨P_K(c − v), e⟩.

Proof. Define the Lagrangian function L(x, w) := ⟨c, x⟩ − w^⊤A(x), where w ∈ R^m is the Lagrange multiplier, and note that any v ∈ range A* can be written as v = A*w. Suppose that x is feasible for P_S∞(A); then A(x) = 0 and 0 ≤ λ(x) ≤ 1, and hence, for any w ∈ R^m,

⟨c, x⟩ = L(x, w) ≤ max_{0≤λ(x')≤1} L(x', w) = max_{0≤λ(x')≤1} ⟨c − A*w, x'⟩ = ⟨P_K(c − A*w), e⟩,

where the last equality follows from Lemma 3.1.

Proposition 3.3. Let v ∈ range A* \ {0} have the spectral decomposition v = Σ_{i=1}^r λ_i c_i, and define q_i(α) := ⟨P_K(c_i − αv), e⟩. Then, the following relations hold for any x ∈ F_{P_S∞(A)} and i ∈ {1, . . . , r}:

⟨c_i, x⟩ ≤ min_{α∈R} q_i(α) = min { 1, q_i(1/λ_i) },  (4)

with the convention that q_i(1/λ_i) is replaced by 1 when λ_i = 0. Proof. For each i ∈ {1, 2, . . . , r}, we have c_i − αv = (1 − αλ_i)c_i − Σ_{j≠i} αλ_j c_j, and hence,

q_i(α) = [1 − αλ_i]_+ + Σ_{j≠i} [−αλ_j]_+.  (5)

Note that, since q_i(α) is a piecewise-linear convex function, if λ_i = 0, it attains its minimum at α = 0 with q_i(0) = 1, and if λ_i ≠ 0, it attains its minimum at α = 0 with q_i(0) = 1 or at α = 1/λ_i with q_i(1/λ_i). Thus, we obtain the equality in (4). Since αv ∈ range A* for all α ∈ R, for each i ∈ {1, . . . , r}, Proposition 3.2 and (5) ensure that ⟨c_i, x⟩ ≤ ⟨P_K(c_i − αv), e⟩ = q_i(α) for all α ∈ R, which implies the inequality in (4).
Since r i=1 c i = e holds, Proposition 3.3 allows us to compute upper bounds for the sum of eigenvalues of x. The following proposition gives us information about indices whose upper bound for c i , x in Proposition 3.3 is less than 1.
Proof. First, we consider the case where λ_i > 0. Since the assumption implies that ⟨e, P_K(−v)⟩ ≤ λ_i ξ, we have q_i(1/λ_i) = (1/λ_i) ⟨e, P_K(−v)⟩ ≤ ξ, where the first equality comes from Lemma 2.3. For the case where λ_i < 0, since the assumption also implies that ⟨e, P_K(v)⟩ ≤ −λ_i ξ, we similarly obtain q_i(1/λ_i) = (−1/λ_i) ⟨e, P_K(v)⟩ ≤ ξ. This completes the proof.
The above two propositions imply that, for any v ∈ range A* with v = Σ_{i=1}^r λ_i c_i, if we compute ⟨c_i, x⟩ according to Proposition 3.3 for the indices i ∈ {1, . . . , r} whose λ_i has the same sign as ⟨e, v⟩, we obtain an upper bound for the sum of eigenvalues of x over the set F_{P_S∞(A)}. The following proposition concerns the scaling method of problem P_S∞(A) when we find such a v ∈ range A*.

Proposition 3.5. Let H ⊆ {1, . . . , r} be a nonempty set, c_1, . . . , c_r be a Jordan frame, and ξ be a real number satisfying 0 < ξ < 1. Let us define g ∈ int K as

g = √ξ Σ_{i∈H} c_i + Σ_{i∉H} c_i.  (6)

For the two sets SR_Cut = {x ∈ E : x ∈ int K, ‖x‖∞ ≤ 1, ⟨c_i, x⟩ ≤ ξ (i ∈ H)} and SR_Scaled = {x̄ ∈ E : x̄ ∈ int K, ‖x̄‖∞ ≤ 1}, we have Q_g(SR_Scaled) ⊆ SR_Cut.

Proof. Let x̄ be an arbitrary point of SR_Scaled. It suffices to show that (i) Q_g(x̄) ∈ int K, (ii) ‖Q_g(x̄)‖∞ ≤ 1, (iii) ⟨Q_g(x̄), c_i⟩ ≤ ξ (i ∈ H), and (iv) ⟨Q_g(x̄), c_i⟩ ≤ 1 (i ∉ H).

(i): Let us show that Q_g(x̄) ∈ int K. Since g and x̄ lie in the set int K, from Propositions 2.4 and 2.5, we see that Q_g(x̄) ∈ int K.

(ii): Next, let us show that ‖Q_g(x̄)‖∞ ≤ 1. Since x̄ ∈ SR_Scaled, we see that x̄ ∈ int K and ‖x̄‖∞ ≤ 1, and hence e − x̄ ∈ K. Since g ∈ int K, Proposition 2.4 guarantees that

Q_g(e − x̄) ∈ K.  (7)

By the definition (6) of g, the following equations hold for c_1, . . . , c_r: Q_g(c_i) = ξ c_i (i ∈ H) and Q_g(c_i) = c_i (i ∉ H). Thus, we obtain Q_g(e) = ξ Σ_{i∈H} c_i + Σ_{i∉H} c_i. Combining this with the facts c_i ∈ K, 1 − ξ > 0 and (7), we have e − Q_g(x̄) = (1 − ξ) Σ_{i∈H} c_i + Q_g(e − x̄) ∈ K. Since we have shown that Q_g(x̄) ∈ int K, we can conclude that ‖Q_g(x̄)‖∞ ≤ 1.
(iii) and (iv): Finally, we compute an upper bound for the value ⟨Q_g(x̄), c_i⟩ over the set SR_Scaled. It follows from c_i ∈ K and (7) that ⟨Q_g(e − x̄), c_i⟩ ≥ 0, i.e., ⟨Q_g(e), c_i⟩ ≥ ⟨Q_g(x̄), c_i⟩ holds. Since we have shown that Q_g(e) = ξ Σ_{i∈H} c_i + Σ_{i∉H} c_i, we obtain ⟨Q_g(x̄), c_i⟩ ≤ ξ ⟨c_i, c_i⟩ = ξ for i ∈ H and ⟨Q_g(x̄), c_i⟩ ≤ ⟨c_i, c_i⟩ = 1 for i ∉ H. Note that Proposition 3.5 implies that if a cut is obtained for P_S∞(A) based on Proposition 3.3, we can expect a more efficient search for solutions by solving problem P_S∞(AQ_g) rather than problem P_S∞(A).
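For the semidefinite cone, where Q_g(X) = gXg, the effect of this scaling can be checked numerically: with g built from a Jordan frame as in (6), Q_g(e) has eigenvalue ξ on the cut indices and 1 elsewhere. A small NumPy sketch with an illustrative frame and parameters (not taken from the paper):

```python
import numpy as np

n, xi = 4, 0.5
H = [0, 2]                                   # indices receiving a cut (illustrative)
U, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))
C = [np.outer(U[:, i], U[:, i]) for i in range(n)]   # a Jordan frame of S^4_+

# g = sqrt(xi) * sum_{i in H} c_i + sum_{i not in H} c_i, as in (6)
g = sum(np.sqrt(xi) * C[i] if i in H else C[i] for i in range(n))
Qg = lambda X: g @ X @ g                     # quadratic representation on S^n_+
```

Applying Qg to the identity reproduces Q_g(e) = ξ Σ_{i∈H} c_i + Σ_{i∉H} c_i.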

Non-simple symmetric cone case
In this section, we consider the case where the symmetric cone is not simple. Propositions 3.6 and 3.7 are extensions of Propositions 3.3 and 3.4, respectively.
Proof. Let c ∈ E be an element whose ℓ-th block element is c ℓ = c(v ℓ ) i and other block elements take 0. For any real number α ∈ R, Proposition 3.2 ensures that We obtain (9) by following a similar argument to the one used in the proof of Proposition 3.3.
From Proposition 3.6, if we obtain v ∈ range A* satisfying (11) for a block ℓ ∈ {1, . . . , p} with an index i ∈ {1, . . . , r_ℓ}, then the upper bound for the sum of the eigenvalues of any feasible solution x of P_S∞(A) is reduced to ⟨e, x⟩ ≤ r − 1 + ξ < r. In this case, as described below, we can find a scaling such that the sum of eigenvalues of any feasible solution of the scaled problem is again bounded by r. Let H_ℓ be the set of indices i satisfying (11) for each block ℓ. According to Proposition 3.5, define g_ℓ from H_ℓ as in (6) for each block ℓ with H_ℓ ≠ ∅, and define the linear operator Q blockwise by Q_ℓ = Q_{g_ℓ} if H_ℓ ≠ ∅ and Q_ℓ = I_ℓ otherwise, where I_ℓ is the identity operator of the Euclidean Jordan algebra E_ℓ associated with the symmetric cone K_ℓ. From Proposition 3.5 and its proof, we can easily see that the scaled problem P_S∞(AQ) is equivalent to P_S∞(A), and the sum of eigenvalues of any feasible solution of the scaled problem P_S∞(AQ) is bounded by ⟨e, e⟩ = r = Σ_{ℓ=1}^p r_ℓ.
4 Basic procedure of the extended method

4.1 Outline of the basic procedure

In this section, we describe the details of our basic procedure. First, we introduce our stopping criteria and explain how y_k is updated when none of the stopping criteria is satisfied. Next, we show that a stopping criterion is satisfied within a finite number of iterations. Our stopping criteria are new and different from the ones used in [10,13], while the method of updating y_k is similar to the one used in [10] or in the von Neumann scheme of [13]. Algorithm 1 is a full description of our basic procedure.

Termination conditions of the basic procedure
For z_k = P_A(y_k), v_k = y_k − z_k and a given ξ ∈ (0, 1), our basic procedure terminates when any of the following four cases occurs:
1. z_k ∈ int K, meaning that z_k is feasible for P(A), or
2. z_k = 0, meaning that y_k ∈ range A* ∩ K and hence y_k is feasible for D(A), or
3. y_k − z_k ∈ K and y_k − z_k ≠ 0, meaning that y_k − z_k is feasible for D(A), or
4. there exist ℓ ∈ {1, . . . , p} and i ∈ {1, . . . , r_ℓ} for which (13) holds, meaning that ⟨e, x⟩ < r holds for any feasible solution x of P_S∞(A) (see Proposition 3.6).
Cases 1 and 2 are direct extensions of the cases in [3], while case 3 was proposed in [9,10]. Case 3 helps us to determine the feasibility of P(A) efficiently, although we have to decompose y_k − z_k to check it. If the basic procedure ends with case 1, 2, or 3, it returns a solution of P(A) or D(A) to the main algorithm. If the basic procedure ends with case 4, it returns to the main algorithm the p index sets H_1, . . . , H_p, each of which consists of the indices i satisfying (13), and the corresponding sets of primitive idempotents.
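Specialized to the nonnegative orthant (p = 1), the four stopping cases can be sketched as below. This is an illustrative Python sketch, not the paper's Algorithm 1; the case-4 test uses the Roos-type cut of the orthant setting as a stand-in for condition (13):

```python
import numpy as np

def check_termination(y, z, xi):
    """Return (case, info) for the four stopping cases on K = R^n_+."""
    v = y - z
    if np.all(z > 0):
        return 1, z                          # case 1: z is feasible for P(A)
    if np.allclose(z, 0.0):
        return 2, y                          # case 2: y in range(A^T) cap K
    if np.all(v >= 0) and np.any(v > 0):
        return 3, v                          # case 3: v is feasible for D(A)
    # case 4 (stand-in for (13)): coordinates whose cut bound drops below xi
    cut = [j for j in range(y.size)
           if v[j] != 0.0
           and min(1.0, np.maximum(-v / v[j], 0.0).sum()) <= xi]
    if cut:
        return 4, cut
    return 0, None                           # no criterion met: keep iterating
```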

Update of the basic procedure
The basic procedure updates y k ∈ int K with y k , e = 1 so as to reduce the value of z k J . The following proposition is essentially the same as Proposition 13 in [10], so we will omit its proof.
Proposition 4.1 (cf. Proposition 13, [10]). For y_k ∈ int K satisfying ⟨y_k, e⟩ = 1, let z_k = P_A(y_k). If z_k ∉ int K and z_k ≠ 0, then the following hold.
1. There exists a primitive idempotent c such that ⟨z_k, c⟩ ≤ 0.
2. For the above c, suppose that P_A(c) ≠ 0 and define

α = ⟨P_A(c), P_A(c) − z_k⟩ / ‖z_k − P_A(c)‖²_J.  (14)

Then, y_{k+1} := αy_k + (1 − α)c satisfies y_{k+1} ∈ int K, ‖y_{k+1}‖_{1,∞} ≥ 1/p, ⟨y_{k+1}, e⟩ = 1, and 1/‖z_{k+1}‖²_J ≥ 1/‖z_k‖²_J + 1, where z_{k+1} = P_A(y_{k+1}).

A method of accelerating the update of y_k is provided in [15]. The acceleration method chooses the idempotent c blockwise over ℓ ∈ {1, 2, . . . , p}, computes α by (14) so as to minimize the norm of z_{k+1}, and updates y by y_{k+1} = αy_k + (1 − α)c. We incorporate this method in the basic procedure of our computational experiments. As described in [13], we can also use the smooth perceptron scheme [17,18] to update y_k in the basic procedure. As explained in the next section, using the smooth perceptron scheme significantly reduces the maximum number of iterations of the basic procedure. A detailed description of our basic procedure is given in Appendix A.
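A minimal orthant version of the von Neumann update in Proposition 4.1 may look as follows (Python/NumPy sketch with illustrative names; α is the minimizer of ‖αz_k + (1 − α)P_A(c)‖ as in (14)):

```python
import numpy as np

def von_neumann_step(A, y, z):
    """One von Neumann update y -> alpha*y + (1-alpha)*c on the orthant.

    Assumes z = P_A(y) is neither 0 nor strictly positive, so that the
    coordinate j = argmin_j z_j gives <z, e_j> <= 0 and the step below
    strictly decreases ||P_A(y)||.  Also assumes z != P_A(e_j).
    """
    n = y.size
    j = int(np.argmin(z))                  # idempotent c = e_j with <z, c> = min_j z_j
    c = np.zeros(n); c[j] = 1.0
    p = c - A.T @ np.linalg.solve(A @ A.T, A @ c)    # p = P_A(c)
    alpha = p @ (p - z) / ((z - p) @ (z - p))        # minimizes ||alpha*z + (1-alpha)*p||
    y_new = alpha * y + (1.0 - alpha) * c
    z_new = alpha * z + (1.0 - alpha) * p            # equals P_A(y_new) by linearity
    return y_new, z_new
```

Because ⟨z, P_A(c)⟩ = z_j ≤ 0 for this choice of c, the new residual norm strictly decreases, which is the mechanism behind the 1/‖z‖² growth in Proposition 4.1.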

Finite termination of the basic procedure
In this section, we show that the basic procedure terminates in a finite number of iterations. To do so, we need to prove Lemma 4.2 and Proposition 4.3. Proof. Let x ∈ E and suppose that each ℓ-th block element x ℓ of x is given by where the inequality follows from the fact that c(x ℓ ) 1 , . . . , c(x ℓ ) r ℓ , and y ℓ lie in K ℓ .
Proof. The first inequality of (15) follows from (10) in the proof of Proposition 3.6. The second inequality is obtained by evaluating q_{ℓ,i}(α) at α = 1/⟨y_ℓ, c(v_ℓ)_i⟩.

Proposition 4.4. Let r_max = max{r_1, . . . , r_p}. The basic procedure (Algorithm 1) terminates in at most ⌈p²r²_max/ξ²⌉ iterations.

Proof. Suppose that y_k is obtained at the k-th iteration of Algorithm 1. Proposition 4.1 implies that ‖y_k‖_{1,∞} ≥ 1/p, so an ℓ-th block element exists for which ⟨y_ℓ, e_ℓ⟩ ≥ 1/p holds; let v_k = y_k − z_k and consider this ℓ-th block. Since Proposition 4.1 ensures that 1/‖z_k‖²_J ≥ k holds at the k-th iteration, setting k = ⌈p²r²_max/ξ²⌉ gives ξ ≥ p r_max ‖z_k‖_J. Combining this with (16), Proposition 4.3 implies that (13) holds for some ℓ ∈ {1, . . . , p} and i ∈ {1, . . . , r_ℓ}. From the equivalence in (9) and the setting ξ ∈ (0, 1), we conclude that Algorithm 1 terminates in at most ⌈p²r²_max/ξ²⌉ iterations by satisfying (13) in the fourth termination condition at some block ℓ and index i.
An upper bound for the number of iterations of Algorithm 5, which uses the smooth perceptron scheme, can be obtained as follows. Proof. From Proposition 6 in [13], after k ≥ 1 iterations, we obtain an inequality bounding ‖z_k‖²_J. Similarly to the proof of Proposition 4.4, if ξ ≥ p r_max ‖z_k‖_J holds, then Algorithm 5 terminates.
Here, we discuss the computational cost per iteration of Algorithm 1. At each iteration, the two most expensive operations are computing the spectral decomposition on line 5 and computing P_A(·) on lines 24 and 26. Let C_sd^ℓ be the computational cost of the spectral decomposition of an element of K_ℓ; for example, when K_ℓ = L^{r_ℓ}, the r_ℓ-dimensional second-order cone, the spectral decomposition can be computed in time linear in r_ℓ. The cost C_sd of computing the spectral decomposition of an element of K is then C_sd = Σ_{ℓ=1}^p C_sd^ℓ. Next, let us consider the computational cost of P_A(·). Recall that d is the dimension of the Euclidean space E corresponding to K. As discussed in [10], we can compute P_A = I − A*(AA*)^{-1}A by using the Cholesky decomposition of (AA*)^{-1}. Suppose that (AA*)^{-1} = LL*, where L is an m × m matrix, and that we store L*A in the main algorithm. Then we can compute P_A(·) on lines 24 and 26 at a cost of O(md). The operation u_µ(·) : E → {u ∈ K | ⟨u, e⟩ = 1} in Algorithm 5 can be performed within the cost C_sd [18,13]. From the above discussion and Proposition 4.4, the total cost of Algorithm 1 is O((C_sd + md) p²r²_max/ξ²), and the total cost of Algorithm 5 follows similarly from its iteration bound.

Algorithm 1 Basic procedure (von Neumann scheme)
1: Input: P_A, y_1 ∈ int K such that ⟨y_1, e⟩ = 1, and a constant ξ such that 0 < ξ < 1
2: Output: (i) a solution to P(A), or (ii) a solution to D(A), or (iii) a certificate that, for any feasible solution x to P_S∞(A), ⟨e, x⟩ < r
3: initialization: For every ℓ ∈ {1, . . . , p}, perform spectral decomposition
stop basic procedure and return z_k (Output (i))
8: stop basic procedure and return y_k or v_k (Output (ii))
10: end if
11: if ⟨v_k, e⟩ > 0 then
12: for ℓ ∈ {1, . . . , p} do
24: Let u be an idempotent such that ⟨e, u⟩ = 1 and ⟨z_k, u⟩ = λ_min(z_k)
26: k ← k + 1, z_k ← P_A(y_k) and v_k ← y_k − z_k
27: end while
28: return basic procedure error

5 Main algorithm of the extended method

5.1 Outline of the main algorithm
In what follows, for a given accuracy ε > 0, we call a feasible solution of P_S∞(A) whose minimum eigenvalue is ε or more an ε-feasible solution of P_S∞(A). This section describes the main algorithm. To obtain an upper bound for the minimum eigenvalue of any feasible solution x of P_S∞(A), Algorithm 2 focuses on the product det(x̄) of the eigenvalues of an arbitrary feasible solution x̄ of the scaled problem P_S∞(A_k Q_k). Algorithm 2 works as follows. First, we calculate the corresponding projection P_A onto ker A and generate an initial point as input to the basic procedure. Next, we call the basic procedure and, according to the returned result, determine whether to stop the algorithm or to rescale the problem, as follows: 1. If a feasible solution of P(A) or D(A) is returned from the basic procedure, the feasibility of P(A) can be determined, and we stop the main algorithm.
2. If the basic procedure returns the sets of indices H_1, . . . , H_p and the sets of primitive idempotents C_1, . . . , C_p that construct the corresponding Jordan frames, then, with num_ℓ denoting the total number of cuts obtained in the ℓ-th block so far,
(a) if num_ℓ ≥ r_ℓ log ε / log ξ holds for some ℓ ∈ {1, . . . , p}, we determine that P_S∞(A) has no ε-feasible solution according to Proposition 5.1 and stop the main algorithm;
(b) if num_ℓ < r_ℓ log ε / log ξ holds for every ℓ ∈ {1, . . . , p}, we rescale the problem and call the basic procedure again.
Note that our main algorithm is similar to Lourenço et al.'s method in the sense that it keeps information about the possible minimum eigenvalue of any feasible solution of the problem. In contrast, Pena and Soheili's method [13] does not keep such information. Algorithm 2 terminates after no more than −(r / log ξ) log(1/ε) − p + 1 iterations, so our main algorithm can be said to be a polynomial-time algorithm; we give the proof in section 5.2. We should also mention that step 24 in Algorithm 2 is not reachable in theory. We have added this step in order to deal with the influence of numerical error in practice.
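The iteration bound above is straightforward to evaluate; the helper below is illustrative, not part of Algorithm 2. For instance, r = 4, p = 1, ξ = 1/2 and ε = 2⁻¹⁰ give 40 iterations:

```python
import math

def main_algorithm_iteration_bound(r, p, xi, eps):
    """-(r / log xi) * log(1/eps) - p + 1, the bound on main-algorithm iterations."""
    return -(r / math.log(xi)) * math.log(1.0 / eps) - p + 1
```

The bound grows only logarithmically in 1/ε, which is what makes the main algorithm polynomial.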

Algorithm 2 Main algorithm
1: Input: A, K, ε and a constant ξ such that 0 < ξ < 1
2: Output: a solution to P(A) or D(A), or a certificate that there is no ε-feasible solution.
. . . , p}
4: Compute P_{A_k} and call the basic procedure with P_{A_k}, (1/r)e, ξ
5: if basic procedure returns z then
6: stop main algorithm and return R_P z (R_P z is a feasible solution of P(A))
7: else if basic procedure returns y or v then
8: stop main algorithm and return R_D y or R_D v (R_D y or R_D v is a feasible solution of D(A))
9: else if basic procedure returns H^k_1, . . . , H^k_p and C^k_1, . . . , C^k_p then
10: for ℓ ∈ {1, . . . , p} do
11: if |H^k_ℓ| > 0 then
12: if num_ℓ ≥ r_ℓ log ε / log ξ then
16: stop main algorithm. There is no ε-feasible solution.

Finite termination of the main algorithm
Here, we discuss how many iterations are required until we can determine that the minimum eigenvalue λ_min(x) is less than ε for any x ∈ F_{P_S∞(A)}. Before going into the proof, we explain Algorithm 2 in more detail than in section 5.1. At each iteration, Algorithm 2 accumulates the number of cuts |H^k_ℓ| obtained in the ℓ-th block and stores the value in num_ℓ. Using num_ℓ, we can compute an upper bound for λ_min(x) (Proposition 5.1). On line 18, Q̂_ℓ is updated as Q̂_ℓ ← Q_{g_ℓ^{-1}} Q̂_ℓ, where Q̂_ℓ plays the role of an operator that gives the relation x̄_ℓ = Q̂_ℓ(x_ℓ) between a solution x of the original problem and the corresponding solution x̄ of the scaled problem. For example, if |H^1_ℓ| > 0 for k = 1 (i.e., a cut was obtained in the ℓ-th block), then the proposed method scales the problem so that x̄_ℓ = Q_{g_ℓ^{-1}}(x_ℓ) holds for any feasible solution x of the original problem. If |H^2_ℓ| > 0 for k = 2 as well, then the proposed method scales x̄ again, so that x̄_ℓ = Q_{g_ℓ^{-1}}(Q_{g_ℓ^{-1}}(x_ℓ)) = Q̂_ℓ(x_ℓ) holds. Note that Q̂_ℓ is used only for a concise proof of Proposition 5.1, so it is not essential.
Note that det x̄_ℓ can be expressed in terms of det x_ℓ. For example, if |H^1_ℓ| > 0 when k = 1, then using Proposition 2.5, det x̄_ℓ for any feasible solution x̄ of P_{S∞}(A^2) can be determined from det x_ℓ and the number of cuts obtained so far in the ℓ-th block. In Algorithm 2, the value of num_ℓ is updated only when |H^k_ℓ| > 0. Since x̄ satisfies x̄_ℓ = Q̄_ℓ(x_ℓ) (ℓ = 1, 2, …, p) for each feasible solution x of P_{S∞}(A), (18) implies det x_ℓ ≤ ξ^{num_ℓ} det e_ℓ = ξ^{num_ℓ}, and the fact (λ_min(x_ℓ))^{r_ℓ} ≤ det x_ℓ implies (λ_min(x_ℓ))^{r_ℓ} ≤ ξ^{num_ℓ}. By taking the logarithm of both sides of this inequality, we obtain (17).
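The chain of inequalities above is easy to check numerically. The sketch below (our own illustration, not part of the paper's algorithms) rescales a random positive definite matrix so that det(x) = ξ^{num} and verifies the resulting bound λ_min(x) ≤ ξ^{num/r}:

```python
import numpy as np

rng = np.random.default_rng(0)
r, xi, num = 5, 0.5, 8
B = rng.standard_normal((r, r))
X = B @ B.T + np.eye(r)                           # symmetric positive definite
X *= (xi**num / np.linalg.det(X)) ** (1.0 / r)    # rescale so det(X) = xi**num
lam_min = np.linalg.eigvalsh(X)[0]                # smallest eigenvalue
assert np.isclose(np.linalg.det(X), xi**num)      # det(x) = xi^num
assert lam_min <= xi ** (num / r)                 # lambda_min(x) <= xi^(num/r)
```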

Computational costs of the algorithms
This section compares the computational costs of Algorithm 2, Lourenço et al.'s method [10], and Pena and Soheili's method [13]. Section 6.1 compares the computational costs of Algorithm 2 and Lourenço et al.'s method, and Section 6.2 compares those of Algorithm 2 and Pena and Soheili's method under the assumption that kerA ∩ int K ≠ ∅.
Both the proposed method and the method of Lourenço et al. guarantee finite termination of the main algorithm by termination criteria indicating the nonexistence of an ε-feasible solution, so it is possible to compare the computational costs of the methods without making any special assumptions. This is because both methods proceed by making cuts to the feasible region using the results obtained from the basic procedure. On the other hand, Pena and Soheili's method cannot be compared so simply, because the upper bound on the number of iterations of their main algorithm includes the unknown value δ(kerA ∩ int K) := max{det(x) | x ∈ kerA ∩ int K, ‖x‖²_J = r}. However, by making the assumption kerA ∩ int K ≠ ∅ and deriving a lower bound for δ(kerA ∩ int K), we make it possible to compare Algorithm 2 with Pena and Soheili's method without knowing the specific value of δ(kerA ∩ int K).

Comparison of Algorithm 2 and Lourenço et al.'s method
Let us consider the computational cost of Algorithm 2. At each iteration, the most expensive operation is computing P_A on line 4. Recall that d is the dimension of the Euclidean space E corresponding to K. As discussed in [10], by considering P_A to be an m × d matrix, we find that the computational cost of P_A is O(m³ + m²d). Therefore, by taking the computational cost of the basic procedure (Algorithm 1) and Proposition 5.2 into consideration, the cost of Algorithm 2 turns out to be (19), where C_sd is the computational cost of the spectral decomposition of x ∈ E.
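For concreteness, the projection onto kerA used on line 4 can be sketched as follows (an illustration with dimensions of our own choosing; forming AAᵀ costs O(m²d) and factorizing it O(m³), matching the count above):

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 4, 10
A = rng.standard_normal((m, d))                       # full row rank (a.s.)
# P_A = I - A^T (A A^T)^{-1} A: forming A A^T is O(m^2 d), solving O(m^3).
P_A = np.eye(d) - A.T @ np.linalg.solve(A @ A.T, A)
assert np.allclose(A @ P_A, 0)                        # P_A maps into ker A
assert np.allclose(P_A @ P_A, P_A)                    # P_A is a projection
```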
Note that, in [10], the authors showed that the cost of their algorithm is (20), where C_min is the cost of computing the minimum eigenvalue of x ∈ E together with the corresponding idempotent, ρ is an input parameter used in their basic procedure (like ξ in the proposed algorithm), and φ(ρ) = 2 − 1/ρ − √(3 − 2/ρ).
When the symmetric cone is simple, by setting ξ = 1/2 and ρ = 2, the maximum number of iterations of the basic procedure is bounded by the same value in both algorithms. Accordingly, we will compare the two computational costs (19) and (20) by supposing ξ = 1/2 and ρ = 2 (hence, −log ξ ≃ 0.69 and φ(ρ) ≃ 0.09). As we can see below, the cost (19) of our method is smaller than (20) in the cases of linear programming and second-order cone problems and is equivalent to (20) in the case of semidefinite problems. First, let us consider the case where K is the n-dimensional nonnegative orthant R^n_+. Here, we see that r = p = d = n, r_1 = · · · = r_p = r_max = 1, and max{C_sd, md} = max{C_min, md} = md hold. Substituting these values into the bounds (19) and (20), we find that for the linear programming case, our method (which is equivalent to Roos's original method [15]) is superior to Lourenço et al.'s method [10] in terms of the bounds (19) and (20).
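The two constants quoted above can be reproduced numerically. This is a sketch under the assumption φ(ρ) = 2 − 1/ρ − √(3 − 2/ρ), which is our reading of the formula for φ:

```python
import math

xi, rho = 0.5, 2.0
phi = 2 - 1/rho - math.sqrt(3 - 2/rho)   # phi(2) = 1.5 - sqrt(2)
print(round(-math.log(xi), 2))           # -log(1/2) = log 2, about 0.69
print(round(phi, 2))                     # about 0.09
```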
Next, let us consider the case where K is composed of p simple second-order cones L^{n_i} (i = 1, …, p). In this case, we see that d = Σ_{i=1}^p n_i, r_1 = · · · = r_p = r_max = 2, and max{C_sd, md} = max{C_min, md} = md hold. Substituting these values into the bounds (19) and (20), note that ε is expected to be very small (10^{-6} or even 10^{-12} in practice) and that (1/0.69) log(1/ε) ≤ (1/0.09)(log(1/ε) − log 2) if ε ≤ 0.451. Thus, even in this case, we may conclude that our method is superior to Lourenço et al.'s method in terms of the bounds (19) and (20).
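The threshold ε ≤ 0.451 can be recovered by solving the equality case (1/0.69) log(1/ε) = (1/0.09)(log(1/ε) − log 2) for ε (a numeric sketch):

```python
import math

# With L = log(1/eps), equality gives 0.09 L = 0.69 L - 0.69 log 2, i.e.
# L* = 0.69 * log(2) / (0.69 - 0.09); below eps* = exp(-L*) the left bound wins.
L_star = 0.69 * math.log(2) / (0.69 - 0.09)
eps_star = math.exp(-L_star)
print(eps_star)                        # about 0.451
# Sanity check at eps = 1e-6, well below the threshold:
L = math.log(1e6)
assert (1/0.69) * L <= (1/0.09) * (L - math.log(2))
```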
Finally, let us consider the case where K is a simple n × n positive semidefinite cone. We see that p = 1, r = n, and d = n(n+1)/2 hold. Substituting these values into the bounds (19) and (20), and using the discussion in Section 6.3 to assume O(C_sd) = O(C_min), we find that the computational bounds of the two methods are equivalent.

Comparison of Algorithm 2 and Pena and Soheili's method
In this section, we assume that K is simple, since [13] presents an algorithm for the simple form. We also assume that kerA ∩ int K ≠ ∅, because Pena and Soheili's method does not terminate if kerA ∩ int K = ∅ and rangeA* ∩ int K = ∅. Furthermore, for the sake of simplicity, we assume that the main algorithm of Pena and Soheili's method applies only to kerA ∩ int K. (Their original method applies the main algorithm to rangeA* ∩ int K as well.) First, we briefly explain the idea of deriving an upper bound for the number of iterations required to find x ∈ kerA ∩ int K in Pena and Soheili's method. Pena and Soheili derive it by focusing on the indicator δ(kerA ∩ int K) := max{det(x) | x ∈ kerA ∩ int K, ‖x‖²_J = r}. If kerA ∩ int K ≠ ∅, then δ(kerA ∩ int K) ∈ (0, 1] holds, and if e ∈ kerA ∩ int K, then δ(kerA ∩ int K) = 1 holds. If e ∈ kerA ∩ int K, then the basic procedure terminates immediately and returns (1/r)e as a feasible solution. Then, they prove that δ(Q_v(kerA) ∩ int K) ≥ 1.5 · δ(kerA ∩ int K) holds if the parameters are appropriately set, and derive an upper bound on the number of scaling steps, i.e., the number of iterations, required to obtain δ(Q_v(kerA) ∩ int K) = 1.
In the following, we obtain an upper bound for the number of iterations of Algorithm 2 using the index δ_supposed(kerA ∩ int K) := max{det(x) | x ∈ kerA ∩ int K, ‖x‖²_J = 1}. Note that δ(kerA ∩ int K) = r^{r/2} · δ_supposed(kerA ∩ int K). In fact, if x* is the point giving the maximum value of δ_supposed(kerA ∩ int K), then the point giving the maximum value of δ(kerA ∩ int K) is √r x*. Also, if kerA ∩ int K ≠ ∅, then δ_supposed(kerA ∩ int K) ∈ (0, 1/r^{r/2}], and if (1/√r)e ∈ kerA ∩ int K, then δ_supposed(kerA ∩ int K) = 1/r^{r/2}.
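The relation δ(kerA ∩ int K) = r^{r/2} · δ_supposed(kerA ∩ int K) rests on the degree-r homogeneity of det, which is easy to verify (a sketch with a fixed symmetric matrix of our own choosing):

```python
import numpy as np

r = 4
X = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 1.5, 0.0],
              [0.0, 0.0, 0.0, 0.5]])   # a symmetric element with rank r = 4
# det is homogeneous of degree r: det(sqrt(r) X) = r^(r/2) det(X),
# which is exactly the factor relating delta and delta_supposed.
lhs = np.linalg.det(np.sqrt(r) * X)
rhs = r ** (r / 2) * np.linalg.det(X)
assert np.isclose(lhs, rhs)
```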
The outline of this section is as follows. First, we show that a lower bound for δ_supposed(kerA ∩ int K) can be derived using the index value δ_supposed(Q_{g^{-1}}(kerA) ∩ int K) of the problem after scaling (Proposition 6.5). Then, using this result, we derive an upper bound for the number of operations required to obtain δ_supposed(Q_{g^{-1}}(kerA) ∩ int K) = 1/r^{r/2} (Proposition 6.6). Finally, we compare the proposed method with Pena and Soheili's method. To prove Proposition 6.3, which is used in the proof of Proposition 6.5, we use the following propositions from [8].
Proposition 6.1 (Theorem 3 of [8]). Let c ∈ E be an idempotent and let N_λ(c) = {x ∈ E | c • x = λx}. Then N_λ(c) is a linear manifold, and if λ ≠ 0, 1/2, 1, then N_λ(c) consists of zero alone. Each x ∈ E can be represented in the form x = x₀ + x_{1/2} + x₁ with x_λ ∈ N_λ(c) in one and only one way. Proof. From Propositions 6.1 and 6.2, for any x ∈ E, there exist a real number λ ∈ R and elements u ∈ N₀(c) and v ∈ N_{1/2}(c) such that

Proposition 6.2 (Theorem 11 of [8]). c ∈ E is a primitive idempotent if and only if N₁(c) = {λc | λ ∈ R}.
c, which implies that ⟨v, c⟩ = 0. Thus, since u ∈ N₀(c) and u • c = 0, ⟨x, c⟩ is determined by the N₁(c)-component of x alone. On the other hand, computing ⟨x, Q_c(x)⟩ directly from these facts, we have shown that ⟨x, Q_c(x)⟩ = ⟨x, c⟩² holds. Note that the proof of Proposition 3 in [13] is not correct, since equation (14) there does not necessarily hold. The above Proposition 6.3 also gives a correct proof of Proposition 3 in [13]; see the computation of ⟨y, Q_{g^{-2}}(y)⟩ in the proof of Proposition 6.5.
Next, using Proposition 6.5, we derive the maximum number of iterations until the proposed method finds x ∈ kerA ∩ int K by using δ (kerA ∩ int K) as in Pena and Soheili's method.
Proof. Let ker Ā be the linear subspace at the start of the k-th iteration of Algorithm 2 and suppose that δ_supposed(ker Ā ∩ int K) ≠ 1/r^{r/2} holds. Then, from Proposition 6.5, we find that δ_supposed(kerA ∩ int K) < ξ^k / r^{r/2}. This implies that δ(kerA ∩ int K) < ξ^k, since δ(kerA ∩ int K) = r^{r/2} · δ_supposed(kerA ∩ int K) holds. By taking the logarithm base ξ, we obtain log_ξ δ(kerA ∩ int K) > k.
From here on, using the above results, we compare the computational complexities of the methods in the case that K is simple and kerA ∩ int K ≠ ∅ holds. Table 1 summarizes the upper bounds on the number of iterations of the main algorithm (UB#iter) of the two methods and the computational costs required per iteration (CC/iter). As in the previous section, the main algorithm requires O(m³ + m²d) operations to compute the projection P_A. Here, BP denotes the computational cost of the basic procedure in each method.
The upper bound on the number of iterations of Algorithm 2 is given by log_ξ δ(kerA ∩ int K) = log_{1.5} δ(kerA ∩ int K) / log_{1.5} ξ, where we should note that 0 < ξ < 1. Since 0 < −1/log_{1.5} ξ ≤ 1 when ξ ≤ 2/3, if ξ ≤ 2/3, then the upper bound on the number of iterations of Algorithm 2 is smaller than that of the main algorithm of Pena and Soheili's method.
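The change-of-base comparison can be checked numerically (a sketch; the values of δ and ξ are illustrative):

```python
import math

def log_base(b, x):
    return math.log(x) / math.log(b)

delta = 0.3                      # illustrative value of delta(kerA ∩ int K)
for xi in (0.5, 2/3):
    ours = log_base(xi, delta)                  # bound for Algorithm 2
    theirs = log_base(1.5, 1/delta)             # bound of Pena-Soheili type
    # change of base: log_xi(delta) = log_1.5(delta) / log_1.5(xi)
    assert math.isclose(ours, log_base(1.5, delta) / log_base(1.5, xi))
    assert ours <= theirs + 1e-12               # smaller whenever xi <= 2/3
```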
Next, Table 2 summarizes the upper bounds on the number of iterations of the basic procedures of the proposed method (UB#iter) and Pena and Soheili's method, and the computational cost required per iteration (CC/iter). It shows the cases of using the von Neumann scheme and the smooth perceptron scheme in each method (corresponding to Algorithm 1 and Algorithm 5 in the proposed method). As in the previous section, C_sd denotes the computational cost required for a spectral decomposition, and C_min denotes the computational cost required to compute only the minimum eigenvalue and the corresponding primitive idempotent. Table 2 shows that the proposed method is superior for finding a point x ∈ kerA ∩ int K.

Computational costs of C sd and C min
This section discusses the computational cost required for spectral decomposition C sd and the computational cost required to compute only the minimum eigenvalue and the corresponding primitive idempotent C min .
There are so-called direct and iterative methods for eigenvalue computation, briefly described on pp. 139-140 of [4]. (Note that [4] also remarks that there is no direct method in the strict sense, since finding eigenvalues is mathematically equivalent to finding the zeros of a polynomial.)
In general, when using a direct method of O(n³), we have C_sd = O(n³) and C_min = O(n³). The Lanczos algorithm is a typical iterative algorithm for sparse matrices. Its cost per iteration, dominated by one matrix-vector product, is O(n²). Suppose the number of iterations needed to obtain a sufficiently accurate solution is constant with respect to the matrix size; in that case, the overall computational cost of the algorithm is O(n²). Corollary 10.1.3 in [7] discusses the number of iterations that yields sufficient accuracy: fewer iterations can be expected when the ratio of "the difference between the smallest and second smallest eigenvalues" to "the difference between the second smallest and largest eigenvalues" is larger. However, it is generally difficult to assume that this ratio is independent of the matrix size and sufficiently large. Thus, even in this case, we cannot take advantage of the condition that we only need the minimum eigenvalue, and we conclude that it is reasonable to consider O(C_sd) = O(C_min).
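As an illustration of C_min versus C_sd, the sketch below computes the minimum eigenvalue and its primitive idempotent via a full spectral decomposition (numpy's `eigh`, a direct O(n³)-type method; the helper name is ours), consistent with the conclusion O(C_sd) = O(C_min):

```python
import numpy as np

def min_eig_and_idempotent(X):
    # Full spectral decomposition; numpy's eigh returns eigenvalues in
    # ascending order, so the first pair gives lambda_min and its vector.
    w, V = np.linalg.eigh(X)
    u = V[:, 0]                       # unit eigenvector for lambda_min
    return w[0], np.outer(u, u)       # primitive idempotent c = u u^T

X = np.diag([3.0, 1.0, 2.0])
lam, c = min_eig_and_idempotent(X)
assert np.isclose(lam, 1.0)
assert np.allclose(c @ c, c)          # c is idempotent
assert np.allclose(X @ c, lam * c)    # X c = lambda_min(X) c
```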

Numerical experiments

7.1 Outline of numerical implementation
Numerical experiments were performed using the authors' implementations of the algorithms on a positive semidefinite optimization problem with one positive semidefinite cone K = S^n_+ of the form (P), where S^n_{++} denotes the interior of K = S^n_+. We created strongly feasible ill-conditioned instances, i.e., instances for which kerA ∩ S^n_{++} ≠ ∅ and X ∈ kerA ∩ S^n_{++} has positive but small eigenvalues. We explain how to make such an instance in Section 7.2. In what follows, we refer to Lourenço et al.'s method [10] as Lourenço (2019) and Pena and Soheili's method [13] as Pena (2017). We set the termination parameter to ξ = 1/4 in our basic procedure. The reason for setting ξ = 1/4 is to prevent the square root of ξ from becoming an infinite decimal and to prevent the upper bound on the number of iterations of the basic procedure from becoming too large. We also set the accuracy parameter to ε = 1e-12, both in our main algorithm and in Lourenço (2019), and determined whether P_{S∞}(A) or P_{S1}(A) has a solution whose minimum eigenvalue is greater than or equal to ε. Note that [13] proposed various update schemes for the basic procedure. In our numerical experiments, all methods employed the modified von Neumann scheme (Algorithm 4) with the identity matrix as the initial point and the smooth perceptron scheme (Algorithm 5). This implies that the basic procedures used in the three methods differ only in the termination conditions for returning to the main algorithm and that all other steps are the same. All executions were performed using MATLAB R2022a on an Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz machine with 16GB of RAM. Note that we computed the projection P_A using the MATLAB function for the singular value decomposition. The projection P_A was given by P_A = I − Aᵀ(AAᵀ)⁻¹A, using the matrix A ∈ R^{m×d} that represents the linear operator A(·) and the identity matrix I.
Here, suppose that the singular value decomposition of the matrix A is given by A = UΣVᵀ = U(Σ_m O)Vᵀ, where U ∈ R^{m×m} and V ∈ R^{d×d} are orthogonal matrices and Σ_m ∈ R^{m×m} is a diagonal matrix with the m singular values on its diagonal. Substituting this decomposition into Aᵀ(AAᵀ)⁻¹A yields the expression used in our implementation. In what follows, X̄ ∈ S^n denotes the output obtained from the main algorithm and X* the result scaled as a solution of the original problem P(A), multiplied by a real number such that λ_max(X*) = 1. When X* was obtained, we defined the residual of the constraints as the value of ‖A(X*)‖₂.
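The SVD route can be sketched as follows: with A = U(Σ_m O)Vᵀ of full row rank, Aᵀ(AAᵀ)⁻¹A = V_m V_mᵀ, where V_m collects the first m right singular vectors (an illustration in Python rather than MATLAB; the dimensions are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
m, d = 3, 7
A = rng.standard_normal((m, d))                      # full row rank (a.s.)
U, s, Vt = np.linalg.svd(A, full_matrices=True)      # A = U (Sigma_m  O) V^T
V_m = Vt[:m].T                                       # first m right singular vectors
P_svd = np.eye(d) - V_m @ V_m.T                      # P_A = I - V_m V_m^T
P_direct = np.eye(d) - A.T @ np.linalg.solve(A @ A.T, A)
assert np.allclose(P_svd, P_direct)                  # the two routes agree
```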
We also solved the following problem with a commercial code, Mosek [12], and compared it with the output of Chubanov's methods. Mosek solves the self-dual embedding model using a path-following interior-point method, so if we obtain a solution (X*, y*), then X* and −A*y* lie in the (approximate) relative interiors of the primal and dual feasible regions, respectively [20]. That is, X* obtained by solving a strongly feasible problem with Mosek is in S^n_{++}, X* obtained by solving a weakly feasible problem is in S^n_+ \ S^n_{++}, and X* obtained by solving an infeasible problem is X* = O (i.e., −A*y* ∈ S^n_{++}). As with Chubanov's methods, we computed ‖A(X*)‖₂ for the solution obtained by Mosek after scaling so that λ_max(X*) would be 1. Note that (P) and (D) do not simultaneously have feasible interior points. In general, it is difficult to solve such problems stably using interior-point methods, but since strong complementarity holds between (P) and (D), they can be expected to be solved stably. By applying Lemma 3.4 of [11], we can generate a problem in which both the primal and dual problems have feasible interior points and from which it can be determined whether (P) has a feasible interior point. However, since there was no significant difference between the solution obtained by solving the problem generated by applying Lemma 3.4 of [11] and the solution obtained by solving the above (P) and (D), we show only the results of solving (P) and (D).

How to generate instances
Here, we describe how the strongly feasible ill-conditioned instances were generated. In what follows, for any natural numbers m and n, rand(n) is a function that returns an n-dimensional real vector whose elements are uniformly distributed in the open interval (0, 1), and rand(m, n) is a function that returns an m × n real matrix whose elements are uniformly distributed in (0, 1). Furthermore, for any x ∈ R^n and X ∈ R^{m×n}, diag(x) ∈ R^{n×n} is a function that returns a diagonal matrix whose diagonal elements are the elements of x, and vec(X) ∈ R^{mn} is a function that returns the vector obtained by stacking the n column vectors of X. The strongly feasible ill-conditioned instances were generated by extending the method of generating ill-conditioned strongly feasible instances proposed in [14] to the symmetric cone case. Consider the linear operator A(x) = (⟨a₁, x⟩, ⟨a₂, x⟩, …, ⟨a_m, x⟩)ᵀ for which a₁ = ū − x̄⁻¹ and ⟨a_j, x̄⟩ = 0 hold for any j = 2, …, m. Then, (21) holds. Proof. First, note that the assertion (21) is equivalent to (22). From the assumptions, we see that x̄ ∈ K, ‖x̄‖_∞ ≤ 1, and ⟨a₁, x̄⟩ = ⟨ū − x̄⁻¹, x̄⟩ = r − r = 0; thus, A(x̄) = 0 and x̄ ∈ F. Since ∇ log det(x) = x⁻¹, if x̄ satisfies ⟨x − x̄, x̄⁻¹⟩ ≤ 0 for any x ∈ F, we can conclude that (22) holds. In what follows, we show that (23) holds.
Proposition 7.1 guarantees that we can generate a linear operator A satisfying kerA ∩ S^n_{++} ≠ ∅ by determining an appropriate value µ = max_{X∈F} det(X), where F = {X ∈ S^n | X ∈ S^n_{++} ∩ kerA, ‖X‖_∞ = 1}. The details of how to generate the strongly feasible instances are given in Algorithm 3. The input consists of the order n of the semidefinite cone, the number of constraints m, an arbitrary orthogonal matrix P, and a parameter τ ∈ R_{++} that determines the value of µ. We made instances for which the value of µ satisfies 1e-τ ≤ µ ≤ 1e-(τ−1). In the experiments, we set τ ∈ {50, 100, 150, 200, 250} so that µ would vary around 1e-50, 1e-100, 1e-150, 1e-200, and 1e-250.
Note that Algorithm 3 generates instances using an x̄ that has a natural eigenvalue distribution. For example, let n − 1 = 3 and consider two matrices X: one has three eigenvalues of about 1e-2, and the other has one each of 1e-1, 1e-2, and 1e-3. Both give det(X) ≃ 1e-6, but the latter has the more natural eigenvalue distribution. In our experiment, we generated ill-conditioned instances by using an X with a natural eigenvalue distribution as follows:
1. Find an integer s that satisfies 1e-s ≤ l.
2. Generate t = 2s − 1 eigenvalue classes.
3. Decide how many eigenvalues to generate for each class.
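The idea of eigenvalue classes can be illustrated as follows (a hypothetical sketch; the class layout is ours, not the paper's Algorithm 3):

```python
import numpy as np

# Instead of giving all eigenvalues the same magnitude, spread them over
# t = 2s - 1 decade-wide classes centered on 1e-s; this yields a more
# "natural" spectrum with the same determinant.
s = 2
classes = [10.0 ** (-k) for k in range(1, 2 * s)]   # 1e-1, 1e-2, 1e-3 (t = 2s - 1)
eigs_flat = np.full(3, 1e-2)                        # three eigenvalues of about 1e-2
eigs_nat = np.array([1e-1, 1e-2, 1e-3])             # one eigenvalue from each class
assert np.isclose(np.prod(eigs_flat), np.prod(eigs_nat))  # both give det ~ 1e-6
assert len(classes) == 2 * s - 1
```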

Numerical results and observations
We set the size of the positive semidefinite matrix to n = 50 so that the computational experiments could be performed in a reasonable period of time. To eliminate bias in the experimental results, we generated instances in which the number of constraints m was controlled using a parameter ν relative to the number n(n+1)/2 of variables in a symmetric matrix of order n. Specifically, m was obtained by rounding the value of (n(n+1)/2)·ν to an integer, where ν ∈ {0.1, 0.3, 0.5, 0.7, 0.9}. For each ν, we generated five instances, i.e., 25 instances for each of the five strongly feasible cases (corresponding to the five patterns µ ≃ 1e-50, …, µ ≃ 1e-250; see Section 7.2 for details). Thus, we generated 125 strongly feasible instances. We set the upper limit of the execution time to 2 hours and compared the performance of our method with those of Lourenço (2019), Pena (2017), and Mosek. When using Mosek, we set the primal feasibility tolerance to 1e-12.
Tables 4 and 5 list the results for the (ill-conditioned) strongly feasible case. The "CO-ratio" column shows the ratio |N₂|/|N₁|, where N₁ is the set of problems for which the algorithm terminated within the 2-hour execution-time limit and N₂ ⊆ N₁ is the set of problems for which a correct output was obtained; the "time(s)" column shows the average CPU time of the method; the "M-iter" column shows the average number of iterations of each main algorithm; the ‖A(X*)‖₂ column shows the residual of the constraints; and the λ_min(X*) column shows the minimum eigenvalue of X*. The "BP" column shows which scheme (the modified von Neumann (MVN) or the smooth perceptron (SP)) was used in the basic procedure. The values in parentheses in row µ ≈ 1e-100 are the average values excluding instances for which the method ran out of time.
Table 5: Results for ill-conditioned strongly feasible instances (‖A(X*)‖₂ and λ_min(X*))
First, we compare the results when using MVN or SP as the basic procedure in each method. From Table 4, we can see that for strongly feasible problems, using SP as the basic procedure gives a shorter average execution time than using MVN. Next, we compare the results of each method. For µ ≃ 1e-50, there was no significant difference in performance among the three methods. For µ ≤ 1e-100, the results in the rows BP=MVN show that our method and Lourenço (2019) obtained interior feasible solutions for all problems, while Pena (2017) ran out of time for 99 instances. This is because Pena (2017) needs to call its basic procedure to find a solution of rangeA* ∩ S^n_{++}. Comparing our method with Lourenço (2019), we see that Algorithm 2 is superior in terms of CPU time. Finally, we compare the results for each value of µ. As µ becomes smaller, i.e., as the problem becomes more ill-conditioned, the number of scalings and the execution time increase, and the accuracy of the obtained solution worsens. Table 6 summarizes the results of our experiments using Mosek to solve the strongly feasible ill-conditioned instances. Mosek sometimes returned the error message "rescode = 10006" for the µ ≤ 1e-200 instances. This error message means that "the optimizer is terminated due to slow progress." In this case, the obtained solution is not guaranteed to be optimal, but it may have sufficient accuracy as a feasible solution. Therefore, we counted an output as correct for the CO-ratio when the residual ‖A(X*)‖₂ was less than or equal to 1e-5. The reason for setting the threshold to 1e-5 is that the maximum value of ‖A(X*)‖₂ among the X* obtained for the strongly feasible ill-conditioned instances by the three methods, Algorithm 2, Lourenço (2019), and Pena (2017), was less than 1e-5. On the other hand, for the µ ≤ 1e-200 instances, the Chubanov-type methods had higher CO-ratios.
That is, when the problem was quite ill-conditioned, the solution obtained by each of the Chubanov methods had a smaller value of A(X * ) 2 compared with the solution obtained by Mosek, which implies that the accuracy of the solution obtained by each of the Chubanov methods was higher than that of Mosek.

Concluding remarks
In this study, we proposed a new version of Chubanov's method for solving the feasibility problem over the symmetric cone by extending Roos's method [15] for the feasibility problem over the nonnegative orthant. Our method has the following features:
• Using the norm ‖·‖_∞ in problem P_{S∞}(A) makes it possible to (i) calculate the upper bound for the minimum eigenvalue of any feasible solution of P_{S∞}(A), (ii) quantify the feasible region of P(A), and hence (iii) determine whether there exists a feasible solution of P(A) whose minimum eigenvalue is greater than ε, as in [10].
• In terms of the computational bound, our method is (i) equivalent to Roos's original method [15] and superior to Lourenço et al.'s method [10] when the symmetric cone is the nonnegative orthant, (ii) superior to Lourenço et al.'s when the symmetric cone is a Cartesian product of second-order cones, (iii) equivalent to Lourenço et al.'s when the symmetric cone is the simple positive semidefinite cone, under the assumption that the costs of computing the spectral decomposition and the minimum eigenvalue are of the same order for any given symmetric matrix, and (iv) superior to Pena and Soheili's method [13] for any simple symmetric cone under the assumption that P(A) is feasible.
We also conducted comprehensive numerical experiments comparing our method with the existing Chubanov-type methods [10, 13] and Mosek. Our numerical results showed that:
• Our method is considerably faster than the existing methods on ill-conditioned strongly feasible instances.
• Mosek was better than Chubanov's methods in terms of execution time. On the other hand, in terms of the accuracy of the solution (the value of ‖A(X*)‖₂), all of the Chubanov-type methods were better than Mosek. In particular, we observed such results for the strongly feasible, terribly ill-conditioned (µ ≃ 1e-250) instances.
In this paper, we performed the computer experiments with ξ = 1/4 in the basic procedure to avoid sources of calculation error, but there is room for further study on how to choose the value of ξ. For example, if the problem size is large, computing the projection P_A is expected to take much more time. In this case, running the algorithm with ξ < 1/4 may reduce the number of scaling steps performed before completion, and as a result, the algorithm's run time may be shorter than with ξ = 1/4. A more desirable approach may be to choose an appropriate value of ξ at each iteration according to the algorithm's progress.
Same as lines 5-23 of Algorithm 1
7: k ← k + 1, z^k ← P_A(y^k) and v^k ← y^k − z^k
10: end while

B Another new main algorithm
Here, we introduce another new main algorithm, Algorithm 6, whose computational cost may not be polynomially bounded but which may be better at detecting ε-feasible solutions.

B.1 Outline of Algorithm 6
The procedure of Algorithm 6 is almost identical to that of Algorithm 2, except for one of the termination criteria (the criterion indicating the non-existence of ε-feasible solutions). Specifically, to set the upper bound for the minimum eigenvalue of any feasible solution x of P_{S∞}(A), Algorithm 2 focuses on the product det(x̄) of the eigenvalues of an arbitrary feasible solution x̄ of the scaled problem P_{S∞}(A^k Q^k), while Algorithm 6 focuses on the sum ⟨x̄, e⟩ of the eigenvalues. Algorithm 6 works as follows. Table 7 lists upper bounds on the numbers of iterations required by Algorithms 2 and 6. As shown in the table, Algorithm 2 can be said to be a polynomial-time algorithm, but Algorithm 6 cannot. On the other hand, the results of the numerical experiments in Appendix C.2 show that Algorithm 6 is superior to Algorithm 2 at detecting ε-feasibility for the generated instances.
{1, …, p}, the ℓ-th block element x_ℓ of x satisfies (24). Proof. In Algorithm 6, m_ℓ is updated only when |H^k_ℓ| > 0. Suppose that, at the end of the k-th iteration of Algorithm 6, the last update of m_ℓ was at the k′(≤ k)-th iteration. Then, the stopping criterion of the basic procedure guarantees that, at the beginning of the k′-th iteration, Q̄_ℓ satisfies an inequality that gives a lower bound for |H^{k′}_ℓ|. Using the fact that x_ℓ − λ_min(x_ℓ)e_ℓ ∈ K_ℓ, we obtain (27). Next, suppose that, at the beginning of the k′-th iteration of Algorithm 6, the last update of m_ℓ had been performed at the i(< k′)-th iteration.
Let Q̄_ℓ^pre be the Q̄_ℓ obtained at the beginning of the i-th iteration of Algorithm 6, and let Q_{g_ℓ}^pre and m_ℓ^pre be the Q_ℓ and m_ℓ obtained after the update at the i-th iteration. Note that the Q̄_ℓ at the beginning of the k′-th iteration of Algorithm 6 can be represented as Q̄_ℓ = Q̄_ℓ^pre Q^pre. Thus, from (12), we obtain (28). By recursively applying (28) to Q̄_ℓ^pre(e_ℓ), we finally obtain (29). Let m_ℓ^{k′} be the value of m_ℓ obtained after the update at the k′-th iteration. Then, by (27), we obtain a bound on m_ℓ^{k′}. Since, at the end of the k-th iteration of Algorithm 6, the last update of m_ℓ was at the k′-th iteration, we see that m_ℓ = m_ℓ^{k′}, and hence (24) holds after k iterations of Algorithm 6.
Using Proposition B.1, we find an upper bound on the number of iterations of Algorithm 6.
Proposition B.2. Algorithm 6 terminates after no more than ξ… iterations. Proof. When |H^k_ℓ| > 0 for some ℓ ∈ {1, …, p} at the k-th iteration of Algorithm 6, we say that the iteration is "good" for the ℓ-th block. From Proposition B.1, since the (meaningful) upper bound on the minimum eigenvalue λ_min(x_ℓ) of the ℓ-th block x_ℓ of any feasible solution x of P_{S∞}(A) depends on the value of m_ℓ, we first calculate a lower bound for the increment of m_ℓ per good iteration in the ℓ-th block. As in the proof of Proposition B.1, suppose that the k′-th iteration is a good iteration for the ℓ-th block. As shown in equation (29), the value of m_ℓ is increased at this time by the inner product of e_ℓ with its image under the Q̄_ℓ at the beginning of the k′-th iteration. Let us express Q_{g_ℓ^{-1}} using the g_ℓ obtained at the k-th iteration. Then, the increment of m_ℓ at the k′-th iteration is given by (30).

C Additional numerical experiments
In addition to the strongly feasible instances described in section 7.2, we generated the following two types of instances and conducted numerical experiments.

C.1 How to generate instances used in the additional experiments
Here, we describe how the weakly feasible instances and infeasible instances were generated. Note that, due to rounding errors in the numerical computation, the weakly (ill-conditioned strongly) feasible instances generated in this experiment may not actually be weakly (ill-conditioned strongly) feasible, and could instead be infeasible or interior feasible. Thus, the unqualified terms may not be appropriate, and it would be more accurate to call them "fragilely weakly feasible" ("fragilely ill-conditioned strongly feasible").

C.1.1 Weakly feasible instances
The weakly feasible instances were generated by Algorithm 7, which draws A′_i ← rand(n, n) and … For any A ∈ R^{m×n²} returned by Algorithm 7, no X ∈ S^n_{++} exists that satisfies A(vec(X)) = 0, but an X ∈ S^n_+ \ {O} exists that satisfies A(vec(X)) = 0.
Proof. First, we show that an X ∈ S^n_+ \ {O} exists that satisfies A(vec(X)) = 0. For the matrix C₊ ∈ S^n_+ computed on line 5 of Algorithm 7, we see that C₊ ≠ O and the following holds: Next, we show by contradiction that no X ∈ S^n_{++} exists that satisfies A(vec(X)) = 0. Suppose that some X ∈ S^n_{++} satisfies A(vec(X)) = 0. Since the first row of A is vec(C₋)ᵀ, if A(vec(X)) = 0 holds, then vec(C₋)ᵀ vec(X) = 0, where C₋ = PDPᵀ, X = QEQᵀ, P and Q are orthogonal matrices, and D and E are diagonal matrices.

C.1.2 Infeasible instances
The infeasible instances were generated by Algorithm 8. If we define the linear operator A : S^n → R^m as A(X) = (⟨A₁, X⟩, …, ⟨A_m, X⟩)ᵀ, then by choosing A₁ ∈ S^n_{++}, we obtain an A such that kerA ∩ S^n_+ = {O}. On the basis of this observation, by introducing a parameter α > 0, we generated a positive definite matrix A₁ whose minimum eigenvalue is a uniformly distributed random number in (0, α). We chose α ∈ {1e-1, 1e-2, 1e-3, 1e-4, 1e-5}. The input of Algorithm 8 consists of the order n of the semidefinite cone, the number of constraints m, an arbitrary orthogonal matrix P, and the parameter α > 0.
Note that the first row of the matrix A returned by Algorithm 8 is vec(B + ) T . Since B + ∈ S n ++ , we see that vec(B + ) T vec(X) > 0 for any positive definite matrix X ∈ S n ++ . Thus, there is no X ∈ S n ++ satisfying A (vec(X)) = 0, which implies that the generated instance is infeasible.
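The argument above can be checked numerically (a sketch; the matrix sizes and sampling are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
G = rng.standard_normal((n, n))
B_plus = G @ G.T + np.eye(n)              # a positive definite first-row matrix
assert np.linalg.eigvalsh(B_plus)[0] > 0  # B_plus lies in the interior of S^n_+
for _ in range(5):
    H = rng.standard_normal((n, n))
    X = H @ H.T + 0.1 * np.eye(n)         # an arbitrary X in S^n_{++}
    # <B_plus, X> = vec(B_plus)^T vec(X) = trace(B_plus X) > 0, so no
    # positive definite X can satisfy A(vec(X)) = 0.
    assert np.trace(B_plus @ X) > 0
```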

C.2 Additional numerical results and observations
As in Section 7, we set the size of the semidefinite matrix to n = 50 and determined the number of constraints m using ν ∈ {0.1, 0.3, 0.5, 0.7, 0.9}. For each ν, we generated five instances, i.e., 25 instances for the weakly feasible case and 25 instances for each of the five infeasible cases (corresponding to the five patterns α = 1e-1, …, α = 1e-5; see Section C.1.2 for details). Thus, we generated 25 weakly feasible instances and 125 infeasible instances. We set the upper limit of the execution time to 2 hours. Table 8 summarizes the results for the infeasible instances. Similarly to Table 4, the "CO-ratio" and "time(s)" columns respectively show the ratio of correct outputs and the average CPU time of each method (the values in parentheses in rows α = 1e-4 and α = 1e-5 are the average CPU times excluding the instances for which the method ran out of time). When using MVN as the basic procedure, whereas our method and Lourenço (2019) found an element of rangeA* ∩ S^n_+ for all instances, Pena (2017) ran out of time for one instance each for α = 1e-4 and α = 1e-5.
From the results for the infeasible instances, we can observe the following three points. First, our method obtained correct outputs for every instance in a short execution time. This is likely because it employs an efficient scaling and quickly finds an element of rangeA* ∩ S^n_+. Second, the method of Pena (2017) obtained better results when SP was used as the basic procedure: as shown in Table 8, it then solved all problems and had shorter execution times than with MVN. Since Pena's (2017) method calls the basic procedure not only to find points in kerA ∩ S^n_++ but also to find points in rangeA* ∩ S^n_++, using SP, which updates the iterates of the basic procedure efficiently, is preferable to MVN in terms of execution time. Third, using SP does not always detect infeasibility (i.e., find a point in rangeA* ∩ S^n_+) in a shorter execution time than using MVN. In fact, for Lourenço (2019), the execution time is shorter with MVN as the basic procedure than with SP. SP updates the iterates more efficiently than MVN in terms of satisfying a termination criterion of the basic procedure (the criterion for moving to scaling); from the point of view of finding points in rangeA* ∩ S^n_+, however, neither SP nor MVN can be said to be more suitable in general. The large speedup Pena (2017) obtains from SP results from updating the basic procedure for finding points in rangeA* ∩ S^n_++ more efficiently than MVN does. Mosek obtained a point in rangeA* ∩ S^n_++ as a feasible solution to the dual problem for all instances and, from the viewpoint of execution time, was superior to the other methods.
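To make the comparison above concrete, the following is a hedged sketch of a von Neumann-type basic procedure for the LP case (the nonnegative-orthant analogue of the procedures compared here). It is a simplified illustration of the update style only, not the exact MVN or SP procedure of the experiments; the function name and stopping rule are assumptions.

```python
import numpy as np

def von_neumann_basic_procedure(P, max_iter=10000, tol=1e-12):
    """Simplified von Neumann-type basic procedure (LP-case sketch).

    P is the orthogonal projection matrix onto kerA. We maintain y in the
    unit simplex and z = P y, and stop when z > 0, i.e., when z is a point
    of kerA in the positive orthant. Illustration only, not the paper's
    MVN or SP.
    """
    n = P.shape[0]
    y = np.full(n, 1.0 / n)
    z = P @ y
    for _ in range(max_iter):
        if np.all(z > tol):
            return "feasible", z              # z ∈ kerA, z > 0
        k = int(np.argmin(z))                 # a coordinate with z_k ≤ 0
        p_k = P[:, k]                         # p_k = P e_k
        denom = np.dot(z - p_k, z - p_k)
        if denom <= tol:                      # z ≈ p_k: no further progress
            break
        # step length minimizing ||alpha * z + (1 - alpha) * p_k||
        alpha = np.clip(np.dot(p_k, p_k - z) / denom, 0.0, 1.0)
        y *= alpha
        y[k] += 1.0 - alpha
        z = alpha * z + (1.0 - alpha) * p_k
    return "undecided", z
```

Each iteration shrinks ‖z‖ by a guaranteed amount; SP-type procedures replace this single-coordinate update with a smoothed update that needs fewer iterations to reach the criterion for moving to scaling, which is the trade-off discussed above.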
For the weakly feasible instances, we compared our method (Algorithm 2), a modified version with another criterion for ε-feasibility (Algorithm 6), Lourenço (2019), and Pena (2017). The results are summarized in Table 9. As described above, we classified the outputs into type A: an interior feasible solution is found; type B: no interior feasible solution is found (ver. 1); type C: no ε-feasible solution is found (only for Lourenço (2019) and our methods); type D: no interior feasible solution is found (ver. 2; only for Pena (2017)); and type E: out of time. Note that B* indicates that the output was B but, when we converted the obtained solution into a solution of D(A), it contained a negative eigenvalue and violated the SDP constraint. Note also that, due to rounding errors, the true status of each generated weakly feasible instance is unknown, so it is impossible to determine whether the obtained results are correct; Table 9 therefore lists the output types and average execution times without noting which are correct.
From Table 9, we can observe the following:
• For all the methods, the average execution time was shorter when SP was used as the basic procedure than when MVN was used.
• All methods except Algorithm 6 sometimes returned output type A (an interior feasible solution is found). Pena (2017) returned output type D, while the obtained solution, when converted into a solution of P(A), had 0 to 5 negative eigenvalues (about -1e-16) and more than 20 positive eigenvalues (less than 1e-12).
• Lourenço (2019) sometimes returned output type B*: the output was B (no interior feasible solution is found), but the obtained solution, when converted into a solution of D(A), had 1 to 3 negative eigenvalues (about -1e-6) and thus violated the SDP constraint.
• Our modified method (Algorithm 6) was able to determine the existence of an ε-feasible solution for all instances. This implies that, at least for this specific set of weakly feasible instances, the criterion used in Algorithm 6, which focuses on the total value of the eigenvalues, is more suitable than the criterion focusing on the product of all the eigenvalues.
Table 10 summarizes the results obtained by Mosek. The error message "rescode = 10006" was returned for 22 instances, similar to the results for the strongly feasible ill-conditioned instances. Note that we regarded feasible solutions as obtained for all instances, since the constraint residual ‖A(X*)‖_2 was as small as 1.1e-7 or less for every obtained solution.
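The contrast between the two criteria in the last bullet can be illustrated with a toy computation. This is a sketch of the floating-point intuition only; the eigenvalue vector `eigs` is an invented example, and the quantities below are not the paper's exact criteria.

```python
import numpy as np

# In a weakly feasible direction one eigenvalue approaches 0, so a
# product-based quantity (the determinant) collapses toward
# floating-point underflow long before a sum-based quantity (the trace)
# loses information, which makes a threshold on the sum easier to apply
# in finite precision.
eigs = np.array([1.0, 1.0, 1e-300])
product = float(np.prod(eigs))   # 1e-300: at the edge of representability
total = float(np.sum(eigs))      # ≈ 2.0: still well scaled
print(product, total)
```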
Note that for all problems with 0.1 ≤ ν ≤ 0.7, Algorithm 2 and Lourenço (2019) using SP for the basic procedure returned output A, i.e., a feasible solution to the original problem. Table 11 summarizes the accuracies of the solutions obtained with Algorithm 2, Lourenço (2019), and Mosek for all instances with 0.1 ≤ ν ≤ 0.7. Chubanov's methods sometimes returned output type A for weakly feasible instances, and Table 11 shows that the average accuracy of the feasible solutions obtained by Chubanov's methods was better than that of Mosek.
For our method, Proposition 5.1 ensures that log(λ_min(x_ℓ)) is bounded from above by (num_ℓ / r_ℓ) log ξ, and Proposition 5.2 ensures that this bound decreases by −(1/r_ℓ) log ξ > 0 in the same situation. By substituting ρ = 2 and ξ = 1/2 into ϕ(ρ) and −log ξ so that the upper bounds for the numbers of iterations of the basic procedures are the same, we obtain
ϕ(2) = 2 − 1/2 − √2 ≃ 0.085786,  −log(1/2) = log 2 ≃ 0.693147,
which implies that the rate of reduction in the upper bound of log(λ_min(x_ℓ)) of our method is greater than that of Lourenço (2019).
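The two rates quoted above can be checked numerically (a sketch; it only verifies the arithmetic of the two constants, under the reading ϕ(2) = 2 − 1/2 − √2):

```python
import math

# Numerical check of the two reduction rates quoted above (rho = 2, xi = 1/2).
phi2 = 2 - 1 / 2 - math.sqrt(2)   # rate associated with Lourenço (2019)
rate = -math.log(1 / 2)           # rate of our method: log 2
print(f"{phi2:.6f} {rate:.6f}")   # 0.085786 0.693147
```

Since log 2 ≃ 0.693147 is roughly eight times ϕ(2) ≃ 0.085786, the per-call reduction of the upper bound is markedly larger for our method.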