On decompositions and approximations of conjugate partial-symmetric tensors

Hermitian matrices have played an important role in matrix theory and complex quadratic optimization. Their high-order generalization, conjugate partial-symmetric (CPS) tensors, has attracted growing interest recently in tensor theory and computation, particularly in application-driven complex polynomial optimization problems. In this paper, we study CPS tensors with a focus on ranks, computing rank-one decompositions and approximations, as well as their applications. We prove constructively that any CPS tensor can be decomposed into a sum of rank-one CPS tensors, which provides an explicit method to compute such rank-one decompositions. Three types of ranks for CPS tensors are defined and shown to be different in general, which implies the invalidity of the conjugate version of Comon's conjecture. We then study rank-one approximations and matricizations of CPS tensors. By carefully unfolding CPS tensors to Hermitian matrices, rank-one equivalence can be preserved. This enables us to develop new convex optimization models and algorithms to compute best rank-one approximations of CPS tensors. Numerical experiments on data sets from radar waveform design, elasticity tensors, and quantum entanglement are performed to demonstrate the capability of our methods.


Introduction
As the complex counterpart of real symmetric matrices, Hermitian matrices are often considered to play a more important role than complex symmetric matrices in practice. This is mainly due to the fact that any complex quadratic form generated by a Hermitian matrix always takes real values and that all the eigenvalues of a Hermitian matrix are real. This is especially important in many applications, for instance, in quantum physics, where Hermitian matrices are operators that measure properties of a system, e.g., total spin, which has to be real, and in mathematical optimization, where objective functions need to be real-valued. Generalizing to high-order tensors, symmetric tensors, whether over the real or the complex field, have received enormous attention in the recent decade. However, the high-order generalization of Hermitian matrices was not formally proposed in mathematics until recently, by Jiang et al. [24], who named them conjugate partial-symmetric (CPS) tensors. The concept of CPS tensors was later generalized to conjugate non-symmetric tensors, called Hermitian tensors, by Ni [32]. Nevertheless, various examples of CPS tensors can be extracted from real applications in the form of complex polynomial optimization problems. Aittomaki and Koivunen [1] considered beampattern optimization and formulated it as a complex multivariate quartic minimization model. Sorber et al. [41] applied unconstrained complex optimization to study the simulation of nonlinear circuits in the frequency domain. Aubry et al. [2] modeled a radar signal processing problem by optimizing a complex quartic polynomial that always takes real values. Josz [26] investigated applications of complex polynomial optimization to electricity transmission networks. Moreover, Madani et al. [30] studied power system state estimation via complex polynomial optimization.
There have been several discussions on high-order generalizations of Hermitian matrices, both earlier and recently. The Hermitian tensor product, defined via the Kronecker product of Hermitian matrices, has been studied since the 1960s [27,31]. It has many applications in quantum entanglement [9] and enjoys certain nice properties; for instance, the Kronecker product of Hermitian matrices remains a Hermitian matrix. Fourth-order cumulant tensors in multivariate statistics were proposed and applied to blind source separation [6]. Such tensors are called quadricovariance tensors, which form a special class of fourth-order CPS tensors. In material sciences [20,38], an elasticity tensor is a three-dimensional fourth-order CPS tensor. Ni et al. [33] proposed unitary eigenvalues and unitary symmetric eigenvalues for complex tensors and symmetric complex tensors, respectively, and demonstrated a relation to the geometric measure of quantum entanglement. Zhang et al. [44] studied the unitary eigenvalues of non-symmetric complex tensors. Jiang et al. [24] characterized real-valued complex polynomial functions and their symmetric tensor representations, which naturally led to the definition of CPS tensors as well as its generalization called conjugate super-symmetric tensors. Eigenvalues and applications of these tensors were discussed as well. Derksen et al. [8] studied entanglement of d-partite systems in the field of quantum mechanics and introduced the notion of bisymmetric Hermitian tensors, which is essentially the same as the definition of CPS tensors in [24]. Some elementary properties of bisymmetric Hermitian tensors were discussed. Recently, Nie and Yang [36] studied decompositions of Hermitian tensors.
CPS tensors inherit many nice properties of Hermitian matrices. For instance, every symmetric complex form generated by a CPS tensor is real-valued, and all the eigenvalues of a CPS tensor are real [24]. In contrast to the many efforts on the optimization aspect [12,13,19,23,40,45] of CPS tensors, the current paper aims at their decompositions, ranks, and approximations, which are important topics for high-order tensors. As is well known, the generalization of matrices to high-order tensors keeps many nice properties while also leading to interesting new findings; CPS tensors, as a generalization of Hermitian matrices in terms of the order and a generalization of real symmetric tensors in terms of the complex field, should be expected to behave likewise. One of our findings states that the analogue of Comon's conjecture, i.e., that the symmetric rank of a symmetric tensor is equal to the rank of the tensor, is actually invalid for CPS tensors, as witnessed by a simple example. We believe the analysis along this line will provide novel insights into CPS tensors, and we hope these new findings will help in future modelling of practical applications. In fact, one of our results, on rank-one equivalence via square matricization, helps to develop new models and algorithms to compute best rank-one approximations and extreme eigenvalues of CPS tensors.
The study of CPS tensors in this paper focuses on ranks, rank-one decompositions and approximations, as well as their applications. The analysis is conducted alongside a more general class of complex tensors called partial-symmetric (PS) tensors. We propose the Hermitian and skew-Hermitian parts of PS tensors, which are helpful for understanding the structures of PS tensors and CPS tensors. We prove constructively that any CPS tensor can be decomposed into a sum of rank-one CPS tensors, using tools from additive number theory, specifically Hilbert's identity [4,22]. This provides an alternative definition of CPS tensors via real linear combinations of rank-one CPS tensors, as well as a computational method to decompose them. As a consequence, perhaps surprisingly, any PS tensor can be decomposed into a complex linear combination of rank-one CPS tensors. We then define three types of ranks for CPS tensors and show that they are non-identical in general. For CPS tensors, this leads to the invalidity of the conjugate version of Comon's conjecture, although it is not the exact form of Comon's conjecture in [7]. We further study rank-one approximations of CPS tensors. Depending on the types of rank-one tensors considered, rank-one approximations can also differ. As is known in the literature, if the square matricization of an even-order symmetric tensor is rank-one, then the original symmetric tensor is also rank-one. We show that the same property holds for CPS tensors when they are unfolded to Hermitian matrices under a careful way of matricization. Based on this rank-one matricization equivalence, we propose two convex optimization models to compute best rank-one approximations of CPS tensors. Several numerical experiments, on both real and simulated data, are performed to justify the capability of our methods. This paper is organized as follows. We start with preparations of various notations, definitions, and elementary properties in Section 2.
In Section 3, we study decompositions of CPS tensors and discuss several concepts of rank. We then focus on rank-one approximations and rank-one equivalence via matricization for CPS tensors in Section 4, which provides an immediate application to computing best rank-one approximations of CPS tensors via convex optimization. Finally, in Section 5, we conduct several numerical experiments to illustrate the effective performance of the methods proposed in Section 4.

Preparations
Throughout this paper, we uniformly use lowercase letters, boldface lowercase letters, capital letters, and calligraphic letters to denote scalars, vectors, matrices, and high-order tensors, respectively, e.g., a scalar x, a vector x, a matrix X, and a tensor X. We use subscripts to denote their components, e.g., x_i being the i-th entry of a vector x, X_{ij} being the (i, j)-th entry of a matrix X, and X_{ijk} being the (i, j, k)-th entry of a third-order tensor X. As usual, the field of real numbers and the field of complex numbers are denoted by ℝ and ℂ, respectively.
For any complex number z = x + iy ∈ ℂ with x, y ∈ ℝ, its real part and imaginary part are denoted by Re z := x and Im z := y, respectively. Its argument is denoted by arg(z), and its modulus by |z| := √(z z̄) = √(x² + y²), where z̄ := x − iy denotes the conjugate of z. For any vector z ∈ ℂⁿ, we denote by z^H := z̄^T the transpose of its conjugate, and we define the conjugate transpose analogously for matrices. The norm of a complex vector z ∈ ℂⁿ is defined as ‖z‖ := √(z^H z).
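A quick NumPy illustration (not from the paper) of the notation above: real and imaginary parts, conjugate, modulus, argument, and the vector norm ‖z‖ = √(z^H z).

```python
import numpy as np

z = 3.0 + 4.0j
assert z.real == 3.0 and z.imag == 4.0                   # Re z and Im z
assert np.conj(z) == 3.0 - 4.0j                          # conjugate of z
assert np.isclose(abs(z), np.sqrt(z * np.conj(z)).real)  # |z| = sqrt(z * conj(z))
assert np.isclose(abs(z), 5.0)
theta = np.angle(z)                                      # arg(z)

v = np.array([1.0 + 1.0j, 2.0 - 1.0j])
norm_v = np.sqrt(np.real(np.conj(v) @ v))                # ||v|| = sqrt(v^H v)
assert np.isclose(norm_v, np.linalg.norm(v))
```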

CPS tensor
Let us consider ℂ^{n^d}, the space of complex tensors of order d and dimension n. A tensor T ∈ ℂ^{n^d} is called symmetric if every entry of T is invariant under all permutations of its indices, i.e., T_{i_1⋯i_d} = T_{i_{π(1)}⋯i_{π(d)}} for every permutation π of (1, …, d). The set of symmetric tensors in ℂ^{n^d} is denoted by ℂ^{n^d}_s.
Definition 2.1 (Jiang et al. [24, Definition 2.3]). An even-order complex tensor T ∈ ℂ^{n^{2d}} is called partial-symmetric (PS) if every entry T_{i_1⋯i_d j_1⋯j_d} is invariant under any permutation of the first d indices (i_1, …, i_d) and any permutation of the last d indices (j_1, …, j_d). Essentially, a PS tensor is symmetric with respect to its first half of the modes and symmetric with respect to its last half of the modes as well, while a symmetric tensor is symmetric with respect to all the modes. The set of PS complex tensors in ℂ^{n^{2d}} is denoted by ℂ^{n^{2d}}_ps. When d = 1, one has ℂ^{n^2}_s ⊊ ℂ^{n^2}_ps = ℂ^{n^2}. However, for d ≥ 2, it is obvious that ℂ^{n^{2d}}_s ⊊ ℂ^{n^{2d}}_ps ⊊ ℂ^{n^{2d}}. A special class of PS tensors, called conjugate partial-symmetric tensors, generalizes Hermitian matrices to high-order tensor spaces.

Definition 2.2 (Jiang et al. [24, Definition 3.7]). An even-order complex tensor T ∈ ℂ^{n^{2d}} is called conjugate partial-symmetric (CPS) if it is partial-symmetric and T_{i_1⋯i_d j_1⋯j_d} equals the complex conjugate of T_{j_1⋯j_d i_1⋯i_d} for all indices.

The set of CPS tensors in ℂ^{n^{2d}} is denoted by ℂ^{n^{2d}}_cps. Obviously, when d = 1, ℂ^{n^2}_cps is nothing but the set of Hermitian matrices in ℂ^{n^2}; CPS tensors are the high-order generalization of Hermitian matrices. For d ≥ 2, one has ℂ^{n^{2d}}_cps ⊊ ℂ^{n^{2d}}_ps ⊊ ℂ^{n^{2d}}. However, ℂ^{n^{2d}}_s and ℂ^{n^{2d}}_cps are not comparable. Actually, one has ℂ^{n^{2d}}_s ∩ ℂ^{n^{2d}}_cps = ℝ^{n^{2d}}_s, the set of real symmetric tensors; the same holds for complex symmetric matrices and Hermitian matrices, i.e., d = 1.
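As an illustrative numerical sketch (not from the paper), the two definitions above can be checked directly for d = 2: partial symmetry is invariance under permuting the first two and the last two modes, and the CPS condition mirrors A = A^H for Hermitian matrices.

```python
import numpy as np
from itertools import permutations

def is_ps_order4(T, tol=1e-12):
    """Partial symmetry (Definition 2.1) for an order-4 tensor."""
    for p in permutations((0, 1)):
        for q in permutations((2, 3)):
            if not np.allclose(T, T.transpose(p + q), atol=tol):
                return False
    return True

def is_cps_order4(T, tol=1e-12):
    """CPS (Definition 2.2): PS plus T equal to its block conjugate transpose."""
    return is_ps_order4(T, tol) and \
        np.allclose(T, np.conj(T.transpose(2, 3, 0, 1)), atol=tol)

rng = np.random.default_rng(0)
n = 3
a = rng.normal(size=n) + 1j * rng.normal(size=n)
b = rng.normal(size=n) + 1j * rng.normal(size=n)

ps_only = np.einsum('i,j,k,l->ijkl', a, a, b, b)               # PS tensor
cps = np.einsum('i,j,k,l->ijkl', a, a, np.conj(a), np.conj(a))  # rank-one CPS

assert is_ps_order4(ps_only) and is_cps_order4(cps)
# closure: real multiples preserve CPS, complex multiples generally do not
assert is_cps_order4(2.5 * cps) and not is_cps_order4(1j * cps)
```

This also makes concrete the remark below that CPS tensors are closed under real, but not complex, linear combinations.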
We remark that PS tensors are closed under addition and multiplication by complex numbers, while CPS tensors are closed under addition and multiplication by real numbers only. This fact may not be obvious from their definitions. In fact, it can be easily seen from their equivalent definitions via partial-symmetric decompositions; see Section 3.1.

Complex form
The Frobenius inner product of two complex tensors U, V ∈ ℂ^{n^d} is defined as ⟨U, V⟩ := ∑_{i_1, …, i_d} conj(U_{i_1⋯i_d}) V_{i_1⋯i_d}, and its induced Frobenius norm of a complex tensor T is naturally defined as ‖T‖ := √⟨T, T⟩. We remark that these two notations naturally apply to vectors and matrices, which are tensors of order one and order two, respectively. A rank-one tensor, also called a simple tensor, is a tensor that can be written as an outer product of vectors, i.e., T = x¹ ⊗ x² ⊗ ⋯ ⊗ x^d. Given a complex tensor T ∈ ℂ^{n^d}, the multilinear form of T is defined as

T(x¹, x², …, x^d) := ∑_{i_1, …, i_d} T_{i_1⋯i_d} x¹_{i_1} x²_{i_2} ⋯ x^d_{i_d},   (1)

where the variable x^k ∈ ℂⁿ for k = 1, …, d.
If a vector in a multilinear form (1) is missing and replaced by a '•', say T(•, x², …, x^d), it then becomes a vector in ℂⁿ. Explicitly, the i-th entry of T(•, x², …, x^d) is ∑_{i_2, …, i_d} T_{i i_2⋯i_d} x²_{i_2} ⋯ x^d_{i_d}. When all the x^k's in (1) are the same, the multilinear form becomes a complex homogeneous polynomial function (or complex form) of x ∈ ℂⁿ, i.e., T(x^d) := T(x, x, …, x). The notations x^d, standing for d copies of x in a multilinear form, and x^{⊗d}, standing for the outer product of d copies of x, will be used throughout this paper as long as there is no ambiguity.
Of particular interest in this paper is the following conjugate complex form, or conjugate form, defined by a PS tensor T ∈ ℂ^{n^{2d}}_ps:

T(x̄^d x^d) := ∑_{i_1, …, i_d, j_1, …, j_d} T_{i_1⋯i_d j_1⋯j_d} x̄_{i_1} ⋯ x̄_{i_d} x_{j_1} ⋯ x_{j_d}.

Remark that x̄^d and x^d in T(x̄^d x^d) cannot be swapped, as otherwise it becomes a different form. Similarly, we may use notation such as T(• x̄^{d−1} x^d) when one vector is missing.
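A minimal sketch (assuming the indexing convention above, with the first d slots conjugated) of the key property that the conjugate form of a CPS tensor always takes real values, mirroring x^H A x ∈ ℝ for a Hermitian matrix A:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# real linear combination of rank-one CPS tensors (cf. Theorem 3.2)
T = np.zeros((n, n, n, n), dtype=complex)
for _ in range(4):
    lam = rng.normal()                                   # real coefficient
    a = rng.normal(size=n) + 1j * rng.normal(size=n)
    T += lam * np.einsum('i,j,k,l->ijkl', a, a, np.conj(a), np.conj(a))

x = rng.normal(size=n) + 1j * rng.normal(size=n)
# conjugate form T(conj(x)^2 x^2): contract first two modes with conj(x)
val = np.einsum('ijkl,i,j,k,l->', T, np.conj(x), np.conj(x), x, x)
assert abs(val.imag) < 1e-10   # the conjugate form is real-valued
```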

Hermitian part and skew-Hermitian part
It is shown in [24] that any complex matrix A can be written as A = H(A) + iS(A), where H(A) := (A + A^H)/2 and S(A) := (A − A^H)/(2i) are the Hermitian part and the skew-Hermitian part of A, respectively. We extend this concept to high-order PS tensors, which is helpful in the analysis of our results.

Definition 2.3
The conjugate transpose of a PS tensor T ∈ ℂ^{n^{2d}}_ps, denoted by T^H, satisfies (T^H)_{i_1⋯i_d j_1⋯j_d} = conj(T_{j_1⋯j_d i_1⋯i_d}) for all indices. The Hermitian part H(T) and the skew-Hermitian part S(T) of a PS tensor T are defined as H(T) := (T + T^H)/2 and S(T) := (T − T^H)/(2i), respectively.
Obviously, one has (T^H)^H = T for a PS tensor T. It is clear from Definition 2.2 that a PS tensor T is CPS if and only if T^H = T, or equivalently S(T) = O, the zero tensor. The following property can be verified straightforwardly, similar to the case of Hermitian matrices.

Proposition 2.4 Any PS tensor T ∈ ℂ^{n^{2d}}_ps can be uniquely written as T = H(T) + iS(T), where both H(T) and S(T) are CPS tensors.
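An illustrative implementation (order 4, d = 2; the helper names are ours, not the paper's) of Definition 2.3: the conjugate transpose swaps the first and last d modes and conjugates, and the two parts recover T via T = H(T) + iS(T).

```python
import numpy as np

def ctranspose(T):
    """T^H for an order-4 tensor (d = 2): swap mode blocks and conjugate."""
    return np.conj(T.transpose(2, 3, 0, 1))

def hermitian_part(T):
    return (T + ctranspose(T)) / 2

def skew_part(T):
    return (T - ctranspose(T)) / (2j)

rng = np.random.default_rng(3)
n = 2
a = rng.normal(size=n) + 1j * rng.normal(size=n)
b = rng.normal(size=n) + 1j * rng.normal(size=n)
T = np.einsum('i,j,k,l->ijkl', a, a, b, b)      # a PS (generally not CPS) tensor

H, S = hermitian_part(T), skew_part(T)
assert np.allclose(T, H + 1j * S)               # T = H(T) + i S(T)
# both parts are themselves CPS: each equals its own conjugate transpose
assert np.allclose(H, ctranspose(H)) and np.allclose(S, ctranspose(S))
```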

CPS decomposition and rank
This section is devoted to decompositions of CPS tensors as well as PS tensors, extending the symmetric decomposition of symmetric tensors. One main result is a constructive method to decompose a CPS tensor into a sum of rank-one CPS tensors, which hence provides an alternative definition of CPS tensors via real linear combinations of rank-one CPS tensors. Based on these results, we discuss several ranks for PS tensors and CPS tensors, which can be viewed as conjugate versions of Waring's decomposition [5,10].

CPS decomposition
Rank-one decompositions play an essential role in exploring structures of high-order tensors. Hermitian matrices enjoy the following type of conjugate symmetric decomposition: if A ∈ ℂ^{n^2}_cps, then A = ∑_{j=1}^r λ_j a_j a_j^H, where λ_j ∈ ℝ and a_j ∈ ℂⁿ for j = 1, …, r with some r ≤ n. As the high-order generalization of Hermitian matrices, CPS tensors do inherit this important property. Before presenting the main result of this section, let us first prove a technical result.
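The matrix case above can be sketched numerically (illustrative, not from the paper): a Hermitian A decomposes as A = ∑_j λ_j a_j a_j^H with real λ_j and r ≤ n, directly from its eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (B + B.conj().T) / 2                  # a Hermitian matrix

lam, U = np.linalg.eigh(A)                # eigenvalues of a Hermitian matrix are real
assert lam.dtype.kind == 'f'
recon = sum(lam[j] * np.outer(U[:, j], U[:, j].conj()) for j in range(n))
assert np.allclose(A, recon)              # A = sum_j lam_j a_j a_j^H
```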

Lemma 3.1
For any positive integers d and n, there exist a_j ∈ ℂⁿ and λ_j ∈ ℝ for j = 1, …, m with finite m, such that

(∑_{i=1}^n x_i^d)(∑_{i=1}^n x̄_i^d) = ∑_{j=1}^m λ_j (a_j^H x)^d (x^H a_j)^d for all x ∈ ℂⁿ.   (2)

Proof For any nonzero α_0, α_1, …, α_{2d} ∈ ℝ with α_i ≠ α_j if i ≠ j, consider the following 2d + 1 linear equations (3) with 2d + 1 variables, in which the remaining right-hand-side entries are zeros. The determinant of the coefficient matrix of (3) is a Vandermonde determinant, and so (3) has a unique real solution, denoted by (z_0, z_1, …, z_{2d}) for simplicity.
Denote Ω_d := {e^{i2πk/d} : k = 1, …, d}, the set of d-th roots of unity. If there is an index k with |d_k − t_k| = d, then there is another index ℓ ≠ k such that |d_ℓ − t_ℓ| = d; as a result, there are i ≠ j such that d_i = t_j = d and the rest are zeros. (iii) If |d_i − t_i| = 0 for i = 1, …, n, then the corresponding contribution and the number of such terms in (4) can be counted explicitly. By applying (4) and (3), we obtain (5), where even(d) is one if d is even and zero otherwise. Consider next the analogous linear system (6), in which the nonzero β_0, β_1, …, β_d ∈ ℝ satisfy β_i² ≠ β_j² if i ≠ j, and the right-hand side has entries 1 in positions 0 and d/2 and zeros elsewhere. Similar to the linear system (3), whose Vandermonde determinant is nonzero, (6) also has a unique real solution, denoted by (y_0, y_1, …, y_d) for simplicity. Let ξ_1, …, ξ_n be i.i.d. random variables uniformly distributed on Ω_{d+1}. Similar to the proof of (4), we can obtain the corresponding moment identity. Therefore, the desired expression holds, where the last inequality is due to (6). This, together with (5) for even d, gives the claimed identity. By taking the coefficients of x, (ξ_1^{k_1}, …, ξ_n^{k_n}) or (η_1^{k_1}, …, η_n^{k_n}), and enumerating every possible value in the support sets of the ξ_i's or η_i's to form each a_j ∈ ℂⁿ, the above equality provides an expression of the form (2). ◻

The above lemma can be viewed as a conjugate analogue of Hilbert's identity in the literature (see e.g., [4]), which states that for any positive integers d and n, there always exist a_1, …, a_m ∈ ℝⁿ such that (x^T x)^d = ∑_{j=1}^m (a_j^T x)^{2d} for all x ∈ ℝⁿ. With Lemma 3.1 in hand, we are ready to show that any CPS tensor can be decomposed into a sum of rank-one CPS tensors.

Theorem 3.2 An even-order tensor T ∈ ℂ^{n^{2d}} is CPS if and only if T has the following CPS decomposition

T = ∑_{j=1}^m λ_j a_j^{⊗d} ⊗ ā_j^{⊗d},   (7)

where λ_j ∈ ℝ and a_j ∈ ℂⁿ for j = 1, …, m with finite m.
Proof For any a ∈ ℂⁿ, it is straightforward to check by Definition 2.2 that the rank-one tensor a^{⊗d} ⊗ ā^{⊗d} is CPS. In fact, it is symmetric with respect to its first half of the modes and also symmetric with respect to its last half of the modes, resulting in a PS tensor; besides, this PS tensor equals its own conjugate transpose, resulting in a CPS tensor. Therefore, as a real linear combination of such rank-one CPS tensors in (7), T must also be CPS. On the other hand, according to [24, Proposition 3.9] with a constructive algorithm, any CPS tensor T can be written as a real linear combination of tensors of the form Z ⊗ Z̄ with Z ∈ ℂ^{n^d}_s. It suffices to prove that Z ⊗ Z̄ admits a decomposition of the form (7) if Z ∈ ℂ^{n^d}_s. Since any symmetric complex tensor admits a finite symmetric decomposition (see e.g., [5, Algorithm 5.1] with a theoretical guarantee and [34, Algorithm 4.3] with numerical efficiency), we may let Z = ∑_{j=1}^r a_j^{⊗d}, where a_j ∈ ℂⁿ for j = 1, …, r with finite r. For any x ∈ ℂⁿ, the conjugate form of Z ⊗ Z̄ evaluates to (∑_{j=1}^r y_j^d)(∑_{k=1}^r ȳ_k^d), where we let y = Ax ∈ ℂ^r and A = (a_1, …, a_r)^T ∈ ℂ^{r×n}. Therefore, by Lemma 3.1, there exist λ_k ∈ ℝ and b_k ∈ ℂ^r for k = 1, …, s with finite s, such that (∑_{j=1}^r y_j^d)(∑_{k=1}^r ȳ_k^d) = ∑_{k=1}^s λ_k (b_k^H y)^d (y^H b_k)^d. Finally, by letting c_k = A^H b_k for k = 1, …, s, one has (b_k^H y)^d (y^H b_k)^d = (c_k^H x)^d (x^H c_k)^d. This implies that Z ⊗ Z̄ = ∑_{k=1}^s λ_k c_k^{⊗d} ⊗ c̄_k^{⊗d}, completing the whole proof. ◻ Theorem 3.2 provides an alternative definition of a CPS tensor via a real linear combination of rank-one CPS tensors. The proof of Theorem 3.2 actually develops an explicit algorithm to decompose a general CPS tensor into a sum of rank-one CPS tensors. The procedure involves the following three main steps: (i) write T in terms of symmetric tensors via [24, Proposition 3.9]; (ii) compute a symmetric rank-one decomposition of each symmetric tensor; (iii) apply Lemma 3.1 to obtain rank-one CPS terms with vectors c_{jk} ∈ ℂⁿ for every j.
In fact, as a consequence of Theorem 3.2, perhaps surprisingly, PS tensors also enjoy similar decompositions, via complex linear combinations of rank-one CPS tensors.

Corollary 3.3 An even-order tensor T ∈ ℂ^{n^{2d}} is PS if and only if T has the following PS decomposition

T = ∑_{j=1}^m λ_j a_j^{⊗d} ⊗ ā_j^{⊗d},   (9)

where λ_j ∈ ℂ and a_j ∈ ℂⁿ for j = 1, …, m with finite m.

Proof
The proof of the 'if' part can be straightforwardly verified by Definition 2.1, using a PS decomposition as in the proof of Theorem 3.2.
For the 'only if' part, by Proposition 2.4, T = H(T) + iS(T), where H(T) and S(T) are CPS. By Theorem 3.2, both H(T) and S(T) can be decomposed into sums of rank-one CPS tensors as in (7), with real coefficients. Therefore, T can be decomposed into a sum of rank-one CPS tensors as in (9), with complex coefficients. ◻ Corollary 3.3 also provides an alternative definition of a PS tensor via a complex linear combination of rank-one CPS tensors. In terms of the rank-one decompositions shown in Theorem 3.2 and Corollary 3.3, CPS tensors and PS tensors are straightforward generalizations of Hermitian matrices and complex matrices, respectively.
Some remarks on decompositions of PS tensors are in order. From Definition 2.1, in particular the symmetry with respect to the first half of the modes and the symmetry with respect to the last half of the modes, it can be shown that any PS tensor T ∈ ℂ^{n^{2d}}_ps can also be decomposed as

T = ∑_{j=1}^m a_j^{⊗d} ⊗ b_j^{⊗d},   (10)

where a_j, b_j ∈ ℂⁿ for j = 1, …, m with some finite m. This decomposition seems natural from the original definition, but it is quite different from, and less symmetric than, (9) in Corollary 3.3. In fact, (10) can be immediately obtained from (9) by absorbing each λ_j into a_j^{⊗d}. This makes the decomposition (9) interesting, as it links the first half and the last half of the modes of a PS tensor, which is obvious neither from Definition 2.1 nor from the decomposition (10). Even for d = 1, (9) reduces to the fact that any complex matrix A ∈ ℂ^{n^2} can be written as A = ∑_{j=1}^m λ_j a_j a_j^H with λ_j ∈ ℂ and a_j ∈ ℂⁿ, which, to the best of our knowledge, has not appeared in the literature. However, the connection between CPS tensors and PS tensors makes (9) more straightforward as a consequence of Theorem 3.2.
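The d = 1 case of Corollary 3.3 can be sketched concretely (an illustration under our naming, not the paper's code): any complex matrix A is a complex linear combination of Hermitian rank-one terms a_j a_j^H, obtained by eigendecomposing its Hermitian and skew-Hermitian parts.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # arbitrary complex matrix

H = (A + A.conj().T) / 2          # Hermitian part
S = (A - A.conj().T) / (2j)       # skew-Hermitian part; H and S are both Hermitian
coeffs, vecs = [], []
for M, scale in ((H, 1.0), (S, 1j)):
    lam, U = np.linalg.eigh(M)    # real eigenvalues, orthonormal eigenvectors
    for j in range(n):
        coeffs.append(scale * lam[j])   # complex coefficients overall
        vecs.append(U[:, j])

recon = sum(c * np.outer(v, v.conj()) for c, v in zip(coeffs, vecs))
assert np.allclose(A, recon)      # A = sum_j lambda_j a_j a_j^H, lambda_j complex
```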

CPS rank
The discussion in Section 3.1 naturally raises the question of the shortest CPS decomposition, a common question of interest for matrices and high-order tensors, leading to the notion of rank. For any tensor T ∈ ℂ^{n^d}, the rank of T, denoted by rank(T), is the smallest number r such that T can be written as a sum of r rank-one complex tensors, i.e., T = ∑_{j=1}^r x_j^1 ⊗ x_j^2 ⊗ ⋯ ⊗ x_j^d. Depending on the types of rank-one tensors, we define the partial-symmetric rank and the conjugate partial-symmetric rank as follows.

Definition 3.4
The partial-symmetric rank (PS rank) of a PS tensor T ∈ ℂ^{n^{2d}}_ps, denoted by rank_PS(T), is defined as the minimum m such that T = ∑_{j=1}^m λ_j a_j^{⊗d} ⊗ ā_j^{⊗d} with λ_j ∈ ℂ and a_j ∈ ℂⁿ. The conjugate partial-symmetric rank (CPS rank) of a CPS tensor T ∈ ℂ^{n^{2d}}_cps, denoted by rank_CPS(T), is defined as the minimum m such that T = ∑_{j=1}^m λ_j a_j^{⊗d} ⊗ ā_j^{⊗d} with λ_j ∈ ℝ and a_j ∈ ℂⁿ. To echo the discussion at the end of Section 3.1, we remark that by the original definition (Definition 2.1) of PS tensors, another rank for PS tensors can be defined based on the decomposition (10), i.e., the minimum r such that T = ∑_{j=1}^r a_j^{⊗d} ⊗ b_j^{⊗d}. This rank is different from the PS rank in Definition 3.4 (see Example 3.6). Our interest here is to emphasize the conjugate property and to better understand CPS tensors.
Obviously, by Definition 3.4, for a CPS tensor T one has

rank(T) ≤ rank_PS(T) ≤ rank_CPS(T).   (11)

An interesting question is whether the above inequalities are equalities or not. It is obvious that (11) holds with equalities when the rank, PS rank, or CPS rank of a CPS tensor is one. The equality also holds in the case of matrices, i.e., for any Hermitian matrix the three ranks must be the same. However, this is not true in general for high-order CPS tensors, as stipulated in Theorem 3.5.
In ℂ^{n^d}_s, the space of symmetric tensors, a similar problem was posed by Comon: the symmetric rank of a symmetric tensor is equal to the rank of the tensor, known as Comon's conjecture [7]. It has received a considerable amount of attention in recent years; see e.g., [39] and the references therein. Comon's conjecture was shown to be true in various special cases, and was only recently disproved by Shitov [39] using a sophisticated counterexample of a complex symmetric tensor. Nevertheless, the real version of Comon's conjecture remains open. Our result on the ranks of CPS tensors below can be taken as a disproof of the conjugate version of Comon's conjecture. In fact, our counterexample (Example 3.6) is very simple.
Theorem 3.5 If T ∈ ℂ^{n^{2d}}_cps is a CPS tensor, then rank_PS(T) = rank_CPS(T). Moreover, there exists a CPS tensor T such that rank(T) < rank_PS(T).
Proof Let rank_PS(T) = r and let T have the PS decomposition T = ∑_{j=1}^r λ_j a_j^{⊗d} ⊗ ā_j^{⊗d}, where λ_j ∈ ℂ and a_j ∈ ℂⁿ for j = 1, …, r. It is easy to see that T can be written as T = ∑_{j=1}^r (Re λ_j) a_j^{⊗d} ⊗ ā_j^{⊗d} + i ∑_{j=1}^r (Im λ_j) a_j^{⊗d} ⊗ ā_j^{⊗d}. We notice that both ∑_{j=1}^r (Re λ_j) a_j^{⊗d} ⊗ ā_j^{⊗d} and ∑_{j=1}^r (Im λ_j) a_j^{⊗d} ⊗ ā_j^{⊗d} are CPS tensors. By the uniqueness result in Proposition 2.4 and the fact that T is already CPS, ∑_{j=1}^r (Im λ_j) a_j^{⊗d} ⊗ ā_j^{⊗d} must be the zero tensor. Therefore, T = ∑_{j=1}^r (Re λ_j) a_j^{⊗d} ⊗ ā_j^{⊗d}. This implies that rank_CPS(T) ≤ r = rank_PS(T) since Re λ_j ∈ ℝ. Together with the obvious fact that rank_CPS(T) ≥ rank_PS(T), we conclude that rank_CPS(T) = rank_PS(T). Example 3.6 shows a CPS tensor T with rank(T) < rank_PS(T). ◻ Example 3.6 Let T ∈ ℂ^{2^4}_cps with T_{1122} = T_{2211} = 1 and all other entries zeros. It follows that rank(T) = 2 < 3 ≤ rank_PS(T) = rank_CPS(T). Proof Obviously T can be written as a sum of two rank-one tensors, each matching a nonzero entry of T. It is also easy to show that rank(T) ≠ 1 by contradiction. Therefore, rank(T) = 2.
We now prove rank_CPS(T) ≥ 3 by contradiction. According to Theorem 3.5, rank_CPS(T) = rank_PS(T) ≥ 2. Suppose that rank_CPS(T) = rank_PS(T) = 2. Then there exist nonzero λ_1, λ_2 ∈ ℝ and u = (u_1, u_2)^T, v = (v_1, v_2)^T ∈ ℂ² such that T = λ_1 u^{⊗2} ⊗ ū^{⊗2} + λ_2 v^{⊗2} ⊗ v̄^{⊗2}. By comparing the entries T_{1111} = T_{1112} = T_{2222} = 0 and T_{1122} = 1, we obtain

λ_1 u_1² ū_1² + λ_2 v_1² v̄_1² = 0,   (12a)
λ_1 u_1² ū_1 ū_2 + λ_2 v_1² v̄_1 v̄_2 = 0,   (12b)
λ_1 u_2² ū_2² + λ_2 v_2² v̄_2² = 0,   (12c)
λ_1 u_1² ū_2² + λ_2 v_1² v̄_2² = 1.   (12d)

First, we claim that none of u_1, u_2, v_1, v_2 can be zero. Otherwise, if either u_1 or v_1 is zero, then both u_1 and v_1 are zeros by (12a), which invalidates (12d). In the other case, if either u_2 or v_2 is zero, then both u_2 and v_2 are zeros by (12c), which also invalidates (12d).
Let us now multiply (12a) by ū_2 and (12b) by ū_1, which gives λ_1 u_1² ū_1² ū_2 + λ_2 v_1² v̄_1² ū_2 = 0 and λ_1 u_1² ū_1² ū_2 + λ_2 v_1² v̄_1 v̄_2 ū_1 = 0. Combining the above two equations leads to λ_2 v_1² v̄_1 (v̄_1 ū_2 − v̄_2 ū_1) = 0, which implies that v_1 u_2 = v_2 u_1, i.e., u and v are linearly dependent. There exists γ ∈ ℂ such that u = γ v, and so we get T = (λ_1 |γ|⁴ + λ_2) v^{⊗2} ⊗ v̄^{⊗2}. Therefore, we arrive at rank_CPS(T) ≤ 1, which is obviously a contradiction. ◻ Although Example 3.6 invalidates the conjugate version of Comon's conjecture, the rank and the PS rank of a generic PS tensor (including CPS tensors) can still be the same when its PS rank is no more than its dimension; see Proposition 3.7. This is similar to [7, Proposition 5.3] for a generic symmetric complex tensor.
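A numerical construction (illustrative) of the tensor in Example 3.6: T_{1122} = T_{2211} = 1 with all other entries zero. It is CPS, and it is a sum of exactly two rank-one complex (non-PS) tensors, consistent with rank(T) = 2 while rank_PS(T) = rank_CPS(T) ≥ 3.

```python
import numpy as np

T = np.zeros((2, 2, 2, 2), dtype=complex)
T[0, 0, 1, 1] = 1   # T_1122
T[1, 1, 0, 0] = 1   # T_2211

# CPS: partial-symmetric and equal to its block conjugate transpose
assert np.allclose(T, T.transpose(1, 0, 2, 3))
assert np.allclose(T, T.transpose(0, 1, 3, 2))
assert np.allclose(T, np.conj(T.transpose(2, 3, 0, 1)))

# rank(T) <= 2: two rank-one complex tensors, one per nonzero entry
e1, e2 = np.eye(2)
two_terms = (np.einsum('i,j,k,l->ijkl', e1, e1, e2, e2)
             + np.einsum('i,j,k,l->ijkl', e2, e2, e1, e1))
assert np.allclose(T, two_terms)
```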
Proposition 3.7 If T ∈ ℂ^{n^{2d}}_ps is a generic PS tensor with rank_PS(T) = m ≤ n, then rank(T) = rank_PS(T).

Proof Let r = rank(T), and consider the two decompositions

T = ∑_{j=1}^r c_{j1} ⊗ c_{j2} ⊗ ⋯ ⊗ c_{j(2d)} = ∑_{j=1}^m λ_j a_j^{⊗d} ⊗ ā_j^{⊗d},   (13)

where c_{jk} ∈ ℂⁿ for j = 1, …, r and k = 1, …, 2d, and nonzero λ_j ∈ ℂ and a_j ∈ ℂⁿ for j = 1, …, m. As m ≤ n, it is not difficult to show that the set of n-dimensional vectors {a_1, …, a_m} is generically linearly independent; see e.g., [7, Lemma 5.2].
As a consequence, one may find x_j ∈ ℂⁿ for j = 1, …, m dual to the a_j's. By applying the multilinear form T(• x_k^{d−1} x̄_k^d) on both sides of (13), we obtain that for any 1 ≤ k ≤ m, a_k is a complex linear combination of {c_{11}, …, c_{r1}}. This implies that m ≤ r. Combining this with the obvious fact that r = rank(T) ≤ rank_PS(T) = m, we obtain r = m. In other words, rank(T) = rank_PS(T) holds generically. ◻ We remark that the above result also holds for CPS tensors. This is because CPS tensors are PS, and the PS rank of a CPS tensor is equal to the CPS rank of the tensor (Theorem 3.5).

Rank-one approximation and matricization equivalence
Finding tensor ranks is in general very hard [18]. This makes low-rank approximations important; in fact, they have been one of the main topics for high-order tensors. Along this line, the rank-one approximation is perhaps the simplest and most important problem. In this section, we study several rank-one approximations and the rank-one equivalence via matricization for CPS tensors. As an application of the matricization equivalence, new convex optimization models are developed to find best rank-one approximations of CPS tensors.

Rank-one approximation
It is well known that finding a best rank-one approximation of a real tensor is equivalent to finding the largest singular value [29] of the tensor; see e.g., [28]. For a real symmetric tensor, a best rank-one approximation can be obtained at a symmetric rank-one tensor [3,11], and is equivalent to finding the largest eigenvalue [37] of the tensor, or to spherically constrained polynomial optimization [17,46]. In the complex field, Sorber et al. [42] proposed line search and plane search for tensor optimization, including best rank-one approximation as a special case. Ni et al. [33] studied best symmetric rank-one approximations of symmetric complex tensors. Along this line, PS and CPS tensors possess similar properties. Let us first introduce eigenvalues of these tensors. All the C-eigenvalues of CPS tensors are real [24]. The C-eigenvalue of PS tensors has not been defined, and we simply adopt Definition 4.1 as the definition of C-eigenvalue and C-eigenvector for PS tensors. In this paper, as long as there is no ambiguity, we refer to a C-eigenvalue and a C-eigenvector as the eigenvalue and the eigenvector of a PS tensor (including CPS tensors), respectively, and call (λ, x) in Definition 4.1 an eigenpair. We also need to clarify a couple of terms. For a CPS tensor T, if rank(T) = 1, then rank_PS(T) = rank_CPS(T) = 1, and so the term rank-one CPS tensor has no ambiguity. However, for a PS tensor T, rank(T) = 1 does not imply rank_PS(T) = 1. Here, the term rank-one PS tensor stands for a PS tensor T with rank_PS(T) = 1.

Proof Straightforward computation shows that
To minimize the right-hand side of the above for given T and fixed x, the scalar λ ∈ ℂ must satisfy the corresponding first-order condition on its argument and modulus, yielding an optimal solution of the right-hand side of (14). Therefore, we arrive at the stated identity. By comparing with the left-hand side of (14), it suffices to show that the optimal values of the two sides coincide. It remains to prove that an optimal solution x of the right-hand side of (15) is an eigenvector of T.
The right-hand side of (15) is equivalent to the maximization problem (16). This provides (part of) the first-order optimality condition. Since x = y in the constraints of (16), the resulting equations lead to the eigenvalue equation, which implies that x is an eigenvector of T. ◻ Since CPS tensors are PS tensors, for a best rank-one PS tensor approximation λ x^{⊗d} ⊗ x̄^{⊗d} in (14), λ must be an eigenvalue. As all the eigenvalues of CPS tensors are real, λ x^{⊗d} ⊗ x̄^{⊗d} becomes a best rank-one CPS tensor approximation.
Therefore, Theorem 4.2 immediately implies a similar result for CPS tensors.

Corollary 4.3 For a CPS tensor T ∈ ℂ^{n^{2d}}_cps, λ ∈ ℝ is a largest (in terms of absolute value) eigenvalue in an eigenpair (λ, x) of T if and only if λ x^{⊗d} ⊗ x̄^{⊗d} is a best rank-one CPS tensor approximation of T.

For CPS tensors, one may consider different rank-one approximation problems. The following result is interesting, as it echoes the inequivalence of ranks discussed earlier in Theorem 3.5.
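Corollary 4.3 suggests computing a dominant eigenpair numerically. The following is a heuristic power-type iteration in the spirit of higher-order power methods; it is a sketch under our assumed eigen-equation T(• x̄^{d−1} x^d) = λx for d = 2, not the paper's algorithm, and its convergence is not guaranteed in general.

```python
import numpy as np

def g(T, x):
    """Contraction T(• conj(x) x x) for an order-4 CPS tensor."""
    return np.einsum('ijkl,j,k,l->i', T, np.conj(x), x, x)

def cps_eigenpair(T, x0, iters=100):
    """Heuristic power-type iteration: x <- normalize(g(x))."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = g(T, x)
        x = y / np.linalg.norm(y)
    lam = np.real(np.vdot(x, g(T, x)))   # eigenvalue is real for CPS T
    return lam, x

rng = np.random.default_rng(6)
n = 3
a = rng.normal(size=n) + 1j * rng.normal(size=n)
T = np.einsum('i,j,k,l->ijkl', a, a, np.conj(a), np.conj(a))  # rank-one CPS

lam, x = cps_eigenpair(T, rng.normal(size=n) + 1j * rng.normal(size=n))
assert np.isclose(lam, np.linalg.norm(a) ** 4)        # equals ||T|| here
assert np.linalg.norm(g(T, x) - lam * x) < 1e-8       # eigen-residual
```

For this rank-one test tensor the iteration converges essentially in one step; for general CPS tensors a shifted variant would typically be needed.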

Theorem 4.4 If T ∈ ℂ^{n^{2d}}_cps is a CPS tensor with d ≥ 2, then the best rank-one CPS tensor approximation of T is equivalent to the best rank-one PS tensor approximation of T, but is not equivalent to the best rank-one complex tensor approximation of T, i.e.,

min { ‖T − X‖ : rank_CPS(X) = 1 } = min { ‖T − X‖ : rank_PS(X) = 1 } ≥ min { ‖T − X‖ : rank(X) = 1 }.   (17)

Moreover, there exists a CPS tensor T such that the inequality in (17) is strict.
In fact, the equality in (17) is a consequence of Theorem 4.2 and Corollary 4.3. The inequality in (17) is obvious since ℂ^{n^{2d}}_cps ⊆ ℂ^{n^{2d}} and rank_CPS(X) = 1 implies rank(X) = 1. Its strictness can be validated using the tensor T in Example 3.6, for which ‖T − e_1 ⊗ e_1 ⊗ e_2 ⊗ e_2‖² = 1.
We remark that the equivalence between best rank-one CPS tensor approximations and best rank-one complex tensor approximations does hold for Hermitian matrices (d = 1), i.e., when A ∈ ℂ^{n^2} is Hermitian. This is because of the trivial fact that ℂ^{n^2} = ℂ^{n^2}_ps, together with the equality in (17).

Rank-one equivalence via matricization
Matricization, also known as matrix flattening or matrix unfolding, of a tensor is a widely used tool to study high-order tensors. When a tensor is rank-one, any matricization of the tensor is obviously rank-one, while the reverse is not true in general. The rank-one equivalence via general matricization has been studied for real tensors [43]. For an even-order symmetric tensor T ∈ ℂ^{n^{2d}}_s, it is known that if its square matricization (unfolding T into an n^d × n^d matrix) is rank-one, then the original tensor T must be rank-one; see e.g., [25,35]. In the real field, this rank-one equivalence suggests convex optimization methods to compute the largest eigenvalue or best rank-one approximations of a symmetric tensor. In practice, these methods are very likely to find globally optimal solutions [25,35]. Inspired by these results, let us look into the rank-one equivalence for CPS tensors.
For a CPS tensor, one hopes that its square matricization being rank-one implies the original tensor being rank-one. Unfortunately, this may not hold if the CPS tensor is not unfolded in the right way. The following example shows that the standard square matricization of a non-rank-one CPS tensor turns out to be a rank-one Hermitian matrix.

Example 4.6 Let T ∈ ℂ^{2^4} be the tensor whose standard square matricization is (1, 1+i, 1+i, 2)^H (1, 1+i, 1+i, 2), which can be straightforwardly verified to be a CPS tensor. However, rank(T) ≥ 2, while the standard square matricization of T is a rank-one Hermitian matrix.
Proof By the construction of T via the outer product, it is obvious that its standard square matricization is (1, 1+i, 1+i, 2)^H (1, 1+i, 1+i, 2), which is a rank-one Hermitian matrix.
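An illustrative reconstruction of this example (the imaginary units are restored from context, and the rank-one phase argument is a sketch of the proof's entry comparison): the 4×4 matricization is rank-one and Hermitian, yet the folded tensor cannot be rank-one.

```python
import numpy as np

v = np.array([1, 1 + 1j, 1 + 1j, 2])
M = np.outer(np.conj(v), v)                # the matricization: rank-one, Hermitian
assert np.linalg.matrix_rank(M) == 1
assert np.allclose(M, M.conj().T)

T = M.reshape(2, 2, 2, 2)                  # fold back into the order-4 tensor
# T is CPS: partial-symmetric and equal to its block conjugate transpose
assert np.allclose(T, T.transpose(1, 0, 2, 3))
assert np.allclose(T, T.transpose(0, 1, 3, 2))
assert np.allclose(T, np.conj(T.transpose(2, 3, 0, 1)))

# Rank-one would force a phase contradiction: T_1111 = 1 makes the relevant
# product of the two entries of x have modulus 1, T_1112 = 1 + i would then
# force T_1122 = (1 + i)^2 = 2i, yet T_1122 = 2. Hence rank(T) >= 2.
assert np.isclose(T[0, 0, 0, 1], 1 + 1j)
assert np.isclose(T[0, 0, 1, 1], 2) and not np.isclose(T[0, 0, 1, 1], 2j)
```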
On the other hand, suppose that rank (T) = 1 . This implies that rank CPS (T) = 1 and so we may let T = x ⊗ x ⊗ x ⊗ x for some x ∈ ℂ 2 . By comparing some entries, one has Clearly |x 1 | 2 = 1 and |x 2 | 2 = 2 , and this leads to a contradiction. Therefore, rank (T) ≥ 2 . ◻ The role of symmetry is important in nonlinear elasticity and material sciences [20,38]. The following example of elasticity tensors points out a structural difference for the rank-one equivalence via matricization. [21]). Elasticity tensors are three-dimensional fourthorder real positive definite tensors with certain symmetry. Specifically, A ∈ ℝ 3 4 is an elasticity tensor if It is straightforward to check that an elasticity tensor is always CPS. After applying the standard square matricization, deleting identical rows and columns, and swapping some rows and columns, A can be one-to-one represented by a 6 × 6 real symmetric matrix, known as Voigt's notation: whose degree of freedome is 21. These independent entries are called elasticities. If the Voigt's matrix, or equivalently the standard square matricization, is rank-one, then this positive definite matrix can be written as xx T with x ∈ ℝ 6 , whose degree of freedom is 6. However, if the elasticity tensor A is rank-one, then A can be written as y ⊗ y ⊗ y ⊗ y with y ∈ ℂ 3 since A is CPS and positive definite, and further it can be easily shown that y ∈ ℝ 3 , whose degree of freedom is then 3. Therefore, the standard square matricization of an elasticity tensor is rank-one cannot guarantee that the original elasticity tensor is rank-one.

Example 4.7 (Itin and Hehl
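The degree-of-freedom gap in Example 4.7 can be illustrated numerically. In the sketch below, the Voigt index convention and the helper names (`voigt`, `from_voigt`) are ours, with 0-based indices:

```python
import numpy as np

rng = np.random.default_rng(2)
pairs = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]  # Voigt index pairs

def voigt(A):
    """6x6 Voigt matrix of a 3x3x3x3 tensor with elasticity symmetries."""
    return np.array([[A[i, j, k, l] for (k, l) in pairs] for (i, j) in pairs])

# Rank-one elasticity-type tensor A = y⊗y⊗y⊗y, y in R^3 (3 degrees of freedom):
# its Voigt matrix is x x^T with x in R^6, hence rank-one.
y = rng.standard_normal(3)
A = np.einsum('i,j,k,l->ijkl', y, y, y, y)
rank_voigt = np.linalg.matrix_rank(voigt(A))   # 1

# Converse direction: start from a generic rank-one Voigt matrix x x^T
# (6 degrees of freedom) and rebuild the symmetric tensor behind it.
inv = {}
for a, (i, j) in enumerate(pairs):
    inv[(i, j)] = inv[(j, i)] = a

def from_voigt(V):
    A = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    A[i, j, k, l] = V[inv[(i, j)], inv[(k, l)]]
    return A

x = rng.standard_normal(6)
B = from_voigt(np.outer(x, x))
# A rank-one tensor has rank-one unfoldings; here the mode-1 unfolding
# has rank > 1, so B is not a rank-one tensor despite its Voigt matrix.
rank_mode1 = np.linalg.matrix_rank(B.reshape(3, 27))
```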
We notice that square matricization is unique for symmetric tensors, but not for CPS tensors. Examples 4.6 and 4.7 motivate us to consider other ways of matricization, with a hope to establish certain rank-one equivalence. To this end, it is necessary to introduce the tensor transpose, extending the concept of the matrix transpose.

Definition 4.8 Given a tensor T ∈ ℂ n d and a permutation π ∈ Π (1, … , d) , the π-transpose of T , denoted by T π ∈ ℂ n d , satisfies
In plain language, mode 1 of T π originates from mode π 1 of T , mode 2 of T π originates from mode π 2 of T , and so on. As a matter of fact, for a matrix A ∈ ℂ n 2 and π = (2, 1) , A π = A T . For a PS tensor T ∈ ℂ n 2d ps and π = (d + 1, … , 2d, 1, … , d) ,
Given any integers 1 ≤ i 1 , … , i d ≤ n , let us denote the decimal of the tuple i 1 … i d in the base-n numeral system. We now discuss matricization and vectorization.
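In numpy terms, the π-transpose is an axes permutation. The following sketch (0-based axes, with an illustrative rank-one CPS tensor of our own construction) checks the matrix case π = (2, 1) and the conjugate block swap π = (3, 4, 1, 2) for d = 2:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2

# pi-transpose: mode k of T^pi originates from mode pi_k of T,
# which is exactly np.transpose with 0-based axes (pi_1 - 1, ..., pi_d - 1).
def pi_transpose(T, pi):
    return np.transpose(T, axes=[p - 1 for p in pi])

# For a matrix and pi = (2, 1) this is the ordinary transpose.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
ok_matrix = np.allclose(pi_transpose(A, (2, 1)), A.T)

# For the rank-one CPS tensor T = x ⊗ x ⊗ conj(x) ⊗ conj(x), the block
# swap pi = (3, 4, 1, 2) returns the entrywise conjugate tensor (an
# assumed illustration of the conjugate symmetry, not a claim from the paper).
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
T = np.einsum('i,j,k,l->ijkl', x, x, x.conj(), x.conj())
ok_cps = np.allclose(pi_transpose(T, (3, 4, 1, 2)), T.conj())
```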

Definition 4.9 Given a tensor T ∈ ℂ n d , the vectorization of T , denoted by v(T) , is an n d -dimensional vector satisfying
Given an even-order tensor T ∈ ℂ n 2d , the standard square matricization (or simply matricization) of T , denoted by M(T) , is an n d × n d matrix satisfying
Obviously the standard square matricization is a π-matricization with π = (1, 2, … , 2d) . Vectorization, π-matricization and π-transpose are all one-to-one. They are different ways of representing tensor data. The following properties on ranks are straightforward.
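Since the tuple-to-decimal map treats i 1 as the most significant base-n digit, vectorization and standard square matricization coincide with row-major reshapes. A small sketch with 0-based indices:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 3, 2

# Decimal of the tuple (i_1, ..., i_d) in the base-n numeral system
# (0-based indices; i_1 is the most significant digit).
def dec(idx, n):
    v = 0
    for i in idx:
        v = v * n + i
    return v

shape = (n,) * (2 * d)
T = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Vectorization and square matricization are row-major (C-order) reshapes.
vec = T.reshape(-1)           # v(T), length n^{2d}
M = T.reshape(n**d, n**d)     # M(T), an n^d x n^d matrix

i, j, k, l = 1, 2, 0, 1
same_vec = vec[dec((i, j, k, l), n)] == T[i, j, k, l]
same_mat = M[dec((i, j), n), dec((k, l), n)] == T[i, j, k, l]
```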

Proposition 4.11
Given a tensor T ∈ ℂ n 2d and any π ∈ Π (1, … , 2d) , it follows that rank (T) = rank (T π ) and rank (M π (T)) ≤ rank (T) ; in particular, rank (T) = 1 ⟹ rank (M π (T)) = 1. As mentioned earlier, the π-matricization of a symmetric tensor is unique for any permutation π , since T π = T if T is symmetric. However, CPS tensors only possess partial symmetry as well as a certain conjugate property. Therefore, conditions on π are necessary to guarantee the rank-one equivalence, as well as for the π-matricization to be Hermitian.
Modes 1, … , d of T π originate from modes π 1 , … , π d of T , respectively. By (20), almost half of them (either ⌊d∕2⌋ or ⌈d∕2⌉ ) come from modes {1, … , d} of T and the remaining half from modes {d + 1, … , 2d} of T . This is also true for the modes of X . To provide a clearer presentation, we may construct a permutation such that the first half of the modes of X are from modes {1, … , d} of T and the remaining half are from modes {d + 1, … , 2d} of T . Explicitly, π ∈ Π(1, … , d) needs to satisfy (21). As T is symmetric with respect to modes {1, … , d} and with respect to modes {d + 1, … , 2d} , respectively, the order of the indices in (21) does not matter. Observing that rank (X) = rank (X π ) and M(X ⊗ Y) is rank-one, we may without loss of generality assume that the first ⌈d∕2⌉ modes of X are from modes {1, … , d} of T and the remaining ⌊d∕2⌋ from modes {d + 1, … , 2d} of T , and for the same reason assume that the first ⌊d∕2⌋ modes of Y are from modes {1, … , d} of T and the remaining ⌈d∕2⌉ from modes {d + 1, … , 2d} of T . In a nutshell, we can assume without loss of generality that X ⊗ Y = T π is symmetric with respect to the modes originating from {1, … , d} of T and, separately, with respect to the modes originating from {d + 1, … , 2d} of T . We proceed to prove by induction on d that, under this partial symmetry of X ⊗ Y , rank (X) = rank (Y) = 1 holds. When d = 1 , both X and Y are obviously rank-one as they are vectors. Suppose that the claim holds for d − 1 . For general d , as X ⊗ Y enjoys the above partial symmetry, we can swap all but the first mode of X with some modes of Y ; in particular, modes 2, … of X are swapped with corresponding modes of Y , respectively. Consequently, one has (22), and by defining a ∈ ℂ n and U ∈ ℂ n d−1 accordingly, we obtain that X = a ⊗ U . Similarly to (22), one may swap all but the last mode of Y with some modes of X and obtain Y = V ⊗ b , where V ∈ ℂ n d−1 and b ∈ ℂ n .
Since X ⊗ Y is symmetric with respect to the two groups of modes above, U ⊗ V inherits the corresponding partial symmetry, and so does V ⊗ U . By induction, we obtain rank (U) = rank (V) = 1 , proving that rank (X) = rank (Y) = 1.
If d is even, a similar mode-swapping argument applies, and by induction we obtain rank (V) = rank (U) = 1 , and so rank (X) = rank (Y) = 1 . ◻

In fact, the condition on π in (20) is also a necessary condition for the rank-one equivalence in Theorem 4.13 for a general CPS tensor. The proof, or the explanation of a counterexample, involves heavy notation and we leave it to interested readers. The key step leading to Theorem 4.13 is the identity (22), which is a consequence of mode swapping due to partial symmetry. This is doable because the modes {1, … , d} of T are (almost) equally allocated to the modes of X (the first half of the modes of T π ) and to the modes of Y (the last half of the modes of T π ), and the same holds for modes {d + 1, … , 2d} of T . If the number of modes of X that originate from modes {1, … , d} of T differs from the number of modes of Y that originate from modes {1, … , d} of T by more than one (such as Example 4.6 with π = (1, 2, 3, 4) ), then (22) cannot be obtained. This makes some modes binding, i.e., not separable into the outer product of a vector and a lower-order tensor.
In practice, such as in the modelling discussion (Section 4.3) and the numerical experiments (Section 5), we may focus on a particular permutation π satisfying (19) and (20). The most straightforward one is given in (23). In particular, π = (1, 3, 4, 2) for fourth-order tensors ( d = 2 ), π = (1, 2, 4, 5, 6, 3) for sixth-order tensors ( d = 3 ), and π = (1, 2, 5, 6, 7, 8, 3, 4) for eighth-order tensors ( d = 4 ). Before concluding this part, let us look at elasticity tensors (Example 4.7) again with Theorem 4.14 at hand. Since every elasticity tensor is CPS, Theorem 4.14 leads to the following interesting result from the rank-one equivalence.

Proof Since A is CPS and satisfies the condition of Theorem 4.14, we have that rank (A π ) = 1 and M(A π ) is a Hermitian matrix, and hence a real symmetric matrix. Therefore, we may let
Since A is CPS, it is symmetric with respect to the first two modes. This implies that a ⊗ b is a symmetric matrix, say a ⊗ b = x ⊗ x . We then also have
implying that A = x ⊗ x ⊗ x ⊗ x , a rank-one symmetric tensor. ◻
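For d = 2 and π = (1, 3, 4, 2), the π-matricization can be realized by one axes permutation followed by a reshape. The sketch below (helper names ours, on illustrative CPS tensors built as real combinations of conjugate rank-one terms) checks the Hermitian property of the unfolding and the rank-one correspondence:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2

def pi_matricize(T, n):
    # M_pi(T) for pi = (1, 3, 4, 2): the two row modes come from modes
    # {1, 3} of T and the two column modes from modes {4, 2}.
    return np.transpose(T, axes=(0, 2, 3, 1)).reshape(n * n, n * n)

def rank1_cps(x, lam):
    # Rank-one CPS term lam * x ⊗ x ⊗ conj(x) ⊗ conj(x) with lam real.
    return lam * np.einsum('i,j,k,l->ijkl', x, x, x.conj(), x.conj())

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The pi-matricization of a CPS tensor is Hermitian ...
T = rank1_cps(x, 1.5) + rank1_cps(y, -0.5)
M = pi_matricize(T, n)
is_hermitian = np.allclose(M, M.conj().T)

# ... and it is rank-one exactly when the CPS tensor is rank-one.
r_one = np.linalg.matrix_rank(pi_matricize(rank1_cps(x, 1.0), n))
r_two = np.linalg.matrix_rank(M)
```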

Computing best rank-one approximations
As an immediate application of the rank-one matricization equivalence, we now discuss how it can be used to compute best rank-one approximations of CPS tensors. Specifically, we consider the problem
As mentioned in Corollary 4.3, for a given T . Therefore, the above problem is essentially
i.e., finding the largest eigenvalue of the CPS tensor T . The model (25) is NP-hard when the order of T is larger than two, even in the real field [16,18]. Let us reformulate the tensor-based optimization model into a matrix optimization model.

Proof
The equivalence between (26) and (25) can be established via the correspondence X = M π (x ⊗d ⊗ x̄ ⊗d ) , where X and x are feasible solutions of (26) and (25), respectively.
Given an optimal solution z of (25), by Propositions 4.11 and 4.12, Z = M π (z ⊗d ⊗ z̄ ⊗d ) is a rank-one Hermitian matrix, and so Z = y ⊗ ȳ where y is an n d -dimensional unit vector. This shows that Z is a feasible solution of (26), whose objective value is the same as that of z . On the other hand, given an optimal solution Z of (26), let 𝒵 ∈ ℂ n 2d cps be such that Z = M π (𝒵) . As Z is a rank-one Hermitian matrix, Z = λ y ⊗ ȳ for some λ ∈ ℝ and ‖y‖ = 1 . Further, by tr (Z) = 1 and (27), we observe that λ = 1 and so Z = y ⊗ ȳ . Moreover, by Theorem 4.13, 𝒵 is a rank-one CPS tensor, i.e., 𝒵 = μ z ⊗d ⊗ z̄ ⊗d for some μ ∈ ℝ and ‖z‖ = 1 . Noticing that y ⊗ ȳ = Z = M π (𝒵) = μ M π (z ⊗d ⊗ z̄ ⊗d ) and ‖y‖ = ‖z‖ = 1 , it is easy to see that μ = 1 , resulting in Z = M π (z ⊗d ⊗ z̄ ⊗d ) . Therefore, z is a feasible solution of (25), whose objective value is the same as that of Z . ◻

We remark that both tr (X) = 1 and X ∈ M π (ℂ n 2d cps ) in the model (26) are linear equality constraints. In particular, X ∈ M π (ℂ n 2d cps ) contains O(n d ) equalities, which are the requirements of partial symmetry and the conjugate property of CPS tensors. As an example, when n = d = 2 and π = (1, 3, 4, 2) as in (23), X ∈ M π (ℂ 2 4 cps ) can be explicitly written as X 14 = X 22 = X 33 = X 41 , X 12 = X 31 , X 24 = X 43 , and X H = X. In fact, X H = X is already implied by the constraints X ∈ M π (ℂ n 2d cps ) , but we keep it in (26) to emphasize that the decision variable sits in the space of Hermitian matrices.

The problem (26) remains hard because of the rank-one constraint. However, it broadens the toolkit by resorting to various matrix optimization techniques, particularly in convex optimization. We now propose two convex relaxation methods. First, in the proof of Theorem 4.16, we observe that X being a rank-one Hermitian matrix with tr (X) = 1 actually implies that X is positive semidefinite. By dropping the rank-one constraint, (26) is relaxed to a semidefinite program (SDP): where X ⪰ O denotes that X is Hermitian positive semidefinite. The convex optimization model (28) can be easily solved by SDP solvers in CVX [14]. Alternatively, one may resort to first-order methods such as the alternating direction method of multipliers (ADMM).

The second relaxation method adds a penalty on the nuclear norm of the decision matrix in the objective function [25]. By dropping the rank-one constraint, this leads to the following convex optimization model, where the penalty parameter is positive and ‖X‖ * denotes the nuclear norm of X , a convex surrogate for rank (X) . To see why (29) is a convex relaxation of (26), we notice that ‖X‖ * is a convex function, and so the objective of (29) is concave. Moreover, for an optimal solution X of (26), being rank-one with tr (X) = 1 implies that X is positive semidefinite. Thus ‖X‖ * = tr (X) = 1 , which implies that the penalty term added to the objective function is actually a constant.

Our observations in several numerical examples show that the solutions obtained from the two convex relaxation models (28) and (29) are often rank-one (see Section 5). Once a rank-one solution X is obtained, one may recover a solution x of (25), as stipulated in the proof of Theorem 4.16. This x provides a solution to the best rank-one approximation problem (24). We remark that the convex relaxation methods can also be used to find good approximate solutions to the best rank-r approximation problem in a successive (greedy) manner.
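The observation underlying both relaxations (28) and (29), namely that a rank-one Hermitian matrix with unit trace is positive semidefinite with unit nuclear norm, is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(6)
m = 4

# A rank-one Hermitian matrix with unit trace: X = y y^H with ||y|| = 1.
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)
y /= np.linalg.norm(y)
X = np.outer(y, y.conj())

w = np.linalg.eigvalsh(X)                       # eigenvalues (real, ascending)
psd = bool(np.all(w >= -1e-12))                 # positive semidefinite
unit_trace = np.isclose(np.trace(X).real, 1.0)  # tr(X) = 1
# Nuclear norm = sum of singular values; for this X it equals tr(X) = 1.
nuc = np.linalg.svd(X, compute_uv=False).sum()
```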

Numerical experiments
In this section, we conduct numerical experiments to test the methods proposed in Section 4.3 to compute best rank-one approximations of CPS tensors. This also justifies the applicability of the rank-one equivalence in Theorem 4.13 or Theorem 4.14.
Both the nuclear norm penalty model (29) and the SDP relaxation method (28) are applied to solve three types of instances. Interestingly, both methods are able to return rank-one solutions for almost all the test instances, and thus guarantee optimality for the original problem (25). In case a rank-one solution fails to be obtained, one can slightly perturb the original tensor to lead to a success (see Example 5.2). All the numerical experiments are conducted using an Intel Core i5-4200M 2.5GHz computer with 4GB of RAM. The supporting software is MATLAB R2015a. To solve the convex optimization problems, CVX 2.1 [14] and the ADMM approach in [25] are called.

Quartic minimization from radar wave form design
In radar systems, one often regulates the interference power produced by unwanted returns through controlling the range-Doppler response [2]. It is important to design a suitable radar waveform minimizing the disturbance power at the output of the matched filter. This can be written as
where J r ∈ ℝ n 2 is the shift matrix for r = 0, 1, … , n − 1 , ⊙ denotes the Hadamard product, p(v) = (1, e i2πv , … , e i2π(n−1)v ) T , and the coefficients are determined by Kronecker deltas and indicator functions of the index sets Δ k of discrete frequencies. Interested readers are referred to [2] for more details of the ambiguity function and radar waveform design.
To account for the finite energy transmitted by the radar, it is assumed that ‖s‖ 2 = 1 , and a similarity constraint bounding ‖s − s 0 ‖ 2 needs to be enforced to obtain phase-only modulated waveforms, where s 0 is a known code sharing some nice properties. Noticing that ‖s‖ = 1 and s 0 is known, this similarity constraint can be realized by penalizing the quantity −|s H s 0 | in the objective function. Therefore, one arrives at the following quartic minimization problem (see [24] for a detailed discussion of the modelling):
with a positive penalty parameter. The objective function of (30) is a real-valued quartic conjugate form, i.e., there is a fourth-order CPS tensor T such that T( s̄ 2 s 2 ) equals the penalized objective. This shows that (30) is an instance of (25), which is equivalent to (26).
We use the data considered in [2] to construct the objective and set the penalty parameter to 30 in (30). To obtain phase-only modulated waveforms, a known code s 0 (see e.g., [15]) with |(s 0 ) i | = 1 for i = 1, … , n is chosen and further normalized so that ‖s 0 ‖ = 1 . The problem is solved by the nuclear norm penalty model (29) and the SDP relaxation method (28), respectively. In the experiment, we randomly generate s 0 for 100 instances and record in Table 1 the percentage of instances for which the corresponding method outputs a rank-one solution. The convex relaxation models are solved by the ADMM algorithm in [25], whose average CPU time (in seconds) is also reported. As observed in Table 1, both convex relaxation methods always obtain rank-one solutions, leading to optimal solutions of (30). In terms of speed, the nuclear norm penalty method generally runs faster than the SDP relaxation.

Determining elasticity tensors
Elasticity tensors (see Example 4.7) are three-dimensional fourth-order real positive definite tensors with certain symmetry. For a given tensor A ∈ ℝ 3 4 , this symmetry can be easily identified via the symmetric matrix in Voigt's notation (18). Therefore, A in such a form is an elasticity tensor if and only if (31) holds. Let T = −A and d = 2 in (25), i.e., max ‖x‖=1 −A( x̄ 2 x 2 ) . If we solve this problem and obtain a real optimal solution, then the optimal value verifies (31): A is an elasticity tensor if and only if the optimal value is negative. In the following, we implement this idea to determine the elasticity of tensors given in the literature. The first instance is taken from [38, Example 8.1], where
and the other entries in (18) are zeros. Both the nuclear norm penalty model (29) and the SDP relaxation method (28) output a rank-one matrix, which yields an optimal solution (−0.7184, −0.6956, 0) of max ‖x‖=1 −A( x̄ 2 x 2 ) . The corresponding optimal value is −3.2420 , and this validates that the given tensor A is an elasticity tensor.
In our second example, we consider the most general type of anisotropic medium that allows propagation of purely polarized waves in an arbitrary direction [21, Proposition 17]. Its Voigt matrix has the following form:
We randomly generate the seven parameters in (32) from i.i.d. standard normal distributions and solve the problem by models (28) and (29), respectively. For all the random instances, both (28) and (29) return a real-valued optimal solution with a positive optimal value, which implies that the generated tensor instance is not an elasticity tensor. For example, in one instance we have
The optimal solution obtained by (28) and (29) is (0.6978, 0.2104, 0.0918) and the optimal value is 4.4558. To see whether an elasticity tensor can be obtained in the form of (32), we increase its diagonal terms. For example, if we increase the first parameter by 5 and keep all the others unchanged in the previous instance, then both (28) and (29) return a real optimal solution (0.0005, 0.9862, 0.0133) with optimal value −2.2327 . Therefore, the modified tensor instance is an elasticity tensor.

Randomly generated CPS tensors
The data from (30) has its own structure. In this part, we test the two relaxation methods extensively on randomly generated CPS tensors. The aim is to check the chance of getting rank-one solutions, and hence optimal solutions of the largest eigenvalue problem (25), given the tractability of solving the two convex relaxation models. These CPS tensors are generated as follows. First, we randomly generate two real tensors U, V ∈ ℝ n 4 whose entries follow i.i.d. standard normal distributions, independently. We then let W = U + iV to define a complex tensor in ℂ n 4 . To make it PS, we further let X ∈ ℂ n 4 ps where
Finally, to make it CPS, we let T = (X + X H ) ∕ 2 ∈ ℂ n 4 cps . For various n ≤ 15 , 100 random CPS tensor instances are generated. We then solve the two convex relaxation models (29) and (28) using the ADMM algorithm, and record the percentage of instances that produce rank-one solutions. The results are shown in Table 2 together with the average CPU time (in seconds). Both the nuclear norm penalty method and the SDP relaxation model (28) are able to generate rank-one solutions for most randomly generated instances, and thus find the largest eigenvalue of the CPS tensors. In contrast to the radar waveform design data, the SDP relaxation outperforms the nuclear norm penalty method, both in speed and in the chance of optimality, as the dimension of the problem increases.
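The generation recipe above can be sketched as follows. The formula defining the partial symmetrization X is elided in the text, so the averaging used here is our reading of it; the final asserts check the partial symmetry and the conjugate property that make T a CPS tensor:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(7)
n, d = 3, 2

# W = U + iV with i.i.d. standard normal real and imaginary parts.
shape = (n,) * (2 * d)
W = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Partial symmetrization (our reading of the elided formula): average over
# permutations of the first d modes and of the last d modes.
X = np.zeros(shape, dtype=complex)
perms = list(permutations(range(d)))
for p in perms:
    for q in perms:
        X += np.transpose(W, list(p) + [d + a for a in q])
X /= len(perms) ** 2

# Conjugate "transpose" of the tensor: swap the two mode blocks and conjugate.
XH = np.conj(np.transpose(X, list(range(d, 2 * d)) + list(range(d))))
T = (X + XH) / 2

# T is CPS: symmetric within each mode block, conjugate across blocks.
sym_first = np.allclose(T, np.transpose(T, (1, 0, 2, 3)))
sym_last = np.allclose(T, np.transpose(T, (0, 1, 3, 2)))
conj_prop = np.allclose(T, np.conj(np.transpose(T, (2, 3, 0, 1))))
```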

Computing largest US-eigenvalues
Motivated by the geometric measure of quantum entanglement, Ni et al. [33] introduced the notions of unitary symmetric eigenvalue (US-eigenvalue) and unitary symmetric eigenvector (US-eigenvector). The geometric measure of entanglement has various applications such as entanglement witnesses and quantum computation. The US-eigenvalues and US-eigenvectors reflect, to a certain extent, some specific states of the composite quantum system. Specifically, λ ∈ ℂ is a US-eigenvalue associated with a US-eigenvector x ∈ ℂ n of a symmetric tensor Z ∈ ℂ n d if
It is known that US-eigenvalues must be real. Jiang et al. [24] showed that λ ∈ ℝ is a US-eigenvalue of a symmetric tensor Z ∈ ℂ n d if and only if λ 2 is a C-eigenvalue of the CPS tensor Z ⊗ Z̄ ∈ ℂ n 2d , and their eigenvectors are closely related. Therefore, we may resort to the model (25) to find the largest US-eigenvalue with its corresponding eigenvectors of a symmetric tensor Z , i.e., to solve
In these tests, we look into the two examples in [33]. We first transfer the largest US-eigenvalue problem to (33), and then use the SDP relaxation model (28). The hope is to find rank-one solutions and hence to obtain the largest US-eigenvalue with its corresponding eigenvectors.

By applying the SDP relaxation method to (33) for the first example and solving it using CVX, we directly generate a rank-one solution. In other words, we obtain a C-eigenpair ( λ 2 , x) with λ ∈ ℝ of the CPS tensor Z ⊗ Z̄ . However, Z(x 3 ) may not be real. This can be easily fixed by rotating x : letting z = e −iθ∕3 x where θ = arg (Z(x 3 )) , one has Z(z 3 ) = e −iθ Z(x 3 ) ∈ ℝ . This implies that ( λ , z) is the corresponding US-eigenpair of Z . In this example, it recovers the largest US-eigenvalue 2.3547 with its corresponding US-eigenvector (0.9726, 0.2326) T .

Example 5.2 ([33, Table 2]) Let a symmetric tensor Z ∈ ℂ 2 3 s have entries Z 111 = 2 , Z 112 = Z 121 = Z 211 = −1 , Z 122 = Z 212 = Z 221 = −2 , Z 222 = 1 , and the others being zeros.

We again consider the SDP relaxation method and resort to CVX for solutions. Unfortunately, it fails to give us a rank-one solution. Motivated by the high frequency of rank-one solutions obtained when solving randomly generated tensors as shown in Table 2, we add a tiny random perturbation E ∈ ℂ 2 3 s with ‖E‖ = 10 −4 to the original tensor Z . The hope is to generate a rank-one solution via SDP relaxation while keeping the original US-eigenpair almost unchanged since E is small enough. Furthermore, the largest US-eigenvalue may have more than one US-eigenvector, i.e., (33) admits multiple global optimal solutions. In our experiments, we observe that adding tiny perturbations not only yields a rank-one solution, but also helps to generate different rank-one solutions under different perturbations. Using this approach, we successfully obtain the largest US-eigenvalue 3.1623 and its four US-eigenvectors, which are consistent with the results in [33]. In fact, our convex relaxation approach is able to certify that the obtained eigenvalue is globally the largest as long as the solution to (28) is rank-one. This certificate, however, cannot be deduced from the solutions obtained in [33]. Therefore, our experiment on Example 5.2 helps to verify that the largest one among all the eigenvalues obtained in [33, Table 2] is indeed the largest eigenvalue of Z .
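The phase-rotation step used above to make Z(x 3 ) real can be sketched as follows, on an arbitrary random complex tensor and vector (only the value of the cubic form matters for the rotation):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2

# Arbitrary complex third-order tensor and vector for illustration.
Z = rng.standard_normal((n, n, n)) + 1j * rng.standard_normal((n, n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def form(Z, x):
    # The cubic form Z(x^3) = sum_{ijk} Z_ijk x_i x_j x_k.
    return np.einsum('ijk,i,j,k->', Z, x, x, x)

theta = np.angle(form(Z, x))        # theta = arg(Z(x^3))
z = np.exp(-1j * theta / 3) * x     # rotate: z = e^{-i theta/3} x
val = form(Z, z)                    # Z(z^3) = e^{-i theta} Z(x^3), now real
```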
To conclude the numerical results, the convex relaxation methods proposed in Section 4.3, established on the basis of the rank-one equivalence, are capable of finding optimal solutions for best rank-one approximations or the largest eigenvalue of CPS tensors. At least they are able to generate rank-one solutions (hence optimality) for the three types of instances discussed above. In case a rank-one solution fails to be obtained, one may slightly perturb the original tensor, which may increase the chance of obtaining rank-one solutions. This is one of the research topics to look into further.