This section discusses how to compute an incomplete tensor decomposition for a symmetric tensor \(\mathcal {F} \in \mathrm {S}^{3}(\mathbb {C}^{d})\) when only its subtensor \(\mathcal {F}_{{{\varOmega }}}\) is given, for the label set Ω in (1.4). For convenience of notation, the labels for \(\mathcal {F}\) begin at zero, while a vector \(u\in \mathbb {C}^{d}\) is still labelled as u := (u1,…,ud). We set
$$ n:=d-1, \quad x = (x_{1}, \ldots, x_{n}), \quad x_{0} := 1. $$
For a given rank r, denote the monomial sets
$$ \mathscr{B}_{0} := \{x_{1},\dots,x_{r}\}, \quad \mathscr{B}_{1} := \{x_{i} x_{j}: i \in [r],\, j \in [r+1, n] \}. $$
For a monomial power \(\alpha \in \mathbb {N}^{n}\), by writing \(\alpha \in {\mathscr{B}}_{1}\), we mean that \(x^{\alpha } \in {\mathscr{B}}_{1}\). For each \(\alpha \in {\mathscr{B}}_{1}\), one can write α = ei + ej with i ∈ [r], j ∈ [r + 1,n]. Let \(\mathbb {C}^{[r] \times {\mathscr{B}}_{1}}\) denote the space of matrices labelled by the pair \((k,\alpha )\in [r] \times {\mathscr{B}}_{1}\). For each \(\alpha = e_{i} + e_{j}\in {\mathscr{B}}_{1}\) and \(G\in \mathbb {C}^{[r] \times {\mathscr{B}}_{1}}\), denote the quadratic polynomial in x
$$ \varphi_{ij}[G](x) := \sum\limits_{k =1}^{r}G(k, e_{i}+e_{j})x_{k}- x_{i} x_{j}. $$
(3.1)
Suppose r is the symmetric rank of \(\mathcal {F}\). A matrix \(G\in \mathbb {C}^{[r] \times {\mathscr{B}}_{1}}\) is called a generating matrix of \(\mathcal {F}\) if each φij[G](x), with \(\alpha = e_{i} + e_{j} \in {\mathscr{B}}_{1}\), is a generating polynomial of \(\mathcal {F}\). Equivalently, G is a generating matrix of \(\mathcal {F}\) if and only if
$$ \langle x_{t} \varphi_{ij}[G](x),\mathcal{F} \rangle = {\sum}_{k=1}^{r} G(k,e_{i}+e_{j})\mathcal{F}_{0kt}-\mathcal{F}_{ijt} = 0, \quad t = 0, 1, \ldots, n, $$
(3.2)
for all i ∈ [r], j ∈ [r + 1,n]. The notion of a generating matrix is motivated by the fact that the entire tensor \(\mathcal {F}\) can be recursively determined by G and its first r entries (see [40]). The existence and uniqueness of the generating matrix G are shown in the following theorem.
Theorem 3.1
Suppose \(\mathcal {F}\) has the decomposition
$$ \mathcal{F} = \lambda_{1}\left[\begin{array}{c} 1 \\ u_{1} \end{array}\right]^{\otimes 3}+\cdots+\lambda_{r}\left[\begin{array}{c} 1 \\ u_{r} \end{array}\right]^{\otimes 3} , $$
(3.3)
for vectors \(u_{i} \in \mathbb {C}^{n}\) and scalars \( 0\neq \lambda _{i} \in \mathbb {C}\). If the subvectors (u1)1:r,…,(ur)1:r are linearly independent, then there exists a unique generating matrix \(G\in \mathbb {C}^{[r] \times {\mathscr{B}}_{1}}\) satisfying (3.2) for the tensor \(\mathcal {F}\).
Proof
We first prove the existence. For each i = 1,…,r, denote the vectors vi = (ui)1:r. Under the given assumption, V := [v1…vr] is an invertible matrix. For each l = r + 1,…,n, let
$$ N_{l} := V \cdot \text{diag}\left( (u_{1})_{l},\ldots,(u_{r})_{l} \right) \cdot V^{-1}. $$
(3.4)
Then Nlvi = (ui)lvi for i = 1,…,r, i.e., Nl has eigenvalues (u1)l,…,(ur)l with corresponding eigenvectors (u1)1:r,…,(ur)1:r. We select \(G\in \mathbb {C}^{[r] \times {\mathscr{B}}_{1}}\) to be the matrix such that
$$ N_{l} = \left[\begin{array}{ccc} G(1,e_{1}+e_{l}) & {\cdots} & G(r,e_{1}+e_{l}) \\ {\vdots} & {\ddots} & {\vdots} \\ G(1,e_{r}+e_{l}) & {\cdots} & G(r,e_{r}+e_{l}) \end{array}\right],\quad l=r+1,\ldots,n. $$
For each s = 1,…,r and \(\alpha = e_{i} + e_{j} \in {\mathscr{B}}_{1}\) with i ∈ [r], j ∈ [r + 1,n],
$$ \varphi_{ij}[G](u_{s}) =\sum\limits_{k=1}^{r} G(k,e_{i}+e_{j})(u_{s})_{k} - (u_{s})_{i} (u_{s})_{j} = 0. $$
For each t = 1,…,n, it holds that
$$ \begin{array}{@{}rcl@{}} \langle x_{t} \varphi_{ij}[G](x),\mathcal{F} \rangle &=& \left\langle \sum\limits_{k =1}^{r}G(k, e_{i}+e_{j}) x_{t}x_{k} - x_{t} x_{i} x_{j}, \mathcal{F} \right\rangle \\ &=& \left\langle \sum\limits_{k =1}^{r}G(k, e_{i}+e_{j}) x_{t}x_{k} - x_{t} x_{i} x_{j}, \sum\limits_{s=1}^{r} \lambda_{s} \left[\begin{array}{c} 1 \\ u_{s} \end{array}\right]^{\otimes 3} \right\rangle \\ &=& \sum\limits_{k =1}^{r}G(k, e_{i}+e_{j})\sum\limits_{s=1}^{r} \lambda_{s} (u_{s})_{t} (u_{s})_{k} - \sum\limits_{s=1}^{r} \lambda_{s} (u_{s})_{t} (u_{s})_{i} (u_{s})_{j} \\ &=& \sum\limits_{s=1}^{r} \lambda_{s} (u_{s})_{t} \left( \sum\limits_{k =1}^{r}G(k, e_{i}+e_{j})(u_{s})_{k} -(u_{s})_{i} (u_{s})_{j} \right) \\ &=& 0. \end{array} $$
When t = 0, we can similarly get
$$ \begin{array}{@{}rcl@{}} \langle \varphi_{ij}[G](x) ,\mathcal{F} \rangle &=& \left\langle \sum\limits_{k =1}^{r}G(k, e_{i}+e_{j}) x_{k} - x_{i} x_{j}, \mathcal{F} \right\rangle\\ &=& \sum\limits_{s=1}^{r} \lambda_{s} \left( \sum\limits_{k =1}^{r}G(k, e_{i}+e_{j})(u_{s})_{k} -(u_{s})_{i} (u_{s})_{j} \right) \\ &=& 0. \end{array} $$
Therefore, the matrix G satisfies (3.2) and it is a generating matrix for \(\mathcal {F}\).
Second, we prove the uniqueness of such G. For each \(\alpha = e_{i} + e_{j} \in {\mathscr{B}}_{1}\), let
$$ F := \left[\begin{array}{ccc} \mathcal{F}_{011} & {\cdots} & \mathcal{F}_{0r1} \\ {\vdots} & {\ddots} & {\vdots} \\ \mathcal{F}_{01n} & {\cdots} & \mathcal{F}_{0rn} \end{array}\right],\quad g_{ij} := \left[\begin{array}{c} \mathcal{F}_{1ij} \\ {\vdots} \\ \mathcal{F}_{nij} \end{array}\right]. $$
Since G satisfies (3.2), we have F ⋅ G(:,ei + ej) = gij. The decomposition (3.3) implies that
$$ F = \left[\begin{array}{ccc} u_{1} & {\cdots} & u_{r} \end{array}\right] \cdot \text{diag}(\lambda_{1},\ldots,\lambda_{r}) \cdot \left[\begin{array}{ccc} v_{1} & {\cdots} & v_{r} \end{array}\right]^{T}. $$
The sets {v1,…,vr} and {u1,…,ur} are both linearly independent. Since each λi≠ 0, the matrix F has full column rank. Hence, the generating matrix G satisfying F ⋅ G(:,ei + ej) = gij for all i ∈ [r],j ∈ [r + 1,n] is unique. □
The following is an example of generating matrices.
Example 3.2
Consider the tensor \(\mathcal {F}\in \mathrm {S}^{3}(\mathbb {C}^{6})\) given as
$$ \mathcal{F} = 0.4\cdot(1,1,1,1,1,1)^{\otimes 3} + 0.6\cdot(1,-1,2,-1,2,3)^{\otimes 3}. $$
The rank r = 2, \({\mathscr{B}}_{0}=\{x_{1},x_{2}\}\) and \({\mathscr{B}}_{1} = \{x_{1}x_{3},x_{1}x_{4},x_{1}x_{5},x_{2}x_{3},x_{2}x_{4},x_{2}x_{5}\}\). We have the vectors
$$ u_{1} =(1,1,1,1,1), \quad u_{2} = (-1,2,-1,2,3), \quad v_{1} =(1,1), \quad v_{2} = (-1,2). $$
The matrices N3, N4, N5 as in (3.4) are
$$ \begin{array}{@{}rcl@{}} N_{3} &=& \left[\begin{array}{cc}1 & -1 \\ 1 & 2 \end{array}\right] \left[\begin{array}{cc}1 & 0 \\ 0 & -1 \end{array}\right] \left[\begin{array}{cc}1 & -1 \\ 1 & 2 \end{array}\right]^{-1}= \left[\begin{array}{cc}1/3 & 2/3 \\ 4/3 & -1/3 \end{array}\right], \\ N_{4} &=& \left[\begin{array}{cc}1 & -1 \\ 1 & 2 \end{array}\right] \left[\begin{array}{cc}1 & 0 \\ 0 & 2 \end{array}\right] \left[\begin{array}{cc}1 & -1 \\ 1 & 2 \end{array}\right]^{-1}= \left[\begin{array}{cc}4/3 & -1/3 \\ -2/3 & 5/3 \end{array}\right], \\ N_{5} &=& \left[\begin{array}{cc}1 & -1 \\ 1 & 2 \end{array}\right] \left[\begin{array}{cc}1 & 0 \\ 0 & 3 \end{array}\right] \left[\begin{array}{cc}1 & -1 \\ 1 & 2 \end{array}\right]^{-1}= \left[\begin{array}{cc}5/3 & -2/3 \\ -4/3 & 7/3 \end{array}\right]. \end{array} $$
The entries of the generating matrix G are listed below:
$$ \begin{array}{c|cccccc} \alpha & x_{1}x_{3} & x_{1}x_{4} & x_{1}x_{5} & x_{2}x_{3} & x_{2}x_{4} & x_{2}x_{5} \\ \hline G(1,\alpha) & 1/3 & 4/3 & 5/3 & 4/3 & -2/3 & -4/3 \\ G(2,\alpha) & 2/3 & -1/3 & -2/3 & -1/3 & 5/3 & 7/3 \end{array} $$
(3.5)
The generating polynomials in (3.1) are
$$ \begin{array}{@{}rcl@{}} \varphi_{13}[G](x) &=& \frac{1}{3}x_{1}+\frac{2}{3}x_{2}-x_{1}x_{3},\quad \varphi_{23}[G](x) = \frac{4}{3}x_{1}-\frac{1}{3}x_{2}-x_{2}x_{3},\\ \varphi_{14}[G](x) &=& \frac{4}{3}x_{1}-\frac{1}{3}x_{2}-x_{1}x_{4},\quad \varphi_{24}[G](x) = -\frac{2}{3}x_{1}+\frac{5}{3}x_{2}-x_{2}x_{4},\\ \varphi_{15}[G](x) &=& \frac{5}{3}x_{1}-\frac{2}{3}x_{2}-x_{1}x_{5},\quad \varphi_{25}[G](x) = -\frac{4}{3}x_{1}+\frac{7}{3}x_{2}-x_{2}x_{5}. \end{array} $$
The above generating polynomials can be written in the form
$$ \left[\begin{array}{c} \varphi_{1j}[G](x)\\ \varphi_{2j}[G](x) \end{array}\right] = N_{j}\left[\begin{array}{c} x_{1} \\ x_{2} \end{array}\right] - x_{j} \left[\begin{array}{c} x_{1}\\ x_{2} \end{array}\right]\quad \text{for } j=3,4,5. $$
For x to be a common zero of φ1j[G](x) and φ2j[G](x), it requires that (x1,x2) is an eigenvector of Nj with the corresponding eigenvalue xj.
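The construction in the proof of Theorem 3.1 can be checked numerically. The following numpy sketch (an illustration we add here, not part of the original development) builds the matrices \(N_{3}, N_{4}, N_{5}\) of (3.4) for the tensor of Example 3.2 and verifies the generating-matrix property (3.2).

```python
import numpy as np

# Tensor of Example 3.2: F = 0.4*a^{(x)3} + 0.6*b^{(x)3} in S^3(C^6), r = 2.
a = np.ones(6)
b = np.array([1.0, -1.0, 2.0, -1.0, 2.0, 3.0])
F = 0.4 * np.einsum('i,j,k->ijk', a, a, a) + 0.6 * np.einsum('i,j,k->ijk', b, b, b)

r, n = 2, 5                       # labels run over 0, 1, ..., n with x_0 = 1
U = np.stack([a[1:], b[1:]])      # rows are u_1, u_2
V = U[:, :r].T                    # V = [v_1 ... v_r] with v_i = (u_i)_{1:r}

# N_l = V * diag((u_1)_l, ..., (u_r)_l) * V^{-1} as in (3.4); row i of N_l
# holds the generating-matrix entries G(1, e_i+e_l), ..., G(r, e_i+e_l).
N = {l: V @ np.diag(U[:, l - 1]) @ np.linalg.inv(V) for l in range(r + 1, n + 1)}

# Verify the generating-matrix property (3.2):
# sum_k G(k, e_i+e_j) F_{0kt} - F_{ijt} = 0 for all t = 0, ..., n.
for i in range(1, r + 1):
    for j in range(r + 1, n + 1):
        g = N[j][i - 1, :]        # G(:, e_i + e_j)
        assert np.allclose(F[0, 1:r + 1, :].T @ g - F[i, j, :], 0)

print(np.round(N[3], 4))          # agrees with N_3 computed in Example 3.2
```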
Computing the Tensor Decomposition
We show how to find an incomplete tensor decomposition (3.3) for \(\mathcal {F}\) when only its subtensor \(\mathcal {F}_{{{\varOmega }}}\) is given, where the label set Ω is as in (1.4). Suppose \(\mathcal {F}\) has the decomposition (3.3) for vectors \(u_{i} \in \mathbb {C}^{n}\) and nonzero scalars \(\lambda _{i} \in \mathbb {C}\). Assume the subvectors (u1)1:r,…,(ur)1:r are linearly independent, so there is a unique generating matrix G for \(\mathcal {F}\), by Theorem 3.1.
For each \(\alpha = e_{i} + e_{j} \in {\mathscr{B}}_{1}\) with i ∈ [r],j ∈ [r + 1,n] and for each
$$ l=r+1,\ldots,j-1,j+1,\ldots,n, $$
the generating matrix G satisfies the equations
$$ \left\langle x_{l} \left( \sum\limits_{k =1}^{r}G(k, e_{i}+e_{j})x_{k} - x_{i} x_{j} \right),\mathcal{F} \right\rangle = \sum\limits_{k =1}^{r}G(k, e_{i}+e_{j}) \mathcal{F}_{0kl} - \mathcal{F}_{ijl} = 0. $$
(3.6)
Let the matrix \(A_{ij}[\mathcal {F}]\in \mathbb {C}^{(n-r-1)\times r}\) and the vector \(b_{ij}[\mathcal {F}]\in \mathbb {C}^{n-r-1}\) be such that
$$ A_{ij}[\mathcal{F}] := \left[\begin{array}{ccc} \mathcal{F}_{0,1,r+1} & {\cdots} & \mathcal{F}_{0,r,r+1} \\ {\vdots} & {\ddots} & {\vdots} \\ \mathcal{F}_{0,1,j-1} & {\cdots} & \mathcal{F}_{0,r,j-1} \\ \mathcal{F}_{0,1,j+1} & {\cdots} & \mathcal{F}_{0,r,j+1} \\ {\vdots} & {\ddots} & {\vdots} \\ \mathcal{F}_{0,1,n} & {\cdots} & \mathcal{F}_{0,r,n} \end{array}\right], \quad b_{ij}[\mathcal{F}] := \left[\begin{array}{c} \mathcal{F}_{i,j,r+1}\\ {\vdots} \\ \mathcal{F}_{i,j,j-1}\\ \mathcal{F}_{i,j,j+1}\\ {\vdots} \\ \mathcal{F}_{i,j,n} \end{array}\right]. $$
(3.7)
To avoid ambiguity in the labels of the tensor entries of \(\mathcal {F}\), commas are inserted to separate the indices.
The equations in (3.6) can be equivalently written as
$$ A_{ij}[\mathcal{F}] \cdot G(:, e_{i}+e_{j}) = b_{ij}[\mathcal{F}]. $$
(3.8)
If the rank \(r\le \frac {d}{2}-1\), then n − r − 1 = d − r − 2 ≥ r, so the matrix \(A_{ij}[\mathcal {F}]\) has at least as many rows as columns. If \(A_{ij}[\mathcal {F}]\) has linearly independent columns, then (3.8) uniquely determines G(:,α). In this case, the matrix G can be fully determined by the linear systems (3.8). Let \(N_{r+1}(G),\ldots ,N_{n}(G) \in \mathbb {C}^{r\times r}\) be the matrices given as
$$ N_{l}(G) = \left[\begin{array}{ccc} G(1,e_{1}+e_{l}) & {\cdots} &G(r,e_{1}+e_{l}) \\ {\vdots} & {\ddots} & {\vdots} \\ G(1,e_{r}+e_{l}) & {\cdots} &G(r,e_{r}+e_{l}) \end{array}\right],\quad l=r+1,\ldots, n. $$
(3.9)
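For illustration, the following numpy sketch (our addition, under the assumption that the full tensor is available to generate the known entries) assembles the matrices \(A_{ij}[\mathcal {F}]\) and vectors \(b_{ij}[\mathcal {F}]\) of (3.7) for the tensor of Example 3.2, solves (3.8) column by column, and stores the result as the matrices \(N_{l}(G)\) of (3.9).

```python
import numpy as np

# Tensor of Example 3.2 (r = 2, n = 5); only entries with pairwise
# distinct labels are read, i.e., entries available in F_Omega.
a = np.ones(6)
b = np.array([1.0, -1.0, 2.0, -1.0, 2.0, 3.0])
F = 0.4 * np.einsum('i,j,k->ijk', a, a, a) + 0.6 * np.einsum('i,j,k->ijk', b, b, b)
r, n = 2, 5

N = {}
for l in range(r + 1, n + 1):
    Nl = np.zeros((r, r))
    rows = [t for t in range(r + 1, n + 1) if t != l]   # t = r+1..n, t != l
    # A_{il}[F] of (3.7); it depends only on l, not on i
    A = np.array([[F[0, k, t] for k in range(1, r + 1)] for t in rows])
    for i in range(1, r + 1):
        rhs = np.array([F[i, l, t] for t in rows])      # b_{il}[F] of (3.7)
        # least-squares solution of A_{il}[F] G(:, e_i+e_l) = b_{il}[F], eqn (3.8)
        Nl[i - 1, :] = np.linalg.lstsq(A, rhs, rcond=None)[0]
    N[l] = Nl

print(np.round(N[4], 4))   # should agree with N_4(G) in Example 3.5
```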
As in the proof of Theorem 3.1, one can see that
$$ N_{l}(G) \left[\begin{array}{c} (u_{i})_{1} \\ {\vdots} \\(u_{i})_{r} \end{array}\right] = (u_{i})_{l} \cdot \left[\begin{array}{c} (u_{i})_{1} \\ {\vdots} \\(u_{i})_{r} \end{array}\right], \quad l=r+1,\ldots, n. $$
The above is equivalent to the equations
$$ N_{l}(G)v_{i} = (w_{i})_{l-r} \cdot v_{i}, \quad l=r+1,\ldots, n, $$
for the vectors (i = 1,…,r)
$$ v_{i} := (u_{i})_{1:r}, \quad w_{i} := (u_{i})_{r+1:n}. $$
(3.10)
Each vi is a common eigenvector of the matrices Nr+ 1(G),…,Nn(G) and (wi)l−r is the associated eigenvalue of Nl(G). These matrices may or may not have repeated eigenvalues. Therefore, we select a generic vector \(\xi := (\xi _{r+1},\dots ,\xi _{n})\) and let
$$ N(\xi) := \xi_{r+1}N_{r+1}(G)+\cdots+\xi_{n}N_{n}(G). $$
(3.11)
The eigenvalues of N(ξ) are ξTw1,…,ξTwr. When w1,…,wr are distinct from each other and ξ is generic, the matrix N(ξ) does not have a repeated eigenvalue, and hence its eigenvectors v1,…,vr are unique up to scaling. Let \(\tilde {v}_{1},\ldots ,\tilde {v}_{r}\) be unit length eigenvectors of N(ξ). They are also common eigenvectors of Nr+ 1(G),…,Nn(G). For each i = 1,…,r, let \(\tilde {w}_{i}\) be the vector whose j th entry \((\tilde {w}_{i})_{j}\) is the eigenvalue of Nj+r(G) associated with the eigenvector \(\tilde {v}_{i}\), or equivalently,
$$ \tilde{w}_{i}=\left( \tilde{v}_{i}^{H} N_{r+1}(G)\tilde{v}_{i},\dots, \tilde{v}_{i}^{H} N_{n}(G)\tilde{v}_{i}\right),\quad i=1,\ldots,r. $$
(3.12)
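The eigenvector step can be sketched numerically as follows (our illustration, using the matrices \(N_{3}, N_{4}, N_{5}\) from Example 3.2 and real data, so the Rayleigh quotients in (3.12) need no conjugation):

```python
import numpy as np

# Matrices N_3(G), N_4(G), N_5(G) from Example 3.2 (r = 2, n = 5).
N3 = np.array([[1.0, 2.0], [4.0, -1.0]]) / 3
N4 = np.array([[4.0, -1.0], [-2.0, 5.0]]) / 3
N5 = np.array([[5.0, -2.0], [-4.0, 7.0]]) / 3

xi = np.array([3.0, 4.0, 5.0])              # a (generic) choice of xi
Nxi = xi[0] * N3 + xi[1] * N4 + xi[2] * N5  # N(xi) as in (3.11)

eigvals, eigvecs = np.linalg.eig(Nxi)       # columns are unit-length eigenvectors
vtil = [eigvecs[:, k].real for k in range(2)]
# Rayleigh quotients (3.12) recover the eigenvalue vectors w~_i
wtil = [np.array([v @ M @ v for M in (N3, N4, N5)]) / (v @ v) for v in vtil]

print(np.round(eigvals.real, 4))            # the values xi^T w_i, here {12, 20}
```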
Up to a permutation of \((\tilde {v}_{1},\ldots , \tilde {v}_{r})\), there exist scalars γi such that
$$ v_{i} = \gamma_{i} \tilde{v}_{i}, \quad w_{i} = \tilde{w}_{i}. $$
(3.13)
The tensor decomposition of \(\mathcal {F}\) can also be written as
$$ \mathcal{F} = \lambda_{1} \left[\begin{array}{c} 1 \\ \gamma_{1} \tilde{v}_{1} \\ \tilde{w}_{1} \end{array}\right]^{\otimes 3} + {\cdots} +\lambda_{r} \left[\begin{array}{c} 1 \\ \gamma_{r} \tilde{v}_{r} \\ \tilde{w}_{r} \end{array}\right]^{\otimes 3}. $$
The scalars \(\lambda _{1},\dots ,\lambda _{r}\) and \( \gamma _{1},\dots ,\gamma _{r}\) satisfy the linear equations
$$ \begin{array}{@{}rcl@{}} \lambda_{1}\gamma_{1} \tilde{v}_{1} \otimes\tilde{w}_{1} +\cdots+ {\lambda_{r}}{\gamma_{r}} \tilde{v}_{r} \otimes \tilde{w}_{r} &=& \mathcal{F}_{[0,1:r,r+1:n]}, \\ \lambda_{1}{\gamma_{1}^{2}}\tilde{v}_{1}\otimes \tilde{v}_{1} \otimes \tilde{w}_{1}+\cdots+\lambda_{r}{\gamma_{r}^{2}} \tilde{v}_{r}\otimes \tilde{v}_{r}\otimes \tilde{w}_{r} &=&\mathcal{F}_{[1:r,1:r,r+1:n]} . \end{array} $$
Denote the label sets
$$ J_{1} := \{(0,j,k): j \in [r],\, k \in [r+1,n]\}, \quad J_{2} := \{(i,j,k): i,j \in [r],\, i \neq j,\, k \in [r+1,n]\}. $$
(3.14)
To determine the scalars λi, γi, we solve the linear least squares problems
$$ \begin{array}{@{}rcl@{}} &&\underset{(\beta_{1},\ldots,\beta_{r})}{\min} \left\|\mathcal{F}_{J_{1}} - \sum\limits_{i=1}^{r} \beta_{i} \cdot \tilde{v}_{i} \otimes \tilde{w}_{i} \right\|^{2}, \end{array} $$
(3.15)
$$ \begin{array}{@{}rcl@{}} && \underset{(\theta_{1},\ldots,\theta_{r})}{\min} \left\|\mathcal{F}_{J_{2}} - \sum\limits_{k=1}^{r}\theta_{k} \cdot (\tilde{v}_{k} \otimes \tilde{v}_{k} \otimes \tilde{w}_{k})_{J_{2}} \right\|^{2}. \end{array} $$
(3.16)
Let \((\beta _{1}^{\ast },\ldots ,\beta _{r}^{\ast })\), \((\theta _{1}^{\ast },\ldots ,\theta _{r}^{\ast })\) be minimizers of (3.15) and (3.16) respectively. Then, for each i = 1,…,r, let
$$ \lambda_{i} := (\beta_{i}^{\ast})^{2}/\theta_{i}^{\ast}, \quad \gamma_{i} := \theta_{i}^{\ast}/\beta_{i}^{\ast}. $$
(3.17)
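The scalar-recovery step can be sketched as below (our illustration in numpy, with the unit eigenvectors and eigenvalue vectors for the tensor of Example 3.2 hardcoded; the off-diagonal mask reflects that the entries \(\mathcal{F}_{iik}\) are not given in \(\mathcal{F}_{\varOmega}\)):

```python
import numpy as np

# Tensor of Example 3.2 and its eigen-data (unit eigenvectors of N(xi)).
a = np.ones(6)
b = np.array([1.0, -1.0, 2.0, -1.0, 2.0, 3.0])
F = 0.4 * np.einsum('i,j,k->ijk', a, a, a) + 0.6 * np.einsum('i,j,k->ijk', b, b, b)
r = 2
vt = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([-1.0, 2.0]) / np.sqrt(5)]
wt = [np.array([1.0, 1.0, 1.0]), np.array([-1.0, 2.0, 3.0])]

# (3.15): fit beta_i against the block of entries F_{0jk}, j in [r], k > r
A1 = np.stack([np.outer(vt[i], wt[i]).ravel() for i in range(r)], axis=1)
beta = np.linalg.lstsq(A1, F[0, 1:r + 1, r + 1:].ravel(), rcond=None)[0]

# (3.16): restricted to off-diagonal labels i != j, since F_{iik} is not given
mask = ~np.eye(r, dtype=bool)
A2 = np.stack([np.einsum('i,j,k->ijk', vt[i], vt[i], wt[i])[mask].ravel()
               for i in range(r)], axis=1)
theta = np.linalg.lstsq(A2, F[1:r + 1, 1:r + 1, r + 1:][mask].ravel(),
                        rcond=None)[0]

lam, gam = beta ** 2 / theta, theta / beta   # (3.17)
print(np.round(lam, 4), np.round(gam, 4))
```

Here the recovered values are λ = (0.4, 0.6) and γ = (√2, √5), matching Example 3.5.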
For the vectors (i = 1,…,r)
$$ p_{i} := \sqrt[3]{\lambda_{i}}\left( 1,\gamma_{i} \tilde{v}_{i},\tilde{w}_{i}\right), $$
the sum \(p_{1}^{\otimes 3}+ {\cdots } +p_{r}^{\otimes 3}\) is a tensor decomposition for \(\mathcal {F}\). This is justified in the following theorem.
Theorem 3.3
Suppose the tensor \(\mathcal {F}\) has the decomposition as in (3.3). Assume that the vectors v1,…,vr are linearly independent and the vectors w1,…,wr are distinct from each other, where v1,…,vr,w1,…,wr are defined as in (3.10). Let ξ be a generically chosen coefficient vector and let p1,…,pr be the vectors produced as above. Then, the tensor decomposition \(\mathcal {F} = p_{1}^{\otimes 3}+ {\cdots } +p_{r}^{\otimes 3}\) is unique.
Proof
Since v1,…,vr are linearly independent, the tensor decomposition (3.3) is unique, up to scalings and permutations. By Theorem 3.1, there is a unique generating matrix G for \(\mathcal {F}\) satisfying (3.2). Under the given assumptions, (3.8) uniquely determines G. Note that ξTw1,…,ξTwr are the eigenvalues of N(ξ) and v1,…,vr are the corresponding eigenvectors. When ξ is generically chosen, the values ξTw1,…,ξTwr are distinct eigenvalues of N(ξ). So N(ξ) has a unique eigenvalue decomposition, up to scaling and permutation, and hence (3.13) must hold, up to a permutation of (v1,…,vr). Since the coefficient matrices have full column rank, the linear least squares problems (3.15) and (3.16) have unique optimal solutions. Up to a permutation of p1,…,pr, it holds that \(p_{i} = \sqrt [3]{\lambda _{i}} \left [\begin {array}{c} 1 \\ u_{i} \end {array}\right ]\). Then, the conclusion follows readily. □
The following is the algorithm for computing an incomplete tensor decomposition for \(\mathcal {F}\) when only its subtensor \(\mathcal {F}_{{{\varOmega }}}\) is given.
Algorithm 3.4
(Incomplete symmetric tensor decompositions)
Input: A third order symmetric subtensor \({\mathcal {F}}_{{{\varOmega }}}\) and the rank \(r =\text {rank}_{S}(\mathcal {F})\le \frac {d}{2}-1\).
Step 1: Determine the matrix G by solving (3.8) for each \(\alpha =e_{i}+e_{j} \in {\mathscr{B}}_{1}\).
Step 2: Let N(ξ) be the matrix as in (3.11), for a randomly selected vector ξ. Compute the unit length eigenvectors \(\tilde {v}_{1},\ldots ,\tilde {v}_{r}\) of N(ξ) and choose \(\tilde {w}_{i}\) as in (3.12).
Step 3: Solve the linear least squares problems (3.15) and (3.16) to get the coefficients λi, γi as in (3.17).
Step 4: For each i = 1,…,r, let \(p_{i} := \sqrt [3]{ \lambda _{i}}(1, \gamma _{i} \tilde {v}_{i}, \tilde {w}_{i})\).
Output: The tensor decomposition \(\mathcal {F} = (p_{1})^{\otimes 3}+\cdots +(p_{r})^{\otimes 3}\).
The following is an example of applying Algorithm 3.4.
Example 3.5
Consider the same tensor \(\mathcal {F}\) as in Example 3.2. The monomial sets \({\mathscr{B}}_{0}\), \({\mathscr{B}}_{1}\) are the same. The matrices \(A_{ij}[\mathcal {F}]\) and vectors \(b_{ij}[\mathcal {F}]\) are
$$ \begin{array}{@{}rcl@{}} A_{13}[\mathcal{F}] &=& A_{23}[\mathcal{F}]= \left[\begin{array}{cc} -0.8 & 2.8\\ -1.4 & 4 \end{array}\right], \qquad b_{13}[\mathcal{F}]=\left[\begin{array}{c}1.6\\2.2 \end{array}\right],\quad~~~ b_{23}[\mathcal{F}]=\left[\begin{array}{c}-2\\-3.2 \end{array}\right],\\ A_{14}[\mathcal{F}] &=& A_{24}[\mathcal{F}]= \left[\begin{array}{cc} 1 & -0.8\\ -1.4 & 4 \end{array}\right], \quad~ b_{14}[\mathcal{F}]=\left[\begin{array}{c}1.6\\-3.2 \end{array}\right],\quad b_{24}[\mathcal{F}]=\left[\begin{array}{c}-2\\7.6 \end{array}\right],\\ A_{15}[\mathcal{F}] &=& A_{25}[\mathcal{F}]= \left[\begin{array}{cc} 1 & -0.8\\ -0.8 & 2.8 \end{array}\right], \quad~ b_{15}[\mathcal{F}] = \left[\begin{array}{c}2.2\\-3.2 \end{array}\right],\quad~ b_{25}[\mathcal{F}] = \left[\begin{array}{c}-3.2\\7.6 \end{array}\right]. \end{array} $$
Solve (3.8) to obtain G, which is the same as in (3.5). The matrices N3(G), N4(G), N5(G) are
$$ N_{3}(G)=\left[\begin{array}{cc} 1/3 & 2/3\\4/3 & -1/3 \end{array}\right],\quad N_{4}(G)=\left[\begin{array}{cc} 4/3 & -1/3\\-2/3 & 5/3 \end{array}\right],\quad N_{5}(G)=\left[\begin{array}{cc} 5/3 & -2/3\\-4/3 & 7/3 \end{array}\right]. $$
Choose a generic vector ξ, say ξ = (3,4,5); then
$$ N(\xi) = \left[\begin{array}{cc}1/\sqrt{2} & -1/\sqrt{5} \\ 1/\sqrt{2} & 2/\sqrt{5} \end{array}\right] \left[\begin{array}{cc} 12 & 0 \\ 0 & 20 \end{array}\right] \left[\begin{array}{cc}1/\sqrt{2} & -1/\sqrt{5} \\ 1/\sqrt{2} & 2/\sqrt{5} \end{array}\right]^{-1}. $$
The unit length eigenvectors are
$$ \tilde{v}_{1} = (1/\sqrt{2},1/\sqrt{2}), \quad \tilde{v}_{2}=(-1/\sqrt{5},2/\sqrt{5}) . $$
As in (3.12), we get the vectors
$$ \tilde{w}_{1} = (1,1,1),\quad \tilde{w}_{2} = (-1,2,3). $$
Solving (3.15) and (3.16), we get the scalars
$$ \gamma_{1}=\sqrt{2}, \quad \gamma_{2}=\sqrt{5}, \quad \lambda_{1}=0.4, \quad \lambda_{2} = 0.6. $$
This produces the decomposition \(\mathcal {F}=\lambda _{1}u_{1}^{\otimes 3}+\lambda _{2}u_{2}^{\otimes 3}\) for the vectors
$$ u_{1}=(1,\gamma_{1}\tilde{v}_{1},\tilde{w}_{1})=(1,1,1,1,1,1), \quad u_{2}=(1,\gamma_{2}\tilde{v}_{2},\tilde{w}_{2})=(1,-1,2,-1,2,3). $$
Remark 3.6
Algorithm 3.4 requires the value of r, which is generally hard to determine. In computational practice, one can estimate r as follows. Let \(\text {Flat}(\mathcal {F})\in \mathbb {C}^{(n+1) \times (n+1)^{2}}\) be the flattening matrix, labelled by (i,(j,k)), such that
$$ \text{Flat}(\mathcal{F})_{i,(j,k)} = \mathcal{F}_{ijk} $$
for all i,j,k = 0,1,…,n. The rank of \(\text {Flat}(\mathcal {F})\) equals the rank of \(\mathcal {F}\) when the vectors p1,…,pr are linearly independent. The rank of \(\text {Flat}(\mathcal {F})\) is not directly computable since only the subtensor \(\mathcal {F}_{{{\varOmega }}}\) is known. However, we can calculate the ranks of its submatrices whose entries are all known. If the tensor \(\mathcal {F}\) as in (3.3) is such that both the sets {v1,…,vr} and {w1,…,wr} are linearly independent, one can see that \({\sum }_{i=1}^{r} \lambda _{i} v_{i}{w_{i}^{T}}\) is a known submatrix of \(\text {Flat}(\mathcal {F})\) whose rank is r. This is generally the case if \(r\le \frac {d}{2}-1\), since each vi has length r and each wi has length d − 1 − r ≥ r. Therefore, the known submatrices of \(\text {Flat}(\mathcal {F})\) are generally sufficient to estimate \(\text {rank}_{S}(\mathcal {F})\). For instance, consider the case \(\mathcal {F}\in \mathrm {S}^{3}(\mathbb {C}^{7})\). The flattening matrix \(\text {Flat}(\mathcal {F})\) is
$$ \left[\begin{array}{ccccccc} \ast & \ast & \ast & \ast & \ast & \ast & \ast\\ \ast & \ast & \mathcal{F}_{120} & \mathcal{F}_{130} & \mathcal{F}_{140} & \mathcal{F}_{150} & \mathcal{F}_{160}\\ \ast & \mathcal{F}_{210} & \ast & \mathcal{F}_{230} & \mathcal{F}_{240} & \mathcal{F}_{250} & \mathcal{F}_{260}\\ \ast & \mathcal{F}_{310} & \mathcal{F}_{320} & \ast & \mathcal{F}_{340} & \mathcal{F}_{350} & \mathcal{F}_{360}\\ \ast & \mathcal{F}_{410} & \mathcal{F}_{420} & \mathcal{F}_{430} & \ast & \mathcal{F}_{450} & \mathcal{F}_{460}\\ \ast & \mathcal{F}_{510} & \mathcal{F}_{520} & \mathcal{F}_{530} & \mathcal{F}_{540} & \ast & \mathcal{F}_{560}\\ \ast & \mathcal{F}_{610} & \mathcal{F}_{620} & \mathcal{F}_{630} & \mathcal{F}_{640} & \mathcal{F}_{650} & \ast \end{array}\right], $$
where each ∗ denotes an entry that is not given. The largest submatrices with all entries known are
$$ \left[\begin{array}{ccc} \mathcal{F}_{410} & \mathcal{F}_{420} & \mathcal{F}_{430}\\ \mathcal{F}_{510} & \mathcal{F}_{520} & \mathcal{F}_{530}\\ \mathcal{F}_{610} & \mathcal{F}_{620} & \mathcal{F}_{630} \end{array}\right], \quad \left[\begin{array}{ccc} \mathcal{F}_{140} & \mathcal{F}_{150} & \mathcal{F}_{160}\\ \mathcal{F}_{240} & \mathcal{F}_{250} & \mathcal{F}_{260}\\ \mathcal{F}_{340} & \mathcal{F}_{350} & \mathcal{F}_{360} \end{array}\right]. $$
The ranks of the above matrices generally equal \(\text {rank}_{S}(\mathcal {F})\) if \(r\le \frac {d}{2}-1 = 2.5\), i.e., if r ≤ 2.
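The rank estimate can be sketched numerically as follows (our illustration with synthetic data: a random rank-2 tensor of the form (3.3) in \(\mathrm{S}^{3}(\mathbb{C}^{7})\), from which only the first known submatrix displayed above is used):

```python
import numpy as np

# Synthetic rank-2 tensor in S^3(C^7) of the form (3.3), for illustration only.
rng = np.random.default_rng(1)
lam = np.array([0.7, 1.3])
u = rng.standard_normal((2, 6))
p = [np.concatenate(([1.0], u[s])) for s in range(2)]
F = sum(lam[s] * np.einsum('i,j,k->ijk', p[s], p[s], p[s]) for s in range(2))

# First known submatrix of Flat(F) displayed above: entries F_{ij0} with
# i in {4,5,6}, j in {1,2,3}; generically its rank equals rank_S(F) = 2.
M = F[4:7, 1:4, 0]
rank = np.linalg.matrix_rank(M)
print(rank)
```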