An algorithm for the non-identifiability of rank-3 tensors

We present an algorithm that recognizes whether a given tensor is a non-identifiable rank-3 tensor.

Fix C-vector spaces V_1, ..., V_k of dimensions n_1, ..., n_k respectively. A tensor of the form v_1 ⊗ ⋯ ⊗ v_k for some v_i ∈ V_i with i = 1, ..., k is called an elementary tensor. Elementary tensors are the building blocks of the tensor rank decomposition, and the rank r(T) of a tensor T ∈ V_1 ⊗ ⋯ ⊗ V_k is the minimum integer r such that we can write T as a combination of r elementary tensors: T = Σ_{i=1}^r v_{1,i} ⊗ ⋯ ⊗ v_{k,i}, where all v_{j,i} ∈ V_j for j = 1, ..., k.
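As a concrete illustration (our own sketch, not part of the algorithm below), elementary tensors and their combinations can be modeled in a few lines of Python, storing a tensor as a dictionary from multi-indices to entries:

```python
from itertools import product

def elementary(*vectors):
    """Elementary tensor v1 ⊗ ... ⊗ vk, stored as {multi-index: entry}."""
    t = {}
    for idx in product(*(range(len(v)) for v in vectors)):
        entry = 1
        for v, i in zip(vectors, idx):
            entry *= v[i]
        t[idx] = entry
    return t

def add_tensors(s, t):
    """Entrywise sum of two tensors of the same format."""
    return {idx: s.get(idx, 0) + t.get(idx, 0) for idx in set(s) | set(t)}

# A combination of two elementary tensors, hence a tensor of rank at most 2:
# T = e1 ⊗ e1 ⊗ e1 + e2 ⊗ e2 ⊗ e2 in (C^2)^⊗3
e1, e2 = [1, 0], [0, 1]
T = add_tensors(elementary(e1, e1, e1), elementary(e2, e2, e2))
```

Here T has exactly two nonzero coordinates, t_{1,1,1} and t_{2,2,2} (0-based in the code).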
A rank-r tensor T is identifiable if it admits a unique rank decomposition up to reordering the elementary tensors and up to scalar multiplication. Remark that since the notion of rank does not depend on scalar multiplication, it is well defined for projective classes of tensors too.
The first modern contribution on identifiability of tensors was given by J. B. Kruskal [Kru77] and, starting from Kruskal's result, over the years there have been many contributions on the identifiability problem (cf. e.g. [CC06], [CO12], [BC13], [BCO14], [COV14], [COV17], [BBC18], [MMS18], [GM19], [HOOS19], [CM21], [CM22], [LMR22]). In particular, when working in applied fields, one may also be interested in the identifiability of specific tensors. Indeed, when translating an applied problem into the language of tensors one may be forced to deal with a very specific tensor that has a precise structure for reasons related to the nature of the applied problem itself. For specific tensors the literature is more scattered, and most of the results can be considered extensions or generalizations of Kruskal's result (cf. [Bor13], [DDL13], [DDL14], [SDL15], [BV18] and [LP21]).
The first complete classification for the identifiability problem appeared in [BBS20] where, together with E. Ballico and A. Bernardi, we completely characterized all identifiable tensors of rank either 2 or 3. The classification is based on the classical Concision Lemma (cf. [Lan12, Prop. 3.1.3.1] and also Subsection 2.1 below) and, in particular, for r = 2 it was proved that the only non-identifiable rank-2 tensors are 2 × 2 matrices (cf. [BBS20, Proposition 2.3]). A more interesting situation occurs in the rank-3 case, where 6 different families of non-identifiable concise rank-3 tensors were found (cf. [BBS20, Theorem 7.1]).
In this manuscript we present an algorithm that recognizes whether a given tensor falls into one of the 6 families mentioned above.
The paper is organized as follows. Section 2 collects the basic notions needed to develop the algorithm. We start by recalling [BBS20, Theorem 7.1] and explaining each case of the classification in coordinates. In Subsection 2.1 we recall the coordinate description of the concision process for a tensor, while Subsection 2.2 reviews basic facts on matrix pencils. Section 3 presents the algorithm itself. In particular, Subsection 3.1 focuses on the 3-factor case, while Subsection 3.2 considers the general case of k ≥ 4 factors.
We end the manuscript with an appendix, written together with E. Ballico and A. Bernardi, in which we fix an imprecision in the statement of [BBS20, Proposition 3.10] and consequently in an item of [BBS20, Theorem 7.1]. In the following, when needed, we will refer to the corrected statements of [BBS20, Proposition 3.10 and Theorem 7.1] given in the forthcoming Proposition 4.5 and Theorem 4.1 respectively.
Since the algorithm we are going to present is based on the classification [BBS20, Theorem 7.1], we briefly recall it here in the revised version of our Theorem 4.1.

The classification ([BBS20, Theorem 7.1 revised], Theorem 4.1 in the present paper). A concise rank-3 tensor is not identifiable if and only if it falls into one of the following families.

a) [Matrix case] The first trivial examples of non-identifiable rank-3 tensors are 3 × 3 matrices, which is a very classical case.

b) [Tangential case] The tangential variety of a variety is the tangent developable of the variety itself.
A point q essentially lying on the tangential variety of the Segre X_{1,1,1} is actually a point of the tangent space T_{[p]}X_{1,1,1} for some p = u ⊗ v ⊗ w ∈ (C^2)^⊗3. Therefore there exist a, b, c ∈ C^2 such that q can be written as q = a ⊗ v ⊗ w + u ⊗ b ⊗ w + u ⊗ v ⊗ c, and hence q is actually non-identifiable.

c) [Defective case] We recall that the third secant variety of a Segre variety X_{n_1,...,n_k} is defective if and only if (n_1, ..., n_k) = (1, 1, 1, 1) or (1, 1, a) with a ≥ 3 (cf. [AOP09, Theorem 4.5]). We will see that the latter case does not play a role in the discussion, hence we can focus on the case k = 4. By defectivity, the dimension of σ_3(X_{1,1,1,1}) is strictly smaller than the expected dimension; this proves that the generic element of σ_3(X_{1,1,1,1}) has an infinite number of rank-3 decompositions, and therefore all the rank-3 tensors of this variety have an infinite number of decompositions.

d), e) [Conic cases] In these cases one works with the Segre variety X_{2,1,1} given by the image of a projective plane and two projective lines. Let Y_{2,1,1} = P^2 × P^1 × P^1. Consider the Segre variety X_{1,1} ⊂ P^3 given by the last two factors of Y_{2,1,1} and take a hyperplane section which intersects X_{1,1} in a conic C.
Let L_C be the Segre given by the product of the first factor P^2 of Y_{2,1,1} and the conic C, so that L_C ⊂ X_{2,1,1}. The corresponding family of non-identifiable rank-3 tensors consists of the points lying in the span of L_C. In this case the non-identifiability comes from the fact that the points on C are not identifiable, and the distinction between the two cases reflects the fact that the conic C can be either irreducible or reducible. In coordinates the distinction between the two cases can be expressed as follows:

d) The non-identifiable tensor T ∈ C^3 ⊗ C^2 ⊗ C^2 and there exist a basis {u_1, u_2, u_3} ⊂ C^3 and a basis {v_1, v_2} ⊂ C^2 such that T can be written as T = u_1 ⊗ v_1 ⊗ v_1 + u_2 ⊗ (v_1 ⊗ v_2 + v_2 ⊗ v_1) + u_3 ⊗ v_2 ⊗ v_2;

e) The non-identifiable tensor T ∈ C^3 ⊗ C^2 ⊗ C^2 and there exist a basis {u_1, u_2, u_3} ⊂ C^3 and a basis {v_1, v_2} ⊂ C^2 such that T can be written analogously for some q ∈ ⟨v_1, v_2⟩, where p, w ∈ C^2 must be linearly independent;

f) [General case] The last family of non-identifiable rank-3 tensors involves the Segre variety X_{n_1,n_2,1^{k-2}}, the image of the multiprojective space Y_{n_1,n_2,1^{k-2}} = P^{n_1} × P^{n_2} × (P^1)^{k-2}, where either k ≥ 4 and n_1, n_2 ∈ {1, 2}, or k = 3 and (n_1, n_2, n_3) = (2, 1, 1). The non-identifiable rank-3 tensors of this case arise as follows. Let q′ be a point in the span of the Segre image of Y′ with the constraint that q′ is not an elementary tensor. Then q′ is a non-identifiable tensor of rank 2, since it can be seen as a 2 × 2 matrix of rank 2. Let p ∈ X_{n_1,n_2,1^{k-2}} be a rank-1 tensor taken outside the Segre image of Y′. Now any point q ∈ ⟨q′, p⟩ \ {q′, p} is a rank-3 tensor (cf. Proposition 4.5) and it is not identifiable, since q′ has an infinite number of decompositions and each of these yields a decomposition of q by considering p together with a decomposition of q′. For a coordinate description of this case, for all i ≥ 3 there exists a basis {u_i, ũ_i} of the i-th factor such that T can be written accordingly with respect to suitable bases of the first two factors. For a more detailed overview of the next couple of sections we refer to [San22].
Fix a basis B_ℓ of C^{n_ℓ} for each ℓ = 1, ..., k and let (t_{i_1,...,i_k}) be the coordinates of T with respect to those bases.
A useful operation that allows one to store the entries of a tensor in a matrix is the flattening (cf. [Lan12, Section 3.4]), also called matrix unfolding of a tensor in [DLDMV00, Definition 1], which is the oldest reference we found for a formal definition of this operation.
Definition 2.2. The ℓ-th flattening of a tensor T ∈ C^{n_1} ⊗ ⋯ ⊗ C^{n_k} is the linear map ϕ_ℓ induced by T after separating the ℓ-th factor from the others. We denote by T_ℓ the n_ℓ × (∏_{i≠ℓ} n_i) associated matrix with respect to the basis B_ℓ and the basis induced by the B_i with i ≠ ℓ. For all ℓ = 1, ..., k let T_ℓ be the ℓ-th flattening of T as in Definition 2.2 and denote r_ℓ := r(T_ℓ). The multilinear rank of T is the k-tuple mr(T) := (r_1, ..., r_k) containing the ranks of all the flattenings of T.
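A minimal Python sketch of Definition 2.2 (an illustration under the same conventions, with exact arithmetic over Q via the standard library):

```python
from fractions import Fraction
from itertools import product

def matrix_rank(rows):
    """Rank over Q by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    if not m:
        return 0
    rank = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def flattening(T, shape, ell):
    """ℓ-th flattening T_ℓ: an n_ℓ × prod(other dims) matrix (ell is 0-based)."""
    others = [i for i in range(len(shape)) if i != ell]
    cols = list(product(*(range(shape[i]) for i in others)))
    rows = []
    for a in range(shape[ell]):
        row = []
        for c in cols:
            idx = [0] * len(shape)
            idx[ell] = a
            for i, v in zip(others, c):
                idx[i] = v
            row.append(T.get(tuple(idx), 0))
        rows.append(row)
    return rows

def multilinear_rank(T, shape):
    return tuple(matrix_rank(flattening(T, shape, l)) for l in range(len(shape)))

# multilinear rank of T = e1⊗e1⊗e1 + e2⊗e2⊗e2 (a rank-2 tensor, stored as a dict)
T = {(0, 0, 0): 1, (1, 1, 1): 1}
mlr = multilinear_rank(T, (2, 2, 2))
```

For this T every flattening is a 2 × 4 matrix of rank 2, so mr(T) = (2, 2, 2), consistent with the inequality r_ℓ ≤ r(T) used later in the algorithm.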
We are now ready to recall the concision process for a tensor. The following lemma is also the base step of the algorithm we are going to construct in order to test the possible identifiability of a given tensor T.
and we will call it the concise tensor space of T. The lemma states that for any tensor T ∈ C^{n_1} ⊗ ⋯ ⊗ C^{n_k} there exists a unique minimal tensor space included in C^{n_1} ⊗ ⋯ ⊗ C^{n_k} that contains both the tensor and all its possible rank decompositions. Let us review in more detail a procedure that computes the concise tensor space. After having fixed a basis of each factor, let (t_{i_1,...,i_k}) be the coordinate representation of T ∈ C^{n_1} ⊗ ⋯ ⊗ C^{n_k}, where all n_i ≥ 1 and k ≥ 2. For all ℓ = 1, ..., k consider the ℓ-th flattening T_ℓ of T as in Definition 2.2. For the sake of simplicity take ℓ = 1. Once we have computed n′_1 := r(T_1), we can extract n′_1 linearly independent columns from T_1 and rewrite the other columns as linear combinations of the independent ones. The resulting tensor T′ will therefore live in a smaller space. By continuing this process for each flattening we arrive at the concise tensor space.

2.2. Matrix pencils. In this subsection we review some basic facts on matrix pencils that will be useful for the construction of the algorithm. We will briefly describe how to achieve the Kronecker normal form of any matrix pencil, and we refer to [Gan59, Vol. 2, Ch. XII] for a detailed exposition.
For the rest of this subsection, unless otherwise specified, we will work over an arbitrary field K of characteristic 0. Fix integers m, n > 0. A polynomial matrix A(λ) is a matrix whose entries are polynomials in λ, namely A(λ) = (a_{i,j}(λ))_{i=1,...,m, j=1,...,n}, where a_{i,j}(λ) := Σ_l a^{(l)}_{i,j} λ^l. If we set A_l := (a^{(l)}_{i,j}), then we can write A(λ) as A(λ) = Σ_l λ^l A_l. The rank r(A(λ)) of A(λ) is the positive integer r such that all minors of A(λ) of size r + 1 are identically zero as polynomials in λ and there exists at least one minor of size r which is not identically zero. A matrix pencil is a polynomial matrix of the form A(λ) = A_0 + λA_1. Given two matrix pencils A(λ) = A_0 + λA_1 and B(λ) = B_0 + λB_1, we say that A(λ) and B(λ) are strictly equivalent if there exist two invertible matrices P, Q such that P(A_0 + λA_1)Q = B_0 + λB_1. We shall see that the Kronecker normal form of a matrix pencil is determined by a complete system of invariants with respect to the strict equivalence relation just defined.
Any matrix pencil A + λB of size m × n is either regular or singular.

Definition 2.5. Let A, B ∈ M_{m,n}(K). A pencil of matrices A + λB is called regular if (1) both A and B are square matrices of the same order m; (2) the determinant det(A + λB) does not vanish identically in λ. Otherwise the matrix pencil is called singular.
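Condition (2) can be tested without symbolic computation: det(A + λB) is a polynomial of degree at most m in λ, so it vanishes identically if and only if it vanishes at m + 1 distinct sample points. A small Python sketch (our own illustration, exact arithmetic over Q):

```python
from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion (fine for the small matrices used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_regular(A, B):
    """Definition 2.5: square pencil with det(A + λB) not identically zero."""
    m = len(A)
    n = len(A[0])
    if m != n or len(B) != m or len(B[0]) != n:
        return False  # condition (1) fails
    # det(A + λB) has degree ≤ m, so sampling m + 1 values of λ suffices
    for lam in range(m + 1):
        M = [[A[i][j] + Fraction(lam) * B[i][j] for j in range(n)] for i in range(m)]
        if det(M) != 0:
            return True
    return False
```

For example, A = diag(1, 0), B = diag(0, 1) gives det(A + λB) = λ, a regular pencil, while pairing a singular A with B = 0 gives a singular pencil.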
We now recall how to find the normal form of a pencil A + λB depending on whether it is regular or not. 2.2.1. Normal form of regular pencils. In the case of regular pencils, normal forms can be found by looking at the elementary divisors of the given matrix pencil. In order to introduce them, it is convenient to consider the pencil A + λB with homogeneous parameters λ, μ, i.e. μA + λB.
Note that all the i_j(λ, μ) ∈ K[λ, μ] can be split into products of powers of irreducible homogeneous polynomials, which we call elementary divisors. Elementary divisors of the form μ^q for some q > 0 are called infinite elementary divisors.
One can prove that two regular pencils A + λB and A_1 + λB_1 are strictly equivalent if and only if they have the same elementary divisors and infinite elementary divisors (cf. [Gan59, Vol. 2, Ch. XII, Theorem 2]). Therefore elementary divisors and infinite elementary divisors are invariant with respect to the strict equivalence relation. Moreover, they form a complete system of invariants for the strict equivalence relation, since they are irreducible elements with respect to the fixed field K. This is the reason why the polynomials i_j(λ, μ) defined above are actually called invariant polynomials for all j = 1, ..., r.
• The last p diagonal blocks L w1 , . . ., L wp are the companion matrices associated to the remaining elementary divisors of A + λB.

2.2.2. Normal form of singular pencils.
In the previous case, a complete system of invariants was given by the elementary divisors, both finite and infinite. We shall see that, in the case of singular pencils, this is not sufficient to determine a complete system of invariants with respect to the strict equivalence relation. Fix m ≤ n and let A + λB be a singular pencil of rank r, where A, B ∈ M_{m,n}(K). Since the pencil is singular, the columns of A + λB are linearly dependent, therefore the system (2) (A + λB)x = 0 has a non-zero solution with respect to x. Note that any solution x of the above system is a vector whose entries are polynomials in λ, i.e. x = x(λ). It has been proven in [Gan59, Vol. 2, Ch. XII, Theorem 4] that if equation (2) has a solution of minimal degree ε ≠ 0 with respect to λ, then the singular pencil A + λB is strictly equivalent to a block pencil [L_ε; Â + λB̂], where L_ε is an ε × (ε + 1) block and Â + λB̂ is a pencil of matrices for which the equation analogous to (2) has no solution of degree less than ε.
By applying the previous result iteratively, a singular pencil A + λB is strictly equivalent to the block diagonal matrix [L_{ε_1}; ...; L_{ε_p}; A_p + λB_p], where 0 < ε_1 ≤ ⋯ ≤ ε_p and the last block is such that (A_p + λB_p)x = 0 has no non-zero solution, i.e. the columns of A_p + λB_p are linearly independent. Then one looks at the rows of A_p + λB_p. If these are linearly dependent, one can apply the same procedure just described to the associated system of the transposed pencil. Now let us treat the case in which there are relations of degree zero (with respect to λ) between the rows or the columns of the given pencil A + λB. Denote by g and h the maximal numbers of independent constant solutions of the equations (A + λB)x = 0 and (A^T + λB^T)x = 0 respectively. Let e_1, ..., e_g ∈ K^n be linearly independent solutions of the system (A + λB)x = 0; completing them to a basis of K^n and rewriting the pencil with respect to this basis, we get Ã + λB̃ = [0_{m×g} | Ã_1 + λB̃_1]. One can do the same by taking h linearly independent vectors that are solutions of the transposed pencil, so that the first h rows of Ã_1 + λB̃_1 are zero with respect to this new basis. Thus we obtain a block form where A_0 + λB_0 does not have any degree-zero relation, and hence either A_0 + λB_0 satisfies the assumptions of [Gan59, Vol. 2, Ch. XII, Theorem 4] or it is a regular pencil.
There is a quicker way, due to Kronecker, to determine the canonical form of a given pencil, avoiding the iterative reduction just explained. It involves the notion of minimal indices. These, together with the elementary divisors (possibly infinite), form a complete system of invariants for singular pencils.
Let A + λB be a singular pencil and let x_1(λ) be a non-zero solution of least degree ε_1 of (A + λB)x = 0. Take x_2(λ) to be a solution of least degree ε_2 among those linearly independent from x_1(λ). Continuing this process, we get a so-called fundamental series of solutions of the system. We remark that a fundamental series of solutions is not uniquely determined, but one can show that the degrees ε_1, ..., ε_p are the same for any fundamental series associated to a given system (A + λB)x = 0. The minimal indices for the columns of A + λB are the integers ε_1, ..., ε_p. Similarly, the minimal indices for the rows are the degrees η_1, ..., η_q of a fundamental series of solutions of (A^T + λB^T)x = 0. Strictly equivalent pencils have the same minimal indices (cf. [Gan59, Vol. 2, Ch. XII, Sec. 5, Par. 2]). Now let A + λB be a singular pencil and consider its normal form (3).

Remark 2.1. The system of minimal indices for the columns (rows) of the above block diagonal matrix is obtained by taking the union of the corresponding systems of minimal indices of the individual blocks.
We want to determine the minimal indices of the above normal form (3). By the previous remark, it is sufficient to determine the minimal indices of each block. Clearly the regular block A_0 + λB_0 has no minimal indices; the zero block 0_{h×g} has g minimal indices for columns and h minimal indices for rows, all equal to zero. Each block L_{ε_i} ∈ M_{ε_i×(ε_i+1)}(K) has linearly independent rows, therefore it has just one minimal index for columns, equal to ε_i, for all i = 1, ..., p. Similarly, for all j = 1, ..., q the block L_{η_j} has just one minimal index for rows, equal to η_j.
The result proving that two arbitrary pencils A + λB and A_1 + λB_1 of rectangular matrices are strictly equivalent if and only if they have the same minimal indices and the same (possibly infinite) elementary divisors is classically attributed to Kronecker. We conclude this part by illustrating with an example how to construct the Kronecker normal form of a matrix pencil.
Example 2.1. Consider the pencil A + λB. The kernel of the system (A + λB)x = 0 is generated by two constant solutions and one non-constant solution. Since the minimal degree of a non-constant solution is ε = 2, we know that the normal form of the pencil contains the block L_2. Moreover, we see that there are g = 2 linearly independent constant solutions. Considering the transposed pencil, there is just one constant solution. Therefore, keeping the above notation, η = 0 and h = 1. Moreover, the invariant polynomials of the pencil are i_4(λ, μ) = 0, i_3(λ, μ) = μ and all the others are equal to 1. The Kronecker normal form of the pencil is then assembled from these blocks.

3-factor tensor spaces and matrix pencils.
From now on we work again over C. Any tensor T ∈ C^2 ⊗ C^m ⊗ C^n can be seen as a matrix pencil via a natural isomorphism. We can easily pass from a tensor T ∈ C^2 ⊗ C^m ⊗ C^n to its associated matrix pencil (and vice versa) by fixing a basis of each factor and looking at T in coordinates with respect to the fixed bases. For example, let us fix the canonical basis of each factor and let T = (t_{ijk}) ∈ C^2 ⊗ C^m ⊗ C^n. We can associate to T the pencil A + λB, where A = (t_{1ij})_{i=1,...,m, j=1,...,n} and B = (t_{2ij})_{i=1,...,m, j=1,...,n}. Fixing the integer m equal to either 2 or 3 in C^2 ⊗ C^m ⊗ C^n leads us to consider very special tensor formats, namely C^2 ⊗ C^2 ⊗ C^n and C^2 ⊗ C^3 ⊗ C^n. In these cases there is a finite number of orbits with respect to the action of the product of general linear groups (cf. [Kac80]). Such cases have been widely studied in [Par01], where the author gave a complete orbit classification in the affine setting.
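In coordinates, the passage from T to the pencil is just a slicing along the first factor; a short Python illustration (0-based indices, our own sketch):

```python
def tensor_to_pencil(t):
    """t[i][j][k] with i ∈ {0, 1}: return the slices A = (t[0][j][k]), B = (t[1][j][k])."""
    return t[0], t[1]

def pencil_at(A, B, lam):
    """Evaluate the pencil A + λB at a chosen value of λ."""
    return [[a + lam * b for a, b in zip(rowA, rowB)] for rowA, rowB in zip(A, B)]

# example: a 2×2×2 tensor given by its two slices
t = [[[1, 0], [0, 1]],   # slice i = 0, i.e. A
     [[0, 1], [1, 0]]]   # slice i = 1, i.e. B
A, B = tensor_to_pencil(t)
```

Evaluating the pencil, e.g. at λ = 2, gives A + 2B = [[1, 2], [2, 1]].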
Remark that for any tensor belonging to either of these formats we can consider the associated matrix pencil and, by computing its Kronecker normal form, it is possible to determine its rank. This last result comes from the following more general statement, historically attributed to Grigoriev, JáJá and Teichert. We refer to [BL13, Remark 5.4] for a historical note on the theorem.
Theorem 2.7. Let T ∈ C^2 ⊗ C^m ⊗ C^n and let A be the corresponding pencil with minimal indices ε_1, ..., ε_p, η_1, ..., η_q and regular part C = A_0 + λB_0 of size N. Let δ(C) be the number of non-squarefree invariant polynomials of C. Then T is a tensor of rank (4) Σ_{i=1}^p (ε_i + 1) + Σ_{j=1}^q (η_j + 1) + N + δ(C).

In [BL13] the authors reviewed the orbit classification made in [Par01] and gave a geometric interpretation of the projectivizations of all the orbit closures appearing in both cases. In the following section we will refer to the classification of [BL13] when necessary.

Algorithm for the non-identifiability of a rank-3 tensor
The purpose of this section is to present Algorithm 3, which determines whether a given rank-3 tensor is non-identifiable.
All possible cases of non-identifiable rank-3 tensors are collected in Theorem 4.1.
• The input of the algorithm we propose is a tensor T = (t_{i_1,...,i_k}) ∈ C^{n_1} ⊗ ⋯ ⊗ C^{n_k} presented in its coordinate description with respect to the canonical bases, where k ≥ 3, all n_j ≥ 1 and i_j = 1, ..., n_j for j = 1, ..., k.
• The output of the algorithm is a statement telling whether the given tensor is a rank-3 tensor that falls into one of the cases mentioned above or not.
The first step of Algorithm 3 is to compute the concise tensor space of T, as detailed in Subsection 2.1; hence from now on we will work with concise tensors. Based on the resulting concise tensor space T_{n′_1,...,n′_{k′}}, we split the algorithm into two different parts depending on whether T_{n′_1,...,n′_{k′}} has three factors or not. Subsection 3.1 is devoted to the 3-factor case, while we refer to Subsection 3.2 for the other case.
We first compute the multilinear rank of T. By using the left inequality in (1) on each flattening ϕ_ℓ, we are able to exclude some of the cases in which r(T) is larger than 3. In those cases the algorithm stops, since we are interested in rank-3 tensors. Moreover, if the multilinear rank of T contains more than k − 3 entries equal to 1, then T is either a rank-1 tensor or a matrix, and we can exclude these cases too. Lastly, we remark that since the concise Segre of a rank-3 tensor is ν(P^{m_1} × ⋯ × P^{m_k}) where all m_i ∈ {1, 2}, if one of the values in mr(T) = (dim C^{m_i+1})_{i=1,...,k} is different from either 2 or 3 then we can immediately stop the algorithm. Therefore, at the end of the concision process, we deal only with a concise tensor all of whose factors have dimension 2 or 3. Now, depending on whether k′ = 3 or k′ ≥ 4, we split the algorithm into two different parts.
3.1. Three factors case. This subsection is devoted to the case in which the concise tensor space of the tensor T given in input has three factors. By Remark 3.1, the concise space T_{n_1,...,n_k} of such a tensor T satisfies n_i ∈ {2, 3} for all i. Moreover, if k = 3 the only possibilities for T_{n_1,n_2,n_3}, up to a reordering of the factors, are T_{2,2,2}, T_{3,2,2}, T_{3,3,2} and T_{3,3,3}.

Remark 3.2. The presence of a C^2 factor in T_{2,2,2}, T_{3,2,2}, T_{3,3,2} allows one to see all their elements as matrix pencils (cf. Subsection 2.2); in these cases we are also able to compute the rank of such a tensor by looking at its associated matrix pencil (cf. Theorem 2.7).
All the considerations made in the following are summed up in Algorithm 1 at the end of the subsection, to which Algorithm 3 will refer in the 3-factor case.
Both cases d) and e) of Theorem 4.1 can be treated by looking at the matrix pencil associated to the corresponding tensor.

Remark 3.4. In order to be consistent with the matrix pencil notation used in Subsection 2.2, in which the first factor is used as a parameter space for the pencil, we swap the first and third factors of T_{3,2,2}, working now on T_{2,2,3} = C^2 ⊗ C^2 ⊗ C^3. [BL13, Table 1] offers a complete description of all orbits in C^2 ⊗ C^2 ⊗ C^3, providing also the orbit closure in each case together with the Kronecker normal form of each orbit representative and its rank. Since we are working with concise rank-3 tensors of T_{2,2,3}, we are interested in cases 7 and 8 of [BL13, Table 1], i.e.

Matrix pencil Tensor representative
where all a_i, b_j, c_k are linearly independent elements of the corresponding factors and λ, μ represent homogeneous coordinates with respect to the first factor of T_{2,2,3}. Let us now relate the above Kronecker normal forms to our examples of non-identifiable rank-3 tensors in T_{2,2,3}.
Lemma 3.1. The matrix pencil associated to any tensor T ∈ C^2 ⊗ C^2 ⊗ C^3 belonging to case e) is of the form stated below.

Proof. Let T ∈ C^2 ⊗ C^2 ⊗ C^3 be as in case e). The matrix pencil A associated to T, with homogeneous parameters λ, μ referred to the basis {p, w} ⊂ C^2, is a singular pencil (cf. Definition 2.5); in order to achieve the normal form of A, we have to look at the minimum degree ε of the elements in Ker(A), from which we can conclude.

The tensor T is a non-identifiable rank-3 tensor coming from case e) of Theorem 4.1 if and only if the pencil associated to T is of the form [λ μ 0; 0 0 λ] or [λ μ 0; 0 0 μ].

Proof. By Lemma 3.1, the matrix pencil associated to any tensor that belongs to case e) is either [λ μ 0; 0 0 λ] or [λ μ 0; 0 0 μ].
The converse also holds, since the left pencil above corresponds to a tensor (considering the first factor as a parameter space for the pencil) which is as in case e).
Lemma 3.3. The matrix pencil associated to a tensor T ∈ C^2 ⊗ C^2 ⊗ C^3 belonging to case d) is of the form [λ μ 0; 0 λ μ].

Proof. The matrix pencil A associated to T, with homogeneous parameters λ, μ referred to the chosen bases, is singular. The minimum degree ε of the elements in Ker(A) with respect to λ, μ is 2. Therefore the normal form of A is [λ μ 0; 0 λ μ].
The tensor T is a non-identifiable rank-3 tensor coming from case d) of Theorem 4.1 if and only if the pencil associated to T is of the form [λ μ 0; 0 λ μ].
Proof. By Lemma 3.3, the matrix pencil associated to any tensor that belongs to case d) is of the above form. The converse also holds, since the above pencil corresponds to the tensor e_1 ⊗ e_1 ⊗ e_1 + (e_1 ⊗ e_2 + e_2 ⊗ e_1) ⊗ e_2 + e_2 ⊗ e_2 ⊗ e_3, which is as in case d).
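As a sanity check (our own illustration), one can build the case-d) representative e_1 ⊗ e_1 ⊗ e_1 + (e_1 ⊗ e_2 + e_2 ⊗ e_1) ⊗ e_2 + e_2 ⊗ e_2 ⊗ e_3 and read off its pencil slices; up to relabeling the homogeneous parameters λ ↔ μ, the pencil μA + λB is exactly of the form above:

```python
def outer3(u, v, w):
    """Elementary tensor u ⊗ v ⊗ w as nested lists."""
    return [[[ui * vj * wk for wk in w] for vj in v] for ui in u]

def tensor_sum(ts):
    I, J, K = len(ts[0]), len(ts[0][0]), len(ts[0][0][0])
    return [[[sum(t[i][j][k] for t in ts) for k in range(K)]
             for j in range(J)] for i in range(I)]

e1, e2 = [1, 0], [0, 1]
f1, f2, f3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]

# case-d) representative: e1⊗e1⊗e1 + (e1⊗e2 + e2⊗e1)⊗e2 + e2⊗e2⊗e3
T = tensor_sum([outer3(e1, e1, f1), outer3(e1, e2, f2),
                outer3(e2, e1, f2), outer3(e2, e2, f3)])

# slices along the first (parameter) factor give the pencil μA + λB
A, B = T[0], T[1]
```

Here A has λ... wait, concretely A = [[1, 0, 0], [0, 1, 0]] and B = [[0, 1, 0], [0, 0, 1]], so μA + λB = [μ λ 0; 0 μ λ].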
Let T_{3,3,2} be the concise tensor space of the tensor T given in input. We recall that the only non-identifiable rank-3 tensors in this case are the ones of case f) of Theorem 4.1 (cf. also Proposition 4.5). More precisely, let Y′ = P^1 × P^1 × {w} ⊂ Y_{2,2,1} = P^2 × P^2 × P^1. Take q′ ∈ ⟨ν(Y′)⟩ \ ν(Y_{2,2,1}) and p ∈ Y_{2,2,1} \ Y′. Then [T] ∈ ⟨q′, ν(p)⟩ is a rank-3 tensor and it is not identifiable. If we take {u_i}_{i≤3} ⊂ C^3 as a basis of the first factor, {v_i}_{i≤3} ⊂ C^3 as a basis of the second factor and {w, w̃} ⊂ C^2 as a basis of the third factor, then T is of the form (5). Again we can look at this case by considering the associated matrix pencil of T. As before (cf. Remark 3.4), to be consistent with the matrix pencil notation we already introduced, we swap the first and third factors of T_{3,3,2}, working now on T_{2,3,3} = C^2 ⊗ C^3 ⊗ C^3.
[BL13, Table 3] collects all Kronecker normal forms contained in T_{2,3,3}. Since we are interested in rank-3 tensors having T_{2,3,3} as concise tensor space, the only possibilities in terms of matrix pencils are the ones displayed in (6).

Remark 3.5. The matrix pencil associated to (5) is the first one in (6), and it is easy to check that the tensor corresponding to the first matrix pencil in (6) is actually T. Therefore, if the concise tensor space of T is T_{2,3,3}, it is sufficient to compute the normal form of the concise tensor T′ related to T and check whether it corresponds to the first pencil in (6). Moreover, as in the previous case, we are able to detect the rank of any tensor having T_{2,3,3} as concise tensor space (cf. Remark 3.2).

The case T_{3,3,3}.
By Theorem 4.1, all rank-3 tensors whose concise tensor space is T 3,3,3 are identifiable.Therefore if the concise tensor space of T is T 3,3,3 we can immediately say that T does not belong to one of the 6 families of non-identifiable rank-3 tensors.
We collect all the considerations made in this subsection in the following pseudo-algorithm.
Output: A statement on whether T belongs to one of the six cases of non-identifiable rank-3 tensors or not.
Otherwise the output is: T is an identifiable rank-2 tensor.
Compute the Kronecker normal form of T .
• If the Kronecker normal form of T is [λ μ 0; 0 0 μ] then the output is: T belongs to case e) of Theorem 4.1, therefore it is not identifiable.
• Else, T is as in case d) and the output is: T belongs to case d) of Theorem 4.1 and it is not identifiable.
Compute the normal form of T .
• If the Kronecker normal form of T is [λ 0 0; 0 λ 0; 0 0 μ] then the output is: T belongs to case f) of Theorem 4.1, therefore it is not identifiable.
• Else the output is the rank of T computed via (4) of Theorem 2.7, and T is not on the list of non-identifiable rank-3 tensors.
(4) Otherwise (n_1, n_2, n_3) = (3, 3, 3) and the output is: T is not on the list of non-identifiable rank-3 tensors, hence T is either identifiable or its rank is greater than 3.
Here we provide an implementation of Algorithm 1 in the computer algebra software Macaulay2 [GS]. The input of the function is a concise 3-factor tensor T ∈ C^{n_1} ⊗ C^{n_2} ⊗ C^{n_3}. In practice T must be given as a list of matrices {A_1, ..., A_{n_1}}, where each A_i ∈ M_{n_2×n_3}(C), as displayed in the following image.
For the case (n_1, n_2, n_3) = (2, 2, 2) the algorithm evaluates Cayley's hyperdeterminant in the entries of the tensor, while for the remaining cases it computes the Kronecker normal form of the matrix pencil associated to the given T.
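For reference, Cayley's 2 × 2 × 2 hyperdeterminant can be evaluated directly from the entries; here is a Python transcription of the classical formula (0-based indices, our own illustration):

```python
def hyperdet_222(t):
    """Cayley's 2×2×2 hyperdeterminant in the entries t[i][j][k]."""
    a = t
    d = (a[0][0][0] ** 2 * a[1][1][1] ** 2 + a[0][0][1] ** 2 * a[1][1][0] ** 2
         + a[0][1][0] ** 2 * a[1][0][1] ** 2 + a[0][1][1] ** 2 * a[1][0][0] ** 2)
    d -= 2 * (a[0][0][0] * a[0][0][1] * a[1][1][0] * a[1][1][1]
              + a[0][0][0] * a[0][1][0] * a[1][0][1] * a[1][1][1]
              + a[0][0][0] * a[0][1][1] * a[1][0][0] * a[1][1][1]
              + a[0][0][1] * a[0][1][0] * a[1][0][1] * a[1][1][0]
              + a[0][0][1] * a[0][1][1] * a[1][1][0] * a[1][0][0]
              + a[0][1][0] * a[0][1][1] * a[1][0][1] * a[1][0][0])
    d += 4 * (a[0][0][0] * a[0][1][1] * a[1][0][1] * a[1][1][0]
              + a[0][0][1] * a[0][1][0] * a[1][0][0] * a[1][1][1])
    return d

# tangential (case b)) tensor e1⊗e1⊗e2 + e1⊗e2⊗e1 + e2⊗e1⊗e1: hyperdeterminant 0
W = [[[0, 1], [1, 0]], [[1, 0], [0, 0]]]
# generic rank-2 tensor e1⊗e1⊗e1 + e2⊗e2⊗e2: hyperdeterminant nonzero
D = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]
```

The hyperdeterminant vanishes precisely on the (closure of the) tangential orbit, which is how the 2 × 2 × 2 branch of the implementation detects case b).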
3.2. At least four factors. Let the concise tensor space of T be T_{n_1,...,n_k}, where k > 3 and all n_i ∈ {2, 3}. We will first treat the case in which k = 4 and n_1 = n_2 = n_3 = n_4 = 2, and then we will treat all the remaining cases together.

3.2.1. Non-identifiable tensors with at least 4 factors. Consider for the moment the 4-factor case, i.e. T_{2,2,2,2} = C^2 ⊗ C^2 ⊗ C^2 ⊗ C^2.
In this case, any non-identifiable rank-3 tensor comes from case f) of Theorem 4.1. More precisely, we saw that any [T] ∈ ⟨q′, ν(p)⟩ is a non-identifiable rank-3 tensor. Let {u_i, ũ_i} be a basis of the C^{n_i} arising from the i-th factor of Y_{m_1,m_2,1^{k-2}} for all i ≥ 3. Take distinct a_1, a_2 ∈ C^{m_1+1} and distinct b_1, b_2 ∈ C^{m_2+1}; if m_1 = 1 let a_3 ∈ ⟨a_1, a_2⟩, otherwise let a_1, a_2, a_3 form a basis of the first factor. Choose b_1, b_2, b_3 analogously in the second factor. With respect to these bases T can be written as in (7). Since the only type of tensors that we have to detect corresponds to (7), we may restrict ourselves to consider suitable reshapes of the tensor space. In other words, a reshape of a tensor space T_{n_1,...,n_k} is a different way of grouping together some of the factors of T_{n_1,...,n_k} (possibly also reordering the factors of T_{n_1,...,n_k}).
In the following we will be interested in reshapes grouping together two factors of a tensor space T_{n_1,...,n_k}; for i, j = 1, ..., k with i ≠ j we will denote by ϑ_{i,j} the corresponding map, which merges the i-th and j-th factors into a single factor of dimension n_i n_j.
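In coordinates, ϑ_{i,j} simply pairs two indices into one; a small Python sketch (our own illustration, tensors stored as dictionaries from multi-indices to entries):

```python
def reshape_pair(T, shape, i, j):
    """ϑ_{i,j} (0-based, i < j): merge factors i and j into one of dimension n_i * n_j."""
    rest = [a for a in range(len(shape)) if a not in (i, j)]
    new_shape = (shape[i] * shape[j],) + tuple(shape[a] for a in rest)
    S = {}
    for idx, val in T.items():
        merged = idx[i] * shape[j] + idx[j]  # row-major pairing of the two indices
        S[(merged,) + tuple(idx[a] for a in rest)] = val
    return S, new_shape

# a 2×2×2 tensor with a single nonzero entry t_{0,1,0} = 5
T = {(0, 1, 0): 5}
S, new_shape = reshape_pair(T, (2, 2, 2), 0, 1)
```

The merged factor has dimension 4 and the entry t_{0,1,0} moves to position (0·2 + 1, 0) = (1, 0), so the inverse map ϑ^{-1}_{i,j} is obtained by splitting the merged index again.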

Lemma 3.6. Let T ∈ T_{n_1,n_2,2^{k-2}}. Then T is as in case f) of Theorem 4.1 if and only if the following conditions hold:
(1) the reshaped tensor ϑ_{1,2}(T) can be written as x ⊗ u_3 ⊗ ⋯ ⊗ u_k + y ⊗ v_3 ⊗ ⋯ ⊗ v_k for some independent x, y ∈ C^{n_1 n_2} and some u_i, v_i ∈ C^2 with {u_i, v_i} linearly independent for all i = 3, ..., k;
(2) looking at x, y ∈ C^{n_1 n_2} as elements of C^{n_1} ⊗ C^{n_2}, we have {r(x), r(y)} = {1, 2}.
Proof. Let T ∈ T_{n_1,n_2,2^{k-2}} be as in case f) of Theorem 4.1, so T can be written as in (7), where u_i and v_i are linearly independent for all i = 3, ..., k, a_1, a_2, a_3 are linearly independent if n_1 = 3 and b_1, b_2, b_3 are linearly independent if n_2 = 3. Let ϑ_{1,2} be the reshape grouping together the first two factors of T_{n_1,...,n_k}. Let x := a_1 ⊗ b_1, y := a_2 ⊗ b_2 and z := a_3 ⊗ b_3, and remark that r(x + y) = 2 and r(z) = 1. Therefore ϑ_{1,2}(T) splits accordingly as T_1 + T_2. Note that the rank of (T_1 + T_2) ∈ T_{n_1 n_2, 2^{k-2}} is at most 2 and in fact r(T_1 + T_2) = 2, since u_i, v_i are linearly independent for all i = 3, ..., k. Moreover, we recall that the only non-identifiable rank-2 tensors are matrices (cf. [BBS20, Proposition 2.3]). Therefore, since the concise tensor space of T_1 + T_2 has at least 3 factors, T_1 + T_2 is an identifiable rank-2 tensor.
Conversely, by relabeling if necessary we may assume r(ϑ^{-1}_{1,2}(x)) = 2 and r(ϑ^{-1}_{1,2}(y)) = 1. We remark that the concise space of T is T_{n_1,n_2,2^{k-2}}, therefore if n_1 = 3 then a_1, a_2, v_1 are linearly independent (and if n_2 = 3 then b_1, b_2, v_2 are linearly independent). Thus T is as in case f).
Remark 3.6. In Lemma 3.6 we assumed that, for a tensor as in (7), the non-identifiable part of the tensor sits in the first two factors, because it is always possible to permute the factors of the tensor space in this way. This assumption cannot be made in the algorithm, and we have to be careful if either (n_1, n_2) = (3, 2) or (n_1, n_2) = (2, 2). Dealing with (n_1, n_2) = (3, 2), we have to check whether there exists i = 2, ..., k such that ϑ_{1,i}(T) satisfies the conditions of Lemma 3.6. Similarly, in the case (n_1, n_2) = (2, 2) we have to check all reshapes of T if necessary, i.e. we have to check whether there exist i, j ∈ {1, ..., k} with i ≠ j such that ϑ_{i,j}(T) satisfies the conditions of Lemma 3.6.
By Lemma 3.6, given an identifiable rank-2 tensor T ∈ T_{n_1 n_2, 2^{k-2}}, in order to verify whether T is as in case f) we do not need to find an explicit decomposition of T as in (8); it is enough to perform the following steps:
• determine x, y ∈ C^{n_1 n_2} and look at them as elements of C^{n_1} ⊗ C^{n_2};
• check that either r(x) = 2 and r(y) = 1, or r(x) = 1 and r(y) = 2.
Let us explain in detail how to do so.

Reshape procedure for an identifiable rank-2 tensor of T_{n_1 n_2, 2^{k-2}}.
Remark that the rank of the first flattening ϕ_1 : (C^2)^{⊗(k−2)} → (C^{n_1 n_2})^* of T is 2 and, to complete the concision process, we can take as a basis of the first new factor two independent elements x, y of Im(ϕ_1). Therefore T can be written as a tensor in C^2 ⊗ (C^2)^{⊗(k−2)}. If we reshape our tensor space by grouping together all factors from the 4-th one onwards, then T can be seen as an element of C^2 ⊗ C^2 ⊗ C^{2^{k−3}}. We want to look at this 3-factor tensor as a pencil of matrices with respect to the second factor of C^2 ⊗ C^2 ⊗ C^{2^{k−3}}, and we denote by C_1, C_2 ∈ C^2 ⊗ C^{2^{k−3}} the two matrices of the pencil.
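The flattening rank and the choice of the basis x, y can be carried out numerically with an SVD. A hedged sketch follows; the function name and the tolerance 1e-10 are our choices.

```python
import numpy as np

def concision_basis(T):
    """Sketch of the concision step on the first factor: flatten T along
    its first factor, compute the numerical rank, and return an orthonormal
    basis of the column space of the flattening. For an identifiable rank-2
    tensor the rank is 2 and the basis plays the role of x, y."""
    F = T.reshape(T.shape[0], -1)            # first flattening as a matrix
    U, s, _ = np.linalg.svd(F, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))        # numerical rank
    return r, U[:, :r]
```

The orthonormal basis is only one possible choice of x, y; any pair spanning Im(ϕ_1) works for the procedure in the text.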
We can write T as a combination of two elementary tensors with first factors x, y ∈ C^2 and third factors u, v ∈ C^{2^{k−3}}. Call X_3 the matrix whose columns are given by x and y and denote by X_4 the matrix whose rows are given by u and v, so that both C_1 and C_2 factor through X_3 and X_4. Remark that C_2 is right invertible and denote by C_2^{−1} its right inverse. Moreover r(X_3) = r(X_4) = 2, therefore X_3 is invertible and there exists a right inverse of X_4 that we denote by X_4^{−1}. Thus C_1 C_2^{−1} is conjugate via X_3 to a diagonal matrix, and the columns of X_3 are its eigenvectors. We have now an eigenvalue problem that we can easily solve to find x, y ∈ C^2.
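Numerically, the role of the right inverse of C_2 can be played by the Moore–Penrose pseudoinverse whenever C_2 has full row rank, and the eigenvector computation is then immediate. A small sketch, with a function name of our choosing:

```python
import numpy as np

def pencil_eigvecs(C1, C2):
    """Return the eigenvectors of C1 C2^{-1} as the columns of a 2 x 2
    matrix. C2 is assumed to have full row rank, so its Moore-Penrose
    pseudoinverse is a right inverse; this is a numerical stand-in for
    the right inverse C_2^{-1} of the text."""
    M = C1 @ np.linalg.pinv(C2)   # 2 x 2 matrix of the eigenvalue problem
    return np.linalg.eig(M)[1]    # eigenvectors as columns
```

For a generic pencil the two eigenvalues are distinct, so the eigenvectors are determined up to scale and ordering.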
Remark 3.7. When computing the concision process of T with respect to the first factor of T_{n_1 n_2,2^{k−2}}, we concretely find a basis of Im(ϕ_1). Therefore, after finding x, y ∈ C^2 with the above procedure, we can easily get back to x, y ∈ C^{n_1 n_2} ≅ C^{n_1} ⊗ C^{n_2} and compute the rank of both x and y seen as elements of C^{n_1} ⊗ C^{n_2}.
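The rank test of Remark 3.7 amounts to reshaping a vector of C^{n_1 n_2} into an n_1 × n_2 matrix and computing its matrix rank. A one-line sketch (the function name is ours):

```python
import numpy as np

def rank_as_matrix(w, n1, n2):
    """View w in C^{n1*n2} as an n1 x n2 matrix, i.e. as an element of
    C^{n1} (x) C^{n2}, and return its rank."""
    return np.linalg.matrix_rank(np.asarray(w).reshape(n1, n2))
```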
Let us sum up how to recognize a non-identifiable rank-3 tensor with at least 4 factors in the following pseudo-algorithm.
• Case k = 4. Test whether T ∈ σ_3(X_{1^4}) \ σ_2(X_{1^4}) (cf. [Qi13, Theorem 1.4] for the equations of the third secant variety and [LM04] for the equations of the second secant variety). If the answer to the test is positive, the output is: T is a non-identifiable rank-3 tensor; otherwise the output is: T is not on the list of non-identifiable rank-3 tensors.
• Case k ≥ 5. For all i = 1, . . ., k − 1 and for all j = i + 1, . . ., k follow this procedure:
• Test whether ϑ_{i,j}(T) satisfies the equations of σ_2(X_{3,1^{k−2}}) and does not satisfy the equations of σ_1(X_{3,1^{k−2}}), so that ϑ_{i,j}(T) is an identifiable rank-2 tensor. Make the concision process on the first factor of T_{3,1^{k−2}} and call T′ the resulting tensor. Consider T′ as a matrix pencil of C^2 ⊗ C^2 ⊗ C^{2^{k−3}} with respect to the second factor. Find the eigenvectors x, y ∈ C^2 of C_1 C_2^{−1} and then rewrite x, y as elements of C^4 ≅ C^2 ⊗ C^2 via ϑ^{−1}_{i,j}. If {r(x), r(y)} = {1, 2} then the output is: T is a non-identifiable rank-3 tensor corresponding to case f) of Theorem 4.1.
• Else, if one of the previous conditions is not satisfied, stop and restart with another j (and another i when necessary). If the algorithm stops at some point when i = k − 1, j = k, then break and the output is: T is not on the list of non-identifiable rank-3 tensors.
• Else, if one of the previous conditions is not satisfied, stop and restart with another i. If the algorithm stops at some point when i = k, then break and the output is: T is not on the list of non-identifiable rank-3 tensors.
• Test whether ϑ_{1,2}(T) satisfies the equations of σ_2(X_{8,1^{k−2}}) and does not satisfy the equations of σ_1(X_{8,1^{k−2}}), so that ϑ_{1,2}(T) is an identifiable rank-2 tensor. Reduce the first factor of T_{9,2^{k−2}} via the concision process, working now with T′ on (C^2)^{⊗(k−1)}. Consider T′ as a matrix pencil with respect to the second factor of C^2 ⊗ C^2 ⊗ (C^2)^{⊗(k−3)}.
• If one of these conditions is not satisfied then stop and the output is: T is not on the list of non-identifiable rank-3 tensors.
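For tensors with all factors of dimension 2, the k ≥ 5 branch above can be sketched end to end. We stress that the genuine membership tests on the equations of the secant varieties are replaced below by a crude numerical surrogate (the first flattening of ϑ_{i,j}(T) must have numerical rank exactly 2), so the sketch illustrates the control flow of the pseudo-algorithm rather than implementing it faithfully; all names are ours.

```python
import numpy as np
from itertools import combinations

def looks_like_case_f(T, tol=1e-8):
    """Hedged sketch of the k >= 5 loop for a tensor T with all factors of
    dimension 2: for every pair (i, j), group the two factors, check that
    the grouped tensor has a rank-2 first flattening (surrogate for the
    secant-variety tests), run the concision / pencil procedure, and test
    whether the recovered x, y have ranks {1, 2} as 2 x 2 matrices."""
    k = T.ndim
    for i, j in combinations(range(k), 2):
        rest = [a for a in range(k) if a not in (i, j)]
        F = np.transpose(T, (i, j, *rest)).reshape(4, -1)    # first flattening
        if np.linalg.matrix_rank(F, tol) != 2:               # surrogate test
            continue
        U = np.linalg.svd(F, full_matrices=False)[0][:, :2]  # concision basis
        Tp = (U.T @ F).reshape((2,) * (k - 1))               # concise tensor T'
        C1 = Tp[:, 0].reshape(2, -1)                         # pencil slices with
        C2 = Tp[:, 1].reshape(2, -1)                         # respect to factor 2
        vecs = np.linalg.eig(C1 @ np.linalg.pinv(C2))[1]     # eigenvectors
        x, y = (U @ vecs).T                                  # back in C^2 (x) C^2
        ranks = {np.linalg.matrix_rank(x.reshape(2, 2), tol),
                 np.linalg.matrix_rank(y.reshape(2, 2), tol)}
        if ranks == {1, 2}:
            return True   # matches case f) of Theorem 4.1
    return False
```

The surrogate can in principle accept tensors the exact equations would reject, so a positive answer from this sketch is only an indication, not a proof.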
Before proceeding, we need to recall the following.
we get that S ⊂ H, contradicting the autarky assumption because the minimal multiprojective space containing q is P^2 × P^2 × P^1. Therefore it is also not possible that E = {p} ∪ E′ with E′ ⊂ H. Thus E is of type {p} ∪ A for some A ∈ S(Y′, q′) and this concludes the proof of the claim.
Proof. The proof of [BBS20, Proposition 3.10] is split in two cases, depending on whether Y_{n_1,...,n_k} is a product of projective lines only, and both cases are worked out by induction. If (n_1, . . ., n_k) = (1, . . ., 1) the induction is contained in steps (B) and (C) of the proof of [BBS20, Proposition 3.10] and they are not altered by the new statement. If instead Y_{n_1,...,n_k} contains at least one projective plane, then we need to use Y_{n_1,n_2,n_3,n_4} = P^2 × P^1 × P^1 × P^1 instead of P^2 × P^1 × P^1 as base of the induction, for which step (D) will then act as the inductive step. The case P^2 × P^1 × P^1 × P^1 follows from the case P^1 × P^1 × P^1 × P^1 proved in step (C) as follows. Consider a general u ∈ P^2 and the linear projection P^2 \ {u} → P^1. Construct the associated morphism (P^2 \ {u}) × P^1 × P^1 × P^1 → P^1 × P^1 × P^1 × P^1 and consider the projection from Λ = ν({u} × P^1 × P^1 × P^1) as in step (D). This covers the proof of Proposition 4.5 for the case k ≥ 4. Since the case k = 3 is completely covered by Lemma 4.4, this concludes the proof of the statement.
Remark 4.1. The only statement in the rest of [BBS20] citing [BBS20, Proposition 3.10] is Proposition 5.1, but that result is not altered by using the revised Proposition 4.5.
With the above result we have completely covered Proposition 4.5. Now Theorem 4.1 is completely fixed, but for the sake of completeness let us show that the case (n_1, n_2, n_3) = (2, 1, 1) fits only inside items d) and e). In this case the corresponding tensor space P(C^3 ⊗ C^2 ⊗ C^2) has a finite number of orbits with respect to the action of Aut(P^2) × Aut(P^1) × Aut(P^1) (cf. [Par01], also [BL13, Table 1]) and there are only two possibilities for a concise rank-3 tensor, namely cases 7 and 8 of [BL13, Table 1]. We already proved in Corollary 3.2 that case 7 corresponds to [BBS20, Example 3.7], while in Corollary 3.4 we saw that case 8 corresponds to [BBS20, Example 3.6].
We now see how to distinguish these two cases in a more geometric way.
Every G ∈ |O_{Y_{2,1,1}}(0, 1, 1)| is of the form G = P^2 × C for some C ∈ |O_{P^1×P^1}(1, 1)| and vice versa. Since C is a hyperplane section of a smooth quadric in the Segre embedding of the last two factors P^1 × P^1 of Y_{2,1,1}, either C is a smooth conic or C = L ∪ R with L ∈ |O_{P^1×P^1}(1, 0)|, R ∈ |O_{P^1×P^1}(0, 1)|, and L ∩ R is a unique point o ∈ P^1 × P^1. Let us distinguish two cases depending on whether G is irreducible or not.
(1) Fix a solution A such that G is irreducible, i.e. assume that C is irreducible and hence smooth. Let u_i : P^1 × P^1 → P^1 for i = 1, 2 denote the projections from the last two factors of Y_{2,1,1} onto the second and third factor of Y_{2,1,1} respectively. Note that each u_{i|C} : C → P^1 has degree 1 and hence is an isomorphism. Claim 4.5.1 shows that #π_2(B) = #π_3(B) = 3 for all B ∈ S(Y_{2,1,1}, q). Taking as A the union of 3 general points of Y_{2,1,1} we see that this case occurs. Moreover, the open orbit of σ_3(X_{2,1,1}) arises here and by Claim 4.5.1 this is the only case in which we fall in this orbit. The case just described is [BBS20, Example 3.6] with the additional observation that S(Y_{2,1,1}, q) = S(G, q).
(2) Fix A such that G is reducible and write G = G_1 ∪ G_2 with G_1 = P^2 × L, G_2 = P^2 × R and G_1 ∩ G_2 = P^2 × {o}. This is precisely the case described in [BBS20, Example 3.7 and Proposition 3.5], with the additional information that S(Y_{2,1,1}, q) = S(G, q). Since Y_{2,1,1} is the minimal multiprojective space containing A, then A ⊄ G_1 and A ⊄ G_2. We have #(A ∩ (G_1 ∩ G_2)) ≤ 1 and 1 ≤ #(A ∩ G_i) ≤ 2 for i = 1, 2. Notice that each of π_2(G_1 ∩ A) and π_3(G_2 ∩ A) is a single point and hence at least one i ∈ {2, 3} has #π_i(A) = 2. Let us treat the two cases separately. Thus S(Y_{2,1,1}, q) has precisely 2 irreducible components, as observed in [BBS20, Example 3.7], and the dimension of the space of solutions is dim S(Y_{2,1,1}, q) = 4. From the above discussion we see that a single A ∈ S(Y_{2,1,1}, q) is sufficient to know whether q is in the open orbit of σ_3(X_{2,1,1}) of case 8 of [BL13, Table 1] or in the smaller orbit of case 7 of [BL13, Table 1].
For the sake of clarity we conclude by summing up the above discussion in the following statement.

• The case #π_2(A) = #π_3(A) = 2 occurs if and only if A ∩ G_1 ∩ G_2 ≠ ∅, i.e. if and only if the projection of A in the last two factors contains {o} = L ∩ R. To fix the ideas denote A = {a, b, c}, with a, b ∈ G_1 and c ∈ G_2, and set L = {o_L} × P^1, R = P^1 × {o_R}, where o = (o_L, o_R). In this case #π_2(A) = #π_3(A) = 2 with o_L ∈ π_2(A) and o_R ∈ π_3(A), and either a is of the form (π_1(a), o_L, o_R) or b is of the form (π_1(b), o_L, o_R).
• Taking as A a general union of two general points of G_1 and a point of G_2 (or vice versa), we see that also the case #π_2(A) = 2 and #π_3(A) = 3 (or #π_2(A) = 3 and #π_3(A) = 2) occurs.
) q is as in Proposition 4.5 where Y n1,...,n k f