Rank and border rank of Kronecker powers of tensors and Strassen's laser method

We prove that the border rank of the Kronecker square of the little Coppersmith–Winograd tensor $T_{cw,q}$ is the square of its border rank for $q > 2$ and that the border rank of its Kronecker cube is the cube of its border rank for $q > 4$. This answers questions raised implicitly by Coppersmith & Winograd (1990, §11) and explicitly by Bläser (2013, Problem 9.8), and rules out the possibility of proving new upper bounds on the exponent of matrix multiplication using the square or cube of a little Coppersmith–Winograd tensor in this range. In the positive direction, we enlarge the list of explicit tensors potentially useful for Strassen's laser method by introducing a skew-symmetric version of the Coppersmith–Winograd tensor, $T_{skewcw,q}$.
For $q = 2$, the Kronecker square of this tensor coincides with the $3 \times 3$ determinant polynomial, $\det_3 \in \mathbb{C}^9 \otimes \mathbb{C}^9 \otimes \mathbb{C}^9$, regarded as a tensor. We show that this tensor could potentially be used to prove that the exponent of matrix multiplication is two. We determine new upper bounds for the (Waring) rank and the (Waring) border rank of $\det_3$, exhibiting a strict submultiplicative behaviour for $T_{skewcw,2}$ which is promising for the laser method.
We establish general results regarding border ranks of Kronecker powers of tensors, and make a detailed study of Kronecker squares of tensors in $\mathbb{C}^3 \otimes \mathbb{C}^3 \otimes \mathbb{C}^3$.


Introduction
The exponent $\omega$ of matrix multiplication is defined as $\omega := \inf\{\tau \mid$ two $n \times n$ matrices may be multiplied using $O(n^\tau)$ arithmetic operations$\}$. This is a fundamental constant governing the complexity of the basic operations in linear algebra. It is conjectured that $\omega = 2$. There is a classical upper bound $\omega \le 3$ following from the standard row-by-column multiplication. Starting from 1969 [Str69], a great deal of effort has been spent on upper bounds for the exponent, involving methods from combinatorics, probability, and statistical mechanics; we refer to Section 1.4 for a brief history. The more recent Cohn–Umans approach [CU03] uses group-theoretic techniques, in particular the Fourier transform of finite groups. In this work, we approach the problem via algebraic geometry and representation theory. We obtain both negative and hopeful results. Our focus is Strassen's laser method [Str87]. This technique was used to achieve Strassen's upper bound of 1988 and essentially all subsequent upper bounds. In order to present the method and our contributions, we adopt the language of tensors.
1.1. Definitions. Let $A, B, C$ be complex vector spaces. A tensor $T \in A \otimes B \otimes C$ has rank one if $T = a \otimes b \otimes c$ for some $a \in A$, $b \in B$, $c \in C$. The rank of $T$, denoted $R(T)$, is the smallest $r$ such that $T$ is a sum of $r$ rank-one tensors. The border rank of $T$, denoted $\underline{R}(T)$, is the smallest $r$ such that $T$ is the limit of a sequence of tensors of rank $r$.
A tensor $T \in A \otimes B \otimes C$ defines a bilinear map $A^* \times B^* \to C$ and a trilinear map $A^* \times B^* \times C^* \to \mathbb{C}$. The matrix multiplication tensor $M_{l,m,n}$ is the tensor associated to the bilinear map $M_{l,m,n}: \mathrm{Mat}_{l\times m} \times \mathrm{Mat}_{m\times n} \to \mathrm{Mat}_{l\times n}$ sending a pair of matrices $(X, Y)$ to their product $XY$. As a trilinear map, the matrix multiplication tensor is $M_{l,m,n}(X, Y, Z) = \mathrm{trace}(XYZ)$, where $X, Y, Z$ are matrices of size $l \times m$, $m \times n$ and $n \times l$, respectively. The matrix multiplication tensor has the following important self-reproducing property: $M_{l,m,n} \boxtimes M_{l',m',n'} = M_{ll',mm',nn'}$. Write $M_n := M_{n,n,n}$.
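As an illustration of these definitions (not part of the paper's toolchain), the trilinear description $M_{l,m,n}(X,Y,Z) = \mathrm{trace}(XYZ)$ can be checked directly by building the tensor coordinate-wise; the helper name `matmul_tensor` is ours:

```python
import numpy as np

def matmul_tensor(l, m, n):
    """Matrix multiplication tensor M_{l,m,n} in C^{lm} x C^{mn} x C^{nl}:
    T[i*m+j, j*n+k, k*l+i] = 1, one entry per term X[i,j]Y[j,k]Z[k,i]
    of trace(XYZ)."""
    T = np.zeros((l * m, m * n, n * l))
    for i in range(l):
        for j in range(m):
            for k in range(n):
                T[i * m + j, j * n + k, k * l + i] = 1
    return T

# check the trilinear-form description M(X, Y, Z) = trace(XYZ)
rng = np.random.default_rng(0)
l, m, n = 2, 3, 4
X, Y, Z = rng.random((l, m)), rng.random((m, n)), rng.random((n, l))
T = matmul_tensor(l, m, n)
val = np.einsum('abc,a,b,c->', T, X.ravel(), Y.ravel(), Z.ravel())
assert np.isclose(val, np.trace(X @ Y @ Z))
```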
The complexity of performing a bilinear map, and in particular the complexity of matrix multiplication, is controlled by the tensor rank of the corresponding tensor. Bini [Bin80] showed that border rank controls the complexity as well. Let $GL(A)$ be the general linear group of invertible linear maps $A \to A$, and similarly for $B$ and $C$. We say that two tensors are isomorphic if they are in the same orbit under the natural action of $GL(A) \times GL(B) \times GL(C)$ on $A \otimes B \otimes C$. We will often assume that all tensors involved in the discussion belong to the same space $A \otimes B \otimes C$. This is not restrictive, since we may re-embed the spaces $A, B, C$ into larger spaces whenever needed.
Given $T, T' \in A \otimes B \otimes C$, we say that $T$ degenerates to $T'$ if $T' \in \overline{(GL(A) \times GL(B) \times GL(C)) \cdot T}$, the closure of the orbit of $T$, equivalently in the Euclidean or in the Zariski topology. Border rank is semicontinuous under degeneration: $\underline{R}(T') \le \underline{R}(T)$ if $T$ degenerates to $T'$. Border rank may be rephrased in terms of degeneration as follows. For a tensor $T$, one has $\underline{R}(T) \le r$ if and only if $T$ is a degeneration of $M_1^{\oplus r} = \sum_{i=1}^r a_i \otimes b_i \otimes c_i$, where $\{a_i\}$ is a set of linearly independent vectors and similarly for $\{b_i\}$ and $\{c_i\}$. The border subrank of $T$, denoted $\underline{Q}(T)$, is the maximum $q$ such that $T$ degenerates to $M_1^{\oplus q}$. For tensors $T \in A \otimes B \otimes C$ and $T' \in A' \otimes B' \otimes C'$, the Kronecker product of $T$ and $T'$ is the tensor $T \boxtimes T' \in (A \otimes A') \otimes (B \otimes B') \otimes (C \otimes C')$, regarded as a 3-way tensor. Given $T \in A \otimes B \otimes C$, the Kronecker powers of $T$ are $T^{\boxtimes N} \in A^{\otimes N} \otimes B^{\otimes N} \otimes C^{\otimes N}$, defined iteratively. Rank and border rank are submultiplicative under the Kronecker product: $R(T \boxtimes T') \le R(T)R(T')$, $\underline{R}(T \boxtimes T') \le \underline{R}(T)\underline{R}(T')$, and both inequalities may be strict.
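The Kronecker product just defined is the usual tensor product with the factors regrouped pairwise. A small numerical sketch (names ours) shows the regrouping, and that the ranks of the standard flattenings, hence the classical lower bounds, multiply:

```python
import numpy as np

def kron3(T1, T2):
    """Kronecker product of 3-way tensors: (A otimes A')(B otimes B')(C otimes C'),
    regrouped as a 3-way tensor of shape (aa', bb', cc')."""
    a1, b1, c1 = T1.shape
    a2, b2, c2 = T2.shape
    return np.einsum('abc,def->adbecf', T1, T2).reshape(a1 * a2, b1 * b2, c1 * c2)

rng = np.random.default_rng(0)
T1, T2 = rng.random((2, 2, 2)), rng.random((3, 3, 3))
K = kron3(T1, T2)

# the first flattening of a Kronecker product is (up to a permutation of
# columns) the Kronecker product of the flattenings, so its rank multiplies
flat = lambda T: np.linalg.matrix_rank(T.reshape(T.shape[0], -1))
assert flat(K) == flat(T1) * flat(T2)
```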
Asymptotic versions of border rank and border subrank, respectively called asymptotic rank and asymptotic subrank, are defined as follows: $\mathbf{R}(T) := \lim_{N\to\infty} R(T^{\boxtimes N})^{1/N}$ and $\mathbf{Q}(T) := \lim_{N\to\infty} Q(T^{\boxtimes N})^{1/N}$. One has $\omega = \log_2(\mathbf{R}(M_2))$; in particular, $\omega = 2$ if and only if $\mathbf{R}(M_n) = n^2$ for any (and as a consequence all) $n$.

1.2. Strassen's laser method and its barriers. The two fundamental ingredients of Strassen's laser method are submultiplicativity of border rank under Kronecker powers and semicontinuity of border rank under degeneration. The laser method relies on an auxiliary tensor $T$ with the property that $\underline{R}(T)$ is small and, for some large $N$, $T^{\boxtimes N}$ degenerates to a large matrix multiplication tensor. Since 1987, only three tensors have been employed in the method, and the best upper bounds so far come from the big Coppersmith–Winograd tensor [CW90]: it was used to prove $\omega < 2.38$ in 1988 and all further improvements to the current best known upper bound $\omega < 2.373$.
In 2014, [AFL15] gave an explanation for the limited progress since 1988, followed by further explanations in [AW18a, AW18b, CVZ21, Alm19]. One major consequence of these results is that $T_{CW,q}$ cannot be used to prove $\omega < 2.3$ using the standard laser method.
A geometric identification of the barrier of [AFL15] was given in [CVZ21]. Strassen showed $\underline{Q}(M_n) \ge \lceil \frac{3}{4} n^2 \rceil$ (in [KMZ20, Theorem 3] equality was proved). This, together with the self-reproducing property of the matrix multiplication tensor, implies $\mathbf{Q}(M_n) = n^2$, which is the maximum possible value. A consequence is that no tensor having non-maximal asymptotic subrank can be used to prove $\omega = 2$ via the laser method; in [Str91] it was shown that $\mathbf{Q}(T_{CW,q})$ is non-maximal.
The second most effective tensor used for upper bounds via Strassen's laser method is the small Coppersmith–Winograd tensor:

(1) $T_{cw,q} := \sum_{j=1}^{q} \left( a_0 \otimes b_j \otimes c_j + a_j \otimes b_0 \otimes c_j + a_j \otimes b_j \otimes c_0 \right) \in (\mathbb{C}^{q+1})^{\otimes 3}$.
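In coordinates, $T_{cw,q}$ is easy to write down. The following sketch (helper name ours) builds it and checks that each of its three flattenings has full rank $q+1$, which already gives the trivial lower bound $\underline{R}(T_{cw,q}) \ge q+1$:

```python
import numpy as np

def T_cw(q):
    """Little Coppersmith-Winograd tensor of eq. (1) in (C^{q+1})^{otimes 3}."""
    T = np.zeros((q + 1, q + 1, q + 1))
    for j in range(1, q + 1):
        T[0, j, j] = 1   # a_0 (x) b_j (x) c_j
        T[j, 0, j] = 1   # a_j (x) b_0 (x) c_j
        T[j, j, 0] = 1   # a_j (x) b_j (x) c_0
    return T

for q in (2, 3, 8):
    T = T_cw(q)
    for axis in range(3):
        # each of the three flattenings has full rank q + 1
        M = np.moveaxis(T, axis, 0).reshape(q + 1, -1)
        assert np.linalg.matrix_rank(M) == q + 1
```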
One has $\underline{R}(T_{cw,q}) = q + 2$, which is one more than minimal (see, e.g., [BCS97, Sec. 15.8]). Applying Theorem 1.1 to $T_{cw,8}$ with $k = 1$, one obtains $\omega \le 2.41$ [CW90]. Theorem 1.1 implies that if the border rank of the Kronecker square or some higher Kronecker power of $T_{cw,q}$ were strictly submultiplicative, one could get a better bound, and one could even potentially prove $\omega = 2$ using Kronecker powers of $T_{cw,2}$. Indeed, [BCS97, Ex. 15.24] observes that Theorem 1.1 holds replacing $\underline{R}(T_{cw,q}^{\boxtimes k})^{1/k}$ with $\mathbf{R}(T_{cw,q})$. In particular, were $\mathbf{R}(T_{cw,2}) = 3$, then Theorem 1.1 would imply $\omega = 2$. This shows that the barriers of [AFL15, AW18a, AW18b, CVZ21] do not apply to $T_{cw,2}$. Prior to our work, the possibility of proving the upper bound $\omega < 2.3$ using the second and third Kronecker powers of $T_{cw,q}$ for $3 \le q \le 10$ was open, in the sense that if the state-of-the-art lower bound on $T_{cw,q}^{\boxtimes k}$ were equal to an upper bound, then Theorem 1.1 would have given an improvement. We show that this is not the case.

1.3. Main results. M. Bläser [Blä13, Problem 9.8] posed the problem of determining the border rank of $T_{cw,q}^{\boxtimes 2}$. We show:

Theorem 1.2. For all $q > 2$, $\underline{R}(T_{cw,q}^{\boxtimes 2}) = (q + 2)^2$; moreover $15 \le \underline{R}(T_{cw,2}^{\boxtimes 2}) \le 16$.
For all $q > 4$, $\underline{R}(T_{cw,q}^{\boxtimes 3}) = (q + 2)^3$; if $q = 3, 4$, then $\underline{R}(T_{cw,q}^{\boxtimes 3}) \ge (q + 2)^2 (q + 1)$. For all $q > 4$ and all $N$, $\underline{R}(T_{cw,q}^{\boxtimes N}) \ge (q + 1)^{N-3}(q + 2)^3$.

This improves on the previous lower bound from [BL16], which was $\underline{R}(T_{cw,q}^{\boxtimes N}) \ge (q + 1)^N + 2^N - 1$ for all $q, N$. This result shows that the second and third Kronecker powers of $T_{cw,q}$ cannot give any improvement on the current upper bounds on the exponent. For instance, the lower bound of [BL16] for $(q, N) = (3, 3)$ is $\underline{R}(T_{cw,3}^{\boxtimes 3}) \ge 71$; if this had been the value of $\underline{R}(T_{cw,3}^{\boxtimes 3})$, then Theorem 1.1 would have given $\omega < 2.15$; however, the lower bound of Theorem 1.2 guarantees $\underline{R}(T_{cw,3}^{\boxtimes 3}) \ge 100$, and even if this turns out to be the value of $\underline{R}(T_{cw,3}^{\boxtimes 3})$, Theorem 1.2 only gives $\omega < 2.46$. In light of the above-mentioned barriers and Theorem 1.2, one might try to determine better tensors which are not subject to the barriers (similarly to $T_{cw,q}$) and at the same time have strict submultiplicativity of border rank under Kronecker powers.
Inspired by [CGLV19], we introduce a new family of tensors, a skew-symmetric version of the small Coppersmith–Winograd tensors, for every even $q = 2u$:

(3) $T_{skewcw,q} := \sum_{i=1}^{u} \big( a_0 \otimes b_i \otimes c_{u+i} - a_0 \otimes b_{u+i} \otimes c_i + a_i \otimes b_{u+i} \otimes c_0 - a_{u+i} \otimes b_i \otimes c_0 + a_{u+i} \otimes b_0 \otimes c_i - a_i \otimes b_0 \otimes c_{u+i} \big) \in (\mathbb{C}^{q+1})^{\otimes 3}$.

Proposition 2.2 shows that Theorem 1.1 holds with $T_{cw,q}$ replaced by $T_{skewcw,q}$, so in particular $T_{skewcw,2}$ could potentially be used to prove $\omega = 2$. Proposition 3.1 contains more negative news: $\underline{R}(T_{skewcw,q}) \ge q + 3$, and in particular $\underline{R}(T_{skewcw,2}) = 5$. However, we show a strong submultiplicative behaviour for $T_{skewcw,q}$, namely $\underline{R}(T_{skewcw,2}^{\boxtimes 2}) \le 17 < 5^2$. Theorem 1.3 below actually proves a stronger statement. We show in Lemma 2.4 that $T_{skewcw,2}^{\boxtimes 2}$ is isomorphic to the $3 \times 3$ determinant polynomial regarded as a tensor, and we prove new upper bounds for the symmetric rank (also known as Waring rank, see, e.g., [Lan12, §2.6.6]) and symmetric border rank of the $3 \times 3$ determinant polynomial.
In [CHL19], it was shown that $\underline{R}(\det_3) = 17$, and in particular the second inequality in Theorem 1.3 is an equality.
The proof of Theorem 1.2 is given in Section 3 and the proof of Theorem 1.3 is given in Section 4.
Some of the proofs of this work rely on computer calculations performed with the software packages Macaulay2 [GS] and Sage [Sag]. The scripts performing these calculations are collected in different appendices in the Supplementary Material available at http://fulges.github.io/code/CGLV/index.html

1.4. Brief history of upper bounds. There was steady progress in the research on upper bounds on $\omega$ from 1969 to 1988.
A major breakthrough due to Schönhage [Sch81], known as the asymptotic sum inequality, was used to show $\omega < 2.55$ by exploiting the interplay between direct sums and the self-reproducing property of the matrix multiplication tensor. In [Str87], Strassen introduced the laser method and showed $\omega < 2.48$. A refined form of the laser method was used by Coppersmith and Winograd to show $\omega < 2.3755$ [CW90].
There was no progress on upper bounds on the exponent until 2011 when, via a further refinement of the method, a series of improvements by Stothers, Vassilevska Williams, Le Gall, and Alman and Williams [Sto10, Wil12, Le14, AW21] lowered the upper bound to the current state of the art $\omega < 2.373$.

Preliminary results
In this section, we provide some results which will be useful in the rest of the paper.
2.1. $T_{skewcw,q}$ and the laser method. The first result is the analog of Theorem 1.1 for the family $T_{skewcw,q}$:

Proposition 2.2. For all $k$,

Proof. Similarly to the case of $T_{cw,q}$, the proof follows immediately from [BCS97, Theorem 15.41], because $T_{skewcw,q}$ has the same "block structure" as $T_{cw,q}$.
In particular, similarly to $T_{cw,q}$, if $\mathbf{R}(T_{skewcw,2}) = 3$ then $\omega = 2$, and it is potentially possible to improve the current upper bounds on $\omega$ using $T_{skewcw,q}$. Therefore, it is important to determine upper bounds on the border rank of the Kronecker powers of $T_{skewcw,q}$, in particular in the case $q = 2$.

2.2. Coppersmith–Winograd tensors, symmetries, determinants and permanents. Let $S^3\mathbb{C}^m$ and $\Lambda^3\mathbb{C}^m$ respectively denote the subspaces of symmetric and skew-symmetric tensors in $\mathbb{C}^m \otimes \mathbb{C}^m \otimes \mathbb{C}^m$. By identifying the three copies of $\mathbb{C}^{q+1}$ in (1) and (3), we observe that $T_{cw,q}$ is isomorphic to a symmetric tensor and $T_{skewcw,q}$ is isomorphic to a skew-symmetric tensor. Indeed, fixing a basis $a_0, \ldots, a_q$ of $\mathbb{C}^{q+1}$, the isomorphism $a_j \leftrightarrow b_j \leftrightarrow c_j$ provides (5). We introduce some definitions concerning the symmetries of a tensor. The group homomorphism $\Phi: GL(A) \times GL(B) \times GL(C) \to GL(A \otimes B \otimes C)$ has kernel $(\mathbb{C}^*)^{\times 2}$; in particular, the group $(GL(A) \times GL(B) \times GL(C))/(\mathbb{C}^*)^{\times 2}$ is identified with a subgroup of $GL(A \otimes B \otimes C)$. Given $T \in A \otimes B \otimes C$, the symmetry group $G_T$ of $T$ is the stabilizer of $T$ in this group. If the three spaces $A, B, C$ are identified, so that $A \otimes B \otimes C \simeq A^{\otimes 3}$, one can consider the action restricted to $GL(A)$ embedded diagonally as $GL_{diag}(A) \subseteq GL(A)^{\times 3}$. In this case, the kernel of the action reduces to the cyclic group $\mathbb{Z}_3 = \{\zeta \mathrm{Id}_A : \zeta^3 = 1\}$ and one can consider a restricted version $G_T^s$ of the symmetry group. Let $S_k$ be the permutation group on $k$ elements.
We record the following observation:

Proposition 2.3. For $T \in A \otimes B \otimes C$ and all $N$, $G_{T^{\boxtimes N}} \supseteq G_T^{\times N} \rtimes S_N$, where the symmetric group acts by permuting the factors of the direct product.

Proof. Each copy of $G_T$ in $G_T^{\times N}$ acts on a single factor of $T^{\boxtimes N}$ and stabilizes it by definition of $G_T$. The group $S_N$ permutes the factors of $T^{\boxtimes N}$, which is a Kronecker power and therefore is stabilized. The statement for $T \in A^{\otimes 3}$ is an immediate consequence.
Consider the action of the symmetric group $S_3$ which permutes the tensor factors. A tensor is symmetric if it is invariant under this action and skew-symmetric if it is skew-invariant. It is easy to observe that Kronecker powers of symmetric tensors are symmetric tensors. Moreover, odd Kronecker powers of skew-symmetric tensors are skew-symmetric and even Kronecker powers of skew-symmetric tensors are symmetric.
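These parity facts can be verified numerically on the smallest example: by (6), $T_{skewcw,2}$ is, up to scale and change of basis, the sign tensor $a_0 \wedge a_1 \wedge a_2$. A sketch (helpers ours):

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation, computed by counting inversions."""
    p = list(p)
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# the sign tensor a_0 wedge a_1 wedge a_2 in (C^3)^{otimes 3}
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    eps[p] = perm_sign(p)

# skew-symmetric: swapping two tensor factors flips the sign
assert np.allclose(np.swapaxes(eps, 0, 1), -eps)

def kron3(T1, T2):
    a, b, c = T1.shape
    d, e, f = T2.shape
    return np.einsum('abc,def->adbecf', T1, T2).reshape(a*d, b*e, c*f)

# the even Kronecker power of a skew-symmetric tensor is symmetric:
# the two sign flips cancel factor by factor
E2 = kron3(eps, eps)
assert np.allclose(np.swapaxes(E2, 0, 1), E2)
assert np.allclose(np.transpose(E2, (2, 0, 1)), E2)
```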
We record the expressions of the $3 \times 3$ permanent and determinant polynomials as tensors in $\mathbb{C}^9 \otimes \mathbb{C}^9 \otimes \mathbb{C}^9$. Write $(-1)^\sigma$ for the sign of a permutation $\sigma$.

Lemma 2.4. We have the following isomorphisms of tensors: $T_{cw,2}^{\boxtimes 2} \simeq \mathrm{perm}_3$ and $T_{skewcw,2}^{\boxtimes 2} \simeq \det_3$.

Proof. From (5), we have $T_{cw,2} = a_0(a_1^2 + a_2^2)$. Let $\tilde{a}_1 = a_1 + \sqrt{-1}\, a_2$ and $\tilde{a}_2 = a_1 - \sqrt{-1}\, a_2$, so that $T_{cw,2} = a_0 \tilde{a}_1 \tilde{a}_2$. This shows that, after a suitable change of basis, $T_{cw,2} = a_0 a_1 a_2$. Its symmetry group is $G^s_{T_{cw,2}} = T^{SL_3} \rtimes S_3$, where $T^{SL_3}$ denotes the torus of diagonal matrices with determinant one, and $S_3$ acts by permuting the three basis elements. By Proposition 2.3, we deduce that $T_{cw,2}^{\boxtimes 2}$ is a symmetric tensor, with $G^s_{T_{cw,2}^{\boxtimes 2}} \supseteq (T^{SL_3} \rtimes S_3)^{\times 2} \rtimes \mathbb{Z}_2$ (and in fact equality holds). This is the stabilizer of the permanent polynomial $\mathrm{perm}_3$. Since the permanent is characterized by its stabilizer (see, e.g., Lemma 2.5 below), we conclude.
The proof for $T_{skewcw,2}$ is similar. From (6), we have $T_{skewcw,2} = a_0 \wedge a_1 \wedge a_2$. Therefore $G^s_{T_{skewcw,2}} = SL_3$; indeed, $T_{skewcw,2}$ is the unique, up to scale, $SL_3$-invariant in $\mathbb{C}^3 \otimes \mathbb{C}^3 \otimes \mathbb{C}^3$. By Proposition 2.3, we deduce that $T_{skewcw,2}^{\boxtimes 2}$ is a symmetric tensor, with $G^s_{T_{skewcw,2}^{\boxtimes 2}} \supseteq SL_3^{\times 2} \rtimes \mathbb{Z}_2$ (and in fact equality holds). This is the stabilizer of the determinant polynomial $\det_3$. Since the determinant is characterized by its stabilizer, we conclude.
The symmetric tensors $\det_m$ and $\mathrm{perm}_m$ are characterized by their stabilizers. For the determinant, this fact is classical. For the permanent, the statement, but not the proof, appears in [MS08]. For completeness, we provide a proof here, assuming some familiarity with the representation theory of $SL_m$ and of the symmetric group $S_m$.
Lemma 2.5. Let $T \in S^m(\mathbb{C}^m \otimes \mathbb{C}^m)$ be a symmetric tensor of order $m$.
Proof. First consider the case of the determinant. Let $SL_m \times SL_m = SL(E) \times SL(F)$ act on $S^m(E \otimes F)$. By the Cauchy formula (see, e.g., [Lan12, §6.7.6]), this space decomposes as an $SL(E) \times SL(F)$-representation as $S^m(E \otimes F) = \bigoplus_{\pi \vdash m} S_\pi E \otimes S_\pi F$; this is multiplicity free, with the only trivial module $\Lambda^m E \otimes \Lambda^m F$, corresponding to $\pi = (1^m)$. This is the space spanned by $\det_m$.
In the case of the permanent, note that the decomposition above holds for the action of $(T^{SL} \rtimes S_m)^{\times 2}$ as well. Then the $T^{SL(E)} \times T^{SL(F)}$-invariant subspace is given by the sum of the weight-zero spaces $(S_\pi E)_0 \otimes (S_\pi F)_0$. By [Gay76], one has the isomorphism $(S_\pi E)_0 \otimes (S_\pi F)_0 \simeq [\pi]_E \otimes [\pi]_F$ for the weight-zero spaces as $S_m \times S_m$-modules, where $[\pi]$ denotes the Specht module associated to $\pi$. The only trivial representation is the one corresponding to $\pi = (m)$, which is the subspace spanned by $\mathrm{perm}_m$.
Lemma 2.4 guarantees that $\mathrm{perm}_3$ and $\det_3$ are tensors not subject to the barriers for the laser method.
Remark 2.7. One can regard the $3 \times 3$ determinant and permanent as trilinear maps $\mathbb{C}^3 \times \mathbb{C}^3 \times \mathbb{C}^3 \to \mathbb{C}$, where the three copies of $\mathbb{C}^3$ are the first, second and third columns of a $3 \times 3$ matrix. From this point of view, the trilinear map given by the determinant is $T_{skewcw,2}$ as a tensor and the one given by the permanent is $T_{cw,2}$ as a tensor. This perspective, combined with the notion of product rank (in the sense of [IT16]), provides the upper bounds $R_S(\mathrm{perm}_3) \le 16$ and $R(\det_3) \le 20$. These bounds already appeared in [Der16, IT16] and are also a consequence of Lemma 2.4.

Generic tensors in $\mathbb{C}^3 \otimes \mathbb{C}^3 \otimes \mathbb{C}^3$
Evidence for the remark is obtained as follows. We considered tensors $T \in \mathbb{C}^3 \otimes \mathbb{C}^3 \otimes \mathbb{C}^3$ whose coefficients in a fixed basis were taken independently and uniformly at random in $[-1, 1]$. We obtained numerically that $\underline{R}(T^{\boxtimes 2}) \le 22$. An instance of this computation is available in Appendix A of the Supplementary Material.
Remark 2.8 is not too surprising, because $\mathbb{C}^3 \otimes \mathbb{C}^3 \otimes \mathbb{C}^3$ is secant defective, in the sense that, by a dimension count, one would expect the maximum border rank of a tensor to be 4, while the actual maximum is 5. This means that for a generic tensor there is an 8-parameter family of rank-5 decompositions, and it is not surprising that the naïve 64-parameter family of decompositions of the square might have decompositions of lower border rank on the boundary.
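The dimension count behind the remark is the standard secant-variety parameter count (a sketch, using the expected-dimension heuristic):

```latex
% A rank-r decomposition \sum_{i=1}^{r} a_i \otimes b_i \otimes c_i in
% C^3 (x) C^3 (x) C^3 has r(3+3+3) parameters, minus 2r for the rescalings
% (a, b, c) -> (s a, t b, (st)^{-1} c), i.e. 7r effective parameters:
\[
7 \cdot 4 = 28 \;\ge\; 27 = \dim\left(\mathbb{C}^3\right)^{\otimes 3},
\qquad \text{so one expects generic border rank } 4,
\]
\[
\text{while the true generic border rank is } 5, \quad\text{and}\quad
7 \cdot 5 - 27 = 8
\]
% is the dimension of the family of rank-5 decompositions of a generic tensor.
```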

Koszul flattenings and lower bounds for Kronecker powers
In this section, we review Koszul flattenings and prove a result on the propagation of Koszul flattening lower bounds under Kronecker products. We will use Koszul flattenings to prove $\underline{R}(T_{skewcw,q}) \ge q + 3$ in Proposition 3.1. Moreover, we prove Theorem 1.2: the proof follows from Theorem 3.3, Theorem 3.4 and Corollary 3.5.
Fix bases $\{a_i\}, \{b_j\}, \{c_k\}$ of the vector spaces $A, B, C$, respectively; fix an integer $p$. Given a tensor $T \in A \otimes B \otimes C$, the $p$-th Koszul flattening of $T$ is the linear map $T^{\wedge p}_A : \Lambda^p A \otimes B^* \to \Lambda^{p+1} A \otimes C$ obtained by regarding $T$ as a map $B^* \to A \otimes C$ and then wedging the $A$-factor against $\Lambda^p A$; on a rank-one tensor $T = a \otimes b \otimes c$, it is $\omega \otimes \beta \mapsto \beta(b)\,(a \wedge \omega) \otimes c$. This type of lower bound has a long history. More generally, one considers an embedding of the space $A \otimes B \otimes C$ into a large space of matrices. Then, if a rank-one tensor maps to a rank-$q$ matrix, a rank-$r$ tensor maps to a matrix of rank at most $rq$, so the minors of size $rq + 1$ give equations testing for border rank $r$. In this case, the size of the matrices is $\binom{a}{p} b \times \binom{a}{p+1} c$, and a rank-one tensor maps to a matrix of rank $\binom{a-1}{p}$. Here $a = \dim A$, $b = \dim B$ and $c = \dim C$. In practice, one considers a subspace $A'^* \subseteq A^*$ of dimension $2p + 1$ and restricts $T$ (considered as a trilinear form) to $A'^* \times B^* \times C^*$ to get an optimal bound, so the denominator $\binom{\dim A - 1}{p}$ is replaced by $\binom{2p}{p}$ in (8). Equivalently, one considers a linear map $\varphi: A \to A'$, and the corresponding Koszul flattening map gives a lower bound for $\underline{R}(\varphi(T))$, which, by linearity, is a lower bound for $\underline{R}(T)$.
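To make the construction concrete, here is a small numerical sketch of the $p = 1$ Koszul flattening for $\dim A = 3$ (function names ours); applied to $T_{cw,2}$ it recovers the bound $\underline{R}(T_{cw,2}) \ge \lceil 8/2 \rceil = 4$:

```python
import numpy as np
from itertools import combinations

def koszul_p1(T):
    """Matrix of the p=1 Koszul flattening T^{wedge 1}_A : A (x) B* -> Lambda^2 A (x) C,
    for T in A (x) B (x) C with dim A = 3, in the bases e_i (x) beta_j and
    (e_p wedge e_q) (x) c_k."""
    a, b, c = T.shape
    pairs = list(combinations(range(a), 2))     # basis e_p wedge e_q of Lambda^2 A
    M = np.zeros((a * b, len(pairs) * c))
    for i in range(a):          # input basis vector e_i of A
        for j in range(b):      # input basis covector beta_j of B*
            for s in range(a):  # contract T against beta_j, wedge e_i with e_s
                for k in range(c):
                    if s == i or T[s, j, k] == 0:
                        continue
                    pq, sign = ((i, s), 1) if i < s else ((s, i), -1)
                    M[i * b + j, pairs.index(pq) * c + k] += sign * T[s, j, k]
    return M

# T_cw,2 from (1): a rank-one tensor maps to a rank-2 matrix,
# so border rank >= rank(koszul_p1(T)) / 2
T = np.zeros((3, 3, 3))
for j in (1, 2):
    T[0, j, j] = T[j, 0, j] = T[j, j, 0] = 1
r = np.linalg.matrix_rank(koszul_p1(T))
assert r == 8          # hence R(T_cw,2) >= ceil(8/2) = 4
```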
As an element of $\Lambda^3 A \subseteq A \otimes A \otimes A$, we have $T_{skewcw,q} = a_0 \wedge \sum_{i=1}^{u} a_i \wedge a_{u+i}$ as in (6), where $q = 2u$.
We prove that for T = (φ⊗Id

We record the images via $T^{\wedge 1}_{A'}$ of a basis of $A' \otimes B^*$, fixing the range $i = 1, \ldots, u$. Notice that the image of $\sum_{i=1}^{u} (e_1 \otimes \beta_i) - \sum_{i=1}^{u} (e_2 \otimes \beta_{u+i}) - e_0 \otimes \beta_0$ is (up to scale) $(e_1 \wedge e_2) \otimes c_0$. From the contributions above, we deduce that the image of $T^{\wedge 1}_{A'}$ contains the three subspaces $e_0 \wedge e_1$, $e_0 \wedge e_2$, $(e_1 \wedge e_2) \otimes c_0$. These subspaces are in direct sum, therefore we conclude.
3.1. Propagation of lower bounds under Kronecker products. In [CJZ18, CGJ19], it was shown that generalized flattening lower bounds are multiplicative under the tensor product. The same result does not hold for Kronecker products. However, we provide a partial multiplicativity result for Koszul flattening lower bounds.
In particular, if

Proof. Identify $T_1$ with its image. Since matrix rank is multiplicative under the Kronecker product, we conclude.

3.2. A lower bound for the Kronecker square of $T_{cw,q}$. In this section, we give a proof of the first statement in Theorem 1.2.
The statement for $q = 2$ can be checked explicitly. The lower bound $\underline{R}(T_{cw,2}^{\boxtimes 2}) \ge 15$ follows from the $p = 2$ Koszul flattening lower bound and coincides with the current best known lower bound for the border rank of the $3 \times 3$ permanent polynomial. The upper bound is immediate by submultiplicativity.
Proof. Recall the expression of $T_{cw,q}$ from (1). When $q = 3$, the result is true by a direct calculation using the $p = 2$ Koszul flattening with a sufficiently generic restriction $A \to \mathbb{C}^5$.
Assume $q > 3$. Write $a_{ij} = a_i \otimes a_j \in A^{\otimes 2}$, and similarly for $B^{\otimes 2}$ and $C^{\otimes 2}$. Let $A' = \langle e_0, e_1, e_2 \rangle$ and define the linear map $\varphi_2: A^{\otimes 2} \to A'$ by setting $\varphi_2(a_{ij}) = 0$ for all other pairs $(i, j)$.
We proceed by induction on $q$. When $q = 4$, one does a direct computation with the $p = 1$ Koszul flattening, which is left to the reader and provides the base of the induction.
Write $W_j = a_0 \otimes b_j \otimes c_j + a_j \otimes b_0 \otimes c_j + a_j \otimes b_j \otimes c_0$. Then $T_{cw,q} = \sum_{j=1}^{q} W_j$. Representing the Koszul flattening in blocks, we first prove that $\mathrm{rank}(M_{11} + N_{11}) \ge \mathrm{rank}(M_{11}) = 2(q + 1)^2$. This follows by a degeneration argument.
We will provide a second proof of Theorem 3.3, which generalizes to the proof of Theorem 3.4. More precisely, we will give a representation-theoretic argument to compute the rank of the Koszul flattening map considered in the proof above. The same representation-theoretic technique will apply to the third Kronecker power.

3.3. A short detour on computing ranks of equivariant maps. We briefly explain how to exploit Schur's Lemma (see, e.g., [FH91, §1.2]) to compute the rank of an equivariant linear map. This is a standard technique, used extensively, e.g., in [LO15, GIP17], and it reduces the proofs of Theorems 3.3 and 3.4 to the computation of the ranks of specific linear maps in small dimension.
Let $G$ be a reductive group. In the proofs of Theorems 3.3 and 3.4, $G$ will be a product of symmetric groups. Let $\Lambda_G$ be the set of irreducible representations of $G$. For $\lambda \in \Lambda_G$, let $W_\lambda$ denote the corresponding irreducible module.
Let $f: U \to V$ be a $G$-equivariant linear map between $G$-modules. Decompose $U = \bigoplus_{\lambda \in \Lambda_G} M_\lambda \otimes W_\lambda$ and $V = \bigoplus_{\lambda \in \Lambda_G} L_\lambda \otimes W_\lambda$, where $M_\lambda \simeq \mathbb{C}^{m_\lambda}$ and $L_\lambda \simeq \mathbb{C}^{\ell_\lambda}$ are the multiplicity spaces: $m_\lambda$ is the multiplicity of $W_\lambda$ in $U$ and $\ell_\lambda$ is the multiplicity of $W_\lambda$ in $V$. The direct summand corresponding to $\lambda$ is called the isotypic component of type $\lambda$. By Schur's Lemma, $f = \bigoplus_{\lambda \in \Lambda_G} \varphi_\lambda \otimes \mathrm{Id}_{W_\lambda}$, where $\varphi_\lambda: M_\lambda \to L_\lambda$. Thus $\mathrm{rank}(f)$ can be expressed in terms of the $\mathrm{rank}(\varphi_\lambda)$ and the dimensions of the irreducible modules $W_\lambda$ for $\lambda \in \Lambda_G$: $\mathrm{rank}(f) = \sum_{\lambda \in \Lambda_G} \mathrm{rank}(\varphi_\lambda) \dim W_\lambda$. The ranks $\mathrm{rank}(\varphi_\lambda)$ can be computed via restrictions of $f$. For every $\lambda$, fix a nonzero vector $w_\lambda \in W_\lambda$, so that $M_\lambda \otimes \langle w_\lambda \rangle$ is a subspace of $U$. Here and in what follows, for a subset $X \subseteq V$, $\langle X \rangle$ denotes the span of $X$. Then the rank of the restriction of $f$ to $M_\lambda \otimes \langle w_\lambda \rangle$ coincides with the rank of $\varphi_\lambda$.
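A minimal analogue of this rank computation, with $G = \mathbb{Z}_n$ in place of a product of symmetric groups (an illustration of ours, not from the paper): a $\mathbb{Z}_n$-equivariant map of $\mathbb{C}^n$ is a circulant matrix, its isotypic components are the one-dimensional Fourier eigenspaces, and $\mathrm{rank}(f)$ is the number of nonzero blocks $\varphi_\lambda$ (here scalars):

```python
import numpy as np

# G = Z_n acts on C^n by cyclic shifts; an equivariant map is a circulant
# matrix C[i, j] = c[(i - j) mod n].  The irreducibles are 1-dimensional,
# and the "phi_lambda" blocks are the eigenvalues lambda_k = sum_j c_j w^{jk},
# i.e. the discrete Fourier transform of the first column.
n = 6
c = np.array([1.0, 1.0, 0.0, 0.0, 0.0, -2.0])   # first column; sums to 0
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

eigs = np.fft.fft(c)                             # one scalar per isotypic type
rank_via_schur = int(np.sum(~np.isclose(eigs, 0)))
assert rank_via_schur == np.linalg.matrix_rank(C)
```

Here the entry `c` sums to zero, so exactly one isotypic block (the trivial one) vanishes and the rank drops by one, mirroring how the ranks of the $\varphi_\lambda$ assemble into $\mathrm{rank}(f)$.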
The second proof of Theorem 3.3 and the proof of Theorem 3.4 follow the algorithm described above, exploiting the symmetries of $T_{cw,q}$. Consider the action of the symmetric group $S_q$ on $A \otimes B \otimes C$ defined by permuting the basis elements with indices $\{1, \ldots, q\}$. More precisely, a permutation $\sigma \in S_q$ induces the linear map defined by $\sigma(a_i) = a_{\sigma(i)}$ for $i = 1, \ldots, q$ and $\sigma(a_0) = a_0$. The group $S_q$ acts on $B, C$ similarly, and the simultaneous action on the three factors defines an $S_q$-action on $A \otimes B \otimes C$. The tensor $T_{cw,q}$ is invariant under this action.
3.4. Second proof of Theorem 3.3. We use the method explained in Section 3.3 to give a representation-theoretic proof of Theorem 3.3.
Proof of Theorem 3.3. As before, the case $q = 3$ can be verified explicitly. For $q \ge 4$, we apply the $p = 1$ Koszul flattening map to the same restriction of $T_{cw,q}^{\boxtimes 2}$ as in the first proof, although, to be consistent with the code on the website, we use the less appealing swap of the roles of $a_2$ and $a_3$ in the projection $\varphi_2$.
The tensor $T_{cw,q}$ is invariant under the action of $S_q$ on the indices $\{1, \ldots, q\}$ of the basis elements of $\mathbb{C}^{q+1}$. Therefore $T_{cw,q}^{\boxtimes 2}$ is invariant under the action of $S_q \times S_q$ on $A^{\otimes 2} \otimes B^{\otimes 2} \otimes C^{\otimes 2}$. Let $\Gamma := S_{q-3} \times S_{q-3}$, where $S_{q-3}$ is the permutation group on $\{4, \ldots, q\}$; then $T_{cw,q}^{\boxtimes 2}$ is invariant under the action of $\Gamma$.
Moreover, the projection $\varphi_2$ is invariant under the action of $\Gamma$.

In general, the map $T \mapsto T^{\wedge 1}_{A'}$ is equivariant. Using this fact, and the invariance with respect to $\Gamma$ described above, we deduce that $(\varphi_2(T_{cw,q}^{\boxtimes 2}))^{\wedge 1}_{A'}$ is $\Gamma$-equivariant.
We now apply the method described in §3.3 to compute $\mathrm{rank}((T_q)^{\wedge 1}_{A'})$. Let $[\mathrm{triv}]$ denote the trivial $S_{q-3}$-representation and let $V$ denote the standard representation, that is, the Specht module associated to the partition $(q - 4, 1)$ of $q - 3$. We have $\dim[\mathrm{triv}] = 1$ and $\dim V = q - 4$. When $q = 4$, only the trivial representation appears.
The spaces $B$, $C$ are isomorphic as $S_{q-3}$-modules and decompose as $B = C = [\mathrm{triv}]^{\oplus 5} \oplus V$. After fixing a 5-dimensional multiplicity space $\mathbb{C}^5$ for the trivial isotypic component, we write the induced decomposition of $B^{\otimes 2}$ and $C^{\otimes 2}$. Write $W_1, \ldots, W_4$ for the four irreducible representations in the decomposition and let $M_1, \ldots, M_4$ be the four corresponding multiplicity spaces.
Recall from [Ful97] that a basis of $V$ is given by standard Young tableaux of shape $(q - 4, 1)$ (with entries in $\{4, \ldots, q\}$, for consistency with the action of $S_{q-3}$); let $w_{std}$ be the vector corresponding to the standard tableau having $4, 6, \ldots, q$ in the first row and $5$ in the second row. We refer to [Ful97, §7] for the straightening laws of the tableaux. Let $w_{triv}$ be a generator of the trivial representation $[\mathrm{triv}]$. Writing $\mathbb{C}^{q+1} = \langle e_0, \ldots, e_q \rangle$, we explicitly have $w_{std} = e_5 - e_4$, and the 5-dimensional multiplicity space of the trivial representation is $\langle e_0, \ldots, e_3, \sum_{j=4}^{q} e_j \rangle$. For each of the four isotypic components in the decomposition above, we fix a vector $w_i \in W_i$ and explicitly realize the subspaces $M_i \otimes w_i$ of $B^{* \otimes 2}$; the subspaces in $C^{\otimes 2}$ are realized similarly.
Since $(T_{cw,q}^{\boxtimes 2})^{\wedge 1}_{A'}$ is $\Gamma$-equivariant, by Schur's Lemma it admits an isotypic decomposition. As explained in §3.3, it suffices to compute the ranks of the four restrictions $\Phi_i$. The four matrices representing $\Phi_1, \ldots, \Phi_4$ are computed by a routine which exploits their structure. The script to compute the matrices and their ranks is available in Appendix D of the Supplementary Material. The method to compute the matrices is explained in Section 6.
The script provides an expression for the entries of the matrices $\Phi_i$, which are univariate polynomials in $q$, up to a global univariate polynomial factor. The expressions are valid for $q \ge 5$. The rank of the Koszul flattening in the cases $q = 3$ and $q = 4$ is computed directly.
We determine a lower bound on $\mathrm{rank}(\Phi_i)$ by computing a matrix $P_i \circ \Phi_i \circ Q_i$, where $P_i$ is a rectangular matrix whose entries are rational functions of $q$ (well defined for $q \ge 5$) and $Q_i$ is a rectangular matrix whose entries are constant. The resulting matrix $P_i \circ \Phi_i \circ Q_i$ is square and upper triangular with $\pm 1$ on the diagonal, so its size gives a lower bound on $\mathrm{rank}(\Phi_i)$.
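The certificate used here relies only on the inequality $\mathrm{rank}(P \Phi Q) \le \mathrm{rank}(\Phi)$; a toy numerical sketch (random matrices standing in for the $P_i$, $\Phi_i$, $Q_i$ of the text):

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.random((8, 3)) @ rng.random((3, 10))   # an 8 x 10 matrix of rank 3
P, Q = rng.random((3, 8)), rng.random((10, 3))   # rectangular compressions
M = P @ Phi @ Q                                  # square 3 x 3 "certificate"

# rank can only drop under composition, so an invertible square compression
# (e.g. triangular with +-1 diagonal, as in the text) certifies a lower bound
assert np.linalg.matrix_rank(M) <= np.linalg.matrix_rank(Phi)
if abs(np.linalg.det(M)) > 1e-9:                 # M invertible ...
    assert np.linalg.matrix_rank(Phi) >= M.shape[0]   # ... so rank(Phi) >= 3
```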
We summarize the results of the script in the following table.
Adding the total contributions, we obtain the stated value of the rank. This concludes the proof of Theorem 3.3.

3.5. A lower bound for the Kronecker cube of $T_{cw,q}$. In this section, we use the method explained in Section 3.3 and illustrated in Section 3.4 to prove the second part of Theorem 1.2.

Theorem 3.4. Let $q \ge 5$. Then $\underline{R}(T_{cw,q}^{\boxtimes 3}) = (q + 2)^3$.
We employ the same method as in Section 3.4 for $T_{cw,q}^{\boxtimes 2}$. The Koszul flattening is equivariant for the action of $\Gamma = S_{q-4}^{\times 3}$, where $S_{q-4}$ acts on $\{5, \ldots, q\}$. In particular, $\mathbb{C}^{q+1}$ splits under the action of $S_{q-4}$ into a 6-dimensional subspace of invariants $\mathbb{C}^6 \otimes [\mathrm{triv}] = \langle e_0, \ldots, e_4, e_5 + \cdots + e_q \rangle$ and a copy of the standard representation $V = \langle e_i - e_5 : i = 6, \ldots, q \rangle$, with $\dim V = q - 5$.
Hence, the spaces $B^{\otimes 3}$ and $C^{\otimes 3}$ split into a direct sum of 8 isotypic components for the action of $\Gamma$ (we use indices 1, 2, 3 to denote the trivial or the standard representation on the first, second or third factor). Similarly to the square case, for each of the eight isotypic components we consider $w_i \in W_i$, where $W_i$ is the corresponding irreducible, and we compute the rank of the restriction $\Psi_i$. The matrices representing the maps $\Psi_i$ are computed by exploiting the structure of the tensors involved, following the method described in Section 6. The expression computed by the script is valid for $q \ge 6$; the case $q = 5$ is computed explicitly. The ranks are computed by reducing $\Psi_i$ to a triangular matrix as in the previous case.
The third part of Theorem 1.2 is a consequence of Proposition 3.2 and Theorem 3.4 in the case $q \ge 5$, and of Proposition 3.2 and Theorem 3.3 in the case $q = 4$. We record it explicitly in the following corollary.

Corollary 3.5. For all $q > 4$ and all $N$, $\underline{R}(T_{cw,q}^{\boxtimes N}) \ge (q + 1)^{N-3}(q + 2)^3$; moreover, $\underline{R}(T_{cw,4}^{\boxtimes N}) \ge 36 \cdot 5^{N-2}$.

Upper bounds for Waring rank and border Waring rank of det 3
In this section, we give a proof of Theorem 1.3.
We briefly recall the definitions of Waring rank and border Waring rank. A symmetric tensor $T \in S^d\mathbb{C}^m \subseteq (\mathbb{C}^m)^{\otimes d}$ has Waring rank one if $T = a^{\otimes d}$ for some $a \in \mathbb{C}^m$. The Waring rank of $T$, denoted $R_S(T)$, is the smallest $r$ such that $T$ is a sum of $r$ tensors of Waring rank one. The border Waring rank of $T$, denoted $\underline{R}_S(T)$, is the smallest $r$ such that $T$ is the limit of a sequence of tensors of Waring rank $r$. If $T$ is regarded as a homogeneous polynomial of degree $d$, then $a \in \mathbb{C}^m$ can be regarded as a linear form and $a^{\otimes d}$ coincides with the $d$-th power of $a$: in this setting, the Waring rank is the minimum number of summands in an expression of $T$ as a sum of powers of linear forms.

4.1. Waring rank of $\det_3$. Theorem 1.3 will be a consequence of Theorem 4.1 and Theorem 4.2 below.
Proof. We give the rank-18 decomposition of det_3 explicitly, as a collection of 18 linear forms on C^9 = C^3 ⊗ C^3 whose cubes add up to det_3. The linear forms are given in coordinates recorded in the matrices below: the 3 × 3 matrix (ζ_ij) represents the linear form Σ_{ij} ζ_ij x_ij. This presentation highlights some of the symmetries of the decomposition.
Let ϑ = exp(2πi/6) and let ϑ̄ be its inverse. Then det_3 = Σ_{i=1}^{18} L_i^3, where L_1, ..., L_18 are the 18 linear forms given by the following coordinates. The equality can be verified by hand; a Macaulay2 file performing the calculation is available in Appendix B of the Supplementary Material.
The Waring decomposition of Theorem 4.1 was generalized in [JT20], giving an upper bound for the Waring rank of the determinant polynomial det_m.

4.2. Waring border rank of det_3. The statement for the border rank is given by the following result. As in the previous proof, the border rank upper bound is proved by explicitly giving linear forms, depending on a parameter t, whose cubes provide a border rank expression for det_3. The algebraic numbers involved are more complicated than in the previous case.
The result was achieved by numerical methods, which allowed us to sparsify the decomposition and ultimately determine the values of the coefficients. A detailed explanation of the method is given in Section 4.3.
Proof. The 17 linear forms L_1(t), ..., L_17(t) providing a border rank decomposition of det_3 are described below. The coefficients z_1, ..., z_44 are algebraic numbers described as follows. Let y* be a real root of an explicit polynomial of degree 27. For j = 1, ..., 44, we consider algebraic numbers y_j in the field extension Q[y*], each described as a polynomial of degree (at most) 26 in y* with rational coefficients. Notice that all the y_j's are real. The expressions of y_1, ..., y_44 in terms of y* are provided in the file yy_exps in Appendix C of the Supplementary Material. Let z_j be the unique real cube root of y_j.
We are going to prove that, with this choice of coefficients z_j, equation (11) holds. Equation (11) is equivalent to the fact that the degree 0 and the degree 1 components (in t) of Σ_{i=1}^{17} L_i(t)^3 vanish and that the degree 2 component equals det_3. Given the sparse structure of the L_i(t), this reduces to a system of 54 cubic equations in the 44 unknowns z_1, ..., z_44. Our goal is to show that the algebraic numbers described above are a solution of this system. We show that the z_j's satisfy each equation as follows. After evaluating an equation at the z_j's, there are two possible cases:
(1) all monomials appearing in the equation are elements of Q[y*]; we say that this is an equation of type 1; there are 14 such equations;
(2) at least one monomial appearing in the equation is not an element of Q[y*]; we say that this is an equation of type 2; there are 40 such equations.
For equations of type 1, we provide expressions of each monomial in terms of y*. To verify that each expression is indeed equal to the corresponding monomial, it suffices to compare the cube of the given expression with the expression obtained by evaluating the monomial at the y_j's. Finally, the equation can be verified in Q[y*]. This is performed by the file checkingType1eqns.m2.
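The arithmetic behind this verification is ordinary reduction modulo the minimal polynomial of y*. The following sketch (ours, with a degree-2 toy field in place of the degree-27 field Q[y*]) mirrors the comparison of a cubed expression against an independently given one:

```python
from sympy import symbols, Poly, QQ

# Toy stand-in for the type 1 verification: we work in Q[y*] with minimal
# polynomial y^2 - 2 (degree 2) instead of the degree-27 field of the paper.
y = symbols('y')
minpoly = Poly(y**2 - 2, y, domain=QQ)

def reduce_in_field(p):
    # reduce a polynomial expression of the field element mod the minimal polynomial
    return p.rem(minpoly)

a = Poly(1 + y, y, domain=QQ)    # the element 1 + y* of Q[y*]
cube = reduce_in_field(a**3)      # (1 + y*)^3 = 7 + 5*y*, reduced mod minpoly
print(cube.all_coeffs())          # [5, 7]
```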
For equations of type 2, let u be one of the monomials which do not belong to Q[y*]. We claim that it is possible to choose the monomial in such a way that Q[u] contains y* and all the other monomials occurring in the equation. For each equation, we choose one of the monomials and verify the claim as follows. The element u^3 has an expression in terms of y* which equals the chosen monomial evaluated at the y_j's. Let M_u be the 27 × 27 matrix with rational entries expressing the powers of u^3 in terms of the powers of y*; M_u can be computed directly from the expressions of the powers of u^3 in terms of y*. In particular, y* has an expression in terms of u^3, which can be computed by inverting the matrix M_u; a consequence of this is that Q[y*] ⊆ Q[u]. At this point, we observe that Q[u] contains the other monomials occurring in the equation as well. To see this, we proceed as in the case of equations of type 1. For each monomial occurring in the equation, we provide an expression in terms of u (in fact, to speed up the calculation, we provide an expression in terms of u and y*, which is equivalent to an expression in u because y* has a unique expression in terms of u^3); we compare the cube of this expression (appropriately reduced modulo the minimal polynomial of y* and the relation between u^3 and y*) with the expression obtained by evaluating the monomial at the y_j's (expressed in terms of y*). This shows that all monomials occurring in the equation belong to Q[u], and verifies that the given expressions are indeed equal to the corresponding monomials. Finally, the equation is verified in Q[u] as in the type 1 case. This is performed by the file checkingType2eqns.m2.

4.3. Discussion of how the decomposition was obtained. Many steps were accomplished by finding solutions of polynomial equations by nonlinear optimization. In each case, this was done using a variant of Newton's method applied to the mapping from variable values to the corresponding polynomial values. The result of this procedure is, in each case, a set of limited-precision machine floating point numbers.
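A minimal sketch of such an iteration (our own generic implementation, not the authors' code): Newton steps are computed by least squares, so the same loop handles non-square polynomial systems.

```python
import numpy as np

def newton_least_squares(F, J, x0, iters=50, tol=1e-14):
    """Newton iteration with least-squares steps for a (possibly non-square)
    polynomial system F(x) = 0 with Jacobian J. A generic sketch of the kind
    of local solver described above, not the authors' actual code."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        step, *_ = np.linalg.lstsq(J(x), -r, rcond=None)
        x = x + step
    return x

# Toy system: x^2 + y^2 - 1 = 0 and x - y = 0, solved from a nearby start.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
sol = newton_least_squares(F, J, [1.0, 0.5])
print(sol)  # ≈ [0.70710678, 0.70710678]
```

Started near a solution, the iteration converges quadratically, as observed in the text for equation (11).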
First, we attempted to solve the equations describing a Waring rank 17 decomposition of det_3, namely det_3 = Σ_{i=1}^{17} (w'_i)^{⊗3} with w'_i ∈ C^{3×3}, by nonlinear optimization. Instead of finding a solution to working precision, we obtained a sequence of local refinements to an approximate solution, in which the distance between det_3 and its approximation converges slowly to zero while some of the parameter values explode to infinity. Numerically, these are Waring decompositions of polynomials very close to det_3.
Next, this approximate solution needed to be upgraded to a solution to equation (11).
We found a choice of parameters in the neighborhood of a solution, and then applied local optimization to solve to working precision. We used the following method. Consider the linear mapping M determined by the approximate decomposition, and let M = UΣV* be its singular value decomposition (with respect to the standard inner products for the natural coordinate systems). We observed that the singular values seemed to be naturally partitioned by order of magnitude. We estimated this magnitude factor as t_0 ≈ 10^{−3}, and defined Σ' from Σ by multiplying each singular value by (t/t_0)^k, with k chosen to agree with the observed partitioning, so that the constants remaining were reasonably sized. Finally, we let M' = UΣ'V*, which has entries in C[[t]]. Thus M' is a representation of the map M with a parameter t.
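The rescaling step can be sketched as follows (our own function name and interface; we assume the singular values cluster near integer powers of t_0, as described above):

```python
import numpy as np

def rescale_by_magnitude(M, t0=1e-3):
    """Replace each singular value s by c * t^k, where s ≈ c * t0^k, turning M
    into a matrix polynomial in a formal parameter t with O(1) constants.
    A sketch of the step described in the text, not the authors' code."""
    U, S, Vh = np.linalg.svd(M)
    exponents = np.round(np.log(S) / np.log(t0)).astype(int)  # observed partition
    coeffs = S / t0**exponents                                # reasonably sized constants
    return U, coeffs, exponents, Vh

# Singular values clustered near t0^0, t0^1, t0^2:
M = np.diag([2.0, 3e-3, 5e-6])
U, c, k, Vh = rescale_by_magnitude(M)
print(k)  # [0 1 2]
```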
Next, for each i, we optimized to find a best fit to the equation (a_i + t b_i + t^2 c_i)^{⊗3} = M'(e_i), which is defined by polynomial equations in the entries of a_i, b_i and c_i. The a_i, b_i and c_i constructed in this way proved to be a good initial guess for optimizing equation (11), and we immediately saw quadratic convergence to a solution to machine precision. At this point, we greedily sparsified the solution by speculatively zeroing values and re-optimizing, rolling back one step in case of failure. After sparsification, it turned out that the c_i were not needed. The resulting matrices are those given in the proof.
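The greedy sparsification loop can be sketched generically (our own minimal reconstruction; `refine` and `residual` stand for the re-optimization and error callbacks, which here form a toy least-squares instance rather than the system (11)):

```python
import numpy as np

def greedy_sparsify(x, refine, residual, tol=1e-10):
    """Zero one parameter at a time, re-optimize, and roll back whenever the
    residual fails to converge below tol. A sketch, not the authors' code."""
    x = np.array(x, dtype=float)
    for i in np.argsort(np.abs(x)):        # try the smallest entries first
        if x[i] == 0.0:
            continue
        trial = x.copy()
        trial[i] = 0.0
        trial = refine(trial)              # re-optimize on the remaining support
        if residual(trial) < tol:
            x = trial                      # keep the sparser solution
        # otherwise roll back: x is left unchanged
    return x

# Toy instance: represent the vector target as a combination of columns of A.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
target = np.array([1.0, 1.0])

def refine(x):
    free = [j for j in range(len(x)) if x[j] != 0.0]
    if not free:
        return x
    sol, *_ = np.linalg.lstsq(A[:, free], target, rcond=None)
    out = np.zeros_like(x)
    out[free] = sol
    return out

residual = lambda x: np.linalg.norm(A @ x - target)

sparse = greedy_sparsify([0.5, 0.5, 0.5], refine, residual)
print(sparse)  # only the third column survives: [0. 0. 1.]
```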
To compute the minimal polynomials and other integer relationships between quantities, we used Lenstra–Lenstra–Lovász (LLL) integer lattice basis reduction [LLL82]. As an example, let ζ ∈ R be an approximation of an algebraic number of degree k, and let N be a large number inversely proportional to the error of ζ. Consider the integer lattice generated by vectors encoding the rounded values N·ζ^i together with the standard basis vectors. Polynomials p for which ζ is an approximate root are distinguished by the property of having relatively small Euclidean norm in this lattice, and computing a small-norm vector in an integer lattice is accomplished by LLL reduction of a known basis.
For example, the fact that the number field of degree 27 obtained by adjoining any z_α^3 to Q contains all the others was determined via LLL reduction, looking for expressions of z_α^3 as a polynomial in z_β^3 for some fixed β. These expressions of the z_α^3 in a common number field can be checked to have the correct minimal polynomials, and thus agree with our initial description of the z_α. LLL reduction was also used to find the expressions of values as polynomials in a primitive element of the various number fields.
After refining the known values of the parameters to 10,000 bits of precision using Newton's method, LLL reduction was successful in identifying the minimal polynomials. The degrees were simply guessed, and the results were checked by evaluating the computed polynomials at the parameters to higher precision.
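The same kind of integer-relation recovery can be reproduced with mpmath, whose pslq routine implements the PSLQ algorithm, a close relative of the LLL-based lattice method described above. The toy example below (ours) recovers the degree-4 minimal polynomial of √2 + √3 from a high-precision value; the degree is guessed and then checked, as in the text:

```python
from mpmath import mp, sqrt, pslq

# Recover a minimal polynomial from a numerical value by finding an integer
# relation among its powers. mpmath's pslq implements PSLQ, a close relative
# of the LLL-based lattice method described in the text.
mp.dps = 60                    # stand-in for "high precision"
zeta = sqrt(2) + sqrt(3)
guessed_degree = 4             # guessed, then checked
rel = pslq([zeta**i for i in range(guessed_degree + 1)])
print(rel)  # coefficients a_0..a_4 with a_0 + a_1*zeta + ... + a_4*zeta^4 = 0
```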
Remark 4.3. With the minimal polynomial information, it is possible to check that equation (11) is satisfied to any desired precision by the parameters.

Tight tensors in C^3 ⊗ C^3 ⊗ C^3
Following an analysis started in [CGL+21], we consider Kronecker squares of tight tensors in C^3 ⊗ C^3 ⊗ C^3. We compute their symmetry groups and numerically provide bounds on their tensor rank and border rank, highlighting their submultiplicativity properties.
We refer to [Str94, Blä13, BCS97, CGL+21] for an exposition of the role of tightness in Strassen's work and in the laser method. In Lemma 5.1 below, we explicitly show that T_cw,q and T_CW,q are tight tensors. This fact was known and appears implicitly in [Blä13, CVZ21] and other related works; however, we are not aware of a reference where the proof is given in its entirety. A tensor T ∈ A ⊗ B ⊗ C is tight if g_T/C^2 contains a regular semisimple element. Given a basis {a_i : i = 1, ..., dim A} of A, and similarly for B and C, write T_ijk for the coordinates of a tensor T in the induced basis. Tightness can be defined combinatorially with respect to a basis; see, e.g., [CGL+21, Def. 1.3]. Explicitly, T is tight if and only if there exist bases of A, B, C and injective functions s_A : {1, ..., dim A} → Z, s_B : {1, ..., dim B} → Z, s_C : {1, ..., dim C} → Z such that s_A(i) + s_B(j) + s_C(k) = 0 for every (i, j, k) ∈ supp(T).
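For supports of small format, the combinatorial condition can be certified by brute force. The helper below (ours, not from the paper) searches for a single injective labeling used on all three factors; this is sufficient for tightness, though in general s_A, s_B, s_C may differ. We run it on the support of T_CW,1, which also appears below as T_7:

```python
from itertools import product

def find_symmetric_tight_labeling(support, dim, radius=4):
    """Brute-force search for an injective labeling s used on all three
    factors with s[i] + s[j] + s[k] = 0 on the support: a sufficient (not
    necessary) certificate of tightness. Our own helper, not from the paper."""
    for s in product(range(-radius, radius + 1), repeat=dim):
        if len(set(s)) == dim and all(s[i] + s[j] + s[k] == 0 for i, j, k in support):
            return s
    return None

# Support of T_CW,1 (isomorphic to the structure tensor of C[x]/(x^3)):
support_TCW1 = [(0, 1, 1), (1, 0, 1), (1, 1, 0), (0, 0, 2), (0, 2, 0), (2, 0, 0)]
print(find_symmetric_tight_labeling(support_TCW1, 3))  # (-2, 1, 4)
```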
The following result was "known to the experts" but since we do not have a reference for it, we provide its proof.
Lemma 5.1.The tensors T cw,q and T CW,q are tight.
Proof. Write q = 2u or q = 2u + 1 depending on the parity of q. Consider the following change of basis on A, and similarly on B and C.
After this change of basis, regarding T_cw,q and T_CW,q as symmetric tensors in S^3 A, we have, depending on the parity of q, expressions such as T_CW,2u+1 = a_0(Σ_{j=1}^u a_j a_{u+j} + a_q^2) + a_0^2 a_{q+1}, whose supports are tight by the combinatorial characterization.
The combinatorial characterization of tightness makes it clear that this property only depends on the support of a tensor in a given basis; we say that a support S is tight if every tensor having support S is tight.
Given concise tensors T_1 and T_2, we have the containment g_{T_1} ⊕ g_{T_2} ⊆ g_{T_1 ⊠ T_2} (12); moreover, if g_{T_1} = 0 and g_{T_2} = 0, then equality holds, that is, g_{T_1 ⊠ T_2} = 0.
The strict containment in (12) occurs, for instance, in the case of the matrix multiplication tensor. In [CGL+21], we posed the problem of characterizing the tensors for which the containment (12) is strict. Proposition 5.3 provides several additional examples of tensors in C^3 ⊗ C^3 ⊗ C^3 for which this containment is strict.
5.2. Tight supports in C^3 ⊗ C^3 ⊗ C^3. From [CGL+21, Proposition 2.14], one obtains an exhaustive list of unextendable tight supports for tensors in C^3 ⊗ C^3 ⊗ C^3, up to the action of Z_2 × S_3, where S_3 acts by permuting the factors and Z_2 acts by reversing the order of the basis elements. In fact, tightness is invariant under the action of the full S_3 acting by permutation on the basis vectors. This additional simplification, pointed out by J. Hauenstein, yields the following list of 9 unextendable tight supports up to the action of ((S_3)^{×3}) ⋊ S_3.
The following result characterizes tight tensors in C 3 ⊗ C 3 ⊗ C 3 up to isomorphism.
Proposition 5.2.Let T ∈ C 3 ⊗ C 3 ⊗ C 3 be a tight tensor with unextendable tight support in some basis.Then, up to permuting the three factors, T is isomorphic to exactly one of the following.
Proof. The result of [CGL+21, Proposition 2.14] and the discussion above show that T is, up to permutation of the factors, equivalent to a tensor with support T_i for some i = 1, ..., 9.
For i = 1, . . ., 8, it is straightforward to verify that all tensors with support T i are isomorphic, via the change of bases given by three diagonal matrices.
The case of T 9 is slightly more involved but essentially the same argument shows that a tensor T with support T 9 is isomorphic to T 9,µ , for some µ.
Finally, we have to show that no two of the tensors in the statement are isomorphic. For tensors having distinct supports, this is a consequence of Proposition 5.3 below: indeed, if T, T' are two of the tensors above, Proposition 5.3 shows that either dim g_T ≠ dim g_{T'} or dim g_{T^{⊠2}} ≠ dim g_{T'^{⊠2}}.
As for the tensors with support T_9, we proceed as follows. Let T = T_{9,μ} and T' = T_{9,μ'} with μ ≠ μ'. We show that T is not isomorphic to T'. Suppose by contradiction that there is a triple of 3 × 3 matrices g = (g_A, g_B, g_C) ∈ GL_3 × GL_3 × GL_3 with g(T) = T'. One sees that in each case g_A, g_B, g_C have to be diagonal matrices, and an explicit calculation shows that there is no triple of diagonal matrices such that g(T) = T'. We point out that T_7 is isomorphic to the Coppersmith–Winograd tensor T_CW,1, as well as to the structure tensor of the algebra C[x]/(x^3).
The tensors T_cw,2 and T_skewcw,2 are degenerations of T_{9,μ}, for μ = 1 and μ = −1 respectively. In particular, they do not have an unextendable tight support in any basis.
Proof. For T_1, ..., T_8 and for T_{9,−1}, the proof follows by a direct calculation. The first part of the Macaulay2 file symmetryTightSupports.m2 in Appendix E of the Supplementary Material computes the dimensions of the symmetry algebras of interest in these cases.
The second part of the file deals with the case T_{9,μ} for μ ≠ −1. By tightness, dim g_{T_{9,μ}} ≥ 1.
The second part of the file symmetryTightSupports.m2 computes a matrix representation of ω_{T_{9,μ}}, depending on a parameter μ (t in the file). Let F_μ be this 27 × 27 matrix representation. It then suffices to select a 24 × 24 submatrix whose determinant is a nonzero univariate polynomial in μ: if μ is a value for which dim g_{T_{9,μ}} > 1, then μ has to be a root of this polynomial. In the example computed in the file, we select a 24 × 24 submatrix whose determinant is (μ + 1)^6 μ, showing that the only possible values of μ for which dim g_{T_{9,μ}} > 1 are μ = 0 and μ = −1. The case μ = −1 was considered separately, and the case μ = 0 does not correspond to an unextendable support, so it is not of interest. We point out, however, that rank(ω_{T_{9,0}}) = 24, namely dim g_{T_{9,0}} = 1.
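The submatrix-determinant argument can be illustrated on a toy parametric matrix (our own 3 × 3 example, not the actual 27 × 27 matrix F_μ): the rank of a chosen square submatrix can drop only at values of μ where its determinant vanishes.

```python
from sympy import symbols, Matrix, factor

# Toy version of the submatrix-determinant argument (not the actual F_mu):
# the rank of a chosen square submatrix of a parametric matrix can drop
# only where its determinant vanishes.
mu = symbols('mu')
F = Matrix([[1, mu,     2],
            [0, 1 + mu, 1],
            [1, 1 + mu, 3]])
sub = F.extract([0, 1], [0, 1])   # select a 2x2 submatrix
d = factor(sub.det())
print(d)  # mu + 1, so the rank of `sub` drops only at mu = -1
```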
For T_{9,μ}^{⊠2}, we follow essentially the same argument. By tightness and (12), we obtain dim g_{T_{9,μ}^{⊠2}} ≥ 2. The third part of symmetryTightSupports.m2 computes a matrix representation of the map ω_{T_{9,μ}^{⊠2}}, depending on the parameter μ: this is a 729 × 243 matrix of rank at most 239. In the example computed in the file, we select a 239 × 239 submatrix whose determinant is the univariate polynomial μ^8(μ + 1)^{12}. As before, we conclude.
We also provide the values of the border rank of the tensors in C^3 ⊗ C^3 ⊗ C^3 having unextendable tight support, and numerical evidence for the values of the border rank of their Kronecker squares; they are recorded in the following table. The values of the border rank for the T_i's are straightforward to verify. The lower bounds for the Kronecker squares are obtained via Koszul flattenings. In the cases labeled N/A, the upper bound coincides with the multiplicative upper bound; in the other cases, the upper bound is obtained via numerical methods, and the last column of the table records the ℓ_2 distance (in the given basis) between the tensor obtained via the numerical approximation and the Kronecker square. The numerical approximations are recorded in the supplementary files in Appendix F of the Supplementary Material.

In this section, we explain how to compute the matrices Φ_1, ..., Φ_4 from Section 3.4 and the matrices Ψ_1, ..., Ψ_8 from Section 3.5.
The matrices Φ_1, ..., Φ_4 and Ψ_1, ..., Ψ_8 arise via a series of tensor contractions of highly structured tensors. In this section, we introduce the notion of a box parametrized sequence of tensors. Lemma 6.2 below shows that contraction of box parametrized tensors gives rise to box parametrized tensors; moreover, the expression of the tensors resulting from the contraction is particularly easy to control.
We will then show that the tensors in Section 3.4 and Section 3.5 which give rise to the matrices Φ 1 , . . ., Φ 4 and Ψ 1 , . . ., Ψ 8 are box parametrized.This allows us to track down the entries of the final matrices as functions of the dimension q.
The full calculation of the matrices is left to the scripts available in Appendix D of the Supplementary Material.
The point of view is partially inspired by the interpretation of tensors in communication models, where a tensor on k factors is regarded as a function sending a k-tuple of integers to the corresponding coefficient of the tensor. Explicitly, for every j = 1, ..., k, fix a basis {v_i^{(j)}} on the j-th factor: given a finite support Σ ⊆ N^{×k}, the tensor T = Σ_{(i_1,...,i_k)∈Σ} t_{i_1,...,i_k} v_{i_1}^{(1)} ⊗ ⋯ ⊗ v_{i_k}^{(k)} corresponds to the function defined by T(i_1, ..., i_k) = t_{i_1,...,i_k}. We do not explicitly write the dimensions of the factors.
Let T = {T_q : q ∈ N} be a sequence of tensors of order k. We say that T is basic box parametrized if, for every q, T_q = p(q) Σ_{(i_1,...,i_k)∈Σ_q} v_{i_1}^{(1)} ⊗ ⋯ ⊗ v_{i_k}^{(k)}, where p(q) is a univariate polynomial in q and the support Σ_q is defined by conditions η_j q + ϑ_j ≤ i_j ≤ H_j q + Θ_j, with η_j, H_j ∈ {0, 1} and ϑ_j, Θ_j ∈ Z_{≥0}, together with any number (not depending on q) of equalities i_j = i_{j'} among indices. Without loss of generality, we assume that the inequalities are sharp for every j, in the sense that for every i_j satisfying the j-th inequality, the basis element v_{i_j}^{(j)} does appear in T_q. We often say that T is basic box parametrized for q ≥ q_0 for some q_0, in the sense that the sequence has the desired structure for q ≥ q_0.

Example 6.1. The sequence (3) is basic box parametrized for q ≥ 1, with support Σ_q defined by explicit conditions of the above form.

We define a contraction operation between the j_1-th and the j_2-th factor of T, obtained by summing over the corresponding indices: in other words, the contraction is the image of T under the trace map applied to the j_1-th and j_2-th factors, where {u_i^{(j)}} denotes the dual basis to the fixed basis {v_i^{(j)}} on the j-th factor.

Lemma 6.2. Let T, T' be basic box parametrized tensors for q ≥ q_0 and q ≥ q'_0, respectively. Then:
• T ⊗ T' is basic box parametrized for q ≥ max{q_0, q'_0};
• the contraction of T on factors j_1 and j_2 is basic box parametrized for q ≥ max{|ϑ_{j_1} − ϑ_{j_2}|, |Θ_{j_1} − Θ_{j_2}|, q_0}; moreover, if the univariate coefficient p(q) of T is a polynomial of degree e, then the coefficient of the tensor resulting from the contraction has degree at most e + 1.
Proof.The first statement is immediate.
For the second statement, without loss of generality assume j_1 = 1 and j_2 = 2. First observe that if T is basic box parametrized, then summing over the first index, or equivalently applying the linear map Σ_i u_i^{(1)}, generates a basic box parametrized tensor; the coefficient of this tensor has the same degree as the coefficient of T, unless the first index i_1 is not related by equality to any other index and η_1 = 0 and H_1 = 1, in which case the degree of the coefficient increases by one. Now, contraction of T on factors 1 and 2 is equivalent to first imposing the equality i_1 = i_2 on the support Σ_q and then summing over the first and second indices. Imposing the equality i_1 = i_2 affects the inequalities on i_1 and i_2 as follows: max{η_1 q + ϑ_1, η_2 q + ϑ_2} ≤ i_1 = i_2 ≤ min{H_1 q + Θ_1, H_2 q + Θ_2}.
Each of the two bounds can be replaced by one of the two linear functions (uniformly in q) whenever q ≥ max{|ϑ_1 − ϑ_2|, |Θ_1 − Θ_2|}. This, together with the previous observation, concludes the proof.
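For a fixed value of q, the contraction used in the proof (impose i_{j1} = i_{j2}, then sum over the repeated index) can be sketched on sparse tensors stored as coefficient dictionaries. This toy helper is ours; the actual scripts additionally track the polynomial coefficient p(q) and the bounds defining Σ_q symbolically:

```python
# Sparse order-k tensors as dicts from index tuples to coefficients
# (a fixed-q toy version of the box parametrized tensors; the helper is ours).

def contract(T, j1, j2):
    """Contract factors j1 and j2: impose i_{j1} = i_{j2}, then sum over them."""
    result = {}
    for idx, coeff in T.items():
        if idx[j1] != idx[j2]:
            continue                      # imposing the equality of indices
        new_idx = tuple(v for pos, v in enumerate(idx) if pos not in (j1, j2))
        result[new_idx] = result.get(new_idx, 0) + coeff
    return result

# T_cw,2 = sum_i (a_0 b_i c_i + a_i b_0 c_i + a_i b_i c_0) as an order-3 tensor:
T = {}
for i in (1, 2):
    for idx in [(0, i, i), (i, 0, i), (i, i, 0)]:
        T[idx] = T.get(idx, 0) + 1

print(contract(T, 1, 2))  # {(0,): 2}: only the a_0 b_i c_i terms survive
```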
Given two sequences of tensors T^{(1)}, T^{(2)} of order k, we define their sum as T^{(1)} + T^{(2)} = {T_q^{(1)} + T_q^{(2)} : q ∈ N}. We say that a sequence T is box parametrized (for q ≥ q_0) if T is a finite sum of basic box parametrized sequences of tensors (for q ≥ q_0). Observe that a sequence of tensors with constant dimensions is box parametrized if and only if its coefficients are univariate polynomials in q.
We will show that the maps Φ 1 , . . ., Φ 4 in the proof of Theorem 3.3 in Section 3.4 and the maps Ψ 1 , . . ., Ψ 8 in the proof of Theorem 3.4 in Section 3.5 are box parametrized.
The scripts in Appendix D perform the contraction of box parametrized tensors according to Lemma 6.2, keeping track of the univariate polynomial coefficients and of the lower bound q 0 for which the expressions are valid.The final result is that the maps Φ 1 , . . ., Φ 4 are box parametrized for q ≥ 5 and the maps Ψ 1 , . . ., Ψ 8 are box parametrized for q ≥ 6.
In the following, we show that the tensors involved in the various contractions are box parametrized.Lemma 6.2 guarantees that the results of the contractions are box parametrized as well.
First, notice that T_cw,q is box parametrized for q ≥ 1, as it is the sum of three tensors of the type described in Example 6.1. By Lemma 6.2, we deduce that T_cw,q^{⊗2} (regarded as a tensor of order 6) and T_cw,q^{⊗3} (regarded as a tensor of order 9) are box parametrized. In all three cases, the polynomials defining the coefficients have degree 0.

6.1. Restriction. We show that the two restriction maps φ_2 : A^{⊗2} → C^3 and φ_3 : A^{⊗3} → C^5 are box parametrized as tensors of order 3 and 4, respectively. Write φ_2 = X_0 ⊗ e_0 + X_1 ⊗ e_1 + X_2 ⊗ e_2, where C^3 = ⟨e_0, e_1, e_2⟩ and X_0, X_1, X_2 ∈ (A^{⊗2})*. It suffices to show that X_0, X_1, X_2 are box parametrized, regarded as tensors of order two; this is verified directly using a basis dual to the basis of A^{⊗2}.
6.2. Koszul maps. The Koszul differentials on C^3 and C^5 used in the definition of the Koszul flattenings are the skew-symmetric projections C^3 ⊗ C^3 → Λ^2 C^3 and Λ^2 C^5 ⊗ C^5 → Λ^3 C^5. They both have fixed size, therefore they are box parametrized.
We analyze the square case in detail. For the square case, let M be the matrix of the change of basis on C^q from the basis {e_1, ..., e_q} to the basis {e_1, e_2, e_3, Σ_{i=4}^q e_i, e_5 − e_4, ..., e_q − e_{q−1}}. In particular, M diagonalizes the action of S_{q−3}, and therefore the change of basis defined by Id_{C^3} ⊠ M^{⊠2} on C^3 ⊗ B^{⊗2} brings the matrix representing (φ_2(T_cw,q^{⊠2}))^{∧1} into a block diagonal matrix, whose diagonal blocks are matrices representing the maps f_i : C^3 ⊗ (M_i ⊗ W_i) → Λ^2 C^3 ⊗ (M_i ⊗ W_i) from (10); denote the diagonal blocks by f_1^M, ..., f_4^M.
Because of our choice of basis, the multiplicity subspaces C^3 ⊗ w_i ⊗ M_i and Λ^2 C^3 ⊗ w_i ⊗ M_i described in Section 3.4 are spanned by basis vectors, so that the matrices representing Φ_1, ..., Φ_4 are given by submatrices of f_1^M, ..., f_4^M. More precisely, set π_inv, π_std to be the matrices of the two coordinate projections of C^q onto ⟨e_1, ..., e_4⟩ and ⟨e_5, ..., e_q⟩. Since the composition can be performed on the single factors, by Lemma 6.2 it suffices to show that the four matrices M^{−1} ∘ π_inv^T, M^{−1} ∘ π_std^T, π_inv ∘ M and π_std ∘ M are box parametrized. From the structure of M, it is clear that π_inv ∘ M and π_std ∘ M are box parametrized. The computation of M^{−1} is straightforward, and it is easy to see that M^{−1} ∘ π_inv^T and M^{−1} ∘ π_std^T are box parametrized.
This shows that Φ_1, ..., Φ_4 are box parametrized. The script available in Appendix D computes the box parametrized representation of Φ_1, ..., Φ_4 starting from the box parametrized versions of T_cw,q, the restriction map φ_2, the Koszul differential and the four matrices above. The cube case is similar: the restriction space C^3 is replaced by C^5, the top left block in the matrix M is a 5 × 5 identity block, the result of the conjugation by M is block diagonal with 8 blocks corresponding to the eight isotypic components, and the coordinate projections π_inv and π_std are onto ⟨e_1, ..., e_6⟩ and ⟨e_7, ..., e_q⟩. The script computes the box parametrized representation of the matrices Ψ_1, ..., Ψ_8.

[Table: T | \underline{R}(T) | \underline{R}(T^{⊠2}) | ℓ_2 error for the upper bound in the T^{⊠2} decomposition]