Computing images of polynomial maps

The image of a polynomial map is a constructible set. While computing its closure is standard in computer algebra systems, a procedure for computing the constructible set itself is not. We provide a new algorithm, based on algebro-geometric techniques, addressing this problem. We also apply these methods to answer a question of W. Hackbusch on the non-closedness of site-independent cyclic matrix product states for infinitely many parameters.


Introduction
Determining the image of a polynomial map is of fundamental importance in numerous disciplines of mathematics. In particular, this problem comes up in dealing with parametrizations of (unirational) varieties, a situation which arises frequently in theory and in application, for instance in low-rank tensor approximation.
Given a projective variety X ⊂ P^n_C, we compute the image of a polynomial map f : X ⇢ P^m_C. This setting easily extends to rational maps from affine varieties to affine spaces; see Section 2.1.
Our primary goal is to develop an algorithm to compute this image. We have two design principles regarding the output: first, it should give immediate insight to a human, and second, a computer using our output should be able to determine instantly if a point in the codomain belongs to the image. Let us emphasize here that the output we produce will make it clear at first sight whether or not the image is closed.
We begin with a simple example. Consider the Cremona transformation f : P^2 ⇢ P^2 defined by [x_0 : x_1 : x_2] ↦ [x_1 x_2 : x_0 x_2 : x_0 x_1]. Let us write the image of f as a constructible set V_1 \ (V_2 \ V_3), where V_1 = P^2, V_2 = Z(y_0 y_1 y_2) and V_3 = Z(y_0 y_1, y_0 y_2, y_1 y_2). Here we represent closed algebraic sets as the zero sets of ideals, written Z(I).
It is more convenient, however, to decompose the V_i's into their irreducible components and store the containment relations in the form of a graph.
Then V_2 = L_1 ∪ L_2 ∪ L_3 and V_3 = p_1 ∪ p_2 ∪ p_3, where the L_i's (resp. p_i's) are the three lines (resp. points) in P^2 defined by the vanishing of coordinates. The image of the Cremona transformation can now be presented as a graph as in Figure 1a.
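The strata above can be checked numerically. The following small sketch (our own illustration, not the paper's implementation) samples the Cremona map and verifies which parts of the decomposition its values hit.

```python
import random

# A numerical sketch (ours) of the Cremona map
# f([x0:x1:x2]) = [x1*x2 : x0*x2 : x0*x1], checking which strata of the
# decomposition V1 \ (V2 \ V3) its values land in.
def cremona(p):
    x0, x1, x2 = p
    return (x1 * x2, x0 * x2, x0 * x1)

def on_coordinate_line(y, tol=1e-9):
    # V2 = Z(y0*y1*y2): some coordinate vanishes
    return any(abs(c) < tol for c in y)

def is_coordinate_point(y, tol=1e-9):
    # V3: at least two coordinates vanish
    return sum(1 for c in y if abs(c) < tol) >= 2

random.seed(0)
# Generic points land in the open stratum V1 \ V2.
for _ in range(100):
    p = tuple(random.uniform(0.5, 2.0) for _ in range(3))
    assert not on_coordinate_line(cremona(p))

# A point with exactly one vanishing coordinate maps to a coordinate point:
# the only image points on the lines V2 are the points of V3, added back in.
img = cremona((0.0, 1.3, 0.7))
assert on_coordinate_line(img) and is_coordinate_point(img)
```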
In our implementation the image is represented in the form of a tree. For the Cremona transformation it is depicted in Figure 1b, while the output of our implementation is presented in Figure 2. Standard methods exist for determining the closure of the image. They rely on Gröbner basis computations and are implemented in any general mathematical software, cf. [6, §3.3]. As far as we are aware, however, the only software which computes the image of a polynomial map is PolynomialMapImage in the Maple™ module RegularChains [4,21]. This program uses triangular decompositions, a technique well developed in algorithmics [31], but which is not part of the canon of algebraic geometry.
Our algorithm relies on a central technique in algebraic geometry: resolving a rational map through blow-ups. Our implementation of the algorithm is called TotalImage. It compares favorably to PolynomialMapImage in our tests; see Section 3.2 for a detailed comparison.
We also demonstrate how one can make theoretical use of the idea behind this algorithm to prove that an image is not closed without computing the entire image. In the process, we prove that the set of tensors admitting a site-independent cyclic matrix product state (IMPS) representation with fixed rank is not closed (Theorem 4.5). This answers a question posed by W. Hackbusch. A cousin of this problem, deciding whether the set of tensors admitting a matrix product state (MPS) representation forms a closed set, was posed by L. Grasedyck and settled in [20].
We will now describe three domains of application in which the determination of the image of a map plays a crucial role.

Physics
Tensors play a prominent role in physics, for instance in the representation of quantum states. An issue is that relevant tensors often appear in spaces of huge dimension, making them practically impossible to work with directly.
A way around this problem is to find compact representations of a tensor, such as low-rank presentations (also known as the canonical polyadic decomposition) or tensor networks [13]. In practice, one often gives an algebraic parametrization of a family of well-behaving tensors, such as those admitting compact representations. It is of great concern from the point of view of numerical mathematics to decide whether the image of such a parametrization map is closed.
In other words, one wishes to know in advance whether a sequence of well-behaving tensors T_n, approximating an arbitrary tensor T, will converge to a good approximation T_∞ within the set of well-behaving tensors. For example, for a specified r and a real tensor T, there may be no best-possible real rank-r approximation of T. In fact, this happens with positive probability in the choice of T [8]. The complex case, where such phenomena do not take place, along with examples when best rank approximations do not exist, is discussed in [25].
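A classical instance of this phenomenon (standard in the tensor literature, not taken from this paper) can be checked numerically: the tensor W = x⊗x⊗y + x⊗y⊗x + y⊗x⊗x has rank 3 yet is a limit of rank-2 tensors, so the set of tensors of rank at most 2 is not closed.

```python
import numpy as np

# W has rank 3 but border rank 2: it is the limit of the rank-2 tensors
# ((x + eps*y)^{⊗3} - x^{⊗3}) / eps as eps -> 0.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

def rank_one(a, b, c):
    return np.einsum('i,j,k->ijk', a, b, c)

W = rank_one(x, x, y) + rank_one(x, y, x) + rank_one(y, x, x)

def rank_two_approx(eps):
    # a difference of two rank-one terms, hence rank at most 2
    u = x + eps * y
    return (rank_one(u, u, u) - rank_one(x, x, x)) / eps

for eps in [1e-1, 1e-3, 1e-5]:
    err = np.abs(rank_two_approx(eps) - W).max()
    assert err <= 3 * eps  # the approximation error decays linearly in eps
```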

Statistics
A statistical model is a parametric family of probability distributions. A large class of statistical models are parametrized by algebraic maps [7,23,27].
The primary question about a statistical model is whether a given, i.e. observed, probability distribution fits the model. To attack this question, one wishes to describe the real image of the parametrization corresponding to the model within the space of all probability distributions.
In this paper we only deal with the complex image of algebraic maps. However, the complex image, being larger, often gives a good first test for the fitness of a statistical model.

Computational Sciences
Tensors represent multi-linear maps. Good representations of a tensor, for instance its rank decomposition, yield algorithms of lower complexity [18,19].
A famous example demonstrating this relationship is matrix multiplication. The multiplication of two n × n matrices is a bilinear operation and thus is represented by a 3-dimensional tensor. The complexity of the optimal algorithm for multiplying matrices is known to be governed by the rank (or border rank) of the associated tensor, see [18,19]. (Let us point out that it is not known in general if the rank and the border rank of the matrix multiplication tensor coincide.) Computing the tensor rank (as well as determining if the tensor rank equals the border rank) of a given tensor T is equivalent to the problem of deciding whether T belongs to the image of an algebraic map (or its closure).
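To make the correspondence concrete, here is a small sketch (our own vectorization conventions, not the paper's) of the n × n matrix multiplication tensor as an order-3 tensor, verified by contracting it against two matrices.

```python
import numpy as np

# The matrix multiplication tensor T in K^{n^2} x K^{n^2} x K^{n^2}:
# contracting T with vec(A) and vec(B) returns vec(A @ B).
def matmul_tensor(n):
    T = np.zeros((n * n, n * n, n * n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # (A @ B)[i, k] += A[i, j] * B[j, k]
                T[i * n + j, j * n + k, i * n + k] = 1.0
    return T

n = 2
T = matmul_tensor(n)
vec = lambda M: M.reshape(-1)   # row-major flattening
rng = np.random.default_rng(0)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C = np.einsum('abc,a,b->c', T, vec(A), vec(B)).reshape(n, n)
assert np.allclose(C, A @ B)
```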
We start by presenting preliminaries in Section 2. In Section 3 we present our algorithm for the computation of images. In Section 4 we answer the question of W. Hackbusch by proving that IMPS tensors do not form a closed set in general. In Section 5 we present in detail two explicit examples inspired by statistics and physics. Some of the proofs and remarks are postponed to the Appendix.

Acknowledgments
We thank Wolfgang Hackbusch for posing the question which motivated this work, and for the stimulating discussions. We are grateful to Bernd Sturmfels and Michael Joswig for many suggestions and encouraging remarks.

Preliminaries
A map f : C^n → C^m of the form x ↦ (f_1(x), . . . , f_m(x)), where the f_i are polynomials in the coordinates of x = (x_1, . . . , x_n), is called a polynomial map. If the f_i are given as quotients of two polynomials, then f is called a map of rational functions. Note that if the f_i are not polynomial, the map f is not well defined everywhere in the domain and we use the notation f : C^n ⇢ C^m to allow for this possibility. The goal of this paper is to compute the image of a map of rational functions. There is another case of interest, however, which turns out to generalize the one above while providing a more advantageous perspective.
Consider a map f : P^n ⇢ P^m defined by [x_0 : · · · : x_n] ↦ [f_0(x) : · · · : f_m(x)], where the f_i are rational functions. This makes sense only when the f_i are homogeneous of the same degree.
When each f_i is a polynomial we may emphasize this fact by referring to f as a polynomial map. Note that even when f is a polynomial map, f need not be well-defined on the entire domain and we will use the notation P^n ⇢ P^m to highlight this fact.

From affine rational to projective polynomial maps
Although one can extend a map of rational functions f : C^n ⇢ C^m to a polynomial map P^n ⇢ P^m, the standard way of doing so may change the image. The trick below allows one to perform this extension without changing the image.
Write f_i = p_i/q_i and let h be a common denominator of f_1, . . . , f_m. Homogenize the m + 1 functions h, h·f_1, . . . , h·f_m to a common degree and multiply each by the homogenizing coordinate x_0; the resulting polynomials define a map g : P^n ⇢ P^m. This map is undefined wherever the previous map was undefined and additionally at the hyperplane at infinity (unless the map is constant). In particular, it has the same image. Further, we can convert any rational map f : P^n ⇢ P^m, defined by rational functions f_i = g_i/h_i, into the map f′ defined by the polynomials g_i · ∏_{j≠i} h_j. The images of f and f′ coincide, while f′ is polynomial.
For these reasons, in the rest of this paper we will concentrate on polynomial maps P^n ⇢ P^m and their restrictions to subvarieties X ⊂ P^n.
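As a concrete sanity check of the extension trick, the following sketch (our own example f(x) = x²/(x − 1), not one from the paper) verifies both properties: agreement with [1 : f(x)] on the affine chart, and indeterminacy along the hyperplane at infinity.

```python
import sympy as sp

# Extend f(x) = x**2/(x - 1) : C ⇢ C to g : P^1 ⇢ P^1. Homogenizing via
# x = x1/x0 and clearing the denominator gives [x0*(x1 - x0) : x1**2];
# multiplying through by x0 makes g additionally undefined where x0 = 0.
x0, x1, t = sp.symbols('x0 x1 t')
g = (x0**2 * (x1 - x0), x0 * x1**2)

# On the affine chart x0 = 1 the map agrees with [1 : f(x)] ...
f = t**2 / (t - 1)
chart = [gi.subs({x0: 1, x1: t}) for gi in g]
assert sp.simplify(chart[1] / chart[0] - f) == 0

# ... and every defining polynomial vanishes on the hyperplane at infinity,
# so g is undefined there and no new image points are introduced.
assert all(sp.expand(gi.subs(x0, 0)) == 0 for gi in g)
```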

Image of a variety
In this section we describe our algorithm for computing the image of a polynomial map de ned on a projective variety. Let us emphasize that we work over the complex numbers and point to the references [6,26] for the basic facts we will be using from algebraic geometry.

Constructible sets
Our starting point is Chevalley's theorem on constructible sets.

Theorem 3.1 (Chevalley). Let f : P^n ⇢ P^k be a rational map and V ⊂ P^n a variety. If B is the base locus of f, then f(V \ B) is a constructible set.
In other words, the image can be described by a finite sequence of algebraic sets (Z_1, . . . , Z_k) such that Z_1 ⊇ Z_2 ⊇ · · · ⊇ Z_k and the image equals Z_1 \ (Z_2 \ (· · · \ Z_k)). The last of these conditions is often written in the form Z_1 − Z_2 + · · · ± Z_k. Here we note that the subtraction and addition operations on sets do not commute.
Definition 3.2. For a constructible set C, a representation C = V_1 − V_2 + · · · + (−1)^{ℓ+1} V_ℓ will be called canonical if V_1 is the closure of C and, for every i, the variety V_{i+1} is the closure of V_i \ C when i is odd and the closure of V_i ∩ C when i is even. In particular, V_1 ⊋ V_2 ⊋ · · · ⊋ V_ℓ.

Presenting constructible sets as graphs.
Throughout, graphs are simple and connected. A graph with a distinguished vertex r and no cycles is called a tree. The vertex r is called the root. On each edge of a tree T we can choose an orientation so that the edge points away from r. With this orientation, we will view our trees as directed graphs. Let T be a tree. If n → n′ is an edge of T, then n′ is called a child of n, and n is called the parent of n′. The vertices with no children are called leaves. The root is the only vertex with no parent. A tree has a natural grading, called depth, given by the path length from the root.
We can represent an irreducible constructible set C as a tree with vertices labeled by varieties. The construction is inductive. If C is closed, we represent it by a tree with a single vertex r labeled by C. Otherwise, let V_i be the irreducible components of the difference of the closure of C and C, and by induction let T_i be the tree representing the constructible set V_i. The tree T corresponding to C is then constructed as follows: we label the root r of T by the closure of C and then attach the root of each T_i to r.
Definition 3.3. Any labeled tree obtained from a constructible set by the construction above will be called a constructible tree.
From the tree T we can recover the corresponding constructible set C(T) inductively as follows: C(T) = V(r) \ (C(T_{n_1}) ∪ · · · ∪ C(T_{n_s})), where n_1, . . . , n_s are the children of the root r and T_n ⊂ T is the subtree with root n.
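This recovery can be sketched in a few lines of code (our own toy illustration, not part of TotalImage; the labels are hardcoded membership tests for the Cremona tree from the introduction).

```python
# Each vertex carries a membership test for the closed variety labeling it;
# C(T) = V(root) minus the union of the C(T_n) over the root's children.
class Node:
    def __init__(self, contains, children=()):
        self.contains = contains          # membership test for the label
        self.children = list(children)

def in_constructible(node, p):
    if not node.contains(p):
        return False
    return not any(in_constructible(c, p) for c in node.children)

# Tree for the Cremona image: P^2 minus the three coordinate lines,
# with the three coordinate points added back in.
point = lambda i: Node(lambda p, i=i: all(c == 0 for j, c in enumerate(p) if j != i))
line  = lambda i: Node(lambda p, i=i: p[i] == 0, [point(j) for j in range(3) if j != i])
tree  = Node(lambda p: True, [line(i) for i in range(3)])

assert in_constructible(tree, (1, 2, 3))      # generic point: in the image
assert not in_constructible(tree, (0, 1, 1))  # on a line, not a coord. point
assert in_constructible(tree, (0, 0, 1))      # coordinate point: added back
```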
The parity of the depth of a vertex n determines whether or not the generic point of V(n) belongs to C(T). Therefore, to obtain a constructible graph from a constructible tree, we may identify vertices having the same label.

Remark 3.5. For our main algorithm, it seems more natural to define and use constructible trees. However, constructible graphs are a more compact representation of a constructible set. An implementation utilizing the graph structure would likely yield savings in time and memory (cf. [9,15]).

Remark 3.6. If f : A^n → A^m is given by monomials, the constructible graph representing Im(f) is a subposet of the face lattice of the Newton polytope of f; see [10].
In general, if f is a toric map, then the constructible graph representing Im(f) is a subposet of the face lattice of the polytope of characters defining f.

An algorithm for computing images
Let X ⊂ P^n be a variety and f : X ⇢ P^m a polynomial map x ↦ [f_0(x) : · · · : f_m(x)] in the coordinates of P^n.
Definition 3.7. The indeterminacy locus of f is the subscheme B of X cut out by the ideal (f_0, . . . , f_m).

Definition 3.8. An algebraic set A contained in the closure of Im(f) and containing the difference of the closure of Im(f) and Im(f) will be called an (image) frame of f, since it covers the boundary of the image.
We start by describing a subroutine Frame which computes an image frame of f : X ⇢ P^m. The idea is to resolve the map f by the blowup Bl_B X of X along the indeterminacy locus B and compute the image of the exceptional divisor.
However, if dim X is strictly greater than the dimension of the image, the images of the exceptional divisors may dominate the image of f . To resolve this issue we will cut down the dimension of X by taking an appropriate linear section of X.
Definition 3.9. Let Λ ⊂ P^n be a linear space and δ = dim X − dim(X ∩ Λ). Then X ∩ Λ is a codimension-δ linear section of X. If Im(f) and Im(f|_{X∩Λ}) have the same closure, then the linear section X ∩ Λ will be called generic.
Let δ := dim X − dim Im(f) and pick a generic codimension-δ linear section X′ := X ∩ Λ of X. Blowing up X′ along the indeterminacy locus of f′ := f|_{X′} gives a resolution f̃ : Bl_B X′ → P^m. Computing the images of the exceptional divisors via f̃ gives a frame of f′, which in turn is a frame for f. All these statements will be proved in Lemma 3.10.
procedure Frame(f : X ⇢ P^m)
    δ ← dim X − dim Im(f)
    if δ > 0 then
        replace X by a generic codimension-δ linear section and f by its restriction
    end if
    B ← indeterminacy locus of f
    E ← exceptional divisor of the blowup Bl_B X
    f̃ ← the resolution Bl_B X → P^m induced by f
    return irreducible components of f̃(E)
end procedure

Lemma 3.10. Let X ⊂ P^n be irreducible. Then Frame(f : X ⇢ P^m) returns the irreducible components of a frame A of f.
Proof. By taking a generic linear section X′ of X, we ensure that dim X′ = dim Im(f) and that f′ := f|_{X′} has the same image closure as f. Then the exceptional divisor E of the blowup of X′ has dimension strictly less than dim Im(f). Therefore the image A of E is strictly contained in the closure of Im(f).
The blowup π : Bl_B X′ → X′ gives a resolution f̃ : Bl_B X′ → P^m of f′. Note that Im(f̃) is the closure of Im(f′).
In particular, for any point y in the closure of Im(f′) but not in Im(f′), we can find a point x ∈ Bl_B X′ satisfying f̃(x) = y. Then π(x) must be in the indeterminacy locus of f′. Therefore, x ∈ E and y ∈ A.
The idea behind the main algorithm TotalImage is to compute successively finer approximations of the image boundary, the difference between the closure of Im(f) and Im(f) itself. We now give an informal demonstration of how these approximations can be obtained.

Let Y_0 be the closure of Im(f) and let A_0 be a frame of f, so that Y_0 \ A_0 ⊆ Im(f) ⊆ Y_0. We improve this approximation as follows. Define X_1 := f^{−1}(A_0) ⊂ X to be the preimage of A_0 and let Y_1 be the closure of Im(f|_{X_1}). Note that the image of f|_{X_1} is precisely A_0 ∩ Im(f) ⊆ Y_1. In particular, taking a frame A_1 of f|_{X_1} yields the finer approximation Y_0 \ (A_0 \ (Y_1 \ A_1)), and so on.

The frames are meant to get strictly smaller in dimension. Therefore, after at most N := dim Y_0 iterations the frame A_N should be empty. This gives Y_0 − A_0 + Y_1 − A_1 + · · · + Y_N − A_N, which expresses Im(f) exactly when A_N = ∅.

Note that our algorithm for computing frames uses the irreducibility of the domain in a crucial way. This means that we need to decompose each pullback of a frame into its irreducible components. This causes the algorithm to branch at each step, making the construction above harder to visualize. Furthermore, the output will be in the form of a constructible tree. Nevertheless, the nature of the argument remains the same and we prove in Theorem 3.13 that the resulting constructible tree represents Im(f).
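Once the sets Y_i and A_i are known, the nested approximation can be evaluated mechanically. The following sketch (our own abstraction, with varieties replaced by membership predicates and the Cremona strata hardcoded) shows the evaluation order of Y_0 \ (A_0 \ (Y_1 \ (A_1 \ ...))).

```python
# chain = [Y0, A0, Y1, A1, ..., Y_N, A_N] of membership predicates; A_N empty.
def in_image(chain, p):
    if not chain:
        return False
    Y, A, rest = chain[0], chain[1], chain[2:]
    if not Y(p):
        return False
    if not A(p):
        return True          # p in Y_i but outside the frame: in the image
    return in_image(rest, p)  # p in the frame: defer to the next refinement

# Toy instance: the Cremona image in P^2 (cf. the introduction), where
# Y0 = P^2, A0 = three coordinate lines, Y1 = three coordinate points, A1 = empty.
Y0 = lambda p: True
A0 = lambda p: 0 in p                            # some coordinate vanishes
Y1 = lambda p: sum(1 for c in p if c == 0) >= 2  # coordinate points
A1 = lambda p: False
chain = [Y0, A0, Y1, A1]

assert in_image(chain, (1, 2, 3))
assert not in_image(chain, (0, 1, 1))
assert in_image(chain, (0, 0, 1))
```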
In preparation for the main algorithm, we fix the following convention: assigning varieties to vertices in the algorithm means creating vertices labeled by these varieties. We are ready to present the main algorithm of this paper, which computes the image of a polynomial map (as we prove in Theorem 3.13).

Cleaning the tree
The tree we construct at the end of the loop in TotalImage may contain edges of the form n → n′ where V(n) = V(n′). This happens when a component of a frame is dominated by the image. Then we can delete the vertices n and n′ and attach the descendants of n′ to the parent of n (cf. Figure 4).

There is another instance of redundancy. It may be that V(n) has two children V(n′) and V(n″) with V(n″) ⊂ V(n′). It is then unnecessary to keep both of these branches of the tree. We will remove V(n″) and all its descendants (cf. Figure 5).

Figure 5: The tree before and after cleaning

Justification of the algorithm
Let f : X ⇢ P^m be a polynomial map. We start by proving a lemma showing that our algorithm correctly describes the image.

Lemma 3.12. Before we clean the tree, for any point p in (resp. not in) the image, any longest path starting at the root and going through vertices labeled by varieties containing p goes through an odd (resp. even) number of vertices.
Proof. The proof is by induction on dim Im(f), the case dim Im(f) = 0 being trivial. If p does not lie in the closure of Im(f) we are done, so assume it does. If p is not in the image of the exceptional divisor, then p ∈ Im(f) and the claim is true. Otherwise, p belongs to a component Z of the image of the exceptional divisor. Our algorithm will work on the components of f^{−1}(Z) and we conclude by induction.
We note that Lemma 3.12 remains true also after cleaning the tree.

Theorem 3.13. The algorithm TotalImage terminates and outputs a constructible tree for the canonical representation of Im(f).
Proof. The algorithm stops, as the frames (which always appear at odd levels) have dimension strictly smaller than their parents.
Let V_1 − V_2 + · · · + (−1)^{ℓ+1} V_ℓ be the canonical representation of Im(f) as in Definition 3.2. We prove the following statements by induction on i: 1. all components of V_i appear at depth i of the tree; 2. all labels at depth i are subvarieties of V_i.
Note that for i odd (resp. even) the components of V_i in the canonical representation are the largest subvarieties of V_{i−1} with generic points in (resp. not in) Im(f). The claim is true for i = 1. Let Z be a component of V_i for i even (resp. odd). Suppose Z ⊂ Y for Y a component of V_{i−1}. Then Z must be a child of Y by Lemma 3.12, which proves the first point.
For the second point, consider a label W at depth i, a child of Y. By induction Y is contained in a component Ỹ of V_{i−1}. A generic point of W is (resp. is not) in the image, so W must belong to a component of V_i.

Corollary 3.14. The image Im(f) is closed if and only if the output of TotalImage consists of a single vertex.

Proof. The canonical representation of a set has a single term if and only if the set is closed.

Running time
We compared our implementation to PolynomialMapImage; here we present timings for comparison. We used the following examples as benchmarks:

1. The Whitney Umbrella example from the documentation for PolynomialMapImage: (x, y) ↦ (xy, x, y^2).
3. The map G given by the gradient of xyz(x + y + z): (x, y, z) ↦ (2xyz + y^2 z + y z^2, x^2 z + 2xyz + x z^2, x^2 y + x y^2 + 2xyz).

Let us point out that the two algorithms produce outputs of a different nature. As an example we compare the outputs for Item 1 in our list, the Whitney Umbrella. The image is a closed surface in C^3. This is demonstrated by the fact that our output has a single node.
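The closedness of the Whitney Umbrella image can be double-checked symbolically (our own verification, independent of either implementation): the image satisfies a² = b²c, and conversely every point of that surface has an explicit preimage.

```python
import sympy as sp

# The Whitney Umbrella map (x, y) -> (x*y, x, y**2).
x, y = sp.symbols('x y')
a, b, c = x * y, x, y**2
# The image lies on the surface Z(a^2 - b^2*c).
assert sp.expand(a**2 - b**2 * c) == 0

# Conversely: if b != 0 take (x, y) = (B, A/B); if b = 0 then a = 0 and
# (x, y) = (0, sqrt(C)) works over C, so the image is the whole surface.
A, B, C = sp.symbols('A B C')
pre = {x: B, y: A / B}
assert sp.simplify((x * y).subs(pre) - A) == 0
residual = sp.simplify((y**2).subs(pre) - C)      # equals (A^2 - B^2*C)/B^2
assert sp.simplify(residual.subs(A**2, B**2 * C)) == 0
```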

Site-independent (cyclic) matrix product state
Matrix product states (MPS) and their more symmetric version, site-independent cyclic matrix product states (IMPS), play an important role in quantum physics and quantum chemistry [29]. They are applied, for instance, to compute the eigenstates of the Schrödinger equation. As numerical methods are often involved in their study, the question whether the families of tensors that allow such representations are closed is central; it was asked by W. Hackbusch and L. Grasedyck.
To answer these questions we present the families of tensors that allow a representation as a matrix product state as orbits under a group action. The equivalence of the classical de nition and ours is proved in the Appendix.
We begin by picking a special element of IMPS and describing IMPS as the orbit of this element with respect to changes of coordinates. This gives an explicit parametrization of IMPS, and we will show that this parametrization map does not have closed image, proving that IMPS is not closed. The element we pick for this purpose is the iterated matrix multiplication tensor.
Since what we do here works equally well over R or C, we use the letter K to stand for one of these fields.
Definition 4.1 (Iterated matrix multiplication tensor). For positive integers a_1, . . . , a_q define the tensor M_{a_1,...,a_q} ∈ K^{a_1×a_2} ⊗ K^{a_2×a_3} ⊗ · · · ⊗ K^{a_q×a_1} as

M_{a_1,...,a_q} := Σ_{1 ≤ i_j ≤ a_j} e_{i_1,i_2} ⊗ e_{i_2,i_3} ⊗ · · · ⊗ e_{i_{q−1},i_q} ⊗ e_{i_q,i_1},

where the e_{i_j,i_{j+1}} are the basis vectors of the space of matrices K^{a_j×a_{j+1}}.
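Definition 4.1 can be transcribed directly into code for square formats; the sketch below (our own index-flattening conventions) builds M_{r,...,r} and checks that contracting it against matrices recovers the trace of their product.

```python
import numpy as np
from itertools import product

# M_{r,...,r} has a 1 in position (e_{i1,i2}, e_{i2,i3}, ..., e_{iq,i1}),
# with e_{p,q} flattened to index p*r + q.
def iterated_mm_tensor(r, q):
    T = np.zeros((r * r,) * q)
    for idx in product(range(r), repeat=q):
        pos = tuple(idx[j] * r + idx[(j + 1) % q] for j in range(q))
        T[pos] += 1.0
    return T

# M_{2,2,2} is the 2x2 matrix multiplication tensor: it has exactly 8
# nonzero entries, one for each monomial in its expansion.
T = iterated_mm_tensor(2, 3)
assert T.shape == (4, 4, 4) and int(T.sum()) == 8

# Contracting against vectorized matrices recovers tr(A B C).
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))
val = np.einsum('abc,a,b,c->', T, A.reshape(-1), B.reshape(-1), C.reshape(-1))
assert np.isclose(val, np.trace(A @ B @ C))
```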
The following statement may be taken as a working definition of IMPS and MPS. The result itself is a generalization of [20, Proposition 2.0.1].

Proposition 4.2. A tensor T lies in IMPS(r, k, q) if and only if T = L^{⊗q}(M_{r,...,r}) for some linear map L ∈ Hom(K^{r×r}, K^k). Similarly, T lies in MPS(a, b, q) if and only if T = (L_1 ⊗ · · · ⊗ L_q)(M_{a_1,...,a_q}) for linear maps L_j ∈ Hom(K^{a_j×a_{j+1}}, K^{b_j}).

In particular, IMPS(r, k, q) is the image of the polynomial map ψ_{r,k,q} : Hom(K^{r×r}, K^k) → (K^k)^{⊗q} sending L to L^{⊗q}(M_{r,...,r}).
One of the main motivations for this article was the following question posed by W. Hackbusch:

Question 4.4. Is the set IMPS(r, k, q) closed for every k, r and q?
To be more precise, W. Hackbusch expected a negative answer to the above question and also asked for an explicit tensor T in the closure of IMPS(r, k, q) but not in IMPS(r, k, q). An analogous question for MPS was asked by L. Grasedyck in the context of quantum information theory and was completely answered in [20].
It is an easy exercise to show that when q = 2 both IMPS and MPS are closed. Below we present infinitely many values of (r, k, q) for which IMPS(r, k, q) is not closed. In fact, we give an explicit tensor T in the closure of IMPS(r, k, q) such that T is not even in MPS((r, . . . , r), (k, . . . , k), q), let alone in IMPS(r, k, q) (see Remark 4.3). This demonstrates that MPS is also not closed in these examples, as predicted by Theorem 1.3.2 of [20].
This puts us exactly within the context of the current article. We now show that the image of ψ_{r,k,q} is not closed by constructing a point in its closure which is demonstrably not hit by ψ_{r,k,q}. Here we will use the idea of approximating the boundary of the image (cf. Section 3.1). In general, the approximation is done by blowing up the indeterminacy locus and computing its image. However, individual points in this approximate boundary may be constructed analytically by approaching the indeterminacy locus along a path γ : (0, 1] → Hom(K^{r×r}, K^k) and computing the limit of a suitable rescaling of ψ_{r,k,q}(γ(t)) as t → 0.

Consider the curves γ(t) := M + t · id and c(t) := t^{−c} ψ_{2,4,3}(γ(t)) for the appropriate power of t. Let us denote by e_1^∨, e_2^∨, e_3^∨, e_4^∨ the dual basis to e_1, e_2, e_3, e_4; the map M can then be written explicitly in these bases. From this point onwards we suppress the tensor notation, writing the tensor product as an ordinary product, as no confusion is likely. Recall that, under the identification K^{2×2} ≅ K^4 via e_1 = e_{1,1}, e_2 = e_{1,2}, e_3 = e_{2,1}, e_4 = e_{2,2}, we have M_{2,2,2} = e_1 e_1 e_1 + e_1 e_2 e_3 + e_2 e_3 e_1 + e_2 e_4 e_3 + e_3 e_1 e_2 + e_3 e_2 e_4 + e_4 e_3 e_2 + e_4 e_4 e_4.
Therefore we can write ψ_{2,4,3}(γ(t)) = γ(t)^{⊗3}(M_{2,2,2}) and expand it in powers of t. It is now clear that ψ_{2,4,3}(γ(t)) ≠ 0 when t ≠ 0, so that ψ_{2,4,3}([γ(t)]) is well-defined. We then define D := lim_{t→0} c(t). We now prove that D is not in MPS. Suppose for contradiction that it were. Then using Proposition 4.2 we can find three linear maps L_1, L_2, L_3 in Hom(K^{2×2}, K^4) such that D = (L_1 ⊗ L_2 ⊗ L_3)(M_{2,2,2}).
We will now show that each L_i is an isomorphism.
Denoting by V_i ⊂ K^4 the image of L_i, we have D ∈ V_1 ⊗ V_2 ⊗ V_3 by design. Contracting the second and third tensor factors via V_2^∨ and V_3^∨ respectively, the element D may also be viewed as a linear map D_1 : V_2^∨ ⊗ V_3^∨ → V_1. However, it is clear that the image of D_1 is spanned by b_1, b_2, b_3, b_4. This forces V_1 = K^4, which in turn implies that L_1 is an isomorphism. Similarly, we can show L_2 and L_3 are isomorphisms. Therefore, D is isomorphic to M_{2,2,2}. The multiplication tensor M_{2,2,2} is known to have tensor rank 7 [14,16,17,30], but we already have a rank-6 decomposition of D, a contradiction.
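The rank-7 upper bound for M_{2,2,2} comes from Strassen's algorithm; the following check (standard material, included for illustration) verifies that seven multiplications indeed suffice for the 2 × 2 matrix product, which is equivalent to a rank-7 decomposition of M_{2,2,2}.

```python
import numpy as np

# Strassen's seven products for 2x2 matrices; together with the known
# lower bound this pins the rank of M_{2,2,2} at exactly 7.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
assert np.allclose(strassen_2x2(A, B), A @ B)
```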
We stated Theorem 4.5 in a way that allows the proof to be written explicitly. However, with minor modification the proof extends to the case of arbitrary odd q.
Proof. Here we simply outline the proof, in comparison to the proof of Theorem 4.5. Take the same M ∈ Hom(K^{2×2}, K^4) and γ(t) = M + t · id. Let M_{2,...,2} be the iterated matrix multiplication tensor with 2 repeated q times. As before, define the tensor D as the limit of a suitable rescaling of ψ_{r,k,q}(γ(t)) as t → 0.
It will be sufficient to show that D is not in MPS. However, the contraction maps D_i induced by D all have surjective images when q is odd. Therefore, D is in MPS if and only if D is isomorphic to M_{2,...,2}. But the rank of D is strictly smaller than the lower bound on the rank of M_{2,...,2} from [3, Proposition 20].
There are four obvious independent linear phylogenetic invariants, that is, linear polynomials vanishing on the image. In fact, it is well known that the closure of the image is a four-dimensional linear space [2,27].
Our algorithm TotalImage(φ) returns the output in Figure 7. There are four linear spaces of dimension three whose generic points do not belong to the closure of Im(φ). There are six distinct planes which are added back in. This provides a complete description of the image in statistically meaningful coordinates (without applying the discrete Fourier transform).

The locus IMPS(2, 2, 3) is closed
Here we describe the map ψ_{2,2,3} explicitly in coordinates. An element in the domain of ψ_{2,2,3} is a pair of 2 × 2 matrices (M, L). The map ψ_{2,2,3} takes this pair to the vector of the eight traces tr(M_{i_1} M_{i_2} M_{i_3}) with i_j ∈ {1, 2}, where M_1 = M and M_2 = L. There are 4 linear relations among these polynomials, which implies that the image lies in a four-dimensional subspace U of C^8. In fact, U = S^3(C^2) is the subspace of symmetric tensors. We proved that the image is closed and equals U using TotalImage in the following way. We restrict the map to pairs of matrices (M, L) where M is diagonal and L has its non-diagonal entries equal. Then TotalImage can compute that the image, even restricted to this smaller domain, is exactly U. In this example, one could also conclude purely theoretically that the image is closed, as the space of symmetric tensors U has only three GL(2) orbits.
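The four linear relations can be verified symbolically; the sketch below (our own check, not TotalImage output) confirms that the eight trace coordinates of ψ_{2,2,3} take only four distinct values, so the image lies in the four-dimensional space of symmetric tensors.

```python
import sympy as sp

# Cyclic invariance of the trace forces all words with the same multiset of
# letters to give the same coordinate, e.g. tr(MML) = tr(MLM) = tr(LMM).
A, B, C, D, a, b, c, d = sp.symbols('A B C D a b c d')
M = sp.Matrix([[A, B], [C, D]])
L = sp.Matrix([[a, b], [c, d]])
mats = {1: M, 2: L}

T = {}
for i in (1, 2):
    for j in (1, 2):
        for k in (1, 2):
            T[i, j, k] = sp.expand((mats[i] * mats[j] * mats[k]).trace())

assert T[1, 1, 2] == T[1, 2, 1] == T[2, 1, 1]
assert T[1, 2, 2] == T[2, 1, 2] == T[2, 2, 1]
# Only four distinct coordinates: tr(MMM), tr(MML), tr(MLL), tr(LLL).
assert len(set(T.values())) == 4
```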

A. Matrix product states
We recall here two representations of tensors that are inspired by physics [24]. Recall that we use K to stand for C or R.
For any a ∈ Z_{>0} the vector space K^a comes with the standard basis e_1, . . . , e_a. Therefore, a tensor T ∈ K^{a_1} ⊗ · · · ⊗ K^{a_q} may be represented as T = Σ T[i_1, . . . , i_q] e_{i_1} ⊗ · · · ⊗ e_{i_q}, which is also written as the array of coefficients T[i_1, . . . , i_q].

Definition A.1 (Site-independent (cyclic) matrix product state). Fix integers r > 0, k > 0, q > 0 and matrices M_i ∈ K^{r×r} for i = 1, . . . , k. Let T ∈ (K^k)^{⊗q} be the tensor given by T[i_1, . . . , i_q] := tr(M_{i_1} M_{i_2} · · · M_{i_q}).
The set of all tensors that allow such a representation will be denoted by IMPS(r, k, q) ⊂ (K k ) ⊗q .
Example A.2. Let us consider the case of matrices (q = 2). Here elements of IMPS(r, k, 2) can be viewed as matrices M such that M[i_1, i_2] = tr(M_{i_1} M_{i_2}). This is equivalent to a factorization M = A · A^t for some matrix A ∈ Hom(K^{r^2}, K^k). In particular, M ∈ IMPS(r, k, 2) if and only if M is symmetric and has rank at most r^2. It follows that IMPS(r, k, 2) is closed.
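The characterization in Example A.2 is easy to test numerically (our own illustration, over the reals).

```python
import numpy as np

# For k matrices M_1, ..., M_k of size r x r, the matrix T[i, j] = tr(M_i M_j)
# is symmetric of rank at most r^2, since it factors through the
# r^2-dimensional space of matrices.
r, k = 2, 7
rng = np.random.default_rng(3)
Ms = rng.standard_normal((k, r, r))

T = np.array([[np.trace(Ms[i] @ Ms[j]) for j in range(k)] for i in range(k)])
assert np.allclose(T, T.T)                 # symmetric
assert np.linalg.matrix_rank(T) <= r * r   # rank at most r^2 (here 4 < k = 7)

# Explicit factorization: the pairing vec(M_i) . vec(M_j^T) equals tr(M_i M_j).
A = Ms.reshape(k, r * r)
B = np.array([Ms[j].T.reshape(-1) for j in range(k)])
assert np.allclose(A @ B.T, T)
```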
When q = 2 the tensor T corresponds to a symmetric matrix. However, for q > 2 the tensor T will not in general be symmetric, though the identity T[i_1, . . . , i_q] = T[i_q, i_1, . . . , i_{q−1}] continues to hold. In other words, the tensor has cyclic symmetry with respect to the order of the product of the matrices.
Definition A.1 can be regarded as a symmetrization of the following definition of a cyclic matrix product state, where the underlying graph for the tensor network is a cycle.
Fix an integer q > 0 and tuples of positive integers a = (a_1, . . . , a_q), b = (b_1, . . . , b_q). We set a_{q+1} = a_1. Then the locus MPS(a, b, q) ⊂ K^{b_1} ⊗ · · · ⊗ K^{b_q} is given by the following definition.
Definition A.3 (Cyclic matrix product state). A tensor T ∈ K^{b_1} ⊗ · · · ⊗ K^{b_q} is in MPS(a, b, q) if there exist matrices A^{(j)}_i ∈ K^{a_j × a_{j+1}}, for 1 ≤ j ≤ q and 1 ≤ i ≤ b_j, such that T[i_1, . . . , i_q] = tr(A^{(1)}_{i_1} A^{(2)}_{i_2} · · · A^{(q)}_{i_q}).

For instance, a b_1 × b_2 matrix M is in MPS((a_1, a_2), (b_1, b_2), 2) if and only if M = AB, where A is a b_1 × a_1 a_2 matrix and B is an a_1 a_2 × b_2 matrix. This can happen if and only if the rank of the matrix M is at most a_1 a_2. Therefore, MPS(a, b, 2) is always closed.
Proof. The proofs of both statements are similar; we prove the first, as it is more important for this paper. We will interpret elements of Hom(K^{r×r}, K^k) as r^2 × k matrices. First we note that there is a natural bijection φ between k-tuples of r × r matrices 𝒜 := (M_1, . . . , M_k) and matrices φ(𝒜) ∈ Hom(K^{r×r}, K^k). For 1 ≤ i ≤ k the i-th column of φ(𝒜) is the representation of M_i as a vector of length r^2.
Write M_i = Σ_{p,q=1}^{r} a_{i,p,q} e_{p,q}, where e_{p,q} is the matrix with a 1 in its (p, q)-th entry and zeros everywhere else. Note that φ(𝒜)(e_{p,q}) = Σ_{i=1}^{k} a_{i,p,q} e_i.
We prove the claim by showing that the tensor T ∈ IMPS(r, k, q) associated to 𝒜 equals φ(𝒜)^{⊗q}(M_{r,...,r}). Indeed, we have

T = Σ_{1≤i_j≤k} tr(M_{i_1} · · · M_{i_q}) e_{i_1} ⊗ · · · ⊗ e_{i_q}
  = Σ_{1≤i_j≤k} Σ_{1≤p_j≤r} a_{i_1,p_1,p_2} a_{i_2,p_2,p_3} · · · a_{i_{q−1},p_{q−1},p_q} a_{i_q,p_q,p_1} e_{i_1} ⊗ · · · ⊗ e_{i_q},

where in all sums 1 ≤ j ≤ q. We can simplify further:

T = Σ_{1≤i_j≤k} Σ_{1≤p_j≤r} (a_{i_1,p_1,p_2} e_{i_1}) ⊗ · · · ⊗ (a_{i_q,p_q,p_1} e_{i_q})
  = Σ_{1≤p_j≤r} (Σ_{1≤i_1≤k} a_{i_1,p_1,p_2} e_{i_1}) ⊗ · · · ⊗ (Σ_{1≤i_q≤k} a_{i_q,p_q,p_1} e_{i_q})
  = Σ_{1≤p_j≤r} φ(𝒜)(e_{p_1,p_2}) ⊗ · · · ⊗ φ(𝒜)(e_{p_q,p_1})
  = φ(𝒜)^{⊗q}(M_{r,...,r}).
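The identity T = φ(𝒜)^{⊗q}(M_{r,...,r}) proved above can also be verified numerically; the sketch below (our own vectorization conventions, with φ(𝒜) stored as a k × r² matrix up to transpose) compares both sides for a random tuple of matrices.

```python
import numpy as np
from itertools import product

r, k, q = 2, 3, 4
rng = np.random.default_rng(4)
Ms = rng.standard_normal((k, r, r))

# Left-hand side: T[i1, ..., iq] = tr(M_{i1} ... M_{iq}).
T = np.zeros((k,) * q)
for idx in product(range(k), repeat=q):
    P = np.eye(r)
    for i in idx:
        P = P @ Ms[i]
    T[idx] = np.trace(P)

# Right-hand side: phi sends e_{p,q'} to (a_{1,p,q'}, ..., a_{k,p,q'});
# here phi[i, p*r + q'] = Ms[i][p, q'].
phi = Ms.reshape(k, r * r)

# Build M_{r,...,r} and push it forward through phi in every tensor factor.
M = np.zeros((r * r,) * q)
for idx in product(range(r), repeat=q):
    M[tuple(idx[j] * r + idx[(j + 1) % q] for j in range(q))] += 1.0
rhs = M
for _ in range(q):
    # contract the current first factor with phi; the new k-axis moves last,
    # so after q rounds the axes return to their original order
    rhs = np.tensordot(rhs, phi.T, axes=([0], [0]))

assert np.allclose(T, rhs)
```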