The Average Condition Number of Most Tensor Rank Decomposition Problems is Infinite

The tensor rank decomposition, or canonical polyadic decomposition, is the decomposition of a tensor into a sum of rank-1 tensors. The condition number of the tensor rank decomposition measures the sensitivity of the rank-1 summands with respect to structured perturbations, i.e., perturbations preserving the rank of the tensor that is decomposed. The angular condition number, on the other hand, measures the perturbations of the rank-1 summands up to scaling. We show for random rank-2 tensors that the expected value of the condition number is infinite for a wide range of choices of the density. Under a mild additional assumption, we show that the same is true for most higher ranks r ≥ 3 as well. In fact, as the dimensions of the tensor tend to infinity, asymptotically all ranks are covered by our analysis. On the contrary, we show that rank-2 tensors have finite expected angular condition number. Based on numerical experiments, we conjecture that this could also be true for higher ranks. Our results underline the high computational complexity of computing tensor rank decompositions. We discuss consequences of our results for algorithm design and for testing algorithms computing tensor rank decompositions.

The integer d is called the order of A. The tensor product of d vectors u_1 ∈ R^{n_1}, ..., u_d ∈ R^{n_d} is defined to be the tensor u_1 ⊗ ··· ⊗ u_d ∈ R^{n_1×···×n_d} whose entries are (u_1 ⊗ ··· ⊗ u_d)_{i_1,...,i_d} = (u_1)_{i_1} ··· (u_d)_{i_d}. Any nonzero multidimensional array obeying this relation is called a rank-1 tensor. Not every multidimensional array represents a rank-1 tensor, but every tensor A is a finite linear combination of rank-1 tensors:

(1.1)  A = Σ_{i=1}^{r} u_i^1 ⊗ ··· ⊗ u_i^d, where u_i^k ∈ R^{n_k}.

Hitchcock [50] coined the name polyadic decomposition for the decomposition (1.1). The smallest number r for which A admits an expression as in (1.1) is called the (real) rank of A. A corresponding minimal decomposition is called a canonical polyadic decomposition (CPD).
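As a small numerical illustration (our own, not from the paper), the following sketch builds a rank-1 tensor of order d = 3 as an outer product and a rank-≤2 tensor as a sum of two such terms, exactly as in (1.1):

```python
import numpy as np

# A rank-1 tensor of order 3 is an outer product u1 ⊗ u2 ⊗ u3 with
# entries (u1 ⊗ u2 ⊗ u3)[i, j, k] = u1[i] * u2[j] * u3[k].
u1, u2, u3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
rank1 = np.einsum('i,j,k->ijk', u1, u2, u3)
assert rank1[0, 1, 1] == u1[0] * u2[1] * u3[1]

# A polyadic decomposition as in (1.1): a sum of r rank-1 terms.
v1, v2, v3 = np.array([1.0, -1.0]), np.array([0.0, 1.0]), np.array([2.0, 0.0])
A = rank1 + np.einsum('i,j,k->ijk', v1, v2, v3)  # rank(A) <= 2
```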
For instance, in algebraic statistics [1,59], chemical sciences [67], machine learning [4], psychometrics [54], signal processing [35,36,64], or theoretical computer science [26], the input data has the structure of a tensor, and the CPD of this tensor reveals the information of interest. Usually, this data is subject to measurement errors, which will cause the CPD computed from the measured data to differ from the CPD of the true data. In numerical analysis, the sensitivity of the model parameters, such as the rank-1 summands in the CPD, to perturbations of the data is often quantified by the condition number [61].
When there are multiple CPDs of a tensor A, the condition number must be defined at a decomposition {A_1, ..., A_r}. However, in this article, we will restrict our analysis to tensors A having a unique decomposition. Such tensors are called identifiable. In this case, the condition number of the tensor rank decomposition of a tensor A is well-defined, and we denote it by κ(A). We will explain the notion of identifiability of tensors in greater detail in Section 1.3 below. At this point, the reader should mainly bear in mind that the assumption of being identifiable is comparably weak, as most tensors of low rank satisfy it. However, note that matrices (d = 2) are never identifiable, so we assume that the order of the tensor is d ≥ 3.
The condition number of tensor rank decomposition was characterized in [20], and it is the condition number of the following computational problem: on input A ∈ R^{n_1×···×n_d} of rank r, compute the set of rank-1 terms {A_1, ..., A_r} in the decomposition (1.1). This condition number measures the sensitivity of the rank-1 terms with respect to perturbations of the tensor A. In other words, when the condition number κ(A) of the rank-r identifiable tensor A = Σ_{i=1}^r A_i in (1.1) is finite, it is the smallest value κ(A) such that

(1.2)  min_{π ∈ S_r} ( Σ_{i=1}^{r} ‖A_i − A'_{π(i)}‖² )^{1/2} ≤ κ(A) ‖A − A'‖ + o(‖A − A'‖)

holds for all rank-r tensors A' = Σ_{i=1}^r A'_i (with A'_i of rank 1) sufficiently close to A. Herein, the norm on R^{n_1×···×n_d} is the usual Euclidean norm, and S_r is the permutation group on {1, ..., r}. It was shown in [24, Corollary 5.5] that the same expression holds if A' is any tensor close to A and Σ_{i=1}^r A'_i is the best rank-r approximation of A' in the Euclidean norm. As a general principle in numerical analysis, the condition number is an intrinsic property of the computational problem that governs the forward error and attainable precision of any method for solving the problem. Its study is also useful for other purposes. For example, in [21,22] the local rate of convergence of Riemannian Gauss-Newton optimization methods for computing the CPD was related to the condition number κ(A).
A conventional wisdom in numerical analysis is that it is harder to compute the condition number of a given problem instance than to solve the problem itself [38,39]. This viewpoint led Smale to initiate the study of the probability distribution of condition numbers: if the condition number is small with high probability, then for many practical purposes one can assume that any given input is well-conditioned; at least the probability of failure will necessarily be small. Smale started by studying the probability that a polynomial is ill-conditioned [66]. This strategy was extended to linear algebra condition numbers [27,31,41], to systems of polynomial equations in diverse settings [42,62], to linear systems of inequalities [49], to linear and convex programming [2,68], to eigenvalues and eigenvectors in the classic and other settings [7], to polynomial eigenvalue problems [6,9], and to other computational models [30], among others. As there is a substantial bibliography on this topic, we refer the reader to [29] for further details.
Tensor rank decomposition seems to be no exception to this wisdom: the characterization of κ(A) for a given A in [20] requires the CPD of A itself. This forces us to rely on probabilistic studies to establish reasonable a priori values of the condition number. Settling this is the main purpose of this paper.
1.2. Informal version of our main results and discussion. The first probabilistic analyses of the condition number of the CPD were given in [8,23]. In those references the expected value was computed for random rank-1 tensors; that is, for random output of the computational problem of computing CPDs. This amounts to choosing random u_i^k in the notation above, constructing the corresponding tensor A, and studying κ(A). The probabilistic study is feasible, in principle, because one can obtain a closed expression for κ(A) which is polynomial in terms of the u_i^k, so that the question boils down to an explicit but nontrivial integration problem.
This article is the first to investigate the condition number for random input. That is, we assume that A is chosen at random within the set of rank-r tensors (see the definition of random tensors in Definition 1.3 and the extension in Theorem 1.11) and we ask about the expected value of κ(A). The difficulty now is that, even if we assume that a decomposition (1.1) exists, we do not have it, and hence we lack a closed expression for κ(A).
One may wonder if these two different random procedures should give similar distributions in this or other numerical problems. The answer is no. For example, say that our problem is to compute the kernel of a given matrix A ∈ R^{n×(n+1)} and we want to study the expected value of the associated condition number ‖A‖ ‖A†‖. Choosing A at random produces E(‖A‖ ‖A†‖) < ∞, but choosing the kernel at random and then A at random within the matrices with that kernel is the same as computing the expected value of the usual Turing condition number of a square real Gaussian matrix, which is infinite; see [31] for precise estimates of these quantities. The situation is similar in the study of systems of homogeneous polynomial equations: random inputs have better condition numbers than inputs produced from random outputs; see for example [11]. In both these examples, the condition number of input constructed from random output is, on average, larger than the condition number of random input. This is a stroke of luck, since in general one expects instances from practical, real-life problems to be somehow random within the input space, not to have a random output!
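The kernel example above can be sketched numerically; this is our own illustration, not from the paper, computing the Turing-type condition number ‖A‖ ‖A†‖ of a random Gaussian flat matrix:

```python
import numpy as np

# Condition number ||A|| * ||A^dagger|| of a random Gaussian n x (n+1)
# matrix, the "random input" side of the kernel problem discussed above.
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n + 1))
cond = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.pinv(A), 2)
# With probability 1, A has full row rank, so cond is finite; the text
# notes its expected value over Gaussian A is finite as well.
```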
In this paper we show that computing the CPD is a rara avis: we prove in Theorem 1.5 and Theorem 1.6 that (under suitable hypotheses) the condition number of random input tensors is infinite on average. On the contrary, by [8,23] the average condition number is presumed to be finite when choosing random output. This result reinforces the evidence that computing CPDs is a very challenging computational problem.
The literature often cites the result of Håstad [53] to underline the high computational complexity of computing CPDs. Håstad showed that the NP-complete 3-satisfiability problem (also called 3-SAT) can be reduced to computing the rank of a tensor; hence, solving the tensor rank decomposition problem is NP-hard in the Turing machine computational model. Our main result is different in two aspects: first, Håstad showed the difficulty of only one particular instance of a CPD, whereas we show that computing the CPD is difficult on average. Second, our evidence supporting the hardness of the problem is not based on Turing machine complexity, but is given by analyzing the condition number, which is more appropriate for numerical computations [16]. Linking complexity analyses to condition numbers is common in the literature; for instance, in the case of solving polynomial systems [11,28,56,63]. In general, the book [29] provides a good overview. In this interpretation, we show that computing CPDs numerically is hard on average.
On the other hand, in the literature, the main result of de Silva and Lim [37] is often cited as a key reason why approximating a tensor by a low-rank CPD is such a challenging problem: for some input tensors, a best low-rank approximation may not exist! This is because the set of tensors of bounded rank is not closed: there are tensors of rank strictly greater than r that can be approximated arbitrarily well by rank-r tensors. It is shown in [37] that this ill-posedness of the approximation problem is not rare, in the sense that for every tensor space R^{n_1×n_2×n_3} there exists an open set of input tensors which do not admit a best rank-2 approximation. This result is stronger than Håstad's in the sense that it proves that instances with no solution to the tensor rank approximation problem may occur on an open set, rather than on one particular set of measure zero. Notwithstanding this key result, it does not tell us about the complexity of solving the tensor rank decomposition problem, in which we are given a rank-r tensor whose CPD we seek. In this setting, there are no ill-posed inputs in the sense of [37]. It was already shown in [20] that the condition number diverges as one moves towards the open part of the boundary of tensors of bounded rank, entailing that there exist regions with arbitrarily high condition number. One of the main results of this paper, Theorem 1.6, shows that such regions cannot be ignored: they are sufficiently large to cause the integral of the condition number over the set of rank-r tensors to diverge. In other words, one cannot neglect the regions where the condition number is so high that a CPD computed from a floating-point representation of a rank-r tensor in R^{n_1×···×n_d}, subject only to roundoff errors, is meaningless-a result similar in spirit to de Silva and Lim [37].
One may conclude from the above that, at least from the point of view of average stability of the problem, tensor rank decomposition is doomed to fail. However, if one only cares about the directions of the rank-1 terms in the decomposition, then the situation changes dramatically. The condition number associated with the computational problem "Given a rank-r identifiable tensor A = Σ_{i=1}^r A_i as in (1.1), output the set of normalized rank-1 tensors {A_1/‖A_1‖, ..., A_r/‖A_r‖}" will be called the angular condition number κ_ang(A). Analogously to the bound (1.2), one can show that when κ_ang is finite, it is the smallest number such that

min_{π ∈ S_r} ( Σ_{i=1}^{r} ‖ A_i/‖A_i‖ − A'_{π(i)}/‖A'_{π(i)}‖ ‖² )^{1/2} ≤ κ_ang(A) ‖A − A'‖ + o(‖A − A'‖)

for all rank-r tensors A' = Σ_{i=1}^r A'_i (with A'_i rank-1 tensors) in a sufficiently small open neighborhood of A. By [24, Corollary 5.5] the same expression holds for all tensors A' in a small open neighborhood of A if Σ_{i=1}^r A'_i is the best rank-r approximation of A'. We will prove in Theorem 1.9 that, at least in the case of rank-2 tensors, the angular condition number κ_ang for random inputs is finite, contrary to the classic condition number κ; in fact, the numerical experiments in Section 7 suggest that this finite average condition number seems to extend to much higher ranks as well. In other words, on average we may expect to be able to recover the angular part of the CPD, i.e., the normalized rank-1 terms U_i := A_i/‖A_i‖, where A_i is as in (1.1). One could conclude from this that a tensor decomposition algorithm should aim to produce the normalized rank-1 terms U_i from the tensor rank decomposition accurately. Once these terms are obtained, one can recover the λ_i's in A = Σ_{i=1}^r λ_i U_i by solving a linear system of equations. Since, as a general principle, the condition number of a composite smooth map g ∘ f between manifolds satisfies [16,29]

κ[g ∘ f](x) ≤ κ[g](f(x)) · κ[f](x),

it follows that the condition number of tensor decomposition is bounded by the product of the condition number of the problem of finding the angular part of the CPD and the condition number of solving a linear least-squares problem. Our main results suggest that precisely the last problem will on average be ill-conditioned.
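The two-stage approach described above can be sketched numerically. In this illustration (helper names are ours), we assume the normalized rank-1 terms U_i are already known and recover the scales λ_i by linear least squares on the flattened tensors:

```python
import numpy as np

rng = np.random.default_rng(1)
dims, r = (3, 4, 5), 2

# Random rank-1 terms and their normalizations U_i.
terms = [np.einsum('i,j,k->ijk', *[rng.standard_normal(n) for n in dims])
         for _ in range(r)]
U = [T / np.linalg.norm(T) for T in terms]
lam_true = np.array([2.0, -0.5])
A = sum(l * Ui for l, Ui in zip(lam_true, U))   # the tensor A = sum_i lambda_i U_i

# Recover the lambda_i: a linear least-squares problem in the flattened tensors.
M = np.column_stack([Ui.ravel() for Ui in U])
lam, *_ = np.linalg.lstsq(M, A.ravel(), rcond=None)
```

For generic terms, M has full column rank and the recovery is exact; the point made in the text is that the conditioning of this last linear step is what dominates on average.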
The foregoing observation can have major implications for algorithm design. Indeed, solving the tensor rank decomposition problem by first solving for the angular part and then the linear least-squares problem decomposes the problem into a nonlinear and a linear part. Crucially, the latter least-squares problem can be solved by direct methods, such as a QR factorization combined with a linear system solver. Such methods have a uniform computational cost regardless of the condition number of the problem. By contrast, since no (provably) numerically stable direct algorithms for tensor rank decomposition are currently known [8], iterative methods are indispensable for this problem. We may expect their computational performance to depend on the condition number of the problem instance. Indeed, our main results combined with the main result of [21] imply, for example, that Riemannian Gauss-Newton optimization methods for solving the angular part of the CPD should, on average, require fewer iterations to reach convergence than Riemannian Gauss-Newton methods for solving the tensor decomposition problem directly (such as the methods in [21,22]), because the angular condition number κ_ang appears to be finite on average, while the regular condition number κ is proved to be infinite on average in most cases, as we show in this article.
Our main results also have consequences for researchers testing numerical algorithms for computing the CPD. In the literature, a common way of generating input data for testing algorithms is to sample the rank-1 terms A_i at random, and then apply the algorithm to the associated tensor A = Σ_{i=1}^r A_i. However, our analysis in this paper and the analyses in [8,23] show that this procedure generates tensors that are heavily biased towards being numerically well-conditioned. Hence, this way of testing algorithms probably does not correspond to a realistic distribution on the inputs. We acknowledge that it is currently not easy to sample rank-r tensors uniformly, even though some methods exist [18]. In part, this is because equations for the algebraic variety containing the tensors of rank bounded by r are hard to obtain [57]. Nevertheless, in Section 7, using the observation from Remark 1.4, we present an acceptance-rejection method that can be applied in a few cases and yields uniformly distributed rank-r tensors, relative to the Gaussian density in Definition 1.3. In any case, we strongly advocate that the (range of) condition numbers be reported when testing the performance of iterative methods for solving the tensor rank decomposition problem, so that one can assess the difficulty of the problem instances. We believe it is always recommended to include models that are known to lead to instances with high condition numbers, such as those used in [20,22].
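The common "random output" test-data generator criticized above can be written down in a few lines; this is our own minimal sketch of the biased procedure, not a recommendation:

```python
import numpy as np

def random_rank_r_tensor(dims, r, rng):
    """Sum of r random Gaussian rank-1 tensors of the given order-3 shape.
    Tensors generated this way are biased towards being well-conditioned."""
    return sum(np.einsum('i,j,k->ijk',
                         *[rng.standard_normal(n) for n in dims])
               for _ in range(r))

A = random_rank_r_tensor((3, 4, 5), 2, np.random.default_rng(2))
```

As advocated in the text, condition numbers of test instances produced this way should be reported alongside any benchmark results.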
The formal presentation of our main results requires some extra notation that we introduce in subsequent sections.
1.3. Identifiable tensors and a formula for the condition number. A particular feature of higher-order tensors that distinguishes them from matrices is identifiability. This means that in many cases the CPD of tensors of order d ≥ 3 of small rank is unique. A tensor A is called r-identifiable if its decomposition A = Σ_{i=1}^r A_i into r rank-1 tensors A_i is unique up to a permutation of the summands. A celebrated criterion by Kruskal [55] gives a tool to decide if a given tensor of order 3 satisfies this property.
Lemma 1.1 (Kruskal's criterion [55,65]). Let F be R or C, let A ∈ F^{n_1×n_2×n_3} be a tensor of order 3, and assume that A = Σ_{i=1}^r A_i, where A_i = u_i ⊗ v_i ⊗ w_i. If the Kruskal ranks of the factor matrices U = [u_1, ..., u_r], V = [v_1, ..., v_r], and W = [w_1, ..., w_r] satisfy k_U + k_V + k_W ≥ 2r + 2, then A is r-identifiable over F. (The Kruskal rank k_U of a matrix U is the largest k such that every subset of k of its columns is linearly independent.)

Since matrix rank does not change with a field extension from R to C, a real rank-r tensor A ∈ R^{n_1×n_2×n_3} that satisfies the assumptions of Lemma 1.1 is r-identifiable over R and also automatically r-identifiable over C. In other words, Kruskal's criterion certifies complex r-identifiability of tensors, which is a strictly stronger notion than r-identifiability over R [5].
Most order-3 tensors of low rank satisfy Kruskal's criterion [34]: there is an open dense subset of the set of rank-r tensors in R^{n_1×n_2×n_3}, n_1 ≥ n_2 ≥ n_3 ≥ 2, where complex r-identifiability holds, provided r ≤ n_1 + min{½δ, δ} with δ := n_2 + n_3 − n_1 − 2. In fact, this phenomenon occurs much more generally than for third-order tensors of very small rank. Consider the set of complex tensors of complex rank bounded by r. This constructible set turns out to be an open dense subset (in the Euclidean topology) of its Zariski closure σ^C_{r;n_1,...,n_d}; see [57]. One says that σ^C_{r;n_1,...,n_d} is generically complex r-identifiable if the subset of points of σ^C_{r;n_1,...,n_d} that are not complex r-identifiable is contained in a proper closed subset in the Zariski topology on the algebraic variety σ^C_{r;n_1,...,n_d}; see [32]. It is known from dimensionality arguments [32] that there is a maximum value of r for which generic r-identifiability of σ_{r;n_1,...,n_d} can hold, namely r ≤ r^crit_{n_1,...,n_d}, where

(1.3)  r^crit_{n_1,...,n_d} := ⌊ (n_1 ··· n_d) / (1 + Σ_{k=1}^d (n_k − 1)) ⌋.

In fact, it is conjectured that the inequality is strict in general; see [47] for details. For all other values of r, generic r-identifiability does not hold. In [17,32,33,40] it is proved that for the majority of choices of n_1, ..., n_d, generic complex r-identifiability holds for most ranks r < r^crit; see [17, Theorem 7.2] for a result that is asymptotically optimal. For a summary of the conjecturally complete picture of complex r-identifiability results, see [34, Section 3].

Assumption 1. In the rest of this article, we will assume that σ^C_{r;n_1,...,n_d} is generically complex r-identifiable.
The reason why we make this assumption is because it greatly simplifies some of the arguments.At the same time, Assumption 1 is (conjectured to be) extremely weak and only limits the generality in the exceptional cases listed in [33,Theorem 1.1], and even then generic r-identifiability only fails very close to the upper bound r crit of the permitted ranks.
An immediate benefit of Assumption 1 is that it allows for a nice expression of the condition number of the tensor rank decomposition problem. Let us denote the set of rank-1 tensors in R^{n_1×···×n_d} by S_{n_1,...,n_d}. It is a smooth manifold, called the Segre manifold [46,57]. The set of tensors of rank bounded by r is the image of the addition map: σ_{r;n_1,...,n_d} = Φ(S^{×r}_{n_1,...,n_d}), where

(1.4)  Φ : S^{×r}_{n_1,...,n_d} → R^{n_1×···×n_d}, (A_1, ..., A_r) ↦ A_1 + ··· + A_r.

Then, under Assumption 1, on an open dense subset N_{r;n_1,...,n_d} of σ_{r;n_1,...,n_d} there exist smooth local inverses Φ_a^{-1} of Φ for each a ∈ Φ^{-1}(A). Recall from [20] that the condition number of the CPD at A ∈ N_{r;n_1,...,n_d} is then the condition number (in the classic sense of Rice [61]; see also [29,69]) of any of these local inverses:

(1.5)  κ(A) := κ[Φ_a^{-1}](A), where a ∈ Φ^{-1}(A).

Remark 1.2. We did not specify the value of the condition number for A ∈ σ_{r;n_1,...,n_d} \ N_{r;n_1,...,n_d}. The main reason is that our analysis is independent of the values that the condition number takes on this set of measure zero, so for simplicity we decided against including the more complicated general case where there can be several distinct elements in the preimage.

1.4. Main results. The goal of this paper is to study the average condition number relative to "reasonable" density functions. By this we mean probability distributions ρ̃ that are comparable to the standard Gaussian density ρ: there exist positive constants c_1, c_2 such that c_1 ≤ ρ̃/ρ ≤ c_2. The main result, Theorem 1.11, applies, among others, to all distributions comparable to the following Gaussian density defined on the set of bounded-rank tensors σ_{r;n_1,...,n_d}.

Definition 1.3 (Gaussian Identifiable Tensors (GITs)). We define a random variable A on σ_{r;n_1,...,n_d} by specifying its density as

ρ(A) := (1/C_{r;n_1,...,n_d}) e^{−‖A‖²}, where C_{r;n_1,...,n_d} := ∫_{σ_{r;n_1,...,n_d}} e^{−‖A‖²} dA.

We first state our results for the foregoing Gaussian density. At the end of this subsection, in Theorem 1.11, we generalize these results to other densities, including all densities comparable to the Gaussian density. Our first contribution is the following result; we prove it in Section 3.

Theorem 1.5. Let A ∈ σ_{2;n_1,...,n_d} be a GIT of rank 2. Then, E κ(A) = ∞.
It should be mentioned that in our analysis we consider a small subset of σ 2;n1,...,n d and show that on this subset the condition number integrates to infinity.In particular, a weak average-case analysis as proposed in [3] would be of interest in this problem.
Under one additional assumption we can extend the result from Theorem 1.5 to higher ranks.We prove the following theorem in Section 4.
By [17, Theorem 7.2], the assumptions of Theorem 1.6 are satisfied in a large number of cases. In fact, as the size of the tensor increases, the assumptions become weaker: the conditions in Theorem 1.6 are satisfied for r ≤ min(s_1, s_2). Note that for large n_i, the second piece s_2 is the most restrictive. Combining this with (1.3), we obtain the following asymptotically optimal result.

It follows from dimensionality arguments that if r > r^crit, then the addition map Φ does not have a local inverse. In fact, in this case all of the connected components of the fiber of Φ at A ∈ σ_{r;n_1,...,n_d} have positive dimension [46]. It follows from [20] that the condition number of the tensor rank decomposition problem at each expression (1.1) of length r of such a tensor A is ∞. In this case, κ(A) = ∞, regardless of how the tensor decomposition problem is defined when A has multiple distinct decompositions; see also the discussion in [29, Remark 14.14]. In this case, the average condition number is infinite as well.
Our results lead us to the conjecture that the expected condition number is infinite, also without the assumption from Theorem 1.6 and without any upper bound on the rank.

Conjecture 1.8. Let A ∈ σ_{r;n_1,...,n_d} be a GIT of rank r ≥ 2. Then, E κ(A) = ∞.
Corollary 1.7 above proves this conjecture asymptotically, in practice leaving only a small range of ranks for which it might fail.
As mentioned above, it turns out that for GITs the expected angular condition number is not always infinite. Formally, the angular condition number is defined as follows. Let the canonical projection onto the sphere be p : R^{n_1×···×n_d} \ {0} → S(R^{n_1×···×n_d}), A ↦ A/‖A‖. Then, for A ∈ N_{r;n_1,...,n_d},

(1.6)  κ_ang(A) := κ[(p × ··· × p) ∘ Φ_a^{-1}](A),

where Φ_a^{-1} is an arbitrary local inverse of Φ with A = Φ(a). As before, we do not specify what happens on the measure-zero set σ_{r;n_1,...,n_d} \ N_{r;n_1,...,n_d}, because it is not relevant for this paper. The angular condition number only accounts for the angular part of the CPD, i.e., the directions of the rank-1 tensors, not for their magnitude; hence the name.
To distinguish the condition numbers (1.5) and (1.6), we will refer to the condition number from (1.5) as the regular condition number. Oftentimes we even drop the clarification "regular".
Here is the result for κ_ang(A) for tensors of rank two, which we prove in Section 5.

Theorem 1.9. Let A ∈ σ_{2;n_1,...,n_d} be a GIT of rank 2. Then, E κ_ang(A) < ∞.
Unfortunately, we do not know if this theorem can be extended to higher-rank tensors. However, based on our experiments in Section 7, we pose the following conjecture.

Conjecture 1.10. Let A ∈ σ_{r;n_1,...,n_d} be a GIT of rank r ≥ 2. Then, E κ_ang(A) < ∞.

We finally observe that the foregoing main results are not limited to GITs. They are valid for a wide range of distributions of random tensors.
Theorem 1.11. Theorems 1.5 and 1.6, Corollary 1.7, and Theorem 1.9 remain true if instead of GITs we take random tensors defined by a wide range of other probability distributions, including some of particular interest: (1) All probability distributions ρ̃ that are comparable to the standard Gaussian density ρ; this means that the random tensor A has a density ρ̃ for which there exist positive constants c_1, c_2 with c_1 ≤ ρ̃/ρ ≤ c_2. (2) Uniformly randomly chosen A in the unit sphere S(σ_r).
(3) Uniformly randomly chosen A in the unit ball {A ∈ σ_r : ‖A‖ ≤ 1}.

1.5. Acknowledgements. Part of this work was done while the second and third authors were visiting the Universidad de Cantabria, supported by the funds of Grant 21.SI01.64658 (Banco Santander and Universidad de Cantabria), Grant MTM2017-83816-P from the Spanish Ministry of Science, and the FWO Grant for a long stay abroad V401518N. We thank these institutions for their support. We also thank two anonymous referees for helpful comments.
1.6. Organization of the article. The rest of the article is organized as follows. In the next section we give some preliminary material. Thereafter, in Sections 3 to 6, we successively prove Theorem 1.5, Theorem 1.6, Theorem 1.9, and Theorem 1.11. In Section 7 we present numerical experiments supporting our main results. Finally, in Appendices A to C we give proofs of several lemmata that we need in the other sections.

2. Notation and Preliminaries
2.1. Notation. We will use the following typographic conventions for convenience: vectors are typeset in a bold face (a, b), matrices in upper case (A, B), tensors in a calligraphic font (A, B), and manifolds and linear spaces in a different calligraphic font (A, B).
The positive integer d ≥ 2 is reserved for the order of a tensor, n_1, ..., n_d ≥ 2 are its dimensions, and r ≥ 1 is its rank. The following integers are used throughout the paper:

Σ := 1 + Σ_{k=1}^{d} (n_k − 1) and Π := n_1 ··· n_d;

they correspond to the dimension of the Segre manifold S_{n_1,...,n_d} and the dimension of the ambient space R^{n_1×···×n_d}, respectively. The symmetric group on r elements is denoted by S_r.
We work exclusively with real vector spaces, for which ⟨·,·⟩ denotes the Euclidean inner product and ‖·‖ always denotes the associated norm. We will switch freely between the finite-dimensional vector spaces R^{n_1×···×n_d}, R^{n_1} ⊗ ··· ⊗ R^{n_d}, and R^Π. By the above choice of norms, all of these finite-dimensional Hilbert spaces are isometric; specifically, if A ∈ R^{n_1×···×n_d} and a ∈ R^Π is its coordinate array with respect to an orthonormal basis, then ‖A‖ = ‖a‖. Similarly, the norm is unchanged if the coordinates a are reshaped into a multidimensional array. It is important to note that this notation can conflict with the usual meaning of ‖A‖ when d = 2; to distinguish the spectral norm from the standard norm in this paper, we write ‖A‖_2 for the former; see (2.1).
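The isometry between these spaces is easy to check numerically; the following sketch (ours) verifies that the Euclidean norm of a tensor equals the norm of its coordinate vector, and that for d = 2 the spectral norm is a different quantity:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3, 4))
a = A.ravel()   # coordinate vector with respect to the standard basis
# The Euclidean norm is invariant under flattening/reshaping.
assert np.isclose(np.linalg.norm(A), np.linalg.norm(a))

# For matrices (d = 2), the spectral norm ||.||_2 is bounded by the
# Euclidean (Frobenius) norm and generally differs from it.
M = rng.standard_normal((3, 3))
assert np.linalg.norm(M, 2) <= np.linalg.norm(M) + 1e-12
```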
For matrices (or linear operators) A_k : R^{n_k} → R^{m_k}, the tensor product A_1 ⊗ ··· ⊗ A_d acts on rank-1 tensors as (A_1 ⊗ ··· ⊗ A_d)(u_1 ⊗ ··· ⊗ u_d) = (A_1 u_1) ⊗ ··· ⊗ (A_d u_d); by the universal property [44], this extends to a linear map R^{n_1×···×n_d} → R^{m_1×···×m_d}. For any subset U ⊂ V of a normed vector space V, we define the sphere over U as S(U) := {x ∈ U : ‖x‖ = 1}. In particular, the unit sphere in R^n is denoted by S(R^n).
Given an m × n matrix R, or a linear operator R : R^n → R^m, we denote the pseudo-inverse by R†. The spectral norm and smallest singular value of R are denoted by ‖R‖_2 and ς_min(R), respectively. A special role will be played in this paper by the product of all but the smallest singular values of R, which we denote by q(R). In other words, if R is injective, then

q(R) := ς_1(R) ··· ς_{n−1}(R) = (det(R^T R))^{1/2} / ς_n(R),

where R^T is the transposed matrix (operator) and ς_i(R) is the i-th largest singular value of R.
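The quantity q(R) and the identity relating it to det(R^T R) can be checked numerically; this is our own sanity check, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
R = rng.standard_normal((6, 4))            # injective with probability 1
s = np.linalg.svd(R, compute_uv=False)     # singular values, in descending order
q = np.prod(s[:-1])                        # product of all but the smallest
# det(R^T R) is the product of the squared singular values, so
# sqrt(det(R^T R)) / s_min equals q(R).
assert np.isclose(q, np.sqrt(np.linalg.det(R.T @ R)) / s[-1])
```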

2.2. Differential geometry.
In this article we only consider submanifolds of Euclidean spaces; see, e.g., [58] for the general definitions. A smooth (C^∞) manifold is a topological manifold with a smooth structure, in the sense of [58]. The tangent space of M at x is denoted T_x M. At every point x ∈ M, there exist open neighborhoods V ⊂ M of x and U ⊂ T_x M, and a bijective smooth map φ : V → U with smooth inverse. The tuple (V, φ) is a coordinate chart of M. A smooth map between manifolds F : M → N is a map such that for every x ∈ M and coordinate chart (V, φ) containing x, and every coordinate chart (W, ψ) containing F(x), we have that ψ ∘ F ∘ φ^{-1} is smooth. If the derivative d_x F is invertible, then there is a neighborhood W ⊂ M of x on which F is invertible and its inverse is also smooth; that is, F is a diffeomorphism between W and F(W). If this property holds for all x ∈ M, then F is called a local diffeomorphism.
A differentiable submanifold M ⊂ R N can be equipped with a Riemannian metric g, turning it into a Riemannian manifold, allowing for the computation of integrals.The manifolds in this paper are all embedded submanifolds of Euclidean space, so the Riemannian metric for us will always be the metric inherited from the ambient space.
2.3. The manifold of r-nice tensors. As in the introduction, the Segre manifold is the set of rank-1 tensors S_{n_1,...,n_d} = {u_1 ⊗ ··· ⊗ u_d : u_k ∈ R^{n_k} \ {0}}. It is a smooth manifold of dimension Σ. Its tangent space at A = u_1 ⊗ ··· ⊗ u_d is given by

T_A S_{n_1,...,n_d} = R^{n_1} ⊗ u_2 ⊗ ··· ⊗ u_d + u_1 ⊗ R^{n_2} ⊗ u_3 ⊗ ··· ⊗ u_d + ··· + u_1 ⊗ ··· ⊗ u_{d−1} ⊗ R^{n_d};

note that this is not a direct sum. The Euclidean inner product between rank-1 tensors is conveniently computed by the following formula (see, e.g., [45]):

(2.4)  ⟨u_1 ⊗ ··· ⊗ u_d, v_1 ⊗ ··· ⊗ v_d⟩ = ⟨u_1, v_1⟩ ··· ⟨u_d, v_d⟩.

The set of tensors of rank at most r is denoted by σ_{r;n_1,...,n_d}; it is a semialgebraic set of dimension at most min{rΣ, Π}; see, e.g., [60]. Under Assumption 1, the dimension of σ_{r;n_1,...,n_d} is exactly rΣ.
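Formula (2.4) admits a quick numerical check; the following sketch (ours) compares the Euclidean inner product of two random rank-1 tensors with the product of the factor-wise inner products:

```python
import numpy as np

rng = np.random.default_rng(5)
dims = (2, 3, 4)
us = [rng.standard_normal(n) for n in dims]
vs = [rng.standard_normal(n) for n in dims]

# Left-hand side of (2.4): entrywise (Euclidean) inner product of the tensors.
lhs = np.sum(np.einsum('i,j,k->ijk', *us) * np.einsum('i,j,k->ijk', *vs))
# Right-hand side of (2.4): product of the inner products of the factors.
rhs = np.prod([u @ v for u, v in zip(us, vs)])
assert np.isclose(lhs, rhs)
```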
In [8, Section 4] we introduced an open dense subset of σ_{r;n_1,...,n_d} with favorable differential-geometric properties. We called it the manifold of r-nice tensors in [8, Definition 4.2]. Below, we present a slightly modified definition that is suitable for our present purpose; it eliminates conditions (4) and (5) from [8, Definition 4.2].
In what follows, we denote the closure in the Zariski topology of a subset A ⊂ R^Π by Ā.
Note that the third item in the definition is well-defined because of the second item.
Proposition 2.2. If Assumption 1 holds, then the following statements are true:

Since the tangent space of N_{r;n_1,...,n_d} at a point is the image of the derivative of the local diffeomorphism Φ, we have the following characterization.

2.4. Sensitivity of CPDs. The condition number of the problem of computing the rank-1 terms of a CPD of a tensor was studied in a general setting in [20]; the following characterization of the condition number is Theorem 1.1 of [20]:

κ(A) = ‖U†‖_2 = 1 / ς_{rΣ}(U),

where U_i ∈ R^{Π×Σ} is a matrix whose columns form an orthonormal basis of the tangent space T_{A_i} S_{n_1,...,n_d}. The matrix U = [U_1, ..., U_r] ∈ R^{Π×rΣ} is also called a Terracini matrix. An explicit expression for the U_i's is given in [20, equation (5.1)].
Since A uniquely depends on a := (A_1, ..., A_r) ∈ S^{×r}_{n_1,...,n_d}, we can view the condition number of A ∈ N_{r;n_1,...,n_d} as a function of a:

(2.7)  κ(a) := 1 / ς_{rΣ}([U_1, ..., U_r]),

where the matrices U_i are as before. The benefit of (2.7) is that it is well-defined for any tuple a ∈ S^{×r}_{n_1,...,n_d} (and not just those mapping into N_{r;n_1,...,n_d}).

2.5. Integrals. For fixed t ∈ (0, 1] and a point y ∈ S(R^n), the spherical cap of radius t around y is defined as cap(y, t) := {x ∈ S(R^n) : ‖x − y‖ ≤ t}. Its volume satisfies c_1(n) t^{n−1} ≤ vol(cap(y, t)) ≤ c_2(n) t^{n−1} for some positive constants 0 < c_1(n) < c_2(n).
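The Terracini-matrix characterization of the condition number can be sketched in code. This is our own illustration (all helper names are ours): we build orthonormal bases of the tangent spaces of the Segre manifold at r random rank-1 tensors, concatenate them into a Terracini matrix U, and read off the condition number as the inverse smallest singular value of U, assuming the characterization from [20]:

```python
import numpy as np

def tangent_basis(us):
    """Orthonormal basis of the tangent space of the Segre manifold at
    u_1 ⊗ ... ⊗ u_d, spanned by tensors with one factor varied at a time."""
    d = len(us)
    cols = []
    for k in range(d):
        for e in np.eye(len(us[k])):
            factors = [e if j == k else us[j] for j in range(d)]
            cols.append(np.einsum('i,j,k->ijk', *factors).ravel())
    B = np.column_stack(cols)
    # The spans overlap in the direction u_1 ⊗ ... ⊗ u_d, so the tangent
    # dimension is Sigma = 1 + sum(n_k - 1); extract a basis via the SVD.
    sigma = 1 + sum(len(u) - 1 for u in us)
    return np.linalg.svd(B, full_matrices=False)[0][:, :sigma]

rng = np.random.default_rng(6)
dims, r = (3, 3, 3), 2
points = [[rng.standard_normal(n) for n in dims] for _ in range(r)]
U = np.column_stack([tangent_basis(p) for p in points])   # Terracini matrix
kappa = 1.0 / np.linalg.svd(U, compute_uv=False)[-1]      # 1 / smallest sing. value
```

Since the columns of U have unit norm, the smallest singular value is at most 1, so the computed κ is always at least 1.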
The following general lemma will be useful later.
2.6. The coarea formula. Let M and N be submanifolds of R^n of equal dimension, and let F : M → N be a smooth surjective map. A point y ∈ N is called a regular value of F if for all points x ∈ F^{-1}(y) the differential d_x F is of full rank. The preimage F^{-1}(y) of a regular value y is a discrete set of points. Let |F^{-1}(y)| be the number of elements in this preimage. Then, the coarea formula [52] states that for every integrable function g we have

(2.9)  ∫_M g(x) Jac(F)(x) dx = ∫_N ( Σ_{x ∈ F^{-1}(y)} g(x) ) dy,

where Jac(F)(x) := |det d_x F| is the Jacobian determinant of F at x. Note that almost all y ∈ N are regular values of F by Sard's theorem [58, Theorem 6.10]. Hence, integrating over N is the same as integrating over all regular values of F.

Remark 2.5. In [52], the coarea formula is given in the more general case dim M ≥ dim N. In this article we only need the case where the dimensions of M and N coincide. Moreover, if F is injective, then (2.9) reduces to the well-known change-of-variables formula.
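The injective case of (2.9), i.e. the change-of-variables formula, can be checked on a toy example (ours, not from the paper): with F(x) = x³ on (0, 1) and g(y) = y², both sides of the formula equal 1/3.

```python
import numpy as np

# Midpoint-rule approximation of both sides of the change-of-variables
# formula for F(x) = x**3, Jac(F)(x) = 3*x**2, and g(y) = y**2.
N = 200_000
x = (np.arange(N) + 0.5) / N                  # midpoints of (0, 1)
dx = 1.0 / N
g = lambda y: y ** 2
lhs = np.sum(g(x ** 3) * 3 * x ** 2) * dx     # integral of g(F(x)) Jac(F)(x) dx
rhs = np.sum(g(x)) * dx                       # integral of g(y) dy
```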

3. The average condition number of Gaussian tensors of rank two
The goal of this section is to prove Theorem 1.5. We will proceed in three steps. First, the 2-nice tensors are conveniently parameterized via elementary manifolds such as one-dimensional intervals and spheres in Section 3.1. Second, the Jacobian determinant of this map is computed in Section 3.2. Third, the integral can be bounded from below with the help of a few technical auxiliary lemmas in Section 3.3. In the next section, we will exploit Theorem 1.5 for generalizing the argument to most higher ranks. To simplify notation, in this section we let

(3.1) P := S(R n1 ) × · · · × S(R n d )

and consider the next parametrization of the Segre manifold:

(3.2) ψ : (0, ∞) × P → S, (λ, u 1 , . . ., u d ) ↦ λ u 1 ⊗ · · · ⊗ u d .

By composing Ψ := ψ × ψ with the addition map from (1.4) we get the following alternative representation of tensors of rank bounded by 2. We would like to apply the coarea formula (2.9) to pull back the integral of κ(A)e −‖A‖ 2 /2 over σ 2 via the parametrization Φ • Ψ. However, σ 2 in general is not a manifold, so the formula does not apply. Nevertheless, we can use the manifold N 2 of 2-nice tensors instead. By Proposition 2.2 (3), where C 2 := C 2;n1,...,n d is as in Definition 1.3. By applying the coarea formula (2.9) to the smooth map Φ| M2 we get where Jac(Φ)(A 1 , A 2 ) is the Jacobian determinant of Φ at (A 1 , A 2 ). In the first equality we used |Φ −1 (A)| = 2 for 2-identifiable tensors; indeed, we have that Φ −1 (A) = {(A 1 , A 2 ), (A 2 , A 1 )}. In the following, we switch to the notation from (2.7). By Proposition 2.2 (2), we may replace the integral over M 2 by an integral over S × S, thus obtaining the corresponding integral over pairs. We use the coarea formula again, but this time for Ψ = ψ × ψ, where ψ is the parametrization from (3.2). Note that for (A 1 , A 2 ) = Ψ(a, b) we have A 1 = λ u 1 ⊗ · · · ⊗ u d and A 2 = µ v 1 ⊗ · · · ⊗ v d , where a = (λ, u 1 , . . ., u d ) and b = (µ, v 1 , . . ., v d ) are both tuples in (0, ∞) × P. Next, we compute the Jacobian determinant Jac(Φ • Ψ)(a, b).

3.2.
Computing the Jacobian determinant. Note that the dimension of the domain of Φ • Ψ is equal to 2Σ. As above, let a = (λ, u 1 , . . ., u d ) and b = (µ, v 1 , . . ., v d ) be tuples in (0, ∞) × P with P as in (3.1). In the following, we write the corresponding abbreviations. The Jacobian determinant of Φ • Ψ at (a, b) is, by definition, the absolute value of the determinant of the linear map d (a,b) (Φ • Ψ). Consider the matrix of partial derivatives of Φ • Ψ with respect to the standard orthonormal basis of R n1×···×n d . The latter is the volume of the parallelepiped spanned by the columns of Q. We fix notation in the next definition.
Definition 3.1. Let N ≥ n be positive integers and U ∈ R N ×n be a matrix with columns u 1 , . . ., u n ∈ R N . We denote by vol(U ) the volume of the parallelepiped spanned by the u i : vol(U ) := √det(U T U ).
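Definition 3.1 is the familiar Gram-determinant formula for volumes; it also equals the product of the singular values of U, a fact used repeatedly below. A quick numerical sanity check (the matrix here is an arbitrary example of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.standard_normal((6, 3))

# vol(U) = sqrt(det(UᵀU)): volume of the parallelepiped spanned by the
# columns of U.  It equals the product of the singular values of U.
vol = np.sqrt(np.linalg.det(U.T @ U))
print(vol)
```

In particular, a matrix with orthonormal columns spans a unit cube and has vol equal to 1.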
We can now rewrite (3.6) as Jac(Φ • Ψ)(a, b) = vol(Q). The reason why we write the partial derivatives of Φ • Ψ with respect to the standard basis of R n1×···×n d is that we get the following convenient description: let U k ∈ R n k ×(n k −1) and V k ∈ R n k ×(n k −1) be matrices containing as columns an ordered orthonormal basis of (u k ) ⊥ = T u k S(R n k ) and (v k ) ⊥ = T v k S(R n k ), respectively. Then, by linearity and the product rule of differentiation, we have that L = [λL 1 µL 2 ] is the block matrix consisting of 2 blocks of the stated form. Note that M depends only on the u k 's and v k 's, whereas L also depends on the parameters λ and µ; we do not emphasize these dependencies in the notation.
Comparing with [20, equation (5.1)], we see that the matrix L 1 has as columns an orthonormal basis for the orthogonal complement of U in T U S. Analogously, the columns of L 2 form an orthonormal basis for the orthogonal complement of V in T V S. Consequently, for Ψ(a, b), the Terracini matrix from (2.6) can be chosen accordingly, and so the stated identity follows, having used the notation from Definition 3.1 and the fact that singular values are invariant under orthogonal transformations such as permutations of columns.
3.3. Bounding the integral. We are now ready to conclude the proof of Theorem 1.5 by showing that the expected value of the condition number of tensor rank decomposition is infinite. By (2.6), the condition number at Φ(Ψ(a, b)) is the inverse of the smallest singular value of the Terracini matrix U from (3.9). Therefore, if we plug (3.9) and (3.10) into (3.3), then we get an expression of the form

∫ . . . dλ du dµ dv, (3.11)

where q(U ) = vol(U )/ς min (U ) is as in (2.2), and (3.12) holds. From (3.9) it is clear that U is a function of u and v but is independent of λ and µ. Therefore, if we integrate first over λ and µ, then we can ignore the factor q(U ). In Appendix A.1 we compute this integral; the result is stated here as the next lemma.
The foregoing integral can be bounded from below by exploiting the next lemma, which is proved in Appendix A.2.

Lemma 3.3. Let x, y ∈ S(R p ) be two unit-norm vectors and s ≥ 1. Then, there exists a constant k = k(p, s) independent of x, y such that the stated bound holds.

Combining the foregoing lemmata and plugging the result into (3.11), we obtain a lower bound. Next, we exploit the symmetry of the domain S(R n1 ) by flipping the sign of v 1 and, hence, of V . This substitution transforms U into U D, where D is a diagonal matrix with some pattern of ±1 on the diagonal. Since D is orthogonal, q(U ) = q(U D), so that the integral is unchanged. Denote this last integral by J; it remains to show that J = ∞. Consider the open set D(ε). We now need two lemmata. The first one is straightforward.

Lemma 3.4. Let ε > 0 be sufficiently small. For all (u, v) ∈ D(ε) with u = (u 1 , . . ., u d ) and v = (v 1 , . . ., v d ), we have the stated two-sided bound.

Proof. For proving the upper bound, apply the triangle inequality to the telescoping sum.

The second one is the final piece of the puzzle. We prove it in Appendix A.3.

Lemma 3.5. For sufficiently small ε > 0 and for all (u, v) ∈ D(ε), we have the stated lower bound, where U is the matrix that depends on u and v as in (3.9) and q is as in (2.2).
Combining Lemmata 3.4 and 3.5 with (3.13) we find a lower bound where c > 0 is some constant. Note that the integrand in this bound only depends on u 1 and v 1 . By definition of D(ε), for each 2 ≤ k ≤ d, if we fix u k , the domain of integration of v k contains the difference of two spherical caps of respective affine radii (9/10)‖u 1 − v 1 ‖ and ‖u 1 − v 1 ‖. From (2.8), the volume of this difference of caps is greater than a constant times ‖u 1 − v 1 ‖ n k −1 . By rotational invariance, the inner integral does not depend on u 1 ; moreover, for small ε, projecting through the stereographic projection (which has a Jacobian bounded above and below by a positive constant close to its center), we conclude that, for some other constant c′, the integral diverges. This proves J = ∞, so that E κ(A) = ∞ for tensors of rank bounded by 2, constituting a proof of Theorem 1.5.

4.
The average condition number: from rank 2 to higher ranks

Having established that the average condition number of tensor rank decomposition of rank-2 tensors is infinite, we extend this result to higher ranks. That is, we will prove Theorem 1.6. As before, we abbreviate S := S n1,...,n d , σ r := σ r,n1,...,n d , N r := N r,n1,...,n d , and M r := M r,n1,...,n d .
We proceed with an observation that is of independent interest.

Proof. First we observe that A is r-identifiable and B is s-identifiable. Indeed, if the tensor C = A + B is (r + s)-identifiable, then the unique set C of cardinality |C| ≤ r + s consisting of rank-1 tensors summing to C is C = {A 1 , . . ., A r , B 1 , . . ., B s }. If A had an alternative decomposition {A′ 1 , . . ., A′ r′ }, potentially of a shorter length r′ ≤ r, then {A′ 1 , . . ., A′ r′ , B 1 , . . ., B s } would be an alternative decomposition of C. Hence, {A′ 1 , . . ., A′ r′ } needs to equal {A 1 , . . ., A r }, so that A is r-identifiable. By symmetry, the result for B follows. For all i, let U i be a matrix with orthonormal columns that span T Ai S n1,...,n d , and V i be a matrix with orthonormal columns that span T Bi S n1,...,n d . Consider the matrices U = [U 1 , . . ., U r ] and V = [V 1 , . . ., V s ]; then κ(A), κ(B), and κ(C) are the inverse smallest singular values of U , V , and [U V ], respectively.
The claim follows from standard interlacing properties of singular values; see [51, Chapter 3].
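The interlacing argument can be checked directly: appending columns to a matrix can only shrink its smallest singular value, so the inverse smallest singular value of the joint matrix [U V] dominates those of U and V separately. A small numerical sketch (matrix sizes are an arbitrary choice of ours):

```python
import numpy as np

rng = np.random.default_rng(5)
U = rng.standard_normal((20, 4))
V = rng.standard_normal((20, 3))

# Appending columns can only shrink the smallest singular value
# (interlacing), so 1/ς_min([U V]) ≥ max(1/ς_min(U), 1/ς_min(V)):
# the condition number of the joint decomposition dominates its parts.
smin = lambda M: np.linalg.svd(M, compute_uv=False)[-1]
joint = smin(np.column_stack([U, V]))
print(joint, smin(U), smin(V))
```

Indeed, ς min ([U V]) = min over unit vectors x of ‖[U V]x‖, and restricting x to vectors supported on the U-block already yields ς min (U).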
(2) Let A ∈ σ r be r-identifiable.

Finally, the next lemma is the key to Theorem 1.6, providing a lower bound for the Jacobian determinant of φ in a special open subset of σ 2 × S ×(r−2) . We postpone its proof to Appendix B.

Lemma 4.3. On top of Assumption 1, we assume that σ r−2;n1−2,...,n d −2 is generically complex identifiable. Then, there are constants µ, ε, ν 1 , . . ., ν r−2 > 0 depending only on r, n 1 , . . ., n d with the following property: For all B ∈ N 2 there exists a tuple (A 1 , . . ., A r−2 ) ∈ S ×(r−2) with ‖A i ‖ = ν i and inf Jac(φ) ≥ µ in an ε-neighborhood of (B, A 1 , . . ., A r−2 ), where φ is as in Lemma 4.2.
Remark 4.4. Given any B ∈ σ 2 , by taking a sequence B (i) ⊆ N 2 converging to B one can generate the corresponding sequences A (i) 1 , . . ., A (i) r−2 ∈ S from Lemma 4.3. Now, by compactness we can find an accumulation point (A 1 , . . ., A r−2 ) ∈ S ×(r−2) . Since Jac(φ) is continuous and hence uniformly continuous when restricted to a compact set, by choosing ε small enough we can assure that for all B′ with ‖B′ − B‖ ≤ ε and for all A′ i with ‖A′ i − A i ‖ ≤ ε, we have Jac(φ)(B′, A′ 1 , . . ., A′ r−2 ) > µ/2, where ε and µ do not depend on B. Now we prove Theorem 1.6.
Proof of Theorem 1.6. Recall the surjective map φ : σ 2 × S ×(r−2) → σ r from Lemma 4.2. From Theorem 1.5 and the fact that κ(B) = κ(tB) for t > 0, there exists a tensor B ∈ σ 2 such that for every δ > 0 the corresponding integral diverges. From Lemma 4.3 and Remark 4.4, there exist tensors A 1 , . . ., A r−2 ∈ S such that Jac(φ)(B′, A′ 1 , . . ., A′ r−2 ) > µ/2 for all B′, A′ 1 , . . ., A′ r−2 such that ‖B′ − B‖ < ε, ‖A′ i − A i ‖ < ε, and B′ ∈ N 2 . Let U ⊆ N 2 × S ×(r−2) be the set of all (B′, A′ 1 , . . ., A′ r−2 ) satisfying the foregoing conditions. From Lemma 4.1, we have the corresponding comparison of condition numbers. Moreover, by Lemma 4.3 and the inverse function theorem, by taking ε and δ small enough we can assume that φ| U is a diffeomorphism onto its image and hence φ(U) is open. The coarea formula (2.9) thus applies, yielding the divergence of the integral over φ(U). The theorem follows since φ(U) ⊆ σ r .

5.1.
A characterization of the angular condition number as a singular value. We first derive a formula for the angular condition number in terms of singular values, similar to the one from (2.6). Recall from (1.6) that the angular condition number for rank r = 2 is κ ang (A) = ‖d A (p × p • Φ −1 a )‖, where p is the canonical projection onto the sphere and where Φ −1 a is a local inverse of Φ : S × S → σ 2 at a ∈ S ×2 with A = Φ(a). As before, the value of κ ang on σ 2 \ N 2 is not relevant for our analysis, so we do not specify it.
Proposition 5.1. Recall from (3.5) the definitions of the matrices M and L associated to A. The following equality holds: κ ang (a) = 1/ς min ((I − M M † )L), provided that the right-hand side is finite.
Proof. By Proposition 2.2, any local inverse Φ −1 a is differentiable at A = Φ(a) ∈ N 2 . The projection p is also differentiable, so that κ ang (a) = ‖d a (p × p) • d A Φ −1 a ‖ 2 , where ‖ · ‖ 2 is the spectral norm from (2.1). We compute this norm.
Let Ȧ ∈ T A N 2 and ( Ȧ 1 , Ȧ 2 ) = d A Φ −1 a ( Ȧ). Then, by linearity of the derivative, we have Ȧ = Ȧ 1 + Ȧ 2 . Furthermore, for i = 1, 2, the derivative d Ai p is the orthogonal projection onto the orthogonal complement of A i in R Π . According to this we decompose Ȧ 1 and Ȧ 2 as Ȧ 1 = Ȧ ⊥ 1 + λ̇ U and Ȧ 2 = Ȧ ⊥ 2 + µ̇ V , with Ȧ ⊥ 1 ⊥ A 1 and Ȧ ⊥ 2 ⊥ A 2 . Recall from (3.5) the matrices L = [λL 1 µL 2 ] and M = [U V ]. We can find vectors x 1 , x 2 ∈ R Σ−1 with Ȧ ⊥ 1 = λL 1 x 1 and Ȧ ⊥ 2 = µL 2 x 2 . Since we are assuming that (I − M M † )L is injective (otherwise ς min ((I − M M † )L) = 0), it has a left inverse and we can solve for the coefficients. Combining (5.2) and (5.3) finishes the computation; the second equality follows from (P L) † P = (P L) † , which is a basic property of the Moore–Penrose pseudoinverse holding for any orthogonal projector P . This finishes the proof.

5.2. Proof of Theorem 1.9. Now comes the actual proof of Theorem 1.9. Proceeding in exactly the same way as in Section 3.1 and using Proposition 5.1, we get the corresponding integral, where C 2 = C 2;n1,...,n d is as in Definition 1.3, P is as in (3.1), Q = [L M ] is as in (3.4), the volume vol is as in Definition 3.1, and the tuple is as in (3.12), so that A = λU + µV . Next, we relate vol(Q) to the volume of (I − M M † )L.
Proof. Let Q ⊥ be a matrix whose columns contain an orthonormal basis for the orthogonal complement of the column span of Q. Then, from the definition, where in the last step we just multiplied by a matrix whose determinant is 1. Performing the inner multiplication, we obtain two blocks. These two blocks are mutually orthogonal, since (I − M M † ) is the projection onto the orthogonal complement of the span of M , and hence the volume is the product of the volumes corresponding to each block. The assertion follows.
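Both linear-algebra facts used here — the factorization vol([L M]) = vol(M) · vol((I − M M†)L) from Lemma 5.2, and the projector identity (P L)† P = (P L)† invoked in the proof of Proposition 5.1 — are easy to verify numerically. A sketch (the matrix sizes are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(7)
L = rng.standard_normal((10, 4))
M = rng.standard_normal((10, 2))

vol = lambda A: np.sqrt(np.linalg.det(A.T @ A))
P = np.eye(10) - M @ np.linalg.pinv(M)   # orthogonal projector onto span(M)^⊥

# Lemma 5.2: unit-determinant column operations turn Q = [L M] into
# [(I − MM†)L  M], whose blocks are mutually orthogonal, so volumes multiply.
Q = np.column_stack([L, M])
print(vol(Q), vol(M) * vol(P @ L))

# Moore–Penrose identity for an orthogonal projector P: (P L)† P = (P L)†.
print(np.allclose(np.linalg.pinv(P @ L) @ P, np.linalg.pinv(P @ L)))
```

The projector identity holds because (P L)† = (LᵀP L)⁻¹ LᵀP when P L has full column rank, and P² = P.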
We use Lemma 5.2 to rewrite (5.4) as an integral of the form dλ du dµ dv, (5.5) where q is as in (2.2). Recall from (3.7) that M is independent of λ and µ. We first compute the integral over λ, µ using the next lemma, which we prove in Appendix C.1.
Lemma 5.3. Let L 1 , L 2 be the matrices defined as in (3.8), such that L = [λL 1 µL 2 ]. Inserting the results from this lemma into (5.5), we get an expression involving the integral J outer . In the remaining part of this section we show that J outer is bounded by a constant, which will conclude the proof. We do this by giving a sequence of upper bounds. We have no hope of providing sharp bounds, so rather than keeping track of all the constants, we will exploit the following definition for streamlining the proof.
First, note that vol(M ) = √(1 − ⟨U, V ⟩ 2 ). Next, we exploit the symmetry of S(R n1 ) and transform v 1 → −v 1 . This transformation flips the sign of V , but the value of q is not affected. Indeed, the matrix I − M M † still projects onto span(U, V ) ⊥ = span(U, −V ) ⊥ , and L 2 is transformed into L 2 D, where D is a diagonal matrix with some pattern of ±1 on the diagonal. Since [ I D ] is an orthogonal transformation, the singular values do not change. The next lemma is proved in Appendix C.2.
Lemma 5.5. Let θ ∈ [0, π/2] and fix θ, u and v. There is a constant K > 0, depending only on n 1 , . . ., n d and d, such that the stated bound (5.6) holds.

The lemma implies the bound (5.7) on J outer . For bounding the integral over θ we need the next lemma, which we prove in Appendix C.3.

Lemma 5.6. Let a > 1, p ≥ 1. There exists a constant K > 0, depending only on a, such that for any unit vectors x, y ∈ S(R p ) with x ≠ y, we have the stated bound.

Applying this lemma to (5.7), we arrive at a further bound. By orthogonal invariance, we may fix u k ∈ S(R n k ) to be u k = (1, 0, . . ., 0), and integrate the constant function 1 over one copy of S(R n k ). Ignoring the product of volumes, the remaining spherical integral is particularly simple because the integrand depends only on one of the components of each vector. One can thus transform each integral over a sphere into an integral over an interval (see for example [10, Lemma 1]). For this last integral we consider the partition of the cube [−1, 1] d into 2 d pieces corresponding to the different signs of the coordinates. In the pieces where the number of negative coordinates is odd, the denominator of the integrand is bounded below by 1 and thus the whole integrand is also bounded above by 1. Hence it suffices to check that the integral over the rest of the pieces is bounded. Assume now that t i1 , . . ., t i k with k ≥ 2 even are the negative coordinates in some particular piece of the partition. The mapping that leaves all coordinates fixed but maps t i k−1 → −t i k−1 and t i k → −t i k preserves the integrand and moves the domain to another piece of the partition with k − 2 negative coordinates. This process can then be repeated until none of the coordinates is negative. All in all, after a change of variables we arrive at the bound (5.8). The next lemma is proved in Appendix C.4.
Using the lemma and the inequality sin(θ) < θ on 0 < θ ≤ π/2, we find that the integral in (5.8) is bounded by a constant times a simpler integral. Changing the name of the variables to x 1 , . . ., x d and integrating over the d-dimensional ball of radius (π/2)√d, which contains the domain [0, π/2] d , we get a new upper bound for the last integral. By passing to polar coordinates we see that this upper bound is finite. This shows J outer < ∞, implying E κ ang (A) < ∞ and finishing the proof of Theorem 1.9.

6. Other random tensors: proof of Theorem 1.11
We demonstrate how our main results can be extended to many other distributions as well.
Consider the first item of Theorem 1.11. We assume that A ∈ σ r;n1,...,n d has the density ρ̃ and that there exist positive constants c 1 , c 2 such that c 1 ≤ ρ̃/ρ ≤ c 2 , where ρ is the density of a GIT. Then, for any measurable function f (A) we have

c 1 E A∼ρ f (A) ≤ E A∼ρ̃ f (A) ≤ c 2 E A∼ρ f (A).

Thus, E A∼ρ̃ f (A) = ∞ if and only if E A∼ρ f (A) = ∞. Replacing f by κ and κ ang proves the first part of Theorem 1.11.
By [20, Proposition 4.4], κ is invariant under multiplication of A by a scalar. Therefore, the expected value of κ for the Gaussian is equal to the expected value when A is chosen uniformly in the unit ball, and also when A is chosen uniformly in the unit sphere of the space of tensors; see, e.g., [29, Section 2.2.4]. This proves the second and third item of Theorem 1.11 for κ.
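In formulas, the scale-invariance argument can be sketched as follows (a sketch in our notation, with Z the Gaussian normalization constant; the exponent convention e^{−‖A‖²/2} matches the radial integral computed in Appendix A.1):

```latex
\mathbb{E}_{A\sim\mathrm{Gauss}}\,\kappa(A)
  \;=\; \frac{1}{Z}\int_{\sigma_r}\kappa(A)\,e^{-\|A\|^2/2}\,\mathrm{d}A
  \;=\; \frac{1}{Z}\,
        \underbrace{\Bigl(\int_0^{\infty} t^{\,\dim\sigma_r-1}\,e^{-t^2/2}\,\mathrm{d}t\Bigr)}_{<\,\infty}
        \int_{\mathbb{S}(\sigma_r)}\kappa(B)\,\mathrm{d}B,
```

using κ(tB) = κ(B) to pull κ out of the radial integral. Since the radial factor is a finite positive constant, the Gaussian expectation and the uniform expectations over the unit sphere and the unit ball of σ r are finite or infinite simultaneously, and the normalizations make them equal.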
For κ ang we need the following lemma.
Proof. Since A is r-nice, we have κ ang (A) = ‖d A (p ×r • Φ −1 a )‖. Similarly to (5.1), we can show the corresponding formula, where the tuple is the corresponding decomposition of the tangent vector. The derivative d Ai p is the orthogonal projection onto A ⊥ i and is independent of scaling. Moreover, σ r;n1,...,n d is a cone and so T A σ r;n1,...,n d can be identified with T tA σ r;n1,...,n d . This shows that after scaling the tensor A we get κ ang (tA) = κ ang (A) for all t > 0. Since our results for κ ang are for rank-2 tensors, we put r = 2 in the following. We also abbreviate σ 2 := σ 2;n1,...,n d and C 2 := C 2;n1,...,n d . Then, using Lemma 6.1 we can integrate in polar coordinates to express the expected value as a constant multiple of the integral of κ ang over the unit sphere in σ 2 .
It follows immediately that the last integral is finite, proving that a randomly chosen A ∈ S(σ 2 ) has finite expected κ ang . Finally, if A is chosen randomly in the unit ball in σ 2 , the same argument shows that the expected value is again finite. This finishes the proof of Theorem 1.11.

7. Numerical experiments
Having proved that the expected value of the condition number is infinite in most cases, we provide further computational evidence in support of Conjecture 1.8. To this end, a natural idea is to perform Monte Carlo experiments in a few of the unknown cases, as in [23].
Sampling GITs is hard in practice, as the defining polynomial equalities and inequalities of the semialgebraic set σ r = σ r;n1,...,n d of tensors of rank bounded by r are not known in the literature. Nevertheless, there are a few cases that we can treat numerically. If r = Π/Σ and the algebraic closure σ r (C) has dim σ r (C) = Π, a so-called perfect tensor space, then σ r is an open subset of the ambient R n1×···×n d ; see, e.g., [15, 57].
From Remark 1.4, we can sample from the density ρ on σ r via an acceptance-rejection method: Randomly sample tensors A from the Gaussian density proportional to e −‖A‖ 2 /2 on R n1×···×n d until we find one that belongs to σ r . While this scheme will yield tensors distributed according to the density ρ on σ r , it does not yield Gaussian identifiable tensors in general. The reason is that most perfect tensor spaces are not (expected to be) generically r-identifiable [47]. Fortunately, there are a few known exceptions: matrix pencils (R n×n×2 for all n ≥ 2), R 5×4×3 , and R 3×2×2×2 are proved to be generically complex r-identifiable for r = Π/Σ. By applying the acceptance-rejection method to these spaces, every sampled tensor is a GIT with probability 1.
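For intuition, in the smallest pencil case R^{2×2×2} the acceptance test can be made fully explicit without homotopy continuation: a generic real 2×2×2 tensor has real rank 2 (the generic complex rank) precisely when the pencil of its two slices has real generalized eigenvalues. The following self-contained sketch of ours (not the homotopy-based implementation used in the experiments below) estimates the acceptance rate, which by [13] should be π/4 ≈ 0.785:

```python
import numpy as np

rng = np.random.default_rng(4)

def accepts(T):
    """Accept T ∈ R^{2×2×2} iff the pencil of its slices has real
    eigenvalues, i.e. iff T has real rank 2."""
    A, B = T[:, :, 0], T[:, :, 1]
    lam = np.linalg.eigvals(np.linalg.solve(B, A))   # B invertible a.s.
    return bool(np.all(np.abs(lam.imag) < 1e-12))

n = 20_000
rate = sum(accepts(rng.standard_normal((2, 2, 2))) for _ in range(n)) / n
print(rate)   # close to π/4 ≈ 0.785
```

Each accepted sample is then a GIT for the pencil space R^{2×2×2}, exactly as in the scheme described above.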
For numerically checking if a random tensor A ∈ R n1×···×n d in a perfect tensor space lies in σ r with r = Π/Σ, we apply a homotopy continuation method to the square system of Π equations, where the Π = rΣ entries of the a k i 's are treated as variables, and the n 1 × · · · × n d tensor A is the tensor to decompose. We generate a start system with one solution to track by randomly sampling the entries of the vectors a k i i.i.d. from a real standard Gaussian distribution and then constructing the corresponding tensor A 0 . Since r = Π/Σ is the so-called generic rank of tensors in perfect tensor spaces C n1×···×n d , the above system has at least one complex solution with probability 1 as well. If we consider complex r-identifiable perfect tensor spaces at the

Table 7.1. Sampling tensors in σ r ⊂ R n1×n2×n3 via an acceptance-rejection method. Columns three to five list the number of samples where the final tracked solution of the homotopy was real, complex, or failed, respectively. The next column shows the fraction of successful samples that were real; in the case of n × n × 2 the analytical solution from [13] is also stated and the correct digits from the empirical estimate are underlined. The final column indicates the total wall-clock time required to perform the Monte Carlo experiments.
generic rank, we can thus determine if A ∈ σ r by solving the square system and checking whether the unique solution is real. Assuming that we use a certified homotopy method such as alphaCertified [48], this approach will correctly classify A with probability 1, thus not impacting the overall distribution produced by the acceptance-rejection scheme. We implemented the above scheme in Julia 1.0.3 using version 0.4.3 of the package HomotopyContinuation.jl [19], employing the solve function with default parameter settings. We deem a solution real if the norm of the imaginary part is less than 10 −8 . Note that this package does not offer certified tracking; however, the failure rate observed in our experiments was very low, namely 0.0512498%; see Table 7.1. For this reason, we are convinced that the distribution produced by the acceptance-rejection scheme is very close to the true distribution.
We performed the following experiment for estimating the distribution of the condition numbers of GITs of generically complex r-identifiable tensors in perfect tensor spaces with r = Π/Σ, the complex generic rank. As explained above, we randomly sampled an element A of R n1×···×n d from the Gaussian density by choosing its entries i.i.d. standard normally distributed. Then, we generated one random start system and applied the solve function from HomotopyContinuation.jl for tracking the start solution A 0 to the target A. If the final solution of the square system was real, we recorded both the regular and angular condition numbers at the CPD of A computed via homotopy continuation. These computations were performed in parallel using 20 computational threads until 100,000 finite, nonsingular, real solutions and corresponding condition numbers were obtained. This experiment was performed on a computer system consisting of 2 Intel Xeon E5-2697 v3 CPUs with 12 cores clocked at 2.6GHz and 128GB main memory. Information about the sampling process via the acceptance-rejection method is summarized in Table 7.1, and Figure 7.1 visualizes the complementary cumulative distribution functions of the regular and angular condition numbers.
In Table 7.1, the total fractions of solutions that are real when sampling random Gaussian tensors (with i.i.d. standard normally distributed entries) seem to agree very well with the known theoretical results by Bergqvist and Forrester [13]; they showed that for random Gaussian n × n × 2 tensors the rank is n = Π/Σ with a probability given by an explicit expression in terms of the gamma function Γ and the Barnes G-function (or double gamma function). The correct digits in the numerical approximation are underlined in the penultimate column of Table 7.1.
The empirical complementary cumulative distribution functions of the regular and angular condition numbers are shown in Figure 7.1. The full lines correspond to the empirical data and the thinner dashed lines correspond to the fitted model; both plots are on the same scale. From the figure it is reasonable to postulate that the complementary cumulative distribution function c(x) for large x has the form c(x) = a x −b , which corresponds to a straight line in the log-log plot in Figure 7.1. We fitted the parameters a and b of the postulated model to the data restricted to the range 10 −3 ≤ c(x) ≤ 10 −1 . The reason for restricting the data set is that for small condition numbers it is visually evident in Figure 7.1 that the model is incorrect, and for large condition numbers the data contains few samples, which negatively impacts the robustness of the fit. The parameters were fitted using fminsearch from Matlab R2017b with starting point (1, 1) and default settings. In all cases the algorithm terminated because the relative change of the parameters fell below 10 −4 . The obtained parameters are shown in Table 7.2, along with the coefficient of determination R 2 between the log-transformed data and log-transformed model predictions; 1 indicates perfect correlation.
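The fitting procedure just described amounts to fitting a straight line to the log-transformed tail of the empirical distribution. A sketch of ours of this idea (using a synthetic Pareto sample with known tail exponent in place of the condition-number data, and least squares instead of fminsearch):

```python
import numpy as np

rng = np.random.default_rng(3)
b_true = 1.8
x = rng.uniform(size=20_000) ** (-1.0 / b_true)   # tail c(x) = x^{-1.8}, x ≥ 1

xs = np.sort(x)
ccdf = 1.0 - np.arange(1, xs.size + 1) / xs.size  # empirical c(x)

# restrict to 1e-3 ≤ c(x) ≤ 1e-1 as in the text, then fit
# log c(x) = log a − b log x by least squares
mask = (ccdf >= 1e-3) & (ccdf <= 1e-1)
slope, intercept = np.polyfit(np.log(xs[mask]), np.log(ccdf[mask]), 1)
b_hat = -slope
print(b_hat)
```

The recovered exponent b_hat approximates the true tail exponent 1.8, mirroring how the parameters in Table 7.2 are extracted from the log-log plot.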
We may estimate the expected values of the regular and angular condition numbers based on their empirical distributions. If p(x) denotes the probability density function of the regular condition number, then E κ = ∫ x p(x) dx. From the postulated model of c(x) we find that p(x) = a b x −b−1 for large x, so that we postulate that the expected value will be well approximated by a finite term plus a constant times ∫ κ0 ∞ x −b dx for some finite κ 0 . The expression on the right is finite only if b > 1. The same discussion applies to the angular condition number as well. Regarding the estimated parameters in Table 7.2, our empirical data strongly suggests that the expected value of the condition number is infinite for r = Π/Σ in the tested cases, as b ≈ 0.6 < 1. On the other hand, the expected angular condition number seems finite in all cases, as b ≈ 1.8 > 1. This suggests that both Theorem 1.6 and Theorem 1.9 might hold for higher ranks as well.
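The dichotomy between b < 1 and b > 1 is visible already in sample means of synthetic heavy-tailed data. For a Pareto law with tail c(x) = x^{−b} on x ≥ 1, the mean is b/(b − 1) when b > 1 and infinite when b ≤ 1; a small simulation of ours with the two exponents observed in the experiments (0.6 and 1.8):

```python
import numpy as np

rng = np.random.default_rng(2)

# Pareto law with tail c(x) = P(X > x) = x^{-b} on x ≥ 1 and density
# p(x) = b x^{-b-1}: the mean is b/(b−1) if b > 1 and ∞ if b ≤ 1.
def pareto_sample(b, n):
    return rng.uniform(size=n) ** (-1.0 / b)   # inverse-CDF sampling

n = 1_000_000
m_heavy = pareto_sample(0.6, n).mean()   # b = 0.6 < 1: blows up with n
m_light = pareto_sample(1.8, n).mean()   # b = 1.8 > 1: → 1.8/0.8 = 2.25
print(m_heavy, m_light)
```

With b = 0.6 the running mean grows without bound as the sample size increases (exactly the behavior expected of the regular condition number), while with b = 1.8 it stabilizes near 2.25 (as expected of the angular condition number).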
Appendix A. Proofs of the lemmata from Section 3.

A.1. Proof of Lemma 3.2. Integrating in polar coordinates λ = ρ cos(θ), µ = ρ sin(θ), the change of variables t := ρ ‖cos(θ)U + sin(θ)V ‖ transforms the integral for ρ into a multiple of ∫ 0 ∞ t 2Σ−1 e −t 2 /2 dt. The latter equals 2 Σ−1 Γ(Σ). Plugging this into the equation above shows the assertion.
A.2. Proof of Lemma 3.3. Expanding the denominator and applying the change of variables ϕ = 2θ, the lemma reduces to proving the corresponding estimate (for the first equality, make the change of variables ϕ → π/2 − ϕ), where we denote a := |⟨x, y⟩| ∈ [0, 1]. With the + sign the inequality is clear (choosing an appropriate k(s)). For the other case we have, up to a constant k(s), the stated lower bound. Thus, it suffices to check the remaining estimate, but this is easily checked with the change of variables ϕ 2 = (1 − a)t.
Finally, we will also use the following.

Transforming q(U ). For computing q(U ), we first make a convenient orthogonal transformation of U 's columns. Recall the notation from (3.9). The columns of M are rotated accordingly, while the columns of L 1 and L 2 are rotated as follows. We define the Π × (Σ − 1) matrices R ↓ and R ↑ . The reason for using ↓ and ↑ will become clear from the computations below: Inner products of two quantities with arrows pointing in opposite directions are zero, and swapping the directions of both arrows flips a sign in the expression of the inner product. Now, instead of considering the matrix U we work with the matrix N. By construction, [M L 1 L 2 ] = N Q for some orthogonal matrix Q. Note that the choice of U k and V k does not affect the value of q. In particular, we may choose the matrices as follows. Let H be the plane in R n k spanned by u k and v k . By definition of D(ε), u k ≠ ±v k . Let O be the rotation that sends u k to v k but leaves the orthogonal complement H ⊥ of H fixed. Take u k 2 ∈ H with u k 2 ⊥ u k as the unit-norm vector making the smallest angle with v k . We also take v k 2 := O u k 2 . If h 3 , . . ., h n k is any orthonormal basis of H ⊥ , then our choice of bases is

(A.8) u k 2 and v k 2 = O u k 2 as above, and for 3 ≤ j ≤ n k : u k j = v k j = h j .

In other words, we can assume that all but the first columns of U k and V k are equal. Moreover, the following properties hold, where Q is a rotation by π/2 radians in the same direction as the rotation O. In particular, we have the stated identities, where e k := (1, 0, . . ., 0) T ∈ R n k −1 .
Consider then the Gram matrix G = N T N for this particular choice of tangent vectors: We continue by computing its entries.
Inner products involving only a. Using (2.4) for computing inner products of rank-1 tensors, we obtain the stated values.
Putting everything together. From the definition of the vector f in (A.11), it is clear that we can construct a permutation matrix P that moves the nonzero elements of f to the first d positions, where 0 is a vector of zeros of length Σ − 1 − d. Applying P T on the right of the R's yields

(A.12) R ↓ P T = [T ↓ S ↓ ] and R ↑ P T = [T ↑ S ↑ ],

where the T matrices are respectively given as stated, and analogously for T ↑ , replacing the subtraction by an addition; the matrices S ↓ and S ↑ contain the remainder of the columns of R ↓ and R ↑ , respectively.
Hence, by swapping rows and columns of G, which leaves the value of q unchanged because these are orthogonal operations, we find that q(G) = q(G′).

Bounding q(G′). To simplify further, we write the matrix in block form, where we swapped some rows and columns again. Because the smallest singular value of the matrix G ↑ − zhh T is larger than or equal to the smallest singular value of the corresponding positive semidefinite matrix, we obtain the bound (A.15). We now obtain bounds on the individual factors on the right-hand side of (A.15). First, we compute the determinants of the diagonal matrices, where δ max := max{δ 1 , . . ., δ d }.
Note that we used the bound from the definition of D(ε) in the second inequality. As a result, we obtain det(F ↓ ) det(F ↑ ) > 81/100. The final determinant in (A.15) can be computed as follows. Note that zhh T is a symmetric matrix with one positive eigenvalue and all others zero. Hence, it follows from Weyl's inequalities [51, Theorem 4.3.7] that the eigenvalues of G ↓ cannot decrease by adding zhh T . Next, we bound 1 − z(1 + 2ε 1 ) from below and above. From (A.3) and (A.4) we have the stated estimates for some universal constant C > 0. For obtaining the lower bound, note that
The proof can be completed by a fortunate application of Geršgorin's theorem [43, Satz III]. According to this theorem, the eigenvalues are contained in the Geršgorin discs disc 0 , disc 1 , . . ., disc d . By choosing ε small enough, we can get z as close to 1 and the ε k as close to 0 as we want. Therefore, if we choose ε small enough, disc 0 is a small disc near zero, and the discs disc k are small, pairwise overlapping discs near two. When ε is sufficiently small, disc 0 is disjoint from the other discs, so that disc 0 contains exactly one eigenvalue close to 0, and ∪ i=1 d disc i contains d eigenvalues close to 2. Furthermore, since we are dealing with symmetric matrices, all the eigenvalues are real. Therefore, for sufficiently small ε, the stated bound holds. Finally, plugging (A.16), (A.19), and (A.20) into (A.15), we find the assertion. This finishes the proof.
Appendix B. Proof of Lemma 4.3. For the proof of Lemma 4.3, we need the following two simple results. The first one is well known.

Proof. Let us write
and the analogous tensors, where v ∈ R n k ; these constitute a generating set of T A S n1,...,n d . A similar statement holds for T B S n1,...,n d . Then, by (2.4), the inner product between two such generators vanishes, because d ≥ 3 and ⟨a j , b j ⟩ = 0 for all 1 ≤ j ≤ d. This completes the proof.
Lemma B.2. For all 1 ≤ k ≤ d, let A k ⊂ R n k be a linear subspace of dimension m k , and define the tensor space V. Then, the following holds.
(2) For 1 ≤ k ≤ d, let U k ∈ R n k ×m k be a matrix whose columns form an orthonormal basis for A k . The corresponding map U sends S m1,...,m d onto S n1,...,n d ∩ V. Hence, S n1,...,n d ∩ V is the image of a manifold under an invertible linear map. This implies that S n1,...,n d ∩ V itself is a manifold. Furthermore, U preserves the Euclidean inner product, and so the manifolds S n1,...,n d ∩ V and S m1,...,m d are isometric.
We are now ready to prove Lemma 4.3.
Recall that the restriction of φ to N * r . In the terminology of [20], N * r is the join of N 2;n1,...,n d and r − 2 copies of S n1,...,n d . Let B ∈ N 2;n1,...,n d . By construction, the tensor B has multilinear rank bounded by (2, . . ., 2), meaning that there exists a tensor subspace W. Now, we make the following choice of rank-1 tensors A 1 , . . ., A r−2 ∈ S n1,...,n d ∩ W: for 1 ≤ k ≤ d, let U k ∈ R n k ×(n k −2) be a matrix whose columns form an orthonormal basis of A ⊥ k . Then, by Lemma B.2, the corresponding map is an isometry of manifolds. For 1 ≤ i ≤ r − 2 we define A i := U (X i ), where the X i are the rank-1 tensors from above. The plan for the rest of the proof is to show that Jac(φ)(B, A 1 , . . ., A r−2 ) is a finite value which does not depend on B, and that there is a neighborhood around (A 1 , . . ., A r−2 ), whose size is also independent of B, on which the Jacobian does not deviate too much.
To show this, we first compute the derivative of φ at the point (B, A_1, ..., A_{r−2}). Let V be a matrix whose columns form an orthonormal basis of T_B N_{2;n_1,...,n_d} and, for 1 ≤ i ≤ r − 2, let W_i ∈ R^{Π×Σ} be a matrix whose columns form an orthonormal basis of T_{A_i} S_{n_1,...,n_d}. Then the Jacobian can be expressed in terms of V and the W_i, where the last equality holds because V has orthonormal columns. Let us further investigate the W_i. By (2.3), the tangent space T_{A_i} S_{n_1,...,n_d} is spanned by the rank-1 tensors obtained by perturbing one factor of A_i at a time. Moreover, by Lemma B.2, S_{n_1,...,n_d} ∩ W is a manifold, and its tangent space at A_i can be described explicitly. For all 1 ≤ k ≤ d let {t^k, s^k} be an orthonormal basis of A_k, giving rise to the spaces K_i and L_i; because ‖a_i^j‖ = 1, the tensors listed even form orthogonal bases for K_i and L_i, respectively. Furthermore, we have for all pairs of indices i, j that T_{A_i}(S_{n_1,...,n_d} ∩ W) ⊥ K_j, T_{A_i}(S_{n_1,...,n_d} ∩ W) ⊥ L_j, and K_i ⊥ L_j.
Therefore, we have the following orthogonal decomposition, and the columns of U X_i form an orthonormal basis of T_{A_i}(S_{n_1,...,n_d} ∩ W). Altogether, we obtain an expression for the Jacobian in which both products range over 1 ≤ h ≤ d. The rest of the proof is a variational argument. Let us denote by Gr(Σ, Π) the Grassmann manifold of Σ-dimensional linear subspaces of R^Π. We endow Gr(Σ, Π) with the standard Riemannian metric, so that the distance between two subspaces is the Euclidean length of their vector of principal angles [14]. Let us denote this distance by d(·, ·). Furthermore, let G : S_{n_1,...,n_d} → Gr(Σ, Π), A ↦ T_A S_{n_1,...,n_d}, be the Gauss map.
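The Grassmann distance used above is computable from a singular value decomposition: for subspaces spanned by the orthonormal columns of U and V, the principal angles are θ_i = arccos(s_i), where the s_i are the singular values of U^T V. The following sketch (the helper name `grassmann_distance` is ours, purely illustrative) verifies this on two planes in R^4 differing by a rotation of angle t, whose principal angles are (t, 0).

```python
import numpy as np

def grassmann_distance(U, V):
    """Euclidean length of the vector of principal angles between the
    column spans of U and V (both assumed to have orthonormal columns)."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    s = np.clip(s, -1.0, 1.0)  # guard against round-off leaving [-1, 1]
    return np.linalg.norm(np.arccos(s))

# Two 2-dimensional subspaces of R^4: the second is the first rotated by
# angle t in the (e1, e3)-plane, so the principal angles are (t, 0).
t = 0.3
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
V = np.array([[np.cos(t), 0.0], [0.0, 1.0], [np.sin(t), 0.0], [0.0, 0.0]])
print(np.isclose(grassmann_distance(U, V), t))  # True
```
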
From [23, Proposition 4.3] we get for each 1 ≤ i ≤ r − 2 that ‖d_{A_i} G‖ ≤ √Σ. This means that, for ε > 0 and any tuple (A'_1, ..., A'_{r−2}) of points of S_{n_1,...,n_d} satisfying ‖A_i − A'_i‖ < ε for 1 ≤ i ≤ r − 2, we have d(T_{A_i} S_{n_1,...,n_d}, T_{A'_i} S_{n_1,...,n_d}) < 2√Σ ε for sufficiently small ε. Let W'_i ∈ R^{Π×Σ} be a matrix with orthonormal columns that span T_{A'_i} S_{n_1,...,n_d}. Consequently, there is a constant K > 0, which depends only on n_1, ..., n_d, such that the columns of the W'_i are, up to an error of at most Kε, orthogonal to the columns of V. Moreover, rΣ ≤ Π, since we have assumed that σ_{r;n_1,...,n_d} is generically complex identifiable. Hence, there is a matrix M with M^T M close to I_Π, where I_Π is the Π × Π identity matrix. Therefore, if we choose ε small enough, we may assume det M > 1/2. Note that such a choice of ε is independent of B. Altogether, we have shown the asserted bound for all tuples (A'_1, ..., A'_{r−2}) of points of S_{n_1,...,n_d} satisfying ‖A_i − A'_i‖ < ε. Note that the argument of q is a Π × (2Σ − 2) matrix. Since q(A) = ς_1(A) ⋯ ς_{n−1}(A) for A ∈ R^{m×n} with n ≤ m, we obtain the desired estimate. This yields the claim.

C.2. Proof of Lemma 5.5.

First, note that the left-hand term in (5.6) is bounded above by a constant depending on n_1, ..., n_d, and d. Thus, by choosing an appropriate constant K in (5.6), we may assume that ‖U − V‖ is smaller than any predefined quantity. We thus assume from now on that ‖U − V‖ ≤ ε for some ε > 0 that can be chosen as small as desired. Furthermore, as in (A.2), we write U and V in terms of the quantities δ_k and ε_k. We also assume that δ_1 = min{δ_1, ..., δ_d}, or, equivalently, ε_1 = max{ε_1, ..., ε_d}. Note that, if ε ≈ 0, then one can assume ε_k ≈ 0 and δ_k ≈ 1 for all 1 ≤ k ≤ d. In particular, all inequalities from the proof of Lemma 3.5 remain valid, so that AB = (I − MM^†)[cos(θ)L_1  sin(θ)L_2]. Observe that, by definition, A ultimately depends on u and v; that is, A = A(u, v).
Simplifying the matrix by orthogonal transformations. Recall from the proof of Lemma 3.5 that, applying the orthogonal transformation in (A.7) on the right and recalling (A.12), we have ς_i = ς_i((I − P)[S_↑  S_↓  T_↑  T_↓]), where P = MM^† and ς_i(A) denotes the i-th largest singular value of the matrix A. Let a_↑ and a_↓ be as in (A.6). The matrix MM^† projects orthogonally onto the span of U and V, which coincides with the span of the orthonormal vectors a_↑/‖a_↑‖ and a_↓/‖a_↓‖. Moreover, by (A.10) we have a_↓^T a_↓ = 1 + z and a_↑^T a_↑ = 1 − z. This shows that P = (1 − z)^{−1} a_↑ a_↑^T + (1 + z)^{−1} a_↓ a_↓^T. Next, it follows from (A.14) that P S_↑ = 0 and P S_↓ = 0, so that ς_i = ς_i(N), where N := [S_↑  S_↓  (I − P)T_↑  (I − P)T_↓].
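The projector P = MM^† used above is an instance of a standard fact: for any matrix M with linearly independent columns, MM^+ (with M^+ the Moore-Penrose pseudoinverse) is the orthogonal projector onto the column span of M. A minimal numerical sketch, on a generic matrix unrelated to the proof:

```python
import numpy as np

# P = M M^+ projects orthogonally onto the column span of M: it is
# idempotent, symmetric, and fixes every column of M.
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 2))
P = M @ np.linalg.pinv(M)

print(np.allclose(P @ P, P))   # idempotent
print(np.allclose(P.T, P))     # symmetric, hence an orthogonal projector
print(np.allclose(P @ M, M))   # fixes the column span of M
```
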
Computing the Gram matrix. Next, we compute the Gram matrix N^T N of N. Consider again (A.14), from which all of the following computations follow; here the F's are the diagonal matrices from (A.13). From the symmetry of P we obtain S_↓^T P = 0 and S_↑^T P = 0, so that

S_↑^T(I − P)T_↑ = S_↑^T T_↑ − 0 = 0,   S_↓^T(I − P)T_↑ = S_↓^T T_↑ − 0 = 0,
S_↑^T(I − P)T_↓ = S_↑^T T_↓ − 0 = 0,   S_↓^T(I − P)T_↓ = S_↓^T T_↓ − 0 = 0.

The remaining blocks are found analogously. The eigenvalues of E_↓ + (z/(1+z)) gg^T can be bounded by using that the spectral norm of the rank-1 term is bounded by ‖g‖² ≤ d ε_1². It follows from Weyl's perturbation inequality (see, e.g., [51, Corollary 7.3.8]) and the positive semidefiniteness of E_↓ + (z/(1+z)) gg^T that its eigenvalues are nonnegative. From the definition of δ_1 and ε_1, we have ε_1² ≤ 2(1 − δ_1), provided that ε is sufficiently small. The eigenvalues of the diagonal matrix E_↓ are bounded from above by C(d + 1)²ε_1² due to (A.17). Putting all of these together and using d ≥ 1, we find

λ_i(E_↓ + (z/(1+z)) gg^T) ≤ C(d + 1)²(1 − δ_1),   i = 1, ..., d,   (C.4)

for some universal constant C > 0.
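Weyl's perturbation inequality, invoked above to absorb the rank-1 term, states that for symmetric A and E the sorted eigenvalues satisfy |λ_i(A + E) − λ_i(A)| ≤ ‖E‖_2. A quick numerical check on a generic symmetric matrix with a rank-1 perturbation gg^T, whose spectral norm is ‖g‖²:

```python
import numpy as np

# Weyl's inequality: each sorted eigenvalue moves by at most the spectral
# norm of the perturbation. Here E = g g^T is rank 1 with ||E||_2 = ||g||^2.
rng = np.random.default_rng(1)
n = 6
S = rng.standard_normal((n, n))
A = (S + S.T) / 2.0          # symmetric test matrix
g = rng.standard_normal(n)
E = np.outer(g, g)           # rank-1 perturbation, spectral norm ||g||^2

shift = np.abs(np.linalg.eigvalsh(A + E) - np.linalg.eigvalsh(A))
print(bool(np.all(shift <= np.dot(g, g) + 1e-9)))  # True
```
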
For computing an upper bound on the eigenvalues of E_↑ − (z/(1−z)) gg^T, we start by noting that the spectral norm of the last diagonal matrix is bounded by C(d + 1)²ε_1² because of (A.17). Adding the matrix 2I_d shifts all eigenvalues by 2; hence, it suffices to compute the nonzero eigenvalue of the rank-1 matrix. Its eigenvalues are 0, ..., 0 together with a single nonzero eigenvalue, and the eigenvector corresponding to that eigenvalue is g/‖g‖. In order to bound the nonzero eigenvalue, note that from (A.18) and (A.17) we have, for some constant c = c(d) > 0, the stated estimate, where the last inequality is proved as follows. Note that √(1 + t²) ≤ 1 + t²/2, which can be verified by squaring both sides and comparing. The estimate then follows, where the last step is by Lemma 3.4. This finishes the proof.
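The spectral fact about the rank-1 matrix used above is elementary: gg^T has eigenvalues 0, ..., 0 and ‖g‖², with the eigenvector for the nonzero eigenvalue equal to g/‖g‖ (up to sign). A minimal check with a concrete g:

```python
import numpy as np

# For g = (3, 0, 4) we have ||g|| = 5, so g g^T has eigenvalues (0, 0, 25)
# and the top eigenvector is +/- g / ||g|| = +/- (0.6, 0, 0.8).
g = np.array([3.0, 0.0, 4.0])
G = np.outer(g, g)
eigvals, eigvecs = np.linalg.eigh(G)  # eigenvalues in ascending order

print(np.allclose(eigvals, [0.0, 0.0, 25.0]))              # True
v = eigvecs[:, -1]                                         # top eigenvector
print(np.allclose(np.abs(v), np.abs(g) / 5.0))             # True (up to sign)
```
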

The choice of a is arbitrary; it is a corollary of [20, Theorem 1.1] that the above definition does not depend on the choice of a. Herein, ‖·‖ in the denominator is the Euclidean norm induced by the ambient R^{n_1×⋯×n_d}, and the norm in the numerator is the product norm of the Euclidean norms inherited from the ambient R^{n_1×⋯×n_d}'s. The right-hand side ‖d_A Φ_a^{-1}‖ is the spectral norm of the derivative of Φ_a^{-1} at A. See Section 2 for more details. By [20, Proposition 4.4], the condition number κ(A) does not depend on the norm of A: κ(tA) = κ(A) for all t ∈ R \ {0}.

Figure 7.1. Empirical complementary cumulative distribution function of the regular and angular condition numbers for the tensor spaces from Table 7.1. Both plots are on the same scale.

where p = p(ε_1, ..., ε_d) is a polynomial expression in the ε_i, with coefficients bounded by a constant depending only on d and with all of its monomials of degree at least 4 in ε_1, ..., ε_d. Hence, for some constant c = c(d) we have |p| ≤ c(d)ε_1^4 ≤ ĉ(d)(1 − δ_1)² for some new constant ĉ(d); the last step follows from (A.3). Putting all of the foregoing together with (C.3), we have thus shown that E_↑ − (z/(1−z)) gg^T has eigenvalues satisfying

λ_d(E_↑ − (z/(1−z)) gg^T) ≤ c'(d)(1 − δ_1), and   (C.5)
λ_i(E_↑ − (z/(1−z)) gg^T) ≤ 2 + c'(d)(1 − δ_1),   i = 1, ..., d − 1,

where c'(d) is again a constant depending only on d. In (C.1), (C.2), (C.4), and (C.5), we have shown that precisely Σ − d − 1 + d + 1 = Σ eigenvalues are smaller than some constant times 1 − δ_1, while the remaining eigenvalues are clustered near 2. It follows that there exists a constant K, depending only on n_1, ..., n_d and d, such that the asserted inequality holds.

Under Assumption 1, there exists an open dense subset N_{r;n_1,...,n_d} of σ_{r;n_1,...,n_d} such that for all A ∈ N_{r;n_1,...,n_d} we have |Φ^{-1}(A)| = r!
by [8, Propositions 4.5–4.7]. In particular, the points in the fiber are isolated, so there is a local inverse map Φ^{-1} with values in S(R^{n_1×⋯×n_d})^{×r}. Then the angular condition number of A ∈ N_{r;n_1,...,n_d} is defined as in [8]. Both M_{r;n_1,...,n_d} and N_{r;n_1,...,n_d} are embedded submanifolds.

Proof. Items 1, 2, 3, and 6 are proved as follows. Let X_1 and X_2 be, respectively, the set of tensors in σ_{r;n_1,...,n_d} which are not complex r-identifiable and the set of tensors which are not smooth points of σ_{r;n_1,...,n_d}. Both are Zariski-closed in σ_{r;n_1,...,n_d} under Assumption 1, and hence so are the preimages Φ^{-1}(X_1) and Φ^{-1}(X_2). Moreover, the third defining condition of M_{r;n_1,...,n_d} is also Zariski-closed in (S_{n_1,...,n_d})^{×r}, as follows from the explicit formula for the condition number (2.6) below. Hence, M_{r;n_1,...,n_d} is Zariski-open. An open subset of an embedded submanifold is itself an embedded submanifold, so the claim for M_{r;n_1,...,n_d} is proved. Moreover, the dimension of the complement of M_{r;n_1,...,n_d} is at most rΣ − 1, and so its image under the rational map Φ is contained in an algebraic set of dimension at most rΣ − 1, thus proving that N_{r;n_1,...,n_d} is also Zariski-open and indeed an embedded submanifold of the set of smooth points of σ_{r;n_1,...,n_d}, which is itself an embedded submanifold of its affine ambient space; see [12, Proposition 3.2.9]. The fourth item is due to the definition of the condition number, the fact that it is finite on N_{r;n_1,...,n_d} by Definition 2.1, and the injectivity of Φ|_{M_{r;n_1,...,n_d}} by Definition 2.1(2). The fifth item follows by noting that the three defining properties of N_{r;n_1,...,n_d} are all independent of a nonzero scaling.

Remark 2.3. The definition of r-nice tensors in [8, Definition 4.2] involves two more requirements, but those are not needed here.

Table 7.2. Estimated parameters of the exponential model (7.1) fitted to the complementary cumulative distribution functions from Figure 7.1. The coefficient of determination R² between the log-transformed data and the log-transformed model predictions is also indicated.