An iterative method for the solution of Laplace-like equations in high and very high space dimensions

This paper deals with the equation $-\Delta u+\mu u=f$ on high-dimensional spaces $\mathbb{R}^m$, where the right-hand side $f(x)=F(Tx)$ is composed of a separable function $F$ with an integrable Fourier transform on a space of a dimension $n>m$ and a linear mapping given by a matrix $T$ of full rank, and $\mu\geq 0$ is a constant. For example, the right-hand side can explicitly depend on differences $x_i-x_j$ of components of $x$. We show that the solution of this equation can be expanded into sums of functions of the same structure and develop in this framework an equally simple and fast iterative method for its computation. The method is based on the observation that in almost all cases and for large problem classes the deviation of the expression $\|T^ty\|^2$ from its mean value on the unit sphere $\|y\|=1$ shrinks as the dimension $m$ increases, a concentration of measure effect. The higher the dimension $m$, the faster the iteration converges.


Introduction
The numerical solution of partial differential equations in high space dimensions is a difficult and challenging task. Methods such as finite elements, which work perfectly in two or three dimensions, are not suitable for solving such problems because the effort grows exponentially with the dimension. Random walk based techniques only provide solution values at selected points. Sparse grid methods are best suited for problems in still moderate dimensions. Tensor-based methods [2], [10], [11] stand out in this area. They are not subject to such limitations and perform surprisingly well in a large number of cases. Tensor-based methods exploit less the regularity of the solutions than their structure. Consider the equation −∆u + µu = f (1.1) on R m for high dimensions m, where µ > 0 is a given constant. Provided the right-hand side f of the equation (1.1) possesses an integrable Fourier transform, the integral representation (1.2) is a solution of this equation, and the only solution that tends uniformly to zero as x goes to infinity. If the right-hand side f of the equation is a tensor product of functions, say from the three-dimensional space to the real numbers, or a sum of such tensor products, the same holds for the Fourier transform of f. If one replaces the corresponding term in the high-dimensional integral (1.2) by an expression based on an approximation of 1/r by a sum of exponential functions, the integral collapses to a sum of products of lower-dimensional integrals. That is, the solution can be approximated by a sum of such tensor products whose number is independent of the space dimension. The computational effort no longer increases exponentially, but only linearly with the space dimension. However, the right-hand side of the equation does not always have such a simple structure and cannot always be well represented by tensors of low rank. A prominent example is quantum mechanics. The potential in the Schrödinger equation depends on the distances between the particles considered. Therefore, it
is desirable to approximate the solutions of this equation by functions that explicitly depend on the positions of the particles relative to each other. As a building block in more comprehensive calculations, this can require the solution of equations of the form (1.1) with right-hand sides that are composed of terms such as those in (1.5). The question is whether such structures transfer to the solution and whether the iterates arising in such a context stay in this class. The present work deals with this problem. We present a conceptually simple iterative method that preserves such structures and takes advantage of the high dimensions.
First we embed the problem as in our former paper [17] into a higher dimensional space, introducing, for example, some or all differences x i − x j , i < j, in addition to the components x i of the vector x ∈ R m as additional variables. We assume that the right-hand side of the equation (1.1) is of the form f (x) = F(T x), where T is a matrix of full rank that maps the vectors in R m to vectors in an R n of a still higher dimension and F : R n → R is a function that possesses an integrable Fourier transform and as such is continuous. The solution of the equation (1.1) is then the trace u(x) = U(T x) of the then equally continuous function (1.6), which is, in a corresponding sense, the solution of a degenerate elliptic equation (1.7). This equation replaces the original equation (1.1). Its solution (1.6) is approximated by the iterates arising from a polynomially accelerated version of the basic method (1.8), starting from U 0 = 0. The calculation of the iterates requires the solution of equations of the form (1.1), that is, the calculation of integrals of the form (1.2), now over the higher dimensional R n . The symbol ∥T t ω∥ 2 of the operator L is a homogeneous second-order polynomial in ω. For separable right-hand sides F as above, the calculation of these integrals thus reduces to the calculation of products of lower-dimensional, in the extreme case one-dimensional, integrals.
The reason for the usually astonishingly fast convergence of this iteration is the directional behavior of the term ∥T t ω∥ 2 , more precisely the fact that the Euclidean norm of the vectors T t η ∈ R m takes, for vectors η on the unit sphere S n−1 of the R n , an almost constant value outside a small set whose size decreases with increasing dimensions, a typical concentration of measure effect. To capture this phenomenon quantitatively, we introduce the probability measure (1.9) on the Borel subsets M of the R n , where ν n is the volume of the unit ball and nν n thus is the area of the unit sphere. If M is a subset of the unit sphere, P(M) is equal to the ratio of the area of M to the area of the unit sphere. If M is a sector, that is, if M contains with a vector ω also its scalar multiples, P(M) measures the opening angle of M. The decisive quantity, on which all our analysis is based, is the angular distribution (1.10) of the values ∥T t ω∥ 2 . We assume that its expected value, the mean value of the expression ∥T t η∥ 2 over the unit sphere, is one. This is only a matter of the scaling of the variables in the higher dimensional space and does not represent a restriction. Apart from extreme cases, the distribution (1.10) approaches a normal distribution with increasing dimensions. The concentration of measure effect is reflected in the fact that the variance of the distribution decreases and tends, under rather general circumstances, to zero as the dimensions increase. For a given ρ < 1, let S be the sector that consists of the points ω for which the expression ∥T t ω∥ 2 differs from ∥ω∥ 2 by ρ∥ω∥ 2 or less. If the Fourier transform of the right-hand side of the equation (1.7) vanishes at all ω outside this set, the same holds for the Fourier transform of its solution (1.6) and the Fourier transforms of the iterates U k . Under this condition, the iteration error decreases at least at a rate determined by ρ with respect to a broad range of Fourier-based norms. Provided the values ∥T t η∥ 2 are approximately normally
distributed with expected value E = 1 and small variance V , the measure (1.9) of the sector S is almost one as soon as ρ exceeds the standard deviation σ = √ V by more than a moderate factor. The sector S then fills almost the entire frequency space. This is admittedly an idealized situation and the actual convergence behavior is more complicated, but this example characterizes pretty well what one can expect. The higher the dimensions and the smaller the variance, the faster the iterates approach the solution.
The rest of this paper is organized as follows. Section 2 sets the framework and is devoted to the representation of the solutions of the equation (1.1) as traces of higher dimensional functions (1.6) for right-hand sides that are themselves traces of functions with an integrable Fourier transform. In comparison with the proof in [17], we give a more direct proof of this representation. In addition, we introduce two scales of norms with respect to which we later estimate the iteration error.
In a sense, the following section forms the core of the present work. It is devoted to the study of the angular distribution (1.10) of the values ∥T t ω∥ 2 . Our first result, Theorem 3.1, is a semi-explicit representation of the density of this distribution in form of an integral over the unit sphere in R m . It gives detailed information about the behavior of this distribution for small δ . In a special case, when all singular values of T are equal, it becomes a rescaled beta distribution. Moreover, we show how the expected value E and the variance V of the distribution can be expressed in terms of the singular values of the matrix T . Using this representation, we show that for random matrices T with assigned expected value E = 1 the expected value of the variances V not only tends to zero as the dimension m goes to infinity, but also that these variances cluster the more around their expected value the larger m is. We also study a class of matrices T that are associated with interaction graphs. These matrices correspond to the case that some or all coordinate differences are introduced as additional variables and formed the motivation for the present work. The expected values that are assigned to these matrices take the value one, and the variances V can be expressed directly in terms of the vertex degrees. Finally, we show how to efficiently sample the values ∥T t η∥ 2 for a large number of points η that are uniformly distributed on the unit sphere. Calculations of this kind support our claim that these values are approximately normally distributed in high dimensions.
In the final section, we return to the higher-dimensional counterpart (1.7) of the original equation (1.1) and examine the convergence behavior of the polynomially accelerated version of the iteration (1.8) for its solution. Special attention is given to the limit case µ = 0 of the Laplace equation. The section ends with a brief review of an approximation of the form (1.4) by sums of Gauss functions.

Solutions as traces of higher-dimensional functions
In this paper we are mainly concerned with functions U : R n → R, n a potentially high dimension, that possess the then unique representation in terms of an integrable function, their Fourier transform. Such functions are uniformly continuous by the Riemann-Lebesgue theorem and vanish at infinity. The space B 0 (R n ) of these functions becomes a Banach space under the corresponding norm. Let T be an arbitrary (n × m)-matrix of full rank m < n and let u, given by (2.3), be the trace of a function U ∈ B 0 (R n ). Since the functions in B 0 (R n ) are uniformly continuous, the same also applies to the traces of these functions. Since there is a constant c with ∥x∥ ≤ c ∥T x∥ for all x ∈ R m , the trace functions (2.3) vanish at infinity as U itself. The next lemma gives a criterion for the existence of partial derivatives of the trace functions, where we use the common multi-index notation.
Lemma 2.1 Let U : R n → R be a function in B 0 (R n ) and let the corresponding functions be integrable. Then the trace function (2.3) possesses the partial derivative (2.5), which is, like u, itself uniformly continuous and vanishes at infinity.
Proof Let e k ∈ R m be the vector with the components e k | j = δ k j . To begin with, we examine the limit behavior of the difference quotient of the trace function as h goes to zero. Under the condition that the function ω → ω • Te k U(ω) is integrable, it tends to the stated limit, as follows from the dominated convergence theorem. Because of ω • Te k = T t ω • e k , this proves (2.5) for partial derivatives of order one. For partial derivatives of higher order, the proposition follows by induction. ⊓⊔ Let D(L ) be the space of the functions U ∈ B 0 (R n ) for which the corresponding (semi)-norm is finite. Because of |(T t ω) β | ≤ 1 + ∥T t ω∥ 2 for all multi-indices β of order two or less, the traces of the functions in this space are twice continuously differentiable by Lemma 2.1. Let L : D(L ) → B 0 (R n ) be the associated pseudo-differential operator. For the functions U ∈ D(L ) and their traces (2.3), the corresponding identity holds by Lemma 2.1. With corresponding right-hand sides, the solutions of the equation (1.1) are thus the traces of the solutions U ∈ D(L ) of the pseudo-differential equation (2.9). Theorem 2.1 Let F : R n → R be a function with integrable Fourier transform, let f (x) = F(T x), and let µ be a positive constant. Then the trace (2.3) of the function (2.10) is twice continuously differentiable and the only solution of the equation (2.11) whose values tend uniformly to zero as ∥x∥ goes to infinity. Provided the corresponding function is integrable, the same holds in the limit case µ = 0.
Proof That the trace u is a classical solution of the equation (2.11) follows from the remarks above, and that u vanishes at infinity by the already discussed reasons from the Riemann-Lebesgue theorem. The maximum principle ensures that no further solution of the equation (2.11) exists that vanishes at infinity. ⊓⊔ From now on, the equation (2.9) will replace the original equation (2.11). Our aim is to compute its solution (2.10) iteratively by polynomially accelerated versions of the basic iteration (1.8). The convergence properties of this iteration depend decisively on the directional behavior of the values ∥T t ω∥ 2 , which will be studied in the next section, before we return to the equation and its iterative solution.
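The division by the symbol in Fourier space can be illustrated numerically. The following Python sketch (our own illustration, not part of the paper) solves −u″ + µu = f in one space dimension on a periodic grid: the discrete Fourier coefficients of f are divided by ω² + µ and transformed back, after which the equation is satisfied up to rounding error.

```python
import numpy as np

# Hypothetical one-dimensional illustration of the Fourier representation
# of the solution: divide the Fourier transform of f by the symbol
# omega^2 + mu and transform back, here on a periodic grid via the FFT.
mu = 1.0
N, L = 2048, 40.0
x = (np.arange(N) - N // 2) * (L / N)         # grid on [-L/2, L/2)
f = np.exp(-x**2)                             # rapidly decaying right-hand side
omega = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

u = np.fft.ifft(np.fft.fft(f) / (omega**2 + mu)).real

# the equation -u'' + mu*u = f is satisfied up to rounding error
u_xx = np.fft.ifft(-(omega**2) * np.fft.fft(u)).real
residual = np.max(np.abs(-u_xx + mu * u - f))
assert residual < 1e-10
```

For separable right-hand sides in higher dimensions, the same division would factor into products of such one-dimensional operations, which is the structural point exploited in the paper.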
Before we continue with these considerations and turn our attention to the directional behavior of these values, we introduce the norms with respect to which we will show convergence. The starting point is the radial-angular decomposition (2.13) of the integrals of functions in L 1 (R n ) into an inner radial and an outer angular part.
Inserting the characteristic function of the unit ball, one recognizes that the area of the n-dimensional unit sphere S n−1 is nν n , with ν n the volume of the unit ball. If f is rotationally symmetric, f (rη) = f (re) holds for every η ∈ S n−1 and every fixed, arbitrarily given unit vector e. In this case, (2.13) simplifies to the representation (2.14). The norms considered in the following split into two groups, both of which depend on a smoothness parameter.
The norms in the first group possess the radial-angular representation (2.15) and take in Cartesian coordinates the more familiar form (2.16). The spectral Barron spaces B s = B s (R n ), s ≥ 0, consist of the functions U with finite norms ∥U∥ t for 0 ≤ t ≤ s. Since all these norms scale differently, we do not combine them into a single entity but treat them separately. One should be aware that these norms are quite strong. For integer values s, functions U ∈ B s possess continuous partial derivatives up to order s, which are bounded by the norms ∥U∥ k , k ≤ s, and vanish at infinity. Barron spaces play a prominent role in the analysis of neural networks and in high-dimensional approximation theory in general; see for example [14].
The norms in the second group are a mixture between an L 1 -based norm in radial direction and an L 2 -based norm for the angular part and are given by (2.17). If the norm |||U||| s of U is finite, then the norm ∥U∥ s is finite too and ∥U∥ s ≤ |||U||| s holds. This follows from the Cauchy-Schwarz inequality. Both norms scale in the same way. By (2.14), they coincide for radially symmetric functions. For a given function F with integrable Fourier transform, the function (2.18) possesses a radial-angular representation. As will follow from Theorem 3.1 below, the corresponding expressions are integrable over the unit sphere. If the function φ : S n−1 → R given there is bounded and the dimension m is three or higher, the function (2.18) is therefore integrable and the Fourier transform of a function. For dimensions m greater than four, it suffices that φ is square integrable. For the other norms introduced above, one can argue in a similar way.

The angular distribution
The purpose of this section is an as detailed as possible study of the angular distribution of the values ∥T t ω∥ 2 , based on the already introduced probability measure (1.9) on the Borel subsets M of the R n , for a sector a measure of its opening angle. In terms of this probability measure, the angular distribution of these values is given by (3.2), or, after restriction to the unit sphere ∥η∥ = 1 itself, by the distribution of the values ∥T t η∥ 2 . The direct calculation of such and similar quantities is difficult. We therefore reduce the calculation of integrals over the unit sphere to the calculation of simpler volume integrals. Our main tool is the radial-angular decomposition (2.13).
Lemma 3.1 If the function χ : R n → R is positively homogeneous of nonnegative degree ℓ, the integral of χ over the unit sphere is equal to a volume integral with respect to the rotationally symmetric weight W = W n , the normed Gauss function, which splits into a product of one-dimensional functions, with an explicitly given prefactor. Proof This follows immediately from the radial-angular decomposition of volume integrals into an inner radial and an outer angular part, the corresponding identities, and the homogeneity χ(rη) = r ℓ χ(η) of the function under consideration. ⊓⊔ Let H be the Heaviside function, with the values H(t) = 0 for t < 0 and H(t) = 1 for t ≥ 0, and let χ(ω, δ ) = H(δ ∥ω∥ 2 − ∥T t ω∥ 2 ). As χ(ω, δ ) is homogeneous of degree zero as a function of ω, the distribution (3.2) possesses the representation (3.7) by Lemma 3.1. Since χ(ω, δ ) is right-continuous as a function of δ , the distribution is right-continuous by the dominated convergence theorem. Moreover, it is invariant under orthogonal transformations of the matrix T and depends only on the singular values of this matrix. For dimensions n > m + 2, it has a representation with a density f that attains values f (t) > 0 on the interval 0 < t < ∥T t ∥ 2 and f (t) = 0 outside of it and shows a characteristic, dimension-dependent behavior near t = 0.
Theorem 3.1 For dimensions n > m + 2, the distribution (3.2) possesses a density function f that vanishes for t ≤ 0 and is, for arguments t > 0, given by the integral (3.9), where Σ 0 = diag(σ 1 , . . ., σ m ) is the (m × m)-diagonal matrix with the singular values of the matrix T on its diagonal, α and β are explicitly given coefficients, the function R takes the values R(t) = max(0, t), and B is Euler's beta function.
Proof Due to the invariance of the distribution under orthogonal transformations, we can assume that T t = (Σ 0 0) is itself a diagonal matrix. In the following, we split the vectors in R n into parts x ∈ R m and y ∈ R n−m . Let a(x, δ ) ≥ 0, δ > 0, be defined accordingly. Since ∥Σ 0 x∥ 2 ≤ δ (∥x∥ 2 + ∥y∥ 2 ) holds if and only if ∥y∥ − a(x, δ ) ≥ 0, the distribution can be written as a double integral. This results from its representation (3.7) and Fubini's theorem. The inner integral reduces to a one-dimensional integral. Introducing a corresponding function φ , the distribution therefore takes the form of a volume integral over the x. The function t → φ (a(x,t)) is differentiable on the interval 0 < t < ∥Σ 0 x∥ 2 /∥x∥ 2 and possesses there an explicitly given derivative. Because n − m > 2, the derivative tends to zero as t goes to ∥Σ 0 x∥ 2 /∥x∥ 2 from the left. Because a(x,t) = 0 for ∥Σ 0 x∥ 2 /∥x∥ 2 ≤ t, φ (a(x,t)) is therefore continuously differentiable as a function of t on the entire positive real axis t > 0 and its derivative has the above representation. The same applies in the case x = 0. For 0 < δ 0 < δ , an integral representation of the difference of the distribution values follows, or, after interchanging the order of integration, a representation with a density given for arguments t > 0. Because the exponential terms cancel each other out in parts, the density reads explicitly in terms of the matrix A = Σ 0 / √ t, or, after a change of variables, in the stated form (3.9).
Since the kernel of the (m × n)-matrix T t is, as an (n − m)-dimensional subspace, a set of volume measure zero, F(0) = 0 follows from the representation (3.7) of the distribution. This concludes the proof. ⊓⊔ A function η → h(∥T t η∥ 2 ) is integrable over the unit sphere S n−1 if and only if the function t → h(t) f (t) is integrable over the real axis. Therefore, the quantities under consideration are integrable over the unit sphere. For large m and ℓ ≪ m/2, the densities of their distributions tend very rapidly to zero as t goes to zero. Practically, they vanish on a rather big neighborhood of the origin. This is one of the basic effects that underlie our considerations and on which our theory is founded.
If the matrix Σ 0 = σ I m is a scalar multiple of the identity matrix, the integral (3.9) can be evaluated explicitly. In this case, the density is given by (3.13) on the interval 0 < t < σ 2 and vanishes outside this interval. That is, the distribution is a rescaled variant of a beta distribution. Figure 1 shows the assigned distributions for the cases σ 2 = n/m, m = 2 k , k = 1, . . ., 16, and n = 2m. The higher the dimensions are, the more the distributions approach the step function that jumps at δ 0 = 1 from zero to one and the more the values ∥T t η∥ 2 , ∥η∥ = 1, cluster around one. Something similar can be observed in the general case. This is reflected in the variances of the distributions. They tend with very high probability to zero as the dimensions increase. The expected value and the variance of the distributions are given by (3.14) and play a fundamental role in the further considerations.
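The beta-distribution picture can be reproduced in a few lines. The following Python sketch (our own illustration, not from the paper) samples the values ∥T t η∥ 2 for equal singular values σ 2 = n/m and n = 2m; the empirical mean is one and the empirical variance matches the value 1/(m + 1) of the correspondingly rescaled beta distribution, shrinking as m grows.

```python
import numpy as np

# Illustrative sampling (not from the paper): for equal singular values
# sigma^2 = n/m and n = 2m, the values ||T^t eta||^2 for eta uniformly
# distributed on S^{n-1} follow a rescaled beta distribution with
# mean one and variance 1/(m + 1).
rng = np.random.default_rng(0)

def sample_values(m, n, size=20_000):
    g = rng.standard_normal((size, n))                  # Gaussian vectors,
    eta = g / np.linalg.norm(g, axis=1, keepdims=True)  # normalized: uniform on S^{n-1}
    return (n / m) * np.sum(eta[:, :m] ** 2, axis=1)    # sigma^2 * ||x-part||^2

for m in (4, 64, 256):
    vals = sample_values(m, 2 * m)
    assert abs(vals.mean() - 1) < 0.02          # expected value one
    assert abs(vals.var() - 1 / (m + 1)) < 0.01 # concentration as m grows
```

The shrinking variance is exactly the step-function behavior visible in Figure 1.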
In terms of the power sums (3.15) of order one and two of the squares of the singular values, the expected value and the variance read as in (3.16).
Fig. 1 The probability distributions assigned to the densities (3.13) for m = 2 k , k = 1, . . ., 16, and n = 2m

Proof Due to the invariance of the distribution, and with that also of its moments, under orthogonal transformations, we can again assume that T t = (Σ 0 0) is a diagonal matrix. Lemma 3.1 then leads to the representation of the moment of order k as a homogeneous symmetric polynomial of degree k in the σ 2 i . The volume integral on the right-hand side splits into a sum of products of one-dimensional integrals. In principle, it can be calculated this way. For the moments of order one and two, this is possible without problems. For higher-order moments, the number of terms to be considered separately increases rapidly. One can then take advantage of the fact that the symmetric polynomials are polynomials in the power sums of the σ 2 i ; see [16], or [15] for a more recent treatment. ⊓⊔ In terms of the normalized singular values η i = σ i / √ n, the expected value and the variance (3.14) and (3.16), respectively, can be written in the form (3.17). We are interested in matrices T for which the expected value E is one, that is, for which the vector η composed of the normalized singular values η i lies on the unit sphere of the R m . The variances V then possess the representation (3.18). The function X attains the minimum value 1/m and the maximum value one on the unit sphere of the R m . The variances (3.18) therefore extend over the corresponding interval. However, they are most likely of the order O(1/m).
Lemma 3.3 Let the vectors η composed of the normalized singular values η i be uniformly distributed on the part of the unit sphere consisting of the points with strictly positive components. Then the expected value and the variance of X are given by (3.20).
Proof For symmetry reasons and because the intersections of lower-dimensional subspaces with the unit sphere have measure zero, we can allow points η that are uniformly distributed on the whole unit sphere. The expected value and the variance of the function X, treated as a random variable, can therefore be calculated along the lines given by Lemma 3.1. ⊓⊔ It is instructive to express the variance (3.18) in terms of the random variable that is rescaled to the expected value one. Its variance tends to zero as m goes to infinity. This not only means that the expected value of the variances V tends to zero as the dimension m increases, but also that the variances cluster the more around their expected value the larger m is. This observation is supported by simple, easily reproducible experiments. Uniformly distributed points on the unit sphere S m−1 can be generated from vectors in R m whose components are subject to the standard normal distribution. Such vectors themselves follow the standard normal distribution in the m-dimensional space. Scaling them to length one gives the desired uniformly distributed points on the unit sphere. This allows one to sample the random variable X for any given dimension m.
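The sampling recipe just described can be sketched as follows (illustrative Python, not from the paper): standard normal vectors are rotationally invariant, so scaling them to length one yields uniformly distributed points on the sphere, which the code checks through the first two coordinate moments.

```python
import numpy as np

# The recipe described above: standard normal vectors are rotationally
# invariant, so normalizing them to length one yields uniformly
# distributed points on the unit sphere S^{m-1}.
rng = np.random.default_rng(1)

def uniform_sphere(num, m):
    g = rng.standard_normal((num, m))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

m = 8
eta = uniform_sphere(100_000, m)
assert np.allclose(np.linalg.norm(eta, axis=1), 1.0)         # unit vectors
assert np.all(np.abs(eta.mean(axis=0)) < 0.01)               # mean zero
assert np.all(np.abs((eta**2).mean(axis=0) - 1 / m) < 0.01)  # E[eta_i^2] = 1/m
```

Any function of η, such as the random variable X, can then be sampled by evaluating it at these points.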
The expected value and the variance (3.14) can be expressed directly in terms of the entries of the (m × m)-matrix S = T t T , since the power sums (3.15) are the traces of the matrices S and S 2 , and can therefore be computed without recourse to the singular values of T . Consider the (n × m)-matrix T assigned to an arbitrarily given undirected graph with m vertices and n − m edges that maps the components x i of a vector x ∈ R m first to themselves and then to the n − m weighted differences assigned to the edges of the graph connecting the vertices i and j. In quantum physics, matrices of the given kind can be associated with the interaction of particles. In the given case, the matrix S has the form S = I + L/2, where I is the identity matrix and L is the Laplacian matrix of the graph. The off-diagonal entries of L are L i j = −1 if the vertices i and j are connected by an edge and L i j = 0 otherwise. The diagonal entries L ii = d i are the vertex degrees, the numbers of edges that meet at the vertices i.
Lemma 3.4 For a matrix T that is assigned to an undirected graph with m vertices as described above, the expected value and the variance (3.14) can be expressed in terms of the mean values g 1 and g 2 ≥ g 2 1 of the vertex degrees and their squares. Proof By (3.24), in the given case the constants (3.15) possess a representation in terms of the vertex degrees, and n − m = mg 1 /2 holds. Therefore, the proposition follows from (3.16). ⊓⊔ For the matrices assigned to a family of graphs for which the two mean values, or even only the ratio g 2 /g 2 1 , remain bounded independent of the number of vertices, the variance tends to zero as the dimensions go to infinity. The matrices assigned to graphs whose vertices, up to one, are connected with a designated central vertex, but not with each other, form the other extreme. For these matrices, the variances decrease to a limit value greater than zero as the number of vertices increases. However, such matrices are a rare exception, not only with respect to the above random matrices, but also in the context of matrices assigned to graphs. Consider the random graphs with a fixed number m of vertices that are with a given probability p connected by an edge, or those with a correspondingly given number of edges. Sampling the variances assigned to a large number of graphs in such a class, one sees that these variances do not exceed the value 2/m with a very high probability. If all vertices are connected with each other, that is, if all vertex degrees equal m − 1, the variance can be given explicitly and tends to zero as the number of vertices goes to infinity. With the exception of a few extreme cases, it can be observed that the distribution of the values ∥T t η∥ 2 for points η on the unit sphere of R n more and more approaches the normal distribution with the expected value and the variance (3.14) as the dimensions increase, a fact that underlines the importance of these quantities. This can be seen by evaluating the expression ∥T t η∥ 2 at a large number of independently chosen, uniformly distributed points η on the unit sphere and
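The graph construction can be made concrete in a few lines. The following Python sketch is our own illustration: the edge weight 1/√2 is chosen so that S = T t T = I + L/2, and E and V are evaluated through the power sums p 1 = tr S and p 2 = tr S 2 using the standard sphere moments of the quadratic form η → ∥T t η∥ 2 (our formulas, consistent with the value V = 9/380 reported for the C 60 graph, which depends only on the vertex degrees).

```python
import numpy as np

# Rows of T: the m coordinates themselves, followed by the weighted
# edge differences (x_i - x_j)/sqrt(2), so that S = T^t T = I + L/2.
def graph_matrix(m, edges):
    rows = list(np.eye(m))
    for i, j in edges:
        r = np.zeros(m)
        r[i], r[j] = 1 / np.sqrt(2), -1 / np.sqrt(2)
        rows.append(r)
    return np.array(rows)                     # (n x m), n = m + number of edges

# E and V from the power sums p1 = tr(S), p2 = tr(S^2), using the
# standard moments of quadratic forms of uniform unit vectors
# (our formulas): E = p1/n, V = 2(n*p2 - p1^2)/(n^2 (n + 2)).
def mean_and_variance(T):
    n = T.shape[0]
    S = T.T @ T
    p1, p2 = np.trace(S), np.trace(S @ S)
    return p1 / n, 2 * (n * p2 - p1**2) / (n**2 * (n + 2))

# a 3-regular graph on 60 vertices (cycle plus perfect matching);
# since V depends only on the degrees, it matches the C60 value 9/380
m = 60
edges = [(i, (i + 1) % m) for i in range(m)] + [(i, i + 30) for i in range(30)]
E, V = mean_and_variance(graph_matrix(m, edges))
assert abs(E - 1) < 1e-9
assert abs(V - 9 / 380) < 1e-9
```

No singular value decomposition is needed; only the traces of S and S 2 enter.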
comparing the frequency distribution of the resulting values with the corresponding Gauss function (3.29). Due to the invariance of the distribution under orthogonal transformations, we can assume that T t = (Σ 0 0) is a diagonal matrix, with the singular values of T on the diagonal of the square diagonal matrix Σ 0 . Given the above remarks about generating uniformly distributed points on a unit sphere, sampling the values ∥T t η∥ 2 for a large number of points η ∈ S n−1 means to sample the ratio ∥Σ 0 x∥ 2 /(∥x∥ 2 + ∥y∥ 2 ) for a large number of vectors x ∈ R m and y ∈ R n−m with standard normally distributed components. The values ∥y∥ 2 are then distributed according to the χ 2 -distribution with n − m degrees of freedom. The direct calculation of ∥y∥ 2 can thus be replaced by the calculation of a single scalar quantity following this distribution. The amount of work then remains strictly proportional to the dimension m, no matter how large the difference n − m of the dimensions is. Let T be the matrix assigned to the graph associated with the C 60 fullerene molecule, which consists of the ninety edges of a truncated icosahedron and its sixty corners as vertices. The degree of these vertices is three and the assigned variance is therefore V = 9/380. The frequency distribution of the values ∥T t η∥ 2 for a million randomly chosen points η on the unit sphere and the corresponding Gauss function (3.29) are shown in Fig. 2.
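The reduction to a single χ 2 -distributed variable can be sketched as follows (illustrative Python with hypothetical singular values, not from the paper): instead of drawing the n − m normal components of y, a single draw from the χ 2 -distribution with n − m degrees of freedom supplies ∥y∥ 2 .

```python
import numpy as np

# Sketch of the chi-square reduction (hypothetical singular values):
# the y-part enters only through ||y||^2, which is drawn in one step
# from the chi-square distribution with n - m degrees of freedom.
rng = np.random.default_rng(2)
m, n = 50, 5000
sigma2 = rng.uniform(0.5, 1.5, size=m)      # hypothetical sigma_i^2
sigma2 *= n / sigma2.sum()                  # scaled so that E = 1

size = 100_000
x2 = rng.standard_normal((size, m)) ** 2    # squared components of x
y2 = rng.chisquare(n - m, size=size)        # ||y||^2 in a single draw each
vals = (x2 * sigma2).sum(axis=1) / (x2.sum(axis=1) + y2)

assert abs(vals.mean() - 1.0) < 0.01        # expected value one
# variance from the power sums p1 = n, p2 = sum(sigma2^2)
V = 2 * (n * np.sum(sigma2**2) - n**2) / (n**2 * (n + 2))
assert abs(vals.var() - V) < 0.005
```

The cost per sample is proportional to m alone, as described, even though n is a hundred times larger here.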

The iterative procedure
Now we are ready to analyze the iterative method presented in the introduction and its polynomially accelerated counterpart for the solution of the equation (2.9), the equation that has replaced the original equation (2.11). The iteration error possesses the Fourier representation (4.2), where U is the exact solution (2.10) of the equation (2.9), α(ω) is given by (4.3), and the functions P k (λ ) are polynomials of order k with value P k (0) = 1. Throughout this section, we assume that the expression ∥T t η∥ 2 possesses the expected value one. We restrict ourselves to the analysis of this iteration in the spectral Barron spaces B s equipped with the norm (2.16), or (2.15) in radial-angular representation. In a corresponding sense, this analysis can be transferred to the Hilbert spaces H s .
Theorem 4.1 If the solution U lies in the Barron space B s (R n ), s ≥ 0, this also holds for the iterates U k . For all coefficients µ ≥ 0, the norm (2.16) of the error then tends to zero for suitably chosen polynomials P k as k goes to infinity.
Proof Because the expression ∥T t η∥ 2 possesses the expected value one, the spectral norm of the matrix T t is at least one. The required bound therefore holds for all ω outside the kernel of T t , which is, as a subspace of a dimension less than n, a set of measure zero. Choosing P k (λ ) = (1 − ϑ λ ) k , the proposition thus follows from the dominated convergence theorem. ⊓⊔ Of course, one would like to have more than just convergence. The next theorem is a first step in this direction.
Theorem 4.2 Let a and b be the bounds given above, let κ = b/a, and let P k be the Chebyshev polynomial T k of the first kind of degree k transformed to the interval a ≤ λ ≤ b as in (4.5). The norm (2.16) of the iteration error (4.2) then satisfies the estimate (4.6), where U δ takes the same values as U on the set of all ω for which the condition (4.7) holds and vanishes outside this set.
Proof This follows from 0 ≤ α(ω)(∥T^t ω∥^2 + µ) ≤ b for ω ∈ R^n and the estimates for the values of the Chebyshev polynomial (4.5) on the interval 0 ≤ λ ≤ b. ⊓⊔

Depending on the size of κ = ∥T^t∥^2/δ, the norm of the error U − U_k soon reaches the size of the norm of U − U_δ. The idea behind the estimate (4.6) is that in high space dimensions the condition (4.7) is satisfied for nearly all ω and that the part U − U_δ of the solution U is therefore negligible. Independent of µ, the set on which the condition (4.7) is violated is a subset of the sector (4.8). In terms of the probability density function from Theorem 3.1, its measure can be expressed accordingly. Consider the solution U for a rotationally symmetric right-hand side F with finite norm ∥F∥_s. With the Heaviside function H, the corresponding estimate then holds, independent of µ and with equality for µ = 0. This follows from the radial-angular decomposition of the involved integrals. In terms of the probability density function from Theorem 3.1, the integral on the right-hand side tends like O(δ^{m/2−1}) to zero as δ goes to zero, that is, very rapidly for high dimensions m, and effectively vanishes on a neighborhood of the origin. The estimate (4.6) is extremely robust in many respects. First, it is based on a pointwise estimate of the Fourier transform of the error. It is therefore equally valid for other Fourier-based norms. Second, the function (4.3) can be replaced by any approximation α(ω) for which an estimate on the entire frequency space and an inverse estimate on a sufficiently large spherical shell around its origin hold. To see why, note that such a bound holds for the frequency vectors ω outside the sector (4.8). Therefore, the part of the solution associated with the ω ∉ S(δ) outside a sufficiently large ball around the origin can be neglected. In the high dimensions considered, the same holds for the part of the solution associated with the ω ∉ S(δ) in a sufficiently small ball around the origin, provided that the Fourier transform of F can be controlled there.
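The convergence factor appearing in Chebyshev-type estimates of this kind can be checked numerically. The following sketch is our own illustration, not code from the paper; the interval endpoints a, b and the function names are ours. It evaluates the Chebyshev polynomial transformed to [a, b], normalized by its value at λ = 0, and compares its maximum over [a, b] with the classical bound 2((√κ − 1)/(√κ + 1))^k, κ = b/a:

```python
import numpy as np

def shifted_cheb(k, lam, a, b):
    """Chebyshev polynomial T_k of degree k transformed to the interval [a, b]."""
    x = (2.0 * lam - a - b) / (b - a)  # affine map [a, b] -> [-1, 1]
    # stable evaluation inside (|x| <= 1) and outside (|x| > 1) the interval
    return np.where(np.abs(x) <= 1,
                    np.cos(k * np.arccos(np.clip(x, -1.0, 1.0))),
                    np.sign(x) ** k * np.cosh(k * np.arccosh(np.maximum(np.abs(x), 1.0))))

def reduction_factor(k, a, b):
    """max over [a, b] of |That_k(lam)| divided by |That_k(0)|."""
    lam = np.linspace(a, b, 20001)
    return np.max(np.abs(shifted_cheb(k, lam, a, b))) / abs(shifted_cheb(k, np.array([0.0]), a, b)[0])

a, b = 1.0, 100.0              # example interval, kappa = b / a = 100
rho = (np.sqrt(b / a) - 1) / (np.sqrt(b / a) + 1)
for k in (5, 10, 20):
    print(k, reduction_factor(k, a, b), 2 * rho ** k)  # factor vs. classical bound
```

For a = 1, b = 100 the factor equals 1/T_k(101/99) = 2ρ^k/(1 + ρ^{2k}) with ρ = 9/11, so it always stays below the classical conjugate-gradient-type bound 2ρ^k.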
Nevertheless, the estimate from Theorem 4.2 is rather pessimistic, because it only takes into account the decay of the left tail of the distribution (4.9), but ignores the fast decay of its right tail. To see what can be reached, we study the behavior of the iteration in the limit µ = 0, the case in which the underlying effects are most clearly brought to light. Decomposing the vectors ω = rη into a radial part r ≥ 0 and an angular part η ∈ S^{n−1}, the error (4.2) propagates frequency-wise as and, after the transition to the limit value µ = 0, as. In the limit case, therefore, the method acts only on the angular part of the error. Nevertheless, by Theorem 4.1 the iterates converge to the solution. To clarify the underlying effects, we prove this once again in a different form.
Theorem 4.3 If the solution U lies in the Barron space B^s(R^n), s ≥ 0, this also holds for the iterates U_k implicitly given by (4.16). The norm (2.15) of the iteration error then tends to zero for suitably chosen polynomials P_k as k goes to infinity.
Proof In radial-angular representation, the iteration error is given by, where the integrable function φ : S^{n−1} → R is defined by the expression. To prove the convergence of the iterates U_k to the solution U, we again consider the polynomials, where the value one is only attained on the intersection of the (n − m)-dimensional kernel of the matrix T^t with the unit sphere, that is, on a set of area measure zero. The convergence again follows from the dominated convergence theorem. ⊓⊔

Under a seemingly harmless additional assumption, one gets an estimate for the speed of convergence that is rather sharp and can hardly be improved.
Theorem 4.4 Under the assumption that the norm given by (2.17) of the solution U takes a finite value, the iteration error can be estimated as in (4.17), in terms of the density f of the distribution of the values ∥T^t η∥^2 on the unit sphere.
Proof The proof is based on the same error representation as that in the proof of the previous theorem, but by assumption the function is now square integrable, not only integrable. Its correspondingly scaled L^2-norm is the norm |||U|||_s of the solution. The Cauchy–Schwarz inequality thus leads to an estimate which, rewritten in terms of the density f, yields (4.17). ⊓⊔

In sharp contrast to Theorem 4.1 and Theorem 4.2, this theorem strongly depends on the involved norms. But of course one can hope that other error norms behave similarly, especially for solutions whose Fourier transform is not strongly concentrated on a small sector around the kernel of T^t. For rotationally symmetric solutions U, this estimate then holds, as the norms given by (2.15) and (2.17) coincide in this case. It can in turn be transferred to the Hilbert spaces H^s, in which even equality holds.
then tends to zero for suitably chosen polynomials P k as k goes to infinity.
Proof In this case, the iteration error possesses the radial-angular representation, where the integrable function φ : S^{n−1} → R is defined by the expression. Since the function φ takes the constant value |U|^2_{2,s} in the given case, this proves the error representation. Convergence is shown as in the proof of Theorem 4.3. ⊓⊔

This lemma may not be very interesting on its own, but it once again shows that not much is lost in the error estimate (4.17) and that the prefactors cannot really be improved. Moreover, it shows that these prefactors tend to zero for optimally or near optimally chosen polynomials P_k as k goes to infinity. The task is therefore to find the polynomials P_k of order k that minimize the integral (4.20) under the constraint P_k(0) = 1. These polynomials can be expressed in terms of the orthogonal polynomials assigned to the density f as weight function. Under the given circumstances, the corresponding expression defines an inner product on the space of the polynomials. Let the polynomials p_k of order k satisfy the orthogonality condition (p_k, p_ℓ) = δ_kℓ.
Lemma 4.2 In terms of the given orthogonal polynomials p_k, the polynomial P_k of order k that minimizes the integral (4.20) under the constraint P_k(0) = 1 is

P_k(t) = ( ∑_{j=0}^{k} p_j(0) p_j(t) ) / ( ∑_{j=0}^{k} p_j(0)^2 ),

and the integral itself takes the minimum value

M_k = 1 / ∑_{j=0}^{k} p_j(0)^2.    (4.23)
Proof We represent the optimum polynomial P_k as a linear combination P_k(t) = ∑_{j=0}^{k} x_j p_j(t).
The zeros of p_j lie strictly between the zeros of p_{j+1}, the interlacing property of the zeros of orthogonal polynomials. The polynomials p_0, p_1, …, p_k therefore cannot all take the value zero at the same point. Introducing the vector x with the components x_j, the vector p ≠ 0 with the components p_j(0), and the vector a = p/∥p∥, we have to minimize ∥x∥^2 under the constraint a^t x = 1/∥p∥. Because of ∥x∥^2 = (a^t x)^2 + ∥x − (a^t x)a∥^2, the polynomial P_k minimizes the integral if and only if its coefficient vector x is the scalar multiple of a, or of p, that satisfies the constraint. ⊓⊔

This is where our observations from the previous section come back into play. We have seen there that the values ∥T^t ω∥^2 are approximately normally distributed, with a variance V and a standard deviation σ = √V that tend to zero in almost all cases as the dimensions increase. This justifies replacing the actual distribution (4.9) by the corresponding normal distribution with the density (4.24). Then one ends up with a classical case and can express the orthogonal polynomials p_k in terms of the Hermite polynomials He_k, with He_0(x) = 1 and He_1(x) = x, that satisfy the usual orthogonality condition. In dependence on the standard deviation σ, the p_k are in this case given by (4.27).
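With the density (4.24) taken to be that of a normal distribution with mean one and standard deviation σ, the orthonormal polynomials are shifted and scaled probabilists' Hermite polynomials, and the minimum values from Lemma 4.2 become M_k = 1/∑_{j≤k} p_j(0)^2. The following sketch reproduces such reduction factors numerically; the normalization p_j(t) = He_j((t − 1)/σ)/√(j!) is our reading of (4.27) and should be treated as an assumption:

```python
import math

def hermite_He(k, x):
    """Probabilists' Hermite polynomial He_k, evaluated via the three-term
    recurrence He_{k+1}(x) = x He_k(x) - k He_{k-1}(x)."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def reduction_factors(sigma, kmax=12):
    """M_k = 1 / sum_{j<=k} p_j(0)^2 with p_j(t) = He_j((t - 1)/sigma) / sqrt(j!),
    the polynomials orthonormal with respect to a normal density with mean one
    and standard deviation sigma (assumed normalization of (4.27))."""
    s, M = 0.0, []
    for j in range(kmax + 1):
        s += hermite_He(j, -1.0 / sigma) ** 2 / math.factorial(j)
        M.append(1.0 / s)
    return M

for sigma in (1 / 16, 1 / 32, 1 / 64, 1 / 128):
    print(sigma, reduction_factors(sigma))
```

Already M_1 = 1/(1 + 1/σ^2), for σ = 1/16 thus 1/257, and the factors decrease strictly with k; smaller σ gives faster decay, in line with the concentration-of-measure picture.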
The first twelve M_k = M_k(σ) assigned to the orthogonal polynomials (4.27) for the standard deviations σ = 1/16, σ = 1/32, σ = 1/64, and σ = 1/128 are compiled in Table 1. They give a good impression of the speed of convergence that can be expected and are fully in line with our predictions. The only matrices T^t for which the distribution density (3.9) is explicitly known and available for comparison are the rescaled versions of the orthogonal projections from R^n to R^m, (m×n)-matrices with one as their only singular value. By Theorem 3.1, the densities (3.9) assigned to such orthogonal projections take the given value on the interval 0 < t < 1 and vanish outside of it, where α and β in this case are given by (3.10). For the counterpart of such densities, rescaled to expected value one, the orthogonal polynomials can be expressed in terms of Jacobi polynomials. Details can be found in the appendix. For smaller dimensions, the resulting values (4.23) tend to zero much faster than the values that one obtains by approximating the actual density by the density of a normal distribution. This is probably due to the different behavior of the two distributions for small δ.
The values rapidly approach each other as the dimensions increase.
For the matrices T from Sect. 3 assigned to undirected interaction graphs, the angular distribution of the values ∥T^t ω∥^2 is often very similar to that in the case of rescaled orthogonal projections. In fact, it differs in many cases only very slightly from a beta distribution rescaled to expected value one. The variance of a rescaled beta distribution with the density (4.29) is

V = β / ( α (α + β + 1) ).    (4.30)
The parameters α and β and the variance V are thus directly connected. Such a beta distribution with given parameter α and given variance V therefore exists if and only if 1 − Vα > 0. In this case, it is uniquely determined and β is given by

β = Vα(α + 1) / (1 − Vα).    (4.32)
If one sets α to the value m/2 given by Theorem 3.1, the compatibility condition is satisfied for the matrices T assigned to large classes of graphs, provably or with very high probability. Examples are the matrices assigned to complete graphs, the matrices assigned to random graphs with a fixed number of vertices that are connected by an edge with a given probability, or the matrices assigned to random graphs with a fixed number of vertices and edges. The resulting densities (4.29) agree in such cases almost perfectly with the actual densities. The values (4.23) assigned to these densities therefore reflect the convergence behavior presumably at least as well as the values resulting from the given approximation by a simple Gauss function. Figure 3 compares the distribution density originating from the matrix assigned to a small random graph with m = 32 vertices and n − m = 96 edges with its approximation by the density of the corresponding normal distribution and by the density of the given rescaled beta distribution. The variance is V = 941/16640 in this example. The difference is striking, but soon disappears for larger graphs.

Fig. 3 Comparison of the approximation by a rescaled beta distribution and a normal distribution

The matrices T assigned to interaction graphs have still another nice property. For vectors e pointing in the direction of a coordinate axis, ∥T^t e∥ = ∥e∥ holds. The expression α(ω)(∥T^t ω∥^2 + µ) thus takes the value one on the coordinate axes, which is the reason for the square root in the definition (3.18) of these matrices. The described effects therefore become particularly noticeable on the regions on which the Fourier transforms of functions in hyperbolic cross spaces are concentrated. The functions that are well representable in tensor formats, and in which we are first and foremost interested in the present paper, fall into this category.
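Assuming that the density (4.29) is that of a Beta(α, β) distribution rescaled to expected value one, its variance is V = β/(α(α + β + 1)), and solving this relation for β reproduces the compatibility condition 1 − Vα > 0. A small sketch in exact rational arithmetic, applied to the random-graph example above; the closed-form expressions are derived by us, not quoted from the paper:

```python
from fractions import Fraction

def beta_parameter(alpha, V):
    """Solve V = beta / (alpha * (alpha + beta + 1)) for beta; a beta distribution
    rescaled to mean one with given alpha and variance V exists iff 1 - V*alpha > 0."""
    if 1 - V * alpha <= 0:
        raise ValueError("no rescaled beta distribution with this alpha and variance")
    return V * alpha * (alpha + 1) / (1 - V * alpha)

def variance(alpha, beta):
    """Variance of the Beta(alpha, beta) distribution rescaled to expected value one."""
    return beta / (alpha * (alpha + beta + 1))

# the example from the text: random graph with m = 32 vertices, so alpha = m/2 = 16
alpha = Fraction(32, 2)
V = Fraction(941, 16640)
beta = beta_parameter(alpha, V)
print(beta, float(beta))             # exact rational value and its decimal form
print(variance(alpha, beta) == V)    # round trip back to the given variance
```

For α = 16 and V = 941/16640 the compatibility condition reads 1 − Vα = 99/1040 > 0, and β comes out as 15997/99 ≈ 161.6.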
We still need to approximate 1/r on intervals µ ≤ r ≤ Rµ with moderate relative accuracy by sums of exponential functions, which then leads to the approximations (1.4) of the kernel (4.3) by sums of Gauss functions. Relative, not absolute, accuracy, since these approximations are embedded in an iterative process; see the remarks following Theorem 4. To check with which relative error the function (4.35) approximates 1/r on a given interval 1 ≤ r ≤ R, one thus has to check how well the function φ approximates the constant 1 on the interval 0 ≤ s ≤ ln R.
For h = 1 and summation indices k ranging from −2 to 50, the relative error is, for example, less than 0.0007 on almost the whole interval 1 ≤ r ≤ 10^18, that is, in the per mill range on an interval that spans eighteen orders of magnitude. Such an accuracy is surely exaggerated in the given context, but the example underscores the excellent approximation properties of the functions (4.35). Figure 4 depicts the corresponding function φ. These observations are underpinned by the analysis of the approximation properties of the corresponding infinite series in [13, Sect. 5]. It is shown there that these series approximate 1/r on the positive real axis with a relative error

∼ 4π h^{−1/2} e^{−π²/h}.    (4.38)

High relative accuracy on large intervals 1 ≤ r ≤ R is a considerably stronger requirement than high absolute accuracy. The approximation of 1/r with minimum absolute error has been studied in [5], [6], and [9]. The technique in [9] can be used to compute the approximations with the least relative error. On the given interval, one gains a factor of just over five with a sum of 50 exponential functions. Hackbusch has compiled the best approximations for a large number of intervals and for sums of exponential functions of very different lengths. The data can be downloaded from the website of the Max Planck Institute in Leipzig [8].
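The quoted figures are easy to check numerically. The sketch below evaluates the exponential sum (4.35) with h = 1 and k ranging from −2 to 50 and measures the relative error of v(r) against 1/r on a large part of the interval; the vectorized evaluation and the sampling are our own choices:

```python
import numpy as np

def v(r, h=1.0, k1=-2, k2=50):
    """Exponential-sum approximation (4.35) of 1/r due to Beylkin and Monzon:
    v(r) = h * sum_k exp(-k h) * exp(-exp(-k h) * r)."""
    kh = np.arange(k1, k2 + 1) * h
    r = np.atleast_1d(np.asarray(r, dtype=float))[:, None]
    with np.errstate(under="ignore"):  # tiny terms may underflow harmlessly to zero
        return h * np.sum(np.exp(-kh - np.exp(-kh) * r), axis=1)

# relative error |r * v(r) - 1| on a large part of the interval 1 <= r <= 1e18
r = np.logspace(0.0, 15.0, 2000)
err = np.abs(r * v(r) - 1.0)
print(err.max())   # stays in the per mill range, as stated in the text
```

The maximum relative error on the sampled range sits near the amplitude 4πe^{−π²} ≈ 6.5·10^{−4} of the oscillation of the infinite series, consistent with (4.38) at h = 1.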
The practical feasibility of the approach depends on the representation of the tensors involved and on the access to the Fourier transforms of the functions represented by them. A central task not discussed here is the recompression of the data between the iteration steps in order to keep the amount of work and storage under control, a problem common to all tensor-oriented iterative methods. If one fixes the accuracy in the single coordinate directions, the process reduces to an iteration on the space of the functions defined on a given high-dimensional cubic grid, functions that are stored in a compressed tensor format. There exist highly efficient, linear algebra-based techniques for recompressing such data. A problem in the given context may be that the discrete ℓ2-norm of the tensors, which looks natural in this framework, does not match the underlying norms of the continuous problem. Another open question is the overall complexity of the process. A difficulty with our approach is that the given operators do not split into sums of operators that act separately on a single variable or a small group of variables, a fact that complicates the application of techniques such as those in [3] or [7]. For more information on tensor-oriented solution methods for partial differential equations, see the monographs [10] and [11] of Hackbusch and of Khoromskij and the comprehensive survey article [2] of Bachmayr.

The Jacobi polynomials satisfy the orthogonality relation

Remark
and the representation

B(α, β) = Γ(α)Γ(β) / Γ(α + β)

of the beta function in terms of the gamma function, this volume integral can finally be converted into the surface integral (3.9). Since the function (3.9) is locally integrable and the distribution function (3.7) is right-continuous, one can let δ_0 tend to zero in the relation above and gets

Lemma 3.2 The expected value and the variance (3.14) depend only on the singular values σ_1, …, σ_m of the matrix T. In terms of the power sums

Fig. 2 The frequency distribution of the values ∥T^t η∥^2 for the matrix T associated with the C60 molecule

Lemma 4.1 Let the rotationally symmetric function U lie in the Hilbert space H^s equipped with the seminorm |·|_{2,s}. The iteration error, here given by

It suffices to restrict oneself to intervals 1 ≤ r ≤ R. If v(r) approximates 1/r on this interval with a given relative accuracy, the function r → v(r/µ)/µ approximates 1/r on the original interval µ ≤ r ≤ Rµ with the same relative accuracy. Good approximations of 1/r with astonishingly small relative error are the at first sight rather harmless looking sums

v(r) = h ∑_{k=k_1}^{k_2} e^{−kh} exp(−e^{−kh} r),    (4.35)

a construction due to Beylkin and Monzón [4] based on an integral representation. The parameter h determines the accuracy and the quantities k_1 h and k_2 h control the approximation interval. The functions (4.35) possess the representation

v(r) = φ(ln r)/r,    φ(s) = h ∑_{k=k_1}^{k_2} ϕ(s − kh),    (4.36)

in terms of the window function

ϕ(s) = exp(−e^s + s),    (4.37)

which decays rapidly as s goes to infinity.

Table 1 The reduction factors M_k(σ) assigned to the approximate density (4.24) for the first 12 iterations.