Gaussian kernel quadrature at scaled Gauss–Hermite nodes

This article derives an accurate, explicit, and numerically stable approximation to the kernel quadrature weights in one dimension and on tensor product grids when the kernel and the integration measure are Gaussian. The approximation is based on the use of scaled Gauss–Hermite nodes and on truncation of the Mercer eigendecomposition of the Gaussian kernel. Numerical evidence indicates that both the kernel quadrature weights and their approximations at these nodes are positive. An exponential rate of convergence for functions in the reproducing kernel Hilbert space induced by the Gaussian kernel is proved under an assumption on the growth of the sum of absolute values of the approximate weights.


Introduction
Let µ be the standard Gaussian measure on R and f : R → R a measurable function. We consider the problem of numerical computation of the integral µ(f) of f with respect to µ using a kernel quadrature rule (we reserve the term cubature for rules in higher dimensions) based on the Gaussian kernel

k(x, x′) = exp(−(x − x′)²/(2ℓ²))    (1.1)

with the length-scale ℓ > 0. Given any distinct nodes x_1, . . ., x_N, the kernel quadrature rule is an approximation of the form

Q_k(f) = Σ_{n=1}^N w_{k,n} f(x_n) ≈ µ(f),

with its weights w_k = (w_{k,1}, . . ., w_{k,N}) ∈ R^N solved from the linear system of equations

K w_k = k_µ,    (1.2)

where [K]_{ij} := k(x_i, x_j) and [k_µ]_i := ∫_R k(x_i, x) dµ(x). This is equivalent to uniquely selecting the weights such that the N kernel translates k(x_1, ·), . . ., k(x_N, ·) are integrated exactly by the quadrature rule. Kernel quadrature rules can be interpreted as best quadrature rules in the reproducing kernel Hilbert space (RKHS) induced by a positive-definite kernel (Larkin, 1970), as integrated kernel (radial basis function) interpolants (Bezhaev, 1991; Sommariva and Vianello, 2006), and as posteriors to µ(f) under a Gaussian process prior on the integrand (Larkin, 1972; O'Hagan, 1991; Briol et al., 2018).
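To make the construction above concrete, the following sketch (Python with NumPy; the function name `gaussian_kernel_quadrature_weights` is ours) solves the weights from the linear system (1.2). It uses the closed-form kernel mean k_µ(x) = ℓ/√(ℓ² + 1) exp(−x²/(2(ℓ² + 1))) of the Gaussian kernel under the standard Gaussian measure, which follows from a standard Gaussian convolution identity.

```python
import numpy as np

def gaussian_kernel_quadrature_weights(nodes, ell):
    """Solve the kernel quadrature weights from K w = k_mu for the Gaussian
    kernel k(x, y) = exp(-(x - y)^2 / (2 ell^2)) and mu = N(0, 1)."""
    x = np.asarray(nodes, dtype=float)
    # Kernel matrix [K]_ij = k(x_i, x_j)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * ell ** 2))
    # Kernel mean k_mu(x) = int k(x, y) dN(y; 0, 1), in closed form
    k_mu = ell / np.sqrt(ell ** 2 + 1.0) * np.exp(-x ** 2 / (2.0 * (ell ** 2 + 1.0)))
    return np.linalg.solve(K, k_mu)

nodes = np.linspace(-3.0, 3.0, 7)
w = gaussian_kernel_quadrature_weights(nodes, ell=1.0)
```

By construction the resulting rule integrates the seven kernel translates k(x_i, ·) exactly; for larger N this direct solve degrades, which is the ill-conditioning problem the article addresses.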
As is well known, interpolation and quadrature using the Gaussian kernel are numerically notoriously difficult problems because the condition number of the kernel matrix K tends to grow exponentially in the number of nodes (Schaback, 1995). Inspired by the work of Fasshauer and McCourt (2012) on numerically stable kernel interpolation, we show that selecting the nodes by a suitable and fairly natural scaling of the nodes of the classical Gauss–Hermite quadrature rule permits, through truncation of the Mercer eigendecomposition, an accurate and numerically stable approximation to the Gaussian kernel quadrature weights. To be precise, Theorem 2.3 states that the quadrature rule Q̃_k that exactly integrates the first N Mercer eigenfunctions of the Gaussian kernel and uses the nodes

x̃_n = x_n^GH / (√2 αβ)

has the weights

w̃_{k,n} = (w_n^GH e^{δ² x̃_n²} / √(1 + 2δ²)) Σ_{m=0}^{⌊(N−1)/2⌋} (1/(2^m m!)) ((2α²β² − 1 − 2δ²)/(1 + 2δ²))^m H_{2m}(x_n^GH),

where α (for which the value 1/√2 seems the most natural), β, and δ are constants defined in Equation (2.3), H_n are the probabilists' Hermite polynomials (2.2), and x_n^GH and w_n^GH are the nodes and weights of the N-point Gauss–Hermite quadrature rule. We argue that these weights are a good approximation to w_{k,n} and accordingly call them approximate Gaussian kernel quadrature weights. Although we derive no bounds for the error of this weight approximation, numerical experiments in Section 5 indicate that the approximation is accurate and that, it appears, w̃_k → w_k as N → ∞. In Section 4 we extend the weight approximation to d-dimensional Gaussian tensor product kernel cubature rules of the form Q_k^d = Q_{k,1} ⊗ · · · ⊗ Q_{k,d}, where the Q_{k,i} are one-dimensional Gaussian kernel quadrature rules. Since each weight of Q_k^d is a product of weights of the univariate rules, an approximation for the tensor product weights is readily available.
Furthermore, numerical experiments also suggest that both w_{k,n} and w̃_{k,n} are positive for any N ∈ N and every n = 1, . . ., N. We are not aware of any explicit node configurations giving rise to positive kernel quadrature weights for the Gaussian or other kernels, such as those of the Matérn class, popular in the scattered data approximation and Gaussian process regression communities. The nodes that minimise the associated worst-case error have, if distinct, positive weights (Richter-Dyn, 1971; Oettershagen, 2017), but for the aforementioned kernels these nodes are only available through costly optimisation. In Sections 3 and 4 we show that slow enough growth with N of Σ_{n=1}^N |w̃_{k,n}| (numerical evidence indicates that this sum converges to one) guarantees that the approximate Gaussian kernel quadrature rule, as well as the corresponding tensor product version, converges with an exponential rate for functions in the RKHS of the Gaussian kernel. The convergence analysis is based on analysis of the magnitude of the remainder of the Mercer expansion and on rather explicit bounds on Hermite polynomials and their roots.

Approximate weights
This section contains the main results of this article. The main contribution is the derivation, in Theorem 2.3, of the weights w̃_k that can be used to approximate the kernel quadrature weights. We also discuss positivity of these weights, the effect the kernel length-scale ℓ is expected to have on the quality of the approximation, and computational complexity.

Eigendecomposition of the Gaussian kernel
Let ν be a probability measure on the real line. If the support of ν is compact, Mercer's theorem guarantees that any positive-definite kernel k admits an absolutely and uniformly convergent eigendecomposition

k(x, y) = Σ_{n=0}^∞ λ_n φ_n(x) φ_n(y)    (2.1)

for positive and non-increasing eigenvalues λ_n and eigenfunctions φ_n, included in the RKHS H induced by k, that are orthonormal in L²(ν). Moreover, the √λ_n φ_n are H-orthonormal. If the support of ν is not compact, the expansion (2.1) converges absolutely and uniformly on all compact subsets of R × R under some mild assumptions (Sun, 2005; Steinwart and Scovel, 2012). For the Gaussian kernel (1.1) and a Gaussian measure the eigenvalues and eigenfunctions are available analytically. For a collection of explicit eigendecompositions of some other kernels, see for instance (Fasshauer and McCourt, 2015, Appendix A).

Let µ_α stand for the Gaussian probability measure with the density x ↦ (α/√π) e^{−α²x²} for a scale parameter α > 0, and let

H_n(x) = (−1)^n e^{x²/2} (d^n/dx^n) e^{−x²/2}    (2.2)

stand for the (unnormalised) probabilists' Hermite polynomials, satisfying the orthogonality property ∫_R H_n H_m dµ = n! δ_{nm}. With ε = 1/(√2 ℓ), define the constants

β = (1 + (2ε/α)²)^{1/4} and δ² = (α²/2)(β² − 1),    (2.3)

and note that β > 1 and δ² > 0. Then the eigenvalues and L²(µ_α)-orthonormal eigenfunctions of the Gaussian kernel are (Fasshauer and McCourt, 2012)

λ_n^α = √(α²/(α² + δ² + ε²)) (ε²/(α² + δ² + ε²))^n    (2.4)

and

φ_n^α(x) = √(β/n!) e^{−δ²x²} H_n(√2 αβ x).    (2.5)

See (Fasshauer and McCourt, 2015, Section 12.2.1) for verification that these indeed are Mercer eigenvalues and eigenfunctions for the Gaussian kernel. The role of the parameter α is discussed in Section 2.4. The following result, also derivable from Equation 22.13.17 in (Abramowitz and Stegun, 1964), will be useful.
Lemma 2.1. The eigenfunctions (2.5) of the Gaussian kernel (1.1) satisfy µ(φ_{2m+1}^α) = 0 and

µ(φ_{2m}^α) = √(β/(1 + 2δ²)) (√((2m)!)/(2^m m!)) ((2α²β² − 1 − 2δ²)/(1 + 2δ²))^m.

Proof. Since a Hermite polynomial of odd order is an odd function, µ(φ_{2m+1}^α) = 0. For even indices, use the explicit expression for the Hermite polynomials and the binomial theorem to conclude that the stated formula holds.
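The eigendecomposition (2.4)–(2.5) and Lemma 2.1 can be checked numerically. The sketch below (Python with NumPy; the names `phi` and `mu_phi_even` are ours, and the formulas are those reconstructed above) verifies that with α = 1/√2 the eigenfunctions are orthonormal with respect to µ_α = µ and that their means under µ agree with the closed form of Lemma 2.1.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

ell = 1.0                        # kernel length-scale
alpha = 1.0 / np.sqrt(2.0)       # so that mu_alpha is the standard Gaussian
eps = 1.0 / (np.sqrt(2.0) * ell)
beta = (1.0 + (2.0 * eps / alpha) ** 2) ** 0.25
delta2 = 0.5 * alpha ** 2 * (beta ** 2 - 1.0)

def He(n, x):
    """Probabilists' Hermite polynomial H_n evaluated at x."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval(x, c)

def phi(n, x):
    """Eigenfunction (2.5) of the Gaussian kernel."""
    return np.sqrt(beta / math.factorial(n)) * np.exp(-delta2 * x ** 2) \
        * He(n, np.sqrt(2.0) * alpha * beta * x)

def mu_phi_even(m):
    """Closed form of mu(phi_{2m}) from Lemma 2.1."""
    r = (2.0 * alpha ** 2 * beta ** 2 - 1.0 - 2.0 * delta2) / (1.0 + 2.0 * delta2)
    return np.sqrt(beta / (1.0 + 2.0 * delta2)) \
        * np.sqrt(math.factorial(2 * m)) / (2.0 ** m * math.factorial(m)) * r ** m
```

Integrals against µ are evaluated with a high-order Gauss–Hermite reference rule; agreement to near machine precision is a strong consistency check on the constants (2.3).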

Approximation via QR decomposition
We begin by outlining a straightforward extension to kernel quadrature of the work of Fasshauer and McCourt (2012; 2015, Chapter 13) on numerically stable kernel interpolation. Recall that the kernel quadrature weights w_k ∈ R^N at distinct nodes x_1, . . ., x_N are solved from the linear system K w_k = k_µ. Truncation of the eigendecomposition (2.1) after M ≥ N terms yields the approximations K ≈ ΦΛΦ^T and k_µ ≈ ΦΛϕ_µ, where [Φ]_{ij} := φ_{j−1}(x_i) is an N × M matrix, [Λ]_{ii} := λ_{i−1} contains the eigenvalues in the appropriate order, and [ϕ_µ]_i := µ(φ_{i−1}) is an M-vector. The kernel quadrature weights w_k can therefore be approximated by the weights w_k^M solved from

ΦΛΦ^T w_k^M = ΦΛϕ_µ.    (2.6)

Equation (2.6) can be written in a more convenient form by exploiting the QR decomposition of Φ, which allows for cancelling the orthogonal factor and solving the weights from a better-conditioned system involving the triangular factor and the N × N identity matrix I_N. For the Gaussian kernel, numerical stability can be significantly improved by performing the multiplications by the diagonal eigenvalue blocks analytically. Unfortunately, use of the QR decomposition does not provide an attractive closed-form solution for the approximate weights w_k^M. Setting M = N turns Φ into a square matrix, enabling its direct inversion and the forming of a connection to the classical Gauss–Hermite quadrature.
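A minimal sketch of the truncation step (not the full QR algorithm of Fasshauer and McCourt; variable names are ours, and the eigenpairs are those reconstructed in (2.4)–(2.5)) builds the truncated factors Φ and Λ, confirms that ΦΛΦ^T reproduces the kernel matrix K essentially to machine precision for M well beyond N, and solves (2.6) in the least-squares sense.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

ell = 1.0
alpha = 1.0 / np.sqrt(2.0)
eps = 1.0 / (np.sqrt(2.0) * ell)
beta = (1.0 + (2.0 * eps / alpha) ** 2) ** 0.25
delta2 = 0.5 * alpha ** 2 * (beta ** 2 - 1.0)
D = alpha ** 2 + delta2 + eps ** 2

def phi(n, x):
    # eigenfunction (2.5): sqrt(beta/n!) e^{-delta^2 x^2} H_n(sqrt(2) alpha beta x)
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt(beta / math.factorial(n)) * np.exp(-delta2 * x ** 2) \
        * hermeval(np.sqrt(2.0) * alpha * beta * x, c)

def lam(n):
    # eigenvalue (2.4)
    return np.sqrt(alpha ** 2 / D) * (eps ** 2 / D) ** n

x = np.linspace(-2.5, 2.5, 6)          # N = 6 distinct nodes
M = 40                                 # truncation length M >= N
Phi = np.column_stack([phi(j, x) for j in range(M)])
Lam = np.diag([lam(j) for j in range(M)])
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * ell ** 2))
K_trunc = Phi @ Lam @ Phi.T            # truncated Mercer approximation of K

# mu(phi_j) by a high-order Gauss-Hermite reference rule
xq, wq = hermegauss(100)
wq = wq / np.sqrt(2.0 * np.pi)
phi_mu = np.array([np.sum(wq * phi(j, xq)) for j in range(M)])
k_mu = ell / np.sqrt(ell ** 2 + 1.0) * np.exp(-x ** 2 / (2.0 * (ell ** 2 + 1.0)))

w_exact = np.linalg.solve(K, k_mu)
w_trunc = np.linalg.lstsq(K_trunc, Phi @ Lam @ phi_mu, rcond=None)[0]
```

The rapid geometric decay of the eigenvalues is what makes a moderate truncation length sufficient here.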

Gauss-Hermite quadrature
Given a measure ν on R, the N-point Gaussian quadrature rule is the unique N-point quadrature rule that is exact for all polynomials of degree at most 2N − 1. We are interested in Gauss–Hermite quadrature rules, which are Gaussian rules for the measure µ:

Σ_{n=1}^N w_n^GH p(x_n^GH) = µ(p)

for every polynomial p : R → R with deg p ≤ 2N − 1. The nodes x_1^GH, . . ., x_N^GH are the roots of the Nth Hermite polynomial H_N, and the weights w_1^GH, . . ., w_N^GH are positive and sum to one. The nodes and the weights are related to the eigenvalues and eigenvectors of the tridiagonal Jacobi matrix formed out of the three-term recurrence coefficients of the normalised Hermite polynomials (Gautschi, 2004, Theorem 3.1).
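In NumPy the probabilists' Gauss–Hermite rule is available directly; the small wrapper below (our naming) normalises the weights so that the rule targets the standard Gaussian probability measure µ rather than the unnormalised weight function e^{−x²/2}.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_hermite(N):
    """N-point Gauss-Hermite rule for the standard Gaussian measure mu:
    probabilists' nodes, weights normalised to sum to one."""
    x, w = hermegauss(N)               # weight function exp(-x^2/2)
    return x, w / np.sqrt(2.0 * np.pi) # divide out the Gaussian normalisation

x, w = gauss_hermite(5)
```

With N = 5 the rule is exact for all polynomials of degree at most 2N − 1 = 9, so it reproduces the Gaussian moments 1, 3, 15, 105 of x², x⁴, x⁶, x⁸ exactly.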
We make use of the following theorem, a one-dimensional special case of a more general result due to Mysovskikh (1968). See also (Cools, 1997, Section 7). This result also follows from the Christoffel–Darboux formula (2.12).
Theorem 2.2. Let ν be a measure on R. Suppose that x_1, . . ., x_N and w_1, . . ., w_N are the nodes and weights of the corresponding unique Gaussian quadrature rule. Let p_0, . . ., p_{N−1} be the L²(ν)-orthonormal polynomials. Then the matrix [P]_{ij} := Σ_{m=0}^{N−1} p_m(x_i) p_m(x_j) is diagonal and has the diagonal elements [P]_{ii} = 1/w_i.
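Theorem 2.2 is easy to verify numerically for ν = µ. The sketch below (variable names are ours) forms the Vandermonde matrix V of the orthonormal Hermite polynomials p_m = H_m/√(m!) at the Gauss–Hermite nodes and checks that P = V V^T is diagonal with entries 1/w_i.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

N = 7
x, w = hermegauss(N)
w = w / np.sqrt(2.0 * np.pi)   # weights of the Gaussian rule for mu

# Vandermonde matrix of the L^2(mu)-orthonormal polynomials p_m = H_m/sqrt(m!)
V = np.column_stack([
    hermeval(x, np.eye(N)[m]) / np.sqrt(math.factorial(m)) for m in range(N)
])
P = V @ V.T                    # should equal diag(1/w_1, ..., 1/w_N)
```

This identity is exactly what later allows the inverse-transpose of the eigenfunction matrix to be written in terms of the Gauss–Hermite weights.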

Approximate weights at scaled Gauss-Hermite nodes
Let us now consider the approximate weights (2.6) with M = N. Assuming that Φ is invertible, we then have

w̃_k = Φ^{−T} ϕ_µ.

The weights w̃_k are those of the unique quadrature rule that is exact for the N first eigenfunctions φ_0^α, . . ., φ_{N−1}^α. For the Gaussian kernel, we are in a position to do much more. Recalling the form of the eigenfunctions in Equation (2.5), we can write Φ = √β E^{−1} V for the diagonal matrix [E]_{ii} := e^{δ²x_i²} and the Vandermonde matrix

[V]_{ij} := H_{j−1}(√2 αβ x_i)/√((j − 1)!)    (2.8)

of scaled and normalised Hermite polynomials. From this it is evident that Φ is invertible, which is just a manifestation of the fact that the eigenfunctions of a totally positive kernel constitute a Chebyshev system (Kellog, 1918; Pinkus, 1996). Consequently,

w̃_k = (1/√β) E V^{−T} ϕ_µ.

Select the nodes x̃_n = x_n^GH/(√2 αβ). Then the matrix V defined in Equation (2.8) is precisely the Vandermonde matrix of the normalised Hermite polynomials at the Gauss–Hermite nodes and V V^T is the matrix P of Theorem 2.2. Let W_GH be the diagonal matrix containing the Gauss–Hermite weights. It follows that

w̃_k = (1/√β) E W_GH V ϕ_µ.    (2.9)

Combining this equation with Lemma 2.1, we obtain the main result of this article.
Theorem 2.3. Let x_1^GH, . . ., x_N^GH and w_1^GH, . . ., w_N^GH stand for the nodes and weights of the N-point Gauss–Hermite quadrature rule. Define the nodes

x̃_n := x_n^GH/(√2 αβ).    (2.10)

Then the quadrature rule Q̃_k(f) = Σ_{n=1}^N w̃_{k,n} f(x̃_n), defined by the exactness conditions Q̃_k(φ_m^α) = µ(φ_m^α) for m = 0, . . ., N − 1, has the weights

w̃_{k,n} = (w_n^GH e^{δ²x̃_n²}/√(1 + 2δ²)) Σ_{m=0}^{⌊(N−1)/2⌋} (1/(2^m m!)) ((2α²β² − 1 − 2δ²)/(1 + 2δ²))^m H_{2m}(x_n^GH),    (2.11)

where α, β, and δ are defined in Equation (2.3) and H_{2m} are the probabilists' Hermite polynomials (2.2).
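The weight formula (2.11), as reconstructed here, is straightforward to implement (Python with NumPy; the function name `approx_gaussian_kq` is ours). A strong check is that the resulting rule really satisfies the defining exactness conditions on the first N eigenfunctions.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def approx_gaussian_kq(N, ell, alpha=1.0 / np.sqrt(2.0)):
    """Nodes (2.10) and approximate weights (2.11), as reconstructed above."""
    eps = 1.0 / (np.sqrt(2.0) * ell)
    beta = (1.0 + (2.0 * eps / alpha) ** 2) ** 0.25
    delta2 = 0.5 * alpha ** 2 * (beta ** 2 - 1.0)
    xg, wg = hermegauss(N)
    wg = wg / np.sqrt(2.0 * np.pi)          # Gauss-Hermite weights for mu
    nodes = xg / (np.sqrt(2.0) * alpha * beta)
    r = (2.0 * alpha ** 2 * beta ** 2 - 1.0 - 2.0 * delta2) / (1.0 + 2.0 * delta2)
    s = np.zeros(N)
    for m in range((N - 1) // 2 + 1):       # even Hermite polynomials H_{2m}
        c = np.zeros(2 * m + 1); c[2 * m] = 1.0
        s += r ** m / (2.0 ** m * math.factorial(m)) * hermeval(xg, c)
    w = wg * np.exp(delta2 * nodes ** 2) / np.sqrt(1.0 + 2.0 * delta2) * s
    return nodes, w

nodes, w = approx_gaussian_kq(8, ell=1.0)
```

The Hermite evaluations use the stable three-term recurrence inside `hermeval`, so no ill-conditioned linear system is solved at any point.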
Since the weights w̃_k are obtained by truncating the Mercer expansion of k, it is to be expected that w̃_k ≈ w_k. This motivates our calling these weights the approximate Gaussian kernel quadrature weights. We do not provide theoretical results on the quality of this approximation, but the numerical experiments in Section 5.1 indicate that the approximation is accurate and that its accuracy increases with N. See (Fasshauer and McCourt, 2012) for related experiments.
An alternative non-analytical formula for the approximate weights can be derived using the Christoffel–Darboux formula (Gautschi, 2004, Section 1.3):

Σ_{m=0}^{N−1} H_m(x)H_m(y)/m! = (H_N(x)H_{N−1}(y) − H_{N−1}(x)H_N(y))/((N − 1)!(x − y)).    (2.12)

From Equation (2.9) we then obtain (keep in mind that x_1^GH, . . ., x_N^GH are the roots of H_N)

w̃_{k,n} = w_n^GH e^{δ²x̃_n²} (H_{N−1}(x_n^GH)/(N − 1)!) ∫_R e^{−δ²x²} (H_N(√2 αβ x)/(√2 αβ x − x_n^GH)) dµ(x).

This formula is analogous to the formula w_n^GH = (N − 1)!/(N H_{N−1}(x_n^GH)²) for the Gauss–Hermite weights. Plugging this in, we get

w̃_{k,n} = (e^{δ²x̃_n²}/(N H_{N−1}(x_n^GH))) ∫_R e^{−δ²x²} (H_N(√2 αβ x)/(√2 αβ x − x_n^GH)) dµ(x).

It appears that both w_{k,n} and w̃_{k,n} of Theorem 2.3 are positive for many choices of α; see Section 5.2 for experiments involving α = 1/√2. Unfortunately, we have not been able to prove this. In fact, numerical evidence indicates something slightly stronger: namely, that the even polynomial

Σ_{m=0}^{⌊(N−1)/2⌋} (γ^m/(2^m m!)) H_{2m}(x)

of degree 2⌊(N − 1)/2⌋ is positive for every N ≥ 1 and (at least) every 0 < γ ≤ 1. This would imply positivity of w̃_{k,n} since the Gauss–Hermite weights w_n^GH are positive. For example, with α = 1/√2, the weights (2.11) become

w̃_{k,n} = w_n^GH √(2/(β² + 1)) e^{(β²−1)(x_n^GH)²/(4β²)} Σ_{m=0}^{⌊(N−1)/2⌋} (1/(2^m m!)) ((β² − 1)/(β² + 1))^m H_{2m}(x_n^GH).

As discussed in (Fasshauer and McCourt, 2012) in the context of kernel interpolation, the parameter α acts as a global scale parameter. While in interpolation it is not entirely clear how this parameter should be selected, in quadrature it seems natural to set α = 1/√2 so that the eigenfunctions are orthonormal in L²(µ). This is the value that we use, though other values are potentially of interest since α can be used to control the spread of the nodes independently of the length-scale ℓ. In Section 3 we also see that this value leads to more natural convergence analysis.
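The positivity conjecture for the even polynomial above is easy to probe numerically. The sketch below (the function name `p_gamma` is ours) evaluates the polynomial on a grid for several N and γ; this is numerical evidence in the spirit of Section 5.2, not a proof.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def p_gamma(N, gamma, x):
    """Even polynomial sum_{m=0}^{floor((N-1)/2)} gamma^m/(2^m m!) H_{2m}(x)."""
    s = np.zeros_like(x)
    for m in range((N - 1) // 2 + 1):
        c = np.zeros(2 * m + 1); c[2 * m] = 1.0
        s += gamma ** m / (2.0 ** m * math.factorial(m)) * hermeval(x, c)
    return s

x = np.linspace(-8.0, 8.0, 2001)
```

For large |x| positivity is automatic because the leading coefficient γ^⌊(N−1)/2⌋/(2^⌊(N−1)/2⌋ ⌊(N−1)/2⌋!) is positive; the interesting region is the bulk of the measure, which the grid covers.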

Effect of the length-scale
Roughly speaking, the magnitude of the eigenvalues determines how many eigenfunctions are necessary for an accurate weight approximation. We therefore expect that the approximation (2.11) is less accurate when the length-scale ℓ is small (i.e., when ε = 1/(√2 ℓ) is large). This is confirmed by the numerical experiments in Section 5.
Consider then the case ℓ → ∞. This scenario is called the flat limit in the scattered data approximation literature, where it has been proved that the kernel interpolant associated to an isotropic kernel with increasing length-scale converges to (i) the unique polynomial interpolant of degree N − 1 to the data if the kernel is infinitely smooth (Larsson and Fornberg, 2005; Schaback, 2005; Lee et al., 2007) or (ii) a polyharmonic spline interpolant if the kernel is of finite smoothness (Lee et al., 2014). In our case, ℓ → ∞ results in ε → 0, so that β → 1 and δ → 0. If the nodes are selected as in Equation (2.10), they consequently converge to the Gauss–Hermite nodes scaled by 1/(√2 α); for α = 1/√2, x̃_n → x_n^GH. That the approximate weights converge to the Gauss–Hermite ones can be seen, for example, from Equation (2.11) by noting that only the first term in the sum is retained at the limit. Based on the aforementioned results regarding convergence of kernel interpolants to polynomial ones at the flat limit, it is to be expected that also w_{k,n} → w_n^GH as ℓ → ∞ (we do not attempt to prove this). Because the Gauss–Hermite quadrature rule is the "best" rule for polynomials and kernel interpolants converge to polynomials at the flat limit, the above observation provides another justification for the choice α = 1/√2 that we proposed in the preceding section. When it comes to node placement, the length-scale has an intuitive effect if the nodes are selected according to Equation (2.10). For small ℓ, the nodes are placed closer to the origin, where most of the measure is concentrated, as integrands are expected to converge quickly to zero as |x| → ∞, whereas for larger ℓ the nodes are more (but not unlimitedly) spread out in order to capture the behaviour of functions that potentially contribute to the integral also further away from the origin.
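The flat-limit behaviour can be observed directly: for a large length-scale the scaled nodes and the approximate weights (2.11), as reconstructed above, are close to the plain Gauss–Hermite ones (the function name `scaled_gh_rule` is ours).

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def scaled_gh_rule(N, ell, alpha=1.0 / np.sqrt(2.0)):
    """Nodes (2.10) and approximate weights (2.11), as reconstructed above."""
    eps = 1.0 / (np.sqrt(2.0) * ell)
    beta = (1.0 + (2.0 * eps / alpha) ** 2) ** 0.25
    delta2 = 0.5 * alpha ** 2 * (beta ** 2 - 1.0)
    xg, wg = hermegauss(N)
    wg = wg / np.sqrt(2.0 * np.pi)
    nodes = xg / (np.sqrt(2.0) * alpha * beta)
    r = (2.0 * alpha ** 2 * beta ** 2 - 1.0 - 2.0 * delta2) / (1.0 + 2.0 * delta2)
    s = np.zeros(N)
    for m in range((N - 1) // 2 + 1):
        c = np.zeros(2 * m + 1); c[2 * m] = 1.0
        s += r ** m / (2.0 ** m * math.factorial(m)) * hermeval(xg, c)
    return nodes, wg * np.exp(delta2 * nodes ** 2) / np.sqrt(1.0 + 2.0 * delta2) * s

xg, wg = hermegauss(10)
wg = wg / np.sqrt(2.0 * np.pi)
nodes, w = scaled_gh_rule(10, ell=50.0)   # nearly flat kernel
```

With ℓ = 50 one has β ≈ 1 and δ ≈ 0, so the rule is numerically indistinguishable from the Gauss–Hermite rule up to small corrections.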

On computational complexity
Since the kernel matrix K of the Gaussian kernel is dense, solving the kernel quadrature weights from the linear system (1.2) incurs a cubic (in the number of nodes) computational cost. In contrast, the Gauss–Hermite nodes and weights are related to the eigenvalues and eigenvectors of the tridiagonal Jacobi matrix (Gautschi, 2004, Theorem 3.1) and are solvable in quadratic time. From Equation (2.11) it is seen that computation of each approximate weight is linear in N: there are approximately (N − 1)/2 terms in the sum, and the Hermite polynomials can be evaluated on the fly using the three-term recurrence formula. That is, the computational cost of obtaining x̃_n and w̃_{k,n} for n = 1, . . ., N is quadratic in N. Because the computational cost of applying a tensor product rule does not depend on how the nodes and weights have been computed, the above discussion also applies to the rules presented in Section 4.

Convergence analysis
In this section we analyse convergence, in the reproducing kernel Hilbert space H ⊂ C^∞(R) induced by the Gaussian kernel, of quadrature rules that are exact for the Mercer eigenfunctions. First, we prove a generic result (Theorem 3.3) to this effect and then apply it to the quadrature rule with the nodes x̃_n and weights w̃_{k,n}. If Σ_{n=1}^N |w̃_{k,n}| does not grow too fast with N, we obtain exponential convergence rates.
Recall some basic facts about reproducing kernel Hilbert spaces (Berlinet and Thomas-Agnan, 2004). Crucially, the worst-case error

e(Q) := sup_{‖f‖_H ≤ 1} |Q(f) − µ(f)|

satisfies |Q(f) − µ(f)| ≤ e(Q) ‖f‖_H for any f ∈ H. This justifies calling a sequence {Q_N}_{N=1}^∞ of N-point quadrature rules convergent if e(Q_N) → 0 as N → ∞. For given nodes x_1, . . ., x_N, the weights w_k = (w_{k,1}, . . ., w_{k,N}) of the kernel quadrature rule Q_k are the unique minimisers of the worst-case error among all quadrature rules with these nodes. It follows that a rate of convergence to zero for e(Q) also applies to e(Q_k).
A number of convergence results for kernel quadrature rules on compact spaces appear in (Bezhaev, 1991; Kanagawa et al., 2017; Briol et al., 2018). When it comes to the RKHS of the Gaussian kernel, characterised in (Steinwart et al., 2006; Minh, 2010), Kuo and Woźniakowski (2012) have analysed convergence of the Gauss–Hermite quadrature rule. Unfortunately, it turns out that the Gauss–Hermite rule converges in this space if and only if ε² < 1/2. Consequently, we believe that the analysis below is the first to establish convergence, under the assumption (supported by our numerical experiments) that the sum of the |w̃_{k,n}| does not grow too fast, of an explicitly constructed sequence of quadrature rules in the RKHS of the Gaussian kernel for any value of the length-scale parameter ℓ. We begin with two simple lemmas.
Lemma 3.1. The eigenfunctions φ_n^α admit the bound

|φ_n^α(x)| ≤ K √β e^{α²x²/2}

for a constant K ≤ 1.087 and every x ∈ R.
Theorem 3.3. Let α = 1/√2. Suppose that the nodes x_1, . . ., x_N and weights w_1, . . ., w_N of an N-point quadrature rule Q_N are such that Q_N(φ_n^α) = µ(φ_n^α) for every n = 0, . . ., M_N − 1 and sup_{1≤n≤N} |x_n| ≤ 2√(M_N)/β. Then there exist constants C_1, C_2 > 0, independent of N and Q_N, and 0 < η < 1 such that

e(Q_N) ≤ (C_1 + C_2 Σ_{n=1}^N |w_n|) η^{M_N}.

Explicit forms of these constants appear in Equation (3.4).
Remark 3.4. From Lemma 3.2 we observe that the proof does not yield η < 1 (for every ℓ) if the assumption sup_{1≤n≤N} |x_n| ≤ 2√(M_N)/β on the placement of the nodes is relaxed by replacing the constant 2 on the right-hand side with C > 2.
Consider now the N-point approximate Gaussian kernel quadrature rule Q̃_{k,N}(f) = Σ_{n=1}^N w̃_{k,n} f(x̃_n), whose nodes and weights are defined in Theorem 2.3, and set α = 1/√2. The nodes x_n^GH of the N-point Gauss–Hermite rule admit the bound (Area et al., 2004) sup_{1≤n≤N} |x_n^GH| ≤ 2√N, so that sup_{1≤n≤N} |x̃_n| ≤ 2√N/β. Since the rule Q̃_{k,N} is exact for the first N eigenfunctions, M_N = N. Hence the assumption on the placement of the nodes in Theorem 3.3 holds. As our numerical experiments indicate that the weights w̃_{k,n} are positive and that their sum converges to one, it seems that the exponential convergence rate of Theorem 3.3 is valid for Q̃_{k,N} (as well as for the corresponding kernel quadrature rule Q_{k,N}) with M_N = N. Naturally, this result remains valid whenever the growth of the absolute weight sum is, for example, polynomial in N.
Another interesting case is that of the generalised Gaussian quadrature rules for the eigenfunctions. As the eigenfunctions constitute a complete Chebyshev system (Kellog, 1918; Pinkus, 1996), there exists a quadrature rule Q*_N with positive weights w*_1, . . ., w*_N that is exact for the first 2N eigenfunctions (Barrow, 1978). Appropriate control of the nodes of these quadrature rules would establish an exponential convergence result with the "double rate" M_N = 2N.

Tensor product rules
Let Q_1, . . ., Q_d be quadrature rules on R with nodes X_i = {x_{i,1}, . . ., x_{i,N_i}} and weights w_1^i, . . ., w_{N_i}^i for each i = 1, . . ., d. The tensor product rule on the Cartesian grid X = X_1 × · · · × X_d is

(Q_1 ⊗ · · · ⊗ Q_d)(f) = Σ_I w_{I(1)}^1 · · · w_{I(d)}^d f(x_{1,I(1)}, . . ., x_{d,I(d)}),    (4.1)

where the summation is over the multi-indices I ∈ N^d with 1 ≤ I(i) ≤ N_i. We equip R^d with the d-variate standard Gaussian measure µ^d. The following proposition is a special case of a standard result on exactness of tensor product rules (Oettershagen, 2017, Section 2.4).
Proposition 4.1. Consider the tensor product rule (4.1) and suppose that, for each i = 1, . . ., d, the rule Q_i is exact for the functions in a class F_i. Then the tensor product rule is exact for products f(x) = f_1(x_1) · · · f_d(x_d) with f_i ∈ F_i.

When a multivariate kernel is separable, this result can be used in constructing kernel cubature rules out of kernel quadrature rules. We consider d-dimensional separable Gaussian kernels

k_d(x, y) = Π_{i=1}^d exp(−(x_i − y_i)²/(2ℓ_i²)),

where the ℓ_i are dimension-wise length-scales. For each i = 1, . . ., d, the kernel quadrature rule Q_{k,i} with nodes X_i = {x_{i,1}, . . ., x_{i,N_i}} and weights w_{k,1}^i, . . ., w_{k,N_i}^i is, by definition, exact for the N_i kernel translates at the nodes. Consequently, the kernel cubature rule Q_k^d on the Cartesian grid X = X_1 × · · · × X_d is a tensor product of the univariate rules, with the weights being products of univariate Gaussian kernel quadrature weights, w_{k,I} = w_{k,I(1)}^1 · · · w_{k,I(d)}^d. This is the case because each kernel translate k_d(x, ·), x ∈ X, can be written as a product of univariate kernel translates. We can extend Theorem 2.3 to higher dimensions if the node set is a Cartesian product of a number of scaled Gauss–Hermite node sets. For this purpose, for each i = 1, . . ., d we use the L²(µ_{α_i})-orthonormal eigendecomposition of the Gaussian kernel k_i. The eigenfunctions, eigenvalues, and other related constants from Section 2.1 for the eigendecomposition of the ith kernel are assigned an analogous subscript.

Theorem 4.2. For i = 1, . . ., d, let x_{i,1}^GH, . . ., x_{i,N_i}^GH and w_{i,1}^GH, . . ., w_{i,N_i}^GH stand for the nodes and weights of the N_i-point Gauss–Hermite quadrature rule and define the nodes x̃_{i,n} := x_{i,n}^GH/(√2 α_i β_i). Then the tensor product quadrature rule defined by the exactness conditions for the first N_i eigenfunctions in each dimension has the weights w̃_{k,I} = w̃_{k,I(1)}^1 · · · w̃_{k,I(d)}^d, where each univariate factor is of the form (2.11), the constants α_i, β_i, and δ_i are defined as in Equation (2.3), and H_{2m} are the probabilists' Hermite polynomials (2.2).
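The tensor product construction is mechanical once the univariate rules are available. The sketch below (Python with NumPy; the function name `scaled_gh_rule` is ours, implementing (2.10)–(2.11) as reconstructed above) forms a two-dimensional grid and product weights with different dimension-wise length-scales.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def scaled_gh_rule(N, ell, alpha=1.0 / np.sqrt(2.0)):
    """Univariate nodes (2.10) and approximate weights (2.11)."""
    eps = 1.0 / (np.sqrt(2.0) * ell)
    beta = (1.0 + (2.0 * eps / alpha) ** 2) ** 0.25
    delta2 = 0.5 * alpha ** 2 * (beta ** 2 - 1.0)
    xg, wg = hermegauss(N)
    wg = wg / np.sqrt(2.0 * np.pi)
    nodes = xg / (np.sqrt(2.0) * alpha * beta)
    r = (2.0 * alpha ** 2 * beta ** 2 - 1.0 - 2.0 * delta2) / (1.0 + 2.0 * delta2)
    s = np.zeros(N)
    for m in range((N - 1) // 2 + 1):
        c = np.zeros(2 * m + 1); c[2 * m] = 1.0
        s += r ** m / (2.0 ** m * math.factorial(m)) * hermeval(xg, c)
    return nodes, wg * np.exp(delta2 * nodes ** 2) / np.sqrt(1.0 + 2.0 * delta2) * s

# dimension-wise rules with different length-scales
X1, W1 = scaled_gh_rule(6, ell=1.0)
X2, W2 = scaled_gh_rule(5, ell=0.8)
# Cartesian grid and product weights
XX, YY = np.meshgrid(X1, X2, indexing="ij")
W = np.outer(W1, W2)
```

By construction the product rule integrates separable integrands f_1(x) f_2(y) as the product of the univariate quadrature values.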
As in one dimension, the weights w̃_{k,I} are supposed to approximate w_{k,I}. Moreover, convergence rates can be obtained: tensor product analogues of Theorems 3.3 and 3.5 follow from noting that every function f : R^d → R in the RKHS H_d of k_d admits a multivariate Mercer expansion. See (Kuo et al., 2017) for similar convergence analysis of tensor product Gauss–Hermite rules in H_d.
Suppose that, for each i = 1, . . ., d, the nodes x_{i,1}, . . ., x_{i,N_i} and weights w_{i,1}, . . ., w_{i,N_i} of an N_i-point quadrature rule satisfy the assumptions of Theorem 3.3, and define the tensor product of these rules. Then the worst-case error of the tensor product rule admits an exponential bound in M = min(M_{N_1}, . . ., M_{N_d}). Explicit forms of the constants appear in Equation (4.10).
Proof. The proof is largely analogous to that of Theorem 3.3. Expanding the integrand in the multivariate Mercer eigenfunctions, we obtain a remainder over the multi-indices of eigenfunctions that are not integrated exactly. Consequently, the Cauchy–Schwarz inequality yields a bound in terms of the eigenvalues and the absolute weight sums of the univariate rules. Since µ(φ_n) ≤ 1 for any n ≥ 0, the integration error for each such eigenfunction φ_I admits the same kind of bound as in the univariate case. This completes the proof.
A multivariate version of Theorem 3.5 is obvious.


Numerical experiments

The experiments in this section support the following conclusions:

1. The weight approximation is quite accurate, its accuracy increasing with the number of nodes and the length-scale, as predicted in Section 2.5.
2. The weights w_{k,n} and w̃_{k,n} are positive for every N and every n = 1, . . ., N, and their sums converge to one exponentially in N.
3. The quadrature rule Q̃_k converges exponentially, as implied by Theorem 3.5 and empirical observations on the behaviour of its weights.
4. In numerical integration of specific functions, the approximate kernel quadrature rule Q̃_k can achieve integration accuracy almost indistinguishable from that of the corresponding Gaussian kernel quadrature rule Q_k and superior to some more traditional alternatives.
This suggests that Equation (2.11) can be used as an accurate and numerically stable surrogate for computing the Gaussian kernel quadrature weights when the naive approach based on solving the linear system (1.2) is precluded by ill-conditioning of the kernel matrix. Furthermore, the choice (2.10) of the nodes by scaling the Gauss–Hermite nodes appears to yield an exponentially convergent kernel quadrature rule that has positive weights.

Accuracy of the weight approximation
We begin by assessing the quality of the weight approximation w̃_k ≈ w_k. Figure 5.1 depicts the results for a number of different length-scales in terms of the norm of the relative weight error

‖w̃_k − w_k‖ / ‖w_k‖.    (5.1)

As the kernel matrix quickly becomes ill-conditioned, computation of the kernel quadrature weights w_k is challenging, particularly when the length-scale is large. To partially mitigate the problem we replaced the kernel quadrature weights with their QR decomposition approximations w_k^M derived in Section 2.2. The truncation length M was selected based on machine precision; see (Fasshauer and McCourt, 2012, Section 4.2.2) for details. Yet even this does not work for large enough N. Because kernel quadrature rules on symmetric point sets have symmetric weights (Karvonen and Särkkä, 2018; Oettershagen, 2017, Section 5.2.4), breakdown in the symmetry of the computed kernel quadrature weights was used as a heuristic proxy for the emergence of numerical instability: for each length-scale, relative errors are presented in Figure 5.1 until the first N such that |1 − w_{k,N}/w_{k,1}| > 10⁻⁶, the ordering of the nodes being from smallest to largest so that w_{k,N} = w_{k,1} in the absence of numerical errors.
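A small-scale version of this experiment (our naming; the approximate weights follow (2.10)–(2.11) as reconstructed above, and the exact weights are solved directly since the kernel matrix is still well enough conditioned at this size) computes the relative weight error for N = 8 and ℓ = 1.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def scaled_gh_rule(N, ell, alpha=1.0 / np.sqrt(2.0)):
    """Univariate nodes (2.10) and approximate weights (2.11)."""
    eps = 1.0 / (np.sqrt(2.0) * ell)
    beta = (1.0 + (2.0 * eps / alpha) ** 2) ** 0.25
    delta2 = 0.5 * alpha ** 2 * (beta ** 2 - 1.0)
    xg, wg = hermegauss(N)
    wg = wg / np.sqrt(2.0 * np.pi)
    nodes = xg / (np.sqrt(2.0) * alpha * beta)
    r = (2.0 * alpha ** 2 * beta ** 2 - 1.0 - 2.0 * delta2) / (1.0 + 2.0 * delta2)
    s = np.zeros(N)
    for m in range((N - 1) // 2 + 1):
        c = np.zeros(2 * m + 1); c[2 * m] = 1.0
        s += r ** m / (2.0 ** m * math.factorial(m)) * hermeval(xg, c)
    return nodes, wg * np.exp(delta2 * nodes ** 2) / np.sqrt(1.0 + 2.0 * delta2) * s

N, ell = 8, 1.0
nodes, w_tilde = scaled_gh_rule(N, ell)
K = np.exp(-(nodes[:, None] - nodes[None, :]) ** 2 / (2.0 * ell ** 2))
k_mu = ell / np.sqrt(ell ** 2 + 1.0) * np.exp(-nodes ** 2 / (2.0 * (ell ** 2 + 1.0)))
w_exact = np.linalg.solve(K, k_mu)
rel_err = np.linalg.norm(w_tilde - w_exact) / np.linalg.norm(w_exact)
```

The exact weights inherit the symmetry of the node set, which is the property used above as a proxy for numerical stability.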

Worst-case error
The worst-case error e(Q) of a quadrature rule Q(f) = Σ_{n=1}^N w_n f(x_n) in the RKHS induced by the Gaussian kernel is explicitly computable; see Equation (5.2). Figure 5.3 compares the worst-case errors in the RKHS of the Gaussian kernel for six different length-scales of (i) the classical Gauss–Hermite quadrature rule, (ii) the quadrature rule Q̃_k(f) = Σ_{n=1}^N w̃_{k,n} f(x̃_n) of Theorem 2.3, and (iii) the kernel quadrature rule with its nodes placed uniformly between the largest and smallest of the x̃_n. We observe that Q̃_k is, for all length-scales, the fastest of these rules to converge (the kernel quadrature rule at the nodes x̃_n yields worst-case errors practically indistinguishable from those of Q̃_k and is therefore not included). It also becomes apparent that the convergence rates derived in Theorems 3.3 and 3.5 for Q̃_k are rather conservative. For example, for ℓ = 0.2 and ℓ = 1 the empirical rates are e(Q̃_k) = O(e^{−cN}) with c ≈ 0.21 and c ≈ 0.98, respectively, whereas Equation (3.4) yields the theoretical values c ≈ 0.00033 and c ≈ 0.054.
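The worst-case error formula (5.2) can be evaluated in a few lines (our naming; the closed forms of k_µ and µ(k_µ) for the Gaussian kernel and measure are standard Gaussian integrals). Since the kernel quadrature weights minimise the worst-case error for fixed nodes, they can never do worse than the Gauss–Hermite weights at the same nodes.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

ell = 1.0
def k(a, b):
    return np.exp(-(a - b) ** 2 / (2.0 * ell ** 2))
def k_mu(a):
    return ell / np.sqrt(ell ** 2 + 1.0) * np.exp(-a ** 2 / (2.0 * (ell ** 2 + 1.0)))
mu_k_mu = ell / np.sqrt(ell ** 2 + 2.0)      # double integral of k w.r.t. mu

def wce2(x, w):
    """Squared worst-case error (5.2) of the rule with nodes x and weights w."""
    return mu_k_mu + w @ k(x[:, None], x[None, :]) @ w - 2.0 * (w @ k_mu(x))

N = 8
xg, wg = hermegauss(N)
wg = wg / np.sqrt(2.0 * np.pi)               # Gauss-Hermite weights
# kernel quadrature weights at the same nodes minimise the worst-case error
K = k(xg[:, None], xg[None, :])
w_opt = np.linalg.solve(K, k_mu(xg))
```

Comparing `wce2(xg, w_opt)` with `wce2(xg, wg)` reproduces, in miniature, the ordering seen in Figure 5.3.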

Numerical integration
The test function (5.3) has a closed-form Gaussian integral when the m_i are even (when they are not, the integral is obviously zero). Figure 5.4 shows the integration error of the three methods (or, in higher dimensions, their tensor product versions) used in Section 5.3 and of the kernel quadrature rule based on the nodes x̃_n for (i) d = 1, m_1 = 6, c_1 = 3/2 and (ii) d = 3, m_1 = 6, m_2 = 4, m_3 = 2, c_1 = 3/2, c_2 = 3, c_3 = 1/2. As expected, there is little difference between Q_k and Q̃_k.

Figure 5.2 shows the minimal weights min_{n=1,...,N} w̃_{k,n} and the convergence to one of Σ_{n=1}^N |w̃_{k,n}| for a number of different length-scales. These results provide strong numerical evidence for the conjecture that the w̃_{k,n} remain positive and that the assumptions of Theorem 3.5 hold. Exact weights, as long as they can be reliably computed (see Figure 5.1), exhibit behaviour practically indistinguishable from that of the approximate ones and are therefore not depicted separately in Figure 5.2.

Figure 5.2: Minimal weights and convergence to one of the sum of absolute values of the weights for six different length-scales.

The worst-case error of a quadrature rule Q(f) = Σ_{n=1}^N w_n f(x_n) in a reproducing kernel Hilbert space induced by the kernel k is explicitly computable:

e(Q)² = µ(k_µ) + Σ_{n,m=1}^N w_n w_m k(x_n, x_m) − 2 Σ_{n=1}^N w_n k_µ(x_n).    (5.2)
Figure 5.4: Error in computing the Gaussian integral of the function (5.3) in dimensions one and three using the quadrature rule of Theorem 2.3 (SGHKQ), the corresponding kernel quadrature rule (KQ), the kernel quadrature rule with nodes placed uniformly between the largest and smallest of the x̃_n (UKQ), and the Gauss–Hermite rule (GH). Tensor product versions of these rules are used in dimension three.
This section contains numerical experiments on the properties and accuracy of the approximate Gaussian kernel quadrature weights defined in Theorems 2.3 and 4.2. The experiments have been implemented in MATLAB and are available at https://github.com/tskarvone/gauss-mercer. The value α = 1/√2 is used throughout.