Gaussian kernel quadrature at scaled Gauss–Hermite nodes
Abstract
This article derives an accurate, explicit, and numerically stable approximation to the kernel quadrature weights in one dimension and on tensor product grids when the kernel and integration measure are Gaussian. The approximation is based on the use of scaled Gauss–Hermite nodes and truncation of the Mercer eigendecomposition of the Gaussian kernel. Numerical evidence indicates that both the kernel quadrature weights and the approximate weights at these nodes are positive. An exponential rate of convergence for functions in the reproducing kernel Hilbert space induced by the Gaussian kernel is proved under an assumption on the growth of the sum of absolute values of the approximate weights.
Keywords
Numerical integration · Kernel quadrature · Gaussian quadrature · Mercer eigendecomposition
Mathematics Subject Classification
45C05 · 46E22 · 47B32 · 65D30 · 65D32
1 Introduction
Recently, Fasshauer and McCourt [12] have developed a method to circumvent the well-known problem that interpolation with the Gaussian kernel often becomes numerically unstable—in particular when \(\ell \) is large—because the condition number of K tends to grow at an exponential rate [33]. They do this by truncating the Mercer eigendecomposition of the Gaussian kernel after M terms and replacing the interpolation basis \(\{k(x_n,\cdot )\}_{n=1}^N\) with the first M eigenfunctions. In this article we show that applying this method with \(M=N\) to kernel quadrature yields, when the nodes are selected by a suitable and fairly natural scaling of the nodes of the classical Gauss–Hermite quadrature rule, an accurate, explicit, and numerically stable approximation to the Gaussian kernel quadrature weights. Moreover, the proposed nodes appear to be a good and natural choice for the Gaussian kernel quadrature.
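To make the truncation idea concrete, the following minimal sketch forms a rank-M approximation of a Gaussian kernel matrix numerically via the Nyström method. This is a generic stand-in for illustration only, not the closed-form Mercer eigenpairs employed in [12], and the convention \(k(x,y) = \exp(-(x-y)^2/(2\ell^2))\) is our assumption.

```python
import numpy as np

# Rank-M Nystrom truncation of a Gaussian kernel matrix on samples from
# the standard Gaussian measure (a numerical stand-in for Mercer truncation).
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
ell = 1.0
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell**2))

evals, evecs = np.linalg.eigh(K / len(x))   # Nystrom eigenvalue estimates
order = np.argsort(evals)[::-1]             # sort eigenpairs in descending order
M = 20
lam, U = evals[order[:M]], evecs[:, order[:M]]
K_M = len(x) * (U * lam) @ U.T              # rank-M truncation of K
print(np.linalg.norm(K - K_M) / np.linalg.norm(K))  # small relative error
```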
We are not aware of any work on efficient selection of “good” nodes in the setting of this article. The Gauss–Hermite nodes [29, Section 3] and random points [31] are often used, but one should clearly be able to do better, whereas computing the optimal nodes [28, Section 5.2] is demanding. As such, given the desirable properties of the resulting kernel quadrature rules listed below, the nodes \(\tilde{x}_n\) appear to be an excellent heuristic choice. These nodes also behave naturally as \(\ell \rightarrow \infty \); see Sect. 2.5.
Numerical experiments in Sect. 5.3 suggest that both \(w_{k,n}\) (for the nodes \(\tilde{x}_n\)) and \({\widetilde{w}}_{k,n}\) are positive for every \(N \in \mathbb {N}\) and every \(n = 1,\ldots , N\). Besides the optimal nodes, whose weights are guaranteed to be positive when the Gaussian kernel is used [28, 32], we are not aware of any other node configurations known to give rise to positive weights.
Numerical experiments in Sects. 5.1 and 5.3 demonstrate that computation of the approximate weights is numerically stable. Furthermore, construction of these weights incurs only a quadratic computational cost in the number of points, as opposed to the cubic cost of solving \(w_k\) from Eq. (1.2); see Sect. 2.6 for more details. Note that to obtain a numerically stable method it is not necessary to use the nodes \(\tilde{x}_n\), as the method of [12] can be applied in a straightforward manner to any nodes. In doing so, however, one forgoes a closed-form expression and has to resort to the QR decomposition. The cubic-cost direct solve that these approximations avoid is sketched below.
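For concreteness, here is a minimal sketch of the cubic-cost baseline: solving the exact kernel quadrature weights from the linear system (1.2). The conventions \(k(x,y) = \exp(-(x-y)^2/(2\ell^2))\) and \(\mu = \mathrm{N}(0,1)\) are assumptions under which the kernel mean \(k_\mu(x) = \int k(x,y)\,\mathrm{d}\mu(y)\) has the closed form used below; the function names are ours.

```python
import numpy as np

def gaussian_kernel(x, y, ell):
    # Pairwise kernel matrix for k(x, y) = exp(-(x - y)^2 / (2 ell^2))
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell**2))

def kernel_mean(x, ell):
    # Closed form of int k(x, y) dN(y; 0, 1) under the assumed conventions
    return ell / np.sqrt(ell**2 + 1) * np.exp(-x**2 / (2 * (ell**2 + 1)))

def exact_kq_weights(x, ell):
    # Cubic-cost direct solve of the linear system (1.2): K w_k = k_mu
    K = gaussian_kernel(x, x, ell)
    return np.linalg.solve(K, kernel_mean(x, ell))
```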
In Sects. 3 and 4 we show that slow enough growth with N of \(\sum _{n=1}^N \left|\widetilde{w}_{k,n}\right|\) (numerical evidence indicates that this sum converges to one) guarantees that the approximate Gaussian kernel quadrature rule—as well as the corresponding tensor product version—converges at an exponential rate for functions in the RKHS of the Gaussian kernel. The convergence analysis is based on bounding the magnitude of the remainder of the Mercer expansion and on rather explicit bounds on Hermite polynomials and their roots. The magnitude of the nodes \(\tilde{x}_n\) is crucial for the analysis; if they were spread out further, the proofs would not work as such.
We find the connection to the Gauss–Hermite weights and nodes provided by the closed-form expression for \({\widetilde{w}}_k\) intriguing and hope that it can at some point be used to furnish, for example, a rigorous proof of positivity of the approximate weights.
2 Approximate weights
This section contains the main results of this article. The main contribution is the derivation, in Theorem 2.2, of the weights \({\widetilde{w}}_k\) that can be used to approximate the kernel quadrature weights. We also discuss positivity of these weights, the effect the kernel length-scale \(\ell \) is expected to have on the quality of the approximation, and the computational complexity.
2.1 Eigendecomposition of the Gaussian kernel
Lemma 2.1
Proof
2.2 Approximation via QR decomposition
Unfortunately, the QR decomposition does not provide an attractive closed-form solution for the approximate weights \({\widetilde{w}}_k^M\) for general M. Setting \(M = N\) turns \(\varPhi \) into a square matrix, enabling its direct inversion and an explicit connection to the classical Gauss–Hermite quadrature. The rest of the article is concerned with this special case.
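In this square case the weights follow from a single linear solve. A minimal sketch, assuming \([\varPhi]_{nm} = \varphi_{m-1}(x_n)\) collects the first N eigenfunction values at the nodes and phi_mu their means \(\mu(\varphi_{m-1})\) (the setup and names here are illustrative):

```python
import numpy as np

def approx_weights_square(Phi, phi_mu):
    # With M = N the exactness conditions Phi.T @ w = phi_mu determine the
    # weights through a single direct solve; no QR-based least squares needed.
    return np.linalg.solve(Phi.T, phi_mu)
```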
2.3 Gauss–Hermite quadrature
We make use of the following theorem, a one-dimensional special case of a more general result due to Mysovskikh [27]; see also [8, Section 7]. The result also follows from the Christoffel–Darboux formula (2.12).
Theorem 2.1
Let \(\nu \) be a measure on \(\mathbb {R}\). Suppose that \(x_1,\ldots ,x_N\) and \(w_1,\ldots ,w_N\) are the nodes and weights of the unique N-point Gaussian quadrature rule for \(\nu \). Let \(p_0,\ldots ,p_{N-1}\) be the \(L^2(\nu )\)-orthonormal polynomials. Then the matrix \([P]_{ij} \,{{:}{=}}\, \sum _{n=0}^{N-1} p_n(x_i) p_n(x_j)\) is diagonal and has the diagonal elements \([P]_{ii} = 1/w_i\).
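The identity is easy to verify numerically. A sketch using the Gauss–Hermite rule for the weight function \(e^{-x^2/2}\), for which the \(L^2(\nu)\)-orthonormal polynomials are \(\mathrm{He}_n(x)/\sqrt{n!\sqrt{2\pi}}\):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

N = 8
x, w = hermegauss(N)    # Gauss rule for the weight function exp(-x^2 / 2)
V = np.zeros((N, N))    # V[i, n] = p_n(x_i), the orthonormal polynomials
for n in range(N):
    coef = np.zeros(n + 1)
    coef[n] = 1.0       # select He_n and normalise by its L^2(nu) norm
    V[:, n] = hermeval(x, coef) / math.sqrt(math.factorial(n) * math.sqrt(2 * math.pi))
P = V @ V.T             # [P]_{ij} = sum_{n=0}^{N-1} p_n(x_i) p_n(x_j)
print(np.allclose(P, np.diag(1.0 / w)))  # True: diagonal with entries 1/w_i
```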
2.4 Approximate weights at scaled Gauss–Hermite nodes
Theorem 2.2
Since the weights \(\widetilde{w}_{k}\) are obtained by truncating the Mercer expansion of k, it is to be expected that \(\widetilde{w}_k \approx w_k\). This motivates calling these weights the approximate Gaussian kernel quadrature weights. We do not provide theoretical results on the quality of this approximation, but the numerical experiments in Sect. 5.2 indicate that the approximation is accurate and that its accuracy increases with N. See [12] for related experiments.
2.5 Effect of the length-scale
When it comes to node placement, the length-scale has an intuitive effect if the nodes are selected according to Eq. (2.10). For small \(\ell \), the nodes are placed close to the origin, where most of the measure is concentrated, since the integrands are expected to decay quickly as \(\left|x\right| \rightarrow \infty \); for larger \(\ell \), the nodes are more—but not unboundedly—spread out in order to capture the behaviour of functions that may contribute to the integral also further away from the origin.
2.6 On computational complexity
Because the Gauss–Hermite nodes and weights are related to the eigenvalues and eigenvectors of the tridiagonal Jacobi matrix [13, Theorem 3.1], they—and the points \({\tilde{x}}_n\)—can be computed in quadratic time (in practice, these nodes and weights can often be tabulated beforehand). From Eq. (2.11) it is seen that computation of each approximate weight is linear in N: there are approximately \((N-1)/2\) terms in the sum and the Hermite polynomials can be evaluated on the fly using the three-term recurrence \(\mathrm {H}_{n+1}(x) = x \mathrm {H}_n(x) - n \mathrm {H}_{n-1}(x)\). That is, the computational cost of obtaining \({\tilde{x}}_n\) and \({\widetilde{w}}_{k,n}\) for \(n=1,\ldots ,N\) is quadratic in N; the recurrence-based evaluation pattern is sketched below. Since the kernel matrix K of the Gaussian kernel is dense, solving the exact kernel quadrature weights from the linear system (1.2) for the points \({\tilde{x}}_n\) incurs a more demanding cubic computational cost. Because the computational cost of a tensor product rule does not depend on the nodes and weights once these have been computed, the above discussion also applies to the rules presented in Sect. 4.
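A minimal sketch of this evaluation pattern (the function name is ours): tabulating \(\mathrm{H}_0,\ldots,\mathrm{H}_{N-1}\) at all N nodes via the recurrence costs \(O(N^2)\) operations in total.

```python
import numpy as np

def hermite_table(x, N):
    # H[n, i] = H_n(x_i) via the recurrence H_{n+1}(x) = x H_n(x) - n H_{n-1}(x);
    # N rows, each costing O(len(x)) operations, so O(N^2) for N nodes.
    x = np.asarray(x, dtype=float)
    H = np.zeros((N, len(x)))
    H[0] = 1.0
    if N > 1:
        H[1] = x
    for n in range(1, N - 1):
        H[n + 1] = x * H[n] - n * H[n - 1]
    return H
```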
3 Convergence analysis
In this section we analyse the convergence, in the reproducing kernel Hilbert space \(\mathscr {H}\subset C^\infty (\mathbb {R})\) induced by the Gaussian kernel, of quadrature rules that are exact for the Mercer eigenfunctions. First, we prove a generic result (Theorem 3.1) to this effect and then apply it to the quadrature rule with the nodes \(\tilde{x}_n\) and weights \({\widetilde{w}}_{k,n}\). If \(\sum _{n=1}^N \left|{\widetilde{w}}_{k,n}\right|\) does not grow too fast with N, we obtain exponential convergence rates.
A number of convergence results for kernel quadrature rules on compact spaces appear in [5, 7, 14]. When it comes to the RKHS of the Gaussian kernel, characterised in [25, 36], Kuo and Woźniakowski [19] have analysed convergence of the Gauss–Hermite quadrature rule. Unfortunately, it turns out that the Gauss–Hermite rule converges in this space if and only if \(\varepsilon ^2 < 1/2\). Consequently, we believe that the analysis below is the first to establish convergence, under the assumption (supported by our numerical experiments) that the sum of the \(\left|{\widetilde{w}}_{k,n}\right|\) does not grow too fast, of an explicitly constructed sequence of quadrature rules in the RKHS of the Gaussian kernel for any value of the length-scale parameter. We begin with two simple lemmas.
Lemma 3.1
Proof
Lemma 3.2
Proof
Theorem 3.1
- 1.
\(\sum _{n=1}^N \left|w_{n}\right| \le W_N\) for some \(W_N \ge 0\);
- 2.
\(Q_N(\varphi _n^\alpha ) = \mu (\varphi _n^\alpha )\) for each \(n = 0,\ldots ,M_N-1\) for some \(M_N \ge 1\);
- 3.
\(\sup _{1\le n \le N}\left|x_{n}\right| \le 2 \sqrt{M_N} / \beta \).
Proof
Remark 3.1
From Lemma 3.2 we observe that the proof does not yield \(\eta < 1\) (for every \(\ell \)) if the assumption \(\sup _{1\le n \le N}\left|x_{n}\right| \le 2 \sqrt{M_N} / \beta \) on placement of the nodes is relaxed by replacing the constant 2 on the right-hand side with \(C > 2\).
Theorem 3.2
Another interesting case is that of the generalised Gaussian quadrature rules for the eigenfunctions (see footnote 4). As the eigenfunctions constitute a complete Chebyshev system [17, 30], there exists a quadrature rule \(Q^*_N\) with positive weights \(w_1^*,\ldots ,w_N^*\) such that \(Q_N^*(\varphi _n) = \mu (\varphi _n)\) for every \(n=0,\ldots ,2N-1\) [3]. Appropriate control of the nodes of these quadrature rules would establish an exponential convergence result with the “double rate” \(M_N = 2N\).
4 Tensor product rules
Proposition 4.1
Theorem 4.1
Theorem 4.2
- 1.
\(\sup _{1\le i \le d} \sum _{n=1}^{N_i} \left|w_{n}^i\right| \le W_{\mathscr {N}}\) for some \(W_{\mathscr {N}} \ge 1\);
- 2.
\(Q_{i,N_i}(\varphi _n^\alpha ) = \mu (\varphi _n^\alpha )\) for each \(n = 0,\ldots ,M_{N_i}-1\) and \(i=1,\ldots ,d\) for some \(M_{N_i} \ge 1\);
- 3.
\(\sup _{1\le n \le {N_i}}\left|x_{i,n}\right| \le 2 \sqrt{M_{N_i}} / \beta \) for each \(i=1,\ldots ,d\).
Proof
A multivariate version of Theorem 3.2 is obvious.
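For reference, a minimal sketch of the standard construction of a d-dimensional tensor product rule from one-dimensional rules (the function name is ours):

```python
import numpy as np
from itertools import product

def tensor_product_rule(nodes_1d, weights_1d):
    # nodes_1d, weights_1d: lists of d one-dimensional node/weight arrays.
    # The product rule has N_1 * ... * N_d nodes; each weight is the product
    # of the corresponding one-dimensional weights.
    nodes = np.array(list(product(*nodes_1d)))
    weights = np.array([np.prod(ws) for ws in product(*weights_1d)])
    return nodes, weights
```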
5 Numerical experiments
- 1.
Computation of the approximate weights in Eq. (2.11) is numerically stable.
- 2.
The weight approximation is quite accurate, its accuracy increasing with the number of nodes and the length-scale, as predicted in Sect. 2.5.
- 3.
The weights \(w_{k,n}\) and \({\widetilde{w}}_{k,n}\) are positive for every N and \(n=1,\ldots ,N\) and their sums converge to one exponentially in N.
- 4.
The quadrature rule \({\widetilde{Q}}_{k}\) converges exponentially, as implied by Theorem 3.2 and empirical observations on the behaviour of its weights.
- 5.
In numerical integration of specific functions, the approximate kernel quadrature rule \({\widetilde{Q}}_{k}\) can achieve integration accuracy almost indistinguishable from that of the corresponding Gaussian kernel quadrature rule \(Q_{k}\) and superior to some more traditional alternatives.
5.1 Numerical stability and distribution of weights
Absolute values of the kernel quadrature weights, computed directly from the linear system (1.2), and of the approximate weights (2.11) for \(N = 99\), the nodes \({\tilde{x}}_n\), and three different length-scales. Red indicates those \(w_{k,n}\) that are negative. The nodes are in ascending order, so by symmetry it suffices to display the weights only for \(n=1,\ldots ,50\) (in fact, the \(w_{k,n}\) are not necessarily numerically symmetric; see Sect. 5.2). The Gauss–Hermite nodes and weights were computed using the Golub–Welsch algorithm [13, Section 3.1.1.1] and MATLAB’s variable-precision arithmetic. Equation (2.11) did not present any numerical issues: the sum, which can contain both positive and negative terms, was always dominated by the positive terms, and all of its terms were of reasonable magnitude.
5.2 Accuracy of the weight approximation
Relative weight approximation error (5.1) for different length-scales
5.3 Properties of the weights
Minimal weights and convergence to one of the sum of absolute values of the weights for six different length-scales
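The kind of check behind these observations can be sketched as follows, reusing exact_kq_weights from the sketch in Sect. 1; the unscaled Gauss–Hermite nodes below are only a stand-in for the scaled nodes \({\tilde{x}}_n\) of Eq. (2.10).

```python
from numpy.polynomial.hermite_e import hermegauss

ell = 1.0
for N in (5, 10, 20):
    x, _ = hermegauss(N)            # placeholder nodes; the paper uses a
                                    # scaled version of these (Eq. (2.10))
    w = exact_kq_weights(x, ell)    # from the sketch in Sect. 1
    print(N, w.min(), abs(w.sum() - 1.0))  # positivity and sum -> 1
```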
5.4 Worst-case error
Worst-case errors (5.2) in the Gaussian RKHS as functions of the number of nodes of the quadrature rule of Theorem 2.2 (SGHKQ), the kernel quadrature rule with nodes placed uniformly between the largest and smallest of \({\tilde{x}}_n\) (UKQ), and the Gauss–Hermite rule (GH). WCEs are displayed until the square root of floating-point relative accuracy (\(\approx 1.4901 \times 10^{-8}\)) is reached
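For reference, the worst-case error of a rule with nodes \(x_n\) and weights \(w_n\) in the RKHS of k has the standard closed form \(e(Q)^2 = \iint k \,\mathrm{d}\mu \,\mathrm{d}\mu - 2\sum_{n} w_n k_\mu(x_n) + \sum_{n,m} w_n w_m k(x_n, x_m)\). A sketch under the same assumed conventions as in Sect. 1, for which \(\iint k\,\mathrm{d}\mu\,\mathrm{d}\mu = \ell/\sqrt{\ell^2+2}\):

```python
import numpy as np

def worst_case_error(x, w, ell):
    # e(Q)^2 = int int k dmu dmu - 2 sum_n w_n k_mu(x_n)
    #          + sum_{n,m} w_n w_m k(x_n, x_m);
    # gaussian_kernel and kernel_mean are from the sketch in Sect. 1.
    K = gaussian_kernel(x, x, ell)
    kmu = kernel_mean(x, ell)
    kmumu = ell / np.sqrt(ell**2 + 2)
    return np.sqrt(kmumu - 2.0 * (w @ kmu) + w @ K @ w)
```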
5.5 Numerical integration
Error in computing the Gaussian integral of the function (5.3) in dimensions one and three using the quadrature rule of Theorem 2.2 (SGHKQ), the corresponding kernel quadrature rule (KQ), the kernel quadrature rule with nodes placed uniformly between the largest and smallest of \({\tilde{x}}_n\) (UKQ), and the Gauss–Hermite rule (GH). Tensor product versions of these rules are used in dimension three
Footnotes
- 1.
Low-rank approximations (i.e., \(M < N\)) are also possible [12, Section 6.1].
- 2.
- 3.
In particular, the factor \(n^{-1/6}\) could be added on the right-hand side. This would make little difference in the convergence analysis of Theorem 3.1.
- 4.
Note that the cited results are for kernels and functions on compact intervals. However, generalisations for the whole real line are possible [15, Chapter VI].
Acknowledgements
Open access funding provided by Aalto University.
References
- 1. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. United States Department of Commerce, National Bureau of Standards (1964)
- 2. Area, I., Dimitrov, D.K., Godoy, E., Ronveaux, A.: Zeros of Gegenbauer and Hermite polynomials and connection coefficients. Math. Comput. 73(248), 1937–1951 (2004)
- 3. Barrow, D.L.: On multiple node Gaussian quadrature formulae. Math. Comput. 32(142), 431–439 (1978)
- 4. Berlinet, A., Thomas-Agnan, C.: Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer, Berlin (2004)
- 5. Bezhaev, A.Yu.: Cubature formulae on scattered meshes. Russ. J. Numer. Anal. Math. Model. 6(2), 95–106 (1991)
- 6. Bonan, S.S., Clark, D.S.: Estimates of the Hermite and the Freud polynomials. J. Approx. Theory 63(2), 210–224 (1990)
- 7. Briol, F.-X., Oates, C.J., Girolami, M., Osborne, M.A., Sejdinovic, D.: Probabilistic integration: a role in statistical computation? Stat. Sci. 34(1), 1–22 (2019)
- 8. Cools, R.: Constructing cubature formulae: the science behind the art. Acta Numer. 6, 1–54 (1997)
- 9. Driscoll, T.A., Fornberg, B.: Interpolation in the limit of increasingly flat radial basis functions. Comput. Math. Appl. 43(3–5), 413–422 (2002)
- 10. Erdélyi, A.: Higher Transcendental Functions, vol. 2. McGraw-Hill, New York (1953)
- 11. Fasshauer, G., McCourt, M.: Kernel-Based Approximation Methods Using MATLAB. Number 19 in Interdisciplinary Mathematical Sciences. World Scientific Publishing, Singapore (2015)
- 12. Fasshauer, G.E., McCourt, M.J.: Stable evaluation of Gaussian radial basis function interpolants. SIAM J. Sci. Comput. 34(2), A737–A762 (2012)
- 13. Gautschi, W.: Orthogonal Polynomials: Computation and Approximation. Numerical Mathematics and Scientific Computation. Oxford University Press, Oxford (2004)
- 14. Kanagawa, M., Sriperumbudur, B.K., Fukumizu, K.: Convergence analysis of deterministic kernel-based quadrature rules in misspecified settings. Found. Comput. Math. (2019). https://doi.org/10.1007/s10208-018-09407-7
- 15. Karlin, S., Studden, W.J.: Tchebycheff Systems: With Applications in Analysis and Statistics. Interscience Publishers, New York (1966)
- 16. Karvonen, T., Särkkä, S.: Fully symmetric kernel quadrature. SIAM J. Sci. Comput. 40(2), A697–A720 (2018)
- 17. Kellogg, O.D.: Orthogonal function sets arising from integral equations. Am. J. Math. 40(2), 145–154 (1918)
- 18. Kuo, F.Y., Sloan, I.H., Woźniakowski, H.: Multivariate integration for analytic functions with Gaussian kernels. Math. Comput. 86, 829–853 (2017)
- 19. Kuo, F.Y., Woźniakowski, H.: Gauss–Hermite quadratures for functions from Hilbert spaces with Gaussian reproducing kernels. BIT Numer. Math. 52(2), 425–436 (2012)
- 20. Larkin, F.M.: Optimal approximation in Hilbert spaces with reproducing kernel functions. Math. Comput. 24(112), 911–921 (1970)
- 21. Larkin, F.M.: Gaussian measure in Hilbert space and applications in numerical analysis. Rocky Mt. J. Math. 2(3), 379–422 (1972)
- 22. Larsson, E., Fornberg, B.: Theoretical and computational aspects of multivariate interpolation with increasingly flat radial basis functions. Comput. Math. Appl. 49(1), 103–130 (2005)
- 23. Lee, Y.J., Micchelli, C.A., Yoon, J.: On convergence of flat multivariate interpolation by translation kernels with finite smoothness. Constr. Approx. 40(1), 37–60 (2014)
- 24. Lee, Y.J., Yoon, G.J., Yoon, J.: Convergence of increasingly flat radial basis interpolants to polynomial interpolants. SIAM J. Math. Anal. 39(2), 537–553 (2007)
- 25. Minh, H.Q.: Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory. Constr. Approx. 32(2), 307–338 (2010)
- 26. Minka, T.: Deriving quadrature rules from Gaussian processes. Technical report, Statistics Department, Carnegie Mellon University (2000)
- 27. Mysovskikh, I.P.: On the construction of cubature formulas with fewest nodes. Sov. Math. Dokl. 9, 277–280 (1968)
- 28. Oettershagen, J.: Construction of Optimal Cubature Algorithms with Applications to Econometrics and Uncertainty Quantification. PhD thesis, Institut für Numerische Simulation, Universität Bonn (2017)
- 29. O’Hagan, A.: Bayes–Hermite quadrature. J. Stat. Plan. Inference 29(3), 245–260 (1991)
- 30. Pinkus, A.: Spectral properties of totally positive kernels and matrices. In: Gasca, M., Micchelli, C.A. (eds.) Total Positivity and Its Applications, pp. 477–511. Springer (1996)
- 31. Rasmussen, C.E., Ghahramani, Z.: Bayesian Monte Carlo. In: Becker, S., Thrun, S., Obermayer, K. (eds.) Advances in Neural Information Processing Systems, vol. 15, pp. 505–512 (2002)
- 32. Richter-Dyn, N.: Properties of minimal integration rules II. SIAM J. Numer. Anal. 8(3), 497–508 (1971)
- 33. Schaback, R.: Error estimates and condition numbers for radial basis function interpolation. Adv. Comput. Math. 3(3), 251–264 (1995)
- 34. Schaback, R.: Multivariate interpolation by polynomials and radial basis functions. Constr. Approx. 21(3), 293–317 (2005)
- 35. Sommariva, A., Vianello, M.: Numerical cubature on scattered data by radial basis functions. Computing 76(3–4), 295–310 (2006)
- 36. Steinwart, I., Hush, D., Scovel, C.: An explicit description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels. IEEE Trans. Inf. Theory 52(10), 4635–4643 (2006)
- 37. Steinwart, I., Scovel, C.: Mercer’s theorem on general domains: on the interaction between measures, kernels, and RKHSs. Constr. Approx. 35(3), 363–417 (2012)
- 38. Sun, H.: Mercer theorem for RKHS on noncompact sets. J. Complex. 21(3), 337–349 (2005)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.