Abstract
We provide a general discussion of Smolyak’s algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak’s work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak’s algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.
References
I. Babuška, R. Tempone, G.E. Zouraris, Galerkin finite element approximations of stochastic elliptic partial differential equations. SIAM J. Numer. Anal. 42(2), 800–825 (2004)
H.-J. Bungartz, M. Griebel, Sparse grids. Acta Numer. 13, 147–269 (2004)
A. Defant, K. Floret, Tensor Norms and Operator Ideals (Elsevier, Burlington, 1992)
D. Dũng, M. Griebel, Hyperbolic cross approximation in infinite dimensions. J. Complexity 33, 55–88 (2016)
D. Dũng, V.N. Temlyakov, T. Ullrich, Hyperbolic cross approximation (2015). arXiv:1601.03978
J. Garcke, A dimension adaptive sparse grid combination technique for machine learning. ANZIAM J. 48(C), C725–C740 (2007)
J. Garcke, Sparse grids in a nutshell, in Sparse Grids and Applications (Springer, Berlin, 2012), pp. 57–80
M.B. Giles, Multilevel Monte Carlo path simulation. Oper. Res. 56(3), 607–617 (2008)
M. Griebel, H. Harbrecht, On the convergence of the combination technique, in Sparse Grids and Applications (Springer, Cham, 2014), pp. 55–74
M. Griebel, J. Oettershagen, On tensor product approximation of analytic functions. J. Approx. Theory 207, 348–379 (2016)
M. Griebel, M. Schneider, C. Zenger, A combination technique for the solution of sparse grid problems, in Iterative Methods in Linear Algebra, ed. by P. de Groen, R. Beauwens. IMACS (Elsevier, Amsterdam, 1992), pp. 263–281
W. Hackbusch, Tensor Spaces and Numerical Tensor Calculus (Springer, Berlin, 2012)
A.-L. Haji-Ali, F. Nobile, R. Tempone, Multi-index Monte Carlo: when sparsity meets sampling. Numer. Math. 132(4), 767–806 (2016)
H. Harbrecht, M. Peters, M. Siebenmorgen, Multilevel accelerated quadrature for PDEs with log-normally distributed diffusion coefficient. SIAM/ASA J. Uncertain. Quantif. 4(1), 520–551 (2016)
M. Hegland, Adaptive sparse grids. ANZIAM J. 44(C), C335–C353 (2002)
S. Heinrich, Monte Carlo complexity of global solution of integral equations. J. Complexity 14(2), 151–175 (1998)
P.E. Kloeden, E. Platen, Numerical Solution of Stochastic Differential Equations (Springer, Berlin, 1992)
O.P. Le Maître, O.M. Knio, Spectral Methods for Uncertainty Quantification (Springer, Berlin, 2010)
S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations (Wiley, New York, 1990)
F. Nobile, R. Tempone, S. Wolfers, Sparse approximation of multilinear problems with applications to kernel-based methods in UQ. Numer. Math. 139(1), 247–280 (2018)
F.W.J. Olver, A.B. Olde Daalhuis, D.W. Lozier, B.I. Schneider, R.F. Boisvert, C.W. Clark, B.R. Miller, B.V. Saunders (eds.), NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/. Release 1.0.13 of 2016-09-16
A. Papageorgiou, H. Woźniakowski, Tractability through increasing smoothness. J. Complexity 26(5), 409–421 (2010)
I.H. Sloan, H. Woźniakowski, Tractability of multivariate integration for weighted Korobov classes. J. Complexity 17(4), 697–721 (2001)
S.A. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions. Soviet Math. Dokl. 4, 240–243 (1963)
G.W. Wasilkowski, H. Woźniakowski, Explicit cost bounds of algorithms for multivariate tensor product problems. J. Complexity 11(1), 1–56 (1995)
C. Zenger, Sparse grids, in Parallel Algorithms for Partial Differential Equations. Proceedings of the Sixth GAMM-Seminar, ed. by W. Hackbusch (Vieweg, Braunschweig, 1991)
Appendix
Lemma 1
Let \(\gamma_j>0\), \(\beta_j>0\), and \(t_j>0\) for \(j\in\{1,\dots,n\}\). Then, for all \(L>0\),
\[\sum_{\substack{\boldsymbol{k}\in\mathbb{N}_0^n\\ (\boldsymbol{\beta}+\boldsymbol{\gamma})\cdot\boldsymbol{k}\le L}}(\boldsymbol{k}+\mathbf{1})^{\boldsymbol{t}}\exp(\boldsymbol{\gamma}\cdot\boldsymbol{k})\le C(\boldsymbol{\gamma},\boldsymbol{t},n)\,(L+1)^{n_*-1+t_*}\exp(\mu L),\]
where \(\rho:=\max_{j=1}^{n}\gamma_j/\beta_j\), \(\mu:=\frac{\rho}{1+\rho}\), \(J:=\{j\in\{1,\dots,n\}:\gamma_j/\beta_j=\rho\}\), \(n_*:=|J|\), \(t_*:=\sum_{j\in J}t_j\), and \((\boldsymbol{k}+\mathbf{1})^{\boldsymbol{t}}:=\prod_{j=1}^{n}(k_j+1)^{t_j}\).
Proof
First, we assume without loss of generality that the dimensions are ordered according to whether they belong to \(J\) or \(J^{c}:=\{1,\dots,n\}\setminus J\). To avoid cluttered notation we then separate dimensions by plus or minus signs in the subscripts; for example, we write \(\boldsymbol{t}=(\boldsymbol{t}_J,\boldsymbol{t}_{J^{c}})=:(\boldsymbol{t}_+,\boldsymbol{t}_-)\).
Next, we may replace the sum by an integral over \(\{(\boldsymbol{\beta}+\boldsymbol{\gamma})\cdot\boldsymbol{x}\le L\}\). Indeed, by monotonicity we may do so if we replace \(L\) by \(L+|\boldsymbol{\beta}+\boldsymbol{\gamma}|_1\), but looking at the final result we observe that a shift of \(L\) only affects the constant \(C(\boldsymbol{\gamma},\boldsymbol{t},n)\).
Finally, using the change of variables \(y_j:=(\beta_j+\gamma_j)x_j\) and the shorthand \(\boldsymbol{\mu}:=\boldsymbol{\gamma}/(\boldsymbol{\beta}+\boldsymbol{\gamma})\) (with componentwise division) we obtain
where the last equality holds by definition of \(\mu=\max\{\boldsymbol{\mu}_{+}\}\) and \(\mu_{-}:=\max\{\boldsymbol{\mu}_{-}\}\). We use the letter \(C\) here and in the following to denote quantities that depend only on \(\boldsymbol{\gamma}\), \(\boldsymbol{t}\), and \(n\) but may change value from line to line. Using \((\boldsymbol{y}_{+}+\mathbf{1})^{\boldsymbol{t}_{+}}\leq(|\boldsymbol{y}_{+}|_1+1)^{|\boldsymbol{t}_{+}|_1}\) and \((\boldsymbol{y}_{-}+\mathbf{1})^{\boldsymbol{t}_{-}}\leq(|\boldsymbol{y}_{-}|_1+1)^{|\boldsymbol{t}_{-}|_1}\) and the linear change of variables \(\boldsymbol{y}\mapsto(|\boldsymbol{y}|_1,y_2,\dots,y_n)\) in both integrals, we obtain
where we used supremum bounds for both integrals for the third inequality, the change of variables \(w:=L-u\) for the penultimate equality, and the fact that \(\mu>\mu_{-}\) for the last inequality.
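The scaling asserted in Lemma 1 can be sanity-checked numerically. The sketch below (not part of the original text; it uses illustrative parameters with \(n=2\) and assumes the bound takes the form \((L+1)^{n_*-1+t_*}\exp(\mu L)\) built from the quantities defined in the statement) evaluates the sum of \((\boldsymbol{k}+\mathbf{1})^{\boldsymbol{t}}\exp(\boldsymbol{\gamma}\cdot\boldsymbol{k})\) over \((\boldsymbol{\beta}+\boldsymbol{\gamma})\cdot\boldsymbol{k}\le L\) by brute force and checks that its ratio to that bound stays bounded as \(L\) grows:

```python
import math

def lemma1_sum(beta, gamma, t, L):
    """Brute-force sum of (k+1)^t * exp(gamma.k) over k in N_0^2
    with (beta+gamma).k <= L, the quantity bounded in Lemma 1 (n = 2)."""
    total = 0.0
    k1 = 0
    while (beta[0] + gamma[0]) * k1 <= L:
        k2 = 0
        while (beta[0] + gamma[0]) * k1 + (beta[1] + gamma[1]) * k2 <= L:
            total += ((k1 + 1) ** t[0] * (k2 + 1) ** t[1]
                      * math.exp(gamma[0] * k1 + gamma[1] * k2))
            k2 += 1
        k1 += 1
    return total

# Illustrative parameters: gamma/beta = (2, 1), so rho = 2 is attained
# only in dimension 1, giving J = {1}, n_* = 1, t_* = t_1 = 1,
# and mu = rho / (1 + rho) = 2/3.
beta, gamma, t = (1.0, 2.0), (2.0, 2.0), (1, 1)
mu, n_star, t_star = 2.0 / 3.0, 1, 1

ratios = [lemma1_sum(beta, gamma, t, L)
          / ((L + 1) ** (n_star - 1 + t_star) * math.exp(mu * L))
          for L in (12, 24, 36, 48, 60)]
print(ratios)  # ratios remain bounded as L grows
```

If the polynomial factor \((L+1)^{n_*-1+t_*}\) were dropped from the denominator, the same ratios would grow linearly in \(L\), which is a quick way to see that the factor is not an artifact of the proof.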
Lemma 2
Let \(\gamma_j>0\), \(\beta_j>0\), and \(s_j>0\) for \(j\in\{1,\dots,n\}\). Then, for all \(L>0\),
\[\sum_{\substack{\boldsymbol{k}\in\mathbb{N}_0^n\\ (\boldsymbol{\beta}+\boldsymbol{\gamma})\cdot\boldsymbol{k}> L}}(\boldsymbol{k}+\mathbf{1})^{\boldsymbol{s}}\exp(-\boldsymbol{\beta}\cdot\boldsymbol{k})\le C(\boldsymbol{\beta},\boldsymbol{s},n)\,(L+1)^{n_*-1+s_*}\exp(-\nu L),\]
where \(\rho:=\max_{j=1}^{n}\gamma_j/\beta_j\), \(\nu:=\frac{1}{1+\rho}\), \(J:=\{j\in\{1,\dots,n\}:\gamma_j/\beta_j=\rho\}\), \(n_*:=|J|\), \(s_*:=\sum_{j\in J}s_j\), and \((\boldsymbol{k}+\mathbf{1})^{\boldsymbol{s}}:=\prod_{j=1}^{n}(k_j+1)^{s_j}\).
Proof
First, we assume without loss of generality that the dimensions are ordered according to whether they belong to \(J\) or \(J^{c}:=\{1,\dots,n\}\setminus J\). To avoid cluttered notation we then separate dimensions by plus or minus signs in the subscripts; for example, we write \(\boldsymbol{s}=(\boldsymbol{s}_J,\boldsymbol{s}_{J^{c}})=:(\boldsymbol{s}_+,\boldsymbol{s}_-)\).
Next, we may replace the sum by an integral over \(\{(\boldsymbol{\beta}+\boldsymbol{\gamma})\cdot\boldsymbol{x}>L\}\). Indeed, by monotonicity we may do so if we replace \(L\) by \(L-|\boldsymbol{\beta}+\boldsymbol{\gamma}|_1\), but looking at the final result we observe that a shift of \(L\) only affects the constant \(C(\boldsymbol{\beta},\boldsymbol{s},n)\).
Finally, using the change of variables \(y_j:=(\beta_j+\gamma_j)x_j\) and the shorthand \(\boldsymbol{\nu}:=\boldsymbol{\beta}/(\boldsymbol{\beta}+\boldsymbol{\gamma})\) (with componentwise division) we obtain
where the last equality holds by definition of \(\nu=\max\{\boldsymbol{\nu}_{+}\}\) and \(\nu_{-}:=\max\{\boldsymbol{\nu}_{-}\}\). We use the letter \(C\) here and in the following to denote quantities that depend only on \(\boldsymbol{\beta}\), \(\boldsymbol{s}\), and \(n\) but may change value from line to line. Using \((\boldsymbol{y}_{+}+\mathbf{1})^{\boldsymbol{s}_{+}}\leq(|\boldsymbol{y}_{+}|_1+1)^{|\boldsymbol{s}_{+}|_1}\) and \((\boldsymbol{y}_{-}+\mathbf{1})^{\boldsymbol{s}_{-}}\leq(|\boldsymbol{y}_{-}|_1+1)^{|\boldsymbol{s}_{-}|_1}\) and the linear change of variables \(\boldsymbol{y}\mapsto(|\boldsymbol{y}|_1,y_2,\dots,y_n)\) in both integrals, we obtain
To bound \((\star\star)\), we estimate the inner integral using the inequality \(\int_{a}^{\infty}\exp(-bv)(v+1)^{c}\,dv\leq C\exp(-ba)(a+1)^c\) [21, (8.11.2)], which is valid for all positive \(a\), \(b\), \(c\):
where we used a supremum bound and the change of variables \(w:=L-u\) for the second inequality, and the fact that \(\nu_{-}>\nu\) for the last inequality. Finally, to bound \((\star\star\star)\), we observe that the inner integral is independent of \(L\), and bound the outer integral in the same way we previously bounded the inner integral. This shows
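Lemma 2 admits the same kind of numerical sanity check. The sketch below (again illustrative, with \(n=2\); the truncation cutoff is a pragmatic choice, justified because the exponentially decaying tail beyond it is negligible) compares the sum of \((\boldsymbol{k}+\mathbf{1})^{\boldsymbol{s}}\exp(-\boldsymbol{\beta}\cdot\boldsymbol{k})\) over \((\boldsymbol{\beta}+\boldsymbol{\gamma})\cdot\boldsymbol{k}>L\) with \((L+1)^{n_*-1+s_*}\exp(-\nu L)\):

```python
import math

def lemma2_sum(beta, gamma, s, L, cutoff=200):
    """Truncated sum of (k+1)^s * exp(-beta.k) over k in N_0^2
    with (beta+gamma).k > L, the tail bounded in Lemma 2 (n = 2).
    Terms with k_j >= cutoff are negligibly small and are dropped."""
    total = 0.0
    for k1 in range(cutoff):
        for k2 in range(cutoff):
            if (beta[0] + gamma[0]) * k1 + (beta[1] + gamma[1]) * k2 > L:
                total += ((k1 + 1) ** s[0] * (k2 + 1) ** s[1]
                          * math.exp(-beta[0] * k1 - beta[1] * k2))
    return total

# Same illustrative parameters as for Lemma 1: rho = 2, J = {1},
# n_* = 1, s_* = s_1 = 1, and nu = 1 / (1 + rho) = 1/3.
beta, gamma, s = (1.0, 2.0), (2.0, 2.0), (1, 1)
nu, n_star, s_star = 1.0 / 3.0, 1, 1

ratios = [lemma2_sum(beta, gamma, s, L)
          / ((L + 1) ** (n_star - 1 + s_star) * math.exp(-nu * L))
          for L in (12, 24, 36, 48, 60)]
print(ratios)  # ratios remain bounded as L grows
```

The decay rate \(\nu\) is determined by the dimensions in \(J\), where spending budget on \(k_j\) costs the least decay per unit of the constraint \((\boldsymbol{\beta}+\boldsymbol{\gamma})\cdot\boldsymbol{k}\); this is visible in the code, since the dominant terms lie near the \(k_1\)-axis.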
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Tempone, R., Wolfers, S. (2018). Smolyak’s Algorithm: A Powerful Black Box for the Acceleration of Scientific Computations. In: Garcke, J., Pflüger, D., Webster, C., Zhang, G. (eds) Sparse Grids and Applications - Miami 2016. Lecture Notes in Computational Science and Engineering, vol 123. Springer, Cham. https://doi.org/10.1007/978-3-319-75426-0_9
Print ISBN: 978-3-319-75425-3
Online ISBN: 978-3-319-75426-0