
Smolyak’s Algorithm: A Powerful Black Box for the Acceleration of Scientific Computations

  • Conference paper
Sparse Grids and Applications - Miami 2016

Part of the book series: Lecture Notes in Computational Science and Engineering (LNCSE, volume 123)

Abstract

We provide a general discussion of Smolyak’s algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak’s work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has been associated with the keywords: sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak’s algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise application-independent manner.
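To make the construction discussed in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper) of the classical Smolyak algorithm for numerical integration: one-dimensional quadrature rules Q_k are combined through their difference rules Δ_k := Q_k − Q_{k−1}, and only tensorized differences with |k|_1 ≤ L are summed. The choice of nested trapezoidal rules on [0, 1] and all function names are illustrative assumptions.

```python
import itertools
import math

def trap_rule(k):
    """1D trapezoidal rule Q_k with 2^k + 1 nested points on [0, 1],
    returned as a dict {point: weight}."""
    m = 2 ** k
    rule = {i / m: 1.0 / m for i in range(m + 1)}
    rule[0.0] = 0.5 / m  # halve the weights at both endpoints
    rule[1.0] = 0.5 / m
    return rule

def delta(k):
    """Difference rule Δ_k := Q_k − Q_{k−1} (with Q_{−1} := 0).
    Nestedness ensures the coarse points are among the fine ones."""
    rule = dict(trap_rule(k))
    if k > 0:
        for x, w in trap_rule(k - 1).items():
            rule[x] -= w
    return rule

def smolyak_integrate(f, dim, L):
    """Smolyak's algorithm: sum the tensorized difference rules
    Δ_{k_1} ⊗ … ⊗ Δ_{k_dim} over all multi-indices with k_1 + … + k_dim ≤ L."""
    total = 0.0
    for ks in itertools.product(range(L + 1), repeat=dim):
        if sum(ks) > L:
            continue
        # Tensor product of the selected 1D difference rules
        for items in itertools.product(*(delta(k).items() for k in ks)):
            point = tuple(x for x, _ in items)
            weight = math.prod(w for _, w in items)
            total += weight * f(point)
    return total
```

Compared with the full tensor-product rule, the sum over |k|_1 ≤ L uses far fewer function evaluations while retaining most of the accuracy for sufficiently smooth integrands; e.g. `smolyak_integrate(lambda x: math.exp(x[0] + x[1]), 2, 8)` approximates (e − 1)² closely.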


References

  1. I. Babuška, R. Tempone, G.E. Zouraris, Galerkin finite element approximations of stochastic elliptic partial differential equations. SIAM J. Numer. Anal. 42(2), 800–825 (2004)

  2. H.-J. Bungartz, M. Griebel, Sparse grids. Acta Numer. 13, 147–269 (2004)

  3. A. Defant, K. Floret, Tensor Norms and Operator Ideals (Elsevier, Burlington, 1992)

  4. D. Dũng, M. Griebel, Hyperbolic cross approximation in infinite dimensions. J. Complexity 33, 55–88 (2016)

  5. D. Dũng, V.N. Temlyakov, T. Ullrich, Hyperbolic cross approximation (2015). arXiv:1601.03978

  6. J. Garcke, A dimension adaptive sparse grid combination technique for machine learning. ANZIAM J. 48(C), C725–C740 (2007)

  7. J. Garcke, Sparse grids in a nutshell, in Sparse Grids and Applications (Springer, Berlin, 2012), pp. 57–80

  8. M.B. Giles, Multilevel Monte Carlo path simulation. Oper. Res. 56(3), 607–617 (2008)

  9. M. Griebel, H. Harbrecht, On the convergence of the combination technique, in Sparse Grids and Applications (Springer, Cham, 2014), pp. 55–74

  10. M. Griebel, J. Oettershagen, On tensor product approximation of analytic functions. J. Approx. Theory 207, 348–379 (2016)

  11. M. Griebel, M. Schneider, C. Zenger, A combination technique for the solution of sparse grid problems, in Iterative Methods in Linear Algebra, ed. by P. de Groen, R. Beauwens (Elsevier, Amsterdam, 1992), pp. 263–281

  12. W. Hackbusch, Tensor Spaces and Numerical Tensor Calculus (Springer, Berlin, 2012)

  13. A.-L. Haji-Ali, F. Nobile, R. Tempone, Multi-index Monte Carlo: when sparsity meets sampling. Numer. Math. 132(4), 767–806 (2016)

  14. H. Harbrecht, M. Peters, M. Siebenmorgen, Multilevel accelerated quadrature for PDEs with log-normally distributed diffusion coefficient. SIAM/ASA J. Uncertain. Quantif. 4(1), 520–551 (2016)

  15. M. Hegland, Adaptive sparse grids. ANZIAM J. 44(C), C335–C353 (2002)

  16. S. Heinrich, Monte Carlo complexity of global solution of integral equations. J. Complexity 14(2), 151–175 (1998)

  17. P.E. Kloeden, E. Platen, Numerical Solution of Stochastic Differential Equations (Springer, Berlin, 1992)

  18. O.P. Le Maître, O.M. Knio, Spectral Methods for Uncertainty Quantification (Springer, Berlin, 2010)

  19. S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations (Wiley, New York, 1990)

  20. F. Nobile, R. Tempone, S. Wolfers, Sparse approximation of multilinear problems with applications to kernel-based methods in UQ. Numer. Math. 139(1), 247–280 (2018)

  21. F.W.J. Olver, A.B. Olde Daalhuis, D.W. Lozier, B.I. Schneider, R.F. Boisvert, C.W. Clark, B.R. Miller, B.V. Saunders (eds.), NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/. Release 1.0.13 of 2016-09-16

  22. A. Papageorgiou, H. Woźniakowski, Tractability through increasing smoothness. J. Complexity 26(5), 409–421 (2010)

  23. I.H. Sloan, H. Woźniakowski, Tractability of multivariate integration for weighted Korobov classes. J. Complexity 17(4), 697–721 (2001)

  24. S.A. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions. Soviet Math. Dokl. 4, 240–243 (1963)

  25. G.W. Wasilkowski, H. Woźniakowski, Explicit cost bounds of algorithms for multivariate tensor product problems. J. Complexity 11(1), 1–56 (1995)

  26. C. Zenger, Sparse grids, in Parallel Algorithms for Partial Differential Equations. Proceedings of the Sixth GAMM-Seminar, ed. by W. Hackbusch (Vieweg, Braunschweig, 1991)

Author information

Correspondence to Sören Wolfers.


Appendix

Lemma 1

Let γ_j > 0, β_j > 0, and t_j > 0 for j ∈ {1, …, n}. Then

$$\displaystyle \begin{aligned} \sum_{(\boldsymbol{{\beta}}+\boldsymbol{\gamma})\cdot\boldsymbol{{k}}\leq L}\exp(\boldsymbol{\gamma}\cdot\boldsymbol{{k}})(\boldsymbol{{k}}+\mathbf{1})^{\boldsymbol{t}}\leq C(\boldsymbol{\gamma},\boldsymbol{t},n)\exp(\mu L)(L+1)^{n^*-1+t^*}, \end{aligned}$$

where \(\rho :=\max _{j=1}^{n}\gamma _j/{\beta }_j\), \(\mu :=\frac {\rho }{1+\rho }\), \(J := \{j \in \{1, \dots, n\} : \gamma _j/{\beta }_j = \rho \}\), \(n^* := |J|\), \(t^* := \sum _{j \in J} t_j\), and \((\boldsymbol {{k}}+\mathbf {1})^{\boldsymbol {t}}:=\prod _{j=1}^{n}({k}_j+1)^{t_j}\).

Proof

First, we assume without loss of generality that the dimensions are ordered according to whether they belong to J or J^c := {1, …, n} ∖ J. To avoid cluttered notation we then separate dimensions by plus or minus signs in the subscripts; for example, we write \(\boldsymbol {t}=(\boldsymbol {t}_J,\boldsymbol {t}_{J^{c}})=:(\boldsymbol {t}_+,\boldsymbol {t}_-)\).

Next, we may replace the sum by an integral over {(β + γ) ⋅ x ≤ L}. Indeed, by monotonicity we may do so if we replace L by L + |β + γ|_1, and looking at the final result we observe that such a shift of L only affects the constant C(γ, t, n).

Finally, using the change of variables y_j := (β_j + γ_j)x_j and the shorthand μ := γ∕(β + γ) (with componentwise division), we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \int_{(\boldsymbol{{\beta}}+\boldsymbol{\gamma})\cdot \boldsymbol{x}\leq L}\exp(\boldsymbol{\gamma}\cdot \boldsymbol{x})(\boldsymbol{x}+\mathbf{1})^{\boldsymbol{t}}\;d\boldsymbol{x}\\&\displaystyle &\displaystyle \quad \leq C \int_{|\boldsymbol{y}|{}_1\leq L}\exp(\boldsymbol{\mu}\cdot \boldsymbol{y})(\boldsymbol{y}+\mathbf{1})^{\boldsymbol{t}}\;d\boldsymbol{y}\\ &\displaystyle &\displaystyle \quad =C\int_{|\boldsymbol{y}_{+}|{}_1\leq L}\exp(\boldsymbol{\mu}_{+}\cdot \boldsymbol{y}_{+})(\boldsymbol{y}_{+}+\mathbf{1})^{\boldsymbol{t}_{+}}\\&\displaystyle &\displaystyle \qquad \times\int_{|\boldsymbol{y}_{-}|{}_{1}\leq L-|\boldsymbol{y}_{+}|{}_1}\exp(\boldsymbol{\mu}_{-}\cdot\boldsymbol{y}_{-})(\boldsymbol{y}_{-}+\mathbf{1})^{\boldsymbol{t}_{-}}\;d\boldsymbol{y}_{-}\;d\boldsymbol{y}_{+}\\ &\displaystyle &\displaystyle \quad \leq C\int_{|\boldsymbol{y}_{+}|{}_1\leq L}\exp(\mu|\boldsymbol{y}_{+}|{}_1)(\boldsymbol{y}_{+}+\mathbf{1})^{\boldsymbol{t}_{+}}\\&\displaystyle &\displaystyle \qquad \times\int_{|\boldsymbol{y}_{-}|{}_{1}\leq L-|\boldsymbol{y}_{+}|{}_1}\exp(\mu_{-}|\boldsymbol{y}_-|{}_{1})(\boldsymbol{y}_{-}+\mathbf{1})^{\boldsymbol{t}_{-}}\;d\boldsymbol{y}_{-}\;d\boldsymbol{y}_{+}=(\star), \end{array} \end{aligned} $$

where the last equality holds by definition of \(\mu =\max \{\boldsymbol {\mu }_{+}\}\) and \(\mu _{-}:=\max \{\boldsymbol {\mu }_{-}\}\). We use the letter C here and in the following to denote quantities that depend only on γ, t and n but may change value from line to line. Using \((\boldsymbol {y}_{ +}+\mathbf {1})^{\boldsymbol {t}_{+}}\leq (|\boldsymbol {y}_{+}|{ }_1+1)^{|\boldsymbol {t}_{+}|{ }_1}\) and \((\boldsymbol {y}_{-}+\mathbf {1})^{\boldsymbol {t}_{-}}\leq (|\boldsymbol {y}_{-}|{ }_1+1)^{|\boldsymbol {t}_{-}|{ }_1}\) and the linear change of variables y↦(|y|1, y 2, …, y n ) in both integrals, we obtain

$$\displaystyle \begin{aligned} \begin{aligned} (\star)&\leq C\int_0^{L}\exp(\mu u)(u+1)^{|\boldsymbol{t}_{+}|{}_{1}+|J|-1}\int_0^{L-u}\exp(\mu_{-}v)(v+1)^{|\boldsymbol{t}_{-}|{}_1+|J^{c}|-1}\;dv\;du\\ &\leq C\int_0^{L}\exp(\mu u)(u+1)^{|\boldsymbol{t}_{+}|{}_{1}+|J|-1}\exp(\mu_{-}(L-u))(L-u+1)^{|\boldsymbol{t}_{-}|{}_1+|J^{c}|}\;du\\ &\leq C(L+1)^{|\boldsymbol{t}_{+}|{}_{1}+|J|-1}\int_0^{L}\exp(\mu u)\exp(\mu_{-}(L-u))(L-u+1)^{|\boldsymbol{t}_{-}|{}_1+|J^{c}|}\;du\\ &=C(L+1)^{|\boldsymbol{t}_{+}|{}_{1}+|J|-1}\exp(\mu L)\int_0^{L}\exp(-(\mu-\mu_{-})w)(w+1)^{|\boldsymbol{t}_{-}|{}_1+|J^{c}|}\;dw\\ &\leq C(L+1)^{|\boldsymbol{t}_{+}|{}_{1}+|J|-1}\exp(\mu L), \end{aligned} \end{aligned}$$

where we used supremum bounds for both integrals in the second and third inequalities, the change of variables w := L − u for the penultimate equality, and the fact that \(\mu >\mu _-\) for the last inequality. Since \(|\boldsymbol {t}_{+}|{ }_1=t^*\) and \(|J|=n^*\), this proves the claim. □
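The bound of Lemma 1 can be checked numerically. The following sketch (the parameters γ, β, t and all names are illustrative choices of ours, not from the text) evaluates the left-hand side exactly by enumeration and compares its growth against the shape exp(μL)(L+1)^{n*−1+t*} of the right-hand side; the ratio should stay bounded as L grows.

```python
import itertools
import math

# Illustrative parameters for n = 2 dimensions (arbitrary choices)
gamma, beta, t = (1.0, 0.5), (1.0, 1.0), (1.0, 2.0)
n = 2
rho = max(g / b for g, b in zip(gamma, beta))           # largest ratio gamma_j / beta_j
mu = rho / (1 + rho)
J = [j for j in range(n) if gamma[j] / beta[j] == rho]  # set where the max is attained
n_star, t_star = len(J), sum(t[j] for j in J)

def lhs(L):
    """Exact sum of exp(gamma . k)(k+1)^t over k >= 0 with (beta+gamma) . k <= L."""
    total = 0.0
    K = int(L)  # here beta_j + gamma_j >= 1, so k_j <= L on the admissible set
    for k in itertools.product(range(K + 1), repeat=n):
        if sum((b + g) * kj for b, g, kj in zip(beta, gamma, k)) <= L:
            total += (math.exp(sum(g * kj for g, kj in zip(gamma, k)))
                      * math.prod((kj + 1) ** tj for kj, tj in zip(k, t)))
    return total

def rhs_shape(L):
    """The growth predicted by the lemma, without the constant C."""
    return math.exp(mu * L) * (L + 1) ** (n_star - 1 + t_star)

ratios = [lhs(L) / rhs_shape(L) for L in (10, 20, 30, 40)]
```

The ratios remain of moderate size for all tested L, consistent with the lemma's claim that only the constant C(γ, t, n) is left unspecified.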

Lemma 2

Let γ_j > 0, β_j > 0, and s_j > 0 for j ∈ {1, …, n}. Then

$$\displaystyle \begin{aligned} \sum_{(\boldsymbol{{\beta}}+\boldsymbol{\gamma})\cdot\boldsymbol{{k}}>L}\exp(-\boldsymbol{{\beta}}\cdot\boldsymbol{{k}})(\boldsymbol{{k}}+\mathbf{1})^{\boldsymbol{s}} \leq C(\boldsymbol{{\beta}},\boldsymbol{s},n)\exp(-\nu L)(L+1)^{n^{*}-1+s^*}, \end{aligned}$$

where \(\rho :=\max _{j=1}^{n}\gamma _j/{\beta }_j\), \(\nu :=\frac {1}{1+\rho }\), \(J := \{j \in \{1, \dots, n\} : \gamma _j/{\beta }_j = \rho \}\), \(n^* := |J|\), \(s^* := \sum _{j \in J} s_j\), and \((\boldsymbol {{k}}+\mathbf {1})^{\boldsymbol {s}}:=\prod _{j=1}^{n}({k}_j+1)^{s_j}\).

Proof

First, we assume without loss of generality that the dimensions are ordered according to whether they belong to J or J^c := {1, …, n} ∖ J. To avoid cluttered notation we then separate dimensions by plus or minus signs in the subscripts; for example, we write \(\boldsymbol {s}=(\boldsymbol {s}_J,\boldsymbol {s}_{J^{c}})=:(\boldsymbol {s}_+,\boldsymbol {s}_-)\).

Next, we may replace the sum by an integral over {(β + γ) ⋅ x > L}. Indeed, by monotonicity we may do so if we replace L by L − |β + γ|_1, and looking at the final result we observe that such a shift of L only affects the constant C(β, s, n).

Finally, using the change of variables y_j := (β_j + γ_j)x_j and the shorthand ν := β∕(β + γ) (with componentwise division), we obtain

$$\displaystyle \begin{aligned} \begin{aligned} &\int_{(\boldsymbol{{\beta}}+\boldsymbol{\gamma})\cdot \boldsymbol{x}>L}\exp(-\boldsymbol{{\beta}}\cdot \boldsymbol{x})(\boldsymbol{x}+\mathbf{1})^{\boldsymbol{s}}\;d\boldsymbol{x}\leq C\int_{|\boldsymbol{y}|{}_1>L}\exp(-\boldsymbol{\nu}\cdot \boldsymbol{y})(\boldsymbol{y}+\mathbf{1})^{\boldsymbol{s}}\;d\boldsymbol{y}\\ &=C\int_{\boldsymbol{y}_{+}\geq \mathbf{0}}\exp(-\boldsymbol{\nu}_{+}\cdot \boldsymbol{y}_{+})(\boldsymbol{y}_{+}+\mathbf{1})^{\boldsymbol{s}_{+}}\int_{|\boldsymbol{y}_{-}|{}_{1}>(L-|\boldsymbol{y}_{+}|{}_1)^{+}}\exp(-\boldsymbol{\nu}_{-}\cdot\boldsymbol{y}_{-})(\boldsymbol{y}_{-}+\mathbf{1})^{\boldsymbol{s}_{-}}d\boldsymbol{y}_{-}d\boldsymbol{y}_{+}\\ &\leq C\int_{\boldsymbol{y}_{+}\geq \mathbf{0}}\exp(-\nu|\boldsymbol{y}_{+}|{}_1)(\boldsymbol{y}_{+}+\mathbf{1})^{\boldsymbol{s}_{+}}\int_{|\boldsymbol{y}_{-}|{}_{1}>(L-|\boldsymbol{y}_{+}|{}_1)^{+}}\exp(-\nu_{-}|\boldsymbol{y}_-|{}_{1})(\boldsymbol{y}_{-}+\mathbf{1})^{\boldsymbol{s}_{-}}d\boldsymbol{y}_{-}d\boldsymbol{y}_{+}\\ &=:(\star), \end{aligned} \end{aligned}$$

where the last inequality holds since every component of \(\boldsymbol {\nu }_{+}\) equals ν and by definition of \(\nu _{-}:=\min \{\boldsymbol {\nu }_{-}\}\). We use the letter C here and in the following to denote quantities that depend only on β, s and n but may change value from line to line. Using \((\boldsymbol {y}_{ +}+\mathbf {1})^{\boldsymbol {s}_{+}}\leq (|\boldsymbol {y}_{+}|{ }_1+1)^{|\boldsymbol {s}_{+}|{ }_1}\) and \((\boldsymbol {y}_{-}+\mathbf {1})^{\boldsymbol {s}_{-}}\leq (|\boldsymbol {y}_{-}|{ }_1+1)^{|\boldsymbol {s}_{-}|{ }_1}\) and the linear change of variables y ↦ (|y|_1, y_2, …, y_n) in both integrals, we obtain

$$\displaystyle \begin{aligned} (\star)\leq C\int_0^{\infty}\exp(-\nu u)(u+1)^{|\boldsymbol{s}_{+}|{}_{1}+|J|-1}\int_{(L-u)^{+}}^{\infty}\exp(-\nu_{-}v)(v+1)^{|\boldsymbol{s}_{-}|{}_1+|J^{c}|-1}\;dv\;du=:(\star\star)+(\star\star\star), \end{aligned}$$

where (⋆⋆) and (⋆⋆⋆) denote the contributions of u ∈ [0, L] and u ∈ (L, ∞), respectively. To bound (⋆⋆), we estimate the inner integral using the inequality \(\int _{a}^{\infty }\exp (-b v)(v+1)^{c}\,dv\leq C\exp (-b a)(a+1)^c\) [21, (8.11.2)], which is valid for all positive a, b, c (alternatively, substitute v = a + u and use a + 1 + u ≤ (a + 1)(1 + u)):

$$\displaystyle \begin{aligned} \begin{aligned} (\star\star)&\leq C\int_0^{L}\exp(-\nu u)(u+1)^{|\boldsymbol{s}_{+}|{}_{1}+|J|-1}\exp(-\nu_{-}(L-u))(L-u+1)^{|\boldsymbol{s}_{-}|{}_1+|J^{c}|-1}\;du\\ &\leq C(L+1)^{|\boldsymbol{s}_{+}|{}_{1}+|J|-1}\int_0^{L}\exp(-\nu(L-w))\exp(-\nu_{-}w)(w+1)^{|\boldsymbol{s}_{-}|{}_1+|J^{c}|-1}\;dw\\ &=C(L+1)^{|\boldsymbol{s}_{+}|{}_{1}+|J|-1}\exp(-\nu L)\int_0^{L}\exp(-(\nu_{-}-\nu)w)(w+1)^{|\boldsymbol{s}_{-}|{}_1+|J^{c}|-1}\;dw\\ &\leq C(L+1)^{|\boldsymbol{s}_{+}|{}_{1}+|J|-1}\exp(-\nu L), \end{aligned} \end{aligned}$$

where we used a supremum bound and the change of variables w := L − u for the second inequality, and the fact that \(\nu _{-}>\nu \) for the last inequality. Finally, to bound (⋆ ⋆ ⋆), we observe that the inner integral is independent of L, and bound the outer integral in the same way we previously bounded the inner integral. This shows

$$\displaystyle \begin{aligned} (\star\star\star)\leq C\exp(-\nu L)(L+1)^{|\boldsymbol{s}_{+}|{}_{1}+|J|-1}. \end{aligned}$$

Combining the bounds on (⋆⋆) and (⋆ ⋆ ⋆), and noting that \(|\boldsymbol {s}_{+}|{ }_1=s^*\) and \(|J|=n^*\), completes the proof. □
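Lemma 2 can likewise be checked numerically. This sketch (parameters and names are illustrative assumptions of ours) approximates the tail sum by truncating each index at k_j ≤ K, which is harmless here because the summand decays like exp(−β ⋅ k), and compares it against the decay shape exp(−νL)(L+1)^{n*−1+s*}.

```python
import itertools
import math

# Illustrative parameters for n = 2 dimensions (arbitrary choices)
gamma, beta, s = (1.0, 0.5), (1.0, 1.0), (1.0, 2.0)
n = 2
rho = max(g / b for g, b in zip(gamma, beta))
nu = 1 / (1 + rho)
J = [j for j in range(n) if gamma[j] / beta[j] == rho]
n_star, s_star = len(J), sum(s[j] for j in J)

def tail(L, K=200):
    """Sum of exp(-beta . k)(k+1)^s over (beta+gamma) . k > L, truncated at k_j <= K."""
    total = 0.0
    for k in itertools.product(range(K + 1), repeat=n):
        if sum((b + g) * kj for b, g, kj in zip(beta, gamma, k)) > L:
            total += (math.exp(-sum(b * kj for b, kj in zip(beta, k)))
                      * math.prod((kj + 1) ** sj for kj, sj in zip(k, s)))
    return total

def bound_shape(L):
    """The decay predicted by the lemma, without the constant C."""
    return math.exp(-nu * L) * (L + 1) ** (n_star - 1 + s_star)

ratios = [tail(L) / bound_shape(L) for L in (5, 10, 15, 20)]
```

As with Lemma 1, the ratios stay of moderate size as L increases, in line with the claimed rate exp(−νL) with ν = 1/(1 + ρ).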


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature


Cite this paper

Tempone, R., Wolfers, S. (2018). Smolyak’s Algorithm: A Powerful Black Box for the Acceleration of Scientific Computations. In: Garcke, J., Pflüger, D., Webster, C., Zhang, G. (eds) Sparse Grids and Applications - Miami 2016. Lecture Notes in Computational Science and Engineering, vol 123. Springer, Cham. https://doi.org/10.1007/978-3-319-75426-0_9
