Sparse Spectral Methods for Solving High-Dimensional and Multiscale Elliptic PDEs

Published in Foundations of Computational Mathematics (2024).

Abstract

In his monograph Chebyshev and Fourier Spectral Methods, John Boyd claimed that, regarding Fourier spectral methods for solving differential equations, “[t]he virtues of the Fast Fourier Transform will continue to improve as the relentless march to larger and larger [bandwidths] continues” [Boyd in Chebyshev and Fourier spectral methods, second rev. ed., Dover Publications, Mineola, NY, 2001, p. 194]. This paper attempts to further the virtues of the Fast Fourier Transform (FFT) as not only the bandwidth but also the dimension of the problem is pushed to its limits. Instead of using the traditional FFT, however, we make a key substitution: a high-dimensional, sparse Fourier transform paired with randomized rank-1 lattice methods. The resulting sparse spectral method rapidly and automatically determines a set of Fourier basis functions whose span is guaranteed to contain an accurate approximation of the solution of a given elliptic PDE. This much smaller, near-optimal Fourier basis is then used to efficiently solve the given PDE in a runtime that depends only on the PDE’s data compressibility and ellipticity properties, while breaking the curse of dimensionality and relieving linear dependence on any multiscale structure in the original problem. Theoretical performance of the method is established herein with convergence analysis in the Sobolev norm for a general class of non-constant diffusion equations, as well as pointers to technical extensions of the convergence analysis to more general advection–diffusion–reaction equations. Numerical experiments demonstrate good empirical performance on several multiscale and high-dimensional example problems, further showcasing the promise of the proposed methods in practice.

Notes

  1. Here \(C^\infty : = \left\{ \phi : \mathbb {T}^d \rightarrow \mathbb {C}~|~ \partial ^\mathbf { \alpha } \phi ~ \text {is~continuous}~\forall \mathbf { \alpha } \in \mathbb {N}_0^d \right\} \).

  2. Note that, by Proposition 1, \( \langle u, v \rangle _{ H } \simeq \langle u, v \rangle _{ H^1 } \) for \( u, v \in H \).

  3. https://gitlab.com/grosscra/SparseADR.

  4. This code is publicly available at https://gitlab.com/grosscra/Rank1LatticeSparseFourier.

References

  1. Ben Adcock, Simone Brugiapaglia, and Clayton G. Webster, Sparse polynomial approximation of high-dimensional functions, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2022.

  2. Sina Bittens, Ruochuan Zhang, and Mark A. Iwen, A deterministic sparse FFT for functions with structured Fourier sparsity, Advances in Computational Mathematics 45 (2019), no. 2, 519–561.

  3. John P. Boyd, Chebyshev and Fourier spectral methods, second, rev ed., Dover Publications, Mineola, N.Y, 2001.

  4. S. Brugiapaglia, S. Micheletti, F. Nobile, and S. Perotto, Wavelet-Fourier CORSING techniques for multidimensional advection-diffusion-reaction equations, IMA Journal of Numerical Analysis (2020), draa036.

  5. S. Brugiapaglia, S. Micheletti, and S. Perotto, Compressed solving: A numerical approximation technique for elliptic PDEs based on compressed sensing, Computers & Mathematics with Applications 70 (2015), no. 6, 1306–1335 (en).

  6. Simone Brugiapaglia, COmpRessed SolvING: Sparse Approximation of PDEs based on compressed sensing, Ph.D. thesis, Politecnico di Milano, Milan, Italy, January 2016.

  7. Simone Brugiapaglia, A compressive spectral collocation method for the diffusion equation under the restricted isometry property, Quantification of Uncertainty: Improving Efficiency and Technology: QUIET selected contributions (Marta D’Elia, Max Gunzburger, and Gianluigi Rozza, eds.), Lecture Notes in Computational Science and Engineering, Springer International Publishing, Cham, 2020, pp. 15–40 (en).

  8. Simone Brugiapaglia, Sjoerd Dirksen, Hans Christian Jung, and Holger Rauhut, Sparse recovery in bounded Riesz systems with applications to numerical methods for PDEs, Applied and Computational Harmonic Analysis 53 (2021), 231–269 (en).

  9. Simone Brugiapaglia, Fabio Nobile, Stefano Micheletti, and Simona Perotto, A theoretical study of COmpRessed SolvING for advection-diffusion-reaction problems, Mathematics of Computation 87 (2018), no. 309, 1–38 (en).

  10. Hans-Joachim Bungartz and Michael Griebel, Sparse grids, Acta Numerica 13 (2004), 147–269 (en).

  11. Claudio Canuto, M. Yousuff Hussaini, Alfio Quarteroni, and Thomas A. Zang, Spectral methods: Fundamentals in single domains, Scientific Computation, Springer-Verlag, Berlin Heidelberg, 2006 (en).

  12. H. Cho, D. Venturi, and G.E. Karniadakis, Numerical methods for high-dimensional probability density function equations, Journal of Computational Physics 305 (2016), 817–837 (en).

  13. Albert Cohen, Wolfgang Dahmen, and Ronald DeVore, Compressed sensing and best \(k\)-term approximation, Journal of the American Mathematical Society 22 (2009), no. 1, 211–231 (en).

  14. Dinh Dũng, Vladimir Temlyakov, and Tino Ullrich, Hyperbolic cross approximation, Advanced Courses in Mathematics - CRM Barcelona, Springer International Publishing, Cham, 2018 (en).

  15. Ingrid Daubechies, Olof Runborg, and Jing Zou, A sparse spectral method for homogenization multiscale problems, Multiscale Modeling & Simulation 6 (2007), no. 3, 711–740.

  16. Michael Döhler, Stefan Kunis, and Daniel Potts, Nonequispaced hyperbolic cross fast Fourier transform, SIAM Journal on Numerical Analysis 47 (2010), no. 6, 4415–4428.

  17. Weinan E, Jiequn Han, and Arnulf Jentzen, Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning, Nonlinearity 35 (2021), no. 1, 278 (en), Publisher: IOP Publishing.

  18. Lawrence C. Evans, Partial differential equations, second ed., Graduate studies in mathematics, no. v. 19, American Mathematical Society, Providence, R.I, 2010.

  19. Anna C. Gilbert, Sudipto Guha, Piotr Indyk, Shanmugavelayutham Muthukrishnan, and Martin Strauss, Near-optimal sparse Fourier representations via sampling, Proceedings of the thirty-fourth annual ACM symposium on Theory of computing, 2002, pp. 152–161.

  20. Anna C. Gilbert, Piotr Indyk, Mark Iwen, and Ludwig Schmidt, Recent developments in the sparse Fourier transform: A compressed Fourier transform for big data, IEEE Signal Processing Magazine 31 (2014), no. 5, 91–100.

  21. Gene H. Golub and Charles F. Van Loan, Matrix computations, fourth ed., Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, 2013.

  22. V. Gradinaru, Fourier transform on sparse grids: Code design and the time dependent Schrödinger equation, Computing 80 (2007), no. 1, 1–22.

  23. Michael Griebel and Jan Hamaekers, Sparse grids for the Schrödinger equation, Special issue on molecular modelling 41 (2007), no. 2, 215–247.

  24. Michael Griebel and Jan Hamaekers, Fast discrete Fourier transform on generalized sparse grids, Sparse Grids and Applications - Munich 2012 (Jochen Garcke and Dirk Pflüger, eds.), vol. 97, Springer International Publishing, Cham, 2014, pp. 75–107 (en).

  25. Craig Gross, Sparsity in the spectrum: sparse Fourier transforms and spectral methods for functions of many dimensions, Ph.D. thesis, Michigan State University, East Lansing, Michigan, USA, 2023.

  26. Craig Gross, Mark Iwen, Lutz Kämmerer, and Toni Volkmer, Sparse Fourier transforms on rank-1 lattices for the rapid and low-memory approximation of functions of many variables, Sampling Theory, Signal Processing, and Data Analysis 20 (2021), no. 1, 1.

  27. Craig Gross, Mark A Iwen, Lutz Kämmerer, and Toni Volkmer, A deterministic algorithm for constructing multiple rank-1 lattices of near-optimal size, Advances in Computational Mathematics 47 (2021), no. 6, 1–24.

  28. Tristan Guillaume, On the multidimensional Black-Scholes partial differential equation, Annals of Operations Research 281 (2019), no. 1, 229–251 (en).

  29. Haitham Hassanieh, Piotr Indyk, Dina Katabi, and Eric Price, Simple and practical algorithm for sparse Fourier transform, Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, SIAM, 2012, pp. 1183–1194.

  30. Mark A. Iwen, Combinatorial sublinear-time Fourier algorithms, Foundations of Computational Mathematics 10 (2010), no. 3, 303–338.

  31. Dante Kalise and Karl Kunisch, Polynomial approximation of high-dimensional Hamilton–Jacobi–Bellman equations and applications to feedback control of semilinear parabolic PDEs, SIAM Journal on Scientific Computing 40 (2018), no. 2, A629–A652, Publisher: Society for Industrial and Applied Mathematics.

  32. Frances Kuo, Giovanni Migliorati, Fabio Nobile, and Dirk Nuyens, Function integration, reconstruction and approximation using rank-1 lattices, Mathematics of Computation 90 (2021), no. 330, 1861–1897 (en).

  33. Friedrich Kupka, Sparse grid spectral methods for the numerical solution of partial differential equations with periodic boundary conditions, Ph.D. thesis, Universität Wien, Vienna, Austria, 1997.

  34. Lutz Kämmerer, Stefan Kunis, and Daniel Potts, Interpolation lattices for hyperbolic cross trigonometric polynomials, Journal of Complexity 28 (2012), no. 1, 76–92 (en).

  35. Lutz Kämmerer, Daniel Potts, and Toni Volkmer, Approximation of multivariate periodic functions by trigonometric polynomials based on rank-1 lattice sampling, Journal of Complexity 31 (2015), no. 4, 543–576 (en).

  36. Dong Li and Fred J. Hickernell, Trigonometric spectral collocation methods on lattices, Recent advances in scientific computing and partial differential equations (Hong Kong, 2002), Contemp. Math., vol. 330, Amer. Math. Soc., Providence, RI, 2003, pp. 121–132.

  37. Sami Merhi, Ruochuan Zhang, Mark A. Iwen, and Andrew Christlieb, A new class of fully discrete sparse Fourier transforms: Faster stable implementations with guarantees, Journal of Fourier Analysis and Applications 25 (2019), no. 3, 751–784 (en).

  38. Hans Munthe-Kaas and Tor Sørevik, Multidimensional pseudo-spectral methods on lattice grids, Applied Numerical Mathematics 62 (2012), no. 3, 155–165 (en).

  39. Gerlind Plonka, Daniel Potts, Gabriele Steidl, and Manfred Tasche, Numerical Fourier analysis, Applied and Numerical Harmonic Analysis, Springer International Publishing, Cham, 2018 (en).

  40. Holger Rauhut and Christoph Schwab, Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations, Mathematics of Computation 86 (2017), no. 304, 661–700 (en).

  41. Holger Rauhut and Rachel Ward, Interpolation via weighted \( \ell _1 \) minimization, Applied and Computational Harmonic Analysis 40 (2016), no. 2, 321–351 (en).

  42. A.D. Rubio, A. Zalts, and C.D. El Hasi, Numerical solution of the advection-reaction-diffusion equation at different scales, Environmental Modelling & Software 23 (2008), no. 1, 90–95 (en).

  43. Jie Shen and Li-Lian Wang, Sparse spectral approximations of high-dimensional problems based on hyperbolic cross, SIAM Journal on Numerical Analysis 48 (2010), no. 3, 1087–1109.

  44. Weiqi Wang and Simone Brugiapaglia, Compressive Fourier collocation methods for high-dimensional diffusion equations with periodic boundary conditions, 2022.

  45. H. Yserentant, Sparse grid spaces for the numerical solution of the electronic Schrödinger equation, Numerische Mathematik 101 (2005), no. 2, 381–389 (en).

  46. Harry Yserentant, On the regularity of the electronic Schrödinger equation in Hilbert spaces of mixed derivatives, Numerische Mathematik 98 (2004), no. 4, 731–759 (en).

Acknowledgements

This work was supported in part by the National Science Foundation Award Numbers DMS 2106472 and 1912706. This work was also supported in part through computational resources and services provided by the Institute for Cyber-Enabled Research at Michigan State University. We thank Lutz Kämmerer for helpful discussions related to random rank-1 lattice construction and Ben Adcock and Simone Brugiapaglia for motivating discussions related to compressive sensing and high-dimensional PDEs. We also thank the reviewers of this manuscript for their insightful comments and suggestions.

Author information

Correspondence to Craig Gross.

Additional information

Communicated by Tino Ullrich.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Stamp Set Cardinality Bound

We begin by proving the following combinatorial upper bound for the cardinality of a stamp set.

Lemma 8

Suppose that \( \textbf{ 0 } \in {{\,\textrm{supp}\,}}(\hat{ a }) \), \( {{\,\textrm{supp}\,}}(\hat{ a }) = -{{\,\textrm{supp}\,}}(\hat{ a }) \), and \( \vert {{\,\textrm{supp}\,}}(\hat{ a }) \vert = s \). Then

$$\begin{aligned} \vert \mathcal {S}^N[\hat{ a }]({{\,\textrm{supp}\,}}(\hat{ f })) \vert \le \vert {{\,\textrm{supp}\,}}(\hat{ f }) \vert \sum _{ n = 0 }^N \sum _{ t = 0 }^{ \min (n, (s-1)/2) } 2^{ t } \left( {\begin{array}{c}(s-1)/2\\ t\end{array}}\right) \left( {\begin{array}{c}n - 1\\ t - 1\end{array}}\right) . \end{aligned}$$
(A1)

Proof

We begin by separating \( \mathcal {S}^N \) into the disjoint pieces

$$\begin{aligned} \mathcal {S}^N = \bigsqcup _{ n = 0 }^{ N } \left( \mathcal {S}^n \setminus \left( \bigcup _{ i = 0 }^{ n - 1 } \mathcal {S}^i \right) \right) \end{aligned}$$

and computing the cardinality of each of these sets (where we take \( \mathcal {S}^{ -1 } = \emptyset \)). If \( \textbf{ k } \in \mathcal {S}^n {\setminus } \left( \cup _{ i = 0 }^{ n - 1 } \mathcal {S}^i \right) \), then we can write \( \textbf{ k } \) as

$$\begin{aligned} \textbf{ k } = \textbf{ k }_f + \sum _{ m = 1 }^n \textbf{ k }_a^m \end{aligned}$$
(A2)

where \( \textbf{ k }_f \in {{\,\textrm{supp}\,}}(\hat{ f }) \) and \( \textbf{ k }_a^m \in {{\,\textrm{supp}\,}}(\hat{ a }) {\setminus } \{ \textbf{ 0 } \} \) for all \( m = 1, \ldots , n \). Additionally, since \( \textbf{ k } \) is not in any earlier stamping sets, this is the smallest n for which such a decomposition is possible. In particular, no two frequencies in the sum can be negatives of each other, as this would result in a pair of cancelling terms.

With this summation in mind, arbitrarily split \( {{\,\textrm{supp}\,}}( \hat{ a } ) {\setminus } \{ \textbf{ 0 } \} \) into \( A \sqcup -A \) (i.e., place all frequencies which do not negate each other into A and their negatives in \( -A \)). By collecting like frequencies that occur as a \( \textbf{ k }_a^m \) term in (A2), we can rewrite this sum as

$$\begin{aligned} \textbf{ k } = \textbf{ k }_f + \sum _{ \textbf{ k }_a \in A } s(\textbf{ k }, \textbf{ k }_a) m(\textbf{ k }, \textbf{ k }_a) \textbf{ k }_a, \end{aligned}$$
(A3)

where the sign function \( s(\textbf{ k }, \textbf{ k }_a) \) is given by

$$\begin{aligned} s(\textbf{ k }, \textbf{ k }_a):= {\left\{ \begin{array}{ll} 1 &{}\text {if } \textbf{ k }_a \text { is a term in the summation }(A2)\\ -1 &{}\text {if } -\textbf{ k }_a \text { is a term in the summation }(A2)\\ 0 &{}\text {otherwise} \end{array}\right. } \end{aligned}$$

and the multiplicity function \( m(\textbf{ k }, \textbf{ k }_a) \) is defined as the number of times that \( \textbf{ k }_a \) or \( -\textbf{ k }_a \) appears as a \( \textbf{ k }_a^m \) term in (A2). Letting \( \textbf{ s }(\textbf{ k }):= (s(\textbf{ k }, \textbf{ k }_a))_{ \textbf{ k }_a \in A } \) and \( \textbf{ m }(\textbf{ k }):= (m(\textbf{ k }, \textbf{ k }_a))_{ \textbf{ k }_a \in A } \), we can then identify any \( \textbf{ k } \in \mathcal {S}^n {\setminus } \left( \cup _{ i = 0 }^{ n - 1 } \mathcal {S}^i \right) \) with the tuple

$$\begin{aligned} (\textbf{ k }_f, \textbf{ s }(\textbf{ k }), \textbf{ m }(\textbf{ k })) \in {{\,\textrm{supp}\,}}(\hat{ f }) \times \{-1, 0, 1\}^{ A } \times \{0, \ldots , n\}^A. \end{aligned}$$

Upper bounding the number of these tuples that can correspond to a value of \( \textbf{ k } \in \mathcal {S}^n {\setminus } \left( \cup _{ i = 0 }^{ n - 1 } \mathcal {S}^i \right) \) will then upper bound the cardinality of this set.

Since any \( \textbf{ k }_f \in {{\,\textrm{supp}\,}}(\hat{ f }) \) can result in a valid \( \textbf{ k } \) value, we will focus on the pairs of sign and multiplicity vectors. Define by \( T_n \subseteq \{-1, 0, 1\}^A \times \{0, \ldots , n\}^A \) the set of valid sign and multiplicity pairs that can correspond to a \( \textbf{ k } \in \mathcal {S}^n {\setminus } \left( \cup _{ i = 0 }^{ n - 1 } \mathcal {S}^i \right) \). In particular, for \( (\textbf{ s }, \textbf{ m }) \in T_n \), \( \vert \vert \textbf{ m } \vert \vert _1 = n \) and \( {{\,\textrm{supp}\,}}(\textbf{ s } ) = {{\,\textrm{supp}\,}}( \textbf{ m } ) \). Thus, we can write

$$\begin{aligned} \begin{aligned} T_n \subseteq \bigsqcup _{ t = 0 }^{ \min (n, |A|) }&\left\{ (\textbf{ s }, \textbf{ m }) \in \{-1, 0, 1\}^A \times \{0, \ldots , n\}^A \right. \\ {}&\qquad \left. \mid \vert \vert \textbf{ m } \vert \vert _1 = n \text { and } |{{\,\textrm{supp}\,}}(\textbf{ s })| = |{{\,\textrm{supp}\,}}(\textbf{ m })| = t \right\} . \end{aligned} \end{aligned}$$

This inner set then corresponds to the t-partitions of the integer n spread over the |A| entries of \( \textbf{ m } \) where each non-zero term is assigned a sign \( -1 \) or 1. The cardinality is therefore \( 2^t \left( {\begin{array}{c} |A| \\ t \end{array}}\right) \left( {\begin{array}{c}n - 1\\ t - 1\end{array}}\right) \): the first factor is from the possible sign options, the second is the number of ways to choose the entries of \( \textbf{ m } \) which are nonzero, and the last is the number of t-partitions of n which will fill the nonzero entries of \( \textbf{ m } \). Noting that \( |A| = \frac{s - 1}{2} \), our final cardinality estimate is

$$\begin{aligned} \vert \mathcal {S}^N\vert&= \sum _{ n = 0 }^{ N } \vert \mathcal {S}^n \setminus \left( \bigcup _{ i = 0 }^{ n - 1 } \mathcal {S}^i \right) \vert \\&\le \sum _{ n = 0 }^N \vert {{\,\textrm{supp}\,}}(\hat{ f })\vert |T_n|\\&\le \vert {{\,\textrm{supp}\,}}(\hat{ f }) \vert \sum _{ n = 0 }^N \sum _{ t = 0 }^{ \min (n, (s-1)/2) } 2^{ t } \left( {\begin{array}{c}(s-1)/2\\ t\end{array}}\right) \left( {\begin{array}{c}n - 1\\ t - 1\end{array}}\right) \end{aligned}$$

as desired. \(\square \)
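
To make the counting argument concrete, the following is a minimal Python sketch that brute-forces a small two-dimensional instance and checks it against the right-hand side of (A1). It assumes the stamp set can be realized as the iterated sumset \( {{\,\textrm{supp}\,}}(\hat{ f }) + {{\,\textrm{supp}\,}}(\hat{ a }) + \cdots + {{\,\textrm{supp}\,}}(\hat{ a }) \), consistent with the decomposition (A2) since \( \textbf{ 0 } \in {{\,\textrm{supp}\,}}(\hat{ a }) \); the particular frequency sets below are illustrative only.

from math import comb

def binom(n, k):
    # Binomial coefficient with the conventions C(-1, -1) = 1 and C(m, -1) = 0 for m >= 0,
    # matching the n = 0, t = 0 term of (A1).
    if k < 0:
        return 1 if n == k else 0
    return comb(n, k) if 0 <= k <= n else 0

def stamp_set(F, A, N):
    # N-fold stamp set: all k_f + k_1 + ... + k_N with k_f in F and each k_m in A.
    # Since 0 lies in A, this also contains every lower-order stamp set.
    S = set(F)
    for _ in range(N):
        S = {tuple(x + y for x, y in zip(k, a)) for k in S for a in A}
    return S

# Illustrative two-dimensional data: symmetric diffusion support A (containing 0)
# and a two-element right-hand-side support F.
A = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}   # s = 5, A = -A, 0 in A
F = {(0, 0), (2, 1)}
N, s = 3, len(A)

lhs = len(stamp_set(F, A, N))
rhs = len(F) * sum(2 ** t * binom((s - 1) // 2, t) * binom(n - 1, t - 1)
                   for n in range(N + 1)
                   for t in range(min(n, (s - 1) // 2) + 1))
print(lhs, "<=", rhs)   # prints 38 <= 50, so the bound (A1) holds for this example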

Though this upper bound is much tighter than the one given in the main text, it is harder to parse. As such, we simplify it to the bound presented in Lemma 2, restated here for convenience.

Lemma 2

Suppose that \( \textbf{ 0 } \in {{\,\textrm{supp}\,}}(\hat{ a }) \), \( {{\,\textrm{supp}\,}}(\hat{ a }) = -{{\,\textrm{supp}\,}}(\hat{ a }) \), and \( \vert {{\,\textrm{supp}\,}}(\hat{ a }) \vert = s \). Then

$$\begin{aligned} \vert \mathcal {S}^N[\hat{ a }]({{\,\textrm{supp}\,}}(\hat{ f }))\vert \le 6 \vert {{\,\textrm{supp}\,}}(\hat{ f }) \vert \left( \frac{\max (s, 2N)}{\sqrt{2}} \right) ^{ \min (s, 2N) }. \end{aligned}$$

Proof

Let \( r = (s - 1) / 2 \). We consider two cases:

Case 1::

(\( s \ge 2N \implies r \ge N\)) We estimate the innermost sum of (A1). Since \( r \ge N \ge n\), \( \min (n, (s - 1) / 2) = n \). By upper bounding the binomial coefficients with powers of r, we obtain

$$\begin{aligned} \sum _{ t = 0 }^{ n } 2^t \left( {\begin{array}{c}r\\ t\end{array}}\right) \left( {\begin{array}{c}n-1\\ t-1\end{array}}\right)&\le \sum _{ t = 0 }^n 2^t (r^t)^2 \\&\le 2(2r^2)^{ n } \end{aligned}$$

where the second estimate follows from bounding the geometric sum by twice its largest term. Bounding the next geometric sum in the same way, we have

$$\begin{aligned} \vert \mathcal {S}^N \vert \le \vert {{\,\textrm{supp}\,}}(\hat{ f }) \vert \sum _{ n = 0 }^N 2(2r^2)^n \le \vert {{\,\textrm{supp}\,}}(\hat{ f }) \vert 4(2r^2)^N \le 4 \vert {{\,\textrm{supp}\,}}(\hat{ f }) \vert \left( \frac{ s }{\sqrt{ 2 }}\right) ^{ 2N }. \end{aligned}$$
Case 2::

(\( s< 2N \implies r < N \)) Bounding the innermost sum of (A1) proceeds much the same way as Case 1, but we must first split the outermost sum into the first \( r + 1 \) terms and last \( N - r \) terms. Working with the first terms, we find

$$\begin{aligned} \sum _{ n = 0 }^r \sum _{ t = 0 }^{ n } 2^t \left( {\begin{array}{c}r\\ t\end{array}}\right) \left( {\begin{array}{c}n-1\\ t-1\end{array}}\right) \le 4(2r^2)^r \end{aligned}$$

using the argument in Case 1. Now, we bound

$$\begin{aligned} \sum _{ n = r + 1 }^N \sum _{ t = 0 }^{ r } 2^t \left( {\begin{array}{c}r\\ t\end{array}}\right) \left( {\begin{array}{c}n-1\\ t-1\end{array}}\right)&\le \sum _{ n = r + 1 }^N 2(2(n - 1)^2)^r \\&\le 2N(2(N - 1)^2)^r \\&\le \sqrt{ 2 } (\sqrt{ 2 }N)^{ 2r + 1 }. \end{aligned}$$

Thus,

$$\begin{aligned} \vert \mathcal {S}^N \vert \le \vert {{\,\textrm{supp}\,}}(\hat{ f }) \vert \left[ 4(2r^2)^r + \sqrt{ 2 } (\sqrt{ 2 }N)^{ 2r + 1 } \right] \le (4 + \sqrt{ 2 })\vert {{\,\textrm{supp}\,}}(\hat{ f }) \vert \left( \frac{2N}{\sqrt{ 2 }} \right) ^s. \end{aligned}$$

Combining the two cases gives the desired upper bound.

\(\square \)
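
To give a rough sense of what the simplification costs, the following minimal Python sketch (the \( (s, N) \) pairs are arbitrary, illustrative choices) evaluates the per-\( \vert {{\,\textrm{supp}\,}}(\hat{ f }) \vert \) factor of (A1) alongside the simplified factor \( 6 \left( \max (s, 2N)/\sqrt{2} \right) ^{ \min (s, 2N) } \) from Lemma 2.

from math import comb, sqrt

def binom(n, k):
    # C(-1, -1) = 1 and C(m, -1) = 0 for m >= 0, as in the n = 0 term of (A1)
    if k < 0:
        return 1 if n == k else 0
    return comb(n, k) if 0 <= k <= n else 0

def factor_A1(s, N):
    # double sum in (A1), i.e. the bound divided by |supp(f-hat)|
    r = (s - 1) // 2
    return sum(2 ** t * binom(r, t) * binom(n - 1, t - 1)
               for n in range(N + 1) for t in range(min(n, r) + 1))

def factor_lemma2(s, N):
    # simplified bound of Lemma 2, again divided by |supp(f-hat)|
    return 6 * (max(s, 2 * N) / sqrt(2)) ** min(s, 2 * N)

for s, N in [(5, 2), (5, 10), (21, 3)]:
    print(f"s = {s:2d}, N = {N:2d}:  (A1) factor = {factor_A1(s, N):5d},"
          f"  Lemma 2 factor = {factor_lemma2(s, N):.2e}")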

Appendix B: Proof of SFT Recovery Guarantees

We restate the theorem for convenience.

Theorem 2

([26], Corollary 2) Let \( \mathcal {I} \subseteq \mathbb {Z}^d\) be a frequency set of interest with expansion defined as \( K:= \max _{ j \in \{1, \ldots , d\} } (\max _{ \textbf{ k } \in \mathcal {I} } k_j - \min _{ \textbf{ l } \in \mathcal {I}} l_j ) + 1 \) (i.e., the sidelength of the smallest hypercube containing \( \mathcal {I} \)), and \( \Lambda ( \textbf{ z }, M) \) be a reconstructing rank-1 lattice for \( \mathcal {I} \).

There exists a fast, randomized SFT which, given \( \Lambda (\textbf{ z }, M) \), sampling access to \( g \in L^2 \), and a failure probability \( \sigma \in (0, 1] \), will produce a 2s-sparse approximation \( \hat{ \textbf{ g } }^s \) of \( {\hat{g}} \) and function \( g^s:= \sum _{ \textbf{ k } \in {{\,\textrm{supp}\,}}( \hat{ \textbf{ g }}^s) } {\hat{g}}_{ \textbf{ k } }^s e_\textbf{ k } \) approximating g satisfying

$$\begin{aligned} \vert \vert g - g^s \vert \vert _{ L^2 } \le \vert \vert {\hat{g}} - \hat{ \textbf{ g } }^s \vert \vert _{ \ell ^2 }&\le (25 + 3K) \left[ \frac{\vert \vert \hat{ g }|_{ \mathcal {I} } - (\hat{ g }|_{ \mathcal {I} })_s^\textrm{opt}\vert \vert _1}{\sqrt{ s }} + \sqrt{ s } \vert \vert \hat{ g } - \hat{ g }|_{ \mathcal {I} } \vert \vert _1 \right] \end{aligned}$$

with probability exceeding \( 1 - \sigma \). If \( g \in L^\infty \), then we additionally have

$$\begin{aligned} \vert \vert g - g^s \vert \vert _{ L^\infty } \le \vert \vert {\hat{g}} - \hat{ \textbf{ g } }^s \vert \vert _{ \ell ^1 } \le (33 + 4K) \left[ \vert \vert \hat{ g } |_{ \mathcal {I} } - (\hat{ g }|_{ \mathcal {I} })_s^\textrm{opt} \vert \vert _1 + \vert \vert \hat{ g } - \hat{ g }|_{ \mathcal {I} } \vert \vert _1 \right] \end{aligned}$$

with the same probability estimate. The total number of samples of g and computational complexity of the algorithm can be bounded above by

$$\begin{aligned} \mathcal {O} \left( d s \log ^3( d K M) \log \left( \frac{d K M }{\sigma } \right) \right) . \end{aligned}$$

Proof

The \( L^2 \) upper bound is mostly the same as the original result. We are not considering noisy measurements here, which removes the \( \sqrt{ s } e_\infty \) term from that result (though this could be added back in if desired). Additionally, we have upper bounded \( \vert \vert \hat{ g } - \hat{ g }|_{ \mathcal {I} } \vert \vert _2 \) by \( \sqrt{s} \vert \vert \hat{ g } - \hat{ g }|_{ \mathcal {I} } \vert \vert _1 \), adding one to the constant.

The \( L^\infty \) / \( \ell ^1 \) bound was not given in the original paper, but can be proven using the same techniques. In particular, replacing the \( \ell ^2 \) norm by the \( \ell ^1 \) norm in [26, Lemma 4] has the effect of replacing all \( \ell ^2 \) norms with \( \ell ^1 \) norms and replacing \( \sqrt{ 2s } \) by 2s. This small change cascades through the proof of Property 3 in [26, Theorem 2] (again, with \( \ell ^2 \) norms replaced by \( \ell ^1 \) norms) to produce the univariate \( \ell ^1 \) upper bound (in the language of the original paper)

$$\begin{aligned} \vert \vert \hat{ \textbf{ a } } - \textbf{ v } \vert \vert _1 \le \vert \vert \hat{ \textbf{ a } } - \hat{ \textbf{ a } }_{ 2s }^\textrm{opt} \vert \vert _1 + (16 + 6 \sqrt{ 2 }) \left( \vert \vert \hat{ \textbf{ a } } - \hat{ \textbf{ a } } _{ s }^\textrm{opt} \vert \vert _1 + s (\vert \vert \hat{ a } - \hat{ \textbf{ a } } \vert \vert _1 + \vert \vert \mu \vert \vert _\infty )\right) =: \eta _1. \end{aligned}$$

Similar logic applies to revising the proof of [26, Lemma 1]. Equation (4), with all \( \ell ^2 \) norms replaced by \( \ell ^1 \) norms, is derived the same way, and the first term is upper bounded by the maximal entry of the vector multiplied by the number of elements, without the square root. The remainder of the proof carries through without change, which leads to a final error estimate of

$$\begin{aligned} \vert \vert \textbf{ b } - c \vert \vert _{ \ell ^2 } \le (\beta + \eta _\infty ) \max (s - \vert \mathcal {S}_\beta \vert , 0 ) + \eta _1 + \vert \vert c|_{ \mathcal {I} } - c|_{ \mathcal {S}_\beta } \vert \vert _{ 1 } + \vert \vert c - c|_{ \mathcal {I} } \vert \vert _1. \end{aligned}$$

Finally, the proof of [26, Corollary 2] follows using the same logic as the original substituting these revised upper bounds. \(\square \)
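
For a sense of scale of the sampling bound in Theorem 2, the following minimal Python sketch computes the expansion K of a toy frequency set and then evaluates the \( \mathcal {O}(\cdot ) \) expression with all constants dropped; the frequency set, lattice size M, and failure probability below are illustrative assumptions rather than quantities taken from the paper.

from math import log2

def expansion_K(I):
    # Sidelength of the smallest hypercube containing the frequency set I (as in Theorem 2).
    d = len(next(iter(I)))
    return max(max(k[j] for k in I) - min(k[j] for k in I) for j in range(d)) + 1

def sft_cost(d, s, K, M, sigma):
    # Sample/operation count suggested by Theorem 2, with all constants dropped.
    return d * s * log2(d * K * M) ** 3 * log2(d * K * M / sigma)

print(expansion_K({(0, 0), (3, -2), (-5, 1)}))        # -> 9

# Hypothetical setting: 50 active frequencies inside [-16, 16]^10 in dimension d = 10.
d, s, K = 10, 50, 33
M = 10 ** 6          # assumed size of a reconstructing rank-1 lattice for this setting
print(f"sparse FFT cost (up to constants): {sft_cost(d, s, K, M, sigma=0.01):.2e}")
print(f"full tensor grid over the same hypercube: {K ** d:.2e} points")

Even with a generous choice of M, this count grows only polynomially in d and s (and logarithmically in K and M), whereas a full tensor-product grid over the same hypercube requires \( K^d \) samples.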

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Gross, C., Iwen, M. Sparse Spectral Methods for Solving High-Dimensional and Multiscale Elliptic PDEs. Found Comput Math (2024). https://doi.org/10.1007/s10208-024-09649-8
