
Sparse Fourier transforms on rank-1 lattices for the rapid and low-memory approximation of functions of many variables

  • Original Article
  • Published:
Sampling Theory, Signal Processing, and Data Analysis

Abstract

This paper considers fast and provably accurate algorithms for approximating smooth functions on the d-dimensional torus, \(f: \mathbb {T}^d \rightarrow \mathbb {C}\), that are sparse (or compressible) in the multidimensional Fourier basis. In particular, suppose that the Fourier series coefficients of f, \(\{c_\mathbf{k} (f) \}_{\mathbf{k} \in \mathbb {Z}^d}\), are concentrated in a given arbitrary finite set \(\mathscr {I} \subset \mathbb {Z}^d\) so that

$$\begin{aligned} \min _{\Omega \subset \mathscr {I} ~s.t.~ \left| \Omega \right| =s }\left\| f - \sum _{\mathbf{k} \in \Omega } c_\mathbf{k} (f) \, \mathbb {e}^{ -2 \pi \mathbb {i} {{\mathbf {k}}}\cdot \circ } \right\| _2 < \epsilon \Vert f \Vert _2 \end{aligned}$$

holds for \(s \ll \left| \mathscr {I} \right| \) and \(\epsilon \in (0,1)\) small. In such cases we aim to both identify a near-minimizing subset \(\Omega \subset \mathscr {I}\) and accurately approximate its associated Fourier coefficients \(\{ c_\mathbf{k} (f) \}_{\mathbf{k} \in \Omega }\) as rapidly as possible. In this paper we present both explicit deterministic and randomized algorithms for solving this problem using \(\mathscr {O}(s^2 d \log ^c (|\mathscr {I}|))\)-time/memory and \(\mathscr {O}(s d \log ^c (|\mathscr {I}|))\)-time/memory, respectively. Most crucially, all of the methods proposed herein achieve these runtimes while simultaneously satisfying theoretical best s-term approximation guarantees that certify their numerical accuracy and robustness to noise for general functions. These results are achieved by modifying several different one-dimensional Sparse Fourier Transform (SFT) methods to subsample a function along a reconstructing rank-1 lattice for the given frequency set \(\mathscr {I} \subset \mathbb {Z}^d\) in order to rapidly identify a near-minimizing subset \(\Omega \subset \mathscr {I}\) as above, without using anything about the lattice beyond its generating vector. This requires the development of new fast and low-memory frequency identification techniques capable of rapidly recovering vector-valued frequencies in \(\mathbb {Z}^d\), as opposed to the simple integer frequencies of the univariate setting. Two different multivariate frequency identification strategies are proposed, analyzed, and shown to lead to their own best s-term approximation methods herein, each with a different accuracy versus computational speed and memory tradeoff.
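
The central reduction behind these methods can be illustrated numerically: sampling a multivariate trigonometric polynomial along a rank-1 lattice with generating vector \(\mathbf{z}\) and size M collapses each frequency \(\mathbf{k}\) to the univariate frequency \(\mathbf{k} \cdot \mathbf{z} \bmod M\). A minimal sketch in pure Python (the lattice size, generating vector, and coefficients below are hypothetical illustration values, not ones from the paper):

```python
import cmath

M = 53                                        # lattice size (hypothetical)
z = (1, 7, 30)                                # hypothetical generating vector in Z^3
coeffs = {(2, 0, 1): 1.5, (0, 3, 0): -0.5j}   # a sparse set of Fourier coefficients

def f(x):
    """Multivariate trigonometric polynomial on the torus T^3."""
    return sum(c * cmath.exp(2j * cmath.pi * sum(k * t for k, t in zip(kk, x)))
               for kk, c in coeffs.items())

# Samples of f along the rank-1 lattice nodes x_j = (j z / M) mod 1 ...
lattice_samples = [f(tuple((j * zi / M) % 1.0 for zi in z)) for j in range(M)]

# ... coincide with samples of the 1-D signal whose frequencies are k . z mod M.
def g(t):
    return sum(c * cmath.exp(2j * cmath.pi * (sum(k * zi for k, zi in zip(kk, z)) % M) * t)
               for kk, c in coeffs.items())

univariate_samples = [g(j / M) for j in range(M)]

assert all(abs(a - b) < 1e-9 for a, b in zip(lattice_samples, univariate_samples))
```

When the lattice is a reconstructing one for the frequency set (as here, where the two frequencies map to distinct residues 32 and 21 mod 53), the multivariate recovery problem reduces to a univariate SFT on these samples.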

Availability of data and material

Not applicable.

Code availability

All code is available at https://gitlab.com/grosscra/Rank1LatticeSparseFourier.

Notes

  1. Available at https://gitlab.com/grosscra/Rank1LatticeSparseFourier.

  2. Available at https://gitlab.com/grosscra/SublinearSparseFourierMATLAB.

References

  1. Bittens, S., Plonka, G.: Real sparse fast DCT for vectors with short support. Linear Algebra Appl. 582, 359–390 (2019). https://doi.org/10.1016/j.laa.2019.08.006

  2. Bittens, S., Plonka, G.: Sparse fast DCT for vectors with one-block support. Numer. Algorithms 82(2), 663–697 (2019). https://doi.org/10.1007/s11075-018-0620-1

  3. Choi, B., Christlieb, A., Wang, Y.: Multiscale High-Dimensional Sparse Fourier Algorithms for Noisy Data. ArXiv e-prints (2019). arXiv:1907.03692

  4. Choi, B., Christlieb, A., Wang, Y.: High-dimensional sparse Fourier algorithms. Numer. Algorithms (2020). https://doi.org/10.1007/s11075-020-00962-1

  5. Choi, B., Iwen, M.A., Krahmer, F.: Sparse harmonic transforms: a new class of sublinear-time algorithms for learning functions of many variables. Found. Comput. Math. (2020). https://doi.org/10.1007/s10208-020-09462-z

  6. Choi, B., Iwen, M.A., Volkmer, T.: Sparse harmonic transforms II: best s-term approximation guarantees for bounded orthonormal product bases in sublinear-time. ArXiv e-prints (2020). arXiv:1909.09564

  7. Christlieb, A., Lawlor, D., Wang, Y.: A multiscale sub-linear time Fourier algorithm for noisy data. Appl. Comput. Harmon. Anal. 40(3), 553–574 (2016). https://doi.org/10.1016/j.acha.2015.04.002

  8. Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best \(k\)-term approximation. J. Am. Math. Soc. 22(1), 211–231 (2009). https://doi.org/10.1090/S0894-0347-08-00610-3

  9. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Springer, Berlin (2013). https://doi.org/10.1007/978-0-8176-4948-7

  10. Gilbert, A.C., Indyk, P., Iwen, M., Schmidt, L.: Recent developments in the sparse Fourier transform: a compressed Fourier transform for big data. IEEE Signal Process. Mag. 31(5), 91–100 (2014). https://doi.org/10.1109/MSP.2014.2329131

  11. Gilbert, A.C., Muthukrishnan, S., Strauss, M.: Improved time bounds for near-optimal sparse Fourier representations. In: Papadakis, M., Laine, A.F., Unser, M.A. (eds.) Wavelets XI, vol. 5914, pp. 398 – 412. International Society for Optics and Photonics. SPIE (2005). https://doi.org/10.1117/12.615931

  12. Gilbert, A.C., Strauss, M.J., Tropp, J.A.: A tutorial on fast Fourier sampling. IEEE Signal Process. Mag. 25(2), 57–66 (2008). https://doi.org/10.1109/MSP.2007.915000

  13. Gross, C., Iwen, M.A., Kämmerer, L., Volkmer, T.: A deterministic algorithm for constructing multiple rank-1 lattices of near-optimal size. ArXiv e-prints (2020). arXiv:2003.09753

  14. Hassanieh, H., Indyk, P., Katabi, D., Price, E.: Simple and practical algorithm for sparse Fourier transform. In: Proceedings of the twenty-third annual ACM-SIAM symposium on discrete algorithms, pp. 1183–1194. ACM, New York (2012). https://doi.org/10.1137/1.9781611973099.93

  15. Iwen, M.A.: Combinatorial sublinear-time Fourier algorithms. Found. Comput. Math. 10(3), 303–338 (2010). https://doi.org/10.1007/s10208-009-9057-1

  16. Iwen, M.A.: Improved approximation guarantees for sublinear-time Fourier algorithms. Appl. Comput. Harmon. Anal. 34, 57–82 (2013). https://doi.org/10.1016/j.acha.2012.03.007

  17. Kämmerer, L.: High Dimensional Fast Fourier Transform Based on Rank-1 Lattice Sampling. Dissertation. Universitätsverlag Chemnitz (2014)

  18. Kämmerer, L.: Reconstructing multivariate trigonometric polynomials from samples along rank-1 lattices. In: Fasshauer, G.E., Schumaker, L.L. (eds.) Approximation Theory XIV: San Antonio 2013, pp. 255–271. Springer International Publishing, Berlin (2014). https://doi.org/10.1007/978-3-319-06404-8_14

  19. Kämmerer, L., Krahmer, F., Volkmer, T.: A sample efficient sparse FFT for arbitrary frequency candidate sets in high dimensions. ArXiv e-prints (2020). arXiv:2006.13053

  20. Kämmerer, L., Potts, D., Volkmer, T.: Approximation of multivariate periodic functions by trigonometric polynomials based on rank-1 lattice sampling. J. Complex. 31(4), 543–576 (2015). https://doi.org/10.1016/j.jco.2015.02.004

  21. Kämmerer, L., Potts, D., Volkmer, T.: High-dimensional sparse FFT based on sampling along multiple rank-1 lattices. Appl. Comput. Harmon. Anal. 51, 225–257 (2021). https://doi.org/10.1016/j.acha.2020.11.002

  22. Kapralov, M.: Sparse Fourier transform in any constant dimension with nearly-optimal sample complexity in sublinear time. In: Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC), pp. 264–277. Assoc. Comput. Mach., New York (2016). https://doi.org/10.1145/2897518.2897650

  23. Kuo, F.Y., Migliorati, G., Nobile, F., Nuyens, D.: Function integration, reconstruction and approximation using rank-1 lattices. ArXiv e-prints (2020). arXiv:1908.01178

  24. Kuo, F.Y., Sloan, I.H., Woźniakowski, H.: Lattice rules for multivariate approximation in the worst case setting. In: Monte Carlo and quasi-Monte Carlo methods 2004, pp. 289–330. Springer, Berlin (2006). https://doi.org/10.1007/3-540-31186-6_18

  25. Lawlor, D., Wang, Y., Christlieb, A.: Adaptive sub-linear time Fourier algorithms. Adv. Adapt. Data Anal. 05(01), 1350003 (2013). https://doi.org/10.1142/S1793536913500039

  26. Li, D., Hickernell, F.J.: Trigonometric spectral collocation methods on lattices. In: Cheng, S.Y., Shu, C.-W., Tang T. (eds.) Recent advances in scientific computing and partial differential equations (Hong Kong, 2002), Contemp. Math., vol. 330, pp. 121–132. Amer. Math. Soc., Providence (2003). https://doi.org/10.1090/conm/330/05887

  27. Merhi, S., Zhang, R., Iwen, M.A., Christlieb, A.: A new class of fully discrete sparse Fourier transforms: Faster stable implementations with guarantees. J. Fourier Anal. Appl. 25(3), 751–784 (2019). https://doi.org/10.1007/s00041-018-9616-4

  28. Morotti, L.: Explicit universal sampling sets in finite vector spaces. Appl. Comput. Harmon. Anal. 43(2), 354–369 (2017). https://doi.org/10.1016/j.acha.2016.06.001

  29. Munthe-Kaas, H., Sørevik, T.: Multidimensional pseudo-spectral methods on lattice grids. Appl. Numer. Math. 62(3), 155–165 (2012). https://doi.org/10.1016/j.apnum.2011.11.002

  30. Plonka, G., Potts, D., Steidl, G., Tasche, M.: Numerical Fourier Analysis. Springer, Berlin (2018). https://doi.org/10.1007/978-3-030-04306-3

  31. Plonka, G., Wannenwetsch, K.: A sparse fast Fourier algorithm for real non-negative vectors. J. Comput. Appl. Math. 321, 532–539 (2017). https://doi.org/10.1016/j.cam.2017.03.019

  32. Plonka, G., Wannenwetsch, K., Cuyt, A., Lee, W.-s.: Deterministic sparse FFT for \(M\)-sparse vectors. Numer. Algorithms 78(1), 133–159 (2018). https://doi.org/10.1007/s11075-017-0370-5

  33. Potts, D., Volkmer, T.: Sparse high-dimensional FFT based on rank-1 lattice sampling. Appl. Comput. Harmon. Anal. 41(3), 713–748 (2016). https://doi.org/10.1016/j.acha.2015.05.002

  34. Segal, B., Iwen, M.: Improved sparse Fourier approximation results: faster implementations and stronger guarantees. Numer. Algorithms 63(2), 239–263 (2013). https://doi.org/10.1007/s11075-012-9621-7

  35. Temlyakov, V.N.: Reconstruction of periodic functions of several variables from the values at the nodes of number-theoretic nets. Anal. Math. 12(4), 287–305 (1986). https://doi.org/10.1007/BF01909367

  36. Temlyakov, V.N.: Approximation of Periodic Functions. Computational Mathematics Analysis Series. Nova Sci. Publ., Inc., Commack (1993)

  37. Volkmer, T.: Multivariate Approximation and High-Dimensional Sparse FFT Based on Rank-1 Lattice Sampling. Dissertation. Universitätsverlag Chemnitz (2017)

Funding

Craig Gross and Mark Iwen were supported in part by the National Science Foundation (NSF) DMS 1912706. Toni Volkmer was supported in part by Sächsische Aufbaubank-Förderbank-(SAB) 100378180. Lutz Kämmerer was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 380648269.

Author information

Corresponding author

Correspondence to Craig Gross.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Communicated by Simon Foucart.

Appendices

Proof of Theorem 1

We begin with a lemma which will be used to slightly improve the error bounds in Theorems 1 and 2 from their original statements.

Lemma 4

For \( {{\mathbf {x}}}\in {\mathbb {C}}^K \) and \( \mathscr {S}_\tau := \{ k \in [K] \mid |x_k| \ge \tau \} \), if \( \tau \ge \frac{\Vert {{\mathbf {x}}}- {{\mathbf {x}}}_s^\mathrm {opt}\Vert _1}{s} \), then \( | \mathscr {S}_\tau | \le 2s \) and

$$\begin{aligned} \Vert {{\mathbf {x}}}- {{\mathbf {x}}}|_{ \mathscr {S}_\tau }\Vert _{ 2 } \le \Vert {{\mathbf {x}}}- {{\mathbf {x}}}_{ 2s }^\mathrm {opt} \Vert _2 + \tau \sqrt{ 2s } . \end{aligned}$$

Proof

Ordering the entries of \( {{\mathbf {x}}}\) by magnitude in descending order (with ties broken arbitrarily) as \( |x_{ k_1 }| \ge |x_{ k_2 }| \ge \ldots \), we first note that

$$\begin{aligned} \Vert {{\mathbf {x}}}- {{\mathbf {x}}}_s^\mathrm {opt}\Vert _1 \ge \sum _{ j = s + 1 }^{ 2s } |x_{ k_j }| \ge s |x_{ k_{ 2s } }|. \end{aligned}$$

By assumption then, \( \tau \ge | x_{ k_{ 2s } }| \), and since \( \mathscr {S}_\tau \) contains the \( | \mathscr {S}_\tau | \)-many largest entries of \( {{\mathbf {x}}}\), we must have \( \mathscr {S}_\tau \subset {{\,\mathrm{supp}\,}}({{\mathbf {x}}}_{ 2s }^\mathrm {opt}) \). Note then that \( | \mathscr {S}_\tau | \le 2s \). Finally, we calculate

$$\begin{aligned} \Vert {{\mathbf {x}}}- {{\mathbf {x}}}|_{ \mathscr {S}_\tau }\Vert _2&\le \Vert {{\mathbf {x}}}- {{\mathbf {x}}}_{ 2s }^\mathrm {opt} \Vert _2 + \Vert {{\mathbf {x}}}_{ 2s }^\mathrm {opt} - {{\mathbf {x}}}|_{ \mathscr {S}_\tau }\Vert _2 \\&\le \Vert {{\mathbf {x}}}- {{\mathbf {x}}}_{ 2s }^\mathrm {opt} \Vert _2 + \sqrt{ \sum _{ k \in {{\,\mathrm{supp}\,}}({{\mathbf {x}}}_{ 2s }^\mathrm {opt}) \setminus \mathscr {S}_\tau } |x_k|^2 } \\&\le \Vert {{\mathbf {x}}}- {{\mathbf {x}}}_{ 2s }^\mathrm {opt} \Vert _2 + \tau \sqrt{ 2s } \end{aligned}$$

completing the proof. \(\square \)
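
The lemma is easy to sanity-check numerically. The following sketch (on a hypothetical vector of magnitudes, using the smallest admissible threshold \( \tau = \Vert {{\mathbf {x}}} - {{\mathbf {x}}}_s^\mathrm {opt} \Vert _1 / s \)) verifies both conclusions:

```python
# Hypothetical magnitudes |x_k|, already sorted in descending order.
mags = [5.0, 4.0, 3.0, 1.0, 0.5, 0.25, 0.1, 0.05]
s = 2

# Smallest threshold allowed by the lemma: tau >= ||x - x_s^opt||_1 / s.
tau = sum(mags[s:]) / s

S_tau = [k for k, m in enumerate(mags) if m >= tau]
assert len(S_tau) <= 2 * s                       # first conclusion

# Second conclusion: ||x - x|_{S_tau}||_2 <= ||x - x_{2s}^opt||_2 + tau sqrt(2s).
lhs = sum(m * m for k, m in enumerate(mags) if k not in S_tau) ** 0.5
rhs = sum(m * m for m in mags[2 * s:]) ** 0.5 + tau * (2 * s) ** 0.5
assert lhs <= rhs
```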

We now give the proof of Theorem 1. For convenience we restate the theorem.

Theorem 1

For a signal \( a \in \mathscr {W}(\mathbb { T }) \cap C( \mathbb { T } ) \) corrupted by some arbitrary noise \( \mu : \mathbb { T } \rightarrow {\mathbb {C}}\), Algorithm 3 of [16], denoted \( \mathscr {A}_{ 2s, M }^{ \mathrm {sub} } \), will output a 2s-sparse coefficient vector \( {{\mathbf {v}}}\in {\mathbb {C}}^M \) which

  1. reconstructs every frequency of \( {\hat{{{\mathbf {a}}}}} \in {\mathbb {C}}^M \), \( \omega \in \mathscr {B}_M \), with corresponding Fourier coefficients meeting the tolerance

    $$\begin{aligned} | {\hat{a}}_\omega | > (4 + 2 \sqrt{ 2 }) \left( \frac{\Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_{ s }^\mathrm {opt}\Vert _1}{s} + \Vert {\hat{a}} - {\hat{{{\mathbf {a}}}}} \Vert _1 + \Vert \mu \Vert _\infty \right) , \end{aligned}$$
  2. satisfies the \( \ell ^\infty \) error estimate for recovered coefficients

    $$\begin{aligned} \Vert ({\hat{{{\mathbf {a}}}}} - {{\mathbf {v}}})|_{ {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) } \Vert _\infty \le \sqrt{ 2 } \left( \frac{\Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_{ s }^\mathrm {opt}\Vert _1}{s} + \Vert {\hat{a}} - {\hat{{{\mathbf {a}}}}} \Vert _1 + \Vert \mu \Vert _\infty \right) , \end{aligned}$$
  3. satisfies the \( \ell ^2 \) error estimate

    $$\begin{aligned} \Vert {\hat{{{\mathbf {a}}}}} - {{\mathbf {v}}}\Vert _2&\le \Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_{ 2s }^\mathrm {opt} \Vert _2 + \frac{(8 \sqrt{ 2 } + 6)\Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_s^\mathrm {opt} \Vert _1}{\sqrt{ s }} \\&\qquad + (8 \sqrt{ 2 } + 6)\sqrt{ s } (\Vert {\hat{a}} - {\hat{{{\mathbf {a}}}}} \Vert _1 + \Vert \mu \Vert _\infty ), \end{aligned}$$
  4. and the number of required samples of a and the operation count for \( \mathscr {A}_{ 2s, M }^{ \mathrm {sub} } \) are

    $$\begin{aligned} \mathscr {O} \left( \frac{s^2\log ^4 M}{\log s} \right) . \end{aligned}$$

The Monte Carlo variant of \( \mathscr {A}_{ 2s, M }^\mathrm {sub} \), denoted \( \mathscr {A}_{ 2s, M }^\mathrm {sub, MC} \) and referred to in Corollary 4 of [16], satisfies all of the conditions (1)–(3) simultaneously with probability \( (1 - \sigma ) \in [2/3, 1) \) and has number of required samples and operation count

$$\begin{aligned} \mathscr {O} \left( s \log ^3(M) \log \left( \frac{M}{\sigma } \right) \right) . \end{aligned}$$

The samples required by \( \mathscr {A}_{ 2s, M }^\mathrm {sub, MC} \) are a subset of those required by \( \mathscr {A}_{ 2s, M }^\mathrm {sub} \).

Proof

We refer to [16, Theorem 7] and its modification for noise robustness in [27, Lemma 4] for the proofs of properties (2) and (4). As for (1), [16, Lemma 6] and its modification in [27, Lemma 4] imply that any \( \omega \in \mathscr {B}_M \) with \( |{\hat{a}}_\omega | > 4 (\Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_s^\mathrm {opt}\Vert _1 / s + \Vert {\hat{a}} - {\hat{{{\mathbf {a}}}}} \Vert _1 + \Vert \mu \Vert _\infty ) =: 4\delta \) will be identified in [16, Algorithm 3]. An approximate Fourier coefficient for these and any other recovered frequencies is stored in the vector \( {{\mathbf {x}}}\) which satisfies the same estimate in property (2) by the proof of [16, Theorem 7] and [27, Lemma 4]. However, only the 2s largest magnitude values of \( {{\mathbf {x}}}\) will be returned in \( {{\mathbf {v}}}\). We therefore analyze what happens when some of the potentially large Fourier coefficients corresponding to frequencies in \( \mathscr {S}_{ 4\delta } \) do not have their approximations assigned to \( {{\mathbf {v}}}\).

By the definition of \( \mathscr {S}_\tau \) in Lemma 4 applied to \( {\hat{{{\mathbf {a}}}}} \), we must have \( | \mathscr {S}_{ 4\delta }| \le 2s = |{{\,\mathrm{supp}\,}}({{\mathbf {v}}})|\). If \( \omega \in \mathscr {S}_{ 4\delta } \setminus {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) \), there must then exist some other \( \omega ' \in {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) \setminus \mathscr {S}_{ 4\delta } \) which was identified and took the place of \( \omega \) in \( {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) \). For this to happen, \( |{\hat{a}}_{ \omega ' }| \le 4\delta \) and \( |x_{ \omega ' }| \ge |x_{ \omega }| \). But by property (2) (extended to all coefficients in \( {{\mathbf {x}}}\)), we know

$$\begin{aligned} 4 \delta + \sqrt{ 2 } \delta \ge |{\hat{a}}_{ \omega ' }| + \sqrt{ 2 } \delta \ge |x_{ \omega ' }| \ge |x_\omega | \ge |{\hat{a}}_\omega | - \sqrt{ 2 }\delta . \end{aligned}$$

Thus, any frequency in \( \mathscr {S}_{ 4\delta } \) not chosen satisfies \( |{\hat{a}}_\omega | \le (4 + 2\sqrt{ 2 }) \delta \), and so every frequency in \( \mathscr {S}_{ (4 + 2\sqrt{ 2 }) \delta } \) is in fact identified in \( {{\mathbf {v}}}\) verifying property (1).

As for property (3), we estimate the \( \ell ^2 \) error using property (2), Lemma 4, and the above argument as

$$\begin{aligned} \Vert {\hat{{{\mathbf {a}}}}} - {{\mathbf {v}}}\Vert _2&\le \Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}|_{ {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) } \Vert _2 + \Vert ({\hat{{{\mathbf {a}}}}} - {{\mathbf {v}}})|_{ {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) } \Vert _2 \\&\le \Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}|_{ \mathscr {S}_{ 4\delta } \cap {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) } \Vert _2 + \sqrt{ 2 } \delta \sqrt{ 2 s }\\&\le \Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}|_{ \mathscr {S}_{ 4\delta } } \Vert _2 + \Vert {\hat{{{\mathbf {a}}}}}|_{ \mathscr {S}_{ 4\delta } \setminus {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) }\Vert _2 + 2 \delta \sqrt{ s } \\&\le \Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_{ 2s }^\mathrm {opt} \Vert _2 + 4 \delta \sqrt{ 2s } + (4 + 2\sqrt{ 2 }) \delta \sqrt{ 2s } + 2 \delta \sqrt{ s }\\&= \Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_{ 2s }^\mathrm {opt} \Vert _2 + (8 \sqrt{ 2 } + 6) \sqrt{ s } \delta \end{aligned}$$

as desired. \(\square \)

Proof of Theorem 2

We again restate the theorem for convenience.

Theorem 2

For a signal \( a \in \mathscr {W}(\mathbb { T }) \cap C( \mathbb { T } ) \) corrupted by some arbitrary noise \( \mu : \mathbb { T } \rightarrow {\mathbb {C}}\), and \( 1 \le r \le \frac{M}{36} \), Algorithm 1 of [27], denoted \( \mathscr {A}_{ 2s, M }^{ \mathrm {disc} } \), will output a 2s-sparse coefficient vector \( {{\mathbf {v}}}\in {\mathbb {C}}^M \) which

  1. reconstructs every frequency of \( {{\mathbf {F}}}_M \, {{\mathbf {a}}}\in {\mathbb {C}}^M \), \( \omega \in \mathscr {B}_M \), with corresponding aliased Fourier coefficient meeting the tolerance

    $$\begin{aligned} | ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_\omega | > 12(1 + \sqrt{ 2 }) \left( \frac{\Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_s^\mathrm {opt}\Vert _1}{2s} + 2( \Vert {{\mathbf {a}}}\Vert _\infty M^{ -r } + \Vert \varvec{\mu } \Vert _{ \infty } )\right) , \end{aligned}$$
  2. satisfies the \( \ell ^\infty \) error estimate for recovered coefficients

    $$\begin{aligned} \Vert ({{\mathbf {F}}}_M \, {{\mathbf {a}}}- {{\mathbf {v}}})|_{ {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) } \Vert _\infty \le 3 \sqrt{ 2 } \left( \frac{\Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_s^\mathrm { opt }\Vert _1}{2s} + 2 (\Vert {{\mathbf {a}}}\Vert _\infty M^{ -r } + \Vert \varvec{\mu } \Vert _\infty ) \right) , \end{aligned}$$
  3. satisfies the \( \ell ^2 \) error estimate

    $$\begin{aligned} \Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- {{\mathbf {v}}}\Vert _2&\le \Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_{ 2s }^\mathrm {opt} \Vert _2 + 38 \frac{\Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_s^\mathrm {opt} \Vert _1}{\sqrt{ s }} \\&\quad + 152 \sqrt{ s } (\Vert {{\mathbf {a}}}\Vert _\infty M^{ -r } + \Vert \varvec{\mu } \Vert _\infty ), \end{aligned}$$
  4. and the number of required samples of \( {{\mathbf {a}}}\) and the operation count for \( \mathscr {A}_{ 2s, M }^\mathrm {disc} \) are

    $$\begin{aligned} \mathscr {O} \left( \frac{s^2 r^{ 3/2 } \log ^{ 11/2 } M}{\log s} \right) . \end{aligned}$$

The Monte Carlo variant of \( \mathscr {A}_{ 2s, M }^\mathrm {disc} \), denoted \( \mathscr {A}_{ 2s, M }^\mathrm {disc, MC} \), satisfies all of the conditions (1)–(3) simultaneously with probability \( (1 - \sigma ) \in [2/3, 1) \) and has number of required samples and operation count

$$\begin{aligned} \mathscr {O} \left( s r^{ 3/2 }\log ^{ 9/2 }(M) \log \left( \frac{M}{\sigma } \right) \right) . \end{aligned}$$

Proof

All notation in this proof matches that in [27] (in particular, we use f to denote the one-dimensional function in place of a in the theorem statement, and \( N = 2M + 1\)). We begin by substituting the \( 2\pi \)-periodic Gaussian filter given in (3) on page 756 with the 1-periodic Gaussian and associated Fourier transform

$$\begin{aligned} g(x) = \frac{1}{c_1} \sum _{ n = -\infty }^\infty \mathbb { e }^{ - \frac{(2\pi )^2(x - n)^2}{2 c_1^2} },\quad {\hat{g}}_\omega = \frac{1}{\sqrt{ 2\pi }} \mathbb { e }^{ - \frac{c_1^2 \omega ^2}{2} }. \end{aligned}$$

Note then that all results regarding the Fourier transform remain unchanged, and since this 1-periodic Gaussian is just a rescaling of the \( 2\pi \)-periodic one used in [27], the bound in [27, Lemma 1] holds with a similarly compressed Gaussian, that is, for all \( x \in \left[ -\frac{1}{2}, \frac{1}{2} \right] \)

$$\begin{aligned} g(x) \le \left( \frac{3}{c_1} + \frac{1}{\sqrt{ 2\pi }} \right) \mathbb { e }^{ -\frac{ \left( 2\pi x \right) ^2 }{2 c_1^2} }. \end{aligned}$$
(9)
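
With the parameter choice \( c_1 = \beta \sqrt{ \ln N }/N \) made later in the proof, the bound (9) can be checked numerically on a grid (the values of \( \beta \) and N below are hypothetical choices, and the periodization is truncated since its tails decay extremely fast):

```python
import math

beta, N = 6.0, 100                              # hypothetical parameter values
c1 = beta * math.sqrt(math.log(N)) / N          # c_1 = beta sqrt(ln N) / N

def g(x):
    """1-periodic Gaussian, truncated periodization over |n| <= 3."""
    return sum(math.exp(-(2 * math.pi) ** 2 * (x - n) ** 2 / (2 * c1 ** 2))
               for n in range(-3, 4)) / c1

# Bound (9): g(x) <= (3/c1 + 1/sqrt(2 pi)) exp(-(2 pi x)^2 / (2 c1^2)) on [-1/2, 1/2].
C = 3 / c1 + 1 / math.sqrt(2 * math.pi)
checks = []
for i in range(101):
    x = -0.5 + i / 100
    checks.append(g(x) <= C * math.exp(-(2 * math.pi * x) ** 2 / (2 * c1 ** 2)))
assert all(checks)
```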

Analogous results up to and including [27, Lemma 10] for 1-periodic functions then hold straightforwardly.

Assuming that our signal measurements \( {{\mathbf {f}}}= (f(y_j))_{ j = 0 }^{ 2M } = (f( \frac{j}{N} ))_{ j = 0 }^{ 2M } \) are corrupted by some discrete noise \( \varvec{\mu } = (\mu _j)_{ j = 0 }^{ 2M } \), we consider for any \( x \in \mathbb {T} \) a similar bound to [27, Lemma 10]. Here, \( j' := \arg \min _{ j }| x - y_j| \) and \( \kappa := \left\lceil \gamma \ln N \right\rceil + 1 \) for some \( \gamma \in \mathbb { R }^+ \) to be determined. Then,

$$\begin{aligned}&\left| \frac{1}{N} \sum _{ j = 0 }^{ 2M } f(y_j) g(x - y_j) - \frac{1}{N} \sum _{ j = j' - \kappa }^{ j' + \kappa } (f(y_j) + \mu _j) g(x - y_j) \right| \\&\qquad \le \frac{1}{N} \left| \sum _{ j = 0 }^{ 2M } f(y_j) g( x-y_j ) - \sum _{ j = j' - \kappa }^{ j'+\kappa } f(y_j) g(x - y_j) \right| + \frac{1}{N} \left| \sum _{ j = j' - \kappa }^{ j' + \kappa } \mu _j g(x - y_j) \right| \\&\qquad \le \frac{1}{N} \left| \sum _{ j = 0 }^{ 2M } f(y_j) g( x-y_j ) - \sum _{ j = j' - \kappa }^{ j'+\kappa } f(y_j) g(x - y_j) \right| + \frac{\Vert \varvec{\mu }\Vert _\infty }{N} \sum _{ k = - \kappa }^{ \kappa } g(x - y_{ j' + k }) \end{aligned}$$

We bound the first term in this sum by a direct application of [27, Lemma 10]; however, we take this opportunity to reduce the constant in the bound given there. In particular, bounding this term by the final expression in the proof of [27, Lemma 10] and using our implicit assumption that \( 36 \le N \), we have

$$\begin{aligned} \begin{aligned}&\left| \frac{1}{N} \sum _{ j = 0 }^{ 2M } f(y_j) g(x - y_j) - \frac{1}{N} \sum _{ j = j' - \kappa }^{ j' + \kappa } (f(y_j) + \mu _j) g(x - y_j) \right| \\&\qquad \qquad \le \left( \frac{3}{\sqrt{ 2\pi }} + \frac{1}{2\pi } \sqrt{ \frac{\ln 36}{36} } \right) \Vert \mathbf {f} \Vert _\infty N^{ -r } + \frac{1}{N} \Vert \varvec{\mu } \Vert _\infty \sum _{ k = - \kappa }^{ \kappa } g(x - y_{ j' + k }). \end{aligned} \end{aligned}$$
(10)

We now work on bounding the second term. First note that for all \( k \in [-\kappa , \kappa ] \cap \mathbb { Z } \),

$$\begin{aligned} g(x-y_{ j' \pm k })&= g\left( x - y_{ j' } \pm \frac{k}{N}\right) . \end{aligned}$$

Assuming without loss of generality that \( 0 \le x - y_{ j' } \), we can bound the nonnegatively indexed summands by (9) as

$$\begin{aligned} g\left( x - y_{ j' } + \frac{k}{N}\right)&\le \left( \frac{3}{c_1} + \frac{2}{\sqrt{ 2 \pi }}\right) \mathbb { e }^{ - \frac{(2 \pi )^2k^2}{2 c_1^2N^2} }. \end{aligned}$$
(11)

For the negatively indexed summands, the definition of \( j' = \arg \min _j |x - y_j| \) implies that \( x - y_{ j' } \le \frac{1}{2N} \). In particular,

$$\begin{aligned} x - y_{ j' } - \frac{k}{N} \le \frac{1 - 2k}{2N} < 0 \end{aligned}$$

implies

$$\begin{aligned} \left( x - y_{ j' } - \frac{k}{N} \right) ^2 \ge \frac{1 - 2k}{2N} \left( x- y_{ j' } - \frac{k}{N} \right) \ge \frac{2k - 1}{2N} \cdot \frac{k}{N} , \end{aligned}$$

giving

$$\begin{aligned} g \left( x - y_{ j' } - \frac{k}{N} \right) \le \left( \frac{3}{c_1} + \frac{2}{\sqrt{ 2\pi }} \right) \mathbb { e }^{ - \frac{(2\pi )^2k^2}{2c_1^2N^2} } \mathbb { e }^{ \frac{(2\pi )^2k}{4c_1^2N^2} }. \end{aligned}$$
(12)

We now bound the final exponential. We first recall from [27] the choices of parameters

$$\begin{aligned} c_1 = \frac{\beta \sqrt{ \ln N }}{N}, \quad \kappa = \left\lceil \gamma \ln N \right\rceil + 1, \quad \gamma = \frac{6r}{\sqrt{ 2 }\pi } = \frac{\beta \sqrt{ r }}{\sqrt{ 2 }\pi }, \quad \beta = 6 \sqrt{ r } \end{aligned}$$

with \( 1 \le r \le \frac{N}{36} \). For \( k \in [1, \kappa ] \cap \mathbb { Z } \) then,

$$\begin{aligned} \exp \left( \frac{(2\pi )^2k}{4c_1^2N^2} \right)&\le \exp \left( \frac{(2\pi )^2\kappa }{4c_1^2N^2} \right) \\&\le \exp \left( \frac{\pi ^2\left( \frac{6r \ln N}{\sqrt{ 2 }\pi } + 2\right) }{36 r \ln N} \right) \\&\le \exp \left( \frac{\pi }{6\sqrt{ 2 }} + \frac{\pi ^2}{18r \ln N} \right) \\&\le \exp \left( \frac{\pi }{6\sqrt{ 2 }} + \frac{\pi ^2}{18 \ln 36} \right) =: A. \end{aligned}$$

Combining this with our bounds for the nonnegatively indexed summands (11) and the negatively indexed summands (12), we have

$$\begin{aligned} \frac{1}{N} \sum _{ k = -\kappa }^\kappa g(x - y_{ j'+ k }) \le \left( \frac{3}{\beta \sqrt{ \ln N }} + \frac{1}{N \sqrt{ 2 \pi }}\right) \left( 1 + (1 + A) \sum _{ k = 1 }^\kappa \mathbb { e }^{ - \frac{(2\pi )^2 k^2}{2 \beta ^2 \ln N} } \right) \end{aligned}$$

Expressing the final sum as a truncated lower Riemann sum and applying a change of variables on the resulting integral, we have

$$\begin{aligned} \sum _{ k = 1 }^\kappa \mathbb { e }^{ - \frac{(2\pi )^2 k^2}{2 \beta ^2 \ln N} } \le \frac{\beta \sqrt{ \ln N }}{\sqrt{ 2 }\pi } \int _{ 0 }^{ \infty } e^{ -x^2 } \,d x = \frac{\beta \sqrt{ \ln N }}{2 \sqrt{ 2 \pi }}. \end{aligned}$$
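
For concrete admissible parameters, this Riemann-sum estimate can be checked directly (the values of r and N below are hypothetical choices satisfying \( 1 \le r \le \frac{N}{36} \)):

```python
import math

r, N = 1, 100                                   # hypothetical admissible parameters
beta = 6 * math.sqrt(r)
gamma = 6 * r / (math.sqrt(2) * math.pi)
kappa = math.ceil(gamma * math.log(N)) + 1

# Truncated Gaussian sum versus its integral bound beta sqrt(ln N) / (2 sqrt(2 pi)).
lhs = sum(math.exp(-(2 * math.pi) ** 2 * k ** 2 / (2 * beta ** 2 * math.log(N)))
          for k in range(1, kappa + 1))
rhs = beta * math.sqrt(math.log(N)) / (2 * math.sqrt(2 * math.pi))
assert lhs <= rhs
```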

Making use of our parameter values from [27], and the fact that \( 1 \le r \le \frac{N}{36} \),

$$\begin{aligned} \frac{1}{N} \sum _{ k = -\kappa }^\kappa g(x - y_{ j'+ k })\le & {} \left( \frac{3}{\beta \sqrt{ \ln N }} + \frac{1}{N \sqrt{ 2 \pi }}\right) \left( 1 + \frac{1 + A}{2 \sqrt{ 2 \pi }} \beta \sqrt{ \ln N } \right) \nonumber \\\le & {} \frac{3}{6 \sqrt{ \ln 36 }} + \frac{3(1 + A)}{2 \sqrt{ 2 \pi }} + \frac{1}{36 \sqrt{ 2\pi }} + \frac{1+A}{4\pi } \sqrt{ \frac{\ln 36}{36} } \nonumber \\< & {} 2. \end{aligned}$$
(13)
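
The arithmetic in the final step can be verified directly from the definition of A and the stated constants:

```python
import math

# A = exp(pi / (6 sqrt(2)) + pi^2 / (18 ln 36)) as defined above.
A = math.exp(math.pi / (6 * math.sqrt(2)) + math.pi ** 2 / (18 * math.log(36)))

total = (3 / (6 * math.sqrt(math.log(36)))
         + 3 * (1 + A) / (2 * math.sqrt(2 * math.pi))
         + 1 / (36 * math.sqrt(2 * math.pi))
         + (1 + A) / (4 * math.pi) * math.sqrt(math.log(36) / 36))
assert total < 2
```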

With our revised bound for (10) above, we reprove [27, Theorem 4] to estimate \( g *f \) by the truncated discrete convolution with noisy samples. In particular, we apply [27, Theorem 3], (10), (9), and finally our same assumption that \( 1 \le r \le \frac{N}{36} \) to obtain

$$\begin{aligned}&\left| (g * f) (x) - \frac{1}{N} \sum _{ j = j' - \left\lceil \frac{6r}{\sqrt{ 2 } \pi } \ln N \right\rceil - 1 }^{ j' + \left\lceil \frac{6r}{\sqrt{ 2 }\pi } \ln N \right\rceil + 1 } (f(y_j) + \mu _j) g(x - y_j) \right| \\&\qquad \le \frac{N^{ 1 - r }}{6 \sqrt{ r } \sqrt{ \ln N }} \Vert \mathbf {f} \Vert _\infty N^{ -r } + \left( \frac{3}{\sqrt{ 2\pi }} + \frac{1}{2\pi } \sqrt{ \frac{\ln 36}{36} } \right) \Vert \mathbf {f} \Vert _\infty N^{ -r } + 2 \Vert \varvec{\mu } \Vert _\infty \\&\qquad \le \left( \frac{1}{6 \sqrt{ \ln 36 }} + \frac{3}{\sqrt{ 2\pi }} + \frac{1}{2\pi } \sqrt{ \frac{\ln 36}{36} } \right) \frac{ \Vert \mathbf {f} \Vert _\infty }{N^r} + 2 \Vert \varvec{\mu } \Vert _\infty < 2 \left( \frac{\Vert \mathbf {f} \Vert _\infty }{N^r} + \Vert \varvec{\mu }\Vert _\infty \right) . \end{aligned}$$

Replacing all references of \( 3 \Vert {{\mathbf {f}}}\Vert _{ \infty } N^{ -r } \) by \( 2( \Vert {{\mathbf {f}}}\Vert _\infty N^{ -r } + \Vert \varvec{\mu } \Vert _\infty ) \) in the remainder of the steps up to proving [27, Theorem 5] gives the desired noise robustness (with a slightly improved constant).

Using the revised error estimates of the nonequispaced algorithm from Theorem 1 and redefining \( \delta = 3 (\Vert {\hat{{{\mathbf {f}}}}} - {\hat{{{\mathbf {f}}}}}_s^\mathrm {opt} \Vert _1 / 2s + 2( \Vert {{\mathbf {f}}}\Vert _\infty N^{ -r } + \Vert \varvec{ \mu } \Vert _\infty )) \) as in the proof of [27, Theorem 5] (which also contains the proof of property (2)), the discretization algorithm [27, Algorithm 1] will produce candidate Fourier coefficient approximations in lines 9 and 12 corresponding to every \( |{\hat{f}}_\omega | \ge (4 + 2 \sqrt{ 2 }) \delta \) in place of \( 4\delta \) in Theorem 1. The exact same argument as in the proof of Theorem 1 then applies to the selection of the 2s-largest entries of this approximation with the revised threshold values and error bounds to give properties (1) and (3).

In detail, [27, Lemma 13] and the discussion right after its statement gives that property (2) holds for any approximate coefficient with frequency recovered throughout the algorithm (which, for the purposes of the following discussion, we will store in \( \mathbf {x} \) rather than \( {\hat{R}} \) defined in [27, Algorithm 1]), not just those in the final output \( \mathbf {v} := \mathbf {x}_{ 2s }^{ \mathrm {opt} } \). Additionally, by the same lemma and our revised bounds from Theorem 1, any frequency \( \omega \in [N] \) satisfying \( |{\hat{f}}_\omega | > (4 + 2 \sqrt{ 2 })\delta \) will have an associated coefficient estimate in \( \mathbf {x} \).

By Lemma 4, \( | \mathscr {S}_{ (4 + 2 \sqrt{ 2 })\delta } | \le 2s = | {{\,\mathrm{supp}\,}}(\mathbf {v}) | \), and so if \( \omega \in \mathscr {S}_{ (4 + 2 \sqrt{ 2 })\delta } \setminus {{\,\mathrm{supp}\,}}( \mathbf {v} ) \), there exists some \( \omega ' \in {{\,\mathrm{supp}\,}}(\mathbf {v}) \setminus \mathscr {S}_{ (4 + 2 \sqrt{ 2 })\delta } \) such that \( x_{\omega '} \) took the place of \( x_{ \omega } \) in \( {{\,\mathrm{supp}\,}}(\mathbf {v}) \). In particular, this means that \( |x_{ \omega ' }| \ge |x_{ \omega }| \), \( |{\hat{f}}_{ \omega ' }| \le (4 + 2 \sqrt{ 2 })\delta \), and \( |{\hat{f}}_{ \omega }| > (4 + 2 \sqrt{ 2 })\delta \). Thus,

$$\begin{aligned} (4 + 2 \sqrt{ 2 })\delta + \sqrt{ 2 }\delta > |{\hat{f}}_{ \omega ' }| + \sqrt{ 2 } \delta \ge |x_{ \omega ' }| \ge |x_\omega | \ge |{\hat{f}}_{ \omega }| - \sqrt{ 2 }\delta , \end{aligned}$$

implying that \( |{\hat{f}}_\omega | \le 4(1 + \sqrt{ 2 })\delta \) and therefore proving (1).

Finally, to prove (3), we use Lemma 4, and consider

$$\begin{aligned} \Vert {\hat{{{\mathbf {f}}}}} - \mathbf {v} \Vert _2&\le \Vert {\hat{{{\mathbf {f}}}}} - {\hat{{{\mathbf {f}}}}} |_{ {{\,\mathrm{supp}\,}}(\mathbf {v}) } \Vert _2 + \Vert ( {\hat{{{\mathbf {f}}}}} - \mathbf {v} )|_{ {{\,\mathrm{supp}\,}}(\mathbf {v}) } \Vert _2 \\&\le \Vert {\hat{{{\mathbf {f}}}}} - {\hat{{{\mathbf {f}}}}}|_{ \mathscr {S}_{ (4 + 2 \sqrt{ 2 })\delta } \cap {{\,\mathrm{supp}\,}}(\mathbf {v}) } \Vert _2+ \sqrt{ 2 }\delta \sqrt{ 2s } \\&\le \Vert {\hat{{{\mathbf {f}}}}} - {\hat{{{\mathbf {f}}}}}|_{ \mathscr {S}_{ (4 + 2 \sqrt{ 2 })\delta } } \Vert _2 + \Vert {\hat{{{\mathbf {f}}}}}|_{ \mathscr {S}_{ (4 + 2 \sqrt{ 2 })\delta } \setminus {{\,\mathrm{supp}\,}}(\mathbf {v}) }\Vert _2 + 2 \delta \sqrt{ s } \\&\le \Vert {\hat{{{\mathbf {f}}}}} - {\hat{{{\mathbf {f}}}}}_{ 2s }^\mathrm {opt}\Vert _2 + (4 + 2 \sqrt{ 2 })\delta \sqrt{ 2s } + 4(1 + \sqrt{ 2 })\delta \sqrt{ 2s } + 2 \delta \sqrt{ s } \\&\le \Vert {\hat{{{\mathbf {f}}}}} - {\hat{{{\mathbf {f}}}}}_{ 2s }^\mathrm {opt} \Vert _2 + (14 + 8 \sqrt{ 2 }) \delta \sqrt{ s } \end{aligned}$$

which finishes the proof. \(\square \)

About this article

Cite this article

Gross, C., Iwen, M., Kämmerer, L. et al. Sparse Fourier transforms on rank-1 lattices for the rapid and low-memory approximation of functions of many variables. Sampl. Theory Signal Process. Data Anal. 20, 1 (2022). https://doi.org/10.1007/s43670-021-00018-y
