Abstract
This paper considers fast and provably accurate algorithms for approximating smooth functions on the d-dimensional torus, \(f: \mathbb {T}^d \rightarrow \mathbb {C}\), that are sparse (or compressible) in the multidimensional Fourier basis. In particular, suppose that the Fourier series coefficients of f, \(\{c_\mathbf{k} (f) \}_{\mathbf{k} \in \mathbb {Z}^d}\), are concentrated in a given arbitrary finite set \(\mathscr {I} \subset \mathbb {Z}^d\) so that
holds for \(s \ll \left| \mathscr {I} \right| \) and \(\epsilon \in (0,1)\) small. In such cases we aim to both identify a near-minimizing subset \(\Omega \subset \mathscr {I}\) and accurately approximate its associated Fourier coefficients \(\{ c_\mathbf{k} (f) \}_{\mathbf{k} \in \Omega }\) as rapidly as possible. In this paper we present explicit deterministic algorithms as well as randomized algorithms for solving this problem using \(\mathscr {O}(s^2 d \log ^c (|\mathscr {I}|))\) time/memory and \(\mathscr {O}(s d \log ^c (|\mathscr {I}|))\) time/memory, respectively. Most crucially, all of the methods proposed herein achieve these runtimes while simultaneously satisfying theoretical best s-term approximation guarantees that ensure their numerical accuracy and robustness to noise for general functions. These results are achieved by modifying several different one-dimensional Sparse Fourier Transform (SFT) methods to subsample a function along a reconstructing rank-1 lattice for the given frequency set \(\mathscr {I} \subset \mathbb {Z}^d\) in order to rapidly identify a near-minimizing subset \(\Omega \subset \mathscr {I}\) as above, without using anything about the lattice beyond its generating vector. This requires the development of new fast and low-memory frequency identification techniques capable of rapidly recovering vector-valued frequencies in \(\mathbb {Z}^d\), as opposed to the simple integer frequencies recovered in the univariate setting. Two different multivariate frequency identification strategies are proposed, analyzed, and shown to lead to their own best s-term approximation methods herein, each with different tradeoffs between accuracy, computational speed, and memory.
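The rank-1 lattice idea underlying these methods can be illustrated with a small numerical sketch (not the paper's algorithm): when the lattice is reconstructing for \(\mathscr {I}\), the map \(\mathbf{k} \mapsto \mathbf{k} \cdot \mathbf{z} \bmod M\) is injective on \(\mathscr {I}\), so the multivariate Fourier coefficients of a trigonometric polynomial can be read off from a single one-dimensional FFT of its lattice samples. The frequency set, generating vector, and lattice size below are illustrative choices verified injective in the code.

```python
import numpy as np

# Hedged sketch: recovering the Fourier coefficients of a sparse
# multivariate trigonometric polynomial from samples along a
# reconstructing rank-1 lattice via one 1D FFT. The set I, generating
# vector z, and lattice size M are illustrative choices.

d = 2
I = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])   # frequency candidates
z = np.array([1, 3])                              # generating vector
M = 5                                             # lattice size
# "reconstructing" check: k.z mod M must be distinct over I
assert len(set((I @ z) % M)) == len(I)

rng = np.random.default_rng(0)
c_true = rng.standard_normal(len(I)) + 1j * rng.standard_normal(len(I))

def f(x):
    """Evaluate f(x) = sum_k c_k exp(2*pi*i k.x) at a point x in T^d."""
    return np.sum(c_true * np.exp(2j * np.pi * (I @ x)))

# sample f along the rank-1 lattice nodes x_j = (j * z mod M) / M
samples = np.array([f((j * z % M) / M) for j in range(M)])

# the 1D FFT of the lattice samples aliases coefficient c_k into bin
# k.z mod M, so injectivity lets us read each coefficient off directly
fft_vals = np.fft.fft(samples) / M
c_rec = fft_vals[(I @ z) % M]

print(np.allclose(c_rec, c_true))  # True
```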
Availability of data and material
Not applicable.
Code availability
All code is available at https://gitlab.com/grosscra/Rank1LatticeSparseFourier.
Notes
Available at https://gitlab.com/grosscra/SublinearSparseFourierMATLAB.
References
Bittens, S., Plonka, G.: Real sparse fast DCT for vectors with short support. Linear Algebra Appl. 582, 359–390 (2019). https://doi.org/10.1016/j.laa.2019.08.006
Bittens, S., Plonka, G.: Sparse fast DCT for vectors with one-block support. Numer. Algorithms 82(2), 663–697 (2019). https://doi.org/10.1007/s11075-018-0620-1
Choi, B., Christlieb, A., Wang, Y.: Multiscale High-Dimensional Sparse Fourier Algorithms for Noisy Data. ArXiv e-prints (2019). arXiv:1907.03692
Choi, B., Christlieb, A., Wang, Y.: High-dimensional sparse Fourier algorithms. Numer. Algorithms (2020). https://doi.org/10.1007/s11075-020-00962-1
Choi, B., Iwen, M.A., Krahmer, F.: Sparse harmonic transforms: a new class of sublinear-time algorithms for learning functions of many variables. Found. Comput. Math. (2020). https://doi.org/10.1007/s10208-020-09462-z
Choi, B., Iwen, M.A., Volkmer, T.: Sparse harmonic transforms II: best s-term approximation guarantees for bounded orthonormal product bases in sublinear-time. ArXiv e-prints (2020). arXiv:1909.09564
Christlieb, A., Lawlor, D., Wang, Y.: A multiscale sub-linear time Fourier algorithm for noisy data. Appl. Comput. Harmon. Anal. 40(3), 553–574 (2016). https://doi.org/10.1016/j.acha.2015.04.002
Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best \(k\)-term approximation. J. Am. Math. Soc. 22(1), 211–231 (2009). https://doi.org/10.1090/S0894-0347-08-00610-3
Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Springer, Berlin (2013). https://doi.org/10.1007/978-0-8176-4948-7
Gilbert, A.C., Indyk, P., Iwen, M., Schmidt, L.: Recent developments in the sparse Fourier transform: a compressed Fourier transform for big data. IEEE Signal Process. Mag. 31(5), 91–100 (2014). https://doi.org/10.1109/MSP.2014.2329131
Gilbert, A.C., Muthukrishnan, S., Strauss, M.: Improved time bounds for near-optimal sparse Fourier representations. In: Papadakis, M., Laine, A.F., Unser, M.A. (eds.) Wavelets XI, vol. 5914, pp. 398 – 412. International Society for Optics and Photonics. SPIE (2005). https://doi.org/10.1117/12.615931
Gilbert, A.C., Strauss, M.J., Tropp, J.A.: A tutorial on fast Fourier sampling. IEEE Signal Process. Mag. 25(2), 57–66 (2008). https://doi.org/10.1109/MSP.2007.915000
Gross, C., Iwen, M.A., Kämmerer, L., Volkmer, T.: A deterministic algorithm for constructing multiple rank-1 lattices of near-optimal size. ArXiv e-prints (2020). arXiv:2003.09753
Hassanieh, H., Indyk, P., Katabi, D., Price, E.: Simple and practical algorithm for sparse Fourier transform. In: Proceedings of the twenty-third annual ACM-SIAM symposium on discrete algorithms, pp. 1183–1194. ACM, New York (2012). https://doi.org/10.1137/1.9781611973099.93
Iwen, M.A.: Combinatorial sublinear-time Fourier algorithms. Found. Comput. Math. 10(3), 303–338 (2010). https://doi.org/10.1007/s10208-009-9057-1
Iwen, M.A.: Improved approximation guarantees for sublinear-time Fourier algorithms. Appl. Comput. Harmon. Anal. 34, 57–82 (2013). https://doi.org/10.1016/j.acha.2012.03.007
Kämmerer, L.: High Dimensional Fast Fourier Transform Based on Rank-1 Lattice Sampling. Dissertation. Universitätsverlag Chemnitz (2014)
Kämmerer, L.: Reconstructing multivariate trigonometric polynomials from samples along rank-1 lattices. In: Fasshauer, G.E., Schumaker, L.L. (eds.) Approximation Theory XIV: San Antonio 2013, pp. 255–271. Springer International Publishing, Berlin (2014). https://doi.org/10.1007/978-3-319-06404-8_14
Kämmerer, L., Krahmer, F., Volkmer, T.: A sample efficient sparse FFT for arbitrary frequency candidate sets in high dimensions. ArXiv e-prints (2020). arXiv:2006.13053
Kämmerer, L., Potts, D., Volkmer, T.: Approximation of multivariate periodic functions by trigonometric polynomials based on rank-1 lattice sampling. J. Complex. 31(4), 543–576 (2015). https://doi.org/10.1016/j.jco.2015.02.004
Kämmerer, L., Potts, D., Volkmer, T.: High-dimensional sparse FFT based on sampling along multiple rank-1 lattices. Appl. Comput. Harmon. Anal. 51, 225–257 (2021). https://doi.org/10.1016/j.acha.2020.11.002
Kapralov, M.: Sparse Fourier transform in any constant dimension with nearly-optimal sample complexity in sublinear time. In: Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2016), pp. 264–277. Assoc. Comput. Mach., New York (2016). https://doi.org/10.1145/2897518.2897650
Kuo, F.Y., Migliorati, G., Nobile, F., Nuyens, D.: Function integration, reconstruction and approximation using rank-1 lattices. ArXiv e-prints (2020). arXiv:1908.01178
Kuo, F.Y., Sloan, I.H., Woźniakowski, H.: Lattice rules for multivariate approximation in the worst case setting. In: Monte Carlo and quasi-Monte Carlo methods 2004, pp. 289–330. Springer, Berlin (2006). https://doi.org/10.1007/3-540-31186-6_18
Lawlor, D., Wang, Y., Christlieb, A.: Adaptive sub-linear time Fourier algorithms. Adv. Adapt. Data Anal. 05(01), 1350003 (2013). https://doi.org/10.1142/S1793536913500039
Li, D., Hickernell, F.J.: Trigonometric spectral collocation methods on lattices. In: Cheng, S.Y., Shu, C.-W., Tang T. (eds.) Recent advances in scientific computing and partial differential equations (Hong Kong, 2002), Contemp. Math., vol. 330, pp. 121–132. Amer. Math. Soc., Providence (2003). https://doi.org/10.1090/conm/330/05887
Merhi, S., Zhang, R., Iwen, M.A., Christlieb, A.: A new class of fully discrete sparse Fourier transforms: Faster stable implementations with guarantees. J. Fourier Anal. Appl. 25(3), 751–784 (2019). https://doi.org/10.1007/s00041-018-9616-4
Morotti, L.: Explicit universal sampling sets in finite vector spaces. Appl. Comput. Harmon. Anal. 43(2), 354–369 (2017). https://doi.org/10.1016/j.acha.2016.06.001
Munthe-Kaas, H., Sørevik, T.: Multidimensional pseudo-spectral methods on lattice grids. Appl. Numer. Math. 62(3), 155–165 (2012). https://doi.org/10.1016/j.apnum.2011.11.002
Plonka, G., Potts, D., Steidl, G., Tasche, M.: Numerical Fourier Analysis. Springer, Berlin (2018). https://doi.org/10.1007/978-3-030-04306-3
Plonka, G., Wannenwetsch, K.: A sparse fast Fourier algorithm for real non-negative vectors. J. Comput. Appl. Math. 321, 532–539 (2017). https://doi.org/10.1016/j.cam.2017.03.019
Plonka, G., Wannenwetsch, K., Cuyt, A., Lee, W.-s.: Deterministic sparse FFT for \(M\)-sparse vectors. Numer. Algorithms 78(1), 133–159 (2018). https://doi.org/10.1007/s11075-017-0370-5
Potts, D., Volkmer, T.: Sparse high-dimensional FFT based on rank-1 lattice sampling. Appl. Comput. Harmon. Anal. 41(3), 713–748 (2016). https://doi.org/10.1016/j.acha.2015.05.002
Segal, B., Iwen, M.: Improved sparse Fourier approximation results: faster implementations and stronger guarantees. Numer. Algorithms 63(2), 239–263 (2013). https://doi.org/10.1007/s11075-012-9621-7
Temlyakov, V.N.: Reconstruction of periodic functions of several variables from the values at the nodes of number-theoretic nets. Anal. Math. 12(4), 287–305 (1986). https://doi.org/10.1007/BF01909367
Temlyakov, V.N.: Approximation of Periodic Functions. Computational Mathematics Analysis Series. Nova Sci. Publ., Inc., Commack (1993)
Volkmer, T.: Multivariate Approximation and High-Dimensional Sparse FFT Based on Rank-1 Lattice Sampling. Dissertation. Universitätsverlag Chemnitz (2017)
Funding
Craig Gross and Mark Iwen were supported in part by the National Science Foundation (NSF) DMS 1912706. Toni Volkmer was supported in part by Sächsische Aufbaubank-Förderbank-(SAB) 100378180. Lutz Kämmerer was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 380648269.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
Communicated by Simon Foucart.
Appendices
Proof of Theorem 1
We begin with a lemma which will be used to slightly improve the error bounds in Theorems 1 and 2 from their original statements.
Lemma 4
For \( {{\mathbf {x}}}\in {\mathbb {C}}^K \) and \( \mathscr {S}_\tau := \{ k \in [K] \mid |x_k| \ge \tau \} \), if \( \tau \ge \frac{\Vert {{\mathbf {x}}}- {{\mathbf {x}}}_s^\mathrm {opt}\Vert _1}{s} \), then \( | \mathscr {S}_\tau | \le 2s \) and
Proof
Ordering the entries of \( {{\mathbf {x}}}\) in descending order (with ties broken arbitrarily) as \( |x_{ k_1 }| \ge |x_{ k_2 }| \ge \ldots \), we first note that
By assumption then, \( \tau \ge | x_{ k_{ 2s } }| \), and since \( \mathscr {S}_\tau \) contains the \( | \mathscr {S}_\tau | \)-many largest entries of \( {{\mathbf {x}}}\), we must have \( \mathscr {S}_\tau \subset {{\,\mathrm{supp}\,}}({{\mathbf {x}}}_{ 2s }^\mathrm {opt}) \). Note then that \( | \mathscr {S}_\tau | \le 2s \). Finally, we calculate
completing the proof. \(\square \)
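The counting claim \( | \mathscr {S}_\tau | \le 2s \) is easy to sanity-check numerically. The sketch below uses random trial data with illustrative sizes: it sets the threshold to exactly \( \Vert {{\mathbf {x}}}- {{\mathbf {x}}}_s^\mathrm {opt}\Vert _1 / s \), the smallest value the lemma permits, and counts the entries that meet it.

```python
import numpy as np

# Hedged numeric check of Lemma 4's counting claim: if tau is at least
# ||x - x_s_opt||_1 / s, then at most 2s entries of x can reach it.
# Random trial data; K and s are illustrative sizes.

rng = np.random.default_rng(1)
K, s = 200, 10
x = rng.standard_normal(K) * rng.exponential(1.0, K)  # heavy-ish tails

mags = np.sort(np.abs(x))[::-1]
tail_l1 = mags[s:].sum()          # ||x - x_s_opt||_1 (best s-term tail)
tau = tail_l1 / s                 # smallest threshold the lemma allows
S_tau = np.flatnonzero(np.abs(x) >= tau)

print(len(S_tau) <= 2 * s)  # True
```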
We now give the proof of Theorem 1. For convenience we restate the theorem.
Theorem 1
For a signal \( a \in \mathscr {W}(\mathbb { T }) \cap C( \mathbb { T } ) \) corrupted by some arbitrary noise \( \mu : \mathbb { T } \rightarrow {\mathbb {C}}\), Algorithm 3 of [16], denoted \( \mathscr {A}_{ 2s, M }^{ \mathrm {sub} } \), will output a 2s-sparse coefficient vector \( {{\mathbf {v}}}\in {\mathbb {C}}^M \) which
1. reconstructs every frequency of \( {\hat{{{\mathbf {a}}}}} \in {\mathbb {C}}^M \), \( \omega \in \mathscr {B}_M \), with corresponding Fourier coefficients meeting the tolerance
$$\begin{aligned} | {\hat{a}}_\omega | > (4 + 2 \sqrt{ 2 }) \left( \frac{\Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_{ s }^\mathrm {opt}\Vert _1}{s} + \Vert {\hat{a}} - {\hat{{{\mathbf {a}}}}} \Vert _1 + \Vert \mu \Vert _\infty \right) , \end{aligned}$$
2. satisfies the \( \ell ^\infty \) error estimate for recovered coefficients
$$\begin{aligned} \Vert ({\hat{{{\mathbf {a}}}}} - {{\mathbf {v}}})|_{ {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) } \Vert _\infty \le \sqrt{ 2 } \left( \frac{\Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_{ s }^\mathrm {opt}\Vert _1}{s} + \Vert {\hat{a}} - {\hat{{{\mathbf {a}}}}} \Vert _1 + \Vert \mu \Vert _\infty \right) , \end{aligned}$$
3. satisfies the \( \ell ^2 \) error estimate
$$\begin{aligned} \Vert {\hat{{{\mathbf {a}}}}} - {{\mathbf {v}}}\Vert _2&\le \Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_{ 2s }^\mathrm {opt} \Vert _2 + \frac{(8 \sqrt{ 2 } + 6)\Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_s^\mathrm {opt} \Vert _1}{\sqrt{ s }} \\&\qquad + (8 \sqrt{ 2 } + 6)\sqrt{ s } (\Vert {\hat{a}} - {\hat{{{\mathbf {a}}}}} \Vert _1 + \Vert \mu \Vert _\infty ), \end{aligned}$$
4. and the number of required samples of a and the operation count for \( \mathscr {A}_{ 2s, M }^{ \mathrm {sub} } \) are
$$\begin{aligned} \mathscr {O} \left( \frac{s^2\log ^4 M}{\log s} \right) . \end{aligned}$$
The Monte Carlo variant of \( \mathscr {A}_{ 2s, M }^\mathrm {sub} \), denoted \( \mathscr {A}_{ 2s, M }^\mathrm {sub, MC} \) and described in Corollary 4 of [16], satisfies all of the conditions (1)–(3) simultaneously with probability \( (1 - \sigma ) \in [2/3, 1) \) and has number of required samples and operation count
The samples required by \( \mathscr {A}_{ 2s, M }^\mathrm {sub, MC} \) are a subset of those required by \( \mathscr {A}_{ 2s, M }^\mathrm {sub} \).
Proof
We refer to [16, Theorem 7] and its modification for noise robustness in [27, Lemma 4] for the proofs of properties (2) and (4). As for (1), [16, Lemma 6] and its modification in [27, Lemma 4] imply that any \( \omega \in \mathscr {B}_M \) with \( |{\hat{a}}_\omega | > 4 (\Vert {\hat{{{\mathbf {a}}}}} - {\hat{{{\mathbf {a}}}}}_s^\mathrm {opt}\Vert _1 / s + \Vert {\hat{a}} - {\hat{{{\mathbf {a}}}}} \Vert _1 + \Vert \mu \Vert _\infty ) =: 4\delta \) will be identified in [16, Algorithm 3]. An approximate Fourier coefficient for these and any other recovered frequencies is stored in the vector \( {{\mathbf {x}}}\) which satisfies the same estimate in property (2) by the proof of [16, Theorem 7] and [27, Lemma 4]. However, only the 2s largest magnitude values of \( {{\mathbf {x}}}\) will be returned in \( {{\mathbf {v}}}\). We therefore analyze what happens when some of the potentially large Fourier coefficients corresponding to frequencies in \( \mathscr {S}_{ 4\delta } \) do not have their approximations assigned to \( {{\mathbf {v}}}\).
By the definition of \( \mathscr {S}_\tau \) in Lemma 4 applied to \( {\hat{{{\mathbf {a}}}}} \), we must have \( | \mathscr {S}_{ 4\delta }| \le 2s = |{{\,\mathrm{supp}\,}}({{\mathbf {v}}})|\). If \( \omega \in \mathscr {S}_{ 4\delta } \setminus {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) \), there must then exist some other \( \omega ' \in {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) \setminus \mathscr {S}_{ 4\delta } \) which was identified and took the place of \( \omega \) in \( {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) \). For this to happen, \( |{\hat{a}}_{ \omega ' }| \le 4\delta \) and \( |x_{ \omega ' }| \ge |x_{ \omega }| \). But by property (2) (extended to all coefficients in \( {{\mathbf {x}}}\)), we know
$$\begin{aligned} |{\hat{a}}_{ \omega }| \le |x_{ \omega }| + \sqrt{ 2 } \delta \le |x_{ \omega ' }| + \sqrt{ 2 } \delta \le |{\hat{a}}_{ \omega ' }| + 2 \sqrt{ 2 } \delta \le (4 + 2 \sqrt{ 2 }) \delta . \end{aligned}$$
Thus, any frequency in \( \mathscr {S}_{ 4\delta } \) not chosen satisfies \( |{\hat{a}}_\omega | \le (4 + 2\sqrt{ 2 }) \delta \), and so every frequency in \( \mathscr {S}_{ (4 + 2\sqrt{ 2 }) \delta } \) is in fact identified in \( {{\mathbf {v}}}\) verifying property (1).
As for property (3), we estimate the \( \ell ^2 \) error using property (2), Lemma 4, and the above argument as
as desired. \(\square \)
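The swap argument at the heart of this proof can also be checked numerically. The sketch below (with illustrative sizes and noise level, not the theorem's constants) verifies that when every stored coefficient approximates the true one to within \( \epsilon \), keeping the 2s entries of largest approximate magnitude can only discard a true coefficient lying within \( 2\epsilon \) of the smallest kept one.

```python
import numpy as np

# Hedged illustration of the swap argument: entrywise approximation
# error at most eps means selecting the 2s largest approximate entries
# can only displace a true coefficient by a comparably sized one.
# K, s, and eps are illustrative, not the theorem's parameters.

rng = np.random.default_rng(2)
K, s, eps = 100, 5, 0.05
x_true = rng.standard_normal(K)
x_approx = x_true + rng.uniform(-eps, eps, K)   # entrywise error <= eps

keep = np.argsort(np.abs(x_approx))[-2 * s:]    # 2s largest approximations
drop = np.setdiff1d(np.arange(K), keep)

# any dropped true coefficient is at most 2*eps above the smallest kept one
print(np.abs(x_true)[drop].max() <= np.abs(x_true)[keep].min() + 2 * eps)  # True
```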
Proof of Theorem 2
We again restate the theorem for convenience.
Theorem 2
For a signal \( a \in \mathscr {W}(\mathbb { T }) \cap C( \mathbb { T } ) \) corrupted by some arbitrary noise \( \mu : \mathbb { T } \rightarrow {\mathbb {C}}\), and \( 1 \le r \le \frac{M}{36} \), Algorithm 1 of [27], denoted \( \mathscr {A}_{ 2s, M }^{ \mathrm {disc} } \), will output a 2s-sparse coefficient vector \( {{\mathbf {v}}}\in {\mathbb {C}}^M \) which
1. reconstructs every frequency of \( {{\mathbf {F}}}_M \, {{\mathbf {a}}}\in {\mathbb {C}}^M \), \( \omega \in \mathscr {B}_M \), with corresponding aliased Fourier coefficient meeting the tolerance
$$\begin{aligned} | ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_\omega | > 12(1 + \sqrt{ 2 }) \left( \frac{\Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_s^\mathrm {opt}\Vert _1}{2s} + 2( \Vert {{\mathbf {a}}}\Vert _\infty M^{ -r } + \Vert \varvec{\mu } \Vert _{ \infty } )\right) , \end{aligned}$$
2. satisfies the \( \ell ^\infty \) error estimate for recovered coefficients
$$\begin{aligned} \Vert ({{\mathbf {F}}}_M \, {{\mathbf {a}}}- {{\mathbf {v}}})|_{ {{\,\mathrm{supp}\,}}({{\mathbf {v}}}) } \Vert _\infty \le 3 \sqrt{ 2 } \left( \frac{\Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_s^\mathrm { opt }\Vert _1}{2s} + 2 (\Vert {{\mathbf {a}}}\Vert _\infty M^{ -r } + \Vert \varvec{\mu } \Vert _\infty ) \right) , \end{aligned}$$
3. satisfies the \( \ell ^2 \) error estimate
$$\begin{aligned} \Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- {{\mathbf {v}}}\Vert _2&\le \Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_{ 2s }^\mathrm {opt} \Vert _2 + 38 \frac{\Vert {{\mathbf {F}}}_M \, {{\mathbf {a}}}- ({{\mathbf {F}}}_M \, {{\mathbf {a}}})_s^\mathrm {opt} \Vert _1}{\sqrt{ s }} \\&\quad + 152 \sqrt{ s } (\Vert {{\mathbf {a}}}\Vert _\infty M^{ -r } + \Vert \varvec{\mu } \Vert _\infty ), \end{aligned}$$
4. and the number of required samples of \( {{\mathbf {a}}}\) and the operation count for \( \mathscr {A}_{ 2s, M }^\mathrm {disc} \) are
$$\begin{aligned} \mathscr {O} \left( \frac{s^2 r^{ 3/2 } \log ^{ 11/2 } M}{\log s} \right) . \end{aligned}$$
The Monte Carlo variant of \( \mathscr {A}_{ 2s, M }^\mathrm {disc} \), denoted \( \mathscr {A}_{ 2s, M }^\mathrm {disc, MC} \), satisfies all of the conditions (1)–(3) simultaneously with probability \( (1 - \sigma ) \in [2/3, 1) \) and has number of required samples and operation count
Proof
All notation in this proof matches that in [27] (in particular, we use f to denote the one-dimensional function in place of a in the theorem statement, and \( N = 2M + 1\)). We begin by substituting the \( 2\pi \)-periodic Gaussian filter given in (3) on page 756 of [27] with the 1-periodic Gaussian and associated Fourier transform
Note then that all results regarding the Fourier transform remain unchanged, and since this 1-periodic Gaussian is just a rescaling of the \( 2\pi \)-periodic one used in [27], the bound in [27, Lemma 1] holds with a similarly compressed Gaussian, that is, for all \( x \in \left[ -\frac{1}{2}, \frac{1}{2} \right] \)
Analogous results up to and including [27, Lemma 10] for 1-periodic functions then hold straightforwardly.
Assuming that our signal measurements \( {{\mathbf {f}}}= (f(y_j))_{ j = 0 }^{ 2M } = (f( \frac{j}{N} ))_{ j = 0 }^{ 2M } \) are corrupted by some discrete noise \( \varvec{\mu } = (\mu _j)_{ j = 0 }^{ 2M } \), we consider for any \( x \in \mathbb {T} \) a similar bound to [27, Lemma 10]. Here, \( j' := \arg \min _{ j }| x - y_j| \) and \( \kappa := \left\lceil \gamma \ln N \right\rceil + 1 \) for some \( \gamma \in \mathbb { R }^+ \) to be determined. Then,
We bound the first term in this sum by a direct application of [27, Lemma 10]; however, we take this opportunity to reduce the constant in the bound given there. In particular, bounding this term by the final expression in the proof of [27, Lemma 10] and using our implicit assumption that \( 36 \le N \), we have
We now work on bounding the second term. First note that for all \( k \in [-\kappa , \kappa ] \cap \mathbb { Z } \),
Assuming without loss of generality that \( 0 \le x - y_{ j' } \), we can bound the nonnegatively indexed summands by (9) as
For the negatively indexed summands, the definition of \( j' = \arg \min _j |x - y_j| \) implies that \( x - y_{ j' } \le \frac{1}{2N} \). In particular,
implies
giving
We now bound the final exponential. We first recall from [27] the choices of parameters
with \( 1 \le r \le \frac{N}{36} \). For \( k \in [1, \kappa ] \cap \mathbb { Z } \) then,
Combining this with our bounds for the nonnegatively indexed summands (11) and the negatively indexed summands (12), we have
Expressing the final sum as a truncated lower Riemann sum and applying a change of variables on the resulting integral, we have
Making use of our parameter values from [27], and the fact that \( 1 \le r \le \frac{N}{36} \),
With our revised bound for (10) above, we reprove [27, Theorem 4] to estimate \( g *f \) by the truncated discrete convolution with noisy samples. In particular, we apply [27, Theorem 3], (10), (9), and finally our same assumption that \( 1 \le r \le \frac{N}{36} \) to obtain
Replacing all references of \( 3 \Vert {{\mathbf {f}}}\Vert _{ \infty } N^{ -r } \) by \( 2( \Vert {{\mathbf {f}}}\Vert _\infty N^{ -r } + \Vert \varvec{\mu } \Vert _\infty ) \) in the remainder of the steps up to proving [27, Theorem 5] gives the desired noise robustness (with a slightly improved constant).
Using the revised error estimates of the nonequispaced algorithm from Theorem 1 and redefining \( \delta = 3 (\Vert {\hat{{{\mathbf {f}}}}} - {\hat{{{\mathbf {f}}}}}_s^\mathrm {opt} \Vert _1 / 2s + 2( \Vert {{\mathbf {f}}}\Vert _\infty N^{ -r } + \Vert \varvec{ \mu } \Vert _\infty )) \) as in the proof of [27, Theorem 5] (which also contains the proof of property (2)), the discretization algorithm [27, Algorithm 1] will produce candidate Fourier coefficient approximations in lines 9 and 12 corresponding to every \( |{\hat{f}}_\omega | \ge (4 + 2 \sqrt{ 2 }) \delta \) in place of \( 4\delta \) in Theorem 1. The exact same argument as in the proof of Theorem 1 then applies to the selection of the 2s-largest entries of this approximation with the revised threshold values and error bounds to give properties (1) and (3).
In detail, [27, Lemma 13] and the discussion right after its statement gives that property (2) holds for any approximate coefficient with frequency recovered throughout the algorithm (which, for the purposes of the following discussion, we will store in \( \mathbf {x} \) rather than \( {\hat{R}} \) defined in [27, Algorithm 1]), not just those in the final output \( \mathbf {v} := \mathbf {x}_s^{ \mathrm {opt} } \). Additionally, by the same lemma and our revised bounds from Theorem 1, any frequency \( \omega \in [N] \) satisfying \( |f_\omega | > (4 + 2 \sqrt{ 2 })\delta \) will have an associated coefficient estimate in \( \mathbf {x} \).
By Lemma 4, \( | \mathscr {S}_{ (4 + 2 \sqrt{ 2 })\delta } | \le 2s = | {{\,\mathrm{supp}\,}}(\mathbf {v}) | \), and so if \( \omega \in \mathscr {S}_{ (4 + 2 \sqrt{ 2 })\delta } \setminus {{\,\mathrm{supp}\,}}( \mathbf {v} ) \), there exists some \( \omega ' \in {{\,\mathrm{supp}\,}}(\mathbf {v}) \setminus \mathscr {S}_{ (4 + 2 \sqrt{ 2 })\delta } \) such that \( v_{\omega '} \) took the place of \( v_{ \omega } \) in \( {{\,\mathrm{supp}\,}}(\mathbf {v}) \). In particular, this means that \( |x_{ \omega ' }| \ge |x_{ \omega }| \), \( |{\hat{f}}_{ \omega ' }| \le (4 + 2 \sqrt{ 2 })\delta \), and \( |{\hat{f}}_{ \omega }| > (4 + 2 \sqrt{ 2 })\delta \). Thus,
$$\begin{aligned} |{\hat{f}}_{ \omega }| \le |x_{ \omega }| + \sqrt{ 2 } \delta \le |x_{ \omega ' }| + \sqrt{ 2 } \delta \le |{\hat{f}}_{ \omega ' }| + 2 \sqrt{ 2 } \delta \le (4 + 2 \sqrt{ 2 }) \delta + 2 \sqrt{ 2 } \delta , \end{aligned}$$
implying that \( |{\hat{f}}_\omega | \le 4(1 + \sqrt{ 2 })\delta \) and therefore proving (1).
Finally, to prove (3), we use Lemma 4, and consider
which finishes the proof. \(\square \)
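The filtering step underlying this proof, convolving samples against a periodized Gaussian truncated to its \( \mathscr {O}( \log N ) \) nearest neighbors, can be illustrated numerically: because the Gaussian tail decays so rapidly, discarding the far taps changes the convolution only negligibly. The filter width and truncation length below are illustrative choices, not the parameter values of [27].

```python
import numpy as np

# Hedged sketch: a circular convolution with a periodized Gaussian,
# truncated to kappa taps on each side, matches the full convolution
# up to the (exponentially small) Gaussian tail. sigma and kappa are
# illustrative, not the parameters chosen in the cited proof.

N = 256
sigma = 4 / N                       # Gaussian std on the unit torus
kappa = 32                          # taps kept on each side of a sample

j = np.arange(N)
# periodized Gaussian weights at grid distances wrapped to [-1/2, 1/2)
dist = ((j / N + 0.5) % 1.0) - 0.5
g = np.exp(-dist**2 / (2 * sigma**2))

rng = np.random.default_rng(3)
f = rng.standard_normal(N)

# full circular convolution via FFT
full = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

# truncated convolution: zero out all but the 2*kappa + 1 nearest taps
g_trunc = np.where(np.minimum(j, N - j) <= kappa, g, 0.0)
trunc = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g_trunc)).real

rel_err = np.max(np.abs(full - trunc)) / np.max(np.abs(full))
print(rel_err < 1e-8)  # True
```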
About this article
Cite this article
Gross, C., Iwen, M., Kämmerer, L. et al. Sparse Fourier transforms on rank-1 lattices for the rapid and low-memory approximation of functions of many variables. Sampl. Theory Signal Process. Data Anal. 20, 1 (2022). https://doi.org/10.1007/s43670-021-00018-y
Keywords
- Multivariate Fourier approximation
- Approximation algorithms
- Fast Fourier transforms
- Sparse Fourier transforms
- Rank-1 lattices
- Fast algorithms