A New Class of Fully Discrete Sparse Fourier Transforms: Faster Stable Implementations with Guarantees

Abstract

In this paper we consider sparse Fourier transform (SFT) algorithms for approximately computing the best s-term approximation of the discrete Fourier transform (DFT) \({\hat{\mathbf {f}}}\in {{\mathbb {C}}}^N\) of any given input vector \(\mathbf {f}\in {{\mathbb {C}}}^N\) in just \(\left( s \log N\right) ^{{{\mathcal {O}}}(1)}\)-time using only a similarly small number of entries of \(\mathbf {f}\). In particular, we present a deterministic SFT algorithm which is guaranteed to always recover a near best s-term approximation of the DFT of any given input vector \(\mathbf {f}\in {{\mathbb {C}}}^N\) in \({{\mathcal {O}}} \left( s^2 \log ^{\frac{11}{2}} (N) \right) \)-time. Unlike previous deterministic results of this kind, our deterministic result holds for both arbitrary vectors \(\mathbf {f}\in {{\mathbb {C}}}^N\) and vector lengths N. In addition to these deterministic SFT results, we also develop several new publicly available randomized SFT implementations for approximately computing \({\hat{\mathbf {f}}}\) from \(\mathbf {f}\) using the same general techniques. The best of these new implementations is shown to outperform existing discrete sparse Fourier transform methods with respect to both runtime and noise robustness for large vector lengths N.

Notes

  1. Note that methods which compute the DFT \({\hat{\mathbf {f}}}\) of a given vector \(\mathbf {f}\) implicitly assume that \(\mathbf {f}\) contains equally spaced samples from the trigonometric polynomial f above.

  2. Note that \({\hat{\mathbf {f}}}_{s}^{\mathrm{opt}}\) may not be unique as there can be ties for the sth largest entry in magnitude of \({\hat{\mathbf {f}}}\). This trivial ambiguity turns out not to matter.

  3. Theorem 1 is a slightly simplified version of Theorem 5 proven in Sect. 4.

  4. Of course, deterministic algorithms with error guarantees of the type in (1) do exist for more restricted classes of periodic functions f. See, e.g., [3, 4, 27] for some examples. These include USSFT methods developed for periodic functions with structured Fourier support [3], which are useful for, among other things, the fast approximation of functions which exhibit sparsity with respect to other bounded orthonormal basis functions [16].

  5. The interested reader may refer to Appendix A for the proof of (2).

  6. We hasten to point out, moreover, that similar ideas can also be employed for adaptive and noise robust SFT algorithms in order to approximately evaluate f in an “on demand” fashion as well. We leave the details to the interested reader.

  7. Each function evaluation \(f(x_k)\) needs to be accurately computed in just \({\mathcal {O}}(\log ^c N)\)-time in order to allow us to achieve our overall desired runtime for Algorithm 1.

  8. The code for both DMSFT variants is available at https://sourceforge.net/projects/aafftannarborfa/.

  9. The CLW-DSFT code is available at www.math.msu.edu/~markiwen/Code.html.

  10. Code for GFFT is also available at www.math.msu.edu/~markiwen/Code.html.

  11. This code is available at http://www.fftw.org/.

  12. This code is available at https://groups.csail.mit.edu/netmit/sFFT/.

  13. The SNR is defined as \(\mathrm{SNR} = 20\log _{10} \frac{\parallel \mathbf {f}\parallel _2}{\parallel \mathbf {n}\parallel _2}\), where \(\mathbf {f}\) is the length N input vector and \(\mathbf {n}\) is the length N noise vector.
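
For concreteness, the following minimal Python/numpy sketch (not from the paper; the function names and parameter values are our own) computes this SNR and scales a noise vector so that a prescribed SNR is attained:

```python
import numpy as np

def snr_db(f, n):
    """SNR between signal f and noise n, using SNR = 20*log10(||f||_2 / ||n||_2)."""
    return 20.0 * np.log10(np.linalg.norm(f) / np.linalg.norm(n))

def add_noise_at_snr(f, target_snr_db, rng):
    """Return (f + n, n) where complex Gaussian noise n is scaled so that
    snr_db(f, n) equals target_snr_db."""
    n = rng.standard_normal(len(f)) + 1j * rng.standard_normal(len(f))
    n *= np.linalg.norm(f) / (np.linalg.norm(n) * 10.0 ** (target_snr_db / 20.0))
    return f + n, n

# Example: a length-1024 random input vector corrupted at 20 dB SNR.
rng = np.random.default_rng(0)
f = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
f_noisy, n = add_noise_at_snr(f, 20.0, rng)
assert abs(snr_db(f, n) - 20.0) < 1e-9
```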

References

  1. Bailey, J., Iwen, M.A., Spencer, C.V.: On the design of deterministic matrices for fast recovery of Fourier compressible functions. SIAM J. Matrix Anal. Appl. 33(1), 263–289 (2012)

  2. Beylkin, G.: On the fast Fourier transform of functions with singularities. Appl. Comput. Harmon. Anal. 2(4), 363–381 (1995)

  3. Bittens, S.: Sparse FFT for Functions with Short Frequency Support. University of Göttingen, Göttingen (2016)

  4. Bittens, S., Zhang, R., Iwen, M.A.: A deterministic sparse FFT for functions with structured Fourier sparsity. arXiv:1705.05256 (2017)

  5. Bluestein, L.: A linear filtering approach to the computation of discrete Fourier transform. IEEE Trans. Audio Electroacoust. 18(4), 451–455 (1970)

  6. Christlieb, A., Lawlor, D., Wang, Y.: A multiscale sub-linear time Fourier algorithm for noisy data. Appl. Comput. Harmon. Anal. 40, 553–574 (2016)

  7. Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best k-term approximation. J. Am. Math. Soc. 22(1), 211–231 (2009)

  8. Cooley, J.W., Tukey, J.W.: An algorithm for the machine calculation of complex Fourier series. Math. Comput. 19(90), 297–301 (1965)

  9. Dutt, A., Rokhlin, V.: Fast Fourier transforms for nonequispaced data. SIAM J. Sci. Comput. 14(6), 1368–1393 (1993)

  10. Dutt, A., Rokhlin, V.: Fast Fourier transforms for nonequispaced data. ii. Appl. Comput. Harmon. Anal. 2(1), 85–100 (1995)

  11. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Birkhäuser, Basel (2013)

  12. Gilbert, A.C., Muthukrishnan, S., Strauss, M.: Improved time bounds for near-optimal sparse Fourier representations. In: Proceedings of the Optics & Photonics 2005, pp. 59141A–59141A. International Society for Optics and Photonics (2005)

  13. Gilbert, A.C., Strauss, M.J., Tropp, J.A.: A tutorial on fast Fourier sampling. IEEE Signal Process. Mag. 25(2), 57–66 (2008)

  14. Gilbert, A.C., Indyk, P., Iwen, M., Schmidt, L.: Recent developments in the sparse Fourier transform: a compressed Fourier transform for big data. IEEE Signal Process. Mag. 31(5), 91–100 (2014)

  15. Hassanieh, H., Indyk, P., Katabi, D., Price, E.: Simple and practical algorithm for sparse Fourier transform. In: Proceedings of the SODA (2012)

  16. Hu, X., Iwen, M., Kim, H.: Rapidly computing sparse Legendre expansions via sparse Fourier transforms. Numer. Algorithms 74(4), 1029–1059 (2017)

  17. Iwen, M.A.: A deterministic sub-linear time sparse Fourier algorithm via non-adaptive compressed sensing methods. In: Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 20–29. Society for Industrial and Applied Mathematics (2008)

  18. Iwen, M.A.: Simple deterministically constructible RIP matrices with sublinear Fourier sampling requirements. In: Proceedings of the CISS, pp. 870–875 (2009)

  19. Iwen, M.A.: Combinatorial sublinear-time Fourier algorithms. Found. Comput. Math. 10, 303–338 (2010)

  20. Iwen, M.A.: Notes on lemma 6. Preprint at www.math.msu.edu/~markiwen/Papers/Lemma6_FOCM_10.pdf (2012)

  21. Iwen, M.A.: Improved approximation guarantees for sublinear-time Fourier algorithms. Appl. Comput. Harmon. Anal. 34, 57–82 (2013)

  22. Iwen, M., Gilbert, A., Strauss, M., et al.: Empirical evaluation of a sub-linear time sparse DFT algorithm. Commun. Math. Sci. 5(4), 981–998 (2007)

  23. Keiner, J., Kunis, S., Potts, D.: Using NFFT 3–a software library for various nonequispaced fast Fourier transforms. ACM Trans. Math. Softw. 36(4), 19:1–19:30 (2009)

  24. Laska, J., Kirolos, S., Massoud, Y., Baraniuk, R., Gilbert, A., Iwen, M., Strauss, M.: Random sampling for analog-to-information conversion of wideband signals. In: Proceedings of the 2006 IEEE Dallas/CAS Workshop on Design, Applications, Integration and Software, pp. 119–122. IEEE (2006)

  25. Lawlor, D., Wang, Y., Christlieb, A.: Adaptive sub-linear time Fourier algorithms. Adv. Adapt. Data Anal. 5(01), 1350003 (2013)

  26. Morotti, L.: Explicit universal sampling sets in finite vector spaces. Appl. Comput. Harmon. Anal. (2016). https://doi.org/10.1016/j.acha.2016.06.001

  27. Plonka, G., Wannenwetsch, K.: A deterministic sparse FFT algorithm for vectors with small support. Numer. Algorithms 71(4), 889–905 (2016)

  28. Rabiner, L., Schafer, R., Rader, C.: The chirp z-transform algorithm. IEEE Trans. Audio Electroacoust. 17(2), 86–92 (1969)

  29. Segal, I., Iwen, M.: Improved sparse Fourier approximation results: Faster implementations and stronger guarantees. Numer. Algorithms 63, 239–263 (2013)

  30. Steidl, G.: A note on fast Fourier transforms for nonequispaced grids. Adv. Comput. Math. 9, 337–353 (1998)


Acknowledgements

M.A. Iwen, R. Zhang, and S. Merhi were all supported in part by NSF DMS-1416752. The authors would like to thank Aditya Viswanathan for helpful comments and feedback on the first draft of the paper.

Author information

Corresponding author

Correspondence to Sami Merhi.

Additional information

Communicated by Hans G. Feichtinger.

Appendices

Appendix A: Fourier Basics: Continuous versus Discrete Fourier Transforms for Trigonometric Polynomials

Our objective in this appendix is to provide additional details regarding (2) and its relationship to the continuous sparse Fourier transform methods for periodic functions that we employ herein. Our starting point will be to assume only that we have been provided with a vector of data \(\mathbf {f}\in {{\mathbb {C}}}^{N}\). Our goal is to rapidly approximate the matrix vector product \({\hat{\mathbf {f}}}= F \mathbf {f}\) where \(F\in {{\mathbb {C}}}^{N\times N}\) is the DFT matrix whose entries are given by

$$\begin{aligned} F_{\omega ,j} :=\frac{(-1)^\omega }{N} \mathbb {e}^{-\frac{2\pi \mathbb {i}\cdot \omega \cdot j}{N}} \end{aligned}$$

for \(\omega \in \left( -\left\lceil \frac{N}{2}\right\rceil ,\left\lfloor \frac{N}{2}\right\rfloor \right] \cap {\mathbb {Z}}\) and \(j = 0, \dots , N-1\).

Beginning from this starting point, one may choose to regard the given vector of data \(\mathbf {f}\) as having been generated by sampling a \(2 \pi \)-periodic trigonometric polynomial \(f: [-\pi , \pi ] \rightarrow {{\mathbb {C}}}\) of the form

$$\begin{aligned} f\left( x\right) =\sum _{\omega \in B}\widehat{f}_{\omega } \mathbb {e}^{\mathbb {i}\omega x} \end{aligned}$$

where \(B:=\left( -\left\lceil \frac{N}{2}\right\rceil ,\left\lfloor \frac{N}{2}\right\rfloor \right] \cap {\mathbb {Z}}\). In particular, herein we will assume that \(\mathbf {f}\) has its jth entry generated by \(f_j := f\left( -\pi +\frac{2\pi j}{N}\right) \) for \(j = 0, \dots , N-1\). Note that there is exactly one such f for the given data \(\mathbf {f}\) since \(|B| = N\) (i.e., f is the unique interpolating polynomial for \(\mathbf {f}\) with \(\omega \in B\)).

Considering f just above, we can now see that for any \(k \in \mathbb {Z}\) the associated Fourier series coefficient of f is

$$\begin{aligned} \frac{1}{2\pi }\int _{-\pi }^{\pi }f\left( x\right) \mathbb {e}^{-\mathbb {i}k x}~dx&~=~ \frac{1}{2\pi }\int _{-\pi }^{\pi } \left( \sum _{\omega \in B} \widehat{f}_{\omega } \mathbb {e}^{\mathbb {i}\omega x} \right) \mathbb {e}^{-\mathbb {i}k x}~dx \\&~=~ \sum _{\omega \in B} \widehat{f}_{\omega } \left( \frac{1}{2\pi }\int _{-\pi }^{\pi } \mathbb {e}^{\mathbb {i}(\omega - k) x}~dx \right) \\&~=~ {\left\{ \begin{array}{ll} \widehat{f}_{k}, &{}\quad k \in B \\ 0, &{}\quad \text {otherwise} \end{array}\right. }. \end{aligned}$$

Changing our focus now to the discrete Fourier transform of \(\mathbf {f}\), considered as samples from f, we can see that

$$\begin{aligned} \left( F \mathbf {f}\right) _{k} ~=~ \frac{(-1)^{k}}{N} \sum ^{N-1}_{j=0} f\left( -\pi +\frac{2\pi j}{N}\right) \mathbb {e}^{-\frac{2\pi \mathbb {i}\cdot k \cdot j}{N}} ~=~ \frac{(-1)^{k}}{N} \sum _{\omega \in B} \widehat{f}_{\omega } (-1)^{\omega } \sum ^{N-1}_{j=0} \mathbb {e}^{\frac{2\pi \mathbb {i}(\omega - k) j}{N}} ~=~ \widehat{f}_{k} \end{aligned}$$

holds for all \(k \in B\), where we have used that \(\mathbb {e}^{-\mathbb {i}\pi \omega } = (-1)^{\omega }\), and that \(\sum ^{N-1}_{j=0} \mathbb {e}^{\frac{2\pi \mathbb {i}(\omega - k) j}{N}} = N\) when \(\omega = k\) and vanishes for all other \(\omega , k \in B\). Note that the line above establishes (2), where the vector \({\hat{\mathbf {f}}}\in {{\mathbb {C}}}^N\) exactly contains the nonzero Fourier series coefficients of f as its entries. As a result, we can see that computing the Fourier series coefficients of f is equivalent to computing the matrix-vector product \({\hat{\mathbf {f}}}= F \mathbf {f}\) for our given data \(\mathbf {f}\).
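
The following short numerical sketch (illustrative only, not taken from the paper; plain Python/numpy with frequencies ordered as in B and parameter values of our choosing) checks this equivalence directly: it builds F from the definition above, samples an s-sparse trigonometric polynomial at the nodes \(x_j = -\pi + \frac{2\pi j}{N}\), verifies that \(F\mathbf {f}\) returns its Fourier coefficients, and then forms a best s-term approximation \({\hat{\mathbf {f}}}_{s}^{\mathrm{opt}}\) by keeping the s largest-magnitude entries.

```python
import numpy as np

N, s = 64, 5
rng = np.random.default_rng(1)
j = np.arange(N)

# Frequency band B = (-ceil(N/2), floor(N/2)] ∩ Z; for N = 64 this is -31, ..., 32.
B = j - ((N + 1) // 2 - 1)

# DFT matrix with entries F_{w,j} = ((-1)^w / N) * exp(-2*pi*i*w*j / N).
F = ((-1.0) ** B)[:, None] / N * np.exp(-2j * np.pi * np.outer(B, j) / N)

# An s-sparse trigonometric polynomial f(x) = sum_{w in B} fhat_w * exp(i*w*x),
# sampled at the equispaced nodes x_j = -pi + 2*pi*j/N.
fhat = np.zeros(N, dtype=complex)
support = rng.choice(N, size=s, replace=False)
fhat[support] = rng.standard_normal(s) + 1j * rng.standard_normal(s)
x = -np.pi + 2 * np.pi * j / N
f = np.exp(1j * np.outer(x, B)) @ fhat

# The DFT recovers the Fourier coefficients exactly, i.e. F f = fhat (equation (2)).
assert np.allclose(F @ f, fhat)

# A best s-term approximation keeps only the s largest-magnitude entries of fhat.
fhat_s_opt = np.zeros_like(fhat)
largest = np.argsort(np.abs(fhat))[-s:]
fhat_s_opt[largest] = fhat[largest]
```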

Appendix B: Proofs of Lemmas 1, 2, and 3

We will restate each lemma before its proof for ease of reference.

Lemma 14

(Restatement of Lemma 1) The \(2\pi \hbox {-periodic}\) Gaussian \(g:\left[ -\pi ,\pi \right] \rightarrow {\mathbb {R}}^{+}\) has

$$\begin{aligned} g\left( x\right) \le \left( \frac{3}{c_1}+\frac{1}{\sqrt{2\pi }} \right) \mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}} \end{aligned}$$

for all \(x \in \left[ -\pi ,\pi \right] \).

Proof

Observe that

$$\begin{aligned} c_{1}g\left( x\right) =\sum _{n=-\infty }^{\infty }\mathbb {e}^{-\frac{\left( x-2n\pi \right) ^{2}}{2c_{1}^{2}}}&=\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}+\mathbb {e}^{-\frac{\left( x-2\pi \right) ^{2}}{2c_{1}^{2}}}+\mathbb {e}^{-\frac{\left( x+2\pi \right) ^{2}}{2c_{1}^{2}}}+\sum _{\left| n\right| \ge 2}\mathbb {e}^{-\frac{\left( x-2n\pi \right) ^{2}}{2c_{1}^{2}}}\\&\le 3\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}+\int _{1}^{\infty }\mathbb {e}^{-\frac{\left( x-2n\pi \right) ^{2}}{2c_{1}^{2}}}dn+\int _{1}^{\infty }\mathbb {e}^{-\frac{\left( x+2n\pi \right) ^{2}}{2c_{1}^{2}}}dn \end{aligned}$$

holds since the series above have monotonically decreasing positive terms, and \(x\in \left[ -\pi ,\pi \right] \).

Now, if \(x\in \left[ 0,\pi \right] \) and \(n\ge 1\), one has

$$\begin{aligned} \mathbb {e}^{-\frac{\left( 2n+1\right) ^{2}\pi ^{2}}{2c_{1}^{2}}}\le \mathbb {e}^{-\frac{\left( x+2n\pi \right) ^{2}}{2c_{1}^{2}}}\le \mathbb {e}^{-\frac{4n^{2}\pi ^{2}}{2c_{1}^{2}}}\le \mathbb {e}^{-\frac{\left( x-2n\pi \right) ^{2}}{2c_{1}^{2}}}\le \mathbb {e}^{-\frac{\left( 2n-1\right) ^{2}\pi ^{2}}{2c_{1}^{2}}}, \end{aligned}$$

which yields

$$\begin{aligned} c_{1}g\left( x\right)&~\le ~3\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}+2\int _{1}^{\infty }\mathbb {e}^{-\frac{\pi ^{2}\left( 2n-1\right) ^{2}}{2c_{1}^{2}}}dn \\&~=~ 3\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}+\frac{1}{2}\left( \int _{-\infty }^{\infty }\mathbb {e}^{-\frac{\pi ^{2}m^{2}}{2c_{1}^{2}}}dm-\int _{-1}^{1}\mathbb {e}^{-\frac{\pi ^{2}m^{2}}{2c_{1}^{2}}}dm\right) \\&=3\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}+\frac{c_{1}}{\sqrt{2\pi }}-\frac{1}{2}\int _{-1}^{1}\mathbb {e}^{-\frac{\pi ^{2}m^{2}}{2c_{1}^{2}}}dm. \end{aligned}$$

Using Lemma 8 to bound the last integral we can now get that

$$\begin{aligned} c_{1}g\left( x\right)&\le 3\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}+\frac{c_{1}}{\sqrt{2\pi }}-\frac{1}{2}\frac{\sqrt{2}c_{1}}{\pi }\sqrt{\pi \left( 1-\mathbb {e}^{-\frac{\pi ^{2}}{2c_{1}^{2}}}\right) }\\&= 3\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}+\frac{c_{1}}{\sqrt{2\pi }}\left( 1-\sqrt{\left( 1-\mathbb {e}^{-\frac{\pi ^{2}}{2c_{1}^{2}}}\right) }\right) \\&\le 3\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}+\frac{c_{1}}{\sqrt{2\pi }}\mathbb {e}^{-\frac{\pi ^{2}}{2c_{1}^{2}}} \le 3\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}+\frac{c_{1}}{\sqrt{2\pi }}\mathbb {e}^{-\frac{x^{2}}{2c_{1}^{2}}}. \end{aligned}$$

Recalling that g is even, we see that this inequality also holds for all \(x \in [-\pi ,0]\). \(\square \)
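
As a quick sanity check (not part of the proof), the bound of Lemma 1 can be verified numerically by truncating the periodization sum defining g; the truncation level and the value of \(c_1\) below are arbitrary illustrative choices.

```python
import numpy as np

def periodic_gaussian(x, c1, n_max=50):
    """Truncated periodization: c1 * g(x) = sum_n exp(-(x - 2*pi*n)^2 / (2*c1^2))."""
    n = np.arange(-n_max, n_max + 1)
    return np.exp(-((x[:, None] - 2 * np.pi * n) ** 2) / (2 * c1 ** 2)).sum(axis=1) / c1

c1 = 0.1                                   # illustrative; the paper takes c1 = beta*sqrt(ln N)/N
x = np.linspace(-np.pi, np.pi, 2001)
bound = (3 / c1 + 1 / np.sqrt(2 * np.pi)) * np.exp(-x ** 2 / (2 * c1 ** 2))
assert np.all(periodic_gaussian(x, c1) <= bound + 1e-12)
```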

Lemma 15

(Restatement of Lemma 2) The \(2\pi \hbox {-periodic}\) Gaussian \(g:\left[ -\pi ,\pi \right] \rightarrow {\mathbb {R}}^{+}\) has

$$\begin{aligned} \widehat{g}_{\omega } = \frac{1}{\sqrt{2\pi }}\mathbb {e}^{-\frac{c_1^2 \omega ^2}{2}} \end{aligned}$$

for all \(\omega \in {\mathbb {Z}}\). Thus, \(\widehat{g}=\left\{ \widehat{g}_{\omega }\right\} _{\omega \in {\mathbb {Z}}}\in \ell ^{2}\) decreases monotonically as \(|\omega |\) increases, and also has \(\Vert \widehat{g} \Vert _{\infty } = \frac{1}{\sqrt{2 \pi }}\).

Proof

Starting with the definition of the Fourier transform, we calculate

$$\begin{aligned} \widehat{g}_{\omega }= & {} \frac{1}{c_1}\sum _{n=-\infty }^{\infty }\frac{1}{2\pi }\int _{-\pi }^{\pi }\mathbb {e}^{-\frac{(x-2n\pi )^{2}}{2c_1^{2}}}\mathbb {e}^{-\mathbb {i}\omega x}~dx\\= & {} \frac{1}{c_1}\sum _{n=-\infty }^{\infty }\frac{1}{2\pi }\int _{-\pi }^{\pi }\mathbb {e}^{-\frac{(x-2n\pi )^{2}}{2c_1^{2}}}\mathbb {e}^{-\mathbb {i}\omega (x-2n\pi )}~dx\\= & {} \frac{1}{c_1}\sum _{n=-\infty }^{\infty }\frac{1}{2\pi }\int _{-\pi -2n\pi }^{\pi -2n\pi }\mathbb {e}^{-\frac{u{}^{2}}{2c_1^{2}}}\mathbb {e}^{-\mathbb {i}\omega u}~du\\= & {} \frac{1}{2\pi c_1}\int _{-\infty }^{\infty }\mathbb {e}^{-\frac{u{}^{2}}{2c_1^{2}}} \mathbb {e}^{-\mathbb {i}\omega u}~du\\= & {} \frac{c_1\sqrt{2\pi }}{2\pi c_1} \mathbb {e}^{-\frac{c_1^{2}\omega ^{2}}{2}}\\= & {} \frac{\mathbb {e}^{-\frac{c_1^{2}\omega ^{2}}{2}}}{\sqrt{2\pi }}. \end{aligned}$$

The last two assertions now follow easily. \(\square \)
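
Similarly, the closed form for \(\widehat{g}_{\omega }\) can be checked against a direct discretization of \(\frac{1}{2\pi }\int _{-\pi }^{\pi } g(x) \mathbb {e}^{-\mathbb {i}\omega x}~dx\). The sketch below (illustrative parameter values of our own) uses a uniform-grid rectangle rule, which is highly accurate for smooth periodic integrands.

```python
import numpy as np

c1, omega = 0.3, 4                                   # illustrative values
n = np.arange(-50, 51)                               # truncation of the periodization sum
M = 20000
x = -np.pi + 2 * np.pi * np.arange(M) / M            # uniform grid on [-pi, pi)
g = np.exp(-((x[:, None] - 2 * np.pi * n) ** 2) / (2 * c1 ** 2)).sum(axis=1) / c1

# (1/(2*pi)) * integral over [-pi, pi] of g(x) * exp(-i*omega*x), via the rectangle rule.
ghat_numeric = np.mean(g * np.exp(-1j * omega * x))
ghat_closed = np.exp(-(c1 ** 2) * omega ** 2 / 2) / np.sqrt(2 * np.pi)
assert abs(ghat_numeric - ghat_closed) < 1e-8
```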

Lemma 16

(Restatement of Lemma 3) Choose any \(\tau \in \left( 0, \frac{1}{\sqrt{2\pi }} \right) \), \(\alpha \in \left[ 1, \frac{N}{\sqrt{\ln N}} \right] \), and \(\beta \in \left( 0 , \alpha \sqrt{\frac{\ln \left( 1/\tau \sqrt{2\pi } \right) }{2}} ~\right] \). Let \(c_1 = \frac{\beta \sqrt{\ln N}}{N}\) in the definition of the periodic Gaussian g from (3). Then \(\widehat{g}_{\omega } \in \left[ \tau , \frac{1}{\sqrt{2\pi }} \right] \) for all \(\omega \in {\mathbb {Z}}\) with \(|\omega | \le \Bigl \lceil \frac{N}{\alpha \sqrt{\ln N}}\Bigr \rceil \).

Proof

By Lemma 2 above it suffices to show that

$$\begin{aligned} \frac{1}{\sqrt{2\pi }}\mathbb {e}^{-\frac{c_1^2 \left( \bigl \lceil \frac{N}{\alpha \sqrt{\ln N}}\bigr \rceil \right) ^2}{2}} \ge \tau , \end{aligned}$$

which holds if and only if

$$\begin{aligned} c_1^2 \left( \left\lceil \frac{N}{\alpha \sqrt{\ln N}}\right\rceil \right) ^2\le & {} 2 \ln \left( \frac{1}{\tau \sqrt{2\pi }}\right) \\ c_1\le & {} \frac{\sqrt{2 \ln \left( \frac{1}{\tau \sqrt{2\pi }}\right) }}{\left\lceil \frac{N}{\alpha \sqrt{\ln N}}\right\rceil }. \end{aligned}$$

Thus, it is enough to have

$$\begin{aligned} c_1 \le \frac{\sqrt{2 \ln \left( \frac{1}{\tau \sqrt{2\pi }}\right) }}{ \frac{N}{\alpha \sqrt{\ln N}} + 1} ~=~\frac{\alpha \sqrt{2 \ln \left( \frac{1}{\tau \sqrt{2\pi }}\right) \ln N}}{ N + \alpha \sqrt{\ln N}}, \end{aligned}$$

or,

$$\begin{aligned} c_1 = \frac{\beta \sqrt{\ln N}}{N} \le \frac{\alpha \sqrt{2 \ln \left( \frac{1}{\tau \sqrt{2\pi }}\right) \ln N}}{2N} \le \frac{\alpha \sqrt{2 \ln \left( \frac{1}{\tau \sqrt{2\pi }}\right) \ln N}}{ N + \alpha \sqrt{\ln N}}. \end{aligned}$$

This, in turn, is guaranteed by our choice of \(\beta \). \(\square \)
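
The following sketch (hypothetical parameter values, chosen only to satisfy the hypotheses of Lemma 3) checks the conclusion for one admissible choice of \(N\), \(\tau \), \(\alpha \), and \(\beta \).

```python
import numpy as np

N, tau, alpha = 1024, 0.05, 4.0                      # illustrative admissible values
assert 0 < tau < 1 / np.sqrt(2 * np.pi)
assert 1 <= alpha <= N / np.sqrt(np.log(N))

# Take beta at the upper end of its admissible range and set c1 = beta*sqrt(ln N)/N.
beta = alpha * np.sqrt(np.log(1 / (tau * np.sqrt(2 * np.pi))) / 2)
c1 = beta * np.sqrt(np.log(N)) / N

# Check that ghat_omega lies in [tau, 1/sqrt(2*pi)] on the stated frequency band.
w_max = int(np.ceil(N / (alpha * np.sqrt(np.log(N)))))
omega = np.arange(-w_max, w_max + 1)
ghat = np.exp(-(c1 ** 2) * omega ** 2 / 2) / np.sqrt(2 * np.pi)
assert np.all((tau <= ghat) & (ghat <= 1 / np.sqrt(2 * np.pi)))
```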

Appendix C: Proof of Lemma 4 and Theorem 2

We will restate Lemma 4 before its proof for ease of reference.

Lemma 17

(Restatement of Lemma 4) Let \(s, \epsilon ^{-1} \in {\mathbb {N}} \setminus \{ 1 \}\) with \((s/\epsilon ) \ge 2\), and \(\mathbf {n}\in {\mathbb {C}}^m\) be an arbitrary noise vector. There exists a set of m points \(\left\{ x_k \right\} ^m_{k=1} \subset [-\pi , \pi ]\) such that Algorithm 3 on page 72 of [21], when given access to the corrupted samples \(\left\{ f(x_k) + n_k \right\} ^m_{k=1}\), will identify a subset \(S \subseteq B\) which is guaranteed to contain all \(\omega \in B\) with

$$\begin{aligned} \left| \widehat{f}_{\omega } \right| > 4 \left( \frac{\epsilon \cdot \left\| {\hat{\mathbf {f}}}- {\hat{\mathbf {f}}}^{\mathrm{opt}}_{(s/\epsilon )} \right\| _1}{s} + \left\| \widehat{f} - \widehat{f}\vert _{B} \right\| _1 + \Vert \mathbf {n}\Vert _\infty \right) . \end{aligned}$$
(15)

Furthermore, every \(\omega \in S\) returned by Algorithm 3 will also have an associated Fourier series coefficient estimate \(z_{\omega } \in {\mathbb {C}}\) which is guaranteed to have

$$\begin{aligned} \left| \widehat{f}_{\omega } - z_{\omega } \right| \le \sqrt{2} \left( \frac{\epsilon \cdot \left\| {\hat{\mathbf {f}}}- {\hat{\mathbf {f}}}^{\mathrm{opt}}_{(s/\epsilon )} \right\| _1}{s} + \left\| \widehat{f} - \widehat{f}\vert _{B} \right\| _1 + \Vert \mathbf {n}\Vert _\infty \right) . \end{aligned}$$
(16)

Both the number of required samples, m, and Algorithm 3’s operation count are

$$\begin{aligned} {\mathcal {O}} \left( \frac{s^2 \cdot \log ^4 (N)}{\log \left( \frac{s}{\epsilon } \right) \cdot \epsilon ^2} \right) . \end{aligned}$$
(17)

If succeeding with probability \((1-\delta ) \in [2/3,1)\) is sufficient, and \((s/\epsilon ) \ge 2\), the Monte Carlo variant of Algorithm 3 referred to by Corollary 4 on page 74 of [21] may be used. This Monte Carlo variant reads only a randomly chosen subset of the noisy samples utilized by the deterministic algorithm,

$$\begin{aligned} \left\{ f(\tilde{x}_k) + \tilde{n}_k \right\} ^{\tilde{m}}_{k=1} \subseteq \left\{ f(x_k) + n_k \right\} ^m_{k=1}, \end{aligned}$$

yet it still outputs a subset \(S \subseteq B\) which is guaranteed to simultaneously satisfy both of the following properties with probability at least \(1-\delta \):

  (i) S will contain all \(\omega \in B\) satisfying (15), and

  (ii) all \(\omega \in S\) will have an associated coefficient estimate \(z_{\omega } \in {\mathbb {C}}\) satisfying (16).

Finally, both this Monte Carlo variant’s number of required samples, \(\tilde{m}\), as well as its operation count will also always be

$$\begin{aligned} {\mathcal {O}} \left( \frac{s}{\epsilon } \cdot \log ^3 (N) \cdot \log \left( \frac{N}{\delta } \right) \right) . \end{aligned}$$
(18)

Proof

The proof of this lemma involves a somewhat tedious and uninspired series of minor modifications to various results from [21]. In what follows we will outline the portions of that paper which need to be changed in order to obtain the stated lemma. Algorithm 3 on page 72 of [21] will provide the basis of our discussion.

In the first paragraph of our lemma we are provided with m contaminated evaluations of f, \(\left\{ f(x_k) + n_k \right\} ^m_{k=1}\), at the set of m points \(\left\{ x_k \right\} ^m_{k=1} \subset [-\pi , \pi ]\) required by line 4 of Algorithm 1 on page 67 of [21]. These contaminated evaluations of f will then be used to approximate the vector \(\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A} \in {\mathbb {C}}^m\) in line 4 of Algorithm 3. More specifically, using (18) on page 67 of [21] one can see that each \(\left( {\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j \in {\mathbb {C}}\) is effectively computed via a DFT

$$\begin{aligned} \left( {\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j = \frac{1}{s_j} \sum ^{s_j - 1}_{k=0} f \left( -\pi + \frac{2 \pi k}{s_j} \right) \mathbb {e}^{\frac{-2 \pi \mathbb {i}k h_j}{s_j}} \end{aligned}$$
(19)

for some integers \(0 \le h_j < s_j\). Note that we are guaranteed to have noisy evaluations of f at each of these points by assumption. That is, we have \(f \left( x_{j,k} \right) + n_{j,k}\) for all \(x_{j,k} := -\pi + \frac{2 \pi k}{s_j}\), \(k = 0, \dots , s_j - 1\).

We therefore approximate each \(\left( {\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j\) via an approximate DFT as per (19) by

$$\begin{aligned} E_j := \frac{1}{s_j} \sum ^{s_j - 1}_{k=0} \left( f \left( x_{j,k} \right) + n_{j,k} \right) \mathbb {e}^{\frac{-2 \pi \mathbb {i}k h_j}{s_j}}. \end{aligned}$$

One can now see that

$$\begin{aligned} \left| E_j - \left( {\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j \right| = \left| \frac{1}{s_j} \sum ^{s_j - 1}_{k=0} n_{j,k} \mathbb {e}^{\frac{-2 \pi \mathbb {i}k h_j}{s_j}} \right| \le \frac{1}{s_j} \sum ^{s_j - 1}_{k=0} \left| n_{j,k} \right| \le \Vert \mathbf {n}\Vert _{\infty } \end{aligned}$$
(20)

holds for all j. Every entry of both \({\mathcal {E}_{s_1,K} \tilde{\psi } \mathbf{A}}\) and \({\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}}\) referred to in Algorithm 3 will therefore be effectively replaced by its corresponding \(E_j\) estimate. Thus, the lemma we seek to prove is essentially obtained by simply incorporating the additional error estimate (20) into the analysis of Algorithm 3 in [21] wherever an \({\mathcal {E}_{s_1,K} \tilde{\psi } \mathbf{A}}\) or \({\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}}\) currently appears.
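
To make (19) and (20) concrete, here is a minimal numerical sketch (not the implementation from [21]; the sample length \(s_j\), residue \(h_j\), and test polynomial are hypothetical) of forming one noisy estimate \(E_j\) and confirming that its error never exceeds \(\Vert \mathbf {n}\Vert _{\infty }\).

```python
import numpy as np

rng = np.random.default_rng(2)
sj, hj = 17, 5                                       # hypothetical sample length s_j and residue h_j

# Noisy samples of a small trigonometric polynomial at x_{j,k} = -pi + 2*pi*k/s_j.
k = np.arange(sj)
x = -np.pi + 2 * np.pi * k / sj
f_vals = np.exp(1j * 5 * x) + 0.3 * np.exp(-1j * 2 * x)
noise = 0.01 * (rng.standard_normal(sj) + 1j * rng.standard_normal(sj))

# Equation (19) from exact samples, and its noisy counterpart E_j.
exact = np.mean(f_vals * np.exp(-2j * np.pi * k * hj / sj))
E_j = np.mean((f_vals + noise) * np.exp(-2j * np.pi * k * hj / sj))

# Error bound (20): |E_j - exact| <= ||n||_infty.
assert abs(E_j - exact) <= np.max(np.abs(noise))
```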

To show that lines 6 – 14 of Algorithm 3 will identify all \(\omega \in B\) satisfying (15) we can adapt the proof of Lemma 6 on page 72 of [21]. Choose any \(\omega \in B\) you like. Lemmas 3 and 5 from [21] together with (20) above ensure that both

$$\begin{aligned} \left| E_j - \widehat{f}_{\omega } \right|&\le \left| E_j - \left( {\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j \right| + \left| \left( {\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j - \widehat{f}_{\omega } \right| \nonumber \\&\le \Vert \mathbf {n}\Vert _{\infty } + \frac{\epsilon \cdot \left\| {\hat{\mathbf {f}}}- {\hat{\mathbf {f}}}^{\mathrm{opt}}_{(s/\epsilon )} \right\| _1}{s} + \left\| \widehat{f} - \widehat{f}\vert _{B} \right\| _1 \end{aligned}$$
(21)

and

$$\begin{aligned} \left| E_{j'} - \widehat{f}_{\omega } \right|&\le \left| E_{j'} - \left( {\mathcal {E}_{s_1,K} \tilde{\psi } \mathbf{A}} \right) _{j'} \right| + \left| \left( {\mathcal {E}_{s_1,K} \tilde{\psi } \mathbf{A}} \right) _{j'} - \widehat{f}_{\omega } \right| \nonumber \\&\le \Vert \mathbf {n}\Vert _{\infty } + \frac{\epsilon \cdot \left\| {\hat{\mathbf {f}}}- {\hat{\mathbf {f}}}^{\mathrm{opt}}_{(s/\epsilon )} \right\| _1}{s} + \left\| \widehat{f} - \widehat{f}\vert _{B} \right\| _1 \end{aligned}$$
(22)

hold for more than half of the j and \(j'\)-indexes that Algorithm 3 uses to approximate \(\widehat{f}_{\omega }\). The rest of the proof of Lemma 6 now follows exactly as in [21] after the \(\delta \) at the top of page 73 is redefined to be \(\delta := \frac{\epsilon \cdot \left\| {\hat{\mathbf {f}}}- {\hat{\mathbf {f}}}^{\mathrm{opt}}_{(s/\epsilon )} \right\| _1}{s} + \left\| \widehat{f} - \widehat{f}\vert _{B} \right\| _1 + \Vert \mathbf {n}\Vert _\infty \), each \(\left( { \mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j\) entry is replaced by \(E_j\), and each \(\left( {\mathcal {E}_{s_1,K} \tilde{\psi } \mathbf{A}} \right) _{j'}\) entry is replaced by \(E_{j'}\).

Similarly, to show that lines 15 – 18 of Algorithm 3 will produce an estimate \(z_{\omega } \in {\mathbb {C}}\) satisfying (16) for every \(\omega \in S\) one can simply modify the first few lines of the proof of Theorem 7 in Appendix F of [21]. In particular, one can redefine \(\delta \) as above, replace the appearance of each \(\left( { \mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j\) entry by \(E_j\), and then use (21). The bounds on the runtime follow from the last paragraph of the proof of Theorem 7 in Appendix F of [21] with no required changes. To finish, we note that the second paragraph of the lemma above follows from a completely analogous modification of the proof of Corollary 4 in Appendix G of [21]. \(\square \)

Appendix C.1: Proof of Theorem 2

To get the first paragraph of Theorem 2 one can simply utilize the proof of Theorem 7 exactly as it is written in Appendix F of [21] after redefining \(\delta \) as above, and then replacing the appearance of each \(\left( \mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A} \right) _j\) entry with its approximation \(E_j\). Once this has been done, equation (42) in the proof of Theorem 7 can then be taken as a consequence of Lemma 4 above. In addition, all references to Lemma 6 of [21] in the proof can then also be replaced with appeals to Lemma 4 above. To finish, the proof of Corollary 4 in Appendix G of [21] can now be modified in a completely analogous fashion in order to prove the second paragraph of Theorem 2.

Cite this article

Merhi, S., Zhang, R., Iwen, M.A. et al. A New Class of Fully Discrete Sparse Fourier Transforms: Faster Stable Implementations with Guarantees. J Fourier Anal Appl 25, 751–784 (2019). https://doi.org/10.1007/s00041-018-9616-4
