Abstract
In this paper we consider sparse Fourier transform (SFT) algorithms for approximately computing the best s-term approximation of the discrete Fourier transform (DFT) \({\hat{\mathbf {f}}}\in {{\mathbb {C}}}^N\) of any given input vector \(\mathbf {f}\in {{\mathbb {C}}}^N\) in just \(\left( s \log N\right) ^{{{\mathcal {O}}}(1)}\)-time using only a similarly small number of entries of \(\mathbf {f}\). In particular, we present a deterministic SFT algorithm which is guaranteed to always recover a near best s-term approximation of the DFT of any given input vector \(\mathbf {f}\in {{\mathbb {C}}}^N\) in \({{\mathcal {O}}} \left( s^2 \log ^{\frac{11}{2}} (N) \right) \)-time. Unlike previous deterministic results of this kind, our deterministic result holds for both arbitrary vectors \(\mathbf {f}\in {{\mathbb {C}}}^N\) and vector lengths N. In addition to these deterministic SFT results, we also develop several new publicly available randomized SFT implementations for approximately computing \({\hat{\mathbf {f}}}\) from \(\mathbf {f}\) using the same general techniques. The best of these new implementations is shown to outperform existing discrete sparse Fourier transform methods with respect to both runtime and noise robustness for large vector lengths N.
Notes
Note that methods which compute the DFT \({\hat{\mathbf {f}}}\) of a given vector \(\mathbf {f}\) implicitly assume that \(\mathbf {f}\) contains equally spaced samples from the trigonometric polynomial f above.
Note that \({\hat{\mathbf {f}}}_{s}^{\mathrm{opt}}\) may not be unique as there can be ties for the sth largest entry in magnitude of \({\hat{\mathbf {f}}}\). This trivial ambiguity turns out not to matter.
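For concreteness, the sketch below (in Python with NumPy) forms such a best s-term approximation by keeping the s largest-magnitude entries of \({\hat{\mathbf {f}}}\); the DFT normalization used in the example is one common convention and may differ from the one used in the body of the paper, and any tie-breaking rule yields an equally valid \({\hat{\mathbf {f}}}_{s}^{\mathrm{opt}}\).

```python
import numpy as np

def best_s_term(f_hat: np.ndarray, s: int) -> np.ndarray:
    """Keep the s largest-magnitude entries of f_hat and zero out the rest.
    Ties for the s-th largest magnitude are broken arbitrarily, which is
    exactly the (harmless) ambiguity mentioned in the note above."""
    keep = np.argpartition(np.abs(f_hat), -s)[-s:]
    out = np.zeros_like(f_hat)
    out[keep] = f_hat[keep]
    return out

# Example usage with an assumed 1/N DFT normalization (conventions vary).
N, s = 1024, 8
f = np.random.randn(N) + 1j * np.random.randn(N)
f_hat = np.fft.fft(f) / N
f_hat_s = best_s_term(f_hat, s)
```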
Of course deterministic algorithms with error guarantees of the type of (1) do exist for more restricted classes of periodic functions f. See, e.g. [3, 4, 27] for some examples. These include USSFT methods developed for periodic functions with structured Fourier support [3] which are of use for, among other things, the fast approximation of functions which exhibit sparsity with respect to other bounded orthonormal basis functions [16].
The interested reader may refer to Appendix A for the proof of (2).
We hasten to point out, moreover, that similar ideas can also be employed for adaptive and noise robust SFT algorithms in order to approximately evaluate f in an “on demand” fashion as well. We leave the details to the interested reader.
Each function evaluation \(f(x_k)\) needs to be accurately computed in just \({\mathcal {O}}(\log ^c N)\)-time in order to allow us to achieve our overall desired runtime for Algorithm 1.
The code for both DMSFT variants is available at https://sourceforge.net/projects/aafftannarborfa/.
The CLW-DSFT code is available at www.math.msu.edu/~markiwen/Code.html.
Code for GFFT is also available at www.math.msu.edu/~markiwen/Code.html.
This code is available at http://www.fftw.org/.
This code is available at https://groups.csail.mit.edu/netmit/sFFT/.
The SNR is defined as \(\mathrm{SNR} = 20\log_{10} \frac{\parallel \mathbf {f}\parallel _2}{\parallel \mathbf {n}\parallel _2}\), where \(\mathbf {f}\) is the length-N input vector and \(\mathbf {n}\) is the length-N noise vector.
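As an illustration of this definition (a minimal sketch only, not the exact noise model used in the experiments), the following NumPy snippet scales a white complex Gaussian noise vector so that the resulting SNR equals a prescribed value in dB.

```python
import numpy as np

def add_noise_at_snr(f: np.ndarray, snr_db: float, rng=np.random.default_rng(0)):
    """Return (f + n, n) where n is white complex Gaussian noise scaled so that
    20 * log10(||f||_2 / ||n||_2) = snr_db, matching the SNR definition above."""
    n = rng.standard_normal(f.shape) + 1j * rng.standard_normal(f.shape)
    n *= np.linalg.norm(f) / (np.linalg.norm(n) * 10 ** (snr_db / 20))
    return f + n, n
```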
References
Bailey, J., Iwen, M.A., Spencer, C.V.: On the design of deterministic matrices for fast recovery of Fourier compressible functions. SIAM J. Matrix Anal. Appl. 33(1), 263–289 (2012)
Beylkin, G.: On the fast Fourier transform of functions with singularities. Appl. Comput. Harmon. Anal. 2(4), 363–381 (1995)
Bittens, S.: Sparse FFT for Functions with Short Frequency Support. University of Göttingen, Göttingen (2016)
Bittens, S., Zhang, R., Iwen, M.A.: A deterministic sparse FFT for functions with structured Fourier sparsity. arXiv:1705.05256 (2017)
Bluestein, L.: A linear filtering approach to the computation of discrete Fourier transform. IEEE Trans. Audio Electroacoust. 18(4), 451–455 (1970)
Christlieb, A., Lawlor, D., Wang, Y.: A multiscale sub-linear time Fourier algorithm for noisy data. Appl. Comput. Harmon. Anal. 40, 553–574 (2016)
Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best k-term approximation. J. Am. Math. Soc. 22(1), 211–231 (2009)
Cooley, J.W., Tukey, J.W.: An algorithm for the machine calculation of complex Fourier series. Math. Comput. 19(90), 297–301 (1965)
Dutt, A., Rokhlin, V.: Fast Fourier transforms for nonequispaced data. SIAM J. Sci. Comput. 14(6), 1368–1393 (1993)
Dutt, A., Rokhlin, V.: Fast Fourier transforms for nonequispaced data. ii. Appl. Comput. Harmon. Anal. 2(1), 85–100 (1995)
Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Birkhäuser, Basel (2013)
Gilbert, A.C., Muthukrishnan, S., Strauss, M.: Improved time bounds for near-optimal sparse Fourier representations. In: Proceedings of the Optics & Photonics 2005, pp. 59141A–59141A. International Society for Optics and Photonics (2005)
Gilbert, A.C., Strauss, M.J., Tropp, J.A.: A tutorial on fast Fourier sampling. IEEE Signal Process. Mag. 25(2), 57–66 (2008)
Gilbert, A.C., Indyk, P., Iwen, M., Schmidt, L.: Recent developments in the sparse Fourier transform: a compressed Fourier transform for big data. IEEE Signal Process. Mag. 31(5), 91–100 (2014)
Hassanieh, H., Indyk, P., Katabi, D., Price, E.: Simple and practical algorithm for sparse Fourier transform. In: Proceedings of the SODA (2012)
Hu, X., Iwen, M., Kim, H.: Rapidly computing sparse Legendre expansions via sparse Fourier transforms. Numer. Algorithms 74(4), 1029–1059 (2017)
Iwen, M.A.: A deterministic sub-linear time sparse Fourier algorithm via non-adaptive compressed sensing methods. In: Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 20–29. Society for Industrial and Applied Mathematics (2008)
Iwen, M.A.: Simple deterministically constructible rip matrices with sublinear Fourier sampling requirements. In: Proceedings of the CISS, pp. 870–875 (2009)
Iwen, M.A.: Combinatorial sublinear-time Fourier algorithms. Found. Comput. Math. 10, 303–338 (2010)
Iwen, M.A.: Notes on lemma 6. Preprint at www.math.msu.edu/~markiwen/Papers/Lemma6_FOCM_10.pdf (2012)
Iwen, M.A.: Improved approximation guarantees for sublinear-time Fourier algorithms. Appl. Comput. Harmon. Anal. 34, 57–82 (2013)
Iwen, M., Gilbert, A., Strauss, M., et al.: Empirical evaluation of a sub-linear time sparse DFT algorithm. Commun. Math. Sci. 5(4), 981–998 (2007)
Keiner, J., Kunis, S., Potts, D.: Using NFFT 3–a software library for various nonequispaced fast Fourier transforms. ACM Trans. Math. Softw. 36(4), 19:1–19:30 (2009)
Laska, J., Kirolos, S., Massoud, Y., Baraniuk, R., Gilbert, A., Iwen, M., Strauss, M.: Random sampling for analog-to-information conversion of wideband signals. In: Proceedings of the 2006 IEEE Dallas/CAS Workshop on Design, Applications, Integration and Software, pp. 119–122. IEEE (2006)
Lawlor, D., Wang, Y., Christlieb, A.: Adaptive sub-linear time Fourier algorithms. Adv. Adapt. Data Anal. 5(01), 1350003 (2013)
Morotti, L.: Explicit universal sampling sets in finite vector spaces. Appl. Comput. Harmon. Anal. (2016). https://doi.org/10.1016/j.acha.2016.06.001
Plonka, G., Wannenwetsch, K.: A deterministic sparse FFT algorithm for vectors with small support. Numer. Algorithms 71(4), 889–905 (2016)
Rabiner, L., Schafer, R., Rader, C.: The chirp z-transform algorithm. IEEE Trans. Audio Electroacoust. 17(2), 86–92 (1969)
Segal, I., Iwen, M.: Improved sparse Fourier approximation results: Faster implementations and stronger guarantees. Numer. Algorithms 63, 239–263 (2013)
Steidl, G.: A note on fast Fourier transforms for nonequispaced grids. Adv. Comput. Math. 9, 337–353 (1998)
Acknowledgements
M.A. Iwen, R. Zhang, and S. Merhi were all supported in part by NSF DMS-1416752. The authors would like to thank Aditya Viswanathan for helpful comments and feedback on the first draft of the paper.
Additional information
Communicated by Hans G. Feichtinger.
Appendices
Appendix A: Fourier Basics: Continuous versus Discrete Fourier Transforms for Trigonometric Polynomials
Our objective in this appendix is to provide additional details regarding (2) and its relationship to the continuous sparse Fourier transform methods for periodic functions that we employ herein. Our starting point will be to assume only that we have been provided with a vector of data \(\mathbf {f}\in {{\mathbb {C}}}^{N}\). Our goal is to rapidly approximate the matrix vector product \({\hat{\mathbf {f}}}= F \mathbf {f}\) where \(F\in {{\mathbb {C}}}^{N\times N}\) is the DFT matrix whose entries are given by
for \(\omega \in \left( -\left\lceil \frac{N}{2}\right\rceil ,\left\lfloor \frac{N}{2}\right\rfloor \right] \cap {\mathbb {Z}}\) and \(j = 0, \dots , N-1\).
From this starting point, one may choose to regard the given vector of data \(\mathbf {f}\) as having been generated by sampling a \(2 \pi \)-periodic trigonometric polynomial \(f: [-\pi , \pi ] \rightarrow {{\mathbb {C}}}\) of the form
where \(B:=\left( -\left\lceil \frac{N}{2}\right\rceil ,\left\lfloor \frac{N}{2}\right\rfloor \right] \cap {\mathbb {Z}}\). In particular, herein we will assume that \(\mathbf {f}\) has its jth entry generated by \(f_j := f\left( -\pi +\frac{2\pi j}{N}\right) \) for \(j = 0, \dots , N-1\). Note that there is exactly one such f for the given data \(\mathbf {f}\) since \(|B| = N\) (i.e., f is the unique interpolating polynomial for \(\mathbf {f}\) with \(\omega \in B\)).
Considering f just above, we can now see that for any \(k \in \mathbb {Z}\) the associated Fourier series coefficient of f is
Turning now to the discrete Fourier transform of \(\mathbf {f}\), considered as samples of f, we can see that
Note that the line above establishes (2) where the vector \({\hat{\mathbf {f}}}\in {{\mathbb {C}}}^N\) exactly contains the nonzero Fourier series coefficients of f as its entries. As a result we can see that computing the Fourier series coefficients of f is equivalent to computing the matrix vector product \({\hat{\mathbf {f}}}= F \mathbf {f}\) for our given data \(\mathbf {f}\).
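The following short NumPy check illustrates this equivalence numerically; since the displayed formulas above are not reproduced here, the sign and \(1/N\) scaling conventions used in the snippet are assumptions, but the exact recovery of the coefficients holds for any consistent choice of convention.

```python
import numpy as np

# Sample a trigonometric polynomial with frequencies in B at x_j = -pi + 2*pi*j/N
# and verify that the N-point DFT (with an assumed 1/N normalization) recovers
# its Fourier series coefficients c_omega exactly.
N = 64
B = np.arange(-((N + 1) // 2) + 1, N // 2 + 1)      # B = (-ceil(N/2), floor(N/2)] intersected with Z
rng = np.random.default_rng(1)
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # c_omega for omega in B

x = -np.pi + 2 * np.pi * np.arange(N) / N           # equispaced sample points
f = np.exp(1j * np.outer(x, B)) @ c                 # f(x_j) = sum_omega c_omega e^{i omega x_j}

c_rec = np.exp(-1j * np.outer(B, x)) @ f / N        # assumed DFT normalization
assert np.allclose(c_rec, c)
```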
Appendix B: Proof of Lemmas 1, 2 and 3
We will restate each lemma before its proof for ease of reference.
Lemma 14
(Restatement of Lemma 1) The \(2\pi \hbox {-periodic}\) Gaussian \(g:\left[ -\pi ,\pi \right] \rightarrow {\mathbb {R}}^{+}\) has
for all \(x \in \left[ -\pi ,\pi \right] \).
Proof
Observe that
holds since the series above have monotonically decreasing positive terms, and \(x\in \left[ -\pi ,\pi \right] \).
Now, if \(x\in \left[ 0,\pi \right] \) and \(n\ge 1\), one has
which yields
Using Lemma 8 to bound the last integral we can now get that
Recalling now that g is even, we see that this inequality also holds for all \(x \in [-\pi ,0]\). \(\square \)
Lemma 15
(Restatement of Lemma 2) The \(2\pi \hbox {-periodic}\) Gaussian \(g:\left[ -\pi ,\pi \right] \rightarrow {\mathbb {R}}^{+}\) has
for all \(\omega \in {\mathbb {Z}}\). Thus, \(\widehat{g}=\left\{ \widehat{g}_{\omega }\right\} _{\omega \in {\mathbb {Z}}}\in \ell ^{2}\) decreases monotonically as \(|\omega |\) increases, and also has \(\Vert \widehat{g} \Vert _{\infty } = \frac{1}{\sqrt{2 \pi }}\).
Proof
Starting with the definition of the Fourier transform, we calculate
The last two assertions now follow easily. \(\square \)
Lemma 16
(Restatement of Lemma 3) Choose any \(\tau \in \left( 0, \frac{1}{\sqrt{2\pi }} \right) \), \(\alpha \in \left[ 1, \frac{N}{\sqrt{\ln N}} \right] \), and \(\beta \in \left( 0 , \alpha \sqrt{\frac{\ln \left( 1/\tau \sqrt{2\pi } \right) }{2}} ~\right] \). Let \(c_1 = \frac{\beta \sqrt{\ln N}}{N}\) in the definition of the periodic Gaussian g from (3). Then \(\widehat{g}_{\omega } \in \left[ \tau , \frac{1}{\sqrt{2\pi }} \right] \) for all \(\omega \in {\mathbb {Z}}\) with \(|\omega | \le \Bigl \lceil \frac{N}{\alpha \sqrt{\ln N}}\Bigr \rceil \).
Proof
By Lemma 2 above it suffices to show that
which holds if and only if
Thus, it is enough to have
or,
This, in turn, is guaranteed by our choice of \(\beta \). \(\square \)
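As a numerical sanity check of Lemma 3, the sketch below assumes the closed form \(\widehat{g}_{\omega } = \mathrm {e}^{-c_1^2 \omega ^2/2}/\sqrt{2\pi }\) for the Fourier coefficients of the periodic Gaussian (the displayed formula from Lemma 2 is not reproduced above, so this form is an assumption) and verifies that \(\widehat{g}_{\omega } \in \left[ \tau , \frac{1}{\sqrt{2\pi }} \right] \) on the stated frequency range for one admissible choice of \(\tau \), \(\alpha \), and \(\beta \).

```python
import numpy as np

# Assumed closed form: g_hat(omega) = exp(-c1^2 * omega^2 / 2) / sqrt(2*pi).
N     = 2 ** 20
tau   = 0.05                                  # tau in (0, 1/sqrt(2*pi))
alpha = 2.0                                   # alpha in [1, N / sqrt(ln N)]
beta  = alpha * np.sqrt(np.log(1.0 / (tau * np.sqrt(2 * np.pi))) / 2)   # maximal admissible beta
c1    = beta * np.sqrt(np.log(N)) / N         # c_1 as in Lemma 3

omega_max = int(np.ceil(N / (alpha * np.sqrt(np.log(N)))))
omega = np.arange(-omega_max, omega_max + 1)
g_hat = np.exp(-(c1 * omega) ** 2 / 2) / np.sqrt(2 * np.pi)

assert np.all((tau <= g_hat) & (g_hat <= 1 / np.sqrt(2 * np.pi)))
```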
Appendix C: Proof of Lemma 4 and Theorem 2
We will restate Lemma 4 before its proof for ease of reference.
Lemma 17
(Restatement of Lemma 4) Let \(s, \epsilon ^{-1} \in {\mathbb {N}} \setminus \{ 1 \}\) with \((s/\epsilon ) \ge 2\), and \(\mathbf {n}\in {\mathbb {C}}^m\) be an arbitrary noise vector. There exists a set of m points \(\left\{ x_k \right\} ^m_{k=1} \subset [-\pi , \pi ]\) such that Algorithm 3 on page 72 of [21], when given access to the corrupted samples \(\left\{ f(x_k) + n_k \right\} ^m_{k=1}\), will identify a subset \(S \subseteq B\) which is guaranteed to contain all \(\omega \in B\) with
Furthermore, every \(\omega \in S\) returned by Algorithm 3 will also have an associated Fourier series coefficient estimate \(z_{\omega } \in {\mathbb {C}}\) which is guaranteed to have
Both the number of required samples, m, and Algorithm 3’s operation count are
If succeeding with probability \((1-\delta ) \in [2/3,1)\) is sufficient, and \((s/\epsilon ) \ge 2\), the Monte Carlo variant of Algorithm 3 referred to by Corollary 4 on page 74 of [21] may be used. This Monte Carlo variant reads only a randomly chosen subset of the noisy samples utilized by the deterministic algorithm,
yet it still outputs a subset \(S \subseteq B\) which is guaranteed to simultaneously satisfy both of the following properties with probability at least \(1-\delta \):
(i) S will contain all \(\omega \in B\) satisfying (15), and
(ii) all \(\omega \in S\) will have an associated coefficient estimate \(z_{\omega } \in {\mathbb {C}}\) satisfying (16).
Finally, both this Monte Carlo variant’s number of required samples, \(\tilde{m}\), as well as its operation count will also always be
Proof
The proof of this lemma involves a somewhat tedious and uninspired series of minor modifications to various results from [21]. In what follows we will outline the portions of that paper which need to be changed in order to obtain the stated lemma. Algorithm 3 on page 72 of [21] will provide the basis of our discussion.
In the first paragraph of our lemma, we are provided with the m contaminated evaluations of f, \(\left\{ f(x_k) + n_k \right\} ^m_{k=1}\), at the set of m points \(\left\{ x_k \right\} ^m_{k=1} \subset [-\pi , \pi ]\) required by line 4 of Algorithm 1 on page 67 of [21]. These contaminated evaluations of f will then be used to approximate the vector \(\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A} \in {\mathbb {C}}^m\) in line 4 of Algorithm 3. More specifically, using (18) on page 67 of [21] one can see that each \(\left( {\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j \in {\mathbb {C}}\) is effectively computed via a DFT
for some integers \(0 \le h_j < s_j\). Note that we are guaranteed to have noisy evaluations of f at each of these points by assumption. That is, we have \(f \left( x_{j,k} \right) + n_{j,k}\) for all \(x_{j,k} := -\pi + \frac{2 \pi k}{s_j}\), \(k = 0, \dots , s_j - 1\).
We therefore approximate each \(\left( {\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j\) via an approximate DFT as per (19) by
One can now see that
holds for all j. Every entry of both \({\mathcal {E}_{s_1,K} \tilde{\psi } \mathbf{A}}\) and \({\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}}\) referred to in Algorithm 3 will therefore be effectively replaced by its corresponding \(E_j\) estimate. Thus, the lemma we seek to prove is essentially obtained by simply incorporating the additional error estimate (20) into the analysis of Algorithm 3 in [21] wherever an \({\mathcal {E}_{s_1,K} \tilde{\psi } \mathbf{A}}\) or \({\mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}}\) currently appears.
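To make the aliasing mechanism behind these DFT-based estimates concrete, the illustrative sketch below (noiseless, and deliberately simplified: it does not reproduce the windowing or normalization of (18)–(21) in [21]) shows that an \(s_j\)-point DFT of equispaced samples of f collects exactly those Fourier series coefficients \(c_{\omega }\) with \(\omega \equiv h_j \pmod {s_j}\).

```python
import numpy as np

# Simplified stand-in for the estimates above: the h-th coefficient of an
# s_j-point DFT of equispaced samples of f equals the sum of all c_omega with
# omega congruent to h (mod s_j).  (s_j is taken even so the -pi sample offset adds no sign.)
N, s_j = 256, 16
B = np.arange(-((N + 1) // 2) + 1, N // 2 + 1)
rng = np.random.default_rng(2)
c = np.zeros(N, dtype=complex)
c[rng.choice(N, 5, replace=False)] = rng.standard_normal(5) + 1j * rng.standard_normal(5)

x = -np.pi + 2 * np.pi * np.arange(s_j) / s_j        # x_{j,k}, k = 0, ..., s_j - 1
samples = np.exp(1j * np.outer(x, B)) @ c            # noiseless f(x_{j,k})
dft = np.exp(-1j * np.outer(np.arange(s_j), x)) @ samples / s_j

for h in range(s_j):
    aliased = c[(B % s_j) == h].sum()                # sum of c_omega with omega = h (mod s_j)
    assert np.isclose(dft[h], aliased)
```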
To show that lines 6–14 of Algorithm 3 will identify all \(\omega \in B\) satisfying (15), we can adapt the proof of Lemma 6 on page 72 of [21]. Choose any \(\omega \in B\) you like. Lemmas 3 and 5 from [21] together with (20) above ensure that both
and
hold for more than half of the j and \(j'\)-indexes that Algorithm 3 uses to approximate \(\widehat{f}_{\omega }\). The rest of the proof of Lemma 6 now follows exactly as in [21] after the \(\delta \) at the top of page 73 is redefined to be \(\delta := \frac{\epsilon \cdot \left\| {\hat{\mathbf {f}}}- {\hat{\mathbf {f}}}^{\mathrm{opt}}_{(s/\epsilon )} \right\| _1}{s} + \left\| \widehat{f} - \widehat{f}\vert _{B} \right\| _1 + \Vert \mathbf {n}\Vert _\infty \), each \(\left( { \mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j\) entry is replaced by \(E_j\), and each \(\left( {\mathcal {E}_{s_1,K} \tilde{\psi } \mathbf{A}} \right) _{j'}\) entry is replaced by \(E_{j'}\).
Similarly, to show that lines 15–18 of Algorithm 3 will produce an estimate \(z_{\omega } \in {\mathbb {C}}\) satisfying (16) for every \(\omega \in S\), one can simply modify the first few lines of the proof of Theorem 7 in Appendix F of [21]. In particular, one can redefine \(\delta \) as above, replace the appearance of each \(\left( { \mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A}} \right) _j\) entry by \(E_j\), and then use (21). The bounds on the runtime follow from the last paragraph of the proof of Theorem 7 in Appendix F of [21] with no required changes. To finish, we note that the second paragraph of the lemma above follows from a completely analogous modification of the proof of Corollary 4 in Appendix G of [21]. \(\square \)
Appendix C.1: Proof of Theorem 2
To get the first paragraph of Theorem 2 one can simply utilize the proof of Theorem 7 exactly as it is written in Appendix F of [21] after redefining \(\delta \) as above, and then replacing the appearance of each \(\left( \mathcal {G}_{\lambda ,K} \tilde{\psi } \mathbf{A} \right) _j\) entry with its approximation \(E_j\). Once this has been done, equation (42) in the proof of Theorem 7 can then be taken as a consequence of Lemma 4 above. In addition, all references to Lemma 6 of [21] in the proof can then also be replaced with appeals to Lemma 4 above. To finish, the proof of Corollary 4 in Appendix G of [21] can now be modified in a completely analogous fashion in order to prove the second paragraph of Theorem 2.
Cite this article
Merhi, S., Zhang, R., Iwen, M.A. et al. A New Class of Fully Discrete Sparse Fourier Transforms: Faster Stable Implementations with Guarantees. J Fourier Anal Appl 25, 751–784 (2019). https://doi.org/10.1007/s00041-018-9616-4
Keywords
- Fast Fourier transforms
- Discrete Fourier transforms
- Sparse Fourier transforms
- Nonequispaced Fourier transforms
- Compressive sensing
- Sparse approximation