1 Introduction

The discrete Fourier transform (DFT) can easily be generalized to arbitrary nodes in the space domain as well as in the frequency domain (see [4, 6], [13, pp. 394–397]). Let \(N \in \mathbb N\) with N ≫ 1 and \(M_{1}, M_{2} \in 2 \mathbb N\) be given. By \(\mathcal {I}_{M_{1}}\) we denote the index set \(\{-\frac {M_{1}}{2}, 1-\frac {M_{1}}{2}, \ldots , \frac {M_{1}}{2}-1\}\). We consider an exponential sum \(f: \left[-\frac {1}{2}, \frac {1}{2}\right]\to \mathbb C\) of the form

$$f(x):=\sum_{k\in{\mathcal I}_{M_1}}f_k\,{\mathrm e}^{-2\pi{\mathrm i}Nv_kx},\quad x\in \left[-\frac{1}{2},\frac{1}{2}\right],$$
(1.1)

where \(f_{k} \in \mathbb C\) are given coefficients and \(v_{k} \in \left[-\frac {1}{2}, \frac {1}{2} \right]\), \(k\in \mathcal {I}_{M_{1}}\), are arbitrary nodes in the frequency domain. The parameter \(N \in \mathbb N\) is called the nonharmonic bandwidth of the exponential sum (1.1).

We assume that a linear combination (1.1) of exponentials with bounded frequencies is given. For arbitrary nodes \(x_{j} \in \left[-\frac {1}{2}, \frac {1}{2} \right]\), \(j \in \mathcal {I}_{M_{2}}\), in the space domain, we are interested in a fast evaluation of the M2 values

$$f(x_{j}) = \sum\limits_{k\in \mathcal{I}_{M_{1}}} f_{k} {\mathrm e}^{-2\pi {\mathrm i}Nv_{k} x_{j}} , \quad j \in \mathcal{I}_{M_{2}} .$$
(1.2)
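For reference, the sums (1.2) can always be evaluated directly at a cost of \(\mathcal O(M_1 M_2)\) flops; the NNFFT discussed below reduces this cost. The following numpy sketch (our own illustration; all names are hypothetical, not from the paper) serves as the exact baseline against which the fast algorithm is later compared.

```python
import numpy as np

def nndft(f, v, x, N):
    """Direct evaluation of the sums (1.2): f(x_j) = sum_k f_k exp(-2*pi*i*N*v_k*x_j).

    f : (M1,) complex coefficients, v : (M1,) frequency nodes in [-1/2, 1/2],
    x : (M2,) spatial nodes in [-1/2, 1/2], N : nonharmonic bandwidth.
    Costs O(M1 * M2) flops; the NNFFT reduces this to O(N log N + M1 + M2).
    """
    return np.exp(-2j * np.pi * N * np.outer(x, v)) @ f

rng = np.random.default_rng(1)
f = rng.standard_normal(6) + 1j * rng.standard_normal(6)
v = rng.uniform(-0.5, 0.5, 6)
x = rng.uniform(-0.5, 0.5, 5)
vals = nndft(f, v, x, N=16)
```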

A fast algorithm for the computation of the M2 values (1.2) is called a nonuniform fast Fourier transform with nonequispaced spatial and frequency data (NNFFT); it was introduced by B. Elbel and G. Steidl in [6]. In this approach, the rapid evaluation of the NNFFT is mainly based on the use of two compactly supported, continuous window functions. As in [10], this approach is also referred to as NFFT of type 3.

In this paper we present new error estimates for the NNFFT. Since these estimates depend exclusively on the so-called window parameters of the NNFFT, this gives rise to an appropriate parameter choice. The outline of this paper is as follows. In Section 2, we introduce the special set Ω of continuous, even functions \(\omega : \mathbb R \to [0, 1]\) with the support [− 1,1]. Choosing ω1, ω2 ∈Ω, we consider two window functions

$$\varphi_{1}(t) = \omega_{1}\left(\frac{N_{1} t}{m_{1}}\right) , \quad \varphi_{2}(t) = \omega_{2}\left(\frac{N_{2} t}{m_{2}}\right) , \quad t \in \mathbb R ,$$

where \(N_{1} := \sigma _{1} N \in 2 \mathbb N\) with some oversampling factor σ1 > 1 and where \(m_{1} \in {\mathbb N}\setminus \{1\}\) is a truncation parameter with \(2 m_{1} \ll N_{1}\). Analogously, \(N_{2} := \sigma _{2} (N_{1} + 2m_{1}) \in 2 \mathbb N\) is given with some oversampling factor σ2 > 1 and \(m_{2} \in {\mathbb N}\setminus \{1\}\) is another truncation parameter with \(2 m_{2} \ll \left(1 - \frac {1}{\sigma _{1}} \right) N_{2}\). For the fast, approximate computation of the values (1.2), we formulate the NNFFT in Algorithm 1. In Section 3, we derive new explicit error estimates of the NNFFT with two general window functions φ1 and φ2. In Section 4, we specify the result when using two \(\sinh\)-type window functions. Namely, we show that for fixed nonharmonic bandwidth N of (1.1), the error of the related NNFFT has an exponential decay with respect to the truncation parameters m1 and m2. Numerical experiments illustrate the performance of our error estimates.

In Section 5, we study the approximation of the function sinc(Nπx), x ∈ [− 1, 1], by an exponential sum. For given target accuracy ε > 0 and n ≥ 4N, there exist coefficients wj > 0 and frequencies vj ∈ (− 1, 1), j = 1, …, n, such that for all x ∈ [− 1, 1],

$$\bigg|{sinc}(N \pi x) - \sum\limits_{j=1}^{n} w_{j} {\mathrm e}^{- \pi {\mathrm i}N v_{j} x}\bigg| \le \varepsilon .$$

In practice, we simplify the approximation procedure. Since for fixed \(N \in \mathbb N\), it holds

$${sinc} (N \pi x) = \frac{1}{2} {\int}_{-1}^{1} {\mathrm e}^{- \pi {\mathrm i} N t x} \;{\mathrm d}t , \quad x \in \mathbb R ,$$

we apply the Clenshaw–Curtis quadrature with Chebyshev points \(z_{k} = {\cos \limits } \frac {k \pi }{n}\in [-1, 1]\), k = 0, …, n, where \(n\in \mathbb N\) fulfills n ≥ 4N. Then the function sinc(Nπx), x ∈ [− 1, 1], can be approximated by the exponential sum

$$\sum\limits_{k=0}^{n} w_{k} {\mathrm e}^{- \pi {\mathrm i} N z_{k} x}$$
(1.3)

with explicitly known coefficients wk > 0 which satisfy the condition \({\sum }_{k=0}^{n} w_{k} = 1\).
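This construction can be checked numerically. The sketch below is our own illustration: it builds the standard Clenshaw–Curtis weights (halved, because of the factor \(\frac12\) in the integral representation of sinc above) and measures the uniform error of the exponential sum on \([-1,1]\); the parameter values are an arbitrary example.

```python
import numpy as np

def cc_weights(n):
    """Clenshaw-Curtis weights for the nodes z_k = cos(k*pi/n), k = 0, ..., n (n even).

    Obtained by integrating the Chebyshev interpolant termwise; the rule is
    exact for all polynomials of degree <= n on [-1, 1], with positive weights.
    """
    w = np.zeros(n + 1)
    for k in range(n + 1):
        ck = 0.5 if k in (0, n) else 1.0      # halve first/last node (double-prime sum)
        s = 0.0
        for m in range(0, n + 1, 2):          # only even Chebyshev degrees contribute
            cm = 0.5 if m in (0, n) else 1.0
            s += cm * (2.0 / (1 - m * m)) * np.cos(m * k * np.pi / n)
        w[k] = ck * (2.0 / n) * s
    return w

N, n = 4, 32                                  # n >= 4N
z = np.cos(np.arange(n + 1) * np.pi / n)      # Chebyshev points z_k
w = 0.5 * cc_weights(n)                       # halved weights: sum_k w_k = 1
x = np.linspace(-1, 1, 201)
approx = np.exp(-1j * np.pi * N * np.outer(x, z)) @ w   # exponential sum (1.3)
err = np.max(np.abs(np.sinc(N * x) - approx))           # np.sinc(t) = sin(pi t)/(pi t)
```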

An interesting signal processing application of the NNFFT is presented in the last Section 6. If a signal \(h: \left[- \frac {1}{2}, \frac {1}{2} \right] \to \mathbb C\) is to be reconstructed from its nonuniform samples at \(a_{k}\in \left[- \frac {1}{2}, \frac {1}{2} \right]\), then h is often modeled as linear combination of shifted sinc functions

$$h(x) = \sum\limits_{k \in \mathcal{I}_{L_{1}}} c_{k} {sinc}\big(N \pi (x-a_{k})\big)$$

with complex coefficients ck. Hence, we present a fast, approximate computation of the discrete sinc transform (see [7, 11])

$$h(b_{\ell}) = \sum\limits_{k \in \mathcal{I}_{L_{1}}} c_{k} {sinc}\big(N \pi (b_{\ell} - a_{k})\big) , \quad \ell \in \mathcal{I}_{L_{2}} ,$$

where \(b_{\ell } \in \left[- \frac {1}{2}, \frac {1}{2} \right]\) can be nonequispaced. The discrete sinc transform is motivated by numerous applications in signal processing. However, since the sinc function decays slowly, it is often avoided in favor of some more local approximation. Here we prefer the approximation of the sinc function by an exponential sum (1.3). Then we obtain the fast sinc transform in Algorithm 3, which is an approximate algorithm for the fast computation of the values (6.2) and applies the NNFFT twice. In addition, the error of the fast sinc transform is estimated and numerical examples are presented.
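The key idea can already be seen with direct summation: inserting the exponential sum (1.3) into the discrete sinc transform and swapping the order of summation factorizes it into an adjoint-type nonuniform transform in the nodes \(a_k\) followed by a nonuniform transform in the nodes \(b_\ell\); the fast version replaces both direct sums by NNFFTs. A self-contained sketch (our own, with the Clenshaw–Curtis weights of Section 5 and arbitrarily chosen sizes):

```python
import numpy as np

def cc_weights(n):
    """Clenshaw-Curtis weights for nodes z_k = cos(k*pi/n), k = 0, ..., n (n even)."""
    w = np.zeros(n + 1)
    for k in range(n + 1):
        ck = 0.5 if k in (0, n) else 1.0
        s = sum((0.5 if m in (0, n) else 1.0) * (2.0 / (1 - m * m))
                * np.cos(m * k * np.pi / n) for m in range(0, n + 1, 2))
        w[k] = ck * (2.0 / n) * s
    return w

N, n = 4, 32
z = np.cos(np.arange(n + 1) * np.pi / n)
w = 0.5 * cc_weights(n)            # sinc(N*pi*t) ~ sum_j w_j exp(-pi*i*N*z_j*t), t in [-1, 1]

rng = np.random.default_rng(2)
a = rng.uniform(-0.5, 0.5, 20)     # sample nodes a_k
b = rng.uniform(-0.5, 0.5, 15)     # evaluation nodes b_l
c = rng.standard_normal(20)        # coefficients c_k

# direct discrete sinc transform, O(L1*L2) flops
exact = np.sinc(N * (b[:, None] - a[None, :])) @ c

# factorized version: b_l - a_k lies in [-1, 1], so (1.3) applies; swap the sums
inner = np.exp(1j * np.pi * N * np.outer(z, a)) @ c              # adjoint-type transform
approx = np.exp(-1j * np.pi * N * np.outer(b, z)) @ (w * inner)  # second transform
err = np.max(np.abs(exact - approx))
```

In Algorithm 3, each of the two matrix-vector products above is replaced by an NNFFT, which yields the fast sinc transform.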

2 NNFFT

Now we start with the explanation of the main algorithm, the NNFFT. To this end, we first introduce the special set Ω, which is needed to define the required window functions φj, j = 1,2. Since the NNFFT is mainly based on the well-known NFFT, we proceed with a short description of the NFFT and move on to the NNFFT afterwards. This procedure is summarized in Algorithm 1. Note that here a parameter a > 1 is necessary in order to prevent aliasing artifacts, since we approximate a non-periodic function on the interval \(\left[-\frac{1}{2}, \frac{1}{2}\right]\) by means of a-periodic functions.

Let Ω be the set of all functions \(\omega : \mathbb R \to [0, 1]\) with the following properties:

  • Each function ω is even, has the support [− 1, 1], and is continuous on \(\mathbb R\).

  • Each restricted function ω|[0,1] is decreasing with ω(0) = 1.

  • For each function ω its Fourier transform

    $${\hat \omega}(v) := {\int}_{\mathbb R} \omega(x) {\mathrm e}^{- 2 \pi {\mathrm i} v x} \;{\mathrm d}x = 2 {{\int}_{0}^{1}} \omega(x) \cos(2\pi v x) \;{\mathrm d}x$$

    is positive and decreasing for all \(v\in [0, \frac {m_{1}}{2 \sigma _{1}} ]\), where it holds \(m_{1} \in \mathbb N \setminus \{1\}\) and \(\sigma _{1} \in [\frac {5}{4}, 2 ]\).

Obviously, each ω ∈Ω is of bounded variation over [− 1,1].

Example 2.1

By \(B_{2m_{1}}\), we denote the centered cardinal B-spline of even order 2m1 with \(m_{1} \in \mathbb N\). Thus, B2 is the centered hat function. We consider the spline

$$\omega_{\mathrm{B},1}(x) := \frac{1}{B_{2m_{1}}(0)} B_{2m_{1}}(m_{1} x) , \quad x \in \mathbb R ,$$

which has the support [− 1, 1]. Its Fourier transform reads as

$${\hat \omega}_{\mathrm{B},1}(v) = \frac{1}{m_{1} B_{2m_{1}}(0)} \Big({sinc} \frac{\pi v}{m_{1}}\Big)^{2m_{1}} , \quad v \in \mathbb R .$$

Obviously, \({\hat \omega }_{\mathrm {B},1}(v)\) is positive and decreasing for v ∈ [0,m1). Hence, the function ωB,1 belongs to the set Ω.

For \(\sigma _{1} > \frac {\pi }{3}\) and β1 = 3m1 with \(m_{1} \in \mathbb N \setminus \{1\}\), we consider

$$\omega_{\mathrm{alg},1}(x) := \left\{ \begin{array}{ll} (1 - x^{2})^{\beta_{1} - 1/2} & \quad x \in [-1, 1] ,\\ 0 & \quad x \in \mathbb R \setminus [-1, 1] . \end{array} \right.$$

By [12, p. 8], its Fourier transform reads as

$${\hat\omega}_{\mathrm{alg},1}(v)=\frac{\pi(2\beta_1)!}{4^{\beta_1}\beta_1!}\cdot\left\{\begin{array}{ll}(\pi v)^{-\beta_1}J_{\beta_1}(2\pi v)&\quad v\in\mathbb{R}\setminus\{0\},\\\frac{1}{\beta_1!}&\quad v=0,\end{array}\right.$$

where \(J_{\beta _{1}}\) denotes the Bessel function of order β1. By [1, p. 370], it holds for v≠ 0 the equality

$$(\pi v)^{-\beta_1}J_{\beta_1}(2\pi v)=\frac{1}{\beta_1!}\prod_{s=1}^\infty(1-\frac{4\pi^2v^2}{j_{\beta_1,s}^2}),$$

where \(j_{\beta _{1},s}\) denotes the s th positive zero of \(J_{\beta _{1}}\). For β1 = 3m1, it holds \(j_{\beta _{1},1} > 3 m_{1} + \pi - \frac {1}{2}\) (see [8]). Hence, by \(\sigma _{1} > \frac {\pi }{3}\) we get

$$\frac{2 \pi m_{1}}{2 \sigma_{1} j_{\beta_{1},1}} < \frac{\frac{\pi}{\sigma_{1}} m_{1}}{3 m_{1} + \pi - \frac{1}{2}} < \frac{3 m_{1}}{3 m_{1} + \pi - \frac{1}{2}} < 1 .$$

Therefore, the Fourier transform \({\hat \omega }_{\mathrm{alg},1}(v)\) is positive and decreasing for \(v \in \big [0, \frac {m_{1}}{2 \sigma _{1}}\big ]\). Hence, \(\omega_{\mathrm{alg},1}\) belongs to the set Ω.

Let \(\sigma _{1} \in \left[\frac {5}{4}, 2 \right]\) and \(m_{1} \in \mathbb N \setminus \{1\}\) be given. We consider the function

$$\omega_{\sinh,1} (x) := \left\{ \begin{array}{ll} \frac{1}{\sinh \beta_{1}} \sinh \big(\beta_{1} \sqrt{1 - x^{2}}\big) &\quad x \in [ - 1, 1] ,\\ 0 & \quad x \in \mathbb R \setminus [ - 1, 1] \end{array} \right.$$

with the shape parameter

$$\beta_{1} := 2 \pi m_{1} \left(1 - \frac{1}{2\sigma_{1}}\right) .$$

Then by [12, p. 38], its Fourier transform reads as

$${\hat\omega}_{\sinh,1}(v)=\frac{\pi\beta_1}{\sinh\beta_1}\cdot\left\{\begin{array}{ll}\big(\beta_1^2-4\pi^2v^2\big)^{-1/2}\,I_1\big(\sqrt{\beta_1^2-4\pi^2v^2}\big)&\quad |v|<m_1\big(1-\frac{1}{2\sigma_1}\big),\\ \frac{1}{2}&\quad v=\pm m_1\big(1-\frac{1}{2\sigma_1}\big),\\ \big(4\pi^2v^2-\beta_1^2\big)^{-1/2}\,J_1\big(\sqrt{4\pi^2v^2-\beta_1^2}\big)&\quad |v|>m_1\big(1-\frac{1}{2\sigma_1}\big),\end{array}\right.$$
(2.1)

where I1 and J1 denote the modified Bessel function and the Bessel function of first order, respectively. Using the power series expansion of I1 (see [1, p. 375]), we obtain for \(|v| < m_{1} \left(1 - \frac {1}{2\sigma _{1}} \right)\) that

$$({\beta_{1}^{2}} - 4 \pi^{2} v^{2})^{-1/2} I_{1}\left(\sqrt{{\beta_{1}^{2}} - 4 \pi^{2} v^{2}} \right) = \frac{1}{2} \sum\limits_{k=0}^{\infty} \frac{1}{4^{k} k! (k+1)!} ({\beta_{1}^{2}} - 4 \pi^{2} v^{2})^{k} .$$

Therefore, the Fourier transform \({\hat \omega }_{\sinh ,1}(v)\) is positive and decreasing for \(v \in \left[0, \frac {m_{1}}{2 \sigma _{1}} \right]\), since for \(\sigma _{1} \ge \frac {5}{4}\) it holds

$$\frac{m_{1}}{2 \sigma_{1}} < m_{1} \Big(1 - \frac{1}{2\sigma_{1}}\Big) .$$

Hence, \(\omega _{\sinh ,1}\) belongs to the set Ω.   
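As a numerical cross-check (our own illustration, not part of the paper's argument), one can compare the power-series form of (2.1) with a direct quadrature of the Fourier integral and confirm the claimed positivity and monotonicity on \(\left[0, \frac{m_1}{2\sigma_1}\right]\); the parameters m1 = 4 and σ1 = 5/4 are an arbitrary admissible choice.

```python
import numpy as np
from math import factorial

m1, sigma1 = 4, 1.25                            # truncation parameter and oversampling factor
beta = 2 * np.pi * m1 * (1 - 1 / (2 * sigma1))  # shape parameter beta_1

def omega_hat_series(v):
    """Fourier transform (2.1) for |v| < m1*(1 - 1/(2*sigma1)), via the I_1 power series."""
    y = beta ** 2 - 4 * np.pi ** 2 * v ** 2     # positive on this range
    s = sum(y ** k / (4 ** k * factorial(k) * factorial(k + 1)) for k in range(60))
    return np.pi * beta / np.sinh(beta) * 0.5 * s

def omega_hat_quad(v):
    """2 * int_0^1 omega_sinh(x) cos(2 pi v x) dx, substituting x = sin(theta)."""
    theta = np.linspace(0.0, np.pi / 2, 200001)
    g = (np.sinh(beta * np.cos(theta)) / np.sinh(beta)
         * np.cos(2 * np.pi * v * np.sin(theta)) * np.cos(theta))
    h = theta[1] - theta[0]                     # composite trapezoidal rule
    return 2 * h * (g.sum() - 0.5 * (g[0] + g[-1]))

vs = np.linspace(0, m1 / (2 * sigma1), 9)       # grid on [0, m1/(2 sigma1)]
vals = np.array([omega_hat_series(v) for v in vs])
dev = max(abs(omega_hat_series(v) - omega_hat_quad(v)) for v in vs)
```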

As known (see [6, 14]), the NNFFT can mainly be computed by means of an NFFT. This is why this algorithm is briefly explained below. For fixed \(N, { M_{2}} \in 2 \mathbb N\) and N1 := σ1N with σ1 > 1, the NFFT (see [4, 5, 17] or [13, pp. 377–381]) is a fast algorithm that approximately computes the values p(xj), \(j\in \mathcal {I}_{{M_{2}}}\), of any 1-periodic trigonometric polynomial

$$\begin{array}{@{}rcl@{}} p(x) := \sum\limits_{k\in \mathcal{I}_{N}} c_{k} {\mathrm e}^{2\pi {\mathrm i} k x} \end{array}$$
(2.2)

at nonequispaced nodes \(x_{j} \in \left[- \frac {1}{2}, \frac {1}{2} \right]\), \(j \in \mathcal {I}_{{ M_{2}}}\), where \(c_{k} \in \mathbb C\), \(k\in \mathcal {I}_{N}\), are given complex coefficients. In other words, for the NFFT it holds \(N=M_{1}\in 2\mathbb N\) in (1.2).

For ω1 ∈Ω we introduce the window function

$$\varphi_{1}(t) := \omega_{1}\left(\frac{N_{1} t}{m_{1}}\right) , \quad t\in \mathbb R .$$
(2.3)

By construction, the window function (2.3) is even, has the support \(\left[- \frac {m_{1}}{N_{1}}, \frac {m_{1}}{N_{1}} \right]\), and is continuous on \(\mathbb R\). Further, the restricted window function \(\varphi _{1} |_{[0, m_{1}/N_{1}]}\) is decreasing with φ1(0) = 1. Its Fourier transform

$${\hat \varphi}_{1}(v) := {\int}_{\mathbb R} \varphi_{1}(t) {\mathrm e}^{- 2 \pi {\mathrm i} v t} \;{\mathrm d}t = 2 {\int}_{0}^{m_{1}/N_{1}} \varphi_{1}(t) \cos(2\pi v t) \;{\mathrm d}t$$

is positive and decreasing for \(v \in \Big[0, N_{1} - \frac {N}{2} \Big)\). Thus, φ1 is of bounded variation over \(\left[-\frac {1}{2}, \frac {1}{2} \right]\).

In the following, we denote the torus \(\mathbb R/\mathbb Z\) by \(\mathbb T\) and the Banach space of continuous, 1-periodic functions by \(C(\mathbb T)\). For the window function (2.3), we denote its 1-periodization by

$${\tilde \varphi}_{1}^{(1)}(x) := \sum\limits_{k \in \mathbb Z} \varphi_{1}(x + k) , \quad x \in \mathbb R .$$

Using a linear combination of shifted versions of the 1-periodized window function \({\tilde \varphi }_{1}^{(1)}\), we construct a 1-periodic continuous function \(s \in C(\mathbb T)\) which approximates (2.2) well. Then the computation of the values s(xj), \(j \in \mathcal {I}_{{ M_{2}}}\), is very easy, since φ1 has the small support \(\left[-\frac {m_{1}}{N_{1}}, \frac {m_{1}}{N_{1}} \right]\). The computational cost of NFFT is \({\mathcal O} (N \log N + { M_{2}} )\) flops, see [4, 5, 17] or [13, pp. 377–381]. The error of the NFFT (see [15]) can be estimated by

$$\begin{array}{@{}rcl@{}} \max\limits_{j \in \mathcal{I}_{{ M_{2}}}} |s(x_{j}) - p(x_{j})| &\le& \|s - p\|_{C(\mathbb T)} := \max\limits_{x\in [-1/2,1/2]} |s(x) - p(x)|\\ &\le& e_{\sigma_{1}}(\varphi_{1}) \sum\limits_{n \in \mathcal{I}_{N}} |c_{n}| , \end{array}$$

where \(e_{\sigma _{1}}(\varphi _{1})\) denotes the \(C(\mathbb T)\)-error constant defined as

$$e_{\sigma_{1}}(\varphi_{1}) = \sup_{N \in 2 \mathbb N} e_{\sigma_{1},N}(\varphi_{1})$$
(2.4)

with

$$e_{\sigma_{1},N}(\varphi_{1}) := \max\limits_{n\in \mathcal{I}_{N}} \left\| \sum\limits_{r \in \mathbb Z \setminus \{0\}} \frac{{\hat \varphi}_{1}(n + r N_{1})}{{\hat \varphi}_{1}(n)} {\mathrm e}^{2 \pi {\mathrm i} rN_{1} \cdot }\right\|_{C(\mathbb T)} .$$

Note that the constants \(e_{\sigma _{1},N}(\varphi _{1})\) are bounded with respect to N (see [15, Theorem 5.1]).

Now we proceed with the NNFFT. For better readability, we describe the procedure only briefly; for more detailed explanations we refer to [6]. For chosen functions ω1,ω2 ∈ Ω, we form the window functions

$$\varphi_{1}(t) := \omega_{1}\left(\frac{N_{1} t}{m_{1}}\right) , \quad \varphi_{2}(t) := \omega_{2}\left(\frac{N_{2} t}{m_{2}}\right) , \quad t\in \mathbb R ,$$
(2.5)

where again \(N_{1} := \sigma _{1} N \in 2 \mathbb N\) with some oversampling factor σ1 > 1 and \(m_{1} \in \mathbb N \setminus \{1\}\) with \(2 m_{1} \ll N_{1}\) and where \(N_{2}:= \sigma _{2} (N_{1} + 2 m_{1}) \in 2 \mathbb N\) with an oversampling factor σ2 > 1 and \(m_{2} \in \mathbb N \setminus \{1\}\) with \(2 m_{2} \le \left(1 - \frac {1}{\sigma _{1}} \right) N_{2}\). The second window function φ2 has the support \(\left[-\frac {m_{2}}{N_{2}}, \frac {m_{2}}{N_{2}} \right]\). Additionally, in order to prevent aliasing, we use a-periodic functions, where we introduce the constant

$$a:= 1 + \frac{2m_{1}}{N_{1}} > 1 ,$$
(2.6)

such that aN1 = N1 + 2m1 and N2 = σ2σ1aN. Without loss of generality, we can assume that

$$v_{k} \in \left[- \frac{1}{2a}, \frac{1}{2a}\right] .$$
(2.7)

If \(v_{k} \in \left[-\frac {1}{2}, \frac {1}{2} \right]\), then we replace the nonharmonic bandwidth N by \(N^{\ast } := N + \lceil \frac {2m_{1}}{\sigma _{1}}\rceil\) and set \(v_{k}^{\ast } := \frac {N}{N^{\ast }} v_{k} \in \big [- \frac {1}{2a}, \frac {1}{2a}\big ]\) such that \(N v_{k} = N^{\ast } v_{k}^{\ast }\).

For arbitrarily given \(f_{k} \in \mathbb C\) and \(v_{k} \in \left[- \frac{1}{2a}, \frac{1}{2a}\right]\), \(k \in \mathcal{I}_{M_{1}}\), we introduce the compactly supported, continuous auxiliary function

$$h(t) := \sum\limits_{k \in \mathcal{I}_{M_{1}}} f_{k} \varphi_{1}(t - v_{k}) , \quad t \in \mathbb R ,$$

which has the Fourier transform

$$\begin{aligned} \hat h (N x) &= \int_{\mathbb R} h(t)\ {\mathrm e}^{-2 \pi {\mathrm i}N x t} \ {\mathrm d}t \\& = \sum\limits_{k \in \mathcal{I}_{M_{1}}} f_{k} \int_{\mathbb R} \varphi_{1}(t - v_{k})\;{\mathrm e}^{-2 \pi {\mathrm i}N x t} \ {\mathrm d}t \end{aligned}$$
(2.8)
$$\begin{array}{@{}rcl@{}} &=& \sum\limits_{k \in \mathcal{I}_{M_{1}}} f_{k} {\mathrm e}^{-2 \pi {\mathrm i}N v_{k}x} {\hat \varphi}_{1}(N x) = f(x)\;{\hat \varphi}_{1}(N x) , \quad x \in \mathbb R . \end{array}$$
(2.9)

Hence, for arbitrary nodes \(x_{j} \in \left[- \frac {1}{2}, \frac {1}{2} \right]\), \(j\in \mathcal {I}_{M_{2}}\), we have

$$f(x_{j}) = \frac{{\hat h}(Nx_{j})}{{\hat \varphi}_{1}(Nx_{j})} , \quad j \in \mathcal{I}_{M_{2}} .$$

Therefore, it remains to compute the values \({\hat h}(Nx_{j})\), \(j\in \mathcal {I}_{M_{2}}\), because we can precompute the values \({\hat \varphi }_{1}(Nx_{j})\), \(j \in \mathcal {I}_{M_{2}}\). In some cases (see Section 4), these values \({\hat \varphi }_{1}(Nx_{j})\), \(j\in \mathcal {I}_{M_{2}}\), are explicitly known.

For arbitrary \(v_{k} \in \left[- \frac {1}{2a}, \frac {1}{2a} \right]\), \(k \in \mathcal {I}_{M_{1}}\), we have φ1(tvk) = 0 for all \(t < -\frac {1}{2a} - \frac {m_{1}}{N_{1}} =\)\(-\frac {a}{2} + \left(\frac {1}{2}- \frac {1}{2a} \right)\) and for all \(t > \frac {1}{2a} + \frac {m_{1}}{N_{1}} = \frac {a}{2} - \left(\frac {1}{2}- \frac {1}{2a} \right)\), since \({ {supp}} \varphi _{1} = \left[- \frac {m_{1}}{N_{1}}, \frac {m_{1}}{N_{1}} \right]\) and \(\frac {1}{2}- \frac {1}{2a}> 0\). Thus, by (2.8) and

$${{supp}} \varphi_{1}(\cdot - v_{k}) \subset \left[-\frac{a}{2}, \frac{a}{2}\right] , \quad k\in \mathcal{I}_{M_{1}} ,$$

we obtain

$$\hat h (N x) = \sum\limits_{k\in \mathcal{I}_{M_{1}}} f_{k} {\int}_{-a/2}^{a/2} \varphi_{1}(t-v_{k}) {\mathrm e}^{-2 \pi {\mathrm i}N x t} \;{\mathrm d}t , \quad x \in \mathbb R .$$

Then the rectangular quadrature rule leads to

$$s(N x) := \sum\limits_{k\in \mathcal{I}_{M_{1}}} f_{k} \frac{1}{N_{1}} \sum\limits_{\ell \in \mathcal{I}_{N_{1} + 2m_{1}}} \varphi_{1}\Big(\frac{\ell}{N_{1}} - v_{k}\Big) {\mathrm e}^{-2 \pi {\mathrm i} \ell x/\sigma_{1}} , \quad x \in \mathbb R ,$$
(2.10)

which approximates \({\hat h}(N x)\). Note that \(\frac {\ell }{N_{1}} \in \left[- \frac {a}{2}, \frac {a}{2} \right]\) for each \(\ell \in \mathcal {I}_{N_{1} + 2 m_{1}}\) by N1 + 2m1 = aN1. Changing the order of summations in (2.10), it follows that

$$s(N x) = \sum\limits_{\ell \in \mathcal{I}_{N_{1} + 2m_{1}}} \Bigg(\frac{1}{N_{1}} \sum\limits_{k\in \mathcal{I}_{M_{1}}} f_{k} \varphi_{1}\Big(\frac{\ell}{N_{1}} - v_{k}\Big)\Bigg) {\mathrm e}^{-2 \pi {\mathrm i} \ell x/\sigma_{1}} .$$
(2.11)

After computation of the inner sums

$$g_{\ell} := \frac{1}{N_{1}} \sum\limits_{k\in \mathcal{I}_{M_{1}}} f_{k} \varphi_{1}\Big(\frac{\ell}{N_{1}} - v_{k}\Big) , \quad \ell \in \mathcal{I}_{N_{1} + 2m_{1}} ,$$
(2.12)

we arrive at the following NFFT

$$s(N x_{j}) = \sum\limits_{\ell \in \mathcal{I}_{N_{1} + 2m_{1}}} g_{\ell} {\mathrm e}^{-2 \pi {\mathrm i} \ell x_{j}/\sigma_{1}} , \quad j \in \mathcal{I}_{M_{2}} .$$

If we denote the result of this NFFT (with the 1-periodization \({\tilde \varphi }_{2}^{(1)}\) of the second window function φ2 and N2 := σ2(N1 + 2m1)) by s1(Nxj), then \(s_{1}(Nx_{j})/{\hat \varphi }_{1}(Nx_{j})\) is an approximate value of f(xj), \(j \in \mathcal {I}_{M_{2}}\). Thus, the algorithm can be summarized as follows.

Algorithm 1 (NNFFT)

The computational cost of the NNFFT is equal to \({\mathcal O}\big (N \log N + M_{1} + M_{2}\big )\) flops.
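The procedure derived above — the inner sums (2.12), the trigonometric sum s(Nxj), and the final division by \({\hat\varphi}_1(Nx_j)\) — can be prototyped directly. The sketch below is our own illustration under arbitrary parameter choices, using the B-spline window of Example 2.1; the trigonometric sum is evaluated exactly as an NDFT instead of by the second window-based NFFT, so only the discretization error of the first step is visible.

```python
import numpy as np
from math import comb, factorial

def bspline(n, t):
    """Centered cardinal B-spline B_n via the truncated-power representation."""
    t = np.asarray(t, dtype=float)
    s = np.zeros_like(t)
    for k in range(n + 1):
        s += (-1) ** k * comb(n, k) * np.maximum(t + n / 2 - k, 0.0) ** (n - 1)
    return s / factorial(n - 1)

N, sigma1, m1 = 8, 2, 4                        # illustrative parameters
N1 = sigma1 * N                                 # oversampled size, N1 = 16
a = 1 + 2 * m1 / N1                             # constant (2.6), here a = 3/2
B0 = bspline(2 * m1, np.array([0.0]))[0]        # B_{2m1}(0)

phi1 = lambda t: bspline(2 * m1, N1 * np.asarray(t)) / B0                  # window (2.5)
phi1_hat = lambda u: np.sinc(np.asarray(u) / N1) ** (2 * m1) / (N1 * B0)   # its Fourier transform

rng = np.random.default_rng(0)
M1, M2 = 24, 20
f = rng.standard_normal(M1) + 1j * rng.standard_normal(M1)
v = rng.uniform(-1 / (2 * a), 1 / (2 * a), M1)  # frequency nodes satisfying (2.7)
x = rng.uniform(-0.5, 0.5, M2)                  # spatial nodes

# Step 1: inner sums (2.12) over the index set I_{N1+2m1}
ell = np.arange(-(N1 + 2 * m1) // 2, (N1 + 2 * m1) // 2)
g = (phi1(ell[:, None] / N1 - v[None, :]) @ f) / N1
# Step 2: the trigonometric sum s(N x_j), here evaluated exactly (Algorithm 1 uses an NFFT)
s = np.exp(-2j * np.pi * np.outer(x, ell) / sigma1) @ g
# Step 3: diagonal correction with the precomputed values phi1_hat(N x_j)
approx = s / phi1_hat(N * x)

exact = np.exp(-2j * np.pi * N * np.outer(x, v)) @ f   # the values (1.2)
rel_err = np.max(np.abs(exact - approx)) / np.sum(np.abs(f))
```

Replacing the exact Step 2 by an NFFT with the second window φ2, and the B-spline window by the \(\sinh\)-type window of Section 4, yields Algorithm 1 as stated.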

In Step 4 of Algorithm 1 we use the assumption \(2m_{2} \le \left(1 - \frac {1}{\sigma _{1}} \right) N_{2}\) such that

$$\frac{1}{2 \sigma_{1}} + \frac{m_{2}}{N_{2}} \le \frac{1}{2} .$$

Then for all \(j \in \mathcal {I}_{M_{2}}\) and \(s\in \mathcal {I}_{N_{2}}\), it holds

$${\tilde \varphi}_{2}^{(1)}\Big(\frac{x_{j}}{\sigma_{1}} - \frac{s}{N_{2}}\Big) = \varphi_{2}\Big(\frac{x_{j}}{\sigma_{1}} - \frac{s}{N_{2}}\Big) .$$

Since we approximate a non-periodic function f on the interval \(\left[-\frac {1}{2}, \frac {1}{2} \right]\) by means of a-periodic functions on the torus \(a\mathbb T\cong \Big[-\frac {a}{2}, \frac {a}{2} \Big)\), the parameter a has to fulfill the condition a > 1, in order to prevent aliasing artifacts.

3 Error estimates for NNFFT

Now we study the error of the NNFFT, which is measured in the form

$$\max_{j \in \mathcal{I}_{M_{2}}} \bigg| f(x_{j}) - \frac{s_{1}(Nx_{j})}{{\hat \varphi}_{1}(Nx_{j})}\bigg| ,$$

where f is a given exponential sum (1.1) and \(x_{j} \in \left[-\frac {1}{2}, \frac {1}{2} \right]\), \(j \in \mathcal {I}_{M_{2}}\), are arbitrary spatial nodes. At the beginning of this section we present some technical lemmas. The main result will be Theorem 3.5.

We introduce the a-periodization of the given window function (2.3) by

$${\tilde \varphi}_{1}^{(a)}(x) := \sum\limits_{\ell \in \mathbb Z} \varphi_{1}(x + a \ell) , \quad x \in \mathbb R .$$
(3.1)

For each \(x\in \mathbb R\), the above series (3.1) has at most one nonzero term. This can be seen as follows: For arbitrary \(x \in \mathbb R\) there exists a unique \(\ell ^{*} \in \mathbb Z\) such that \(x = -a \ell^{*} + r\) with a residuum \(r \in \big [-\frac {a}{2}, \frac {a}{2}\big )\). Then \(\varphi_{1}(x + a \ell^{*}) = \varphi_{1}(r)\) and hence φ1(r) > 0 for \(r \in \left(-\frac {m_{1}}{N_{1}}, \frac {m_{1}}{N_{1}} \right)\) and φ1(r) = 0 for \(r \in \big [-\frac {a}{2}, - \frac {m_{1}}{N_{1}}\big ] \cup \big [ \frac {m_{1}}{N_{1}}, \frac {a}{2}\big )\). For each \(\ell \in \mathbb Z \setminus \{\ell ^{*}\}\), we have

$$\varphi_{1} (x + a \ell) = \varphi_{1}(a (\ell - \ell^{*}) + r) = 0 ,$$

since \(|a (\ell - \ell ^{*}) + r | \ge \frac {a}{2} = \frac {1}{2} + \frac {m_{1}}{N_{1}} > \frac {m_{1}}{N_{1}}\). Further it holds

$${\tilde \varphi}_{1}^{(a)}(x) = \varphi_{1}(x) , \quad x \in \left[-1-\frac{m_{1}}{N_{1}}, 1 + \frac{m_{1}}{N_{1}}\right] .$$

By the construction of φ1, the a-periodic window function (3.1) is continuous on \(\mathbb R\) and of bounded variation over \(\left[- \frac {a}{2}, \frac {a}{2} \right]\). Then the k-th Fourier coefficient of the a-periodic window function (3.1) reads as follows

$$c_{k}^{(a)}\Big(\!{\tilde \varphi}_{1}^{(a)}\!\Big) := \frac{1}{a} {\int}_{-a/2}^{a/2} {\tilde \varphi}_{1}^{(a)}(t)\ {\mathrm e}^{-2 \pi {\mathrm i} k t/a}\ {\mathrm d}t = \frac{1}{a} {\hat \varphi}_{1}\bigg(\!\frac{k}{a}\!\bigg) , \quad k \in \mathbb Z .$$
(3.2)

By the convergence theorem of Dirichlet–Jordan (see [19, Vol. 1, pp. 57–58]), the a-periodic Fourier series of (3.1) converges uniformly on \(\mathbb R\) and it holds

$${\tilde \varphi}_{1}^{(a)}(x) = \sum\limits_{k\in \mathbb Z} c_{k}^{(a)}\Big(\!{\tilde \varphi}_{1}^{(a)}\!\Big)\ {\mathrm e}^{2 \pi {\mathrm i} k x/a} = \frac{1}{a} \sum\limits_{k\in \mathbb Z} {\hat \varphi}_{1}\bigg(\!\frac{k}{a}\!\bigg)\ {\mathrm e}^{2 \pi {\mathrm i} k x/a} .$$
(3.3)
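Relation (3.2) is easy to confirm numerically: since \({\mathrm{supp}}\, \varphi_1 \subset \left[-\frac{a}{2}, \frac{a}{2}\right]\), the periodization coincides with φ1 on one period, so the Fourier coefficients can be computed by quadrature and compared with \(\frac{1}{a}{\hat\varphi}_1(\frac{k}{a})\). A sketch with the B-spline window of Example 2.1 (the parameters are our own choice):

```python
import numpy as np
from math import comb, factorial

def bspline(n, t):
    """Centered cardinal B-spline B_n via the truncated-power representation."""
    t = np.asarray(t, dtype=float)
    s = np.zeros_like(t)
    for k in range(n + 1):
        s += (-1) ** k * comb(n, k) * np.maximum(t + n / 2 - k, 0.0) ** (n - 1)
    return s / factorial(n - 1)

N, sigma1, m1 = 8, 2, 4
N1 = sigma1 * N                                  # 16
a = 1 + 2 * m1 / N1                              # 3/2, cf. (2.6)
B0 = bspline(2 * m1, np.array([0.0]))[0]

phi1 = lambda t: bspline(2 * m1, N1 * np.asarray(t)) / B0
phi1_hat = lambda u: np.sinc(np.asarray(u) / N1) ** (2 * m1) / (N1 * B0)

# on [-a/2, a/2] the a-periodization (3.1) reduces to the single term phi_1 itself
t = np.linspace(-a / 2, a / 2, 200001)
h = t[1] - t[0]
pt = phi1(t)
devs = []
for k in range(-5, 6):
    g = pt * np.exp(-2j * np.pi * k * t / a)
    ck = h * (g.sum() - 0.5 * (g[0] + g[-1])) / a   # trapezoidal rule for (3.2)
    devs.append(abs(ck - phi1_hat(k / a) / a))
max_dev = max(devs)
```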

Then we have the following technical lemma.

Lemma 3.1

Let the window function φ1 be given by (2.3). Then for any \(n \in \mathcal {I}_{N}\) with \(N \in 2 \mathbb N\), the series

$$\sum\limits_{r\in \mathbb Z} c_{n + r (N_{1} + 2m_{1})}^{(a)}\Big(\!{\tilde \varphi}_{1}^{(a)}\!\Big)\ {\mathrm e}^{2 \pi {\mathrm i} (n + r (N_{1}+ 2m_{1})) x/a}$$

is uniformly convergent on \(\mathbb R\) and has the sum

$$\frac{1}{N_{1} + 2m_{1}} \sum\limits_{\ell \in \mathcal{I}_{N_{1}+2m_{1}}} {\mathrm e}^{-2 \pi {\mathrm i} n \ell/(N_{1} + 2m_{1})} {\tilde \varphi}_{1}^{(a)}\left(\!x + \frac{\ell}{N_{1}}\!\right)$$

which coincides with the rectangular quadrature rule of the integral

$$c_{n}^{(a)}\Big(\!{\tilde \varphi}_{1}^{(a)}(x + \cdot)\!\Big) = \frac{1}{a} {\int}_{-a/2}^{a/2} {\tilde \varphi}_{1}^{(a)}(x+ s) \ {\mathrm e}^{2 \pi {\mathrm i} n s/a}\ {\mathrm d}s = c_{n}^{(a)}\Big(\!{\tilde \varphi}_{1}^{(a)}\!\Big)\ {\mathrm e}^{2 \pi {\mathrm i} n x/a} .$$

Proof

Using the uniformly convergent Fourier series (3.3), we obtain for all \(n\in \mathcal {I}_{N}\) that

$${\mathrm e}^{-2 \pi {\mathrm i} n x/a} {\tilde \varphi}_{1}^{(a)}(x) = \sum\limits_{k\in \mathbb Z} c_{k}^{(a)}\Big(\!{\tilde \varphi}_{1}^{(a)}\!\Big)\ {\mathrm e}^{2 \pi {\mathrm i} (k-n) x/a} =\sum\limits_{q\in \mathbb Z} c_{n+q}^{(a)}\Big(\!{\tilde \varphi}_{1}^{(a)}\!\Big)\ {\mathrm e}^{2 \pi {\mathrm i} q x/a} .$$

Replacing x by \(x + \frac {\ell }{N_{1}}\) with \(\ell \in \mathcal {I}_{N_{1}+ 2m_{1}}\), we see that by N1 + 2m1 = aN1,

$${\mathrm e}^{-2 \pi {\mathrm i} n (x + \ell/N_{1})/a} {\tilde \varphi}_{1}^{(a)}\bigg(\!x + \frac{\ell}{N_{1}}\!\bigg) =\sum\limits_{q\in \mathbb Z} c_{n+q}^{(a)}\Big(\!{\tilde \varphi}_{1}^{(a)}\!\Big)\ {\mathrm e}^{2 \pi {\mathrm i} q x/a}\ {\mathrm e}^{2\pi {\mathrm i} q \ell/(N_{1} + 2m_{1})} .$$

Summing the above formulas for all \(\ell \in \mathcal {I}_{N_{1}+ 2m_{1}}\) and applying the known formula

$$\sum\limits_{\ell\in \mathcal{I}_{N_{1}+2m_{1}}}{\mathrm e}^{2\pi {\mathrm i} q \ell/(N_{1} + 2m_{1})} = \left\{ \begin{array}{ll} N_{1}+ 2m_{1} & \quad q \equiv 0 \; \mathrm{mod} (N_{1} + 2m_{1}) ,\\ 0 & \quad q \not\equiv 0 \; \mathrm{mod} (N_{1} + 2m_{1}) , \end{array} \right.$$

we conclude that

$$\begin{array}{@{}rcl@{}} & &\sum\limits_{\ell\in \mathcal{I}_{N_{1}+2m_{1}}} {\mathrm e}^{-2 \pi {\mathrm i} n (x + \ell/N_{1})/a} {\tilde \varphi}_{1}^{(a)}\bigg(\!x + \frac{\ell}{N_{1}}\!\bigg)\\ & & = (N_{1} + 2m_{1}) \sum\limits_{r\in \mathbb Z} c_{n + r (N_{1} + 2m_{1})}^{(a)}\Big(\!{\tilde \varphi}_{1}^{(a)}\!\Big)\ {\mathrm e}^{2 \pi {\mathrm i} r (N_{1}+ 2m_{1}) x/a} . \end{array}$$

Obviously,

$$\frac{1}{N_{1}+2m_{1}} \sum\limits_{\ell\in \mathcal{I}_{N_{1}+2m_{1}}} {\mathrm e}^{-2 \pi {\mathrm i} n (x + \ell/N_{1})/a} {\tilde \varphi}_{1}^{(a)}\left(\!x + \frac{\ell}{N_{1}}\!\right)$$

is the rectangular quadrature formula of the integral

$$\frac{1}{a} {\int}_{-a/2}^{a/2} {\tilde \varphi}_{1}^{(a)}(x+ s) {\mathrm e}^{2 \pi {\mathrm i} n s/a} \;{\mathrm d}s$$

with respect to the uniform grid \(\Big\{\frac {\ell }{N_{1}} : \ell \in \mathcal {I}_{N_{1}+2m_{1}} \Big\}\) of the interval \(\left[-\frac {a}{2}, \frac {a}{2} \right]\). This completes the proof.

By (3.2) we obtain that for \(n \in \mathcal {I}_{N}\),

$$\Bigg| \sum\limits_{r \in \mathbb Z \setminus \{0\}} \frac{c_{n + r (N_{1}+2m_{1})}^{(a)}({\tilde \varphi}_{1}^{(a)})}{c_{n}^{(a)}({\tilde \varphi}_{1}^{(a)})}\ {\mathrm e}^{2 \pi {\mathrm i} r (N_{1}+2m_{1}) x/a}\Bigg| = \Bigg| \sum\limits_{r \in \mathbb Z \setminus \{0\}} \frac{{\hat \varphi}_{1}(n/a + r N_{1})}{{\hat \varphi}_{1}(n/a)}\ {\mathrm e}^{2 \pi {\mathrm i} rN_{1} x/a}\Bigg| .$$

Now we generalize the technical Lemma 3.1.

Lemma 3.2

For arbitrary fixed \(v \in \left[-\frac {N}{2}, \frac {N}{2} \right]\), \(N\in \mathbb N\), and given window function (2.3), the function

$$\psi_{1}(x):= \frac{1}{N_{1}} \sum\limits_{\ell\in \mathbb Z}\ {\mathrm e}^{- 2\pi {\mathrm i} v \ell/(N_{1} + 2m_{1})}\ {\mathrm e}^{-2 \pi {\mathrm i} v x/a} \varphi_{1}\left(\!x +\frac{\ell}{N_{1}}\!\right)$$
(3.4)

is \(\frac {1}{N_{1}}\)-periodic, continuous on \(\mathbb R\), and of bounded variation over \(\left[-\frac {1}{2}, \frac {1}{2} \right]\). For each \(x\in \mathbb R\), the corresponding \(\frac {1}{N_{1}}\)-periodic Fourier series converges uniformly to ψ1(x), i.e.,

$$\psi_{1}(x) = \sum\limits_{r \in \mathbb Z} {\hat \varphi}_{1}\Big(\frac{v}{a} + r N_{1}\Big)\ {\mathrm e}^{2 \pi {\mathrm i} rN_{1} x} .$$

Proof

The function ψ1 in (3.4) is well defined, since

$$\psi_{1}(x) = \frac{1}{N_{1}} \sum\limits_{\ell\in {\mathbb Z}_{m_{1},N_{1}}(x)}\ {\mathrm e}^{- 2\pi {\mathrm i} v \ell/(N_{1} + 2m_{1})}\ {\mathrm e}^{-2 \pi {\mathrm i} v x/a} \varphi_{1}\left(\!x +\frac{\ell}{N_{1}}\!\right)$$

with the finite index set \({\mathbb Z}_{m_{1},N_{1}}(x) = \{ \ell \in \mathbb Z: | N_{1} x + \ell | < m_{1} \}\). If \(x\in \left[-\frac {1}{2}, \frac {1}{2} \right]\), we observe that \({\mathbb Z}_{m_{1},N_{1}}(x) \subseteq \mathcal {I}_{N_{1} + 2m_{1}}\) and therefore

$$\psi_{1}(x) = \frac{1}{N_{1}} \sum\limits_{\ell\in \mathcal{I}_{N_{1}+2m_{1}}} {\mathrm e}^{- 2\pi {\mathrm i} v \ell/(N_{1} + 2m_{1})}\ {\mathrm e}^{-2 \pi {\mathrm i} v x/a} \varphi_{1}\left(\!x +\frac{\ell}{N_{1}}\!\right) .$$

Simple calculation shows that for each \(x \in \mathbb R\),

$$\psi_{1}\left(\!x + \frac{1}{N_{1}}\!\right) = \frac{1}{N_{1}} \sum\limits_{\ell\in \mathbb Z}\ {\mathrm e}^{- 2\pi {\mathrm i} v (\ell+1)/(N_{1} + 2m_{1})}\ {\mathrm e}^{-2 \pi {\mathrm i} v x/a} \varphi_{1}\left(\!x +\frac{\ell+1}{N_{1}}\!\right) = \psi_{1}(x) .$$

By the construction of φ1, the \(\frac {1}{N_{1}}\)-periodic function ψ1 is continuous on \(\mathbb R\) and of bounded variation over \(\left[-\frac {1}{2}, \frac {1}{2} \right]\). Thus, by the convergence theorem of Dirichlet–Jordan, the Fourier series of ψ1 converges uniformly on \(\mathbb R\) to ψ1. The r th Fourier coefficient of ψ1 reads as follows

$$\begin{array}{@{}rcl@{}} c_{r}^{(1/N_{1})}(\psi_{1}) &=& N_{1} {\int}_{0}^{1/N_{1}} \psi_{1}(t) {\mathrm e}^{-2 \pi {\mathrm i} rN_{1} t} \;{\mathrm d}t\\ &=& \sum\limits_{\ell\in \mathbb Z} {\mathrm e}^{- 2\pi {\mathrm i} v \ell/(N_{1} + 2m_{1})} {\int}_{0}^{1/N_{1}} {\mathrm e}^{-2 \pi {\mathrm i} v t/a} \varphi_{1}\left(\!t +\frac{\ell}{N_{1}}\!\right) \;{\mathrm d}t\\ &=& \sum\limits_{\ell\in \mathbb Z} {\int}_{\ell/N_{1}}^{(\ell+1)/N_{1}} \varphi_{1}(s) {\mathrm e}^{-2 \pi {\mathrm i} (v/a + rN_{1}) s } {\mathrm\;d}s = {\hat \varphi}_{1}\Big(\frac{v}{a} + r N_{1}\Big) , \quad r \in \mathbb Z . \end{array}$$

This completes the proof.

Then Lemma 3.2 leads immediately to the following technical result.

Corollary 3.3

Let the window function φ1 be given by (2.3). For all \(x \in \left[-\frac {1}{2}, \frac {1}{2} \right]\) and \(w \in \left[-\frac {N}{2a}, \frac {N}{2a} \right]\) it holds then

$$\begin{array}{@{}rcl@{}} & &\sum\limits_{r \in {\mathbb Z}\setminus \{0\}} \frac{{\hat \varphi}_{1}(w + r N_{1})}{{\hat \varphi}_{1}(w)}\ {\mathrm e}^{2 \pi {\mathrm i} (w + r N_{1}) x} \\ & & = \frac{1}{N_{1} {\hat \varphi}_{1}(w)} \sum\limits_{\ell\in \mathcal{I}_{N_{1} + 2m_{1}}}\ {\mathrm e}^{- 2\pi {\mathrm i} w \ell/N_{1}} \varphi_{1}\left(\!x +\frac{\ell}{N_{1}}\!\right) -\ {\mathrm e}^{2 \pi {\mathrm i} w x} . \end{array}$$
(3.5)

Further, for all \(w \in \left[-\frac {N}{2a}, \frac {N}{2a} \right]\), it holds

$$\begin{array}{@{}rcl@{}} & &\max\limits_{x\in [-1/2, 1/2]} \left| \frac{1}{N_{1} {\hat \varphi}_{1}(w)}\sum\limits_{\ell \in \mathcal{I}_{N_{1}+2m_{1}}} \varphi_{1}\left(\!x +\frac{\ell}{N_{1}}\!\right)\ {\mathrm e}^{- 2\pi {\mathrm i} w \ell /N_{1}} - {\mathrm e}^{2 \pi {\mathrm i} w x}\right| \\ & &= \left\|\sum\limits_{r \in {\mathbb Z}\setminus \{0\}} \frac{{\hat \varphi}_{1}(w + r N_{1})}{{\hat \varphi}_{1}(w)}\ {\mathrm e}^{2 \pi{\mathrm i} r N_{1} \cdot}\right\|_{C(\mathbb T)} . \end{array}$$
(3.6)

Proof

As before, let \(v \in \Big[-\frac {N}{2}, \frac {N}{2} \Big]\) be given. Substituting \(w := \frac {v}{a}\in \left[-\frac {N}{2a}, \frac {N}{2a}\right]\) and observing N1 + 2m1 = aN1, we obtain by Lemma 3.2 that for all \(x \in \Big[-\frac {1}{2}, \frac {1}{2} \Big]\) it holds,

$$\begin{array}{@{}rcl@{}} & &\frac{1}{N_{1}} \sum\limits_{\ell\in \mathcal{I}_{N_{1}+2m_{1}}}\ {\mathrm e}^{- 2\pi {\mathrm i} w \ell/N_{1}}\ {\mathrm e}^{-2 \pi {\mathrm i} w x} \varphi_{1}\left(\!x +\frac{\ell}{N_{1}}\!\right) - {\hat \varphi}_{1}(w)\\ & & = \sum\limits_{r \in {\mathbb Z}\setminus \{0\}} {\hat \varphi}_{1}(w + r N_{1})\ {\mathrm e}^{2 \pi {\mathrm i} r N_{1} x} . \end{array}$$

Since by assumption \({\hat \varphi }_{1}(w) > 0\) for all \(w \in \left[-\frac {N}{2a}, \frac {N}{2a} \right]\subset \left[-\frac {N}{2}, \frac {N}{2} \right]\), we have

$$\begin{array}{@{}rcl@{}} & &\frac{1}{N_{1} {\hat \varphi}_{1}(w)} \sum\limits_{\ell\in \mathcal{I}_{N_{1}+2m_{1}}}\ {\mathrm e}^{- 2\pi {\mathrm i} w \ell/N_{1}}\ {\mathrm e}^{-2 \pi {\mathrm i} w x} \varphi_{1}\left(\!x +\frac{\ell}{N_{1}}\!\right) - 1\\ & & = \sum\limits_{r \in {\mathbb Z}\setminus \{0\}} \frac{{\hat \varphi}_{1}(w + r N_{1})}{{\hat \varphi}_{1}(w)}\ {\mathrm e}^{2 \pi {\mathrm i} r N_{1} x} . \end{array}$$

Multiplying the above equality by the exponential e2πiwx yields (3.5). Taking absolute values and the maximum over \(x \in \left[-\frac {1}{2}, \frac {1}{2}\right]\), and noting that \(|{\mathrm e}^{2\pi{\mathrm i}wx}| = 1\), then gives (3.6).

We say that the window function φ1 of the form (2.3) is convenient for the NNFFT if the general \(C(\mathbb T)\)-error constant

$$E_{\sigma_{1}}(\varphi_{1}) := \sup\limits_{N\in\mathbb N} E_{\sigma_{1},N}(\varphi_{1})$$
(3.7)

with

$$E_{\sigma_{1},N}(\varphi_{1}) := \max\limits_{v \in [- N/2, N/2]} \left\| \sum\limits_{r \in \mathbb Z \setminus \{0\}} \frac{{\hat \varphi}_{1}(v + r N_{1})}{{\hat \varphi}_{1}(v)}\ {\mathrm e}^{2 \pi {\mathrm i} r N_{1} \cdot}\right\|_{C(\mathbb T)}$$
(3.8)

fulfills the condition \(E_{\sigma _{1}}(\varphi _{1}) \ll 1\) for conveniently chosen truncation parameter m1 ≥ 2 and oversampling factor σ1 > 1. Obviously, the \(C(\mathbb T)\)-error constant (2.4) is a “discrete” version of the general \(C(\mathbb T)\)-error constant (3.7) with the property

$$e_{\sigma_{1}}(\varphi_{1}) \le E_{\sigma_{1}}(\varphi_{1}) .$$
(3.9)

Thus, Corollary 3.3 means that all complex exponentials e2πiwx with \(w\in \left[-\frac {N}{2a}, \frac {N}{2a} \right]\) and \(x\in \left[-\frac {1}{2}, \frac {1}{2} \right]\) can be uniformly approximated by short linear combinations of shifted window functions, cf. [4, Theorem 2.10], if φ1 is convenient for NNFFT.
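This approximation property can be checked numerically. The following hedged sketch evaluates the left-hand side of (3.6) for a single exponential; as a concrete ω1 ∈ Ω we take the \(\sinh\)-type window of Section 4, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

# Hedged sketch: numerical check of Corollary 3.3. The exponential
# e^{2 pi i w x} is approximated by the linear combination of shifted
# windows appearing in (3.6). As a concrete omega_1 in Omega we take the
# sinh-type window of Section 4; all parameters below are assumptions.

N, sigma1, m1 = 32, 1.25, 4
N1 = int(sigma1 * N)                       # N_1 = sigma_1 N = 40 (even)
a = (N1 + 2 * m1) / N1                     # constant (2.6)
beta = 2 * np.pi * m1 * (1 - 1 / (2 * sigma1))

def omega1(x):                             # sinh-type window, support [-1, 1]
    y = np.zeros_like(x)
    inside = np.abs(x) < 1
    y[inside] = np.sinh(beta * np.sqrt(1 - x[inside] ** 2)) / np.sinh(beta)
    return y

def phi1(t):                               # scaled window (2.3)
    return omega1(N1 * t / m1)

def phi1_hat(w):                           # Fourier transform by quadrature
    t = np.linspace(-m1 / N1, m1 / N1, 4001)
    return np.sum(phi1(t) * np.exp(-2j * np.pi * w * t)) * (t[1] - t[0])

w = N / (2 * a)                            # extreme admissible frequency
x = np.linspace(-0.5, 0.5, 201)
ells = np.arange(-(N1 + 2 * m1) // 2, (N1 + 2 * m1) // 2)
approx = sum(np.exp(-2j * np.pi * w * l / N1) * phi1(x + l / N1) for l in ells)
approx /= N1 * phi1_hat(w)
err = np.max(np.abs(approx - np.exp(2j * np.pi * w * x)))
print(err)                                 # small, in the size of (3.6)
```

The observed maximum error is of the size of the \(C(\mathbb T)\)-error constant discussed above.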

Theorem 3.4

Let σ1 > 1, \(m_{1} \in {\mathbb N} \setminus \{1\}\), and \(N_{1} = \sigma _{1} N \in 2\mathbb N\) with \(2m_{1} \ll N_{1}\) be given. Let φ1 be the scaled version (2.3) of ω1 ∈Ω. Assume that the Fourier transform \({\hat \omega }_{1}\) fulfills the decay condition

$$|{\hat \omega}_{1}(v)| \le \left\{ \begin{array}{ll} c_{1} & \quad |v| \in \Big[m_{1} \Big(1 - \frac{1}{2\sigma_{1}}\Big), m_{1} \Big(1 + \frac{1}{2\sigma_{1}}\Big)\Big] ,\\ c_{2} |v|^{-\mu} & \quad |v| \ge m_{1} \Big(1 + \frac{1}{2\sigma_{1}}\Big) , \end{array} \right.$$

with certain constants c1 > 0, c2 > 0, and μ > 1.

Then the general \(C(\mathbb T)\)-error constant \(E_{\sigma _{1}}(\varphi _{1})\) of the window function (2.3) has the upper bound

$$E_{\sigma_{1}}(\varphi_{1}) \le \frac{1}{{\hat \omega}_{1}(\frac{m_{1}}{2 \sigma_{1}})} \left[2 c_{1} + \frac{2 c_{2}}{(\mu -1) m_{1}^{\mu}}\left(\!1 - \frac{1}{2\sigma_{1}}\!\right)^{1-\mu}\right] .$$
(3.10)

Proof

By the scaling property of the Fourier transform, we have

$${\hat \varphi}_{1}(v) = {\int}_{\mathbb R} \varphi_{1}(t)\ {\mathrm e}^{-2 \pi {\mathrm i}v t} \;{\mathrm d}t = \frac{m_{1}}{N_{1}} {\hat \omega}_{1}\Big(\frac{m_{1} v}{N_{1}}\Big) , \quad v \in \mathbb R .$$

For all \(v \in \left[-\frac {N}{2}, \frac {N}{2} \right]\) and \(r\in {\mathbb Z}\setminus \{0, \pm 1\}\), we obtain

$$\bigg| \frac{m_{1} v}{N_{1}} + m_{1} r \bigg| \ge m_{1} \left(\!2 - \frac{1}{2 \sigma_{1}}\!\right) > m_{1} \left(\!1 + \frac{1}{2 \sigma_{1}}\!\right)$$

and hence

$$|{\hat \varphi_{1}}(v + r N_{1})| = \frac{m_{1}}{N_{1}} \Big|{\hat \omega}_{1}\Big(\frac{m_{1} v}{N_{1}} + m_{1} r\Big)\Big| \le \frac{m_{1} c_{2}}{m_{1}^{\mu} N_{1}} \Big|\frac{v}{N_{1}} + r\Big|^{-\mu} .$$

From [15, Lemma 3.1] it follows that for fixed \(u =\frac {v}{N_{1}}\in \left[- \frac {1}{2\sigma _{1}}, \frac {1}{2\sigma _{1}} \right]\),

$$\sum\limits_{r \in {\mathbb Z} \setminus \{0, \pm 1\}} |u + r|^{-\mu} \le \frac{2}{\mu - 1} \left(\!1 - \frac{1}{2\sigma_{1}}\!\right)^{1-\mu} .$$

For all \(v \in \left[- \frac {N}{2}, \frac {N}{2} \right]\), we obtain

$$|{\hat \varphi}_{1}(v \pm N_{1})| = \frac{m_{1}}{N_{1}} \Big|{\hat \omega}_{1} \Big(\frac{m_{1} v}{N_{1}} \pm m_{1} \Big)\Big| \le \frac{m_{1}}{N_{1}} c_{1} ,$$

since it holds

$$\bigg|\frac{m_{1} v}{N_{1}} \pm m_{1} \bigg| \in \left[m_{1} \left(\!1 - \frac{1}{2\sigma_{1}}\!\right), m_{1} \left(\!1 + \frac{1}{2\sigma_{1}}\!\right)\right] .$$

Thus, for each \(v \in \left[-\frac {N}{2}, \frac {N}{2} \right]\), we estimate the sum

$$\begin{array}{@{}rcl@{}} \sum\limits_{r \in {\mathbb Z} \setminus \{0\}} |{\hat \varphi}_{1}(v + r N_{1})| &\le& \frac{m_{1}}{N_{1}} \bigg[\Big|{\hat \omega}_{1}\Big(\frac{m_{1} v}{N_{1}} - m_{1} \Big)\Big| + \Big|{\hat \omega}_{1}\Big(\frac{m_{1} v}{N_{1}} + m_{1} \Big)\Big|\\ & & + \sum\limits_{r \in {\mathbb Z} \setminus \{0, \pm 1\}} \Big|{\hat \omega}_{1}\Big(\frac{m_{1} v}{N_{1}} + m_{1} r\Big)\Big|\bigg]\\ &\le& \frac{m_{1}}{N_{1}} \bigg[2 c_{1} + \frac{c_{2}}{m_{1}^{\mu}} \sum\limits_{r \in {\mathbb Z} \setminus \{0,\pm 1\}} \Big|\frac{v}{N_{1}} + r \Big|^{- \mu}\bigg]\\ &\le& \frac{m_{1}}{N_{1}} \bigg[2 c_{1} + \frac{2 c_{2}}{(\mu - 1) m_{1}^{\mu}} \Big(1 -\frac{1}{2\sigma_{1}}\Big)^{1 -\mu}\bigg] \end{array}$$

such that

$$\max\limits_{v \in [-N/2,N/2]}\sum\limits_{r \in {\mathbb Z} \setminus \{0\}} |{\hat \varphi}_{1}(v + r N_{1})| \le \frac{m_{1}}{N_{1}} \left[2 c_{1} + \frac{2 c_{2}}{(\mu - 1) m_{1}^{\mu}} \left(\!1 -\frac{1}{2\sigma_{1}}\!\right)^{1 -\mu}\right] .$$

Now we determine the minimum of all positive values

$${\hat \varphi}_{1}(v) = \frac{m_{1}}{N_{1}} {\hat \omega}_{1}\Big(\frac{m_{1} v}{N_{1}}\Big) , \quad v \in \left[\! - \frac{N}{2}, \frac{N}{2}\!\right] .$$

Since \(\frac {m_{1} |v|}{N_{1}} \le \frac {m_{1}}{2 \sigma _{1}}\) for all \(v \in \left[ - \frac {N}{2}, \frac {N}{2} \right]\), we obtain

$$\min_{v \in [-N/2,N/2]} {\hat \varphi}_{1}(v) = \frac{m_{1}}{N_{1}} \min_{v \in [-N/2,N/2]} {\hat \omega}_{1}\Big(\frac{m_{1} v}{N_{1}}\Big) = \frac{m_{1}}{N_{1}} {\hat \omega}_{1}\Big(\frac{m_{1}}{2 \sigma_{1}}\Big) = {\hat \varphi}_{1}\Big(\frac{N}{2}\Big) > 0 .$$

Thus, we see that the constant \(E_{\sigma _{1},N}(\varphi _{1})\) can be estimated by an upper bound which depends on m1 and σ1, but does not depend on N. We obtain

$$\begin{array}{@{}rcl@{}} E_{\sigma_{1},N}(\varphi_{1}) &\le& \frac{1}{{\hat \varphi}_{1}(N/2)} \max\limits_{v \in [-N/2, N/2]} \sum\limits_{r \in {\mathbb Z} \setminus \{0\}} |{\hat \varphi}_{1}(v + r N_{1})|\\ &\le& \frac{1}{{\hat \omega}_{1}(\frac{m_{1}}{2\sigma_{1}})} \left[2 c_{1} + \frac{2 c_{2}}{(\mu - 1) m_{1}^{\mu}} \left(1 -\frac{1}{2\sigma_{1}}\right)^{1-\mu}\right] . \end{array}$$

Consequently, the general \(C(\mathbb T)\)-error constant \(E_{\sigma _{1}}(\varphi _{1})\) has the upper bound (3.10). By (3.9), the expression (3.10) is also an upper bound of the \(C(\mathbb T)\)-error constant \(e_{\sigma _{1}}(\varphi _{1})\).

Thus, by means of these technical results we obtain the following error estimate for the NNFFT.

Theorem 3.5

Let the nonharmonic bandwidth \(N\in \mathbb N\) with N ≫ 1 be given. Assume that \(N_{1} = \sigma _{1} N \in 2 \mathbb N\) with σ1 > 1. For fixed \(m_{1} \in \mathbb N \setminus \{1\}\) with \(2m_{1} \ll N_{1}\), let N2 = σ2(N1 + 2m1) with σ2 > 1. For \(m_{2} \in \mathbb N \setminus \{1\}\) with \(2m_{2} \le (1 - \frac {1}{\sigma _{1}} ) N_{2}\), let φ1 and φ2 be the window functions of the form (2.5). Let \(x_{j} \in \left[ - \frac {1}{2}, \frac {1}{2} \right]\), \(j \in \mathcal {I}_{M_{2}}\), be arbitrary spatial nodes and let \(f_{k} \in \mathbb C\), \(k \in \mathcal {I}_{M_{1}}\), be arbitrary coefficients. Further, let a > 1 be the constant (2.6).

Then for a given exponential sum (1.1) with arbitrary frequencies \(v_{k}\in \left[- \frac {1}{2a}, \frac {1}{2a} \right]\), \(k \in \mathcal {I}_{M_{1}}\), the error of the NNFFT can be estimated by

$$\begin{array}{@{}rcl@{}} \max\limits_{j\in \mathcal{I}_{M_{2}}} \bigg| f(x_{j}) - \frac{s_{1}(Nx_{j})}{{\hat\varphi}_{1}(Nx_{j})}\bigg| &&\le \max\limits_{x \in [-1/2, 1/2]} \bigg| f(x) - \frac{s_{1}(Nx)}{{\hat\varphi}_{1}(Nx)}\bigg| \\ &&\le \left[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\Big(\frac{N}{2}\Big)} E_{\sigma_{2}}(\varphi_{2})\right] \sum\limits_{k\in \mathcal{I}_{M_{1}}} |f_{k}| , \end{array}$$
(3.11)

where \(E_{\sigma _{j}}(\varphi _{j})\) for j = 1,2, are the general \(C(\mathbb T)\)-error constants of the form (3.7).

Proof

Now for arbitrary spatial nodes \(x_{j} \in \left[-\frac {1}{2}, \frac {1}{2} \right]\), \(j\in \mathcal {I}_{M_{2}}\), we estimate the error of the NNFFT in the form

$$\max\limits_{j\in \mathcal{I}_{M_{2}}} \bigg| f(x_{j}) - \frac{s_{1}(N x_{j})}{{\hat \varphi}_{1}(Nx_{j})}\bigg|\le \max\limits_{j\in \mathcal{I}_{M_{2}}} \bigg| f(x_{j}) - \frac{s(Nx_{j})}{{\hat \varphi}_{1}(N x_{j})}\bigg| + \max\limits_{j\in \mathcal{I}_{M_{2}}} \frac{|s(Nx_{j}) - s_{1}(Nx_{j})|}{{\hat \varphi}_{1}(N x_{j})} .$$

At first we consider

$$\max_{j\in \mathcal{I}_{M_{2}}} \bigg| f(x_{j}) - \frac{s(N x_{j})}{{\hat \varphi}_{1}(Nx_{j})}\bigg| \le \max_{x\in [-1/2, 1/2]} \bigg| f(x) - \frac{s(Nx)}{{\hat \varphi}_{1}(Nx)}\bigg| .$$

From (2.9) and (2.11) it follows that for all \(x \in \mathbb R\),

$$\begin{array}{@{}rcl@{}} & &f(x) - \frac{s(Nx)}{{\hat \varphi}_{1}(N x)} = \frac{\hat h(Nx) - s(Nx)}{\hat \varphi_{1}(Nx)}\\ & &= \sum\limits_{k\in \mathcal{I}_{M_{1}}} f_{k} \left[ {\mathrm e}^{-2\pi{\mathrm i}N v_{k} x} - \frac{1}{N_{1} {\hat \varphi}_{1}(Nx)} \sum\limits_{\ell \in \mathcal{I}_{N_{1}+2m_{1}}}\varphi_{1}\left(\!\frac{\ell}{N_{1}}-v_{k}\!\right) {\mathrm e}^{-2\pi {\mathrm i} \ell x/\sigma_{1}}\right] . \end{array}$$

Thus, by (2.7), (3.6), and (3.8), we obtain the estimate

$$\begin{array}{@{}rcl@{}} \max\limits_{x\in [-1/2, 1/2]} \bigg| f(x) - \frac{s(Nx)}{{\hat \varphi}_{1}(Nx)}\bigg| \le E_{\sigma_{1},N}(\varphi_{1}) \sum\limits_{k\in \mathcal{I}_{M_{1}}} |f_{k}| \le E_{\sigma_{1}}(\varphi_{1}) \sum\limits_{k\in \mathcal{I}_{M_{1}}} |f_{k}| . \end{array}$$
(3.12)

Now we show that for \(\varphi _{2}(t) := \omega _{2}\big (\frac {N_{2} t}{m_{2}}\big )\) and N2 = σ2(N1 + 2m1) it holds

$$\max\limits_{x\in [-1/2, 1/2]} |s(N x) - s_{1}(Nx)| \le E_{\sigma_{2}}(\varphi_{2}) \sum\limits_{\ell\in \mathcal{I}_{N_{1}+ 2m_{1}}} |g_{\ell}| .$$
(3.13)

By construction, the functions s and s1 can be represented in the form

$$\begin{array}{@{}rcl@{}} s(N x) &=& \sum\limits_{\ell \in \mathcal{I}_{N_{1} + 2m_{1}}} g_{\ell} {\mathrm e}^{-2 \pi {\mathrm i}\ell x/\sigma_{1}} ,\\ s_{1}(N x) &=& \sum\limits_{s \in \mathcal{I}_{N_{2}}} h_{s} {\tilde \varphi}_{2}^{(1)}\bigg(\!\frac{x}{\sigma_{1}} - \frac{s}{N_{2}}\!\bigg) , \quad x \in \mathbb R , \end{array}$$

where \({\tilde \varphi }_{2}^{(1)}\) denotes the 1-periodization of the second window function φ2 and

$$h_{s} := \frac{1}{N_{2}} \sum\limits_{\ell \in \mathcal{I}_{N_{1} + 2m_{1}}} \frac{g_{\ell}}{{\hat \varphi}_{2}(\ell)}\ {\mathrm e}^{- 2 \pi {\mathrm i} \ell s /N_{2}} .$$

Substituting \(t = \frac {x}{\sigma _{1}}\), it follows that

$$\begin{array}{@{}rcl@{}} s(N_{1} t) &=& \sum\limits_{\ell \in \mathcal{I}_{N_{1} + 2m_{1}}} g_{\ell}\ {\mathrm e}^{-2 \pi {\mathrm i}\ell t} ,\\ s_{1}(N_{1} t) &=& \sum\limits_{s \in \mathcal{I}_{N_{2}}} h_{s} {\tilde \varphi}_{2}^{(1)}\left(\!t - \frac{s}{N_{2}}\!\right) , \quad t \in \mathbb R , \end{array}$$

are 1-periodic functions. By [15, Lemma 2.3], we conclude

$$\max\limits_{t\in [-1/2, 1/2]} |s(N_{1} t) - s_{1}(N_{1} t)| \le e_{\sigma_{2}}(\varphi_{2}) \sum\limits_{\ell\in \mathcal{I}_{N_{1}+ 2m_{1}}} |g_{\ell}| \le E_{\sigma_{2}}(\varphi_{2}) \sum\limits_{\ell\in \mathcal{I}_{N_{1}+ 2m_{1}}} |g_{\ell}| ,$$

where the general \(C(\mathbb T)\)-error constant \(E_{\sigma _{2}}(\varphi _{2})\), defined analogously to (3.7), satisfies the analogous property (3.9). Since x = σ1t, we obtain that

$$\begin{array}{@{}rcl@{}} \max\limits_{t\in [-1/2, 1/2]} |s(N_{1} t) - s_{1}(N_{1} t)| &=& \max\limits_{x \in [-\sigma_{1}/2, \sigma_{1}/2]} |s(N x) - s_{1}(N x)| \\ &\le& E_{\sigma_{2}}(\varphi_{2}) \sum\limits_{\ell\in \mathcal{I}_{N_{1}+ 2m_{1}}} |g_{\ell}| , \end{array}$$
(3.14)

which proves (3.13). Note that for \(x \in \left[- \frac {1}{2}, \frac {1}{2} \right]\) it holds

$$s_{1}(N x) = \sum\limits_{s \in \mathcal{I}_{N_{2}}^{\prime\prime}(x)} h_{s} \varphi_{2}\left(\!\frac{x}{\sigma_{1}} - \frac{s}{N_{2}}\!\right)$$

with the index set

$$\mathcal{I}_{N_{2}}^{\prime\prime}(x) := \bigg\{ s \in \mathcal{I}_{N_{2}}: \bigg|\frac{s}{N_{2}} - \frac{x}{\sigma_{1}}\bigg| < \frac{m_{2}}{N_{2}}\bigg\} .$$

Further, by (2.6) and (2.12) it holds

$$\begin{array}{@{}rcl@{}} \sum\limits_{\ell\in \mathcal{I}_{N_{1}+ 2m_{1}}} |g_{\ell}| &\le& \frac{1}{N_{1}} \sum\limits_{\ell \in \mathcal{I}_{N_{1}+2m_{1}}} \sum\limits_{k\in \mathcal{I}_{M_{1}}} |f_{k}| \cdot 1 \le \frac{N_{1} + 2m_{1}}{N_{1}} \sum\limits_{k\in \mathcal{I}_{M_{1}}} |f_{k}| = a \sum\limits_{k\in \mathcal{I}_{M_{1}}} |f_{k}| . \end{array}$$

Combining this with (3.12) and (3.14) completes the proof.

Now it merely remains to estimate the general \(C(\mathbb T)\)-error constants \(E_{\sigma _{j}}(\varphi _{j})\) for j = 1,2, and \({\hat \varphi }_{1} \left(\frac {N}{2} \right)\) in (3.11) for specific window functions.
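The web of parameter couplings in Theorem 3.5 can be collected in a short sketch; all concrete numbers below are assumptions chosen for illustration only.

```python
# Hedged sketch: one admissible parameter configuration for Theorem 3.5;
# the concrete numbers are assumptions chosen for illustration.
N = 1200                                 # nonharmonic bandwidth
sigma1, sigma2 = 1.25, 1.25              # oversampling factors > 1
m1, m2 = 4, 5                            # truncation parameters >= 2
N1 = int(sigma1 * N)                     # N_1 = sigma_1 N = 1500 (even)
a = (N1 + 2 * m1) / N1                   # constant (2.6)
N2 = int(sigma2 * (N1 + 2 * m1))         # N_2 = sigma_2 (N_1 + 2 m_1)
assert N1 % 2 == 0 and 2 * m1 <= N1
assert 2 * m2 <= (1 - 1 / sigma1) * N2   # admissibility condition on m_2
print(N1, round(a, 5), N2)
```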

4 Error of NNFFT with \({\sinh }\)-type window functions

In this section we specify the result in Theorem 3.5 for the NNFFT with two \(\sinh\)-type window functions.

Let \(N\in \mathbb N\) with N ≫ 1 be the fixed nonharmonic bandwidth. Let \(\sigma _{1}, \sigma _{2} \in \left[\frac {5}{4}, 2 \right]\) be given oversampling factors. Further let \(N_{1} = \sigma _{1} N \in 2 \mathbb N\), \(m_{1} \in \mathbb N \setminus \{1\}\) with \(2m_{1} \ll N_{1}\), and \(N_{2} = \sigma _{2} (N_{1} + 2m_{1}) = \sigma _{1} \sigma _{2} a N \in 2 \mathbb N\) be given, where a > 1 denotes the constant (2.6). Let \(m_{2} \in \mathbb N \setminus \{1\}\) with \(2m_{2} \le \left(1 - \frac {1}{\sigma _{1}} \right) N_{2}\) be given as well.

For j = 1, 2, we consider the functions

$$\omega_{\sinh,j}(x) := \left\{\begin{array}{ll} \frac{1}{\sinh \beta_{j}} \sinh\big(\beta_{j} \sqrt{1 - x^{2}}\big) & \quad x\in [-1, 1] ,\\ 0 & \quad x \in {\mathbb R} \setminus [-1, 1] \end{array} \right.$$

with the shape parameter

$$\beta_{j} := 2 \pi m_{j} \left(\!1 - \frac{1}{2\sigma_{j}}\!\right) .$$

As shown in Example 2.1, both functions belong to the set Ω. By scaling, for j = 1, 2, we introduce the \(\sinh\)-type window functions

$$\varphi_{\sinh,j}(t) := \omega_{\sinh,j}\left(\!\frac{N_{j} t}{m_{j}}\!\right) , \quad t \in \mathbb R .$$
(4.1)
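As a hedged numerical aside, the closed form of the Fourier transform of the \(\sinh\)-type window appearing in the proof of Theorem 4.1, namely \({\hat \omega }_{\sinh }(v) = \frac{\pi \beta}{\sinh \beta}\, (\beta^{2} - 4\pi^{2}v^{2})^{-1/2}\, I_{1}(\sqrt{\beta^{2} - 4\pi^{2}v^{2}})\) for 2π|v| < β, can be checked against direct quadrature. The Bessel function I1 is evaluated by its power series, and all parameter values are assumptions.

```python
import numpy as np

# Hedged sketch: check the closed form of the Fourier transform of the
# sinh-type window, omega_hat(v) = (pi*beta/sinh(beta)) * I_1(g)/g with
# g = sqrt(beta^2 - 4 pi^2 v^2), against direct quadrature. I_1 is
# evaluated by its power series; the parameters are assumptions.

m1, sigma1 = 4, 1.25
beta = 2 * np.pi * m1 * (1 - 1 / (2 * sigma1))   # shape parameter

def bessel_i1(z, terms=80):                # power series of I_1
    s, t = 0.0, z / 2                      # t = (z/2)^{2k+1}/(k!(k+1)!)
    for k in range(terms):
        s += t
        t *= (z / 2) ** 2 / ((k + 1) * (k + 2))
    return s

def omega_sinh(x):                         # the window, support [-1, 1]
    y = np.zeros_like(x)
    inside = np.abs(x) < 1
    y[inside] = np.sinh(beta * np.sqrt(1 - x[inside] ** 2)) / np.sinh(beta)
    return y

def omega_hat_closed(v):
    g = np.sqrt(beta ** 2 - (2 * np.pi * v) ** 2)
    return np.pi * beta / np.sinh(beta) * bessel_i1(g) / g

def omega_hat_quad(v, n=20001):            # plain Riemann sum on [-1, 1]
    x = np.linspace(-1.0, 1.0, n)
    return np.sum(omega_sinh(x) * np.cos(2 * np.pi * v * x)) * (x[1] - x[0])

v = m1 / (2 * sigma1)                      # the argument used in Theorem 4.1
print(omega_hat_closed(v), omega_hat_quad(v))   # the two values agree
```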

Now we show that the error of the NNFFT with two \(\sinh\)-type window functions (4.1) has exponential decay with respect to the truncation parameters m1 and m2.

Theorem 4.1

Let the nonharmonic bandwidth \(N\in \mathbb N\) with N ≫ 1 be given. Further let \(N_{1} = \sigma _{1} N \in 2 \mathbb N\) with \(\sigma _{1} \in \left[\frac {5}{4}, 2 \right]\) be given. For fixed \(m_{1} \in \mathbb N \setminus \{1\}\) with \(2m_{1} \ll N_{1}\), let \(N_{2} = \sigma _{2} (N_{1} + 2m_{1})\in 2 \mathbb N\) with \(\sigma _{2} \in \left[\frac {5}{4}, 2 \right]\). For \(m_{2} \in \mathbb N \setminus \{1\}\) with \(2m_{2} \le \left(1 - \frac {1}{\sigma _{1}} \right) N_{2}\), let \(\varphi _{\sinh ,1}\) and \(\varphi _{\sinh ,2}\) be the \(\sinh\)-type window functions (4.1). Assume that \(m_{2} \ge m_{1}\). Let \(x_{j} \in \left[ - \frac {1}{2}, \frac {1}{2} \right]\), \(j \in \mathcal {I}_{M_{2}}\), be arbitrary spatial nodes and let \(f_{k} \in \mathbb C\), \(k \in \mathcal {I}_{M_{1}}\), be arbitrary coefficients. Let a > 1 be the constant (2.6).

Then for the exponential sum (1.1) with arbitrary frequencies \(v_{k}\in \left[- \frac {1}{2a}, \frac {1}{2a} \right]\), \(k \in \mathcal {I}_{M_{1}}\), the error of the NNFFT with the \(\sinh\)-type window functions (4.1) can be estimated in the form

$$\max\limits_{j\in \mathcal{I}_{M_{2}}} \bigg| f(x_{j}) - \frac{s_{1}(Nx_{j})}{{\hat\varphi}_{\sinh,1}(Nx_{j})}\bigg| \le \max\limits_{x \in [-1/2, 1/2]} \bigg| f(x) - \frac{s_{1}(Nx)}{{\hat\varphi}_{\sinh,1}(Nx)}\bigg| \le E(\varphi_{\sinh}) \sum\limits_{k\in \mathcal{I}_{M_{1}}} |f_{k}|$$

with the constant

$$E(\varphi_{\sinh}) := \left(24 m_{1}^{3/2} + 10\right) {\mathrm e}^{- 2 \pi m_{1} \sqrt{1- 1/\sigma_{1}}} + \left(24 m_{2}^{3/2} + 10\right) \frac{2 N_{1} a}{\sqrt{2 m_{1} \pi}}\ {\mathrm e}^{2 \pi m_{1} (1 - \sqrt{1 - 1/\sigma_{1}} - 1/(2\sigma_{1}))}\ {\mathrm e}^{- 2 \pi m_{2} \sqrt{1- 1/\sigma_{2}}} .$$
(4.2)

Proof

By Theorem 3.5 we have to estimate the general \(C(\mathbb T)\)-error constants \(E_{\sigma _{j}}(\varphi _{j})\), j = 1, 2, and \({\hat \varphi }_{1}\big (\frac {N}{2}\big )\) in (3.11) for the \(\sinh\)-type window functions (4.1).

Applying Theorem 3.4, we obtain by the same technique as in [15, Theorem 5.6] that

$$E_{\sigma_{j}}(\varphi_{\sinh,j}) \le (24 m_{j}^{3/2} + 10) {\mathrm e}^{- 2 \pi m_{j} \sqrt{1- 1/\sigma_{j}}} , \quad j=1, 2.$$
(4.3)

Now we estimate \({\hat \varphi }_{\sinh ,1}\big (\frac {N}{2}\big )\). Using the scaling property of the Fourier transform, by (2.1) we obtain

$$\begin{array}{@{}rcl@{}} {\hat \varphi}_{\sinh,1}\left(\!\frac{N}{2}\!\right) &=& \frac{m_{1}}{N_{1}} {\hat \omega}_{\sinh,1}\left(\!\frac{m_{1} N}{2 N_{1}}\!\right) = \frac{m_{1}}{N_{1}} {\hat \omega}_{\sinh,1}\left(\!\frac{m_{1}}{2 \sigma_{1}}\!\right)\\ &=& \frac{\pi m_{1} \beta_{1}}{N_{1} \sinh \beta_{1}} \left(\!{\beta_{1}^{2}} - \frac{\pi^{2} {m_{1}^{2}}}{{\sigma_{1}^{2}}}\!\right)^{-1/2} I_{1}\left(\!\sqrt{{\beta_{1}^{2}} - \frac{\pi^{2} {m_{1}^{2}}}{{\sigma_{1}^{2}}}} \right)\\ &=& \frac{m_{1} \pi}{N_{1} \sinh \beta_{1}} \left(\!1 - \frac{1}{2 \sigma_{1}}\!\right)\left(\!1 - \frac{1}{\sigma_{1}}\!\right)^{-1/2} I_{1}\left(\!2 \pi m_{1} \sqrt{1-\frac{1}{\sigma_{1}}} \right) , \end{array}$$

where we have used the equality

$$\bigg(\!{\beta_{1}^{2}} - \frac{\pi^{2} {m_{1}^{2}}}{{\sigma_{1}^{2}}}\!\bigg)^{1/2} = 2\pi m_{1} \Bigg(\! \bigg(\!1- \frac{1}{2\sigma_{1}}\!\bigg)^{2} - \frac{1}{4 {\sigma_{1}^{2}}}\Bigg)^{1/2} = 2 \pi m_{1} \sqrt{1 - \frac{1}{\sigma_{1}}} .$$

From m1 ≥ 2 and \(\sigma _{1} \ge \frac {5}{4}\), it follows that

$$2 \pi m_{1} \sqrt{1 - \frac{1}{\sigma_{1}}} \ge 4 \pi \sqrt{1 - \frac{1}{\sigma_{1}}} \ge x_{0} := \frac{4 \pi}{\sqrt 5} .$$

By the inequality for the modified Bessel function I1 (see [15, Lemma 3.3]) it holds

$$I_{1}(x) \ge \sqrt{x_{0}}\ {\mathrm e}^{-x_{0}} I_{1}(x_{0})\ x^{-1/2}\ {\mathrm e}^{x} > \frac{2}{5}\ x^{-1/2}\ {\mathrm e}^{x} , \quad x \ge x_{0} .$$

Thus, we obtain

$${\hat \varphi}_{\sinh,1}\bigg(\!\frac{N}{2}\!\bigg) \ge \frac{\sqrt{2 m_{1} \pi}}{5 N_{1} \sinh \beta_{1}} \bigg(\!1- \frac{1}{2\sigma_{1}}\!\bigg)\bigg(\!1- \frac{1}{\sigma_{1}}\!\bigg)^{-3/4} {\mathrm e}^{2 \pi m_{1} \sqrt{1 - 1/\sigma_{1}}} .$$

By the simple inequality

$$\sinh \beta_{1} < \frac{1}{2}\ {\mathrm e}^{\beta_{1}} = \frac{1}{2}\ {\mathrm e}^{2 \pi m_{1} (1- 1/(2\sigma_{1}))} ,$$

we conclude that

$${\hat \varphi}_{\sinh,1}\left(\!\frac{N}{2}\!\right) \ge \frac{2 \sqrt{2 m_{1} \pi}}{5 N_{1}} \left(\!1- \frac{1}{2\sigma_{1}}\!\right)\left(\!1- \frac{1}{\sigma_{1}}\!\right)^{-3/4}\ {\mathrm e}^{2 \pi m_{1} (\sqrt{1 - 1/\sigma_{1}} - 1 + 1/(2\sigma_{1}))}$$

and hence

$$\frac{a}{{\hat \varphi}_{\sinh,1}(N/2)} \le \frac{5 N_{1} a}{2 \sqrt{2 m_{1} \pi}} \bigg(\!1- \frac{1}{2\sigma_{1}}\!\bigg)^{-1}\bigg(\!1- \frac{1}{\sigma_{1}}\!\bigg)^{3/4} {\mathrm e}^{-2 \pi m_{1} (\sqrt{1 - 1/\sigma_{1}} - 1 + 1/(2\sigma_{1}))} .$$
(4.4)

Applying Theorem 3.5, we estimate the error of the NNFFT with two \(\sinh\)-type window functions (4.1). By (4.3) and (4.4) we obtain the inequality

$$\begin{array}{@{}rcl@{}} E_{\sigma_{1}}(\varphi_{\sinh,1}) &+& \frac{a}{{\hat \varphi}_{\sinh,1}(N/2)} E_{\sigma_{2}}(\varphi_{\sinh,2}) \le (24 m_{1}^{3/2} + 10) {\mathrm e}^{- 2 \pi m_{1} \sqrt{1- 1/\sigma_{1}}}\\ &+& (24 m_{2}^{3/2} + 10) \frac{2 N_{1} a}{\sqrt{2 m_{1} \pi}} {\mathrm e}^{2 \pi m_{1} (1 - \sqrt{1 - 1/\sigma_{1}} - 1/(2\sigma_{1}))} {\mathrm e}^{- 2 \pi m_{2} \sqrt{1- 1/\sigma_{2}}} , \end{array}$$

since it holds

$$\frac{5}{2} \left(\!1- \frac{1}{2\sigma_{1}}\!\right)^{-1}\left(\!1- \frac{1}{\sigma_{1}}\!\right)^{3/4} \le \frac{5}{3}\sqrt[4]{2} < 2 , \quad \sigma_{1} \in \left[\frac{5}{4}, 2 \right] .$$

This completes the proof. 

Example 4.2

Now we visualize the result of Theorem 4.1. To this end, we fix N = 1200 and consider m1 ∈{2,…,8} and σ1 ∈{1.25,1.5,2}. In Fig. 1 the error bound (4.2) is depicted for several choices of m2 ≥ m1 and σ2 ≥ σ1. Clearly, the error bounds (4.2) decrease for increasing truncation parameters and oversampling factors, respectively. Moreover, we observe that the results improve when choosing σ2 > σ1, cf. Fig. 1c, and are best for m2 > m1, cf. Fig. 1a. Besides, we remark that the choices m2 < m1 or σ2 < σ1 produce the same results as the equality setting, so we omitted these tests.

Fig. 1: Error bound (4.2) (dashed) for the NNFFT with \(\sinh\)-type window functions for \(N=1200\), \(m_{1}\in \{2,\dots ,8\}\) and \(\sigma_1\in \{1.25,1.5,2\}\). Part (b) additionally depicts the relative error (4.5) (solid) using Kaiser–Bessel window functions

Therefore, we recommend using truncation parameters m2 > m1 and oversampling factors σ2 ≥ σ1. For the choice of m1 and σ1, we refer to previous works concerning the NFFT, e.g., [15, 16].
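To put numbers behind this recommendation, the error constant of Theorem 4.1 can be evaluated directly by combining (4.3) for j = 1, 2 with the factor (4.4). The following sketch does this for the setting of Example 4.2; the concrete parameter choices are assumptions.

```python
import numpy as np

# Hedged sketch: evaluate the NNFFT error bound of Theorem 4.1, i.e. (4.3)
# for j = 1 plus the product of (4.4) with (4.3) for j = 2, in the setting
# of Example 4.2 (N = 1200). The parameter choices are assumptions.

def bound(N, m1, m2, sigma1, sigma2):
    N1 = sigma1 * N
    a = 1 + 2 * m1 / N1                       # constant (2.6)
    E1 = (24 * m1 ** 1.5 + 10) * np.exp(-2 * np.pi * m1 * np.sqrt(1 - 1 / sigma1))
    E2 = (24 * m2 ** 1.5 + 10) * np.exp(-2 * np.pi * m2 * np.sqrt(1 - 1 / sigma2))
    factor = (2 * N1 * a / np.sqrt(2 * m1 * np.pi)
              * np.exp(2 * np.pi * m1 * (1 - np.sqrt(1 - 1 / sigma1)
                                         - 1 / (2 * sigma1))))
    return E1 + factor * E2

N = 1200
for m in range(2, 9):                         # m1 = m2 = m as in Fig. 1b
    print(m, bound(N, m, m, 2.0, 2.0))        # values decay exponentially in m
```

The printed values reproduce the exponential decay in the truncation parameter visible in Fig. 1.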

Additionally, we aim to compare these theoretical bounds with the errors obtained by the NNFFT. For this purpose, we introduce the relative error

$$\bigg(\sum\limits_{k \in \mathcal{I}_{M_{1}}} |f_{k}|\bigg)^{-1} \max\limits_{j \in \mathcal{I}_{M_{2}}} \bigg| f(x_{j}) - \frac{s_{1}(N x_{j})}{{\hat \varphi}_{\sinh,1}(N x_{j})}\bigg| ,$$
(4.5)

since by Theorem 4.1 it holds

$$\left({\sum}_{k \in \mathcal{I}_{M_{1}}} |f_{k}|\right)^{-1} \max_{j \in \mathcal{I}_{M_{2}}} \bigg| f(x_{j}) - \frac{s_{1}(N x_{j})}{{\hat \varphi}_{\sinh,1}(N x_{j})}\bigg| \le E(\varphi_{\sinh}) .$$

Thus, we choose random nodes \(x_{j} \in \left[-\frac 12, \frac 12 \right]\), \(j \in \mathcal {I}_{M_{2}}\), and \(v_{k} \in \left[- \frac {1}{2a}, \frac {1}{2a}\right]\), \(k \in \mathcal {I}_{M_{1}}\), with \(a = 1 + \frac {2m_{1}}{N_{1}}\), as well as random coefficients \(f_{k}\in \mathbb C\), \(k \in \mathcal {I}_{M_{1}}\), and compute the values (1.2) once directly and once rapidly using the NNFFT. Due to the randomness of the given data, this test is repeated one hundred times and afterwards the maximum error over all repetitions is computed. The errors (4.5) for the parameter choice M1 = 2400 and M2 = 1600 are displayed in Fig. 1b.
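The reference values in this comparison are the sums (1.2) evaluated directly, at a cost of O(M1 M2) operations. A minimal sketch of this direct evaluation, with sizes matching the experiment (an assumed setup, not the NNFFT itself), reads:

```python
import numpy as np

# Hedged sketch of the reference computation in Example 4.2: the values
# (1.2) evaluated directly (O(M1*M2) operations), against which the NNFFT
# output is compared in the relative error (4.5). Sizes are assumptions.

rng = np.random.default_rng(0)
N, M1, M2 = 1200, 2400, 1600
sigma1, m1 = 2.0, 4
N1 = int(sigma1 * N)
a = 1 + 2 * m1 / N1                           # constant (2.6)

x = rng.uniform(-0.5, 0.5, M2)                # spatial nodes
v = rng.uniform(-0.5 / a, 0.5 / a, M1)        # frequencies
f = rng.standard_normal(M1) + 1j * rng.standard_normal(M1)

# f(x_j) = sum_k f_k e^{-2 pi i N v_k x_j}  -- direct evaluation of (1.2)
F = np.exp(-2j * np.pi * N * np.outer(x, v)) @ f

# the relative error (4.5) would then be computed as
# np.max(np.abs(F - F_nnfft)) / np.sum(np.abs(f))
print(F.shape)
```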

Unfortunately, the current release NFFT 3.5.3 of the software package [9] is not yet designed for the use of parameters m1 ≠ m2 and σ1 ≠ σ2. Therefore, we can only handle the setting m1 = m2 and σ1 = σ2 in Fig. 1b. Moreover, the \(\sinh\)-type window function is currently not implemented in the software package [9]. Thus, we use two standard window functions instead, namely the Kaiser–Bessel window functions, which were shown in [15] to be closely related to the \(\sinh\)-type windows. Since the results in Fig. 1 show great promise, these features might be part of future releases.

5 Approximation of sinc function by exponential sums

Since we aim to present an interesting signal processing application of the NNFFT in the final Section 6, we now study the approximation of the function sinc(Nπx), x ∈ [− 1,1], by an exponential sum (1.1).

In [3] the exponential sum (1.1) is used for a local approximation of a bandlimited function F of the form

$$F(x) := \int_{-1/2}^{1/2} w(t)\ {\mathrm e}^{-2 \pi {\mathrm i}N t x} \;{\mathrm d}t , \quad x \in \mathbb R ,$$
(5.1)

where \(w: \left[-\frac {1}{2}, \frac {1}{2}\right]\to [0, \infty )\) is an integrable function with \(\int_{-1/2}^{1/2} w(t) \;{\mathrm d}t >0\). By the substitution \(s = -Nt\),

$$F(x) = \frac{1}{N} \int_{-N/2}^{N/2} w\left(-\frac{s}{N}\right) {\mathrm e}^{2 \pi {\mathrm i} s x} \;{\mathrm d}s ,$$

we recognize that the Fourier transform of (5.1) is supported on \(\left[-\frac {N}{2}, \frac {N}{2} \right]\), i.e., the function (5.1) is bandlimited with bandwidth N. For instance, for w(t) := 1, \(t \in \left[-\frac {1}{2}, \frac {1}{2} \right]\), we obtain the famous bandlimited sinc function

$$F(x) = {sinc}(\pi N x) := \left\{ \begin{array}{ll} \frac{\sin (\pi N x)}{\pi N x} & \quad x \in \mathbb R \setminus \{0\} ,\\ 1 & \quad x = 0 . \end{array} \right.$$
(5.2)

Now we show that the bandlimited sinc function (5.2) can be uniformly approximated on the interval [− 1,1] by an exponential sum (1.1). We start with the uniform approximation of the sinc function on the interval \(\left[-\frac {1}{2}, \frac {1}{2} \right]\).

Theorem 5.1

Let ε > 0 be a given target accuracy.

Then for sufficiently large \(n\in \mathbb N\) with n ≥ 2N, there exist constants wj > 0 and frequencies \(v_{j}\in \left(- \frac {1}{2}, \frac {1}{2} \right)\), j = 1,…,n, such that for all \(x\in \left[-\frac {1}{2}, \frac {1}{2} \right]\),

$$\bigg| {sinc}(\pi N x) - \sum\limits_{j=1}^{n} w_{j}\ {\mathrm e}^{-2\pi {\mathrm i}Nv_{j} x}\bigg| \le \varepsilon .$$
(5.3)

Proof

This result is a simple consequence of [3, Theorem 6.1]. Introducing \(\nu := \frac {N}{n} \le \frac {1}{2}\), we obtain by substitution \(\tau := - \frac {t}{2 \nu }\) that

$${sinc}(\pi N x) = {\int}_{-1/2}^{1/2}\ {\mathrm e}^{-2 \pi {\mathrm i}N \tau x} \;{\mathrm d}\tau = \frac{1}{2\nu} {\int}_{-\nu}^{\nu}\ {\mathrm e}^{{\mathrm i} \pi n t x} \;{\mathrm d}t .$$

Setting \(y:= n x \in \left[-\frac {n}{2}, \frac {n}{2} \right]\), we have

$${sinc}(\pi \nu y) = \frac{1}{2 \nu} {\int}_{-\nu}^{\nu}\ {\mathrm e}^{{\mathrm i} \pi t y} \;{\mathrm d}t .$$

Then [3, Theorem 6.1] (with \(d = \frac {1}{2}\)) yields the existence of wj > 0 and Θj ∈ (−ν, ν), j = 1,…, n, such that for all \(y \in \left[- \frac {n}{2}-1, \frac {n}{2} + 1 \right]\),

$$\bigg|\frac{1}{2\nu} {\int}_{-\nu}^{\nu}\ {\mathrm e}^{{\mathrm i} \pi t y} \;{\mathrm d}t - \sum\limits_{j=1}^{n} w_{j} {\mathrm e}^{\pi {\mathrm i} {\Theta}_{j} y}\bigg| \le \varepsilon .$$

Hence, for all \(x = \frac {y}{n}\in \left[-\frac {1}{2}, \frac {1}{2} \right]\), we conclude that for \(v_{j} := - \frac {{\Theta }_{j}}{2 \nu } \in \left(- \frac {1}{2}, \frac {1}{2} \right)\), j = 1,…,n,

$$\bigg| \frac{1}{2\nu} {\int}_{-\nu}^{\nu} {\mathrm e}^{{\mathrm i} \pi n t x} \;{\mathrm d}t - \sum\limits_{j=1}^{n} w_{j} {\mathrm e}^{\pi {\mathrm i} n {\Theta}_{j} x}\bigg| = \bigg|{sinc}(\pi N x) - \sum\limits_{j=1}^{n} w_{j} {\mathrm e}^{-2\pi {\mathrm i}Nv_{j} x}\bigg| \le \varepsilon .$$

This completes the proof.

Substituting the variable \(x = \frac {t}{2}\), t ∈ [− 1, 1], the frequencies \(v_{j}=\frac {z_{j}}{2}\), zj ∈ (− 1, 1), and replacing the bandwidth N in (5.3) by 2N, we obtain the following uniform approximation of the sinc function (5.2) on the interval [− 1, 1] (after denoting t by x and zj by vj again):

Corollary 5.2

Let ε > 0 be a given target accuracy.

Then for sufficiently large \(n\in \mathbb N\) with n ≥ 4N, there exist constants wj > 0 and frequencies vj ∈ (− 1,1), j = 1,…, n, such that (5.3) holds for all x ∈ [− 1, 1], i.e.,

$$\bigg| {sinc}(\pi N x) - \sum\limits_{j=1}^{n} w_{j}\ {\mathrm e}^{-\pi {\mathrm i}Nv_{j} x}\bigg| \le \varepsilon , \quad x\in [-1, 1] .$$

In practice, we simplify the approximation procedure of the function sinc(Nπx). Since for fixed \(N \in \mathbb N\), it holds

$${sinc} (N \pi x) = \frac{1}{2} \int_{-1}^{1}\ {\mathrm e}^{- \pi {\mathrm i} N t x} \;{\mathrm d}t , \quad x \in \mathbb R ,$$

the approximation on the interval [− 1, 1] can efficiently be realized by means of the Clenshaw–Curtis quadrature (see [18, pp. 143–153] or [13, pp. 357–364]). Using this procedure for the integrand \(\frac {1}{2} {\mathrm e}^{- \pi {\mathrm i} N t x}\), t ∈ [− 1, 1], with fixed parameter x ∈ [− 1, 1], the Chebyshev points \(z_{k} = {\cos \limits } \frac {k \pi }{n}\in [-1,\;1]\), k = 0,…, n, and the positive coefficients

$$w_{k} = \left\{ \begin{array}{ll} \frac{1}{n} \varepsilon_{n}(k)^{2} {\sum}_{j=0}^{n/2} \varepsilon_{n}(2 j)^{2} \frac{2}{1 - 4 j^{2}} \cos \frac{2jk \pi}{n} & \quad n \in 2 \mathbb N ,\\ \frac{1}{n} \varepsilon_{n}(k)^{2} {\sum}_{j=0}^{(n-1)/2} \varepsilon_{n}(2 j)^{2} \frac{2}{1 - 4 j^{2}} \cos \frac{2jk \pi}{n} & \quad n \in 2 \mathbb N + 1 , \end{array} \right.$$
(5.4)

with \(\varepsilon _{n}(0) = \varepsilon _{n}(n) := \frac {\sqrt 2}{2}\) and εn(j) := 1, j = 1,…, n − 1 (see [13, p. 359]), we obtain

$${sinc}(N \pi x) = \frac{1}{2} {\int}_{-1}^{1} {\mathrm e}^{- \pi {\mathrm i} N t x} \;{\mathrm d}t \approx \sum\limits_{k = 0}^{n} w_{k} {\mathrm e}^{- \pi {\mathrm i} N z_{k} x} .$$

Further the coefficients fulfill the condition (see [13, p. 359])

$$\sum\limits_{k=0}^{n} w_{k} = 1 .$$
(5.5)
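A hedged sketch of this quadrature: the coefficients (5.4) computed literally, the normalization (5.5) checked, and the resulting approximation of sinc(Nπx) evaluated on a grid. The values of N and ν = n/N are assumed test parameters.

```python
import numpy as np

# Hedged sketch: the Clenshaw-Curtis coefficients (5.4) computed naively,
# the normalization (5.5), and the resulting approximation of sinc(N pi x)
# by the displayed quadrature. N and nu are assumed test parameters.

def cc_weights(n):
    eps = np.ones(n + 1)
    eps[0] = eps[n] = np.sqrt(2) / 2           # eps_n(0) = eps_n(n) = sqrt(2)/2
    k = np.arange(n + 1)
    J = n // 2 if n % 2 == 0 else (n - 1) // 2
    w = np.zeros(n + 1)
    for j in range(J + 1):                     # inner sum in (5.4)
        w += eps[2 * j] ** 2 * 2 / (1 - 4 * j ** 2) * np.cos(2 * j * k * np.pi / n)
    return eps ** 2 * w / n

N, nu = 16, 4
n = nu * N
w = cc_weights(n)
z = np.cos(np.arange(n + 1) * np.pi / n)       # Chebyshev points
x = np.linspace(-1, 1, 101)
approx = (w[None, :] * np.exp(-1j * np.pi * N * z[None, :] * x[:, None])).sum(1)
err = np.max(np.abs(np.sinc(N * x) - approx))  # np.sinc(N x) = sinc(N pi x)
print(w.sum(), err)                            # w.sum() = 1 by (5.5)
```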

Then we receive the following error estimate.

Theorem 5.3

Let \(N\in \mathbb N\) and \(n = \nu N \in \mathbb N\) with ν > 0 be given. Let \(z_{k} = {\cos \limits } \frac {k \pi }{n}\in [-1, 1]\), k = 0,…, n, be the Chebyshev points, let wk, k = 0,…, n, denote the coefficients (5.4), and set \(C := \frac {\pi ({\mathrm e}^{2}-1)}{2 {\mathrm e}}\).

Then for all x ∈ [− 1, 1], the approximation error of sinc(Nπx) can be estimated in the form

$$\bigg|{sinc}(N \pi x) - \sum\limits_{k=0}^{n} w_{k} {\mathrm e}^{- \pi {\mathrm i} N z_{k} x}\bigg| \le { \frac{36 (1+\mathrm e^{-2CN})}{35 ({\mathrm e}^{2} -1)}\ \mathrm e^{-N(\nu-C)}} .$$
(5.6)

In other words, the error bound is exponentially decaying if ν > C ≈ 3.69.

Proof

Since the imaginary part of the integrand \(\frac {1}{2} {\mathrm e}^{- \pi {\mathrm i} N t x}\), t ∈ [− 1, 1], is odd, it holds

$$\mathrm{sinc}(N \pi x) = \frac{1}{2} {\int}_{-1}^{1}\ {\mathrm e}^{- \pi {\mathrm i} N t x} \;{\mathrm d}t = \frac{1}{2} {\int}_{-1}^{1} \cos(\pi N t x) \;{\mathrm d}t .$$
(5.7)

Therefore, we apply the Clenshaw–Curtis quadrature to the analytic function \(f(t, x) := \frac {1}{2} \cos \limits (\pi N t x)\), t ∈ [− 1, 1], with fixed parameter x ∈ [− 1, 1]. Note that it holds

$$\sum\limits_{k=0}^{n} w_{k} {\mathrm e}^{- \pi {\mathrm i} N z_{k} x} = \sum\limits_{k=0}^{n} w_{k} \cos (\pi N z_{k} x) + 0$$

by the symmetry properties of the Chebyshev points zk and the coefficients wk, namely zk = −znk and wk = wnk, k = 0,…, n (see [13, p. 359]).

By Eρ with some ρ > 1, we denote the Bernstein ellipse defined by

$$E_{\rho} := \bigg\{z \in \mathbb C: \mathrm{Re}\ z = \frac{1}{2}\bigg(\!\rho + \frac{1}{\rho}\!\bigg) \cos t , \mathrm{Im}\ z = \frac{1}{2}\bigg(\!\rho - \frac{1}{\rho}\!\bigg) \sin t , t\in [0, 2\pi) \bigg\} .$$

Then Eρ has the foci − 1 and 1. For simplicity, we choose ρ = e.

For \(z \in \mathbb C\) and fixed x ∈ [− 1, 1], it holds

$$\bigg| \frac{1}{2} \cos (\pi N x z)\bigg| \le \frac{1}{2} \cosh (\pi N x {\mathrm{Im}}\ z) .$$

For \(z \in \mathbb C\) with Re z = 0 we have

$$\bigg| \frac{1}{2} \cos (\pi N x z)\bigg| = \frac{1}{2} \cosh (\pi N x \mathrm{Im}\ z) .$$

Hence, in the interior of the Bernstein ellipse Ee, the integrand is bounded, since

$$\bigg| \frac{1}{2} \cos (\pi N x z)\bigg| \le \frac{1}{2} \cosh \frac{\pi N x ({\mathrm e}^{2} - 1)}{2 \mathrm e} \le \frac{1}{2} \cosh \frac{\pi N ({\mathrm e}^{2} - 1)}{2 \mathrm e} .$$

Therefore, by [18, p. 146] we obtain the error estimate

$$\bigg|{sinc}(N \pi x) - \sum\limits_{k=0}^{n} w_{k} {\mathrm e}^{- \pi {\mathrm i} N z_{k} x}\bigg| \le \frac{144}{70 ({\mathrm e}^{2} -1)} {\mathrm e}^{-n} \cosh \frac{\pi ({\mathrm e}^{2}-1) N}{2 {\mathrm e}} .$$
(5.8)

By defining \(C := \frac {\pi ({\mathrm e}^{2}-1)}{2 {\mathrm e}}\), the term \(\mathrm e^{-n} \cosh (CN)\) in (5.8) can be rewritten as

$$\mathrm e^{-n} \cosh(CN) = \mathrm e^{-\nu N} \cdot \tfrac 12(\mathrm e^{CN}+\mathrm e^{-CN}) = \tfrac 12 \mathrm e^{-N(\nu-C)} (1+\mathrm e^{-2CN}) .$$

Thus, we end up with (5.6). This completes the proof.
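To put numbers to the bound, the right-hand side of (5.6) can be evaluated for a fixed bandwidth; the values of N and ν below are assumptions.

```python
import math

# Hedged sketch: evaluate the right-hand side of (5.6) for an assumed
# bandwidth N and several nu > C, showing the exponential decay.

C = math.pi * (math.e ** 2 - 1) / (2 * math.e)   # C = 3.6920...
N = 32
bounds = [36 * (1 + math.exp(-2 * C * N)) / (35 * (math.e ** 2 - 1))
          * math.exp(-N * (nu - C)) for nu in (4, 5, 6)]
print(round(C, 4), bounds)                       # bounds decay rapidly in nu
```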

In practice, the coefficients wk in (5.4) can be computed by a fast algorithm, the so-called discrete cosine transform of type I (DCT–I) of length n + 1, where \(n = 2^{t}\), \(t \in \mathbb N\), is a power of two (see [13, Algorithm 6.28 or Algorithm 6.35]). This DCT–I uses the orthogonal cosine matrix of type I

$${\mathbf C}_{n+1}^{\mathrm I} := \sqrt{\frac{2}{n}} \left(\varepsilon_{n}(j) \varepsilon_{n}(k) \cos \frac{jk \pi}{n}\right)_{j,k=0}^{n} .$$
Algorithm 2: Fast computation of the coefficients wk

A similar approach can be found in [7], where a Gauss–Legendre quadrature was applied to obtain explicit coefficients wk for given Legendre points zk. However, the computation of the coefficients wk using Algorithm 2 is more efficient for large n.

Example 5.4

Now we visualize the result of Theorem 5.3. In Fig. 2a the error bound (5.6) is depicted as a function of N for several choices of \(\nu \in \{1,\dots ,5\}\), where n = νN. It clearly demonstrates that ν ≥ 4 is needed to obtain reasonable error bounds.

Fig. 2: Error constant (5.6) (dashed) and maximum error (5.9) (solid) of the approximation of \(\mathrm{sinc}(N\pi x)\), \(x\in[-1,1]\), for different bandwidths \(N=2^{\ell}\), \(\ell =1,\dots ,7\), where \(n=\nu N\), \(\nu \in \{1,\dots ,10\}\), and Chebyshev nodes \(z_{k}\in [-1,1]\), \(k=0,\dots ,n\)

Additionally, we compare the error constant and the maximum approximation error, cf. (5.6). To measure the accuracy we consider a fine evaluation grid \(x_{r}=\frac {2r}{R}\), \(r\in \mathcal {I}_{R}\), with \(R \gg N\), where \(R = 3 \cdot 10^{5}\) is fixed. On this grid we calculate the discrete maximum error

$$\max_{r\in \mathcal{I}_{R}} \bigg| \mathrm{sinc}(\pi N x_{r})- \sum\limits_{k=0}^{n} w_{k}\, {\mathrm e}^{-\pi {\mathrm i}N z_{k} x_{r}} \bigg|$$
(5.9)

for different bandwidths \(N = 2^{\ell}\), \(\ell =3,\dots ,7\). For the parameter n = νN we investigate several choices \(\nu \in \{1,\dots ,10\}\). We compute the coefficients \(w_{k}\) using Algorithm 2. Subsequently, the approximation to the sinc function is computed by means of the NFFT, which is possible since the \(x_{r}\) are equispaced. The results for both the error bound (5.6) and the maximum error (5.9) are displayed in Fig. 2b. It becomes apparent that for increasing oversampling factor ν, the maximum error (5.9) decreases to machine precision for all choices of N. Even for rather large choices of ν (up to 10) the error remains stable, so there is no deterioration for growing ν.

6 Discrete sinc transform

Finally, we present an interesting signal processing application of the NNFFT. If a signal \(h: \left[- \frac {1}{2}, \frac {1}{2} \right] \to \mathbb C\) is to be reconstructed from its equispaced or nonequispaced samples at \(a_{k}\in \left[- \frac {1}{2}, \frac {1}{2} \right]\), then h is often modeled as a linear combination of shifted sinc functions

$$h(x) = \sum\limits_{k \in \mathcal{I}_{L_{1}}} c_{k}\ \mathrm{sinc}\big(N \pi (x-a_{k})\big) , \quad x \in \mathbb R ,$$
(6.1)

with complex coefficients ck. In the following, we propose a fast algorithm for the approximate computation of the discrete sinc transform (see [7, 11])

$$h(b_{\ell}) = \sum\limits_{k \in \mathcal{I}_{L_{1}}} c_{k}\ \mathrm{sinc}\big(N \pi (b_{\ell} - a_{k})\big) , \quad \ell \in \mathcal{I}_{L_{2}} ,$$
(6.2)

where \(b_{\ell } \in \left[- \frac {1}{2}, \frac {1}{2} \right]\) can be equispaced/nonequispaced points.

Such a function (6.1) arises from the application of the famous sampling theorem of Shannon–Whittaker–Kotelnikov (see, e.g., [13, pp. 86–88]). Let \(f\in L_{1}(\mathbb R) \cap C(\mathbb R)\) be bandlimited on \(\left[-\frac {L_{2}}{2}, \frac {L_{2}}{2} \right]\) for some L2 > 0, i.e., the Fourier transform of f is supported on \(\left[-\frac {L_{2}}{2}, \frac {L_{2}}{2} \right]\). Then for \(N \in \mathbb N\) with \(N \ge L_{2}\), the function f is completely determined by its values \(f \left(\frac {k}{N} \right)\), \(k \in \mathbb Z\), and further f can be represented in the form

$$f(x) = \sum\limits_{k\in \mathbb Z} f\Big(\frac{k}{N}\Big)\, \mathrm{sinc}\Big(N \pi\Big(x - \frac{k}{N}\Big)\Big) , \quad x \in \mathbb R ,$$

where the series converges absolutely and uniformly on \(\mathbb R\). By truncation of this series, we obtain the linear combination of shifted sinc functions

$$\sum\limits_{k\in \mathcal{I}_{L_{1}}} f\Big(\frac{k}{N}\Big)\, \mathrm{sinc}\Big(N \pi\Big(x - \frac{k}{N}\Big)\Big) , \quad x \in \mathbb R ,$$

which has the same form as (6.1), when ak are equispaced.
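As a small numerical illustration of this truncation (assuming NumPy; the test function as well as B = 2, N = 4, and the truncation index K = 2000 are ad hoc choices), the function \(f(x) = \mathrm{sinc}(\pi B x)^{2}\) has Fourier transform supported on [−B, B], so with N ≥ 2B it is recovered from its samples \(f(\frac{k}{N})\); truncating the cardinal series to |k| ≤ K leaves an error of order 1/K, since the samples decay like \(1/k^{2}\):

```python
import numpy as np

# Truncated Shannon cardinal series for the bandlimited demo function
# f(x) = sinc(pi*B*x)^2 (Fourier transform supported on [-B, B]), sampled
# at k/N with N >= 2B; all parameter values are ad hoc demo choices.
B, N, K = 2, 4, 2000
x = np.linspace(-0.5, 0.5, 201)
f = np.sinc(B * x) ** 2                       # np.sinc(t) = sin(pi t)/(pi t)

k = np.arange(-K, K + 1)
samples = np.sinc(B * k / N) ** 2             # f(k/N), decaying like 1/k^2
# Truncated cardinal series: sum_k f(k/N) sinc(N*pi*(x - k/N))
recon = np.sinc(np.subtract.outer(N * x, k)) @ samples
assert np.max(np.abs(f - recon)) < 1e-3       # tail error of order 1/K
```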

Since the naive computation of (6.2) requires \(\mathcal O(L_{1}\cdot L_{2})\) arithmetic operations, the aim is to find a more efficient method for the evaluation of (6.2). Up to now, several approaches for a fast computation of the discrete sinc transform (6.2) are known. In [7], the discrete sinc transform (6.2) is realized by applying a Gauss–Legendre quadrature rule to the integral (5.7). The result can then be approximated by means of two NNFFTs with \(\mathcal O((L_{1}+L_{2})\log (L_{1}+L_{2}))\) arithmetic operations. A multilevel algorithm with \(\mathcal {O}(L_{2}\log (1/\delta ))\) arithmetic operations is presented in [11], which is most effective for equispaced points \(a_{k}\) and \(b_{\ell}\) and, as the authors themselves note, is only practical for rather large target evaluation accuracies δ > 0.

In the following, we present a new approach for a fast sinc transform (6.2), where we approximate the function sinc(Nπx) by an exponential sum on the interval [− 1, 1] by means of the Clenshaw–Curtis quadrature as described in Section 5. Let the Chebyshev points \(z_{j} = \cos \limits \frac {j \pi }{n}\), j = 0,…, n, and the coefficients wj defined by (5.4) be given. Utilizing (5.8), for arbitrary ak, \(b_{\ell } \in \left[- \frac {1}{2}, \frac {1}{2} \right]\) we obtain the approximation

$$\mathrm{sinc}\big(N \pi(a_{k} - b_{\ell})\big) \approx \sum\limits_{j=0}^{n} w_{j}\, \mathrm{e}^{-\pi\mathrm i N z_{j} (a_{k} - b_{\ell})} = \sum\limits_{j=0}^{n} w_{j}\, \mathrm{e}^{-\pi\mathrm i N z_{j} a_{k}}\, \mathrm{e}^{\pi\mathrm i N z_{j} b_{\ell}} .$$

Inserting this approximation into (6.2) yields

$$\begin{array}{@{}rcl@{}} h_{\ell} &:=& \sum\limits_{k\in \mathcal{I}_{L_{1}}} c_{k} \sum\limits_{j=0}^{n} w_{j}\, \mathrm{e}^{-\pi\mathrm i N z_{j} a_{k}}\, \mathrm{e}^{\pi\mathrm i N z_{j} b_{\ell}} \\ &=& \sum\limits_{j=0}^{n} w_{j} \left(\sum\limits_{k\in \mathcal{I}_{L_{1}}} c_{k}\, \mathrm{e}^{-\pi\mathrm i N z_{j} a_{k}}\right) \mathrm{e}^{\pi\mathrm i N z_{j} b_{\ell}} , \quad \ell\in \mathcal{I}_{L_{2}} . \end{array}$$
(6.3)

If ε > 0 denotes a target accuracy, then we choose \(n = 2^{t}\), \(t \in \mathbb N \setminus \{1\}\), such that by Theorem 5.3 it holds

$${ \frac{36 (1+\mathrm e^{-2CN})}{35 ({\mathrm e}^{2} -1)}\, \mathrm e^{-N(\nu-C)}} < \varepsilon , \quad \nu > C \approx 3.69 .$$

For example, in the case \(\varepsilon = 10^{-8}\) we obtain \(n \ge 4N\) for \(N \ge 54\).
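This parameter choice can be sketched as follows (plain Python; `sinc_bound` and `min_nu` are ad hoc helper names, not from the paper). The sketch reproduces the statement that, for \(\varepsilon = 10^{-8}\), the choice n = 4N satisfies the displayed bound for N ≥ 54 but not yet for N = 53.

```python
import math

# Evaluate the error bound of Theorem 5.3 and invert it for the
# oversampling factor nu; sinc_bound and min_nu are ad hoc names.
C = math.pi * (math.e**2 - 1) / (2 * math.e)   # C = 3.6920...

def sinc_bound(N, nu):
    pref = 36 * (1 + math.exp(-2 * C * N)) / (35 * (math.e**2 - 1))
    return pref * math.exp(-N * (nu - C))

def min_nu(N, eps):
    # solve pref * exp(-N * (nu - C)) = eps for nu
    pref = 36 * (1 + math.exp(-2 * C * N)) / (35 * (math.e**2 - 1))
    return C + math.log(pref / eps) / N

# For eps = 1e-8, nu = 4 (i.e. n = 4N) suffices for N >= 54 but not N = 53:
assert sinc_bound(54, 4) < 1e-8 <= sinc_bound(53, 4)
assert min_nu(54, 1e-8) <= 4
```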

We recognize that the term inside the brackets of (6.3) is an exponential sum of the form (1.2), which can be computed by means of an NNFFT. The resulting outer sum is of the same form, so it can also be computed by means of an NNFFT. Thus, as in [7] we may compute the discrete sinc transform (6.2) by means of an NNFFT, a multiplication by the precomputed coefficients \(w_{j}\), and another NNFFT afterwards. Hence, the fast sinc transform, which is an application of the NNFFT, can be summarized as follows.

Algorithm 3 (Fast sinc transform)

If we use the same NNFFT in both steps (with the window functions φj, truncation parameters mj, and oversampling factors σj for j = 1,2), Algorithm 3 requires all in all

$$\mathcal O(N\log N + L_{1}+L_{2}+2n)$$

arithmetic operations.
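The three steps of Algorithm 3 can be sketched as follows (assuming NumPy), with the caveat that both NNFFTs are replaced by dense matrix products for clarity, so the sketch shows only the structure of the algorithm, not its fast complexity; all sizes and node sets are arbitrary demo choices, and the halved Clenshaw–Curtis weights are recomputed inline via their classical closed formula (an assumption: that these match the \(w_j\) of (5.4)).

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 4, 40                                  # bandwidth and quadrature size
L1, L2 = 16, 20
a = rng.uniform(-0.5, 0.5, L1)                # nonequispaced source nodes a_k
b = rng.uniform(-0.5, 0.5, L2)                # nonequispaced target nodes b_l
c = rng.standard_normal(L1) + 1j * rng.standard_normal(L1)

# Precomputation: Chebyshev points z_j and halved Clenshaw-Curtis weights
j = np.arange(n + 1)
k = np.arange(1, n // 2 + 1)
bk = np.where(k == n // 2, 1.0, 2.0)
cj = np.where((j == 0) | (j == n), 0.5, 1.0)
inner = np.cos(2 * np.pi * np.outer(j, k) / n) @ (bk / (4.0 * k**2 - 1.0))
w = 0.5 * (2.0 * cj / n) * (1.0 - inner)      # sum(w) = 1
z = np.cos(j * np.pi / n)

# Step 1 (an NNFFT in the real algorithm): g_j = sum_k c_k e^{-pi i N z_j a_k}
g = np.exp(-1j * np.pi * N * np.outer(z, a)) @ c
# Step 2: multiply by the precomputed weights
gw = w * g
# Step 3 (second NNFFT): h_l = sum_j w_j g_j e^{+pi i N z_j b_l}
h_fast = np.exp(1j * np.pi * N * np.outer(b, z)) @ gw

# Reference: direct discrete sinc transform (6.2)
h_direct = np.sinc(N * np.subtract.outer(b, a)) @ c
assert np.max(np.abs(h_direct - h_fast)) < 1e-8 * np.sum(np.abs(c))
```

In a real implementation the two dense products would be replaced by an NNFFT and an adjoint-type NNFFT, which is where the speedup comes from.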

Considering the discrete sinc transform (6.2), we can deal with the special sums of the form

$$h\Big(\frac{\ell}{N}\Big)= \sum\limits_{k\in \mathcal{I}_{L_{1}}} c_{k}\, \mathrm{sinc}\Big(N\pi \Big(a_{k}-\frac{\ell}{N}\Big)\Big) ,\quad \ell\in \mathcal{I}_{N},$$

i.e., we are given equispaced points \(b_{\ell } = \frac {\ell }{N}\) with L2 = N. In this special case, we simply obtain an adjoint NFFT instead of the NNFFT in step 3 of Algorithm 3. Therefore, the computational cost of Algorithm 3 reduces to \(\mathcal O(N\log N + L_{1} + n)\). In the case where \(a_{k} = \frac {k}{L_{1}}\), \(k\in \mathcal {I}_{L_{1}}\), the NNFFT in step 1 of Algorithm 3 naturally turns into an NFFT. Clearly, in this case the same amount of arithmetic operations is needed as in the first special case. If both sets of nodes \(a_{k}\) and \(b_{\ell}\) are equispaced, then the computational cost reduces even further to \(\mathcal O(N\log N + n)\). Hence, these modifications are automatically included in our fast sinc transform.

A quite similar approach was already developed in [2] for the computation of the Coulomb interaction between point masses, where the main idea is to use two different quadrature rules to approximate the given problem. Then the computation can be done by means of NNFFTs, i.e., one obtains a 3-step method analogous to Algorithm 3.

Now we study the error of the fast sinc transform in Algorithm 3, which is measured in the form

$$\max_{\ell \in \mathcal{I}_{L_{2}}}\ | h(b_{\ell})-\tilde h_{\ell} | .$$
(6.5)

Theorem 6.1

Let \(N\in \mathbb N\) with N ≫ 1 and \(L_{1}, L_{2} \in 2 \mathbb N\) be given. Let \(N_{1} = \sigma _{1} N \in 2 \mathbb N\) with σ1 > 1. For fixed \(m_{1} \in \mathbb N \setminus \{1\}\) with \(2m_{1} \le N_{1}\), let \(N_{2} = \sigma_{2}(N_{1} + 2m_{1})\) with σ2 > 1. For \(m_{2} \in \mathbb N \setminus \{1\}\) with \(2m_{2} \le \big (1 - \frac {1}{\sigma _{1}}\big ) N_{2}\), let φ1 and φ2 be the window functions of the form (2.5). Let \(a_{k}\), \(b_{\ell } \in \left[ - \frac {1}{2}, \frac {1}{2} \right]\) with \(k \in \mathcal {I}_{L_{1}}\), \(\ell \in \mathcal {I}_{L_{2}}\) be arbitrary points and let \(c_{k} \in \mathbb C\), \(k \in \mathcal {I}_{L_{1}}\), be arbitrary coefficients. Let a > 1 be the constant (2.6). For a given target accuracy ε > 0, the number \(n = 2^{t}\), \(t \in \mathbb N \setminus \{1\}\), is chosen such that

$${ \frac{36 (1+\mathrm e^{-2CN})}{35 ({\mathrm e}^{2} -1)}\, \mathrm e^{-N(\nu-C)}} < \varepsilon , \quad { \nu > C \approx 3.69} .$$
(6.6)

Then the error of the fast sinc transform can be estimated by

$$\begin{array}{@{}rcl@{}} \max\limits_{\ell \in \mathcal{I}_{L_{2}}} \big| h(b_{\ell})- {\tilde h}_{\ell} \big| &\le& \Bigg(\varepsilon + 2 \bigg[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\big(\frac{N}{2}\big)} E_{\sigma_{2}}(\varphi_{2})\bigg] \\ & & + \bigg[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\big(\frac{N}{2}\big)} E_{\sigma_{2}}(\varphi_{2})\bigg]^{2} \Bigg) \sum\limits_{k\in \mathcal{I}_{L_{1}}} |c_{k}| , \end{array}$$
(6.7)

where \(E_{\sigma _{j}}(\varphi _{j})\) for j = 1,2, are the general \(C(\mathbb T)\)-error constants of the form (3.7). If it holds

$$\begin{array}{@{}rcl@{}} E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\left(\frac{N}{2}\right)} E_{\sigma_{2}}(\varphi_{2}) \le 1 , \end{array}$$
(6.8)

one can use the simplified estimate

$$\begin{array}{@{}rcl@{}} \max\limits_{\ell \in \mathcal{I}_{L_{2}}} | h(b_{\ell})- {\tilde h}_{\ell} | \le \left(\varepsilon + { 3} E_{\sigma_{1}}(\varphi_{1}) + \frac{{ 3}a}{ {\hat \varphi}_{1}\left(\frac{N}{2}\right)} E_{\sigma_{2}}(\varphi_{2})\right) \sum\limits_{k\in \mathcal{I}_{L_{1}}} |c_{k}| . \end{array}$$
(6.9)

Proof

By (6.3), the value \(h_{\ell}\) is an approximation of \(h(b_{\ell})\). Since \(a_{k}\), \(b_{\ell }\in \left[- \frac {1}{2}, \frac {1}{2} \right]\), it holds by (5.8) and (6.6) that

$$\bigg| \mathrm{sinc}\big(\pi N (a_{k} - b_{\ell})\big) - \sum\limits_{j=0}^{n} w_{j}\, {\mathrm e}^{-\pi {\mathrm i}Nz_{j} (a_{k} - b_{\ell})}\bigg| \le \varepsilon .$$

Hence, we conclude that

$$|h(b_{\ell}) - h_{\ell}| \le \varepsilon \sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| , \quad \ell \in \mathcal{I}_{L_{2}} .$$
(6.10)

After step 1 of Algorithm 3, the error of the NNFFT (with the window functions φ1 and φ2) can be estimated by Theorem 3.5 in the form

$$|g_{j} - {\tilde g}_{j}| \le \left[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\left(\frac{N}{2}\right)} E_{\sigma_{2}}(\varphi_{2})\right] \sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| , \quad j = 0,\ldots, n .$$

Using (5.5), step 2 of Algorithm 3 generates the error

$$\begin{array}{@{}rcl@{}} |{\hat h}_{\ell} - h_{\ell} | &\le& \sum\limits_{j=0}^{n} w_{j} |g_{j} - {\tilde g}_{j}| \le \left(\sum\limits_{j=0}^{n} w_{j}\right) \left[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\left(\frac{N}{2}\right)} E_{\sigma_{2}}(\varphi_{2})\right] \sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| \\ &=& \left[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\left(\frac{N}{2}\right)} E_{\sigma_{2}}(\varphi_{2})\right] \sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| . \end{array}$$
(6.11)

After step 3 of Algorithm 3, the error of the NNFFT (with the same window functions φ1 and φ2) can be estimated by Theorem 3.5 in the form

$$|{\hat h}_{\ell} - {\tilde h}_{\ell}| \le \left[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\left(\frac{N}{2}\right)} E_{\sigma_{2}}(\varphi_{2})\right] \sum\limits_{j=0}^{n} w_{j} |{\tilde g}_{j}| , \quad \ell \in \mathcal{I}_{L_{2}} .$$

Using the triangle inequality, we obtain

$$\begin{array}{@{}rcl@{}} |{\tilde g}_{j}| &\le& |g_{j}| + |g_{j} - {\tilde g}_{j}| \le \sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| + |g_{j} - {\tilde g}_{j}|\\ &\le& \sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| + \left[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\left(\frac{N}{2}\right)} E_{\sigma_{2}}(\varphi_{2})\right]\sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| , \quad j=0,\ldots,n \end{array}$$

such that by (5.5)

$$\begin{array}{@{}rcl@{}} \big|{\hat h}_{\ell} - {\tilde h}_{\ell}\big| &\le& \bigg[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\big(\frac{N}{2}\big)} E_{\sigma_{2}}(\varphi_{2})\bigg]\bigg(\sum\limits_{j=0}^{n} w_{j}\bigg)\sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| \\ & & + \bigg[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\big(\frac{N}{2}\big)} E_{\sigma_{2}}(\varphi_{2})\bigg]^{2} \bigg(\sum\limits_{j=0}^{n} w_{j}\bigg)\sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| \\ &=& \bigg[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\big(\frac{N}{2}\big)} E_{\sigma_{2}}(\varphi_{2})\bigg] \sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| \\ & & + \bigg[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\big(\frac{N}{2}\big)} E_{\sigma_{2}}(\varphi_{2})\bigg]^{2} \sum\limits_{k \in \mathcal{I}_{L_{1}}} |c_{k}| . \end{array}$$
(6.12)

Thus, the error of Algorithm 3 can be estimated by

$$| h(b_{\ell}) - {\tilde h}_{\ell}| \le |h(b_{\ell}) - h_{\ell}| + |h_{\ell} - {\hat h}_{\ell}| + |{\hat h}_{\ell} - {\tilde h}_{\ell}| , \quad \ell \in \mathcal{I}_{L_{2}} .$$

From (6.10)–(6.12), the estimate (6.7) follows. If (6.8) holds, we have

$$\begin{array}{@{}rcl@{}} \bigg[E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\big(\frac{N}{2}\big)} E_{\sigma_{2}}(\varphi_{2})\bigg]^{2} &\le& E_{\sigma_{1}}(\varphi_{1}) + \frac{a}{ {\hat \varphi}_{1}\big(\frac{N}{2}\big)} E_{\sigma_{2}}(\varphi_{2}) \end{array}$$

and therefore we obtain the simplified estimate (6.9). This completes the proof.

Thus, the error of Algorithm 3 for the fast sinc transform mostly depends on the target accuracy ε of the precomputation and on the general \(C(\mathbb T)\)-error constants \(E_{\sigma _{j}}(\varphi _{j})\), j = 1,2, of the window functions φj, j = 1,2, see Theorem 3.5.

Example 6.2

Next we verify the accuracy of our fast sinc transform in Algorithm 3. To this end, we choose random nodes \(a_{k}\in \left[-\frac 12,\frac 12 \right]\), equispaced points \(b_{\ell } = \frac {\ell }{N}\) with \(\ell \in \mathcal {I}_{N}\), as well as random coefficients \(c_{k}\in \mathbb C\), \(k\in \mathcal {I}_{L_{1}}\), and compute the discrete sinc transform (6.2) directly as well as its approximation (6.4) by means of the fast sinc transform. Subsequently, we compute the maximum error (6.5). Due to the randomness of the given values this test is repeated one hundred times and afterwards the maximum error over all repetitions is computed.

In this experiment we choose different bandwidths \(N = 2^{k}\), \(k=5,\dots ,13\), and without loss of generality we use \(L_{1}=\frac N2\). We apply Algorithm 3 using the weights \(w_{j}\) computed by means of Algorithm 2 and the Chebyshev points \(z_{j} = {\cos \limits } \frac {j \pi }{n}\), j = 0,…,n. Therefore, we only have to examine the parameter choice of n ≥ 4N. To this end, we compare the results for several choices, namely for n ∈{4N,6N,8N}. The corresponding results can be found in Fig. 3. We see that for large N there is almost no difference between the different choices of n. However, we point out that a larger choice of n considerably increases the computational cost of Algorithm 3. Therefore, it is recommended to use the smallest possible choice n = 4N. Compared to [7], the same approximation errors are obtained, but with a more efficient precomputation of the weights.

Fig. 3: Maximum error (6.5) for several bandwidths \(N = 2^{k}\), \(k=5,\dots ,13\), shown for n = νN, ν ∈{4,6,8}, using the coefficients \(w_{j}\) obtained by Algorithm 2