1 Introduction

The classical Whittaker–Kotelnikov–Shannon sampling theorem plays a fundamental role in signal processing, since it describes the close relation between a bandlimited function and its equidistant samples. More precisely, this sampling theorem states that any function \(f \in L^2({\mathbb {R}})\) with bandwidth \(\le \frac{N}{2}\), i.e., the support of the Fourier transform

$$\begin{aligned} {{\hat{f}}}(v) :=\int _{{\mathbb {R}}} f(t)\,{\mathrm e}^{-2 \pi {\mathrm i}t v}\,{\mathrm d}t, \quad v \in {\mathbb {R}}, \end{aligned}$$

is contained in \(\big [- \frac{N}{2},\frac{N}{2}\big ]\), can be recovered from its samples \(f\big (\frac{\ell }{L}\big )\), \(\ell \in {\mathbb {Z}}\), with \(L \ge N\), via

$$\begin{aligned} f(t) = \sum _{\ell \in {\mathbb {Z}}} f\big (\tfrac{\ell }{L}\big )\,\mathrm {sinc}\big (L\pi \,\big (t - \tfrac{\ell }{L}\big )\big ), \quad t \in {\mathbb {R}}, \end{aligned}$$
(1.1)

with the \(\mathrm {sinc}\) function

$$\begin{aligned} \mathrm {sinc}\,x :=\left\{ \begin{array}{ll} \frac{\sin x}{x} &{} \quad x \in {\mathbb {R}} \setminus \{0\}, \\ 1 &{} \quad x = 0. \end{array} \right. \end{aligned}$$

Unfortunately, the practical use of this sampling theorem is limited, since it requires infinitely many samples, which are not available in practice. Further, the \(\mathrm {sinc}\) function decays very slowly, so that the Shannon sampling series

$$\begin{aligned} \sum _{\ell \in {\mathbb {Z}}} f\big (\tfrac{\ell }{L}\big )\,\mathrm {sinc}\big (L\pi \,\big (t - \tfrac{\ell }{L}\big )\big ), \quad t \in {\mathbb {R}}, \end{aligned}$$

has rather poor convergence. Moreover, in the presence of noise or quantization in the samples \(f\big (\frac{\ell }{L}\big )\), \(\ell \in {\mathbb {Z}}\), the convergence of the Shannon sampling series may even break down completely (see [4]).

To overcome these drawbacks, one can use the following three techniques (see [6, 8, 13, 18]):

1. The function \({\mathrm {sinc}}(L \pi \, \cdot )\) is regularized by a truncated window function

$$\begin{aligned} \varphi _m(x) :=\varphi (x)\,{{\mathbf {1}}}_{[-m/L,\,m/L]}(x), \quad x \in {\mathbb {R}}, \end{aligned}$$

where the window function \(\varphi : {\mathbb {R}} \rightarrow [0,\,1]\) belongs to the set \(\Phi _{m,L}\) (as defined in Sect. 3) and \({\mathbf{1}}_{[-m/L,\,m/L]}\) denotes the indicator function of the interval \(\big [-\frac{m}{L},\,\frac{m}{L}\big ]\) with some \(m \in {\mathbb {N}} \setminus \{1\}\). Then we recover a function \(f \in L^2({\mathbb {R}})\) with bandwidth \(\le \frac{N}{2}\) by the regularized Shannon sampling formula

$$\begin{aligned} (R_{\varphi ,m} f)(t) :=\sum _{\ell \in {\mathbb {Z}}} f\left( \tfrac{\ell }{L}\right) \, {\mathrm {sinc}}\big (L \pi \left( t - \tfrac{\ell }{L}\right) \big )\,\varphi _m\left( t - \tfrac{\ell }{L}\right) , \end{aligned}$$

where \(L \ge N\). Obviously, this is an interpolating approximation of f, since it holds

$$\begin{aligned} (R_{\varphi ,m} f)\left( \tfrac{k}{L}\right) = f\left( \tfrac{k}{L}\right) , \quad k \in {\mathbb {Z}}. \end{aligned}$$

2. The use of the truncated window function \(\varphi _m\) with compact support \(\big [- \frac{m}{L},\,\frac{m}{L}\big ]\) leads to localized sampling of f, i.e., the computation of \((R_{\varphi ,m} f)(t)\) for \(t \in {\mathbb {R}} \setminus \frac{1}{L}\,{\mathbb {Z}}\) requires only 2m samples \(f\big (\frac{k}{L}\big )\), where \(k\in {\mathbb {Z}}\) fulfills the condition \(|k - L t| \le m\) (see the short sketch after this list). If f has bandwidth \(\le N/2\) and \(L \ge N\), then the reconstruction of f on the interval \([0,\,1]\) requires only the \(2m + L + 1\) samples \(f\big (\frac{\ell }{L}\big )\) with \(\ell = -m,\,1-m,\,\ldots ,\,m+L\).

3. In many applications, one usually employs oversampling, i.e., a function \(f \in L^2({\mathbb {R}})\) of bandwidth \(\le N/2\) is sampled on a finer grid \(\frac{1}{L}\, {\mathbb {Z}}\) with \(L > N\).
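
To make the localized sampling of item 2 concrete, the following minimal sketch (our illustration with hypothetical parameter values, not part of the original text) selects exactly the indices k with \(|k - Lt| \le m\):

```python
import math

# Sketch: indices k with |k - L*t| <= m that enter the sum (R_{phi,m} f)(t).
def local_indices(t, L, m):
    c = L * t
    return range(math.ceil(c - m), math.floor(c + m) + 1)

L, m = 256, 5   # hypothetical values
print(len(local_indices(0.3, L, m)))   # 10 = 2m indices for t off the grid (1/L)*Z
print(len(local_indices(0.0, L, m)))   # 2m + 1 indices at a grid point
```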

This concept of regularized Shannon sampling formulas with localized sampling and oversampling has already been studied by various authors, e.g. in [13, 16] and references therein for the Gaussian window function. The theoretical error bounds for the Gaussian window function were improved in [6]; however, oversampling was neglected in that work, and only the special case \(L=N=1\) was studied. The case of erroneous sampling for the Gaussian window function was examined in [15]. Generalizations of the Gaussian regularized Shannon sampling formula to holomorphic functions f were introduced in [17] using contour integration and in [19] for the approximation of derivatives of f. A survey of different approaches for window functions can be found in [14]. Furthermore, in [18] the problem was approached in Fourier space. There, oversampling is equivalent to extending the Fourier transform of the \(\mathrm {sinc}\) function to the larger interval \([- L/2, L/2]\), and the aim is to find a regularization function whose Fourier transform is smooth. However, the resulting function does not have an explicit representation and therefore cannot be used directly in the spatial domain. In any case, the complexity and efficiency of the resulting methods was not the main focus of the aforementioned approaches. By contrast, we now propose a new set \(\Phi _{m,L}\) of window functions \(\varphi \) such that smaller truncation parameters m suffice for achieving high accuracy, thereby yielding short sums that can be evaluated very fast.

In this paper we present new regularized Shannon sampling formulas with localized sampling and derive new estimates of the uniform approximation error

$$\begin{aligned} \max _{t \in {\mathbb {R}}}\big | f(t) - (R_{\varphi ,m} f)(t) \big |, \end{aligned}$$

where we apply several window functions \(\varphi \). It is shown in the subsequent sections that the uniform approximation error decays exponentially with respect to m, if \(\varphi \in \Phi _{m,L}\) is the Gaussian, \(\mathrm B\)-spline, or \(\sinh \)-type window function. If instead \(\varphi \in \Phi _{m,L}\) is chosen as the rectangular window function, then the uniform approximation error of the regularized Shannon sampling formula has only a poor decay of order \(m^{-1/2}\). Moreover, we show that the regularized Shannon sampling formulas are numerically robust for noisy samples, i.e., if \(\varphi \in \Phi _{m,L}\) is the Gaussian, \(\mathrm B\)-spline, or \(\sinh \)-type window function, then the uniform perturbation error (3.29) only grows as \(m^{1/2}\).

In our approach, we need the Fourier transform of the product of \({\mathrm {sinc}}(L \pi \, \cdot )\) and the window function \(\varphi \). Since the \({\mathrm {sinc}}\) function belongs to \(L^2({\mathbb {R}})\), but not to \(L^1({\mathbb {R}})\), we present the convolution property of the Fourier transform for \(L^2({\mathbb {R}})\) functions in the preliminary Sect. 2 (see Lemma 2.2). In Sect. 3, we consider regularized Shannon sampling formulas for an arbitrary window function \(\varphi \in \Phi _{m,L}\). Here the main results are Theorem 3.2 with a unified approach to error estimates for regularized Shannon sampling formulas and Theorem 3.4 on the numerical robustness of regularized Shannon sampling formulas. Afterwards, we concretize the results for several choices of window functions. In Sect. 4, we consider the Gaussian window function (as in [6, 13]). Then Theorem 4.1 shows that the uniform approximation error decays exponentially with respect to m. In Sect. 5, we use the \(\mathrm B\)-spline window function and prove the exponential decay of the uniform approximation error with respect to m in Theorem 5.3. In Sect. 6 we discuss the \(\sinh \)-type window function, where it is proven in Theorem 6.1 that the uniform approximation error decays exponentially with respect to m. Several numerical experiments illustrate the theoretical results. Finally, in the concluding Sect. 7, we compare the proposed window functions and show the superiority of the new \(\sinh \)-type window function.

2 Convolution property of the Fourier transform

Let \(C_0({\mathbb {R}})\) denote the Banach space of continuous functions \(f :{\mathbb {R}}\rightarrow {\mathbb {C}}\) vanishing as \(|t|\rightarrow \infty \) with norm

$$\begin{aligned} \Vert {f}\Vert _{C_0({\mathbb {R}})} :=\max _{t\in {\mathbb {R}}} |{f}(t)|. \end{aligned}$$

As is known (see [10, p. 66]), the Fourier transform defined by

$$\begin{aligned} {{\hat{f}}}(v) :=\int _{{\mathbb {R}}} f(t)\,{\mathrm e}^{-2 \pi {\mathrm i}v t}\, {\mathrm d}t, \quad v \in {\mathbb {R}}, \end{aligned}$$
(2.1)

is a continuous mapping from \(L^1({\mathbb {R}})\) into \(C_0({\mathbb {R}})\) with

$$\begin{aligned} \Vert {\hat{f}}\Vert _{C_0({\mathbb {R}})} \le \Vert f\Vert _{L^1({\mathbb {R}})} :=\int _{{\mathbb {R}}} |f(t)| {\mathrm d}t. \end{aligned}$$

Here we are interested in the Hilbert space \(L^2({\mathbb {R}})\) with inner product and norm

$$\begin{aligned} \langle f , g \rangle _{L^2({\mathbb {R}})} :=\int _{{\mathbb {R}}} f(t) \, \overline{g(t)} \, {\mathrm d}t , \quad \Vert f\Vert _{L^2({\mathbb {R}})} :=\left( \langle f , f \rangle _{L^2({\mathbb {R}})}\right) ^{1/2} . \end{aligned}$$

By the theorem of Plancherel, the Fourier transform is also an invertible, continuous mapping from \(L^2({\mathbb {R}})\) onto itself with \(\Vert f\Vert _{L^2({\mathbb {R}})} = \Vert {\hat{f}}\Vert _{L^2({\mathbb {R}})}\).

For \(f,\,g \in L^1({\mathbb {R}})\) the convolution property of the Fourier transform reads as

$$\begin{aligned} (f * g){\hat{}} = {{\hat{f}}}\,{{\hat{g}}} \in C_0({\mathbb {R}}), \end{aligned}$$
(2.2)

where the convolution is defined by

$$\begin{aligned} (f * g)(x) :=\int _{{\mathbb {R}}} f(x-t)\,g(t)\, {\mathrm d}t, \quad x \in {\mathbb {R}}. \end{aligned}$$

However, for any f, \(g \in L^2({\mathbb {R}})\) the convolution property of the Fourier transform is not true in the form (2.2), since by Young’s inequality \(f * g \in C_0({\mathbb {R}})\), by Hölder’s inequality \({{\hat{f}}}\,{{\hat{g}}} \in L^1({\mathbb {R}})\) and since the Fourier transform does not map \(C_0({\mathbb {R}})\) into \(L^1({\mathbb {R}})\). Instead, the convolution property of the Fourier transform in \(L^2({\mathbb {R}})\) has the following form.

Lemma 2.1

For all \(f,\,g \in L^2({\mathbb {R}})\) it holds

$$\begin{aligned} f * g = ({{\hat{f}}}\, {{\hat{g}}}){\check{}} \in C_0({\mathbb {R}}), \end{aligned}$$
(2.3)

where \({\check{h}}\) denotes the inverse Fourier transform of \(h \in L^1({\mathbb {R}})\) defined as

$$\begin{aligned} {{\check{h}}}(t) :=\int _{{\mathbb {R}}} h(v)\,{\mathrm e}^{2 \pi {\mathrm i}v t}\,{\mathrm d}v, \quad t \in {\mathbb {R}}. \end{aligned}$$
(2.4)

Proof

For arbitrary \(f,\,g \in L^2({\mathbb {R}})\) it holds \({{\hat{f}}},\,{{\hat{g}}} \in L^2({\mathbb {R}})\), see [10, p. 80]. Since the Schwartz space \({{\mathcal {S}}}({\mathbb {R}})\), cf. [10, p. 167], is dense in \(L^2({\mathbb {R}})\), see [10, p. 170], there exist sequences \((f_n)_{n=1}^{\infty }\) and \((g_n)_{n=1}^{\infty }\) in \({\mathcal S}({\mathbb {R}})\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert f_n - f \Vert _{L^2({\mathbb {R}})} = \lim _{n\rightarrow \infty } \Vert g_n - g \Vert _{L^2({\mathbb {R}})} = 0. \end{aligned}$$
(2.5)

Since the Fourier transform is a continuous mapping on \(L^2(\mathbb R)\), it follows that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert {{\hat{f}}}_n - {{\hat{f}}} \Vert _{L^2({\mathbb {R}})} = \lim _{n\rightarrow \infty } \Vert {{\hat{g}}}_n - {{\hat{g}}} \Vert _{L^2({\mathbb {R}})} = 0. \end{aligned}$$
(2.6)

If we now write

$$\begin{aligned} (f * g) - (f_n * g_n) = (f - f_n) * g + f_n * (g - g_n), \end{aligned}$$

we see by the triangle inequality and Young’s inequality that

$$\begin{aligned} \Vert (f * g) - (f_n * g_n)\Vert _{C_0({\mathbb {R}})} \le \Vert f - f_n\Vert _{L^2({\mathbb {R}})}\,\Vert g \Vert _{L^2({\mathbb {R}})}+ \Vert f_n \Vert _{L^2({\mathbb {R}})}\, \Vert g - g_n\Vert _{L^2({\mathbb {R}})} \end{aligned}$$

and hence by (2.5) that

$$\begin{aligned} \lim _{n \rightarrow \infty } \Vert (f * g) - (f_n * g_n)\Vert _{C_0({\mathbb {R}})} = 0. \end{aligned}$$
(2.7)

On the other hand, if we write

$$\begin{aligned} {{\hat{f}}}\,{{\hat{g}}} - {{\hat{f}}}_n\, {{\hat{g}}}_n = ({{\hat{f}}} - {{\hat{f}}}_n)\,{{\hat{g}}} + {{\hat{f}}}_n\,({{\hat{g}}} - {{\hat{g}}}_n), \end{aligned}$$

we see by the triangle inequality and Hölder’s inequality that

$$\begin{aligned} \Vert {{\hat{f}}}\,{{\hat{g}}} - {{\hat{f}}}_n\, {{\hat{g}}}_n \Vert _{L^1({\mathbb {R}})} \le \Vert {{\hat{f}}} - {{\hat{f}}}_n\Vert _{L^2({\mathbb {R}})}\,\Vert {{\hat{g}}}\Vert _{L^2({\mathbb {R}})} + \Vert {{\hat{f}}}_n\Vert _{L^2({\mathbb {R}})}\,\Vert {{\hat{g}}} - {{\hat{g}}}_n\Vert _{L^2({\mathbb {R}})} \end{aligned}$$

and hence by (2.6) that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert {{\hat{f}}}\,{{\hat{g}}} - {{\hat{f}}}_n\, {{\hat{g}}}_n \Vert _{L^1({\mathbb {R}})} = 0. \end{aligned}$$
(2.8)

By the convolution property of the Fourier transform in \({\mathcal S}({\mathbb {R}})\), we have for \(f_n\), \(g_n \in {{\mathcal {S}}}({\mathbb {R}})\) that

$$\begin{aligned} (f_n * g_n){\hat{}} = {{\hat{f}}}_n\, {{\hat{g}}}_n. \end{aligned}$$

Note that \(f_n * g_n \in {{\mathcal {S}}}({\mathbb {R}})\) and \({{\hat{f}}}_n\, {{\hat{g}}}_n \in {{\mathcal {S}}}({\mathbb {R}})\) (see [10, p. 175]). Since the Fourier transform on \({{\mathcal {S}}}({\mathbb {R}})\) is invertible (see [10, p. 175]), it holds

$$\begin{aligned} f_n * g_n = ({{\hat{f}}}_n \,{{\hat{g}}}_n){\check{}}. \end{aligned}$$
(2.9)

Moreover, since the inverse Fourier transform is a continuous mapping from \(L^1({\mathbb {R}})\) into \(C_0({\mathbb {R}})\), it holds by [10, pp. 66–67] that

$$\begin{aligned} \Vert ({{\hat{f}}}\,{{\hat{g}}}){\check{}} - ({{\hat{f}}}_n \,{{\hat{g}}}_n){\check{}}\,\Vert _{C_0({\mathbb {R}})} \le \Vert {{\hat{f}}}\,{{\hat{g}}} - {{\hat{f}}}_n\, {{\hat{g}}}_n \Vert _{L^1({\mathbb {R}})}. \end{aligned}$$

From (2.8) it follows that

$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert ({{\hat{f}}}\,{{\hat{g}}}){\check{}} - ({{\hat{f}}}_n \,{{\hat{g}}}_n){\check{}}\,\Vert _{C_0({\mathbb {R}})} = 0. \end{aligned}$$
(2.10)

Thus, by (2.9) we conclude that

$$\begin{aligned} \Vert f *g - ({{\hat{f}}} \,{{\hat{g}}}){\check{}}\, \Vert _{C_0({\mathbb {R}})}\le & {} \Vert f*g - f_n *g_n\Vert _{C_0({\mathbb {R}})} + \Vert f_n *g_n - ({{\hat{f}}} \,{{\hat{g}}}){\check{}}\,\Vert _{C_0(\mathbb R)}\\= & {} \Vert f*g - f_n *g_n\Vert _{C_0({\mathbb {R}})} + \Vert ({{\hat{f}}}_n \,{{\hat{g}}}_n){\check{}} - ({{\hat{f}}} \,{{\hat{g}}}){\check{}}\,\Vert _{C_0({\mathbb {R}})}. \end{aligned}$$

For \(n\rightarrow \infty \) the right-hand side of the above estimate converges to zero by (2.7) and (2.10). This implies (2.3). \(\square \)

The following equivalent formulation of the convolution property in \(L^2({\mathbb {R}})\) can be obtained, if we replace \(f \in L^2(\mathbb R)\) by \({\hat{f}} \in L^2({\mathbb {R}})\) and \(g \in L^2({\mathbb {R}})\) by \({\hat{g}} \in L^2({\mathbb {R}})\) in (2.3).

Lemma 2.2

For all f, \(g \in L^2({\mathbb {R}})\) it holds

$$\begin{aligned} {{\hat{f}}} * {{\hat{g}}} = (f \,g)\hat{} \in C_0({\mathbb {R}}). \end{aligned}$$

Proof

For any f, \(g \in L^2({\mathbb {R}})\) it holds \(\hat{{\hat{f}}} = f(-\,\cdot )\) as well as \(\hat{{\hat{g}}} = g(-\, \cdot )\). Note that by Hölder's inequality we have \(f\,g \in L^1({\mathbb {R}})\). Then by Lemma 2.1 it follows that

$$\begin{aligned} ({{\hat{f}}} * {{\hat{g}}})(t)= & {} \big (\hat{{\hat{f}}}\, \hat{{\hat{g}}}\big )\check{}(t) = \big (f(-\,\cdot )\, g(-\,\cdot )\big )\check{}(t)\\= & {} \int _{{\mathbb {R}}} f(-v)\,g(-v)\,{\mathrm e}^{2 \pi {\mathrm i}v t}\, {\mathrm d}v= \int _{{\mathbb {R}}} f(v)\,g(v)\, {\mathrm e}^{-2 \pi {\mathrm i}v t}\,{\mathrm d}v = (f\, g)\hat{}\,(t). \end{aligned}$$

This completes the proof. \(\square \)

Note that Lemma 2.2 improves a corresponding result in [3, p. 209]. There it was remarked that for f, \(g \in L^2({\mathbb {R}})\) it holds \((f \,g)\hat{} = {{\hat{f}}} * {{\hat{g}}} \in L^{\infty }({\mathbb {R}})\), but by Lemma 2.2 the function \({{\hat{f}}} * {{\hat{g}}}\) indeed belongs to \(C_0({\mathbb {R}}) \subset L^{\infty }({\mathbb {R}})\).
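
As a small numerical sanity check of Lemma 2.1 (our own illustration, not part of the paper): for \(f = g = \mathrm {sinc}(\pi \, \cdot ) \in L^2({\mathbb {R}})\) one has \({\hat{f}} = {\hat{g}} = {\mathbf{1}}_{[-1/2,\,1/2]}\), hence \(({{\hat{f}}}\,{{\hat{g}}}){\check{}} = \mathrm {sinc}(\pi \, \cdot )\) as well, and numerical quadrature of the convolution integral reproduces this value.

```python
import numpy as np

# Check f*g = (f^ g^)-inverse-transform from Lemma 2.1 for f = g = sinc(pi .):
# here f^ = g^ = indicator of [-1/2, 1/2], so (f^ g^)-inverse is sinc(pi .) again.
# Note np.sinc(x) = sin(pi x)/(pi x), which is sinc(pi x) in the paper's notation.
t = np.linspace(-500.0, 500.0, 2_000_001)
dt = t[1] - t[0]
x = 0.3
conv = np.sum(np.sinc(x - t) * np.sinc(t)) * dt   # Riemann sum for (f * g)(x)
print(conv, np.sinc(x))                           # both approximately 0.8584
```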

3 Regularized Shannon sampling formulas with localized sampling

Let

$$\begin{aligned} {{\mathcal {B}}}_{\delta }({\mathbb {R}}) :=\{f \in L^2({\mathbb {R}}):\, \mathrm {supp}\, {\hat{f}} \subseteq [- \delta , \, \delta ]\} \end{aligned}$$

be the Paley–Wiener space. The functions of \({\mathcal B}_{\delta }({\mathbb {R}})\) are called bandlimited to \([- \delta ,\,\delta ]\), where \(\delta > 0\) is the so-called bandwidth. By definition the Paley–Wiener space \({\mathcal B}_{\delta }({\mathbb {R}})\) consists of equivalence classes of functions that coincide almost everywhere. Each of these equivalence classes contains a smooth function, since by inverse Fourier transform it holds for each \(r \in {{\mathbb {N}}}_0\) that

$$\begin{aligned} f^{(r)}(t) = \int _{-\delta }^{\delta } {{\hat{f}}}(v)\,(2 \pi {\mathrm i}v)^r\,{\mathrm e}^{2\pi {\mathrm i}v t}\,{\mathrm d}v, \end{aligned}$$

i.e., \(f^{(r)} \in C_0({\mathbb {R}})\), because \((2 \pi {\mathrm i}\,\cdot )^r\,{{\hat{f}}}\in L^1([-\delta ,\, \delta ])\). In the following we will always select the smooth representation of an equivalence class in \({{\mathcal {B}}}_{\delta }({\mathbb {R}})\).

In this paper we consider bandlimited functions \(f \in {\mathcal B}_{\delta }({\mathbb {R}})\) with \(\delta \in (0,\,N/2)\), where \(N\in {\mathbb {N}}\) is fixed. For \(L:=N(1+\lambda )\) with \(\lambda \ge 0\), and any \(m \in {\mathbb {N}} \setminus \{1\}\) with \(2m \ll L\), we introduce the set \(\Phi _{m,L}\) of all window functions \(\varphi : {\mathbb {R}} \rightarrow [0,\,1]\) with the following properties:

  • The window function \(\varphi \in L^2({\mathbb {R}})\) is even, positive on \((-m/L,\,m/L)\) and continuous on \({\mathbb R}\setminus \{- m/L,\, m/L\}\).

  • The restricted window function \(\varphi |_{[0,\,\infty )}\) is monotonically non-increasing with \(\varphi (0) = 1.\)

  • The Fourier transform

    $$\begin{aligned} {{\hat{\varphi }}}(v) :=\int _{{\mathbb {R}}} \varphi (x)\,{\mathrm e}^{- 2 \pi {\mathrm i}v x}\,{\mathrm d}x = 2 \int _0^{\infty } \varphi (x)\,\cos ( 2 \pi \,v x)\,{\mathrm d}x \end{aligned}$$

    is explicitly known.

Examples of such window functions are the rectangular window function

$$\begin{aligned} \varphi _{\mathrm {rect}} (x) :={\mathbf{1}}_{[-m/L,\,m/L]}(x),\quad x \in {\mathbb {R}}, \end{aligned}$$
(3.1)

where \({{\mathbf {1}}}_{[-m/L,\,m/L]}\) is the indicator function of the interval \([-m/L,\,m/L]\), the Gaussian window function

$$\begin{aligned} \varphi _{\mathrm {Gauss}}(x) :={\mathrm e}^{- x^2/(2 \sigma ^2)}, \quad x \in {\mathbb {R}}, \end{aligned}$$
(3.2)

with some \(\sigma >0\), the modified \(\mathrm B\)-spline window function

$$\begin{aligned} \varphi _{\mathrm {B}}(x) :=\frac{1}{M_{2s}(0)}\, M_{2s}\bigg (\frac{Lxs}{m}\bigg ),\quad x \in {\mathbb {R}}, \end{aligned}$$
(3.3)

where \(M_{2s}\) is the centered cardinal \(\mathrm B\)-spline of even order 2s, and the \(\sinh \)-type window function

$$\begin{aligned} \varphi _{\sinh }(x) :={\left\{ \begin{array}{ll} \frac{1}{\sinh \beta }\, \sinh \big (\beta \,\sqrt{1-(L x/m)^2}\big ) &{}:\quad x \in [-m/L,\,m/L] , \\ 0 &{}:\quad x \in {\mathbb {R}}\setminus [-m/L,\,m/L] , \end{array}\right. } \end{aligned}$$
(3.4)

with certain \(\beta > 0\). All these window functions are well-studied in the context of the nonequispaced fast Fourier transform (NFFT), see e.g. [11] and references therein.
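
For illustration, the following sketch implements the four window functions (3.1)-(3.4) (our code, not the authors'; the parameters \(\sigma \), s, \(\beta \) are placeholder values here, suitable choices are discussed in the subsequent sections).

```python
import numpy as np
from math import comb, factorial

# Sketch of the window functions (3.1)-(3.4); sigma, s, beta are placeholders.
def M(n, x):
    """Centered cardinal B-spline of order n (standard truncated-power formula)."""
    x = np.asarray(x, dtype=float)
    return sum((-1)**k * comb(n, k) * np.maximum(x + n / 2 - k, 0.0)**(n - 1)
               for k in range(n + 1)) / factorial(n - 1)

def windows(L, m, sigma, s, beta):
    rect = lambda x: np.where(np.abs(x) <= m / L, 1.0, 0.0)          # (3.1)
    gauss = lambda x: np.exp(-np.asarray(x)**2 / (2 * sigma**2))     # (3.2)
    bspline = lambda x: M(2 * s, L * np.asarray(x) * s / m) / M(2 * s, 0.0)  # (3.3)
    sinh_t = lambda x: np.where(                                     # (3.4)
        np.abs(x) <= m / L,
        np.sinh(beta * np.sqrt(np.maximum(1 - (L * np.asarray(x) / m)**2, 0.0)))
        / np.sinh(beta),
        0.0)
    return rect, gauss, bspline, sinh_t

rect, gauss, bspline, sinh_t = windows(L=64, m=5, sigma=0.02, s=2, beta=8.0)
for phi in (rect, gauss, bspline, sinh_t):
    print(float(phi(0.0)))   # phi(0) = 1 for all four, as required for Phi_{m,L}
```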

Now let \(\varphi \in \Phi _{m,L}\) be a given window function. We define the truncated window function

$$\begin{aligned} \varphi _m (x) :=\varphi (x) \,{\mathbf{1}}_{[-m/L,\,m/L]}(x), \quad x \in {\mathbb {R}}, \end{aligned}$$
(3.5)

and study the regularized Shannon sampling formula with localized sampling

$$\begin{aligned} (R_{\varphi ,m}f)(t) :=\sum _{\ell \in {\mathbb {Z}}} f\big (\tfrac{\ell }{L}\big )\,\mathrm {sinc}\big (L \pi \,\big (t-\tfrac{\ell }{L}\big )\big )\,\varphi _m\big (t - \tfrac{\ell }{L}\big ), \quad t \in {\mathbb {R}}, \end{aligned}$$
(3.6)

to rapidly reconstruct the values f(t) for \(t \in {\mathbb {R}}\) from given sampling data \(f\big (\frac{\ell }{L}\big )\), \(\ell \in \mathbb Z\), with high accuracy.

It is known that \(\{\mathrm {sinc}\big (L \pi \,\big (\cdot -\frac{\ell }{L}\big )\big ) : \, \ell \in {\mathbb {Z}}\}\) forms an orthogonal system in \(L^2({\mathbb {R}})\), since by the shifting property the Fourier transform of \(\mathrm {sinc}\big (L \pi \,\big (\cdot -\frac{\ell }{L}\big )\big )\) is equal to

$$\begin{aligned} \frac{1}{L}\,{\mathrm e}^{- 2 \pi {\mathrm i}\ell v/L}\,{\mathbf{1}}_{[-L/2,\,L/2]}(v),\quad v \in {\mathbb {R}}, \end{aligned}$$

and by the Parseval identity it holds for all \(\ell \), \(k \in \mathbb Z\) that

$$\begin{aligned} \big \langle \mathrm {sinc}\big (L \pi \,\big ( \cdot -\tfrac{\ell }{L}\big )\big ),\,\mathrm {sinc}\big (L \pi \,\big ( \cdot -\tfrac{k}{L}\big )\big )\big \rangle _{L^2({\mathbb {R}})}= \frac{1}{L^2}\, \int _{-L/2}^{L/2} {\mathrm e}^{2 \pi {\mathrm i}(k-\ell )\,v/L}\,{\mathrm d}v = \frac{1}{L}\, \delta _{\ell ,k} \end{aligned}$$
(3.7)

with the Kronecker symbol \(\delta _{\ell ,k}\). Moreover, it follows directly from the Whittaker–Kotelnikov–Shannon sampling theorem (see (1.1)) that the system \(\{\mathrm {sinc}\big (L \pi \,\big (\cdot -\frac{\ell }{L}\big )\big ) : \, \ell \in {\mathbb {Z}}\}\) forms an orthogonal basis of \({{\mathcal {B}}}_{L/2}({\mathbb {R}})\). From (1.1) and (3.7) it follows that for any \(f \in {{\mathcal {B}}}_{\delta }({\mathbb {R}}) \subset {\mathcal B}_{L/2}({\mathbb {R}})\) with \(\delta \in (0,\, N/2]\) and \(L\ge N\) it holds

$$\begin{aligned} \Vert f \Vert _{L^2({\mathbb {R}})}^2 = \tfrac{1}{L}\, \sum _{\ell \in \mathbb Z} \big |f\big (\tfrac{\ell }{L}\big )\big |^2. \end{aligned}$$
(3.8)

Firstly, we consider the regularized Shannon sampling formula (3.6) with the simple rectangular window function \(\varphi = \varphi _{\mathrm {rect}}\), see (3.1), i.e., for some \(m \in {\mathbb {N}} \setminus \{1\}\) we form the rectangular regularized Shannon sampling formula with localized sampling

$$\begin{aligned} (R_{{\mathrm {rect}},m}f)(t) :=\sum _{\ell \in {\mathbb {Z}}} f\big (\tfrac{\ell }{L}\big ) \, \mathrm {sinc}\big (L\pi \,\big (t-\tfrac{\ell }{L}\big )\big )\,{\mathbf{1}}_{[-m/L,\,m/L]}\big (t - \tfrac{\ell }{L}\big ), \quad t \in {\mathbb {R}}. \end{aligned}$$
(3.9)

Obviously, the rectangular regularized Shannon sampling formula (3.9) interpolates f on the grid \(\frac{1}{L}\,{\mathbb {Z}}\), i.e., for all \(m\in {\mathbb {N}}\setminus \{1\}\), the interpolation property

$$\begin{aligned} f\big (\tfrac{\ell }{L}\big ) = (R_{{\mathrm {rect}},m} f)\big (\tfrac{\ell }{L}\big ), \quad \ell \in {\mathbb {Z}}, \end{aligned}$$
(3.10)

is fulfilled since \(\mathrm {sinc}(\pi k)=0\) for \(k\in {\mathbb {Z}}\setminus \{0\}\). Due to the definition of the indicator function \({\mathbf{1}}_{[-m/L,\,m/L]}\), for \(t \in \big (0,\,\frac{1}{L}\big )\) the rectangular regularized Shannon sampling formula reads as

$$\begin{aligned} (R_{{\mathrm {rect}},m} f)(t) = \sum _{{\ell \in {\mathcal {J}}_m}} f\big (\tfrac{\ell }{L}\big ) \, \mathrm {sinc}\big (L\pi \,\big (t - \tfrac{\ell }{L}\big )\big ) \end{aligned}$$

with the index set \({\mathcal {J}}_m :=\{-m+1,\,-m+2,\ldots ,\,m\}\). Indeed, on any interval \(\big (\frac{k}{L},\,\frac{k+1}{L}\big )\) with \(k\in {\mathbb {Z}}\) the rectangular regularized Shannon sampling formula reads as

$$\begin{aligned} (R_{{\mathrm {rect}},m} f)\big (t+\tfrac{k}{L}\big ) = \sum _{{\ell \in {\mathcal {J}}_m}} f\big (\tfrac{\ell +k}{L}\big ) \, \mathrm {sinc}\big (L\pi \,\big (t - \tfrac{\ell }{L}\big )\big ), \quad t \in \big (0,\,\tfrac{1}{L}\big ) . \end{aligned}$$
(3.11)

However, since the \(\mathrm {sinc}\) function decays slowly at infinity, (3.9) is not a good approximation to f on \(\mathbb R\). As a consequence of a result in [8], it can be seen that the convergence rate of the sequence \(\big (f - R_{{\mathrm {rect}},m} f \big )_{m=1}^{\infty }\) is only \({\mathcal {O}}(m^{-1/2})\) for sufficiently large m.

Lemma 3.1

Let \(f \in {\mathcal {B}}_{N/2}({\mathbb {R}})\) with fixed \(N \in {\mathbb {N}}\), \(L:=N(1+\lambda )\) with \(\lambda \ge 0\) and \(m \in {\mathbb {N}} \setminus \{1\}\) be given. Then it holds

$$\begin{aligned} \Vert f - R_{{\mathrm {rect}},m} f\Vert _{C_0({\mathbb {R}})} \le \frac{{\sqrt{L}}}{\pi }\,\sqrt{\frac{2}{m} + \frac{1}{m^2}}\,\Vert f\Vert _{L^2({\mathbb {R}})}. \end{aligned}$$

Proof

Since \(R_{{\mathrm {rect}},m} f\) possesses similar representations (3.11) on each interval \(\big (\frac{k}{L},\,\frac{k+1}{L}\big )\), \(k\in {\mathbb {Z}}\), we consider \(f(t) - (R_{{\mathrm {rect}},m} f)(t)\) only for \(t \in \big [0,\,\frac{1}{L}\big ]\) and show that

$$\begin{aligned} \max _{t\in [0,\,1/L]} |f(t) - (R_{{\mathrm {rect}},m} f)(t)| \le \frac{{\sqrt{L}}}{\pi }\,\sqrt{\frac{2}{m} + \frac{1}{m^2}}\,\Vert f\Vert _{L^2({\mathbb {R}})}. \end{aligned}$$
(3.12)

The Whittaker–Kotelnikov–Shannon sampling theorem (see (1.1)) implies that

$$\begin{aligned} f(t) - (R_{{\mathrm {rect}},m} f)(t) = \sum _{\ell \in {\mathbb {Z}}\setminus {{\mathcal {J}}_m}} f\big (\tfrac{\ell }{L}\big ) \,{\mathrm {sinc}}\big (L \pi \,\big (t - \tfrac{\ell }{L}\big )\big ). \end{aligned}$$

Then by the Cauchy–Schwarz inequality and (3.8) it follows that

$$\begin{aligned} \big | f(t) - (R_{{\mathrm {rect}},m} f)(t) \big |\le & {} \Big (\sum _{\ell \in {\mathbb {Z}}\setminus {{\mathcal {J}}_m}} \big |f\big (\tfrac{\ell }{L}\big )\big |^2\Big )^{1/2}\,\Big (\sum _{\ell \in {\mathbb {Z}}\setminus {{\mathcal {J}}_m}} \big [{\mathrm {sinc}}\big (L \pi \,\big (t-\tfrac{\ell }{L}\big )\big )\big ]^2\Big )^{1/2} \nonumber \\\le & {} {\sqrt{L}}\,\Vert f\Vert _{L^2({\mathbb {R}})}\,\sqrt{h_m(t)} \end{aligned}$$
(3.13)

with the auxiliary function

$$\begin{aligned} h_m(t)&:=\sum _{\ell \in {\mathbb {Z}}\setminus {{\mathcal {J}}_m}} \big [{\mathrm {sinc}}\big (L\pi \,\big (t-\tfrac{\ell }{L}\big )\big )\big ]^2\\&= \frac{(\sin (L\pi t))^2}{\pi ^2}\, \sum _{\ell \in {\mathbb Z}\setminus {{\mathcal {J}}_m}} \frac{1}{(L t-\ell )^2} \ge 0, \quad t \in \big [0,\, \tfrac{1}{L}\big ]. \end{aligned}$$

By the integral test for convergence of series we estimate the function

$$\begin{aligned} h_m(t) \le \frac{1}{\pi ^2}\, \sum _{k\in {{\mathbb {Z}}}\setminus {{\mathcal {J}}_m}} \frac{1}{k^2} \le \frac{1}{\pi ^2}\,\left( \frac{1}{m^2} + 2 \, \int _m^{\infty } \frac{1}{x^2} \, \mathrm {d}x\right) = \frac{1}{\pi ^2}\,\left( \frac{1}{m^2} + \frac{2}{m}\right) \end{aligned}$$
(3.14)

for \(t \in \big (0,\, \frac{1}{L}\big )\). Then (3.13), (3.14) combined with (3.10) imply (3.12).

By the same technique, the above estimate of the approximation error can be shown on each interval \(\big (\frac{k}{L},\,\frac{k+1}{L}\big )\), \(k\in {\mathbb {Z}}\). This completes the proof. \(\square \)
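
The slow \({\mathcal {O}}(m^{-1/2})\) decay of Lemma 3.1 is easy to observe numerically. The following sketch (our illustration, using the bandlimited test function from Example 4.2 below) compares the measured error with the bound of Lemma 3.1.

```python
import numpy as np

# Numerical check of Lemma 3.1: error of (3.9) for the bandlimited function
# f(t) = sqrt(2 delta) sinc(2 delta pi t) with ||f||_{L^2} = 1.
# Recall np.sinc(x) = sin(pi x)/(pi x), so sinc(L pi (t - l/L)) = np.sinc(L t - l).
N, lam = 32, 1.0
L = int(N * (1 + lam))
delta = N / 3                                   # bandwidth delta < N/2
f = lambda u: np.sqrt(2 * delta) * np.sinc(2 * delta * u)

t = np.linspace(-1.0, 1.0, 2001)
for m in (2, 4, 8, 16, 32):
    approx = np.zeros_like(t)
    for i, ti in enumerate(t):
        # localized sampling: only indices with |ell - L*t| <= m contribute
        ells = np.arange(np.ceil(L * ti - m), np.floor(L * ti + m) + 1)
        approx[i] = np.sum(f(ells / L) * np.sinc(L * ti - ells))
    err = np.max(np.abs(f(t) - approx))
    bound = np.sqrt(L) / np.pi * np.sqrt(2 / m + 1 / m**2)   # Lemma 3.1
    print(m, err, bound)   # the error decays only like m**(-1/2)
```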

In view of the slow convergence of the sequence \(\big (R_{{\mathrm {rect}},m} f(t)\big )_{m=1}^{\infty }\) it has been proposed to modify the rectangular regularized Shannon sampling sum (3.9) by multiplying the \(\mathrm {sinc}\) function by a more convenient window function \(\varphi \in \Phi _{m,L}\) (see [6, 13]). For any \(m \in {\mathbb {N}} \setminus \{1\}\) the regularized Shannon sampling formula with localized sampling is given by

$$\begin{aligned} (R_{\varphi ,m}f)(t) = \sum _{\ell \in {\mathbb {Z}}} f\big (\tfrac{\ell }{L}\big )\,\mathrm {sinc}\big (L \pi \,\big (t-\tfrac{\ell }{L}\big )\big ) \, \varphi _m\big (t - \tfrac{\ell }{L}\big ), \quad t \in {\mathbb {R}}, \end{aligned}$$
(3.15)

with the truncated window function (3.5). Note that we have the interpolation property

$$\begin{aligned} f\big (\tfrac{\ell }{L}\big ) = (R_{\varphi , m}f)\big (\tfrac{\ell }{L}\big ), \quad \ell \in {\mathbb {Z}}. \end{aligned}$$
(3.16)

In particular, for \(t \in \big (0,\,\frac{1}{L}\big )\) we obtain the regularized Shannon sampling formula

$$\begin{aligned} (R_{\varphi ,m} f)(t) = \sum _{\ell \in {{\mathcal {J}}_m}} f\big (\tfrac{\ell }{L}\big )\,\mathrm {sinc}\big (L \pi \,\big (t-\tfrac{\ell }{L}\big )\big ) \, \varphi _m\big (t - \tfrac{\ell }{L}\big ) = \sum _{\ell \in {{\mathcal {J}}_m}} f\big (\tfrac{\ell }{L}\big )\, \psi \big (t - \tfrac{\ell }{L}\big ) , \end{aligned}$$

where

$$\begin{aligned} \psi (x) :=\mathrm {sinc}(L\pi x) \,\varphi (x) \end{aligned}$$
(3.17)

is the regularized \(\mathrm {sinc}\) function. For the reconstruction of f on any interval \(\big (\frac{k}{L},\,\frac{k+1}{L}\big )\) with \(k \in {\mathbb {Z}}\), we use

$$\begin{aligned} (R_{\varphi ,m} f)\big (t + \tfrac{k}{L}\big ) = \sum _{\ell \in {{\mathcal {J}}_m}} f\big (\tfrac{\ell +k}{L}\big )\, \psi \big (t - \tfrac{\ell }{L}\big ), \quad t \in \big (0,\,\tfrac{1}{L}\big ), \end{aligned}$$
(3.18)

i.e., we reconstruct f by \(R_{\varphi ,m}f\) separately for each open interval \(\big (\frac{k}{L},\,\frac{k+1}{L}\big )\), \(k\in \mathbb Z\). Now we estimate the uniform approximation error

$$\begin{aligned} \Vert f - R_{\varphi ,m} f \Vert _{C_0({\mathbb {R}})} :=\max _{t\in {\mathbb {R}}} \big | f(t) - (R_{\varphi ,m}f)(t) \big | \end{aligned}$$
(3.19)

of the regularized Shannon sampling formula.
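
Before turning to the error analysis, we illustrate the evaluation (3.18) with a minimal sketch (our code, not the authors'; the Gaussian window and the choice of \(\sigma \) anticipate Theorem 4.1 below).

```python
import numpy as np

# Sketch of (3.18): evaluate (R_{phi,m} f)(t) via the shifted index set J_m
# with k = floor(L t). Note np.sinc(x) = sin(pi x)/(pi x), so sinc(L pi x) = np.sinc(L x).
def regularized_shannon(f, t, L, m, phi):
    t = np.atleast_1d(np.asarray(t, dtype=float))
    vals = np.empty_like(t)
    for i, ti in enumerate(t):
        k = int(np.floor(L * ti))
        ells = k + np.arange(-m + 1, m + 1)   # shifted index set J_m
        x = ti - ells / L
        # for ell in the shifted J_m we have |x| <= m/L, so the truncation
        # in (3.5) is inactive and phi may be used directly
        vals[i] = np.sum(f(ells / L) * np.sinc(L * x) * phi(x))
    return vals

# demo with the Gaussian window (3.2); sigma as specified later in Theorem 4.1
N, lam, m = 32, 1.0, 6
L, delta = N * (1 + lam), N / 3
sigma = np.sqrt(m / (np.pi * L * (L - 2 * delta)))
phi = lambda x: np.exp(-x**2 / (2 * sigma**2))
f = lambda u: np.sqrt(2 * delta) * np.sinc(2 * delta * u)   # ||f||_{L^2} = 1
tt = np.linspace(-1.0, 1.0, 2001)
print(np.max(np.abs(f(tt) - regularized_shannon(f, tt, L, m, phi))))  # small error
```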

Theorem 3.2

Let \(f\in {{\mathcal {B}}}_{\delta }({\mathbb {R}})\) with \(\delta = \tau N\), \(\tau \in (0,\,1/2)\), \(N \in {\mathbb {N}}\), \(L= N(1+\lambda )\) with \(\lambda \ge 0\) and \(m \in {\mathbb N}\setminus \{1\}\). Further let \(\varphi \in \Phi _{m,L}\) with the truncated window function (3.5) be given.

Then the regularized Shannon sampling formula (3.15) with localized sampling satisfies

$$\begin{aligned} \Vert f - R_{\varphi ,m}f \Vert _{C_0({\mathbb {R}})} \le \big ( E_1(m,\delta ,L) + E_2(m,\delta ,L) \big ) \,\Vert f\Vert _{L^2({\mathbb {R}})} , \end{aligned}$$
(3.20)

where the corresponding error constants are defined by

$$\begin{aligned} E_1(m,\delta ,L)&:=\sqrt{2\delta } \, \max _{v \in [-\delta ,\delta ]} \left| 1 - \int _{v - \frac{L}{2}}^{v + \frac{L}{2}} {{\hat{\varphi }}}(u)\,{\mathrm d}u \,\right| , \end{aligned}$$
(3.21)
$$\begin{aligned} E_2(m,\delta ,L)&:=\frac{\sqrt{2L}}{\pi m}\, \left( \varphi ^2\big (\tfrac{m}{L}\big ) + L\int _{\frac{m}{L}}^{\infty } \varphi ^2(t)\,{\mathrm d}t\right) ^{1/2} . \end{aligned}$$
(3.22)

Proof

By (3.18) we split the approximation error

$$\begin{aligned} f\big (t + \tfrac{k}{L}\big ) - (R_{\varphi ,m} f)\big (t + \tfrac{k}{L}\big ) = e_1\big (t + \tfrac{k}{L}\big ) + e_{2,k}(t), \quad t \in \big [0,\, \tfrac{1}{L}\big ], \end{aligned}$$

on each interval \(\big [\frac{k}{L},\,\frac{k+1}{L}\big ]\) with \(k \in {\mathbb {Z}}\) into the regularization error

$$\begin{aligned} e_1\big (t + \tfrac{k}{L}\big )&:=f\big (t + \tfrac{k}{L}\big ) - \sum _{\ell \in {\mathbb {Z}}} f\big (\tfrac{\ell + k}{L}\big )\,\psi \big (t - \tfrac{\ell }{L}\big ) , \quad t \in \big [0,\, \tfrac{1}{L}\big ] , \end{aligned}$$
(3.23)

where \(\psi \) denotes the regularized \(\mathrm {sinc}\) function (3.17), and the truncation error

$$\begin{aligned} e_{2,k}(t)&:=\sum _{\ell \in {\mathbb {Z}}} f\big (\tfrac{\ell + k}{L}\big ) \psi \big (t - \tfrac{\ell }{L}\big ) - (R_{\varphi ,m} f)(t) , \quad t \in \big [0,\,\tfrac{1}{L}\big ] . \end{aligned}$$
(3.24)

Initially, we only consider the error on the interval \(\big [0,\,\frac{1}{L}\big ]\). We start with the regularization error (3.23). By Lemma 2.2, the Fourier transform of \(\psi \) reads as

$$\begin{aligned} {{\hat{\psi }}}(v) = \frac{1}{L} \int _{\mathbb {R}}{{\mathbf {1}}}_{[-L/2,\,L/2]}(v-u) \, {{\hat{\varphi }}}(u) \,{\mathrm d}u = \frac{1}{L} \int _{v-L/2}^{v+L/2} {{\hat{\varphi }}}(u) \,{\mathrm d}u. \end{aligned}$$

Hence, using the shifting property of the Fourier transform, the Fourier transform of \(\psi \big (\cdot \, - \frac{\ell }{L}\big )\) reads as

$$\begin{aligned} \frac{1}{L}\,{\mathrm e}^{-{2 \pi \mathrm i} v \ell /L}\,\int _{v - L/2}^{v + L/2} {\hat{\varphi }}(u)\,{\mathrm d}u. \end{aligned}$$

Therefore, the Fourier transform of the regularization error \(e_1\) has the form

$$\begin{aligned} {{\hat{e}}}_1(v) = {{\hat{f}}}(v) - \left( \sum _{\ell \in {\mathbb {Z}}} f\big (\tfrac{\ell }{L}\big )\,\frac{1}{L}\,{\mathrm e}^{-2 \pi {\mathrm i} v \ell /L}\right) \,\int _{v - L/2}^{v + L/2} {{\hat{\varphi }}}(u)\,{\mathrm d}u. \end{aligned}$$
(3.25)

By the assumption \(f \in {{\mathcal {B}}}_{\delta }({\mathbb {R}})\) with \(\delta \in (0,\,N/2)\) and \(L\ge N\), it holds \( \mathrm {supp}\,{\hat{f}} \subseteq [- \delta ,\, \delta ] \subset [- L/2,\,L/2 ] \) and hence the restricted function \({\hat{f}}|_{[-L/2,\,L/2]}\) belongs to \(L^2([-L/2,\,L/2])\). Thus, this function possesses the L-periodic Fourier expansion

$$\begin{aligned} {{\hat{f}}}(v) = \sum _{k \in {\mathbb {Z}}} c_k({\hat{f}})\, {\mathrm e}^{-2 \pi {\mathrm i} k v/L}, \quad v \in [-L/2, \,L/2], \end{aligned}$$

with the Fourier coefficients

$$\begin{aligned} c_k({\hat{f}}) = \frac{1}{L}\,\int _{-L/2}^{L/2} {{\hat{f}}}(u)\,{\mathrm e}^{2\pi {\mathrm i} k u/L}\, {\mathrm d}u = \frac{1}{L}\,f\big (\tfrac{k}{L}\big ) , \quad k \in {\mathbb {Z}}. \end{aligned}$$

In other words, \({{\hat{f}}}\) can be represented as

$$\begin{aligned} {{\hat{f}}}(v) = {{\hat{f}}}(v)\,{{\mathbf {1}}}_{[-\delta ,\,\delta ]}(v) = \Big (\sum _{k \in {\mathbb {Z}}} \tfrac{1}{L}\,f\big (\tfrac{k}{L}\big )\,{\mathrm e}^{-2 \pi {\mathrm i}k v/L}\Big ) \,{{\mathbf {1}}}_{[-\delta ,\,\delta ]}(v) , \quad v \in {\mathbb {R}}. \end{aligned}$$
(3.26)

Introducing the auxiliary function

$$\begin{aligned} \eta (v) :={{\mathbf {1}}}_{[-\delta ,\,\delta ]}(v) - \int _{v - L/2}^{v + L/2} {{\hat{\varphi }}}(u)\,{\mathrm d}u , \quad v \in {\mathbb {R}}, \end{aligned}$$
(3.27)

we see by inserting (3.26) into (3.25) that \({{\hat{e}}}_1(v) = {{\hat{f}}}(v)\, \eta (v)\) and thereby \(|{{\hat{e}}}_1(v)| \le |{{\hat{f}}}(v)|\,|\eta (v)|\). Thus, by inverse Fourier transform (2.4) we get

$$\begin{aligned} |e_1(t)| \le \int _{{\mathbb {R}}} |{{\hat{e}}}_1(v)|\,{\mathrm d}v \le \int _{-\delta }^{\delta } |{{\hat{f}}}(v)|\,|\eta (v)|\,{\mathrm d}v \le \max _{v \in [-\delta ,\delta ]} \,|\eta (v)| \,\int _{-\delta }^{\delta } |{{\hat{f}}}(v)|\,{\mathrm d}v . \end{aligned}$$

Using the Cauchy–Schwarz inequality and the Parseval identity, we see that

$$\begin{aligned} \int _{-\delta }^{\delta } |{{\hat{f}}}(v)|\,{\mathrm d}v&\le \left( \int _{-\delta }^{\delta } 1^2\,{\mathrm d}v\right) ^{1/2} \left( \int _{-\delta }^{\delta } |{{\hat{f}}}(v)|^2\,{\mathrm d}v\right) ^{1/2} = \sqrt{2 \delta }\,\Vert {{\hat{f}}}\Vert _{L^2({\mathbb {R}})} \\&= \sqrt{2 \delta }\,\Vert f\Vert _{L^2({\mathbb {R}})}. \end{aligned}$$

In summary, using the error constant (3.21), this yields \( \Vert e_1 \Vert _{C_0({\mathbb {R}})} \le E_1(m,\delta ,L)\,\Vert f\Vert _{L^2(\mathbb R)} . \)

Now we estimate the truncation error. By (3.24) it holds for \(t \in \big (0,\,\frac{1}{L}\big )\) that

$$\begin{aligned} e_{2,0}(t)&= \sum _{\ell \in {\mathbb {Z}}} f\big (\tfrac{\ell }{L}\big )\, \psi \big (t - \tfrac{\ell }{L}\big ) \,\big [1- {{\mathbf {1}}}_{[-m/L,m/L]}\big (t - \tfrac{\ell }{L}\big )\big ] = \sum _{\ell \in {\mathbb {Z}} \setminus {{\mathcal {J}}_m}} f\big (\tfrac{\ell }{L}\big )\, \psi \big (t - \tfrac{\ell }{L}\big ) . \end{aligned}$$

Using (3.17) and the non-negativity of \(\varphi \), we obtain

$$\begin{aligned} |e_{2,0}(t)| \le \sum _{\ell \in {\mathbb {Z}} \setminus {{\mathcal {J}}_m}} \big |f\big (\tfrac{\ell }{L}\big )\big |\, \big |\mathrm {sinc}\big (L \pi \,\big (t - \tfrac{\ell }{L}\big )\big )\big | \,\varphi \big (t - \tfrac{\ell }{L}\big ). \end{aligned}$$

For \(t \in \big (0,\, \frac{1}{L}\big )\) and \(\ell \in {{\mathbb {Z}}} \setminus {{\mathcal {J}}_m}\) we obtain

$$\begin{aligned} \big |\mathrm {sinc}\big (L \pi \,\big (t - \tfrac{\ell }{L}\big )\big )\big | \le \frac{1}{\pi \,|L t - \ell |} \le \frac{1}{\pi m} \end{aligned}$$

and hence

$$\begin{aligned} |e_{2,0}(t)| \le \frac{1}{\pi m}\, \sum _{\ell \in {\mathbb {Z}} \setminus {{\mathcal {J}}_m}} \big |f\big (\tfrac{\ell }{L}\big )\big |\, \varphi \big (t - \tfrac{\ell }{L}\big ). \end{aligned}$$

Then the Cauchy–Schwarz inequality implies that

$$\begin{aligned} |e_{2,0}(t)| \le \frac{1}{\pi m}\, \bigg ( \sum _{\ell \in {\mathbb {Z}} \setminus {{\mathcal {J}}_m}} \big |f\big (\tfrac{\ell }{L}\big )\big |^2 \bigg )^{1/2}\bigg (\sum _{\ell \in {\mathbb {Z}} \setminus {{\mathcal {J}}_m}} \varphi ^2\big (t - \tfrac{\ell }{L}\big ) \bigg )^{1/2}. \end{aligned}$$

By (3.8) it holds

$$\begin{aligned} \bigg ( \sum _{\ell \in {\mathbb {Z}} \setminus {{\mathcal {J}}_m}} \big |f\big (\tfrac{\ell }{L}\big )\big |^2 \bigg )^{1/2} \le \sqrt{L}\, \Vert f \Vert _{L^2({\mathbb {R}})}. \end{aligned}$$

Since \(\varphi |_{[0,\,\infty )}\) is monotonically non-increasing by assumption \(\varphi \in \Phi _{m,L}\), we can estimate the series for \(t \in \big (0,\, \frac{1}{L}\big )\) by

$$\begin{aligned} \sum _{\ell \in {\mathbb {Z}} \setminus {{\mathcal {J}}_m}} \varphi ^2\big (t - \tfrac{\ell }{L}\big )&= \Bigg (\sum _{\ell =-\infty }^{-m} + \sum _{\ell =m+1}^{\infty }\Bigg )\, \varphi ^2\big (t - \tfrac{\ell }{L}\big ) = \sum _{\ell =m}^{\infty } \,\varphi ^2\big (t + \tfrac{\ell }{L}\big ) \\&\quad + \sum _{\ell =m+1}^{\infty } \,\varphi ^2\big (t - \tfrac{\ell }{L}\big ) \\&\le \sum _{\ell =m}^{\infty } \,\varphi ^2\big (\tfrac{\ell }{L}\big ) + \sum _{\ell =m+1}^{\infty } \,\varphi ^2\big (\tfrac{1}{L} - \tfrac{\ell }{L}\big ) = 2 \sum _{\ell =m}^{\infty } \,\varphi ^2\big (\tfrac{\ell }{L}\big ) . \end{aligned}$$

Using the integral test for convergence of series, we obtain that

$$\begin{aligned} \sum _{\ell =m}^{\infty } \,\varphi ^2\big (\tfrac{\ell }{L}\big )&= \varphi ^2\big (\tfrac{m}{L}\big ) + \sum _{\ell =m+1}^{\infty } \,\varphi ^2\big (\tfrac{\ell }{L}\big ) < \varphi ^2\big (\tfrac{m}{L}\big ) + \int _{m}^{\infty } \varphi ^2\big (\tfrac{t}{L}\big )\,{\mathrm d}t = \varphi ^2\big (\tfrac{m}{L}\big ) \\&\quad + L \, \int _{m/L}^{\infty } \varphi ^2(t)\,{\mathrm d}t . \end{aligned}$$

By (3.16) it holds \(e_{2,0}(0) = e_{2,0}\big (\frac{1}{L}\big ) = 0\). Hence, we obtain by (3.22) that

$$\begin{aligned} \max _{t \in [0,1/L]} |e_{2,0}(t)|&\le \frac{\sqrt{L}}{\pi m}\, \Vert f \Vert _{L^2({\mathbb {R}})} \left( 2\varphi ^2\big (\tfrac{m}{L}\big ) + 2L\int _{m/L}^{\infty } \varphi ^2(t)\,{\mathrm d}t\right) ^{1/2}\\&= E_2(m,\delta ,L) \, \Vert f \Vert _{L^2({\mathbb {R}})} . \end{aligned}$$

By the same technique, this error estimate can be shown for each interval \(\big [\frac{k}{L},\,\frac{k+1}{L}\big ]\) with \(k \in \mathbb Z\). This completes the proof. \(\square \)

Remark 3.3

Theorem 3.2 can be simplified, if the window function \(\varphi \in \Phi _{m,L}\) is continuous on \({\mathbb {R}}\) and vanishes on \({\mathbb {R}} \setminus \big [- \frac{m}{L},\, \frac{m}{L}\big ]\). Then the truncation errors \(e_{2,k}(t)\) vanish for all \(t \in \big (0,\,\frac{1}{L}\big )\) and \(k \in {\mathbb {Z}}\), such that \(E_2(m,\delta ,L) = 0\). Thereby, we obtain the simple error estimate

$$\begin{aligned} \Vert f - R_{\varphi ,m}f \Vert _{C_0({\mathbb {R}})} \le E_1(m,\delta ,L)\,\Vert f\Vert _{L^2({\mathbb {R}})}. \end{aligned}$$

We remark that this is the case for the \(\mathrm B\)-spline (3.3) as well as the \(\sinh \)-type window function (3.4), but not for the Gaussian window function (3.2) since \(\varphi _{\mathrm {Gauss}}\) does not vanish on \({\mathbb {R}} \setminus \big [- \frac{m}{L},\, \frac{m}{L}\big ]\). Also the rectangular window function (3.1) does not fit into this setting since \(\varphi _{\mathrm {rect}}\) is not continuous on \({\mathbb {R}}\).

If the samples \(f\big (\frac{\ell }{L}\big )\), \(\ell \in {\mathbb {Z}}\), of a bandlimited function \(f\in {{\mathcal {B}}}_{\delta }({\mathbb {R}})\) are not known exactly, i.e., only erroneous samples \({{\tilde{f}}}_{\ell } :=f\big (\frac{\ell }{L}\big ) + \varepsilon _{\ell }\) with \(|\varepsilon _{\ell }| \le \varepsilon \) for all \(\ell \in {\mathbb {Z}}\) and some \(\varepsilon >0\) are known, the corresponding Shannon sampling series may differ appreciably from f (see [4]). Here we denote the regularized Shannon sampling formula with erroneous samples \({{\tilde{f}}}_{\ell }\) by

$$\begin{aligned} (R_{\varphi ,m}{{\tilde{f}}})(t) = \sum _{\ell \in {\mathbb {Z}}} \,\tilde{f}_\ell \ \mathrm {sinc}\big (L \pi \,\big (t-\tfrac{\ell }{L}\big )\big ) \, \varphi _m\big (t - \tfrac{\ell }{L}\big ), \quad t \in {\mathbb {R}}. \end{aligned}$$
(3.28)

In contrast to the classical Shannon sampling series, the regularized Shannon sampling formula is numerically robust, i.e., the uniform perturbation error

$$\begin{aligned} \Vert R_{\varphi ,m}{{\tilde{f}}} - R_{\varphi ,m}f \Vert _{C_0({\mathbb {R}})} :=\max _{t\in {\mathbb {R}}} \big | (R_{\varphi ,m}{{\tilde{f}}})(t) - (R_{\varphi ,m}f)(t) \big | \end{aligned}$$
(3.29)

is small, as shown in the following.

Theorem 3.4

Let \(f\in {{\mathcal {B}}}_{\delta }(\mathbb R)\) with \(\delta = \tau N\), \(\tau \in (0,\,1/2)\), \(N \in {\mathbb {N}}\), \(L= N(1+\lambda )\) with \(\lambda \ge 0\) and \(m \in {\mathbb N}\setminus \{1\}\) be given. Further let \(\varphi \in \Phi _{m,L}\) with the truncated window function (3.5) as well as \({{\tilde{f}}}_{\ell } = f(\ell /L) + \varepsilon _{\ell }\), where \(|\varepsilon _{\ell }| \le \varepsilon \) for all \(\ell \in {\mathbb {Z}}\) and some \(\varepsilon >0\), be given.

Then the regularized Shannon sampling sum (3.15) with localized sampling satisfies

$$\begin{aligned} \Vert R_{\varphi ,m}{{\tilde{f}}} - R_{\varphi ,m}f \Vert _{C_0({\mathbb {R}})}\le & {} \varepsilon \, \big ( 2+L\,{\hat{\varphi }}(0) \big ) , \end{aligned}$$
(3.30)
$$\begin{aligned} \Vert f - R_{\varphi ,m}{{\tilde{f}}} \Vert _{C_0({\mathbb {R}})}\le & {} \Vert f - R_{\varphi ,m}{f}\Vert _{C_0({\mathbb {R}})} + \varepsilon \, \big ( 2+L\,{\hat{\varphi }}(0) \big ) . \end{aligned}$$
(3.31)

Proof

On each open interval \(\big (\frac{k}{L},\,\frac{k+1}{L}\big )\) with \(k \in {\mathbb {Z}}\), using (3.18), we write the error as

$$\begin{aligned} {\tilde{e}}_{k}(t) :=(R_{\varphi ,m} {{\tilde{f}}})\big (t + \tfrac{k}{L}\big ) - (R_{\varphi ,m} f)\big (t + \tfrac{k}{L}\big ) =\sum _{\ell \in {{\mathcal {J}}_m}} \varepsilon _{\ell + k}\,\psi \big (t - \tfrac{\ell }{L}\big ) , \quad t \in \big (0,\, \tfrac{1}{L}\big ). \end{aligned}$$

Initially, we consider the interval \(\big [0,\,\frac{1}{L}\big ]\). Using (3.17), the non-negativity of \(\varphi \) and \(|\varepsilon _{\ell }| \le \varepsilon \), we obtain

$$\begin{aligned} |{\tilde{e}}_0(t)|&\le \sum _{\ell \in {{\mathcal {J}}_m}} \left| \varepsilon _{\ell }\right| \, \big |\mathrm {sinc}\big (L \pi \,\big (t - \tfrac{\ell }{L}\big )\big )\big | \,\varphi \big (t - \tfrac{\ell }{L}\big ) \le \varepsilon \sum _{\ell \in {{\mathcal {J}}_m}} \,\varphi \big (t - \tfrac{\ell }{L}\big ) . \end{aligned}$$

Since \(\varphi |_{[0,\,\infty )}\) is monotonically non-increasing by assumption \(\varphi \in \Phi _{m,L}\), we can estimate the sum for \(t \in \big (0,\, \frac{1}{L}\big )\) by

$$\begin{aligned} \sum _{\ell \in {{\mathcal {J}}_m}} \varphi \big (t - \tfrac{\ell }{L}\big )&= \Big (\sum _{\ell =-m+1}^{0} + \sum _{\ell =1}^{m}\Big )\, \varphi \big (t - \tfrac{\ell }{L}\big ) = \sum _{\ell =0}^{m-1} \,\varphi \big (t + \tfrac{\ell }{L}\big ) + \sum _{\ell =1}^{m} \,\varphi \big (t - \tfrac{\ell }{L}\big ) \\&\le \sum _{\ell =0}^{m-1} \,\varphi \big (\tfrac{\ell }{L}\big ) + \sum _{\ell =1}^{m} \,\varphi \big (\tfrac{1}{L} - \tfrac{\ell }{L}\big ) = 2 \sum _{\ell =0}^{m-1} \,\varphi \big (\tfrac{\ell }{L}\big ) . \end{aligned}$$

Using the integral test for convergence of series, we obtain that

$$\begin{aligned} \sum _{\ell =0}^{m-1} \,\varphi \big (\tfrac{\ell }{L}\big ) < \varphi (0) + \int _{0}^{m-1} \varphi \big (\tfrac{t}{L}\big )\,{\mathrm d}t = \varphi (0) + L \, \int _{0}^{(m-1)/L} \varphi (t){\mathrm d}t . \end{aligned}$$

By the definition of the Fourier transform (2.1) it holds for \(\varphi \in \Phi _{m,L}\) that

$$\begin{aligned} {{\hat{\varphi }}}(0) = \int _{{\mathbb {R}}} \varphi (t)\, {\mathrm d}t \ge \int _{-m/L}^{m/L} \varphi (t)\, {\mathrm d}t = 2\int _{0}^{m/L} \varphi (t)\, {\mathrm d}t \ge 2\int _{0}^{(m-1)/L} \varphi (t)\, {\mathrm d}t , \end{aligned}$$

and therefore

$$\begin{aligned} |{\tilde{e}}_0(t)|&\le 2\,\varepsilon \sum _{\ell =0}^{m-1} \,\varphi \big (\tfrac{\ell }{L}\big ) \le 2\,\varepsilon \big ( \varphi (0) + \tfrac{L}{2} \, {{\hat{\varphi }}}(0) \big ) = \varepsilon \big ( 2\,\varphi (0) + L \, {{\hat{\varphi }}}(0) \big ) , \quad t \in \big (0, \tfrac{1}{L}\big ) . \end{aligned}$$

Additionally, by the interpolation property (3.16) it holds \(|{\tilde{e}}_0(0)| = |\varepsilon _0| \le \varepsilon \) as well as \(\left| {\tilde{e}}_0\big (\frac{1}{L}\big )\right| = |\varepsilon _1| \le \varepsilon \). By \(\varphi \in \Phi _{m,L}\) we have \(\varphi (0)=1\) and therefore we obtain that

$$\begin{aligned} \max _{t \in [0,1/L]} |{\tilde{e}}_0(t)| \le \varepsilon \big ( 2 + L \, {{\hat{\varphi }}}(0) \big ) . \end{aligned}$$

By the same technique, this error estimate can be shown for each interval \(\big [\frac{k}{L},\,\frac{k+1}{L}\big ]\) with \(k \in \mathbb Z\), which proves (3.30). Then the triangle inequality yields (3.31), which completes the proof. \(\square \)

It now remains to estimate the error constants \(E_j(m,\delta ,L)\), \(j=1,2,\) for concrete window functions, which is done for several selected ones in the following sections.

4 Gaussian regularized Shannon sampling formula

Firstly, we consider the Gaussian window function (3.2) with some \(\sigma >0\) and show that in this case the uniform approximation error (3.19) for the regularized Shannon sampling formula (3.15) decays exponentially with respect to m.

Theorem 4.1

Let \(f\in {{\mathcal {B}}}_{\delta }(\mathbb R)\) with \(\delta = \tau N\), \(\tau \in (0,\,1/2)\), \(N \in {\mathbb {N}}\), \(L= N(1+\lambda )\) with \(\lambda \ge 0\), and \(m \in {\mathbb N}\setminus \{1\}\) be given.

Then the regularized Shannon sampling formula (3.15) with the Gaussian window function (3.2) and \(\sigma = \sqrt{\frac{m}{\pi L\,(L - 2\delta )}}\) satisfies the error estimate

$$\begin{aligned} \Vert f - R_{\mathrm {Gauss},m}f \Vert _{C_0({\mathbb {R}})} \le \frac{2\sqrt{\pi \delta L} + L(m+1)/\sqrt{m}}{\pi \,\sqrt{m\pi (L- 2 \delta )}}\,{\mathrm e}^{-\pi m (L/2 - \delta )/L}\,\Vert f\Vert _{L^2({\mathbb {R}})}. \end{aligned}$$
(4.1)

Proof

(cf. [6, 13]) By Theorem 3.2 we have to compute the error constants \(E_j(m,\delta ,L)\), \(j=1,2,\) for the Gaussian window function (3.2). First we study the regularization error constant (3.21). The function (3.2) possesses the Fourier transform

$$\begin{aligned} {{\hat{\varphi }}_{\mathrm {Gauss}}}(v)&= \sqrt{2\pi }\,\sigma \,{\mathrm e}^{- 2\pi ^2 \sigma ^2 v^2}, \quad v \in {\mathbb {R}}. \end{aligned}$$
(4.2)

Thus, we obtain by the substitution \(w=\sqrt{2} \pi \sigma u\) that the auxiliary function (3.27) is given by

$$\begin{aligned} \eta (v) = {{\mathbf {1}}}_{[-\delta ,\,\delta ]}(v) - \frac{1}{\sqrt{\pi }}\,\int _{\sqrt{2} \pi \sigma (v - L/2)}^{\sqrt{2} \pi \sigma (v + L/2)}\,{\mathrm e}^{-w^2}\,{\mathrm d}w . \end{aligned}$$

For \(v \in [- \delta ,\, \delta ]\), the function \(\eta \) can be evaluated as

$$\begin{aligned} \eta (v)= & {} \frac{1}{\sqrt{\pi }}\,\Bigg [ \int _{{\mathbb {R}}} {\mathrm e}^{-w^2}\,{\mathrm d}w - \int _{\sqrt{2} \pi \sigma (v - L/2)}^{\sqrt{2} \pi \sigma (v + L/2)}\,{\mathrm e}^{-w^2}\, {\mathrm d}w\Bigg ]\\= & {} \frac{1}{\sqrt{\pi }}\,\Bigg [\int _{-\infty }^{\sqrt{2} \pi \sigma (v - L/2)}\,{\mathrm e}^{-w^2}\,{\mathrm d}w + \int _{\sqrt{2} \pi \sigma (v + L/2)}^{\infty }\, {\mathrm e}^{-w^2}\,{\mathrm d}w\Bigg ]\\= & {} \frac{1}{\sqrt{\pi }}\,\Bigg [\int _{\sqrt{2} \pi \sigma (L/2 - v)}^{\infty }\,{\mathrm e}^{-w^2}\,{\mathrm d}w + \int _{\sqrt{2} \pi \sigma (v + L/2)}^{\infty }\, {\mathrm e}^{-w^2}\,{\mathrm d}w\Bigg ]. \end{aligned}$$

By [1, p. 298, Formula 7.1.13], for \(x \ge 0\) we have the inequality

$$\begin{aligned} \frac{1}{x + \sqrt{x^2 + 2}}\,{\mathrm e}^{-x^2} \le \int _x^{\infty } {\mathrm e}^{-w^2}\,{\mathrm d}w \le \frac{1}{x + \sqrt{x^2 + 4/\pi }}\,{\mathrm e}^{-x^2}, \end{aligned}$$

which can be simplified to

$$\begin{aligned} \int _x^{\infty } {\mathrm e}^{-w^2}\,{\mathrm d}w \le \frac{1}{2x}\,{\mathrm e}^{-x^2}, \quad x > 0. \end{aligned}$$
(4.3)

Therefore, the auxiliary function \(\eta (v)\) can be estimated by

$$\begin{aligned} \eta (v) < \frac{1}{\sqrt{\pi }}\,\left( \frac{{\mathrm e}^{-2 \pi ^2 \sigma ^2 (L/2 - v)^2}}{2 \,\sqrt{2}\pi \sigma (L/2 - v)} + \frac{{\mathrm e}^{-2 \pi ^2 \sigma ^2 (L/2 + v)^2}}{2 \,\sqrt{2} \pi \sigma (L/2 + v)}\right) . \end{aligned}$$

Since the function \(\frac{1}{x}\,{\mathrm e}^{-2 \pi ^2 \sigma ^2 x^2}\) is decreasing for \(x>0\), and \(L/2 - v\), \(L/2 + v \in \) \([L/2 - \delta , \, L/2 + \delta ]\) by \(v \in [- \delta ,\, \delta ]\) with \(0< \delta <L/2\), we conclude that

$$\begin{aligned} \eta (v) < \frac{{\mathrm e}^{-2 \pi ^2 \sigma ^2 (L/2 - v)^2}}{\sqrt{2 \pi }\,\pi \sigma (L/2 - v)} \le \frac{{\mathrm e}^{-2 \pi ^2 \sigma ^2 (L/2 - \delta )^2}}{\sqrt{2 \pi }\,\pi \sigma (L/2 - \delta )}. \end{aligned}$$

Hence, by (3.21) and (3.27) we obtain

$$\begin{aligned} E_{1}(m,\delta ,L)&\le \frac{\sqrt{\delta }}{\sqrt{\pi }\,\pi \sigma (L/2 - \delta )}\,{\mathrm e}^{-2 \pi ^2\sigma ^2 (L/2 - \delta )^2} . \end{aligned}$$
(4.4)

Now we examine the truncation error constant (3.22). Here it holds

$$\begin{aligned} \varphi _{\mathrm {Gauss}}^2\big (\tfrac{m}{L}\big ) + L\int _{m/L}^{\infty } \varphi _{\mathrm {Gauss}}^2(t)\,{\mathrm d}t = \mathrm e^{-m^2/(L^2\sigma ^2)} + L\sigma \int _{m/(L\sigma )}^{\infty } \mathrm e^{-t^2} \,\mathrm dt. \end{aligned}$$

From (4.3) it follows

$$\begin{aligned} \mathrm e^{-m^2/(L^2\sigma ^2)} + L\sigma \int _{m/(L\sigma )}^{\infty } \mathrm e^{-t^2} \,\mathrm dt \le \frac{2m+L^2\sigma ^2}{2m} \,\mathrm e^{-m^2/(L^2\sigma ^2)} . \end{aligned}$$

Thus, by (3.22) we obtain

$$\begin{aligned} E_2(m,\delta ,L)&\le \frac{\sqrt{2L}}{\pi m} \,\sqrt{\frac{2m+L^2\sigma ^2}{2m}} \,\mathrm e^{-m^2/(2L^2\sigma ^2)} . \end{aligned}$$
(4.5)

For the special parameter \(\sigma = \sqrt{\frac{m}{\pi L\,(L - 2\delta )}}\), both error terms (4.4) and (4.5) have the same exponential decay, since then \(2 \pi ^2 \sigma ^2 (L/2 - \delta )^2 = \frac{m^2}{2L^2\sigma ^2} = \frac{\pi m (L/2 - \delta )}{L}\), such that

$$\begin{aligned} E_1(m,\delta ,L)&\le \frac{2}{\pi }\,\sqrt{\frac{\delta L}{m\, (L - 2\delta )}}\,{\mathrm e}^{-\pi m (L/2 - \delta )/L} , \\ E_2(m,\delta ,L)&\le \frac{1}{\pi }\,\sqrt{\frac{L\,(2\pi (L-2\delta )+L)/m}{{m\pi (L-2\delta )}}} \,\mathrm e^{-\pi m (L/2 - \delta )/L} . \end{aligned}$$

For \(\delta \in (0,N/2)\) and \(m\in {\mathbb {N}}\setminus \{1\}\) it additionally holds

$$\begin{aligned} \sqrt{2\pi (L-2\delta )+L} \le \sqrt{L}\,\sqrt{2\pi +1} \le \sqrt{L}\, (m+1) . \end{aligned}$$

This completes the proof. \(\square \)

We remark that Theorem 4.1 improves the corresponding results in [6, 13], since the decay rate in the exponent is improved from \(m-1\) to m.

Example 4.2

We aim to visualize the error bound from Theorem 4.1. For a given function \(f\in {{\mathcal {B}}}_{\delta }({\mathbb {R}})\) with \(\delta =\tau N \in (0,\,N/2)\) and \(L= N(1+\lambda )\), where \(0<\tau <\frac{1}{2}\) and \(\lambda \ge 0\), we consider the approximation error

$$\begin{aligned} e_{m,\tau ,\lambda }(f) :=\max _{t\in [-1,\,1]}| f(t) - (R_{\varphi ,m} f)(t) | . \end{aligned}$$
(4.6)

For \(\varphi = \varphi _{\mathrm {Gauss}}\), (4.1) yields \(e_{m,\tau ,\lambda }(f) \le E_{m,\tau ,\lambda }\,\Vert f\Vert _{L^2({\mathbb {R}})}\), where

$$\begin{aligned} E_1(m,\delta ,L) + E_2(m,\delta ,L) \le E_{m,\tau ,\lambda } :=\frac{2\sqrt{\pi \delta L} + L(m+1)/\sqrt{m}}{\pi \,\sqrt{m\pi (L- 2 \delta )}}\,{\mathrm e}^{-\pi m (L/2 - \delta )/L} \end{aligned}$$
(4.7)

with \(\sigma = \sqrt{\frac{m}{\pi L\,(L - 2\delta )}}\). We approximate the error (4.6) by evaluating a given function f and its approximation \(R_{\varphi ,m} f\) at \(S=10^5\) equidistant points \(t_s\in [-1,\,1]\), \(s=1,\dots ,S\). By the definition of the regularized Shannon sampling formula in (3.15) it can be seen that for \(t \in [-1, 1]\) we have

$$\begin{aligned} (R_{\varphi ,m} f)(t) = \sum _{\ell = -L-m}^{L+m} f(\tfrac{\ell }{L}) \,\psi (t - \tfrac{\ell }{L}) . \end{aligned}$$

Here we study the function \(f(t) = \sqrt{2 \delta } \,\mathrm {sinc}(2 \delta \pi t)\), \(t \in {\mathbb {R}}\), so that \(\Vert f\Vert _{L^2({\mathbb {R}})}=1\). We fix \(N=128\) and consider the error behavior for different values \(m \in {\mathbb {N}} \setminus \{1\}\), i.e., we are still free to choose the parameters \(\tau \) and \(\lambda \). In a first experiment we fix \(\lambda =1\) and choose different values \(\tau <\frac{1}{2}\), namely \(\tau \in \{1/20,\,1/10,\,1/4,\,1/3,\,9/20\}\). The corresponding results are depicted in Fig. 1a. We recognize that the smaller the factor \(\tau \), the smaller the error. As a second experiment we fix \(\tau =\frac{1}{3}\), but now choose different \(\lambda \in \{0,0.5,1,2\}\). The associated results are displayed in Fig. 1b. It can clearly be seen that the larger the oversampling parameter \(\lambda \), the smaller the error. We remark that for larger choices of N, the line plots in Fig. 1 would only be shifted slightly upwards, such that almost the same error results are obtained for all N.

Fig. 1 Maximum approximation error (4.6) and error constant (4.7) using \(\varphi _{\mathrm {Gauss}}\) in (3.2) and \(\sigma = \sqrt{\frac{m}{\pi L\,(L - 2\delta )}}\) for the function \(f(x) = \sqrt{2\delta } \,\mathrm {sinc}(2 \delta \pi x)\) with \(N=128\), \(m\in \{2, 3, \ldots , 10\}\), as well as \(\tau \in \{1/20,\,1/10,\,1/4,\,1/3,\,9/20\}\), \(\delta = \tau N\), and \(\lambda \in \{0,0.5,1,2\}\), respectively
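
A condensed sketch of this experiment might look as follows (our illustration; for brevity we use a coarser evaluation grid than \(S=10^5\)); it prints the measured error (4.6) next to the error constant (4.7) for \(\tau = 1/3\) and \(\lambda = 1\).

```python
import numpy as np

# Sketch of Example 4.2: measured error (4.6) versus error constant (4.7)
# for the Gaussian window (3.2) with tau = 1/3 and lambda = 1.
N, tau, lam = 128, 1/3, 1.0
L, delta = N * (1 + lam), 1/3 * N
f = lambda u: np.sqrt(2 * delta) * np.sinc(2 * delta * u)   # ||f||_{L^2} = 1
t = np.linspace(-1.0, 1.0, 2001)                            # coarser than S = 10^5
ells = np.arange(-int(L) - 10, int(L) + 11)                 # covers all m <= 10
D = t[:, None] - ells[None, :] / L                          # t_s - ell/L
sinc_mat = np.sinc(L * t[:, None] - ells[None, :])          # sinc(L pi (t_s - ell/L))
for m in range(2, 11):
    sigma = np.sqrt(m / (np.pi * L * (L - 2 * delta)))
    phi_m = np.exp(-D**2 / (2 * sigma**2)) * (np.abs(D) <= m / L)   # truncated (3.5)
    err = np.max(np.abs(f(t) - (f(ells / L) * sinc_mat * phi_m).sum(axis=1)))
    E = ((2 * np.sqrt(np.pi * delta * L) + L * (m + 1) / np.sqrt(m))
         / (np.pi * np.sqrt(m * np.pi * (L - 2 * delta)))
         * np.exp(-np.pi * m * (L / 2 - delta) / L))
    print(m, err, E)   # the measured error stays below the constant E
```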

Now we show that for the regularized Shannon sampling formula with the Gaussian window function (3.2) the uniform perturbation error (3.29) only grows as \(\sqrt{m}\). We remark that a similar result can also be found in [15].

Theorem 4.3

Let \(f\in {\mathcal B}_{\delta }({\mathbb {R}})\) with \(\delta = \tau N\), \(\tau \in (0,\,1/2)\), \(N \in {\mathbb {N}}\), \(L= N(1+\lambda )\) with \(\lambda \ge 0\) and \(m \in {{\mathbb {N}}}\setminus \{1\}\) be given. Further let \(R_{\mathrm {Gauss},m}{{\tilde{f}}}\) be as in (3.28) with the noisy samples \({{\tilde{f}}}_{\ell } = f\big (\frac{\ell }{L}\big ) + \varepsilon _{\ell }\), where \(|\varepsilon _{\ell }| \le \varepsilon \) for all \(\ell \in {\mathbb {Z}}\) and \(\varepsilon >0\).

Then the regularized Shannon sampling formula (3.15) with the Gaussian window function (3.2) and \(\sigma = \sqrt{\frac{m}{\pi L\,(L - 2\delta )}}\) satisfies

$$\begin{aligned} \Vert R_{\mathrm {Gauss},m}{{\tilde{f}}} - R_{\mathrm {Gauss},m}f \Vert _{C_0({\mathbb {R}})} \le \varepsilon \left( 2 + \sqrt{\frac{2 + 2\lambda }{\lambda + 1 - 2 \tau }}\, \sqrt{m}\right) . \end{aligned}$$
(4.8)

Proof

By Theorem 3.4 we only have to compute \(\hat{\varphi }_{\mathrm {Gauss}}(0)\) for the Gaussian window function (3.2). By (4.2) we recognize that

$$\begin{aligned} \hat{\varphi }_{\mathrm {Gauss}}(0)&= \sqrt{2\pi }\,\sigma = \sqrt{\frac{2 m}{L\,(L - 2\delta )}} = \frac{1}{L}\,\sqrt{\frac{2 + 2\lambda }{\lambda + 1 - 2 \tau }}\, \sqrt{m} \end{aligned}$$

such that (3.30) yields the assertion. \(\square \)

Example 4.4

Now we visualize the error bound from Theorem 4.3. Similar to Example 4.2, we consider the perturbation error

$$\begin{aligned} {\tilde{e}}_{m,\tau ,\lambda }(f) :=\max _{t\in [-1,\,1]}| (R_{\varphi ,m} {\tilde{f}})(t) - (R_{\varphi ,m} f)(t) | . \end{aligned}$$
(4.9)

For \(\varphi = \varphi _{\mathrm {Gauss}}\), (4.8) yields \(\tilde{e}_{m,\tau ,\lambda }(f) \le {\tilde{E}}_{m,\tau ,\lambda }\), where

$$\begin{aligned} {\tilde{E}}_{m,\tau ,\lambda } :=\varepsilon \left( 2 + \sqrt{\frac{2 + 2\lambda }{\lambda + 1 - 2 \tau }}\, \sqrt{m}\right) . \end{aligned}$$
(4.10)

We conduct the same experiments as in Example 4.2 and introduce a maximum perturbation of \(\varepsilon =10^{-3}\) as well as uniformly distributed random numbers \(\varepsilon _{\ell }\) in \((-\varepsilon ,\varepsilon )\). Due to the randomness we perform the experiments one hundred times and then take the maximum error over all runs. The associated results are displayed in Fig. 2.

Fig. 2

Maximum perturbation error (4.9) over 100 runs and error constant (4.10) using \(\varphi _{\mathrm {Gauss}}\) in (3.2) and \(\sigma = \sqrt{\frac{m}{\pi L\,(L - 2\delta )}}\) for the function \(f(x) = \sqrt{2\delta } \,\mathrm {sinc}(2 \delta \pi x)\) with \(\varepsilon =10^{-3}\), \(N=128\), \(m\in \{2, 3, \ldots , 10\}\), as well as \(\tau \in \{\nicefrac {1}{20},\,\nicefrac {1}{10},\,\nicefrac {1}{4},\,\nicefrac {1}{3},\,\nicefrac {9}{20}\}\), \(\delta = \tau N\), and \(\lambda \in \{0,0.5,1,2\}\), respectively

5 B-spline regularized Shannon sampling formula

Now we consider the modified \(\mathrm B\)-spline window function (3.3) with \(s,\,m \in {\mathbb {N}} \setminus \{1\}\) and \(L= N\,(1 + \lambda )\), \(\lambda \ge 0\), where \(M_{2s}\) denotes the centered cardinal \(\mathrm B\)-spline of even order 2s. Note that (3.3) is supported on \(\big [-\frac{m}{L},\, \frac{m}{L}\big ]\).

Lemma 5.1

For \(s\in {\mathbb {N}}\), the value \(M_{2s}(0)\) satisfies

$$\begin{aligned} M_{2s}(0) = \frac{1}{(2s-1)!}\,\sum _{j=0}^{s-1} (-1)^j\,{2s \atopwithdelims ()j}\,(s-j)^{2s-1}. \end{aligned}$$
(5.1)

The sequence \(\big (\sqrt{2s}\,M_{2s}(0)\big )_{s=1}^{\infty }\) has the limit

$$\begin{aligned} \lim _{s \rightarrow \infty } \sqrt{2s}\,M_{2s}(0) = \sqrt{\frac{6}{\pi }} {\approx 1.3820}. \end{aligned}$$
(5.2)

Proof

By the inverse Fourier transform of \({{\hat{\varphi }}}_{\mathrm B}\) it holds

$$\begin{aligned} \varphi _{\mathrm B}(x) = \int _{{\mathbb {R}}} {{\hat{\varphi }}}_{\mathrm B}(v)\,{\mathrm e}^{2 \pi {\mathrm i}v x}\, {\mathrm d}v, \quad x \in {\mathbb {R}}. \end{aligned}$$

Hence, for \(x = 0\) it follows that

$$\begin{aligned} M_{2s}(0) = \int _{{\mathbb {R}}} \big ({\mathrm {sinc}} (\pi v)\big )^{2s}\,{\mathrm d}v = \frac{2}{\pi }\,\int _0^{\infty } \big ({\mathrm {sinc}} \,w\big )^{2s}\,{\mathrm d}w. \end{aligned}$$

The above integral can be determined in explicit form (see [7, 9, p. 20, 5.12] or [2, (4.1.12)]) as

$$\begin{aligned} \int _0^{\infty } \big ({\mathrm {sinc}} \,w\big )^{2s}\,{\mathrm d}w = \frac{\pi }{2\,(2s-1)!}\,\sum _{j=0}^{s-1} (-1)^j\,{2s \atopwithdelims ()j}\,(s-j)^{2s-1} \end{aligned}$$

such that (5.1) is shown. In particular, it holds \(M_2(0) = 1\), \(M_4(0) = \frac{2}{3}\), \(M_6(0) = \frac{11}{20}\), \(M_8(0) = \frac{151}{315}\), \(M_{10}(0) = \frac{15619}{36288}\), and \(M_{12}(0) = \frac{655177}{1663200}\). A table with the decimal values of \(M_{2s}(0)\) for \(s =15,\,\ldots ,\,50\) can be found in [7]. For example, it holds \(M_{100}(0) \approx 0.137990\).

By [20, (3.6)], there exists the pointwise limit

$$\begin{aligned} \lim _{s \rightarrow \infty } \sqrt{\frac{s}{6}}\, M_{2s}\left( \sqrt{\frac{s}{6}}\,x \right) = \frac{1}{\sqrt{2\pi }}\,{\mathrm e}^{-x^2/2} \end{aligned}$$

such that for \(x = 0\) we obtain (5.2). \(\square \)

Remark 5.2

By numerical computations we can see that the sequence \(\big (\sqrt{2s}\,M_{2s}(0)\big )_{s=2}^{50}\) increases monotonically, see Fig. 3. For large s we can use the asymptotic expansion

$$\begin{aligned} \sqrt{2s}\,M_{2s}(0) \approx \sqrt{\frac{6}{\pi }}\,\left[ 1 - \frac{3}{40\,s} - \frac{13}{4480\,s^2} + \frac{27}{25600\,s^3} + \frac{52791}{63078400\,s^4} + \frac{482427}{2129920000\,s^5}\right] \end{aligned}$$

(see [7]) such that the whole sequence \(\big (\sqrt{2s}\,M_{2s}(0)\big )_{s=2}^{\infty }\) increases monotonically. Hence, for \(s \in {{\mathbb {N}}}\setminus \{1\}\) it holds

$$\begin{aligned} \frac{4}{3} \le \sqrt{2s}\,M_{2s}(0) < \sqrt{\frac{6}{\pi }} , \quad s \in {\mathbb {N}} \setminus \{1\} . \end{aligned}$$
(5.3)
Fig. 3

The sequence \(\big (\sqrt{2s}\,M_{2s}(0)\big )_{s=2}^{50}\)
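
Both the special values of \(M_{2s}(0)\) listed in the proof of Lemma 5.1 and the monotone behavior shown in Fig. 3 can be re-checked with a few lines of code. The following sketch (assuming Python; the helper name `M0` is ours) evaluates (5.1) in exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial, sqrt, pi

def M0(s):
    """M_{2s}(0) via formula (5.1), computed in exact rational arithmetic."""
    return sum(Fraction((-1)**j * comb(2 * s, j) * (s - j)**(2 * s - 1),
                        factorial(2 * s - 1)) for j in range(s))

print([M0(s) for s in range(1, 7)])   # 1, 2/3, 11/20, 151/315, 15619/36288, 655177/1663200
print(float(M0(50)))                  # M_100(0) ~ 0.137990

# sqrt(2s) * M_{2s}(0) increases from 4/3 towards sqrt(6/pi), cf. (5.2) and (5.3)
seq = [sqrt(2 * s) * float(M0(s)) for s in range(2, 51)]
print(all(a < b for a, b in zip(seq, seq[1:])), seq[0], seq[-1], sqrt(6 / pi))
```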

Now we show that for the regularized Shannon sampling formula (3.15) with \(\mathrm B\)-spline window function (3.3) the uniform approximation error (3.19) decays exponentially with respect to m.

Theorem 5.3

Let \(f\in {{\mathcal {B}}}_{\delta }(\mathbb R)\) with \(\delta = \tau N\), \(\tau \in (0,\,1/2)\), \(N\in {\mathbb {N}}\), \(L = N\,(1 + \lambda )\), \(\lambda \ge 0\), and \(m \in {\mathbb {N}} \setminus \{1\}\) be given. Assume that

$$\begin{aligned} \frac{\tau }{1 + \lambda } < \frac{1}{2} - \frac{1}{\pi }. \end{aligned}$$
(5.4)

Then the regularized Shannon sampling formula (3.15) with the \(\mathrm B\)-spline window function (3.3) and \(s = \left\lceil \frac{m+1}{2} \right\rceil \) satisfies the error estimate

$$\begin{aligned} \Vert f - R_{{\mathrm B},m} f \Vert _{C_0({\mathbb {R}})} \le \frac{3\,\sqrt{\delta s}}{(2s-1)\,\pi }\, \mathrm e^{-m\,\left( \ln \left( \pi m\,(1 + \lambda - 2 \tau )\right) - \ln \left( 2s(1 + \lambda )\right) \right) } \, \Vert f \Vert _{L^2({\mathbb {R}})}. \end{aligned}$$
(5.5)

Proof

By Theorem 3.2 we only have to estimate the regularization error constant (3.21), since it holds \(\varphi _{\mathrm B}(x) = \varphi _{\mathrm B}(x)\,{\mathbf{1}}_{[-m/L,\,m/L]}(x)\) for all \(x \in {\mathbb {R}}\) and therefore the truncation error constant (3.22) vanishes for the \(\mathrm B\)-spline window function (3.3) by Remark 3.3.

By [10, 11] it holds

$$\begin{aligned} {{\hat{\varphi }}_{\mathrm {B}}}(v)&= \frac{m}{sL\, M_{2s}(0)}\,\left( \mathrm {sinc} \,\frac{\pi vm}{sL}\right) ^{2s}, \quad v \in {\mathbb {R}} , \end{aligned}$$
(5.6)

such that the auxiliary function (3.27) is given by

$$\begin{aligned} \eta (v) = {{\mathbf {1}}}_{[-\delta ,\, \delta ]}(v) - \frac{m}{sL\,M_{2s}(0)}\,\int _{v-L/2}^{v+L/2} \left( \mathrm {sinc} \,\frac{\pi um}{sL}\right) ^{2s}\,{\mathrm d}u , \quad v \in {\mathbb {R}} . \end{aligned}$$

By inverse Fourier transform (2.4) we have

$$\begin{aligned} 1 = \varphi _{\mathrm B}(0) = \int _{{\mathbb {R}}} {\hat{\varphi }}_{\mathrm B}(v) \,{\mathrm d}v = \frac{m}{sL\,M_{2s}(0)}\, \int _{{\mathbb {R}}} \left( \mathrm {sinc}\, \frac{\pi vm}{sL}\right) ^{2s} \,{\mathrm d}v . \end{aligned}$$
(5.7)

Then, for \(v \in [- \delta , \, \delta ]\), the function \(\eta \) can be rewritten by means of (5.7) in the following form

$$\begin{aligned} \eta (v)= & {} \frac{m}{sL\,M_{2s}(0)}\,\left[ \int _{{\mathbb {R}}} \left( \mathrm {sinc} \,\frac{\pi um}{sL}\right) ^{2s}\,{\mathrm d}u - \int _{v- L/2}^{v+L/2} \left( \mathrm {sinc} \,\frac{\pi um}{sL}\right) ^{2s}\,{\mathrm d}u\right] \\= & {} \frac{m}{sL\,M_{2s}(0)}\,\left[ \int _{L/2-v}^{\infty } \left( \mathrm {sinc} \,\frac{\pi um}{sL}\right) ^{2s}\,{\mathrm d}u + \int _{v + L/2}^{\infty } \big (\mathrm {sinc} \,\frac{\pi um}{sL}\big )^{2s}\,{\mathrm d}u\right] . \end{aligned}$$

Applying the simple estimates

$$\begin{aligned} \int _{L/2-v}^{\infty } \left( \mathrm {sinc} \,\frac{\pi um}{sL}\right) ^{2s}\,{\mathrm d}u\le & {} \frac{s^{2s}\,L^{2s}}{m^{2s}\,\pi ^{2s}}\, \int _{L/2-v}^{\infty } u^{-2s}\,{\mathrm d}u = \frac{s^{2s}\,L^{2s}}{(2s-1)\, m^{2s}\,\pi ^{2s}\,(L/2-v)^{2s-1}},\\ \int _{v + L/2}^{\infty } \left( \mathrm {sinc} \,\frac{\pi um}{sL}\right) ^{2s}\,{\mathrm d}u\le & {} \frac{s^{2s}\,L^{2s}}{m^{2s}\,\pi ^{2s}}\, \int _{v + L/2}^{\infty } u^{-2s}\,{\mathrm d}u = \frac{s^{2s}\,L^{2s}}{(2s-1)\, m^{2s}\,\pi ^{2s}\,(v + L/2)^{2s-1}}, \end{aligned}$$

the function \(\eta \) can be estimated for \(v \in [- \delta , \delta ]\) by

$$\begin{aligned} \eta (v) \le \frac{s^{2s-1}\,L^{2s-1}}{(2s-1)\,m^{2s-1}\,\pi ^{2s}\,M_{2s}(0)}\,\left[ \frac{1}{(L/2 - v)^{2s-1}} + \frac{1}{(L/2 + v)^{2s-1}}\right] . \end{aligned}$$

By \(v \in [- \delta ,\,\delta ]\) with \(0< \delta < N/2 \le L/2\), it holds \(L/2 - v\), \(L/2 + v \in [L/2 - \delta ,\,L/2 + \delta ]\). Since the function \(x^{1-2s}\) decreases for \(x > 0\), we conclude that

$$\begin{aligned} \max _{v \in [-\delta ,\delta ]} |\eta (v)| \le \frac{2\,s^{2s-1}\,L^{2s-1}}{(2s-1)\,m^{2s-1}\,\pi ^{2s}\,M_{2s}(0)\,(L/2 - \delta )^{2s-1}} . \end{aligned}$$

Hence, by (3.21), (3.27) and (5.3) we receive

$$\begin{aligned} E_1(m,\delta ,L)&\le \frac{2\,\sqrt{2 \delta }}{(2s-1)\,\pi \,M_{2s}(0)} \left( \frac{2 sL}{\pi mL - 2 \pi m\delta }\right) ^{2s-1}\nonumber \\&\le \frac{3\,\sqrt{\delta s}}{(2s-1)\,\pi }\, \left( \frac{2 sL}{\pi mL - 2 \pi m\delta }\right) ^{2s-1} . \end{aligned}$$
(5.8)

To guarantee that this bound decays, we have to satisfy

$$\begin{aligned} \frac{2 sL}{\pi mL - 2 \pi m\delta } = \frac{2s(1 + \lambda )}{\pi m\,(1 + \lambda - 2 \tau )} =:c < 1 . \end{aligned}$$
(5.9)

By means of logarithmic laws we recognize that \(c^{\,2s-1} = \mathrm e^{\ln (c^{2s-1})} = \mathrm e^{(2s-1)\,\ln c}\). Thus, the condition \(c < 1\) yields \(\ln c<0\) and therefore an exponential decay of (5.8) with respect to \((2s-1)\). On the one hand, \(\ln c\) should be as small as possible, which is equivalent to choosing s as small as possible. On the other hand, we aim at achieving a decay rate of at least m, i.e., we need to fulfill \(2s-1\ge m\). These two conditions can now be used to pick the optimal parameter \(s\in {\mathbb {N}}\) in the form \(s = \left\lceil \frac{m+1}{2} \right\rceil \). Then (5.9) holds if (5.4) is fulfilled, and

$$\begin{aligned} c^{\,2s-1}&= \mathrm e^{(2s-1)\,\ln c} = \mathrm e^{(2s-1)\,\left( \ln \left( 2s(1 + \lambda )\right) - \ln \left( \pi m\,(1 + \lambda - 2 \tau )\right) \right) } \\&= \mathrm e^{-(2s-1)\,\left( \ln \left( \pi m\,(1 + \lambda - 2 \tau )\right) - \ln \left( 2s(1 + \lambda )\right) \right) } \le \mathrm e^{-m\,\left( \ln \left( \pi m\,(1 + \lambda - 2 \tau )\right) - \ln \left( 2s(1 + \lambda )\right) \right) } \end{aligned}$$

yields the assertion. We remark that it holds \(\pi m\,(1 + \lambda - 2 \tau ) > 2s(1 + \lambda )\) since \(c<1\). \(\square \)

Example 5.4

Analogous to Example 4.2, we now visualize the error bound from Theorem 5.3, i.e., for \(\varphi = \varphi _{\mathrm {B}}\) the approximation error (4.6) satisfies \(e_{m,\tau ,\lambda }(f) \le E_{m,\tau ,\lambda }\,\Vert f\Vert _{L^2({\mathbb {R}})}\) by (5.5), where

$$\begin{aligned} E_1(m,\delta ,L) \le E_{m,\tau ,\lambda } :=\frac{3\,\sqrt{\delta s}}{(2s-1)\,\pi }\, \mathrm e^{-m\,\left( \ln \left( \pi m\,(1 + \lambda - 2 \tau )\right) - \ln \left( 2s(1 + \lambda )\right) \right) } \end{aligned}$$
(5.10)

with \(s = \left\lceil \frac{m+1}{2} \right\rceil \). Additionally, we now have to observe the condition (5.4). For the first experiment in Example 4.2 with \(\lambda =1\) this leads to \(\tau <1-\frac{2}{\pi }\approx 0.3634\), while in the second experiment we fixed \(\tau = \frac{1}{3}\) and therefore have to satisfy \(\lambda >\frac{2\pi }{3\pi -6}-1\approx 0.8346\). Thus, the requirements of Theorem 5.3 are fulfilled only in these settings, and therefore only those error bounds are plotted in Fig. 4, while the approximation error (4.6) is computed for all parameter combinations given in Example 4.2; a sketch for evaluating the error constant (5.10) follows below. We observe almost the same behavior as in Fig. 1, which means that there is hardly any improvement in using the \(\mathrm {B}\)-spline window function in comparison to the well-studied Gaussian window function.
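
A minimal sketch for the error constant (5.10), including the check of condition (5.4), could read as follows (assuming NumPy; the helper name `E_bspline` is ours):

```python
import numpy as np

def E_bspline(m, tau, lam, N=128):
    """Error constant (5.10) with s = ceil((m+1)/2); None if (5.4) is violated."""
    if tau / (1 + lam) >= 0.5 - 1.0 / np.pi:   # condition (5.4)
        return None
    s = -(-(m + 1) // 2)                       # s = ceil((m+1)/2)
    delta = tau * N
    rate = np.log(np.pi * m * (1 + lam - 2 * tau)) - np.log(2 * s * (1 + lam))
    return 3 * np.sqrt(delta * s) / ((2 * s - 1) * np.pi) * np.exp(-m * rate)

print([E_bspline(m, 1/3, 1) for m in (2, 4, 6, 8, 10)])
print(E_bspline(2, 9/20, 1))   # None: tau = 9/20 violates (5.4) for lambda = 1
```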

Fig. 4

Maximum approximation error (4.6) and error constant (5.10) using \(\varphi _{\mathrm {B}}\) in (3.3) and \(s = \left\lceil \frac{m+1}{2} \right\rceil \) for the function \(f(x) = \sqrt{2\delta } \,\mathrm {sinc}(2 \delta \pi x)\) with \(N=128\), \(m\in \{2, 3, \ldots , 10\}\), as well as \(\tau \in \{\nicefrac {1}{20},\,\nicefrac {1}{10},\,\nicefrac {1}{4},\,\nicefrac {1}{3},\,\nicefrac {9}{20}\}\), \(\delta = \tau N\), and \(\lambda \in \{0,0.5,1,2\}\), respectively

Now we show that for the regularized Shannon sampling formula with the \(\mathrm B\)-spline window function (3.3) the uniform perturbation error (3.30) only grows as \(\mathcal {O}(\sqrt{m})\).

Theorem 5.5

Let \(f\in {{\mathcal {B}}}_{\delta }(\mathbb R)\) with \(\delta = \tau N\), \(\tau \in (0,\,1/2)\), \(N \in {\mathbb {N}}\), \(L= N(1+\lambda )\) with \(\lambda \ge 0\) and \(m \in {\mathbb N}\setminus \{1\}\) be given. Let \(s \in {\mathbb {N}}\) be defined by \(s = \left\lceil \frac{m+1}{2} \right\rceil \). Further let \(R_{\mathrm {B},m}{{\tilde{f}}}\) be as in (3.28) with the noisy samples \({{\tilde{f}}}_{\ell } = f\big (\frac{\ell }{L}\big ) + \varepsilon _{\ell }\), where \(|\varepsilon _{\ell }| \le \varepsilon \) for all \(\ell \in {\mathbb {Z}}\) and \(\varepsilon >0\).

Then the regularized Shannon sampling formula (3.15) with the \(\mathrm B\)-spline window function (3.3) and \(s = \left\lceil \frac{m+1}{2} \right\rceil \) satisfies

$$\begin{aligned} \Vert R_{\mathrm {B},m}{{\tilde{f}}} - R_{\mathrm {B},m}f \Vert _{C_0({\mathbb {R}})} \le \varepsilon \left( 2+\frac{3}{2} \,\sqrt{m}\right) . \end{aligned}$$
(5.11)

Proof

By Theorem 3.4 we only have to compute \(\hat{\varphi }_{\mathrm B}(0)\) for the \(\mathrm B\)-spline window function (3.3). By (5.6) and (5.3) we recognize that

$$\begin{aligned} \hat{\varphi }_{\mathrm B}(0)&= \frac{m}{sL\, M_{2s}(0)} \le \frac{m}{sL} \frac{3 \,\sqrt{2}}{4}\sqrt{s} . \end{aligned}$$

Due to \(s = \left\lceil \frac{m+1}{2} \right\rceil \) it holds \(\sqrt{s} \ge \frac{\sqrt{m}}{\sqrt{2}}\) such that (3.30) yields the assertion (5.11). \(\square \)

Similar to Example 4.4, one can visualize the error bound from Theorem 5.5, which leads to results analogous to Fig. 2.

6 sinh-type regularized Shannon sampling formula

We consider the \(\sinh \)-type window function (3.4) with the parameter \(\beta :=\frac{s\pi \,(1 + 2 \lambda )}{1 + \lambda }\) for \(s>0\), \(m \in {\mathbb {N}} \setminus \{1\}\), and \(L= N\,(1 + \lambda )\), \(\lambda \ge 0\), and show that in this case the uniform approximation error (3.19) for the regularized Shannon sampling formula (3.15) decays exponentially with respect to m.

Theorem 6.1

Let \(f\in {{\mathcal {B}}}_{\delta }({\mathbb {R}})\) with \(\delta = \tau N\), \(\tau \in (0,\,1/2)\), \(N\in {\mathbb {N}}\), \(L = N\,(1 + \lambda )\), \(\lambda \ge 0\), and \(m \in {\mathbb {N}} \setminus \{1\}\) be given.

Then the regularized Shannon sampling formula (3.15) with the \(\sinh \)-type window function (3.4) and \(\beta = \frac{\pi m\,(1 + \lambda + 2 \tau )}{1 + \lambda }\) satisfies the error estimate

$$\begin{aligned} {\Vert f - R_{\sinh ,m} f \Vert _{C_0({\mathbb {R}})} \le \bigg (\frac{\sqrt{\beta \,\pi \delta }}{(1 - 2\,{\mathrm e}^{-\beta })(1-w_0^2)^{1/4}}\, \, {\mathrm e}^{-\beta \big (1-\sqrt{1-w_0^2}\big )} + \frac{2\,\sqrt{2 \delta }}{1 - {\mathrm e}^{-2\beta }}\,{\mathrm e}^{-\beta }\bigg )\, \Vert f \Vert _{L^2({\mathbb {R}})} ,} \end{aligned}$$

where \(w_0 = \frac{1 + \lambda - 2\tau }{1 + \lambda + 2\tau } \in (0,\,1)\). Further, the regularized Shannon sampling formula (3.15) with the \(\sinh \)-type window function (3.4) and \(\beta = \frac{\pi m\,(1 + \lambda - 2 \tau )}{1 + \lambda }\) fulfills

$$\begin{aligned} {\Vert f - R_{\sinh ,m} f \Vert _{C_0({\mathbb {R}})} \le 3\,\sqrt{2\delta } \, \mathrm e^{-\beta } \, \Vert f \Vert _{L^2({\mathbb {R}})} .} \end{aligned}$$

Proof

By Theorem 3.2 we only have to estimate the regularization error constant (3.21), since it holds \(\varphi _{\sinh }(x) = \varphi _{\sinh }(x)\,{\mathbf{1}}_{[-m/L,\,m/L]}(x)\) for all \(x \in {\mathbb {R}}\) and therefore the truncation error constant (3.22) vanishes for the \(\sinh \)-type window function (3.4) by Remark 3.3.

By [9, p. 38, 7.58] or [12] it holds

$$\begin{aligned} {{\hat{\varphi }}_{\sinh }}(v)&= \frac{\pi m \beta }{L\, \sinh \beta } \cdot \left\{ \begin{array}{ll} (w^2 - \beta ^2)^{-1/2}\,J_1\big (\sqrt{w^2 - \beta ^2}\big ) &{} \quad w\in {\mathbb {R}} \setminus \{- \beta , \, \beta \},\\ 1/2 &{} \quad w = \pm \beta , \end{array} \right. \end{aligned}$$
(6.1)

where \(w :=2\pi m v/L\) denotes a scaled frequency. Thus, the auxiliary function (3.27) is given by

$$\begin{aligned}&\eta (v) = {{\mathbf {1}}}_{[-\delta ,\, \delta ]}(v) - \frac{m\beta }{2 L\,\sinh \beta }\\&\quad \int _{v-L/2}^{v+L/2} \frac{J_1\big (2 \pi \,\sqrt{m^2u^2/L^2 - s^2(1+ 2 \lambda )^2/(2 + 2\lambda )^2}\big )}{\sqrt{m^2u^2/L^2 - s^2(1+ 2 \lambda )^2/(2 + 2\lambda )^2}}\,{\mathrm d}u , \ v \in {\mathbb {R}} . \end{aligned}$$

Substituting \(u = \frac{sL\,(1+ 2\lambda )}{m(2 + 2\lambda )}\,w\), we obtain for \(v \in [- \delta , \, \delta ]\) that

$$\begin{aligned} \eta (v) = 1 - \frac{\beta }{2\,\sinh \beta }\,\int _{{-w_1(-v)}}^{{w_1(v)}} \frac{J_1\big (\beta \, \sqrt{w^2-1}\big )}{\sqrt{w^2-1}}\,{\mathrm d}w \end{aligned}$$
(6.2)

with

$$\begin{aligned} {w_1(v) :=\frac{m(v + L/2)(2+2\lambda )}{sL\,(1+2\lambda )} >0, \quad v \in [- \delta , \delta ] .} \end{aligned}$$
(6.3)

Since the integrand of (6.2) behaves differently for \(w \in [-1, 1]\) and \(w \in {\mathbb {R}} \setminus (-1, 1)\), we have to distinguish between the cases \(w_1(v)\le 1\) and \(w_1(v)\ge 1\) for all \(v \in [-\delta ,\, \delta ]\). By definition \(w_1(v)\) is linear and monotonically increasing. Thus, we have \(\min \{w_1(v):\,v\in [-\delta ,\,\delta ]\} = w_1(-\delta )\) and \(\max \{w_1(v):\,v\in [-\delta ,\,\delta ]\} = w_1(\delta )\). This fact can now be used to choose an optimal parameter \(s = s(m,\tau ,\lambda ) > 0\) such that either \(w_1(\delta )\le 1\) or \(w_1(-\delta )\ge 1\) is fulfilled.

Case 1 (\(\varvec{w_1(\delta )\le 1}\)): Note that by [5, 6.681–3] and [1, 10.2.13] as well as \(J_1({\mathrm i}\,z) = {\mathrm i}\,I_1(z)\) for \(z \in {\mathbb {C}}\) it holds

$$\begin{aligned} \int _{-1}^1 \frac{J_1\big (\beta \, \sqrt{w^2-1}\big )}{\sqrt{w^2-1}}\,{\mathrm d}w&= \int _{-1}^1 \frac{I_1\big (\beta \, \sqrt{1-w^2}\big )}{\sqrt{1-w^2}}\,{\mathrm d}w = \int _{-\pi /2}^{\pi /2} I_1(\beta \,\cos s)\,{\mathrm d}s \nonumber \\&= \pi \,\left( I_{1/2}\Big (\frac{\beta }{2}\Big )\right) ^2 = \frac{4}{\beta }\,\left( \sinh \frac{\beta }{2}\right) ^2 . \end{aligned}$$
(6.4)

Then from (6.2) and (6.4) it follows that

$$\begin{aligned} \eta (v)&= \frac{\beta }{4\, \big (\sinh \frac{\beta }{2}\big )^2}\,\int _{-1}^1 \frac{I_1\big (\beta \, \sqrt{1 -w^2}\big )}{\sqrt{1 -w^2}}\,{\mathrm d}w - \frac{\beta }{2 \, \sinh \beta }\, \int _{-w_1(-v)}^{w_1(v)} \frac{I_1\big (\beta \, \sqrt{1 -w^2}\big )}{\sqrt{1 - w^2}}\,{\mathrm d}w \\[1ex]&= \eta _1(v) + \eta _2(v) \nonumber \end{aligned}$$
(6.5)

with

$$\begin{aligned} \eta _1(v):= & {} \bigg (\frac{\beta }{4\, \big (\sinh \frac{\beta }{2}\big )^2} - \frac{\beta }{2 \, \sinh \beta }\bigg )\, \int _{-w_1(-v)}^{w_1(v)} \frac{I_1\big (\beta \, \sqrt{1 - w^2}\big )}{\sqrt{1 - w^2}}\,{\mathrm d}w , \\ \eta _2(v):= & {} \frac{\beta }{4\, \big (\sinh \frac{\beta }{2}\big )^2}\,\bigg (\int _{-1}^1 - \int _{-w_1(-v)}^{w_1(v)}\bigg )\, \frac{I_1\big (\beta \, \sqrt{1 -w^2}\big )}{\sqrt{1 -w^2}}\,{\mathrm d}w . \end{aligned}$$

By \(2\,\big (\sinh \frac{\beta }{2}\big )^2 < \sinh \beta \) we have

$$\begin{aligned} \frac{\beta }{4\, \big (\sinh \frac{\beta }{2}\big )^2} - \frac{\beta }{2 \, \sinh \beta } > 0. \end{aligned}$$
(6.6)

Since the integrand of (6.5) is positive, it is easy to find an upper bound of \(\eta _1(v)\) for all \(v \in [-\delta ,\,\delta ]\), because by (6.4) it holds

$$\begin{aligned} 0 \le \eta _1(v)\le & {} \bigg (\frac{\beta }{4\, \big (\sinh \frac{\beta }{2}\big )^2} - \frac{\beta }{2 \, \sinh \beta }\bigg )\, \int _{-1}^1 \frac{I_1\big (\beta \, \sqrt{1 - w^2}\big )}{\sqrt{1 -w^2}}\,{\mathrm d}w \nonumber \\= & {} 1 - \frac{2\,\big (\sinh \frac{\beta }{2}\big )^2}{\sinh \beta } = \frac{2 - 2\,{\mathrm e}^{-\beta }}{{\mathrm e}^{\beta } - {\mathrm e}^{-\beta }} < \frac{2}{1 - {\mathrm e}^{-2 \beta }}\, {\mathrm e}^{-\beta }. \end{aligned}$$

Further, for arbitrary \(v \in [- \delta , \, \delta ]\) we obtain

$$\begin{aligned} 0 \le \eta _2(v)= & {} \frac{\beta }{4 \, \big (\sinh \frac{\beta }{2}\big )^2}\,\bigg (\int _{-1}^{-w_1(-v)} + \int _{w_1(v)}^1\bigg ) \,\frac{I_1\big (\beta \, \sqrt{1 - w^2}\big )}{\sqrt{1 - w^2}}\,{\mathrm d}w \nonumber \\= & {} \frac{\beta }{4 \, \big (\sinh \frac{\beta }{2}\big )^2}\,\bigg (\int _{w_1(-v)}^1 + \int _{w_1(v)}^1\bigg ) \,\frac{I_1\big (\beta \, \sqrt{1 - w^2}\big )}{\sqrt{1 - w^2}}\,{\mathrm d}w \nonumber \\\le & {} \frac{\beta }{2 \, \big (\sinh \frac{\beta }{2}\big )^2}\, \int _{w_0}^1 \frac{I_1\big (\beta \, \sqrt{1 - w^2}\big )}{\sqrt{1 - w^2}}\,{\mathrm d}w, \end{aligned}$$
(6.7)

since the integrand is positive and \(w_0 :=w_1(-\delta ) = \min \{w_1(v):\,v\in [-\delta ,\,\delta ]\}\). Substituting \(w = \sin t\) in (6.7), for all \(v \in [-\delta ,\,\delta ]\) we can estimate

$$\begin{aligned} \eta _2(v) \le \frac{\beta }{2 \, \big (\sinh \frac{\beta }{2}\big )^2}\, \int _{\mathrm {arcsin}\,w_0}^{\pi /2} I_1(\beta \, \cos t)\,{\mathrm d}t \end{aligned}$$

with \(\mathrm {arcsin}\,w_0 \in \big (0,\, \frac{\pi }{2}\big )\). Since the integrand \(I_1(\beta \cos t)\) decreases on this interval, the above integral can be estimated by the rectangular rule (see Fig. 5) such that

$$\begin{aligned} \eta _2(v) \le \frac{\beta }{2\, \big (\sinh \frac{\beta }{2}\big )^2}\,\Big (\frac{\pi }{2} - {\mathrm {arcsin}}\,w_0\Big )\, I_1\Big (\beta \,\sqrt{1 - w_0^2}\,\Big ) . \end{aligned}$$

Further it holds \(4\, (\sinh \frac{\beta }{2})^2 = {\mathrm e}^{\beta } - 2 + {\mathrm e}^{-\beta } > {\mathrm e}^{\beta } - 2\). Since by [11, Lemma 7] we have \(\sqrt{2 \pi x}\,{\mathrm e}^{-x}\,I_1(x) < 1\), it holds

$$\begin{aligned} I_1\Big (\beta \,\sqrt{1 - w_0^2}\,\Big ) < \frac{1}{\sqrt{2 \pi \beta }}\,\big (1 - w_0^2\big )^{-1/4}\,{\mathrm e}^{\beta \,\sqrt{1 - w_0^2}}, \end{aligned}$$

and therefore we obtain

$$\begin{aligned} \eta _2(v) \le \frac{\sqrt{\beta }\,(\pi - 2\,{\mathrm {arcsin}}\,w_0)}{\sqrt{2 \pi }\,(1 - w_0^2)^{1/4}\,(1 - 2\,{\mathrm e}^{-\beta })}\,{\mathrm e}^{-\beta \,\big (1 - \sqrt{1-w_0^2}\big )} . \end{aligned}$$

Additionally using (3.21) and (3.27) as well as \(\mathrm {arcsin}\,w_0 \in \big (0,\, \frac{\pi }{2}\big )\) this yields

$$\begin{aligned} {E_1(m,\delta ,L) \le \frac{\sqrt{\beta \,\pi \delta }}{(1 - 2\,{\mathrm e}^{-\beta })(1-w_0^2)^{1/4}}\,{\mathrm e}^{-\beta \,\big (1-\sqrt{1-w_0^2}\big )} + \frac{2\,\sqrt{2 \delta }}{1 - {\mathrm e}^{-2\beta }}\,{\mathrm e}^{-\beta }.} \end{aligned}$$
(6.8)
Fig. 5

The integrand \(I_1(\beta \,\cos t)\) on the interval \([{\arcsin \,w_0},\,{\pi /2}]\)

What remains is the choice of the optimal parameter \(s > 0\), where we have to fulfill \(w_1(\delta ) \le 1\). To obtain the smallest possible error bound we are looking for an \(s>0\) that minimizes the error term \(\max _{v\in [-\delta ,\,\delta ]} |\eta (v)|\). By (6.5) and (6.6), this amounts to maximizing the second integral in (6.5). Since the integrand of (6.5) is positive, the integration limit \(w_1(v)\) should be as large as possible for all \(v \in [-\delta ,\, \delta ]\), and therefore \(w_1(\delta ) = 1\). Rearranging this by (6.3) in terms of s, we immediately see that

$$\begin{aligned} s = \frac{m\,(1+\lambda +2\tau )}{1+2\lambda } \end{aligned}$$

and hence

$$\begin{aligned} \beta = \frac{\pi m\,(1 + \lambda + 2 \tau )}{1 + \lambda }, \quad w_0 = w_1(-\delta ) = \frac{1+\lambda -2\tau }{1+\lambda +2\tau } \in (0,\,1) \end{aligned}$$

such that \(\beta \) depends linearly on m by definition.

Case 2 (\(\varvec{w_1(-\delta )\ge 1}\)): From (6.2) it follows that

$$\begin{aligned} \eta (v) = \eta _3(v) - \eta _4(v), \quad v \in [-\delta ,\, \delta ], \end{aligned}$$
(6.9)

with

$$\begin{aligned} \eta _3(v):= & {} 1 - \frac{\beta }{2\,\sinh \beta }\,\int _{-1}^1 \frac{I_1\big (\beta \, \sqrt{1-w^2}\big )}{\sqrt{1-w^2}}\, {\mathrm d}w, \\ \eta _4(v):= & {} \frac{\beta }{2\,\sinh \beta }\,\bigg (\int _{-w_1(-v)}^{-1} + \int _1^{w_1(v)}\bigg ) \,\frac{J_1\big (\beta \, \sqrt{w^2-1}\big )}{\sqrt{w^2-1}}\, {\mathrm d}w. \end{aligned}$$

By (6.4) we obtain

$$\begin{aligned} \eta _3(v) = 1 - \frac{2 \big (\sinh \frac{\beta }{2}\big )^2}{\sinh \,\beta } = \frac{2\, {\mathrm e}^{-\beta }}{1 + {\mathrm e}^{-\beta }} > 0. \end{aligned}$$

Further it holds

$$\begin{aligned} \eta _4(v) = \frac{\beta }{2\,\sinh \beta }\,\bigg (\int _1^{w_1(-v)} + \int _1^{w_1(v)}\bigg ) \frac{J_1\big (\beta \, \sqrt{w^2-1}\big )}{\sqrt{w^2-1}}\, {\mathrm d}w. \end{aligned}$$

Substituting \(w=\cosh t\) in the above integrals, we have

$$\begin{aligned} \eta _4(v) = \frac{\beta }{2\,\sinh \beta }\,\bigg ( \int _0^{\mathrm {arcosh}(w_{1}(-v))} +\int _0^{\mathrm {arcosh}(w_{1}(v))} \bigg ) J_1(\beta \, \sinh t) \,{\mathrm d}t . \end{aligned}$$

In order to estimate these integrals properly, we now have a closer look at the integrand. As is well known, the Bessel function \(J_1\) oscillates on \([0,\,\infty )\) and has the non-negative simple zeros \(j_{1,n}\), \(n \in {{\mathbb {N}}}_0\), with \(j_{1,0} = 0\). The zeros \(j_{1,n}\), \(n = 1,\,\ldots ,\,40\), are tabulated in [21, p. 748]. On each interval \(\big [\mathrm {arsinh}\,\frac{j_{1,2n}}{\beta },\,\mathrm {arsinh}\,\frac{j_{1,2n+2}}{\beta }\big ]\), \(n \in {{\mathbb {N}}}_0\), the integrand \(J_1(\beta \,\sinh t)\) is first non-negative and then non-positive, see Fig. 6. Due to these properties and the fact that the amplitude of \(J_1\) decreases as its argument tends to infinity, the integrals over each interval \(\big [\mathrm {arsinh}\,\frac{j_{1,2n}}{\beta },\,\mathrm {arsinh}\,\frac{j_{1,2n+2}}{\beta }\big ]\), \(n \in {{\mathbb {N}}}_0\), are positive. Note that by [5, 6.645–1] it holds

Fig. 6

The integrand \(J_1(\beta \sinh t)\) on the interval \([0,\mathrm {arcosh}(w_1(\delta ))]\)

$$\begin{aligned} \int _0^{\infty } J_1(\beta \sinh t) \,{\mathrm d}t = I_{1/2}\Big (\frac{\beta }{2}\Big ) \, K_{1/2}\Big (\frac{\beta }{2}\Big ) = \frac{2}{\sqrt{\pi \beta }}\,\sinh \frac{\beta }{2} \cdot \sqrt{\frac{\pi }{\beta }}\,\mathrm e^{-\beta /2} = \frac{1 - \mathrm e^{- \beta }}{\beta } , \end{aligned}$$

where \(K_{\alpha }\) denotes the modified Bessel function of the second kind and \(I_{1/2}\), \(K_{1/2}\) denote modified Bessel functions of half order (see [1, 10.2.13, 10.2.14, and 10.2.17]). In addition, numerical experiments have shown that for all \(T\ge 0\) it holds

$$\begin{aligned} 0\le \int _0^{T} J_1(\beta \sinh t) \,{\mathrm d}t\le \frac{3\,(1 - \mathrm e^{- \beta })}{2 \, \beta } . \end{aligned}$$
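
This empirical bound can easily be re-checked; a minimal numerical sketch (assuming SciPy, with an arbitrary test value of \(\beta \) and an arbitrary grid of upper limits T) is the following:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

beta = 10.0                                  # arbitrary test value
bound = 1.5 * (1 - np.exp(-beta)) / beta     # right-hand side of the claimed bound
vals = [quad(lambda t: j1(beta * np.sinh(t)), 0.0, T, limit=500)[0]
        for T in np.linspace(0.0, 2.5, 26)]
print(min(vals) >= 0.0, max(vals) <= bound)  # expected: True True
```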

Therefore, we obtain

$$\begin{aligned} 0 \le \eta _4(v) \le \frac{\beta }{2\, \sinh \beta } \cdot \frac{3\,(1 - {\mathrm e}^{-\beta })}{\beta } = \frac{3\,{\mathrm e}^{-\beta }}{1 + {\mathrm e}^{-\beta }}< 3\,{\mathrm e}^{-\beta } \end{aligned}$$

and hence by (6.9) it holds

$$\begin{aligned} \max _{v\in [-\delta ,\,\delta ]} |\eta (v)| = \max _{v\in [-\delta ,\,\delta ]} |\eta _3(v) - \eta _4(v)| < 3\,{\mathrm e}^{-\beta }. \end{aligned}$$
(6.10)

Thus, by (3.21) and (3.27) we conclude that

$$\begin{aligned} {E_1(m,\delta ,L) \le 3\,\sqrt{2\delta }\, \mathrm e^{-\beta } .} \end{aligned}$$
(6.11)

What remains is the choice of the optimal parameter \(s > 0\), where we have to fulfill \(w_1(-\delta ) = c\) with \(c\ge 1\). Rearranging this by (6.3) in terms of s we see that

$$\begin{aligned} s = s(c) = \frac{m\,(1 + \lambda - 2 \tau )}{c\,(1 + 2\lambda )}, \quad \beta = \beta (c) = \frac{\pi m\,(1 + \lambda - 2 \tau )}{c\,(1 + \lambda )}. \end{aligned}$$

To obtain the smallest possible error bound we are looking for a constant \(c\ge 1\) that minimizes the error term \(\max _{v\in [-\delta ,\,\delta ]} |\eta (v)|\). By (6.10) we minimize the upper bound \(3\,{\mathrm e}^{-\beta (c)}\). Since \(3\,{\mathrm e}^{-\beta (c)}\) is monotonically increasing for \(c\ge 1\), the minimum value is attained at \(c=1\). Hence, the suggested parameters are

$$\begin{aligned} s = \frac{m\,(1 + \lambda - 2 \tau )}{1 + 2\lambda }, \quad \beta = \frac{\pi m\,(1 + \lambda - 2 \tau )}{1 + \lambda } \end{aligned}$$

such that \(\beta \) depends linearly on m by definition. This completes the proof. \(\square \)

Now we compare the actual decay rates of the error constants (6.8) with \(\beta = \frac{\pi m\,(1 + \lambda +2 \tau )}{1 + \lambda }\) and (6.11) with \(\beta = \frac{\pi m\,(1 + \lambda - 2 \tau )}{1 + \lambda }\). It can be seen that the decay rate of (6.8) reads as

$$\begin{aligned} \frac{\pi m\,(1 + \lambda +2 \tau )}{1 + \lambda }\,\Big (1 - \sqrt{1 - w_0^2}\,\Big ) \end{aligned}$$

with \(w_0 = \frac{1 + \lambda - 2\tau }{1 + \lambda + 2\tau }\). On the other hand, the decay rate of (6.11) is given by \(\frac{\pi m\,(1 + \lambda - 2 \tau )}{1 + \lambda }\). Since \(1 + \lambda > 2\tau \) for all \(\lambda \ge 0\) and \(\tau \in \big (0,\,\frac{1}{2}\big )\), simple calculation shows that

$$\begin{aligned} \frac{\pi m\,(1 + \lambda - 2 \tau )}{1 + \lambda } > \frac{\pi m\,(1 + \lambda +2 \tau )}{1 + \lambda }\,\Big (1 - \sqrt{1 - w_0^2}\,\Big ). \end{aligned}$$

Indeed, setting \(a :=1 + \lambda + 2\tau \) and \(b :=1 + \lambda - 2\tau \) such that \(w_0 = \frac{b}{a}\), this inequality is equivalent to \(\sqrt{a^2 - b^2} > a - b\), i.e., to \(a + b > a - b\), which holds since \(b > 0\). Hence, the error constant (6.11) decays faster than the one in (6.8). Therefore, we will use the \(\sinh \)-type window function (3.4) with \(\beta = \frac{\pi m\,(1 + \lambda - 2 \tau )}{1 + \lambda }\) in the remainder of this paper.

Example 6.2

Analogous to Example 4.2, we visualize the error bound of Theorem 6.1, i.e., for \(\varphi = \varphi _{\mathrm {sinh}}\) the approximation error (4.6) satisfies \(e_{m,\tau ,\lambda }(f) \le E_{m,\tau ,\lambda }\,\Vert f\Vert _{L^2(\mathbb R)}\) by (6.11), where

$$\begin{aligned} {E_1(m,\delta ,L) \le E_{m,\tau ,\lambda } :=3\,\sqrt{2\delta } \, \mathrm e^{-\beta } ,} \end{aligned}$$
(6.12)

with \(\beta = \frac{\pi m\,(1+\lambda - 2 \tau )}{1+\lambda }\). The associated results are displayed in Fig. 7, where a substantial improvement can be seen compared to Figs. 1 and 4. We also remark that for larger choices of N the line plots in Fig. 7 would only be shifted slightly upwards, such that we obtain almost the same results for all N. That is to say, the \(\sinh \)-type window function is by far the best choice as a regularization function for regularized Shannon sampling sums.
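
The error constant (6.12) is cheap to evaluate; the following sketch (assuming NumPy; the helper name `E_sinh` is ours) computes it for the setting \(\tau = \frac{1}{3}\), \(\lambda = 1\), and the resulting values can be contrasted with those of the hypothetical `E_bspline` helper sketched in Example 5.4 to see the substantially faster decay:

```python
import numpy as np

def E_sinh(m, tau, lam, N=128):
    """Error constant (6.12) with beta = pi*m*(1 + lam - 2*tau)/(1 + lam)."""
    beta = np.pi * m * (1 + lam - 2 * tau) / (1 + lam)
    return 3 * np.sqrt(2 * tau * N) * np.exp(-beta)

for m in (2, 4, 6, 8, 10):   # compare with E_bspline from the sketch in Example 5.4
    print(m, E_sinh(m, 1/3, 1))
```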

Fig. 7

Maximum approximation error (4.6) and error constant (6.12) using \(\varphi _{\sinh }\) and \(\beta = \frac{\pi m\,(1 + \lambda - 2 \tau )}{1 + \lambda }\) in (3.4) for the function \(f(x) = \sqrt{2\delta } \,\mathrm {sinc}(2 \delta \pi x)\) with \(N=128\), \(m\in \{2, 3, \ldots , 10\}\), as well as \(\tau \in \{\nicefrac {1}{20},\,\nicefrac {1}{10},\,\nicefrac {1}{4},\,\nicefrac {1}{3},\,\nicefrac {9}{20}\}\), \(\delta = \tau N\), and \(\lambda \in \{0,0.5,1,2\}\), respectively

Now we show that for the regularized Shannon sampling formula with the \(\sinh \)-type window function (3.4) the uniform perturbation error (3.30) only grows as \(\mathcal {O}(\sqrt{m})\).

Theorem 6.3

Let \(f\in {\mathcal B}_{\delta }({\mathbb {R}})\) with \(\delta = \tau N\), \(\tau \in (0,\,1/2)\), \(N \in {\mathbb {N}}\), \(L= N(1+\lambda )\) with \(\lambda \ge 0\) and \(m \in {{\mathbb {N}}}\setminus \{1\}\) be given. Further let \(R_{\sinh ,m}{{\tilde{f}}}\) be as in (3.28) with the noisy samples \({{\tilde{f}}}_{\ell } = f\big (\frac{\ell }{L}\big ) + \varepsilon _{\ell }\), where \(|\varepsilon _{\ell }| \le \varepsilon \) for all \(\ell \in {\mathbb {Z}}\) and \(\varepsilon >0\).

Then the regularized Shannon sampling formula (3.15) with the \(\sinh \)-type window function (3.4) and \(\beta = \frac{\pi m\,(1 + \lambda - 2 \tau )}{1 + \lambda }\) satisfies

$$\begin{aligned} \Vert R_{\sinh ,m}{{\tilde{f}}} - R_{\sinh ,m}f \Vert _{C_0({\mathbb {R}})} \le \varepsilon \left( 2+\sqrt{\frac{2+2\lambda }{1+\lambda - 2 \tau }} \,\frac{1}{1 - {\mathrm e}^{-2 \beta }}\,\sqrt{m} \right) . \end{aligned}$$
(6.13)

Proof

By Theorem 3.4 we only have to compute \(\hat{\varphi }_{\sinh }(0)\). By (6.1) and \(\sqrt{2 \pi \beta }\,{\mathrm e}^{-\beta }\,I_1(\beta ) < 1\) (see [11, Lemma 7]) we recognize that

$$\begin{aligned} \hat{\varphi }_{\mathrm {sinh}}(0)&= \frac{\pi m \beta }{L\, \sinh \beta } \cdot \frac{I_1\big (\beta \big )}{\beta } = \frac{\pi m\,I_1\big (\beta \big )}{L\, \sinh \beta } \le \frac{\pi m \,{\mathrm e}^{\beta }}{\sqrt{2 \pi \beta }L\, \sinh \beta }=\frac{\sqrt{2\pi }\,m}{\sqrt{\beta }L\,(1 - {\mathrm e}^{- 2 \beta })}. \end{aligned}$$

If we now use \(\beta = \frac{\pi m\,(1+\lambda - 2 \tau )}{1+\lambda }\), then (3.30) yields the assertion (6.13). \(\square \)

Similar to Example 4.4, one can visualize the error bound from Theorem 6.3, which leads to results analogous to Fig. 2.

Fig. 8

Maximum approximation error (4.6) and error constant (3.20) using \(\varphi \in \{\varphi _{\mathrm {Gauss}},\varphi _{\mathrm {B}}, \varphi _{\sinh }\}\) for the function \(f(x) = \delta \,\mathrm {sinc}^2(\delta \pi x)\) with \(N=256\), \(\tau = 0.45\), \(\delta = \tau N\), as well as \(m\in \{2, 3, \ldots , 10\},\) and \(\lambda \in \{0.5,1,2\}\)

7 Conclusion

To overcome the drawbacks of the classical Shannon sampling series, namely its poor convergence and its lack of robustness in the presence of noise, in this paper we considered regularized Shannon sampling formulas with localized sampling. To this end, we considered bandlimited functions \(f\in \mathcal {B}_{\delta }({\mathbb {R}})\) and introduced a set \(\Phi _{m,L}\) of window functions. In addition to the original result, where \(\varphi \in \Phi _{m,L}\) is chosen as the rectangular window function, and the well-studied approach using the Gaussian window function, we proposed new window functions with compact support \([-m/L,\,m/L]\), namely the \(\mathrm B\)-spline and \(\sinh \)-type window functions, which are well-studied in the context of the nonequispaced fast Fourier transform (NFFT).

In Sect. 3, we considered an arbitrary window function \(\varphi \in \Phi _{m,L}\) and presented a unified approach to error estimates of the uniform approximation error in Theorem 3.2, as well as a unified approach to the numerical robustness in Theorem 3.4.

In the subsequent sections, we then specialized these results to particular window functions. More precisely, it was shown that the uniform approximation error decays exponentially with respect to the truncation parameter m if \(\varphi \in \Phi _{m,L}\) is the Gaussian, \(\mathrm B\)-spline, or \(\sinh \)-type window function. Moreover, we have shown that the regularized Shannon sampling formulas are numerically robust for noisy samples, i.e., if \(\varphi \in \Phi _{m,L}\) is the Gaussian, \(\mathrm B\)-spline, or \(\sinh \)-type window function, then the uniform perturbation error only grows as \(\mathcal {O}(\sqrt{m})\). While the Gaussian window function from Sect. 4 has already been studied in numerous papers such as [6, 13,14,15,16], we remarked that Theorem 4.1 improves a corresponding result in [6], since we improved the exponential decay rate from \((m-1)\) to m.

Throughout this paper, several numerical experiments illustrated the corresponding theoretical results. Finally, comparing the proposed window functions as done in Fig. 8, the superiority of the newly proposed \(\sinh \)-type window function can easily be seen, since even small choices of the truncation parameter \(m\le 10\) are sufficient for achieving high precision. Due to the use of localized sampling, the evaluation of \(R_{\varphi ,m}f\) on an interval [0, 1/L] requires only 2m samples and therefore has a computational cost of \(\mathcal {O}(m)\) flops, as illustrated in the sketch below. Thus, a reduction of the truncation parameter m is desirable to obtain an efficient method.
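
To illustrate this cost, the following sketch evaluates the regularized sampling sum at a single point t from the 2m samples with \(|k - Lt| \le m\). It assumes (as in the NFFT literature) that the \(\sinh \)-type window (3.4) has the form \(\varphi _{\sinh }(x) = \sinh \big (\beta \sqrt{1 - (Lx/m)^2}\big )/\sinh \beta \) on \([-m/L,\,m/L]\); all names are ours.

```python
import numpy as np

def eval_localized(sample, L, m, t, beta):
    """Evaluate (R_{sinh,m} f)(t) from the 2m samples with |k - L t| <= m."""
    k = np.arange(int(np.ceil(L * t - m)), int(np.floor(L * t + m)) + 1)
    x = t - k / L
    arg = np.clip(1.0 - (L * x / m)**2, 0.0, None)   # guard against rounding
    phi = np.sinh(beta * np.sqrt(arg)) / np.sinh(beta)
    return float(np.sum(sample(k) * np.sinc(L * x) * phi))

N, lam, tau, m = 128, 1.0, 1/3, 5
L, delta = N * (1 + lam), tau * N
beta = np.pi * m * (1 + lam - 2 * tau) / (1 + lam)
f = lambda t: np.sqrt(2 * delta) * np.sinc(2 * delta * t)
print(eval_localized(lambda k: f(k / L), L, m, 0.37, beta), f(0.37))
```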