1 Introduction

The exponential sampling formula was introduced in a formal way in [13, 24] in connection with problems of optical physics such as light scattering and diffraction. Later, a rigorous treatment was first given in [15, 16] for functions f belonging to the Mellin–Paley–Wiener spaces \(B^1_{c, \pi T}\) and \(B^2_{c, \pi T}\), with \(c \in \mathbb {R}\) and \(T>0\) (see Sect. 2 for the definition of these spaces). This formula is contained in the following theorem, which may serve as the starting point of this paper. We present the result in a notation that will be specified in Sect. 2; also see [16, Theorem 5.2] or [3, Theorem 1].

Theorem A

Let f belong to the Mellin–Paley–Wiener space \(B_{c,\pi T}^2\), where \(c\in \mathbb {R}\) and \(T>0\). Then

$$\begin{aligned} f(r) = \sum _{k\in \mathbb {Z}} f(e^{k/T}) {{\,\textrm{lin}\,}}_{c/T}(e^{-k}r^T) \qquad (r>0). \end{aligned}$$

The series converges uniformly on compact subsets of \(\mathbb {R}^+\).
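As an illustration (not part of the original sources), Theorem A can be checked numerically for a concrete Mellin bandlimited function. The sketch below works in the logarithmic variable \(x=\log r\), where for \(c=0\) one has \({{\,\textrm{lin}\,}}(e^{-k}r^T)={{\,\textrm{sinc}\,}}(T\log r-k)\); the test function \(f(r)={{\,\textrm{sinc}\,}}^2(\tfrac{1}{2}\log r)\) belongs to \(B^2_{0,\pi }\), so \(T=2\) oversamples. The truncation bound K and the evaluation point are arbitrary choices.

```python
import numpy as np

# Work in x = log r: for c = 0, lin(e^{-k} r^T) = sinc(T*log(r) - k),
# where np.sinc(x) = sin(pi x)/(pi x).
def exp_sampling(f_log, x, T, K=1500):
    """Truncated exponential sampling series of Theorem A (|k| <= K),
    with f given as f_log(u) = f(e^u)."""
    k = np.arange(-K, K + 1)
    return np.sum(f_log(k / T) * np.sinc(T * x - k))

# f(r) = sinc^2(log(r)/2) is Mellin bandlimited to [-pi, pi] (c = 0),
# so any T > 1 is admissible; T = 2 oversamples.
f_log = lambda u: np.sinc(0.5 * u)**2
x = np.log(1.9)
err = abs(exp_sampling(f_log, x, T=2) - f_log(x))
```

The residual `err` consists of the truncation tail only, since the reconstruction itself is exact for this bandlimited f.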

In Mellin analysis, this is the counterpart of the familiar sampling formula of Whittaker–Kotel’nikov–Shannon (WKS for short). Like the latter, Theorem A is of central importance in several respects, but it also has some limitations in theoretical as well as in practical use. For example, the assumption that the function f is bandlimited in the sense of Mellin imposes a severe restriction, as one recognizes by looking at the Mellin version of the Paley–Wiener theorem (see [4, 5]). It excludes all functions f for which \(\limsup _{r\rightarrow +\infty } \left| r^cf(r)\right| >0\) or \(\limsup _{r\rightarrow 0+} \left| r^cf(r)\right| >0\), let alone unbounded functions. Furthermore, the series may converge very slowly. We shall see that these deficiencies can be overcome by incorporating a Mellin bandlimited multiplier in the series.

This technique can be considered within a more general framework. Motivated by developments in the study of the WKS sampling series, the so-called generalized exponential sampling series was introduced in [11] (see also [10]); in it, the function \(\textrm{lin}_{c/T}\) is replaced by a general continuous function \(\varphi \) satisfying suitable assumptions (see Definition 4 below). Thus the generalized exponential sampling series of a function f, with w in place of T, takes the form

$$\begin{aligned} \bigl (S^\varphi _wf\bigr )(r):= \sum _{k \in \mathbb {Z}} f\big (e^{k/w}\big ) \varphi (e^{-k} r^w) \qquad (r>0,\, w>0). \end{aligned}$$
(1)

The function \(\varphi \) is called the kernel of \((S^\varphi _wf)(r).\)

In general, such a series does not reproduce f as the series in Theorem A does, but it approximates continuous, bounded functions f in the sense that

$$\begin{aligned} \lim _{w \rightarrow \infty } (S^\varphi _wf)(r) = f(r) \end{aligned}$$

(see [11, Theorem 3.2]). For finite w, we have \(f(r)=(S^\varphi _wf)(r)+(E_wf)(r)\) with an approximation error \((E_wf)(r)\). A further error occurs in computations since we will have to truncate the series \((S^\varphi _wf)(r)\).
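A small numerical sketch of this behaviour (illustrative only; the function and all parameters are arbitrary choices) for the kernel \(\varphi ={{\,\textrm{lin}\,}}\): the bounded, continuous, but not Mellin bandlimited function \(f(r)=1/(1+\log ^2 r)\) is approximated increasingly well as w grows, with a residual error \((E_wf)(r)\) plus a truncation error from cutting the series at \(|k|\le K\).

```python
import numpy as np

# f(r) = 1/(1 + log(r)^2) is bounded and continuous but NOT Mellin
# bandlimited, so the series with kernel lin (c = 0) only approximates it.
# Work in x = log r, where lin(e^{-k} r^w) = sinc(w*x - k).
def S_w(f_log, x, w, K=1500):
    k = np.arange(-K, K + 1)
    return np.sum(f_log(k / w) * np.sinc(w * x - k))

f_log = lambda u: 1.0 / (1.0 + u**2)
x = np.log(1.35)
errs = [abs(S_w(f_log, x, w) - f_log(x)) for w in (1, 2, 4, 8)]
```

The error for the largest w is dominated by the truncation tail; the approximation error itself decays quickly because the Mellin transform of this f decays exponentially.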

The present paper can also be seen as a study of the following questions:

  1. How can we find kernels \(\varphi \) for which the corresponding generalized exponential sampling series improves upon the series of Theorem A?

  2. What is an appropriate kernel \(\varphi \) for establishing an efficient algorithm by truncating the corresponding generalized exponential sampling series?

We shall answer both questions by adopting a multiplier technique that was effectively developed for the WKS sampling series (see [17,18,19,20, 22, 25,26,27,28,29]).

This paper is organized as follows. After fixing our notation and recalling some preliminary results in the next section, we devote Sect. 3 to Mellin bandlimited multipliers \(\psi \) such that \(\psi (1)=1\), obtaining a formula

$$\begin{aligned} f(r) = \sum _{k \in \mathbb {Z}} f\big (e^{k/w}\big )\,\psi \big (e^{-k} r^w\big )\,\textrm{lin}_{c/w}\big (e^{-k} r^w\big ) \qquad (r \in \mathbb {R}^+). \end{aligned}$$
(2)

This formula holds for a wider class of functions f than the corresponding formula of Theorem A (see Theorem 3 below). The generalized exponential sampling series \((S_w^{\psi {{\,\textrm{lin}\,}}}f)(r)\), occurring on the right-hand side for \(c=0\), has several remarkable properties (see Theorem 5) and is of interest even for functions f for which (2) fails, since this series always interpolates at the sample points. Moreover, there is a converse result: any generalized sampling series (1) which interpolates at the sample points and has a Mellin bandlimited kernel \(\varphi \) satisfying a mild side condition can be obtained from the series of Theorem A by incorporating a Mellin bandlimited multiplier \(\psi \), that is, \(\varphi = \psi {{\,\textrm{lin}\,}}\) (Theorem 6). This establishes an interesting link between the classical and the generalized sampling series. We also study the approximation power of \((S_w^{\psi {{\,\textrm{lin}\,}}}f)(r)\) for bounded, continuous functions f and express it in terms of best approximation by functions from a Mellin–Bernstein space (Proposition 7).

Particular attention is paid to the truncation error which occurs when the series \((S_w^{\psi {{\,\textrm{lin}\,}}}f)(r)\) is reduced to the sum \((S_{w,N}^{\psi {{\,\textrm{lin}\,}}}f)(r)\) containing the \(2N+1\) terms whose sample points are closest to r. The results obtained cover classes of unbounded functions f (Theorem 9). As one would expect, the speed of convergence to zero of the truncation error, which may be interpreted as the speed of an algorithm that approximates f(r) from \(2N+1\) samples, depends on the decay of \(\psi (r)\) as \(r\rightarrow 0+\) and \(r\rightarrow +\infty \). This leads us to the question as to how fast a Mellin bandlimited function \(\psi \) can decay if \(\psi (1)=1\). With an answer adopted from [21], we shall see that one can achieve convergence of order \(\mathcal {O}(e^{-N/(\log N)^\gamma })\) with \(\gamma >1\) as \(N\rightarrow \infty \), but it is in general not possible to have \(\mathcal {O}(e^{-\alpha N})\) with \(\alpha >0\).
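To get a feeling for this effect, the following sketch (not from the paper; all parameters are arbitrary choices) compares a crude truncation-error bound, namely the sum of the absolute values of the discarded terms, for the plain kernel \({{\,\textrm{lin}\,}}\) and for the multiplied kernel \(\psi \,{{\,\textrm{lin}\,}}\) with the Mellin–Fejér-type multiplier \(\psi (r)={{\,\textrm{sinc}\,}}^2\bigl (\tfrac{\delta }{2}\log r\bigr )\), which satisfies \(\psi (1)=1\) and is Mellin bandlimited to \([-\pi \delta ,\pi \delta ]\). Computations are done in the variable \(x=\log r\), where \({{\,\textrm{lin}\,}}(e^{-k}r^w)={{\,\textrm{sinc}\,}}(wx-k)\).

```python
import numpy as np

# Compare truncation tails (|k| > N) for the kernel lin versus the
# multiplied kernel psi*lin with psi(r) = sinc^2((delta/2) * log r).
w, delta, K = 8.0, 0.5, 1500
x = np.log(1.35)                       # evaluation point, x = log r
k = np.arange(-K, K + 1)
f_samp = 1.0 / (1.0 + (k / w)**2)      # samples of f(r) = 1/(1 + log(r)^2)

terms_plain = f_samp * np.sinc(w * x - k)
terms_mult = terms_plain * np.sinc(0.5 * delta * (w * x - k))**2

def tail(terms, N):
    """Sum of |discarded terms|: a crude bound for the truncation error."""
    return np.sum(np.abs(terms[np.abs(k) > N]))

tail_plain, tail_mult = tail(terms_plain, 50), tail(terms_mult, 50)
```

The extra factor \(\psi \) improves the decay of the terms from roughly \(k^{-3}\) to roughly \(k^{-5}\) here, so the multiplied series can be truncated far more aggressively.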

In Sect. 4 we overcome this limitation. Inspired by results for the WKS sampling series, we use a multiplier that is based on a Gaussian function. It is not Mellin bandlimited, and the corresponding generalized exponential sampling series cannot reproduce Mellin bandlimited functions f, but approximates them with a so-called aliasing error which adds to the truncation error. Nevertheless, one has the remarkable phenomenon that the sum of these two errors converges to zero at a higher rate than the truncation error of an exact formula (Theorem 11, Corollaries 2 and 3).

Finally, in Sect. 5, we carry out numerical experiments for three examples of admissible functions f in order to illustrate some of the results of Sects. 3 and 4. An algorithm based on the results of Sect. 4 may be the method of choice in computational exponential sampling.

As to the methods employed in this paper, we mention that we use only tools from Mellin analysis, which are based on the theory of polar-analytic functions, introduced in [5] and further developed in [6,7,8,9] (also see the forthcoming book [10]).

2 Basic notations and preliminary results

In what follows, we denote by \(\mathbb {N}\) and \(\mathbb {Z}\) the sets of positive integers and integers, respectively, by \(\mathbb {R}\) and \(\mathbb {R}^+\) the sets of real and positive real numbers, respectively, and by \(\mathbb {C}\) the set of complex numbers.

Let \(C(\mathbb {R}^+)\) be the space of all continuous functions defined on \(\mathbb {R}^+.\)

For \(1\le p < +\infty ,\) let \(L^p(\mathbb {R}^+)\)  be the space of all Lebesgue measurable and p-integrable complex-valued functions defined on \(\mathbb {R}^+\) endowed with the usual norm \(\Vert f\Vert _p.\) Analogous notations hold for functions defined on \(\mathbb {R}.\)

For \(p=1\) and \(c \in \mathbb {R},\) we introduce the space (see [14])

$$\begin{aligned} X_c = \{ f: \mathbb {R}^+\rightarrow \mathbb {C}\, :\, f(\cdot ) (\cdot )^{c-1}\in L^1(\mathbb {R}^+) \} \end{aligned}$$

endowed with the norm

$$\begin{aligned} \Vert f\Vert _{X_c} := \Vert f(\cdot ) (\cdot )^{c-1} \Vert _1 = \int _0^{+\infty } |f(u)|u^{c-1} du. \end{aligned}$$

More generally, let \(X^p_c\) denote the space of all functions \(f: \mathbb {R}^+\rightarrow \mathbb {C}\) such that \(f(\cdot ) (\cdot )^{c-1/p}\in L^p (\mathbb {R}^+)\), where \(1<p< \infty .\) Finally, for \(p=\infty \), we define \(X^\infty _c\) as the space comprising all measurable functions \(f : \mathbb {R}^+\rightarrow \mathbb {C}\) such that \(\Vert f\Vert _{X^\infty _c}:= \sup _{x>0}x^{c}|f(x)| < \infty ;\) see [16] for \(p=2\) and [10] for general p.

For \(h \in \mathbb {R}^+\) and \(c \in \mathbb {R}\), the Mellin translation operator \(\tau _h^c\), applying to functions \(f: \mathbb {R}^+ \rightarrow \mathbb {C},\) is defined by

$$\begin{aligned} (\tau _h^c f)(x) := h^c f(hx) \qquad (x\in \mathbb {R}^+). \end{aligned}$$

Setting \(\tau _h:= \tau ^0_h,\) we have \((\tau _h^cf)(x) = h^c (\tau _hf)(x)\) and \(\Vert \tau _h^c f\Vert _{X_c} = \Vert f\Vert _{X_c}.\)

The Mellin transform \(M_c\) is the linear and bounded operator defined for \(f\in X_c\) by (see, e.g., [14])

$$\begin{aligned} M_c[f](s) \equiv [f]^{\wedge }_{M_c} (s) := \int _0^{+\infty } u^{s-1} f(u) du \qquad (s=c+ it, t\in \mathbb {R}). \end{aligned}$$

More generally, for \(1<p \le 2,\) the Mellin transform \(M_c^p\) of \(f \in X^p_c\) is given by (see [16] for \(p=2\) and [10] for general p)

$$\begin{aligned} M_c^p[f](s) \equiv [f]^{\wedge }_{M_c^p} (s) = \text{ l.i.m.}_{\rho \rightarrow +\infty }~\int _{1/\rho }^\rho f(u) u^{s-1}du, \end{aligned}$$

for \(s=c+it,\) in the sense that

$$\begin{aligned} \lim _{\rho \rightarrow \infty }\bigg \Vert M_c^p[f](c+it) - \int _{1/\rho }^\rho f(u) u^{s-1}du\bigg \Vert _{L^{p'}(c + i \mathbb {R})} = 0, \end{aligned}$$

where \(p'\) is the conjugate exponent of p, that is, \(1/p+ 1/p'=1\).
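Since \(u^{s-1}=u^{c-1}e^{it\log u}\), the Mellin transform can be computed numerically by splitting the integrand into real and imaginary parts. The sketch below (illustrative only, assuming SciPy is available) checks the classical identity \(M_c[e^{-u}](s)=\Gamma (s)\); the choice of c and t is arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def mellin(f, c, t):
    """Numerical Mellin transform M_c[f](c + it) = int_0^inf u^{c+it-1} f(u) du,
    using u^{it} = cos(t log u) + i sin(t log u) to split real/imaginary parts."""
    re = quad(lambda u: f(u) * u**(c - 1) * np.cos(t * np.log(u)), 0, np.inf)[0]
    im = quad(lambda u: f(u) * u**(c - 1) * np.sin(t * np.log(u)), 0, np.inf)[0]
    return re + 1j * im

# Classical check: M_c[e^{-u}](s) = Gamma(s) on the line s = c + it.
val = mellin(lambda u: np.exp(-u), c=2.0, t=1.5)
err = abs(val - gamma(2.0 + 1.5j))
```

In the variable \(x=\log u\) this is just a Fourier/Laplace integral, which explains the close analogy between Mellin and Fourier analysis used throughout the paper.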

The basic tool for a self-contained and independent treatment of Mellin analysis is the theory of polar-analytic functions. The notion of polar analyticity was first introduced in [5], and subsequently the theory of these functions was developed in the papers [6,7,8,9]; also see the forthcoming book [10]. Here we recall the definition of this notion. Let \(\mathbb {H}:=\{(r, \theta ): r>0,\,\theta \in \mathbb {R}\}\) be the polar plane. By \(\mathscr {D}\) we denote a domain in \(\mathbb {H}\), that is, a nonempty, open and connected subset of \(\mathbb {H}.\)

Definition 1

We say that \(f:\mathscr {D} \rightarrow \mathbb {C}\) is polar-analytic on \(\mathscr {D}\) if for every \((r_0, \theta _0) \in \mathscr {D}\) the limit

$$\begin{aligned} \lim _{(r, \theta ) \rightarrow (r_0, \theta _0)} \frac{f(r,\theta ) - f(r_0, \theta _0)}{re^{i \theta } - r_0e^{i \theta _0}} =: (D_{\textrm{pol}}f)(r_0, \theta _0) \end{aligned}$$

exists and is the same howsoever \((r, \theta )\) approaches \((r_0, \theta _0)\) within \(\mathscr {D}.\)

It is easy to see that, writing \(f(r, \theta ) = u(r, \theta ) + i v(r, \theta )\) with u and v being the real and imaginary parts of f, the function f is polar-analytic on \(\mathscr {D}\) if and only if u and v have continuous partial derivatives on \(\mathscr {D}\) which satisfy the Cauchy–Riemann equations in polar form (see, e.g., [7, 10]).

Next, we recall the definitions of two fundamental function spaces.

Definition 2

For \(c \in \mathbb {R},\) \(T>0\) and \(p \in [1, +\infty ]\) the Mellin–Bernstein space \(\mathscr {B}^p_{c,T}\) comprises all functions \(f: \mathbb {H}\rightarrow \mathbb {C}\) with the following properties:

  (i) f is polar-analytic on \(\mathbb {H};\)

  (ii) \(f(\cdot , 0) \in X^p_{c};\)

  (iii) there exists a positive constant \(C_f\) such that

    $$\begin{aligned} \left| f(r,\theta )\right| \le C_f r^{-c}e^{T|\theta |}\qquad ((r, \theta ) \in \mathbb {H}). \end{aligned}$$

It is easily seen that the following inclusions hold:

$$\begin{aligned} \mathscr {B}_{c,T_1}^p \subset \mathscr {B}_{c,T_2}^p \qquad (0<T_1< T_2< +\infty , \; 1\le p \le +\infty ) \end{aligned}$$

and

$$\begin{aligned} \mathscr {B}_{c,T}^{p_1} \subset \mathscr {B}_{c,T}^{p_2} \qquad (T\in \mathbb {R}^+, \; 1\le p_1 <p_2\le +\infty ). \end{aligned}$$

Furthermore, the Mellin–Bernstein spaces are translation invariant in the following sense. If \(f\in \mathscr {B}_{c,T}^p\) and \((r_0, \theta _0)\) is any point of \(\mathbb {H}\), then the function

$$\begin{aligned} f_{(r_0,\theta _0)}\, :\, (r,\theta ) \, \longmapsto \, r_0^c f(r_0r, \theta +\theta _0) \end{aligned}$$
(3)

also belongs to \(\mathscr {B}_{c,T}^p\). Note that \(f_{(r_0,\theta _0)}\) is obtained from f by the Mellin translation \(\tau _{r_0}^c\) with respect to the first variable and an ordinary translation by \(\theta _0\) with respect to the second variable. For a proof one has to verify that \(f_{(r_0,\theta _0)}\) satisfies conditions (i)–(iii) of Definition 2. For (i) one can use the Cauchy–Riemann equations in polar form. For (ii) one may employ a result in [5, Theorem 4.2], which yields that

$$\begin{aligned} \Vert f_{(r_0,\theta _0)}(\cdot ,0)\Vert _{X_c^p} \le e^{T\left| \theta _0\right| } \Vert f(\cdot , 0)\Vert _{X_c^p}, \end{aligned}$$
(4)

and it is easily seen that (iii) holds with \(C_{f_{(r_0,\theta _0)}}=e^{T\left| \theta _0\right| }C_f.\)

Definition 3

For \(c \in \mathbb {R},\,T>0\) and \(p\in [1, 2]\), the Mellin–Paley–Wiener space \(B^p_{c, T}\) comprises all functions \(f\in C(\mathbb {R}^+)\cap X^p_c\) which are Mellin bandlimited to \([-T, T]\), that is, \([f]^\wedge _{M_c^p}(c+it) = 0\) a.e. for \(|t| >T.\)

The following basic result is a Mellin version of the classical Paley–Wiener theorem (see [5, 10]).

Theorem 1

A function \(\varphi \in X^2_c\) belongs to the Mellin–Paley–Wiener space \(B^2_{c,T}\) if and only if there exists a function \(f \in \mathscr {B}^2_{c,T}\) such that \(f(\cdot , 0) = \varphi (\cdot ).\)

In connection with sampling formulas, the second index of the Mellin–Bernstein spaces and the Mellin–Paley–Wiener spaces is often written as a multiple of \(\pi \), with the benefit that the sample points contain no \(\pi \). We adhere to this custom in what follows.

A second basic result is the following version of the Parseval formula in Mellin analysis (see [3]).

Theorem 2

Let \(f,g \in B^2_{c,\pi T}.\) Then

$$\begin{aligned} \int _0^\infty f(u) \overline{g(u)}u^{2c}\frac{du}{u} = \frac{1}{T}\sum _{k = -\infty }^\infty f(e^{k/T})\overline{g(e^{k/T})}e^{2ck/T}. \end{aligned}$$

We conclude this section by recalling the \({{\,\textrm{sinc}\,}}\) function, defined on \(\mathbb {C}\) by

$$\begin{aligned} {{\,\textrm{sinc}\,}}z := \frac{\sin (\pi z)}{\pi z} \hbox { if } z\in \mathbb {C}\setminus \{0\}, \quad {{\,\textrm{sinc}\,}}0=1. \end{aligned}$$

It can be used to introduce the \({{\,\textrm{lin}\,}}\) function by

$$\begin{aligned} {{\,\textrm{lin}\,}}_c(r):= r^{-c}{{\,\textrm{sinc}\,}}(\log r) \qquad (c\in \mathbb {R}, r\in \mathbb {R}^+). \end{aligned}$$

We agree that \({{\,\textrm{lin}\,}}(r):= {{\,\textrm{lin}\,}}_0(r)\). By \(r^{-c}e^{-ic\theta }{{\,\textrm{lin}\,}}(re^{i\theta })\) for \((r,\theta )\in \mathbb {H}\), we obtain a polar-analytic continuation of \({{\,\textrm{lin}\,}}_c\) to the whole of \(\mathbb {H}\).
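A minimal numerical sketch of \({{\,\textrm{lin}\,}}_c\) (illustrative only; the value of c is arbitrary), exploiting that NumPy's `sinc` already fills in the removable singularity at the origin:

```python
import numpy as np

def lin(r, c=0.0):
    """lin_c(r) = r**(-c) * sinc(log r), where np.sinc implements
    sin(pi x)/(pi x) with sinc(0) = 1."""
    r = np.asarray(r, dtype=float)
    return r**(-c) * np.sinc(np.log(r))

# lin_c(1) = 1, and lin_c vanishes at every other sample point e^k, k in Z:
vals = lin(np.exp(np.arange(-5, 6)), c=0.7)
```

These interpolation values \({{\,\textrm{lin}\,}}_c(e^k)=\delta _{k,0}\) are exactly what makes the series in Theorem A interpolate at the sample points.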

3 A Mellin bandlimited multiplier

We start with a lemma that will be useful for studying the convergence of the arising exponential sampling series.

Lemma 1

Let \(\Psi \in \mathscr {B}_{0,\pi \delta }^2\), where \(\delta \in \, ]0, 1]\). Then, for \(w>0\), we have

$$\begin{aligned} \sum _{k\in \mathbb {Z}}\left| \Psi (e^{-k}r^w, w\theta ) {{\,\textrm{lin}\,}}(e^{-k}r^w e^{iw\theta })\right| \le e^{\pi w(1+\delta )\left| \theta \right| }\Vert \Psi (\cdot ,0)\Vert _{X_0^2} \quad ((r,\theta )\in \mathbb {H}). \end{aligned}$$
(5)

The series converges uniformly on compact subsets of \(\mathbb {H}\).

Proof

Using the notation (3), we may write

$$\begin{aligned} \Psi (e^{-k}r^w,w\theta ) = \Psi _{(r^w,w\theta )}(e^{-k},0). \end{aligned}$$

By the translation invariance of the Mellin–Bernstein spaces and the Mellin–Paley–Wiener theorem, we have

$$\begin{aligned} \Psi _{(r^w,w\theta )}(\cdot , 0) \in B_{0,\pi \delta }^2 \subseteq B_{0,\pi }^2. \end{aligned}$$

Therefore the Mellin–Parseval formula (see [3, Theorem 4]) and (4) yield

$$\begin{aligned} \left(\sum _{k\in \mathbb {Z}}\left| \Psi (e^{-k}r^w, w\theta )\right| ^2\right)^{1/2} =\left\Vert\Psi _{(r^w,w\theta )}(\cdot ,0)\right\Vert_{X_0^2}\le e^{\pi \delta w \left| \theta \right| } \Vert \Psi (\cdot , 0)\Vert _{X_0^2}. \end{aligned}$$
(6)

Analogously we conclude that

$$\begin{aligned} \left(\sum _{k\in \mathbb {Z}} \left| {{\,\textrm{lin}\,}}(e^{-k}r^w e^{iw\theta })\right| ^2\right)^{1/2}\le e^{\pi w\left| \theta \right| } \Vert {{\,\textrm{lin}\,}}\Vert _{X_0^2} = e^{\pi w\left| \theta \right| }. \end{aligned}$$

Now (5) is an immediate consequence of the Cauchy–Schwarz inequality.

It remains to verify the assertion on uniform convergence. For \(N\in \mathbb {N}\), we have again by the Cauchy–Schwarz inequality and (6) that

$$\begin{aligned}&\sum _{\left| k\right|>N}\left| \Psi (e^{-k}r^w, w\theta ) {{\,\textrm{lin}\,}}(e^{-k}r^w e^{iw\theta })\right| \\&\qquad \quad \le \left(\sum _{\left| k\right|>N}\left| \Psi (e^{-k}r^w, w\theta )\right| ^2\right)^{1/2}\left(\sum _{\left| k\right|>N} \left| {{\,\textrm{lin}\,}}(e^{-k}r^w e^{iw\theta })\right| ^2\right)^{1/2}\\&\qquad \quad \le e^{\pi \delta w \left| \theta \right| } \Vert \Psi (\cdot , 0)\Vert _{X_0^2}\left(\sum _{\left| k\right| >N} \left| {{\,\textrm{lin}\,}}(e^{-k}r^w e^{iw\theta })\right| ^2\right)^{1/2}. \end{aligned}$$

This shows that it suffices to prove the assertion on uniform convergence for \(\sum _{k\in \mathbb {Z}} \left| {{\,\textrm{lin}\,}}(e^{-k}r^w e^{iw\theta })\right| ^2\) only.

Now let C be a compact subset of \(\mathbb {H}\). There exists an integer \(m\in \mathbb {N}\) such that \(\sup _{(r,\theta )\in C} w\left| \log r +i\theta \right| \le m\). Then for \(\left| k\right| >m\) we have

$$\begin{aligned} \left| {{\,\textrm{lin}\,}}(e^{-k}r^w e^{iw\theta })\right| = \frac{\left| \sin (\pi w(\log r+ i\theta ))\right| }{\left| \pi (w(\log r+i\theta )-k)\right| } \le \frac{e^{\pi m}}{\pi (\left| k\right| -m)} \end{aligned}$$

and so for \(N>m\),

$$\begin{aligned} \sum _{\left| k\right| >N} \left| {{\,\textrm{lin}\,}}(e^{-k}r^w e^{iw\theta })\right| ^2&\le \frac{2e^{2\pi m}}{\pi ^2} \sum _{j=1}^\infty \frac{1}{{(N+j-m)^2}}\\&\le \frac{2e^{2\pi m}}{\pi ^2}\int _{N-m}^\infty \frac{dt}{t^2} = \frac{2e^{2\pi m}}{\pi ^2(N-m)}. \end{aligned}$$

Hence, given \(\varepsilon >0\), there exists an \(N\in \mathbb {N}\) such that

$$\begin{aligned} \sum _{\left| k\right| >N} \left| {{\,\textrm{lin}\,}}(e^{-k}r^w e^{iw\theta })\right| ^2 < \varepsilon \end{aligned}$$

for \((r,\theta )\in C.\) This completes the proof. \(\square \)

Theorem 3

For \(\delta \in ]0,1[\), let \(\psi \in B_{0,\pi \delta }^2\) such that \(\psi (1)=1\). Suppose that \(f\in \mathscr {B}_{c,\pi T}^\infty \), where \(c\in \mathbb {R}\) and \(T>0\). Then for \(w=T/(1-\delta )\), we have

$$\begin{aligned} f(r,0)= \sum _{k\in \mathbb {Z}}f(e^{k/w},0) \psi (e^{-k}r^w){{\,\textrm{lin}\,}}_{c/w}(e^{-k}r^w) \qquad (r\in \mathbb {R}^+). \end{aligned}$$
(7)

The series converges absolutely and uniformly on compact subsets of \(\mathbb {R}^+.\)

Proof

For any \(\rho \in \mathbb {R}^+\) serving as a parameter, consider the function \(\psi (\rho ^w/(\cdot )^w)\). It is easily verified that it belongs to \(B_{0,\pi \delta w}^2\). By Theorem 1, there exists a function \(\Psi _\rho \in \mathscr {B}_{0,\pi \delta w}^2\) such that \(\Psi _\rho (r,0)=\psi (\rho ^w/r^w)\) for all \(r\in \mathbb {R}^+.\) Now we readily see from the definition of Mellin–Bernstein spaces that \(f\Psi _\rho \in \mathscr {B}_{c,\pi w}^2.\) Again by the Mellin–Paley–Wiener theorem,

$$\begin{aligned} f(\cdot ,0) \Psi _\rho (\cdot ,0) = f(\cdot ,0) \psi (\rho ^w/(\cdot )^w)\in B_{c,\pi w}^2. \end{aligned}$$

Therefore Theorem A with T replaced by w applies to this function and yields

$$\begin{aligned} f(r,0)\psi \left(\frac{\rho ^w}{r^w}\right) = \sum _{k\in \mathbb {Z}} f(e^{k/w},0)\psi (\rho ^w e^{-k}) {{\,\textrm{lin}\,}}_{c/w}(e^{-k}r^w). \end{aligned}$$

This holds for any \(\rho \in \mathbb {R}^+\). Substituting \(\rho =r\), we obtain (7).

It remains to verify the assertion on convergence. Since

$$\begin{aligned} {{\,\textrm{lin}\,}}_{c/w}(e^{-k}r^w)= r^{-c} e^{ck/w} {{\,\textrm{lin}\,}}(e^{-k}r^w), \end{aligned}$$

we may rewrite the right-hand side of (7) as

$$\begin{aligned} r^{-c} \sum _{k\in \mathbb {Z}} e^{ck/w}f(e^{k/w},0) \psi (e^{-k}r^w) {{\,\textrm{lin}\,}}(e^{-k}r^w), \end{aligned}$$

and so

$$\begin{aligned}&\sum _{k\in \mathbb {Z}}\left| f(e^{k/w},0) \psi (e^{-k}r^w){{\,\textrm{lin}\,}}_{c/w}(e^{-k}r^w)\right| \\&\qquad \quad \le r^{-c} \sup _{k\in \mathbb {Z}}\left| e^{ck/w}f(e^{k/w},0)\right| \sum _{k\in \mathbb {Z}}\left| \psi (e^{-k}r^w) {{\,\textrm{lin}\,}}(e^{-k}r^w)\right| \\&\qquad \quad \le r^{-c} \Vert f(\cdot , 0)\Vert _{X_c^\infty }\sum _{k\in \mathbb {Z}}\left| \psi (e^{-k}r^w) {{\,\textrm{lin}\,}}(e^{-k}r^w)\right| . \end{aligned}$$

Now the proof is completed by employing Lemma 1 with \(\theta =0\). \(\square \)

Remark 1

Note that formula (7) of Theorem 3 remains true for all \(w> T/(1-\delta )\). This is a consequence of the inclusions between Mellin–Bernstein spaces. Indeed, if \(f\in \mathscr {B}_{c,\pi T}^\infty \), then \(f\in \mathscr {B}_{c,\pi T_1}^\infty \) for all \(T_1 >T\) and so Theorem 3 allows us to set \(w=T_1/(1-\delta )\). A corresponding observation holds for some of the subsequent statements.

In the following we shall show that Theorem 3 can be extended in various ways. It will suffice to consider the case \(c=0\) only. For if \(f\in \mathscr {B}_{c,\pi T}^\infty \), then

$$\begin{aligned} g\, :\, (r,\theta ) \longmapsto \left(re^{i\theta }\right)^c f(r,\theta ) \end{aligned}$$

belongs to \(\mathscr {B}_{0,\pi T}^\infty \). Hence the reconstruction of f can be obtained by applying Theorem 3 with \(c=0\) to g and simply multiplying both sides of the corresponding equation (7) by \(r^{-c}\).

First we show that Theorem 3 can be extended to a reconstruction of f on the whole of \(\mathbb {H}\). For this we need a corresponding extension of the multiplier \(\psi \) as guaranteed by Theorem 1. We denote it by \(\Psi \).

Theorem 4

For \(\delta \in \, ]0,1[\), let \(\Psi \in \mathscr {B}_{0,\pi \delta }^2\) such that \(\Psi (1,0)=1.\) Suppose that \(f\in \mathscr {B}_{0,\pi T}^\infty \), where \(T>0\). Then for \(w=T/(1-\delta )\), we have

$$\begin{aligned} f(r,\theta )= \sum _{k\in \mathbb {Z}}f(e^{k/w},0) \Psi (e^{-k}r^w, w\theta ){{\,\textrm{lin}\,}}(e^{-k}r^we^{iw\theta }) \qquad \left((r,\theta )\in \mathbb {H}\right). \end{aligned}$$

The series converges absolutely and uniformly on compact subsets of \(\mathbb {H}\).

Proof

Since

$$\begin{aligned} \sup _{k\in \mathbb {Z}} \left| f(e^{k/w},0)\right| \le \Vert f(\cdot ,0)\Vert _{X_0^\infty } < \infty , \end{aligned}$$

the assertion on the convergence follows from Lemma 1. Hence

$$\begin{aligned} g(r,\theta ) := \sum _{k\in \mathbb {Z}}f(e^{k/w},0) \Psi (e^{-k}r^w, w\theta ){{\,\textrm{lin}\,}}(e^{-k}r^we^{iw\theta }) \end{aligned}$$

exists for all \((r,\theta )\in \mathbb {H}\) and defines a continuous function g. We also note that, as a function of \((r,\theta )\), each term of the series defining g is polar-analytic on \(\mathbb {H}\).

Now let \(\mathcal {R}\) be any rectangle in \(\mathbb {H}\) with its edges parallel to the axes of the \((r,\theta )\) coordinate system and denote by \(\partial \mathcal {R}\) its positively oriented boundary. Then, for any \(N\in \mathbb {N}\),

$$\begin{aligned} \int _{\partial \mathcal {R}} \sum _{\left| k\right| \le N}f(e^{k/w},0) \Psi (e^{-k}r^w, w\theta ){{\,\textrm{lin}\,}}(e^{-k}r^we^{iw\theta })e^{i\theta }(dr+ird\theta ) =0 \end{aligned}$$

as a consequence of Cauchy’s theorem for polar-analytic functions (see [6, Theorem 4.1]). On the other hand, given \(\varepsilon >0\), the uniform convergence on compact subsets of \(\mathbb {H}\) guarantees the existence of an integer \(N\in \mathbb {N}\) such that

$$\begin{aligned} \left| \int _{\partial \mathcal {R}} \sum _{\left| k\right| > N}f(e^{k/w},0) \Psi (e^{-k}r^w, w\theta ){{\,\textrm{lin}\,}}(e^{-k}r^we^{iw\theta })e^{i\theta }(dr+ird\theta )\right| <\varepsilon . \end{aligned}$$

This allows us to conclude that

$$\begin{aligned} \int _{\partial \mathcal {R}} g(r,\theta )e^{i\theta } (dr + ird\theta )=0. \end{aligned}$$

Now a Morera type theorem (see [6, Theorem 4.2]) implies that g is polar-analytic on \(\mathbb {H}\). Then \(h:=f-g\) is also polar-analytic on \(\mathbb {H}\) and, as a consequence of Theorem 3, we have \(h(r,0)=0\) for all \(r\in \mathbb {R}^+\). Hence the identity theorem for polar-analytic functions (see [7, Theorem 2]) implies that h is identically zero on \(\mathbb {H}\), as was to be shown. \(\square \)

3.1 Generalized exponential sampling

Now we want to generalize Theorem 3 in another way. Setting \(\varphi := \psi {{\,\textrm{lin}\,}}\) and denoting \(f(\cdot , 0)\) simply by f, we may write the right-hand side of (7) for \(c=0\) as

$$\begin{aligned} \left(S_w^\varphi f\right)(r):= \sum _{k\in \mathbb {Z}} f(e^{k/w}) \varphi (e^{-k}r^w) \qquad (r\in \mathbb {R}^+). \end{aligned}$$
(8)

Series of this form have been studied under the name generalized exponential sampling series (see [11, 12]). In order that \((S_w^\varphi f)(r)\) exists and approximates f(r) in some sense, it was required that \(\varphi \) belongs to a class \(\Phi \) which is defined as follows.

Definition 4

The class \(\Phi \) comprises all continuous functions \(\varphi \,:\,\mathbb {R}^+ \rightarrow \mathbb {C}\) with the following properties: For every \(u\in \mathbb {R}^+\), we have

  (i) \( \sum _{k\in \mathbb {Z}} \varphi (e^{-k}u) = 1,\)

  (ii) \( \sup _{u\in \mathbb {R}^+}\sum _{k\in \mathbb {Z}} \left| \varphi (e^{-k}u)\right| <\infty ,\)

  (iii) \( \lim _{\rho \rightarrow \infty }\sum _{\left| k-\log u\right| >\rho } \left| \varphi (e^{-k}u)\right| = 0,\) uniformly with respect to \(u\in \mathbb {R}^+\).

In particular, if \(\varphi \in \Phi \), then \((S_w^\varphi f)(r)\) exists for every bounded function \(f\,:\, \mathbb {R}^+ \rightarrow \mathbb {C}\) and \(\lim _{w\rightarrow \infty }(S_w^\varphi f)(r)=f(r)\) at each continuity point of f (see [11, Theorem 3.2]). In view of generalized exponential sampling, we can extend Theorem 3 as follows.

Theorem 5

For \(\delta \in \, ]0,1[\), let \(\psi \in B_{0,\pi \delta }^2\) such that \(\psi (1)=1\). Set \(\varphi := \psi {{\,\textrm{lin}\,}}\). Then the following statements hold:

  (a) \(\; \varphi \in \Phi \);

  (b) \(\; \varphi \log \in B_{0,\pi (1+\delta )}^2\);

  (c) for each bounded function \(f\,:\,\mathbb {R}^+\rightarrow \mathbb {C}\) the series (8) converges absolutely and uniformly on compact subsets of \(\mathbb {R}^+\) and

    $$\begin{aligned} \sup _{r\in \mathbb {R}^+}\left| \left(S_w^\varphi f\right)(r)\right| \le \Vert \psi \Vert _{X_0^2} \sup _{r\in \mathbb {R}^+}\left| f(r)\right| ; \end{aligned}$$
  (d) for each bounded function \(f\,:\,\mathbb {R}^+\rightarrow \mathbb {C}\) the series (8) is interpolating with respect to the sample points, that is,

    $$\begin{aligned} \left(S_w^\varphi f\right)(e^{\ell /w})= f(e^{\ell /w}) \quad (\ell \in \mathbb {Z}); \end{aligned}$$
  (e) for each \(f\in \mathscr {B}_{0,\pi w(1-\delta )}^\infty \), the restriction of f to \(\mathbb {R}^+\) is a fixed point of \(S_w^\varphi \), that is,

    $$\begin{aligned} S_w^\varphi f(\cdot ,0) = f(\cdot , 0). \end{aligned}$$

Proof

Clearly \(\varphi \) is continuous. Since the constant function \(g(r, \theta ) \equiv 1\) on \(\mathbb {H}\) belongs to any of the spaces \(\mathscr {B}^\infty _{0,T},\) for every \(T>0,\) it follows from Theorem 3 applied to the function g that

$$\begin{aligned} 1= \sum _{k\in \mathbb {Z}} \varphi (e^{-k}u) \end{aligned}$$

for all \(u\in \mathbb {R}^+\). As a consequence of Lemma 1 with \(w=1\) and \(\theta =0\), we have

$$\begin{aligned} \sum _{k\in \mathbb {Z}} \left| \varphi (e^{-k}u)\right| \le \Vert \psi \Vert _{X_0^2} \qquad (u\in \mathbb {R}^+). \end{aligned}$$

Next, let \(\rho \ge 1\) and denote by \(\lfloor \rho \rfloor \) the largest integer not exceeding \(\rho \). Then, by the Cauchy–Schwarz inequality and Theorem 2,

$$\begin{aligned} \sum _{\left| k-\log u\right|>\rho } \left| \varphi (e^{-k}u)\right|&\le \left(\sum _{k\in \mathbb {Z}}\left| \psi (e^{-k}u)\right| ^2\right)^{1/2} \left(\sum _{\left| k-\log u\right|>\rho } \left| {{\,\textrm{lin}\,}}(e^{-k}u)\right| ^2\right)^{1/2}\\&\le \Vert \psi \Vert _{X_0^2}\left(\sum _{\left| k-\log u\right| >\rho } \frac{1}{{\pi ^2(\log u -k)^2}}\right)^{1/2}\\&\le \frac{\sqrt{2}}{\pi }\, \Vert \psi \Vert _{X_0^2}\left(\sum _{n=\lfloor \rho \rfloor +1}^\infty \frac{1}{{n^2}}\right)^{1/2}\\&\le \frac{\sqrt{2}}{\pi }\, \Vert \psi \Vert _{X_0^2}\left(\int _{\lfloor \rho \rfloor }^\infty \frac{dx}{x^2}\right)^{1/2} = \frac{\sqrt{2}}{\pi }\, \Vert \psi \Vert _{X_0^2} \,\frac{1}{{{\lfloor \rho \rfloor }^{1/2}}}. \end{aligned}$$

This shows that the left-hand side converges uniformly to zero as \(\rho \rightarrow \infty .\) Altogether, we see that \(\varphi \in \Phi \).

Next we note that

$$\begin{aligned} \varphi (r)\log r = \psi (r) {{\,\textrm{lin}\,}}r \log r = \psi (r) \frac{\sin (\pi \log r)}{\pi } \qquad (r\in \mathbb {R}^+). \end{aligned}$$

Since \(\psi \in B_{0, \pi \delta }^2,\) there exists by Theorem 1 a function \(\Psi \in \mathscr {B}^2_{0, \pi \delta }\) such that \(\psi (r) = \Psi (r,0)\) and so the function \(\Theta (r, \theta ) :=\Psi (r,\theta ) \frac{\sin (\pi (\log r + i\theta ))}{\pi }\) belongs to \(\mathscr {B}^2_{0, \pi (\delta +1)}\). Again by Theorem 1, one has that \(\varphi (r) \log r= \Theta (r,0)\) belongs to \(B^2_{0, \pi (\delta +1)}\), and so (b) holds.

Assertion (c) is verified with the help of Lemma 1.

As regards (d), it suffices to note that if \(\ell \in \mathbb {Z}\) and \(r=e^{\ell /w}\), then

$$\begin{aligned} \varphi (e^{-k}r^w) = \varphi (e^{\ell -k}) = \psi (e^{\ell -k}){{\,\textrm{lin}\,}}(e^{\ell -k}) = \psi (e^{\ell -k}){{\,\textrm{sinc}\,}}(\ell -k) = \delta _{\ell ,k} \end{aligned}$$

with Kronecker’s delta.

Finally assertion (e) is contained in Theorem 3. \(\square \)
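The interpolation property (d) can also be seen numerically: at \(r=e^{\ell /w}\) the factor \({{\,\textrm{lin}\,}}(e^{-k}r^w)={{\,\textrm{sinc}\,}}(\ell -k)\) annihilates every term except \(k=\ell \), whatever \(\psi \) is. A sketch in the variable \(x=\log r\) (illustrative only; the choices of \(f\), \(\psi \), w and the truncation are arbitrary):

```python
import numpy as np

w, delta, K = 4.0, 0.5, 400
psi_log = lambda u: np.sinc(0.5 * delta * u)**2   # Fejer-type multiplier, psi(1) = 1
f_log = lambda u: np.cos(u) / (1.0 + u**2)        # some bounded f, in x = log r

def S(x):
    """Truncated series (8) with kernel phi = psi * lin, evaluated at r = e^x."""
    k = np.arange(-K, K + 1)
    u = w * x - k                                  # = log(e^{-k} r^w)
    return np.sum(f_log(k / w) * psi_log(u) * np.sinc(u))

l = 3                                              # sample point r = e^{l/w}
interp_err = abs(S(l / w) - f_log(l / w))
```

Up to rounding, the truncated series reproduces the sample value exactly, because every discarded or off-diagonal term vanishes at the sample point.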

It is remarkable that Theorem 5 has a converse. For this we need statements (b) and (d) only. In fact, any generalized exponential sampling series (8) that is interpolating and has a Mellin bandlimited kernel \(\varphi \) satisfying (b) is a multiplier version of the sampling series of Theorem A.

Theorem 6

A generalized exponential sampling series (8) has the properties (b) and (d) of Theorem 5 if and only if there exists a \(\psi \in B_{0,\pi \delta }^2\) such that \(\psi (1)=1\) and \(\varphi =\psi {{\,\textrm{lin}\,}}\).

Proof

  The sufficiency is already contained in Theorem 5.

Necessity. Suppose that statements (b) and (d) of Theorem 5 hold. Then, by Theorem 1, there exists a function \(F\in \mathscr {B}_{0,\pi (1+\delta )}^2\) such that \(F(r,0)=\varphi (r) \log r\) for all \(r\in \mathbb {R}^+\). Now consider

$$\begin{aligned} G(r,\theta ) := \frac{\pi F(r,\theta )}{\sin (\pi (\log r + i\theta ))}. \end{aligned}$$
(9)

The denominator on the right-hand side has simple zeros at the points \((e^\ell ,0)\) for \(\ell \in \mathbb {Z}\) and is different from zero at all other points of \(\mathbb {H}\). Hence G is polar-analytic on \(\mathbb {H}\setminus \{(e^\ell ,0)\,:\,\ell \in \mathbb {Z}\}\). Since \({{\,\textrm{lin}\,}}\) is bounded on \(\mathbb {R}^+\), we conclude from statement (d) with \(w=1\) that

$$\begin{aligned} \left(S_1^\varphi {{\,\textrm{lin}\,}}\right)(e^\ell ) = \sum _{k\in \mathbb {Z}}{{\,\textrm{lin}\,}}(e^k) \varphi (e^{\ell -k}) = \varphi (e^\ell ) = {{\,\textrm{lin}\,}}(e^\ell ). \end{aligned}$$
(10)

Thus

$$\begin{aligned} F(e^\ell ,0)= \varphi (e^\ell )\ell = {{\,\textrm{sinc}\,}}(\ell )\ell =0 \end{aligned}$$

for all \(\ell \in \mathbb {Z}\). Hence the points \((e^\ell ,0)\) for \(\ell \in \mathbb {Z}\) are all removable isolated singularities of G, and so G has a continuation that is polar-analytic on the whole of \(\mathbb {H}\).

It is easily verified that

$$\begin{aligned} \left| \sin (\pi (\log r + i\theta ))\right| \ge \left| \sinh (\pi \theta )\right| \ge \frac{1 -e^{-2\pi }}{2} e^{\pi \left| \theta \right| } \qquad (\left| \theta \right| \ge 1). \end{aligned}$$
(11)

As a consequence of statement (b), we have

$$\begin{aligned} \left| F(r,\theta )\right| \le C_F e^{\pi (1+\delta )\left| \theta \right| } \qquad ((r,\theta )\in \mathbb {H}) \end{aligned}$$
(12)

with a constant \(C_F\) depending on F only. Hence

$$\begin{aligned} \left| G(r,\theta )\right| \le \frac{2\pi C_F}{1-e^{-2\pi }}\, e^{\pi \delta \left| \theta \right| } \qquad (\left| \theta \right| \ge 1). \end{aligned}$$
(13)

Next, let \(N\in \mathbb {N}\) and let \(\mathcal {R}_N\) be a rectangle with vertices at \((e^{\pm (N+1/2)}, \pm 1)\). On its vertical edges, we have \(\left| \sin (\pi (\log r +i\theta ))\right| \ge \cosh (\pi \theta )\), and so (12) yields that on these line segments

$$\begin{aligned} \left| G(r,\theta )\right| \le \frac{2\pi C_F}{1+e^{-2\pi \left| \theta \right| }}\, e^{\pi \delta \left| \theta \right| } \le \frac{2\pi C_F}{1+e^{-2\pi }}\, e^{\pi \delta }. \end{aligned}$$
(14)

On the horizontal edges, it follows from (13) that

$$\begin{aligned} \left| G(r,\theta )\right| \le \frac{2\pi C_F}{1-e^{-2\pi }}\, e^{\pi \delta }. \end{aligned}$$
(15)

The bounds (14) and (15) do not depend on N. By the maximum principle for polar-analytic functions (see [9, Theorem 12]), it follows that

$$\begin{aligned} \left| G(r,\theta )\right| \le \frac{2\pi C_F}{1-e^{-2\pi }}\, e^{\pi \delta } \qquad (\left| \theta \right| \le 1). \end{aligned}$$

Combined with (13), we see that

$$\begin{aligned} \left| G(r,\theta )\right| \le C_G \,e^{\pi \delta \left| \theta \right| } \qquad ((r,\theta )\in \mathbb {H}), \end{aligned}$$
(16)

where

$$\begin{aligned} C_G = \frac{2\pi C_F}{1-e^{-2\pi }}\, e^{\pi \delta }. \end{aligned}$$

We would like to show that \(G( \cdot , 0)\in X_0^2\), which is a little delicate because of the removable singularities. Therefore, we first show that \(G(\cdot ,1)\in X_0^2\). By the translation invariance of Mellin–Bernstein spaces, we know that \(F(\cdot , 1)\in X_0^2\) and by (4),

$$\begin{aligned} \Vert F(\cdot , 1)\Vert _{X_0^2} \le e^{\pi (1+\delta )} \Vert F(\cdot , 0)\Vert _{X_0^2}. \end{aligned}$$

From this we conclude with the help of (11) that \(G(\cdot , 1)\in X_0^2\) and

$$\begin{aligned} \Vert G(\cdot , 1)\Vert _{X_0^2} \le \frac{2\pi e^{\pi \delta }}{1-e^{-2\pi }} \Vert F(\cdot , 0)\Vert _{X_0^2}. \end{aligned}$$

Now, by employing the Cauchy–Riemann equations in polar form and manipulating (16) slightly, we find that \(G_1\,:\,(r,\theta ) \mapsto G(r,\theta +1)\) belongs to \(\mathscr {B}_{0,\pi \delta }^2\). But then, by the translation invariance, G itself belongs to \(\mathscr {B}_{0,\pi \delta }^2\).

Finally, setting \(\psi := G(\cdot , 0)\), we have by Theorem 1 that \(\psi \in B_{0,\pi \delta }^2\) and

$$\begin{aligned} \psi (r) {{\,\textrm{lin}\,}}(r) = \frac{F(r,0)}{\log r} = \varphi (r) \qquad (r\in \mathbb {R}^+). \end{aligned}$$

Furthermore \(\psi (1) = \varphi (1) = {{\,\textrm{lin}\,}}(1) =1 \) as a consequence of (10) for \(\ell =0\). This completes the proof. \(\square \)

3.2 Approximation of bounded, continuous functions

The approximation power of \(S_w^{\psi {{\,\textrm{lin}\,}}}f\) can be expressed in terms of the best approximation of f by functions from \(\mathscr {B}_{0,\pi T}^\infty \), where \(T=w(1-\delta )\). For this, we introduce

$$\begin{aligned} E_{\pi T}^\infty [f] := \inf _{g\in \mathscr {B}_{0,\pi T}^\infty } \Vert f-g(\cdot ,0)\Vert _{X_0^\infty }. \end{aligned}$$

The announced result can now be stated as follows:

Proposition 7

For \(\delta \in ]0,1[\), let \(\psi \in B_{0,\pi \delta }^2\) such that \(\psi (1)=1.\) Suppose that \(f\in C(\mathbb {R}^+)\cap X_0^\infty \). Then for \(w>0\) and \(T=w(1-\delta )\), we have

$$\begin{aligned} \bigl \Vert f- S_w^{\psi {{\,\textrm{lin}\,}}}f\bigr \Vert _{X_0^\infty } \le \left(1+\Vert \psi \Vert _{X_0^2}\right) E_{\pi T}^\infty [f]. \end{aligned}$$
(17)

Proof

  Given \(\varepsilon >0\), there exists a function \(g_\varepsilon \in \mathscr {B}_{0,\pi T}^\infty \) such that \(\Vert f-g_\varepsilon (\cdot ,0)\Vert _{X_0^\infty } \le E_{\pi T}^\infty [f]+\varepsilon .\) By Theorem 3, we have \(g_\varepsilon (r,0)=(S_w^{\psi {{\,\textrm{lin}\,}}}g_\varepsilon (\cdot , 0))(r)\). Now employing statement (c) of Theorem 5, we obtain
$$\begin{aligned} \bigl \Vert f- S_w^{\psi {{\,\textrm{lin}\,}}}f\bigr \Vert _{X_0^\infty }&\le \Vert f-g_\varepsilon (\cdot ,0)\Vert _{X_0^\infty } + \bigl \Vert S_w^{\psi {{\,\textrm{lin}\,}}}\bigl (g_\varepsilon (\cdot ,0)-f\bigr )\bigr \Vert _{X_0^\infty }\\&\le \left(1+\Vert \psi \Vert _{X_0^2}\right)\bigl (E_{\pi T}^\infty [f]+\varepsilon \bigr ). \end{aligned}$$

The left-hand side does not depend on \(\varepsilon \). Letting \(\varepsilon \rightarrow 0\), we arrive at (17). \(\square \)

Note that \(S_w^{\psi {{\,\textrm{lin}\,}}}f\) comes close to a best approximation by functions from \(\mathscr {B}_{0,\pi T}^\infty \) but misses it in two ways. First \(S_w^{\psi {{\,\textrm{lin}\,}}}f\) is the restriction to \(\mathbb {R}^+\) of a function from \(\mathscr {B}_{0,\pi w(1+\delta )}^\infty \) while \(T=w(1-\delta )\). Secondly, there is a factor \(1+\Vert \psi \Vert _{X_0^2}\) on the right-hand side of (17), which is at least as big as \(1+\delta ^{-1/2}\). Indeed, by Theorem 2, we have

$$\begin{aligned} \Vert \psi \Vert _{X_0^2}^2 = \frac{1}{{\delta }} \sum _{k\in \mathbb {Z}}\left| \psi (e^{k/\delta })\right| ^2 \ge \frac{1}{{\delta }} \left| \psi (1)\right| ^2 = \frac{1}{{\delta }}, \end{aligned}$$

where equality is attained for \(\psi (r)={{\,\textrm{lin}\,}}(r^\delta )\).
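The equality case can be confirmed numerically. A minimal sketch (Python; truncating the bilateral sum at \(\left| k\right| \le 20\) is an illustrative assumption, harmless here because all off-centre samples vanish):

```python
import math

def lin(x):
    t = math.log(x)
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

delta = 0.5
psi = lambda r: lin(r ** delta)

# Theorem 2: ||psi||^2_{X_0^2} = (1/delta) * sum_k |psi(e^{k/delta})|^2;
# for psi(r) = lin(r^delta) the samples are sinc(k) = delta_{k,0}, so the sum is 1
norm_sq = sum(psi(math.exp(k / delta)) ** 2 for k in range(-20, 21)) / delta
assert abs(norm_sq - 1.0 / delta) < 1e-12   # equality case: exactly 1/delta
```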

3.3 The truncation error

Next we want to estimate the truncation error of the series \((S_w^{\psi {{\,\textrm{lin}\,}}}f)(r)\). It will depend on the decay of \(\left| \psi (r)\right| \) as \(r\rightarrow 0+\) and \(r\rightarrow +\infty \). Therefore we will classify multipliers \(\psi \) according to their decay.

Definition 5

Let \(\delta \in \, ]0,1[\) and let \(\mu \) be a positive, decreasing function on \([0, +\infty [\) such that \(\mu (t)\rightarrow 0\) as \(t\rightarrow +\infty .\) Then the class \(B_{0,\pi \delta }^2(\mu )\) comprises all functions \(\psi \in B_{0,\pi \delta }^2\) such that \(\psi (1)=1\) and

$$\begin{aligned} \left| \psi (r)\log r\right| \le \mu (\left| \log r\right| ) \qquad (r\in \mathbb {R}^+). \end{aligned}$$
(18)

Example 1

For an integer \(m\ge 2\), let \(\psi (r):={{\,\textrm{lin}\,}}^m(r^{\delta /m})\) and let \(\kappa :=\pi \delta /m\). Then \(\psi \in B_{0,\pi \delta }^2\), \(\psi (1)=1\) and

$$\begin{aligned} \left| \psi (r) \log r\right| = \left| \frac{\sin ^m(\log r^\kappa )}{\kappa (\log r^\kappa )^{m-1}}\right| \le \frac{1}{{\kappa ^m\left| \log r\right| ^{m-1}}} \qquad (r\in \mathbb {R}^+\setminus \{1\}). \end{aligned}$$

We also have

$$\begin{aligned} \left| \psi (r) \log r\right| = \left| {{\,\textrm{lin}\,}}^{m-1}(r^{\delta /m})\right| \frac{\left| \sin (\log r^\kappa )\right| }{\kappa } \le \frac{1}{{\kappa }} \qquad (r\in \mathbb {R}^+). \end{aligned}$$

The first bound does not extend to \(r=1\) and the second is not decreasing, but the harmonic mean of these two bounds leads to

$$\begin{aligned} \left| \psi (r) \log r\right| \le \frac{2}{\kappa ( 1+ (\kappa \left| \log r\right| )^{m-1})} \qquad (r\in \mathbb {R}^+). \end{aligned}$$

Hence

$$\begin{aligned} \mu (t):= \frac{2}{\kappa ( 1+ (\kappa t)^{m-1})} \qquad (t\in [0, +\infty [) \end{aligned}$$

defines an admissible function \(\mu \) that yields (18).
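A numerical spot-check of (18) for this \(\mu \) may be sketched as follows (Python; the grid and the parameters \(\delta =1/2\), \(m=8\) are illustrative assumptions):

```python
import math

def lin(x):
    t = math.log(x)
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

delta, m = 0.5, 8
kappa = math.pi * delta / m
psi = lambda r: lin(r ** (delta / m)) ** m                     # Example 1 multiplier
mu = lambda t: 2.0 / (kappa * (1.0 + (kappa * t) ** (m - 1)))  # harmonic-mean bound

# check |psi(r) log r| <= mu(|log r|) on a grid of log r values in [-20, 20]
for i in range(-400, 401):
    t = i / 20.0
    assert abs(psi(math.exp(t)) * t) <= mu(abs(t)) + 1e-12
```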

When \(r\in \mathbb {R}^+\) is given and we want to approximate \((S_w^{\psi {{\,\textrm{lin}\,}}}f)(r)\) by \(2N+1\) terms of this series, it seems reasonable to take those terms whose sample points \(e^{k/w}\) are closest to r. This means that we associate with the pair \((w,r)\) the integer \(N_{w,r}:=\lfloor w\log r + 1/2\rfloor \) and keep only those terms of the series for which

$$\begin{aligned} k \in \left\{ N_{w,r}, N_{w,r} \pm 1, \dots , N_{w,r} \pm N\right\} . \end{aligned}$$

The truncated series obtained in this way shall be denoted by

$$\begin{aligned} \left(S_{w,N}^{\psi {{\,\textrm{lin}\,}}}f\right)(r) := \sum _{k=N_{w,r}-N}^{N_{w,r}+ N} f(e^{k/w}) \psi (e^{-k}r^w) {{\,\textrm{lin}\,}}(e^{-k}r^w). \end{aligned}$$
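A direct implementation of the truncated series may be sketched as follows (Python; the definition of \({{\,\textrm{lin}\,}}\) and all test parameters are illustrative assumptions). The usage example takes the Example 1 multiplier with \(m=2\) and \(f(r)=\cos (\pi \log r)\), the restriction to \(\mathbb {R}^+\) of a function in \(\mathscr {B}^\infty _{0,\pi }\), which the untruncated series reproduces by Theorem 3:

```python
import math

def lin(x):
    t = math.log(x)
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

def S_trunc(f, psi, w, N, r):
    """Truncated series (S_{w,N}^{psi lin} f)(r): the 2N+1 terms whose
    sample points e^{k/w} are closest to r."""
    n0 = math.floor(w * math.log(r) + 0.5)      # N_{w,r}
    total = 0.0
    for k in range(n0 - N, n0 + N + 1):
        x = math.exp(w * math.log(r) - k)       # x = e^{-k} r^w
        total += f(math.exp(k / w)) * psi(x) * lin(x)
    return total

# illustrative setting: delta = 1/2, m = 2, T = 1, w = T/(1-delta) = 2
delta, m = 0.5, 2
psi = lambda r: lin(r ** (delta / m)) ** m
f = lambda r: math.cos(math.pi * math.log(r))
approx = S_trunc(f, psi, 2.0, 50, 1.7)
assert abs(approx - f(1.7)) < 1e-2
```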

We also use the notation

$$\begin{aligned} \Vert f\Vert _\infty := \sup _{r\in \mathbb {R}^+}\left| f(r)\right| \end{aligned}$$

for functions f defined on \(\mathbb {R}^+\).

The following proposition provides an estimate of the truncation error in terms of \(\mu \) and N.

Proposition 8

Let \(\delta \in \, ]0,1[\), \(\psi \in B_{0,\pi \delta }^2(\mu )\) and \(w>0\). Suppose that f is a function defined on \(\mathbb {R}^+\) such that \(\Vert f\Vert _\infty < \infty \). Then

$$\begin{aligned} \left| \bigl (S_w^{\psi {{\,\textrm{lin}\,}}}f\bigr )(r) - \bigl (S_{w,N}^{\psi {{\,\textrm{lin}\,}}}f\bigr )(r)\right| < \left| \sin (\pi w\log r)\right| \,\Vert f\Vert _\infty \, \frac{2\mu (N+{\textstyle \frac{1}{2}})}{\pi N} \end{aligned}$$
(19)

for \(N\in \mathbb {N}\) and \(r\in \mathbb {R}^+.\)

Proof

  Obviously,
$$\begin{aligned}&\left| \bigl (S_w^{\psi {{\,\textrm{lin}\,}}}f\bigr )(r) - \bigl (S_{w,N}^{\psi {{\,\textrm{lin}\,}}}f\bigr )(r)\right| \le \sum _{\left| k-N_{w,r}\right|>N}\left| f(e^{k/w})\psi (e^{-k}r^w) {{\,\textrm{lin}\,}}(e^{-k}r^w)\right| \\&\qquad \le \Vert f\Vert _\infty \, \frac{\left| \sin (\pi w\log r)\right| }{\pi } \sum _{\left| j\right| >N} \frac{\left| \psi (\exp (\log r^w -N_{w,r}-j))\right| }{\left| \log r^w -N_{w,r}-j\right| }. \end{aligned}$$

Since \(\log r^w -N_{w,r}\in \, ]-{\textstyle \frac{1}{2}}, {\textstyle \frac{1}{2}}]\), we find that \(\left| \log r^w -N_{w,r}- j\right| \ge \left| j\right| -{\textstyle \frac{1}{2}}\) for \(\left| j\right| >N\) and so (18) implies

$$\begin{aligned} \left| \psi (\exp (\log r^w -N_{w,r}-j))\right| \le \frac{\mu (\left| j\right| -{\textstyle \frac{1}{2}})}{\left| j\right| -{\textstyle \frac{1}{2}}} \le \frac{\mu (N+{\textstyle \frac{1}{2}})}{\left| j\right| -{\textstyle \frac{1}{2}}} \qquad (\left| j\right| >N). \end{aligned}$$

Hence
$$\begin{aligned} \left| \bigl (S_w^{\psi {{\,\textrm{lin}\,}}}f\bigr )(r) - \bigl (S_{w,N}^{\psi {{\,\textrm{lin}\,}}}f\bigr )(r)\right| \le \left| \sin (\pi w\log r)\right| \,\Vert f\Vert _\infty \, \frac{2\mu (N+{\textstyle \frac{1}{2}})}{\pi }\sum _{j=N+1}^\infty \frac{1}{(j-{\textstyle \frac{1}{2}})^2}. \end{aligned}$$

Finally, noting that

$$\begin{aligned} \frac{1}{{(j-{\textstyle \frac{1}{2}})^2}} < \int _{j-1}^j \frac{dx}{x^2}, \end{aligned}$$

we obtain

$$\begin{aligned} \sum _{j=N+1}^\infty \frac{1}{{(j-{\textstyle \frac{1}{2}})^2}} < \int _N^\infty \frac{dx}{x^2} = \frac{1}{{N}}. \end{aligned}$$
(20)

This completes the proof. \(\square \)

Remark 2

The bound on the right-hand side of (19) can be simplified by replacing it with

$$\begin{aligned} \sup _{r\in \mathbb {R}^+} \left| f(r)\right| \, \frac{2\mu (N)}{\pi N}. \end{aligned}$$

However, the factor \(\left| \sin (\pi w\log r)\right| \) in (19) shows that the truncated series is still interpolating at the sample points \(e^{k/w}\).

Proposition 8 combined with Theorem 3 implies the following result.

Corollary 1

Let \(\delta \in \,]0,1[\) and \(\psi \in B_{0,\pi \delta }^2(\mu )\). Suppose that \(f\in \mathscr {B}_{0,\pi T}^\infty \), where \(T>0\). Then, for \(w=T/(1-\delta )\), we have

$$\begin{aligned} \left| f(r,0) - \left(S_{w,N}^{\psi {{\,\textrm{lin}\,}}}f(\cdot ,0)\right)(r)\right| <\left| \sin (\pi w\log r)\right| \Vert f\Vert _{X_0^\infty }\frac{2\mu (N+{\textstyle \frac{1}{2}})}{\pi N} \end{aligned}$$

for \(N\in \mathbb {N}\) and \( r\in \mathbb {R}^+\).
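Corollary 1 can be tested numerically. The sketch below (Python; all parameter choices are illustrative assumptions) compares the true truncation error with the stated bound for \(f(r)=\cos (\pi \log r)\), the restriction of a function in \(\mathscr {B}^\infty _{0,\pi }\), using the Example 1 multiplier with \(m=2\):

```python
import math

def lin(x):
    t = math.log(x)
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

delta, m = 0.5, 2
kappa = math.pi * delta / m
psi = lambda r: lin(r ** (delta / m)) ** m
mu = lambda t: 2.0 / (kappa * (1.0 + (kappa * t) ** (m - 1)))
f = lambda r: math.cos(math.pi * math.log(r))   # ||f||_{X_0^infty} = 1, T = 1

w, N, r = 2.0, 40, 1.7                          # w = T/(1-delta)
n0 = math.floor(w * math.log(r) + 0.5)
approx = sum(f(math.exp(k / w)) * psi(math.exp(w * math.log(r) - k))
             * lin(math.exp(w * math.log(r) - k)) for k in range(n0 - N, n0 + N + 1))
err = abs(f(r) - approx)
bound = abs(math.sin(math.pi * w * math.log(r))) * 2.0 * mu(N + 0.5) / (math.pi * N)
assert err <= bound                              # the estimate of Corollary 1
```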

Now we want to show that Proposition 8 has a generalization that admits unbounded functions f. Roughly speaking, this is achieved by using a portion of the decay of the multiplier \(\psi \) to compensate the growth of f, while the remaining portion guarantees the decay of the truncation error as \(N\rightarrow \infty \).

Theorem 9

For \(\delta \in \,]0,1[\), let \(\psi \in B_{0,\pi \delta }^2(\mu )\) and let f be a function defined on \(\mathbb {R}^+\). Suppose that

$$\begin{aligned} \left| f(r)\right| \le K(\left| \log r\right| ) \qquad (r\in \mathbb {R}^+), \end{aligned}$$

where K is a non-decreasing, positive function defined on \([0, +\infty [\) such that

$$\begin{aligned} K(t) \mu (t)^\lambda \le K(0) \mu (0)^\lambda \qquad (0\le t<+\infty ) \end{aligned}$$
(21)

for some \(\lambda \in [0, 1]\). Then for \(w>1\) the generalized exponential sampling series \((S_w^{\psi {{\,\textrm{lin}\,}}}f)(r)\) converges absolutely for each \(r\in \mathbb {R}^+\) and

$$\begin{aligned}&\left| \left(S_w^{\psi {{\,\textrm{lin}\,}}}f\right)(r) - \left(S_{w,N}^{\psi {{\,\textrm{lin}\,}}}f\right)(r)\right| \nonumber \\&\qquad \quad < \left| \sin (\pi w\log r)\right| K\left(\frac{w\left| \log r\right| +1}{w-1}\right) \mu (0)^\lambda \, \frac{2\mu (N+{\textstyle \frac{1}{2}})^{1-\lambda }}{\pi N} \end{aligned}$$
(22)

for \(N\in \mathbb {N}\).

Proof

  It suffices to prove (22). We have

$$\begin{aligned}&\left| \left(S_w^{\psi {{\,\textrm{lin}\,}}}f\right)(r) - \left(S_{w,N}^{\psi {{\,\textrm{lin}\,}}}f\right)(r)\right| \\&\qquad \quad \le \sum _{\left| k-N_{w,r}\right|>N}\left| f(e^{k/w})\psi (e^{-k}r^w) {{\,\textrm{lin}\,}}(e^{-k}r^w)\right| \\&\qquad \quad = \sum _{\left| j\right|>N} \left| f(e^{(j+N_{w,r})/w})\psi (e^{w\log r -N_{w,r}-j}) {{\,\textrm{lin}\,}}(e^{w\log r -N_{w,r}-j})\right| \\&\qquad \quad \le \left| \sin (\pi w\log r)\right| \sum _{\left| j\right| >N} K\left(\frac{\left| j+N_{w,r}\right| }{w}\right) \frac{\mu (\left| w\log r - N_{w,r}-j\right| )}{\pi \left| w\log r - N_{w,r}-j\right| ^2}. \end{aligned}$$

Next we estimate the term

$$\begin{aligned} T_j:= K\left(\frac{\left| j+N_{w,r}\right| }{w}\right) \frac{\mu (\left| w\log r - N_{w,r}-j\right| )}{\pi \left| w\log r - N_{w,r}-j\right| ^2} \qquad (\left| j\right| >N). \end{aligned}$$

Noting again that \(\xi := w\log r - N_{w,r} \in \,]-{\textstyle \frac{1}{2}}, {\textstyle \frac{1}{2}}]\), we find that

$$\begin{aligned} T_j&= K\left(\left| \log r +\frac{j-\xi }{w}\right| \right) \frac{\mu (\left| j-\xi \right| )}{\pi \left| j-\xi \right| ^2}\\&\le K\left(\left| \log r\right| + \frac{\left| j\right| +{\textstyle \frac{1}{2}}}{w}\right) \frac{\mu (\left| j\right| -{\textstyle \frac{1}{2}})^\lambda }{\pi (\left| j\right| -{\textstyle \frac{1}{2}})^2} \, \mu (N+{\textstyle \frac{1}{2}})^{1-\lambda } \quad (\left| j\right| >N). \end{aligned}$$

We distinguish two cases: If

$$\begin{aligned} \left| \log r\right| + \frac{\left| j\right| +{\textstyle \frac{1}{2}}}{w} \le \left| j\right| -{\textstyle \frac{1}{2}}, \end{aligned}$$

then

$$\begin{aligned} K\left(\left| \log r\right| + \frac{\left| j\right| +{\textstyle \frac{1}{2}}}{w}\right)\mu (\left| j\right| -{\textstyle \frac{1}{2}})^\lambda \le K(\left| j\right| -{\textstyle \frac{1}{2}})\mu (\left| j\right| -{\textstyle \frac{1}{2}})^\lambda \le K(0) \mu (0)^\lambda , \end{aligned}$$

and so

$$\begin{aligned} T_j \le K(0) \mu (0)^\lambda \, \frac{\mu (N+{\textstyle \frac{1}{2}})^{1-\lambda }}{\pi (\left| j\right| -{\textstyle \frac{1}{2}})^2}. \end{aligned}$$
(23)

If

$$\begin{aligned} \left| \log r\right| + \frac{\left| j\right| +{\textstyle \frac{1}{2}}}{w} > \left| j\right| -{\textstyle \frac{1}{2}}, \end{aligned}$$

then

$$\begin{aligned} \left| j\right| + {\textstyle \frac{1}{2}}< \frac{w}{w-1} (\left| \log r\right| +1). \end{aligned}$$

Therefore

$$\begin{aligned} K\left(\left| \log r\right| + \frac{\left| j\right| +{\textstyle \frac{1}{2}}}{w}\right) \le K\left(\frac{w\left| \log r\right| + 1}{w-1}\right), \end{aligned}$$

and so

$$\begin{aligned} T_j \le K\left(\frac{w\left| \log r\right| + 1}{w-1}\right)\frac{\mu (N+{\textstyle \frac{1}{2}})}{\pi (\left| j\right| -{\textstyle \frac{1}{2}})^2}. \end{aligned}$$
(24)

Comparing the estimates (23) and (24), we see that in any case

$$\begin{aligned} T_j \le K\left(\frac{w\left| \log r\right| + 1}{w-1}\right) \mu (0)^\lambda \, \frac{\mu (N+{\textstyle \frac{1}{2}})^{1-\lambda }}{\pi (\left| j\right| -{\textstyle \frac{1}{2}})^2} \qquad (\left| j\right| >N). \end{aligned}$$

The proof is completed by employing inequality (20) for the summation over j. \(\square \)

The choice of \(\lambda \) allows us some flexibility. For \(\lambda =0\), we see from condition (21) that only bounded functions f are admissible and then we essentially recover Proposition 8 with the rate of convergence to zero of the truncation error being \(\mathcal {O}(\mu (N)/N)\) as \(N\rightarrow \infty \). Note that inequality (23) remains true if \(\lambda \) is replaced by any \(\lambda '\in \,]\lambda , 1]\). We may even raise both sides of (21) to the power \(\lambda '/\lambda \) and obtain

$$\begin{aligned} K(t)^{\lambda '/\lambda } \mu (t)^{\lambda '} \le K(0)^{\lambda '/\lambda } \mu (0)^{\lambda '} \qquad (0\le t<+\infty ). \end{aligned}$$

This means that, if \(K(0)\ge 1\), we can enlarge the class of admissible functions by requiring that

$$\begin{aligned} \left| f(r)\right| \le K(\left| \log r\right| )^{\lambda '/\lambda } \qquad (r\in \mathbb {R}^+), \end{aligned}$$

but for the functions added in this way the guaranteed rate of convergence of the truncation error will be \(\mathcal {O}(\mu (N)^{1-\lambda '}/N)\) only. When we choose \(\lambda '=1\), we obtain the largest class of admissible functions, but the speed of convergence in the worst case reduces to \(\mathcal {O}(1/N)\).

As a suitable majorant \(K(\cdot )\) for the admissible functions we can always take \(K(t):= c \mu (t)^{-\lambda }\) with a constant \(c>0\), which satisfies (21) trivially.

In view of Proposition 8 and Theorem 9 it is desirable to have a rapidly decaying multiplier \(\psi \). This raises the non-trivial question as to how fast \(\psi \) can decay. Similarly, prescribing \(\mu \) as a rapidly decaying function, we may ask whether the class \(B_{0,\pi \delta }^2(\mu )\) specified in Definition 5 is non-empty. In the case of entire functions of exponential type an answer to the corresponding questions can be found in [21, p. 101], which reads as follows: Let \(S(r)\ge 1\) be increasing. A necessary and sufficient condition for there to exist entire functions \(\varphi \not \equiv 0\) of exponential type with

$$\begin{aligned} \left| \varphi (x)\right| \le \frac{1}{{S(\left| x\right| )}} \qquad \hbox { on } \mathbb {R}\end{aligned}$$

is that

$$\begin{aligned} \int _0^\infty \frac{\log S(r)}{1+r^2} dr < \infty . \end{aligned}$$

If that condition is met, there are entire functions \(\varphi \not \equiv 0\) of arbitrarily small exponential type satisfying the inequality in question.

We recall that there is a correspondence between Mellin bandlimited functions and entire functions of exponential type. If \(\psi \in B_{0, \pi \delta }^2\) and \(f(x):= \psi (e^x)\), then f is the restriction to \(\mathbb {R}\) of an entire function of exponential type \(\pi \delta \) that belongs to the Lebesgue class \(L^2\) on the real line. The converse is also true. Given f of the latter kind, then \(\psi \) defined by \(\psi (r):= f(\log r)\) belongs to \(B_{0, \pi \delta }^2\). Therefore the cited result implies the following criterion.

Proposition 10

The class \(B_{0, \pi \delta }^2(\mu )\) specified in Definition 5 is non-empty if and only if

$$\begin{aligned} - \int _0^\infty \frac{\log \mu (t)}{1 + t^2} dt < \infty . \end{aligned}$$

This shows that in Proposition 8 and Theorem 9 we can have \(\mathcal {O}(e^{-N/(\log N)^\gamma })\) with \(\gamma >1\) for the convergence of the truncation error as \(N\rightarrow \infty \), but it is not possible to have such a rate of convergence with \(\gamma \le 1\). In particular, we cannot have \(\mathcal {O}(e^{-\alpha N})\) for any positive \(\alpha \).

4 A multiplier based on a Gaussian function

Unfortunately, multipliers \(\psi \in B_{0,\pi \delta }^2(\mu )\) with

$$\begin{aligned} \mu (t)= \mathcal {O}\left(e^{-t/(\log t)^\gamma }\right) \qquad (\gamma >1, \, t\rightarrow \infty ) \end{aligned}$$

are not easily available. They can be constructed by an infinite product of \({{\,\textrm{lin}\,}}\) functions (see [18] for corresponding products of \({{\,\textrm{sinc}\,}}\) functions) which is quite inconvenient for computations. But there is a striking observation. When we analyse the proofs of Proposition 8 and Theorem 9, we find that only the estimate (18) is needed but we nowhere use that \(\psi \in B_{0,\pi \delta }^2\). For example, we could take

$$\begin{aligned} \psi (r):= e^{-2(\log r)^2}. \end{aligned}$$
(25)

Then inequality (18) will hold with \(\mu (t)= e^{-t^2}\) and (19) will be valid with this function \(\mu \). Thus we obtain an exceptionally fast converging truncation error by using a very convenient multiplier. However, since this multiplier \(\psi \) is not Mellin bandlimited, Theorem 3 and Corollary 1 are not applicable. We incur a so-called aliasing error, which may prevent \(S_w^{\psi {{\,\textrm{lin}\,}}}f\) from being a good approximation to f.
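That (18) holds for this \(\psi \) with \(\mu (t)=e^{-t^2}\) amounts to the elementary inequality \(t\,e^{-2t^2}\le e^{-t^2}\), that is, \(t\le e^{t^2}\). A short sketch can spot-check this (Python; the grid is an illustrative assumption):

```python
import math

psi = lambda r: math.exp(-2.0 * math.log(r) ** 2)   # the multiplier (25)
mu = lambda t: math.exp(-t * t)

# (18): |psi(r) log r| = t * exp(-2 t^2) <= exp(-t^2) = mu(t) for t = |log r|
for i in range(0, 801):
    t = i / 100.0
    assert t * math.exp(-2.0 * t * t) <= mu(t) + 1e-15
```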

The same dilemma arises when one equips the classical sampling formula of Whittaker–Kotel’nikov–Shannon with a multiplier. Nevertheless, some scientists experimented with Gaussian multipliers, that is, with functions of the form \(t \mapsto c_1e^{-c_2t^2}\), where \(c_1\) and \(c_2\) are positive constants, despite the fact that these functions are not bandlimited. It was reported that the Chinese chemist G.W. Wei and his collaborators tackled diverse dynamical problems arising in physics and chemistry by using such multipliers. Inspired by their excellent numerical results, Qian [25] (also see Creamer–Qian [26]) provided a mathematical justification for this approach with rigorous error estimates. Using methods of Fourier analysis, these authors showed that a bandlimited function can be approximated by \(2N+1\) terms of its classical sampling series equipped with a cleverly scaled Gaussian multiplier with an error that decays like

$$\begin{aligned} \mathcal {O}\left(\frac{e^{-\alpha N}}{\sqrt{N}}\right) \qquad (N\rightarrow \infty ), \end{aligned}$$
(26)

where \(\alpha \) is a positive constant. This is an amazing result since it shows that, by allowing in addition an aliasing error, one can achieve a better approximation by \(2N+1\) samples of f than with the truncated series of an exact formula. This result was considerably extended in [29] by using methods of complex analysis. Multidimensional versions were presented in [2] and in [1]. Micchelli et al. [23] considered the sequence of samples of a bandlimited function f occurring in the classical sampling formula and asked for an optimal algorithm that approximates f(x) by using \(2N+1\) of these samples. They found that such an algorithm converges for the worst case of the function f as fast as (26) with some \(\alpha >0\) but not faster. Hence sampling with a Gaussian multiplier as initiated by Wei and Qian has the rate of convergence of an optimal algorithm and is therefore the method of choice for computations. A very early use of Gaussian multipliers in a somewhat different context can be found in E.T. Whittaker’s construction of “cotabular” functions; see [30].

The distinguished role of Gaussian functions can be made plausible by the following reflection: For the truncation error to converge fast one should have a rapidly decaying multiplier \(\psi \). For the aliasing error to converge fast one should have a multiplier \(\psi \) whose Fourier transform decays rapidly. This leads us naturally to Gaussian functions since they decay rapidly and their Fourier transforms are again Gaussian functions.

In the case of exponential sampling, appropriately scaled functions of the form (25) may be a reasonable choice. We set

$$\begin{aligned} \psi (r) = \exp \left(-\frac{\alpha }{N}(\log r)^2\right) \end{aligned}$$

with a positive \(\alpha \) to be fixed later, consider the truncated generalized exponential sampling series \(\bigl (S_{w,N}^{\psi {{\,\textrm{lin}\,}}}f\bigr )(r)\) and call it

$$\begin{aligned} \bigl (G_{w,N}^\alpha f\bigr )(r):= \sum _{k=N_{w,r}-N}^{N_{w,r}+N} f(e^{k/w})\exp \left(-\frac{\alpha }{N}(\log (e^{-k}r^w))^2\right) {{\,\textrm{lin}\,}}( e^{-k}r^w) \end{aligned}$$

with G referring to Gauss. The following theorem includes an efficient algorithm for approximating the restriction to \(\mathbb {R}^+\) of a polar-analytic function from its exponential samples.

Theorem 11

Let f be polar-analytic on \(\mathbb {H}\) such that

$$\begin{aligned} \left| f(r,\theta )\right| \le K(\left| \log r\right| ) e^{\pi T\left| \theta \right| } \qquad ((r,\theta )\in \mathbb {H}), \end{aligned}$$

where K is a non-decreasing, non-negative function defined on \([0, +\infty [\) and \(T\ge 0\). Then, for \(w>T\), \(\alpha =\pi (1-T/w)/2\), \(N\in \mathbb {N}\) and \(r\in \mathbb {R}^+\), we have

$$\begin{aligned} \left| f(r,0) -\bigl (G_{w,N}^\alpha f(\cdot ,0)\bigr )(r)\right| \le \left| \sin (\pi w\log r)\right| K\left(\left| \log r\right| + \frac{N+1}{w}\right)\,\frac{2e^{-\alpha N}}{\sqrt{\pi \alpha N}}\,\beta _N, \end{aligned}$$

where

$$\begin{aligned} \beta _N = \frac{1}{{1-e^{-2\pi N}}} + \frac{2}{\sqrt{\pi \alpha N}} = 1 + \mathcal {O}(N^{-1/2}) \qquad (N\rightarrow \infty ). \end{aligned}$$

Proof

  First we note that for \(r\in \{e^{k/w} \,:\, k\in \mathbb {Z}\}\) the assertion is trivially true. For \(r\in \mathbb {R}^+\setminus \{e^{k/w} \,:\, k\in \mathbb {Z}\}\), \((\rho , \vartheta )\in \mathbb {H}\) and \(\lambda :=\alpha /N\), consider

$$\begin{aligned} F(\rho ,\vartheta ) := \frac{f(\rho ,\vartheta ) \exp (-\lambda (\log (\rho ^w/r^w)+iw\vartheta )^2)}{(\log (\rho /r)+i\vartheta )\sin (\pi w(\log \rho +i\vartheta ))}. \end{aligned}$$

This function F is polar-analytic on \(\mathbb {H}\) except for isolated singularities at the points (r, 0) and \((e^{k/w},0)\) with \(k\in \mathbb {Z}\), which are all poles of order 1. We want to show by employing the residue theorem for polar-analytic functions (see [7, Sect. 4]) that the error of the approximation of f(r, 0) by \((G_{w,N}^\alpha f(\cdot ,0))(r)\) can be represented by a contour integral of F. For this, we have to calculate the residues \({{\,\textrm{res}\,}}_0\) of the poles of F. It is easily seen that

$$\begin{aligned} ({{\,\textrm{res}\,}}_0 F)(r,0) = \frac{f(r,0)}{\sin (\pi w\log r)}. \end{aligned}$$

For calculating the residues at \((e^{k/w},0)\), we first factor the sine in the denominator of F as follows:

$$\begin{aligned} \sin (\pi w(\log \rho +i\vartheta ))&= (-1)^k \sin (\pi w(\log \rho + i\vartheta - k/w))\\&= (-1)^k \pi w(\log \rho + i\vartheta - k/w) {{\,\textrm{sinc}\,}}(\log \rho ^w +iw\vartheta -k)\\&= (-1)^k \pi w(\log \rho + i\vartheta - k/w) {{\,\textrm{lin}\,}}(e^{-k}\rho ^w e^{iw\vartheta }). \end{aligned}$$

With this decomposition, a basic formula for the residues of poles of order 1 yields

$$\begin{aligned} ({{\,\textrm{res}\,}}_0F)(e^{k/w},0)&= \frac{f(e^{k/w},0) \exp (-\lambda (\log (e^{-k}r^w))^2)}{(-1)^{k+1} \pi \log (e^{-k}r^w)}\\&= (-1)^{k+1}f(e^{k/w},0) \exp (-\lambda (\log (e^{-k}r^w))^2) \frac{{{\,\textrm{lin}\,}}(e^{-k}r^w)}{\sin (\pi \log (e^{-k}r^w))}\\&= - f(e^{k/w},0) \exp (-\lambda (\log (e^{-k}r^w))^2) \frac{{{\,\textrm{lin}\,}}(e^{-k}r^w)}{\sin (\pi w\log r)}. \end{aligned}$$

Now write \(N':= N+{\textstyle \frac{1}{2}}\) and denote by \(\mathcal {R}_N\) the rectangle with vertices at

$$\begin{aligned} \left(e^{(N_{w,r}\pm N')/w}, \frac{N}{w}\right), \quad \left(e^{(N_{w,r}\pm N')/w}, - \frac{N}{w}\right) \end{aligned}$$

and by \(\partial \mathcal {R}_N\) its positively oriented boundary. By the residue theorem for polar-analytic functions, we have

$$\begin{aligned}&\frac{\sin (\pi w\log r)}{2\pi i} \int _{\partial \mathcal {R}_N} F(\rho , \vartheta )\left(\frac{d\rho }{\rho } + id\vartheta \right)\\&\qquad \quad = \sin (\pi w\log r)\left[({{\,\textrm{res}\,}}_0F)(r,0) + \sum _{\left| k-N_{w,r}\right| \le N} ({{\,\textrm{res}\,}}_0F)(e^{k/w},0)\right]\\&\qquad \quad = f(r,0) - \sum _{\left| k-N_{w,r}\right| \le N}f(e^{k/w},0) \exp (-\lambda (\log (e^{-k}r^w))^2){{\,\textrm{lin}\,}}(e^{-k}r^w) \\&\qquad \quad = f(r,0) - \bigl (G_{w,N}^\alpha f(\cdot ,0)\bigr )(r). \end{aligned}$$

Next we denote by \(I_{\textrm{hor}}^\pm \) the contributions to the integral coming from the two horizontal parts of \(\partial \mathcal {R}_N\), where \(+\) and − refer to the upper and lower line segment, respectively. Similarly, we denote by \(I_{\textrm{vert}}^\pm \) the contributions to the integral coming from the two vertical parts of \(\partial \mathcal {R}_N\), where \(+\) and − refer to the right and left line segment, respectively. Then

$$\begin{aligned} f(r,0) - (G_{w,N}^\alpha f(\cdot ,0))(r) = \frac{\sin (\pi w\log r)}{2\pi i}\bigl (I_{\textrm{hor}}^- + I_{\textrm{vert}}^+ + I_{\textrm{hor}}^+ +I_{\textrm{vert}}^-\bigr ). \end{aligned}$$
(27)

We note that for \((\rho ,\vartheta )\in \partial \mathcal {R}_N\), we have

$$\begin{aligned} \left| f(\rho ,\vartheta )\right| \le K\left(\left| \log r\right| +\frac{N+1}{w}\right) e^{\pi T\left| \vartheta \right| } \end{aligned}$$

and

$$\begin{aligned} \left| \log \frac{\rho }{r} + i\vartheta \right| \ge \frac{N}{w}. \end{aligned}$$

Therefore

$$\begin{aligned} \left| I_{\textrm{hor}}^\pm \right|&\le K\left(\left| \log r\right| +\frac{N+1}{w}\right) \frac{w}{N} e^{\pi TN/w}\\&\qquad \qquad \times \left| \int _{e^{(N_{w,r}-N')/w}}^{e^{(N_{w,r}+N')/w}}\frac{\exp (-\lambda (\log (\rho ^w/r^w)\pm iN)^2)}{\sin (\pi w(\log \rho \pm iN/w))} \frac{d\rho }{\rho }\right| \\&= K\left(\left| \log r\right| +\frac{N+1}{w}\right) \frac{e^{\pi TN/w}}{N} \left| \int _{N_{w,r}-N'}^{N_{w,r}+N'} \frac{\exp (-\lambda (u-\log r^w \pm iN)^2)}{\sin (\pi (u\pm iN))} du\right| . \end{aligned}$$

Since

$$\begin{aligned} \left| \exp (-\lambda (u-\log r^w \pm iN)^2)\right| = e^{\lambda N^2} e^{-\lambda (u-w\log r)^2}, \\ \left| \sin (\pi (u\pm iN))\right| \ge \sinh (\pi N) = {\textstyle \frac{1}{2}}e^{\pi N}(1- e^{-2\pi N}), \\ \left| \int _{N_{w,r}-N'}^{N_{w,r}+N'} e^{-\lambda (u-w\log r)^2} du\right| \le \int _{-\infty }^\infty e^{-\lambda x^2} dx = \sqrt{\frac{\pi }{\lambda }} = \sqrt{\frac{\pi N}{\alpha }} \end{aligned}$$

and

$$\begin{aligned} \pi N \frac{T}{w} -\pi N +\lambda N^2 = - \alpha N, \end{aligned}$$

we conclude that

$$\begin{aligned} \left| I_{\textrm{hor}}^\pm \right| \le K\left(\left| \log r\right| +\frac{N+1}{w}\right) \frac{2\sqrt{\pi } e^{-\alpha N}}{\sqrt{\alpha N}(1-e^{-2\pi N})}. \end{aligned}$$
(28)

For the contributions coming from the vertical line segments, we have

$$\begin{aligned} \left| I_{\textrm{vert}}^\pm \right|&\le K\left(\left| \log r\right| +\frac{N+1}{w}\right) \frac{w}{N}\\&\qquad \quad \times \left| \int _{-N/w}^{N/w} \frac{\exp (-\lambda (N_{w,r}\pm N'-\log r^w +iw\vartheta )^2) e^{\pi T\left| \vartheta \right| }}{\sin (\pi (N_{w,r}\pm N' +iw\vartheta ))} d\vartheta \right| . \end{aligned}$$

Here

$$\begin{aligned} \left| \exp (-\lambda (N_{w,r}\pm N'-\log r^w +iw\vartheta )^2)\right| \le e^{-\lambda N^2} e^{\lambda w^2\vartheta ^2} \end{aligned}$$

and

$$\begin{aligned} \left| \sin (\pi (N_{w,r}\pm N' +iw\vartheta ))\right| \ge \cosh (\pi w\vartheta ) \ge {\textstyle \frac{1}{2}}e^{\pi w\left| \vartheta \right| }. \end{aligned}$$

Substituting \(\theta =w\vartheta \) and noting that

$$\begin{aligned} \lambda \theta ^2+ \pi \left| \theta \right| \frac{T}{w} -\pi \left| \theta \right| = \frac{\alpha }{N}\theta ^2-2\alpha \left| \theta \right| \le -\alpha \left| \theta \right| \end{aligned}$$

for \(\theta \in [-N, N]\), we arrive at

$$\begin{aligned} \left| I_{\textrm{vert}}^\pm \right|&\le K\left(\left| \log r\right| +\frac{N+1}{w}\right) \frac{2e^{-\alpha N}}{N} \int _{-N}^N e^{-\alpha \left| \theta \right| } d\theta \nonumber \\&\le K\left(\left| \log r\right| +\frac{N+1}{w}\right) \frac{4e^{-\alpha N}}{\alpha N}. \end{aligned}$$
(29)

Employing the estimates (28) and (29) in conjunction with (27), we readily complete the proof. \(\square \)

Note that in Theorem 11 the speed of convergence may be lower than \(\mathcal {O}(e^{-\alpha N})\) since N also occurs in the argument of the non-decreasing function K. However, when the restriction of f to \(\mathbb {R}^+\) is bounded, such a deterioration is not possible. The following statement is an obvious consequence of Theorem 11.

Corollary 2

Let \(f\in \mathscr {B}_{0,\pi T}^\infty \), where \(T>0\). Then, for \(w>T\), \(\alpha =\frac{\pi }{2}(1-\frac{T}{w})\), \(N\in \mathbb {N}\) and \(r\in \mathbb {R}^+\), we have

$$\begin{aligned} \left| f(r,0) -\bigl (G_{w,N}^\alpha f(\cdot ,0)\bigr )(r)\right| \le \left| \sin (\pi w\log r)\right| C_f\,\frac{2e^{-\alpha N}}{\sqrt{\pi \alpha N}}\,\beta _N, \end{aligned}$$

where \(C_f\) is the constant introduced in Definition 2 for \(c=0\) and \(\beta _N\) is as in Theorem 11.
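Corollary 2 is easy to check numerically. The sketch below (Python; the definition of \({{\,\textrm{lin}\,}}\) and the parameter choices \(T=1\), \(w=2\), \(N=12\) are illustrative assumptions) implements \(G_{w,N}^\alpha \) and applies it to the restriction of the function \(\cos (\pi (\log r + i\theta ))\in \mathscr {B}^\infty _{0,\pi }\):

```python
import math

def lin(x):
    t = math.log(x)
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

def G_trunc(f, w, N, alpha, r):
    """(G_{w,N}^alpha f)(r): 2N+1 exponential samples of f, weighted by the
    scaled Gaussian exp(-(alpha/N) * (log(e^{-k} r^w))^2) and by lin."""
    n0 = math.floor(w * math.log(r) + 0.5)          # N_{w,r}
    total = 0.0
    for k in range(n0 - N, n0 + N + 1):
        t = w * math.log(r) - k                     # log(e^{-k} r^w)
        total += f(math.exp(k / w)) * math.exp(-(alpha / N) * t * t) * lin(math.exp(t))
    return total

# setting of Corollary 2: T = 1, w = 2, alpha = (pi/2)(1 - T/w) = pi/4
f = lambda r: math.cos(math.pi * math.log(r))       # restriction of (30) to theta = 0
alpha = 0.5 * math.pi * (1.0 - 1.0 / 2.0)
assert abs(G_trunc(f, 2.0, 12, alpha, 1.7) - f(1.7)) < 1e-2
```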

When we proved the assertion of Theorem 11 for an arbitrarily chosen \(N\in \mathbb {N}\), we did not really need the function f to be polar-analytic on the whole of \(\mathbb {H}\). It suffices that f is polar-analytic in a strip

$$\begin{aligned} S_d :=\{(r,\theta )\in \mathbb {H}\, :\, \left| \theta \right| <d\} \end{aligned}$$

with \(d> N/w\). This condition can always be satisfied by choosing \(w= (N+ \varepsilon )/d\) with \(\varepsilon >0\). This leads us to the following statement, in which the hypothesis is designed such that \(T=0\) and the function K reduces to a constant \(K_f\); this allows us to let \(\varepsilon \) approach zero on both sides of the error estimate.

Corollary 3

Let f be polar-analytic in a strip \(S_d\) with \(d>0\). Suppose that

$$\begin{aligned} K_f:= \sup _{(r,\theta )\in S_d} \left| f(r,\theta )\right| < \infty . \end{aligned}$$

Then for \(N\in \mathbb {N}\) and \(r\in \mathbb {R}^+\), we have

$$\begin{aligned} \left| f(r,0) -\bigl (G_{N/d,N}^{\pi /2}f(\cdot ,0)\bigr )(r)\right| \le \left| \sin \left(\frac{\pi N}{d}\log r\right)\right| K_f\,\frac{2\sqrt{2}\,e^{-\pi N/2}}{\pi \sqrt{N}}\,\beta _N, \end{aligned}$$

where

$$\begin{aligned} \beta _N= \frac{1}{{1- e^{-2\pi N}}} + \frac{2\sqrt{2}}{\pi \sqrt{N}} = 1 +\mathcal {O}(N^{-1/2}) \qquad (N\rightarrow \infty ). \end{aligned}$$
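Since all quantities in Corollary 3 are explicit, the bound is easy to evaluate numerically. The following sketch tabulates \(\beta _N\) and the bound with the factor \(\left| \sin (\cdot )\right| \) estimated by its maximum 1; the helper names `beta` and `bound`, and the default value \(K_f=1\), are ours for illustration, not the paper's.

```python
import math

def beta(N):
    # beta_N = 1/(1 - e^{-2 pi N}) + 2 sqrt(2) / (pi sqrt(N)), as in Corollary 3
    return 1.0 / (1.0 - math.exp(-2.0 * math.pi * N)) \
        + 2.0 * math.sqrt(2.0) / (math.pi * math.sqrt(N))

def bound(N, K_f=1.0):
    # error bound of Corollary 3 with |sin(pi N log(r)/d)| estimated by 1
    return K_f * 2.0 * math.sqrt(2.0) * math.exp(-math.pi * N / 2.0) \
        / (math.pi * math.sqrt(N)) * beta(N)

vals = {N: (beta(N), bound(N)) for N in (5, 10, 20, 30)}
```

The first term of \(\beta _N\) is already 1 to machine precision for moderate N, so the \(\mathcal {O}(N^{-1/2})\) behaviour is governed entirely by the second term.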
Table 1 Computations for the function (30) according to Corollaries 1 and 2, respectively

5 Numerical experiments

As a first example, we consider the function

$$\begin{aligned} f\, :\, (r,\theta )\,\longmapsto \, \cos \bigl (\pi (\log r +i\theta )\bigr ) \end{aligned}$$
(30)

which belongs to \(\mathscr {B}^\infty _{0,\pi }\). It qualifies for Corollaries 1 and 2. In the case of Corollary 1, we use the multiplier \(\psi \) of Example 1 with \(\delta =1/2\) and \(m=8\). Then the constructed function \(\mu \) becomes

$$\begin{aligned} \mu (t) = \frac{32}{\pi (1+(\pi t/16)^7)}. \end{aligned}$$

In view of Remark 1, we may choose \(w=16\) in both corollaries. This enables us to compare the two statements by computing approximations of \(f(r_j,0)\) at \(r_j= e^{(j+1/2)/w}\) in both cases. Results for the true absolute errors are shown in the columns of Table 1. For the chosen points \(r_j\) the error bounds of the two corollaries do not depend on j. They are presented in the last line. We see that the approximation by the operator \(G^\alpha _{w,N}\) of Corollary 2 is much better than the approximation by \(S^{\psi {{\,\textrm{lin}\,}}}_{w,N}\) of Corollary 1. Moreover, the error bound of Corollary 2 is closer to the true errors.
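For readers who wish to repeat the Table 1 experiment, a minimal sketch follows. The precise definition of \(G_{w,N}^\alpha \) is given earlier in the paper; here we reconstruct it, as an assumption, from the weight \(\exp (-\lambda (\cdot -\log r^w)^2)\) with \(\lambda =\alpha /N\) appearing in the proof of Theorem 11: the \(2N+1\) Mellin samples \(f(e^{k/w},0)\) nearest to \(w\log r\), combined with the kernel \({{\,\textrm{lin}\,}}_0(e^{-k}r^w)=\textrm{sinc}(w\log r-k)\).

```python
import math

def sinc(t):
    """sin(pi t)/(pi t), with the removable singularity filled in."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def G(g, w, N, alpha, r):
    """Assumed truncated Gaussian-regularized exponential sampling sum.

    Reconstructed from the proof of Theorem 11: 2N+1 Mellin samples
    g(e^{k/w}) around the integer nearest to w log r, sinc kernel,
    and Gaussian weight exp(-(alpha/N)(w log r - k)^2).
    """
    x = w * math.log(r)
    k0 = round(x)
    lam = alpha / N
    return sum(g(math.exp(k / w)) * sinc(x - k) * math.exp(-lam * (x - k) ** 2)
               for k in range(k0 - N, k0 + N + 1))

# f(r, 0) = cos(pi log r) for the function (30); it lies in B^infty_{0, pi},
# so T = 1 and Corollary 2 gives alpha = (pi/2)(1 - T/w).
f = lambda r: math.cos(math.pi * math.log(r))
w, N = 16, 10
alpha = 0.5 * math.pi * (1.0 - 1.0 / w)
errs = [abs(f(math.exp((j + 0.5) / w)) - G(f, w, N, alpha, math.exp((j + 0.5) / w)))
        for j in range(6)]
```

At a sample point \(r=e^{k/w}\) the sum collapses to the single exact sample; at the midpoints \(r_j\) the error is exponentially small in N, in accordance with Corollary 2.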

Table 2 Computations for the function (31) according to Theorem 11
Table 3 Computations for the function (31) according to Theorem 11, continued
Table 4 Computations for the function (32) according to Corollary 3

Our second example is the function

$$\begin{aligned} f\, :\, (r,\theta )\, \longmapsto \, \frac{r e^{i\theta } + r^{-1}e^{-i\theta }}{2}, \end{aligned}$$
(31)

which is polar-analytic on \(\mathbb {H}\) but unbounded on \(\mathbb {R}^+\). It satisfies the hypotheses of Theorem 11 with \(K(t)=\cosh (t)\), \(T=0\) and \(\alpha =\pi /2\). We choose \(w=16/\pi \) and compute approximations of \(f(r_j,0)\) for \(r_j=e^{(j+1/2)/w}\). The true absolute errors and their bounds are shown in Tables 2 and 3 for four different values of N. The absolute errors grow with increasing \(\left| \log r_j\right| \), as suggested by the bound of Theorem 11, which contains \(\left| \log r\right| \) in the argument of the non-decreasing function K. However, the relative errors

$$\begin{aligned} \left| \frac{f(r_j,0)-\bigl (G_{w,N}^\alpha f(\cdot ,0)\bigr )(r_j)}{f(r_j,0)}\right| \end{aligned}$$

and their bounds are nearly constant. Taking into account the contribution of K, the dependence of the error bounds on N is \(\mathcal {O}\bigl (\frac{e^{-7\pi N/16}}{\sqrt{N}}\bigr )\) as \(N\rightarrow \infty \).
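The near-constancy of the relative errors can be checked directly. As before, we use an assumed reconstruction of \(G_{w,N}^\alpha \) from the proof of Theorem 11 (the \(2N+1\) samples nearest to \(w\log r\), sinc kernel, Gaussian weight \(\exp (-(\alpha /N)(w\log r-k)^2)\)); the paper's own definition should be consulted for the authoritative form.

```python
import math

def sinc(t):
    # sin(pi t)/(pi t) with the removable singularity filled in
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def G(g, w, N, alpha, r):
    # assumed truncated Gaussian-regularized sum (see the proof's weight
    # exp(-lambda (k - w log r)^2) with lambda = alpha/N)
    x = w * math.log(r)
    lam = alpha / N
    return sum(g(math.exp(k / w)) * sinc(x - k) * math.exp(-lam * (x - k) ** 2)
               for k in range(round(x) - N, round(x) + N + 1))

f = lambda r: 0.5 * (r + 1.0 / r)            # f(r, 0) for the function (31)
w, N, alpha = 16.0 / math.pi, 10, math.pi / 2  # T = 0, hence alpha = pi/2
rel_errs = [abs(f(r) - G(f, w, N, alpha, r)) / abs(f(r))
            for r in (math.exp((j + 0.5) / w) for j in range(8))]
```

Since \(f(r,0)=\cosh (\log r)\) grows at the same rate as the factor \(K(\left| \log r\right| +\cdot )\) in the error bound, the quotient stays essentially level across the points \(r_j\).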

As a third example, we consider the function

$$\begin{aligned} f\, :\, (r,\theta ) \, \longmapsto \, \sqrt{1+ (\log r + i\theta )^2}\,{{\,\textrm{lin}\,}}\bigl ((re^{i\theta })^{1/\pi }\bigr ). \end{aligned}$$
(32)

It is polar-analytic and bounded on the strip \(S_1\) with

$$\begin{aligned} \sup _{(r,\theta )\in S_1}\left| f(r,\theta )\right| \le 2^{1/2} 3^{-1/4} \cosh (1) =: K_f. \end{aligned}$$

Therefore it may serve for illustrating Corollary 3. The columns of Table 4 show the index j and the true absolute errors at the points \(r_j=e^{(j+1/2)/N}\) for four choices of N. For these points, the error bounds provided by Corollary 3 do not depend on j. They are presented in the last line. It turns out that they are very realistic. The factor of overestimation of the largest absolute error in a column decreases with increasing N. For \(N=5\) it is 1.53; for \(N=30\) it has decreased to 1.18.
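A sketch of this third experiment follows, under two explicit assumptions: that \({{\,\textrm{lin}\,}}\) in (32) denotes \({{\,\textrm{lin}\,}}_0\) with \({{\,\textrm{lin}\,}}_0(u)=\textrm{sinc}(\log u)\), so that \(f(r,0)=\sqrt{1+\log ^2 r}\,\sin (\log r)/\log r\), and that \(G_{N/d,N}^{\pi /2}\) has the Gaussian-regularized form reconstructed from the proof of Theorem 11 (samples nearest to \(w\log r\), sinc kernel, weight \(\exp (-(\alpha /N)(w\log r-k)^2)\)).

```python
import math

def sinc(t):
    # sin(pi t)/(pi t) with the removable singularity filled in
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def G(g, w, N, alpha, r):
    # assumed Gaussian-regularized truncated sampling sum, as sketched above
    x = w * math.log(r)
    lam = alpha / N
    return sum(g(math.exp(k / w)) * sinc(x - k) * math.exp(-lam * (x - k) ** 2)
               for k in range(round(x) - N, round(x) + N + 1))

def f(r):
    # assumed restriction of (32) to theta = 0, taking lin = lin_0 with
    # lin_0(u) = sinc(log u); then sinc(log(r)/pi) = sin(log r)/log r
    x = math.log(r)
    return math.sqrt(1.0 + x * x) * (1.0 if x == 0 else math.sin(x) / x)

N, d = 10, 1.0
w, alpha = N / d, math.pi / 2                # Corollary 3: w = N/d, alpha = pi/2
errs = [abs(f(r) - G(f, w, N, alpha, r))
        for r in (math.exp((j + 0.5) / w) for j in range(5))]
```

With \(d=1\) the branch points of \(\sqrt{1+(\log r+i\theta )^2}\) sit exactly on the boundary of \(S_1\), so the strip hypothesis of Corollary 3 is sharp for this function.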