1 Introduction

In this work we consider the computation of the Hankel transform of order \(\nu \) of a function f, given by

$$\begin{aligned} H^{(\nu )}(\omega ) = \int _0^{+\infty } f(x) J_{\nu }(\omega x) x dx, \end{aligned}$$
(1)

where \(\omega \in \mathbb {R}\) and \(J_{\nu }\) is the Bessel function of the first kind of order \(\nu \) (see [26] for an overview). For \(\nu > -\frac{1}{2}\), the inverse Hankel transform is defined as

$$\begin{aligned} f(x) = \int _0^{+\infty } H^{(\nu )}(\omega ) J_{\nu }(\omega x) \omega d \omega . \end{aligned}$$
(2)

Sufficient conditions for the validity of (1) and (2) are (see [23]):

  1.

    \(f(x) = {\mathcal {O}} \left( x^{\alpha } \right) \), with \(\alpha < -\frac{3}{2}\), for \(x \rightarrow + \infty \);

  2.

    \(f^{'}(x)\) is piecewise continuous over each bounded subinterval of \([0, +\infty )\);

  3.

    f(x) is such that

    $$\begin{aligned} f(x) = \lim _{\delta \rightarrow 0^+} \frac{f(x+i \delta ) +f(x-i\delta ) }{2}, \quad {i = \sqrt{-1}.} \end{aligned}$$

The Hankel transform commonly appears in problems of mathematical physics and applied mathematics described by equations with axial symmetry, for instance in the analysis of central potential scattering [24], in geophysical electromagnetic surveys [25] and in medical tomography [14]. Among the existing techniques for computing integral (1), we mention those based on the logarithmic change of variables leading to the Fast Hankel transforms [6, 10, 15], digital filtering algorithms [12, 13, 27], asymptotic expansions of the Bessel function [5, 22], integral representations of the Bessel function [4] and complex Gaussian quadrature [2, 28]. Recently, a Gaussian rule for weight functions involving powers, exponentials and Bessel functions of the first kind was developed in [7].

In this work we consider the sinc rule together with the single exponential transform

$$\begin{aligned} x = \frac{\tau }{\omega } \phi \left( t-q \right) , \quad \phi (\xi ) = \frac{\xi }{1-e^{-\xi }}, \end{aligned}$$
(3)

where \(\tau >0\) and \(q \in \mathbb {R}\) are suitable parameters. The idea is similar to the one developed in [18,19,20,21] for Fourier type integrals. In those papers a double exponential approach is considered and exponential convergence is shown (see also [8]) through a suitable choice of the step size that allows one to overcome the difficulties due to the oscillations. Indeed, for Fourier type integrals, the transformations can be defined so that the quadrature points converge double exponentially to the zeros of the sine/cosine function. This makes it possible to "rapidly" neglect the tails of the sinc rule and thus obtain fast convergence. For the Hankel transform, the problem is more complicated. Indeed, the double exponential transformations designed for Fourier type integrals do not work properly. The reason is that the zeros of the Bessel function can only be approximated, so a double exponential convergence to them is not achievable. This is the main motivation for the choice of a single exponential transformation of type (3).

In this work we provide a rather accurate error analysis of the truncated sinc rule which shows that, after a proper definition of the step size and of the number of points, the error decay is asymptotically quite similar to that of f(m) for slowly decaying functions, where m is the total number of points of the quadrature rule. If f decays exponentially, then the error is of type \(e^{-{const}\sqrt{m}}\). By exploiting some experimental evidence, we also provide an algorithm for automatic integration whose only inputs are the function f and the error tolerance. A simple Matlab code is reported in the Appendix.

Throughout the paper, together with \(\nu >-\frac{1}{2}\), we also assume \(\nu \ne \frac{1}{2}\), since

$$\begin{aligned} J_{\frac{1}{2}}(z) = \left( \frac{2}{\pi z}\right) ^{\frac{1}{2}} \sin z, \end{aligned}$$

(see [17, 10.16.1]), and so integral (1) reduces to a sine transform.

In this work the symbol \(\sim \) denotes asymptotic equality, \(\approx \) a generic approximation, and \(\lesssim \) stands for less than or asymptotically equal to.

The paper is organized as follows. In Sect. 2 we recall some basic results concerning the sinc rule over unbounded intervals. In Sect. 3 we study the transformation (3). In Sect. 4 we present the error analysis and in Sect. 5 we show some numerical experiments. In Sect. 6 we describe a prototype automatic integrator, implemented in the Matlab code reported in the Appendix.

2 General Results for the Sinc Rule

Before presenting our approach for the Hankel transform, in this section we recall some useful results concerning the sinc approximation

$$\begin{aligned} I(g) = \int _{-\infty }^{+\infty } g(x) dx \approx h \sum _{j=-\infty }^{+\infty } g(jh), \end{aligned}$$
(4)

in which \(g :\mathbb {R} \rightarrow \mathbb {R}\) is a generic integrable function and h is a positive scalar. Given M and N positive integers, we denote the truncated rule by

$$\begin{aligned} T_{M,N,h}(g) = h \sum _{j=-M}^{N} g(jh). \end{aligned}$$

Defining the quadrature error as

$$\begin{aligned} {\mathcal {E}}_{M,N,h} = \left| I(g) - T_{M,N,h}(g) \right| , \end{aligned}$$

it holds

$$\begin{aligned} {\mathcal {E}}_{M,N,h} \le {\mathcal {E}}_D +{\mathcal {E}}_{T_L}+{\mathcal {E}}_{T_R}, \end{aligned}$$

where

$$\begin{aligned} {\mathcal {E}}_D = \left| \int _{-\infty }^{+\infty } g(x) dx - h \sum _{j=-\infty }^{+\infty } g(jh) \right| \end{aligned}$$

is the discretization error and

$$\begin{aligned} {\mathcal {E}}_{T_L} = h \left| \sum _{j=-\infty }^{-M-1} g(jh) \right| , \quad {\mathcal {E}}_{T_R} = h \left| \sum _{j=N+1}^{+\infty } g(jh) \right| \end{aligned}$$

are the truncation errors. For simplicity of notation, we omit their dependence on M, N, h.
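As a minimal illustration of (4) and of the truncated rule \(T_{M,N,h}\), the following Python sketch (with illustrative values of h, M, N) integrates the Gaussian \(g(x)=e^{-x^2}\), whose integral over \(\mathbb {R}\) is \(\sqrt{\pi }\); for such a rapidly decaying analytic function both the discretization and the truncation errors are negligible already for small M, N.

```python
import math

def sinc_rule(g, h, M, N):
    """Truncated sinc (trapezoidal) rule: T_{M,N,h}(g) = h * sum_{j=-M}^{N} g(j*h)."""
    return h * sum(g(j * h) for j in range(-M, N + 1))

# Example: g(x) = exp(-x^2), with integral sqrt(pi) over the real line.
g = lambda x: math.exp(-x * x)
approx = sinc_rule(g, h=0.5, M=12, N=12)
print(abs(approx - math.sqrt(math.pi)))  # far below 1e-12: E_D and the tails are tiny here
```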

Definition 1

[16, Definition 2.12] Given \(d>0\), let \({\mathcal {D}}_d\) be the infinite strip domain of width 2d given by

$$\begin{aligned} {\mathcal {D}}_d = \lbrace \zeta \in \mathbb {C} :| \Im (\zeta ) | < d \rbrace , \end{aligned}$$

and let \({\textbf{B}}\left( {\mathcal {D}}_d\right) \) be the set of functions analytic in \({\mathcal {D}}_d\) that satisfy

$$\begin{aligned} \int _{-d}^{d} | g(x+ i \eta ) | d \eta = {\mathcal {O}}\left( |x|^a \right) , \quad x \rightarrow \pm \infty , \; 0 \le a < 1, \end{aligned}$$

and

$$\begin{aligned} {\mathcal {N}}(g,d)= \lim _{\eta \rightarrow d^-} \left\{ \int _{-\infty }^{+\infty } | g(x+ i \eta ) | d x + \int _{-\infty }^{+\infty } | g(x- i \eta ) | d x \right\} < + \infty . \end{aligned}$$

As for the discretization error, the following theorem holds (see [16, Theorem 2.20]).

Theorem 1

Assume \(g \in {\textbf{B}}\left( {\mathcal {D}}_d\right) \). Then

$$\begin{aligned} {\mathcal {E}}_D \le \frac{{\mathcal {N}}(g,d)}{2 \sinh (\pi d / h)} e^{-\frac{\pi d}{h}}. \end{aligned}$$
(5)

By (5) we have that

$$\begin{aligned} {\mathcal {E}}_D \sim {\mathcal {N}}(g,d) e^{-\frac{2 \pi d}{h}}, \quad h \rightarrow 0. \end{aligned}$$

The above result shows that, in order to obtain a sharp estimate for \({\mathcal {E}}_D\), one should optimize the bound with respect to d, taking into account the position of the singularities of g with respect to \({\mathcal {D}}_d\).

In the case of meromorphic functions, the analysis is simpler because one can exploit the results presented in [3, 9], which are based on the residue theorem. Indeed, assuming that the poles of g are simple and symmetric with respect to the real axis, it can be seen that

$$\begin{aligned} {\mathcal {E}}_D \sim 4 \pi \left| \rho _0 \right| e^{-2 d \frac{\pi }{h}}, \quad h \rightarrow 0, \end{aligned}$$
(6)

where \(\rho _0 = \textrm{Res}(g(z),z_0)\), \(d= \left| \Im (z_0) \right| \) and \(z_0\), together with its conjugate \(\overline{z_0}\), is the pole closest to the real axis (see also [8] for details). In the case of unsymmetric poles, of poles of multiplicity greater than one, or of several poles at the same distance from the real axis, the analysis follows the same lines and formula (6) remains true with the factor 4 replaced by another suitable integer, as a consequence of the residue theorem. Formula (6) is also valid for any function that is not necessarily meromorphic but analytic in a strip symmetric with respect to the real axis, except for two simple poles \(z_0\) and \(\overline{z_0}\). With respect to the bound (5), the above formula shows an error constant that is independent of d, and a faster exponential decay (in (5) one has to take \(d < \left| \Im (z_0) \right| \)).
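Relation (6) can be checked on a simple meromorphic example. For \(g(x)=1/(1+x^2)\) one has \(\int _{\mathbb {R}} g \, dx = \pi \), the infinite sum \(h\sum _j g(jh)\) has the closed form \(\pi \coth (\pi /h)\), and the pole \(z_0=i\) gives \(d=1\) and \(|\rho _0|=\frac{1}{2}\). A quick Python check (the value of h is illustrative):

```python
import math

h = 0.5
# Closed form of the infinite sinc sum: h * sum_j 1/(1+(j*h)^2) = pi * coth(pi/h)
discrete = math.pi / math.tanh(math.pi / h)
E_D = abs(discrete - math.pi)
# Estimate (6) with d = |Im(z_0)| = 1 and |rho_0| = |Res(1/(1+z^2), i)| = 1/2
estimate = 4 * math.pi * 0.5 * math.exp(-2 * math.pi / h)
print(E_D / estimate)  # tends to 1 as h -> 0
```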

3 Exponential Transformation

As mentioned in the Introduction, in order to numerically evaluate integral (1), we consider the transformation (3)

$$\begin{aligned} x = \frac{\tau }{\omega } \phi \left( t-q \right) , \quad \phi (\xi ) = \frac{\xi }{1-e^{-\xi }}, \end{aligned}$$
(7)

where \(\tau >0\) and \(q \in \mathbb {R}\) are given parameters. We observe that \(\phi (\xi ) >0\), \(\forall \xi \in \mathbb {R}\), \(\xi \ne 0\), and

$$\begin{aligned} \phi (\xi ) \sim {\left\{ \begin{array}{ll} \vert \xi \vert e^{\xi }, &{}\quad \xi \rightarrow - \infty \\ \xi , &{}\quad \xi \rightarrow + \infty \end{array}\right. }. \end{aligned}$$
(8)

As for its derivative

$$\begin{aligned} \phi ^{'}(\xi ) = \frac{1-e^{- \xi } (1+ \xi )}{(1-e^{- \xi })^2}, \end{aligned}$$

we have that \(\phi ^{'}(\xi ) >0\), \(\forall \xi \in \mathbb {R}\), \(\xi \ne 0\), and

$$\begin{aligned} \phi ^{'}(\xi ) \sim 1, \quad \textrm{for} \quad \xi \rightarrow + \infty . \end{aligned}$$
(9)

Remark 1

It is interesting to notice that the function \(\phi \) is the generating function of the Bernoulli numbers \(B_n\) \(\left( B_1 = \frac{1}{2} \right) \) [17, 24.2(i)]. Indeed, it holds

$$\begin{aligned} \phi (\xi ) = \sum _{n=0}^{\infty } \frac{B_n \xi ^n}{n!}, \end{aligned}$$

so that the singularities of \(\phi \) and its derivatives at zero can be removed by taking

$$\begin{aligned} \phi ^{(k)}(0)= B_k, \quad k =0,1,\ldots . \end{aligned}$$
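In floating point, \(\phi \) can be evaluated stably through `expm1`, with the value \(\phi (0)=B_0=1\) assigned at the removable singularity. A minimal Python sketch, also checking the asymptotics (8):

```python
import math

def phi(xi):
    """phi(xi) = xi / (1 - exp(-xi)), with phi(0) = B_0 = 1 at the removable singularity."""
    if xi == 0.0:
        return 1.0
    return xi / -math.expm1(-xi)  # expm1 avoids cancellation for small |xi|

# Near 0, phi matches its Bernoulli series 1 + xi/2 + xi^2/12 + ...
print(phi(1e-3))
# Asymptotics (8): phi(xi) ~ xi as xi -> +inf and phi(xi) ~ |xi| e^{xi} as xi -> -inf
print(phi(40.0) / 40.0, phi(-40.0) / (40.0 * math.exp(-40.0)))
```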

By using (7), integral (1) becomes

$$\begin{aligned} H^{(\nu )}(\omega ) = \left( \frac{\tau }{\omega } \right) ^2 \int _{-\infty }^{+\infty } \tilde{f}(t) J_{\nu }(\tau \phi (t-q)) \phi (t-q) \phi ^{'}(t-q) dt, \end{aligned}$$
(10)

where \(\tilde{f}(t) = f \left( \frac{\tau }{\omega } \phi (t-q) \right) \), and the sinc rule with mesh size h reads

$$\begin{aligned} H_{M,N,h}^{(\nu )}(\omega ) = \left( \frac{\tau }{\omega } \right) ^2 h \sum _{j=-M}^{N} \tilde{f}(jh) J_{\nu }(\tau \phi (jh-q)) \phi (jh-q) \phi ^{'}(jh-q). \end{aligned}$$
(11)

The term \(\phi \phi ^{'}\) in (10) introduces a first limitation on the admissible values of d in formula (5) or (6). Indeed, \(\phi \) and \(\phi ^{'}\) have a removable singularity at 0 (cf. Remark 1), but they have poles at \(2k \pi i\), \(k \in \mathbb {Z}\), \(k \ne 0\). Besides, in order to study the accuracy of (11), it is fundamental to understand how the region of analyticity of f is mapped to the t-plane for \(\tilde{f}\).

To this purpose, let A be the region of analyticity of f (\(\mathbb {R}^+ \subset A\)) and let \(S = \mathbb {C} \setminus A\). In order to study the region of analyticity of \(\tilde{f}\), for any given \(w \in S\) we have to find the values of t that solve

$$\begin{aligned} \frac{\tau }{\omega } \phi (t-q)= w. \end{aligned}$$
(12)

This reduces to studying the equation

$$\begin{aligned} \phi (\xi ) = \frac{\xi }{1-e^{-\xi }} = z \notin \mathbb {R}^+, \end{aligned}$$
(13)

whose countable set of solutions is given by

$$\begin{aligned} \chi (z) = \left\{ \xi \in \mathbb {C} \setminus \lbrace 0 \rbrace \, \mid \, \xi = z+W_k(-ze^{-z}), \, k \in \mathbb {Z} \right\} , \end{aligned}$$

where \(W_k\) denotes the k-th branch of the Lambert-W function (see [17, 4.13]). We remark that, even though for every \(z \in \mathbb {C}\) there exists \(k \in \mathbb {Z}\) such that

$$\begin{aligned} z +W_k(-ze^{-z})=0, \end{aligned}$$
(14)

\(0 \notin \chi (z)\) because it is not a solution of (13), since \(\phi (0) = 1\) (cf. Remark 1). The regions of the complex plane where (14) holds true are reported in [17, Figure 4.13.2].
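The characterization of \(\chi (z)\) can be verified numerically: if \(\xi \) solves (13), then \(w=\xi -z\) must satisfy \(we^{w} = -ze^{-z}\), which is exactly the Lambert-W relation. A Python sketch (the test value of z and the Newton starting guess are illustrative):

```python
import cmath

def phi(xi):
    return xi / (1 - cmath.exp(-xi))

def phi_prime(xi):
    e = cmath.exp(-xi)
    return (1 - e * (1 + xi)) / (1 - e) ** 2

def solve_phi(z, xi0, steps=60):
    """Newton iteration for phi(xi) = z."""
    xi = xi0
    for _ in range(steps):
        xi -= (phi(xi) - z) / phi_prime(xi)
    return xi

z = 3.0 + 1.0j          # any z off the positive real axis
xi = solve_phi(z, z)    # starting from xi0 = z selects one element of chi(z)
w = xi - z
# xi = z + W_k(-z e^{-z}) for some branch k, i.e. w e^w = -z e^{-z}:
print(abs(phi(xi) - z), abs(w * cmath.exp(w) + z * cmath.exp(-z)))
```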

In order to exploit (5) or (6), we need to study the imaginary part of the set \(\chi ( \tilde{S} )\), where

$$\begin{aligned} \tilde{S} = \left\{ z \in \mathbb {C} \, |\, z = \frac{\omega }{\tau }w, w \in S \right\} , \end{aligned}$$

cf. (12). To this purpose, for \(z \notin \mathbb {R}^+\), we define the function

$$\begin{aligned} \varPhi (z) = \min \left| \Im \left( \chi (z) \right) \right| . \end{aligned}$$
(15)

By setting

$$\begin{aligned} \tilde{d} = \inf _{z \in \tilde{S}} \varPhi (z), \end{aligned}$$
(16)

if \(\tilde{d}\) is an isolated minimum, then we can use (6) with \(d = \tilde{d}\); otherwise we use (5) with \(d<\tilde{d}\). Since in a given subset of the complex plane the function \(\varPhi \) may be defined by more than one branch of the Lambert-W function (see again [17, 4.13]), the analysis is not immediate. Nevertheless, numerically we observe that

$$\begin{aligned} \varPhi (z) \ge \min \left( \pi , \max (\vert \arg (z) \vert , \vert \Im (z) \vert )\right) . \end{aligned}$$
(17)

The above relation can be explained by the following results.

  (a)

    For any \(\frac{\pi }{2} \le \theta \le \pi \) or \(-\pi \le \theta \le -\frac{\pi }{2}\), the function \(\varPhi (re^{i \theta })\) is increasing with respect to r and \(\varPhi (r e^{i \theta }) \rightarrow 2\pi \) for \(r \rightarrow +\infty \). For \(0<\theta <\frac{\pi }{2}\) or \(-\frac{\pi }{2}<\theta <0\), the function \(\varPhi (re^{i \theta })\) initially increases, reaches a maximum greater than \(2 \pi \) and then tends to \(2 \pi \). In any case, \(\varPhi (r e^{i \theta }) \rightarrow \vert \theta \vert \), for \(r \rightarrow 0^+\). Indeed, for \(r \rightarrow 0^+\), \(\varPhi (r e^{i \theta })\) involves only \(W_1\) (for \(0<\theta \le \pi \)) or \(W_{-1}\) (for \(-\pi \le \theta <0\)) and therefore (cf. (15))

    $$\begin{aligned} \varPhi (r e^{i \theta }) = \left| \Im \left( re^{i\theta } +W_{\pm 1}\left( -re^{i\theta }e^{-re^{i\theta }} \right) \right) \right| . \end{aligned}$$

    By using the approximation (see [17, 4.13.11])

    $$\begin{aligned} W_{\pm 1}(-ze^{-z}) = \ln (ze^{-z}) -\ln (-\ln (ze^{-z}))+{\mathcal {O}}\left( \frac{\ln (-\ln (ze^{-z}))}{\ln (ze^{-z})}\right) , \end{aligned}$$

    with \(z=re^{i\theta }\), we obtain

    $$\begin{aligned} \varPhi (r e^{i \theta })&= \left| r \sin \theta +\Im \left( W_{\pm 1} \left( -re^{i\theta }e^{-re^{i\theta }} \right) \right) \right| \\&= \pm \theta +{\mathcal {O}}\left( \frac{1}{\ln r} \right) . \end{aligned}$$
  (b)

    For \(0<y\le 2 \pi \), we have \(\varPhi (x+iy) \rightarrow y\), for \(x \rightarrow + \infty \). This holds true because in this case the branch to consider is \(W_0\) and \(W_0(z) \sim z\) for \(z \rightarrow 0\) (see [17, 4.13.5]). Moreover, \(\varPhi (x+iy) \rightarrow 2 \pi \), for \(x \rightarrow - \infty \), since \(\Im \left( W_k(z) \right) \rightarrow 2 \pi k\), \(k \in \mathbb {Z}\), for \(\vert z \vert \rightarrow \infty \). In particular, for \(0 \le y \le \pi \), \(\varPhi (x+iy)\) is monotone increasing. For \(\pi< y<2 \pi \), \(\varPhi (x+iy)\) involves only \(W_0\) and shows a local minimum at \(x^{\star } >0\) where \(\pi \le \varPhi (x^{\star }+iy) <y\). Indeed, suppose \(\varPhi (x^{\star }+iy) < \pi \); then there exists \(x^{\star \star }> x^{\star }\) such that, for \(z = x^{\star \star }+i\pi \), \(\Im \left( z +W_0\left( -ze^{-z} \right) \right) = \pi \), that is \(\Im \left( W_0 (-ze^{-z}) \right) =0\). This is impossible because \(\Im \left( W_0\left( -ze^{-z} \right) \right) = 0\) only for \(\Im \left( -ze^{-z} \right) = 0\), that is, \(\tan \Im (z) = \frac{\Im (z)}{\Re (z)}\) (see [17, 4.13]). For \(y>2 \pi \), the situation is even more complicated but, nevertheless, \(\varPhi (x+iy) \ge \pi \). The case \(y<0\) is symmetric.

In order to validate the above considerations, in Fig. 1 we show the contour plot of the function \(\varPhi (z)\).

Fig. 1

Contour plot of the function \(\varPhi \) defined in (15)

The accuracy and the efficiency of the sinc rule (11) are strictly related to the choice of \(\tau \), q and h. Similarly to the choice made in [8, 21] for Fourier type integrals, the idea is to set

$$\begin{aligned} \tau = \frac{\pi }{h} \quad \textrm{and} \quad q = \frac{\pi }{4 \tau } (1- 2 \nu ), \end{aligned}$$
(18)

in order to cancel the dominant term in the asymptotic representation of the Bessel function. Indeed, by using (8), (18) and

$$\begin{aligned} J_{\nu }(z) = \sqrt{\frac{2}{\pi z}} \left[ \cos \left( z-\frac{1}{2} \nu \pi -\frac{1}{4} \pi \right) +e^{\left| \Im (z) \right| } {\mathcal {O}}\left( \frac{1}{z} \right) \right] , \quad \vert z \vert \rightarrow + \infty , \end{aligned}$$
(19)

(see [17, p. 223, 10.7.8]), we have that

$$\begin{aligned} J_{\nu } (\tau \phi (jh-q)) \sim J_{\nu }(\tau (jh-q)) = J_{\nu }\left( j\pi -\frac{\pi }{4}(1-2 \nu ) \right) = {\mathcal {O}}\left( j^{-\frac{3}{2}} \right) , \; j \rightarrow + \infty , \end{aligned}$$

since

$$\begin{aligned} \cos \left( \tau (jh-q) -\frac{1}{2}\nu \pi -\frac{1}{4} \pi \right) = \cos \left( j \pi -\frac{\pi }{2} \right) =0. \end{aligned}$$

4 Error Analysis

The analysis presented in this section is based on the standard idea of defining h, M, N in (11) in order to equalize the error contributions \({\mathcal {E}}_D\), \({\mathcal {E}}_{T_L}\), \({\mathcal {E}}_{T_R}\). We assume to work with functions f such that

$$\begin{aligned} f(x) \sim e^{-\beta x^k} x^{\alpha }, \quad \textrm{for} \quad x \rightarrow + \infty , \end{aligned}$$

with \(\beta ,k \ge 0\), \(\alpha \in \mathbb {R}\) (\(\alpha <-\frac{3}{2}\) if \(\beta =0\) or \(k=0\)), and \(\left| f(x) \right| \le c\) for \(x \in [0,+\infty )\).

4.1 The Discretization Error \({\mathcal {E}}_D\)

The analysis of the term \({\mathcal {E}}_D\) has essentially already been given in the previous section. Nevertheless, a still open question is how to approximate d in practice. We restrict our considerations to the case in which \(\tilde{d}\) in (16) is an isolated minimum. This is true, for instance, when f is a meromorphic function with poles \(\left\{ w_j \right\} _{j \in J}\), \(J \subseteq \mathbb {Z}\), symmetric with respect to the real axis and such that \(\left| \Re (w_j) \right| \le R\), \(\forall j \in J\), for a certain \(R>0\). Since for \(h \rightarrow 0\) (\(\tau \rightarrow + \infty \)) the set \(\tilde{S}\), now given by

$$\begin{aligned} \tilde{S} = \left\{ z \in \mathbb {C} \, \left| \, z = \frac{\omega }{\tau }w_j, \, j \in J \right. \right\} , \end{aligned}$$

tends to collapse onto the imaginary axis, we have that \(d =\tilde{d} \sim \min _{j \in J} \left\{ \vert \arg (w_j) \vert \right\} \), for \(h\rightarrow 0\), since \(\varPhi (re^{i\theta }) \rightarrow \vert \theta \vert \), for \(r \rightarrow 0^+\) (see item (a) of the previous section). Therefore, by (6),

$$\begin{aligned} {\mathcal {E}}_D \sim 4 \pi \vert \tilde{\rho } \vert e^{-\frac{2 \pi d}{h}}, \end{aligned}$$
(20)

where

$$\begin{aligned} \tilde{\rho } = \textrm{Res} (f,\tilde{w}) \phi (\tilde{t}) J_{\nu }(\tilde{t}) \left( \frac{\tau }{\omega }\right) ^2 \phi ^{'}(\tilde{t}), \end{aligned}$$

in which \(\tilde{w}\) is the pole of f defining d and \(\tilde{t}\) is the corresponding one for \(\tilde{f}\).

4.2 The Truncation Error \({\mathcal {E}}_{T_L}\)

For the truncation error (see (11))

$$\begin{aligned} {\mathcal {E}}_{T_L} = \left| \left( \frac{\tau }{\omega } \right) ^2 h \sum _{j=-\infty }^{-M-1} \tilde{f}(jh) J_{\nu }(\tau \phi (jh-q)) \phi (jh-q) \phi ^{'}(jh-q) \right| , \end{aligned}$$

by using (8)–(9) and the asymptotic relation

$$\begin{aligned} J_{\nu }(z) \sim \left( \frac{1}{2} z \right) ^{\nu } \frac{1}{\Gamma (\nu +1)}, \quad z \rightarrow 0, \quad \nu \ne -1,-2,\ldots , \end{aligned}$$

(see [17, 10.7.3]), where \(\Gamma \) denotes the Gamma function, we have

$$\begin{aligned} {\mathcal {E}}_{T_L}&\lesssim \left| \frac{c \tau ^{\nu +2}}{\omega ^2 2^{\nu } \Gamma (\nu +1)} h \sum _{j=-\infty }^{-M-1} \left( \phi (jh) \right) ^{\nu +1} \phi ^{'}(jh) \right| \nonumber \\&\lesssim \frac{c \tau ^{\nu +2}}{\omega ^2 2^{\nu } \Gamma (\nu +1)} \int _{-\infty }^{-Mh} \left( \phi (x) \right) ^{\nu +1} \phi ^{'}(x) dx \nonumber \\&\sim \frac{c \pi ^{\nu +2}}{\omega ^2 2^{\nu } {\Gamma (\nu +1)(\nu +2)}} M^{\nu +2} e^{-(\nu +2)Mh}. \end{aligned}$$
(21)

4.3 The Truncation Error \({\mathcal {E}}_{T_R}\)

In order to study the truncation error (see again (11))

$$\begin{aligned} {\mathcal {E}}_{T_R} = \left| \left( \frac{\tau }{\omega } \right) ^2 h \sum _{j=N+1}^{+\infty } \tilde{f}(jh) J_{\nu }(\tau \phi (jh-q)) \phi (jh-q) \phi ^{'}(jh-q) \right| , \end{aligned}$$

we first need the following result.

Proposition 1

For \(j \rightarrow + \infty \), it holds

$$\begin{aligned} J_{\nu }(\tau \phi (jh-q)) = \frac{\sqrt{2}}{8 \pi ^2} (4 \nu ^2-1) j^{-\frac{3}{2}} (-1)^j \left( 1+{\mathcal {O}} \left( \frac{1}{j} \right) \right) . \end{aligned}$$
(22)

Proof

By writing

$$\begin{aligned} \begin{aligned} J_{\nu }(\tau \phi (jh-q))&= J_{\nu }(\tau (jh-q))+J_{\nu }^{'} (\tau (jh-q))[\tau \phi (jh-q)-\tau (jh-q)]\\&\quad +{\mathcal {O}}[\phi (jh-q)-(jh-q)]^2, \end{aligned} \end{aligned}$$

and denoting by \(t_j\) the j-th positive zero of \(J_{\nu }\), we have

$$\begin{aligned} \begin{aligned} J_{\nu }(\tau \phi (jh-q))&= J_{\nu }(t_j)+J_{\nu }^{'}(t_j) [\tau (jh-q)-t_j]+{\mathcal {O}}[\tau (jh-q)-t_j]^2 \\&\quad + J_{\nu }^{'}(\tau (jh-q))(\tau \phi (jh-q)-\tau (jh-q)) \\&\quad +{\mathcal {O}}(\phi (jh-q)-(jh-q))^2. \end{aligned} \end{aligned}$$
(23)

At this point, we need the following observations. First of all, for the derivative \(J_{\nu }^{'}(z)\) it holds (see [1, p. 361, 9.1.27])

$$\begin{aligned} J_{\nu }^{'}(z) = J_{\nu -1}(z)-\frac{\nu }{z}J_{\nu }(z), \end{aligned}$$

and, in particular,

$$\begin{aligned} J_{\nu }^{'}(t_j) = J_{\nu -1}(t_j). \end{aligned}$$
(24)

Moreover, from [1, p. 371, 9.5.12] we have

$$\begin{aligned} t_j = a_j-\frac{4 \nu ^2 -1}{8a_j}+{\mathcal {O}} \left( \frac{1}{j^2} \right) , \quad a_j = \left( j+\frac{1}{2}\nu -\frac{1}{4} \right) \pi . \end{aligned}$$
(25)

Then, remembering that \(q = \frac{\pi }{4 \tau } (1-2 \nu )\), \(\tau = \frac{\pi }{h}\), and by using (25), we have that

$$\begin{aligned} \tau (jh-q) -t_j = \frac{4 \nu ^2-1}{8a_j}+{\mathcal {O}}\left( \frac{1}{j^2} \right) . \end{aligned}$$
(26)

Finally, we observe that, for \(\xi \rightarrow + \infty \),

$$\begin{aligned} \tau (\phi (\xi )-\xi ) = \tau \frac{\xi e^{- \xi }}{1-e^{- \xi }} \sim \tau \xi e^{- \xi }. \end{aligned}$$
(27)

By inserting (24)–(26)–(27) in (23), we obtain

$$\begin{aligned} J_{\nu }(\tau \phi (jh-q))= J_{\nu -1}(t_j) \frac{4 \nu ^2-1}{8 a_j}+{\mathcal {O}}\left( \frac{1}{j^2}\right) . \end{aligned}$$

Now, by using (19) the above expression becomes

$$\begin{aligned} J_{\nu }(\tau \phi (jh-q))&= \sqrt{\frac{2}{\pi t_j}} \left[ \cos \left( t_j-\frac{\nu -1}{2}\pi -\frac{\pi }{4} \right) +{\mathcal {O}} \left( \frac{1}{t_j} \right) \right] \frac{4 \nu ^2-1}{8a_j} + {\mathcal {O}} \left( \frac{1}{j^2} \right) \nonumber \\&= \sqrt{\frac{2}{\pi t_j}} \left[ (-1)^j +{\mathcal {O}}\left( \frac{1}{t_j} \right) \right] \frac{4 \nu ^2-1}{8a_j} +{\mathcal {O}} \left( \frac{1}{j^2} \right) , \end{aligned}$$

where we have also used (cf. (25))

$$\begin{aligned} t_j = \left( j+\frac{1}{2}\nu -\frac{1}{4} \right) \pi +{\mathcal {O}} \left( \frac{1}{j} \right) . \end{aligned}$$

Finally, since obviously

$$\begin{aligned} t_j = j \pi \left( 1+ {\mathcal {O}} \left( \frac{1}{j} \right) \right) , \end{aligned}$$

we obtain the result. \(\square \)

By the above proposition and relations (8)–(9), for the truncation error \({\mathcal {E}}_{T_R}\) it holds

$$\begin{aligned} {\mathcal {E}}_{T_R} \sim \frac{\sqrt{2}}{8 \omega ^2} \vert 4 \nu ^2-1 \vert \left| \sum _{j=N+1}^{+\infty } f \left( \frac{\pi }{\omega } j \right) (-1)^j j^{-\frac{1}{2}} \right| , \quad N \rightarrow + \infty , \end{aligned}$$
(28)

where we have also used

$$\begin{aligned} f \left( \frac{\tau }{\omega } \phi (jh-q) \right) \sim f \left( \frac{\tau }{\omega } jh \right) = f \left( \frac{\pi }{\omega } j \right) , \quad j \rightarrow + \infty . \end{aligned}$$

Then, we have the following results.

Proposition 2

Let \(f(x) \sim e^{-\beta x^k} x^{\alpha }\), for \(x \rightarrow +\infty \), with \(\beta \ge 0\), \(k \ge 0\), \(\alpha \in \mathbb {R}\) (\(\alpha < -\frac{3}{2}\) if \(\beta =0\)). Let moreover

$$\begin{aligned} f_j = f \left( \frac{\pi }{\omega } j \right) (-1)^j j^{-\frac{1}{2}}. \end{aligned}$$
(29)

Then,

$$\begin{aligned} \sum _{j=N+1}^{+\infty } f_j \sim \gamma (-1)^{N+1} \int _N^{+\infty } p(x) dx, \quad N \rightarrow + \infty , \end{aligned}$$

where \(\gamma = \frac{1}{2}\left( \frac{\pi }{\omega }\right) ^{\alpha }\), and

$$\begin{aligned} p(x) = e^{-\bar{\beta }x^k} x^{\alpha -\frac{1}{2}} \left( 1-e^{-\bar{\beta }kx^{k-1}}-\left( \alpha -\frac{1}{2}\right) \frac{1}{x} e^{-\bar{\beta }kx^{k-1}} \right) , \quad \bar{\beta }=\beta \left( \frac{\pi }{\omega }\right) ^k. \end{aligned}$$
(30)

Proof

First of all, we show that

$$\begin{aligned} f_j+f_{j+1} \sim 2 \gamma (-1)^j p(j), \quad \textrm{for} \quad j \rightarrow + \infty . \end{aligned}$$
(31)

Since \(f(x) \sim e^{-\beta x^k} x^{\alpha }\), for \(x \rightarrow +\infty \), by (29) we have that

$$\begin{aligned} f_j \sim 2 \gamma (-1)^j j^{\alpha -\frac{1}{2}}e^{-\bar{\beta }j^k}, \quad j \rightarrow + \infty . \end{aligned}$$

Therefore,

$$\begin{aligned} f_j+f_{j+1} \sim 2 \gamma (-1)^j e^{-\bar{\beta }j^k} \left( j^{\alpha -\frac{1}{2}}-(j+1)^{\alpha -\frac{1}{2}} e^{-\bar{\beta }((j+1)^k-j^k)} \right) . \end{aligned}$$

Now, since for \(j \rightarrow + \infty \)

$$\begin{aligned}&(j+1)^k-j^k \sim k j^{k-1}, \\&(j+1)^{\alpha -\frac{1}{2}} \sim j^{\alpha -\frac{1}{2}} \left( 1+\left( \alpha -\frac{1}{2} \right) \frac{1}{j} \right) , \end{aligned}$$

we obtain (31). Then,

$$\begin{aligned} \sum _{j=N+1}^{+\infty } f_j&= (f_{N+1}+f_{N+2})+(f_{N+3}+f_{N+4})+\ldots \\&= \gamma (-1)^{N+1} (2 p(N+1)+2p(N+3)+\ldots ) \\&\sim \gamma (-1)^{N+1} \int _N^{+\infty } p(x) dx, \quad N \rightarrow + \infty . \end{aligned}$$

\(\square \)

Before going on, we recall that, by [11, p. 317, 3.381, n. 3 and p. 942, 8.357],

$$\begin{aligned} \int _u^{+\infty } x^{\tau -1}e^{-\mu x} dx = \mu ^{-\tau } \Gamma (\tau ,\mu u) \sim \frac{1}{\mu } u^{\tau -1} e^{-\mu u}, \quad u \rightarrow + \infty , \end{aligned}$$
(32)

where \(\Gamma \) is the incomplete Gamma function (see e.g. [11, 8.35]).
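Relation (32) can be checked with a simple composite Simpson approximation of the tail integral; the parameter values below are illustrative:

```python
import math

def tail_integral(tau_, mu, u, pad=50.0, n=20000):
    """Composite Simpson rule for int_u^{u+pad/mu} x^(tau_-1) e^(-mu*x) dx
    (beyond u + pad/mu the integrand is negligible)."""
    a, b = u, u + pad / mu
    step = (b - a) / n
    f = lambda x: x ** (tau_ - 1) * math.exp(-mu * x)
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * step)
    return s * step / 3

tau_, mu, u = 2.5, 1.0, 30.0
ratio = tail_integral(tau_, mu, u) / (u ** (tau_ - 1) * math.exp(-mu * u) / mu)
print(ratio)  # ~ 1 + (tau_ - 1)/(mu*u), tending to 1 as u grows
```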

Proposition 3

Let f be as in Proposition 2. For \(N \rightarrow + \infty \), we have

  (a)

    \({\mathcal {E}}_{T_R} \sim \kappa N^{\bar{\alpha }}\), for \(\beta = 0\) and \(\alpha <-\frac{3}{2}\),

  (b)

    \({\mathcal {E}}_{T_R} \sim \frac{\kappa s}{k \bar{\beta }} N^{\bar{\alpha }-k} e^{-\bar{\beta }N^k}\), for \(\beta >0\) and \(k > 0\),

where \(\kappa =\frac{\pi ^{\alpha }}{\omega ^{\alpha +2}}\frac{\sqrt{2}}{16}\vert 4 \nu ^2 -1 \vert \),

$$\begin{aligned} \bar{\alpha } = {\left\{ \begin{array}{ll} \alpha -\frac{1}{2}, &{}\quad 0<k<1 \; \textrm{or} \; \beta =0 \\ \alpha +\frac{1}{2}, &{}\quad k \ge 1 \end{array}\right. }, \end{aligned}$$

and

$$\begin{aligned} s = {\left\{ \begin{array}{ll} \left| \frac{1}{2}-\alpha \right| , &{}\quad 0<k<1 \\ 1-e^{-\bar{\beta }}, &{}\quad k=1 \\ 1, &{}\quad k>1 \end{array}\right. }. \end{aligned}$$

Proof

By (28) and Proposition 2, we have

$$\begin{aligned} {\mathcal {E}}_{T_R} \sim \kappa \int _N^{+\infty } p(x) dx. \end{aligned}$$

For \(\beta =0\) we have \(p(x) = \left( \frac{1}{2}-\alpha \right) x^{\alpha -\frac{3}{2}}\), and then (a) follows straightforwardly. For \(\beta >0\), by using (30), a simple asymptotic analysis shows that

$$\begin{aligned} p(x) \sim s x^{\bar{\alpha }-1} e^{- \bar{\beta } x^k}. \end{aligned}$$

At this point,

$$\begin{aligned} \int _N^{+\infty } p(x) dx \sim s \int _N^{+\infty } x^{\bar{\alpha }-1} e^{-\bar{\beta }x^k} dx = \frac{s}{k} \int _{N^k}^{+\infty } e^{-\bar{\beta }y}y^{\frac{\bar{\alpha }}{k}-1} dy, \end{aligned}$$

and using (32) we find (b). \(\square \)

4.4 Definition of h, M, N

As already mentioned, the idea is to define h, M, N such that \({\mathcal {E}}_D\), \({\mathcal {E}}_{T_R}\), \({\mathcal {E}}_{T_L}\) have similar asymptotic behavior. By comparing (5) or (6) with (21), we simply impose

$$\begin{aligned} (\nu +2)Mh = \frac{2 \pi d}{h}, \end{aligned}$$

so that, for any given M, we define

$$\begin{aligned} h:= \sqrt{\frac{2 \pi d}{M (\nu +2)} }. \end{aligned}$$
(33)

Then, by inserting (33) in (21), we have

$$\begin{aligned} {\mathcal {E}}_{T_L} \sim \frac{c \pi ^{\nu +2}}{\omega ^2 2^{\nu } {\Gamma (\nu +1)(\nu +2)}} M^{\nu +2} e^{- \sqrt{2 \pi (\nu +2) d M }}. \end{aligned}$$
(34)

As for the definition of N, we need to distinguish between the two cases (cf. Proposition 3), that is, we define N by solving

$$\begin{aligned} \frac{c \pi ^{\nu +2}M^{\nu +2}e^{- \sqrt{2 \pi (\nu +2) \arg (z_0) M }}}{\omega ^2 2^{\nu } {\Gamma (\nu +1)(\nu +2)}} = {\left\{ \begin{array}{ll} \kappa N^{\alpha -\frac{1}{2}} \; &{}\quad \textrm{for} \; \beta =0, \, \alpha <-\frac{3}{2} \\ \frac{\kappa s}{k \bar{\beta }} N^{\bar{\alpha }-k}e^{-\bar{\beta }N^k} \; &{}\quad \textrm{for} \; \beta>0, \, k>0 \end{array}\right. }. \end{aligned}$$
(35)

We remark that for \(\beta =0\) or \(k \approx 0\) the previous formulas lead to \(N>M\) and, therefore, the error decay with respect to \(m=M+N+1\) is qualitatively given by (a) of Proposition 3 (with N replaced by m). The same holds true for high frequencies, since \(\bar{\beta }\) may be very small (cf. (30) and (b) of Proposition 3). In these situations, by solving (35), one observes that \(N \approx \omega ^{\frac{2 \alpha }{2\alpha -1}}\), which quantifies the deterioration of the convergence speed in the presence of high frequencies. On the other hand, for \(\beta >0\) and \(k>\frac{1}{2}\) we obtain \(N<M\) and the decay rate is of type \(e^{- const \sqrt{m}}\). These considerations are quite evident by looking at Tables 1, 2, 3, 4 and 5 of Sect. 6.
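The balancing that leads to (33) can be sketched in a few lines of Python (the values of \(\nu \), d and M are illustrative):

```python
import math

def step_size(M, nu, d):
    """h from (33), equalizing the exponents of E_D (~ e^{-2 pi d / h})
    and E_{T_L} (~ e^{-(nu+2) M h})."""
    return math.sqrt(2 * math.pi * d / (M * (nu + 2)))

nu, d = 0.0, math.pi / 2   # illustrative: e.g. a pair of simple poles at +/- i
for M in (25, 100, 400):
    h = step_size(M, nu, d)
    # both exponents equal sqrt(2*pi*(nu+2)*d*M), giving the e^{-const*sqrt(m)} decay
    print(M, h, (nu + 2) * M * h, 2 * math.pi * d / h)
```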

5 Numerical Experiments

In this section we show some numerical experiments in which we compare the error of the sinc rule (11) with the error estimate obtained by summing the three contributions \({\mathcal {E}}_D+{\mathcal {E}}_{T_L}+{\mathcal {E}}_{T_R}\), by using (20), (34) and Proposition 3. In particular, for \(M=1,2,\ldots \), we define h as in (33) and N by solving (35). Since the functions considered are

$$\begin{aligned} f_1(x)&= \frac{1}{1+x^2}, \quad \mathrm{with \; poles} \quad w_j= (-1)^j i, \; j =0,1, \\ f_2(x)&= \frac{x^{\nu }}{(1+x^4)^{\nu +\frac{1}{2}}}, \quad \mathrm{with \; poles} \quad w_j= e^{i\frac{\pi }{4}(1+2j)}, \; j =0,1,2,3, \\ f_3(x)&= \frac{e^{-x}}{1+x^2}, \quad \mathrm{with \; poles} \quad w_j= (-1)^j i, \; j =0,1, \\ f_4(x)&= \frac{e^{-x^2}}{1+x^2}, \quad \mathrm{with \; poles} \quad w_j= (-1)^j i, \; j =0,1, \end{aligned}$$

we use formula (20) with the approximations \(d \sim \frac{\pi }{2}\) for \(f_1,f_3,f_4\) and \(d\sim \frac{\pi }{4}\) for \(f_2\).

In Figs. 2, 3, 4 and 5, for different values of \(\omega \), we plot the error (computed with respect to a reference solution), together with the error estimate, as functions of m, the total number of function evaluations.

Fig. 2

The error of the sinc rule and the estimate for \(f_1(x)\) and \(\nu = 0\)

Fig. 3

The error of the sinc rule and the estimate for \(f_2(x)\) and \(\nu = \frac{3}{2}\)

Fig. 4

The error of the sinc rule and the estimate for \(f_3(x)\) and \(\nu = 1\)

Fig. 5

The error of the sinc rule and the estimate for \(f_4(x)\) and \(\nu = 0\)

In this section we also compare formula (11) with the sinc rule applied to integral (1) after the standard logarithmic change of variables (see e.g. [15])

$$\begin{aligned} y = \ln \left( \frac{1}{x} \right) \quad \textrm{and} \quad \lambda = \ln \omega , \end{aligned}$$

that leads to

$$\begin{aligned} H^{(\nu )} \left( e^{\lambda } \right) = \int _{-\infty }^{+\infty } f(e^{-y}) J_{\nu }(e^{\lambda -y}) e^{-2y} dy. \end{aligned}$$
(36)

In Fig. 6, working with \(f_3(x)\) and \(\omega =1,5\), we show the results of formula (11) and those of the sinc rule applied to (36), with parameters \(M=N\), \(h = 1e-1\) (for \(\omega =1\)) and \(h=5e-2\) (for \(\omega =5\)). This experiment, together with many others not reported here, reveals that the sinc rule based on the exponential transformation described in Sect. 3 provides better results than the sinc rule applied to (36), especially for "large" frequencies. Indeed, as \(\omega \) increases the problem becomes more difficult because of the oscillations introduced by the Bessel function. In formula (11), this issue is partially mitigated by the choice made for the parameters of the transformation \(\phi \) (cf. (18)). We remark that we report only the results of a couple of experiments obtained with the function \(f_3(x)\), which decays exponentially; indeed, the standard logarithmic change of variables is not suitable for slowly decaying functions.
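For reference, the sinc rule applied to (36) is just a truncated trapezoidal sum over an equispaced grid in y. A minimal Python sketch (the paper's experiments are in Matlab; the parameter values below are chosen here for illustration and do not coincide with those of Fig. 6):

```python
# Plain sinc (trapezoidal) rule for the log-transformed integral (36):
#   H(e^lam) = int_{-inf}^{+inf} f(e^{-y}) J_nu(e^{lam-y}) e^{-2y} dy.
# Hypothetical Python transcription; parameters M, N, h are illustrative.
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def hankel_sinc_log(f, nu, omega, M, N, h):
    lam = np.log(omega)
    y = np.arange(-M, N + 1) * h          # equispaced sinc grid
    return h * np.sum(f(np.exp(-y)) * jv(nu, np.exp(lam - y)) * np.exp(-2.0 * y))

f3 = lambda x: np.exp(-x) / (1.0 + x ** 2)
approx = hankel_sinc_log(f3, nu=0, omega=1.0, M=400, N=400, h=0.05)
```

Halving h while doubling M and N should leave the result essentially unchanged, consistent with the exponential convergence of the trapezoidal rule for analytic integrands.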

Fig. 6

The relative error of the sinc rule (11) (solid black line) and of the sinc rule applied to integral (36) (blue dots) for \(f_3(x)\) and \(\nu =0\) (Color figure online)

6 Automatic Integrator

In this section we provide a prototype automatic integrator for the computation of the Hankel transform. The idea is based on the observation that the left truncation error \({\mathcal {E}}_{T_L}\) does not depend on the function f. Therefore, if for a generic function we plot the total error with respect to M, we always obtain something of the type

$$\begin{aligned} {\mathcal {E}}_{M,N,h} \approx \gamma 10^{-\delta M }, \end{aligned}$$

with \(\gamma ,\delta >0\). By running a number of experiments with different functions we have observed that conservative but rather good values of the parameters are given by

$$\begin{aligned} \delta =\frac{1}{5} \quad \textrm{and} \quad \gamma = 1. \end{aligned}$$

In this way, given a tolerance \(\eta \), the idea is to define

$$\begin{aligned} M = -5 \log _{10} \eta . \end{aligned}$$

Then, in order to define h, we impose (cf. (21))

$$\begin{aligned} \frac{\pi ^{\nu +2}}{\omega ^2 2^{\nu } {\Gamma (\nu +1)(\nu +2)}} M^{\nu +2} e^{-(\nu +2)Mh} = \eta , \end{aligned}$$

that leads to

$$\begin{aligned} h = \frac{1}{(\nu +2)M} \left( \ln \frac{M^{\nu +2}}{\eta } + \ln \frac{\pi ^{\nu +2}}{\omega ^2 2^{\nu } {\Gamma (\nu +1)(\nu +2)}} \right) . \end{aligned}$$
(37)

As for the choice of N, we observe that

$$\begin{aligned} {\mathcal {E}}_{T_R} \sim c_{\omega ,\nu } f\left( \frac{\pi }{\omega } N \right) N^{-\frac{1}{2}}, \end{aligned}$$

with \(c_{\omega ,\nu } = \frac{\sqrt{2}}{16} \omega ^2 \left| 4 \nu ^2-1 \right| \) (cf. Proposition 3). Then, we set N as the numerical solution of

$$\begin{aligned} c_{\omega ,\nu } f\left( \frac{\pi }{\omega } N \right) N^{-\frac{1}{2}}=\eta . \end{aligned}$$
(38)

We summarize the above strategy in the following algorithm.

Algorithm 2

Given \(\eta >0\)

  1. set \(M=\lceil -5 \log _{10} \eta \rceil \);

  2. set h as in (37);

  3. solve problem (38) to define N;

  4. evaluate integral (1) by using (11).
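Steps 1-3 above can be sketched as follows (a hypothetical Python transcription; the paper's reference implementation is the Matlab code in the Appendix; purely as an implementation choice, the secant method is here applied to the logarithm of (38), which is nearly linear in N for exponentially decaying f):

```python
# Parameter selection of Algorithm 2 (steps 1-3); step 4, the sinc rule (11),
# is omitted.  Hypothetical sketch, not the authors' Matlab code.
import numpy as np
from scipy.special import gamma

def algorithm2_parameters(f, nu, omega, eta, N0=5.0, kmax=100):
    # Step 1: M = ceil(-5 log10(eta)), from the empirical model E ~ 10^{-M/5}.
    M = int(np.ceil(-5.0 * np.log10(eta)))
    # Step 2: h from (37).
    c = np.pi ** (nu + 2) / (omega ** 2 * 2.0 ** nu * gamma(nu + 1) * (nu + 2))
    h = (np.log(M ** (nu + 2) / eta) + np.log(c)) / ((nu + 2) * M)
    # Step 3: N from (38).  Implementation choice (not from the paper): apply
    # the secant method to the logarithm of (38) for robustness.
    cwn = np.sqrt(2.0) / 16.0 * omega ** 2 * abs(4.0 * nu ** 2 - 1.0)
    g = lambda N: np.log(cwn * f(np.pi / omega * N)) - 0.5 * np.log(N) - np.log(eta)
    a, b, k = N0, N0 + 1.0, 0
    while abs(b - a) > 1e-3 and k < kmax:
        a, b = b, b - g(b) * (b - a) / (g(b) - g(a))
        k += 1
    return M, h, int(np.ceil(b)), k

# e.g. the setting of Table 1: f(x) = exp(-x), nu = 0, with omega = 1
M, h, N, k = algorithm2_parameters(lambda x: np.exp(-x), nu=0.0, omega=1.0, eta=1e-4)
```

For instance, with \(\eta = 1e-4\) step 1 gives \(M = 20\); the values of h, N and k depend on f, \(\nu \) and \(\omega \) through (37) and (38).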

In Tables 1, 2, 3, 4 and 5, working with different functions and taking \(\omega =1,5,20\), we report the results of Algorithm 2 for \(\eta = 1e-4,1e-7,1e-10\). In particular, we solve problem (38) by employing the secant method with \(N_0 =5\) as starting point. In the tables we also report the number of iterations of the secant method, denoted by k, since it represents an additional number of function evaluations. As stopping criteria for the secant algorithm we use a termination tolerance of the order of \(\eta \) and a maximum number of iterations \(k_{\textrm{max}} = 100\). The Matlab code that implements Algorithm 2 is reported in the Appendix. We point out that the code is a first attempt: for instance, the subfunction implementing the secant method is not optimized and could be substantially improved. A further development would be the simultaneous computation for a vector of frequencies.

Table 1 Results of Algorithm 2 with \(f(x) = e^{-x}\), \(\nu = 0\)
Table 2 Results of Algorithm 2 with \(f(x) = \frac{\ln (1+x)}{1+x^3}\), \(\nu = 1\)
Table 3 Results of Algorithm 2 with \(f(x) = e^{-\frac{1}{2} x^{\frac{3}{2}}}\), \(\nu = 2\)
Table 4 Results of Algorithm 2 with \(f(x) = e^{-\sqrt{x}} \ln (1+x)\), \(\nu = \frac{3}{2}\)
Table 5 Results of Algorithm 2 with \(f(x) = \frac{x}{\cosh x}\), \(\nu = 2\)

7 Conclusion

In this work we have considered the computation of the Hankel transform of a function f by employing a particular exponential transformation followed by the sinc rule. A rather accurate error analysis, based on the theory of analytic functions, allowed us to properly define the parameters of the quadrature formula and to derive quite sharp error estimates. The theory and the numerical experiments reveal that the rule is also able to handle the difficult cases represented by slowly decaying functions f. The main reason is that the choice of the parameters of the exponential transform and, as a consequence, of the step size makes it possible to manage the oscillations due to the Bessel function, especially for increasing frequency. Nevertheless, for increasing values of \(\omega \) the method slows down (cf. Sect. 4.4). The prototype automatic integrator, implemented in Matlab, has been successfully tested on several examples and, in our opinion, represents a good starting point for the development of a reliable code.