1 Introduction

In 2022 we commemorated the first centenary of Watson’s celebrated masterpiece A treatise on the theory of Bessel functions [19]; see also [14]. Although one hundred years have passed since the first edition of this fundamental book was published, there are still some interesting problems about Bessel functions to be addressed. One of them is related to Bessel expansions in several variables. Watson displayed just a couple of bivariate expansions: the Kneser-Sommerfeld expansion [19, § 15.42, p. 499] (by the way, this expansion is likely the only mistake in Watson’s book: see [13]), and a particular example of a Neumann series [19, § 16.32, p. 531]. And it is enough to take a look at [16, Sect. 5.7] or [1, Sect. 6.8] to realize that only a few two-variable Bessel series of the form

$$\begin{aligned} \sum _{m\ge 1} \alpha _m J_{\mu _1}(\zeta _m x_1)J_{\mu _2}(\zeta _m x_2) \end{aligned}$$

have been explicitly computed, in comparison with the single-variable case (see also [4, 10, 12]). Even less is known if we consider multivariate Bessel series with an arbitrary number of variables (see [17]). That also happens in the more studied case when the sequence \(\zeta _m\) is the sequence of zeros \(j_{m,\nu }\) of another Bessel function \(J_\nu \).

Of course, this is not surprising because the multivariate case is more difficult to handle than the single-variable one.

The purpose of this paper is to improve that situation. To do that, we develop a method for computing, in closed form, multivariate Bessel expansions of the type

$$\begin{aligned} \sum _{m\ge 1}\alpha _m \prod _{i=1}^k \frac{J_{\mu _i}(\zeta _m x_i)}{(\zeta _m x_i)^{\mu _i}}, \end{aligned}$$
(1.1)

assuming that for a particular value \(\eta \), a closed expression for the single-variable Bessel expansion

$$\begin{aligned} \sum _{m\ge 1}\alpha _m \frac{J_{\eta }(\zeta _m x)}{(\zeta _m x)^\eta } \end{aligned}$$
(1.2)

as a power series of \(x^{2j}\), \(j\in \mathbb {N}\), is known.

Using our method, we explicitly compute a number of multivariate Bessel expansions, among which are (for \(n\in \mathbb {Z}\))

$$\begin{aligned}&\sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -2n-1}}{J_{\nu +1}(j_{m,\nu })} \prod _{i=1}^k \frac{J_{\mu _i}(j_{m,\nu }x_i)}{(j_{m,\nu }x_i)^{\mu _i}}, \end{aligned}$$
(1.3)
$$\begin{aligned}&\sum _{m\ge 1}\frac{j_{m,\nu }^{\nu -2n-1}}{(j_{m,\nu }^2-z^2)J_{\nu + 1}(j_{m,\nu })} \prod _{i=1}^k \frac{J_{\mu _i}(j_{m,\nu } x_i)}{(j_{m,\nu }x_i)^{\mu _i}}, \end{aligned}$$
(1.4)
$$\begin{aligned}&\sum _{m\ge 1} \frac{(-1)^m}{(1+m^2/\theta ^2)^{n}} \prod _{i=1}^k \frac{J_{\mu _i}(\sqrt{1+m^2/\theta ^2}x_i)}{(\sqrt{1+m^2/\theta ^2}x_i)^{\mu _i}}, \end{aligned}$$
(1.5)
$$\begin{aligned}&\sum _{m\ge 1} \frac{\lambda _{m}^{\nu -2n}}{(\lambda _{m}^2-\nu ^2+H^2)J_\nu (\lambda _{m})} \prod _{i=1}^k\frac{J_{\mu _i}(\lambda _{m}x_i)}{(\lambda _{m}x_i)^{\mu _i}}, \end{aligned}$$
(1.6)
$$\begin{aligned}&\sum _{m\ge 1} \frac{\lambda _{m}^{\nu -2n}}{(\lambda _{m}^2-z^2)(\lambda _{m}^2-\nu ^2+H^2)J_\nu (\lambda _{m})} \prod _{i=1}^k \frac{J_{\mu _i}(\lambda _{m}x_i)}{(\lambda _{m}x_i)^{\mu _i}}, \end{aligned}$$
(1.7)

where in the last two expansions \(\lambda _m\) are the positive zeros (ordered in increasing size) of the function

$$\begin{aligned} zJ'_\nu (z)+HJ_\nu (z),\quad \nu>-1,\ \nu +H>0. \end{aligned}$$

The method is explained in full detail in Sect. 4. In order to establish our method we prove in Sect. 3 a theorem on multivariate cosine expansions which is of interest in its own right (see Theorem 3.1); this theorem is the bridge which allows us to move from the single-variable Bessel expansion (1.2) to the multivariate one (1.1).

In Sect. 5, we consider the case when the particular Bessel series (1.2) is a polynomial in a certain interval; this includes the expansions (1.3) and (1.6). We show that associated with this type of Bessel expansion are the so-called Bessel–Appell polynomials, i.e., one-parameter sequences of polynomials \((p_{n,\mu })_n\) defined by a generating function of the form

$$\begin{aligned} A(z)\frac{J_\mu (xz)}{(xz)^\mu } = \sum _{n=0}^\infty p_{n,\mu }(x)z^n, \end{aligned}$$

where A is a function analytic at \(z=0\). In particular, they satisfy

$$\begin{aligned} p'_{n,\mu }(x)=-xp_{n-1,\mu +1}(x), \quad n\ge 1. \end{aligned}$$

The multivariate Bessel series (1.1) can then be explicitly summed from the Taylor coefficients of the analytic function A. For the benefit of the reader, we display here one of our results in full detail. Write

$$\begin{aligned} \hat{\mathbb {C}}=\mathbb {C}\setminus \{-1,-2,-3,\ldots \} \end{aligned}$$
(1.8)

and, for \(\omega >0\),

$$\begin{aligned} \Omega _{[\omega ]}&= \Big \{(x_1,\ldots ,x_k)\in \mathbb {R}^k: \sum _{i=1}^k |x_i| \le \omega \Big \}, \end{aligned}$$
(1.9)
$$\begin{aligned} \Omega ^*_{[\omega ]}&= \Big \{(x_1,\ldots ,x_k)\in \Omega _{[\omega ]}: \prod _{i=1}^kx_i\not =0 \Big \}. \end{aligned}$$
(1.10)

We then prove that for \(\nu >-1\), \(\nu +H>0\), \(\mu _i\in \hat{\mathbb {C}}\), \(i=1,\ldots ,k\), with \(\nu <2n+(k+1)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots ,x_k)\in \Omega ^*_{[1]}\), the multivariate Dini-Young expansion (1.6) is equal to the polynomial

$$\begin{aligned} \sum _{l=0}^na_{n-l}^{H,\nu } \sum _{l_1+\cdots +l_k=l} \prod _{i=1}^k \frac{\left( -x_i^2/4\right) ^{l_i}}{2^{\mu _i}l_i!\, \Gamma (\mu _i + l_i+1)}, \end{aligned}$$

where \((a_n^{H,\nu })_n\) is the sequence defined by the generating function

$$\begin{aligned} \frac{z^\nu }{2\big ((H-\nu )J_\nu (z)+zJ_{\nu -1}(z)\big )} = \sum _{n=0}^\infty a_n^{H,\nu }z^{2n}. \end{aligned}$$

In Sect. 6 we extend our results to the case when the particular Bessel series (1.2) is not a polynomial but still can be expanded in powers of \(x^{2j}\), \(j\in \mathbb {N}\) (which includes the expansions (1.4), (1.5) and (1.7)). Here is an example in full detail. For \(\textrm{Re}\nu <2n+ \sum _{i=1}^k \textrm{Re}\mu _i+(k+5)/2\) and \((x_1,\ldots ,x_k)\in \Omega _{[1]}^*\), the multivariate Dini-Young expansion (1.7) is equal to

$$\begin{aligned}&\frac{1}{z^{2n+2}} \Bigg ( \frac{z^{\nu }}{2\big ((H-\nu )J_\nu (z)+zJ_{\nu -1}(z)\big )} \prod _{i=1}^k \frac{J_{\mu _i}(x_i z)}{(x_i z)^{\mu _i}} \\&\qquad - \sum _{l=0}^n z^{2l} \sum _{j=0}^l a_{l-j}^{H,\nu } \sum _{l_1+\cdots +l_k=j} \prod _{i=1}^k \frac{(-x_i^2/4)^{l_i}}{2^{\mu _i} l_i!\,\Gamma (\mu _i + l_i+1)} \Bigg ). \end{aligned}$$

When the particular Bessel series (1.2) cannot be expanded in powers of \(x^{2j}\), \(j\in \mathbb {N}\), the application of our method is much more complicated. In Appendix A (“Multivariate Sneddon expansion” section), we consider an example of such a situation. We can still obtain some results, though not as complete as in the previous scenario. We have considered the multivariate Sneddon expansion

$$\begin{aligned} \sum _{m\ge 1} \frac{j_{m,\nu }^{2\nu -2}}{J^2_{\nu +1}(j_{m,\nu })} \prod _{i=1}^k \frac{J_{\mu _i}(j_{m,\nu }x_i)}{(j_{m,\nu }x_i)^{\mu _i}}. \end{aligned}$$
(1.11)

The case \(k=2\) has been summed in [9] for \(2\textrm{Re}\nu <1+\textrm{Re}\mu _1+\textrm{Re}\mu _2\) and \(0<x+y<2\) (see also [18, § 2.2] and [13]). For \(k\ge 3\), we consider the sets

$$\begin{aligned} \Lambda _{i}^+&= \Big \{(x_1,\ldots ,x_k)\in \mathbb {R}^k : \forall j\, x_j>0,\, \sum _{j=1}^k x_j<2,\, \sum _{j\not =i}x_j < x_i\Big \}, \quad i=1,\ldots ,k, \end{aligned}$$
(1.12)
$$\begin{aligned} \Lambda _{r}^+&= \Big \{(x_1,\ldots ,x_k)\in \mathbb {R}^k : \sum _{j=1}^k x_j<2,\, \forall i\, 0<x_i< \sum _{j\not =i} x_j\Big \} \end{aligned}$$
(1.13)

(notice that for \(k=2\), \(\Lambda _{r}^+=\emptyset \)).

Assuming that one of the parameters \(\mu _i\) is equal to \(-1/2\), we have explicitly summed the expansion (1.11) in the piece \(\Lambda _{i}^+\). More precisely, using the symmetry of (1.11) we can take \(\mu _1=-1/2\), and then we have

$$\begin{aligned}&\sum _{m\ge 1} \frac{j_{m,\nu }^{2\nu -2}}{J^2_{\nu +1}(j_{m,\nu })}\prod _{i=1}^k \frac{J_{\mu _i}(j_{m,\nu }x_i)}{(j_{m,\nu }x_i)^{\mu _i}} = \frac{2^{2\nu -2}\Gamma (\nu +1)^2}{\nu \prod _{i=1}^k2^{\mu _i}\Gamma (\mu _i+1)} \nonumber \\&\qquad \times \left( -1 + \left( {\begin{array}{c}\mu _1\\ \nu \end{array}}\right) \sum _{j=0}^\infty (\nu )_j(\nu -\mu _1)_jx_1^{-2\nu -2j} \sum _{l_2+\cdots +l_k=j} \prod _{i=2}^k \frac{x_i^{2l_{i}}}{l_i!\,(\mu _i+1)_{l_i}}\right) \end{aligned}$$
(1.14)

for \(\nu ,\mu _i\in \hat{\mathbb {C}}\), \(i=2,\ldots ,k\), with \(2\textrm{Re}\nu <2n+k/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots , x_k)\in \Lambda _{1}^+\). Moreover, we have computational evidence showing that the sum (1.14) also holds when \(\mu _1\not =-1/2\), but we have not been able to prove it.

We have also been unable to sum the expansion (1.11) in the piece \(\Lambda _{r}^+\) (1.13).

2 Preliminaries

Throughout this paper, by \(\frac{J_{\mu }(z)}{z^\mu }\) we denote the even entire function

$$\begin{aligned} \frac{1}{2^\mu } \sum _{n=0}^{\infty } \frac{(-1)^n (z/2)^{2n}}{n! \, \Gamma (\mu +n+1)}, \quad z \in \mathbb {C}. \end{aligned}$$

As usual, \((a)_n\) denotes the Pochhammer symbol

$$\begin{aligned} (a)_n = a(a+1)(a+2) \ldots (a+n-1) = \frac{\Gamma (a+n)}{\Gamma (a)} \end{aligned}$$

(with n a nonnegative integer).

The zeros of the even function \(J_{\nu }(z)/z^{\nu }\) are simple and can be ordered as a double sequence \((j_{m,\nu })_{m\in \mathbb {Z}{\setminus }\{0\}}\) with \(j_{-m,\nu } = -j_{m,\nu }\) and \(0 \le \textrm{Re}j_{m,\nu } \le \textrm{Re}j_{m+1,\nu }\) for \(m \ge 1\) [19, § 15.41, p. 497]. The imaginary part of these zeros is bounded and, when m is a sufficiently large integer, there is exactly one zero in the strip \(m\pi + \frac{\pi }{2} \textrm{Re}\nu + \frac{\pi }{4}< \textrm{Re}z < (m+1)\pi + \frac{\pi }{2} \textrm{Re}\nu + \frac{\pi }{4}\) [19, § 15.4, p. 497], so that

$$\begin{aligned} \lim _{m\rightarrow +\infty } \frac{|j_{m,\nu }|}{\pi m} = 1. \end{aligned}$$

For \(\nu >-1\) and \(H+\nu >0\), the zeros \(\lambda _m\), \(m\ge 1\), of \(zJ'_\nu (z)+HJ_\nu (z)\) interlace the zeros of the Bessel function \(J_\nu \) [19, § 15.23, p. 480]. In particular, they are positive and increasing.
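
Both zero sequences are easy to produce numerically. The following is a minimal sketch of ours, written in Python with the mpmath library (assumed available); the sample values \(\nu =0.3\), \(H=1.1\) are arbitrary admissible choices. It computes a few \(j_{m,\nu }\), illustrates the limit above, and locates the \(\lambda _m\) by plain bisection on the brackets given by the interlacing property, using the standard recurrence \(zJ'_\nu (z)=\nu J_\nu (z)-zJ_{\nu +1}(z)\).

```python
from mpmath import mp, mpf, besselj, besseljzero, pi

mp.dps = 25
nu, H = mpf('0.3'), mpf('1.1')        # sample values with nu > -1, nu + H > 0

# positive zeros j_{m,nu} of J_nu, and the limit |j_{m,nu}|/(pi m) -> 1
jz = [besseljzero(nu, m) for m in range(1, 41)]
print(float(jz[-1] / (pi * 40)))      # close to 1

# f(z) = z J_nu'(z) + H J_nu(z) = (nu + H) J_nu(z) - z J_{nu+1}(z)
def f(z):
    return (nu + H) * besselj(nu, z) - z * besselj(nu + 1, z)

def bisect(a, b, steps=100):          # plain bisection on a sign-changing bracket
    fa = f(a)
    for _ in range(steps):
        c = (a + b) / 2
        fc = f(c)
        if fa * fc <= 0:
            b = c
        else:
            a, fa = c, fc
    return (a + b) / 2

# each interval (j_{m-1,nu}, j_{m,nu}) contains exactly one lambda_m (interlacing)
brackets = [mpf('1e-8')] + jz[:5]
lam = [bisect(brackets[i], brackets[i + 1]) for i in range(5)]
print([float(t) for t in lam])        # 0 < lambda_1 < j_{1,nu} < lambda_2 < j_{2,nu} < ...
```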

We will also use the well-known estimate

$$\begin{aligned} 0 < c \le |J_{\nu + 1}(j_{m,\nu })^2 j_{m,\nu }| \le C \end{aligned}$$

for some constants c and C not depending on m.

Bessel functions satisfy the bound

$$\begin{aligned} |J_\beta (z)| \le C \frac{e^{|\textrm{Im}z|}}{|z|^{1/2}}, \end{aligned}$$

for |z| large enough, with a constant C depending only on \(\beta \). To be precise, for \(|z|> \varepsilon > 0\) and \(\beta \) on a compact set K, there is a constant C depending only on \(\varepsilon \) and K, as follows from [15, Eq. 10.4.4 and § 10.17(iv)].

We also use the well-known identity

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}x}\left( \frac{J_{\mu }(x)}{x^\mu }\right) = -x\frac{J_{\mu +1}(x)}{x^{\mu +1}}. \end{aligned}$$
(2.1)

For \(\mu \) and \(\eta \) satisfying \(\textrm{Re}\mu>\textrm{Re}\eta >-1\), consider the integral transform \(T_{\mu ,\eta }\) given by

$$\begin{aligned} T_{\mu ,\eta }(f)(x) = \frac{1}{2^{\mu -\eta -1}\Gamma (\mu -\eta )} \int _0^1f(xs)s^{2\eta +1}(1-s^2)^{\mu -\eta -1}\,\textrm{d}s \end{aligned}$$
(2.2)

(with a small abuse of notation, we will often write \(T_{\mu ,\eta }(f(x))\) if it does not cause confusion).

Sonin’s formula for the Bessel functions [19, 12.11(1), p. 373] can be written as

$$\begin{aligned} \frac{J_{\mu }(x)}{x^\mu }= & {} \frac{1}{2^{\mu -\eta -1} \Gamma (\mu -\eta )} \int _0^1 \frac{J_\eta (xs)}{(xs)^{\eta }} s^{2\eta +1} (1-s^2)^{\mu -\eta -1} \, \textrm{d}s \nonumber \\= & {} T_{\mu ,\eta }\left( \frac{J_\eta (x)}{x^{\eta }}\right) \end{aligned}$$
(2.3)

valid for \(\textrm{Re}\mu>\textrm{Re}\eta >-1\).
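Sonin’s formula is easy to test numerically. The following minimal sketch of ours (Python with the mpmath library; the sample values of \(\mu \), \(\eta \) and x are arbitrary admissible choices) compares both sides of (2.3), computing the transform (2.2) by numerical quadrature.

```python
from mpmath import mp, mpf, besselj, gamma, quad

mp.dps = 30
mu, eta, x = mpf('2.3'), mpf('0.4'), mpf('1.7')   # any values with Re(mu) > Re(eta) > -1

def jnorm(b, z):
    # the even entire function J_b(z)/z^b, equal to 1/(2^b Gamma(b+1)) at z = 0
    return 1 / (2**b * gamma(b + 1)) if z == 0 else besselj(b, z) / z**b

lhs = jnorm(mu, x)
rhs = quad(lambda s: jnorm(eta, x * s) * s**(2*eta + 1) * (1 - s**2)**(mu - eta - 1),
           [0, 1]) / (2**(mu - eta - 1) * gamma(mu - eta))
print(lhs - rhs)    # ~ 0 to the working precision
```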

For \(2\textrm{Re}\eta +r+2>0\), we also have

$$\begin{aligned} T_{\mu ,\eta }(x^{r}) = \frac{\Gamma (\eta +\frac{r}{2}+1)}{2^{\mu -\eta } \Gamma (\mu +\frac{r}{2}+1)} \,x^{r}, \end{aligned}$$
(2.4)

where we have used that

$$\begin{aligned} \int _0^1 s^{a}(1-s^2)^{b}\,\textrm{d}s = \frac{\Gamma (\frac{a+1}{2}) \Gamma (b+1)}{2 \Gamma (\frac{a+1}{2}+b+1)}, \qquad \textrm{Re}a,\textrm{Re}b>-1. \end{aligned}$$
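
A small numerical sketch of ours (Python with mpmath; the sample values of \(\mu \), \(\eta \), r and x are arbitrary admissible choices) confirms (2.4) by evaluating the integral in (2.2) directly:

```python
from mpmath import mp, mpf, gamma, quad

mp.dps = 30
mu, eta, r, x = mpf('2.3'), mpf('0.4'), mpf('1.1'), mpf('0.8')   # 2 Re(eta) + r + 2 > 0

lhs = quad(lambda s: (x * s)**r * s**(2*eta + 1) * (1 - s**2)**(mu - eta - 1),
           [0, 1]) / (2**(mu - eta - 1) * gamma(mu - eta))
rhs = gamma(eta + r/2 + 1) / (2**(mu - eta) * gamma(mu + r/2 + 1)) * x**r
print(lhs - rhs)    # ~ 0
```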

The identity (2.3) can be extended to \(\textrm{Re}\eta <-1\) as follows. For \(\mu \in \hat{\mathbb {C}}\), \(\eta \in \hat{\mathbb {C}}\), \(\eta \not =-3/2,-5/2,\ldots \), and a positive integer h satisfying \(\textrm{Re}\eta >-h/2-1\), \(\textrm{Re}\mu >\textrm{Re}\eta +h\), consider the integral transform \(T_{\mu ,\eta ,h}\) given by

$$\begin{aligned} T_{\mu ,\eta ,h}(f)(x)= & {} \frac{(-1)^h 2^{\eta +1-\mu } \Gamma (2\eta +2)}{\Gamma (\mu -\eta )\Gamma (2\eta +2+h)} \nonumber \\ {}{} & {} \times \int _0^1 \frac{d^h}{ds^h} \big (f(xs)(1-s^2)^{\mu -\eta -1}\big ) s^{2\eta +h+1}\,\textrm{d}s. \end{aligned}$$
(2.5)

It is then easy to check that

$$\begin{aligned} T_{\mu ,\eta ,h}(x^{r})&= \frac{\Gamma \left( \eta +\frac{r}{2}+1\right) }{2^{\mu -\eta } \Gamma \left( \mu +\frac{r}{2}+1\right) } \,x^{r},\\ T_{\mu ,\eta ,h}\left( \frac{J_\eta (x)}{x^{\eta }}\right)&=\frac{J_{\mu }(x)}{x^\mu }. \end{aligned}$$

3 Multivariate cosine expansions

We denote by \(\pi _k\) the set of k-tuples \(\varepsilon = (\varepsilon _1,\ldots ,\varepsilon _k)\) of signs \(\varepsilon _j=\pm 1\) and by \(s_\varepsilon \) the number of negative signs in \(\varepsilon \) (so that \(\prod _{j=1}^k\varepsilon _j = (-1)^{s_\varepsilon }\)).

We define

$$\begin{aligned} \mathcal {C}_{k}^{l}(x_1,\ldots ,x_k) = \frac{1}{2^k} \sum _{\varepsilon \in \pi _k} \Bigg (\sum _{j=1}^k \varepsilon _j x_j\Bigg )^l, \end{aligned}$$

where \(l \in \mathbb {N}\) (we often use \(\mathcal {C}_k^l\) without the variables \(x_j\)).

In what follows, we will use the multinomial formula

$$\begin{aligned} (y_1 + y_2 + \cdots + y_k)^l = \sum _{l_1+l_2+\cdots +l_k=l} \left( {\begin{array}{c}l\\ l_1,l_2,\ldots ,l_k\end{array}}\right) y_1^{l_1} y_2^{l_2} \ldots y_k^{l_k} \end{aligned}$$

(in the sum, the \(l_j\) are nonnegative integers), where

$$\begin{aligned} \left( {\begin{array}{c}l\\ l_1,l_2,\ldots ,l_k\end{array}}\right) = \frac{l!}{l_1!\,l_2!\ldots l_k!}, \quad \text {with } l_1+l_2+\cdots +l_k=l \end{aligned}$$

are the so-called multinomial coefficients. Of course, these coefficients are invariant under permutation of the \(l_j\); this will be used throughout the paper without explicit mention. This gives

$$\begin{aligned} \mathcal {C}_{k}^{l}(x_1,\ldots ,x_k)&= \frac{1}{2^k} \sum _{\varepsilon \in \pi _k} \Bigg (\sum _{j=1}^k \varepsilon _j x_j\Bigg )^l\\&= \frac{1}{2^k} \sum _{\varepsilon \in \pi _k} \sum _{l_1+\cdots +l_k=l} \left( {\begin{array}{c}l\\ l_1,\ldots ,l_k\end{array}}\right) \prod _{i=1}^k \varepsilon _i^{l_i}x_i^{l_i}\\&= \frac{1}{2^k} \sum _{l_1+\cdots +l_k=l} \left( {\begin{array}{c}l\\ l_1,\ldots ,l_k\end{array}}\right) \prod _{i=1}^k x_i^{l_i} \sum _{\varepsilon \in \pi _k}\prod _{i=1}^k \varepsilon _i^{l_i}. \end{aligned}$$

If some \(l_j\) is odd, then

$$\begin{aligned} \sum _{\varepsilon \in \pi _k}\prod _{i=1}^k \varepsilon _i^{l_i} = \sum _{\varepsilon \in \pi _{k-1}}\prod _{i=1;i\not =j}^k \varepsilon _i^{l_i} - \sum _{\varepsilon \in \pi _{k-1}}\prod _{i=1;i\not =j}^k \varepsilon _i^{l_i}=0, \end{aligned}$$

and the corresponding summand in \( \sum _{l_1+\cdots +l_k=l}\) vanishes; otherwise, if all the \(l_j\) are even,

$$\begin{aligned} \sum _{\varepsilon \in \pi _k} \prod _{i=1}^k \varepsilon _i^{l_i} = \sum _{\varepsilon \in \pi _k} 1=2^k. \end{aligned}$$

Consequently, \(\mathcal {C}_{k}^{l}=0\) when l is odd, and

$$\begin{aligned} \mathcal {C}_{k}^{2l}(x_1,\ldots ,x_k) = \sum _{l_1+\cdots +l_k=l} \left( {\begin{array}{c}2l\\ 2l_1,\ldots ,2l_k\end{array}}\right) x_1^{2l_1} \ldots x_k^{2l_k}. \end{aligned}$$
(3.1)
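
The identity (3.1) is easy to check by brute force. Here is a small sketch of ours (plain Python, with arbitrary sample values) comparing the sign-averaged definition of \(\mathcal {C}_k^l\) with the right-hand side of (3.1):

```python
import itertools
from math import factorial

def multinomial(parts):
    out = factorial(sum(parts))
    for p in parts:
        out //= factorial(p)
    return out

def C_direct(l, xs):
    # the definition: average of (sum_j eps_j x_j)^l over all sign choices eps
    k = len(xs)
    return sum(sum(e * x for e, x in zip(eps, xs))**l
               for eps in itertools.product((1, -1), repeat=k)) / 2**k

def C_even(l, xs):
    # right-hand side of (3.1), i.e. C_k^{2l}
    total = 0.0
    for ls in itertools.product(range(l + 1), repeat=len(xs)):
        if sum(ls) == l:
            term = float(multinomial([2 * li for li in ls]))
            for x, li in zip(xs, ls):
                term *= x**(2 * li)
            total += term
    return total

xs = (0.7, -1.3, 2.0)
print(C_direct(5, xs))                  # odd exponent: vanishes (up to rounding)
print(C_direct(6, xs), C_even(3, xs))   # even exponent: both expressions coincide
```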

Theorem 3.1

Let \((a_m)_{m\ge 1}\), \((\zeta _m)_{m\ge 1}\) be two sequences of real numbers such that the following cosine and sine expansions converge pointwise in some interval \((-\omega ,\omega )\):

$$\begin{aligned} \phi (x)&= \sum _{m\ge 1}a_m\cos (\zeta _m x),\\ \psi (x)&= \sum _{m\ge 1}a_m\sin (\zeta _m x). \end{aligned}$$

Then, the series

$$\begin{aligned} G(x_1,\ldots , x_k) = \sum _{m\ge 1} a_m \prod _{j=1}^k \cos (\zeta _m x_j) \end{aligned}$$
(3.2)

converges pointwise if \( \sum _{j=1}^k |x_j| < \omega \), and

$$\begin{aligned} G(x_1,\ldots , x_k) = \frac{1}{2^k} \sum _{\varepsilon \in \pi _k} \phi \Bigg (\sum _{j=1}^k\varepsilon _jx_j\Bigg ). \end{aligned}$$
(3.3)

Proof

First of all, we note that

$$\begin{aligned} \sum _{j=1}^k |x_j|<\omega \iff -\omega< \sum _{j=1}^k\varepsilon _jx_j<\omega \text { for all }\varepsilon \in \pi _k. \end{aligned}$$
(3.4)

Using Euler’s formula \(\cos x = (e^{ix}+e^{-ix})/2\), we get

$$\begin{aligned} \prod _{j=1}^k\cos (\zeta _m x_j)= & {} \prod _{j=1}^k \frac{e^{i\zeta _m x_j}+e^{-i\zeta _m x_j}}{2} = \sum _{\varepsilon \in \pi _k} \prod _{j=1}^k \frac{1}{2}e^{\varepsilon _j i\zeta _m x_j} \\= & {} \sum _{\varepsilon \in \pi _k} \frac{1}{2^k} e^{i\zeta _m \sum _{j=1}^k\varepsilon _j x_j}. \end{aligned}$$

Formally, we get from (3.2)

$$\begin{aligned} G(x_1,\ldots , x_k)&= \sum _{m\ge 1} a_m \sum _{\varepsilon \in \pi _k} \frac{1}{2^k} e^{i\zeta _m \sum _{j=1}^k\varepsilon _jx_j} = \sum _{\varepsilon \in \pi _k} \frac{1}{2^k} \sum _{m\ge 1} a_m e^{i\zeta _m \sum _{j=1}^k\varepsilon _jx_j} \\&= \sum _{\varepsilon \in \pi _k} \frac{1}{2^k} \sum _{m\ge 1} a_m \left( \cos \left( \zeta _m \sum _{j=1}^k\varepsilon _jx_j\right) + i\sin \left( \zeta _m \sum _{j=1}^k\varepsilon _jx_j\right) \right) \\&= \sum _{\varepsilon \in \pi _k} \frac{1}{2^k} \left( \phi \left( \sum _{j=1}^k\varepsilon _jx_j\right) + i\psi \left( \sum _{j=1}^k\varepsilon _jx_j\right) \right) . \end{aligned}$$

The pointwise convergence of the series \(\phi \) and \(\psi \), together with (3.4), shows that each series in the last sum is convergent; hence, we deduce the pointwise convergence of the series (3.2). The identity (3.3) then follows by taking into account that G is a real function, because the \(a_m\), \(m\ge 1\), are real numbers. \(\square \)
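
As a quick numerical illustration of Theorem 3.1 (a sketch of ours, not part of the proof; Python with numpy assumed available), one can take the classical series \( \sum _{m\ge 1}\cos (\pi mx)/(\pi m)^2=x^2/4-|x|/2+1/6\), valid for \(|x|\le 1\) (it reappears in Sect. 4), as \(\phi \), and compare both sides of (3.3) for \(k=3\) at a point with \( \sum _j|x_j|<1\):

```python
import itertools
import numpy as np

def phi(x):
    # sum_{m>=1} cos(pi m x)/(pi m)^2 = x^2/4 - |x|/2 + 1/6 for |x| <= 1 (see Sect. 4)
    return x**2 / 4 - abs(x) / 2 + 1 / 6

m = np.arange(1, 200_000)
coef = 1.0 / (np.pi * m)**2              # here a_m = 1/(pi m)^2 and zeta_m = pi m

def G(xs):
    # truncated left-hand side of (3.2)
    prod = np.ones_like(m, dtype=float)
    for x in xs:
        prod = prod * np.cos(np.pi * m * x)
    return np.sum(coef * prod)

xs = (0.3, 0.25, 0.1)                    # sum of |x_j| < 1
rhs = sum(phi(sum(e * x for e, x in zip(eps, xs)))
          for eps in itertools.product((1, -1), repeat=len(xs))) / 2**len(xs)
print(G(xs), rhs)                        # the two values agree to about 6 decimal places
```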

4 The method

Our method for computing a multivariate Bessel expansion like (1.1) can be described in the following three steps.

4.1 First step

We start with a particular expansion

$$\begin{aligned} f_\eta (x) = \sum _{m\ge 1} \alpha _m \frac{J_\eta (\zeta _m x)}{(\zeta _mx)^\eta }. \end{aligned}$$
(4.1)

By applying the integral transform \(T_{\mu ,\eta }\) (2.2) to (4.1), we get the more general expansion

$$\begin{aligned} \sum _{m\ge 1}\alpha _m\frac{J_\mu (\zeta _mx)}{(\zeta _mx)^\mu } = T_{\mu ,\eta }(f_\eta )(x). \end{aligned}$$
(4.2)

This approach was worked out in [8, Lemma 1] assuming that a closed expression for (1.2) as a power series of x is known. In [8], we considered only the case when \((\zeta _m)_m\) is the sequence of zeros \((j_{m,\nu })_m\) of the Bessel function \(J_\nu \), but there is no problem in taking an arbitrary sequence \(\zeta _m\). We consider here complex parameters \(\mu ,\eta \in \hat{\mathbb {C}}\) (1.8), removing the assumption in [8], where we only considered real parameters with \(\mu ,\eta >-1\). For the benefit of the reader, we next display the new version of [8, Lemma 1] that we will use in this paper.

Lemma 4.1

Given a real number \(\omega \ge 1\), a complex number \(\eta \in \hat{\mathbb {C}}\) such that \(\textrm{Re}\eta \ne -\frac{3}{2}, -\frac{5}{2}, -\frac{7}{2},\ldots \), and two sequences \((\alpha _m)_{m\ge 1}\) and \((\zeta _m)_{m\ge 1}\), \(\zeta _m\not =0\), with \(\liminf |\zeta _m| \ge 1\), assume that

$$\begin{aligned} \sum _{m\ge 1} \frac{|\alpha _m|}{|\zeta _m|^{\textrm{Re}\eta +1/2}} < +\infty \quad \text { and }\quad \sum _{m \ge 1} \alpha _m \frac{J_\eta (\zeta _m x)}{(\zeta _m x)^\eta } = \sum _{j=0}^{+\infty } u_j x^{2j}, \quad x \in (0,\omega ).\nonumber \\ \end{aligned}$$
(4.3)

Let \(\mu \in \hat{\mathbb {C}}\). If

$$\begin{aligned} \sum _{m\ge 1} \frac{|\alpha _m|}{|\zeta _m|^{\textrm{Re}\mu +1/2}} < +\infty , \end{aligned}$$
(4.4)

then

$$\begin{aligned} \sum _{m \ge 1} \alpha _m\frac{J_\mu (\zeta _m x)}{(\zeta _mx)^{\mu }} = \sum _{j=0}^{+\infty } \frac{u_j \Gamma (\eta +j + 1)}{2^{\mu - \eta } \Gamma (\mu + j + 1)} \, x^{2j}, \quad x \in (0,\omega ). \end{aligned}$$
(4.5)

In particular, this holds if \(\textrm{Re}\mu \ge \textrm{Re}\eta \).

Proof

Take a positive integer h and \(\mu \in \hat{\mathbb {C}}\) such that \(\textrm{Re}\eta >-h/2-1\) and \(\textrm{Re}\mu >\textrm{Re}\eta +h\). The first assumption in (4.3) implies that the series on the left-hand side of the Bessel expansion in (4.3) converges uniformly on compacts. The identity (4.5) can then be proved by applying the integral transform \(T_{\mu ,\eta ,h}\) (2.5) to both sides of the Bessel expansion in (4.3). The assumption (4.4) implies that the series on the left-hand side of (4.5) is an analytic function of \(\mu \). Since the right-hand side of (4.5) is also an analytic function of \(\mu \), we can conclude that (4.5) holds for complex numbers \(\mu \in \hat{\mathbb {C}}\) satisfying (4.4). \(\square \)

4.2 Second step

The second step of our method is the bridge that allows us to move from a single-variable Bessel expansion to a multivariate one, by means of the result on multivariate cosine expansions proved in Sect. 3. Once we have (4.2), since

$$\begin{aligned} \frac{J_{-1/2}(z)}{z^{-1/2}} = (2/\pi )^{1/2} \cos z, \end{aligned}$$
(4.6)

setting \(\mu =-1/2\) in (4.2) and using Theorem 3.1 we get the multivariate cosine expansion

$$\begin{aligned} \sum _{m\ge 1}\alpha _m \prod _{j=1}^k \cos (\zeta _m x_j) = \frac{1}{2^k} \sum _{\varepsilon \in \pi _k} \phi \left( \sum _{j=1}^k\varepsilon _jx_j\right) , \end{aligned}$$
(4.7)

where \(\phi (x) = (\pi /2)^{1/2}T_{-1/2,\eta }(f_\eta )(x)\).

4.3 Third step

The third step of our method consists of a multivariate version of Lemma 4.1, which was also established in [8] (see [8, Lemma 2]), again by using the integral transforms \(T_{\mu _i,\eta _i}\) (2.2) in each variable \(x_i\). In doing so we get from (4.7) the more general multivariate expansion

$$\begin{aligned} \sum _{m\ge 1} \alpha _m \prod _{i=1}^k \frac{J_{\mu _i}(\zeta _m x_i)}{(\zeta _m x_i)^{\mu _i}} \end{aligned}$$

(the original version in [8] takes as \(\zeta _m\) the zeros of a Bessel function \(J_\eta \) and considers only real parameters, but this is not relevant). Indeed, consider sets \(\Omega \subseteq (0,+\infty )^k\) with the property that \((0,1)^k\subset \Omega \) and

$$\begin{aligned} (x_1,x_2,\ldots ,x_k) \in \Omega \;\Longrightarrow \; \prod _{i=1}^k (0,x_i] \subseteq \Omega . \end{aligned}$$

The precise statement goes as follows:

Lemma 4.2

Let \(\eta _i\in \hat{\mathbb {C}}\), \(i=1,\ldots , k\), such that \(\textrm{Re}\eta _i\not =-\frac{3}{2}, -\frac{5}{2}, -\frac{7}{2},\ldots \), and \((\alpha _m)_{m}\) and \((\zeta _m)_m\) be two sequences, \(\zeta _m\not =0\), with \(\liminf |\zeta _m| \ge 1\), such that

$$\begin{aligned} \sum _{m\ge 1} \frac{|\alpha _m|}{\prod _{i=1}^k|\zeta _m|^{\textrm{Re}\eta _i+1/2}} < +\infty . \end{aligned}$$

Assume that, for \((x_1,\ldots , x_k)\in \Omega \),

$$\begin{aligned} \sum _{m\ge 1}\alpha _m \prod _{i=1}^k\frac{J_{\eta _i}(\zeta _m x_i)}{(\zeta _m x_i)^{\eta _i}}= \sum _{j_1,\ldots ,j_k=0}^{\infty }u_{j_1,\ldots ,j_k} \prod _{i=1}^kx_i^{2j_i}, \end{aligned}$$

where the power series on the right-hand side converges absolutely. If \(\mu _i\in \hat{\mathbb {C}}\), \(i=1,\ldots , k\), and \(\textrm{Re}\mu _i>\textrm{Re}\eta _i\) then for \((x_1,\ldots , x_k)\in \Omega \),

$$\begin{aligned} \sum _{m\ge 1} \alpha _m \prod _{i=1}^k \frac{J_{\mu _i}(\zeta _m x_i)}{(\zeta _mx_i)^{\mu _i}} = \sum _{j_1,\ldots ,j_k=0}^{\infty } u_{j_1,\ldots ,j_k} \prod _{i=1}^k\frac{\Gamma (\eta _i+j_i + 1)x_i^{2j_i}}{2^{\mu _i - \eta _i} \Gamma (\mu _i + j_i + 1)}. \end{aligned}$$
(4.8)

Moreover, if \(\mu _i\in \hat{\mathbb {C}}\), \(i=1,\ldots , k\), satisfy

$$\begin{aligned} \sum _{m\ge 1} \frac{|\alpha _m|}{\prod _{i=1}^k|\zeta _m|^{\textrm{Re}\mu _i+1/2}} < +\infty , \end{aligned}$$

then (4.8) also holds.

To sum up, this three-step method works as long as we can explicitly compute the integral transforms \(T_{\mu ,\eta }(f_\eta )\) in (4.2) in the first and the third steps. This is the case when a closed expression for the function \(f_\eta \) in (4.1) as an even power series of x is known. Then, step 1 is Lemma 4.1, step 2 is Theorem 3.1, and step 3 is the particular case \(\eta _i=-1/2\) in Lemma 4.2.

In the following lemma we put together all the steps.

Lemma 4.3

Let \(\eta , \mu _i\in \hat{\mathbb {C}}\), \(i=1,\ldots , k\), with \(\textrm{Re}\eta \not =-\frac{3}{2}, -\frac{5}{2}, -\frac{7}{2},\ldots \), a real number \(\omega \ge 1\), and two sequences \((\alpha _m)_{m}\) and \((\zeta _m)_m\), \(\zeta _m\not =0\), with \(\liminf \vert \zeta _m\vert \ge 1\), such that the series

$$\begin{aligned} \sum _{m\ge 1} |\alpha _m| |\zeta _m|^{-\textrm{Re}\eta -1/2}, \quad \sum _{m\ge 1} |\alpha _m|,\quad \sum _{m\ge 1} |\alpha _m| |\zeta _m|^{-k/2- \sum _{i=1}^k\textrm{Re}\mu _i} \end{aligned}$$
(4.9)

all converge. Assume that

$$\begin{aligned} \sum _{m\ge 1}\alpha _m \frac{J_{\eta }(\zeta _m x)}{(\zeta _mx)^{\eta }} = \sum _{l=0}^{\infty }u_lx^{2l},\quad x\in (-\omega ,\omega ). \end{aligned}$$
(4.10)

Then we have, for \(k\in \mathbb {N}\) and \(\sum _{i=1}^k |x_i| < \omega \),

$$\begin{aligned} \sum _{m\ge 1}\alpha _m \prod _{i=1}^k \frac{J_{\mu _i}(\zeta _m x_i)}{(\zeta _mx_i)^{\mu _i}}&= \sum _{l=0}^{\infty } 2^{\eta } \Gamma (\eta +l+ 1)u_l \nonumber \\&\qquad \times \sum _{l_1+\cdots +l_k=l} \left( {\begin{array}{c}l\\ l_1,\ldots ,l_k\end{array}}\right) \prod _{i=1}^k\frac{x_i^{2l_i}}{2^{\mu _i} \Gamma (\mu _i + l_i+1)}. \end{aligned}$$
(4.11)

Proof

The assumptions in (4.9) on the sequences \((\alpha _m)_m\) and \((\zeta _m)_m\) guarantee the uniform convergence in compact sets of \(\mathbb {R}\) of the series in the left-hand side of (4.10) and (4.11).

Taking into account (4.6) and applying Lemma 4.1 for \(\mu =-1/2\) (this is why we need the second assumption in (4.9)), we get for \(x\in (0,\omega )\) that

$$\begin{aligned} \sum _{m\ge 1}\alpha _m \cos (\zeta _mx)= \sqrt{\pi } \sum _{l=0}^{\infty } \frac{u_l \Gamma (\eta +l + 1)}{2^{- \eta } \Gamma (l+\frac{1}{2})}\, x^{2l}. \end{aligned}$$
(4.12)

Since both sides in (4.12) are even functions we get that (4.12) also holds for \(x\in (-\omega ,0)\) and trivially from (4.10) also for \(x=0\).

Theorem 3.1 gives, for \(\sum _{i=1}^k |x_i| <\omega \),

$$\begin{aligned} \sum _{m\ge 1}\alpha _m \prod _{j=1}^k\cos (\zeta _m x_j) = \frac{1}{2^k} \sum _{\varepsilon \in \pi _k} p \Big ( \sum _{j=1}^k\varepsilon _jx_j\Big ), \end{aligned}$$

where p is the power series in the right-hand side of (4.12). Using (3.1) we get, for \( \sum _{i=1}^k |x_i| <\omega \),

$$\begin{aligned} \sum _{m\ge 1}\alpha _m \prod _{j=1}^k\cos (\zeta _mx_j)=\sqrt{\pi } \sum _{l=0}^{\infty } \frac{u_l \Gamma (\eta +l+ 1)}{2^{- \eta } \Gamma (l+\frac{1}{2})} \sum _{l_1+\cdots +l_k=l} \left( {\begin{array}{c}2l\\ 2l_1,\ldots ,2l_k\end{array}}\right) x_1^{2l_1}\ldots x_k^{2l_k}. \end{aligned}$$

Taking into account (4.6), the last identity can be rewritten in the form

$$\begin{aligned}{} & {} \sum _{m\ge 1} \alpha _m \prod _{j=1}^k\frac{J_{-1/2} (\zeta _mx_j)}{(\zeta _mx_j)^{-1/2}}\\{} & {} \quad =\frac{2^{k/2}}{\pi ^{(k-1)/2}} \sum _{l=0}^{\infty } \frac{u_l \Gamma (\eta +l+ 1)}{2^{- \eta } \Gamma (l+\frac{1}{2})} \sum _{l_1+\cdots +l_k=l} \left( {\begin{array}{c}2l\\ 2l_1,\ldots ,2l_k\end{array}}\right) x_1^{2l_1}\ldots x_k^{2l_k}. \end{aligned}$$

Write \(\Lambda =\{(x_1,\ldots , x_k): x_i>0,\, \sum _{i=1}^kx_i<\omega \}\). Lemma 4.2 gives, for \((x_1,\ldots ,x_k)\in \Lambda \),

$$\begin{aligned}&\sum _{m\ge 1} \alpha _m \prod _{i=1}^k \frac{J_{\mu _i}(\zeta _mx_i)}{(\zeta _mx_i)^{\mu _i}} = \frac{2^{k/2}}{\pi ^{(k-1)/2}} \nonumber \\&\qquad \times \sum _{l=0}^{\infty } \frac{u_l \Gamma (\eta +l+ 1)}{2^{- \eta } \Gamma (l+\frac{1}{2})} \sum _{l_1+\cdots +l_k=l} \left( {\begin{array}{c}2l\\ 2l_1,\ldots ,2l_k\end{array}}\right) \prod _{i=1}^k \frac{\Gamma (l_i+\frac{1}{2})}{2^{\mu _i +1/2} \Gamma (\mu _i + l_i+1)}\, x_i^{2l_i}, \end{aligned}$$
(4.13)

from which (4.11) follows easily. Since both sides of (4.13) are even functions in each variable \(x_i\), we deduce that (4.11) also holds in \( \sum _{i=1}^k |x_i| <\omega \), if \(x_1\ldots x_k\not =0\), and by continuity for \(x_1\ldots x_k =0\) as well. \(\square \)

We illustrate the method with a simple but significant example.

One of the most interesting examples of a trigonometric expansion is the Hurwitz series for the Bernoulli polynomials, \(n\ge 1\),

$$\begin{aligned} B_{2n+1}(x)&= (-1)^{n+1} \, \frac{2(2n+1)!}{(2\pi )^{2n+1}} \sum _{m=1}^{\infty } \frac{\sin (2\pi mx)}{m^{2n+1}}, \quad x\in [0,1],\nonumber \\ B_{2n}(x)&= (-1)^{n+1} \, \frac{2(2n)!}{(2\pi )^{2n}} \sum _{m=1}^{\infty } \frac{\cos (2\pi mx)}{m^{2n}}, \quad x\in [0,1], \end{aligned}$$
(4.14)

see [5, 24.8(i)].

For our purpose, it is better to translate the expansion (4.14) to the interval \([-1,1]\). Hence, we change \(x \mapsto (x+1)/2\) to obtain the equivalent cosine series

$$\begin{aligned} B_{2n}((x+1)/2) = (-1)^{n+1} \, \frac{2(2n)!}{2^{2n}} \sum _{m=1}^{\infty } \frac{(-1)^m\cos (\pi mx)}{(\pi m)^{2n}}, \quad x\in [-1,1].\nonumber \\ \end{aligned}$$
(4.15)

Using the binomial expansion of the Bernoulli polynomials,

$$\begin{aligned} B_{2n}((x+1)/2) = \sum _{l=0}^{2n} \left( {\begin{array}{c}2n\\ l\end{array}}\right) B_{2n-l}({1}/2) \left( \frac{x}{2}\right) ^l, \end{aligned}$$

the identity \(B_j(1/2) = -(1-2^{1-j})B_j\) (see [5, 24.4.27]), as well as \(B_1(x) = x-1/2\) and \(B_{2l+1} = 0\) for \(l\ge 1\), we get

$$\begin{aligned} B_{2n}((x+1)/2) = - \sum _{j=0}^{n} \left( {\begin{array}{c}2n\\ 2j\end{array}}\right) \frac{(2^{2n-2j-1}-1)B_{2n-2j}}{2^{2n-1}} \,x^{2j}. \end{aligned}$$

Hence, (4.15) gives, for \(x\in [-1,1]\),

$$\begin{aligned} \sum _{m=1}^{\infty } \frac{(-1)^m\cos (\pi mx)}{(\pi m)^{2n}} = \frac{(-1)^{n}}{(2n)!} \sum _{j=0}^{n} \left( {\begin{array}{c}2n\\ 2j\end{array}}\right) (2^{2n-2j-1}-1)B_{2n-2j}x^{2j}. \end{aligned}$$

Taking into account (4.6), we can apply Lemma 4.1 to get (after easy computations) the Bessel expansion

$$\begin{aligned} \sum _{m=1}^{\infty } \frac{(-1)^m}{(\pi m)^{2n}} \frac{J_\mu (\pi mx)}{(\pi mx)^\mu } = (-1)^{n} \sum _{j=0}^{n} \frac{(2^{2n-2j-1}-1)B_{2n-2j}}{2^{2j+\mu }j!\,(2n-2j)!\,\Gamma (\mu +j+1)} \,x^{2j},\nonumber \\ \end{aligned}$$
(4.16)

valid for \(x\in [0,1]\) and \(2n+\textrm{Re}\mu >1/2\) (\(n\ge 1\)). The identity (4.16) is already known (although in a more complicated form): it is [16, p. 678, (14)].
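
The identity (4.16) can be tested numerically. The following sketch of ours (Python with the mpmath library; the sample values of n, \(\mu \) and x are arbitrary admissible choices) compares a truncation of the left-hand side with the finite sum on the right:

```python
from mpmath import mp, mpf, besselj, gamma, bernoulli, pi

mp.dps = 20
n, mu, x = 2, mpf('0.7'), mpf('0.6')    # sample values with 2n + Re(mu) > 1/2 and x in [0,1]

def jnorm(b, z):
    return 1 / (2**b * gamma(b + 1)) if z == 0 else besselj(b, z) / z**b

lhs = sum((-1)**m / (pi*m)**(2*n) * jnorm(mu, pi*m*x) for m in range(1, 2001))
rhs = (-1)**n * sum((2**(2*n - 2*j - 1) - 1) * bernoulli(2*n - 2*j)
                    / (2**(2*j + mu) * gamma(j + 1) * gamma(2*n - 2*j + 1) * gamma(mu + j + 1))
                    * x**(2*j) for j in range(n + 1))
print(lhs - rhs)    # ~ 0 up to the truncation error of the series
```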

Applying Lemma 4.3, we get the following multivariate Bessel expansion which seems to be new (as far as we know):

$$\begin{aligned}&\sum _{m=1}^{\infty } \frac{(-1)^m}{(\pi m)^{2n}} \prod _{i=1}^k\frac{J_{\mu _i}(\pi mx_i)}{(\pi mx_i)^{\mu _i}} \nonumber \\&\quad = (-1)^{n} \sum _{j=0}^{n}\frac{(2^{2n-2j-1}-1)B_{2n-2j}}{(2n-2j)!} \sum _{l_1+\cdots +l_k=j} \prod _{i=1}^k\frac{(x_i/2)^{2l_i}}{2^{\mu _i}l_i!\, \Gamma (\mu _i+l_i+1)}, \end{aligned}$$
(4.17)

valid for \( \sum _{i=1}^k |x_i| \le 1\) and \(2n+ \sum _{i=1}^k\textrm{Re}\mu _i+k/2>1\) (\(n\ge 1\)). Actually, this is the particular case of the expansion (1.3) (which will be computed in the next section: see (5.38)) for \(\nu =1/2\).
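
The same kind of numerical check works for the multivariate expansion (4.17). Here is a sketch of ours (Python with mpmath; the sample values k=3, n=2 and the \(\mu _i\), \(x_i\) below are arbitrary admissible choices):

```python
import itertools
from mpmath import mp, mpf, besselj, gamma, bernoulli, pi

mp.dps = 15
n = 2
mus = [mpf('0.7'), mpf('1.2'), mpf('-0.3')]
xs = [mpf('0.2'), mpf('0.3'), mpf('0.4')]      # sum |x_i| <= 1
k = len(mus)                                   # and 2n + sum Re(mu_i) + k/2 > 1

def jnorm(b, z):
    return 1 / (2**b * gamma(b + 1)) if z == 0 else besselj(b, z) / z**b

def prod(factors):
    out = mpf(1)
    for t in factors:
        out *= t
    return out

lhs = sum((-1)**m / (pi*m)**(2*n) * prod(jnorm(mu, pi*m*x) for mu, x in zip(mus, xs))
          for m in range(1, 801))
rhs = (-1)**n * sum(
    (2**(2*n - 2*j - 1) - 1) * bernoulli(2*n - 2*j) / gamma(2*n - 2*j + 1)
    * sum(prod((x/2)**(2*l) / (2**mu * gamma(l + 1) * gamma(mu + l + 1))
               for mu, x, l in zip(mus, xs, ls))
          for ls in itertools.product(range(j + 1), repeat=k) if sum(ls) == j)
    for j in range(n + 1))
print(lhs - rhs)    # ~ 0 up to the truncation error
```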

We can also find the case when \(n\le 0\) by differentiating (4.17). Indeed, for \(n=1\), we have

$$\begin{aligned} \sum _{m=1}^{\infty } \frac{(-1)^m}{(\pi m)^{2}} \prod _{i=1}^k\frac{J_{\mu _i}(\pi mx_i)}{(\pi mx_i)^{\mu _i}} = \frac{1}{4}\prod _{i=1}^k\frac{1}{2^{\mu _i}\Gamma (\mu _i+1)} \left( -\frac{1}{3}+\frac{1}{2} \sum _{i=1}^k\frac{x_i^2}{\mu _i+1}\right) . \end{aligned}$$

Differentiating with respect to \(x_1\), using (2.1), and setting \(\mu _1+1 \mapsto \mu _1\), we get

$$\begin{aligned} \sum _{m=1}^{\infty } (-1)^m \prod _{i=1}^k \frac{J_{\mu _i}(\pi mx_i)}{(\pi mx_i)^{\mu _i}} = -\frac{1}{2} \prod _{i=1}^k\frac{1}{2^{\mu _i} \Gamma (\mu _i+1)}, \end{aligned}$$

valid for \( \sum _{i=1}^k |x_i| \le 1\) and \( \sum _{i=1}^k\textrm{Re}\mu _i+k/2>1\). And then, differentiating again, we get

$$\begin{aligned} \sum _{m=1}^{\infty } (-1)^m(\pi m)^{2n} \prod _{i=1}^k \frac{J_{\mu _i}(\pi mx_i)}{(\pi mx_i)^{\mu _i}}=0, \end{aligned}$$

valid for \(\sum _{i=1}^k |x_i| \le 1\) and \(-2n+ \sum _{i=1}^k \textrm{Re}\mu _i+k/2>1\) (\(n\ge 1\)) (for \(k=1\) this is [16, Identity (10), p. 678]).

Cosine expansions (4.14) and (4.15) are equivalent under the linear change of variable \(x \mapsto (x+1)/2\). However, if we apply our method to the expansion (4.14), we get results completely different from those found above (expansions (4.16) and (4.17)). In the one-variable case, producing the Bessel extension is as easy as before, but the scenario changes dramatically in the multivariate case. This is because on the left-hand side of (4.14), \(B_{2n}(x)\) contains a term \(x^{2n-1}\), which corresponds to the even function \(f(x) = |x|^{2n-1}\). This even function is not analytic at 0 and, in the multivariate case, it makes the computation of the integral transforms (2.2) rather complicated. In fact, in that case an infinite power series appears in the closed expression for the multivariate Bessel expansion. Indeed, by applying the integral transform \(T_{\mu ,-1/2}\) (2.2) to both sides of (4.14), we can still produce the following Bessel expansion:

$$\begin{aligned} \sum _{m=1}^{\infty } \frac{1}{(\pi m)^{2n}} \frac{J_\mu (\pi mx)}{(\pi mx)^\mu } = \frac{(-1)^{n+1}2^{2n}}{2\sqrt{\pi }(2n)!} \sum _{j=0}^{2n} \left( {\begin{array}{c}2n\\ j\end{array}}\right) \frac{\Gamma ((j+1)/2)B_{2n-j}}{2^{\mu }\Gamma (\mu +j/2+1)}(x/2)^j,\nonumber \\ \end{aligned}$$
(4.18)

valid for \(x\in [0,2]\) and \(2n+\textrm{Re}\mu >1/2\) (\(n\ge 1\)). This identity is different from (4.16) but it is also known: [16, p. 678, (13)] (the case \(n\le 0\) can be obtained by differentiation from the case \(n=1\) in (4.18)).

As mentioned above, the monomial \(x^{2n-1}\) in the cosine expansion (4.14) makes it difficult to extend it to a multivariate expansion using our method. To illustrate the problem, let us take \(n=1\); then (4.14) gives

$$\begin{aligned} \sum _{m=1}^{\infty } \frac{\cos (\pi mx)}{(\pi m)^{2}} = \frac{x^2}{4}-\frac{|x|}{2}+\frac{1}{6} \end{aligned}$$

for \(|x| \le 1\). Using Theorem 3.1, we get

$$\begin{aligned}{} & {} \sum _{m=1}^{\infty } \frac{\cos (\pi mx) \cos (\pi my)}{(\pi m)^{2}} \\{} & {} \quad = \frac{x^2}{4} + \frac{y^2}{4} + \frac{1}{6} - \frac{1}{4}(|x+y| + |x-y|). \end{aligned}$$

Applying the integral transforms \(T_{\mu _1,-1/2}\) in the variable x and \(T_{\mu _2,-1/2}\) in the variable y, respectively, and using (2.3), (2.4), we find that

$$\begin{aligned}&\sum _{m=1}^{\infty } \frac{1}{(\pi m)^{2}} \frac{J_{\mu _1}(\pi mx)J_{\mu _2}(\pi my)}{(\pi mx)^{\mu _1}(\pi my)^{\mu _2}}\nonumber \\&\quad = \frac{2^{-\mu _1-\mu _2-3}}{\Gamma (\mu _1+1)\Gamma (\mu _2+1)} \left( \frac{x^2}{\mu _1+1}+\frac{y^2}{\mu _2+1}+\frac{4}{3}\right) \nonumber \\&\qquad - \frac{2^{-\mu _1-\mu _2}}{\pi \Gamma (\mu _1+1/2)\Gamma (\mu _2+1/2)}\nonumber \\&\qquad \int _0^1 \int _0^1 (|rx+sy| + |rx-sy|) (1-r^2)^{\mu _1-1/2}(1-s^2)^{\mu _2-1/2}\,\textrm{d}r\,\textrm{d}s. \end{aligned}$$
(4.19)

It is not necessary to compute the double integral because the Bessel expansion (4.19) is the particular case \(\nu =1/2\) of the Sneddon-Bessel series we compute in [9], and so using [9, Sect. 4.1.2], we get

(4.20)

valid for \(0<2+\textrm{Re}\mu _1+\textrm{Re}\mu _2\), and \(0<y\le x\), \(x+y<2\) (also for \(x+y=2\) if \(0<1+\textrm{Re}\mu _1+\textrm{Re}\mu _2\)).

In contrast to the multivariate Bessel expansion (4.17), (4.20) is no longer a polynomial (except when the parameters \(\mu _1\) and \(\mu _2\) are positive half-integers).

5 Bessel expansions of multivariate polynomials

5.1 Bessel–Appell polynomials

Given a function A(z) analytic at \(z=0\) with \(A(0)\not =0\), we define the associated one-parameter family \(p_{n,\mu }(x)\), \(n\ge 0\), of Bessel–Appell polynomials by means of the following generating function:

$$\begin{aligned} A(z)\frac{J_\mu (xz)}{(xz)^\mu } = \sum _{n=0}^\infty p_{n,\mu }(x)z^n. \end{aligned}$$
(5.1)

It is straightforward from the definition that each \(p_{n,\mu }\) is an even polynomial of degree 2n, \(n\ge 0\). Moreover, using (2.1) we have

$$\begin{aligned} p'_{n,\mu }(x) = -xp_{n-1,\mu +1}(x),\quad n\ge 1. \end{aligned}$$
(5.2)

Bessel–Appell polynomials have already been considered in the literature [2], although with no special name and, as far as we know, with no connection to the explicit summation of Bessel expansions. Bessel–Appell polynomials also satisfy

$$\begin{aligned} T(p_{n,\mu })(x) = p_{n-1,\mu }(x), \quad T = -\frac{d^2}{dx^2}-\frac{2\mu +1}{x}\frac{\textrm{d}}{\textrm{d}x}. \end{aligned}$$

Write \({\hat{T}}\) for the linear operator \({\hat{T}}(p)(x)=T(p(x^2))(\sqrt{x})\) acting on polynomials p. We then have

$$\begin{aligned} {\hat{T}}(p_{n,\mu }(\sqrt{x})) = p_{n-1,\mu }(\sqrt{x}),\quad n\ge 1, \end{aligned}$$

and so the polynomials \((p_{n,\mu }(\sqrt{x}))_n\) are of the Appell type studied in [11, Chap. 10].

The generating function (5.1) also shows that if \(A(z)= \sum _{n=0}^\infty a_nz^n\), then

$$\begin{aligned} p_{n,\mu }(0) = \frac{a_n}{2^\mu \Gamma (\mu +1)}, \quad n\ge 0. \end{aligned}$$
(5.3)

Moreover, iterating the identity (5.2), we get

$$\begin{aligned} \frac{p^{(2j)}_{n,\mu }(0)}{(2j)!} = \frac{(-1)^ja_{n-j}}{2^{\mu +2j}j!\,\Gamma (\mu +j+1)}, \quad n\ge 0. \end{aligned}$$
(5.4)

For \(\textrm{Re}\mu>\textrm{Re}\nu >-1\), using the integral transform (2.2) in (5.1) we have

$$\begin{aligned} T_{\mu ,\nu }(p_{n,\nu })(x)=p_{n,\mu }(x),\quad n\ge 0. \end{aligned}$$
(5.5)

The identity (5.5) can be extended to \(\textrm{Re}\nu <-1\) using the integral transform (2.5).

In the opposite direction, assume that we have a one-parameter family \(p_{n,\mu }(x)\), \(n\ge 0\), of polynomials with \(p_{n,\mu }\) of degree 2n satisfying (5.2) and (5.3) for a certain sequence \((a_n)_n\) such that \(A(z)= \sum _{n=0}^\infty a_nz^n\) defines a function analytic at \(z=0\), which is equivalent to

$$\begin{aligned} \limsup _{n\rightarrow +\infty } |a_n|^{1/n} < +\infty . \end{aligned}$$
(5.6)

Since (5.2) and (5.3) determine uniquely the whole parametric family of polynomials, it follows that \((p_{n,\mu })_n\) also satisfy (5.1).

Remark 5.1

We can find a connection between Bessel–Appell polynomials and Bessel expansions of the form

$$\begin{aligned} \sum _{m\ge 1}\alpha _m\zeta _m^{-2n} \frac{J_{\mu }(\zeta _m x)}{(\zeta _m x)^\mu }. \end{aligned}$$
(5.7)

To this end, assume we have sequences \((\alpha _m)_{m\ge 1}, (\zeta _m)_{m\ge 1}\), \(\zeta _m\not =0\), such that for a certain \(\nu \in \mathbb {C}\), \(\textrm{Re}\nu >-1\), and \(\omega >0\),

$$\begin{aligned}&\liminf _m |\zeta _m| \ge 1, \quad \sum _{m\ge 1} \frac{|\alpha _m|}{|\zeta _m|^{\textrm{Re}\nu +1/2}} < +\infty , \end{aligned}$$
(5.8)
$$\begin{aligned}&\sum _{m\ge 1}\alpha _m \frac{J_{\nu }(\zeta _m x)}{(\zeta _m x)^\nu } = a_0 \in \mathbb {C}\setminus \{0\},\quad {x\in (0,\omega ).} \end{aligned}$$
(5.9)

Using the assumption (5.8) we can define for \(\textrm{Re}\mu \ge \textrm{Re}\nu \), \(n\ge 0\) and \(0<x\) the functions

$$\begin{aligned} p_{n,\mu }(x)= \sum _{m\ge 1}\alpha _m\zeta _m^{-2n} \frac{J_{\mu }(\zeta _m x)}{(\zeta _m x)^\mu }. \end{aligned}$$
(5.10)

Notice that the convergence is uniform in compact subsets of \((0,+\infty )\). It is then easy to see using (2.1) that they satisfy (5.2), that is,

$$\begin{aligned} p_{n,\mu }'(x) = -xp_{n-1,\mu +1}(x),\quad n\ge 1. \end{aligned}$$

Moreover, for \(\textrm{Re}\mu >\textrm{Re}\nu \), using (2.3) we have, from (5.10) that

$$\begin{aligned} p_{n,\mu }(x) = T_{\mu ,\nu }(p_{n,\nu })(x), \end{aligned}$$
(5.11)

where \(T_{\mu ,\nu }\) is the integral transform (2.2): the assumption (5.9) allows changing the order of the integral transform and the series (5.10) which defines the function \(p_{n,\nu }(x)\).

The assumption (5.9), the identity (5.11) for \(n=0\), and (2.4) imply that for \(\textrm{Re}\mu \ge \textrm{Re}\nu \), \(p_{0,\mu }(x)\) is constant in \((0,\omega )\), and then, by (5.2) and induction, \(p_{n,\mu }(x)\), \(n\ge 0\), is a polynomial of degree 2n in \((0,\omega )\). With a small abuse of notation, we also write \(p_{n,\mu }(x)\) for the polynomial in \(\mathbb {C}\) that coincides with \(p_{n,\mu }(x)\) in \((0,\omega )\).

Let us take

$$\begin{aligned} a_n = 2^\nu \Gamma (\nu +1)p_{n,\nu }(0). \end{aligned}$$
(5.12)

The sequence \((a_n)_n\) can be used to sum a number of Bessel series, including (5.7). This goes as follows. Since for n large enough

$$\begin{aligned} p_{n,\nu }(0) = \frac{1}{2^\nu \Gamma (\nu +1)} \sum _{m\ge 1}\alpha _m\zeta _m^{-2n}, \end{aligned}$$

it follows from (5.8) that \((a_n)_n\) satisfies (5.6) and we can define a function A(z) analytic at \(z=0\), with \(A(0)=a_0\not =0\), by the power series

$$\begin{aligned} A(z) = \sum _{n=0}^\infty a_nz^{2n}. \end{aligned}$$
(5.13)

Using (5.11) for \(x=0\) and (2.4) for \(r=0\), we get (5.3):

$$\begin{aligned} p_{n,\mu }(0)=\frac{a_n}{2^\mu \Gamma (\mu +1)},\quad n\ge 0. \end{aligned}$$

Hence for \(\textrm{Re}\mu >\textrm{Re}\nu \) our discussion at the beginning of this section shows that the polynomials \(p_{n,\mu }\), \(n\ge 0\), defined by (5.10), are also the Bessel–Appell polynomials defined by (5.1) where the analytic function A is given by (5.13).

Using (5.4) and (5.10), we get

$$\begin{aligned} \sum _{m\ge 1} \alpha _m\zeta _m^{-2n} \frac{J_{\mu }(\zeta _m x)}{(\zeta _m x)^\mu } = \sum _{j=0}^n\frac{a_{n-j}(-x^2/4)^{j}}{2^{\mu }j!\,\Gamma (\mu +j+1)}, \quad x\in (0,\omega ). \end{aligned}$$
(5.14)

Moreover, if for some \(n<0\), \( \sum _{m\ge 1} |\alpha _m| |\zeta _m|^{-2n-\textrm{Re}\mu -1/2} < +\infty \) (with \(\textrm{Re}\mu >\textrm{Re}\nu \)), differentiating \(-n\) times in (5.14) for \(n=0\), we get

$$\begin{aligned} \sum _{m\ge 1} \alpha _m\zeta _m^{-2n} \frac{J_{\mu }(\zeta _m x)}{(\zeta _m x)^\mu } = 0, \quad x\in (0,\omega ). \end{aligned}$$

In the next proposition, we include other series that can be summed using the sequence \((a_n)_n\) given in (5.12).

Proposition 5.2

Assume that the sequences \((\alpha _m)_{m\ge 1}, (\zeta _m)_{m\ge 1}\), satisfy (5.8) and (5.9). We then have for \(n\ge 0\), \(\textrm{Re}\mu \ge \textrm{Re}\nu \) and \(x\in (0,\omega )\),

$$\begin{aligned}{} & {} \sum _{m\ge 1} \frac{\alpha _m\zeta _m^{-2n}}{(\zeta _m^2-z^2)} \frac{J_{\mu }(\zeta _m x)}{(\zeta _m x)^\mu }\nonumber \\{} & {} \quad = \frac{1}{z^{2n+2}} \left( A(z)\frac{J_{\mu }(xz)}{(xz)^\mu } - \sum _{l=0}^nz^{2l} \sum _{j=0}^l \frac{a_{l-j}(-x^2/4)^{j}}{2^{\mu }j!\,\Gamma (\mu +j+1)}\right) , \end{aligned}$$
(5.15)

where A is given by (5.13) and \((a_n)_n\) is defined by (5.12).

Proof

The proof is a matter of computation. Indeed, using the geometric series and the polynomials (5.10), we deduce for \(|z| < \inf _m |\zeta _m|\) (and then on the whole range by analytic continuation) that

$$\begin{aligned} \sum _{m\ge 1}\frac{\alpha _m\zeta _m^{-2n}}{(\zeta _m^2-z^2)} \frac{J_{\mu }(\zeta _m x)}{(\zeta _m x)^\mu }&= \sum _{m\ge 1} \sum _{l=0}^\infty \frac{\alpha _m z^{2l}}{\zeta _m^{2n+2l+2}} \frac{J_{\mu }(\zeta _m x)}{(\zeta _m x)^\mu } \\&= \sum _{l=0}^\infty z^{2l} \sum _{m\ge 1} \frac{\alpha _m}{\zeta _m^{2n+2l+2}} \frac{J_{\mu }(\zeta _m x)}{(\zeta _m x)^\mu } \\&= \sum _{l=0}^\infty p_{n+l+1,\mu }(x)z^{2l} = \frac{1}{z^{2n+2}} \sum _{l=n+1}^\infty p_{l,\mu }(x)z^{2l} \\&= \frac{1}{z^{2n+2}} \bigg (A(z)\frac{J_{\mu }(xz)}{(xz)^\mu } - \sum _{l=0}^np_{l,\mu }(x)z^{2l} \bigg ). \end{aligned}$$

It is then enough to use (5.4). \(\square \)

Moreover, if for some \(n<0\), \( \sum _{m\ge 1} |\alpha _m| |\zeta _m|^{-2n-\textrm{Re}\mu -5/2} < +\infty \) (with \(\textrm{Re}\mu >\textrm{Re}\nu \)), differentiating \(-n\) times in (5.15) for \(n=0\) and taking into account the identity (2.1), we have, for \(x\in (0,\omega )\),

$$\begin{aligned} \sum _{m\ge 1} \frac{\alpha _m\zeta _m^{-2n}}{(\zeta _m^2-z^2)} \frac{J_{\mu }(\zeta _m x)}{(\zeta _m x)^\mu } = \frac{A(z)}{z^{2n+2}} \frac{J_{\mu }(xz)}{(xz)^\mu }. \end{aligned}$$

Our method will allow us to compute explicitly the corresponding multivariate version of the expansions (5.15). They can well be called Kneser-Sommerfeld type expansions, since for

$$\begin{aligned} \zeta _m = j_{m,\nu },\quad \alpha _m = \frac{1}{J_{\nu +1}(j_{m,\nu })^2} \end{aligned}$$

the corresponding two-variable expansion is the well-known Kneser-Sommerfeld expansion ([13]).

Let us develop a couple of illustrative examples. The first one is the Dini-Young series

$$\begin{aligned} \sum _{m\ge 1} \frac{\lambda _m^{\nu -2n}}{(\lambda _m^2-\nu ^2+H^2)J_\nu (\lambda _m)} \frac{J_{\mu }(\lambda _m x)}{(\lambda _m x)^\mu }, \end{aligned}$$
(5.16)

for \(0<x\le 1\), where \(\nu \) and H are real parameters satisfying \(\nu >-1\) and \(H+\nu >0\), \(\mu \in \hat{\mathbb {C}}\) with \(\nu <2n+1+\textrm{Re}\mu \) and \(\lambda _m\) are the positive zeros (ordered in increasing size) of the function

$$\begin{aligned} zJ'_\nu (z)+HJ_\nu (z). \end{aligned}$$
(5.17)

To sum explicitly (5.16), we define the sequence \((a_n^{H,\nu })_n\) by

$$\begin{aligned} \frac{z^\nu }{2\big ((H-\nu )J_\nu (z)+zJ_{\nu -1}(z)\big )} = \sum _{n=0}^\infty a_n^{H,\nu }z^{2n}. \end{aligned}$$
(5.18)

Using the power series for the Bessel functions, the sequence \((a_n^{H,\nu })_n\) can be recursively defined as follows: \(a_0^{H,\nu } = 2^{\nu -1}\Gamma (\nu +1)/(H+\nu )\), and

$$\begin{aligned} \sum _{j=0}^n \frac{\nu +2(n-j)+H}{(-4)^{n-j}(n-j)!\,(\nu +1)_{n-j}}a_j^{H,\nu }=0,\quad n\ge 1. \end{aligned}$$
(5.19)
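
The recursion (5.19) is easy to implement. The following sketch of ours (Python with the mpmath library; the sample values of \(\nu \) and H are arbitrary admissible choices) generates the coefficients \(a_n^{H,\nu }\) and checks them against the generating function (5.18) at a point inside its disc of convergence:

```python
from mpmath import mp, mpf, besselj, gamma, rf

mp.dps = 30
nu, H = mpf('0.3'), mpf('1.1')            # sample values with nu > -1, nu + H > 0

a = [2**(nu - 1) * gamma(nu + 1) / (H + nu)]          # a_0^{H,nu}
for n in range(1, 26):
    s = sum((nu + 2*(n - j) + H) / ((-4)**(n - j) * gamma(n - j + 1) * rf(nu + 1, n - j)) * a[j]
            for j in range(n))
    a.append(-s / (nu + H))               # the j = n term of (5.19) has coefficient nu + H

z = mpf('0.5')                            # a point inside the disc of convergence of (5.18)
series = sum(a[n] * z**(2*n) for n in range(len(a)))
direct = z**nu / (2 * ((H - nu) * besselj(nu, z) + z * besselj(nu - 1, z)))
print(series - direct)                    # ~ 0: the recursion reproduces the generating function
```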

Define now the one-parameter Bessel–Appell polynomials by the generating function

$$\begin{aligned} \frac{z^\nu }{2\big ((H-\nu )J_\nu (z)+zJ_{\nu -1}(z)\big )} \frac{J_\mu (xz)}{(xz)^\mu } = \sum _{n=0}^\infty p_{n,\mu }^{H,\nu }(x)z^{2n}. \end{aligned}$$
(5.20)

For \(n\ge 0\) and \(0\le j\le n\), define

$$\begin{aligned} b_j^n = 2(-4)^j(n-j+1)_j(\nu +n-j+1)_j(\nu +2(n-j)+H). \end{aligned}$$

An easy computation, using the power series of the Bessel functions, shows that the polynomials \(p_{n,\nu }^{H,\nu }\), \(n\ge 0\), can also be defined recursively by

$$\begin{aligned} p_{0,\nu }^{H,\nu } = \frac{1}{2(\nu +H)},\qquad \sum _{j=0}^nb_j^n p_{j,\nu }^{H,\nu }(x)=x^{2n},\quad n\ge 1. \end{aligned}$$
(5.21)

Consider next the Bessel-Dini series of \(x^{2n}\) in (0, 1), namely

$$\begin{aligned} x^{2n} = \sum _{m\ge 1} \beta _m^n\frac{J_{\nu }(\lambda _m x)}{(\lambda _m x)^\nu }. \end{aligned}$$
(5.22)

The case \(n=0\) was summed by Young [20]; this is why we call the expansion (5.16) a Dini-Young series. If we write

$$\begin{aligned} \xi _m = \frac{\lambda _m^{\nu }}{(\lambda _m^2-\nu ^2+H^2)J_\nu (\lambda _m)}, \end{aligned}$$

we have

$$\begin{aligned} \beta _m^0=2(\nu +H)\xi _m, \end{aligned}$$
(5.23)

as follows from [19, § 18.12, (2), p. 581] and the trivial fact that the zeros \(\lambda _m\) of (5.17) satisfy

$$\begin{aligned} (\lambda _m^2-\nu ^2) J_\nu (\lambda _m)^2+\lambda _m^2 J_\nu '(\lambda _m)^2&= (\lambda _m^2-\nu ^2+H^2) J_\nu (\lambda _m)^2, \\ \frac{\lambda _m J_{\nu +1}(\lambda _m)}{J_{\nu }(\lambda _m)}&= \nu +H. \end{aligned}$$

According to the reduction formula in [19, § 18.12, p. 581], we have the recursion

$$\begin{aligned} \beta _m^n = 2(\nu +2n+H)\xi _m-\frac{4n(\nu +n)}{\lambda _m^2}\beta _m^{n-1},\quad n\ge 1. \end{aligned}$$

This shows that

$$\begin{aligned} \beta _m^n = \xi _m \sum _{j=0}^nb_j^n\frac{1}{\lambda _m^{2j}}. \end{aligned}$$
(5.24)

Define finally the functions

$$\begin{aligned} q_{n}^{H,\nu }(x)= \sum _{m\ge 1}\xi _m\lambda _m^{-2n}\frac{J_{\nu }(\lambda _m x)}{(\lambda _m x)^\nu },\quad x\in (0,1). \end{aligned}$$

The definition of \(\beta _m^n\) (5.22) and the identity (5.24) show that

$$\begin{aligned} \sum _{j=0}^n b_j^n q_{j}^{H,\nu }(x) = x^{2n},\quad n\ge 1. \end{aligned}$$
(5.25)

On the one hand, (5.21), (5.22), and (5.23) for \(n=0\) imply that \(q_{0}^{H,\nu }=p_{0,\nu }^{H,\nu }\). On the other hand, the recursions (5.21) and (5.25) show that \(q_{n}^{H,\nu }=p_{n,\nu }^{H,\nu }\), \(n\ge 1\).

Hence setting \(\zeta _m=\lambda _m\) and \(\alpha _m=\xi _m\), we can apply Remark 5.1 to get, for \(\textrm{Re}\mu >\nu \) and \(0<x\le 1\),

$$\begin{aligned} \sum _{m\ge 1} \frac{\lambda _m^{\nu -2n}}{(\lambda _m^2-\nu ^2+H^2)J_\nu (\lambda _m)} \frac{J_{\mu }(\lambda _m x)}{(\lambda _m x)^\mu } = \sum _{j=0}^n\frac{a_{n-j}^{H,\nu }\left( -x^2/4\right) ^{j}}{2^{\mu }j!\,\Gamma (\mu +j+1)}. \end{aligned}$$
(5.26)

The identity (5.26) also holds for \(\nu <2n+1+\textrm{Re}\mu \), because then both sides of the identity are analytic functions of \(\mu \). The identity is also valid for \(x=0\) assuming that \(\nu <2n+1/2\).
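
The identity (5.26) can also be tested numerically. The sketch below is ours (Python with mpmath; the values of \(\nu \), H, \(\mu \), n and x are arbitrary admissible choices): it computes the zeros \(\lambda _m\) by bisection between consecutive zeros of \(J_\nu \) (using the interlacing property of Sect. 2), generates the coefficients \(a_n^{H,\nu }\) from (5.19), and compares a truncation of the left-hand side of (5.26) with the polynomial on the right.

```python
from mpmath import mp, mpf, besselj, besseljzero, gamma, rf

mp.dps = 15
nu, H, mu, n, x = mpf('0.3'), mpf('1.1'), mpf('0.8'), 2, mpf('0.6')   # sample values

def jnorm(b, z):
    return besselj(b, z) / z**b

# lambda_m: zeros of (nu + H) J_nu(z) - z J_{nu+1}(z), bracketed by consecutive zeros of J_nu
def f(z):
    return (nu + H) * besselj(nu, z) - z * besselj(nu + 1, z)

def bisect(a, b, steps=60):
    fa = f(a)
    for _ in range(steps):
        c = (a + b) / 2
        fc = f(c)
        if fa * fc <= 0:
            b = c
        else:
            a, fa = c, fc
    return (a + b) / 2

M = 40
jz = [mpf('1e-8')] + [besseljzero(nu, m) for m in range(1, M + 1)]
lam = [bisect(jz[m - 1], jz[m]) for m in range(1, M + 1)]

# coefficients a_n^{H,nu} from the recursion (5.19)
a = [2**(nu - 1) * gamma(nu + 1) / (H + nu)]
for m in range(1, n + 1):
    s = sum((nu + 2*(m - j) + H) / ((-4)**(m - j) * gamma(m - j + 1) * rf(nu + 1, m - j)) * a[j]
            for j in range(m))
    a.append(-s / (nu + H))

lhs = sum(t**(nu - 2*n) / ((t**2 - nu**2 + H**2) * besselj(nu, t)) * jnorm(mu, t*x) for t in lam)
rhs = sum(a[n - j] * (-x**2/4)**j / (2**mu * gamma(j + 1) * gamma(mu + j + 1)) for j in range(n + 1))
print(lhs - rhs)    # small, of the order of the truncation error of the series
```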

Moreover, for \(0<-2n<-\nu +\textrm{Re}\mu +1\),

$$\begin{aligned} \sum _{m\ge 1} \frac{\lambda _m^{\nu -2n}}{(\lambda _m^2-\nu ^2+H^2)J_\nu (\lambda _m)} \frac{J_{\mu }(\lambda _m x)}{(\lambda _m x)^\mu }=0, \quad x\in (0,1). \end{aligned}$$

In the next section, we will consider the expansions provided by Proposition 5.2.

The second illustrative example is the family of Bessel expansions

$$\begin{aligned} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -1-2n}}{J_{\nu +1}(j_{m,\nu })} \frac{J_{\mu }(j_{m,\nu }x)}{(j_{m,\nu }x)^{\mu }}, \end{aligned}$$
(5.27)

for \(0<x<1\) and \(\textrm{Re}\nu <\textrm{Re}\mu +2n\). The series (5.27) was explicitly summed in [7, Sect. 5] using the theory of residues. For the sake of completeness, we compute the sum here using our method.

The starting point is the sequence \((a_n^{\nu })_n\) defined by

$$\begin{aligned} \frac{z^\nu }{2J_{\nu }(z)} = \sum _{n=0}^\infty a_n^{\nu }z^{2n}. \end{aligned}$$
(5.28)

This is the case \(H=\nu \) of the previous example with \(\nu +1\) instead of \(\nu \), but since we now consider complex parameters \(\nu \), we work it out from scratch.

Using the power series for the Bessel functions, the sequence \((a_n^{\nu })_n\) can be recursively defined as follows: \(a_0^{\nu }=2^{\nu -1}\Gamma (\nu +1)\), and

$$\begin{aligned} \sum _{j=0}^n \frac{4\nu +4(n-j+1)}{(-4)^{n-j}(n-j)!\,(\nu +2)_{n-j}}a_j^{\nu }=0, \quad n\ge 1. \end{aligned}$$
(5.29)

Define now the one-parameter Bessel–Appell polynomials by the generating function

$$\begin{aligned} \frac{z^\nu }{2J_{\nu }(z)} \frac{J_\mu (xz)}{(xz)^\mu } = \sum _{n=0}^\infty p_{n,\mu }^{\nu }(x)z^{2n}. \end{aligned}$$
(5.30)

For \(\mu =\nu \) and \(\mu =\nu -1\) they are the even Euler-Dunkl and Bernoulli-Dunkl polynomials we introduced in [6] and [3], respectively (up to renormalization). Using [6, Theorem 3.1], we have

$$\begin{aligned} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -1-2n}}{J_{\nu +1}(j_{m,\nu })} \frac{J_{\nu }(j_{m,\nu }x)}{(j_{m,\nu }x)^{\nu }} = p_{n,\nu }^{\nu }(x), \end{aligned}$$
(5.31)

with uniform convergence on compact subsets of \((-1,1){\setminus }\{0\}\) for \(n=0\) and \([-1,1]{\setminus } \{0\}\) for \(n\ge 1\). The convergence extends to \(x=0\) provided that \(\textrm{Re}\nu <n+1/2\).

We next prove that for \(\textrm{Re}\mu +2n>\textrm{Re}\nu \) and \(x\in (0,1)\),

$$\begin{aligned} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -1-2n}}{J_{\nu +1}(j_{m,\nu })} \frac{J_{\mu }(j_{m,\nu }x)}{(j_{m,\nu }x)^{\mu }} = p_{n,\mu }^{\nu }(x) = \sum _{j=0}^n\frac{a^\nu _{n-j}(-x^{2}/4)^j}{2^{\mu }j!\,\Gamma (\mu +j+1)}. \end{aligned}$$
(5.32)

It is interesting to note that the polynomials \(p_{n,\mu }^{\nu }(x)\) in this formula are Bessel–Appell polynomials as defined in (5.1), with \(A(z) = \frac{z^\nu }{2J_{\nu }(z)}\), see (5.30). In particular, they satisfy the general properties (5.2) and (5.5).

Let us start by taking \(\zeta _m=j_{m,\nu }\) and \(\alpha _m= \frac{j_{m,\nu }^{\nu -1}}{J_{\nu +1}(j_{m,\nu })}\). Although the second assumption in (5.8) fails, we can still use Remark 5.1 due to the uniform convergence on \((-1,1)\setminus \{0\}\) of (5.31) for \(n=0\).

To extend the identity (5.32) to \(\textrm{Re}\mu +2n>\textrm{Re}\nu \), we proceed as follows. For \(\textrm{Re}\nu <-1\), \(\nu \not =-3/2, -5/2,\ldots \), consider a positive integer h such that \(\textrm{Re}\nu >-h/2-1\), and take \(\mu \) with \(\textrm{Re}\mu >\textrm{Re}\nu +h\). Using the integral transform \(T_{\mu ,\nu ,h}\) (2.5), together with integration by parts and (5.11), we get from (5.31)

$$\begin{aligned} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -1-2n}}{J_{\nu +1}(j_{m,\nu })} \frac{J_{\mu }(j_{m,\nu }x)}{(j_{m,\nu }x)^{\mu }}= & {} T_{\mu ,\nu ,h}(p_{n,\nu }^{\nu })(x) \nonumber \\= & {} T_{\mu ,\nu }(p_{n,\nu }^{\nu })(x) = p_{n,\mu }^{\nu }(x), \quad x\in (0,1). \end{aligned}$$
(5.33)

For \(\nu =-3/2, -5/2,\ldots \), (5.33) follows by continuity. It is now enough to take into account that for fixed \(\nu \) and assuming \(\textrm{Re}\mu +2n\ge \textrm{Re}\nu \) both sides of (5.32) are analytic functions of \(\mu \).

For \(n<0\), \(\textrm{Re}\mu +2n>\textrm{Re}\nu \) and \(x\in (0,1)\), we have, differentiating the case \(n=0\) in (5.32),

$$\begin{aligned} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -1-2n}}{J_{\nu +1}(j_{m,\nu })} \frac{J_{\mu }(j_{m,\nu }x)}{(j_{m,\nu }x)^{\mu }}= 0. \end{aligned}$$

5.2 Getting multivariate Bessel expansions of polynomials

We next use Lemma 4.3 to sum the multivariate series (1.6) and (1.3) in explicit form.

For the Dini-Young expansion (1.6), we assume \(\nu \), H to be real parameters with \(\nu +H>0\) and \(\nu >-1\), and \(\mu _i\in \hat{\mathbb {C}}\), \(i=1,\ldots , k\). We next prove that for \(\nu <2n+(k+1)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots , x_k)\in \Omega _{[1]}^*\) (see (1.10))

$$\begin{aligned}&\sum _{m\ge 1}\frac{\lambda _m^{\nu -2n}}{(\lambda _m^2-\nu ^2+H^2)J_\nu (\lambda _m)} \prod _{i=1}^k \frac{J_{\mu _i}(\lambda _m x_i)}{(\lambda _m x_i)^{\mu _i}} \nonumber \\&\quad = \sum _{l=0}^n a_{n-l}^{H,\nu } \sum _{l_1+\cdots +l_k=l} \prod _{i=1}^k \frac{\left( -x_i^2/4\right) ^{l_i}}{2^{\mu _i}l_i!\, \Gamma (\mu _i + l_i+1)}, \end{aligned}$$
(5.34)

where \((a_n^{H,\nu })_n\) is the sequence defined by (5.18) (or (5.19)).

We proceed in two steps.

5.3 First step

The identity (5.34) holds for \(\nu <2n+1/2\) and \(\nu <2n+(k+1)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\). This is a direct consequence of Lemma 4.3 (after some easy computations).

5.4 Second step

The identity (5.34) holds for \(\nu <2n+(k+1)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\).

For fixed \(\nu \), notice that the series on the left-hand side of (5.34) converges uniformly in \(\Omega ^*_{[1]}\) for each n such that \(\nu <2n+(k+1)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\). Fix then n such that \(\nu <2n+(k+1)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\), and take a positive integer \(n_\nu \ge n\) such that \(\nu <2n_\nu +1/2\). Since we also have \(\nu <2n_\nu +(k+1)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\), the first step shows that (5.34) holds for \(n_\nu \) instead of n. Fix j, \(1\le j\le k\), and write \(H_{n,\mu _j}(x_j)\), \(\mathcal {H}_{n,\mu _j}(x_j)\) for the functions on the left- and right-hand sides of (5.34), respectively (there is no need to include in the notation either the parameters \(\nu \), \(\mu _i\), \(i\not =j\), or the variables \(x_i\), \(i\not =j\)). We have that \(H_{n_\nu ,\mu _j}(x_j)=\mathcal {H}_{n_\nu ,\mu _j}(x_j)\). Take now \(\mu _i\) real and large enough so as to satisfy \(\nu <2n+(k+1)/2+ \sum _{i=1}^k\mu _i\) and to allow the following computations. First of all, we prove that

$$\begin{aligned} \frac{\partial }{\partial x_j}H_{n_\nu ,\mu _j}(x_j) = -x_j H_{n_\nu -1,\mu _j+1}(x_j), \quad \frac{\partial }{\partial x_j}\mathcal {H}_{n_\nu ,\mu _j}(x_j) = -x_j\mathcal {H}_{n_\nu -1,\mu _j+1}(x_j).\nonumber \\ \end{aligned}$$
(5.35)

Indeed, the first identity above is straightforward from (2.1). With respect to the second identity, by differentiation it follows that

$$\begin{aligned} \frac{\partial }{\partial x_j}\mathcal {H}_{n_\nu ,\mu _j}(x_j)&= \sum _{l=1}^{n_\nu }a_{n_\nu -l}^{H,\nu } \sum _{l_1+\cdots +l_k=l} \frac{2l_j}{x_j} \prod _{i=1}^k \frac{\left( -x_i^2/4\right) ^{l_i }}{2^{\mu _i}l_i!\, \Gamma (\mu _i + l_i+1)}\nonumber \\&= \sum _{l=0}^{n_\nu -1}a_{n_\nu -1-l}^{H,\nu } \sum _{l_1+\cdots +l_k=l+1} \frac{2l_j}{x_j} \prod _{i=1}^k \frac{\left( -x_i^2/4\right) ^{l_i }}{2^{\mu _i}l_i!\, \Gamma (\mu _i + l_i+1)}. \end{aligned}$$
(5.36)

Since the summand on the right-hand side of (5.36) vanishes for \(l_j=0\), we get (after simplification)

$$\begin{aligned} \frac{\partial }{\partial x_j} \mathcal {H}_{n_\nu ,\mu _j}(x_j) = -x_j\mathcal {H}_{n_\nu -1,\mu _j+1}(x_j). \end{aligned}$$

This means, using (5.35) and \(H_{n_\nu ,\mu _j}(x_j)=\mathcal {H}_{n_\nu ,\mu _j}(x_j)\), that

$$\begin{aligned} H_{n_\nu -1,\mu _j+1}(x_j) = \mathcal {H}_{n_\nu -1,\mu _j+1}(x_j). \end{aligned}$$

Iterating, we get

$$\begin{aligned} H_{n,\mu _j+n_\nu -n}(x_j) = \mathcal {H}_{n,\mu _j+n_\nu -n}(x_j). \end{aligned}$$

This proves the identity (5.34) for \(\mu _i\), \(i=1,\ldots ,k\), real and large enough. Since, for \(\mu _i\), \(i=1,\ldots ,k\), satisfying \(\nu <2n+(k+1)/2 + \sum _{i=1}^k\mu _i\), the left- and right-hand sides of (5.34) are analytic functions of each \(\mu _i\), we deduce that (5.34) actually holds in \(\Omega ^*_{[1]}\) under the assumption \(\nu <2n+(k+1)/2 + \sum _{i=1}^k\mu _i\).
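The differentiation identity for \(\mathcal {H}_{n,\mu _j}\) in (5.35) can also be checked symbolically. The following sketch (Python with sympy, for \(k=2\); the symbols a0, a1, … stand for the coefficients \(a_m^{H,\nu }\) and are kept generic, since their actual values play no role in the recursion) verifies that the right-hand side of (5.34) satisfies \(\partial \mathcal {H}_{n,\mu _j}/\partial x_j=-x_j\mathcal {H}_{n-1,\mu _j+1}\):

```python
import sympy as sp

# symbolic check of the second identity in (5.35), for k = 2; the symbols
# a0, a1, ... stand for the coefficients a_m^{H,nu} and are kept generic
x1, x2, mu1, mu2 = sp.symbols('x1 x2 mu1 mu2', positive=True)
a = sp.symbols('a0:6')

def Hcal(n, m1, m2):
    # right-hand side of (5.34) for k = 2 with generic coefficients a_m
    expr = 0
    for l in range(n + 1):
        inner = 0
        for l1 in range(l + 1):
            l2 = l - l1
            inner += ((-x1**2/4)**l1 / (2**m1 * sp.factorial(l1) * sp.gamma(m1 + l1 + 1))
                      * (-x2**2/4)**l2 / (2**m2 * sp.factorial(l2) * sp.gamma(m2 + l2 + 1)))
        expr += a[n - l] * inner
    return expr

n = 3
diff_lhs = sp.diff(Hcal(n, mu1, mu2), x1)
diff_rhs = -x1 * Hcal(n - 1, mu1 + 1, mu2)
print(sp.simplify(diff_lhs - diff_rhs))   # prints 0
```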

For \(n<0\), \(\nu <2n+(k+1)/2+ \sum _{i=1}^k\mu _i\) and \((x_1,\ldots , x_k)\in \Omega ^*_{[1]}\),

$$\begin{aligned} \sum _{m\ge 1} \frac{\lambda _m^{\nu -2n}}{(\lambda _m^2-\nu ^2+H^2)J_\nu (\lambda _m)} \prod _{i=1}^k \frac{J_{\mu _i}(\lambda _m x_i)}{(\lambda _m x_i)^{\mu _i}} = 0. \end{aligned}$$
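These identities can be explored numerically. The following sketch (Python with numpy/scipy; the values of \(\nu \), H, n, \(\mu _i\) and \(x_i\) are purely illustrative) computes the first zeros \(\lambda _m\) of \(zJ'_\nu (z)+HJ_\nu (z)\) by bracketing sign changes and checks that the partial sums of the series above, for \(n=-1\) and \(k=2\), are indeed close to zero:

```python
import numpy as np
from scipy.special import jv, jvp
from scipy.optimize import brentq

# illustrative parameters (nu > -1, nu + H > 0); n < 0 as in the identity above
nu, H = 0.5, 1.0
n = -1
mus = [2.0, 2.0]          # mu_1, mu_2
xs  = [0.3, 0.4]          # a point of Omega*_{[1]}

def dini(z):
    # z J'_nu(z) + H J_nu(z), whose positive zeros are the lambda_m
    return z * jvp(nu, z) + H * jv(nu, z)

# locate the first ~1000 zeros lambda_m by bracketing sign changes on a fine grid
grid = np.linspace(0.05, 3200.0, 320000)
vals = dini(grid)
idx = np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
lam = np.array([brentq(dini, grid[i], grid[i + 1]) for i in idx])

terms = lam**(nu - 2*n) / ((lam**2 - nu**2 + H**2) * jv(nu, lam))
for mu, x in zip(mus, xs):
    terms = terms * jv(mu, lam * x) / (lam * x)**mu

print(terms.sum())        # should be close to 0, up to truncation of the series
```

The same bracketing strategy can be reused for the expansion (1.7), since the \(\lambda _m\) appearing there are the same zeros.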

Proceeding in the same way, we can explicitly sum the Bessel expansion (1.3). First of all, we complete the notation (1.9) and (1.10) with the following: for \(\omega >0\),

$$\begin{aligned} \Omega _{(\omega )}&= \Big \{(x_1,\ldots ,x_k)\in \mathbb {R}^k: \sum _{i=1}^k |x_i| < \omega \Big \},\nonumber \\ \Omega ^*_{(\omega )}&= \Big \{(x_1,\ldots ,x_k)\in \Omega _{(\omega )}: \prod _{i=1}^kx_i\not =0 \Big \}. \end{aligned}$$
(5.37)

Then we get, for \(\textrm{Re}\nu <2n+(k-1)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\),

$$\begin{aligned} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -2n-1}}{J_{\nu +1}(j_{m,\nu })} \prod _{i=1}^k \frac{J_{\mu _i}(j_{m,\nu } x_i)}{(j_{m,\nu } x_i)^{\mu _i}} = \sum _{l=0}^n a_{n-l}^{\nu } \sum _{l_1+\cdots +l_k=l} \prod _{i=1}^k\frac{\left( -x_i^2/4\right) ^{l_i}}{2^{\mu _i}l_i!\, \Gamma (\mu _i + l_i+1)},\nonumber \\ \end{aligned}$$
(5.38)

where \((a_n^{\nu })_n\) is the sequence defined by (5.28) (or (5.29)), and \((x_1,\ldots , x_k)\in \Omega ^*_{[1]}\) for \(n\ge 1\), or \((x_1,\ldots , x_k)\in \Omega ^*_{(1)}\) for \(n=0\).

For \(n<0\), \(\textrm{Re}\nu <2n+(k-1)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots , x_k)\in \Omega ^*_{(1)}\), we have

$$\begin{aligned} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -2n-1}}{J_{\nu +1}(j_{m,\nu })} \prod _{i=1}^k \frac{J_{\mu _i}(j_{m,\nu } x_i)}{(j_{m,\nu } x_i)^{\mu _i}} = 0. \end{aligned}$$
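An analogous numerical check can be run for this last identity, now with scipy's built-in zeros \(j_{m,\nu }\) of \(J_\nu \) for integer \(\nu \) (again, all parameter values below are illustrative):

```python
import numpy as np
from scipy.special import jv, jn_zeros

# illustrative parameters: integer nu so that jn_zeros applies; n < 0
nu, n = 0, -1
mus = [2.0, 2.0]
xs  = [0.3, 0.4]            # |x_1| + |x_2| < 1 and x_1 x_2 != 0

jmn = jn_zeros(nu, 4000)    # first positive zeros j_{m,nu} of J_nu

terms = jmn**(nu - 2*n - 1) / jv(nu + 1, jmn)
for mu, x in zip(mus, xs):
    terms = terms * jv(mu, jmn * x) / (jmn * x)**mu

print(terms.sum())          # should be close to 0, up to truncation of the series
```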

6 Bessel expansions of non-polynomial functions

In this section we give some further examples of Bessel expansions whose sums are not polynomials.

We start with the expansions obtained by applying Proposition 5.2 to the Bessel expansions (5.32) and (5.26).

6.1 Kneser-Sommerfeld type expansions

Let us prove the following identity for the multivariate expansion (1.4):

$$\begin{aligned}&\sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -1-2n}}{(j_{m,\nu }^{2}-z^2)J_{\nu +1}(j_{m,\nu })} \prod _{i=1}^k \frac{J_{\mu _i}(j_{m,\nu }x_i)}{(j_{m,\nu }x_i)^{\mu _i}} = \frac{1}{z^{2n+2}} \Bigg ( \frac{z^{\nu }}{2J_\nu (z)}\prod _{i=1}^k\frac{J_{\mu _i}(x_i z)}{(x_i z)^{\mu _i}} \nonumber \\&\qquad - \sum _{l=0}^nz^{2l} \sum _{j=0}^l a_{l-j}^\nu \sum _{l_1+\cdots +l_k=j} \prod _{i=1}^k\frac{(-x_i^2/4)^{l_i}}{2^{\mu _i }l_i!\, \Gamma (\mu _i + l_i+1)} \Bigg ), \end{aligned}$$
(6.1)

for \(\textrm{Re}\nu <2n+(k+3)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots , x_k)\in \Omega _{[1]}^*\) (see (1.10)), where the sequence \((a_n^{\nu })_n\) is defined by (5.28) (or (5.29)).

We proceed in two steps.

6.2 First step

The case \(k=1\).

Applying Proposition 5.2 to the expansion (5.32), we get

$$\begin{aligned}&\sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -1-2n}}{(j_{m,\nu }^{2}-z^2)J_{\nu +1}(j_{m,\nu })} \frac{J_{\mu }(j_{m,\nu }x)}{(j_{m,\nu }x)^{\mu }} \nonumber \\&\quad = \frac{1}{z^{2n+2}} \bigg ( \frac{z^\nu }{2J_\nu (z)} \frac{J_{\mu }(xz)}{(xz)^\mu } - \sum _{l=0}^nz^{2l} \sum _{j=0}^l \frac{a_{l-j}^\nu (-x^2/4)^{j}}{2^{\mu }j!\, \Gamma (\mu +j+1)} \bigg ), \end{aligned}$$
(6.2)

with uniform convergence on compact subsets of (0, 1] for \(\textrm{Re}\nu <2n+2+\textrm{Re}\mu \), and on [0, 1] for \(\textrm{Re}\nu <2n+3/2\), where the sequence \((a_n^\nu )_n\) is defined by (5.28) (or (5.29)).

6.3 Second step

The case \(k\ge 1\).

Write

$$\begin{aligned} f_{n,\nu }(x,z)=\sqrt{\frac{2}{\pi }} \sum _{l=0}^nz^{2l} \sum _{j=0}^l \frac{(-1)^ja_{l-j}^\nu x^{2j}}{(2j)!}. \end{aligned}$$
(6.3)

Consider the case \(\mu =-1/2\) in (6.2). Using the identity (4.6), we get, for \(\textrm{Re}\nu < 2n+3/2\) and \(x\in [-1,1]\),

$$\begin{aligned} \sqrt{\frac{2}{\pi }} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -2n-1} \cos (j_{m,\nu } x)}{(j_{m,\nu }^2-z^2)J_{\nu +1}(j_{m,\nu })} = \frac{1}{z^{2n+2}} \bigg ( \frac{z^{\nu }\sqrt{2/\pi }\cos (zx)}{2J_\nu (z)}-f_{n,\nu }(x,z) \bigg ). \end{aligned}$$

The trigonometric identity

$$\begin{aligned} \sum _{\varepsilon \in \pi _k} \cos \Big (z \sum _{i=1}^k\varepsilon _ix_i\Big ) = 2^k\prod _{i=1}^k\cos (zx_i), \end{aligned}$$

together with Theorem 3.1 gives

$$\begin{aligned}&\sqrt{\frac{2}{\pi }} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -2n-1}}{(j_{m,\nu }^2-z^2)J_{\nu +1}(j_{m,\nu })} \prod _{i=1}^k\cos (j_{m,\nu } x_i) \\&\quad = \frac{1}{z^{2n+2}} \Bigg ( \frac{z^{\nu }\sqrt{2/\pi }}{2J_\nu (z)} \prod _{i=1}^k\cos (zx_i) - \frac{1}{2^k} \sum _{\varepsilon \in \pi _k} f_{n,\nu } \Big ( \sum _{i=1}^k\varepsilon _ix_i,z\Big ) \Bigg ). \end{aligned}$$

Using the identities (4.6), (6.3) and (3.1), this can be rewritten in the form

$$\begin{aligned}&\sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -2n-1}}{(j_{m,\nu }^2-z^2)J_{\nu +1}(j_{m,\nu })} \prod _{i=1}^k \frac{J_{-1/2}(j_{m,\nu } x_i)}{(j_{m,\nu } x_i)^{-1/2}} = \frac{1}{z^{2n+2}} \Bigg (\frac{z^{\nu }}{2J_\nu (z)} \prod _{i=1}^k \frac{J_{-1/2}(zx_i)}{(zx_i)^{-1/2}} \nonumber \\&\qquad -\left( \frac{2}{\pi }\right) ^{k/2} \sum _{l=0}^nz^{2l} \sum _{j=0}^l \frac{(-1)^ja_{l-j}^\nu x^{2j}}{(2j)!} \sum _{l_1+\cdots +l_k=j} \left( {\begin{array}{c}2j\\ 2l_1,\ldots ,2l_k\end{array}}\right) \prod _{i=1}^kx_i^{2l_i} \Bigg ), \end{aligned}$$
(6.4)

where \((x_1,\ldots ,x_k)\in \Omega ^*_{[1]}\).
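Incidentally, the trigonometric identity used above is easy to test numerically. A minimal sketch (Python/numpy) follows, under the assumption, suggested by the factor \(2^k\), that \(\pi _k\) denotes the set of all sign vectors \((\varepsilon _1,\ldots ,\varepsilon _k)\in \{-1,1\}^k\):

```python
import numpy as np
from itertools import product

# illustrative values
k, z = 3, 1.7
x = np.array([0.4, -0.9, 2.3])

lhs = sum(np.cos(z * np.dot(eps, x)) for eps in product([1, -1], repeat=k))
rhs = 2**k * np.prod(np.cos(z * x))
print(lhs, rhs)   # the two values should agree
```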

Assuming that \(\textrm{Re}\mu _i\ge -1/2\) and using the integral transform \(T_{\mu _i,-1/2}\) (2.2) acting on the variable \(x_i\), we get from (6.4) the identity (6.1).

To extend the formula (6.1) from \(\textrm{Re}\nu < 2n+3/2\), \(\textrm{Re}\mu _i\ge -1/2\) to \(\textrm{Re}\nu <2n+(k+3)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots ,x_k)\in \Omega ^*_{[1]}\), we can proceed as in the second step in Sect. 5.2.

For \(n<0\), differentiating the case \(n=0\) of (6.1) \(-n\) times, we have

$$\begin{aligned} \sum _{m\ge 1} \frac{j_{m,\nu }^{\nu -1-2n}}{(j_{m,\nu }^{2}-z^2)J_{\nu +1}(j_{m,\nu })} \prod _{i=1}^k \frac{J_{\mu _i}(j_{m,\nu }x_i)}{(j_{m,\nu }x_i)^{\mu _i}} = \frac{1}{z^{2n+2}}\frac{z^{\nu +k-2n-2}}{2J_\nu (z)} \prod _{i=1}^k \frac{J_{\mu _i}(x_i z)}{(x_i z)^{\mu _i}} \end{aligned}$$

for \(\textrm{Re}\nu <2n+(k+3)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots , x_k)\in \Omega _{[1]}^*\).

In the same way, one can prove that

$$\begin{aligned}&\sum _{m\ge 1} \frac{\lambda _m^{\nu -2n}}{(\lambda _m^2-z^2)(\lambda _m^2-\nu ^2+H^2)J_\nu (\lambda _m)} \prod _{i=1}^k \frac{J_{\mu _i}(\lambda _m x_i)}{(\lambda _m x_i)^{\mu _i}} \\&\quad = \frac{1}{z^{2n+2}} \Bigg ( \frac{z^{\nu }}{2\big ((H-\nu )J_\nu (z)+zJ_{\nu -1}(z)\big )} \prod _{i=1}^k \frac{J_{\mu _i}(x_i z)}{(x_i z)^{\mu _i}} \\&\qquad - \sum _{l=0}^nz^{2l} \sum _{j=0}^l a_{l-j}^{H,\nu } \sum _{l_1+\cdots +l_k=j} \prod _{i=1}^k\frac{(-x_i^2/4)^{l_i}}{2^{\mu _i}l_i!\, \Gamma (\mu _i + l_i+1)} \Bigg ), \end{aligned}$$

for \(\textrm{Re}\nu <2n+(k+5)/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots , x_k)\in \Omega ^*_{[1]}\), where the sequence \((a_n^{H,\nu })_n\) is defined by (5.18) (or (5.19)).

6.4 Two more examples

In this section, we sum the Bessel expansion (1.5) and another related expansion.

Let \(\varphi \) be the analytic function in \(\mathbb {C}{\setminus }\{m^2: m\in \mathbb {N}{\setminus }\{0\}\}\) defined by

$$\begin{aligned} \varphi (z) = \frac{1}{z}-\frac{\pi }{\sqrt{z}\sin (\pi \sqrt{z})} = 2 \sum _{m\ge 1}\frac{(-1)^m}{m^2-z}. \end{aligned}$$
(6.5)
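The two expressions for \(\varphi \) in (6.5) can be compared numerically. A minimal sketch (Python/numpy), evaluated at a negative argument \(z=-\theta ^2\) (the only kind of argument needed below, for which \(\sqrt{z}\sin (\pi \sqrt{z})=-\sqrt{-z}\sinh (\pi \sqrt{-z})\)), is:

```python
import numpy as np

theta = 2.0
z = -theta**2              # illustrative negative argument

# closed form: 1/z - pi/(sqrt(z) sin(pi sqrt(z))), rewritten for z < 0
t = np.sqrt(-z)
closed = 1.0/z + np.pi / (t * np.sinh(np.pi * t))

# truncated series 2 sum_{m>=1} (-1)^m / (m^2 - z)
m = np.arange(1, 200001)
series = 2.0 * np.sum((-1.0)**m / (m*m - z))

print(closed, series)      # the two values should agree
```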

Define now the sequence

$$\begin{aligned} a_{n}^{\theta } = 1+\frac{\theta ^{2n+2}\varphi ^{(n)}(-\theta ^2)}{n!}, \quad n\ge 0. \end{aligned}$$

We next prove that the multivariate Bessel series (1.5) is equal to

$$\begin{aligned}&\sum _{m\ge 1} \frac{(-1)^m}{(1+m^2/\theta ^2)^{n}} \prod _{i=1}^k \frac{J_{\mu _i}(\sqrt{1+m^2/\theta ^2}x_i)}{(\sqrt{1+m^2/\theta ^2}x_i)^{\mu _i}} \nonumber \\&\quad = \frac{1}{2} \Bigg ({-}\prod _{i=1}^k\frac{J_{\mu _i}(x_i)}{x_i^{\mu _i}} + \sum _{l=0}^{n-1}a_{n-l-1}^{\theta } \sum _{l_1+\cdots +l_k=l} \prod _{i=1}^k\frac{(-x_i^2/4)^{l_i}}{2^{\mu _i} l_i!\,\Gamma (\mu _i +l_i+1)} \Bigg ), \end{aligned}$$
(6.6)

for \(1<2n+k/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots , x_k)\in \Omega _{(\theta \pi )}^*\) (with the notation of (5.37)).

To this end, define the polynomials \(P_n^{\mu ,\theta }(x)\), \(n\ge 0\), by

$$\begin{aligned} P_0^{\mu ,\theta }(x)&= 0,\nonumber \\ P_n^{\mu ,\theta }(x)&= \sum _{j=0}^{n-1} \frac{a_{n-j-1}^{\theta }(-x^2/4)^{j}}{2^{\mu } j!\,\Gamma (\mu +j+1)}, \quad n\ge 1. \end{aligned}$$
(6.7)

Then, \(P_n^{\mu ,\theta }\) is an even polynomial of degree \(2n-2\). On the one hand, it is easy to check that

$$\begin{aligned} P_n^{\mu ,\theta }(0)&= \frac{1}{2^{\mu } \Gamma (\mu +1)} \left( 1+\frac{\theta ^{2n}\varphi ^{(n-1)}(-\theta ^2)}{(n-1)!}\right) , \quad n\ge 1, \end{aligned}$$
(6.8)
$$\begin{aligned} (P_n^{\mu ,\theta })'(x)&= -xP_{n-1}^{\mu +1,\theta }(x),\nonumber \\ P_n^{\mu ,\theta }(x)&= T_{\mu ,-1/2,h}(P_n^{-1/2,\theta })(x),\quad \mu \ge -1/2+h. \end{aligned}$$
(6.9)

On the other hand, it is plain that the conditions (6.8) and (6.9) uniquely determine the family of polynomials \((P_{n}^{\mu ,\theta })_n\).

A simple computation using (6.5) and (6.7) shows that

$$\begin{aligned} P_n^{\mu ,\theta }(0) = \frac{1}{2^{\mu } \Gamma (\mu +1)} \left( 1+2 \sum _{m\ge 1}\frac{(-1)^m}{(1+m^2/\theta ^2)^n}\right) , \quad n\ge 1. \end{aligned}$$
(6.10)

Let us note that the polynomials \((P_n^{\mu ,\theta })_n\) are actually quasi Bessel–Appell. Indeed, write \(p_n^{\mu ,\theta }(x)=P_{n+1}^{\mu ,\theta }(x)\) so that \(p_n^{\mu ,\theta }\) is an even polynomial of degree 2n. It is then easy to check that the polynomials \((p_n^{\mu ,\theta })_n\) are the Bessel–Appell polynomials defined by (5.1) from the generating function

$$\begin{aligned} A(z) = A(z;\theta ) = \frac{1}{1-z}+\theta ^2\varphi (\theta ^2(z-1)). \end{aligned}$$

The starting point to prove (6.6) is the series (see [16, 5.7.22.3, p. 682])

$$\begin{aligned} \sum _{m\ge 1} (-1)^m\frac{J_{\mu }(\sqrt{1+m^2/\theta ^2}x)}{(\sqrt{1+m^2/\theta ^2}x)^{\mu }} = -\frac{J_\mu (x)}{2x^\mu }, \end{aligned}$$
(6.11)

where \(\textrm{Re}\mu \ge 0\), \(x\in (0,\theta \pi )\) and \(\theta \not =0\). This is the case \(k=1\), \(n=0\) of the series (1.5).

We next prove the case \(k=1\) and \(n\ge 0\):

$$\begin{aligned} \sum _{m\ge 1} \frac{(-1)^m}{(1+m^2/\theta ^2)^{n}} \frac{J_{\mu }(\sqrt{1+m^2/\theta ^2}x)}{(\sqrt{1+m^2/\theta ^2}x)^{\mu }} = \frac{1}{2} \bigg ({-}\frac{J_\mu ( x)}{x^\mu }+P^{\mu ,\theta }_n(x)\bigg ), \end{aligned}$$
(6.12)

with \(x\in [0,\theta \pi )\) for \(1/2< 2n+\textrm{Re}\mu \) (or \(x \in (0,\theta \pi )\) for \(n=0\)). Write

$$\begin{aligned} G_{\mu ,\theta ,n}(x) = \sum _{m\ge 1} \frac{(-1)^m}{(1+m^2/\theta ^2)^{n}} \frac{J_{\mu }(\sqrt{1+m^2/\theta ^2}x)}{(\sqrt{1+m^2/\theta ^2}x)^{\mu }},\quad x\in (0,\theta \pi ), \end{aligned}$$
(6.13)

which is an analytic function of \(\mu \) for \(1/2< \textrm{Re}\mu +2n\). Let us define the functions \(Q_{\mu ,\theta ,n}\) by the identity

$$\begin{aligned} G_{\mu ,\theta ,n}(x) = \frac{1}{2} \bigg ({-}\frac{J_\mu ( x)}{ x^\mu }+Q_{\mu ,\theta ,n}(x)\bigg ). \end{aligned}$$
(6.14)

This definition and (6.11) show that

$$\begin{aligned} Q_{\mu ,\theta ,0}(x)=0. \end{aligned}$$
(6.15)

Consider now \(\mu \) large enough to allow the following computations. Using (2.1), it easily follows that \(G_{\mu ,\theta ,n}'(x)=-xG_{\mu +1,\theta ,n-1}(x)\), which proves that \(Q_{\mu ,\theta ,n}\) satisfies

$$\begin{aligned} Q_{\mu ,\theta ,n}'(x)=-xQ_{\mu +1,\theta ,n-1}(x). \end{aligned}$$
(6.16)

Thus, (6.15) and (6.16) imply that \(Q_{\mu ,\theta ,n}\) are polynomials. The identities (6.13), (6.14) and (6.10) show that

$$\begin{aligned} Q_{\mu ,\theta ,n}(0) = P^{\mu ,\theta }_n(0). \end{aligned}$$
(6.17)

Then, from (6.16) and (6.17) we obtain \(Q_{\mu ,\theta ,n}(x)=P_n^{\mu ,\theta }(x)\). This proves the identity (6.12) for \(\mu \) large enough and, by a standard analytic continuation argument, for \(1/2< 2n+\textrm{Re}\mu \).
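The identity (6.12) also lends itself to a direct numerical check. Differentiating the series in (6.5) term by term gives \(a_j^{\theta } = 1+2\sum _{m\ge 1}(-1)^m/(1+m^2/\theta ^2)^{j+1}\), and the following sketch (Python/scipy; the values of \(\theta \), n, \(\mu \) and x are illustrative) uses this representation to evaluate the right-hand side through (6.7):

```python
import numpy as np
from scipy.special import jv, gamma

theta = 2.0
mm = np.arange(1, 200001)

def a_coeff(j):
    # a_j^theta = 1 + theta^{2j+2} phi^{(j)}(-theta^2)/j!, computed from the
    # term-by-term differentiated series in (6.5)
    return 1.0 + 2.0 * np.sum((-1.0)**mm / (1.0 + mm*mm/theta**2)**(j + 1))

def P(n, mu, x):
    # the polynomial P_n^{mu,theta} of (6.7)
    return sum(a_coeff(n - j - 1) * (-x*x/4)**j
               / (2**mu * gamma(j + 1) * gamma(mu + j + 1)) for j in range(n))

def lhs(n, mu, x, M=20000):
    m = np.arange(1, M + 1)
    s = np.sqrt(1.0 + m*m/theta**2)
    return np.sum((-1.0)**m / (1.0 + m*m/theta**2)**n * jv(mu, s*x) / (s*x)**mu)

n, mu, x = 2, 1.5, 1.2     # illustrative values with 1/2 < 2n + mu and 0 < x < theta*pi
rhs = 0.5 * (-jv(mu, x)/x**mu + P(n, mu, x))
print(lhs(n, mu, x), rhs)  # the two numbers should agree
```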

The multivariate expansion (6.6) can be proved by proceeding as in the second step in Sect. 6.1.

If we assume \(n<0\) and \(1<2n+k/2+ \sum _{i=1}^k\textrm{Re}\mu _i\), then differentiating \(-n\) times the case \(n=0\) of (6.6) proves that

$$\begin{aligned} \sum _{m\ge 1} \frac{(-1)^m}{(1+m^2/\theta ^2)^{n}} \prod _{i=1}^k \frac{J_{\mu _i}(\sqrt{1+m^2/\theta ^2}x_i)}{(\sqrt{1+m^2/\theta ^2}x_i)^{\mu _i}} = -\frac{1}{2}\prod _{i=1}^k\frac{J_{\mu _i}(x_i)}{x_i^{\mu _i}} \end{aligned}$$

for \((x_1,\ldots , x_k)\in \Omega _{(\theta \pi )}^*\).
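A numerical check of this last identity (Python/scipy, with illustrative values of \(\theta \), \(\mu _i\), \(x_i\) and \(n=-1\)) may look as follows:

```python
import numpy as np
from scipy.special import jv

theta = 2.0
n = -1                      # a case with n < 0
mus = [3.0, 3.0]
xs  = [1.0, 2.5]            # |x_1| + |x_2| < theta*pi and x_1 x_2 != 0

m = np.arange(1, 20001)
s = np.sqrt(1.0 + m*m/theta**2)

terms = (-1.0)**m / (1.0 + m*m/theta**2)**n
for mu, x in zip(mus, xs):
    terms = terms * jv(mu, s*x) / (s*x)**mu

rhs = -0.5 * np.prod([jv(mu, x)/x**mu for mu, x in zip(mus, xs)])
print(terms.sum(), rhs)     # the two numbers should agree
```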

The last example in this section is the Bessel expansion

$$\begin{aligned} \sum _{m\ge 1} \frac{(-1)^m}{m^2(1+m^2/\theta ^2)^{n}} \prod _{i=1}^k\frac{J_{\mu _i}(\sqrt{1+m^2/\theta ^2}x_i)}{(\sqrt{1+m^2/\theta ^2}x_i)^{\mu _i}}. \end{aligned}$$

This can be worked out similarly to the previous example, now using the analytic function \({\hat{\varphi }}\) in \(\mathbb {C}{\setminus }\{m^2: m\in \mathbb {N}{\setminus }\{0\}\}\) defined by

$$\begin{aligned} {\hat{\varphi }}(z) = \frac{1}{z} \left( \frac{1}{z}-\frac{\pi }{\sqrt{z}\sin (\pi \sqrt{z})} + \frac{\pi ^2}{6}\right) = 2 \sum _{m\ge 1}\frac{(-1)^m}{m^2(m^2-z)}. \end{aligned}$$

Define next the sequence

$$\begin{aligned} {\hat{a}}_{n}^\theta = \frac{\pi ^2}{6}-\frac{n+1}{\theta ^2} + \frac{\theta ^{2n+2}\hat{\varphi }^{(n)}(-\theta ^2)}{n!}, \quad n\ge 0, \end{aligned}$$

and the polynomials \({\hat{P}}_n^{\mu ,\theta }(x)\), \(n\ge 0\), by

$$\begin{aligned} {\hat{P}}_0^{\mu ,\theta }(x)&= 0,\\ {\hat{P}}_n^{\mu ,\theta }(x)&= \sum _{j=0}^{n-1}\frac{{\hat{a}}_{n-j-1}^\theta (-x^2/4)^{j}}{2^{\mu }j!\,\Gamma (\mu +j+1)}, \quad n\ge 1. \end{aligned}$$

As before, \({\hat{P}}_n^{\mu ,\theta }\) is an even polynomial of degree \(2n-2\) and satisfies

$$\begin{aligned} {\hat{P}}_n^{\mu ,\theta }(0)&= \frac{1}{2^{\mu } \Gamma (\mu +1)}\left( \frac{\pi ^2}{6} - \frac{n}{\theta ^2}+\frac{\theta ^{2n}\hat{\varphi }^{(n-1)}(-\theta ^2)}{(n-1)!}\right) , \quad n\ge 1,\\ ({\hat{P}}_n^{\mu ,\theta })'(x)&= -x{\hat{P}}_{n-1}^{\mu +1,\theta }(x),\\ {\hat{P}}_n^{\mu ,\theta }(x)&= T_{\mu ,-1/2,h}({\hat{P}}_n^{-1/2,\theta })(x),\quad \mu \ge -1/2+h. \end{aligned}$$

Starting from the expansion

$$\begin{aligned} \sum _{m\ge 1} \frac{(-1)^m}{m^2}\frac{J_{\mu }(\sqrt{1+m^2/\theta ^2}x)}{(\sqrt{1+m^2/\theta ^2}x)^{\mu }} = \frac{1}{2} \bigg (\frac{x^2J_{\mu +1}(x)}{2\theta ^2 x^{\mu +1}} -\frac{\pi ^2}{6}\frac{J_\mu (x)}{x^\mu }\bigg ) \end{aligned}$$

(see [16, 5.7.22.4, p. 682]), we can prove as before that

$$\begin{aligned}&\sum _{m\ge 1} \frac{(-1)^m}{m^2(1+m^2/\theta ^2)^{n}} \frac{J_{\mu }(\sqrt{1+m^2/\theta ^2}x)}{(\sqrt{1+m^2/\theta ^2}x)^{\mu }} \\&\quad = \frac{1}{2} \bigg ( \frac{x^2J_{\mu +1}(x)}{2\theta ^2 x^{\mu +1}} - \left( \frac{\pi ^2}{6}-\frac{n}{\theta ^2}\right) \frac{J_\mu (x)}{ x^\mu }+{\hat{P}}_{n}^{\mu ,\theta }(x)\bigg ), \end{aligned}$$

with \(x\in [0,\theta \pi )\) for \(-3/2-2n< \textrm{Re}\mu \) (or \(x \in (0,\theta \pi )\) for \(n=0\)).

Proceeding as in the previous example, and using the identities

$$\begin{aligned} \sum _{\varepsilon \in \pi _k} \left( \sum _{i=1}^k\varepsilon _ix_i\right) \sin \left( \theta \sum _{i=1}^k \varepsilon _ix_i \right)&= 2^k \sum _{i=1}^kx_i\sin (\theta x_i)\prod _{j=1;j\not =i}^k\cos (\theta x_j), \\ T_{\mu ,-1/2}(x\sin (x))&= 2\sqrt{\pi } x^2 \,\frac{J_{\mu +1}(x)}{x^{\mu +1}}, \end{aligned}$$

we arrive at

$$\begin{aligned}&\sum _{m\ge 1} \frac{(-1)^m}{m^2(1+m^2/\theta ^2)^{n}} \prod _{i=1}^k \frac{J_{\mu _i}(\sqrt{1+m^2/\theta ^2}x_i)}{(\sqrt{1+m^2/\theta ^2}x_i)^{\mu _i}}\nonumber \\&\quad = \frac{1}{2} \Bigg (\sum _{i=1}^k \frac{x_i^2J_{\mu _i+1}(x_i)}{2\theta ^2 x_i^{\mu _i+1}} \prod _{j=1;j\not =i}^k\frac{J_{\mu _j}(x_j)}{ x_j^{\mu _j}} - \Bigg (\frac{\pi ^2}{6}-\frac{n}{\theta ^2}\Bigg ) \prod _{i=1}^k\frac{J_{\mu _i}(x_i)}{x_i^{\mu _i}} \nonumber \\&\qquad + \sum _{l=0}^{n-1} {\hat{a}}_{n-l-1}^\theta \sum _{l_1+\cdots +l_k=l} \prod _{i=1}^k\frac{(-x_i^2/4)^{l_i}}{2^{\mu _i}l_i!\,\Gamma (\mu _i + l_i +1)} \Bigg ), \end{aligned}$$
(6.18)

for \(0<2n+1+k/2+ \sum _{i=1}^k\textrm{Re}\mu _i\) and \((x_1,\ldots , x_k)\in \Omega _{(\theta \pi )}^*\) (recall that this set is defined in (5.37)).
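As with (6.6), the identity (6.18) can be tested numerically. Term-by-term differentiation of the series defining \({\hat{\varphi }}\) gives \({\hat{a}}_j^\theta = \pi ^2/6-(j+1)/\theta ^2+2\sum _{m\ge 1}(-1)^m/(m^2(1+m^2/\theta ^2)^{j+1})\), and the following sketch (Python/scipy, with illustrative values for \(k=2\), \(n=1\)) uses this representation:

```python
import numpy as np
from scipy.special import jv, gamma

theta, n = 2.0, 1
mus = [1.0, 2.0]
xs  = [0.8, 1.5]            # |x_1| + |x_2| < theta*pi

mm = np.arange(1, 200001)

def a_hat(j):
    # hat a_j^theta via the term-by-term differentiated series defining hat phi
    return (np.pi**2/6 - (j + 1)/theta**2
            + 2.0 * np.sum((-1.0)**mm / (mm*mm * (1.0 + mm*mm/theta**2)**(j + 1))))

# left-hand side of (6.18), truncated
m = np.arange(1, 20001)
s = np.sqrt(1.0 + m*m/theta**2)
terms = (-1.0)**m / (m*m * (1.0 + m*m/theta**2)**n)
for mu, x in zip(mus, xs):
    terms = terms * jv(mu, s*x) / (s*x)**mu
lhs = terms.sum()

# right-hand side of (6.18)
prod_all = np.prod([jv(mu, x)/x**mu for mu, x in zip(mus, xs)])
cross = 0.0
for i in range(2):
    t = xs[i]**2 * jv(mus[i] + 1, xs[i]) / (2*theta**2 * xs[i]**(mus[i] + 1))
    for j in range(2):
        if j != i:
            t *= jv(mus[j], xs[j]) / xs[j]**mus[j]
    cross += t
poly = sum(a_hat(n - l - 1)
           * sum(np.prod([(-x*x/4)**li / (2**mu*gamma(li + 1)*gamma(mu + li + 1))
                          for mu, x, li in zip(mus, xs, (l1, l - l1))])
                 for l1 in range(l + 1))
           for l in range(n))
rhs = 0.5 * (cross - (np.pi**2/6 - n/theta**2)*prod_all + poly)

print(lhs, rhs)             # the two numbers should agree
```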

If we assume \(n<0\) and \(0<2n+1+k/2+ \sum _{i=1}^k\textrm{Re}\mu _i\), then differentiating \(-n\) times the case \(n=0\) of (6.18), we have, for \((x_1,\ldots , x_k)\in \Omega _{(\theta \pi )}^*\),

$$\begin{aligned}&\sum _{m\ge 1} \frac{(-1)^m}{m^2(1+m^2/\theta ^2)^{n}} \prod _{i=1}^k\frac{J_{\mu _i}(\sqrt{1+m^2/\theta ^2}x_i)}{(\sqrt{1+m^2/\theta ^2}x_i)^{\mu _i}} \\&\quad = \frac{1}{2} \left( \sum _{i=1}^k \frac{x_i^2J_{\mu _i+1}(x_i)}{2\theta ^2 x_i^{\mu _i+1}} \prod _{j=1;j\not =i}^k \frac{J_{\mu _j}(x_j)}{ x_j^{\mu _j}} - \left( \frac{\pi ^2}{6}-\frac{n}{\theta ^2}\right) \prod _{i=1}^k\frac{J_{\mu _i}(x_i)}{x_i^{\mu _i}}\right) . \end{aligned}$$