1 Introduction and Motivation of Our Work

Signal processing is an electrical engineering subfield that focuses on analysing, modifying, and synthesizing signals such as sound, images, and scientific measurements. It is essential for X-ray, MRI and CT imaging, where medical images are analyzed and deciphered by complex data processing techniques. According to the web-page https://en.wikipedia.org/wiki/Window_function, in signal processing a window function (also called a tapering function or apodization function) is often used to restrict an arbitrary function or signal in some way. A window function is usually zero-valued outside of some interval, symmetric around the middle of the interval, usually near a maximum in the middle, and usually tapering away from the middle. Thus, when another function or waveform is "multiplied" by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap, the so-called "view through the window". Tapering, not segmentation, is thus the main purpose of window functions. In typical applications, the window functions used are non-negative, smooth, "bell-shaped" curves. Window functions are often used in spectral analysis, in the design of finite impulse response filters, as well as in beamforming and antenna design. The most popular window functions are the following: the Parzen window, Welch window, sine window, power-of-sine/cosine window, Hann and Hamming windows, Blackman window, Nuttall window, Blackman–Nuttall window, Blackman–Harris window, Rife–Vincent window, Gaussian window, Slepian window, Kaiser window, Dolph–Chebyshev window, ultraspherical window, Bartlett–Hann window, Planck–Bessel window and Lanczos window. For more details on window functions and their applications in signal processing the interested reader is referred to the recent book [55] on this topic and to the references therein. 
The Kaiser window, also known as the Kaiser–Bessel window, was developed by the electrical engineer James Frederick Kaiser at Bell Laboratories. It is a one-parameter family of window functions used in finite impulse response filter design and spectral analysis. The Kaiser window approximates the discrete prolate spheroidal sequence or Slepian window which maximizes the energy concentration in the main lobe, but which is difficult to compute. The Kaiser–Bessel window function is given by [37]

$$\begin{aligned} w_{a,\alpha }(r)=\left\{ \begin{array}{ll}\displaystyle \frac{1}{I_0(\alpha )}I_0\left( \alpha \sqrt{1 -\left( \frac{r}{a}\right) ^2}\right) ,&{}\qquad |r|\le a\\ 0,&{}\qquad |r|>a\end{array}\right. , \end{aligned}$$

where \(I_0\) is the zeroth-order modified Bessel function of the first kind, a is the window duration, and the parameter \(\alpha \) controls the taper of the window and thereby controls the trade-off between the width of the main lobe and the amplitude of the side lobes of the Fourier transform of the window.

The use of spherically symmetric basis functions, also known as blobs, represents an alternative regularization method for low-count emission imaging. Moreover, blobs usually have attractive properties for tomographic image reconstruction, such as rotational symmetry, finite spatial support and being nearly band-limited. Blobs are non-orthogonal functions, which means that adjacent blobs overlap. A specific blob used in tomographic image reconstruction is based on the generalized Kaiser–Bessel window function and is described as follows (see [41, p. 1844])

$$\begin{aligned} w_{a,\alpha ,m}(r) = \left\{ \begin{array}{ll}\displaystyle \frac{1}{I_{m}(\alpha )}\left( \sqrt{1-\left( \frac{r}{a}\right) ^2}\right) ^{m} I_{m}\left( \alpha \sqrt{1-\left( \frac{r}{a}\right) ^2}\right) ,&{}\qquad 0\le r\le a\\ 0,&{}\qquad r>a\end{array}\right. , \end{aligned}$$

where r is the radial distance from the blob center, \(I_m\) denotes the modified Bessel function of the first kind of order \(m>-1\), a is the radius of the blob, and \(\alpha \) is a parameter controlling the blob shape. The shape and smoothness of this generalized Kaiser–Bessel window function are controlled by the parameters (m controls the smoothness of the function and \(\alpha \) determines its shape); it is completely localized in space, it is nearly band-limited, and its projection has a particularly convenient analytical form which can be easily evaluated. Moreover, this function is isotropic (its projections do not depend on the direction), which makes the computation of the imaging operator significantly faster; see [52] for more details. An interesting study of the blob parameters and a comparison of different spherically symmetric basis functions is presented in [46], where the authors identified the optimal parameters (in a certain sense) as \(m=2\) (or higher, since in this case the blob has a continuous first derivative at the radial boundary), \(a=2\) and \(\alpha =10.417\) (the second positive zero of the Bessel function \(J_{\frac{7}{2}}\)). In [46] the optimal parameters are determined with the help of the Fourier transform of the generalized Kaiser–Bessel window function (see also [42]), and in [52] the authors arrive at a similar conclusion based on an analysis of approximation-theoretic properties, determining \(\alpha \) so as to minimize a residual error (since the generalized Kaiser–Bessel window function does not satisfy the partition of unity). For more details and interesting applications of the generalized Kaiser–Bessel window function the interested reader is also referred to the papers [19, 54, 72] and to the references therein.

It is important to mention that the generalized Kaiser–Bessel window function \(w_{a,\alpha ,m}\) is in fact the extension of the truncated version of the Kaiser–Bessel window function \(w_{a,\alpha }\), supported on the interval [0, a] instead of \([-a,a].\) Motivated by the importance of the Kaiser–Bessel window function and of the generalized Kaiser–Bessel window function, in this paper our aim is to consider the symmetric version of the generalized Kaiser–Bessel window function supported on the symmetric interval \([-a,a]\) and to look at it from the point of view of probability theory and of classical analysis. The idea behind this is quite natural: since the symmetric and generalized Kaiser–Bessel window function is a "bell-shaped" curve, we can normalize it to arrive at a probability density function and then study the properties of the resulting distribution. Since the Kaiser–Bessel and the generalized Kaiser–Bessel window functions are frequently used in signal processing, we hope that the Kaiser–Bessel distribution will be of potential interest for electrical engineers and also for the mathematical community. The rest of the paper is organized as follows: in the next section we define the so-called Kaiser–Bessel distribution with the help of the symmetric version of the generalized Kaiser–Bessel window function. We find that its cumulative distribution function can be expressed in terms of hypergeometric type functions. In Sect. 3 we make a detailed analysis of the moments of the Kaiser–Bessel distribution, and conclude that this distribution is sub-Gaussian and is an extension of Wigner's semicircle distribution. We also present a modified version of the method of moments for parameter estimation of the Kaiser–Bessel distribution. The conclusion of Sect. 4 is that the Kaiser–Bessel distribution is a log-concave and geometrically concave distribution, and that the monotonicity and convexity (with respect to the argument and each of the parameters) of its probability density function depend on the behavior of the logarithmic derivative of modified Bessel functions of the first kind. Some known and new Turán type inequalities for modified Bessel functions of the first kind play an important role in this section. In addition, by using the classical rejection method, two algorithms are presented for sampling independent continuous random variables with the Kaiser–Bessel distribution. Section 5 contains the characteristic function and moment generating function of the Kaiser–Bessel distribution, as well as explicit forms of the differential (Shannon) entropy and of the Rényi entropy of the Kaiser–Bessel distribution.

2 Probability Density and Cumulative Distribution Function of the Kaiser–Bessel Probability Distribution

In this section our aim is to introduce a new univariate continuous and symmetric distribution, named in what follows the Kaiser–Bessel distribution, whose probability density function is the normalized form of the symmetric and generalized Kaiser–Bessel window function; the normalizing constant is related to the modified Bessel function of the first kind. We also present three different derivations of the cumulative distribution function by using special cases of the generalized hypergeometric functions.

2.1 Probability Density Function

For the real parameters \(a>0,\) \(\alpha >0\) and \(\nu >-1\) we consider the so-called symmetric and generalized Kaiser–Bessel window function \(f_{a,\alpha ,\nu }:{\mathbb {R}}\rightarrow [0,\infty ),\) defined by

$$\begin{aligned} f_{a,\alpha ,\nu }(x) = \left\{ \begin{array}{ll}\displaystyle \frac{1}{I_{\nu }(\alpha )} \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu } I_{\nu }\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) , &{}\qquad |x|\le a\\ 0, &{}\qquad |x|>a\end{array}\right. . \end{aligned}$$
(2.1)

By using the change of variable \(s=\sqrt{1-\left( \frac{x}{a}\right) ^2}\) we obtain

$$\begin{aligned} \int _{-a}^{a}f_{a,\alpha ,\nu }(x){\textrm{d}}x&= \dfrac{2}{I_\nu (\alpha )}\int _0^a \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^\nu I_\nu \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) {\textrm{d}}x\\&= \frac{2a}{I_{\nu }(\alpha )}\int _0^1\frac{s^{\nu +1}}{\sqrt{1-s^2}}I_{\nu }(\alpha s){\textrm{d}}s = a \sqrt{\dfrac{2 \pi }{\alpha }} \frac{I_{\nu +\frac{1}{2}}(\alpha )}{I_{\nu }(\alpha )}. \end{aligned}$$

It turns out that the function \(\varphi _{a,\alpha ,\nu }:{\mathbb {R}}\rightarrow [0,\infty ),\) defined by

$$\begin{aligned} \varphi _{a,\alpha ,\nu }(x) = \frac{1}{a}\sqrt{\frac{\alpha }{2 \pi }} \frac{I_\nu (\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )} \cdot f_{a,\alpha ,\nu }(x), \end{aligned}$$

that is,

$$\begin{aligned} \varphi _{a,\alpha ,\nu }(x) = \left\{ \begin{array}{ll} \displaystyle \frac{\sqrt{\frac{\alpha }{2 \pi }}}{aI_{\nu +\frac{1}{2}}(\alpha )} \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu } I_{\nu }\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ,&{}\qquad |x| \le a\\ 0,&{}\qquad |x|> a \end{array}\right. , \end{aligned}$$
(2.2)

is a probability density function with symmetric support \([-a,a]\). Consequently, a continuous and symmetric (with respect to the origin) random variable X defined on some standard probability space has the Kaiser–Bessel distribution with parameter space \(\left\{ (a, \alpha , \nu ) \in {\mathbb {R}}_+^2\times (-1, \infty )\right\} \) if it possesses the probability density function (2.2). In the sequel we denote this by \(X\sim \textrm{KB}(a,\alpha ,\nu ).\)
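As an illustrative numerical check (a Python sketch using scipy, not part of the derivation; the parameter values are arbitrary), the normalizing constant above indeed makes (2.2) integrate to one over \([-a,a]\):

```python
import numpy as np
from scipy import integrate, special

def kb_pdf(x, a, alpha, nu):
    """Density (2.2) of the Kaiser-Bessel distribution KB(a, alpha, nu)."""
    x = np.asarray(x, dtype=float)
    g = np.sqrt(np.clip(1.0 - (x / a) ** 2, 0.0, None))  # g vanishes for |x| >= a
    c = np.sqrt(alpha / (2.0 * np.pi)) / (a * special.iv(nu + 0.5, alpha))
    return np.where(np.abs(x) <= a, c * g ** nu * special.iv(nu, alpha * g), 0.0)

a, alpha, nu = 2.0, 10.417, 2.0  # arbitrary test parameters
total, _ = integrate.quad(lambda t: float(kb_pdf(t, a, alpha, nu)), -a, a)
print(total)  # close to 1
```

The same routine also exhibits the even symmetry of the density, since x enters only through \((x/a)^2\).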

The Bessel distribution of type I of McKay has the probability density function (see [47])

$$\begin{aligned} y_{b,c,m}(x)=\frac{\left| 1-c^2\right| ^{m+\frac{1}{2}}x^m}{\sqrt{\pi }2^mb^{m+1}\Gamma \left( m+\frac{1}{2}\right) }e^{-\frac{cx}{b}}I_{m}\left( \frac{x}{b}\right) , \end{aligned}$$

where \(x>0,\) \(b>0,\) \(c>1\) and \(m>-\frac{1}{2}.\) This is a well-known univariate distribution and, together with the Bessel distribution of type II of McKay, which involves the modified Bessel function of the second kind, it appears frequently in the electrical and electronic engineering literature; see for example [21, 34, 49] and [26] for more details and the references therein. The next transformation of the probability density function of the Bessel distribution of type I of McKay

$$\begin{aligned}{} & {} y_{1,c,m}\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) =\frac{\left| 1-c^2\right| ^{m+\frac{1}{2}}\alpha ^m}{\sqrt{\pi }2^m\Gamma \left( m+\frac{1}{2}\right) } \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^m \\{} & {} \quad I_m \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) \cdot e^{-c\alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}} \end{aligned}$$

resembles the probability density function of the Kaiser–Bessel distribution; however, the exponential term makes these distributions completely different from each other. Finally, note that an interesting list of Bessel type (and other special function type) distributions can be found in [48].

2.2 Cumulative Distribution Function

Since the Kaiser–Bessel distribution is a symmetric distribution with respect to the origin, that is, the probability density function \(x\mapsto \varphi _{a,\alpha ,\nu }(x)\) is an even function, the related cumulative distribution function \(F_{a,\alpha ,\nu }(x) = P(X<x)\) can be uniquely presented for each \(x \in {\mathbb {R}}\) in the form

$$\begin{aligned} F_{a,\alpha ,\nu }(x) = \frac{1}{2}+\Phi _{a,\alpha ,\nu }(x), \end{aligned}$$

where the auxiliary function \(\Phi _{a,\alpha ,\nu }:{\mathbb {R}}\rightarrow \left[ -\frac{1}{2},\frac{1}{2}\right] \) is defined by

$$\begin{aligned} \Phi _{a,\alpha ,\nu }(x) = \int _0^x \varphi _{a,\alpha ,\nu }(u){\textrm{d}}u = \frac{\sqrt{\frac{\alpha }{2 \pi }}}{aI_{\nu +\frac{1}{2}}(\alpha )} \int _0^x \left( \sqrt{1-\left( \frac{u}{a}\right) ^2}\right) ^{\nu } I_{\nu }\left( \alpha \sqrt{1-\left( \frac{u}{a}\right) ^2}\right) {\textrm{d}}u.\nonumber \\ \end{aligned}$$
(2.3)

This auxiliary function is an odd function of the argument, that is, \(\Phi _{a,\alpha ,\nu }(-x) = -\Phi _{a,\alpha ,\nu }(x)\) for each real x; moreover, since the median equals zero and the random variable X has support \([-a, a]\), we have \(\Phi _{a,\alpha ,\nu }(a) =\frac{1}{2}\). To obtain the cumulative distribution function we have to compute the integral in (2.3). Substituting \(u = a\sqrt{1-s^2}\), we arrive at

$$\begin{aligned} \Phi _{a,\alpha ,\nu }(x) = \frac{\sqrt{\frac{\alpha }{2 \pi }}}{I_{\nu +\frac{1}{2}}(\alpha )} \int _{g_a(x)}^1 \dfrac{s^{\nu +1}}{\sqrt{1-s^2}}I_\nu (\alpha s){\textrm{d}}s, \end{aligned}$$
(2.4)

where

$$\begin{aligned} g_a(x)=\sqrt{1-\left( \frac{x}{a}\right) ^2}. \end{aligned}$$

In what follows the integral (2.4) will be the starting point in the evaluation of \(\Phi _{a,\alpha ,\nu }(x).\)
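Before turning to closed forms, \(\Phi _{a,\alpha ,\nu }\) can be evaluated by direct numerical quadrature of (2.3); a short scipy sketch (with arbitrarily chosen parameter values) confirming that it is odd and that \(\Phi _{a,\alpha ,\nu }(a)=\frac{1}{2}\):

```python
import numpy as np
from scipy import integrate, special

a, alpha, nu = 1.5, 3.0, 1.0  # arbitrary test parameters

def kb_pdf(x):
    """Density (2.2) for the fixed parameters above."""
    if abs(x) > a:
        return 0.0
    g = np.sqrt(max(1.0 - (x / a) ** 2, 0.0))
    c = np.sqrt(alpha / (2.0 * np.pi)) / (a * special.iv(nu + 0.5, alpha))
    return c * g ** nu * special.iv(nu, alpha * g)

def Phi(x):
    """Auxiliary function (2.3), computed by adaptive quadrature."""
    return integrate.quad(kb_pdf, 0.0, x)[0]

print(Phi(a))                # approximately 0.5
print(Phi(0.8) + Phi(-0.8))  # approximately 0 (odd function)
```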

First approach Expanding \((1-s^2)^{-\frac{1}{2}}\) in the integrand of (2.4) into the Maclaurin series, which is a legitimate procedure since the integration region \(\left( g_a(x), 1\right) \) is inside the unit s-interval, and then changing the order of summation and integration (Tonelli’s theorem) we conclude that

$$\begin{aligned} \Phi _{a,\alpha ,\nu }(x)&= \frac{\sqrt{\frac{\alpha }{2 \pi }}}{I_{\nu +\frac{1}{2}}(\alpha )} \sum _{n \ge 0} \left( {\begin{array}{c}-\frac{1}{2}\\ n\end{array}}\right) (-1)^n \int _{g_a(x)}^1 s^{\nu +2n+1}I_\nu (\alpha s){\textrm{d}}s \\&= \frac{\sqrt{\frac{\alpha }{2 \pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\frac{1}{2\Gamma (\nu +2)}\left( \frac{\alpha }{2}\right) ^\nu \\&\quad \sum _{n\ge 0} \frac{\left( \frac{1}{2}\right) _n (\nu +1)_n}{n! (\nu +2)_n} s^{2(n+\nu +1)} {}_1F_2\left. \left[ \left. \begin{array}{c} n+\nu +1\\ \nu +1, n+\nu +2 \end{array} \right| \frac{(\alpha s)^2}{4} \right] \right| _{g_a(x)}^1 \\&= \frac{\sqrt{\frac{\alpha }{2 \pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\frac{1}{2\Gamma (\nu +2)} \left( \frac{\alpha }{2}\right) ^\nu \sum _{n\ge 0} \frac{\left( \frac{1}{2}\right) _n (\nu +1)_n}{n! (\nu +2)_n} \cdot {}_1\Omega _{n,a,\alpha ,\nu }(x), \end{aligned}$$

where

$$\begin{aligned}{} & {} {}_1\Omega _{n,a,\alpha ,\nu }(x)={}_1F_2 \left[ \left. \begin{array}{c} n+\nu +1\\ \nu +1, n+\nu +2 \end{array} \right| \frac{\alpha ^2}{4} \right] \\{} & {} \quad -g_a^{2(n+\nu +1)}(x){}_1F_2 \left[ \left. \begin{array}{c} n+\nu +1\\ \nu +1, n+\nu +2 \end{array} \right| \frac{(\alpha g_a(x))^2}{4} \right] . \end{aligned}$$

Collecting the appropriate relations we deduce the expression

$$\begin{aligned} F_{a,\alpha ,\nu }(x)= \frac{1}{2} + \frac{\left( \frac{\alpha }{2}\right) ^{\nu +\frac{1}{2}}}{2 \sqrt{\pi } \Gamma (\nu +2)I_{\nu +\frac{1}{2}}(\alpha )} \sum _{n\ge 0}\frac{\left( \frac{1}{2}\right) _n (\nu +1)_n}{n! (\nu +2)_n} \cdot {}_1\Omega _{n,a,\alpha ,\nu }(x).\nonumber \\ \end{aligned}$$
(2.5)
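The key step above is the closed-form antiderivative of \(s^{\nu +2n+1}I_{\nu }(\alpha s)\) in terms of \({}_1F_2\); it can be spot-checked numerically for a fixed n (an mpmath sketch with arbitrary parameter values, using the built-in `hyp1f2`):

```python
import mpmath as mp

mp.mp.dps = 30
alpha, nu, n, g = mp.mpf(3), mp.mpf('0.5'), 2, mp.mpf('0.4')  # arbitrary values

def primitive(s):
    # (alpha/2)^nu s^{2(n+nu+1)} / (2(n+nu+1) Gamma(nu+1))
    #   * 1F2(n+nu+1; nu+1, n+nu+2; (alpha s)^2 / 4)
    return ((alpha / 2) ** nu * s ** (2 * (n + nu + 1))
            / (2 * (n + nu + 1) * mp.gamma(nu + 1))
            * mp.hyp1f2(n + nu + 1, nu + 1, n + nu + 2, (alpha * s / 2) ** 2))

lhs = mp.quad(lambda s: s ** (nu + 2 * n + 1) * mp.besseli(nu, alpha * s), [g, 1])
rhs = primitive(mp.mpf(1)) - primitive(g)
print(mp.nstr(lhs, 15), mp.nstr(rhs, 15))  # the two values agree
```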

Here we used the usual notation for the generalized hypergeometric function \({}_pF_q,\) which is defined by

$$\begin{aligned} {}_pF_q \left[ \left. \begin{array}{c} a_1, \ldots , a_p\\ b_1, \ldots , b_q \end{array} \right| x\right] = \sum _{n \ge 0} \frac{(a_1)_n \cdots (a_p)_n}{(b_1)_n \cdots (b_q)_n} \frac{x^n}{n!} \end{aligned}$$

and \((a)_n=a(a+1)\cdots (a+n-1),\) \( a\ne 0,\) stands for the Pochhammer symbol or ascending factorial. Recall that for \(p \le q\) this series converges for any x, but when \(q = p-1\) convergence occurs for \(|x| <1\) unless the series terminates. The function \({}_2F_1\) is the familiar Gaussian hypergeometric function.
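The definition translates directly into code; a minimal mpmath sketch comparing a truncated series, built from Pochhammer symbols exactly as above, with the built-in `hyper`:

```python
import mpmath as mp

def pFq(a_list, b_list, x, terms=60):
    """Truncated series for pFq from the Pochhammer-symbol definition."""
    total = mp.mpf(0)
    for n in range(terms):
        num = mp.mpf(1)
        for a in a_list:
            num *= mp.rf(a, n)  # Pochhammer (a)_n as a rising factorial
        den = mp.mpf(1)
        for b in b_list:
            den *= mp.rf(b, n)
        total += num / den * mp.mpf(x) ** n / mp.factorial(n)
    return total

# a 1F2 evaluation: p <= q, so the series is entire and converges fast
val = pFq([mp.mpf('1.5')], [mp.mpf(1), mp.mpf('2.5')], mp.mpf('0.8'))
ref = mp.hyper([mp.mpf('1.5')], [mp.mpf(1), mp.mpf('2.5')], mp.mpf('0.8'))
print(val, ref)
```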

Second approach Expanding now the modified Bessel function in the integrand of (2.4) into its power series, and then interchanging integration and summation in a legitimate manner, we arrive at

$$\begin{aligned} \Phi _{a,\alpha ,\nu }(x)&= \frac{\sqrt{\frac{\alpha }{2 \pi }}}{I_{\nu +\frac{1}{2}}(\alpha )} \left( \frac{\alpha }{2}\right) ^\nu \sum _{n \ge 0} \frac{\left( \frac{\alpha }{2}\right) ^{2n}}{n! \Gamma (n+\nu +1)} \int _{g_a(x)}^1 \frac{s^{2(n+\nu )+1}}{\sqrt{1-s^2}}{\textrm{d}}s \\&=\frac{\sqrt{\frac{\alpha }{2 \pi }}}{2I_{\nu +\frac{1}{2}}(\alpha )} \left( \frac{\alpha }{2}\right) ^\nu \sum _{n \ge 0} \frac{\left( \frac{\alpha }{2}\right) ^{2n}}{n! \Gamma (n+\nu +1)} \left[ B\left( n+\nu +1, \frac{1}{2}\right) \right. \\&\quad \left. - B_{g_a^2(x)} \left( n+\nu +1, \frac{1}{2}\right) \right] . \end{aligned}$$

Here the complete and the incomplete Beta functions were used, respectively

$$\begin{aligned} B(a, b) = \int _0^1 x^{a-1} (1-x)^{b-1}{\textrm{d}}x, \qquad B_z(a, b) = \int _0^z x^{a-1} (1-x)^{b-1}{\textrm{d}}x,\end{aligned}$$

where \(\min \{a,b\}>0\). Having in mind the transformation formula

$$\begin{aligned} B_z(a, b) = \dfrac{z^a}{a} {}_2F_1\left[ \left. \begin{array}{c} a, 1-b\\ a+1\end{array}\right| z \right] ,\end{aligned}$$

after some algebraic computations we find that

$$\begin{aligned} \Phi _{a,\alpha ,\nu }(x)=\frac{\sqrt{\frac{\alpha }{2 \pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\frac{\left( \frac{\alpha }{2}\right) ^\nu }{2 \Gamma (\nu +1)} \cdot {}_2\Omega _{a,\alpha ,\nu }(x), \end{aligned}$$

where

$$\begin{aligned}{}_2\Omega _{a,\alpha ,\nu }(x)&=\sum _{n \ge 0}\frac{\left( \frac{\alpha }{2}\right) ^{2n}}{n!}\left[ \frac{B\left( \nu +1,\frac{1}{2}\right) }{\left( \nu +\frac{3}{2}\right) _n}- \frac{g_a^{2(n+\nu +1)}(x)}{(\nu +1)\left( \nu +2\right) _n}{}_2F_1\left[ \left. \begin{array}{c} n+\nu +1, \frac{1}{2}\\ n+\nu +2\end{array}\right| g_a^2(x) \right] \right] \\&=B\left( \nu +1,\frac{1}{2}\right) {}_0F_1\left[ \left. \begin{array}{c}-\\ \nu +\frac{3}{2}\end{array}\right| \frac{\alpha ^2}{4}\right] \\&\quad -\frac{1}{\nu +1} \sum _{n \ge 0}\frac{\left( \frac{\alpha ^2}{4}\right) ^n g_a^{2(n+\nu +1)}(x)}{n!\left( \nu +2\right) _n}{}_2F_1\left[ \left. \begin{array}{c} n+\nu +1, \frac{1}{2}\\ n+\nu +2\end{array}\right| g_a^2(x)\right].\end{aligned}$$

Therefore, we deduce that

$$\begin{aligned} F_{a,\alpha ,\nu }(x)= \frac{1}{2} + \frac{\left( \frac{\alpha }{2}\right) ^{\nu +\frac{1}{2}}}{2 \sqrt{\pi }\Gamma (\nu +1)I_{\nu +\frac{1}{2}}(\alpha )} \cdot {}_2\Omega _{a,\alpha ,\nu }(x). \end{aligned}$$
(2.6)
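The incomplete-Beta-to-Gauss-hypergeometric transformation used in this approach is easy to confirm numerically (an mpmath sketch with arbitrary arguments; `betainc` returns the non-regularized \(B_z(a,b)\) by default):

```python
import mpmath as mp

a, b, z = mp.mpf('1.7'), mp.mpf('0.5'), mp.mpf('0.6')  # arbitrary arguments
lhs = mp.betainc(a, b, 0, z)  # incomplete Beta function B_z(a, b)
rhs = z ** a / a * mp.hyp2f1(a, 1 - b, a + 1, z)
print(lhs, rhs)  # identical up to working precision
```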

Third approach Expanding now both the binomial term \((1-s^2)^{-\frac{1}{2}}\) and the modified Bessel term \(I_\nu (\alpha s)\) in the integrand of (2.4), and repeating the interchange of summation and integration, we deduce a double series result. Indeed, after several routine steps, in which we employ the Legendre duplication formula for the Pochhammer symbol, that is,

$$\begin{aligned} (q)_{2m} = 4^m \left( \frac{q}{2}\right) _m \left( \frac{q+1}{2}\right) _m \end{aligned}$$

we arrive at

$$\begin{aligned} \Phi _{a,\alpha ,\nu }(x)=\frac{\sqrt{\frac{\alpha }{2 \pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\frac{\left( \frac{\alpha }{2}\right) ^\nu }{2 \Gamma (\nu +2)} \cdot {}_3\Omega _{a,\alpha ,\nu }(x), \end{aligned}$$

where

$$\begin{aligned} {}_3\Omega _{a,\alpha ,\nu }(x)&= \sum _{n, k \ge 0} \frac{(\nu +1)_{n+k} \left( \frac{1}{2}\right) _n}{(\nu +2)_{n+k}(\nu +1)_k n! k!} \left( \frac{\alpha ^2}{4}\right) ^k \left( 1-g_a^{2(n+k+\nu +1)}(x)\right) \\&= F_{1:0;1}^{1:1;0}\left[ \left. \begin{array}{c} \nu +1:\, \frac{1}{2};- \\ \nu +2:-;\nu +1\end{array} \right| 1,\,\frac{\alpha ^2}{4} \right] \\&\quad - g_a^{2\nu +2}(x) F_{1:0;1}^{1:1;0}\left[ \left. \begin{array}{c}\nu +1:\,\frac{1}{2};-\\ \nu +2:-;\nu +1 \end{array}\right| 1,\,\frac{(\alpha g_a(x))^2}{4}\right] \end{aligned}$$

and

$$\begin{aligned} F_{l:m;n}^{p:q;k}\left[ \left. \begin{array}{c} (a_p):\, (b_q);\,(c_k) \\ (\alpha _l):(\beta _m);(\gamma _n)\end{array} \right| x,\, y \right] = \sum _{r,t \ge 0} \dfrac{\prod \limits _{j=1}^p (a_j)_{r+t} \prod \limits _{j=1}^q (b_j)_{r} \prod \limits _{j=1}^k (c_j)_t}{\prod \limits _{j=1}^l (\alpha _j)_{r+t} \prod \limits _{j=1}^m (\beta _j)_{r} \prod \limits _{j=1}^n (\gamma _j)_{t} } \frac{x^r}{r!}\frac{y^t}{t!} \end{aligned}$$

stands for the so-called Kampé de Fériet hypergeometric function of two variables [4, p. 150]. This function converges when

  • (i) \(p+q<l+m+1\), \(p+k<l+n+1\) and \(\max \{|x|, |y|\} <\infty \);

  • (ii) \(p+q=l+m+1\), \(p+k=l+n+1\) and \(|x|^{\frac{1}{p-l}}+|y|^{\frac{1}{p-l}}<1\) for \(l<p\) and \(\max \{|x|,|y|\} < 1\) for \(l\ge p.\)

Consequently, we find that

$$\begin{aligned} F_{a,\alpha ,\nu }(x)= \frac{1}{2} + \frac{\left( \frac{\alpha }{2}\right) ^{\nu +\frac{1}{2}}}{2 \sqrt{\pi }\Gamma (\nu +2)I_{\nu +\frac{1}{2}}(\alpha )} \cdot {}_3\Omega _{a,\alpha ,\nu }(x). \end{aligned}$$
(2.7)

It remains to confirm the convergence of both Kampé de Fériet double hypergeometric series. According to condition (ii) we have \(p+q=2=l+m+1\) and \(p+k=2=l+n+1\), and therefore the convergence domain of the first series above is \(0<\alpha <2\), while the second series converges for \(|x|<a\).
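The Kampé de Fériet function \(F_{1:0;1}^{1:1;0}\) used above can be evaluated by truncating the double series; the sketch below (mpmath, with arbitrary parameter values) checks the truncation against its one-variable reductions, which collapse to \({}_2F_1\) when \(y=0\) and to \({}_0F_1\) when \(x=0\):

```python
import mpmath as mp

def kdf_1101(ap, bq, al, gn, x, y, N=80):
    """Truncated double series for F^{1:1;0}_{1:0;1}[ap: bq; - / al: -; gn | x, y]."""
    total = mp.mpf(0)
    for r in range(N):
        for t in range(N):
            total += (mp.rf(ap, r + t) * mp.rf(bq, r)
                      / (mp.rf(al, r + t) * mp.rf(gn, t))
                      * mp.mpf(x) ** r / mp.factorial(r)
                      * mp.mpf(y) ** t / mp.factorial(t))
    return total

nu = mp.mpf('0.5')
# y = 0: the double series collapses to 2F1(nu+1, 1/2; nu+2; x)
v1 = kdf_1101(nu + 1, mp.mpf('0.5'), nu + 2, nu + 1, '0.3', 0)
print(v1, mp.hyp2f1(nu + 1, '0.5', nu + 2, '0.3'))
# x = 0: the (nu+1)_{r+t} factors cancel, giving 0F1(; nu+2; y)
v2 = kdf_1101(nu + 1, mp.mpf('0.5'), nu + 2, nu + 1, 0, '0.7')
print(v2, mp.hyp0f1(nu + 2, '0.7'))
```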

Summarizing, since cumulative distribution functions are left continuous, we obtain the next result.

Theorem 1

The cumulative distribution function of the Kaiser–Bessel distribution has the form

$$\begin{aligned} F_{a, \alpha , \nu }(x) = \left\{ \begin{array}{ll}0, &{} \qquad x \le -a \\ (2.5) \ \ \text{ or }\ \ (2.6) \ \ \text{ or }\ \ (2.7),&{} \qquad -a<x<a \\ 1, &{} \qquad x\ge a\end{array}\right. , \end{aligned}$$

with the condition that the use of (2.7) requires the assumption \(\alpha <2.\)

Note that since \(F_{a, \alpha , \nu }(a) = 1,\) the last case is well-defined, and clearly the cumulative distribution function formulae (2.5), (2.6) and (2.7) represent the same function. Therefore, as a consequence of our result, by equating the related auxiliary functions \(\Phi _{a, \alpha , \nu }\) we may establish potentially useful transformation formulae between the special cases of generalized hypergeometric functions appearing in them (such as \({}_1F_2\) and the Gaussian \({}_2F_1\)) and the Kampé de Fériet hypergeometric function \(F_{1:0;1}^{1:1;0}\) of two variables.

3 Moments, Absolute Moments and Mellin Transforms of the Kaiser–Bessel Distribution

In probability theory the moments of a probability distribution are important numerical characteristics. For a probability distribution supported on a bounded interval, the collection of all moments (of all orders) uniquely determines the distribution (the Hausdorff moment problem is determinate in this case). In this section our aim is to give explicit expressions for the moments, absolute moments and Mellin transforms of the Kaiser–Bessel distribution. In addition, we consider the limiting and some particular cases of the Kaiser–Bessel distribution, and it turns out that the Kaiser–Bessel distribution is an extension of Wigner's semicircle distribution, Wigner's parabolic distribution and Wigner's n-sphere distribution. We also deduce a recurrence relation for the moments, and by using some Turán type inequalities for modified Bessel functions of the first kind we deduce some bounds for the effective variance and show that the Kaiser–Bessel distribution is platykurtic. We mention that in Sect. 5.2 we show that the moments of the Kaiser–Bessel distribution determine the distribution uniquely.

3.1 Recurrence Relation for the Moments of the Kaiser–Bessel Distribution

Since the probability density function of the Kaiser–Bessel distribution is an even function, the odd order moments vanish, that is, \({\text {E}}\left[ X^{2n-1}\right] =0\) for each \(n\in {\mathbb {N}}\). The even order moments for each \(n\in {\mathbb {N}}\) are given by

$$\begin{aligned} {\text {E}}\left[ X^{2n}\right]&=\int _{-a}^{a}x^{2n}\varphi _{a,\alpha ,\nu }(x){\textrm{d}}x=\frac{\sqrt{\frac{2\alpha }{\pi }}}{a I_{\nu +\frac{1}{2}}(\alpha )}\int _{0}^{a} x^{2n}\left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu }I_{\nu } \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) {\textrm{d}}x\\&=\frac{a^{2n} \sqrt{\frac{2\alpha }{\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\int _0^1s^{\nu +1} \left( 1-s^2\right) ^{n-\frac{1}{2}}I_{\nu }(\alpha s){\textrm{d}}s=(2n-1)!!a^{2n}\frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{\alpha ^n I_{\nu +\frac{1}{2}}(\alpha )}. \end{aligned}$$
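The closed form of the even order moments can be verified by direct quadrature against the density (2.2); a scipy sketch with arbitrarily chosen parameter values:

```python
import numpy as np
from scipy import integrate, special

a, alpha, nu = 1.2, 4.0, 1.5  # arbitrary test parameters

def kb_pdf(x):
    """Density (2.2) for the fixed parameters above (|x| <= a)."""
    g = np.sqrt(max(1.0 - (x / a) ** 2, 0.0))
    c = np.sqrt(alpha / (2.0 * np.pi)) / (a * special.iv(nu + 0.5, alpha))
    return c * g ** nu * special.iv(nu, alpha * g)

def even_moment(n):
    # (2n-1)!! a^{2n} I_{nu+n+1/2}(alpha) / (alpha^n I_{nu+1/2}(alpha))
    return (special.factorial2(2 * n - 1) * a ** (2 * n)
            * special.iv(nu + n + 0.5, alpha)
            / (alpha ** n * special.iv(nu + 0.5, alpha)))

for n in (1, 2, 3):
    numeric = integrate.quad(lambda x: x ** (2 * n) * kb_pdf(x), -a, a)[0]
    print(n, numeric, even_moment(n))  # the two columns agree
```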

Now, consider the following notations: \(\mu _{2n,\nu }={\text {E}}\left[ X^{2n}\right] \) and

$$\begin{aligned} \lambda _{2n,\nu }=\frac{I_{\nu +\frac{1}{2}}(\alpha )}{a^{2n} \sqrt{\frac{2\alpha }{\pi }}}\cdot \mu _{2n,\nu }=\int _0^1s^{\nu +1} \left( 1-s^2\right) ^{n-\frac{1}{2}}I_{\nu }(\alpha s){\textrm{d}}s. \end{aligned}$$

Then clearly

$$\begin{aligned} \lambda _{2n,\nu }-\lambda _{2n+2,\nu }=\int _0^1s^{\nu +3} \left( 1-s^2\right) ^{n-\frac{1}{2}}I_{\nu }(\alpha s){\textrm{d}}s \end{aligned}$$

and by using the recurrence relation

$$\begin{aligned} I_{\nu }(\alpha )-I_{\nu +2} (\alpha )=\frac{2(\nu +1)}{\alpha }I_{\nu +1}(\alpha )\end{aligned}$$
(3.1)

we deduce that

$$\begin{aligned} \lambda _{2n,\nu }-\lambda _{2n+2,\nu }= & {} \int _0^1s^{\nu +3} \left( 1-s^2\right) ^{n-\frac{1}{2}}I_{\nu +2}(\alpha s){\textrm{d}}s \\{} & {} \quad +\frac{2(\nu +1)}{\alpha }\int _0^1s^{\nu +2}\left( 1-s^2\right) ^{n-\frac{1}{2}}I_{\nu +1}(\alpha s){\textrm{d}}s, \end{aligned}$$

that is,

$$\begin{aligned} \lambda _{2n+2,\nu }=\lambda _{2n,\nu }-\lambda _{2n,\nu +2} -\frac{2(\nu +1)}{\alpha }\lambda _{2n,\nu +1}. \end{aligned}$$

This recurrence relation can be rewritten as

$$\begin{aligned} \frac{1}{a^2}\cdot \mu _{2n+2,\nu }=\mu _{2n,\nu } -\frac{I_{\nu +\frac{5}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}\cdot \mu _{2n,\nu +2} -\frac{2(\nu +1)}{\alpha }\frac{I_{\nu +\frac{3}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}\cdot \mu _{2n,\nu +1} \end{aligned}$$

and this is equivalent to

$$\begin{aligned} \mu _{2n+2,\nu }=a^2 \mu _{2n,\nu }-\frac{\alpha ^2}{3a^2}\mu _{4,\nu } \mu _{2n,\nu +2}-2(\nu +1)\mu _{2,\nu }\mu _{2n,\nu +1}. \end{aligned}$$
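This recurrence connects moments of the three neighboring parameters \(\nu ,\) \(\nu +1\) and \(\nu +2;\) it can be confirmed numerically from the closed form of the even order moments (a scipy sketch with arbitrary parameter values):

```python
from scipy import special

a, alpha = 1.3, 2.5  # arbitrary test parameters

def mu(n, nu):
    """Even moment E[X^{2n}] of KB(a, alpha, nu) from the closed form above."""
    return (special.factorial2(2 * n - 1) * a ** (2 * n)
            * special.iv(nu + n + 0.5, alpha)
            / (alpha ** n * special.iv(nu + 0.5, alpha)))

nu = 0.75
for n in (1, 2, 3):
    lhs = mu(n + 1, nu)
    rhs = (a ** 2 * mu(n, nu)
           - alpha ** 2 / (3 * a ** 2) * mu(2, nu) * mu(n, nu + 2)
           - 2 * (nu + 1) * mu(1, nu) * mu(n, nu + 1))
    print(n, lhs, rhs)  # lhs and rhs coincide
```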

3.2 Mellin Transforms and Inequalities for the Absolute Moments

An extension of the moments involves the Mellin transform of |X|,  that is, \({\text {E}}\left[ |X|^{r-1}\right] ,\) which for \(r\in {\mathbb {C}}\) such that \({\text {Re}}r>0\) is given by

$$\begin{aligned} {\text {E}}\left[ |X|^{r-1}\right]&=\int _{-a}^{a}|x|^{r-1}\varphi _{a,\alpha ,\nu }(x){\textrm{d}}x =\frac{\sqrt{\frac{2\alpha }{\pi }}}{a I_{\nu +\frac{1}{2}}(\alpha )}\int _{0}^{a} x^{r-1}\left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu }I_{\nu }\left( \alpha \sqrt{1 -\left( \frac{x}{a}\right) ^2}\right) {\textrm{d}}x\\&=\frac{a^{r-1} \sqrt{\frac{2\alpha }{\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\int _0^1s^{\nu +1}\left( 1-s^2\right) ^{\frac{r}{2} -1}I_{\nu }(\alpha s){\textrm{d}}s= \frac{\left( a\sqrt{2}\right) ^{r-1}\Gamma \left( \frac{r}{2}\right) }{\sqrt{\pi }\left( \sqrt{\alpha }\right) ^{r-1}}\frac{I_{\nu +\frac{r}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}. \end{aligned}$$

Thus, for \(m>-1\) the \(m\hbox {th}\) absolute moment of the random variable \(X_{\nu }\sim \textrm{KB}(a,\alpha ,\nu )\) is given by

$$\begin{aligned} \beta _{\nu ,m}={\text {E}}\left[ \left| X_{\nu }\right| ^{m}\right] =\frac{\left( a\sqrt{2}\right) ^{m}\Gamma \left( \frac{m+1}{2}\right) }{\sqrt{\pi }\left( \sqrt{\alpha }\right) ^{m}}\frac{I_{\nu +\frac{m+1}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}. \end{aligned}$$
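The closed form of the absolute moments, valid also for non-integer orders, can again be checked by quadrature (a scipy sketch with arbitrary parameter values, exploiting the even symmetry of the density):

```python
import numpy as np
from scipy import integrate, special

a, alpha, nu = 1.5, 5.0, 1.0  # arbitrary test parameters

def kb_pdf(x):
    """Density (2.2) for the fixed parameters above (|x| <= a)."""
    g = np.sqrt(max(1.0 - (x / a) ** 2, 0.0))
    c = np.sqrt(alpha / (2.0 * np.pi)) / (a * special.iv(nu + 0.5, alpha))
    return c * g ** nu * special.iv(nu, alpha * g)

def abs_moment(m):
    # (a sqrt(2/alpha))^m Gamma((m+1)/2) / sqrt(pi) * I_{nu+(m+1)/2} / I_{nu+1/2}
    return ((a * np.sqrt(2.0 / alpha)) ** m * special.gamma((m + 1) / 2) / np.sqrt(np.pi)
            * special.iv(nu + (m + 1) / 2, alpha) / special.iv(nu + 0.5, alpha))

for m in (0.5, 1.0, 2.7):  # non-integer orders are allowed as well
    numeric = 2.0 * integrate.quad(lambda x: x ** m * kb_pdf(x), 0.0, a)[0]
    print(m, numeric, abs_moment(m))  # the two columns agree
```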

On the other hand, we know that [44, Theorem 3] if \(\alpha >0\) and \(\beta >0\) are fixed, then \(\nu \mapsto I_{\nu +\beta }(\alpha )/I_{\nu }(\alpha )\) is decreasing for \(-\beta \le 2\nu \) and \(\nu >-1.\) This implies that for \(\alpha >0,\) \(m\ge 3\) and \(\nu >-1\) the function \(\nu \mapsto {I_{\nu +\frac{m+1}{2}}(\alpha )}/{I_{\nu +\frac{1}{2}}(\alpha )}\) is decreasing, and consequently the function \(\nu \mapsto {\text {E}}\left[ \left| X_{\nu }\right| ^{m}\right] \) is also decreasing on \((-1,\infty )\) for all \(a>0,\) \(\alpha >0\) and \(m\ge 3.\) In other words, if \(X_{\mu }\sim \textrm{KB}(a,\alpha ,\mu )\) and \(X_{\nu }\sim \textrm{KB}(a,\alpha ,\nu ),\) then for \(\mu \ge \nu >-1,\) \(a>0,\) \(\alpha >0\) and \(m\ge 3\) we arrive at

$$\begin{aligned} \beta _{\mu ,m}={\text {E}}\left[ \left| X_{\mu }\right| ^{m}\right] \le {\text {E}}\left[ \left| X_{\nu }\right| ^{m}\right] =\beta _{\nu ,m}. \end{aligned}$$

Observe that \(\beta _{\nu ,0}=1\) and \(r\mapsto {\left( \beta _{\nu ,r}\right) }^{\frac{1}{r}}\) is a non-decreasing function on \((0,\infty ),\) that is, if \(r>s>0,\) \(\alpha >0\) and \(\nu >-1,\) then

$$\begin{aligned} \frac{{\left( \beta _{\nu ,r}\right) }^{\frac{1}{r}}}{{\left( \beta _{\nu ,s}\right) }^{\frac{1}{s}}}=\left[ \Gamma \left( \frac{r+1}{2}\right) \frac{I_{\nu +\frac{r+1}{2}}(\alpha )}{\sqrt{\pi }I_{\nu +\frac{1}{2}}(\alpha )}\right] ^{\frac{1}{r}}\cdot \left[ \Gamma \left( \frac{s+1}{2}\right) \frac{I_{\nu +\frac{s+1}{2}}(\alpha )}{\sqrt{\pi }I_{\nu +\frac{1}{2}}(\alpha )}\right] ^{-\frac{1}{s}}>1.\nonumber \\\end{aligned}$$
(3.2)

This Lyapunov-type inequality follows from the Hölder-Rogers inequality for integrals with conjugate exponents. Moreover, if we consider the Gauss-Winckler inequality (see [16, eq. (51)] or [8]) for the absolute moments, that is,

$$\begin{aligned} \left[ (s+1)\beta _s\right] ^{\frac{1}{s}}<\left[ (r+1)\beta _r\right] ^{\frac{1}{r}}, \end{aligned}$$

where \(r>s>0,\) we obtain that

$$\begin{aligned} \frac{{\left( \beta _{\nu ,r}\right) }^{\frac{1}{r}}}{{\left( \beta _{\nu ,s}\right) }^{\frac{1}{s}}} =\left[ \Gamma \left( \frac{r+1}{2}\right) \frac{I_{\nu +\frac{r+1}{2}}(\alpha )}{\sqrt{\pi }I_{\nu +\frac{1}{2}}(\alpha )}\right] ^{\frac{1}{r}}\cdot \left[ \Gamma \left( \frac{s+1}{2}\right) \frac{I_{\nu +\frac{s+1}{2}}(\alpha )}{\sqrt{\pi }I_{\nu +\frac{1}{2}}(\alpha )}\right] ^{-\frac{1}{s}}>\frac{(s+1)^{\frac{1}{s}}}{(r+1)^{\frac{1}{r}}}, \end{aligned}$$

where \(r>s>0,\) \(\alpha >0\) and \(\nu >-1,\) and this inequality clearly improves (3.2) since the function \(r\mapsto (r+1)^{\frac{1}{r}}\) is decreasing on \((-1,\infty ).\)

It is also known that the absolute moment \(\beta _r={\text {E}}\left[ |X|^r\right] \) of a random variable X is log-convex with respect to r, and the Turán type inequality [45, p. 28, Eq. (1.4.6)] \(\beta _{r-1}^2 \le \beta _r \beta _{r-2}\) holds for \(r>2\); by the well-known Cauchy–Bunyakovsky–Schwarz inequality this readily extends to \(\beta _{r-s}^2 \le \beta _r \beta _{r-2s},\) where \(r>2s>0.\) Now, if we consider the absolute moments of the Kaiser–Bessel distribution, the above inequality leads to the next Turán-type inequality for modified Bessel functions of the first kind

$$\begin{aligned} \frac{I^2_{\nu +\frac{r-s+1}{2}}(\alpha )}{I_{\nu +\frac{r+1}{2}}(\alpha ) I_{\nu +\frac{r-2s+1}{2}}(\alpha )}\le \frac{\Gamma (\frac{r+1}{2}) \Gamma (\frac{r-2s+1}{2})}{\Gamma ^2(\frac{r-s+1}{2})}, \end{aligned}$$
(3.3)

where \(\alpha >0,\) \(\nu >-1\) and \(r>2s-1>-1.\) In Sect. 3.4 we will see that this general Turán type inequality is not new; it has been proved recently in [69] and [51].
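A quick numerical spot-check of (3.3) over a few admissible parameter values (the series-based `bessel_i` helper and the sampled parameters are our own scaffolding):

```python
import math

def bessel_i(nu, x, terms=300):
    # power series of I_nu (valid for nu > -1, x > 0)
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x * x / 4.0) / (k * (k + nu))
        s += t
    return s

def turan_lhs(nu, alpha, r, s):
    # left-hand side of (3.3)
    return (bessel_i(nu + (r - s + 1) / 2, alpha) ** 2
            / (bessel_i(nu + (r + 1) / 2, alpha)
               * bessel_i(nu + (r - 2 * s + 1) / 2, alpha)))

def turan_rhs(r, s):
    # right-hand side of (3.3)
    return (math.gamma((r + 1) / 2) * math.gamma((r - 2 * s + 1) / 2)
            / math.gamma((r - s + 1) / 2) ** 2)

# admissible range: alpha > 0, nu > -1, r > 2s - 1 > -1
r, s = 3.0, 1.0
for nu in (-0.5, 0.0, 1.5):
    for alpha in (0.5, 2.0, 10.0):
        print(nu, alpha, turan_lhs(nu, alpha, r, s), turan_rhs(r, s))
```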

3.3 Limiting and Particular Cases

Now, let us focus on the limiting and some particular cases of the Kaiser–Bessel probability distribution and its moments. Since for \(\nu \) fixed as \(x\rightarrow 0\) we have

$$\begin{aligned} I_{\nu }(x)\sim \frac{x^{\nu }}{2^{\nu }\Gamma (\nu +1)}, \end{aligned}$$

we obtain that for \(a>0,\) \(\nu >-1\) and \(|x|<a\) fixed as \(\alpha \rightarrow 0\)

$$\begin{aligned} \varphi _{a,\alpha ,\nu }(x)\sim \frac{\Gamma \left( \nu +\frac{3}{2}\right) }{a\sqrt{\pi } \Gamma (\nu +1)}\left( 1-\left( \frac{x}{a}\right) ^2\right) ^{\nu }.\end{aligned}$$
(3.4)

Similarly, since for \(\nu \) fixed as \(x\rightarrow \infty \) we have

$$\begin{aligned} I_{\nu }(x)\sim \frac{e^x}{\sqrt{2\pi x}}, \end{aligned}$$

we obtain that for \(a>0,\) \(\nu >-1\) and \(|x|<a\) fixed as \(\alpha \rightarrow \infty \)

$$\begin{aligned} \varphi _{a,\alpha ,\nu }(x)\sim \frac{1}{a} \sqrt{\frac{\alpha }{2\pi }}\left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu -\frac{1}{2}}e^{\alpha \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}-1\right) }=q_{a,\alpha ,\nu }(x). \end{aligned}$$

Moreover, in view of the asymptotic expansion

$$\begin{aligned} I_{\nu }(x)\sim \frac{e^x}{\sqrt{2\pi x}} \sum _{n\ge 0}(-1)^n\frac{a_n(\nu )}{x^n},\end{aligned}$$
(3.5)

where \(x\rightarrow \infty ,\) \(\nu \) is fixed and

$$\begin{aligned} a_0(\nu )=1,\ \ a_n(\nu ) =\frac{(4\nu ^2-1^2)(4\nu ^2-3^2)\cdots (4\nu ^2-(2n-1)^2)}{n!8^n},\ \ n\in {\mathbb {N}},\end{aligned}$$
(3.6)

and taking into account the fact that the quotient of two asymptotic power series is also an asymptotic power series, we arrive at the following asymptotic power series

$$\begin{aligned} \varphi _{a,\alpha ,\nu }(x)\sim q_{a,\alpha ,\nu }(x) \left( 1+\delta _1(\nu ,s)\frac{1}{\alpha }+\delta _2(\nu ,s) \frac{1}{\alpha ^2}+\cdots +\delta _n(\nu ,s)\frac{1}{\alpha ^n}+\cdots \right) , \qquad \text{ as }\ \ \alpha \rightarrow \infty , \end{aligned}$$

where \(s=g_a(x)\) and for all \(n\in {\mathbb {N}}\) we have

$$\begin{aligned} \delta _n(\nu ,s)=(-1)^n\frac{a_n(\nu )}{s^n}-\sum _{m=0}^{n-1}(-1)^{n-m} \delta _m(\nu ,s)a_{n-m}\left( \nu +\frac{1}{2}\right) . \end{aligned}$$

Observe also that for \(\nu =\frac{1}{2},\) for \(\nu =1,\) and for \(a=1\) with \(\nu =\frac{n-1}{2},\) the limiting distribution function in (3.4), for \(|x|<a\) in the first two cases and \(|x|<1\) in the third, reduces to

$$\begin{aligned}{} & {} \varphi _{a,0,\frac{1}{2}}(x)=\frac{2}{\pi a^2}\sqrt{a^2-x^2}, \quad \varphi _{a,0,1}(x)=\frac{3}{4a^3}(a^2-x^2), \\{} & {} \quad \varphi _{1,0,\frac{n-1}{2}}(x)= \frac{\Gamma \left( \frac{n}{2}+1\right) }{\sqrt{\pi } \Gamma \left( \frac{n+1}{2}\right) }(1-x^2)^{\frac{n-1}{2}}, \end{aligned}$$

and these are known in the literature as Wigner's semicircle distribution (also sometimes called the Sato–Tate distribution in the number-theoretic literature), Wigner's parabolic distribution and Wigner's n-sphere distribution (see for example [24] and [39]). This shows that the Kaiser–Bessel distribution is in fact an extension of Wigner's semicircle distribution. Moreover, the distribution with density function

$$\begin{aligned} \varphi _{2,0,n-\frac{1}{2}}(x)=\frac{\Gamma (n+1)}{2\sqrt{\pi } \Gamma \left( n+\frac{1}{2}\right) }\cdot \left( 1-\frac{x^2}{4}\right) ^{n -\frac{1}{2}}=\frac{n!}{\pi \cdot 2^n(2n-1)!!}(4-x^2)^{n-\frac{1}{2}} \end{aligned}$$

is known in the literature as the ultra-spherical (or hyper-spherical) distribution, and it contains all the Gaussian distributions with respect to the five fundamental notions of independence in non-commutative probability, as classified by Muraki [50]; see [6] for more details. Note also that the limiting distribution in (3.4) is known in probability theory and statistics as the Pearson type II distribution, and its density function is a solution of the Pearson differential equation. It is also worth mentioning that the limiting distribution in (3.4) with \(\nu +\frac{1}{2}\) instead of \(\nu \) is known in the literature as the power semicircle distribution (see for example [6]) and can be obtained from the Kaiser–Bessel distribution for \(a>0,\) \(\nu >-\frac{3}{2}\) and \(|x|<a\) fixed as \(\alpha \rightarrow 0\)

$$\begin{aligned} \varphi _{a,\alpha ,\nu +\frac{1}{2}} (x)\sim \frac{\Gamma \left( \nu +2\right) }{a\sqrt{\pi }\Gamma \left( \nu +\frac{3}{2}\right) }\left( 1-\left( \frac{x}{a}\right) ^2\right) ^{\nu +\frac{1}{2}}.\end{aligned}$$
(3.7)

Moreover, if we use the known result that for \(x>0\) fixed we have

$$\begin{aligned} I_{\nu }(x)\sim \frac{1}{\sqrt{2\pi \nu }}\left( \frac{ex}{2\nu }\right) ^{\nu } \end{aligned}$$

as \(\nu \rightarrow \infty ,\) then for \(\alpha >0\) and x, a fixed such that \(|x|<a\) we deduce that

$$\begin{aligned} \varphi _{a,\alpha ,\nu }(x)\sim \frac{1}{a\sqrt{\pi e}} \frac{\left( \nu +\frac{1}{2}\right) ^{\nu +1}}{\nu ^{\nu +\frac{1}{2}}} \left( 1-\left( \frac{x}{a}\right) ^2\right) ^{2\nu } \end{aligned}$$

as \(\nu \rightarrow \infty .\)

By using the above limiting forms of the modified Bessel function of the first kind it is also possible to deduce the limiting behavior of the moments and absolute moments of the Kaiser–Bessel distribution. More precisely, for fixed \(n\in {\mathbb {N}},\) \(m>-1,\) \(a>0\) and \(\nu >-1\) as \(\alpha \rightarrow 0\) we arrive at

$$\begin{aligned} \mu _{2n,\nu }={\text {E}}\left[ X^{2n}\right] =(2n-1)!!a^{2n} \frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{\alpha ^nI_{\nu +\frac{1}{2}}(\alpha )}\sim \frac{(2n-1)!!a^{2n}}{2^n\left( \nu +\frac{3}{2}\right) _n}\end{aligned}$$
(3.8)

and

$$\begin{aligned} \beta _{\nu ,m}={\text {E}}\left[ \left| X\right| ^{m}\right] =\frac{\left( a\sqrt{2}\right) ^{m} \Gamma \left( \frac{m+1}{2}\right) }{\sqrt{\pi }\left( \sqrt{\alpha }\right) ^{m}} \frac{I_{\nu +\frac{m+1}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )} \sim \frac{a^m\Gamma \left( \frac{m+1}{2}\right) \Gamma \left( \nu +\frac{3}{2}\right) }{\sqrt{\pi }\Gamma \left( \nu +\frac{m+3}{2}\right) }. \end{aligned}$$

Similarly, for fixed \(n\in {\mathbb {N}},\) \(m>-1,\) \(a>0\) and \(\nu >-1\) as \(\alpha \rightarrow \infty \) we see that

$$\begin{aligned} \mu _{2n,\nu }={\text {E}}\left[ X^{2n}\right] \sim (2n-1)!!\frac{a^{2n}}{\alpha ^n} \qquad \text{ and }\qquad \beta _{\nu ,m}={\text {E}}\left[ \left| X\right| ^{m}\right] \sim \frac{\left( a\sqrt{2}\right) ^{m}\Gamma \left( \frac{m+1}{2}\right) }{\sqrt{\pi }\left( \sqrt{\alpha }\right) ^{m}}, \end{aligned}$$

and for \(n\in {\mathbb {N}},\) \(m>-1,\) \(\alpha >0\) and \(a>0\) fixed, as \(\nu \rightarrow \infty \) we find that

$$\begin{aligned} \mu _{2n,\nu }={\text {E}}\left[ X^{2n}\right] \sim (2n-1)!!\frac{a^{2n}e^n}{2^n} \frac{\left( \nu +\frac{1}{2}\right) ^{\nu +1}}{\left( \nu +n+\frac{1}{2}\right) ^{\nu +n+1}} \end{aligned}$$

and

$$\begin{aligned} \beta _{\nu ,m}={\text {E}}\left[ \left| X\right| ^{m}\right] \sim \frac{\left( a\sqrt{e} \right) ^{m}}{\sqrt{\pi }}\frac{\Gamma \left( \frac{m+1}{2}\right) \left( \nu +\frac{1}{2}\right) ^{\nu +1}}{\left( \nu +\frac{m+1}{2}\right) ^{\nu +\frac{m}{2}+1}}. \end{aligned}$$
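The limiting behaviour of the even moments can be checked numerically. In the sketch below (our own scaffolding, with \(I_{\nu }\) evaluated by its power series and arbitrary test values \(\nu =\frac{1}{2},\) \(a=1,\) \(n=2\)), \(\mu _{2n,\nu }\) is computed from its exact Bessel form and compared with the \(\alpha \rightarrow 0\) limit in (3.8) and with the leading term of the \(\alpha \rightarrow \infty \) asymptotics above.

```python
import math

def bessel_i(nu, x, terms=600):
    # power series of I_nu, summed iteratively to avoid overflow (nu > -1, x > 0)
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x * x / 4.0) / (k * (k + nu))
        s += t
    return s

def mu_2n(nu, alpha, a, n):
    # exact even moment: (2n-1)!! a^{2n} I_{nu+n+1/2}(alpha) / (alpha^n I_{nu+1/2}(alpha))
    dfact = math.prod(range(1, 2 * n, 2))
    return (dfact * a ** (2 * n) * bessel_i(nu + n + 0.5, alpha)
            / (alpha ** n * bessel_i(nu + 0.5, alpha)))

nu, a, n = 0.5, 1.0, 2
dfact = math.prod(range(1, 2 * n, 2))                    # (2n-1)!!
poch = math.gamma(nu + 1.5 + n) / math.gamma(nu + 1.5)   # Pochhammer (nu+3/2)_n
limit_small = dfact * a ** (2 * n) / (2 ** n * poch)     # alpha -> 0 limit from (3.8)
ratio_small = mu_2n(nu, 1e-2, a, n) / limit_small
limit_large = dfact * a ** (2 * n) / 200.0 ** n          # alpha -> infinity leading term
ratio_large = mu_2n(nu, 200.0, a, n) / limit_large
print(ratio_small, ratio_large)  # both ratios approach 1 in the respective limits
```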

Note also that by using the relations of modified Bessel functions of the first kind with elementary functions like hyperbolic sine and hyperbolic cosine functions, that is,

$$\begin{aligned} I_{\frac{1}{2}}(x)=\sqrt{\frac{2}{\pi x}}\sinh x, \qquad I_{-\frac{1}{2}}(x)=\sqrt{\frac{2}{\pi x}}\cosh x, \end{aligned}$$

we arrive at the following particular hyperbolic sine and hyperbolic cosine distributions with support \([-a,a]\)

$$\begin{aligned} \varphi _{a,\alpha ,\frac{1}{2}}(x)=\frac{\sinh \left( \alpha \sqrt{1 -\left( \frac{x}{a}\right) ^2}\right) }{\pi a I_1(\alpha )},\qquad \varphi _{a,\alpha ,-\frac{1}{2}}(x)=\frac{\cosh \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) }{\pi a I_0(\alpha )\sqrt{1-\left( \frac{x}{a}\right) ^2}}. \end{aligned}$$
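As a sanity check, both densities integrate to one: the substitution \(x=a\sin \theta \) removes the endpoint singularity of the hyperbolic cosine density and reduces the two integrals to the classical integral representations of \(I_1\) and \(I_0.\) A small numerical sketch (the series-based `bessel_i` and Simpson-rule helpers are our own scaffolding):

```python
import math

def bessel_i(nu, x, terms=300):
    # power series of I_nu (nu > -1, x > 0)
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x * x / 4.0) / (k * (k + nu))
        s += t
    return s

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * f(lo + k * h)
    return acc * h / 3.0

alpha = 3.0
i0, i1 = bessel_i(0.0, alpha), bessel_i(1.0, alpha)
# under x = a sin(theta) the factors of a cancel: the sinh density becomes
# sinh(alpha cos t) cos t / (pi I_1(alpha)) and the cosh density becomes
# cosh(alpha cos t) / (pi I_0(alpha)), both smooth on [-pi/2, pi/2]
mass_sinh = simpson(lambda t: math.sinh(alpha * math.cos(t)) * math.cos(t)
                    / (math.pi * i1), -math.pi / 2, math.pi / 2)
mass_cosh = simpson(lambda t: math.cosh(alpha * math.cos(t))
                    / (math.pi * i0), -math.pi / 2, math.pi / 2)
print(mass_sinh, mass_cosh)  # both close to 1
```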

It is important to note that the symmetric and generalized Kaiser–Bessel window function \(f_{a,\alpha ,\nu }\) for \(\nu =-\frac{1}{2}\) reduces to

$$\begin{aligned} f_{a,\alpha ,-\frac{1}{2}}(x)=\frac{\cosh \left( \alpha \sqrt{1 -\left( \frac{x}{a}\right) ^2}\right) }{\cosh (\alpha )\sqrt{1-\left( \frac{x}{a}\right) ^2}}, \end{aligned}$$

which resembles the hyperbolic cosine window function considered in [7]

$$\begin{aligned} w_c(x)=\frac{\cosh \left( \alpha \sqrt{1 -\left( \frac{x}{a}\right) ^2}\right) }{\cosh (\alpha )} \end{aligned}$$

with support \([-a,a].\)

3.4 Bounds for the Effective Variance and Sign of the Excess Kurtosis

The effective radius and variance of a probability distribution (see [29]) are defined via the second, third and fourth order moments and are analogous to characteristics used in statistics to describe frequency distributions. In the case of the generalized inverse Gaussian distribution, the knowledge of the effective radius and variance, obtained for example by means of remote sensing together with the aerosol optical depth, is very useful to calculate physical quantities such as the aerosol column mass loading per unit area (see [1]). Motivated by the papers [1, 29], in this subsection we introduce the effective radius and variance of the absolute moments of the Kaiser–Bessel distribution and we deduce some bounds for the effective variance of the absolute moments. It is worth mentioning that because we use absolute moments instead of moments, the properties of the effective variance of the absolute moments will be different from the properties of the usual effective variance involving moments. By using the notation \(\beta _{\nu ,m}={\text {E}}\left[ \left| X_{\nu }\right| ^{m}\right] ,\) we define the effective radius and variance for the Kaiser–Bessel distribution involving the \(m\hbox {th}\) absolute moments as follows

$$\begin{aligned}{} & {} r_{\nu ,a}(\alpha )=\frac{\beta _{\nu ,3}}{\beta _{\nu ,2}} =\frac{2a\sqrt{2}}{\sqrt{\pi \alpha }}\frac{I_{\nu +2}(\alpha )}{I_{\nu +\frac{3}{2}}(\alpha )}, \\{} & {} \sigma _{\nu }(\alpha )=\frac{\beta _{\nu ,2}\beta _{\nu ,4}}{\beta _{\nu ,3}^2}-1=\frac{3\pi }{8}\frac{I_{\nu +\frac{3}{2}} (\alpha )I_{\nu +\frac{5}{2}}(\alpha )}{I_{\nu +2}^2(\alpha )}-1. \end{aligned}$$

By using the above-mentioned result of Lorch, clearly the effective radius is a decreasing function of \(\nu \) on \(\left[ -\frac{3}{4},\infty \right) \) for each \(a>0\) and \(\alpha >0.\) Observe that the effective variance is independent of a. On the other hand, by following the discussion in Sect. 3.2 about the log-convexity of the absolute moments, we conclude that \(\sigma _{\nu }(\alpha )>0\) for all \(\alpha >0\) and \(\nu >-1.\) Moreover, according to Lorch [44, p. 79] the next general Turán type inequality is valid

$$\begin{aligned} T_{\nu ,\varepsilon }(x)=I_{\nu }^2(x)-I_{\nu -\varepsilon }(x)I_{\nu +\varepsilon }(x)=\frac{4}{\pi }\int _0^{\frac{\pi }{2}}I_{2\nu }(2x\cos \theta )\sin ^2(\varepsilon \theta ){\textrm{d}}\theta >0\qquad \end{aligned}$$
(3.9)

for all \(\nu >-\frac{1}{2},\) \(0<\varepsilon \le 1\) and \(x>0.\) Consequently, for each \(\nu >-\frac{5}{2}\) and \(\alpha >0\) we find that

$$\begin{aligned} -\frac{T_{\nu +2,\frac{1}{2}}(\alpha )}{I_{\nu +2}^2(\alpha )} =\frac{I_{\nu +\frac{3}{2}}(\alpha )I_{\nu +\frac{5}{2}}(\alpha )}{I_{\nu +2}^2(\alpha )}-1<0, \end{aligned}$$

that is, the effective variance satisfies the next inequality

$$\begin{aligned} \sigma _{\nu }(\alpha )=\frac{3\pi }{8}-1-\frac{3\pi }{8} \frac{T_{\nu +2,\frac{1}{2}}(\alpha )}{I_{\nu +2}^2(\alpha )}<\frac{3\pi }{8}-1\simeq 0.17809724509617{\cdots }. \end{aligned}$$

Note that according to [69] it is known that for \(\min \{\nu ,\mu \}>-2,\) \(\nu +\mu >-2\) and \(\nu ,\mu \ne -1\) the function

$$\begin{aligned} x\mapsto \frac{I_{\nu }(x)I_{\mu }(x)}{\left[ I_{\frac{\nu +\mu }{2}}(x)\right] ^2} \end{aligned}$$

is strictly increasing on \((0,\infty )\) and consequently, under the same conditions, the Turán type inequality

$$\begin{aligned} \frac{\Gamma ^2 \left( \frac{\nu +\mu }{2}+1\right) }{\Gamma (\nu +1)\Gamma (\mu +1)}<\frac{I_{\nu }(x)I_{\mu }(x)}{\left[ I_{\frac{\nu +\mu }{2}}(x)\right] ^2}<1\end{aligned}$$
(3.10)

is valid (see also [51, Theorem 3] and its proof for a different approach to (3.10)). This in turn implies that for all \(\nu >-\frac{5}{2}\) and \(\alpha >0\) we have

$$\begin{aligned} \ell _\nu =\frac{\Gamma ^2(\nu +3)}{\Gamma \left( \nu +\frac{5}{2}\right) \Gamma \left( \nu +\frac{7}{2}\right) }-1< -\frac{T_{\nu +2,\frac{1}{2}}(\alpha )}{I_{\nu +2}^2(\alpha )} =\frac{I_{\nu +\frac{3}{2}}(\alpha )I_{\nu +\frac{5}{2}}(\alpha )}{I_{\nu +2}^2(\alpha )}-1<0,\nonumber \\\end{aligned}$$
(3.11)

which leads to the next result

$$\begin{aligned} \frac{3\pi }{8}\ell _{\nu }+\frac{3\pi }{8}-1<\sigma _{\nu }(\alpha )<\frac{3\pi }{8}-1. \end{aligned}$$

Moreover, by using the general result in [38, Eq. (25)] for \(\varepsilon =\frac{1}{2},\) we arrive at

$$\begin{aligned} \ell _{\nu }<-\frac{T_{\nu +2,\frac{1}{2}}(\alpha )}{I_{\nu +2}^2(\alpha )}<\left( \frac{\alpha }{2}\right) ^{2\nu +4}\frac{\ell _{\nu }}{\Gamma ^2(\nu +3)}\frac{1}{I_{\nu +2}^2(\alpha )}<0 \end{aligned}$$

for all \(\nu >-\frac{5}{2}\) and \(\alpha >0,\) which clearly implies the next result.

Theorem 2

The effective variance of the Kaiser–Bessel distribution satisfies

$$\begin{aligned} \frac{3\pi }{8}\ell _{\nu }+\frac{3\pi }{8}-1<\sigma _{\nu }(\alpha )<\left( \frac{\alpha }{2}\right) ^{2\nu +4}\frac{\ell _{\nu }}{\Gamma ^2(\nu +3)} \frac{1}{I_{\nu +2}^2(\alpha )}+\frac{3\pi }{8}-1<\frac{3\pi }{8}-1 \end{aligned}$$

for all \(\nu >-\frac{5}{2}\) and \(\alpha >0,\) where \(\ell _{\nu }\) is described in (3.11).
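A numerical spot-check of the effective-variance bounds; note that, by the representation of \(\sigma _{\nu }(\alpha )\) in terms of \(T_{\nu +2,\frac{1}{2}}(\alpha )\) together with (3.11), the lower bound carries the factor \(\frac{3\pi }{8}\) in front of \(\ell _{\nu },\) and this form is sharp as \(\alpha \rightarrow 0.\) The series-based `bessel_i` helper and the test values \(\nu =0,\) \(\alpha =1.5\) are our own scaffolding:

```python
import math

def bessel_i(nu, x, terms=300):
    # power series of I_nu (nu > -1, x > 0)
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x * x / 4.0) / (k * (k + nu))
        s += t
    return s

def sigma(nu, alpha):
    # effective variance: (3 pi / 8) I_{nu+3/2} I_{nu+5/2} / I_{nu+2}^2 - 1
    return (3 * math.pi / 8 * bessel_i(nu + 1.5, alpha) * bessel_i(nu + 2.5, alpha)
            / bessel_i(nu + 2.0, alpha) ** 2 - 1)

def ell(nu):
    # the constant from (3.11)
    return math.gamma(nu + 3) ** 2 / (math.gamma(nu + 2.5) * math.gamma(nu + 3.5)) - 1

nu, alpha = 0.0, 1.5
c = 3 * math.pi / 8 - 1
lower = 3 * math.pi / 8 * ell(nu) + c
upper = ((alpha / 2) ** (2 * nu + 4) * ell(nu) / math.gamma(nu + 3) ** 2
         / bessel_i(nu + 2.0, alpha) ** 2 + c)
print(lower, sigma(nu, alpha), upper, c)  # lower < sigma < upper < 3*pi/8 - 1
```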

In other words, Kalmykov and Karp's result on the generalized Turánian in (3.9) gives in particular the same lower bound for the effective variance as the left-hand side of (3.10) does; however, their lower bound for the generalized Turánian gives a better upper bound for the effective variance.

It is also important to mention that Lorch's lower bound in (3.9) can be improved in a different way from that in [38, Example 1], and this in turn yields another upper bound for the effective variance. Namely, by using the well-known Jordan inequalities

$$\begin{aligned} \sin x \ge \tfrac{2}{\pi }x, \qquad \cos x \ge 1-\tfrac{2}{\pi }x, \qquad x \in \left[ 0, \tfrac{\pi }{2}\right] , \end{aligned}$$

and bearing in mind also the monotone increasing nature of the modified Bessel function of the first kind appearing in the integrand of (3.9), after suitable substitutions we conclude that

$$\begin{aligned} T_{\nu , \varepsilon }(x)\ge \frac{16\varepsilon ^2}{\pi ^3} \int _0^{\frac{\pi }{2}}\theta ^2I_{2\nu }\left( 2x\left( 1-\frac{2}{\pi }\theta \right) \right) {\textrm{d}}\theta =2\varepsilon ^2 \int _0^1 (1-u)^2 I_{2\nu }(2xu) {\textrm{d}}u, \end{aligned}$$

that is,

$$\begin{aligned} T_{\nu , \varepsilon }(x) \ge \frac{2 \varepsilon ^2 x^{2\nu }}{\Gamma (2\nu +1)} \cdot \varpi _{\nu +\frac{1}{2}}(x)>0,\end{aligned}$$
(3.12)

where

$$\begin{aligned} \varpi _{\rho }(x)= & {} \frac{1}{2\rho }{}_1F_2\left[ \left. \begin{array}{c} \rho \\ 2\rho ,\rho +1\end{array}\right| x^2\right] - \frac{1}{\rho +\frac{1}{2}}{}_1F_2\left[ \left. \begin{array}{c}\rho +\frac{1}{2} \\ 2\rho ,\rho +\frac{3}{2}\end{array}\right| x^2\right] \\{} & {} + \frac{1}{2\rho +2} {}_1F_2 \left[ \left. \begin{array}{c}\rho +1\\ 2\rho ,\rho +2\end{array}\right| x^2\right] , \end{aligned}$$

since the integral is positive for all \(\nu >-\frac{1}{2},\) \(x>0,\) \(0<\varepsilon \le 1,\) and where we applied three times the formula

$$\begin{aligned} \int _0^1 y^n I_{2\nu } (2xy)dy = \frac{x^{2\nu }}{(2\nu + n+1)\Gamma (2\nu +1)} {}_1F_2 \left[ \left. \begin{array}{c} \nu +\frac{n+1}{2} \\ 2\nu +1, \nu +\frac{n+3}{2} \end{array} \right| x^2 \right] \end{aligned}$$

with \(2\nu +n+1>0\) and \(n\in {\mathbb {N}}.\) By using the above improved lower bound for the generalized Turánian \(T_{\nu ,\varepsilon }(x)\) we obtain the next upper bound for the effective variance

$$\begin{aligned} \sigma _{\nu }(\alpha )\le -\frac{\alpha ^{2\nu +4}}{2\Gamma (2\nu +5)}\cdot \frac{\varpi _{\nu +\frac{5}{2}}(\alpha )}{I_{\nu +2}^2(\alpha )}+\frac{3\pi }{8}-1<\frac{3\pi }{8}-1, \end{aligned}$$

which holds for all \(\nu >-\frac{5}{2}\) and \(\alpha >0.\)
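Both (3.12) and the resulting effective-variance bound can be spot-checked numerically. In the sketch below the \({}_1F_2\) series, the `bessel_i` helper and the sample parameters are our own scaffolding:

```python
import math

def bessel_i(nu, x, terms=300):
    # power series of I_nu (nu > -1, x > 0)
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x * x / 4.0) / (k * (k + nu))
        s += t
    return s

def hyp1f2(a, b, c, z, terms=200):
    # hypergeometric series 1F2(a; b, c; z)
    t, s = 1.0, 1.0
    for k in range(terms):
        t *= (a + k) * z / ((b + k) * (c + k) * (k + 1))
        s += t
    return s

def varpi(rho, x):
    # the combination of three 1F2 values defining varpi_rho(x)
    z = x * x
    return (hyp1f2(rho, 2 * rho, rho + 1, z) / (2 * rho)
            - hyp1f2(rho + 0.5, 2 * rho, rho + 1.5, z) / (rho + 0.5)
            + hyp1f2(rho + 1, 2 * rho, rho + 2, z) / (2 * rho + 2))

# inequality (3.12) with nu = 1, eps = 1, x = 2
nu, eps, x = 1.0, 1.0, 2.0
turanian = bessel_i(nu, x) ** 2 - bessel_i(nu - eps, x) * bessel_i(nu + eps, x)
lower = 2 * eps ** 2 * x ** (2 * nu) * varpi(nu + 0.5, x) / math.gamma(2 * nu + 1)

# the effective-variance upper bound with nu = 0, alpha = 1.5
n2, al = 0.0, 1.5
sig = (3 * math.pi / 8 * bessel_i(n2 + 1.5, al) * bessel_i(n2 + 2.5, al)
       / bessel_i(n2 + 2.0, al) ** 2 - 1)
bound = (-al ** (2 * n2 + 4) * varpi(n2 + 2.5, al)
         / (2 * math.gamma(2 * n2 + 5) * bessel_i(n2 + 2.0, al) ** 2)
         + 3 * math.pi / 8 - 1)
print(turanian, lower, sig, bound)
```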

We also mention that, as an attractive by-product, on the same parameter range we arrive at a convexity-type inequality valid for contiguous neighbors of the hypergeometric function \({}_1F_2\)

$$\begin{aligned}{} & {} \frac{1}{\nu +\frac{1}{2}}{}_1F_2 \left[ \left. \begin{array}{c} \nu +\frac{1}{2} \\ 2\nu +1, \nu +\frac{3}{2} \end{array} \right| x^2\right] + \frac{1}{\nu +\frac{3}{2}}{}_1F_2\left[ \left. \begin{array}{c} \nu +\frac{3}{2} \\ 2\nu +1, \nu +\frac{5}{2} \end{array} \right| x^2\right] \\{} & {} \quad > \frac{2}{\nu +1} {}_1F_2\left[ \left. \begin{array}{c} \nu +1 \\ 2\nu +1, \nu +2 \end{array} \right| x^2\right] , \end{aligned}$$

and holds for all \(\nu > -\frac{1}{2}\) and \(x>0\).

Note that the excess kurtosis of the Kaiser–Bessel distribution is given by

$$\begin{aligned} \kappa =\frac{{\text {E}}[X^{4}]}{\left( {\text {E}}[X^{2}]\right) ^2}-3=\frac{\mu _{4,\nu }}{\mu _{2,\nu }^2}-3= 3\cdot \left[ \frac{I_{\nu +\frac{1}{2}}(\alpha )I_{\nu +\frac{5}{2}} (\alpha )}{I_{\nu +\frac{3}{2}}^2(\alpha )}-1\right] =-3\cdot \frac{T_{\nu +\frac{3}{2},1}(\alpha )}{I_{\nu +\frac{3}{2}}^2(\alpha )}, \end{aligned}$$

and in view of (3.12) we arrive at

$$\begin{aligned} \kappa =-3\cdot \frac{T_{\nu +\frac{3}{2},1}(\alpha )}{I_{\nu +\frac{3}{2}}^2(\alpha )}\le -\frac{6\alpha ^{2\nu +3}}{\Gamma (2\nu +4)}\cdot \frac{\varpi _{\nu +2}(\alpha )}{I_{\nu +\frac{3}{2}}^2(\alpha )}<0 \end{aligned}$$

for all \(a>0,\) \(\alpha >0\) and \(\nu >-2.\) This implies the following result.

Theorem 3

If \(a>0,\) \(\alpha >0\) and \(\nu >-1,\) then the Kaiser–Bessel distribution has negative excess kurtosis and thus it is a platykurtic or sub-Gaussian distribution. Since the Kaiser–Bessel distribution has compact support, it is not infinitely divisible in the classical sense.

The fact that the Kaiser–Bessel distribution is not infinitely divisible in the classical sense can also be deduced immediately from the fact that its excess kurtosis is strictly negative. In other words, if the Kaiser–Bessel distribution were infinitely divisible in the classical sense, then its excess kurtosis would be non-negative, according to [6, p. 171]. It would be of interest to verify whether the Kaiser–Bessel distribution is infinitely divisible in the free sense or in the monotone sense, and to evaluate its free divisibility indicator. This problem is motivated by the fact that the integer powers of Wigner's semicircle distribution are freely infinitely divisible; see [5] for more details.
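The strict negativity of the excess kurtosis is easy to confirm numerically over a grid of admissible parameters (the series-based helper and the parameter grid are our own scaffolding):

```python
import math

def bessel_i(nu, x, terms=400):
    # power series of I_nu (nu > -1, x > 0)
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x * x / 4.0) / (k * (k + nu))
        s += t
    return s

def excess_kurtosis(nu, alpha):
    # kappa = 3 [ I_{nu+1/2} I_{nu+5/2} / I_{nu+3/2}^2 - 1 ]
    return 3 * (bessel_i(nu + 0.5, alpha) * bessel_i(nu + 2.5, alpha)
                / bessel_i(nu + 1.5, alpha) ** 2 - 1)

for nu in (-0.5, 0.0, 1.0, 4.0):
    for alpha in (0.1, 1.0, 5.0, 20.0):
        print(nu, alpha, excess_kurtosis(nu, alpha))  # always negative
```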

3.5 Bounds for the Moments

By using the well-known Amos’ inequality (see for example [2] or [58])

$$\begin{aligned} \frac{I_{\nu +\frac{1}{2}}(\alpha )}{\alpha I_{\nu -\frac{1}{2}}(\alpha )}<\frac{1}{\nu +\sqrt{\alpha ^2+\nu ^2}}, \end{aligned}$$

which holds for all \(\nu \ge 0\) and \(\alpha >0,\) we find that

$$\begin{aligned} \frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{\alpha ^n I_{\nu +\frac{1}{2}}(\alpha )}= & {} \left[ \frac{I_{\nu +\frac{3}{2}}(\alpha )}{\alpha I_{\nu +\frac{1}{2}}(\alpha )} \right] \left[ \frac{I_{\nu +\frac{5}{2}}(\alpha )}{\alpha I_{\nu +\frac{3}{2}}(\alpha )}\right] \cdots \left[ \frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{\alpha I_{\nu +n-\frac{1}{2}}(\alpha )}\right] \\< & {} \prod _{k=1}^n\frac{1}{\nu +k+\sqrt{\alpha ^2+(\nu +k)^2}}<1 \end{aligned}$$

for all \(n\in {\mathbb {N}},\) \(\nu \ge -\frac{1}{2}\) and \(\alpha >0.\) This in turn implies that \(\mu _{2n,\nu }={\text {E}}[X^{2n}]<(2n-1)!!a^{2n}\) for each \(a>0,\) \(\alpha >0,\) \(\nu \ge -\frac{1}{2}\) and \(n\in {\mathbb {N}}.\)
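Amos' bound and the resulting moment bound can be sketched numerically as follows; the telescoping product mirrors the display above, and the helper plus the sample values \(\nu =\frac{1}{2},\) \(\alpha =2,\) \(a=\frac{3}{2},\) \(n=3\) are our own scaffolding:

```python
import math

def bessel_i(nu, x, terms=300):
    # power series of I_nu (nu > -1, x > 0)
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x * x / 4.0) / (k * (k + nu))
        s += t
    return s

nu, alpha, a, n = 0.5, 2.0, 1.5, 3
quotient = bessel_i(nu + n + 0.5, alpha) / (alpha ** n * bessel_i(nu + 0.5, alpha))
amos = math.prod(1.0 / (nu + k + math.sqrt(alpha ** 2 + (nu + k) ** 2))
                 for k in range(1, n + 1))     # telescoped Amos bound
dfact = math.prod(range(1, 2 * n, 2))          # (2n-1)!!
mu_even = dfact * a ** (2 * n) * quotient      # mu_{2n,nu}
print(quotient, amos, mu_even, dfact * a ** (2 * n))
```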

Moreover, in view of the well-known Mittag-Leffler expansion

$$\begin{aligned} \frac{I_{\nu +1}(x)}{xI_{\nu }(x)}=\sum _{n\ge 1}\frac{2}{x^2+j_{\nu ,n}^2}, \end{aligned}$$
(3.13)

where \(j_{\nu ,n}\) stands for the \(n\hbox {th}\) positive zero of the Bessel function of the first kind \(J_{\nu },\) we find that the function

$$\begin{aligned} \alpha \mapsto \frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{\alpha ^n I_{\nu +\frac{1}{2}}(\alpha )}=\left[ \frac{I_{\nu +\frac{3}{2}}(\alpha )}{\alpha I_{\nu +\frac{1}{2}}(\alpha )}\right] \left[ \frac{I_{\nu +\frac{5}{2}}(\alpha )}{\alpha I_{\nu +\frac{3}{2}}(\alpha )}\right] \cdots \left[ \frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{\alpha I_{\nu +n-\frac{1}{2}}(\alpha )}\right] \end{aligned}$$
(3.14)

is decreasing on \((0,\infty )\) for all \(\nu \ge -1\) and \(a>0\) as a product of n decreasing functions. Thus, we deduce that \(\alpha \mapsto \mu _{2n,\nu }={\text {E}}\left[ X^{2n}\right] \) is also decreasing on \((0,\infty )\) for all \(\nu >-1\) and \(a>0\), and this together with (3.8) implies that

$$\begin{aligned} \mu _{2n,\nu }={\text {E}}\left[ X^{2n}\right] <\frac{(2n-1)!!a^{2n}}{2^n\left( \nu +\frac{3}{2}\right) _n} \end{aligned}$$

for all \(n\in {\mathbb {N}},\) \(a>0,\) \(\alpha >0\) and \(\nu \ge -1.\)

Since the zeros \(j_{\nu ,n}\) for fixed \(n\in {\mathbb {N}}\) are increasing functions of \(\nu ,\) in view of the Mittag-Leffler expansion (3.13) it follows that the function

$$\begin{aligned} \nu \mapsto \frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{\alpha ^n I_{\nu +\frac{1}{2}}(\alpha )}=\left[ \frac{I_{\nu +\frac{3}{2}}(\alpha )}{\alpha I_{\nu +\frac{1}{2}}(\alpha )}\right] \left[ \frac{I_{\nu +\frac{5}{2}}(\alpha )}{\alpha I_{\nu +\frac{3}{2}}(\alpha )}\right] \cdots \left[ \frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{\alpha I_{\nu +n-\frac{1}{2}}(\alpha )}\right] \end{aligned}$$

is decreasing on \((-1,\infty )\) for all \(a>0\) and \(\alpha >0\) as a product of n decreasing functions. Thus, we find that \(\nu \mapsto \mu _{2n,\nu }={\text {E}}\left[ X^{2n}\right] \) is also decreasing on \((-1,\infty )\) for all \(a>0\) and \(\alpha >0,\) and this in turn implies that

$$\begin{aligned} \mu _{2n,\nu }={\text {E}}\left[ X^{2n}\right]<(2n-1)!!a^{2n} \frac{I_{n-\frac{1}{2}}(\alpha )}{\alpha ^nI_{-\frac{1}{2}}(\alpha )} <\frac{(2n-1)!!a^{2n}}{2^n\left( \frac{1}{2}\right) _n}=a^{2n} \end{aligned}$$

for all \(n\in {\mathbb {N}},\) \(a>0,\) \(\alpha >0\) and \(\nu >-1,\) where in the last inequality we used the monotonicity of (3.14) for \(\nu =-1\) and the relation

$$\begin{aligned} \Gamma \left( n+\frac{1}{2}\right) =\frac{(2n-1)!!}{2^n}\sqrt{\pi }. \end{aligned}$$

3.6 Parameter Estimation by Method of Moments

To determine the values of the unknown parameters of the Kaiser–Bessel distribution \(\textrm{KB}(a, \alpha , \nu )\) from the knowledge of the values of three consecutively indexed absolute moments \(\beta _{\nu ,m}, \beta _{\nu ,m+1}, \beta _{\nu ,m+2},\) it is enough to consider, for some fixed m, the related system of their formulae

$$\begin{aligned} {\left\{ \begin{array}{ll} \beta _{\nu ,m} = \displaystyle \frac{(a\sqrt{2})^m \Gamma (\frac{m+1}{2})}{\sqrt{\pi }\,\sqrt{\alpha }^m}\cdot \frac{I_{\nu +\frac{m+1}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}\\ \beta _{\nu ,m+1} = \displaystyle \frac{(a\sqrt{2})^{m+1} \Gamma (\frac{m}{2}+1)}{\sqrt{\pi }\,\sqrt{\alpha }^{m+1}}\cdot \frac{I_{\nu +\frac{m}{2}+1}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}\\ \beta _{\nu ,m+2} = \displaystyle \frac{(a\sqrt{2})^{m+2} \Gamma (\frac{m+3}{2})}{\sqrt{\pi }\,\sqrt{\alpha }^{m+2}}\cdot \frac{I_{\nu +\frac{m+3}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )} \end{array}\right. }. \end{aligned}$$
(3.15)

Obviously, it is necessary to compute a, \(\alpha ,\) \(\nu \) from the system (3.15) numerically, since an explicit (and even then only partial) solution can be found only in exceptional cases, like for fixed \(\nu \), where after some routine calculation the above system reduces to the following a-free equation involving Turánian quotients

$$\begin{aligned} \frac{I_{\nu +\frac{m+1}{2}}(\alpha )\, I_{\nu +\frac{m+3}{2}}(\alpha )}{I_{\nu +\frac{m}{2}+1}^2(\alpha )} = \frac{\Gamma ^2(\frac{m}{2}+1)}{\Gamma (\frac{m+1}{2}) \Gamma (\frac{m+3}{2})}\, \frac{\beta _{\nu ,m} \beta _{\nu ,m+2}}{\beta _{\nu ,m+1}^2}. \end{aligned}$$

Now, solving this equation with respect to \(\alpha \), routine steps lead to the value of a.

Another approach to solving (3.15) would be to consider the corresponding quotients of modified Bessel functions of the first kind in the above equations and to use the fact that every quotient is in fact a power series (whose coefficients can be evaluated by using the Cauchy product of two power series), then to apply the Lagrange inversion theorem to each equation and solve the resulting system of equations. However, this approach looks complicated and difficult to implement numerically.

However, to avoid the numerical equation solving, we can proceed by another method, fixing the values of five consecutive odd-order absolute moments \(\beta _{\nu ,j}\) with \(j\in \{1,3,5,7,9\}\). Since this system looks difficult to solve directly, we are going to propose a new approach of the method of moments for this special situation, based on the recurrence relation for the modified Bessel functions of the first kind. To this end, observe that since

$$\begin{aligned}I_{\nu +n}(\alpha ) = \frac{\sqrt{\pi }\left( \sqrt{\alpha }\right) ^{2n-1}}{\left( a\sqrt{2}\right) ^{2n-1} \Gamma (n)}I_{\nu +\frac{1}{2}}(\alpha ) \beta _{\nu ,2n-1},\end{aligned}$$

if we use the three term recurrence relation (3.1) of the modified Bessel functions of the first kind for \(\nu +n\) instead of \(\nu ,\) then we arrive at the following recurrence relation for the absolute moments of odd order of the Kaiser–Bessel distribution

$$\begin{aligned} \alpha ^2\beta _{\nu ,2n+3} = 4n(n+1)a^4\beta _{\nu ,2n-1}-4(n+1)(\nu +n+1)a^2\beta _{\nu ,2n+1}. \end{aligned}$$
(3.16)
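The recurrence (3.16) is easy to confirm numerically from the closed form of the odd absolute moments in (3.15); in the sketch below the series-based `bessel_i` helper and the arbitrary test values \(\nu =\frac{1}{2},\) \(\alpha =3,\) \(a=2\) are our own scaffolding:

```python
import math

def bessel_i(nu, x, terms=300):
    # power series of I_nu (nu > -1, x > 0)
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x * x / 4.0) / (k * (k + nu))
        s += t
    return s

def abs_moment(nu, alpha, a, m):
    # beta_{nu,m} = (a sqrt 2)^m Gamma((m+1)/2) I_{nu+(m+1)/2}(alpha)
    #               / (sqrt(pi) alpha^{m/2} I_{nu+1/2}(alpha))
    return ((a * math.sqrt(2)) ** m * math.gamma((m + 1) / 2)
            / (math.sqrt(math.pi) * alpha ** (m / 2))
            * bessel_i(nu + (m + 1) / 2, alpha) / bessel_i(nu + 0.5, alpha))

nu, alpha, a = 0.5, 3.0, 2.0
for n in (1, 2, 3):
    lhs = alpha ** 2 * abs_moment(nu, alpha, a, 2 * n + 3)
    rhs = (4 * n * (n + 1) * a ** 4 * abs_moment(nu, alpha, a, 2 * n - 1)
           - 4 * (n + 1) * (nu + n + 1) * a ** 2 * abs_moment(nu, alpha, a, 2 * n + 1))
    print(n, lhs, rhs)  # the two sides agree
```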

Thus, if we consider the particular cases when \(n\in \{1,2,3\},\) we find that

$$\begin{aligned} {\left\{ \begin{array}{ll} 8 \beta _{\nu ,1} a^4 - 8 \beta _{\nu ,3}(\nu +2)a^2 - \beta _{\nu ,5} \alpha ^2 = 0\\ 24 \beta _{\nu ,3} a^4 - 12 \beta _{\nu ,5}(\nu +3)a^2 - \beta _{\nu ,7} \alpha ^2 = 0\\ 48 \beta _{\nu ,5} a^4 - 16 \beta _{\nu ,7}(\nu +4)a^2 - \beta _{\nu ,9} \alpha ^2 = 0 \end{array}\right. }, \end{aligned}$$

which is equivalent to the non-homogeneous linear system

$$\begin{aligned} {\left\{ \begin{array}{ll} 8 \beta _{\nu ,1} x - 8 \beta _{\nu ,3} y - \beta _{\nu ,5} z = 16 \beta _{\nu ,3}\\ 24 \beta _{\nu ,3} x - 12 \beta _{\nu ,5} y - \beta _{\nu ,7} z = 36 \beta _{\nu ,5}\\ 48 \beta _{\nu ,5} x - 16 \beta _{\nu ,7} y - \beta _{\nu ,9} z = 64 \beta _{\nu ,7} \end{array}\right. }, \end{aligned}$$
(3.17)

where \(x= a^2,\) \(y= \nu ,\) \(z =\alpha ^2/a^2\). Solving this system we obtain the following explicit expressions for the parameters a, \(\alpha \) and \(\nu \)

$$\begin{aligned}&a = \displaystyle \sqrt{\frac{3{\beta }_3 {\beta }_5 {\beta }_9-8 {\beta }_3{\beta }_7^2+6{\beta }_5^2{\beta }_7}{-3{\beta }_5 \left( {\beta }_1{\beta }_9+8{\beta }_3{\beta }_7\right) +4{\beta }_1{\beta }_7^2+6{\beta }_3^2{\beta }_9 + 18{\beta }_5^3}}, \\&\alpha = \displaystyle \frac{4\sqrt{3\left( {\beta }_1{\beta }_5{\beta }_7-4{\beta }_3^2{\beta }_7 +3{\beta }_3{\beta }_5^2\right) \left( 3{\beta }_3 {\beta }_5{\beta }_9-8{\beta }_3{\beta }_7^2+6{\beta }_5^2{\beta }_7\right) }}{\left| -3{\beta }_5\left( {\beta }_1{\beta }_9 + 8{\beta }_3{\beta }_7\right) +4{\beta }_1{\beta }_7^2+6{\beta }_3^2{\beta }_9+18{\beta }_5^3\right| }, \\&\nu = \displaystyle \frac{-4\left( 4{\beta }_1{\beta }_7^2+3{\beta }_3^2{\beta }_9\right) +9{\beta }_5({\beta }_1{\beta }_9+8{\beta }_3{\beta }_5) - 54{\beta }_5^3}{-3{\beta }_5\left( {\beta }_1{\beta }_9+8{\beta }_3{\beta }_7\right) +4{\beta }_1{\beta }_7^2+6{\beta }_3^2{\beta }_9 + 18{\beta }_5^3}, \end{aligned}$$

provided that the denominator in the above fractions does not vanish, where everywhere the shorthand notation \(\beta _j = \beta _{\nu ,j}\) is used. It is worth mentioning that the system (3.17) is not overdetermined, because there are no more equations than unknowns; and although five theoretical absolute moments are used instead of three, there is no inconsistency in the system, since these moments are connected to each other according to the recurrence relation (3.16).
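The procedure can be cross-checked by generating the theoretical odd absolute moments for known parameters, solving the linear system (3.17) (here by Cramer's rule), and recovering a, \(\alpha \) and \(\nu .\) The helpers and the test values \(a=2,\) \(\alpha =3,\) \(\nu =\frac{1}{2}\) below are our own scaffolding:

```python
import math

def bessel_i(nu, x, terms=300):
    # power series of I_nu (nu > -1, x > 0)
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x * x / 4.0) / (k * (k + nu))
        s += t
    return s

def abs_moment(nu, alpha, a, m):
    # beta_{nu,m} from (3.15)
    return ((a * math.sqrt(2)) ** m * math.gamma((m + 1) / 2)
            / (math.sqrt(math.pi) * alpha ** (m / 2))
            * bessel_i(nu + (m + 1) / 2, alpha) / bessel_i(nu + 0.5, alpha))

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def column_swap(M, col, v):
    # Cramer's rule: replace one column by the right-hand side
    return [[v[i] if j == col else M[i][j] for j in range(3)] for i in range(3)]

a0, alpha0, nu0 = 2.0, 3.0, 0.5
b = {j: abs_moment(nu0, alpha0, a0, j) for j in (1, 3, 5, 7, 9)}
A = [[8 * b[1], -8 * b[3], -b[5]],
     [24 * b[3], -12 * b[5], -b[7]],
     [48 * b[5], -16 * b[7], -b[9]]]
rhs = [16 * b[3], 36 * b[5], 64 * b[7]]
d = det3(A)
x = det3(column_swap(A, 0, rhs)) / d  # a^2
y = det3(column_swap(A, 1, rhs)) / d  # nu
z = det3(column_swap(A, 2, rhs)) / d  # alpha^2 / a^2
print(math.sqrt(x), math.sqrt(x * z), y)  # recovers a, alpha, nu
```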

In the classical method of moments approach the parameters of a probability distribution are estimated by matching the empirical moments of the sample with those of the corresponding probability distribution. The number of moments required corresponds to the number of unknown parameters of the distribution. Application of this method is straightforward, as closed-form expressions for the moments can be readily derived for most commonly used distributions. Now, let \(X_1,X_2,\ldots ,X_n\) be independent and identically distributed Kaiser–Bessel random variables with unknown parameters a, \(\alpha \) and \(\nu ,\) and suppose that for some \(k\in {\mathbb {N}}\) the theoretical absolute moment \(\beta _{\nu ,k}\) is estimated by the empirical absolute sample moment

$$\begin{aligned} {\overline{\beta }}_k = \frac{1}{n} \sum _{j=1}^n\left| X_j\right| ^k, \end{aligned}$$

where, in the case of a numerical sample, the realizations \(x_j\) replace the random variables \(X_j\) in the statistics \({\overline{\beta }}_k\). These considerations lead to the counterpart of the system (3.17), that is,

$$\begin{aligned} {\left\{ \begin{array}{ll} 8 {\overline{\beta }}_{1} a^4 - 8 {\overline{\beta }}_{3}(\nu +2)a^2 - {\overline{\beta }}_{5} \alpha ^2 = 0\\ 24 {\overline{\beta }}_{3} a^4 - 12 {\overline{\beta }}_{5}(\nu +3)a^2 - {\overline{\beta }}_{7} \alpha ^2 = 0\\ 48 {\overline{\beta }}_{5} a^4 - 16 {\overline{\beta }}_{7}(\nu +4)a^2 - {\overline{\beta }}_{9} \alpha ^2 = 0 \end{array}\right. }, \end{aligned}$$

which leads to the following estimates of the parameters a, \(\alpha \) and \(\nu \)

$$\begin{aligned} {\overline{a}}&= \sqrt{\frac{3{\overline{\beta }}_3{\overline{\beta }}_5{\overline{\beta }}_9 -8{\overline{\beta }}_3{\overline{\beta }}_7^2 + 6{\overline{\beta }}_5^2{\overline{\beta }}_7}{-3{\overline{\beta }}_5 \left( {\overline{\beta }}_1{\overline{\beta }}_9 + 8{\overline{\beta }}_3{\overline{\beta }}_7\right) +4{\overline{\beta }}_1{\overline{\beta }}_7^2 + 6{\overline{\beta }}_3^2{\overline{\beta }}_9+18{\overline{\beta }}_5^3}}, \\ {\overline{\alpha }}&= \frac{4\sqrt{3\left( {\overline{\beta }}_1{\overline{\beta }}_5{\overline{\beta }}_7 - 4{\overline{\beta }}_3^2{\overline{\beta }}_7+3{\overline{\beta }}_3{\overline{\beta }}_5^2\right) \left( 3{\overline{\beta }}_3{\overline{\beta }}_5{\overline{\beta }}_9-8{\overline{\beta }}_3{\overline{\beta }}_7^2 + 6{\overline{\beta }}_5^2{\overline{\beta }}_7\right) }}{\left| -3{\overline{\beta }}_5\left( {\overline{\beta }}_1 {\overline{\beta }}_9+8{\overline{\beta }}_3{\overline{\beta }}_7\right) +4{\overline{\beta }}_1{\overline{\beta }}_7^2 + 6{\overline{\beta }}_3^2{\overline{\beta }}_9+18{\overline{\beta }}_5^3\right| }, \\ {\overline{\nu }}&= \frac{-4\left( 4{\overline{\beta }}_1{\overline{\beta }}_7^2+3{\overline{\beta }}_3^2{\overline{\beta }}_9\right) + 9{\overline{\beta }}_5\left( {\overline{\beta }}_1{\overline{\beta }}_9 + 8{\overline{\beta }}_3 {\overline{\beta }}_5\right) -54{\overline{\beta }}_5^3}{-3{\overline{\beta }}_5\left( {\overline{\beta }}_1 {\overline{\beta }}_9+8{\overline{\beta }}_3{\overline{\beta }}_7\right) +4{\overline{\beta }}_1{\overline{\beta }}_7^2 + 6{\overline{\beta }}_3^2{\overline{\beta }}_9+18{\overline{\beta }}_5^3}, \end{aligned}$$

provided that the denominator in the above fractions does not vanish.

Finally, note that we can clearly use the same procedure with the first five even-order absolute moments \(\beta _{\nu ,j},\) \(j\in \{2,4,6,8,10\}\) (which in fact coincide with the raw moments \(\mu _{j,\nu },\) \(j\in \{2,4,6,8,10\},\) of the same order). Namely, by using again the three term recurrence relation (3.1) we find that

$$\begin{aligned} \alpha ^2\beta _{\nu ,2n+4}=(2n+1)(2n+3)a^4\beta _{\nu ,2n}-(2n+3) (2\nu +2n+3)a^2\beta _{\nu ,2n+2}, \end{aligned}$$

and if we consider the particular cases when \(n\in \{1,2,3\},\) we obtain that

$$\begin{aligned} {\left\{ \begin{array}{ll} 15 \beta _{\nu ,2} a^4 - 5 \beta _{\nu ,4} (2\nu +5)a^2 - \beta _{\nu ,6} \alpha ^2 = 0\\ 35 \beta _{\nu ,4} a^4 - 7 \beta _{\nu ,6} (2\nu +7)a^2 - \beta _{\nu ,8} \alpha ^2 = 0\\ 63 \beta _{\nu ,6} a^4- 9 \beta _{\nu ,8} (2\nu +9)a^2 - \beta _{\nu ,10} \alpha ^2 = 0 \end{array}\right. }, \end{aligned}$$

which, in terms of the unknowns \(x=a^2,\) \(y=\nu \) and \(z=\alpha ^2/a^2,\) is equivalent to the following linear system

$$\begin{aligned} {\left\{ \begin{array}{ll} 15 \beta _{\nu ,2} x - 10 \beta _{\nu ,4} y - \beta _{\nu ,6} z = 25 \beta _{\nu ,4}\\ 35 \beta _{\nu ,4} x - 14 \beta _{\nu ,6} y - \beta _{\nu ,8} z = 49 \beta _{\nu ,6}\\ 63 \beta _{\nu ,6} x- 18 \beta _{\nu ,8} y - \beta _{\nu ,10} z = 81 \beta _{\nu ,8} \end{array}\right. }. \end{aligned}$$

The desired values of the parameters are

$$\begin{aligned} a&= \displaystyle \sqrt{\frac{2(35\beta _{10} \beta _4 \beta _6-90 \beta _4 \beta _8^2 + 63 \beta _6^2\beta _8)}{5(27\beta _2\beta _8^2 +35\beta _{10}\beta _4^2) -105\beta _6(\beta _{10}\beta _2+6\beta _4\beta _8)+441\beta _6^3}},\\ \alpha&= \displaystyle \frac{6\sqrt{35(35\beta _{10} \beta _4 \beta _6-90 \beta _4 \beta _8^2 + 63 \beta _6^2\beta _8)(3\beta _2\beta _6\beta _8-10\beta _4^2 \beta _8+7\beta _4\beta _6^2)}}{\left| {5(27\beta _2\beta _8^2 +35\beta _{10}\beta _4^2) -105\beta _6(\beta _{10}\beta _2 + 6\beta _4\beta _8)+441\beta _6^3}\right| },\\ \nu&= \displaystyle \frac{735\beta _6(\beta _{10}\beta _2+6\beta _4\beta _8)-1215\beta _2\beta _8^2-875\beta _{10}\beta _4^2-3087\beta _6^3}{10(27\beta _2\beta _8^2 +35\beta _{10}\beta _4^2) -210 \beta _6(\beta _{10}\beta _2 + 6\beta _4\beta _8)+882\beta _6^3}, \end{aligned}$$

provided that the denominator in the above fractions does not vanish. Moreover, when we have no insight into the values of the theoretical absolute moments \(\beta _{\nu ,2j},\) \(j\in \{1,2,3,4,5\},\) but a statistical sample from a \(\textrm{KB}(a, \alpha , \nu )\)-distributed population is at our disposal, we estimate the unknown distributional parameters by the following method of moments estimators

$$\begin{aligned} {\overline{a}}&= \displaystyle \sqrt{\frac{2(35{\overline{\beta }}_{10} {\overline{\beta }}_4 {\overline{\beta }}_6-90 {\overline{\beta }}_4 {\overline{\beta }}_8^2 + 63 {\overline{\beta }}_6^2{\overline{\beta }}_8)}{5(27{\overline{\beta }}_2{\overline{\beta }}_8^2 + 35{\overline{\beta }}_{10}{\overline{\beta }}_4^2) - 105{\overline{\beta }}_6({\overline{\beta }}_{10}{\overline{\beta }}_2 + 6{\overline{\beta }}_4{\overline{\beta }}_8) + 441{\overline{\beta }}_6^3}}, \\ {\overline{\alpha }}&= \displaystyle \frac{6\sqrt{35(35{\overline{\beta }}_{10} {\overline{\beta }}_4 {\overline{\beta }}_6-90 {\overline{\beta }}_4 {\overline{\beta }}_8^2 + 63{\overline{\beta }}_6^2{\overline{\beta }}_8)(3{\overline{\beta }}_2{\overline{\beta }}_6{\overline{\beta }}_8 - 10{\overline{\beta }}_4^2 {\overline{\beta }}_8+7{\overline{\beta }}_4{\overline{\beta }}_6^2)}}{\left| {5(27{\overline{\beta }}_2 {\overline{\beta }}_8^2 +35{\overline{\beta }}_{10}{\overline{\beta }}_4^2) -105{\overline{\beta }}_6({\overline{\beta }}_{10} {\overline{\beta }}_2 + 6{\overline{\beta }}_4{\overline{\beta }}_8)+441{\overline{\beta }}_6^3}\right| }, \\ {\overline{\nu }}&= \displaystyle \frac{735{\overline{\beta }}_6({\overline{\beta }}_{10}{\overline{\beta }}_2+6{\overline{\beta }}_4{\overline{\beta }}_8) - 1215{\overline{\beta }}_2{\overline{\beta }}_8^2-875{\overline{\beta }}_{10}{\overline{\beta }}_4^2-3087{\overline{\beta }}_6^3}{10(27{\overline{\beta }}_2{\overline{\beta }}_8^2 +35{\overline{\beta }}_{10}{\overline{\beta }}_4^2) - 210 {\overline{\beta }}_6({\overline{\beta }}_{10}{\overline{\beta }}_2 + 6{\overline{\beta }}_4{\overline{\beta }}_8) + 882{\overline{\beta }}_6^3}, \end{aligned}$$

provided that the denominator in the above fractions does not vanish. We will address the probabilistic/statistical efficiency analysis of these estimators in a separate work.
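As a sanity check, the above linear system can be solved exactly with rational arithmetic and compared with the closed-form expressions for \(a^2\) and \(\nu \). The sketch below uses arbitrary positive rationals in place of the even absolute moments, chosen only to test the algebra (they are not moments of an actual Kaiser–Bessel law); all function names are our own.

```python
from fractions import Fraction as F

# Arbitrary positive rationals standing in for beta_2, ..., beta_10.
b2, b4, b6, b8, b10 = map(F, (3, 7, 19, 61, 217))

# The 3x3 system in x = a^2, y = nu, z = alpha^2/a^2, solved by Cramer's rule.
A = [[15 * b2, -10 * b4, -b6],
     [35 * b4, -14 * b6, -b8],
     [63 * b6, -18 * b8, -b10]]
rhs = [25 * b4, 49 * b6, 81 * b8]

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def col_replaced(j):
    # Copy of A with column j replaced by the right-hand side.
    return [[rhs[i] if k == j else A[i][k] for k in range(3)] for i in range(3)]

D = det3(A)
x, y, z = (det3(col_replaced(j)) / D for j in range(3))

# Closed-form method of moments solution for a^2 = x and nu = y.
den = 5 * (27 * b2 * b8**2 + 35 * b10 * b4**2) \
    - 105 * b6 * (b10 * b2 + 6 * b4 * b8) + 441 * b6**3
a2 = 2 * (35 * b10 * b4 * b6 - 90 * b4 * b8**2 + 63 * b6**2 * b8) / den
nu = (735 * b6 * (b10 * b2 + 6 * b4 * b8)
      - 1215 * b2 * b8**2 - 875 * b10 * b4**2 - 3087 * b6**3) / (2 * den)

print(x == a2, y == nu)  # exact rational comparison
```

Because `Fraction` arithmetic is exact, the comparison is an identity check of the algebra rather than a floating-point approximation.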

Fig. 1

The graph of the probability density function of the Kaiser–Bessel distribution for \(a=2,\) \(\nu =2\) and \(\alpha \in \{2,3,\ldots ,11\}\)

4 Unimodality, Monotonicity, Convexity, Log-Concavity and Geometrical Concavity of the Kaiser–Bessel Probability Density Function

In this section our aim is to present a detailed analysis of the probability density function of the Kaiser–Bessel distribution. We show that the distribution is unimodal and belongs to the family of log-concave and geometrically concave distributions, and we also study the monotonicity (and convexity) of the probability density function with respect to the argument x as well as with respect to the parameters a, \(\alpha \) and \(\nu \). In this analytic study we use some known inequalities for the logarithmic derivative of the modified Bessel function of the first kind as well as some Turán type inequalities. We also use a formula for the product of two modified Bessel functions of the first kind of different order and argument; this formula is one of the key tools in the proofs of this section. Because the monotonicity and convexity properties of the probability density function with respect to the argument and with respect to the parameters are interrelated, we present these results in subsections whose order follows these connections. In addition, by using the classical rejection method we present two algorithms for sampling independent continuous Kaiser–Bessel distributed random variables. Several figures illustrate the main results of this section.

4.1 Unimodality, Monotonicity with Respect to x

To show that the Kaiser–Bessel distribution is unimodal we use the monotonicity of \(s\mapsto s^{\nu }I_{\nu }(\alpha s).\) Namely, by using the recurrence relation

$$\begin{aligned} \frac{\partial }{\partial s}\left[ (\alpha s)^{\nu }I_{\nu }(\alpha s)\right] =\alpha \cdot (\alpha s)^{\nu }I_{\nu -1}(\alpha s) \end{aligned}$$

we obtain that for each \(\alpha >0\) and \(\nu >-1\) the function \(s\mapsto s^{\nu }I_{\nu }(\alpha s)\) is increasing on \((0,\infty )\), and hence also on (0, 1]. On the other hand, by using the notations

$$\begin{aligned} \omega _{a,\alpha ,\nu }(x)=\frac{\sqrt{\frac{\alpha }{2 \pi }}}{a I_{\nu +\frac{1}{2}}(\alpha )}\cdot x^{\nu }I_{\nu }(\alpha x)\qquad \text{ and } \qquad g_a(x)=\sqrt{1-\left( \frac{x}{a}\right) ^2}, \end{aligned}$$

we arrive at

$$\begin{aligned} \varphi _{a,\alpha ,\nu }'(x)=\left( \omega _{a,\alpha ,\nu }(g_a(x))\right) '=\omega _{a,\alpha ,\nu }'(g_a(x))\cdot g_a'(x) \end{aligned}$$

and since

$$\begin{aligned} g_a'(x)=\frac{-x}{a^2\sqrt{1-\left( \frac{x}{a}\right) ^2}} \end{aligned}$$

this becomes

$$\begin{aligned} \varphi _{a,\alpha ,\nu }'(x)=-\frac{\alpha \sqrt{\frac{\alpha }{2 \pi }}\cdot x}{a^3 I_{\nu +\frac{1}{2}}(\alpha )}\cdot \left( \sqrt{1- \left( \frac{x}{a}\right) ^2}\right) ^{\nu -1}I_{\nu -1}\left( \alpha \sqrt{1 -\left( \frac{x}{a}\right) ^2}\right) . \end{aligned}$$
(4.1)
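Formula (4.1) can be compared against a finite-difference derivative of the density. Below is a minimal sketch; the helper names `bessel_i`, `kb_pdf`, `kb_pdf_prime` and the parameter values are our own choices, and \(I_{\nu }\) is computed from its ascending series.

```python
import math

def bessel_i(nu, x, terms=40):
    # Ascending series I_nu(x) = sum_k (x/2)^(2k+nu) / (k! * Gamma(k+nu+1)).
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

def kb_pdf(x, a, alpha, nu):
    # Kaiser-Bessel density on (-a, a).
    g = math.sqrt(1 - (x / a) ** 2)
    norm = math.sqrt(alpha / (2 * math.pi)) / (a * bessel_i(nu + 0.5, alpha))
    return norm * g ** nu * bessel_i(nu, alpha * g)

def kb_pdf_prime(x, a, alpha, nu):
    # Closed-form derivative (4.1).
    g = math.sqrt(1 - (x / a) ** 2)
    c = alpha * math.sqrt(alpha / (2 * math.pi)) / (a ** 3 * bessel_i(nu + 0.5, alpha))
    return -c * x * g ** (nu - 1) * bessel_i(nu - 1, alpha * g)

a, alpha, nu, x, h = 2.0, 3.0, 2.0, 0.7, 1e-6
numeric = (kb_pdf(x + h, a, alpha, nu) - kb_pdf(x - h, a, alpha, nu)) / (2 * h)
closed = kb_pdf_prime(x, a, alpha, nu)
print(abs(closed - numeric))
```

The central difference and the closed form agree to roughly the accuracy of the step size, which supports the differentiation step above.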

Thus, we have the following result.

Theorem 4

The probability density function of the Kaiser–Bessel distribution \(\varphi _{a,\alpha ,\nu }\) is increasing on \((-a,0]\) and decreasing on [0, a) for all \(a>0,\) \(\alpha >0\) and \(\nu >-1.\) In other words, the distribution is unimodal with mode at zero, and, as in the case of other symmetric unimodal distributions, the mean \({\text {E}}[X]=0\) and the mode coincide. Moreover, the median of the distribution is also zero because of the symmetry of the support.

Figure 1, which resembles a beautifully colored pashmina, illustrates the monotonicity behavior of the probability density function for fixed values of a and \(\nu ,\) and for different values of \(\alpha .\)
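Since the mode is at zero, the density is bounded by its value at the origin, which already permits a simple uniform-envelope rejection sampler on \((-a,a)\). The sketch below is a generic construction based on that observation, not one of the two algorithms announced earlier; all names and parameter values are illustrative, and \(I_{\nu }\) is computed from its ascending series.

```python
import math
import random

def bessel_i(nu, x, terms=40):
    # Ascending series for the modified Bessel function of the first kind.
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

def kb_pdf(x, a, alpha, nu):
    # Kaiser-Bessel density on (-a, a).
    if abs(x) >= a:
        return 0.0
    g = math.sqrt(1 - (x / a) ** 2)
    return (math.sqrt(alpha / (2 * math.pi)) / (a * bessel_i(nu + 0.5, alpha))
            * g ** nu * bessel_i(nu, alpha * g))

def kb_sample(n, a, alpha, nu, rng):
    # Uniform-envelope rejection: the density attains its maximum at the mode x = 0.
    peak = kb_pdf(0.0, a, alpha, nu)
    out = []
    while len(out) < n:
        x = rng.uniform(-a, a)
        if rng.uniform(0.0, peak) <= kb_pdf(x, a, alpha, nu):
            out.append(x)
    return out

rng = random.Random(12345)
sample = kb_sample(2000, 2.0, 3.0, 2.0, rng)
mean = sum(sample) / len(sample)
print(len(sample), mean)
```

The acceptance rate of this flat envelope degrades for sharply peaked densities (large \(\alpha \)), which is precisely why tailored rejection schemes are of interest.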

Fig. 2

The graph of the logarithm of the probability density function of the Kaiser–Bessel distribution for \(a=2,\) \(\nu =2\) and \(\alpha \in \{2,3,\ldots ,11\}\)

4.2 Log-Concavity and Geometrical Concavity with Respect to x

In view of (4.1), the logarithmic derivative of the probability density function of the Kaiser–Bessel distribution becomes

$$\begin{aligned} Q_{a,\alpha ,\nu }(x)= & {} \frac{\varphi _{a,\alpha ,\nu }'(x)}{\varphi _{a,\alpha ,\nu }(x)}=-\frac{\alpha x}{a^2\sqrt{1-\left( \frac{x}{a}\right) ^2}}\cdot \frac{I_{\nu -1} \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) }{I_{\nu }\left( \alpha \sqrt{1 -\left( \frac{x}{a}\right) ^2}\right) }\\= & {} -\frac{\alpha x}{a^2g_a(x)}\cdot \frac{I_{\nu -1}(\alpha g_a(x))}{I_{\nu }(\alpha g_a(x))}. \end{aligned}$$

Now, consider the auxiliary function \(S_{a,\alpha ,\nu }:(0,\alpha ]\rightarrow (-\infty ,0),\) defined by

$$\begin{aligned} S_{a,\alpha ,\nu }(x)=-\frac{\alpha \sqrt{\alpha ^2-x^2}}{ax}\cdot \frac{I_{\nu -1}(x)}{I_{\nu }(x)}. \end{aligned}$$

The function \(x\mapsto s_{a,\alpha }(x)=-\displaystyle \frac{\alpha \sqrt{\alpha ^2-x^2}}{ax}\) is strictly increasing on \((0,\alpha )\) for all \(a>0\) and \(\alpha >0\) since its derivative is strictly positive therein, that is

$$\begin{aligned} s_{a,\alpha }'(x)=\frac{\alpha ^3}{ax^2\sqrt{\alpha ^2-x^2}}>0 \end{aligned}$$

for each \(a>0,\) \(\alpha >0\) and \(x\in (0,\alpha ).\) On the other hand, the function \(x\mapsto 1/q_{\nu }(x)=I_{\nu }(x)/I_{\nu -1}(x)\) is increasing on \((0,\infty )\) for \(\nu \ge \frac{1}{2},\) see [71] or [67]. This implies that when \(\nu \ge \frac{1}{2}\) the function \(q_{\nu }\) is decreasing on \((0,\infty )\) and so is on \((0,\alpha ].\) This in turn implies that for \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}\) the function \(x\mapsto S_{a,\alpha ,\nu }(x)=s_{a,\alpha }(x)\cdot q_{\nu }(x)\) is strictly increasing on \((0,\alpha ]\) as a product of a negative and strictly increasing function and of a strictly positive and decreasing function. Now, observe that \(Q_{a,\alpha ,\nu }(x)=S_{a,\alpha ,\nu }(\alpha g_a(x)).\) Since \(x\mapsto \alpha g_a(x)\) is decreasing on [0, a),  the monotonicity of \(S_{a,\alpha ,\nu }\) implies that \(Q_{a,\alpha ,\nu }\) is decreasing on [0, a) for all \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}.\) Moreover, the function \(g_a\) is even on \((-a,a)\) and thus the function \(Q_{a,\alpha ,\nu }\) is odd on \((-a,a),\) and consequently \(Q_{a,\alpha ,\nu }\) is decreasing too on \((-a,0]\) for all \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}.\) This is justified by the fact that geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Thus, the logarithmic derivative \(Q_{a,\alpha ,\nu }\) is decreasing on \((-a,a)\) for all \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2},\) and this is equivalent to the fact that the probability density function \(\varphi _{a,\alpha ,\nu }\) is log-concave on \((-a,a)\) for all \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}.\) In other words, the Kaiser–Bessel distribution for \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}\) belongs to the family of log-concave distributions, that is

$$\begin{aligned} \varphi _{a,\alpha ,\nu }(\rho x+(1-\rho )y)\ge \left[ \varphi _{a,\alpha ,\nu } (x)\right] ^{\rho }\left[ \varphi _{a,\alpha ,\nu }(y)\right] ^{1-\rho } \end{aligned}$$

for all \(a>0,\) \(\alpha >0,\) \(\nu \ge \frac{1}{2},\) \(x,y\in (-a,a)\) and \(\rho \in [0,1].\) Moreover, since the probability density function \(\varphi _{a,\alpha ,\nu }\) is decreasing on [0, a) for all \(a>0\) and \(\alpha >0,\) by using the arithmetic–geometric mean inequality we arrive at

$$\begin{aligned} \varphi _{a,\alpha ,\nu }(x^{\rho }y^{1-\rho })\ge \left[ \varphi _{a,\alpha ,\nu }(x)\right] ^{\rho }\left[ \varphi _{a,\alpha ,\nu }(y)\right] ^{1-\rho } \end{aligned}$$

for all \(a>0,\) \(\alpha >0,\) \(\nu \ge \frac{1}{2},\) \(x,y\in [0,a)\) and \(\rho \in [0,1],\) that is, the probability density function \(\varphi _{a,\alpha ,\nu }\) is geometrically concave on [0, a) for \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}.\)
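The monotone decrease of the logarithmic derivative \(Q_{a,\alpha ,\nu }\) established above can be illustrated numerically. The sketch below uses the sample values \(a=2,\) \(\alpha =3,\) \(\nu =2\) (so \(\nu \ge \frac{1}{2}\)); the helper names and the grid are our own choices, and \(I_{\nu }\) is computed from its ascending series.

```python
import math

def bessel_i(nu, x, terms=40):
    # Ascending series for the modified Bessel function of the first kind.
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

def Q(x, a, alpha, nu):
    # Logarithmic derivative of the Kaiser-Bessel density:
    # Q = -(alpha x)/(a^2 g) * I_{nu-1}(alpha g)/I_nu(alpha g), g = sqrt(1-(x/a)^2).
    g = math.sqrt(1 - (x / a) ** 2)
    return -(alpha * x) / (a ** 2 * g) * bessel_i(nu - 1, alpha * g) / bessel_i(nu, alpha * g)

a, alpha, nu = 2.0, 3.0, 2.0
xs = [-0.95 * a + i * (1.9 * a) / 200 for i in range(201)]  # grid inside (-a, a)
qs = [Q(x, a, alpha, nu) for x in xs]
decreasing = all(q1 > q2 for q1, q2 in zip(qs, qs[1:]))
print(decreasing)
```

A strictly decreasing logarithmic derivative on the grid is exactly the discrete footprint of log-concavity of the density.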

Figure 2, which resembles a beautiful rainbow, illustrates the log-concavity behavior of the probability density function.

On the other hand, we know that when \(\nu \in \left( 0,\frac{1}{2}\right) ,\) then the function \(x\mapsto 1/q_{\nu }(x)=I_{\nu }(x)/I_{\nu -1}(x)\) is increasing first to reach a maximum and then decreasing, see for example [71, p. 446]. Note that the quotient \(I_{\nu }(x)/I_{\nu -1}(x)\) satisfies the Riccati differential equation

$$\begin{aligned} y'(x)=1-y^2(x)-\frac{2\nu -1}{x}y(x). \end{aligned}$$

So, if we denote by \(x_{\nu }^*\) the point in which the derivative of \(1/q_{\nu }(x)\) vanishes, then \(x_{\nu }^*\) is in fact the solution of the equation

$$\begin{aligned} x\left[ I_{\nu -1}^2(x)-I_{\nu }^2(x)\right] =(2\nu -1)I_{\nu -1}(x)I_{\nu }(x), \end{aligned}$$

and numerical experiments and graphs show that \(x_{\nu }^*\) is increasing with \(\nu .\) By using a similar argument as in the case of \(\nu \ge \frac{1}{2}\) above, we readily see that the probability density function \(\varphi _{a,\alpha ,\nu }\) is log-concave on \((-a,a)\) for all \(a>0,\) \(x_{\nu }^*\ge \alpha >0\) and \(0<\nu <\frac{1}{2},\) and is geometrically concave on [0, a) for all \(a>0,\) \(x_{\nu }^*\ge \alpha >0\) and \(0<\nu <\frac{1}{2}.\)

Summarizing, we have obtained the following result.

Theorem 5

The probability density function \(\varphi _{a,\alpha ,\nu }\) is log-concave on \((-a,a)\) for all \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}\) and is geometrically concave on [0, a) for \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}.\) Moreover, the probability density function \(\varphi _{a,\alpha ,\nu }\) is log-concave on \((-a,a)\) for all \(a>0,\) \(x_{\nu }^*\ge \alpha >0\) and \(0<\nu <\frac{1}{2},\) and is geometrically concave on [0, a) for all \(a>0,\) \(x_{\nu }^*\ge \alpha >0\) and \(0<\nu <\frac{1}{2}.\)

It is worth mentioning here that if a probability density function is log-concave, then the corresponding cumulative distribution function is also log-concave, see for example [3] and [9]. Moreover, we also know that if a probability density function is geometrically concave, then the cumulative distribution function is also geometrically concave, according to [14, Theorem 3]. By using these results, we clearly have the following result.

Corollary 1

The cumulative distribution function \(F_{a,\alpha ,\nu }\) is log-concave on \((-a,a)\) for all \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2},\) and is geometrically concave on [0, a) for \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}.\) In addition, the cumulative distribution function \(F_{a,\alpha ,\nu }\) is log-concave on \((-a,a)\) for all \(a>0,\) \(x_{\nu }^*\ge \alpha >0\) and \(0<\nu <\frac{1}{2},\) and is geometrically concave on [0, a) for all \(a>0,\) \(x_{\nu }^*\ge \alpha >0\) and \(0<\nu <\frac{1}{2}.\)

Note that in view of the explicit form of \(F_{a,\alpha ,\nu }(x)\) a direct proof of the above properties would not be trivial.

Fig. 3

The graph of the function \(\gamma _{\nu }(t)=\frac{tI_{\nu }'(t)}{I_{\nu }(t)}-t\) for \(\nu \in \{0,0.1,0.2,0.3,0.4,0.49\}\)

4.3 Monotonicity of the Probability Density Function with Respect to a

Observe that by using the notations \(s=g_a(x)\) and \(t=\alpha s,\) and the recurrence relation

$$\begin{aligned} I_{\nu -1}(t)+I_{\nu +1}(t)=2I_{\nu }'(t),\end{aligned}$$
(4.2)

we find that

$$\begin{aligned} \frac{\partial \varphi _{a,\alpha ,\nu }(x)}{\partial a}=\frac{x^2\sqrt{\frac{\alpha }{2\pi }}}{a^4I_{\nu +\frac{1}{2}}(\alpha )}\cdot I_{\nu }(t)s^{\nu -2}\cdot \left[ \frac{tI_{\nu }'(t)}{I_{\nu }(t)}+\nu +1 -\frac{a^2}{x^2}\right] .\end{aligned}$$
(4.3)

In view of the inequalities [58, p. 526]

$$\begin{aligned} \sqrt{t^2+(\nu +1)^2}-1<\frac{tI_{\nu }'(t)}{I_{\nu }(t)}<\sqrt{t^2+\left( \nu +\frac{1}{2}\right) ^2}-\frac{1}{2}, \end{aligned}$$
(4.4)

where the left-hand side holds for \(\nu \ge -1\) and \(t>0,\) while the right-hand side holds for \(\nu \ge -\frac{1}{2}\) and \(t>0,\) we obtain that the right-hand side of (4.3) is strictly positive when \((2\nu +1)x^2\ge a^2\) and it is strictly negative when

$$\begin{aligned} \sqrt{\alpha ^2+\left( \nu +\frac{1}{2}\right) ^2}+\nu +\frac{1}{2}\le \frac{a^2}{x^2}. \end{aligned}$$

More precisely, for \(\alpha >0,\) \(a>0\) and \(\nu \ge -\frac{1}{2},\) the function \(a\mapsto \varphi _{a,\alpha ,\nu }(x)\) is strictly increasing for \(a>|x|\ge \frac{a}{\sqrt{2\nu +1}}\) and is strictly decreasing for

$$\begin{aligned} |x|\le \frac{a}{\sqrt{\sqrt{\alpha ^2+\left( \nu +\frac{1}{2}\right) ^2}+\nu +\frac{1}{2}}}. \end{aligned}$$

Moreover, the range of the monotonicity can be further extended in the case when \(a>0\) and \(\alpha>\nu +1>1.\) Namely, by using the arithmetic–geometric mean inequality for \(I_{\nu -1}(t)\) and \(I_{\nu +1}(t)\) and the Turán type inequality (3.10) we arrive at

$$\begin{aligned}\frac{a^4I_{\nu +\frac{1}{2}}(\alpha )}{x^2\sqrt{\frac{\alpha }{2\pi }}} \cdot \frac{\displaystyle \frac{\partial \varphi _{a,\alpha ,\nu }(x)}{\partial a}}{I_{\nu }(t)s^{\nu -2}}&=t\cdot \frac{I_{\nu -1}(t)+I_{\nu +1}(t)}{2I_{\nu }(t)}+\nu +1-\frac{a^2}{x^2}\\ {}&\ge t\cdot \frac{\sqrt{I_{\nu -1}(t)I_{\nu +1}(t)}{}}{I_{\nu }(t)}+\nu +1-\frac{a^2}{x^2}\\&>t\sqrt{\frac{\nu }{\nu +1}}+\nu +1-\frac{a^2}{x^2}\ge 0 \end{aligned}$$

whenever

$$\begin{aligned} \frac{a}{\sqrt{\alpha \frac{\nu }{\nu +1}+\nu +1}}\le |x|\le \frac{a}{\sqrt{\nu +1}} \end{aligned}$$

with \(a>0,\) \(\alpha >0\) and \(\nu >0.\) Thus, in this case the function \(a\mapsto \varphi _{a,\alpha ,\nu }(x)\) is strictly increasing on \((0,\infty ),\) of course with the above assumptions.
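The two monotonicity regimes described above can be observed numerically: in the sketch below (illustrative parameter values \(a=2,\) \(\alpha =3,\) \(\nu =2\); helper names are our own, \(I_{\nu }\) via its ascending series) a finite-difference derivative with respect to a is negative for a small |x| and positive for a large |x|.

```python
import math

def bessel_i(nu, x, terms=40):
    # Ascending series for the modified Bessel function of the first kind.
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

def kb_pdf(x, a, alpha, nu):
    # Kaiser-Bessel density on (-a, a).
    g = math.sqrt(1 - (x / a) ** 2)
    return (math.sqrt(alpha / (2 * math.pi)) / (a * bessel_i(nu + 0.5, alpha))
            * g ** nu * bessel_i(nu, alpha * g))

def d_pdf_da(x, a, alpha, nu, h=1e-6):
    # Central finite difference of the density with respect to the parameter a.
    return (kb_pdf(x, a + h, alpha, nu) - kb_pdf(x, a - h, alpha, nu)) / (2 * h)

alpha, nu, a = 3.0, 2.0, 2.0
# |x| below a / sqrt(sqrt(alpha^2 + (nu + 1/2)^2) + nu + 1/2) ~ 0.79: decreasing in a.
small_x = 0.5
# |x| with (2 nu + 1) x^2 >= a^2, i.e. |x| >= a / sqrt(5) ~ 0.89: increasing in a.
large_x = 1.5
print(d_pdf_da(small_x, a, alpha, nu), d_pdf_da(large_x, a, alpha, nu))
```

The printed pair (negative, positive) matches the two sign regions derived from the bounds (4.4).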

Now, we introduce the following notations: for \(\nu \ge 0\) let \(t_{\alpha ,\nu }\) be the unique positive root of the equation

$$\begin{aligned} \frac{tI_{\nu }'(t)}{I_{\nu }(t)} +\nu +1=\frac{\alpha ^2}{\alpha ^2-t^2}\end{aligned}$$
(4.5)

in the interval \([0,\alpha )\) and let \(r_{\alpha ,\nu }^2=1-t_{\alpha ,\nu }^2/\alpha ^2.\) Moreover, for \(-1<\nu <0\) and \(\alpha \) large enough let \(t_{\alpha ,\nu ,1}\) and \(t_{\alpha ,\nu ,2}\) denote the positive roots of the equation (4.5) in the interval \([0,\alpha )\), and let \(r_{\alpha ,\nu ,i}^2=1-t_{\alpha ,\nu ,i}^2/\alpha ^2\) for \(i\in \{1,2\}.\)
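For concrete parameter values the root \(t_{\alpha ,\nu }\) of (4.5) can be located by bisection, since the difference between the two sides of (4.5) is positive near zero and tends to \(-\infty \) as \(t\nearrow \alpha .\) Below is a sketch for \(\nu =3\) and \(\alpha =2\) (the pair \(r_3,\) \(q_2\) plotted in Fig. 4); the helper names are our own, and \(I_{\nu }\) is computed from its ascending series.

```python
import math

def bessel_i(nu, x, terms=60):
    # Ascending series for the modified Bessel function of the first kind.
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

def w(t, alpha, nu):
    # Left side minus right side of (4.5), using
    # t I_nu'(t)/I_nu(t) = nu + t I_{nu+1}(t)/I_nu(t).
    y = nu + t * bessel_i(nu + 1, t) / bessel_i(nu, t)
    return y + nu + 1 - alpha ** 2 / (alpha ** 2 - t ** 2)

alpha, nu = 2.0, 3.0
lo, hi = 1e-8, alpha - 1e-8   # sign change bracket: w > 0 near 0, w < 0 near alpha
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if w(mid, alpha, nu) > 0 else (lo, mid)
t_root = 0.5 * (lo + hi)                   # numerical t_{alpha, nu}
r = math.sqrt(1 - (t_root / alpha) ** 2)   # the corresponding r_{alpha, nu}
print(t_root, r)
```

The bisection relies only on the sign change, so it works regardless of how steep \(q_{\alpha }\) becomes near its vertical asymptote.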

With a more sophisticated analysis we arrive at an almost complete description of the monotonicity pattern of the probability density function with respect to a.

Theorem 6

The following assertions are true:

  • (i) If \(a>0,\) \(\alpha >0,\) \(\nu \ge 0\) and \(a>|x|\ge a\cdot r_{\alpha ,\nu },\) then the function \(a\mapsto \varphi _{a,\alpha ,\nu }(x)\) is increasing; if \(a>0,\) \(\alpha >0,\) \(\nu \ge 0\) and \(|x|\le a\cdot r_{\alpha ,\nu },\) then the function \(a\mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing;

  • (ii) If \(a>0,\) \(-1<\nu <0,\) \(\alpha \) is large enough and \(a\cdot r_{\alpha ,\nu ,1}\ge |x|\ge a\cdot r_{\alpha ,\nu ,2},\) then the function \(a\mapsto \varphi _{a,\alpha ,\nu }(x)\) is increasing; if \(a>0,\) \(-1<\nu <0,\) \(\alpha \) is large enough and \(|x|\le a\cdot r_{\alpha ,\nu ,2}\) or \(a\cdot r_{\alpha ,\nu ,1}\le |x|<a,\) then the function \(a\mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing.

Proof

First we show that Eq. (4.5) has only one positive solution in \([0,\alpha )\) for \(\nu \ge 0.\) To prove this we introduce the following auxiliary notations

$$\begin{aligned} y_{\nu }(t)=\frac{tI_{\nu }'(t)}{I_{\nu }(t)},\quad r_{\nu }(t)=y_{\nu }(t)+\nu +1,\quad q_{\alpha }(t)=\frac{\alpha ^2}{\alpha ^2-t^2},\quad w_{\alpha ,\nu }(t)=r_{\nu }(t)-q_{\alpha }(t). \end{aligned}$$

Since

$$\begin{aligned} y_{\nu }(t)=\nu +\frac{t^2}{2(\nu +1)} -\frac{t^4}{8(\nu +1)^2(\nu +2)}+{\cdots }, \end{aligned}$$
(4.6)

the function \(t\mapsto y_{\nu }(t)\) maps \([0,\infty )\) into \([\nu ,\infty )\) and hence \([0,\alpha ]\) into \([\nu ,y_{\nu }(\alpha )].\) This implies that for all \(\nu >-1\) the function \(t\mapsto r_{\nu }(t)\) maps \([0,\alpha ]\) into \([2\nu +1,r_{\nu }(\alpha )].\) On the other hand, the function \(t\mapsto y_{\nu }(t)\) is an increasing function on \((0,\infty )\) for all \(\nu >-1,\) and for large values of t and fixed \(\nu \) it has the asymptotic expansion [28, p. 275]

$$\begin{aligned} y_{\nu }(t)\sim t-\frac{1}{2}+\frac{4\nu ^2-1}{8t}-{\cdots }. \end{aligned}$$

Consequently \(\lim \limits _{t\rightarrow \infty }{r_{\nu }(t)}/{t}=1\) and \(\lim \limits _{t\rightarrow \infty }\left[ r_{\nu }(t)-t\right] =\nu +\frac{1}{2},\) and thus the function \(t\mapsto r_{\nu }(t)\) is increasing on \([0,\alpha ]\) and has the skew asymptote \(u=t+\nu +\frac{1}{2}\) in the tu-plane. Moreover, since according to [28, p. 275] (see also [15, p. 231]) we have \(y_{\nu }(t)>t-\frac{1}{2}\) for all \(t>0\) and \(\nu \ge \frac{1}{2},\) we see that \(r_{\nu }(t)>t+\nu +\frac{1}{2},\) that is, the graph of the function \(r_{\nu }\) lies above the graph of its skew asymptote. On the other hand, the function \(q_{\alpha }\) is an increasing and convex function on \([0,\alpha )\) and has the vertical asymptote \(t=\alpha \) in the tu-plane. Since \(q_{\alpha }(0)=1\le \nu +\frac{1}{2}<2\nu +1=r_{\nu }(0),\) the graph of the function \(q_{\alpha }\) crosses the graph of the skew asymptote \(u=t+\nu +\frac{1}{2}\) and then the graph of the function \(r_{\nu }\) only once, and in the case when \(\nu \ge \frac{1}{2}\) Eq. (4.5) has indeed only one positive solution in \([0,\alpha ).\)

When \(0\le \nu <\frac{1}{2}\) the situation is a little bit different. In this case the function \(r_{\nu }\) crosses its skew asymptote \(u=t+\nu +\frac{1}{2}\) at a point \(\left( s_{\nu },r_{\nu }(s_{\nu })\right) \) and for \(t\gtrless s_{\nu }\) we find that \(r_{\nu }(t)\lessgtr t+\nu +\frac{1}{2}.\) Since \(r_{\nu }(0)=2\nu +1\ge q_{\alpha }(0)=1> \nu +\frac{1}{2},\) the graph of the function \(q_{\alpha }\) crosses the graph of the function \(r_{\nu }\) and its skew asymptote only once, and in this case Eq. (4.5) also has only one positive solution in \((0,\alpha ).\) It remains to show that the graph of the function \(r_{\nu }\) crosses its skew asymptote only once, that is, the equation \(r_{\nu }(t)=t+\nu +\frac{1}{2}\) has only one solution in t on \([0,\infty )\) for each \(\nu \in \left[ 0,\frac{1}{2}\right) .\) For this we use Gronwall’s idea [28, p. 276] and consider the function \(\gamma _{\nu }:[0,\infty )\rightarrow (-\infty ,\nu ],\) defined by

$$\begin{aligned} \gamma _{\nu }(t)=r_{\nu }(t)-t-\nu -1=y_{\nu }(t)-t=\frac{tI_{\nu }'(t)}{I_{\nu }(t)}-t =\frac{tI_{\nu +1}(t)}{I_{\nu }(t)}+\nu -t \end{aligned}$$

and we show that the equation \(\gamma _{\nu }(t)=-\frac{1}{2}\) has only one solution in t on \([0,\infty )\) for each \(\nu \in \left[ 0,\frac{1}{2}\right) .\) Since \(y_{\nu }\) satisfies the Riccati differential equation

$$\begin{aligned} ty_{\nu }'(t)=t^2+\nu ^2-y_{\nu }^2(t), \end{aligned}$$

we obtain that \(\gamma _{\nu }\) satisfies the following Riccati differential equation

$$\begin{aligned} t\gamma _{\nu }'(t) =-(1+2\gamma _{\nu }(t))t+\nu ^2-\gamma _{\nu }^2(t), \end{aligned}$$
(4.7)

and hence

$$\begin{aligned} t\gamma _{\nu }''(t) =-1-2\gamma _{\nu }(t)-(2t+1+2\gamma _{\nu }(t))\gamma _{\nu }'(t). \end{aligned}$$
(4.8)

Now, observe that \(\gamma _{\nu }(0)=\nu \) and in view of (4.6) the function \(\gamma _{\nu }\) is decreasing for t sufficiently small. This implies that the first extremum of \(\gamma _{\nu },\) if any, must be a minimum, and according to (4.8) this minimum must satisfy \(\gamma _{\nu }(t)\le -\frac{1}{2}.\) Thus, the graph of the function \(t\mapsto \gamma _{\nu }(t)\) crosses the horizontal line \(u=-\frac{1}{2}\) at least once (Fig. 3 illustrates this behavior quite well) and in what follows we show that there is no other intersection point. We denote the first solution of the equation \(\gamma _{\nu }(t)=-\frac{1}{2}\) by \(t_{\nu }\) and we assume that \(t^{\circ }>t_{\nu }\) is the smallest value of t greater than \(t_{\nu }\) for which \(\gamma _{\nu }(t)=-\frac{1}{2}.\) Then obviously \(\gamma _{\nu }'(t^{\circ })\ge 0\) and in view of the Riccati differential equation (4.7) we obtain that \(\nu ^2\ge \frac{1}{4},\) which is a contradiction. Thus, indeed when \(\nu \in \left[ 0,\frac{1}{2}\right) ,\) the equation \(\gamma _{\nu }(t)=-\frac{1}{2}\) has only one solution, denoted by \(t_{\nu }.\) Moreover, it is important to mention here that the above uniqueness proof is in fact valid for all \(\nu \in \left( -\frac{1}{2},0\right] \) too, and thus we have shown that the equation \(\gamma _{\nu }(t)=-\frac{1}{2}\) has only one solution in t when \(|\nu |<\frac{1}{2}.\)
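This single-crossing property can be probed numerically for a sample value, say \(\nu =\frac{1}{4}\). The sketch below counts sign changes of \(\gamma _{\nu }(t)+\frac{1}{2}\) on a grid (the grid, range and helper names are our own choices; \(I_{\nu }\) is computed from its ascending series).

```python
import math

def bessel_i(nu, x, terms=60):
    # Ascending series for the modified Bessel function of the first kind.
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

def gamma_nu(t, nu):
    # gamma_nu(t) = t I_nu'(t)/I_nu(t) - t = nu + t I_{nu+1}(t)/I_nu(t) - t
    return nu + t * bessel_i(nu + 1, t) / bessel_i(nu, t) - t

nu = 0.25
ts = [0.05 + i * 0.05 for i in range(300)]      # grid on (0, 15]
vals = [gamma_nu(t, nu) + 0.5 for t in ts]       # sign of gamma_nu(t) + 1/2
crossings = sum(1 for v1, v2 in zip(vals, vals[1:]) if v1 > 0 >= v2 or v1 <= 0 < v2)
print(crossings)
```

Since \(\gamma _{\nu }(t)\rightarrow -\frac{1}{2}\) from below after the crossing, the count on a fine grid is stable against the choice of step size.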

Now, since for \(\nu \ge 0\) we find that \(w_{\alpha ,\nu }(0)=2\nu \ge 0\) and \(w_{\alpha ,\nu }(t)\) tends to \(-\infty \) as \(t\nearrow \alpha ,\) we arrive at the conclusion that

$$\begin{aligned} w_{\alpha ,\nu }(t)=\frac{tI_{\nu }'(t)}{I_{\nu }(t)}+\nu +1-\frac{\alpha ^2}{\alpha ^2-t^2} =\frac{tI_{\nu }'(t)}{I_{\nu }(t)}+\nu +1-\frac{a^2}{x^2} \end{aligned}$$

is positive when \(t\in (0,t_{\alpha ,\nu }]\) and is negative when \(t\in [t_{\alpha ,\nu },\alpha ).\) This leads to the result stated in (i).

When \(-\frac{1}{2}\le \nu <0\) the situation is a little bit more complicated. In this case the graph of the function \(r_{\nu }\) crosses its skew asymptote \(u=t+\nu +\frac{1}{2}\) at a point \(\left( s_{\nu },r_{\nu }(s_{\nu })\right) \) and for \(t\gtrless s_{\nu }\) we find that \(r_{\nu }(t)\lessgtr t+\nu +\frac{1}{2}.\) The equation \(r_{\nu }(t)=t+\nu +\frac{1}{2}\) has indeed only one solution in t for \(\nu \in \left( -\frac{1}{2},0\right) ,\) according to the above discussion about the uniqueness of the solution of the equation \(\gamma _{\nu }(t)=-\frac{1}{2}.\) We just need to verify the case when \(\nu =-\frac{1}{2}.\) In this case the equation \(\gamma _{-\frac{1}{2}}(t)=-\frac{1}{2}\) becomes \(t\sinh t/\cosh t=t,\) which has the unique solution \(t=0.\) Now, since \(q_{\alpha }(0)=1>r_{\nu }(0)=2\nu +1\ge \nu +\frac{1}{2},\) for \(\alpha \) large enough the graph of the function \(q_{\alpha }\) crosses the graph of the function \(r_{\nu }\) and its skew asymptote twice. This is because the function \(q_{\alpha }\) is convex, has a vertical asymptote and eventually grows faster than \(r_{\nu }.\)

Fig. 4

The graph of the functions \(r_3,\) \(q_2,\) \(u=t+\frac{7}{2};\) \(r_{\frac{1}{4}},\) \(q_{\frac{3}{2}},\) \(u=t+\frac{3}{4};\) \(r_{-\frac{1}{4}},\) \(q_{\sqrt{7}},\) \(u=t+\frac{1}{4};\) \(r_{-\frac{3}{4}},\) \(q_4,\) \(u=t-\frac{1}{4},\) respectively

Similarly, when \(-1<\nu <-\frac{1}{2}\) the graph of the function \(r_{\nu }\) crosses its skew asymptote \(u=t+\nu +\frac{1}{2}\) at a point \(\left( s_{\nu },r_{\nu }(s_{\nu })\right) \) and for \(t\gtrless s_{\nu }\) we find that \(r_{\nu }(t)\gtrless t+\nu +\frac{1}{2}.\) To show this we use a similar argument as before. We know that in this case \(-1<\gamma _{\nu }(0)=\nu <-\frac{1}{2}\) and \(\lim \limits _{t\rightarrow \infty }\gamma _{\nu }(t)=-\frac{1}{2}.\) On the other hand, in view of (4.6) the function \(\gamma _{\nu }\) is decreasing for t sufficiently small, and thus the first extremum of \(\gamma _{\nu }\) is clearly a minimum. Now, if we suppose that the function \(\gamma _{\nu }\) has no other critical points, then clearly we arrive at \(\gamma _{\nu }'(t)>0\) for all \(t>t^*,\) where \(\gamma _{\nu }'(t^*)=0.\) But, according to (4.7), this implies that \(\gamma _{\nu }^2(t)<\nu ^2\) and thus \(\gamma _{\nu }(t)>\nu \) for all \(t>t^*,\) which contradicts the fact that \(\gamma _{\nu }\) intersects the horizontal line \(u=\nu .\) Thus, the function \(\gamma _{\nu }\) has a second critical point, and because of its behavior at infinity this is a maximum. Denoting by \(t^\bullet \) the value for which \(\gamma _{\nu }'(t^\bullet )=0\) and \(\gamma _{\nu }''(t^\bullet )<0,\) in view of (4.8) we clearly obtain that \(\gamma _{\nu }(t^\bullet )>-\frac{1}{2},\) that is, the graph of the function \(\gamma _{\nu }\) intersects the horizontal line \(u=-\frac{1}{2}\) at least once. Now, we show that there is no other intersection point. We denote the first solution of the equation \(\gamma _{\nu }(t)=-\frac{1}{2}\) by \(t_{\nu }\) and we assume that \(t^{\star }>t_{\nu }\) is the smallest value of t greater than \(t_{\nu }\) for which \(\gamma _{\nu }(t)=-\frac{1}{2}.\) Then obviously \(\gamma _{\nu }'(t^{\star })\le 0\) and in view of the Riccati differential equation (4.7) we obtain that \(\nu ^2\le \frac{1}{4},\) which is a contradiction.
Thus, indeed when \(\nu \in \left( -1,-\frac{1}{2}\right) ,\) the equation \(\gamma _{\nu }(t)=-\frac{1}{2}\) has only one solution. Now, since \(q_{\alpha }(0)=1>\nu +\frac{1}{2}>r_{\nu }(0)=2\nu +1,\) for \(\alpha \) large enough the graph of the function \(q_{\alpha }\) crosses the graph of the function \(r_{\nu }\) and its skew asymptote twice. This is again a consequence of the convexity of \(q_\alpha ,\) its vertical asymptote and the fact that \(q_{\alpha }\) eventually grows faster than \(r_\nu .\)

Thus, when \(-1<\nu <0\) we find that \(w_{\alpha ,\nu }(0)=2\nu <0\) and \(w_{\alpha ,\nu }(t)\) tends to \(-\infty \) as \(t\nearrow \alpha .\) Taking into account these facts we arrive at the conclusion that \(w_{\alpha ,\nu }(t)\) is positive when \(t\in [t_{\alpha ,\nu ,1},t_{\alpha ,\nu ,2}]\) and is negative when \(t\in (0,t_{\alpha ,\nu ,1}]\) or \(t\in [t_{\alpha ,\nu ,2},\alpha ).\) This leads to the result stated in (ii). \(\square \)

Figure 4 illustrates geometrically the solutions of Eq. (4.5) in the above four different cases concerning the parameter \(\nu \). The only thing missing from this almost complete description of the monotonicity pattern is how large the parameter \(\alpha \) needs to be, when \(-1<\nu <0,\) in order that the graph of \(q_{\alpha }\) crosses the graph of \(r_{\nu },\) that is, in order that Eq. (4.5) has two solutions. Of course, when \(\alpha \) is small and the graphs of the functions \(q_{\alpha }\) and \(r_{\nu }\) do not intersect each other, the right-hand side of (4.3) is negative, and in this case the probability density function of the Kaiser–Bessel distribution is decreasing with respect to a. It is a challenging problem to find, in the case when \(-1<\nu <0,\) the smallest positive value of \(\alpha \) for which Eq. (4.5) has at least one solution.

4.4 Inequalities for the Quotient of Modified Bessel Functions

In this subsection we present some direct consequences of an auxiliary result proved in the previous subsection. Observe that the equation \(\gamma _{\nu }(t)=-\frac{1}{2}\) is equivalent to \(y_{\nu }(t)=t-\frac{1}{2}.\) Thus, in fact in Sect. 4.3 we have shown that the equation \(y_{\nu }(t)=t-\frac{1}{2}\) has a unique solution in t when \(t\in [0,\infty )\) and \(\nu \in \left( -1,\frac{1}{2}\right) .\) Since for \(|\nu |<\frac{1}{2}\) we have \(y_{\nu }(0)=\nu >-\frac{1}{2},\) we obtain the inequalities \(y_{\nu }(t)\gtrless t-\frac{1}{2},\) that is

$$\begin{aligned} \frac{tI_{\nu }'(t)}{I_{\nu }(t)}\gtrless t-\frac{1}{2} \end{aligned}$$

whenever \(t\lessgtr t_{\nu }.\) Moreover, since for \(-1<\nu <-\frac{1}{2}\) we have \(y_{\nu }(0)=\nu <-\frac{1}{2},\) we arrive at the inequalities \(y_{\nu }(t)\lessgtr t-\frac{1}{2},\) that is

$$\begin{aligned} \frac{tI_{\nu }'(t)}{I_{\nu }(t)}\lessgtr t-\frac{1}{2} \end{aligned}$$

whenever \(t\lessgtr t_{\nu }.\) These inequalities complement the results in [28] and [15].

To show another direct consequence of the above-mentioned auxiliary result, observe that the equation \(r_{\nu }(t)=t+\nu +\frac{1}{2}\) is equivalent to \(v_{\nu }(t)=h_{\nu }(t),\) where

$$\begin{aligned} v_{\nu }(t)=\frac{tI_{\nu }(t)}{I_{\nu +1}(t)} \quad \text{ and } \quad h_{\nu }(t)=\frac{t^2}{t-\left( \nu +\frac{1}{2}\right) }. \end{aligned}$$

The function \(v_{\nu }\) has applications in finite elasticity, and it has been used to prove that a nonlinearly elastic cylinder eventually becomes unstable in uniaxial compression, see the papers [60] and [61] for more details. The properties of the function \(v_{\nu }\) have been well studied in recent years, and several bounds have been found for this quotient of modified Bessel functions of the first kind, see for example the recent papers [10, 31, 68] and [70]. In this subsection we present some novel results on the function \(v_{\nu },\) which are in fact byproducts of the results proved in the previous subsection. Recall that from Sect. 4.3 we know that the equation \(v_{\nu }(t)=h_{\nu }(t)\) has only one solution in t when \(t\in [0,\infty )\) and \(\nu \in \left( -1,\frac{1}{2}\right) .\) Since \(v_{\nu }(0)=2(\nu +1)>0\) when \(|\nu |<\frac{1}{2},\) we obtain that \(v_{\nu }(t)<h_{\nu }(t)\) for all \(\nu +\frac{1}{2}<t<t_{\nu }\) and \(v_{\nu }(t)>h_{\nu }(t)\) for all \(t>t_{\nu }.\) Moreover, since \(t_{-\frac{1}{2}}=0,\) we easily obtain that \(v_{-\frac{1}{2}}(t)>h_{-\frac{1}{2}}(t)=t\) for all \(t\ge 0.\) In addition, since \(v_{\nu }(0)=2(\nu +1)>0\) for all \(\nu \in \left( -1,-\frac{1}{2}\right) ,\) we arrive at \(v_{\nu }(t)>h_{\nu }(t)\) for all \(0\le t<t_{\nu }\) and \(\nu \in \left( -1,-\frac{1}{2}\right) ,\) and \(v_{\nu }(t)<h_{\nu }(t)\) for all \(t>t_{\nu }\) and \(\nu \in \left( -1,-\frac{1}{2}\right) .\) Finally, since \(y_{\nu }(t)>t-\frac{1}{2}\) for all \(t\ge 0\) and \(\nu \ge \frac{1}{2},\) according to [28, p. 275] (see also [15, p. 231]), it follows that the equation \(y_{\nu }(t)=t-\frac{1}{2}\) has no solution when \(t\ge 0\) and \(\nu \ge \frac{1}{2}.\) This implies that the equation \(v_{\nu }(t)=h_{\nu }(t)\) has no solution when \(t\ge 0\) and \(\nu \ge \frac{1}{2},\) and consequently, for all \(t>\nu +\frac{1}{2}\) and \(\nu \ge \frac{1}{2}\) we have that \(v_{\nu }(t)<h_{\nu }(t).\) In other words, for all \(t>\nu +\frac{1}{2}\) and \(\nu \ge \frac{1}{2}\) the following inequality is valid

$$\begin{aligned} \frac{tI_{\nu }(t)}{I_{\nu +1}(t)}<\frac{t^2}{t-\left( \nu +\frac{1}{2}\right) }. \end{aligned}$$
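A quick numerical spot check of this bound, at arbitrary sample points (the series evaluator for \(I_{\nu }\) is an ad hoc helper, not from the paper):

```python
import math

def besseli(nu, t, terms=80):
    """Modified Bessel I_nu(t) from its power series (adequate for moderate t)."""
    return sum((t / 2.0) ** (2 * m + nu) / (math.factorial(m) * math.gamma(nu + m + 1))
               for m in range(terms))

# v_nu(t) = t I_nu(t)/I_{nu+1}(t) < t^2/(t - nu - 1/2) for t > nu + 1/2 and nu >= 1/2
for nu in (0.5, 1.0, 3.0):
    for t in (nu + 0.6, nu + 2.0, nu + 10.0):
        v = t * besseli(nu, t) / besseli(nu + 1, t)
        h = t * t / (t - (nu + 0.5))
        assert v < h
```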

It is interesting to note that both functions \(v_{\nu }\) and \(h_{\nu }\) have the skew asymptote \(u=t+\nu +\frac{1}{2}\) in the tu-plane. Moreover, \(v_{\nu }\) is convex (according to [60]) for all \(\nu \ge 0\) and the hyperbola \(h_{\nu }\) is also convex on \(\left( \nu +\frac{1}{2},\infty \right) \) for all \(\nu \ge 0.\) The asymptotic expansion (in fact the Laurent series) of \(h_{\nu }(t)\) is the following

$$\begin{aligned} h_{\nu }(t)\sim t\sum _{n\ge 0}\frac{\left( \nu +\frac{1}{2}\right) ^n}{t^n} =t+\nu +\frac{1}{2}+\frac{\left( \nu +\frac{1}{2}\right) ^2}{t} +\frac{\left( \nu +\frac{1}{2}\right) ^3}{t^2}+\cdots , \quad t\rightarrow \infty . \end{aligned}$$

Now, in view of (3.5), for the function \(v_{\nu }(t)\) we arrive at the following asymptotic expansion

$$\begin{aligned} v_{\nu }(t)\sim t+\nu +\frac{1}{2} +\frac{\alpha _2(\nu )}{t}+\frac{\alpha _3(\nu )}{t^2}+\cdots , \quad t\rightarrow \infty , \end{aligned}$$

where \(\alpha _0(\nu )=1,\) \(\alpha _1(\nu )=\nu +\frac{1}{2}\) and for all \(n\ge 1\)

$$\begin{aligned} \alpha _n(\nu )=(-1)^na_n(\nu )-\sum _{m=0}^{n-1}(-1)^{n-m}\alpha _m(\nu )a_{n-m}(\nu +1), \end{aligned}$$

with the coefficients \(a_n(\nu )\) given in (3.6). Note that by using the above recurrence relation, we arrive at the following explicit expressions

$$\begin{aligned} \alpha _2(\nu )&=\alpha _3(\nu )=\frac{1}{2}\left( \nu +\frac{1}{2}\right) \left( \nu +\frac{3}{2}\right) ,\\ \alpha _4(\nu )&=\frac{1}{8}\left( \nu +\frac{1}{2}\right) \left( \nu +\frac{3}{2}\right) \left( \nu +\frac{7}{2}\right) \left( \frac{3}{2}-\nu \right) ,\\ \alpha _5(\nu )&=\frac{1}{2}\left( \nu +\frac{1}{2}\right) \left( \nu +\frac{3}{2}\right) \left( \frac{9}{4}-2\nu -\nu ^2\right) ,\\ \alpha _6(\nu )&=\frac{1}{16}\left( \nu +\frac{1}{2}\right) \left( \nu +\frac{3}{2}\right) \left( \nu ^4+4\nu ^3-\frac{45}{2}\nu ^2-53\nu +\frac{633}{16}\right) . \end{aligned}$$

Now, if we consider the sequence of functions \(\{\pi _n(\nu )\}_{n\ge 2},\) defined by

$$\begin{aligned} \pi _n(\nu )=\alpha _n(\nu )-\left( \nu +\frac{1}{2}\right) ^n, \end{aligned}$$

then we obtain that

$$\begin{aligned} \pi _2(\nu )&=\frac{1}{2}\left( \nu +\frac{1}{2}\right) \left( \frac{1}{2}-\nu \right) ,\\ \pi _3(\nu )&=\left( \nu +\frac{1}{2}\right) (\nu +1)\left( \frac{1}{2}-\nu \right) ,\\ \pi _4(\nu )&=\frac{9}{8}\left( \nu +\frac{1}{2}\right) \left( \frac{1}{2}-\nu \right) \left( \nu ^2+\frac{20}{9}\nu +\frac{55}{36}\right) ,\\ \pi _5(\nu )&=\left( \nu +\frac{1}{2}\right) \left( \frac{1}{2} -\nu \right) \left( \nu ^3+3\nu ^2+\frac{19}{4}\nu +\frac{13}{4}\right) ,\\ \pi _6(\nu )&=\frac{15}{16}\left( \nu +\frac{1}{2}\right) \left( \frac{1}{2}-\nu \right) \left( \nu ^4+\frac{14}{5}\nu ^3+\frac{31}{6}\nu ^2+\frac{97}{10}\nu +\frac{1883}{240}\right) . \end{aligned}$$

Observe that \(\pi _n(\nu )>0\) for all \(n\in \{2,3,4,5,6\}\) and \(|\nu |<\frac{1}{2},\) and based on computer experiments our conjecture is that \(\pi _n(\nu )>0\) for all \(n\ge 2\) and \(|\nu |<\frac{1}{2}.\) This conjecture was also motivated by the fact that when \(t>t_{\nu }\) and \(|\nu |<\frac{1}{2}\) we have that \(v_{\nu }(t)>h_{\nu }(t).\)
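The recurrence and the conjecture can be explored in exact rational arithmetic. Since (3.6) is not reproduced here, the sketch below assumes that \(a_n(\nu )\) are the standard coefficients of the asymptotic expansion \(I_{\nu }(t)\sim \frac{e^t}{\sqrt{2\pi t}}\sum _{n\ge 0}(-1)^n a_n(\nu )t^{-n},\) namely \(a_n(\nu )=\prod _{k=1}^{n}\left( 4\nu ^2-(2k-1)^2\right) /(n!\,8^n);\) with this assumption the recurrence reproduces the explicit expressions above:

```python
import math
from fractions import Fraction as F

def a(n, nu):
    """Assumed form of the coefficients in (3.6): the standard asymptotic-expansion
       coefficients a_n(nu) = (4nu^2-1^2)(4nu^2-3^2)...(4nu^2-(2n-1)^2)/(n! 8^n)."""
    p = F(1)
    for k in range(1, n + 1):
        p *= 4 * nu * nu - (2 * k - 1) ** 2
    return p / (math.factorial(n) * 8 ** n)

def alphas(N, nu):
    """alpha_0, ..., alpha_N from the recurrence in the text."""
    al = [F(1)]
    for n in range(1, N + 1):
        al.append((-1) ** n * a(n, nu)
                  - sum((-1) ** (n - m) * al[m] * a(n - m, nu + 1) for m in range(n)))
    return al

nu = F(1, 4)                      # exact rational sample point with |nu| < 1/2
al = alphas(12, nu)
assert al[1] == nu + F(1, 2)
assert al[2] == al[3] == F(1, 2) * (nu + F(1, 2)) * (nu + F(3, 2))
# pi_n(nu) = alpha_n(nu) - (nu + 1/2)^n is positive for 2 <= n <= 6, as in the text
assert all(al[n] - (nu + F(1, 2)) ** n > 0 for n in range(2, 7))
print([al[n] - (nu + F(1, 2)) ** n > 0 for n in range(7, 13)])  # probes the conjecture further
```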

Finally, it is worth mentioning that in [12, p. 581] the author mentioned that computer-generated pictures suggest that for \(\nu \in \left( -1,\frac{1}{2}\right) \) the first derivative of the function \(t\mapsto \sqrt{t}e^{-t}I_{\nu }(t)\) changes sign on \([0,\infty ).\) An elementary argument shows that the fact that the equation \(y_{\nu }(t)=t-\frac{1}{2}\) has a unique solution in t when \(t\in [0,\infty )\) and \(\nu \in \left( -1,\frac{1}{2}\right) \) is equivalent to the following result: for \(\nu \in \left( -1,\frac{1}{2}\right) \) the first derivative of the function \(t\mapsto \sqrt{t}e^{-t}I_{\nu }(t)\) indeed changes sign on \([0,\infty ),\) and only once. This confirms the claim stated in [12].

Fig. 5

The graph of the function \(\zeta _{a,\alpha ,\nu }\) for \(a=3,\) \(\alpha =7\) and \(\nu \in \{-0.7,-0.5,\ldots ,1.7\}\)

4.5 Convexity with Respect to x and Inflection Points

In view of (4.1) we obtain

$$\begin{aligned} \varphi _{a,\alpha ,\nu }''(x)=\frac{\alpha \sqrt{\frac{\alpha }{2 \pi }}}{a^5 I_{\nu +\frac{1}{2}}(\alpha )}\cdot \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu -3}I_{\nu -1} \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) \cdot \zeta _{a,\alpha ,\nu }(x),\end{aligned}$$
(4.9)

where

$$\begin{aligned} \zeta _{a,\alpha ,\nu }(x)=\alpha x^2\sqrt{1-\left( \frac{x}{a}\right) ^2}\cdot \frac{I_{\nu }\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) +I_{\nu -2}\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) }{2I_{\nu -1} \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) }+\nu x^2-a^2. \end{aligned}$$

Thus, the inflection points of the probability density function \(\varphi _{a,\alpha ,\nu }\) are given by the transcendental equation \(\zeta _{a,\alpha ,\nu }(x)=0.\) For \(a,\alpha >0\) and \(\nu \ge 0\) this equation clearly has at least two solutions in x since the function \(\zeta _{a,\alpha ,\nu }\) is even, \(\zeta _{a,\alpha ,\nu }(0)=-a^2<0\) and \(\zeta _{a,\alpha ,\nu }(x)>0\) for \(\nu x^2\ge a^2.\) Fig. 5, which resembles the wings of a beautiful flying adder, illustrates the inflection points of the probability density function. This figure suggests that the positive zeros of \(\zeta _{a,\alpha ,\nu }\) are decreasing with respect to \(\nu ,\) and hence the negative zeros of \(\zeta _{a,\alpha ,\nu }\) are increasing with respect to \(\nu .\)

Note also that for \(a>0,\) \(\alpha >0\) and \(\nu \ge 0\) such that \(\nu x^2\ge a^2\) we deduce that \(\varphi _{a,\alpha ,\nu }''(x)>0,\) and thus for \(a>|x|\ge \frac{a}{\sqrt{\nu }}\) the probability density function \(\varphi _{a,\alpha ,\nu }\) is convex. By using some known bounds for the logarithmic derivative of the modified Bessel function of the first kind we can slightly improve the range of convexity. The idea is to use the change of variables \(t=\alpha g_a(x)\) and the recurrence relation (4.2). With this change of variables, \(\zeta _{a,\alpha ,\nu }(x)\) becomes

$$\begin{aligned} \zeta _{a,\alpha ,\nu }(x) =x^2\left[ \frac{tI_{\nu -1}'(t)}{I_{\nu -1}(t)}+\nu -\frac{a^2}{x^2}\right] .\end{aligned}$$
(4.10)

By using the inequalities (4.4) we find that

$$\begin{aligned} \zeta _{a,\alpha ,\nu }(x)>x^2\left[ \sqrt{t^2+\nu ^2} +\nu -1-\frac{a^2}{x^2}\right] >x^2\left( 2\nu -1-\frac{a^2}{x^2}\right) \ge 0 \end{aligned}$$

for \((2\nu -1)x^2\ge a^2\) and

$$\begin{aligned} \zeta _{a,\alpha ,\nu }(x)< & {} x^2\left[ \sqrt{t^2+\left( \nu -\frac{1}{2}\right) ^2} +\nu -\frac{1}{2}-\frac{a^2}{x^2}\right] \\< & {} x^2\left[ \sqrt{\alpha ^2+\left( \nu -\frac{1}{2}\right) ^2} +\nu -\frac{1}{2}-\frac{a^2}{x^2}\right] \le 0 \end{aligned}$$

for \(\left( \sqrt{\alpha ^2+\left( \nu -\frac{1}{2}\right) ^2}+\nu -\frac{1}{2}\right) x^2\le a^2.\) Here in the first case \(a>0,\) \(\alpha >0\) and \(\nu >\frac{1}{2},\) while in the second case \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}.\) In other words, for all \(a>0,\) \(\alpha >0\) and \(\nu >\frac{1}{2}\) the probability density function \(\varphi _{a,\alpha ,\nu }\) is convex for \(|x|\ge \frac{a}{\sqrt{2\nu -1}},\) and for all \(a>0,\) \(\alpha >0\) and \(\nu \ge \frac{1}{2}\) it is concave for \(|x|\le \frac{a}{\sqrt{\sqrt{\alpha ^2+\left( \nu -\frac{1}{2}\right) ^2}+\nu -\frac{1}{2}}}.\) Thus, for \(a>0,\) \(\alpha >0\) and \(\nu >\frac{1}{2}\) the inflection points of the probability density function of the Kaiser–Bessel distribution are contained in

$$\begin{aligned} \left[ -\frac{a}{\sqrt{2\nu -1}},-\frac{a}{\sqrt{\sqrt{\alpha ^2+\left( \nu -\frac{1}{2}\right) ^2} +\nu -\frac{1}{2}}}\right] \bigsqcup \left[ \frac{a}{\sqrt{\sqrt{\alpha ^2+\left( \nu -\frac{1}{2}\right) ^2} +\nu -\frac{1}{2}}},\frac{a}{\sqrt{2\nu -1}}\right] . \end{aligned}$$
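At an arbitrary sample point (a=1, \(\alpha =2,\) \(\nu =3/2\)) the convexity and concavity bounds can be verified through (4.10); the series evaluator for \(I_{\nu }\) is an ad hoc helper, not from the paper:

```python
import math

def besseli(nu, t, terms=60):
    """Modified Bessel I_nu(t) from its power series (adequate for moderate t)."""
    return sum((t / 2.0) ** (2 * m + nu) / (math.factorial(m) * math.gamma(nu + m + 1))
               for m in range(terms))

def zeta(a, alpha, nu, x):
    """zeta_{a,alpha,nu}(x) via (4.10), with t = alpha*sqrt(1-(x/a)^2) and
       t I'_{nu-1}(t)/I_{nu-1}(t) = t I_nu(t)/I_{nu-1}(t) + (nu - 1)."""
    t = alpha * math.sqrt(1.0 - (x / a) ** 2)
    ld = t * besseli(nu, t) / besseli(nu - 1, t) + (nu - 1.0)
    return x * x * (ld + nu) - a * a

a, alpha, nu = 1.0, 2.0, 1.5
x_convex = a / math.sqrt(2 * nu - 1)           # phi'' > 0 for |x| >= x_convex
x_concave = a / math.sqrt(math.sqrt(alpha ** 2 + (nu - 0.5) ** 2) + nu - 0.5)
assert zeta(a, alpha, nu, x_convex) > 0 and zeta(a, alpha, nu, 0.85) > 0   # convex region
assert zeta(a, alpha, nu, x_concave) < 0 and zeta(a, alpha, nu, 0.30) < 0  # concave region
```

The sign of \(\zeta _{a,\alpha ,\nu }\) determines the sign of \(\varphi _{a,\alpha ,\nu }''\) by (4.9), since the prefactor there is positive.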

Moreover, the range of convexity can be further extended in the case when \(a>0\) and \(\alpha>\nu >1.\) Namely, by using the arithmetic–geometric mean inequality for \(I_{\nu }(t)\) and \(I_{\nu -2}(t)\) and the Turán type inequality (3.10) we obtain

$$\begin{aligned} \zeta _{a,\alpha ,\nu }(x)\ge x^2t\frac{\sqrt{I_{\nu }(t)I_{\nu -2}(t)}}{I_{\nu -1}(t)}+\nu x^2-a^2>x^2t\sqrt{\varsigma }+\nu x^2-a^2>(\nu +\alpha \varsigma )x^2-a^2\ge 0 \end{aligned}$$

whenever \(\frac{a}{\sqrt{\nu +\alpha \varsigma }}\le |x|\le \frac{a}{\sqrt{\nu }}\) with \(a>0,\) \(\alpha >0\) and \(\varsigma =1-\frac{1}{\nu }>0.\) This clearly implies that for \(a>0,\) \(\alpha >0\) and \(\nu >1\) the probability density function \(\varphi _{a,\alpha ,\nu }\) is convex for \(|x|\ge \frac{a}{\sqrt{\nu +\alpha \varsigma }}.\)

Finally, in view of (4.10) and by changing \(\nu \) to \(\nu -1\) in (i) and (ii) of Sect. 4.3, we obtain the following result.

Corollary 2

The next assertions are true:

  • (iii) If \(a>0,\) \(\alpha >0,\) \(\nu \ge 1\) and \(a>|x|\ge a\cdot r_{\alpha ,\nu -1},\) then the function \(x\mapsto \varphi _{a,\alpha ,\nu }(x)\) is convex; if \(a>0,\) \(\alpha >0,\) \(\nu \ge 1\) and \(|x|\le a\cdot r_{\alpha ,\nu -1},\) then the function \(x\mapsto \varphi _{a,\alpha ,\nu }(x)\) is concave;

  • (iv) If \(a>0,\) \(0<\nu <1,\) \(\alpha \) is large enough and \(a\cdot r_{\alpha ,\nu -1,1}\ge |x|\ge a\cdot r_{\alpha ,\nu -1,2},\) then the function \(x\mapsto \varphi _{a,\alpha ,\nu }(x)\) is convex; if \(a>0,\) \(0<\nu <1,\) \(\alpha \) is large enough and \(|x|\le a\cdot r_{\alpha ,\nu -1,2}\) or \(a\cdot r_{\alpha ,\nu -1,1}\le |x|<a,\) then the function \(x\mapsto \varphi _{a,\alpha ,\nu }(x)\) is concave.

4.6 Monotonicity of the Probability Density Function with Respect to \(\nu \)

By using the notations \(t=\alpha s\) and \(s=g_a(x),\) we focus on the following expression

$$\begin{aligned} \frac{\partial }{\partial \nu }\left[ \frac{I_{\nu }(t)}{I_{\nu +\frac{1}{2}}(\alpha )}\right] =\lim _{\varepsilon \rightarrow 0} \frac{I_{\nu +\varepsilon }(t)I_{\nu +\frac{1}{2}}(\alpha )-I_{\nu }(t)I_{\nu +\varepsilon +\frac{1}{2}} (\alpha )}{\varepsilon \cdot I_{\nu +\frac{1}{2}}(\alpha )I_{\nu +\varepsilon +\frac{1}{2}}(\alpha )} \end{aligned}$$

and in what follows our aim is to determine its sign. For this we recall the following Cauchy product formula for the product of Bessel functions of the first kind (see [66, p. 148])

$$\begin{aligned} J_{\mu }(az)J_{\nu }(bz)=\frac{(az)^{\mu }(bz)^{\nu }}{2^{\mu +\nu } \Gamma (\nu +1)}\sum _{m\ge 0}\frac{(-1)^m\cdot {}_2F_1\left[ \left. \begin{array}{c} -m, -\mu -m\\ \nu +1\end{array}\right| \displaystyle \frac{b^2}{a^2} \right] }{2^{2m} \Gamma (\mu +m+1)}\cdot \frac{(az)^{2m}}{m!}, \end{aligned}$$

which is valid for \(\mu ,\nu >-1,\) \(a,b\in {\mathbb {R}}\) and \(z\in {\mathbb {C}}.\) Observe that if we replace z by \(\textrm{i}z\) in this equation and use the relation \(I_{\nu }(z)=\textrm{i}^{-\nu }J_{\nu }(\textrm{i}z),\) then we arrive at the following interesting formula for the product of modified Bessel functions of the first kind

$$\begin{aligned} I_{\mu }(az)I_{\nu }(bz)=\frac{(az)^{\mu }(bz)^{\nu }}{2^{\mu +\nu }\Gamma (\nu +1)}\sum _{m\ge 0}\frac{{}_2F_1\left[ \left. \begin{array}{c} -m, -\mu -m\\ \nu +1\end{array}\right| \displaystyle \frac{b^2}{a^2} \right] }{2^{2m}\Gamma (\mu +m+1)}\cdot \frac{(az)^{2m}}{m!}.\end{aligned}$$
(4.11)
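Formula (4.11) lends itself to a direct numerical test. In the sketch below the terminating \({}_2F_1\) polynomial is evaluated via \((-1)^n(-\mu -m)_n=(\mu +m)(\mu +m-1)\cdots (\mu +m-n+1),\) and all parameter values are arbitrary sample points:

```python
import math

def besseli(nu, t, terms=60):
    """Modified Bessel I_nu(t) from its power series."""
    return sum((t / 2.0) ** (2 * m + nu) / (math.factorial(m) * math.gamma(nu + m + 1))
               for m in range(terms))

def f21(m, mu, nu, x):
    """Terminating 2F1(-m, -mu-m; nu+1; x), via falling/rising factorials."""
    total, fall, rise = 0.0, 1.0, 1.0
    for n in range(m + 1):
        if n > 0:
            fall *= mu + m - (n - 1)   # (-1)^n (-mu-m)_n = (mu+m)(mu+m-1)...
            rise *= nu + n             # (nu+1)_n
        total += math.comb(m, n) * fall / rise * x ** n
    return total

def rhs(mu, nu, a, b, z, terms=40):
    """Right-hand side of (4.11)."""
    pref = (a * z) ** mu * (b * z) ** nu / (2 ** (mu + nu) * math.gamma(nu + 1))
    return pref * sum(f21(m, mu, nu, (b / a) ** 2) * (a * z) ** (2 * m)
                      / (4 ** m * math.gamma(mu + m + 1) * math.factorial(m))
                      for m in range(terms))

mu, nu, a, b, z = 0.7, 0.3, 1.0, 0.6, 1.5
lhs = besseli(mu, a * z) * besseli(nu, b * z)
assert abs(lhs - rhs(mu, nu, a, b, z)) < 1e-10
```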

Choosing in (4.11) the values \(z=\alpha ,\) \(b=s,\) \(a=1,\) \(\mu =\nu +\frac{1}{2}\) and changing \(\nu \) to \(\nu +\varepsilon ,\) we find that

$$\begin{aligned} I_{\nu +\varepsilon }(t)I_{\nu +\frac{1}{2}}(\alpha ) =\frac{s^{\nu +\varepsilon }\alpha ^{2\nu +\varepsilon +\frac{1}{2}}}{2^{2\nu +\varepsilon +\frac{1}{2}}\Gamma (\nu +\varepsilon +1)}\cdot \sum _{m\ge 0}\frac{{}_2F_1\left[ \left. \begin{array}{c} -m, -\nu -m-\frac{1}{2}\\ \nu +\varepsilon +1\end{array}\right| s^2 \right] }{2^{2m}\Gamma (\nu +m+\frac{3}{2})}\cdot \frac{\alpha ^{2m}}{m!}.\nonumber \\\end{aligned}$$
(4.12)

By choosing \(z=\alpha ,\) \(b=s,\) \(a=1\) and \(\mu =\nu +\varepsilon +\frac{1}{2}\) in (4.11), a similar argument shows that

$$\begin{aligned} I_{\nu }(t)I_{\nu +\varepsilon +\frac{1}{2}}(\alpha )=\frac{s^{\nu }\alpha ^{2\nu +\varepsilon +\frac{1}{2}}}{2^{2\nu +\varepsilon +\frac{1}{2}}\Gamma (\nu +1)}\cdot \sum _{m\ge 0}\frac{{}_2F_1\left[ \left. \begin{array}{c} -m, -\nu -m-\varepsilon -\frac{1}{2}\\ \nu +1\end{array}\right| s^2 \right] }{2^{2m}\Gamma (\nu +m+\varepsilon +\frac{3}{2})}\cdot \frac{\alpha ^{2m}}{m!}. \end{aligned}$$
(4.13)

It is important to mention that

$$\begin{aligned} {}_2F_1\left[ \left. \begin{array}{c} -m, -\mu -m\\ \nu +1\end{array}\right| z \right] \end{aligned}$$

is a polynomial and is equal to

$$\begin{aligned} \lim _{b\rightarrow -\mu -m}{}_2F_1\left[ \left. \begin{array}{c} -m, b\\ \nu +1\end{array}\right| z \right] =\lim _{b\rightarrow -\mu -m} \sum _{n=0}^m(-1)^n\left( \begin{array}{c}m\\ n\end{array}\right) \frac{(b)_n}{(\nu +1)_n}z^n=\sum _{n=0}^m\left( \begin{array}{c}m\\ n\end{array}\right) \frac{(\mu +m)_n}{(\nu +1)_n}z^n. \end{aligned}$$

Combining this with (4.12) and (4.13) we arrive at

$$\begin{aligned} \frac{I_{\nu +\varepsilon }(t)I_{\nu +\frac{1}{2}}(\alpha )-I_{\nu } (t)I_{\nu +\varepsilon +\frac{1}{2}}(\alpha )}{\varepsilon \cdot I_{\nu +\frac{1}{2}}(\alpha )I_{\nu +\varepsilon +\frac{1}{2}}(\alpha )}= \frac{s^{\nu }\left( \frac{1}{2}\alpha \right) ^{2\nu +\varepsilon +\frac{1}{2}}}{\varepsilon \cdot I_{\nu +\frac{1}{2}}(\alpha )I_{\nu +\varepsilon +\frac{1}{2}}(\alpha )}\cdot \sum _{m\ge 0}\Delta _{\nu ,m,\varepsilon }(s)\frac{\alpha ^{2m}}{2^{2m}m!}, \end{aligned}$$

where

$$\begin{aligned} \Delta _{\nu ,m,\varepsilon }(s)&=\frac{s^{\varepsilon }}{\Gamma (\nu +\varepsilon +1)}\frac{{}_2F_1\left[ \left. \begin{array}{c} -m, -\nu -m-\frac{1}{2}\\ \nu +\varepsilon +1\end{array}\right| s^2 \right] }{\Gamma (\nu +m+\frac{3}{2})}\\&\quad -\frac{1}{\Gamma (\nu +1)}\frac{{}_2F_1\left[ \left. \begin{array}{c} -m, -\nu -m-\varepsilon -\frac{1}{2}\\ \nu +1\end{array}\right| s^2 \right] }{\Gamma (\nu +m +\varepsilon +\frac{3}{2})}\\&=\sum _{n=0}^m\left( \begin{array}{c}m\\ n\end{array}\right) \left[ \frac{s^{\varepsilon }\Gamma \left( \nu +m+n+\frac{1}{2}\right) }{\Gamma \left( \nu +m+\frac{3}{2}\right) \Gamma (\nu +n+\varepsilon +1)}\right. \\&\quad \left. - \frac{\Gamma \left( \nu +m+n+\varepsilon +\frac{1}{2}\right) }{\Gamma \left( \nu +m+\varepsilon +\frac{3}{2}\right) \Gamma (\nu +n+1)}\right] \cdot s^{2n}.\end{aligned}$$

Now, since

$$\begin{aligned} \delta _{\nu ,m,n}(s)&=\lim _{\varepsilon \rightarrow 0}\frac{1}{\varepsilon }\left[ \frac{s^{\varepsilon }\Gamma \left( \nu +m+n+\frac{1}{2}\right) }{\Gamma \left( \nu +m+\frac{3}{2}\right) \Gamma (\nu +n+\varepsilon +1)}\right. \\&\quad \left. - \frac{\Gamma \left( \nu +m+n+\varepsilon +\frac{1}{2}\right) }{\Gamma \left( \nu +m+\varepsilon +\frac{3}{2}\right) \Gamma (\nu +n+1)}\right] \\ {}&= \frac{\Gamma \left( \nu +m+n+\frac{1}{2}\right) }{\Gamma \left( \nu +m+\frac{3}{2}\right) \Gamma (\nu +n+1)}\left[ \psi \left( \nu +m+\frac{3}{2}\right) \right. \\&\quad \left. -\psi \left( \nu +m+n+\frac{1}{2}\right) -\psi \left( \nu +n+1\right) +\ln s\right] , \end{aligned}$$

we deduce that

$$\begin{aligned} \frac{\partial }{\partial \nu }\left[ \frac{I_{\nu }(t)}{I_{\nu +\frac{1}{2}}(\alpha )}\right] = \frac{s^{\nu }\left( \frac{1}{2}\alpha \right) ^{2\nu +\frac{1}{2}}}{I_{\nu +\frac{1}{2}}^2(\alpha )}\cdot \sum _{m\ge 0}\left[ \sum _{n=0}^m\left( \begin{array}{c}m\\ n\end{array}\right) \delta _{\nu ,m,n}(s)\cdot s^{2n}\right] \frac{\alpha ^{2m}}{2^{2m}m!},\nonumber \\ \end{aligned}$$
(4.14)

where \(\psi (x)=\Gamma '(x)/\Gamma (x)\) is the logarithmic derivative of the Euler gamma function. It is well-known that the gamma function is log-convex on \((0,\infty )\), and hence the logarithmic derivative of the gamma function is increasing on \((0,\infty ).\) On the other hand we see that \(\ln s<0,\) since \(s<1.\) These in turn imply that for each \(m\ge n\ge 1\) and \(\nu >-\frac{1}{2}\) the coefficients \(\delta _{\nu ,m,n}(s)\) are all strictly negative. From this we readily see that the sign of (4.14) depends on the sign of \(\delta _{\nu ,m,0}(s),\) that is,

$$\begin{aligned}\delta _{\nu ,m,0}(s)&=\frac{\Gamma \left( \nu +m+\frac{1}{2}\right) }{\Gamma \left( \nu +m+\frac{3}{2}\right) \Gamma (\nu +1)}\left[ \psi \left( \nu +m+\frac{3}{2}\right) -\psi \left( \nu +m+\frac{1}{2}\right) -\psi \left( \nu +1\right) +\ln s\right] \\ {}&=\frac{\Gamma \left( \nu +m+\frac{1}{2}\right) }{\Gamma \left( \nu +m+\frac{3}{2}\right) \Gamma (\nu +1)}\left[ \frac{1}{\nu +m+\frac{1}{2}}-\psi \left( \nu +1\right) +\ln s\right] .\end{aligned}$$
Fig. 6

The graph of the function \(\varphi _{a,\alpha ,\nu }\) for \(a=11,\) \(\alpha =7\) and \(\nu \in \{-0.79,-0.54,\ldots ,1.46\}\)

Here in the last step we used the recurrence relation for the digamma function, that is,

$$\begin{aligned} \psi (z+1)=\psi (z)+\frac{1}{z}. \end{aligned}$$

Observe that for all \(m\ge 0\) and \(\nu \ge \nu _0\) we arrive at

$$\begin{aligned} \frac{1}{\nu +m+\frac{1}{2}}-\psi \left( \nu +1\right) +\ln s<\frac{1}{\nu +\frac{1}{2}}-\psi (\nu +1)\le 0, \end{aligned}$$

where \(\nu _0\simeq 1.24873582438346{\cdots }\) is the unique positive root of the equation \(\psi (\nu +1)=\frac{1}{\nu +\frac{1}{2}},\) and we find that \(\delta _{\nu ,m,0}(s)\) is strictly negative for all \(m\ge 0\) and \(\nu \ge \nu _0.\) From this we readily see that the sign of (4.14) is negative for all \(a>0,\) \(\alpha >0,\) \(\nu \ge \nu _0\) and \(|x|<a,\) and since \(\nu \mapsto s^{\nu }\) is decreasing on \((0,\infty ),\) we deduce that the probability density function of the Kaiser–Bessel distribution is decreasing with respect to the parameter \(\nu \) on the interval \((\nu _0,\infty ).\) In other words, \(\nu \mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing on \((\nu _0,\infty )\) for all \(a>0,\) \(\alpha >0\) and \(|x|<a.\) Moreover, because the coefficients \(\delta _{\nu ,m,0}(s)\) contain the term \(\ln s,\) we may obtain other monotonicity results with respect to \(\nu ,\) but on subintervals of \([-a,a].\) For example, by using the elementary inequality \(\ln (1-u)<-u\) for \(u=x^2/a^2<1,\) we see that for \(a>|x|\ge \frac{a}{\sqrt{\nu +\frac{1}{2}}},\) \(\nu \ge \nu _1\) and \(m\ge 0\) we have

$$\begin{aligned} \frac{1}{\nu +m+\frac{1}{2}}-\psi \left( \nu +1\right) +\ln s\le \frac{1}{\nu +\frac{1}{2}}-\psi (\nu +1)+\ln s<\frac{1}{2}\frac{1}{\nu +\frac{1}{2}}-\psi (\nu +1)\le 0,\nonumber \\\end{aligned}$$
(4.15)

where \(\nu _1\simeq 0.901017015767612{\cdots }\) is the unique positive root of the equation \(2\psi (\nu +1)=\frac{1}{\nu +\frac{1}{2}}.\) Consequently, we deduce that \(\nu \mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing on \((\nu _1,\infty )\) for all \(a>0,\) \(\alpha >0\) and \(\frac{a}{\sqrt{\nu +\frac{1}{2}}}\le |x|<a.\) It is possible to broaden the domain of x in the above monotonicity result. For this we emphasize that a similar argument as in the derivation of (4.14) shows that

$$\begin{aligned} \frac{\partial }{\partial \nu }\left[ \frac{s^{\nu }I_{\nu }(t)}{I_{\nu +\frac{1}{2}}(\alpha )}\right] = \frac{s^{2\nu }\left( \frac{1}{2}\alpha \right) ^{2\nu +\frac{1}{2}}}{I_{\nu +\frac{1}{2}}^2(\alpha )}\cdot \sum _{m\ge 0}\left[ \sum _{n=0}^m\left( \begin{array}{c}m\\ n\end{array}\right) \delta _{\nu ,m,n}(s^2)\cdot s^{2n}\right] \frac{\alpha ^{2m}}{2^{2m}m!} \end{aligned}$$

and changing s to \(s^2\) in (4.15) we deduce that \(\nu \mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing on \((\nu _1,\infty )\) for all \(a>0,\) \(\alpha >0\) and \(\frac{a}{\sqrt{2\nu +1}}\le |x|<a.\)
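The constants \(\nu _0\) and \(\nu _1\) can be reproduced with elementary tools: a digamma evaluator built from the recurrence \(\psi (x+1)=\psi (x)+1/x\) and the asymptotic series, plus plain bisection (this helper is not from the paper):

```python
import math

def digamma(x):
    """psi(x): shift up with psi(x) = psi(x+1) - 1/x, then use the asymptotic
       series psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + ... (accurate for x >= 10)."""
    r = 0.0
    while x < 10.0:
        r -= 1.0 / x
        x += 1.0
    u = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - u * (1/12 - u * (1/120 - u * (1/252 - u / 240)))

def root(f, lo, hi):
    """Plain bisection; assumes a sign change on [lo, hi]."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

nu0 = root(lambda v: digamma(v + 1) - 1.0 / (v + 0.5), 0.5, 2.0)
nu1 = root(lambda v: 2.0 * digamma(v + 1) - 1.0 / (v + 0.5), 0.5, 2.0)
print(nu0)  # ~1.24873582438346
print(nu1)  # ~0.901017015767612
```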

Summarizing, we have obtained the following monotonicity results.

Theorem 7

The function \(\nu \mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing on \((\nu _0,\infty )\) for all \(a>0,\) \(\alpha >0\) and \(|x|<a,\) and is decreasing on \((\nu _1,\infty )\) for all \(a>0,\) \(\alpha >0\) and \(\frac{a}{\sqrt{2\nu +1}}\le |x|<a.\)

These monotonicity properties are illustrated in Fig. 6. This figure also resembles a beautiful colored pashmina, and it is interesting to note the similarity between the behavior of the probability density function with respect to \(\alpha \) and \(\nu .\) Figure 7 also illustrates the monotonicity behavior of the function \(\nu \mapsto \varphi _{a,\alpha ,\nu }(x).\) From this figure we can see that, for example, the function \(\nu \mapsto \varphi _{2,1,\nu }(1)\) is decreasing on \((1,\infty ),\) and this shows that the number \(\nu _0\) is not optimal in the sense that it is not the critical point of \(\nu \mapsto \varphi _{2,1,\nu }(1).\) All the same, the decreasing property of \(\nu \mapsto \varphi _{a,\alpha ,\nu }(x)\) on \((\nu _0,\infty )\) is quite interesting since it does not depend on \(a,\) \(\alpha \) and x. It is an interesting challenge to find the critical point of \(\nu \mapsto \varphi _{a,\alpha ,\nu }(x),\) which is a global maximum and is independent of \(a,\) \(\alpha \) and x.
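Theorem 7 can also be probed numerically. The density below is assembled from the expression differentiated in Sect. 4.8, and the sample point a=2, \(\alpha =1,\) x=1 matches the setting of Fig. 7:

```python
import math

def besseli(nu, t, terms=60):
    """Modified Bessel I_nu(t) from its power series."""
    return sum((t / 2.0) ** (2 * m + nu) / (math.factorial(m) * math.gamma(nu + m + 1))
               for m in range(terms))

def phi(a, alpha, nu, x):
    """Kaiser-Bessel density sqrt(alpha/(2 pi)) s^nu I_nu(alpha s)/(a I_{nu+1/2}(alpha)),
       s = sqrt(1-(x/a)^2), as read off from Sect. 4.8."""
    s = math.sqrt(max(0.0, 1.0 - (x / a) ** 2))
    return (math.sqrt(alpha / (2.0 * math.pi)) * s ** nu * besseli(nu, alpha * s)
            / (a * besseli(nu + 0.5, alpha)))

a, alpha, x = 2.0, 1.0, 1.0
vals = [phi(a, alpha, 1.25 + 0.25 * k, x) for k in range(20)]  # nu sweeps (nu_0, 6]
assert all(u > v for u, v in zip(vals, vals[1:]))              # strictly decreasing
```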

Fig. 7

The graph of the function \(\nu \mapsto \varphi _{a,\alpha ,\nu }(x)\) for the special values \(a=2,\) \(x=1\) and \(\alpha \in \{0.001,1,2,\ldots ,10\}\) on the interval [0, 20]

4.7 Monotonicity of the Abel Transform of the Density Function with Respect to \(\nu \)

Among other properties, the probability density function of the Kaiser–Bessel distribution satisfies the following integral relation

$$\begin{aligned} {\mathcal {A}}[\varphi _{a,\alpha ,\nu }] =2\int _0^{\sqrt{a^2-x^2}}\varphi _{a,\alpha ,\nu }\left( \sqrt{x^2+t^2}\right) {\textrm{d}}t= a\sqrt{\frac{2\pi }{\alpha }}\frac{I_{\nu +1}(\alpha )}{I_{\nu +\frac{1}{2}} (\alpha )}\varphi _{a,\alpha ,\nu +\frac{1}{2}}(x),\nonumber \\\end{aligned}$$
(4.16)

where \({\mathcal {A}}[\varphi _{a,\alpha ,\nu }]\) is the Abel transform of the probability density function, and the derivation of this integral equation can be done by using the Sonine first finite integral [66, p. 373]

$$\begin{aligned} J_{\mu +\nu +1}(z)=\frac{z^{\mu +1}}{2^{\mu }\Gamma (\mu +1)}\int _0^{\frac{\pi }{2}} J_{\nu }(z\sin \theta )\sin ^{\nu +1}\theta \cos ^{2\mu +1}\theta {\textrm{d}}\theta , \end{aligned}$$

where \(\mu ,\nu >-1\) and z is an arbitrary complex number. Namely, following Lewitt [41, p. 1844], and choosing \(\mu =-\frac{1}{2}\) in the above Sonine first finite integral, we arrive at

$$\begin{aligned} {\mathcal {A}}[\varphi _{a,\alpha ,\nu }]&=2\int _0^{\sqrt{a^2-x^2}}\varphi _{a,\alpha ,\nu } \left( \sqrt{x^2+t^2}\right) {\textrm{d}}t\\&=\frac{\sqrt{\frac{2\alpha }{\pi }}}{aI_{\nu +\frac{1}{2}}(\alpha )} \int _0^{\sqrt{a^2-x^2}}\left( \sqrt{1-\frac{x^2+t^2}{a^2}}\right) ^{\nu }I_{\nu } \left( \alpha \sqrt{1-\frac{x^2+t^2}{a^2}}\right) {\textrm{d}}t\\&=\frac{\sqrt{\frac{2\alpha }{\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )} \int _0^{\frac{\pi }{2}}\left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu +1}I_{\nu } \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\sin \theta \right) \sin ^{\nu +1}\theta {\textrm{d}}\theta \\&=\frac{\sqrt{\frac{2\alpha }{\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )} \int _0^{\frac{\pi }{2}}{\textrm{i}}^{-\nu }\left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu +1}J_{\nu } \left( \textrm{i}\alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\sin \theta \right) \sin ^{\nu +1}\theta {\textrm{d}}\theta \\&=\frac{{\textrm{i}}^{-\left( \nu +\frac{1}{2}\right) }}{I_{\nu +\frac{1}{2}}(\alpha )}\left( \sqrt{1 -\left( \frac{x}{a}\right) ^2}\right) ^{\nu +\frac{1}{2}}J_{\nu +\frac{1}{2}}\left( \textrm{i}\alpha \sqrt{1 -\left( \frac{x}{a}\right) ^2}\right) \\&=\frac{1}{I_{\nu +\frac{1}{2}}(\alpha )}\left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu +\frac{1}{2}}I_{\nu +\frac{1}{2}}\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) \\&=a\sqrt{\frac{2\pi }{\alpha }}\frac{I_{\nu +1}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}\varphi _{a,\alpha ,\nu +\frac{1}{2}}(x), \end{aligned}$$

where we used the change of variables \(t=\sqrt{a^2-x^2}\cos \theta \) in the first step and later the connection between Bessel and modified Bessel functions of the first kind.
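Identity (4.16) can be verified by direct quadrature; the sketch below uses a composite Simpson rule and arbitrary sample parameters (the series evaluator is an ad hoc helper):

```python
import math

def besseli(nu, t, terms=60):
    """Modified Bessel I_nu(t) from its power series."""
    return sum((t / 2.0) ** (2 * m + nu) / (math.factorial(m) * math.gamma(nu + m + 1))
               for m in range(terms))

def phi(a, alpha, nu, x):
    """Kaiser-Bessel density, as in Sect. 4.8 (clamped at |x| = a)."""
    s = math.sqrt(max(0.0, 1.0 - (x / a) ** 2))
    return (math.sqrt(alpha / (2.0 * math.pi)) * s ** nu * besseli(nu, alpha * s)
            / (a * besseli(nu + 0.5, alpha)))

def simpson(f, lo, hi, n=2000):
    """Composite Simpson rule (n even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if k % 2 else 2) * f(lo + k * h) for k in range(1, n))
    return s * h / 3.0

a, alpha, nu, x = 1.0, 2.0, 1.0, 0.4
lhs = 2.0 * simpson(lambda t: phi(a, alpha, nu, math.sqrt(x * x + t * t)),
                    0.0, math.sqrt(a * a - x * x))
rhs = (a * math.sqrt(2.0 * math.pi / alpha)
       * besseli(nu + 1.0, alpha) / besseli(nu + 0.5, alpha)
       * phi(a, alpha, nu + 0.5, x))
assert abs(lhs - rhs) < 1e-8
```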

Now, observe that by using the Chu–Vandermonde identity

$$\begin{aligned} {}_2F_1\left[ \left. \begin{array}{c} -m, -\mu -m\\ \nu +1\end{array}\right| 1 \right] =\frac{(\mu +\nu +m+1)_m}{(\nu +1)_m} \end{aligned}$$

the relation (4.11) reduces to a well-known and frequently used formula for modified Bessel functions of the first kind

$$\begin{aligned} I_{\mu }(z)I_{\nu }(z)=\frac{z^{\mu +\nu }}{2^{\mu +\nu }}\sum _{m\ge 0} \frac{\Gamma (\mu +\nu +2m+1)}{\Gamma (\mu +\nu +m+1)\Gamma (\mu +m+1) \Gamma (\nu +m+1)}\cdot \frac{z^{2m}}{2^{2m}m!}, \end{aligned}$$

which holds for all \(\mu ,\nu >-1\) and \(z\in {\mathbb {C}}.\) Applying this product formula for the modified Bessel functions of the first kind, we arrive at

$$\begin{aligned} \frac{\partial }{\partial \nu }\left[ \frac{I_{\nu +1}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}\right]&=\lim _{\varepsilon \rightarrow 0} \frac{I_{\nu +\frac{1}{2}}(\alpha )I_{\nu +\varepsilon +1}(\alpha )-I_{\nu +1}(\alpha )I_{\nu +\varepsilon +\frac{1}{2}}(\alpha )}{\varepsilon \cdot I_{\nu +\frac{1}{2}}(\alpha )I_{\nu +\varepsilon +\frac{1}{2}}(\alpha )}\\&=\frac{\left( \frac{\alpha }{2}\right) ^{2\nu +\frac{3}{2}}}{I_{\nu +\frac{1}{2}}^2(\alpha )}\sum _{n\ge 0}\frac{\Gamma \left( 2\nu +2n+\frac{5}{2}\right) }{\Gamma \left( 2\nu +n +\frac{5}{2}\right) } \cdot \frac{\psi \left( \nu +n+\frac{3}{2}\right) -\psi (\nu +n+2)}{\Gamma \left( \nu +n +\frac{3}{2}\right) \Gamma (\nu +n+2)}\cdot \frac{\alpha ^{2n}}{2^{2n}n!}, \end{aligned}$$

which is clearly negative for each \(\alpha >0\) and \(\nu >-\frac{3}{2}.\) Thus, the function \(\nu \mapsto {I_{\nu +1}(\alpha )}/{I_{\nu +\frac{1}{2}}(\alpha )}\) is decreasing on \(\left[ -\frac{3}{2},\infty \right) \) for all \(\alpha >0.\) Combining this result with the results of the previous subsection, in view of (4.16) we obtain the following results for the Abel transform of the probability density function of the Kaiser–Bessel distribution.

Corollary 3

The function \(\nu \mapsto {\mathcal {A}}[\varphi _{a,\alpha ,\nu }]\) is decreasing on \(\left( \nu _0-\frac{1}{2},\infty \right) \) for all \(a>0,\) \(\alpha >0\) and \(|x|<a;\) the function \(\nu \mapsto {\mathcal {A}}[\varphi _{a,\alpha ,\nu }]\) is decreasing on \(\left( \nu _1-\frac{1}{2},\infty \right) \) for all \(a>0,\) \(\alpha >0\) and \(\frac{a}{\sqrt{2\nu +1}}\le |x|<a.\)

4.8 Monotonicity of the Probability Density Function with Respect to \(\alpha \)

By using the recurrence relation (4.2) twice, and the Mittag–Leffler expansion for modified Bessel functions of the first kind (3.13), we see that

$$\begin{aligned}&\frac{\partial }{\partial \alpha }\left[ \sqrt{\frac{\alpha }{2\pi }} \cdot \frac{s^{\nu }I_{\nu }(\alpha s)}{aI_{\nu +\frac{1}{2}}(\alpha )}\right] \\&\quad = \frac{s^{\nu }I_{\nu }(t)}{2a\sqrt{2\pi \alpha }\cdot I_{\nu +\frac{1}{2}}(\alpha )}\left[ 1-\alpha \cdot \frac{I_{\nu -\frac{1}{2}} (\alpha )+I_{\nu +\frac{3}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )} +t\cdot \frac{I_{\nu -1}(t)+I_{\nu +1}(t)}{I_{\nu }(t)}\right] \\&\quad =\frac{s^{\nu }I_{\nu }(t)}{a\sqrt{2\pi \alpha }\cdot I_{\nu +\frac{1}{2}}(\alpha )}\left[ \frac{1}{2}-\frac{\alpha I_{\nu +\frac{1}{2}}'(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}+\frac{tI_{\nu }'(t)}{I_{\nu }(t)}\right] \\&\quad =\frac{s^{\nu }I_{\nu }(t)}{a\sqrt{2\pi \alpha }\cdot I_{\nu +\frac{1}{2}}(\alpha )}\left[ \frac{tI_{\nu +1}(t)}{I_{\nu }(t)} -\frac{\alpha I_{\nu +\frac{3}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}\right] \\&\quad =\frac{2s^{\nu }I_{\nu }(t)}{a\sqrt{2\pi \alpha }\cdot I_{\nu +\frac{1}{2}}(\alpha )}\sum _{n\ge 1}\frac{\alpha ^2\left( s^2j_{\nu +\frac{1}{2},n}^2-j_{\nu ,n}^2\right) }{\left( t^2+j_{\nu ,n}^2\right) \left( \alpha ^2+j_{\nu +\frac{1}{2},n}^2\right) }.\end{aligned}$$

Note that, based on the Hellmann–Feynman theorem of quantum chemistry (see for example [33] for a rigorous proof), Lewis and Muldoon [40] proved that for fixed n the function \(\nu \mapsto j_{\nu ,n}/\nu \) is decreasing on \((0,\infty )\) and \(\nu \mapsto j_{\nu ,n}^2/\nu \) is increasing for sufficiently large \(\nu ;\) in particular, \(\nu \mapsto j_{\nu ,1}^2/\nu \) is increasing for \(\nu \ge 3.\) By using these results we immediately conclude that

$$\begin{aligned} \frac{j_{\nu ,n}^2}{\nu ^2}>\frac{j_{\nu +\frac{1}{2},n}^2}{\left( \nu +\frac{1}{2}\right) ^2}, \qquad \frac{j_{\nu ,n}^2}{\nu }<\frac{j_{\nu +\frac{1}{2},n}^2}{\nu +\frac{1}{2}}, \end{aligned}$$

where \(n\in {\mathbb {N}}\) and \(\nu >0\) in the first inequality, while in the second inequality \(\nu \) is sufficiently large; in particular, for \(n=1\) it suffices that \(\nu \ge 3.\) These in turn imply that

$$\begin{aligned} s^2j_{\nu +\frac{1}{2},n}^2-j_{\nu ,n}^2 \le \left( \frac{\nu }{\nu +\frac{1}{2}}\right) ^2j_{\nu +\frac{1}{2},n}^2-j_{\nu ,n}^2<0, \qquad \text{ as } \quad a>|x|\ge \frac{a\sqrt{4\nu +1}}{2\nu +1},\end{aligned}$$
(4.17)
$$\begin{aligned} s^2j_{\nu +\frac{1}{2},n}^2-j_{\nu ,n}^2\ge \frac{\nu }{\nu +\frac{1}{2}}j_{\nu +\frac{1}{2},n}^2-j_{\nu ,n}^2>0, \qquad \text{ as } \quad |x|\le \frac{a}{\sqrt{2\nu +1}},\nonumber \\ \end{aligned}$$
(4.18)

where \(n\in {\mathbb {N}},\) \(a>0\) and \(\nu >0\) in (4.17), while in (4.18) \(n\in {\mathbb {N}},\) \(a>0\) and \(\nu \) is sufficiently large. Thus the probability density function \(\alpha \mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing on \((0,\infty )\) for \(a>0,\) \(\nu >0\) and \(a>|x|\ge \frac{a\sqrt{4\nu +1}}{2\nu +1},\) and is increasing on \((0,\infty )\) for \(a>0\) and sufficiently large \(\nu \) such that \(|x|\le \frac{a}{\sqrt{2\nu +1}}.\) Moreover, it is possible to obtain a monotonicity property of a slightly different type by a similar argument. Namely, if we apply the fact that the function \(\nu \mapsto j_{\nu ,n}/j_{\nu ,1}\) is decreasing on \((-1,\infty )\) for all \(n\in {\mathbb {N}}\) fixed (see [43]), then we arrive at

$$\begin{aligned} s^2j_{\nu +\frac{1}{2},n}^2-j_{\nu ,n}^2\le \frac{j_{\nu ,1}^2}{j_{\nu +\frac{1}{2},1}^2}j_{\nu +\frac{1}{2},n}^2-j_{\nu ,n}^2<0, \qquad \text{ as } \quad a>|x|\ge \frac{a}{j_{\nu +\frac{1}{2},1}}\cdot \sqrt{j_{\nu +\frac{1}{2},1}^2-j_{\nu ,1}^2}, \end{aligned}$$

which implies that the probability density function \(\alpha \mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing on \((0,\infty )\) for \(a>0,\) \(\nu >-1\) and \(a>|x|\ge a\sqrt{j_{\nu +\frac{1}{2},1}^2-j_{\nu ,1}^2}\Big /{j_{\nu +\frac{1}{2},1}}.\) On the other hand, we know that \(\nu \mapsto j_{\nu ,n}\) is concave (according to Elbert [25]) and hence log-concave on \((-1,\infty )\) for each \(n\in {\mathbb {N}}\) fixed and so is the function \(\nu \mapsto j_{\nu ,1}.\) In view of this we see that \(\nu \mapsto j_{\nu +\frac{1}{2},1}/j_{\nu ,1}\) is clearly decreasing on \((-1,\infty )\) and consequently \(\nu \mapsto j_{\nu ,1}^2/j_{\nu +\frac{1}{2},1}^2\) is increasing on \((-1,\infty ),\) and \(\nu \mapsto 1-j_{\nu ,1}^2/j_{\nu +\frac{1}{2},1}^2\) is decreasing on \((-1,\infty ).\) From this we may obtain for example that the function \(\alpha \mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing on \((0,\infty )\) for \(a>0,\) \(\nu >0\) and \(a>|x|\ge a\sqrt{j_{\frac{1}{2},1}^2-j_{0,1}^2}\Big /{j_{\frac{1}{2},1}}.\) Note that \(j_{\frac{1}{2},1}=\pi \) and \(j_{0,1}\simeq 2.404825557695772{\cdots }\) and hence \(\sqrt{j_{\frac{1}{2},1}^2-j_{0,1}^2}\Big /{j_{\frac{1}{2},1}}\simeq 0.643459985555029{\cdots }.\)

The above monotonicity properties are illustrated in Fig. 1.
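The numerical constants appearing above are easy to reproduce; the following short Python check (SciPy, our own verification, not part of the original text) recomputes \(j_{0,1}\) and the ratio \(\sqrt{j_{\frac{1}{2},1}^2-j_{0,1}^2}\big /j_{\frac{1}{2},1}\):

```python
import math
from scipy.special import jn_zeros

# jn_zeros handles integer orders; for order 1/2 we use that
# J_{1/2}(x) is proportional to sin(x)/sqrt(x), so j_{1/2,1} = pi exactly.
j0_1 = jn_zeros(0, 1)[0]   # first positive zero of J_0
j_half_1 = math.pi

threshold = math.sqrt(j_half_1 ** 2 - j0_1 ** 2) / j_half_1
```

The computed values agree with \(j_{0,1}\simeq 2.4048255577\) and the threshold \(\simeq 0.6434599856\) quoted above.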

4.9 Other Bounds for the Ratio \(\varphi _{a,\alpha ,\nu +1}(x)/\varphi _{a,\alpha ,\nu }(x)\)

By using the monotonicity property of \(\nu \mapsto \varphi _{a,\alpha ,\nu }(x)\) we readily see that

$$\begin{aligned} \frac{\varphi _{a,\alpha ,\nu +1}(x)}{\varphi _{a,\alpha ,\nu }(x)} =\frac{tI_{\nu +1}(t)}{I_{\nu }(t)}\cdot \frac{I_{\nu +\frac{1}{2}}(\alpha )}{\alpha I_{\nu +\frac{3}{2}}(\alpha )}<1\end{aligned}$$
(4.19)

for all \(a>0,\) \(\alpha >0,\) \(\nu \ge \nu _0\) and \(|x|<a.\) We note that it is possible to deduce other upper bounds for the ratio \(\varphi _{a,\alpha ,\nu +1}(x)/\varphi _{a,\alpha ,\nu }(x)\) which are not so tight, but valid for a greater range of the parameter \(\nu .\) For example, we can deduce the following inequality

$$\begin{aligned} \frac{\varphi _{a,\alpha ,\nu +1}(x)}{\varphi _{a,\alpha ,\nu }(x)} <\frac{I_{\nu +\frac{1}{2}}(\alpha )}{I_{\nu +\frac{3}{2}}(\alpha )},\end{aligned}$$
(4.20)

which is valid for all \(a>0,\) \(\alpha >0,\) \(\nu \ge -\frac{1}{2},\) \(|x|<a\) and follows from the Soni inequality [62] \(I_{\nu }(t)>I_{\nu +1}(t)\) (valid for \(t>0\) and \(\nu \ge -\frac{1}{2}\)). In the case when \(\nu >0\) the inequality (4.20) follows also from the fact that the function \(\nu \mapsto I_{\nu +\frac{1}{2}}(\alpha )\varphi _{a,\alpha ,\nu }(x)\) is decreasing on \((0,\infty )\) for all \(a>0,\) \(\alpha >0\) and \(|x|<a.\) This latter monotonicity property is a direct consequence of the decreasing property of the function \(\nu \mapsto I_{\nu }(t)\) for \(t>0\) fixed, see [20]. A better upper bound than in the inequality (4.20) can be obtained easily by noticing that \(t\mapsto tI_{\nu +1}(t)/I_{\nu }(t)\) is strictly increasing on \((0,\infty )\) for all \(\nu >-1\) (this follows for example from the Mittag-Leffler expansion (3.13)). Namely, we find that

$$\begin{aligned} \frac{\varphi _{a,\alpha ,\nu +1}(x)}{\varphi _{a,\alpha ,\nu }(x)}<\frac{I_{\nu +1}(\alpha )I_{\nu +\frac{1}{2}}(\alpha )}{I_{\nu }(\alpha )I_{\nu +\frac{3}{2}}(\alpha )}<\frac{I_{\nu +1}(t)I_{\nu +\frac{1}{2}}(t)}{I_{\nu }(t)I_{\nu +\frac{3}{2}}(t)} <\frac{\nu +\frac{3}{2}}{\nu +1},\end{aligned}$$
(4.21)

which hold for all \(a>0,\) \(\alpha >0,\) \(\nu >-1,\) \(|x|<a.\)
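The chain of inequalities (4.21) can be spot-checked numerically with SciPy's modified Bessel function `iv`; the parameter values below are our own choice, with \(t<\alpha \) as in the derivation:

```python
from scipy.special import iv

def Q(nu, t):
    # the product I_{nu+1}(t) I_{nu+1/2}(t) / (I_nu(t) I_{nu+3/2}(t))
    return iv(nu + 1, t) * iv(nu + 0.5, t) / (iv(nu, t) * iv(nu + 1.5, t))

nu, alpha, t = 0.75, 6.0, 2.5   # t < alpha
assert Q(nu, alpha) < Q(nu, t) < (nu + 1.5) / (nu + 1.0)
```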

Finally, it is important to note that the inequality (4.19) is in fact strongly connected to the monotonicity of the probability density function with respect to \(\alpha ,\) discussed in Sect. 4.8. More precisely, a straightforward argument shows that

$$\begin{aligned} \frac{\partial \varphi _{a,\alpha ,\nu }(x)}{\partial \alpha } =\frac{I_{\nu +\frac{3}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )} \left[ \varphi _{a,\alpha ,\nu +1}(x)-\varphi _{a,\alpha ,\nu }(x)\right] ,\end{aligned}$$
(4.22)

and this in turn implies that the inequality (4.19) is also valid when \(a>0,\) \(\alpha >0,\) \(\nu >0\) and \(a>|x|\ge \frac{a\sqrt{4\nu +1}}{2\nu +1},\) and it is reversed when \(a>0,\) \(\alpha >0\) and \(\nu \) is sufficiently large such that \(|x|\le \frac{a}{\sqrt{2\nu +1}}.\) Moreover, we would like to emphasize the fact that the inequality (4.19) together with the derivative formula (4.22) leads to the next monotonicity results: the function \(\alpha \mapsto \varphi _{a,\alpha ,\nu }(x)\) is decreasing on \((0,\infty )\) for all \(a>0,\) \(\nu \ge \nu _0\) and \(|x|<a,\) and is also decreasing on \((0,\infty )\) for all \(a>0,\) \(\nu \ge \nu _1\) and \(\frac{a}{\sqrt{2\nu +1}}\le |x|<a.\)

4.10 Turán Type Inequalities for the Probability Density Function

By using the fact that the zeros \(j_{\nu ,n}\) are increasing with respect to \(\nu \) for fixed \(n\in {\mathbb {N}},\) it follows clearly from the Mittag-Leffler expansion (3.13) that the function \(\nu \mapsto I_{\nu +1}(t)/I_{\nu }(t)\) is decreasing on \((-1,\infty )\) for all \(t>0.\) This in turn implies that the function

$$\begin{aligned} \nu \mapsto \frac{I_{\nu +\frac{3}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}\cdot \frac{\varphi _{a,\alpha ,\nu +1}(x)}{\varphi _{a,\alpha ,\nu }(x)} =\frac{sI_{\nu +1}(t)}{I_{\nu }(t)} \end{aligned}$$

is decreasing too on \((-1,\infty )\) for all \(a>0,\) \(\alpha >0\) and \(|x|<a,\) and from this we readily see that the next Turán type inequality is valid for all \(a>0,\) \(\alpha >0,\) \(\nu >0\) and \(|x|<a\)

$$\begin{aligned} \frac{\varphi _{a,\alpha ,\nu -1}(x)\varphi _{a,\alpha ,\nu +1}(x)}{\varphi _{a,\alpha ,\nu }^2(x)}\le \frac{I_{\nu +\frac{1}{2}}^2(\alpha )}{I_{\nu -\frac{1}{2}} (\alpha )I_{\nu +\frac{3}{2}}(\alpha )}. \end{aligned}$$
(4.23)

Note that Turán type inequality (4.23) can be deduced also by using a log-concavity argument. Namely, according to Lorch [44] the function \(\nu \mapsto I_{\nu }(t)\) is log-concave on \((-1,\infty )\) for all \(t>0\) fixed, and hence the function \(\nu \mapsto I_{\nu +\frac{1}{2}}(\alpha )\varphi _{a,\alpha ,\nu }(x)\) is log-concave on \((-1,\infty )\) for all \(a>0,\) \(\alpha >0\) and \(|x|<a.\) The Turán type inequality (4.23) is a direct consequence of this log-concavity result. Moreover, if we use a slightly different argument, then we can show that (4.23) is also valid when \(a>0,\) \(\alpha >0,\) \(\nu \ge -\frac{1}{2}\) and \(|x|<a.\) According to Segura [59, Lemma 1], the function \(\nu \mapsto I_{\nu -1}(t)/I_{\nu }(t)\) is increasing on \(\left[ -\frac{1}{2},\infty \right) \) for all \(t>0\) fixed. In view of (4.1) this implies that the function \(\nu \mapsto \varphi _{a,\alpha ,\nu }'(x)/\varphi _{a,\alpha ,\nu }(x)\) is decreasing on \(\left[ -\frac{1}{2},\infty \right) \) for all \(a>0,\) \(\alpha >0\) and \(x\in [0,a),\) and is increasing on \(\left[ -\frac{1}{2},\infty \right) \) for all \(a>0,\) \(\alpha >0\) and \(x\in (-a,0].\) On the other hand, combining (2.2) with (4.1) we arrive at

$$\begin{aligned} \varphi _{a,\alpha ,\nu }'(x)=-\frac{\alpha x}{a^2}\cdot \frac{I_{\nu -\frac{1}{2}} (\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )}\cdot \varphi _{a,\alpha ,\nu -1}(x), \end{aligned}$$

and an elementary argument shows that the function

$$\begin{aligned} \nu \mapsto \frac{I_{\nu -\frac{1}{2}}(\alpha )}{I_{\nu +\frac{1}{2}}(\alpha )} \cdot \frac{\varphi _{a,\alpha ,\nu -1}(x)}{\varphi _{a,\alpha ,\nu }(x)} \end{aligned}$$

is increasing for all \(a>0,\) \(\alpha >0,\) \(\nu \ge -\frac{1}{2}\) and \(|x|<a.\) This monotonicity result implies that indeed (4.23) is also valid when \(a>0,\) \(\alpha >0,\) \(\nu \ge -\frac{1}{2}\) and \(|x|<a.\) Note that in view of (3.10) the right-hand side of (4.23) is clearly bounded by \(\left( \nu +\frac{3}{2}\right) /\left( \nu +\frac{1}{2}\right) \) and we also observe that in (4.23) the expression on the right-hand side is not the best possible one. Moreover, since the function \(t\mapsto I_{\nu -1}(t)I_{\nu +1}(t)/I_{\nu }^2(t)\) is increasing on \((0,\infty )\) for all \(\nu >-1\) (see [11, Theorem 2.1] or more generally the discussion before (3.10)), we obtain that the function

$$\begin{aligned} t\mapsto \frac{I_{\nu -1}(t)I_{\nu +1}(t)}{I_{\nu }^2(t)} = \frac{\varphi _{a,\alpha ,\nu -1}(x)\varphi _{a,\alpha ,\nu +1}(x)}{\varphi _{a,\alpha ,\nu }^2(x)}\Big /\frac{I_{\nu +\frac{1}{2}}^2(\alpha )}{I_{\nu -\frac{1}{2}}(\alpha )I_{\nu +\frac{3}{2}}(\alpha )} \end{aligned}$$

is also increasing on \((0,\alpha ]\) for all \(a>0,\) \(\alpha >0,\) \(\nu >-1\) and we immediately arrive at the next sharp two-sided Turán type inequality for the probability density function of the Kaiser–Bessel distribution

$$\begin{aligned} \frac{\nu }{\nu +1}\cdot \frac{I_{\nu +\frac{1}{2}}^2(\alpha )}{I_{\nu -\frac{1}{2}}(\alpha )I_{\nu +\frac{3}{2}}(\alpha )}\le \frac{\varphi _{a,\alpha ,\nu -1}(x)\varphi _{a,\alpha ,\nu +1}(x)}{\varphi _{a,\alpha ,\nu }^2(x)}\le \frac{I_{\nu +\frac{1}{2}}^2(\alpha )}{I_{\nu -\frac{1}{2}}(\alpha )I_{\nu +\frac{3}{2}}(\alpha )}\cdot \frac{I_{\nu -1}(\alpha )I_{\nu +1}(\alpha )}{I_{\nu }^2(\alpha )},\nonumber \\ \end{aligned}$$
(4.24)

where \(a>0,\) \(\alpha >0,\) \(\nu >-1\) and \(|x|<a.\) Observe that in (4.24) in the first inequality equality holds if \(x\rightarrow \pm a\) and in the second inequality equality holds when \(x=0.\) Moreover, if we use the bounds in (3.10), then we find the following Turán type inequality

$$\begin{aligned} \frac{\nu }{\nu +1}\le \frac{\varphi _{a,\alpha ,\nu -1}(x)\varphi _{a,\alpha ,\nu +1} (x)}{\varphi _{a,\alpha ,\nu }^2(x)}\le \frac{\nu +\frac{3}{2}}{\nu +\frac{1}{2}} \end{aligned}$$

or equivalently

$$\begin{aligned} -\frac{1}{\nu +\frac{1}{2}}\varphi _{a,\alpha ,\nu }^2(x)\le \varphi _{a,\alpha ,\nu }^2(x) -\varphi _{a,\alpha ,\nu -1}(x)\varphi _{a,\alpha ,\nu +1}(x)\le \frac{1}{\nu +1}\varphi _{a,\alpha ,\nu }^2(x), \end{aligned}$$

where \(a>0,\) \(\alpha >0,\) \(\nu >-1\) and \(|x|<a.\)
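These two-sided Turán type bounds are easy to test numerically; below is a small Python check using the density (2.2). The helper name `kb_pdf` and the parameter values are ours:

```python
import math
from scipy.special import iv

def kb_pdf(x, a, alpha, nu):
    # Kaiser–Bessel probability density function on (-a, a), cf. (2.2)
    s = math.sqrt(1.0 - (x / a) ** 2)
    norm = math.sqrt(alpha / (2.0 * math.pi)) / (a * iv(nu + 0.5, alpha))
    return norm * s ** nu * iv(nu, alpha * s)

a, alpha, nu, x = 2.0, 10.417, 2.0, 0.7
turan = (kb_pdf(x, a, alpha, nu - 1) * kb_pdf(x, a, alpha, nu + 1)
         / kb_pdf(x, a, alpha, nu) ** 2)
assert nu / (nu + 1) <= turan <= (nu + 1.5) / (nu + 0.5)
```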

4.11 Generating Independent Continuous Random Variables of Kaiser–Bessel Distribution

In this subsection, our aim is to present two algorithms for sampling independent continuous random variables of Kaiser–Bessel distribution over the interval \([-a,a].\) These algorithms are based on some basic inequalities for the probability density function of the Kaiser–Bessel distribution and on the well-known classical rejection method over a compact interval. For more details on the rejection method, the interested reader is referred to the book of Devroye [23] and to the references therein.

Algorithm 1 is based on the rejection method and the unimodality of the Kaiser–Bessel distribution. Namely, we use the basic inequality \(\varphi _{a,\alpha ,\nu }(x)\le \varphi _{a,\alpha ,\nu }(0),\) which holds for all \(a,\alpha >0,\) \(\nu >-1\) and \(|x|\le a.\)

[Algorithm 1, displayed as a figure]
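Since Algorithm 1 itself is displayed only as a figure, a minimal Python sketch of such a uniform-envelope rejection sampler may be helpful. The function names `kb_pdf` and `kb_sample_uniform_envelope` are ours, and this is only an illustration of the rejection step, not the authors' exact pseudocode:

```python
import math
import random
from scipy.special import iv

def kb_pdf(x, a, alpha, nu):
    # Kaiser–Bessel probability density function on [-a, a], cf. (2.2)
    s = math.sqrt(1.0 - (x / a) ** 2)
    norm = math.sqrt(alpha / (2.0 * math.pi)) / (a * iv(nu + 0.5, alpha))
    return norm * s ** nu * iv(nu, alpha * s)

def kb_sample_uniform_envelope(n, a, alpha, nu, seed=1):
    # Rejection sampling with the constant envelope M = phi(0),
    # justified by the unimodality bound phi(x) <= phi(0).
    rng = random.Random(seed)
    big_m = kb_pdf(0.0, a, alpha, nu)
    samples = []
    while len(samples) < n:
        x = rng.uniform(-a, a)
        if rng.uniform(0.0, big_m) <= kb_pdf(x, a, alpha, nu):
            samples.append(x)
    return samples

samples = kb_sample_uniform_envelope(2000, 2.0, 10.417, 2.0)
```

The acceptance probability of such a sampler is \(1/(2a\varphi _{a,\alpha ,\nu }(0)),\) which motivates the tighter envelope used in Algorithm 2 below.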

The second algorithm is also based on the rejection method. By using the inequality

$$\begin{aligned} 0\le \varphi _{a,\alpha ,\nu }(x)\le \frac{\varphi _{a,\alpha ,\nu }(0)}{\varphi _{a,0,\nu }(0)}\cdot \varphi _{a,0,\nu }(x)={\overline{\varphi }}_{a,\alpha ,\nu }\left( x\right) ,\end{aligned}$$
(4.25)

which holds for all \(a,\alpha >0,\) \(\nu >-1\) and \(|x|\le a,\) we can provide a slightly more efficient rejection method for sampling independent continuous random variables of Kaiser–Bessel distribution over the interval \([-a,a].\) The reason why Algorithm 2 is more efficient than the first one is that the evaluation of \(\varphi _{a,0,\nu }(x)\) is simpler than that of \(\varphi _{a,\alpha ,\nu }(x).\) This is true because, according to (3.4), to evaluate \(\varphi _{a,0,\nu }(x)\) we only need to handle powers and gamma function values, whereas to evaluate \(\varphi _{a,\alpha ,\nu }(x)\) we need to compute values of the modified Bessel function of the first kind, which is a computationally much more expensive and time-consuming process. Moreover, compared with Algorithm 1, the area between the probability density function \(\varphi _{a,\alpha ,\nu }\) and its majorizing function \({\overline{\varphi }}_{a,\alpha ,\nu }\) is significantly smaller than the area between the same probability density function and the constant function \(M=\varphi _{a,\alpha ,\nu }\left( 0\right) ,\) that is, in the case of Algorithm 2 the measure of the rejection domain is smaller than that of Algorithm 1.

[Algorithm 2, displayed as a figure]
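Algorithm 2 is likewise given only as a figure; the sketch below is our own reading of the envelope (4.25). Since \(\varphi _{a,0,\nu }\) involves only powers and gamma values, it is proportional to \((1-(x/a)^2)^{\nu },\) so a proposal can be drawn as \(x=a(2B-1)\) with \(B\sim \mathrm {Beta}(\nu +1,\nu +1),\) and the acceptance ratio \(\varphi _{a,\alpha ,\nu }(x)/{\overline{\varphi }}_{a,\alpha ,\nu }(x)\) simplifies to \(I_{\nu }(\alpha s)/(s^{\nu }I_{\nu }(\alpha ))\) with \(s=\sqrt{1-(x/a)^2}\):

```python
import math
import random
from scipy.special import iv

def kb_sample_beta_envelope(n, a, alpha, nu, seed=2):
    # Rejection sampling from the envelope (4.25): the proposal follows
    # phi_{a,0,nu}, i.e. the law of a*(2*B - 1) with B ~ Beta(nu+1, nu+1);
    # accept with probability I_nu(alpha*s) / (s**nu * I_nu(alpha)).
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        x = a * (2.0 * rng.betavariate(nu + 1.0, nu + 1.0) - 1.0)
        s = math.sqrt(1.0 - (x / a) ** 2)
        if rng.random() * s ** nu * iv(nu, alpha) <= iv(nu, alpha * s):
            samples.append(x)
    return samples

samples = kb_sample_beta_envelope(2000, 4.0, 5.0, math.pi)
```

That the acceptance ratio never exceeds one follows from the increasing character of \(s\mapsto I_{\nu }(\alpha s)/(\alpha s)^{\nu }.\)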

The red diagram in Fig. 8a has been generated within an average runtime of 5.1519 s in Matlab 2021 by using Algorithm 1 with the input parameters \(a=2,\) \(\alpha =10.417,\) \(\nu =2\) and \(n=10^6\) on a desktop computer with Intel Core i7-2600 CPU@3.40 GHz, 4 GB RAM and Windows 10. The blue diagram in Fig. 8b has been generated by using Algorithm 2 with the input parameters \(a=4,\) \(\alpha =5,\) \(\nu =\pi \) and \(n=10^6\) within an average runtime of 3.3533 s on the same computer.

5 Characteristic and Moment Generating Functions, Differential Entropy

The characteristic function provides an alternative way for describing a random variable. In this section our aim is to deduce the characteristic and moment generating functions of the Kaiser–Bessel distribution, and for this we use a known integral formula of Sonine. We also deduce the differential entropy or Shannon entropy as well as the Rényi entropy for the Kaiser–Bessel distribution.

5.1 Characteristic and Moment Generating Function

The characteristic function of the Kaiser–Bessel distribution is in fact the Fourier transform of the probability density function and since this is an even function this Fourier transform reduces to the Fourier cosine transform. The characteristic function is given by

$$\begin{aligned} \phi _X(t)={\text {E}}\left[ e^{\textrm{i}tX}\right]&=\int _{-a}^ae^{\textrm{i}tx}\varphi _{a,\alpha ,\nu }(x){\textrm{d}}x=\int _{-a}^a\varphi _{a,\alpha ,\nu }(x)\cos (tx){\textrm{d}}x\\&=\frac{\sqrt{\frac{2\alpha }{\pi }}}{aI_{\nu +\frac{1}{2}}(\alpha )}\int _0^a \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\nu }I_{\nu }\left( \alpha \sqrt{1 -\left( \frac{x}{a}\right) ^2}\right) \cos (tx){\textrm{d}}x\\&=\frac{\sqrt{\frac{2\alpha }{\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\int _0^1\frac{s^{\nu +1}I_{\nu }(\alpha s)}{\sqrt{1-s^2}}\cos \left( at\sqrt{1-s^2}\right) ds\\&=\frac{\sqrt{\frac{2\alpha }{\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\int _0^{\frac{\pi }{2}}I_{\nu }(\alpha \sin u)\sin ^{\nu +1}u\cos (at\cos u){\textrm{d}}u\\&=\frac{\textrm{i}^{-\nu }\sqrt{a\alpha t}}{I_{\nu +\frac{1}{2}}(\alpha )}\int _0^{\frac{\pi }{2}} J_{\nu }(\textrm{i}\alpha \sin u)J_{-\frac{1}{2}}(at\cos u)\sin ^{\nu +1}u\cos ^{\frac{1}{2}}u{\textrm{d}}u\\&=\frac{\alpha ^{\nu +\frac{1}{2}}}{I_{\nu +\frac{1}{2}}(\alpha )}\cdot \frac{J_{\nu +\frac{1}{2}} \left( \sqrt{a^2t^2-\alpha ^2}\right) }{\left( \sqrt{a^2t^2-\alpha ^2}\right) ^{\nu +\frac{1}{2}}}, \end{aligned}$$

where we used the change of variables \(x=a\sqrt{1-s^2}\) and \(s=\sin u,\) and in the last step we applied the so-called Sonine second finite integral formula [66, p. 376]

$$\begin{aligned} \int _0^{\frac{\pi }{2}}J_{\nu }(a\sin \theta )J_{\mu }(b\cos \theta ) \sin ^{\nu +1}\theta \cos ^{\mu +1}\theta {\textrm{d}}\theta =\frac{a^{\nu }b^{\mu }J_{\nu +\mu +1} \left( \sqrt{a^2+b^2}\right) }{\left( \sqrt{a^2+b^2}\right) ^{\nu +\mu +1}}, \end{aligned}$$

which is valid for \(\nu ,\mu >-1\) and arbitrary complex numbers a and b. Note however that when \(a^2t^2-\alpha ^2\le 0,\) then clearly the argument of \(J_{\nu +\frac{1}{2}}\) becomes purely imaginary and in this case we use the modified Bessel function \(I_{\nu +\frac{1}{2}}\) instead of \(J_{\nu +\frac{1}{2}}.\) More precisely, we obtain the following result.

Fig. 8

Histograms of two sets of \(n=10^6\) samples of independent random variables of Kaiser–Bessel distribution. In the cases (a) and (b), we have used Algorithms 1 and 2 with the parameter settings \(a=2,\) \(\alpha =10.417,\) \(\nu =2\) and \(a=4,\) \(\alpha =5,\) \(\nu =\pi ,\) respectively

Theorem 8

The characteristic function of the Kaiser–Bessel distribution is given by

$$\begin{aligned} \phi _X(t)=\left\{ \begin{array}{ll}\displaystyle \frac{\alpha ^{\nu +\frac{1}{2}}}{I_{\nu +\frac{1}{2}}(\alpha )}\cdot \frac{J_{\nu +\frac{1}{2}}\left( \sqrt{a^2t^2-\alpha ^2}\right) }{\left( \sqrt{a^2t^2-\alpha ^2}\right) ^{\nu +\frac{1}{2}}},&{} \qquad |t|>\frac{\alpha }{a}\\ \displaystyle \frac{\alpha ^{\nu +\frac{1}{2}}}{I_{\nu +\frac{1}{2}}(\alpha )}\cdot \frac{I_{\nu +\frac{1}{2}}\left( \sqrt{\alpha ^2-a^2t^2}\right) }{\left( \sqrt{\alpha ^2-a^2t^2}\right) ^{\nu +\frac{1}{2}}},&{}\qquad |t|\le \frac{\alpha }{a}\end{array}\right. . \end{aligned}$$

Note that this is in agreement with the Fourier transform of the generalized Kaiser–Bessel window function, considered by Lewitt [41, p. 1844]. This in turn implies that the moment generating function of the Kaiser–Bessel distribution is given by

$$\begin{aligned} M_X(t)=\phi _X(-\textrm{i}t)=\frac{\alpha ^{\nu +\frac{1}{2}}}{I_{\nu +\frac{1}{2}}(\alpha )} \cdot \frac{I_{\nu +\frac{1}{2}}\left( \sqrt{a^2t^2+\alpha ^2}\right) }{\left( \sqrt{a^2t^2+\alpha ^2}\right) ^{\nu +\frac{1}{2}}}. \end{aligned}$$

An important property of this moment generating function is that it uniquely determines the Kaiser–Bessel distribution. Like every moment generating function, \(M_X\) is positive and log-convex with \(M_X(0)=1.\) Note that Jensen's inequality provides a simple lower bound on the moment generating function: \(M_X(t)\ge e^{\mu t},\) where \(\mu \) is the mean of X. In our case \(\mu =0\) and \(M_X(t)\ge M_X(0)=1\) for every real t and for all \(a>0,\) \(\alpha >0\) and \(\nu >-1.\) An upper bound on the moment generating function \(M_X\) can be used in conjunction with Markov's inequality to bound the upper tail of the real random variable X; this is the so-called Chernoff bound. However, in our case the complementary cumulative distribution function is not so simple, and so it is better to find some more tractable upper bounds. One such upper bound can be found easily by using the so-called Hoeffding inequality [30]

$$\begin{aligned} {\text {E}}\left[ e^{t\left( X-{\text {E}}[X]\right) }\right] \le e^{\frac{1}{8}t^2(\beta -\alpha )^2}, \end{aligned}$$

where \(t\in {\mathbb {R}}\) and X is a bounded real-valued random variable with \(X\in [\alpha ,\beta ].\) Applying the Hoeffding inequality to the case of the Kaiser–Bessel distribution, we arrive at

$$\begin{aligned} M_X(t)={\text {E}}\left[ e^{tX}\right] \le e^{\frac{1}{2}a^2t^2},\end{aligned}$$
(5.1)

where \(t\in {\mathbb {R}},\) \(a>0,\) \(\alpha >0\) and \(\nu >-1.\) Moreover, we can deduce other bounds for the moment generating function by using its explicit form and some known inequalities for quotients of modified Bessel functions of the first kind. For example, we can use the inequality

$$\begin{aligned} e^{x-y}\left( \frac{x}{y}\right) ^{\nu }<\frac{I_{\nu }(x)}{I_{\nu }(y)},\end{aligned}$$
(5.2)

where \(0<x<y\) and \(\nu \ge -\frac{1}{2}.\) This inequality was proved by Bordelon [18] and Ross [57] for \(\nu >0,\) by Paris [53] for \(\nu >-\frac{1}{2},\) and was later rediscovered by Joshi and Bissu [35] for \(\nu \ge -\frac{1}{2}.\) By using this inequality and the following relation

$$\begin{aligned} \left( \frac{x}{y}\right) ^{\nu +\frac{1}{2}}\cdot \frac{1}{M_X(t)}=\frac{I_{\nu +\frac{1}{2}}(x)}{I_{\nu +\frac{1}{2}}(y)} \end{aligned}$$
(5.3)

with \(x=\alpha \) and \(y=\sqrt{a^2t^2+\alpha ^2},\) we find the following upper bound (depending also on \(\alpha \)) for the moment generating function

$$\begin{aligned} M_X(t)\le e^{\sqrt{a^2t^2+\alpha ^2}-\alpha } \end{aligned}$$

for all \(t\in {\mathbb {R}},\) \(a>0,\) \(\alpha >0\) and \(\nu >-1.\) We can deduce more lower and upper bounds for the moment generating function by using other inequalities of the type (5.2). For a survey of such inequalities we refer to [12]. If we use the inequalities of Joshi and Bissu [36]

$$\begin{aligned} e^{\frac{x^2-y^2}{4(\nu +1)}}\left( \frac{x}{y}\right) ^{\nu }<\frac{I_{\nu }(x)}{I_{\nu }(y)}<e^{\frac{x^2-y^2}{4(\nu +1)} -\frac{x^4-y^4}{32(\nu +1)^2(\nu +2)}}\left( \frac{x}{y}\right) ^{\nu }, \end{aligned}$$

where \(0<x<y\) and \(\nu >-1,\) then in view of (5.3) we obtain that

$$\begin{aligned} e^{\frac{a^2t^2}{4\left( \nu +\frac{3}{2}\right) } -\frac{a^4t^4+2a^2\alpha ^2t^2}{32\left( \nu +\frac{3}{2}\right) ^2\left( \nu +\frac{5}{2}\right) }} \le M_X(t)\le e^{\frac{a^2t^2}{4\left( \nu +\frac{3}{2}\right) }}, \end{aligned}$$

with \(t\in {\mathbb {R}},\) \(a>0,\) \(\alpha >0\) and \(\nu >-\frac{3}{2}.\) Observe that for \(\nu >-1\) the above upper bound is tighter than the upper bound in (5.1), obtained from the Hoeffding inequality.
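The sandwich above, together with the Hoeffding bound (5.1), can be checked numerically against the closed form of \(M_X\) from Theorem 8; the parameters are our own choice:

```python
import math
from scipy.special import iv

a, alpha, nu, t = 2.0, 10.417, 2.0, 0.8

# closed form of the moment generating function
r = math.sqrt(a * a * t * t + alpha * alpha)
mgf = alpha ** (nu + 0.5) / iv(nu + 0.5, alpha) * iv(nu + 0.5, r) / r ** (nu + 0.5)

# Joshi–Bissu type bounds and the Hoeffding bound (5.1)
upper_joshi = math.exp(a**2 * t**2 / (4.0 * (nu + 1.5)))
lower_joshi = math.exp(a**2 * t**2 / (4.0 * (nu + 1.5))
                       - (a**4 * t**4 + 2 * a**2 * alpha**2 * t**2)
                       / (32.0 * (nu + 1.5)**2 * (nu + 2.5)))
hoeffding = math.exp(0.5 * a**2 * t**2)

assert lower_joshi <= mgf <= upper_joshi <= hoeffding
```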

Finally, it is interesting to note that by using the well-known fact that the \(n\hbox {th}\) moment of an arbitrary random variable X is the coefficient of \(t^n/n!\) in the Maclaurin series of \(M_X(t),\) we clearly obtain the following Neumann type expansion of the modified Bessel function of the first kind

$$\begin{aligned} \frac{I_{\nu +\frac{1}{2}}\left( \sqrt{a^2t^2 +\alpha ^2}\right) }{\left( \sqrt{a^2t^2+\alpha ^2}\right) ^{\nu +\frac{1}{2}}}= \sum _{n\ge 0}\frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{2^n\alpha ^{\nu +n +\frac{1}{2}}}\cdot \frac{(at)^{2n}}{n!},\end{aligned}$$
(5.4)

which for \(\alpha \rightarrow 0\) reduces to the infinite series representation of the expression \(I_{\nu +\frac{1}{2}}(at).\) Note that the above Neumann type series representation of the modified Bessel function of the first kind can be deduced in fact from the well-known Lommel expansion of a Bessel function of the first kind as a series of Bessel functions of the first kind. Namely, if we change z to \(-z\) and h to \(-h\) in the following Lommel expansion [66, p. 140]

$$\begin{aligned} \frac{J_{\nu }\left( \sqrt{z+h}\right) }{\left( \sqrt{z+h}\right) ^{\nu }} =\sum _{n\ge 0}\frac{(-1)^nJ_{\nu +n}\left( \sqrt{z}\right) }{2^n\left( \sqrt{z}\right) ^{\nu +n}}\cdot \frac{h^n}{n!}, \end{aligned}$$

then in view of the relation \(I_{\nu }(x)={\textrm{i}}^{-\nu }J_{\nu }(\textrm{i}x)\) we arrive at

$$\begin{aligned} \frac{I_{\nu }\left( \sqrt{z+h}\right) }{\left( \sqrt{z+h}\right) ^{\nu }} =\sum _{n\ge 0}\frac{I_{\nu +n}\left( \sqrt{z}\right) }{2^n\left( \sqrt{z}\right) ^{\nu +n}}\cdot \frac{h^n}{n!}, \end{aligned}$$

and now changing \(\nu \) to \(\nu +\frac{1}{2},\) z to \(\alpha ^2\) and h to \(a^2t^2\) we obtain (5.4). This shows that in fact there is another way to find the moment generating function and then the characteristic function, and for this we just need to start from (5.4) and to observe that on the right-hand side of (5.4) the moments of the Kaiser–Bessel distribution are involved.
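The Neumann type expansion (5.4) converges quickly for moderate \(at\); a direct numerical check of a truncated sum, with parameters of our own choosing:

```python
import math
from scipy.special import iv

a, alpha, nu, t = 1.5, 3.0, 0.7, 0.4

arg = math.sqrt(a**2 * t**2 + alpha**2)
lhs = iv(nu + 0.5, arg) / arg ** (nu + 0.5)
# truncation of the right-hand side of (5.4)
rhs = sum(iv(nu + n + 0.5, alpha) / (2.0**n * alpha ** (nu + n + 0.5))
          * (a * t) ** (2 * n) / math.factorial(n)
          for n in range(25))
assert abs(lhs - rhs) < 1e-10
```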

5.2 Moment Determinacy Problem

The moment determinacy problem can be described as follows: is X the only random variable whose set of moments is \(\left\{ \mu _n\right\} _{n\ge 1}\)? More precisely, let X be a random variable with cumulative distribution function F (or probability density function f) having finite moments \(\mu _{n} = {\text {E}}\left[ X^n\right] \), that is, with absolute moments \(\beta _{n} = {\text {E}}\left[ |X|^n\right] < \infty \) for all \(n \in {\mathbb {N}}\). If F (or f) is the only distribution function with the moment sequence \(\left\{ \mu _{n}\right\} _{n\ge 1}\), then X is moment determinate. Otherwise, when there exists at least one other cumulative distribution function G (related to the probability density function g) such that \(\mu _{n}(F) = \mu _{n}(G)\) for every index \(n \in {\mathbb {N}}\), the random variable X is indeterminate, see for example [64, pp. 699–700]. It is also worth mentioning that in the case of the Kaiser–Bessel distribution \(\textrm{KB}(a, \alpha , \nu )\) we are faced with the so-called Hamburger moment problem, since the support of the probability density function is a subset of \({\mathbb {R}}\) rather than of \([0,\infty )\), that is, the considered random variable X is not nonnegative, being \(\textrm{supp}(\varphi _{a, \alpha , \nu })=[-a,a] \subset {\mathbb {R}}\).

To solve our moment determinacy dilemma for the Kaiser–Bessel distribution \(\textrm{KB}(a, \alpha , \nu )\) we apply the well-known Cramér condition, which says that if there exists a constant \(c>0\) such that the moment generating function satisfies \(M_X(t) = {\text {E}}\left[ e^{tX}\right] < \infty \) for all \(|t|<c\), then all moments of X are finite and X is moment determinate, see [64, p. 700] for more details.

Bearing in mind the upper bounds obtained for the moment generating function in the previous subsection, we can write that for every \(c>0\) and \(|t|\le c\) there holds the estimate

$$\begin{aligned} M_X(t) \le \min \left\{ e^{\frac{1}{2} a^2 c^2}, e^{\sqrt{a^2 c^2+\alpha ^2} - \alpha }, e^{\frac{a^2 c^2}{2(2\nu +3)}} \right\} , \end{aligned}$$

which confirms the finiteness of the moment generating function for all \(|t|<c,\) and this implies the finiteness of all moments

$$\begin{aligned} \mu _{2n,\nu } = (2n-1)!!a^{2n}\frac{I_{\nu +n+\frac{1}{2}}(\alpha )}{\alpha ^nI_{\nu +\frac{1}{2}}(\alpha )},\end{aligned}$$

while all odd-indexed moments vanish by symmetry. These facts fulfill Cramér's condition, and therefore we have the following result.

Theorem 9

The Kaiser–Bessel random variable \(X\sim \textrm{KB}(a, \alpha , \nu )\) is moment determinate via the well-defined moment sequence \(\left\{ \mu _{n,\nu }\right\} _{n\ge 1}\).
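The even-moment formula displayed above is also easy to verify by direct numerical integration; a quick SciPy check of the fourth moment, with parameters of our own choosing:

```python
import math
from scipy.integrate import quad
from scipy.special import iv

a, alpha, nu, n = 2.0, 5.0, 1.0, 2   # fourth moment, (2n-1)!! = 3

def kb_pdf(x):
    # Kaiser–Bessel density (2.2)
    s = math.sqrt(max(1.0 - (x / a) ** 2, 0.0))
    norm = math.sqrt(alpha / (2.0 * math.pi)) / (a * iv(nu + 0.5, alpha))
    return norm * s ** nu * iv(nu, alpha * s)

numeric, _ = quad(lambda x: x ** (2 * n) * kb_pdf(x), -a, a)
double_factorial = 3.0  # (2n-1)!! for n = 2
closed = (double_factorial * a ** (2 * n)
          * iv(nu + n + 0.5, alpha) / (alpha ** n * iv(nu + 0.5, alpha)))
assert abs(numeric - closed) < 1e-6
```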

5.3 Differential Entropy or Shannon Entropy

The differential entropy of the random variable \(X \sim \textrm{KB}(a, \alpha , \nu )\) is the quantity

$$\begin{aligned} h[\varphi _{a, \alpha , \nu }] = {\text {E}}[-\ln \varphi _{a, \alpha , \nu }(X)] = -\int _{-a}^a \varphi _{a, \alpha , \nu }(x)\ln \varphi _{a, \alpha , \nu }(x){\textrm{d}}x,\end{aligned}$$

where \(\varphi _{a, \alpha , \nu }\) is as in (2.2). The change of variable \(x = a \sqrt{1-s^2}\) leads to

$$\begin{aligned} h[\varphi _{a, \alpha , \nu }]&= - 2 \frac{\sqrt{\frac{\alpha }{2\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )} \int _0^1 s^{\nu +1} \frac{I_\nu (\alpha s)}{\sqrt{1-s^2}} \ln \left[ \frac{\sqrt{\frac{\alpha }{2\pi }}}{aI_{\nu +\frac{1}{2}}(\alpha )} s^\nu I_\nu (\alpha s)\right] {\textrm{d}}s \\&= - 2 \frac{\sqrt{\frac{\alpha }{2\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\ln \left( \frac{\sqrt{\frac{\alpha }{2\pi }}}{aI_{\nu +\frac{1}{2}}(\alpha )}\right) \int _0^1 s^{\nu +1} \frac{I_\nu (\alpha s)}{\sqrt{1-s^2}}{\textrm{d}}s \\&\quad - 2\frac{\sqrt{\frac{\alpha }{2\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )} \nu \int _0^1 s^{\nu +1} \ln (s)\frac{I_\nu (\alpha s)}{\sqrt{1-s^2}}{\textrm{d}}s \\&\quad - 2 \frac{\sqrt{\frac{\alpha }{2\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\int _0^1 s^{\nu +1} I_\nu (\alpha s)\frac{\ln I_\nu (\alpha s)}{\sqrt{1-s^2}}{\textrm{d}}s. \end{aligned}$$

Now, we introduce the following notations

$$\begin{aligned} \Theta _1(\nu ,\alpha )=\int _0^1 s^{\nu +1} \frac{I_\nu (\alpha s)}{\sqrt{1-s^2}}{\textrm{d}}s, \qquad \Theta _2(\nu ,\alpha )=\int _0^1 s^{\nu +1} \ln (s)\frac{I_\nu (\alpha s)}{\sqrt{1-s^2}}{\textrm{d}}s \end{aligned}$$

and

$$\begin{aligned} \Theta _3(\nu ,\alpha )=\int _0^1 s^{\nu +1} I_\nu (\alpha s)\frac{\ln I_\nu (\alpha s)}{\sqrt{1-s^2}}{\textrm{d}}s. \end{aligned}$$

The first integral follows from (2.4), that is

$$\begin{aligned}\Theta _1(\nu ,\alpha ) = \frac{I_{\nu +\frac{1}{2}}(\alpha )}{\sqrt{\frac{\alpha }{2\pi }}} \Phi _{a, \alpha , \nu }(a) =\frac{I_{\nu +\frac{1}{2}}(\alpha )}{\sqrt{\frac{2\alpha }{\pi }}},\end{aligned}$$

while the second integral can be written in explicit form by expanding the binomial \((1-s^2)^{-1/2}\) in a Maclaurin series, using the infinite series form of \(I_\nu (\alpha s)\), and then following the method of the third approach for the cumulative distribution function

$$\begin{aligned} \Theta _2(\nu ,\alpha )&= \sum _{n \ge 0} \sum _{k \ge 0} \dfrac{\left( \frac{1}{2}\right) _n \left( \frac{\alpha }{2}\right) ^{2k+\nu }}{n!k!\Gamma (\nu +k+1)} \int _0^1 s^{2(\nu +n+k)+1} \ln s {\textrm{d}}s \\&= - \frac{1}{4 \Gamma (\nu +1)} \left( \frac{\alpha }{2}\right) ^\nu \sum _{n, k \ge 0}\frac{\left( \frac{1}{2}\right) _n}{(n+k+\nu +1)^2 (\nu +1)_kn!k!} \left( \frac{\alpha ^2}{4}\right) ^k\\&= - \frac{1}{4 (\nu +1)^2 \Gamma (\nu +1)} \left( \frac{\alpha }{2}\right) ^\nu \sum _{n, k \ge 0} \frac{(\nu +1)_{n+k}^2 \left( \frac{1}{2}\right) _n}{(\nu +2)_{n+k}^2\,(\nu +1)_kn!k!} \left( \frac{\alpha ^2}{4}\right) ^k\\&= - \frac{1}{4 (\nu +1) \Gamma (\nu +2)} \left( \frac{\alpha }{2}\right) ^\nu F_{2:0;1}^{2:1;0}\left[ \left. \begin{array}{c} \nu +1, \nu +1:\, \frac{1}{2};- \\ \nu +2,\nu +2:-;\nu +1 \end{array} \right| 1,\,\frac{\alpha ^2}{4} \right] \,. \end{aligned}$$
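The closed form of \(\Theta _1(\nu ,\alpha )\) obtained above is easy to confirm numerically; the substitution \(s=\sin u\) removes the endpoint singularity of the integrand (parameter values ours):

```python
import math
from scipy.integrate import quad
from scipy.special import iv

nu, alpha = 1.3, 4.0

# Theta_1 after the substitution s = sin(u)
numeric, _ = quad(lambda u: math.sin(u) ** (nu + 1) * iv(nu, alpha * math.sin(u)),
                  0.0, math.pi / 2)
closed = iv(nu + 0.5, alpha) / math.sqrt(2.0 * alpha / math.pi)
assert abs(numeric - closed) < 1e-7
```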

The computation of \(\Theta _3(\nu ,\alpha )\) is more challenging. By expanding the binomial term into Maclaurin series we deduce that

$$\begin{aligned} \Theta _3(\nu ,\alpha )= & {} \sum _{n \ge 0} \frac{\left( \frac{1}{2}\right) _n}{n!} \int _0^1 s^{\nu +2n+1} I_\nu (\alpha s)\ln I_\nu (\alpha s){\textrm{d}}s \\= & {} \frac{1}{\alpha ^{\nu +2}} \sum _{n \ge 0} \frac{\left( \frac{1}{2}\right) _n}{\alpha ^{2n}n!}\int _0^\alpha t^{\nu +2n+1} I_\nu (t)\ln I_\nu (t){\textrm{d}}t.\end{aligned}$$

Now, consider the extended version of the last integral in the above equation, that is

$$\begin{aligned} \lambda _n(q,\nu ,\alpha ) = \int _0^\alpha t^{\nu +2n+1} I_\nu (t)\ln I_\nu (q t){\textrm{d}}t,\end{aligned}$$

where n is a nonnegative integer and q stands for a suitable positive parameter. Differentiating under the integral sign we find that

$$\begin{aligned} \frac{\partial }{\partial q} \lambda _n(q,\nu ,\alpha ) = \int _0^\alpha t^{\nu +2n+2} \frac{I_\nu (t)}{I_\nu (q t)}I_\nu '(q t){\textrm{d}}t. \end{aligned}$$
(5.5)

By using the recurrence relation

$$\begin{aligned} I_\nu '(x) = \frac{\nu }{x}I_\nu (x) + I_{\nu +1}(x),\end{aligned}$$

the integral (5.5) becomes

$$\begin{aligned} \frac{\partial }{\partial q} \lambda _n(q,\nu ,\alpha ) = \frac{\nu }{q}\int _0^\alpha t^{\nu +2n+1} I_\nu (t){\textrm{d}}t +\int _0^\alpha t^{\nu +2n+2} \frac{I_\nu (t)}{I_\nu (q t)}I_{\nu +1}(qt){\textrm{d}}t. \end{aligned}$$
(5.6)

Introducing the notations

$$\begin{aligned} \Theta _4(\nu ,\alpha )=\int _0^\alpha t^{\nu +2n+1} I_\nu (t){\textrm{d}}t, \qquad \Theta _5(q,\nu ,\alpha )=\int _0^\alpha t^{\nu +2n+2} \frac{I_\nu (t)}{I_\nu (q t)}I_{\nu +1}(qt){\textrm{d}}t, \end{aligned}$$

we find that the integral \(\Theta _4(\nu ,\alpha )\) can be evaluated by means of the formula

$$\begin{aligned} \int _0^\alpha t^p I_\mu (t){\textrm{d}}t = \frac{\alpha ^{p+\mu +1} \Gamma \left( \frac{1}{2}(p+\mu +1)\right) }{2^{\mu +1} \Gamma (\mu +1) \Gamma \left( \frac{1}{2}(p+\mu +3)\right) } {}_1F_2 \left[ \left. \begin{array}{c} \frac{1}{2}(p+\mu +1)\\ \mu +1, \frac{1}{2}(p+\mu +3) \end{array} \right| \frac{\alpha ^2}{4} \right] ,\end{aligned}$$

which is valid for all \(p+\mu >-1\) and \(\alpha \ge 0\). Hence, we arrive at

$$\begin{aligned} \Theta _4(\nu ,\alpha ) = \frac{\alpha ^{2(n+\nu +1)} (\nu +1)_n}{2^{\nu +1} \Gamma (\nu +2)(\nu +2)_n} {}_1F_2 \left[ \left. \begin{array}{c} n+\nu +1\\ \nu +1, n+\nu +2 \end{array} \right| \frac{\alpha ^2}{4} \right] . \end{aligned}$$
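Since \({}_1F_2\) is not among SciPy's built-in special functions, the numerical check below sums its defining series directly; the parameter values are ours:

```python
import math
from scipy.integrate import quad
from scipy.special import gamma, iv, poch

nu, alpha, n = 0.5, 2.0, 1

numeric, _ = quad(lambda t: t ** (nu + 2 * n + 1) * iv(nu, t), 0.0, alpha)

def hyp1f2(a1, b1, b2, z, terms=60):
    # plain power series of the generalized hypergeometric function 1F2
    return sum(poch(a1, k) / (poch(b1, k) * poch(b2, k)) * z ** k / math.factorial(k)
               for k in range(terms))

closed = (alpha ** (2 * (n + nu + 1)) * poch(nu + 1, n)
          / (2 ** (nu + 1) * gamma(nu + 2) * poch(nu + 2, n))
          * hyp1f2(n + nu + 1, nu + 1, n + nu + 2, alpha ** 2 / 4.0))
assert abs(numeric - closed) < 1e-8
```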

To compute \(\Theta _5(q,\nu ,\alpha )\) we apply the formula [32, Eq. (2.14)] which in our settings reads as

$$\begin{aligned} \frac{I_\nu (t)}{I_\nu (q t)} = \frac{2}{q^2} \sum _{m \ge 1} \frac{j_{\nu , m} J_\nu (q^{-1} j_{\nu , m})}{(t^2+q^{-2} j_{\nu , m}^2)J_{\nu +1}(j_{\nu , m})}, \end{aligned}$$

and it is valid for \(\nu >-1,\) \(q>1,\) \(t>0.\) Transforming \(\Theta _5(q,\nu ,\alpha )\) with this formula we find that

$$\begin{aligned} \Theta _5(q,\nu ,\alpha )&= \frac{2}{q^2} \sum _{m \ge 1} \frac{j_{\nu , m} J_\nu (q^{-1} j_{\nu , m})}{J_{\nu +1}(j_{\nu , m})} \int _0^\alpha \frac{t^{\nu +2n+2}}{t^2+q^{-2} j_{\nu , m}^2} I_{\nu +1}(q t){\textrm{d}}t \nonumber \\&= \frac{2}{q^{\nu +3}} \sum _{m \ge 1} \frac{j_{\nu , m} J_\nu (q^{-1} j_{\nu , m})}{q^{2n} J_{\nu +1}(j_{\nu , m})} \int _0^{q \alpha } \frac{x^{\nu +2n+2}}{x^2+j_{\nu , m}^2} I_{\nu +1}(x){\textrm{d}}x \nonumber \\&= \frac{2^{-\nu }}{q^{\nu +3}} \sum _{m \ge 1,\, k \ge 0} \frac{j_{\nu , m}\, J_\nu (q^{-1} j_{\nu , m})}{q^{2n} J_{\nu +1}(j_{\nu , m})} \frac{2^{-2k}}{k!\Gamma (k+\nu +2)} \int _0^{q \alpha } \frac{x^{2\nu +2n+2k+3}}{x^2+j_{\nu , m}^2}{\textrm{d}}x. \end{aligned}$$

We express the latter integral via the Riemann–Liouville fractional integral transform [27, p. 186]

$$\begin{aligned} \int _0^u x^{\mu -1} (u-x)^{\eta -1} (x^2+\beta ^2)^\lambda {\textrm{d}}x = \beta ^{2\lambda } u^{\mu +\eta -1} B(\mu , \eta ){}_3F_2\left[ \left. \begin{array}{c} -\lambda , \frac{\mu }{2}, \frac{\mu +1}{2}\\ \frac{\mu +\eta }{2}, \frac{\mu +\eta +1}{2} \end{array} \right| - \frac{u^2}{\beta ^2} \right] ,\end{aligned}$$

which holds for all \(\frac{u}{\beta }>0,\) \(\mu >0,\) \(\eta >0\). Indeed, specifying \(\mu = 2\nu +2n+2k+4\), \(\eta =1,\) \(\lambda =-1\) and \(u = q \alpha ,\) \(\beta = j_{\nu , m}\), we have

$$\begin{aligned}&\int _0^{q \alpha } \frac{x^{2\nu +2n+2k+3}}{x^2+j_{\nu , m}^2}{\textrm{d}}x = \frac{(q \alpha )^{2(\nu +n+k+2)}}{2(\nu +2)j_{\nu ,m}^2} \frac{(\nu +2)_{n+k}}{(\nu +3)_{n+k}}{}_2F_1\left[ \left. \begin{array}{c} 1, \nu +n+k+2\\ \nu +n+k+3 \end{array} \right| - \frac{q^2\alpha ^2}{j_{\nu , m}^2} \right] \\&\quad = \frac{(q \alpha )^{2(\nu +n+k+2)}}{2(\nu +2)j_{\nu ,m}^2} \frac{(\nu +2)_n (\nu +2+n)_k}{(\nu +3)_n (\nu +3+n)_k}{}_2F_1\left[ \left. \begin{array}{c} 1, \nu +n+k+2\\ \nu +n+k+3 \end{array} \right| - \frac{q^2\alpha ^2}{j_{\nu , m}^2} \right] , \end{aligned}$$

where we used the Pochhammer symbol transformation \((\omega )_{n+k} = (\omega )_n (\omega +n)_k.\) This in turn implies that

$$\begin{aligned} \Theta _5(q,\nu ,\alpha )&= \frac{\left( \frac{q}{2}\right) ^{\nu +1} (\nu +2)_n}{\Gamma (\nu +3) (\nu +3)_n} \sum _{m \ge 1} \frac{J_\nu (q^{-1} j_{\nu , m})}{j_{\nu ,m} J_{\nu +1}(j_{\nu , m})} \sum _{k \ge 0}\frac{(\nu +2+n)_k}{(\nu +3+n)_k} \frac{\left( \frac{q^2}{4} \right) ^k}{(\nu +2)_k k!}\\&\quad \times {}_2F_1\left[ \left. \begin{array}{c} 1, \nu +n+k+2\\ \nu +n+k+3 \end{array} \right| - \frac{q^2\alpha ^2}{j_{\nu , m}^2} \right] \end{aligned}$$

and consequently we deduce that

$$\begin{aligned} \Theta _5(q,\nu ,\alpha )&= \frac{\left( \frac{q}{2}\right) ^{\nu +1} (\nu +2)_n}{\Gamma (\nu +3) (\nu +3)_n} \sum _{m \ge 1} \frac{J_\nu (q^{-1} j_{\nu , m})}{j_{\nu ,m} J_{\nu +1}(j_{\nu , m})} \nonumber \\&\quad \sum _{k, \ell \ge 0} \frac{(\nu +2+n)_{k+\ell } (1)_\ell }{(\nu +3+n)_{k+\ell }(\nu +2)_k} \frac{\left( \frac{q^2}{4} \right) ^k}{k!} \frac{\left( -\frac{q^2\alpha ^2}{j_{\nu , m}^2}\right) ^\ell }{\ell !} \nonumber \\&= \frac{\left( \frac{q}{2}\right) ^{\nu +1} (\nu +2)_n}{\Gamma (\nu +3) (\nu +3)_n} \sum _{m \ge 1} \frac{J_\nu (q^{-1} j_{\nu , m})}{j_{\nu ,m} J_{\nu +1}(j_{\nu , m})} \nonumber \\&\quad F_{1:1;0}^{1:0;1}\left[ \left. \begin{array}{c} \nu +2+n:\, -;\,1 \\ \nu +3+n :\nu +2;- \end{array} \right| \frac{q^2}{4} ,\, -\frac{q^2\alpha ^2}{j_{\nu , m}^2} \right] . \end{aligned}$$
(5.7)
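
The passage from the \(k\)-sum of Gauss hypergeometric functions to the Kampé de Fériet double series rests only on \((a)_{k+\ell }=(a)_k(a+k)_\ell \) and can be checked numerically on generic parameters. A sketch with mpmath, where \(a,b,c,x,y\) are illustrative stand-ins for \(\nu +2+n,\) \(\nu +3+n,\) \(\nu +2,\) \(q^2/4\) and \(-q^2\alpha ^2/j_{\nu ,m}^2\):

```python
# Check that the k-sum of 2F1's equals the Kampe de Feriet double series;
# a, b, c, x, y are illustrative stand-ins for the parameters in (5.7).
from mpmath import mp, mpf, rf, fac, hyp2f1

mp.dps = 30
a, b, c = mpf('3.5'), mpf('4.5'), mpf('2.5')
x, y = mpf('0.3'), mpf('-0.4')
K = 80  # truncation order (the tails decay geometrically)

# truncated double series F_{1:1;0}^{1:0;1}
F = sum(rf(a, k + l) * rf(1, l) / (rf(b, k + l) * rf(c, k))
        * x**k / fac(k) * y**l / fac(l)
        for k in range(K) for l in range(K))

# the k-sum of Gauss functions it is claimed to equal
G = sum(rf(a, k) / (rf(b, k) * rf(c, k)) * x**k / fac(k)
        * hyp2f1(1, a + k, b + k, y) for k in range(K))

assert abs(F - G) < mpf('1e-20')
```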

Now, from (5.5) and (5.6) we calculate

$$\begin{aligned} \lambda _n(q,\nu ,\alpha ) = \int \left( \frac{\nu }{q} \Theta _4(\nu ,\alpha ) + \Theta _5(q,\nu ,\alpha )\right) {\textrm{d}}q = \nu \cdot \ln (q) \cdot \Theta _4(\nu ,\alpha ) + \int \Theta _5(q,\nu ,\alpha ){\textrm{d}}q,\nonumber \\ \end{aligned}$$
(5.8)

where \(\Theta _5(q,\nu ,\alpha )\) is given by (5.7) and its integral reduces to a sum of terms of the form

$$\begin{aligned} \int q^{p+\nu +1}J_\nu \left( \frac{j_{\nu , m}}{q}\right) {\textrm{d}}q = \frac{j_{\nu , m}^\nu q^{p+2}{}_1F_2\left[ \left. \begin{array}{c} -\frac{p}{2}-1 \\ \nu +1, -\frac{p}{2} \end{array} \right| - \frac{j_{\nu ,m}^2}{4q^2}\right] }{2^\nu \Gamma (\nu +1) (p+2)}. \end{aligned}$$
(5.9)
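
Since (5.9) is an antiderivative, it can be verified by numerically differentiating its right-hand side; a sketch with mpmath, where the values \(p=3,\) \(\nu =1/2,\) \(q=3/2\) and \(j_{\nu ,m}=j_{1/2,1}=\pi \) are illustrative:

```python
# Differentiate the right-hand side of (5.9) numerically and compare it
# with the integrand; p, nu, q and the zero j are illustrative choices.
from mpmath import mp, mpf, pi, besselj, gamma, hyp1f2, diff

mp.dps = 30
nu, p, q = mpf('0.5'), 3, mpf('1.5')
j = pi  # first positive zero of J_{1/2}

def antiderivative(q):
    return (j**nu * q**(p + 2)
            * hyp1f2(-mpf(p)/2 - 1, nu + 1, -mpf(p)/2, -j**2 / (4*q**2))
            / (2**nu * gamma(nu + 1) * (p + 2)))

lhs = diff(antiderivative, q)
rhs = q**(p + nu + 1) * besselj(nu, j / q)
assert abs(lhs - rhs) < mpf('1e-15') * abs(rhs)
```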

Note that the variable integer \(p = p(m)\ge 0\) depends on the summation index \(m \in {\mathbb {N}}\) in the sum occurring in (5.7). Expanding now \(\Theta _5(q,\nu ,\alpha )\) from (5.7) as a triple sum of expressions like the integrand in (5.9), we arrive at

$$\begin{aligned} \int \Theta _5(q,\nu ,\alpha ){\textrm{d}}q&= \frac{4^{-\nu -1} q^3 (\nu +2)_n}{\Gamma ^2(\nu +2) (\nu +3)_n} \sum _{m \ge 1} \frac{j_{\nu ,m}^{\nu -1}}{J_{\nu +1}(j_{\nu , m})} \sum _{k, \ell \ge 0} \frac{(1)_{k+\ell } (\nu +2+n)_{k+\ell } (1)_\ell }{(2)_{k+\ell } (\nu +3+n)_{k+\ell } (\nu +2)_k} \\&\qquad \times \frac{\left( \frac{q^2}{4}\right) ^k}{k!} \frac{\left( -\frac{\alpha ^2}{j_{\nu , m}^2} \right) ^\ell }{\ell !} {}_1F_2\left[ \left. \begin{array}{c} -k-\ell -1 \\ \nu +1, -k-\ell \end{array} \right| - \frac{j_{\nu ,m}^2}{4q^2}\right] . \end{aligned}$$

Therefore, by virtue of (5.8) we deduce that

$$\begin{aligned} \lambda _n(1,\nu ,\alpha ) = \lim _{q \searrow 1}\lambda _n(q,\nu ,\alpha )&= \frac{4^{-\nu -1}(\nu +2)_n}{\Gamma ^2(\nu +2) (\nu +3)_n} \\&\quad \sum _{k, \ell \ge 0} \frac{(1)_{k+\ell } (\nu +2+n)_{k+\ell } (1)_\ell }{(2)_{k+\ell } (\nu +3+n)_{k+\ell } (\nu +2)_k} \frac{(-1)^\ell }{k!\ell !} \frac{\alpha ^{2\ell }}{4^k}\\&\quad \times \sum _{m \ge 1} \frac{j_{\nu ,m}^{\nu -2\ell -1}}{J_{\nu +1}(j_{\nu , m})} {}_1F_2\left[ \left. \begin{array}{c} -k-\ell -1 \\ \nu +1, -k-\ell \end{array} \right| - \frac{j_{\nu ,m}^2}{4}\right] , \end{aligned}$$

and on the other hand we see that

$$\begin{aligned} \Theta _3(\nu ,\alpha )&= \frac{1}{\alpha ^{\nu +2}} \sum _{n \ge 0} \frac{\left( \frac{1}{2}\right) _n}{\alpha ^{2n}\,n!}\lambda _n(1,\nu ,\alpha ). \end{aligned}$$

Collecting now the values of the integrals \(\Theta _1(\nu ,\alpha ),\) \(\Theta _2(\nu ,\alpha )\) and \(\Theta _3(\nu ,\alpha )\), we finally deduce the following result.

Theorem 10

The differential entropy of the random variable \(X \sim \textrm{KB}(a, \alpha , \nu )\) is given by

$$\begin{aligned} h[\varphi _{a, \alpha , \nu }]&=\frac{\sqrt{\frac{\alpha }{2\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )} \frac{\nu \left( \frac{\alpha }{2}\right) ^\nu }{2 (\nu +1) \Gamma (\nu +2)} F_{2:0;1}^{2:1;0}\left[ \left. \begin{array}{c} \nu +1, \nu +1:\, \frac{1}{2};- \\ \nu +2,\nu +2:-;\nu +1 \end{array} \right| 1,\frac{\alpha ^2}{4} \right] \\&\quad - \ln \left( \frac{\sqrt{\frac{\alpha }{2\pi }}}{aI_{\nu +\frac{1}{2}}(\alpha )}\right) \\&\quad - \frac{\sqrt{\frac{\alpha }{2\pi }}}{I_{\nu +\frac{1}{2}}(\alpha )}\frac{2^{-2\nu -1}}{\alpha ^{\nu +2}\Gamma ^2(\nu +2)} \sum _{n, k, \ell \ge 0} \frac{\left( \frac{1}{2}\right) _n(\nu +2)_n}{\alpha ^{2n}(\nu +3)_n n!} \frac{(1)_{k+\ell } (\nu +2+n)_{k+\ell } (1)_\ell }{(2)_{k+\ell } (\nu +3+n)_{k+\ell } (\nu +2)_k} \\&\quad \times \frac{1}{k!} \left( \frac{1}{4}\right) ^k\,\frac{(-\alpha ^2)^\ell }{\ell !} \sum _{m \ge 1} \frac{j_{\nu ,m}^{\nu -2\ell -1}}{J_{\nu +1}(j_{\nu , m})} {}_1F_2\left[ \left. \begin{array}{c} -k-\ell -1 \\ \nu +1, -k-\ell \end{array} \right| - \frac{j_{\nu ,m}^2}{4}\right] . \end{aligned}$$

5.4 Rényi Entropy

Alfréd Rényi introduced his entropy measure, a measure of the variation of uncertainty, in 1961 in [56] for discrete random variables, generalizing a whole family of other entropy measures such as, for instance, the Shannon entropy. For a continuous random variable X with probability density function f, the Rényi entropy is defined, for any real parameter \(\lambda >0\) with \(\lambda \ne 1\), by

$$\begin{aligned} R[f] = \frac{1}{1-\lambda } \ln \left[ \int _{{\mathbb {R}}}\left[ f(x)\right] ^{\lambda }{\textrm{d}}x\right] . \end{aligned}$$
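
For instance, for the uniform density on \([-a,a]\) the integral inside the logarithm equals \((2a)^{1-\lambda }\), so the Rényi entropy is \(\ln (2a)\) for every admissible \(\lambda \); a quick numerical illustration (the values of a and \(\lambda \) below are arbitrary):

```python
# Renyi entropy of the uniform density on [-a, a]: it equals ln(2a)
# for every admissible lambda; the parameter values are arbitrary.
from mpmath import mp, mpf, log, quad

mp.dps = 25
a, lam = mpf('1.5'), mpf('0.7')

f = lambda x: 1 / (2*a)  # uniform probability density on [-a, a]
R = 1 / (1 - lam) * log(quad(lambda x: f(x)**lam, [-a, a]))

assert abs(R - log(2*a)) < mpf('1e-20')
```

Note that \(\ln (2a)\) is also the Shannon entropy of this distribution, consistent with the limit \(\lambda \rightarrow 1\).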

In our case, when \(X \sim \textrm{KB}(a, \alpha , \nu ),\) it takes the form

$$\begin{aligned} R[\varphi _{a,\alpha ,\nu }]&= \frac{1}{1-\lambda } \ln \left[ \int _{-a}^a \left[ \varphi _{a, \alpha , \nu }(x)\right] ^{\lambda }{\textrm{d}}x\right] \nonumber \\&= \frac{1}{1-\lambda } \ln \left[ 2\cdot \left( \frac{\sqrt{\frac{\alpha }{2\pi }}}{aI_{\nu +\frac{1}{2}}(\alpha )}\right) ^{\lambda } \int _0^a\left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\lambda \nu } I_{\nu }^{\lambda }\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) {\textrm{d}}x\right] . \end{aligned}$$
(5.10)

Clearly, the calculation reduces to the integral

$$\begin{aligned} \Lambda [\varphi _{a,\alpha ,\nu }] = \int _0^a\left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\lambda \nu } I_{\nu }^{\lambda }\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) {\textrm{d}}x,\end{aligned}$$

which after the usual substitutions \(s=g_a(x)\) and \(t=\alpha s\) reduces to

$$\begin{aligned} \Lambda [\varphi _{a,\alpha ,\nu }] = \frac{a}{\alpha ^{\lambda \nu +2}} \int _0^\alpha \frac{t^{\lambda \nu +1}}{\sqrt{1-\frac{t^2}{\alpha ^2}}}\, \left[ I_\nu (t)\right] ^\lambda {\textrm{d}}t. \end{aligned}$$
(5.11)

Now, raising the modified Bessel function, given in the form of a power series, to the real power \(\lambda \) leads to

$$\begin{aligned} \left[ I_\nu (t)\right] ^\lambda = \frac{\left( \frac{t}{2}\right) ^{\lambda \nu }}{\left[ \Gamma (\nu +1)\right] ^\lambda } \sum _{n \ge 0} \left( {\begin{array}{c}\lambda \\ n\end{array}}\right) \sum _{k \ge n} a_{\nu ,k}(n) \left( \frac{t}{2}\right) ^{2k} ,\end{aligned}$$

where the shorthand notation

$$\begin{aligned} a_{\nu ,k}(n)= \underset{{\begin{array}{c} k_1,\ldots ,k_n \ge 1\\ k_1+\cdots +k_n=k \end{array}}}{\sum } \frac{1}{\prod \limits _{j=1}^n k_j! (\nu +1)_{k_j}} \end{aligned}$$
(5.12)

is employed. Hence,

$$\begin{aligned} \Lambda [\varphi _{a,\alpha ,\nu }]&= \frac{a}{2^{\lambda \nu } \alpha ^{\lambda \nu +2} \left[ \Gamma (\nu +1)\right] ^\lambda } \sum _{n \ge 0} \left( {\begin{array}{c}\lambda \\ n\end{array}}\right) \sum _{k \ge n} \frac{a_{\nu ,k}(n)}{2^{2k}} \int _0^\alpha \frac{t^{2\lambda \nu +2k+1}}{\sqrt{1-\frac{t^2}{\alpha ^2}}}{\textrm{d}}t \\&= \frac{a \sqrt{\pi }\alpha ^{\lambda \nu }}{2^{\lambda \nu +1}\left[ \Gamma (\nu +1)\right] ^\lambda } \sum _{n \ge 0} \left( {\begin{array}{c}\lambda \\ n\end{array}}\right) \sum _{k \ge n}a_{\nu ,k}(n)\frac{\Gamma (\lambda \nu +k+1)}{\Gamma \left( \lambda \nu +k+\frac{3}{2}\right) }\left( \frac{\alpha }{2}\right) ^{2k}, \end{aligned}$$

which in conjunction with (5.10) implies the following result.

Theorem 11

The Rényi entropy of the Kaiser–Bessel distribution is given by

$$\begin{aligned} R[\varphi _{a,\alpha ,\nu }] = \frac{1}{1-\lambda } \ln \left[ \frac{a^{1-\lambda }\pi ^{\frac{1-\lambda }{2}} \left( \frac{\alpha }{2}\right) ^{\lambda \left( \nu +\frac{1}{2}\right) }\Gamma (\lambda \nu +1)}{\left[ \Gamma (\nu +1)\right] ^\lambda \Gamma \left( \lambda \nu +\frac{3}{2}\right) I_{\nu +\frac{1}{2}}^\lambda (\alpha )}\sum _{n \ge 0} \left( {\begin{array}{c}\lambda \\ n\end{array}}\right) \sum _{k \ge n}a_{\nu ,k}(n) \frac{(\lambda \nu +1)_k}{\left( \lambda \nu +\frac{3}{2}\right) _k}\left( \frac{\alpha }{2}\right) ^{2k}\right] ,\nonumber \\ \end{aligned}$$
(5.13)

where the coefficients \(a_{\nu ,k}(n)\) are described above in (5.12).

It is worth mentioning the result in [13], where a recursive coefficient sequence was reported for the real (or complex) power of the modified Bessel function of the first kind

$$\begin{aligned} \left[ I_\nu (t)\right] ^\lambda = \frac{1}{\left[ \Gamma (\nu +1)\right] ^\lambda } \sum _{n \ge 0} \frac{\gamma _n(\lambda )}{n! (\nu +1)_n}\, \left( \frac{t}{2}\right) ^{2n+\lambda \nu }, \end{aligned}$$
(5.14)

which holds true for \(t, \lambda \in {\mathbb {C}},\) \(\nu \in {\mathbb {C}} {\setminus } {\mathbb {Z}}_{-}\). Here \(\gamma _0(\lambda )=1\), while [13, p. 723, Eq. (6)]

$$\begin{aligned} \gamma _k(\lambda ) = \frac{1}{k} \sum _{m=1}^k [m(\lambda +1)-k] \left( {\begin{array}{c}k\\ m\end{array}}\right) \frac{(\nu +1+m)_{k-m}}{(\nu +1)_{k-m}} \gamma _{k-m}(\lambda ). \end{aligned}$$
(5.15)
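
The recursion is straightforward to implement; a sketch with mpmath that checks the expansion (5.14) against a direct evaluation of \([I_\nu (t)]^\lambda \) (the values of \(\nu ,\) \(\lambda ,\) t and the truncation order are illustrative):

```python
# Compute gamma_k(lambda) by the recursion (5.15) and check the
# expansion (5.14) numerically; nu, lam, t are illustrative values.
from mpmath import mp, mpf, binomial, rf, fac, gamma, besseli

mp.dps = 30
nu, lam, t = mpf('0.5'), mpf('1.7'), mpf('0.8')
N = 30  # truncation order of the series (5.14)

g = [mp.one]  # gamma_0(lambda) = 1
for k in range(1, N):
    g.append(sum((m*(lam + 1) - k) * binomial(k, m)
                 * rf(nu + 1 + m, k - m) / rf(nu + 1, k - m) * g[k - m]
                 for m in range(1, k + 1)) / k)

series = sum(g[n] / (fac(n) * rf(nu + 1, n)) * (t/2)**(2*n + lam*nu)
             for n in range(N)) / gamma(nu + 1)**lam
direct = besseli(nu, t)**lam

assert abs(series - direct) < mpf('1e-18') * direct
```

The truncation is harmless here because \((t/2)^2\) lies well inside the radius of convergence \((j_{\nu ,1}/2)^2\) of the series.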

The related discussion and introduction in [13] clearly show the interconnection between the structure of \(a_{\nu ,k}(n)\) occurring in (5.12) and the building blocks \([4^n n! (\nu +1)_n]^{-1}\) of \(\gamma _k(\lambda )\).

Now, it remains to evaluate the above integral \(\Lambda [\varphi _{a,\alpha ,\nu }]\) by using the representation (5.14), which finally implies that

$$\begin{aligned} R[\varphi _{a,\alpha ,\nu }] = \frac{1}{1-\lambda } \ln \left[ \frac{a^{1-\lambda }\pi ^{\frac{1-\lambda }{2}} \left( \frac{\alpha }{2}\right) ^{\lambda \left( \nu +\frac{1}{2}\right) }\Gamma (\lambda \nu +1)}{\left[ \Gamma (\nu +1)\right] ^\lambda \Gamma \left( \lambda \nu +\frac{3}{2}\right) I_{\nu +\frac{1}{2}}^\lambda (\alpha )}\sum _{n \ge 0} \frac{\gamma _n(\lambda ) (\lambda \nu +1)_n}{n! (\nu +1)_n \left( \lambda \nu +\frac{3}{2}\right) _n}\left( \frac{\alpha }{2}\right) ^{2n}\right] ,\nonumber \\ \end{aligned}$$
(5.16)

giving the Rényi entropy a more elegant form.
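
Formula (5.16) can also be confronted directly with a numerical quadrature of (5.10); a sketch with mpmath, reusing the recursion (5.15), where all parameter values are illustrative:

```python
# Compare the closed form (5.16) for the Renyi entropy with a direct
# quadrature of (5.10); all parameter values below are illustrative.
from mpmath import mp, mpf, pi, sqrt, log, gamma, rf, fac, binomial, besseli, quad

mp.dps = 30
a, alpha, nu, lam = mpf('1'), mpf('1.5'), mpf('0.5'), mpf('1.3')
N = 40  # truncation order

# gamma_n(lambda) via the recursion (5.15)
g = [mp.one]
for k in range(1, N):
    g.append(sum((m*(lam + 1) - k) * binomial(k, m)
                 * rf(nu + 1 + m, k - m) / rf(nu + 1, k - m) * g[k - m]
                 for m in range(1, k + 1)) / k)

# the series and prefactor in (5.16)
S = sum(g[n] * rf(lam*nu + 1, n)
        / (fac(n) * rf(nu + 1, n) * rf(lam*nu + mpf(3)/2, n)) * (alpha/2)**(2*n)
        for n in range(N))
closed = 1/(1 - lam) * log(
    a**(1 - lam) * pi**((1 - lam)/2) * (alpha/2)**(lam*(nu + mpf(1)/2))
    * gamma(lam*nu + 1) * S
    / (gamma(nu + 1)**lam * gamma(lam*nu + mpf(3)/2)
       * besseli(nu + mpf(1)/2, alpha)**lam))

# direct quadrature of (5.10)
w = lambda x: sqrt(1 - (x/a)**2)
integral = quad(lambda x: w(x)**(lam*nu) * besseli(nu, alpha*w(x))**lam, [0, a])
direct = 1/(1 - lam) * log(
    2 * (sqrt(alpha/(2*pi)) / (a * besseli(nu + mpf(1)/2, alpha)))**lam * integral)

assert abs(closed - direct) < mpf('1e-15')
```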

Moreover, by comparing (5.13) and (5.16), we arrive at the following unexpected by-product of the Rényi entropy of the Kaiser–Bessel distribution.

Corollary 4

For all \(x \in {\mathbb {C}}\) and \(\nu \in {\mathbb {C}} {\setminus } {\mathbb {Z}}_{-}\), for the parameter range for which the Kaiser–Bessel distribution \(\textrm{KB}(a, \alpha , \nu )\) is defined, and for \(\lambda \not \in {\mathbb {N}}_0\), the following summation holds true:

$$\begin{aligned} \sum _{k \ge n}\,\underset{{\begin{array}{c} k_1,\ldots ,k_n \ge 1\\ k_1+\cdots +k_n=k \end{array}}}{\sum } \frac{1}{\prod \limits _{j=1}^n k_j! (\nu +1)_{k_j}} \frac{(\lambda \nu +1)_k}{(\lambda \nu +\frac{3}{2})_k}x^k = \frac{(-1)^n\gamma _n(\lambda ) (\lambda \nu +1)_n}{(-\lambda )_n (\nu +1)_n (\lambda \nu +\frac{3}{2})_n}x^n, \qquad n \in {\mathbb {N}}_0, \end{aligned}$$

where \(\gamma _n(\lambda )\) is defined in (5.15).

6 Conclusion and Future Work

In this paper we defined a new probability distribution via the symmetric form of the generalized Kaiser–Bessel window function, and we studied its properties in detail. We investigated in great detail the analytic properties (monotonicity, convexity, log-concavity, geometrical concavity, inflection points, Turán type inequalities) of the probability density function of this new distribution, which we call the Kaiser–Bessel distribution, and we concluded that it is an extension of Wigner's semicircle distribution as well as of the power semicircle distribution. We obtained explicit forms for the moments, cumulative distribution function, characteristic function, moment generating function and differential entropy. The Kaiser–Bessel distribution is a sub-Gaussian distribution and it is not infinitely divisible in the classical sense. We hope that the Kaiser–Bessel distribution will be of interest to the electrical engineering and applied mathematics communities, and will find applications in the engineering sciences.

Note that another window function proposed by Kaiser is given by the time function

$$\begin{aligned} {\widetilde{w}}_{a,\alpha }(r)=\left\{ \begin{array}{ll}\displaystyle \frac{I_1\left( \alpha \sqrt{1-\left( \frac{r}{a}\right) ^2}\right) }{I_1(\alpha )\sqrt{1-\left( \frac{r}{a}\right) ^2}},&{}\qquad |r|\le a\\ 0,&{}\qquad |r|>a\end{array}\right. , \end{aligned}$$

where \(I_1\) is the first-order modified Bessel function of the first kind, and, exactly as in the case of \(w_{a,\alpha }\), the parameter a is the window duration, while the parameter \(\alpha \) controls the taper of the window and thereby the trade-off between the width of the main lobe and the amplitude of the side lobes of the Fourier transform of the window. However, of the two families proposed by Kaiser, the modified zeroth-order Bessel family \(w_{a,\alpha }\) is closer to the optimum zeroth-order prolate spheroidal wave functions. The modified first-order Bessel family \({\widetilde{w}}_{a,\alpha }\) has the slight advantage of smaller first side lobes when compared to either the zeroth-order Bessel window family \(w_{a,\alpha }\) or the prolate spheroidal wave functions, but its side-lobe fall-off rate is slower. From the mathematical point of view it is very natural to consider a common generalization of the Kaiser–Bessel window functions \(w_{a,\alpha }\) and \({\widetilde{w}}_{a,\alpha }\); a possible such generalization is the following window function:

$$\begin{aligned} {\widetilde{w}}_{a,\alpha ,\nu }(r) = \left\{ \begin{array}{ll}\displaystyle \frac{I_{\nu } \left( \alpha \sqrt{1-\left( \frac{r}{a}\right) ^2}\right) }{I_{\nu }(\alpha ) \left( \sqrt{1-\left( \frac{r}{a}\right) ^2}\right) ^{\nu }},&{}\qquad |r|\le a\\ 0,&{}\qquad |r|>a\end{array}\right. , \end{aligned}$$

where \(a>0,\) \(\alpha >0\) and \(\nu >-1.\) Now, by using the change of variable \(s=\sqrt{1-\left( \frac{x}{a}\right) ^2}\) we obtain

$$\begin{aligned} \int _{-a}^{a}{\widetilde{w}}_{a,\alpha ,\nu }(x){\textrm{d}}x&= \dfrac{2}{I_\nu (\alpha )}\int _0^a \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{-\nu } I_\nu \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) {\textrm{d}}x\\&= \frac{2a}{I_{\nu }(\alpha )}\int _0^1\frac{s^{1-\nu }}{\sqrt{1-s^2}}I_{\nu }(\alpha s){\textrm{d}}s = a \sqrt{\dfrac{2 \pi }{\alpha }} \frac{{\textbf{L}}_{\nu -\frac{1}{2}}(\alpha )}{I_{\nu }(\alpha )}, \end{aligned}$$

where \({\textbf{L}}_{\nu }\) stands for the modified Struve function of order \(\nu .\) It turns out that the function \({\widetilde{\varphi }}_{a,\alpha ,\nu }:{\mathbb {R}}\rightarrow [0,\infty ),\) defined by

$$\begin{aligned} {\widetilde{\varphi }}_{a,\alpha ,\nu }(x) = \frac{1}{a}\sqrt{\frac{\alpha }{2 \pi }}\frac{I_\nu (\alpha )}{{\textbf{L}}_{\nu -\frac{1}{2}}(\alpha )} \cdot {\widetilde{w}}_{a,\alpha ,\nu }(x), \end{aligned}$$

that is,

$$\begin{aligned} {\widetilde{\varphi }}_{a,\alpha ,\nu }(x) = \left\{ \begin{array}{ll} \displaystyle \frac{\sqrt{\frac{\alpha }{2 \pi }}}{a{\textbf{L}}_{\nu -\frac{1}{2}}(\alpha )} \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{-\nu } I_{\nu }\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ,&{}\qquad |x| \le a\\ 0,&{}\qquad |x|> a \end{array}\right. , \end{aligned}$$
(6.1)

is a probability density function with symmetric support \([-a,a]\). Consequently, the continuous and symmetric random variable defined on some standard probability space has this kind of Kaiser–Bessel distribution with the parameter space \(\left\{ (a, \alpha , \nu ) \in {\mathbb {R}}_+^2\times (-1, \infty )\right\} \) if it possesses the probability density function (6.1). Since for \(\nu \) fixed, as \(x\rightarrow 0\)

$$\begin{aligned} {\textbf{L}}_{\nu }(x)\sim \frac{x^{\nu +1}}{\sqrt{\pi }2^{\nu }\Gamma \left( \nu +\frac{3}{2}\right) }, \end{aligned}$$

we obtain that for \(a>0,\) \(\nu >-1\) and \(|x|\le a\) fixed as \(\alpha \rightarrow 0\)

$$\begin{aligned} {\widetilde{\varphi }}_{a,\alpha ,\nu }(x)\sim \frac{1}{2a}, \end{aligned}$$

and thus the Kaiser–Bessel distribution with probability density function in (6.1) can be considered as an extension of the symmetric (with respect to the origin) continuous uniform distribution or rectangular distribution.
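
The modified Struve normalization can be confirmed numerically by checking that the density (6.1) integrates to one; a sketch with mpmath (the parameter values are illustrative):

```python
# Check that the density (6.1) integrates to one, i.e. the modified
# Struve normalization; the parameter values below are illustrative.
from mpmath import mp, mpf, pi, sqrt, besseli, struvel, quad

mp.dps = 25
a, alpha, nu = mpf('1'), mpf('2'), mpf('1')

s = lambda x: sqrt(1 - (x/a)**2)
pdf = lambda x: (sqrt(alpha/(2*pi)) / (a * struvel(nu - mpf(1)/2, alpha))
                 * besseli(nu, alpha*s(x)) / s(x)**nu)
total = 2 * quad(pdf, [0, a])  # the density is even

assert abs(total - 1) < mpf('1e-15')
```

Repeating the computation with small \(\alpha \) shows the density flattening toward \(1/(2a)\), in line with the uniform limit above.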

Moreover, taking into account the structure of the symmetric and generalized Kaiser–Bessel window functions \(w_{a,\alpha ,\nu }\) and \({\widetilde{w}}_{a,\alpha ,\nu }\) it is also very natural to consider the following more general four-parameter Kaiser–Bessel window function, defined by

$$\begin{aligned} w_{a,\alpha ,\nu ,\mu }(x) = \left\{ \begin{array}{ll}\displaystyle \frac{1}{I_{\nu }(\alpha )} \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\mu } I_{\nu }\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) , &{}\qquad |x|\le a\\ 0, &{}\qquad |x|>a\end{array}\right. , \end{aligned}$$

where \(a>0,\) \(\alpha >0\) and \(\mu ,\nu >-1.\) By using the change of variable \(s=\sqrt{1-\left( \frac{x}{a}\right) ^2}\) we obtain

$$\begin{aligned} \int _{-a}^{a}w_{a,\alpha ,\nu ,\mu }(x){\textrm{d}}x&= \dfrac{2}{I_\nu (\alpha )}\int _0^a \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^\mu I_\nu \left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) {\textrm{d}}x\\&= \frac{2a}{I_{\nu }(\alpha )}\int _0^1\frac{s^{\mu +1}}{\sqrt{1-s^2}}I_{\nu }(\alpha s){\textrm{d}}s\\&=\frac{a\sqrt{\pi }}{2^{\nu }\Gamma (\nu +1)\alpha ^{-\nu }I_{\nu }(\alpha )}\cdot \frac{\Gamma \left( \frac{\mu +\nu +2}{2}\right) }{\Gamma \left( \frac{\mu +\nu +3}{2}\right) }\cdot {}_1F_2 \left[ \left. \begin{array}{c} \frac{1}{2}\left( \mu +\nu +2\right) \\ \frac{1}{2}\left( \mu +\nu +3\right) , \nu +1 \end{array} \right| \frac{\alpha ^2}{4} \right] . \end{aligned}$$

It turns out that the function \(\varphi _{a,\alpha ,\nu ,\mu }:{\mathbb {R}}\rightarrow [0,\infty ),\) defined by

$$\begin{aligned} \varphi _{a,\alpha ,\nu ,\mu }(x) = I_{\nu }(\alpha )\cdot c_{a,\alpha ,\nu ,\mu } \cdot w_{a,\alpha ,\nu ,\mu }(x), \end{aligned}$$

that is,

$$\begin{aligned} \varphi _{a,\alpha ,\nu ,\mu }(x) = \left\{ \begin{array}{ll} c_{a,\alpha ,\nu ,\mu }\cdot \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\mu } I_{\nu }\left( \alpha \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ,&{}\qquad |x| \le a\\ 0,&{}\qquad |x|> a \end{array}\right. , \end{aligned}$$
(6.2)

where

$$\begin{aligned} c_{a,\alpha ,\nu ,\mu }=\displaystyle \frac{2^{\nu }\Gamma (\nu +1)}{a\sqrt{\pi }\alpha ^{\nu }}\cdot \frac{\Gamma \left( \frac{\mu +\nu +3}{2}\right) }{\Gamma \left( \frac{\mu +\nu +2}{2}\right) }\cdot \frac{1}{{}_1F_2 \left[ \left. \begin{array}{c} \frac{1}{2}\left( \mu +\nu +2\right) \\ \frac{1}{2}\left( \mu +\nu +3\right) , \nu +1 \end{array} \right| \frac{\alpha ^2}{4} \right] }, \end{aligned}$$

is a probability density function with symmetric support \([-a,a]\). Consequently, the continuous and symmetric random variable defined on some standard probability space has this kind of four-parameter Kaiser–Bessel distribution with the parameter space \(\left\{ (a, \alpha , \nu ,\mu ) \in {\mathbb {R}}_+^2\times (-1, \infty )\times (-1,\infty )\right\} \) if it possesses the generalized probability density function (6.2). Since for \(\nu \) and \(\mu \) fixed as \(\alpha \rightarrow 0\) we have

$$\begin{aligned} {}_1F_2 \left[ \left. \begin{array}{c} \frac{1}{2}\left( \mu +\nu +2\right) \\ \frac{1}{2}\left( \mu +\nu +3\right) , \nu +1 \end{array} \right| \frac{\alpha ^2}{4} \right] \sim 1, \end{aligned}$$

we find that for \(a>0,\) \(\nu ,\mu >-1\) and \(|x|\le a\) fixed as \(\alpha \rightarrow 0\)

$$\begin{aligned} \varphi _{a,\alpha ,\nu ,\mu }(x)\sim \frac{1}{a\sqrt{\pi }}\cdot \frac{\Gamma \left( \frac{\mu +\nu +3}{2}\right) }{\Gamma \left( \frac{\mu +\nu +2}{2}\right) }\cdot \left( \sqrt{1-\left( \frac{x}{a}\right) ^2}\right) ^{\mu +\nu }. \end{aligned}$$

The limiting distribution can be considered in fact as a two-parameter power semicircle distribution and it is an extension of the limiting distribution in (3.7). Observe also that \(\varphi _{a,\alpha ,\nu ,\nu }\equiv \varphi _{a,\alpha ,\nu }\) and \(\varphi _{a,\alpha ,\nu ,-\nu }\equiv {\widetilde{\varphi }}_{a,\alpha ,\nu }.\)
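
Similarly, the normalizing constant \(c_{a,\alpha ,\nu ,\mu }\) can be confirmed by checking that (6.2) integrates to one; a sketch with mpmath (the parameter values are illustrative):

```python
# Check that the four-parameter density (6.2) integrates to one;
# the parameter values below are illustrative.
from mpmath import mp, mpf, pi, sqrt, gamma, besseli, hyp1f2, quad

mp.dps = 25
a, alpha, nu, mu = mpf('1'), mpf('2'), mpf('0.5'), mpf('1.5')

# the normalizing constant c_{a, alpha, nu, mu}
c = (2**nu * gamma(nu + 1) / (a * sqrt(pi) * alpha**nu)
     * gamma((mu + nu + 3)/2) / gamma((mu + nu + 2)/2)
     / hyp1f2((mu + nu + 2)/2, (mu + nu + 3)/2, nu + 1, alpha**2/4))

s = lambda x: sqrt(1 - (x/a)**2)
total = 2 * c * quad(lambda x: s(x)**mu * besseli(nu, alpha*s(x)), [0, a])

assert abs(total - 1) < mpf('1e-15')
```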

Motivated by the results of the present paper, it is our future plan to investigate in detail the properties of the probability density functions (6.1) and (6.2), as well as of their probability distributions: monotonicity, convexity and log-convexity properties of the probability density functions; properties of the moments, absolute moments, Mellin transforms, effective variance and excess kurtosis; explicit forms of the cumulative distribution functions, characteristic functions, moment generating functions and differential entropy; uniform and non-uniform random variate generation for these Kaiser–Bessel distributions; and relations with other probability distributions. Moreover, our aim is to study in more detail the log-concavity with respect to \(\nu \) and \(\alpha \) of the probability density function discussed in this paper; the special case \(\alpha =\nu ,\) including the asymptotic expansion of the density function; integral transforms of the density function; the Stieltjes transform of the density function; and the asymptotic and numerical inversion of the cumulative distribution function of the Kaiser–Bessel distribution.