Integral transform methods in goodness-of-fit testing, I: the gamma distributions

Abstract

We apply the method of Hankel transforms to develop goodness-of-fit tests for gamma distributions with given shape parameters and unknown rate parameters. We derive the limiting null distribution of the test statistic as an integrated squared Gaussian process, obtain the corresponding covariance operator and oscillation properties of its eigenfunctions, show that the eigenvalues of the operator satisfy an interlacing property, and apply the tests to two data sets. We prove consistency of the test, provide numerical power comparisons with alternative tests, study the test statistic under several contiguous alternatives, and obtain the asymptotic distribution of the test statistic for gamma alternatives with varying rate or shape parameters and for certain contaminated gamma models. We investigate the approximate Bahadur slope of the test statistic under local alternatives, and we establish the validity of the Wieand condition, under which the approximate Bahadur and Pitman approaches to efficiency are in accord.

References

  • Allen AO (1990) Probability, statistics, and queueing theory, 2nd edn. Academic Press, San Diego

  • Bahadur RR (1960) Stochastic comparison of tests. Ann Math Stat 31:276–295

  • Bahadur RR (1967) Rates of convergence of estimates and test statistics. Ann Math Stat 38:303–324

  • Bahadur RR (1971) Some limit theorems in statistics. SIAM, Philadelphia

  • Baringhaus L, Taherizadeh F (2010) Empirical Hankel transforms and their applications to goodness-of-fit tests. J Multivar Anal 101:1445–1467

  • Baringhaus L, Taherizadeh F (2013) A K–S type test for exponentiality based on empirical Hankel transforms. Commun Stat Theory Methods 42:3781–3792

  • Barlow RE, Campo R (1975) Total time on test processes and applications to failure data analysis. Reliability and fault tree analysis. SIAM, Philadelphia, pp 451–481

  • Baringhaus L, Ebner B, Henze N (2017) The limit distribution of weighted \(L^2\)-goodness-of-fit statistics under fixed alternatives, with applications. Ann Inst Stat Math 69:969–995

  • Bauer H (1981) Probability theory and elements of measure theory, second English edn. Academic Press, New York

  • Billingsley P (1968) Convergence of probability measures. Wiley, New York

  • Billingsley P (1979) Probability and measure. Wiley, New York

  • Brislawn C (1991) Traceable integral kernels on countably generated measure spaces. Pac J Math 150:229–240

  • Chow YS, Teicher H (1988) Probability theory: independence, interchangeability, martingales, 2nd edn. Springer, New York

  • Cuparić M, Milošević B, Obradović M (2018) New \(L^2\)-type exponentiality tests. Preprint, arXiv:1809.07585

  • Czaplicki JM (2014) Statistics for mining engineering. CRC Press, Boca Raton

  • D’Agostino R, Stephens M (1986) Goodness-of-fit techniques. Marcel Dekker, New York

  • Dancis J, Davis C (1987) An interlacing theorem for eigenvalues of self-adjoint operators. Linear Algebra Appl 88–89:117–122

  • de Wet T, Randles RH (1987) On the effect of substituting parameter estimators in limiting \(\chi ^2\), \(U\) and \(V\) statistics. Ann Stat 15:398–412

  • Erdélyi A, Magnus W, Oberhettinger F, Tricomi FG (1953) Higher transcendental functions, vol 2. McGraw-Hill, New York

  • Gīkhman ĬĪ, Skorokhod AV (1980) The theory of stochastic processes, vol 1. Springer, New York

  • Gupta RD, Richards DSTP (1983) Application of results of Kotz, Johnson and Boyd to the null distribution of Wilks’ criterion. In: Sen PK (ed) Contributions to statistics: essays in Honour of Johnson NL. North-Holland, Amsterdam, pp 205–210

  • Hadjicosta E (2019) Integral transform methods in goodness-of-fit testing. Doctoral dissertation, Pennsylvania State University, University Park

  • Hadjicosta E, Richards D (2018) Integral transform methods in goodness-of-fit testing, I: the gamma distributions. Preprint, arXiv:1810.07138

  • Henze N, Meintanis SG, Ebner B (2012) Goodness-of-fit tests for the gamma distribution based on the empirical Laplace transform. Commun Stat Theory Methods 41:1543–1556

  • Hochstadt H (1973) One-dimensional perturbations of compact operators. Proc Am Math Soc 37:465–467

  • Hogg RV, Tanis EA (2009) Probability and statistical inference, 8th edn. Pearson, Upper Saddle River

  • Imhof JP (1961) Computing the distribution of quadratic forms in normal variables. Biometrika 48:419–426

  • Johnson RA, Wichern DW (1998) Applied multivariate statistical analysis, 5th edn. Prentice-Hall, Upper Saddle River

  • Karlin S (1964) The existence of eigenvalues for integral operators. Trans Am Math Soc 113:1–17

  • Kass RE, Eden UT, Brown EN (2014) Analysis of neural data. Springer, New York

  • Kotz S, Johnson NL, Boyd DW (1967) Series representations of distributions of quadratic forms in normal variables. I. Central case. Ann Math Stat 38:823–837

  • Le Maître OP, Knio OM (2010) Spectral methods for uncertainty quantification. Springer, New York

  • Ledoux M, Talagrand M (1991) Probability in Banach spaces. Springer, New York

  • Leucht A, Neumann MH (2013) Degenerate \(U\)- and \(V\)-statistics under ergodicity: asymptotics, bootstrap and applications in statistics. Ann Inst Stat Math 65:349–386

  • Olver FW, Lozier DW, Boisvert RF, Clark CW (eds) (2010) NIST handbook of mathematical functions. Cambridge University Press, New York

  • Matsui M, Takemura A (2008) Goodness-of-fit tests for symmetric stable distributions—empirical characteristic function approach. TEST 17:546–566

  • Pettitt AN (1978) Generalized Cramér–von Mises statistics for the gamma distribution. Biometrika 65:232–235

  • Postan MY, Poizner MB (2013) Method of assessment of insurance expediency of quay structures’ damage risks in sea ports. In: Weintrit A, Neumann T (eds) Marine navigation and safety of sea transportation: maritime transport and shipping. CRC Press, Boca Raton, pp 123–127

  • R Development Core Team (2007) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria

  • Sneddon IN (1972) The use of integral transforms. McGraw-Hill, New York

  • Sturgul JR (2015) Discrete simulation and animation for mining engineers. CRC Press, Boca Raton

  • Sunder VS (2015) Operators on Hilbert space. Hindustan Book Agency, New Delhi

  • Szegö G (1967) Orthogonal polynomials, 3rd edn. American Mathematical Society, Providence, RI

  • Taherizadeh F (2009) Empirical Hankel transform and statistical goodness-of-fit tests for exponential distributions. PhD Thesis, University of Hannover, Hannover

  • Wieand HS (1976) A condition under which the Pitman and Bahadur approaches to efficiency coincide. Ann Stat 4:1003–1011

  • Young N (1998) An introduction to Hilbert space. Cambridge University Press, New York

Acknowledgements

We are grateful to the reviewers and the editors for helpful and constructive comments on the initial version of the manuscript.

Corresponding author

Correspondence to Donald Richards.

Ethics declarations

Conflict of interest:

On behalf of both authors, the corresponding author states that there is no conflict of interest.

Additional information

This paper is dedicated to Professor Norbert Henze, on the occasion of his 67th birthday.

Appendices

Appendix 1: Bessel functions and Hankel transforms

For the special case in which \(\nu = -\tfrac{1}{2}\), it follows from (2.1) that, for \(x \in {\mathbb {R}}\),

$$\begin{aligned} x^{1/2} \, J_{-1/2}(x) = \Big (\frac{2}{\pi }\Big )^{1/2} \cos x. \end{aligned}$$
(9.1)

For \(\nu > -1/2\), the Bessel function is also given by the Poisson integral,

$$\begin{aligned} J_{\nu }(x) = \frac{(x/2)^{\nu }}{\pi ^{1/2} \, \varGamma \big (\nu +\frac{1}{2}\big )} \int ^\pi _0 \cos (x\cos \theta ) (\sin \theta )^{2\nu } \, {\mathrm{d}}\theta , \end{aligned}$$
(9.2)

\(x \in {\mathbb {R}}\); see Erdélyi et al. (1953, 7.12(9)) or Olver et al. (2010, (10.9.4)). This result can be proved by expanding \(\cos (x\cos \theta )\) as a power series in \(x\cos \theta \) and integrating term-by-term.
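
In outline: expanding \(\cos (x\cos \theta ) = \sum _{k=0}^{\infty } (-1)^k (x\cos \theta )^{2k}/(2k)!\), noting that the odd powers of \(\cos \theta \) integrate to zero, and using the beta integral \(\int ^\pi _0 (\cos \theta )^{2k} (\sin \theta )^{2\nu } \, {\mathrm{d}}\theta = \varGamma \big (k+\frac{1}{2}\big ) \varGamma \big (\nu +\frac{1}{2}\big )/\varGamma (k+\nu +1)\) together with \(\varGamma \big (k+\frac{1}{2}\big ) = (2k)! \, \pi ^{1/2}/(4^k \, k!)\), the right-hand side of (9.2) becomes

$$\begin{aligned} \frac{(x/2)^{\nu }}{\pi ^{1/2} \, \varGamma \big (\nu +\frac{1}{2}\big )} \sum _{k=0}^{\infty } \frac{(-1)^k x^{2k}}{(2k)!} \, \frac{\varGamma \big (k+\frac{1}{2}\big ) \, \varGamma \big (\nu +\frac{1}{2}\big )}{\varGamma (k+\nu +1)} = \sum _{k=0}^{\infty } \frac{(-1)^k \, (x/2)^{2k+\nu }}{k! \, \varGamma (k+\nu +1)} = J_{\nu }(x). \end{aligned}$$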

The Bessel function \(J_\nu \) also satisfies the inequality,

$$\begin{aligned} | J_{\nu }(z)| \le \frac{1}{\varGamma (\nu +1)} \, |z/2 |^{\nu } \, \exp ({\mathrm{Im}}(z)), \end{aligned}$$
(9.3)

\(\nu \ge -1/2\), \(z \in {\mathbb {C}}\); see Erdélyi et al. (1953, 7.3.2(4)) or Olver et al. (2010, (10.14.4)).

Henceforth, we assume that \(\nu \ge -1/2\). For \(t, x \ge 0\), we set \(z = 2(tx)^{1/2}\) in (9.3) to obtain

$$\begin{aligned} \big |(tx)^{-\nu /2}J_{\nu }\big (2(tx)^{1/2}\big )\big | \le \frac{1}{\varGamma (\nu +1)}. \end{aligned}$$
(9.4)

Although the next two results may be known, we were unable to find them in the literature.

Lemma 3

For \(\nu \ge -1/2\) and \(t \ge 0\),

$$\begin{aligned} \big |t^{-\nu }J_{\nu +1}(t)\big | \le \frac{1}{2^{\nu } \pi ^{1/2} \varGamma \big (\nu +\frac{3}{2}\big )}. \end{aligned}$$
(9.5)

Proof

By Olver et al. (2010, (10.6.6)),

$$\begin{aligned} t^{-\nu }J_{\nu +1}(t) = -\big (t^{-\nu }J_{\nu }(t)\big )', \end{aligned}$$
(9.6)

\(t \ge 0\). For \(\nu > -1/2\), it follows by differentiating the Poisson integral (9.2) that

$$\begin{aligned} 2^\nu \pi ^{1/2} \varGamma \big (\nu +\tfrac{1}{2}\big ) \, |t^{-\nu }J_{\nu +1}(t)|&= \bigg | \int ^\pi _0 {\cos \theta \ \sin (t\cos \theta ) \ (\sin \theta )^{2\nu }} \, {\mathrm{d}}\theta \bigg | \\&\le \int ^\pi _0 {| \cos \theta | \ |(\sin \theta )^{2\nu } |} \, {\mathrm{d}}\theta . \end{aligned}$$

By a substitution, \(s = \sin ^2 \theta \), the latter integral reduces to a beta integral,

$$\begin{aligned} \int _0^1 s^{a-1} (1-s)^{b-1} {\mathrm{d}}s = \frac{\varGamma (a) \, \varGamma (b)}{\varGamma (a+b)}, \end{aligned}$$

\(a, b > 0\). This produces (9.5).

For \(\nu = -1/2\), it follows from (9.6) and (9.1) that

$$\begin{aligned} t^{1/2} J_{1/2}(t) = (2/\pi )^{1/2} \, \sin t; \end{aligned}$$
(9.7)

cf. Olver et al. (2010, (10.16.1)). Then, \(|t^{1/2} J_{1/2}(t)| \le (2/\pi )^{1/2},\) as stated in (9.5). \(\square \)

Remark 5

Substituting \(\nu = 0\) in Lemma 3, we obtain \(|J_1(t)| \le 2/\pi \), \(t \ge 0\). This bound is sharper than a bound given in Olver et al. (2010, (10.14.1)), viz., \(|J_1(t)| \le 2^{-1/2}\), \(t \ge 0\).
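
As a quick numerical check of (9.5) and of Remark 5 (an illustration only, not part of the proofs), the bound can be evaluated on a grid:

```python
import numpy as np
from scipy.special import jv, gamma

# Check the bound (9.5): |t^{-nu} J_{nu+1}(t)| <= 1 / (2^nu sqrt(pi) Gamma(nu + 3/2)).
t = np.linspace(1e-6, 200.0, 200001)
for nu in (-0.5, 0.0, 0.5, 1.0, 2.5):
    lhs = np.abs(t ** (-nu) * jv(nu + 1, t))
    bound = 1.0 / (2.0 ** nu * np.sqrt(np.pi) * gamma(nu + 1.5))
    assert lhs.max() <= bound + 1e-12

# Remark 5: for nu = 0 the bound is 2/pi, sharper than the bound 2^{-1/2}.
print(np.abs(jv(1, t)).max(), 2.0 / np.pi)
```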

Lemma 4

For \(\nu \ge -1/2\), the function \(t^{-\nu }J_{\nu +1}(t)\), \(t \ge 0\), is Lipschitz continuous, satisfying for \(u, v \in {\mathbb {R}}\), the inequality

$$\begin{aligned} \big |u^{-\nu }J_{\nu +1}(u) - v^{-\nu }J_{\nu +1}(v)\big | \le \frac{1}{2^{\nu +1} \varGamma (\nu +2)} \, |u-v|. \end{aligned}$$
(9.8)

Proof

For \(\nu > -1/2\) we apply (9.6), (9.2), and the triangle inequality to obtain

$$\begin{aligned}&2^\nu \pi ^{1/2} \varGamma \big (\nu +\tfrac{1}{2}\big ) \, \big |u^{-\nu } J_{\nu +1}(u) - v^{-\nu }J_{\nu +1}(v)\big | \\&\quad \le \int ^\pi _0 |\sin (u\cos \theta )- \sin (v \cos \theta )| \ |\cos \theta | \ (\sin \theta )^{2\nu } \, {\mathrm{d}}\theta . \end{aligned}$$

By a well-known trigonometric identity, and the inequality \(|\sin t| \le |t|\), \(t \in {\mathbb {R}}\),

$$\begin{aligned} |\sin (u\cos \theta )- \sin (v \cos \theta )|&= 2 \big |\sin \big (\tfrac{1}{2} (u-v) \cos \theta \big ) \ \cos \big (\tfrac{1}{2} (u+v) \cos \theta \big )\big | \nonumber \\&\le |u-v| \ |\cos \theta | \ \big |\cos \big (\tfrac{1}{2} (u+v) \cos \theta \big )\big | \nonumber \\&\le |u-v| \ |\cos \theta |. \end{aligned}$$
(9.9)

Therefore,

$$\begin{aligned} \big |u^{-\nu }J_{\nu +1}(u) - v^{-\nu }J_{\nu +1}(v)\big |&\le \frac{2}{2^\nu \pi ^{1/2} \varGamma \big (\nu +\frac{1}{2}\big )} |u-v| \, \int ^{\pi /2}_0 (\cos \theta )^2 \ (\sin \theta )^{2\nu } \, {\mathrm{d}}\theta . \end{aligned}$$

Substituting \(t = \sin ^2 \theta \) reduces the latter integral to a beta integral, and then we obtain (9.8).

For \(\nu = -1/2\), we apply (9.7) to obtain

$$\begin{aligned} \big |u^{1/2} J_{1/2}(u) - v^{1/2} J_{1/2}(v)\big |&= (2/\pi )^{1/2} \ |\sin u - \sin v| \le (2/\pi )^{1/2} \ |u-v|, \end{aligned}$$

the latter inequality following from (9.9) with \(\theta = 0\). Then, we obtain (9.8) for \(\nu = -1/2\). \(\square \)

As regards the modified Bessel function \(I_\nu \), defined in (2.2), with \(\mathrm {i}= \sqrt{-1}\) we find from (2.1) that \(I_\nu (x) = \mathrm {i}^{-\nu } \, J_{\nu }(\mathrm {i}x)\), \(x \in {\mathbb {R}}\); hence, by (9.3),

$$\begin{aligned} | \varGamma (\nu +1) \, (x/2)^{-\nu } \, I_\nu (x)| \le e^{x}, \quad x \ge 0. \end{aligned}$$
(9.10)

For \(n \in {\mathbb {N}}_0\) and \(\alpha > 0\), the (generalized) Laguerre polynomial of order \(\alpha -1\) and degree n is

$$\begin{aligned} L_n^{(\alpha -1)}(x)&= \frac{(\alpha )_n}{n!} \, {}_1F_1(-n;\alpha ;x) = \sum _{k=0}^n \frac{(\alpha +k)_{n-k}}{(n-k)!} \, \frac{(-x)^k}{ k!}, \end{aligned}$$

\(x \in {\mathbb {R}}\); see Olver et al. (2010, Chapter 18) or Szegö (1967, Chapter 5). The normalized (generalized) Laguerre polynomial of order \(\alpha -1\) and degree n is defined by

$$\begin{aligned} {\mathcal {L}}_n^{(\alpha -1)}(x):= \left( \frac{n!}{(\alpha )_n}\right) ^{1/2} L_n^{(\alpha -1)}(x), \end{aligned}$$
(9.11)

\(x \in {\mathbb {R}}\). It is well-known (see Olver et al. (2010, Chapter 18.3) or Szegö (1967, Chapter 5.1)) that the polynomials \({\mathcal {L}}_n^{(\alpha -1)}\) are orthonormal with respect to the \(Gamma(\alpha ,1)\) distribution:

$$\begin{aligned} \int ^\infty _0 {{\mathcal {L}}_n^{(\alpha -1)}(x){\mathcal {L}}_m^{(\alpha -1)}(x)\frac{x^{\alpha -1}e^{-x}}{\varGamma (\alpha )}} \, {\mathrm{d}}x = {\left\{ \begin{array}{ll} 1, &{} \hbox {if } n = m \\ 0, &{} \hbox {if } n \ne m \end{array}\right. } \end{aligned}$$
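
This orthonormality is easy to verify numerically. The following minimal sketch (our illustration) uses Gauss–Laguerre quadrature with weight \(x^{\alpha -1}e^{-x}\):

```python
import numpy as np
from scipy.special import eval_genlaguerre, gamma, poch, roots_genlaguerre

def normalized_laguerre(n, alpha, x):
    # The normalized polynomial (9.11): sqrt(n! / (alpha)_n) * L_n^{(alpha-1)}(x).
    return np.sqrt(gamma(n + 1.0) / poch(alpha, n)) * eval_genlaguerre(n, alpha - 1.0, x)

alpha = 2.7
# Gauss-Laguerre nodes/weights for the weight x^{alpha-1} e^{-x}; dividing by
# Gamma(alpha) integrates against the Gamma(alpha, 1) density.
x, w = roots_genlaguerre(60, alpha - 1.0)
for n in range(5):
    for m in range(5):
        inner = np.sum(w * normalized_laguerre(n, alpha, x)
                         * normalized_laguerre(m, alpha, x)) / gamma(alpha)
        assert abs(inner - (1.0 if n == m else 0.0)) < 1e-10
```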

Lemma 5

For \(v > 0\) and \(\alpha > 0\),

$$\begin{aligned} \int ^\infty _0 x^{\alpha } e^{-v x} L_n^{(\alpha -1)}(x) \, {\mathrm{d}}x = \frac{\varGamma (\alpha +n)}{n!} (v-1)^{n-1} v^{-(\alpha +n+1)} \big (\alpha (v-1)-n\big ). \end{aligned}$$

Proof

Starting with the known integral (Olver et al. 2010, (18.17.34)),

$$\begin{aligned} \int ^{\infty }_0 x^{\alpha -1}e^{-v x} L_n^{(\alpha -1)}(x) \, {\mathrm{d}}x = \frac{\varGamma (\alpha +n)}{n!} \, (v-1)^n \, v^{-(\alpha +n)}, \end{aligned}$$

we differentiate each side with respect to v and simplify the outcome to obtain the result. \(\square \)
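
In detail, differentiating the right-hand side of this formula with respect to \(v\) gives

$$\begin{aligned} -\int ^{\infty }_0 x^{\alpha }e^{-v x} L_n^{(\alpha -1)}(x) \, {\mathrm{d}}x&= \frac{\varGamma (\alpha +n)}{n!} \big [n(v-1)^{n-1}v^{-(\alpha +n)} - (\alpha +n)(v-1)^{n}v^{-(\alpha +n+1)}\big ] \\&= \frac{\varGamma (\alpha +n)}{n!} \, (v-1)^{n-1} \, v^{-(\alpha +n+1)} \big [nv - (\alpha +n)(v-1)\big ], \end{aligned}$$

and since \(nv - (\alpha +n)(v-1) = n - \alpha (v-1)\), negating both sides yields the formula stated in Lemma 5.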

Proof of Lemma 1

  1. (i)

    By (9.4) for \(J_{\nu }(x)\), \(\varGamma (\nu +1) \big |(tx)^{-\nu /2}J_{\nu }(2\sqrt{tx})\big | \le 1\) for all \(x, t > 0\). Therefore, by the triangle inequality, \(|{\mathcal {H}}_{X, \nu }(t)| \le 1\).

  2. (ii)

    It follows from the series expansion (2.1) that

    $$\begin{aligned} \varGamma (\nu +1) (tx)^{-\nu /2} J_{\nu }\big (2(tx)^{1/2}\big )\Big |_{t=0} = 1, \end{aligned}$$

    for all x, so we obtain \({\mathcal {H}}_{X, \nu }(0) = 1.\)

  3. (iii)

    As the function \((tx)^{-\nu /2}J_{\nu }(2\sqrt{tx})\) is a power series in tx, it is continuous in \(t \ge 0\) for every fixed \(x \ge 0\). As it is also bounded, then \(\varGamma (\nu +1)(tx)^{-\nu /2}J_{\nu }(2\sqrt{tx}) f(x)\) is bounded by the Lebesgue integrable function f(x) for all \(x, t \ge 0\). Therefore, the conclusion follows from the Dominated Convergence Theorem. \(\square \)
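
As an aside, the empirical counterpart of \({\mathcal {H}}_{X,\nu }\) is simple to compute. The following minimal numerical sketch (our illustration; the function name is ours) evaluates \({\mathcal {H}}_n(t) = n^{-1}\sum _{i=1}^n \varGamma (\nu +1)(t x_i)^{-\nu /2} J_\nu (2\sqrt{t x_i})\) and checks parts (i) and (ii) of Lemma 1:

```python
import numpy as np
from scipy.special import jv, gamma

def empirical_hankel(x, nu, t):
    # H_n(t) = (1/n) * sum_i Gamma(nu+1) (t x_i)^{-nu/2} J_nu(2 sqrt(t x_i)).
    x = np.asarray(x, float)
    tx = np.outer(np.asarray(t, float), x)
    with np.errstate(divide="ignore", invalid="ignore"):
        vals = gamma(nu + 1.0) * tx ** (-nu / 2.0) * jv(nu, 2.0 * np.sqrt(tx))
    vals = np.where(tx == 0.0, 1.0, vals)   # limiting value 1 at t = 0, as in part (ii)
    return vals.mean(axis=1)

rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=1.0, size=500)
h = empirical_hankel(sample, nu=1.0, t=np.linspace(0.0, 10.0, 101))
assert np.all(np.abs(h) <= 1.0 + 1e-12)   # part (i)
assert abs(h[0] - 1.0) < 1e-12            # part (ii)
```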

The following Hankel transform inversion theorem is a classical result that can be obtained from many sources, e.g., Sneddon (1972, p. 309, Theorem 1).

Theorem 11

(Hankel Inversion) Let X be a positive, continuous random variable with probability density function f(x) and Hankel transform \({\mathcal {H}}_{X, \nu }\). For \(x > 0\),

$$\begin{aligned} f(x) = \frac{1}{\varGamma (\nu +1)} \int ^\infty _0 (tx)^{\nu /2} J_{\nu }(2\sqrt{tx}) \ {\mathcal {H}}_{X,\nu }(t) \, {\mathrm{d}}t. \end{aligned}$$

As a consequence of the inversion formula, we obtain the uniqueness of the Hankel transform.

Theorem 12

(Hankel Uniqueness) Let X and Y be positive random variables with corresponding Hankel transforms \({\mathcal {H}}_{X, \nu }\) and \({\mathcal {H}}_{Y, \nu }\). Then \({\mathcal {H}}_{X, \nu } = {\mathcal {H}}_{Y, \nu }\) if and only if \(X {\mathop {=}\limits ^{d}} Y\).

The next result, on the continuity of the Hankel transform, is analogous to Theorem 2.3 of Baringhaus and Taherizadeh (2010). Therefore, we will omit the proof.

Theorem 13

(Hankel Continuity) Let \(\{X_{n}, n \in {\mathbb {N}}\}\) be a sequence of positive random variables with corresponding Hankel transforms \(\{{\mathcal {H}}_{n}, n \in {\mathbb {N}}\}\). If there exists a positive random variable X, with Hankel transform \({\mathcal {H}}\), such that \(X_{n} \xrightarrow {d} X\), then for all \(t \ge 0\),

$$\begin{aligned} \lim _{n \rightarrow \infty } {\mathcal {H}}_{n}(t) = {\mathcal {H}}(t) \end{aligned}$$
(9.12)

Conversely, suppose there exists \({\mathcal {H}}: [0,\infty ) \rightarrow {\mathbb {R}}\) such that \({\mathcal {H}}(0) = 1\), \({\mathcal {H}}\) is continuous at 0, and (9.12) holds. Then \({\mathcal {H}}\) is the Hankel transform of a positive random variable X, and \(X_{n} \xrightarrow {d} X\).

Appendix 2: The test statistic

Proof of Proposition 1

Squaring the integrand in (1.3) produces three terms to be calculated. First,

$$\begin{aligned} n \int ^{\infty }_0 {\mathcal {H}}^2_{n}(t) \, {\mathrm{d}}P_0(t)&= \frac{1}{n} \int ^{\infty }_0 \left( \sum _{i=1}^{n} \varGamma (\alpha )(Y_{i}t)^{(1-\alpha )/2} \, J_{\alpha -1}(2\sqrt{tY_{i}})\right) ^2 \, {\mathrm{d}}P_0(t) \\&= \frac{\varGamma (\alpha )}{n} \sum _{i=1}^{n}\sum _{j=1}^{n} (Y_{i}Y_{j})^{(1-\alpha )/2} \int ^{\infty }_0 {J_{\alpha -1}(2\sqrt{tY_{i}})J_{\alpha -1}(2\sqrt{tY_{j}})e^{-t}} \, {\mathrm{d}}t. \end{aligned}$$

Here the second equality uses \({\mathrm{d}}P_0(t) = [\varGamma (\alpha )]^{-1} \, t^{\alpha -1} e^{-t} \, {\mathrm{d}}t\), which absorbs one factor of \(\varGamma (\alpha )\) and the power \(t^{1-\alpha }\). The resulting integrals are of the form of Weber's exponential integral (Olver et al. 2010, (10.22.67)):

$$\begin{aligned} \int ^\infty _0 \, \exp (-p t) \, J_{\nu }(2\sqrt{at}) \, J_{\nu }(2\sqrt{bt}) \, {\mathrm{d}}t = p^{-1} \, \exp \big (-(a+b)/p\big ) \, I_{\nu }(2\sqrt{ab}{\big /}p),\nonumber \\ \end{aligned}$$
(10.1)

valid for \(\nu > -1\) and \(a, b, p > 0\). Simplifying the resulting expressions, we obtain

$$\begin{aligned} n\int ^{\infty }_0 {\mathcal {H}}^2_n(t) \, {\mathrm{d}}P_0(t) = \frac{\varGamma (\alpha )}{n} \sum _{i=1}^n \sum _{j=1}^n (Y_i Y_j)^{(1-\alpha )/2} \exp (-Y_i - Y_j) \, I_{\alpha -1}\big (2(Y_i Y_j)^{1/2}\big ). \end{aligned}$$

Second, by proceeding as in Example 1, it is straightforward to deduce

$$\begin{aligned} 2n \int ^{\infty }_0 {\mathcal {H}}_{n}(t) \, e^{-t/\alpha } \, {\mathrm{d}}P_0(t)&= 2\sum _{i=1}^{n} (1+\alpha ^{-1})^{-\alpha } \, e^{-\alpha Y_i/(\alpha +1)} \\&\equiv \frac{1}{n} \sum _{i=1}^{n}\sum _{j=1}^n \left( \frac{\alpha }{\alpha +1}\right) ^{\alpha } \left[ e^{-\alpha Y_{i}/(\alpha +1)}+e^{-\alpha Y_{j}/(\alpha +1)} \right] . \end{aligned}$$

Third, we have a gamma integral:

$$\begin{aligned} n \int ^{\infty }_0 e^{-2t/\alpha } \, {\mathrm{d}}P_0(t)&= n \left( \frac{\alpha }{\alpha +2}\right) ^\alpha = \frac{1}{n} \sum _{i=1}^n \sum _{j=1}^n \left( \frac{\alpha }{\alpha +2}\right) ^\alpha . \end{aligned}$$

Collecting together all three terms, we obtain the desired result. \(\square \)
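
Collecting the terms gives a closed form for \(T^2_n\) that is convenient to compute. The following minimal sketch (our illustration, not the authors' code) assumes \(Y_j = X_j/{\overline{X}}_n\), as in the proof of Theorem 2 below, and uses SciPy's exponentially scaled Bessel function \(\mathrm{ive}(\nu ,z) = e^{-z} I_\nu (z)\) for numerical stability, via \(e^{-Y_i-Y_j} \, I_{\alpha -1}(2\sqrt{Y_iY_j}) = \mathrm{ive}\big (\alpha -1, 2\sqrt{Y_iY_j}\big ) \, e^{-(\sqrt{Y_i}-\sqrt{Y_j})^2}\):

```python
import numpy as np
from scipy.special import gamma, ive

def T2(x, alpha):
    # Closed form of T_n^2 collected above, with Y_j = X_j / Xbar_n.
    y = np.asarray(x, float) / np.mean(x)
    n = len(y)
    sy = np.sqrt(y)
    # First term: (Gamma(alpha)/n) sum_{i,j} (Y_i Y_j)^{(1-alpha)/2}
    #             * e^{-Y_i - Y_j} I_{alpha-1}(2 sqrt(Y_i Y_j)).
    t1 = (gamma(alpha) / n) * np.sum(
        np.outer(y, y) ** ((1.0 - alpha) / 2.0)
        * ive(alpha - 1.0, 2.0 * np.outer(sy, sy))
        * np.exp(-np.subtract.outer(sy, sy) ** 2)
    )
    t2 = 2.0 * (alpha / (alpha + 1.0)) ** alpha * np.sum(np.exp(-alpha * y / (alpha + 1.0)))
    t3 = n * (alpha / (alpha + 2.0)) ** alpha
    return t1 - t2 + t3

rng = np.random.default_rng(1)
print(T2(rng.gamma(shape=2.0, scale=3.0, size=100), alpha=2.0))
```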

Proof of Theorem 2

By (9.6), \((s^{1-\alpha } J_{\alpha -1}(s))'= -s^{1-\alpha } J_{\alpha }(s)\). Therefore, the Taylor expansion of order 1 of the function \(s^{1-\alpha } J_{\alpha -1}(s)\), at a point \(s_0\), is

$$\begin{aligned} s^{1-\alpha } J_{\alpha -1}(s) = s_0^{1-\alpha } J_{\alpha -1}(s_0) + (s_0 -s) u^{1-\alpha }J_{\alpha }(u), \end{aligned}$$

where u lies between s and \(s_0\). Setting \(s = 2(tY_j)^{1/2}\) and \(s_0 = 2(tX_j/\alpha )^{1/2}\), we obtain

$$\begin{aligned} 2^{1-\alpha }(tY_j)^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(tY_{j})^{1/2}\big )= & {} 2^{1-\alpha } (tX_{j}/\alpha )^{(1-\alpha )/2} J_{\alpha -1}\big (2(tX_{j}/\alpha )^{1/2}\big ) \nonumber \\&\quad + 2 \big [(tX_{j}/\alpha )^{1/2} - (tY_{j})^{1/2}\big ] \, u_j^{1-\alpha } \, J_{\alpha }(u_j),\nonumber \\ \end{aligned}$$
(10.2)

where \(u_j\) lies between \(2(tY_{j})^{1/2}\) and \(2(tX_{j}/\alpha )^{1/2}\). Define

$$\begin{aligned} W_n = \alpha ^{-1/2} - {\overline{X}}_n^{-1/2} = \frac{{\overline{X}}_n-\alpha }{(\alpha {\overline{X}}_n)^{1/2} (\alpha ^{1/2}+{\overline{X}}_n^{1/2})}; \end{aligned}$$
(10.3)

then

$$\begin{aligned} (tX_{j}/\alpha )^{1/2} - (tY_{j})^{1/2} = (tX_{j}/\alpha )^{1/2} - (tX_{j}/{\overline{X}}_n)^{1/2} = W_n \, (tX_{j})^{1/2}, \end{aligned}$$

and (10.2) reduces to

$$\begin{aligned}&2^{1-\alpha }(tY_j)^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(tY_{j})^{1/2}\big ) \nonumber \\&\quad = 2^{1-\alpha }(tX_{j}/\alpha )^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(tX_{j}/\alpha )^{1/2}\big ) + 2 W_n \, (tX_{j})^{1/2} \, u_j^{1-\alpha } \, J_{\alpha }(u_j).\nonumber \\ \end{aligned}$$
(10.4)

Multiplying both sides of (10.4) by \(2^{\alpha -1}\), adding and subtracting the term

$$\begin{aligned} 2 (tX_{j})^{1/2} W_n \, (tX_{j}/\alpha )^{(1-\alpha )/2} \, J_{\alpha }\big (2(tX_{j}/\alpha )^{1/2}\big ) \end{aligned}$$

on the right-hand side, and then simplifying the result, we obtain

$$\begin{aligned} \begin{aligned}&(tY_j)^{(1-\alpha )/2} J_{\alpha -1}\big (2(tY_{j})^{1/2}\big ) \\&\quad = (tX_{j}/\alpha )^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(tX_{j}/\alpha )^{1/2}\big ) \\&\qquad + 2\alpha ^{1/2} \, W_n \, (tX_{j}/\alpha )^{1-(\alpha /2)} \, J_{\alpha }\big (2(tX_{j}/\alpha )^{1/2}\big ) \\&\qquad + 2^{\alpha } \, W_n \, (tX_{j})^{1/2} \, \Big (u_j^{1-\alpha } \, J_{\alpha }(u_j)-\big (2(tX_{j}/\alpha )^{1/2}\big )^{1-\alpha } \, J_{\alpha }\big (2(tX_{j}/\alpha )^{1/2}\big )\Big ). \end{aligned} \end{aligned}$$
(10.5)

Define the processes \(Z_{n,1}(t)\), \(Z_{n,2}(t)\), and \(Z_{n,3}(t)\), \(t \ge 0\), by

$$\begin{aligned} Z_{n,1}(t)&= \frac{1}{\sqrt{n}} \sum _{j=1}^{n} \big [\varGamma (\alpha ) \, (tX_{j}/\alpha )^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(tX_{j}/\alpha )^{1/2}\big )\\&\quad + 2\varGamma (\alpha ) \alpha ^{1/2} \, W_n \, (tX_{j}/\alpha )^{1-(\alpha /2)} \, J_{\alpha }\big (2(tX_{j}/\alpha )^{1/2}\big )-e^{-t/\alpha } \big ], \\ Z_{n,2}(t)&= \frac{1}{\sqrt{n}}\sum _{j=1}^{n} \big [\varGamma (\alpha ) \, (tX_{j}/\alpha )^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(tX_{j}/\alpha )^{1/2}\big ) \\&\quad + 2 \alpha ^{-1/2} \, W_n \, t e^{-t/\alpha } - e^{-t/\alpha } \big ], \\ Z_{n,3}(t)&= \frac{1}{\sqrt{n}}\sum _{j=1}^{n} \big [\varGamma (\alpha ) \, (tX_{j}/\alpha )^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(tX_{j}/\alpha )^{1/2}\big ) \\&\quad + \alpha ^{-2} (X_j -\alpha ) \, te^{-t/\alpha } - e^{-t/\alpha }\big ]. \end{aligned}$$

We will show that

$$\begin{aligned}&Z_{n,3} \xrightarrow {d} Z \ \ \text {in }L^2, \end{aligned}$$
(10.6)
$$\begin{aligned}&||Z_{n}-Z_{n,1} ||_{L^2} \xrightarrow {p} 0, \end{aligned}$$
(10.7)
$$\begin{aligned}&||Z_{n,1}-Z_{n,2} ||_{L^2} \xrightarrow {p} 0, \end{aligned}$$
(10.8)
$$\begin{aligned}&||Z_{n,2}-Z_{n,3} ||_{L^2} \xrightarrow {p} 0. \end{aligned}$$
(10.9)

To establish (10.6), let

$$\begin{aligned} Z_{n,3,j}(t):= & {} \varGamma (\alpha ) \, (tX_{j}/\alpha )^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(tX_{j}/\alpha )^{1/2}\big ) \nonumber \\&+ \alpha ^{-2} (X_j -\alpha ) te^{-t/\alpha } - e^{-t/\alpha }, \end{aligned}$$
(10.10)

\(t \ge 0\), \(j=1,\dotsc ,n\). Since \(X_j \sim Gamma(\alpha ,1)\), we have \(E(X_j-\alpha )=0\); also, by Example 1,

$$\begin{aligned} E \big [\varGamma (\alpha )(tX_{j}/\alpha )^{(1-\alpha )/2}J_{\alpha -1} \big (2(tX_{j}/\alpha )^{1/2}\big ) \big ] = e^{-t/\alpha }. \end{aligned}$$

Therefore \(E(Z_{n,3,j}(t))=0\) for \(t \ge 0\) and \(j=1,\dotsc ,n\), and \(Z_{n,3,1},\dotsc ,Z_{n,3,n}\) are clearly i.i.d. random elements in \(L^2\). Applying the Cauchy–Schwarz inequality and (9.4), we obtain \(E(||Z_{n,3,1} ||^2_{L^2}) < \infty \). Thus, by the Central Limit Theorem in \(L^2\) (Ledoux and Talagrand 1991, p. 281),

$$\begin{aligned} Z_{n,3} = \frac{1}{\sqrt{n}}\sum _{j=1}^{n} Z_{n,3,j} \xrightarrow {d} Z, \end{aligned}$$

where \(Z:=(Z(t),t \ge 0)\) is a centered Gaussian random element in \(L^2\). This proves (10.6) and shows that Z has the same covariance operator as \(Z_{n,3,1}\).

It is well-known that the covariance operator of the random element \(Z_{n,3,1}\) is uniquely determined by the covariance function of the stochastic process \(Z_{n,3,1}(t)\) (Gīkhman and Skorokhod 1980, pp. 218–219). We now show that the function \(K(s,t)\) in (3.4) is the covariance function of \(Z_{n,3,1}\). Noting that \(E[Z_{n,3,1}(t)] = 0\) for all \(t\), we obtain

$$\begin{aligned} K(s,t)&= {\mathrm{Cov}}\big [Z_{n,3,1}(s),Z_{n,3,1}(t)\big ] \\&= {\mathrm{Cov}}\big [Z_{n,3,1}(s)+e^{-s/\alpha },Z_{n,3,1}(t)+e^{-t/\alpha }\big ] \\&= E \big [\big (Z_{n,3,1}(s)+e^{-s/\alpha }\big ) \big (Z_{n,3,1}(t) + e^{-t/\alpha }\big )\big ] - e^{-(s+t)/\alpha }. \end{aligned}$$

By (10.10),

$$\begin{aligned}&E \big (Z_{n,3,1}(s) + e^{-s/\alpha }\big ) \big (Z_{n,3,1}(t) + e^{-t/\alpha }\big ) \nonumber \\&\quad = E \big [\varGamma (\alpha ) \, (sX_1/\alpha )^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(sX_1/\alpha )^{1/2}\big ) + \alpha ^{-2} (X_1-\alpha ) se^{-s/\alpha }\big ] \nonumber \\&\qquad \times \big [\varGamma (\alpha ) \, (tX_1/\alpha )^{(1-\alpha )/2} \, J_{\alpha -1}\big (2(tX_1/\alpha )^{1/2}\big ) + \alpha ^{-2} (X_1-\alpha ) te^{-t/\alpha }\big ],\nonumber \\ \end{aligned}$$
(10.11)

so the calculation of \(K(s,t)\) reduces to evaluating the four terms obtained by expanding the product on the right-hand side of (10.11).

The first term in the product in (10.11) is evaluated using Weber’s integral (10.1):

$$\begin{aligned}&E \, [\varGamma (\alpha )]^2 (sX_{1}/\alpha )^{(1-\alpha )/2}(tX_{1}/\alpha )^{(1-\alpha )/2}J_{\alpha -1}(2(sX_{1}/\alpha )^{1/2})J_{\alpha -1}(2(tX_{1}/\alpha )^{1/2}) \nonumber \\&\quad = \varGamma (\alpha )(st/\alpha ^2)^{(1-\alpha )/2}e^{-(s+t)/\alpha }I_{\alpha -1}(2\sqrt{st}/\alpha ). \end{aligned}$$
(10.12)

The second term in the product in (10.11) is a Hankel transform of the type in Example 1,

$$\begin{aligned}&E\big [ \varGamma (\alpha ) (sX_{1}/\alpha )^{(1-\alpha )/2}J_{\alpha -1}(2(sX_{1}/\alpha )^{1/2}) \, \alpha ^{-2} (X_{1}-\alpha ) te^{-t/\alpha } \big ] \\&\quad = - \alpha ^{-3}s t \exp \big (-(s+t)/\alpha \big ), \end{aligned}$$

and the third term in the product is the same as the second term but with s and t interchanged.

The fourth term in the product in (10.11) is

$$\begin{aligned} E\big [\alpha ^{-4} (X_{1}-\alpha )^2 ste^{-(s+t)/\alpha }\big ] = \alpha ^{-4} s t e^{-(s+t)/\alpha } \mathrm{Var}\,(X_1) = \alpha ^{-3} ste^{-(s+t)/\alpha }. \end{aligned}$$

Combining all four terms, we obtain (3.4).
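
Explicitly, combining the four terms of (10.11) and then subtracting \(e^{-(s+t)/\alpha }\) yields

$$\begin{aligned} K(s,t) = \varGamma (\alpha ) \Big (\frac{st}{\alpha ^2}\Big )^{(1-\alpha )/2} e^{-(s+t)/\alpha } \, I_{\alpha -1}\big (2\sqrt{st}/\alpha \big ) - \big (1 + \alpha ^{-3} st\big ) \, e^{-(s+t)/\alpha }, \end{aligned}$$

where the first summand is the kernel \(K_0(s,t)\) from (10.12); this is the decomposition \(K = K_0 + k_0 + k_1\) used in the proof of Proposition 3 below.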

To establish (10.7), we begin by showing that

$$\begin{aligned} (\sqrt{n} W_n)^2 = \bigg ( \frac{\sqrt{n} ({\overline{X}}_n-\alpha )}{(\alpha {\overline{X}}_n)^{1/2} (\alpha ^{1/2}+{\overline{X}}_n^{1/2})} \bigg )^2 \xrightarrow {d} \chi ^2_1/4\alpha ^2, \end{aligned}$$

where \(\chi _1^2\) denotes a chi-square random variable with one degree of freedom. By the Central Limit Theorem, \( \sqrt{n}({\overline{X}}_{n}-\alpha ) \xrightarrow {d} {\mathcal {N}}(0,\alpha ), \) and by the Law of Large Numbers and the Continuous Mapping Theorem, \( (\alpha {\overline{X}}_n)^{1/2} (\alpha ^{1/2}+{\overline{X}}_n^{1/2}) \xrightarrow {p} 2\alpha ^{3/2}. \) By Slutsky’s theorem (Chow and Teicher 1988, p. 249), \(\sqrt{n} W_n \xrightarrow {d} {\mathcal {N}}(0,\tfrac{1}{4} \alpha ^{-2})\), hence \((\sqrt{n} W_n)^2 \xrightarrow {d} \chi ^2_1/4\alpha ^2\).

By the Taylor expansion in (10.5),

$$\begin{aligned} Z_{n}-Z_{n,1}&=\frac{\varGamma (\alpha )}{\sqrt{n}}\sum _{j=1}^{n}\Big [ (tY_{j})^{(1-\alpha )/2}J_{\alpha -1}(2(tY_{j})^{1/2})\\&\quad -(tX_{j}/\alpha )^{(1-\alpha )/2}J_{\alpha -1}(2(tX_{j}/\alpha )^{1/2})\\&\quad -2 \alpha ^{1/2} \, W_n \, (tX_{j}/\alpha )^{1-(\alpha /2)} \, J_{\alpha }\big (2(tX_{j}/\alpha )^{1/2}\big )\Big ]\\&=\frac{2^{\alpha } \varGamma (\alpha )}{n} (\sqrt{n} W_n) \ \sum _{j=1}^{n} (tX_{j})^{1/2} \Big [u_j^{1-\alpha } \, J_{\alpha }(u_j)\\&\quad -\big (2(tX_{j}/\alpha )^{1/2}\big )^{1-\alpha } \, J_{\alpha }\big (2(tX_{j}/\alpha )^{1/2}\big )\Big ]. \end{aligned}$$

Define

$$\begin{aligned} V_n:= & {} \frac{1}{n^2} \int ^\infty _0 \Big [\sum _{j=1}^{n} (tX_{j})^{1/2} \, \Big (u_j^{1-\alpha } \, J_{\alpha }(u_j)\\&-\big (2(tX_{j}/\alpha )^{1/2}\big )^{1-\alpha } \, J_{\alpha }\big (2(tX_{j}/\alpha )^{1/2}\big )\Big ) \Big ]^2 \, {\mathrm{d}}P_0(t). \end{aligned}$$

Then, \( ||Z_{n}-Z_{n,1} ||^2_{L^2} = 4^{\alpha } [\varGamma (\alpha )]^2 (\sqrt{n} W_n)^2 \, V_n. \) By the Cauchy–Schwarz inequality,

$$\begin{aligned} V_n \le \frac{1}{n} \int ^\infty _0 { t \sum _{j=1}^{n} X_{j} \ \big | u_j^{1-\alpha } \, J_{\alpha }(u_j)-\big (2(tX_{j}/\alpha )^{1/2}\big )^{1-\alpha } \, J_{\alpha }\big (2(tX_{j}/\alpha )^{1/2}\big ) \big | ^2}\, {\mathrm{d}}P_0(t). \end{aligned}$$

Recall that \(u_j\) lies between \(2(tY_{j})^{1/2}\) and \(2(tX_{j}/\alpha )^{1/2}\), so we can write

$$\begin{aligned} u_j&= 2 (1-\theta _{n,j,t}) (tX_{j}/\alpha )^{1/2} + 2 \theta _{n,j,t} (tY_{j})^{1/2} \\&= 2(tX_{j})^{1/2} \big ( \alpha ^{-1/2}+\theta _{n,j,t}({\overline{X}}_n^{-1/2}-\alpha ^{-1/2})\big ), \end{aligned}$$

where \(\theta _{n,j,t} \in [0,1]\). By Lemma 4, the Lipschitz property of the Bessel functions,

$$\begin{aligned} 4^{\alpha } [\varGamma (\alpha +1)]^2 \, \big | u_j^{1-\alpha } \, J_{\alpha }(u_j) - \big (2(tX_j/\alpha )^{1/2}\big )^{1-\alpha } \, J_{\alpha }&\big (2(tX_{j}/\alpha )^{1/2}\big ) \big |^2 \\&\le \big |u_j- 2 (tX_j/\alpha )^{1/2} \big |^2 \\&= \big | 2(tX_{j})^{1/2} \theta _{n,j,t}({\overline{X}}_n^{-1/2}-\alpha ^{-1/2})\big |^2 \\&\le 4tX_{j} \, ({\overline{X}}_n^{-1/2}-\alpha ^{-1/2})^2 , \end{aligned}$$

since \(\theta _{n,j,t} \in [0,1]\). Therefore,

$$\begin{aligned} V_n \le \frac{1}{4^{\alpha -1} [\varGamma (\alpha +1)]^2} \Big (\frac{1}{n} \sum _{j=1}^n X^2_j \Big ) ({\overline{X}}_n^{-1/2}-\alpha ^{-1/2})^2 \int ^{\infty }_0 {t^2}\, {\mathrm{d}}P_0(t). \end{aligned}$$

By the Law of Large Numbers, \(({\overline{X}}_n^{-1/2}-\alpha ^{-1/2})^2 \xrightarrow {p} 0\) and \( n^{-1} \sum _{j=1}^n X^2_j \xrightarrow {p} E(X^2_1)=\alpha (\alpha +1), \) so it follows that \(V_{n} \xrightarrow {p} 0\). By Slutsky’s theorem, \( ||Z_{n}-Z_{n,1} ||^2_{L^2} = 4^{\alpha } [\varGamma (\alpha )]^2 (\sqrt{n} W_n)^2 \cdot V_n \xrightarrow {d} 0, \) therefore \(||Z_{n}-Z_{n,1} ||_{L^2} \xrightarrow {p} 0\), as asserted in (10.7).

To establish (10.8), define

$$\begin{aligned} \varDelta _j(t):= \varGamma (\alpha )(tX_{j}/\alpha )^{1-(\alpha /2)} J_{\alpha }\big ( 2(tX_{j}/\alpha )^{1/2} \big ) - \alpha ^{-1} te^{-t/\alpha }, \end{aligned}$$

\(t \ge 0\), \(j=1,\dotsc ,n\). Then it is straightforward to verify that

$$\begin{aligned} Z_{n,1}-Z_{n,2} = \frac{2\alpha ^{1/2}}{\sqrt{n}} \, W_n \,\sum _{j=1}^{n} \varDelta _j(t) \end{aligned}$$

and therefore

$$\begin{aligned} ||Z_{n,1} - Z_{n,2} ||^2_{L^2} = (2\alpha ^{1/2} W_n)^2 \, \int ^\infty _0 \Big [ \frac{1}{\sqrt{n}} \sum _{j=1}^n \varDelta _j(t) \Big ]^2 \, {\mathrm{d}}P_0(t). \end{aligned}$$
(10.13)

By the Law of Large Numbers, \(W_n \xrightarrow {p} 0\). Also, as shown in Example 3,

$$\begin{aligned} E \big [ \varGamma (\alpha )(tX_{j}/\alpha )^{1-(\alpha /2)} J_{\alpha }\big ( 2(tX_{j}/\alpha )^{1/2} \big ) \big ] = \alpha ^{-1} t e^{-t/\alpha }; \end{aligned}$$

hence \(E(\varDelta _j(t)) = 0\), \(t \ge 0\), \(j=1,\dotsc ,n\). Also, \(\varDelta _1(t),\dotsc ,\varDelta _n(t)\) are i.i.d. random elements in \(L^2\). We now show that \(E(||\varDelta _1 ||^2_{L^2}) < \infty \). We have

$$\begin{aligned} E(||\varDelta _1 ||^2_{L^2})&=E\int ^\infty _0 {\varDelta _1^2(t)}\, {\mathrm{d}}P_0(t) \\&=E\int ^\infty _0 {\big [ \varGamma (\alpha )(tX_{1}/\alpha )^{1-(\alpha /2)} J_{\alpha }\big ( 2(tX_{1}/\alpha )^{1/2} \big ) - \alpha ^{-1} te^{-t/\alpha } \big ]^2}\, {\mathrm{d}}P_0(t). \end{aligned}$$

To show that \(E(||\varDelta _1 ||^2_{L^2}) < \infty \) it suffices, by the Cauchy–Schwarz inequality, to prove that

$$\begin{aligned} E \int ^\infty _0 {\big [ \varGamma (\alpha ) (tX_1/\alpha )^{1-(\alpha /2)} \, J_{\alpha } \big ( 2(tX_1/\alpha )^{1/2} \big ) \big ]^2}\, {\mathrm{d}}P_0(t) \ < \ \infty \end{aligned}$$
(10.14)

and

$$\begin{aligned} E \int ^\infty _0 (\alpha ^{-1} te^{-t/\alpha })^2 \, {\mathrm{d}}P_0(t) \ < \ \infty . \end{aligned}$$
(10.15)

To establish (10.14), we apply the inequality (9.5) to obtain

$$\begin{aligned} |J_{\alpha } \big ( 2(tX_1/\alpha )^{1/2} \big )| \le (tX_1/\alpha )^{-(1-\alpha )/2}/\pi ^{1/2} \varGamma (\alpha +\tfrac{1}{2}), \end{aligned}$$

for \(t \ge 0\). Therefore,

$$\begin{aligned}&E \int ^\infty _0 \big [ \varGamma (\alpha )(tX_1/\alpha )^{1-(\alpha /2)} \, J_{\alpha } \big ( 2(tX_1/\alpha )^{1/2} \big ) \big ]^2 \, {\mathrm{d}}P_0(t) \\&\quad \le \Big ( \frac{\varGamma (\alpha )}{\pi ^{1/2} \varGamma (\alpha +\frac{1}{2})} \Big )^2 \ E(X_1/\alpha ) \ \int ^\infty _0 {t}\, {\mathrm{d}}P_0(t) < \infty . \end{aligned}$$

As for (10.15), that expectation is a convergent gamma integral. Hence, \(E(||\varDelta _1 ||^2_{L^2}) < \infty \).

By the Central Limit Theorem in \(L^2\), \(n^{-1/2} \sum _{j=1}^{n} \varDelta _j(t)\) converges to a centered Gaussian random element in \(L^2\). Thus, by the Continuous Mapping Theorem,

$$\begin{aligned} \Big ||\frac{1}{\sqrt{n}} \sum _{j=1}^{n} \varDelta _j(t)\Big ||^2_{L^2} := \int ^\infty _0 \Big [ \frac{1}{\sqrt{n}} \sum _{j=1}^n \varDelta _j(t) \Big ]^2 \, {\mathrm{d}}P_0(t) \end{aligned}$$

converges in distribution to a random variable with finite variance. Since \(W_n \xrightarrow {p} 0\), by (10.13) and Slutsky's theorem we obtain \(||Z_{n,1}-Z_{n,2} ||^2_{L^2} \xrightarrow {d} 0\); therefore, \(||Z_{n,1}-Z_{n,2} ||_{L^2} \xrightarrow {p} 0\).

To prove (10.9), we observe that

$$\begin{aligned} Z_{n,2} - Z_{n,3}&= \frac{1}{\sqrt{n}} \sum _{j=1}^{n} \Big ( 2\alpha ^{-1/2} W_n te^{-t/\alpha }- \alpha ^{-2} (X_{j}-\alpha ) te^{-t/\alpha } \Big )\\&= te^{-t/\alpha } \, \sqrt{n}({\overline{X}}_n-\alpha ) R_n, \end{aligned}$$

where

$$\begin{aligned} R_n = \frac{2}{\alpha {\overline{X}}_n^{1/2}(\alpha ^{1/2}+{\overline{X}}_n^{1/2})}-\frac{1}{\alpha ^2}. \end{aligned}$$

Therefore,

$$\begin{aligned} ||Z_{n,2}-Z_{n,3} ||^2_{L^2}&= \big [ \sqrt{n}({\overline{X}}_n-\alpha ) R_n \big ]^2 \ \int ^\infty _0 {(te^{-t/\alpha })^2}\, {\mathrm{d}}P_0(t). \end{aligned}$$

As noted earlier, \(\int ^\infty _0 (te^{-t/\alpha })^2\, {\mathrm{d}}P_0(t) < \infty \). Also, by the Central Limit Theorem, \(\sqrt{n}({\overline{X}}_n-\alpha ) \xrightarrow {d} {\mathcal {N}}(0,\alpha )\); and by the Law of Large Numbers, \(R_n \xrightarrow {p} 0\). By Slutsky’s theorem, \(\big [ \sqrt{n}({\overline{X}}_n-\alpha ) R_n \big ]^2 \xrightarrow {d} 0\); hence \(\big [ \sqrt{n}({\overline{X}}_n-\alpha ) R_n \big ]^2 \xrightarrow {p} 0\), and therefore \(||Z_{n,2}-Z_{n,3} ||_{L^2} \xrightarrow {p} 0\).

Finally, by the Continuous Mapping Theorem in \(L^2\), \(||Z_{n} ||^2_{L^2} \xrightarrow {d} ||Z ||^2_{L^2}\), i.e.

$$\begin{aligned} T^2_{n}= \int ^\infty _0 {Z^2_{n}(t)}\, {\mathrm{d}}P_0(t) \xrightarrow {d} \int ^\infty _0 {Z^2(t)}\, {\mathrm{d}}P_0(t) . \end{aligned}$$

The proof is now complete. \(\square \)
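
In practice, critical values of \(T^2_n\) can also be approximated directly by Monte Carlo under the null hypothesis (a quick sketch, not the procedure used in the paper; it reuses the function T2 from the sketch following Proposition 1, and relies on the fact that \(Y_j = X_j/{\overline{X}}_n\) makes \(T^2_n\) invariant to the rate parameter):

```python
import numpy as np

def critical_value(alpha, n, level=0.05, reps=20000, seed=123):
    # Approximate the upper critical value of T_n^2 under H0 by Monte Carlo,
    # drawing samples from Gamma(alpha, 1) and using the function T2 above.
    rng = np.random.default_rng(seed)
    stats = [T2(rng.gamma(shape=alpha, scale=1.0, size=n), alpha)
             for _ in range(reps)]
    return np.quantile(stats, 1.0 - level)

print(critical_value(alpha=2.0, n=50))
```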

Appendix 3: Eigenvalues and eigenfunctions of the covariance operator

Proof of Theorem 5

Since the set \(\{{\mathcal {L}}_k^{(\alpha -1)}, k \in {\mathbb {N}}_0\}\) of normalized Laguerre polynomials (9.11) is an orthonormal basis for \(L^2\), the eigenfunction \(\phi \in L^2\) corresponding to an eigenvalue \(\delta \) can be written as \(\phi = \sum _{k=0}^{\infty } \langle \phi , {\mathcal {L}}_k^{(\alpha -1)}\rangle \, {\mathcal {L}}_k^{(\alpha -1)}\).

We restrict ourselves temporarily to eigenfunctions for which this series is pointwise convergent. Substituting this series into the equation \({\mathcal {S}} \phi =\delta \phi \), we obtain

$$\begin{aligned} \int ^\infty _0 K(s,t) \, \phi (t) \, {\mathrm{d}}P_0(t) = \delta \sum _{k=0}^{\infty } \langle \phi , {\mathcal {L}}_k^{(\alpha -1)}\rangle \, {\mathcal {L}}_k^{(\alpha -1)}(s). \end{aligned}$$
(11.1)

Substituting the covariance function K(st) in the left-hand side of (11.1), writing K in terms of \(K_0\), and assuming that we can interchange the order of integration and summation, we obtain

(11.2)

By Theorem 3,

On writing \({\mathcal {L}}_k^{(\alpha -1)}\) in terms of \(L_k^{(\alpha -1)}\), the generalized Laguerre polynomial, applying the well-known formula (Olver et al. 2010, (18.17.34)) for the Laplace transform of \(L_k^{(\alpha -1)}\), and making use of (4.2) and (4.3), we obtain

(11.3)

Again writing \({\mathcal {L}}_k^{(\alpha -1)}\) in terms of \(L_k^{(\alpha -1)}\), applying Lemma 5, and making use of (4.2) and (4.3), we obtain

(11.4)

In summary, (11.2) reduces to

(11.5)

By applying (11.3), we also obtain the Fourier–Laguerre expansion of \(e^{-s/\alpha }\) with respect to the orthonormal basis \(\{{\mathcal {L}}_k^{(\alpha -1)}, k \in {\mathbb {N}}_0\}\); indeed,

Similarly, by applying (11.4), we have

Let

(11.6)

and

(11.7)

Combining (11.5)–(11.7), we find that (11.1) reduces to

(11.8)

and now comparing the coefficients of \({\mathcal {L}}_k^{(\alpha -1)}\), we obtain

(11.9)

for all \(k \in {\mathbb {N}}_0\). Since we have assumed that \(\delta \ne \rho _k\) for any \(k\), we can solve this equation for \(\langle \phi , {\mathcal {L}}_k^{(\alpha -1)}\rangle \) to obtain

(11.10)

Substituting (11.10) into (11.6), we get

$$\begin{aligned} c_1&= c_1 \beta ^{\alpha }\sum _{k=0}^{\infty } \frac{(\alpha )_k}{k! (\rho _k-\delta )}\rho _k^2 + c_2 \alpha ^{-1} \beta ^{\alpha }\sum _{k=0}^{\infty } \frac{(\alpha )_k}{k! (\rho _k-\delta )}\rho _k^2(b_{\alpha }^2-k\beta ) \\&= c_1 \big (1-A(\delta )\big ) + c_2 \alpha ^{-3} D(\delta ); \end{aligned}$$

therefore,

$$\begin{aligned} \alpha ^{3} c_1 A(\delta ) = c_2 D(\delta ). \end{aligned}$$
(11.11)

Similarly, by substituting (11.10) into (11.7), we obtain

$$\begin{aligned} c_2&= c_1 \alpha ^2 \beta ^{\alpha }\sum _{k=0}^{\infty } \frac{(\alpha )_k}{k! (\rho _k-\delta )}\rho _k^2(b_{\alpha }^2-k\beta ) + c_2 \alpha \beta ^{\alpha } \sum _{k=0}^{\infty } \frac{(\alpha )_k}{k! (\rho _k-\delta )}\rho _k^2 (b_{\alpha }^2-k\beta )^2 \\&= c_1 D(\delta ) + c_2 \big (1-B(\delta )\big ); \end{aligned}$$

hence,

$$\begin{aligned} c_2B(\delta ) = c_1D(\delta ). \end{aligned}$$
(11.12)

Suppose \(c_1=c_2=0\); then it follows from (11.10) that \(\langle \phi , {\mathcal {L}}_k^{(\alpha -1)}\rangle = 0\) for all \(k\), and so \(\phi =0\), which is a contradiction since \(\phi \) is a non-trivial eigenfunction. Hence, \(c_1\) and \(c_2\) cannot both equal 0. Combining (11.11) and (11.12), and using the fact that \(c_1\) and \(c_2\) are not both 0, it is straightforward to deduce that \(\alpha ^3 A(\delta ) B(\delta )= D^2(\delta )\). Therefore, if \(\delta \) is a positive eigenvalue of \({\mathcal {S}}\) then it is a positive root of the function \(G(\delta ) = \alpha ^3 A(\delta ) B(\delta )-D^2(\delta )\).

Conversely, suppose that \(\delta \) is a positive root of \(G(\delta )\) with \(\delta \ne \rho _k\) for any \(k \in {\mathbb {N}}_0\). Define

$$\begin{aligned} \gamma _k := \beta ^{\alpha /2} \Big (\frac{(\alpha )_k}{k!}\Big )^{1/2} \frac{\rho _k }{\rho _k-\delta } \big (c_1+c_2 \alpha ^{-1}(b_{\alpha }^2-k\beta )\big ), \end{aligned}$$
(11.13)

\(k \in {\mathbb {N}}_0\), where \(c_1\) and \(c_2\) are real constants that are not both equal to 0 and which satisfy (11.11) and (11.12). That such constants exist can be shown by following a case-by-case argument similar to Taherizadeh (2009, p. 48); for example, if \(D(\delta ) \ne 0\), \(A(\delta ) \ne 0\), and \(B(\delta ) \ne 0\), then we can choose \(c_2\) to be any non-zero number and then set \(c_1 = c_2 B(\delta )/D(\delta )\).

Define

$$\begin{aligned} {{\widetilde{\phi }}}(s) := \sum _{k=0}^{\infty } \gamma _k \, {\mathcal {L}}_k^{(\alpha -1)}(s), \end{aligned}$$
(11.14)

\(s \ge 0\). By applying the ratio test, we find that \(\sum _{k=0}^{\infty } \gamma _k^2 < \infty \); therefore, \({{\widetilde{\phi }}} \in L^2\).

To show also that (11.14) converges pointwise, we apply (9.11), (4.4), and a Laguerre polynomial inequality (Erdélyi et al. 1953, p. 207) to obtain

(11.15)

for \(s \ge 0\). Thus, to establish that (11.14) converges pointwise, we need to show that

$$\begin{aligned} \sum _{k=0}^{\infty }\Big ( \frac{(\alpha )_k}{k!} \Big )^{1/2} |\gamma _k|< \infty \quad \text {and}\quad \sum _{k=0}^{\infty } \Big ( \frac{k!}{(\alpha )_k} \Big )^{1/2} |\gamma _k| < \infty . \end{aligned}$$
(11.16)

However, the convergence of each of these series follows from the ratio test.

Next, we justify the interchange of summation and integration in our calculations. By a corollary to Theorem 16.7 in Billingsley (1979, p. 224), we need to verify that

(11.17)

By (9.10) and (4.1),

$$\begin{aligned} 0 \le K_0(s,t) \le \exp (-(s+t)/\alpha ) \exp (2\sqrt{st}/\alpha ) = \exp (-(\sqrt{s} - \sqrt{t})^2/\alpha ) \le 1.\nonumber \\ \end{aligned}$$
(11.18)

By the triangle inequality and by (11.18), we have

$$\begin{aligned} 0 \le K(s,t)&\le K_0(s,t)+ (\alpha ^{-3}st+ 1) \exp (-(s+t)/\alpha ) \le 2 + \alpha ^{-3}st, \end{aligned}$$

\(s, t \ge 0\). Thus, to prove (11.17), we need to establish that

By applying the bound (11.15), we see that it suffices to prove that

$$\begin{aligned} \sum _{k=0}^{\infty }\Big ( \frac{(\alpha )_k}{k!} \Big )^{1/2} |\gamma _k | \ \int ^\infty _0 t^j \, {\mathrm{d}}P_0(t) < \infty \end{aligned}$$

and

$$\begin{aligned} \sum _{k=0}^{\infty } \Big ( \frac{k!}{(\alpha )_k} \Big )^{1/2} |\gamma _k | \ \int ^\infty _0 t^j \, {\mathrm{d}}P_0(t) < \infty , \end{aligned}$$

\(j=0,1\). As these integrals are finite, the convergence of both series follows from (11.16).

To calculate \({\mathcal {S}} {{\widetilde{\phi }}} (s)\) from (11.14), we follow the same steps as before to obtain

By the definition (11.13) of \(\gamma _k\), and noting that

$$\begin{aligned} \frac{\rho _k}{\rho _k-\delta } -1=\frac{\delta }{\rho _k-\delta }, \end{aligned}$$

we have

Therefore, \(\delta \) is an eigenvalue of \({\mathcal {S}}\) with corresponding eigenfunction \({{\widetilde{\phi }}}\). \(\square \)

A proof that Conjecture 2 implies Conjecture 1. Suppose there exists \(l \in {\mathbb {N}}_0\) such that \(\delta = \rho _l\). Substituting \(k = l\) in (11.9) and simplifying the outcome, we obtain

$$\begin{aligned} c_1 = c_2 \alpha ^{-1} (l\beta -b_{\alpha }^2). \end{aligned}$$
(11.19)

Substituting \(\delta =\rho _l\) in (11.8), applying (11.19), and cancelling common terms in (11.8), we obtain

(11.20)

for \(k \ne l\). Substituting this result for the inner product into (11.6), we obtain

Similarly, substituting (11.20) into (11.7), we obtain

On simplifying the above expressions and substituting for \(c_1\) from (11.19), we obtain

(11.21)

and

(11.22)

Suppose that \(c_2 = 0\); then it follows from (11.19) that \(c_1 = 0\), which contradicts the earlier observation that \(c_1\) and \(c_2\) are not both zero; therefore, \(c_2 \ne 0\). Also, by (4.2), \(b_\alpha ^2< 1 < \beta \), so \(b_\alpha ^2 - k\beta \ne 0\) for all \(k \in {\mathbb {N}}_0\). Solving (11.21) and (11.22) for the inner product and equating the two expressions, we obtain

$$\begin{aligned}&1-\alpha \beta ^{\alpha +1} \sum _{\begin{array}{c} k=0 \\ k \ne l \end{array}}^\infty \frac{(\alpha )_k}{k!} \frac{l - k}{\rho _k - \rho _l} \rho _k^2 (b_\alpha ^2 - k\beta ) \\&\quad = \alpha (b_{\alpha }^2-l\beta )\left[ (l\beta -b_{\alpha }^2)-\beta ^{\alpha +1} \sum _{\begin{array}{c} k=0 \\ k \ne l \end{array}}^\infty \frac{(\alpha )_k}{k!} \frac{l - k}{\rho _k - \rho _l} \rho _k^2 \right] . \end{aligned}$$

Simplifying the above equation, we obtain (4.5). \(\square \)

A \(C^\infty \) kernel \(K:{\mathbb {R}}^2 \rightarrow {\mathbb {R}}\) is extended totally positive (ETP) if for all \(r \ge 1\), all \(s_1 \ge \cdots \ge s_r\), all \(t_1 \ge \cdots \ge t_r\), there holds

$$\begin{aligned} \frac{\det \big (K(s_i,t_j)\big )}{\prod _{1 \le i< j \le r} (s_i - s_j)(t_i - t_j)} > 0, \end{aligned}$$
(11.23)

where instances of equality for the variables \(s_i\) and \(t_j\) are to be understood as limiting cases, and then L’Hospital’s rule is to be used to evaluate this ratio.

Proof of Proposition 2

By (3.4), the kernel K(st) is of the form

$$\begin{aligned} K(s,t) = e^{-(s+t)/\alpha } s^2 t^2 \sum _{k=0}^\infty c_k s^k t^k, \end{aligned}$$

where the coefficients \(c_k\) are positive for all \(k = 0,1,2,\ldots \). Therefore,

$$\begin{aligned} \det \big (K(s_i,t_j)\big )&= \det \Big (e^{-(s_i+t_j)/\alpha } s_i^2 t_j^2 \sum _{k=0}^\infty c_k s_i^k t_j^k\Big ) \\&= \Big (\prod _{i=1}^r e^{-(s_i+t_i)/\alpha } s_i^2 t_i^2\Big ) \cdot \det \Big (\sum _{k=0}^\infty c_k s_i^k t_j^k\Big ). \end{aligned}$$

By Karlin (1964, p. 101), the series \(\sum _{k=0}^\infty c_k s^k t^k\) is ETP; hence, by (11.23), \(K(s,t)\) is ETP.

In the case of \(K_0\), we have

$$\begin{aligned} K_0(s,t) = e^{-(s+t)/\alpha } \sum _{k=0}^\infty c_k s^k t^k, \end{aligned}$$

where \(c_k > 0\) for all k. Then it follows by a similar argument that \(K_0(s,t)\) is ETP.

By a result of Karlin (1964), the eigenvalues of an integral operator are simple and positive if the kernel of the operator is ETP. Therefore, the eigenvalues of \({\mathcal {S}}\) and \({\mathcal {S}}_0\) are simple and positive. In particular, 0 is not an eigenvalue of \({\mathcal {S}}\) or \({\mathcal {S}}_0\), so both operators are injective. Also, the oscillation property (4.8) follows from Karlin (1964, Theorem 3). \(\square \)
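
As a numerical illustration of the ETP property (a sketch only; here \(K_0\) is taken in the explicit form obtained from Weber's integral in (10.12)), one can check the sign of the determinants in (11.23) at randomly chosen ordered points:

```python
import numpy as np
from scipy.special import gamma, iv

def K0(s, t, alpha):
    # Kernel of S_0 in the explicit form obtained from Weber's integral (10.12).
    return (gamma(alpha) * (s * t / alpha ** 2) ** ((1.0 - alpha) / 2.0)
            * np.exp(-(s + t) / alpha) * iv(alpha - 1.0, 2.0 * np.sqrt(s * t) / alpha))

alpha, r = 2.0, 3
rng = np.random.default_rng(2)
for _ in range(100):
    s = np.sort(rng.uniform(0.1, 5.0, size=r))[::-1]   # s_1 >= ... >= s_r
    t = np.sort(rng.uniform(0.1, 5.0, size=r))[::-1]   # t_1 >= ... >= t_r
    M = K0(s[:, None], t[None, :], alpha)
    assert np.linalg.det(M) > 0.0   # the determinants in (11.23) are positive
```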

Proof of Proposition 3

Define the kernels \(k_0(s,t) = - e^{-(s+t)/\alpha }\) and \(k_1(s,t) = - e^{-(s+t)/\alpha } \alpha ^{-3} st\), \(s, t \ge 0\). Also, define on \(L^2\) the corresponding integral operators,

$$\begin{aligned} {\mathcal {U}}_j f(s) = \int _0^\infty k_j(s,t) f(t) {\mathrm{d}}P_0(t), \end{aligned}$$

\(j=0,1\), \(s \ge 0\). Then it follows from (3.4) that \({\mathcal {S}} = {\mathcal {S}}_0 + {\mathcal {U}}_0 + {\mathcal {U}}_1\).

Each \({\mathcal {U}}_j\) clearly is self-adjoint and of rank one, i.e., the range of \({\mathcal {U}}_j\) is a one-dimensional subspace of \(L^2\). Also, \({\mathcal {S}}_0 + {\mathcal {U}}_0\) is compact and self-adjoint and its kernel, \(K_0 + k_0\), is of the form

$$\begin{aligned} K_0(s,t) + k_0(s,t) = e^{-(s+t)/\alpha } st \sum _{j=0}^\infty c_j s^j t^j, \end{aligned}$$

where \(c_j > 0\) for all j. Arguing as in the proof of Proposition 2, we find that the eigenvalues of \({\mathcal {S}}_0 + {\mathcal {U}}_0\) are simple and positive; hence, \({\mathcal {S}}_0 + {\mathcal {U}}_0\) is injective.

Let \(\omega _k\), \(k \ge 1\), be the eigenvalues of \({\mathcal {S}}_0 + {\mathcal {U}}_0\), where \(\omega _1> \omega _2 > \cdots \). Since \({\mathcal {S}}_0\) is compact, self-adjoint, and injective, and since \({\mathcal {U}}_0\) is self-adjoint and of rank one then, by Hochstadt (1973) or Dancis and Davis (1987), the eigenvalues of \({\mathcal {S}}_0\) and \({\mathcal {S}}_0 + {\mathcal {U}}_0\) are interlaced: \(\rho _{k-1} \ge \omega _k \ge \rho _k\) for all \(k \ge 1\). Also, there is exactly one eigenvalue of \({\mathcal {S}}_0 + {\mathcal {U}}_0\) in one of the intervals \([\rho _k,\rho _{k-1})\), \((\rho _k,\rho _{k-1})\), or \((\rho _k,\rho _{k-1}]\).

Since \({\mathcal {U}}_1\) is self-adjoint and of rank one then by Hochstadt’s theorem, the eigenvalues of \({\mathcal {S}}_0 + {\mathcal {U}}_0\) and \({\mathcal {S}}_0 + {\mathcal {U}}_0 + {\mathcal {U}}_1 \equiv {\mathcal {S}}\) are interlaced: \(\omega _k \ge \delta _k \ge \omega _{k+1}\) for all \(k \ge 1\). Also, there is exactly one eigenvalue of \({\mathcal {S}}\) in one of the intervals \([\omega _{k+1},\omega _k)\), \((\omega _{k+1},\omega _k)\), or \((\omega _{k+1},\omega _k]\).

Combining these interlacing results, we obtain \(\rho _{k-1} \ge \delta _k \ge \rho _{k+1}\), \(k \ge 1\). Also, since \(\rho _k = \alpha ^\alpha b_\alpha ^{4k+2\alpha }\), the interlacing inequality \(\delta _k \le \rho _{k-1} = b_\alpha ^{-4} \rho _k\) yields \(\delta _k = O(b_\alpha ^{4k})\); hence \(\delta _k = O(\rho _k)\). \(\square \)

About this article

Cite this article

Hadjicosta, E., Richards, D. Integral transform methods in goodness-of-fit testing, I: the gamma distributions. Metrika 83, 733–777 (2020). https://doi.org/10.1007/s00184-019-00749-y
