Abstract
We apply the method of Hankel transforms to develop goodness-of-fit tests for gamma distributions with given shape parameters and unknown rate parameters. We derive the limiting null distribution of the test statistic as an integrated squared Gaussian process, obtain the corresponding covariance operator and the oscillation properties of its eigenfunctions, show that the eigenvalues of the operator satisfy an interlacing property, and apply the test to two data sets. We prove consistency of the test, provide numerical power comparisons with alternative tests, study the test statistic under several contiguous alternatives, and obtain its asymptotic distribution for gamma alternatives with varying rate or shape parameters and for certain contaminated gamma models. We investigate the approximate Bahadur slope of the test statistic under local alternatives, and we establish the validity of the Wieand condition, under which the approximate Bahadur and the Pitman approaches to efficiency are in accord.
References
Allen AO (1990) Probability, statistics, and queueing theory, 2nd edn. Academic Press, San Diego
Bahadur RR (1960) Stochastic comparison of tests. Ann Math Stat 31:276–295
Bahadur RR (1967) Rates of convergence of estimates and test statistics. Ann Math Stat 38:303–324
Bahadur RR (1971) Some limit theorems in statistics. SIAM, Philadelphia
Baringhaus L, Taherizadeh F (2010) Empirical Hankel transforms and their applications to goodness-of-fit tests. J Multivar Anal 101:1445–1467
Baringhaus L, Taherizadeh F (2013) A K–S type test for exponentiality based on empirical Hankel transforms. Commun Stat Theory Methods 42:3781–3792
Barlow RE, Campo R (1975) Total time on test processes and applications to failure data analysis. Reliability and fault tree analysis. SIAM, Philadelphia, pp 451–481
Baringhaus L, Ebner B, Henze N (2017) The limit distribution of weighted \(L^2\)-goodness-of-fit statistics under fixed alternatives, with applications. Ann Inst Stat Math 69:969–995
Bauer H (1981) Probability theory and elements of measure theory, second English edn. Academic Press, New York
Billingsley P (1968) Convergence of probability measures. Wiley, New York
Billingsley P (1979) Probability and measure. Wiley, New York
Brislawn C (1991) Traceable integral kernels on countably generated measure spaces. Pac J Math 150:229–240
Chow YS, Teicher H (1988) Probability theory: independence, interchangeability, martingales, 2nd edn. Springer, New York
Cuparić M, Milošević B, Obradović M (2018) New \(L^2\)-type exponentiality tests. Preprint, arXiv:1809.07585
Czaplicki JM (2014) Statistics for mining engineering. CRC Press, Boca Raton
D’Agostino R, Stephens M (1986) Goodness-of-fit techniques. Marcel Dekker, New York
Dancis J, Davis C (1987) An interlacing theorem for eigenvalues of self-adjoint operators. Linear Algebra Appl 88/89:117–122
de Wet T, Randles RH (1987) On the effect of substituting parameter estimators in limiting \(\chi ^2\) \(U\) and \(V\) statistics. Ann Stat 15:398–412
Erdélyi A, Magnus W, Oberhettinger F, Tricomi FG (1953) Higher transcendental functions, vol 2. McGraw-Hill, New York
Gīkhman II, Skorokhod AV (1980) The theory of stochastic processes, vol 1. Springer, New York
Gupta RD, Richards DStP (1983) Application of results of Kotz, Johnson and Boyd to the null distribution of Wilks’ criterion. In: Sen PK (ed) Contributions to statistics: essays in honour of Johnson NL. North-Holland, Amsterdam, pp 205–210
Hadjicosta E (2019) Integral transform methods in goodness-of-fit testing. Doctoral dissertation, Pennsylvania State University, University Park
Hadjicosta E, Richards D (2018) Integral transform methods in goodness-of-fit testing, I: the gamma distributions. Preprint, arXiv:1810.07138
Henze N, Meintanis SG, Ebner B (2012) Goodness-of-fit tests for the gamma distribution based on the empirical Laplace transform. Commun Stat Theory Methods 41:1543–1556
Hochstadt H (1973) One-dimensional perturbations of compact operators. Proc Am Math Soc 37:465–467
Hogg RV, Tanis EA (2009) Probability and statistical inference, 8th edn. Pearson, Upper Saddle River
Imhof JP (1961) Computing the distribution of quadratic forms in normal variables. Biometrika 48:419–426
Johnson RA, Wichern DW (1998) Applied multivariate statistical analysis, 5th edn. Prentice-Hall, Upper Saddle River
Karlin S (1964) The existence of eigenvalues for integral operators. Trans Am Math Soc 113:1–17
Kass RE, Eden UT, Brown EN (2014) Analysis of neural data. Springer, New York
Kotz S, Johnson NL, Boyd DW (1967) Series representations of distributions of quadratic forms in normal variables. I. Central case. Ann Math Stat 38:823–837
Le Maître OP, Knio OM (2010) Spectral methods for uncertainty quantification. Springer, New York
Ledoux M, Talagrand M (1991) Probability in Banach spaces. Springer, New York
Leucht A, Neumann MH (2013) Degenerate \(U\)- and \(V\)-statistics under ergodicity: asymptotics, bootstrap and applications in statistics. Ann Inst Stat Math 65:349–386
Matsui M, Takemura A (2008) Goodness-of-fit tests for symmetric stable distributions—empirical characteristic function approach. TEST 17:546–566
Olver FWJ, Lozier DW, Boisvert RF, Clark CW (eds) (2010) NIST handbook of mathematical functions. Cambridge University Press, New York
Pettitt AN (1978) Generalized Cramér–von Mises statistics for the gamma distribution. Biometrika 65:232–235
Postan MY, Poizner MB (2013) Method of assessment of insurance expediency of quay structures’ damage risks in sea ports. In: Weintrit A, Neumann T (eds) Marine navigation and safety of sea transportation: maritime transport and shipping. CRC Press, Boca Raton, pp 123–127
R Development Core Team (2007) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria
Sneddon IN (1972) The use of integral transforms. McGraw-Hill, New York
Sturgul JR (2015) Discrete simulation and animation for mining engineers. CRC Press, Boca Raton
Sunder VS (2015) Operators on Hilbert space. Hindustan Book Agency, New Delhi
Szegö G (1967) Orthogonal polynomials, 3rd edn. American Mathematical Society, Providence, RI
Taherizadeh F (2009) Empirical Hankel transform and statistical goodness-of-fit tests for exponential distributions. PhD Thesis, University of Hannover, Hannover
Wieand HS (1976) A condition under which the Pitman and Bahadur approaches to efficiency coincide. Ann Stat 4:1003–1011
Young N (1998) An introduction to Hilbert space. Cambridge University Press, New York
Acknowledgements
We are grateful to the reviewers and the editors for helpful and constructive comments on the initial version of the manuscript.
Ethics declarations
Conflict of interest:
On behalf of both authors, the corresponding author states that there is no conflict of interest.
Additional information
This paper is dedicated to Professor Norbert Henze, on the occasion of his 67th birthday.
Appendices
Appendix 1: Bessel functions and Hankel transforms
For the special case in which \(\nu = -\tfrac{1}{2}\), it follows from (2.1) that, for \(x \in {\mathbb {R}}\),
$$\begin{aligned} J_{-1/2}(x) = \Big (\frac{2}{\pi x}\Big )^{1/2} \cos x. \end{aligned}$$
For \(\nu > -1/2\), the Bessel function is also given by the Poisson integral,
$$\begin{aligned} J_\nu (x) = \frac{(x/2)^{\nu }}{\pi ^{1/2}\, \varGamma (\nu +\tfrac{1}{2})} \int _0^{\pi } \cos (x \cos \theta )\, (\sin \theta )^{2\nu } \, {\mathrm {d}}\theta , \end{aligned}$$
\(x \in {\mathbb {R}}\); see Erdélyi et al. (1953, 7.12(9)), Olver et al. (2010, (10.9.4)). This result can be proved by expanding \(\cos (x\cos \theta )\) as a power series in \(x \cos (\theta )\) and integrating term-by-term.
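The series definition (2.1) and the Poisson integral can be compared directly. A minimal numerical sketch (plain Python; the test values \(\nu = 1.3\), \(x = 2.7\) and the quadrature step are arbitrary choices) evaluates \(J_\nu (x)\) both ways:

```python
import math

def bessel_j(nu, x, terms=60):
    """J_nu(x) via its power series, using the term-to-term ratio for stability."""
    total, term = 0.0, (x / 2)**nu / math.gamma(nu + 1)
    for m in range(terms):
        total += term
        term *= -(x / 2)**2 / ((m + 1) * (nu + m + 1))
    return total

def bessel_j_poisson(nu, x, n=20000):
    """J_nu(x) via the Poisson integral, by the trapezoidal rule on [0, pi]."""
    h = math.pi / n
    total = 0.0
    for k in range(n + 1):
        theta = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.cos(x * math.cos(theta)) * math.sin(theta)**(2 * nu)
    return (x / 2)**nu / (math.sqrt(math.pi) * math.gamma(nu + 0.5)) * (h * total)

nu, x = 1.3, 2.7
print(bessel_j(nu, x), bessel_j_poisson(nu, x))  # the two evaluations agree closely
```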
The Bessel function \(J_\nu \) also satisfies the inequality,
$$\begin{aligned} |J_\nu (z)| \le \frac{|z/2|^{\nu }}{\varGamma (\nu +1)} \, e^{|{\mathrm {Im}}\, z|}, \end{aligned}$$
\(\nu \ge -1/2\), \(z \in {\mathbb {C}}\); see Erdélyi et al. (1953, 7.3.2(4)) or Olver et al. (2010, (10.14.4)).
Henceforth, we assume that \(\nu \ge -1/2\). For \(t, x \ge 0\), we set \(z = 2(tx)^{1/2}\) in (9.3) to obtain
$$\begin{aligned} \varGamma (\nu +1) \big | (tx)^{-\nu /2} J_{\nu }\big (2(tx)^{1/2}\big ) \big | \le 1. \end{aligned}$$
Although the next two results may be known, we were unable to find them in the literature.
Lemma 3
For \(\nu \ge -1/2\) and \(t \ge 0\),
$$\begin{aligned} \big | t^{-\nu } J_{\nu +1}(t) \big | \le \frac{1}{2^{\nu } \pi ^{1/2}\, \varGamma (\nu +\tfrac{3}{2})}. \end{aligned}$$
Proof
By Olver et al. (2010, (10.6.6)),
$$\begin{aligned} \frac{{\mathrm {d}}}{{\mathrm {d}}t} \big ( t^{-\nu } J_{\nu }(t) \big ) = - t^{-\nu } J_{\nu +1}(t), \end{aligned}$$
\(t \ge 0\). For \(\nu > -1/2\), it follows by differentiating the Poisson integral (9.2) that
By a substitution, \(s = \sin ^2 \theta \), the latter integral reduces to a beta integral,
\(a, b > 0\). This produces (9.5).
For \(\nu = -1/2\), it follows from (9.6) and (9.1) that
$$\begin{aligned} t^{1/2} J_{1/2}(t) = \Big (\frac{2}{\pi }\Big )^{1/2} \sin t, \end{aligned}$$
cf. Olver et al. (2010, (10.16.1)). Then, \(|t^{1/2} J_{1/2}(t)| \le (2/\pi )^{1/2},\) as stated in (9.5). \(\square \)
Remark 5
Substituting \(\nu = 0\) in Lemma 3, we obtain \(|J_1(t)| \le 2/\pi \), \(t \ge 0\). This bound is sharper than a bound given in Olver et al. (2010, (10.14.1)), viz., \(|J_1(t)| \le 2^{-1/2}\), \(t \ge 0\).
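The bound of Remark 5 can be confirmed numerically: the global maximum of \(|J_1|\) is attained near \(t \approx 1.84\) and is roughly \(0.582\), comfortably below \(2/\pi \approx 0.6366\). A minimal check in plain Python (the grid on \([0, 12]\) is an arbitrary choice; \(|J_1|\) decays beyond its first peak):

```python
import math

def j1(x, terms=40):
    """J_1(x) via its power series; the first term is x/2."""
    total, term = 0.0, x / 2
    for m in range(terms):
        total += term
        term *= -(x / 2)**2 / ((m + 1) * (m + 2))
    return total

# crude grid search for the maximum of |J_1| on [0, 12]
peak = max(abs(j1(0.001 * k)) for k in range(12001))
print(peak, 2 / math.pi)  # the peak stays below 2/pi
```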
Lemma 4
For \(\nu \ge -1/2\), the function \(t^{-\nu }J_{\nu +1}(t)\), \(t \ge 0\), is Lipschitz continuous, satisfying, for \(u, v \in {\mathbb {R}}\), the inequality
Proof
For \(\nu > -1/2\) we apply (9.6), (9.2), and the triangle inequality to obtain
By a well-known trigonometric identity, and the inequality \(|\sin t| \le |t|\), \(t \in {\mathbb {R}}\),
Therefore,
Substituting \(t = \sin ^2 \theta \) reduces the latter integral to a beta integral, and then we obtain (9.8).
For \(\nu = -1/2\), we apply (9.7) to obtain
the latter inequality following from (9.9) with \(\theta = 0\). Then, we obtain (9.8) for \(\nu = -1/2\). \(\square \)
As regards the modified Bessel function \(I_\nu \), defined in (2.2), with \(\mathrm {i}= \sqrt{-1}\) we find from (2.1) that \(I_\nu (x) = \mathrm {i}^{-\nu } \, J_{\nu }(\mathrm {i}x)\), \(x \in {\mathbb {R}}\); hence, by (9.3),
For \(n \in {\mathbb {N}}_0\) and \(\alpha > 0\), the (generalized) Laguerre polynomial of order \(\alpha -1\) and degree n is
$$\begin{aligned} L_n^{(\alpha -1)}(x) = \sum _{k=0}^{n} \left( {\begin{array}{c}n+\alpha -1\\ n-k\end{array}}\right) \frac{(-x)^k}{k!}, \end{aligned}$$
\(x \in {\mathbb {R}}\); see Olver et al. (2010, Chapter 18) or Szegö (1967, Chapter 5). The normalized (generalized) Laguerre polynomial of order \(\alpha -1\) and degree n is defined by
$$\begin{aligned} {\mathcal {L}}_n^{(\alpha -1)}(x) = \Big ( \frac{n!\, \varGamma (\alpha )}{\varGamma (n+\alpha )} \Big )^{1/2} L_n^{(\alpha -1)}(x), \end{aligned}$$
\(x \in {\mathbb {R}}\). It is well-known (see Olver et al. (2010, Chapter 18.3) or Szegö (1967, Chapter 5.1)) that the polynomials \({\mathcal {L}}_n^{(\alpha -1)}\) are orthonormal with respect to the \(Gamma(\alpha ,1)\) distribution:
$$\begin{aligned} \frac{1}{\varGamma (\alpha )} \int _0^{\infty } {\mathcal {L}}_m^{(\alpha -1)}(x)\, {\mathcal {L}}_n^{(\alpha -1)}(x)\, x^{\alpha -1} e^{-x} \, {\mathrm {d}}x = \delta _{mn}. \end{aligned}$$
Lemma 5
For \(v > 0\) and \(\alpha > 0\),
Proof
Starting with the known integral (Olver et al. 2010, (18.17.34)),
we differentiate each side with respect to v and simplify the outcome to obtain the result. \(\square \)
Proof of Lemma 1
-
(i)
By (9.4) for \(J_{\nu }(x)\), \(\varGamma (\nu +1) \big |(tx)^{-\nu /2}J_{\nu }(2\sqrt{tx})\big | \le 1\) for all \(x, t > 0\). Therefore, by the triangle inequality, \(|{\mathcal {H}}_{X, \nu }(t)| \le 1\).
-
(ii)
It follows from the series expansion (2.1) that
$$\begin{aligned} \varGamma (\nu +1) (tx)^{-\nu /2} J_{\nu }\big (2(tx)^{1/2}\big )\Big |_{t=0} = 1, \end{aligned}$$for all x, so we obtain \({\mathcal {H}}_{X, \nu }(0) = 1.\)
-
(iii)
As the function \((tx)^{-\nu /2}J_{\nu }(2\sqrt{tx})\) is a power series in tx, it is continuous in \(t \ge 0\) for every fixed \(x \ge 0\). Since, by (9.4), it is also bounded, the product \(\varGamma (\nu +1)(tx)^{-\nu /2}J_{\nu }(2\sqrt{tx}) f(x)\) is bounded by the Lebesgue integrable function f(x) for all \(x, t \ge 0\). Therefore, the conclusion follows from the Dominated Convergence Theorem. \(\square \)
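Integrating the series (2.1) term by term against the \(Gamma(\alpha ,1)\) density shows that, for \(X \sim Gamma(\alpha ,1)\) and \(\nu = \alpha -1\), the Hankel transform reduces to \({\mathcal {H}}_{X, \alpha -1}(t) = e^{-t}\). A Monte Carlo sketch of the empirical version (the value \(\alpha = 1.8\), the sample size, and the grid of t values are arbitrary choices):

```python
import math, random

def hankel_kernel(nu, u, terms=60):
    """Gamma(nu+1) u^{-nu/2} J_nu(2 sqrt(u)), written as a power series in u (cf. (2.1))."""
    total, term = 0.0, 1.0
    for m in range(terms):
        total += term
        term *= -u / ((m + 1) * (nu + m + 1))
    return total

random.seed(3)
alpha = 1.8
sample = [random.gammavariate(alpha, 1.0) for _ in range(30000)]
for t in (0.5, 1.0, 2.0):
    emp = sum(hankel_kernel(alpha - 1, t * x) for x in sample) / len(sample)
    print(t, emp, math.exp(-t))  # empirical Hankel transform, close to e^{-t}
```

Since the kernel is bounded by 1 in absolute value, by (9.4), the Monte Carlo error is of order \(n^{-1/2}\).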
The following Hankel transform inversion theorem is a classical result that can be obtained from many sources, e.g., Sneddon (1972, p. 309, Theorem 1).
Theorem 11
(Hankel Inversion) Let X be a positive, continuous random variable with probability density function f(x) and Hankel transform \({\mathcal {H}}_{X, \nu }\). For \(x > 0\),
As a consequence of the inversion formula, we obtain the uniqueness of the Hankel transform.
Theorem 12
(Hankel Uniqueness) Let X and Y be positive random variables with corresponding Hankel transforms \({\mathcal {H}}_{X, \nu }\) and \({\mathcal {H}}_{Y, \nu }\). Then \({\mathcal {H}}_{X, \nu } = {\mathcal {H}}_{Y, \nu }\) if and only if \(X {\mathop {=}\limits ^{d}} Y\).
The next result, on the continuity of the Hankel transform, is analogous to Theorem 2.3 of Baringhaus and Taherizadeh (2010); we therefore omit the proof.
Theorem 13
(Hankel Continuity) Let \(\{X_{n}, n \in {\mathbb {N}}\}\) be a sequence of positive random variables with corresponding Hankel transforms \(\{{\mathcal {H}}_{n}, n \in {\mathbb {N}}\}\). If there exists a positive random variable X, with Hankel transform \({\mathcal {H}}\), such that \(X_{n} \xrightarrow {d} X\), then for all \(t \ge 0\),
Conversely, suppose there exists \({\mathcal {H}}: [0,\infty ) \rightarrow {\mathbb {R}}\) such that \({\mathcal {H}}(0) = 1\), \({\mathcal {H}}\) is continuous at 0, and (9.12) holds. Then \({\mathcal {H}}\) is the Hankel transform of a positive random variable X, and \(X_{n} \xrightarrow {d} X\).
Appendix 2: The test statistic
Proof of Proposition 1
By squaring the integrand in (1.3), there are three terms to be calculated. First,
These integrals are of the form of Weber’s exponential integral (Olver et al. 2010, (10.22.67)):
valid for \(\nu > -1\) and \(a, b, p > 0\). Simplifying the resulting expressions, we obtain
Second, by proceeding as in Example 1, it is straightforward to deduce
Third, we have a gamma integral:
Collecting together all three terms, we obtain the desired result. \(\square \)
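Weber's exponential integral used above can be verified numerically. The sketch below assumes the standard form (Olver et al. 2010, (10.22.67)), \(\int _0^\infty e^{-p^2 t^2} J_\nu (at) J_\nu (bt)\, t \, {\mathrm {d}}t = \frac{1}{2p^2} e^{-(a^2+b^2)/4p^2} I_\nu (ab/2p^2)\) for \(\nu > -1\), and checks one arbitrarily chosen instance by trapezoidal quadrature:

```python
import math

def bessel_j(nu, x, terms=60):
    """J_nu(x) via its power series."""
    total, term = 0.0, (x / 2)**nu / math.gamma(nu + 1)
    for m in range(terms):
        total += term
        term *= -(x / 2)**2 / ((m + 1) * (nu + m + 1))
    return total

def bessel_i(nu, x, terms=60):
    """Modified Bessel function I_nu(x) via its power series."""
    total, term = 0.0, (x / 2)**nu / math.gamma(nu + 1)
    for m in range(terms):
        total += term
        term *= (x / 2)**2 / ((m + 1) * (nu + m + 1))
    return total

nu, a, b, p = 0.5, 1.0, 2.0, 1.0
n, upper = 4000, 8.0                     # the Gaussian factor makes the tail beyond t = 8 negligible
h = upper / n
lhs = h * sum((0.5 if k in (0, n) else 1.0)
              * math.exp(-(p * k * h)**2)
              * bessel_j(nu, a * k * h) * bessel_j(nu, b * k * h) * (k * h)
              for k in range(n + 1))
rhs = math.exp(-(a*a + b*b) / (4 * p*p)) * bessel_i(nu, a*b / (2 * p*p)) / (2 * p*p)
print(lhs, rhs)  # the quadrature matches the closed form
```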
Proof of Theorem 2
By (9.6), \((s^{1-\alpha } J_{\alpha -1}(s))'= -s^{1-\alpha } J_{\alpha }(s)\). Therefore, the Taylor expansion of order 1 of the function \(s^{1-\alpha } J_{\alpha -1}(s)\), at a point \(s_0\), is
where u lies between s and \(s_0\). Setting \(s = 2(tY_j)^{1/2}\) and \(s_0 = 2(tX_j/\alpha )^{1/2}\), we obtain
where \(u_j\) lies between \(2(tY_{j})^{1/2}\)and \(2(tX_{j}/\alpha )^{1/2}\). Define
then
and (10.2) reduces to
Multiplying both sides of (10.4) by \(2^{\alpha -1}\), adding and subtracting the term
on the right-hand side, and then simplifying the result, we obtain
Define the processes \(Z_{n,1}(t)\), \(Z_{n,2}(t)\), and \(Z_{n,3}(t)\), \(t \ge 0\), by
We will show that
To establish (10.6), let
\(t \ge 0\), \(j=1,\dotsc ,n\). Since \(X_j \sim Gamma(\alpha ,1)\), we have \(E(X_j-\alpha )=0\); also, by Example 1,
Therefore \(E(Z_{n,3,j}(t))=0\), \(t \ge 0\) and \(j=1,\dotsc ,n\), and \(Z_{n,3,1},\dotsc ,Z_{n,3,n}\) clearly are i.i.d. random elements in \(L^2\). Applying the Cauchy–Schwarz inequality and (9.4), we obtain \(E(||Z_{n,3,1} ||^2_{L^2}) < \infty \). Thus, by the Central Limit Theorem in \(L^2\) (Ledoux and Talagrand 1991, p. 281),
where \(Z:=(Z(t),t \ge 0)\) is a centered Gaussian random element in \(L^2\). This proves (10.6) and shows that Z has the same covariance operator as \(Z_{n,3,1}\).
It is well-known that the covariance operator of the random element \(Z_{n,3,1}\) is uniquely determined by the covariance function of the stochastic process \(Z_{n,3,1}(t)\) (Gīkhman and Skorokhod 1980, pp. 218–219). We now show that the function K(s, t) in (3.4) is the covariance function of \(Z_{n,3,1}\). Noting that \(E[Z_{n,3,1}(t)] = 0\) for all t, we obtain
By (10.10),
so the calculation of K(s, t) reduces to evaluating the four terms obtained by expanding the product on the right-hand side of (10.11).
The first term in the product in (10.11) is evaluated using Weber’s integral (10.1):
The second term in the product in (10.11) is a Hankel transform of the type in Example 1,
and the third term in the product is the same as the second term but with s and t interchanged.
The fourth term in the product in (10.11) is
Combining all four terms, we obtain (3.4).
To establish (10.7), we begin by showing that
where \(\chi _1^2\) denotes a chi-square random variable with one degree of freedom. By the Central Limit Theorem, \( \sqrt{n}({\overline{X}}_{n}-\alpha ) \xrightarrow {d} {\mathcal {N}}(0,\alpha ), \) and by the Law of Large Numbers and the Continuous Mapping Theorem, \( (\alpha {\overline{X}}_n)^{1/2} (\alpha ^{1/2}+{\overline{X}}_n^{1/2}) \xrightarrow {p} 2\alpha ^{3/2}. \) By Slutsky’s theorem (Chow and Teicher 1988, p. 249), \(\sqrt{n} W_n \xrightarrow {d} {\mathcal {N}}(0,\tfrac{1}{4} \alpha ^{-2})\), hence \((\sqrt{n} W_n)^2 \xrightarrow {d} \chi ^2_1/4\alpha ^2\).
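The limit \(\sqrt{n} W_n \xrightarrow {d} {\mathcal {N}}(0,\tfrac{1}{4} \alpha ^{-2})\) is easy to check by simulation. The sketch below assumes \(W_n = \alpha ^{-1/2} - {\overline{X}}_n^{-1/2}\), a form consistent with the limits quoted above (the sign of \(W_n\) does not affect the limiting law); the choices \(\alpha = 2\), \(n = 500\), and 2000 replications are arbitrary:

```python
import math, random

random.seed(11)
alpha, n, reps = 2.0, 500, 2000
vals = []
for _ in range(reps):
    xbar = sum(random.gammavariate(alpha, 1.0) for _ in range(n)) / n
    vals.append(math.sqrt(n) * (alpha**-0.5 - xbar**-0.5))   # hypothesized form of sqrt(n) W_n
mean = sum(vals) / reps
var = sum((v - mean)**2 for v in vals) / (reps - 1)
print(var, 1 / (4 * alpha**2))  # simulated variance, close to the asymptotic value 1/(4 alpha^2)
```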
By the Taylor expansion in (10.5),
Define
Then, \( ||Z_{n}-Z_{n,1} ||^2_{L^2} = 4^{\alpha } [\varGamma (\alpha )]^2 (\sqrt{n} W_n)^2 \, V_n. \) By the Cauchy–Schwarz inequality,
Recall that \(u_j\) lies between \(2(tY_{j})^{1/2}\) and \(2(tX_{j}/\alpha )^{1/2}\), so we can write
where \(\theta _{n,j,t} \in [0,1]\). By Lemma 4, the Lipschitz property of the Bessel functions,
since \(\theta _{n,j,t} \in [0,1]\). Therefore,
By the Law of Large Numbers, \(({\overline{X}}_n^{-1/2}-\alpha ^{-1/2})^2 \xrightarrow {p} 0\) and \( n^{-1} \sum _{j=1}^n X^2_j \xrightarrow {p} E(X^2_1)=\alpha (\alpha +1), \) so it follows that \(V_{n} \xrightarrow {p} 0\). By Slutsky’s theorem, \( ||Z_{n}-Z_{n,1} ||^2_{L^2} = 4^{\alpha } [\varGamma (\alpha )]^2 (\sqrt{n} W_n)^2 \cdot V_n \xrightarrow {d} 0, \) therefore \(||Z_{n}-Z_{n,1} ||_{L^2} \xrightarrow {p} 0\), as asserted in (10.7).
To establish (10.8), define
\(t \ge 0\), \(j=1,\dotsc ,n\). Then it is straightforward to verify that
and therefore
By the Law of Large Numbers, \(W_n \xrightarrow {p} 0\). Also, as shown in Example 3,
hence \(E(\varDelta _j(t)) = 0\), \(t \ge 0\), \(j=1,\dotsc ,n\). Also, \(\varDelta _1(t),\dotsc ,\varDelta _n(t)\) are i.i.d. random elements in \(L^2\). We now show that \(E(||\varDelta _1 ||^2_{L^2}) < \infty \). We have
To show that \(E(||\varDelta _1 ||^2_{L^2}) < \infty \) it suffices, by the Cauchy–Schwarz inequality, to prove that
and
To establish (10.14), we apply the inequality (9.5) to obtain
for \(t \ge 0\). Therefore,
As for (10.15), that expectation is a convergent gamma integral. Hence, \(E(||\varDelta _1 ||^2_{L^2}) < \infty \).
By the Central Limit Theorem in \(L^2\), \(n^{-1/2} \sum _{j=1}^{n} \varDelta _j(t)\) converges to a centered Gaussian random element in \(L^2\). Thus, by the Continuous Mapping Theorem,
converges in distribution to a random variable which has finite variance. Since \(W_n \xrightarrow {p} 0\), by (10.13) and Slutsky's Theorem we obtain \(||Z_{n,1}-Z_{n,2} ||^2_{L^2} \xrightarrow {d} 0\); therefore, \(||Z_{n,1}-Z_{n,2} ||_{L^2} \xrightarrow {p} 0\).
To prove (10.9), we observe that
where
Therefore,
As noted earlier, \(\int ^\infty _0 (te^{-t/\alpha })^2\, {\mathrm{d}}P_0(t) < \infty \). Also, by the Central Limit Theorem, \(\sqrt{n}({\overline{X}}_n-\alpha ) \xrightarrow {d} {\mathcal {N}}(0,\alpha )\); and by the Law of Large Numbers, \(R_n \xrightarrow {p} 0\). By Slutsky’s theorem, \(\big [ \sqrt{n}({\overline{X}}_n-\alpha ) R_n \big ]^2 \xrightarrow {d} 0\); hence \(\big [ \sqrt{n}({\overline{X}}_n-\alpha ) R_n \big ]^2 \xrightarrow {p} 0\), and therefore \(||Z_{n,2}-Z_{n,3} ||_{L^2} \xrightarrow {p} 0\).
Finally, by the Continuous Mapping Theorem in \(L^2\), \(||Z_{n} ||^2_{L^2} \xrightarrow {d} ||Z ||^2_{L^2}\), i.e.
The proof now is complete. \(\square \)
Appendix 3: Eigenvalues and eigenfunctions of the covariance operator
Proof of Theorem 5
Since the set \(\{{\mathcal {L}}_k^{(\alpha -1)} : k \in {\mathbb {N}}_0\}\) is an orthonormal basis for \(L^2\), the eigenfunction \(\phi \in L^2\) corresponding to an eigenvalue \(\delta \) can be written as
We restrict ourselves temporarily to eigenfunctions for which this series is pointwise convergent. Substituting this series into the equation \({\mathcal {S}} \phi =\delta \phi \), we obtain
Substituting the covariance function K(s, t) in the left-hand side of (11.1), writing K in terms of \(K_0\), and assuming that we can interchange the order of integration and summation, we obtain
By Theorem 3,
On writing \({\mathcal {L}}_k^{(\alpha -1)}\) in terms of \(L_k^{(\alpha -1)}\), the generalized Laguerre polynomial, applying the well-known formula (Olver et al. 2010, (18.17.34)) for the Laplace transform of \(L_k^{(\alpha -1)}\), and making use of (4.2) and (4.3), we obtain
Again writing \({\mathcal {L}}_k^{(\alpha -1)}\) in terms of \(L_k^{(\alpha -1)}\), applying Lemma 5, and (4.2) and (4.3), we obtain
In summary, (11.2) reduces to
By applying (11.3), we also obtain the Fourier-Laguerre expansion of \(e^{-s/\alpha }\) with respect to the orthonormal basis \(\{{\mathcal {L}}_k^{(\alpha -1)}\}\); indeed,
Similarly, by applying (11.4), we have
Let
and
Combining (11.5)–(11.7), we find that (11.1) reduces to
and now comparing the coefficients of \({\mathcal {L}}_k^{(\alpha -1)}\), we obtain
for all \(k \in {\mathbb {N}}_0\). Since we have assumed that \(\delta \ne \rho _k\) for every \(k\), we can solve this equation for the \(k\)th Fourier coefficient of \(\phi \) to obtain
Substituting (11.10) into (11.6), we get
therefore,
Similarly, by substituting (11.10) into (11.7), we obtain
hence,
Suppose \(c_1=c_2=0\); then it follows from (11.10) that every Fourier coefficient of \(\phi \) vanishes, and so \(\phi =0\), a contradiction since \(\phi \) is a non-trivial eigenfunction. Hence, \(c_1\) and \(c_2\) cannot both be equal to 0. Combining (11.11) and (11.12), and using the fact that \(c_1, c_2\) are not both 0, it is straightforward to deduce that \(\alpha ^3 A(\delta ) B(\delta )= D^2(\delta )\). Therefore, if \(\delta \) is a positive eigenvalue of \({\mathcal {S}}\) then it is a positive root of the function \(G(\delta ) = \alpha ^3 A(\delta ) B(\delta )-D^2(\delta )\).
Conversely, suppose that \(\delta \) is a positive root of \(G(\delta )\) with \(\delta \ne \rho _k\) for any \(k \in {\mathbb {N}}_0\). Define
\(k \in {\mathbb {N}}_0\), where \(c_1\) and \(c_2\) are real constants that are not both equal to 0 and which satisfy (11.11) and (11.12). That such constants exist can be shown by following a case-by-case argument similar to Taherizadeh (2009, p. 48); for example, if \(D(\delta ) \ne 0\), \(A(\delta ) \ne 0\), and \(B(\delta ) \ne 0\), then we can choose \(c_2\) to be any non-zero number and then set \(c_1 = c_2 B(\delta )/D(\delta )\).
Define
\(s \ge 0\). By applying the ratio test, we find that \(\sum _{k=0}^{\infty } \gamma _k^2 < \infty \); therefore, \({{\widetilde{\phi }}} \in L^2\).
To show also that (11.14) converges pointwise, we apply (9.11), (4.4), and a Laguerre polynomial inequality (Erdélyi et al. 1953, p. 207) to obtain
for \(s \ge 0\). Thus, to establish that (11.14) converges pointwise, we need to show that
However, the convergence of each of these series follows from the ratio test.
Next, we justify the interchange of summation and integration in our calculations. By a corollary to Theorem 16.7 in Billingsley (1979, p. 224), we need to verify that
By the triangle inequality and by (11.18), we have
\(s, t \ge 0\). Thus, to prove (11.17), we need to establish that
By applying the bound (11.15), we see that it suffices to prove that
and
\(j=0,1\). As these integrals are finite, the convergence of both series follows from (11.16).
To calculate \({\mathcal {S}} {{\widetilde{\phi }}} (s)\) from (11.14), we follow the same steps as before to obtain
By the definition (11.13) of \(\gamma _k\), and noting that
we have
Therefore, \(\delta \) is an eigenvalue of \({\mathcal {S}}\) with corresponding eigenfunction \({{\widetilde{\phi }}}\). \(\square \)
A proof that Conjecture 2 implies Conjecture 1. Suppose there exists \(l \in {\mathbb {N}}_0\) such that \(\delta = \rho _l\). Substituting \(k = l\) in (11.9) and simplifying the outcome, we obtain
Substituting \(\delta =\rho _l\) in (11.8), applying (11.19), and cancelling common terms in (11.8), we obtain
for \(k \ne l\). Substituting this result for the inner product into (11.6), we obtain
Similarly, substituting (11.20) into (11.7), we obtain
On simplifying the above expressions and substituting for \(c_1\) from (11.19), we obtain
and
Suppose that \(c_2 = 0\); then it follows from (11.19) that \(c_1 = 0\), which contradicts the earlier observation that \(c_1\) and \(c_2\) are not both zero; therefore, \(c_2 \ne 0\). Also, by (4.2), \(b_\alpha ^2< 1 < \beta \), so \(b_\alpha ^2 - k\beta \ne 0\) for all \(k \in {\mathbb {N}}_0\). Solving (11.21) and (11.22) for the inner product and equating the two expressions, we obtain
Simplifying the above equation, we obtain (4.5). \(\square \)
A \(C^\infty \) kernel \(K:{\mathbb {R}}^2 \rightarrow {\mathbb {R}}\) is extended totally positive (ETP) if for all \(r \ge 1\), all \(s_1 \ge \cdots \ge s_r\), all \(t_1 \ge \cdots \ge t_r\), there holds
where instances of equality for the variables \(s_i\) and \(t_j\) are to be understood as limiting cases, and then L’Hospital’s rule is to be used to evaluate this ratio.
Proof of Proposition 2
By (3.4), the kernel K(s, t) is of the form
where the coefficients \(c_k\) are positive for all \(k = 0,1,2,\ldots \). Therefore,
By Karlin (1964, p. 101), the series \(\sum _{k=0}^\infty c_k s^k t^k\) is ETP; so, by (11.23), K(s, t) is ETP.
In the case of \(K_0\), we have
where \(c_k > 0\) for all k. Then it follows by a similar argument that \(K_0(s,t)\) is ETP.
By a result of Karlin (1964), the eigenvalues of an integral operator are simple and positive if the kernel of the operator is ETP. Therefore, the eigenvalues of \({\mathcal {S}}\) and \({\mathcal {S}}_0\) are simple and positive. In particular, 0 is not an eigenvalue of \({\mathcal {S}}\) or \({\mathcal {S}}_0\), so both operators are injective. Also, the oscillation property (4.8) follows from Karlin (1964, Theorem 3). \(\square \)
Proof of Proposition 3
Define the kernels \(k_0(s,t) = - e^{-(s+t)/\alpha }\) and \(k_1(s,t) = - e^{-(s+t)/\alpha } \alpha ^{-3} st\), \(s, t \ge 0\). Also, define on \(L^2\) the corresponding integral operators,
\(j=0,1\), \(s \ge 0\). Then it follows from (3.4) that \({\mathcal {S}} = {\mathcal {S}}_0 + {\mathcal {U}}_0 + {\mathcal {U}}_1\).
Each \({\mathcal {U}}_j\) clearly is self-adjoint and of rank one, i.e., the range of \({\mathcal {U}}_j\) is a one-dimensional subspace of \(L^2\). Also, \({\mathcal {S}}_0 + {\mathcal {U}}_0\) is compact and self-adjoint and its kernel, \(K_0 + k_0\), is of the form
where \(c_j > 0\) for all j. Arguing as in the proof of Proposition 2, we find that the eigenvalues of \({\mathcal {S}}_0 + {\mathcal {U}}_0\) are simple and positive; hence, \({\mathcal {S}}_0 + {\mathcal {U}}_0\) is injective.
Let \(\omega _k\), \(k \ge 1\), be the eigenvalues of \({\mathcal {S}}_0 + {\mathcal {U}}_0\), where \(\omega _1> \omega _2 > \cdots \). Since \({\mathcal {S}}_0\) is compact, self-adjoint, and injective, and since \({\mathcal {U}}_0\) is self-adjoint and of rank one then, by Hochstadt (1973) or Dancis and Davis (1987), the eigenvalues of \({\mathcal {S}}_0\) and \({\mathcal {S}}_0 + {\mathcal {U}}_0\) are interlaced: \(\rho _{k-1} \ge \omega _k \ge \rho _k\) for all \(k \ge 1\). Also, there is exactly one eigenvalue of \({\mathcal {S}}_0 + {\mathcal {U}}_0\) in one of the intervals \([\rho _k,\rho _{k-1})\), \((\rho _k,\rho _{k-1})\), or \((\rho _k,\rho _{k-1}]\).
Since \({\mathcal {U}}_1\) is self-adjoint and of rank one then by Hochstadt’s theorem, the eigenvalues of \({\mathcal {S}}_0 + {\mathcal {U}}_0\) and \({\mathcal {S}}_0 + {\mathcal {U}}_0 + {\mathcal {U}}_1 \equiv {\mathcal {S}}\) are interlaced: \(\omega _k \ge \delta _k \ge \omega _{k+1}\) for all \(k \ge 1\). Also, there is exactly one eigenvalue of \({\mathcal {S}}\) in one of the intervals \([\omega _{k+1},\omega _k)\), \((\omega _{k+1},\omega _k)\), or \((\omega _{k+1},\omega _k]\).
Combining these interlacing results, we obtain \(\rho _{k-1} \ge \delta _k \ge \rho _{k+1}\), \(k \ge 1\). Also, since \(\rho _k = \alpha ^\alpha b_\alpha ^{4k+2\alpha }\) then, by the interlacing inequalities, \(\delta _k = O(b_\alpha ^{4k})\), hence \(\delta _k = O(\rho _k)\). \(\square \)
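A finite-dimensional analogue of the Hochstadt/Dancis-Davis interlacing used above can be computed directly: for a diagonal matrix \(D\) perturbed by a positive rank-one term \(vv^{\top }\), the eigenvalues are the roots of the secular equation \(f(\lambda ) = 1 + \sum _i v_i^2/(d_i - \lambda )\) and interlace the diagonal entries. The sketch below (illustrative numbers only) locates each root by bisection:

```python
d = [1.0, 2.0, 4.0, 7.0]        # eigenvalues of the unperturbed (diagonal) operator
v = [0.5, 0.3, 0.8, 0.4]        # direction of the rank-one perturbation

def secular(lam):
    """f(lam) = 1 + sum v_i^2/(d_i - lam); its roots are the eigenvalues of D + v v^T."""
    return 1.0 + sum(vi * vi / (di - lam) for di, vi in zip(d, v))

def bisect(lo, hi, tol=1e-12):
    """f increases from -inf to +inf on each interval between poles; bisect for the root."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if secular(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

eps = 1e-9
intervals = [(d[i] + eps, d[i + 1] - eps) for i in range(len(d) - 1)]
intervals.append((d[-1] + eps, d[-1] + sum(vi * vi for vi in v)))  # the largest root
eigs = [bisect(lo, hi) for lo, hi in intervals]
print(eigs)

# interlacing, as in the infinite-dimensional argument above
for i in range(len(d) - 1):
    assert d[i] < eigs[i] < d[i + 1]
assert eigs[-1] > d[-1]
```

As a cross-check, the sum of the computed eigenvalues equals the trace \(\sum _i d_i + \Vert v \Vert ^2\).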
Hadjicosta, E., Richards, D. Integral transform methods in goodness-of-fit testing, I: the gamma distributions. Metrika 83, 733–777 (2020). https://doi.org/10.1007/s00184-019-00749-y
Keywords
- Bahadur slope
- Contaminated model
- Contiguous alternative
- Gaussian process
- Generalized Laguerre polynomial
- Goodness-of-fit testing
- Hankel transform
- Hilbert–Schmidt operator
- Lipschitz continuity
- Modified Bessel function
- Pitman efficiency