
On the exact distribution of the likelihood ratio test statistic for testing the homogeneity of the scale parameters of several inverse Gaussian distributions

  • Original paper
  • Published in: Computational Statistics

Abstract

Several researchers have addressed the problem of testing the homogeneity of the scale parameters of several independent inverse Gaussian distributions based on the likelihood ratio test. However, only approximations of the distribution function of the test statistic are available in the literature. In this note, we present the exact distribution of the likelihood ratio test statistic for testing the equality of the scale parameters of several independent inverse Gaussian populations in closed form. To this end, we apply the Mellin inverse transform and the Jacobi polynomial expansion to the moments of the likelihood ratio test statistic. We also propose an approximate method based on the Jacobi polynomial expansion. Finally, we apply an accurate numerical method, based on numerical inversion of the characteristic function, to obtain a near-exact approximation of the distribution of the likelihood ratio test statistic. The proposed methods are illustrated via numerical and real-data examples.



Acknowledgements

The author would like to acknowledge the Research Council of Shiraz University. The author also would like to thank referees for their constructive comments.

Author information

Correspondence to Mahmood Kharrati-Kopaei.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (DOCX 22 kb)

Appendix: proofs

Proof of Lemma 1

It is well known that \( W_{i} = \lambda_{i} V_{i} \) has a chi-square distribution with \( n_{i} - 1 \) degrees of freedom. Note that \( LR \) can be rewritten as

$$ LR = \mathop \prod \limits_{i = 1}^{k} \left( {\left( {\frac{{W_{i} /\lambda_{i} }}{{\mathop \sum \nolimits_{j = 1}^{k} W_{j} /\lambda_{j} }}} \right)^{{n_{i} /2}} f_{i}^{{ - n_{i} /2}} } \right) . $$

If \( \varvec{Z} = \left( {Z_{1} , \ldots ,Z_{k} } \right) = \left( {\frac{{W_{1} }}{{\mathop \sum \nolimits_{i = 1}^{k} W_{i} }}, \ldots ,\frac{{W_{k} }}{{\mathop \sum \nolimits_{i = 1}^{k} W_{i} }}} \right) \), it is known that \( \varvec{Z} \) has a Dirichlet distribution with parameter \( \left( {n_{1}^{*} , \ldots ,n_{k}^{*} } \right) \). It can be verified that

$$ \varvec{Y} = \left( {Y_{1} , \ldots ,Y_{k} } \right) = \left( {\frac{{Z_{1} /\lambda_{1} }}{{\mathop \sum \nolimits_{i = 1}^{k} Z_{i} /\lambda_{i} }}, \ldots ,\frac{{Z_{k} /\lambda_{k} }}{{\mathop \sum \nolimits_{i = 1}^{k} Z_{i} /\lambda_{i} }}} \right) = \left( {\frac{{W_{1} /\lambda_{1} }}{{\mathop \sum \nolimits_{j = 1}^{k} W_{j} /\lambda_{j} }}, \ldots ,\frac{{W_{k} /\lambda_{k} }}{{\mathop \sum \nolimits_{j = 1}^{k} W_{j} /\lambda_{j} }}} \right), $$

has the following density function

$$ f_{\varvec{Y}} \left( {y_{1} , \ldots ,y_{k} } \right) = \frac{{{{\varGamma }}\left( {N^{*} } \right)}}{{\mathop \prod \nolimits_{i = 1}^{k} \varGamma \left( {n_{i}^{*} } \right)}}\frac{{\mathop \prod \nolimits_{i = 1}^{k} \lambda_{i}^{{n_{i}^{*} }} }}{{\left( {\mathop \sum \nolimits_{i = 1}^{k} \lambda_{i} y_{i} } \right)^{{N^{*} }} }}\mathop \prod \limits_{i = 1}^{k} y_{i}^{{n_{i}^{*} - 1}} , $$

see Kharrati-Kopaei and Malekzadeh (2019). Therefore, the \( h \)th moment of \( L \) under \( H_{1} \) is

$$ \begin{aligned} E_{{H_{1} }} \left( {L^{h} } \right) & = \mathop \prod \limits_{i = 1}^{k} f_{i}^{{ - f_{i} h}} E_{{H_{1} }} \left( {\mathop \prod \limits_{i = 1}^{k} \left( {\frac{{V_{i} }}{V}} \right)^{{f_{i} h}} } \right) = \mathop \prod \limits_{i = 1}^{k} f_{i}^{{ - f_{i} h}} E_{{H_{1} }} \left( {\mathop \prod \limits_{i = 1}^{k} \left( {\frac{{W_{i} /\lambda_{i} }}{{\mathop \sum \nolimits_{j = 1}^{k} W_{j} /\lambda_{j} }}} \right)^{{f_{i} h}} } \right) \\ & = \frac{{\varGamma \left( {N^{*} } \right)}}{{\varGamma \left( {N^{*} + h} \right)}}\mathop \prod \limits_{i = 1}^{k} \left( {f_{i}^{{ - f_{i} h}} \lambda_{i}^{{n_{i}^{*} }} \frac{{\varGamma \left( {f_{i} h + n_{i}^{*} } \right)}}{{\varGamma \left( {n_{i}^{*} } \right)}}} \right)E_{\varvec{D}} \left( {\left( {\mathop \sum \limits_{i = 1}^{k} \lambda_{i} D_{i} } \right)^{{ - N^{*} }} } \right), \\ \end{aligned} $$

For details, see Kharrati-Kopaei and Malekzadeh (2019). Note that \( E_{{H_{1} }} \left( {L^{h} } \right) \) reduces to \( E_{{H_{0} }} \left( {L^{h} } \right) \) under \( H_{0} \) since the \( \lambda_{i} \) are equal and \( \mathop \sum \nolimits_{i = 1}^{k} D_{i} = 1 \).
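The closed-form moment under \( H_{0} \) is straightforward to evaluate numerically. The sketch below assumes the paper's notation \( f_{i} = n_{i}/N \), \( n_{i}^{*} = (n_{i} - 1)/2 \) and \( N^{*} = \sum n_{i}^{*} \) (these definitions appear in the main text, not in this appendix, so they are assumptions here); the function name `moment_L_H0` is ours. The computation is done on the log scale for numerical stability.

```python
import numpy as np
from scipy.special import gammaln

def moment_L_H0(h, n):
    """h-th moment of L under H0 for sample sizes n = (n_1, ..., n_k).

    Direct transcription of the closed form in the proof of Lemma 1,
    assuming f_i = n_i / N, n_i* = (n_i - 1) / 2 and N* = sum(n_i*).
    """
    n = np.asarray(n, dtype=float)
    N = n.sum()
    f = n / N                      # assumed definition of f_i
    nstar = (n - 1.0) / 2.0        # assumed definition of n_i*
    Nstar = nstar.sum()
    # log of Gamma(N*)/Gamma(N* + h) * prod f_i^{-f_i h} Gamma(f_i h + n_i*)/Gamma(n_i*)
    log_m = (gammaln(Nstar) - gammaln(Nstar + h)
             + np.sum(-f * h * np.log(f)
                      + gammaln(f * h + nstar) - gammaln(nstar)))
    return np.exp(log_m)
```

Since \( L \in (0,1) \), the moments returned by this sketch should decrease in \( h \) and equal one at \( h = 0 \), which gives a quick sanity check.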

Proof of Theorem 1

Let \( f_{L} \left( . \right) \) denote the PDF of \( L \). If one can obtain \( f_{L} \left( . \right) \) in a closed form, then \( F_{L} \left( . \right) \) is obtained directly. The proof is similar to that of Kharrati-Kopaei and Malekzadeh (2019), and hence we present a sketch of the proof. By applying the MIT to \( E_{{H_{0} }} \left( {L^{h} } \right) \) and changing variable \( h + N/2 = t \), we have

$$ \begin{aligned} f_{L} \left( x \right) & = \frac{1}{2\pi i}\mathop \int \limits_{ - i\infty }^{ + i\infty } x^{ - h - 1} E_{{H_{0} }} \left( {L^{h} } \right) dh \\ & = \mathop \prod \limits_{i = 1}^{k} f_{i}^{{n_{i} /2}} \frac{{\varGamma \left( {N^{*} } \right)}}{{\mathop \prod \nolimits_{i = 1}^{k} \varGamma \left( {n_{i}^{*} } \right)}} x^{{\frac{N}{2} - 1}} \frac{1}{2\pi i}\mathop \int \limits_{{\frac{N}{2} - i\infty }}^{{\frac{N}{2} + i\infty }} x^{ - t} \phi \left( t \right) dt, \\ \end{aligned} $$

where \( \phi \left( t \right) = \left( {\mathop \prod \nolimits_{i = 1}^{k} \varGamma \left( {f_{i} t - 1/2} \right)f_{i}^{{ - f_{i} t}} } \right)/\varGamma \left( {t - k/2} \right) \). One can expand \( \phi \left( t \right) \) as

$$ \phi \left( t \right) = \left( {2\pi } \right)^{v} t^{ - v} \mathop \prod \limits_{i = 1}^{k} \frac{1}{{f_{i} }}\left\{ {1 + \mathop \sum \limits_{j = 1}^{\infty } \beta_{j} /t^{j} } \right\}, $$
(A.1)

where \( v = \left( {k - 1} \right)/2 \) and the \( \beta_{j} \) are obtained recursively as \( \beta_{j} = \frac{1}{j}\mathop \sum \nolimits_{s = 1}^{j} s \alpha_{s} \beta_{j - s} \) with \( \beta_{0} = 1 \), in which \( \alpha_{s} = \frac{{\left( { - 1} \right)^{s} }}{{s\left( {s + 1} \right)}}\left\{ {B_{s + 1} \left( { - k} \right) - B_{s + 1} \left( { - 1} \right)\mathop \sum \limits_{i = 1}^{k} f_{i}^{ - s} } \right\} \), where \( B_{r} \left( a \right) \) is the Bernoulli polynomial of degree \( r \) and order one; see Kalinin (1971) and Nagarsenker (1976). Note that \( \left( {2\pi } \right)^{ - v} \left( {\mathop \prod \nolimits_{i = 1}^{k} f_{i} } \right)\phi \left( t \right) \) can be expanded as a factorial series of the form

$$ \left( {2\pi } \right)^{ - v} \mathop \prod \limits_{i = 1}^{k} f_{i} \phi \left( t \right) = \mathop \sum \limits_{r = 0}^{\infty } R_{r} \varGamma \left( t \right)/\varGamma \left( {t + r + v} \right), $$
(A.2)

see Nair (1940). The coefficients \( R_{r} \) can be obtained by equating the coefficients of the two series in (A.1) and (A.2). In this regard, note that \( \log \left( {\varGamma \left( t \right)/\varGamma \left( {t + r + v} \right)} \right) \) can be expanded as \( - \left( {v + r} \right)\log \left( t \right) + \mathop \sum \nolimits_{j = 1}^{\infty } A_{r,j} /t^{j} \), where \( A_{r,j} = \frac{{\left( { - 1} \right)^{j - 1} }}{{j\left( {j + 1} \right)}}\left\{ {B_{j + 1} \left( 0 \right) - B_{j + 1} \left( {v + r} \right)} \right\} \). Therefore, we can write \( \frac{\varGamma \left( t \right)}{{\varGamma \left( {t + r + v} \right)}} = t^{{ - \left( {v + r} \right)}} \left\{ {1 + \mathop \sum \nolimits_{j = 1}^{\infty } \frac{{C_{r,j} }}{{t^{j} }}} \right\} \), where \( C_{r,j} = \frac{1}{j}\mathop \sum \nolimits_{q = 1}^{j} q A_{r,q} C_{r,j - q} \) with \( C_{r,0} = 1 \). Consequently, the coefficients \( R_{r} \) are obtained recursively from \( \mathop \sum \nolimits_{j = 0}^{i} R_{i - j} C_{i - j,j} = \beta_{i} \) with \( R_{0} = 1 \). Now, \( f_{L} \left( x \right) \) can be obtained by integrating (A.2) term by term since a factorial series is uniformly convergent in a half-plane; see Nagarsenker (1976). Therefore, we have

$$ \begin{aligned} f_{L} \left( x \right) & = \left( {2\pi } \right)^{v} x^{{\frac{N}{2} - 1}} \frac{{\varGamma \left( {N^{*} } \right)}}{{\mathop \prod \nolimits_{i = 1}^{k} \varGamma \left( {n_{i}^{*} } \right)}} \mathop \prod \limits_{i = 1}^{k} f_{i}^{{\frac{{n_{i} }}{2} - 1}} \frac{1}{2\pi i}\mathop \int \limits_{{\frac{N}{2} - i\infty }}^{{\frac{N}{2} + i\infty }} x^{ - t} \mathop \sum \limits_{r = 0}^{\infty } R_{r} \frac{\varGamma \left( t \right)}{{\varGamma \left( {t + r + v} \right)}}dt \\ & = \left( {2\pi } \right)^{v} x^{{\frac{N}{2} - 1}} \frac{{\varGamma \left( {N^{*} } \right)}}{{\mathop \prod \nolimits_{i = 1}^{k} \varGamma \left( {n_{i}^{*} } \right)}} \mathop \prod \limits_{i = 1}^{k} f_{i}^{{\frac{{n_{i} }}{2} - 1}} \mathop \sum \limits_{r = 0}^{\infty } R_{r} \frac{1}{2\pi i}\mathop \int \limits_{{\frac{N}{2} - i\infty }}^{{\frac{N}{2} + i\infty }} x^{ - t} \frac{\varGamma \left( t \right)}{{\varGamma \left( {t + r + v} \right)}} dt. \\ \end{aligned} $$

Finally, the inverse Mellin transform of \( \varGamma \left( t \right)/\varGamma \left( {t + r + v} \right) \) is \( \left( {1 - x} \right)^{v + r - 1} /\varGamma \left( {v + r} \right) \) for \( 0 < x < 1 \); see Nagarsenker (1976). This completes the proof.
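The recursions for \( \beta_{j} \), \( C_{r,j} \) and \( R_{r} \) transcribe directly into code. The sketch below uses SymPy's `bernoulli` for the Bernoulli polynomials; the function name `factorial_series_coeffs` is ours, and \( f_{i} = n_{i}/N \) is taken from the paper's notation (an assumption here, as it is defined in the main text).

```python
import numpy as np
from sympy import bernoulli  # bernoulli(n, x) evaluates the Bernoulli polynomial B_n(x)

def factorial_series_coeffs(n, n_terms):
    """Coefficients beta_j and R_r of expansions (A.1) and (A.2).

    A sketch transcribing the recursions in the proof of Theorem 1;
    f_i = n_i / N and v = (k - 1)/2 follow the paper's notation.
    """
    n = np.asarray(n, dtype=float)
    k = len(n)
    f = n / n.sum()
    v = (k - 1) / 2.0

    # alpha_s, then beta_j = (1/j) * sum_{s=1}^{j} s * alpha_s * beta_{j-s}
    alpha = {}
    for s in range(1, n_terms + 1):
        alpha[s] = ((-1) ** s / (s * (s + 1.0))
                    * (float(bernoulli(s + 1, -k))
                       - float(bernoulli(s + 1, -1)) * np.sum(f ** (-s))))
    beta = [1.0]
    for j in range(1, n_terms + 1):
        beta.append(sum(s * alpha[s] * beta[j - s] for s in range(1, j + 1)) / j)

    # C_{r,j} from A_{r,j}, with C_{r,0} = 1
    def C(r, jmax):
        A = {q: ((-1) ** (q - 1) / (q * (q + 1.0))
                 * (float(bernoulli(q + 1, 0)) - float(bernoulli(q + 1, v + r))))
             for q in range(1, jmax + 1)}
        c = [1.0]
        for j in range(1, jmax + 1):
            c.append(sum(q * A[q] * c[j - q] for q in range(1, j + 1)) / j)
        return c

    # R_i solved from sum_{j=0}^{i} R_{i-j} C_{i-j,j} = beta_i, with R_0 = 1
    R = [1.0]
    for i in range(1, n_terms + 1):
        R.append(beta[i] - sum(R[i - j] * C(i - j, i)[j] for j in range(1, i + 1)))
    return beta, R
```

In practice only a modest number of terms of (A.2) is needed, since the factorial series converges uniformly in a half-plane.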

Proof of Theorem 2

Note that the support of \( L \) is \( \left( {0, 1} \right) \) since \( LR \) is the LRT statistic. Therefore, the Jacobi polynomial expansion of \( F_{L} \left( x \right) \) can be expressed as

$$ F_{L} \left( x \right) = \mathop \sum \limits_{n = 0}^{\infty } c_{n} \frac{{\varGamma \left( {n + q} \right)\varGamma \left( {p + q} \right)}}{{n!\varGamma \left( q \right)\varGamma \left( {n + p + q - 1} \right)}}\mathop \sum \limits_{s = 0}^{n} \left( {\begin{array}{*{20}c} n \\ s \\ \end{array} } \right)\left( { - 1} \right)^{s} \frac{{\varGamma \left( {n + p + q + s - 1} \right)}}{{\varGamma \left( {p + q + s} \right)}}F_{{Beta\left( {p,q + s} \right)}} \left( x \right) $$

where \( \varGamma \left( . \right) \) is the gamma function and \( F_{{{\text{Beta}}\left( {\nu_{1} ,\nu_{2} } \right)}} \left( . \right) \) denotes the distribution function of a Beta variable with parameters \( \nu_{1} \) and \( \nu_{2} \) and

$$ c_{n} = \frac{{\varGamma \left( p \right)\varGamma \left( q \right)\left( {2n + p + q - 1} \right)}}{{\varGamma \left( {p + q} \right)\varGamma \left( {p + n} \right)}}\mathop \sum \limits_{s = 0}^{n} \left( {\begin{array}{*{20}c} n \\ s \\ \end{array} } \right)\left( { - 1} \right)^{s} \frac{{\varGamma \left( {n + p + q + s - 1} \right)}}{{\varGamma \left( {q + s} \right)}}E_{{H_{0} }} \left( {1 - L} \right)^{s} , $$

see Derksen and Sullivan (1990), Kharrati-Kopaei and Malekzadeh (2019), Luke (1969), Provost (2005), Reinking (2002). For the convergence of the JPE, see Alexits (1961). The parameters \( p \) and \( q \) are usually determined by matching the moments of a Beta\( \left( {p,q} \right) \) random variable with those of the statistic \( L \); see Boik (1993), Bolkhovskaya et al. (2002). By matching the first two moments of a Beta\( \left( {p,q} \right) \) random variable and \( L \), we have

$$ p = \frac{{E_{{H_{0} }} \left( L \right)\left( {E_{{H_{0} }} \left( L \right) - E_{{H_{0} }} \left( {L^{2} } \right)} \right)}}{{E_{{H_{0} }} \left( {L^{2} } \right) - E_{{H_{0} }}^{2} \left( L \right)}} \,{\text{and}}\, q = \frac{{\left( {E_{{H_{0} }} \left( L \right) - E_{{H_{0} }} \left( {L^{2} } \right)} \right)\left( {1 - E_{{H_{0} }} \left( L \right)} \right)}}{{E_{{H_{0} }} \left( {L^{2} } \right) - E_{{H_{0} }}^{2} \left( L \right)}}. $$

In this case, it can be easily verified that \( c_{0} = 1 \) and \( c_{1} = c_{2} = 0 \). This completes the proof.

Proof of Lemma 2

Note that \( Y = - 2\log \left( {LR} \right) = \mathop \sum \nolimits_{j = 1}^{k} n_{j} \log \left( {f_{j} } \right) - \mathop \sum \nolimits_{j = 1}^{k} n_{j} { \log }\left( {V_{j} /V} \right) \) where \( \left( {V_{1} /V, \ldots ,V_{k} /V} \right) \) has a Dirichlet distribution with parameter \( \left( {n_{1}^{*} , \ldots ,n_{k}^{*} } \right) \) under \( H_{0} \). Therefore,

$$ \begin{aligned} {\text{CF}}_{Y} \left( t \right) & = E\left( {\exp \left\{ {itY} \right\}} \right) = \mathop \prod \limits_{j = 1}^{k} f_{j}^{{itn_{j} }} E\left( {\exp \left\{ { - it\mathop \sum \limits_{j = 1}^{k} n_{j} { \log }\left( {V_{j} /V} \right)} \right\}} \right) \\ & = \mathop \prod \limits_{j = 1}^{k} f_{j}^{{itn_{j} }} \mathop \int \limits_{0}^{1} \ldots \mathop \int \limits_{0}^{1} \frac{{\varGamma \left( {N^{*} } \right)}}{{\mathop \prod \nolimits_{j = 1}^{k} \varGamma \left( {n_{j}^{*} } \right)}}\mathop \prod \limits_{j = 1}^{k} x_{j}^{{ - itn_{j} }} \mathop \prod \limits_{j = 1}^{k} x_{j}^{{n_{j}^{*} - 1}} dx_{1} \ldots dx_{k} \\ & = \frac{{\varGamma \left( {N^{*} } \right)}}{{\mathop \prod \nolimits_{j = 1}^{k} \varGamma \left( {n_{j}^{*} } \right)}}\mathop \prod \limits_{j = 1}^{k} f_{j}^{{itn_{j} }} \mathop \int \limits_{0}^{1} \ldots \mathop \int \limits_{0}^{1} \mathop \prod \limits_{j = 1}^{k} x_{j}^{{n_{j}^{*} - itn_{j} - 1}} dx_{1} \ldots dx_{k} \\ & = \frac{{\varGamma \left( {N^{*} } \right)}}{{\mathop \prod \nolimits_{j = 1}^{k} \varGamma \left( {n_{j}^{*} } \right)}}\frac{{\mathop \prod \nolimits_{j = 1}^{k} \varGamma \left( {n_{j}^{*} - itn_{j} } \right)}}{{\varGamma \left( {N^{*} - itN} \right)}}\mathop \prod \limits_{j = 1}^{k} f_{j}^{{itn_{j} }} . \\ \end{aligned} $$

This completes the proof.
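This closed-form characteristic function is what the numerical-inversion method of the main text starts from, and it is cheap to evaluate. A sketch (SciPy's `loggamma` accepts complex arguments; \( f_{j} = n_{j}/N \), \( n_{j}^{*} = (n_{j} - 1)/2 \) and \( N^{*} = \sum n_{j}^{*} \) follow the paper's notation and are assumptions here, as is the function name `cf_Y`):

```python
import numpy as np
from scipy.special import loggamma  # handles complex arguments

def cf_Y(t, n):
    """Characteristic function of Y = -2 log(LR) under H0 (Lemma 2)."""
    n = np.asarray(n, dtype=float)
    N = n.sum()
    f = n / N                      # assumed definition of f_j
    nstar = (n - 1.0) / 2.0        # assumed definition of n_j*
    Nstar = nstar.sum()
    it = 1j * t
    # log of Gamma(N*)/prod Gamma(n_j*) * prod Gamma(n_j* - it n_j)/Gamma(N* - it N) * prod f_j^{it n_j}
    log_cf = (loggamma(Nstar) - np.sum(loggamma(nstar))
              + np.sum(loggamma(nstar - it * n)) - loggamma(Nstar - it * N)
              + np.sum(it * n * np.log(f)))
    return np.exp(log_cf)
```

Evaluating this on a grid of \( t \) values and applying a numerical CF-inversion formula then yields the near-exact distribution of \( Y \) discussed in the main text.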


Cite this article

Kharrati-Kopaei, M. On the exact distribution of the likelihood ratio test statistic for testing the homogeneity of the scale parameters of several inverse Gaussian distributions. Comput Stat 36, 1123–1138 (2021). https://doi.org/10.1007/s00180-020-01053-4
