Inference for a Poisson-Inverse Gaussian Model with an Application to Multiple Sclerosis Clinical Trials

Conference paper in Ordered Data Analysis, Modeling and Health Research Methods, Springer Proceedings in Mathematics & Statistics (PROMS, volume 149).

Abstract

New brain lesion counts obtained from magnetic resonance imaging (MRI) are widely used to monitor disease progression in relapsing-remitting multiple sclerosis (RRMS) clinical trials. These data tend to be overdispersed with respect to a Poisson distribution. The Poisson-Inverse Gaussian (P-IG) distribution has been shown to fit MRI lesion count data better than the negative binomial in RRMS patients selected for lesion activity at the baseline scan. In this paper we use the P-IG distribution to model MRI lesion count data from RRMS parallel group trials. We propose asymptotic and simulation-based exact parametric tests for the treatment effect, namely the likelihood ratio (LR), score, and Wald tests. The exact tests maintain precise Type I error levels, whereas the asymptotic tests fail to do so for small samples. The LR test remains empirically unbiased and yields a 30–50% reduction in required sample sizes compared to the Wilcoxon rank sum (WRS) test. The Wald test has the highest power to detect a reduction in the number of lesion counts and provides a 40–57% reduction in sample sizes compared to the WRS test.
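
As background, the short sketch below (not from the paper; the parameter values and the mean-shape inverse Gaussian parametrization are illustrative assumptions) simulates P-IG counts as Poisson draws whose rates come from an inverse Gaussian mixing distribution, and shows the resulting overdispersion relative to a plain Poisson model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration: a P-IG count arises by drawing a subject-specific
# rate from an inverse Gaussian (Wald) distribution and then a Poisson count
# given that rate. Parameter values below are arbitrary.
n, mu, lam = 5000, 4.0, 2.0            # sample size, IG mean, IG shape
rates = rng.wald(mu, lam, size=n)      # inverse Gaussian mixing distribution
counts = rng.poisson(rates)            # P-IG distributed lesion counts

# Overdispersion check: for this mixture, Var(Y) = mu + mu^3/lam > mu = E(Y).
print("sample mean:", counts.mean())
print("sample variance:", counts.var(ddof=1))
print("mixture variance mu + mu^3/lam:", mu + mu**3 / lam)
```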

References

  1. Aban, I., G.R. Cutter, and N. Mavinga. 2009. Inferences and power analysis concerning two negative binomial distributions with an application to MRI lesion counts data. Computational Statistics and Data Analysis 53: 820–833.

  2. Abramowitz, M., and I.A. Stegun (eds.). 1970. Handbook of mathematical functions. New York: Dover Publications Inc.

  3. Freedman, D.A. 2007. How can the score test be inconsistent? The American Statistician 61(4): 291–295.

  4. Holla, M.S. 1971. Canonical expansion of the compounded correlated bivariate Poisson distribution. The American Statistician 23: 32–33.

  5. Morgan, B.J.T., K.J. Palmer, and M.S. Ridout. 2007. Score test oddities: Negative score test statistic. The American Statistician 61(4): 285–288.

  6. Nauta, J.J.P., A.J. Thompson, F. Barkhof, and D.H. Miller. 1994. Magnetic resonance imaging in monitoring the treatment of multiple sclerosis patients: Statistical power of parallel-groups and crossover designs. Journal of Neurological Sciences 122: 6–14.

  7. Ord, J.K., and K.A. Whitmore. 1986. The Poisson-inverse Gaussian distribution as a model for species abundance. Communications in Statistics - Theory and Methods 15(3): 853–871.

  8. R Development Core Team. 2014. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. http://www.R-project.org/

  9. Rao, C.R. 1948. Large sample tests of statistical hypotheses concerning several parameters with application to problems of estimation. Proceedings of the Cambridge Philosophical Society 44: 50–57.

  10. Rao, C.R. 2005. Score test: Historical review and recent developments. In Advances in ranking and selection, multiple comparisons, and reliability, 3–20. Boston: Birkhäuser.

  11. Rettiganti, M.R., and H.N. Nagaraja. 2012. Power analysis for negative binomial models with application to multiple sclerosis clinical trials. Journal of Biopharmaceutical Statistics 22(2): 237–259.

  12. Sankaran, M. 1968. Mixtures by the inverse Gaussian distribution. Sankhya B 30: 455–458.

  13. Sormani, M.P., P. Bruzzi, D.H. Miller, C. Gasperini, F. Barkhof, and M. Filippi. 1999. Modelling MRI enhancing lesion counts in multiple sclerosis using a negative binomial model: Implications for clinical trials. Journal of the Neurological Sciences 163: 74–80.

  14. Sormani, M.P., P. Bruzzi, M. Rovaris, F. Barkhof, G. Comi, D.H. Miller, G.R. Cutter, and M. Filippi. 2001a. Modelling new enhancing MRI lesion counts in multiple sclerosis. Multiple Sclerosis 7: 298–304.

  15. Sormani, M.P., D.H. Miller, G. Comi, F. Barkhof, M. Rovaris, P. Bruzzi, and M. Filippi. 2001b. Clinical trials of multiple sclerosis monitored with enhanced MRI: New sample size calculations based on large data sets. Journal of Neurology, Neurosurgery and Psychiatry 70: 494–499.

  16. Stein, G.Z., W. Zucchini, and J.M. Juritz. 1987. Parameter estimation for the Sichel distribution and its multivariate distribution. Journal of the American Statistical Association 82(399): 938–944.

  17. Wald, A. 1943. Tests of statistical hypotheses concerning several parameters when the number of observations is large. Transactions of the American Mathematical Society 54: 426–482.

  18. Willmot, G. 1987. The Poisson-inverse Gaussian distribution as an alternative to the negative binomial. Scandinavian Actuarial Journal 87: 113–127.

Acknowledgments

The authors would like to sincerely thank Dr. Marie Davidian and the anonymous referee whose comments have significantly strengthened this manuscript.

Author information

Corresponding author

Correspondence to Mallikarjuna Rettiganti.

Appendices

Appendix 1

The following lemma presents some useful results needed to obtain the score vector and the matrix of the second order derivatives.

Lemma 1

$$\begin{aligned} \frac{\partial \tau _1}{\partial \gamma }&= \ 0; \quad \ \quad \frac{\partial \tau _1}{\partial \mu } = \frac{\tau _1^3}{\mu ^3}; \ \ \quad \frac{\partial \tau _1}{\partial \lambda } = \frac{\tau _1^3}{\lambda ^2}; \\ \frac{\partial \tau _2}{\partial \gamma }&= \frac{\tau _2^3}{\gamma ^3\mu ^2}; \quad \frac{\partial \tau _2}{\partial \mu } = \frac{\tau _2^3}{\gamma ^2\mu ^3}; \quad \frac{\partial \tau _2}{\partial \lambda } = \frac{\tau _2^3}{\lambda ^2}. \end{aligned}$$

Using the above results we obtain

$$\begin{aligned} \frac{\partial \omega _1}{\partial \gamma }&= \ 0; \quad \ \quad \quad \frac{\partial \omega _1}{\partial \mu } = - \frac{\lambda \tau _1}{\mu ^3}; \quad \frac{\partial \omega _1}{\partial \lambda } = \frac{\lambda -\tau _1^2}{\lambda \tau _1}; \\ \frac{\partial \omega _2}{\partial \gamma }&= -\frac{\lambda \tau _2}{\gamma ^3\mu ^2}; \quad \frac{\partial \omega _2}{\partial \mu } = - \frac{\lambda \tau _2}{\gamma ^2\mu ^3}; \quad \frac{\partial \omega _2}{\partial \lambda } = \frac{\lambda -\tau _2^2}{\lambda \tau _2}. \end{aligned}$$

The following lemma provides results for modified Bessel functions that simplify the derivation of the score vector and the second order derivatives.

Lemma 2

(Modified Bessel function of the third kind; see Sect. 9.6 of [2].)

The following relations hold for the modified Bessel function of the third kind \(K_{\nu }(z)\):

$$\begin{aligned} K_{-\nu }(z)&= K_{\nu }(z) \nonumber \\ K_{-\frac{1}{2}}(z)&= K_{\frac{1}{2}}(z) = \sqrt{\frac{\pi }{2z}}e^{-z}\nonumber \\ K_{\nu +1}(z)&= K_{\nu -1}(z) + \frac{2\nu }{z}K_{\nu }(z) \\ \frac{\partial }{\partial z} K_{\nu }(z)&= K^\prime _{\nu }(z) = - K_{\nu +1}(z)+ \frac{\nu }{z}K_{\nu }(z).\nonumber \end{aligned}$$
(15)

The ratio of modified Bessel functions \(R_{\nu }(z) = \frac{K_{\nu +1}(z)}{K_{\nu }(z)}\) satisfies the following relations:

$$\begin{aligned} R_{-\frac{1}{2}}(z)&= 1; \nonumber \\ R_{\nu }(z)&= \frac{2\nu }{z} + \frac{1}{R_{\nu -1}(z)}, \quad \nu = \frac{1}{2}, \frac{3}{2},\frac{5}{2}, \ldots ; \\ \frac{\partial }{\partial z} R_{\nu }(z)&= R^\prime _{\nu }(z) = R^2_{\nu }(z) - \frac{2(\nu +1/2)}{z}R_{\nu }(z) - 1.\nonumber \end{aligned}$$
(16)
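
For numerical work, the recursion in (16) gives a convenient way to evaluate \(R_{\nu }(z)\) at the half-integer orders that appear in the score and information expressions below. The following minimal Python sketch (an illustration, not the authors' code; the helper name bessel_ratio is ours) implements it and checks the result against the direct ratio of Bessel functions:

```python
from scipy.special import kv  # modified Bessel function K_nu(z)

def bessel_ratio(nu, z):
    """R_nu(z) = K_{nu+1}(z) / K_{nu}(z), computed via the recursion (16).

    Intended for half-integer orders nu = -1/2, 1/2, 3/2, ...,
    starting from R_{-1/2}(z) = 1.
    """
    r = 1.0          # R_{-1/2}(z) = 1
    order = -0.5
    while order < nu - 1e-9:
        order += 1.0
        r = 2.0 * order / z + 1.0 / r   # R_nu(z) = 2*nu/z + 1/R_{nu-1}(z)
    return r

# Quick check against the direct ratio of Bessel functions
z = 2.3
print(bessel_ratio(5.5, z))        # via the recursion
print(kv(6.5, z) / kv(5.5, z))     # direct ratio K_{6.5}(z) / K_{5.5}(z)
```

Working with the ratio directly is also numerically safer than dividing two Bessel-function values, since \(K_{\nu }(z)\) itself underflows for large arguments.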

Appendix 2

1.1 First and Second Order Derivatives

The score vector components for the log-likelihood function given in (9) are

$$\begin{aligned} \frac{\partial \ell (\gamma ,\mu ,\lambda )}{\partial \gamma }&= \frac{\lambda \tau _2}{\gamma ^3\mu ^2}\sum _{j=1}^{n_2} R_{y_{2j}-\frac{1}{2}}(\omega _2) - \frac{n_2\lambda }{\gamma ^2\mu }, \end{aligned}$$
(17)
$$\begin{aligned} \frac{\partial \ell (\gamma ,\mu ,\lambda )}{\partial \mu }&= \frac{\lambda \tau _1}{\mu ^3} \sum _{i=1}^{n_1} R_{y_{1i}-\frac{1}{2}}(\omega _1) + \frac{\lambda \tau _2}{\gamma ^2\mu ^3} \sum _{j=1}^{n_2} R_{y_{2j}-\frac{1}{2}}(\omega _2) - \frac{\lambda (n_1\gamma +n_2)}{\gamma \mu ^2}, \end{aligned}$$
(18)
$$\begin{aligned} \frac{\partial \ell (\gamma ,\mu ,\lambda )}{\partial \lambda }&= \frac{n_1\bar{y}_1+n_2\bar{y}_2}{\lambda } + \frac{n_1\gamma +n_2}{\gamma \mu } - \left( \frac{\lambda -\tau _1^2}{\lambda \tau _1}\right) \sum _{i=1}^{n_1} R_{y_{1i}-\frac{1}{2}}(\omega _1) \nonumber \\&\quad -\, \left( \frac{\lambda -\tau _2^2}{\lambda \tau _2}\right) \sum _{j=1}^{n_2} R_{y_{2j}-\frac{1}{2}}(\omega _2). \end{aligned}$$
(19)
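
As an illustration, the score components (17)–(19) can be evaluated numerically using the bessel_ratio helper sketched after Lemma 2. The definitions used here, \(\tau _g = (2/\lambda + 1/\mu _g^2)^{-1/2}\) and \(\omega _g = \lambda /\tau _g\) with \(\mu _1 = \mu \) and \(\mu _2 = \gamma \mu \), are inferred from the derivative identities in Lemma 1; the authoritative definitions accompany the log-likelihood (9) in the main text, so this is a sketch under that assumption.

```python
import numpy as np

def tau_omega(mu_g, lam):
    # Assumption inferred from Lemma 1: tau_g = (2/lambda + 1/mu_g^2)^(-1/2)
    # and omega_g = lambda / tau_g, with mu_1 = mu and mu_2 = gamma * mu.
    tau = (2.0 / lam + 1.0 / mu_g**2) ** -0.5
    return tau, lam / tau

def score(gamma, mu, lam, y1, y2):
    """Score vector (17)-(19) for control counts y1 (mean mu) and
    treated counts y2 (mean gamma * mu)."""
    y1, y2 = np.asarray(y1), np.asarray(y2)
    n1, n2 = len(y1), len(y2)
    tau1, om1 = tau_omega(mu, lam)
    tau2, om2 = tau_omega(gamma * mu, lam)
    S1 = sum(bessel_ratio(y - 0.5, om1) for y in y1)  # sum of R_{y-1/2}(omega_1)
    S2 = sum(bessel_ratio(y - 0.5, om2) for y in y2)  # sum of R_{y-1/2}(omega_2)
    u_gamma = lam * tau2 / (gamma**3 * mu**2) * S2 - n2 * lam / (gamma**2 * mu)
    u_mu = (lam * tau1 / mu**3 * S1
            + lam * tau2 / (gamma**2 * mu**3) * S2
            - lam * (n1 * gamma + n2) / (gamma * mu**2))
    u_lam = ((n1 * y1.mean() + n2 * y2.mean()) / lam
             + (n1 * gamma + n2) / (gamma * mu)
             - (lam - tau1**2) / (lam * tau1) * S1
             - (lam - tau2**2) / (lam * tau2) * S2)
    return np.array([u_gamma, u_mu, u_lam])
```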

The second order derivatives of the log-likelihood function in (9) are

$$\begin{aligned} \frac{\partial ^2 \ell (\gamma ,\mu ,\lambda )}{\partial \gamma ^2}&= -\frac{\lambda ^2\tau _2^2}{\gamma ^6\mu ^4}\sum _{j=1}^{n_2} R^\prime _{y_{2j}-\frac{1}{2}}(\omega _2) + \frac{\lambda \tau _2}{\gamma ^6\mu ^4}(\tau ^2_2-3\gamma ^2\mu ^2) \sum _{j=1}^{n_2}R_{y_{2j}-\frac{1}{2}}(\omega _2)+\frac{2n_2\lambda }{\gamma ^3\mu } , \nonumber \\ \frac{\partial ^2 \ell (\gamma ,\mu ,\lambda )}{\partial \gamma \partial \mu }&= - \frac{\lambda ^2\tau _2^2}{\gamma ^5\mu ^5} \sum _{j=1}^{n_2} R^\prime _{y_{2j}-\frac{1}{2}}(\omega _2) + \frac{\lambda \tau _2^3}{\gamma ^5\mu ^5}(\tau _2^2-2\gamma ^2\mu ^2)\sum _{j=1}^{n_2}R_{y_{2j}-\frac{1}{2}}(\omega _2) + \frac{n_2\lambda }{\gamma ^2\mu ^2}, \nonumber \\ \frac{\partial ^2 \ell (\gamma ,\mu ,\lambda )}{\partial \gamma \partial \lambda }&= \left( \frac{\lambda -\tau _2^2}{\gamma ^3\mu ^2} \right) \sum _{j=1}^{n_2} R^\prime _{y_{2j}-\frac{1}{2}}(\omega _2) + \frac{\tau _2}{\gamma ^3\mu ^2\lambda } (\tau _2^2+\lambda )\sum _{j=1}^{n_2}R_{y_{2j}-\frac{1}{2}}(\omega _2) - \frac{n_2}{\gamma ^2\mu }, \\ \frac{\partial ^2 \ell (\gamma ,\mu ,\lambda )}{\partial \mu ^2}&= -\frac{\lambda ^2\tau _1^2}{\mu ^6}\sum _{i=1}^{n_1} R^\prime _{y_{1i}-\frac{1}{2}}(\omega _1) + \frac{\lambda \tau _1}{\mu ^6}(\tau _1^2-3\mu ^2)\sum _{i=1}^{n_1} R_{y_{1i}-\frac{1}{2}}(\omega _1) \nonumber \\&\quad -\,\frac{\lambda ^2\tau _2^2}{\gamma ^4\mu ^6}\sum _{j=1}^{n_2} R^\prime _{y_{2j}-\frac{1}{2}}(\omega _2) + \frac{\lambda \tau _2}{\gamma ^4\mu ^6}(\tau _2^2-3\gamma ^2\mu ^2)\sum _{j=1}^{n_2} R_{y_{2j}-\frac{1}{2}}(\omega _2) \nonumber \\&\quad +\,\frac{2\lambda (n_1\gamma +n_2)}{\gamma \mu ^3} ,\nonumber \\ \frac{\partial ^2 \ell (\gamma ,\mu ,\lambda )}{\partial \mu \partial \lambda }&= \left( \frac{\lambda - \tau _1^2}{\mu ^3} \right) \sum _{i=1}^{n_1} R^\prime _{y_{1i}-\frac{1}{2}}(\omega _1) + \frac{\tau _1}{\lambda \mu ^3}(\tau _1^2+\lambda ) \sum _{i=1}^{n_1} R_{y_{1i}-\frac{1}{2}}(\omega _1) \nonumber \\&\quad +\, \left( \frac{\lambda - \tau _2^2}{\gamma ^2\mu ^3} \right) \sum _{j=1}^{n_2} R^\prime _{y_{2j}-\frac{1}{2}}(\omega _2) + \frac{\tau _2}{\lambda \gamma ^2\mu ^3}(\tau _2^2+\lambda ) \sum _{j=1}^{n_2} R_{y_{2j}-\frac{1}{2}}(\omega _2) \nonumber \\&\quad -\, \frac{n_1\gamma +n_2}{\gamma \mu ^2}, \nonumber \\ \frac{\partial ^2 \ell (\gamma ,\mu ,\lambda )}{\partial \lambda ^2}&= -\,\frac{n_1\bar{y}_1+n_2\bar{y}_2}{\lambda ^2} - \left( \frac{\lambda -\tau _1^2}{\lambda \tau _1} \right) ^2 \sum _{i=1}^{n_1} R^\prime _{y_{1i}-\frac{1}{2}}(\omega _1) + \frac{\tau _1^3}{\lambda ^3} \sum _{i=1}^{n_1} R_{y_{1i}-\frac{1}{2}}(\omega _1) \nonumber \\&\quad - \,\left( \frac{\lambda -\tau _2^2}{\lambda \tau _2} \right) ^2 \sum _{j=1}^{n_2} R^\prime _{y_{2j}-\frac{1}{2}}(\omega _2) + \frac{\tau _2^3}{\lambda ^3} \sum _{j=1}^{n_2} R_{y_{2j}-\frac{1}{2}}(\omega _2). \nonumber \end{aligned}$$
(20)

1.2 Fisher Information Matrix

Since \(E(\bar{Y}_1) = \mu \) and \(E(\bar{Y}_2)=\gamma \mu \), and the \(Y_{1i},\ i=1,\ldots ,n_1\), and the \(Y_{2j},\ j=1,\ldots ,n_2\), are each identically distributed within their group, the elements \(\big (I(\varvec{\theta })\big )_{i,j} = -E\left\{ \frac{\partial ^2\ell (\varvec{\theta })}{\partial \varvec{\theta }_i\partial \varvec{\theta }_j}\right\} \) of the FIM \(\mathbf {I}(\varvec{\theta })\) with parameter vector \(\varvec{\theta }= (\gamma ,\mu ,\lambda )\) can be expressed as follows:

$$\begin{aligned} I_{11}(\varvec{\theta })&= \frac{n_2\lambda ^2\tau _2^2}{\gamma ^6\mu ^4} E\left\{ R^\prime _{Y_2-\frac{1}{2}}(\omega _2)\right\} - \frac{n_2\lambda \tau _2}{\gamma ^6\mu ^4}(\tau ^2_{2}-3\gamma ^2\mu ^2) E\left\{ R_{Y_2-\frac{1}{2}}(\omega _2) \right\} -\frac{2n_2\lambda }{\gamma ^3\mu } , \\ I_{12}(\varvec{\theta })&= \frac{ n_2\lambda ^2\tau _2^2}{\gamma ^5\mu ^5} E\left\{ R^\prime _{Y_2-\frac{1}{2}}(\omega _2) \right\} - \frac{n_2\lambda \tau _2^3}{\gamma ^5\mu ^5}(\tau _2^2-2\gamma ^2\mu ^2)E\left\{ R_{Y_2-\frac{1}{2}}(\omega _2)\right\} - \frac{n_2\lambda }{\gamma ^2\mu ^2}, \\ I_{13}(\varvec{\theta })&= -\,n_2 \left( \frac{\lambda -\tau _2^2}{\gamma ^3\mu ^2} \right) E\left\{ R^\prime _{Y_2-\frac{1}{2}}(\omega _2)\right\} - \frac{n_2\tau _2}{\gamma ^3\mu ^2\lambda } (\tau _2^2+\lambda ) E\left\{ R_{Y_2-\frac{1}{2}}(\omega _2)\right\} + \frac{n_2}{\gamma ^2\mu },\\ I_{22}(\varvec{\theta })&= \frac{n_1\lambda ^2\tau _1^2}{\mu ^6} E\left\{ R^\prime _{Y_1-\frac{1}{2}}(\omega _1)\right\} - \frac{n_1\lambda \tau _1}{\mu ^6}(\tau _1^2-3\mu ^2)E\left\{ R_{Y_1-\frac{1}{2}}(\omega _1)\right\} - \frac{2\lambda (n_1\gamma +n_2)}{\gamma \mu ^3} \\&\quad + \frac{n_2\lambda ^2\tau _2^2}{\gamma ^4\mu ^6} E\left\{ R^\prime _{Y_2-\frac{1}{2}}(\omega _2)\right\} - \frac{n_2\lambda \tau _2}{\gamma ^4\mu ^6}(\tau _2^2-3\gamma ^2\mu ^2)E\left\{ R_{Y_2-\frac{1}{2}}(\omega _2)\right\} ,\\ I_{23}(\varvec{\theta })&= -\, n_1 \left( \frac{\lambda - \tau _1^2}{\mu ^3} \right) E\left\{ R^\prime _{Y_1-\frac{1}{2}}(\omega _1)\right\} - \frac{n_1\tau _1}{\lambda \mu ^3}(\tau _1^2+\lambda ) E\left\{ R_{Y_1-\frac{1}{2}}(\omega _1)\right\} + \frac{n_1\gamma +n_2}{\gamma \mu ^2} \\&\quad - n_2 \left( \frac{\lambda - \tau _2^2}{\gamma ^2\mu ^3} \right) E\left\{ R^\prime _{Y_2-\frac{1}{2}}(\omega _2)\right\} - \frac{n_2\tau _2}{\lambda \gamma ^2\mu ^3}(\tau _2^2+\lambda )E\left\{ R_{Y_2-\frac{1}{2}}(\omega _2)\right\} , \\ I_{33}(\varvec{\theta })&= \frac{n_1\mu +n_2\gamma \mu }{\lambda ^2} + n_1 \left( \frac{\lambda -\tau _1^2}{\lambda \tau _1} \right) ^2 E\left\{ R^\prime _{Y_1-\frac{1}{2}}(\omega _1)\right\} - \frac{n_1 \tau _1^3}{\lambda ^3} E\left\{ R_{Y_1-\frac{1}{2}}(\omega _1)\right\} \\&\quad +\, n_2 \left( \frac{\lambda -\tau _2^2}{\lambda \tau _2} \right) ^2 E\left\{ R^\prime _{Y_2-\frac{1}{2}}(\omega _2)\right\} - \frac{n_2 \tau _2^3}{\lambda ^3} E\left\{ R_{Y_2-\frac{1}{2}}(\omega _2) \right\} . \end{aligned}$$

Further, \(I_{ij}=I_{ji}\) for \(i \ne j\). These expressions involve the expectation of \(R(\cdot )\), a ratio of two modified Bessel functions of the third kind, which cannot be computed in closed form. Instead, the observed information evaluated at the MLEs is used.
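
One simple way to obtain the observed information numerically, reusing the score function sketched above, is a central finite difference of the score; this is only an illustrative sketch, and the analytic second derivatives in (20) could be coded instead:

```python
import numpy as np

def observed_information(theta_hat, y1, y2, eps=1e-5):
    """Observed information J(theta) = -Hessian of the log-likelihood,
    approximated by central finite differences of the score (17)-(19)
    at theta_hat = (gamma, mu, lambda)."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    J = np.zeros((3, 3))
    for k in range(3):
        tp, tm = theta_hat.copy(), theta_hat.copy()
        tp[k] += eps
        tm[k] -= eps
        # column k of -d(score)/d(theta_k)
        J[:, k] = -(score(*tp, y1, y2) - score(*tm, y1, y2)) / (2.0 * eps)
    return (J + J.T) / 2.0   # symmetrise to remove finite-difference noise
```

Evaluating this at the MLEs and inverting it gives the variance estimates used in place of the inverse FIM.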

Copyright information

© 2015 Springer International Publishing Switzerland

Cite this paper

Rettiganti, M., Nagaraja, H.N. (2015). Inference for a Poisson-Inverse Gaussian Model with an Application to Multiple Sclerosis Clinical Trials. In: Choudhary, P., Nagaraja, C., Ng, H. (eds) Ordered Data Analysis, Modeling and Health Research Methods. Springer Proceedings in Mathematics & Statistics, vol 149. Springer, Cham. https://doi.org/10.1007/978-3-319-25433-3_12
