
Inference for the bivariate Birnbaum–Saunders lifetime regression model and associated inference


Abstract

In this paper, we discuss a regression model based on the bivariate Birnbaum–Saunders distribution. We derive the maximum likelihood estimates of the model parameters and then develop the associated inference. Next, we briefly describe likelihood-ratio tests for some hypotheses of interest as well as some interval estimation methods. Monte Carlo simulations are then carried out to examine the performance of the estimators as well as the interval estimation methods. Finally, a numerical data analysis is performed to illustrate all the inferential methods developed here.



Acknowledgments

Our sincere thanks go to two anonymous reviewers and the editor, Professor Norbert Henze, for their useful comments and suggestions on an earlier version of this manuscript which led to this improved version.

Corresponding author

Correspondence to N. Balakrishnan.

Appendix: Derivation of the Fisher information matrix


Let us define

$$\begin{aligned} z_{ji}&= \frac{1}{\alpha _j}\left( \sqrt{\frac{t_{ji}}{\theta _{ji}}}-\sqrt{\frac{\theta _{ji}}{t_{ji}}}\right) ,\\ z'_{jki}&= \frac{\partial z_{ji}}{\partial \beta _{jk}}= -\frac{1}{2\alpha _j}\left( \sqrt{\frac{t_{ji}}{\theta _{ji}}}+\sqrt{\frac{\theta _{ji}}{t_{ji}}}\right) x^{(j)}_{ki},\\ z''_{jkli}&= \frac{\partial ^2 z_{ji}}{\partial \beta _{jk}\partial \beta _{jl}}= \frac{1}{4\alpha _j}\left( \sqrt{\frac{t_{ji}}{\theta _{ji}}}-\sqrt{\frac{\theta _{ji}}{t_{ji}}}\right) x^{(j)}_{ki}x^{(j)}_{li},\\ \left( z^2_{jki}\right) '&= \frac{\partial (z^2_{ji})}{\partial \beta _{jk}}= -\frac{1}{\alpha ^2_j}\left( \frac{t_{ji}}{\theta _{ji}}-\frac{\theta _{ji}}{t_{ji}}\right) x^{(j)}_{ki},\\ \left( z^2_{jkli}\right) ''&= \frac{\partial ^2 (z^2_{ji})}{\partial \beta _{jk}\partial \beta _{jl}}= \frac{1}{\alpha ^2_j}\left( \frac{t_{ji}}{\theta _{ji}}+\frac{\theta _{ji}}{t_{ji}}\right) x^{(j)}_{ki}x^{(j)}_{li}, \end{aligned}$$

and \(x^{(1)}_{0i}=x^{(2)}_{0i}=1\) for \(i=1,\ldots ,n\) and \(j=1,2\), with \(k,l=0,\ldots ,p\) when \(j=1\) and \(k,l=0,\ldots ,q\) when \(j=2\).
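The quantities above can be checked numerically. The following sketch (not from the paper's code) implements \(z_{ji}\) and its analytic derivative \(z'_{jki}\), assuming a log-linear link \(\theta _{ji}=\exp (\mathbf{x}^{(j)\prime }_i\varvec{\beta }_j)\), which is consistent with the \(x^{(j)}_{ki}\) factors appearing in the displayed derivatives, and verifies the derivative by central finite differences:

```python
import numpy as np

def z(t, x, beta, alpha):
    """z_ji for one observation; x and beta are vectors (intercept included)."""
    theta = np.exp(x @ beta)  # assumed log-linear link theta = exp(x' beta)
    return (np.sqrt(t / theta) - np.sqrt(theta / t)) / alpha

def z_prime(t, x, beta, alpha, k):
    """Analytic dz/dbeta_k from the display above."""
    theta = np.exp(x @ beta)
    return -(np.sqrt(t / theta) + np.sqrt(theta / t)) * x[k] / (2.0 * alpha)

# Finite-difference check of z' at an arbitrary point
t, alpha = 1.7, 0.5
x = np.array([1.0, 0.3])          # x_{0i} = 1 is the intercept column
beta = np.array([0.2, -0.4])
h = 1e-6
for k in range(2):
    e = np.zeros(2); e[k] = h
    fd = (z(t, x, beta + e, alpha) - z(t, x, beta - e, alpha)) / (2 * h)
    assert abs(fd - z_prime(t, x, beta, alpha, k)) < 1e-6
```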

The first-order derivatives of the log-likelihood function with respect to the parameters, required for the Newton–Raphson method, are given by

$$\begin{aligned} \frac{\partial \ln L}{\partial \alpha _j}&= -\frac{n}{\alpha _j}+\frac{1}{(1-\rho ^2)\alpha _j} \sum _{i=1}^n z_{ji}^2 -\frac{\rho }{(1-\rho ^2)\alpha _j}\sum _{i=1}^n z_{ji}z_{j_1i}, \end{aligned}$$
(10.1)
$$\begin{aligned} \frac{\partial \ln L}{\partial \rho }&= \frac{n\rho }{1-\rho ^2} -\frac{\rho }{(1-\rho ^2)^2} \sum _{i=1}^n(z_{1i}^2+z_{2i}^2) +\frac{1+\rho ^2}{(1-\rho ^2)^2} \sum _{i=1}^n z_{1i}z_{2i}, \end{aligned}$$
(10.2)
$$\begin{aligned} \frac{\partial \ln L}{\partial \beta _{jk}}&= -\frac{1}{2}\sum _{i=1}^n\left( \frac{t_{ji}-\theta _{ji}}{t_{ji} +\theta _{ji}}\right) x_{ki}^{(j)} -\frac{1}{2(1-\rho ^2)} \sum _{i=1}^n(z^2_{jki})' +\frac{\rho }{1-\rho ^2} \sum _{i=1}^n z_{j_1i} z'_{jki},\nonumber \\ \end{aligned}$$
(10.3)

for \(j\ne j_1\in \{1,2\}\). Next, the negatives of the second-order derivatives of the log-likelihood function with respect to the parameters are obtained from (10.1)–(10.3) as follows:

$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \alpha _j^2}&= -\frac{n}{\alpha _j^2}+\frac{3}{(1-\rho ^2)\alpha _j^2} \sum _{i=1}^n z^2_{ji} -\frac{2\rho }{(1-\rho ^2)\alpha ^2_j}\sum _{i=1}^n z_{ji}z_{j_1i}, \end{aligned}$$
(10.4)
$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \rho ^2}&= -\frac{n(1+\rho ^2)}{(1-\rho ^2)^2} +\frac{1+3\rho ^2}{(1-\rho ^2)^3} \sum _{i=1}^n (z_{1i}^2+z_{2i}^2) -\frac{2\rho (3+\rho ^2)}{(1-\rho ^2)^3} \sum _{i=1}^n z_{1i}z_{2i},\nonumber \\\end{aligned}$$
(10.5)
$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \beta ^2_{jk}}&= -\sum _{i=1}^n\frac{\theta _{ji}t_{ji}}{(t_{ji}+\theta _{ji})^2}[x^{(j)}_{ki}]^2 +\frac{1}{2(1-\rho ^2)} \sum _{i=1}^n (z_{jkki}^2)''\nonumber \\&-\frac{\rho }{1-\rho ^2} \sum _{i=1}^n (z_{jkki})''z_{j_1i},\end{aligned}$$
(10.6)
$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \alpha _1\partial \alpha _2}&= -\frac{\rho }{(1-\rho ^2)\alpha _1\alpha _2}\sum _{i=1}^n z_{1i}z_{2i},\end{aligned}$$
(10.7)
$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \alpha _j\partial \rho }&= -\frac{2\rho }{(1-\rho ^2)^2\alpha _j} \sum _{i=1}^n z_{ji}^2 +\frac{1+\rho ^2}{(1-\rho ^2)^2\alpha _j}\sum _{i=1}^n z_{1i}z_{2i},\end{aligned}$$
(10.8)
$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \alpha _j\partial \beta _{jk}}&= \frac{\rho }{(1-\rho ^2)\alpha _j}\sum _{i=1}^n z'_{jki}z_{j_1i} -\frac{1}{(1-\rho ^2)\alpha _j} \sum _{i=1}^n[z_{jki}^2]', \end{aligned}$$
(10.9)
$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \alpha _j\partial \beta _{j_1k}}&= \frac{\rho }{(1-\rho ^2)\alpha _j}\sum _{i=1}^n z_{ji}z_{j_1ki}', \end{aligned}$$
(10.10)
$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \rho \partial \beta _{jk}}&= \frac{\rho }{(1-\rho ^2)^2} \sum _{i=1}^n [z^2_{jki}]' - \frac{1+\rho ^2}{(1-\rho ^2)^2} \sum _{i=1}^n z_{jki}'z_{j_1i}, \end{aligned}$$
(10.11)
$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \beta _{jk}\partial \beta _{jl}}&= -\sum _{i=1}^n\frac{\theta _{ji}t_{ji}}{(t_{ji}+\theta _{ji})^2}x^{(j)}_{ki}x^{(j)}_{li} +\frac{1}{2(1-\rho ^2)} \sum _{i=1}^n[z_{jkli}^2]'' \nonumber \\&-\frac{\rho }{1-\rho ^2}\sum _{i=1}^nz_{jkli}''z_{j_1i},\quad k\ne l=0,\ldots ,p(q), \end{aligned}$$
(10.12)
$$\begin{aligned} -\frac{\partial ^2\ln L}{\partial \beta _{1k_1}\partial \beta _{2k_2}}&= -\frac{\rho }{1-\rho ^2} \sum _{i=1}^n z_{1k_1i}'z_{2k_2i}', \qquad k_1=0,\ldots ,p,~ k_2=0,\ldots ,q.\nonumber \\ \end{aligned}$$
(10.13)
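As a sanity check on these expressions, one can compare the analytic score against a finite-difference derivative of the log-likelihood. The sketch below (an illustration, not the paper's code) writes the bivariate log-likelihood as the bivariate normal log-density of \((z_{1i},z_{2i})\) plus the Jacobian terms, with \(\theta _{ji}\) fixed at 1 for brevity, and verifies (10.1) numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def loglik(t1, t2, alpha1, alpha2, rho):
    # theta fixed at 1 (no covariates): z_j = (sqrt(t) - 1/sqrt(t)) / alpha_j
    z1 = (np.sqrt(t1) - 1 / np.sqrt(t1)) / alpha1
    z2 = (np.sqrt(t2) - 1 / np.sqrt(t2)) / alpha2
    n = t1.size
    quad = (z1**2 - 2 * rho * z1 * z2 + z2**2) / (1 - rho**2)
    ln_phi2 = -n * np.log(2 * np.pi) - 0.5 * n * np.log(1 - rho**2) - 0.5 * quad.sum()
    # Jacobian: dz_j/dt_j = (sqrt(t) + 1/sqrt(t)) / (2 * alpha_j * t)
    ln_jac = (np.log((np.sqrt(t1) + 1 / np.sqrt(t1)) / (2 * alpha1 * t1)).sum()
              + np.log((np.sqrt(t2) + 1 / np.sqrt(t2)) / (2 * alpha2 * t2)).sum())
    return ln_phi2 + ln_jac

def score_alpha1(t1, t2, alpha1, alpha2, rho):
    # Formula (10.1) with j = 1
    z1 = (np.sqrt(t1) - 1 / np.sqrt(t1)) / alpha1
    z2 = (np.sqrt(t2) - 1 / np.sqrt(t2)) / alpha2
    n = t1.size
    return (-n / alpha1 + (z1**2).sum() / ((1 - rho**2) * alpha1)
            - rho * (z1 * z2).sum() / ((1 - rho**2) * alpha1))

t1, t2 = rng.uniform(0.5, 2.0, 20), rng.uniform(0.5, 2.0, 20)
a1, a2, rho, h = 0.7, 0.9, 0.4, 1e-6
fd = (loglik(t1, t2, a1 + h, a2, rho) - loglik(t1, t2, a1 - h, a2, rho)) / (2 * h)
assert abs(fd - score_alpha1(t1, t2, a1, a2, rho)) < 1e-5
```

The same finite-difference comparison can be repeated for the second-order derivatives (10.4)–(10.13).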

By using Property 3.2, we then obtain the expected values of the second-order derivatives of the log-likelihood with respect to the parameters, in (10.4)–(10.13), as follows:

$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \alpha _j^2}\right]&= \frac{n(2-\rho ^2)}{\alpha _j^2(1-\rho ^2)},\end{aligned}$$
(10.14)
$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \rho ^2}\right]&= \frac{n(1+\rho ^2)}{(1-\rho ^2)^2}, \end{aligned}$$
(10.15)
$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \beta ^2_{jk}}\right]&= \frac{1}{1-\rho ^2}\sum _{i=1}^n\left( \frac{1}{\alpha _j^2}+\frac{1}{2}-\frac{\rho ^2}{4}\right) [x^{(j)}_{ki}]^2 -\sum _{i=1}^n[x^{(j)}_{ki}]^2E\left[ \frac{V_j}{(1+V_j)^2}\right] , \end{aligned}$$
(10.16)
$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \alpha _1\partial \alpha _2}\right]&= -\frac{n\rho ^2}{(1-\rho ^2)\alpha _1\alpha _2}, \end{aligned}$$
(10.17)
$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \alpha _j\partial \rho }\right]&= -\frac{n\rho }{(1-\rho ^2)\alpha _j}, \end{aligned}$$
(10.18)
$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \alpha _j\partial \beta _{jk}}\right]&= 0,\end{aligned}$$
(10.19)
$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \alpha _j\partial \beta _{j_1k}}\right]&= 0,\end{aligned}$$
(10.20)
$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \rho \partial \beta _{jk}}\right]&= 0,\end{aligned}$$
(10.21)
$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \beta _{jk}\partial \beta _{jl}}\right]&= \frac{2+\alpha _j^2}{2(1-\rho ^2)\alpha _j^2}\sum _{i=1}^nx^{(j)}_{ki}x^{(j)}_{li} -\sum _{i=1}^nx^{(j)}_{ki}x^{(j)}_{li}E\left[ \frac{V_j}{(1+V_j)^2}\right] \nonumber \\&-\frac{\rho ^2}{4(1-\rho ^2)}\sum \limits _{i=1}^nx_{ki}^{(j)}x_{li}^{(j)}, \end{aligned}$$
(10.22)
$$\begin{aligned} -E\left[ \frac{\partial ^2\ln L}{\partial \beta _{1k_1}\partial \beta _{2k_2}}\right]&= -\frac{\rho I_{2}}{(1-\rho ^2)\alpha _1\alpha _2} \sum _{i=1}^nx^{(1)}_{k_1i}x^{(2)}_{k_2i}, \end{aligned}$$
(10.23)

where \(V_j\sim BS(\alpha _j,1)\).

For computing some of the above expressions, we need the value of \(E \big [\frac{V}{(1+V)^2}\big ]\), which cannot be obtained analytically. So, we may approximate it by the Monte Carlo method, compute it by numerical integration (for example, with the integrate function in R), or utilize the binomial series expansion of \((1+V)^{-2}\) to obtain

$$\begin{aligned} E\left[ \frac{V}{(1+V)^2}\right]&\approx E\left[ V(1-2V+3V^2)\right] \nonumber \\&= E\left[ V\right] -2E\left[ V^2\right] +3E\left[ V^3\right] \nonumber \\&= \frac{45}{2}\alpha ^6+24\alpha ^4+10\alpha ^2. \end{aligned}$$
(10.24)
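The Monte Carlo alternative mentioned above is straightforward. The sketch below (an illustration, not the paper's code) simulates \(V\sim BS(\alpha ,1)\) through the standard representation \(V=\big (\alpha W/2+\sqrt{(\alpha W/2)^2+1}\big )^2\) with \(W\sim N(0,1)\), and averages \(V/(1+V)^2\):

```python
import numpy as np

def e_v_over_1pv2(alpha, n=200_000, seed=1):
    """Monte Carlo estimate of E[V / (1+V)^2] for V ~ BS(alpha, 1)."""
    w = np.random.default_rng(seed).standard_normal(n)
    a = alpha * w / 2.0
    v = (a + np.sqrt(a**2 + 1.0))**2   # standard BS(alpha, 1) representation
    return np.mean(v / (1.0 + v)**2)

# For small alpha, V concentrates near 1 and g(V) = V/(1+V)^2 has g(1) = 1/4
# with g'(1) = 0, so the expectation is close to 1/4.
print(e_v_over_1pv2(0.1))   # close to 0.25 for alpha = 0.1
```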

Moreover, if we use \(\mathbf{J_1}\) to denote the information matrix for \(\alpha _1\), \(\alpha _2\) and \(\rho \), and use \(\mathbf{J_2}\) to denote the information matrix for \(\varvec{\beta }_1\) and \(\varvec{\beta }_2\), then we observe from the expressions in (10.14)–(10.23) that

$$\begin{aligned} Var(\hat{\varvec{\eta }})= \left[ \begin{array}{cc} \mathbf{J}_1 &{}\quad \mathbf{0} \\ \mathbf{0} &{}\quad \mathbf{J}_2 \\ \end{array} \right] ^{-1} = \left[ \begin{array}{cc} \mathbf{J}_1^{-1} &{}\quad \mathbf{0} \\ \mathbf{0} &{}\quad \mathbf{J}_2^{-1} \\ \end{array} \right] , \end{aligned}$$
(10.25)

where the last equality follows from the block-diagonal structure, provided the matrix \(Var(\hat{\varvec{\eta }})\) is positive definite.
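The blockwise inversion in (10.25) can be illustrated with a small numerical example (the matrices below are arbitrary positive definite stand-ins for \(\mathbf{J}_1\) and \(\mathbf{J}_2\)):

```python
import numpy as np

# Arbitrary positive definite stand-ins for the two information blocks
J1 = np.array([[2.0, 0.3], [0.3, 1.0]])
J2 = np.array([[4.0, 1.0], [1.0, 3.0]])

# Inverting the full block-diagonal matrix ...
full = np.block([[J1, np.zeros((2, 2))],
                 [np.zeros((2, 2)), J2]])
inv_full = np.linalg.inv(full)

# ... agrees with inverting each block separately, as in (10.25)
blockwise = np.block([[np.linalg.inv(J1), np.zeros((2, 2))],
                      [np.zeros((2, 2)), np.linalg.inv(J2)]])
assert np.allclose(inv_full, blockwise)
```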


Cite this article

Balakrishnan, N., Zhu, X. Inference for the bivariate Birnbaum–Saunders lifetime regression model and associated inference. Metrika 78, 853–872 (2015). https://doi.org/10.1007/s00184-015-0530-3
