Abstract
In this paper, we study mixtures of the multivariate proportional hazard model, where the frailty (an unobservable nonnegative random variable) acts simultaneously on the hazard rates of the lifetimes to model common risk factors. We investigate ageing properties and dependence between components for the mixture model under study. In addition, we carry out stochastic comparisons assuming dependence between the components of the baseline random vector. More specifically, these stochastic comparisons refer to mixture models that share the same frailty random variable but have different baseline random vectors. The results are applied to the inter-epoch times of a non-homogeneous mixed Poisson process.
Acknowledgements
The authors thank the referees and the editor for their valuable comments and careful reading of an earlier version of this manuscript. The work of the first author was supported by the Spanish government under research project MTM2015-63978-P. This work was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education [Grant Number 2-2017-1659-001-1]. The authors would like to thank Prof. Lola Berrade for her helpful and valuable review of this paper.
Appendices
Appendix A
Proof of Lemma 3
-
(a)
The claimed result holds due to the condition \(z \ge 1\).
-
(b)
By the symmetry of \(R_z(x,y)\), it is sufficient to show that \(R_z(x,y)\) is increasing in x, or equivalently that \(\frac{\partial R_z(x,y)}{\partial x} \ge 0\). The partial derivative of \(R_z(x,y)\) is given by
$$\begin{aligned} \frac{\partial R_z(x,y)}{\partial x} = \frac{zx^{z-1}(x-y) -(x^z-y^z)}{(x-y)^2}= \frac{D_x(y)}{(x-y)^2}, \quad x \ne y. \end{aligned}$$It follows that \(D_x^{\prime }(y)=z(y^{z-1}-x^{z-1})\) and that \(D_x(y)\) attains its minimum at \(y=x\) if \(z \ge 1\). Therefore, we have \(D_x(y) \ge D_x(x) = 0 \) and thus \(\frac{\partial R_z(x,y)}{\partial x} \ge 0\). Furthermore, if \(x=y\), then \(\frac{\partial R_z(x,x)}{\partial x} = z(z-1)x^{z-2} \ge 0\). Thus, property (b) holds.
-
(c)
By the mean value theorem applied to \(u^z\), there exists \(v \in [x \wedge y,x \vee y]\) such that \(x^z-y^z = (x-y)zv^{z-1}\), and thus
$$\begin{aligned} R_z(x,y)=zv^{z-1}, \quad v \in [x \wedge y, x \vee y]. \end{aligned}$$Therefore, we have
$$\begin{aligned} R_z(f(t),g(t))=zv(t)^{z-1}, \quad v(t) \in [f(t) \wedge g(t),f(t) \vee g(t)], \end{aligned}$$and since \( \lim _{t \rightarrow \infty } f(t)=\lim _{t \rightarrow \infty } g(t)= 0\), it follows that \( \lim _{t \rightarrow \infty }R_z(f(t),g(t))=0\).
\(\square \)
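Properties (b) and (c) of Lemma 3 are easy to sanity-check numerically. The following is a minimal Python sketch; the choices of \(z\) and of the functions \(f\), \(g\) are illustrative assumptions, not taken from the paper.

```python
import math

def R(z, x, y):
    """R_z(x, y) = (x**z - y**z)/(x - y) for x != y, z*x**(z - 1) on the diagonal."""
    if x == y:
        return z * x ** (z - 1)
    return (x ** z - y ** z) / (x - y)

z = 2.5  # any z >= 1

# (b): R_z(x, y) is increasing in x (and, by symmetry, in y)
y = 1.25
vals = [R(z, 0.1 * k, y) for k in range(1, 30)]
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))

# (c): if f(t), g(t) -> 0 as t -> infinity, then R_z(f(t), g(t)) -> 0,
# illustrated with f(t) = exp(-t), g(t) = exp(-2t) at a large t
assert R(z, math.exp(-50), math.exp(-100)) < 1e-12
```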
Appendix B
Proof of Theorem 1
(a) In order to prove that \({\mathbf X}^*\) is MDFR, it is sufficient to show that \(M( {\mathbf x},{\mathbf y})=\ln \frac{\overline{F}^*({\mathbf x}+{\mathbf y})}{\overline{F}^*({\mathbf x})}\) is increasing in \({\mathbf x} \in \mathbb {R}^n\) for all \({\mathbf y} \in \mathbb {R}^n\). Observe that
\(i=1,2, \ldots , n\). Here, note that under the assumption that \({\mathbf X}_0\) is MDFR, we have that
Furthermore, it is clear that \(G_Z(- \ln \overline{F}_0( {\mathbf x}+{\mathbf y}))\le G_Z(- \ln \overline{F}_0( {\mathbf x}))\) since \(G_Z(x)\) is decreasing in x (see Lemma 1) and \( - \ln \overline{F}_0({\mathbf x}+{\mathbf y}) \ge - \ln \overline{F}_0({\mathbf x})\) as \( - \ln \) and \( \overline{F}_0\) are both decreasing functions. Therefore, since \( \ln x\) is nonpositive for \(x \in [0,1]\), we have that
\(i=1, 2, \ldots , n\), for all \( {\mathbf y}\), which leads to the desired result.
(b) From the assumption that \({\mathbf X}_0\) is MNWU, we have that \(\overline{F}_0^z( {\mathbf x}+ {\mathbf y}) \ge \overline{F}_0^z( {\mathbf x})\overline{F}_0^z({\mathbf y})\) for all \( {\mathbf x}, {\mathbf y} \in \mathbb {R}^n\) and \(z \ge 0\). Then, from Lemma 2, since \( \overline{F}_0^z( {\mathbf x})\) and \(\overline{F}_0^z({\mathbf y})\) are both decreasing functions in z, it follows that
for all \( {\mathbf x}, {\mathbf y} \in \mathbb {R}^n\). This completes the proof.
(c) First of all, we will show that if \({\mathbf X}_0\) is MNWU2, then \({\mathbf X}_0^{z}\) whose survival function is given by \(\overline{F}_0^z\) is also MNWU2 for all \(z \ge 1\). To this end, let \(Y_{{\mathbf x},z}\) be the univariate random variable with the following survival function
for all \({\mathbf x} \in \mathbb {R}^n\) and \( z \ge 1 \). Then, the claim holds if we prove that \(\mathrm{E}[\phi ( Y_{{\mathbf 0},z})] \le \mathrm{E}[\phi ( Y_{{\mathbf x},z} )]\) for all increasing and concave \(\phi \) (see Remark 6). Let \( \phi \) be a concave, increasing and twice differentiable function, and assume without loss of generality that \(Y_{{\mathbf x},z}\) is an absolutely continuous random variable. Then, appropriate integration by parts leads to
where
Observe that \( \lim _{t \rightarrow \infty }M(t)=0\) by Lemma 3(c) since
It is straightforward to show that M(t) is decreasing in t since it is the product of two nonnegative decreasing functions. First, \( \phi ^{\prime }\) is decreasing and positive as \( \phi \) is concave. In addition, g is positive and decreasing by Lemma 3(a) and (b) since \(\frac{\overline{F}_0({\mathbf x} + t {\mathbf e})}{\overline{F}_0({\mathbf x})}\) and \(\overline{F}_0( t{\mathbf e})\) are decreasing functions in t. Therefore, \({\mathbf X}_0^z\) is MNWU2 for \(z \ge 1\), as (6) is nonnegative if \({\mathbf X}_0\) is MNWU2 and M(t) is decreasing in t.
The MNWU2 property for \({\mathbf X}^*\) can be expressed in the following equivalent forms
As \( \overline{F}_0^z( {\mathbf x })\) and \( \int _0^t \overline{F}_0^z(u {\mathbf e})du\) are both decreasing functions in z, from Lemma 2, it follows that
Therefore, by MNWU2 property of \({\mathbf X}_0^z\) for \( z \ge 1\), we have
Hence, (7) holds and so does the desired result. \(\square \)
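As a concrete sanity check of part (b), recall from Appendix C that the mixture survival function can be written as \(\overline{F}^*({\mathbf x})=H_Z(-\ln \overline{F}_0({\mathbf x}))\), where \(H_Z\) is the Laplace transform of Z. The sketch below verifies the resulting MNWU inequality for a gamma frailty and independent exponential baseline components (a memoryless, hence MNWU, baseline); all parameter values are illustrative assumptions.

```python
def H(s, alpha=2.0, beta=1.0):
    """Laplace transform E[e^{-sZ}] of a Gamma(alpha, rate=beta) frailty (assumed)."""
    return (1.0 + s / beta) ** (-alpha)

# independent exponential baseline components: -ln F0_bar(x) = sum_i lam_i * x_i
lam = [0.5, 1.0, 2.0]
x = [0.3, 1.1, 0.7]
y = [0.9, 0.2, 1.5]
a = sum(l * xi for l, xi in zip(lam, x))  # -ln F0_bar(x)
b = sum(l * yi for l, yi in zip(lam, y))  # -ln F0_bar(y)

# MNWU-type inequality for the mixture: F*(x + y) >= F*(x) * F*(y)
assert H(a + b) >= H(a) * H(b)
```

The inequality holds here because \(\ln H_Z\) is convex with \(\ln H_Z(0)=0\), hence superadditive.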
Appendix C
Proof of Theorem 2
(a) To prove that \({\mathbf X}^*\) is RCSI, we need to show that \( \frac{\partial ^2 \ln \overline{F}^*( \mathbf {x})}{\partial x_i \partial x_j} \ge 0\), \( i \ne j \) from Remark 1. Note that \( \overline{F}^*( \mathbf {x})=H_Z ( - \ln \overline{F}_0(\mathbf {x}))\) and \(G_Z(x)=- (\ln H_Z(x))^{\prime }\). Then we have for \(i \ne j\),
Here, observe that \( \frac{\partial ^2 \ln \overline{F}_0( \mathbf {x})}{\partial x_i \partial x_j} \ge 0\) for \(i \ne j\) since \({\mathbf X}_0\) is RCSI, \(\frac{\partial \ln \overline{F}_0(\mathbf {x})}{\partial x_i}\le 0\), \(i=1, 2, \ldots , n\) for all \( {\mathbf x}\) and \(G_Z\) is decreasing (see Lemma 1). Eventually, this implies \( \frac{\partial ^2 \ln \overline{F}^*( \mathbf {x})}{\partial x_i \partial x_j} \ge 0\), \(i \ne j\) for all \( {\mathbf x}\), which completes the proof.
(b) Under the assumption that \({\mathbf X}_0\) is PUOD, it follows that \( \overline{F}_0( {\mathbf x}) \ge \prod _{i=1}^n\overline{F}_{X_i^0} ( x_i)\). Thus, we have
where the second inequality follows directly from Lemma 2, since \( \overline{F}_{X_i^0}^z( x_i)\), \(i=1, \ldots ,n\), are decreasing functions in z. This completes the proof.
(c) Let us assume without loss of generality that the components of the random vector are absolutely continuous random variables. It is straightforward to show that the pdf of \({\mathbf X}^*\) is given by
where \(\overline{F}_{X_0^i}\) and \(f_{X_0^i}\) are the survival function and pdf of component i of the baseline random vector \({\mathbf X}_0\), \(i=1, \ldots , n\). Therefore, for a function \( g: \mathbb {R}_n^+ \rightarrow \mathbb {R}\),
The multivariate change of variables \(u_i = \overline{F}_{X_0^i} (x_i)^Z=e^{- Z\varLambda _i(x_i)}\), \(i=1, \ldots , n \) leads to
where \(h_Z:[0,1]\times \cdots \times [0,1] \rightarrow \mathbb {R}_n^+ \) is the multivariate vector function defined as
and \( \mathbf {U}=(U_1,U_2,\ldots ,U_n) \) is a vector of independent random variables, each uniformly distributed on the interval [0, 1]. Then,
Let \(g_1, g_2: \mathbb {R}_n^+ \rightarrow \mathbb {R} \) be increasing functions and assume that the components of the baseline random vector are absolutely continuous. Since \(h_{Z,i}\) is decreasing as \( \varLambda _i\) is increasing and \(g_1\) and \(g_2\) are increasing, it follows that \(g_1(h_Z( \mathbf {u}))\) and \(g_2(h_Z( \mathbf {u}))\) are both decreasing. Note that \( \mathbf {U}\) is associated since it has independent components. Thus we have that
Furthermore, \(E_{\mathbf {U}}[g_1(h_Z( \mathbf {U}))]\) and \(E_{\mathbf {U}}[g_2(h_Z( \mathbf {U}))]\) are both increasing functions of Z since \(h_Z\) is increasing in Z. Hence, by Lemma 2 and (8), we have that
and the conclusion follows since
\(\square \)
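The change of variables used in the proof of part (c) also yields a direct simulation scheme for \({\mathbf X}^*\): draw Z, draw independent uniforms \(U_i\), and set \(X_i = \varLambda_i^{-1}(-\ln U_i/Z)\). The sketch below checks this construction against the closed-form survival function for a gamma frailty with independent exponential baselines; all parameters are illustrative assumptions.

```python
import math
import random

random.seed(0)

alpha, beta = 2.0, 1.0   # gamma frailty: shape alpha, rate beta (assumed values)
lam = (0.8, 1.5)         # exponential baseline hazard rates (assumed values)

def sample_X():
    """One draw of X* = (Lambda_i^{-1}(-ln U_i / Z))_i with a shared frailty Z."""
    z = random.gammavariate(alpha, 1.0 / beta)   # gammavariate takes (shape, scale)
    return tuple(-math.log(random.random()) / (z * l) for l in lam)

n = 200_000
x = (0.5, 0.4)
hits = sum(1 for _ in range(n) if all(c > t for c, t in zip(sample_X(), x)))
emp = hits / n

# closed form: F*(x) = H_Z(sum_i lam_i * x_i) = (1 + s/beta)^(-alpha)
s = lam[0] * x[0] + lam[1] * x[1]
exact = (1.0 + s / beta) ** (-alpha)
assert abs(emp - exact) < 0.01
```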
Appendix D
Proof of Lemma 4
Note that \(G_Z(0)=\mathrm{E}[Z]\) and \(G_Z^{\prime }(0)=-\mathrm{Var}[Z]\). Moreover, observe that
and
Then under the assumption that \(G_Z(x)\) is convex, we have
and thus, we conclude that
Furthermore, obviously, we have
which implies the desired result. \(\square \)
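For a frailty with finitely many atoms, the quantities in Lemma 4 can be computed exactly, which gives a quick numerical check of \(G_Z(0)=\mathrm{E}[Z]\) and \(G_Z^{\prime}(0)=-\mathrm{Var}[Z]\), as well as of the monotonicity of \(G_Z\) (cf. Lemma 1). The discrete distribution below is an illustrative assumption.

```python
from math import exp

zs = [0.5, 1.0, 2.5]   # frailty atoms (assumed)
p  = [0.2, 0.5, 0.3]   # their probabilities

def G(x):
    """G_Z(x) = E[Z e^{-Zx}] / E[e^{-Zx}] for the discrete frailty above."""
    num = sum(pi * zi * exp(-zi * x) for pi, zi in zip(p, zs))
    den = sum(pi * exp(-zi * x) for pi, zi in zip(p, zs))
    return num / den

EZ = sum(pi * zi for pi, zi in zip(p, zs))
VarZ = sum(pi * zi ** 2 for pi, zi in zip(p, zs)) - EZ ** 2

assert abs(G(0.0) - EZ) < 1e-12                     # G_Z(0) = E[Z]
h = 1e-6
assert abs((G(h) - G(-h)) / (2 * h) + VarZ) < 1e-4  # G_Z'(0) = -Var[Z]

# G_Z is decreasing (cf. Lemma 1)
gv = [G(0.1 * k) for k in range(0, 50)]
assert all(a >= b - 1e-12 for a, b in zip(gv, gv[1:]))
```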
Appendix E
Proof of Theorem 3
For a given vector \((x_1, \ldots , x_{i-1}, x_{i+1}, \ldots , x_n)\), the following function is defined
and for \(i=1, \ldots , n\), where \(\varLambda _{j,i}(x)=\varLambda _j (x_1, \ldots ,x_{i-1},x,x_{i+1}, \ldots ,x_n)\) and \(\lambda _{j,i}(x)=\frac{\partial \varLambda _j (x_1, \ldots ,x_{i-1},x,x_{i+1}, \ldots ,x_n)}{\partial x}\), \(j=0, 1\). From assumption (3), it is obvious that \( \varLambda _{0,i}(x)-\varLambda _{1,i}(x)\) is increasing, and thus we have that \( \varLambda _{0,i}(x)-\varLambda _{1,i}(x) \ge \varLambda _0(0, \ldots ,0)-\varLambda _1(0, \ldots ,0) \ge 0\) by (4) for all x. Then, we have
Now, to show that \( \frac{H_Z(\varLambda _{1,i} (x))}{H_Z(\varLambda _{0,i} (x))}= \frac{\mathrm{E}[e^{- \varLambda _{1,i}(x)Z}]}{\mathrm{E}[e^{- \varLambda _{0,i}(x)Z}]}\) is increasing in \(x \ge 0\), it is sufficient to prove that \(g^{\prime }_{Z,i}(x) \ge 0\), that is, that the expression in (9) is nonnegative for all \(x \ge 0\). Observe that from Lemma 4, it follows that
Herein, to complete the proof, we have to show that
or
Note that
where \( \delta (x) = \frac{(\lambda _{0,i}(x)-\lambda _{1,i}(x))}{(\lambda _{0,i}(x) \varLambda _{0,i} (x) - \lambda _{1,i}(x) \varLambda _{1,i} (x) )} \).
For \( 0< \alpha < \frac{1}{2}\), let \(a= \alpha (\lambda _{0,i}(x)-\lambda _{1,i}(x))\), \(b= \alpha (\lambda _{1,i}(x)-\lambda _{0,i}(x))\) and \(s = \frac{\mathrm{Var}[Z]}{\overline{\delta }(x)}\), where
Both assumptions \(\lambda _{0,i}(x) > \lambda _{1,i}(x)\) for all \(x \ge 0\) and \( 0< \alpha < \frac{1}{2}\) lead to
Therefore, we have that
which shows that \(\overline{A}(x,Z+s)\) is obtained from A(x, Z) by replacing \(\lambda _{0,i}(y)\), \(\lambda _{1,i}(y)\), \( \varLambda _{0,i}(y)\), \( \varLambda _{1,i}(y)\) and Z with \(\lambda _{0,i}(y)+b\), \(\lambda _{1,i}(y)+a\), \( \varLambda _{0,i}(y)+b(y-x)\), \( \varLambda _{1,i}(y)+a(y-x)\) and \(Z+s\), respectively.
Let us now consider the following function of y given in a neighborhood of x: \(\frac{H_{Z+s}(\varLambda _{1,i}(y)+a(y-x))}{H_{Z+s}(\varLambda _{0,i}(y)+b(y-x))} \). Note that the result (10) implies that
\( \overline{g}_{Z+s,i}(y) = \ln \frac{H_{Z+s}(\varLambda _{1,i}(y)+a(y-x))}{H_{Z+s}(\varLambda _{0,i}(y)+b(y-x))}\) satisfies \( \overline{g}_{Z+s,i}^{\prime }(x) \ge 0\), and by the continuity of the functions involved, it follows that \(\overline{g}_{Z+s,i}^{\prime }(y) \ge 0\) for all y in a neighborhood of \(x \ge 0\). Observe that
for y in a neighborhood of x. In addition,
Letting \( \alpha \) tend to \( \frac{1}{2} \) in the foregoing equation, we have that
and thus Theorem 3 holds. \(\square \)
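As a numerical illustration of Theorem 3, take a gamma frailty and the cumulative hazards \(\varLambda_0(x)=2x\) and \(\varLambda_1(x)=x\), which satisfy \(\lambda_{0}(x) > \lambda_{1}(x)\) and \(\varLambda_0(0)=\varLambda_1(0)=0\); the ratio \(H_Z(\varLambda_1(x))/H_Z(\varLambda_0(x))\) should then be increasing in x. All choices here are illustrative assumptions.

```python
def H(s, alpha=2.0, beta=1.0):
    """Laplace transform of a Gamma(alpha, rate=beta) frailty (assumed)."""
    return (1.0 + s / beta) ** (-alpha)

# Lambda_1(x) = x, Lambda_0(x) = 2x: lambda_0 > lambda_1 and both vanish at 0
ratio = [H(0.05 * k) / H(2 * 0.05 * k) for k in range(0, 200)]
assert all(a <= b + 1e-12 for a, b in zip(ratio, ratio[1:]))
```

For the gamma frailty the ratio is \(((1+2x)/(1+x))^{\alpha}\), which is indeed increasing.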
Appendix F
Proof of Theorem 4
(a) If \({\mathbf X}_0^1 \le _{\mathrm{uo}} {\mathbf X}_0^2\) (i.e., \(\overline{F}_0^1 (\mathbf {x}) \le \overline{F}_0^2 (\mathbf {x})\) for all \({\mathbf x} \in \mathbb {R}^n_+\)), then
and thus, the conclusion holds.
(b) Observe that \( \overline{F}_i^*({\mathbf x})=E[e^{-Z (- \ln \overline{F}_0^i({\mathbf x}))}]\), \( {\mathbf x} \in \mathbb {R}^n_+\) for \(i=1, 2\). The claim in this case is verified if
is increasing in \({\mathbf x}\) on the set \(\{ {\mathbf x}: \overline{F}_1^*({\mathbf x})>0 \}\). Applying Theorem 3 with \( \varLambda _i({\mathbf x}) = - \ln \overline{F}_0^i({\mathbf x})\), \(i=1,2\), the claim follows since \(\varLambda _0(0,\ldots , 0)=\varLambda _1(0,\ldots , 0)=0\) and condition (3) is derived from the assumptions. \(\square \)
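The mechanism behind part (a) is simply that \(u \mapsto \mathrm{E}[u^Z]\) is increasing on [0, 1], so a pointwise ordering of the baseline survival functions carries over to the mixtures. A minimal sketch with a two-point frailty and comparable exponential survival values (all choices illustrative):

```python
from math import exp

def mix(u, zs=(1.0, 3.0), p=(0.4, 0.6)):
    """E[u**Z] for a two-point frailty Z (illustrative choice)."""
    return sum(pi * u ** zi for pi, zi in zip(p, zs))

# comparable baseline survival values: F1_bar(x) = e^{-2x} <= F2_bar(x) = e^{-x},
# so the mixed survival functions must be ordered the same way
for x in [0.1 * k for k in range(0, 50)]:
    assert mix(exp(-2 * x)) <= mix(exp(-x)) + 1e-15
```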
Appendix G
Proof of Theorem 5
Note that for \(i=1,2\), the survival function of \({\mathbf X}_i^*\) is given by \( \overline{F}_i^*({\mathbf x}) = E \left[ (\prod _{j=1}^n \overline{F}_{0,j}^i(x_j) )^Z\right] \). Then the pdf \(f_i^*\) of \({\mathbf X}_i^*\) is as follows
where \(Z_n\) is the random variable with \( dF_{Z_n}(z)= \frac{z^n}{E[Z^n]}dF_Z(z)\), \(z >0\). To prove the claim, we have to show that
From the assumption that \( \frac{\lambda _{0,j}^2(x)}{\lambda _{0,j}^1(x)}\) is increasing in x for \(j=1, \ldots , n\), it follows that
Then, to complete the proof of the inequality in (11), we have to show that
for \({\mathbf x}=(x_1, \ldots ,x_n), {\mathbf y}=(y_1, \ldots ,y_n) \in \mathbb {R}^n_+\).
Herein, if there exists \(j_0\) such that \( \overline{F}_{0,j_0}^1(x_{j_0})=0\), or there exists \(j_1\) such that \(\overline{F}_{0,j_1}^2(y_{j_1})=0\), then it follows that
Otherwise, if \(\overline{F}_{0,j}^1(x_j)>0\) and \(\overline{F}_{0,j}^2(y_j) >0\) for \(j=1, \ldots , n\), we have that
where the above inequality follows since the following two conditions apply:
- (i)
\(E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^{Z_n} \right] \) is RCSI since \( \prod _{j=1}^n \overline{F}_{0,j}^1 \) is RCSI from Theorem 2(a);
- (ii)
\(\frac{E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(x_j) \right) ^{Z_n} \right] }{E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^{Z_n} \right] }\) is increasing in \({\mathbf x}\) by Theorem 3 with \( \varLambda _i ({\mathbf x}) = -\sum _{j=1}^n \ln \overline{F}_{0,j}^i(x_j)\), \(i=1,2\) since \( \frac{\partial \varLambda _1 ( {\mathbf x} ) }{\partial x_j} = \lambda _{0,j}^1 (x_j) > \lambda _{0,j}^2 (x_j)= \frac{\partial \varLambda _2 ( {\mathbf x} ) }{\partial x_j}\), \(j=1,2, \ldots , n \), \(\varLambda _0 ( 0, \ldots ,0 )=\varLambda _1 ( 0, \ldots ,0 )=0\), and \(G_{Z_n}(x)= \frac{\mathrm{E}[Z^{n+1}e^{-Zx}]}{\mathrm{E}[Z^ne^{-Zx}]}\).
Thus, the proof of (12) is completed. \(\square \)
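Condition (ii) applies Theorem 3 to the size-biased frailty \(Z_n\), whose mixing function is \(G_{Z_n}(x)=\frac{\mathrm{E}[Z^{n+1}e^{-Zx}]}{\mathrm{E}[Z^ne^{-Zx}]}\). The sketch below computes \(G_{Z_n}\) exactly for a discrete frailty and checks that it is, like \(G_Z\), decreasing; the distribution and the value of n are illustrative assumptions.

```python
from math import exp

zs = [0.5, 1.5, 3.0]   # frailty atoms (assumed)
p  = [0.3, 0.4, 0.3]   # their probabilities
n  = 2                 # order of the size bias (assumed)

def G_n(x):
    """G_{Z_n}(x) = E[Z^{n+1} e^{-Zx}] / E[Z^n e^{-Zx}] for the discrete Z above."""
    num = sum(pi * zi ** (n + 1) * exp(-zi * x) for pi, zi in zip(p, zs))
    den = sum(pi * zi ** n * exp(-zi * x) for pi, zi in zip(p, zs))
    return num / den

# like G_Z, the size-biased version G_{Z_n} is decreasing
vals = [G_n(0.1 * k) for k in range(0, 60)]
assert all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))
```

The same log-convexity argument used for \(H_Z\) applies to \(x \mapsto \mathrm{E}[Z^n e^{-Zx}]\), which is why the monotonicity is preserved.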
Germán Badía, F., Lee, H. On stochastic comparisons and ageing properties of multivariate proportional hazard rate mixtures. Metrika 83, 355–375 (2020). https://doi.org/10.1007/s00184-019-00730-9