
On stochastic comparisons and ageing properties of multivariate proportional hazard rate mixtures


Abstract

In this paper, we study mixtures of the multivariate proportional hazard model, where the frailty (an unobservable non-negative random variable) acts simultaneously on the hazard rates of the lifetimes to model common risk factors. We investigate ageing properties and dependence between components for the mixture model under study. In addition, we carry out stochastic comparisons assuming dependence between the components of the baseline random vector. More specifically, these stochastic comparisons refer to mixture models that share the same frailty random variable but have different baseline random vectors. The results are applied to the inter-epoch times of a non-homogeneous mixed Poisson process.
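The mixture survival function studied in this model is \(\overline{F}^*({\mathbf x})=\mathrm{E}[\overline{F}_0({\mathbf x})^Z]\), i.e. the Laplace transform of the frailty Z evaluated at the baseline cumulative hazard. A minimal numerical sketch follows; the independent exponential baseline and the gamma frailty are illustrative assumptions, not choices made in the paper.

```python
import math

def baseline_survival(x, rates=(1.0, 2.0)):
    """Baseline survival F0(x) = exp(-sum_i r_i * x_i)
    (independent exponential components, assumed for illustration)."""
    return math.exp(-sum(r * xi for r, xi in zip(rates, x)))

def mixture_survival(x, shape=2.0, rate=3.0, rates=(1.0, 2.0)):
    """F*(x) = E[F0(x)^Z] = E[exp(-Z * Lambda0(x))]: the Laplace
    transform of a gamma frailty at the baseline cumulative hazard."""
    s = -math.log(baseline_survival(x, rates))   # Lambda0(x)
    return (rate / (rate + s)) ** shape          # gamma Laplace transform

x = (0.5, 0.5)
print(mixture_survival(x))
# By Jensen's inequality, E[F0^Z] >= F0^{E[Z]} with E[Z] = shape/rate = 2/3
print(mixture_survival(x) >= baseline_survival(x) ** (2.0 / 3.0))
```

The second print illustrates the well-known effect of frailty mixing: the mixture survival dominates the baseline raised to the mean frailty.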


References

  • Alzaid AA, Proschan F (1994) Max-infinite divisibility and multivariate total positivity. J Appl Probab 129:721–730

    Article  MathSciNet  Google Scholar 

  • Amini-Seresht E, Khaledi BE (2015) Multivariate stochastic comparisons of mixture models. Metrika 78:1015–1034

    Article  MathSciNet  Google Scholar 

  • Badía FG, Cha JH (2017) On bending (down and up) properties of reliability measures in mixtures. Metrika 80:455–482

    Article  MathSciNet  Google Scholar 

  • Badía FG, Sangüesa C, Cha JH (2014) Stochastic comparison of multivariate conditionally dependent mixtures. J Multivar Anal 129:82–94

    Article  MathSciNet  Google Scholar 

  • Badía FG, Sangüesa C, Cha JH (2018) Univariate and multivariate stochastic comparisons and ageing properties of generalized Pòlya process. J Appl Probab 55:233–253

    Article  MathSciNet  Google Scholar 

  • Belzunce F, Mercader JA, Ruiz JM, Spizzichino F (2009) Stochastic comparisons of multivariate mixture models. J Multivar Anal 100:1657–1669

    Article  MathSciNet  Google Scholar 

  • Cha JH, Finkelstein M (2018) Point processes for reliability analysis: shocks and repairable system. Springer, London

    Book  Google Scholar 

  • Cuadras CM (2002) On the covariance between functions. J Multivar Anal 81:19–27

    Article  MathSciNet  Google Scholar 

  • Feller W (1971) An introduction to probability theory and its applications, vol II. Wiley, New York

    MATH  Google Scholar 

  • Fernández-Ponce JM, Pellerey F, Rodríguez-Griñolo MR (2015) Some stochastic properties of conditionally dependent frailty models. Statistics 50:649–666

    Article  MathSciNet  Google Scholar 

  • Finkelstein M, Cha JH (2013) Stochastic modeling for reliability: shocks, burn-in and heterogeneous populations. Springer, London

    Book  Google Scholar 

  • Grandel J (1997) Mixed poisson processes. Chapman & Hall, London

    Book  Google Scholar 

  • Gupta N, Dhariyal ID, Misra M (2011) Reliability under random operating environment: frailty models. J Combin Inf Syst Sci 36:117–133

    MATH  Google Scholar 

  • Gupta PL, Gupta RC (1996) Ageing classes of Weibull mixtures. Probab Eng Inf Sci 10:591–600

    Article  Google Scholar 

  • Gupta RC, Kirmani SNUA (2006) Stochastic comparisons in frailty models. J Stat Plan Inference 136:1–20

    Article  MathSciNet  Google Scholar 

  • Gurland J, Sethuraman J (1994) Reversal of increasing failure rates when pooling failure data. Technometrics 36:416–418

    MATH  Google Scholar 

  • Jarrahiferiz J, Kayid M, Izadkhah S (2019) Stochastic properties of a weighted frailty model. Stat Pap 60:53–72

    Article  MathSciNet  Google Scholar 

  • Joe H (1997) Multivariate models and dependence concepts. Chapman and Hall/CRC, London

    Book  Google Scholar 

  • Karlin M, Rinott Y (1980) Classes of orderings of measures and related correlation inequalities I. Multivariate totally positive distributions. J Multivar Anal 10:467–498

    Article  MathSciNet  Google Scholar 

  • Lai CH, Xie M (2006) Stochastic ageing and dependence in reliability. Springer, New York

    MATH  Google Scholar 

  • Marshall AW, Olkin I (2007) Life distributions. Springer, New York

    MATH  Google Scholar 

  • Misra N, Gupta N, Gupta RD (2009) Stochastic comparisons of multivariate frailty models. J Stat Plan Inference 139:2084–2090

    Article  MathSciNet  Google Scholar 

  • Misra N, Naqvi S (2018) A unified approach to stochastic comparisons of multivariate mixture models. Commun Stat Theory Methods. https://doi.org/10.1080/03610926.2018.1445859

    Article  MathSciNet  MATH  Google Scholar 

  • Mulero J, Pellerey F, Rodríguez-Griñolo MR (2010) Stochastic comparisons for time transformed exponential models. Insur Math Econ 46:328–333

    Article  MathSciNet  Google Scholar 

  • Navarro J, Hernández PJ (2008) Mean residual life functions of finite mixtures, order statistics and coherent systems. Metrika 67:277–298

    Article  MathSciNet  Google Scholar 

  • Shaked M, Shanthikumar JG (2007) Stochastic orders. Springer, New York

    Book  Google Scholar 

  • Vaupel J, Manton KG, Stallard E (1979) The impact of heterogeneity in individual frailty on the dynamics of mortality. Demography 16:439–454

    Article  Google Scholar 

  • Xu M, Li X (2008) Negative dependence in frailty models. J Stat Plan Inference 138:1433–1441

    Article  MathSciNet  Google Scholar 

Download references

Acknowledgements

The authors thank the referees and the editor for their valuable comments and careful reading of an earlier version of this manuscript. The work of the first author was supported by the Spanish Government under research project MTM2015-63978-P. This work was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education [Grant Number 2-2017-1659-001-1]. The authors would like to thank Prof. Lola Berrade for her helpful and valuable review of this paper.


Corresponding author

Correspondence to Hyunju Lee.


Appendices

Appendix A

Proof of Lemma 3

  1. (a)

The claim holds due to the condition \(z \ge 1\).

  2. (b)

    Due to symmetry of \(R_z(x,y)\), it is sufficient to show that \(R_z(x,y)\) is increasing in x or equivalently that \(\frac{\partial R_z(x,y)}{\partial x} \ge 0\). The partial derivative of \(R_z(x,y)\) is given by

    $$\begin{aligned} \frac{\partial R_z(x,y)}{\partial x} = \frac{zx^{z-1}(x-y) -(x^z-y^z)}{(x-y)^2}= \frac{D_x(y)}{(x-y)^2}, \quad x \ne y. \end{aligned}$$

It follows that \(D_x^{\prime }(y)=z(y^{z-1}-x^{z-1})\), and \(D_x(y)\) attains a minimum at \(y=x\) if \(z \ge 1\). Therefore, we have \(D_x(y) \ge D_x(x) = 0 \) and thus \(\frac{\partial R_z(x,y)}{\partial x} \ge 0\). Furthermore, if \(x=y\), then \(\frac{\partial R_z(x,x)}{\partial x} = z(z-1)x^{z-2} \ge 0\). Thus, property (b) holds.

  3. (c)

Applying the mean value theorem to \(u^z\), there exists \(v \in [x \wedge y,x \vee y]\) such that \(x^z-y^z = (x-y)zv^{z-1}\); thus

    $$\begin{aligned} R_z(x,y)=zv^{z-1}, \quad v \in [x \wedge y, x \vee y]. \end{aligned}$$

    Therefore, we have

    $$\begin{aligned} R_z(f(t),g(t))=zv(t)^{z-1}, \quad v(t) \in [f(t) \wedge g(t),f(t) \vee g(t)] \end{aligned}$$

and since \( \lim _{t \rightarrow \infty } f(t)=\lim _{t \rightarrow \infty } g(t)= 0\), it follows that \( \lim _{t \rightarrow \infty }R_z(f(t),g(t))=0\).

\(\square \)
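The three properties of \(R_z(x,y)=(x^z-y^z)/(x-y)\) established in this lemma are easy to sanity-check numerically. In the sketch below, the grid and the exponent are arbitrary illustrative choices.

```python
# Numerical check of the divided difference R_z(x, y) = (x^z - y^z)/(x - y):
# positivity for z >= 1 (Lemma 3(a)), symmetry and coordinate-wise
# monotonicity (Lemma 3(b)), and the diagonal value R_z(x, x) = z*x^(z-1).

def R(z, x, y):
    """Divided difference of u -> u**z; z * x**(z-1) on the diagonal."""
    if x == y:
        return z * x ** (z - 1)
    return (x ** z - y ** z) / (x - y)

z = 2.5
grid = [0.1 * k for k in range(1, 10)]   # points in (0, 1)
for x in grid:
    for y in grid:
        assert R(z, x, y) >= 0                        # positivity
        assert abs(R(z, x, y) - R(z, y, x)) < 1e-12   # symmetry

# increasing in the first argument (hence in both, by symmetry)
vals = [R(z, x, 0.4) for x in grid]
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
print("Lemma 3 properties verified on the grid")
```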

Appendix B

Proof of Theorem 1

(a) In order to prove that \({\mathbf X}^*\) is MDFR, it is sufficient to show that \(M( {\mathbf x},{\mathbf y})=\ln \frac{\overline{F}^*({\mathbf x}+{\mathbf y})}{\overline{F}^*({\mathbf x})}\) is increasing in \({\mathbf x} \in \mathbb {R}^n\) for all \({\mathbf y} \in \mathbb {R}^n\). Observe that

$$\begin{aligned} \frac{\partial M( {\mathbf x},{\mathbf y})}{\partial x_i}= & {} \frac{\partial \{\ln \overline{F}^*({\mathbf x}+ {\mathbf y})- \ln \overline{F}^*({\mathbf x})\}}{\partial x_i} \\= & {} \frac{\partial \ln \overline{F}_0({\mathbf x}+ {\mathbf y})}{\partial x_i}G_Z(-\ln \overline{F}_0({\mathbf x}+ {\mathbf y}))- \frac{\partial \ln \overline{F}_0({\mathbf x})}{\partial x_i}G_Z(-\ln \overline{F}_0({\mathbf x})), \end{aligned}$$

\(i=1,2, \ldots , n\). Here, note that under the assumption that \({\mathbf X}_0\) is MDFR, we have that

$$\begin{aligned} \frac{\partial }{\partial x_i} \frac{\overline{F}_0({\mathbf x}+ {\mathbf y})}{\overline{F}_0({\mathbf x})} \ge 0\Leftrightarrow & {} \frac{\partial \overline{F}_0({\mathbf x}+ {\mathbf y})}{\partial x_i} \overline{F}_0({\mathbf x}) - \frac{\partial \overline{F}_0({\mathbf x})}{\partial x_i} \overline{F}_0({\mathbf x}+ {\mathbf y}) \ge 0\\\Leftrightarrow & {} \frac{\partial \ln \overline{F}_0( {\mathbf x}+{\mathbf y})}{\partial x_i } - \frac{\partial \ln \overline{F}_0( {\mathbf x})}{\partial x_i } \ge 0, \quad i=1,2, \ldots , n. \end{aligned}$$

Furthermore, it is clear that \(G_Z(- \ln \overline{F}_0( {\mathbf x}+{\mathbf y}))\le G_Z(- \ln \overline{F}_0( {\mathbf x}))\) since \(G_Z(x)\) is decreasing in x (see Lemma 1) and \( - \ln \overline{F}_0({\mathbf x}+{\mathbf y}) \ge - \ln \overline{F}_0({\mathbf x})\), as \( - \ln \) and \( \overline{F}_0\) are both decreasing functions. Therefore, since \(\ln \overline{F}_0\) is decreasing in each coordinate, so that \(\frac{\partial \ln \overline{F}_0({\mathbf x})}{\partial x_i} \le 0\), we have that

$$\begin{aligned} \frac{\partial M( {\mathbf x},{\mathbf y})}{\partial x_i}= & {} \frac{\partial \ln \overline{F}_0({\mathbf x}+ {\mathbf y})}{\partial x_i}G_Z(-\ln \overline{F}_0({\mathbf x}+ {\mathbf y}))- \frac{\partial \ln \overline{F}_0({\mathbf x})}{\partial x_i}G_Z(-\ln \overline{F}_0({\mathbf x}))\\\ge & {} \frac{\partial \ln \overline{F}_0 ( {\mathbf x})}{\partial x_i} \{G_Z(-\ln \overline{F}_0({\mathbf x}+ {\mathbf y}))- G_Z(-\ln \overline{F}_0({\mathbf x}))\} \ge 0, \end{aligned}$$

\(i=1, 2, \ldots , n\), for all \( {\mathbf y}\), which leads to the desired result.

(b) From the assumption that \({\mathbf X}_0\) is MNWU, we have that \(\overline{F}_0^z( {\mathbf x}+ {\mathbf y}) \ge \overline{F}_0^z( {\mathbf x})\overline{F}_0^z({\mathbf y})\) for all \( {\mathbf x}, {\mathbf y} \in \mathbb {R}^n\) and \(z \ge 0\). Then, since \( \overline{F}_0^z( {\mathbf x})\) and \(\overline{F}_0^z({\mathbf y})\) are both decreasing functions in z, it follows from Lemma 2 that

$$\begin{aligned} \overline{F}^*({\mathbf x}+{\mathbf y}) \ge \mathrm{E}[\overline{F}_0^Z( {\mathbf x})\overline{F}_0^Z({\mathbf y})] \ge \mathrm{E}[\overline{F}_0^Z( {\mathbf x})]\mathrm{E}[\overline{F}_0^Z({\mathbf y})] = \overline{F}^*({\mathbf x})\overline{F}^*({\mathbf y}), \end{aligned}$$

for all \( {\mathbf x}, {\mathbf y} \in \mathbb {R}^n\). This completes the proof.

(c) First of all, we will show that if \({\mathbf X}_0\) is MNWU2, then \({\mathbf X}_0^{z}\) whose survival function is given by \(\overline{F}_0^z\) is also MNWU2 for all \(z \ge 1\). To this end, let \(Y_{{\mathbf x},z}\) be the univariate random variable with the following survival function

$$\begin{aligned} \overline{F}_{Y_{{\mathbf x},z}} (t) = \frac{\overline{F}_0^z( {\mathbf x} + t {\mathbf e})}{\overline{F}_0^z( {\mathbf x})}, \quad t \ge 0, \end{aligned}$$

for all \({\mathbf x} \in \mathbb {R}^n\) and \( z \ge 1 \). Then, the claim holds if we prove that \(\mathrm{E}[\phi ( Y_{{\mathbf 0},z})] \le \mathrm{E}[\phi ( Y_{{\mathbf x},z} )]\) for all increasing and concave \(\phi \) (see Remark 6). Let \( \phi \) be a concave, increasing and twice differentiable function, and assume without loss of generality that \(Y_{{\mathbf x},z}\) is an absolutely continuous random variable. Then, an appropriate integration by parts leads to

$$\begin{aligned} \nonumber \mathrm{E}[\phi ( Y_{{\mathbf x},z}) -\phi ( Y_{{\mathbf 0},z})]= & {} \int _0^{\infty } \phi ^{\prime }(t) \left( \frac{\overline{F}_0^z( {\mathbf x} + t {\mathbf e})}{\overline{F}_0^z( {\mathbf x})} - \overline{F}_0^z( t{\mathbf e}) \right) dt \\ \nonumber= & {} \int _0^{\infty } \phi ^{\prime }(t)R_z \left( \frac{\overline{F}_0( {\mathbf x} + t {\mathbf e})}{\overline{F}_0( {\mathbf x})}, \overline{F}_0( t{\mathbf e}) \right) \left( \frac{\overline{F}_0( {\mathbf x} + t {\mathbf e})}{\overline{F}_0 ({\mathbf x})}- \overline{F}_0( t{\mathbf e}) \right) dt \\= & {} \int _0^{\infty } (-M^{\prime }(t)) \int _0^t \left( \frac{\overline{F}_0( {\mathbf x} + u {\mathbf e})}{\overline{F}_0( {\mathbf x})}- \overline{F}_0( u{\mathbf e}) \right) du\, dt, \end{aligned}$$
(6)

where

$$\begin{aligned} M(t)= \phi ^{\prime }(t) R_z \left( \frac{\overline{F}_0( {\mathbf x} + t {\mathbf e})}{\overline{F}_0( {\mathbf x})}, \overline{F}_0( t{\mathbf e}) \right) = \phi ^{\prime }(t)g(t). \end{aligned}$$

Observe that \( \lim _{t \rightarrow \infty }M(t)=0\) by Lemma 3(c) since

$$\begin{aligned} \lim _{t \rightarrow \infty } \frac{\overline{F}_0({\mathbf x} + t {\mathbf e})}{\overline{F}_0({\mathbf x})}=\lim _{t \rightarrow \infty } \overline{F}_0( t {\mathbf e})=0 \hbox { and } 0 \le \phi ^{\prime }(x) \le \phi ^{\prime }(0). \end{aligned}$$

It is straightforward to show that M(t) is decreasing in t since it is the product of two non-negative decreasing functions. First, \( \phi ^{\prime }\) is decreasing and positive as \( \phi \) is concave and increasing. In addition, g is positive and decreasing by Lemma 3(a) and (b) since \(\frac{\overline{F}_0({\mathbf x} + t {\mathbf e})}{\overline{F}_0({\mathbf x})}\) and \(\overline{F}_0( t{\mathbf e})\) are decreasing functions in t. Therefore, \({\mathbf X}_0^z\) is MNWU2 for \(z \ge 1\), as (6) is non-negative if \({\mathbf X}_0\) is MNWU2 and M(t) is decreasing in t.

The MNWU2 property for \({\mathbf X}^*\) can be expressed in the following equivalent forms

$$\begin{aligned}&\int _0^t \frac{\overline{F}^*( {\mathbf x} +u {\mathbf e} )}{\overline{F}^*( {\mathbf x})} du - \int _0^t \overline{F}^*(u {\mathbf e} )du \ge 0 \nonumber \\&\quad \Leftrightarrow \int _0^t \overline{F}^*( {\mathbf x} +u {\mathbf e} )du - \overline{F}^*( {\mathbf x})\int _0^t \overline{F}^*(u {\mathbf e} )du \ge 0 \nonumber \\&\quad \Leftrightarrow \mathrm{E}\left[ \int _0^t \overline{F}_0^Z( {\mathbf x } +u {\mathbf e})du \right] - \mathrm{E}[\overline{F}_0^Z( {\mathbf x })]\mathrm{E}\left[ \int _0^t \overline{F}_0^Z(u {\mathbf e})du \right] \ge 0. \end{aligned}$$
(7)

As \( \overline{F}_0^z( {\mathbf x })\) and \( \int _0^t \overline{F}_0^z(u {\mathbf e})du\) are both decreasing functions in z, from Lemma 2, it follows that

$$\begin{aligned} \mathrm{E}[\overline{F}_0^Z( {\mathbf x })]\mathrm{E}\left[ \int _0^t \overline{F}_0^Z(u {\mathbf e})du \right] \le \mathrm{E}\left[ \overline{F}_0^Z( {\mathbf x }) \int _0^t \overline{F}_0^Z(u {\mathbf e})du \right] . \end{aligned}$$

Therefore, by the MNWU2 property of \({\mathbf X}_0^z\) for \( z \ge 1\), we have

$$\begin{aligned}&\mathrm{E}\left[ \int _0^t \overline{F}_0^Z( {\mathbf x } +u {\mathbf e})du \right] - \mathrm{E}[\overline{F}_0^Z( {\mathbf x })]\mathrm{E}\left[ \int _0^t \overline{F}_0^Z(u {\mathbf e})du \right] \\&\quad \ge \mathrm{E}\left[ \int _0^t \overline{F}_0^Z( {\mathbf x } +u {\mathbf e})du- \overline{F}_0^Z( {\mathbf x })\int _0^t \overline{F}_0^Z(u {\mathbf e})du \right] \ge 0. \end{aligned}$$

Hence, (7) holds and so does the desired result. \(\square \)
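Lemma 2 is used repeatedly above in the form of a Chebyshev-type covariance inequality: if f and g are both monotone in the same direction, then \(\mathrm{E}[f(Z)g(Z)] \ge \mathrm{E}[f(Z)]\mathrm{E}[g(Z)]\). The discrete frailty below is an assumed toy example (not from the paper) on which the inequality can be verified exactly.

```python
# Exact verification of the covariance inequality for a discrete frailty Z
# and two decreasing functions of z: f(z) = F0(x)^z and g(z) = F0(y)^z,
# precisely the pair appearing in the proof of part (b) above.
# The support points, probabilities and survival values are arbitrary.

def expectation(zs, probs, h):
    """E[h(Z)] for a discrete random variable Z."""
    return sum(p * h(z) for z, p in zip(zs, probs))

zs = [0.5, 1.0, 2.0, 4.0]
probs = [0.1, 0.4, 0.3, 0.2]

f = lambda z: 0.6 ** z   # F0(x)^z, decreasing in z
g = lambda z: 0.3 ** z   # F0(y)^z, decreasing in z

cov = expectation(zs, probs, lambda z: f(z) * g(z)) \
      - expectation(zs, probs, f) * expectation(zs, probs, g)
assert cov >= 0          # E[f(Z)g(Z)] >= E[f(Z)] E[g(Z)]
print(cov)
```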

Appendix C

Proof of Theorem 2

(a) To prove that \({\mathbf X}^*\) is RCSI, we need to show that \( \frac{\partial ^2 \ln \overline{F}^*( \mathbf {x})}{\partial x_i \partial x_j} \ge 0\), \( i \ne j \) from Remark 1. Note that \( \overline{F}^*( \mathbf {x})=H_Z ( - \ln \overline{F}_0(\mathbf {x}))\) and \(G_Z(x)=- (\ln H_Z(x))^{\prime }\). Then we have for \(i \ne j\),

$$\begin{aligned} \frac{\partial ^2 \ln \overline{F}^*( \mathbf {x})}{\partial x_i \partial x_j}= & {} \frac{\partial ^2 \ln H_Z(- \ln \overline{F}_0(\mathbf {x}))}{\partial x_i \partial x_j}\\= & {} \frac{\partial ^2 \ln \overline{F}_0( \mathbf {x})}{\partial x_i \partial x_j}G_Z(- \ln \overline{F}_0(\mathbf {x})) \\&- \frac{\partial \ln \overline{F}_0(\mathbf {x})}{\partial x_i}\frac{\partial \ln \overline{F}_0(\mathbf {x})}{\partial x_j}G_Z^{\prime }(- \ln \overline{F}_0( \mathbf {x})). \end{aligned}$$

Here, observe that \( \frac{\partial ^2 \ln \overline{F}_0( \mathbf {x})}{\partial x_i \partial x_j} \ge 0\) for \(i \ne j\) since \({\mathbf X}_0\) is RCSI, \(\frac{\partial \ln \overline{F}_0(\mathbf {x})}{\partial x_i}\le 0\), \(i=1, 2, \ldots , n\) for all \( {\mathbf x}\) and \(G_Z\) is decreasing (see Lemma 1). Eventually, this implies \( \frac{\partial ^2 \ln \overline{F}^*( \mathbf {x})}{\partial x_i \partial x_j} \ge 0\), \(i \ne j\) for all \( {\mathbf x}\), which completes the proof.

(b) Under the assumption that \({\mathbf X}_0\) is PUOD, it follows that \( \overline{F}_0( {\mathbf x}) \ge \prod _{i=1}^n\overline{F}_{X_i^0} ( x_i)\). Thus, we have

$$\begin{aligned} \overline{F}^*( {\mathbf x})= \mathrm{E}[\overline{F}_0^Z({\mathbf x}) ]\ge & {} \mathrm{E} \left[ \prod _{i=1}^n\overline{F}_{X_i^0}^Z ( x_i) \right] \\\ge & {} \prod _{i=1}^n\mathrm{E} \left[ \overline{F}_{X_i^0}^Z ( x_i) \right] = \prod _{i=1}^n \overline{F}_{X_i^*}( x_i), \end{aligned}$$

where the second inequality directly follows from Lemma 2 since \( \overline{F}_{X_i^0}^z( x_i)\), \(i=1, \ldots ,n\), are decreasing functions in z, which completes the proof.

(c) Let us assume without loss of generality that the components of the random vector are absolutely continuous random variables. It is straightforward to show that the pdf of \({\mathbf X}^*\) is given by

$$\begin{aligned} f^*( \mathbf {x}) = (-1)^n\frac{\partial ^n \overline{F}^*(\mathbf {x})}{\partial x_1 \ldots \partial x_n} =E_Z[ Z^n \left( \prod _{i=1}^n \overline{F}_{X_0^i} (x_i)\right) ^{Z-1} \prod _{i=1}^nf_{X_0^i} (x_i)], \end{aligned}$$

where \(\overline{F}_{X_0^i}\) and \(f_{X_0^i}\) are the survival function and the pdf of component i of the baseline random vector \({\mathbf X}_0\), \(i=1, \ldots , n\). Therefore, for a function \( g: \mathbb {R}^n_+ \rightarrow \mathbb {R}\),

$$\begin{aligned} E[g( \mathbf {X}^*)] = E_Z \left[ \int _0^{\infty } \cdots \int _0^{\infty } g( \mathbf {x}) Z^n \left( \prod _{i=1}^n \overline{F}_{X_0^i} (x_i) \right) ^{Z-1} \prod _{i=1}^nf_{X_0^i} (x_i)dx_1 \ldots dx _n \right] . \end{aligned}$$

The multivariate change of variables \(u_i = \overline{F}_{X_0^i} (x_i)^Z=e^{- Z\varLambda _i(x_i)}\), \(i=1, \ldots , n \) leads to

$$\begin{aligned}&\int _0^{\infty } \cdots \int _0^{\infty } g( \mathbf {x}) Z^n\left( \prod _{i=1}^n \overline{F}_{X_0^i} (x_i)\right) ^{Z-1} \prod _{i=1}^nf_{X_0^i} (x_i)dx_1 \ldots dx _n \\&\quad = \int _0^1 \cdots \int _0^1 g( h_Z(\mathbf {u})) du_1 \ldots du _n = E_{\mathbf {U}}[g(h_Z( \mathbf {U}))], \end{aligned}$$

where \(h_Z:[0,1]\times \cdots \times [0,1] \rightarrow \mathbb {R}^n_+ \) is the multivariate vector function defined as

$$\begin{aligned} h_{Z,i}(u_1, \ldots , u_n) = \varLambda _i^{-1} \left( \frac{- \ln u_i}{Z} \right) , \quad i=1, \ldots , n \end{aligned}$$

and \( \mathbf {U}=(U_1,U_2,\ldots ,U_n) \) is a random vector of independent random variables uniformly distributed on the interval [0, 1]. Then,

$$\begin{aligned} E[g( \mathbf {X}^*)]=E_Z[E_{\mathbf {U}}[ g(h_Z(\mathbf {U}))]]. \end{aligned}$$

Let \(g_1, g_2: \mathbb {R}^n_+ \rightarrow \mathbb {R} \) be increasing functions. Since \(h_{Z,i}\) is decreasing in \(u_i\), as \( \varLambda _i\) is increasing, and \(g_1\) and \(g_2\) are increasing, it follows that \(g_1(h_Z( \mathbf {u}))\) and \(g_2(h_Z( \mathbf {u}))\) are both decreasing. Note that \( \mathbf {U}\) is associated since it has independent components. Thus we have that

$$\begin{aligned} E_{\mathbf {U}}[g_1(h_Z( \mathbf {U}))g_2(h_Z( \mathbf {U}))] \ge E_{\mathbf {U}}[g_1(h_Z( \mathbf {U}))]E_{\mathbf {U}}[g_2(h_Z( \mathbf {U}))]. \end{aligned}$$
(8)

Furthermore, \(E_{\mathbf {U}}[g_1(h_Z( \mathbf {U}))]\) and \(E_{\mathbf {U}}[g_2(h_Z( \mathbf {U}))]\) are both decreasing functions in Z, since each \(h_{Z,i}\) is decreasing in Z and \(g_1\), \(g_2\) are increasing. Hence by Lemma 2 and (8), we have that

$$\begin{aligned} E[g_1( \mathbf {X}^*)g_2( \mathbf {X}^*)]= & {} E_Z[E_{\mathbf {U}}[g_1(h_Z( \mathbf {U}))g_2(h_Z( \mathbf {U}))]] \\\ge & {} E_Z[E_{\mathbf {U}}[g_1(h_Z( \mathbf {U}))]E_{\mathbf {U}}[g_2(h_Z( \mathbf {U}))]] \\\ge & {} E_Z[E_{\mathbf {U}}[g_1(h_Z( \mathbf {U}))]]E_Z[E_{\mathbf {U}}[g_2(h_Z( \mathbf {U}))]] \\= & {} E[g_1( \mathbf {X}^*)] E[g_2( \mathbf {X}^*)] \end{aligned}$$

and the conclusion follows since

$$\begin{aligned} \hbox {Cov}(g_1(\mathbf {X}^*),g_2(\mathbf {X}^*))=E[g_1( \mathbf {X}^*)g_2( \mathbf {X}^*)]-E[g_1( \mathbf {X}^*)] E[g_2( \mathbf {X}^*)]. \end{aligned}$$

\(\square \)
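The representation \(E[g( \mathbf {X}^*)]=E_Z[E_{\mathbf {U}}[ g(h_Z(\mathbf {U}))]]\) used in this proof also gives a direct way to simulate \({\mathbf X}^*\): draw the frailty Z, draw independent uniforms, and apply \(h_{Z,i}\). The sketch below assumes exponential baseline hazards \(\varLambda _i(x)=r_i x\) and a gamma frailty, purely so that the mixture survival has a closed form to check against.

```python
import math, random

random.seed(0)
shape, rate = 2.0, 3.0          # gamma frailty parameters (illustrative)
r = (1.0, 2.0)                  # baseline hazard rates (illustrative)

def sample_x_star():
    """One draw of X*: X_i = Lambda_i^{-1}(-ln(U_i)/Z) with
    Lambda_i(x) = r_i * x, so Lambda_i^{-1}(s) = s / r_i."""
    z = random.gammavariate(shape, 1.0 / rate)      # frailty draw
    u = [random.random() for _ in r]
    return [(-math.log(ui) / z) / ri for ui, ri in zip(u, r)]

# Monte Carlo estimate of P(X1 > 0.4, X2 > 0.3) vs the closed form
n = 20000
x0 = (0.4, 0.3)
empirical = sum(all(xi > x0i for xi, x0i in zip(sample_x_star(), x0))
                for _ in range(n)) / n
lam = sum(ri * x0i for ri, x0i in zip(r, x0))       # Lambda0(x0)
exact = (rate / (rate + lam)) ** shape              # E[exp(-Z Lambda0(x0))]
assert abs(empirical - exact) < 0.02
print(empirical, exact)
```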

Appendix D

Proof of Lemma 4

Note that \(G_Z(0)=\mathrm{E}[Z]\) and \(G_Z^{\prime }(0)=-\mathrm{Var}[Z]\). Moreover, observe that

$$\begin{aligned} G_Z(x)=G_Z(0) + \int _0^x G_Z^{\prime }(u) du \end{aligned}$$

and

$$\begin{aligned} G_Z^{\prime }(x)=G_Z^{\prime }(0) + \int _0^x G_Z^{\prime \prime }(u) du. \end{aligned}$$

Then under the assumption that \(G_Z(x)\) is convex, we have

$$\begin{aligned} G_Z^{\prime }(x)=G_Z^{\prime }(0) + \int _0^x G_Z^{\prime \prime }(u) du \ge G_Z^{\prime }(0)=-\mathrm{Var}[Z] \end{aligned}$$

and thus, we conclude that

$$\begin{aligned} G_Z(x)=G_Z(0) + \int _0^x G_Z^{\prime }(u) du \ge \mathrm{E}[Z] - \mathrm{Var}[Z]x. \end{aligned}$$

Furthermore, obviously, we have

$$\begin{aligned} G_Z(x)-G_Z(y) = \int _y^x G_Z^{\prime }(u) du \ge - \mathrm{Var}[Z](x-y), \quad 0 \le y \le x, \end{aligned}$$

which implies the desired result. \(\square \)
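The bound just derived can be verified numerically for a concrete frailty. The two-point distribution below is an assumed toy example, chosen so that \(G_Z\) is indeed convex (the hypothesis of the lemma).

```python
import math

# Check of Lemma 4 for G_Z(x) = E[Z exp(-Zx)] / E[exp(-Zx)]:
# G_Z(0) = E[Z], G_Z is decreasing, and (under convexity of G_Z)
# G_Z(x) >= E[Z] - Var[Z] * x.

zs, probs = [1.0, 3.0], [0.7, 0.3]                  # assumed two-point Z
mean = sum(p * z for z, p in zip(zs, probs))        # E[Z]   = 1.6
var = sum(p * z * z for z, p in zip(zs, probs)) - mean ** 2   # Var[Z] = 0.84

def G(x):
    num = sum(p * z * math.exp(-z * x) for z, p in zip(zs, probs))
    den = sum(p * math.exp(-z * x) for z, p in zip(zs, probs))
    return num / den

assert abs(G(0.0) - mean) < 1e-12                   # G_Z(0) = E[Z]
xs = [0.05 * k for k in range(0, 100)]
vals = [G(x) for x in xs]
assert all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))  # decreasing
assert all(G(x) >= mean - var * x - 1e-9 for x in xs)       # Lemma 4 bound
print("Lemma 4 bound verified on the grid")
```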

Appendix E

Proof of Theorem 3

For a given vector \((x_1, \ldots , x_{i-1}, x_{i+1}, \ldots , x_n)\), the following function is defined

$$\begin{aligned} g_{Z,i}(x) = \ln \frac{H_Z(\varLambda _{1,i} (x))}{H_Z(\varLambda _{0,i} (x))} = \ln \frac{\mathrm{E}[e^{- \varLambda _{1,i} (x)Z}]}{\mathrm{E}[e^{- \varLambda _{0,i} (x)Z}]}, \quad \hbox {for all }x \ge 0, \end{aligned}$$

and for \(i=1, \ldots , n\), where \(\varLambda _{j,i}(x)=\varLambda _j (x_1, \ldots ,x_{i-1},x,x_{i+1}, \ldots ,x_n)\) and \(\lambda _{j,i}(x)=\frac{\partial \varLambda _j (x_1, \ldots ,x_{i-1},x,x_{i+1}, \ldots ,x_n)}{\partial x}\), \(j=0, 1\). From assumption (3), it is clear that \( \varLambda _{0,i}(x)-\varLambda _{1,i}(x)\) is increasing, and thus \( \varLambda _{0,i}(x)-\varLambda _{1,i}(x) \ge \varLambda _0(0, \ldots ,0)-\varLambda _1(0, \ldots ,0) \ge 0\) by (4) for all x. Then, we have

$$\begin{aligned} \frac{\partial g_{Z,i}(x)}{\partial x}= & {} g^{\prime }_{Z,i}(x) = -\lambda _{1,i} (x)G_Z( \varLambda _{1,i}(x) ) + \lambda _{0,i}(x)G_Z( \varLambda _{0,i}(x) ) \nonumber \\= & {} (\lambda _{0,i} (x)-\lambda _{1,i} (x))G_Z( \varLambda _{0,i}(x) ) \nonumber \\&+\lambda _{1,i}(x)(G_Z( \varLambda _{0,i}(x))-G_Z( \varLambda _{1,i}(x))). \end{aligned}$$
(9)

Now to show that \( \frac{H_Z(\varLambda _{1,i} (x))}{H_Z(\varLambda _{0,i} (x))}= \frac{\mathrm{E}[e^{- \varLambda _{1,i}(x)Z}]}{\mathrm{E}[e^{- \varLambda _{0,i}(x)Z}]}\) is increasing in \(x \ge 0\), it is sufficient to prove that \(g^{\prime }_{Z,i}(x) \ge 0\), that is, that the expression in (9) is non-negative for all \(x \ge 0\). Observe that from Lemma 4, it follows that

$$\begin{aligned} g^{\prime }_{Z,i}(x) \ge \mathrm{E}[Z] (\lambda _{0,i}(x)-\lambda _{1,i}(x))- \mathrm{Var}[Z] (\lambda _{0,i}(x) \varLambda _{0,i} (x) - \lambda _{1,i}(x) \varLambda _{1,i} (x) ). \end{aligned}$$

Herein, to complete the proof, we have to show that

$$\begin{aligned} \mathrm{E}[Z] (\lambda _{0,i}(x)-\lambda _{1,i}(x))- \mathrm{Var}[Z] (\lambda _{0,i}(x) \varLambda _{0,i} (x) - \lambda _{1,i}(x) \varLambda _{1 ,i} (x) ) \ge 0 \end{aligned}$$

or

$$\begin{aligned} A(x,Z) = \frac{\mathrm{E}[Z]}{\mathrm{Var}[Z]} \frac{(\lambda _{0,i}(x)-\lambda _{1,i}(x))}{(\lambda _{0,i}(x) \varLambda _{0,i} (x) - \lambda _{1,i}(x) \varLambda _{1,i} (x) )} \ge 1. \end{aligned}$$

Note that

$$\begin{aligned} A(x,Z+s) = A(x,Z)+s \frac{\delta (x)}{\mathrm{Var}[Z]}, \quad s > 0, \end{aligned}$$

where \( \delta (x) = \frac{(\lambda _{0,i}(x)-\lambda _{1,i}(x))}{(\lambda _{0,i}(x) \varLambda _{0,i} (x) - \lambda _{1,i}(x) \varLambda _{1,i} (x) )} \).

For \( 0< \alpha < \frac{1}{2}\), let \(a= \alpha (\lambda _{0,i}(x)-\lambda _{1,i}(x))\), \(b= \alpha (\lambda _{1,i}(x)-\lambda _{0,i}(x))\) and \(s = \frac{\mathrm{Var}[Z]}{\overline{\delta }(x)}\), where

$$\begin{aligned} \overline{\delta }(x) = \frac{(\lambda _{0,i}(x)+b)-(\lambda _{1,i}(x)+a)}{(\lambda _{0,i}(x)+b) \varLambda _{0,i}(x) - (\lambda _{1,i}(x)+a) \varLambda _{1,i}(x)}. \end{aligned}$$

Both assumptions \(\lambda _{0,i}(x) > \lambda _{1,i}(x)\) for all \(x \ge 0\) and \( 0< \alpha < \frac{1}{2}\) lead to

$$\begin{aligned} \overline{\delta }(x)= & {} \frac{(1- 2 \alpha ) (\lambda _{0,i}(x)-\lambda _{1,i}(x))}{(1- \alpha ) (\lambda _{0,i}(x) \varLambda _{0,i} (x) - \lambda _{1,i}(x) \varLambda _{1,i} (x))+\alpha (\lambda _{1,i}(x) \varLambda _{0,i} (x) - \lambda _{0,i}(x) \varLambda _{1,i} (x))} \\\ge & {} (1- 2 \alpha ) \delta (x) >0. \end{aligned}$$

Therefore, we have that

$$\begin{aligned} \overline{A}(x,Z+s)= \frac{\mathrm{E}[Z]}{\mathrm{Var}[Z]} \overline{\delta }(x) + s \frac{\overline{\delta }(x)}{\mathrm{Var}[Z]} \ge 1, \end{aligned}$$
(10)

where \(\overline{A}(x,Z+s)\) is obtained from \(A(x,Z)\) by replacing \(\lambda _{0,i}(y)\), \(\lambda _{1,i}(y)\), \( \varLambda _{0,i}(y)\), \( \varLambda _{1,i}(y)\) and Z by \(\lambda _{0,i}(y)+b\), \(\lambda _{1,i}(y)+a\), \( \varLambda _{0,i}(y)+b(y-x)\), \( \varLambda _{1,i}(y)+a(y-x)\) and \(Z+s\), respectively.

Let us now consider the following function of y given in a neighborhood of x: \(\frac{H_{Z+s}(\varLambda _{1,i}(y)+a(y-x))}{H_{Z+s}(\varLambda _{0,i}(y)+b(y-x))} \). Note that the result (10) implies that

\( \overline{g}_{Z+s,i}(y) = \ln \frac{H_{Z+s}(\varLambda _{1,i}(y)+a(y-x))}{H_{Z+s}(\varLambda _{0,i}(y)+b(y-x))}\) satisfies \( \overline{g}_{Z+s,i}^{\prime }(x) \ge 0\), and by continuity of the functions involved, it follows that \(\overline{g}_{Z+s,i}^{\prime }(y) \ge 0\) for all y in a neighborhood of \(x \ge 0\). Observe that

$$\begin{aligned} \frac{H_{Z+s}(\varLambda _{1,i}(y)+a(y-x))}{H_{Z+s}(\varLambda _{0,i}(y)+b(y-x))} = e^{s [(\varLambda _{0,i}(y)-\varLambda _{1,i}(y)) +(b-a)(y-x)]}\frac{H_Z(\varLambda _{1,i}(y)+a(y-x))}{H_Z(\varLambda _{0,i}(y)+b(y-x))} \end{aligned}$$

for y in a neighborhood of x. In addition,

$$\begin{aligned} \overline{g}_{Z+s,i}^{\prime }(x)= & {} s[(\lambda _{0,i}(x)-\lambda _{1,i}(x))+(b-a)] \\&+ (\lambda _{0,i}(x)+b)G_Z( \varLambda _{0,i}(x)) - (\lambda _{1,i}(x)+a)G_Z( \varLambda _{1,i}(x)) \\= & {} s[(1-2 \alpha )(\lambda _{0,i}(x)-\lambda _{1,i}(x))]+ \lambda _{0,i}(x) G_Z( \varLambda _{0,i}(x))\\&- \lambda _{1,i}(x) G_Z( \varLambda _{1,i}(x)) + bG_Z( \varLambda _{0,i}(x))-aG_Z( \varLambda _{1,i}(x)). \end{aligned}$$

For \( \alpha \) tending to \( \frac{1}{2} \) in the foregoing equation, we have that

$$\begin{aligned} g_{Z,i}^{\prime }(x)= & {} \lambda _{0,i}(x) G_Z( \varLambda _{0,i}(x))- \lambda _{1,i}(x) G_Z( \varLambda _{1,i}(x)) \\= & {} \lim _{\alpha \rightarrow \frac{1}{2}}\overline{g}_{Z+s,i}^{\prime }(x) + \frac{1}{2} (\lambda _{0,i}(x)-\lambda _{1,i}(x))( G_Z( \varLambda _{0,i}(x))\\&+G_Z(\varLambda _{1,i}(x))) \ge 0 \end{aligned}$$

and thus Theorem 3 holds. \(\square \)

Appendix F

Proof of Theorem 4

(a) If \({\mathbf X}_0^1 \le _{\mathrm{uo}} {\mathbf X}_0^2\) (i.e., \(\overline{F}_0^1 (\mathbf {x}) \le \overline{F}_0^2 (\mathbf {x})\) for all \({\mathbf x} \in \mathbb {R}^n_+\)), then

$$\begin{aligned} \overline{F}_1^*( {\mathbf x}) =E[ \overline{F}_0^1 (\mathbf {x})^Z] \le E[ \overline{F}_0^2 (\mathbf {x})^Z]= \overline{F}_2^*( {\mathbf x}), \quad {\mathbf x} \in \mathbb {R}^n_+, \end{aligned}$$

and thus, the conclusion holds.

(b) Observe that \( \overline{F}_i^*({\mathbf x})=E[e^{-Z (- \ln \overline{F}_0^i({\mathbf x}))}]\), \( {\mathbf x} \in \mathbb {R}^n_+\) for \(i=1, 2\). The claim in this case is verified if

$$\begin{aligned} \frac{\overline{F}_2^*({\mathbf x})}{\overline{F}_1^*({\mathbf x})} \end{aligned}$$

is increasing in \({\mathbf x}\) on the set \(\{ {\mathbf x}: \overline{F}_1^*({\mathbf x})>0 \}\). Applying Theorem 3 with \( \varLambda _{i-1}({\mathbf x}) = - \ln \overline{F}_0^i({\mathbf x})\), \(i=1,2\), the claim follows since \(\varLambda _0(0,\ldots , 0)=\varLambda _1(0,\ldots , 0)=0\) and condition (3) is derived from the assumptions. \(\square \)
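The comparison in part (b) can be illustrated concretely. All distributional choices below are assumptions, not from the paper: with baseline cumulative hazards \(r_1 x\) and \(r_2 x\), \(r_1 > r_2\), and a gamma frailty, the ratio \(\overline{F}_2^*/\overline{F}_1^*\) should be increasing.

```python
# Illustrative check of the hazard-rate-type comparison: with exponential
# baselines (rates r1 > r2) and gamma frailty, the mixture survival ratio
# F2*(x)/F1*(x) = ((rate + r1*x)/(rate + r2*x))**shape is increasing in x.

shape, rate = 2.0, 3.0          # assumed gamma frailty parameters
r1, r2 = 2.0, 1.0               # r1 > r2, so baseline 1 is stochastically smaller

def mix_sf(x, r):
    """F_i*(x) = E[exp(-Z * r * x)] for the gamma frailty Z."""
    return (rate / (rate + r * x)) ** shape

xs = [0.1 * k for k in range(0, 50)]
ratios = [mix_sf(x, r2) / mix_sf(x, r1) for x in xs]
assert all(a <= b + 1e-12 for a, b in zip(ratios, ratios[1:]))
print(ratios[0], ratios[-1])    # ratio grows from 1 as x increases
```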

Appendix G

Proof of Theorem 5

Note that for \(i=1,2\), the survival function of \({\mathbf X}_i^*\) is given by \( \overline{F}_i^*({\mathbf x}) = E \left[ (\prod _{j=1}^n \overline{F}_{0,j}^i(x_j) )^Z\right] \). Then the pdf \(f_i^*\) of \({\mathbf X}_i^*\) is as follows

$$\begin{aligned} f_i^*({\mathbf x})= & {} (-1)^n \frac{\partial ^n \overline{F}_i^*({\mathbf x}) }{\partial x_1 \cdots \partial x_n} \\= & {} \prod _{j=1}^n \lambda _{0,j}^i(x_j) E \left[ Z^n \left( \prod _{j=1}^n \overline{F}_{0,j}^i(x_j) \right) ^Z\right] \\= & {} \mathrm{E}[Z^n] \prod _{j=1}^n \lambda _{0,j}^i(x_j) E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^i(x_j) \right) ^{Z_n} \right] , \end{aligned}$$

where \(Z_n\) is the random variable with \( dF_{Z_n}(z)= \frac{z^n}{E[Z^n]}dF_Z(z)\), \(z >0\). To prove the claim, we have to show that

$$\begin{aligned}&f_1^*({\mathbf x})f_2^*({\mathbf y}) \le f_1^*({\mathbf x} \wedge {\mathbf y})f_2^*({\mathbf x} \vee {\mathbf y}) \nonumber \\&\quad \Leftrightarrow \prod _{j=1}^n \lambda _{0,j}^1(x_j)\lambda _{0,j}^2(y_j) E \left[ Z^n \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^Z\right] E \left[ Z^n \left( \prod _{j=1}^n \overline{F}_{0,j}^2(y_j) \right) ^Z\right] \nonumber \\&\quad = \mathrm{E}^2[Z^n] \prod _{j=1}^n \lambda _{0,j}^1(x_j)\lambda _{0,j}^2(y_j) E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^{Z_n}\right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(y_j) \right) ^{Z_n}\right] \nonumber \\&\quad \le \mathrm{E}^2[Z^n] \prod _{j=1}^n \lambda _{0,j}^1(x_j \wedge y_j )\lambda _{0,j}^2(x_j \vee y_j) \nonumber \\&\quad \quad \times E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j \wedge y_j) \right) ^{Z_n}\right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(x_j \vee y_j) \right) ^{Z_n}\right] . \end{aligned}$$
(11)

From the assumption that \( \frac{\lambda _{0,j}^2(x)}{\lambda _{0,j}^1(x)}\) is increasing in x for \(j=1, \ldots , n\), it follows that

$$\begin{aligned} \prod _{j=1}^n \lambda _{0,j}^1(x_j) \lambda _{0,j}^2(y_j) \le \prod _{j=1}^n \lambda _{0,j}^1(x_j\wedge y_j) \lambda _{0,j}^2(x_j \vee y_j). \end{aligned}$$

Then, to complete the proof of the inequality in (11), we have to show that

$$\begin{aligned}&E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^{Z_n} \right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(y_j) \right) ^{Z_n} \right] \nonumber \\&\quad \le E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j\wedge y_j) \right) ^{Z_n} \right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(x_j\vee y_j) \right) ^{Z_n} \right] \end{aligned}$$
(12)

for \({\mathbf x}=(x_1, \ldots ,x_n), {\mathbf y}=(y_1, \ldots ,y_n) \in \mathbb {R}^n_+\).

If there exists \(j_0\) such that \( \overline{F}_{0,j_0}^1(x_{j_0})=0\) or there exists \(j_1\) such that \(\overline{F}_{0,j_1}^2(y_{j_1})=0\), then it follows that

$$\begin{aligned}&E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^{Z_n} \right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(y_j) \right) ^{Z_n} \right] =0 \\&\quad \le E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j\wedge y_j) \right) ^{Z_n} \right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(x_j \vee y_j) \right) ^{Z_n} \right] . \end{aligned}$$

Otherwise, if \(\overline{F}_{0,j}^1(x_j)>0\) and \(\overline{F}_{0,j}^2(y_j) >0\) for \(j=1, \ldots , n\), we have that

$$\begin{aligned}&E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^{Z_n} \right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(y_j) \right) ^{Z_n} \right] \\&\quad = E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^{Z_n} \right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(y_j) \right) ^{Z_n} \right] \frac{E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(y_j) \right) ^{Z_n} \right] }{E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(y_j) \right) ^{Z_n} \right] } \\&\quad \le E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j\wedge y_j) \right) ^{Z_n} \right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j\vee y_j) \right) ^{Z_n} \right] \\&\quad \quad \times \frac{E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(x_j\vee y_j) \right) ^{Z_n} \right] }{E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j\vee y_j) \right) ^{Z_n} \right] } \\&\quad = E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j\wedge y_j) \right) ^{Z_n} \right] E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(x_j\vee y_j) \right) ^{Z_n} \right] , \end{aligned}$$

where the above inequality follows since the following two conditions apply:

  1. (i)

the random vector with survival function \(E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^{Z_n} \right] \) is RCSI by Theorem 2(a), since the baseline vector with survival function \( \prod _{j=1}^n \overline{F}_{0,j}^1 (x_j)\) is RCSI;

  2. (ii)

\(\frac{E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^2(x_j) \right) ^{Z_n} \right] }{E \left[ \left( \prod _{j=1}^n \overline{F}_{0,j}^1(x_j) \right) ^{Z_n} \right] }\) is increasing in \({\mathbf x}\) by Theorem 3 with \( \varLambda _{i-1} ({\mathbf x}) = -\sum _{j=1}^n \ln \overline{F}_{0,j}^i(x_j)\), \(i=1,2\), since \( \frac{\partial \varLambda _0 ( {\mathbf x} ) }{\partial x_j} = \lambda _{0,j}^1 (x_j) > \lambda _{0,j}^2 (x_j)= \frac{\partial \varLambda _1 ( {\mathbf x} ) }{\partial x_j}\), \(j=1,2, \ldots , n \), \(\varLambda _0 ( 0, \ldots ,0 )=\varLambda _1 ( 0, \ldots ,0 )=0\), and \(G_{Z_n}(x)= \frac{\mathrm{E}[Z^{n+1}e^{-Zx}]}{\mathrm{E}[Z^ne^{-Zx}]}\).

Thus, the proof of (12) is completed. \(\square \)


Cite this article

Germán Badía, F., Lee, H. On stochastic comparisons and ageing properties of multivariate proportional hazard rate mixtures. Metrika 83, 355–375 (2020). https://doi.org/10.1007/s00184-019-00730-9
