Abstract
In this paper, the estimation of the parameters of a three-parameter Weibull–Gamma distribution based on a progressively type-II right censored sample is studied. The maximum likelihood, Bayes, and parametric bootstrap methods are used to estimate the unknown parameters as well as some lifetime quantities: the reliability function, the hazard function, and the coefficient of variation. Approximate confidence intervals (CIs) for the unknown parameters, reliability function, hazard function and coefficient of variation are constructed based on the s-normal approximation to the asymptotic distribution of the maximum likelihood estimators (MLEs), and on log-transformed MLEs. In addition, two bootstrap CIs are also proposed. Bayes estimates of the unknown parameters and the corresponding credible intervals are obtained using a Gibbs-within-Metropolis–Hastings sampling procedure. Furthermore, the results of the Bayes method are obtained under both the balanced squared error loss and the balanced linear-exponential loss. Analysis of a simulated data set is presented for illustrative purposes. Finally, a Monte Carlo simulation study is carried out to compare the precision of the Bayes estimates with that of the MLEs and the two bootstrap estimates, and to compare the performance of the different corresponding CIs considered.
1 Introduction
In industrial life testing and medical survival analysis, very often the object of interest is lost or withdrawn before failure, or its lifetime is only known within an interval. The resulting sample is then called a censored sample (or an incomplete sample). Major reasons for removing experimental units include saving working units for future use, reducing the total time on test, and lowering the associated cost. Right censoring is one of the censoring techniques used in life-testing experiments. The most common right censoring schemes are type-I and type-II censoring, but these conventional schemes do not have the flexibility of allowing removal of units at points other than the terminal point of the experiment. For this reason, a more general censoring scheme called progressive type-II right censoring has been proposed. Progressive type-II censoring is a useful scheme in which a specified fraction of the individuals at risk may be removed from the experiment at each of several ordered failure times. Schematically, a progressively type-II censored sample can be described as follows. Suppose that n independent items are put on a life test with continuous, identically distributed failure times \(X_{1},X_{2},\ldots ,X_{n}\). Suppose further that a censoring scheme \(\left( R_{1},R_{2},\ldots ,R_{m}\right) \) is fixed in advance such that, immediately following the first failure \(X_{1}\), \(R_{1}\) surviving items are removed from the experiment at random, and immediately following the second failure \(X_{2}\), \(R_{2}\) surviving items are removed at random. This process continues until, at the time of the m th observed failure \(X_{m}\), the remaining \(R_{m}\) surviving items are removed from the test.
The m ordered observed failure times, denoted by \(X_{1:m:n}^{\left( R_{1},\ldots ,R_{m}\right) }\), \( X_{2:m:n}^{\left( R_{1},\ldots ,R_{m}\right) },\ldots ,X_{m:m:n}^{\left( R_{1},\ldots ,R_{m}\right) }\), are called progressively type-II right censored order statistics of size m from a sample of size n with progressive censoring scheme \((R_{1},R_{2},\ldots ,R_{m})\). It is clear that \( n=m+\sum _{i=1}^{m}R_{i}\). The special case \(R_{1}=R_{2}=\cdots =R_{m-1}=0\), so that \(R_{m}=n-m\), is conventional type-II right censored sampling. Also, when \(R_{1}=R_{2}=\cdots =R_{m}=0\), so that \(m=n\), the progressively type-II right censoring scheme reduces to the case of no censoring (ordinary order statistics). Many authors have discussed inference under progressive type-II censoring for different lifetime distributions; see, for example, Basak et al. (2009), Kim et al. (2011), Ng et al. (2005), Balakrishnan and Lin (2003), Asgharzadeh (2006), Madi and Raqab (2009), Fernandez (2004), Mahmoud et al. (2014c) and Soliman et al. (2015). A thorough overview of progressive censoring is given in the excellent review article by Balakrishnan (2007). Aggarwala and Balakrishnan (1998) developed an algorithm to simulate general progressively type-II censored samples from the uniform or any other continuous distribution. The joint probability density function for a progressively type-II censored sample of size m from a sample of size n is given by (for details see Balakrishnan and Aggarwala 2000)
where \(c=n(n-1-R_{1})(n-2-R_{1}-R_{2})\ldots \left( n-\sum _{i=1}^{m-1}(R_{i}+1)\right) \).
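The generation scheme above is straightforward to implement. The following sketch (in Python; the function names are ours) combines the uniform-sample algorithm of Balakrishnan and Sandhu (1995) with the WGD quantile function \(F^{-1}(u)=\left( \lambda \left( (1-u)^{-1/\beta }-1\right) \right) ^{1/\alpha }\), which follows from inverting the CDF:

```python
import math
import random

def wgd_quantile(u, alpha, beta, lam):
    """Inverse CDF of WGD(alpha, beta, lambda): F(x) = 1 - (1 + x**alpha / lam)**(-beta)."""
    return (lam * ((1.0 - u) ** (-1.0 / beta) - 1.0)) ** (1.0 / alpha)

def progressive_type2_sample(alpha, beta, lam, R):
    """Progressively type-II censored sample X_{1:m:n} < ... < X_{m:m:n}
    for scheme R = (R_1, ..., R_m), via the uniform-sample algorithm of
    Balakrishnan and Sandhu (1995) / Aggarwala and Balakrishnan (1998)."""
    m = len(R)
    w = [random.random() for _ in range(m)]
    # V_i = W_i ** (1 / (i + R_m + R_{m-1} + ... + R_{m-i+1}))
    v = [w[i - 1] ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}: progressively censored uniform order stats
    u = [1.0 - math.prod(v[m - i:]) for i in range(1, m + 1)]
    return [wgd_quantile(ui, alpha, beta, lam) for ui in u]

random.seed(7)
R = [1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1]  # m = 20, sum(R) = 10, n = 30
sample = progressive_type2_sample(2.0, 2.0, 3.0, R)
```

By construction the uniform values \(U_{1}<U_{2}<\cdots <U_{m}\) are increasing, so the transformed sample is automatically ordered.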
The Weibull–Gamma distribution is appropriate for modeling the loss of signal in telecommunications known as fading, which occurs when multipath is superimposed on shadowing. The Weibull–Gamma distribution was introduced by Bithas (2009). A random variable X has a Weibull–Gamma distribution if its probability density function (PDF) and the corresponding cumulative distribution function (CDF) are given by
and
The Weibull–Gamma distribution with parameters \(\alpha \), \(\beta \) and \( \lambda \) will be denoted by WGD \(\left( \alpha ,\beta ,\lambda \right) \). Its reliability and hazard functions are given by
and
It is noted that if \(\alpha =1\) and \(\lambda =1\), the WGD reduces to the standard Pareto distribution. For more details about the WGD and its properties see Molenberghs and Verbeke (2011) and Mahmoud et al. (2014a, 2014b). The coefficient of variation is used in numerous areas of science such as biology, economics, and psychology, and in engineering in queueing and reliability theory; see, for example, Sharma and Krishna (1994). Nairy and Rao (2003) gave a summary of uses of the coefficient of variation in a number of areas. Given a set of observations from WGD\(\left( \alpha ,\beta ,\lambda \right) \), the sample coefficient of variation (CV) is often estimated by the ratio of the sample standard deviation to the sample mean, or, equivalently,
where \(E\left( X\right) \) and \(E\left( X^{2}\right) \) are the first and the second moments of the WGD \(\left( \alpha ,\beta ,\lambda \right) \), given by
where \(\Gamma \left( z\right) \) is the gamma function, \(\Gamma \left( z\right) =\int _{0}^{\infty }y^{z-1}e^{-y}dy\). Then, the theoretical CV for the WGD according to (6) is
where
Molenberghs and Verbeke (2011) gave a summary of the Weibull–Gamma frailty model, its infinite moments, and its connection to the generalized log-logistic, logistic, Cauchy, and extreme value distributions. Mahmoud et al. (2014a) discussed recurrence relations for moments of dual generalized order statistics from the WGD and its characterizations. Mahmoud et al. (2014b) established new recurrence relations satisfied by the single and product moments of progressively type-II right censored order statistics from the non-truncated and truncated WGD, and derived approximate moments of progressively type-II right censored order statistics from this distribution.
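Because the moments above involve only gamma functions, the theoretical CV is simple to evaluate. The sketch below uses the raw-moment expression \(E(X^{k})=\lambda ^{k/\alpha }\Gamma (1+k/\alpha )\Gamma (\beta -k/\alpha )/\Gamma (\beta )\), valid for \(\alpha \beta >k\), which we derive from the CDF by noting that \(Y=X^{\alpha }/\lambda \) is Lomax\((\beta )\); the function names are ours, and this is an independent sketch rather than a transcription of (6):

```python
import math

def wgd_moment(k, alpha, beta, lam):
    """k-th raw moment of WGD(alpha, beta, lambda).
    Derived by setting Y = X**alpha / lam, so that Y is Lomax(beta);
    the moment is finite only when alpha * beta > k."""
    if alpha * beta <= k:
        raise ValueError("moment requires alpha * beta > k")
    s = k / alpha
    return lam ** s * math.gamma(1.0 + s) * math.gamma(beta - s) / math.gamma(beta)

def wgd_cv(alpha, beta, lam):
    """Coefficient of variation sqrt(E[X^2] - E[X]^2) / E[X]; needs alpha * beta > 2."""
    m1 = wgd_moment(1, alpha, beta, lam)
    m2 = wgd_moment(2, alpha, beta, lam)
    return math.sqrt(m2 - m1 * m1) / m1

cv = wgd_cv(2.0, 2.0, 3.0)
```

Note that the \(\lambda ^{k/\alpha }\) factors cancel in the ratio, so under this moment formula the CV depends only on \(\alpha \) and \(\beta \).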
The great success story of modern-day Bayesian statistics is the Markov chain Monte Carlo (MCMC) technique, together with its sister method, Gibbs sampling. They permit the numerical calculation of posterior distributions in situations far too complicated for analytic expression; see Brooks (1998) for a review. The Gibbs sampler requires only the specification of the conditional posterior distribution for each parameter. In situations where those distributions are simple to sample from, the approach is easily implemented. In other situations, the more complex Metropolis–Hastings approach needs to be considered; see Gamerman and Carlo (1997) and Gupta et al. (2008). In the present paper, a hybrid strategy combining the Metropolis algorithm within the Gibbs sampler is developed for obtaining samples from the posterior arising from the WGD. To the best of our knowledge, statistical inference for the unknown parameters of the WGD has not yet been studied under progressive type-II censoring. In this paper, maximum likelihood and Bayesian inference for the unknown parameters, as well as the reliability function, hazard function and coefficient of variation, are studied under progressive type-II censoring. The asymptotic confidence intervals of the reliability function, hazard function and coefficient of variation are approximated by the delta and bootstrap methods. An MCMC procedure to estimate the parameters and the corresponding credible intervals is also discussed.
The layout of the paper is as follows. Section 2 discusses the maximum likelihood estimators (MLEs) of the unknown parameters, reliability function, hazard function and coefficient of variation. Asymptotic confidence intervals based on the maximum likelihood estimates are presented in Sect. 3. In Sect. 4, we introduce two parametric bootstrap procedures to construct confidence intervals for the unknown parameters, reliability function, hazard function and coefficient of variation. Section 5 provides the conditional distributions required for implementing the Markov chain Monte Carlo approach. A simulation example to illustrate the approach is given in Sect. 6. Monte Carlo simulation results are presented in Sect. 7. Finally, we conclude the paper in Sect. 8.
2 Maximum likelihood inference
Suppose that \(\underline{x}=X_{1:m:n}\), \(X_{2:m:n},\ldots ,X_{m:m:n}\) is a progressively type-II censored sample drawn from a Weibull–Gamma population whose pdf and cdf are given by (1) and (2), with censoring scheme \( (R_{1},R_{2},\ldots ,R_{m})\). From (1), (2) and (7), the likelihood function is then given by
The log-likelihood function \(\ell =\ln L(\alpha ,\beta ,\lambda |\underline{x })\), without the normalizing constant, is obtained from (11) as
Calculating the first partial derivatives of \(\ell \) with respect to \(\alpha \), \(\beta \) and \(\lambda \) and equating each to zero, we get the likelihood equations as
and
From (14) we obtain the MLE of \(\beta \) as
Since Eqs. (13)–(16) do not have closed-form solutions, the Newton–Raphson iteration method is used to obtain the estimates. The algorithm is described as follows:
-
1.
Use the method of moments or any other method to estimate the parameters \(\alpha \), \(\beta \) and \(\lambda \) as the starting point of the iteration; denote the estimates by \(\left( \alpha _{0},\beta _{0},\lambda _{0}\right) \) and set \(k=0\).
-
2.
Calculate \(\left( \frac{\partial \ell }{\partial \alpha },\frac{ \partial \ell }{\partial \beta },\frac{\partial \ell }{\partial \lambda } \right) _{\left( \alpha _{k},\beta _{k},\lambda _{k}\right) }\) and the inverse of the observed Fisher information matrix, \(I^{-1}\left( \alpha ,\beta ,\lambda \right) \), given in the next section.
-
3.
Update \(\left( \alpha ,\beta ,\lambda \right) \) as
$$\begin{aligned} \left( \alpha _{k+1},\beta _{k+1},\lambda _{k+1}\right) =\left( \alpha _{k},\beta _{k},\lambda _{k}\right) +\left( \frac{\partial \ell }{\partial \alpha },\frac{\partial \ell }{\partial \beta },\frac{\partial \ell }{ \partial \lambda }\right) _{\left( \alpha _{k},\beta _{k},\lambda _{k}\right) }\times I^{-1}\left( \alpha ,\beta ,\lambda \right) . \end{aligned}$$(17) -
4.
Set \(k=k+1\) and then go back to Step 2.
-
5.
Continue the iterative steps until \(\left| \left( \alpha _{k+1},\beta _{k+1},\lambda _{k+1}\right) -\left( \alpha _{k},\beta _{k},\lambda _{k}\right) \right| \) is smaller than a threshold value. The final estimates of \(\left( \alpha ,\beta ,\lambda \right) \) are the MLEs of the parameters, denoted by \((\hat{\alpha },\hat{\beta },\hat{\lambda })\).
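Because \(\hat{\beta }\) is available in closed form for fixed \(\alpha \) and \(\lambda \), an alternative to hand-coded Newton–Raphson is to profile \(\beta \) out and maximize the resulting two-dimensional profile log-likelihood. The sketch below does this with a simple shrinking grid search; it is an illustrative substitute for the algorithm above, and all names, search bounds and tuning constants are ours:

```python
import math
import random

def wgd_quantile(u, a, b, lam):
    return (lam * ((1.0 - u) ** (-1.0 / b) - 1.0)) ** (1.0 / a)

def progressive_sample(a, b, lam, R):
    # Balakrishnan-Sandhu uniform-sample algorithm, then quantile transform
    m = len(R)
    v = [random.random() ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
    return [wgd_quantile(1.0 - math.prod(v[m - i:]), a, b, lam) for i in range(1, m + 1)]

def profile_loglik(a, lam, x, R):
    """Log-likelihood with beta profiled out via its closed form m / T,
    where T = sum (R_i + 1) ln(1 + x_i**a / lam)."""
    m = len(x)
    T = sum((Ri + 1) * math.log(1.0 + xi ** a / lam) for xi, Ri in zip(x, R))
    b = m / T
    ll = (m * math.log(a * b / lam)
          + (a - 1.0) * sum(math.log(xi) for xi in x)
          - sum((b * (Ri + 1) + 1.0) * math.log(1.0 + xi ** a / lam)
                for xi, Ri in zip(x, R)))
    return ll, b

def fit_wgd(x, R, lo=(0.2, 0.2), hi=(6.0, 12.0), rounds=4, grid=15):
    """Shrinking grid search over (alpha, lambda); beta comes from the profile."""
    (a_lo, l_lo), (a_hi, l_hi) = lo, hi
    best = None
    for _ in range(rounds):
        for i in range(grid):
            for j in range(grid):
                a = a_lo + (a_hi - a_lo) * i / (grid - 1)
                lam = l_lo + (l_hi - l_lo) * j / (grid - 1)
                ll, b = profile_loglik(a, lam, x, R)
                if best is None or ll > best[0]:
                    best = (ll, a, b, lam)
        # shrink the search box around the current best point
        _, a, _, lam = best
        da, dl = (a_hi - a_lo) / 4.0, (l_hi - l_lo) / 4.0
        a_lo, a_hi = max(0.05, a - da), a + da
        l_lo, l_hi = max(0.05, lam - dl), lam + dl
    return best  # (loglik, alpha_hat, beta_hat, lambda_hat)

random.seed(1)
R = [1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1]
x = progressive_sample(2.0, 2.0, 3.0, R)
ll_hat, a_hat, b_hat, l_hat = fit_wgd(x, R)
```

The profile trick reduces the search to two dimensions and is robust to poor starting values, at the cost of more likelihood evaluations than Newton–Raphson.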
Moreover, using the invariance property of MLEs, the MLEs of \(S\left( t\right) \), \(h\left( t\right) \) and CV can be obtained after replacing \( \alpha \), \(\beta \) and \(\lambda \) by \(\hat{\alpha }\), \(\hat{\beta }\) and \(\hat{ \lambda }\) as
3 Asymptotic confidence intervals
As indicated by Vander Wiel and Meeker (1990), the most common method to set confidence bounds for the parameters is to use the asymptotic normal distribution of the MLEs. The asymptotic variances and covariances of the MLEs \(\hat{\alpha }\), \(\hat{\beta }\) and \(\hat{\lambda }\) are given by the entries of the inverse of the Fisher information matrix \(I_{ij}=E\left[ -\partial ^{2}\ell \left( \Phi \right) /\partial \phi _{i}\partial \phi _{j} \right] \), where \(i,j=1,2,3\) and \(\Phi =\left( \phi _{1},\phi _{2},\phi _{3}\right) =\left( \alpha ,\beta ,\lambda \right) \). Unfortunately, exact closed forms for the above expectations are difficult to obtain. Therefore, the observed Fisher information matrix \(\hat{I}_{ij}=\left[ -\partial ^{2}\ell \left( \Phi \right) /\partial \phi _{i}\partial \phi _{j} \right] _{\Phi =\hat{\Phi }}\), obtained by dropping the expectation operator E, will be used to construct confidence intervals for the parameters; see Cohen (1965). The observed Fisher information matrix has the second partial derivatives of the log-likelihood function as its entries, which can easily be obtained. Hence, the observed information matrix is given by
Therefore, the asymptotic variance–covariance matrix \([\hat{V}]\) for the MLEs is obtained by inverting the observed information matrix \(\hat{I}\left( \alpha ,\beta ,\lambda \right) \), or, equivalently,
It is well known that under some regularity conditions, see Lawless (1982), \(( \hat{\alpha },\hat{\beta },\hat{\lambda })\) is approximately distributed as multivariate normal with mean \((\alpha ,\beta ,\lambda )\) and covariance matrix \(I^{-1}\left( \alpha ,\beta ,\lambda \right) \). Thus, the \((1-\gamma )100\) % approximate confidence intervals (ACIs) for \(\alpha \), \(\beta \) and \( \lambda \) can be given by
where \(Z_{\gamma /2}\) is the percentile of the standard normal distribution with right-tail probability \(\gamma /2\).
Furthermore, to construct the asymptotic confidence intervals of the reliability function, hazard function and coefficient of variation, we need their variances. To find approximate estimates of the variances of \(\hat{S}\left( t\right) \), \(\hat{h}\left( t\right) \) and \( \widehat{CV}\) we use the delta method discussed in Greene (2000). According to this method, the variances of \(\hat{S}\left( t\right) \), \(\hat{h}\left( t\right) \) and \(\widehat{CV}\) can be approximated, respectively, by
where \(\triangledown \hat{S}\left( t\right) \), \(\triangledown \hat{h}\left( t\right) \) and \(\triangledown \widehat{CV}\) are, respectively, the gradient of \(\hat{S}\left( t\right) \), \(\hat{h}\left( t\right) \) and \( \widehat{CV}\) with respect to \(\alpha \), \(\beta \) and \(\lambda \). Thus, the \( (1-\gamma )100\) % ACIs for \(S\left( t\right) \), \(h\left( t\right) \) and CV can be given by
The main disadvantage of the approximate \((1-\gamma )100\) % CI is that it may yield a negative lower bound even though the parameter takes only positive values; in such a case the negative value is replaced by zero. However, a transformation of the MLE can be used to correct the inadequate performance of the normal approximation. Meeker and Escobar (1998) suggested the use of the normal approximation for the log-transformed MLE. Thus, two-sided \( (1-\gamma )100\) % normal approximation CIs for \(\Omega =\left( \alpha ,\beta ,\lambda ,S\left( t\right) ,h\left( t\right) ,CV\right) \) are given by
where \(\hat{\Omega }=(\hat{\alpha },\hat{\beta },\hat{\lambda },\hat{S}\left( t\right) ,\hat{h}\left( t\right) ,\widehat{CV}).\)
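The delta-method interval for \(S(t)\) can be sketched end to end: approximate the observed information by central finite differences of the log-likelihood, invert it numerically, and combine it with a numerical gradient of \(S(t)\). For brevity this illustration evaluates everything at the true parameter values as stand-ins for the MLEs; step sizes and names are ours:

```python
import math
import random

# --- data: a progressively type-II censored sample (seed fixed for reproducibility) ---
def wgd_quantile(u, a, b, lam):
    return (lam * ((1.0 - u) ** (-1.0 / b) - 1.0)) ** (1.0 / a)

random.seed(3)
R = [1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1]
m = len(R)
v = [random.random() ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
x = [wgd_quantile(1.0 - math.prod(v[m - i:]), 2.0, 2.0, 3.0) for i in range(1, m + 1)]

def loglik(p):
    a, b, lam = p
    return (m * math.log(a * b / lam) + (a - 1.0) * sum(math.log(xi) for xi in x)
            - sum((b * (Ri + 1) + 1.0) * math.log(1.0 + xi ** a / lam)
                  for xi, Ri in zip(x, R)))

def S(t, p):
    a, b, lam = p
    return (1.0 + t ** a / lam) ** (-b)

def hessian(f, p, step=1e-4):
    """Central-difference Hessian of f at p."""
    n = len(p)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            pp = list(p); pp[i] += step; pp[j] += step
            pm = list(p); pm[i] += step; pm[j] -= step
            mp = list(p); mp[i] -= step; mp[j] += step
            mm = list(p); mm[i] -= step; mm[j] -= step
            H[i][j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4.0 * step * step)
    return H

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h_, i) = M
    det = a * (e * i - f * h_) - b * (d * i - f * g) + c * (d * h_ - e * g)
    adj = [[e * i - f * h_, c * h_ - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h_ - e * g, b * g - a * h_, a * e - b * d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

p_hat = (2.0, 2.0, 3.0)  # stand-in for the MLE (assumption, for illustration only)
V = inv3([[-hij for hij in row] for row in hessian(loglik, p_hat)])  # V = I_hat^{-1}

t, h = 0.4, 1e-6
grad = [(S(t, [p_hat[k] + h * (k == i) for k in range(3)])
         - S(t, [p_hat[k] - h * (k == i) for k in range(3)])) / (2.0 * h)
        for i in range(3)]
var_S = max(sum(grad[i] * V[i][j] * grad[j] for i in range(3) for j in range(3)), 0.0)
z = 1.959964  # standard normal 97.5% point
ci = (S(t, p_hat) - z * math.sqrt(var_S), S(t, p_hat) + z * math.sqrt(var_S))
```

The same gradient-times-covariance recipe gives the intervals for \(h(t)\) and CV; the clamping of `var_S` at zero mirrors the truncation of negative bounds discussed above.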
4 Bootstrap confidence intervals
A parametric bootstrap interval provides much more information about the population value of the quantity of interest than a point estimate does. Moreover, confidence intervals based on asymptotic results do not perform very well for small sample sizes. Therefore, two parametric bootstrap procedures are provided to construct bootstrap confidence intervals for \(\alpha \), \(\beta \), \(\lambda \), \(S\left( t\right) \), \(h\left( t\right) \) and CV. The first is the percentile bootstrap (Boot-p) confidence interval based on the idea of Efron (1982). The second is the bootstrap-t (Boot-t) confidence interval proposed by Hall (1988). Boot-t is based on a studentized ‘pivot’ and requires an estimator of the variance of the MLE of \(\alpha \), \(\beta \), \(\lambda \), \(S\left( t\right) \), \(h\left( t\right) \) and CV.
4.1 Parametric Boot-p
-
(1)
Based on the original data \(\underline{x}=x_{1:m:n}\), \( x_{2:m:n},\ldots \), \(x_{m:m:n}\), obtain \(\hat{\alpha }\), \(\hat{\beta }\) and \(\hat{\lambda }\) by solving the likelihood equations (13)–(16).
-
(2)
Based on the pre-specified progressive censoring scheme \(\left( R_{1},R_{2},\ldots ,R_{m}\right) \), generate a progressively type-II censored sample \(\underline{x}^{*}=x_{1:m:n}^{*}\), \(x_{2:m:n}^{*},\ldots \), \( x_{m:m:n}^{*}\) from the WGD with parameters \(\hat{\alpha }\), \(\hat{\beta }\) and \(\hat{\lambda }\), using the algorithm described in Balakrishnan and Sandhu (1995).
-
(3)
Obtain the MLEs based on the bootstrap sample and denote this bootstrap estimate by \(\hat{\psi }^{*}\) (in our case \(\psi \) could be \( \alpha \), \(\beta \), \(\lambda \), \(S\left( t\right) \), \(h\left( t\right) \) or CV).
-
(4)
Repeat Steps (2) and (3) Nboot times, and obtain \(\hat{\psi } _{1}^{*},\hat{\psi }_{2}^{*},\ldots ,\hat{\psi }_{Nboot}^{*}\), where \( \hat{\psi }_{i}^{*}=(\hat{\alpha }_{i}^{*},\hat{\beta }_{i}^{*}, \hat{\lambda }_{i}^{*},\hat{S}_{i}^{*}\left( t\right) ,\hat{h} _{i}^{*}\left( t\right) ,\widehat{CV}_{i}^{*})\), \(i=1,2,3,\ldots ,Nboot\).
-
(5)
Arrange \(\hat{\psi }_{i}^{*}\), \(i=1,2,3,\ldots ,Nboot\), in ascending order and obtain \(\hat{\psi }_{\left( 1\right) }^{*},\hat{\psi } _{\left( 2\right) }^{*},\ldots ,\hat{\psi }_{\left( Nboot\right) }^{*}.\)
Let \(G_{1}(z)=P(\hat{\psi }^{*}\le z)\) be the cumulative distribution function of \(\hat{\psi }^{*}\). Define \(\hat{\psi }_{boot-p}=G_{1}^{-1}(z)\) for given z. The approximate bootstrap-p \(100(1-\gamma )\) % CI of \(\hat{ \psi }\), is given by
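The Boot-p mechanics can be sketched compactly. To keep the example short, it bootstraps only \(\hat{\beta }\), holding \(\alpha \) and \(\lambda \) fixed at assumed estimates so that each refit in Step (3) uses the closed-form conditional MLE \(\hat{\beta }=m/\sum _{i=1}^{m}(R_{i}+1)\ln (1+x_{i}^{\alpha }/\lambda )\); a full implementation would refit all three parameters. All names are ours:

```python
import math
import random

def wgd_quantile(u, a, b, lam):
    return (lam * ((1.0 - u) ** (-1.0 / b) - 1.0)) ** (1.0 / a)

def progressive_sample(a, b, lam, R):
    m = len(R)
    v = [random.random() ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
    return [wgd_quantile(1.0 - math.prod(v[m - i:]), a, b, lam) for i in range(1, m + 1)]

def beta_hat(x, R, a, lam):
    """Closed-form conditional MLE of beta for fixed alpha and lambda."""
    T = sum((Ri + 1) * math.log(1.0 + xi ** a / lam) for xi, Ri in zip(x, R))
    return len(x) / T

random.seed(11)
R = [1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1]
a_hat, lam_hat = 2.0, 3.0            # held fixed at assumed estimates (sketch only)
x = progressive_sample(2.0, 2.0, 3.0, R)   # Step (1): "original" data
b_hat = beta_hat(x, R, a_hat, lam_hat)

# Steps (2)-(5): resample from the fitted model, refit, sort, take percentiles
boots = sorted(beta_hat(progressive_sample(a_hat, b_hat, lam_hat, R), R, a_hat, lam_hat)
               for _ in range(1000))
gamma = 0.05
lo = boots[int((gamma / 2) * 1000)]           # G1^{-1}(gamma / 2)
hi = boots[int((1 - gamma / 2) * 1000) - 1]   # G1^{-1}(1 - gamma / 2)
```

The percentile interval is simply the empirical \(\gamma /2\) and \(1-\gamma /2\) quantiles of the sorted bootstrap replicates.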
4.2 Parametric Boot-t
-
(1)–(3)
The same as the parametric Boot-p.
-
(4)
Based on the asymptotic variance–covariance matrix (20) and the delta method (22), compute, respectively, the variance–covariance matrix \( I^{-1*}\left( \hat{\alpha }^{*},\hat{\beta }^{*},\hat{\lambda } ^{*}\right) \) and the approximate estimates of the variances of \(\hat{S} ^{*}\left( t\right) \), \(\hat{h}^{*}\left( t\right) \) and \(\widehat{CV }^{*}\).
-
(5)
Compute the \(T^{*\psi }\) statistic defined as
$$\begin{aligned} T^{*\psi }=\frac{(\hat{\psi }^{*}-\hat{\psi })}{\sqrt{\widehat{var( \hat{\psi }^{*})}}} \end{aligned}$$ -
(6)
Repeat Steps 2–5 Nboot times and obtain \(T_{1}^{*\psi },T_{2}^{*\psi },\ldots ,T_{Nboot}^{*\psi }.\)
-
(7)
Sort \(T_{1}^{*\psi },T_{2}^{*\psi },\ldots ,T_{Nboot}^{*\psi }\) in ascending order and obtain the ordered sequence \(T_{\left( 1\right) }^{*\psi }\), \(T_{\left( 2\right) }^{*\psi }\), \(\ldots ,T_{\left( Nboot\right) }^{*\psi }\).
Let \(G_{2}(z)=P(T^{*}\le z)\) be the cumulative distribution function of \(T^{*}\) for a given z, define
Then, the approximate bootstrap-t \(100(1-\gamma )\) % CI of \(\hat{\psi }=(\hat{ \alpha },\hat{\beta },\hat{\lambda },\hat{S}\left( t\right) ,\hat{h}\left( t\right) \ \)or \(\widehat{CV})\), is given by
5 Bayes estimation using MCMC
In this section we obtain the Bayes estimates and the corresponding credible intervals of the unknown parameters \(\alpha \), \(\beta \) and \(\lambda \), as well as some lifetime parameters, \(S\left( t\right) \), \(h\left( t\right) \) and CV. It is assumed here that the parameters \(\alpha \), \(\beta \) and \( \lambda \) are independent and follow the gamma prior distributions
where the hyperparameters \(a_{i}\) and \(b_{i}\), \(i=1,2,3\), are assumed to be nonnegative and known. The posterior distribution of the parameters \(\alpha \), \(\beta \) and \(\lambda \), denoted by \(\pi ^{*}(\alpha ,\beta ,\lambda | \underline{x})\), can be obtained, up to proportionality, by combining the likelihood function (11) with the prior (27) via Bayes’ theorem, and it can be written as
Therefore, the Bayes estimate of any function of the parameters, say \( g\left( \alpha ,\beta ,\lambda \right) \), under squared error loss function can be obtained as
It may be noted that the multiple integrals in (29) cannot be evaluated analytically. We therefore use the MCMC technique to generate samples from the posterior distribution and then compute the Bayes estimates of the unknown parameters and construct the corresponding credible intervals. From (28), the joint posterior density function of \( \alpha \), \(\beta \) and \(\lambda \) can be written as
The conditional posterior densities of \(\alpha \), \(\beta \) and \(\lambda \) can be written as
and
It can easily be seen that the conditional posterior density of \(\beta \) given in (32) is a gamma density with shape parameter \(( m+a_{2}) \) and rate parameter \(\left( b_{2}+\sum _{i=1}^{m}(R_{i}+1)\ln \left( 1+\frac{1}{ \lambda }x_{i}^{\alpha }\right) \right) \). Thus, samples of \(\beta \) can easily be generated using any gamma-generating routine. The conditional posteriors of \(\alpha \) and \(\lambda \) in (31) and (33) do not have standard forms, but plots of both show that they are similar to a normal distribution (see Figs. 1, 2); hence Gibbs sampling is not a straightforward option, and the Metropolis–Hastings (M–H) sampler is required for the implementation of the MCMC methodology. Given the conditional distributions in (31)–(33), below is a hybrid algorithm with a Gibbs sampling step for updating the parameter \(\beta \) and M–H steps for updating \(\alpha \) and \(\lambda \). To run the algorithm we start with the MLEs \(\hat{\alpha }\), \(\hat{\beta }\) and \(\hat{\lambda }\). We then draw samples from the various full conditionals, in turn, using the most recent values of all other conditioning variables, until a systematic pattern of convergence is achieved. The following steps illustrate the Metropolis–Hastings-within-Gibbs algorithm:
-
(1):
Start with initial guess \(\left( \alpha ^{\left( 0\right) },\beta ^{\left( 0\right) },\lambda ^{\left( 0\right) }\right) \).
-
(2):
Set \(j=1\).
-
(3):
Generate \(\beta ^{(j)}\) from Gamma\(\left( m+a_{2},b_{2}+\sum _{i=1}^{m}(R_{i}+1)\ln \left( 1+\frac{1}{\lambda } x_{i}^{\alpha }\right) \right) \).
-
(4):
Using the following M-H algorithm, generate \(\alpha ^{(j)}\) and \( \lambda ^{\left( j\right) }\) from \(\pi _{1}^{*}(\alpha ^{\left( j-1\right) }|\beta ^{\left( j\right) },\lambda ^{\left( j-1\right) }, \underline{x})\) and \(\pi _{3}^{*}(\lambda ^{\left( j-1\right) }|\alpha ^{\left( j\right) },\beta ^{\left( j\right) },\underline{x})\) with the normal proposal distributions \(N\left( \alpha ^{\left( j-1\right) },var\left( \alpha \right) \right) \) and \(N\left( \lambda ^{\left( j-1\right) },var\left( \lambda \right) \right) \).
-
(i):
Generate a proposal \(\alpha ^{*}\) from \(N\Big ( \alpha ^{\left( j-1\right) },var\left( \alpha \right) \Big ) \) and \(\lambda ^{*}\) from \(N\Big (\lambda ^{\left( j-1\right) },var\left( \lambda \right) \Big )\).
-
(ii):
Evaluate the acceptance probabilities
$$\begin{aligned} \eta _{\alpha }= & {} \min \left[ 1,\frac{\pi _{1}^{*}(\alpha ^{*}|\beta ^{(j)},\lambda ^{\left( j-1\right) },\underline{x})}{\pi _{1}^{*}(\alpha ^{\left( j-1\right) }|\beta ^{(j)},\lambda ^{\left( j-1\right) },\underline{x })}\right] ,\nonumber \\ \eta _{\lambda }= & {} \min \left[ 1,\frac{\pi _{3}^{*}(\lambda ^{*}|\alpha ^{\left( j\right) },\beta ^{(j)},\underline{x})}{ \pi _{3}^{*}(\lambda ^{\left( j-1\right) }|\alpha ^{\left( j\right) },\beta ^{(j)},\underline{x})}\right] . \end{aligned}$$(34) -
(iii):
Generate \(u_{1}\) and \(u_{2}\) from a Uniform(0,1) distribution.
-
(iv):
If \(u_{1}<\eta _{\alpha }\), accept the proposal and set \(\alpha ^{\left( j\right) }=\alpha ^{*}\), else set \(\alpha ^{\left( j\right) }=\alpha ^{\left( j-1\right) }.\)
-
(v):
If \(u_{2}<\eta _{\lambda }\), accept the proposal and set \( \lambda ^{\left( j\right) }=\lambda ^{*}\), else set \(\lambda ^{\left( j\right) }=\lambda ^{\left( j-1\right) }.\)
-
(5):
Compute the reliability function, hazard function and coefficient of variation as
$$\begin{aligned} \left\{ \begin{array}{l} S^{\left( j\right) }\left( t\right) =\left( 1+\frac{1}{\lambda ^{\left( j\right) }}t^{\alpha ^{\left( j\right) }}\right) ^{-\beta ^{\left( j\right) }},\quad t>0, \\ h^{\left( j\right) }\left( t\right) =\frac{\alpha ^{\left( j\right) }\beta ^{\left( j\right) }}{\lambda ^{\left( j\right) }}t^{\alpha ^{\left( j\right) }-1}\left( 1+\frac{1}{\lambda ^{\left( j\right) }}t^{\alpha ^{\left( j\right) }}\right) ^{-1},\quad t>0, \\ CV^{\left( j\right) }=W\left( \alpha ^{\left( j\right) },\beta ^{\left( j\right) },\lambda ^{\left( j\right) }\right) ,\quad \quad \alpha ^{\left( j\right) }\beta ^{\left( j\right) }>2 \end{array} \right. , \end{aligned}$$(35) -
(6):
Set \(j=j+1.\)
-
(7):
In order to guarantee convergence and to remove the effect of the choice of initial values, the first M simulated variates are discarded. The retained samples \(\alpha ^{(j)}\), \(\beta ^{(j)}\), \(\lambda ^{\left( j\right) }\), \(S^{\left( j\right) }\left( t\right) \), \(h^{\left( j\right) }\left( t\right) \) and \(CV^{\left( j\right) }\), \(j=M+1,\ldots ,N\), for sufficiently large N, form an approximate posterior sample which can be used to compute the Bayes estimates of \(\phi =\alpha ,\beta ,\lambda ,S\left( t\right) \), \(h\left( t\right) \) or CV as
To compute the credible intervals of \(\alpha \), \(\beta \), \(\lambda \), \( S\left( t\right) \), \(h\left( t\right) \) and CV, order \(\alpha ^{(i)}\), \( \beta ^{(i)}\), \(\lambda ^{\left( i\right) }\), \(S^{\left( i\right) }\left( t\right) \), \(h^{\left( i\right) }\left( t\right) \) and \(CV^{\left( i\right) }\), \(i=1,\ldots ,N\), as \(\left\{ \alpha ^{(1)}< \cdots <\alpha ^{(N)}\right\} \), \( \left\{ \beta ^{(1)}< \cdots <\beta ^{(N)}\right\} \), \(\left\{ \lambda ^{(1)}< \cdots <\lambda ^{(N)}\right\} \), \(\left\{ S^{(1)}< \cdots <S^{(N)}\right\} \), \(\left\{ h^{(1)}< \cdots <h^{(N)}\right\} \) and \(\left\{ CV^{(1)}< \cdots <CV^{(N)}\right\} \). Then the \(100(1-\gamma )\) % CRIs of \(\phi =\alpha ,\beta ,\lambda ,S\left( t\right) \), \(h\left( t\right) \) or CV become
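Steps (1)–(7) can be sketched directly: \(\beta \) is drawn from its gamma conditional (shape \(m+a_{2}\), rate \(b_{2}+\sum (R_{i}+1)\ln (1+x_{i}^{\alpha }/\lambda )\)), while \(\alpha \) and \(\lambda \) are updated by random-walk M–H steps on the log posterior. Hyperparameters, proposal standard deviations and chain lengths below are illustrative tuning choices of ours:

```python
import math
import random

def wgd_quantile(u, a, b, lam):
    return (lam * ((1.0 - u) ** (-1.0 / b) - 1.0)) ** (1.0 / a)

random.seed(5)
R = [1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1]
m = len(R)
v = [random.random() ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
x = [wgd_quantile(1.0 - math.prod(v[m - i:]), 2.0, 2.0, 3.0) for i in range(1, m + 1)]

a1 = b1 = a2 = b2 = a3 = b3 = 1.0   # illustrative gamma hyperparameters

def T(a, lam):
    return sum((Ri + 1) * math.log(1.0 + xi ** a / lam) for xi, Ri in zip(x, R))

def log_post(a, b, lam):
    """Log posterior up to a constant: likelihood times independent gamma priors."""
    if min(a, b, lam) <= 0:
        return -math.inf
    ll = (m * math.log(a * b / lam) + (a - 1.0) * sum(math.log(xi) for xi in x)
          - b * T(a, lam) - sum(math.log(1.0 + xi ** a / lam) for xi in x))
    prior = ((a1 - 1) * math.log(a) - b1 * a + (a2 - 1) * math.log(b) - b2 * b
             + (a3 - 1) * math.log(lam) - b3 * lam)
    return ll + prior

a, b, lam = 2.0, 2.0, 3.0           # start at (assumed) MLEs
chain = []
for _ in range(3000):
    # Gibbs step: beta | alpha, lambda ~ Gamma(shape = m + a2, rate = b2 + T)
    b = random.gammavariate(m + a2, 1.0 / (b2 + T(a, lam)))
    # M-H steps for alpha and lambda with normal random-walk proposals
    for name, sd in (("a", 0.3), ("lam", 0.8)):
        cur = a if name == "a" else lam
        prop = random.gauss(cur, sd)
        new_a, new_l = (prop, lam) if name == "a" else (a, prop)
        delta = log_post(new_a, b, new_l) - log_post(a, b, lam)
        if delta >= 0 or random.random() < math.exp(delta):
            a, lam = new_a, new_l
    chain.append((a, b, lam))

burn = 1000                          # discard the first M draws as burn-in
post = chain[burn:]
post_mean = [sum(p[i] for p in post) / len(post) for i in range(3)]
```

Functions of the parameters, such as \(S(t)\), \(h(t)\) and CV, are obtained by evaluating them at each retained draw, exactly as in step (5) above.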
5.1 Bayes estimation using balanced loss functions
In order to make the statistical inferences more practical and applicable, we often need to choose an asymmetric loss function. A number of asymmetric loss functions have been proposed; one of the most popular is the LINEX loss function, introduced by Varian (1975) and studied by several others, among them Ebrahimi et al. (1991). Recently, a more general loss function, called the balanced loss function (see Jozani et al. 2012), has been proposed, of the form
where \(\rho \) is an arbitrary loss function and \(\delta _{0}\) is a chosen ‘target’ estimator of \(\theta \), obtained, for instance, by maximum likelihood, least squares or unbiasedness. The loss \( L_{\rho ,\omega ,\delta _{0}}\), which depends on the observed value of \( \delta _{0}\left( X\right) \), reflects a desire for closeness of \(\delta \) both to the target estimator \(\delta _{0}\) and to the unknown parameter \(\theta \), with the relative importance of these criteria governed by the choice of \( \omega \in [0,1)\). A general development of Bayesian estimators under \(L_{\rho ,\omega ,\delta _{0}}\) is obtained by relating such estimators to the Bayesian solutions of the unbalanced case, i.e., \(L_{\rho ,\omega ,\delta _{0}}\) with \(\omega =0\). \(L_{\rho ,\omega ,\delta _{0}}\) can be specialized to various choices of loss function, such as absolute value, entropy, LINEX and a generalization of squared error losses. In (38), the choice \(\rho \left( \theta ,\delta \right) =\left( \delta -\theta \right) ^{2}\) leads to the balanced squared error loss (BSEL) function (see Ahmadi et al. 2009) of the form
and the corresponding Bayes estimate of the unknown parameter \(\theta \) under balanced squared error loss (BSEL) is given by
The balanced linear-exponential (BLINEX) loss function with shape parameter \( q\) \((q\ne 0)\) is obtained with the choice \(\rho \left( \theta ,\delta \right) =e^{q\left( \delta -\theta \right) }-q\left( \delta -\theta \right) -1\); see Zellner (1986). Hence the Bayes estimator of the unknown parameter \(\theta \) under the BLINEX loss function is given by
It is clear that the balanced loss functions are more general, including the MLE and both the symmetric and asymmetric Bayes estimates as special cases. For example, from (40), with \(\omega =1\) the Bayes estimate under the balanced squared error loss function reduces to the ML estimate, and for \(\omega =0\) it reduces to the Bayes estimate relative to the squared error loss function (symmetric). Also, the Bayes estimator under the balanced LINEX loss function in (41) reduces to the ML estimate when \(\omega =1\), and for \(\omega =0\) it reduces to the case of the LINEX loss function (asymmetric). Let \(\theta =\left( \alpha ,\beta ,\lambda ,S\left( t\right) ,h\left( t\right) ,CV\right) \) and suppose that we judge convergence to have been reached after M iterations of the MCMC algorithm have been performed. Then the approximate posterior mean under balanced squared error loss becomes
Thus, the approximate Bayes estimates of \(\theta =\) \(\alpha ,\beta ,\lambda ,S\left( t\right) \), \(h\left( t\right) \) or CV under BSEL are given by
Similarly, the approximate posterior mean under balanced LINEX loss becomes
Thus, the approximate Bayes estimates for \(\theta =\) \(\alpha ,\beta ,\lambda ,S\left( t\right) \), \(h\left( t\right) \) or CV, under BLINEX are given by
By sorting \(\alpha ^{\left( j\right) },\beta ^{\left( j\right) },\lambda ^{\left( j\right) },S^{\left( j\right) }\left( t\right) \), \(h^{\left( j\right) }\left( t\right) \) and \(CV^{\left( j\right) }\), \(j=M+1,\ldots ,N\), in ascending order and using the method proposed by Chen and Shao (1999), the approximate \(100(1-\gamma )\) % CRIs for \(\theta =\alpha ,\beta ,\lambda ,S\left( t\right) \), \(h\left( t\right) \) or CV are given by
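Given the retained draws and the MLE as the target estimator \(\delta _{0}\), the BSEL and BLINEX point estimates reduce to one-liners. The forms below are the standard balanced-loss expressions (an \(\omega \)-weighted mixture of the MLE and the posterior expectation); the draw values, \(\omega \) and \(q\) are illustrative:

```python
import math

def bsel_estimate(draws, theta_ml, w):
    """Bayes estimate under balanced squared error loss:
    w * MLE + (1 - w) * posterior mean."""
    return w * theta_ml + (1.0 - w) * sum(draws) / len(draws)

def blinex_estimate(draws, theta_ml, w, q):
    """Bayes estimate under balanced LINEX loss with shape q != 0:
    -(1/q) * ln( w * exp(-q * MLE) + (1 - w) * E[exp(-q * theta) | data] )."""
    post = sum(math.exp(-q * t) for t in draws) / len(draws)
    return -(1.0 / q) * math.log(w * math.exp(-q * theta_ml) + (1.0 - w) * post)

draws = [1.8, 2.1, 2.4, 1.9, 2.2, 2.0]   # illustrative posterior draws of alpha
theta_ml = 2.05                           # illustrative MLE as the target delta_0
est_bsel = bsel_estimate(draws, theta_ml, w=0.4)
est_blinex = blinex_estimate(draws, theta_ml, w=0.4, q=1.5)
```

Setting \(\omega =1\) recovers the MLE exactly in both cases, and \(\omega =0\) recovers the usual squared error and LINEX Bayes estimates, matching the special cases noted above.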
6 Numerical computations
In this section, for illustrative purposes, we present a simulation example to check the estimation procedures. In this example, by using the algorithm described in Balakrishnan and Sandhu (1995), we generate a sample from WGD\( \left( \alpha ,\beta ,\lambda \right) \) with the parameters \(\left( \alpha ,\beta ,\lambda \right) =(2,2,3)\), using the progressive censoring scheme CS: (\(n=30\), \(m=20\), \(R=(1\), 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1)). The progressively type-II censored sample is
From (4), (5) and (9) the true values of \(S\left( t=0.4\right) \), \(h\left( t=0.4\right) \) and CV are 0.9013, 0.5063 and 0.8250. Using the iterative algorithm described in Sect. 2, we determine the MLEs of \(\alpha \), \(\beta \) and \(\lambda \) to be \(\hat{\alpha }=2.0515\), \(\hat{\beta }=2.1583\) and \(\hat{\lambda }=3.0525\). Using (18), the MLEs of \(S\left( t\right) \), \( h\left( t\right) \) and CV are \(\hat{S}\left( t=0.4\right) =0.9001\), \(\hat{h }\left( t=0.4\right) =0.5271\) and \(\widehat{CV}=0.6862\). Also, we determined the 95 % confidence intervals for \(\alpha ,\beta ,\lambda ,S\left( t\right) \), \(h\left( t\right) \) and CV based on MLEs and these confidence intervals are presented in Table 1.
Using the algorithms for the bootstrap methods described in Sect. 4, the means of 1000 Boot-p (Bp) and Boot-t (Bt) samples of the lifetime parameters are, respectively,
and
Also, the 95 % bootstrap (Boot-p and Boot-t) confidence intervals (CIs) are displayed in Table 1.
Now we would like to compute the Bayes estimates of \(\alpha ,\) \(\beta ,\) \( \lambda ,\) \(S\left( t\right) \), \(h\left( t\right) \) and CV. We assume informative gamma priors for \(\alpha \), \(\beta \) and \(\lambda \) with hyperparameters \(a_{i}=1\) and \(b_{i}=2\), \(i=1,2,3\). As pointed out earlier, the posterior analysis is carried out with a hybrid strategy combining Metropolis steps within the Gibbs chain. We generate 12000 MCMC samples as suggested in Sect. 5. The initial values of the three parameters \(\alpha ,\) \(\beta \) and \(\lambda \) for running the MCMC sampler algorithm were taken to be their maximum likelihood estimates, i.e. \((\alpha ^{\left( 0\right) },\beta ^{\left( 0\right) },\lambda ^{\left( 0\right) })=( \hat{\alpha },\hat{\beta },\hat{\lambda })\). A practical issue that must be addressed is burn-in, the number of initial iterations that need to be discarded from the generated values: for any starting values \(\alpha ^{\left( 0\right) },\) \( \beta ^{\left( 0\right) }\) and \(\lambda ^{\left( 0\right) }\), the first M values of the generated Markov chain may be far from the stationary distribution. To determine M, a number of diagnostic tests addressing the convergence problem have been proposed in the literature. One of them is the trace plot, which is simply a plot of the sampled values at each iteration, with the x-axis giving the iteration number and the y-axis the sampled value. In a trace plot, a lack of convergence is evidenced by trending in the sampled values, so that the algorithm never levels off to a stable, stationary state. Figure 3 shows the trace plots of the first 10000 MCMC outputs for the posterior distributions of \(\alpha ,\) \(\beta ,\) \(\lambda ,\) \(S\left( t\right) \), \(h\left( t\right) \) and CV. Visually, the MCMC procedure converges very well.
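The Metropolis-within-Gibbs sampler referred to above updates one parameter at a time with a random-walk proposal inside each Gibbs sweep. A minimal sketch, with an independent standard normal target standing in for the actual joint posterior of \(\alpha \), \(\beta \) and \(\lambda \):

```python
import math
import random

def metropolis_within_gibbs(logpost, init, scales, n_iter, rng=random):
    """One-at-a-time random-walk Metropolis updates inside a Gibbs sweep:
    each parameter is proposed from a normal kernel centred at its current
    value and accepted with the usual Metropolis log-ratio."""
    theta = list(init)
    cur = logpost(theta)
    chain = []
    for _ in range(n_iter):
        for k in range(len(theta)):
            prop = list(theta)
            prop[k] += rng.gauss(0.0, scales[k])
            lp = logpost(prop)
            if math.log(rng.random()) < lp - cur:   # accept/reject
                theta, cur = prop, lp
        chain.append(list(theta))
    return chain

# Stand-in target: three independent N(0,1) coordinates; the paper's
# actual target is the joint posterior of alpha, beta and lambda.
random.seed(3)
chain = metropolis_within_gibbs(
    logpost=lambda t: -0.5 * sum(v * v for v in t),
    init=[5.0, -5.0, 5.0], scales=[1.0, 1.0, 1.0], n_iter=2000)
burned = chain[500:]                     # discard burn-in
means = [sum(s[k] for s in burned) / len(burned) for k in range(3)]
print(means)
```

Even with deliberately distant starting values, the post-burn-in coordinate means settle near the target means, which is the behaviour the trace plots in Fig. 3 are used to verify.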
We provide histogram plots of the generated \(\alpha ,\) \(\beta ,\) \( \lambda ,\) \(S\left( t\right) \), \(h\left( t\right) \) and CV in Fig. 4. We discard the first 2000 samples as burn-in; a burn-in of \(M=2000\) samples is enough to erase the effect of the starting point (initial values). The remaining MCMC samples can therefore be used for constructing the approximate credible intervals and for estimating the parameters and any functions of them. A sample of size 10000 is retained to make (approximate) Bayesian inferences, including the posterior mean, median, mode and credible intervals of the parameters of interest, constructed from the 2.5 and 97.5 % quantiles.
Table 1 lists the 95 % credible intervals for the parameters, reliability function, hazard function and coefficient of variation. The MCMC results for the posterior mean, median, mode, standard deviation (S.D) and skewness (Ske) of \(\alpha ,\) \(\beta ,\) \(\lambda ,\) \(S\left( t\right) \), \( h\left( t\right) \) and CV are displayed in Table 2.
The result of Bayes estimates relative to both BSEL and BLINEX with different values of the shape parameter q of LINEX loss function and various values of \(\omega \) for the parameters \(\alpha ,\) \(\beta \) and \( \lambda \) as well as the \(S\left( t=0.4\right) \), \(h\left( t=0.4\right) \) and CV, are displayed in Table 3.
It is well known that the LINEX loss function becomes nearly symmetric as q approaches zero and hence behaves approximately like the squared error loss function itself. Consistent with this, we observed that the resulting estimates for \(q=0.0001\) are approximately equal to the corresponding squared error Bayes estimates.
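This limiting behaviour is easy to verify numerically. Under LINEX loss the Bayes estimate is \(-(1/q)\ln E\left[ e^{-q\theta }\right] \), which can be approximated from posterior draws; as q approaches zero it collapses to the posterior mean. A sketch with simulated normal draws standing in for an actual posterior sample:

```python
import math
import random

def linex_bayes_estimate(draws, q):
    """Bayes estimate under LINEX loss from posterior draws:
    theta_hat = -(1/q) * log E[exp(-q * theta)]."""
    avg = sum(math.exp(-q * t) for t in draws) / len(draws)
    return -math.log(avg) / q

random.seed(11)
draws = [random.gauss(2.0, 0.5) for _ in range(5000)]   # stand-in posterior
post_mean = sum(draws) / len(draws)
# For q = 0.0001 the LINEX estimate is essentially the posterior mean;
# for q = 1 the asymmetry pulls the estimate below the mean.
print(linex_bayes_estimate(draws, 1e-4) - post_mean)
print(linex_bayes_estimate(draws, 1.0) - post_mean)
```

For a normal posterior the LINEX estimate equals the mean minus \(q\sigma ^{2}/2\), so the first difference is negligible while the second is roughly \(-\sigma ^{2}/2\).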
7 Monte Carlo simulation study
In order to compare the estimators of the parameters, as well as the reliability function, hazard function and coefficient of variation of the WGD, Monte Carlo simulations were performed using 1000 progressively type-II censored samples for each simulation. All computations were performed using MATHEMATICA ver. 8. To generate progressively type-II censored samples from the WGD, we used the algorithm proposed by Balakrishnan and Sandhu (1995) with the parameters \(\alpha =2,\) \( \beta =2\) and \(\lambda =3\). We assume informative gamma priors for \( \alpha \), \(\beta \) and \(\lambda \) with hyperparameters \( a_{i}=1\) and \(b_{i}=2\), \(i=1,2,3\). The true values of S(t), \(h\left( t\right) \) and CV at \(t=0.4\) are \(S(0.4)=0.9013,\) \(h\left( 0.4\right) =0.5063\) and \(CV=0.8250\). Based on 10000 MCMC samples, the Bayes estimates of the unknown quantities are derived with respect to two different loss functions, namely the balanced squared error loss (BSEL) and the balanced LINEX (BLINEX) loss functions. The Bayes estimates with respect to the BSEL and BLINEX loss functions are computed for two distinct values of \(\omega \), namely 0 and 0.6. In our study, we consider the following censoring schemes (CSs):
-
CS I:
\(R_{1}=n-m,\) \(R_{i}=0\) for \(i\ne 1\).
-
CS II:
\(R_{(m+1)/2}=n-m,\) \(R_{i}=0\) for \(i\ne (m+1)/2\) if m odd; \( R_{m/2}=n-m,\) \(R_{i}=0\) for \(i\ne m/2\) if m even.
-
CS III:
\(R_{m}=n-m,\) \(R_{i}=0\) for \(i\ne m\).
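The three removal vectors above can be built mechanically; the helper below is a hypothetical illustration that places all \(n-m\) removals at the first, middle, or last observed failure:

```python
def scheme(kind, n, m):
    """Removal vector R for the three censoring schemes of the simulation
    study: all n - m removals at the first failure (I), the middle
    failure (II), or the last failure (III)."""
    R = [0] * m
    if kind == "I":
        R[0] = n - m
    elif kind == "II":
        # R_{(m+1)/2} if m is odd, R_{m/2} if m is even (1-based indices).
        R[(m + 1) // 2 - 1 if m % 2 else m // 2 - 1] = n - m
    else:  # "III"
        R[-1] = n - m
    return R

print(scheme("I", 30, 20), scheme("II", 30, 20), scheme("III", 30, 20))
```

Each vector sums to n - m, so all three schemes observe the same number of failures and differ only in when the survivors are withdrawn.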
The performance of the resulting estimators of \(\alpha ,\) \(\beta ,\) \(\lambda ,\) S(t), \(h\left( t\right) \) and CV has been assessed in terms of the mean square error (MSE), computed as \(MSE=\frac{1}{M} \sum _{i=1}^{M}( \hat{\varphi }_{k}^{\left( i\right) }-\varphi _{k}) ^{2}\) for \(k=1,2,\ldots ,6,\) where \(\varphi _{1}=\alpha ,\) \(\varphi _{2}=\beta ,\) \(\varphi _{3}=\lambda ,\) \(\varphi _{4}=S(t),\) \( \varphi _{5}=h(t)\) and \(\varphi _{6}=CV\). We also compare the CIs obtained from the asymptotic distribution of the MLEs, the two bootstrap CIs, and the MCMC CRIs. The comparison is made in terms of the average CI/credible interval length (ACL) and the coverage percentage (CP). For each simulated sample, we computed the 95 % CIs, checked whether the true value lies within each interval, and recorded the length of the CI. This procedure was repeated 1000 times. The estimated coverage probability was computed as the number of CIs that covered the true value divided by 1000, while the estimated expected width of the CI was computed as the sum of the lengths of all intervals divided by 1000. The MSEs of the estimates are reported in Tables 4, 5, 6, 7, 8 and 9, and the ACL and CP of the 95 % CIs in Tables 10, 11 and 12.
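The MSE, ACL and CP criteria described above are simple averages over the 1000 replicates. A sketch with hand-made toy values (hypothetical helpers, for illustration only):

```python
def mse(estimates, true_value):
    """Mean squared error of the estimates over M simulation replicates."""
    return sum((e - true_value) ** 2 for e in estimates) / len(estimates)

def acl_cp(intervals, true_value):
    """Average interval length (ACL) and coverage percentage (CP) of a
    list of (lower, upper) confidence/credible intervals."""
    lengths = [hi - lo for lo, hi in intervals]
    hits = sum(1 for lo, hi in intervals if lo <= true_value <= hi)
    return sum(lengths) / len(intervals), hits / len(intervals)

# Toy check with three hand-made intervals around a true value of 2.
print(mse([1.9, 2.1, 2.0], 2.0))
print(acl_cp([(1.5, 2.5), (1.8, 2.2), (2.1, 2.6)], 2.0))
```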
8 Conclusion
The purpose of this paper is to develop different methods to estimate, and to construct confidence intervals for, the parameters as well as the reliability function, hazard function and coefficient of variation of the Weibull–Gamma distribution under progressively type-II censored samples. The MLEs of the unknown parameters are obtained, and different confidence intervals are proposed using asymptotic distributions as well as parametric bootstrap methods. Bayesian estimates of the unknown parameters are also proposed. Since the Bayes estimators cannot be obtained in explicit form and would otherwise require numerical integration, we have used the MCMC technique, and it is observed that the Bayes estimates with respect to informative priors work quite well in this case. The Bayes estimates have also been obtained under balanced loss functions. The theoretical results have been applied to a numerical example for illustrative purposes. A simulation study was conducted to examine and compare the performance of the proposed methods for different sample sizes (n, m) and different CSs (I, II, III). From the results, we observe the following:
-
1.
From Tables 4, 5, 6, 7, 8 and 9, it is observed that, as the sample size increases, the MSEs decrease, and the Bayes estimates have the smallest MSEs for \(\alpha ,\) \( \beta ,\) \(\lambda ,\) S(t), \(h\left( t\right) \) and CV. Hence, the Bayes estimates perform better than the MLEs and the bootstrap methods in all cases considered.
-
2.
From Tables 4, 5, 6, 7, 8 and 9, it can be seen that bootstrap-t performs better than the percentile bootstrap and the MLEs, since its MSEs are smaller than those of the percentile bootstrap and the MLEs for \(\alpha ,\) \(\beta ,\) \( \lambda ,\) S(t), \(h\left( t\right) \) and CV.
-
3.
When \(\omega =0,\) the Bayes estimates provide better estimates of \(\alpha ,\) \(\beta ,\) \(\lambda ,\) S(t), \(h\left( t\right) \) and CV in the sense of having smaller MSEs.
-
4.
The Bayes estimates under BLINEX with \(q=0.5\) provide better estimates, in the sense of having smaller MSEs, for both \(\omega =0\) and \(\omega =0.6\).
-
5.
For fixed values of the sample size n and the number of observed failures m, scheme I performs better than schemes II and III in the sense of having smaller MSEs.
-
6.
From Tables 10, 11 and 12, it can be seen that the MCMC CRIs give more accurate results than the approximate CIs and the bootstrap CIs, since the lengths of the former are smaller than those of the latter, for the different sample sizes, numbers of observed failures and schemes.
-
7.
The bootstrap-t CIs are better than the percentile bootstrap CIs and the ACIs in the sense of having smaller widths.
-
8.
For fixed sample sizes and numbers of observed failures, scheme I, in which censoring occurs immediately after the first observed failure, yields shorter interval lengths for all three types of CIs than the other two schemes.
References
Aggarwala R, Balakrishnan N (1998) Some properties of progressive censored order statistics from arbitrary and uniform distribution with application to inference and simulation. J Stat Plan Inference 70:35–49
Ahmadi J, Jozani MJ, Marchand E, Parsian A (2009) Bayes estimation based on k-record data from a general class of distributions under balanced type loss functions. J Stat Plan Inference 139:1180–1189
Asgharzadeh A (2006) Point and interval estimation for a generalized logistic distribution under progressive type II censoring. Commun Stat Theory Methods 35:1685–1702
Balakrishnan N (2007) Progressive censoring methodology: an appraisal (with discussions). TEST 16(2):211–259
Balakrishnan N, Aggarwala R (2000) Progressive censoring: theory, methods, and applications. Birkhauser, Boston
Balakrishnan N, Lin CT (2003) On the distribution of a test for exponentiality based on progressively type-II right censored spacings. J Stat Comput Simul 73:277–283
Balakrishnan N, Sandhu RA (1995) A simple simulation algorithm for generating progressively type-II censored samples. Am Stat 49:229–230
Basak P, Basak I, Balakrishnan N (2009) Estimation for the three parameter lognormal distribution based on progressively censored data. Comput Stat Data Anal 53:3580–3592
Basu AP, Ebrahimi N (1991) Bayesian approach to life testing and reliability estimation using asymmetric loss function. J Stat Plan Inference 29:21–31
Bithas PS (2009) Weibull-gamma composite distribution: an alternative multipath/shadowing fading model. Electron Lett 45:749–751
Brooks SP (1998) Markov chain Monte Carlo method and its application. Statistician 47:69–100
Chen MH, Shao QM (1999) Monte Carlo estimation of Bayesian credible and HPD intervals. J Comput Graph Stat 8:69–92
Cohen AC (1965) Maximum likelihood estimation in the Weibull distribution based on complete and on censored samples. Technometrics 7:579–588
Efron B (1982) The bootstrap and other resampling plans. In: CBMS-NSF regional conference series in applied mathematics. SIAM, Philadelphia
Fernandez AJ (2004) On estimating exponential parameters with general type II progressive censoring. J Stat Plan Inference 121:135–147
Gamerman D (1997) Markov chain Monte Carlo: stochastic simulation for Bayesian inference. Chapman & Hall, London
Greene WH (2000) Econometric analysis, 4th edn. Prentice-Hall, New York
Gupta A, Mukherjee B, Upadhyay SK (2008) A Bayes study using Markov chain Monte Carlo simulation. Reliab Eng Syst Saf 93:1434–1443
Hall P (1988) Theoretical comparison of Bootstrap confidence intervals. Ann Stat 16:927–953
Jozani MJ, Marchand E, Parsian A (2012) Bayes and robust Bayesian estimation under a general class of balanced loss functions. Stat Pap 53:51–60
Kim C, Jung J, Chung Y (2011) Bayesian estimation for the exponentiated Weibull model under type II progressive censoring. Stat Pap 52(1):53–70
Lawless JF (1982) Statistical models and methods for lifetime data. Wiley, New York
Madi MT, Raqab MZ (2009) Bayesian inference for the generalized exponential distribution based on progressively censored data. Commun Stat Theory Methods 38:2016–2029
Mahmoud MAW, Abdel-Aty Y, Mohamed NM, Hamedani GG (2014a) Recurrence relations for moments of dual generalized order statistics from Weibull gamma distribution and Its characterizations. J Stat Appl Probab 3(2):189–199
Mahmoud MAW, Moshref M, Yhiea NM, Mohamed NM (2014b) Progressively censored data from the Weibull–Gamma distribution moments and estimation. J Stat Appl Probab 3(1):45–60
Mahmoud MAW, El-Sagheer RM, Soliman AA, Abd-Ellah AH (2014c) Inferences of the lifetime performance index with Lomax distribution based on progressive type-II censored. Econ Qual Control 29:39–51
Meeker WQ, Escobar LA (1998) Statistical methods for reliability data. Wiley, New York
Molenberghs G, Verbeke G (2011) On the Weibull-Gamma frailty model, its infinite moments, and its connection to generalized log-logistic, logistic, Cauchy, and extreme-value distributions. J Stat Plan Inference 141:861–868
Nairy KS, Rao KN (2003) Tests of coefficients of variation of normal populations. Commun Stat Simul Comput 32:641–661
Ng HKT (2005) Parameter estimation for a modeled Weibull distribution for progressively type-II censored samples. IEEE Trans Reliab 54(3):374–380
Sharma KK, Krishna H (1994) Asymptotic sampling distribution of inverse coefficient of variation and its applications. IEEE Trans Reliab 43:630–633
Soliman AA, Abd-Ellah AH, Abou-Elheggag NA, El-Sagheer RM (2015) Inferences using type-II progressively censored data with binomial removals. Arab J Math 4(2):127–139
Vander Wiel SA, Meeker WQ (1990) Accuracy of approx confidence bounds using censored Weibull regression data from accelerated life tests. IEEE Trans Reliab 39(3):346–351
Varian HR (1975) A Bayesian approach to real estate assessment. In: Fienberg SE, Zellner A (eds) Studies in Bayesian econometrics and statistics in honor of Leonard J. Savage. North-Holland, Amsterdam, pp 195–208
Zellner A (1986) Bayesian estimation and prediction using asymmetric loss functions. J Am Stat Assoc 81:446–451
Acknowledgments
The author would like to express thanks to the editors and referees for their valuable comments and suggestions which significantly improved the paper.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
EL-Sagheer, R.M. Estimation of parameters of Weibull–Gamma distribution based on progressively censored data. Stat Papers 59, 725–757 (2018). https://doi.org/10.1007/s00362-016-0787-2