# Reliability estimation under type-II censored data from the generalized Bilal distribution


## Abstract

The main objective of this article is the estimation of the unknown population parameters and the reliability function of the generalized Bilal model under type-II censored data. Both maximum likelihood and Bayesian estimates are considered. In the Bayesian framework, although we mainly discuss the squared error loss function, any other loss function can easily be considered. A Gibbs sampling procedure is used to draw Markov Chain Monte Carlo (MCMC) samples, which are used to compute the Bayes estimates and also to construct their corresponding credible intervals with the help of two different importance sampling techniques. A simulation study is carried out to examine the accuracy of the resulting Bayesian estimates and to compare them with their corresponding maximum likelihood estimates. An application to a real data set is considered for the sake of illustration.

## Keywords

Maximum likelihood estimation · Fisher information matrix · Bayesian estimation · Gibbs sampling · Importance sampling techniques

## Abbreviations

- CDF
Cumulative distribution function

- CV
Coefficient of variation

- FIM
Fisher information matrix

- GB
The generalized Bilal

- GEP
The generalized exponential-Poisson distribution

- IMSL
International Mathematical and Statistical Library

- IS1
First importance sampling technique

- IS2
Second importance sampling technique

- K-S
Kolmogorov-Smirnov test statistic

- MCMC
Markov Chain Monte Carlo

- ML
Maximum likelihood

- MLE
The maximum likelihood estimate

- MSE
Mean squared error

- PDF
Probability density function

- SEL
Squared error loss function

## Mathematics Subject Classifications

62B15 · 60E05 · 62F10 · 62N02 · 62N05

## Introduction

The generalized Bilal (GB) model coincides with the distribution of the median in a sample of size three from the Weibull distribution. It was first introduced by Abd-Elrahman [1], who showed that its failure rate function can be upside-down bathtub shaped, decreasing, or increasing. Therefore, the GB model can be used for the analysis of several types of practical data.

Suppose that *n* items are put on a life-testing experiment and we observe only the first *r* failure times, say *x*_{1} < *x*_{2} < ⋯ < *x*_{r}. Then, **x** = (*x*_{1}, *x*_{2}, ⋯, *x*_{r})^{′} is called a type-II censored sample. The remaining (*n* − *r*) items are censored and are only known to have lifetimes greater than *x*_{r}. This article is based on a type-II censored sample drawn from the GB model. Type-II censoring has been discussed by many authors, among them Ahmad et al. [2], Raqab [3], Wu et al. [4], Chan et al. [5], ElShahat and Mahmoud [6], and Abd-Elrahman and Niazi [7].

Owing to the *invariance property* of the ML estimators (see, e.g., Dekking et al. [8]), estimates obtained under one parameterization of the model carry over directly to the other. In this article, formula (1) is used as the CDF of the GB distribution. The corresponding probability density function (PDF) and reliability function are, respectively, given by:

The *q*th quantile, *x*_{q}, is an important quantity, especially for generating random variates using the inverse transformation method. In view of (1), following Abd-Elrahman [9], *x*_{q} of the GB distribution is given by:
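Because the closed-form expression for *x*_{q} is not reproduced here, the following Python sketch generates GB variates by inverting the CDF numerically instead. It assumes that the CDF follows from the median-of-three-Weibull representation stated above, with the Weibull CDF taken as 1 − exp(−*β* *x*^{λ}); this parameterization is an assumption of the sketch, not a statement quoted from the article.

```python
import numpy as np
from scipy.optimize import brentq

def gb_cdf(x, beta, lam):
    # CDF of the median of three Weibull variates with CDF G(x) = 1 - exp(-beta*x**lam):
    # P(median <= x) = 3*G(x)**2 - 2*G(x)**3 = 1 - u**2 * (3 - 2*u), where u = 1 - G(x).
    u = np.exp(-beta * x**lam)
    return 1.0 - u**2 * (3.0 - 2.0 * u)

def gb_quantile(q, beta, lam):
    # Numerical stand-in for the closed-form quantile x_q: invert the CDF by root finding.
    upper = 1.0
    while gb_cdf(upper, beta, lam) < q:   # expand the bracket until it contains x_q
        upper *= 2.0
    return brentq(lambda x: gb_cdf(x, beta, lam) - q, 0.0, upper)

def gb_rvs(n, beta, lam, seed=None):
    # Inverse transformation method: one variate per uniform draw.
    rng = np.random.default_rng(seed)
    return np.array([gb_quantile(u, beta, lam) for u in rng.uniform(size=n)])

# example: ten variates with beta = 0.5439 and lam = 0.7468 (the simulation-study values)
print(gb_rvs(10, 0.5439, 0.7468, seed=1))
```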

The layout of this paper is organized as follows:

In the “Maximum likelihood estimation” section, ML estimates of *β* and *λ* are obtained. By using the missing information principle, the variance-covariance matrix of the unknown population parameters is obtained, which is used to construct the asymptotic confidence intervals for *β*, *λ*, and the reliability function *s*(*t*). In the “Bayesian estimation” section, two different importance sampling techniques are introduced. These techniques are used, separately, to compute the Bayes estimates of *β*, *λ*, and *s*(*t*) and also to construct their corresponding credible intervals. In the “Simulation study” section, Monte Carlo simulations are carried out to compare the performances of the proposed estimators.

Further, in the “Data analysis” section, for the sake of illustration, application to a real life-time data set is presented.

## Maximum likelihood estimation

Based on a type-II censored sample **x** drawn from the GB distribution, the joint likelihood function of the population parameters *β* and *λ* is given by:

### When *λ* is known

In this case, for fixed *λ*, say *λ* = *λ*^{(0)}, let *θ* = 1/*β* and \(y_{i}=x_{i}^{\lambda ^{(0)}}\), *i* = 1, 2, ⋯, *r*. Then, *y*_{1}, ⋯, *y*_{r} is a type-II censored sample from the Bilal(*θ*) distribution. Abd-Elrahman and Niazi [7] established the existence and uniqueness theorem for the maximum likelihood estimate (MLE) of the parameter *θ*, say \(\hat \theta _{M}\). The MLE for the parameter *β* is then given by \(\hat \beta _{M}\left (\lambda ^{(0)}\right)=1/\hat \theta _{M}\). Clearly, \(\hat \beta _{M}\left (\lambda ^{(0)}\right)\) exists and is unique.

The likelihood equation for *β* is then given by:

For *ν* = 0, 1, 2, ⋯, we calculate \(\hat \beta _{M}({\lambda }^{(0)})\) by using the following formula:

iteratively until some level of accuracy is reached.

### **Remark 1**

Note that all of the functions *W*_{1} and *W*_{2j}, *j* = 1, 2, ⋯, *r*, which appear in (8), need an initial value for *β*, say \(\hat \beta ^{(0)}\). This initial value can be obtained based on the available type-II censored sample as if it were complete; see Ng et al. [10]. We use the moment estimator of *β* as a starting point for the iterations (8). That is, in view of (3), \(\hat \beta ^{(0)}\) is given by
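Since the display for (9) does not survive in this text, the sketch below only shows one plausible moment-based starting value under explicit assumptions: the censored sample is treated as if complete, *y*_{i} = *x*_{i}^{λ^{(0)}} is taken as Bilal(*θ*) with *θ* = 1/*β* as above, and the Bilal mean is taken to be 5*θ*/6 (an assumed fact, not stated in this article).

```python
import numpy as np

def beta_start(x_censored, lam0):
    # A plausible moment-based starting value for beta (an assumption; the display for (9)
    # is not reproduced here).  Treating the censored sample as if complete, y_i = x_i**lam0
    # is regarded as Bilal(theta) with theta = 1/beta and assumed mean 5*theta/6, so matching
    # the sample mean of y gives beta0 = 5 / (6 * mean(y)).
    y = np.asarray(x_censored, dtype=float) ** lam0
    return 5.0 / (6.0 * y.mean())
```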

### When *β* is known

When *β* is assumed to be known, say *β* = *β*^{(0)}, it follows from (6) that the likelihood equation of *λ* is given by

where *W*_{1} and *W*_{2j}, *j* = 1, 2, ⋯, *r*, are as given by (7) after replacing *β* and *λ*^{(0)} by *β*^{(0)} and *λ*, respectively. In order to establish the existence and uniqueness of the MLE for *λ*, the following theorem is needed.

### **Theorem 1**

For a given fixed value of the parameter *β* = *β*^{(0)}, the MLE for the parameter *λ*, \(\hat \lambda _{M}\left (\beta ^{(0)}\right)\), exists and it is unique.

### *Proof*

See Appendix. □

for *ν*=0,1,2,⋯, where \({\mathcal {G}}_{1}(\cdot,\,\lambda |{\mathbf {x}})\) is as given by (10) and \({\mathcal {G}}_{2}(\cdot,\,\lambda |{\mathbf {x}})\) is the second derivative of ln*L*(·, *λ*|**x**) with respect to (w.r.t.) *λ*, which is given in the “Appendix” section.

### **Remark 2**

An initial value for *λ*, \(\hat \lambda ^{(0)}_{M}\), can be obtained as follows: (1) Calculate the sample coefficient of variation (CV) based on the given type-II censored sample as if it were complete. (2) Equating the sample CV with its corresponding population CV results in an equation in *λ* only. (3) \(\hat \lambda ^{(0)}_{M}\) is the solution of this equation, which provides a good starting point for (11). This technique has been used by, e.g., Kundu and Howlader [11] and Abd-Elrahman [1].
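A sketch of this CV-matching step is given below. The population CV is computed from the moments implied by the median-of-three-Weibull representation stated in the Introduction (a parameterization assumption of the sketch); conveniently, the CV does not depend on *β*, so the resulting equation involves *λ* only, as described in Remark 2.

```python
import numpy as np
from math import gamma
from scipy.optimize import brentq

def gb_cv(lam):
    # Population CV derived from E[X**k] = Gamma(1 + k/lam) * beta**(-k/lam)
    # * (3*2**(-k/lam) - 2*3**(-k/lam)) (an assumption of this sketch); beta cancels,
    # so the CV is a function of lam only.
    def m(k):
        return gamma(1.0 + k / lam) * (3.0 * 2.0 ** (-k / lam) - 2.0 * 3.0 ** (-k / lam))
    return np.sqrt(m(2) / m(1) ** 2 - 1.0)

def lambda_start(x_censored):
    # Step (1): sample CV of the censored data treated as if complete;
    # Steps (2)-(3): solve gb_cv(lam) = sample CV by root finding.
    x = np.asarray(x_censored, dtype=float)
    sample_cv = x.std(ddof=1) / x.mean()
    # the bracket [0.05, 50] is generous but may need widening for extreme data
    return brentq(lambda lam: gb_cv(lam) - sample_cv, 0.05, 50.0)
```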

### When both *β* and *λ* are unknown

In this case, first an initial value for *λ*, \(\hat \lambda ^{(0)}\), can be obtained as described in “When *β* is known” section. Once \(\hat \lambda ^{(0)}\) is obtained, an initial value for the parameter *β*, \(\hat \beta ^{(0)}\), can be calculated as the right hand side of (9) after replacing *λ*^{(0)} by \(\hat \lambda ^{(0)}\).

Then, based on the pair \((\hat \beta ^{(0)},\,\hat \lambda ^{(0)})\), an updated value for *β*, \(\hat \beta ^{(1)}\), can be obtained by using (8). Similarly, based on the pair \((\hat \beta ^{(1)},\,\hat \lambda ^{(0)})\), an updated value for *λ*, \(\hat \lambda ^{(1)}\), can be obtained by using (11), and so on. As a stopping rule, the iterations are terminated at some value *s* < 1000 once a level of accuracy *ε* ≤ 1.2×10^{−7}, measured by the absolute relative errors of the successive estimates of *β* and *λ*, is reached. That is, \(\hat \beta _{M}\,=\,\hat \beta ^{(s)}\) and \(\hat \lambda _{M}\,=\,\hat \lambda ^{(s)}\).

Substituting the MLEs of *β* and *λ* into (4), the MLE of the reliability function *s*(*t*) at some value *t* = *t*_{0} can then be obtained.
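As an independent cross-check on the alternating iterations (8) and (11), the censored log-likelihood can also be maximized directly with a general-purpose optimizer. The sketch below assumes the standard type-II censored likelihood (product of the *r* observed densities times *s*(*x*_{r})^{n−r}, multiplicative constant dropped) and the PDF/reliability forms implied by the median-of-three-Weibull representation; it is an illustrative alternative under these assumptions, not the article's iterative scheme. Applied to the data of the “Data analysis” section, it can serve as a sanity check against the MLEs reported there.

```python
import numpy as np
from scipy.optimize import minimize

def gb_logpdf(x, beta, lam):
    # log f(x) with f(x) = 6*beta*lam*x**(lam-1)*exp(-2*beta*x**lam)*(1 - exp(-beta*x**lam)),
    # the density implied by the median-of-three-Weibull representation (an assumption).
    z = beta * x**lam
    return np.log(6.0 * beta * lam) + (lam - 1.0) * np.log(x) - 2.0 * z + np.log1p(-np.exp(-z))

def gb_logsf(x, beta, lam):
    # log s(x) with s(x) = exp(-2*beta*x**lam)*(3 - 2*exp(-beta*x**lam)), same assumption.
    z = beta * x**lam
    return -2.0 * z + np.log(3.0 - 2.0 * np.exp(-z))

def neg_loglik_type2(par, x_obs, n):
    # Type-II censored log-likelihood (constant dropped): sum of log f over the r observed
    # failures plus (n - r) * log s(x_r), where x_r is the largest observed failure time.
    beta, lam = par
    if beta <= 0.0 or lam <= 0.0:
        return np.inf
    r = len(x_obs)
    return -(gb_logpdf(x_obs, beta, lam).sum() + (n - r) * gb_logsf(x_obs[-1], beta, lam))

def gb_mle(x_obs, n, start=(1.0, 1.0)):
    # Direct numerical maximization, as a check on the profile-type iterations (8) and (11).
    x_obs = np.sort(np.asarray(x_obs, dtype=float))
    res = minimize(neg_loglik_type2, start, args=(x_obs, n), method="Nelder-Mead")
    return res.x  # (beta_hat, lambda_hat)

# the MLE of the reliability at t0 then follows by plugging in, e.g. for t0 = 0.9:
# beta_hat, lam_hat = gb_mle(x_obs, n); s_hat = np.exp(gb_logsf(0.9, beta_hat, lam_hat))
```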

### Fisher information matrix (FIM)

By using the *missing information principle*, the Fisher information matrix (FIM) about the underlying population parameters based on type-II censoring is provided. Suppose that **x** = (*x*_{1}, *x*_{2}, …, *x*_{r})^{′} and **Y** = (*X*_{r+1}, *X*_{r+2}, …, *X*_{n})^{′} denote the ordered observed censored data and the unobserved ordered data, respectively. The vector **Y** can be thought of as the missing data. Combine **x** and **Y** to form the complete data set **W**. It is easy to show that the amount of information about the unknown parameters *β* and *λ*, which is provided by **W**, is given by:

with *c*_{1}=1.92468,*c*_{2}=0.05606,*c*_{3}=1.79061, and *c*_{4}=0.11211.

For *s* = *r* + 1, *r* + 2, …, *n*, the conditional distribution of each *X*_{s} ∈ **Y** given *X*_{s} > *x*_{r} follows the truncated underlying distribution with left truncation at *x*_{r}; see Ng et al. [10]. Therefore, in view of (1) and (3), the PDF of *X*_{s} ∈ **Y** given *X*_{s} > *x*_{r} is given by

The amount of information *I*_{Y}(*β*, *λ*), which is related to the vector **Y**, is then given by

In order to evaluate the expectations involved in (15), calculations of the following expressions are required.

The integrals involved in (17) can be calculated by using a simple numerical integration tool, e.g., Simpson’s rule.

where \(I_{3 }\,=\,{\lim }_{y\,\to \, 0^{+}} I^{(3)}(y)\,=\,-\frac {9}{4}\,+\,2\, \sum _{i=1}^{\infty }\,i^{-3}\,=\,0.154114\,\).
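The integration just mentioned can be carried out with a routine like the one below. Since the expressions in (15)–(17) are not reproduced in this text, the integrand shown is only a hypothetical placeholder illustrating the composite Simpson's rule itself.

```python
import numpy as np

def simpson(f, a, b, m=200):
    # Composite Simpson's rule on [a, b] with m (even) subintervals.
    if m % 2:
        m += 1
    x = np.linspace(a, b, m + 1)
    y = f(x)
    h = (b - a) / m
    return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-2:2].sum())

# hypothetical placeholder integrand; the actual integrands of (17) are not reproduced here
value = simpson(lambda t: np.exp(-2.0 * t) * np.log1p(t), 0.0, 20.0)
```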

The elements *I*_{i j} of *I*_{Y|x}(*β*, *λ*) after division by (*n* − *r*), *i*, *j* = 1, 2, are given by

Note that the elements *I*_{i j}, *i*, *j* = 1, 2, constitute the Fisher information related to each *X*_{s}, *s* = *r*+1, *r*+2, ⋯, *n*, where *X*_{s} is distributed as in (14). Therefore, in view of (19–21), the elements of the FIM about the parameters *β* and *λ* related to the complete data set **W** can be obtained as \(n\, {\lim }_{y\,\to \, 0^{+}}\, I_{i\,j},\,i,\,j=1,2\), which gives the same results as in (13).

The FIM about the parameters *β* and *λ* from a given type-II censored sample, (*x*_{1}, *x*_{2}, ⋯, *x*_{r})^{′}, is then given by

### Asymptotic variances and covariance

Once *I*_{x}(*β*, *λ*) is calculated at \(\beta \,=\,\hat \beta _{M}\) and \(\lambda \,=\,\hat \lambda _{M}\), the asymptotic variance-covariance matrix of the MLEs of the two unknown parameters *β* and *λ* is then given by

The asymptotic variance of the MLE of *s*(*t*_{0}) can then be calculated as the lower bound of the Cramér-Rao inequality for the variance of any unbiased estimator of *s*(*t*_{0}). That is,

The (1 − *α*) 100*%* confidence intervals, ACIs, for \(\hat {\beta }_{M}\), \(\hat {\lambda }_{M}\), and \(\widehat {s(t_{\,0})}_{M}\) are given by

respectively, where \(Z_{\frac {\alpha }{2}}\) is the \((1\,-\,{\frac {\alpha }{2}})\) percentile of the standard normal distribution.
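A small sketch of these Wald-type intervals follows. The worked numbers in the comment use the MLE of *β* and its asymptotic standard error reported in the “Data analysis” section and reproduce, up to rounding, the 99*%* ACI quoted there.

```python
from scipy.stats import norm

def wald_ci(estimate, std_error, alpha=0.05):
    # (1 - alpha)100% asymptotic confidence interval: estimate +/- z_{alpha/2} * SE.
    z = norm.ppf(1.0 - alpha / 2.0)
    return estimate - z * std_error, estimate + z * std_error

# e.g., with the MLE of beta and its asymptotic standard error from the "Data analysis"
# section: wald_ci(0.41417, 0.07576, alpha=0.01) gives roughly (0.219, 0.609),
# in line with the 99% ACI quoted there.
```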

## Bayesian estimation

It is assumed that *β* and *λ* have two independent gamma priors with hyper-parameters *a*_{1} > 0 and *b*_{1} > 0 for *β*, and *a*_{2} > 0 and *b*_{2} > 0 for *λ*. That is,

Moreover, Jeffreys' priors can be obtained as special cases of (24) by setting *a*_{1} = *b*_{1} = *a*_{2} = *b*_{2} = 0.

The hyper parameters can be chosen to suit the prior belief of the experimenter in terms of location and variability of the prior distribution.

The joint posterior density of *β* and *λ* is then given by

where *T*_{1} and *T*_{2} are as given in (6). The Bayes estimate of any function *g*(*β*, *λ*) under a squared error loss function (SEL) is given by

The integrals involved in (26) are usually not obtainable in closed form, but Lindley's approximation [12] may be used to compute such a ratio of integrals. It cannot, however, be used to construct credible intervals. Therefore, following Kundu and Howlader [11], we approximate (26) by using a Gibbs sampling procedure to draw MCMC samples, which can be used to compute the Bayes estimates and also to construct their corresponding credible intervals as suggested by Chen and Shao [13]. We propose the following two different importance sampling techniques.

### First importance sampling technique (IS1)

Now, since \(\pi ^{\star }_{1}(\beta |\lambda,\,{\mathbf {x}})\) follows a gamma distribution, it is quite simple to generate from it. On the other hand, although the function \(\pi ^{\star }_{2}(\lambda |{\mathbf {x}})\) is a proper density, we can use the method developed by Devroye [14] for generating *λ*. This method requires ensuring that (29) has the log-concave density property. Therefore, the following theorem is needed.

### **Theorem 2**

The function \(\pi ^{\star }_{2}(\lambda |{\mathbf {x}})\), given by (29), has a log-concave density function.

**Proof.** See the “Appendix” section.

Using Theorem 2, a simulation-based consistent estimate of *g*(*β*, *λ*) can be obtained by using the following algorithm.

**Algorithm 1.**

Step 1: Generate *λ* from \(\pi ^{\star }_{2}(\cdot |{\mathbf {x}})\), by using the method developed by Devroye [14].

Step 2: Generate *β* from \(\pi ^{\star }_{1}(\cdot |\lambda,\,{\mathbf {x}})\).

Step 3: Repeat Steps 1 and 2 to obtain (*β*_{i},*λ*_{i}), *i*=1, 2, ⋯, *M*.

Step 4: For *i*=1, 2, ⋯, *M*, calculate *g*_{i} as *g*(*β*_{i}, *λ*_{i}); and *ω*_{i} as \(\frac {h_{3}(\beta _{i},\,\lambda _{i})}{\sum _{i=1}^{M}\, h_{3}(\beta _{i},\,\lambda _{i})},\) where *h*_{3}(*β*, *λ*) is as given by (30).

A simulation-based consistent estimate of *g*(*β*, *λ*) and its corresponding estimated variance can then be, respectively, obtained as

### Second importance sampling technique (IS2)

*b*_{2} > 0 and \(\frac {x_{r}}{x_{j}}>1\), *j* = 1, 2, ⋯, *r* − 1. Therefore,

In this technique, since \(\pi ^{\star }_{1}(\beta |\lambda,\,{\mathbf {x}})\) and \(\pi ^{\star }_{3}(\lambda |{\mathbf {x}})\) each follow a gamma distribution, it is quite simple to generate from them. Therefore, it is straightforward that a simulation-based consistent estimate of *g*(*β*, *λ*) can be obtained using the following algorithm:

**Algorithm 2.**

Step 1: Generate *λ*^{⋆} from \(\pi ^{\star }_{3}(\cdot |{\mathbf {x}})\).

Step 2: Generate *β*^{⋆} from \(\pi ^{\star }_{1}(\cdot |\lambda ^{\star },\,{\mathbf {x}})\).

Step 3: Repeat Steps 1 and 2 to obtain \((\beta ^{\star }_{i},\lambda ^{\star }_{i})\), *i*=1, 2, ⋯, *M*.

Step 4: For *i*=1,2, ⋯, *M*, calculate \(g^{\star }_{i}\) as \({g(\beta ^{\star }_{i},\,\lambda ^{\star }_{i})}\); and \(\omega ^{\star }_{i}\) as \(\frac {h_{4}\left (\beta ^{\star }_{i},\,\lambda ^{\star }_{i}\right)}{\sum _{i=1}^{M} h_{4}\left (\beta ^{\star }_{i},\lambda ^{\star }_{i}\right)},\) where *h*_{4}(*β*, *λ*) is as given by (34).

A simulation-based consistent estimate of *g*(*β*, *λ*) and its corresponding estimated variance can then be, respectively, obtained as

By using the idea of Chen and Shao [13], based on (*g*_{i}, *ω*_{i}) (or \((g^{\star }_{i},\,\omega ^{\star }_{i})\)), *i* = 1, 2, ⋯, *M*, the (1 − *α*) 100*%* highest posterior density (HPD) credible interval of *g*(*β*, *λ*) related to the IS1 (or IS2) technique can be easily obtained.
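The sketch below illustrates how, given the weighted draws (*g*_{i}, *ω*_{i}) from Algorithm 1 (or \((g^{\star }_{i},\,\omega ^{\star }_{i})\) from Algorithm 2), the Bayes estimate under SEL and a shortest credible interval in the spirit of Chen and Shao [13] could be computed. The weights are assumed to be already normalized as in Step 4, and the scan below is one possible implementation, not necessarily the exact procedure of [13].

```python
import numpy as np

def bayes_estimate(g, w):
    # Bayes estimate under SEL from importance-weighted draws: sum_i w_i * g_i,
    # with the weights w_i already normalized to sum to one (Step 4 of the algorithms).
    g = np.asarray(g, dtype=float)
    w = np.asarray(w, dtype=float)
    return float(np.sum(w * g))

def credible_interval(g, w, alpha=0.05):
    # Shortest interval containing posterior mass 1 - alpha: sort the draws, accumulate
    # the normalized weights, and scan all candidate intervals (one possible implementation
    # in the spirit of Chen and Shao [13]).
    g = np.asarray(g, dtype=float)
    w = np.asarray(w, dtype=float)
    order = np.argsort(g)
    gs, ws = g[order], w[order]
    cw = np.cumsum(ws)
    best = (gs[0], gs[-1])
    for i in range(len(gs)):
        target = (1.0 - alpha) + cw[i] - ws[i]   # mass of [i, j] equals cw[j] - cw[i] + ws[i]
        j = int(np.searchsorted(cw, target))
        if j >= len(gs):
            break
        if gs[j] - gs[i] < best[1] - best[0]:
            best = (gs[i], gs[j])
    return best
```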

## Simulation study

This section is devoted to comparing the performance of the proposed Bayes estimators with that of the MLEs. We carry out a simulation study using different sample sizes (*n*), different effective sample sizes (*r*), and different priors (non-informative and informative). For the prior information, we have used a non-informative prior, prior 1, with *a*_{1} = *b*_{1} = *a*_{2} = *b*_{2} = 0, and an informative prior, prior 2, with *a*_{1} = 2, *b*_{1} = 4, *a*_{2} = 3, and *b*_{2} = 4.

The IMSL [15] routines *DRNUN* and *DRNGAM* are used in the generation of the uniform and gamma random variates, respectively.
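Outside the IMSL environment, a single type-II censored GB sample can be simulated directly from the median-of-three-Weibull representation stated in the Introduction, as in the sketch below; the Weibull CDF 1 − exp(−*β* *x*^{λ}) used there is again a parameterization assumption of the sketch.

```python
import numpy as np

def gb_sample(n, beta, lam, seed=None):
    # Each GB variate is the median of three Weibull variates (the representation stated
    # in the Introduction); the Weibull CDF 1 - exp(-beta*x**lam) is an assumed parameterization.
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n, 3))
    w = (-np.log1p(-u) / beta) ** (1.0 / lam)   # Weibull variates by inversion
    return np.median(w, axis=1)

def type2_censored_sample(n, r, beta, lam, seed=None):
    # One replication: the first r order statistics out of n simulated lifetimes.
    return np.sort(gb_sample(n, beta, lam, seed))[:r]

# e.g., one replication of the design above with the generated "true" values
x_obs = type2_censored_sample(n=25, r=15, beta=0.5439, lam=0.7468, seed=0)
```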

First, we generate true values of *β* and *λ* from the gamma(*a*_{1}, *b*_{1}) and gamma(*a*_{2}, *b*_{2}) distributions, respectively. These generated values are *β*_{0} = 0.5439 and *λ*_{0} = 0.7468. The corresponding value of the reliability function calculated at *t*_{0} = 0.9 is 0.8299. Second, we generate 5000 samples from the GB distribution with *β* = 0.5439 and *λ* = 0.7468. For the importance sampling techniques (IS1 and IS2), we set *M* = 15,000 when we apply Algorithm 1 or 2. The average estimate of *𝜗*^{⋆} and the associated mean squared error (MSE) are computed, respectively, as:

where *𝜗*^{⋆} denotes the estimate of *β*, *λ*, or *s*(0.9) at the *k*th replication, and *𝜗* stands for *β*_{0} = 0.5439, *λ*_{0} = 0.7468, or *s*(0.9) = 0.8299.
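These two Monte Carlo summaries can be computed, for each quantity, along the lines of the following sketch, in which `estimates` is assumed to hold the 5000 replication estimates.

```python
import numpy as np

def mc_summary(estimates, true_value):
    # Average estimate and MSE over the replications, matching the definitions above;
    # `estimates` holds the replication estimates of beta, lambda, or s(0.9).
    est = np.asarray(estimates, dtype=float)
    return est.mean(), np.mean((est - true_value) ** 2)
```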

Average estimates of *β* and the associated MSEs (for each (*n*, *r*) combination, the first row gives the average estimate and the second row the associated MSE)

| *n* | *r* | MLE | Bayes prior 1 (IS1) | Bayes prior 1 (IS2) | Bayes prior 2 (IS1) | Bayes prior 2 (IS2) |
|---|---|---|---|---|---|---|
| 25 | 15 | 0.5535 | 0.5189 | 0.5381 | 0.5274 | 0.5381 |
| | | 0.0134 | 0.0144 | 0.0129 | 0.0109 | 0.0101 |
| | 20 | 0.5432 | 0.5118 | 0.5411 | 0.5216 | 0.5413 |
| | | 0.0104 | 0.0122 | 0.0099 | 0.0097 | 0.0083 |
| | 25 | 0.5405 | 0.5121 | 0.5478 | 0.5200 | 0.5456 |
| | | 0.0096 | 0.0115 | 0.0091 | 0.0092 | 0.0077 |
| 30 | 20 | 0.5476 | 0.4971 | 0.5256 | 0.5096 | 0.5291 |
| | | 0.0093 | 0.0119 | 0.0097 | 0.0093 | 0.0080 |
| | 25 | 0.5427 | 0.4945 | 0.5362 | 0.5072 | 0.5362 |
| | | 0.0083 | 0.0112 | 0.0082 | 0.0088 | 0.0072 |
| | 30 | 0.5412 | 0.4955 | 0.5412 | 0.5069 | 0.5397 |
| | | 0.0079 | 0.0108 | 0.0075 | 0.0087 | 0.0067 |
| 40 | 30 | 0.5447 | 0.4647 | 0.5117 | 0.4784 | 0.5149 |
| | | 0.0060 | 0.0125 | 0.0071 | 0.0098 | 0.0063 |
| | 35 | 0.5428 | 0.4647 | 0.5236 | 0.4764 | 0.5250 |
| | | 0.0057 | 0.0121 | 0.0060 | 0.0097 | 0.0055 |
| | 40 | 0.5421 | 0.4656 | 0.5294 | 0.4775 | 0.5279 |
| | | 0.0056 | 0.0116 | 0.0056 | 0.0095 | 0.0051 |

Average estimates of *λ* and the associated MSEs (for each (*n*, *r*) combination, the first row gives the average estimate and the second row the associated MSE)

| *n* | *r* | MLE | Bayes prior 1 (IS1) | Bayes prior 1 (IS2) | Bayes prior 2 (IS1) | Bayes prior 2 (IS2) |
|---|---|---|---|---|---|---|
| 25 | 15 | 0.8355 | 0.8570 | 0.8167 | 0.8264 | 0.8006 |
| | | 0.0499 | 0.0597 | 0.0465 | 0.0363 | 0.0300 |
| | 20 | 0.8049 | 0.8248 | 0.7794 | 0.8063 | 0.7736 |
| | | 0.0274 | 0.0339 | 0.0236 | 0.0234 | 0.0180 |
| | 25 | 0.7889 | 0.8056 | 0.7431 | 0.7943 | 0.7456 |
| | | 0.0172 | 0.0216 | 0.0147 | 0.0159 | 0.0122 |
| 30 | 20 | 0.8095 | 0.8477 | 0.7976 | 0.8223 | 0.7875 |
| | | 0.0306 | 0.0437 | 0.0298 | 0.0291 | 0.0222 |
| | 25 | 0.7928 | 0.8278 | 0.7707 | 0.8093 | 0.7684 |
| | | 0.0204 | 0.0290 | 0.0183 | 0.0210 | 0.0148 |
| | 30 | 0.7817 | 0.8123 | 0.7306 | 0.7995 | 0.7344 |
| | | 0.0136 | 0.0201 | 0.0128 | 0.0153 | 0.0109 |
| 40 | 30 | 0.7857 | 0.8543 | 0.7774 | 0.8318 | 0.7738 |
| | | 0.0165 | 0.0335 | 0.0174 | 0.0242 | 0.0143 |
| | 35 | 0.7782 | 0.8400 | 0.7588 | 0.8248 | 0.7589 |
| | | 0.0128 | 0.0257 | 0.0125 | 0.0197 | 0.0107 |
| | 40 | 0.7720 | 0.8272 | 0.7036 | 0.8151 | 0.7089 |
| | | 0.0094 | 0.0185 | 0.0117 | 0.0151 | 0.0102 |

Average estimates of *s*(0.9) and the associated MSEs (for each (*n*, *r*) combination, the first row gives the average estimate and the second row the associated MSE)

| *n* | *r* | MLE | Bayes prior 1 (IS1) | Bayes prior 1 (IS2) | Bayes prior 2 (IS1) | Bayes prior 2 (IS2) |
|---|---|---|---|---|---|---|
| 25 | 15 | 0.8284 | 0.8570 | 0.8403 | 0.8487 | 0.8393 |
| | | 0.0079 | 0.0088 | 0.0075 | 0.0067 | 0.0060 |
| | 20 | 0.8344 | 0.8601 | 0.8355 | 0.8517 | 0.8350 |
| | | 0.0067 | 0.0080 | 0.0062 | 0.0063 | 0.0052 |
| | 25 | 0.8357 | 0.8590 | 0.8287 | 0.8523 | 0.8304 |
| | | 0.0064 | 0.0077 | 0.0058 | 0.0061 | 0.0049 |
| 30 | 20 | 0.8310 | 0.8727 | 0.8482 | 0.8618 | 0.8449 |
| | | 0.0059 | 0.0080 | 0.0062 | 0.0062 | 0.0051 |
| | 25 | 0.8339 | 0.8736 | 0.8385 | 0.8629 | 0.8384 |
| | | 0.0055 | 0.0077 | 0.0053 | 0.0060 | 0.0046 |
| | 30 | 0.8346 | 0.8722 | 0.8328 | 0.8627 | 0.8342 |
| | | 0.0053 | 0.0075 | 0.0049 | 0.0059 | 0.0044 |
| 40 | 30 | 0.8318 | 0.8985 | 0.8577 | 0.8866 | 0.8550 |
| | | 0.0040 | 0.0089 | 0.0048 | 0.0069 | 0.0043 |
| | 35 | 0.8329 | 0.8978 | 0.8474 | 0.8879 | 0.8464 |
| | | 0.0038 | 0.0086 | 0.0041 | 0.0068 | 0.0037 |
| | 40 | 0.8331 | 0.8966 | 0.8405 | 0.8866 | 0.8419 |
| | | 0.0038 | 0.0082 | 0.0038 | 0.0067 | 0.0034 |

From the tables above, the following observations can be made:

1) As expected, the MSEs of all estimates (ML or Bayes) decrease as *n* or *r* increases.

2) The Bayes estimators under prior 1 or prior 2 obtained by using the IS2 technique are mainly better than the corresponding estimators obtained by using the IS1 technique in terms of average bias and MSE.

3) In all cases, the MSEs of the MLEs are smaller than those of the corresponding Bayes estimators under prior 1 obtained by using the IS1 technique.

On the other hand, the performances in terms of average bias and the MSE of the Bayes estimators under prior 1 by using IS2 technique and the MLE are very similar.

4) For small and moderate sample or censoring sizes, the Bayes estimators under prior 2 by using IS2 technique clearly outperform the MLEs in terms of average bias and MSE.

5) For large sample or censoring sizes, the performances in terms of average bias and the MSE of the Bayes estimators under prior 2 with IS2 technique and the MLE are very similar.

## Data analysis

This section is concerned with illustrating the methods presented in the “Maximum likelihood estimation” and “Bayesian estimation” sections using a real data set. The data set, from Hinkley [16], consists of thirty successive values of March precipitation in Minneapolis/St. Paul. The data points, in inches, are as follows:

0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.81, 0.9, 0.96, 1.18, 1.20, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89, 1.95, 2.05, 2.10, 2.20, 2.48, 2.81, 3.0, 3.09, 3.37, 4.75.

These data were used by Barreto-Souza and Cribari-Neto [17] in fitting the generalized exponential-Poisson (GEP) distribution, and by Abd-Elrahman [1, 9] in fitting the Bilal and GB distributions. For the complete sample case, the MLEs of *β* and *λ* are 0.4168 and 1.2486, respectively, obtained as described in the “Maximum likelihood estimation” section with *r* = *n*. The negative of the log-likelihood, the Kolmogorov-Smirnov (K-S) test statistic, and its corresponding *p* value related to these MLEs are 38.1763, 0.0532, and 1.0, respectively. Based on this *p* value, the GB distribution is found to fit the data very well. These results agree with those in Abd-Elrahman [1], where in (2) the MLEs of *θ* and *λ* are equal to 0.4168^{−1/1.2486} = 2.016 and 1.2486, respectively.
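The reported complete-sample fit can be re-examined along the following lines. The sketch assumes the CDF form used in the earlier sketches and simply plugs in the reported MLEs, so the resulting K-S statistic and *p* value can be compared with the values quoted above.

```python
import numpy as np
from scipy.stats import kstest

# March precipitation data (inches) from Hinkley [16], as listed above
precip = np.array([0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.81, 0.90, 0.96, 1.18,
                   1.20, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89,
                   1.95, 2.05, 2.10, 2.20, 2.48, 2.81, 3.00, 3.09, 3.37, 4.75])

def gb_cdf(x, beta, lam):
    # GB CDF under the median-of-three-Weibull representation (a parameterization assumption)
    u = np.exp(-beta * np.asarray(x, dtype=float) ** lam)
    return 1.0 - u**2 * (3.0 - 2.0 * u)

# plug in the complete-sample MLEs reported above: beta = 0.4168, lambda = 1.2486
stat, p_value = kstest(precip, lambda x: gb_cdf(x, 0.4168, 1.2486))
print(stat, p_value)   # to be compared with the reported K-S statistic 0.0532 and p value 1.0
```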

Solving the CV equation described in Remark 2 for *λ* results in the unique solution \(\hat \lambda ^{(0)}\) = 1.7385. Based on this value of *λ*, it follows from (9) that \(\hat \beta ^{(0)}\) is calculated as 0.6147. The iterative scheme described in the “Maximum likelihood estimation” section, started from these initial values, converges to \(\hat \beta _{M}\,=\,0.41417\) and \(\hat \lambda _{M}\,=\, 1.29926\) with a level of accuracy less than 1.2×10^{−10} of the absolute relative errors. From these data, the asymptotic standard errors of \(\hat \beta _{M}\) and \(\hat \lambda _{M}\) are 0.07576 and 0.24595, respectively.

The MLE of *s*(0.9) and its corresponding asymptotic standard error are 0.78002 and 0.06340, respectively. The 99 % ACIs for *β*, *λ*, and *s*(0.9) are (0.21897, 0.60938), (0.66575, 1.93278), and (0.61672, 0.94331), respectively.

On the other hand, the simulation study given in the “Simulation study” section shows that the Bayes estimators obtained by using the IS2 technique are better than the corresponding estimators obtained by using the IS1 technique in terms of average bias and MSE. Therefore, under the non-informative prior, we compute the Bayes estimates by generating an importance sample of size *M* = 15,000 with its corresponding importance weights according to Algorithm 2. The Bayes estimates of *β*, *λ*, and *s*(0.9), together with their corresponding standard errors (given in parentheses), are \(\hat \beta _{IS2}= 0.39034 \, (0.04907)\), \(\hat \lambda _{IS2}= 1.34910\, (0.19207)\), and \(\widehat {s(0.9)}_{IS2}=0.79899\, (0.03866)\), respectively. The 99 *%* credible intervals for *β*, *λ*, and *s*(0.9) are (0.24320, 0.43781), (0.85632, 1.92996), and (0.73657, 0.91060), respectively.

## Concluding remarks

(1) In this article, the ML and Bayes estimates of the parameters, as well as of the reliability function, of the GB distribution based on a given type-II censored sample are obtained.

(2) The existence and uniqueness theorem for the ML estimator of the population parameter *λ*, when *β* is assumed to be known, is established. An iterative procedure for finding the ML estimators of the two unknown population parameters is also provided. The elements of the FIM are obtained, and they have been used in turn for calculating the asymptotic confidence intervals of *λ*, *β*, and the reliability function.

(3) Two different importance sampling techniques have been proposed, which can be used for further Bayesian studies.

## Appendix

**Proof of Theorem 1**

The second derivative of ln *L*(*β*, *λ*|**x**), \({\mathcal {G}}_{2}(\beta,\,\lambda |{\mathbf {x}})\), w.r.t. *λ* is given by

where \(z\,=\,{\beta \,x^{\lambda }_{r}}\), *f*_{1}(*z*) = *e*^{z}[*z* + *e*^{z}(1 − *e*^{−z})(3 − 2 *e*^{−z})], \(y_{j}\,=\, {\beta \,x^{\lambda }_{j}}\), *j*=1, 2, ⋯, *r*, and \(\phantom {\dot {i}\!}f_{2}(y_{j})\,=\, 2\, {e^{2\,y_{j}}}\,-\,5\, {e^{y_{j}}}\!+3\,+\, y_{j}\, {e^{y_{j}}}\).

Now, in order to prove that \({\mathcal {G}}_{2}(\beta,\,\lambda |{\mathbf {x}})<0\), it suffices to show that *f*_{1}(*z*) > 0 and *f*_{2}(*y*_{j}) > 0. It is clear that *f*_{1}(*z*) > 0. On the other hand, by expanding the exponential functions involved in *f*_{2}(*y*_{j}) about *y*_{j} = 0, *f*_{2}(*y*_{j}) can be rewritten as

Therefore, \(\frac {\partial ^{2}{\ln L(\beta,\,\lambda |{\mathbf {x}})}}{\partial {\lambda ^{2}}}<0\). This implies that the ML estimate, \(\hat \lambda _{M}\), for *λ* is unique.

For the existence part, the likelihood equation for *λ* can be rewritten as *h*_{1}(*λ*) = *h*_{2}(*λ*), where *h*_{1}(*λ*) = *r*/*λ* and

where *W*_{1} and *W*_{2j}, *j*=1, 2, ⋯, *r*, are as given in (10).

which implies that *ℓ*_{1} < *ℓ*_{2}. Therefore, *h*_{2}(*λ*) is an increasing function of *λ*. But *h*_{1}(*λ*) is a positive, strictly decreasing function with right limit +*∞* at 0. This ensures that *h*_{1}(*λ*) = *h*_{2}(*λ*) holds exactly once at some value *λ* = *λ*^{◇}. Hence, the theorem is proved.

**Proof of Theorem 2**

The second derivative of the logarithm to the base *e* of \(\pi ^{\star }_{2}(\lambda |{\mathbf {x}})\) w.r.t. *λ* is given by

*ξ*_{1} = *ξ*^{″}(*λ*) *ξ*(*λ*) − {*ξ*^{′}(*λ*)}^{2} > 0. This is true because

## Notes

### Acknowledgements

The author would like to express his sincere thanks to the editors and the referees for their helpful comments, which improved the presentation of this article. This article was presented at the International Conference on Mathematics, Trends and Development (ICMTD17), The Egyptian Mathematical Society, 28–30 December 2017, Cairo, Egypt; its ID number is STA-12.

### Funding

The author declares that this work received no funding.

### Authors’ contributions

The author read and approved the final manuscript.

### Competing interests

The author declares that he has no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

- 1. Abd-Elrahman, A. M.: A new two-parameter lifetime distribution with decreasing, increasing or upside-down bathtub shaped failure rate. Commun. Stat. Theor. M. 46(18), 8865–8880 (2016).
- 2. Ahmad, K. E., Moustafa, H. M., Abd-Elrahman, A. M.: Approximate Bayes estimation for mixtures of two Weibull distributions under type-2 censoring. J. Stat. Comput. Sim. 58, 269–285 (1997).
- 3. Raqab, M. Z.: Exact bounds for the mean of total time on test under type-II censoring samples. J. Stat. Plan. Infer. 134(2), 318–331 (2005).
- 4. Wu, J., Wu, C., Tsai, M.: Optimal parameter estimation of the two-parameter bathtub-shaped lifetime distribution based on a type II right censored sample. Appl. Math. Comput. 167(2), 807–819 (2005).
- 5. Chan, P. S., Ng, H. K. T., Balakrishnan, N., Zhou, Q.: Point and interval estimation for extreme-value regression model under type-II censoring. Comput. Stat. Data Anal. 52, 4040–4058 (2008).
- 6. ElShahat, M. A. T., Mahmoud, A. A. M.: A study on the mixture of exponentiated-Weibull distribution, part II (the method of Bayesian estimation). Pak. J. Stat. Oper. Res. XII(4), 709–737 (2016).
- 7. Abd-Elrahman, A. M., Niazi, S. F.: Approximate Bayes estimators applied to the Bilal model. J. Egypt. Math. Soc. 25, 65–70 (2017). http://doi.org/10.1016/j.joems.2016.05.001.
- 8. Dekking, F. M., Kraaikamp, C., Lopuhaa, H. P., Meester, L. E.: A Modern Introduction to Probability and Statistics: Understanding Why and How. Springer-Verlag, London (2005). ISBN 1-85233-896-2.
- 9. Abd-Elrahman, A. M.: Utilizing ordered statistics in lifetime distributions production: a new lifetime distribution and applications. J. Probab. Stat. Sci. 11(2), 153–164 (2013).
- 10. Ng, H. K. T., Chan, P. S., Balakrishnan, N.: Estimation of parameters from progressively censored data using EM algorithm. Comput. Stat. Data Anal. 39, 371–386 (2002).
- 11. Kundu, D., Howlader, H.: Bayesian inference and prediction of the inverse Weibull distribution for type-II censored data. Comput. Stat. Data Anal. 54, 1547–1558 (2010).
- 12. Lindley, D. V.: Approximate Bayesian method. Trabajos de Estadistica. 31, 223–237 (1980).
- 13. Chen, M.-H., Shao, Q.-M.: Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 8(1), 69–92 (1999).
- 14. Devroye, L.: A simple algorithm for generating random variates with a log-concave density function. Computing 33, 247–257 (1984).
- 15. IMSL: IMSL STAT/LIBRARY User's Manual. IMSL, Inc., Houston (1991). https://m.tau.ac.il/~vaxman/imsl/imsl1_77.pdf.
- 16. Hinkley, D.: On quick choice of power transformations. Appl. Stat. 26, 67–96 (1977).
- 17. Barreto-Souza, W., Cribari-Neto, F.: A generalization of the exponential-Poisson distribution. Stat. Probab. Lett. 79, 2493–2500 (2009).
- 18. Balakrishnan, N., Kateri, M.: On the maximum likelihood estimation of parameters of Weibull distribution based on complete and censored data. Stat. Probab. Lett. 78, 2971–2975 (2008).

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.