Introduction

The generalized Bilal (GB) model coincides with the distribution of the median in a sample of size three from the Weibull distribution. It was first introduced by Abd-Elrahman [1], who showed that its failure rate function can be upside-down bathtub shaped, decreasing, or increasing. Therefore, the GB model can be used in a wide range of practical data analyses.

Suppose that n items are put on a life-testing experiment and we observe only the first r failure times, say x1<x2<⋯<xr. Then, x = (x1, x2, ⋯, xr) is called a type-II censored sample. The remaining (n − r) items are censored and are only known to have lifetimes greater than xr. This article is based on a type-II censored sample drawn from the GB model. Type-II censoring has been discussed by many authors, among them Ahmad et al. [2], Raqab [3], Wu et al. [4], Chana et al. [5], ElShahat and Mahmoud [6], and Abd-Elrahman and Niazi [7].

Like the Weibull distribution, the cumulative distribution function (CDF) of the GB distribution can be written in either of the following two functional forms:

$$\begin{array}{@{}rcl@{}} F_{X}(x;\,\beta,\,\lambda)&=&1-{e^{-2\,\beta\,{x}^{\lambda}}} \left(3-2\,{e^{-\,\beta\,{x}^{\lambda}}}\right),\\ && x> 0,\, (\beta, \lambda>0), \end{array} $$
(1)
$$\begin{array}{@{}rcl@{}} F_{X}(x;\,\theta,\,\lambda)&=&1-{e^{-2\,{\left(x/\theta\right)}^{\lambda}}} \left(3-2\,{e^{-\,{\left(x/\theta\right)}^{\lambda}}} \right),\\ && x> 0,\, (\theta, \lambda>0). \end{array} $$
(2)

It is well known that, based on the maximum likelihood (ML) method, the results of any statistical inference obtained under one of these two forms carry over to the other functional form. This follows from a simple re-parametrization together with the invariance property of the ML estimators; see, e.g., Dekking et al. [8]. In this article, formula (1) is used as the CDF of the GB distribution. The corresponding probability density function (PDF) and reliability function are, respectively, given by:

$$\begin{array}{@{}rcl@{}} f_{X}(x;\,\beta,\,\lambda)&=&\,{6\,\beta\,\lambda}\,{x}^{\lambda-1} {e^{-2\,\beta\,{x}^{\lambda}}} \left(1-{e^{-\,\beta\,{x}^{\lambda}}} \right),\\ && x> 0,\, (\beta, \lambda>0) \end{array} $$
(3)

and

$$\begin{array}{@{}rcl@{}} s(t)&=&{e^{-2\,\beta\,{t}^{\lambda}}} \left(3-2\,{e^{-\,\beta\,{t}^{\lambda}}}\right). \end{array} $$
(4)

The qth quantile, xq, is an important quantity, especially for generating random variates using the inverse transformation method. In view of (1), following Abd-Elrahman [9], xq of the GB distribution is given by:

$$ x_{q} = {\left[ {\frac{1}{\beta} \, \ln \left(\frac{1}{\gamma(q)} \right)} \right]}^{1/\lambda}, $$
(5)

where

$$ \gamma(q)=\left\{ \begin{array}{ll} 0.5+\sin(a_{q}+\pi/6) & \text{if}\ 0< q < 0.5,\\ 0.5 & \text{if}\ q = 0.5,\\ 0.5-\cos(a_{q}+\pi/3) & \text{if}\ 0.5< q < 1, \end{array}\right. $$

for \(a_{q}\,=\,\frac {1}{3}\, \arctan (\frac {2\sqrt {q(1-q)}}{2\,q-1})\).
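In particular, (5) gives a direct inverse-transform generator for GB variates. The following is a minimal Python sketch (the function names are illustrative and not part of the paper):

```python
import math
import random

def gb_quantile(q, beta, lam):
    """q-th quantile of the GB distribution, Eq. (5), for 0 < q < 1."""
    if q == 0.5:
        g = 0.5
    else:
        a_q = math.atan(2.0 * math.sqrt(q * (1.0 - q)) / (2.0 * q - 1.0)) / 3.0
        if q < 0.5:
            g = 0.5 + math.sin(a_q + math.pi / 6.0)   # gamma(q) for 0 < q < 0.5
        else:
            g = 0.5 - math.cos(a_q + math.pi / 3.0)   # gamma(q) for 0.5 < q < 1
    return (math.log(1.0 / g) / beta) ** (1.0 / lam)

def gb_rvs(n, beta, lam, rng=random):
    """n GB(beta, lambda) variates via the inverse-transformation method."""
    return [gb_quantile(rng.random(), beta, lam) for _ in range(n)]
```

A quick check of the sketch is that substituting the returned xq into (1) recovers q for any 0<q<1.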

The rest of this paper is organized as follows:

In the “Maximum likelihood estimation” section, ML estimates of β and λ are obtained. By using the missing information principle, variance-covariance matrix of the unknown population parameters is obtained, which is used to construct the asymptotic confidence intervals for β, λ, and the reliability function s(t). In the “Bayesian estimation” section, two different importance sampling techniques are introduced. These techniques are used, separately, to compute the Bayes estimates of β, λ, and s(t) and also to construct their corresponding credible intervals. In the “Simulation study” section, Monte Carlo simulations are carried out to compare the performances of the proposed estimators.

Further, in the “Data analysis” section, for the sake of illustration, application to a real life-time data set is presented.

Maximum likelihood estimation

It follows from (1) and (3) that, based on a given type-II censored sample x drawn from the GB distribution, the likelihood function of the population parameters β and λ is given by:

$$ L(\beta,\,\lambda|{\mathbf{x}}) \ \propto\ {\beta}^{r}{\lambda}^{r}\, {e^{-2\,\beta\, T_{1}+T_{2}}}, $$
(6)

where

$$\begin{array}{*{20}l} {}T_{1}\!&=\!{(n\,-\,r)\, x^{\lambda}_{r}\,+\,\sum_{j=1}^{r} x^{\lambda}_{j} },\\ {}T_{2}\!&=\!(n\,-\,r)\!\ln\! \left(3\,-\,2\,{e^{-\beta\,x^{\lambda}_{r}}} \right)\,+\,{\lambda}\!\sum_{j=1}^{r}{{\ln(x_{j})}}\,+\,\!\sum_{j=1}^{r} \!\ln\left(\!1\,-\,{e^{-\beta\, x^{\lambda}_{j}}}\!\right)\!. \end{array} $$

When λ is known

In this case, for fixed λ, say λ=λ(0), let θ=1/β and \(y_{i}=x_{i}^{\lambda ^{(0)}}\), i=1, 2, ⋯, r. Then, y1,⋯,yr is a type-II censored sample from the Bilal(θ) distribution. Abd-Elrahman and Niazi [7] established the existence and uniqueness theorem for the maximum likelihood estimate (MLE) of the parameter θ, say \(\hat \theta _{M}\). The MLE of the parameter β is then given by \(\hat \beta _{M}\left (\lambda ^{(0)}\right)=1/\hat \theta _{M}\). Clearly, \(\hat \beta _{M}\left (\lambda ^{(0)}\right)\) exists and it is unique.

Now, we provide an iterative technique for finding \(\hat \beta _{M}\left (\lambda ^{(0)}\right)\) as follows. Let,

$$ {}\begin{aligned} W_{1}&={\frac{\beta\,{x^{{\lambda}^{(0)}}_{r}}{e^{-\beta\,{x^{{\lambda}^{(0)}}_{r}} }}}{3-2\,{e^{-\beta\,{x^{{\lambda}^{(0)}}_{r}}}}}},\qquad W_{2j}={\frac{\beta\,{x^{{\lambda}^{(0)}}_{j}}{e^{-\beta\,{x^{{\lambda}^{(0)}}_{j}} }}}{1-{e^{-\beta\,{x^{{\lambda}^{(0)}}_{j}}}}}},\\ j&=1, \, 2,\, \cdots,\, r. \end{aligned} $$
(7)

In view of (6) and (7), the likelihood equation of β is then given by:

$$\begin{array}{@{}rcl@{}} \frac{\partial\,{\ln L(\beta,\,\lambda^{(0)}|{\mathbf{x}})}}{\partial\,{\beta}}&\,=\,&\frac{r+\,2\, (n\,-\,r) \,W_{1}+\sum_{j=1}^{r}W_{2j}}{\beta}\\&&-{2\left((n\,-\,r)\, {x^{{\lambda}^{(0)}}_{r}}\,+\,\sum_{j=1}^{r}{x^{{\lambda}^{(0)}}_{j}}\right)}. \end{array} $$

For ν=0,1,2,⋯, we calculate \(\hat \beta _{M}({\lambda }^{(0)})\) by using the following formula:

$$ {}\begin{aligned} \hat\beta^{(\nu+1)}_{M}&\left(\lambda^{(0)}\right)\\ &=\left.\frac{r+2\, \left(n\,-\,r \right) W_{1}+\sum_{j=1}^{r} W_{2j}}{2\,\left(\left(n\,-\,r \right) {x^{\lambda}_{r}}+\sum_{j=1}^{r}{x^{\lambda}_{j}}\right)} \right|_{\beta=\hat\beta^{(\nu)}_{M}(\lambda^{(0)}),\,\lambda=\lambda^{(0)}}, \end{aligned} $$
(8)

iteratively until some level of accuracy is reached.

Remark 1

Note that all of the functions W1 and W2j, j=1, 2, ⋯, r, which appear in (8), require an initial value for β, say \(\hat \beta ^{(0)}\). This initial value can be obtained by treating the available type-II censored sample as if it were complete, see Ng et al. [10]. We use the moment estimator of β as a starting point for the iterations (8). That is, in view of (3), \(\hat \beta ^{(0)}\) is given by

$$ \hat{\beta}^{(0)}= \frac{5\,r\,}{6\,{\sum_{i=1}^{r} x_{i}^{\lambda^{(0)}}}}. $$
(9)
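A minimal Python sketch of this scheme, assuming the observed failure times are given in ascending order (the function name and tolerance are illustrative):

```python
import math

def mle_beta_given_lambda(x, n, lam, tol=1.2e-7, max_iter=1000):
    """Fixed-point iterations (8) for beta when lambda = lam is fixed.
    x: ordered observed failure times x_1 <= ... <= x_r; n: items on test."""
    r = len(x)
    y = [xi ** lam for xi in x]                 # y_j = x_j^lambda
    T1 = (n - r) * y[-1] + sum(y)               # denominator term of (8)
    beta = 5.0 * r / (6.0 * sum(y))             # moment-type starting value, Eq. (9)
    for _ in range(max_iter):
        w1 = beta * y[-1] * math.exp(-beta * y[-1]) / (3.0 - 2.0 * math.exp(-beta * y[-1]))
        w2 = sum(beta * yj * math.exp(-beta * yj) / (1.0 - math.exp(-beta * yj)) for yj in y)
        beta_new = (r + 2.0 * (n - r) * w1 + w2) / (2.0 * T1)
        if abs(beta_new - beta) / beta < tol:   # relative-change stopping rule
            return beta_new
        beta = beta_new
    return beta
```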

When β is known

When β is assumed to be known, say β(0), it follows from (6) that the likelihood equation of λ is given by

$$ {}\begin{aligned} \frac{\partial\,{\ln L(\beta^{(0)},\,\lambda|{\mathbf{x}})}}{\partial\,{\lambda}}\!&=\! {\frac{r}{\lambda}}\,-\,2\, (n\,-\,r) \ln (x_{r}) \left(\beta^{(0)}{x^{\lambda}_{r}}\,-\,W_{1}\right)\\ &\quad+\sum_{j=1}^{r}\ln (x_{j}) \left(1\,-\,2\!\,\beta^{(0)}{x^{\lambda}_{j}}\,+\,W_{2j} \right), \end{aligned} $$
(10)

where W1 and W2j, j = 1,2,⋯,r, are as given by (7) after replacing β and λ(0) by β(0) and λ, respectively. In order to establish the existence and uniqueness of the MLE for λ, the following theorem is needed.

Theorem 1

For a given fixed value of the parameter β = β(0), the MLE for the parameter λ, \(\hat \lambda _{M}\left (\beta ^{(0)}\right)\), exists and it is unique.

Proof

See Appendix. □

The MLE \(\hat \lambda _{M}\left (\beta ^{(0)}\right)\) can be iteratively obtained by using Newton’s method, i.e.,

$$ \begin{aligned} \hat\lambda^{(\nu+1)}_{M}\left(\beta^{(0)}\right)&= \hat\lambda^{(\nu)}_{M}\left(\beta^{(0)}\right)\\ &\quad-\left.\left\{ \frac{\lambda\,{\mathcal{G}}_{1} (\beta^{(0)},\,\lambda|{ \mathbf{x}})} {\lambda\,{\mathcal{G}}_{2} (\beta^{(0)},\,\lambda|{\mathbf{x}})+{\mathcal{G}}_{1} \left(\beta^{(0)},\,\lambda|{\mathbf{x}}\right)} \right\} \right|_{\lambda=\hat\lambda^{(\nu)}_{M}\left(\beta^{(0)}\right)}\, {,} \end{aligned} $$
(11)

for ν=0,1,2,⋯, where \({\mathcal {G}}_{1}(\cdot,\,\lambda |{\mathbf {x}})\) is as given by (10) and \({\mathcal {G}}_{2}(\cdot,\,\lambda |{\mathbf {x}})\) is the second derivative of lnL(·, λ|x) with respect to (w.r.t.) λ, which is given in the “Appendix” section.

Remark 2

An initial value for λ, \(\hat \lambda ^{(0)}_{M}\), can be obtained as follows: (1) Calculate the sample coefficient of variation (CV) based on the given type-II censored sample data as if it were complete. (2) Equating the sample CV to its population counterpart results in an equation in λ only. (3) \(\hat \lambda ^{(0)}_{M}\) is the solution of this equation, which provides a good starting point for (11). This technique has been used by, e.g., Kundu and Howlader [11] and Abd-Elrahman [1]. A numerical sketch of this procedure is given after (12) below.

Here, the population CV of the GB distribution is given by

$$ {}\begin{aligned} {\mathcal{C}}(\lambda)&= \sqrt {{\frac{ \left({3}^{m_{2}}-{2}^{m_{2}} \right) \Gamma \left(m_{2} \right) }{ \left({3}^{m_{1}}-{2}^{m_{1}} \right)^{2} \left(\Gamma \left(m_{1} \right) \right)^{2}}}-1},\\ m_{1}&=1+\frac{1}{\lambda},\quad m_{2}=1+\frac{2}{\lambda}. \end{aligned} $$
(12)
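The following sketch, assuming SciPy is available (names illustrative), implements Remark 2: it computes the sample CV from the observed portion of the data, treated as if complete, and solves C(λ) = CV numerically using (12):

```python
import math
from scipy.optimize import brentq
from scipy.special import gamma as Gamma

def population_cv(lam):
    """Population coefficient of variation of the GB distribution, Eq. (12)."""
    m1, m2 = 1.0 + 1.0 / lam, 1.0 + 2.0 / lam
    ratio = (3.0 ** m2 - 2.0 ** m2) * Gamma(m2) / ((3.0 ** m1 - 2.0 ** m1) ** 2 * Gamma(m1) ** 2)
    return math.sqrt(ratio - 1.0)

def initial_lambda(x):
    """Remark 2: solve C(lambda) = sample CV, treating the observed censored
    portion x_1, ..., x_r as if it were a complete sample."""
    r = len(x)
    mean = sum(x) / r
    sd = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (r - 1))  # divisor r-1 assumed
    # C(lambda) decreases in lambda; the bracket below is assumed wide enough
    return brentq(lambda lam: population_cv(lam) - sd / mean, 0.05, 50.0)
```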

When both β and λ are unknown

In this case, first an initial value for λ, \(\hat \lambda ^{(0)}\), can be obtained as described in the "When β is known" section. Once \(\hat \lambda ^{(0)}\) is obtained, an initial value for the parameter β, \(\hat \beta ^{(0)}\), can be calculated as the right-hand side of (9) after replacing λ(0) by \(\hat \lambda ^{(0)}\).

Based on the initial values \(\hat \beta ^{(0)}\) and \(\hat \lambda ^{(0)}\), an updated value for β, \(\hat \beta ^{(1)}\), can be obtained by using (8). Similarly, based on the pair (\(\hat \beta ^{(1)},\hat \lambda ^{(0)}\)), an updated value for λ, \(\hat \lambda ^{(1)}\), can be obtained by using (11), and so on. As a stopping rule, the iterations are terminated at some step s<1000 once the level of accuracy ε≤1.2×10−7 is reached, where ε is defined as

$$\epsilon\,=\, \left\vert\frac{\hat\beta^{(s+1)}-\hat\beta^{(s)}} {\hat\beta^{(s)}}\right\vert +\left\vert\frac{\hat\lambda^{(s+1)}-\hat\lambda^{(s)}}{\hat\lambda^{(s)}}\right\vert. $$

Hence, the limiting pair of estimates \(\left (\hat \beta ^{(s)}, \hat \lambda ^{(s)}\right)\) exists and is unique, and it maximizes the likelihood function (6) w.r.t. the unknown population parameters β and λ. That is, \(\hat \beta _{M}\,=\,\hat \beta ^{(s)}\) and \(\hat \lambda _{M}\,=\,\hat \lambda ^{(s)}\).

Substituting the values of β and λ in (4) by their MLEs, the MLE for reliability function s(t) at some value of t = t0 can then be obtained.
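As a numerical cross-check of the alternating scheme above, one may instead maximize the logarithm of (6) directly with a general-purpose optimizer. A sketch, assuming SciPy and starting from the initial values \(\hat\beta^{(0)}\) and \(\hat\lambda^{(0)}\) described above (names illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def gb_negloglik(params, x, n):
    """Negative of the log of the likelihood (6) for a type-II censored sample."""
    beta, lam = params
    if beta <= 0.0 or lam <= 0.0:
        return np.inf
    x = np.asarray(x)
    r = len(x)
    y = x ** lam
    T1 = (n - r) * y[-1] + y.sum()
    T2 = ((n - r) * np.log(3.0 - 2.0 * np.exp(-beta * y[-1]))
          + lam * np.log(x).sum()
          + np.log1p(-np.exp(-beta * y)).sum())
    return -(r * np.log(beta) + r * np.log(lam) - 2.0 * beta * T1 + T2)

def gb_mle(x, n, beta0, lam0):
    """Joint MLEs of (beta, lambda), started from the initial values above."""
    res = minimize(gb_negloglik, x0=[beta0, lam0], args=(np.asarray(x), n),
                   method="Nelder-Mead")
    return res.x  # (beta_hat, lambda_hat)
```

The resulting \(\hat\beta_{M}\) and \(\hat\lambda_{M}\) can then be substituted into (4) to estimate s(t0).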

Fisher information matrix (FIM)

In this section, by using the missing information principle, the Fisher information matrix (FIM) about the underlying population parameters based on type-II censoring is provided. Suppose that x = (x1, x2, …, xr) and Y = (Xr+1, Xr+2, …, Xn) denote the observed ordered censored data and the unobserved ordered data, respectively. The vector Y can be thought of as the missing data. Combining x and Y forms the complete data set W. It is easy to show that the amount of information about the unknown parameters β and λ provided by W is given by:

$$\begin{array}{@{}rcl@{}} I_{\mathbf{W}}\left({\beta}, \, \lambda\right) \,=\,n\left[\begin{array}{cc} \frac{\,c_{1}}{\beta^{2}}&{\frac{\,c_{2}\,-\,c_{1}\,\ln \left(\beta \right)}{\beta\,\lambda}} \\ {\frac {\,c_{2}\,-\,c_{1}\,\ln \left(\beta \right)}{\beta\,\lambda}}&{\frac {\,c_{3}\,+\,\ln \left(\beta \right) \left\{ c_{1}\,\ln \left(\beta \right)\! -\! c_{4} \right\}}{{\lambda}^{2}}}\end{array} \right] \end{array} $$
(13)

with c1=1.92468,c2=0.05606,c3=1.79061, and c4=0.11211.

For s = r + 1, r + 2, …, n, the conditional distribution of each Xs ∈ Y given Xs > xr follows the truncated underlying distribution with left truncation at xr, see Ng et al. [10]. Therefore, in view of (1) and (3), the PDF of Xs ∈ Y given Xs > xr is given by

$$ \begin{aligned} f (x|X_{s}>x_{r};\,\beta,\:\lambda) \!&=\!\frac{6\,\beta\,\lambda\,x^{\lambda-1}\,e^{-2\,\beta\, \left(x^{\lambda}\,-\,x^{\lambda}_{r}\right)}\, \left(1\,-\,e^{-\beta\,x^{\lambda}}\right)} {\left(3\,-\,2\,e^{-\beta\,{x^{\lambda}_{r}}}\right) },\\ &\quad x>x_{r}, \: (\beta,\:\lambda>0). \end{aligned} $$
(14)

Hence, the expected missing information matrix IY|x(β, λ), related to the unobserved vector Y, is then given by

$$ \begin{aligned} I_{\mathbf{Y}|\mathbf{x}}(\beta,\,\lambda)\,=\,-(n\,-\,r)\,{{{\mathrm{I}\!\mathrm{E}}}} \left[ \begin{array}{cc} \frac{\partial^{2} \,\ln [f(x|X_{s\,}\!>\!x_{r};\,\beta,\,\lambda)]}{\partial\,\beta^{2}} &\ \frac{\partial^{2} \,\ln [f(x|X_{s\,}\!>\!x_{r};\,\beta,\,\lambda)]}{\partial\,\beta\,\partial\,\lambda} \\ \frac{\partial^{2} \,\ln [f(x|X_{s\,}\!>\!x_{r};\,\beta,\,\lambda)]}{\partial\,\lambda\,\partial\,\beta} &\frac{\partial^{2} \,\ln [f(x|X_{s\,}\!>\!x_{r};\,\beta,\,\lambda)]}{\partial\,\lambda^{2}} \end{array} \right]. \end{aligned} $$
(15)

In order to evaluate the expectations involved in (15), calculations of the following expressions are required.

1) Part 1

$$ {}I^{(k)}(y)=\int_{y}^{\infty}{\left\{\ln (t)\right\}}^{k}\, G_{1}(t)\,{\mathrm{d}\!\,} t,\qquad y>0,\quad k=0,1,2, $$
(16)

where

$${G_{1}(t)=\frac{t\,{e^{-2\,t}}\, \left[ {t\,{e^{-t}\,+\,\left(1-\,{e^{-t}} \right)\,\left(2-3\,{e^{-t}} \right)} } \right] }{1-{e^{-t}}}}. $$

Denote \( I_{0} = {{{\lim }_{y\,\to \,0^{+}}}} I^{(0)}(y) = 0.32078\), \( I_{1} = {{{\lim }_{y\,\to \,0^{+}}}} I^{(1)}(y) = 0.00934\) and \( I_{2} ={{{\lim }_{y\,\to \,0^{+}}}} I^{(2)}(y) = 0.13177\). Then, (16) can be rewritten as

$$ {}I^{(k)}(y)=I_{k}- \int_{\,0}^{y}{\left\{\ln (t)\right\}}^{k}\, G_{1}(t)\,{\mathrm{d}\!\,} t,\qquad y>0,\quad k=0,1,2. $$
(17)

The integrals involved in (17) can be calculated by using a simple numerical integration tool, e.g., Simpson’s rule.
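For instance, a minimal sketch of this computation, assuming NumPy/SciPy (the limits I0, I1, and I2 are those quoted above):

```python
import numpy as np
from scipy.integrate import simpson

def G1(t):
    """G1(t) of (16); equal to 0 at t = 0 by continuity."""
    t = np.asarray(t, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        val = (t * np.exp(-2.0 * t)
               * (t * np.exp(-t) + (1.0 - np.exp(-t)) * (2.0 - 3.0 * np.exp(-t)))
               / (1.0 - np.exp(-t)))
    return np.where(t > 0.0, val, 0.0)

I_LIM = {0: 0.32078, 1: 0.00934, 2: 0.13177}   # I_0, I_1, I_2 quoted above

def I_k(y, k, n_nodes=4001):
    """I^(k)(y) of Eq. (17), with the finite integral done by Simpson's rule."""
    t = np.linspace(0.0, y, n_nodes)
    safe_t = np.where(t > 0.0, t, 1.0)          # avoid log(0); the t = 0 node is zeroed below
    integrand = np.where(t > 0.0, np.log(safe_t) ** k * G1(t), 0.0)
    return I_LIM[k] - simpson(integrand, x=t)
```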

2) Part 2

$$\begin{array}{@{}rcl@{}} I^{(3)}(y)\!&=&\!\int_{y}^{\infty} {\frac{t^{2}\,e^{-3\,t}}{1-{e^{-t}}}} \, \mathrm{d}\, t \! =\! I_{3 }- \int_{\,0}^{y} {\frac{t^{2}\,e^{-3\,t}}{1-{e^{-t}}}} \, \mathrm{d}\, t,\quad y>0, \\ &=&I_{3 }-\,\sum_{j=0}^{\infty}\left\{\int_{\,0}^{y} { {t^{2}\,e^{-(j+3)\,t}}} \, \mathrm{d}\, t\right\},\\ &=&{e^{-3\,y}}\sum_{j=0}^{\infty }{\frac{\left(1+ \left(1+ \left(3+j \right) y \right)^{2} \right) {e^{-j\,y}}}{ \left(3+j \right)^{3}}}, \end{array} $$
(18)

where \(I_{3 }\,=\,{\lim }_{y\,\to \, 0^{+}} I^{(3)}(y)\,=\,-\frac {9}{4}\,+\,2\, \sum _{i=1}^{\infty }\,i^{-3}\,=\,0.154114\,\).

Now, in view of (17) and (18), it is easy to show that the elements Iij of IY|x(β, λ), after division by (n − r), i, j=1,2, are given by

$$\begin{array}{*{20}l} {}I_{11} &= \frac{1}{\beta^{2}}\left\{1 +6\,\left({\frac{e^{-y}I^{(3)}(y)}{3 - 2\,e^{-y}}} {- \frac{y^{2} e^{-y}}{{\left(3 - 2\, e^{-y} \right)}^{2}}}\right) \right\}, \, y\,=\,\beta\,x^{\lambda}_{r}, \end{array} $$
(19)
$$\begin{array}{*{20}l} {}I_{12} &= -\,\frac{6}{\beta\,\lambda} \,\left\{\!\frac{t_{1}(x_{r}) + \left[I^{(0)}{(y)} - \ln \left(\beta \right)I^{(1)}{(y)}\right] \,e^{2\,y}}{\left(3 - 2\,{e^{-y}} \right) }\!\right\} \,=\,I_{21}, \end{array} $$
(20)
$$\begin{array}{*{20}l} I_{22} &= \frac{1}{\lambda^{2}}\begin{array}{l} \left\{{1 + \frac{6 \left[e^{2\,y}\left[{\left(\ln (\beta) \right)}^{2} I^{(0)}{(y)} - 2 \ln(\beta)\,I^{(1)}(y) + I^{(2)}(y)\right] - t_{2}(x_{r})\right]}{ \left(3 - 2\,e^{-y}\right)}}\right\}, \end{array} \end{array} $$
(21)

where

$${}t_{1}(x_{r})\,=\,{\frac {{\beta\,x^{\lambda}_{r}}\ln\! \left(x^{\lambda}_{r} \right)\! \left[\!\left(\! 1\,-\,{e^{-\beta\,{x^{\lambda}_{r}}}} \!\right)\! \left(\! 3\,-\,2\,{e^{- \beta\,{x^{\lambda}_{r}}}} \!\right)\! +\!\beta\, {x^{\lambda}_{r}}{e^{-\beta\,{x^{\lambda}_{r}}}} \right] }{ \left(3\,-\,2\,{e^{-\beta\,{x^{\lambda}_{r}}}} \right)}} $$

and

$${}t_{2}(x_{r})\,=\,{\frac {\beta\,{x^{\lambda}_{r}}\! \left(\ln\! \left({x^{\lambda}_{r}} \right) \right)^{2} \!\left[ \!\beta\,{x^{\lambda}_{r}} {e^{-\beta\,{x^{\lambda}_{r}}}}\,+\, \left(\! \!1\,-\,{e^{-\beta\,{x^{\lambda}_{r }}}}\! \right) \!\left(\! 3\,-\,2\,{e^{-\beta\,{x^{\lambda}_{r}}}}\! \right) \!\right] }{ \left(3\,-\,2\,{e^{-\beta\,{x^{\lambda}_{r}}}} \right)}}. $$

Note that the elements Iij, i, j = 1,2, constitute the Fisher information related to each Xs, s = r+1, r+2, ⋯, n, where Xs is distributed as in (14). Therefore, in view of (19)–(21), the elements of the FIM about the parameters β and λ related to the complete data set W can be obtained as \(n\, {\lim }_{y\,\to \, 0^{+}}\, I_{i\,j},\,i,\,j=1,2\), which gives the same results as in (13).

Therefore, the Fisher information gained about the two unknown parameters β and λ from a given type-II censored sample, (x1, x2, ⋯, xr), is then given by

$$I_{\mathbf{x}}(\beta,\,\lambda)= I_{\mathbf{W}}(\beta,\,\lambda) - I_{{\mathbf{Y}}|\mathbf{x}}(\beta,\,\lambda).$$

Asymptotic variances and covariance

Once Ix(β, λ) is calculated, at \(\beta \,=\,\hat \beta _{M}\) and \(\lambda \,=\,\hat \lambda _{M}\), the asymptotic variance-covariance matrix of the MLEs of the two unknown parameters β and λ is then given by

$${}{\mathbf{Var-Cov}}\left(\hat\beta_{M},\,\hat\lambda_{M}\right)= {I^{-1}_{\mathbf{x}}\left(\hat\beta_{M},\,\hat\lambda_{M}\right)}= \left[ \begin{array}{cc} {\hat\sigma_{1}^{2}}&\hat\sigma_{12}\\\noalign{\medskip}\hat\sigma_{21}&{\hat\sigma_{2}^{2}}\end{array} \right]. $$

Again, once \({I^{-1}_{\mathbf {x}}\left (\hat \beta _{M},\,\hat \lambda _{M}\right)}\) is obtained, the asymptotic variance of the reliability function s(t0) can be calculated as the lower bound of the Cramér-Rao inequality for the variance of any unbiased estimator of s(t0). That is,

$$ {} \begin{aligned} \text{Var}[ \widehat{s(t_{\,0})}]&= 36\,{t^{2\,\hat\lambda_{M}}_{0}}{e^{-4\,\hat\beta_{M}{t^{\hat\lambda_{M}}_{0}}}}\left[{\hat\sigma_{2}^{2}} {{\hat\beta_{M}^{2}}} \left[ \ln ({t_{\,0}}) \right]^{2}\right.\\ &\quad\left.+2\,\hat\beta_{M}\,\ln({t_{\,0}})\, {\hat\sigma_{12}}\,+\, {{\hat\sigma_{1}^{2}}} \right] {\left[1\,-\,{e^ {-\hat\beta_{M} t^{\hat\lambda_{M}}_{0}}}\right]}^{2}. \end{aligned} $$
(22)

Consequently, the asymptotic (1 − α) 100 % confidence intervals, ACIs, for \(\hat {\beta }_{M}\), \(\hat {\lambda }_{M}\), and \(\widehat {s(t_{\,0})}_{M}\) are given by

$$ \begin{aligned} {}&\left[\hat{\beta}_{M}\,\mp\, Z_{\frac{\alpha}{2}}\,{\hat\sigma_{1}}\right],\, \left[\hat{\lambda}_{M}\,\mp\, Z_{\frac{\alpha}{2}}\,{\hat\sigma_{2}}\right] \, \text{and}\\ &\qquad\left[\widehat{s(t_{\,0})}_{M}\,\mp\, Z_{\frac{\alpha}{2}}\,\sqrt{\text{Var}[\widehat{s(t_{\,0})}]}\right], \end{aligned} $$
(23)

respectively, where \(Z_{\frac {\alpha }{2}}\) is the \((1\,-\,{\frac {\alpha }{2}})\) percentile of the standard normal distribution.
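A short sketch (assuming SciPy; names illustrative) that assembles (22) and (23) from the MLEs and the entries of the inverted FIM:

```python
import math
from scipy.stats import norm

def asymptotic_cis(beta_hat, lam_hat, cov, t0, alpha=0.05):
    """ACIs (23) for beta, lambda, and s(t0); cov is the 2x2 matrix I_x^{-1}."""
    s1, s2, s12 = math.sqrt(cov[0][0]), math.sqrt(cov[1][1]), cov[0][1]
    z = norm.ppf(1.0 - alpha / 2.0)
    u = beta_hat * t0 ** lam_hat
    s_hat = math.exp(-2.0 * u) * (3.0 - 2.0 * math.exp(-u))    # MLE of s(t0), Eq. (4)
    # delta-method variance of the estimated s(t0), Eq. (22)
    var_s = (36.0 * t0 ** (2.0 * lam_hat) * math.exp(-4.0 * u) * (1.0 - math.exp(-u)) ** 2
             * (cov[0][0] + 2.0 * beta_hat * math.log(t0) * s12
                + (beta_hat * math.log(t0)) ** 2 * cov[1][1]))
    return ((beta_hat - z * s1, beta_hat + z * s1),
            (lam_hat - z * s2, lam_hat + z * s2),
            (s_hat - z * math.sqrt(var_s), s_hat + z * math.sqrt(var_s)))
```

With the estimates and variance-covariance entries reported in the "Data analysis" section (and α = 0.01), this sketch reproduces, up to rounding, the 99 % ACIs quoted there.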

Bayesian estimation

It is assumed that β and λ have independent gamma priors with hyperparameters a1>0 and b1>0 for β, and a2>0 and b2>0 for λ. That is,

$$\begin{array}{@{}rcl@{}} {}\pi_{1}(\beta)\propto {\beta}^{a_{1}-1}{e^{-b_{1}\beta}} \quad{\text{and}}\quad \pi_{2}(\lambda)\propto {\lambda}^{a_{2}-1}{e^{-b_{2}\lambda}}. \end{array} $$
(24)

Moreover, Jeffreys' priors can be obtained as special cases of (24) by substituting a1 = b1 = a2 = b2 = 0.

The hyper parameters can be chosen to suit the prior belief of the experimenter in terms of location and variability of the prior distribution.

Combining (6) and (24), the joint posterior density function of β and λ is then given by

$$ {}\pi(\beta,\,\lambda|{\mathbf{x}})\propto \ {\beta}^{r+a_{1}-1}{e^{- \left(b_{1}+2\,T_{1} \right)\, \beta}}{ \lambda}^{r+a_{2}-1}{e^{-b_{2}\lambda}}{e^{T_{2}}}, $$
(25)

where T1 and T2 are as given in (6). The Bayes estimate of any function g(β, λ) under a squared error loss function (SEL) is given by

$$ \widehat{g(\beta,\,\lambda)}_{B}= \frac{\int_{\,0}^{\infty}\!\!\int_{\,0}^{\infty}\, g(\beta,\,\lambda)\, \, \pi(\beta,\,\lambda|{\mathbf{x}}) \,{\mathrm{d}\!\,} \beta\,{\mathrm{d}\!\,} \lambda}{\int_{\,0}^{\infty}\!\!\int_{\,0}^{\infty} \, \, \pi(\beta,\,\lambda|{\mathbf{x}}) \,{\mathrm{d}\!\,} \beta\,{\mathrm{d}\!\,} \lambda}. $$
(26)

The integrals involved in (26) are usually not obtainable in closed form, but Lindley's approximation [12] may be used to compute such a ratio of integrals. It cannot, however, be used to construct credible intervals. Therefore, following Kundu and Howlader [11], we approximate (26) by drawing Monte Carlo samples, which can be used to compute the Bayes estimates and also to construct their corresponding credible intervals as suggested by Chen and Shao [13]. We propose the following two different importance sampling techniques.

First importance sampling technique (IS1)

The joint posterior density function (25) can be rewritten as

$$ {}\pi(\beta,\,\lambda|{\mathbf{x}})\propto \, \pi^{\star}_{1}(\beta|\lambda,\,{\mathbf{x}}) \,\pi^{\star}_{2}(\lambda|{\mathbf{x}})\,h_{3} (\beta,\,\lambda), $$
(27)

where \(\pi ^{\star }_{1}(\beta |\lambda,\,{\mathbf {x}})\) is a gamma density function given by

$$ {}\pi^{\star}_{1}(\beta|\lambda,\,{\mathbf{x}}) \propto \, {\beta}^{r+a_{1}-1}{e^{- \left(b_{1}+2 \, T_{1} \right)\,\beta }}, $$
(28)

\(\pi ^{\star }_{2}(\lambda |{\mathbf {x}})\) is a proper density function given by

$$ {}\pi^{\star}_{2}(\lambda|{\mathbf{x}}) \propto\,{\frac{{\lambda}^{r+a_{2}-1}{e^{-b_{2}\lambda}} \prod_{j=1}^{r} x_{j}^{\lambda}}{ {\left(b_{1}+2 \, T_{1}\right) }^{r+a_{1}}}} $$
(29)

and

$$ {}h_{3}(\beta,\,\lambda)= \left(1-\frac{2}{3}\,{e^{-\beta\,X_{r}^{\lambda}}} \right)^{n- r}\prod_{j=1}^{r}\left(1-{e^{-\beta\,X_{j}^{\lambda}}}\right). $$
(30)

Now, since \(\pi ^{\star }_{1}(\beta |\lambda,\,{\mathbf {x}})\) is a gamma density, it is quite simple to generate from it. On the other hand, although \(\pi ^{\star }_{2}(\lambda |{\mathbf {x}})\) is a proper density, generating λ from it requires the method developed by Devroye [14], which in turn requires \(\pi ^{\star }_{2}(\lambda |{\mathbf {x}})\) to be log-concave. Therefore, the following theorem is needed.

Theorem 2

The function \(\pi ^{\star }_{2}(\lambda |{\mathbf {x}})\), given by (29), has a log-concave density function.

Proof. See the “Appendix” section.

Using Theorem 2, a simulation-based consistent estimate of g(β, λ) can be obtained by using the following algorithm.

Algorithm 1.

Step 1: Generate λ from \(\pi ^{\star }_{2}(\cdot |{\mathbf {x}})\), by using the method developed by Devroye [14].

Step 2: Generate β from \(\pi ^{\star }_{1}(\cdot |\lambda,\,{\mathbf {x}})\).

Step 3: Repeat Steps 1 and 2 to obtain (βi,λi), i=1, 2, ⋯, M.

Step 4: For i=1, 2, ⋯, M, calculate gi as g(βi, λi); and ωi as \(\frac {h_{3}(\beta _{i},\,\lambda _{i})}{\sum _{i=1}^{M}\, h_{3}(\beta _{i},\,\lambda _{i})},\) where h3(β, λ) is as given by (30).

Step 5: Under a SEL function, an approximate Bayes estimate of g(β, λ) and its corresponding estimated variance can be, respectively, obtained as

$$ \begin{aligned} \hat{g}{(\beta,\,\lambda)}_{{}_{IS1}} &=\! {\sum_{i=1}^{M}\,\omega_{i}\,{g}_{i}}\quad \text{and}\\ \hat{V}{\left[{g}(\beta,\,\lambda)\right]}_{{}_{IS1}} &=\! \sum_{i=1}^{M}\,\omega_{i}\, {\left({g}_{i}-\hat{g}{(\beta,\,\lambda)}_{{}_{IS1}} \right)}^{2}. \end{aligned} $$
(31)
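In Step 1, any sampler for the one-dimensional density (29) can be used. The sketch below (illustrative; it is a simple grid-based inverse-CDF stand-in, not Devroye's [14] algorithm) tabulates \(\pi^{\star}_{2}\) and draws λ from it; Steps 2–5 then proceed exactly as in the IS2 sketch given after Algorithm 2, with h3 of (30) in place of h4.

```python
import numpy as np

def is1_sample_lambda(x, n, a1, b1, a2, b2, M, rng=None, grid=None):
    """Draw M values of lambda from pi*_2 of Eq. (29) by tabulating the density
    on a grid and inverting its CDF (a stand-in for Devroye's sampler)."""
    rng = np.random.default_rng() if rng is None else rng
    grid = np.linspace(1e-3, 20.0, 4000) if grid is None else grid   # range illustrative
    x = np.asarray(x)
    r = len(x)
    y = x[None, :] ** grid[:, None]                  # x_j^lambda for each grid point
    T1 = (n - r) * y[:, -1] + y.sum(axis=1)
    logpdf = ((r + a2 - 1.0) * np.log(grid) - b2 * grid
              + grid * np.log(x).sum()               # log of prod x_j^lambda
              - (r + a1) * np.log(b1 + 2.0 * T1))
    pdf = np.exp(logpdf - logpdf.max())
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(rng.random(M), cdf, grid)       # inverse-CDF sampling
```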

Second importance sampling technique (IS2)

In this technique, we rewrite the joint posterior density function (25) as

$$ \pi(\beta,\,\lambda|{\mathbf{x}})\propto \, \pi^{\star}_{1}(\beta|\lambda,\,{\mathbf{x}}) \,\pi^{\star}_{3}(\lambda|{\mathbf{x}})\,{h_{4}}(\beta,\,\lambda), $$
(32)

where \(\pi ^{\star }_{1}(\beta |\lambda,\,{\mathbf {x}})\) is as given by (28), while \(\pi ^{\star }_{3}(\lambda |{\mathbf {x}})\) is a gamma density function given by

$$ {}\pi^{\star}_{3}(\lambda|{\mathbf{x}}) \propto\, {\lambda}^{r+a_{2}-1}\, \exp \left[ {-\left(b_{2}+\sum_{j=1}^{r-1} \ln \left(\frac{x_{r}}{x_{j}}\right)\right)} \, \lambda \right]. $$
(33)

This is a proper gamma density, since b2>0 and \(\frac {x_{r}}{x_{j}}>1,\, j=1,\,2,\,\cdots,\, r-1\). The corresponding weight function is then

$$ {}{h_{4}}(\beta,\,\lambda)=\frac{x_{r}^{r\,\lambda}\, \left(1-\frac{2}{3}\,{e^{-\beta\,X_{r}^{\lambda}}} \right)^{n- r}\prod_{j=1}^{r}\left(1-{e^{-\beta\,X_{j}^{\lambda}}}\right)}{{ {{\left(b_{1}+2\, T_{1}\right) }^{r+a_{1}}}}}. $$
(34)

In this technique, since \(\pi ^{\star }_{1}(\beta |\lambda,\,{\mathbf {x}})\) and \(\pi ^{\star }_{3}(\lambda |{\mathbf {x}})\) are both gamma densities, it is quite simple to generate from them. Therefore, it is straightforward that a simulation-based consistent estimate of g(β, λ) can be obtained using the following algorithm:

Algorithm 2.

Step 1: Generate λ from \(\pi ^{\star }_{3}(\cdot |{\mathbf {x}})\).

Step 2: Generate β from \(\pi ^{\star }_{1}(\cdot |\lambda ^{\star },\,{\mathbf {x}})\).

Step 3: Repeat Steps 1 and 2 to obtain \((\beta ^{\star }_{i},\lambda ^{\star }_{i})\), i=1, 2, ⋯, M.

Step 4: For i=1,2, ⋯, M, calculate \(g^{\star }_{i}\) as \({g(\beta ^{\star }_{i},\,\lambda ^{\star }_{i})}\); and \(\omega ^{\star }_{i}\) as \(\frac {h_{4}\left (\beta ^{\star }_{i},\,\lambda ^{\star }_{i}\right)}{\sum _{i=1}^{M} h_{4}\left (\beta ^{\star }_{i},\lambda ^{\star }_{i}\right)},\) where h4(β, λ) is as given by (34).

Step 5: In this case, based on a SEL function, the approximate Bayes estimate of g(β, λ) and its corresponding estimated variance can be, respectively, obtained as

$$ \begin{aligned} \hat{g}{(\beta,\,\lambda)}_{{}_{IS2}} &=\! {\sum_{i=1}^{M}\,\omega^{\star}_{i}\,{g}^{\star}_{i}}\quad \text{and}\\ \hat{V}{\left[{g}(\beta,\,\lambda)\right]}_{{}_{IS2}} &=\! \sum_{i=1}^{M}\,\omega^{\star}_{i}\, {\left({g}^{\star}_{i}-\hat{g}{(\beta,\,\lambda)}_{_{IS2}}\right)}^{2}. \end{aligned} $$
(35)
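A compact Python sketch of Algorithm 2 (assuming NumPy; names illustrative). It draws λ⋆ from (33) and β⋆|λ⋆ from (28), forms the weights (34) on the log scale for numerical stability, and returns the estimate and estimated variance of (35):

```python
import numpy as np

def is2_estimate(x, n, g, a1=0.0, b1=0.0, a2=0.0, b2=0.0, M=15000, rng=None):
    """Algorithm 2 (IS2): importance-sampling Bayes estimate of g(beta, lambda)
    under SEL, with the weights h4 of Eq. (34) handled on the log scale."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x)
    r = len(x)
    # Step 1: lambda* ~ Gamma(shape = r + a2, rate = b2 + sum log(x_r/x_j)), Eq. (33)
    lam = rng.gamma(r + a2, 1.0 / (b2 + np.log(x[-1] / x[:-1]).sum()), size=M)
    # Step 2: beta* | lambda* ~ Gamma(shape = r + a1, rate = b1 + 2 T1), Eq. (28)
    y = x[None, :] ** lam[:, None]                  # x_j^lambda, one row per draw
    T1 = (n - r) * y[:, -1] + y.sum(axis=1)
    beta = rng.gamma(r + a1, 1.0 / (b1 + 2.0 * T1))
    # Step 4: normalised log-weights, Eq. (34)
    logw = (r * lam * np.log(x[-1])
            + (n - r) * np.log(1.0 - (2.0 / 3.0) * np.exp(-beta * y[:, -1]))
            + np.log1p(-np.exp(-beta[:, None] * y)).sum(axis=1)
            - (r + a1) * np.log(b1 + 2.0 * T1))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Step 5: weighted estimate and its estimated variance, Eq. (35)
    gvals = g(beta, lam)
    g_hat = np.sum(w * gvals)
    return g_hat, np.sum(w * (gvals - g_hat) ** 2), gvals, w
```

For example, passing g(β, λ) = e^{−2βt0^λ}(3 − 2e^{−βt0^λ}) yields the Bayes estimate of s(t0); the returned pairs (gi, ωi) can be reused for interval estimation as sketched next.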

By using the idea of Chen and Shao [13], based on (gi, ωi) (or \((g^{\star }_{i},\,\omega ^{\star }_{i})\)), i = 1, 2, ⋯, M, the (1 − α) 100 % highest posterior density credible interval of g(β, λ) related to the IS1 (or IS2) technique can be easily obtained. A sketch of one simple implementation follows.
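One simple way to approximate such an interval from the weighted draws, in the spirit of Chen and Shao [13], is to take the shortest interval whose normalized weight is at least 1 − α (a sketch; names illustrative):

```python
import numpy as np

def weighted_hpd_interval(gvals, w, alpha=0.05):
    """Approximate (1 - alpha) HPD credible interval from importance-sampling
    draws g_i with normalised weights w_i: the shortest interval of draws
    carrying at least 1 - alpha of the total weight."""
    order = np.argsort(gvals)
    g = np.asarray(gvals)[order]
    cdf = np.concatenate(([0.0], np.cumsum(np.asarray(w)[order])))
    best_width, lower, upper = np.inf, g[0], g[-1]
    for i in range(len(g)):
        j = np.searchsorted(cdf, cdf[i] + (1.0 - alpha))   # first index covering 1 - alpha
        if j >= len(cdf):
            break
        if g[j - 1] - g[i] < best_width:
            best_width, lower, upper = g[j - 1] - g[i], g[i], g[j - 1]
    return lower, upper
```

For instance, weighted_hpd_interval(gvals, w, alpha=0.01) gives a 99 % credible interval from the output of the sampler sketched above.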

Simulation study

This section is devoted to comparing the performance of the proposed Bayes estimators with that of the MLEs. We carry out a simulation study using different sample sizes (n), different effective sample sizes (r), and different priors (non-informative and informative). For the prior information, we have used a non-informative prior, prior 1, with a1 = b1 = a2 = b2 = 0, and an informative prior, prior 2, with a1 = 2, b1 = 4, a2 = 3, and b2 = 4.

The IMSL [15] routines DRNUN and DRNGAM are used in the generation of the uniform and gamma random variates, respectively.

In computing the estimates, first we generate β and λ from the gamma (a1, b1) and gamma (a2, b2) distributions, respectively. These generated values are β0 = 0.5439 and λ0 = 0.7468. The corresponding value of the reliability function calculated at t0 = 0.9 is 0.8299. Second, we generate 5000 samples from the GB distribution with β = 0.5439 and λ = 0.7468. For the importance sampling techniques (IS1 and IS2), we set M = 15,000 when we apply Algorithm 1 or 2. The average estimate of 𝜗 and the associated mean squared error (MSE) are computed, respectively, as:

$${}\text{Average}=\frac{1}{5000} \sum_{k=1}^{5000} \vartheta^{\star}_{k}, \quad \textrm{MSE}=\frac{1}{5000} \sum_{k=1}^{5000} \left(\vartheta^{\star}_{k}-\vartheta\right)^{2}, $$

where \(\vartheta ^{\star }_{k}\) stands for an estimator (ML or Bayes) of β, λ, or s(0.9), at the kth iteration, and 𝜗 stands for β0 = 0.5439, λ0 = 0.7468, or s(0.9) = 0.8299.

The computational results are displayed in Tables 1, 2, and 3, where the first entry in each cell is for the average estimate and the second entry, which is given in parentheses, is for the corresponding MSE. It has been noticed from Tables 1, 2, and 3, that

Table 1 Average estimates of β and the associated MSEs
Table 2 Average estimates of λ and the associated MSEs
Table 3 Average estimates of s(0.9) and the associated MSEs

1) As expected, the MSEs of all estimates (ML or Bayes) decrease as n or r increases.

2) The Bayes estimators under prior 1 or prior 2 by using the IS2 technique are mainly better than the corresponding estimators by using the IS1 technique in terms of average bias and MSE.

3) In all cases, the MSEs of the MLEs are less than those of the corresponding Bayes estimators under prior 1 by using the IS1 technique.

On the other hand, the performances in terms of average bias and the MSE of the Bayes estimators under prior 1 by using IS2 technique and the MLE are very similar.

4) For small and moderate sample or censoring sizes, the Bayes estimators under prior 2 by using IS2 technique clearly outperform the MLEs in terms of average bias and MSE.

5) For large sample or censoring sizes, the performances in terms of average bias and the MSE of the Bayes estimators under prior 2 with IS2 technique and the MLE are very similar.

Data analysis

This section is concerned with illustrating the methods presented in the "Maximum likelihood estimation" and "Bayesian estimation" sections, using a real data set. This data set is from Hinkley [16] and consists of thirty successive values of March precipitation in Minneapolis/St. Paul. The data points, in inches, are as follows:

0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.81, 0.9, 0.96, 1.18, 1.20, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89, 1.95, 2.05, 2.10, 2.20, 2.48, 2.81, 3.0, 3.09, 3.37, 4.75.

These data were used by Barreto-Souza and Cribari-Neto [17] in fitting the generalized exponential-Poisson distribution (GEP), and by Abd-Elrahman [1, 9] in fitting the Bilal and GB distributions. For the complete sample case, the MLEs of β and λ, respectively, are 0.4168 and 1.2486, which are obtained as described in the "Maximum likelihood estimation" section with r = n. The negative of the log-likelihood, the Kolmogorov-Smirnov (K-S) test statistic, and its corresponding p value related to these MLEs are 38.1763, 0.0532, and 1.0, respectively. Based on this p value, it is clear that the GB distribution fits the data very well. These results agree with those in Abd-Elrahman [1], where in (2) the MLEs of θ and λ are equal to \(0.4168^{-1/1.2486}=2.016\) and 1.2486, respectively.

If only the first 20 data points are observed, the corresponding sample mean and CV of these 20 observed points are 1.1225 and 0.4206, respectively. Equating the right-hand side of (12) to 0.4206 and solving for λ results in the unique solution λ0 = 1.7385. Based on this value of λ, it follows from (9) that β0 is calculated as 0.6147. The iterative scheme described in the "Maximum likelihood estimation" section starts with the initial values λ(0) = 1.7385 and β(0) = 0.6147. The estimates of β and λ converge to \(\hat \beta _{M}\,=\,0.41417\) and \(\hat \lambda _{M}\,=\, 1.29926\) with absolute relative errors of less than 1.2×10−10. From these data, we have

$$\begin{array}{*{20}l} I_{\mathbf{W}}(\hat{\beta}_{M},\,\hat\lambda_{M}) &= \left({\begin{array}{rl} 336.6004 & 97.7070\\ 97.7070 & 60.1551 \end{array}}\right),\\ \quad I_{\mathbf{Y}}(\hat{\beta}_{M},\,\hat\lambda_{M}) &= \left(\begin{array}{rl} 81.7323 & 53.5040\\ 53.5040 & 35.9570 \end{array}\right). \end{array} $$

Hence,

$$I_{\mathbf{x}}(\hat{\beta}_{M},\,\hat\lambda_{M}) = \left(\begin{array}{rl} 254.86812 & 44.20293\\ 44.20293& 24.19810 \end{array} \right). $$

Therefore, the estimated variance-covariance matrix of \(\hat {\beta }_{M}\) and \(\,\hat \lambda _{M}\) is

$$I_{\mathbf{x}}^{-1}(\hat{\beta}_{M},\,\hat\lambda_{M}) = \left(\begin{array}{r@{}c@{}lr@{}c@{}l} \ 0&.&00574\quad & -0&.&01049\\ -0&.&01049\quad & \ 0&.&06049\end{array}\right). $$

Therefore, the standard errors of the MLEs of β and λ are 0.07576 and 0.24595, respectively.

The MLE of s(0.9) and its corresponding asymptotic standard error are 0.78002 and 0.06340, respectively. The 99 % ACIs for β, λ, and s(0.9) are (0.21897, 0.60938), (0.66575, 1.93278), and (0.61672, 0.94331), respectively.

On the other hand, the simulation study given in the "Simulation study" section shows that the Bayes estimators obtained by using the IS2 technique are better than the corresponding estimators obtained by using the IS1 technique in terms of average bias and MSE. Therefore, under the non-informative prior, we compute the Bayes estimates by generating an importance sample of size M = 15,000 with their corresponding importance weights according to Algorithm 2. The Bayes estimates of β, λ, and s(0.9), and their corresponding standard errors (given in parentheses), respectively, are \(\hat \beta _{IS2}= 0.39034 \, (0.04907)\), \(\hat \lambda _{IS2}= 1.34910\, (0.19207)\), and \(\widehat {s(0.9)}_{IS2}=0.79899\, (0.03866)\). The 99 % credible intervals for β, λ, and s(0.9) are (0.24320, 0.43781), (0.85632, 1.92996), and (0.73657, 0.91060), respectively.

Concluding remarks

(1) In this article, the ML and Bayes estimation of the parameters as well as the reliability function of the GB distribution based on a given type-II censored sample are obtained.

(2) The existence and uniqueness theorem for the ML estimator of the population parameter λ, when β is assumed to be known, is established. An iterative procedure for finding the ML estimators of the two unknown population parameters is also provided. The elements of the FIM are obtained, and they have been used in turn for calculating the asymptotic confidence intervals of λ, β, and the reliability function.

(3) Two different importance sampling techniques have been proposed, which can be used for further Bayesian studies.

Appendix

Proof of Theorem 1

It follows from (10) that the second derivative of lnL(β, λ|x) w.r.t. λ is given by

$$ {}\begin{aligned} {\mathcal{G}}_{2}(\beta,\,\lambda|{\mathbf{x}}) &=\! -\frac{r}{\lambda^{2}} - {\frac{6\,(n\,-\,r) z\, f_{1}(z)\, \left(\ln \left(x_{r} \right) \right)^{2} }{ \left(3\,{e^{z}}\,-\,2 \right)^{2}}}\\ &\quad-\sum_{j=1}^{r}{\frac {y_{j}\, f_{2}(y_{j})\,\left(\ln \left(x_{j} \right) \right)^{2}}{\left({e^{y_{j}}-1} \right)^{2}}}, \end{aligned} $$
(36)

where \(z\,=\,{\beta \,x^{\lambda }_{r}}\), \(f_{1}(z)\,=\,e^{z}\left[z+e^{z}\left(1-e^{-z}\right)\left(3-2\,e^{-z}\right)\right]\), \(y_{j}\,=\, {\beta \,x^{\lambda }_{j}}\), j=1, 2, ⋯, r, and \(f_{2}(y_{j})\,=\, 2\, {e^{2\,y_{j}}}\,-\,5\, {e^{y_{j}}}\!+3\,+\, y_{j}\, {e^{y_{j}}}\).

Now, in order to prove that \({\mathcal {G}}_{2}(\beta,\,\lambda |{\mathbf {x}})<0\),

it is sufficient to show that f1(z)>0 and f2(yj)>0. It is clear that f1(z)>0. On the other hand, by expanding the exponential functions involved in f2(yj) about yj = 0, f2(yj) can be rewritten as

$$\begin{array}{@{}rcl@{}} f_{2}(y_{j}) \!&\,=\,&\! {y_{j}^{2}}+\sum_{k=2}^{\infty }{\frac{ {y_{j}^{k}} \left({2}^{k+1}\,-\,5+\,y_{j} \right) }{k!}}>0. \end{array} $$

Therefore, \(\frac {\partial ^{2}{\ln L(\beta,\,\lambda |{\mathbf {x}})}}{\partial {\lambda ^{2}}}<0\). This implies that the ML estimate, \(\hat \lambda _{M}\), for λ is unique.

To ensure that \(\hat \lambda _{M}\) exists, following Balakrishnan et al. [18], we rewrite (10) as h1(λ)=h2(λ), where h1(λ)=r/λ and

$$\begin{array}{*{20}l} h_{2}(\lambda)&={2\, (n\,-\,r) \ln \left(x_{r} \right) \left(\beta\,{x^{\lambda}_{r} }\,-\,W_{1}\right)}\\ &\quad- \sum_{j=1}^{r}\ln \left(x_{j} \right) \left(1\,+\,W_{2j}\,-\,2\,\beta\,{x^{\lambda}_{j}} \right), \end{array} $$

where W1 and W2j, j=1, 2, ⋯, r, are as given in (10).

Note that,

$$\begin{array}{@{}rcl@{}} \ell_{1}&\,=\,&{\lim}_{\lambda\,\to\,0^{+}} h_{2}(\lambda) = 2\, \left(n\,-\,r \right) \left[ \beta-{\eta_{1}(\beta)} \right] \ln \left(x_{r} \right)\\ &&-\sum_{j=1}^{r}\ln \left(x_{j} \right) \left[ 1- 2\,\beta+{\eta_{2}(\beta)} \right],\\ \ell_{2}&\,=\,&{\lim}_{\lambda\,\to\,\infty} h_{2}(\lambda) =\left(\ell_{\infty}+\sum_{i=1}^{r} \ell_{2i}\right) >0, \end{array} $$

where \(\eta _{1}(\beta)\,=\,{\frac {\beta }{3\,e^{\beta }-2}}\), \(\eta _{2}(\beta)\,=\,\frac {\beta }{e^{\beta }-1}\),

$${}\ell_{\infty}\,=\,\left\{\! \begin{array}{ll} 0 & \text{if}\ 0< x_{r}\le1, \\ \infty & \text{if}\ x_{r}>1, \\ \end{array} \right. \quad \ell_{2i}\,=\,\left\{\! \begin{array}{ll} 2\, \ln \left(\frac{1}{x_{i}}\right) & \text{if}\ 0< x_{i}\le1, \\ \infty & \text{if}\ x_{i}>1. \\ \end{array} \right. $$

Furthermore, it follows from (36) that

$$\begin{array}{*{20}l} {}\frac{\partial{h_{2}(\lambda)}}{\partial{\lambda}} &= {\frac{6\,(n\,-\,r) \beta\,{x^{\lambda}_{r}} f_{1}\left(\beta\,{x^{\lambda}_{r}}\right)\, \left(\ln \left(x_{r} \right) \right)^{2}}{\left(3\,{e^{\beta\,{x^{\lambda}_{r}}}}\,-\,2 \right)^{2}}}\\ &\quad+ \sum_{j=1}^{r}{\frac {\beta\,{x^{\lambda}_{j}} f_{2}\left(\beta\,{x^{\lambda}_{j}}\right)\,\left(\ln \left(x_{j} \right) \right)^{2}}{\left({e^{\beta\,{x^{\lambda}_{j}}}-1} \right)^{2}}} >0, \end{array} $$

which implies that ℓ1<ℓ2. Therefore, h2(λ) is an increasing function of λ, while h1(λ) is a positive, strictly decreasing function with right limit +∞ at 0. This ensures that h1(λ) = h2(λ) holds at exactly one value of λ. Hence, the theorem is proved.

Proof of Theorem 2

It follows from (29) that, the second derivative of the logarithm base e of \(\pi ^{\star }_{2}(\lambda |{\mathbf {x}})\) w.r.t. λ is given by

$${}\frac{{{\mathrm{d}\!\,}}^{2}{\ln\left\{\pi^{\star}_{2}(\lambda|{\mathbf{x}})\right\}}}{{\mathrm{d}\!\,} \lambda^{2}}= -\frac{r\,+\,a_{2}\,-\,1}{\lambda^{2}} -{(r+a_{1})}\,\frac{\partial^{2}{\,\ln \left\{ \xi (\lambda)\right\}}}{\partial\,\lambda^{2}}, $$

where \(\xi (\lambda)\,=\,\frac {b_{1}}{2}+(n\,-\,r)\,{x^{\lambda }_{r}}+\sum _{j=1}^{r}{x^{\lambda }_{j}}\). In order to show that \(\frac {{{\mathrm {d}\!\,}}^{2}{\ln \left \{\pi ^{\star }_{2}(\lambda |{\mathbf {x}})\right \}}}{{\mathrm {d}\!\,} \lambda ^{2}}<0\), it is sufficient to show that \(\xi_{1}=\xi(\lambda)\,\xi''(\lambda)-\left\{\xi'(\lambda)\right\}^{2}>0\). This is true, because

$$\begin{aligned} \xi_{1} &\,=\, \left(\frac{b_{1}}{2}+ (n\,-\,r)\, x_{r}^{\lambda}+\sum_{j=1}^{r}x_{j}^{\lambda} \right) \left((n\,-\,r)\, x_{{r}}^{\lambda} \left(\ln \left(x_{r} \right) \right)^{2}+\sum_{j=1}^{r}x_{j}^{\lambda} \left(\ln \left(x_{j} \right) \right)^{2} \right)\\ &\quad- \left((n\,-\,r)\, x_{r}^{\lambda}\ln \left(x_{r} \right) +\sum_{j=1}^{r}x_{j}^{\lambda}\ln \left(x_{j} \right) \right)^{2},\\ &=\frac{b_{1}}{2} \left((n\,-\,r)\, x_{r}^{\lambda} \left(\ln \left(x_{r} \right) \right)^{2}+\sum_{j=1}^{r}x_{j}^{\lambda} \left(\ln \left(x_{j} \right) \right)^{2} \right) + (n\,-\,r)\, x_{r}^{\lambda}\sum_{j=1}^{r}x_{j}^{\lambda} \left(\ln \left(\frac{x_{j}}{x_{r}} \right) \right)^{2}\\ &\quad+\sum_{j=1}^{r} \left(\sum_{k=j+1}^{r}x_{k}^{\lambda}\,x_{{j}}^{\lambda} \left(\ln \left(x_{k} \right) - \ln \left(x_{j} \right) \right)^{2} \right)\,>\,0. \end{aligned} $$

Hence, the theorem is proved.