Introduction

In many lifetime and reliability studies, the experimenter may not be able to obtain complete information on failure times for all experimental units. Some units may be removed from the experiment deliberately, or they may be lost unintentionally, so censored data arise very commonly in practice. Type-I and type-II censoring are the most common censoring schemes [33]. An important characteristic of these two schemes is that they do not allow units to be removed from the test at any point other than the final termination point. The mixture of type-I and type-II censoring, known as hybrid censoring, was first introduced by Epstein [16] and has become quite popular in reliability and life-testing experiments. A great deal of work has been done on hybrid censoring and its many variants (see [6, 11, 14, 21, 22]). For example, Fairbanks et al. [17] considered the type-I hybrid censoring scheme (type-I HCS) for the special case of an exponential lifetime distribution. The main disadvantage of the type-I HCS is that most inferential results have to be developed under the condition that at least one failure is observed; moreover, very few failures may occur up to the pre-fixed time, resulting in low efficiency of the estimator(s) of the model parameter(s). For this reason, Childs et al. [11] introduced an alternative hybrid censoring scheme that terminates the experiment at the random time \(T^{*}=\max \{X_{m:m:n},T\}\). This scheme is called the type-II hybrid censoring scheme (type-II HCS), and it has the advantage of guaranteeing that at least m failures are observed by the end of the experiment. If m failures occur before time T, the experiment continues up to time T, possibly yielding more than m failures in the data. On the other hand, if the \(m{\mathrm{th}}\) failure does not occur before time T, the experiment continues until the \(m{\mathrm{th}}\) failure occurs, in which case exactly m failures are observed. Hybrid censoring schemes have also been introduced in the context of progressive censoring. Kundu and Joarder [23] discussed the type-II progressive hybrid censoring scheme (type-II PHCS), which overcomes the drawback of the type-I PHCS that the maximum likelihood (ML) estimates may not always exist. In brief, the progressive type-II hybrid censoring scheme can be described as follows. Consider n identical and independent units placed on a life test, whose lifetimes are independent and identically distributed with p.d.f \(f(x;\theta )\) and c.d.f \(F(x;\theta )\), where \(\theta\) denotes the vector of parameters \((\alpha ,\beta )\); the observed failure times are denoted by \(X_{1:m:n},X_{2:m:n},\ldots ,X_{m:m:n}\). The number of failures to be observed, m \((m<n)\), is fixed in advance. Suppose further that \(R_{1},R_{2},\ldots ,R_{m}\), with \(R_{j}\ge 0\) and \(\sum _{j=1}^{m}R_{j}+m=n\), are also fixed before the start of the experiment; they constitute the progressive censoring scheme.
Under the type-II progressive censoring scheme, at the time of the first failure, \(X_{1:m:n}\), \(R_{1}\) of the \(n-1\) surviving units are randomly withdrawn from the life test; at the time of the second failure, \(X_{2:m:n}\), \(R_{2}\) of the \(n-R_{1}-2\) surviving units are withdrawn, and so on; finally, at the time of the \(m{\mathrm{th}}\) failure, \(X_{m:m:n}\), all remaining \(R_{m}=n-R_{1}-R_{2}-\cdots -R_{m-1}-m\) surviving units are withdrawn from the life test. Since the \(R_{i}\)'s are pre-fixed, we denote these failure times by \(X_{1:m:n}\le X_{2:m:n}\le \cdots \le X_{m:m:n}\), although their distributions depend on the \(R_{i}\)'s. The type-II PHCS terminates the life test at time \(T^{*}=\max \{X_{m:m:n},T\}\); let D denote the number of failures that occur before time T, and let d denote the observed value of D. If \(X_{m:m:n}>T\), the experiment terminates at the \(m{\mathrm{th}}\) failure, with the withdrawal of units occurring after each failure according to the pre-fixed progressive censoring scheme \((R_{1},R_{2},\ldots ,R_{m})\). However, if \(X_{m:m:n}<T\), then instead of terminating the experiment by removing all remaining \(R_{m}\) units after the \(m{\mathrm{th}}\) failure, the experiment continues to observe failures, without any further withdrawals, up to time T. Thus, in this case, we have \(R_{m}=R_{m+1}=\cdots =R_{D}=0\), and the resulting failure times are denoted by \(X_{1:m:n},X_{2:m:n},\ldots ,X_{m:m:n},X_{m+1:n},\ldots ,X_{d:n}\). We denote the two cases as Case I and Case II, respectively:

$$\begin{aligned} \begin{array}{ll} \hbox {Case I} : \{X_{1:m:n}<X_{2:m:n}<\cdots<X_{m:m:n}\}, &{}\hbox {if }X_{m:m:n}\ge T , \\ \hbox {Case II} : \{X_{1:m:n}<\cdots<X_{m:m:n}<X_{m+1:n}<\cdots<X_{D:n}\}, &{} \hbox {if }X_{m:m:n}<T. \end{array} \end{aligned}$$

In this paper, we consider the estimation of the unknown parameters of the Burr type-III distribution under progressive type-II hybrid censored samples from both classical and Bayesian perspectives. We also provide predictive estimates and intervals for future unobserved values based on prior information. The p.d.f and c.d.f of a random variable X following the Burr type-III distribution are of the form

$$\begin{aligned} f(x;\alpha ,\beta )= \alpha \beta x^{-\beta -1}(1+x^{-\beta })^{-\alpha -1},\,x>0,\,\alpha>0,\,\beta >0, \end{aligned}$$
(1)
$$\begin{aligned} F(x;\alpha ,\beta )= (1+x^{-\beta })^{-\alpha },\,x>0,\,\alpha>0,\,\beta >0. \end{aligned}$$
(2)

where \(\alpha\) and \(\beta\) are the shape parameters of the distribution. Burr [7] introduced twelve cumulative distribution functions for fitting various types of lifetime data. Among these, the Burr type-III distribution can accommodate a variety of hazard rate shapes and has therefore received considerable attention in the recent past. It can also closely approximate several well-known lifetime distributions such as the Weibull, gamma, and log-normal, which makes it a worthwhile alternative to these models. Inference for the parameters \(\alpha\) and \(\beta\) of the \({\hbox {Burr}}(\alpha ,\beta )\) distribution has been investigated by many researchers, such as [3, 4, 9, 26, 28, 31, 32, 37]. The rest of this paper is organized as follows. The MLEs of the unknown parameters are discussed in “Maximum likelihood estimation” section, where we propose the SEM algorithm for this purpose. Asymptotic confidence intervals are constructed using Fisher’s information matrix in “Fisher’s information matrix” section. Bayes estimators are obtained with respect to different loss functions in “Bayesian estimation” section. In “Data analysis” section, a real-life data set is analyzed to illustrate the proposed statistical methods, and Monte Carlo simulations are performed for comparison purposes in “Simulation study” section. Finally, concluding remarks are given in “Conclusion” section.
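All numerical work in this paper is carried out in R. As a point of reference for the later sections, the following lines give a minimal sketch of the density and c.d.f in Eqs. (1) and (2), together with the quantile function obtained by inverting (2) and the corresponding inverse-c.d.f sampler; the function names are ours and purely illustrative.

```r
# Minimal sketch: density, c.d.f., quantile function and random generation
# for the Burr type-III distribution of Eqs. (1)-(2).
dburr3 <- function(x, alpha, beta)
  alpha * beta * x^(-beta - 1) * (1 + x^(-beta))^(-alpha - 1)
pburr3 <- function(x, alpha, beta) (1 + x^(-beta))^(-alpha)
qburr3 <- function(p, alpha, beta) (p^(-1 / alpha) - 1)^(-1 / beta)  # inverse of (2)
rburr3 <- function(n, alpha, beta) qburr3(runif(n), alpha, beta)     # inverse-c.d.f. sampling
```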

Maximum likelihood estimation

In this section, we derive the MLEs of the unknown parameters of the Burr type-III distribution when the lifetime data are observed under progressive hybrid type-II censoring. Suppose that \(\varvec{x}=(x_{(1)},x_{(2)},\ldots ,x_{(m)})\) from Case I and \(\varvec{x}_{c}=(x_{(1)},x_{(2)},\ldots ,x_{(m:m:n)},\ldots ,x_{(D:n)})\) from Case II are observed samples from the Burr\((\alpha ,\beta )\) distribution under the progressive hybrid type-II censoring scheme. Then, the likelihood function of \((\alpha ,\beta )\) given the observed data \(\varvec{x}\) can be written as:

$$\begin{aligned} {\hbox {Case I}} \text {: }L(\theta )=c_{1}\Pi_{i=1}^{m}f(X_{i:m:n};\theta )[1-F(X_{i:m:n};\theta )]^{R_{i}}; \end{aligned}$$
(3)
$$\begin{aligned} {\hbox {Case II}} \text {: }L(\theta )=c_{2}\Pi_{i=1}^{m}f(X_{i:m:n};\theta )[1-F(X_{i:m:n};\theta )]^{R_{i}}\Pi_{i=m+1}^{D}f(X_{i:n};\theta )[1-F(T;\theta )]^{\acute{R}_{D}} \end{aligned}$$
(4)

where

$$\begin{aligned} c_{1}&=n(n-R_{1}-1)\ldots (n-R_{1}-R_{2}-\cdots -R_{m-1}-m+1); \\ c_{2}&=n(n-R_{1}-1)\ldots (n-R_{1}-R_{2}-\cdots -R_{m-1}-D+1), \end{aligned}$$

and

$$\begin{aligned} D=m+1,\ldots ,n-\sum _{k=1}^{m-1}R_{k}\text {, }R_{m}=0,\quad \acute{R} _{D}=n-D-\sum _{k=1}^{m-1}R_{k}\text {, }D\ge m \end{aligned}$$

and \(f(.;\alpha ,\beta )\) and \(F(.;\alpha ,\beta )\) are as defined in Eqs. (1) and (2). Using the associated log-likelihood function \(l=\ln L(\alpha ,\beta \mid \varvec{x})\), the ML estimates of \(\alpha\) and \(\beta\) can be obtained by simultaneously solving the partial derivative equations of l with respect to \(\alpha\) and \(\beta\). In most cases, the estimators do not admit explicit expressions, and numerical procedures such as the Newton–Raphson (NR) method have to be used. The NR method is a direct approach for obtaining the MLEs by maximizing the likelihood function and is a well-known numerical algorithm for finding the root of a function or equation [39]. It involves the calculation of the first and second derivatives of the observed log-likelihood with respect to the parameters. However, the NR method has some drawbacks: it can be time-consuming, depending on the dimension of the problem, and it may fail to converge. Moreover, no closed-form solution of the associated log-likelihood partial derivative equations exists for the MLEs. Here we therefore use the expectation-maximization (EM) algorithm for this purpose; see [12]. The main advantage of this algorithm is that it is more reliable, particularly when dealing with censored data. Note that Case I has been discussed by Singh et al. [15]; therefore, our aim in this paper is to propose classical and Bayesian estimation for Case II, as given in the following sections.
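Before turning to the EM algorithm, we note that the Case II log-likelihood can in principle be maximized directly with a general-purpose optimizer. The following R sketch illustrates this under our own naming conventions, assuming the data are available as a vector x of the D observed failure times, the removal scheme R \(=(R_{1},\ldots ,R_{m})\) (with \(R_{m}=0\) in Case II), the threshold time T0 and the number Rd.acute of units surviving at T; the constant \(c_{2}\) is dropped since it does not involve \((\alpha ,\beta )\).

```r
# Sketch: direct numerical maximization of the Case II likelihood (4) via optim().
negloglik2 <- function(par, x, R, m, T0, Rd.acute) {
  a <- par[1]; b <- par[2]
  if (a <= 0 || b <= 0) return(1e10)             # keep the search in the valid region
  logf <- log(a) + log(b) - (b + 1) * log(x) - (a + 1) * log1p(x^(-b))
  logS <- function(t) log1p(-(1 + t^(-b))^(-a))  # log(1 - F(t; a, b))
  -(sum(logf) + sum(R * logS(x[1:m])) + Rd.acute * logS(T0))
}
fit <- optim(c(1, 1), negloglik2, x = x, R = R, m = m, T0 = T0,
             Rd.acute = Rd.acute, hessian = TRUE)
fit$par                                          # numerical MLEs of (alpha, beta)
```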

Expectation-Maximization (EM) algorithm

The EM algorithm was introduced by Dempster et al. [12] as an iterative procedure for computing MLEs in the presence of missing data; it consists of an expectation (E) step and a maximization (M) step. When dealing with hybrid censored observations, the problem of finding the MLEs of the unknown parameters can be viewed as an incomplete data problem (see [30] for the Burr type-XII model). Under progressive hybrid type-II censoring, \(X_{(i)}\) denotes the time of the ith failure and \(R_{i}\) the number of units removed from the experiment at that time. Now suppose that \(Z_{i}=(Z_{i1},Z_{i2},\ldots ,Z_{iR_{i}})\), \(i=1,\ldots ,m\), and \(\acute{Z}=(\acute{Z}_{1},\acute{Z}_{2},\ldots ,\acute{Z}_{\acute{R}_{D}})\) denote the (unobserved) lifetimes of the \(R_{i}\) units censored at \(X_{(i)}\) and of the \(\acute{R}_{D}\) units censored at time T, respectively. The total censored data can then be viewed as \(Z=(Z_{1},Z_{2},\ldots ,Z_{m})\) together with \(\acute{Z}\), and the complete sample of n units can be regarded as a combination of the observed data and the censored data, that is, \(W=(X,Z,\acute{Z})\). Consequently, the log-likelihood function of the complete data set can be written as:

$$\begin{aligned} \log L(W;\alpha ,\beta )&=n{\hbox {log}}\beta +n{\hbox {log}}\alpha +(-\beta -1) \left[ \sum _{i=1}^{m}{\hbox {log}}x_{i}+\sum _{i=1}^{m}\sum _{j=1}^{R_{i}}{\hbox {log}}z_{ij}+\sum _{i=m+1}^{D} {\hbox {log}}x_{i}+\sum _{l=1}^{\acute{R}_{D}}{\hbox {log}}\acute{z}_{l}\right] \nonumber \\&\quad + (-\alpha -1)\left[ \sum _{i=1}^{m}{\hbox {log}}(1+x_{i}^{-\beta })+\sum _{i=1}^{m} \sum _{j=1}^{R_{i}}{\hbox {log}}(1+z_{ij}^{-\beta })+\sum _{i=m+1}^{D}{\hbox {log}}(1+x_{i}^{-\beta })\right. \nonumber \\&\quad \left. +\sum _{l=1}^{\acute{R}_{D}}{\hbox {log}}(1+ \acute{z}_{l}^{-\beta })\right] . \end{aligned}$$
(5)

Now, by taking the partial derivatives with respect to \(\alpha\) and \(\beta\) and equating them to zero, we get the following expressions

$$\begin{aligned} \alpha= &\,n\left[ \sum _{i=1}^{m}\ln (1+x_{(i)}^{-\beta }) +\sum _{i=m+1}^{D}\ln (1+x_{(i)}^{-\beta })+ \sum _{i=1}^{m}\sum _{j=1}^{R_{i}} \ln (1+z_{ij}^{-\beta }) +\sum _{l=1}^{\acute{R}_{D}} \ln (1+\acute{z} _{l}^{-\beta })\right] ^{-1} \end{aligned}$$
(6)
$$\begin{aligned} \beta= &\,n\left[ \sum _{i=1}^{m}\ln x_{(i)}+\sum _{i=m+1}^{D}\ln x_{(i)}+\sum _{i=1}^{m}\sum _{j=1}^{R_{i}}\ln z_{ij}+\sum _{l=1}^{\acute{R} _{D}}\ln \acute{z}_{l} \right. \nonumber \\&\left. -(\alpha +1)\left( \sum _{i=1}^{m}\frac{x_{(i)}^{-\beta }\ln x_{(i)}}{ (1+x_{(i)}^{-\beta })}+\sum _{i=m+1}^{D}\frac{x_{(i)}^{-\beta }\ln x_{(i)}}{ (1+x_{(i)}^{-\beta })}+\sum _{i=1}^{m}\sum _{j=1}^{R_{i}}\frac{z_{ij}^{-\beta }\ln z_{ij}}{1+z_{ij}^{-\beta }}+\sum _{l=1}^{\acute{R}_{D}}\frac{\acute{z} _{l}^{-\beta }\ln \acute{z}_{l}}{1+\acute{z}_{l}^{-\beta }}\right) \right] ^{-1}. \end{aligned}$$
(7)

The above two equations need to be solved simultaneously to obtain the ML estimates of \((\alpha ,\beta )\). The EM algorithm consists of two steps. In the E-step, the functions of the censored observations are replaced by their respective conditional expected values, and the M-step then maximizes the resulting expected log-likelihood. Now suppose that at the kth stage the estimate of \((\alpha ,\beta )\) is \((\alpha ^{(k)},\beta ^{(k)})\); then, after applying the M-step, the \((k+1)\)th stage updated estimators are of the form

$$\begin{aligned} \alpha ^{(k+1)}= &\,n\left[ \sum _{i=1}^{m}\ln (1+x_{(i)}^{-\beta ^{(k)} }) + \sum _{i=m+1}^{D}\ln (1+x_{(i)}^{-\beta ^{(k)} }) \right. \nonumber \\&\left. + \sum _{i=1}^{m}R_{i} E_{1i}(\alpha ^{(k)}, \beta ^{(k)})+ \acute{R}_{D} E_{2i}(\alpha ^{(k)}, \beta ^{(k)}) \right] ^{-1} \end{aligned}$$
(8)
$$\begin{aligned} \beta ^{(k+1)}= &\,n\left[ \sum _{i=1}^{m}\ln x_{(i)} +\sum _{i=m+1}^{D}\ln x_{(i)}+ \sum _{i=1}^{m} R_{i} E_{3i} (\alpha ^{(k)}, \beta ^{(k)})+\acute{R}_{D} E_{4i}(\alpha ^{(k)}, \beta ^{(k)}) \right. \nonumber \\&- (\alpha ^{(k+1)}+1) \left( \sum _{i=1}^{m} \frac{x_{(i)}^{-\beta ^{(k)} } \ln x_{(i)}}{(1+x_{(i)}^{-\beta ^{(k)} })}+ \sum _{i=m+1}^{D} \frac{ x_{(i)}^{-\beta ^{(k)} } \ln x_{(i)}}{(1+x_{(i)}^{-\beta ^{(k)} })} \right. \nonumber \\&\left. \left. + \sum _{i=1}^{m} R_{i}E_{5i}(\alpha ^{(k)}, \beta ^{(k)})+ \acute{R}_{D} E_{6i}(\alpha ^{(k)}, \beta ^{(k)}) \right) \right] ^{-1} \end{aligned}$$
(9)

where

$$\begin{aligned}&E_{1i}(\alpha , \beta ) = E(\ln (1+z_{ij}^{-\beta }) \mid z_{ij}> x_{(i)}) = \frac{\alpha }{1-F(x_{(i)}; \alpha , \beta )} \int _{1}^{1+x_{(i)}^{-\beta }} u^{-(\alpha +1)} \ln u \, {\hbox {d}}u,\\&E_{2i}(\alpha , \beta ) = E(\ln (1+\acute{z}_{l}^{-\beta }) \mid \acute{z}_{l}> T) = \frac{ \alpha }{1-F(T; \alpha , \beta )} \int _{1}^{1+T^{-\beta }} u^{-(\alpha +1)} \ln u \, {\hbox {d}}u, \\&E_{3i}(\alpha , \beta ) = E(\ln z_{ij} \mid z_{ij}> x_{(i)}) = \frac{\alpha }{\beta (1-F(x_{(i)}; \alpha , \beta ))} \int _{1}^{1+x_{(i)}^{-\beta }} u^{-(\alpha +1)} \ln (u-1) \, {\hbox {d}}u, \\&E_{4i}(\alpha , \beta ) = E(\ln \acute{z}_{l} \mid \acute{z}_{l}> T) = \frac{\alpha }{\beta (1-F(T; \alpha , \beta ))} \int _{1}^{1+T^{-\beta }} u^{-(\alpha +1)} \ln (u-1) \, {\hbox {d}}u, \\&E_{5i}(\alpha , \beta ) = E\left( \frac{z_{ij}^{-\beta } \ln z_{ij}}{ 1+z_{ij}^{-\beta } } \mid z_{ij}> x_{(i)}\right) = \frac{\alpha }{ \beta (1-F(x_{(i)}; \alpha , \beta ))} \int _{1+x_{(i)}^{-\beta }}^{1} u^{-(\alpha +2)} (u-1)\ln (u-1) \, {\hbox {d}}u, \\&E_{6i}(\alpha , \beta ) = E\left( \frac{\acute{z}_{l}^{-\beta } \ln \acute{z}_{l} }{1+\acute{z}_{l}^{-\beta } } \mid \acute{z}_{l} >T\right) = \frac{\alpha }{ \beta (1-F(T; \alpha , \beta ))} \int _{1+T^{-\beta }}^{1} u^{-(\alpha +2)} (u-1)\ln (u-1) \, {\hbox {d}}u. \end{aligned}$$

Notice that the iterative procedure can be terminated once convergence is achieved, that is, when \(|\alpha ^{(k+1)}-\alpha ^{(k)}|+|\beta ^{(k+1)}-\beta ^{(k)}|\le \epsilon\) for some given \(\epsilon >0\). However, one of the biggest disadvantages of the EM algorithm is that it is only a local optimization procedure and can easily get stuck in a saddle point [40], in particular with high-dimensional data or with the increased complexity of censored lifetime models. A possible way to overcome these computational inefficiencies is to invoke the stochastic EM algorithm suggested by Celeux and Diebolt [8, 19, 29, 38]. Moreover, the above expectations \(E_{si}(\alpha ,\beta ),s=1,\ldots ,6,\) do not have closed forms and therefore need to be computed numerically, as sketched below. For these reasons, we use the stochastic expectation-maximization (SEM) algorithm to obtain the ML estimates.
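For illustration, the following R sketch evaluates two of these expectations, \(E_{1i}\) and \(E_{2i}\), by one-dimensional quadrature; the remaining \(E_{si}\) can be handled in the same way. The function names and arguments are ours.

```r
# Numerical evaluation of the E-step expectations E_{1i} and E_{2i};
# xi is the i-th observed failure time and T0 the threshold time T.
E1i <- function(alpha, beta, xi) {
  S <- 1 - (1 + xi^(-beta))^(-alpha)                 # 1 - F(x_(i); alpha, beta)
  alpha / S * integrate(function(u) u^(-(alpha + 1)) * log(u),
                        lower = 1, upper = 1 + xi^(-beta))$value
}
E2i <- function(alpha, beta, T0) {
  S <- 1 - (1 + T0^(-beta))^(-alpha)                 # 1 - F(T; alpha, beta)
  alpha / S * integrate(function(u) u^(-(alpha + 1)) * log(u),
                        lower = 1, upper = 1 + T0^(-beta))$value
}
```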

SEM algorithm

In this section, the SEM algorithm is used to compute the MLEs of the unknown parameters. Diebolt and Celeux [13] proposed a stochastic version of the EM algorithm in which the E-step is replaced by a stochastic step (S-step) executed by simulation. Thus, the SEM algorithm completes the observed sample by replacing each missing observation with a value randomly drawn from the corresponding conditional distribution, given the results of the previous step. The algorithm has been shown to be computationally less burdensome and more appropriate than the EM algorithm in many problems; see [2, 13, 35]. Recently, Zhang et al. [40] considered the SEM algorithm to obtain ML estimates of the unknown parameters of various models when the data are observed under progressive type-II censoring, and compared the results with the EM algorithm. Here we use the same idea: for \(i=1,2,\ldots ,m\), we generate the \(R_{i}\) independent censored values \(z_{ij}\), \(j=1,2,\ldots ,R_{i}\), collected in \(Z=(Z_{11},\ldots ,Z_{1R_{1}},\ldots ,Z_{m1},\ldots ,Z_{mR_{m}})\), and the values \(\acute{Z}=(\acute{Z}_{1},\acute{Z}_{2},\ldots ,\acute{Z}_{\acute{R}_{D}})\), from the following conditional distribution functions

$$\begin{aligned} G_{i}(z_{ij};\alpha ,\beta \mid z_{ij}>x_{(i)})= & {} \frac{F(z_{ij};\alpha ,\beta )-F(x_{(i)};\alpha ,\beta )}{1-F(x_{(i)};\alpha ,\beta )},\,z_{ij}>x_{(i)}. \\ G_{l}(\acute{z}_{l};\alpha ,\beta \mid \acute{z}_{l}>T)= & {} \frac{F(\acute{z} _{l};\alpha ,\beta )-F(T;\alpha ,\beta )}{1-F(T;\alpha ,\beta )},\,\acute{z} _{l}>T. \end{aligned}$$
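Sampling from these conditional distribution functions is straightforward by inversion: if \(U\sim U(0,1)\), then \(F^{-1}\bigl (F(c)+U(1-F(c))\bigr )\) has the distribution of a lifetime conditioned to exceed c, with \(c=x_{(i)}\) for the \(z_{ij}\) and \(c=T\) for the \(\acute{z}_{l}\). A minimal R sketch, reusing the pburr3() and qburr3() helpers introduced earlier:

```r
# k draws from the Burr type-III distribution conditioned to exceed c0,
# obtained by inverting G(z) = (F(z) - F(c0)) / (1 - F(c0)).
rcond <- function(k, alpha, beta, c0) {
  Fc <- pburr3(c0, alpha, beta)
  qburr3(Fc + runif(k) * (1 - Fc), alpha, beta)
}
```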

Subsequently, the ML estimators of \(\alpha\) and \(\beta\) at the \((k+1)\)th stage are given by

$$\begin{aligned} \alpha ^{(k+1)}= &\,n\left[ \sum _{i=1}^{m}\ln (1+x_{(i)}^{-\beta ^{(k)} }) +\sum _{i=m+1}^{D}\ln (1+x_{(i)}^{-\beta ^{(k)} })+ \sum _{i=1}^{m}\sum _{j=1}^{R_{i}} \ln (1+z_{ij}^{-\beta ^{(k)}})\right. \nonumber \\&\left. + \sum _{l=1}^{\acute{R}_{D}} \ln (1+\acute{z}_{l}^{-\beta ^{(k)} })\right] ^{-1} \end{aligned}$$
(10)
$$\begin{aligned} \beta ^{(k+1)}= &\,n\left[ \sum _{i=1}^{m}\ln x_{(i)}+\sum _{i=m+1}^{D}\ln x_{(i)}+\sum _{i=1}^{m}\sum _{j=1}^{R_{i}}\ln z_{ij}+\sum _{l=1}^{\acute{R} _{D}}\ln \acute{z}_{l} \right. \nonumber \\&-(\alpha ^{(k+1)}+1)\left( \sum _{i=1}^{m}\frac{x_{(i)}^{-\beta ^{(k)}}\ln x_{(i)}}{(1+x_{(i)}^{-\beta ^{(k)}})}+\sum _{i=m+1}^{D}\frac{x_{(i)}^{-\beta ^{(k)}}\ln x_{(i)}}{(1+x_{(i)}^{-\beta ^{(k)}})} \right. \nonumber \\&\left. \left. +\sum _{i=1}^{m}\sum _{j=1}^{R_{i}}\frac{z_{ij}^{-\beta ^{(k)}}\ln z_{ij}}{ 1+z_{ij}^{-\beta ^{(k)}}}+\sum _{l=1}^{\acute{R}_{D}}\frac{\acute{z}_{l}^{-\beta ^{(k)}}\ln \acute{z}_{l}}{1+\acute{z}_{l}^{-\beta ^{(k)}}}\right) \right] ^{-1} \end{aligned}$$
(11)

Further, the above iterative procedure can be terminated once convergence is achieved.
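Putting the pieces together, the following R sketch runs the SEM iteration (10)–(11): in each pass the censored lifetimes are imputed from the conditional distributions above (via rcond()), and the updates are computed from the resulting pseudo-complete sample. The inputs follow the assumptions of the earlier sketches and the names are ours.

```r
# Full SEM iteration. Assumed inputs: x (the D observed failures, the first m of which
# carry the removals R_1,...,R_m, with R_m = 0 in Case II), T0 (the threshold T),
# Rd.acute, and the helper rcond() defined above.
sem_burr3 <- function(x, R, m, T0, Rd.acute, alpha0 = 1, beta0 = 1,
                      maxit = 500, eps = 1e-6) {
  n <- length(x) + sum(R) + Rd.acute
  a <- alpha0; b <- beta0
  for (k in 1:maxit) {
    # S-step: impute the censored lifetimes from their conditional distributions
    z  <- unlist(mapply(function(Ri, xi) rcond(Ri, a, b, xi), R, x[1:m]))
    zT <- rcond(Rd.acute, a, b, T0)
    w  <- c(x, z, zT)                      # pseudo-complete sample of size n
    # M-step: updates (10) and (11) applied to the pseudo-complete sample
    a.new <- n / sum(log1p(w^(-b)))
    b.new <- n / (sum(log(w)) -
                  (a.new + 1) * sum(w^(-b) * log(w) / (1 + w^(-b))))
    if (abs(a.new - a) + abs(b.new - b) < eps) break
    a <- a.new; b <- b.new
  }
  # Because of the stochastic S-step the iterates fluctuate; averaging them after a
  # burn-in period is a common alternative to the simple stopping rule used here.
  c(alpha = a, beta = b)
}
```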

Fisher’s information matrix

In this section, we present the observed Fisher's information matrix obtained by using the missing value principle of Louis [27]. The observed Fisher's information matrix can be used to construct asymptotic confidence intervals. The idea of the missing information principle is as follows:

$$\begin{aligned} {\hbox {Observed information}}={\hbox {Complete information}}-{\hbox {Missing information}}. \end{aligned}$$

This section deals with obtaining Fisher's information matrix, which will further be used to compute interval estimates for the unknown parameters of the Burr type-III distribution. The asymptotic variance–covariance matrix of the MLEs of \((\alpha ,\beta )\) can be obtained by inverting the observed information matrix and is given by

$$\begin{aligned} -\left[ \begin{array}{cc} \frac{\partial ^{2}l}{\partial \alpha ^{2}} &{} \frac{\partial ^{2}l}{\partial \alpha \partial \beta } \\ \frac{\partial ^{2}l}{\partial \beta \partial \alpha } &{} \frac{\partial ^{2}l }{\partial \beta ^{2}} \end{array} \right] _{\alpha =\hat{\alpha },\beta =\hat{\beta }}^{-1}. \end{aligned}$$

where

$$\begin{aligned} \frac{\partial ^{2}l}{\partial \alpha ^{2}}&= -\frac{m}{\alpha ^{2}} + R_{i} \left[ \sum _{i=1}^{m} \left( -\frac{ F(x_{i}; \alpha , \beta ) \ln (1+x_{i}^{-\beta })^{2} }{1- F(x_{i}; \alpha , \beta )} - \frac{ F(x_{i}; \alpha , \beta )^{2} \ln (1+x_{i}^{-\beta })^{2} }{ (1- F(x_{i}; \alpha , \beta ))^{2}}\right. \right. \nonumber \\&\quad -\frac{D-m}{\alpha ^{2}}- \frac{ \acute{R}_{d} (D-m) F(T; \alpha , \beta ) \ln (1+T^{-\beta })^{2} }{ 1- F(T; \alpha , \beta )}\nonumber \\&\quad \left. - \frac{\acute{R}_{d} (D-m) F(T; \alpha , \beta ) ^{2} \ln (1+T^{-\beta })^{2} }{(1- F(T; \alpha , \beta ))^{2} } \right] \end{aligned}$$
(12)
$$\begin{aligned} \frac{\partial ^{2}l}{\partial \alpha \partial \beta }&= - \sum _{i=1}^{m} \left( -\frac{ x_{i}^{-\beta } \ln (x_{i}) }{ 1+x_{i}^{-\beta } } \right) + R_{i} \left( \sum _{i=1}^{m} \left( \frac{ F(x_{i}; \alpha , \beta ) \alpha x_{i}^{-\beta } \ln (x_{i}) \ln ( 1+x_{i}^{-\beta }) }{ (1+x_{i}^{-\beta }) (1- F(x_{i}; \alpha , \beta )) } \right. \right. \nonumber \\&\quad \left. \left. -\frac{ F(x_{i}; \alpha , \beta ) x_{i}^{-\beta } \ln (x_{i}) }{(1+x_{i}^{-\beta }) (1- F(x_{i}; \alpha , \beta ) ) } +\frac{ (1- F(x_{i}; \alpha , \beta )^{2} \ln (1+x_{i}^{-\beta }) \alpha x_{i}^{-\beta } \ln (x_{i}) }{ (1- F(x_{i}; \alpha , \beta )^{2} (1+x_{i}^{-\beta })} \right) \right) \nonumber \\&\quad - \left( \sum _{i=m+1}^{D} \left( -\frac{ x_{i}^{-\beta } \ln (x_{i}) }{ 1+x_{i}^{-\beta } } \right) + \frac{ \acute{R}_{d} (D-m) F(T; \alpha , \beta ) \alpha T^{-\beta } \ln (T) \ln (1+T^{-\beta }) }{ (1+T^{-\beta }) (1- F(T; \alpha , \beta ) ) } \right. \nonumber \\&\quad -\frac{ \acute{R}_{d} (D-m) F(T; \alpha , \beta ) T^{-\beta } \ln (T) }{ (1+T^{-\beta }) (1- F(T; \alpha , \beta ) ) } \nonumber \\&\quad \left. +\frac{ \acute{R}_{d} (D-m) F(T; \alpha , \beta )^{2} \ln (1+T^{-\beta }) \alpha T^{-\beta } \ln (T) \ln (1+T^{-\beta }) }{(1- F(T; \alpha , \beta ))^{2} (1+T^{-\beta }) } \right) \end{aligned}$$
(13)
$$\begin{aligned} \frac{\partial ^{2}l}{\partial \beta ^{2}}&=-\frac{m}{\beta ^{2}} -(\alpha +1) \left( \sum _{i=1}^{m} \left( \frac{ x_{i}^{-\beta } \ln (x_{i})^{2} }{1+x_{i}^{-\beta }} -\frac{ (x_{i}^{-\beta })^{2} \ln (x_{i})^{2} }{(1+x_{i}^{-\beta })^{2}} \right) \right) \nonumber \\&\quad + R_{i} \left( \sum _{i=1}^{m} \left( -\frac{ F(x_{i}; \alpha , \beta ) \alpha ^{2} (x_{i}^{-\beta })^{2} \ln (x_{i} ) ^{2}}{ (1+x_{i}^{-\beta } ) ( 1- F(x_{i}; \alpha , \beta ))} -\frac{F(x_{i}; \alpha , \beta ) \alpha (x_{i}^{-\beta })^{2} \ln (x_{i} ) ^{2}) }{ (1+x_{i}^{-\beta } )^{2} (1- F(x_{i}; \alpha , \beta )) } - \right. \right. \nonumber \\&\quad \left. \left. \frac{ (F(x_{i}; \alpha , \beta ))^{2} \alpha ^{2} (x_{i}^{-\beta }) ^{2} \ln (x_{i})^{2} }{ (1+x_{i}^{-\beta } )^{2} (1-F(x_{i}; \alpha , \beta ))^{2}} \right) \right) -\frac{D-m}{ \beta ^{2}} \nonumber \\&\quad - (\alpha +1) \left( \sum _{i=m+1}^{D} \left( \frac{ x_{i}^{-\beta } \ln (x_{i})^{2} }{1+x_{i}^{-\beta } } - \frac{ (x_{i}^{-\beta })^{2} \ln (x_{i})^{2} }{ (1+x_{i}^{-\beta })^{2}} \right) \right) \nonumber \\&\quad - \frac{ \acute{R}_{d} (D-m) F(T; \alpha , \beta ) \alpha ^{2} (T^{-\beta })^{2} \ln (T)^{2} }{ (1+T^{-\beta })^{2} (1- F(T; \alpha , \beta ) } + \frac{ \acute{R}_{d} (D-m) F(T; \alpha , \beta ) \alpha T^{-\beta } \ln (T)^{2} }{(1+T^{-\beta }) (1- F(T; \alpha , \beta ) } \nonumber \\&\quad - \frac{ \acute{R}_{d} (D-m) F(T; \alpha , \beta ) \alpha T^{-\beta } \ln (T)^{2} }{(1+T^{-\beta })^{2} (1- F(T; \alpha , \beta ) } - \frac{ \acute{R}_{d} (D-m) (F(T; \alpha , \beta ) )^{2} \alpha ^{2} ( T^{-\beta })^{2} \ln (T)^{2} }{(1+T^{-\beta })^{2} (1- F(T; \alpha , \beta )^{2} } \end{aligned}$$
(14)

Next, we use the SEM algorithm to compute the observed information matrix. We first generate the censored observations \(z_{ij}\) by Monte Carlo simulation from the conditional densities discussed in the previous section. Subsequently, the asymptotic variance–covariance matrix of the MLEs of \((\alpha ,\beta )\) can be obtained as

$$\begin{aligned} -\left[ \begin{array}{cc} \frac{\partial ^{2}l^{*}}{\partial \alpha ^{2}} &{} \frac{\partial ^{2}l^{*}}{\partial \alpha \partial \beta } \\ \frac{\partial ^{2}l^{*}}{\partial \beta \partial \alpha } &{} \frac{ \partial ^{2}l^{*}}{\partial \beta ^{2}} \end{array} \right] _{\alpha =\hat{\alpha },\beta =\hat{\beta }}^{-1}. \end{aligned}$$

where \(l^{*}=\ln L(W; \alpha , \beta )\) is given in (5). Further, the involved expressions are given by

$$\begin{aligned} \frac{\partial ^{2}l^{*}}{\partial \alpha ^{2}}= & {} -\frac{n}{\alpha ^{2}},\\ \frac{\partial ^{2}l^{*}}{\partial \alpha \partial \beta }= & {} \sum _{i=1}^{m}\frac{x_{i}^{-\beta }\ln (x_{i})}{1+x_{i}^{-\beta }}+\sum _{i=m+1}^{D}\frac{x_{i}^{-\beta }\ln (x_{i})}{1+x_{i}^{-\beta }}+\sum _{i=1}^{m}\sum _{j=1}^{R_{i}}\frac{z_{ij}^{-\beta }\ln (z_{ij})}{1+z_{ij}^{-\beta }}+\sum _{l=1}^{\acute{R}_{D}}\frac{\acute{z}_{l}^{-\beta }\ln (\acute{z}_{l})}{1+\acute{z}_{l}^{-\beta }},\\ \frac{\partial ^{2}l^{*}}{\partial \beta ^{2}}= & {} -\frac{n}{\beta ^{2}}-(\alpha +1)\left( \sum _{i=1}^{m}\frac{x_{i}^{-\beta }\ln (x_{i})^{2}}{(1+x_{i}^{-\beta })^{2}}+\sum _{i=m+1}^{D}\frac{x_{i}^{-\beta }\ln (x_{i})^{2}}{(1+x_{i}^{-\beta })^{2}}\right. \\&\left. +\sum _{i=1}^{m}\sum _{j=1}^{R_{i}}\frac{z_{ij}^{-\beta }\ln (z_{ij})^{2}}{(1+z_{ij}^{-\beta })^{2}}+\sum _{l=1}^{\acute{R}_{D}}\frac{\acute{z}_{l}^{-\beta }\ln (\acute{z}_{l})^{2}}{(1+\acute{z}_{l}^{-\beta })^{2}}\right) ,\\ \frac{\partial ^{2}l^{*}}{\partial \alpha \partial \beta }= & {} \frac{\partial ^{2}l^{*}}{\partial \beta \partial \alpha } \end{aligned}$$

Finally, in all three cases, using the large-sample approximation, the two-sided \(100(1-\gamma )\%\), \(0<\gamma <1\), asymptotic confidence intervals for \(\alpha\) and \(\beta\) can, respectively, be obtained as \(\hat{\alpha }\pm z_{\frac{\gamma }{2}}\sqrt{{\hbox {Var}}(\hat{\alpha })}\) and \(\hat{\beta }\pm z_{\frac{\gamma }{2}}\sqrt{{\hbox {Var}}(\hat{\beta })}\). Here \(z_{\frac{\gamma }{2}}\) is the upper \(\frac{\gamma }{2}\)th percentile of the standard normal distribution, and \(\hat{\alpha }\) and \(\hat{\beta }\) represent the ML estimates of \(\alpha\) and \(\beta\) obtained by the NR, EM, or SEM algorithm of the previous section.
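As an alternative to evaluating the derivatives above in closed form, the observed information can also be approximated by the numerically differentiated Hessian returned by optim() in the earlier Case II sketch, from which Wald-type intervals follow directly; this is a sketch of ours, not the SEM-based computation used for the tables.

```r
# 95% asymptotic (Wald) intervals from the numerical Hessian of the negative
# log-likelihood; 'fit' is the object returned by optim(..., hessian = TRUE) above.
vcov <- solve(fit$hessian)                # inverse observed information at the MLE
se   <- sqrt(diag(vcov))
ci   <- cbind(lower = fit$par - qnorm(0.975) * se,
              upper = fit$par + qnorm(0.975) * se)
rownames(ci) <- c("alpha", "beta")
ci
```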

Bayesian estimation

Suppose that a sample \(\varvec{x}=(x_{(1)},x_{(2)},\ldots ,x_{(m)})\) is observed from the Burr type-III distribution under the progressive hybrid type-II censoring scheme \(R=(R_{1},R_{2},\ldots ,R_{m})\). We assume independent gamma priors for the unknown parameters \(\alpha\) and \(\beta\), with p.d.f.s of the following form:

$$\begin{aligned} \pi (\alpha )= & {} \frac{b^{a}}{\Gamma (a)}\alpha ^{a-1}{\mathrm{e}}^{-b\alpha },\,a>0,b>0, \\ \pi (\beta )= & {} \frac{d^{c}}{\Gamma (c)}\beta ^{c-1}{\mathrm{e}}^{-d\beta },\,c>0,d>0. \end{aligned}$$

Here a, b, c and d are the hyper-parameters and provide prior information about the unknown parameters. The joint prior density of \((\alpha ,\beta )\) can be written as \(\pi (\alpha ,\beta )=\pi (\alpha )\pi (\beta )\). Further, the joint posterior density of \((\alpha ,\beta )\) given the observed data \(\varvec{x}\) is obtained as

$$\begin{aligned} \pi (\alpha ,\beta \mid x)= & {} \frac{\pi (\alpha ,\beta )L(\alpha ,\beta \mid x)}{\int _{0}^{\infty }\int _{0}^{\infty }\pi (\alpha ,\beta )L(\alpha ,\beta \mid x){\hbox {d}}\alpha {\hbox {d}}\beta }, \nonumber \\= & {} \frac{1}{K}\alpha ^{D+a-1}\beta ^{D+c-1}{\mathrm{e}}^{-\alpha (b+\sum \limits _{i=1}^{m}\ln (1+x_{(i)}^{-\beta })+\sum \limits _{i=m+1}^{D}\ln (1+x_{(i)}^{-\beta }))} \nonumber \\&{\mathrm{e}}^{-\beta (d+\sum \limits _{i=1}^{m}\ln x_{(i)}+\sum \limits _{i=m+1}^{D}\ln x_{(i)})}{\mathrm{e}}^{-\sum \limits _{i=1}^{m}\ln x_{(i)}-\sum \limits _{i=m+1}^{D}\ln x_{(i)}} \nonumber \\&{\mathrm{e}}^{-\sum \limits _{i=1}^{m}\ln (1+x_{(i)}^{-\beta })-\sum \limits _{i=m+1}^{D}\ln (1+x_{(i)}^{-\beta })}{\mathrm{e}}^{R_{i}\ln (1-(1+x_{m}^{-\beta })^{-\alpha })} \nonumber \\&{\mathrm{e}}^{\acute{R}_{i}\ln (1-(1+T^{-\beta })^{-\alpha })} \end{aligned}$$
(15)

where K is the normalizing constant given by

$$\begin{aligned} K= & {} {\int _{0}^{\infty }\int _{0}^{\infty }\alpha ^{D+a-1}\beta ^{D+c-1}{\mathrm{e}}^{-\alpha (b+\sum \limits _{i=1}^{m}\ln (1+x_{(i)}^{-\beta })+\sum \limits _{i=m+1}^{D}\ln (1+x_{(i)}^{-\beta }))}} \nonumber \\&{\mathrm{e}}^{-\beta (d+\sum \limits _{i=1}^{m}\ln x_{(i)}+\sum \limits _{i=m+1}^{D}\ln x_{(i)})}{\mathrm{e}}^{-\sum \limits _{i=1}^{m}\ln x_{(i)}-\sum \limits _{i=m+1}^{D}\ln x_{(i)}} \nonumber \\&{\mathrm{e}}^{-\sum \limits _{i=1}^{m}\ln (1+x_{(i)}^{-\beta })-\sum \limits _{i=m+1}^{D}\ln (1+x_{(i)}^{-\beta })}{\mathrm{e}}^{R_{i}\ln (1-(1+x_{m}^{-\beta })^{-\alpha })} \nonumber \\&{\mathrm{e}}^{\acute{R}_{i}\ln (1-(1+T^{-\beta })^{-\alpha })} \end{aligned}$$
(16)

In Bayesian estimation, the choice of loss function plays an important role. The most commonly used loss function is the squared error loss, given by \(\delta _{\mathrm{SL}}(g(\theta ),\hat{g}(\theta ))=(g(\theta )-\hat{g}(\theta ))^{2}\), where \(\hat{g}(\theta )\) represents an estimate of \(g(\theta )\). The Bayes estimator of \(g(\theta )\) under the squared error loss is the posterior mean, given by

$$\begin{aligned} \hat{g}_{\mathrm{SEL}}(\theta )=E(g(\theta )\mid \varvec{x})=\int _{0}^{\infty }\int _{0}^{\infty }g(\theta )\pi (\alpha ,\beta \mid \varvec{x}){\hbox {d}}\alpha {\hbox {d}}\beta . \end{aligned}$$
(17)

It is to be noticed that squared error loss is a symmetric loss function that puts equal weight on underestimation and overestimation. In many practical situations, underestimation may be more serious than overestimation, or vice versa. In such cases, an asymmetric loss function can be taken into account. We therefore also consider the linex loss function proposed by Varian [36], given by \(\delta _{\mathrm{LL}}(g(\theta ),\hat{g}(\theta ))={\mathrm{e}}^{\nu (\hat{g}(\theta )-g(\theta ))}-\nu (\hat{g}(\theta )-g(\theta ))-1\), where \(\nu \ne 0\) is a shape parameter. Notice that \(\nu >0\) corresponds to the case where overestimation is more serious than underestimation, and vice versa for \(\nu <0\). The Bayes estimator of \(g(\theta )\) under linex loss is given by

$$\begin{aligned} \hat{g}_{\mathrm{LL}}(\theta )= & {} -\frac{1}{\nu }\ln \left[ E({\mathrm{e}}^{-\nu g(\theta )} \mid \varvec{x}) \right] =-\frac{1}{\nu } \ln \left[ \int _{0}^{\infty }\int _{0}^{\infty }{\mathrm{e}}^{-\nu g(\theta )}\pi (\alpha , \beta \mid \varvec{x}) {\hbox {d}}\alpha {\hbox {d}}\beta \right] . \end{aligned}$$
(18)

We further consider entropy loss function, given by

$$\begin{aligned} \delta _{\mathrm{EL}}(g(\theta ),\hat{g}(\theta ))\propto (\hat{g}(\theta )/g(\theta ))^{w}-w\ln (\hat{g}(\theta )/g(\theta ))-1,w\ne 0. \end{aligned}$$

Here \(w>0\) suggests the case when overestimation is more serious than underestimation, and vice versa for \(w<0\). The Bayes estimator of \(g(\theta )\) under entropy loss function is given by

$$\begin{aligned} \hat{g}_{\mathrm{EL}}(\theta )=(E(g(\theta )^{-w}\mid \varvec{x}))^{-1/w}=\left[ \int _{0}^{ \infty }\int _{0}^{\infty }g(\theta )^{-w}\pi (\alpha ,\beta \mid \varvec{x} ){\hbox {d}}\alpha {\hbox {d}}\beta \right] ^{\frac{-1}{w}} \end{aligned}$$
(19)

Notice that for \(w=-1\), the Bayes estimator of \(g(\theta )\) under the entropy loss function coincides with the Bayes estimator of \(g(\theta )\) under the squared error loss function. Now, observe that the Bayes estimators given by (17), (18) and (19) do not admit closed-form expressions. Therefore, in the next sections we use the approximation method of Lindley [25] and the importance sampling technique.

Lindley approximation

In this section, we use the method of Lindley to obtain approximate explicit Bayes estimators of \(\alpha\) and \(\beta\). The Bayesian estimates involve the ratio of two integrals; we consider I(X) defined as:

$$\begin{aligned} I(X)=\dfrac{\int _{0}^{\infty }\int _{0}^{\infty }g(\alpha ,\beta ){\mathrm{e}}^{l(\alpha ,\beta \mid X)+\rho (\alpha ,\beta )}{\hbox {d}}\alpha {\hbox {d}}\beta }{\int _{0}^{\infty }\int _{0}^{\infty }{\mathrm{e}}^{l(\alpha ,\beta \mid X)+\rho (\alpha ,\beta )}{\hbox {d}}\alpha {\hbox {d}}\beta }. \end{aligned}$$
(20)

By applying Lindley's method, I(X) can be approximated as:

$$\begin{aligned} \hat{g}&=g\left( \hat{\alpha },\hat{\beta }\right) +\frac{1}{2}\left[ \left( \hat{g}_{\alpha \alpha }+2\hat{g}_{\alpha }\hat{\rho }_{\alpha }\right) \hat{\sigma } _{\alpha \alpha }+\left( \hat{g}_{\alpha \beta }+2\hat{g}_{\alpha }\hat{\rho } _{\beta }\right) \hat{\sigma }_{\alpha \beta }+\left( \hat{g}_{\beta \alpha }+2\hat{g}_{\beta }\hat{\rho }_{\alpha }\right) \hat{\sigma }_{\beta \alpha }+\left( \hat{g}_{\beta \beta }+2\hat{g}_{\beta }\hat{\rho }_{\beta }\right) \hat{\sigma }_{\beta \beta }\right] \nonumber \\&\quad +\frac{1}{2}\left[ \left( \hat{g}_{\alpha }\hat{\sigma }_{\alpha \alpha }+ \hat{g}_{\beta }\hat{\sigma }_{\alpha \beta }\right) \left( \hat{l}_{\alpha \alpha \alpha }\hat{\sigma }_{\alpha \alpha }+\hat{l}_{\alpha \beta \alpha } \hat{\sigma }_{\alpha \beta }+\hat{l}_{\beta \alpha \alpha }\hat{\sigma } _{\beta \alpha }+\hat{l}_{\beta \beta \alpha }\hat{\sigma }_{\beta \beta }\right) \right. \nonumber \\&\quad \left. +\left( \hat{g}_{\alpha }\hat{\sigma }_{\beta \alpha }+\hat{g}_{\beta }\hat{\sigma }_{\beta \beta }\right) \left( \hat{l}_{\beta \beta \beta }\hat{\sigma } _{\beta \beta }+\hat{l}_{\alpha \beta \beta }\hat{\sigma }_{\beta \alpha }+ \hat{l}_{\beta \alpha \beta }\hat{\sigma }_{\beta \alpha }+\hat{l}_{\beta \alpha \alpha }\hat{\sigma }_{\alpha \alpha }\right) \right] . \end{aligned}$$
(21)

where

$$\begin{aligned}&\hat{l}_{\beta \beta \beta } =\dfrac{\partial ^{3}l}{\partial \beta ^{3}}=- \frac{1}{\beta ^{3}}(R_{i}\alpha \left( \sum _{i=1}^{m}\frac{1}{((1+x_{i}^{-\beta })^{\alpha }-1)^{3}(x_{i}^{\beta }+1)^{3}}(\ln (x_{i})^{3}((1+x_{i}^{-\beta })^{2\alpha }x_{i}^{2\beta }\right. \nonumber \\&-3(1+x_{i}^{-\beta })^{2\alpha }x_{i}^{\beta }\alpha +(1+x_{i}^{-\beta })^{2\alpha }\alpha ^{2}-2(1+x_{i}^{-\beta })^{\alpha }x_{i}^{2\beta }+3(1+x_{i}^{-\beta })^{\alpha }x_{i}^{-\beta }\alpha \nonumber \\&+\alpha ^{2}(1+x_{i}^{-\beta })^{\alpha }-(1+x_{i}^{-\beta })^{2\alpha }x_{i}^{-\beta } +2x_{i}^{-\beta }(1+x_{i}^{-\beta })^{\alpha }+ x_{i}^{2\beta }-x_{i}^{\beta })) \nonumber \\&\left. -\left( \sum _{i=1}^{m}\frac{\ln (x_{i})^{3}(-x_{i}^{\beta }+ x_{i}^{2\beta })}{(x_{i}^{\beta }+1)^{3}}\right) \beta ^{3}\alpha -\left( \sum _{i=1}^{m}\frac{\ln (x_{i})^{3}(-x_{i}^{\beta }+x_{i}^{2\beta })}{(x_{i}^{\beta }+1)^{3}}\right) \right) , \nonumber \\&\hat{l}_{\alpha \alpha \alpha }=\dfrac{\partial ^{3}l}{\partial \alpha ^{3}}= \frac{2m+R_{i}\left( \frac{\ln (1+x_{i}^{-\beta })^{3}((1+x_{i}^{-\beta })^{\alpha }+(1+x_{i}^{-\beta })^{2\alpha })}{((1+x_{i}^{-\beta })^{\alpha })-1)^{3}} \right) \alpha ^{3}}{\alpha ^{3}}\nonumber \\&\hat{l}_{\alpha \beta \beta } =\dfrac{\partial ^{3}l}{\partial \alpha \partial ^{2}\beta }=\hat{l}_{\beta \beta \alpha }=\dfrac{\partial ^{3}l}{ \partial \beta \partial ^{2}\alpha }=-\left( \sum _{i=1}^{m}\frac{\ln x_{i}^{2}x_{i}^{-\beta }}{(1+x_{i}^{-\beta })^{2}} \right. \nonumber \\&- R_{i}\left( \sum _{i=1}^{m}\frac{1}{((1+x_{i}^{-\beta })^{\alpha }-1)^{3}(x_{i}^{\beta }+1)^{2}}(\ln (x_{i})^{2}((1+x_{i}^{-\beta })^{2\alpha }) \right. \nonumber \\&\ln (1+x_{i}^{-\beta })x_{i}^{\beta }\alpha -(1+x_{i}^{-\beta })^{2\alpha }\ln (1+x_{i}^{-\beta })\alpha ^{2} \nonumber \\&-(1+x_{i}^{-\beta })^{\alpha }\ln (1+x_{i}^{-\beta })x_{i}^{\beta }\alpha -\alpha ^{2}\ln (1+x_{i}^{-\beta })(1+x_{i}^{-\beta })^{\alpha } \nonumber \\&-(1+x_{i}^{-\beta })^{2\alpha }x_{i}^{\beta }+2(1+x_{i}^{-\beta })^{2\alpha }\alpha +2x_{i}^{\beta }(1+x_{i}^{-\beta })^{\alpha } \nonumber \\&\left. -2\alpha (1+x_{i}^{-\beta })^{\alpha }-x_{i}^{\beta })\right) \nonumber \\&\rho =\ln \pi (\alpha ,\beta ),\,\,\rho _{\alpha }=\frac{a-1}{\alpha } -b,\,\,\,\,\rho _{\beta }=\frac{c-1}{\beta }-d \nonumber \\&\hat{l}_{\alpha \alpha }=\dfrac{\partial ^{2}l}{\partial \alpha ^{2}},\,\,\, \hat{l}_{\beta \beta }=\dfrac{\partial ^{2}l}{\partial \beta ^{2}},\,\,\hat{l} _{\beta \alpha }=\dfrac{\partial ^{2}l}{\partial \beta \partial \alpha },\, \hat{l}_{\alpha \beta }=\dfrac{\partial ^{2}l}{\partial \alpha \partial \beta } \nonumber \\&\hat{l}_{\beta \beta \beta }=\dfrac{\partial ^{3}l}{\partial \beta ^{3}},\, \hat{l}_{\alpha \alpha \alpha }=\dfrac{\partial ^{3}l}{\partial \alpha ^{3}} ,\,\hat{l}_{\alpha \beta \beta }=\dfrac{\partial ^{3}l}{\partial \alpha \partial ^{2}\beta },\,\hat{l}_{\beta \beta \alpha }=\dfrac{\partial ^{3}l}{ \partial \beta \partial ^{2}\alpha } \end{aligned}$$
(22)

Here l(., .) denotes the log-likelihood function, \(\pi (\alpha ,\beta )\) the corresponding prior distribution, and \(\sigma _{ij}\) the (i, j)th element of the variance–covariance matrix. All the expressions in Eq. (21) are evaluated at the ML estimates. Suppose we want to estimate \(\alpha\) under the squared error loss. Then, we take \(g(\alpha ,\beta )=\alpha\) and subsequently observe that \(g_{\alpha }=1\), \(g_{\alpha \alpha }=g_{\beta }=g_{\beta \beta }=g_{\beta \alpha }=g_{\alpha \beta }=0\). Consequently, the Bayes estimator of \(\alpha\) is obtained as:

$$\begin{aligned} \hat{\alpha }_{\mathrm{SEL}}= &\,E(\alpha \mid X)=\hat{\alpha }+0.5[2\hat{\rho }_{\alpha } \hat{\sigma }_{\alpha \alpha }+2\hat{\rho }_{\beta }\hat{\sigma }_{\alpha \beta }+\hat{\sigma }_{\alpha \alpha }^{2}\hat{l}_{\alpha \alpha \alpha } \\&+\hat{\sigma }_{\alpha \alpha }\hat{\sigma }_{\beta \beta }\hat{l}_{\beta \beta \alpha }+2\hat{\sigma }_{\alpha \beta }\hat{\sigma }_{\beta \alpha }\hat{ l}_{\alpha \beta \beta }+\hat{\sigma }_{\alpha \beta }\hat{\sigma }_{\beta \beta }\hat{l}_{\beta \beta \beta }] \end{aligned}$$

where \(\sigma _{ij},\,i,j=1,2\), are the elements of the variance–covariance matrix of \((\hat{\alpha },\hat{\beta })\) as reported in “Fisher’s information matrix” section. The other involved expressions (evaluated at \((\alpha ,\beta )=(\hat{\alpha },\hat{\beta })\)) are reported in “Appendix.” In a similar way, the Bayes estimator of \(\alpha\) under linex loss and entropy loss is, respectively, obtained as:

$$\begin{aligned} \hat{\alpha }_{\mathrm{LL}}=-\frac{1}{\nu }{\hbox {ln}}[E({\mathrm{e}}^{-\nu \alpha }\mid X)] \end{aligned}$$

where

$$\begin{aligned} E({\mathrm{e}}^{-\nu \alpha }\mid X)= &\,{\mathrm{e}}^{-\nu \hat{\alpha }}+0.5[\hat{g}_{\alpha \alpha }\hat{\sigma }_{\alpha \alpha }+\hat{g}_{\alpha }(2\hat{\rho }_{\alpha } \hat{\sigma }_{\alpha \alpha }+2\hat{\rho }_{\beta }\hat{\sigma }_{\alpha \beta }+\hat{\sigma }_{\alpha \alpha }^{2}\hat{l}_{\alpha \alpha \alpha } \nonumber \\&+\hat{\sigma }_{\alpha \alpha }\hat{\sigma }_{\beta \beta }\hat{l}_{\beta \beta \alpha } +2\hat{\sigma }_{\alpha \beta }\hat{\sigma }_{\beta \alpha }\hat{l}_{\alpha \beta \beta }+\hat{\sigma }_{\alpha \beta }\hat{\sigma }_{\beta \beta }\hat{l} _{\beta \beta \beta })] \end{aligned}$$
(23)

and with

$$\begin{aligned} g(\alpha ,\beta )={\mathrm{e}}^{-\nu \alpha },\,g_{\alpha }=-\nu {\mathrm{e}}^{-\nu \alpha },\,g_{\alpha \alpha }=\nu ^{2}{\mathrm{e}}^{-\nu \alpha },\,g_{\beta }=g_{\beta \beta }=g_{\beta \alpha }=g_{\alpha \beta }=0. \end{aligned}$$

Finally, the Bayes estimator of \(\alpha\) under the entropy loss function is given by:

$$\begin{aligned} \hat{\alpha }_{\mathrm{EL}}=\left[ E(\alpha ^{-w}\mid X)\right] ^{-\frac{1}{w}} \end{aligned}$$

where

$$\begin{aligned} E(\alpha ^{-w}\mid X)&= \hat{\alpha }^{-w}+0.5[\hat{g}_{\alpha \alpha } \hat{\sigma }_{\alpha \alpha }+\hat{g}_{\alpha }(2\hat{\rho }_{\alpha }\hat{ \sigma }_{\alpha \alpha }+2\hat{\rho }_{\beta }\hat{\sigma }_{\alpha \beta }+ \hat{\sigma }_{\alpha \alpha }^{2}\hat{l}_{\alpha \alpha \alpha } \nonumber \\&\quad +\hat{\sigma } _{\alpha \alpha }\hat{\sigma }_{\beta \beta }\hat{l}_{\beta \beta \alpha } +2\hat{\sigma }_{\alpha \beta }\hat{\sigma }_{\beta \alpha }\hat{l} _{\alpha \beta \beta }+\hat{\sigma }_{\alpha \beta }\hat{\sigma }_{\beta \beta }\hat{l}_{\beta \beta \beta })]. \end{aligned}$$
(24)

And with \(g(\alpha ,\beta )=\alpha ^{-w}\), \(g_{\alpha }=-w\alpha ^{-(w+1)}\), \(g_{\alpha \alpha }=w(w+1)\alpha ^{-(w+2)}\), \(g_{\beta }=g_{\beta \beta }=g_{\beta \alpha }=g_{\alpha \beta }=0\). Similarly, the Bayes estimators of \(\beta\) under the squared error, linex and entropy loss functions can easily be obtained; details are not presented here for the sake of brevity. Further, notice that highest posterior density (HPD) interval estimates of \(\alpha\) and \(\beta\) cannot be obtained using the Lindley approximation. Therefore, we next use the importance sampling technique for this purpose.

Importance sampling

Importance sampling is a very useful technique for drawing samples from the posterior density. Observe that the posterior density given by (15) can be written as:

$$\begin{aligned} \pi (\alpha ,\beta \mid x)&\propto G_{\alpha \mid \beta }\left( D+a ,\, b+\sum \limits _{j=1}^{m} \ln \left( 1+ x_{j}^{-\beta }\right) +\sum \limits _{j=m+1}^{D} \ln \left( 1+ x_{j}^{-\beta } \right) \right) \nonumber \\&\quad \times G_{\beta }\left( D+c ,\, d +\sum \limits _{j=1} ^{m} \ln x_{j}+\sum \limits _{j=m+1} ^{D} \ln x_{j}\right) h( \alpha , \beta ) \end{aligned}$$
(25)

where

$$\begin{aligned} h(\alpha , \beta )= & {} \dfrac{e ^{- \sum \nolimits _{j=1}^{m} \ln x_{j}-\sum \nolimits _{j=m+1}^{D} \ln x_{j} -\sum \nolimits _{j=1}^{m} \ln ( 1 + x_{j}^{-\beta }) -\sum \nolimits _{j=m+1}^{D} \ln ( 1 + x_{j}^{-\beta }) + R_{j} \ln [1 - ( 1 + x_{m} ^{ - \beta } )^{- \alpha } ] + \acute{R}_{d} \ln [1 - ( 1 + T ^{ - \beta } )^{- \alpha } ]} }{ ( b + \sum \nolimits _{j=1} ^{m} \ln ( 1 + x_{j} ^{- \beta }) +\sum \nolimits _{j=m+1} ^{D} \ln ( 1 + x_{j} ^{- \beta }) )^{D+ a}} \end{aligned}$$

Now consider the following steps to draw samples from the above posterior density.

Step 1. :

Generate \(\beta _i\) from \(Gamma ( D+c ,\, d+\sum _{j=1}^{m} \ln x_{j}+\sum _{j=m+1}^{D} \ln x_{j} )\).

Step 2. :

Given the value of \(\beta _i\), generate \(\alpha _i\) from \(Gamma ( D+a ,\, b+\sum _{j=1} ^{m} \ln (1+x_{j}^{-\beta _i})+\sum _{j=m+1} ^{D} \ln (1+x_{j}^{-\beta _i}) )\).

Step 3. :

Repeat the steps 1 and 2 s times to obtain \((\alpha _1, \beta _1), (\alpha _2, \beta _2), \ldots , (\alpha _s, \beta _s)\).

Now the Bayes estimators of \(\alpha\) under the squared error, linex and entropy loss functions are, respectively, obtained as:

$$\begin{aligned} \alpha _{\mathrm{SL}}= & {} \frac{\sum _{i=1}^{s}\alpha _{i}h(\alpha _{i},\beta _{i})}{ \sum _{i=1}^{s}h(\alpha _{i},\beta _{i})}, \\ \alpha _{\mathrm{LL}}= & {} -\frac{1}{\nu }\ln \left[ \frac{\sum _{i=1}^{s}{\mathrm{e}}^{-\nu \alpha _{i}}h(\alpha _{i},\beta _{i})}{\sum _{i=1}^{s}h(\alpha _{i},\beta _{i})}\right] , \\ \alpha _{\mathrm{EL}}= & {} \left[ \frac{\sum _{i=1}^{s}\alpha _{i}^{-w}h(\alpha _{i},\beta _{i})}{\sum _{i=1}^{s}h(\alpha _{i},\beta _{i})}\right] ^{-\frac{1}{w}}. \end{aligned}$$
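The sampler of Steps 1–3 and the weighted estimates above can be sketched in R as follows, reusing the data objects of the earlier sketches (x of length D, m, T0, R, Rd.acute) and writing aa, bb, cc, dd for the hyper-parameters a, b, c, d and nu, w0 for the loss parameters \(\nu\) and w; the weight function h() is a direct transcription of the expression above, and the gamma draw in Step 1 requires \(d+\sum _{j}\ln x_{j}>0\).

```r
# Importance sampling (Steps 1-3) and weighted Bayes estimates of alpha.
D <- length(x)
s <- 5000
beta.s  <- rgamma(s, shape = D + cc, rate = dd + sum(log(x)))                   # Step 1
alpha.s <- rgamma(s, shape = D + aa,                                            # Step 2
                  rate = bb + sapply(beta.s, function(b1) sum(log1p(x^(-b1)))))
h <- function(a1, b1)                        # weight function h(alpha, beta) of (25)
  exp(-sum(log(x)) - sum(log1p(x^(-b1))) +
        R[m] * log1p(-(1 + x[m]^(-b1))^(-a1)) +        # removal term attached to x_(m)
        Rd.acute * log1p(-(1 + T0^(-b1))^(-a1))) /
  (bb + sum(log1p(x^(-b1))))^(D + aa)
w <- mapply(h, alpha.s, beta.s)              # Step 3: importance weights
alpha.SEL <- sum(alpha.s * w) / sum(w)                          # squared error loss
alpha.LL  <- -log(sum(exp(-nu * alpha.s) * w) / sum(w)) / nu    # linex loss
alpha.EL  <- (sum(alpha.s^(-w0) * w) / sum(w))^(-1 / w0)        # entropy loss
```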

Next, we use the method of [10] for computing HPD intervals. Suppose that \(\alpha\) is the unknown parameter of interest, and let \(\pi (\alpha \mid \varvec{x})\) and \(\Pi (\alpha \mid \varvec{x})\), respectively, denote its posterior density and posterior distribution functions. If \(\alpha ^{(p)}\) denotes the pth quantile of \(\alpha\), then we have \(\alpha ^{(p)}=\inf \{\alpha :\Pi (\alpha \mid \varvec{x})\ge p\},\,0<p<1\). It can be observed that, for a given \(\alpha ^{*}\), a simulation-consistent estimator of \(\Pi (\alpha ^{*}\mid \varvec{x})\) can be obtained as \(\Pi (\alpha ^{*}\mid \varvec{x})=\frac{\sum _{i=1}^{s}1_{\alpha _{i}\le \alpha ^{*}}h(\alpha _{i},\beta _{i})}{\sum _{i=1}^{s}h(\alpha _{i},\beta _{i})},\) where \(1_{\alpha _{i}\le \alpha ^{*}}\) is the indicator function. Let \(\alpha _{(i)}\) denote the ordered values of \(\alpha _{i}\). Then, the corresponding estimate is obtained as

$$\begin{aligned} \hat{\Pi }(\alpha ^{*}\mid \varvec{x})=\left\{ \begin{array}{cc} 0, &\quad {\text{ if }}\,\alpha ^{*}<\alpha _{(1)}, \\ \sum \nolimits _{j=1}^{i}w_{j}, &\quad \,\,\,\,\,\,\,\,\,\,{\text{ if }}\,\alpha _{(i)}\le \alpha ^{*}<\alpha _{(i+1)}, \\ 1 &\quad {\text{ if }}\,\alpha ^{*}\ge \alpha _{(s)}, \end{array}\right. \end{aligned}$$

where

$$\begin{aligned} w_{i}=\frac{h(\alpha _{(i)},\beta _{(i)})}{\sum _{i=1}^{s}h(\alpha _{(i)},\beta _{(i)})},\,\,i=1,2,\ldots ,s. \end{aligned}$$

Subsequently \(\alpha ^{(p)}\) can be estimated as

$$\begin{aligned} \hat{\alpha }^{(p)}=\left\{ \begin{array}{cc} \alpha _{(1)}, &\quad \,\,{\text{ if }}\,p=0,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\\ \alpha _{(i)}, &\quad \,\,{\text{ if }}\,\sum \nolimits _{j=1}^{i-1}w_{j}<p\le \sum \nolimits _{j=1}^{i}w_{j}. \end{array}\right. \end{aligned}$$

To obtain a \(100(1-p)\%\) confidence interval for \(\alpha\), we consider intervals of the form \(\left( \hat{\alpha }^{(\frac{j}{s})},\hat{ \alpha }^{(\frac{j+[(1-p)s]}{s})}\right)\), \(j=1,2,\ldots ,s-[(1-p)s]\), where [u] denotes the greatest integer less than or equal to u. The interval with the smallest width is taken as the HPD interval. Similarly, the HPD interval for the parameter \(\beta\) can be constructed.
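In R, this construction can be sketched as follows, using the weighted draws (alpha.s, w) of the previous sketch; p = 0.05 gives a 95% interval and the function name is ours.

```r
# HPD interval for alpha from the weighted importance-sampling draws.
hpd <- function(draws, w, p = 0.05) {
  o  <- order(draws)
  d  <- draws[o]
  cw <- cumsum(w[o] / sum(w)); cw[length(cw)] <- 1   # guard against round-off
  s  <- length(d)
  k  <- floor((1 - p) * s)
  qhat <- function(prob) d[which.max(cw >= prob)]    # weighted quantile estimate
  lo <- sapply(1:(s - k), function(j) qhat(j / s))
  hi <- sapply(1:(s - k), function(j) qhat((j + k) / s))
  best <- which.min(hi - lo)                         # shortest candidate interval
  c(lower = lo[best], upper = hi[best])
}
hpd(alpha.s, w)
```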

Data analysis

In this section, we analyze a real data set for illustration purposes. We consider a data set that represents the strength, measured in GPa, of single carbon fibers of 10 mm gauge length; the data are given below.

figure a

In the literature, this data set has been analyzed with the Burr type-III distribution. In fact, the authors considered the transformation \(X={\mathrm{e}}^{Y}\), where Y represents the original lifetime data fitted by the generalized logistic distribution; see [1, 5]. Notice that the transformed variable X then follows the Burr type-III distribution. For comparison purposes, we also consider fitting the gamma, Weibull, and generalized exponential distributions. We use the negative log-likelihood criterion (NLC), the Kolmogorov–Smirnov (KS) test statistic, Akaike's information criterion (AIC) and the Bayesian information criterion to judge the goodness of fit, as discussed in Farbod and Gasparian [18]. The results are shown in Table 1. It can be clearly seen that the Burr type-III distribution provides a better fit to this data set in terms of the minimum KS test statistic and NLC. Also, from Fig. 1 we observe that the Burr type-III distribution fits these data well. We then generate progressive hybrid type-II censored samples with different schemes, as shown in Table 2, when \({T}=65\). The SEM and NR estimates are also given in Table 2, and the Bayes estimates based on the Lindley and Markov chain Monte Carlo (MCMC) methods are given in Tables 3 and 4.

Table 1 Goodness of fit tests for the real data set
Fig. 1 Fit plot

Simulation study

In this section, we conduct a Monte Carlo simulation study to compare the performance of the proposed estimators. We simulate data from the Burr(0.5, 1.5) distribution under various progressive hybrid type-II censoring schemes for different combinations of (n, m). For each case, we obtain the ML estimates of \(\alpha\) and \(\beta\) using the NR method (see [24]) and the SEM algorithm. All reported values are based on 5000 Monte Carlo replications. Further, in the tables we abbreviate censoring schemes such as (5, 0, 0, 0) as \((5,0^{*3})\) for convenience. The average estimates and mean squared error (MSE) values of \(\alpha\) and \(\beta\) are reported in Tables 5 and 6. It is seen from the tables that the SEM estimates are better than the NR estimates in the sense of having lower MSE values. The 95% asymptotic and Bayesian confidence intervals are also included in Tables 4 and 5. It can be observed that both the Bayesian and the asymptotic confidence intervals are generally short in length.
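The data-generation step of this study can be sketched in R as follows, combining a standard uniform-spacings algorithm for progressive type-II samples with the conditional sampler rcond() and the quantile function qburr3() of the earlier sketches; the wrapper name rphc2() and the example scheme are ours.

```r
# Generate one progressive hybrid type-II censored sample from Burr(alpha, beta)
# with pre-fixed scheme R = (R_1,...,R_m) and threshold time T0 (Case I or Case II).
rphc2 <- function(R, T0, alpha, beta) {
  m <- length(R); n <- m + sum(R)
  # progressive type-II censored uniform sample via uniform spacings
  V <- runif(m)^(1 / ((1:m) + cumsum(rev(R))))
  U <- 1 - cumprod(rev(V))                   # U_{1:m:n} <= ... <= U_{m:m:n}
  x <- qburr3(U, alpha, beta)
  if (x[m] >= T0)                            # Case I: stop at the m-th failure
    return(list(x = x, D = m, R.eff = R, Rd.acute = 0))
  # Case II: the R_m units that would have been removed stay on test until T0
  extra <- sort(rcond(R[m], alpha, beta, x[m]))
  obs   <- extra[extra <= T0]
  list(x = c(x, obs), D = m + length(obs),
       R.eff = replace(R, m, 0), Rd.acute = R[m] - length(obs))
}
samp <- rphc2(R = c(5, rep(0, 14)), T0 = 3, alpha = 0.5, beta = 1.5)  # n = 20, m = 15
```

The returned R.eff is the effective removal scheme (with \(R_{m}=0\) in Case II) that, together with x, D and Rd.acute, enters the likelihood and estimation sketches of the previous sections.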

Tables 6 and 7 show the Bayes estimates based on the Lindley approximation method, and Tables 8 and 9 show the Bayes estimates based on the MCMC method discussed in Rizzo [33] and Givens and Hoeting [20]. For both methods, we used non-informative prior distributions by setting \({a}={b}={c}={d}=0\). The results show that both methods perform very well in estimating the unknown parameters; however, the Lindley method generally has lower MSEs.

Table 2 SEM algorithm and NR average and MSE (in parentheses) value when \({T}=8\)
Table 3 SEM algorithm and NR average and MSE (in parentheses) value when \({T}=12\)
Table 4 Asymptotic confidence interval when \({T}=8\)
Table 5 Asymptotic confidence interval when \({T}=12\)
Table 6 Bayes estimator using the Lindley approximation and risk value
Table 7 Bayes estimator using the Lindley approximation and risk value
Table 8 Bayes estimator using MCMC approximation and risk value when \({T}=8\)
Table 9 Bayes estimator using MCMC approximation and risk value

Conclusion

In this study, we considered the estimation of the parameters of the Burr type-III model in the presence of progressive type-II hybrid censored data. To this end, we applied classical estimation methods, namely NR and SEM, as well as Bayesian approximation techniques, including the Lindley and MCMC methods. The results showed that the SEM method is preferable to the NR method, and that the Bayes estimates based on the Lindley method have lower MSEs than those based on the MCMC method. All computations were carried out using the statistical software R, version 3.1.3.