1 Introduction

Statisticians and reliability engineers seek information about the lifetimes of products and materials in order to improve and develop them, yet such information may be difficult to obtain under normal operating conditions because lifetime testing is costly and time-consuming. To obtain failure data in the shortest possible time in fields such as the manufacturing industries, it is therefore preferable to use accelerated life tests (ALTs). In ALTs, the test items are exposed to stress levels higher than normal, such as temperature, vibration, voltage, or pressure, and are either kept under these conditions to induce early failures or started under normal conditions, with the units that have not failed by a pre-specified time then exposed to the higher stress levels. Accordingly, ALTs can be divided into two types: fully ALTs, which rest on the major assumption that the relationship between life and stress is known, and partially ALTs, in which this relationship is unknown or cannot be assumed. It is worth noting that under such accelerated conditions, the collected data are extrapolated through a physically appropriate statistical model to estimate the lifetime distribution under normal use conditions.

According to Nelson (2009), fully ALTs are divided mainly into three types. The first is constant-stress ALT, the most common type, in which the sample items are exposed to a constant stress level until failure or censoring, whichever occurs first. Many authors have studied this type; see, for instance, Lin et al. (2019) and Dey and Nassar (2020). Since failure times can vary widely, constant-stress testing may take too long, and an alternative approach is needed to induce failures faster. Step-stress testing was introduced to overcome this obstacle and is more efficient and practical than constant-stress testing. Under this type, a test item is exposed to a given stress level for a pre-specified period; if it does not fail, the stress level is raised, repeatedly if necessary, until the item fails or the censoring condition is reached. Several authors have studied step-stress ALT; see, for example, Wang (2006) and Hakamipour (2021). The third type is progressive-stress ALT, in which the test items are exposed to stress that increases continuously with time; see Abdel-Hamid and Al-Hussaini (2007), Mahto et al. (2020), and Mahto et al. (2021).

In some cases, the data from fully ALTs cannot be extrapolated to normal use conditions because the nature of the life-stress relationship is unknown. Partially accelerated life tests (PALTs) are then a good option for estimating the acceleration factor and thus extrapolating the accelerated data to normal use conditions. Like fully ALTs, PALTs are also divided mainly into three types. The first is constant-stress PALT (CSPALT), in which each sample item is run at either normal or accelerated conditions, i.e., at a constant stress level, until the test is terminated. Several authors have studied this type under various censoring schemes; see Abushal and Soliman (2015) and Hassan et al. (2020). In the second type, step-stress PALT, a test item is first run under normal conditions for a pre-specified time (the stress change time); if it does not fail, it is switched to a higher stress level and kept under that steady stress until failure or censoring occurs, so the total lifetime of the item passes through two stages, the normal use condition and the accelerated condition, respectively; see Ismail (2016) and Akgul et al. (2020). The third type is progressive-stress PALT; see Ismail and Al-Babtain (2015).

In life testing experiments, complete failure-time data may not be obtained for all test items, which leads to what is known as censoring; the data obtained from such tests are called censored data. The most common censoring types are Type-I and Type-II censoring. In the first type, the units are run simultaneously for a pre-specified period, and any units still surviving when the period expires are removed; see Ali and Aslam (2013) and Algarni et al. (2020). In the second type, the units are run simultaneously until a pre-fixed number of items fail, after which the remaining items are removed; see Balakrishnan and Han (2008) and Kundu and Howlader (2010). Neither type offers flexibility in withdrawing test items during the test. A more general censoring scheme, known as progressive Type-II censoring, was therefore proposed to overcome this obstacle: a pre-specified number of surviving items is withdrawn from the test at each failure, and the test continues in this fashion until a pre-fixed number of failures is observed, at which stage the remaining surviving items are removed; see, for example, EL-Sagheer (2018), Guo and Gui (2018), and Mingjie and Gui (2021). At present, the most general and flexible censoring scheme for withdrawing and saving the largest number of test units without failure, thus reducing time and cost, is the progressive first-failure censoring (PFFC) scheme proposed by Wu and Kuş (2009), which is highlighted in the next section. Several authors have studied this scheme under different distributions; see Sukhdev and Yogesh (2015), Xie and Gui (2020), Shi and Shi (2021), and Lin et al. (2023).

Chandrakant et al. (2018) proposed the Weibull inverted exponential distribution (WIED) as an extension of the inverted exponential distribution. The WIED is highly flexible: its density can take several shapes, such as reversed-J, positively skewed, and symmetric, while its hazard rate function can be constant, increasing, decreasing, unimodal, or J-shaped. Owing to these features, the WIED can be used in several sectors, such as industry and medicine, to fit a variety of reliability data.

The probability density function (PDF), cumulative distribution function (CDF), reliability function (RF), and hazard rate function (HRF) of the WIED can be written, respectively, as follows:

$$\begin{aligned} f_1\left( x;\alpha ,\beta ,\lambda \right)&=\frac{\alpha \beta \lambda }{x^2}\frac{\left( \exp \{\frac{-\lambda }{x}\}\right) ^\beta }{\left( 1-\exp \{\frac{-\lambda }{x}\}\right) ^{\beta +1}}\nonumber \\&\quad \times \exp \left\{ -\alpha \left( \frac{\exp \{\frac{-\lambda }{x}\}}{1-\exp \{\frac{-\lambda }{x}\}}\right) ^\beta \right\} ,\quad x>0,\alpha>0,\beta>0,\lambda >0, \end{aligned}$$
(1)
$$\begin{aligned} F_1(x;\alpha ,\beta ,\lambda )=1-\exp \left\{ -\alpha \left( \frac{\exp \{\frac{-\lambda }{x}\}}{1-\exp \{\frac{-\lambda }{x}\}}\right) ^\beta \right\} ,\quad x>0,\alpha>0,\beta>0,\lambda >0, \end{aligned}$$
(2)
$$\begin{aligned} S_1(x)=\exp \left\{ -\alpha \left( \frac{\exp \{\frac{-\lambda }{x}\}}{1-\exp \{\frac{-\lambda }{x}\}}\right) ^\beta \right\} ,\quad x>0, \end{aligned}$$
(3)

and

$$\begin{aligned} H_1(x)=\frac{\alpha \beta \lambda }{x^2}\frac{\left( \exp \{\frac{-\lambda }{x}\}\right) ^\beta }{\left( 1-\exp \{\frac{-\lambda }{x}\}\right) ^{\beta +1}},\quad x>0. \end{aligned}$$
(4)
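
For readers who wish to experiment with the model, the following sketch implements Eqs. (1)-(4) together with the closed-form quantile function obtained by inverting Eq. (2). The function names (`wied_pdf`, `wied_ppf`, and so on) are our own illustrative choices, not part of any established library.

```python
import numpy as np

def wied_pdf(x, alpha, beta, lam):
    """PDF (1) of the WIED."""
    q = np.exp(-lam / x)                                  # exp(-lambda/x)
    t = q / (1.0 - q)
    return (alpha * beta * lam / x**2) * q**beta / (1.0 - q)**(beta + 1) \
        * np.exp(-alpha * t**beta)

def wied_cdf(x, alpha, beta, lam):
    """CDF (2)."""
    q = np.exp(-lam / x)
    return 1.0 - np.exp(-alpha * (q / (1.0 - q))**beta)

def wied_sf(x, alpha, beta, lam):
    """Reliability function (3)."""
    return 1.0 - wied_cdf(x, alpha, beta, lam)

def wied_hrf(x, alpha, beta, lam):
    """Hazard rate function (4)."""
    q = np.exp(-lam / x)
    return (alpha * beta * lam / x**2) * q**beta / (1.0 - q)**(beta + 1)

def wied_ppf(u, alpha, beta, lam):
    """Quantile function: solving F_1(x) = u gives
    x = lambda / ln(1 + 1/t) with t = (-ln(1-u)/alpha)^(1/beta)."""
    t = (-np.log1p(-u) / alpha)**(1.0 / beta)
    return lam / np.log1p(1.0 / t)
```

Because the quantile function is available in closed form, inverse-transform sampling (evaluating `wied_ppf` at uniform variates) is the natural route for the simulations of Sect. 6.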

This article discusses statistical inference for the WIED in the presence of CSPALT under the PFFC scheme. To this end, point and interval estimates are derived by implementing classical and Bayesian approaches; besides, two bootstrap techniques are proposed. The paper is arranged as follows. Section 2 characterizes the CSPALT procedure within the framework of the PFFC scheme. In Sect. 3, the maximum likelihood (ML) estimates are highlighted, and the observed Fisher information matrix (FIM) is obtained. In Sect. 4, the bootstrap-p (Boot-p) and bootstrap-t (Boot-t) techniques are discussed. Bayesian estimates under the squared error (SE) and linear exponential (LX) loss functions are obtained in Sect. 5. In Sect. 6, a simulation study is conducted using the Monte Carlo method. A real engineering illustrative example is discussed in Sect. 7. Finally, Sect. 8 concludes the paper.

2 Model characterization

2.1 Test procedure

  1. Suppose \(u\) test items are divided in accordance with a certain proportion \(p\) into two groups: \(up\) items among the \(u\) items are chosen at random for the use condition, while the remaining \(u(1-p)\) items are allocated to the accelerated condition.

  2. The PFFC scheme is implemented as follows:

    i. The test items under the use and accelerated conditions are divided into \(n_{j}\), \(j=1,2\), groups, each of the same size \(k_{j}\), \(j=1,2\).

    ii. Let \(X_{ji:m_{j}:n_{j}:k_{j}}^{R_{ji}}\), \(j=1,2\), \(i=1,2,\ldots ,m_{j}\), denote the two PFFC samples with censoring schemes \(R_{ji}\), \(j=1,2\), \(i=1,2,\ldots ,m_{j}\), from the WIED.

    iii. As soon as the first failure \(x_{j1:m_{j}:n_{j}:k_{j}}^{R_{j1}}\) occurs, the group containing that failure as well as \(R_{j1}\) further groups are withdrawn at random from the \(n_{j}\) groups; as soon as the second failure \(x_{j2:m_{j}:n_{j}:k_{j}}^{R_{j2}}\) occurs, the group containing it as well as \(R_{j2}\) further groups are withdrawn at random from the remaining \(n_{j}-R_{j1}-1\) groups; and so on, until at the \(m_{j}\)-th failure \(x_{jm_{j}:m_{j}:n_{j}:k_{j}}^{R_{jm_{j}}}\) the group containing it as well as all \(R_{jm_{j}}\) remaining groups are withdrawn and the test is terminated. It is noteworthy that in our study \(m_{j}<n_{j}\), and additionally, the \(R_{ji}\) are predetermined.

    iv. Given the PFFC order statistics \(x_{j1:m_{j}:n_{j}:k_{j}}^{R_{j1}}<x_{j2:m_{j}:n_{j}:k_{j}}^{R_{j2}}<\cdots <x_{jm_{j}:m_{j}:n_{j}:k_{j}}^{R_{jm_{j}}}\) with censoring schemes \(R_{ji}\) under CSPALT, the joint PDF of \(x_{j1:m_{j}:n_{j}:k_{j}}^{R_{j1}},x_{j2:m_{j}:n_{j}:k_{j}}^{R_{j2}},\ldots ,x_{jm_{j}:m_{j}:n_{j}:k_{j}}^{R_{jm_{j}}}\), \(j=1,2\), is given by

      $$\begin{aligned}&f_{x_{j1:m_{j}:n_{j}:k_{j}}^{R_{j1}},x_{j2:m_{j}:n_{j}:k_{j}}^{R_{j2}},\ldots ,x_{jm_{j}:m_{j}:n_{j}:k_{j}}^{R_{jm_{j}}}}(x_{j1:m_{j}:n_{j}:k_{j}}^{R_{j1}},x_{j2:m_{j}:n_{j}:k_{j}}^{R_{j2}},\ldots ,x_{jm_{j}:m_{j}:n_{j}:k_{j}}^{R_{jm_{j}}})\nonumber \\&\quad = \prod \limits _{j=1}^{2}c_{j}k_{j}^{m_{j}}\prod \limits _{i=1}^{m_{j}}f_{j}\left( x_{ji:m_{j}:n_{j}:k_{j}}^{R_{ji}}\right) \left( 1-F_{j}\left( x_{ji:m_{j}:n_{j}:k_{j}}^{R_{ji}}\right) \right) ^{k_{j}\left( R_{ji}+1\right) -1}. \end{aligned}$$
      (5)

It is clear that Eq. (5) reduces to Type-II censoring, progressive Type-II censoring, first-failure censoring, and the complete sample as special cases.
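
A useful consequence of Eq. (5), exploited later in Sect. 6, is that a PFFC sample from a distribution \(F\) is a progressive Type-II censored sample from \(1-\left( 1-F(x)\right) ^{k}\). The following sketch, assuming the uniform-variate algorithm of Balakrishnan and Sandhu (1995) and the quantile function `wied_ppf` from the earlier listing, generates one PFFC sample; it is illustrative only.

```python
import numpy as np

def gen_pffc(n, m, k, R, ppf, rng=None):
    """One PFFC sample of size m from n groups of size k under scheme
    R = (R_1,...,R_m): draw a progressive Type-II sample of uniforms
    (Balakrishnan-Sandhu 1995) and invert F*(x) = 1 - (1 - F(x))^k."""
    rng = rng or np.random.default_rng()
    R = np.asarray(R)
    assert len(R) == m and m + R.sum() == n, "scheme must satisfy m + sum(R) = n"
    W = rng.uniform(size=m)
    # V_i = W_i^{1/(i + R_m + R_{m-1} + ... + R_{m-i+1})}
    V = W ** (1.0 / (np.arange(1, m + 1) + np.cumsum(R[::-1])))
    U = 1.0 - np.cumprod(V[::-1])            # ordered progressive uniforms
    # F*^{-1}(u) = F^{-1}(1 - (1 - u)^{1/k})
    return ppf(1.0 - (1.0 - U) ** (1.0 / k))

# e.g. one sample under the use condition:
# x1 = gen_pffc(n=30, m=20, k=2, R=[10] + [0] * 19,
#               ppf=lambda u: wied_ppf(u, 0.5, 1.0, 0.6))
```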

2.2 Assumptions

  1. Under use conditions, the lifetimes of the items \(X_{1i:m_{1}:n_{1}:k_{1}}^{R_{1i}}\), \(i=1,2,\ldots ,m_{1}\), follow the WIED with PDF, CDF, RF, and HRF given in Eqs. (1)-(4).

  2. Under accelerated conditions, the hazard rate of a tested item is increased to \( \mu H_{1}(x)\), where \(\mu \) is the acceleration factor satisfying \(\mu >1\). Consequently, the HRF, RF, CDF, and PDF can be written, respectively, as:

    $$\begin{aligned} H_{2}(x)=\frac{\alpha \beta \lambda \mu }{x^{2}}\exp \left\{ \frac{\lambda }{x}\right\} \left( \exp \left\{ \frac{\lambda }{x}\right\} -1\right) ^{- \left( \beta +1\right) }, \end{aligned}$$
    (6)
    $$\begin{aligned} S_{2}(x)=\exp \left\{ -\int \limits _{0}^{x}H_{2}(z)dz\right\} =\exp \left\{ -\alpha \mu \left( \exp \left\{ \frac{\lambda }{x}\right\} -1\right) ^{-\beta }\right\} , \end{aligned}$$
    (7)
    $$\begin{aligned} F_{2}(x)=1-\exp \left\{ -\alpha \mu \left( \exp \left\{ \frac{\lambda }{x} \right\} -1\right) ^{-\beta }\right\} , \end{aligned}$$
    (8)

    and

    $$\begin{aligned} f_{2}(x)=\frac{\alpha \beta \lambda \mu }{x^{2}}\left( \exp \left\{ \frac{ \lambda }{x}\right\} -1\right) ^{-\left( \beta +1\right) }\exp \left\{ \frac{ \lambda }{x}-\alpha \mu \left( \exp \left\{ \frac{\lambda }{x}\right\} -1\right) ^{-\beta }\right\} . \end{aligned}$$
    (9)
  3. The lifetimes of the items \(X_{ji:m_{j}:n_{j}:k_{j}}^{R_{ji}}\), \(j=1,2\), \(i=1,2,\ldots ,m_{j}\), are statistically independent and identically distributed.

3 Maximum likelihood estimation

In this section, our interest lies in obtaining the ML estimators of the parameters based on the data \(X_{ji:m_{j}:n_{j}:k_{j}}^{R_{ji}}\), \(j=1,2\), \(i=1,2,\ldots ,m_{j}\), obtained under the PFFC scheme with CSPALT. To this end, the natural logarithm of the likelihood function, without the normalizing constant, can be reduced to the following expression:

$$\begin{aligned}&\ell (\alpha ,\beta ,\lambda ,\mu |\underline{x})\nonumber \\&\quad \propto \left( m_{1}+m_{2}\right) \left( \ln \alpha +\ln \beta +\ln \lambda \right) +m_{2}\ln \mu +\sum \limits _{i=1}^{m_{1}}\ln \left( \frac{1}{x_{1i}^{2}} \right) +\sum \limits _{i=1}^{m_{2}}\ln \left( \frac{1}{x_{2i}^{2}}\right) \nonumber \\&\qquad +\lambda \left[ \sum \limits _{i=1}^{m_{1}}\frac{1}{x_{1i}}+\sum \limits _{i=1}^{m_{2}}\frac{1}{x_{2i}}\right] -\left( \beta +1\right) \left[ \sum \limits _{i=1}^{m_{1}}\ln \left( \exp \left\{ \frac{\lambda }{x_{1i}}\right\} -1\right) \right. \nonumber \\&\qquad \left. +\sum \limits _{i=1}^{m_{2}}\ln \left( \exp \left\{ \frac{\lambda }{x_{2i}}\right\} -1\right) \right] -\alpha \left[ \sum \limits _{i=1}^{m_{1}}k_{1}\left( R_{1i}+1\right) \left( \exp \left\{ \frac{\lambda }{x_{1i}}\right\} -1\right) ^{-\beta }\right. \nonumber \\&\qquad \left. +\mu \sum \limits _{i=1}^{m_{2}}k_{2}\left( R_{2i}+1\right) \left( \exp \left\{ \frac{\lambda }{x_{2i}}\right\} -1\right) ^{-\beta }\right] . \end{aligned}$$
(10)

where \(x_{ji}\) is used instead of \(X_{ji:m_{j}:n_{j}:k_{j}}^{R_{ji}}.\)

By setting the partial derivatives of Eq. (10) with respect to \(\alpha ,\beta ,\lambda ,\) and \(\mu \) to zero, the ML estimators can be obtained by solving the following likelihood equations:

$$\begin{aligned} \frac{\partial \ell }{\partial \alpha }&=\frac{m_{1}+m_{2}}{\alpha } -\sum \limits _{i=1}^{m_{1}}k_{1}\left( R_{1i}+1\right) \left( \exp \left\{ \frac{\lambda }{x_{1i}}\right\} -1\right) ^{-\beta }\nonumber \\&\quad -\mu \sum \limits _{i=1}^{m_{2}}k_{2}\left( R_{2i}+1\right) \left( \exp \left\{ \frac{\lambda }{x_{2i}}\right\} -1\right) ^{-\beta }=0 \end{aligned}$$
(11)
$$\begin{aligned} \frac{\partial \ell }{\partial \beta }&=\frac{m_{1}+m_{2}}{\beta } -\sum \limits _{i=1}^{m_{1}}\ln \left( \exp \left\{ \frac{\lambda }{x_{1i}} \right\} -1\right) -\sum \limits _{i=1}^{m_{2}}\ln \left( \exp \left\{ \frac{ \lambda }{x_{2i}}\right\} -1\right) \nonumber \\&\quad +\alpha \left[ \sum \limits _{i=1}^{m_{1}}k_{1}\left( R_{1i}+1\right) \left( \left( \exp \left\{ \frac{\lambda }{x_{1i}}\right\} -1\right) ^{-\beta }\ln \left( \exp \left\{ \frac{\lambda }{x_{1i}}\right\} -1\right) \right) \right. \nonumber \\&\quad \left. +\mu \sum \limits _{i=1}^{m_{2}}k_{2}\left( R_{2i}+1\right) \left( \left( \exp \left\{ \frac{\lambda }{x_{2i}}\right\} -1\right) ^{-\beta }\ln \left( \exp \left\{ \frac{\lambda }{x_{2i}}\right\} -1\right) \right) \right] =0, \nonumber \\ \end{aligned}$$
(12)
$$\begin{aligned} \frac{\partial \ell }{\partial \lambda }=&\frac{m_{1}+m_{2}}{\lambda } +\sum \limits _{i=1}^{m_{1}}\frac{1}{x_{1i}}+\sum \limits _{i=1}^{m_{2}}\frac{1}{ x_{2i}}-\left( \beta +1\right) \left[ \sum \limits _{i=1}^{m_{1}}\frac{\exp \left\{ \frac{\lambda }{x_{1i}}\right\} }{x_{1i}\left( \exp \left\{ \frac{ \lambda }{x_{1i}}\right\} -1\right) }\right. \nonumber \\&\quad \left. +\sum \limits _{i=1}^{m_{2}}\frac{\exp \left\{ \frac{\lambda }{x_{2i}}\right\} }{x_{2i}\left( \exp \left\{ \frac{ \lambda }{x_{2i}}\right\} -1\right) }\right] {+}\alpha \beta \left[ \sum \limits _{i=1}^{m_{1}}k_{1}\left( R_{1i}{+}1\right) \frac{\exp \left\{ \frac{\lambda }{x_{1i}}\right\} }{x_{1i}\left( \exp \left\{ \frac{\lambda }{x_{1i}}\right\} -1\right) ^{\beta +1}}\right. \nonumber \\&\quad \left. +\mu \sum \limits _{i=1}^{m_{2}}k_{2}\left( R_{2i}+1\right) \frac{\exp \left\{ \frac{\lambda }{x_{2i}}\right\} }{x_{2i}\left( \exp \left\{ \frac{\lambda }{ x_{2i}}\right\} -1\right) ^{\beta +1}}\right] =0, \end{aligned}$$
(13)

and

$$\begin{aligned} \frac{\partial \ell }{\partial \mu }=\frac{m_{2}}{\mu }-\alpha \sum \limits _{i=1}^{m_{2}}k_{2}\left( R_{2i}+1\right) \left( \exp \left\{ \frac{\lambda }{x_{2i}}\right\} -1\right) ^{-\beta }=0. \end{aligned}$$
(14)

It is noted that the non-linear Eqs. (11)–(14) cannot be solved analytically. Therefore, numerical methods such as the Newton–Raphson method are used.
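
As an alternative to hand-coded Newton-Raphson iterations, one can maximize Eq. (10) directly with a general-purpose optimizer. The sketch below codes the negative of (10) and hands it to `scipy.optimize.minimize`; the starting values and the Nelder-Mead choice are illustrative assumptions, not prescriptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, x1, x2, k1, k2, R1, R2):
    """Negative log-likelihood from Eq. (10); theta = (alpha, beta, lam, mu)."""
    alpha, beta, lam, mu = theta
    if min(alpha, beta, lam) <= 0 or mu <= 1:             # outside parameter space
        return np.inf
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    R1, R2 = np.asarray(R1), np.asarray(R2)
    m1, m2 = len(x1), len(x2)
    e1, e2 = np.expm1(lam / x1), np.expm1(lam / x2)       # exp(lam/x) - 1
    ll = ((m1 + m2) * (np.log(alpha) + np.log(beta) + np.log(lam))
          + m2 * np.log(mu)
          - 2.0 * (np.sum(np.log(x1)) + np.sum(np.log(x2)))
          + lam * (np.sum(1.0 / x1) + np.sum(1.0 / x2))
          - (beta + 1.0) * (np.sum(np.log(e1)) + np.sum(np.log(e2)))
          - alpha * (np.sum(k1 * (R1 + 1) * e1**(-beta))
                     + mu * np.sum(k2 * (R2 + 1) * e2**(-beta))))
    return -ll

# res = minimize(neg_loglik, x0=[0.5, 1.0, 0.6, 1.5],
#                args=(x1, x2, k1, k2, R1, R2), method="Nelder-Mead")
```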

3.1 Interval estimation

Making use of the asymptotic normality of the ML estimates, asymptotic confidence intervals (ACIs) for the parameters can be constructed via the asymptotic variances obtained from the inverse of the FIM, which is built from the likelihood equations in the following form:

$$\begin{aligned} \hat{I}^{-1}\left( \alpha ,\beta ,\lambda ,\mu \right) {=}\left[ E\left( \begin{array}{cccc} -\dfrac{\partial ^{2}\ell }{\partial \alpha ^{2}} &{} -\dfrac{\partial ^{2}\ell }{\partial \alpha \partial \beta } &{} -\dfrac{\partial ^{2}\ell }{ \partial \alpha \partial \lambda } &{} -\dfrac{\partial ^{2}\ell }{\partial \alpha \partial \mu } \\ -\dfrac{\partial ^{2}\ell }{\partial \beta \partial \alpha } &{} -\dfrac{ \partial ^{2}\ell }{\partial \beta ^{2}} &{} -\dfrac{\partial ^{2}\ell }{ \partial \beta \partial \lambda } &{} -\dfrac{\partial ^{2}\ell }{\partial \beta \partial \mu } \\ -\dfrac{\partial ^{2}\ell }{\partial \lambda \partial \alpha } &{} -\dfrac{ \partial ^{2}\ell }{\partial \lambda \partial \beta } &{} -\dfrac{\partial ^{2}\ell }{\partial \lambda ^{2}} &{} -\dfrac{\partial ^{2}\ell }{\partial \lambda \partial \mu } \\ -\dfrac{\partial ^{2}\ell }{\partial \mu \partial \alpha } &{} -\dfrac{ \partial ^{2}\ell }{\partial \mu \partial \beta } &{} -\dfrac{\partial ^{2}\ell }{\partial \mu \partial \lambda } &{} -\dfrac{\partial ^{2}\ell }{ \partial \mu ^{2}} \end{array} \right) \right] _{\downarrow (\alpha =\hat{\alpha },\beta =\hat{\beta } ,\lambda =\hat{\lambda },\mu =\hat{\mu })}^{-1} \end{aligned}$$
(15)

At times it is difficult to derive an exact expression for Eq. (15), so the inverse of the FIM is used without taking the expectation. Accordingly, the asymptotic variance-covariance matrix (observed FIM) is expressed as

$$\begin{aligned} \hat{I}^{-1}\left( \alpha ,\beta ,\lambda ,\mu \right) {=}\left( \begin{array}{cccc} \widehat{var(\alpha )} &{} cov(\alpha ,\beta ) &{} cov(\alpha ,\lambda ) &{} cov(\alpha ,\mu ) \\ cov(\beta ,\alpha ) &{} \widehat{var(\beta )} &{} cov(\beta ,\lambda ) &{} cov(\beta ,\mu ) \\ cov(\lambda ,\alpha ) &{} cov(\lambda ,\beta ) &{} \widehat{var(\lambda )} &{} cov(\lambda ,\mu ) \\ cov(\mu ,\alpha ) &{} cov(\mu ,\beta ) &{} cov(\mu ,\lambda ) &{} \widehat{ var(\mu )} \end{array} \right) _{\downarrow (\alpha =\hat{\alpha },\beta =\hat{\beta },\lambda =\hat{ \lambda },\mu =\hat{\mu })} \end{aligned}$$
(16)

The required asymptotic variances of \(\hat{\alpha },\hat{\beta },\hat{\lambda },\) and \(\hat{\mu }\) can be extracted from matrix (16). Hence, \(( \hat{\alpha },\hat{\beta },\hat{\lambda },\hat{\mu })\sim N[(\alpha ,\beta ,\lambda ,\mu ),\hat{I}^{-1}\left( \alpha ,\beta ,\lambda ,\mu \right) ]\) approximately, and the \((1-\gamma )100\%\) \((0<\gamma <1)\) two-sided ACIs for \(\psi =(\alpha ,\beta ,\lambda ,\mu )\) can be obtained as

$$\begin{aligned} \left( \hat{\psi }-Z_{\gamma /2}\sqrt{\widehat{var(\hat{\psi })}},\hat{\psi } +Z_{\gamma /2}\sqrt{\widehat{var(\hat{\psi })}}\right) \end{aligned}$$
(17)

where \(Z_{\gamma /2}\) is the upper \(\gamma /2\) percentile of the standard normal distribution.

Occasionally, these ACIs yield a negative lower bound even though the parameters are strictly positive. To overcome this obstacle, we use the delta method proposed by Greene (2000) and the logarithmic transformation discussed in Meeker and Escobar (1998) and Ren and Gui (2021). The asymptotic distribution of \(\ln \hat{\psi }\) is

$$\begin{aligned} \ln \hat{\psi }-\ln \psi \overset{D}{\longrightarrow }N(0,var(\ln \hat{\psi })) \end{aligned}$$
(18)

where \(\overset{D}{\longrightarrow }\) indicates convergence in distribution and \(var(\ln \hat{\psi })=\frac{var(\hat{\psi })}{\hat{\psi }^{2}}\), estimated by \(\frac{ \widehat{var(\hat{\psi })}}{\hat{\psi }^{2}}\).

Hence, the ACIs based on log-transformed ML estimates are

$$\begin{aligned} \left( \hat{\psi }\cdot \exp \left\{ -\frac{Z_{\frac{\gamma }{2}}\sqrt{\widehat{ var(\hat{\psi })}}}{\hat{\psi }}\right\} ,\quad \hat{\psi }\cdot \exp \left\{ \frac{Z_{\frac{\gamma }{2}}\sqrt{\widehat{var(\hat{\psi })}}}{\hat{ \psi }}\right\} \right) \end{aligned}$$
(19)
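
In code, both interval types follow directly from the observed FIM. Below is a minimal sketch, assuming the Hessian of the negative log-likelihood at the MLE has already been computed (for instance by finite differences or a package such as `numdifftools`); the function name is our own.

```python
import numpy as np
from scipy.stats import norm

def acis(theta_hat, hessian, gamma=0.05, log_scale=False):
    """ACIs from the observed FIM: Eq. (17), or Eq. (19) when log_scale=True.
    `hessian` is the Hessian of the negative log-likelihood at the MLE."""
    z = norm.ppf(1.0 - gamma / 2.0)
    se = np.sqrt(np.diag(np.linalg.inv(hessian)))   # asymptotic standard errors
    if log_scale:                                   # Eq. (19): bounds stay positive
        factor = np.exp(z * se / theta_hat)
        return theta_hat / factor, theta_hat * factor
    return theta_hat - z * se, theta_hat + z * se   # Eq. (17)
```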

The accuracy and efficiency of the normal approximation of the ML estimates may deteriorate when the sample size is not large enough. Therefore, the next section provides a resampling technique for constructing confidence intervals (CIs) for the parameters in the presence of small sample sizes.

4 Bootstrap confidence intervals

Traditional statistical methods may struggle with small sample sizes, and therefore CIs based on the asymptotic results may not perform well. Parametric bootstrap addresses this issue by resampling from the estimated parametric distribution of the data, allowing for the generation of a large number of bootstrap samples. This process provides a means to estimate the sampling distribution of a statistic of interest. Consequently, CIs constructed using parametric bootstrap tend to be more reliable and accurate, especially when dealing with small samples. Two parametric bootstrap techniques are provided, one is Boot-p which is proposed by Efron (1982) and the other is Boot-t which is proposed by Hall (1988).

4.1 Parametric Boot-p

  1. From the original data \(x_{j1:m_{j}:n_{j}:k_{j}}^{R_{j1}},x_{j2:m_{j}:n_{j}:k_{j}}^{R_{j2}},\ldots ,x_{jm_{j}:m_{j}:n_{j}:k_{j}}^{R_{jm_{j}}}\), \(j=1,2\), compute \(\hat{\alpha },\hat{\beta },\hat{\lambda },\) and \(\hat{\mu }\) by solving the likelihood equations (11)-(14).

  2. Utilize the censoring plan \((n_{j},m_{j},k_{j},R_{ji})\) and \((\hat{\alpha },\hat{\beta },\hat{\lambda },\hat{\mu })\) to generate a PFFC bootstrap sample \(x_{j1:m_{j}:n_{j}:k_{j}}^{*R_{j1}},x_{j2:m_{j}:n_{j}:k_{j}}^{*R_{j2}},\ldots ,x_{jm_{j}:m_{j}:n_{j}:k_{j}}^{*R_{jm_{j}}}\).

  3. From \(x_{j1:m_{j}:n_{j}:k_{j}}^{*R_{j1}},x_{j2:m_{j}:n_{j}:k_{j}}^{*R_{j2}},\ldots ,x_{jm_{j}:m_{j}:n_{j}:k_{j}}^{*R_{jm_{j}}}\), compute the bootstrap estimates, denoted by \(\hat{\varsigma }^{*}\), where \(\varsigma \) stands for \(\alpha ,\beta ,\lambda ,\) and \(\mu \).

  4. Repeat steps (2) and (3) Nboot times to obtain \(\hat{\varsigma }_{1}^{*},\hat{\varsigma }_{2}^{*},\ldots ,\hat{\varsigma }_{Nboot}^{*}\).

  5. Sort \(\hat{\varsigma }_{j}^{*}\), \(j=1,2,\ldots ,Nboot\), in ascending order as \(\hat{\varsigma }_{(j)}^{*}\), \(j=1,2,\ldots ,Nboot\).

Let \(\psi _{1}(z)=P(\hat{\varsigma }^{*}\le z)\) be the CDF of \(\hat{\varsigma }^{*}\), and define \(\hat{\varsigma }_{Boot-p}(z)=\psi _{1}^{-1}(z)\) for a given \(z\). The approximate \(100(1-\gamma )\%\) Boot-p CI of \(\hat{\varsigma }\) is given by

$$\begin{aligned}{}[\hat{\varsigma }_{Boot-p}(\gamma /2),\hat{\varsigma }_{Boot-p}(1-\gamma /2)] \end{aligned}$$
(20)
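
The Boot-p steps above translate almost line-for-line into code. A sketch follows, assuming two hypothetical helpers built from the earlier listings: `resample(theta)` draws one PFFC CSPALT sample under the fitted model (e.g., with `gen_pffc`), and `mle(sample)` returns the ML estimates by solving (11)-(14).

```python
import numpy as np

def boot_p_ci(theta_hat, resample, mle, nboot=1000, gamma=0.05):
    """Percentile bootstrap CI (20)."""
    est = np.array([mle(resample(theta_hat)) for _ in range(nboot)])  # steps 2-4
    est.sort(axis=0)                                                  # step 5
    lo = est[int(np.floor(nboot * gamma / 2.0))]
    hi = est[int(np.ceil(nboot * (1.0 - gamma / 2.0))) - 1]
    return lo, hi
```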

4.2 Parametric Boot-t

Steps 1-3 are the same as in parametric Boot-p.

  4. Compute \(I^{-1*}(\hat{\alpha }^{*},\hat{\beta }^{*},\hat{\lambda }^{*},\hat{\mu }^{*})\) based on the asymptotic variance-covariance matrix (16).

  5. Compute the statistic \(\vartheta ^{*\varsigma }\) as:

    $$\begin{aligned} \vartheta ^{*\varsigma }=\frac{(\hat{\varsigma }^{*}-\hat{\varsigma }) }{\sqrt{\widehat{var(\hat{\varsigma }^{*})}}}. \end{aligned}$$
    (21)
  6. Repeat steps 2-5 Nboot times to obtain \(\vartheta _{1}^{*\varsigma },\vartheta _{2}^{*\varsigma },\ldots ,\vartheta _{Nboot}^{*\varsigma }\).

  7. Sort \(\vartheta _{j}^{*\varsigma }\), \(j=1,2,\ldots ,Nboot\), in ascending order to obtain \(\vartheta _{(j)}^{*\varsigma }\), \(j=1,2,\ldots ,Nboot\).

Let \(\psi _{2}(z)=P(\vartheta ^{*}\le z)\) be the CDF of \(\vartheta ^{*}\). For a given \(z\), define

$$\begin{aligned} \hat{\varsigma }_{Boot-t}=\hat{\varsigma }+\sqrt{\widehat{var(\hat{\varsigma }^{*})} }\psi _{2}^{-1}(z). \end{aligned}$$
(22)

Thus, the approximate \(100(1-\gamma )\%\) Boot-t CI of \(\hat{\varsigma }\) is given by:

$$\begin{aligned}{}[\hat{\varsigma }_{Boot-t}(\gamma /2),\hat{\varsigma } _{Boot-t}(1-\gamma /2)]. \end{aligned}$$
(23)
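
A matching Boot-t sketch follows, again with hypothetical helpers: `mle_and_se(sample)` returns the ML estimates together with their standard errors taken from matrix (16). Following the standard studentized construction behind (21)-(23), the original-sample standard errors are used to map the bootstrap quantiles back to the parameter scale.

```python
import numpy as np

def boot_t_ci(theta_hat, se_hat, resample, mle_and_se, nboot=1000, gamma=0.05):
    """Studentized bootstrap CI (21)-(23)."""
    t_stats = []
    for _ in range(nboot):
        th_b, se_b = mle_and_se(resample(theta_hat))
        t_stats.append((th_b - theta_hat) / se_b)     # statistic (21)
    t_stats = np.sort(np.array(t_stats), axis=0)      # step 7
    t_lo = t_stats[int(np.floor(nboot * gamma / 2.0))]
    t_hi = t_stats[int(np.ceil(nboot * (1.0 - gamma / 2.0))) - 1]
    return theta_hat + t_lo * se_hat, theta_hat + t_hi * se_hat   # (22)-(23)
```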

5 Bayesian estimation

In the inferential procedure, the Bayesian approach is distinguished from the frequentist approach in that it allows the incorporation of subjective prior information about the life parameters, which plays a pivotal and effective role in reliability analysis; additionally, it tends to require fewer sample data, which makes it of great importance in expensive life tests. We must now determine appropriate prior distributions for the unknown parameters. Assume that \(\alpha \), \(\beta \), \(\lambda \), and \(\mu \) follow independent gamma prior distributions \(G_{1}(a_{1},b_{1})\), \(G_{2}(a_{2},b_{2})\), \(G_{3}(a_{3},b_{3})\), and \(G_{4}(a_{4},b_{4})\), respectively, since the gamma family is flexible enough to cover a large variety of prior beliefs. Since there is no prior information about the acceleration factor \(\mu \), its hyperparameters are set to zero, which yields the non-informative prior \(\pi _{4}(\mu )\propto \mu ^{-1}\). Hence, the PDFs of the prior distributions can be formulated as:

$$\begin{aligned} \left\{ \begin{array}{c} \pi _{1}(\alpha )\propto \alpha ^{a_{1}-1}\exp \left\{ -b_{1}\alpha \right\} ,\text { }\alpha>0,a_{1},b_{1}>0, \\ \pi _{2}(\beta )\propto \beta ^{a_{2}-1}\exp \left\{ -b_{2}\beta \right\} ,\text { }\beta>0,a_{2},b_{2}>0, \\ \pi _{3}(\lambda )\propto \lambda ^{a_{3}-1}\exp \left\{ -b_{3}\lambda \right\} ,\text { }\lambda>0,a_{3},b_{3}>0, \\ \pi _{4}(\mu )\propto \mu ^{-1},\ \ \mu >1. \end{array} \right. \end{aligned}$$
(24)

The above positive hyperparameters \(a_{1},a_{2},a_{3},b_{1},b_{2},\) and \( b_{3}\) are selected to reflect prior knowledge about the unknown parameters; a technique for eliciting their values is presented in Subsect. 5.3. The joint prior density can now be formulated as follows:

$$\begin{aligned} \pi (\alpha ,\beta ,\lambda ,\mu )=\pi _{1}(\alpha )\pi _{2}(\beta )\pi _{3}(\lambda )\pi _{4}(\mu ). \end{aligned}$$
(25)

Consequently, the joint posterior density can be formulated as follows:

$$\begin{aligned} {\pi }^{*}{(\alpha ,\beta ,\lambda ,\mu |}\underline{x} {)=}\frac{L(\alpha ,\beta ,\lambda ,\mu |\underline{x})\pi (\alpha ,\beta ,\lambda ,\mu )}{\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }L(\alpha ,\beta ,\lambda ,\mu |\underline{x})\pi (\alpha ,\beta ,\lambda ,\mu )d\alpha d\beta d\lambda d\mu }. \nonumber \\ \end{aligned}$$
(26)

Thus, the Bayesian estimate of a given function of the parameters can be constructed under a chosen loss function.

5.1 Loss functions

Within the framework of the Bayesian approach, the loss function plays a pivotal role in quantifying the discrepancy between the true and estimated values. Let \(\hat{\varkappa }\) denote an estimate of \( \varkappa \). The loss function \(L(\hat{\varkappa },\varkappa )\) is a real-valued function satisfying \(L(\hat{\varkappa },\varkappa )\ge 0\) for all possible estimates \(\hat{\varkappa }\) and all \( \varkappa \); it equals the loss incurred when the estimate is \(\hat{\varkappa }\) and \(\varkappa \) is the true value of the parameter. Loss functions can be divided into two types, symmetric and asymmetric.

5.1.1 Symmetric loss function

In practice, when the losses resulting from overestimation and underestimation are equally serious, symmetric loss functions are preferred. Among them, the SE loss function is well known for its convenient mathematical properties and is defined as

$$\begin{aligned} L_{SE}(\hat{\varkappa },\varkappa )=(\hat{\varkappa }-\varkappa )^{2}. \end{aligned}$$
(27)

The Bayesian estimate of \(\varkappa \) under SE loss function is

$$\begin{aligned} \hat{\varkappa }_{SE}=E_{\varkappa }(\varkappa |\underline{x}). \end{aligned}$$
(28)

Hence, the Bayesian estimate for a given function \(\varphi (\alpha ,\beta ,\lambda ,\mu )\) under SE loss function can be expressed as:

$$\begin{aligned}&\hat{\varphi }(\alpha ,\beta ,\lambda ,\mu )_{SE}\nonumber \\&\quad =\frac{\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }\varphi (\alpha ,\beta ,\lambda ,\mu )L(\alpha ,\beta ,\lambda ,\mu |\underline{x})\pi (\alpha ,\beta ,\lambda ,\mu )d\alpha d\beta d\lambda d\mu }{ \int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }L(\alpha ,\beta ,\lambda ,\mu |\underline{x})\pi (\alpha ,\beta ,\lambda ,\mu )d\alpha d\beta d\lambda d\mu }. \nonumber \\ \end{aligned}$$
(29)

5.1.2 Asymmetric loss function

Sometimes, overestimation and underestimation lead to different losses, so symmetric loss functions are not appropriate in such cases; instead, asymmetric loss functions are used to make the Bayesian approach more practical and applicable. Among the asymmetric loss functions, the LX loss function is the dominant one, defined as:

$$\begin{aligned} L_{LX}(\hat{\varkappa },\varkappa )=e^{c(\hat{\varkappa }-\varkappa )}-c(\hat{ \varkappa }-\varkappa )-1,\quad c\ne 0. \end{aligned}$$
(30)

The sign and magnitude of \(c\) represent the direction and degree of asymmetry, respectively. When \(c>0\), overestimation is more costly than underestimation, and vice versa. As \(c\) approaches zero, the LX loss function behaves approximately like the SE loss function and is therefore almost symmetric; for more details, see Zellner (1986).
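
The near-symmetry for small \(|c|\) can be made precise by a Taylor expansion of Eq. (30): writing \(\Delta =\hat{\varkappa }-\varkappa \),

$$\begin{aligned} L_{LX}(\hat{\varkappa },\varkappa )=e^{c\Delta }-c\Delta -1=\frac{c^{2}\Delta ^{2}}{2}+\frac{c^{3}\Delta ^{3}}{6}+\cdots , \end{aligned}$$

so as \(c\rightarrow 0\) the LX loss is proportional to the SE loss \(\Delta ^{2}\) of Eq. (27), with the cubic term supplying the asymmetry.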

The Bayesian estimate for \(\varkappa \) under LX loss function is

$$\begin{aligned} \hat{\varkappa }_{LX}=-\frac{1}{c}\ln E_{\varkappa }(e^{-c\varkappa }| \underline{x}). \end{aligned}$$
(31)

Hence, the Bayesian estimate for a given function \(\varphi (\alpha ,\beta ,\lambda ,\mu )\) under LX loss function can be expressed as:

$$\begin{aligned}&\hat{\varphi }(\alpha ,\beta ,\lambda ,\mu )_{LX}\nonumber \\&\quad =-\frac{1}{c}\ln \left[ \frac{\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }e^{-c\varphi (\alpha ,\beta ,\lambda ,\mu )}L(\alpha ,\beta ,\lambda ,\mu | \underline{x})\pi (\alpha ,\beta ,\lambda ,\mu )d\alpha d\beta d\lambda d\mu }{\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }\int _{0}^{\infty }L(\alpha ,\beta ,\lambda ,\mu |\underline{x})\pi (\alpha ,\beta ,\lambda ,\mu )d\alpha d\beta d\lambda d\mu }\right] . \end{aligned}$$
(32)

Obviously, the Bayesian estimates under both types of loss function involve ratios of four-dimensional integrals and cannot be obtained in closed form. Therefore, the Markov chain Monte Carlo (MCMC) technique will be applied to derive such estimates.

5.2 MCMC technique

In realistic and complex statistical modeling, the MCMC methodology provides valuable tools for Bayesian computation. One of the simplest and most widely used is the Gibbs sampling algorithm, originally proposed by Geman and Geman (1984), whose idea is to draw samples from the full conditional density of each variable. A more general procedure than Gibbs sampling is the Metropolis-Hastings (M-H) algorithm, originally presented by Metropolis et al. (1953) and Hastings (1970), in which samples are drawn by making use of the conditional density and a proposal distribution for each parameter of interest. The drawn samples are then used to compute Bayesian estimates and to establish the corresponding credible intervals (CRIs).

From (26), the joint posterior density can be reformulated as follows:

$$\begin{aligned} {\pi }^{*}{(\alpha ,\beta ,\lambda ,\mu |}\underline{x} {)}&\propto \alpha ^{m_{1}+m_{2}+a_{1}-1}\beta ^{m_{1}+m_{2}+a_{2}-1}\lambda ^{m_{1}+m_{2}+a_{3}-1}\mu ^{m_{2}-1}\nonumber \\&\quad \left[ \prod \limits _{j=1}^{2}\prod \limits _{i=1}^{m_{j}}\frac{\left( \exp \left\{ \frac{\lambda }{x_{ji}}\right\} -1\right) ^{-\left( \beta +1\right) }}{ x_{ji}^{2}}\right] \nonumber \\&\quad \times \exp \left\{ -\left( b_{1}\alpha +b_{2}\beta +b_{3}\lambda \right) +\sum \limits _{j=1}^{2}\sum \limits _{i=1}^{m_{j}}\left( \frac{\lambda }{x_{ji}}\right. \right. \nonumber \\&\quad \left. \left. -\alpha \Upsilon _{j}k_{j}\left( R_{ji}+1\right) \left( \exp \left\{ \frac{ \lambda }{x_{ji}}\right\} -1\right) ^{-\beta }\right) \right\} . \end{aligned}$$
(33)

where \(\Upsilon _{j}=\left\{ \begin{array}{c} 1,\text { if }j=1, \\ \mu ,\text { if }j=2. \end{array} \right. \)

Thus, the conditional densities can be expressed as follows:

$$\begin{aligned} {\pi }_{1}^{*}{(\alpha |\beta ,\lambda ,\mu ,}\underline{x} {)}&\propto \alpha ^{m_{1}+m_{2}+a_{1}-1}\nonumber \\&\quad \times \exp \left\{ -\alpha \left( b_{1}+\sum \limits _{j=1}^{2}\sum \limits _{i=1}^{m_{j}}k_{j}\Upsilon _{j}\left( R_{ji}+1\right) \left( \exp \left\{ \frac{\lambda }{x_{ji}}\right\} -1\right) ^{-\beta }\right) \right\} , \end{aligned}$$
(34)
$$\begin{aligned}&{\pi }_{2}^{*}{(\beta |\alpha ,\lambda ,\mu ,}\underline{x} {)} \propto \beta ^{m_{1}+m_{2}+a_{2}-1}\left[ \prod \limits _{j=1}^{2}\prod \limits _{i=1}^{m_{j}}\left( \exp \left\{ \frac{\lambda }{x_{ji}}\right\} -1\right) ^{-\beta }\right] \nonumber \\&\quad \times \exp \left\{ -\left( b_{2}\beta +\alpha \sum \limits _{j=1}^{2}\sum \limits _{i=1}^{m_{j}}k_{j}\Upsilon _{j}\left( R_{ji}+1\right) \left( \exp \left\{ \frac{\lambda }{x_{ji}}\right\} -1\right) ^{-\beta }\right) \right\} , \end{aligned}$$
(35)
$$\begin{aligned}&{\pi }_{3}^{*}{(\lambda |\alpha ,\beta ,\mu ,}\underline{x} {)} \propto \lambda ^{m_{1}+m_{2}+a_{3}-1}\left[ \prod \limits _{j=1}^{2}\prod \limits _{i=1}^{m_{j}}\left( \exp \left\{ \frac{\lambda }{x_{ji}}\right\} -1\right) ^{-(\beta +1)}\right] \nonumber \\&\quad \times \exp \left\{ -b_{3}\lambda +\sum \limits _{j=1}^{2}\sum \limits _{i=1}^{m_{j}}\left( \frac{\lambda }{x_{ji}} -\alpha k_{j}\Upsilon _{j}\left( R_{ji}+1\right) \left( \exp \left\{ \frac{ \lambda }{x_{ji}}\right\} -1\right) ^{-\beta }\right) \right\} , \end{aligned}$$
(36)
$$\begin{aligned} {\pi }_{4}^{*}{(\mu |\alpha ,\beta ,\lambda ,}\underline{x} {)}\propto \mu ^{m_{2}-1}\exp \left\{ -\alpha \mu k_{2}\sum \limits _{i=1}^{m_{2}}\left( R_{2i}+1\right) \left( \exp \left\{ \frac{\lambda }{x_{2i}}\right\} -1\right) ^{-\beta }\right\} . \end{aligned}$$
(37)

It is clear that Eqs. (34) and (37) are gamma densities, so samples of \(\alpha \) and \(\mu \) can easily be drawn using any gamma-generating routine. On the other hand, Eqs. (35) and (36) do not correspond to well-known distributions, so the plain Gibbs sampler cannot generate from them; instead, the M-H algorithm is embedded within the MCMC methodology. The hybrid procedure combining Gibbs sampling and the M-H algorithm runs through the following steps (a compact code sketch is given after the list):

  1. Initialize with \((\alpha ^{(0)}=\hat{\alpha }_{ML},\beta ^{(0)}= \hat{\beta }_{ML},\lambda ^{(0)}=\hat{\lambda }_{ML},\mu ^{(0)}=\hat{\mu } _{ML}) \) as an initial guess and set \(J=1\).

  2. Generate \(\alpha ^{(J)}\) from the gamma distribution \({\pi }_{1}^{*}{(\alpha |\beta }^{(J-1)}{,\lambda }^{(J-1)}{,\mu }^{(J-1)}{,}\underline{x}{)}\).

  3. Generate \(\mu ^{(J)}\) from the gamma distribution \({\pi }_{4}^{*}{(\mu |\alpha }^{(J)}{,\beta }^{(J-1)}{,\lambda }^{(J-1)}{,}\underline{x}{)}\).

  4. Using the M-H algorithm, generate \(\beta ^{(J)}\) and \( \lambda ^{(J)}\) from Eqs. (35) and (36) with normal proposal distributions \(N( \beta ^{( J-1) },Var( \hat{\beta })) \) and \(N( \lambda ^{( J-1) },Var( \hat{\lambda }) ) \), respectively.

  5. Record \(\alpha ^{(J)},\beta ^{(J)},\lambda ^{(J)},\) and \(\mu ^{(J)}\).

  6. Set \(J=J+1\).

  7. Repeat steps 2-6 \(N\) times.

  8. Remove the first \(B\) iterations (the burn-in period needed to reach the stationary distribution) and derive the Bayesian estimates \(\hat{\Omega }_{SE}\) and \(\hat{\Omega }_{LX}\) of \(\Omega \) under the SE and LX loss functions, respectively, by

    $$\begin{aligned} \ \hat{\Omega }_{SE}= & {} \frac{1}{N-B}\sum _{J=B+1}^{N}\Omega ^{\left( J\right) }. \end{aligned}$$
    (38)
    $$\begin{aligned} \hat{\Omega }_{LX}= & {} \frac{-1}{c}\ln \left( \frac{1}{N-B}\sum _{J=B+1}^{N}e^{-c \Omega ^{\left( J\right) }}\right) ,\text { where }c\ne 0. \end{aligned}$$
    (39)

    where \(\Omega \) stands for \(\alpha ,\beta ,\lambda ,\) and \(\mu .\)

  9. To establish two-sided CRIs of \(\Omega \), sort \(\hat{\Omega }^{(J)}\), \(J=B+1,B+2,\ldots ,N\), in ascending order as \(\left\{ \hat{\Omega }^{(1)}< \hat{\Omega }^{(2)}<\cdots <\hat{\Omega }^{(N-B)}\right\} \). Hence, the \((1-\gamma )100\%\) two-sided Bayesian CRIs of \(\Omega \) can be constructed as:

    $$\begin{aligned} \left[ \Omega _{((N-B)\gamma /2)},\Omega _{((N-B)(1-\gamma /2))}\right] . \end{aligned}$$
    (40)
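
The hybrid sampler above can be sketched compactly as follows. The code assumes the data and hyperparameters of this section and uses normal random-walk proposals for \(\beta \) and \(\lambda \); it is a minimal illustration rather than a tuned implementation (in particular, the constraint \(\mu >1\) is not enforced in the gamma draw).

```python
import numpy as np

def gibbs_mh(x1, x2, k1, k2, R1, R2, a, b, theta0, sd_beta, sd_lam,
             N=30_000, B=5_000, seed=1):
    """Gibbs-within-M-H sampler for the conditionals (34)-(37).
    a = (a1, a2, a3), b = (b1, b2, b3); theta0 = ML estimates (step 1);
    sd_beta, sd_lam = SDs of the normal proposals (step 4)."""
    rng = np.random.default_rng(seed)
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    xall = np.r_[x1, x2]
    m1, m2 = len(x1), len(x2)
    w1 = k1 * (np.asarray(R1) + 1.0)
    w2 = k2 * (np.asarray(R2) + 1.0)

    S1 = lambda be, la: np.sum(w1 * np.expm1(la / x1) ** (-be))
    S2 = lambda be, la: np.sum(w2 * np.expm1(la / x2) ** (-be))

    def log_cond(be, la, al, mu):          # joint kernel of (35) and (36)
        if be <= 0 or la <= 0:
            return -np.inf
        return ((m1 + m2 + a[1] - 1) * np.log(be)
                + (m1 + m2 + a[2] - 1) * np.log(la)
                - (be + 1) * np.sum(np.log(np.expm1(la / xall)))
                + la * np.sum(1.0 / xall)
                - b[1] * be - b[2] * la
                - al * (S1(be, la) + mu * S2(be, la)))

    al, be, la, mu = theta0
    chain = np.empty((N, 4))
    for J in range(N):
        # steps 2-3: alpha and mu are gamma draws from (34) and (37)
        al = rng.gamma(m1 + m2 + a[0], 1.0 / (b[0] + S1(be, la) + mu * S2(be, la)))
        mu = rng.gamma(m2, 1.0 / (al * S2(be, la)))
        # step 4: M-H random-walk updates for beta and lambda
        be_new = rng.normal(be, sd_beta)
        if np.log(rng.uniform()) < log_cond(be_new, la, al, mu) - log_cond(be, la, al, mu):
            be = be_new
        la_new = rng.normal(la, sd_lam)
        if np.log(rng.uniform()) < log_cond(be, la_new, al, mu) - log_cond(be, la, al, mu):
            la = la_new
        chain[J] = al, be, la, mu
    return chain[B:]                        # step 8: drop the burn-in

# posterior means of the returned draws give (38); for (39):
# est_lx = -np.log(np.mean(np.exp(-c * chain), axis=0)) / c
```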

5.3 Hyperparameters elicitation technique

In Bayesian inference, prior distributions are generally classified as informative or non-informative according to the values of the hyperparameters. For non-informative priors, the hyperparameters are set equal to zero or close to it, while for informative priors the hyperparameters can be elicited by the following technique:

  1. Generate \(n\) samples from the WIED under normal and accelerated conditions.

  2. Calculate the associated ML estimates \((\hat{\alpha }^{j},\hat{\beta }^{j},\hat{\lambda }^{j})\), \(j=1,2,\ldots ,n\).

  3. Calculate the mean and variance of \(\hat{\Theta }^{j}\), \(j=1,2,\ldots ,n\), for each parameter as

    $$\begin{aligned} \frac{1}{n}\sum _{j=1}^{n}\hat{\Theta }^{j},\quad \frac{1}{n-1}\sum _{j=1}^{n}\left( \hat{\Theta }^{j}-\frac{1}{n} \sum \limits _{i=1}^{n}\hat{\Theta }^{i}\right) ^{2}. \end{aligned}$$
    (41)

    where \(\Theta \) stands for \(\alpha ,\beta ,\) and \(\lambda \).

  4. Calculate the mean and variance of the considered gamma prior \(\pi (\Theta )\propto \Theta ^{h_{1}-1}\exp \left\{ -h_{2}\Theta \right\} \), namely \(h_{1}/h_{2}\) and \(h_{1}/h_{2}^{2}\), where \(h_{1}=a_{1},h_{2}=b_{1}\) for \(\Theta =\alpha \); \(h_{1}=a_{2},h_{2}=b_{2}\) for \(\Theta =\beta \); and \(h_{1}=a_{3},h_{2}=b_{3}\) for \(\Theta =\lambda \).

  5. Equate the mean and variance of \(\hat{\Theta }^{j}\), \(j=1,2,\ldots ,n\), with the mean and variance of the gamma prior and solve the resulting equations; the estimated hyperparameters are then given by

    $$\begin{aligned} h_{1}=\frac{\left( \frac{1}{n}\sum _{j=1}^{n}\hat{\Theta }^{j}\right) ^{2}}{ \text {\ }\frac{1}{n-1}\sum _{j=1}^{n}\left( \hat{\Theta }^{j}-\frac{1}{n} \sum \limits _{i=1}^{n}\hat{\Theta }^{i}\right) ^{2}}, \quad h_{2}=\frac{\frac{1}{n}\sum _{j=1}^{n}\hat{\Theta }^{j}}{\text { } \frac{1}{n-1}\sum _{j=1}^{n}\left( \hat{\Theta }^{j}-\frac{1}{n} \sum \limits _{i=1}^{n}\hat{\Theta }^{i}\right) ^{2}}. \end{aligned}$$
    (42)

Such a technique has been used by Dey et al. (2016).
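
As a concrete illustration of Eq. (42), a moment-matching routine might look as follows; the pilot estimates \(\hat{\Theta }^{j}\) are assumed to be available from step 2, and the function name is our own.

```python
import numpy as np

def elicit_gamma_hyper(theta_hats):
    """Moment matching (42): equate the gamma prior mean h1/h2 and
    variance h1/h2^2 to the sample mean and variance of the pilot MLEs."""
    theta_hats = np.asarray(theta_hats, float)
    mean = theta_hats.mean()
    var = theta_hats.var(ddof=1)           # the n-1 divisor of Eq. (41)
    return mean**2 / var, mean / var       # (h1, h2)

# e.g. h1, h2 = elicit_gamma_hyper(alpha_hats)   # gives (a1, b1)
```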

6 Simulation study

To evaluate the performance of the proposed methods, Monte Carlo simulation experiments are carried out using MATHEMATICA ver. 12.0. In light of the algorithm proposed by Balakrishnan and Sandhu (1995), applied with the distribution function \(1-\left( 1-F(x)\right) ^{k}\), 1000 PFFC samples were generated under both normal and accelerated conditions from the WIED with the parameters \( \alpha =0.5,\beta =1,\lambda =0.6,\) and \(\mu =1.5\). The performance of the estimates of \(\alpha ,\beta ,\lambda ,\) and \(\mu \) derived from the different proposed methods (ML estimation, the two parametric bootstraps, and the MCMC technique) is compared in terms of point and interval estimates. To this end, the average estimate (AE) and mean squared error (MSE) are reported for point estimates, while the average width (AW) and coverage probability (CP) are reported for interval estimates.

Table 1 AE and (MSE) of the estimates of \(\alpha ,\beta ,\lambda ,\) and \(\mu \)
Table 2 AE and (MSE) of the estimates of \(\alpha ,\beta ,\lambda ,\) and \(\mu \)
Table 3 AE and (MSE) of the estimates of \(\alpha ,\beta ,\lambda ,\) and \(\mu \)
Table 4 AE and (MSE) of the estimates of \(\alpha ,\beta ,\lambda ,\) and \(\mu \)
Table 5 AW and (CP) of \(95\%\) CIs
Table 6 AW and (CP) of \(95\%\) CIs
Table 7 AW and (CP) of \(95\%\) CIs
Table 8 AW and (CP) of \(95\%\) CIs

To conduct the study, distinct combinations of the group size \(k_{1}=k_{2}=k\), the number of groups \(n_{1}=n_{2}=n\), and the number of observed failures \( m_{j},j=1,2,\) are taken into account with different censoring schemes (CSs) \(R_{j},j=1,2\). For convenience, three types of CSs are considered, as coded in the sketch following the list:

CS I: \(\ R_{j}=\left( n-m_{j},0^{*m_{j}-1}\right) .\)

CS II: \(R_{j(m_{j}/2)}=n-m_{j}\), \(R_{ji}=0\) for \(i\ne m_{j}/2\) if \(m_{j}\) is even; \(R_{j((m_{j}+1)/2)}=n-m_{j}\), \(R_{ji}=0\) for \(i\ne (m_{j}+1)/2\) if \(m_{j}\) is odd.

CS III: \(R_{j}=\left( 0^{*m_{j}-1},n-m_{j}\right) .\)
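
In code, the three schemes can be built as below; the 1-based positions in CS II are translated to 0-based array indices (a small sketch, with the function name our own):

```python
import numpy as np

def scheme(cs, n, m):
    """CS I/II/III above: all n - m removals at the first, middle,
    or last observed failure, respectively."""
    R = np.zeros(m, dtype=int)
    if cs == "I":
        R[0] = n - m
    elif cs == "II":
        mid = (m - 1) // 2 if m % 2 else m // 2 - 1   # position m/2 or (m+1)/2
        R[mid] = n - m
    else:                                             # CS III
        R[-1] = n - m
    return R
```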

In this work, informative priors are adopted, with the hyperparameters selected according to the technique of Subsect. 5.3 as \(a_{1}=8.2750,\) \(b_{1}=13.0085,a_{2}=59.1713,\) \(b_{2}=62.5731,\) \( a_{3}=18.3662\), and \(b_{3}=26.1601\), and used to compute the required estimates. Besides, the MCMC technique is run for 30,000 iterations, with the first 5,000 discarded as a sufficient burn-in period to erase the effect of the initial values. The simulation results are shown in Tables 1, 2, 3, 4, 5, 6, 7 and 8, from which we note the following:

  1. For fixed \(n\), the MSEs and AWs of all parameters tend to decrease as the effective sample size \(m_{j}\) increases.

  2. With \(n\) and \(m_j\) held fixed, the MSEs show no obvious overall trend as \(k\) increases.

  3. The MCMC technique performs best among all the methods in terms of MSEs.

  4. Between the two loss functions, the LX loss function with \(c=0.5\) performs best for \( \alpha ,\lambda \), and \(\mu \), whereas the LX loss function with \(c=-0.5\) performs best for \(\beta \), in both cases based on the smallest MSEs.

  5. Overall, the MCMC CRIs are the most satisfactory because they have the narrowest widths.

  6. Scheme I often performs better than the other schemes with regard to the MSEs and AWs.

7 Practical data analysis

In this part, real data sets representing observed failure times in the life testing of light-emitting diodes (LEDs) are used to illustrate the performance of the proposed inferential methods. These data were originally analyzed by Cheng and Wang (2012) and more recently by Dey et al. (2022). Table 9 shows the complete observed failure samples obtained under normal and accelerated conditions.

Table 9 Complete CSPALT LED failure data

To assess the fit of the WIED to the data in Table 9, the Kolmogorov-Smirnov (K-S) test statistic is used. The computed K-S distances and their corresponding p-values (in parentheses) under normal use and accelerated conditions are 0.0934 (0.6564) and 0.0921 (0.6738), respectively. Based on these p-values, we conclude that the WIED fits these data well. For further illustration, Figures 2 and 3 display the empirical cumulative distributions together with the fitted survival functions. By implementing the procedure characterized in Sect. 2 on the original data of Table 9, PFFC samples are obtained within the CSPALT framework; all details are provided in Table 10. Besides, Figure 1 shows the PDFs under both normal use and accelerated stress conditions. The ML, Boot-p, and Boot-t point estimates, along with their corresponding CIs, are obtained. For Bayesian estimation, informative priors are adopted, with the hyperparameters selected as \( a_{1}=14.3403,b_{1}=85.5635,a_{2}=165.31,b_{2}=141.36,a_{3}=35.1201,\) and \( b_{3}=82.4022\) based on the technique of Subsect. 5.3. Furthermore, the chain was run for 30,000 iterations, with the initial 5,000 values discarded as burn-in, which is deemed adequate for eliminating the influence of the initial values. Bayesian point estimates are computed under both the SE and LX loss functions with various values of the parameter \(c\); moreover, \(95\%\) CRIs are also constructed. All point and interval estimates are presented in Tables 11 and 12. It is clear that the Boot-t CIs and the CRIs are the narrowest, while the ACIs and Boot-p CIs are the widest and therefore the worst in terms of interval length. Figures 4 and 5 display trace plots of the parameters generated by the MCMC approach and the associated histograms, respectively.
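
The K-S computation can be reproduced along the following lines, assuming `wied_cdf` from the sketch in Sect. 1 and ML estimates fitted to the two samples of Table 9; the variable names `d1`, `d2`, and the `*_hat` estimates are placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import kstest

# d1, d2: failure times under normal and accelerated conditions (Table 9)
res1 = kstest(d1, lambda x: wied_cdf(x, alpha_hat, beta_hat, lam_hat))
# under acceleration the CDF is F_2 of Eq. (8)
res2 = kstest(d2, lambda x: 1.0 - np.exp(
    -alpha_hat * mu_hat * np.expm1(lam_hat / x) ** (-beta_hat)))
print(res1.statistic, res1.pvalue, res2.statistic, res2.pvalue)
```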

Fig. 1
figure 1

PDFs under normal and accelerated conditions

Fig. 2
figure 2

Fit of the real data obtained under the normal condition

Fig. 3
figure 3

Fit of the real data obtained under the accelerated condition

Table 10 PFFC CSPALT LED failure data
Table 11 Different point estimates for \(\alpha ,\beta ,\lambda ,\) and \(\mu \)
Table 12 \(95\%\) CIs and CRIs for \(\alpha ,\beta ,\lambda \), and \(\mu \)

8 Conclusive remarks

In this article, statistical inference for the WIED in the presence of CSPALT under the PFFC scheme is highlighted. This combination makes the research more practical and applicable in industrial and engineering fields by saving time and test units, and thus cost. Throughout the paper, several methods are developed to estimate the parameters of interest of the WIED. For classical estimation, the ML estimates are obtained and the associated ACIs are established by making use of the observed FIM. Besides, two parametric bootstrap methods (Boot-p and Boot-t) for point and interval estimation are presented for comparison purposes. For Bayesian estimation, point and interval estimates are constructed with the help of the MCMC technique, owing to the difficulty of producing Bayesian estimates in closed form. The performance of the proposed methods is investigated via extensive Monte Carlo simulations. According to the results, the Boot-t and Bayesian estimates demonstrate superior performance and accuracy compared with the conventional likelihood and Boot-p estimates. Furthermore, employing the proposed hyperparameter elicitation technique enhances the efficiency and effectiveness of the Bayesian estimates relative to the other methods. Finally, a set of real engineering data is analyzed to demonstrate the applicability of the study.

Fig. 4
figure 4

Trace plots of \(\alpha \), \(\beta \), \(\lambda \), and \(\mu \) obtained from the MCMC approach

Fig. 5
figure 5

Histograms of \(\alpha \), \(\beta \), \(\lambda \), and \(\mu \) obtained from the MCMC approach