Introduction

In the present era, the term “Process Capability Indices” appears frequently in the statistical quality control literature. For anyone keen to study whether an ongoing production process conforms to its predefined specifications, the process capability index is the right technique to choose, as it helps in monitoring and analyzing process quality and productivity. Statistical quality control refers to the use of statistical techniques in monitoring and maintaining the standards of products and services. A quality evaluation procedure such as process capability analysis helps the manufacturer meet consumer quality expectations; it is an effective measure of the quality of a production process. Process capability analysis determines whether the process capability of a supplier conforms to a customer’s specifications by applying an expression called the process capability index (PCI) to a controlled process. PCI methods became popular because they are simple and transparent and their underlying assumptions are less complicated than those of conventional methods. The first PCI was developed by Juran (1974); later, different PCIs were derived under the assumption of a normal underlying distribution, viz. Vännman (1995), Aslam et al. (2013), etc. As pointed out by Kane (1986) and Gunter (1989a, b, c, d), the quality characteristic may not be normal on many occasions, and assuming normality in such cases can lead to inaccurate and unreliable results.

Clements (1989) relaxed the normality assumption and introduced quantile-based versions of the two PCIs, Cp and Cpk, for non-normal data; this approach was later developed further by Vännman (1995), and Kane (1986) likewise considered PCIs without the normality assumption. Distinguished statisticians have since contributed many PCI methods, viz. Chan et al. (1988), Pearn and Chen (1995), Chen and Pearn (1997), Wood (2005), Chen et al. (2008), Wu and Liang (2010), Perakis (2010) and Kashif et al. (2017).

Peng (2010a, b) developed parametric lower confidence limits for quantile-based PCIs and also studied PCIs for processes with skewed distributions. Similar developments can be seen in Kantam et al. (2010) and Wararit and Somchit (2012) for the half-logistic distribution, and in Rao et al. (2015) for the inverse Rayleigh and log-logistic distributions. The main aim of a PCI is to indicate whether the quality process is moving in line with predefined standards, which are set through a lower specification limit (LSL) and an upper specification limit (USL). In the traditional approach, the quality process is assumed to be normally distributed, and the PCI, Cpk, is given by

$$C_{pk} = \min\left\{ \frac{\text{USL} - \mu}{3\sigma},\ \frac{\mu - \text{LSL}}{3\sigma} \right\}.$$
(1)

The sample mean \(\bar{x}\) and standard deviation \(s\) derived from a random sample \(\left(X_1, X_2, \ldots, X_n\right)\) of size \(n\) are used to estimate the unknown parameters \(\mu\) and \(\sigma\); hence,

$$\tilde{C}_{pk} = \min\left\{ \frac{\text{USL} - \bar{x}}{3s},\ \frac{\bar{x} - \text{LSL}}{3s} \right\}.$$
(2)

Clements (1989) suggested that if the process characteristic is drawn from a non-normal distribution, PCI \(C_{pk}\) can be constructed for any distribution as

$$C_{pk} = \min\left\{ \frac{\text{USL} - M}{U_p - M},\ \frac{M - \text{LSL}}{M - L_p} \right\}.$$
(3)

\(U_{p}\), \(L_{p}\) and M are, respectively, the 99.865th, 0.135th and 50th percentiles of the concerned distribution. Another method proposed by Chen and Pearn (1997), when the underlying process is from a non-normal distribution, is

$$C_{Np}(u, v) = \frac{d - u\left|\xi_{p_2} - m\right|}{3\sqrt{\left(\left(\xi_{p_3} - \xi_{p_1}\right)/6\right)^{2} + v\left(\xi_{p_2} - T\right)^{2}}},$$
(4)

where \(\xi_q\) is the qth quantile, i.e., \(P\left(X < \xi_q\right) = q\), \(p_1 = 0.00135\), \(p_2 = 0.5\), \(p_3 = 0.99865\), \(d = \left(\text{USL} - \text{LSL}\right)/2\), \(m = \left(\text{USL} + \text{LSL}\right)/2\) and \(T\) is the target value; from (4), we have

$$C_{Np}(0,0) = C_{Np},\quad C_{Np}(0,1) = C_{Npm},\quad C_{Np}(1,0) = C_{Npk} \quad\text{and}\quad C_{Np}(1,1) = C_{Npmk}.$$
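
To make the family in Eq. (4) concrete, the following Python sketch evaluates \(C_{Np}(u, v)\) from the three quantiles; the function name and argument order are our own illustration, not from the original papers.

```python
def c_np(u, v, xi_p1, xi_p2, xi_p3, usl, lsl, target):
    """Quantile-based PCI family C_Np(u, v) of Chen and Pearn (1997).

    xi_p1, xi_p2, xi_p3 are the 0.135th, 50th and 99.865th
    percentiles of the process distribution.
    """
    d = (usl - lsl) / 2.0           # half-width of the specification interval
    m = (usl + lsl) / 2.0           # mid-point of the specification interval
    spread = (xi_p3 - xi_p1) / 6.0  # non-normal analogue of sigma
    return (d - u * abs(xi_p2 - m)) / (
        3.0 * (spread**2 + v * (xi_p2 - target) ** 2) ** 0.5
    )

# Special cases: c_np(0, 0, ...) -> C_Np,  c_np(0, 1, ...) -> C_Npm,
#                c_np(1, 0, ...) -> C_Npk, c_np(1, 1, ...) -> C_Npmk
```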

As described above, many PCI methods have been developed; among the most widely used are Cp and Cpk, developed by Kane (1986). In this paper, we propose the popular process capability index \(C_{Npk}\) for a quality process that follows the TGLLD. The rest of the article is organized as follows. The TGLLD and the estimation of its parameters by the ML method are given in “Type-II generalized log-logistic distribution” section. In “Bootstrap confidence intervals” section, bootstrap confidence intervals are determined for the PCIs proposed by Chen and Pearn (1997). In “Simulation study” section, simulated results for small-sample comparison are tabulated. Finally, the benefit of the PCIs so developed for the TGLLD is demonstrated with an example in “Illustrative example” section.

Type-II generalized log-logistic distribution

The log-logistic distribution (LLD) has proven its importance in quality control, and different authors have developed properties of, and acceptance sampling plans for, the LLD. Its cumulative distribution function (CDF) is

$$F(t; \sigma, \lambda) = \frac{(t/\sigma)^{\lambda}}{1 + (t/\sigma)^{\lambda}};\quad t > 0,\ \sigma > 0,\ \lambda > 1,$$
(5)

where σ is the scale parameter and λ is the shape parameter.

The practical relevance of the generalized log-logistic distribution (GLLD) in diverse sectors has attracted various authors to develop extensions of the log-logistic distribution for effective and wide use, viz. Rosaiah et al. (2006, 2007). One such extension is the type-II generalized log-logistic distribution (TGLLD) introduced by Rosaiah et al. (2008); its cumulative distribution function (CDF) is

$$F(t;\lambda ,\theta ,\sigma ) = 1 - \left[ {1 + \left( {t/\sigma } \right)^{\lambda } } \right]^{ - \theta };\quad t > 0,\lambda > 1,\theta > 0,\sigma > 0.$$
(6)

It may be noted that the distribution given in (6) is defined through the reliability-oriented generalization of the log-logistic distribution. In short, we call this the type-II generalized log-logistic distribution [the type-I generalized (exponentiated) log-logistic distribution is given by Rosaiah et al. (2006)]. The corresponding probability density function (PDF) is given by

$$f(t;\lambda ,\theta ,\sigma ) = \frac{\lambda \theta }{\sigma }\frac{{\left( {t/\sigma } \right)^{\lambda - 1} }}{{\left[ {1 + \left( {t/\sigma } \right)^{\lambda } } \right]^{\theta + 1} }};\quad t > 0,\lambda > 1,\theta > 0,\sigma > 0,$$
(7)

where σ is the scale parameter, and λ and θ are shape parameters. The three-parameter TGLLD will be denoted by TGLLD\(\left(\sigma, \theta, \lambda\right)\). If \(\theta = 1\), Eq. (7) reduces to the log-logistic distribution, and if \(\lambda = 1\), the TGLLD becomes the Pareto type-II distribution. The log-logistic distribution is also a survival model, as exemplified by many authors in the past; for instance, a series system of \(n\) independent components, each with the same log-logistic lifetime distribution, has system reliability \(\left[1 + (t/\sigma)^{\lambda}\right]^{-n}\), i.e., a TGLLD lifetime with \(\theta = n\). This motivates us to study some inferential aspects of the distribution of such a series system. As not much work has been reported on this model, we attempt some theoretical and applied inferential problems for the type-II generalized log-logistic model, which provides accurate results, especially when data are examined for quality characteristics. Rao et al. (2012a, b) developed reliability test plans for this distribution. The reliability function and hazard (failure rate) function of the type-II generalized log-logistic distribution are, respectively, given by

$$R\left( t \right) = \left[ {1 + \left( {t/\sigma } \right)^{\lambda } } \right]^{ - \theta }$$
(8)
$$h(t) = \frac{\lambda\theta}{\sigma}\,\frac{(t/\sigma)^{\lambda - 1}}{1 + (t/\sigma)^{\lambda}};\quad t > 0,\ \lambda > 1,\ \theta > 0,\ \sigma > 0.$$
(9)
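
A minimal Python sketch of these distribution functions, following Eqs. (6)–(9); the function names and parameter order are our own convention.

```python
import numpy as np

def tglld_cdf(t, sigma, lam, theta):
    """CDF of the type-II generalized log-logistic distribution, Eq. (6)."""
    return 1.0 - (1.0 + (t / sigma) ** lam) ** (-theta)

def tglld_pdf(t, sigma, lam, theta):
    """PDF of the TGLLD, Eq. (7)."""
    z = (t / sigma) ** lam
    return (lam * theta / sigma) * (t / sigma) ** (lam - 1) / (1.0 + z) ** (theta + 1)

def tglld_reliability(t, sigma, lam, theta):
    """Reliability (survival) function, Eq. (8)."""
    return (1.0 + (t / sigma) ** lam) ** (-theta)

def tglld_hazard(t, sigma, lam, theta):
    """Hazard function, Eq. (9): the ratio of the PDF to the reliability."""
    return tglld_pdf(t, sigma, lam, theta) / tglld_reliability(t, sigma, lam, theta)
```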

Let \(t_1, t_2, \ldots, t_n\) be a random sample of size \(n\) drawn from TGLLD\(\left(\sigma, \theta, \lambda\right)\); then, the likelihood function \(L\) of the sample is

$$L = \prod_{i=1}^{n} f\left(t_i; \theta, \lambda, \sigma\right) = \frac{\lambda^{n}\theta^{n}}{\sigma^{n\lambda}} \prod_{i=1}^{n} \frac{t_i^{\lambda - 1}}{\left[1 + \left(t_i/\sigma\right)^{\lambda}\right]^{\theta + 1}}.$$
(10)

The log-likelihood function is

$$\log L = n\log\lambda + n\log\theta - n\lambda\log\sigma + (\lambda - 1)\sum_{i=1}^{n}\log t_i - (\theta + 1)\sum_{i=1}^{n}\log\left[1 + \left(t_i/\sigma\right)^{\lambda}\right].$$
(11)

The log-likelihood equations for the MLEs of \(\theta\), \(\lambda\) and \(\sigma\) are

$$\frac{\partial \log L}{\partial \sigma} = 0 \Rightarrow -\frac{n\lambda}{\sigma} + \frac{\lambda(\theta + 1)}{\sigma}\sum_{i=1}^{n}\frac{\left(t_i/\sigma\right)^{\lambda}}{1 + \left(t_i/\sigma\right)^{\lambda}} = 0$$
(12)
$$\frac{\partial \log L}{\partial \lambda} = 0 \Rightarrow \frac{n}{\lambda} - n\log\sigma + \sum_{i=1}^{n}\log t_i - (\theta + 1)\sum_{i=1}^{n}\frac{\left(t_i/\sigma\right)^{\lambda}\log\left(t_i/\sigma\right)}{1 + \left(t_i/\sigma\right)^{\lambda}} = 0$$
(13)
$$\frac{\partial \log L}{\partial \theta} = 0 \Rightarrow \frac{n}{\theta} - \sum_{i=1}^{n}\log\left[1 + \left(t_i/\sigma\right)^{\lambda}\right] = 0,$$

so that the MLE of \(\theta\) is

$$\hat{\theta} = \frac{n}{\sum_{i=1}^{n}\log\left[1 + \left(t_i/\sigma\right)^{\lambda}\right]}.$$
(14)
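
Equations (12) and (13) have no closed-form solution, so the MLEs are obtained numerically. One route, sketched below, is to minimize the negative of the log-likelihood in Eq. (11) directly; the optimizer and starting values are our own illustrative choices, and solving the score equations above would give the same solution.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t):
    """Negative of the TGLLD log-likelihood in Eq. (11)."""
    sigma, lam, theta = params
    if sigma <= 0 or lam <= 0 or theta <= 0:
        return np.inf                       # keep the search inside the parameter space
    n = t.size
    z = np.log1p((t / sigma) ** lam)        # log[1 + (t_i/sigma)^lambda]
    return -(n * np.log(lam) + n * np.log(theta) - n * lam * np.log(sigma)
             + (lam - 1) * np.sum(np.log(t)) - (theta + 1) * np.sum(z))

def tglld_mle(t, start=(1.0, 2.0, 1.0)):
    """Joint MLE of (sigma, lambda, theta) by direct minimization."""
    res = minimize(neg_log_lik, start, args=(np.asarray(t, dtype=float),),
                   method="Nelder-Mead")
    return res.x                            # (sigma_hat, lambda_hat, theta_hat)
```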

Let the parameter vector of the TGLLD be \(\tau = \left(\theta, \lambda, \sigma\right)\) and let \(\hat{\tau} = \left(\hat{\theta}, \hat{\lambda}, \hat{\sigma}\right)\) be its MLE. By the invariance property of MLEs, \(\hat{\xi}_q = \xi_q(\hat{\tau})\) is the maximum likelihood estimator of the quantile \(\xi_q\). Hence, the MLE of the proposed PCI is

$$\hat{C}_{Npk} = C_{Npk}(\hat{\tau}) = \frac{\min\left(\text{USL} - \xi_{p_2}(\hat{\tau}),\ \xi_{p_2}(\hat{\tau}) - \text{LSL}\right)}{\left(\xi_{p_3}(\hat{\tau}) - \xi_{p_1}(\hat{\tau})\right)/2}.$$
(15)

Therefore, \(C_{Npk} (\hat{\tau })\) is a real-valued function of quantiles \(\xi_{{p_{1} }} ,\xi_{{p_{2} }} \,{\text{and}}\,\xi_{{p_{3} }}\).

The qth quantile of the TGLLD with parameters \(\tau = \left(\theta, \lambda, \sigma\right)\) is given by \(\xi_q = \sigma\left[(1 - q)^{-1/\theta} - 1\right]^{1/\lambda}\). Since the sampling distribution of \(\hat{\tau}\) is difficult to obtain in closed mathematical form, simulation is used for the small-sample comparison.
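
Combining this quantile formula with Eq. (15) gives the plug-in estimator; a short sketch (function names are illustrative):

```python
def tglld_quantile(q, sigma, lam, theta):
    """q-th quantile of the TGLLD: sigma * ((1-q)^(-1/theta) - 1)^(1/lambda)."""
    return sigma * ((1.0 - q) ** (-1.0 / theta) - 1.0) ** (1.0 / lam)

def c_npk_hat(sigma, lam, theta, usl, lsl):
    """MLE of C_Npk in Eq. (15), using the invariance property."""
    x1 = tglld_quantile(0.00135, sigma, lam, theta)   # xi_{p1}
    x2 = tglld_quantile(0.5,     sigma, lam, theta)   # median xi_{p2}
    x3 = tglld_quantile(0.99865, sigma, lam, theta)   # xi_{p3}
    return min(usl - x2, x2 - lsl) / ((x3 - x1) / 2.0)
```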

Bootstrap confidence intervals

Bootstrap sampling is a method of drawing samples (with replacement) from the underlying probability distribution. Efron (1982) introduced this computationally intensive, computer-based simulation technique for estimating the parameters under consideration, and estimates of the PCI are determined here through bootstrap confidence intervals. As stated by Efron and Tibshirani (1993), a minimum of 1000 bootstrap resamples is needed for reasonably accurate confidence interval estimates. Among the many available methods, three bootstrap confidence intervals developed by Efron and Tibshirani are considered in this study: the standard bootstrap (SB) confidence interval, the percentile bootstrap (PB) confidence interval and the bias-corrected percentile bootstrap (BCPB) confidence interval.

Let \(t_1, t_2, \ldots, t_n\) be a random sample of size \(n\) drawn from a quality process following the TGLLD; then, \(t_1^{*}, t_2^{*}, \ldots, t_n^{*}\) is a bootstrap sample of size \(n\) drawn with replacement from the original sample. Using this bootstrap sample, the bootstrap \(\hat{C}_{Npk}\), denoted by \(\hat{C}_{Npk}^{*}\), can be obtained. From B bootstrap samples, we obtain B bootstrap estimates and arrange them in ascending order, i.e., \(\hat{C}_{Npk}^{*}(1), \hat{C}_{Npk}^{*}(2), \ldots, \hat{C}_{Npk}^{*}(B)\), which form an empirical bootstrap distribution of \(\hat{C}_{Npk}\). Here, we take B = 10,000 bootstrap samples.
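
A sketch of this resampling loop, reusing the tglld_mle and c_npk_hat helpers above (the seed and loop structure are our own choices; refitting the MLE B times is computationally heavy):

```python
import numpy as np

def bootstrap_c_npk(t, usl, lsl, B=10_000, seed=0):
    """Return the B bootstrap replicates of C_Npk-hat, sorted ascending."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    reps = np.empty(B)
    for j in range(B):
        t_star = rng.choice(t, size=t.size, replace=True)  # resample with replacement
        sigma, lam, theta = tglld_mle(t_star)              # refit on the bootstrap sample
        reps[j] = c_npk_hat(sigma, lam, theta, usl, lsl)
    return np.sort(reps)
```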

Standard bootstrap (SB) confidence interval

Here, the PCI estimator under study is \(\hat{C}_{Npk}\); the jth bootstrap estimator of \(\hat{C}_{Npk}\) is

$$\hat{C}_{Npk}^{*(j)} = \frac{\min\left(\text{USL} - \xi_{p_2}(\hat{\tau}^{(j)}),\ \xi_{p_2}(\hat{\tau}^{(j)}) - \text{LSL}\right)}{\left(\xi_{p_3}(\hat{\tau}^{(j)}) - \xi_{p_1}(\hat{\tau}^{(j)})\right)/2};\quad j = 1, 2, \ldots, B,$$
(16)

where \(\hat{\tau }^{(j)}\) is the jth bootstrap estimator of \(\tau\).

Hence, the sample average and standard deviation are obtained as

$$\bar{\hat{C}}_{Npk}^{ * } = \frac{1}{B}\sum\limits_{j = 1}^{B} {\hat{C}_{Npk}^{ * (j)} } \quad {\text{and}}\quad S_{{\hat{C}_{Npk}^{ * } }}^{ * } = \sqrt {\frac{1}{B - 1}\sum\limits_{j = 1}^{B} {\left( {\hat{C}_{Npk}^{ * (j)} - \bar{\hat{C}}_{Npk}^{ * } } \right)^{2} } } .$$

Then, the \(100\;(1 - \alpha )\%\) standard bootstrap (SB) confidence interval is

$$CI_{SB} = \left( {\bar{\hat{C}}_{Npk}^{ * } - Z_{1 - \alpha /2} S_{{\hat{C}_{Npk}^{ * } }}^{ * } ,\bar{\hat{C}}_{Npk}^{ * } + Z_{1 - \alpha /2} S_{{\hat{C}_{Npk}^{ * } }}^{ * } } \right) ,$$
(17)

where \(Z_{1-\alpha/2}\) is the \((1 - \alpha/2)\)th quantile of the standard normal distribution.
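
A one-function sketch of the SB interval in Eq. (17), applied to the sorted replicates from bootstrap_c_npk above:

```python
import numpy as np
from scipy.stats import norm

def sb_interval(reps, alpha=0.05):
    """Standard bootstrap confidence interval, Eq. (17)."""
    mean = reps.mean()
    sd = reps.std(ddof=1)          # bootstrap standard deviation with B-1 divisor
    z = norm.ppf(1 - alpha / 2)    # Z_{1-alpha/2}
    return mean - z * sd, mean + z * sd
```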

Percentile bootstrap (PB) confidence interval

The \(100\;(1 - \alpha )\%\) percentile bootstrap (PB) confidence interval is given by

$$CI_{PB} = \left( {\hat{C}_{Npk(B(\alpha /2))}^{ * } ,\hat{C}_{Npk(B(1 - \alpha /2))}^{ * } } \right) ,$$
(18)

where \(\hat{C}_{Npk(B(\alpha /2))}^{ * }\) and \(\hat{C}_{Npk(B(1 - \alpha /2))}^{ * }\) are the \(100\left( {\alpha /2} \right)\)th and \(100(1 - \alpha /2)\)th empirical percentiles of \(\hat{C}_{Npk}^{ * }\), respectively.
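
Equivalently, in the conventions of the earlier sketches:

```python
import numpy as np

def pb_interval(reps, alpha=0.05):
    """Percentile bootstrap confidence interval, Eq. (18)."""
    return np.quantile(reps, alpha / 2), np.quantile(reps, 1 - alpha / 2)
```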

Bias-corrected percentile bootstrap (BCPB) confidence interval

The bootstrap distribution obtained from a single sample may lie above or below the expected value; this shift is the bias that the method's name refers to, and the third method is introduced to correct this potential bias. Using the ordered distribution of \(\hat{C}_{Npk}^{*}\), we compute the probability \(p_0 = \Pr\left[\hat{C}_{Npk}^{*} \le \hat{C}_{Npk}\right]\), where \(\hat{C}_{Npk}\) is the estimate obtained from the original sample. Then, the following are determined:

  1. The bias-correction factor \(Z_{0} = \varPhi^{-1}(p_{0})\), where \(\varPhi(\cdot)\) is the cumulative standard normal distribution function.

  2. The percentiles \(P_{L} = \varPhi(2Z_{0} + Z_{\alpha/2})\) and \(P_{U} = \varPhi(2Z_{0} + Z_{1-\alpha/2})\), computed using \(Z_{0}\).

Hence, \(100\;(1 - \alpha )\%\) bias-corrected percentile bootstrap (BCPB) confidence interval for \(\hat{C}_{Npk}\) is given by

$$CI_{BCPB} = \left( {\hat{C}_{{Npk(BP_{L} )}}^{ * } ,\hat{C}_{{Npk(BP_{U} )}}^{ * } } \right) ,$$
(19)

where \(\hat{C}_{Npk(r)}^{ * }\) is the rth ordered value of the B bootstrap estimates of \(\hat{C}_{Npk}\).
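
A sketch of the BCPB computation in steps 1–2 and Eq. (19); the clipping of \(p_0\) is our own guard so that the normal quantile stays finite when all replicates fall on one side of the estimate:

```python
import numpy as np
from scipy.stats import norm

def bcpb_interval(reps, c_hat, alpha=0.05):
    """Bias-corrected percentile bootstrap confidence interval, Eq. (19).

    reps  : sorted bootstrap replicates of C_Npk-hat
    c_hat : the C_Npk estimate from the original sample
    """
    B = reps.size
    p0 = np.clip(np.mean(reps <= c_hat), 1.0 / B, 1.0 - 1.0 / B)
    z0 = norm.ppf(p0)                                   # bias-correction factor Z_0
    p_l = norm.cdf(2.0 * z0 + norm.ppf(alpha / 2))      # P_L
    p_u = norm.cdf(2.0 * z0 + norm.ppf(1 - alpha / 2))  # P_U
    return np.quantile(reps, p_l), np.quantile(reps, p_u)
```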

To assess the performance of the above three confidence intervals, we consider their estimated coverage probabilities and average widths. The coverage probability is the probability that the true value of \(C_{Npk}\) is covered by the \(100\;(1 - \alpha )\%\) bootstrap confidence interval, and the average width is calculated over 5000 different trials. The performance of \(CI_{SB}\), \(CI_{PB}\) and \(CI_{BCPB}\) on these two criteria is studied through simulation.

Simulation study

This section presents the results of a simulation study evaluating the three bootstrap confidence intervals of the process capability index of Eq. (15) for the TGLLD. For the parametric combinations \(\sigma = 1\) and \(\lambda = 4, 5, 6, 7\), we consider sample sizes \(n = 10, 15, 20, 25, 30\) and set the lower and upper specification limits to 1 and 29, respectively. B = 10,000 bootstrap samples of size n are generated from the original sample, and the exercise is repeated 5000 times. The 95% confidence intervals were obtained by the three methods, i.e., SB, PB and BCPB. The average width, i.e., the difference between the upper and lower confidence limits, together with the bias and MSE, is calculated to compare the simulation results, which are presented in Tables 1, 2, 3 and 4. The comparison criteria are lower average width and higher coverage probability.
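
A condensed sketch of this simulation protocol, reusing the earlier helpers (samples are drawn by inverse-CDF transformation of uniforms through the TGLLD quantile function; the trial and resample counts follow the paper but may be reduced to keep runtime manageable):

```python
import numpy as np

def coverage_and_width(sigma, lam, theta, n, usl=29.0, lsl=1.0,
                       trials=5000, B=10_000, alpha=0.05, method=pb_interval):
    """Monte Carlo estimate of coverage probability and average width."""
    true_c = c_npk_hat(sigma, lam, theta, usl, lsl)   # C_Npk at the true parameters
    rng = np.random.default_rng(1)
    hits, width_sum = 0, 0.0
    for _ in range(trials):
        u = rng.uniform(size=n)
        t = tglld_quantile(u, sigma, lam, theta)      # a TGLLD sample of size n
        reps = bootstrap_c_npk(t, usl, lsl, B=B)
        lo, hi = method(reps, alpha)
        hits += (lo <= true_c <= hi)
        width_sum += hi - lo
    return hits / trials, width_sum / trials
```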

Table 1 Estimated coverage probabilities and average widths of 95% bootstrap confidence intervals of \(C_{Npk}\) for \(\sigma = 1\,\) and \(\lambda = 4\)
Table 2 Estimated coverage probabilities and average widths of 95% bootstrap confidence intervals of \(C_{Npk}\) for \(\sigma = 1\,\) and \(\lambda = 5\)
Table 3 Estimated coverage probabilities and average widths of 95% bootstrap confidence intervals of \(C_{Npk}\) for \(\sigma = 1\,\) and \(\lambda = 6\)
Table 4 Estimated coverage probabilities and average widths of 95% bootstrap confidence intervals of \(C_{Npk}\) for \(\sigma = 1\,\) and \(\lambda = 7\)

It is noticed from the results in Tables 1, 2, 3 and 4 that as the sample size grows, the corresponding average width falls, indicating that moderately large samples give better results. Comparing the average widths, the BCPB method recorded lower values than the SB and PB methods, following the order BCPB < PB < SB. The average widths of all the methods showed an upward trend as the shape parameter \(\lambda\) increases from 4 to 7; a similar pattern is observed as the other shape parameter \(\theta\) increases from 3.5 to 5. The coverage probabilities recorded in Tables 1, 2, 3 and 4 show a rising pattern for all three methods as \(\lambda\) increases from 4 to 7. The SB method recorded estimated coverage probabilities above the nominal level (0.95) and higher than those of the BCPB and PB methods, with the pattern PB < BCPB < SB. For the BCPB method, these probabilities approach the nominal level (0.95) as \(\lambda\) increases from 4 to 7. When the sample size increased to 30, the bias and MSE recorded their lowest values at the parametric values \(\lambda = 4\) and \(\theta = 4\).

Illustrative example

In this section, we use a real data set to show that the type-II generalized log-logistic distribution can be a suitable model. Folks and Chhikara (1978) presented several sets of data to describe the Birnbaum–Saunders distribution. One of the data sets gives the runoff amounts at Jug Bridge, Maryland. For ready reference, this data set is reproduced as follows:

0.17, 0.23, 0.33, 0.39, 0.39, 0.40, 0.45, 0.52, 0.56, 0.59, 0.64, 0.66, 0.70, 0.76, 0.77, 0.78, 0.95, 0.97, 1.02, 1.12, 1.19, 1.24, 1.59, 1.74 and 2.92.

A rough indication of goodness of fit is obtained by superimposing the fitted density on the data, which shows that the TGLLD fits well; the fit is further supported by the Q–Q plot, both displayed in Fig. 1. The maximum likelihood estimates of the three-parameter TGLLD for the runoff amounts are \(\hat{\sigma } = 0.7616\), \(\hat{\lambda } = 2.6602\) and \(\hat{\theta } = 1.772\); the Kolmogorov–Smirnov test gives a maximum distance of 0.0657 between the data and the fitted TGLLD, with p value 0.9999. The high p value indicates that the TGLLD model fits this non-normal data set very well. Meanwhile, the maximum likelihood estimates of the two-parameter TGLLD for the runoff amounts are \(\hat{\lambda } = 2.6602\) and \(\hat{\theta } = 1.772\); here the Kolmogorov–Smirnov test gives a maximum distance of 0.2526 with p value 0.1891. Therefore, the two-parameter TGLLD also provides a reasonably good fit for the runoff amounts.
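
The fit and the Kolmogorov–Smirnov check can be sketched as follows, reusing tglld_mle and tglld_cdf from the earlier sections (the starting values inside tglld_mle are our assumption; exact estimates may differ slightly with the optimizer):

```python
import numpy as np
from scipy.stats import kstest

# Runoff amounts at Jug Bridge, Maryland (Folks and Chhikara 1978)
runoff = np.array([0.17, 0.23, 0.33, 0.39, 0.39, 0.40, 0.45, 0.52, 0.56,
                   0.59, 0.64, 0.66, 0.70, 0.76, 0.77, 0.78, 0.95, 0.97,
                   1.02, 1.12, 1.19, 1.24, 1.59, 1.74, 2.92])

sigma, lam, theta = tglld_mle(runoff)   # expected near (0.7616, 2.6602, 1.772)
d_stat, p_val = kstest(runoff, lambda x: tglld_cdf(x, sigma, lam, theta))
print(f"sigma={sigma:.4f} lambda={lam:.4f} theta={theta:.4f} "
      f"KS D={d_stat:.4f} p={p_val:.4f}")
```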

Fig. 1

Density plot and Q–Q plot of the fitted type-II generalized log-logistic distribution for the runoff amounts data

The bootstrap confidence intervals and widths of \(C_{Npk}\) and \(C_{pk}\) for this example are given in Table 5. The numerical example shows that the interval width is considerably larger for the traditional \(C_{pk}\) method than for the bootstrap approach to \(C_{Npk}\). Moreover, among the three bootstrap methods, BCPB performs better than the other two, in agreement with the simulation results.

Table 5 Bootstrap confidence intervals and widths of new \(C_{Npk}\) and traditional \(C_{pk}\) for TGLLD

Conclusions

In this article, we constructed bootstrap confidence intervals for the process capability index \(C_{Npk}\) proposed by Chen and Pearn (1997) through simulation, assuming that the underlying distribution is the TGLLD. Bootstrap confidence intervals were constructed by three methods, i.e., SB, PB and BCPB, and their performance was compared on average width, coverage probability, bias and MSE derived from the simulation results. The ML method was used to estimate the parameters under study. When both average width and coverage probability are taken as the performance criteria, the BCPB method gives better results than the other two methods.