Abstract
In this paper, Burr-type XII \(\bar{X}\) synthetic schemes are proposed as an alternative to the classical \(\bar{X}\) synthetic schemes when the assumption of normality fails to hold. First, the basic design of the Burr-type XII \(\bar{X}\) synthetic scheme is developed and its performance investigated using exact formulae. Secondly, the non-side-sensitive and side-sensitive Burr-type XII \(\bar{X}\) synthetic schemes are introduced and their zero-state and steady-state performances, in terms of the average run-length and expected extra quadratic loss values, are investigated using a Markov chain approach. Thirdly, the proposed schemes are compared to the existing classical runs-rules and synthetic \(\bar{X}\) schemes. It is observed that the proposed schemes have very interesting properties and outperform the competing schemes in many cases under symmetric and skewed underlying process distributions. Finally, an illustrative real-life example is given to demonstrate the design and implementation of the proposed Burr-type XII \(\bar{X}\) synthetic schemes.
Introduction
Statistical process monitoring (SPM) schemes are used in applications ranging from monitoring production and manufacturing processes (e.g. Gupta et al. 2018) to monitoring project performance (e.g. Mortaji et al. 2017) and profiles (e.g. Zakour and Taleb 2017). In SPM, two sources of variation are distinguished: on the one hand, chance (or common) causes of variation and, on the other, special (or assignable) causes of variation. Common causes of variation are unavoidable and can be found in any process. A process that operates in the presence of common causes only is said to be in-control (IC). However, when the process operates in the presence of special causes of variation, it is said to be out-of-control (OOC). In this case, the causes of variation must be identified and removed as soon as possible. The faster a scheme detects an OOC state, the more efficient it is (cf. Montgomery 2013).
A basic Shewhart \(\bar{X}\) scheme is known to be more efficient (or sensitive) in unmasking large shifts (i.e. changes) in the process location parameter. However, it is relatively insensitive in unmasking small and moderate shifts. This popular scheme gives a signal if a single sample mean (or point) falls beyond the upper or lower control limits (UCL and LCL) defined by

$$ {\textit{UCL}} = \mu_{0} + k\frac{\sigma_{0}}{\sqrt{n}} \quad {\text{and}} \quad {\textit{LCL}} = \mu_{0} - k\frac{\sigma_{0}}{\sqrt{n}}, \quad (1) $$
where \(\mu_{0}\) and \(\sigma_{0}\) are the IC process mean and standard deviation, respectively, \(n\) is the sample size and \(k\) is a charting constant that is typically chosen such that some IC metric [such as the average run-length (ARL)] is equal to a pre-specified value. In order to improve the sensitivity of the basic \(\bar{X}\) scheme in detecting smaller shifts, Wu and Spedding (2000a) proposed a synthetic \(\bar{X}\) scheme for monitoring the location (or mean) process parameter which consists of two sub-charts: a basic \(\bar{X}\) sub-chart and a conforming run-length (hereafter CRL) sub-chart. For a basic synthetic scheme, an OOC signal is not based on a single charting statistic (i.e. point) plotting beyond the threshold values given by Eq. (1). Instead, when a point plots beyond the threshold values defined in Eq. (1), the corresponding sample is marked as a “nonconforming sample” and the monitoring procedure moves to the second sub-chart, where an OOC signal is obtained depending on the outcome of the CRL sub-chart. Note that whenever a point falls between the LCL and UCL, the corresponding sample is marked as a “conforming sample” (cf. Wu and Spedding 2000a, b). Bourke (1991) defines a CRL as the number of conforming samples (or points) between two successive nonconforming points, including the nonconforming point at the end. Figure 1 illustrates an example with CRL = 2, CRL = 5 and CRL = 3.
Note that whenever there is no conforming point between two nonconforming points, the CRL value is equal to one (i.e. CRL = 1). The control limit of the CRL sub-chart is denoted by H (where H is a positive integer greater than or equal to 1). Thus, the CRL sub-chart gives a signal whenever the CRL value is less than or equal to H (cf. Huang and Chen 2005; Wu et al. 2010; Guo et al. 2015). To make the computation of the run-length distribution of the synthetic scheme easier, Davis and Woodall (2002) showed that “a synthetic chart is a special case of a run-rule scheme, i.e. a 2-of-(H + 1) rule with a head-start (HS) feature”. The standard 2-of-(H + 1) rule gives an OOC signal when two plotting statistics, out of \(H + 1\) consecutive plotting statistics, plot above (below) the UCL (LCL), where H is a positive integer greater than or equal to 1. The HS feature implies that at time 0 the first sample is assumed to be nonconforming; therefore, at least one other nonconforming sample is needed within the following H sampling points for a 2-of-(H + 1) runs-rules scheme to issue an OOC signal (cf. Shongwe and Graham 2016).
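As a small illustration of the CRL sub-chart logic (our own sketch, not part of the original design; the function names are hypothetical), the CRL values of a 0/1 sequence of samples (1 = nonconforming) can be computed and checked against the signalling rule CRL ≤ H:

```python
def crl_values(nonconforming):
    """Conforming run lengths: number of samples between successive
    nonconforming points, including the nonconforming point at the end.
    Counting starts at time 0, which mimics the head-start feature."""
    crls, count = [], 0
    for ind in nonconforming:
        count += 1
        if ind == 1:            # a nonconforming sample ends the current run
            crls.append(count)
            count = 0
    return crls

def crl_subchart_signals(nonconforming, H):
    """The CRL sub-chart gives an OOC signal whenever a CRL value is <= H."""
    return any(c <= H for c in crl_values(nonconforming))
```

Applied to the sequence underlying Fig. 1, `crl_values([0, 1, 0, 0, 0, 0, 1, 0, 0, 1])` returns `[2, 5, 3]`.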
Before proceeding any further, let us acknowledge that synthetic charts have received a lot of criticism in the literature. Knoth (2016) advised against the use of synthetic charts; however, that study considered only one type of synthetic chart, and Shongwe and Graham (2017a) showed that there are actually four types of synthetic charts and that the other three types outperform the type considered by Knoth (2016). A thorough investigation of these other three types, compared against Knoth (2016)’s findings, is therefore warranted. Thus, it is our opinion that synthetic charts should not yet be discarded, as recommended by Knoth (2016), and the abovementioned reasons motivate the continued development of synthetic monitoring schemes despite Knoth (2016)’s warning not to do so.
Besides the basic design of the synthetic schemes, synthetic schemes that are based on the sub-chart limits in Eq. (1) can be classified into four principal types, which are given as follows:
- 1.
The non-side-sensitive (NSS) synthetic scheme gives an OOC signal when two nonconforming points, out of \(H + 1\) consecutive points, plot beyond the threshold values given in Eq. (1), regardless of whether the nonconforming points lie above the UCL or below the LCL, separated by at most \(H - 1\) conforming points that plot between the LCL and the UCL (Wu and Spedding 2000a). The control charting regions of the NSS scheme are shown in Fig. 2a. From the operation of the NSS synthetic scheme, the \({\textit{CRL}}\) value can be defined as the number of conforming points that plot between the \({\textit{LCL}}\) and UCL in Fig. 2a in between two successive nonconforming points, irrespective of whether one (or both) of the nonconforming points fall(s) above the UCL or below the LCL.
- 2.
The standard side-sensitive (SSS) synthetic scheme gives an OOC signal when two nonconforming points, out of \(H + 1\) consecutive points, plot above (below) the UCL (LCL) which are separated by at most \(H - 1\) points that plot below (above) the UCL (LCL), respectively (Davis and Woodall 2002). The control charting zones (or regions) of the SSS scheme are shown in Fig. 2b. From the operation of the SSS synthetic scheme, two different types of \({\textit{CRL}}\)s denoted \({\textit{CRL}}_{{{\bar{\text{L}}}}}\) and \({\textit{CRL}}_{{{\bar{\text{U}}}}}\) can be defined. The \({\textit{CRL}}_{{{\bar{\text{L}}}}}\) value is the number of conforming samples that fall above the \({\textit{LCL}}\) in Fig. 2b that are plotted in between the two consecutive nonconforming points below the LCL (i.e. in region L), including the nonconforming point at the end, whereas the \({\textit{CRL}}_{{{\bar{\text{U}}}}}\) value is the number of conforming samples that fall below the \({\textit{UCL}}\) in Fig. 2b, that are plotted in between the two consecutive nonconforming points above the UCL (i.e. in region U), including the nonconforming point at the end.
- 3.
The revised side-sensitive (RSS) synthetic scheme gives an OOC signal when two nonconforming points, out of \(H + 1\) successive points, plot above (below) the UCL (LCL), separated by at most \(H - 1\) conforming points that plot between the LCL and the UCL (Machado and Costa 2014). The control charting regions of the RSS scheme are shown in Fig. 2b. From the operation of the RSS synthetic scheme, two different types of \({\textit{CRL}}\)s, denoted \({\textit{CRL}}_{{\check{\text{L}}}}\) and \({\textit{CRL}}_{{\check{\text{U}}}}\), can also be defined. The \({\textit{CRL}}_{{\check{\text{L}}}}\) is the number of conforming samples that fall within region I in Fig. 2b that are plotted in between the two consecutive nonconforming points below the LCL (i.e. in region L), including the nonconforming point at the end, whereas the \({\textit{CRL}}_{{\check{\text{U}}}}\) is the number of conforming points that fall within region I in Fig. 2b that are plotted in between the two consecutive nonconforming points above the UCL (i.e. in region U), including the nonconforming point at the end.
- 4.
The modified side-sensitive (MSS) synthetic scheme gives an OOC signal when two nonconforming points, out of \(H + 1\) successive points, plot above (below) the UCL (LCL), separated by at most \(H - 1\) conforming points that plot between the CL and the UCL (LCL), respectively (Shongwe and Graham 2016, 2018). The control charting regions of the MSS scheme are shown in Fig. 2c. From the operation of an MSS scheme, two types of \({\textit{CRL}}\)s are needed: the lower \({\textit{CRL}}\) (denoted \({\textit{CRL}}_{\text{L}}\)) and the upper \({\textit{CRL}}\) (denoted \({\textit{CRL}}_{\text{U}}\)). A \({\textit{CRL}}_{\text{L}}\) is the number of lower conforming points (i.e. conforming points that fall within region 3 in Fig. 2c) that are plotted in between two consecutive lower nonconforming points (i.e. nonconforming points that fall below the LCL), including the lower nonconforming point at the end. Similarly, a \({\textit{CRL}}_{\text{U}}\) is the number of upper conforming points (i.e. conforming points that fall within region 2 in Fig. 2c) that are plotted between two consecutive upper nonconforming points (i.e. nonconforming points that fall above the UCL), including the nonconforming point at the end. Note that the absence of a conforming point implies that either the \({\textit{CRL}}_{\text{U}}\) or \({\textit{CRL}}_{\text{L}}\) equals one.
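To make the side-sensitive logic concrete, the sketch below (our own hypothetical illustration; the head-start feature is omitted) evaluates the MSS signalling rule on a sequence of region codes, with regions numbered as in Fig. 2c (1 = on/above the UCL, 2 = between the CL and UCL, 3 = between the LCL and CL, 4 = on/below the LCL):

```python
def mss_signal_index(regions, H):
    """Return the index of the first MSS out-of-control signal, or None.
    Two nonconforming points on the SAME side, out of H + 1 consecutive
    points, separated only by conforming points on that side, signal."""
    upper = None   # number of region-2 points since the last region-1 point
    lower = None   # number of region-3 points since the last region-4 point
    for i, r in enumerate(regions):
        if r == 1:                                   # upper nonconforming
            if upper is not None and upper <= H - 1:
                return i
            upper, lower = 0, None
        elif r == 4:                                 # lower nonconforming
            if lower is not None and lower <= H - 1:
                return i
            lower, upper = 0, None
        elif r == 2:                                 # upper conforming
            upper = upper + 1 if upper is not None else None
            if upper is not None and upper > H - 1:
                upper = None                         # window has expired
            lower = None                             # breaks any lower sequence
        else:                                        # region 3: lower conforming
            lower = lower + 1 if lower is not None else None
            if lower is not None and lower > H - 1:
                lower = None
            upper = None
    return None
```

For example, with H = 2 the pattern 1, 3, 1 does not signal (the region-3 point breaks the upper sequence), whereas 4, 3, 4 does.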
The classical NSS and SSS \(\bar{X}\) synthetic schemes (i.e. NSS and SSS \(\bar{X}\) synthetic schemes for normal data) were first proposed by Wu and Spedding (2000a, b) and Davis and Woodall (2002), respectively. Later on, Machado and Costa (2014) proposed a classical RSS \(\bar{X}\) synthetic scheme. More recently, several authors have pointed out the need to develop synthetic schemes (Lee and Khoo 2017; Shongwe and Graham 2017b, c, 2018). Lee and Khoo (2017) investigated the performance of the synthetic double sampling S scheme, which was found to perform better than the existing double sampling S scheme for a wide range of shifts. Shongwe and Graham (2017b, c) studied the zero-state and steady-state run-length characteristics of synthetic and runs-rules \(\bar{X}\) schemes, respectively. Later on, Shongwe and Graham (2018) proposed the MSS synthetic scheme for monitoring the location parameter. The above-mentioned schemes are called parametric (or classical) schemes since they are based on the normality assumption. It is well known that parametric schemes are not IC robust and are relatively inefficient when the normality assumption is violated. Therefore, there is a need to develop nonparametric schemes and adaptive schemes based on flexible probability distributions. The Burr-type XII distribution can be used for this purpose since it can represent a wide variety of unimodal distributions (Malela-Majika et al. 2018b; Wooluru et al. 2016).
Over the last few decades, an important discussion amongst SPM researchers has been whether to monitor process shifts using traditional monitoring schemes (in the form of traditional control charts) or using support vector machines (Du et al. 2012, 2013; Du and Lv 2013). Du and Lv (2013) stated that “Support vector machine (SVM) has recently become a new generation learning system based on recent advances on statistical learning theory for solving a variety of learning, classification and prediction problems”. They proposed an enhanced minimal Euclidean distance scheme for monitoring process mean shifts of auto-correlated processes and made use of support vector regression (SVR) to predict the values of a variable in time series. SVR is an extension of SVM; it is a regression method obtained by introducing an alternative loss function. SVMs have been shown to be effective in minimising both Type I and Type II errors when detecting shifts in auto-correlated processes (Chinnam 2002). SVMs are also very useful as classifiers to identify the source of a change in multivariate processes (Cheng and Cheng 2008). However, since the focus of this paper is not on multivariate or auto-correlated processes, SVMs are not explored further in this paper.
In this paper, NSS, SSS, RSS and MSS \(\bar{X}\) synthetic schemes for non-normal data are introduced in the SPM context. The Burr-type XII (BTXII) distribution is used in the design of the proposed synthetic schemes because of its simplicity and flexibility.
The remainder of this paper is organized as follows: Sect. 2 introduces the basic design of the synthetic BTXII \(\bar{X}\) scheme. The proposed NSS and side-sensitive synthetic BTXII \(\bar{X}\) schemes are introduced in Sect. 3. The zero-state and steady-state characteristics of the run-length distribution are derived using the Markov chain approach. The IC and OOC performances of the proposed schemes are discussed in Sect. 4. The proposed schemes are also compared to their parametric (or classical) counterparts. Section 5 presents a real-life example demonstrating the design and implementation of the proposed synthetic schemes. A summary and some concluding remarks are given in Sect. 6.
Operation and basic design of a BTXII \(\bar{X}\) synthetic scheme for non-normal data
Assume that \(\{ X_{ij} ;\;i \ge 1\}_{j = 1}^{n}\) is a sequence of independent and identically distributed (iid) samples from a continuous (not necessarily normal) distribution with IC process mean \(\mu_{0}\) and IC process standard deviation \(\sigma_{0}\). The cumulative distribution function (cdf) of the BTXII distribution is given by (Burr 1973; Malela-Majika et al. 2018a)

$$ F(y) = 1 - \left( {1 + y^{c} } \right)^{ - q} ,\quad y \ge 0, \quad (2) $$
where \(c\) and \(q\) are greater than one and are shape parameters that govern the skewness and kurtosis of the Burr distribution, respectively. There is a relationship between a Burr variable, Y, and any random variable X. For more details, see, for example, Burr (1942, 1973) and Chen (2003). Assuming that the random variables \(X\) and \(Y\) have the same skewness and kurtosis, the sample mean can be defined by
where \(\bar{X}\) and \(s_{x}\) represent the sample mean and standard deviation of the data set, respectively, and \(M\) and \(S\) represent the mean and standard deviation of the corresponding BTXII distribution with different shapes. Tables of the expected mean, standard deviation, skewness coefficient and kurtosis coefficient of the Burr distribution for various combinations of BTXII parameters \(c\) and \(q\) are given in Burr (1942, 1973).
The basic synthetic BTXII \(\bar{X}\) scheme signals when a nonconforming sample plots above (or below) the UCL (LCL) of the BTXII \(\bar{X}\) sub-chart and \(CRL \le H\).
The basic synthetic BTXII \(\bar{X}\) scheme operates as follows:
- 1.
At the ith sampling time, take a sample of size n and compute \(\bar{X}_{i}\).
- 2.
If \({\textit{LCL}} < \bar{X}_{i} < {\textit{UCL}}\), then return to Step (1).
- 3.
However, if \(\bar{X}_{i} \le {\textit{LCL}}\) or if \(\bar{X}_{i} \ge {\textit{UCL}}\), go to Step (4).
- 4.
If CRL ≤ H go to Step (5), otherwise return to Step (1).
- 5.
Issue an OOC signal, and then take necessary corrective actions to find and remove the special cause(s). Then return to Step (1).
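The five steps above can be sketched as follows (a hypothetical illustration, not code from the paper; the head-start feature treats the sample at time 0 as nonconforming):

```python
def basic_synthetic_signal(xbar_stream, lcl, ucl, H):
    """Run the basic synthetic scheme on a stream of sample means.
    Returns the index of the first OOC signal, or None. The head-start
    feature treats the (virtual) sample at time 0 as nonconforming."""
    since_nc = 0                      # samples since the last nonconforming one
    for i, xbar in enumerate(xbar_stream):
        since_nc += 1
        if lcl < xbar < ucl:          # Step 2: conforming sample
            continue
        if since_nc <= H:             # Steps 3-4: nonconforming and CRL <= H
            return i                  # Step 5: OOC signal
        since_nc = 0                  # CRL > H: reset and keep monitoring
    return None
```

With H = 1, a lone nonconforming sample never signals after the head-start window; two nonconforming samples at most H apart do.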
The CRL is inversely related to the fraction nonconforming in a process, p: the CRL decreases as p increases and increases as p decreases. Note that the CRL is a geometric random variable. Therefore, the expected value of the CRL, i.e. \(E\;({\textit{CRL}})\), and the cdf of the CRL, \(F\;({\textit{CRL}})\), are given by

$$ E\left( {{\textit{CRL}}} \right) = \frac{1}{p} \quad (4) $$

and

$$ F\left( {{\textit{CRL}}} \right) = 1 - \left( {1 - p} \right)^{{\textit{CRL}}} ,\quad {\textit{CRL}} = 1,2, \ldots , \quad (5) $$
respectively. To detect an upward shift in p, it is recommended to set an LCL, say H, for the CRL. If \({\textit{CRL}} \le H\), then there is sufficient evidence that p has increased. Therefore, the CRL sub-chart gives an OOC signal when \({\textit{CRL}} \le H\). At this stage, the average number of CRLs required to detect an OOC fraction nonconforming p is given by

$$ {\textit{ARL}}_{{\textit{CRL}}} = \frac{1}{{F\left( H \right)}} = \frac{1}{{1 - \left( {1 - p} \right)^{H} }}, $$
where p is the probability of declaring a sample nonconforming, which is given by
When δ = 0, the process is in-control.
Thus, the \({\textit{ARL}}\) of the basic synthetic scheme is computed as follows

$$ {\textit{ARL}} = \frac{1}{p} \times \frac{1}{{1 - \left( {1 - p} \right)^{H} }}, \quad (7) $$
where p is given by Eq. (6).
To measure the overall performance of the basic synthetic scheme, the average extra quadratic loss (AEQL) is used. Therefore, using Eqs. (6) and (7), the AEQL of the basic synthetic chart is defined by

$$ {\textit{AEQL}} = \frac{1}{{\delta_{\hbox{max} } - \delta_{\hbox{min} } }}\int_{{\delta_{\hbox{min} } }}^{{\delta_{\hbox{max} } }} {\delta^{2} \;{\textit{ARL}}\left( \delta \right)\;{\text{d}}\delta } . \quad (8) $$
When comparing the overall performance of two or several monitoring schemes, the scheme with the smallest (or minimum) \({\textit{AEQL}}\) value is considered to be the best.
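As a numerical sketch (our own illustration, not code from the paper), the synthetic-chart ARL of Eq. (7) and the AEQL can be computed for any probability p(δ) of declaring a sample nonconforming:

```python
def synthetic_arl(p, H):
    """Zero-state ARL of the basic synthetic scheme with fraction
    nonconforming p and CRL limit H (standard closed form)."""
    return 1.0 / (p * (1.0 - (1.0 - p) ** H))

def aeql(arl_of_delta, d_min, d_max, steps=2000):
    """AEQL = (1/(d_max - d_min)) * integral of delta^2 * ARL(delta),
    approximated here with the trapezoidal rule."""
    h = (d_max - d_min) / steps
    total = 0.0
    for j in range(steps + 1):
        d = d_min + j * h
        weight = 0.5 if j in (0, steps) else 1.0
        total += weight * d * d * arl_of_delta(d)
    return total * h / (d_max - d_min)
```

Here `arl_of_delta` would be built from the scheme's own probability of a nonconforming sample [Eq. (6)]; the scheme with the smallest AEQL value is preferred.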
Operation and design consideration of the NSS and side-sensitive synthetic schemes for non-normal data
In this section, necessary notations are introduced and mathematical foundations of synthetic schemes are presented under the violation of the assumption of normality. These mathematical foundations are later on used to derive the run-length properties of the proposed synthetic schemes using a Markov chain approach.
The operation of the proposed synthetic schemes is given in Table 1.
Before we construct the transition probability matrices (TPMs) of the synthetic BTXII \(\bar{X}\) schemes, it is important to define the probability that a plotting statistic falls in a specific region. Table 2 gives the probability that a sample mean, \(\bar{X}\), falls in a specific region of two-sided NSS, SSS, RSS and MSS synthetic BTXII Shewhart \(\bar{X}\) schemes.
TPMs for the proposed synthetic schemes
To construct the TPMs of the proposed synthetic schemes, the Markov chain approach is used to construct the compound patterns that result in an OOC event. For instance, each of the four digits 1, 2, 3 and 4 of an MSS synthetic scheme indicates the state of a test sample. The symbol ‘\(\pm\)’ indicates that at time \(t = 0\), the first charting (or plotting) statistic lies either above the UCL or below the LCL. Therefore, the sequence of charting statistics ‘\(423\)’ of an MSS synthetic scheme indicates that, in a sequence of three consecutive test samples, the first is a lower nonconforming sample (i.e. the charting statistic falls on or below the LCL), the second is an upper conforming sample (i.e. the charting statistic falls between the CL and UCL) and the third is a lower conforming sample (i.e. the charting statistic falls between the LCL and CL). The sequence of charting statistics ‘\(\pm \;33\)’ indicates that the first charting statistic falls either above the UCL (region 1) or below the LCL (region 4), and the second and third fall between the LCL and CL (region 3).
The compound patterns have ω sequences (or elements), each having \(H\) or \(H + 1\) states. For instance, when \(H = 2\), the absorbing states of the NSS and MSS synthetic schemes (denoted by \(\varLambda\)) are given by \(\left\{ {{\text{OO}},\;{\text{OIO}}} \right\}\) and \(\left\{ {121, \;11, \;44, \;434, \; \pm \;1, \; \pm \;4,\; \pm \;21, \; \pm \;34} \right\}\), respectively. The elements of the absorbing state are denoted by \(\varLambda_{1} ,\;\varLambda_{2} , \ldots ,\varLambda_{\omega }\). To evaluate the zero-state run-length (\({\text{ZSRL}}\)) properties of the proposed synthetic schemes, we decompose the absorbing (or compound) pattern \(\varLambda\) into simple transient sub-patterns, denoted by \(\eta\), of size ς by removing the last state of each element, which means \(\eta = \{ \eta_{1} ,\;\eta_{2} , \ldots ,\eta_{\varsigma } \}\). In our example, the simple transient sub-patterns of the NSS and MSS are given by \(\left\{ {{\text{O}},{\text{OI}}} \right\}\) and \(\left\{ {12, \;1,\; 4,\; 43,\; \pm , \; \pm \;2,\; \pm \;3} \right\}\), respectively. Afterwards, we create dummy states, denoted \(\phi\), which are defined by \(\left\{ {\text{I}} \right\}\) and \(\left\{ {2,\;3} \right\}\) for the NSS and MSS, respectively. Finally, the state space, denoted by Ω, is the set of all the components. The state space of the NSS and MSS synthetic schemes is given by \(\{ \phi , \;\eta_{1} ,\;\eta_{2} , {\text{OOC}}\}\) and \(\{ \eta_{1} ,\;\eta_{2} ,\; \phi ,\; \eta_{3} ,\;\eta_{4} ,\;\eta_{5} , \eta_{6} ,\;\eta_{7} , \;{\text{OOC}}\}\), respectively, where \(\eta_{5} = \varphi = \{ \pm \}\), \(\eta_{6} = \varphi_{2} = \{ \pm \;2\}\) and \(\eta_{7} = \varphi_{3} = \{ \pm \;3\}\). The state space of the SSS and RSS synthetic schemes is constructed in a similar way. Table 3 presents the decomposition of the TPMs state space of the proposed synthetic schemes.
When \(H = 1\), the TPM of the NSS synthetic scheme is given by

$$ {\mathbf{TPM}} = \left[ {\begin{array}{ccc} {1 - p} & p & 0 \\ {1 - p} & 0 & p \\ 0 & 0 & 1 \\ \end{array} } \right], \quad (9) $$

where the rows (and columns) correspond to the states \(\phi\), \(\eta_{1}\) and OOC, and \(p\) is the probability of a nonconforming sample.
The TPM of the SSS, RSS and MSS synthetic schemes is given by
In Eq. (10), for the MSS scheme, the probabilities that a charting statistic falls in a specific region are defined as follows:
\(p_{u} = p_{1}\) = probability that a charting statistic plots on or above the \({\textit{UCL}}\),
\(p_{i} = p_{2} + p_{3}\) = probability that a charting statistic falls between the \({\textit{LCL}}\) and the \({\textit{UCL}}\), and
\(p_{l} = p_{4}\) = probability that a charting statistic plots on or below the \({\textit{LCL}}\).
Consequently, \(p_{2} = p_{3} = \frac{{p_{i} }}{2}\). Using a look-forward approach, Table 3 yields the TPMs in Table 4 for \(H = 2\) and 3, where the probabilities are found using the equations in Table 2.
The construction of the TPMs is similar for any values of H. For any H > 0, the dimension of the TPMs in Table 4 is equal to \(\varsigma + 2\) where \(\varsigma\) is the number of sub-patterns in the compound pattern. Therefore,
For instance, when \(H = 2\), the TPMs of the NSS, SSS, RSS and MSS synthetic schemes are of size \(4 \times 4\), \(10 \times 10\), \(8 \times 8\) and \(9 \times 9\), respectively.
Table 5 gives the number of sub-patterns in the compound pattern and the dimension of the TPMs (in brackets) of the NSS, SSS, RSS and MSS synthetic schemes for H = 1, 2, 3, 4 and 5. It can be observed that when H = 1, the SSS, RSS and MSS synthetic schemes have the same number of sub-patterns in the compound pattern which means that the TPMs of the SSS, RSS and MSS synthetic schemes have the same dimension. The larger the value of H, the higher the dimension of the TPMs.
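As a numerical cross-check (our own sketch, with hypothetical function names), the NSS case can be built for general H from its 2-of-(H + 1) head-start representation, and the resulting zero-state ARL compared with the closed form \(1/\left[ p\left(1-(1-p)^{H}\right) \right]\):

```python
import numpy as np

def nss_zero_state_arl(p, H):
    """Zero-state ARL of the NSS synthetic scheme via its Markov chain.
    Transient states: j = 0..H-1 ("the last nonconforming point occurred,
    with j conforming points since") plus a 'safe' state (no nonconforming
    point in the last H samples). The head start begins the chain in state 0."""
    tau = H + 1                          # number of transient states
    Q = np.zeros((tau, tau))
    for j in range(H):
        Q[j, j + 1] = 1 - p              # a conforming sample ages the window
    Q[H, H] = 1 - p                      # the safe state stays safe
    Q[H, 0] = p                          # a nonconforming point opens a window
    xi = np.zeros(tau)
    xi[0] = 1.0                          # head start: as if a nonconforming point just occurred
    return xi @ np.linalg.solve(np.eye(tau) - Q, np.ones(tau))
```

For example, `nss_zero_state_arl(0.2, 1)` equals \(1/0.2^{2} = 25\), in agreement with the closed form.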
Run-length characteristics of the NSS and side-sensitive synthetic schemes
Once the TPM has been formulated, we may easily calculate any of the following run-length properties (see Fu and Lou 2003). Therefore, the expected value, probability mass function, cdf and the variance of the run-length distribution are given by

$$ E\left( N \right) = \xi_{1 \times \tau } \left( {{\mathbf{I}} - {\mathbf{Q}}} \right)^{ - 1} {\mathbf{1}}, \quad (12) $$

$$ P\left( {N = n} \right) = \xi_{1 \times \tau } {\mathbf{Q}}^{n - 1} \left( {{\mathbf{I}} - {\mathbf{Q}}} \right){\mathbf{1}},\quad n = 1,2, \ldots , \quad (13) $$

$$ P\left( {N \le n} \right) = 1 - \xi_{1 \times \tau } {\mathbf{Q}}^{n} {\mathbf{1}} \quad (14) $$

and

$$ Var\left( N \right) = \xi_{1 \times \tau } \left( {{\mathbf{I}} + {\mathbf{Q}}} \right)\left( {{\mathbf{I}} - {\mathbf{Q}}} \right)^{ - 2} {\mathbf{1}} - \left[ {E\left( N \right)} \right]^{2} , \quad (15) $$
respectively, where \({\mathbf{Q}}_{{\left( {\tau \times \tau } \right)}}\) is the sub-matrix of transient probabilities (i.e. the TPM with the row and column of the absorbing OOC state deleted) and \(\xi_{1 \times \tau }\) is the initial probability vector that depends on whether a zero-state or a steady-state mode of analysis is of interest. \({\mathbf{I}}_{{\left( {\tau \times \tau } \right)}}\) is a \(\tau \times \tau\) identity matrix and \({\mathbf{1}}_{{\left( {\tau \times 1} \right)}}\) is a \(\tau \times 1\) column vector of ones.
Note that the zero-state and steady-state modes of analysis are used to characterize the short-term and long-term run-length properties of a monitoring scheme, respectively. Koutras et al. (2007) analysed the run-length of runs-rules schemes based on probability-generating functions, whereas Low et al. (2012) designed runs-rules schemes using Eq. (14). Note that the \(E\left( N \right)\) defined in Eq. (12) is the most widely used performance metric in SPM, and it is denoted by \({\textit{ARL}}\) in this study.
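Given the transient sub-matrix \({\mathbf{Q}}\) and the initial vector \(\xi\), these run-length quantities can be computed directly with standard absorbing-Markov-chain identities (a sketch; the function names are our own):

```python
import numpy as np

def run_length_properties(xi, Q):
    """ARL (= E(N)) and variance of the run length N for an absorbing
    Markov chain with transient sub-matrix Q and initial vector xi."""
    tau = Q.shape[0]
    I, one = np.eye(tau), np.ones(tau)
    M = np.linalg.inv(I - Q)                   # fundamental matrix (I - Q)^{-1}
    arl = xi @ M @ one                         # expected run length E(N)
    var = xi @ (I + Q) @ M @ M @ one - arl ** 2
    return arl, var

def run_length_pmf(xi, Q, n):
    """P(N = n) = xi Q^{n-1} (I - Q) 1."""
    tau = Q.shape[0]
    I, one = np.eye(tau), np.ones(tau)
    return xi @ np.linalg.matrix_power(Q, n - 1) @ (I - Q) @ one
```

For a single transient state with Q = [[1 − p]], N is geometric, so E(N) = 1/p and Var(N) = (1 − p)/p², which the functions reproduce.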
Initial probabilities vectors
The \(\xi_{1 \times \tau } = {\mathbf{q}}_{1 \times \tau } = \left( {0\;1\;0\; \ldots \;0} \right)\) is the row vector of initial probabilities associated with the zero-state case; it has a one in the component corresponding to the state in which the monitoring scheme begins, and all other components equal to zero. For the SSS, RSS and MSS synthetic schemes, the initial state corresponds to the element of the TPM equal to ‘±’ (i.e. \(\varphi\)), whereas for the NSS synthetic scheme, it corresponds to the element with ‘O’.
The \(\xi_{1 \times \tau } = {\mathbf{s}}_{1 \times \tau }\) is the row vector of initial probabilities associated with the steady-state case, and its elements are non-zero. There are a number of methods that can be used to compute \({\mathbf{s}}_{1 \times \tau }\); this study focuses on one of the steady-state probability vector (SSPV) methods proposed by Champ (1992), which is defined by
where \({\mathbf{z}}\) is the \(\tau\) × 1 vector with \({\mathbf{z}}_{{\left( {\tau \times 1} \right)}} = \left( {{\mathbf{G}} - {\mathbf{Q^{\prime}}}} \right)^{ - 1} {\mathbf{e}}_{j}\) and the matrix \({\mathbf{G}}\) in Champ (1992) can be generalized as \({\mathbf{G}} = {\mathbf{e}}_{j} \cdot 1^{\prime} + {\mathbf{I}}_{\tau \times \tau }\) where \({\mathbf{e}}_{j}\) is the jth unit vector corresponding to \({\mathbf{e}}_{1}\) for the one-sided as well as the two-sided NSS synthetic scheme and \(j\) corresponds to the element of the TPM equal to ‘±’ for the two-sided SSS, RSS and MSS synthetic schemes. For more details, see “Appendix”.
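A sketch of this SSPV construction follows (our own code; the final normalization of \({\mathbf{z}}\) so that its entries sum to one is our assumption, and it reproduces the stationary distribution of the chain in which absorbed probability is restarted in state j):

```python
import numpy as np

def sspv(Q, j):
    """Steady-state probability vector following the construction above:
    z = (G - Q')^{-1} e_j with G = e_j 1' + I, then z normalized to sum to
    one (the normalization step is our assumption)."""
    tau = Q.shape[0]
    e_j = np.zeros(tau)
    e_j[j] = 1.0
    G = np.outer(e_j, np.ones(tau)) + np.eye(tau)
    z = np.linalg.solve(G - Q.T, e_j)
    return z / z.sum()
```

On a small two-state example, the resulting vector coincides with the stationary distribution of the restarted chain, which supports the normalization used here.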
Performance study
Performance of the two-sided NSS and side-sensitive BTXII \(\bar{X}\) synthetic schemes for different values of H
A monitoring scheme is designed such that when the process is IC, the \({\textit{ARL}}_{0}\) is set at some desirable level (or equivalently, the significance level is set at some standard value). For instance, for a significance level of 0.0027, 0.0020 and 0.0010 (or equivalently, \({\textit{ARL}}_{0} = 370.4\), 500 and 1000), the k-sigma limits of the basic design of the two-sided BTXII \(\bar{X}\) synthetic schemes are as given in Table 6 when H = 1, 2, 3, 4 and 5. For instance, when the \(\left( {M, S, c, q} \right)\) combination is given by (0.6447, 0.162, 4.8737, 6.1576), we found \(k =\) 1.94757, 2.01131 and 2.15251 so that the basic synthetic scheme yields an attained \({\textit{ARL}}_{0}\) value of 370.4, 500 and 1000, respectively. It can be observed that the value of \(k\) increases as the H value increases. Moreover, for a given H value, the value of \(k\) increases as the nominal \({\textit{ARL}}_{0}\) value increases. For a given nominal \({\textit{ARL}}_{0}\) value, the larger the value of H, the more efficient the BTXII \(\bar{X}\) synthetic scheme.
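The search for the charting constant k can be sketched as a bisection on the attained \({\textit{ARL}}_{0}\) (our own illustration; the normal-theory `p_of_k_normal` below is only a hypothetical stand-in, since the BTXII schemes compute the nonconforming probability from their own distribution):

```python
import math

def p_of_k_normal(k):
    """IC probability that a standardized sample mean falls beyond +-k
    (normal theory; a stand-in for the scheme's own probability)."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return 2.0 * (1.0 - Phi(k))

def solve_k(H, nominal_arl0, p_of_k, lo=0.5, hi=6.0):
    """Bisection for k so that the basic synthetic scheme attains the
    nominal in-control ARL (ARL0 is increasing in k)."""
    def arl0(k):
        p = p_of_k(k)
        return 1.0 / (p * (1.0 - (1.0 - p) ** H))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if arl0(mid) < nominal_arl0:
            lo = mid                  # limits too tight: widen them
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Consistent with the pattern reported in Table 6, the returned k increases with H and with the nominal \({\textit{ARL}}_{0}\).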
The design parameters found in Table 6 are used to assess the OOC performance of the proposed scheme for a nominal \({\textit{ARL}}_{0}\) of 370.4. In Tables 7, 8, 9 and 10, the results of the best scheme are in italic; when two or more columns are in italic, the schemes under consideration perform similarly. Table 7 gives the IC and OOC zero-state and steady-state performance of the proposed synthetic scheme when H = 1, 2, 3, 4 and 5 as well as the overall performance with \(\delta_{\hbox{min} } = 0\) and \(\delta_{\hbox{max} } = 2.5\). Table 6 shows that the proposed synthetic scheme is efficient for large values of H (Fig. 3a). The bigger (smaller) the magnitude of a shift, the more (less) sensitive the proposed scheme is. For large shifts, the \({\textit{ARL}}\) value converges towards 1. Figure 3b shows that the performance of the proposed synthetic scheme depends on the magnitude of the shift as well as other factors, such as the choice of the design parameters. The design parameters are chosen to minimise the \({\textit{AEQL}}\); the smaller the \({\textit{AEQL}}\), the more reliable the design parameters. Regardless of the magnitude of the shift, the higher the value of H, the more efficient the scheme becomes (see Fig. 3a, b).
Tables 8, 9 and 10 present, on the one hand, the zero-state and steady-state performance of the NSS, SSS, RSS and MSS BTXII \(\bar{X}\) synthetic schemes with \(\delta\) = 0 (0.2) 2 for H = 1, 2 and 3, respectively, when (\(M,S,n,c,q\)) = (0.5951, 0.1801, 5, 4, 6), referred to as “design 1”, and (\(M,S,n,c,q\)) = (0.6447, 0.162, 5, 4.8737, 6.1576), referred to as “design 2”. On the other hand, Tables 8, 9 and 10 give the overall performance of the proposed synthetic schemes for \(\delta_{\hbox{min} } = 0\) and \(\delta_{\hbox{max} } = 2.5\). From Table 8 it can be seen that when H = 1, the zero-state and steady-state performances of the SSS, RSS and MSS synthetic schemes are equivalent. This can also be seen from their TPMs, which are similar (see Eq. 10). For both designs 1 and 2, the side-sensitive schemes perform best. In terms of the overall performance, the proposed schemes perform better under design 1. From Tables 9 and 10 it can be observed that when H = 2 and 3, for both the zero-state and steady-state modes, the MSS scheme performs better for small to moderate mean shifts (\(0 < \delta < 1.5\)). However, for large shifts (\(\delta \ge 1.5\)), under the zero-state mode, all four schemes are equivalent (\({\text{ZSARL}}_{\delta } = 1\)) for both designs, whereas under the steady-state mode, for all four schemes, the \({\text{SSARL}}_{\delta }\) values are close to 2. In terms of the \({\textit{AEQL}}\) values, in zero-state mode, the MSS scheme performs best followed by the SSS scheme for H = 2, whereas when H = 3, the MSS scheme performs best followed by the RSS scheme.
Remarks 1
-
Unlike runs-rules schemes, synthetic schemes perform better in zero-state mode than in steady-state mode.
-
For large shifts, in zero-state mode, the \({\text{ZSARL}}\) values converge towards 1, whereas the \({\text{SSARL}}\) values are slightly smaller than 2.
Performance comparative study
In this section, the proposed schemes, that is, the NSS and side-sensitive synthetic BTXII \(\bar{X}\) schemes, are compared to the traditional (or classical) Shewhart-type \(\bar{X}\) counterparts using similar synthetic and runs-rules schemes (cf. Shongwe and Graham 2017a, 2018; Malela-Majika et al. 2018a, b). For a fair comparison, the competing schemes are investigated under symmetric (here we use the normal) and heavy-tailed distributions with a sample of size 5, (\(\delta_{\hbox{min} }, \;\delta_{\hbox{max} }\)) = (0, 2) and H = 3. Sherill and Johnson (2009) reported that schemes based on the Box–Cox and Johnson transformations would perform better when using non-normal data. Kilinc et al. (2012) showed that the Johnson \(S_{\text{B}}\) (i.e. bounded form) distribution presents attractive properties in building models. Therefore, the proposed BTXII \(\bar{X}\) synthetic schemes are also compared to the well-known \(\bar{X}\) schemes for non-normal data based on the Box–Cox and Johnson \(S_{\text{B}}\) transformations under both heavy-tailed and symmetric distributions when H = 3. Moreover, the proposed BTXII \(\bar{X}\) synthetic schemes are also compared to memory-type control schemes such as the cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) monitoring schemes.
The comparison of the proposed synthetic schemes and the well-known classical Shewhart \(\bar{X}\), \(\bar{X}\)-CUSUM and \(\bar{X}\)-EWMA schemes as well as the BTXII \(\bar{X}\)-CUSUM and \(\bar{X}\)-EWMA schemes is displayed in Fig. 4. To challenge Knoth (2016)’s claim about the NSS synthetic scheme, the proposed NSS, SSS, RSS and MSS synthetic schemes are compared to the classical and BTXII \(\bar{X}\)-CUSUM and \(\bar{X}\)-EWMA schemes. The comparison is done under symmetric and heavy-tailed distributions. Under symmetric distributions, and more precisely under the standard normal distribution, when the smoothing parameter λ of the classical \(\bar{X}\)-EWMA scheme is equal to 0.1 and 0.5, it is found that the optimal parameter L = 2.698 and 2.977, so that the attained ZSARL0 = 369.90 and 368.90, respectively, for a nominal ZSARL0 value of 370.4. Under heavy-tailed distributions, and more specifically under the GAM (1,1) distribution, the optimal parameters 2.698 and 2.977 yield ZSARL0 values of 271.40 and 77.20 when λ = 0.1 and 0.5, respectively. These results show that the \(\bar{X}\)-EWMA chart is not IC robust (as expected), because the attained ZSARL0 values of 271.40 and 77.20 are far from the nominal ZSARL0 value of 370.4. For the classical \(\bar{X}\)-CUSUM scheme, we found that a UCL value of 13.26 gives an attained ZSARL0 value of 369.5 under the N (0,1) distribution. However, under the GAM (1,1) distribution, when UCL = 13.26, the \(\bar{X}\)-CUSUM scheme yields an attained ZSARL0 value of 301.27, which shows that the classical \(\bar{X}\)-CUSUM scheme is not IC robust either (again, as expected).
Table 11 shows that in zero-state mode, under heavy-tailed distributions, both the proposed MSS BTXII \(\bar{X}\) synthetic scheme (introduced in this paper) and the MSS BTXII runs-rules \(\bar{X}\) schemes [proposed by Malela-Majika et al. (2018b)] outperform all other competing charts for small to moderate shifts. For large shifts, the proposed BTXII \(\bar{X}\) synthetic scheme, the BTXII \(\bar{X}\) improved runs-rules scheme and the Johnson \(S_{\text{B}}\)\(\bar{X}\) synthetic scheme perform better regardless of the type of design (i.e. NSS, SSS, RSS and MSS designs). In steady-state mode, for small to moderate shifts, the MSS BTXII \(\bar{X}\) synthetic and MSS BTXII \(\bar{X}\) runs-rules schemes outperform all competing charts. For large shifts, the SSS, RSS and MSS BTXII \(\bar{X}\) improved runs-rules schemes are superior to all other competing charts.
Under symmetric distributions (see Table 12), for both zero-state and steady-state modes, the classical MSS Shewhart \(\bar{X}\) runs-rules scheme and the MSS synthetic \(\bar{X}\) scheme combined with an \(\bar{X}\) chart [proposed by Shongwe and Graham (2016)] outperform all other charts for small to moderate shifts. For large shifts, in zero-state mode, these charts are equivalent to the proposed BTXII \(\bar{X}\) synthetic schemes, the classical synthetic \(\bar{X}\) schemes [proposed by Shongwe and Graham (2017a)], the Johnson \(S_{\text{B}}\) synthetic schemes and the Box–Cox \(\bar{X}\) synthetic schemes. However, in steady-state mode, the control charts proposed by Shongwe and Graham (2017a) outperform the competing charts.
From Fig. 4a, b, we can draw the following conclusions:
The proposed synthetic BTXII \(\bar{X}\) schemes outperform the traditional \(\bar{X}\) schemes.
The synthetic schemes are more sensitive in zero-state mode (i.e. they yield smaller AEQL values).
The proposed NSS scheme is less sensitive than the other schemes.
In general, as the value of H increases, the sensitivity of the synthetic BTXII \(\bar{X}\) schemes increases as well. However, a closer investigation shows that increasing the value of H does not always increase the sensitivity of the schemes. For instance, for the NSS scheme, moving from \(H =\) 2 to 3 decreases the sensitivity of the proposed NSS synthetic BTXII \(\bar{X}\) scheme, as shown by the AEQL value increasing from 33.13 to 34.83. Therefore, it is important to determine the value of H that maximizes the sensitivity of a synthetic scheme.
In zero-state mode, the proposed synthetic BTXII \(\bar{X}\) schemes perform best under the SSS and MSS schemes when H = 2. Under the steady-state mode, the MSS scheme performs best for H = 3.
Figure 4c, d yields the following findings:
Under symmetric distributions, when H = 1 and 2, the classical and BTXII \(\bar{X}\)-EWMA schemes outperform the NSS synthetic scheme for small values of λ under small and moderate shifts (see, for instance, Fig. 4c for λ = 0.1). When λ increases, the NSS synthetic scheme outperforms both the classical and BTXII \(\bar{X}\)-EWMA schemes regardless of the size of the mean shift (Fig. 4c when λ = 0.5).
Under heavy-tailed distributions, when H = 1 and 2, both the classical and BTXII \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes outperform the NSS synthetic scheme regardless of the value of λ for small and moderate shifts (Fig. 4d). For large shifts, the NSS \(\bar{X}\) synthetic scheme performs better than the classical and BTXII \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes.
Under symmetric distributions, when H = 1, the SSS, RSS and MSS synthetic schemes are equivalent and perform better than the classical and BTXII \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes regardless of the size of the shifts.
Under heavy-tailed distributions, the SSS, RSS and MSS synthetic schemes outperform the classical \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes for two reasons: (1) they are IC robust and (2) they yield small OOC ARL values. It can also be observed that the proposed SSS, RSS and MSS synthetic schemes are more sensitive than the BTXII \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes.
Under symmetric and heavy-tailed distributions, when H = 2, the proposed SSS, RSS and MSS \(\bar{X}\) synthetic schemes perform better than the classical \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes. In this case, the MSS scheme performs better than the SSS scheme and slightly better than the RSS scheme.
The BTXII \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes perform uniformly better than the classical \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes under symmetric and heavy-tailed distributions regardless of the size of the shift in the location parameter.
Illustrative example
In this section, a real-life example is given to illustrate the design and implementation of the proposed synthetic schemes using the dataset from Mahmoud and Aufy (2013) (see Table 13). The data represent the shaft diameter, which is expected to be around 7.995 millimetres (mm). To assess the production process, twenty-five samples were taken, each consisting of five items from the final production stage, and a goodness-of-fit test rejects normality for these data.
When H = 1, for both zero-state and steady-state modes, the control limits of the NSS and side-sensitive synthetic BTXII \(\bar{X}\) schemes are given by (\({\textit{LCL,}}\;{\textit{UCL}}\)) = (0.374, 0.6) and (0.38, 0.59), respectively. A plot of the charting statistics for H = 1 is shown in Fig. 5a. It can be seen that both the NSS and side-sensitive schemes signal for the first time on the fourth subgroup. When H = 2, the control limits of the NSS and MSS synthetic BTXII \(\bar{X}\) schemes are given by (\({\textit{LCL,}}\;{\textit{UCL}}\)) = (0.37, 0.61) and (0.38, 0.6), respectively. A plot of the charting statistics for H = 2 is shown in Fig. 5b. It can be seen that the MSS scheme signals for the first time on the seventh subgroup while the NSS scheme does not issue a signal at all. This shows the superiority of the MSS scheme over the NSS scheme.
Summary and recommendations
In this paper, synthetic \(\bar{X}\) schemes for non-normal data were proposed as alternatives to the classical Shewhart-type and synthetic \(\bar{X}\) schemes when the assumption of normality fails to hold. It was observed that the proposed schemes outperform the classical ones in many cases and present very interesting run-length characteristics under normal and non-normal distributions. Practitioners in industry and researchers are strongly encouraged to use the proposed schemes instead of the classical ones when the process is not stable or when there are doubts about the nature (or shape) of the underlying process distribution. For the steady-state mode, when small and moderate shifts are of interest, the recommendation is to use side-sensitive synthetic schemes regardless of the sample size and the value of H. The same recommendation holds for the zero-state mode with small and moderate shifts, regardless of the value of H.
It must be noted that the use of synthetic schemes with large values of H is not recommended in practice because, in most cases, the dimension of the TPM grows rapidly as H increases. The design (or construction) of such schemes then becomes cumbersome and sometimes unrealistic. Therefore, the recommendation is to use small values of H (say \(H \le 3\)), for which the schemes perform better.
The comparison of the proposed synthetic schemes with the \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes reveals that the SSS, RSS and MSS synthetic schemes outperform both the classical and BTXII \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes regardless of the size of the shift in the location parameter. The NSS synthetic scheme is inferior to the classical and BTXII \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes for small and moderate shifts in the location parameter; however, for large shifts, it performs better than these schemes. Therefore, we do not support Knoth's (2016) recommendation to discard synthetic schemes, since the SSS, RSS and MSS synthetic schemes have very attractive ARL and AEQL properties compared with the classical Shewhart \(\bar{X}\), \(\bar{X}\)-EWMA and \(\bar{X}\)-CUSUM schemes.
It must also be observed that the classical Shewhart \(\bar{X}\) schemes are not IC robust and present weaknesses in many situations. To address this problem, flexible schemes such as the BTXII Shewhart \(\bar{X}\) scheme and nonparametric schemes may be used.
In future work, we will consider the design of non-side-sensitive and side-sensitive synthetic Shewhart-type \(\bar{X}\) schemes combined with a basic \(\bar{X}\) scheme for non-normal data using the BTXII and Weibull distributions.
References
Bourke PD (1991) Detecting a shift in fraction nonconforming using run-length control charts with 100% inspection. J Qual Technol 23(3):225–238
Burr IW (1942) Cumulative frequency functions. Ann Math Stat 13(2):215–232
Burr IW (1973) Parameters for a general system of distributions to match a grid of α3 and α4. Commun Stat Theory Methods 2(1):1–21
Champ CW (1992) Steady-state run length analysis of a Shewhart quality control chart with supplementary runs rules. Commun Stat Theory Methods 21(3):765–777
Chen YK (2003) An evolutionary economic-statistical design for VSI \({\bar{\text{X}}}\) control charts under non-normality. Int J Adv Manuf Technol 22(7–8):602–610
Cheng CS, Cheng HP (2008) Identifying the source of variance shifts in the multivariate process using neural networks and support vector machines. Expert Syst Appl 35(1–3):198–206
Chinnam RB (2002) Support vector machines for recognising shifts in correlated and other manufacturing processes. Int J Prod Res 40(17):4449–4466
Davis RB, Woodall WH (2002) Evaluating and improving the synthetic control chart. J Qual Technol 34(2):200–208
Du S, Lv J (2013) Minimal Euclidean distance chart based on support vector regression for monitoring mean shifts of auto-correlated processes. Int J Prod Econ 141(1):377–387
Du S, Lv J, Xi L (2012) On-line classifying process mean shifts in multivariate control charts based on multiclass support vector machines. Int J Prod Res 50(22):6288–6310
Du S, Huang D, Lv J (2013) Recognition of concurrent control chart patterns using wavelet transform decomposition and multiclass support vector machines. Comput Ind Eng 66(4):683–695
Fu JC, Lou WYW (2003) Distribution theory of runs and patterns and its applications: a finite Markov chain imbedding approach. World Scientific Publishing, Singapore
Guo B, Wang BX, Cheng Y (2015) Optimal design of a synthetic chart for monitoring process dispersion with unknown in-control variance. Comput Ind Eng 88:78–87
Gupta V, Jain R, Meena ML, Dangayach GS (2018) Six-sigma application in tire-manufacturing company: a case study. J Ind Eng Int 14(3):511–520
Huang HJ, Chen FL (2005) A synthetic control chart for monitoring process dispersion with sample standard deviation. Comput Ind Eng 49(2):221–240
Kilinc K, Celik AO, Tuncan M, Tuncan A, Arslan G, Arioz O (2012) Statistical distributions of in situ microcore concrete strength. Constr Build Mater 26(1):393–403
Knoth S (2016) The case against the use of synthetic control charts. J Qual Technol 48(2):178–195
Koutras MV, Bersimis S, Maravelakis PE (2007) Statistical process control using Shewhart control charts with supplementary runs rules. Methodol Comput Appl Probab 9(2):207–224
Lee MH, Khoo MBC (2017) Synthetic double sampling S chart. Commun Stat Theory Methods 46(12):5914–5931
Low CK, Khoo MBC, Teoh WL, Wu Z (2012) The revised m-of-k runs rule based on median run length. Commun Stat Simul Comput 41(8):1463–1477
Machado MAG, Costa AFB (2014) A side-sensitive synthetic chart combined with an \({\bar{\text{X}}}\) chart. Int J Prod Res 52(11):3404–3416
Mahmoud MA, Aufy SA (2013) Process capability evaluation for a non-normal distributed one. Eng Technol J 31(17):2345–2358
Malela-Majika JC, Kanyama BJ, Rapoo EM (2018a) Improved Shewhart-type \({\bar{\text{X}}}\) control schemes under non-normality assumption: a Markov chain approach. Int J Qual Res 12(1):17–42
Malela-Majika JC, Malandala SK, Graham MA (2018b) Shewhart \({\bar{\text{X}}}\) control schemes with supplementary 2-of-(h + 1) side-sensitive runs-rules under the Burr-type XII distribution. Qual Reliab Eng Int 34(8):1800–1817
Montgomery DC (2013) Introduction to statistical quality control, 8th edn. Wiley, New York
Mortaji STH, Noori S, Noorossana R, Bagherpour M (2017) An ex ante control chart for project monitoring using earned duration management observations. J Ind Eng Int. https://doi.org/10.1007/s40092-017-0251-5
Sherill RW, Johnson LA (2009) Calculated decisions. Qual Prog 42(1):30–35
Shongwe SC, Graham MA (2016) On the performance of Shewhart-type synthetic and runs-rules charts combined with an \({\bar{\text{X}}}\) chart. Qual Reliab Eng Int 32(4):1357–1379
Shongwe SC, Graham MA (2017a) Synthetic and runs-rules charts combined with an \({\bar{\text{X}}}\) chart: theoretical discussion. Qual Reliab Eng Int 33(1):7–35
Shongwe SC, Graham MA (2017b) Some theoretical comments regarding the run-length properties of the synthetic and runs-rules monitoring schemes-part 1: zero-state. Qual Technol Quant Manag. https://doi.org/10.1080/16843703.2017.1389141
Shongwe SC, Graham MA (2017c) Some theoretical comments regarding the run-length properties of the synthetic and runs-rules monitoring schemes-part 2: steady-state. Qual Technol Quant Manag 1:1. https://doi.org/10.1080/16843703.2017.1389142
Shongwe SC, Graham MA (2018) A modified side-sensitive synthetic chart to monitor the process mean. Qual Technol Quant Manag 15(3):328–353
Wooluru Y, Swamy DR, Nagesh P (2016) Process capability estimation for non-normally distributed data using robust methods—a comparative study. Int J Qual Res 10(2):407–420
Wu Z, Spedding TA (2000a) A synthetic control chart for detecting small shifts in the process mean. J Qual Technol 32(1):32–38
Wu Z, Spedding TA (2000b) Implementing synthetic control charts. J Qual Technol 32(1):75–78
Wu Z, Wang ZJ, Jiang W (2010) A generalized conforming run length control chart for monitoring the mean of a variable. Comput Ind Eng 59(2):185–192
Zakour SB, Taleb H (2017) Endpoint in plasma etch process using new modified w-multivariate charts and windowed regression. J Ind Eng Int 13(3):307–322
Acknowledgements
The authors thank Mr Sandile Shongwe for his valuable comments, which helped to improve this article. The authors also thank the University of South Africa for its support. Marien Graham’s research was funded by the National Research Foundation (NRF) [reference: PR_IFR190111407337, UID: 114814].
Author information
Authors and Affiliations
Corresponding author
Appendix: TPMs, zero-state and steady-state probability vectors of the NSS and side-sensitive synthetic schemes
This appendix explains how the Markov chain approach is used to construct the TPMs of the proposed synthetic schemes. It also explains how to find the initial probability vectors of the proposed synthetic schemes by giving the steps that lead to the zero-state and steady-state probability vectors, denoted ZSPV and SSPV, respectively.
TPMs of the synthetic schemes
TPMs of the SSS synthetic schemes
Let ±, \(U\), \(I\) and \(L\) represent the states of four different test samples of an SSS synthetic scheme. The symbol “±” indicates that at time \(t = 0\), the plotting statistic of the first sample falls either above the UCL or below the LCL (Fig. 2b). The second sample is upper nonconforming (i.e. its plotting statistic plots above the UCL), the third is conforming (i.e. its plotting statistic plots between the LCL and UCL) and the fourth is lower nonconforming (i.e. its plotting statistic plots on or below the LCL). The compound (or absorbing) patterns of the SSS synthetic schemes for \(H = 1\), 2 and 3 are obtained as follows:
Step 1 List all the absorbing patterns, \(\varLambda\), given by
$$\begin{aligned} \varLambda & = \left\{ {\varLambda_{1} = \left\{ {UU} \right\},\;\varLambda_{2} = \left\{ {LL} \right\}, \;\varLambda_{3} = \left\{ { \pm U} \right\},\;\varLambda_{4} = \left\{ { \pm L} \right\}} \right\}\quad {\text{for}}\;H = 1 \\ \varLambda & = \left\{ {\varLambda_{1} = \left\{ {\text{ULU}} \right\},\;} \right.\varLambda_{2} = \left\{ {\text{UIU}} \right\},\varLambda_{3} = \left\{ {\text{UU}} \right\},\;\varLambda_{4} = \left\{ {\text{LL}} \right\}, \;\varLambda_{5} = \left\{ {\text{LIL}} \right\},\;\varLambda_{6} = \left\{ {LUL} \right\},\;\varLambda_{7} = \left\{ { \pm U} \right\},\;\varLambda_{8} \\ & \left. { = \left\{ { \pm L} \right\},\; \varLambda_{9} = \left\{ { \pm IU} \right\},\;\varLambda_{10} = \left\{ { \pm IL} \right\}} \right\}\quad {\text{for}}\;H = 2 \\ \varLambda & = \left\{ {\varLambda_{1} = \left\{ {UIIU} \right\},\;\varLambda_{2} = \left\{ {UILU} \right\},\;\varLambda_{3} = \left\{ {ULIU} \right\},\;\varLambda_{4} = \left\{ {ULU} \right\},\;\varLambda_{5} = \left\{ {UIU} \right\},\;\varLambda_{6} = \left\{ {UU} \right\},\;\varLambda_{7} = \left\{ {LL} \right\},\;\varLambda_{8} } \right. \\ & = \left\{ {LIL} \right\},\;\varLambda_{9} = \left\{ {LUL} \right\},\;\varLambda_{10} = \left\{ {LUIL} \right\},\;\varLambda_{11} = \left\{ {LIUL} \right\},\;\varLambda_{12} = \left\{ { \pm IIL} \right\},\;\varLambda_{13} = \left\{ {LIIL} \right\},\;\varLambda_{14} = \left\{ { \pm U} \right\}, \varLambda_{15} \\ & \left. { = \left\{ { \pm L} \right\}, \varLambda_{16} = \left\{ { \pm IU} \right\}, \varLambda_{17} = \left\{ { \pm IL} \right\}, \varLambda_{18} = \left\{ { \pm IIU} \right\}} \right\}\quad {\text{for}}\;H = 3 \\ \end{aligned}$$(17)Step 2: Create the dummy state \(\phi\) which is defined by the single IC state given by {I} for any value of \(H\). Thus, the dummy state is defined by
$$\begin{aligned} \phi & = \eta_{2} = \left\{ I \right\}\;{\text{for}}\;H = 1 \\ \phi & = \eta_{4} = \left\{ I \right\} \;{\text{for}}\;H = 2 \\ \phi & = \eta_{7} = \left\{ I \right\}\;{\text{for}}\;H = 3 \\ \end{aligned}$$(18)Therefore, \(\phi = \left\{ I \right\}\) for any value of \(H\).
Step 3 Decompose each element in the absorbing patterns given in Eq. (17) into its basic states by removing the last state.
$$\begin{aligned} \varLambda & = \left\{ {\eta_{1} = \left\{ U \right\},\;\eta_{3} = \left\{ L \right\}, \varphi = \left\{ \pm \right\}} \right\}\quad {\text{for}}\;H = 1 \hfill \\ \varLambda & = \left\{ {\eta_{1} = \left\{ {UL} \right\},\;\eta_{2} = \left\{ {UI} \right\},\;\eta_{3} = \left\{ U \right\}, \;\eta_{5} = \left\{ L \right\},\; \eta_{6} = \left\{ {LI} \right\},\;\eta_{7} = \left\{ {LU} \right\},\;\varphi = \left\{ \pm \right\},\; \varphi_{I} = \left\{ { \pm I} \right\}} \right\}\quad {\text{for}}\;H = 2 \hfill \\ \varLambda & = \left\{ {\eta_{1} = \left\{ {UII} \right\},\;\eta_{2} = \left\{ {UIL} \right\},\;\eta_{3} = \left\{ {ULI} \right\},\;\eta_{4} = \left\{ {UL} \right\},\;\eta_{5} = \left\{ {UI} \right\},\;\eta_{6} = \left\{ U \right\},\;\eta_{8} = \left\{ L \right\},\;\eta_{9} = \left\{ {LI} \right\},\;\eta_{10} } \right. \hfill \\ & \left. { = \left\{ {LU} \right\},\;\eta_{11} = \left\{ {LUI} \right\},\;\eta_{12} = \left\{ {LIU} \right\},\;\eta_{13} = \left\{ {LII} \right\},\;\varphi = \left\{ \pm \right\},\;\varphi_{I} = \left\{ { \pm I} \right\},\;\varphi_{II} = \left\{ { \pm II} \right\}} \right\}\quad {\text{for}}\;H = 3 \hfill \\ \end{aligned}$$(19)Step 4 Denote by “OOC” the set of OOC (absorbing) states given in Eq. (17). For example, for \(H = 2\), the set of OOC states is given by
$${\mathbf{OOC}} = \{ ULU,\;UIU,\;UU,\;LL, \;LIL,\;LUL,\; \pm U,\; \pm L, \; \pm \;IU,\; \pm \;IL\} .$$Step 5 Combine the states in Steps 2 to 4 to get the state space Ω. Therefore, the state space of the SSS synthetic schemes is given by
$$\begin{aligned} \left\{ {\eta_{1} ;\;\phi ;\;\eta_{3} ,\; \varphi ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 1 \hfill \\ \left\{ {\eta_{1} ,\;\eta_{2} ,\;\eta_{3} ;\;\phi ;\;\eta_{5} ,\;\eta_{6} ,\;\eta_{7} ,\;\varphi , \;\varphi_{I} ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 2 \hfill \\ \left\{ {\eta_{1} ,\;\eta_{2} ,\;\eta_{3} ,\;\eta_{4} ,\;\eta_{5} ,\;\eta_{6} ;\;\phi ;\;\eta_{8} ,\;\eta_{9} ,\;\eta_{10} ,\;\eta_{11} ,\;\eta_{12} ,\;\eta_{13} ,\; \varphi ,\; \varphi_{I} ,\; \varphi_{II} ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 3 \hfill \\ \end{aligned}$$(20)Step 6 Construct the TPMs of the proposed SSS synthetic schemes. For instance, when \(H = 2\), the TPM of the SSS synthetic scheme is constructed as follows (Table 14):
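Steps 1 and 3 reduce to a simple operation on the pattern strings: truncate each absorbing pattern by its last symbol and discard duplicates. A minimal sketch, using the SSS absorbing patterns of Eq. (17) for H = 2:

```python
def basic_states(absorbing_patterns):
    """Step 3: decompose each absorbing pattern into its basic (transient)
    state by removing the last symbol, keeping only the first occurrence
    of each distinct prefix."""
    seen = []
    for pattern in absorbing_patterns:
        prefix = pattern[:-1]
        if prefix not in seen:
            seen.append(prefix)
    return seen

# Absorbing patterns of the SSS synthetic scheme for H = 2 (Eq. (17))
patterns_h2 = ["ULU", "UIU", "UU", "LL", "LIL", "LUL", "±U", "±L", "±IU", "±IL"]
print(basic_states(patterns_h2))
# → ['UL', 'UI', 'U', 'L', 'LI', 'LU', '±', '±I'], matching Eq. (19)
```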
TPMs of the MSS synthetic schemes
Considering the MSS synthetic scheme, let \(Y_{i}\) (where \(i \ge 1\)) be a sequence of iid random variables taking values in the set \(\theta = \left\{ {1, 2, 3, 4} \right\}\) and let \(P\left( {Y_{i} = \theta } \right) = p_{\theta }\) (for \(1 \le \theta \le 4\)). Let digits 1 and 4 denote the upper and lower nonconforming states, respectively, while digits 2 and 3 denote the upper and lower conforming states (see Fig. 2c). Moreover, let the symbol “±” indicate that at time \(t = 0\), the first plotting statistic falls either above the UCL or below the LCL.
Let us now consider the case where H = 1, 2 and 3 for an MSS synthetic scheme using a forward approach. The Markov chain states of the proposed MSS synthetic scheme are obtained as follows:
Step 1 List all the absorbing patterns, \(\varLambda\), given by
$$\begin{aligned} \varLambda & = \left\{ {\varLambda_{1} = \left\{ {11} \right\},\;\varLambda_{2} = \left\{ { \pm 1} \right\}, \;\varLambda_{3} = \left\{ {44} \right\},\;\varLambda_{4} = \left\{ { \pm 4} \right\}} \right\}\quad {\text{for}}\;H = 1 \\ \varLambda & = \left\{ {\varLambda_{1} = \left\{ {121} \right\},\;\varLambda_{2} = \left\{ {11} \right\},\;\varLambda_{3} = \left\{ {44} \right\},\; \varLambda_{4} = \left\{ {434} \right\}, \;\varLambda_{5} = \left\{ { \pm 1} \right\},\;\varLambda_{6} = \left\{ { \pm 4} \right\},\;\varLambda_{7} = \left\{ { \pm 21} \right\},\;\varLambda_{8} = \left\{ { \pm 34} \right\}} \right\}\quad {\text{for}}\;H = 2 \\ \varLambda & = \left\{ {\varLambda_{1} = \left\{ {1221} \right\},\;\varLambda_{2} = \left\{ {121} \right\},\;\varLambda_{3} = \left\{ {11} \right\},\;\varLambda_{4} = \left\{ {44} \right\},\;\varLambda_{5} = \left\{ {434} \right\},\; \varLambda_{6} = \left\{ {4334} \right\},\;\varLambda_{7} = \left\{ { \pm 1} \right\},\;\varLambda_{8} } \right. \\ & \left. { = \left\{ { \pm 4} \right\}, \;\varLambda_{9} = \left\{ { \pm 21} \right\}, \;\varLambda_{10} = \left\{ { \pm 34} \right\}, \;\varLambda_{11} = \left\{ { \pm 221} \right\}, \;\varLambda_{12} = \left\{ { \pm 334} \right\}} \right\}\quad {\text{for}}\;H = 3 \\ \end{aligned}$$(21)Step 2 Create the dummy state \(\phi\) which is defined by the single IC state given by {2, 3} for any value of \(H\). Thus, the dummy state is defined by
$$\phi = \eta_{H + 1} = \left\{ {2,3} \right\} \forall H$$(22)Step 3 Decompose each element in the absorbing patterns given in Eq. (21) into its basic states by removing the last state.
$$\begin{aligned} \varLambda & = \left\{ {\eta_{1} = \left\{ 1 \right\},\;\eta_{3} = \left\{ 4 \right\}, \varphi = \left\{ \pm \right\}} \right\}\quad {\text{for}}\;H = 1 \\ \varLambda & = \left\{ {\eta_{1} = \left\{ {12} \right\},\;\eta_{2} = \left\{ 1 \right\},\;\eta_{4} = \left\{ 4 \right\},\;\eta_{5} = \left\{ {43} \right\},\;\varphi = \left\{ \pm \right\}, \;\varphi_{2} = \left\{ { \pm 2} \right\},\;\varphi_{3} = \left\{ { \pm 3} \right\}} \right\}\quad {\text{for}}\;H = 2 \\ \varLambda & = \left\{ {\eta_{1} = \left\{ {122} \right\},\;\eta_{2} = \left\{ {12} \right\},\;\eta_{3} = \left\{ 1 \right\},\;\eta_{5} = \left\{ 4 \right\},\;\eta_{6} = \left\{ {43} \right\}, \;\eta_{7} = \left\{ {433} \right\},\;\varphi = \left\{ \pm \right\}, \;\varphi_{2} = \left\{ { \pm 2} \right\},\;\varphi_{3} } \right. \\ & \left. { = \left\{ { \pm 3} \right\}, \;\varphi_{22} = \left\{ { \pm 22} \right\},\;\varphi_{33} = \left\{ { \pm 33} \right\}} \right\}\quad {\text{for}}\;H = 3 \\ \end{aligned}$$(23)Step 4 Denote by “OOC” the set of OOC (absorbing) states given in Eq. (21). For example, for \(H = 2\), the set of OOC states is given by
$${\mathbf{OOC}} = \left\{ {121,\;11,\;44, \;434,\; \pm \;1,\; \pm \;4, \; \pm \;21,\; \pm \;34} \right\}.$$Step 5 Combine the states in Steps 2 to 4 to get the state space Ω. Therefore, the state space of the MSS synthetic schemes is given by
$$\begin{array}{*{20}l} {\left\{ {\eta_{1} ;\;\phi ;\;\eta_{3} ,\; \varphi ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 1} \hfill \\ {\left\{ {\eta_{1} ,\;\eta_{2} ;\;\phi ;\;\eta_{4} ,\;\eta_{5} ,\; \varphi ,\; \varphi_{2} ,\; \varphi_{3} ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 2} \hfill \\ {\left\{ {\eta_{1} ,\;\eta_{2} ,\;\eta_{3} ;\;\phi ;\;\eta_{5} ,\;\eta_{6} ,\;\eta_{7} , \;\varphi , \;\varphi_{2} ,\; \varphi_{3} ,\; \varphi_{22} , \;\varphi_{33} ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 3} \hfill \\ \end{array}$$(24)Step 6 Construct the TPMs of the proposed MSS synthetic schemes. For instance, when \(H = 2\), the TPM of the MSS synthetic scheme is constructed as follows (Table 15):
Note that the RSS and NSS synthetic schemes can be constructed in a similar way. However, for the NSS synthetic scheme, the state at time \(t = 0\), “±”, is not considered.
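To illustrate the construction end to end, the sketch below assembles the essential (transient-state) TPM of the MSS synthetic scheme for H = 1 over the states {η1; φ; η3; ±} of Eq. (24) and evaluates the zero-state ARL with the standard formula \({\mathbf{q}}\left( {{\mathbf{I}} - {\mathbf{Q}}} \right)^{ - 1} {\mathbf{1}}\), where \({\mathbf{q}}\) is the ZSPV of Eq. (34). The standard normal plotting statistic and the symmetric limits used below are illustrative assumptions, not the BTXII design of the paper.

```python
import math
import numpy as np

def zone_probs(lcl, ucl, center=0.0):
    """Probabilities of the four MSS zones of Fig. 2c for a plotting statistic
    assumed standard normal here: 1 = above UCL, 2 = (center, UCL],
    3 = [LCL, center], 4 = below LCL."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return (1.0 - phi(ucl), phi(ucl) - phi(center),
            phi(center) - phi(lcl), phi(lcl))

def mss_zero_state_arl(lcl, ucl):
    """Essential TPM of the MSS synthetic scheme for H = 1 over the transient
    states {eta1; phi; eta3; +-} and zero-state ARL q (I - Q)^{-1} 1."""
    p1, p2, p3, p4 = zone_probs(lcl, ucl)
    pc = p2 + p3  # conforming zones 2 and 3 lead to the dummy state phi
    Q = np.array([
        [0.0, pc, p4, 0.0],   # eta1 = {1}: another 1 would absorb (pattern 11)
        [p1,  pc, p4, 0.0],   # phi  = {2, 3}
        [p1,  pc, 0.0, 0.0],  # eta3 = {4}: another 4 would absorb (pattern 44)
        [0.0, pc, 0.0, 0.0],  # +-   : any nonconforming sample absorbs
    ])
    q = np.array([0.0, 0.0, 0.0, 1.0])  # ZSPV of Eq. (34) for H = 1
    return float(q @ np.linalg.solve(np.eye(4) - Q, np.ones(4)))

arl0 = mss_zero_state_arl(lcl=-2.736, ucl=2.736)  # illustrative limits
print(round(arl0, 1))  # attained in-control zero-state ARL
```

Widening or tightening (LCL, UCL) lengthens or shortens the IC run length accordingly.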
Zero-state probability vector (ZSPV)
The vector \(\varvec{\xi}_{1 \times \tau } = {\mathbf{q}}_{1 \times \tau } = \left( {0\;1\;0 \ldots 0} \right)\) is the row vector of initial probabilities associated with the zero-state mode; it has a one in the component associated with the state in which the scheme begins and zeros in all other components. For the NSS synthetic scheme, this component corresponds to the element of the TPM equal to ‘O’ (i.e. \(\varvec{\eta}_{1}\)) (Fig. 2a).
ZSPV of the NSS synthetic scheme
The ZSPVs of the NSS scheme for H = 1, 2 and 3 are determined as follows:
Step 1 Define the state space
$$\begin{array}{*{20}l} {\left\{ {\phi ;\;\varvec{\eta}_{1} ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 1} \hfill \\ {\left\{ {\phi ;\;\varvec{\eta}_{1} ,\;\eta_{2} ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 2} \hfill \\ {\left\{ {\phi ;\;\varvec{\eta}_{1} ;\;\eta_{2 } ;\;\eta_{3} ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 3} \hfill \\ \end{array}$$(25)Step 2 From Eq. (25), remove the last state of the state space, corresponding to the OOC state, to find the essential TPM
$$\begin{array}{*{20}l} {\left\{ {\phi ;\;\varvec{\eta}_{1} } \right\}\quad {\text{for}}\;H = 1} \hfill \\ {\left\{ {\phi ;\;\varvec{\eta}_{1} ,\;\eta_{2} } \right\}\quad {\text{for}}\;H = 2} \hfill \\ {\left\{ {\phi ;\;\varvec{\eta}_{1} ;\;\eta_{2 } ;\;\eta_{3} } \right\}\quad {\text{for}}\;H = 3} \hfill \\ \end{array}$$(26)Step 3 Place a one in the component of Eq. (26) corresponding to \(\varvec{\eta}_{1}\) and zeros elsewhere to find the initial probability vectors \({\mathbf{q}}_{1 \times \tau }\), which are given by
$$\begin{array}{*{20}l} {\left( {0\;1} \right)\;{\text{for}}\;H = 1} \hfill \\ {\left( {0\;1\;0} \right)\;{\text{for}}\;H = \, 2} \hfill \\ {\left( {0\;1\;0\;0} \right)\;{\text{for}}\;H = \, 3} \hfill \\ \end{array}$$(27)From Eq. (27) we can see that for any value of H, the ZSPV of the NSS scheme is given by
$$\left( {0\;1\;0\; \ldots \;0\;0} \right)$$(28)
ZSPV of the side-sensitive synthetic schemes
For the SSS, RSS and MSS schemes, the initial state corresponds to the element of the TPM equal to ‘±’ (i.e. \(\varphi\)). Thus, the ZSPVs of the SSS, RSS and MSS schemes are determined as follows:
Step 1 Define the state space. For instance, for the RSS scheme, the state space for H = 1, 2 and 3 is given by
$$\begin{array}{*{20}l} {\left\{ {\eta_{1} ;\;\phi ;\;\eta_{2} ,\;\varvec{\varphi };\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 1} \hfill \\ {\left\{ {\eta_{1} ,\;\eta_{2} ;\;\phi ;\;\eta_{3} ,\;\eta_{4} ,\;\varvec{\varphi },\;\varphi_{I} ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 2} \hfill \\ {\left\{ {\eta_{1} ,\;\eta_{2} ,\;\eta_{3} ;\;\phi ;\;\eta_{4} ,\;\eta_{5} ,\;\eta_{6} ,\;\varvec{\varphi },\;\varphi_{I} ,\;\varphi_{II} ;\;{\text{OOC}}} \right\}\quad {\text{for}}\;H = 3} \hfill \\ \end{array}$$(29)Step 2 From Eq. (29), remove the last state of the state space, corresponding to the OOC state, to find the essential TPM
$$\begin{array}{*{20}l} {\left\{ {\eta_{1} ;\;\phi ;\;\eta_{2 } ,\;{\mathbf{\varphi }}} \right\}\quad {\text{for}}\;H = 1} \hfill \\ {\left\{ {\eta_{1} ,\;\eta_{2 } ;\;\phi ;\;\eta_{3} ,\;\eta_{4, } \varvec{\varphi },\;\varphi_{I} } \right\}\quad {\text{for}}\;H = 2} \hfill \\ {\left\{ {\eta_{1} ,\;\eta_{2} ;\; \eta_{3} ;\;\phi ;\;\eta_{4} ;\;\eta_{5} ,\;\eta_{6} ,\; \varvec{\varphi },\varphi_{I} ,\; \varphi_{II} } \right\}\quad {\text{for}}\;H = 3} \hfill \\ \end{array}$$(30)Step 3 Substitute one into Eq. (30) for \(\varvec{\varphi }\) and zero elsewhere to find the initial probability vectors \({\mathbf{q}}_{1 \times \tau }\) which are given by
$$\begin{array}{*{20}l} {\left( {0\;0\;0\;1} \right)\quad {\text{for}}\;H = 1} \hfill \\ {\left( {0\;0\;0\;0\;0\;1\;0} \right)\quad {\text{for}}\;H = 2} \hfill \\ {\left( {0\;0\;0\;0\;0\;0\;0\;1\;0\;0} \right)\quad {\text{for}}\;H = 3} \hfill \\ \end{array}$$(31)From Eq. (31) we can see that for any value of H, the ZSPV of the RSS scheme is given by
$$\left( {0\;0\;0 \ldots 1\;0 \ldots \;0} \right)$$(32)
Note that the number of zeros after the element corresponding to the initial state (i.e. the one) is equal to “\(H - 1\)” for the NSS, SSS and RSS schemes, whereas for the MSS scheme it is equal to “\(2H - 2\)”.
Following the same procedure, the ZSPVs of the SSS and MSS schemes are given as follows:
For the SSS scheme, the ZSPV is given by
$$\begin{array}{*{20}l} {\left( {0\;0\;0\;1} \right)\quad {\text{for}}\;H = 1} \hfill \\ {\left( {0\;0\;0\;0\;0\;0\;0\;1\;0} \right)\quad {\text{for}}\;H = 2} \hfill \\ {\left( {0\;0\;0\;0\;0\;0\;0\;0\;0\;0\;0\;0\;0\;1\;0\;0} \right)\quad {\text{for}}\;H = 3} \hfill \\ \end{array}$$(33)For the MSS scheme, the ZSPV is defined by
$$\begin{array}{*{20}l} {\left( {0\;0\;0\;1} \right)\quad {\text{for}}\;H = 1} \hfill \\ {\left( {0\;0\;0\;0\;0\;1\;0\;0} \right)\quad {\text{for}}\;H = 2} \hfill \\ {\left( {0\;0\;0\;0\;0\;0\;0\;1\;0\;0\;0\;0} \right)\quad {\text{for}}\;H = 3} \hfill \\ \end{array}$$(34)
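The note above can be codified: the sketch below generates the ZSPVs directly from H. The vector dimensions used (H + 1 for the NSS, (H + 1)² for the SSS, 3H + 1 for the RSS and 4H for the MSS scheme) are inferred from Eqs. (27), (31), (33) and (34) for H = 1, 2, 3, and are assumptions beyond that range.

```python
import numpy as np

def zspv(scheme, H):
    """Zero-state probability vector q. Dimensions are inferred from
    Eqs. (27), (31), (33) and (34) for H = 1, 2, 3: H + 1 (NSS),
    (H + 1)**2 (SSS), 3H + 1 (RSS) and 4H (MSS). The one marks the
    initial state and is followed by H - 1 zeros (NSS, SSS, RSS)
    or 2H - 2 zeros (MSS)."""
    tau = {"NSS": H + 1, "SSS": (H + 1) ** 2,
           "RSS": 3 * H + 1, "MSS": 4 * H}[scheme]
    trailing = 2 * H - 2 if scheme == "MSS" else H - 1
    one_at = 1 if scheme == "NSS" else tau - trailing - 1
    q = np.zeros(tau)
    q[one_at] = 1.0
    return q

print(zspv("MSS", 3))  # (0 0 0 0 0 0 0 1 0 0 0 0), matching Eq. (34)
```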
Steady-state probability vector (SSPV)
The vector \(\xi_{1 \times \tau } = {\mathbf{s}}_{1 \times \tau }\) is the row vector of initial probabilities associated with the steady-state mode; its elements are non-zero and sum to one (i.e. \(\sum \nolimits_{i} s_{i} = 1\)). There are a number of methods used to compute \({\mathbf{s}}_{1 \times \tau }\); in this study, we focus on the steady-state probability vector (SSPV) method proposed by Champ (1992), which is defined by
where \({\mathbf{z}}\) is the \(\tau \times 1\) vector with \({\mathbf{z}}_{{\left( {\tau \times 1} \right)}} = \left( {{\mathbf{G}} - {\mathbf{Q^{\prime}}}} \right)^{ - 1} {\mathbf{e}}_{j}\) and the matrix \({\mathbf{G}}\) in Champ (1992) can be generalized as \({\mathbf{G}} = {\mathbf{e}}_{j} \cdot 1^{\prime} + {\mathbf{I}}_{\tau \times \tau }\) where \({\mathbf{e}}_{j}\) is the jth unit vector corresponding to \({\mathbf{e}}_{1}\) for the one-sided as well as the two-sided NSS scheme and \(j\) corresponds to the element of the TPM equal to ‘±’ for the two-sided SSS, RSS and MSS schemes.
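As a worked sketch of this construction, the code below computes \({\mathbf{z}} = \left( {{\mathbf{G}} - {\mathbf{Q^{\prime}}}} \right)^{ - 1} {\mathbf{e}}_{j}\) with \({\mathbf{G}} = {\mathbf{e}}_{j} \cdot 1^{\prime} + {\mathbf{I}}\) for an NSS scheme with H = 2, then evaluates the usual steady-state ARL \({\mathbf{s}}\left( {{\mathbf{I}} - {\mathbf{Q}}} \right)^{ - 1} {\mathbf{1}}\). Two assumptions are flagged: the essential TPM is the standard non-side-sensitive synthetic-chart structure over the states {φ; η1; η2} (the paper's exact TPMs are in Tables 14 and 15), and \({\mathbf{z}}\) is normalized so that its elements sum to one, per the stated SSPV requirement.

```python
import numpy as np

def nss_essential_tpm(p_nc):
    """Essential TPM of an NSS synthetic scheme for H = 2 over the transient
    states {phi; eta1; eta2} of Eq. (26), where p_nc is the probability that
    a sample point plots outside (LCL, UCL). This is the standard
    non-side-sensitive synthetic-chart structure; the paper's exact TPMs
    are given in Tables 14 and 15."""
    pc = 1.0 - p_nc
    return np.array([
        [pc,  p_nc, 0.0],  # phi : a nonconforming sample starts a new count
        [0.0, 0.0,  pc ],  # eta1: a nonconforming sample now would signal
        [pc,  0.0,  0.0],  # eta2: a nonconforming sample now would signal
    ])

def champ_sspv(Q, j=0):
    """Champ's (1992) construction: z = (G - Q')^{-1} e_j with
    G = e_j 1' + I (e_1 for the NSS scheme, hence j = 0). The vector z is
    then normalized so that its elements sum to one (SSPV requirement)."""
    tau = Q.shape[0]
    e_j = np.zeros(tau)
    e_j[j] = 1.0
    G = np.outer(e_j, np.ones(tau)) + np.eye(tau)
    z = np.linalg.solve(G - Q.T, e_j)
    return z / z.sum()

p_nc = 0.005  # illustrative nonconforming probability
Q = nss_essential_tpm(p_nc)
s = champ_sspv(Q)
ss_arl = float(s @ np.linalg.solve(np.eye(3) - Q, np.ones(3)))
print(s.round(4), round(ss_arl, 1))
```

For the side-sensitive schemes, the same function applies with `j` set to the position of the ‘±’ element in the corresponding TPM.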
SSPV of the NSS synthetic scheme
The SSPV of the NSS scheme for H = 1, 2 and 3 is determined as follows:
Step 1 Define the jth unit vectors corresponding to \({\mathbf{e}}_{1}\), which are given by
$$\left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) \;{\text{for}}\;H = 1,\quad \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ \end{array} } \right)\;{\text{for}}\;H = 2\quad {\text{and}}\quad \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right)\;{\text{for}}\;H = 3$$(36)Step 2 Compute \(\varvec{G}\), which is defined by: \({\mathbf{e}}_{1} \cdot 1^{\prime } + {\mathbf{I}}_{\tau \times \tau }\). For H = 1, 2 and 3, \({\text{G}}\) is given by
$$\begin{aligned} G & = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & 1 \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 2 & 1 \\ 0 & 1 \\ \end{array} } \right)\quad {\text{for}}\;H = 1 \\ G & = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 & 1 & 1 \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 2 & 1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} } \right)\quad {\text{for}}\;H = 2 \\ G & = \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right) \left( {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ \end{array} } \right) + \left( {\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 2 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} } \right)\quad {\text{for}}\;H = 3 \\ \end{aligned}$$(37)Therefore, for any value of H, \(G\) is given by
$$\left( {\begin{array}{*{20}c} 2 & 1 & 1 & \ldots & 1 & 1 \\ 0 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & 1 & 0 \\ 0 & 0 & 0 & \ldots & 0 & 1 \\ \end{array} } \right)$$(38)Step 3 Compute \(\varvec{z}\), which is defined by: \(\left( {{\mathbf{G}} - {\text{Q}}^{\prime } } \right)^{ - 1} e_{1}\). For H = 1, 2 and 3, \(\varvec{z}\) is given by
$$\begin{aligned} \varvec{z} & = \left[ {\left( {\begin{array}{*{20}c} 2 & 1 \\ 0 & 1 \\ \end{array} } \right) - \left( {\begin{array}{*{20}c} {p_{i} } & {p_{i} } \\ {1 - p_{i} } & 0 \\ \end{array} } \right)} \right]^{ - 1} \left( {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\frac{1}{{p_{i}^{2} - 3p_{i} + 3}}} \\ {\frac{{1 - p_{i} }}{{p_{i}^{2} - 3p_{i} + 3}}} \\ \end{array} } \right)\quad {\text{for}}\;H = 1 \\ \varvec{z} & = \left[ {\left( {\begin{array}{*{20}c} 2 & 1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} } \right) - \left( {\begin{array}{*{20}c} {p_{i} } & 0 & {p_{i} } \\ {1 - p_{i} } & 0 & 0 \\ 0 & {p_{i} } & 0 \\ \end{array} } \right)} \right]^{ - 1} \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\frac{1}{{p_{i}^{3} - 2p_{i}^{2} - p_{i} + 3}}} \\ {\frac{{1 - p_{i} }}{{p_{i}^{3} - 2p_{i}^{2} - p_{i} + 3}}} \\ {\frac{{p_{i} \left( {1 - p_{i} } \right)}}{{p_{i}^{3} - 2p_{i}^{2} - p_{i} + 3}}} \\ \end{array} } \right)\quad {\text{for}}\;H = 2 \\ \varvec{z} & = \left[ {\left( {\begin{array}{*{20}c} 2 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} } \right) - \left( {\begin{array}{*{20}c} {p_{i} } & 0 & 0 & {p_{i} } \\ {1 - p_{i} } & 0 & 0 & 0 \\ 0 & {p_{i} } & 0 & 0 \\ 0 & 0 & {p_{i} } & 0 \\ \end{array} } \right)} \right]^{ - 1} \left( {\begin{array}{*{20}c} 1 \\ 0 \\ 0 \\ 0 \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\frac{1}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ {\frac{{1 - p_{i} }}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ {\frac{{p_{i} \left( {1 - p_{i} } \right)}}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ {\frac{{p_{i}^{2} \left( {1 - p_{i} } \right)}}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ \end{array} } \right)\quad {\text{for}}\;H = 3 \\ \end{aligned}$$(39)Step 4 Compute the SSPV, \(\varvec{s}\), using Eq. (35). Thus, for H = 1, 2 and 3, \(\varvec{s}\) is given by
$$\begin{aligned} \varvec{s} & = \left[ {\left( {\begin{array}{*{20}c} 1 & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\frac{1}{{p_{i}^{2} - 3p_{i} + 3}}} \\ {\frac{{1 - p_{i} }}{{p_{i}^{2} - 3p_{i} + 3}}} \\ \end{array} } \right)} \right]^{ - 1} \left( {\begin{array}{*{20}c} {\frac{1}{{p_{i}^{2} - 3p_{i} + 3}}} \\ {\frac{{1 - p_{i} }}{{p_{i}^{2} - 3p_{i} + 3}}} \\ \end{array} } \right) = \frac{1}{{2 - p_{i} }}\left( {\begin{array}{*{20}c} 1 \\ {1 - p_{i} } \\ \end{array} } \right)\quad {\text{for}}\;H = 1 \\ \varvec{s} & = \left[ {\left( {\begin{array}{*{20}c} 1 & 1 & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\frac{1}{{p_{i}^{3} - 2p_{i}^{2} - p_{i} + 3}}} \\ {\frac{{1 - p_{i} }}{{p_{i}^{3} - 2p_{i}^{2} - p_{i} + 3}}} \\ {\frac{{p_{i} \left( {1 - p_{i} } \right)}}{{p_{i}^{3} - 2p_{i}^{2} - p_{i} + 3}}} \\ \end{array} } \right)} \right]^{ - 1} \left( {\begin{array}{*{20}c} {\frac{1}{{p_{i}^{3} - 2p_{i}^{2} - p_{i} + 3}}} \\ {\frac{{1 - p_{i} }}{{p_{i}^{3} - 2p_{i}^{2} - p_{i} + 3}}} \\ {\frac{{p_{i} \left( {1 - p_{i} } \right)}}{{p_{i}^{3} - 2p_{i}^{2} - p_{i} + 3}}} \\ \end{array} } \right) = \frac{1}{{2 - p_{i}^{2} }}\left( {\begin{array}{*{20}c} 1 \\ {1 - p_{i} } \\ {p_{i} \left( {1 - p_{i} } \right)} \\ \end{array} } \right)\quad {\text{for}}\;H = 2 \\ \varvec{s} & = \left[ {\left( {\begin{array}{*{20}c} 1 & 1 & 1 & 1 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\frac{1}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ {\frac{{1 - p_{i} }}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ {\frac{{p_{i} \left( {1 - p_{i} } \right)}}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ {\frac{{p_{i}^{2} \left( {1 - p_{i} } \right)}}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ \end{array} } \right)} \right]^{ - 1} \left( {\begin{array}{*{20}c} {\frac{1}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ {\frac{{1 - p_{i} }}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ {\frac{{p_{i} \left( {1 - p_{i} } \right)}}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ {\frac{{p_{i}^{2} \left( {1 - p_{i} } \right)}}{{p_{i}^{4} - 2p_{i}^{3} - p_{i} + 3}}} \\ \end{array} } \right) = \frac{1}{{2 - p_{i}^{3} }}\left( {\begin{array}{*{20}c} 1 \\ {1 - p_{i} } \\ {p_{i} \left( {1 - p_{i} } \right)} \\ {p_{i}^{2} \left( {1 - p_{i} } \right)} \\ \end{array} } \right)\quad {\text{for}}\;H = 3 \\ \end{aligned}$$(40)Therefore, for any value of H, the SSPV is defined by
$$\frac{1}{{2 - p_{i}^{H} }}\left( {\begin{array}{*{20}c} 1 \\ {1 - p_{i} } \\ {p_{i} \left( {1 - p_{i} } \right)} \\ \vdots \\ {p_{i}^{H - 2} \left( {1 - p_{i} } \right)} \\ {p_{i}^{H - 1} \left( {1 - p_{i} } \right)} \\ \end{array} } \right)$$(41)
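The closed form in Eq. (41) can be verified numerically against the step-by-step Markov computation. The sketch below (Python/NumPy; the banded structure of \({\mathbf{Q^{\prime}}}\) is read off Eq. (39), and p = 0.95 is an arbitrary illustrative value):

```python
import numpy as np

def nss_sspv(p, H):
    """SSPV of the NSS scheme via Champ's method (Steps 1-4)."""
    tau = H + 1
    Qt = np.zeros((tau, tau))                     # transposed essential TPM Q'
    Qt[0, 0] = p
    Qt[0, H] = p
    Qt[1, 0] = 1 - p
    for k in range(2, tau):
        Qt[k, k - 1] = p                          # sub-diagonal structure, Eq. (39)
    e1 = np.zeros(tau); e1[0] = 1.0
    G = np.outer(e1, np.ones(tau)) + np.eye(tau)  # G = e_1 1' + I
    z = np.linalg.solve(G - Qt, e1)               # z = (G - Q')^{-1} e_1
    return z / z.sum()                            # s = (1'z)^{-1} z

def nss_sspv_closed(p, H):
    """Closed form, Eq. (41): (1, (1-p), p(1-p), ..., p^(H-1)(1-p)) / (2 - p^H)."""
    v = np.array([1.0] + [p**k * (1 - p) for k in range(H)])
    return v / (2 - p**H)

for H in (1, 2, 3):
    print(H, np.allclose(nss_sspv(0.95, H), nss_sspv_closed(0.95, H)))  # True for each H
```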
SSPV of the side-sensitive synthetic schemes
The SSPVs of the SSS, RSS and MSS synthetic schemes for H = 1, 2 and 3 are determined as follows:
Step 1 Define the jth unit vectors, \({\mathbf{e}}_{j}\), with a one in the position of the state \(\eta = \pm\) (i.e. \(\varphi\)). For instance, for the SSS scheme, the \({\mathbf{e}}_{j}\) vectors are given by
$$\left( {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ 1 \\ \end{array} } \right)\;{\text{for}}\;H = 1,\quad \left( {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ \end{array} } \right)\;{\text{for}}\;H = 2\quad {\text{and}}\quad \left( {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ \end{array} } \right)\;{\text{for}}\;H = 3$$(42)The \({\mathbf{e}}_{j}\) vectors of the RSS and MSS schemes can be found in a similar way.
Step 2 Compute \(\varvec{G}\), which is defined by: \({\mathbf{e}}_{j} \cdot 1^{\prime } + {\mathbf{I}}_{\tau \times \tau }\) for H = 1, 2 and 3 [see for example Eq. (37)].
Step 3 Compute \(\varvec{z}\), which is defined by: \(\left( {{\mathbf{G}} - {\mathbf{Q}}^{\prime } } \right)^{ - 1} {\mathbf{e}}_{j}\) for H = 1, 2 and 3 [see for example Eq. (39)].
Step 4 Compute the SSPV, \(\varvec{s}\), using Eq. (35). Thus, for H = 1, 2 and 3, the SSPVs of the SSS, RSS and MSS synthetic schemes are given by
For H = 1, the SSS, RSS and MSS synthetic schemes are equivalent (\({\text{SSS}} \equiv {\text{RSS}} \equiv {\text{MSS}}\)). Thus,
$$s = \varsigma_{0} \left( {\begin{array}{*{20}c} {\varsigma_{1} } \\ {\varsigma_{2} } \\ {\varsigma_{3} } \\ {\varsigma_{4} } \\ \end{array} } \right)$$(43)where
$$\begin{aligned} \varsigma_{0} & = \frac{1}{{1 - p_{l} p_{u} }},\;\varsigma_{1} = p_{u} p_{i} \left( {1 + p_{l} } \right),\;\varsigma_{2} = p_{i} \left( {1 + p_{l} p_{u} } \right),\;\varsigma_{3} = p_{l} p_{i} \left( {1 + p_{u} } \right)\;{\text{and}} \\ \varsigma_{4} & = 1 - p_{i} - p_{u} p_{i} - p_{u} p_{l} - p_{l} p_{i} - 3p_{u} p_{l} p_{i} , \\ \end{aligned}$$where the last term of \(\varsigma_{4}\) ensures that the elements of the SSPV sum to one. When H = 2:
1. The SSPV of the SSS synthetic scheme is given by
$$s = \varsigma_{0} \left( {\begin{array}{*{20}c} {\varsigma_{1} } \\ {\varsigma_{2} } \\ {\varsigma_{3} } \\ {\begin{array}{*{20}c} {\varsigma_{4} } \\ {\varsigma_{5} } \\ {\varsigma_{6} } \\ {\varsigma_{7} } \\ {\varsigma_{8} } \\ \end{array} } \\ {\varsigma_{9} } \\ \end{array} } \right)$$(44)where
$$\begin{aligned} \varsigma_{0} & = \frac{1}{{p_{l}^{3} p_{u}^{3} - p_{l}^{3} p_{u} + 2p_{l}^{2} p_{u}^{4} - 2p_{l}^{2} p_{u}^{3} - 2p_{l}^{2} p_{u}^{2} + 3p_{l}^{2} p_{u} + p_{l} p_{u}^{5} - 2p_{l} p_{u}^{4} + 4p_{l} p_{u}^{2} - 2p_{l} p_{u} + p_{u}^{3} - p_{u}^{2} + 1}} \\ \varsigma_{1} & = p_{u}^{2} p_{i}^{2} \left( {1 + p_{l} - p_{l}^{2} - 2p_{l} p_{u} + p_{l} p_{u}^{2} + p_{l}^{2} p_{u} } \right) \\ \varsigma_{2} & = p_{u} p_{i}^{3} \left( {1 + 2p_{l} - p_{l}^{2} - p_{l} p_{u} - p_{l} p_{u}^{2} + p_{l} p_{u}^{3} + p_{l}^{2} p_{u}^{2} } \right) \\ \varsigma_{3} & = p_{u} p_{i}^{2} \left( {1 + p_{l} - p_{l}^{2} - 2p_{l} p_{u} + p_{l} p_{u}^{2} + p_{l}^{2} p_{u} } \right) \\ & \quad {\text{or}}\;\varsigma_{3} = \frac{1}{{p_{u} }} \varsigma_{1} \\ \varsigma_{4} & = p_{i}^{2} \varsigma_{0}^{ - 1} \\ \varsigma_{5} & = p_{l} p_{i}^{2} \left( {1 + p_{u} - 2p_{u}^{2} + p_{u}^{3} - p_{l} p_{u} + p_{l} p_{u}^{2} } \right) \\ \varsigma_{6} & = p_{i}^{3} \left( {p_{l} + p_{u}^{2} + p_{l} p_{u} - p_{l} p_{u}^{2} - p_{l} p_{u}^{3} + p_{l} p_{u}^{4} - p_{l}^{2} p_{u} + p_{l}^{2} p_{u}^{3} } \right) \\ \varsigma_{7} & = p_{l} p_{u} p_{i}^{2} \left( {1 + p_{u} - 2p_{u}^{2} + p_{u}^{3} - p_{l} p_{u} + p_{l} p_{u}^{2} } \right) \\ & \quad {\text{or}}\;\varsigma_{7} = \frac{1}{{p_{u} }} \varsigma_{5} \\ \varsigma_{8} & = p_{u}^{2} + 2p_{l}^{2} - p_{l}^{3} + p_{l} p_{u} + 4p_{l} p_{u}^{2} - 2p_{l} p_{u}^{3} - 4p_{l} p_{u}^{4} + 4p_{l} p_{u}^{5} - p_{l} p_{u}^{6} + 3p_{l}^{2} p_{u} \\ & \quad - 8p_{l}^{2} p_{u}^{2} - 2p_{l}^{2} p_{u}^{3} + 8p_{l}^{2} p_{u}^{4} - 3p_{l}^{2} p_{u}^{5} - 4p_{l}^{3} p_{u} + 3p_{l}^{3} p_{u}^{2} + 4p_{l}^{3} p_{u}^{3} - 3p_{l}^{3} p_{u}^{4} + p_{l}^{4} p_{u} \\ & \quad - p_{l}^{4} p_{u}^{3} \\ \varsigma_{9} & = p_{u}^{2} + 2p_{l}^{2} - p_{u}^{3} - 3p_{l}^{3} + p_{l} p_{u} + 2p_{l} p_{u}^{2} - 6p_{l} p_{u}^{3} - 2p_{l} p_{u}^{4} + 8p_{l} p_{u}^{5} - 5p_{l} p_{u}^{6} + p_{l} p_{u}^{7} \\ & \quad - 15p_{l}^{2} p_{u}^{2} + 8p_{l}^{2} p_{u}^{3} + 14p_{l}^{2} 
p_{u}^{4} - 15p_{l}^{2} p_{u}^{5} + 4p_{l}^{2} p_{u}^{6} - 6p_{l}^{3} p_{u} + 15p_{l}^{3} p_{u}^{2} + 3p_{l}^{3} p_{u}^{3} \\ & \quad - 15p_{l}^{3} p_{u}^{4} + 6p_{l}^{3} p_{u}^{5} + p_{l}^{4} + 5p_{l}^{4} p_{u} - 4p_{l}^{4} p_{u}^{2} - 5p_{l}^{4} p_{u}^{3} + 4p_{l}^{4} p_{u}^{4} - p_{l}^{5} p_{u} + p_{l}^{5} p_{u}^{3} \\ \end{aligned}$$
2. The SSPV of the RSS synthetic scheme is given by
$$s = \varsigma_{0} \left( {\begin{array}{*{20}c} {\varsigma_{1} } \\ {\varsigma_{2} } \\ {\varsigma_{3} } \\ {\begin{array}{*{20}c} {\varsigma_{4} } \\ {\varsigma_{5} } \\ {\varsigma_{6} } \\ {\varsigma_{7} } \\ \end{array} } \\ \end{array} } \right)$$(45)where
$$\begin{aligned} \varsigma_{0} & = \frac{1}{{p_{l}^{3} p_{u} + 2p_{l}^{2} p_{u}^{2} - 4p_{l}^{2} p_{u} - p_{l} p_{u}^{3} - 4p_{l} p_{u}^{2} + 4p_{l} p_{u} - 1}} \\ \varsigma_{1} & = p_{u} p_{i}^{3} \left( {1 + 2p_{l} - p_{l} p_{u} - p_{l}^{2} } \right) \\ \varsigma_{2} & = p_{u} p_{i}^{2} \left( {1 + 2p_{l} - p_{l} p_{u} - p_{l}^{2} } \right) \\ & \quad {\text{or}}\;\varsigma_{2} = \frac{1}{{p_{i} }} \varsigma_{1} \\ \varsigma_{3} & = p_{i}^{2} \varsigma_{0}^{ - 1} \\ \varsigma_{4} & = p_{l} p_{i}^{2} \left( {1 + 2p_{u} - p_{l} p_{u} - p_{u}^{2} } \right) \\ \varsigma_{5} & = p_{l} p_{i}^{3} \left( {1 + 2p_{u} - p_{l} p_{u} - p_{u}^{2} } \right) \\ & \quad {\text{or}}\;\varsigma_{5} = p_{i} \varsigma_{4} \\ \varsigma_{6} & = 2p_{l}^{2} + 2p_{u}^{2} - p_{l}^{3} - p_{u}^{3} + p_{i}^{4} p_{u} + 3p_{l}^{3} p_{u}^{2} - 4p_{l}^{3} p_{u} + 3p_{l}^{2} p_{u}^{3} - 8p_{l}^{2} p_{u}^{2} + 3p_{l}^{2} p_{u} + p_{l} p_{u}^{4} - 4p_{l} p_{u}^{3} + 3p_{l} p_{u}^{2} \\ \varsigma_{7} & = p_{l}^{4} + p_{u}^{4} + 2p_{l}^{2} + 2p_{u}^{2} - 3p_{l}^{3} - 3p_{u}^{3} - p_{i}^{5} p_{u} - 4p_{l}^{4} p_{u}^{2} + 5p_{l}^{4} p_{u} \\ & \quad - 6p_{l}^{3} p_{u}^{3} + 15p_{l}^{3} p_{u}^{2} - 6p_{l}^{3} p_{u} - 4p_{l}^{2} p_{u}^{4} + 15p_{l}^{2} p_{u}^{3} + 14p_{l}^{2} p_{u}^{2} + p_{l}^{2} p_{u} - p_{l} p_{u}^{5} + 5p_{l} p_{u}^{4} - 6p_{l} p_{u}^{3} + p_{l} p_{u}^{2} \\ \end{aligned}$$
3. The SSPV of the MSS synthetic scheme is given by
$$s = \varsigma_{0} \left( {\begin{array}{*{20}c} {\varsigma_{1} } \\ {\varsigma_{2} } \\ {\varsigma_{3} } \\ {\begin{array}{*{20}c} {\varsigma_{4} } \\ {\varsigma_{5} } \\ {\varsigma_{6} } \\ {\varsigma_{7} } \\ {\varsigma_{8} } \\ \end{array} } \\ \end{array} } \right)$$(46)where
$$\begin{aligned} \varsigma_{0} & = \frac{1}{{1 - p_{1} p_{4} p_{2}^{2} - 2p_{1} p_{4} p_{2} - p_{1} p_{4} }} \\ \varsigma_{1} & = p_{1} p_{2}^{2} \left( {1 + 2p_{2} } \right)\left( {1 + p_{4} + p_{4} p_{2} } \right) \\ \varsigma_{2} & = p_{1} p_{2} \left( {1 + 2p_{2} } \right)\left( {1 + p_{4} + p_{4} p_{2} } \right) \\ & \quad {\text{or}}\;\varsigma_{2} = \frac{1}{{p_{2} }} \varsigma_{1} \\ \varsigma_{3} & = p_{2}^{2} \left( {p_{1} + p_{2} + 2p_{1} p_{2} - 2p_{1} p_{4} + 2p_{4} p_{2} - 2p_{1} p_{4} p_{2} + 4} \right) \\ \varsigma_{4} & = p_{4} p_{2} \left( {1 + 2p_{2} } \right)\left( {1 + p_{1} + p_{1} p_{2} } \right) \\ \varsigma_{5} & = p_{4} p_{2}^{2} \left( {1 + 2p_{2} } \right)\left( {1 + p_{1} + p_{1} p_{2} } \right) \\ & \quad {\text{or}}\;\varsigma_{5} = p_{2} \varsigma_{4} \\ \varsigma_{6} & = 1 - 2p_{2} - 2p_{1} p_{2}^{2} - 2p_{2}^{2} p_{4} - p_{1} p_{2} - p_{1} p_{4} - p_{2} p_{4} - 3p_{1} p_{2}^{2} p_{4} - 2p_{1} p_{2}^{3} p_{4} - 2p_{1} p_{4} p_{2} \\ \varsigma_{7} & = p_{2} - 2p_{2}^{2} - p_{1} p_{2}^{2} - 2p_{1} p_{2}^{3} - p_{4} p_{2}^{2} - 2p_{2}^{3} p_{4} - 2p_{1} p_{2}^{2} p_{4} - 3p_{1} p_{2}^{3} p_{4} - 2p_{1} p_{2}^{4} p_{4} - p_{1} p_{2} p_{4} \\ \varsigma_{8} & = p_{2} - 2p_{2}^{2} - p_{1} p_{2}^{2} - 2p_{1} p_{2}^{3} - p_{4} p_{2}^{2} - 2p_{2}^{3} p_{4} - 2p_{1} p_{2}^{2} p_{4} - 3p_{1} p_{2}^{3} p_{4} - 2p_{1} p_{2}^{4} p_{4} - p_{1} p_{2} p_{4} \\ & \quad {\text{or}}\;\varsigma_{8} = \varsigma_{7} \\ \end{aligned}$$
When H = 3:
1. The SSPV of the SSS synthetic scheme is given by
$$s = \varsigma_{0} \left( {\begin{array}{*{20}c} {\varsigma_{1} } \\ {\varsigma_{2} } \\ {\varsigma_{3} } \\ {\begin{array}{*{20}c} {\varsigma_{4} } \\ {\varsigma_{5} } \\ {\varsigma_{6} } \\ {\varsigma_{7} } \\ {\varsigma_{8} } \\ \end{array} } \\ {\begin{array}{*{20}c} {\varsigma_{9} } \\ {\varsigma_{10} } \\ {\varsigma_{11} } \\ {\begin{array}{*{20}c} {\varsigma_{12} } \\ {\varsigma_{13} } \\ {\varsigma_{14} } \\ {\varsigma_{15} } \\ {\varsigma_{16} } \\ \end{array} } \\ \end{array} } \\ \end{array} } \right)$$(47)where
$$\begin{aligned} \varsigma_{0} & = \frac{1}{{1 + 2p_{i}^{5} p_{l}^{2} p_{u}^{2} + 2p_{i}^{4} p_{l}^{2} p_{u}^{2} - p_{i}^{4} p_{l}^{2} p_{u} - p_{i}^{4} p_{l} p_{u}^{2} - p_{i}^{4} p_{l} p_{u} - 2p_{i}^{3} p_{l} p_{u} - 3p_{i}^{2} p_{l} p_{u} }} \\ \varsigma_{1} & = p_{u} p_{i}^{5} \left( {1 + p_{l} p_{i} + p_{l} p_{i}^{2} } \right) \\ \varsigma_{2} & = p_{u} p_{l} p_{i}^{4} \left( {1 + p_{l} p_{i} + p_{l} p_{i}^{2} } \right) \\ & \quad {\text{or}}\;\varsigma_{2} = \frac{{p_{l} }}{{p_{i} }} \varsigma_{1} \\ \varsigma_{3} & = p_{u} p_{l} p_{i}^{4} \left( {1 + p_{i}^{2} p_{l} - p_{i}^{2} p_{l} p_{u} - p_{u}^{3} p_{l} p_{u} } \right) \\ \varsigma_{4} & = p_{u} p_{l} p_{i}^{3} \left( {1 + p_{i}^{2} p_{l} - p_{i}^{2} p_{l} p_{u} - p_{u}^{3} p_{l} p_{u} } \right) = \frac{1}{{p_{i} }}\varsigma_{3} \\ \varsigma_{5} & = p_{u} p_{i}^{4} \left( {1 + p_{l} p_{i} + p_{l} p_{i}^{2} } \right) = \frac{1}{{p_{l} }}\varsigma_{2} \\ \varsigma_{6} & = p_{u} p_{i}^{3} \left( {1 + p_{i}^{2} p_{l} - p_{i}^{2} p_{l} p_{u} - p_{u}^{3} p_{l} p_{u} } \right) = \frac{1}{{p_{l} }}\varsigma_{4} \\ \varsigma_{7} & = p_{i}^{3} \left( {1 - p_{i}^{2} p_{l} p_{u} - 2p_{i}^{3} p_{l} p_{u} - p_{u}^{4} p_{l} p_{u} } \right) \\ \varsigma_{8} & = p_{l} p_{i}^{3} \left( {1 + p_{i}^{2} p_{u} - p_{i}^{2} p_{l} p_{u} - p_{u}^{3} p_{l} p_{u} } \right) \\ \varsigma_{9} & = p_{l} p_{i}^{4} \left( {1 + p_{u} p_{i} + p_{u} p_{i}^{2} } \right) \\ \varsigma_{10} & = p_{u} p_{l} p_{i}^{3} \left( {1 + p_{i}^{2} p_{u} - p_{i}^{2} p_{l} p _{u} - p_{u}^{3} p_{l} p_{u} } \right) = p_{u} \varsigma_{8} \\ \varsigma_{11} & = p_{u} p_{l} p_{i}^{4} \left( {1 + p_{i}^{2} p_{u} - p_{i}^{2} p_{l} p_{u} - p_{u}^{3} p_{l} p_{u} } \right) = p_{i} \varsigma_{10} \\ \varsigma_{12} & = p_{u} p_{l} p_{i}^{4} \left( {1 + p_{u} p_{i} + p_{u} p_{i}^{2} } \right) = p_{u} \varsigma_{9} \\ \varsigma_{13} & = p_{l} p_{i}^{5} \left( {1 + p_{u} p_{i} + p_{u} p_{i}^{2} } \right) = p_{i} \varsigma_{9} \\ \varsigma_{14} & = 1 - p_{i} - p_{i}^{3} 
p_{l} - p_{i}^{3} p_{u} - p_{i}^{5} p_{l} p_{u} - p_{i}^{4} p_{l} p_{u} - p_{i}^{3} p_{l} p_{u} - p_{i}^{4} p_{l}^{2} p_{u} - p_{i}^{4} p_{l} p_{u}^{2} \\ & \quad - 3p_{i}^{2} p_{l} p_{u} + 2p_{i}^{5} p_{l}^{2} p_{u}^{2} + p_{i}^{4} p_{l}^{2} p_{u}^{2} \\ \varsigma_{15} & = p_{i} - p_{i}^{2} - p_{i}^{4} p_{l} - p_{i}^{4} p_{u} - p_{i}^{6} p_{l} p_{u} - p_{i}^{5} p_{l}^{2} p_{u} - p_{i}^{5} p_{l} p_{u}^{2} - p_{i}^{5} p_{l} p_{u} - p_{i}^{4} p_{l} p_{u} \\ & \quad - 3p_{i}^{3} p_{l} p_{u} + 2p_{i}^{6} p_{l}^{2} p_{u}^{2} + 2p_{i}^{5} p_{l}^{2} p_{u}^{2} \\ \varsigma_{16} & = p_{i}^{2} - p_{i}^{3} - p_{i}^{5} p_{l} - p_{i}^{5} p_{u} - p_{i}^{7} p_{l} p_{u} - p_{i}^{6} p_{l}^{2} p_{u} - p_{i}^{6} p_{l} p_{u}^{2} - p_{i}^{6} p_{l} p_{u} - p_{i}^{5} p_{l} p_{u} \\ & \quad - 3p_{i}^{4} p_{l} p_{u} + 2p_{i}^{7} p_{l}^{2} p_{u}^{2} + 2p_{i}^{6} p_{l}^{2} p_{u}^{2} \\ \end{aligned}$$
2. The SSPV of the RSS synthetic scheme is given by
$$s = \varsigma_{0} \left( {\begin{array}{*{20}c} {\varsigma_{1} } \\ {\varsigma_{2} } \\ {\varsigma_{3} } \\ {\begin{array}{*{20}c} {\varsigma_{4} } \\ {\varsigma_{5} } \\ {\varsigma_{6} } \\ {\varsigma_{7} } \\ {\varsigma_{8} } \\ \end{array} } \\ {\begin{array}{*{20}c} {\varsigma_{9} } \\ {\varsigma_{10} } \\ \end{array} } \\ \end{array} } \right)$$(48)where
$$\begin{aligned} \varsigma_{0} & = \frac{1}{{1 - p_{l} p_{u} - p_{i}^{4} p_{l} p_{u} + 2p_{i}^{3} p_{l} p_{u} - 3p_{i}^{2} p_{l} p_{u} - 2p_{i} p_{l} p_{u} }} \\ \varsigma_{1} & = p_{u} p_{i}^{5} \left( {1 + p_{l} + p_{l} p_{i} + p_{l} p_{i}^{2} } \right) \\ \varsigma_{2} & = p_{u} p_{i}^{4} \left( {1 + p_{l} + p_{l} p_{i} + p_{l} p_{i}^{2} } \right) = \frac{1}{{p_{i} }} \varsigma_{1} \\ \varsigma_{3} & = p_{u} p_{i}^{3} \left( {1 + p_{l} + p_{l} p_{i} + p_{l} p_{i}^{2} } \right) = \frac{1}{{p_{i} }} \varsigma_{2} \\ \varsigma_{4} & = p_{i}^{3} \varsigma_{0}^{ - 1} \\ \varsigma_{5} & = p_{l} p_{i}^{3} \left( {1 + p_{u} + p_{u} p_{i} + p_{u} p_{i}^{2} } \right) \\ \varsigma_{6} & = p_{l} p_{i}^{4} \left( {1 + p_{u} + p_{u} p_{i} + p_{u} p_{i}^{2} } \right) = p_{i} \varsigma_{5} \\ \varsigma_{7} & = p_{l} p_{i}^{5} \left( {1 + p_{u} + p_{u} p_{i} + p_{u} p_{i}^{2} } \right) = p_{i} \varsigma_{6} \\ \varsigma_{8} & = 1 - p_{i} - p_{i}^{3} p_{l} - p_{i}^{3} p_{u} - p_{l} p_{u} - p_{i} p_{l} p_{u} - p_{i}^{2} p_{l} p_{u} - p_{i}^{3} p_{l} p_{u} - p_{i}^{4} p_{l} p_{u} - p_{i}^{5} p_{l} p_{u} \\ \varsigma_{9} & = p_{i} - p_{i}^{2} - p_{i}^{4} p_{l} - p_{i}^{4} p_{u} - p_{i} p_{l} p_{u} - p_{i}^{2} p_{l} p_{u} - p_{i}^{3} p_{l} p_{u} - p_{i}^{4} p_{l} p_{u} - p_{i}^{5} p_{l} p_{u} - p_{i}^{6} p_{l} p_{u} \\ \varsigma_{10} & = p_{i}^{2} - p_{i}^{3} - p_{i}^{5} p_{l} - p_{i}^{5} p_{u} - p_{i}^{2} p_{l} p_{u} - p_{i}^{3} p_{l} p_{u} - p_{i}^{4} p_{l} p_{u} - p_{i}^{5} p_{l} p_{u} - p_{i}^{6} p_{l} p_{u} - p_{i}^{7} p_{l} p_{u} \\ \end{aligned}$$
3. The SSPV of the MSS synthetic scheme is given by
$$s = \varsigma_{0} \left( {\begin{array}{*{20}c} {\varsigma_{1} } \\ {\varsigma_{2} } \\ {\varsigma_{3} } \\ {\varsigma_{4} } \\ {\varsigma_{5} } \\ {\varsigma_{6} } \\ {\varsigma_{7} } \\ {\varsigma_{8} } \\ {\varsigma_{9} } \\ {\varsigma_{10} } \\ {\varsigma_{11} } \\ {\varsigma_{12} } \\ \end{array} } \right)$$(49)where
$$\begin{aligned} \varsigma_{0} &= \frac{1}{{1 - \varsigma_{0}^{*} }} \hfill \\ \varsigma_{0}^{*} &= p_{1} p_{4} + p_{1} p_{3} p_{4} + p_{1} p_{3}^{2} p_{4} + p_{1} p_{2} p_{4} + p_{1} p_{2}^{2} p_{4} + p_{1} p_{2} p_{3} p_{4} - p_{1} p_{2} p_{3}^{2} p_{4} - p_{1} p_{2}^{2} p_{3} p_{4} - p_{1} p_{2}^{2} p_{3}^{2} p_{4} \hfill \\ \varsigma _{1} &= p_{1} p_{2}^{2} \left( {p_{3} + p_{2}^{3} + p_{2}^{2} p_{3} + p_{2}^{2} p_{4} + p_{2}^{3} p_{4} + p_{3}^{3} p_{4} + p_{2} p_{3} + p_{2} p_{4} + p_{2} p_{3}^{2} p_{4} + p_{2}^{2} p_{3} p_{4} + p_{2} p_{3}^{3} p_{4} + p_{2}^{3} p_{3} p_{4} + p_{2}^{2} p_{3}^{2} p_{4} + p_{2}^{2} p_{3}^{3} p_{4} + p_{2}^{3} p_{3}^{2} p_{4} + p_{1} p_{2} p_{4} } \right) \hfill \\ \varsigma_{2} &= \frac{1}{{p_{2} }} \varsigma_{1} \hfill \\ \varsigma_{3} & = p_{1} p_{2}^{2} + p_{1} p_{3} + p_{1} p_{2} p_{3} + p_{1} p_{2} p_{4} + p_{1} p_{2}^{2} p_{3} + p_{1} p_{2}^{2} p_{4} + p_{1} p_{2}^{3} p_{4} + p_{1} p_{3}^{3} p_{4} \hfill \\ & \quad + p_{1} p_{2} p_{3} p_{4} + p_{1} p_{2} p_{3}^{2} p_{4} + p_{1} p_{2}^{2} p_{3} p_{4} + p_{1} p_{2} p_{3}^{3} p_{4} + p_{1} p_{2}^{3} p_{3} p_{4} + p_{1} p_{2}^{2} p_{3}^{2} p_{4} + p_{1} p_{2}^{2} p_{3}^{3} p_{4} + p_{1} p_{2}^{3} p_{3}^{2} p_{4} \hfill \\ \varsigma_{4} & = p_{2}^{3} + p_{3}^{3} + p_{1} p_{3}^{2} + p_{1} p_{3}^{3} + 2p_{2} p_{3} + p_{2} p_{3}^{2} + p_{2}^{2} p_{3} + p_{2}^{2} p_{4} + p_{2}^{3} p_{4} + p_{1} p_{2} p_{3}^{2} \hfill \\ & \quad + p_{1} p_{2} p_{3}^{3} + p_{1} p_{2}^{3} p_{3} - p_{1} p_{2}^{3} p_{4} - p_{1} p_{3}^{3} p_{4} + p_{2}^{2} p_{3} p_{4} + p_{2}^{2} p_{3}^{3} p_{4} + p_{2}^{3} p_{3}^{2} p_{4} + p_{2} p_{3}^{3} p_{4} \hfill \\ & \quad + p_{2}^{3} p_{3} p_{4} + p_{1} p_{2}^{2} p_{3}^{2} + p_{1} p_{2}^{2} p_{3}^{3} + p_{1} p_{2}^{3} p_{3}^{2} + p_{2}^{2} p_{3}^{2} p_{4} - p_{1} p_{2} p_{3}^{3} p_{4} - p_{1} p_{2}^{3} p_{3} p_{4} - p_{1} p_{2}^{2} p_{3}^{3} p_{4} - p_{1} p_{2}^{3} p_{3}^{2} p_{4}\end{aligned}$$$$\begin{aligned} \varsigma_{5} & = p_{3}^{3} p_{4} + p_{2} p_{4} + 
p_{1} p_{3} p_{4} + p_{2} p_{3} p_{4} + p_{1} p_{2}^{3} p_{4} + p_{1} p_{3}^{2} p_{4} + p_{1} p_{3}^{3} p_{4} + p_{2} p_{3}^{2} p_{4} \hfill \\ & \quad + p_{1} p_{2} p_{3} p_{4} + p_{1} p_{2} p_{3}^{2} p_{4} + p_{1} p_{2}^{2} p_{3} p_{4} + p_{1} p_{2} p_{3}^{3} p_{4} + p_{1} p_{2}^{3} p_{3} p_{4} + p_{1} p_{2}^{2} p_{3}^{2} p_{4} + p_{1} p_{2}^{2} p_{3}^{3} p_{4} + p_{1} p_{2}^{3} p_{3}^{2} p_{4} \hfill \\ \varsigma_{6} &= p_{3} p_{4} \left( {p_{2} + p_{3}^{3} + p_{1} p_{3}^{2} + p_{1} p_{3}^{3} + p_{2} p_{3}^{2} + p_{1} p_{2}^{3} + p_{1} p_{3} + p_{2} p_{3} + p_{1} p_{2} p_{3} + p_{1} p_{2} p_{3}^{2} } \right. \hfill \\ & \left. {\quad + p_{1} p_{2}^{2} p_{3} + p_{1} p_{2} p_{3}^{3} + p_{1} p_{2}^{3} p_{3} + p_{1} p_{2}^{2} p_{3}^{2} + p_{1} p_{2}^{2} p_{3}^{3} + p_{1} p_{2}^{3} p_{3}^{2} } \right) \hfill \\ \varsigma_{7} &= p_{3} \varsigma_{6} \hfill \\ \varsigma_{8} & = 1 - p_{2} - p_{3} - p_{1} p_{2}^{3} - p_{3}^{3} p_{4} - p_{1} p_{3} - p_{1} p_{4} - p_{2} p_{4} - p_{1} p_{2} p_{3} - p_{1} p_{2} p_{4} - p_{1} p_{3} p_{4} \hfill \\ & \quad - p_{2} p_{3} p_{4} - p_{1} p_{2}^{2} p_{3} - p_{1} p_{2}^{2} p_{4} - p_{1} p_{2}^{3} p_{4} - p_{1} p_{3}^{2} p_{4} - p_{1} p_{3}^{3} p_{4} - p_{2} p_{3}^{2} p_{4} \hfill \\ & \quad - p_{1} p_{2} p_{3} p_{4} - p_{1} p_{2} p_{3}^{2} p_{4} - p_{1} p_{2}^{2} p_{3} p_{4} - p_{1} p_{2} p_{3}^{3} p_{4} - p_{1} p_{2}^{3} p_{3} p_{4} - p_{1} p_{2}^{2} p_{3}^{2} p_{4} \hfill \\ & \quad - p_{1} p_{2}^{2} p_{3}^{3} p_{4} - p_{1} p_{2}^{3} p_{3}^{2} p_{4} \hfill \\ \varsigma_{9} & = p_{2} - p_{2}^{2} - p_{1} p_{2}^{4} - p_{2}^{2} p_{4} - p_{2} p_{3} - p_{1} p_{2} p_{3} - p_{1} p_{2} p_{4} - p_{1} p_{2}^{2} p_{3} - p_{1} p_{2}^{3} p_{3} - p_{1} p_{2}^{2} p_{4} \hfill \\ & \quad - p_{1} p_{2}^{3} p_{4} - p_{1} p_{2}^{4} p_{4} - p_{2}^{2} p_{3} p_{4} - p_{2} p_{3}^{3} p_{4} - p_{2}^{2} p_{3}^{2} p_{4} - p_{1} p_{2} p_{3} p_{4} - p_{1} p_{2} p_{3}^{2} p_{4} \hfill \\ & \quad - p_{1} p_{2}^{2} p_{3} p_{4} - p_{1} p_{2} p_{3}^{3} p_{4} - p_{1} 
p_{2}^{3} p_{3} p_{4} - p_{1} p_{2}^{4} p_{3} p_{4} - p_{1} p_{2}^{2} p_{3}^{2} p_{4} - p_{1} p_{2}^{2} p_{3}^{3} p_{4} \hfill \\ & \quad - p_{1} p_{2}^{3} p_{3}^{2} p_{4} - p_{1} p_{2}^{3} p_{3}^{3} p_{4} - p_{1} p_{2}^{4} p_{3}^{2} p_{4} \hfill \\ \varsigma_{10} &= p_{3} - p_{3}^{2} - p_{1} p_{3}^{2} - p_{3}^{4} p_{4} - p_{2} p_{3} - p_{1} p_{3} p_{4} - p_{2} p_{3} p_{4} - p_{1} p_{2} p_{3}^{2} - p_{1} p_{2}^{3} p_{3} \hfill \\ & \quad - p_{1} p_{3}^{2} p_{4} - p_{1} p_{3}^{3} p_{4} - p_{1} p_{3}^{4} p_{4} - p_{2} p_{3}^{2} p_{4} - p_{2} p_{3}^{3} p_{4} - p_{1} p_{2}^{2} p_{3}^{2} - p_{1} p_{2} p_{3} p_{4} \hfill \\ & \quad - p_{1} p_{2} p_{3}^{2} p_{4} - p_{1} p_{2}^{2} p_{3} p_{4} - p_{1} p_{2} p_{3}^{3} p_{4} - p_{1} p_{2}^{3} p_{3} p_{4} - p_{1} p_{2} p_{3}^{4} p_{4} - p_{1} p_{2}^{2} p_{3}^{2} p_{4} \hfill \\ & \quad - p_{1} p_{2}^{2} p_{3}^{3} p_{4} - p_{1} p_{2}^{3} p_{3}^{2} p_{4} - p_{1} p_{2}^{2} p_{3}^{4} p_{4} - p_{1} p_{2}^{3} p_{3}^{3} p_{4} \hfill \\ \varsigma_{11} & = p_{2}^{2} - p_{2}^{3} - p_{1} p_{2}^{5} - p_{2}^{2} p_{3} - p_{2}^{3} p_{4} - p_{1} p_{2}^{2} p_{3} - p_{1} p_{2}^{3} p_{3} - p_{1} p_{3}^{2} p_{4} - p_{1} p_{2}^{4} p_{3} \hfill \\ & \quad - p_{1} p_{2}^{3} p_{4} - p_{1} p_{2}^{4} p_{4} - p_{1} p_{2}^{5} p_{4} - p_{2}^{3} p_{3} p_{4} - p_{2}^{2} p_{3}^{3} p_{4} - p_{2}^{3} p_{3}^{2} p_{4} - p_{1} p_{2}^{2} p_{3} p_{4} \hfill \\ & \quad - p_{1} p_{2}^{3} p_{3} p_{4} - p_{1} p_{2}^{4} p_{3} p_{4} - p_{1} p_{2}^{5} p_{3} p_{4} - p_{1} p_{2}^{2} p_{3}^{2} p_{4} - p_{1} p_{2}^{2} p_{3}^{3} p_{4} - p_{1} p_{2}^{3} p_{3}^{2} p_{4} \hfill \\ & \quad - p_{1} p_{2}^{3} p_{3}^{3} p_{4} - p_{1} p_{2}^{4} p_{3}^{2} p_{4} - p_{1} p_{2}^{4} p_{3}^{3} p_{4} - p_{1} p_{2}^{5} p_{3}^{2} p_{4} \hfill \\ \varsigma_{12} & = p_{3}^{2} - p_{3}^{3} - p_{1} p_{3}^{3} - p_{2} p_{3}^{2} - p_{3}^{5} p_{4} - p_{1} p_{2} p_{3}^{3} - p_{1} p_{3}^{2} p_{4} - p_{1} p_{3}^{3} p_{4} - p_{1} p_{3}^{4} p_{4} \hfill \\ & \quad - p_{2} p_{3}^{2} p_{4} - p_{1} p_{3}^{5} 
p_{4} - p_{2} p_{3}^{3} p_{4} - p_{2} p_{3}^{4} p_{4} - p_{1} p_{2}^{2} p_{3}^{3} - p_{1} p_{2}^{3} p_{3}^{2} - p_{1} p_{2} p_{3}^{2} p_{4} \hfill \\ & \quad - p_{1} p_{2} p_{3}^{3} p_{4} - p_{1} p_{2} p_{3}^{4} p_{4} - p_{1} p_{2} p_{3}^{5} p_{4} - p_{1} p_{2}^{2} p_{3}^{2} p_{4} - p_{1} p_{2}^{2} p_{3}^{3} p_{4} - p_{1} p_{2}^{3} p_{3}^{2} p_{4} \hfill \\ & \quad - p_{1} p_{2}^{3} p_{3}^{3} p_{4} - p_{1} p_{2}^{2} p_{3}^{4} p_{4} - p_{1} p_{2}^{3} p_{3}^{4} p_{4} - p_{1} p_{2}^{2} p_{3}^{5} p_{4} \hfill \\ \end{aligned}$$
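For implementation purposes, Steps 1–4 above reduce to a few lines of linear algebra once the essential transition matrix and the index j of the '±' state are known (e.g. j = 14 of τ = 16 for the SSS scheme with H = 3, per Eq. (42)). The sketch below (Python/NumPy) is generic in \({\mathbf{Q^{\prime}}}\) and j; since the side-sensitive TPMs are not reproduced in this section, the NSS H = 2 matrix from Eq. (39) is used as a stand-in to demonstrate it:

```python
import numpy as np

def champ_sspv(Qt, j):
    """Steps 1-4: e_j, G = e_j 1' + I, z = (G - Q')^{-1} e_j, s = z / (1'z).

    Qt is the transposed essential transition matrix Q'; j is the (1-based)
    index of the state whose unit vector e_j is required (the '+/-' state for
    the side-sensitive schemes, state 1 for the NSS scheme).
    """
    tau = Qt.shape[0]
    ej = np.zeros(tau); ej[j - 1] = 1.0               # Step 1
    G = np.outer(ej, np.ones(tau)) + np.eye(tau)      # Step 2
    z = np.linalg.solve(G - Qt, ej)                   # Step 3
    return z / z.sum()                                # Step 4

# Stand-in demo: NSS scheme with H = 2 (Q' from Eq. (39)), j = 1, arbitrary p.
p = 0.9
Qt = np.array([[p, 0.0, p],
               [1 - p, 0.0, 0.0],
               [0.0, p, 0.0]])
s = champ_sspv(Qt, 1)
# Eq. (40) gives the closed form s = (1, 1-p, p(1-p)) / (2 - p^2).
```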
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Malela-Majika, JC., Graham, M.A. New Shewhart-type synthetic \(\bar{X}\) control schemes for non-normal data. J Ind Eng Int 15, 449–478 (2019). https://doi.org/10.1007/s40092-019-0304-z