Comparison of joint control schemes for multivariate normal i.i.d. output

  • Original Paper
  • Published in AStA Advances in Statistical Analysis

A Correction to this article was published on 18 December 2018

Abstract

The performance of a product frequently relies on more than one quality characteristic. In such a setting, joint control schemes are used to determine whether or not we are in the presence of unfavorable disruptions in the location (\({\varvec{\mu }}\)) and spread (\({\varvec{\varSigma }}\)) of a vector of quality characteristics. A common joint scheme for multivariate output comprises two charts: one for \({\varvec{\mu }}\) based on a weighted Mahalanobis distance between the vector of sample means and the target mean vector; another for \({\varvec{\varSigma }}\) depending on the ratio between the determinants of the sample covariance matrix and the target covariance matrix. Since many quality control practitioners are still reluctant to use sophisticated control statistics, this paper tackles Shewhart-type charts for \({\varvec{\mu }}\) and \({\varvec{\varSigma }}\) based on three pairs of control statistics depending on the nominal mean vector and covariance matrix, \({\varvec{\mu }}_0\) and \({\varvec{\varSigma }}_0\). We either capitalize on existing results or derive the joint probability density functions of these pairs of control statistics in order to assess the ability of the associated joint schemes to detect shifts in \({\varvec{\mu }}\) or \({\varvec{\varSigma }}\) for various out-of-control scenarios. A comparison study relying on extensive numerical and simulation results leads to the conclusion that none of the three joint schemes for \({\varvec{\mu }}\) and \({\varvec{\varSigma }}\) is uniformly better than the others. However, those results also suggest that the joint scheme with the control statistics \(n \, ( \bar{\mathbf {X}}-{\varvec{\mu }}_0 )^\top \, {\varvec{\varSigma }}_0^{-1} \, ( \bar{\mathbf {X}}-{\varvec{\mu }}_0 )\) and \(\hbox {det} \left( (n-1) \mathbf{S} \right) / \hbox {det} \left( {\varvec{\varSigma }}_0 \right) \) has the best overall average run length performance.
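As a minimal illustration (not taken from the paper), the pair of control statistics singled out at the end of the abstract can be computed from a single rational subgroup as follows; the dimension \(p\), subgroup size \(n\), and in-control parameters below are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical in-control parameters (p = 2 quality characteristics)
p, n = 2, 10
mu0 = np.zeros(p)
Sigma0 = np.array([[1.0, 0.3],
                   [0.3, 1.0]])

# One rational subgroup of n i.i.d. multivariate normal observations
X = rng.multivariate_normal(mu0, Sigma0, size=n)

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)        # sample covariance matrix (divisor n - 1)

# Chart for the mean vector: T = n (xbar - mu0)' Sigma0^{-1} (xbar - mu0)
d = xbar - mu0
T = n * d @ np.linalg.solve(Sigma0, d)

# Chart for the covariance matrix: U = det((n - 1) S) / det(Sigma0)
U = np.linalg.det((n - 1) * S) / np.linalg.det(Sigma0)

# The joint scheme signals when T or U falls beyond its control limits.
print(T, U)
```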


Change history

  • 18 December 2018

    In the original paper, we incorrectly stated that...

References

  • Akritas, A.G., Akritas, E.K., Malaschonok, G.I.: Various proofs of Sylvester’s (determinant) identity. Math. Comput. Simul. 42, 585–593 (1996)

  • Alt, F.B.: Multivariate quality control. In: Kotz, S., Johnson, N.L., Read, C.R. (eds.) Encyclopedia of Statistical Sciences, vol. 6, pp. 5312–5323. Wiley, New York (1984)

  • Anderson, T.W.: An Introduction to Multivariate Statistical Analysis. Wiley, New York (1958)

  • Arnold, S.F.: Wishart distribution. In: Kotz, S., Balakrishnan, N., Read, C.R., Vidakovic, B., Johnson, N.L. (eds.) Encyclopedia of Statistical Sciences (Second Edition), vol. 15, pp. 9184–9188. Wiley, New York (2006)

  • Fujikoshi, Y., Ulyanov, V.V., Shimizu, R.: Multivariate Statistics: High-Dimensional and Large-Sample Approximations. Wiley, New York (2010)

  • Gupta, A.K., Nagar, D.K.: Matrix Variate Distributions. Chapman & Hall/CRC, Boca Raton (2000)

  • Hotelling, H.: Multivariate quality control illustrated by the air testing of sample bombsights. In: Eisenhart, C., Hastay, M.W., Wallis, W.A. (eds.) Techniques of Statistical Analysis, pp. 111–184. McGraw Hill, New York (1947)

  • Karr, A.F.: Probability. Springer, New York (1993)

  • Li, K., Geng, Z.: The noncentral Wishart distribution and related distributions. Commun. Stat. Theory Methods 32, 33–45 (2003)

  • Montgomery, D.C., Wadsworth, H.M.: Some techniques for multivariate quality control applications. In: Transactions of the ASQC. American Society for Quality Control, Washington DC (1972)

  • Morais, M.C., Pacheco, A.: Some stochastic properties of upper one-sided \(\bar{x}\) and EWMA charts for \(\mu \) in the presence of shifts in \(\sigma \). Seq. Anal. 20, 1–12 (2001)

  • Morais, M.C., Okhrin, Y., Pacheco, A., Schmid, W.: On the stochastic behaviour of the run length of EWMA control schemes for the mean of correlated output in the presence of shifts in \(\sigma \). Stat. Decis. 24, 397–413 (2006)

  • Morais, M.C., Okhrin, Y., Pacheco, A., Schmid, W.: EWMA charts for multivariate output: some stochastic ordering results. Commun. Stat. Theory Methods 37, 2653–2663 (2008)

  • Muirhead, R.J.: Aspects of Multivariate Statistical Theory. Wiley, Hoboken (1982)

  • R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna (2013). http://www.R-project.org/. Accessed 28 Aug 2017

  • Ramos, P.A.A.C.F.P.: Performance analysis of simultaneous control schemes for the process mean (vector) and (co)variance (matrix). Ph.D. thesis, Instituto Superior Técnico, Universidade Técnica de Lisboa (2013)

  • Ramos, P.F., Morais, M.C., Pacheco, A., Schmid, W.: On the misleading signals in simultaneous schemes for the mean vector and covariance matrix of multivariate i.i.d. output. Stat. Pap. 57, 471–498 (2016)

  • Reynolds Jr., M.R., Cho, G.Y.: Multivariate control charts for monitoring the mean vector and covariance matrix. J. Qual. Technol. 38, 230–253 (2006)

  • Reynolds Jr., M.R., Stoumbos, Z.G.: Combinations of multivariate Shewhart and MEWMA control charts for monitoring the mean vector and covariance matrix. J. Qual. Technol. 40, 381–393 (2008)

  • Rohatgi, V.K.: An Introduction to Probability Theory and Mathematical Statistics. Wiley, New York (1976)

  • Seber, G.A.F.: Multivariate Observations. Wiley, New York (1984)

  • Sylvester, J.J.: On the relation between the minor determinants of linearly equivalent quadratic functions. Philos. Mag. 1, 295–305 (1851)

  • Tiku, M.: Noncentral beta distribution. In: Kotz, S., Balakrishnan, N., Read, C.R., Vidakovic, B., Johnson, N.L. (eds.) Encyclopedia of Statistical Sciences (Second Edition), vol. 7, pp. 5541–5542. Wiley, New York (2006a)

  • Tiku, M.: Noncentral F-distribution. In: Kotz, S., Balakrishnan, N., Read, C.R., Vidakovic, B., Johnson, N.L. (eds.) Encyclopedia of Statistical Sciences (Second Edition), vol. 7, pp. 5546–5550. Wiley, New York (2006b)

  • Tong, Y.L.: The Multivariate Normal Distribution. Springer, New York (1990)

  • Walsh, J.E.: Operating characteristics for tests of the stability of a normal population. J. Am. Stat. Assoc. 47, 191–202 (1952)

  • Wang, K., Yeh, A., Li, B.: Simultaneous monitoring of process mean vector and covariance matrix via penalized likelihood estimation. Comput. Stat. Data Anal. 78, 206–217 (2014)

  • Wells, W.T., Anderson, R.L., Cell, J.W.: The distribution of the product of two central or non-central chi-square variates. Ann. Math. Stat. 33, 1016–1020 (1962)

  • Wetherill, G.B., Brown, D.W.: Statistical Process Control: Theory and Practice. Chapman and Hall, London (1991)

  • Wolfram Research, Inc.: Mathematica, Version 10.3. Champaign (2015). http://reference.wolfram.com/language/. Accessed 28 Aug 2017

Acknowledgements

This work was partially supported by FCT (Fundação para a Ciência e a Tecnologia) through projects UID/Multi/04621/2013 and PEst-OE/MAT/UI0822/2014. We are most grateful to the two reviewers who selflessly devoted their time to scrutinize this work and offered very pertinent comments that led to a shorter and improved version of the original manuscript.

Author information

Corresponding author

Correspondence to Manuel Cabral Morais.

Appendices

Appendix A

We now prove Theorems 3 and 2, in that order.

Proof of Theorem 3

We assume that \({\varvec{\delta }}\) is arbitrary in part a) of this proof. In part b) it is necessary to assume that \({\varvec{\delta }} = \mathbf{0}\) to capitalize on an existing result.

a) Let: \(~~\mathbf{U}_{i} = {\varvec{\varSigma }}^{-1/2} (\mathbf{X}_{i} - {\varvec{\mu }}_0)\), where \(\mathbf{X}_{i} {\mathop {\sim }\limits ^{i.i.d.}} \mathcal{N}_p ( {\varvec{\mu }}_0 + {\varvec{\varSigma }}_0^{1/2} {\varvec{\delta }}/\sqrt{n}, \, {\varvec{\varSigma }})\); \(\bar{\mathbf{U}} = \frac{1}{n} \sum _{i=1}^n \mathbf{U}_{i}\).  Then:

$$\begin{aligned} n \mathbf{S}^* = {\varvec{\varSigma }}^{1/2} \, \left( \sum _{i=1}^n \mathbf{U}_{i} \, \mathbf{U}_{i}^\top \right) \, {\varvec{\varSigma }}^{1/2}, \end{aligned}$$
(17)

where \(\mathbf{U}_{i} {\mathop {\sim }\limits ^{i.i.d.}} \mathcal{N}_p ( {\varvec{\varSigma }}^{-1/2} {\varvec{\varSigma }}_0^{1/2} {\varvec{\delta }}/ \sqrt{n}, \, \mathbf{I})\);

$$\begin{aligned} T^{(3)}= & {} n \, (\bar{\mathbf{X}} - {\varvec{\mu }}_0)^\top \, (\mathbf{S}^*)^{-1}\, (\bar{\mathbf{X}} - {\varvec{\mu }}_0) \nonumber \\= & {} n (\sqrt{n} \, \bar{\mathbf{U}})^\top \, \left( \sum _{i=1}^n \mathbf{U}_{i} \, \mathbf{U}_{i}^\top \right) ^{-1} \, (\sqrt{n} \, \bar{\mathbf{U}}), \end{aligned}$$
(18)

with \(\sqrt{n} \, \bar{\mathbf{U}} \sim \mathcal{N}_p( {\varvec{\varSigma }}^{-1/2} \, {\varvec{\varSigma }}_0^{1/2} {\varvec{\delta }}, \, \mathbf{I})\).

Let: \(~\mathbf{U} = (\mathbf{U}_{1}, \ldots , \mathbf{U}_{n})^\top \) be an \(n \times p\) random matrix whose rows we know are independent normal variates with common mean \(({\varvec{\varSigma }}^{-1/2} {\varvec{\varSigma }}_0^{1/2} {\varvec{\delta }}/ \sqrt{n} \,)^\top \) and the same covariance matrix \(\mathbf{I}\); \(~\mathbf{H} = (\mathbf{h}_1, \ldots , \mathbf{h}_n)^\top \) be an orthogonal \(n \times n\) matrix with \(\mathbf{h}_1 = \mathbf{1}_n/\sqrt{n}\); \(~\mathbf{Z} = (\mathbf{Z}_{1}, \ldots , \mathbf{Z}_{n})^\top = \mathbf{H} \, \mathbf{U}\) be a random matrix obtained from U by the \(n \times n\) orthogonal transformation \(\mathbf{H}\).  Then, according to Fujikoshi et al. (2010, p. 13, Theorem 1.2.6), \(\mathbf{Z}_{1}, \ldots , \mathbf{Z}_{n}\) (i.e., the rows of \(\mathbf{Z}\)) are independent normal variates with the same covariance matrix \(\mathbf{I}\), and \(E(\mathbf{Z}) = \mathbf{H} \, E(\mathbf{U})\), that is, \(\mathbf{Z}\) has the same properties as \(\mathbf{U}\) except that the mean of \(\mathbf{Z}\) is changed to \(\mathbf{H} \, E(\mathbf{U})\). Furthermore:

$$\begin{aligned} \mathbf{Z}_{1}= & {} \sqrt{n} \, \bar{\mathbf{U}} \sim \mathcal{N}_p ( {\varvec{\varSigma }}^{-1/2} \, {\varvec{\varSigma }}_0^{1/2} {\varvec{\delta }}, \mathbf{I}); \\ \mathbf{Z}_{i}\sim & {} \mathcal{N}_p( \mathbf{0}, \mathbf{I}), \, i=2,\ldots ,n, \end{aligned}$$

where the zero mean vector follows from the orthogonality of \(\mathbf{H}\): the inner product between the first row of \(\mathbf{H}\) and any other row is null, which in turn implies that the sum of the entries of each of the remaining rows of \(\mathbf{H}\) is also null.
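This Helmert-type construction is easy to check numerically. The sketch below (with illustrative values \(n = 5\), \(p = 3\)) builds an orthogonal \(\mathbf{H}\) with first row \(\mathbf{1}_n/\sqrt{n}\) via a QR decomposition and verifies that \(\mathbf{Z}_1 = \sqrt{n}\,\bar{\mathbf{U}}\) and that the remaining rows of \(\mathbf{H}\) have null entry sums.

```python
import numpy as np

n, p = 5, 3
rng = np.random.default_rng(0)

# Build an orthogonal n x n matrix H whose first row is 1_n / sqrt(n):
# QR-decompose a matrix whose first column is 1_n.
A = np.column_stack([np.ones(n), rng.normal(size=(n, n - 1))])
Q, _ = np.linalg.qr(A)
H = np.sign(Q[0, 0]) * Q.T      # a global sign flip keeps H orthogonal

U = rng.normal(size=(n, p))     # rows playing the role of U_1, ..., U_n
Z = H @ U

# First row of Z equals sqrt(n) * Ubar; rows 2..n of H sum to zero,
# so the corresponding Z_i keep a zero mean vector.
print(np.allclose(Z[0], np.sqrt(n) * U.mean(axis=0)))
```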

b) Let us capitalize on the fact that \(\sum _{i=1}^n \mathbf{U}_{i} \mathbf{U}_{i}^\top = \mathbf{U}^\top \mathbf{U} = (\mathbf{H} \mathbf{U})^\top (\mathbf{H}{} \mathbf{U}) = \mathbf{Z}^\top \mathbf{Z} = \sum _{i=1}^n \mathbf{Z}_{i} \mathbf{Z}_{i}^\top \), and consider: \(\mathcal{A} = \sum _{i=1}^n \mathbf{Z}_{i} \mathbf{Z}_{i}^\top \); \({\varvec{\tau }} = \mathcal{A}^{-1/2} \mathbf{Z}_{1}\).

Firstly, by recalling result (13) and noting that \(\mathcal{A} = {\varvec{\varSigma }}^{-1/2} (n \, \mathbf{S}^*) {\varvec{\varSigma }}^{-1/2}\), we can resort to Arnold (2006, p. 9185) to add that \(\mathcal{A} \sim W_p \left( n, \mathbf{I}; {\varvec{\delta }}^* \, ({\varvec{\delta }}^* )^\top \right) \), where \({\varvec{\delta }}^* = {\varvec{\varSigma }}^{-1/2} \, {\varvec{\varSigma }}_0^{1/2} \, {\varvec{\delta }}\). If we adopt the notation \(etr(A) = \exp (tr(A))\), consider \({\varvec{\delta }} = \mathbf{0}\) and recall (Muirhead 1982, p. 85, Theorem 3.2.1), then we can write the marginal p.d.f. of the \(p \times p\) matrix \(\mathcal{A}\) as

$$\begin{aligned} f_{\mathcal{A}}(\mathcal{A}) = \frac{1}{2^{pn/2} \varGamma _p(n/2)} \, \left[ \hbox {det}(\mathcal{A}) \right] ^{(n-p-1)/2} \, etr\left( -\mathcal{A}/2 \right) , \end{aligned}$$
(19)

where \(\varGamma _p (n/2) = \pi ^{p(p-1)/4} \prod _{i=1}^p \varGamma [(n-i+1)/2]\) according to Muirhead (1982, p. 100).

Secondly, if \({\varvec{\delta }} = \mathbf{0}\), then \(\mathbf{Z}_{1}, \ldots , \mathbf{Z}_{n}\) are independent \(\mathcal{N}_p( \mathbf{0}, \mathbf{I})\) random vectors and in this case we can invoke Gupta and Nagar (2000, pp. 167–168, Theorem 5.2.3) or Muirhead (1982, p. 117, Exercise 3.15) and state that: the joint p.d.f. of \(\mathcal{A}\) and \({\varvec{\tau }}\) is

$$\begin{aligned} f_{\mathcal{A}, {\varvec{\tau }}}(\mathcal{A}, {\varvec{\tau }}) = \frac{\left[ \hbox {det}(\mathcal{A}) \right] ^{(n-p-1)/2} \, etr\left( -\mathcal{A}/2 \right) }{2^{pn/2} \, \pi ^{p/2} \, \varGamma _p[(n-1)/2]} \times (1-{\varvec{\tau }}^\top {\varvec{\tau }})^{(n-p-2)/2}; \end{aligned}$$

the marginal p.d.f. of \({\varvec{\tau }}\) is

$$\begin{aligned} f_{{\varvec{\tau }}}({\varvec{\tau }}) = \frac{\varGamma (n/2)}{\pi ^{p/2} \, \varGamma [(n-p)/2]}\times (1-{\varvec{\tau }}^\top {\varvec{\tau }})^{(n-p-2)/2}, \end{aligned}$$
(20)

for \({\varvec{\tau }}^\top {\varvec{\tau }} < 1\).

Thirdly, combining results (19)–(20) and the fact that \(\varGamma _p(n/2)/\varGamma _p((n-1)/2) = \varGamma (n/2)/\varGamma ((n-p)/2)\), we can write the joint p.d.f. of \(\mathcal{A}\) and \({\varvec{\tau }}\) as the product of the corresponding marginal p.d.f.s, \(f_{\mathcal{A}, {\varvec{\tau }}}(\mathcal{A}, {\varvec{\tau }}) = f_\mathcal{A}(\mathcal{A}) \times f_{{\varvec{\tau }}}({\varvec{\tau }})\); that is, \(\mathcal{A}\) and \({\varvec{\tau }}\) are independent provided that \({\varvec{\delta }} = \mathbf{0}\). (If \({\varvec{\delta }} \ne \mathbf{0}\), this is no longer the case.)

Finally, note that from (18) and (17) we get

$$\begin{aligned} T^{(3)}= & {} n \, (\bar{\mathbf{X}} - {\varvec{\mu }}_0)^\top \, (\mathbf{S}^*)^{-1}\, (\bar{\mathbf{X}} - {\varvec{\mu }}_0) = n \, {\varvec{\tau }}^\top {\varvec{\tau }} \\ U^{(3)}= & {} \frac{\hbox {det} ( n \mathbf{S}^* )}{\hbox {det}( {\varvec{\varSigma }}_0 )} = \frac{\hbox {det}({\varvec{\varSigma }})}{\hbox {det}({\varvec{\varSigma }}_0)} \times \hbox {det}(\mathcal{A}), \end{aligned}$$

hence these two measurable functions of \({\varvec{\tau }}\) and \(\mathcal{A}\) (respectively) are independent control statistics by the disjoint blocks theorem, as long as \({\varvec{\delta }} = \mathbf{0}\). Moreover, by taking advantage of results (12) and (14), \(\eta ^{(3)} (\mathbf{0}, \, {\varvec{\varSigma }})\) follows in a straightforward manner, which concludes the proof of Theorem 3. \(\square \)
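A quick Monte Carlo check of this conclusion is possible (a sketch with hypothetical values \(p = 2\), \(n = 8\); zero correlation is of course only a necessary condition for independence): under \({\varvec{\delta }} = \mathbf{0}\) and \({\varvec{\varSigma }} = {\varvec{\varSigma }}_0 = \mathbf{I}\), the sample correlation between \(T^{(3)}\) and \(U^{(3)}\) should be negligible.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 2, 8, 20_000          # hypothetical dimension and subgroup size

T3 = np.empty(reps)
U3 = np.empty(reps)
for r in range(reps):
    X = rng.normal(size=(n, p))            # mu = mu0 = 0, Sigma = Sigma0 = I
    nSstar = X.T @ X                       # n S* = sum_i (X_i - mu0)(X_i - mu0)'
    xbar = X.mean(axis=0)
    T3[r] = n**2 * xbar @ np.linalg.solve(nSstar, xbar)
    U3[r] = np.linalg.det(nSstar)          # det(Sigma0) = 1

# Correlation near zero (necessary for independence); also E[T^(3)] = p,
# since T^(3)/n ~ beta(p/2, (n-p)/2) when delta = 0.
print(np.corrcoef(T3, U3)[0, 1], T3.mean())
```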

Proof of Theorem 2

Firstly, recall that

$$\begin{aligned} n \mathbf{S}^* = (n-1) \mathbf{S} + n (\bar{\mathbf{X}} - {\varvec{\mu }}_0) \, (\bar{\mathbf{X}} - {\varvec{\mu }}_0)^\top . \end{aligned}$$
(21)

Secondly, note that if \(\mathbf{A}\) is a \(p \times p\) nonsingular matrix and \(\mathbf{u}\) and \(\mathbf{v}\) are two \(p-\)dimensional vectors, then

$$\begin{aligned} (\mathbf{A} + \mathbf{u} \, \mathbf{u}^\top )^{-1} = \mathbf{A}^{-1} - \frac{\mathbf{A}^{-1} \, \mathbf{u} \, \mathbf{u}^\top \, \mathbf{A}^{-1}}{1+ \mathbf{u}^\top \, \mathbf{A}^{-1} \, \mathbf{u}} \end{aligned}$$

(Fujikoshi et al. 2010, A.1.3, p. 496) and

$$\begin{aligned} \mathbf{u}^\top (\mathbf{A} + \mathbf{u}{} \mathbf{u}^\top )^{-1} \mathbf{u} = \frac{\mathbf{u}^\top \mathbf{A}^{-1} \mathbf{u}}{1+ \mathbf{u}^\top \mathbf{A}^{-1} \mathbf{u}} . \end{aligned}$$
(22)

Thus, by considering \(\mathbf{A} = (n-1) \mathbf{S}\) and \(\mathbf{u} = \sqrt{n} (\bar{\mathbf{X}} - {\varvec{\mu }}_0)\), we get

$$\begin{aligned} T^{(3)} = n^2 \, ( \bar{\mathbf {X}}-{\varvec{\mu }}_0 )^\top \, (n \mathbf{S}^*)^{-1} \, ( \bar{\mathbf {X}}-{\varvec{\mu }}_0 ) = \frac{n \, T^{(2)}}{n-1+ T^{(2)}}. \end{aligned}$$
(23)

Thirdly, we need to invoke Sylvester’s determinant theorem (Sylvester 1851; Akritas et al. 1996), namely one of its consequences: for the case of column vector \(\mathbf{c}\) and row vector \(\mathbf{r}\), each with p components, we have \(\hbox {det}(\mathbf{I} + \mathbf{c} \, \mathbf{r}) = 1 + \mathbf{r} \, \mathbf{c}\) and, if \(\mathbf{A}\) is positive definite,

$$\begin{aligned} \hbox {det}( \mathbf{A} + \mathbf{u}{} \mathbf{u}^\top ) = \hbox {det}(\mathbf{A}) \; (1+\mathbf{u}^\top \mathbf{A}^{-1} \mathbf{u} ) . \end{aligned}$$
(24)

Hence, using (21),

$$\begin{aligned} U^{(3)} = \frac{\hbox {det} ( n \mathbf{S}^* )}{\hbox {det} ( {\varvec{\varSigma }}_0 )} = \left( 1 + \frac{T^{(2)}}{n-1} \right) U^{(2)}. \end{aligned}$$

Finally, by recalling Rohatgi (1976, Theorem 6, p. 135) or Karr (1993, Theorem 2.43, p. 62), we can obtain the p.d.f. of a one-to-one transformation such as \((T^{(2)}, \, U^{(2)})\) of the random vector \((T^{(3)}, \, U^{(3)}) = \underline{g}^{-1}(T^{(2)}, \, U^{(2)})\):

$$\begin{aligned} f_{T^{(2)}, \, U^{(2)}}(x,y)= & {} f_{T^{(3)}, \, U^{(3)}} \left[ \underline{g}^{-1}(x,y) \right] \times \left| J \left[ \underline{g}^{-1}(x,y) \right] \right| \\= & {} f_{T^{(3)}, \, U^{(3)}} \left[ \frac{n x}{n-1 + x}, \, \left( 1 + \frac{x}{n-1} \right) y \right] \times \frac{n}{n-1 + x}. \end{aligned}$$

After obtaining the joint p.d.f. of \(T^{(3)}\) and \(U^{(3)}\) from the joint c.d.f. in (15), we get the joint c.d.f. of \(T^{(2)}\) and \(U^{(2)}\):

$$\begin{aligned} F_{{T}^{(2)}, \, U^{(2)}}(x,y)= & {} \int _{0}^x \frac{n-1}{(n-1+t)^2} \times f_{beta(p/2,(n-p)/2)} \left( \frac{t}{n-1+t} \right) \\&\times P\left[ \prod _{i=1}^p \chi _{n-i+1}^2 \le \frac{\hbox {det} \left( {\varvec{\varSigma }}_0 \right) }{\hbox {det} \left( {\varvec{\varSigma }} \right) } \times \left( 1+\frac{t}{n-1} \right) y \right] \, \hbox {d}t. \end{aligned}$$

\(\square \)

Appendix B

Let us remind the reader that the results in this appendix play a pivotal role in obtaining the control limits and the out-of-control ARL values.

We can add the following auxiliary results referring to the product of independent central chi-square distributions \(Q_p \equiv Q_p^{(n)} = \prod _{i=1}^p \chi _{n-i}^2\), for \(p=2,3\), by capitalizing on the fact that \(~2 \sqrt{Q_2} \sim \chi _{2(n-2)}^2\) (Anderson 1958, p. 172):

$$\begin{aligned} F_{Q_2} (q)= & {} F_{\chi _{2 (n-2)}^2} (2 \sqrt{q}) \\ F_{Q_2}^{-1}(z)= & {} \frac{1}{4} \left[ F_{\chi _{2 (n-2)}^2}^{-1}(z) \right] ^2, \, z \in (0,1); \\ Q_3= & {} Q_2 \times \chi _{n-3}^2 \\ F_{Q_3} (q)= & {} \int _{0}^{+\infty } F_{\chi _{2 (n-2)}^2} \left( 2 \sqrt{q/y} \right) \times f_{\chi _{n-3}^2} (y) \, \hbox {d}y \\ F_{Q_3}^{-1}(z)&\text{ is } \text{ the } \text{ root } q \text{ of } F_{Q_3} (q) = z, \, z \in (0,1). \end{aligned}$$
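These \(Q_2\) results translate directly into code. The sketch below (where \(n = 10\) is an illustrative choice) implements \(F_{Q_2}\) and its inverse through the \(\chi ^2_{2(n-2)}\) representation and checks the quantile by Monte Carlo.

```python
import numpy as np
from scipy import stats

n = 10                                  # illustrative sample size
chi = stats.chi2(2 * (n - 2))           # 2 sqrt(Q_2) ~ chi^2_{2(n-2)}

def F_Q2(q):
    """F_{Q_2}(q) = F_{chi^2_{2(n-2)}}(2 sqrt(q))."""
    return chi.cdf(2.0 * np.sqrt(q))

def F_Q2_inv(z):
    """F_{Q_2}^{-1}(z) = (1/4) [F_{chi^2_{2(n-2)}}^{-1}(z)]^2."""
    return 0.25 * chi.ppf(z) ** 2

# Monte Carlo check: Q_2 = chi^2_{n-1} * chi^2_{n-2}
rng = np.random.default_rng(0)
q2 = rng.chisquare(n - 1, 100_000) * rng.chisquare(n - 2, 100_000)
print(np.mean(q2 <= F_Q2_inv(0.9)))     # should be close to 0.9
```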

As for \(p=4\), we have to take advantage of another result stated and proved by Wells et al. (1962). Let \(I_n(x)\) and \(K_\nu (x)\) represent the modified Bessel functions of the first and second kinds defined by

$$\begin{aligned} I_n(x)= & {} \sum _{m=0}^{+\infty } \frac{x^{2m+n}}{2^{2m+n} \, m! \, \varGamma (m+n+1)} \\ K_\nu (x)= & {} \frac{\pi \, [ I_{-\nu } (x) - I_{\nu } (x)]}{2 \sin (\nu \pi )}. \end{aligned}$$

Moreover, let \(Y_1\) and \(Y_2\) be two independent chi-square r.v.s with \(k_1\) and \(k_2\) degrees of freedom, respectively. Then the p.d.f. of \(W = Y_1 \, Y_2\) is equal to

$$\begin{aligned} f_W(w) = \frac{ w^{\frac{k_1}{4}+\frac{k_2}{4}-1} \, K_{\frac{k_1}{2}-\frac{k_2}{2}} (\sqrt{w})}{ 2^{\frac{k_1}{2}+\frac{k_2}{2}-1} \, \varGamma (\frac{k_1}{2}) \, \varGamma (\frac{k_2}{2})}. \end{aligned}$$

Finally, since \(4 \sqrt{Q_4} \equiv W\) when \(k_1=2(n-2)\) and \(k_2 = 2(n-4)\), we get:

$$\begin{aligned} f_{4 \sqrt{Q_4}}(w)= & {} \frac{ w^{n-4} \, K_2 \left( \sqrt{w} \right) }{ 2^{2n-7} \, (n-3)! \, (n-5)! }\\ F_{Q_4} (q)= & {} P \left( \frac{Y_1^2}{4} \, \frac{Y_2^2}{4} \le q \right) = P \left( Y_1 \, Y_2 \le 4 \sqrt{q} \right) = \int _0^{4 \sqrt{q}} f_{4 \sqrt{Q_4}}(w) \, \hbox {d}w \\ F_{Q_4}^{-1}(z)&\text{ is } \text{ the } \text{ root } q \text{ of } F_{Q_4} (q) = z, \, z \in (0,1). \end{aligned}$$
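The Bessel-function density above can be evaluated with standard special-function routines. This sketch (again with an illustrative \(n = 10\)) computes \(F_{Q_4}\) by numerical integration of \(f_{4\sqrt{Q_4}}\) and checks it by Monte Carlo.

```python
import numpy as np
from math import factorial
from scipy.special import kv            # modified Bessel function K_nu
from scipy.integrate import quad

n = 10                                  # illustrative sample size

def f_W(w):
    """Density of W = 4 sqrt(Q_4), i.e. k1 = 2(n-2), k2 = 2(n-4)."""
    return (w**(n - 4) * kv(2, np.sqrt(w))
            / (2**(2 * n - 7) * factorial(n - 3) * factorial(n - 5)))

def F_Q4(q):
    """F_{Q_4}(q) = P(W <= 4 sqrt(q))."""
    return quad(f_W, 0, 4 * np.sqrt(q))[0]

# Monte Carlo check: Q_4 = chi^2_{n-1} chi^2_{n-2} chi^2_{n-3} chi^2_{n-4}
rng = np.random.default_rng(1)
q4 = (rng.chisquare(n - 1, 100_000) * rng.chisquare(n - 2, 100_000)
      * rng.chisquare(n - 3, 100_000) * rng.chisquare(n - 4, 100_000))
print(F_Q4(np.quantile(q4, 0.9)))       # should be close to 0.9
```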

We ought to add that, by taking advantage of Theorem 2 and condition (16), we can conclude that \(\beta ^{(2)}\) is the root of

$$\begin{aligned} \int _{0}^{x} \frac{n-1}{(n-1+t)^2}\times & {} f_{beta(p/2,(n-p)/2)} \left( \frac{t}{n-1+t} \right) \\\times & {} F_{Q_p^{(n+1)}}\left[ \left( 1+\frac{t}{n-1} \right) y \right] \, \hbox {d}t = 1 - (ARL^\star )^{-1}, \end{aligned}$$

where \(x = \frac{(n-1)p}{n-p} \times F_{F_{p,n-p}}^{-1} (1- \beta ^{(2)})\) and \(y = F_{Q_p^{(n)}}^{-1}(1-\beta ^{(2)})\). Once \(\beta ^{(2)}\) is obtained numerically, the upper control limits of the individual charts of scheme 2 soon follow.
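As a sketch of this root-finding step for \(p = 2\), where \(F_{Q_2^{(m)}}\) has the closed form given earlier in this appendix, with illustrative values \(n = 10\) and target in-control \(ARL^\star = 500\) (both assumptions, not values from the paper):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import brentq

p, n, ARL = 2, 10, 500.0               # illustrative choices

def F_Q2(q, m):
    # c.d.f. of Q_2^{(m)} = chi^2_{m-1} chi^2_{m-2}: 2 sqrt(Q_2) ~ chi^2_{2(m-2)}
    return stats.chi2(2 * (m - 2)).cdf(2.0 * np.sqrt(q))

def F_Q2_inv(z, m):
    return 0.25 * stats.chi2(2 * (m - 2)).ppf(z) ** 2

f_beta = stats.beta(p / 2, (n - p) / 2).pdf

def lhs(beta):
    # Left-hand side of the defining equation, as a function of beta
    x = (n - 1) * p / (n - p) * stats.f(p, n - p).ppf(1 - beta)
    y = F_Q2_inv(1 - beta, n)
    integrand = lambda t: ((n - 1) / (n - 1 + t)**2
                           * f_beta(t / (n - 1 + t))
                           * F_Q2((1 + t / (n - 1)) * y, n + 1))
    return quad(integrand, 0, x)[0]

beta2 = brentq(lambda b: lhs(b) - (1 - 1 / ARL), 1e-6, 0.5)
print(beta2)   # the upper control limits x and y then follow from beta2
```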


Cite this article

Morais, M.C., Schmid, W., Ramos, P.F. et al. Comparison of joint control schemes for multivariate normal i.i.d. output. AStA Adv Stat Anal 103, 257–287 (2019). https://doi.org/10.1007/s10182-018-00331-3
