
Testing equality of standardized generalized variances of k multivariate normal populations with arbitrary dimensions

Original Paper · Statistical Methods & Applications

Abstract

For a p-variate normal distribution with covariance matrix \( {\varvec{\Sigma }}\), the standardized generalized variance (SGV) is defined as the positive pth root of \( |{\varvec{\Sigma }}| \) and is used as a measure of variability. Testing equality of SGVs, which allows the variability of multivariate normal distributions with different dimensions to be compared, remains a problem of interest. The most classical test for this problem is the likelihood ratio test (LRT). In this article, testing equality of the SGVs of k multivariate normal distributions with possibly unequal dimensions is studied. Two approximations for the null distribution of the LRT statistic are proposed, based on the well-known Welch–Satterthwaite and Bartlett-adjustment distribution approximation methods, and the high-dimensional behavior of these approximated distributions is investigated. Through an extensive simulation study: first, the performance of the proposed tests is compared with that of the classical LRT in terms of type I error, power, and alpha-adjusted equivalents; second, the robustness of the procedures to departures from the normality assumption is evaluated. Finally, the proposed methods are illustrated with two real data examples.
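
To fix ideas, the sample analogue of the SGV is straightforward to compute. The short R sketch below is our own illustration (the helper sgv and the simulated data are ours, not code from the paper): it estimates the SGV for two samples of different dimensions, which is exactly the setting in which the tests studied here apply.

sgv <- function(x) {
  # positive p-th root of the determinant of the sample covariance matrix
  det(cov(x))^(1 / ncol(x))
}
set.seed(1)
x1 <- matrix(rnorm(100 * 2), ncol = 2)  # n = 100 observations, p = 2
x2 <- matrix(rnorm(100 * 5), ncol = 5)  # n = 100 observations, p = 5
c(sgv(x1), sgv(x2))  # both near 1 here; comparable despite unequal dimensions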


References

  • Andrews JL, McNicholas PD (2014) Variable selection for clustering and classification. J Classif 31(2):136–153
  • Arvanitis LG, Afonja B (1971) Use of the generalized variance and the gradient projection method in multivariate stratified sampling. Biometrics 27(1):119–127
  • Bagnato L, Greselin F, Punzo A (2014) On the spectral decomposition in normal discriminant analysis. Commun Stat-Simul Comput 43(6):1471–1489
  • Bartlett MS (1937) Properties of sufficiency and statistical tests. Proc R Soc Lond Ser A-Math Phys Sci 160(901):268–282
  • Behara M, Giri N (1983) Generalized variance statistic in the testing of hypothesis in complex multivariate Gaussian distributions. Archiv der Math 41(6):538–543
  • Bersimis S, Psarakis S, Panaretos J (2007) Multivariate statistical process control charts: an overview. Qual Reliab Eng Int 23(5):517–543
  • Bhandary M (1996) Test for generalized variance in signal processing. Stat Probab Lett 27(2):155–162
  • Billingsley P (2008) Probability and measure. Wiley, London
  • Boudt K, Rousseeuw PJ, Vanduffel S, Verdonck T (2017) The minimum regularized covariance determinant estimator. arXiv preprint arXiv:1701.07086
  • Campbell N, Mahon R (1974) A multivariate study of variation in two species of rock crab of the genus Leptograpsus. Aust J Zool 22(3):417–425
  • Christensen W, Rencher A (1997) A comparison of Type I error rates and power levels for seven solutions to the multivariate Behrens–Fisher problem. Commun Stat-Simul Comput 26(4):1251–1273
  • Djauhari MA (2005) Improved monitoring of multivariate process variability. J Qual Technol 37(1):32–39
  • Djauhari MA, Mashuri M, Herwindiati DE (2008) Multivariate process variability monitoring. Commun Stat-Theory Methods 37(11):1742–1754
  • Garcia-Diaz JC (2007) The 'effective variance' control chart for monitoring the dispersion process with missing data. Eur J Ind Eng 1(1):40–55
  • Gossett E (2009) Discrete mathematics with proof. Wiley, London
  • Greselin F, Ingrassia S, Punzo A (2011) Assessing the pattern of covariance matrices via an augmentation multiple testing procedure. Stat Methods Appl 20(2):141–170
  • Greselin F, Punzo A (2013) Closed likelihood ratio testing procedures to assess similarity of covariance matrices. Am Stat 67(3):117–128
  • Gupta AS (1982) Tests for simultaneously determining numbers of clusters and their shape with multivariate data. Stat Probab Lett 1(1):46–50
  • Hallin M, Paindaveine D (2009) Optimal tests for homogeneity of covariance, scale, and shape. J Multivar Anal 100(3):422–444
  • Iliopoulos G, Kourouklis S (1998) On improved interval estimation for the generalized variance. J Stat Plan Inference 66(2):305–320
  • Jacod J, Protter P (2003) Probability essentials. Springer, Berlin
  • Jafari AA (2012) Inferences on the ratio of two generalized variances: independent and correlated cases. Stat Methods Appl 21(3):297–314
  • Jiang D, Jiang T, Yang F (2012) Likelihood ratio tests for covariance matrices of high-dimensional normal distributions. J Stat Plan Inference 142(8):2241–2256
  • Jolicoeur P, Mosimann J (1960) Size and shape variation in the painted turtle. A principal component analysis. Growth 24(4):339–354
  • Korkmaz S, Goksuluk D, Zararsiz G (2014) MVN: an R package for assessing multivariate normality. R J 6(2):151–162
  • Kotz S, Nadarajah S (2004) Multivariate t-distributions and their applications. Cambridge University Press, Cambridge
  • Lawley D (1956) A general method for approximating to the distribution of likelihood ratio criteria. Biometrika 43(3/4):295–303
  • Lee MH, Khoo MB (2017) Combined synthetic and |S| chart for monitoring process dispersion. Commun Stat-Simul Comput, 1–14
  • Mardia K (1970) Measures of multivariate skewness and kurtosis with applications. Biometrika 57(3):519–530
  • McNicholas PD (2016) Mixture model-based classification. CRC Press, Amsterdam
  • Muirhead R (2009) Aspects of multivariate statistical theory. Wiley, London
  • Najarzadeh D (2017) Testing equality of generalized variances of k multivariate normal populations. Commun Stat-Simul Comput, 1–10
  • Noor AM, Djauhari MA (2014) Monitoring the variability of beltline moulding process using Wilks's statistic. Malays J Fundam Appl Sci 6(2):116–120
  • Pena D, Linde A (2007) Dimensionless measures of variability and dependence for multivariate continuous distributions. Commun Stat-Theory Methods 36(10):1845–1854
  • Pena D, Rodriguez J (2003) Descriptive measures of multivariate scatter and linear dependence. J Multivar Anal 85(2):361–374
  • Petersen HC (2000) On statistical methods for comparison of intrasample morphometric variability: Zalavár revisited. Am J Phys Anthr 113(1):79–84
  • Pukelsheim F (2006) Optimal design of experiments. SIAM, Philadelphia
  • Punzo A, Browne RP, McNicholas PD (2016) Hypothesis testing for mixture model selection. J Stat Comput Simul 86(14):2797–2818
  • R Core Team (2015) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna
  • Rencher A (2002) Methods of multivariate analysis. Wiley, London
  • Ripley B, Venables B, Bates DM, Hornik K, Gebhardt A, Firth D (2013) Package 'MASS'. CRAN
  • Sarkar SK (1989) On improving the shortest length confidence interval for the generalized variance. J Multivar Anal 31(1):136–147
  • Sarkar SK (1991) Stein-type improvements of confidence intervals for the generalized variance. Ann Inst Stat Math 43(2):369–375
  • Satterthwaite FE (1946) An approximate distribution of estimates of variance components. Biom Bull 2(6):110–114
  • SenGupta A (1987a) Generalizations of Bartlett's and Hartley's tests of homogeneity using overall variability. Commun Stat-Theory Methods 16(4):987–996
  • SenGupta A (1987b) Tests for standardized generalized variances of multivariate normal populations of possibly different dimensions. J Multivar Anal 23(2):209–219
  • SenGupta A, Ng HKT (2011) Nonparametric test for the homogeneity of the overall variability. J Appl Stat 38(9):1751–1768
  • Tallis G, Light R (1968) The use of fractional moments for estimating the parameters of a mixed exponential distribution. Technometrics 10(1):161–175
  • Welch BL (1947) The generalization of 'Student's' problem when several different population variances are involved. Biometrika 34(1/2):28–35
  • Wilks SS (1932) Certain generalizations in the analysis of variance. Biometrika 24(3–4):471–494
  • Yeh A, Lin D, Zhou H, Venkataramani C (2003) A multivariate exponentially weighted moving average control chart for monitoring process variability. J Appl Stat 30(5):507–536
  • Yeh AB, Lin DK, McGrath RN (2006) Multivariate control charts for monitoring covariance matrix: a review. Qual Technol Quant Manag 3(4):415–436


Acknowledgements

We would like to express our sincere thanks to the editor and the two anonymous reviewers for their comments, which greatly improved this article. The corresponding author would like to thank the “Iranian National Elites Foundation” for its financial support of this research.

Author information

Corresponding author

Correspondence to Dariush Najarzadeh.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Here we present the proofs of Lemmas 2.2 and 3.2 as well as Theorems 3.1 and 3.3.

Proof of Lemma 2.2

Let \( {\varvec{W}}_{1}^*, {\varvec{W}}_{2}^*,\ldots ,{\varvec{W}}_{k}^* \) be the values of \( {\varvec{W}}_{1}, {\varvec{W}}_{2},\ldots ,{\varvec{W}}_{k} \) corresponding to the transformed observations. It is easily shown that \( {\varvec{W}}_{i}^*= {\varvec{\Psi }}_i{\varvec{W}}_i {\varvec{\Psi }}_i^\prime \), and since \({\varvec{\Psi }}_i^\prime {\varvec{\Psi }}_i={\varvec{I}}_{p_i}\), we have \({\root p_i \of {{\left| {{{\varvec{W}}_{i}^*}} \right| }}}={\root p_i \of {{\left| {\varvec{\Psi }}_i{\varvec{W}}_i {\varvec{\Psi }}_i^\prime \right| }}}={\root p_i \of {{\left| {{{\varvec{W}}_{i} }} \right| }}}\). So, by (3), \(T_{LRT}\left( {{\varvec{x}}^{*}_1},{{\varvec{x}}^{*}_2},\ldots ,{{\varvec{x}}^{*}_k} \right) = T_{LRT}\left( {{\varvec{x}}_1},{{\varvec{x}}_2},\ldots ,{{\varvec{x}}_k} \right) \). The proof is complete. \(\square \)
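
The invariance just established is easy to check numerically. The R sketch below is our own verification aid, under the assumption that \( {\varvec{W}}_i \) is the centered sum-of-squares-and-cross-products (SSCP) matrix of the i-th sample: rotating the observations by an orthogonal matrix leaves the \( p_i \)-th root of \( |{\varvec{W}}_i| \) unchanged.

set.seed(2)
p <- 4; n <- 30
x <- matrix(rnorm(n * p), ncol = p)
W <- crossprod(scale(x, scale = FALSE))    # centered SSCP matrix of the sample
Psi <- qr.Q(qr(matrix(rnorm(p * p), p)))   # random orthogonal matrix, Psi' Psi = I
Wstar <- Psi %*% W %*% t(Psi)              # SSCP matrix of the rotated observations
c(det(W)^(1 / p), det(Wstar)^(1 / p))      # identical up to rounding error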

Proof of Lemma 3.2

For any \( |b|<\infty \), it can be shown (Jiang et al. 2012, Lemma 2.1) that

$$\begin{aligned} \frac{{\Gamma (x + b)}}{{\Gamma (x)}} = {x^b}{e^{\frac{{b(b - 1)}}{{2x}} + O({x^{ - 2}})}} \end{aligned}$$
(19)

as \( x \longrightarrow \infty \). Hence,

$$\begin{aligned} \frac{{\Gamma \left( {\frac{{{n_j-t}}}{2} + h} \right) }}{{\Gamma \left( {\frac{{{n_j-t}}}{2} } \right) }} = {\left( {\frac{{{n_j-t}}}{2}} \right) ^h} e^{\frac{h(h-1)}{n_j-t}}e^{ O(n_j^{-2})} , \end{aligned}$$

as \( n_j \longrightarrow \infty , \ j=1,2,\ldots , k\). Consequently,

$$\begin{aligned} \Gamma _j\left( h \right) = \prod \limits _{t = 1}^{{p_j}} {\frac{{\Gamma \left( {\frac{{{n_j} - t}}{2} + h} \right) }}{{\Gamma \left( {\frac{{{n_j} - t}}{2}} \right) }}} = \prod \limits _{t = 1}^{{p_j}} {{\left( {\frac{{{n_j-t}}}{2}} \right) ^h} e^{\frac{h(h-1)}{n_j-t}} } e^{ O(n_j^{-2})}, \end{aligned}$$

as \( n_j \longrightarrow \infty , \ j=1,2,\ldots , k\). So, by replacing \( \Gamma _j\left( h \right) \) by \( \prod \limits _{t = 1}^{{p_j}} {{\left( {\frac{{{n_j-t}}}{2}} \right) ^h} e^{\frac{h(h-1)}{n_j-t}} } e^{ O(n_j^{-2})} \) in (24), we obtain

$$\begin{aligned} E\left[ {{{\left| {{{\varvec{W}}_j}} \right| }^h}} \right] = {\left( {2\root {p_j} \of {{|{{\varvec{\Sigma }}_j}|}}} \right) ^{h{p_j}}} \prod \limits _{t = 1}^{{p_j}} {{\left( {\frac{{{n_j-t}}}{2}} \right) ^h} e^{\frac{h(h-1)}{n_j-t}} } e^{ O(n_j^{-2})}, \end{aligned}$$

as \( n_j \longrightarrow \infty , \ j=1,2,\ldots , k\). This implies (17). The proof is complete. \(\square \)
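
Approximation (19), on which this proof rests, can be inspected directly with lgamma. The sketch below is ours; it compares the exact value of \( \ln \Gamma (x+b) - \ln \Gamma (x) \) with the logarithm of the right-hand side of (19) without the \( O(x^{-2}) \) term.

gamma_ratio <- function(x, b) {
  exact  <- lgamma(x + b) - lgamma(x)           # log of Gamma(x + b) / Gamma(x)
  approx <- b * log(x) + b * (b - 1) / (2 * x)  # log of (19), dropping O(x^-2)
  c(exact = exact, approx = approx, gap = exact - approx)
}
gamma_ratio(x = 50, b = 0.3)  # the gap is of order x^-2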

Proof of Theorem 3.1

Using the multinomial theorem (Gossett 2009, Theorem 5.15), for any given nonnegative integer r,

$$\begin{aligned} Z_i^r&\!=\! {\left( {\sum \limits _{j = 1,j \ne i }^k {\frac{{{p_j}}}{{{p_i}}}\frac{{\root {p_j} \of {{\left| {{{\varvec{W}}_j}} \right| }}}}{{\root {p_i} \of {{\left| {{{\varvec{W}}_i}} \right| }}}}} }\right) ^r} \!\!=\! p_i^{-r } {\left| {{{\varvec{W}}_{i}}} \right| ^{ - \frac{r}{p_i}}}\sum \limits _{\mathop {{r_1},\ldots ,{r_{i - 1}},{r_{i + 1}},\ldots ,{r_k}\ge 0}\limits _{\sum \limits _{j \ne i} {{r_j}} = r} } {\left[ r!{ \prod \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {\frac{{{p_j^{r_j}{\left| {{{{\varvec{W}}_j}}{}} \right| }^{\frac{{{r_j}}}{p_j}}}}}{{{r_j}!}}} } \right] }. \end{aligned}$$
(20)

Also, for \( i \ne j \in \left\{ {1,2, \ldots ,k} \right\} \), using again the multinomial theorem,

$$\begin{aligned} Z_i^mZ_j^n&= {\left( {\sum \limits _{t = 1,t \ne i }^k {\frac{{{p_t}}}{{{p_i}}}\frac{{\root {p_t} \of {{\left| {{{\varvec{W}}_t}} \right| }}}}{{\root {p_i} \of {{\left| {{{\varvec{W}}_i}} \right| }}}}} }\right) ^m} {\left( {\sum \limits _{u = 1,u \ne j }^k {\frac{{{p_u}}}{{{p_j}}}\frac{{\root {p_u} \of {{\left| {{{\varvec{W}}_u}} \right| }}}}{{\root {p_j} \of {{\left| {{{\varvec{W}}_j}} \right| }}}}} }\right) ^n} \nonumber \\&= \dfrac{ p_i^{-m } {\left| {{{\varvec{W}}_{i}}} \right| ^{ - \frac{m}{p_i}}}}{p_j^{n } {\left| {{{\varvec{W}}_{j}}} \right| ^{ \frac{n}{p_j}}}} \sum \limits _{\begin{array}{c} \scriptstyle {r_t} \ge 0,t \ne i\\ \scriptstyle \sum \limits _{t \ne i} {{r_t}} = m \end{array}} {\sum \limits _{\begin{array}{c} \scriptstyle {s_u} \ge 0,u\ne j\\ \scriptstyle \sum \limits _{u \ne j} {{s_u}} = n \end{array}} { \left[ \frac{{m!}}{{\prod \limits _{\begin{array}{c} \scriptstyle t = 1\\ \scriptstyle t \ne i \end{array}}^k {{r_t}!} }} { \prod \limits _{\begin{array}{c} \scriptstyle t = 1\\ \scriptstyle t \ne i \end{array}}^k { {{{p_t^{r_t}{\left| {{{{\varvec{W}}_t}}{}} \right| }^{\frac{{{r_t}}}{p_t}}}}}} } \right] \left[ \frac{{n!}}{{\prod \limits _{\begin{array}{c} \scriptstyle u = 1\\ \scriptstyle u \ne j \end{array}}^k {{s_u}!} }} { \prod \limits _{\begin{array}{c} \scriptstyle u = 1\\ \scriptstyle u \ne j \end{array}}^k { {{{p_u^{s_u}{\left| {{{{\varvec{W}}_u}}{}} \right| }^{\frac{{{s_u}}}{p_u}}}}}} } \right] } } \nonumber \\&= \sum \limits _{\begin{array}{c} \scriptstyle {r_t} \ge 0,t \ne i\\ \scriptstyle \sum \limits _{t \ne i} {{r_t}} = m \end{array}} {\sum \limits _{\begin{array}{c} \scriptstyle {s_u} \ge 0,u\ne j\\ \scriptstyle \sum \limits _{u \ne j} {{s_u}} = n \end{array}} { \left[ \frac{{n!m!}}{{\prod \limits _{\begin{array}{c} \scriptstyle u = 1\\ \scriptstyle u \ne j \end{array}}^k {{s_u}!} }{\prod \limits _{\begin{array}{c} \scriptstyle t = 1\\ \scriptstyle t \ne i \end{array}}^k {{r_t}!} }} \right] } \left[ { \frac{ {{{p_i^{s_i-m}{\left| {{{{\varvec{W}}_i}}{}} \right| }^{\frac{{{s_i-m}}}{p_i}}}}}}{ {{{p_j^{n-r_j}{\left| {{{{\varvec{W}}_j}}{}} \right| }^{\frac{{{n-r_j}}}{p_j}}}}}} { \prod \limits _{\begin{array}{c} \scriptstyle u = 1\\ \scriptstyle u \ne {i,j} \end{array}}^k { {{{ \frac{ {\left| {{{{\varvec{W}}_u}}{}} \right| }^{\frac{{{s_u+r_u}}}{p_u}}}{p_u^{-(s_u+r_u)}}}}}} } } \right] }. \end{aligned}$$
(21)

Now, taking the expectation of (20) and (21), and then using the independence of the \({{\varvec{W}}_{j}}\)'s, immediately yields

$$\begin{aligned} {{\,\mathrm{E}\,}}\left[ {Z_i^r} \right] = p_i^{-r } E\left[ {\left| {{{\varvec{W}}_{i}}} \right| ^{ - \frac{r}{p_i}}}\right] \sum \limits _{\mathop {{r_1},\ldots ,{r_{i - 1}},{r_{i + 1}},\ldots ,{r_k}\ge 0}\limits _{\sum \limits _{j \ne i} {{r_j}} = r} } {\left[ r!{ \prod \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {\frac{{{p_j^{r_j}E[ {\left| {{{{\varvec{W}}_j}}{}} \right| }^{\frac{{{r_j}}}{p_j}} ] }}}{{{r_j}!}}} } \right] } \end{aligned}$$
(22)

and

$$\begin{aligned} E\left[ Z_i^mZ_j^n\right]&= \sum \limits _{\begin{array}{c} \scriptstyle {r_t} \ge 0,t \ne i\\ \scriptstyle \sum \limits _{t \ne i} {{r_t}} = m \end{array}} \sum \limits _{\begin{array}{c} \scriptstyle {s_u} \ge 0,u\ne j\\ \scriptstyle \sum \limits _{u \ne j} {{s_u}} = n \end{array}} { \left[ \frac{{n!m!}}{{\prod \limits _{\begin{array}{c} \scriptstyle u = 1\\ \scriptstyle u \ne j \end{array}}^k {{s_u}!} }{\prod \limits _{\begin{array}{c} \scriptstyle t = 1\\ \scriptstyle t \ne i \end{array}}^k {{r_t}!} }} \right] } \nonumber \\&\quad \left[ { \frac{ {{{p_i^{s_i-m} E\left[ {\left| {{{{\varvec{W}}_i}}{}} \right| }^{\frac{{{s_i-m}}}{p_i}}\right] }}}}{ {{{p_j^{n-r_j} E^{-1}\left[ {\left| {{{{\varvec{W}}_j}}{}} \right| }^{\frac{{{r_j-n}}}{p_j}}\right] }}}} { \prod \limits _{\begin{array}{c} \scriptstyle u = 1\\ \scriptstyle u \ne {i,j} \end{array}}^k { {{{ \frac{ E\left[ {\left| {{{{\varvec{W}}_u}}{}} \right| }^{\frac{{{s_u+r_u}}}{p_u}}\right] }{p_u^{-(s_u+r_u)}}}}}} } } \right] . \end{aligned}$$
(23)

But, for any \( j = 1,2,\ldots , k\), it is well known that \(\frac{ \left| {{{\varvec{W}}_{j}}} \right| }{\left| {{{\varvec{\Sigma }}_j}} \right| }\) is distributed as a product of independent chi-square random variables (see Muirhead 2009), that is,

$$\begin{aligned} \dfrac{ \left| {{{\varvec{W}}_{j}}} \right| }{\left| {{{\varvec{\Sigma }}_j}} \right| }\sim \chi _{{n_j} - 1}^2\chi _{{n_j} - 2}^2\ldots \chi ^2_{{n_j} - p_{j}}, \ j = 1,2,{\ldots }, k, \end{aligned}$$

where \( \chi _{{n_j} - r}^2 \), \(r = 1,2,\ldots , p_{j} \), are independent chi-square random variables with \( {{n_j} - r} \) degrees of freedom. So, for any real number h with \( {n_j} > \max (p_j ,p_j-2h)\), \( j=1,2,\ldots ,k \), we have:

$$\begin{aligned} {{\,\mathrm{E}\,}}\left[ {{{\left| {{{\varvec{W}}_{j}}} \right| }^h}} \right] = { \left( 2\root p_j \of {|{\varvec{\Sigma }}_j|}\right) }^{h{p_{j}}} \prod \limits _{t= 1}^{p_{j}} {\frac{{ {\Gamma \left[ {\frac{{{n_j} -t}}{2} + h} \right] } }}{{ {\Gamma \left[ {\frac{{{n_j} - t}}{2} } \right] } }}}= { \left( 2\root p_j \of {|{\varvec{\Sigma }}_j|}\right) }^{h{p_{j}}} \Gamma _j(h). \end{aligned}$$
(24)

Replacing the expectations in equations (22) and (23) by their respective values calculated from (24) under \( {H_0}\) in (1) yields the desired equations (15) and (16). The proof is complete. \(\square \)
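
As a numerical sanity check on the moment formula (24), the following R sketch is our own and assumes \( {\varvec{\Sigma }}_j = {\varvec{I}} \) and \( {\varvec{W}}_j \sim W_{p_j}(n_j - 1, {\varvec{\Sigma }}_j) \), consistent with the chi-square factorization above; it compares a Monte Carlo estimate of \( E[|{\varvec{W}}_j|^h] \) with the value given by (24).

set.seed(3)
pj <- 3; nj <- 20; h <- 0.5
W <- rWishart(20000, df = nj - 1, Sigma = diag(pj))  # 20000 draws of W_j
emp <- mean(apply(W, 3, det)^h)                      # Monte Carlo E[|W_j|^h]
thy <- 2^(h * pj) *
  prod(exp(lgamma((nj - (1:pj)) / 2 + h) - lgamma((nj - (1:pj)) / 2)))
c(empirical = emp, theoretical = thy)                # the two should agree closely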

Proof of Theorem 3.3

From (20) with \( r=1 \), we have

$$\begin{aligned} E{[Z_i]}&=\! p_i^{ - 1}{\Gamma _i}\left( { - \frac{1}{{{p_i}}}} \right) \sum \limits _{\begin{array}{c} \scriptstyle {r_j} \ge 0,j \ne i\\ \scriptstyle \sum \limits _{j \ne i} {{r_j} = 1} \end{array}} {\prod \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {p_j^{{r_j}}{\Gamma _j}\left( {\frac{{{r_j}}}{{{p_j}}}} \right) } } \!=\! p_i^{ - 1}{\Gamma _i}\left( { - \frac{1}{{{p_i}}}} \right) \sum \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {{p_j}{\Gamma _j}\left( {\frac{1}{{{p_j}}}} \right) } \nonumber \\&= \sum \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {\frac{{{p_j}}}{{{p_i}}}{\Gamma _j}\left( {\frac{1}{{{p_j}}}} \right) } {\Gamma _i}\left( { - \frac{1}{{{p_i}}}} \right) . \end{aligned}$$
(25)

Since \( p_j \longrightarrow \infty \) and \( n_{j}>p_{j} +1\), we have \( n_j \longrightarrow \infty \), \( j=1,2,\ldots , k\), and by (19),

$$\begin{aligned} \ln \frac{{\Gamma \left( {\frac{{{n_j}}}{2} + h - \frac{t}{2}} \right) }}{{\Gamma \left( {\frac{{{n_j}}}{2} - \frac{t}{2}} \right) }}&= \ln \frac{{\Gamma \left( {\frac{{{n_j}}}{2} + h - \frac{t}{2}} \right) }}{{\Gamma \left( {\frac{{{n_j}}}{2}} \right) }} - \ln \frac{{\Gamma \left( {\frac{{{n_j}}}{2} - \frac{t}{2}} \right) }}{{\Gamma \left( {\frac{{{n_j}}}{2}} \right) }} \nonumber \\&= \left( {h - \frac{t}{2}} \right) \ln \left( {\frac{{{n_j}}}{2}} \right) + \frac{t}{2}\ln \left( {\frac{{{n_j}}}{2}} \right) + o(1) \nonumber \\&= \ln {\left( {\frac{{{n_j}}}{2}} \right) ^h} + o(1). \end{aligned}$$

So, we have

$$\begin{aligned} {\Gamma _j}\left( {\frac{1}{{{p_j}}}} \right)= & {} \prod \limits _{t = 1}^{{p_j}} {{{\left( {\frac{{{n_j}}}{2}} \right) }^{\frac{1}{{{p_j}}}} }{e^{o(1)}}} = {\left( {\frac{{{n_j}}}{2}} \right) }{e^{o(1)}} \ \ \text{ and } \\ {\Gamma _i}\left( -{\frac{1}{{{p_i}}}} \right)= & {} \prod \limits _{t = 1}^{{p_i}} {{{\left( {\frac{{{n_i}}}{2}} \right) }^{-\frac{1}{{{p_i}}}} }{e^{o(1)}}} = {\left( {\frac{{{n_i}}}{2}} \right) ^{-1}}{e^{o(1)}}. \end{aligned}$$

Consequently, for \( i \ne j \in \left\{ {1,2, \ldots ,k} \right\} \),

$$\begin{aligned} {\Gamma _j}\left( {\frac{1}{{{p_j}}}} \right) {\Gamma _i}\left( { - \frac{1}{{{p_i}}}} \right) = \left( \frac{{{n_j}}}{{{n_i}}} \right) {e^{o(1)}}, \end{aligned}$$

as \( p_j \longrightarrow \infty \), \( j=1,2,\ldots ,k \). Now, by the assumptions of the theorem,

$$\begin{aligned} {\Gamma _j}\left( {\frac{1}{{{p_j}}}} \right) {\Gamma _i}\left( { - \frac{1}{{{p_i}}}} \right) \longrightarrow 1, \end{aligned}$$

as \( p_j \longrightarrow \infty \), \( j=1,2,\ldots ,k \). By taking the limit on both sides of (25) as \( p_j \longrightarrow \infty \), \( j=1,2,\ldots ,k \) and using the assumptions of the theorem, we get

$$\begin{aligned} \mathop {\lim }\limits _{{p_1},{p_2}, \ldots ,{p_k} \rightarrow \infty } E{[Z_i]} = \mathop {\lim }\limits _{{p_1},{p_2}, \ldots ,{p_k} \rightarrow \infty } \sum \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {\frac{{{p_j}}}{{{p_i}}}{\Gamma _j}\left( {\frac{1}{{{p_j}}}} \right) } {\Gamma _i}\left( { - \frac{1}{{{p_i}}}} \right) = k - 1.\nonumber \\ \end{aligned}$$
(26)

Setting \( r=2 \) in (20), we obtain

$$\begin{aligned} E[Z_i^2]&= p_i^{ - 2}{\Gamma _i}\left( { - \frac{2}{{{p_i}}}} \right) \sum \limits _{\begin{array}{c} \scriptstyle {r_j} \ge 0,j \ne i\\ \scriptstyle \sum \limits _{j \ne i} {{r_j} = 2} \end{array}} {\left[ {2!\prod \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {\frac{{p_j^{{r_j}}}}{{{r_j}!}}{\Gamma _j}\left( {\frac{{{r_j}}}{{{p_j}}}} \right) } } \right] } \nonumber \\&= p_i^{ - 2}{\Gamma _i}\left( { - \frac{2}{{{p_i}}}} \right) \left[ {\sum \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {p_j^2{\Gamma _j}\left( {\frac{2}{{{p_j}}}} \right) } + 2\sum \limits _{\begin{array}{c} \scriptstyle r = 1\\ \scriptstyle r \ne i \end{array}}^{k - 1} {\sum \limits _{\begin{array}{c} \scriptstyle s = r + 1\\ \scriptstyle s \ne i \end{array}}^k {{p_r}{\Gamma _r}\left( {\frac{1}{{{p_r}}}} \right) {p_s}{\Gamma _s}\left( {\frac{1}{{{p_s}}}} \right) } } } \right] \nonumber \\&= \sum \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {{{\left( {\frac{{{p_j}}}{{{p_i}}}} \right) }^2}{\Gamma _j}\left( {\frac{2}{{{p_j}}}} \right) } {\Gamma _i}\left( { - \frac{2}{{{p_i}}}} \right) \\&\quad + 2\sum \limits _{\begin{array}{c} \scriptstyle r = 1\\ \scriptstyle r \ne i \end{array}}^{k - 1} {\sum \limits _{\begin{array}{c} \scriptstyle s = r + 1\\ \scriptstyle s \ne i \end{array}}^k {\left( {\frac{{{p_r}}}{{{p_i}}}} \right) \left( {\frac{{{p_s}}}{{{p_i}}}} \right) {\Gamma _r}\left( {\frac{1}{{{p_r}}}} \right) {\Gamma _s}\left( {\frac{1}{{{p_s}}}} \right) {\Gamma _i}\left( { - \frac{2}{{{p_i}}}} \right) } }. \end{aligned}$$

Arguing as in the proof of (26), it can be shown that

$$\begin{aligned} {{\Gamma _j}\left( {\frac{2}{{{p_j}}}} \right) {\Gamma _i}\left( { - \frac{2}{{{p_i}}}} \right) \rightarrow 1}\ \text{ and } \ {{\Gamma _r}\left( {\frac{1}{{{p_r}}}} \right) {\Gamma _s}\left( {\frac{1}{{{p_s}}}} \right) {\Gamma _i}\left( { - \frac{2}{{{p_i}}}} \right) \rightarrow 1}, \end{aligned}$$

as \( p_j \longrightarrow \infty \), \( j=1,2,\ldots ,k \). Therefore,

$$\begin{aligned}&\mathop {\lim }\limits _{{p_1},{p_2}, \ldots ,{p_k} \rightarrow \infty } E[Z_i^2] \nonumber \\&\quad = \mathop {\lim }\limits _{{p_1},{p_2}, \ldots ,{p_k} \rightarrow \infty } \sum \limits _{\begin{array}{c} \scriptstyle j = 1\\ \scriptstyle j \ne i \end{array}}^k {{{\left( {\frac{{{p_j}}}{{{p_i}}}} \right) }^2}{\Gamma _j}\left( {\frac{2}{{{p_j}}}} \right) } {\Gamma _i}\left( { - \frac{2}{{{p_i}}}} \right) \nonumber \\&\qquad + \mathop {\lim }\limits _{{p_1},{p_2}, \ldots ,{p_k} \rightarrow \infty } 2\sum \limits _{\begin{array}{c} \scriptstyle r = 1\\ \scriptstyle r \ne i \end{array}}^{k - 1} {\sum \limits _{\begin{array}{c} \scriptstyle s = r + 1\\ \scriptstyle s \ne i \end{array}}^k {\left( {\frac{{{p_r}}}{{{p_i}}}} \right) \left( {\frac{{{p_s}}}{{{p_i}}}} \right) {\Gamma _r}\left( {\frac{1}{{{p_r}}}} \right) {\Gamma _s}\left( {\frac{1}{{{p_s}}}} \right) {\Gamma _i}\left( { - \frac{2}{{{p_i}}}} \right) } } \nonumber \\&\quad = (k - 1) + 2\frac{{(k - 2)(k - 1)}}{2} = {(k - 1)^2}. \end{aligned}$$
(27)

Hence, by (26) and (27),

$$\begin{aligned} \mathop {\lim }\limits _{{p_1},{p_2}, \ldots ,{p_k} \rightarrow \infty } Var({Z_i}) = \mathop {\lim }\limits _{{p_1},{p_2}, \ldots ,{p_k} \rightarrow \infty } \left( {E[Z_i^2] - {E^2}[{Z_i}]} \right) = 0. \end{aligned}$$
(28)

So, \(E{[Z_i]}\longrightarrow k-1\) and \( Var({Z_i}) \longrightarrow 0\), as \( p_j \longrightarrow \infty \), \( j=1,2,\ldots ,k \). In other words, \( Z_i \) converges in probability to \( k-1 \), for \(i=1,2,\ldots ,k \) (Billingsley 2008). The proof is complete. \(\square \)
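
The concentration of \( Z_i \) around \( k-1 \) can be observed in simulation. The sketch below is our own: it takes \( {\varvec{\Sigma }}_j = {\varvec{I}} \) so that \( H_0 \) holds, and equal dimensions and sample sizes across groups so that the limiting assumptions of the theorem hold trivially.

set.seed(4)
k <- 3; pj <- rep(60, k); nj <- rep(80, k)  # equal p_j and n_j across groups
zi <- replicate(500, {
  # p_j-th roots of |W_j|, computed via log-determinants for numerical stability
  rootdet <- sapply(1:k, function(j) {
    W <- rWishart(1, df = nj[j] - 1, Sigma = diag(pj[j]))[, , 1]
    exp(as.numeric(determinant(W, logarithm = TRUE)$modulus) / pj[j])
  })
  i <- 1
  sum((pj[-i] / pj[i]) * rootdet[-i] / rootdet[i])  # Z_i, as in (20) with r = 1
})
c(mean = mean(zi), variance = var(zi), target = k - 1)  # mean near k - 1, small variance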

About this article

Cite this article

Najarzadeh, D. Testing equality of standardized generalized variances of k multivariate normal populations with arbitrary dimensions. Stat Methods Appl 28, 593–623 (2019). https://doi.org/10.1007/s10260-019-00456-y
