A Jackknife Empirical Likelihood Approach for Testing the Homogeneity of K Variances


Abstract

A nonparametric test for the equality of K variances is proposed by developing a jackknife empirical likelihood ratio. The test statistic is shown to have a standard limiting chi-squared distribution with \(K-1\) degrees of freedom, which is used to determine the type I error rate and the power of the test. Simulation studies show that the proposed method is competitive with existing methods, Levene’s test and the Fligner–Killeen test, in terms of power and robustness. The proposed method is illustrated with an application to a real data set.
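
As a point of reference for the comparison described above, the two benchmark procedures, Levene’s test and the Fligner–Killeen test, are available in SciPy. The snippet below is only an illustrative sketch of that benchmark side; the group sizes, scale parameters, and seed are placeholders, not the settings used in the paper’s simulation studies.

    import numpy as np
    from scipy import stats

    # Illustrative data: K = 3 groups, the third with a larger scale.
    rng = np.random.default_rng(12345)
    samples = [rng.normal(loc=0.0, scale=s, size=n)
               for s, n in [(1.0, 30), (1.0, 40), (1.5, 50)]]

    # Levene-type test; center='median' is the robust Brown-Forsythe variant,
    # center='mean' gives Levene's original statistic.
    lev_stat, lev_p = stats.levene(*samples, center='median')

    # Fligner-Killeen rank-based test of homogeneity of variances.
    fk_stat, fk_p = stats.fligner(*samples)

    print(f"Levene (median-centered): stat={lev_stat:.3f}, p={lev_p:.3f}")
    print(f"Fligner-Killeen:          stat={fk_stat:.3f}, p={fk_p:.3f}")

Both tests reject for large values of the statistic; the jackknife empirical likelihood test proposed in the paper is calibrated by its \(\chi ^2_{K-1}\) limiting distribution.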


References

  • Bartlett MS (1937) Properties of sufficiency and statistical tests. Proc R Stat Soc Ser A 160:268–282

  • Boos D, Brownie C (1989) Bootstrap methods for testing homogeneity of variances. Technometrics 31(1):69–82

  • Box GEP, Anderson SL (1955) Permutation theory in the derivation of robust criteria and the study of departures from assumption (with discussion). J Roy Statist Soc Ser B 17:1–34

  • Brown B, Forsythe A (1974) Robust test for equality of variances. J Am Stat Assoc 69:364–367

  • Chen J, Variyath AM, Abraham B (2008) Adjusted empirical likelihood and its properties. J Comput Graph Statist 17(2):426–443

  • Chen BB, Pan GM, Yang Q, Zhou W (2015) Large dimensional empirical likelihood. Statist Sinica 25:1659–1677

  • Chen YJ, Ning W, Gupta AK (2015) Jackknife empirical likelihood for testing the equality of two variances. J Appl Stat 42:144–160

  • Cheng CH, Liu Y, Liu Z, Zhou W (2018) Balanced augmented jackknife empirical likelihood for two sample \(U\)-statistics. Sci China Math 61:1129–1138

  • Cochran WG, Cox GM (1957) Experimental designs. John Wiley and Sons, New York

  • Conover WJ, Johnson ME, Johnson MM (1981) A comparative study of tests for homogeneity of variances, with applications to the outer continental shelf bidding data. Technometrics 23:315–361

  • Emerson S, Owen A (2009) Calibration of the empirical likelihood method for a vector mean. Electron J Statist 3:1161–1192

  • Fligner MA, Killeen TJ (1976) Distribution-free two sample tests for scale. J Amer Statist Assoc 71:210–213

  • Hoeffding W (1948) A class of statistics with asymptotically normal distribution. Ann Math Statist 19:293–325

  • Jing B, Yuan J, Zhou W (2009) Jackknife empirical likelihood. J Amer Statist Assoc 104:1224–1232

  • Levene H (1960) Robust tests for equality of variances. In: Olkin I (ed) Contributions to probability and statistics. Stanford University Press, Palo Alto, CA, pp 287–292

  • Lim T, Loh W (1996) A comparison of tests for equality of variances. Comput Stat Data Anal 22:287–301

  • Loh W-Y (1987) Some modifications of Levene’s test of variance homogeneity. J Statist Comput Simulation 28:213–226

  • Miller RG (1968) Jackknifing variances. Ann Math Statist 39:567–582

  • Owen A (1988) Empirical likelihood ratio confidence intervals for single functional. Biometrika 75:237–249

  • Owen A (1990) Empirical likelihood ratio confidence regions. Ann Statist 18:90–120

  • Qin GS, Jing BY (2001) Empirical likelihood for censored linear regression. Scand J Statist 28:661–673

  • Qin J, Lawless J (1994) Empirical likelihood and general estimating functions. Ann Statist 22:300–325

  • Sang Y, Dang X, Zhao Y (2019) A jackknife empirical likelihood approach for \(K\)-sample tests. Canad J Statist. https://doi.org/10.1002/cjs.11611

  • Shi X (1984) The approximate independence of jackknife pseudo-values and the bootstrap Methods. J Wuhan Inst Hydra-Electric Eng 2:83–90

  • Shoemaker LH (2003) Fixing the \(F\) test for equal variances. Amer Statist 57:105–114

  • Varadhan R, Borchers HW (2018) dfoptim: derivative-free optimization. R package version 2018.2-1. https://CRAN.R-project.org/package=dfoptim

  • Wang QH, Jing BY (1999) Empirical likelihood for partial linear models with fixed designs. Statist Probab Lett 41:425–433

  • Wood ATA, Do KA, Broom NM (1996) Sequential linearization of empirical likelihood constraints with application to \(U\)-statistics. J Comput Graph Statist 5:365–385

  • Yitnosumarto S, O'Neil ME (1986) On Levene’s test of variance homogeneity. Austral J Statist 28:230–241

Author information

Corresponding author

Correspondence to Yongli Sang.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Appendix

Define \({\varvec{\lambda }}=(\lambda _{1}, \ldots , \lambda _{K})\), \(n=n_1+n_2+\cdots +n_K\),

$$\begin{aligned}&W_{kn}(\theta , {\varvec{\lambda }})=\dfrac{1}{n} \sum _{l=1}^{n_k} \frac{{\hat{V}}^{k}_l-\theta }{1+\lambda _{k} \left( {\hat{V}}^{k}_l-\theta \right) }, \quad k=1,...,K,\\&W_{{(K+1)n}}(\theta , {\varvec{\lambda }})=\dfrac{1}{n}\sum _{k=1}^K\lambda _{k}\sum _{l=1}^{n_k} \frac{-1}{1+\lambda _{k} \left( {\hat{V}}^{k}_l-\theta \right) }. \end{aligned}$$
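
For orientation, the equations \(W_{kn}(\theta , {\varvec{\lambda }})=0\), \(k=1,\ldots ,K+1\), used below are, up to the factor \(1/n\), the score equations of the standard profile jackknife empirical likelihood of Jing et al. (2009) written in the present notation (a restatement, not an additional assumption):

$$\begin{aligned} \ell (\theta , {\varvec{\lambda }})=\sum _{k=1}^K \sum _{l=1}^{n_k} \log \left\{ 1+\lambda _{k} \left( {\hat{V}}^{k}_l-\theta \right) \right\} , \qquad \dfrac{\partial \ell }{\partial \lambda _{k}}=n\, W_{kn}, \quad \dfrac{\partial \ell }{\partial \theta }=n\, W_{(K+1)n}. \end{aligned}$$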

Lemma 6.1

(Hoeffding (1948)) Under condition C1,

$$\begin{aligned} \dfrac{\sqrt{n_k}(U_{n_k}-\theta _0)}{2\sigma _{gk}} {\mathop {\rightarrow }\limits ^{d}} N(0, 1) \ \text { as} \ n_k \rightarrow \infty , \end{aligned}$$

Lemma 6.2

Let \(S_{k}=\dfrac{1}{n_k} \displaystyle \sum _{l=1}^{n_k} \left( {\hat{V}}^{k}_l- \theta _0\right) ^2\), \(k=1, \ldots , K\). Under the conditions of Lemma 6.1,

$$\begin{aligned} S_{k}=4\sigma ^2_{g_k}+o_p(1), \end{aligned}$$

as \(n_k \rightarrow \infty \), \(k=1, \ldots , K\).

Lemma 6.3

(Cheng et al. 2018) Under conditions C1 and C2 and \(H_0\), with probability tending to one as \(\min \{n_1, ..., n_K\} \rightarrow \infty \), there exists a root \({\tilde{\theta }}\) of

$$\begin{aligned}&W_{kn}(\theta , {\varvec{\lambda }})=0, \ k=1,..., K+1, \end{aligned}$$

such that \(|{\tilde{\theta }}-\theta _0|< \delta ,\) where \(\delta =n^{-1/3}\).

Let \(\tilde{{\varvec{\eta }}}=({\tilde{\theta }}, \tilde{{\varvec{\lambda }}})^T\) be the solution to the above equations, and \({\varvec{\eta }}_0=(\theta _0, 0, \ldots , 0)^T\). By expanding \(W_{kn}(\tilde{{\varvec{\eta }}})\) at \({\varvec{\eta }}_0\), we have, for \(k=1,\ldots ,K+1\),

$$\begin{aligned} 0&=W_{kn}({\varvec{\eta }}_0)+ \dfrac{\partial W_{kn}}{\partial \theta }({\varvec{\eta }}_0)({\tilde{\theta }}-\theta _0)\\&\quad +\dfrac{\partial W_{kn}}{\partial \lambda _1}({\varvec{\eta }}_0) {\tilde{\lambda }}_{1}+\cdots +\dfrac{\partial W_{kn}}{\partial \lambda _K}({\varvec{\eta }}_0) {\tilde{\lambda }}_{K}+R_{kn}, \end{aligned}$$

where \(R_{kn}=\frac{1}{2} (\tilde{{\varvec{\eta }}}-{\varvec{\eta }}_0)^T \dfrac{\partial ^2 W_{kn}({\varvec{\eta }}^{*})}{\partial {\varvec{\eta }}\partial {\varvec{\eta }}^T} (\tilde{{\varvec{\eta }}}-{\varvec{\eta }}_0)=o_p(n^{-1/2}),\) and \({\varvec{\eta }}^{*}\) lies between \({\varvec{\eta }}_0\) and \( \tilde{{\varvec{\eta }}}\).

Lemma 6.4

Under \(H_0\), \(\text{ Cov }(U_{n_k}, U_{n_l})=0\), \(1\le k \ne l \le K\).

Proof of Theorem 2.1

$$\begin{aligned} \begin{pmatrix} W_{1n}({\varvec{\eta }}_0) \\ W_{2n}({\varvec{\eta }}_0) \\ \vdots \\ W_{Kn}({\varvec{\eta }}_0)\\ 0\\ \end{pmatrix} = {\varvec{\mathcal {B}}} \begin{pmatrix} {\tilde{\lambda }}_{1}\\ {\tilde{\lambda }}_{2} \\ \vdots \\ {\tilde{\lambda }}_{K}\\ {\tilde{\theta }}-\theta _0 \end{pmatrix}+o_{p}(n^{-1/2}), \end{aligned}$$

where

\(\sigma ^2_k=4\sigma ^2_{g_k}, \ k=1,\ldots ,K.\)

It is easy to see that \({\varvec{\mathcal {B}}}\) is nonsingular under Conditions C1 and C2. Therefore,

$$\begin{aligned} \begin{pmatrix} {\tilde{\lambda }}_{1}\\ {\tilde{\lambda }}_{2} \\ \vdots \\ {\tilde{\lambda }}_{K}\\ {\tilde{\theta }}-\theta _0 \end{pmatrix}={\varvec{\mathcal {B}}}^{-1}\begin{pmatrix} W_{1n}({\varvec{\eta }}_0) \\ \vdots \\ W_{Kn}({\varvec{\eta }}_0)\\ 0\\ \end{pmatrix}+o_{p}(n^{-1/2}). \end{aligned}$$

Under \(H_0\), \(\sigma ^2_1=...=\sigma ^2_K= \sigma ^2\).

$$\begin{aligned}&{\tilde{\theta }}-\theta _0=\sum ^K_{k=1} W_{kn}({\varvec{\eta }}_0)+o_p(n^{-1/2}). \end{aligned}$$
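
To make this expansion concrete (assuming, as in the jackknife empirical likelihood setup of Jing et al. (2009), that each \(U_{n_k}\) equals the average of its jackknife pseudo-values \({\hat{V}}^{k}_l\)), note that \(W_{kn}({\varvec{\eta }}_0)=\frac{n_k}{n}(U_{n_k}-\theta _0)\), so \({\tilde{\theta }}\) is asymptotically the sample-size-weighted mean of the K U-statistics:

$$\begin{aligned} {\tilde{\theta }}=\sum _{k=1}^K \dfrac{n_k}{n}\, U_{n_k}+o_p(n^{-1/2}). \end{aligned}$$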

By Jing et al. (2009), we have

$$\begin{aligned}&{\tilde{\lambda }}_{k}=\dfrac{U_{n_k}-{\tilde{\theta }}}{{\tilde{S}}_{k}}+o_{p}(n^{-1/2}), \quad k=1,...,K, \end{aligned}$$

where \({\tilde{S}}_{k}=\dfrac{1}{n_k}\sum \nolimits ^{n_k}_{l=1}({\hat{V}}^{k}_{l}-{\tilde{\theta }})^2\). It is easy to check that \({\tilde{S}}_{k}=\sigma ^2_k+o_p(1)\), \(k=1,\ldots ,K\). By the proof of Theorem 1 in Jing et al. (2009),

$$\begin{aligned} -2\log R=\left[ \sum _{k=1}^K \dfrac{n_{k}(U_{n_k}-{\tilde{\theta }})^2}{\sigma ^2_{k}} \right] (1+o_{p}(1)). \end{aligned}$$
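
Heuristically (this remark only motivates the limit; the formal argument proceeds through the quadratic form (8) below), combining the last display with the weighted-mean form of \({\tilde{\theta }}\) shows that, under \(H_0\) with common variance \(\sigma ^2\), the leading term of \(-2\log R\) is the classical between-sample statistic

$$\begin{aligned} \sum _{k=1}^K \dfrac{n_{k}\left( U_{n_k}-{\bar{U}}\right) ^2}{\sigma ^2}, \qquad {\bar{U}}=\sum _{k=1}^K\dfrac{n_k}{n}\, U_{n_k}, \end{aligned}$$

which converges in distribution to \(\chi ^2_{K-1}\) by Lemmas 6.1 and 6.4.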

With simple algebra, we have

$$\begin{aligned}&\sum _{k=1}^K \dfrac{n_{k}(U_{n_k}-{\tilde{\theta }})^2}{\sigma ^2_{k}}\nonumber \\&\quad =\left( \sqrt{n}W_{1n}({\varvec{\eta }}_0), \ldots , \sqrt{n}W_{Kn}({\varvec{\eta }}_0)\right) {\varvec{A}}^T {\varvec{\mathcal {W}}} {\varvec{A}} \left( \sqrt{n}W_{1n}({\varvec{\eta }}_0), \ldots , \sqrt{n}W_{Kn}({\varvec{\eta }}_0)\right) ^T +o_p(1), \end{aligned}$$
(8)

where

and

Furthermore, by Shi (1984), we have a central limit theorem for the \(W_{kn}\)'s at \({\varvec{\eta }}_0\), because each \(W_{kn}({\varvec{\eta }}_0)\) is an average of asymptotically independent pseudo-values. That is,

$$\begin{aligned} \sqrt{n}\begin{pmatrix} W_{1n}({\varvec{\eta }}_0) \\ W_{2n}({\varvec{\eta }}_0) \\ \vdots \\ W_{Kn}({\varvec{\eta }}_0)\\ \end{pmatrix} \overset{D}{\rightarrow } N({\varvec{0}}, {\varvec{\Sigma }}), \end{aligned}$$

where

Therefore, under \(H_0\), \(-2\log R\) converges to \(\sum _{i=1}^{K} \omega _i \chi ^2_i\) in distribution, where \(\chi ^2_i, i=1,..., K\) are K independent chi-square random variables with one degree of freedom, and \(\omega _i, i=1,.., K\) are eigenvalues of \({\varvec{\Sigma }}_0^{1/2} {\varvec{A}}^T {\varvec{\mathcal {W}_0}} {\varvec{A}} {\varvec{\Sigma }}_0^{1/2}\), where

and

We can show that \({\varvec{A}}^{T} {\varvec{\mathcal {W}_0}} {\varvec{A}}={\varvec{A}}\). Hence \({\varvec{\Sigma }}_0^{1/2} {\varvec{A}}^T {\varvec{\mathcal {W}_0}} {\varvec{A}} {\varvec{\Sigma }}_0^{1/2}={\varvec{\Sigma }}_0^{1/2} {\varvec{A}} {\varvec{\Sigma }}_0^{1/2}\) since \({\varvec{A}}\) is symmetric, and this matrix has the same eigenvalues as \({\varvec{\Sigma }}_0 {\varvec{A}}\). By direct calculation, the eigenvalues of \({\varvec{\Sigma }}_0 {\varvec{A}}\) are \(\{0, 1, 1, \ldots , 1\}\), with trace\(({\varvec{\Sigma }}_0 {\varvec{A}})=K-1\). Hence \(\sum _{i=1}^K \omega _i \chi ^2_i\) is a sum of \(K-1\) independent \(\chi ^2_1\) random variables, that is, \(-2\log R\) converges in distribution to \(\chi ^2_{K-1}\), which completes the proof. \(\square \)

Proof of Theorem 2.2

Under \(H_a\), at least one of \({\mathbb {E}}U_{n_k}\), \(k=1,\ldots ,K\), differs from the others. Let \({\mathbb {E}}U_{n_k}=\theta _k\), \(k=1,\ldots ,K\). From (8),

$$\begin{aligned} -2\log R= & {} \sum ^K_{k=1}\dfrac{n_k(U_{n_k}-{\tilde{\theta }})^2}{{\tilde{S}}_k}+o_p(1)\\= & {} \sum ^K_{k=1}\left[ \dfrac{\sqrt{n_k}(U_{n_k}-\theta _k)}{\sqrt{{\tilde{S}}_k}}+\dfrac{\sqrt{n_k}(\theta _k-{\tilde{\theta }})}{\sqrt{{\tilde{S}}_k}} \right] ^2+o_p(1), \end{aligned}$$

which is divergent since at least one of \(\dfrac{n_k(\theta _k-{\tilde{\theta }})^2}{{\tilde{S}}_k}\), \(k=1,\ldots ,K\), diverges to \(\infty \). \(\square \)

Cite this article

Sang, Y. A Jackknife Empirical Likelihood Approach for Testing the Homogeneity of K Variances. Metrika 84, 1025–1048 (2021). https://doi.org/10.1007/s00184-021-00813-6
