Robust tests for the equality of two normal means based on the density power divergence


Abstract

Statistical techniques are used in all branches of science to determine the feasibility of quantitative hypotheses. One of the most basic applications of statistical techniques in comparative analysis is the test of equality of two population means, generally performed under the assumption of normality. In medical studies, for example, we often need to compare the effects of two different drugs, treatments or preconditions on the resulting outcome. The most commonly used test in this connection is the two-sample \(t\) test for the equality of means, performed under the assumption of equality of variances. It is a very useful tool, which is widely used by practitioners of all disciplines and has many optimality properties under the model. However, the test has one major drawback: it is highly sensitive to deviations from the ideal conditions, and may perform miserably under model misspecification and in the presence of outliers. In this paper we present a robust test for the two-sample hypothesis based on the density power divergence measure (Basu et al. 1998), and show that it can be a great alternative to the ordinary two-sample \(t\) test. The asymptotic properties of the proposed tests are rigorously established, and their performances are explored through simulations and real data analysis.
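
To make the construction concrete, the following is a minimal sketch (not the authors' code) of the minimum density power divergence estimator of Basu et al. (1998) for a single normal sample, the building block of the proposed test. It minimizes the standard empirical DPD objective \(\int f_{\theta }^{1+\beta }\,dx-\left( 1+\tfrac{1}{\beta }\right) \tfrac{1}{n}\sum _{i}f_{\theta }^{\beta }(X_{i})\) using the closed-form normal integral; the function names and the choice of optimizer are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def dpd_objective(params, x, beta):
        """Empirical DPD objective for the N(mu, sigma^2) model, beta > 0."""
        mu, log_sigma = params
        sigma = np.exp(log_sigma)  # parametrize by log(sigma) to keep sigma > 0
        # closed form of the integral of f^(1+beta) for a normal density
        int_term = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)
        mean_term = np.mean(norm.pdf(x, mu, sigma) ** beta)
        return int_term - (1 + 1 / beta) * mean_term

    def mdpde_normal(x, beta=0.2):
        """Minimum DPD estimate of (mu, sigma) from one sample."""
        start = np.array([np.median(x), np.log(np.std(x))])
        res = minimize(dpd_objective, start, args=(x, beta), method="Nelder-Mead")
        return res.x[0], np.exp(res.x[1])

The downweighting through \(f^{\beta }\) is what buys the robustness: an outlying observation contributes almost nothing to the objective, in contrast to the likelihood (the \(\beta \rightarrow 0\) limit). Note that the test in the paper estimates a common \(\sigma \) from both samples jointly; this single-sample version only illustrates the estimation principle.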



References

  • Basu A, Harris IR, Hjort NL, Jones MC (1998) Robust and efficient estimation by minimising a density power divergence. Biometrika 85(3):549–559

  • Basu A, Mandal A, Martin N, Pardo L (2013) Testing statistical hypotheses based on the density power divergence. Ann Inst Stat Math 65(2):319–348

  • Basu A, Mandal A, Martin N, Pardo L (2014) Density power divergence tests for composite null hypotheses. arXiv:1403.0330

  • Dik JJ, de Gunst MCM (1985) The distribution of general quadratic forms in normal variables. Stat Neerl 39(1):14–26

  • Doksum KA, Sievers GL (1976) Plotting with confidence: graphical comparisons of two populations. Biometrika 63(3):421–434

  • Fraser DAS (1957) Most powerful rank-type tests. Ann Math Stat 28:1040–1043

  • Fujisawa H, Eguchi S (2006) Robust estimation in the normal mixture model. J Stat Plan Inference 136(11):3989–4011

  • Fujisawa H, Eguchi S (2008) Robust parameter estimation with a small bias against heavy contamination. J Multivar Anal 99(9):2053–2081

  • Ghosh A, Basu A (2013) Robust estimation for independent non-homogeneous observations using density power divergence with applications to linear regression. Electron J Stat 7:2420–2456

  • Jones MC, Hjort NL, Harris IR, Basu A (2001) A comparison of related density-based minimum divergence estimators. Biometrika 88(3):865–873

  • Koopmans LH (1987) Introduction to contemporary statistical methods. Duxbury Press, Boston

  • Stigler SM (1977) Do robust estimators work with real data? Ann Stat 5(6):1055–1098

  • Tiku ML, Tan WY, Balakrishnan N (1986) Robust inference. Statistics: Textbooks and Monographs, vol 71. Marcel Dekker, New York

  • Voinov V, Balakrishnan N, Nikulin MS (2013) Chi-squared goodness of fit tests with applications. Academic Press, Waltham

  • Yuen KK, Dixon WJ (1973) The approximate behaviour and performance of the two-sample trimmed t. Biometrika 60(2):369–374

Acknowledgments

This work was partially supported by Grants MTM-2012-33740 and ECO-2011-25706. The authors gratefully acknowledge the suggestions of two anonymous referees which led to an improved version of the paper.

Author information

Corresponding author

Correspondence to A. Mandal.

Appendix

Proof of Theorem 1

As \(\widehat{\mu }_{i\beta }\) is the solution of the estimating equation \(_{1}h_{n_{i},\beta }^{\prime }\left( \mu _{i},\sigma \right) =0\), we get from Eq. (7)

$$\begin{aligned} \sqrt{n_{i}}(\widehat{\mu }_{i\beta }-\mu _{i0})=\sqrt{n_{i}} \varvec{J}_{11,\beta }^{-1}( \sigma _{0})\,_{1}h_{n_{i},\beta }^{\prime }\left( \mu _{i0},\sigma _0\right) +o_{p}(1),\quad i=1,2. \end{aligned}$$

Hence, using (9) we get

$$\begin{aligned} \sqrt{n_{i}}(\widehat{\mu }_{i\beta }-\mu _{i0})\underset{n_{i}\rightarrow \infty }{\overset{\mathcal {L}}{\longrightarrow }}{\mathcal {N}}\left( 0,\varvec{K}_{11,\beta }(\sigma _0)\varvec{J}_{11,\beta }^{-2}( \sigma _0) \right) ,\quad i=1,2, \end{aligned}$$
(27)

where

$$\begin{aligned} \varvec{K}_{11,\beta }(\sigma _0)\varvec{J}_{11,\beta }^{-2}(\sigma _{0})=\sigma _0^{2}\left( \beta +1\right) ^{3}\left( 2\beta +1\right) ^{- \frac{3}{2}}. \end{aligned}$$
(28)
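
For completeness, (28) can be checked directly from the normal-model forms of the matrices involved (a direct computation; the cross term \(\int u_{\mu }f^{1+\beta }\) vanishes by the symmetry of the normal density). Writing \(u_{\mu }(x)=(x-\mu _{i0})/\sigma _0^{2}\) for the location score,

$$\begin{aligned} J_{11,\beta }(\sigma _0)&=\int u_{\mu }^{2}(x)f_{\mu _{i0},\sigma _0}^{1+\beta }(x)\,dx=\sigma _0^{-2}\left( 2\pi \sigma _0^{2}\right) ^{-\beta /2}\left( 1+\beta \right) ^{-3/2},\\ K_{11,\beta }(\sigma _0)&=\int u_{\mu }^{2}(x)f_{\mu _{i0},\sigma _0}^{1+2\beta }(x)\,dx=\sigma _0^{-2}\left( 2\pi \sigma _0^{2}\right) ^{-\beta }\left( 1+2\beta \right) ^{-3/2}, \end{aligned}$$

and the product \(K_{11,\beta }(\sigma _0)J_{11,\beta }^{-2}(\sigma _0)\) reduces to the right-hand side of (28), the normalizing constants \((2\pi \sigma _0^{2})^{\pm \beta }\) cancelling.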

It is clear that \(\widehat{\mu }_{1\beta }\) and \(\widehat{\mu }_{2\beta }\) are based on two independent sets of observations; hence \(Cov(\widehat{\mu }_{1\beta },\widehat{\mu }_{2\beta })=0\). As \(_{2}h_{n_1,n_2,\beta }^{\prime }(\widehat{\varvec{\eta }}_\beta )=0\), taking a Taylor series expansion around \(\varvec{\eta }_0\) we get

$$\begin{aligned} _{2}h_{n_1,n_2,\beta }^{\prime }(\widehat{\varvec{\eta }}_\beta ) =&_{2}h_{n_1,n_2,\beta }^{\prime }( \varvec{\eta }_0) + \left. \frac{\partial }{\partial \mu _1}\,_{2}h_{n_1,n_2,\beta }^{\prime }( \varvec{\eta })\right| _{\varvec{\eta }=\varvec{\eta }_0} (\widehat{\mu }_{1\beta }-\mu _{10}) \nonumber \\&+ \left. \frac{\partial }{\partial \mu _2}\,_{2} h_{n_1,n_2,\beta }^{\prime }( \varvec{\eta }) \right| _{\varvec{\eta }=\varvec{\eta }_0} (\widehat{ \mu }_{2\beta }-\mu _{20}) \nonumber \\&+ \left. \frac{\partial }{\partial \sigma }\,_{2}h_{n_1,n_2,\beta }^{\prime }( \varvec{\eta })\right| _{\varvec{\eta }=\varvec{\eta }_0} \left( \widehat{\sigma }_\beta -\sigma _0\right) +o_{p}\left( (n_1+n_2)^{-1/2}\right) \nonumber \\ =&\ 0. \end{aligned}$$
(29)

Notice that

$$\begin{aligned}&\lim _{n_1,n_2\rightarrow \infty } \left. \frac{\partial }{\partial \mu _1}\,_{2}h_{n_1,n_2,\beta }^{\prime }( \varvec{\eta }) \right| _{\varvec{\eta }=\varvec{\eta }_0} \nonumber \\&\quad = \lim _{n_1,n_2\rightarrow \infty } \left. \frac{\partial }{ \partial \mu _1}\left( \frac{n_1}{n_1+n_2}\,_{2}h_{n_1,\beta }^{\prime }\left( \mu _{1},\sigma _0\right) +\frac{ n_2}{n_1+n_2}\,_{2}h_{n_2,\beta }^{\prime }\left( \mu _{2},\sigma _{0}\right) \right) \right| _{\varvec{\eta }=\varvec{\eta }_0} \nonumber \\&\quad =\lim _{n_1,n_2\rightarrow \infty } \frac{n_1}{n_1+n_2} \lim _{n_1,n_2\rightarrow \infty } \left. \frac{\partial }{\partial \mu _1}\,_{2}h_{n_1,\beta }^{\prime }\left( \mu _{1},\sigma _0\right) \right| _{\mu _1 = \mu _{10} }\nonumber \\&\quad = w \varvec{J}_{12,\beta }\left( \sigma _0\right) = 0. \end{aligned}$$
(30)

Similarly we get

$$\begin{aligned} \lim _{n_1,n_2\rightarrow \infty } \left. \frac{\partial }{\partial \mu _2}\,_{2}h_{n_1,n_2,\beta }^{\prime }( \varvec{\eta })\right| _{\varvec{\eta }=\varvec{\eta }_0} =0. \end{aligned}$$
(31)

Moreover,

$$\begin{aligned} \lim _{n_1,n_2\rightarrow \infty } \left. \frac{\partial }{\partial \sigma } _{2}h_{n_1,n_2,\beta }^{\prime }\left( \varvec{\eta } \right) \right| _{\varvec{\eta }=\varvec{\eta }_0}&= \lim _{n_1,n_2\rightarrow \infty }\tfrac{ n_1}{n_1+n_2}\,_{22}h_{n_1,\beta }^{\prime \prime }(\mu _{10},\sigma _0)\nonumber \\&+\lim _{n_1,n_2\rightarrow \infty }\tfrac{n_2}{ n_1+n_2}\,_{22}h_{n_2,\beta }^{\prime \prime }\left( \mu _{20},\sigma _0\right) \nonumber \\&= w\varvec{J}_{22,\beta }\left( \sigma _0\right) +(1-w)\varvec{J}_{22,\beta }( \sigma _0)\nonumber \\&= \varvec{J}_{22,\beta } ( \sigma _0). \end{aligned}$$
(32)

Therefore, using Eqs. (30), (31) and (32) we get from Eq. (29)

$$\begin{aligned} \sqrt{n_1+n_2}\left( \widehat{\sigma }_\beta -\sigma _0\right) =-\varvec{J}_{22,\beta }^{-1}\left( \sigma _0\right) \sqrt{n_1+n_2} \,_{2}h_{n_1,n_2,\beta }^{\prime }( \varvec{\eta }_0) +o_{p}(1). \end{aligned}$$
(33)

Applying (9) and (14) we get

$$\begin{aligned}&\lim _{n_1,n_2\rightarrow \infty } \text {E}\left[ \sqrt{n_1+n_2}\,_{2}h_{n_1,n_2,\beta }^{\prime }(\varvec{\eta }_0) \right] \\&\quad = \lim _{n_1,n_2\rightarrow \infty } \frac{\sqrt{n_1+n_2}}{n_1+n_2}\text {E}\left[ n_1\,_{2}h_{n_1,\beta }^{\prime }\left( \mu _{10},\sigma _0\right) +n_2\,_{2}h_{n_2,\beta }^{\prime }\left( \mu _{20},\sigma _{0}\right) \right] \\&\quad = \lim _{n_1,n_2\rightarrow \infty }\sqrt{\frac{n_1}{n_1+n_2}} \lim _{n_1,n_2\rightarrow \infty } \text {E}\left[ \sqrt{n_1}\,_{2}h_{n_1,\beta }^{\prime }(\mu _{10},\sigma _0)\right] \\&\qquad + \lim _{n_1,n_2\rightarrow \infty } \sqrt{\frac{ n_2}{n_1+n_2}} \lim _{n_1,n_2\rightarrow \infty } \text {E}\left[ \sqrt{n_2}\,_{2}h_{n_2,\beta }^{\prime }\left( \mu _{20},\sigma _0\right) \right] \\&\quad = 0. \end{aligned}$$

Similarly we also have

$$\begin{aligned}&\lim _{n_1,n_2\rightarrow \infty } \text {Var}\left[ \sqrt{n_1+n_2}\,_{2}h_{n_1,n_2,\beta }^{\prime }(\varvec{\eta }_0) \right] \\&\quad =\lim _{n_1,n_2\rightarrow \infty } (n_1+n_2)\text {Var}\left[ \frac{1}{n_1+n_2}\left( n_1\,_{2}h_{n_1,\beta }^{\prime }(\mu _{10},\sigma _{0})+n_2\,_{2}h_{n_2,\beta }^{\prime }(\mu _{20},\sigma _{0})\right) \right] \\&\quad = \lim _{n_1,n_2\rightarrow \infty } \frac{n_1}{n_1+n_2} \lim _{n_1,n_2\rightarrow \infty } \text {Var}\left[ \sqrt{n_1}\,_{2}h_{n_1,\beta }^{\prime }(\mu _{10},\sigma _0)\right] \\&\qquad + \lim _{n_1,n_2\rightarrow \infty } \frac{n_2}{ n_1+n_2} \lim _{n_1,n_2\rightarrow \infty } \text {Var}\left[ \sqrt{n_2}\,_{2}h_{n_2,\beta }^{\prime }(\mu _{20},\sigma _0)\right] \\&\quad = w \varvec{K}_{22,\beta }(\sigma _0) + (1-w) \varvec{K}_{22,\beta }(\sigma _0) \\&\quad = \varvec{K}_{22,\beta }(\sigma _0). \end{aligned}$$

Hence,

$$\begin{aligned} \sqrt{n_1+n_2}\,_{2}h_{n_1,n_2,\beta }^{\prime }\left( \varvec{\eta }_0\right) \underset{n_1,n_2\rightarrow \infty }{\overset{\mathcal {L}}{\longrightarrow }}{\mathcal {N}}\left( 0,\varvec{K}_{22,\beta }(\sigma _0)\right) . \end{aligned}$$

Now, from Eq. (33) we get

$$\begin{aligned} \sqrt{n_1+n_2}\left( \widehat{\sigma }_\beta -\sigma _0\right) \underset{n_1,n_2\rightarrow \infty }{\overset{\mathcal {L}}{ \longrightarrow }}{\mathcal {N}}\left( 0,\varvec{K}_{22,\beta }(\sigma _0)\varvec{J}_{22,\beta }^{-2}(\sigma _0)\right) , \end{aligned}$$
(34)

where

$$\begin{aligned} \varvec{K}_{22,\beta }(\sigma _0)\varvec{J}_{22,\beta }^{-2}(\sigma _0)=\sigma _0^{2} \frac{\left( \beta +1\right) ^{5}}{\left( \beta ^{2}+2\right) ^{2}}\left( \frac{4\beta ^{2}+2}{(1+2\beta )^{5/2}}-\frac{\beta ^{2}}{(1+\beta )^{3}} \right) . \end{aligned}$$
(35)
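
The constant in (35) can be verified in the same way as (28). With the scale score \(u_{\sigma }(x)=\{(x-\mu _{i0})^{2}/\sigma _0^{2}-1\}/\sigma _0\), a direct computation under the tilted normal densities \(f^{1+\beta }\) and \(f^{1+2\beta }\) gives

$$\begin{aligned} J_{22,\beta }(\sigma _0)&=\int u_{\sigma }^{2}(x)f_{\mu _{i0},\sigma _0}^{1+\beta }(x)\,dx=\sigma _0^{-2}\left( 2\pi \sigma _0^{2}\right) ^{-\beta /2}\left( \beta ^{2}+2\right) \left( 1+\beta \right) ^{-5/2},\\ \xi _{\sigma }&=\int u_{\sigma }(x)f_{\mu _{i0},\sigma _0}^{1+\beta }(x)\,dx=-\beta \sigma _0^{-1}\left( 2\pi \sigma _0^{2}\right) ^{-\beta /2}\left( 1+\beta \right) ^{-3/2},\\ K_{22,\beta }(\sigma _0)&=\int u_{\sigma }^{2}(x)f_{\mu _{i0},\sigma _0}^{1+2\beta }(x)\,dx-\xi _{\sigma }^{2}=\sigma _0^{-2}\left( 2\pi \sigma _0^{2}\right) ^{-\beta }\left( \frac{4\beta ^{2}+2}{(1+2\beta )^{5/2}}-\frac{\beta ^{2}}{(1+\beta )^{3}}\right) , \end{aligned}$$

and forming \(K_{22,\beta }(\sigma _0)J_{22,\beta }^{-2}(\sigma _0)\) recovers (35).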

As \(\varvec{J}_{12,\beta }(\sigma _0)=\varvec{J}_{21,\beta }(\sigma _0)=0\), it is clear that

$$\begin{aligned} \lim _{n_1,n_2\rightarrow \infty } \left. \frac{\partial ^2 }{\partial \mu _1 \partial \sigma }\,h_{n_1,n_2,\beta }( \varvec{\eta })\right| _{\varvec{\eta } = \varvec{\eta }_0} = \lim _{n_1,n_2\rightarrow \infty } \left. \frac{\partial ^2 }{\partial \mu _2 \partial \sigma }\,h_{n_1,n_2,\beta }( \varvec{\eta }) \right| _{\varvec{\eta } = \varvec{\eta }_0} =0. \end{aligned}$$

Therefore, \(Cov(\widehat{\mu }_{1\beta },\widehat{\sigma }_{\beta })=Cov(\widehat{\mu }_{2\beta },\widehat{\sigma }_{\beta })=0\). Moreover, \(Cov(\widehat{\mu }_{1\beta },\widehat{\mu }_{2\beta })=0\). Combining the results in (27) and (34), we get the variance–covariance matrix of \(\sqrt{\frac{n_1n_2}{n_1+n_2}}(\widehat{\varvec{\eta }}_{\beta }-\varvec{\eta }_{0})\) as follows:

$$\begin{aligned} \varvec{\Sigma }_{w,\beta }(\sigma _0)=\left( \begin{array}{c@{\quad }c@{\quad }c} \left( 1-w\right) \varvec{K}_{11,\beta }(\sigma _0) \varvec{J}_{11,\beta }^{-2}\left( \sigma _0\right) &{} 0 &{} 0 \\ 0 &{} w \varvec{K}_{11,\beta }(\sigma _0) \varvec{J}_{11,\beta }^{-2}(\sigma _0) &{} 0 \\ 0 &{} 0 &{} w\left( 1-w\right) \varvec{K}_{22,\beta }(\sigma _0)\varvec{J}_{22,\beta }^{-2}\left( \sigma _0\right) \end{array} \right) , \end{aligned}$$

where the values of the diagonal elements are given in (28) and (35). Hence, the theorem is proved.\(\square \)

Proof of Theorem 2

A Taylor expansion of \( d_{\gamma }(f_{\widehat{\mu }_{1\beta },\widehat{\sigma }_\beta },f_{ \widehat{\mu }_{2\beta },\widehat{\sigma }_\beta })\) around \(\varvec{\eta }_{0}\) gives

$$\begin{aligned} d_{\gamma }(f_{\widehat{\mu }_{1\beta },\widehat{\sigma }_\beta },f_{ \widehat{\mu }_{2\beta },\widehat{\sigma }_\beta })=d_{\gamma }(f_{\mu _{10},\sigma _0},f_{\mu _{20},\sigma _0})+\varvec{t}_{\gamma }^{T}\left( \varvec{\eta }_{0}\right) (\widehat{\varvec{\eta }}_{\beta }-\varvec{\eta }_{0})+o_{p}\left( \left\| \widehat{ \varvec{\eta }}_{\beta }-\varvec{\eta }_{0}\right\| \right) , \end{aligned}$$

where \(\varvec{t}_{\gamma }\left( \varvec{\eta }_{0}\right) =\frac{ \partial }{\partial \varvec{\eta }}\left. d_{\gamma }(f_{\mu _1,\sigma },f_{\mu _2,\sigma })\right| _{\varvec{\eta }=\varvec{\eta }_{0}}\); the expressions of the components \(t_{\gamma , i}\left( \varvec{\eta }_{0}\right) \), \(i=1,2,3\), are given in (18)–(20). Hence, the result directly follows from Theorem 1. \(\square \)

Proof of Theorem 3

If \(\mu _{10}=\mu _{20}\), it is obvious that \(d_{\gamma }(f_{\mu _{10},\sigma _0},f_{\mu _{20},\sigma _0})=0\), and \(\varvec{t}_{\gamma } ( \varvec{\eta }_{0})=0\). Hence, a second order Taylor expansion of \(d_{\gamma }(f_{\widehat{\mu }_{1\beta },\widehat{\sigma }_\beta },f_{ \widehat{\mu }_{2\beta },\widehat{\sigma }_\beta })\) around \(\varvec{\eta }_{0}\) gives

$$\begin{aligned} 2d_{\gamma }(f_{\widehat{\mu }_{1\beta },\widehat{\sigma }_\beta },f_{\widehat{\mu }_{2\beta },\widehat{\sigma }_\beta })=(\widehat{\varvec{\eta }}_{\beta }-\varvec{\eta }_{0})^{T}\varvec{A}_{\gamma }\left( \sigma _0\right) (\widehat{\varvec{\eta }}_{\beta }-\varvec{\eta }_{0})+o_p\left( \left\| \widehat{\varvec{\eta }}_{\beta }-\varvec{\eta }_{0}\right\| ^{2}\right) , \end{aligned}$$
(36)

where \(\varvec{A}_{\gamma }(\sigma _0)\) is the matrix of second derivatives of \(d_{\gamma }(f_{\mu _1,\sigma },f_{\mu _2,\sigma })\), evaluated at \(\varvec{\eta }=\varvec{\eta }_{0}\) with \(\mu _{10}=\mu _{20}\). It can be shown that

$$\begin{aligned} \varvec{A}_{\gamma }\left( \sigma _0\right) =\ell _{\gamma }(\sigma _0)\left( \begin{array}{ccc} 1 &{}\quad -1 &{}\quad 0 \\ -1 &{}\quad 1 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \end{array} \right) , \end{aligned}$$

where

$$\begin{aligned} \ell _{\gamma }(\sigma _0)=\sigma _0^{-(\gamma +2)}\left( 2\pi \right) ^{- \frac{\gamma }{2}}\left( \gamma +1\right) ^{-\frac{1}{2}}. \end{aligned}$$
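
The form of \(\varvec{A}_{\gamma }(\sigma _0)\) and the constant \(\ell _{\gamma }(\sigma _0)\) can be seen from the closed form of the divergence between two normal densities with a common scale, obtained from the Gaussian product integral (a direct computation):

$$\begin{aligned} d_{\gamma }(f_{\mu _1,\sigma },f_{\mu _2,\sigma })=\left( 2\pi \sigma ^{2}\right) ^{-\frac{\gamma }{2}}\left( \gamma +1\right) ^{-\frac{1}{2}}\frac{\gamma +1}{\gamma }\left[ 1-\exp \left( -\frac{\gamma (\mu _1-\mu _2)^{2}}{2\sigma ^{2}(\gamma +1)}\right) \right] . \end{aligned}$$

Expanding the exponential to second order in \(\mu _1-\mu _2\) gives \(d_{\gamma }\approx \tfrac{1}{2}\ell _{\gamma }(\sigma )(\mu _1-\mu _2)^{2}\), which is exactly the quadratic form in (36) with the matrix \(\varvec{A}_{\gamma }\) displayed above.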

Therefore, Eq. (36) simplifies to

$$\begin{aligned} 2d_{\gamma }(f_{\widehat{\mu }_{1\beta },\widehat{\sigma }_\beta },f_{ \widehat{\mu }_{2\beta },\widehat{\sigma }_\beta })&= \left( \begin{pmatrix} \widehat{\mu }_{1\beta } \\ \widehat{\mu }_{2\beta } \end{pmatrix} - \begin{pmatrix} \mu _{10} \\ \mu _{20} \end{pmatrix} \right) ^{T}\varvec{A}_{\gamma }^{*}\left( \sigma _0\right) \left( \begin{pmatrix} \widehat{\mu }_{1\beta } \\ \widehat{\mu }_{2\beta } \end{pmatrix} - \begin{pmatrix} \mu _{10} \\ \mu _{20} \end{pmatrix} \right) \nonumber \\&\quad +\, o_{p}\left( \left\| \widehat{\varvec{\eta }}_{\beta }- \varvec{\eta }_{0}\right\| ^{2}\right) , \end{aligned}$$

where

$$\begin{aligned} \varvec{A}_{\gamma }^{*}\left( \sigma _0\right) =\ell _{\gamma }(\sigma _0)\left( \begin{array}{cc} 1 &{}\quad -1 \\ -1 &{}\quad 1 \end{array} \right) . \end{aligned}$$

From Theorem 1 we know that

$$\begin{aligned} \sqrt{\frac{n_1n_2}{n_1+n_2}}\left( \begin{pmatrix} \widehat{\mu }_{1\beta } \\ \widehat{\mu }_{2\beta } \end{pmatrix} - \begin{pmatrix} \mu _{10} \\ \mu _{20} \end{pmatrix} \right) \underset{n_1,n_2\rightarrow \infty }{\overset{\mathcal {L}}{\longrightarrow }}{\mathcal {N}} \left( \varvec{0}_{2},\varvec{\Sigma }_{w,\beta }^{*}(\sigma _{0})\right) \!, \end{aligned}$$

where

$$\begin{aligned} \varvec{\Sigma }_{w,\beta }^{*}(\sigma _0)=\varvec{K}_{11,\beta }(\sigma _0)\varvec{J}_{11,\beta }^{-2}\left( \sigma _0\right) \left( \begin{array}{cc} 1-w &{}\quad 0 \\ 0 &{}\quad w \end{array} \right) \!. \end{aligned}$$

Therefore, \(\frac{2 n_1n_2}{n_1+n_2} d_{\gamma }(f_{\widehat{\mu }_{1\beta },\widehat{\sigma }_{\beta }},f_{\widehat{\mu }_{2\beta },\widehat{\sigma }_\beta })\) has the same asymptotic distribution (see Dik and de Gunst 1985) as the random variable

$$\begin{aligned} \sum \limits _{i=1}^{2}\lambda _{i,\beta ,\gamma }(\sigma _0)Z_{i}^{2}, \end{aligned}$$

where \(Z_{1}\) and \(Z_{2}\) are independent standard normal variables, and

$$\begin{aligned} \lambda _{1,\beta ,\gamma }(\sigma _0)=0\quad \text {and}\quad \lambda _{2,\beta ,\gamma }(\sigma _0)=\varvec{K}_{11,\beta }(\sigma _0)\varvec{J}_{11,\beta }^{-2}\left( \sigma _0\right) \ell _{\gamma }(\sigma _0)=\lambda _{\beta ,\gamma }(\sigma _0) \end{aligned}$$

are the eigenvalues of the matrix \(\varvec{\Sigma }_{w,\beta }^{*}(\sigma _0)\varvec{A}_{\gamma }^{*}\left( \sigma _0\right) \). Hence,

$$\begin{aligned} \frac{2n_1n_2}{n_1+n_2}\frac{d_{\gamma }(f_{\widehat{\mu }_{1\beta },\widehat{\sigma }_\beta },f_{\widehat{\mu }_{2\beta },\widehat{\sigma }_\beta })}{\lambda _{\beta ,\gamma }\,(\sigma _{0} )} \underset{ n_1,n_2\rightarrow \infty }{\overset{\mathcal {L}}{\longrightarrow }}\chi ^{2}(1). \end{aligned}$$

Finally, since \(\widehat{\sigma }_\beta \) is a consistent estimator of \(\sigma _0\), replacing \(\lambda _{\beta ,\gamma }(\sigma _0)\) by \(\lambda _{\beta ,\gamma }(\widehat{\sigma }_\beta )\) and applying Slutsky’s theorem, we obtain the desired result. \(\square \)
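
Putting Theorems 1–3 to work, the following is a minimal sketch of the resulting test (again not the authors' code; the joint estimation of \((\mu _1,\mu _2,\sigma )\) and the function names are illustrative assumptions, while the closed forms for \(d_{\gamma }\), (28) and \(\ell _{\gamma }\) are taken from above):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm, chi2

    def pooled_dpd_objective(params, x1, x2, beta):
        """Sample-size weighted DPD objective with a common sigma."""
        mu1, mu2, log_sigma = params
        sigma = np.exp(log_sigma)
        n1, n2 = len(x1), len(x2)
        c = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)
        h1 = c - (1 + 1 / beta) * np.mean(norm.pdf(x1, mu1, sigma) ** beta)
        h2 = c - (1 + 1 / beta) * np.mean(norm.pdf(x2, mu2, sigma) ** beta)
        return (n1 * h1 + n2 * h2) / (n1 + n2)

    def dpd_two_sample_test(x1, x2, beta=0.2, gamma=0.2):
        """Robust test of mu1 = mu2; returns (statistic, p-value)."""
        n1, n2 = len(x1), len(x2)
        start = [np.median(x1), np.median(x2), np.log(np.std(np.r_[x1, x2]))]
        res = minimize(pooled_dpd_objective, start, args=(x1, x2, beta),
                       method="Nelder-Mead")
        mu1, mu2, sigma = res.x[0], res.x[1], np.exp(res.x[2])
        # closed-form d_gamma between N(mu1, sigma^2) and N(mu2, sigma^2)
        c = (2 * np.pi * sigma**2) ** (-gamma / 2) / np.sqrt(1 + gamma)
        d = c * (1 + 1 / gamma) * (
            1 - np.exp(-gamma * (mu1 - mu2) ** 2 / (2 * sigma**2 * (1 + gamma))))
        # lambda_{beta,gamma}(sigma_hat): the product of (28) and ell_gamma
        kj = sigma**2 * (1 + beta) ** 3 * (2 * beta + 1) ** (-1.5)
        ell = sigma ** (-(gamma + 2)) * (2 * np.pi) ** (-gamma / 2) / np.sqrt(gamma + 1)
        stat = (2 * n1 * n2 / (n1 + n2)) * d / (kj * ell)
        return stat, chi2.sf(stat, df=1)

The statistic is referred to the \(\chi ^{2}(1)\) distribution exactly as in the display above; in the limit \(\beta ,\gamma \rightarrow 0\) the construction reduces to a likelihood-based statistic, the non-robust analogue of the (squared) pooled two-sample \(t\) test.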


Cite this article

Basu, A., Mandal, A., Martin, N. et al. Robust tests for the equality of two normal means based on the density power divergence. Metrika 78, 611–634 (2015). https://doi.org/10.1007/s00184-014-0518-4

