Testing in nonparametric ANCOVA model based on ridit reliability functional

Annals of the Institute of Statistical Mathematics

Abstract

In the spirit of Bross (Biometrics 14:18–38, 1958), this paper considers ridit reliability functionals to develop test procedures for the equality of \(K(>2)\) treatment effects in a nonparametric analysis of covariance (ANCOVA) model with d covariates, based on two different methods. The procedures are asymptotically distribution free and do not assume that the distribution functions (d.f.'s) of the response variable and the associated covariates are continuous. By means of a simulation study, the proposed methods are compared, in terms of type I error rate and power, with the ANCOVA methods of Tsangari and Akritas (J Multivar Anal 88:298–319, 2004) and Bathke and Brunner (Recent advances and trends in nonparametric statistics, Elsevier, Amsterdam, 2003).


References

  • Akritas, M. G., Keilegom, I. V. (2001). Non-parametric estimation of the residual distribution. Scandinavian Journal of Statistics, 28(3), 549–567.

  • Akritas, M. G., Arnold, S. F., Brunner, E. (1997). Nonparametric hypotheses and rank statistics for unbalanced factorial designs. Journal of the American Statistical Association, 92(437), 258–265.

  • Akritas, M. G., Arnold, S. F., Du, Y. (2000). Nonparametric models and methods for nonlinear analysis of covariance. Biometrika, 87(3), 507–526.

  • Bandyopadhyay, U., Chatterjee, D. (2015). Nonparametric homogeneity test based on ridit reliability functional. Journal of the Korean Statistical Society, 44(4), 577–591.

  • Bandyopadhyay, U., De, S. (2011). On multi-treatment adaptive allocation design for dichotomous response. Communications in Statistics Theory and Methods, 40(22), 4104–4124.

  • Bathke, A., Brunner, E. (2003). A nonparametric alternative to analysis of covariance. In M. G. Akritas, D. N. Politis (Eds.), Recent advances and trends in nonparametric statistics (pp. 109–120). Amsterdam: Elsevier.

  • Bretz, F., Hothorn, T., Westfall, P. (2010). Multiple comparisons using R. London: Chapman and Hall.

  • Bross, I. D. J. (1958). How to use ridit analysis. Biometrics, 14(1), 18–38.


  • Brunner, E., Munzel, U. (2000). The nonparametric Behrens–Fisher problem: Asymptotic theory and a small-sample approximation. Biometrical Journal, 42(1), 17–25.

  • Brunner, E., Puri, M. L. (2001). Nonparametric methods in factorial designs. Statistical Papers, 42(1), 1–52.

  • Brunner, E., Dette, H., Munk, A. (1997). Box-type approximations in nonparametric factorial designs. Journal of the American Statistical Association, 92(440), 1494–1502.

  • Brunner, E., Konietschke, F., Pauly, M., Puri, M. L. (2017). Rank-based procedures in factorial designs: Hypotheses about non-parametric treatment effects. Journal of the Royal Statistical Society: Series B, 79(5), 1463–1485.

  • Cartwright, H. V., Lindahl, R. L., Bawden, J. W. (1968). Clinical findings on the effectiveness of stannous fluoride and acid phosphate fluoride as caries reducing agents in children. Journal of Dentistry for Children, 35(1), 36–40.

  • Dette, H., Neumeyer, N. (2001). Nonparametric analysis of covariance. The Annals of Statistics, 29(5), 1361–1400.

  • Fischer, D., Oja, H., Schleutker, J., Sen, P. K., Wahlfors, T. (2014). Generalized Mann-Whitney type tests for microarray experiments. Scandinavian Journal of Statistics, 41(3), 672–692.

  • Friedrich, S., Konietschke, F., Pauly, M. (2017). A wild bootstrap approach for nonparametric repeated measurements. Computational Statistics and Data Analysis, 113, 38–52.

  • Gao, X., Alvo, M., Chen, J., Li, G. (2008). Nonparametric multiple comparison procedures for unbalanced one-way factorial designs. Journal of Statistical Planning and Inference, 138(8), 2574–2591.

  • Grigoletto, M., Akritas, M. G. (1999). Analysis of covariance with incomplete data via semiparametric transformations. Biometrics, 55(4), 1177–1187.

  • Hochberg, Y. (1988). A sharper Bonferroni procedure for multiple tests of significance. Biometrika, 75(4), 800–802.


  • Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6(2), 65–70.

  • Lehmann, E. L., Romano, J. P. (2005). Testing statistical hypotheses (3rd ed.). New York: Springer.

  • Konietschke, F., Hothorn, L. A., Brunner, E. (2012). Rank-based multiple test procedures and simultaneous confidence intervals. Electronic Journal of Statistics, 6, 738–759.

  • Munk, A., Neumeyer, N., Scholz, A. (2007). Non-parametric analysis of covariance: The case of inhomogeneous and heteroscedastic noise. Scandinavian Journal of Statistics, 34(3), 511–534.

  • Neve, J. D., Thas, O. (2015). A regression framework for rank tests based on the probabilistic index model. Journal of the American Statistical Association, 110(511), 1276–1283.

  • Genz, A., Bretz, F., Miwa, T., Mi, X., Leisch, F., Scheipl, F., Bornkamp, B., Hothorn, T. (2013). mvtnorm: Multivariate Normal and t Distributions. R package (maintainer: T. Hothorn).

  • Silverman, B. W. (1986). Density estimation for statistics and data analysis. New York: Chapman and Hall/CRC.


  • Simes, R. J. (1986). An improved Bonferroni procedure for multiple tests of significance. Biometrika, 73(3), 655–660.


  • Tamhane, A. C., Dunnett, C. W. (1999). Stepwise multiple test procedures with biometric applications. Journal of Statistical Planning and Inference, 82(1–2), 55–68.

  • Terpstra, J. T., Magel, R. C. (2003). A new nonparametric test for the ordered alternative problem. Journal of Nonparametric Statistics, 15(3), 289–301.

  • Thas, O., Neve, J. D., Clement, L., Ottoy, J. P. (2012). Probabilistic index models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(4), 623–671.

  • Tsangari, H., Akritas, M. G. (2004). Nonparametric ANCOVA with two and three covariates. Journal of Multivariate Analysis, 88(2), 298–319.

  • Wang, L., Akritas, M. G. (2006). Testing for covariate effects in the fully nonparametric analysis of covariance model. Journal of the American Statistical Association, 101(474), 722–736.


Acknowledgements

The authors wish to thank the Editor, the corresponding Associate Editor and the anonymous referees for their helpful comments and suggestions that have improved the paper.

Author information

Corresponding author

Correspondence to Uttam Bandyopadhyay.

Appendix

Proof of Result 1

It is enough to show that, for any \(k, k^{\prime } (\ne k)\), \(\widehat{p}_{k k^{\prime }}\) is a consistent estimator of \(p_{k k^{\prime }}\). To this end, we set

$$\begin{aligned} \widehat{p}_{k k^{\prime }}^{\;(1)} = \frac{1}{n_k n_{k^{{\prime }}}} \sum _{j=1}^{n_k}{\sum _{j^{{\prime }}=1}^{n_{k^{{\prime }}}} {\frac{g({\varvec{X}}_{{\varvec{kj}}}) g({\varvec{X}}_{{\varvec{k}}^{\varvec{{\prime }}} {\varvec{j}}^{\varvec{{\prime }}}})}{g_{k}({\varvec{X}}_{{\varvec{kj}}}) g_{k^{\prime }}({\varvec{X}}_{{\varvec{k}}^{\varvec{{\prime }}} {\varvec{j}}^{\varvec{{\prime }}}})} \ U(Y_{kj},Y_{k^{\prime } j^{\prime }})}} \end{aligned}$$

and

$$\begin{aligned} \widehat{p}_{k k^{\prime }}^{\;(2)} = \frac{1}{n_k n_{k^{{\prime }}}} \sum _{j=1}^{n_k}{\sum _{j^{{\prime }}=1}^{n_{k^{{\prime }}}}{\left( e_{kj} e_{k^{\prime } j^{\prime }} - \frac{g({\varvec{X}}_{{\varvec{kj}}}) g(\varvec{X}_{\varvec{k}^{\varvec{{\prime }}} \varvec{j}^{\varvec{{\prime }}}})}{g_{k}({\varvec{X}}_{{\varvec{kj}}}) g_{k^{\prime }}(\varvec{X}_{\varvec{k}^{\varvec{{\prime }}} \varvec{j}^{\varvec{{\prime }}}})}\right) \ U(Y_{kj},Y_{k^{\prime } j^{\prime }})}}, \end{aligned}$$

where g(.) is the density (or probability mass function) corresponding to G(.). Then, we can rewrite \(\widehat{p}_{k k^{\prime }}\) as

$$\begin{aligned} \widehat{p}_{k k^{\prime }} = \widehat{p}_{k k^{\prime }}^{\;(1)} + \widehat{p}_{k k^{\prime }}^{\;(2)}. \end{aligned}$$
(12)

Now,

$$\begin{aligned} E \left( \widehat{p}_{kk^{\prime }}^{\;(1)}\right)&=\frac{1}{n_kn_{k^{{\prime }}}}\sum _{j=1}^{n_k}{\sum _{j^{{\prime }}=1}^{n_{k^{{\prime }}}} {E \left[ \frac{g({\varvec{X}}_{{\varvec{kj}}})g (\varvec{X}_{\varvec{k}^{\varvec{{\prime }}} \varvec{j}^{\varvec{{\prime }}}})}{g_{k}({\varvec{X}}_{{\varvec{kj}}}) g_{k^{\prime }}(\varvec{X}_{\varvec{k}^{\varvec{{\prime }}} \varvec{j}^{\varvec{{\prime }}}})}p_{k k{^{\prime }}}({\varvec{X}}_{{\varvec{kj}}},\varvec{X}_{\varvec{k}^{\varvec{{\prime }}} \varvec{j}^{\varvec{{\prime }}}})\right] }} \\&= \int {\int {p_{k k{^{\prime }}} ({\varvec{x}}_{\varvec{k}},\varvec{x}_{\varvec{k}^{\varvec{{\prime }}}})}} \ \hbox {d}G({\varvec{x}}_{\varvec{k}}) \ \hbox {d}G(\varvec{x}_{\varvec{k}^{\varvec{{\prime }}}}) \\&= p_{k k{^{\prime }}} \end{aligned}$$

and

$$\begin{aligned} Var\left( \widehat{p}_{kk^{\prime }}^{\;(1)}\right) =&\, \frac{1}{n_{k}^2 n_{k^{\prime }}^2} Var \left[ \sum _{j=1}^{n_k}{\sum _{j^{{\prime }}=1}^{n_{k^{{\prime }}}} {\left\{ \frac{g({\varvec{X}}_{{\varvec{kj}}})g (\varvec{X}_{\varvec{k}^{\varvec{{\prime }}} \varvec{j}^{\varvec{{\prime }}}})}{g_{k}({\varvec{X}}_{{\varvec{kj}}}) g_{k^{\prime }}(\varvec{X}_{\varvec{k}^{\varvec{{\prime }}} \varvec{j}^{\varvec{{\prime }}}})} U(Y_{kj},Y_{k^{\prime } j^{\prime }})\right\} }}\right] \\ =&\, \frac{1}{n_{k}^2 n_{k^{\prime }}^2} \left[ \sum _{j=1}^{n_k}{\sum _{j^{{\prime }}=1}^{n_{k^{{\prime }}}} {Va_p(j, j^{\prime })}} \right. \\&\left. + \sum _{j (\ne j_1)=1}^{n_k}{\sum _{j^{{\prime }}=1}^{n_{k^{{\prime }}}} {C_p(j, j_1, j^{\prime })}} + \sum _{j=1}^{n_k}{\sum _{j^{{\prime }} (\ne j_1^{\prime })=1}^{n_{k^{{\prime }}}} {C_p(j, j^{\prime }, j_1^{\prime })}} \right] . \end{aligned}$$

In the above expression, \(Va_p(.,.)\) and \(C_p(.,.,.)\) denote, respectively, the variance and covariance terms and these terms are bounded under A3. Therefore, \(Var\left( \widehat{p}_{kk^{\prime }}^{\;(1)}\right) \rightarrow 0\) under A2. Thus, we get

$$\begin{aligned} \widehat{p}_{kk^{\prime }}^{\;(1)} \rightarrow p_{k k^{\prime }} \end{aligned}$$
(13)

in probability. Furthermore, the assumptions A1–A4 imply that

$$\begin{aligned} \max _{j} \left| e_{kj} - \frac{g({\varvec{X}}_{{\varvec{kj}}})}{g_{k}({\varvec{X}}_{{\varvec{kj}}})}\right| \rightarrow 0 \end{aligned}$$

almost surely (Tsangari and Akritas 2004), and hence, we get

$$\begin{aligned} 0 \le&\left| \widehat{p}_{k k^{\prime }}^{(2)}\right| \le \max _{j, \ j^{\prime }} \left| e_{kj} e_{k^{\prime } j^{\prime }}\right. \nonumber \\&-\left. \frac{g({\varvec{X}}_{{\varvec{kj}}}) g(\varvec{X}_{\varvec{k}^{\varvec{{\prime }}} \varvec{j}^{\varvec{{\prime }}}})}{g_{k}({\varvec{X}}_{{\varvec{kj}}}) g_{k^{\prime }}(\varvec{X}_{\varvec{k}^{\varvec{{\prime }}} \varvec{j}^{\varvec{{\prime }}}})}\right| \left\{ \frac{1}{n_k n_{k^{\prime }}} \sum _{j}{\sum _{j^{\prime }}{U(Y_{kj},Y_{k^{\prime } j^{\prime }})}}\right\} = o_{p}(1) O_{p}(1). \end{aligned}$$
(14)

Now, combining (13) and (14), the required result follows from (12). \(\square \)
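To illustrate the consistency of \(\widehat{p}_{k k^{\prime }}^{\;(1)}\) numerically, the following sketch computes the weighted double sum for two groups using oracle (known) covariate densities in place of the estimated weights \(e_{kj}\). Everything here is illustrative: the kernel \(U(y,y^{\prime }) = I(y<y^{\prime })+\frac{1}{2}I(y=y^{\prime })\) is assumed to be the usual mid-rank kernel (its definition sits earlier in the paper), \(G\) is taken to be the \(\lambda \)-weighted mixture of the group covariate densities, and the conditional response distribution is made identical across the two groups so that the target \(p_{k k^{\prime }}\) equals \(1/2\).

```python
import numpy as np

rng = np.random.default_rng(0)

def U(y, yp):
    # Assumed mid-rank kernel: I(y < y') + 0.5 I(y = y')
    return (y < yp) + 0.5 * (y == yp)

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

n_k, n_kp = 400, 500
lam = n_k / (n_k + n_kp)

# Group-specific covariates; Y | X is the same in both groups,
# so the true p_{kk'} equals 1/2 here.
X_k, X_kp = rng.normal(0.0, 1.0, n_k), rng.normal(0.5, 1.0, n_kp)
Y_k = X_k + rng.normal(0.0, 1.0, n_k)
Y_kp = X_kp + rng.normal(0.0, 1.0, n_kp)

# Oracle target density g: lambda-weighted mixture of the group densities
def g(x):
    return lam * norm_pdf(x, 0.0, 1.0) + (1 - lam) * norm_pdf(x, 0.5, 1.0)

# Weight matrix g(X_kj) g(X_k'j') / { g_k(X_kj) g_k'(X_k'j') }
w = np.outer(g(X_k) / norm_pdf(X_k, 0.0, 1.0),
             g(X_kp) / norm_pdf(X_kp, 0.5, 1.0))

# (1 / n_k n_k') * double sum of weight * U  =  mean of the elementwise product
p_hat_1 = float(np.mean(w * U(Y_k[:, None], Y_kp[None, :])))  # near 1/2 here
```

With moderate sample sizes the estimate already concentrates near the true value \(1/2\), in line with the variance bound above.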

Proof of Result 2

To find the asymptotic distribution of \(\sqrt{N}(\widehat{\varvec{q}}.-\varvec{q}.)\), we write, for any \(k = 1,2,\ldots ,K\),

$$\begin{aligned} T_k^p&=\,\sqrt{N}(\widehat{p}_k. - p_k.)\nonumber \\&= \frac{\sqrt{N}}{K} \left[ \sum _{{\mathop {k_1 \ne k}\limits ^{k_1=1}}}^{K}{(\widehat{p}_{k k_1} - p_{k k_1})}\right] \nonumber \\&= T_{k1}^p + T_{k2}^p, \end{aligned}$$
(15)

where

$$\begin{aligned} T_{k1}^{p} =&\, \frac{\sqrt{N}}{K} \sum _{{\mathop {k_1 \ne k}\limits ^{k_1=1}}}^{K} \frac{1}{n_k n_{k_1}} \sum _{j=1}^{n_k}{\sum _{j_1=1}^{n_{k_1}}{\left\{ \frac{g({\varvec{X}}_{{\varvec{kj}}}) g(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}_{\varvec{1}}})}{g_{k}({\varvec{X}}_{{\varvec{kj}}}) g_{k_1}(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}_{\varvec{1}}})} \left( U(Y_{kj},Y_{k_1 j_1}) - p_{k k_1}^{0}\right) - \left( p_{k k_1} - p_{k k_1}^{0}\right) \right\} }}, \\ T_{k2}^{p} =&\, \frac{\sqrt{N}}{K} \sum _{{\mathop {k_1 \ne k}\limits ^{k_1=1}}}^{K} \frac{1}{n_k n_{k_1}} \sum _{j=1}^{n_k}{\sum _{j_1=1}^{n_{k_1}}{\left( e_{kj} e_{k_1 j_1} - \frac{g({\varvec{X}}_{{\varvec{kj}}}) g(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}_{\varvec{1}}})}{g_{k}({\varvec{X}}_{{\varvec{kj}}}) g_{k_1}(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}_{\varvec{1}}})}\right) \left( U(Y_{kj},Y_{k_1 j_1}) - p_{k k_1}^{0}\right) }} \end{aligned}$$

and \(p_{k k_1}^{0} = E(U(Y_{kj},Y_{k_1 j_1}))\) for any \((j,j_1)\). Now, since

$$\begin{aligned} 0 \le \left| T_{k2}^p\right| \le&\, \frac{1}{K} \sum _{{\mathop {k_1 \ne k}\limits ^{k_1=1}}}^{K}\max _{j, \ j_1} \left| e_{kj} e_{k_1 j_1} - \frac{g({\varvec{X}}_{{\varvec{kj}}}) g(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}_{\varvec{1}}})}{g_{k}({\varvec{X}}_{{\varvec{kj}}}) g_{k_1}(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}_{\varvec{1}}})}\right| \\&\times \left| \frac{\sqrt{N}}{n_k n_{k_1}} \sum _{j=1}^{n_k}{\sum _{j_1 =1}^{n_{k_1}}{\left( U(Y_{kj},Y_{k_1 j_1}) - p_{k k_1}^{0}\right) }}\right| \\ =&\, o_{p}(1) O_{p}(1), \end{aligned}$$

we get

$$\begin{aligned} T_{k2}^p \rightarrow 0 \end{aligned}$$

in probability. Hence, the asymptotic distribution of \(\sqrt{N}(\widehat{\varvec{q}}.-\varvec{q}.)\) is the same as that of \({\varvec{T}}^{\varvec{p}} = \left\{ T_{k1}^p, k=1,2,\ldots ,K\right\} \). Now, using Hájek's projection theorem in a multivariate setup, we get that the asymptotic distribution of \({\varvec{T}}^{\varvec{p}}\) is the same as that of \(\varvec{Z}^{\varvec{p}} = \left\{ Z_{k}^p, k=1,2,\ldots ,K\right\} \), with

$$\begin{aligned} Z_{k}^{p} =\sum _{k_1=1}^{K}\frac{\sqrt{N}}{n_{k_1}} \sum _{j=1}^{n_{k_1}}\left\{ {\frac{g(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})}{g_{k_1}(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})} Z_{k k_1}^{p}(Y_{k_1 j})} - (p_k.-p_k^{0})\right\} , \end{aligned}$$

where

$$\begin{aligned} Z_{k k}^{p}(Y_{k j})= & {} \frac{1}{K} \sum _{{\mathop {k_1 \ne k}\limits ^{k_1=1}}}^{K} \left( 1-F_{k_1}^{\,0}.(Y_{kj})-p_{k k_1}^{0}\right) \ \ \hbox {and} \\ Z_{k k_1}^{p}(Y_{k_1 j})= & {} \frac{1}{K} \left( F_{k}^{\,0}.(Y_{k_1 j})-p_{k k_1}^{0}\right) \end{aligned}$$

for \(k, k_1 (\ne k) = 1,2,\ldots ,K\).

Next, using the central limit theorem, the asymptotic distribution of \(\varvec{Z}^{\varvec{p}}\) is K-variate normal with mean vector \({\varvec{0}}\) and dispersion matrix \(S_p = ((s_{k k^{\prime }}^p))\), where

$$\begin{aligned} s_{k k}^{p} =&\, \sum _{k_1=1}^{K}\frac{1}{\lambda _{k_1}} \left[ Var_{X_{k_1}}\left\{ \frac{g(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})}{g_{k_1}(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})} E_{Y_{k_1}|X_{k_1}}Z_{k k_1}^{p}\right\} \right. \\&\left. + E_{X_{k_1}}\left\{ \left( \frac{g(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})}{g_{k_1}(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})}\right) ^2 Var_{Y_{k_1}|X_{k_1}}Z_{k k_1}^{p}\right\} \right] , \end{aligned}$$

\(k=1,2,\ldots ,K\) and for \(k_1 \ne k_2 = 1,2,\ldots ,K\), we have

$$\begin{aligned} s_{k_1 k_2}^{p} =&\sum _{k_3=1}^{K}\frac{1}{\lambda _{k_3}} Cov_{X_{k_3}}\\&\times \left\{ \frac{g(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})}{g_{k_3}(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})} E_{Y_{k_3}|X_{k_3}}Z_{k_1 k_3}^{p},\frac{g(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})}{g_{k_3}(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})} E_{Y_{k_3}|X_{k_3}}Z_{k_2 k_3}^{p}\right\} \\&+ E_{X_{k_3}}\left\{ \left( \frac{g(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})}{g_{k_3}(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})}\right) ^2 Cov_{Y_{k_3}|X_{k_3}}\left( Z_{k_1 k_3}^{p},Z_{k_2 k_3}^{p}\right) \right\} , \end{aligned}$$

in which

$$\begin{aligned} p_{k}^{0} = \frac{1}{K} \left[ \frac{1}{2} + \sum _{{\mathop {k_1 \ne k}\limits ^{k_1=1}}}^{K}{p_{k k_1}^{0}}\right] . \end{aligned}$$

Therefore, the asymptotic distribution of \(\sqrt{N}C(\widehat{\varvec{q}}.-\varvec{q}.)\) is \((K-1)\)-variate normal with mean vector \({\varvec{0}}\) and dispersion matrix \(\Sigma _{Cp} = C S_p C^{T}\). Hence, the result follows. \(\square \)
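In practice, Result 2 is applied through a Wald-type quadratic form: with a \((K-1)\times K\) contrast matrix C and consistent estimates of \(\varvec{q}.\) and \(S_p\), the statistic \(N(C\widehat{\varvec{q}}.)^{T}\widehat{\Sigma }_{Cp}^{-1}(C\widehat{\varvec{q}}.)\) is compared with a \(\chi ^2_{K-1}\) distribution under the null. A minimal sketch, where the values of \(\widehat{\varvec{q}}.\), \(\widehat{S}_p\) and N are placeholders rather than quantities from the paper:

```python
import numpy as np

K, N = 3, 600
# Contrast matrix C with rows e_k - e_{k+1}; each row sums to zero
C = np.eye(K)[:-1] - np.eye(K)[1:]

q_hat = np.array([0.52, 0.49, 0.47])  # placeholder estimates of q_k.
S_hat = 0.25 * np.eye(K)              # placeholder estimate of S_p

Sigma_Cp = C @ S_hat @ C.T            # estimate of Sigma_Cp = C S_p C^T
v = C @ q_hat
W = N * v @ np.linalg.solve(Sigma_Cp, v)  # compare with chi^2 on K-1 df
```

Any full-rank contrast matrix whose rows sum to zero yields the same test, since the quadratic form is invariant to the particular choice of C.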

Proof of Result 4

We can prove the result using an approach very similar to that used for Result 2. To find the asymptotic distribution of \(\sqrt{N}(\widehat{\varvec{R}}.-\varvec{R}.)\), we set

$$\begin{aligned} V_{k} = V_{k}(Y_{1j_1},Y_{2j_2},\ldots ,Y_{Kj_K}) \end{aligned}$$

and write

$$\begin{aligned} T_k^R&= \sqrt{N} (\widehat{R}_k. - R_k.)\nonumber \\&= \sqrt{N}\left[ \frac{1}{n_1 n_2 \cdots n_K} \sum _{j_1=1}^{n_1}{\sum _{j_2=1}^{n_2}}{ \cdots \sum _{j_K=1}^{n_K}{e(j_1,j_2,\ldots ,j_K) \ V_{k}}} - R_k.\right] \nonumber \\&= T_{k1}^R + T_{k2}^R, \end{aligned}$$
(16)

where

$$\begin{aligned} T_{k1}^{R}=&\, \frac{\sqrt{N}}{n_1 n_2 \cdots n_K} \sum _{j_1=1}^{n_1}\sum _{j_2=1}^{n_2} \cdots \sum _{j_K=1}^{n_K}\left\{ \prod _{l=1}^{K}{\frac{g(\varvec{X}_{\varvec{l} \varvec{j}_{\varvec{l}}})}{g_l(\varvec{X}_{\varvec{l} \varvec{j}_{\varvec{l}}})}} \left( V_{k} - R_k\right) - \left( R_k. - R_k\right) \right\} , \\ T_{k2}^{R}=&\, \frac{\sqrt{N}}{n_1 n_2 \cdots n_K} \sum _{j_1=1}^{n_1}\sum _{j_2=1}^{n_2} \cdots \sum _{j_K=1}^{n_K}\left( e(j_1,j_2,\ldots ,j_K) - \prod _{l=1}^{K}{\frac{g(\varvec{X}_{\varvec{l} \varvec{j}_{\varvec{l}}})}{g_l(\varvec{X}_{\varvec{l} \varvec{j}_{\varvec{l}}})}}\right) \\&\times \left( V_{k} - R_k\right) \end{aligned}$$

and \(R_k = E(V_k)\) for any \(k=1,2,\ldots ,K\). Now, since

$$\begin{aligned} 0 \le \left| T_{k2}^R\right| \le&\, \max _{j_1 \cdots j_K} \left| e(j_1,j_2,\ldots ,j_K) - \prod _{l=1}^{K}{\frac{g(\varvec{X}_{\varvec{l} \varvec{j}_{\varvec{l}}})}{g_l(\varvec{X}_{\varvec{l} \varvec{j}_{\varvec{l}}})}}\right| \\&\times \left| \frac{\sqrt{N}}{n_1 n_2 \cdots n_K} \sum _{j_1=1}^{n_1}\sum _{j_2=1}^{n_2} \cdots \sum _{j_K=1}^{n_K} \left( V_{k} - R_k\right) \right| \\ =&\, o_{p}(1) O_{p}(1), \end{aligned}$$

we get

$$\begin{aligned} T_{k2}^R \rightarrow 0 \end{aligned}$$

in probability, and hence the asymptotic distribution of \(\sqrt{N}(\widehat{\varvec{R}}.-\varvec{R}.)\) is the same as that of \(\varvec{T}^{\varvec{R}} = \left\{ T_{k1}^R, k=1,2,\ldots ,K\right\} \). Applying Hájek's projection theorem in a multivariate setup, we get that the asymptotic distribution of \(\varvec{T}^{\varvec{R}}\) is the same as that of \(\varvec{Z}^{\varvec{R}} = \left\{ Z_{k}^R, k=1,2,\ldots ,K\right\} \) with

$$\begin{aligned} Z_{k}^{R} =\sum _{k_1=1}^{K}\frac{\sqrt{N}}{n_{k_1}} \sum _{j=1}^{n_{k_1}}\left\{ {\frac{g(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})}{g_{k_1}(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})} Z_{k k_1}^{R}(Y_{k_1 j})} - (R_k.-R_k)\right\} , \end{aligned}$$

where

$$\begin{aligned} Z_{k k}^{R}(Y_{k j}) =&\prod _{{\mathop {l \ne k}\limits ^{l=1}}}^{K}{F_l.^{-}(Y_{kj})} + \frac{1}{2} \sum _{{\mathop {l_1 \ne k}\limits ^{1 \le l_1 \le K}}}{f_{l_1}.(Y_{kj}) \prod _{{\mathop {l \ne l_1, \ k}\limits ^{l=1}}}^{K}{F_l.^{-}(Y_{kj})}}\\&+ \frac{1}{3} \sum _{{\mathop {l_1, \ l_2 \ne k}\limits ^{1 \le l_1 < l_2 \le K}}} f_{l_1}.(Y_{kj}) f_{l_2}.(Y_{kj}) \prod _{{\mathop {l \ne l_1, \ l_2, \ k}\limits ^{l=1}}}^{K}{F_l.^{-}(Y_{kj})} \\&+ \cdots + \frac{1}{K} \prod _{{\mathop {l \ne k}\limits ^{l=1}}}^{K}{f_l.(Y_{kj})} - R_k \end{aligned}$$

for \(k = 1,2,\ldots ,K\) and for \(k_1 \ne k\), we have

$$\begin{aligned} Z_{k k_1}^{R}(Y_{k_1 j}) =&\left\{ 1-F_k.\left( Y_{k_1j}\right) \right\} \prod ^K_{{\mathop {l\ne k,k_1}\limits ^{l=1}}} F_l.\left( Y_{k_1j}\right) + \frac{1}{2}f_k.\left( Y_{k_1 j}\right) \prod ^K_{{\mathop {l\ne k,k_1}\limits ^{l=1}}} F_l.^{-}\left( Y_{k_1 j}\right) \\&+\frac{1}{3}\sum _{{\mathop {k_2\ne k,k_1}\limits ^{1\le k_2\le K}}} f_k.\left( Y_{k_1 j}\right) f_{k_2}.\left( Y_{k_1 j}\right) \\&\prod ^K_{{\mathop {l\ne k,k_1,k_2}\limits ^{l=1}}} F_l.^{-}\left( Y_{k_1j}\right) + \cdots + \frac{1}{K} \prod ^K_{{\mathop {l\ne k_1}\limits ^{l=1}}} f_l.\left( Y_{k_1j}\right) \\&+ \sum _{{\mathop {k_2\ne k,k_1}\limits ^{1\le k_2\le K}}} \prod ^K_{{\mathop {l\ne k,k_1,k_2}\limits ^{l=1}}} F_l.\left( Y_{k_1 j}\right) E\left\{ R_{Y_k} \left( Y_{k_2}\right) I\left( Y_{k_2}>Y_{k_1 j}\right) \right\} \\&+ \sum _{{\mathop {k_2,k_3\ne k,k_1}\limits ^{1\le k_2<k_3\le K}}} \prod ^K_{{\mathop {l\ne k,k_1,k_2,k_3}\limits ^{l=1}}} F_l.\left( Y_{k_1 j}\right) E\left\{ R_{Y_k} \left( Y_{k_2}, Y_{k_3}\right) \right. \\&\times \left. I\left( \hbox {min}\left( Y_{k_2},Y_{k_3}\right)>Y_{k_1 j}\right) \right\} + \cdots \\&+ E\left\{ R_{Y_k} \left( Y_1,Y_2,\ldots , Y_{k_1-1}, Y_{k_1+1},\ldots Y_K\right) \right. \\&\times \left. I\left( \hbox {min}\left( Y_1,Y_2,\ldots , Y_{k_1-1}, Y_{k_1+1},\ldots Y_K\right) >Y_{k_1 j}\right) \right\} -R_k \end{aligned}$$

with I(.) representing the indicator of the corresponding set,

$$\begin{aligned} F_k.^{-}(y) = \int {P(Y_k < y|x) \ \hbox {d}G(x)}, \ \ f_k.(y) = \int {P(Y_k = y|x) \ \hbox {d}G(x)} \end{aligned}$$

and

$$\begin{aligned} R_Y (Y_{s_1},\ldots ,Y_{s_{r-1}}) =&\, P(Y_{s_1}<Y,Y_{s_2}<Y,\ldots ,Y_{s_{r-1}}<Y)\nonumber \\&+\frac{1}{2}\sum _{1 \le q_1 \le r-1}{P(Y_{s_{q_1}}=Y, \ Y_{s_l}<Y \ \hbox {for all} \ l \ne q_1)}\nonumber \\&+\cdots +\frac{1}{r}P(Y_{s_1}=Y,Y_{s_2}=Y,\ldots ,Y_{s_{r-1}}=Y). \end{aligned}$$
(17)

Thus, using the central limit theorem, the asymptotic distribution of \(\varvec{Z}^{\varvec{R}}\) is K-variate normal with mean vector \({\varvec{0}}\) and dispersion matrix \(S_R = ((s_{k k^{\prime }}^R))\), where

$$\begin{aligned} s_{k k}^{R} =&\sum _{k_1=1}^{K}\frac{1}{\lambda _{k_1}} \left[ Var_{X_{k_1}}\left\{ \frac{g(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})}{g_{k_1}(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})} E_{Y_{k_1}|X_{k_1}}Z_{k k_1}^{R}\right\} \right. \\&+\left. E_{X_{k_1}}\left\{ \left( \frac{g(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})}{g_{k_1}(\varvec{X}_{\varvec{k}_{\varvec{1}} \varvec{j}})}\right) ^2 Var_{Y_{k_1}|X_{k_1}}Z_{k k_1}^{R}\right\} \right] , \end{aligned}$$

\(k=1,2,\ldots ,K\) and for \(k_1 \ne k_2 = 1,2,\ldots ,K\), we have

$$\begin{aligned} s_{k_1 k_2}^{R} =&\sum _{k_3=1}^{K}\frac{1}{\lambda _{k_3}} Cov_{X_{k_3}}\\&\times \left\{ \frac{g(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})}{g_{k_3}(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})} E_{Y_{k_3}|X_{k_3}}Z_{k_1 k_3}^{R},\frac{g(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})}{g_{k_3}(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})} E_{Y_{k_3}|X_{k_3}}Z_{k_2 k_3}^{R}\right\} \\&+E_{X_{k_3}}\left\{ \left( \frac{g(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})}{g_{k_3}(\varvec{X}_{\varvec{k}_{\varvec{3}} \varvec{j}})}\right) ^2 Cov_{Y_{k_3}|X_{k_3}}\left( Z_{k_1 k_3}^{R},Z_{k_2 k_3}^{R}\right) \right\} . \end{aligned}$$

Therefore, the asymptotic distribution of \(\sqrt{N}C(\widehat{\varvec{R}}.-\varvec{R}.)\) is \((K-1)\)-variate normal with mean vector \({\varvec{0}}\) and dispersion matrix \(\Sigma _{CR} = C S_R C^{T}\). Hence, the result follows. \(\square \)

Note 1. The proof of Result 6 is exactly the same as that of Result 4 and is therefore not repeated here. The elements of the dispersion matrix \(\Sigma _{CR^*} = C S_{R^*} C^{T}\) can be obtained by replacing the terms \(F_k.(y) \ \hbox {and} \ F_k.^{-}(y)\) by \(\bar{F}_k.(y) \ \hbox {and} \ F_k.^{+}(y)\), respectively, and finally replacing

$$\begin{aligned}&E\left\{ R_{Y_k} \left( Y_1,Y_2,\ldots , Y_{k_1-1}, Y_{k_1+1},\ldots Y_K\right) \right. \\&\left. I\left( \hbox {min}\left( Y_1,Y_2,\ldots , Y_{k_1-1}, Y_{k_1+1},\ldots Y_K\right) >Y_{k_1 j}\right) \right\} \end{aligned}$$

by

$$\begin{aligned}&E\left\{ R_{Y_k}^{*} \left( Y_1,Y_2,\ldots , Y_{k_1-1}, Y_{k_1+1},\ldots Y_K\right) \right. \\&\left. I\left( \hbox {max}\left( Y_1,Y_2,\ldots , Y_{k_1-1}, Y_{k_1+1},\ldots Y_K\right) <Y_{k_1 j}\right) \right\} , \end{aligned}$$

where

$$\begin{aligned} F_k.^{+}(y) = \int {P(Y_k > y|x) \ \hbox {d}G(x)}, \ \ \bar{F}_k.(y) = \int {P(Y_k \ge y|x) \ \hbox {d}G(x)} \end{aligned}$$

and \(R_Y^*\) can be defined similarly as (17) corresponding to BC-type functional.

Note 2. Consistent estimators of the dispersion matrices can be obtained under the assumptions A1–A4 by replacing the quantities \(\lambda _k, F_k.(y), F^{\,0}_{k}.(y), F_k.^{-}(y), F_k.^{+}(y), \bar{F}_k.(y), f_k.(y)\) and \(\frac{g(\varvec{X}_{\varvec{kj}})}{g_{k}(\varvec{X}_{\varvec{kj}})}\) by their sample versions \(\frac{n_k}{N}, \widehat{F}_k.(y), \widehat{F}^{\,0}_{k}.(y), \widehat{F}_k.^{-}(y), \widehat{F}_k.^{+}(y), \widehat{\bar{F}}_k.(y), \widehat{f}_k.(y) \ \hbox {and} \ e_{kj}\), respectively, where \(\widehat{F}^{\,0}_{k}.(y) = \frac{1}{n_k} \sum _{j=1}^{n_k}{e_{kj}\left\{ I(Y_{kj}>y)+\frac{1}{2}I(Y_{kj}=y)\right\} }\), and similarly for the others. Note that, as we do not assume any specific form of the distribution, the estimators under A1–A4 remain consistent in the presence of ties.
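As a small numerical check of the sample version \(\widehat{F}^{\,0}_{k}.(y)\) above, the following sketch evaluates it on a toy sample containing a tie at y; the unit weights \(e_{kj}=1\) are used purely for illustration:

```python
import numpy as np

def F0_hat(y, Y_k, e_k):
    # \hat F_k^0.(y) = (1/n_k) sum_j e_kj { I(Y_kj > y) + (1/2) I(Y_kj = y) }
    return float(np.mean(e_k * ((Y_k > y) + 0.5 * (Y_k == y))))

Y_k = np.array([1.0, 2.0, 2.0, 3.0])  # two observations tied at y = 2
e_k = np.ones_like(Y_k)               # unit weights, for illustration only
print(F0_hat(2.0, Y_k, e_k))          # → 0.5, i.e. (0 + 0.5 + 0.5 + 1)/4
```

The half-weight on ties is exactly what keeps the estimator consistent without assuming a continuous distribution.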

Note 3. To compare our tests with the test provided by Tsangari and Akritas (2004), we modify them by using the estimator of the variance components under \(H_{0A}\). Here, we use the combined estimator of \(F_{k}^{\,0}.(y)\) given by

$$\begin{aligned} \sum _{k=1}^{K}{\frac{n_k}{N} \ \widehat{F}_{k}^{\,0}.(y)} \end{aligned}$$

for each \(k=1,2,\ldots ,K\). The other quantities can be estimated similarly.
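The combined estimator above is simply an \(n_k/N\)-weighted average of the group-wise estimates; a one-line sketch with hypothetical group sizes and values:

```python
import numpy as np

# Hypothetical group sizes and group-wise estimates \hat F_k^0.(y) at a fixed y
n = np.array([30, 40, 50])
F0_group = np.array([0.48, 0.52, 0.55])

# Combined estimator under H0A: sum_k (n_k / N) \hat F_k^0.(y)
F0_pooled = float(np.sum(n / n.sum() * F0_group))
```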

Proof of Result 7

For BP-, BD- and BC-type functionals, it is not difficult to show that

$$\begin{aligned} \sqrt{N}\left\{ (\widehat{{\varvec{\pi }}} - {\varvec{\pi }}) - \varvec{B}\right\} \rightarrow {\varvec{0}} \end{aligned}$$
(18)

in probability, where \(\varvec{B} = \left( B_1,B_2,\ldots ,B_K\right) ^{T}\) with

$$\begin{aligned} B_k = B_k^{(0)} - \sum _{r=1}^{d}{\beta _r B_k^{(r)}} \ \ \hbox {and} \ \ B_k^{(r)} = \sum _{k_1=1}^{K}{\frac{1}{n_{k_1}}}\sum _{j=1}^{n_{k_1}}{B_{k k_1}^{(r)}(X_{k_1 j}^{(r)})}, \end{aligned}$$

\(k=1,2,\ldots ,K\) and \(r=0,1,\ldots ,d\). For convenience and without loss of generality, we write \(X_{k j}^{(0)} = Y_{k j}\). The mathematical expression of \(B_{k k_1}^{(r)}\) depends on the choice of functional. For BP-type functionals, we have

$$\begin{aligned} B_{k k}^{(r)}(X_{k j}^{(r)}) =&\, \frac{1}{K} \sum _{{\mathop {k_1 \ne k}\limits ^{k_1=1}}}^{K} \left( 1-F_{(r) k_1}^{\,0}(X_{kj}^{(r)})-p_{k k_1}^{(r)}\right) \ \ \hbox {and} \\ B_{k k_1}^{(r)}(X_{k_1 j}^{(r)}) =&\, \frac{1}{K} \left( F_{(r) k}^{\,0}(X_{k_1 j}^{(r)})-p_{k k_1}^{(r)}\right) , \end{aligned}$$

where \(F_{(r) k}^{\,0}(x) = P(X_k^{(r)}>x) + \frac{1}{2} P(X_k^{(r)}=x)\) for \(k, k_1 (\ne k) = 1,2,\ldots ,K\) and \(r=0,1,\ldots ,d\).

Again, for BD-type functionals, the expressions are

$$\begin{aligned} B_{k k}^{(r)}(X_{k j}^{(r)}) =&\prod _{{\mathop {l \ne k}\limits ^{l=1}}}^{K}{F_{(r)l}^{-}(X_{k j}^{(r)})} + \frac{1}{2} \sum _{{\mathop {l_1 \ne k}\limits ^{1 \le l_1 \le K}}}{f_{(r) l_1}(X_{k j}^{(r)}) \prod _{{\mathop {l \ne l_1, \ k}\limits ^{l=1}}}^{K}{F_{(r) l}^{-}(X_{k j}^{(r)})}}\\&+ \frac{1}{3} \sum _{{\mathop {l_1, \ l_2 \ne k}\limits ^{1 \le l_1 < l_2 \le K}}} f_{(r) l_1}(X_{k j}^{(r)}) f_{(r) l_2}(X_{k j}^{(r)}) \prod _{{\mathop {l \ne l_1, \ l_2, \ k}\limits ^{l=1}}}^{K} {F_{(r) l}^{-}(X_{k j}^{(r)})} \\&+ \cdots + \frac{1}{K} \prod _{{\mathop {l \ne k}\limits ^{l=1}}}^{K}{f_{(r) l}(X_{k j}^{(r)})} - R_k^{(r)} \end{aligned}$$

for \(k = 1,2,\ldots ,K\) and for \(k_1 \ne k\), we have

$$\begin{aligned} B_{k k_1}^{(r)}(X_{k_1 j}^{(r)}) =&\, \left\{ 1-F_{(r) k}\left( X_{k_1 j}^{(r)}\right) \right\} \prod ^K_{{\mathop {l\ne k,k_1}\limits ^{l=1}}} F_{(r) l}\left( X_{k_1 j}^{(r)}\right) \\&+ \frac{1}{2}f_{(r) k}\left( X_{k_1 j}^{(r)}\right) \prod ^K_{{\mathop {l\ne k,k_1}\limits ^{l=1}}} F_{(r) l}^{-}\left( X_{k_1 j}^{(r)}\right) \\&+\frac{1}{3}\sum _{{\mathop {k_2\ne k,k_1}\limits ^{1\le k_2\le K}}} f_{(r) k}\left( X_{k_1 j}^{(r)}\right) f_{(r) k_2}\left( X_{k_1 j}^{(r)}\right) \\&\prod ^K_{{\mathop {l\ne k,k_1,k_2}\limits ^{l=1}}} F_{(r) l}^{-}\left( X_{k_1 j}^{(r)}\right) + \cdots + \frac{1}{K} \prod ^K_{{\mathop {l\ne k_1}\limits ^{l=1}}} f_{(r) l}\left( X_{k_1 j}^{(r)}\right) \\&+ \sum _{{\mathop {k_2\ne k,k_1}\limits ^{1\le k_2\le K}}} \prod ^K_{{\mathop {l\ne k,k_1,k_2}\limits ^{l=1}}} F_{(r) l}\left( X_{k_1 j}^{(r)}\right) \\&\times E\left\{ R_{X_{k}^{(r)}} \left( X_{k_2}^{(r)}\right) I\left( X_{k_2}^{(r)}>X_{k_1}^{(r)}\right) \right\} \\&+ \sum _{{\mathop {k_2,k_3\ne k,k_1}\limits ^{1\le k_2<k_3\le K}}} \prod ^K_{{\mathop {l\ne k,k_1,k_2,k_3}\limits ^{l=1}}} F_{(r) l}\left( X_{k_1 j}^{(r)}\right) E\left\{ R_{X_{k}^{(r)}} \left( X_{k_2}^{(r)}, X_{k_3}^{(r)}\right) \right. \\&\times \left. I\left( \hbox {min}\left( X_{k_2}^{(r)},X_{k_3}^{(r)}\right)>X_{k_1 j}^{(r)}\right) \right\} + \cdots \\&+ E\left\{ R_{X_{k}^{(r)}} \left( X_{1}^{(r)},X_{2}^{(r)},\ldots , X_{k_1-1}^{(r)}, X_{k_1+1}^{(r)},\ldots X_{K}^{(r)}\right) \right. \\&\times \left. I\left( \hbox {min}\left( X_{1}^{(r)},X_{2}^{(r)},\ldots , X_{k_1-1}^{(r)}, X_{k_1+1}^{(r)},\ldots X_{K}^{(r)}\right) \right. \right. \\&>\left. \left. X_{k_1 j}^{(r)}\right) \right\} -R_k^{(r)}, \end{aligned}$$

where \(F_{(r) k}(x) = P(X_{k}^{(r)} \ge x)\), \(F_{(r) k}^{-}(x) = P(X_{k}^{(r)} < x)\) and \(f_{(r) k}(x) = P(X_{k}^{(r)} = x)\), \(r=0,1,\ldots ,d\). In the case of the BC-type functional, we obtain the expressions by replacing the \(>\) (or \(\ge \)) sign by \(<\) (or \(\le \)) and "min" by "max" in the above expressions.

Therefore, from (18) we can say that \(\sqrt{N}(\widehat{{\varvec{\pi }}} - {\varvec{\pi }})\) and \(\sqrt{N}\varvec{B}\) have the same asymptotic distribution. Using the central limit theorem, \(\sqrt{N}\varvec{B}\) asymptotically follows a K-variate normal distribution with mean vector \({\varvec{0}}\) and dispersion matrix \(\Sigma _B\). Thus, \(\sqrt{N} C \varvec{B}\), and hence \(\sqrt{N}C(\widehat{{\varvec{\pi }}} - {\varvec{\pi }})\), asymptotically follows a \((K-1)\)-variate normal distribution with mean vector \({\varvec{0}}\) and dispersion matrix \(\Sigma _{C\pi }\). \(\square \)

Note 4. A consistent estimator of the dispersion matrix \(\Sigma _{C\pi }\) can be obtained by estimating its elements with the usual sample versions. Note that \(\Sigma _{C\pi }\) involves the unknown parameter \(\varvec{\beta } = \left( \beta _1,\beta _2,\ldots ,\beta _d\right) ^{T}\). We estimate \(\varvec{\beta }\) consistently by minimizing the variance of \(\frac{1}{N}\sum _{k=1}^{K}{n_k B_k}\) (Bathke and Brunner 2003) and solving the set of d equations

$$\begin{aligned}&\sum _{r_1=1}^{d} {\beta _{r_1}}\left\{ \sum _{k=1}^{K}{n_k^2}\sum _{k_1=1}^{K}{\frac{1}{n_{k_1}} Cov\left( B_{k k_1}^{(r)},B_{k k_1}^{(r_1)}\right) }\right. \\&\qquad \left. + \sum _{k \ne k^{\prime }}{n_k n_{k^{\prime }}}\sum _{k_1=1}^{K}{\frac{1}{n_{k_1}} Cov\left( B_{k k_1}^{(r)},B_{k^{\prime } k_1}^{(r_1)}\right) }\right\} \\&\quad =\sum _{k=1}^{K}{n_k^2}\sum _{k_1=1}^{K}{\frac{1}{n_{k_1}} Cov\left( B_{k k_1}^{(0)},B_{k k_1}^{(r)}\right) }\\&\qquad + \sum _{k < k^{\prime }}{n_k n_{k^{\prime }}}\sum _{k_1=1}^{K}{\frac{1}{n_{k_1}} \left\{ Cov\left( B_{k k_1}^{(0)},B_{k^{\prime } k_1}^{(r)}\right) +Cov\left( B_{k k_1}^{(r)},B_{k^{\prime } k_1}^{(0)}\right) \right\} }, \end{aligned}$$

\(r=1,2,\ldots ,d\).
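The display above is a system of d linear equations in \((\beta _1,\ldots ,\beta _d)\): collecting the bracketed covariance sums on the left-hand side into a \(d\times d\) matrix and the right-hand sides into a vector, \(\widehat{\varvec{\beta }}\) solves a linear system. A minimal sketch, where the matrix A and vector b are hypothetical stand-ins for the sample covariance sums of the \(B_{k k_1}^{(r)}\) terms:

```python
import numpy as np

# Hypothetical d = 2 system: A holds the coefficients of beta_{r_1} from the
# left-hand side of the estimating equations, b the right-hand sides.
A = np.array([[2.0, 0.3],
              [0.3, 1.5]])
b = np.array([0.8, 0.4])

beta_hat = np.linalg.solve(A, b)  # variance-minimizing beta estimate
```

Since A is a sum of covariance-based terms, it is symmetric and, in regular cases, positive definite, so the system has a unique solution.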


Cite this article

Chatterjee, D., Bandyopadhyay, U. Testing in nonparametric ANCOVA model based on ridit reliability functional. Ann Inst Stat Math 71, 327–364 (2019). https://doi.org/10.1007/s10463-017-0643-8

