Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests


Abstract

Norm statistics allow for the interpretation of scores on psychological and educational tests by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education group, among others. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries, and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that the corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
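A minimal usage sketch (ours, not reproduced from the article): check.norms takes a vector of test scores, and the simulated data below are purely illustrative.

library(mokken)
set.seed(1)
scores <- rbinom(300, size = 20, prob = 0.6)  # hypothetical scores on a 0-20 point test
check.norms(scores)   # norm statistics with their standard errors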



Acknowledgments

Funding was provided by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (Grant No. 406-12-013).


Corresponding author

Correspondence to Hannah E. M. Oosterhuis.

Appendices

Appendix 1

The sample estimate of the standard deviation, \(s_{X}\), can be written using the generalized exp-log notation (Eq. 5) as

$$\begin{aligned} s_{X}={\mathbf{A}}_{5}\cdot \mathrm {exp}\left( {\mathbf{A}}_{4}\cdot \mathrm {log}\left( {\mathbf{A}}_{3}\cdot \mathrm {exp}\left( {\mathbf{A}}_{2}\cdot \mathrm {log}\left( {\mathbf{A}}_{1}\cdot {\hat{\mathbf{m}}} \right) \right) \right) \right) . \end{aligned}$$
(43)

For any vector \({\mathbf{y}}\), let \({\mathbf{y}}^{{(2)}}\) denote the vector containing the squares of the elements of \({\mathbf{y}}\). Then, the \(4 \times k\) matrix \({\mathbf{A}}_{1}\) equals

$$\begin{aligned} {\mathbf{A}}_{1}=\left[ {\begin{array}{llll} {\mathbf{r}} &{} {\mathbf{r}}^{{(}2{)}} &{} {\mathbf {1}}_{k} &{} {\mathbf {1}}_{k}\\ \end{array} } \right] ', \end{aligned}$$
(44)

the \(4 \times 4\) matrix \({\mathbf{A}}_{2}\) equals

$$\begin{aligned} {\mathbf{A}}_{2}=\left[ {\begin{array}{cccc} 2 &{} 0 &{} -1 &{} 0\\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 1 &{} -1\\ \end{array} } \right] , \end{aligned}$$
(45)

the \(2 \times 4\) matrix \({\mathbf{A}}_{3}\) equals

$$\begin{aligned} {\mathbf{A}}_{3}=\left[ {\begin{array}{cccc} -1 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} -1\\ \end{array} } \right] , \end{aligned}$$
(46)

the \(1 \times 2\) vector \({\mathbf{A}}_{4}\) equals

$$\begin{aligned} {\mathbf{A}}_{4}=\left[ {\begin{array}{cc} \frac{1}{2} &{} -\frac{1}{2}\\ \end{array} } \right] , \end{aligned}$$
(47)

and \({\mathbf{A}}_{5}\) equals the scalar 1. It follows that \({\mathbf{g}}_{0}={\hat{\mathbf{m}}}\), the \(4 \times 1\) vector \({\mathbf{g}}_{1}\) equals

$$\begin{aligned} {\mathbf{g}}_{1}=\mathrm {log}\left( {\mathbf{A}}_{1}\cdot {\mathbf{g}}_{0} \right) =\log \left( \left[ {\begin{array}{llll} {\mathbf{r}} &{} {\mathbf{r}}^{\left( 2 \right) } &{} {\mathbf {1}}_{k} &{} {\mathbf {1}}_{k}\\ \end{array} } \right] ^{\prime } \cdot {\hat{\mathbf{m}}} \right) =\left[ {\begin{array}{c} \mathrm {log}\left( \sum X_{i} \right) \\ \mathrm {log}\left( \sum X_{i}^{2} \right) \\ \mathrm {log}\left( N \right) \\ \mathrm {log}\left( N \right) \\ \end{array} } \right] , \end{aligned}$$
(48)

the \(4 \times 1\) vector \({\mathbf{g}}_{2}\) equals

$$\begin{aligned} {\mathbf{g}}_{2}=\mathrm {exp}\left( {\mathbf{A}}_{2}\cdot {\mathbf{g}}_{1} \right) = \mathrm {exp}\left( \left[ {\begin{array}{cccc} 2 &{} 0 &{} -1 &{} 0\\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 1 &{} -1\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} \mathrm {log}\left( \sum X_{i} \right) \\ \mathrm {log}\left( \sum X_{i}^{2} \right) \\ \mathrm {log}\left( N \right) \\ \mathrm {log}\left( N \right) \\ \end{array} } \right] \right) =\left[ {\begin{array}{c} \frac{\left( \sum X_{i} \right) ^{2}}{N}\\ \sum X_{i}^{2} \\ N\\ \mathrm {1}\\ \end{array} } \right] , \end{aligned}$$
(49)

the \(2 \times 1\) vector \({\mathbf{g}}_{3}\) equals

$$\begin{aligned} {\mathbf{g}}_{3}= & {} \log \left( {\mathbf{A}}_{3}\cdot {\mathbf{g}}_{2} \right) =\log \left( \left[ {\begin{array}{cccc} -1 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} -1\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} \frac{\left( \sum X_{i} \right) ^{2}}{N}\\ \sum X_{i}^{2} \\ N\\ \mathrm {1}\\ \end{array} } \right] \right) = \left[ {\begin{array}{c} \mathrm {log}\left( SS \right) \\ \mathrm {log} (N-1)\\ \end{array} } \right] , \end{aligned}$$
(50)

where \(SS = \sum X_{i}^{2} -\frac{\left( \sum X_{i} \right) ^{2}}{N}\), and

$$\begin{aligned} {\mathbf{g}}_{4}=\exp \left( {\mathbf{A}}_{4}\cdot {\mathbf{g}}_{3} \right) =\exp \left( \left[ {\begin{array}{ll} \frac{1}{2} &{} -\frac{1}{2}\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} \mathrm {log}\left( SS \right) \\ \log \left( N-1 \right) \\ \end{array} } \right] \right) =\sqrt{\frac{SS}{N-1}}, \end{aligned}$$
(51)

because \({\mathbf{A}}_{5}=1\), \({\mathbf {g}}({\hat{\mathbf {m}}})={\mathbf{g}}_{5}={\mathbf{g}}_{4}=s_{X}\).
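As a check on this chain, a minimal R sketch (our own code, not the authors') that evaluates Eq. 43 for a toy frequency table and compares the result with the ordinary sample standard deviation:

r  <- 0:5                      # possible test scores r
m  <- c(3, 7, 12, 9, 6, 3)     # observed frequencies, m-hat
k  <- length(r)
A1 <- rbind(r, r^2, rep(1, k), rep(1, k))
A2 <- matrix(c(2, 0, -1,  0,
               0, 1,  0,  0,
               0, 0,  1,  0,
               0, 0,  1, -1), 4, 4, byrow = TRUE)
A3 <- matrix(c(-1, 1, 0,  0,
                0, 0, 1, -1), 2, 4, byrow = TRUE)
A4 <- matrix(c(1/2, -1/2), 1, 2)
sX <- drop(exp(A4 %*% log(A3 %*% exp(A2 %*% log(A1 %*% m)))))  # Eq. 43
all.equal(sX, sd(rep(r, m)))   # TRUE: the chain reproduces s_X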

After some tedious algebra, it may be verified that \({\mathbf{G}}_{0}={\mathbf{I}}_{k}\),

$$\begin{aligned} {\mathbf{G}}_{1}= & {} {\mathbf{D}}^{-1}\left[ {\mathbf{A}}_{1}\cdot {\mathbf{g}}_{\mathrm {0}} \right] \cdot {\mathbf{A}}_{\mathrm {1}} \cdot {\mathbf{G}}_{0} \nonumber \\= & {} {\mathbf{D}}^{-1}\left[ {\begin{array}{c} {\sum }_{i=1}^N X_{i} \\ {\sum }_{i=1}^N X_{i}^{2} \\ N\\ N\\ \end{array} } \right] \cdot \left[ {\begin{array}{llll} {\mathbf{r}} &{} {\mathbf{r}}^{\left( 2 \right) } &{} {\mathbf {1}}_{k} &{} {\mathbf {1}}_{k}\\ \end{array} } \right] ^{\prime } \cdot {\mathbf{I=}}\left[ {\begin{array}{c} \frac{{\mathbf{r}}^{\prime }}{\sum X_{i} }\\ \frac{{\mathbf{r}}^{\left( 2 \right) \prime }}{\sum X_{i}^{2} }\\ \frac{{\mathbf {1}}_{k}^{{\prime }}}{N}\\ \frac{{\mathbf {1}}_{k}^{{\prime }}}{N}\\ \end{array} } \right] , \end{aligned}$$
(52)
$$\begin{aligned} {\mathbf{G}}_{2}= & {} {\mathbf{D}}\left[ \exp \left( {\mathbf{A}}_{2}\cdot {\mathbf{g}}_{\mathrm {1}} \right) \right] \cdot {\mathbf{A}}_{\mathrm {2}} \cdot {\mathbf{G}}_{1} \nonumber \\= & {} {\mathbf{D}}\left[ {\begin{array}{c} \frac{\left( \sum X_{i} \right) ^{2}}{N}\\ \sum X_{i}^{2} \\ N\\ \mathrm {1}\\ \end{array} } \right] \cdot \left[ {\begin{array}{cccc} 2 &{} 0 &{} -1 &{} 0\\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 1 &{} -1\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} \frac{{\mathbf{r}}^{\prime }}{\sum X_{i} }\\ \frac{{\mathbf{r}}^{\left( 2 \right) \prime }}{\sum X_{i}^{2} }\\ \frac{{\mathbf {1}}_{k}^{{\prime }}}{N}\\ \frac{{\mathbf {1}}_{k}^{{\prime }}}{N}\\ \end{array} } \right] =\left[ {\begin{array}{c} 2{\mathbf{r}}^{{\prime }}{\bar{X}}-{\bar{X}}^{2}\\ {\mathbf{r}}^{\left( 2 \right) \prime }\\ {\mathbf {1}}_{k}^{{\prime }}\\ {\mathbf {0}}_{k}^{{\prime }}\\ \end{array} } \right] . \end{aligned}$$
(53)

Then, \({\mathbf{G}}_{3}\) is a \(2 \times k\) matrix,

$$\begin{aligned} {\mathbf{G}}_{3}= & {} {\mathbf{D}}^{-1}\left[ {\mathbf{A}}_{3}\cdot {\mathbf{g}}_{\mathrm {2}} \right] \cdot {\mathbf{A}}_{\mathrm {3}} \cdot {\mathbf{G}}_{2} \nonumber \\= & {} {\mathbf{D}}^{-1}\left[ {\begin{array}{c} SS\\ N-1\\ \end{array} } \right] \cdot \left[ {\begin{array}{cccc} -1 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} -1\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} 2{\mathbf{r}}^{{\prime }}{\bar{X}}-{\bar{X}}^{2}\\ {\mathbf{r}}^{\left( 2 \right) {\prime }}\\ {\mathbf {1}}_{k}^{{\prime }}\\ {\mathbf {0}}_{k}^{{\prime }}\\ \end{array} } \right] =\left[ {\begin{array}{c} \frac{{\mathbf{d}}^{\left( 2 \right) {\prime }}}{SS}\\ {\mathbf{e}}^{\prime }\\ \end{array} } \right] , \end{aligned}$$
(54)

where the \(k \times 1\) vector \({\mathbf{d}}={\mathbf{r}}-{\bar{X}}\) and \({\mathbf{e}}={\mathbf {1}}_{k}/(N-1)\),

and \({\mathbf{G}}_{4}\) is the \(1 \times k\) vector

$$\begin{aligned} {\mathbf{G}}_{4}= & {} {\mathbf{D}}\left[ \exp \left( {\mathbf{A}}_{4}\cdot {\mathbf{g}}_{\mathrm {3}} \right) \right] \cdot {\mathbf{A}}_{\mathrm {4}} \cdot {\mathbf{G}}_{3} \nonumber \\= & {} {\mathbf{D}}\left[ s_{X} \right] \cdot \left[ {\begin{array}{cc} \frac{1}{2} &{} -\frac{1}{2}\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} \frac{{\mathbf{d}}^{\left( 2 \right) \prime }}{SS}\\ {\mathbf{e}}^{\prime }\\ \end{array} } \right] =0.5s_{X}\left( \frac{{\mathbf{d}}^{\left( 2 \right) \prime }}{SS}-{\mathbf{e}}^{\prime } \right) . \end{aligned}$$
(55)

Finally, one can derive that \({\mathbf{G}=}{\mathbf{G}}_{\mathrm {5}} = {\mathbf{G}}_{\mathrm {4}}\).

Because the sample estimate of the standard deviation is obtained by dividing the sum of squared deviation scores by \(N-1\), \({\mathbf {g}}({\hat{\mathbf {m}}})\) (Eq. 51) is not homogeneous of order 0. As a result, the asymptotic variance is approximated by \(V_{s_{X}}\approx {\mathbf{GD}}({\hat{\mathbf{m}}}){\mathbf{G}}^{\prime } - {\mathbf{G}}{\hat{\mathbf{m}}}N^{-1}{\hat{\mathbf{m}}}^{\prime } {\mathbf{G}}^{\prime }\) (cf. Eq. 2). Inserting the Jacobian \({\mathbf{G}}\) (Eq. 55) into Eq. 2 yields the sample estimate of the asymptotic variance of \(s_{X}\),

$$\begin{aligned} V_{s_{X}}\approx .25s_{X}^{2}\cdot {\sum }_{\varvec{i}} {\sum }_{\varvec{j}} {\left( \frac{d_{i}^{2}}{SS}-e \right) \left( \frac{d_{j}^{2}}{SS}-e \right) \left( {\delta }_{ij}{\hat{m}}_{i}-\frac{{\hat{m}}_{i}{\hat{m}}_{j}}{N} \right) }. \end{aligned}$$
(56)

For large N,

$$\begin{aligned} V_{s_{X}}\approx .25s_{X}^{2}\cdot {\sum }_{\varvec{i}} {\hat{m}}_{i} \left( \frac{d_{i}^{2}}{SS}-\frac{1}{N} \right) ^{2}. \end{aligned}$$
(57)
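In R, Eq. 56 can be written out directly; a minimal sketch (our own code) for a toy frequency table, including a 95% Wald interval for the standard deviation:

r  <- 0:5                            # possible test scores
m  <- c(3, 7, 12, 9, 6, 3)           # observed frequencies, m-hat
N  <- sum(m); x <- rep(r, m)
sX <- sd(x); SS <- sum((x - mean(x))^2)
a  <- (r - mean(x))^2 / SS - 1 / (N - 1)   # d_i^2/SS - e, the bracketed term
## the double sum in Eq. 56 collapses to sum(m * a^2) - sum(m * a)^2 / N:
V  <- 0.25 * sX^2 * (sum(m * a^2) - sum(m * a)^2 / N)
c(sX = sX, se = sqrt(V), lower = sX - 1.96 * sqrt(V), upper = sX + 1.96 * sqrt(V))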

Appendix 2

The sample estimates of the percentile ranks, collected in the vector \({\mathbf{PR}} =\left( {PR}_{r_{1}}, \ldots , {PR}_{r_{k}} \right) ^{\prime }\) (Eq. 26), can be written as

$$\begin{aligned} {\mathbf{PR}}={\mathbf{A}}_{3}\cdot \mathrm {exp}\left( {\mathbf{A}}_{2}\cdot \mathrm {log} \left( {{\mathbf{A}}}_{1}\cdot {\hat{\mathbf{m}}}\right) \right) . \end{aligned}$$
(58)

Let \({\mathbf{L}}_{k}\) be the \(k \times k\) lower triangular matrix of ones. The \((k+1) \times k\) matrix \({\mathbf{A}}_{1}\) equals

$$\begin{aligned} {\mathbf{A}}_{1}=\left[ {\begin{array}{c} {\mathbf{L}}_{k}\\ {\mathbf {1}}_{k}^{\prime } \\ \end{array} } \right] , \end{aligned}$$
(59)

and the \(k \times (k+1)\) matrix \({\mathbf{A}}_{2}\) equals

$$\begin{aligned} {\mathbf{A}}_{2}=\left[ {\begin{array}{ll} {\mathbf{I}}_{k} &{} {-{\mathbf{1}}}_{k}\\ \end{array} } \right] . \end{aligned}$$
(60)

Let \({\mathbf{A}}_{3}\) be a lower bidiagonal \(k \times k\) matrix in which each nonzero element equals 50; that is,

$$\begin{aligned} {\mathbf{A}}_{3}=50\left( {\mathbf{I}}_{k} +\left[ {\begin{array}{cc} {\mathbf {0}}_{k-1}^{\prime } &{} 0\\ {\mathbf{I}}_{k-1} &{} {\mathbf {0}}_{k-1}\\ \end{array} } \right] \right) . \end{aligned}$$
(61)

It follows that \({\mathbf{g}}_{0}={\hat{\mathbf{m}}}\). Let \({\hat{\mathbf{m}}}^{*}= \left( \sum _{i=1}^1 {\hat{m}}_{i} , \sum _{i=1}^2 {\hat{m}}_{i} , \ldots , \sum _{i=1}^k {\hat{m}}_{i} \right) ^{\prime }\) be the vector of cumulative frequencies; that is, \({\hat{\mathbf{m}}}^{*}={\mathbf{L}}_{k}\cdot {\hat{\mathbf{m}}}\), and let \({\mathbf{P}}^{*}={\hat{\mathbf{m}}}^{*}/N=\left( P\left( X\le r_{1} \right) ,\ldots , P\left( X\le r_{k} \right) \right) ^{\prime }\). Then \({\mathbf{g}}_{1}\) is a \((k+1) \times 1\) vector,

$$\begin{aligned} {\mathbf{g}}_{1}=\log \left( {\mathbf{A}}_{1}\cdot {\hat{\mathbf{m}}} \right) =\mathrm {log}\left( \left[ {\begin{array}{c} {\mathbf{L}}_{k}\\ {\mathbf {1}}_{k}^{\prime } \\ \end{array} } \right] \cdot {\hat{\mathbf{m}}} \right) = \mathrm {log}\left[ {\begin{array}{c} {\hat{\mathbf{m}}}^{*}\\ N\\ \end{array} } \right] , \end{aligned}$$
(62)

\({\mathbf{g}}_{2}\) is a \(k \times 1\) vector,

$$\begin{aligned} {\mathbf{g}}_{2}=\mathrm {exp}\left( {\mathbf{A}}_{2}\cdot {\mathbf{g}}_{1} \right) =\mathrm {exp}\left( \left[ {\begin{array}{ll} {\mathbf{I}}_{k} &{} {-{\mathbf{1}}}_{k}\\ \end{array} } \right] \cdot \mathrm {log}\left[ {\begin{array}{c} {\hat{\mathbf{m}}}^{*}\\ N\\ \end{array} } \right] \right) ={\mathbf{P}}^{*}, \end{aligned}$$
(63)

and \({\mathbf{g}}_{3}\) is a \(k \times 1\) vector,

$$\begin{aligned} {\mathbf {g}}({\hat{\mathbf {m}}})={\mathbf{g}}_{3}={\mathbf{A}}_{3}\cdot {\mathbf{g}}_{2}=50\left( {\mathbf{I}}_{{\varvec{k}}} + \left[ {\begin{array}{cc} {\mathbf {0}}_{\varvec{k-1}}^{{\prime }} &{} {\mathbf {0}}\\ {\mathbf{I}}_{\varvec{k-1}} &{} {\mathbf {0}}_{\varvec{k-1}}\\ \end{array} } \right] \right) \cdot {\mathbf{P}}^{*} = {\mathbf{PR}}. \end{aligned}$$
(64)

It follows that \({\mathbf{G}}_{0}={\mathbf{I}}\), the \((k+1) \times k\) matrix \({\mathbf{G}}_{1}\) equals

$$\begin{aligned} {\mathbf{G}}_{1}={\mathbf{D}}^{-1}\left[ {\mathbf{A}}_{1}{{\mathbf{\cdot g}}}_{0} \right] \cdot {\mathbf{A}}_{1}\cdot {\mathbf{G}}_{0}{{\mathbf{=D}}}^{-1}\left( \left[ {\begin{array}{c} {\hat{\mathbf{m}}}^{*}\\ N\\ \end{array} } \right] \right) \cdot \left[ {\begin{array}{c} {\mathbf{L}}_{k}\\ {\mathbf {1}}_{k}^{\prime }\\ \end{array} } \right] , \end{aligned}$$
(65)

the \(k \times k\) matrix \({\mathbf{G}}_{2}\) equals

$$\begin{aligned} {\mathbf{G}}_{2}={\mathbf{D}}\left[ \exp ({\mathbf{A}}_{2}\cdot {\mathbf{g}}_{\mathrm {1}}) \right] \cdot {\mathbf{A}}_{\mathrm {2}} \cdot {\mathbf{G}}_{1}={\mathbf{D}}\left( {\mathbf{P}}^{*} \right) \cdot \left[ {\begin{array}{ll} {\mathbf{I}}_{k} &{} {-{\mathbf{1}}}_{k}\\ \end{array} } \right] \cdot {\mathbf{D}}^{-1}\left( \left[ {\begin{array}{c} {\hat{\mathbf{m}}}^{*}\\ N\\ \end{array} } \right] \right) \cdot \left[ {\begin{array}{c} {\mathbf{L}}_{k}\\ {\mathbf {1}}_{k}^{\prime }\\ \end{array} } \right] , \end{aligned}$$
(66)

and the \(k \times k\) matrix \({\mathbf{G}}_{3}\) equals

$$\begin{aligned} {\mathbf{G}}= & {} {\mathbf{G}}_{3}={\mathbf{A}}_{3}\cdot {\mathbf{G}}_{2} \nonumber \\= & {} 50\left( {\mathbf{I}}_{k} +\left[ {\begin{array}{cc} {\mathbf {0}}_{k-1}^{\prime } &{} 0\\ {\mathbf{I}}_{k-1} &{} {\mathbf {0}}_{k-1}\\ \end{array} } \right] \right) \cdot {\mathbf{D}}\left( \left[ {\mathbf{P}}^{*} \right] \right) \cdot \left[ {\begin{array}{ll} {\mathbf{I}}_{k} &{} {-{\mathbf{1}}}_{k}\\ \end{array} } \right] \cdot {\mathbf{D}}^{-1}\left( \left[ {\begin{array}{c} {\hat{\mathbf{m}}}^{*}\\ N\\ \end{array} } \right] \right) \cdot \left[ {\begin{array}{c} {\mathbf{L}}_{k}\\ {\mathbf {1}}_{k}^{\prime }\\ \end{array} } \right] . \end{aligned}$$
(67)

Some tedious algebra shows that the elements of \({\mathbf{G}}\) equal

$$\begin{aligned} G_{gi}=\frac{50}{N}\cdot \left\{ {\begin{array}{ll} {2-P}_{g-1}^{*}{-P}_{g}^{*} &{} \hbox {if } g>i\\ 1{-P}_{g-1}^{*}{-P}_{g}^{*} &{} \hbox {if } g=i\\ 0{-P}_{g-1}^{*}{-P}_{g}^{*} &{} \hbox {if } g<i\\ \end{array} } \right. , \end{aligned}$$
(68)

where \(P_{x}^{*}=P\left( X\le x \right) \). Because the percentile ranks are homogeneous of order 0, the asymptotic variance is approximated by \(V_{{\mathbf{PR}}}\approx {\mathbf{GD}}({\hat{\mathbf{m}}}){\mathbf{G}}^{\prime }\) (cf. Eq. 3). The sample estimates of the elements of the asymptotic covariance matrix of \({\mathbf{PR}}\) are

$$\begin{aligned} V_{{{\mathbf{PR}}}_{gh}}\approx \frac{2500}{N^{2}} {\sum }_i {{\hat{\mathrm{m}}}_{i}\big ({{\gamma }_{gi}-P}_{g-1}^{*}{-P}_{g}^{*} \big )\left( {\gamma _{hi}-P}_{h-1}^{*}{-P}_{h}^{*} \right) } \end{aligned}$$
(69)

for \(g,h=1, \ldots , k\), where \({\gamma }_{gi}\) equals 2 if \(g>i\), 1 if \(g=i\), and 0 if \(g<i\) (cf. Eq. 68).
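The matrix form of Eqs. 64, 68, and 69 translates directly into R; a minimal sketch (our own code) for a toy frequency table:

r <- 0:5; m <- c(3, 7, 12, 9, 6, 3)             # toy table as in Appendix 1
N <- sum(m); k <- length(r)
Pstar <- cumsum(m) / N                          # P*_g = P(X <= r_g)
Pprev <- c(0, Pstar[-k])                        # P*_{g-1}, with P*_0 = 0
PR    <- 50 * (Pprev + Pstar)                   # Eq. 64
gam   <- outer(1:k, 1:k, function(g, i) 2 * (g > i) + (g == i))   # Eq. 68
B     <- gam - Pprev - Pstar                    # column-wise recycling: row g loses P*_{g-1} + P*_g
V     <- (2500 / N^2) * B %*% diag(m) %*% t(B)  # Eq. 69
cbind(PR = PR, se = sqrt(diag(V)))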

Appendix 3

The sample estimates of the eight boundaries of the stanines (Eq. 28) can be written using the generalized exp-log notation (Eq. 5) as follows:

$$\begin{aligned} {{\mathbf{St}}}_{{\varvec{b}}}={\mathbf{A}}_{5}\cdot \mathrm {exp}\left( {\mathbf{A}}_{4}\cdot \mathrm {log}\left( {\mathbf{A}}_{3}\cdot \mathrm {exp}\left( {\mathbf{A}}_{2}\cdot \mathrm {log}\left( {\mathbf{A}}_{1}\cdot {\hat{\mathbf{m}}} \right) \right) \right) \right) . \end{aligned}$$
(70)

Let \({\mathbf{A}}_{1}\) be the \(4 \times k\) matrix,

$$\begin{aligned} {\mathbf{A}}_{1}=\left[ {\begin{array}{llll} {\mathbf{r}} &{} {\mathbf{r}}^{\left( 2 \right) } &{} {\mathbf {1}}_{k} &{} {\mathbf {1}}_{k}\\ \end{array} } \right] ^{\prime }, \end{aligned}$$
(71)

let \({{\mathbf{A}}}_{2}\) be the \(5 \times 4\) matrix

$$\begin{aligned} {{\mathbf{A}}}_{2}=\left[ {\begin{array}{cccc} 2 &{} 0 &{} 0 &{} -1 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \\ 1 &{} 0 &{} 0 &{} -1 \\ 0 &{} 0 &{} 1 &{} -1 \\ \end{array} } \right] , \end{aligned}$$
(72)

let \({{\mathbf{A}}}_{3}\) be the \(3 \times 5\) matrix

$$\begin{aligned} {\mathbf{A}}_{3}=\left[ {\begin{array}{ccccc} -1 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} -1 \\ 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ \end{array} } \right] , \end{aligned}$$
(73)

let \({{\mathbf{A}}}_{4}\) be the \(2 \times 3\) matrix,

$$\begin{aligned} {{\mathbf{A}}}_{4}=\left[ {\begin{array}{ccc} {\frac{1}{2}} &{} {-\frac{1}{2}} &{} 0 \\ 0 &{} 0 &{} 1 \end{array} } \right] , \end{aligned}$$
(74)

and let \({{\mathbf{A}}}_{5}\) be the \(8 \times 2\) matrix

$$\begin{aligned} {{\mathbf{A}}}_{5}=\left[ {\begin{array}{cc} {\mathbf{f}} &{} {\mathbf {1}}_{8}\\ \end{array} } \right] . \end{aligned}$$
(75)

It follows that \({\mathbf{g}}_{0}={\hat{\mathbf{m}}}\), \({\mathbf{g}}_{1}\) is the \(4 \times 1\) vector

$$\begin{aligned} {\mathbf{g}}_{1}=\mathrm {log}\left( {\mathbf{A}}_{1}\cdot {\mathbf{g}}_{0} \right) =\log \left( \left[ {\begin{array}{llll} {\mathbf{r}} &{} {\mathbf{r}}^{\left( 2 \right) } &{} {\mathbf {1}}_{k} &{} {\mathbf {1}}_{k}\\ \end{array} } \right] ^{\prime }\cdot {\hat{\mathbf{m}}} \right) =\log \left( \left[ {\begin{array}{c} \sum X_{i} \\ \sum X_{i}^{2} \\ N\\ N\\ \end{array} } \right] \right) , \end{aligned}$$
(76)

\({\mathbf{g}}_{2}\) is the \(5 \times 1\) vector

$$\begin{aligned} {\mathbf{g}}_{2}=\mathrm {exp}({\mathbf{A}}_{2}\cdot {\mathbf{g}}_{1})=\mathrm {exp}\left( \left[ {\begin{array}{cccc} 2 &{} 0 &{} 0 &{} -1\\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1\\ 1 &{} 0 &{} 0 &{} -1\\ 0 &{} 0 &{} 1 &{} -1\\ \end{array} } \right] \cdot \log \left( \left[ {\begin{array}{c} \sum X_{i} \\ \sum X_{i}^{2} \\ N\\ N\\ \end{array} } \right] \right) \right) =\left[ {\begin{array}{c} \frac{\left( \sum X_{i} \right) ^{2}}{N}\\ \sum X_{i}^{2} \\ N\\ {\bar{X}}\\ \mathrm {1}\\ \end{array} } \right] , \end{aligned}$$
(77)

\({\mathbf{g}}_{3}\) is the \(3 \times 1\) vector

$$\begin{aligned} {\mathbf{g}}_{3}= & {} \mathrm {log}\left( {\mathbf{A}}_{3}\cdot {\mathbf{g}}_{2} \right) =\mathrm {log}\left( \left[ {\begin{array}{ccccc} -1 &{} 1 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0 &{} -1\\ 0 &{} 0 &{} 0 &{} 1 &{} 0\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} \frac{\left( \sum X_{i} \right) ^{2}}{N}\\ \sum X_{i}^{2} \\ N\\ {\bar{X}}\\ \mathrm {1}\\ \end{array} } \right] \right) , \nonumber \\= & {} \mathrm {log}\left( \left[ {\begin{array}{c} SS\\ N-1\\ {\bar{X}}\\ \end{array} } \right] \right) , \end{aligned}$$
(78)

\({\mathbf{g}}_{4}\) is the \(2 \times 1\) vector

$$\begin{aligned} {\mathbf{g}}_{4}=\mathrm {exp}({{\mathbf{A}}}_{4}\cdot {\mathbf{g}}_{3})= \exp \left( \left[ {\begin{array}{ccc} {\frac{1}{2}} &{} {-\frac{1}{2}} &{} 0 \\ 0 &{} 0 &{} 1 \\ \end{array} } \right] \cdot \mathrm {log}\left( \left[ {\begin{array}{c} SS\\ N-1\\ {\bar{X}}\\ \end{array} } \right] \right) \right) =\left[ {\begin{array}{c} s_{X}\\ {\bar{X}}\\ \end{array} } \right] , \end{aligned}$$
(79)

and \({\mathbf{g}}_{5}\) is the \(8 \times 1\) vector

$$\begin{aligned} {\mathbf{g}}_{5}={{\mathbf{A}}}_{5}\cdot {\mathbf{g}}_{4} = \left[ {\begin{array}{ll} {\mathbf{f}} &{} {\mathbf {1}}_{8}\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} s_{X}\\ {\bar{X}}\\ \end{array} } \right] ={\mathbf {g}}({\hat{\mathbf {m}}})={{\mathbf{St}}}_{{\mathbf{b}}}. \end{aligned}$$
(80)

Next, \({\mathbf{G}}_{0}={\mathbf{I}}_{{\varvec{k}}}\). \({\mathbf{G}}_{1}\) is the \(4 \times k\) matrix,

$$\begin{aligned} {{\mathbf{G}}_{1}{\mathbf{=D}}}^{-1}\left[ {\mathbf{A}}_{1}{{\mathbf{\cdot g}}}_{0} \right] \cdot {\mathbf{A}}_{1}\cdot {\mathbf{G}}_{0}={\mathbf{D}}^{-1}\left[ {\begin{array}{c} \sum X_{i} \\ \sum X_{i}^{2} \\ N\\ N\\ \end{array} } \right] \cdot \left[ {\begin{array}{llll} {\mathbf{r}} &{} {\mathbf{r}}^{\left( 2 \right) } &{} {\mathbf {1}}_{k} &{} {\mathbf {1}}_{k}\\ \end{array} } \right] ^{\prime }=\left[ {\begin{array}{c} \frac{{\mathbf{r}}^{{\prime }}}{\sum X_{i} }\\ \frac{{\mathbf{r}}^{\left( 2 \right) {\prime }}}{\sum X_{i}^{2} }\\ \frac{{\mathbf {1}}_{k}^{{\prime }}}{N}\\ \frac{{\mathbf {1}}_{k}^{{\prime }}}{N}\\ \end{array} } \right] . \end{aligned}$$
(81)

Then \({\mathbf{G}}_{2}\) is the \(5 \times k\) matrix

$$\begin{aligned} {\mathbf{G}}_{2}= & {} {\mathbf{D}}\left[ \exp \left( {\mathbf{A}}_{2}\cdot {\mathbf{g}}_{\mathrm {1}} \right) \right] \cdot {\mathbf{A}}_{\mathrm {2}} \cdot {\mathbf{G}}_{1} \nonumber \\= & {} {\mathbf{D}}\left( \left[ {\begin{array}{c} \frac{\left( \sum X_{i} \right) ^{2}}{N}\\ \sum X_{i}^{2} \\ N\\ {\bar{X}}\\ \mathrm {1}\\ \end{array} } \right] \right) \left[ {\begin{array}{cccc} 2 &{} 0 &{} 0 &{} -1\\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1\\ 1 &{} 0 &{} 0 &{} -1\\ 0 &{} 0 &{} 1 &{} -1\\ \end{array} } \right] \left[ {\begin{array}{c} \frac{{\mathbf{r}}^{{\prime }}}{\sum X_{i} }\\ \frac{{\mathbf{r}}^{\left( 2 \right) {\prime }}}{\sum X_{i}^{2} }\\ \frac{{\mathbf {1}}_{k}^{{\prime }}}{N}\\ \frac{{\mathbf {1}}_{k}^{{\prime }}}{N}\\ \end{array} } \right] {\mathbf{=}}\left[ {\begin{array}{c} 2{\mathbf{r}}^{{\prime }}{\bar{X}}-{\bar{X}}^{2}\\ {\mathbf{r}}^{(2)\prime } \\ {\mathbf {1}}_{k}^{{\prime }}\\ \frac{{\mathbf{d}}^{{\prime }}}{N}\\ {\mathbf {0}}_{k}^{{\prime }}\\ \end{array} } \right] , \end{aligned}$$
(82)

\({\mathbf{G}}_{3}\) is the \(3 \times k\) matrix

$$\begin{aligned} {\mathbf{G}}_{3}= & {} {\mathbf{D}}^{-1}\left[ {\mathbf{A}}_{3}\cdot {\mathbf{g}}_{\mathrm {2}} \right] \cdot {\mathbf{A}}_{\mathrm {3}} \cdot {\mathbf{G}}_{2} \nonumber \\= & {} {\mathbf{D}}^{-1}\left( \left[ {\begin{array}{c} SS\\ N-1\\ {\bar{X}}\\ \end{array} } \right] \right) \cdot \left[ {\begin{array}{ccccc} -1 &{} 1 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0 &{} -1\\ 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ \end{array} } \right] \cdot \left[ {\begin{array}{c} 2{\mathbf{r}}^{{\prime }}{\bar{X}}-{\bar{X}}^{2}\\ {\mathbf{r}}^{(2) \prime }\\ {\mathbf {1}}_{k}^{{\prime }}\\ \frac{{\mathbf{d}}^{{\prime }}}{N}\\ {\mathbf {0}}_{k}^{{\prime }}\\ \end{array} } \right] =\left[ {\begin{array}{c} {\mathbf{d}}^{*{\prime }}\\ {\mathbf{e}}^{\prime } \\ \frac{{\mathbf{d}}^{{\prime }}}{\sum X_{i} }\\ \end{array} } \right] , \end{aligned}$$
(83)

where \({\mathbf{d}}^{*}=\frac{{\mathbf{d}}^{\mathbf {(2)}}}{SS}\). Then \({\mathbf{G}}_{4}\) is the \(2 \times k\) matrix

$$\begin{aligned} {\mathbf{G}}_{4}= & {} {\mathbf{D}}\left[ \exp \left( {\mathbf{A}}_{4}\cdot {\mathbf{g}}_{\mathrm {3}} \right) \right] \cdot {\mathbf{A}}_{\mathrm {4}} \cdot {\mathbf{G}}_{3} \nonumber \\= & {} \left[ {\begin{array}{cc} s_{X} &{} 0\\ 0 &{} {\bar{X}}\\ \end{array} } \right] \cdot \left[ {\begin{array}{ccc} {\frac{1}{2}} &{} {-\frac{1}{2}} &{} 0 \\ 0 &{} 0 &{} 1 \\ \end{array} } \right] {\cdot }\left[ {\begin{array}{c} {\mathbf{d}}^{*{\prime }}\\ {\mathbf{e}}^{\prime } \\ \frac{{\mathbf{d}}^{{\prime }}}{\sum X_{i} }\\ \end{array} } \right] =\left[ {\begin{array}{c} \frac{s_{X}}{2}\cdot \left( {\mathbf{d}}^{*{\prime }}-{\mathbf{e}}^{\prime } \right) \\ \frac{{\mathbf{d}}^{{\prime }}}{N}\\ \end{array} } \right] , \end{aligned}$$
(84)

and \({\mathbf{G}}_{5}\) is the \(8 \times k\) matrix

$$\begin{aligned} {\mathbf{G}}_{5}={\mathbf{G}}={\mathbf{A}}_{5}\cdot {\mathbf{G}}_{4}=\left[ {\begin{array}{ll} {\mathbf{f}} &{} {\mathbf {1}}_{8}\\ \end{array} } \right] \left[ {\begin{array}{c} \frac{s_{X}}{2}\cdot \left( {\mathbf{d}}^{*{\prime }}-{\mathbf{e}}^{\prime } \right) \\ \frac{{\mathbf{d}}^{{\prime }}}{{\varvec{N}}}\\ \end{array} } \right] =\frac{{{\mathbf{f}}\cdot s}_{X}}{2}\cdot \left( {\mathbf{d}}^{*{\prime }}-{\mathbf{e}}^{\prime } \right) +\frac{{{\mathbf {1}}_{8}{\mathbf{\cdot d}}}^{{\prime }}}{N}. \end{aligned}$$
(85)

The stanine boundaries are not homogeneous of order 0, because \({\mathbf{St}}_{{\varvec{b}}}\) is obtained using \(s_{X}\). As a result, the asymptotic variance is approximated by \({\mathbf{V}}_{{\mathbf{St}}}\approx {\mathbf{G}}{\mathbf{V}}_{{\hat{\mathbf{m}}}}{\mathbf{G}}^{\prime }\) (cf. Eq. 2). Some tedious algebra shows that the sample estimates of the elements of the asymptotic covariance matrix \({\mathbf{V}}_{{\mathbf{St}}}\) are

$$\begin{aligned} V_{{\varvec{St}}_{gh}}\approx {\sum }_{j=1}^k {\sum }_{i=1}^k {t \cdot \left[ \frac{{f_{g}{\mathbf{\cdot }}s\!}_{X}}{2}\cdot \left( d_{i}^{*}-e \right) +\frac{d_{i}}{N} \right] } \cdot \left[ \frac{{f_{_{h}}{\mathbf{\cdot }}s\!}_{X}}{2}\cdot \left( d_{j}^{*}-e \right) +\frac{d_{j}}{N} \right] , \end{aligned}$$
(86)

where \(t={{\delta }_{ij}{\hat{\mathrm{m}}}}_{j} -\frac{{\hat{\mathrm{m}}}_{i} {\hat{\mathrm{m}}}_{j}}{N}\). For large N, Eq. 86 reduces to

$$\begin{aligned} V_{{\varvec{St}}_{gh}}\approx {\sum }_{i=1}^k {{\hat{\mathrm{m}}}_{i} \cdot \left[ \frac{{f_{g} \cdot s\!}_{X}}{2}\cdot \left( d_{i}^{*}-\frac{1}{N} \right) +\frac{d_{i}}{N} \right] \cdot \left[ \frac{{{f}_{_{h}} \cdot s\!}_{X}}{2}\cdot \left( d_{i}^{*}-\frac{1}{N} \right) +\frac{d_{i}}{N} \right] }. \end{aligned}$$
(87)
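A minimal R sketch (our own code) of Eq. 86; the multiplier vector f is defined in Eq. 28 and not reproduced in this appendix, so the usual stanine cut values are assumed here:

r <- 0:5; m <- c(3, 7, 12, 9, 6, 3)
N <- sum(m); x <- rep(r, m)
sX <- sd(x); SS <- sum((x - mean(x))^2); d <- r - mean(x)
a  <- d^2 / SS - 1 / (N - 1)                  # d_i^* - e
f  <- seq(-1.75, 1.75, by = 0.5)              # assumed: St_b = X-bar + f * s_X
W  <- outer(f, a) * sX / 2 +                  # W[g, i] is the bracket in Eq. 86
      matrix(d / N, length(f), length(r), byrow = TRUE)
V  <- W %*% diag(m) %*% t(W) - (W %*% m) %*% t(W %*% m) / N   # Eq. 86
sqrt(diag(V))                                 # standard errors of the eight boundaries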

Appendix 4

The sample estimates of the k standardized scores corresponding to \({\mathbf{r}}\), collected in the \(k \times 1\) vector \({\mathbf{z}}\) (Eq. 31), can be written as

$$\begin{aligned} {\mathbf{z}}={\mathbf{A}}_{7}\cdot \mathrm {exp}\left( {\mathbf{A}}_{6}\cdot \mathrm {log}\left( {\mathbf{A}}_{5}\cdot \mathrm {exp}\left( {\mathbf{A}}_{4}\cdot \mathrm {log}\left( {\mathbf{A}}_{3}\cdot \mathrm {exp}\left( {\mathbf{A}}_{2}\cdot \mathrm {log}\left( {\mathbf{A}}_{1}\cdot {\hat{\mathbf{m}}} \right) \right) \right) \right) \right) \right) . \end{aligned}$$
(88)

Let \({\mathbf{A}}_{1}\) be the \((k+2) \times k\) matrix

$$\begin{aligned} {\mathbf{A}}_{1}=\left[ {\begin{array}{lll} {\mathbf{I}}_{k} &{} {\mathbf{1}}_{k} &{} {\mathbf{1}}_{k}\\ \end{array} } \right] ^{\prime }. \end{aligned}$$
(89)

Let \(\oplus \) indicate the direct sum, for example \({\mathbf{X}}\oplus {\mathbf{Y}}=\left[ {\begin{array}{cc} {\mathbf{X}} &{} 0\\ 0 &{} {\mathbf{Y}}\\ \end{array} } \right] \). Then \({\mathbf{A}}_{2}\) is the \((k+1) \times (k+2)\) matrix

$$\begin{aligned} {\mathbf{A}}_{2}={\mathbf{I}}_{k}\oplus \left[ {\begin{array}{ll} 1 &{} -1\\ \end{array} } \right] , \end{aligned}$$
(90)

\({\mathbf{A}}_{3}\) is the \((k+4) \times (k+1)\) matrix

$$\begin{aligned} {\mathbf{A}}_{3}=\left[ {\begin{array}{llll} {\mathbf{r}} &{} {\mathbf{r}}^{\left( 2 \right) } &{} {\mathbf{1}}_{k} &{} {\mathbf{1}}_{k} \\ \end{array} } \right] ^{\prime } \oplus {\mathbf{r}}, \end{aligned}$$
(91)

\({{\mathbf{A}}}_{4}\) is the \((5+k) \times (4+k)\) matrix

$$\begin{aligned} {\mathbf{A}}_{4}=\left[ {\begin{array}{cccc} 1 &{} 0 &{} 0 &{} -1 \\ 2 &{} 0 &{} 0 &{} -1 \\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{}1 &{} -1\\ \end{array} } \right] \oplus {\mathbf{I}}_{k}, \end{aligned}$$
(92)

\({{\mathbf{A}}}_{5}\) is the \((k+3) \times (k+5)\) matrix

$$\begin{aligned} {{\mathbf{A}}}_{5}=1\oplus \left[ {\begin{array}{ll} -1 &{} 1\\ \end{array} } \right] \oplus \left[ {\begin{array}{ll} 1 &{} -1\\ \end{array} } \right] \oplus {\mathbf{I}}_{k}, \end{aligned}$$
(93)

\({\mathbf{A}}_{6}\) is the \((1+k) \times (3+k)\) matrix

$$\begin{aligned} {\mathbf{A}}_{6}=\left[ {\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 1 &{} -\frac{1}{{{2}}} &{} \frac{1}{{2}} &{} {\mathbf {0}}_{k}^{\prime }\\ {\mathbf {0}}_{k} &{} -\frac{1}{2}\cdot {\mathbf {1}}_{k} &{} \frac{1}{{{2}}}\cdot {\mathbf {1}}_{k}&{} {\mathbf{I}}_{\mathrm {k}}\\ \end{array} } \right] , \end{aligned}$$
(94)

and \({\mathbf{A}}_{7}\) is the \(k \times (k+1)\) matrix

$$\begin{aligned} {\mathbf{A}}_{7}=\left[ {\begin{array}{cc} {-{\mathbf {1}}}_{k} &{} {\mathbf{I}}_{k}\\ \end{array} } \right] . \end{aligned}$$
(95)

It follows that \({\mathbf{g}}_{0}={\hat{\mathbf{m}}}\), the \((k+2) \times 1\) vector \({\mathbf{g}}_{1}\) equals

$$\begin{aligned} {\mathbf{g}}_{1}=\log \left( {\mathbf{A}}_{1}\cdot {\hat{\mathbf{m}}} \right) =\log \left( \left[ {\begin{array}{ccc} {\mathbf{I}}_{k} &{} {\mathbf{1}}_{k} &{} {\mathbf{1}}_{k}\\ \end{array} } \right] ^{\prime } \cdot {\hat{\mathbf{m}}} \right) =\log {\left( \left[ {\begin{array}{ccc} {\hat{\mathbf{m}}}^{\prime } &{} N &{} N\\ \end{array} } \right] ^{\prime } \right) ,} \end{aligned}$$
(96)

the \((k+1) \times 1\) vector \({\mathbf{g}}_{2}\) equals

$$\begin{aligned} {\mathbf{g}}_{2}=\mathrm {exp}\left( {\mathbf{A}}_{2}\cdot {\mathbf{g}}_{1} \right) =\mathrm {exp}\left( \left( {\mathbf{I}}_{k}\oplus \left[ {\begin{array}{cc} 1 &{} -1\\ \end{array} } \right] \right) \cdot \log \left( \left[ {\begin{array}{ccc} {\hat{\mathbf{m}}}^{\prime } &{} N &{} N\\ \end{array} } \right] ^{\prime } \right) \right) =\left[ {\begin{array}{cc} {\hat{\mathbf{m}}}^{\prime } &{} 1\\ \end{array} } \right] ^{\prime }, \end{aligned}$$
(97)

the \((k+4) \times 1\) vector \({\mathbf{g}}_{3}\) equals

$$\begin{aligned} {\mathbf{g}}_{3}=\mathrm {log}\left( {\mathbf{A}}_{3}\cdot {\mathbf{g}}_{2} \right) =\mathrm {log}\left( \left( \left[ {\begin{array}{cccc} {\mathbf{r}} &{} {\mathbf{r}}^{(2)} &{} {\mathbf{1}}_{k} &{} {\mathbf{1}}_{k}\\ \end{array} } \right] '\oplus {\mathbf{r}} \right) \cdot \left[ {\begin{array}{cc} {\hat{\mathbf{m}}}^{\prime } &{} 1\\ \end{array} } \right] ^{\prime } \right) =\mathrm {log}\left[ {\begin{array}{c} \sum X_{i} \\ \sum X_{i}^{2} \\ N\\ N\\ {\mathbf{r}}\\ \end{array} } \right] , \end{aligned}$$
(98)

the \((k+5) \times 1\) vector \({\mathbf{g}}_{4}\) equals

$$\begin{aligned} {\mathbf{g}}_{4}= & {} \exp \left( {{\mathbf{A}}}_{4}\cdot {\mathbf{g}}_{3} \right) \nonumber \\= & {} \mathrm {exp}\left( \left( \left[ {\begin{array}{cccc} 1 &{} 0 &{} 0 &{} -1\\ 2 &{} 0 &{} 0 &{} -1\\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} 1 &{} -1\\ \end{array} } \right] \oplus {\mathbf{I}}_{k} \right) \cdot \mathrm {log}\left[ {\begin{array}{c} \sum X_{i} \\ \sum X_{i}^{2} \\ N\\ N\\ {\mathbf{r}}\\ \end{array} } \right] \right) =\left[ {\begin{array}{c} {\bar{X}}\\ \frac{\left( \sum X_{i} \right) ^{2}}{N}\\ \sum X_{i}^{2} \\ N\\ 1\\ {\mathbf{r}}\\ \end{array} } \right] , \end{aligned}$$
(99)

the \((k+3) \times 1\) vector \({\mathbf{g}}_{5}\) equals

$$\begin{aligned} {\mathbf{g}}_{5}= & {} \log \left( {\mathbf{A}}_{5}\cdot {\mathbf{g}}_{4} \right) \nonumber \\= & {} \mathrm {log}\left( \left[ 1\oplus \left[ {\begin{array}{cc} -1 &{} 1\\ \end{array} } \right] \oplus \left[ {\begin{array}{cc} 1 &{} -1\\ \end{array} } \right] \oplus {\mathbf{I}}_{k} \right] \cdot \left[ {\begin{array}{c} {\bar{X}}\\ \frac{\left( \sum X_{i} \right) ^{2}}{N}\\ \sum X_{i}^{2} \\ N\\ 1\\ {\mathbf{r}}\\ \end{array} } \right] \right) =\mathrm {log}\left[ {\begin{array}{c} {\bar{X}}\\ SS\\ N-1\\ {\mathbf{r}}\\ \end{array} } \right] , \end{aligned}$$
(100)

the \((k+1) \times 1\) vector \({\mathbf{g}}_{6}\) equals

$$\begin{aligned} {\mathbf{g}}_{6}= & {} \exp \left( {{\mathbf{A}}}_{6}\cdot {\mathbf{g}}_{5} \right) \nonumber \\= & {} \mathrm {exp}\left( \left[ {\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 1 &{} -\frac{1}{{{2}}} &{} \frac{1}{{{2}}} &{} {\mathbf {0}}_{k}^{\prime }\\ {\mathbf {0}}_{k} &{} -\frac{1}{2}\cdot {\mathbf {1}}_{k} &{} \frac{1}{{{2}}}\cdot {\mathbf {1}}_{k}&{} {\mathbf{I}}_{\mathrm {k}}\\ \end{array} } \right] \cdot \mathrm {log}\left[ {\begin{array}{c} {\bar{X}}\\ SS\\ N-1\\ {\mathbf{r}}\\ \end{array} } \right] \right) =\left[ {\begin{array}{c} \frac{{\bar{X}}}{s_{X}}\\ \frac{{\mathbf{r}}}{s_{X}}\\ \end{array} } \right] , \end{aligned}$$
(101)

and the \(k \times 1\) vector \({\mathbf{g}}_{7}\) equals

$$\begin{aligned} {\mathbf{g}}_{{\mathbf {7}}}={\mathbf {g}}({\hat{\mathbf {m}}})={{\mathbf{A}}}_{7}\cdot {\mathbf{g}}_{6}=\left[ {\begin{array}{cc} {-{\mathbf {1}}}_{k} &{} {\mathbf{I}}_{k}\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} \frac{{\bar{X}}}{s_{X}}\\ \frac{{\mathbf{r}}}{s_{X}}\\ \end{array} } \right] =\left[ \frac{{\mathbf{d}}}{s_{X}} \right] ={\mathbf{z}}. \end{aligned}$$
(102)

Next, \({\mathbf{G}}_{0}={\mathbf{I}}_{k}\), the \((k+2) \times k\) matrix \({\mathbf{G}}_{1}\) equals

$$\begin{aligned} {\mathbf{G}}_{1}= & {} {\mathbf{D}}^{-1}\left[ {\mathbf{A}}_{1} \cdot {\mathbf{g}}_{0} \right] \cdot {\mathbf{A}}_{1}\cdot {\mathbf{G}}_{0} \nonumber \\= & {} \mathbf{D}^{-1}{\left( \left[ {\begin{array}{ccc} {\hat{\mathbf{m}}}^{{\prime }} &{} N &{} N\\ \end{array} } \right] ^{\prime } \right) {\cdot \left[ {\begin{array}{ccc} {\mathbf{I}}_{k} &{} {\mathbf{1}}_{k} &{} {\mathbf{1}}_{k}\\ \end{array} } \right] }^{\prime }=\left[ {\begin{array}{c} {\mathbf{D}}^{-1}({\hat{\mathbf{m}}})\\ 1_{k}^{\prime } / N\\ 1_{k}^{\prime } / N\\ \end{array} } \right] }, \end{aligned}$$
(103)

the \((k+1) \times k\) matrix \({\mathbf{G}}_{2}\) equals

$$\begin{aligned} {\mathbf{G}}_{2}= & {} {\mathbf{D}}\left[ \mathrm {exp}({\mathbf{A}}_{2}\cdot {\mathbf{g}}_{\mathrm {1}}) \right] \cdot {\mathbf{A}}_{\mathrm {2}} \cdot {\mathbf{G}}_{1} \nonumber \\= & {} {\mathbf{D}}\left( \left[ {\begin{array}{cc} {\hat{\mathbf{m}}} &{} 1\\ \end{array} } \right] ^{\prime } \right) \cdot \left[ {\mathbf{I}}_{k}\oplus \left[ {\begin{array}{cc} 1 &{} -1\\ \end{array} } \right] \right] \cdot \left[ {\begin{array}{c} {\mathbf{D}}^{-1}({\hat{\mathbf{m}}})\\ 1_{k}^{\prime } / N\\ 1_{k}^{\prime } / N\\ \end{array} } \right] =\left[ {\begin{array}{c} {\mathbf{I}}_{k}\\ {\mathbf {0}}_{k}^{\prime }\\ \end{array} } \right] , \end{aligned}$$
(104)

the \((k+4) \times k\) matrix \({\mathbf{G}}_{3}\) equals

$$\begin{aligned} {\mathbf{G}}_{3}={\mathbf{D}}^{-1}\left( \left[ {\begin{array}{c} \sum X_{i} \\ \sum X_{i}^{2} \\ N\\ N\\ {\mathbf{r}}\\ \end{array} } \right] \right) \cdot \left[ \left[ {\begin{array}{cccc} {\mathbf{r}} &{} {\mathbf{r}}^{\left( 2 \right) } &{} {\mathbf{1}}_{k} &{} {\mathbf{1}}_{k}\\ \end{array} } \right] ^{\prime }\oplus {\mathbf{r}} \right] \cdot \left[ {\begin{array}{c} {\mathbf{I}}_{k}\\ {\mathbf {0}}_{k}^{\prime }\\ \end{array} } \right] =\left[ {\begin{array}{c} {\mathbf{r}}^{{\prime }} / \sum X_{i} \\ {\mathbf{r}}^{(2) {\prime }} / \sum X_{i}^{2} \\ {\mathbf {1}}_{k}^{\prime } / N\\ {\mathbf {1}}_{k}^{\prime } / N\\ {\mathbf {0}}_{k\times k}\\ \end{array} } \right] , \end{aligned}$$
(105)

the \((k+5) \times k\) matrix \({\mathbf{G}}_{4}\) equals

$$\begin{aligned} {\mathbf{G}}_{4}= & {} {\mathbf{D}}\left[ \exp \left( {\mathbf{A}}_{4}\cdot {\mathbf{g}}_{\mathrm {3}} \right) \right] \cdot {\mathbf{A}}_{\mathrm {4}} \cdot {\mathbf{G}}_{3} \nonumber \\= & {} {\mathbf{D}}\left( \left[ {\begin{array}{c} {\bar{X}}\\ \frac{\left( \sum X_{i} \right) ^{2}}{N}\\ \sum X_{i}^{2} \\ N\\ 1\\ {\mathbf{r}}\\ \end{array} } \right] \right) \cdot \left[ \left[ {\begin{array}{cccc} 1 &{} 0 &{} 0 &{} -1 \\ 2 &{} 0 &{} 0 &{} -1 \\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 1 &{} -1 \\ \end{array} } \right] \oplus {\mathbf{I}}_{k} \right] \cdot \left[ {\begin{array}{c} {\mathbf{r}}^{{\prime }} / \sum X_{i} \\ {\mathbf{r}}^{\left( 2 \right) {\prime }} / \sum X_{i}^{2} \\ {\mathbf {1}}_{k}^{\prime } / N\\ {\mathbf {1}}_{k}^{\prime } / N\\ {\mathbf {0}}_{k\times k}\\ \end{array} } \right] =\left[ {\begin{array}{cc} \frac{{\mathbf{d}}^{{\prime }}}{N}\\ {\mathbf{r}}^{{\prime }}2{\bar{X}}-{\bar{X}}^{2}\\ {\mathbf{r}}^{\left( 2 \right) {\prime }}\\ {\mathbf {1}}_{k}^{{\prime }}\\ {\mathbf {0}}_{(k+1)\times k}\\ \end{array} } \right] ,\nonumber \\ \end{aligned}$$
(106)

the \((k+3) \times k\) matrix \({\mathbf{G}}_{5}\) equals

$$\begin{aligned} {\mathbf{G}}_{5}= & {} {\mathbf{D}}^{-1}\left[ {\mathbf{A}}_{5}\cdot {\mathbf{g}}_{\mathrm {4}} \right] \cdot {\mathbf{A}}_{\mathrm {5}} \cdot {\mathbf{G}}_{4} \nonumber \\= & {} {\mathbf{D}}^{-1}\left[ {\begin{array}{c} {\bar{X}}\\ SS\\ N-1\\ {\mathbf{r}}\\ \end{array} } \right] \cdot \left[ 1\oplus \left[ {\begin{array}{cc} -1 &{} 1\\ \end{array} } \right] \oplus \left[ {\begin{array}{cc} 1 &{} -1\\ \end{array} } \right] \oplus {\mathbf{I}}_{k} \right] \cdot \left[ {\begin{array}{c} \frac{{\mathbf{d}}^{{\prime }}}{N}\\ {\mathbf{r}}^{{\prime }}2{\bar{X}}-{\bar{X}}^{2}\\ {\mathbf{r}}^{\left( 2 \right) {\prime }}\\ {\mathbf {1}}_{k}^{{\prime }}\\ {\mathbf {0}}_{(k+1)\times k}\\ \end{array} } \right] \nonumber \\= & {} \left[ {\begin{array}{c} \frac{{\mathbf{r}}^{{\prime }}}{\sum X_{i} }-\frac{\mathrm {1}}{N}\\ {\mathbf{d}}^{*{\prime }}\\ {\mathbf{e}}^{\prime }\\ {\mathbf {0}}_{k\times k}\\ \end{array} } \right] , \end{aligned}$$
(107)

and the \(\left( k+1 \right) \times k\) matrix \({\mathbf{G}}_{6}\) equals

$$\begin{aligned} {\mathbf{G}}_{6}= & {} {\mathbf{D}}\left[ \exp \left( {\mathbf{A}}_{6}\cdot {\mathbf{g}}_{\mathrm {5}} \right) \right] \cdot {\mathbf{A}}_{\mathrm {6}} \cdot {\mathbf{G}}_{5} \nonumber \\= & {} {\mathbf{D}}\left( \left[ {\begin{array}{c} \frac{{\bar{X}}}{s_{X}}\\ \frac{{\mathbf{r}}}{s_{X}}\\ \end{array} } \right] \right) \cdot \left[ {\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 1 &{} -\frac{1}{{{2}}} &{} \frac{1}{{{2}}} &{} {\mathbf {0}}_{k}^{\prime }\\ {\mathbf {0}}_{k} &{} -\frac{1}{2}\cdot {\mathbf {1}}_{k} &{} \frac{1}{{{2}}}\cdot {\mathbf {1}}_{k}&{} {\mathbf{I}}_{\mathrm {k}}\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} \frac{{\mathbf{r}}^{{\prime }}}{\sum X_{i} }-\frac{\mathrm {1}}{N}\\ {\mathbf{d}}^{*{\prime }}\\ {\mathbf{e}}^{\prime }\\ {\mathbf {0}}_{k\times k}\\ \end{array} } \right] \nonumber \\= & {} \left[ {\begin{array}{c} \frac{{\bar{X}}}{s_{X}}\cdot \left[ \frac{{\mathbf{r}}^{{\prime }}}{\sum X}-\frac{{\mathbf {1}}_{k}^{{\prime }}}{N}-0.5\left( {\mathbf{d}}^{*^{\prime }}-{\mathbf{e}}^{\prime } \right) \right] \\ \frac{-\mathrm {0.5{\mathbf {r}}}}{s_{X}} \cdot \left( {\mathbf{d}}^{*^{\prime }}-{\mathbf{e}}^{\prime } \right) \\ \end{array} } \right] . \end{aligned}$$
(108)

Finally, the \(k\times k\) matrix \({\mathbf{G}}_{7}\) equals

$$\begin{aligned} {\mathbf{G}} = {\mathbf{A}}_{\mathrm {7}} \cdot {\mathbf{G}}_{6}= & {} \left[ {\begin{array}{cc} {-{\mathbf {1}}}_{k} &{} {\mathbf{I}}_{k}\\ \end{array} } \right] \cdot \left[ {\begin{array}{c} \frac{{\bar{X}}}{s_{X}}\cdot \left[ \frac{{\mathbf{r}}^{{\prime }}}{\sum X}-\frac{{\mathbf {1}}_{k}^{{\prime }}}{N}-0.5\left( {\mathbf{d}}^{*^{\prime }}-{\mathbf{e}}^{\prime } \right) \right] \\ \frac{-\mathrm {0.5{\mathbf {r}}}}{s_{X}} \cdot \left( {\mathbf{d}}^{*^{\prime }}-{\mathbf{e}}^{\prime } \right) \\ \end{array} } \right] \nonumber \\= & {} \frac{{\mathbf {1}}_{k}}{s_{X}}\cdot \left\{ -{\bar{X}}\left[ \frac{{\mathbf{r}}^{{\prime }}}{\sum X }-\frac{{\mathbf {1}}_{k}^{{\prime }}}{N}-0.5\left( {\mathbf{d}}^{*^{\prime }}-{\mathbf{e}}^{\prime } \right) \right] -0.5{\mathbf{r}}\left( {\mathbf{d}}^{*^{\prime }}-{\mathbf{e}}^{\prime } \right) \right\} , \end{aligned}$$
(109)

with elements

$$\begin{aligned} G_{ij}=\frac{1}{s_{X}}\cdot \left\{ -{\bar{X}}\left[ \frac{r_{j}}{\sum X }-\frac{1}{N}-0.5\left( d_{j}^{*}-e \right) \right] -0.5r_{i}\left( d_{j}^{*}-e \right) \right\} . \end{aligned}$$
(110)

The Z-scores are not homogeneous of order 0, because \({\mathbf{z}}\) is obtained using \(s_{X}\). As a result, the asymptotic variance is approximated by \({\mathbf{V}}_{{\mathbf{z}}}\approx {\mathbf{G}}{\mathbf{V}}_{{\hat{\mathbf{m}}}}{\mathbf{G}}^{\prime }\) (cf. Eq. 2). Some tedious algebra shows that the sample estimates of the elements of the asymptotic covariance matrix \({\mathbf{V}}_{{\mathbf{z}}}\) are

$$\begin{aligned} V_{Z_{gh}}\approx {\sum }_{j=1}^k {\sum }_{i=1}^k \frac{t}{s_{X}^{2}}\left[ -{\bar{X}}\left( \frac{r_{i}}{\sum X_{i} }-\frac{1}{N} -u_{i} \right) -r_{g}u_{i} \right] \cdot \left[ -{\bar{X}}\left( \frac{r_{j}}{\sum X_{i} }-\frac{1}{N} -u_{j} \right) -r_{h}u_{j} \right] ,\nonumber \\ \end{aligned}$$
(111)

where \(u_{i}= 0.5\left( d_{i}^{*}-e \right) \) and t is defined as in Eq. 86. For large N,

$$\begin{aligned} V_{{Z}_{gh}}\approx {\sum }_i {\frac{{\hat{m}}_{i}}{s_{X}^{2}}\left[ -{\bar{X}}\left( \frac{r_{i}}{\sum X_{i} }-\frac{1}{N} -u_{i} \right) -r_{g}u_{i} \right] } \cdot \left[ -{\bar{X}}\left( \frac{r_{i}}{\sum X_{i} }-\frac{1}{N} -u_{i} \right) -r_{h}u_{i} \right] .\nonumber \\ \end{aligned}$$
(112)
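Finally, a minimal R sketch (our own code) of Eqs. 110 and 111:

r <- 0:5; m <- c(3, 7, 12, 9, 6, 3)
N <- sum(m); x <- rep(r, m)
sX <- sd(x); Xbar <- mean(x); SS <- sum((x - Xbar)^2)
u <- 0.5 * ((r - Xbar)^2 / SS - 1 / (N - 1))   # u_i = 0.5 (d_i^* - e)
## G[g, i] from Eq. 110, rows indexed by g and columns by i:
G <- (outer(r, u, function(rg, ui) -rg * ui) -
      Xbar * matrix(r / sum(x) - 1 / N - u, length(r), length(r), byrow = TRUE)) / sX
V <- G %*% diag(m) %*% t(G) - (G %*% m) %*% t(G %*% m) / N     # Eq. 111
cbind(z = (r - Xbar) / sX, se = sqrt(diag(V)))                 # Eq. 102 and SEs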


Cite this article

Oosterhuis, H.E.M., van der Ark, L.A. & Sijtsma, K. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests. Psychometrika 82, 559–588 (2017). https://doi.org/10.1007/s11336-016-9535-8
