Abstract

When testing hypotheses (or computing confidence intervals) with the one-sample Student’s T method described in Chapter 5, the central limit theorem tells us that Student’s T performs better as the sample size increases. That is, under random sampling the discrepancy between the nominal and actual Type I error probability will go to zero as the sample size goes to infinity. Unfortunately, for reasons outlined in Section 5.3 of Chapter 5, there are realistic situations where about two hundred observations are needed to get satisfactory control over the probability of a Type I error, or accurate probability coverage when computing confidence intervals. When comparing the population means of two groups of individuals, Student’s T is known to be unsatisfactory when sample sizes are small or even moderately large. In fact, it might be unsatisfactory no matter how large the sample sizes are, because under general conditions it does not converge to the correct answer (Cressie and Whitford, 1986). If we switch to the test statistic W given by Equation 5.3, the central limit theorem applies under general conditions, so using W means we converge to the correct answer as the sample sizes increase; but in some cases we again need very large sample sizes to get accurate results. (There are simple methods for improving the performance of W using what are called estimated degrees of freedom, but the improvement remains highly unsatisfactory in a wide range of situations.) Consequently, there is interest in finding methods that reduce our reliance on the central limit theorem as it applies to these techniques. That is, we would like a method that converges to the correct answer more quickly as the sample sizes get large, and such a method is described here.
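
The remedy pointed to by the keywords below is the bootstrap. As a concrete, hedged illustration of one standard variant, the bootstrap-t (percentile-t) confidence interval for a mean, the sketch below resamples the data with replacement, studentizes each bootstrap mean, and uses the empirical quantiles of those studentized values in place of Student’s T critical values. The function name, the number of bootstrap samples (2000), the 95% level, and the lognormal example data are illustrative assumptions rather than details taken from the chapter; the same resampling idea carries over to the two-sample W statistic.

```python
# A minimal sketch (not the chapter's exact algorithm) of a bootstrap-t
# confidence interval for a population mean, using only NumPy.
import numpy as np

def bootstrap_t_ci(x, b=2000, alpha=0.05, rng=None):
    """Bootstrap-t (percentile-t) confidence interval for the mean of x."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)

    # Resample with replacement and compute the studentized statistic
    # T* = (mean(x*) - mean(x)) / (s* / sqrt(n)) for each bootstrap sample.
    t_star = np.empty(b)
    for i in range(b):
        xs = rng.choice(x, size=n, replace=True)
        t_star[i] = (xs.mean() - xbar) / (xs.std(ddof=1) / np.sqrt(n))

    # Use the empirical quantiles of T* in place of Student's T quantiles.
    lo_q, hi_q = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    # Note the order: the upper quantile of T* sets the lower endpoint.
    return xbar - hi_q * se, xbar - lo_q * se

# Example with skewed (lognormal) data, a case where Student's T is known
# to need a large sample size for accurate probability coverage.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.lognormal(mean=0.0, sigma=1.0, size=30)
    print(bootstrap_t_ci(data, rng=rng))
```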

Keywords

Central Limit Theorem, Sampling Distribution, Bootstrap Method, Bootstrap Sample, Probability Coverage

Copyright information

© Springer Science+Business Media New York 2001

Authors and Affiliations

  • Rand R. Wilcox
    1. Department of Psychology, University of Southern California, Los Angeles, USA
