  • J. L. Hodges, Jr.
  • E. L. Lehmann
Open Access
Part of the Selected Works in Probability and Statistics book series (SWPS)

Introduction and summary

Consider a statistical procedure (Method A) which is based on n observations, and a less effective procedure (Method B) which requires a larger number k_n of observations to give equally good performance. Comparison of the two methods involves the comparison of k_n with n, and this can be carried out in various ways. Perhaps the most natural quantity to examine is the difference k_n − n, the number of additional observations required by the less effective method. Such difference comparisons have been performed from time to time. (See, for example, Fisher (1925), Walsh (1949) and Pearson (1950).) Historically, however, comparisons have been based mainly on the ratio k_n/n. Thus, Fisher (1920), in comparing the mean absolute deviation with the mean square deviation as estimates of normal scale, found this ratio to be 1/1.14. Similarly, in 1925 he found a large-sample ratio of 2/π for the median compared with the mean for estimating normal location, and the same value was found by Cochran (1937) for the sign test relative to the t-test in the normal case.
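The classical ratios quoted above can be checked numerically. A minimal sketch, assuming the standard closed forms 1/(π − 2) for the scale comparison and 2/π for the location comparison (these formulas are well-known results not stated explicitly in the text):

```python
import math

# Asymptotic relative efficiencies for N(mu, sigma^2) data, i.e. the
# large-sample limits of n / k_n for the comparisons cited in the text.

# Mean absolute deviation vs. sample standard deviation as estimators of
# normal scale (Fisher, 1920): ARE = 1/(pi - 2), i.e. roughly 1/1.14.
are_scale = 1.0 / (math.pi - 2.0)

# Sample median vs. sample mean for normal location (Fisher, 1925), and
# sign test vs. t-test in the normal case (Cochran, 1937): ARE = 2/pi.
are_location = 2.0 / math.pi

print(f"mean deviation vs. standard deviation: {are_scale:.4f}")
print(f"median vs. mean (= sign test vs. t):   {are_location:.4f}")
```

Evaluating these gives about 0.876 (so k_n/n ≈ 1.14) for the scale estimators and about 0.637 for the location comparison, matching the ratios 1/1.14 and 2/π in the passage above.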


References

  1. Cochran, W. G. (1937). The efficiencies of the binomial series tests of significance of a mean and of a correlation coefficient. J. Roy. Statist. Soc. Ser. A 100 69–73.
  2. Cramér, Harald (1946). Mathematical Methods of Statistics. Princeton Univ. Press.
  3. Fisher, R. A. (1920). A mathematical examination of the methods of determining the accuracy of an observation by the mean error and by the mean square error. Monthly Notices Roy. Astron. Soc. 80 758–770.
  4. Fisher, R. A. (1925). Theory of statistical estimation. Proc. Cambridge Philos. Soc. 22 700–725.
  5. Hodges, J. L., Jr. and Lehmann, E. L. (1956). The efficiency of some nonparametric competitors of the t-test. Ann. Math. Statist. 27 324–335.
  6. Hodges, J. L., Jr. and Lehmann, E. L. (1967a). Moments of chi and power of t. Proc. Fifth Berkeley Symp. Math. Statist. Prob. 1 187–201.
  7. Hodges, J. L., Jr. and Lehmann, E. L. (1967b). On medians and quasi-medians. J. Amer. Statist. Assoc. 62 926–931.
  8. Johnson, N. L. and Welch, B. L. (1939). Applications of the non-central t-distribution. Biometrika 31 362–389.
  9. Pearson, E. S. (1950). Some notes on the use of range. Biometrika 37 88–92.
  10. Walsh, John E. (1949). On the "information" lost by using a t-test when the population variance is known. J. Amer. Statist. Assoc. 44 122–125.

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • J. L. Hodges, Jr. (1)
  • E. L. Lehmann (1)
  1. University of California, Berkeley, USA
