
Empirical Likelihood and Small Samples

  • Art B. Owen
Conference paper

Abstract

A Monte Carlo simulation compares 9 methods for setting central 95% confidence intervals for the mean of a small sample, for 7 different sampling distributions. The bootstrap t method is the clear winner in terms of coverage accuracy provided at least 4 observations are available. The confidence intervals tend to be long, but not unreasonably so provided at least 6 observations are available. Thus it is not true, as is commonly supposed, that small samples require one to use parametric methods.
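The bootstrap t construction compared here studentizes each resample mean by its own standard error and replaces the normal or Student-t quantiles with bootstrap quantiles of those t* values. A minimal sketch follows; the function name, resample count B, and seed handling are illustrative choices, not details from the paper.

```python
import numpy as np

def bootstrap_t_interval(x, alpha=0.05, B=2000, seed=0):
    """Central (1 - alpha) bootstrap t interval for the mean.

    Each resample mean is studentized by its own standard error,
    and bootstrap quantiles of these t* values replace the usual
    normal/Student-t quantiles."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)
    tstar = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True)
        seb = xb.std(ddof=1) / np.sqrt(n)
        # guard against degenerate resamples (all values equal)
        tstar[b] = (xb.mean() - xbar) / max(seb, 1e-12)
    q_lo, q_hi = np.quantile(tstar, [alpha / 2, 1 - alpha / 2])
    # note the reversed quantiles in the interval endpoints
    return xbar - q_hi * se, xbar - q_lo * se
```

With heavy-tailed data the t* distribution is skewed, so the two endpoints are asymmetric about the sample mean, which is how the method buys its coverage accuracy at the cost of occasionally long intervals.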

In an extreme case, sampling from the lognormal, the bootstrap t intervals are much longer than those of any of the other methods. The extra length is not excessive, in the sense that when the other methods are constrained to intervals of the same length as the bootstrap t, they do not cover the mean more often.

The coverage levels attained are very close to predictions from asymptotic theory, for n as small as 18, except for very heavy tailed distributions.

An analysis is made of the bootstrap t intervals in small samples. They are seen to be sensitive to small gaps in the order statistics from the sample. Bounds on the sampling distribution of the lengths are found using the gaps.
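The paper's title method, empirical likelihood, profiles a multinomial likelihood over distributions supported on the sample: it maximizes the product of the weights subject to the weighted mean equaling the candidate value. The sketch below computes the -2 log likelihood ratio for the mean via the Lagrange-multiplier root equation; the function name, the scipy root finder, and the numerical tolerances are my choices, not the paper's.

```python
import numpy as np
from scipy.optimize import brentq

def neg2_log_elr(x, mu):
    """-2 log empirical likelihood ratio for the mean at mu.

    Maximizes prod(n * w_i) subject to sum(w_i) = 1 and
    sum(w_i * x_i) = mu, with weights
    w_i = 1 / (n * (1 + lam * (x_i - mu))) for the root lam of
    sum((x_i - mu) / (1 + lam * (x_i - mu))) = 0."""
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu lies outside the convex hull of the data
    eps = 1e-10
    # weights stay positive only for lam in this open interval
    lo = (-1 + eps) / z.max()
    hi = (-1 + eps) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))
```

An asymptotic 95% interval is then the set of mu with neg2_log_elr(x, mu) at most 3.84, the chi-squared(1) quantile; scanning mu over a grid between min(x) and max(x) recovers it, since the ratio is minimized (at zero) at the sample mean and increases toward the extremes.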

Keywords

Interval Length · Empirical Likelihood · Coverage Level · Central Interval · Coverage Accuracy



Copyright information

© Springer-Verlag New York, Inc. 1992

Authors and Affiliations

  • Art B. Owen
  1. Dept. of Statistics, Stanford University, USA
