Bootstrap Methods: Another Look at the Jackknife

  • Bradley Efron
Part of the Springer Series in Statistics book series (SSS)

Abstract

We discuss the following problem: given a random sample \(X = (X_1, X_2, \ldots, X_n)\) from an unknown probability distribution F, estimate the sampling distribution of some prespecified random variable R(X, F) on the basis of the observed data x. (Standard jackknife theory gives an approximate mean and variance in the case \(R(X,F) = \theta(\hat F) - \theta(F)\), θ some parameter of interest.) A general method, called the "bootstrap," is introduced and shown to work satisfactorily on a variety of estimation problems. The jackknife is shown to be a linear approximation method for the bootstrap. The exposition proceeds by a series of examples: variance of the sample median, error rates in a linear discriminant analysis, ratio estimation, estimating regression parameters, etc.
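The recipe behind the method is easy to state in code. Below is a minimal sketch, not from the paper, of the bootstrap estimate of the standard error of the sample median, the first example listed above; the function name, the NumPy dependency, and the choice of 1000 bootstrap replications are illustrative assumptions.

    # Bootstrap standard error of the sample median: resample the observed
    # data with replacement, recompute the statistic on each replication,
    # and take the spread of the replicated values. (Illustrative sketch.)
    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_se_of_median(x, n_boot=1000):
        """Estimate the standard error of the sample median by drawing
        n_boot resamples of size n (with replacement) from the data x."""
        x = np.asarray(x)
        medians = np.empty(n_boot)
        for b in range(n_boot):
            resample = rng.choice(x, size=x.size, replace=True)
            medians[b] = np.median(resample)
        return medians.std(ddof=1)

    # Example: an observed sample of n = 13 points from an unknown F.
    x = rng.normal(loc=0.0, scale=1.0, size=13)
    print(bootstrap_se_of_median(x))

Resampling from the empirical distribution stands in for sampling from the unknown F, which is the substitution the paper studies.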

Keywords

Linear discriminant analysis, bootstrap method, sample median, bootstrap replication, bootstrap distribution

Copyright information

© Springer-Verlag New York, Inc. 1992

Authors and Affiliations

  • Bradley Efron, Stanford University, USA
