Elementary Sampling Theory

Part of the Die Grundlehren der mathematischen Wissenschaften book series (GL, volume 202)


In connection with the notion of probability we described the following situation: assume we have a population of observations related to certain measurable outcomes. This population is taken to be infinite in the sense that the observations are always reproducible according to a fixed prescription, for example, an infinite series of throws of a die. From this population one now chooses a series of observations “at random”. If there are enough observations, then the relative frequencies of events related to the outcome under observation deviate in general only slightly from a constant value, which we have called the empirical probability (see p. 20).

It is not easy to give empirical criteria for deciding when a sample from a population can be viewed as random. One often settles for the somewhat vague formulation that a random sample has been realized when there is no reason to believe that the choice of any particular sample is more probable than that of the rest. In this connection, one often calls on an “urn model”. The urn, or better, its contents (for example, identical balls), represents the population; balls are then drawn from it, making sure that the contents are always “well mixed” before each draw. The drawn ball is viewed as a random choice from the urn. We recall what has already been said about the urn scheme, in particular, to what these ideas correspond in the calculus of probability (see p. 27).
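The stabilization of relative frequencies described above can be illustrated with a small simulation of the die-throwing population. This is an illustrative sketch only; the sample sizes, the seed, and the function name are arbitrary choices, not part of the text.

```python
import random

# Reproducible illustration: the seed is an arbitrary choice.
random.seed(1)

def relative_frequency(n_throws, outcome=6):
    """Relative frequency of `outcome` in n_throws simulated throws of a fair die."""
    hits = sum(1 for _ in range(n_throws) if random.randint(1, 6) == outcome)
    return hits / n_throws

# As the number of observations grows, the relative frequency settles
# near the constant value 1/6, the empirical probability of a "six".
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

The deviation from 1/6 shrinks on the order of \( 1/\sqrt{n} \), which is why a large sample is needed before the frequencies look stable.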






  1.
    A number of excellent works treat applied sampling theory: A. Linder, Statistische Methoden für Naturwissenschaftler, Mediziner und Ingenieure, 3rd ed., Birkhäuser, Basel; J. Neyman, First Course in Probability and Statistics, John Wiley & Sons, New York-London 1962.
  2.
    W. Winkler, loc. cit. Intro. 1, 35.
  3.
    The arbitrariness of this test procedure becomes clear when one notes that for given \( \alpha \), one can choose arbitrarily many pairs of real numbers \( (K_\alpha ', K_\alpha '') \) for which \( 1 - \frac{1}{\sqrt{2\pi}}\int\limits_{K_\alpha '}^{K_\alpha ''} e^{-x^2/2}\,dx = \alpha . \) We have chosen \( K_\alpha '' = -K_\alpha ' = K_\alpha \) for no other reason (for the moment) than the fact that the symmetry thus obtained is convenient.
  4.
    W. G. Cochran, Proc. Cambridge Philos. Soc. 30, 178–191 (1933–1934).
  5.
    G. S. James, Proc. Cambridge Philos. Soc. 48, 443–446 (1952).
  6.
    In this generality the theorem is due to T. Kawata and H. Sakamoto, J. Math. Soc. Japan 1, 111–115 (1949). For the case where the variance of F exists, the theorem was first proved by E. Lukacs, Ann. Math. Statist. 13, 91–93 (1942). See also R. C. Geary, J. Roy. Statist. Soc. Supp. 3, 178 (1936).
  7.
    This elucidates the significance of the heading of 5 and, of course, also illuminates those of 6, 9 and 10.
  8.
    This and the other applications of the t-distribution are clearly presented in R. A. Fisher, Metron 5, 90–104 (1925).
  9.
    See P. J. Huber, Théorie de l’inférence statistique robuste (Séminaire de mathématiques supérieures 31). Montréal, Canada: Les Presses de l’Université de Montréal 1969.
  10.
    H. F. Dodge and H. G. Romig, Sampling Inspection Tables (Single and Double Sampling), 2nd ed., John Wiley & Sons – Chapman & Hall, Ltd., London 1959.
  11.
  12.
    J. Neyman, J. Roy. Statist. Soc. 97, 558–606 (1934).
  13.
    This result can also be obtained from the minimum value of the variance of \( {\bar \xi _r} \) by substituting \( \sigma_i \sqrt{\frac{l_i N_i}{N_i - 1}} \) for \( \sigma_i \), \( n_i l_i \) for \( n_i \) (p. 150), and \( L \) for \( n \).
  14.
    Details are in P. Armitage, Biometrika 34, 273–280 (1947).
  15.
    More precisely, this means that \( E({\bar \eta _{\xi_l}} \mid {\xi _l} = i) = a_i \) for \( 1 \le i \le M \).
  16.
    M. H. Hansen and W. N. Hurwitz, Ann. Math. Statist. 14, 332–362 (1943).
  17.
    H. Midzuno, Ann. Inst. Statist. Math. 3, 99–107 (1951/52).
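The observation in footnote 3 — that for a given \( \alpha \) there are infinitely many pairs \( (K_\alpha ', K_\alpha '') \) with the same coverage \( 1 - \alpha \), the symmetric pair being merely the convenient choice — can be checked numerically. The sketch below is illustrative only: the function names, the bisection inverse, and the particular asymmetric left endpoint are my own choices, not from the text.

```python
import math

def std_normal_cdf(x):
    """Phi(x), the standard normal distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def std_normal_ppf(p, lo=-10.0, hi=10.0):
    """Inverse of Phi by bisection; accurate enough for illustration."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if std_normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

alpha = 0.05

# Symmetric choice: K'' = -K' = K_alpha, i.e. K_alpha = Phi^{-1}(1 - alpha/2).
k_sym = std_normal_ppf(1.0 - alpha / 2.0)

# An asymmetric pair with the same coverage: fix any K' with Phi(K') <= alpha,
# then solve Phi(K'') - Phi(K') = 1 - alpha for K''.
k1 = -1.8  # arbitrary illustrative left endpoint
k2 = std_normal_ppf(std_normal_cdf(k1) + 1.0 - alpha)

for lo_, hi_ in ((-k_sym, k_sym), (k1, k2)):
    coverage = std_normal_cdf(hi_) - std_normal_cdf(lo_)
    print(f"[{lo_:+.4f}, {hi_:+.4f}]  coverage = {coverage:.4f}")
```

Both intervals have coverage \( 1 - \alpha \); only the symmetric one is centered at the origin, which is what makes the choice \( K_\alpha '' = -K_\alpha ' \) convenient rather than necessary.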

Copyright information

© Springer-Verlag Berlin · Heidelberg 1974

Authors and Affiliations

  1. University of Vienna, Austria
