Introduction to the Theory of Hypothesis Testing

Chapter
Part of the book series Die Grundlehren der mathematischen Wissenschaften (GL, volume 202)

Abstract

As already mentioned, the procedures discussed in Chapter II for testing a hypothesis are doubtless intuitively appealing and rather convincing. However, we have already pointed out that it is desirable to develop a general theory of tests which rests on only a few basic assumptions. In particular, we have given no precise definition of the notion of a “test”. We also want to develop criteria for deciding when one of two tests can be regarded as the “better” one.
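As a concrete illustration of the comparison criterion alluded to above (this sketch is not from the text; the binomial setting, sample sizes, and level are chosen purely for illustration), one can compare two level-α tests of a simple null hypothesis against a simple alternative by their power, i.e. the probability of rejecting under the alternative:

```python
from math import comb

def binom_pmf(k, n, p):
    # P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def power(region, n, p):
    # power function at p: probability that the success count falls in the critical region
    return sum(binom_pmf(k, n, p) for k in region)

n, alpha = 20, 0.05
p0, p1 = 0.5, 0.7  # simple null hypothesis and simple alternative (illustrative values)

# Test A uses all 20 observations: reject when the success count is large,
# with the critical value chosen so the level does not exceed alpha
# (the level of significance is exploited "as far as possible", cf. note 4).
c_a = min(c for c in range(n + 1) if power(range(c, n + 1), n, p0) <= alpha)

# Test B wastefully ignores half the sample and applies the same rule to 10 trials.
m = 10
c_b = min(c for c in range(m + 1) if power(range(c, m + 1), m, p0) <= alpha)

power_a = power(range(c_a, n + 1), n, p1)  # ≈ 0.42
power_b = power(range(c_b, m + 1), m, p1)  # ≈ 0.15
```

Both tests respect the level α, but test A has markedly higher power at the alternative, so by the power criterion it is the “better” of the two.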

Keywords

Probability Measure · Hypothesis Test · Test Problem · Power Function · Critical Region


References

  1. Neyman, J. and E. S. Pearson, Biometrika 20 A, 175–240 and 263–294 (1928); Philos. Trans. Roy. Soc. London, Ser. A, 231, 289–337 (1933).
  2. A standard reference for the theory of tests and confidence regions, which treats numerous details, is the book by Lehmann: Testing Statistical Hypotheses, J. Wiley, New York 1959.
  3. We will also write E(φ; γ) for E(φ; Pγ).
  4. This means that one exploits the given level of significance “as far as possible”.
  5. We will sometimes write P(M, γ) for Pγ(M).
  6. This terminology derives from the idea that the random experiment which delivers the sample (x1,...,xn) is the result of n trials, each of which has the same probability distribution with parameter a. The correct value of a is unknown, and the null hypothesis to be tested assumes that a = a0. See also II, p. 127.
  7. The case α = 0 is trivial and need not be considered.
  8. Strictly speaking, the assumptions of Theorem 12.2 of I are not fulfilled everywhere, since the Jacobian vanishes for r = 0, ϑ = 0 and ϑ = π. However, it is easy to see that the exceptional sets have measure 0.
  9. For the evaluation of this integral see, for example, N. Hofreiter and W. Gröbner, Integraltafel, Zweiter Teil: Bestimmte Integrale, 2. Aufl., Springer-Verlag, Wien 1961.
  10. This can also be written as \( \sum\limits_{j=0}^{\infty} \frac{1}{j!}\left(\frac{|a|^2}{2}\right)^{j} e^{-|a|^2/2}\,\frac{z^{(2j+n-2)/2}}{2^{(2j+n)/2}\,\Gamma\left(j+\frac{n}{2}\right)}\,e^{-z/2}. \) But for \( z \ge 0 \), \( \frac{z^{(2j+n-2)/2}}{2^{(2j+n)/2}\,\Gamma\left(j+\frac{n}{2}\right)}\,e^{-z/2} \) is the density of a χ²-distribution with 2j+n degrees of freedom.
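The series in note 10 is the Poisson-weighted mixture representation of the noncentral χ² density. As a quick numerical check (a sketch added here, not from the text; the values a = 1.5, z = 2.0 are arbitrary), one can compare the truncated series for n = 1 with the closed-form density of X², where X ~ N(a, 1):

```python
import math

def series_density(z, n, a2, terms=80):
    # Poisson(a2/2)-weighted mixture of central chi-square densities
    # with 2j + n degrees of freedom, as in note 10 (a2 = |a|^2)
    total = 0.0
    for j in range(terms):
        pois = math.exp(-a2 / 2) * (a2 / 2) ** j / math.factorial(j)
        chi2 = (z ** ((2 * j + n - 2) / 2) * math.exp(-z / 2)
                / (2 ** ((2 * j + n) / 2) * math.gamma(j + n / 2)))
        total += pois * chi2
    return total

def ncx2_density_1df(z, a):
    # closed form for n = 1: density of X**2 with X ~ N(a, 1),
    # obtained by the change of variables z = x**2
    phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
    return (phi(math.sqrt(z) - a) + phi(math.sqrt(z) + a)) / (2 * math.sqrt(z))

a, z = 1.5, 2.0
lhs = series_density(z, n=1, a2=a * a)
rhs = ncx2_density_1df(z, a)
# lhs and rhs agree to within floating-point accuracy
```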
  11. The first version of this fundamental theorem is in E. S. Pearson, Statist. Res. Mem. Univ. London 1, 1–37 (1936).
  12. If k = −∞, define ν(Mk) = ν(Mk+).
  13. k = ∞ requires (also for III F) a trivial special argument.
  14. See Dantzig and A. Wald, loc. cit. 11.
  15. See L. Schmetterer, Sankhya 25, 207–210 (1963). A much deeper result has been given by W. Sendler, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 18, 183–196 (1971).
  16. This means that −g is convex.
  17. Without explicitly giving non-trivial tests, one can also argue as follows: From \( \lim\limits_{\alpha \to 0+} g(\alpha)/\alpha = 1 \) we have, by Theorem 3.4, g(α) = α, 0 ⩽ α ⩽ 1. Hence, from the definition of g, \( \int_R \phi f_1\,d\mu \le \int_R \phi f_0\,d\mu \) for each test φ ∈ Φα and 0 ⩽ α ⩽ 1. For the test φ = cE we thus have μ(E) = 0, which contradicts the assumption.
  18. Practically speaking, p ⩽ p0, resp. p ⩾ p1, is a more reasonable requirement, but we want to consider only simple hypotheses here.
  19. The first systematic investigation of the connection between linear programs and test theory is in E. W. Barankin, Univ. California Publ. Statist. 1, 161–214 (1949–1953).
  20. See for instance S. Vajda, Theory of Games and Linear Programming, John Wiley, New York 1956.
  21. We follow essentially a paper by O. Krafft and H. Witting, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 7, 289–302 (1967).
  22. To avoid trivial complications we now assume 0 < α < 1.
  23. Necessary and sufficient conditions for the existence of product measurable densities can be found in J. Pfanzagl, Sankhya, Ser. A, 31, 13–18 (1969).
  24. Φα is defined on p. 178.
  25. See p. 185 ff.
  26. Essentially, the following considerations represent only an illustration of the uniqueness claim of Theorem 3.1.
  27. P. R. Halmos and L. J. Savage, Ann. Math. Statist. 20, 225–241 (1949).
  28. To justify this conclusion also in the case α = 0 and c = 1, one must define 0·∞ = 0.
  29. From this it does not necessarily follow, of course, that the set of corresponding probability measures Pγ is convex.
  30. J. Pfanzagl, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 1, 109–115 (1963).
  31. A ⊆ B ν-a.e. means that the set of elements of A which do not belong to B forms a ν-null set. A = B ν-a.e. means ν(A−B) + ν(B−A) = 0.
  32. Here, and occasionally later, we will suppress the fact that certain relations hold only λ-a.e.
  33. See J. Pfanzagl, Sankhya Ser. A 30, 147–156 (1968).
  34. In practical work, however, alternatives which are “too close” to the null hypothesis are likewise uninteresting.
  35. H. Scheffé, Ann. Math. Statist. 18, 434–438 (1947).
  36. This terminology is not restricted to test problems. It will be used analogously for confidence regions (see IV) and the theory of estimation (see also V). See also VII 12.
  37. The choice of endpoint for this interval is not important; −∞ can be replaced by an arbitrary real number < γ0.
  38. This μ-null set may depend on γ.
  39. We define here 0·∞ = 0 and −∞·0 = 0.
  40. One can also allow l1 = ±∞, l2 = ±∞.
  41. J. Neyman and E. S. Pearson, Statist. Res. Mem. Univ. London 2, 25–57 (1938).
  42. See St. L. Isaacson, Ann. Math. Statist. 22, 217–234 (1951).
  43. See p. 60.
  44. For a more precise terminology see I 22.
  45. More precisely, this means that for each A ∈ S0 there is a B ∈ S0(1) such that Pγ((A−B)∪(B−A)) = 0 for all γ ∈ Γ, and likewise when the roles of S0 and S0(1) are interchanged.
  46. In the statistical literature the existence of a sufficient transformation for a set of probability measures over (Rn, Sn) is often proved.
  47. Thus for all real a the inverse image of (−∞, a) under fγ belongs to S0 up to a γ-null set.
  48. \( E_\lambda(f_\lambda \mid S_0) \) denotes the conditional expectation w.r.t. the measure λ.
  49. We have shown this only for the case T(R) = Q. However, see I 23.
  50. See J. Neyman, Giorn. Ital. Attuari 6, 320–334 (1953), as well as P. R. Halmos and L. J. Savage, l.c. 27.
  51. Actually, I, Theorem 18.3 yields this only for n = 1, but I, Theorem 18.3 can easily be extended to the case where the function f named there has range Rn with n > 1.
  52. The assumption that the densities are > 0 in all of R1 is made only for convenience. It is enough, for example, that the fγ be > 0 for all γ ∈ Γ in a fixed open interval and vanish for all γ outside of this fixed interval.
  53. Essentially due to E. B. Dynkin, Uspehi Mat. Nauk 6, 68–90 (1951). See also B. O. Koopman, Trans. Amer. Math. Soc. 39, 399–409 (1936).
  54. E. W. Barankin and M. Katz, Sankhya 21, 217–246 (1959), and E. W. Barankin and A. P. Maitra, Sankhya 25, 217–244 (1963).
  55. J. L. Denny, Proc. Nat. Acad. Sci. U.S.A. 57, 1184–1187 (1967); Ann. Math. Statist. 41, 401–411 (1970). See also O. Barndorff-Nielsen and Karl Pedersen, Math. Scand. 22, 197–202 (1968).
  56. D. L. Burkholder, Ann. Math. Statist. 32, 1191–1200 (1961), and Ann. Math. Statist. 33, 596–599 (1962). See also T. S. Pitcher, loc. cit. VII 7.
  57. The importance of this definition in mathematical statistics was clearly presented for the first time in E. L. Lehmann and H. Scheffé, Sankhya 10, 305–340 (1950).
  58. See S. Kaczmarz and H. Steinhaus: Theorie der Orthogonalreihen, Monografje Matematyczne VI, Warschau 1935.
  59. D. Voelker and G. Doetsch, Die zweidimensionale Laplace-Transformation, Verlag Birkhäuser, Basel 1950, 208.
  60. An application of Hölder’s inequality shows that these expectations always exist.
  61. Introduced by J. Neyman and E. S. Pearson, Philos. Trans. Roy. Soc. London, l.c. 1.
  62. The first known example of this is in W. Feller, Statist. Res. Mem. Univ. London 2, 117–125 (1938). See also H. Kellerer, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 1, 240–246 (1963).
  63. E. L. Lehmann and H. Scheffé, l.c. 57.
  64. For an analysis see Lehmann, l.c. 2, 134 ff., and especially G. Noelle, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 11, 208–229 (1969).
  65. G. B. Dantzig, Ann. Math. Statist. 11, 186–192 (1940), proved that there exists no (non-trivial) test for the mean of a normal distribution with given sample size whose power function is independent of a. Further examples of similar tests are also found in VI.
  66. W. U. Behrens, Landwirtschaftliche Jahrbücher 48, 807–837 (1929).
  67. Ju. V. Linnik, Statistical Problems with Nuisance Parameters, Translations of Mathematical Monographs Vol. 20, Amer. Math. Soc., Providence, R.I., 1968.
  68. Due to J. Neyman and E. S. Pearson, Biometrika, l.c. 1.
  69.
  70. See V, Lemma 3.2.
  71. Such a discussion is given by P. Hoel, Ann. Math. Statist. 16, 362–368 (1945).
  72. For details see H. Scheffé, The Analysis of Variance, John Wiley & Sons–Chapman & Hall, New York–London 1959.
  73. Frequent use is made of such decompositions in the analysis of variance. For the underlying algebraic relations, see H. B. Mann, Ann. Math. Statist. 31, 1–15 (1960).
  74. For a detailed analysis of this model see A. N. Kolmogorov, Proc. Second All-Union Congress Math. Statistics, Sept. 27–Oct. 2, 1948, Acad. Sci. Uzbek Soviet Socialist Republic, Tashkent 1949, 240–268.
  75. K. Pearson, Philos. Mag. 50, Ser. 5, 157–175 (1900).
  76. For the grouping problem see H. B. Mann and A. Wald, Ann. Math. Statist. 13, 306–317 (1942), and H. Witting, Arch. Math. 10, 468–479 (1959).
  77. For this and further important results see H. Cramér, l.c. I 58. The first formulation of such results is in R. A. Fisher, J. Roy. Statist. Soc. 85, 87–94 (1922). Also see W. G. Cochran, Ann. Math. Statist. 23, 315–345 (1952).
  78. A. Wald, Trans. Amer. Math. Soc. 54, 462–482 (1943).
  79. Strictly speaking, gγ is for the time being not defined at all for γ ∈ Γ; only is defined.
  80. G̅ is thus a homomorphic image of G.
  81. The group is then called transitive.
  82. The notion of an invariant test is viewed somewhat more generally in this theorem: φ is called invariant if there exists a μ-null set M such that φ(gx) = φ(x) for all g ∈ G and each x ∈ R−M.
  83. See for example A. Weil, L’intégration dans les groupes topologiques et ses applications, Actualités scientifiques et industrielles 869–1145, Hermann & Cie, 2nd ed., Paris 1953.
  84. See e.g. E. L. Lehmann, l.c. 2, 335. Also O. Wesler, Ann. Math. Statist. 30, 1–20 (1959).
  85. See for example J. Neyman and E. Scott, Econometrica 16, 1–32 (1948).
  86. \( E^{(n)}(\phi_n;\gamma) \) naturally means \( \int_{R^{(n)}} \phi_n\,dP_\gamma^{(n)} \).
  87. See A. Berger, Ann. Math. Statist. 22, 289–293 (1951), and Ch. Kraft, Univ. California Publ. Statist. 2, 125–141 (1953–1958).
  88. S. Kakutani, Ann. of Math. II. Ser. 49, 214–224 (1948).
  89. See for details A. Wald, l.c. 78.
  90. See J. L. Hodges Jr. and E. L. Lehmann, Proc. Fourth Berkeley Sympos. Math. Statist. and Prob., Vol. I, pp. 307–317, Univ. California Press, Berkeley, Calif., 1961.
  91. Note that \( g_n^{(1)} \) is defined somewhat differently from \( g_n^{(2)} \).
  92. If m = 1, this condition is omitted.
  93. See E. J. G. Pitman, Lecture Notes on Nonparametric Inference, Columbia University, New York 1949. See also G. E. Noether, Ann. Math. Statist. 26, 64–68 (1955).
  94. See, however, the developments on p. 243. Also J. L. Hodges Jr. and E. L. Lehmann, Ann. Math. Statist. 27, 324–335 (1956).
  95. R. R. Bahadur, Ann. Math. Statist. 31, 276–295 (1960).
  96. The fundamental paper is A. Wald, Ann. Math. Statist. 16, 117–186 (1945). Also see A. Wald, Sequential Analysis, John Wiley & Sons–Chapman & Hall, New York–London 1947. See also G. A. Barnard, Suppl. J. Roy. Statist. Soc. 8, 1–21 (1946).
  97. For details see A. Wald and J. Wolfowitz, Ann. Math. Statist. 19, 326–339 (1948).

Copyright information

© Springer-Verlag Berlin · Heidelberg 1974

Authors and Affiliations

  1. University of Vienna, Austria
