Introduction to Non-parametric Theories

Part of the Die Grundlehren der mathematischen Wissenschaften book series (GL, volume 202)


In the previous chapters we have frequently made use of the assumption that the set Γ of parameters is an (open or closed) subset of R^n, n ≥ 1. In addition, in connection with essential results, we have imposed continuity and differentiability requirements on the likelihood function. For some time, so-called non-parametric methods have received considerable attention. Their beginnings, however, lie far in the past. Recent progress is due to Anglo-American, Dutch and Soviet statisticians. The term “non-parametric” is rather unfortunately chosen and it is not easy to give a satisfactory definition of this notion. Roughly speaking, one can call a test or method of estimation non-parametric when no assumptions such as those above are made on Γ.²
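One concrete example of such an assumption-free method is a test based on the empirical distribution function (one of this chapter's central objects): the Kolmogorov statistic D_n = sup_x |F_n(x) − F(x)| compares the empirical distribution function F_n of a sample with a hypothesized continuous distribution function F, with no parametric model for F. The following is a minimal illustrative sketch, not taken from the text; the sample values and the uniform null hypothesis are assumptions chosen for the example.

```python
def ks_statistic(sample, cdf):
    """One-sample Kolmogorov statistic D_n = sup_x |F_n(x) - F(x)|.

    F_n is the empirical distribution function of the sample. Since F_n
    is a step function jumping at the order statistics, the supremum is
    attained at a jump point, so it suffices to compare F with both the
    left and right limits of F_n at each sorted sample value.
    """
    xs = sorted(sample)          # the order statistics
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # F_n jumps from i/n to (i+1)/n at x; check both sides of the jump.
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

# Illustrative data; null hypothesis: uniform distribution on (0, 1).
uniform_cdf = lambda x: min(max(x, 0.0), 1.0)
D = ks_statistic([0.1, 0.35, 0.4, 0.62, 0.9], uniform_cdf)
```

Large values of D_n are evidence against the hypothesized F; the limit distribution of √n · D_n under the null hypothesis is the subject of Kolmogorov's theorem cited in footnote 15.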


Keywords: Null Hypothesis · Probability Measure · Order Statistic · Stochastic Approximation · Empirical Distribution Function




  1. See the report of D. van Dantzig and J. Hemelrijk, Bull. Inst. Internat. Statist. 34, 239–267 (1954).
  2. See E. Ruist, Ark. Mat. 3, 133–163 (1955). A detailed analysis of the notion “nonparametric” is in M. G. Kendall and R. M. Sundrum, Rev. Inst. Internat. Statist. 21, 124–134 (1953). Also see III, p. 199.
  3. See III, p. 235. Lemma 1.1 can be viewed as an illustration of the arguments carried through there.
  4. For a = −∞ or b = ∞, the definition of F is to be trivially modified.
  5. More precisely, (1.14) holds, without continuity assumptions on f, only up to a null-set.
  6. We refer especially in this connection to W. Hoeffding, Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability 1951, Vol. I, pp. 83–92, University of California Press, Berkeley and Los Angeles.
  7. D. A. S. Fraser, Canad. J. Math. 6, 42–45 (1954). Also see P. R. Halmos, Ann. Math. Statist. 17, 34–43 (1946). An interesting general result on sufficient transformations connected with the theory of invariance is in T. S. Pitcher, Trans. Amer. Math. Soc. 85, 166–173 (1957).
  8. This is obviously the case for P ∈ B_m.
  9. One also writes n(β, δ) = (x(β)) for this.
  10. For the literature see foremost the detailed report of S. S. Wilks, Bull. Amer. Math. Soc. 54, 6–50 (1948).
  11. See A. Rényi, Acta Math. Acad. Sci. Hungar. 4, 191–231 (1953) and G. Hajós and A. Rényi, Acta Math. Acad. Sci. Hungar. 5, 1–6 (1954).
  12. Naturally, the density vanishes elsewhere.
  13. The literature on limit theorems for order statistics is quite large. We mention: N. V. Smirnov, Trudy Mat. Inst. Steklov 1949.
  14. See L. Le Cam, Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, Vol. I, pp. 129–156, University of California Press, Berkeley and Los Angeles.
  15. The first proof is due to A. N. Kolmogorov, Giorn. Ist. Ital. Attuari 4, 83–91 (1933). Other proofs are in W. Feller, Ann. Math. Statist. 19, 177–189 (1948) and J. L. Doob, Ann. Math. Statist. 20, 393–403 (1949), augmented by M. D. Donsker, Ann. Math. Statist. 22, 277–281 (1952).
  16. Z. W. Birnbaum and F. H. Tingey, Ann. Math. Statist. 22, 592–596 (1951). E. Hlawka has applied this result in an investigation of order statistics from a quite different viewpoint: see E. Hlawka, Math. Ann. 150, 259–267 (1963).
  17. For the meaning of u_n, see footnote 9.
  18. This proof is due to Gnedenko and Koroljuk: B. V. Gnedenko and V. S. Koroljuk, Dokl. Akad. Nauk SSSR, n. Ser. 80, 525–528 (1951). See also V. S. Koroljuk, Izv. Akad. Nauk SSSR, Ser. Mat. 19, 81–96 (1955).
  19. We mention B. V. Gnedenko, Math. Nachr. 12, 29–63 (1954) and Darling’s instructive report: D. A. Darling, Ann. Math. Statist. 28, 823–839 (1957).
  20. This theorem is due originally to Smirnov: N. V. Smirnov, Mat. Sb., n. Ser. 6, 3–26 (1939), who makes the additional assumption that \( \mathop {\lim }\limits_{m,n \to \infty } \,m/n = c \ne 0 \). The version given here was proved by I. I. Gichman, Dokl. Akad. Nauk SSSR, n. Ser. 82, 837–840 (1952).
  21. H. Robbins, Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability, Vol. I, pp. 157–163, University of California Press, Berkeley and Los Angeles. See also M. V. Johns, Jr., Ann. Math. Statist. 28, 649–669 (1957).
  22. We also abbreviate these distribution functions by \( \mathop \Pi \limits_{i = 1}^n {\psi _i} \circ F. \)
  23. We naturally assume that the set of null hypotheses and the set of alternatives are disjoint.
  24. F. Wilcoxon, Biometrics 1, 80–83 (1945). H. B. Mann and D. R. Whitney, Ann. Math. Statist. 18, 50–60 (1947).
  25. B. L. van der Waerden, Math. Ann. 126, 93–107 (1953).
  26. M. E. Terry, Ann. Math. Statist. 23, 346–366 (1952).
  27. Obviously, this and the following expectations are taken w.r.t. (F, G).
  28. E. L. Lehmann, Ann. Math. Statist. 22, 165–179 (1951).
  29. A. Wald and J. Wolfowitz, Ann. Math. Statist. 11, 147–162 (1940).
  30. See W. Feller, 61 ff., l.c. I 1.
  31. At most one separation bar can be placed in each space.
  32. For the sign test see W. J. Dixon and A. M. Mood, J. Amer. Statist. Assoc. 41, 557–566 (1946) and E. Ruist, l.c. 2.
  33. J. Hemelrijk, Indag. Math. 12, 340–350 (1950). Also see E. Ruist, l.c. 2. For another general test of symmetry see C. van Eeden and A. Benard, Indag. Math. 19, 381–408 (1957).
  34. That is: k_2(x) is the largest integer contained in \( \frac{{p(x) + n(x) + 1}}{2}. \)
  35. H. Hornich, Mh. Math. Phys. 50, 142–150 (1941). Z. W. Birnbaum and H. S. Zuckerman, Ann. Math. Statist. 15, 328–329 (1944) have couched Hornich’s formulation (within the framework of risk theory) in the language of probability theory.
  36. See H. Uzawa, Ann. Math. Statist. 31, 685–702 (1960). Uzawa’s results include those of E. L. Lehmann, Ann. Math. Statist. 24, 23–43 (1953).
  37. For F_y ∈ C_m, F_y^{−1} is simply the inverse map.
  38. The uniform distribution over (0, 1) thus also belongs to 9).
  39. These statements follow at once from Fejér’s theorem on the behavior of the arithmetic mean of a Fourier series. See A. Zygmund, Trigonometric series, 2nd ed., Vol. I, Cambridge University Press, New York 1959, p. 89.
  40. This inequality, valid for arbitrary trigonometric polynomials, is due to S. N. Bernstein. See N. K. Bary, A treatise on trigonometric series, Vol. I, Pergamon Press Book, The Macmillan Co., New York 1964, p. 35.
  41. See, for example, E. Netto, Lehrbuch der Combinatorik, Teubner, Leipzig 1901, p. 252.
  42. W. Hoeffding, Ann. Math. Statist. 19, 293–325 (1948).
  43. For generalizations see E. L. Lehmann, l.c. 28. A systematic treatment of the asymptotic theory of rank-tests is given in Hájek and Šidák, l.c. 1.
  44. We believe this theorem was first proved by H. R. van der Vaart, Nederl. Akad. Wetensch., Proc. Ser. A, 53, 494–520 (1956).
  45. We mention J. L. Hodges and E. L. Lehmann, l.c. III 90, H. Chernoff and I. R. Savage, Ann. Math. Statist. 29, 972–994 (1958), M. Dwass, Ann. Math. Statist. 27, 352–374 (1956). See also F. C. Andrews, Ann. Math. Statist. 25, 724–736 (1954).
  46. D. van Dantzig, Indag. Math. 13, 1–8 (1951).
  47. The basic reference is H. Robbins and S. Monro, Ann. Math. Statist. 22, 400–407 (1951).
  48. The theorem and its proof in this form are due to J. Wolfowitz, Ann. Math. Statist. 23, 457–461 (1952).
  49. This naturally means that \( \sum\limits_{n = 1}^\infty {a_n^2} \) converges.
  50. See L. Schmetterer, Österreich. Ing.-Arch. 7, 111–117 (1953); K. L. Chung, Ann. Math. Statist. 25, 463–483 (1954); A. Dvoretzky, Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability 1954–1955, Vol. I, pp. 39–55, University of California Press, Berkeley and Los Angeles (1956).

Copyright information

© Springer-Verlag Berlin · Heidelberg 1974

Authors and Affiliations

  1. University of Vienna, Austria
