A fundamental question in Statistics is how well the frequency of an event approaches its probability as the number of repetitions of an experiment increases indefinitely, or how well the average of the values of a function at the observed outcomes approaches its expected value. 'How well' can be understood in many ways, and one of them is: uniformly over which classes of sets or functions does convergence take place? Empirical process theory has its origin in this question. The first law of large numbers uniform over an infinite class of sets is the Glivenko-Cantelli theorem, from the nineteen-thirties, which states that, for real random variables, the convergence of the frequency to the probability takes place uniformly over all the sets (−∞, x], x ∈ R, almost surely. This was extended to much more general classes of events and functions by Blum, Ann. Math. Statist. 26 (1955) 527-529, and DeHardt, Ann. Math. Statist. 42 (1971) 2050-2055 (metric entropy with inclusion, or bracketing), and finally Vapnik and Červonenkis, in seminal work of 1971, Theor. Probab. Appl. 16, 264-280 (and of 1981, Theor. Probab. Appl. 26, 532-553), gave new non-random combinatorial conditions, as well as random combinatorial and random entropy necessary and sufficient conditions, for the frequency to approach the probability uniformly over a class of sets (or a class of functions). Vapnik and Červonenkis first announced their results in the Doklady in 1968, and Dudley immediately noticed their significance in his review of their work for Math Reviews (MR0231431 (37 #6986)). Probably inspired by their work, Dudley (1978) considered the much more difficult question of uniformity, over classes of sets and functions, of the central limit theorem for the empirical measure.
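In modern notation, the Glivenko-Cantelli theorem described above can be stated as follows (a standard formulation; the symbols $F$, $F_n$ and $X_i$ are introduced here for illustration and do not appear in the text):

```latex
Let $X_1, X_2, \dots$ be i.i.d. real random variables with distribution
function $F(x) = \Pr(X_1 \le x)$, and let
\[
  F_n(x) \;=\; \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{X_i \le x\}
\]
be the empirical distribution function, i.e.\ the frequency of the event
$(-\infty, x]$ among the first $n$ observations. Then
\[
  \sup_{x \in \mathbb{R}} \bigl| F_n(x) - F(x) \bigr|
  \;\xrightarrow[n \to \infty]{}\; 0
  \quad \text{almost surely.}
\]
```

The supremum over $x$ is exactly the uniformity over the class of sets $(-\infty, x]$, $x \in \mathbb{R}$, that the text refers to; the Blum-DeHardt and Vapnik-Červonenkis results replace this particular class by far more general classes of sets or functions.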