Abstract
In this chapter, some asymptotic optimality theory of hypothesis testing is developed. We consider testing one sequence of distributions against another (the asymptotic version of testing a simple hypothesis against a simple alternative). It turns out that this problem degenerates if the two sequences are too close together or too far apart. The non-degenerate situation can be characterized in terms of a suitable distance or metric between the distributions of the two sequences. Two such metrics, the total variation and the Hellinger metric, will be introduced below.
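For orientation, here is one standard formulation of the two metrics (a reference sketch rather than a quotation of the chapter's definitions; the chapter's Definition 15.1.3, cited in Note 1 below, fixes the factor \(\frac{1}{2}\) in the Hellinger distance, and conventions elsewhere differ by constant factors). For probabilities \(P_0\) and \(P_1\) with densities \(p_0\) and \(p_1\) with respect to a dominating measure \(\mu\),
\[
\sup_{A} \bigl| P_1(A) - P_0(A) \bigr| = \frac{1}{2} \int |p_1 - p_0| \, d\mu ,
\qquad
H^2(P_0, P_1) = \frac{1}{2} \int \bigl( \sqrt{p_1} - \sqrt{p_0} \bigr)^2 \, d\mu .
\]
Both quantities take values in \([0,1]\), and the standard comparison \(H^2 \le \sup_A |P_1(A) - P_0(A)| \le H \sqrt{2 - H^2}\) shows that "too close together" and "too far apart" can be quantified with either metric.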
Notes
1. Some authors prefer to leave out the constant 1/2 in their definition. Using Definition 15.1.3, the square of the Hellinger distance between \(P_0\) and \(P_1\) is just one-half the square of the \(L_2 ( \mu )\)-distance between \(\sqrt{p_0}\) and \(\sqrt{p_1}\). Using the Hellinger distance makes it unnecessary to choose a particular \(\mu \), and the Hellinger distance is even defined for all pairs of probabilities on a space where no single dominating measure exists. (A numerical sketch illustrating these conventions follows these notes.)
2. The notation \(a_k \sim b_k\) means \(a_k / b_k \rightarrow 1\).
3. The term experiment rather than model was used by Le Cam, but the terms are essentially synonymous. While a model postulates a family of probability distributions from which data can be observed, an experiment additionally specifies the exact amount of data (or sample size) that is observed. Thus, if \(\{ P_{\theta },~\theta \in \mathbb{R}\}\) is the family of normal distributions \(N( \theta , 1)\) which serves as a model for some data, the experiment \(\{ P_{\theta },~\theta \in \mathbb{R}\}\) implicitly means that a single observation is drawn from \(N( \theta ,1 )\); if an experiment consists of \(n\) observations from \(N ( \theta ,1 )\), it is denoted by \(\{ P_{\theta }^n ,~\theta \in \mathbb{R}\}\).
4. Condition (15.97) further asserts that the limiting value on its right side is linear in \(u\) as \(u\) varies in \(L_0^2 (P)\). In fact, the Riesz representation theorem (see Theorem 6.4.1 of Dudley (1989)) asserts that any continuous linear function of \(u\) must be of the form \(\langle u, \tilde{\theta }\rangle _P\) for some \(\tilde{\theta }\).
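As a concrete check of Note 1, here is a minimal numerical sketch (not from the chapter; the two mass functions below are hypothetical and NumPy is assumed). It computes both metrics for a pair of discrete distributions, with counting measure playing the role of \(\mu\), and verifies that expanding the square in the Hellinger definition yields the equivalent affinity form \(H^2 = 1 - \int \sqrt{p_0 p_1} \, d\mu\):

    import numpy as np

    # Two hypothetical probability mass functions on the same finite support;
    # the dominating measure mu is counting measure on {0, 1, 2, 3, 4}.
    p0 = np.array([0.1, 0.2, 0.3, 0.2, 0.2])
    p1 = np.array([0.3, 0.1, 0.2, 0.2, 0.2])

    # Total variation distance: sup_A |P1(A) - P0(A)| = (1/2) * sum |p1 - p0|.
    tv = 0.5 * np.abs(p1 - p0).sum()

    # Squared Hellinger distance with the 1/2 convention of Definition 15.1.3:
    # H^2(P0, P1) = (1/2) * sum (sqrt(p1) - sqrt(p0))^2.
    h2 = 0.5 * ((np.sqrt(p1) - np.sqrt(p0)) ** 2).sum()

    # Expanding the square gives the affinity form: H^2 = 1 - sum sqrt(p0 * p1).
    h2_affinity = 1.0 - np.sqrt(p0 * p1).sum()

    assert np.isclose(h2, h2_affinity)
    print(f"TV = {tv:.4f}, H^2 = {h2:.4f}")  # TV = 0.2000, H^2 = 0.0404

Here \(H^2 \le\) TV, consistent with the comparison inequality quoted after the Abstract.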
Lehmann, E.L., Romano, J.P. (2022). Large-Sample Optimality. In: Testing Statistical Hypotheses. Springer Texts in Statistics. Springer, Cham. https://doi.org/10.1007/978-3-030-70578-7_15