
Large-Sample Optimality

Chapter in Testing Statistical Hypotheses

Part of the book series: Springer Texts in Statistics ((STS))

Abstract

In this chapter, some asymptotic optimality theory of hypothesis testing is developed. We consider testing one sequence of distributions against another (the asymptotic version of testing a simple hypothesis against a simple alternative). It turns out that this problem degenerates if the two sequences are too close together or too far apart. The non-degenerate situation can be characterized in terms of a suitable distance or metric between the distributions of the two sequences. Two such metrics, the total variation and the Hellinger metric, will be introduced below.



Notes

  1.

    Some authors prefer to leave out the constant 1/2 in their definition. Using Definition 15.1.3, the square of the Hellinger distance between \(P_0\) and \(P_1\) is just one-half the square of the \(L_2 ( \mu )\)-distance between \(\sqrt{p_0}\) and \(\sqrt{p_1}\). Using the Hellinger distance makes it unnecessary to choose a particular \(\mu \), and the Hellinger distance is even defined for all pairs of probabilities on a space where no single dominating measure exists.

  2.

    The notation \(a_k \sim b_k\) means \(a_k / b_k \rightarrow 1\).

  3.

    The term experiment rather than model was used by Le Cam, but the terms are essentially synonymous. While a model postulates a family of probability distributions from which data can be observed, an experiment additionally specifies the exact amount of data (or sample size) that is observed. Thus, if \(\{ P_{\theta }, ~\theta \in \mathrm{I}\!\mathrm{R}\}\) is the family of normal distributions \( N( \theta , 1)\) serving as a model for some data, the experiment \(\{ P_{\theta } ,~\theta \in \mathrm{I}\!\mathrm{R}\}\) implicitly means that a single observation is drawn from \(N( \theta ,1 )\); an experiment consisting of n observations from \(N ( \theta ,1 )\) is denoted by \(\{ P_{\theta }^n ,~\theta \in \mathrm{I}\!\mathrm{R}\}\).

  4.

    The condition (15.97) further asserts that, as a function of u, the limiting value on the right side of (15.97) is linear in u as u varies in \(L_0^2 (P)\). In fact, the Riesz representation theorem (see Theorem 6.4.1 of Dudley (1989)) asserts that any continuous linear function of u must be of the form \(\langle u, \tilde{\theta }\rangle _P\) for some \(\tilde{\theta }\).
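The two metrics introduced in the abstract and footnote 1 can be illustrated concretely. Below is a minimal sketch for two probability distributions on a common finite support, using the 1/2 convention of Definition 15.1.3 for the Hellinger distance; the function names are illustrative, not from the text.

```python
import math

def hellinger(p0, p1):
    # Hellinger distance with the 1/2 convention of Definition 15.1.3:
    # H(P0, P1)^2 = (1/2) * sum_x (sqrt(p0(x)) - sqrt(p1(x)))^2,
    # i.e. one-half the squared L2-distance between sqrt(p0) and sqrt(p1).
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p0, p1)))

def total_variation(p0, p1):
    # Total variation distance: sup_A |P0(A) - P1(A)|,
    # which for densities equals (1/2) * sum_x |p0(x) - p1(x)|.
    return 0.5 * sum(abs(a - b) for a, b in zip(p0, p1))

p0 = [0.5, 0.5]   # fair coin
p1 = [0.9, 0.1]   # biased coin
print(total_variation(p0, p1))  # 0.4
print(hellinger(p0, p1))
```

Both distances lie in [0, 1]: they are 0 for identical distributions and 1 for distributions with disjoint supports, which matches the degeneracy described in the abstract when two sequences are too close together or too far apart.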

Author information

Corresponding author

Correspondence to Joseph P. Romano.

Rights and permissions

Reprints and permissions

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Lehmann, E.L., Romano, J.P. (2022). Large-Sample Optimality. In: Testing Statistical Hypotheses. Springer Texts in Statistics. Springer, Cham. https://doi.org/10.1007/978-3-030-70578-7_15
