Models, Hypotheses, and Statistical Significance

Chapter in: Analysis of Neural Data

Part of the book series: Springer Series in Statistics (SSS)

Abstract

The notion of hypothesis is fundamental to science. Typically it refers to an idea that might plausibly be true, and that is to be examined or “tested” with some experimental data. Sometimes the expectation is that the data will conform to the hypothesis. In other situations, the hypothesis is introduced with the goal of refuting it.


Notes

1. This order of presentation is the one followed by Fisher in his immensely influential Statistical Methods for Research Workers, but it seems to have been abandoned later in the twentieth century as the Neyman-Pearson approach became dominant.

2. Our characterization of \(p<.05\) as “modest evidence” is consistent with Fisher’s view. In particular, he felt \(p=.05\) was inconclusive. See the footnote on p. 298.

3. In this example we use the notation \(p\) in two different ways: at first \(p\) stands for the probability that P.S. would choose the non-burning house, and later it stands for the \(p\)-value. These are both such common notations that we felt we couldn’t change either of them. We hope our double use of \(p\) is not confusing.

4. The logic of the procedure does not demand that we use \(\theta _0\) in place of \(\hat{\theta }\). The justification of the large-sample significance test, the Theorem in Section 7.3.5 that says \(Z\) is approximately \(N(0,1)\), is not refined enough to distinguish between the two alternative choices for \(\textit{SE}(T_n)\) (both would satisfy the theorem). However, because we are doing the calculation under the assumption that \(\theta =\theta _0\), it makes some sense to use the value \(\theta =\theta _0\) in computing the standard error. (A numerical sketch of the two choices follows these notes.)

5. We discuss this distinction again in Section 13.1.

6. Welch provided an approximate distribution, more accurate than the normal, from which \(p\)-values can be computed (see the code sketch following these notes).

7. This may be considered an abuse of the notation because we usually consider \(H_0\) to be a fixed, non-random entity, so we are not really “conditioning” on it in the usual sense developed in Chapter 3. The exception occurs under the Bayesian interpretation given in Section 10.4.5, where \(H_0\) is formally considered to be an event. In that scenario the probability in (10.24) does become a conditional probability.

8. This generalizes (and follows from) Eq. (A.29), which says that the maximal length of the sum of two unit vectors is 2, attained when the vectors are equal (a one-line verification follows these notes).

9. See pages 114 and 128 of the fourteenth (1970) edition of Fisher (1925).

10. Fisher objected to the idea that statistical significance should be equated with decision making about hypotheses. From our modern perspective this is an objection about the words used to describe (10.27), but the formula itself is crucial. We say more about this in Section 10.4.7.

11. On the other hand, we should recall that the \(p\)-value we obtained for the data \(x=14\) was \(p=.0076\) based on \(\chi ^2_{obs}\) and the chi-squared distribution, while the exact \(p\)-value was \(p=.0127\). The discrepancy between approximate and exact values is a bit larger here; the approximation apparently gets worse as we move further out into the tails. (A short computation reproducing these numbers follows these notes.)

12. Specifically, both groups followed the distribution specified by the empirical cdf based on the 60 data values. This is an example of bootstrap sampling and will lead to a bootstrap test discussed in Chapter 11 (a sketch of such a test follows these notes).

13. Part of our reasoning comes from Bayesian calibration of significance tests, which is discussed briefly in Section 10.4.5 and again in Section 16.3.4.
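
The sketches below are our illustrations, not the book’s. First, for note 4: a minimal computation of the large-sample \(Z\) statistic for a binomial proportion, with \(\textit{SE}(T_n)\) evaluated at \(\theta _0\) versus at \(\hat{\theta }\). The count follows the P.S. example of note 3 (\(x=14\) choices of the non-burning house); \(n=17\) trials is our assumption.

```python
# A minimal sketch (ours, not the book's) of the large-sample Z statistic
# for a binomial proportion, computing SE(T_n) two ways: at theta_0 and at
# the estimate theta-hat. n = 17 trials is our assumption.
from math import sqrt

from scipy.stats import norm

n, x, theta0 = 17, 14, 0.5
theta_hat = x / n

se_null = sqrt(theta0 * (1 - theta0) / n)       # SE evaluated at theta_0
se_hat = sqrt(theta_hat * (1 - theta_hat) / n)  # SE evaluated at theta-hat

for label, se in [("SE at theta_0", se_null), ("SE at theta-hat", se_hat)]:
    z = (theta_hat - theta0) / se
    p = 2 * norm.sf(abs(z))  # two-sided p-value from the N(0,1) approximation
    print(f"{label}: Z = {z:.3f}, p = {p:.4f}")
```

Both choices are valid asymptotically, as the note says; with \(\theta _0\) in the standard error, \(Z^2\) equals \(\chi ^2_{obs}\) and the \(p\)-value matches the approximate \(p=.0076\) of note 11.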
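
For note 6: Welch’s test is available in standard libraries. The sketch below uses simulated, illustrative data; passing equal_var=False to scipy’s ttest_ind computes the \(p\)-value from Welch’s approximate \(t\) distribution rather than the normal.

```python
# A minimal sketch (illustrative simulated data, not from the book) of
# Welch's two-sample t-test; equal_var=False makes scipy use Welch's
# approximate t distribution for the p-value rather than the normal.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
group1 = rng.normal(loc=0.0, scale=1.0, size=20)
group2 = rng.normal(loc=0.5, scale=2.0, size=15)  # unequal variances

t_stat, p_value = ttest_ind(group1, group2, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.4f}")
```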
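
For note 8: the unit-vector fact can be checked in one line by expanding the squared norm. For unit vectors \(u\) and \(v\),

\[
\|u+v\|^{2} = \|u\|^{2} + 2\,u\cdot v + \|v\|^{2} = 2 + 2\,u\cdot v \le 4,
\]

so \(\|u+v\| \le 2\), with equality exactly when \(u\cdot v = 1\), that is, when \(u = v\).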
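
For note 11: the two \(p\)-values quoted there can be reproduced as follows, again assuming the P.S. setup with \(n=17\) trials and \(\theta _0=.5\).

```python
# A minimal sketch reproducing note 11's comparison for x = 14, assuming
# the P.S. setup: n = 17 Bernoulli trials with theta_0 = 0.5.
from scipy.stats import binom, chi2

n, x, theta0 = 17, 14, 0.5
expected = n * theta0  # expected count in each of the two cells

# Pearson chi-squared statistic over the two cells (successes, failures)
chi2_obs = ((x - expected) ** 2 / expected
            + ((n - x) - (n - expected)) ** 2 / (n - expected))
p_approx = chi2.sf(chi2_obs, df=1)        # approximate p-value, ~ .0076

# Exact two-sided p-value: double the upper binomial tail P(X >= x)
p_exact = 2 * binom.sf(x - 1, n, theta0)  # ~ .0127

print(f"chi-squared = {chi2_obs:.3f}, approximate p = {p_approx:.4f}")
print(f"exact p = {p_exact:.4f}")
```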
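
For note 12: a minimal sketch of the bootstrap test. The data here are simulated placeholders standing in for the 60 values in the text, and two groups of 30 is our assumption; under \(H_0\), both groups are resampled from the empirical cdf of the pooled values.

```python
# A minimal sketch (ours) of the bootstrap test note 12 points to. Under H0
# both groups are drawn from the empirical cdf of all pooled values.
import numpy as np

rng = np.random.default_rng(1)
group1 = rng.normal(0.0, 1.0, size=30)  # placeholder data
group2 = rng.normal(0.8, 1.0, size=30)
pooled = np.concatenate([group1, group2])  # empirical cdf of all 60 values

observed = group1.mean() - group2.mean()
n_boot = 10_000
diffs = np.empty(n_boot)
for b in range(n_boot):
    # Resample each group, with replacement, from the pooled values (H0)
    g1 = rng.choice(pooled, size=group1.size, replace=True)
    g2 = rng.choice(pooled, size=group2.size, replace=True)
    diffs[b] = g1.mean() - g2.mean()

# p-value: how often the null-resampled difference is as extreme as observed
p_value = np.mean(np.abs(diffs) >= np.abs(observed))
print(f"observed difference = {observed:.3f}, bootstrap p = {p_value:.4f}")
```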

Author information

Corresponding author

Correspondence to Robert E. Kass.


Copyright information

© 2014 Springer Science+Business Media New York

Cite this chapter

Kass, R.E., Eden, U.T., Brown, E.N. (2014). Models, Hypotheses, and Statistical Significance. In: Analysis of Neural Data. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-9602-1_10
