
Linear Regression

Chapter in Analysis of Neural Data

Part of the book series: Springer Series in Statistics (SSS)


Abstract

Regression is the central method in the analysis of neural data. This is partly because, in all its guises, it is the most widely applied technique.


Notes

  1. See the appendix of Brown and Kass (2009).

  2. This expression is known as the convolution of the hemodynamic response function \(h(t)\) with the stimulus function \(u_j\).
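As a concrete illustration of this convolution, here is a minimal numerical sketch of building a convolved fMRI predictor from a stimulus indicator; the two-gamma HRF shape, the repetition time, and the event onsets are illustrative assumptions rather than values taken from the chapter.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Illustrative two-gamma hemodynamic response function (assumed shape)."""
    peak = gamma.pdf(t, 6)          # positive response peaking a few seconds after the event
    undershoot = gamma.pdf(t, 16)   # later, smaller undershoot
    return peak - 0.35 * undershoot

tr = 2.0                            # repetition time in seconds (assumed)
t = np.arange(0, 30, tr)            # time points at which the HRF is sampled
u = np.zeros(100)                   # stimulus function u_j over 100 scans
u[[10, 30, 50, 70]] = 1.0           # hypothetical event onsets

# Discrete convolution of h(t) with the stimulus function, truncated to the scan length.
x = np.convolve(u, hrf(t))[:len(u)]
```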

  3. The signal-to-noise ratio is a term borrowed from engineering, where it refers to the ratio of signal power to noise power and is usually reported on the log scale; under certain stochastic models it translates into a ratio of signal variance to noise variance.
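A small numerical sketch of the variance-ratio version of the signal-to-noise ratio, reported in decibels on the log scale; the simulated signal and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 500))   # arbitrary simulated signal
noise = rng.normal(scale=0.5, size=signal.size)   # arbitrary noise level

snr = signal.var() / noise.var()                  # signal variance over noise variance
snr_db = 10 * np.log10(snr)                       # engineering convention: report on the log (decibel) scale
print(f"SNR = {snr:.2f} ({snr_db:.1f} dB)")
```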

  4. In fact, the results cited in Wu (1981) show that (12.30) is necessary and sufficient for (12.31).

  5. Beyond (12.30), condition (12.36) says that the \(x_i\) values do not diverge extremely quickly, which would make \(\hat{\beta}_1\) converge faster than \(1/\sqrt{n}\).

  6. Randomization refers to the random assignment of treatments to subjects, and to the process of randomly ordering treatment conditions; we discuss this further in Section 13.4.

  7. The usual derivation of the limiting normal distribution of \(r\) begins with an analytic calculation of the covariance matrix of \((V_x,V_y,C)\), where \(V_x=V(X)\), \(V_y=V(Y)\), and \(C=Cov(X,Y)\), in which \((X,Y)\) is bivariate normal. That calculation provides an explicit formula for the covariance matrix in the limiting joint normal distribution of \((V_x,V_y,C)\), and then propagation of uncertainty is applied as in Section 9.1.2.
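For concreteness, the end result of that calculation under bivariate normality (a standard textbook summary, stated here outside the chapter's own development) is

$$\begin{aligned} \sqrt{n}\,(r-\rho ) \mathop{\rightarrow}\limits^{D} N\bigl (0,(1-\rho ^2)^2\bigr ), \end{aligned}$$

so the approximate standard error of \(r\) is \((1-\rho ^2)/\sqrt{n}\), which depends on the unknown \(\rho\); the \(z\)-transformation of footnote 8 removes this dependence.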

  8. The \(z\)-transformation may be derived as a variance-stabilizing transformation, as on p. 232, beginning with the limiting result mentioned in footnote 7. More general results are given by Hawkins (1989).
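A minimal sketch of using the \(z\)-transformation to form an approximate confidence interval for a correlation; the data are simulated, and the \(1/\sqrt{n-3}\) standard error is the commonly used finite-sample refinement, assumed here rather than quoted from p. 232.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)   # simulated correlated pair

r = np.corrcoef(x, y)[0, 1]

z = np.arctanh(r)                  # Fisher z: (1/2) log((1 + r)/(1 - r)), variance-stabilizing
se = 1.0 / np.sqrt(n - 3)          # approximate standard error on the z scale
lo, hi = z - 1.96 * se, z + 1.96 * se

ci = (np.tanh(lo), np.tanh(hi))    # transform the interval back to the correlation scale
print(f"r = {r:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```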

  9. See also “analysis of covariance,” mentioned in the footnote on p. 379.

  10. The letter \(F\) was chosen (by George Snedecor in 1934) to honor Fisher, who had first suggested a log-transformed normalized ratio of sums of squares, and derived its distribution, in the context of ANOVA, which we discuss in Chapter 13.

  11. Sometimes when people refer to the general linear model they also allow the variance matrix to take a more general form, or allow for non-normal errors.

  12. Before regression is applied, various pre-processing steps are usually carried out so that the assumptions of linear regression become a reasonable representation of the variation in the fMRI data.

  13. The equations are not solved merely by inverting the matrix \(X^TX\); this can lead to grossly incorrect answers due to seemingly innocuous round-off error. See Section 12.5.5.
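A brief numerical sketch of this point, comparing an explicit inverse of \(X^TX\) with a numerically stable least-squares solver on an ill-conditioned design; the nearly collinear design matrix is contrived for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.linspace(0, 1, n)
# Two nearly identical columns make X^T X badly conditioned.
X = np.column_stack([np.ones(n), t, t + 1e-7 * rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Naive: explicitly invert X^T X (numerically fragile).
beta_naive = np.linalg.inv(X.T @ X) @ X.T @ y

# Preferred: a stable least-squares routine (SVD-based LAPACK solver in NumPy).
beta_stable, *_ = np.linalg.lstsq(X, y, rcond=None)

print("condition number of X^T X:", np.linalg.cond(X.T @ X))
print("explicit inverse:", beta_naive)
print("stable solver:   ", beta_stable)
```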

  14. Here, Eq. (9.27) becomes

      $$\begin{aligned} \hat{F}_n(x,y) \mathop{\rightarrow}\limits^{P} F_{(X,Y)}(x,y) \end{aligned}$$

      where \(\hat{F}_n\) is the empirical cdf computed from the random vectors \(((X_1,Y_1),\ldots ,(X_n,Y_n))\).
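A minimal sketch of the bivariate empirical cdf this statement refers to; the simulated data and the evaluation point are arbitrary.

```python
import numpy as np

def empirical_cdf_2d(x_obs, y_obs, x, y):
    """Fraction of sample pairs (X_i, Y_i) with X_i <= x and Y_i <= y."""
    return np.mean((x_obs <= x) & (y_obs <= y))

rng = np.random.default_rng(3)
n = 5000
xy = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)

# As n grows, this converges in probability to F_(X,Y)(0.5, 0.5).
print(empirical_cdf_2d(xy[:, 0], xy[:, 1], 0.5, 0.5))
```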

  15. In \(K\)-fold cross-validation it is tempting to regard the average of the \(n\) MSE estimates as an ordinary mean, and to apply the usual standard error formula (7.17). This does not work correctly, however, because the \(n\) separate evaluations are not independent; the square of the standard error in (7.17) underestimates the variance. In fact, it is not possible to provide a simple evaluation of the uncertainty attached to the cross-validation estimate of MSE, or risk (see Bengio and Grandvalet 2004).
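A sketch of the naive calculation the note warns about: averaging per-observation squared errors from \(K\)-fold cross-validation and applying the ordinary standard error formula, which understates the true uncertainty because the evaluations are dependent. The simulated data and the simple linear fit are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, K = 200, 10
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

folds = np.array_split(rng.permutation(n), K)
sq_err = np.empty(n)
for test_idx in folds:
    train = np.setdiff1d(np.arange(n), test_idx)
    b1, b0 = np.polyfit(x[train], y[train], 1)     # fit on the K-1 training folds
    sq_err[test_idx] = (y[test_idx] - (b0 + b1 * x[test_idx])) ** 2

cv_mse = sq_err.mean()
naive_se = sq_err.std(ddof=1) / np.sqrt(n)          # ordinary SE formula; an underestimate here
print(f"CV estimate of MSE: {cv_mse:.3f} (naive SE {naive_se:.3f})")
```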

  16. The penalty in (12.72) may also be written \(\text{magnitude}(\beta) = ||\beta||^2\), and in mathematical analysis the Euclidean length is called an \(L2\) norm. The penalty (12.73) is called an \(L1\) penalty because it is based, analogously, on the \(L1\) norm.
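For concreteness, the two penalty terms written out in code; the coefficient vector is an arbitrary example, and ridge and lasso estimates add a multiple of these penalties to the least-squares criterion.

```python
import numpy as np

beta = np.array([0.5, -1.2, 0.0, 2.0])   # arbitrary coefficient vector

l2_penalty = np.sum(beta ** 2)           # ||beta||^2, the L2 (ridge) penalty of (12.72)
l1_penalty = np.sum(np.abs(beta))        # ||beta||_1, the L1 (lasso) penalty of (12.73)

print(l2_penalty, l1_penalty)
```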

  17. Strictly speaking, ridge regression refers to \(L2\)-penalized regression after the \(x\) variables are normalized.

  18. Actually, the penalty is applied to weighted least squares as described on p. 345.

Author information


Correspondence to Robert E. Kass.


Copyright information

© 2014 Springer Science+Business Media New York

Cite this chapter

Kass, R.E., Eden, U.T., Brown, E.N. (2014). Linear Regression. In: Analysis of Neural Data. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-9602-1_12
