Abstract
Regression is the central method in the analysis of neural data. This is partly because, in all its guises, it is the most widely applied technique.
Notes
- 1.
See the appendix of Brown and Kass (2009).
- 2.
This expression is known as the convolution of the hemodynamic response function \(h(t)\) with the stimulus function \(u_j\).
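This convolution can be sketched numerically. The gamma-shaped \(h(t)\) below is an illustrative stand-in (its shape and peak time are assumptions for this sketch, not the chapter's canonical HRF):

```python
import numpy as np

def hrf(t):
    # Illustrative gamma-shaped hemodynamic response (peaks near t = 5 s);
    # the functional form is an assumption for this sketch.
    h = t**5 * np.exp(-t)
    return h / h.sum()

dt = 0.5                              # sampling interval (s)
t = np.arange(0, 30, dt)              # support of h(t)
u = np.zeros(80)                      # stimulus function u_j over 40 s
u[[10, 30, 50]] = 1.0                 # three brief stimulus events
x = np.convolve(u, hrf(t))[:len(u)]   # predicted BOLD regressor
```

The resulting `x` is the kind of regressor that enters the fMRI design matrix: zero before the first event and a smeared, delayed copy of the stimulus train afterward.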
- 3.
The signal-to-noise ratio is a term borrowed from engineering, where it refers to a ratio of the power for signal to the power for noise, and is usually reported in the log scale; under certain stochastic models it translates into a ratio of signal variance to noise variance.
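Both forms of the ratio mentioned in the footnote can be computed directly; the sinusoidal "signal" here is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 1000))   # illustrative signal
noise = 0.3 * rng.standard_normal(1000)            # illustrative noise

snr = np.var(signal) / np.var(noise)   # ratio of signal to noise variance
snr_db = 10 * np.log10(snr)            # the engineering log-scale (decibel) form
```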
- 4.
- 5.
- 6.
Randomization refers to the random assignment of treatments to subjects, and to the process of randomly ordering treatment conditions; we discuss this further in Section 13.4.
- 7.
The usual derivation of the limiting normal distribution of \(r\) begins with an analytic calculation of the covariance matrix of \((V_x,V_y,C)\) where \(V_x=V(X)\), \(V_y=V(Y)\), and \(C=Cov(X,Y)\), in which \((X,Y)\) is bivariate normal. That calculation provides an explicit formula for the covariance matrix in the limiting joint normal distribution of \((V_x,V_y,C)\), and then propagation of uncertainty is applied as in Section 9.1.2.
- 8.
The \(z\)-transformation may be derived as a variance-stabilizing transformation, as on p. 232, beginning with the limiting result mentioned in footnote 7. More general results are given by Hawkins (1989).
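As a standard sketch of the variance-stabilizing result (the constants below are the usual asymptotic ones, not taken from this chapter's derivation):
$$\begin{aligned} z = \tanh^{-1}(r) = \tfrac{1}{2}\log\frac{1+r}{1-r}, \qquad z \;\dot\sim\; N\!\left(\tanh^{-1}(\rho),\ \frac{1}{n-3}\right), \end{aligned}$$so that, to this order of approximation, the variance of \(z\) does not depend on \(\rho\).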
- 9.
See also “analysis of covariance,” mentioned in the footnote on p. 379.
- 10.
The letter \(F\) was chosen (by George Snedecor in 1934) to honor Fisher, who had first suggested a log-transformed normalized ratio of sums of squares, and derived its distribution, in the context of ANOVA, which we discuss in Chapter 13.
- 11.
The term general linear model is sometimes used more broadly, allowing the variance matrix to differ from \(\sigma^2 I\) or allowing non-normal errors.
- 12.
Before regression is applied, various pre-processing steps are usually carried out so that the assumptions of linear regression become a reasonable representation of the variation in the fMRI data.
- 13.
The equations are not solved merely by inverting the matrix \(X^TX\); this can lead to grossly incorrect answers due to seemingly innocuous round-off error. See Section 12.5.5.
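The contrast can be illustrated numerically. `np.linalg.lstsq` uses an SVD-based solve, which is the numerically stable approach; explicitly forming and inverting \(X^TX\) is the fragile route the footnote warns against (the data below are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x, x**2])   # intercept + two predictors
beta_true = np.array([1.0, 2.0, 0.5])
y = X @ beta_true + 0.01 * rng.standard_normal(n)

# Stable: SVD-based least squares, preferred over explicit inversion
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Fragile when X is ill-conditioned: forming and inverting X^T X can
# amplify seemingly innocuous round-off error
beta_naive = np.linalg.inv(X.T @ X) @ (X.T @ y)
```

In this well-conditioned example the two agree, but with nearly collinear columns the explicit inverse can produce grossly incorrect coefficients while the SVD-based solve remains accurate.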
- 14.
Here, Eq. (9.27) becomes
$$\begin{aligned} \hat{F}_n(x,y) \xrightarrow{P} F_{(X,Y)}(x,y) \end{aligned}$$where \(\hat{F}_n\) is the empirical cdf computed from the random vectors \((X_1,Y_1),\ldots ,(X_n,Y_n)\).
- 15.
In \(K\)-fold cross-validation it is tempting to regard the average of the \(n\) MSE estimates as an ordinary mean, and to apply the usual standard error formula (7.17). This does not work correctly, however, because the \(n\) separate evaluations are not independent. Instead, the square of the standard error in (7.17) is an underestimate of the variance. In fact, it is not possible to provide a simple evaluation of the uncertainty attached to the cross-validation estimate of MSE, or risk (see Bengio and Grandvalet 2004).
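A minimal \(K\)-fold sketch makes the issue concrete (simulated data, simple no-intercept slope fit assumed for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 100, 5
x = rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)

idx = rng.permutation(n)
folds = np.array_split(idx, K)
fold_mse = []
for k in range(K):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(K) if j != k])
    # least-squares slope on the training folds (no intercept, for brevity)
    b = np.sum(x[train] * y[train]) / np.sum(x[train] ** 2)
    fold_mse.append(np.mean((y[test] - b * x[test]) ** 2))

cv_mse = np.mean(fold_mse)
# Naive standard error: treats the K fold estimates as independent.
# As the footnote explains, this understates the true uncertainty,
# because the folds share training data.
naive_se = np.std(fold_mse, ddof=1) / np.sqrt(K)
```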
- 16.
- 17.
Strictly speaking, ridge regression refers to \(L_2\)-penalized regression after the \(x\) variables are normalized.
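A short sketch of the closed-form ridge solution after normalization (simulated data; the penalty weight is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(n)

# Normalize the x variables first, as the footnote specifies
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
yc = y - y.mean()

lam = 1.0   # penalty weight (illustrative)
# L2-penalized (ridge) solution: (Xs^T Xs + lam I)^{-1} Xs^T yc
beta_ridge = np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ yc)
beta_ols = np.linalg.solve(Xs.T @ Xs, Xs.T @ yc)
```

The penalty shrinks each coefficient toward zero, so the ridge estimate always has smaller norm than the ordinary least-squares estimate on the same normalized design.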
- 18.
Actually, the penalty is applied to weighted least squares as described on p. 345.
Copyright information
© 2014 Springer Science+Business Media New York
Cite this chapter
Kass, R.E., Eden, U.T., Brown, E.N. (2014). Linear Regression. In: Analysis of Neural Data. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-9602-1_12
Print ISBN: 978-1-4614-9601-4
Online ISBN: 978-1-4614-9602-1
eBook Packages: Mathematics and Statistics (R0)