Abstract
Multiple linear regression is a powerful method of exploring relationships between a response \(Y\) and a set of potential explanatory variables \(x_1,\ldots , x_p\), but it has an obvious limitation: it assumes the predictive relationship is, on average, linear. In addition, in its standard form it assumes that the noise contributions are homogeneous and roughly normally distributed.
Notes
1. We apologize for the double use of \(f\) to mean both a pdf in \(f_{Y_i}(y|\theta )\) and a general function in \(f(x_1,\ldots ,x_p)\). These two distinct uses of \(f\) are very common. We hope that by pointing them out explicitly we will avoid confusion.
2. The analysis of Hecht et al. (1942) was different, but related. They wished to obtain the minimum number of quanta, \(n\), that would produce perception. Because quanta are considered to follow a Poisson distribution, in the notation we used above they took \(W \sim P(\lambda)\) and \(c = n\), with \(\lambda\), the mean number of quanta falling on the retina, being proportional to the intensity. This latter statement may be rewritten in the form \(\log \lambda = \beta_0 + x\), with \(x\) again being the log intensity. Then \(Y = 1\) (light is perceived) if \(W \ge n\), which occurs with probability \(p = 1 - P(W \le n-1) = 1 - F(n-1|\lambda)\), where \(F\) is the Poisson cdf. This is a latent-variable model for the proportion data (similar to, but different from, the one on p. 399). It could be fitted by finding the MLE of \(\beta_0\), though Hecht et al. apparently did the fitting by eye. Hecht et al. then determined the value of \(n\) that provided the best fit. They concluded that a very small number of quanta sufficed to produce perception, but see also Teich et al. (1982).
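The quantal model in this note can be sketched directly from its two defining equations, \(\log \lambda = \beta_0 + x\) and \(p = 1 - F(n-1|\lambda)\). The following is a minimal illustration; the particular values of \(\beta_0\) and \(n\) used below are assumptions for demonstration, not Hecht et al.'s estimates.

```python
import math

def poisson_cdf(k, lam):
    # F(k | lam) = P(W <= k) for W ~ Poisson(lam), summed directly.
    return sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(k + 1))

def perception_prob(x, beta0, n):
    # Probability that the light is perceived (Y = 1): at least n quanta
    # arrive, where log(lambda) = beta0 + x and x is the log intensity.
    lam = math.exp(beta0 + x)
    return 1.0 - poisson_cdf(n - 1, lam)

# Perception probability rises with log intensity x (beta0 = 1, n = 6 assumed).
for x in (-2.0, 0.0, 2.0):
    print(x, perception_prob(x, beta0=1.0, n=6))
```

As the note describes, fitting would amount to choosing \(\beta_0\) (and comparing candidate values of \(n\)) so that these probabilities best match the observed proportions of trials on which the light was seen.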
3. We have not discussed residual analysis here. It may be performed using deviance residuals or other forms of residuals. See Agresti (1990) or McCullagh and Nelder (1989).
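For a binary response with fitted success probability \(p_i\), the deviance residual mentioned in this note is the signed square root of the observation's contribution to the model deviance. A minimal sketch (not a substitute for the fuller treatment in the references cited above):

```python
import math

def deviance_residual(y, p):
    # Deviance residual for a binary observation y in {0, 1} with fitted
    # probability p; its square is the observation's contribution to the
    # deviance, and its sign follows the sign of y - p.
    if y == 1:
        return math.sqrt(-2.0 * math.log(p))
    return -math.sqrt(-2.0 * math.log(1.0 - p))

# A well-fitted observation gives a small residual; a surprising one is large.
print(deviance_residual(1, 0.9))   # observed success, high fitted probability
print(deviance_residual(1, 0.1))   # observed success, low fitted probability
```

Plotted against fitted values or covariates, these residuals play the same diagnostic role that ordinary residuals play in linear regression.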
4. Probit regression was introduced by Chester Bliss in 1934, but the latent-variable idea and the normal-cdf transformation were part of Fechner's thinking about psychophysics in 1860; logistic regression was apparently discussed first by Ronald Fisher and Frank Yates in 1938. See Agresti (1990) for a much more extensive discussion of the methods described briefly here.
5. The regularity conditions ensure non-degeneracy. For example, if there is only one \(x\) variable, it must take on at least two distinct values so that a line may be fitted. The \(y\) observations must also correspond to values that are possible according to the model; in dealing with proportions, for instance, the observed proportions cannot all be zero.
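The single-\(x\) example in this note can be made concrete: with design matrix \(X = [\mathbf{1}, x]\), the least-squares normal equations are solvable only when \(X^TX\) is nonsingular, and its determinant vanishes exactly when all \(x\) values coincide. A small sketch (the helper name is ours, chosen for illustration):

```python
def gram_det(xs):
    # Determinant of X'X for the design matrix X = [1, x]:
    #   n * sum(x_i^2) - (sum x_i)^2,
    # which is n^2 times the variance of the x's. Zero means the
    # intercept and slope are not jointly identifiable.
    n = len(xs)
    return n * sum(x * x for x in xs) - sum(xs) ** 2

print(gram_det([2.0, 2.0, 2.0]))  # degenerate: all x values identical
print(gram_det([1.0, 2.0, 3.0]))  # non-degenerate: a line can be fitted
```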
Copyright information
© 2014 Springer Science+Business Media New York
Cite this chapter
Kass, R.E., Eden, U.T., Brown, E.N. (2014). Generalized Linear and Nonlinear Regression. In: Analysis of Neural Data. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-9602-1_14
Print ISBN: 978-1-4614-9601-4
Online ISBN: 978-1-4614-9602-1
eBook Packages: Mathematics and Statistics (R0)