Regression Problems

Advanced Data Analysis in Neuroscience

Part of the book series: Bernstein Series in Computational Neuroscience ((BSCN))

Abstract

Assume we would like to predict variables y from variables x through a function f(x) such that the squared deviations between actual and predicted values are minimized (a so-called squared error loss function, see Eq. 1.11). The regression function which optimally achieves this is given by f(x) = E(y|x) (Winer 1971; Bishop 2006; Hastie et al. 2009); that is, the goal in regression is to model the conditional expectation of y (the “outputs” or “responses”) given x (the “predictors” or “regressors”). For instance, we may have recorded in vivo the average firing rates of p neurons on N independent trials, arranged in a set of row vectors X = {x_1, …, x_i, …, x_N}, and would like to see whether these allow us to predict the movement direction (angle) y_i of the animal on each trial i (a “decoding” problem). This is a typical multiple regression problem (where “multiple” indicates that we have more than one predictor). Had we also measured more than one output variable, e.g., several movement parameters such as angle, velocity, and acceleration, to be related to the firing rates of the p recorded neurons, we would enter the domain of multivariate regression.
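
To make this concrete, below is a minimal sketch of the decoding setting just described, fit by ordinary least squares; it uses Python/NumPy with simulated data, and all names and numbers are hypothetical illustrations, not taken from the chapter. A linear f(x) = w_0 + xᵀw is the simplest model of E(y|x), and minimizing the squared error loss over trials yields its weights.

```python
import numpy as np

# Hypothetical decoding example: predict the movement angle y_i on each trial
# from the average firing rates x_i of p neurons (rows of X), by minimizing
# the squared error loss  sum_i (y_i - f(x_i))^2  with a linear f.
rng = np.random.default_rng(0)
N, p = 200, 5                                    # trials, recorded neurons
X = rng.normal(size=(N, p))                      # simulated firing rates (N x p)
w_true = rng.normal(size=p)                      # weights generating the data
y = X @ w_true + rng.normal(scale=0.5, size=N)   # movement angle plus noise

# Ordinary least squares: append an intercept column and solve for the
# weights minimizing the squared deviations between actual and predicted y.
X1 = np.column_stack([np.ones(N), X])
w_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

y_hat = X1 @ w_hat
print("squared error loss:", np.sum((y - y_hat) ** 2))
```

With more than one output column in y (e.g., angle, velocity, and acceleration stacked as columns), the same least-squares call fits all outputs at once, corresponding to the multivariate regression case mentioned above.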


References

  • Aarts, E., Verhage, M., Veenvliet, J.V., Dolan, C.V., van der Sluis, S.: A solution to dependency: using multilevel analysis to accommodate nested data. Nat. Neurosci. 17, 491–496 (2014)

  • Balaguer-Ballester, E., Lapish, C.C., Seamans, J.K., Durstewitz, D.: Attractor dynamics of cortical populations during memory-guided decision-making. PLoS Comput. Biol. 7, e1002057 (2011)

  • Bishop, C.M.: Pattern Recognition and Machine Learning. Springer, New York (2006)

  • Brette, R., Gerstner, W.: Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J. Neurophysiol. 94, 3637–3642 (2005)

  • Buzsáki, G., Draguhn, A.: Neuronal oscillations in cortical networks. Science 304, 1926–1929 (2004)

  • Cleveland, W.S.: Robust locally weighted regression and smoothing scatterplots. J. Am. Stat. Assoc. 74, 829–836 (1979)

  • Cybenko, G.: Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 2, 303–314 (1989)

  • Demanuele, C., Kirsch, P., Esslinger, C., Zink, M., Meyer-Lindenberg, A., Durstewitz, D.: Area-specific information processing in prefrontal cortex during a probabilistic inference task: a multivariate fMRI BOLD time series analysis. PLoS One 10, e0135424 (2015b)

  • Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12, 2121–2159 (2011)

  • Duda, R.O., Hart, P.E.: Pattern Classification and Scene Analysis. Wiley, New York (1973)

  • Fahrmeir, L., Tutz, G.: Multivariate Statistical Modelling Based on Generalized Linear Models. Springer, New York (2010)

  • Fan, J., Yao, Q.: Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, New York (2003)

  • Friston, K.J., Harrison, L., Penny, W.: Dynamic causal modelling. NeuroImage 19, 1273–1302 (2003)

  • Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., et al.: Hybrid computing using a neural network with dynamic external memory. Nature 538, 471–476 (2016)

  • Haase, R.F.: Multivariate General Linear Models. SAGE, Thousand Oaks, CA (2011)

  • Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning, 2nd edn. Springer, New York (2009)

  • Hertz, J., Krogh, A.S., Palmer, R.G.: Introduction to the Theory of Neural Computation. Addison-Wesley, Reading, MA (1991)

  • Hoerl, A.E., Kennard, R.W.: Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12, 55–67 (1970)

  • Hotelling, H.: Relations between two sets of variates. Biometrika 28, 321–377 (1936)

  • Kim, J., Calhoun, V.D., Shim, E., Lee, J.H.: Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: evidence from whole-brain resting-state functional connectivity patterns of schizophrenia. NeuroImage 124, 127–146 (2016)

  • Kohonen, T.: Self-Organization and Associative Memory. Springer, Berlin (1989)

  • Kriegeskorte, N.: Deep neural networks: a new framework for modeling biological vision and brain information processing. Annu. Rev. Vis. Sci. 1, 417–446 (2015)

  • Krzanowski, W.J.: Principles of Multivariate Analysis: A User’s Perspective, Rev. edn. Oxford Statistical Science Series. OUP, Oxford (2000)

  • Lapish, C.C., Balaguer-Ballester, E., Seamans, J.K., Phillips, A.G., Durstewitz, D.: Amphetamine exerts dose-dependent changes in prefrontal cortex attractor dynamics during working memory. J. Neurosci. 35, 10172–10187 (2015)

  • LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015)

  • Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., Alsaadi, F.E.: A survey of deep neural network architectures and their applications. Neurocomputing 234, 11–26 (2017)

  • McDonald, G.C.: Ridge regression. WIREs Comp. Stat. 1, 93–100 (2009)

  • Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015)

  • Murayama, Y., Biessmann, F., Meinecke, F.C., Müller, K.R., Augath, M., Oeltermann, A., Logothetis, N.K.: Relationship between neural and hemodynamic signals during spontaneous activity studied with temporal kernel CCA. Magn. Reson. Imaging 28, 1095–1103 (2010)

  • Naundorf, B., Wolf, F., Volgushev, M.: Unique features of action potential initiation in cortical neurons. Nature 440, 1060–1063 (2006)

  • Obenchain, R.L.: Classical F-tests and confidence regions for ridge regression. Technometrics 19, 429–439 (1977)

  • Ohiorhenuan, I.E., Mechler, F., Purpura, K.P., Schmid, A.M., Hu, Q., Victor, J.D.: Sparse coding and high-order correlations in fine-scale cortical networks. Nature 466, 617–621 (2010)

  • Petersen, K.B., Pedersen, M.S.: The Matrix Cookbook. www2.imm.dtu.dk/pubdb/views/edoc_download.php/3274/pdf/imm3274.pdf (2012)

  • Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000)

  • Ruder, S.: An overview of gradient descent optimization algorithms. arXiv:1609.04747 (2016)

  • Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (1986)

  • Rumelhart, D.E., McClelland, J.L.: Parallel Distributed Processing. MIT Press, Cambridge, MA (1986)

  • Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)

  • Schneidman, E., Berry, M.J., Segev, R., Bialek, W.: Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440, 1007–1012 (2006)

  • Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58, 267–288 (1996)

  • West, B.T., Welch, K.B., Galecki, A.T.: Linear Mixed Models: A Practical Guide Using Statistical Software. Chapman & Hall, London (2006)

  • Winer, B.J.: Statistical Principles in Experimental Design. McGraw-Hill, New York (1971)

  • Yamins, D.L.K., DiCarlo, J.J.: Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci. 19, 356–365 (2016)


Copyright information

© 2017 Springer International Publishing AG


Cite this chapter

Durstewitz, D. (2017). Regression Problems. In: Advanced Data Analysis in Neuroscience. Bernstein Series in Computational Neuroscience. Springer, Cham. https://doi.org/10.1007/978-3-319-59976-2_2

