Journal of Computational Neuroscience, Volume 29, Issue 1–2, pp 107–126

A new look at state-space models for neural data

  • Liam Paninski
  • Yashar Ahmadian
  • Daniel Gil Ferreira
  • Shinsuke Koyama
  • Kamiar Rahnama Rad
  • Michael Vidne
  • Joshua Vogelstein
  • Wei Wu

Abstract

State-space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially-varying firing rates.
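To make the role of bandedness concrete, the following is a minimal sketch, not taken from the paper, of the direct-optimization idea described above: computing the MAP estimate of a latent log firing rate from Poisson spike counts by Newton's method, where each iteration reduces to a symmetric tridiagonal (banded) solve and therefore costs only O(T) time and memory. The AR(1) prior, exponential link, parameter values, and the map_smooth helper are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solveh_banded

def map_smooth(y, dt=0.01, a=0.98, sigma2=0.05, n_iter=25, tol=1e-8):
    """MAP path q (log firing rate) given spike counts y.

    Observation model: y_t ~ Poisson(exp(q_t) * dt).
    Prior: q_t = a * q_{t-1} + N(0, sigma2) noise.
    The Hessian of the log-posterior is tridiagonal, so each Newton
    step reduces to a banded solve costing O(T) time and memory.
    """
    y = np.asarray(y, dtype=float)
    T = y.size

    def logpost(q):
        d = q[1:] - a * q[:-1]
        return np.sum(y * q - np.exp(q) * dt) - 0.5 * np.sum(d * d) / sigma2

    q = np.zeros(T)
    for _ in range(n_iter):
        lam = np.exp(q) * dt
        # Gradient of the log-posterior (Poisson likelihood + AR(1) prior terms).
        g = y - lam
        d = (q[1:] - a * q[:-1]) / sigma2
        g[1:] -= d
        g[:-1] += a * d
        # Negative Hessian, stored in "upper banded" form for solveh_banded:
        # main diagonal = lam + prior curvature, superdiagonal = -a / sigma2.
        diag = lam.copy()
        diag[1:] += 1.0 / sigma2
        diag[:-1] += a * a / sigma2
        ab = np.zeros((2, T))
        ab[0, 1:] = -a / sigma2
        ab[1, :] = diag
        step = solveh_banded(ab, g)          # O(T) Newton direction
        # Backtracking line search: the log-posterior is concave, so this
        # damped Newton iteration increases it at every step.
        s, f0 = 1.0, logpost(q)
        while logpost(q + s * step) < f0 and s > 1e-10:
            s *= 0.5
        q = q + s * step
        if np.max(np.abs(s * step)) < tol:
            break
    return q

# Toy usage: recover a slowly varying log-rate from simulated spike counts.
rng = np.random.default_rng(0)
q_true = np.cumsum(rng.normal(0.0, 0.1, size=1000))
counts = rng.poisson(np.exp(q_true) * 0.01)
q_hat = map_smooth(counts)
```

For a multidimensional state the Hessian becomes block-tridiagonal rather than tridiagonal, and the same approach carries over with a block-banded factorization, so the cost still scales linearly in T.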

Keywords

Neural coding · State-space models · Hidden Markov model · Tridiagonal matrix

Acknowledgements

We thank J. Pillow for sharing the data used in Figs. 2 and 5, G. Czanner for sharing the data used in Fig. 6, and B. Babadi and Q. Huys for many helpful discussions. LP is supported by NIH grant R01 EY018003, an NSF CAREER award, and a McKnight Scholar award; YA by a Patterson Trust Postdoctoral Fellowship; DGF by the Gulbenkian PhD Program in Computational Biology, Fundação para a Ciência e Tecnologia PhD grant ref. SFRH/BD/33202/2007; SK by NIH grants R01 MH064537, R01 EB005847 and R01 NS050256; JV by NIDCD DC00109.

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Liam Paninski (1) (corresponding author)
  • Yashar Ahmadian (1)
  • Daniel Gil Ferreira (1)
  • Shinsuke Koyama (2)
  • Kamiar Rahnama Rad (1)
  • Michael Vidne (1)
  • Joshua Vogelstein (3)
  • Wei Wu (4)
  1. Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, USA
  2. Department of Statistics, Carnegie Mellon University, Pittsburgh, USA
  3. Department of Neuroscience, Johns Hopkins University, Baltimore, USA
  4. Department of Statistics, Florida State University, Tallahassee, USA
