A new look at state-space models for neural data

  • Published in: Journal of Computational Neuroscience

Abstract

State-space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations that are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially-varying firing rates.
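The bandedness point can be made concrete with a minimal sketch (ours, not code from the article): for the simplest case of a one-dimensional Gaussian random-walk state observed in Gaussian noise, the log-posterior is quadratic with a tridiagonal Hessian, so the MAP path is obtained by a single tridiagonal solve in O(T) time (here via the classic Thomas algorithm). The toy model and function names are our own illustrative choices.

```python
def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal linear system in O(T) (Thomas algorithm).

    `diag` and `rhs` have length T; `lower` and `upper` have length T-1.
    """
    T = len(diag)
    cp = [0.0] * (T - 1)
    dp = [0.0] * T
    if T > 1:
        cp[0] = upper[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for t in range(1, T):
        m = diag[t] - lower[t - 1] * cp[t - 1]
        if t < T - 1:
            cp[t] = upper[t] / m
        dp[t] = (rhs[t] - lower[t - 1] * dp[t - 1]) / m
    x = [0.0] * T
    x[-1] = dp[-1]
    for t in range(T - 2, -1, -1):
        x[t] = dp[t] - cp[t] * x[t + 1]
    return x


def map_smooth(y, obs_var, dyn_var):
    """MAP path for q_t ~ N(q_{t-1}, dyn_var), y_t ~ N(q_t, obs_var).

    The log-posterior is quadratic with a tridiagonal Hessian, so the
    optimum is one banded solve rather than an O(T^3) dense solve.
    """
    T = len(y)
    a, c = 1.0 / obs_var, 1.0 / dyn_var
    # Diagonal: observation precision plus one dynamics term per neighbor.
    diag = [a + c * (1 if t in (0, T - 1) else 2) for t in range(T)]
    off = [-c] * (T - 1)
    rhs = [a * yt for yt in y]
    return thomas_solve(off, diag, off, rhs)
```

With constant observations the MAP path reproduces them exactly; with noisy observations it returns a smoothed compromise whose mean matches the data. For non-Gaussian likelihoods each Newton step reduces to one such banded solve, which is why the overall cost stays O(T).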

Figs. 1–6

Notes

  1. It is worth noting that other, more sophisticated methods such as expectation propagation (Minka 2001; Ypma and Heskes 2003; Yu et al. 2006, 2007; Koyama and Paninski 2009) may be better equipped to handle these strongly non-Gaussian observation densities p(y_t|q_t) (and are, in turn, closely related to the optimization-based methods that are the focus of this paper); however, due to space constraints, we will not discuss these methods at length here.

  2. In practice, the simple Newton iteration does not always increase the objective log p(Q|Y); we have found the standard remedy for this instability (perform a simple backtracking line search along the Newton direction Q̂^(i) − δ^(i) H⁻¹∇ to determine a suitable stepsize δ^(i) ≤ 1) to be quite effective here.
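As an illustration of this backtracking scheme (a toy one-dimensional sketch of ours, not the paper's implementation), consider maximizing the concave function f(q) = yq − e^q, a Poisson-style log-likelihood with maximum at q = log y. Far from the optimum the full Newton step overshoots badly, and halving the stepsize restores monotone progress:

```python
import math

def newton_backtracking(f, grad, hess, q0, tol=1e-10, max_iter=100):
    """Maximize a smooth concave function f by damped Newton.

    If the full Newton step fails to increase f, halve the stepsize
    delta <= 1 until it does (a simple backtracking line search).
    """
    q = q0
    for _ in range(max_iter):
        g = grad(q)
        if abs(g) < tol:
            break
        d = -g / hess(q)              # Newton direction (hess < 0 here)
        delta = 1.0
        while delta > 1e-12 and f(q + delta * d) <= f(q):
            delta *= 0.5              # backtrack until f increases
        q += delta * d
    return q

# y = 3: optimum at q = log 3. Starting at q = -5 the first full Newton
# step would jump past q = 400, so the backtracking is essential.
y = 3.0
q_hat = newton_backtracking(lambda q: y * q - math.exp(q),
                            lambda q: y - math.exp(q),
                            lambda q: -math.exp(q),
                            q0=-5.0)
```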

  3. More precisely, Cunningham et al. (2008) introduce a clever iterative conjugate-gradient (CG) method to compute the MAP path in their model; this method requires O(T log T) time per CG step, with the number of CG steps increasing as a function of the number of observed spikes. (Note, in comparison, that the computation times of the state-space methods reviewed in the current work are insensitive to the number of observed spikes.)

  4. The approach here can be easily generalized to the case that the input noise has a nonzero correlation timescale. For example, if the noise can be modeled as an autoregressive process of order p instead of the white noise process described here, then we simply include the unobserved p-dimensional Markov noise process in our state variable (i.e., our Markov state variable q_t will now have dimension p + 2 instead of 2), and then apply the O(T) log-barrier method to this augmented state space.
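The state augmentation described in this note can be sketched as follows (illustrative code under our own conventions, not from the paper: A is the original transition matrix, b routes the AR noise into the state, and the AR(p) coefficients phi enter in companion form):

```python
def companion(phi):
    """p x p companion matrix of AR(p) coefficients phi."""
    p = len(phi)
    C = [[0.0] * p for _ in range(p)]
    C[0] = list(phi)                 # eps_{t+1} = phi_1 eps_t + ... + w_t
    for i in range(1, p):
        C[i][i - 1] = 1.0            # shift the noise history down one slot
    return C


def augment_transition(A, b, phi):
    """Block transition matrix for the augmented state [q_t; eps history].

    Assumed dynamics: q_{t+1} = A q_t + b eps_t, with eps_t an AR(p) process.
    """
    n, p = len(A), len(phi)
    F = [[0.0] * (n + p) for _ in range(n + p)]
    for i in range(n):
        for j in range(n):
            F[i][j] = A[i][j]
        F[i][n] = b[i]               # current noise value drives the state
    C = companion(phi)
    for i in range(p):
        for j in range(p):
            F[n + i][n + j] = C[i][j]
    return F
```

The augmented model is again linear and Markov, so the same banded machinery applies with state dimension n + p.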

  5. In some cases, Q may be observed directly on some subset of training data. If this is the case (i.e., direct observations of q_t are available together with the observed data Y), then the estimation problem simplifies drastically, since we can often fit the models p(y_t|q_t, θ) and p(q_t|q_{t−1}, θ) directly without making use of the more involved latent-variable methods discussed in this section.
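As a toy example of this shortcut (our own sketch, with made-up names), if a scalar AR(1) state q_t = a q_{t−1} + w_t were itself observed, the transition parameter a could be fit by ordinary least-squares regression of q_t on q_{t−1}, with no latent-variable machinery:

```python
def fit_ar1(q):
    """Least-squares estimate of a in q_t = a * q_{t-1} + w_t,
    usable whenever the state path q is observed directly."""
    num = sum(q[t] * q[t - 1] for t in range(1, len(q)))
    den = sum(q[t - 1] ** 2 for t in range(1, len(q)))
    return num / den
```

On a noiseless chain generated with a = 0.5 this recovers the coefficient exactly; with Gaussian dynamics noise it is the usual conditional maximum-likelihood estimate.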

  6. It is worth mentioning the work of Cunningham et al. (2008) again here; these authors introduced conjugate gradient methods for optimizing the marginal likelihood in their model. However, their methods require computation time scaling superlinearly with the number of observed spikes (and therefore superlinearly with T, assuming that the number of observed spikes is roughly proportional to T).

  7. An additional technical advantage of the direct optimization approach is worth noting here: to compute the E step via the Kalman filter, we need to specify some initial condition for p(V(0)). When we have no good information about the initial V(0), we can use “diffuse” initial conditions, and set the initial covariance Cov(V(0)) to be large (even infinite) in some or all directions in the n-dimensional V(0)-space. A crude way of handling this is to simply set the initial covariance in these directions to be very large (instead of infinite), though this can lead to numerical instability. A more rigorous approach is to take limits of the update equations as the uncertainty becomes large, and keep separate track of the infinite and non-infinite terms appropriately; see Durbin and Koopman (2001) for details. At any rate, these technical difficulties are avoided in the direct optimization approach, which can handle infinite prior covariance easily (this just corresponds to a zero term in the Hessian of the log-posterior).
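To see why a diffuse prior is trivial in the direct optimization view, here is a small sketch (illustrative, with hypothetical names, reusing the 1-D random-walk smoothing setting) of the diagonal of the tridiagonal Hessian: the initial condition contributes a single precision term to the first entry, and an infinite prior covariance simply means that term is zero.

```python
def hessian_diagonal(T, obs_prec, dyn_prec, init_prec):
    """Diagonal of the tridiagonal Hessian of the negative log-posterior
    for 1-D Gaussian random-walk smoothing.

    A diffuse initial condition (infinite prior variance) is just
    init_prec = 0: the prior contributes nothing to the Hessian.
    """
    diag = []
    for t in range(T):
        deg = 1 if t in (0, T - 1) else 2   # dynamics neighbors of q_t
        diag.append(obs_prec + dyn_prec * deg
                    + (init_prec if t == 0 else 0.0))
    return diag
```

No limiting argument is needed: the diffuse and informative cases differ only in that one additive term.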

References

  • Ahmadian, Y., Pillow, J., & Paninski, L. (2009a). Efficient Markov Chain Monte Carlo methods for decoding population spike trains. Neural Computation (under review).

  • Ahmadian, Y., Pillow, J., Shlens, J., Chichilnisky, E., Simoncelli, E., & Paninski, L. (2009b). A decoder-based spike train metric for analyzing the neural code in the retina. COSYNE09.

  • Araya, R., Jiang, J., Eisenthal, K. B., & Yuste, R. (2006). The spine neck filters membrane potentials. PNAS, 103(47), 17961–17966.

  • Asif, A., & Moura, J. (2005). Block matrices with l-block banded inverse: Inversion algorithms. IEEE Transactions on Signal Processing, 53, 630–642.

  • Bell, B. M. (1994). The iterated Kalman smoother as a Gauss–Newton method. SIAM Journal on Optimization, 4, 626–636.

  • Borg-Graham, L., Monier, C., & Fregnac, Y. (1996). Voltage-clamp measurements of visually-evoked conductances with whole-cell patch recordings in primary visual cortex. Journal of Physiology (Paris), 90, 185–188.

  • Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge: Cambridge University Press.

  • Brockwell, A., Rojas, A., & Kass, R. (2004). Recursive Bayesian decoding of motor cortical signals by particle filtering. Journal of Neurophysiology, 91, 1899–1907.

  • Brown, E., Frank, L., Tang, D., Quirk, M., & Wilson, M. (1998). A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. Journal of Neuroscience, 18, 7411–7425.

  • Brown, E., Kass, R., & Mitra, P. (2004). Multiple neural spike train data analysis: State-of-the-art and future challenges. Nature Neuroscience, 7, 456–461.

  • Brown, E., Nguyen, D., Frank, L., Wilson, M., & Solo, V. (2001). An analysis of neural receptive field plasticity by point process adaptive filtering. PNAS, 98, 12261–12266.

  • Chornoboy, E., Schramm, L., & Karr, A. (1988). Maximum likelihood identification of neural point process systems. Biological Cybernetics, 59, 265–275.

  • Coleman, T., & Sarma, S. (2007). A computationally efficient method for modeling neural spiking activity with point processes nonparametrically. IEEE Conference on Decision and Control.

  • Cossart, R., Aronov, D., & Yuste, R. (2003). Attractor dynamics of network up states in the neocortex. Nature, 423, 283–288.

  • Cox, D. (1955). Some statistical methods connected with series of events. Journal of the Royal Statistical Society, Series B, 17, 129–164.

  • Cunningham, J. P., Shenoy, K. V., & Sahani, M. (2008). Fast Gaussian process methods for point process intensity estimation. ICML, 192–199.

  • Czanner, G., Eden, U., Wirth, S., Yanike, M., Suzuki, W., & Brown, E. (2008). Analysis of between-trial and within-trial neural spiking dynamics. Journal of Neurophysiology, 99, 2672–2693.

  • Davis, R., & Rodriguez-Yam, G. (2005). Estimation for state-space models: An approximate likelihood approach. Statistica Sinica, 15, 381–406.

  • Dempster, A., Laird, N., & Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39, 1–38.

  • DiMatteo, I., Genovese, C., & Kass, R. (2001). Bayesian curve fitting with free-knot splines. Biometrika, 88, 1055–1073.

  • Djurisic, M., Popovic, M., Carnevale, N., & Zecevic, D. (2008). Functional structure of the mitral cell dendritic tuft in the rat olfactory bulb. Journal of Neuroscience, 28(15), 4057–4068.

  • Donoghue, J. (2002). Connecting cortex to machines: Recent advances in brain interfaces. Nature Neuroscience, 5, 1085–1088.

  • Doucet, A., de Freitas, N., & Gordon, N. (Eds.) (2001). Sequential Monte Carlo methods in practice. New York: Springer.

  • Durbin, J., & Koopman, S. (2001). Time series analysis by state space methods. Oxford: Oxford University Press.

  • Eden, U. T., Frank, L. M., Barbieri, R., Solo, V., & Brown, E. N. (2004). Dynamic analyses of neural encoding by point process adaptive filtering. Neural Computation, 16, 971–998.

  • Ergun, A., Barbieri, R., Eden, U., Wilson, M., & Brown, E. (2007). Construction of point process adaptive filter algorithms for neural systems using sequential Monte Carlo methods. IEEE Transactions on Biomedical Engineering, 54, 419–428.

  • Escola, S., & Paninski, L. (2009). Hidden Markov models applied toward the inference of neural states and the improved estimation of linear receptive fields. Neural Computation (under review).

  • Fahrmeir, L., & Kaufmann, H. (1991). On Kalman filtering, posterior mode estimation and fisher scoring in dynamic exponential family regression. Metrika, 38, 37–60.

  • Fahrmeir, L., & Tutz, G. (1994). Multivariate statistical modelling based on generalized linear models. New York: Springer.

  • Frank, L., Eden, U., Solo, V., Wilson, M., & Brown, E. (2002). Contrasting patterns of receptive field plasticity in the hippocampus and the entorhinal cortex: An adaptive filtering approach. Journal of Neuroscience, 22(9), 3817–3830.

  • Gao, Y., Black, M., Bienenstock, E., Shoham, S., & Donoghue, J. (2002). Probabilistic inference of arm motion from neural activity in motor cortex. NIPS, 14, 221–228.

  • Gat, I., Tishby, N., & Abeles, M. (1997). Hidden Markov modeling of simultaneously recorded cells in the associative cortex of behaving monkeys. Network: Computation in Neural Systems, 8, 297–322.

  • Godsill, S., Doucet, A., & West, M. (2004). Monte Carlo smoothing for non-linear time series. Journal of the American Statistical Association, 99, 156–168.

  • Green, P., & Silverman, B. (1994). Nonparametric regression and generalized linear models. Boca Raton: CRC.

  • Hawkes, A. (2004). Stochastic modelling of single ion channels. In J. Feng (Ed.), Computational neuroscience: A comprehensive approach (pp. 131–158). Boca Raton: CRC.

  • Herbst, J. A., Gammeter, S., Ferrero, D., & Hahnloser, R. H. (2008). Spike sorting with hidden markov models. Journal of Neuroscience Methods, 174(1), 126–134.

  • Huys, Q., Ahrens, M., & Paninski, L. (2006). Efficient estimation of detailed single-neuron models. Journal of Neurophysiology, 96, 872–890.

  • Huys, Q., & Paninski, L. (2009). Model-based smoothing of, and parameter estimation from, noisy biophysical recordings. PLOS Computational Biology, 5, e1000379.

  • Iyengar, S. (2001). The analysis of multiple neural spike trains. In Advances in methodological and applied aspects of probability and statistics (pp. 507–524). New York: Gordon and Breach.

  • Jones, L. M., Fontanini, A., Sadacca, B. F., Miller, P., & Katz, D. B. (2007). Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles. Proceedings of the National Academy of Sciences, 104, 18772–18777.

  • Julier, S., & Uhlmann, J. (1997). A new extension of the Kalman filter to nonlinear systems. In Int. Symp. Aerospace/Defense Sensing, Simul. and Controls. Orlando, FL.

  • Jungbacker, B., & Koopman, S. (2007). Monte Carlo estimation for nonlinear non-Gaussian state space models. Biometrika, 94, 827–839.

  • Kass, R., & Raftery, A. (1995). Bayes factors. Journal of the American Statistical Association, 90, 773–795.

  • Kass, R., Ventura, V., & Cai, C. (2003). Statistical smoothing of neuronal data. Network: Computation in Neural Systems, 14, 5–15.

  • Kass, R. E., Ventura, V., & Brown, E. N. (2005). Statistical issues in the analysis of neuronal data. Journal of Neurophysiology, 94, 8–25.

  • Kelly, R., & Lee, T. (2004). Decoding V1 neuronal activity using particle filtering with Volterra kernels. Advances in Neural Information Processing Systems, 15, 1359–1366.

  • Kemere, C., Santhanam, G., Yu, B. M., Afshar, A., Ryu, S. I., Meng, T. H., et al. (2008). Detecting neural-state transitions using hidden Markov models for motor cortical prostheses. Journal of Neurophysiology, 100, 2441–2452.

  • Khuc-Trong, P., & Rieke, F. (2008). Origin of correlated activity between parasol retinal ganglion cells. Nature Neuroscience, 11, 1343–1351.

  • Kitagawa, G., & Gersch, W. (1996). Smoothness priors analysis of time series. Lecture notes in statistics (Vol. 116). New York: Springer.

  • Koch, C. (1999). Biophysics of computation. Oxford: Oxford University Press.

  • Koyama, S., & Paninski, L. (2009). Efficient computation of the maximum a posteriori path and parameter estimation in integrate-and-fire and more general state-space models. Journal of Computational Neuroscience. doi:10.1007/s10827-009-0150-x.

  • Kulkarni, J., & Paninski, L. (2007). Common-input models for multiple neural spike-train data. Network: Computation in Neural Systems, 18, 375–407.

  • Kulkarni, J., & Paninski, L. (2008). Efficient analytic computational methods for state-space decoding of goal-directed movements. IEEE Signal Processing Magazine, 25(special issue on brain-computer interfaces), 78–86.

  • Lewi, J., Butera, R., & Paninski, L. (2009). Sequential optimal design of neurophysiology experiments. Neural Computation, 21, 619–687.

  • Litke, A., Bezayiff, N., Chichilnisky, E., Cunningham, W., Dabrowski, W., Grillo, A., et al. (2004). What does the eye tell the brain? Development of a system for the large scale recording of retinal output activity. IEEE Transactions on Nuclear Science, 1434–1440.

  • Martignon, L., Deco, G., Laskey, K., Diamond, M., Freiwald, W., & Vaadia, E. (2000). Neural coding: Higher-order temporal patterns in the neuro-statistics of cell assemblies. Neural Computation, 12, 2621–2653.

  • Meng, X.-L., & Rubin, D. B. (1991). Using EM to obtain asymptotic variance-covariance matrices: The SEM algorithm. Journal of the American Statistical Association, 86(416), 899–909.

  • Minka, T. (2001). A family of algorithms for Approximate Bayesian Inference. PhD thesis, MIT.

  • Moeller, J., Syversveen, A., & Waagepetersen, R. (1998). Log-Gaussian Cox processes. Scandinavian Journal of Statistics, 25, 451–482.

  • Moeller, J., & Waagepetersen, R. (2004). Statistical inference and simulation for spatial point processes. London: Chapman Hall.

  • Murphy, G., & Rieke, F. (2006). Network variability limits stimulus-evoked spike timing precision in retinal ganglion cells. Neuron, 52, 511–524.

  • Neal, R., & Hinton, G. (1999). A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. Jordan (Ed.), Learning in graphical models (pp. 355–368). Cambridge: MIT.

  • Nicolelis, M., Dimitrov, D., Carmena, J., Crist, R., Lehew, G., Kralik, J., et al. (2003). Chronic, multisite, multielectrode recordings in macaque monkeys. PNAS, 100, 11041–11046.

  • Nikolenko, V., Watson, B., Araya, R., Woodruff, A., Peterka, D., & Yuste, R. (2008). SLM microscopy: Scanless two-photon imaging and photostimulation using spatial light modulators. Frontiers in Neural Circuits, 2, 5.

  • Nykamp, D. (2005). Revealing pairwise coupling in linear-nonlinear networks. SIAM Journal on Applied Mathematics, 65, 2005–2032.

  • Nykamp, D. (2007). A mathematical framework for inferring connectivity in probabilistic neuronal networks. Mathematical Biosciences, 205, 204–251.

  • Ohki, K., Chung, S., Ch’ng, Y., Kara, P., & Reid, C. (2005). Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature, 433, 597–603.

  • Olsson, R. K., Petersen, K. B., & Lehn-Schioler, T. (2007). State-space models: From the EM algorithm to a gradient approach. Neural Computation, 19, 1097–1111.

  • Paninski, L. (2004). Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15, 243–262.

  • Paninski, L. (2005). Log-concavity results on Gaussian process methods for supervised and unsupervised learning. Advances in Neural Information Processing Systems, 17.

  • Paninski, L. (2009). Inferring synaptic inputs given a noisy voltage trace via sequential Monte Carlo methods. Journal of Computational Neuroscience (under review).

  • Paninski, L., Fellows, M., Shoham, S., Hatsopoulos, N., & Donoghue, J. (2004). Superlinear population encoding of dynamic hand trajectory in primary motor cortex. Journal of Neuroscience, 24, 8551–8561.

  • Paninski, L., & Ferreira, D. (2008). State-space methods for inferring synaptic inputs and weights. COSYNE.

  • Peña, J.-L. & Konishi, M. (2000). Cellular mechanisms for resolving phase ambiguity in the owl’s inferior colliculus. Proceedings of the National Academy of Sciences of the United States of America, 97, 11787–11792.

  • Penny, W., Ghahramani, Z., & Friston, K. (2005). Bilinear dynamical systems. Philosophical Transactions of the Royal Society of London, 360, 983–993.

  • Pillow, J., Ahmadian, Y., & Paninski, L. (2009). Model-based decoding, information estimation, and change-point detection in multi-neuron spike trains. Neural Computation (under review).

  • Pillow, J., Shlens, J., Paninski, L., Sher, A., Litke, A., Chichilnisky, E., et al. (2008). Spatiotemporal correlations and visual signaling in a complete neuronal population. Nature, 454, 995–999.

  • Press, W., Teukolsky, S., Vetterling, W., & Flannery, B. (1992). Numerical recipes in C. Cambridge: Cambridge University Press.

  • Priebe, N., & Ferster, D. (2005). Direction selectivity of excitation and inhibition in simple cells of the cat primary visual cortex. Neuron, 45, 133–145.

  • Rabiner, L. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77, 257–286.

  • Rahnama Rad, K., & Paninski, L. (2009). Efficient estimation of two-dimensional firing rate surfaces via Gaussian process methods. Network (under review).

  • Rasmussen, C., & Williams, C. (2006). Gaussian processes for machine learning. Cambridge: MIT.

  • Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1997). Spikes: Exploring the neural code. Cambridge: MIT.

  • Robert, C., & Casella, G. (2005). Monte Carlo statistical methods. New York: Springer.

  • Roweis, S., & Ghahramani, Z. (1999). A unifying review of linear Gaussian models. Neural Computation, 11, 305–345.

  • Rybicki, G., & Hummer, D. (1991). An accelerated lambda iteration method for multilevel radiative transfer, appendix b: Fast solution for the diagonal elements of the inverse of a tridiagonal matrix. Astronomy and Astrophysics, 245, 171.

  • Rybicki, G. B., & Press, W. H. (1995). Class of fast methods for processing irregularly sampled or otherwise inhomogeneous one-dimensional data. Physical Review Letters, 74(7), 1060–1063.

  • Salakhutdinov, R., Roweis, S. T., & Ghahramani, Z. (2003). Optimization with EM and expectation-conjugate-gradient. International Conference on Machine Learning, 20, 672–679.

  • Schneidman, E., Berry, M., Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440, 1007–1012.

  • Schnitzer, M., & Meister, M. (2003). Multineuronal firing patterns in the signal from eye to brain. Neuron, 37, 499–511.

  • Shlens, J., Field, G. D., Gauthier, J. L., Grivich, M. I., Petrusca, D., Sher, A., et al. (2006). The structure of multi-neuron firing patterns in primate retina. Journal of Neuroscience, 26, 8254–8266.

  • Shlens, J., Field, G. D., Gauthier, J. L., Greschner, M., Sher, A., Litke, A. M., et al. (2009). The structure of large-scale synchronized firing in primate retina. Journal of Neuroscience, 29, 5022–5031.

  • Shoham, S., Paninski, L., Fellows, M., Hatsopoulos, N., Donoghue, J., & Normann, R. (2005). Optimal decoding for a primary motor cortical brain-computer interface. IEEE Transactions on Biomedical Engineering, 52, 1312–1322.

  • Shumway, R., & Stoffer, D. (2006). Time series analysis and its applications. New York: Springer.

  • Silvapulle, M., & Sen, P. (2004). Constrained statistical inference: Inequality, order, and shape restrictions. New York: Wiley-Interscience.

  • Smith, A., & Brown, E. (2003). Estimating a state-space model from point process observations. Neural Computation, 15, 965–991.

  • Smith, A. C., Frank, L. M., Wirth, S., Yanike, M., Hu, D., Kubota, Y., et al. (2004). Dynamic analysis of learning in behavioral experiments. Journal of Neuroscience, 24(2), 447–461.

  • Smith, A. C., Stefani, M. R., Moghaddam, B., & Brown, E. N. (2005). Analysis and design of behavioral experiments to characterize population learning. Journal of Neurophysiology, 93(3), 1776–1792.

  • Snyder, D., & Miller, M. (1991). Random point processes in time and space. New York: Springer.

  • Srinivasan, L., Eden, U., Willsky, A., & Brown, E. (2006). A state-space analysis for reconstruction of goal-directed movements using neural signals. Neural Computation, 18, 2465–2494.

  • Suzuki, W. A., & Brown, E. N. (2005). Behavioral and neurophysiological analyses of dynamic learning processes. Behavioral & Cognitive Neuroscience Reviews, 4(2), 67–95.

  • Truccolo, W., Eden, U., Fellows, M., Donoghue, J., & Brown, E. (2005). A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. Journal of Neurophysiology, 93, 1074–1089.

  • Utikal, K. (1997). A new method for detecting neural interconnectivity. Biological Cybernetics, 76, 459–470.

  • Vidne, M., Kulkarni, J., Ahmadian, Y., Pillow, J., Shlens, J., Chichilnisky, E., et al. (2009). Inferring functional connectivity in an ensemble of retinal ganglion cells sharing a common input. COSYNE.

  • Vogelstein, J., Babadi, B., Watson, B., Yuste, R., & Paninski, L. (2008). Fast nonnegative deconvolution via tridiagonal interior-point methods, applied to calcium fluorescence data. Statistical analysis of neural data (SAND) conference.

  • Vogelstein, J., Watson, B., Packer, A., Jedynak, B., Yuste, R., & Paninski, L. (2009). Model-based optimal inference of spike times and calcium dynamics given noisy and intermittent calcium-fluorescence imaging. Biophysical Journal. http://www.stat.columbia.edu/liam/research/abstracts/vogelsteinbj08-abs.html.

  • Wahba, G. (1990). Spline models for observational data. Philadelphia: SIAM.

  • Wang, X., Wei, Y., Vaingankar, V., Wang, Q., Koepsell, K., Sommer, F., & Hirsch, J. (2007). Feedforward excitation and inhibition evoke dual modes of firing in the cat’s visual thalamus during naturalistic viewing. Neuron, 55, 465–478.

  • Warland, D., Reinagel, P., & Meister, M. (1997). Decoding visual information from a population of retinal ganglion cells. Journal of Neurophysiology, 78, 2336–2350.

  • Wehr, M., & Zador, A. (2003). Balanced inhibition underlies tuning and sharpens spike timing in auditory cortex. Nature, 426, 442–446.

  • West, M., & Harrison, P. (1997). Bayesian forecasting and dynamic models. New York: Springer.

  • Wu, W., Gao, Y., Bienenstock, E., Donoghue, J. P., & Black, M. J. (2006). Bayesian population coding of motor cortical activity using a Kalman filter. Neural Computation, 18, 80–118.

  • Wu, W., Kulkarni, J., Hatsopoulos, N., & Paninski, L. (2009). Neural decoding of goal-directed movements using a linear state-space model with hidden states. IEEE Transactions on Biomedical Engineering (in press).

  • Xie, R., Gittelman, J. X., & Pollak, G. D. (2007). Rethinking tuning: In vivo whole-cell recordings of the inferior colliculus in awake bats. Journal of Neuroscience, 27(35), 9469–9481.

  • Ypma, A., & Heskes, T. (2003). Iterated extended Kalman smoothing with expectation-propagation. Neural Networks for Signal Processing, 2003, 219–228.

  • Yu, B., Afshar, A., Santhanam, G., Ryu, S., Shenoy, K., & Sahani, M. (2006). Extracting dynamical structure embedded in neural activity. NIPS.

  • Yu, B. M., Cunningham, J. P., Shenoy, K. V., & Sahani, M. (2007). Neural decoding of movements: From linear to nonlinear trajectory models. ICONIP, 586–595.

  • Yu, B. M., Kemere, C., Santhanam, G., Afshar, A., Ryu, S. I., Meng, T. H., et al. (2007). Mixture of trajectory models for neural decoding of goal-directed movements. Journal of Neurophysiology, 97(5), 3763–3780.

  • Yu, B. M., Shenoy, K. V., & Sahani, M. (2006). Expectation propagation for inference in non-linear dynamical models with Poisson observations. In Proceedings of the nonlinear statistical signal processing workshop (pp. 83–86). Piscataway: IEEE.

  • Zhang, K., Ginzburg, I., McNaughton, B., & Sejnowski, T. (1998). Interpreting neuronal population activity by reconstruction: Unified framework with application to hippocampal place cells. Journal of Neurophysiology, 79, 1017–1044.

Acknowledgements

We thank J. Pillow for sharing the data used in Figs. 2 and 5, G. Czanner for sharing the data used in Fig. 6, and B. Babadi and Q. Huys for many helpful discussions. LP is supported by NIH grant R01 EY018003, an NSF CAREER award, and a McKnight Scholar award; YA by a Patterson Trust Postdoctoral Fellowship; DGF by the Gulbenkian PhD Program in Computational Biology, Fundação para a Ciência e a Tecnologia PhD Grant ref. SFRH/BD/33202/2007; SK by NIH grants R01 MH064537, R01 EB005847 and R01 NS050256; JV by NIDCD DC00109.

Author information

Correspondence to Liam Paninski.

Additional information

Action Editor: Israel Nelken

About this article

Cite this article

Paninski, L., Ahmadian, Y., Ferreira, D.G. et al. A new look at state-space models for neural data. J Comput Neurosci 29, 107–126 (2010). https://doi.org/10.1007/s10827-009-0179-x

