
Comparison of brain–computer interface decoding algorithms in open-loop and closed-loop control

Journal of Computational Neuroscience

Abstract

Neuroprosthetic devices such as a computer cursor can be controlled by the activity of cortical neurons when an appropriate algorithm is used to decode motor intention. Algorithms proposed for this purpose range from the simple population vector algorithm (PVA) and optimal linear estimator (OLE) to various versions of Bayesian decoders. Although Bayesian decoders typically provide the most accurate off-line reconstructions, it is not known which model assumptions in these algorithms are critical for improving decoding performance. Furthermore, improvements (or deficits) in off-line reconstruction do not necessarily translate into improvements (or deficits) in on-line control, as the subject may compensate for the specifics of the decoder in use at the time. Here, by comparing the performance of nine decoders, we show that assumptions about uniformly distributed preferred directions and about the way cursor trajectories are smoothed have the greatest impact on decoder performance in off-line reconstruction, whereas assumptions about tuning-curve linearity and spike-count variance play relatively minor roles. In on-line control, subjects compensate for directional biases caused by non-uniformly distributed preferred directions, leaving differences in cursor smoothing as the largest single algorithmic factor driving decoder performance.



Notes

  1. If the spike counts of the various neurons have different variances (or are not independent), the weighted least-squares estimate

    $$ \hat{\boldsymbol{v}}_t = (P^T\Sigma^{-1}P)^{-1}P^T\Sigma^{-1}\boldsymbol{z}_t \nonumber $$

    will be more efficient than the OLS. Here \(\Sigma\) is the covariance matrix of \(\boldsymbol{z}_t\) (Kutner et al. 2004). We consider only the OLS here; non-constant variance is instead accounted for by the log-linear models below. A small numerical sketch contrasting the two estimators is given after these notes.

  2. The canonical link function equates the linear predictor with the canonical parameter of the exponential family distribution, which allows \(P^T\boldsymbol{y}_t\) to be a sufficient statistic for \(\boldsymbol{v}_t\) (McCullagh and Nelder 1989).

  3. We took the threshold value to be 1 Hz instead of 0 because the algorithm becomes unstable as the variance approaches 0, since the inverse of the variance serves as the “weight” in the weighted least-squares; see Eq. (17) in Appendix A2.

  4. Some of the cells were well-isolated single-units, while others were multi-unit combinations.

  5. During the dual control experiment, only one algorithm could control the cursor at a time. The controlling algorithm during the calibration phase was alternated between experimental sessions in order to encourage a more balanced comparison.

  6. The maximum likelihood estimator asymptotically achieves the theoretical lower bound on the variance (the Cramér–Rao lower bound, given by the inverse of the Fisher information; Schervish 1996) in the limit of large samples. In Bayesian inference, the posterior expectation gives the optimal estimator under squared-error loss. In our analysis, we approximately computed the recursive Bayesian equations (see Appendix A2), which provides a first-order approximation of the posterior expectation in the “high-information” limit, where the posterior becomes very sharply peaked around the mode and the Laplace approximation is valid (Koyama et al., unpublished manuscript). Thus, the state-space estimates we computed are asymptotically optimal under squared-error loss.
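As a minimal numerical sketch of note 1, the following Python/NumPy snippet contrasts the ordinary and weighted least-squares reconstructions of a velocity vector from one bin of firing rates. The tuning matrix P, the true velocity, and the per-neuron variances are synthetic stand-ins chosen for illustration, not quantities from the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 40, 2                       # neurons, velocity dimensions
P = rng.standard_normal((N, d))    # rows: preferred-direction (tuning) vectors
v_true = np.array([1.0, -0.5])     # velocity to be reconstructed

# Heteroscedastic noise: each neuron has its own variance.
sigma2 = rng.uniform(0.5, 4.0, size=N)
Sigma = np.diag(sigma2)
z = P @ v_true + rng.standard_normal(N) * np.sqrt(sigma2)

# Ordinary least squares: v_hat = (P^T P)^{-1} P^T z
v_ols = np.linalg.solve(P.T @ P, P.T @ z)

# Weighted least squares (note 1): v_hat = (P^T Sigma^{-1} P)^{-1} P^T Sigma^{-1} z
W = np.linalg.inv(Sigma)           # diagonal here, so simply 1/sigma2 on the diagonal
v_wls = np.linalg.solve(P.T @ W @ P, P.T @ W @ z)

print("OLS:", v_ols, " WLS:", v_wls, " true:", v_true)
```

Averaged over repeated draws, the WLS estimate has lower variance than the OLS estimate whenever the per-neuron variances differ, which is the efficiency claim of note 1.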

References

  • Amirikian, B., & Georgopulos, A. P. (2000). Directional tuning profiles of motor cortical cells. Neuroscience Research, 36, 73–79.

  • Bickel, P. J., & Doksum, K. A. (2006). Mathematical statistics (2nd ed.). Englewood Cliffs: Prentice Hall.

  • Brockwell, A. E., Rojas, A. L., & Kass, R. E. (2004). Recursive Bayesian decoding of motor cortical signals by particle filtering. Journal of Neurophysiology, 91, 1899–1907.

  • Brockwell, A. E., Schwartz, A. B., & Kass, R. E. (2007). Statistical signal processing and the motor cortex. Proceedings of the IEEE, 95, 882–898.

  • Brown, E. N., Frank, L. M., Tang, D., Quirk, M. C., & Wilson, M. A. (1998). A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. Journal of Neuroscience, 18, 7411–7425.

  • Chapin, J. K., Moxon, K. A., Markowitz, R. S., & Nicolelis, M. A. L. (1999). Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex. Nature Neuroscience, 2, 664–670.

  • Chase, S. M., Schwartz, A. B., & Kass, R. E. (2009). Bias, optimal linear estimation, and the differences between open-loop simulation and closed-loop performance of brain–computer interface algorithms. Neural Networks. doi:10.1016/j.neunet.2009.05.005.

  • Doucet, A., de Freitas, N., & Gordon, N. (Eds.) (2001). Sequential Monte Carlo methods in practice. Berlin: Springer.

  • Eden, U. T., Frank, L. M., Barbieri, R., Solo, V., & Brown, E. N. (2004). Dynamic analyses of neural encoding by point process adaptive filtering. Neural Computation, 16, 971–998.

  • Foldiak, P. (1993). Computation and neural systems (chap. 9, pp. 55–60). Norwell, MA: Kluwer Academic Publishers.

  • Georgopoulos, A., Kettner, R., & Schwartz, A. (1986). Neuronal population coding of movement direction. Science, 233, 1416–1419.

  • Georgopoulos, A., Kettner, R., & Schwartz, A. (1988). Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neural population. Journal of Neuroscience, 8, 2928–2937.

  • Koyama, S., Pérez-Bolde, L. C., Shalizi, C. R., & Kass, R. E. (in press). Approximate methods for state-space models. Journal of the American Statistical Association.

  • Koyama, S., & Shinomoto, S. (2004). Histogram bin width selection for time-dependent Poisson processes. Journal of Physics A: Mathematical and General, 37, 7255–7265.

  • Kutner, M. H., Nachtsheim, C. J., & Neter, J. (2004). Applied linear regression models (4th ed.). Chicago: McGraw-Hill/Irwin.

  • Lebedev, M. A., & Nicolelis, M. A. L. (2006). Brain–machine interfaces: Past, present and future. Trends in Neurosciences, 29, 536–546.

  • Maynard, E. M., & Normann, R. A. (1997). The Utah intracortical electrode array: A recording structure for potential brain–computer interfaces. Electroencephalography and Clinical Neurophysiology, 102, 228–239.

  • McCullagh, P., & Nelder, J. (1989). Generalized linear models. London: Chapman and Hall.

  • Reina, G. A., & Schwartz, A. B. (2003). Eye–hand coupling during closed-loop drawing: Evidence of shared motor planning? Human Movement Science, 22, 137–152.

  • Salinas, E., & Abbott, L. F. (1994). Vector reconstruction from firing rates. Journal of Computational Neuroscience, 1, 89–107.

  • Schervish, M. (1996). Theory of statistics. New York: Springer.

  • Serruya, M., Hatsopoulos, N. G., Paninski, L., Fellows, M. R., & Donoghue, J. P. (2002). Brain–machine interface: Instant neural control of a movement signal. Nature, 416, 141–142.

  • Shimazaki, H., & Shinomoto, S. (2007). A method for selecting the bin size of a time histogram. Neural Computation, 19, 1503–1527.

  • Taylor, D. M., Tillery, S. I. H., & Schwartz, A. B. (2002). Direct cortical control of 3D neuroprosthetic devices. Science, 296, 1829–1832.

  • Velliste, M., Perel, S., Spalding, M. C., Whitford, A. S., & Schwartz, A. B. (2008). Cortical control of a prosthetic arm for self-feeding. Nature, 453, 1098–1101. doi:10.1038/nature06996.

  • Wu, W., Gao, Y., Bienenstock, E., Donoghue, J. P., & Black, M. J. (2006). Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Computation, 18, 80–118.

  • Yu, B. M., Kemere, C., Santhanam, G., Afshar, A., Ryu, S. I., Meng, T. H., et al. (2007). Mixture trajectory models for neural decoding of goal-directed movements. Journal of Neurophysiology, 97, 3763–3780.

  • Zhang, K., Ginzburg, I., McNaughton, B. L., & Sejnowski, T. J. (1998). Interpreting neuronal population activity by reconstruction: Unified framework with application to hippocampal place cells. Journal of Neurophysiology, 79, 1017–1044.


Acknowledgements

This work was supported by grants R01 MH064537, R01 EB005847, and R01 NS050256.

Author information


Corresponding author

Correspondence to Shinsuke Koyama.

Additional information

Action Editor: Alessandro Treves

Appendices

Appendix A1: Fisher tuning function

The Fisher tuning function is defined as

$$ f(\theta) = b + c\exp(\kappa\cos\theta), \label{eq:Fisher-tuning} $$
(9)

where \(\theta\) is the angle between the cell’s preferred direction and the movement direction (Amirikian and Georgopulos 2000). Note that this corresponds to the log-linear model Eq. (6) when \(b\) is omitted. The parameters \(b\), \(c\) and \(\kappa\) determine the tuning properties. These parameters are translated into three physiological parameters, the half-width of the dropoff shape \(\theta_{hw}\), the modulation depth \(m_d\), and the baseline firing rate \(b_s\), as

$$ \theta_{hw} = \cos^{-1}\Bigg[ \frac{\log(\cosh\kappa)}{\kappa} \Bigg] ~, $$
(10)
$$ m_d = 2c\sinh\kappa, $$
(11)

and

$$ b_s = b + ce^{-\kappa}. $$
(12)

The half-width satisfies \(\theta_{hw} < \pi/2\) for \(\kappa > 0\). As \(\kappa \to 0\) (so that \(\theta_{hw} \to \pi/2\)), Eq. (9) reduces to the cosine tuning function,

$$ f(\theta) = b_s + \frac{m_d}{2} + \frac{m_d}{2}\cos\theta. $$
(13)
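For concreteness, here is a short Python sketch of Eqs. (9)–(12): it evaluates the Fisher tuning function and recovers the half-width, modulation depth, and baseline rate, then checks the small-\(\kappa\) cosine limit of Eq. (13). The parameter values \((b, c, \kappa)\) are illustrative assumptions, not values fitted to the recorded cells.

```python
import numpy as np

def fisher_tuning(theta, b, c, kappa):
    """Fisher tuning function, Eq. (9): f(theta) = b + c * exp(kappa * cos(theta))."""
    return b + c * np.exp(kappa * np.cos(theta))

def physiological_params(b, c, kappa):
    """Translate (b, c, kappa) into the physiological parameters of Eqs. (10)-(12)."""
    theta_hw = np.arccos(np.log(np.cosh(kappa)) / kappa)  # half-width, Eq. (10)
    m_d = 2.0 * c * np.sinh(kappa)                        # modulation depth, Eq. (11)
    b_s = b + c * np.exp(-kappa)                          # baseline rate, Eq. (12)
    return theta_hw, m_d, b_s

# Illustrative parameter values (not fitted to data).
b, c, kappa = 5.0, 2.0, 1.5
theta_hw, m_d, b_s = physiological_params(b, c, kappa)
print(f"half-width = {np.degrees(theta_hw):.1f} deg, "
      f"modulation depth = {m_d:.2f} Hz, baseline = {b_s:.2f} Hz")

# For small kappa the profile approaches the cosine tuning of Eq. (13).
k_small = 1e-3
hw, md, bs = physiological_params(b, c, k_small)
theta = np.linspace(-np.pi, np.pi, 5)
print(np.allclose(fisher_tuning(theta, b, c, k_small),
                  bs + md / 2 + (md / 2) * np.cos(theta), atol=1e-5))
```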

Appendix A2: Exponential family regression

Here we briefly describe exponential family regression and its extension to the state-space method, which provides a unifying framework for the neural decoders introduced in Section 2.3. We assume that the spike count of the \(i\)th cell (\(i = 1,\ldots,N\)) follows the exponential family distribution,

$$ p(y_{i,t}|\theta_i,\phi_i) = \exp[(y_{i,t}\theta_i - b(\theta_i))/a(\phi_i)+c(y_{i,t},\phi_i)], \label{eq:exp-family} $$
(14)

where \(\theta_i\) is the canonical parameter related to the mean of \(y_{i,t}\) and \(\phi_i\) is the dispersion parameter (McCullagh and Nelder 1989). This family includes commonly used distributions such as the Poisson and Gaussian. The mean and variance of \(y_{i,t}\) are given by \(E(y_{i,t}) = b'(\theta_i)\) and \(\mathrm{Var}(y_{i,t}) = a(\phi_i)b''(\theta_i)\), respectively. Since the tuning function \(f(\boldsymbol{v}_t, \boldsymbol{p}_i)\) relates the velocity to the firing rate as \(E(y_{i,t}) = f(\boldsymbol{v}_t, \boldsymbol{p}_i)\), \(\theta_i = \theta_i(\boldsymbol{v}_t)\) is a function of \(\boldsymbol{v}_t\). Assuming that the spiking of the \(N\) cells is independent across cells, the probability distribution of \(\boldsymbol{y}_t = (y_{1,t},\ldots,y_{N,t})\) is given by

$$ p(\boldsymbol{y}_t|\boldsymbol{v}_t) = \prod\limits_{i=1}^Np(y_{i,t}|\theta_i(\boldsymbol{v}_t),\phi_i). \label{eq:population-distribution} $$
(15)

Let \(l(\boldsymbol{v}_t) = \log p(\boldsymbol{y}_t|\boldsymbol{v}_t)\) be the log-likelihood function of Eq. (15). The gradient of \(l(\boldsymbol{v}_t)\) is then derived as

$$ \nabla_{v_t}l(\boldsymbol{v}_t) = \sum\limits_{i=1}^N\frac{y_{i,t}-f(\boldsymbol{v}_t,\boldsymbol{p}_i)}{Var(y_{i,t})}\nabla_{v_t}f(\boldsymbol{v}_t,\boldsymbol{p}_i), \label{eq:loglikeli} $$
(16)

where \(\nabla_{v_t}\) denotes the gradient with respect to \(\boldsymbol{v}_t\). The maximum likelihood estimate of \(\boldsymbol{v}_t\) is then obtained by solving \(\nabla_{v_t}l(\boldsymbol{v}_t) = 0\). Note that only the tuning function \(f(\boldsymbol{v}_t, \boldsymbol{p}_i)\) and the spike count variance \(\mathrm{Var}(y_{i,t})\) appear in Eq. (16). When the tuning function is linear and the variance is constant, \(\nabla_{v_t}l(\boldsymbol{v}_t) = 0\) can be solved analytically, leading to the LGB (i.e., the optimal linear estimator); otherwise it is solved numerically (by the Newton–Raphson method, for example). Note also that Eq. (16) shows that the maximum likelihood estimate coincides with the weighted (nonlinear) least-squares estimate, which minimizes the weighted squared error,

$$ \sum\limits_{i=1}^N\frac{[y_{i,t}-f(\boldsymbol{v}_t,\boldsymbol{p}_i)]^2}{Var(y_{i,t})}. \label{eq:WLS} $$
(17)
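To illustrate, the sketch below solves \(\nabla_{v_t}l(\boldsymbol{v}_t) = 0\) by Fisher scoring (a Newton–Raphson variant) for the Poisson log-linear case, in which \(f(\boldsymbol{v}_t, \boldsymbol{p}_i) = \exp(\boldsymbol{p}_i^T\boldsymbol{v}_t)\) and \(\mathrm{Var}(y_{i,t}) = f(\boldsymbol{v}_t, \boldsymbol{p}_i)\). The tuning vectors and spike counts are simulated; the decoders in the paper are calibrated to recorded data.

```python
import numpy as np

def ml_decode(y, P, v0=None, n_iter=20, tol=1e-8):
    """Maximum likelihood velocity estimate for a Poisson log-linear model.

    With f(v, p_i) = exp(p_i . v), Var(y_i) = f(v, p_i), and the gradient of
    the log likelihood (Eq. 16) reduces to sum_i (y_i - f_i) p_i. Solved by
    Fisher scoring.
    """
    N, d = P.shape
    v = np.zeros(d) if v0 is None else v0.copy()
    for _ in range(n_iter):
        f = np.exp(P @ v)                 # expected counts at current v
        grad = P.T @ (y - f)              # Eq. (16) for the Poisson case
        info = P.T @ (f[:, None] * P)     # Fisher information matrix
        step = np.linalg.solve(info, grad)
        v += step
        if np.linalg.norm(step) < tol:
            break
    return v

rng = np.random.default_rng(1)
P = 0.5 * rng.standard_normal((50, 2))    # synthetic tuning vectors
v_true = np.array([0.8, -0.3])
y = rng.poisson(np.exp(P @ v_true))       # one bin of spike counts
print(ml_decode(y, P), "vs true", v_true)
```

Because the Fisher information weights each neuron by the inverse of its (rate-dependent) variance, this iteration is exactly the weighted nonlinear least-squares of Eq. (17).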

In state-space filtering, in addition to the likelihood Eq. (15), which is called the observation model, we adopt a state model describing the evolution of the state \(\boldsymbol{v}_t\). In our analysis, the state model was taken to be a random walk,

$$ p(\boldsymbol{v}_{t+1}|\boldsymbol{v}_t) = \frac{1}{(2\pi\eta)^{d/2}} \exp\bigg[-\frac{1}{2\eta}(\boldsymbol{v}_{t+1}-\boldsymbol{v}_t)^T(\boldsymbol{v}_{t+1}-\boldsymbol{v}_t)\bigg]. $$
(18)

The set of posterior distributions of v t given the observation up to time t, for t = 1,2,..., is then computed by the recursive relationships,

$$ p(\boldsymbol{v}_t|\boldsymbol{y}_1,\ldots,\boldsymbol{y}_t) \propto p(\boldsymbol{y}_t|\boldsymbol{v}_t)p(\boldsymbol{v}_t|\boldsymbol{y}_1,\ldots,\boldsymbol{y}_{t-1}), \label{eq:posterior} $$
(19)

where

$$ p(\boldsymbol{v}_t|\boldsymbol{y}_1,\ldots,\boldsymbol{y}_{t-1}) = \int p(\boldsymbol{v}_t|\boldsymbol{v}_{t-1})p(\boldsymbol{v}_{t-1}|\boldsymbol{y}_1,\ldots,\boldsymbol{y}_{t-1})\,d\boldsymbol{v}_{t-1} \label{eq:predictive} $$
(20)

is called the predictive distribution. Note that in the limit of large variance, \(\eta \to \infty\) (or when \(\boldsymbol{v}_t\) does not depend on \(\boldsymbol{v}_{t-1}\)), the state-space estimate (more precisely, the maximum a posteriori (MAP) estimate) converges to the maximum likelihood estimate obtained by solving \(\nabla_{v_t}l(\boldsymbol{v}_t) = 0\).

One implementation of the recursive formula approximates the posterior distribution Eq. (19) as a Gaussian centered on its mode (Brown et al. 1998). Let \(\boldsymbol{v}_{t|t}\) and \(V_{t|t}\) be the (approximate) mode and covariance matrix of the posterior distribution Eq. (19), and let \(\boldsymbol{v}_{t|t-1}\) and \(V_{t|t-1}\) be the mode and covariance matrix of the predictive distribution Eq. (20) at time \(t\). Further, let \(r(\boldsymbol{v}_t) = \log\{p(\boldsymbol{y}_t|\boldsymbol{v}_t)\,p(\boldsymbol{v}_t|\boldsymbol{y}_1,\ldots,\boldsymbol{y}_{t-1})\}\). The posterior distribution is then approximated as a Gaussian whose mean and covariance are \(\boldsymbol{v}_{t|t} = \arg\max_{\boldsymbol{v}_t}r(\boldsymbol{v}_t)\) and \(V_{t|t} = -[\nabla\nabla_{v_t} r(\boldsymbol{v}_{t|t})]^{-1}\), respectively, where \(\nabla\nabla_{v_t}\) denotes the Hessian (matrix of second derivatives) with respect to \(\boldsymbol{v}_t\). Since the state model is a random walk, the predictive distribution Eq. (20) is also Gaussian, and its mean and covariance are computed as

$$\boldsymbol{v}_{t|t-1} = \boldsymbol{v}_{t-1|t-1}, \label{eq:predictive-mean} $$
(21)
$$V_{t|t-1} = V_{t-1|t-1} + \eta I. \label{eq:predictive-var} $$
(22)

We take the initial state for filtering to be the center of the workspace. This approximate filter is a version of the Laplace–Gaussian filter, in which the posterior distribution is approximated as a Gaussian using Laplace’s method (Koyama et al., unpublished manuscript). Note that we used the Gaussian approximation instead of particle filtering (Doucet et al. 2001) to compute the posterior estimates because, under the computational-cost constraints of real-time decoding, the Gaussian approximation provides faster and more accurate posterior estimates than the particle filter (Koyama et al., unpublished manuscript).
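The following sketch assembles these pieces into a single filtering step for the same assumed Poisson log-linear observation model used above: a random-walk prediction (Eqs. (21)–(22)) followed by a Laplace (Gaussian) approximation to the posterior Eq. (19) via Newton ascent on \(r(\boldsymbol{v}_t)\). It illustrates the structure of the method, not the authors’ real-time implementation.

```python
import numpy as np

def lg_filter_step(y, P, v_prev, V_prev, eta, n_newton=10):
    """One Laplace-Gaussian filter step for a Poisson log-linear model.

    Prediction (Eqs. 21-22): random-walk state model, so the predictive mean
    equals the previous mean and the covariance grows by eta*I. Update
    (Eq. 19): approximate the posterior as Gaussian at the mode of
    r(v) = log p(y|v) + log p(v|past), with covariance -[Hessian r]^{-1}.
    """
    d = v_prev.size
    v_pred = v_prev                          # Eq. (21)
    V_pred = V_prev + eta * np.eye(d)        # Eq. (22)
    V_pred_inv = np.linalg.inv(V_pred)

    v = v_pred.copy()
    for _ in range(n_newton):                # Newton ascent on r(v)
        f = np.exp(P @ v)
        grad = P.T @ (y - f) - V_pred_inv @ (v - v_pred)
        hess = -(P.T @ (f[:, None] * P)) - V_pred_inv
        v -= np.linalg.solve(hess, grad)
    V = np.linalg.inv(-hess)                 # Laplace covariance at the mode
    return v, V

# Filter a short synthetic spike-count sequence.
rng = np.random.default_rng(2)
P = 0.5 * rng.standard_normal((50, 2))       # synthetic tuning vectors
v, V = np.zeros(2), np.eye(2)                # initial state: workspace center
for t in range(5):
    y = rng.poisson(np.exp(P @ np.array([0.8, -0.3])))
    v, V = lg_filter_step(y, P, v, V, eta=0.1)
print("filtered velocity estimate:", v)
```

As noted above, increasing eta weakens the state model: in the limit of large eta the prior term vanishes and each step reduces to the maximum likelihood estimate of the previous section.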

Cite this article

Koyama, S., Chase, S.M., Whitford, A.S. et al. Comparison of brain–computer interface decoding algorithms in open-loop and closed-loop control. J Comput Neurosci 29, 73–87 (2010). https://doi.org/10.1007/s10827-009-0196-9
