One of the great challenges of contemporary neuroscience is understanding, in a quantitative manner, how neurons and neural ensembles encode and process information in the form of action potentials. The advent of multi-electrode arrays, which can record data simultaneously from many neurons, has made this task more urgent. The task is made difficult, however, by the inherent complexity of neural systems, which are highly nonlinear, interconnected, dynamic, and subject to stochastic variation. Furthermore, while several methods offer good predictive performance for spike train modeling, their adoption in the broader neuroscience community has been limited by their mathematical complexity and lack of interpretability.

Here we present a novel and intuitive methodology for modeling nonlinear dynamic systems with point process inputs and outputs, such as interconnected neuronal ensembles. The method relies on expressing the nonlinearity in the form of Volterra-like kernels, termed the Probability-Based Volterra (PBV) kernels. The nth-order PBV kernel, $PBV_n$, is derived in two steps. First, we compute the conditional probability of an output spike given n input spikes at various lags. Second, we subtract lower-order effects to isolate the nth-order nonlinearity. Thus, the first PBV kernel, $PBV_1(\tau)$, is the conditional probability of an output spike given an input spike at lag $\tau$, minus the unconditional probability of an output spike, i.e.:

$$PBV_1(\tau) = P\big(y[t] \mid x[t-\tau]\big) - P\big(y[t]\big) \tag{1}$$
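As an illustration, the first-order kernel in Eq. (1) can be estimated directly from binned spike trains by averaging the output over the time bins that follow each input spike at a given lag. The sketch below is a minimal example assuming 0/1 spike indicators sampled at a common bin width; the function name and arguments are illustrative and not part of the published method.

```python
import numpy as np

def estimate_pbv1(x, y, max_lag):
    """Estimate PBV1(tau) from binary spike trains (Eq. 1).

    x, y    : 1-D arrays of 0/1 spike indicators (input and output).
    max_lag : number of lags tau = 0 .. max_lag-1 to evaluate.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    T = len(y)
    p_y = y.mean()                       # unconditional output probability P(y[t])
    pbv1 = np.zeros(max_lag)
    for tau in range(max_lag):
        mask = x[:T - tau] > 0           # bins where the input spiked tau steps earlier
        if mask.any():
            # P(y[t] | x[t - tau]) estimated as the mean output over those bins
            pbv1[tau] = y[tau:][mask].mean() - p_y
    return pbv1
```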

This first-order kernel describes a linear impulse response filter similar to that derived from spike-triggered averaging and cross-correlation methods. The second-order PBV kernel, $PBV_2(\tau_1, \tau_2)$, which is the first nonlinear kernel, is the conditional probability of an output spike given a pair of input spikes, minus the conditional probability of an output spike given either one of those input spikes individually, with the unconditional probability added back to correct for subtracting the baseline twice, i.e.:

$$PBV_2(\tau_1, \tau_2) = P\big(y[t] \mid x[t-\tau_1], x[t-\tau_2]\big) - P\big(y[t] \mid x[t-\tau_1]\big) - P\big(y[t] \mid x[t-\tau_2]\big) + P\big(y[t]\big) \tag{2}$$
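A second-order estimator follows the same pattern: condition on pairs of input spikes at lags $\tau_1$ and $\tau_2$, then subtract the first-order conditional probabilities and add back the unconditional one, as in Eq. (2). The sketch below makes the same binning assumptions as before and is illustrative rather than a reference implementation; skipping the diagonal $\tau_1 = \tau_2$ is one possible convention, since those bins carry only a single input spike.

```python
import numpy as np
from itertools import product

def estimate_pbv2(x, y, max_lag):
    """Estimate PBV2(tau1, tau2) from binary spike trains (Eq. 2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    T = len(y)
    p_y = y.mean()

    # First-order conditional probabilities P(y[t] | x[t - tau]), reused below.
    p_cond1 = np.full(max_lag, p_y)
    for tau in range(max_lag):
        mask = x[:T - tau] > 0
        if mask.any():
            p_cond1[tau] = y[tau:][mask].mean()

    pbv2 = np.zeros((max_lag, max_lag))
    for tau1, tau2 in product(range(max_lag), repeat=2):
        if tau1 == tau2:
            continue                     # a "pair" at equal lags is really a single spike
        t = np.arange(max(tau1, tau2), T)
        mask = (x[t - tau1] > 0) & (x[t - tau2] > 0)   # input spikes at both lags
        if mask.any():
            p_pair = y[t][mask].mean()   # P(y[t] | x[t - tau1], x[t - tau2])
            pbv2[tau1, tau2] = p_pair - p_cond1[tau1] - p_cond1[tau2] + p_y
    return pbv2
```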

This method may be extended to describe the contribution of any combination of n input spikes to the output in the form of the nth-order PBV kernel. We show that the PBV kernels are equivalent to the Wiener kernels when the input is a Poisson process, thus placing the PBV kernels in the context of a well-established and rigorous mathematical theory [1].
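As an illustrative check of this Poisson-input setting, one can drive a simple synthetic system with a known first-order kernel using a Bernoulli-approximated Poisson input and verify that the estimator sketched above (estimate_pbv1) recovers the kernel's shape. The system, rates, and kernel below are arbitrary choices made for illustration, not the systems analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, max_lag = 200_000, 20

# Bernoulli-approximated Poisson input at ~5% spikes per bin (illustrative value).
x = (rng.random(T) < 0.05).astype(float)

# An assumed exponential first-order kernel (not from the paper) driving the output.
true_k1 = 0.3 * np.exp(-np.arange(max_lag) / 4.0)
drive = np.convolve(x, true_k1)[:T]
y = (rng.random(T) < np.clip(0.02 + drive, 0.0, 1.0)).astype(float)

# For this purely linear synthetic system, PBV1 should approximate
# (1 - input rate) * true_k1, i.e. about 0.95 * true_k1 (ignoring rare clipping).
k1_hat = estimate_pbv1(x, y, max_lag)
```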

The proposed PBV methodology was first applied to synthetic systems for which the ground truth of the model was available. The PBV kernels were found both to accurately estimate the ground-truth kernels and to reproduce the given output, thus validating the method. Finally, the proposed PBV methodology was applied to real neural data recorded from the CA3 and CA1 regions of the rodent hippocampus [2]. Although ground truth was not available in this case, the PBV kernels reproduced the output as well as other models that have been validated both mathematically and in vivo in the context of neural prosthetics [2].