Journal of Computational Neuroscience, Volume 31, Issue 3, pp 563–579

Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process

  • Patrick Jahn
  • Rune W. Berg
  • Jørn Hounsgaard
  • Susanne Ditlevsen
Open Access


Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms for spike timing in neurons, and have served as building blocks for more elaborate models. The Ornstein–Uhlenbeck process in particular is a popular description of the stochastic fluctuations in the membrane potential of a neuron, but other models like the square-root model or models with a non-linear drift are also sometimes applied. Data described by such models have to be stationary, so these simple models can only be applied over short time windows. However, experimental data show varying time constants, state dependent noise, a graded firing threshold and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein–Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity to which the neuron is exposed can be reasonably estimated as a threshold version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.


Statistical methods in neuroscience · Membrane time constants · State dependent firing intensity · Ornstein–Uhlenbeck process · Square-root model · Synaptic fluctuations

1 Introduction

When neurons receive intense and random synaptic input, their post-synaptic potentials overlap and fuse, such that the membrane potential becomes a stochastic process. These fluctuations in the membrane potential have often been modeled as an Ornstein–Uhlenbeck (OU) process, since it is the simplest stochastic leaky integrate-and-fire (LIF) model. Other LIF models have also been proposed, like the square-root model, where an inhibitory reversal potential is introduced, or models with non-linear drift terms to describe the spike-generating current. In spite of the popularity of the OU process for neuronal modeling, there is little data to support its validity. For this reason, we conducted experiments on a real neuronal network measuring the membrane potential fluctuations during a well-characterized motor network activity. The analysis of the fluctuations provides poor support for the OU process as a proper model for the stochastic membrane fluctuations. Instead, we suggest the square-root model, since it describes our data remarkably well. The crucial point is that the variance is not constant. Even though the synaptic drive is the primary cause of changes in variance, we chose to let the variance depend on the membrane potential and not on the input, for convenience, since the membrane potential is the variable we measure; the two are highly correlated and cannot be distinguished statistically. Extending this to a global model, we suggest a time-inhomogeneous jump diffusion process, which also incorporates the time-varying dynamics of the stimulus induced network activity.

The main findings are the following. Under stimulation, a square-root diffusion describes the data much better than an OU process with constant diffusion coefficient. The drift is linear in locally stationary regimes during short time windows, but the membrane time constant decreases with increasing depolarization. Hence, the membrane time constant is a function of the membrane potential X t , estimated to be approximately exponentially decreasing with depolarizing membrane potential.

To model the spiking characteristics we introduce a Poissonian firing intensity that depends on the membrane potential. The relationship between spike frequency and mean membrane potential has been addressed previously both analytically (Hansel and van Vreeswijk 2002) and experimentally (Anderson et al. 2000; Carandini and Heeger 1994; Gabbiani et al. 2002). In contrast to the spike frequency as a function of the mean membrane potential, we propose a model for the spike timing mechanism that directly provides the spiking frequency for a given level of the membrane potential, as also proposed by Pfister et al. (2006). This way of modeling the spiking mechanism is more flexible than the ones used in the literature on LIF models, see e.g. Burkitt (2006a) and Gerstner and Kistler (2002). Moreover, this approach avoids the difficult and crucial problem of defining a fixed excitation threshold, also discussed in Jahn (2009). For typical shapes of spikes and the membrane potential fluctuations some 10 ms before the spike maxima, an exact threshold for the spike is not obvious and the notion even seems inadequate. This concern naturally leads to the assumption that the spike probability depends on the state. After applying our estimation method for the Poissonian firing intensity to all of the data, we report an exponential increase of the firing intensity with depolarizing membrane potential, which is comparable with the quadratic shape reported in previous studies (Anderson et al. 2000; Carandini and Heeger 1994; Gabbiani et al. 2002).

Hip flexor nerve activity was recorded simultaneously with intracellular recordings of a spinal motoneuron during network activity induced by a mechanical stimulus. This activation leads to an intense synaptic bombardment of the intracellularly recorded neuron. It is of interest to characterize the network activity to which the recorded neuron is exposed. Indeed, this background signal, which may well be variable over the course of the experiment, could be the main piece of information to be extracted from the data available. When the network activity is large enough to elicit spikes in the spinal motoneurons, the network output measured through the hip flexor nerve activity is a proxy for the input to the recorded cell. Hence, we test this hypothesis by estimating the input from the recorded neuron without using the measured output activity, and then compare the resulting estimate with the output. We find that the unmeasured network activity, which the neuron is exposed to, can be reasonably estimated and coincides with a threshold version of the output from the network.

The stochastic LIF models provide a useful compromise between biophysical reality and mathematical tractability, see e.g. Burkitt (2006a, b), Dayan and Abbott (2001), Gerstner and Kistler (2002), Ricciardi (1977) and Tuckwell (1988). Most studies make theoretical analyses of the models, whereas studies where experimental data are fitted to some of the stochastic LIF models are less frequent (e.g. Jahn 2009; Lansky et al. 2006). The data are typically either intracellular measurements of the membrane potential sampled at high frequency, or extracellular measurements of the spike times. Obviously there is more information contained in the membrane potential than in spike times alone. Some studies estimate intrinsic parameters characterizing the neuron by assuming the neuronal input known; references using intracellular recordings are Clopath et al. (2007), Huys et al. (2006), Jolivet et al. (2004), Jolivet et al. (2006), La Camera et al. (2004), Paninski et al. (2005), Rauch et al. (2003) and references using spike times are Paninski et al. (2004, 2005). Other studies identify the input signal from the neuronal output, assuming the intrinsic neuronal parameters known; some references using intracellular recordings are Höpfner (2007), Lanska and Lansky  (1998), Lansky (1983), Lansky et al. (2006, 2009), Picchini et al. (2008) and references using spike times are Ditlevsen and Ditlevsen (2008), Ditlevsen and Lansky (2005, 2006, 2007, 2008), Inoue et al. (1995), Lansky and Ditlevsen (2008), Mullowney and Iyengar (2008), Shinomoto et al. (1999) and Zhang et al. (2009). In these last studies, two parameters characterizing the neuronal input were estimated or statistical methods to do so were proposed. Note though that more elaborate biophysical models where OU-processes have been used as building blocks, e.g. for the synaptic conductances, have been successfully fitted to experimental data, see e.g. Piwkowska et al. (2008).

Nearly all of the above methods rely on determining the model before the analysis is carried out, as well as assuming some of the parameters known. In this paper we construct a diffusion LIF model by non-parametric methods from experimental data, thus letting the data decide the best model within the class of stochastic diffusion integrate-and-fire models. A time constant is introduced, which is inversely proportional to the conductance of the neuron and varies with the state of the membrane potential. Further, the dynamics of the process are allowed to depend on time by modeling a non-stationary input caused by a stimulus that changes during the course of the experiment. Finally, the spiking intensity is estimated by fitting the observed spikes to a Poisson random measure with a state dependent intensity. All parameters and quantities of the model are estimated from experimental data. The estimated model is validated through computer simulations and compared to the experimental data. The simulations show a remarkable resemblance to the recorded data.

2 Materials and methods

2.1 Experimental methods

The experimental data are taken from a previous study, where traces of 25 s of the membrane potential of a motoneuron were measured during different current injections under the same mechanical stimulation. The network was permitted to relax for 5 min before a new trace was recorded. In this paper we analyze 16 traces, chosen among all recorded traces as those that did not spike or spiked only rarely, since a main purpose of the present study is to model the subthreshold dynamics. Sample traces are illustrated in Fig. 1(c) and (e).
Fig. 1

(a) Cutaneous stimulation via sinusoidal movements of a glass rod on the hind-limb pocket skin. (b) Electroneurogram from hip flexor nerve, recorded simultaneously with trace 12. (c) Membrane potential from intracellular recordings, trace 12. The injected current is −1.7 nA, and no spiking occurs. (d) Electroneurogram from hip flexor nerve, recorded simultaneously with trace 14. (e) Membrane potential from intracellular recordings, trace 14. There is no injected current and spiking occurs. The number of spikes observed is 68

Briefly, the experiments were performed in an integrated spinal cord-carapace preparation from an adult red-eared turtle (Trachemys scripta elegans). In the integrated preparation the spinal cord remains in the spinal canal with the tactile sensory nerves from the carapace intact. The motor nerves are carefully transected to avoid muscle movements and dissected out for electroneurogram recordings. The scratch reflex was activated by mechanical somato-sensory stimulation of selected regions on the carapace as described in Alaburda (2003) and Alaburda et al. (2005).

Intracellular recordings in current-clamp mode were obtained from a motoneuron in segment D10. Recordings were accepted if the neuron had a stable membrane potential more negative than −50 mV. Data were sampled at 10 kHz, i.e. the time steps between observations were Δ = 0.1 ms. The voltage was measured with an accuracy of 0.05 mV. Hip flexor nerve activity was recorded using a suction pipette.

Mechanical stimulation was performed with the fire polished tip of a bent glass rod mounted to the membrane of a loudspeaker in the cutaneous region known to elicit “pocket scratch” (Robertson and Stein 1988) which results in a broad activation of cells (Berkowitz 2007). The duration, frequency, and amplitude of the stimulus were controlled with a function generator (Fig. 1(a)). This tactile stimulus induced the scratchlike network activity, which was monitored by the suction electrode nerve recordings from the hip-flexor nerve (Fig. 1(b) and (d)). Further details on recording, sampling, and experimental procedures are available in Berg et al. (2007, 2008).

2.2 The model

A stochastic diffusion model  In diffusion neuronal models, the dynamics of the membrane depolarization between two consecutive neuronal firings are represented by a scalar diffusion process X = {X t ; t ≥ 0} indexed by the time t, and given by the Itô-type stochastic differential equation (SDE)
$$ \mathrm{d} X_t=\beta(X_t,t)\, \mathrm{d} t+\sigma(X_t) \mathrm{d} W_t; \quad X_0 = x_0 $$
where W = {W t ; t ≥ 0} is a standard Wiener process (Brownian motion) and β(·) and σ 2 (·) are real-valued functions (called the drift and the infinitesimal variance) of their arguments satisfying certain regularity conditions to ensure the existence of a unique solution to Eq. (1). The drift coefficient reflects the local average rate of displacement and local variability is represented by the infinitesimal variance. We will determine non-parametrically the functions β(·) and σ 2 (·) from the experimental data.

Firing of the neuron is not an intrinsic part of model (1), so a firing mechanism has to be imposed. Often firing events are modeled by the crossing of the membrane potential X t of a voltage threshold, which is modeled as a constant or as a time dependent function to account for refractory effects. In this work we consider a more detailed firing mechanism with a state dependent firing intensity, see below.

The most common diffusion model proposed to describe the membrane potential dynamics is the linear diffusion LIF model, given by the SDE
$$ \mathrm{d} X_t= \frac{1}{\tau}(a - X_t)\mathrm{d} t+ \sigma(X_t)\mathrm{d} W_t, $$
where τ is the membrane time constant and a characterizes the neuronal input with voltage units. Under mild regularity conditions on the diffusion coefficient σ(·) the conditional mean of model (2) is
$$ \mathbf{E}(X_t \, | \, X_0 = x_0 ) = a + (x_0 -a)e^{-t/\tau}. $$
If σ(X t ) = σ is constant then this is just the OU model (Ditlevsen and Lansky 2005), which takes values on the entire real line. This is the simplest mean-reverting diffusion. When \(\sigma (X_t) = \sigma \sqrt{X_t-V_I}\), where V I is the inhibitory reversal potential, this is the square-root model (Bibbona et al. 2010; Ditlevsen and Lansky 2006; Höpfner and Brodda 2006; Lansky and Lanska 1987; Lansky et al. 1995), also called the Feller neuronal model or the CIR process. It is required that \(2(a-V_I)\geq\tau\sigma^2\) to ensure that X t only takes values larger than V I and never hits V I .
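The two diffusion models can be compared directly in simulation. The sketch below uses a simple Euler–Maruyama scheme; all parameter values are illustrative placeholders, not the estimates reported later in the paper:

```python
import numpy as np

def simulate(drift, diffusion, x0, dt, n, rng):
    """Euler-Maruyama scheme for dX = drift(X) dt + diffusion(X) dW."""
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = (x[i] + drift(x[i]) * dt
                    + diffusion(x[i]) * rng.normal(0.0, np.sqrt(dt)))
    return x

# Illustrative values: time in ms, voltage in mV.
tau, a, v_i, sigma = 10.0, -60.0, -75.0, 0.3
rng = np.random.default_rng(1)
drift = lambda x: (a - x) / tau

# OU model: constant diffusion coefficient, values on the whole real line.
ou = simulate(drift, lambda x: 1.0, a, 0.1, 50_000, rng)

# Square-root (Feller/CIR) model: fluctuations shrink towards V_I.
sqrt_diff = lambda x: sigma * np.sqrt(max(x - v_i, 0.0))
cir = simulate(drift, sqrt_diff, a, 0.1, 50_000, rng)
```

The `max(·, 0)` guard handles the discretization artifact that a single Euler step can overshoot V I even though the continuous process cannot.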
The data introduced in Section 2.1 show a cycling behavior generated by the mechanical stimulation and are thus time inhomogeneous, which is not reflected by this simple model. Nevertheless, locally the data might be considered approximately stationary. For the spinal motor activity we consider three regimes: The quiescent regime during the first five seconds of each trace before the mechanical stimulation starts with little or no synaptic input, the on-cycles with motor nerve activity and occasional spike activity, and the off-cycles, the lower regime in between the on-cycles (Fig. 2). The analysis of Section 3 clearly shows that the membrane time constant τ is different in each of the three regimes, decreasing for increasing activity in agreement with previous findings (Berg et al. 2008). A biological explanation is that when the mean synaptic conductance increases there is a shortening of the effective integration time (Barrett 1975; Berg et al. 2007; Bernander et al. 1991; Destexhe and Pare 1999; Destexhe et al. 2003), which may serve to facilitate rapid changes in movements (Berg et al. 2008). Thus, it is necessary to extend the model to a time inhomogeneous LIF model since the time homogeneous LIF model is only locally valid.
Fig. 2

A close-up of trace 12. The red line is the moving average after which we numerically decide where on- and off-cycles begin and end, which is indicated by the dashed lines

To allow τ to depend on the level of the membrane potential, and to take into account the time-inhomogeneous input from the background network activity caused by the mechanical stimulation, the model is extended as follows.

A time-inhomogeneous diffusion LIF model  Assume that the subthreshold dynamics of the membrane potential between spikes X t can be described by the SDE
$$ \mathrm{d} X_t=\frac{1}{\tau(X_t)}(a+g(t)-X_t)\mathrm{d} t+\sigma(X_t) \mathrm{d} W_t, $$
where g(t) is the time inhomogeneous input with voltage units, and a is the mean voltage during the quiescent period. We assume g = 0 in the quiescent period before the mechanical stimulation is initiated and thus, a characterizes a basal level which will depend on the constant injected current. The dependence on the level of the membrane potential of the time constant is modeled as
$$ \tau(x)=\tau^* e^{-\gamma x} $$
with τ *,γ > 0. Here τ * is the hypothetical time constant when the membrane potential is 0, and thus only represents a scaling of the membrane time constant, and γ is the rate with which the time constant decreases with depolarizing membrane potential. Note that the time inhomogeneity of this model only enters through the input function t ↦ g(t).

In model (4), we have chosen to let the time constant and the diffusion coefficient be functions of the membrane potential X t and not of the input g(t). Though the synaptic drive g(t) is the primary cause of changes in variance and time constant, we let both depend on X t for convenience, since X t and g(t) are highly correlated.
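Model (4) can be simulated in the same Euler–Maruyama fashion. In the sketch below the network input g(t) is a hypothetical square wave standing in for the estimated input, and all parameter values are illustrative rather than fitted; the point is to see the state-dependent time constant and the square-root noise interact with a switching input:

```python
import numpy as np

# Illustrative parameters: tau(x) = tau_star * exp(-gamma x) in ms, voltages in mV.
tau_star, gamma = 2.5, 0.022
a, v_i, sigma = -70.0, -80.0, 0.3
dt, n = 0.1, 100_000                  # 0.1 ms steps, 10 s total

def tau(x):
    return tau_star * np.exp(-gamma * x)

def g(t):
    # Hypothetical square-wave network input: 15 mV during 1 s "on-cycles".
    return 15.0 if (t // 1000.0) % 2 == 1 else 0.0

rng = np.random.default_rng(7)
x = np.empty(n + 1)
x[0] = a
for i in range(n):
    t = i * dt
    drift = (a + g(t) - x[i]) / tau(x[i])
    diff = sigma * np.sqrt(max(x[i] - v_i, 0.0))
    x[i + 1] = x[i] + drift * dt + diff * rng.normal(0.0, np.sqrt(dt))
```

During on-cycles the trace settles around a + 15 mV with a shorter effective time constant, mimicking the faster reaction at depolarized levels described below.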

Spike generation  The problem of finding a reasonable excitation threshold S and a reset value x * for LIF models is discussed in Jahn (2009). There S and x * are estimated from interspike interval data. Due to the time inhomogeneity of the data, this approach is not convenient for our case. Hence we introduce a more flexible mechanism of spike generation, based on the idea in Höpfner and Brodda (2006), who introduced a Poissonian firing according to an exponential clock that measures how much time the membrane potential X spends over a certain threshold S. Consider a spike train to be a random sequence 0 < s 1 < s 2 < ... < s l  < T on the time interval [0,T] or equivalently written, to be a random measure
$$ \mu(\mathrm{d} t):=\sum\limits_{i=1}^l\delta_{(s_i)}(\mathrm{d} t) \quad {\rm on}\;([0,T],\mathcal{B}([0,T])), $$
where δ (t) denotes the Dirac-measure at t and \(\mathcal{B}([0,T])\) the Borel σ-algebra on [0,T]. We will now generalize the definition in Höpfner and Brodda (2006) of a Poisson spike train that allows spatial flexibility of the intensity.

Definition 1

The measure μ given by Eq. (6) is called a Poisson spike train if it is a Poisson random measure with intensity measure λ(X t − )d t, where X t −  denotes the left limit of X t and x ↦ λ(x) is a piecewise continuous function.

A Poisson spike train μ generates points on the time axis [0,T] with an intensity that depends on the state of the membrane potential; these points are identified with the jumps, and thus with the spikes. When a jump is generated the membrane potential X is reset to a value x * , assumed constant for simplicity.

A global jump diffusion model  The model can be represented as a jump diffusion process by adding a jump term to the SDE in Eq. 4,
$$ \begin{array}{rll} \mathrm{d} X_t&=&\frac{1}{\tau(X_t)}(a+g(t)-X_t)\mathrm{d} t\\ &&+\,\sigma(X_t) \mathrm{d} W_t + (x^* - X_{t-})\mu(X_{t-},\mathrm{d} t), \end{array} $$
where μ(X t − , d t) is a Poisson spike train with intensity measure λ(X t − )d t. The main goal of this work is to estimate all unknown quantities of model (7) from experimental data, including g(·), σ(·) and λ(·). If μ has a jump at time t, the jump size of X at time t is \(X_t-X_{t-}=x^* - X_{t-}\), such that the process is reset to x * right after a spike, which is associated with the jump. For other jump diffusion neuronal models we refer to Giraudo and Sacerdote (1997).
To ensure the existence of a unique solution to Eq. (7) for the starting value X 0 = x 0 ∈ ℝ, it is convenient to assume that σ(·) is Lipschitz and g(·) is bounded. The drift term in Eq. (7) is mean reverting and locally Lipschitz. Hence, we only need to worry about a possible explosive behavior of jumps (see e.g., Jacod 2003, Ch.III, Thm. 2.32). Let \(\mathbb{1}_A(z)\) denote the indicator function of the set A, i.e. \(\mathbb{1}_A(z)=1\) if z ∈ A and \(\mathbb{1}_A(z)=0\) if \(z\not\in A\). To construct μ we define
$$\mu(X_{t-},\mathrm{d} t):=\int_{z\in \mathbb{R}_+}\mathbb{1}_{[0,\lambda(X_{t-})]}(z) N(\mathrm{d} z,\mathrm{d} t),$$
where N(d z, d t) is an independent homogeneous Poisson random measure on \(\mathbb{R}_+^2\) with intensity measure d zd t. From this we see that the intensity measure of μ is given by
$$ \begin{array}{rll} \mathbf{E}[\mu(X_{t-},\mathrm{d} t)|X_{t-}]&=& \mathbf{E}\Big[\int_{z\in \mathbb{R}_+}\mathbb{1}_{[0,\lambda(X_{t-})]}(z) N(\mathrm{d} z,\mathrm{d} t)\Big|X_{t-}\Big]\\ &=& \int_{z\in \mathbb{R}_+}\mathbb{1}_{[0,\lambda(X_{t-})]}(z)\, \mathrm{d} z\,\mathrm{d} t =\lambda(X_{t-})\,\mathrm{d} t, \end{array} $$
which is the desired intensity from above for Eq. (7). Further, since X always jumps back to a fixed value x * where the jump intensity is λ(x *) < ∞, X can only jump finitely many times on a finite interval [0, T]. In fact, we will estimate λ(·) to be approximately 0 in the vicinity of x *, which is reasonable from a biological point of view, since then a refractory period after a spike is accounted for.
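In discrete time, the thinning construction of μ amounts to emitting a spike in a step of length Δ with probability λ(X t − )Δ. The following sketch simulates the global model (7) along these lines; the exponential form and all parameter values of λ(·) are illustrative placeholders, not the estimates obtained in Section 3:

```python
import numpy as np

# Illustrative parameters: time in ms, voltage in mV.
tau, a, v_i, sigma = 10.0, -55.0, -75.0, 0.3
x_star = -65.0                            # reset value after a spike
lam0, klam = 0.001, 0.3                   # lambda(x) = lam0 * exp(klam*(x+55)), 1/ms

def lam(x):
    return lam0 * np.exp(klam * (x + 55.0))

dt, n = 0.1, 200_000                      # 20 s at 10 kHz
rng = np.random.default_rng(3)
x, spikes = np.empty(n + 1), []
x[0] = a
for i in range(n):
    if rng.random() < lam(x[i]) * dt:     # thinning: spike with prob lambda(x)*dt
        spikes.append(i * dt)
        x[i + 1] = x_star                 # reset right after the spike
        continue
    drift = (a - x[i]) / tau
    diff = sigma * np.sqrt(max(x[i] - v_i, 0.0))
    x[i + 1] = x[i] + drift * dt + diff * rng.normal(0.0, np.sqrt(dt))
```

Because λ is small near x * , the reset itself produces a refractory-like pause, as noted above.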

3 Estimation methods and results

Estimation of the diffusion coefficient  To determine the function σ(·) in model (1), we use a nonparametric estimation method (Florens-Zmirou 1993; Höpfner 2007), which is reviewed in the Appendix. Since this estimator only involves the quadratic variation of the process and the data are sampled at high frequency, the effects of time-inhomogeneity in the drift are negligible. During all on- and off-cycles the squared diffusion coefficient x ↦ σ 2(x) turns out to be approximately a linear function for each trace, so we can perform a linear regression. This happens consistently over all traces and is shown for traces 12 and 14 in Fig. 3. Thus, we conclude that the best description of the diffusion part in model (1), during the on- and off-cycles, is a square-root process, where \(\sigma (x)= \sigma \sqrt{x-V_I}\) and V I is the inhibitory reversal potential. To ensure the Lipschitz condition, σ(x) is set to be bounded away from 0 by a small constant, namely the estimate of σ from the quiescent period. During quiescence the noise is much lower than after stimulation and of nearly the same order of magnitude as the accuracy of the voltage measurement. Therefore, we cannot expect reliable results for such low values; in fact, in this regime x ↦ σ 2(x) can no longer be interpreted as a smooth function. Due to the relatively small noise in the quiescent period, a constant noise level suffices to describe the data. The estimate for σ 2 in the quiescent regime is the median value over the nonparametric estimate of x ↦ σ 2(x).
Fig. 3

The non-parametric estimates of σ 2(·) during the on- and off-cycles and the corresponding linear regression for trace 12 (a) and trace 14 (b). The injected current is indicated by I
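The binned estimator behind Fig. 3 can be sketched as follows: within each voltage bin, the average squared increment divided by the time step approximates the infinitesimal variance σ 2(x). The check against a simulated square-root process with known coefficients, as well as all names and tolerances, are our own illustration, not the paper's code:

```python
import numpy as np

def estimate_sigma2(x, dt, h=0.5, min_visits=200):
    """Bin-wise Florens-Zmirou-type estimate of sigma^2(x) from a sampled path."""
    incr2 = np.diff(x) ** 2 / dt
    edges = np.arange(x.min(), x.max() + h, h)
    idx = np.digitize(x[:-1], edges)
    centers, est = [], []
    for b in range(1, len(edges)):
        mask = idx == b
        if mask.sum() >= min_visits:          # only well-visited bins
            centers.append(edges[b - 1] + h / 2)
            est.append(incr2[mask].mean())
    return np.array(centers), np.array(est)

# Check on simulated data with known sigma^2(x) = 0.09 (x + 75).
rng = np.random.default_rng(5)
tau, a, v_i, sig, dt = 10.0, -60.0, -75.0, 0.3, 0.1
x = np.empty(250_000); x[0] = a
for i in range(len(x) - 1):
    x[i + 1] = (x[i] + (a - x[i]) / tau * dt
                + sig * np.sqrt(max(x[i] - v_i, 0.0)) * rng.normal(0, np.sqrt(dt)))
centers, s2 = estimate_sigma2(x, dt)
slope, intercept = np.polyfit(centers, s2, 1)   # slope should be near 0.09
```

The fitted line's root, −intercept/slope, then plays the role of the estimated inhibitory reversal potential V I .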

To obtain a common estimate of σ 2 of the cell over all traces, we performed a linear regression of σ 2 on x, using the points from the non-parametric estimation for all traces, illustrated in Fig. 3 for traces 12 and 14. The analysis was carried out using a linear mixed model, where a random trace-specific component was introduced that allowed adjusting for the intertrace variations in slope and intercept. Note that all traces are recorded from the same cell, the differences between repetitions being different levels of injected current. Thus, if the random trace-specific component is statistically significant, we conclude that variation in the noise level is influenced by the injected current, as expected since the driving force is modified when changing the membrane potential with respect to the excitatory and inhibitory synaptic reversal potentials. In effect, the random trace-specific component was statistically significant (p < 0.0001).

The estimate for the on- and off-cycles without injected current was \(\hat{\sigma}^2 (x) = 0.071 (x+75.4)\). Thus, the inhibitory reversal potential V I is estimated to be −75.4 mV. The average estimate for the on- and off-cycles over all traces, including those with negative injected current, was \(\hat{\sigma}^2 (x) = 0.047 (x+92.1)\). Note that this is an average estimate for the cell; the estimate varies from trace to trace. A larger negative injected current hyperpolarizes the membrane potential and reduces the amplitude of the noise, both in an approximately linear fashion. Apart from the quiescent regime, where σ 2(·) is estimated and modeled to be a small constant, the final estimates for x ↦ σ 2(x) are plotted in Fig. 4(a).
Fig. 4

(a) The colored lines are the estimated σ 2(·) for each trace during on- and off-cycles. The cyan line is the estimate for trace 12 and the magenta line is the estimate for trace 14. The black curve is the average estimate for all traces, \(\hat{\sigma}^2 (x) = 0.047(x+92.1)\). (b) The colored points are the trace-specific estimates of τ during quiescence, on- and off-cycles, where estimates from the same trace are connected. The cyan curve is trace 12 and the magenta curve is trace 14. The black curve is the final estimate \(\hat{\tau} (x) = 2.5 e^{-0.022 x}\), valid for all traces

Time constant  The nonparametric method requires stationary data. To analyze the time-inhomogeneous input and the varying time constant, we use a different strategy. Since the data within each on- and off-cycle and the quiescent period can be considered approximately stationary, we describe the membrane potential X t within each of these regions and between spikes by a time homogeneous SDE (1) where the drift and diffusion functions β(·) and σ(·) are not allowed to depend on time.

To determine the on- and off-cycle intervals where the process is approximately stationary, we apply the following procedure. First the data are smoothed by a moving average of 2,001 points (radius of 1,000 time steps, corresponding to 100 ms, around every point). Then the local maxima and minima in each on- or off-cycle are found after roughly indicating the intervals. Finally, the interval bounds are chosen as the closest time points to the maximum or minimum where the smoothed trajectory exceeds a 3 mV distance to the corresponding maximum or minimum (Fig. 2). This procedure is applied to all traces.
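The segmentation step can be sketched as follows. The `cycle_bounds` helper and the synthetic check are our own illustration; as in the paper, the rough location of each cycle's extremum is assumed given:

```python
import numpy as np

def moving_average(x, radius=1000):
    """2*radius + 1 point moving average (2,001 points for radius 1,000)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(x, kernel, mode="same")

def cycle_bounds(smoothed, peak_idx, delta=3.0):
    """Closest indices around peak_idx where the smoothed trace first
    deviates by more than `delta` mV from the value at the extremum."""
    ref = smoothed[peak_idx]
    left = peak_idx
    while left > 0 and abs(smoothed[left] - ref) <= delta:
        left -= 1
    right = peak_idx
    while right < len(smoothed) - 1 and abs(smoothed[right] - ref) <= delta:
        right += 1
    return left, right
```

On a synthetic slow oscillation the bounds bracket the extremum symmetrically; on real traces the asymmetry of the cycles makes the two bounds differ.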

The nonparametric estimation reviewed in Appendix requires large sample sizes to work properly. Due to the high frequency sampling it is possible to apply the nonparametric estimation method to the on- and off-cycles and the quiescent period separately and estimate for each regime the corresponding homogeneous drift functions \(\beta_{\text{on}}(\cdot)\), \(\beta_{\text{off}}(\cdot)\), \(\beta_{\text{q}}(\cdot)\). All the estimated functions are approximately linear (Fig. 5), which leads to the diffusion LIF model (2). Hence, we perform a linear regression where the inverse slopes of the fitted lines correspond to the different time constants \(\tau_{\text{on}}\), \(\tau_{\text{off}}\) and \(\tau_{\text{q}}\). Results are reported in Fig. 4(b). Except for one trace the time constant is decreasing from quiescence over off-cycles to on-cycles. Hence, the neuron reacts faster for more depolarized levels of the membrane potential as previously shown (Berg et al. 2008).
Fig. 5

Estimates of the drift coefficient β(·) in model (1) during the on-cycles (a), the off-cycles (b), and quiescence (c) of trace 12. The slope is \(-1/\hat{\tau}\)
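The drift estimation behind Fig. 5 admits the same binned sketch as the diffusion coefficient: the mean increment per unit time within each voltage bin estimates β(x), and minus the inverse of the fitted slope recovers the regime's time constant. The check on simulated data with τ = 10 ms, and all names and tolerances, are our own illustration:

```python
import numpy as np

def estimate_drift(x, dt, h=0.5, min_visits=200):
    """Bin-wise estimate of the drift beta(x) as mean increment per unit time."""
    incr = np.diff(x) / dt
    edges = np.arange(x.min(), x.max() + h, h)
    idx = np.digitize(x[:-1], edges)
    centers, est = [], []
    for b in range(1, len(edges)):
        mask = idx == b
        if mask.sum() >= min_visits:
            centers.append(edges[b - 1] + h / 2)
            est.append(incr[mask].mean())
    return np.array(centers), np.array(est)

# On data simulated from model (2) with tau = 10 ms, the fitted slope
# of beta(x) should recover -1/tau.
rng = np.random.default_rng(11)
tau, a, v_i, sig, dt = 10.0, -60.0, -75.0, 0.3, 0.1
x = np.empty(250_000); x[0] = a
for i in range(len(x) - 1):
    x[i + 1] = (x[i] + (a - x[i]) / tau * dt
                + sig * np.sqrt(max(x[i] - v_i, 0.0)) * rng.normal(0, np.sqrt(dt)))
centers, beta = estimate_drift(x, dt)
slope, _ = np.polyfit(centers, beta, 1)
tau_hat = -1.0 / slope
```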

To estimate the parameters τ * and γ in the function τ(·) given in Eq. 5 of the time-inhomogeneous diffusion LIF model Eq. 4, we performed a linear regression of log(τ) on x, using the points \((\tau_{\text{on}}, \mathbf{E}_{\text{on}}[X])\), \((\tau_{\text{off}},\mathbf{E}_{\text{off}}[X])\), and \((\tau_{\text{q}},\mathbf{E}_{\text{q}}[X])\), where E  ∙ [X] is the estimated mean in the given regime. The analysis was carried out using a linear mixed model, where a random trace-specific component was introduced that allowed adjusting for the intertrace variations in slope and intercept. Likelihood ratio tests were applied to assess statistical significance, and neither the random effect on the slope γ nor that on the intercept was statistically significant (p > 0.1). Thus, as an approximation we assume that the time constant depends only on the level of the membrane potential. Previous investigations have found that the decrease in the time constant is caused by an increase in mean synaptic conductance, whereas an increase in membrane potential fluctuations is caused by an increase in synaptic conductance variance (see e.g., Destexhe and Pare 1999; Kuhn et al. 2004). Thus we find indirectly that both the synaptic conductance mean and variance are empirically linked to the level of the membrane potential during this particular network activity (in agreement with Berg et al. 2007, 2008).

The final estimates were \(\hat\tau^* =2.5\mbox{ ms}\) and \(\hat\gamma = 0.022 \mbox{ mV}^{-1}\), and thus \(\hat\tau(x)=2.5 e^{-0.022x}\), which is the black curve in Fig. 4(b). That gives a time constant of 12.4 ms when the membrane potential is at its resting level of −71.5 mV, which is the level during quiescence for traces 14 and 15 that had no injected current. All other traces had negative injected currents, and thus more hyperpolarized levels during quiescence. When the level is −45 mV, where the neuron has a high firing intensity, the time constant is approximately 6.9 ms.

Spike intensity  Let Y = {Y i } be the entire data set with all traces appended one after the other, such that i runs from 1 to M = 16 (N + 1), where M is the total number of observations from all 16 traces, and N + 1 is the number of observations in each trace, N = 250,000. Thus, Y 1, ..., Y 250,001 are the observations from trace 1, Y 250,002, ..., Y 500,003 are the observations from trace 2, and so forth. Let l ∈ ℕ be the number of observed spikes and s 1 < s 2 < ... < s l be the time points of Y where the spikes were initiated. In the 16 traces l = 364 spikes occurred. Our aim is to estimate the firing intensity function λ(·) of the Poisson spike train μ, defined in Definition 1, which is allowed to depend directly on the voltage level. First we partition the state space in intervals of length h, and study each interval of the form \([x-\frac{h}{2},x+\frac{h}{2}]\), where x runs from some value \(x_{\min}\) to some value \(x_{\max}\) in steps of h. The bounds \(x_{\min}\) and \(x_{\max}\) are chosen such that all l measured values of the membrane potential at the initiation of a spike, \(Y_{s_1}, \ldots, Y_{s_l}\), lie in the interval \([x_{\min},x_{\max}]\). For the data set at hand we chose \(x_{\min} = -60\) mV and \(x_{\max} = -35\) mV. For a given x we then count the number of spikes that were initiated in \([x-\frac{h}{2},x+\frac{h}{2}]\) by the quantity \(\sum_{j=1}^{l}\mathbb{1}_{[x-\frac{h}{2},x+\frac{h}{2}]}(Y_{s_j})\). Recall that \(\mathbb{1}_A(z)\) denotes the indicator function defined above. The time the process spent in \([x-\frac{h}{2},x+\frac{h}{2}]\) is then estimated by counting the number of observations that fell in the interval and multiplying it by the time step between observations, i.e. by the quantity \(\sum_{i=1}^{M}\Delta\mathbb{1}_{[x-\frac{h}{2},x+\frac{h}{2}]}(Y_{i})\), where Δ = 0.1 ms is the time step between observations. Finally, we estimate the spike intensity at x by
$$ \begin{array}{rll} \hat\lambda(x)&:=& \frac{\sum_{j=1}^{l}\mathbb{1}_{[x-\frac{h}{2},x+\frac{h}{2}]}(Y_{s_j})} {\sum_{i=1}^{M}\Delta\mathbb{1}_{[x-\frac{h}{2},x+\frac{h}{2}]}(Y_{i })}, \end{array} $$
which is simply the number of spikes initiated in an h/2-neighborhood of x divided by the time X spent in this neighborhood. For \(x < x_{\min}-\frac{h}{2}\) we set \(\hat\lambda(x) = 0\), since no spikes were emitted when the membrane depolarization was that low, and thus the numerator in Eq. (8) is zero. To improve reliability, we only considered estimates \(\hat\lambda(x)\) for which \([x-\frac{h}{2},x+\frac{h}{2}]\) was visited for at least 20 ms.
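The estimator in Eq. (8) amounts to a histogram over voltage bins. The sketch below (variable names are ours) follows the definitions above, including the 20 ms minimal-occupancy rule:

```python
import numpy as np

def estimate_lambda(Y, spike_idx, delta=0.1, h=1.0,
                    x_min=-60.0, x_max=-35.0, min_time=20.0):
    """Histogram estimator of the state-dependent spike intensity, Eq. (8).

    Y         : membrane-potential samples (mV), all traces appended
    spike_idx : indices into Y where spikes were initiated
    delta     : sampling step (ms); h : bin width (mV)
    min_time  : minimal occupancy (ms) required to report an estimate
    Returns bin centers x and lambda_hat(x) in 1/ms (NaN where the bin
    was visited for less than min_time).
    """
    centers = np.arange(x_min, x_max + h, h)
    lam = np.full(centers.shape, np.nan)
    Ys = Y[spike_idx]                      # potentials at spike initiation
    for k, x in enumerate(centers):
        in_bin = (Y >= x - h / 2) & (Y <= x + h / 2)
        occupancy = delta * in_bin.sum()   # time spent in the bin (ms)
        if occupancy >= min_time:
            n_spikes = ((Ys >= x - h / 2) & (Ys <= x + h / 2)).sum()
            lam[k] = n_spikes / occupancy
    return centers, lam
```

For instance, a trace sitting at -55 mV for 20,000 ms with 20 initiated spikes yields \(\hat\lambda(-55) = 0.001\) ms\(^{-1}\).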

Nonparametric estimators for jump intensity functions do exist in the literature, see e.g. Andersen et al. (1993), Helmers and Manku (2009), O’Sullivan (1993), Patil and Wood (2004) and the references therein. These references mostly consider the Cox model from survival analysis, and some of their assumptions do not fit our requirements. The essential difference is that we consider the intensity λ to be a quite general function of the state of a stochastic process (cf. Definition 1), so that our estimator \(\hat\lambda\) above depends on the number of visits (or local time) of this process.

Figure 6(a) shows superimposed data traces 10 ms before and after each spike, centered at the observed spike maximum. There appears to be a large range of membrane potentials from which spikes can be initiated. To apply the estimator (8) it is necessary to determine where the spikes were initiated, i.e. to find the points s 1,...,s l. These should be time points right before the spike maxima, where the membrane potential belongs to the upturn of the spike but has not yet left the diffusion regime. Figure 6(b) shows the standard deviation at each time point of Fig. 6(a). The increase during the spike upturn is due to the high slope of a spike and the fact that we centered at the observed spike maximum, which is not the true maximum, since that can occur between two observation points. At around 4 ms before the spike maxima the standard deviation starts decreasing, which also agrees with the analysis of the effective synaptic integration time (eSIT) in Berg et al. (2008). Hence, the time points \(s_i=t^{\max}_{i}-4\) ms fulfill our requirements, where \(t^{\max}_{i}\) denotes the observed time of the maximum of the ith spike. With this choice and h = 1 mV, all spikes of the 16 traces were used to estimate λ(·). As an example, over all traces 28 spikes were initiated at a membrane voltage between −55.5 and −54.5 mV, measured 4 ms before the spike peak. Further, the traces spent an overall time of 19,070 ms within this interval, giving a spike intensity estimate of \(\hat\lambda(-55~\text{mV})=28/(19\text{,}070~\text{ms})=0.00147~\text{ms}^{-1}\) at a voltage level of −55 mV. The resulting \(\hat\lambda(\cdot)\) is shown in Fig. 7(a).
Fig. 6

In (a), all 364 spikes of the 16 traces, centered at 0. In (b), the standard deviation at every time point in (a). The dashed line indicates 4 ms before the spike maximum, where the membrane potential becomes less random than before. This can be interpreted as the time when the spike was initiated and its upturn begins

Fig. 7

Estimated intensity function (a); in (b) plotted on a logarithmic scale. The solid cyan curve is the result of the linear regression, estimating \(\hat\lambda(x)=\exp(15.3+0.4x)\)

On a logarithmic scale, \(\hat\lambda(\cdot)\) has a linear shape (Fig. 7(b)). A regression leads to \(\hat\lambda(x)=\exp(15.3+0.4x)\). This is in agreement with the proposed intensity function by Pfister et al. (2006) of \(\lambda (x) = \exp (16.7 + 0.33x)\), and thus the present study supports experimentally the theoretical considerations of that paper.

The spiking frequency is thus estimated to 77 Hz at a membrane potential of −45 mV, giving a mean interspike interval of 12.9 ms. At −71.5 mV the frequency is 0.002 Hz with a mean interspike interval of 479,000 ms, thus in practice the neuron is silent. For an intermediate level of excitation of −55 mV, the frequency is 1.5 Hz, with a mean interspike interval of 684 ms.
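These rates follow from the estimated intensity by converting from ms\(^{-1}\) to Hz. With the rounded coefficients 15.3 and 0.4 the numbers come out slightly different from those quoted above, which are presumably based on the unrounded regression estimates; a sketch:

```python
import math

def lam(x, b0=15.3, b1=0.4):
    """Estimated Poisson firing intensity (1/ms) at membrane potential x (mV)."""
    return math.exp(b0 + b1 * x)

for x in (-45.0, -55.0, -71.5):
    rate_hz = 1000.0 * lam(x)   # 1/ms -> Hz
    mean_isi = 1.0 / lam(x)     # mean interspike interval (ms)
    print(f"x = {x} mV: {rate_hz:.3g} Hz, mean ISI = {mean_isi:.3g} ms")
```

At -45 mV the rounded coefficients give about 67 Hz (quoted: 77 Hz), and at -71.5 mV a rate well below 0.01 Hz, confirming that the neuron is silent in practice at rest.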

An essential advantage of this estimation method is that it is robust to the choice of s 1,...,s l . As long as \(X_{s_i}\) belongs to the diffusion regime, the estimated intensity adjusts the model to the observed spiking behavior.

Input function  Finally we treat the estimation of g(t) at the observed time points \(t_i = \Delta i\), i = 0, 1,..., N, where N = 250,000 (each trace has N + 1 samples). We define \(X_i := X_{t_i}\) and note that, ignoring spikes, we derive from Eq. (3) the approximation
$$\mathbf{E}[X_{i+1}|X_{i}]\approx X_{i} e^{-\Delta/\tau(X_{i})} + (g(t_{i}) + a)(1-e^{-\Delta/\tau(X_{i})}).$$
The approximation would be exact if g(t) and \(\tau(X_t)\) were constant between the observation times \(t_i\) and \(t_{i+1}\). This is justified because \(\Delta = t_{i+1} - t_i = 0.1\) ms is small compared to the time constants of the system. We then replace \(\mathbf{E}[X_i|X_{i-1}]\) by \(X_i\), and τ(·) and a by their estimates \(\hat\tau(\cdot)\) and \(\hat a\) obtained above. An estimator of the input function is derived by solving for \(g(t_i)\),
$$ \check{g}(t_{i}):=\frac{X_{i+1}- X_{i} e^{-\Delta/\hat{\tau}(X_i)}} {(1-e^{-\Delta/\hat{\tau}(X_i)})} - \hat{a}, \quad i=0,1,\dots,N. $$
From the substitution of \(\mathbf{E}[X_i|X_{i-1}]\) by \(X_i\), \(\left\{\check{g}(t_{i})\right\}\) is an extremely noisy sequence. Nevertheless, since Δ is small, the differences \(\left(\mathbf{E}[X_i|X_{i-1}]-X_i\right)\), i = 1,...,N, are approximately independent martingale increments, normally distributed around 0. To balance out these errors we smooth \(\check{g}(\cdot)\) with a smoothing spline \(\hat{g}(\cdot)\), which is our final estimate of g(·). In Fig. 8(a), \(\hat{g}(\cdot)\) is shown in black for trace 14.
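The raw estimator \(\check g(t_i)\) reduces to one vectorized expression. The sketch below assumes spikes have already been removed from the trace; the final \(\hat g(\cdot)\) would be obtained by passing the result through a smoothing spline (e.g. scipy.interpolate.UnivariateSpline):

```python
import numpy as np

def estimate_input(X, tau_hat, a_hat, delta=0.1):
    """Raw input estimator g_check(t_i), obtained by solving the one-step
    conditional-mean approximation for g (spikes assumed removed).

    X       : membrane-potential samples of one trace (mV)
    tau_hat : callable, estimated time constant (ms) as a function of x
    a_hat   : estimated offset parameter (mV)
    Returns the noisy sequence g_check(t_0), ..., g_check(t_{N-1}).
    """
    X = np.asarray(X, dtype=float)
    decay = np.exp(-delta / tau_hat(X[:-1]))          # e^{-Delta/tau(X_i)}
    return (X[1:] - X[:-1] * decay) / (1.0 - decay) - a_hat
```

If the data exactly follow the one-step recursion with a constant g, the estimator recovers g without error; on real data the sequence is noisy and the smoothing spline is essential.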
Fig. 8

(a) The black line is the estimated input function \(\hat g (\cdot)\) for trace 14, the cyan line is the squared smoothed and rescaled measured network activity from (b), and the magenta line is the simulated output from a network of 1,000 neurons, each receiving an input proportional to the estimated input \(\hat g (\cdot)\), where the proportionality factor is drawn from a uniform distribution on [0.2,1]. (b) Network activity recorded from the hip-flexor nerve during the recording of trace 14. (c) The number of spikes elicited from simulations of 1,000 neurons against the activation level of the input, i.e. the proportionality factor drawn from a uniform distribution on [0.2,1]

4 Input–output relationship

In the previous section, we estimated the unmeasured network activity, represented by g(·) in model (4), resulting from the mechanical cutaneous stimulation of the hind-limb pocket skin of the turtle. This network activity is the input to the entire pool of spinal motoneurons, and was estimated from the intracellularly recorded membrane fluctuations of a single motoneuron that contributes to the output nerve activity recorded in the electro-neurogram. We estimated g(·) in Eq. (4) without using the data from the electro-neurogram recorded from the hip-flexor nerve, shown for one trace in Fig. 8(b). The network activity from the hip-flexor nerve is the output from the spinal cord, and represents the activity of on the order of 100 motoneurons. This quantity is scale free, since the strength of the signal depends on the exact location where it is measured. Moreover, only activity from spikes in the network is detectable. Thus, when the input is too weak to cause spikes, the electro-neurogram measures no activity, even if the network does receive some input. A natural assumption is that the hip-flexor nerve activity is a threshold version of the input, where more neurons become activated the higher the input, depending on their distance to the emitted signals. To test this hypothesis we compared the estimated input to the measured hip-flexor nerve activity. For the comparison, the observed network activity was squared, smoothed in the same way as \(\hat g(\cdot)\) and scaled to the same order of magnitude as \(\hat g(\cdot)\). The occasional negative values are an artifact of the smoothing spline procedure. The result is the cyan curve shown for one trace in Fig. 8(a). We also tried the absolute value instead of squaring, but it made practically no difference (results not shown).

To test whether the output could be a threshold version of the input, we simulated the spike times of 1,000 neurons receiving the estimated input \(\hat g(\cdot)\). To incorporate varying distances from the input, a proportionality factor drawn from a uniform distribution on [0.2,1] was multiplied onto the input. This approximation is of course crude and does not take the specific network structure into account, about which we have no information. At each time point, the spikes from the pool of neurons were added. Finally, this signal was smoothed and scaled like \(\hat g(\cdot)\). The result is the magenta curve shown for one trace in Fig. 8(a).
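A deliberately simplified sketch of such a pool simulation: each neuron is reduced to an inhomogeneous Poisson spiker whose intensity follows the estimated exponential dependence, evaluated at the resting level shifted by the scaled input. The full membrane dynamics are omitted, and the baseline level and the way the input enters the intensity are our placeholder assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pool(g_hat, delta=0.1, n_neurons=1000, lo=0.2, hi=1.0,
                  b0=15.3, b1=0.4, v_rest=-71.5):
    """Hypothetical pool simulation: each neuron receives the estimated
    input scaled by a factor drawn uniformly from [lo, hi] and fires as
    an inhomogeneous Poisson process with the estimated exponential
    intensity.  Returns the summed spike count per time bin."""
    g_hat = np.asarray(g_hat, dtype=float)
    counts = np.zeros(g_hat.size)
    for _ in range(n_neurons):
        c = rng.uniform(lo, hi)                        # distance-dependent scaling
        lam = np.exp(b0 + b1 * (v_rest + c * g_hat))   # intensity in 1/ms
        counts += rng.random(g_hat.size) < lam * delta  # Bernoulli thinning
    return counts
```

Summing, smoothing and rescaling the counts then gives a signal comparable to the measured electro-neurogram, with no output during weak input, as in the threshold hypothesis.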

The simulated output from a pool of neurons, driven by the estimated input, reproduces the main characteristics of the measured output, such as zero values during low input, peaks during high input, and large variation between the amplitudes of waves. We conclude that the synaptic input can be estimated from the fluctuations in the membrane potential recorded in just one of the motoneurons.

In Fig. 8(c) the number of spikes emitted by each simulated neuron is shown as a function of the activation level of the input, i.e. the proportionality factor drawn from a uniform distribution on [0.2,1]. The number of spikes appears to grow exponentially with the input level.

5 Model validation

To see how the different features of the model work, and to compare it to the experimental data, we simulate the jump diffusion (7) with the corresponding estimated quantities. The trace-specific quantities are \(\hat a\), \(\hat\sigma(\cdot)\) and \(\hat g(\cdot)\). The quantities estimated over all 16 traces are the membrane time constant \(\hat\tau(x)=2.5 e^{-0.022x}\), the Poissonian firing intensity \(\hat\lambda(x)=\exp(15.3+0.4x)\) and the reset value after a spike, x* = −68.2 mV, which is the mean value of the membrane potential 1.2 ms after the spike maximum (Fig. 6(a)). The only value in the simulation taken directly from the original data is the starting value X 0 = x 0. Figure 9 shows the simulations corresponding to traces 12 and 14.
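The simulation can be sketched with an Euler–Maruyama scheme plus Poisson thinning for the spikes. The exact form of the square-root diffusion coefficient is not restated in this section, so the sketch below assumes a hypothetical form \(\sigma\sqrt{x - v_{lb}}\) with placeholder values for σ, a and the lower bound \(v_{lb}\); the time constant, firing intensity and reset value are the estimates given above:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_jump_diffusion(g, x0, a=-70.0, sigma=0.5, v_lb=-80.0,
                            x_reset=-68.2, delta=0.1,
                            tau=lambda x: 2.5 * np.exp(-0.022 * x),
                            lam=lambda x: np.exp(15.3 + 0.4 * x)):
    """Euler-Maruyama sketch of the fitted jump diffusion, model (7).

    sigma, a and v_lb are placeholder values (the square-root diffusion
    is assumed to have the hypothetical form sigma*sqrt(x - v_lb)).
    Spikes are drawn by thinning the state-dependent Poisson intensity;
    after a spike the potential is reset to x_reset = -68.2 mV.
    """
    x = np.empty(len(g))
    x[0] = x0
    spikes = []
    for i in range(len(g) - 1):
        drift = (g[i] + a - x[i]) / tau(x[i])
        diff = sigma * np.sqrt(max(x[i] - v_lb, 0.0))
        x[i + 1] = (x[i] + drift * delta
                    + diff * np.sqrt(delta) * rng.standard_normal())
        if rng.random() < lam(x[i + 1]) * delta:    # Poissonian firing
            spikes.append(i + 1)
            x[i + 1] = x_reset                       # post-spike reset
    return x, spikes
```

With a hyperpolarizing input the trajectory stays in the subthreshold regime and emits essentially no spikes, as expected from the exponential intensity.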
Fig. 9

(a) Simulation with the estimates of trace 12. The simulation should be compared to the experimental data depicted in Fig. 1(c). (b) Simulation with the estimates of trace 14. The simulation should be compared to the experimental data depicted in Fig. 1(e)

Figure 10 shows a close-up of only 2 s of both the simulated and the measured trace 12, which reveals a remarkable correspondence between simulated and experimental subthreshold fluctuations. In Fig. 11 the corresponding illustration is shown for trace 14, with a further close-up to 0.5 s, to study how the model behaves in the spiking regime. The spiking characteristics are also captured well by the model. The only noticeable difference is that right after a spike the observed membrane potential depolarizes faster than the simulated one. This is because the increased conductance caused by the spike is not included in the model, and thus the model is not suited to describing the behavior directly after a spike. The membrane time constant is estimated from fluctuations at least 10 ms away from any spike, and is thus overestimated during the immediate after-spike dynamics. This problem could be solved by incorporating the increased conductance, and thus a lower time constant, after the spike into the model. For simplicity we omit this extra feature. Note that the model does not pretend to reproduce the exact timing of the spikes, but rather their statistics; the numbers of spikes should therefore more or less coincide.
Fig. 10

A close-up of trace 12 and the corresponding simulated trace. To distinguish the two traces one is vertically shifted by 20 mV

Fig. 11

A close-up of trace 14 and the corresponding simulated trace. Spikes were truncated at −20 mV. To distinguish the two traces one is vertically shifted by 50 mV

To justify the use of model (7) instead of a simpler OU-type model with constant τ and σ, we chose trace 14 and simulated from both models. We compared the infinitesimal behavior of the trajectories informally by inspection of plots, and also compared the number of spikes generated by each model. The OU model was fitted to the data by maximum likelihood (ML). This requires stationarity, and hence we applied the ML estimator to the quiescent period. The results were \(\hat \tau = 8.28\) ms and \(\hat \sigma = 0.155\) mV/\(\sqrt{\mbox{ms}}\).
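For a stationary OU process observed at a fixed step, the ML estimates are available in closed form through the exact AR(1) discretization. A sketch using conditional least squares, which coincides with Gaussian ML up to edge terms:

```python
import numpy as np

def ou_mle(X, delta=0.1):
    """Estimate (tau, sigma, mu) for a stationary OU process observed at
    step delta, via its exact AR(1) discretization
        X_{i+1} = mu + rho (X_i - mu) + eps,   rho = exp(-delta/tau),
        eps ~ N(0, sigma^2 tau (1 - rho^2) / 2).
    """
    X = np.asarray(X, dtype=float)
    x0, x1 = X[:-1], X[1:]
    rho = np.cov(x0, x1)[0, 1] / np.var(x0)        # AR(1) regression slope
    mu = (x1.mean() - rho * x0.mean()) / (1.0 - rho)
    tau = -delta / np.log(rho)
    resid = x1 - mu - rho * (x0 - mu)              # one-step innovations
    sigma2 = 2.0 * resid.var() / (tau * (1.0 - rho ** 2))
    return tau, np.sqrt(sigma2), mu
```

Applied to data simulated with the quoted estimates (τ = 8.28 ms, σ = 0.155 mV/√ms), the procedure recovers the parameters closely.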

We simulated 1,000 trajectories from the OU model and from model (7), using for both the same spiking mechanism and the same input function. As expected, during on- and off-cycles, where σ-values larger than one were estimated (cf. Fig. 4(a)), the constant value σ = 0.155 mV/\(\sqrt{\mbox{ms}}\) is too low to describe the observed infinitesimal behavior in the data (not shown), whereas model (7) reproduces the data well, judged by visual inspection, see Figs. 10 and 11.

The numbers of spikes produced by the two models are visualized in the two overlaid histograms in Fig. 12. The left histogram shows the number of spikes for the OU model, and the right histogram the number of spikes of each trajectory of model (7). The vertical dashed line marks the number of spikes of trace 14, which is 68. Since [46,67] is the 99%-interval of the distribution for the OU model, we reject at the 1%-level the hypothesis that the OU model produces 68 spikes as in trace 14. By contrast, the 80%-interval of the distribution for model (7) is [55, 68], and hence we accept the corresponding hypothesis (p > 0.2). If a higher constant noise level is used, the spiking statistics of the two models are not statistically different, though the OU model with a constant high noise level spikes slightly more (results not shown). In that case, however, the subthreshold fluctuations vary much more than in the observed data.
Fig. 12

Histograms of the number of spikes produced by simulations from model (7) (right, cyan) and from an OU model (left, magenta), with parameters estimated from trace 14. The number of spikes produced by trace 14 was 68, indicated by the dashed vertical line

6 Discussion

In the present study, we have analyzed membrane potential fluctuations from a motoneuron embedded in a spinal network that is performing a real motor function (scratching) (Figs. 1 and 2). This activity pattern is non-stationary, and since most neuronal activity is non-stationary, we suggest our analytical approach for the investigation of neuronal activity in general.

We find that the non-stationary membrane potential fluctuations observed during network activity are poorly described by the classical OU process, in spite of its popularity in the literature. Instead, we suggest a slightly more complicated stochastic model, the square-root model, Eq. (4), where the variance is allowed to depend on the depolarization. This model appears naturally when an inhibitory reversal potential is introduced into the OU model (Lansky and Lanska 1987). Moreover, to characterize the experimental data we extend the model to a time-inhomogeneous jump diffusion model, Eq. (7), where the jumps represent spikes. This is a descriptive model, though, and since the level of the membrane potential and the input are highly correlated and cannot be distinguished statistically, we make no claim that the membrane potential X t causes the changes in variance. The synaptic drive g(t) is most probably the main cause, and could be substituted into the diffusion coefficient without changing the behavior of the model.

Few studies have compared mathematical models to experimental data, and often some of the parameters have been assumed known, with only part of the unknown quantities estimated from the study; the remaining parameters have then been taken from the literature on similar studies or otherwise determined. In the present paper all parameters and the synaptic input have been estimated directly from the experimental data (Figs. 3, 4 and 5). This is possible because the model is still simple, ignoring many biological mechanisms. Despite its simplicity, the model shows a remarkable resemblance to data.

The parametric specification of the model has also been derived from the observed data. We find that the variance of the subthreshold fluctuations is proportional to the depolarization, and that locally the drift is linear. The membrane time constant is then given by the inverse of the slope in the linear drift (Fig. 5). Nevertheless, globally the drift is not linear because the membrane time constant decreases with increasing depolarization since it is inversely related to the conductance. We find that an exponential decrease of the membrane time constant as a function of the depolarization agrees with data.

Nonlinear integrate-and-fire neuron models are not new, see e.g. Badel et al. (2008), Fourcaud-Trocme and Brunel (2005), Fourcaud-Trocme et al. (2003), Richardson (2007) and references therein. They are given by model (1) with the drift
$$ \begin{array}{rll} \beta (x) &=& \frac{1}{\tau} \left ( a - x + \psi (x) \right ) \end{array} $$
where ψ(x) is a non-linear term, which models the spike-generating current and dominates the dynamics for large depolarizations. It thus extends the LIF model to include the initiation of spikes without imposing a threshold. In the exponential integrate-and-fire model \(\psi(x) = \rho e^{(x-x_T)/\rho}\), where x T is the potential level at which the exponential term starts dominating, and is thus a substitute for the standard threshold in LIF models. The sharpness of the spike is modeled by ρ. The quadratic integrate-and-fire model is given by \(\psi(x) = \rho \left({(x-x_T)/\rho}\right)^2\), where the parameters have a similar interpretation as in the exponential model. The aim of our study has been different: only the subthreshold dynamics, with no influence from spikes, have entered the specification of the model of the membrane fluctuations.
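For concreteness, the two spike-generating terms can be written out directly; the parameter values below are arbitrary placeholders, not estimates from this study:

```python
import numpy as np

def psi_exp(x, rho=2.0, x_T=-50.0):
    """Exponential integrate-and-fire spike-generating term (mV)."""
    return rho * np.exp((x - x_T) / rho)

def psi_quad(x, rho=2.0, x_T=-50.0):
    """Quadratic integrate-and-fire spike-generating term (mV)."""
    return rho * ((x - x_T) / rho) ** 2
```

Both terms are negligible well below x_T and grow rapidly above it, which is what replaces the hard threshold of the standard LIF model.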

To estimate the time constant of the cell, i.e. the common function of the membrane potential over all traces, and the trace-specific diffusion coefficients, we applied a mixed model, common in biomedical research, where repeated measurements are taken over a series of experimental units, corresponding in our study to the sample traces. This is reasonable when it can be assumed that responses follow the same model form over repetitions of the experiment, but model parameters vary randomly among repeated experiments. In this way the total variation of the quantity of interest is split into two components, one corresponding to the variation within each trace (intra-variation) and one corresponding to the variation from trace to trace (inter-variation). This increases the statistical power compared to separate estimation on each trace, and the average estimate for the cell is improved in the sense of a decreased variance of the estimator. We found that the random effects of the time constant function were not statistically significant, so the model could be reduced to one common function for the cell. By contrast, the random effects of the diffusion coefficients remained significant.

When studying intracellularly recorded traces of membrane fluctuations, the exact moment of spike initiation is ambiguous. We therefore proposed a model for the firing mechanism in which the instantaneous probability of firing depends on the depolarization, by introducing a Poissonian firing intensity. The advantage is that, due to the characteristics of the Poisson distribution, the intensity directly provides the spike frequency and its variability. The analysis showed an empirically exponential increase of the spike frequency with the membrane potential. Other studies have reported an x 2 dependency on the mean membrane potential (Anderson et al. 2000; Carandini and Heeger 1994; Gabbiani et al. 2002; Hansel and van Vreeswijk 2002), which is in reasonable agreement, since these two functions have a similar shape.

Previous statistical analysis of fluctuations in membrane potentials and variations in the membrane time constant during high intensity network activity revealed that spike timing was well described by a stochastic process with a memory shorter than 10 ms (Berg et al. 2008). Here we show that the experimentally observed fluctuations in the membrane potential, the rhythmic waves of depolarization and the spike pattern during network activity are features captured by our present model. This strongly supports the idea that spiking during network activity is driven by depolarizing transients produced by the concurrent inhibitory and excitatory synaptic input (Berg et al. 2007, 2008). In addition we find that the spiking activity in the entire pool of hip flexor motoneurons can be approximately predicted from the fluctuations in membrane potential recorded from a single member of the pool. This shows that subthreshold non-linear intrinsic membrane properties of the motoneurons make no detectable contribution to spike timing. More importantly, it also suggests that the synaptic input to each individual motoneuron can be described as a function that predicts the global motor behavior of the motor pool. To our knowledge this is the first direct evidence for population coding in the synaptic input to motoneurons (Pouget et al. 2000).



References

1. Alaburda, A. (2003). Metabotropic modulation of motoneurons by scratch-like spinal network activity. The Journal of Neuroscience, 24, 8625–8629.
2. Alaburda, A., Russo, R., MacAulay, N., & Hounsgaard, J. (2005). Periodic high-conductance states in spinal motoneurons during scratch-like network activity in adult turtles. The Journal of Neuroscience, 25, 6316–6321.
3. Andersen, P. K., Borgan, Ø., Gill, R. D., & Keiding, N. (1993). Statistical models based on counting processes. Springer.
4. Anderson, J., Lampl, I., Reichova, I., Carandini, M., & Ferster, D. (2000). Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nature Neuroscience, 3, 617–621.
5. Badel, L., Lefort, S., Berger, T. K., Petersen, C. C. H., Gerstner, W., & Richardson, M. J. E. (2008). Extracting non-linear integrate-and-fire models from experimental data using dynamic I–V curves. Biological Cybernetics, 99(4), 361–370.
6. Barrett, J. N. (1975). Motoneuron dendrites: Role in synaptic integration. Federation Proceedings, 34, 1398–1407.
7. Berg, R. W., Alaburda, A., & Hounsgaard, J. (2007). Balanced inhibition and excitation drive spike activity in spinal half-centers. Science, 315, 390–393.
8. Berg, R. W., Ditlevsen, S., & Hounsgaard, J. (2008). Intense synaptic activity enhances temporal resolution in spinal motoneurons. PLoS ONE, 3, e3218.
9. Berkowitz, A. (2007). Spinal interneurons that are selectively activated during fictive flexion reflex. The Journal of Neuroscience, 27(17), 4634–4641.
10. Bernander, O., Douglas, R. J., Martin, K. A., & Koch, C. (1991). Synaptic background activity influences spatiotemporal integration in single pyramidal cells. PNAS, 88(24), 11569–11573.
11. Bibbona, E., Lansky, P., & Sirovich, R. (2010). Estimating input parameters from intracellular recordings in the Feller neuronal model. Physical Review E, 81, 031916.
12. Burkitt, A. N. (2006a). A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics, 95, 1–19.
13. Burkitt, A. N. (2006b). A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. Biological Cybernetics, 95, 97–112.
14. Carandini, M., & Heeger, D. J. (1994). Summation and division by neurons in primate visual cortex. Science, 264, 1333–1336.
15. Clopath, C., Jolivet, R., Rauch, A., Lüscher, H.-R., & Gerstner, W. (2007). Predicting neuronal activity with simple models of the threshold type: Adaptive exponential integrate-and-fire model with two compartments. Neurocomputing, 70, 1668–1673.
16. Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. MIT Press.
17. Destexhe, A., & Pare, D. (1999). Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. Journal of Neurophysiology, 81, 1531–1547.
18. Destexhe, A., Rudolph, M., & Pare, D. (2003). The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience, 4, 739–751.
19. Ditlevsen, S., & Ditlevsen, O. (2008). Parameter estimation from observations of first-passage times of the Ornstein–Uhlenbeck process and the Feller process. Probabilistic Engineering Mechanics, 23, 170–179.
20. Ditlevsen, S., & Lansky, P. (2005). Estimation of the input parameters in the Ornstein–Uhlenbeck neuronal model. Physical Review E, 71, 011907.
21. Ditlevsen, S., & Lansky, P. (2006). Estimation of the input parameters in the Feller neuronal model. Physical Review E, 73, 061910.
22. Ditlevsen, S., & Lansky, P. (2007). Parameters of stochastic diffusion processes estimated from observations of first hitting-times: Application to the leaky integrate-and-fire neuronal model. Physical Review E, 76, 041906.
23. Ditlevsen, S., & Lansky, P. (2008). Comparison of statistical methods for estimation of the input parameters in the Ornstein–Uhlenbeck neuronal model from first-passage times data. In L. M. Ricciardi, A. Buonocore, & E. Pirozzi (Eds.), Collective dynamics: Topics on competition and cooperation in the biosciences (Vol. CP1028). American Institute of Physics Proceedings Series.
24. Florens-Zmirou, D. (1993). On estimating the diffusion coefficient from discrete observations. Journal of Applied Probability, 30(4), 790–804.
25. Fourcaud-Trocme, N., & Brunel, N. (2005). Dynamics of the instantaneous firing rate in response to changes in input statistics. Journal of Computational Neuroscience, 18(3), 311–321.
26. Fourcaud-Trocme, N., Hansel, D., van Vreeswijk, C., & Brunel, N. (2003). How spike generation mechanisms determine the neuronal response to fluctuating inputs. The Journal of Neuroscience, 23(37), 11628–11640.
27. Gabbiani, F., Krapp, H. G., Koch, C., & Laurent, G. (2002). Multiplicative computation in a visual neuron sensitive to looming. Nature, 420, 320–324.
28. Gerstner, W., & Kistler, W. M. (2002). Spiking neuron models. Cambridge University Press.
29. Giraudo, M. T., & Sacerdote, L. (1997). Jump-diffusion processes as models for neuronal activity. Biosystems, 40, 75–82.
30. Hansel, D., & van Vreeswijk, C. (2002). How noise contributes to contrast invariance of orientation tuning in cat visual cortex. The Journal of Neuroscience, 22, 5118–5128.
31. Helmers, R., & Manku, I. W. (2009). Estimating the intensity of a cyclic Poisson process in the presence of linear trend. Annals of the Institute of Statistical Mathematics, 61(3), 599–628.
32. Höpfner, R. (2007). On a set of data for the membrane potential in a neuron. Mathematical Biosciences, 207(2), 275–301.
33. Höpfner, R., & Brodda, K. (2006). A stochastic model and a functional central limit theorem for information processing in large systems of neurons. Journal of Mathematical Biology, 52, 439–457.
34. Huys, Q. J. M., Ahrens, M. B., & Paninski, L. (2006). Efficient estimation of detailed single-neuron models. Journal of Neurophysiology, 96, 872–890.
35. Inoue, J., Sato, S., & Ricciardi, L. M. (1995). On the parameter estimation for diffusion models of single neurons activity. Biological Cybernetics, 73, 209–221.
36. Jacod, J., & Shiryaev, A. N. (2003). Limit theorems for stochastic processes. Springer.
37. Jahn, P. (2009). Statistical problems related to excitation threshold and reset value of membrane potentials. Ph.D. thesis. hebis:77-19399.
38. Jolivet, R., Lewis, T. J., & Gerstner, W. (2004). Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. Journal of Neurophysiology, 92(2), 959–976.
39. Jolivet, R., Rauch, A., Lüscher, H.-R., & Gerstner, W. (2006). Predicting spike timing of neocortical pyramidal neurons by simple threshold models. Journal of Computational Neuroscience, 21, 35–49.
40. Kuhn, A., Aertsen, A., & Rotter, S. (2004). Neuronal integration of synaptic input in the fluctuation-driven regime. The Journal of Neuroscience, 24(10), 2345–2356.
41. La Camera, G., Rauch, A., Lüscher, H.-R., Senn, W., & Fusi, S. (2004). Minimal models of adapted neuronal response to in vivo-like input currents. Neural Computation, 16, 2101–2124.
42. Lanska, V., & Lansky, P. (1998). Input parameters in a one-dimensional neuronal model with reversal potentials. Biosystems, 48, 123–129.
43. Lansky, P. (1983). Inference for diffusion models of neuronal activity. Mathematical Biosciences, 67, 247–260.
44. Lansky, P., & Ditlevsen, S. (2008). A review of the methods for signal estimation in stochastic diffusion leaky integrate-and-fire neuronal models. Biological Cybernetics, 99, 253–262.
45. Lansky, P., & Lanska, V. (1987). Diffusion approximations of the neuronal model with synaptic reversal potentials. Biological Cybernetics, 56, 19–26.
46. Lansky, P., Sacerdote, L., & Tomasetti, F. (1995). On the comparison of Feller and Ornstein–Uhlenbeck models for neural activity. Biological Cybernetics, 73, 457–465.
47. Lansky, P., Sanda, P., & He, J. (2006). The parameters of the stochastic leaky integrate-and-fire neuronal model. Journal of Computational Neuroscience, 21, 211–223.
48. Lansky, P., Sanda, P., & He, J. (2009). Effect of stimulation on the input parameters of stochastic leaky integrate-and-fire neuronal model. Journal of Physiology-Paris, 104(3–4), 160–166.
49. Mullowney, P., & Iyengar, S. (2008). Parameter estimation for a leaky integrate-and-fire neuronal model from ISI data. Journal of Computational Neuroscience, 24, 179–194.
50. O’Sullivan, F. (1993). Nonparametric estimation in the Cox model. Annals of Statistics, 21(1), 124–145.
51. Paninski, L., Pillow, J., & Simoncelli, E. (2005). Comparing integrate-and-fire models estimated using intracellular and extracellular data. Neurocomputing, 65–66, 379–385.
52. Paninski, L., Pillow, J. W., & Simoncelli, E. P. (2004). Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Computation, 16, 2533–2561.
53. Patil, P. N., & Wood, A. T. A. (2004). Counting process intensity estimation by orthogonal wavelet methods. Bernoulli, 10(1), 1–24.
54. Pfister, J., Toyoizumi, T., Barber, D., & Gerstner, W. (2006). Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6), 1318–1348.
55. Picchini, U., Ditlevsen, S., De Gaetano, A., & Lansky, P. (2008). Parameters of the diffusion leaky integrate-and-fire neuronal model for a slowly fluctuating signal. Neural Computation, 20, 2696–2714.
56. Piwkowska, Z., Pospischil, M., Brette, R., Sliwa, J., Rudolph-Lilith, M., Bal, T., et al. (2008). Characterizing synaptic conductance fluctuations in cortical neurons and their influence on spike generation. Journal of Neuroscience Methods, 169(2), 302–322.
57. Pouget, A., Dayan, P., & Zemel, R. (2000). Information processing with population codes. Nature Reviews Neuroscience, 1(2), 125–132.
58. Rauch, A., La Camera, G., Lüscher, H.-R., Senn, W., & Fusi, S. (2003). Neocortical pyramidal cells respond as integrate-and-fire neurons to in vivo-like input currents. Journal of Neurophysiology, 90, 1598–1612.
59. Ricciardi, L. M. (1977). Diffusion processes and related topics in biology. Berlin: Springer.
60. Richardson, M. J. E. (2007). Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Physical Review E, 76(2), 021919.
61. Robertson, G. A., & Stein, P. S. (1988). Synaptic control of hindlimb motoneurones during three forms of the fictive scratch reflex in the turtle. Journal of Physiology, 404(1), 101–128.
62. Shinomoto, S., Sakai, Y., & Funahashi, S. (1999). The Ornstein–Uhlenbeck process does not reproduce spiking statistics of neurons in prefrontal cortex. Neural Computation, 11, 935–951.
  63. Tuckwell, H. C. (1988). Introduction to theoretical neurobiology, Vol.2: Nonlinear and stochastic theories. Cambridge: Cambridge University Press.Google Scholar
  64. Zhang, X., You, G., Chen, T., & Feng, J. (2009). Maximum likelihood decoding of neuronal inputs from an interspike interval distribution. Neural Computation, 21(11), 3079–3105.PubMedCrossRefGoogle Scholar

Copyright information

© The Author(s) 2011

Authors and Affiliations

  • Patrick Jahn (1)
  • Rune W. Berg (2)
  • Jørn Hounsgaard (2)
  • Susanne Ditlevsen (1)

  1. Department of Mathematical Sciences, University of Copenhagen, Copenhagen, Denmark
  2. Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
