Firing-rate models for neurons with a broad repertoire of spiking behaviors
Abstract
Capturing the response behavior of spiking neuron models with rate-based models facilitates the investigation of neuronal networks using powerful methods for rate-based network dynamics. To this end, we investigate the responses of two widely used neuron model types, the Izhikevich and augmented multi-adaptive threshold (AMAT) models, to a range of spiking inputs, from step responses to natural spike data. We find (i) that linear-nonlinear firing-rate models fitted to test data can be used to describe the firing-rate responses of AMAT and Izhikevich spiking neuron models in many cases; (ii) that firing-rate responses are generally too complex to be captured by first-order low-pass filters but require band-pass filters instead; (iii) that linear-nonlinear models capture the response of AMAT models better than that of Izhikevich models; (iv) that the wide range of response types evoked by current-injection experiments collapses to a few response types when neurons are driven by stationary or sinusoidally modulated Poisson input; and (v) that AMAT and Izhikevich models show different responses to spike input despite identical responses to current injections. Together, these findings suggest that rate-based models of network dynamics may capture a wider range of neuronal response properties by incorporating second-order band-pass filters fitted to responses of spiking model neurons. Such models may help bring rate-based network modeling closer to the reality of biological neuronal networks.
Keywords
Rate model · Linear-nonlinear model · Izhikevich model · AMAT model

1 Introduction
The simulation of large networks of spiking neurons on the scale of cortical columns or even whole areas of the cortex has become feasible due to advances in computer technology and simulator software (Helias et al. 2012; Kunkel et al. 2014). In order to relate simulation results to experimental findings, it is important to employ neuron models that accurately capture actual neuron dynamics in response to realistic stimuli. Dynamical models that reproduce the responses of individual neurons to injected currents go back to the seminal work by Hodgkin and Huxley (1952). Their conductance-based model quantitatively described the action potential initiation and propagation in the squid giant axon in response to depolarizing currents and spawned many variants and simplifications that have been analyzed and used in computational neuroscience ever since. Examples are the FitzHugh model (FitzHugh 1961) and the Morris-Lecar model (Morris and Lecar 1981). On the more abstract side of neuron modeling, Lapicque’s neuron model (Lapicque 1907), widely known as the leaky integrate-and-fire (IAF) neuron, models the membrane potential V(t) as a passive current integrator with leak current, emitting a spike whenever V(t) reaches a threshold value 𝜃, followed by a membrane potential reset (Tuckwell 1988; Burkitt 2006a, 2006b).
These simple integrate-and-fire neuron models have particular appeal to computational neuroscientists because they capture the essential function of a neuron, while still being amenable to mathematical analysis in many input and network scenarios.
In a network context, however, neurons usually receive noisy input currents. Moreover, they are known to respond highly reliably to repeated injections of the same frozen-noise current, while responses vary widely across trials when neurons receive identical direct current (Mainen and Sejnowski 1996). Neurons thus respond stereotypically to certain temporal input features rather than to mere current amplitude.
Motivated by such findings, Gerstner and colleagues showed that nonlinear IAF models, including the spike-response model and the adaptive exponential IAF model, can successfully be mapped to experimental spike data in a noisy input regime and even have good spike-time prediction power (Brette and Gerstner 2005; Jolivet et al. 2006). Yet, the nonlinearity and the number of parameters in general make fitting a difficult task. The International Competition on Quantitative Neuron Modeling has challenged modelers to fit their neuron models to a set of spike data recorded from neurons stimulated with noisy input currents (Jolivet et al. 2008). The resulting neuron models were tested with a noisy input current that was not included in the training set, and the predicted spike times were compared to those of the actually emitted spikes. The multi-timescale adaptive threshold model (MAT model) introduced by Kobayashi et al. (2009), a surprisingly simple model with linear subthreshold dynamics, solved this task best. Despite its simplicity, the MAT model can generate type-I and type-II excitability, as well as burst firing. Moreover, an extended version of the MAT model, the augmented MAT (AMAT) model, which incorporates threshold dynamics that depend on the membrane-potential history, is able to reproduce all twenty spike response patterns described for the Izhikevich model (Yamauchi et al. 2011). Because of its few parameters and simple dynamics, the AMAT model has low computational cost while providing a large dynamical repertoire, and is thus highly attractive for large-scale network simulations.
In an actual neuronal network, neurons typically integrate spikes from thousands of presynaptic neurons, yet not all spikes might necessarily have a strong impact on the membrane potential. In many spiking network models, the effect of individual spikes on the membrane potential is assumed to be small, and spiking activity asynchronous and irregular. In this limit it is indeed possible to substitute the input current by, e.g., Gaussian white noise or an Ornstein-Uhlenbeck process (Johannesma 1968). However, experimental findings have repeatedly demonstrated that, even though most synapses are weak, synaptic weight distributions typically have heavy tails, with some corresponding to postsynaptic potentials of up to 10 mV (Song et al. 2005; Lefort et al. 2009; Avermann et al. 2012; Ikegaya et al. 2013). It is thus important to extend the analysis of neuronal response dynamics to input spike trains that elicit large individual postsynaptic potentials.
At an even higher level of abstraction are models that ignore specific spike times and heterogeneities in network structure, i.e., rate and field models. In contrast to high-dimensional networks of spiking neurons, such models are often easier to analyze mathematically due to their low dimensionality, and hence can offer insight into steady states of network activity and bifurcations that give rise to complex spatiotemporal phenomena, such as oscillatory dynamics, traveling waves or activity bump formation. Prominent examples are neural mass models, such as the Jansen-Rit model (Jansen and Rit 1995), and neural field models, such as the Wilson-Cowan model (Wilson and Cowan 1972), which include spatial interactions between neurons. In these models, the dynamics of large, possibly heterogeneous, populations of neurons are substituted by rate variables in a mean-field manner (Ermentrout 1998; Coombes 2005).
An important conceptual step in the derivation of these models is the substitution of the spiking activity of a neuron in response to a certain input current I(t) by an appropriate rate function^{1} mapping the input history {I(s) : s ≤ t} to the response rate at time t. Common choices are abstract models such as threshold-linear or sigmoidal functions F(I(t)) depending only on the input current at time t. The threshold-linear form is often chosen because of mathematical convenience, but also because it mimics to first order the gain function of many individual neurons in experiments (Chance et al. 2002; Blomquist et al. 2009), while the sigmoidal form also models the saturation at very high firing rates. Yet, parameters of the gain function such as time constants, activation thresholds, or slope are often chosen rather qualitatively, and it is uncertain how well they match single-neuron properties or biophysics.
A first step towards a stringent comparison of spiking neuron network simulations with reduced neural mass or field models is to obtain an adequate quantitative expression for the neuronal gain function F(I(t)). It is hence of interest to understand if and how the activity of individual spiking neurons in response to arbitrary input currents can be described truthfully by a rate-model formulation. Several point neuron models are simple enough to allow for an analytical derivation of the gain function, assuming that input currents are Gaussian white noise, sinusoidally modulated input, or shot noise of a given structure (see, e.g., Gerstein and Mandelbrot 1964; Stein 1965; Brunel 2000; Brunel et al. 2001; Burkitt 2006b; Richardson 2007; Richardson and Swarbrick 2010; Roxin 2011; Ostojic and Brunel 2011). However, more complex nonlinear neuron models, such as the Izhikevich model or even the AMAT model, often render such analyses futile, especially in the presence of large-amplitude postsynaptic current events that are beyond the realm of perturbation-based theories. This holds to an even larger degree for the second step towards a stringent comparison of spiking network and neural field models, namely capturing the temporal response properties of the models. A thorough understanding of complex nonlinear models thus requires simulation studies.
We provide here an analysis of the response to spike train input of the models proposed by Izhikevich (2003b) and by Yamauchi et al. (2011), following the approach by Nordlie et al. (2010) and Heiberg et al. (2013). Both models actually represent an entire class of models that can be tuned to a wide range of responses by adjusting model parameters. We will thus refer to the Izhikevich and AMAT model classes, respectively, when we refer to the set of equations and spike-generation rules, while we will call each of the approximately 20 different parameterizations a model.
In Section 3.1 we present how the different models respond to spike train input.
In Section 3.2, we present fits of a linear-nonlinear firing-rate model to the spike responses of Izhikevich and AMAT models to stationary and temporally modulated stochastic spike trains across a range of input rates, synaptic weights, and modulation frequencies and amplitudes under different background noise regimes.
We group the different models according to the filter parameters obtained in Section 3.3, before we explore in Section 3.4 how well the linear-nonlinear rate models capture the response of their spiking counterparts to novel stimuli, such as steps in the input firing rate and more complex temporally modulated input.
Finally, in Section 3.5 we investigate whether we can generalize models fitted to a specific input regime to a broader set of stimuli, before we summarize our findings in Section 4.
2 Methods
2.1 Neuron models
Summary of Izhikevich model; for parameters, see Table 3
Summary of AMAT model; for parameters, see Table 4
Parameters for Izhikevich model class obtained from code published by Izhikevich (2003a)
Label  Model  a  b  c  d  ξ  I _{ext} 

A  Tonic spiking  0.02  0.2  − 65  6  15.1  0 
B  Phasic spiking  0.02  0.25  − 65  6  4.3  0 
C  Tonic bursting  0.02  0.2  − 50  2  15.1  0 
D  Phasic bursting  0.02  0.25  − 55  0.05  4.3  0 
E  Mixed mode  0.02  0.2  − 55  4  15.1  0 
F  Spike frequency adaptation  0.01  0.2  − 65  8  15.1  0 
G*  Class 1 excitable  0.02  − 0.1  − 55  6  49  0 
H  Class 2 excitable  0.2  0.26  − 65  0  5.6  − 0.5 
I*  Spike latency  0.02  0.2  − 65  6  15.1  0 
J  Subthreshold oscillation  0.05  0.26  − 60  0  1.8  0 
K  Resonator  0.1  0.26  − 60  − 1  2.4  0 
L*  Integrator  0.02  − 0.1  − 55  6  49  0 
M  Rebound spike  0.03  0.25  − 60  4  4.5  0 
N  Rebound burst  0.03  0.25  − 52  0  4.5  0 
O*  Threshold variability  0.03  0.25  − 60  4  4.5  0 
P  Bistability  0.1  0.26  − 60  0  0.87  0.24 
Q  Depolarizing afterpotential  1  0.2  − 60  − 21  17.8  0 
R*  Accommodation  0.02  1  − 55  4  1  0
S  Inhibition-induced spiking  − 0.02  − 1  − 60  8  4.5  80
T*  Inhibition-induced bursting  − 0.026  − 1  − 45  − 2  4.8  80
Parameters for AMAT model class, based on Yamauchi et al. (2011, Table 1)
Label  Model  α _{1}  α _{2}  β 

A  Tonic spiking  10  0  0 
B  Phasic spiking  10  0  − 0.3 
C  Tonic bursting  − 0.5  0.35  0 
D  Phasic bursting  − 0.5  0.35  − 0.3 
E  Mixed mode  − 0.8  0.7  0 
F  Spike frequency adaptation  10  1  0 
G  Class 1 excitable  15  3  0 
H  Class 2 excitable  15  − 0.05  0 
I  Spike latency  10  0  − 1 
J  Subthreshold oscillations  1  0  0.2 
K  Resonator  10  0  0.5 
L*  Integrator  10  0  0 
M  Rebound spiking  10  0  − 2.5 
N  Rebound bursting  − 0.5  0.35  − 2.5 
O  Threshold variability  10  0  − 0.5 
P  Bistability  20  − 0.4  0 
Q  Depolarizing afterpotential  25  − 1  0 
R*  Accommodation  10  0  − 0.5
S  Inhibition-induced spiking  20  0  2
T  Inhibition-induced bursting  − 0.5  0.35  2
We integrate the Izhikevich model class using the forward Euler algorithm as in the original publications on the model. Izhikevich (2003b) used a 1 ms time step, but split the update of the membrane potential (but not the recovery variable) into two steps of 0.5 ms “for numerical stability”. Figure 1 of Izhikevich (2004), on the other hand, was generated using different time steps for different cases, ranging from 0.1 ms to 0.5 ms without substepping, as evidenced by the source code used to generate that figure (Izhikevich 2003a). We extracted model parameters as shown in Table 3 from that source code, including an external current I_{ext} injected into the model for some variants in addition to the stimulus current.
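As a concrete illustration, the update scheme described above can be sketched in plain Python; the parameters are those of the Tonic spiking variant (A) from Table 3, while the constant stimulus current I = 10 is an arbitrary illustrative value, not one used in our experiments:

```python
def izhikevich_step(v, u, I, a, b, c, d, dt=1.0):
    """One forward-Euler step of the Izhikevich model with the membrane
    potential v updated in two substeps of dt/2 "for numerical stability",
    as in Izhikevich (2003b). All quantities are unitless except time (ms)
    and v (mV)."""
    for _ in range(2):                          # two half-steps for v only
        v = v + 0.5 * dt * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)                # one full step for u
    spiked = v >= 30.0
    if spiked:                                  # spike: reset v, increment u
        v, u = c, u + d
    return v, u, spiked

# Tonic spiking (variant A, Table 3): a=0.02, b=0.2, c=-65, d=6
a, b, c, d = 0.02, 0.2, -65.0, 6.0
v, u = -65.0, b * -65.0
spikes = []
for t in range(1000):                           # 1000 ms at dt = 1 ms
    v, u, fired = izhikevich_step(v, u, 10.0, a, b, c, d)
    if fired:
        spikes.append(t)
```

Using the per-case time steps of Izhikevich (2003a) only requires changing dt and dropping the substepping loop.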
Izhikevich’s source code also revealed that model variants G, L, and R use equations for V(t) or U(t) other than Eqs. (??)–(??). We therefore excluded these variants from our study. We also excluded variants I and O, since they have the same parameters as variants A and M, respectively, and differ only in the test stimulus injected to create Fig. 1 of Izhikevich (2004).
Furthermore, we observed that response patterns depend on the precise time step used. In particular, the response for case T, Inhibitioninduced bursting, is unstable for time steps shorter than 0.5 ms. We therefore also excluded case T from our analysis.
The Izhikevich model class is not defined with consistent units in the original publication (Izhikevich 2003b). While a time unit of milliseconds is implied and membrane potential is specified in millivolts, no units are given for the parameters or explicit constants. The model equations imply that input currents have units of mV/ms, which is rather exotic. In the spirit of Izhikevich (2003b) we therefore treat all quantities except time and membrane potential as unitless for the Izhikevich model class.
The AMAT class is implemented in NEST as model amat2_psc_exp using exact integration (Rotter and Diesmann 1999). The implementation follows the NEST convention of parameterizing the membrane potential equation Eq. (4) in terms of membrane time constant τ_{m} and membrane capacitance C_{m} and an explicit reversal potential E_{L}, while Yamauchi et al. (2011) parameterize their Eq. 1 in terms of τ_{m} and membrane resistance R and define E_{L} = 0mV. The parameterizations are related by C_{m} = τ_{m}/R and a shift of the membrane potential V and the resting value of the threshold ω by E_{L}. Some parameter values were adjusted to be able to reproduce Figs. 6 and 7 in Yamauchi et al. (2011) as discussed in the Appendix. Model variants L and R are excluded from the study as they have identical parameters to variants A and O, respectively.
In all simulations reported here, a single neuron is stimulated with spike train input. For the Izhikevich model class, this spike input results in instantaneous jumps in the membrane potential v. For the AMAT class, each incoming spike evokes an exponentially decaying synaptic current. For details, see Tables 1 and 2 and Section 2.2.
Output spikes are recorded with the NEST device spike_detector.
2.2 Stimulation
We briefly summarize here the sinusoidal stimulation protocol and response characterization based on Nordlie et al. (2010) and presented in detail in Heiberg et al. (2013). More general stimulation protocols are described in Section 2.5.
Mean rates a_{0}, modulation depth a_{1}, and modulation frequency f_{stim} are varied systematically; modulation depth is limited to 0 ≤ a_{1} ≤ a_{0} to avoid rectification. We used NEST device sinusoidal_poisson_generator to generate the input spike trains.
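Outside NEST, an input train with rate profile a(t) = a_0 + a_1 sin(2π f_stim t) can be generated by thinning a homogeneous Poisson process; this standalone NumPy sketch is an equivalent construction, not the sinusoidal_poisson_generator implementation:

```python
import numpy as np

def sinusoidal_poisson(a0, a1, f_stim, t_max, rng):
    """Spike times (ms) of an inhomogeneous Poisson process with rate
    a(t) = a0 + a1 * sin(2*pi*f_stim*t) in spikes/s, generated by thinning
    a homogeneous candidate process at the peak rate a0 + a1."""
    assert 0.0 <= a1 <= a0, "modulation depth limited to avoid rectification"
    rate_max = a0 + a1
    n_cand = rng.poisson(rate_max * t_max / 1000.0)      # t_max in ms
    t_cand = np.sort(rng.uniform(0.0, t_max, n_cand))
    rate = a0 + a1 * np.sin(2.0 * np.pi * f_stim * t_cand / 1000.0)
    keep = rng.uniform(0.0, rate_max, n_cand) < rate     # thinning step
    return t_cand[keep]

rng = np.random.default_rng(42)
spikes = sinusoidal_poisson(a0=100.0, a1=50.0, f_stim=10.0, t_max=10000.0,
                            rng=rng)
```

Thinning accepts each candidate spike with probability a(t)/(a_0 + a_1), so the accepted train has exactly the modulated rate profile.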
The weights w > 0 of the synapses transmitting the stimulus spike train a(t) are varied from about 10% to about 75% of the synaptic weight w_{𝜃} required for a single incoming excitatory spike to evoke a threshold crossing from rest. For the AMAT model class, w_{𝜃} is the same for all model variants and we use weights between 100 pA and 900 pA in our experiments.
For the Izhikevich model class, in contrast, model parameters do influence the response to isolated spikes. We therefore define a weight factor ξ for each model variant as the smallest weight for which a single excitatory input spike triggers the spike initiation process. Synaptic weights w are set to fractions of this value, ranging from 0.1 to 0.75, i.e., within the same range as for the AMAT model.
In addition to the resulting current stimulus, I_{stim}(t), we consider stationary noisy background input currents I_{bg}(t), representing unspecific weak network input. This allows us to study neuronal responses to I_{stim}(t) in different input scenarios. The full input a neuron receives is thus given by I(t) = I_{stim}(t) + I_{bg}(t). We characterize the background current by its mean μ_{bg} and standard deviation σ_{bg}.
The NEST implementation of the Izhikevich neuron model is equipped with instantaneous currentbased synapses. Assuming high rates and small synaptic strength, balanced spiking input can be approximated well by Gaussian white noise. We thus inject approximate Gaussian white noise realizations of defined mean μ_{bg} and standard deviation σ_{bg} using NEST’s noise_generator^{2}.
We consider three background current regimes: first, the case without additional background current, I_{bg}(t) = 0 pA, where all spiking activity is purely stimulus induced. In the second case, I_{bg}(t) is chosen such that μ_{bg} = 0 pA, and σ_{bg} is large enough to elicit spiking activity with background input alone, i.e., when I_{stim}(t) = 0 pA. In the third case, we consider a net inhibitory background current, with μ_{bg} < 0 pA and sufficient standard deviation σ_{bg} to again elicit baseline spiking in the absence of I_{stim}(t). While the first scenario can be considered a typical situation for neurons in slice preparations, the latter two mimic the situation in vivo, e.g., in cortical layer II/III, where ongoing spiking activity is sparse (see e.g., Sakata and Harris 2012; Petersen and Crochet 2013) and input currents are balanced or even inhibition dominated (Haider et al. 2013).
2.3 Characterization of response properties
2.3.1 Sinusoidal rate model
2.3.2 Linearity
2.4 Rate model description
To test how well this applies to the neuron models studied here, we fit linear-nonlinear firing-rate models to the responses of the spiking neuron models and compare firing-rate predictions from the linear-nonlinear models to those of the fitted spiking models. We summarize the derivation of the firing-rate model below, based on Heiberg et al. (2013) and Nordlie et al. (2010).
For each neuron, we find the activation function g(⋅) and the kernel h(t). For constant input, a(t) = a_{0}, the convolution becomes the identity operation, provided the kernel is normalized (\(\int h(t) dt= 1\)). We determine g(⋅) by measuring the response to stationary input, r_{0} = g(a_{0}) for a range of a_{0} and obtain a continuous representation of g(⋅) by interpolation (linear B-spline).
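As a sketch of this construction, with hypothetical (a_0, r_0) measurement pairs standing in for actual stationary responses, the linear B-spline is simply piecewise-linear interpolation:

```python
import numpy as np

# Hypothetical stationary measurements: input rates a0 (spikes/s) and the
# corresponding mean output rates r0 of the spiking model (illustrative values).
a0_grid = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])
r0_grid = np.array([0.0, 2.0, 15.0, 60.0, 140.0, 210.0])

def g(a):
    """Activation function g: linear B-spline (piecewise-linear)
    interpolation of r0 = g(a0), constant beyond the measured range."""
    return np.interp(a, a0_grid, r0_grid)

# For constant input a(t) = a0 and a normalized kernel, the model output
# reduces to r = g(a0).
```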
This form allows for a representation of the filter through a system of linear differential equations; see Section 2.4.1.^{3}
2.4.1 Differentialequation representation
The filter \(\tilde {H}_{0,\text {SUM}}(f)\) corresponds to a sum of low-pass filters in the time domain. For this model, the linear-nonlinear model of Eq. (14) can be mapped to a set of delay differential equations using the linear chain trick (Nordbø et al. 2007).
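A discrete-time sketch of this representation: assuming the filter is a delayed, weighted sum of first-order low-pass filters (the exact form is given by Eq. (18)), each low-pass state is integrated by forward Euler and the delay Δ is handled by shifting the input.

```python
import numpy as np

def ln_response(a, dt, gains, taus, delay, g):
    """Linear-nonlinear response: the input rate a (spikes/s, sampled at
    step dt in ms) is delayed by `delay` ms, passed through a weighted sum
    of first-order low-pass filters with time constants `taus` (ms), and
    the result is fed through the activation function g."""
    shift = int(round(delay / dt))
    a_del = np.concatenate([np.full(shift, a[0]), a[:-shift or None]])
    taus = np.asarray(taus, dtype=float)
    x = np.full(len(gains), float(a[0]))     # low-pass states at steady state
    out = np.empty(len(a))
    for i, ai in enumerate(a_del):
        x += dt / taus * (ai - x)            # dx_k/dt = (a(t - delay) - x_k)/tau_k
        out[i] = g(np.dot(gains, x))
    return out

# With gains summing to one and g the identity, a constant input rate
# passes through unchanged, as required by the kernel normalization.
a = np.full(2000, 100.0)                     # 100 s^-1 for 200 ms at dt = 0.1 ms
r = ln_response(a, dt=0.1, gains=[1.5, -0.5], taus=[5.0, 20.0], delay=1.0,
                g=lambda x: x)
```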
2.5 Tests against spike trains
We compare the response properties of our rate-based model against spiking neuron models as follows. We use synthetic (Section 2.5.1) or experimentally recorded (Section 2.5.2) spike trains S(t) as test input. Spiking neuron models are driven by these trains directly and their output spike trains R(t) are recorded as described in Section 2.6. We then use the fixed-kernel density estimation method by Shimazaki and Shinomoto (2010) with 0.05 ms bin width to estimate a continuous output firing rate r_{spike}(t). This is the reference against which we test the rate-based model.
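A fixed-kernel rate estimate of this kind can be sketched as follows; note that Shimazaki and Shinomoto (2010) additionally optimize the kernel width, which this sketch takes as given:

```python
import numpy as np

def rate_estimate(spike_times, t_max, kernel_width=15.0, dt=0.05):
    """Fixed-kernel rate estimate: bin spike times (ms) on a dt grid and
    convolve with a Gaussian kernel of standard deviation kernel_width (ms),
    normalized so the result is a rate in spikes per second."""
    edges = np.arange(0.0, t_max + dt, dt)
    counts, _ = np.histogram(spike_times, bins=edges)
    half = int(4.0 * kernel_width / dt)          # truncate at +/- 4 sigma
    tk = np.arange(-half, half + 1) * dt
    kernel = np.exp(-tk ** 2 / (2.0 * kernel_width ** 2))
    kernel /= kernel.sum() * dt / 1000.0         # kernel integrates to 1 (in s)
    return np.convolve(counts, kernel, mode='same')

# 100 spikes/s, regularly spaced over 1 s, recovered up to edge effects
spike_times = np.arange(0.0, 1000.0, 10.0)
r = rate_estimate(spike_times, t_max=1000.0)
```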
To obtain the response of the rate-based model, we either use the known rate of the synthetic input spike trains or obtain a continuous input rate function a(t) from the input spike trains S(t) using the fixed-kernel density estimation method. Applying Eq. (14) to this rate yields the response of the rate model r_{rate}(t).
We repeat each simulation experiment with five different random seeds and retain only results for which the optimal kernel width obtained by the density estimation method is 15 ms or less, as wider kernels would unduly smooth the response over time.
The difference between responses obtained from rate-based and spiking models is then defined as the mean squared error normalized by the variance of the response of the spiking model (Pillow et al. 2005)
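Expressed as code, with the convention that E_r = 1 denotes a perfect fit (cf. Section 3.4), the fit quality can be computed as follows; the "1 − normalized MSE" form is our reading of the measure, inferred from that convention:

```python
import numpy as np

def fit_quality(r_spike, r_rate):
    """Fit quality E_r = 1 - MSE / Var(r_spike): the mean squared difference
    between the spiking-model rate r_spike(t) and the rate-model prediction
    r_rate(t), normalized by the variance of r_spike. E_r = 1 is a perfect
    fit; E_r <= 0 is no better than predicting the mean rate."""
    r_spike = np.asarray(r_spike, dtype=float)
    r_rate = np.asarray(r_rate, dtype=float)
    mse = np.mean((r_spike - r_rate) ** 2)
    return 1.0 - mse / np.var(r_spike)

# A perfect prediction scores 1; predicting the mean rate scores 0.
r_spike = 100.0 + 50.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
```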
2.5.1 Tests with synthetic spike trains
Poisson spike train rates applied during different intervals
Interval [ms]  0–600  600–1000  1000–1200  1200–1500 

Rate [1/s]  100  200  40  150 
2.5.2 Tests with realistic spike trains
The Izhikevich models in particular respond weakly to these spike trains in many cases. We therefore increase the rate of the input spike trains by merging pairs of spike trains, resulting in a total of 48 input spike trains with an average rate of 36.6 spikes per second. We then drive 48 model neurons independently with one spike train each for 8000 ms and pool the resulting output spike trains for output rate estimation.
2.6 Simulation
Simulations for all model configurations are performed with the NEST Simulator (Gewaltig and Diesmann 2007; Plesser et al. 2013).
In practice, we simulate N trials by creating N mutually independent Poisson-generator–neuron pairs in a single NEST simulation. Membrane potentials are randomized upon network initialization and data collection is started only after an equilibration period of 1 s of simulated time. All simulations are performed with a spike-time resolution of 0.1 ms.
Simulations underlying model fitting are performed using NEST 2.3.r10450, while some scoring of model responses according to Eq. (33) was performed using NEST 2.8.0. Trials are configured using the NeuroTools.parameters package (Muller et al. 2009). Data analysis is performed using NumPy 1.7.1–1.11.1, SciPy 0.18.1, Pandas 0.11.0–0.18.1 and Matplotlib 1.2.1–1.5.3 under Python 2.7.
3 Results
3.1 Response to spike train input
As spiking and bursting variations are included as separate response types in the model classification (Fig. 1), we illustrate the burstiness of the responses by marking spikes fired within dT = 5 ms of each other as belonging to a burst, corresponding to the upper limit of intraburst intervals in LGN (Funke and Wörgötter 1997, p. 71).
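The burst-marking rule can be sketched as follows: a spike is tagged as belonging to a burst if the interval to either neighboring spike is at most dT = 5 ms.

```python
import numpy as np

def mark_burst_spikes(spike_times, dt_burst=5.0):
    """Boolean mask marking spikes fired within dt_burst (ms) of a
    neighboring spike as belonging to a burst."""
    t = np.sort(np.asarray(spike_times, dtype=float))
    isi_prev = np.diff(t, prepend=-np.inf)   # interval to previous spike
    isi_next = np.diff(t, append=np.inf)     # interval to next spike
    return (isi_prev <= dt_burst) | (isi_next <= dt_burst)

spikes = [10.0, 12.0, 14.0, 40.0, 100.0, 103.0]
in_burst = mark_burst_spikes(spikes)
# -> [True, True, True, False, True, True]
```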
The models that exhibit their characteristic behaviour (Fig. 1) in response to “simple” excitatory input current shapes (e.g., steps, ramps, pulses) generally behave as expected when driven by Poisson spike trains: spiking neurons primarily spike and bursting neurons burst, but the nuances of individual models (e.g., tonic vs. phasic) are less visible in the spiking patterns due to the input variability. Models that rely on more specific input current patterns or on consistent inhibitory input (i.e., bottom rows) receive the required input to a lesser extent and respond in a less characteristic manner; some even seem erratic (e.g., Fig. 5Q). Note, however, that the figures illustrate responses for a single combination of input rate and noise regime, and that the models are to varying degrees sensitive to these conditions.
In contrast to the 20 markedly different responses to current injections (Fig. 1), responses to spiking input show more similar patterns across models, differing in the overall response rate and the proportion of spikes belonging to bursts.
While some Izhikevich and AMAT models that show identical responses to current injections also respond similarly when driven by spiking input (e.g., top two rows), we observe some with very different response patterns (e.g., depolarizing afterpotential (Q) and inhibitioninduced spiking (S)) across the two model classes.
3.2 Linearnonlinear models
We now obtain the linear-nonlinear models as defined by Eq. (24).
3.2.1 Activation functions

We obtain activation functions for:

- each model (14 models for the Izhikevich model class, 18 for the AMAT model class);
- each background noise regime:
  - no noise: μ = 0, σ = 0;
  - balanced noise: Izhikevich μ = 0, σ = 0.1; AMAT μ = 0 pA, σ = 100 pA;
  - biased noise: Izhikevich μ = − 0.1, σ = 0.2; AMAT μ = − 100 pA, σ = 200 pA;
- each synaptic weight (Izhikevich: 0.1, 0.25, 0.5, 0.6, 0.75; AMAT: 100 pA, 300 pA, 500 pA, 700 pA, 900 pA).
For the Izhikevich neurons, the stationary linearity metric L_{1} indicates that strong synaptic weights w, large mean input rates a_{0}, and small modulation amplitudes a_{1} give the most linear responses. Larger weights and mean rates not only increase the mean of the Poisson input current, but also its variance. This leads to a linearization of the activation function and moves the activation threshold towards smaller rates (see also Chance et al. 2002). Furthermore, firing-rate modulation amplitudes are more likely to stay within a single region of the sigmoidal firing-rate curve for small a_{1}, and are thus more likely to adhere to a linear fit.
The stationary linearity metric L_{1} for the augmented MAT model indicates overall more linear behavior, but the same general pattern of parameter dependence can be seen (Fig. 8). One notable difference is the saturation of the AMAT model at output rates of 500 s^{− 1}, due to the absolute refractory time of 2 ms, which adds another source of nonlinearity in the firing-rate curves of some neurons.
3.2.2 Transfer function and linear filters
We obtain empirical transfer functions according to Eq. (17) for 20 combinations of working point and modulation depth (a_{0}, a_{1}) for each model, noise regime, and synaptic weight, using the approach described in detail in Heiberg et al. (2013, Section 2.2.2), measuring the model response at 28 different stimulation frequencies f_{stim}, logarithmically spaced from 1 Hz to 1000 Hz. We then fit the linear filter \(\tilde {H}_{0,\text {SUM}}(f) \) according to Eq. (18) as described in Section 2.4, obtaining fit parameters (f_{c,1}, f_{c,2}, γ_{1}, γ_{2}, Δ) for each model and stimulation parameter combination. Note that γ_{1} is fully captured by the activation function, and therefore does not explicitly enter the linear-nonlinear model we construct here, cf. Eq. (25).
Fit parameters for filters H_{0}(f) shown in the second row of Fig. 6
Model  Noise  γ _{1}  γ _{2}  f_{c,1}[Hz]  f_{c,2}[Hz]  Δ[ms] 

Izh/Tonic spiking  none  − 0.152  − 1.328  9.988  61.577  0.987 
balanced  − 0.150  − 1.334  9.959  63.535  1.009  
biased  − 0.110  − 1.442  8.397  68.001  1.003  
AMAT/Tonic spiking  none  0.468  − 0.225  224.270  636.620  0.183 
balanced  0.386  − 0.107  181.140  636.620  0.181  
biased  0.088  1.748  28.946  149.427  0.222  
Izh/Phasic bursting  none  − 25.548  − 1.002  8.745  9.043  1.709 
balanced  − 23.825  − 1.002  8.817  9.134  1.670  
biased  − 21.662  − 1.003  8.601  8.892  1.936  
AMAT/Phasic bursting  none  − 0.718  − 1.486  3.067  22.380  0.913 
balanced  − 0.672  − 1.488  3.159  21.450  0.832  
biased  − 0.304  − 1.884  3.697  21.139  0.836 
We found that not all model variants responded strongly enough to periodic stimulation under all stimulation conditions to provide sufficient spike data for a kernel fit. We therefore obtained kernel fits only for approximately three-quarters of all conditions for the Izhikevich class (3262 out of 4200 possible) and for about 90% of all conditions for the AMAT class (4843 out of 5400).
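As an illustration of the fitting step, the sketch below assumes the filter is a delayed sum of two first-order low-pass filters with gains γ_1, γ_2 and cutoff frequencies f_{c,1}, f_{c,2}; the actual expression is Eq. (18), so the functional form used here is a reconstruction from the parameter set, not the paper's exact formula.

```python
import numpy as np
from scipy.optimize import least_squares

def h_sum(f, g1, g2, fc1, fc2, delay_ms):
    """Assumed filter: a pure delay times a sum of two first-order
    low-pass filters, H(f) = exp(-2*pi*i*f*delay) *
    (g1 / (1 + i*f/fc1) + g2 / (1 + i*f/fc2)), with f in Hz."""
    lp = g1 / (1.0 + 1j * f / fc1) + g2 / (1.0 + 1j * f / fc2)
    return lp * np.exp(-2j * np.pi * f * delay_ms / 1000.0)

def fit_filter(f, H_emp, p0):
    """Least-squares fit of (g1, g2, fc1, fc2, delay) to an empirical
    transfer function H_emp sampled at frequencies f (Hz), fitting the
    real and imaginary parts jointly."""
    def resid(p):
        d = h_sum(f, *p) - H_emp
        return np.concatenate([d.real, d.imag])
    return least_squares(resid, p0).x

# Recover known parameters from a noiseless synthetic transfer function
f = np.logspace(0.0, 3.0, 28)                # 1 Hz to 1000 Hz, as in the text
H_true = h_sum(f, 0.4, -1.3, 9.0, 60.0, 1.0)
p_fit = fit_filter(f, H_true, p0=(0.5, -1.0, 8.0, 50.0, 0.8))
```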
3.3 Grouping of models
Models grouped by k-means clustering of linear filter parameters as illustrated in Fig. 9
Group  Firing pattern  Izhikevich  AMAT 

1  Isolated spikes, rare mini bursts  A/Tonic spiking  B/Phasic spiking 
E/Mixed mode  I/Latency  
F/Adaptation  O/Threshold variability  
2  Isolated spikes  B/Phasic spiking  A/Tonic spiking 
J/Subthreshold oscillations  H/Class 2  
K/Resonator  
M/Rebound spiking  
S/Inhibition-induced spiking
3  Short bursts  C/Tonic bursting  C/Tonic bursting 
D/Phasic bursting  
E/Mixed mode  
F/Adaptation*  
G/Class 1*  
4  Long bursts  D/Phasic bursting  M/Rebound spiking 
N/Rebound bursting  N/Rebound bursting  
S/Inhibition-induced spiking*
T/Inhibition-induced bursting
5  Long bursts  Q/Depolarizing afterpotential  J/Subthreshold oscillations 
K/Resonator*  
6  Regular isolated spikes  H/Class 2  P/Bistability 
P/Bistability  Q/Depolarizing afterpotential* 
Median values of parameters for filter kernels fitted to the six groups described in Table 7 for Izhikevich and AMAT models
Group  γ _{1}  γ _{2}  f_{c,1}[Hz]  f_{c,2}[Hz]  Δ[ms] 

Izhikevich  
1  − 11.98  − 1.41  8.57  61.67  1.35 
2  − 1231.41  − 1.01  17.48  19.51  3.60 
3  − 154.58  − 1.21  10.51  23.28  0.91 
4  − 2273.17  − 1.02  6.70  7.33  6.54 
5  54.98  0.19  17.26  403.19  0.74 
6  − 1462.74  − 1.02  36.15  38.72  4.24 
AMAT  
1  − 90.59  − 1.52  16.03  36.65  0.21 
2  2.46  0.50  34.47  161.85  0.22 
3  − 34.88  − 1.65  1.53  21.41  0.35 
4  − 1436.78  − 1.00  16.06  18.20  1.90 
5  32.98  0.29  8.56  407.10  0.27 
6  3.89  0.90  12.77  190.14  0.20 
Comparing the grouping of models to the spike responses shown in Figs. 4 and 5, we can roughly match the groups found by k-means clustering of filter parameters to firing patterns, as indicated in the right column of Table 7. This classification is far from perfect, as several models show firing patterns different from the groups into which they have been placed, especially for the AMAT class. It should also be noted that the firing patterns shown are for a single stimulus configuration only and that models may behave differently under other conditions; the k-means clustering, on the other hand, is based on a wide range of stimulus conditions.
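The grouping step can be sketched with SciPy's k-means implementation; the parameter matrix below is synthetic stand-in data, whereas the actual clustering uses the filter parameters fitted in Section 3.2.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

# Synthetic stand-in for the fitted filter parameters: one row per
# (model, condition) fit, columns (gamma1, gamma2, f_c1, f_c2, delay).
rng = np.random.default_rng(0)
params = rng.normal(size=(200, 5)) * np.array([100.0, 1.0, 10.0, 100.0, 1.0])

# Scale each column to unit variance so that parameters living on very
# different scales (e.g. gamma1 vs. delay) contribute equally.
features = whiten(params)

np.random.seed(0)                      # kmeans2 draws from the global RNG
centroids, labels = kmeans2(features, 6, minit='++')
```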
3.4 Performance of rate models
We evaluate the performance of the linear-nonlinear firing-rate models by testing them against the corresponding spiking model as described in Section 2.5, using the fit quality E_{r} as criterion, with E_{r} = 1 indicating a perfect fit.
The third row of Fig. 6 shows the response to a Poisson spike train with a step in rate from 100 s^{− 1} to 300 s^{− 1}. We use the filters fitted for the same noise regime and synaptic weight, with a_{0} = 200 s^{− 1} and a_{1} = 100 s^{− 1}, corresponding to the step height. For the Tonic spiking case, the firing-rate models capture the spiking neuron response very well, with E_{r} > 0.9 in all cases (see legend). For the Phasic bursting models, we find that the rate models overshoot massively for the Izhikevich variant with no or balanced noise, while the rate models “undershoot” somewhat for the AMAT variant. The stationary rate attained after the step is captured well in all cases. These examples also provide an illustration of how to interpret the fit quality measure E_{r}.
For the Izhikevich class, the Inhibition-induced spiking and bursting models, as well as most models with the lowest weight, w = 0.1, produce too few spikes to confidently estimate firing rates from the spiking model. We also observe very poor responses for the Bistability model. Class 2 excitable stands out with poor scores, E_{r} < 0.5, while the remaining models provide reasonable fits, E_{r} > 0.7, at least for most cases with sufficiently strong weights (w ≥ 0.4).
The AMAT model class performs significantly better: results are available for almost all stimulus conditions except w = 100 pA in the absence of noise, and all models except the Depolarizing after-potential model yield excellent fits (E_{r} > 0.9) for almost all conditions.
For Izhikevich-class models we find noticeably worse fit quality, mostly E_{r} < 0.7, with the worst results largely for the same conditions that also yielded low fit quality in response to Poisson input with piecewise constant rate. The main differences are that the Depolarizing after-potential model, which fitted stepped Poisson input very well, does not perform better than the other models for the real spike trains, and that we now obtain fit quality values, albeit very poor ones, for the Inhibition-induced bursting model.
AMAT-class responses to real spike trains show overall better fit quality than the Izhikevich class, but also for the AMAT class, fit quality is lower in response to real spike trains than to stepped Poisson input. The distribution of good and bad fits is similar to that observed for stepped Poisson input, with the worst performance for the Depolarizing after-potential model. Furthermore, more models require w ≥ 300 pA to yield a fit quality result for real spike trains.
Proportion of linear-nonlinear rate models achieving \(E_{r}^{\text{opt}}\geq 0.8\) across all model variants, noise regimes and synaptic weights

                          Izhikevich   AMAT
Stepped Poisson trains        60%       84%
Real spike trains             28%       59%
3.5 Model generalizations
As shown above, our linear-nonlinear rate models can capture the responses of AMAT-class spiking neuron models to real spike trains quite accurately; for the Izhikevich class, on the other hand, fits were poorer. Unfortunately, to find the optimal linear-nonlinear model for each input configuration μ, σ, w, we had to test a set of 20 different linear-nonlinear models and then pick the best one. This is impractical. We now consider how to generalize our linear-nonlinear rate models so that we can select an optimal model a priori.
We consider four different types of generalization:

- per model (M): one linear-nonlinear model for each of the 14 Izhikevich-class and 18 AMAT-class models;
- per model and noise (MN): one linear-nonlinear model for each Izhikevich/AMAT model and each noise regime;
- per model, noise, and weight (MNW): one linear-nonlinear model for each Izhikevich/AMAT model, each noise regime, and each synaptic weight, selected a priori;
- MNW selected by stepped response (MNWS): one linear-nonlinear model for each Izhikevich/AMAT model, each noise regime, and each synaptic weight, selected based on the stepped Poisson test.
For the M and MN generalizations, we exploit that the activation functions g(a) for many models and conditions scale roughly linearly in the synaptic weight. We thus pool the scaled activation-function data g(a)/w for a given model across all input conditions (M), or across all synaptic weights for a given noise regime (MN), and fit a single spline \(\bar {g}(a)\) to the pooled data. We then use \(w\bar {g}(a)\) as the activation function in the linear-nonlinear model. For MNW and MNWS we use the original splines fitted directly to measurements.
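The pooling-and-spline step for the M-level generalization can be sketched as follows. The data arrays and the shape of the activation function below are synthetic stand-ins (the paper's measured responses are not reproduced here), and the smoothing parameter is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Pool scaled activation data g(a)/w across all input conditions for one
# model and fit a single spline gbar(a).  Synthetic stand-in data:
rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0.0, 400.0, size=300))   # stimulus rates a [1/s]
w = rng.choice([0.1, 0.4, 0.7], size=300)        # synaptic weights
g = w * 0.5 * np.clip(a - 50.0, 0.0, None)       # toy activation g(a; w)
g += rng.normal(0.0, 0.1, size=300)              # measurement noise

# fit \bar{g}(a) to the pooled, weight-scaled data (x must be increasing)
gbar = UnivariateSpline(a, g / w, s=300.0)

def activation(a_val, w_val):
    """Generalized activation function w * \\bar{g}(a)."""
    return w_val * float(gbar(a_val))
```

By construction, the generalized activation is exactly linear in w, so one spline serves all synaptic weights.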
To generalize the linear kernels, we take the median of each of the kernel fit parameters f_{c,1}, f_{c,2}, γ_{1}, γ_{2}, Δ and use these medians as the parameters of our generalized kernel; using the median instead of the mean avoids problems with outliers. For M generalization, we take the median across all μ, σ, w, a_{0}, a_{1} combinations; for MN across all w, a_{0}, a_{1} for given μ, σ; and for MNW across all a_{0}, a_{1} for given μ, σ, w.
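As a minimal illustration, this median-based kernel generalization amounts to a column-wise median over the fitted parameter sets; the numbers below are invented for illustration only:

```python
import numpy as np

# Each row is one fitted kernel parameter set (f_c1, f_c2, gamma1, gamma2,
# Delta) for one input condition; values invented for illustration.
fits = np.array([
    [16.0,  36.0,  -90.0, -1.5, 0.20],
    [18.5,  40.0, -110.0, -1.2, 0.25],
    [15.0, 160.0,    2.5,  0.5, 0.22],
    [17.0,  38.0,  -95.0, -1.6, 0.21],
])

# Generalized kernel: component-wise median across conditions; the median
# is robust against occasional outlier fits (e.g. the third row above).
generalized = np.median(fits, axis=0)
```

Note that the outlier in the third row barely shifts the result, whereas a mean would be pulled strongly toward it.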
For MNWS generalization, we proceed differently: for each combination of μ, σ, w, we select the filter parameters f_{c,1}, f_{c,2}, γ_{1}, γ_{2}, Δ that yielded the highest fit quality \(E_{r}=E_{r}^{\text {opt}}\) in response to the stepped Poisson input, our test stimulus.
The most important observation, though, is that MNWS generalization works well, with ρ_{MNWS} > 0.9 in almost all cases for both model classes. This means that by selecting a filter model based on a fixed stepped Poisson protocol, we obtain a linear-nonlinear rate model that is close to the optimal model for the given noise regime and synaptic weight when applied to real neuronal dynamics.
Combined with the observation from Table 9 that the optimal model provides a good approximation to actual neuronal firing rates in roughly two thirds of all conditions, we can thus use our fitting approach together with the stepped Poisson test to select a reasonably reliable linear-nonlinear rate model.
4 Discussion
In this paper, we numerically investigated the response properties of two neuron model classes, the Izhikevich model and the AMAT model, to noisy spiking input. Both neuron models can reproduce a wide range of experimentally observed spike response patterns when stimulated with current injections, but how these neurons behave with more natural synaptic input has so far not been studied systematically. We considered three different background noise regimes: one with no background noise at all, one balanced, and one biased with enough background noise to put neurons in a spontaneously active state at low output rates. The first scenario can be considered to represent the situation in slice preparations; the other two correspond to neurons embedded in a network with ongoing excitatory and inhibitory activity. The stimulus spikes were modeled as stationary and sinusoidally modulated excitatory Poisson input spike trains, mimicking afferent inputs from sensory pathways with different synaptic connection strengths w.
4.1 Responses to spike input
We found that the response complexity observed under current injection collapses to only a few response types when the neurons are driven by stationary or sinusoidally modulated Poisson input. This is not entirely surprising, since some of the models are parametrically quite similar, and variations in response behavior to current stimulation depend on very specific current injection patterns that are not realizable in terms of Poisson spike inputs. Still, actual neurons receive inputs that are often well described by Poissonian statistics, so this can be considered the functionally more relevant input scenario. It is hence of interest to see which, possibly quite different, neuron models behave approximately equivalently.
The respective groupings for Izhikevich and AMAT are, all in all, very different. In particular, direct comparison of the individual corresponding neuron models reveals completely different response properties for most neuron models. This is in part explained by the differences in subthreshold dynamics, which are linear for the AMAT model but nonlinear for the Izhikevich model. Individual spikes thus have quite different effects in the two models: any input spike to an AMAT neuron will always evoke the same postsynaptic membrane-potential response, and these responses simply superimpose due to the subthreshold linearity. For the Izhikevich models, on the other hand, the postsynaptic response depends intricately on the values of the dynamic variables, such as the membrane potential, and the effect of several incoming excitatory spikes of the same weight at one moment might be smaller than that of just one such spike at another moment. These differences hence make it hard, or even impossible, to set up synaptic weights w for the two neuron model classes that are directly comparable.
We therefore chose to gauge synaptic strengths in terms of the minimal weight w_{𝜃} needed to evoke a spike from rest (cf. Section 2.2), and to use weights spanning roughly 10% to 75% of w_{𝜃}. This allowed us to quantify and compare input coupling strength within and between model classes. In general, we observed that output rates for Izhikevich neurons were much lower than for AMAT neurons at the same input frequency and relative synaptic strength. It is therefore possible that the model classes would become more similar if the Izhikevich neurons were driven at higher input rates or at other background noise levels, although we did not observe such a trend.
We observed here that neuron models can show very similar responses to spike input even though they show very different responses to current injections, and in particular that models of different mathematical nature showing identical current responses can respond very differently to spiking input. Given that neurons in vivo are mainly driven by spike input, this raises the intriguing question of how valuable a classification of neuronal response types based purely on current-injection experiments is. While in vitro characterization using carefully crafted current injections is an important tool for classifying neuronal cell types, a systematic classification based on a neuron's response to spiking input may be required to select suitable neuron models for spiking and rate-based network models.
4.2 Firing-rate models
In the second part of the paper, we used the measured stationary and frequency responses to fit linear-nonlinear firing-rate models to the data. It has previously been shown that the firing-rate dynamics in response to complex spiking input can be well described by such models (Paninski et al. 2004; Ostojic and Brunel 2011; Weber and Pillow 2017; Østergaard et al. 2018). In particular, Nordlie et al. (2010) studied simple leaky integrate-and-fire (LIF) models with strong current-based synapses. They showed that a low-pass fit to the frequency response, together with the nonlinear activation function, yielded linear-nonlinear rate models that predicted responses to arbitrary inputs with high accuracy. Heiberg et al. (2013) adapted this approach to study two LIF-like models, one with current-based, the other with conductance-based synapses, fitted to data recorded from cat and macaque LGN in response to retinal stimulation. They, too, found the performance of linear-nonlinear rate models to be good.
Here, we presented results of the same basic approach for the Izhikevich and AMAT neuron model classes. Frequency responses were in most cases more complex than simple low-pass behavior, and we employed fits to band-pass filters that better capture the non-monotonic passband structure observed in simulations. We then used novel test stimuli, i.e., step responses and more structured, highly variable spike input sampled from actual recordings of retinal ganglion cells (Casti et al. 2008), to study rate-model performance. The main finding is that the AMAT neuron model class is approximated much better than the Izhikevich class by our linear-nonlinear rate models: for the former, good rate-model responses (E_{r} ≥ 0.8) were obtained in 64% of all cases tested, while the latter provided such good results in only 15% of cases tested. This difference might again be explained by the fact that the AMAT model class has linear subthreshold dynamics. The AMAT class is not completely linear either, however, because its firing threshold depends on the history of the membrane-potential dynamics; neuronal transfer is therefore not expected to be fully linear in either model class.
Some of the model variants gave consistently poor results, typically those that show very nonlinear behavior in response to direct current stimulation, e.g., the Bistability, Inhibition-induced spiking, and Inhibition-induced bursting models for the Izhikevich class, and the Depolarizing after-potential and Inhibition-induced spiking models for the AMAT class.
To estimate the effect of linearity on rate-model performance, we measured the linearity of the stationary response function r_{0}(a_{0}) in terms of L_{1}, cf. Eq. (12). If the stationary response function is linear, the activation function g(⋅) is also linear and only its slope is relevant, independent of the working point, cf. Section 2.4. We computed L_{1} for all background noise regimes as a function of synaptic strength w and working point a_{0}, and found that the AMAT model is generally more linear than the Izhikevich model with respect to L_{1}. We further found that the linearity measure L_{1} does not predict rate-model performance (data not shown). Thus, a nonlinear activation function does not imply poor rate-model performance, nor does linearity in terms of L_{1} necessarily predict good rate-model performance.
Furthermore, despite exploring many potential performance predictors, we were unable to identify any single quantity or group of quantities that reliably predicted whether the response of a neuron model in a given input regime could be captured well by a linear-nonlinear rate model. We found, though, that a relatively simple protocol, testing the rate model's performance in response to a Poisson spike train with piecewise constant rate (stepped Poisson), allowed us to reliably identify rate models that render spiking neuron model responses to realistic spike input with reasonable accuracy.
4.3 Application to network modeling
We have shown that the firing-rate responses of the widely used Izhikevich model (Izhikevich 2003b) and in particular of the award-winning augmented multi-timescale adaptive threshold (AMAT) model (Yamauchi et al. 2011; Jolivet et al. 2008) can be captured by linear-nonlinear firing-rate models with band-pass filters fitted to spiking neuron responses through a systematic, automated process. We have further shown that these models can be generalized across a wide range of input conditions without excessive loss of fidelity, and that optimal parameter sets can be chosen using simple test stimuli. Furthermore, since we use a band-pass filter in sum form, it can be represented by a system of two first-order differential equations, which is straightforward to integrate into standard formalisms for rate-based network models (see, e.g., Nordbø et al. 2007).
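To illustrate the final point: a band-pass filter in sum form, two low-pass branches with amplitudes γ_1, γ_2 and cutoff frequencies f_{c,1}, f_{c,2} (time constants τ_k = 1/(2πf_{c,k})), corresponds to two first-order ODEs whose difference is the filter output. The sketch below integrates a step response with forward Euler; the parameter values are arbitrary, and the delay Δ is omitted for brevity since it only shifts the output in time:

```python
import numpy as np

def bandpass_step_response(gamma1, gamma2, fc1, fc2, dt=1e-4, t_end=1.0):
    """Step response of a sum-form band-pass filter, integrated as two
    first-order low-pass ODEs:  tau_k * da_k/dt = -a_k + gamma_k * s(t),
    with output a1(t) - a2(t) and unit-step input s(t)."""
    tau1 = 1.0 / (2.0 * np.pi * fc1)
    tau2 = 1.0 / (2.0 * np.pi * fc2)
    n = int(t_end / dt)
    a1 = a2 = 0.0
    out = np.empty(n)
    for k in range(n):
        a1 += dt / tau1 * (-a1 + gamma1)   # s(t) = 1 for t >= 0
        a2 += dt / tau2 * (-a2 + gamma2)
        out[k] = a1 - a2
    return out

# fast branch minus slow branch -> band-pass transient overshoot,
# settling to gamma1 - gamma2
resp = bandpass_step_response(gamma1=1.5, gamma2=1.0, fc1=20.0, fc2=2.0)
```

Because each branch is a plain first-order ODE, this form drops directly into the usual rate-network formalisms, where population activities are already governed by first-order dynamics.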
This suggests the following approach to improving rate-based neuronal network models. Assuming a model with two neuronal populations, start by selecting for each population the AMAT (or Izhikevich) model variant that best matches the response of individual neurons of that population to spiking input. Then apply the fitting procedure described in this paper (Section 2.4) to obtain the parameters of the nonlinear activation function and the linear band-pass filter, using test stimuli covering the expected dynamic range of the network model. From the large set of fits obtained, either select individual fits based on a simple test protocol (Section 3.4) or generalize at a suitable level (Section 3.5), and apply the model parameters thus obtained to the differential-equation representation of the band-pass filter (Section 2.4.1).
The approach presented here may thus contribute to bringing rate-based network modeling closer to the reality of biological neuronal networks. While a systematic delineation of the range of validity of the linear-nonlinear models described here for network modeling is beyond the scope of this paper, we consider the generalization results in Figs. 16 and 17 an indicator: linear-nonlinear models will not be useful where responses are insufficient (gray areas in Figs. 10–13 and 16–17); we also observed poorer performance for spiking patterns that deviate strongly from tonic behavior (e.g., phasic bursting, bistability, depolarizing after-potential). But good generalization scores, in particular where they coincide with good scores for individual conditions (Figs. 10–13), suggest that in these cases our linear-nonlinear model provides a faithful representation of the rate dynamics of the underlying spike responses, and hence has strong potential to perform well at the network level.
Footnotes
1. Strictly speaking, this is a functional, not a function, but we ignore such mathematical detail here as we focus on instantaneous transformations in what follows.
2. The current generated is stepwise constant during each dt = 0.1 ms time step, with Gaussian-distributed amplitude.
3. We also explored combining the terms in product form,
$$\tilde{H}_{0,\text{Prod}}(f) = \gamma_{1} e^{-2\pi if{\Delta}} \frac{1}{1+ i\frac{f}{f_{\text{c,1}}}} \left( 1 - \frac{\gamma_{2}}{1+ i\frac{f}{f_{\text{c,2}}}} \right) , $$
but did not observe significantly different results.
4. Each parameter set consists of f_{c,1}, f_{c,2}, γ_{1}, γ_{2}, and Δ. To compress widely scattered data, we transformed f_{c,1}, f_{c,2}, and Δ using α(x) = log_{10} x, and γ_{1} and γ_{2} using β(x) = sgn(x) log_{10}(100|x|), before applying k-means clustering.
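The parameter compression and clustering of footnote 4 can be sketched as follows. We read the second transform as β(x) = sgn(x)·log₁₀(100|x|), since γ_{1} and γ_{2} may be negative; the parameter sets and the tiny Lloyd-style k-means below are illustrative stand-ins (the paper itself cites standard tooling such as scikit-learn):

```python
import numpy as np

def alpha(x):
    """Compress positive parameters f_c1, f_c2, Delta."""
    return np.log10(x)

def beta(x):
    """Compress gamma1, gamma2, which may be negative (sign-preserving log)."""
    return np.sign(x) * np.log10(100.0 * np.abs(x))

def kmeans(X, k=2, iters=100, seed=0):
    """Minimal Lloyd-style k-means, for illustration only."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Invented parameter sets (f_c1, f_c2, gamma1, gamma2, Delta), one per model.
params = np.array([
    [16.0,  36.0,  -90.0, -1.5, 0.20],
    [ 1.5,  21.0,  -35.0, -1.7, 0.35],
    [34.0, 162.0,    2.5,  0.5, 0.22],
    [ 8.5, 407.0,   33.0,  0.3, 0.27],
])
X = np.column_stack([
    alpha(params[:, 0]), alpha(params[:, 1]),
    beta(params[:, 2]), beta(params[:, 3]),
    alpha(params[:, 4]),
])
labels = kmeans(X, k=2)
```

With these (invented) parameters the two negative-γ models end up in one cluster and the two positive-γ models in the other, illustrating how the compression makes widely scattered parameters comparable before clustering.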
Acknowledgements
We are grateful to Alex Casti for permission to use data from his recordings. Partially funded by the Research Council of Norway (Grant 178892/V30 eNeuro) and EU Grants 269921 (BrainScaleS), 604102 (Human Brain Project RUP), 720270 (Human Brain Project SGA1), and 785907 (Human Brain Project SGA2), and the Helmholtz Association portfolio theme SMHB and the Jülich Aachen Research Alliance (JARA). Simulations were performed using NOTUR resources.
Compliance with Ethical Standards
Conflict of interests
The authors declare that they have no conflict of interest.
References
Al-Mohy, A.H., & Higham, N.J. (2009). A new scaling and squaring algorithm for the matrix exponential. SIAM Journal on Matrix Analysis and Applications, 31, 970–989. https://doi.org/10.1137/09074721X.
Avermann, M., Tomm, C., Mateo, C., Gerstner, W., Petersen, C.C.H. (2012). Microcircuits of excitatory and inhibitory neurons in layer 2/3 of mouse barrel cortex. Journal of Neurophysiology, 107(11), 3116–3134. https://doi.org/10.1152/jn.00917.2011.
Blomquist, P., Devor, A., Indahl, U.G., Ulbert, I., Einevoll, G.T., Dale, A.M. (2009). Estimation of thalamocortical and intracortical network models from joint thalamic single-electrode and cortical laminar-electrode recordings in the rat barrel system. PLoS Computational Biology, 5(3), e1000328. https://doi.org/10.1371/journal.pcbi.1000328.
Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94(5), 3637–3642. https://doi.org/10.1152/jn.00686.2005.
Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8(3), 183–208.
Brunel, N., Chance, F.S., Fourcaud, N., Abbott, L.F. (2001). Effects of synaptic noise and filtering on the frequency response of spiking neurons. Physical Review Letters, 86(10), 2186–2189.
Burkitt, A.N. (2006a). A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics, 95, 1–19.
Burkitt, A.N. (2006b). A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. Biological Cybernetics, 95, 97–112.
Casti, A., Hayot, F., Xiao, Y., Kaplan, E. (2008). A simple model of retina-LGN transmission. Journal of Computational Neuroscience, 24(2), 235–252. https://doi.org/10.1007/s1082700700537.
Chance, F.S., Abbott, L.F., Reyes, A.D. (2002). Gain modulation from background synaptic input. Neuron, 35, 773–782.
Coombes, S. (2005). Waves, bumps, and patterns in neural field theories. Biological Cybernetics, 93, 91–108.
Ermentrout, B. (1998). Neural networks as spatio-temporal pattern-forming systems. Reports on Progress in Physics, 61, 353–430.
FitzHugh, R. (1961). Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1, 445–466.
Funke, K., & Wörgötter, F. (1997). On the significance of temporally structured activity in the dorsal lateral geniculate nucleus (LGN). Progress in Neurobiology, 53, 67–119.
Gerstein, G.L., & Mandelbrot, B. (1964). Random walk models for the spike activity of a single neuron. Biophysical Journal, 4, 41–68.
Gewaltig, M.O., & Diesmann, M. (2007). NEST (NEural Simulation Tool). Scholarpedia, 2(4), 1430.
Haider, B., Häusser, M., Carandini, M. (2013). Inhibition dominates sensory responses in the awake cortex. Nature, 493, 97–100.
Heiberg, T., Kriener, B., Tetzlaff, T., Casti, A., Einevoll, G.T., Plesser, H.E. (2013). Firing-rate models capture essential response dynamics of LGN relay cells. Journal of Computational Neuroscience, 35, 359–375. https://doi.org/10.1007/s1082701304566.
Helias, M., Kunkel, S., Masumoto, G., Igarashi, J., Eppler, J.M., Ishii, S., Fukai, T., Morrison, A., Diesmann, M. (2012). Supercomputers ready for use as discovery machines for neuroscience. Frontiers in Neuroinformatics, 6, 26. https://doi.org/10.3389/fninf.2012.
Higham, N.J. (2005). The scaling and squaring method for the matrix exponential revisited. SIAM Journal on Matrix Analysis and Applications, 26, 1179–1193.
Hodgkin, A.L., & Huxley, A.F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117, 500–544.
Ikegaya, Y., Sasaki, T., Ishikawa, D., Honma, N., Tao, K., Takahashi, N., Minamisawa, G., Ujita, S., Matsuki, N. (2013). Interpyramid spike transmission stabilizes the sparseness of recurrent network activity. Cerebral Cortex, 23(2), 293–304. https://doi.org/10.1093/cercor/bhs006.
Izhikevich, E.M. (2003a). Figure 1.m MATLAB script. http://www.izhikevich.org/publications/figure1.m, last accessed 18 Aug 2017.
Izhikevich, E.M. (2003b). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6), 1569–1572. https://doi.org/10.1109/TNN.2003.820440.
Izhikevich, E.M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15(5), 1063–1070. https://doi.org/10.1109/TNN.2004.832719.
Izhikevich, E.M. (2010). Hybrid spiking models. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 368(1930), 5061–5070. https://doi.org/10.1098/rsta.2010.0130.
Jansen, B.H., & Rit, V.G. (1995). Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biological Cybernetics, 73(4), 357–366.
Johannesma, P.I.M. (1968). Diffusion models of the stochastic activity of neurons. In Caianiello, E.R. (Ed.), Neural networks (pp. 116–144). Berlin: Springer.
Jolivet, R., Rauch, A., Lüscher, H.R., Gerstner, W. (2006). Predicting spike timing of neocortical pyramidal neurons by simple threshold models. Journal of Computational Neuroscience, 21(1), 35–49. https://doi.org/10.1007/s1082700670745.
Jolivet, R., Schürmann, F., Berger, T.K., Naud, R., Gerstner, W., Roth, A. (2008). The quantitative single-neuron modeling competition. Biological Cybernetics, 99(4–5), 417–426. https://doi.org/10.1007/s004220080261x.
Jones, E., Oliphant, T., Peterson, P., et al. (2001). SciPy: open source scientific tools for Python. http://www.scipy.org/, accessed 09 March 2015.
Kobayashi, R., Tsubo, Y., Shinomoto, S. (2009). Made-to-order spiking neuron model equipped with a multi-timescale adaptive threshold. Frontiers in Computational Neuroscience, 3, 9. https://doi.org/10.3389/neuro.10.009.2009.
Kunkel, S., Schmidt, M., Eppler, J.M., Plesser, H.E., Masumoto, G., Igarashi, J., Ishii, S., Fukai, T., Morrison, A., Diesmann, M., Helias, M. (2014). Spiking network simulation code for petascale computers. Frontiers in Neuroinformatics, 8, 78. https://doi.org/10.3389/fninf.2014.00078.
Lapicque, L. (1907). Considérations préalables sur la nature du phénomène par lequel l'électricité excite les nerfs. Journal de Physiologie et de Pathologie Générale, 9, 565–578.
Lefort, S., Tomm, C., Sarria, J.C.F., Petersen, C.C.H. (2009). The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex. Neuron, 61(2), 301–316. https://doi.org/10.1016/j.neuron.2008.12.020.
Mainen, Z.F., & Sejnowski, T.J. (1996). Influence of dendritic structure on firing pattern in model neocortical neurons. Nature, 382, 363–366.
Markram, H., Toledo-Rodriguez, M., Wang, Y., Gupta, A., Silberberg, G., Wu, C. (2004). Interneurons of the neocortical inhibitory system. Nature Reviews Neuroscience, 5, 793–807. https://doi.org/10.1038/nrn1519.
Moler, C. (2012). A balancing act for the matrix exponential. http://blogs.mathworks.com/cleve/2012/07/23/abalancingactforthematrixexponential/.
Morris, C., & Lecar, H. (1981). Voltage oscillations in the barnacle giant muscle fiber. Biophysical Journal, 35(1), 193–213. https://doi.org/10.1016/S00063495(81)847820.
Morrison, A., Straube, S., Plesser, H.E., Diesmann, M. (2007). Exact subthreshold integration with continuous spike times in discrete-time neural network simulations. Neural Computation, 19, 47–79.
Muller, E., Davison, A.P., Brizzi, T., Bruederle, D., Eppler, J.M., Kremkow, J., Pecevski, D., Perrinet, L., Schmuker, M., Yger, P. (2009). NeuralEnsemble.Org: Unifying neural simulators in Python to ease the model complexity bottleneck. In Frontiers in Neuroscience Conference Abstract: Neuroinformatics 2009. https://doi.org/10.3389/conf.neuro.11.2009.08.104.
Nordbø, Ø., Wyller, J., Einevoll, G.T. (2007). Neural network firing-rate models on integral form: effects of temporal coupling kernels on equilibrium-state stability. Biological Cybernetics, 97(3), 195–209. https://doi.org/10.1007/s004220070167z.
Nordlie, E., Tetzlaff, T., Einevoll, G.T. (2010). Rate dynamics of leaky integrate-and-fire neurons with strong synapses. Frontiers in Computational Neuroscience, 4, 149. https://doi.org/10.3389/fncom.2010.00149.
Østergaard, J., Kramer, M.A., Eden, U.T. (2018). Capturing spike variability in noisy Izhikevich neurons using point process generalized linear models. Neural Computation, 30(1), 125–148. https://doi.org/10.1162/neco_a_01030.
Ostojic, S., & Brunel, N. (2011). From spiking neuron models to linear-nonlinear models. PLoS Computational Biology, 7(1), e1001056. https://doi.org/10.1371/journal.pcbi.1001056.
Paninski, L., Pillow, J.W., Simoncelli, E.P. (2004). Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Computation, 16, 2533–2561.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E. (2011). Scikit-learn: machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.
Petersen, C., & Crochet, S. (2013). Synaptic computation and sensory processing in neocortical layer 2/3. Neuron, 78, 28–48.
Pillow, J.W., Paninski, L., Uzzell, V.J., Simoncelli, E.P., Chichilnisky, E.J. (2005). Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. Journal of Neuroscience, 25(47), 11003–11013. https://doi.org/10.1523/JNEUROSCI.330505.2005.
Plesser, H.E., & Diesmann, M. (2009). Simplicity and efficiency of integrate-and-fire neuron models. Neural Computation, 21, 353–359. https://doi.org/10.1162/neco.2008.0308731.
Plesser, H.E., Diesmann, M., Gewaltig, M.O., Morrison, A. (2013). NEST: the neural simulation tool. In Jaeger, D., & Jung, R. (Eds.), Encyclopedia of Computational Neuroscience. Berlin: Springer. https://doi.org/10.1007/SpringerReference_348323 (to appear in print).
Richardson, M.J.E. (2007). Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive. Physical Review E, 76(021919), 1–15.
Richardson, M.J.E., & Swarbrick, R. (2010). Firing-rate response of a neuron receiving excitatory and inhibitory synaptic shot noise. Physical Review Letters, 105(17), 178102.
Rotter, S., & Diesmann, M. (1999). Exact digital simulation of time-invariant linear systems with applications to neuronal modeling. Biological Cybernetics, 81, 381–402.
Roxin, A. (2011). The role of degree distribution in shaping the dynamics in networks of sparsely connected spiking neurons. Frontiers in Computational Neuroscience, 5, 8. https://doi.org/10.3389/fncom.2011.00008.
Sakata, S., & Harris, K.D. (2012). Laminar-dependent effects of cortical state on auditory cortical spontaneous activity. Frontiers in Neural Circuits, 6(109), 1–10.
Shimazaki, H., & Shinomoto, S. (2010). Kernel bandwidth optimization in spike rate estimation. Journal of Computational Neuroscience, 29(1–2), 171–182. https://doi.org/10.1007/s1082700901804.
Song, S., Sjöström, P., Reigl, M., Nelson, S., Chklovskii, D. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology, 3(3), e68.
Stein, R.B. (1965). A theoretical analysis of neuronal variability. Biophysical Journal, 5, 173–194.
Tuckwell, H.C. (1988). Introduction to theoretical neurobiology, Vol. 1. Cambridge: Cambridge University Press.
Weber, A.I., & Pillow, J.W. (2017). Capturing the dynamical repertoire of single neurons with generalized linear models. Neural Computation, 29(12), 3260–3289. https://doi.org/10.1162/neco_a_01021.
Wilson, H.R., & Cowan, J.D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12(1), 1–24. https://doi.org/10.1016/S00063495(72)860685.
Wolfram, S. (1999). The Mathematica book, 4th edn. Cambridge: Wolfram Media/Cambridge University Press.
Yamauchi, S., Kim, H., Shinomoto, S. (2011). Elemental spiking neuron model for reproducing diverse firing patterns and predicting precise firing times. Frontiers in Computational Neuroscience, 5, 42. https://doi.org/10.3389/fncom.2011.00042.
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.