Journal of Computational Neuroscience, Volume 35, Issue 3, pp 359–375

Firing-rate models capture essential response dynamics of LGN relay cells

  • Thomas Heiberg
  • Birgit Kriener
  • Tom Tetzlaff
  • Alex Casti
  • Gaute T. Einevoll
  • Hans E. Plesser


Firing-rate models provide a practical tool for studying signal processing in the early visual system, permitting more thorough mathematical analysis than spike-based models. We show here that essential response properties of relay cells in the lateral geniculate nucleus (LGN) can be captured by surprisingly simple firing-rate models consisting of a low-pass filter and a nonlinear activation function. The starting points for our analysis are two spiking neuron models based on experimental data: a spike-response model fitted to data from macaque (Carandini et al. J. Vis., 7(14):20, 1–11, 2007), and a model with conductance-based synapses and afterhyperpolarizing currents fitted to data from cat (Casti et al. J. Comput. Neurosci., 24(2), 235–252, 2008). We obtained the nonlinear activation function by stimulating the model neurons with stationary stochastic spike trains, while we characterized the linear filter by fitting a low-pass filter to responses to sinusoidally modulated stochastic spike trains. To account for the non-Poisson nature of retinal spike trains, we performed all analyses with spike trains with higher-order gamma statistics in addition to Poissonian spike trains. Interestingly, the properties of the low-pass filter depend only on the average input rate, but not on the modulation depth of sinusoidally modulated input. Thus, the response properties of our model are fully specified by just three parameters (low-frequency gain, cutoff frequency, and delay) for a given mean input rate and input regularity. This simple firing-rate model reproduces the response of spiking neurons to a step in input rate very well for Poissonian as well as for non-Poissonian input. We also found that the cutoff frequencies, and thus the filter time constants, of the rate-based model are unrelated to the membrane time constants of the underlying spiking models, in agreement with similar observations for simpler models.


Keywords: LGN · Retina · Visual system · Rate model · Linear-nonlinear model

1 Introduction

The thalamus is the central gateway for information passing from our sensory organs to cortex (Sherman and Guillery 2001). In particular, relay cells in the lateral geniculate nucleus (LGN) receive visual signals from retinal ganglion cells and transmit processed information to the primary visual cortex. These first stages of the visual system have been studied extensively. Since Rodieck (1965) introduced the difference-of-Gaussians (DOG) model for the spatial receptive field of retinal ganglion cells, most modeling of the response properties of cells in the early visual system has been descriptive in the sense that the main purpose has been to summarize experimental data compactly in a mathematical form.

Various stimuli, including random white noise, flashing spots, and drifting gratings, have been applied to obtain receptive-field models, and numerous spatiotemporal receptive-field filters have been suggested (see Ch. 2 in Dayan and Abbott 2001). Generalized linear models (GLMs) are a class of simplified descriptive models often used to describe neurons in the early stages of sensory processing (Pillow et al. 2005, 2008) or to characterize neural responses with white-noise stimuli (Chichilnisky 2001). Mechanistic models, on the other hand, aim to account for observed neural properties on the basis of known neural physiology and anatomy. Mechanistic models of the early visual system exist both in the form of spiking neuron models (e.g. Casti et al. 2008; Carandini et al. 2007; Kirkland and Gerstein 1998; Köhn and Wörgötter 1996) and firing-rate models (Einevoll and Heggelund 2000; Einevoll and Plesser 2002; Hayot and Tranchina 2001; Yousif and Denham 2007).

The main motivation for using firing-rate models rather than spiking neuron models is to reduce the dimensionality and complexity of the microscopic dynamics in order to allow for analytical tractability, efficient simulation, and intuitive understanding. The majority of rate-based neural population models have been justified by the diffusion approximation (see references in Nordlie et al. 2010), assuming a large number of tiny incoming synaptic inputs. This approach is valid for neurons that receive input spikes at a high rate through weak synapses (Johannesma 1968), but synapses between retinal ganglion cells and LGN relay cells are often much stronger and even single retinal spikes have been reported to initiate action potentials in the thalamic targets (Cleland et al. 1971; Sirovich 2008). In the present study, we investigate how firing-rate models perform in the context of LGN relay cells.

Nordlie et al. (2010) have recently investigated the firing-rate response properties of leaky integrate-and-fire (LIF) neurons receiving current input through strong synapses. They demonstrated that neuronal responses to sinusoidally modulated inhomogeneous Poisson processes could be described well by a combination of a linear first-order low-pass filter with a nonlinear activation function. This linear-nonlinear firing-rate model accurately predicted the population response for a variety of non-sinusoidal test stimuli.

In the present study, we use the same approach to investigate whether linear-nonlinear firing-rate models can describe the firing rate properties of LGN relay neuron models fitted to experimental data. In particular, we investigate spiking models with conductance-based synaptic currents and after-hyperpolarizing currents (Casti et al. 2008) as well as more abstract spike-response models (Carandini et al. 2007; Gerstner and Kistler 2002).

Moreover, we study the effect of input spike train regularity on the rate model, parameterized by the shape parameter of the gamma process. More regular input than Poisson, as observed in actual recordings (Troy and Robson 1992; Casti et al. 2008), increases the linearity of the activation function for high input rates, while low rates effectively become rectified.

In the Methods section, we introduce the spiking LGN neuron models along with a description of stimulation and response characteristics. We further summarize our simulation setup and detail how we extract linear-nonlinear firing-rate models from the results of simulations with spiking neuron models. In the Results section, we first show the results from stationary (unmodulated) stimulation to illustrate the shape of the activation function. Results from sinusoidal stimulation are presented along with optimized low-pass model filters that illustrate the high quality of the fits. We finally test the performance of the extracted rate models by comparing the actual responses to novel stimuli with the responses predicted by the firing-rate models.

2 Methods

2.1 Spiking models for LGN cells

The rate models of LGN cells investigated in this study are based on spiking neuron models of LGN cells proposed by Casti et al. (2008) and Carandini et al. (2007). We refer to these models as the Casti and Carandini models.

Casti et al. (2008) and Carandini et al. (2007) fitted their models to experimental data obtained from LGN relay neurons in cat and macaque, respectively. In both studies, retinal input and LGN output spike trains of a number of relay cells were recorded using a single electrode. This is possible because signal transmission across the strong retino-geniculate synapses can be recorded as S potentials using extracellular electrodes (Kaplan and Shapley 1984). The models of specific cat and macaque relay cells thus obtained are the starting point of our study.

Both model neurons receive input only through a single, excitatory synapse. Casti et al. (2008) initially included “locked inhibition” (Blitz and Regehr 2005), i.e., inhibition following excitation with a fixed delay, but observed that their model could fit the experimental data equally well with locked inhibition removed. They thus concluded that locked inhibition was not relevant under the stimulus regime studied and fixed the inhibitory conductance to zero. Carandini et al. (2007) designed their model with excitatory input only. In both cases, the resulting model neurons transform a single input spike train \(\{s_{j}\}\) arriving through a single synapse into an output spike train \(\{t_{k}\}\). When interpreting results later, one should keep in mind that the model by Casti et al. (2008) matched experimental data best for LGN relay cells with moderate-to-high transfer ratios (Carandini et al. (2007) do not provide transfer ratios).

The models are summarized in Table 1 following the template suggested by Nordlie et al. (2009).
Table 1

Overview of the neuron models. See Table 2 for parameters.

A. Model summary

  Neuron model: Casti model and Carandini model
  Input model:  Spike trains realised by inhomogeneous Poisson and gamma point processes

B. Casti model

  Type: Leaky integrate-and-fire, conductance-based synapses, afterhyperpolarization (AHP)
  Subthreshold dynamics:
    \(C\frac{\mathrm{d}V}{\mathrm{d}t} = -G_{L}(V-V_{L})-G_{E}(t)(V-V_{E})-G_{A}(t)(V-V_{A})\)
    \(G_{E}(t) = \sum\limits_{\{s_{j}\}} g_{E}(t-s_{j})\,\Theta(t-s_{j})\)
    \(G_{A}(t) = \sum\limits_{\{t_{k}\}} g_{A}(t-t_{k})\,\Theta(t-t_{k})\)
    \(g_{X}(t) = \bar{g}_{X}\left(\frac{t}{\tau_{X}}\right) e^{-\frac{t-\tau_{X}}{\tau_{X}}}\)
  Numerics: Runge-Kutta-Fehlberg 4/5 integration with adaptive step size
  Spiking: Spike emission in time step of threshold crossing (\(V(t_{k}) \geq V_{\text{th}}\)); precise spike time found by linear interpolation
  Parameters: See Table 2A, B

C. Carandini model

  Type: Spike-response model
  Subthreshold dynamics:
    \(V(t) = \sum\limits_{\{s_{j}\}} V_{\text{syn}}(t-s_{j}) + \sum\limits_{\{t_{k}\}} V_{\text{spike}}(t-t_{k}) + n(t)\)
    \(V_{\text{syn}}(t) = V_{\text{EPSP}}\,\frac{t}{\tau_{\text{EPSP}}}\, e^{-\frac{t-\tau_{\text{EPSP}}}{\tau_{\text{EPSP}}}}\,\Theta(t)\)
    \(V_{\text{spike}}(t) = \delta(t) - V_{\text{reset}}\, e^{-t/\tau_{\text{reset}}}\,\Theta(t)\)
  Numerics: Exact integration (Rotter and Diesmann 1999) with temporal resolution dt
  Spiking: Spike emission at times \(t_{k} \in \{n\,dt \,|\, n \in \mathbb{N}\}\) with \(V(t_{k}) \geq V_{\text{th}}\)
  Parameters: See Table 2C

D. Input model

  Type: Spike train generated by an inhomogeneous Poisson/gamma point process
  Rate: \(a(t) = a_{0} + a_{1}\cos(2\pi f_{\text{stim}} t)\)
  Parameters: See Table 2D

2.1.1 Casti model

The Casti model is a modified leaky integrate-and-fire (LIF) model with conductance-based excitatory and inhibitory synapses; for the sake of brevity, we have removed the (unused) inhibitory synapse in our sketch of the model.

The sub-threshold membrane potential V(t) of the model neuron is governed by
$$\begin{array}{rll} C\frac{\mathrm{d}V}{\mathrm{d}t} &=& -G_{L}(V-V_{L})-G_{E}(t)(V-V_{E}) \\ &&-G_{A}(t)(V-V_{A}), \end{array} $$
$$ G_{E}(t) = \sum\limits_{\{s_{j}\}} g_{E}(t-s_{j}) \Theta(t-s_{j}), $$
$$ G_{A}(t) = \sum\limits_{\{t_{k}\}} g_{A}(t-t_{k}) \Theta(t-t_{k}) , $$
$$ g_{X}(t) = \bar{g}_{X} \left(\frac{t}{\tau_{X}}\right)e^{-\frac{t-\tau_{X}}{\tau_{X}}}. $$
Here, C is the membrane capacitance, \(G_{L}\) the persistent leakage conductance, \(G_{E}(t)\) the total excitatory synaptic conductance evoked by the incoming spike train \(\{s_{j}\}\), and \(G_{A}(t)\) the total after-hyperpolarizing (AHP) conductance triggered by the outgoing spike train \(\{t_{k}\}\). The associated reversal potentials are \(V_{L}\), \(V_{E}\), and \(V_{A}\). The time course \(g_{X}(t)\) of an individual conductance activation is modeled as an α-function with maximum \(\bar{g}_{X}\) at \(t=\tau_{X}\). \(\Theta(t)\) is the Heaviside step function.

A spike is fired when the membrane potential reaches the fixed threshold V(t) = V th from below. Instead of a voltage reset immediately after a spike, a transient activation of the AHP conductance G A (t) models the reset mechanism and subsequent refractory period. Modeling reset and refractoriness in this way ensures that the membrane potential V(t) remains continuous upon threshold crossing. Because the membrane potential is not reset, it may remain above threshold for some time after a spike.
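To make these dynamics concrete, the model of Eqs. (1)–(4) can be sketched with a simple forward-Euler integrator. This is a sketch only: the original study used Runge-Kutta-Fehlberg 4/5 with adaptive step size and interpolated spike times, and the reversal potentials and threshold below are illustrative placeholder values (they are not listed in Table 2); the remaining defaults are the fitted values for Neuron 1.

```python
import numpy as np

def alpha_conductance(t, tau, g_peak):
    """Alpha-shaped conductance, Eq. (4): peak g_peak at t = tau."""
    t = np.maximum(np.asarray(t, dtype=float), 0.0)  # enforces Theta(t)
    return g_peak * (t / tau) * np.exp(-(t - tau) / tau)

def simulate_casti(s_in, T, dt=1e-5, C=1e-9, G_L=0.1e-6,
                   V_L=-65e-3, V_E=0.0, V_A=-90e-3, V_th=-50e-3,  # placeholders
                   tau_E=1e-3, tau_A=0.47e-3, g_E=0.16e-6, g_A=0.42e-6):
    """Forward-Euler sketch of Eqs. (1)-(4); spikes on upward threshold
    crossings, reset via the AHP conductance triggered by output spikes."""
    s_in = np.asarray(s_in, dtype=float)
    steps = int(round(T / dt))
    V = np.empty(steps)
    V[0] = V_L
    out = []
    above = False
    for i in range(1, steps):
        t = (i - 1) * dt
        G_E = alpha_conductance(t - s_in, tau_E, g_E).sum()
        G_A = alpha_conductance(t - np.asarray(out), tau_A, g_A).sum() if out else 0.0
        dV = (-G_L * (V[i-1] - V_L) - G_E * (V[i-1] - V_E)
              - G_A * (V[i-1] - V_A)) / C
        V[i] = V[i-1] + dt * dV
        crossed = V[i] >= V_th
        if crossed and not above:        # threshold crossing from below
            out.append(i * dt)
        above = crossed
    return np.array(out), V
```

Note that, as in the original model, the membrane potential is never reset; the AHP conductance pulls it back toward \(V_A\) after each output spike.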

Figure 1 illustrates the dynamics of the model. From resting potential, this neuron comes close to threshold as a result of one incoming retinal spike. Excitability varies between the neurons, but a single incoming spike results in an increase in membrane potential by more than 50 % of the difference between resting potential and threshold for all the neurons in the study. Hence, we are clearly outside the diffusive regime.
Fig. 1

Membrane potential \(V(t)\) (solid line) for the Casti model with input spikes at 50 ms, 200 ms, and 220 ms. The third input spike evokes an output spike at 221.8 ms, marked by a cross. The dotted horizontal line marks the threshold \(V_{\text {th}}\)

Casti et al. (2008) recorded from two X-On and eight X-Off cells from six anesthetized adult cats. By recording S potentials along with spikes, input \(\{s_{j}\}\) to and output \(\{t_{k}\}\) from the cells could be recorded simultaneously (Kaplan and Shapley 1984). Although S potentials do not represent the entire input to an LGN relay cell, it is known that LGN relay cells do not fire if their retinal ganglion cell input is silenced. It is therefore reasonable to assume that the S potentials are the dominant monosynaptic excitatory input. The cells were stimulated with temporally modulated, spatially homogeneous circular spots of various diameters.

In Casti et al. (2008), the model specified by Eqs. (1)–(4) was fitted as follows for each cell recorded: Most model parameters were fixed to plausible values, cf. Table 2A. The model neuron was then stimulated with the recorded S potential trains, and the response of the model neuron was compared to the experimentally recorded response using a cost function sensitive to spike-timing mismatches. They then used the Simplex algorithm (Nelder and Mead 1965) to find the values of \((\tau , \tau _{A}, \bar {g}_{E}, \bar {g}_{A})\) that minimized the cost function. Here, \(\tau =C/G_{L}\) is the passive membrane time constant of the model neuron. For details of the fitting procedure, see Casti et al. (2008).
Table 2

A. Casti model, common parameters

  τ_E     Excitatory synaptic time constant    1 ms
  C       Membrane capacitance                 1 nF
  G_L     Persistent leak conductance          0.1 μS
  V_L     Resting potential
  V_th    Spike threshold
  V_E     Excitatory reversal potential
  V_A     AHP reversal potential

B. Casti model, specific parameters

                                  Neuron 1   Neuron 1*   Neuron 6   Neuron 8
  τ       Membrane time constant  17.8 ms    11.7 ms     16.3 ms    7.2 ms
  τ_A     AHP time constant       0.47 ms    0.60 ms     1.00 ms    0.26 ms
  ḡ_E     Exc. conductance        0.16 μS    0.11 μS     0.08 μS    0.07 μS
  ḡ_A     AHP conductance         0.42 μS    0.56 μS     0.60 μS    0.44 μS

C. Carandini model

  τ_EPSP    Time constant for excitatory PSPs   6.0 ms
  τ_reset   Time constant of AHP potential      12.0 ms
  V_EPSP    Amplitude of excitatory PSPs
  V_reset   Amplitude of AHP potential
  V_th      Spike threshold
  V_noise   Noise amplitude

D. Input parameters

  a_0      Input rate                     {0, 5, …, 160} s⁻¹
  a_1      Input amplitude                {0, 20, …, 100} s⁻¹
  f_stim   Input frequency                ∼10^{0.0, 0.1, …, 3.0} Hz
  Γ        Input regularity (Γ order)     {1, 3, 6}

E. Simulation parameters

  dt   Time resolution    0.1 ms
  T    Simulation time
A: Fixed parameters common to all models from Casti et al. (2008).

B: Optimal parameter sets for neurons no. 1, 6, and 8 from Casti et al. (2008). Parameters were obtained under stimulation with flashing small spots, except for Neuron 1*, which was obtained with a full-field stimulus.

C: Optimal parameter set for neuron 122R4-5 from Carandini et al. (2007); potentials are in arbitrary units.

D: Input parameters used to test the model.

E: Simulation parameters. Data in A–C are from Casti et al. (2008, Table 1, 2) and Carandini et al. (2007, Table 1), respectively.

Table 2B shows four sets of optimized parameter values, obtained in Casti et al. (2008) by fitting the responses of three neurons, one of which was fit for two different flashing-spot sizes. These four cases span the range of response types in the data reported by Casti et al. (2008), so we will use them for illustration in the remainder of this study. Complete data for all 14 optimized parameter sets from Casti et al. (2008) are given in the supplementary material (Supplementary Table 1).

We implemented this model neuron in the NEST Simulator (Gewaltig and Diesmann 2007) as model iaf_cxhk_2008 using a Runge-Kutta-Fehlberg 4/5 ODE solver with adaptive step-size control from the GNU Scientific Library (Galassi et al. 2001). Minor modifications to the original model in Casti et al. (2008) are described in the supplementary material.

2.1.2 Carandini model

The Carandini model is a spike-response model (Gerstner and Kistler 2002), i.e., the membrane potential is given as a sum of stereotyped events:
$$ V(t) = \sum\limits_{\{s_{j}\}}{V}_{\text{syn}}(t-s_{j}) + \sum\limits_{\{t_{k}\}} {V}_{\text{spike}}(t-t_{k}) + n(t) $$
$${V}_{\text{syn}}(t) = {V}_{\text{EPSP}} \, \frac{t}{\tau_{\text{EPSP}}} e^{-\frac{t-{\tau}_{\text{EPSP}}}{\tau_{\text{EPSP}}}} \Theta(t), $$
$${V}_{\text{spike}}(t) = \delta(t) - {V}_{\text{reset}}\, e^{- t/{\tau}_{\text{reset}}} \Theta(t)\;. $$
\({V}_{\text {syn}}(t)\) is the postsynaptic potential evoked by an incoming spike, with maximal amplitude \({V}_{\text {EPSP}}\) at time \({\tau }_{\text {EPSP}}\), and \({V}_{\text {spike}}(t)\) is the waveform describing a spike and the subsequent after-hyperpolarization with initial amplitude \({V}_{\text {reset}}\) and decay time constant \({\tau }_{\text {reset}}\). As before, \(\{s_{j}\}\) and \(\{t_{k}\}\) are the incoming and outgoing spike trains, respectively. \(n(t)\) is Gaussian-distributed white noise. The model produces a spike when the membrane potential exceeds the spike threshold, \(V(t) > {V}_{\text {th}}\).
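To make the spike-response dynamics concrete, the following sketch evaluates the model equations above on a time grid. Potentials are in arbitrary units, and the default amplitudes are illustrative placeholders rather than the fitted values of Table 2C; the δ-spike itself is only recorded, not added to the trace.

```python
import numpy as np

def simulate_carandini(s_in, T, dt=1e-4, V_EPSP=1.0, tau_EPSP=6e-3,
                       V_reset=1.2, tau_reset=12e-3, V_th=1.0,
                       V_noise=0.0, rng=None):
    """Sketch of the spike-response model: membrane potential as a sum
    of alpha-shaped EPSPs, AHP waveforms, and optional Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    grid = np.arange(0.0, T, dt)
    V = np.zeros(grid.size)
    for s in s_in:                       # alpha-shaped EPSP per input spike
        t = np.maximum(grid - s, 0.0)    # enforces Theta(t)
        V += V_EPSP * (t / tau_EPSP) * np.exp(-(t - tau_EPSP) / tau_EPSP)
    if V_noise > 0.0:
        V += V_noise * rng.standard_normal(grid.size)
    out = []
    for i in range(grid.size):
        if V[i] > V_th:                  # spike, then subtract AHP waveform
            out.append(grid[i])
            t = grid[i:] - grid[i]
            V[i:] -= V_reset * np.exp(-t / tau_reset)
    return np.array(out), V
```

Because each output spike subtracts an exponentially decaying hyperpolarization from the remaining trace, refractoriness emerges from the summed waveforms, exactly as in the model equations.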
The dynamics of the model are illustrated in Fig. 2. For a neuron at rest, a single incoming spike increases the membrane potential by more than 50 % of the distance between resting potential and threshold for all neurons in the Carandini study. As with the Casti neurons, we operate outside the diffusive regime.
Fig. 2

Membrane potential \(V(t)\) (solid line) for the Carandini model with input spikes at 50 ms, 200 ms, and 220 ms. The third input spike evokes an output spike at 223.4 ms, marked by a cross. The dotted horizontal line marks the threshold \(V_{\text {th}}\)

Carandini et al. (2007) fitted this model to nine cells (seven On, two Off; three P, four M, two unclassified) recorded from six adult macaques. Both input and output spike trains ({s j }, {t k }) were recorded. The cells were stimulated with spatially homogeneous light spots restricted to the receptive field center and varying continuously in time.

Optimal parameter sets for the four free parameters of the model (\({\tau }_{\text {EPSP}},{V}_{\text {EPSP}}, {\tau }_{\text {reset}}, {V}_{\text {reset}}\)) were obtained by minimizing the difference between the low-pass filtered output spike trains recorded from experiment and simulation. Minimization was performed by a custom procedure described in Carandini et al. (2007). The optimal noise level (\({V}_{\text {noise}}\)) was obtained by simulating the model response at a number of amplitudes for the noise term \(n(t)\) and by finding the noise level that yielded the best fit. Table 2C shows the optimal parameter set for one neuron from the Carandini study. Complete data for all nine neurons from Carandini et al. (2007) is given in the supplementary material (Supplementary Table 2).

We implemented this model neuron in the NEST Simulator (Gewaltig and Diesmann 2007) as model iaf_chs_2007 using exact integration (Rotter and Diesmann 1999; Plesser and Diesmann 2009). Minor modifications to the original model are described in the supplementary material.

2.2 Characterization of response properties

2.2.1 Stimulation

We stimulated model neurons with sinusoidally modulated inhomogeneous Poisson or gamma process spike trains, as illustrated in Fig. 3. Specifically, we considered spike trains that are realizations of point processes with rate (or intensity)
$$ a(t) = a_0 + a_1 \sin 2\pi{f}_{\text{stim}} t. $$
Mean rates were in the range \(0 < a_0 \leq 160\,\text {s}^{-1}\), while we limited the modulation depth to \(0 \leq a_1 \leq a_0\) to avoid rectification issues. Modulation frequencies \({f}_{\text {stim}}\) varied from 0 Hz to 1 kHz.
Fig. 3

A model neuron is driven by a spike train with sinusoidally modulated rate \(a(t)\) with mean \(a_0\), modulation depth \(a_1\), and frequency \({f}_{\text {stim}}\), cf. Eq. (8). As a first-order approximation, the output spike train of the neuron can be characterized by the sinusoidally modulated response firing rate \(r(t)\) with mean \(r_0\), amplitude \(r_1\), frequency \({f}_{\text {stim}}\) and phase ϕ, cf. Eq. (10). Adapted from Nordlie et al. (2010), Fig. 1

Input spike times \(\{s_1, s_2, \dots \}\) were chosen such that the time-rescaled spike trains \(\{u_1, u_2, \dots \}=\{u_{j}|u_{j} = A(s_{j})\}\) form homogeneous Poisson or gamma processes of the desired order (Brown et al. 2002). Here,
$$ A(t) = \int_0^{t} a(s) \mathrm{d}s = a_0 t - \frac{a_1}{2\pi{f}_{\text{stim}}}\cos 2\pi{f}_{\text{stim}} t\; $$
is the cumulated rate (cumulated intensity) of the process. For brevity, we occasionally refer to Poisson processes as gamma processes with order \(\Gamma =1\).
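The time-rescaling construction can be sketched in a few lines. The grid-based numerical inversion of A and the additive constant that makes A(0) = 0 are implementation choices of this sketch, not features of the published generator.

```python
import numpy as np

def modulated_gamma_train(a0, a1, f_stim, order, t_max, rng):
    """Sketch of spike-train generation by time rescaling: intervals of
    a unit-rate gamma process of the given order are mapped through the
    inverse cumulated rate A^{-1}."""
    two_pi_f = 2.0 * np.pi * f_stim
    # Cumulated rate, shifted so that A(0) = 0; A is strictly increasing
    # because a1 <= a0.
    A = lambda t: a0 * t - (a1 / two_pi_f) * (np.cos(two_pi_f * t) - 1.0)
    # Unit-rate gamma process in rescaled time: Gamma(order, 1/order)
    # intervals have mean 1.
    n_max = int(2 * a0 * t_max) + 100
    u = np.cumsum(rng.gamma(order, 1.0 / order, size=n_max))
    u = u[u < A(t_max)]
    # Invert A numerically on a fine grid.
    grid = np.linspace(0.0, t_max, int(t_max * 1e4) + 1)
    return np.interp(u, A(grid), grid)
```

For order 1 this reduces to an inhomogeneous Poisson process, since the rescaled intervals are then exponentially distributed.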

The sinusoidal_gamma_generator model in the NEST Simulator (Gewaltig and Diesmann 2007) generates spike trains using this algorithm.

2.2.2 Response characteristics

We characterized the response of the neurons by a sinusoidal rate model
$$\begin{array}{rll} r(t) &=& r_0 + r_1 \cos(2\pi {f}_{\text{stim}} t + \phi_1) \\ && + \sum\limits_{m=2}^{\infty} r_{m} \cos(2 m \pi {f}_{\text{stim}} t + \phi_{m}) \;, \end{array} $$
as illustrated in Fig. 3. For a purely linear response, \(r_0\) represents the background firing rate of the neuron, \(r_1\) the stimulus response amplitude (with phase shift \(\phi _1\)), and we expect \(r_{m}=0\) for all higher harmonics (\(m\geq 2\)).
We will quantify the linearity of the response to periodic stimuli using Fourier analysis. Spectra of spike trains have a continuous component due to the jitter in spike times. For Poisson spike trains this spectrum is perfectly flat. For spike trains including refractory effects (such as trains with gamma ISI-statistics for \(\Gamma >1\)), the spectra have a dip near the origin (Franklin and Bair 1995). To test whether the neuronal response is indeed linear, we compare the Fourier amplitudes at higher harmonics \(r_{m}~(m\geq 2)\) with the continuous background component B, as illustrated in Fig. 4.
Fig. 4

Spectrum of response amplitudes \(\bar {r}(f)\) obtained from \(N=50\) trials of \(T=100\) s duration, recording from a Casti 1 model neuron stimulated by a sinusoidally modulated gamma process (\(a_0=40\,\text {s}^{-1}\), \(a_1=10\,\text {s}^{-1}\), \({f}_{\text {stim}}=10\) Hz, \(\Gamma =3\)); for this figure, \(\Delta f=0.1\) Hz and \({f}_{\text {max}}=45\) Hz. The horizontal solid line is the estimated background B, while the dashed line marks the 99 % confidence limit for signals exceeding the background, cf Eq. (23). Thin dotted lines mark the harmonics. This spectrum shows no significant power at the second, third or fourth harmonic

Estimates of the Fourier amplitudes \(r_{m}\), phases \(\phi _{m}\) and continuous background component B were obtained as follows: We recorded output spike trains \(\{t_{k}^{(n)}\}\) for \(n=1, \dots , N\) trials of duration T, with temporal resolution \(dt=0.1\) ms. We then computed per-trial spectra
$$ S^{(n)}(f) = \sum\limits_{t\in\{t_{k}^{(n)}\}} e^{-i2\pi f t} $$
at frequencies \(f=j\Delta f\) chosen such that the stimulation frequency \({f}_{\text {stim}}\) is an integer multiple of \(\Delta f\). We thus obtained per-trial Fourier amplitudes
$$ r^{(n)}(f) =\left\{\begin{array}{ll} {|{S^{(n)}(0)}|}/{T} & f=0 \\ {2|{S^{(n)}(f)}|}/{T} & f> 0\\ \end{array}\right. $$
and phases
$$ \phi^{(n)}(f) = \arg S^{(n)}(f). $$
The factor 2 in the amplitudes for \(f>0\) accounts for the power at negative frequencies. We averaged across trials to obtain estimates of the true Fourier amplitudes and their standard deviations
$$ \bar{r}(f) = \frac{1}{N} \sum\limits_{n=1}^{N} r^{(n)}(f) $$
$$ \sigma_{r}(f) = \sqrt{\frac{1}{N-1} \sum\limits_{n=1}^{N} \left(r^{(n)}(f)-\bar{r}(f)\right)^{2}}. $$
Phases were averaged on the unit circle (Goldberg and Brown 1969)
$$ \bar{\phi}(f) = \arg \sum\limits_{n=1}^{N} e^{i\phi^{(n)}(f)}. $$
Estimates of the response amplitudes at the harmonics are thus given by
$$ \bar{r}_{m} = \bar{r}(m{f}_{\text{stim}}) $$
and correspondingly for the standard deviations \(\sigma_{m}\) and phases \(\bar {\phi }_{m}\).
In estimating the amplitude B of the continuous background of the spectrum, we exploited the fact that the spectrum, excluding the harmonics, is essentially flat. Instead of estimating the background at each harmonic by a linear fit to \(\bar {r}(f)\) in the vicinity of each harmonic, we thus simply averaged across the entire spectrum, excluding the harmonics, and obtained
$$ B = \frac{1}{|{F_{B}|}}\sum\limits_{f\in F_{B}} \bar{r}(f), $$
$$ F_{B} = \left\{j\Delta f | 0< j\Delta f < {f}_{\text{max}} \wedge j\Delta f \neq m{f}_{\text{stim}} \forall m\in \mathbb{N}\right\}. $$
Here \({f}_{\text {max}}\) is the upper limit of the spectrum we computed; unless otherwise noted, we used \(\Delta f=0.1{f}_{\text {stim}}\) and \({f}_{\text {max}}=10.5{f}_{\text {stim}}\). The standard deviation of B is then given by
$$ \sigma_{B} = \sqrt{\frac{1}{|{F_{B}}|}\sum\limits_{f\in F_{B}}\sigma^{2}_{r}(f)}. $$
A higher harmonic (\(m\geq 2\)) carries significant signal power if the mean response amplitude \(\bar {r}_{m}\) at the harmonic exceeds the mean background amplitude B in a statistically significant way. A one-sided z-test with test statistic
$$ z = {\frac{\bar{r}_{m}-B}{\Sigma}},$$
$$\Sigma = \sqrt{\frac{ \min_{m\geq 2} \sigma_{m}^{2}}{N} +\frac{\sigma_{B}^{2}}{N|{F_{B}}|}}$$
suffices to test for significance, because we collect data across N > 30 trials (Walpole and Myers 1993, Ch. 8.5). This statistic combines the standard deviations of the harmonics (one data point from each of N trials) and of the background (\(|F_B|\) data points from each of N trials). To obtain a test that can be applied to all higher harmonics and is sensitive to nonlinearities, we use the smallest \(\sigma_m\) across all higher harmonics. This minimizes Σ and thus maximizes z. As a consequence, the test may indicate significant power at a higher harmonic even if there is none, but we consider such false positives less problematic than the false negatives that could occur if we, e.g., chose the largest \(\sigma_m\) in our definition of Σ.
Then, \(\bar {r}_{m} > B\) with 99 % confidence if z > 2.34 or, equivalently, if
$$ \bar{r}_{m} > B + 2.34\Sigma. $$
We will use this criterion to identify significant nonlinearities in model responses.
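The complete test pipeline, from recorded spike trains to z-scores for the higher harmonics, can be sketched as follows (the frequency grid and harmonic bookkeeping are implementation choices of this sketch):

```python
import numpy as np

def harmonic_z_scores(trains, f_stim, T, m_max=4, df_frac=0.1, f_max_frac=10.5):
    """z-scores for the higher harmonics m = 2..m_max, computed from a
    list of output spike-time arrays (one per trial)."""
    df = df_frac * f_stim
    freqs = np.arange(df, f_max_frac * f_stim, df)
    N = len(trains)
    r = np.empty((N, freqs.size))
    for n, spikes in enumerate(trains):
        # per-trial spectrum S^(n)(f), Eq. (11), and amplitudes, Eq. (12)
        S = np.exp(-2j * np.pi * np.outer(freqs, np.asarray(spikes))).sum(axis=1)
        r[n] = 2.0 * np.abs(S) / T
    r_mean = r.mean(axis=0)                  # Eq. (14)
    r_std = r.std(axis=0, ddof=1)            # Eq. (15)
    ratio = freqs / f_stim
    harm = np.isclose(ratio, np.round(ratio))
    B = r_mean[~harm].mean()                 # flat background, Eq. (18)
    sigma_B2 = (r_std[~harm] ** 2).mean()
    idx = [int(np.argmin(np.abs(freqs - m * f_stim))) for m in range(2, m_max + 1)]
    Sigma = np.sqrt((r_std[idx] ** 2).min() / N + sigma_B2 / (N * (~harm).sum()))
    return (r_mean[idx] - B) / Sigma         # z; 99 % limit: z > 2.34
```

For an unmodulated (stationary) input, no harmonic should exceed the background, so all z-scores should stay below the 99 % limit.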

2.3 Simulation

Simulations for all 23 model configurations reported by Casti et al. (2008) and Carandini et al. (2007) were performed with the NEST Simulator (Gewaltig and Diesmann 2007).

In practice, we simulated N trials by creating N mutually independent generator-neuron pairs in a single NEST simulation. Membrane potentials were randomized upon network initialization and data collection started only after an equilibration period of 1 s simulated time. All simulations were performed with a spike-time resolution of 0.1 ms.

Simulations were performed on a system with Intel Xeon 2 CPUs running Linux 2.6.18 using NEST 2.1.r9693. Software was compiled with the GNU Compiler v. 4.1.2 and linked against the GNU Scientific Library v. 1.14. Trials were configured using the NeuroTools.parameters package (Muller et al. 2009). Data analysis was performed on the same computers and Apple MacBook Pro computers using NumPy 1.5.1 and 1.6.2 and Matplotlib 1.0.1 and 1.1.1 under Python 2.7.1 and 2.7.3.

2.4 Rate model description

A linear, time-invariant (LTI) system is completely characterized by its impulse response. That is, for any input, the output can be calculated as a convolution of the input and the impulse response. A wide class of non-linear systems can be described by a linear convolution with a kernel h(t) followed by a non-linear activation function g(·), so that the response is given by
$$ r(t) = g(h(t)*a(t)). $$

For each model neuron described by Carandini et al. (2007) and Casti et al. (2008), we need to find the activation function g(·) and the kernel \(h(t)\). For constant input, \(a(t) = a_0\), the convolution becomes the identity operation, provided the kernel is normalized (\(\int h(t)\,\mathrm{d}t = 1\)). We thus determine g(·) by measuring the response to input with fixed rate, \(r_0 = g(a_0)\), for a range of \(a_0\) and obtain a continuous representation of g(·) by spline interpolation. In practice, we use \(a_0 \in \{0, 5, \dots, 160\}\,\text{s}^{-1}\).
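A minimal sketch of this construction, with a made-up saturating response curve standing in for the measured stationary rates:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical stationary responses r0 = g(a0); in the study these come
# from simulating each neuron at fixed input rates a0 in {0, 5, ..., 160}/s.
a0 = np.arange(0.0, 165.0, 5.0)
r0 = 60.0 * (1.0 - np.exp(-a0 / 80.0))   # stand-in saturating curve

g = CubicSpline(a0, r0)                  # continuous activation function g(.)
```

The spline then provides g at arbitrary input rates, which is what the linear-nonlinear model of Eq. (24) requires.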

To obtain the kernel \(h(t)\), we linearize the activation function around a given working point \((a_0, r_0)\) using a Taylor expansion. The response to \(a(t) = a_0 + a_1 s(t)\) can then be expressed as
$$ \begin{array}{rll} r(t) &=& g(h(t)*(a_0+a_1 s(t)) )\\ &=& g(a_0) + g'(a_0)h(t) * (a_1 s(t)) + \mathcal{O}(a_1^{2})\\ &\approx& r_0 + h_0(t) * (a_1 s(t)) \;, \end{array} $$
where we have introduced the linear impulse response function
$$ h_0(t) = g'(a_0)h(t) $$
which combines the normalized kernel with the linear gain. For general g(·), \(h(t)\), and \(s(t)\), this approximation is only valid for small-amplitude signals (\(|a_1 s(t)| \ll |a_0|\)).
Based on this approximation, we can obtain \(h_0(t)\) as follows: we record model responses to sinusoidally modulated input (\(s(t)=\sin 2\pi {f}_{\text {stim}} t\), cf. Eq. (8)) for fixed \(a_0\) and \(a_1 \ll a_0\) at a range of logarithmically spaced frequencies \({f}_{\text {stim}}\) (see Table 2D). The Fourier amplitude \(\bar {r}({f}_{\text {stim}})\) and phase \(\bar {\phi }({f}_{\text {stim}})\) of the response, computed according to Eqs. (14) and (16), then yield the complex transfer function, i.e., the Fourier transform of the linear impulse response \(h_0(t)\)
$$ H_0({f}_{\text{stim}}) = \frac{\bar{r}({f}_{\text{stim}})}{a_1}e^{i\bar{\phi}({f}_{\text{stim}})}\;. $$
We then fit a first-order low-pass filter
$$ \tilde{H}_0(f) = \frac{\gamma}{(1+ i\frac{f}{f_{c}})} e^{-2\pi ifd} $$
to the empirical transfer function to capture it with as few parameters as possible: the cutoff frequency \(f_c\), the low-frequency gain \(\gamma\), and the delay d; see Nordlie et al. (2010) for details of the fitting procedure. For each set of stimulus parameters (\(a_0, a_1, {f}_{\text {stim}}\)), we obtained five independent fits, from which we computed mean values and standard deviations of the fitted parameters \(f_{c}\), \(\gamma\), and d. In the time domain, Eq. (28) corresponds to a delayed exponential kernel
$$ h_0(t) = \mathcal{F}^{-1}[\tilde{H}_0(f)](t) = \gamma \tau^{-1} e^{-\frac{t-d}{\tau}} \Theta(t-d) $$
where \(\Theta (\cdot )\) is the Heaviside function and \(\tau = 1/(2\pi f_{c})\) the filter time constant.
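As an illustration of what such a fit involves, the following stand-in recovers \((\gamma, f_c, d)\) from a noise-free transfer function using a magnitude fit followed by a phase-slope estimate of the delay; the actual procedure of Nordlie et al. (2010) differs.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_lowpass(f, H):
    """Fit gamma, f_c, d of Eq. (28) to an empirical transfer function
    H(f) (complex, at frequencies f in Hz). Two-step stand-in:
    magnitude fit, then delay from the residual phase slope."""
    mag = np.abs(H)
    # |H(f)| = gamma / sqrt(1 + (f/f_c)^2) is independent of the delay
    resid = lambda p: p[0] / np.sqrt(1.0 + (f / p[1]) ** 2) - mag
    gamma, f_c = least_squares(resid, x0=[mag[0], f[f.size // 2]],
                               bounds=([0.0, 0.0], [np.inf, np.inf])).x
    # angle(H) = -arctan(f/f_c) - 2*pi*f*d (mod 2*pi); remove the filter
    # phase, then the remaining linear slope gives the delay d.
    phi = np.unwrap(np.angle(H)) + np.arctan(f / f_c)
    d = -np.polyfit(f, phi, 1)[0] / (2.0 * np.pi)
    return gamma, f_c, d
```

Separating magnitude and phase keeps the problem well-conditioned: the magnitude fit is insensitive to the delay, and the delay then follows from a linear fit.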
We now define our linear-nonlinear rate model as
$$ r_{\text{NL}}(t) = g(h_0(t) * a(t)) \;. $$
Two approximations were made in deriving Eq. (30) from the original model defined by Eq. (24): the linearization of \(g(\cdot )\) and the assumption that \(h_0(t)\) is a first-order low-pass filter. Therefore, even though a comparison of Eqs. (26) and (29) suggests that \(\gamma = g'(a_0)\) should depend only on \(a_0\), while \(h_0(t)\) should be independent of all stimulus parameters, this may not hold true in practice, due to the approximations involved. We will discuss this further in Section 3.2.
We note that the linear-nonlinear model of Eq. (24) can be mapped to the following delay differential equation using the linear chain trick (Nordbø et al. 2007):
$$ \tau \dot{u}(t) = -u(t) + a(t-d), \quad r(t) = g(u(t)). $$
Here, \(u(t) = (a*h)(t)\) and \(h(t)\) is an exponential kernel.
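The delay-differential form lends itself to straightforward numerical integration. The sketch below (illustrative only: the rectified-linear activation \(g\) and all parameter values are assumptions, not fitted values) integrates \(\tau\dot{u} = -u + a(t-d)\) by forward Euler and applies \(g\) pointwise:

```python
# Forward-Euler integration of the delay differential equation
# tau * du/dt = -u(t) + a(t - d),  r(t) = g(u(t)).
def simulate(a, dt, tau, d, g):
    """Integrate the delayed first-order filter, apply g pointwise."""
    shift = int(round(d / dt))   # delay expressed in time steps
    u, r = 0.0, []
    for k in range(len(a)):
        a_del = a[k - shift] if k >= shift else a[0]  # delayed input
        u += dt / tau * (-u + a_del)
        r.append(g(u))
    return r

tau, d, dt = 2.7e-3, 2e-3, 1e-4               # assumed parameters
g = lambda u: max(0.0, 0.3 * (u - 10.0))      # assumed rectified-linear g
a = [30.0] * 1000 + [50.0] * 4000             # step from 30 to 50 s^-1
r = simulate(a, dt, tau, d, g)
print(r[999], r[-1])   # settles near g(30) = 6, then g(50) = 12
```

Because the linear stage has steady state \(u = a\), the sustained output rates are simply \(g(30)\) and \(g(50)\) here; the filter only shapes the transition.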

3 Results

We initially present representative results from simulations using the Casti model (Casti et al. 2008). Later, we show that the results generalize to the Carandini model (Carandini et al. 2007). Additional results from both models can be found in the supplementary material.

3.1 Stationary response

With only stationary excitatory input, the output rate \(r_0\) is expected to increase monotonically with the input rate \(a_0\). This is indeed the case when the model operates under normal conditions (Fig. 5). Refractoriness causes the firing-rate curve to flatten out at high input rates. Curiously, for certain configurations of the Casti model, output rates even start to decrease at high input rates (Fig. 5G–I). This is a consequence of the repolarizing mechanism of the Casti model and the absence of inhibitory input: the model has no explicit reset or refractory time. Instead, the neuron is unable to fire as long as the membrane potential remains above threshold. Thus, if a volley of input pushes the neuron so far across threshold that the afterhyperpolarizing current activated after an output spike fails to repolarize it below threshold, the neuron remains refractory until a lapse in input occurs that is long enough for the leak current to repolarize it. This effect also increases output-rate variability, as the time spent above threshold may vary considerably from trial to trial. Since the effect occurs only for persistently high input rates, we will not discuss it further.
Fig. 5

Stationary response for selected Casti neurons. Symbols illustrate mean output rates \(r_0\) across trials. Error bars denote one standard deviation in either direction. Each row contains results from one neuron configuration, from top to bottom: neuron 6, neuron 8, neuron 1, neuron 1*. The first and second neurons have low and high throughput ratio respectively. The third and fourth row contain responses from the same neuron, but with parameters obtained from stimulation with different spot sizes (see Section 2.1.1). Columns represent different input regularities, from left to right: Poisson (Γ = 1), gamma (Γ = 3) and gamma (Γ = 6)

The output rates are observed to be lower than the input rates, meaning that the neurons have to integrate several incoming spikes to reach threshold. The neurons differ considerably in transfer ratio r 0/a 0, as illustrated by the difference between the top two rows in Fig. 5.

Stimulation within the receptive field center and stimulation of the whole receptive field lead to quantitatively different activation functions (Fig. 5G, J).

The more input spikes the neurons need to integrate to produce a response, the more rectified-linear the stationary response curves become. For example, the neurons in the two top rows of Fig. 5 hardly produce any output for low input rates, but above a certain level output rates start to increase nearly linearly with the input rate.

Input signals with higher-order gamma statistics change the stationary responses in two ways, because short interspike intervals (ISIs) become rarer. First, as spikes arrive more evenly, higher input rates are needed to evoke any response. Second, neurons are less likely to remain above threshold for extended periods, as evidenced by the linearization of the stationary response curve with increasing gamma order in Fig. 5G–I.
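For reference, gamma spike trains of the kind used as input here can be generated as a renewal process with gamma-distributed ISIs of order Γ and mean rate \(a_0\); a minimal sketch (not the authors' stimulation code):

```python
# Gamma renewal process: ISIs ~ Gamma(shape=order, scale=1/(order*a0)),
# so the mean ISI is 1/a0. order = 1 recovers a Poisson process; higher
# orders give more regular trains with fewer short ISIs.
import random

def gamma_spike_train(a0, order, t_max, rng):
    """Spike times of a gamma renewal process with mean rate a0 (s^-1)."""
    t, spikes = 0.0, []
    while True:
        t += rng.gammavariate(order, 1.0 / (order * a0))
        if t > t_max:
            return spikes
        spikes.append(t)

rng = random.Random(42)
train = gamma_spike_train(a0=40.0, order=6, t_max=100.0, rng=rng)
print(len(train) / 100.0)   # empirical rate, close to 40 s^-1
```

Note the parameterization: increasing the order while scaling the ISI mean keeps the rate fixed but reduces the ISI coefficient of variation to \(1/\sqrt{\Gamma}\).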

3.2 Response to sinusoidal stimuli

In principle, measuring the linear response H 0(f) of a nonlinear system at a working point a 0 requires infinitesimally small perturbations. In practice, one needs to determine empirically the perturbation amplitudes up to which response nonlinearities may be neglected. To this end, we quantified the frequency contents of the response r(t). For a linear system, a sinusoidal stimulus with frequency \({f}_{\text {stim}}\) will give rise to a single peak in the response spectrum at the same frequency \({f}_{\text {stim}}\). Any nonlinearities in the system will produce higher harmonics in the response rate.

To measure the degree of nonlinearity, we compared the amplitudes of the principal and second harmonics, r 1 and r 2, with the background “noise” level \(z=2.34\) (Fig. 6). The first-order low-pass filter provides an overall good fit to the principal harmonic of the response, while the higher harmonics (the second harmonic is included in the figure) carry little or no significant power.
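The harmonic amplitudes \(r_n\) can be obtained by projecting the response rate onto complex exponentials over an integer number of stimulus cycles. A self-contained sketch (the rate trace below is made up for the demonstration, with principal and second harmonic amplitudes of 10 and 3):

```python
# Fourier amplitude of the n-th harmonic of a sampled rate r(t):
# project onto exp(-2*pi*i*n*f_stim*t) over the full record and
# convert the complex coefficient to a cosine amplitude.
import cmath
import math

def harmonic(rate, dt, f_stim, n):
    """Amplitude of the n-th harmonic of rate(t), sampled at step dt."""
    T = len(rate) * dt
    c = sum(r * cmath.exp(-2j * math.pi * n * f_stim * k * dt)
            for k, r in enumerate(rate)) * dt / T
    return 2 * abs(c)   # amplitude of the corresponding cosine component

f_stim, dt = 4.0, 1e-3
t = [k * dt for k in range(1000)]   # 1 s record = 4 full stimulus cycles
rate = [40 + 10 * math.cos(2 * math.pi * f_stim * ti)
        + 3 * math.cos(2 * math.pi * 2 * f_stim * ti) for ti in t]

print(harmonic(rate, dt, f_stim, 1))   # principal harmonic r1, ~10
print(harmonic(rate, dt, f_stim, 2))   # second harmonic r2, ~3
```

With an integer number of cycles in the record, the projections are exactly orthogonal, so the DC offset and the other harmonic do not leak into either estimate.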
Fig. 6

Low-pass characteristic of the response to sinusoidal stimuli (\(a_1 > 0\)) for representative Casti neurons. The figure illustrates how a first-order low-pass filter with cutoff frequency \(f_{c}\), low-frequency gain \(\gamma \), and delay d fits the response of the Casti model to time varying input. Symbols represent measured responses \(r_{n}({f}_{\text {stim}})/a_1\) for principal (blue) and second (green) harmonics (\(n \in \{1, 2\}\)). Gray curves show fitted first-order low-pass filters. Dotted vertical lines mark fitted cutoff frequencies \(f_{c}\). Dashed horizontal lines represent noise level \(z=2.34\). Stimulus parameters: a 0 = 40 s−1, a 1 = 10 s−1. Same panel arrangement as in Fig. 5

When a neuron with a rectified-linear stationary response curve (e.g., Fig. 5C) operates near the kink in the response curve, the second harmonic of the non-stationary response becomes more pronounced with increasing regularity for some neurons (see Fig. 6, third column). Overall, we conclude that the low power found at higher harmonics indicates that the dynamics of the Casti model are linear beyond the rectification point and thus can be captured by a linear-nonlinear model as proposed here.

As pointed out in Section 2.4, the gain of the low-pass filter should fulfill \(\gamma =g'(a_0)\), while the cutoff frequency \(f_{c}\) and delay d should be independent of both mean input rate a 0 and modulation amplitude a 1 for a linear system. Nordlie et al. (2010) found that these expectations are reasonably fulfilled for integrate-and-fire neuron models of retinogeniculate transmission.

To investigate whether this decomposition holds for the models studied here, we obtained response parameters \(\gamma \), f c, and d for a range of stimulus parameters (a 0, a 1) with \(a_1 \le a_0\). Results for two typical neurons shown in Fig. 7 demonstrate that the response properties are largely independent of the modulation depth a 1 for a given mean input rate \(a_0\). Data in Fig. 7 are for Poisson input, but we found similar results for higher-order gamma input (Γ = 3, Γ = 6; data not shown).
Fig. 7

Rate model parameters (\(\gamma , f_{c}\), and d) for Poisson (Γ = 1) input for two Casti neurons. The left column illustrates typical parameter variation (neuron 6). The right column illustrates the results from a high-throughput neuron (neuron 1). Solid, dashed, dash-dotted, and dotted black lines represent \(a_1 = \{0.25, 0.5, 0.75, 1.0\} \times a_0\), respectively. Thick grey lines indicate mean values

We observed further that \(\gamma \approx g'(a_0)\) holds across neuron models and stimulus parameters, with mild deviations for cases with very high throughput (data not shown), providing further evidence that the linearization in Eq. (25) and the low-pass filter approximation in Eq. (28) are reasonable for the Casti and Carandini models. Given that the stationary response curves in Fig. 5 have approximately constant slope \(g'(a_0)\), we expect constant low-frequency gain \(\gamma \). Fig. 7 A, B indicates that this expectation is fulfilled to a reasonable degree.

The cutoff frequency \(f_{c}\) (Fig. 7 C, D) and, in some cases, the delay d (Fig. 7 F), depend on the mean input rate \(a_0\). The latter applies to the high-throughput neurons (Casti neurons 1, 2, and 5) in particular. The neuronal responses for the models considered here thus cannot be decomposed into a gain dependent on the working point and a kernel independent of it.

However, the parameters \(\gamma , f_{c}\), and d of our linear-nonlinear model are independent of the modulation depth \(a_1\). This is a key property of the model: parameters obtained for one value of a 1 apply to any modulation depth \(a_1 \le a_0\), rendering the model applicable to a wide range of stimuli, provided the mean input rate is approximately constant. Furthermore, as the dependence of the cutoff frequency f c on the input rate a 0 is rather weak, the rate models are expected to generalize well when driven with other stimuli. Moreover, in those cases where the delay d depends on a 0, f c increases along with d, which effectively reduces the difference between rate models fitted at different working points: an increased cutoff frequency f c entails faster responses, which are compensated by an increased delay d.

3.2.1 Cutoff frequencies

Low-frequency signals pass through low-pass filters essentially unchanged, while signals with frequencies higher than the cutoff frequency are attenuated. The cutoff frequency of a fitted low-pass filter hence describes the tracking speed of a neuron model.

Values for all neurons studied are listed in Table 3. In summary, for Poisson input with a mean rate a 0 = 40 s−1 (consistent with the S potential recordings and retinal ganglion cells in general) and modulation depth a 1 = 10 s−1, cutoff frequencies f c for the Casti neurons ranged from 48.7 to 93.9 Hz. This corresponds to rate-model time constants τ = 1/(2π f c ) from 3.3 to 1.7 ms.
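The quoted time constants follow directly from \(\tau = 1/(2\pi f_c)\), as can be checked with the extreme cutoff values reported above:

```python
# Convert a fitted cutoff frequency (Hz) into the rate-model
# time constant tau = 1/(2*pi*fc), expressed in milliseconds.
import math

def time_constant_ms(fc_hz):
    return 1000.0 / (2.0 * math.pi * fc_hz)

print(round(time_constant_ms(48.7), 1))   # 3.3 ms
print(round(time_constant_ms(93.9), 1))   # 1.7 ms
```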
Table 3

Cutoff frequencies in Hz



[Table data not preserved in the source; the columns list cutoff frequencies per neuron for Poisson (Γ = 1) and gamma (Γ = 3, Γ = 6) input.]
The asterisk indicates that the neuron parameters were obtained from stimulation extending beyond the neuron’s receptive field center. Stimulus parameters: a 0 = 40 s−1, a 1 = 10 s−1.

With few exceptions, the cutoff frequency drops with increased input regularity (Table 3). However, at higher input rates, increased input regularity results in higher cutoff frequencies for neurons with high transfer ratios (Supplementary Figure 4). This behavior is typically seen in neurons where AHP has relatively little effect. Across the full set of Casti neurons and input rates, cutoff frequencies varied from approximately 30 Hz to approximately 230 Hz (data not shown).

3.3 Test of linear-nonlinear model

3.3.1 Step test

As a first test of the rate model of Eq. (24), we drove it with a step in the input firing rate. Population-averaged responses of 50,000 independent Casti neurons (black curves) are shown in Fig. 8 along with the predictions of the firing-rate models (gray curves). The step response is seen to be well predicted by the firing-rate model.
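For a delayed exponential kernel, the linear stage of the model has a closed-form step response, which makes the prediction easy to sketch. In the snippet below the activation function is an assumed rectified-linear stand-in, not a fitted \(g\), and the parameter values are illustrative:

```python
# Closed-form step response of the linear stage: convolving a unit-gain
# delayed exponential kernel with a step a0 -> a0 + da gives a filtered
# input that relaxes exponentially; the activation g is applied pointwise.
import math

def filtered_step(t, t0, a0, da, tau, d):
    """(h * a)(t) for a step at t0, h a delayed exponential of gain 1."""
    if t < t0 + d:
        return a0
    return a0 + da * (1.0 - math.exp(-(t - t0 - d) / tau))

g = lambda a: max(0.0, 0.3 * (a - 10.0))   # assumed activation function
tau, d, t0 = 2.7e-3, 2e-3, 0.1             # assumed filter parameters
r = [g(filtered_step(k * 1e-4, t0, 30.0, 20.0, tau, d))
     for k in range(3000)]
print(r[0], r[-1])   # from g(30) = 6 toward g(50) = 12
```

The predicted rate thus relaxes monotonically to the new sustained level; any overshoot in the simulations is, by construction, outside the reach of this first-order model.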
Fig. 8

Population-averaged step responses for selected Casti neurons. Firing rate \(r(t)\) in response to an instantaneous increase in the input firing rate \(a(t)\) at time \(t = 100\) ms from 30 to 50 \(\text {s}^{-1}\). Comparison between simulation results (black curves; population-averaged response of 50000 neurons, bin size \(\text {d}t=2.0\) ms) and prediction of the linear-nonlinear model (24) (gray curves) with measured activation function \(g(a_0)\) and transfer kernel \(h(t)\). Same panel arrangement as in Fig. 5

An overshoot can be seen in the simulation results for some of the neurons following the step in the input rate. The magnitude of the overshoot increases with input regularity (Fig. 8). The maximal overshoot observed exceeded 30 % of the sustained post-step output rate (Supplementary Figure 5C), but for most input/neuron combinations it was much lower.

The overshoot occurs when a significant number of neurons prior to the input-rate step have a membrane potential close to threshold. As the input rate suddenly increases, many of these neurons will receive an input spike within the time frame required to produce an output spike. In cases with strong refractoriness (large AHP conductance \({\bar {g}}_{\text {A}}\) and/or time constant \({\tau }_{\text {A}}\)), this leaves few neurons to spike until the refractory effects wane. Different neurons exhibit this behavior at different input rates.

3.3.2 Recorded retinal spike trains

To further validate the performance of the LGN rate models, we tested them on recorded retinal spike trains with low baseline firing rates and transients exceeding 150 s−1. The spike trains were derived from S potentials captured by an electrode whose tip was extremely close to the relay cell soma (see Section 2.1). As in Casti et al. (2008), such a recording was deemed suitable for this analysis if the following conditions were met: (1) The recording was stable over a period of hours, indicating that the cell was not damaged by the electrode, (2) the S potentials stood out well above the extracellular membrane potential noise and could easily be identified by simple thresholding and subsequent principal components analysis, and (3) there was an absence of short inter-event intervals (< 2 ms), giving strong evidence that the S potentials were elicited by a single retinal ganglion cell. Each of the relay cells recorded had a moderate-to-high transfer ratio (ratio of LGN output spikes to S potential input events) between 0.15 and 0.7, a range for which the Casti model was accurate.

The monitor stimulus used to drive the ganglion cells was a noisy flashing spot modulated at 160 Hz by a naturalistic distribution of light intensities (van Hateren 1997) relative to a gray background in the photopic range (∼ 25 cd/m2). A stimulus run consisted of a single 8-second realization of this stimulus repeated 128 times. The spot size was fixed for a set of 128 repeat trials, but was varied between runs from sub-receptive-field-center sizes to full field. All cells were located within 15 degrees of the area centralis in the adult cat.

To compare the performance of the spiking and rate-based models, we obtained comparable responses from both types of models as follows. For the rate-based model, we pooled experimental spike trains across trials and determined the averaged response rate by means of kernel density estimation (Shimazaki and Shinomoto 2010), using the fixed-kernel method to optimize the kernel bandwidth. We thus obtained a continuous rate function a RGC(t) describing the responses of real retinal ganglion cells. Applying Eq. (24) to this rate yields the response of the rate model r rate(t). For the spike-based models, we drove the models with spike trains from individual experimental trials, pooled the resulting output spike trains, and applied kernel density estimation to obtain the response rate r spike(t), as shown in Fig. 9.
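A fixed-bandwidth Gaussian kernel estimate (a simplified stand-in for the optimized-bandwidth method of Shimazaki and Shinomoto (2010); bandwidth and spike times below are made up) can be sketched as:

```python
# Trial-averaged rate estimate: smooth pooled spike times with a
# Gaussian kernel of fixed bandwidth bw and divide by the trial count,
# so the result is in spikes per second per trial.
import math

def kde_rate(spikes, n_trials, times, bw):
    """Gaussian-kernel estimate of the trial-averaged rate (s^-1)."""
    norm = 1.0 / (n_trials * bw * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((t - s) / bw) ** 2)
                       for s in spikes)
            for t in times]

# Two trials with spikes clustered near t = 0.5 s (made-up data)
pooled = [0.48, 0.49, 0.50, 0.51, 0.52]
rate = kde_rate(pooled, n_trials=2, times=[0.5, 0.1], bw=0.02)
print(rate[0] > rate[1])   # estimated rate peaks near the cluster
```

The published method additionally selects the bandwidth by minimizing an estimated risk; here it is simply fixed by hand.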
Fig. 9

Firing-rate model prediction quality \(E_{r}\). Population-averaged response (solid) from 128 neurons and prediction of the linear-nonlinear model (24) (dashed). Panel A illustrates the response to a complete 8 second stimulus sequence (neuron 1, dataset 1), while panel B shows 1 s (shaded area) of the same data in more detail

We then quantified the difference between responses obtained from rate-based and spiking models as the mean square error normalized by the variance of the response of the spiking model (Pillow et al. 2005)
$$ {E}_{r} = 1 - \frac{ \frac{1}{T} \int_0^{T}\left({r}_{\text{rate}}(t)-{r}_{\text{spike}}(t)\right)^{2} \text{d}t} {\frac{1}{T} \int_0^{T}\left( {r}_{\text{spike}}(t)-{\bar{r}}_{\text{spike}}\right)^{2}\text{d}t} $$
where \({\bar {r}}_{\text {spike}}\) is the average response rate of the spiking model. Note that \(E_{r}=100~\%\) indicates perfect agreement between the models.
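Computed on discretized rate traces, \(E_r\) takes the following form (the sample vectors below are made up for the demonstration):

```python
# Prediction quality E_r: one minus the mean squared error between the
# rate-model and spiking-model rates, normalized by the variance of the
# spiking-model rate. E_r = 1.0 means perfect agreement.
def prediction_quality(r_rate, r_spike):
    n = len(r_spike)
    mean = sum(r_spike) / n
    mse = sum((a - b) ** 2 for a, b in zip(r_rate, r_spike)) / n
    var = sum((b - mean) ** 2 for b in r_spike) / n
    return 1.0 - mse / var

r_spike = [10.0, 30.0, 20.0, 40.0]          # made-up spiking-model rates
print(prediction_quality(r_spike, r_spike)) # identical traces: 1.0
print(prediction_quality([12.0, 28.0, 22.0, 38.0], r_spike))
```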
We tested the rate model on three separate datasets with different mean rate \(a_0\) and regularity Γ and observed good agreement between rate and spiking models. Scores for the four example neurons are listed in Table 4. Across all 14 model neurons reported by Casti et al. (2008), median \(E_{r}\) scores for the three datasets were 96.5 %, 93.0 %, and 89.3 %, respectively, with E r ≥ 85.0 % for 39 of a total of 42 scores, and a minimum score of E r = 74.8 %.
Table 4

Rate-model prediction quality \(E_r\)





[Per-neuron scores not preserved in the source. Datasets: \(a_0=17.9\), Γ = 2.1, fit: 20/10; \(a_0=27.8\), Γ = 3.5, fit: 30/15; \(a_0=12.5\), Γ = 0.9, fit: 20/10.]
For each neuron, prediction scores are listed for the default and the best fit. The default fit is selected based on the nearest mean rate (rounded up) and 50 % modulation depth. Mean rate, regularity, and default fit are specified for each dataset

3.4 Carandini results

We found the results for the Carandini model to be qualitatively equivalent to those presented above. Both the stationary and non-stationary responses show the same qualitative features, but some quantitative differences are worth pointing out. In particular, the cutoff frequencies are lower for the Carandini neurons, implying longer time constants; this is especially pronounced for Poisson input (Table 3). The stationary response, non-stationary response, and the resulting firing-rate model's predicted response to a step increase in input rate for one of the neurons in the Carandini study (122R4-5) are illustrated in Fig. 10. Results for more neurons from the Carandini study are shown in Supplementary Figures 1–3 and 6–7.
Fig. 10

Stationary response (top), non-stationary response (middle) and population-averaged step responses (bottom) for one Carandini neuron (122R4-5). Same row arrangement as in Fig. 5. See Figs. 5, 6 and 8 for detailed legend

4 Discussion

In the present study, we have shown that linear-nonlinear firing-rate models can capture the essential response dynamics of data-fitted spiking LGN relay neuron models.

Our use of data-fitted models allowed us to calculate the rate-model time constants for the cat and macaque LGN relay cells studied, as shown in Table 3. For Poisson input with a mean rate a 0 = 40 s−1 and modulation depth a 1 = 10 s−1, time constants τ = 1/(2π f c ) ranged from 1.7 to 3.3 ms for the cat neurons and from 3.0 to 10.1 ms for the macaque neurons. These values were found to decrease somewhat with increasing firing rates and, with some exceptions, to increase with increased input regularity. In accordance with earlier work (Gerstner 2000; Nordlie et al. 2010), we found no connection between the rate-model time constants and the membrane time constants.

Although the neurons operate outside the diffusive regime (i.e., they do not receive numerous tiny synaptic inputs), responses to stationary stimuli show that all the studied neurons must integrate at least two input spikes to produce an output spike. This result is in line with previous studies of LGN relay neurons (Sirovich 2008; Casti et al. 2008; Carandini et al. 2007).

Since the work of Wilson and Cowan (1972), firing-rate models with exponential kernels (i.e., first-order low-pass filters) have become a standard tool in neuroscience. Nordlie et al. (2010) demonstrated that for an ensemble of unconnected LIF neurons with strong synapses, a simple first-order model yields accurate predictions of the population-averaged response for a wide range of stimulus, neuron, and synapse parameters. Because of its simplicity, we used the same filter model and our results indicate that it produces reasonable predictions for the two models studied. The exact shape of the transfer function has been less of a concern to us than the low-frequency gain and the cutoff frequency required for a simple and accurate firing-rate model.

Previous studies have found that cutoff frequencies f c increase with firing rate for small synaptic time constants \(\tau _{s}\) (Knight 1972; Brunel et al. 2001; Nordlie et al. 2010). We see such an increase as well (Fig. 7), but input regularity has a larger impact: For most neurons, the cutoff frequency is reduced with increasing input regularity. At high input rates, though, the cutoff frequencies of neurons with high transfer ratios increase with input regularity and thus behave more like the neurons with supercritical weights studied by Nordlie et al. (2010).

To assess the overall validity of our rate-based models, we drove the rate models with novel stimuli. First, rate-model predictions were compared to simulated step responses. The predictions were found to be good overall. Sustained rates were predicted well for all input regularities, but some combinations of neuron and input parameters resulted in an overshoot immediately following the step in input rate. Our firing-rate models are unable to account for this effect. The overshoot occurs because many of the neurons that are close to threshold spike shortly after the sudden increase in the input rate. If this happens to a large proportion of the neurons, few neurons will be able to spike until the refractory effects wane. This effect can be understood from the properties of Poisson processes with refractoriness (see Deger et al. 2010). A variant of the model in which we replaced the low-pass filter with a band-pass filter captured the overshoot well (data not shown), but given the excellent agreement between spiking and rate models observed for realistic input, as shown in Section 3.3.2, we consider this an unnecessary complication.

Second, we used recorded retinal spikes as input to the rate models and compared the results to the output from the spiking relay cell models. Results varied between datasets and neurons, but prediction quality was good overall, with median E r scores of 96.5 %, 93.0 %, and 89.3 % for the three datasets tested. Neurons with low transfer ratios generally scored worse than neurons with high transfer ratios. Shimazaki and Shinomoto (2010) proposed rate estimators based on fixed and variable kernels. We found that estimates based on variable kernels, which better capture abrupt changes in activity, yielded even higher scores for our models than fixed kernels (data not shown). Due to the computational burden and the experimental status of the variable-kernel method, though, we used the fixed-kernel method for all data reported here.

Overall, our rate-based model fits the experimentally constrained spiking models by Carandini et al. (2007) and Casti et al. (2008) equally well, even though the models differ in their mathematical form (spike response vs. conductance-based) and the species modeled (monkey vs. cat). This universality, which may seem surprising at first, reflects the fact that both studies investigate responses to comparable stimuli, to which relay cells in cat and monkey LGN respond in a similar fashion. Thus, our model abstracts away details of the Casti and Carandini models that are insignificant for the response properties investigated. One should keep in mind, though, that the model by Casti et al. (2008), which our rate-based model matches well, best captures responses of neurons with moderate-to-high transfer ratios.

Our results indicate that simple firing-rate models produce acceptable predictions for LGN relay neurons. The approach used here could therefore be a useful tool for further exploration of the firing-rate response properties of neurons.



Acknowledgments

We would like to thank Matteo Carandini for valuable discussions on how to replicate his model and two anonymous referees for constructive comments.

Conflict of interest

The authors declare that they have no conflict of interest.

Supplementary material

10827_2013_456_MOESM1_ESM.pdf (493 kb)


  1. Blitz, D.M., & Regehr, W.G. (2005). Timing and specificity of feed-forward inhibition within the LGN. Neuron, 45(6), 917–928.
  2. Brown, E.N., Barbieri, R., Ventura, V., Kass, R.E., Frank, L.M. (2002). The time-rescaling theorem and its application to neural spike train data analysis. Neural Computation, 14(2), 325–346.
  3. Brunel, N., Chance, F.S., Fourcaud, N., Abbott, L.F. (2001). Effects of synaptic noise and filtering on the frequency response of spiking neurons. Physical Review Letters, 86(10), 2186–2189.
  4. Carandini, M., Horton, J.C., Sincich, L.C. (2007). Thalamic filtering of retinal spike trains by postsynaptic summation. Journal of Vision, 7(14), 20.1–11.
  5. Casti, A., Hayot, F., Xiao, Y., Kaplan, E. (2008). A simple model of retina-LGN transmission. Journal of Computational Neuroscience, 24(2), 235–252.
  6. Chichilnisky, E.J. (2001). A simple white noise analysis of neuronal light responses. Network, 12(2), 199–213.
  7. Cleland, B.G., Dubin, M.W., Levick, W.R. (1971). Simultaneous recording of input and output of lateral geniculate neurones. Nature New Biology, 231(23), 191–192.
  8. Dayan, P., & Abbott, L.F. (2001). Theoretical neuroscience. Cambridge: Massachusetts Institute of Technology Press.
  9. Deger, M., Helias, M., Cardanobile, S., Atay, F.M., Rotter, S. (2010). Nonequilibrium dynamics of stochastic point processes with refractoriness. Physical Review E, 82, 021129.
  10. Einevoll, G.T., & Heggelund, P. (2000). Mathematical models for the spatial receptive-field organization of nonlagged X-cells in dorsal lateral geniculate nucleus of cat. Visual Neuroscience, 17(6), 871–885.
  11. Einevoll, G.T., & Plesser, H.E. (2002). Linear mechanistic models for the dorsal lateral geniculate nucleus of cat probed using drifting-grating stimuli. Network, 13(4), 503–530.
  12. Franklin, J., & Bair, W. (1995). The effect of a refractory period on the power spectrum of neuronal discharge. SIAM Journal on Applied Mathematics, 55, 1074–1093.
  13. Galassi, M., Davies, J., Theiler, J., Gough, B., Jungman, G., Booth, M., Rossi, F. (2001). GNU scientific library reference manual. Bristol: Network Theory.
  14. Gerstner, W. (2000). Population dynamics of spiking neurons: fast transients, asynchronous states, and locking. Neural Computation, 12(1), 43–89.
  15. Gerstner, W., & Kistler, W.M. (2002). Spiking neuron models. Cambridge: Cambridge University Press.
  16. Gewaltig, M.O., & Diesmann, M. (2007). NEST (NEural simulation tool). Scholarpedia, 2(4), 1430.
  17. Goldberg, J.M., & Brown, P.B. (1969). Response of binaural neurons of dog superior olivary complex to dichotic tonal stimuli: Some physiological mechanisms of sound localization. Journal of Neurophysiology, 32, 613–636.
  18. Hayot, F., & Tranchina, D. (2001). Modeling corticofugal feedback and the sensitivity of lateral geniculate neurons to orientation discontinuity. Visual Neuroscience, 18(6), 865–877.
  19. Johannesma, P.I.M. (1968). Diffusion models for the stochastic activity of neurons. In E.R. Caianiello (Ed.), Neural networks: Proceedings of the school on neural networks (pp. 116–144). Springer-Verlag.
  20. Kaplan, E., & Shapley, R. (1984). The origin of the S (slow) potential in the mammalian lateral geniculate nucleus. Experimental Brain Research, 55(1), 111–116.
  21. Kirkland, K.L., & Gerstein, G.L. (1998). A model of cortically induced synchronization in the lateral geniculate nucleus of the cat: a role for low-threshold calcium channels. Vision Research, 38(13), 2007–2022.
  22. Knight, B.W. (1972). Dynamics of encoding in a population of neurons. The Journal of General Physiology, 59(6), 734–766.
  23. Köhn, J., & Wörgötter, F. (1996). Corticofugal feedback can reduce the visual latency of responses to antagonistic stimuli. Biological Cybernetics, 75(3), 199–209.
  24. Muller, E., Davison, A.P., Brizzi, T., Bruederle, D., Eppler, J.M., Kremkow, J., Pecevski, D., Perrinet, L., Schmuker, M., Yger, P. (2009). NeuralEnsemble.Org: Unifying neural simulators in Python to ease the model complexity bottleneck. In Frontiers in neuroscience conference abstract: Neuroinformatics 2009.
  25. Nelder, J.A., & Mead, R. (1965). A simplex method for function minimization. Computer Journal, 7, 308–313.
  26. Nordbø, Ø., Wyller, J., Einevoll, G.T. (2007). Neural network firing-rate models on integral form: effects of temporal coupling kernels on equilibrium-state stability. Biological Cybernetics, 97(3), 195–209.
  27. Nordlie, E., Gewaltig, M.O., Plesser, H.E. (2009). Towards reproducible descriptions of neuronal network models. PLoS Computational Biology, 5(8), e1000456.
  28. Nordlie, E., Tetzlaff, T., Einevoll, G.T. (2010). Rate dynamics of leaky integrate-and-fire neurons with strong synapses. Frontiers in Computational Neuroscience, 4, 149.
  29. Pillow, J.W., Paninski, L., Uzzell, V.J., Simoncelli, E.P., Chichilnisky, E.J. (2005). Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. Journal of Neuroscience, 25(47), 11003–11013.
  30. Pillow, J.W., Shlens, J., Paninski, L., Sher, A., Litke, A.M., Chichilnisky, E.J., Simoncelli, E.P. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207), 995–999.
  31. Plesser, H.E., & Diesmann, M. (2009). Simplicity and efficiency of integrate-and-fire neuron models. Neural Computation, 21, 353–359.
  32. Rodieck, R.W. (1965). Quantitative analysis of cat retinal ganglion cell response to visual stimuli. Vision Research, 5(11), 583–601.
  33. Rotter, S., & Diesmann, M. (1999). Exact digital simulation of time-invariant linear systems with applications to neuronal modeling. Biological Cybernetics, 81(5–6), 381–402.
  34. Sherman, S.M., & Guillery, R.W. (2001). Exploring the thalamus. New York: Academic Press.
  35. Shimazaki, H., & Shinomoto, S. (2010). Kernel bandwidth optimization in spike rate estimation. Journal of Computational Neuroscience, 29(1–2), 171–182.
  36. Sirovich, L. (2008). Populations of tightly coupled neurons: the RGC/LGN system. Neural Computation, 20(5), 1179–1210.
  37. Troy, J.B., & Robson, J.G. (1992). Steady discharges of X and Y retinal ganglion cells of cat under photopic illuminance. Visual Neuroscience, 9(6), 535–553.
  38. van Hateren, J.H. (1997). Processing of natural time series of intensities by the visual system of the blowfly. Vision Research, 37(23), 3407–3416.
  39. Walpole, R.E., & Myers, R.H. (1993). Probability and statistics for engineers and scientists (5th ed.). Englewood Cliffs: Prentice Hall.
  40. Wilson, H.R., & Cowan, J.D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12(1), 1–24.
  41. Yousif, N., & Denham, M. (2007). The role of cortical feedback in the generation of the temporal receptive field responses of lateral geniculate nucleus neurons: a computational modelling study. Biological Cybernetics, 97(4), 269–277.

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Thomas Heiberg (1)
  • Birgit Kriener (1)
  • Tom Tetzlaff (2)
  • Alex Casti (3)
  • Gaute T. Einevoll (1)
  • Hans E. Plesser (1)

  1. Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway
  2. Institute of Neuroscience and Medicine (INM-6), Research Center Jülich, Jülich, Germany
  3. Department of Mathematics, Gildart-Haase School of Computer Sciences and Engineering, Fairleigh Dickinson University, Teaneck, USA
