# How does transient signaling input affect the spike timing of postsynaptic neuron near the threshold regime: an analytical study


## Abstract

The noisy threshold regime, where even a small set of presynaptic neurons can significantly affect postsynaptic spike-timing, is suggested as a key requisite for computation in neurons with high variability. It has also been proposed that signals under noisy conditions are successfully transferred by a few strong synapses and/or by an assembly of nearly synchronous synaptic activities. We analytically investigate the impact of a transient signaling input on a leaky integrate-and-fire postsynaptic neuron that receives background noise near the threshold regime. The signaling input models a single strong synapse or a set of synchronous synapses, while the background noise represents many weak synapses. We find an analytic solution that explains how the first-passage time (ISI) density is changed by a transient signaling input. The analysis allows us to connect properties of the signaling input, like spike timing and amplitude, with the postsynaptic first-passage time density in a noisy environment. Based on the analytic solution, we calculate the Fisher information with respect to the signaling input’s amplitude. For a wide range of amplitudes, we observe a non-monotonic behavior of the Fisher information as a function of background noise. Moreover, the Fisher information depends non-trivially on the signaling input’s amplitude; changing the amplitude, we observe a single maximum at high levels of background noise, which splits into two maxima in the low-noise regime. This finding demonstrates the benefit of the analytic solution in investigating signal transfer by neurons.

## Keywords

First-passage time density · Transient signaling input · Strong synapse · Gaussian noise · Threshold regime · Fisher information

## 1 Introduction

High variability in the spiking activity of *in vivo* cortical neurons is considered one of the fundamentals of information processing by networks of neurons (Softky and Koch 1993; Shadlen and Newsome 1998). Since it is difficult to experimentally control the mechanisms that underlie the highly variable neuronal activity, theoretical and computational analyses of stochastically spiking neuron models are invaluable approaches to investigate how information is transferred via the variable spiking activities (Abbott et al. 2012). Statistics of spike timing beyond the spike rate convey information in the sensory systems; in particular, a neuron’s first-spike time after stimulus onset can encode most of the information in the sensory cortex (Petersen et al. 2001; Panzeri et al. 2001; Van Rullen and Thorpe 2001; Furukawa and Middlebrooks 2002; Johansson and Birznieks 2004; Van Rullen et al. 2005). Hence the spike-timing distribution, if attained at sufficient accuracy, could be a building block in modeling neural computation (Herz et al. 2006); it would explain consequences of activity-dependent plasticity (Babadi and Abbott 2013), information transmission by a population of neurons (Silberberg et al. 2004; De La Rocha et al. 2007; Pitkow and Meister 2012), and even behavior (Pitkow et al. 2015). An analytical solution would serve this purpose; however, the non-linear dynamics of a single neuron has so far prevented obtaining such a solution.

The variability observed in spike-timing is thought to reflect fluctuations of synaptic inputs rather than the intrinsic noise of neurons (Mainen and Sejnowski 1995). A neuron is sensitive to input fluctuations and fires irregularly if inputs from excitatory and inhibitory neurons are balanced at levels near but below the threshold (Shadlen and Newsome 1998). Intracellular recordings from *in vivo* cortical neurons have revealed the ubiquity of such balanced inputs from excitatory and inhibitory populations (Wehr and Zador 2003; Okun and Lampl 2008). The balanced inputs are self-organized in sparsely connected networks with relatively strong synaptic connections and result in asynchronous population activities (van Vreeswijk and Sompolinsky 1996; 1998; Kumar et al. 2008; Renart et al. 2010). Encouragingly, a recent experiment (Tan et al. 2014) demonstrated that the membrane potential of macaque V1 neurons is dynamically clamped *near the threshold* when a stimulus is presented to the animal. All this evidence places importance on developing an analytic solution to understand neural behavior near the threshold regime.

On the other hand, the distribution of synaptic strength is typically log-normal, which indicates the presence of a few extremely strong synapses and a majority of weak synapses (Song et al. 2005; Lefort et al. 2009; Ikegaya et al. 2013; Buzsáki and Mizuseki 2014; Cossell et al. 2015). These strong synapses may form signaling inputs (Abbott et al. 2012), with the aid of other weak synapses (Song et al. 2005; Teramae et al. 2012; Ikegaya et al. 2013; Cossell et al. 2015). Moreover, it has long been argued that nearly synchronized inputs from multiple neurons act as a strong signal on top of the noisy background input (Stevens and Zador 1998; Diesmann et al. 1999; Salinas and Sejnowski 2001; Takahashi et al. 2012). The strong input, in many cases, is a short-lasting signal. For example, the signaling inputs which code stimulus information in early sensory processing areas, such as the primary visual and auditory cortices, are usually known to be transient (Gollisch and Herz 2005; Geisler et al. 2007). Thus, besides the many weak synapses which form a noisy background input, we should also consider strong or temporally coordinated synaptic events which induce *strong transient signaling inputs*.

The leaky integrate-and-fire (LIF) neuron model is the simplest model that captures the important features of cortical neurons (Rauch et al. 2003; La Camera et al. 2004; Jolivet et al. 2008). This simple model is widely used to investigate the input-output relation of a postsynaptic neuron (Tuckwell 1988; Brunel and Sergi 1998; Burkitt and Clark 1999; Lindner et al. 2005; Burkitt 2006a, 2006b; Richardson 2007, 2008; Richardson and Swarbrick 2010; Helias et al. 2013; Iolov et al. 2014). There are analytical studies which obtained the linear response of the neuron to oscillatory signaling input (Bulsara et al. 1996; Brunel and Hakim 1999; Brunel et al. 2001; Lindner and Schimansky-Geier 2001), to excitatory and inhibitory synaptic jumps (Richardson and Swarbrick 2010; Helias et al. 2010), or to transient input (Herrmann and Gerstner 2001; Helias et al. 2010, 2011). However, a closed-form analytical solution for the impact of a *strong transient signaling input* on a LIF neuron model subject to Gaussian noise has not been achieved yet.

Here we analytically derive the first-passage time density of a LIF neuron receiving a transient signaling input of arbitrary amplitude; the background input is noisy but balanced at the threshold regime. We then extend our solution to arbitrary shapes of the transient signaling input. As an application of this solution, we calculate the Fisher information with respect to the input’s amplitude; the maximum of the Fisher information provides the minimum error in estimating the signaling input’s amplitude from spiking activity. We quantify the noise level and the signal’s amplitude which yield the best possible discrimination.

## 2 Results

### 2.1 Impact of a transient signaling input on postsynaptic spiking

The membrane potential of the LIF neuron, *V*, evolves with:

\(\tau_{\mathrm{m}} \frac{dV}{dt} = -V + I(t), \) (1)

where *τ*_{m} is the membrane time constant, and *I*(*t*) is the total input current. The neuron produces a spike when its voltage reaches the threshold, *V*_{ 𝜃 }. The membrane voltage then resets to its resting value, *V*_{ r }, which we assume to be zero without loss of generality.

The total input current consists of two terms, *I*(*t*):

\(I(t) = I_{0}(t) + {\Delta} I(t, t^{*}), \) (2)

where *I*_{0}(*t*) is the background input induced by presynaptic neurons, and Δ*I*(*t*,*t*^{∗}) is the signaling input from the specified neuron; here *t*^{∗} represents the time at which the signaling input arrives. For a large number of presynaptic neurons, the uncorrelated background input is approximated as \(I_{0}(t) = \bar {I} + \xi (t)\), where \(\bar {I}\) is the mean input strength and *ξ*(*t*) is a *zero-mean* Gaussian noise (i.e. < *ξ*(*t*) > = 0 and < *ξ*(*t*)*ξ*(*t*^{′}) > = 2*D* *δ*(*t* − *t*^{′}), where *δ*(*t*) is the Dirac delta function).

The signaling input has a finite duration set by the synaptic time constant, *τ*_{ s }. For *τ*_{ s } ≪ *τ*_{m}, as measured for fast currents generated by *AMPA* and *GABA*_{ A } receptors (Destexhe et al. 1998), we can model the input current as (Stern et al. 1992):

\({\Delta} I(t, t^{*}) = \frac{A}{{\Delta} t} \;\;\text{for}\;\; t^{*} \le t \le t^{*} + {\Delta} t, \;\;\text{and zero otherwise}, \) (3)

with Δ*t* ∼ *τ*_{ s }. This resembles a current which begins at *t* = *t*^{∗}, remains constant for a short time window, *t* ∈ [*t*^{∗},*t*^{∗} + Δ*t*], and vanishes after that. It transmits a net charge of *A* regardless of Δ*t*; it also converges to the Dirac delta function in the limit of Δ*t* → 0 (i.e. Δ*I*(*t*,*t*^{∗}) → *A* *δ*(*t* − *t*^{∗})). In this limit, the process (see Eq. (1)) converges to a jump-diffusion process (Kou and Wang 2003).

As mentioned, a key element of this article is to predict when the postsynaptic neuron spikes if the signaling input arrives at *t*^{∗}. We consider the last spike of the postsynaptic neuron as the origin of time, *t* = 0, and *analytically* predict when the *first postsynaptic spike* will happen (Fig. 1b). However, because of the stochastic term in the background input, *ξ*(*t*), we cannot predict the exact time of the next spike, but can describe its probability density, *J*(*V*_{ 𝜃 },*t*).
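This probabilistic picture can be explored numerically by simulating an ensemble of voltage trajectories with the Euler-Maruyama scheme; a minimal sketch with hypothetical parameter values (not taken from the paper's figures), using a square signaling pulse and the threshold-regime choice \(\bar {I}=V_{\theta }\):

```python
import numpy as np

# Monte-Carlo estimate of the first-passage time density J(V_theta, t):
# Euler-Maruyama integration of tau_m dV/dt = -V + I_bar + dI + xi(t),
# with <xi(t) xi(t')> = 2 D delta(t - t') and I_bar = V_theta.
tau_m, V_theta, D = 20.0, 20.0, 40.0      # ms, mV, mV^2*ms (hypothetical)
A, t_star, dt_sig = 10.0, 50.0, 1.0       # net charge (mV*ms), arrival, width
dt, T, n_trials = 0.1, 400.0, 5000
rng = np.random.default_rng(0)

V = np.zeros(n_trials)                    # all trajectories start at V = 0
alive = np.ones(n_trials, dtype=bool)     # not yet absorbed at the threshold
fpt = np.full(n_trials, np.nan)           # first-passage times

for k in range(int(T / dt)):
    t = k * dt
    dI = A / dt_sig if t_star <= t < t_star + dt_sig else 0.0
    drift = (-V + V_theta + dI) / tau_m
    noise = (np.sqrt(2.0 * D * dt) / tau_m) * rng.standard_normal(n_trials)
    V = np.where(alive, V + drift * dt + noise, V)
    crossed = alive & (V >= V_theta)
    fpt[crossed] = t + dt                 # record the crossing time
    alive &= ~crossed

fpt = fpt[~np.isnan(fpt)]                 # samples of the first-passage time
```

A normalized histogram of `fpt` then approximates *J*(*V*_{ 𝜃 },*t*); shrinking `dt` reduces the systematic bias of discrete-time threshold detection.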

Consider an ensemble of membrane voltage trajectories, *V*(*t*), governed by the LIF equation (see Fig. 1c, top-left). All trajectories begin from the same point (*V* = 0 at *t* = 0), but do not follow the same path, because of the different values realized for the stochastic background noise, *ξ*(*t*). The neuron initiates a spike if a trajectory passes the threshold voltage, *V*_{ 𝜃 }. Then *J*(*V*_{ 𝜃 },*t*), which we shortly call the *first-passage time density*, is the probability density function that a trajectory passes *V*_{ 𝜃 } at time *t*. However, to obtain *J*(*V*_{ 𝜃 },*t*), we have to know *P*(*V*,*t*), the probability density that a trajectory has the potential *V* at time *t*. This *membrane potential probability density* satisfies the Fokker-Planck (FP) equation (Risken 1984; Kardar 2007):

\(\frac{\partial P(V,t)}{\partial t} = \frac{\partial}{\partial V}\left[ \frac{V - \bar{I} - {\Delta} I(t,t^{*})}{\tau_{\mathrm{m}}}\, P(V,t) \right] + \frac{D}{\tau_{\mathrm{m}}^{2}}\, \frac{\partial^{2} P(V,t)}{\partial V^{2}}. \) (4)

Here, the temporal evolution of *P*(*V*,*t*) is governed by (i) a diffusion term, which is a signature of the stochastic input (i.e. *ξ*(*t*)), and (ii) a drift term, which represents both the leak and the non-stochastic currents (i.e. \(\bar {I}\) and Δ*I*(*t*,*t*^{∗})).

Threshold nonlinearity of neuronal spike generation is dictated as a boundary condition in Eq. (4). The LIF neuron *spikes* if it passes the threshold voltage. Since each membrane trajectory ends at the threshold, there is no neuron with *V* > *V*_{ 𝜃 }. In the continuum limit, this results in the *absorbing boundary condition* of *P*(*V* ≥ *V*_{ 𝜃 },*t*) = 0 (Gerstner et al. 2014). Here, we do not consider the reappearance of the trajectory from the resting potential after each spike occurs; because we are interested in neuron’s first spike only (Fig. 1c, down). This results in the absorbing boundary condition instead of the widely used periodic boundary condition to derive firing rate (Brunel and Hakim 1999; Brunel et al. 2001; Lindner and Schimansky-Geier 2001; Richardson and Swarbrick 2010).

Finally, we will obtain *P*(*V*,*t*) for *t* > 0 by solving Eq. (4) under this boundary condition once we specify an initial distribution of the membrane potential at time *t* = 0. Here we use *P*(*V*,0) = *δ*(*V* ), as we assumed that all membrane trajectories start from *V* = 0.

Unfortunately, the analytical solution of *P*(*V*,*t*), and consequently *J*(*V*_{ 𝜃 },*t*), is not attainable in general, even if we discard the signaling input from the equation. However, we may obtain the analytical solution at a particular regime known as the *threshold regime*, described in detail as follows. For a fixed noise strength (i.e., *D*), a simple ratio \(\bar {I}/V_{\theta }\) determines how *P*(*V*,*t*) and the corresponding first-passage time density *J*(*V*_{ 𝜃 },*t*) behave. The neuron spikes regularly if \(\bar {I}\) significantly exceeds *V*_{ 𝜃 }, because the high mean input robustly drives the neuron to its threshold. If, on the other hand, \(\bar {I} \ll V_{\theta }\), there would be occasional spikes whenever the noise or the signaling input happens to be strong enough to bridge the gap between \(\bar {I}\) and *V*_{ 𝜃 }. An interesting regime exists somewhere in-between; for \(\bar {I} \simeq V_{\theta }\), a modest signaling input or conventional noise can induce a spike of the postsynaptic neuron. This is the near-*threshold regime*. It was empirically observed (Shadlen and Newsome 1998; Tan et al. 2014) and suggested as a basis of the high variability in neural networks (van Vreeswijk and Sompolinsky 1996, 1998). The quest for a *closed-form analytical solution* for Eq. (4) also leads us to the very same regime. There exists a closed-form solution for *P*(*V*,*t*), and consequently for *J*(*V*_{ 𝜃 },*t*), if (i) \(\bar {I}=V_{\theta }\) and (ii) no signaling input is applied, Δ*I*(*t*,*t*^{∗}) = 0 (Wang and Uhlenbeck 1945; Sugiyama et al. 1970; Bulsara et al. 1996). Below, we describe the reason behind this peculiarity of the threshold regime and revisit the closed-form solution without the signaling input. We then extend this analytical solution to include the effect of the signaling input.

### 2.2 The first-passage time density in the absence of signaling input

We begin with the simpler condition in which the signaling input is turned off (Δ*I*(*t*,*t*^{∗}) = 0). Even in such a case, solutions for *P*(*V*,*t*) are, in general, available only in a non-closed form such as inverse Laplace transforms (Siegert 1951; Ricciardi and Sato 1988; Ostojic 2011). However, there exists a closed-form analytical solution for the particular case of the threshold regime, \(\bar {I}=V_{\theta }\) (Wang and Uhlenbeck 1945; Sugiyama et al. 1970; Tuckwell 1988).

If we neglect the absorbing boundary for the moment, then for trajectories initiating from *V* = *V*_{0} at time *t* = *t*_{0} there exists a closed-form analytical solution of Eq. (4) for *t* > *t*_{0}. It is the *free Green’s function*, and it reads (Uhlenbeck and Ornstein 1930):

\(G_{f}(V,t;\,V_{0},t_{0}) = \frac{1}{\sqrt{2\pi \sigma^{2}(t-t_{0})}}\, \exp\left[ -\frac{\left( V - \bar{I}\,\left(1 - r(t-t_{0})\right) - V_{0}\, r(t-t_{0}) \right)^{2}}{2\, \sigma^{2}(t-t_{0})} \right], \quad \sigma^{2}(t) = \frac{D}{\tau_{\mathrm{m}}}\left(1 - r^{2}(t)\right), \) (5)

where *V*_{0} and *t*_{0} quote the *initial condition*, and *r*(*t*) = exp[−*t*/*τ*_{m}]. The free Green’s function describes the probability density of membrane trajectories which all begin from the point **O**_{1} = (*t*_{0},*V*_{0}), but follow different paths due to the noise (see Fig. 1c, right). Since we have neglected the threshold for the moment, the trajectories do not end as they pass the threshold line, *V* = *V*_{ 𝜃 }. Thus we can freely consider any initiating point for the trajectories, even above the threshold line. Let it be \(\textbf {O}_{2}=(t_{0},2\bar {I}-V_{0})\), the mirror-image point of **O**_{1} with respect to the \(V=\bar {I}\) line (Fig. 1c, right). The probability density for this initiating point is \(G_{f}(V,t;\,2\bar {I}-V_{0},t_{0})\). The encouraging fact is that the two Green’s functions yield equal values on the mirror line: \(G_{f}(V=\bar {I},t;\,V_{0},t_{0})=G_{f}(V=\bar {I},t;\,2\bar {I}-V_{0},t_{0})\). Conclusively, we define the main Green’s function as:

\(G(V,t;\,V_{0},t_{0}) = G_{f}(V,t;\,V_{0},t_{0}) - G_{f}(V,t;\,2\bar {I}-V_{0},t_{0}). \) (6)

At the threshold regime, where \(\bar {I}=V_{\theta }\), the mirror line coincides with the threshold line, *V* = *V*_{ 𝜃 }; this means that *G*(*V*,*t*; *V*_{0},*t*_{0}) also satisfies the *absorbing boundary condition*, *G*(*V*_{ 𝜃 },*t*; *V*_{0},*t*_{0}) = 0. Thus we can utilize the analytical free Green’s functions to obtain an analytical solution under the absorbing boundary condition, but only at the threshold regime.
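The mirror-image argument can be checked numerically. Below is a sketch of the free Ornstein-Uhlenbeck propagator with the normalization implied by ⟨*ξ*(*t*)*ξ*(*t*′)⟩ = 2*D**δ*(*t* − *t*′); the parameter values are hypothetical:

```python
import numpy as np

tau_m, V_theta, D = 20.0, 20.0, 40.0   # hypothetical values
I_bar = V_theta                        # threshold regime: mirror line = threshold

def G_free(V, t, V0, t0):
    """Free (no-threshold) Gaussian propagator of the membrane potential."""
    r = np.exp(-(t - t0) / tau_m)
    mean = I_bar * (1.0 - r) + V0 * r
    var = (D / tau_m) * (1.0 - r**2)
    return np.exp(-(V - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# The propagators from O1 = (t0, V0) and its image O2 = (t0, 2*I_bar - V0)
# coincide on the mirror line V = I_bar, so their difference vanishes there.
V0, t0 = 5.0, 0.0
for t in (1.0, 10.0, 100.0):
    assert np.isclose(G_free(I_bar, t, V0, t0),
                      G_free(I_bar, t, 2.0 * I_bar - V0, t0))
```

Because the mirror line sits exactly at the threshold only when \(\bar {I}=V_{\theta }\), the same check fails for any other mean input, which is why the closed form is peculiar to the threshold regime.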

For trajectories starting from *V* = 0 at *t* = 0, the probability density of the membrane potential is simply:

\(P_{0}(V,t) = G(V,t;\,0,0). \) (7)

The probability that the neuron spikes between *t* and *t* + *dt* is proportional to the number of voltage trajectories which pass the threshold in [*t*,*t* + *dt*] (see Fig. 1c, left). This equals \(J_{0}(V,t)|_{V=V_{\theta }}dt\), where *J*_{0}(*V*,*t*) is the current density:

\(J_{0}(V,t) = \frac{\bar{I} - V}{\tau_{\mathrm{m}}}\, P_{0}(V,t) - \frac{D}{\tau_{\mathrm{m}}^{2}}\, \frac{\partial P_{0}(V,t)}{\partial V}. \) (8)

At *V* = *V*_{ 𝜃 }, Eq. (8) yields the *first-passage time density*; it simplifies to (Wang and Uhlenbeck 1945; Sugiyama et al. 1970; Tuckwell 1988; Bulsara et al. 1996):

\(J_{0}(V_{\theta },t) = \frac{V_{\theta }\, r(t)}{\tau_{\mathrm{m}}}\, \sqrt{\frac{2\,\tau_{\mathrm{m}}}{\pi D}}\; \frac{1}{\left(1 - r^{2}(t)\right)^{3/2}}\, \exp\left[ -\frac{\tau_{\mathrm{m}}\, V_{\theta }^{2}\, r^{2}(t)}{2\, D \left(1 - r^{2}(t)\right)} \right], \) (9)

where *r*(*t*) = exp[−*t*/*τ*_{m}]. Apart from a 1/*τ*_{m} pre-factor in Eq. (9), which carries its inverse-time dimensionality, the overall shape of the function is characterized by the dimensionless ratio \(D/(\tau _{\mathrm {m}} V_{\theta }^{2})\). This ratio quantifies the strength of the input background noise relative to the other competing factors. Other relevant quantities are also characterized by this ratio. For example, the maximum value of the first-passage time density, *J*_{0}(*V*_{ 𝜃 },*t*), occurs at \(t_{\max }=\tau _{\mathrm {m}}\,{\textbf {h}}(\,D/(\tau _{\mathrm {m}} V_{\theta }^{2})\,)\), where:

\({\textbf {h}}(x) = -\frac{1}{2} \ln \left[ \frac{1}{4}\left( 1 - \frac{1}{x} + \sqrt{\left(\frac{1}{x} - 1\right)^{2} + 8}\, \right) \right]. \) (10)

For weak enough noise (i.e. \(x=D/(\tau _{\mathrm {m}} V_{\theta }^{2}) \ll 1\)), this function simplifies to \(0.5\times \ln (\tau _{\mathrm {m}} V_{\theta }^{2}/D)\). Finally, it is important to extend this formalism to the more plausible sub/supra-threshold cases; in Appendix I, we show how a scaling approach helps us to do so.
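The closed-form density and its peak time can be evaluated directly; a sketch using the image-method expression for *J*_{0} at the threshold regime, with hypothetical parameters chosen so that \(D/(\tau _{\mathrm {m}} V_{\theta }^{2}) = 0.02\):

```python
import numpy as np

tau_m, V_theta, D = 20.0, 20.0, 160.0     # D/(tau_m*V_theta^2) = 0.02

def J0(t):
    """Threshold-regime first-passage density from the image method."""
    r = np.exp(-t / tau_m)
    var = (D / tau_m) * (1.0 - r**2)
    return (2.0 * V_theta * r / (tau_m * (1.0 - r**2))
            * np.exp(-V_theta**2 * r**2 / (2.0 * var))
            / np.sqrt(2.0 * np.pi * var))

t = np.linspace(0.01, 600.0, 200000)
dens = J0(t)
t_max = t[np.argmax(dens)]

# Weak-noise estimate: t_max ~ 0.5*tau_m*ln(tau_m*V_theta^2/D) ~ 39 ms here,
# and the density should integrate to one (every trajectory eventually spikes).
assert abs(t_max - 0.5 * tau_m * np.log(tau_m * V_theta**2 / D)) < 3.0
assert abs(np.trapz(dens, t) - 1.0) < 1e-2
```

Normalization to one reflects the fact that, with the boundary at the asymptotic mean, every trajectory crosses the threshold with probability one.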

### 2.3 The transient signaling input modifies the probability density and first-passage time density

We now study how the transient signaling input modifies *P*(*V*,*t*) and consequently *J*(*V*_{ 𝜃 },*t*). To obtain a clear causal picture, we rewrite Eq. (4) as:

\(\frac{\partial P(V,t)}{\partial t} = \frac{\partial}{\partial V}\left[ \frac{V - \bar{I}}{\tau_{\mathrm{m}}}\, P(V,t) \right] + \frac{D}{\tau_{\mathrm{m}}^{2}}\, \frac{\partial^{2} P(V,t)}{\partial V^{2}} - \frac{{\Delta} I(t,t^{*})}{\tau_{\mathrm{m}}}\, \frac{\partial P(V,t)}{\partial V}. \) (11)

The signaling input Δ*I*(*t*,*t*^{∗}) corrects the initial *threshold regime* answer of *P*_{0}(*V*,*t*) to *P*(*V*,*t*) = *P*_{0}(*V*,*t*) + Δ*P*(*V*,*t*); *P*_{0}(*V*,*t*) is the analytical solution of the membrane potential density in the absence of the signaling input, and Δ*P*(*V*,*t*) is the correction due to the signaling input. Δ*P*(*V*,*t*) would be zero if the signaling input did not exist, *A* = 0 (see Eq. (3)). For arbitrary signaling input, Δ*P*(*V*,*t*) is a function of *A*, the signaling input’s strength. This lets us write a Taylor series for Δ*P*(*V*,*t*) as:

\({\Delta} P(V,t) = \sum\limits_{n = 1}^{\infty } \delta P_{n}(V,t), \) (12)

where *δ**P*_{ n }(*V*,*t*) ∝ *A*^{ n }. Since Δ*P*(*V*,*t*) vanishes for *A* = 0, the constant term, *δ**P*_{n = 0}(*V*,*t*) ∝ *A*^{0}, is not included in the series. We plug \(P(V,t)=P_{0}(V,t)+ {\Sigma }_{n = 1}^{n=\infty } \delta P_{n}(V,t)\) into Eq. (11); for each *n*, we collect the terms proportional to *A*^{ n }, on both sides of the equality, as a separate equation. For *n* = 1 the equation reads:

\(\frac{\partial\, \delta P_{1}}{\partial t} = \frac{\partial}{\partial V}\left[ \frac{V - \bar{I}}{\tau_{\mathrm{m}}}\, \delta P_{1} \right] + \frac{D}{\tau_{\mathrm{m}}^{2}}\, \frac{\partial^{2}\, \delta P_{1}}{\partial V^{2}} - \frac{{\Delta} I(t,t^{*})}{\tau_{\mathrm{m}}}\, \frac{\partial P_{0}}{\partial V}. \) (13)

This is the *first-order perturbation* equation, as both of its sides are proportional to *A*^{1}. To address its boundary conditions, we note that Δ*P*(*V*,*t*) is zero before the occurrence of the signaling input (i.e. *t* < *t*^{∗}); consequently we obtain *δ**P*_{1}(*V*,*t* < *t*^{∗}) = 0. Moreover, the absorbing boundary condition at *V* = *V*_{ 𝜃 } results in Δ*P*(*V*_{ 𝜃 },*t*) = 0, from which we conclude that *δ**P*_{1}(*V*_{ 𝜃 },*t*) = 0. These let us use the aforementioned Green’s function, and write *δ**P*_{1}(*V*,*t*) as a Green’s integral over the source term, the right side of Eq. (13):

\(\delta P_{1}(V,t) = -{\int }_{t^{*}}^{t} dt_{0}\, \frac{{\Delta} I(t_{0},t^{*})}{\tau_{\mathrm{m}}} {\int }_{-\infty }^{V_{\theta }} dV_{0}\, G(V,t;\,V_{0},t_{0})\, \frac{\partial P_{0}(V_{0},t_{0})}{\partial V_{0}}. \) (14)

The input Δ*I*(*t*_{0},*t*^{∗}) is zero for *t*_{0} < *t*^{∗} or *t*_{0} > *t*^{∗} + Δ*t*; thus the *t*_{0} in *G*(*V*,*t*; *V*_{0},*t*_{0}) and (*∂*/*∂**V*)*P*_{0}(*V*_{0},*t*_{0}) always belongs to [*t*^{∗},*t*^{∗} + Δ*t*], a short time interval. As Δ*t* ≪ *τ*_{m}, we conclude that the two functions are *almost constant* during this time interval and approximate them with *G*(*V*,*t*; *V*_{0},*t*^{∗}) and (*∂*/*∂**V*)*P*_{0}(*V*_{0},*t*^{∗}), respectively. Since \({\int }_{t^{*}}^{t^{*}+{\Delta} t} {\Delta} I(t_{0},t^{*})\, dt_{0} = A\), this further simplifies Eq. (14):

\(\delta P_{1}(V,t) = -\frac{A}{\tau_{\mathrm{m}}}\, {\int }_{-\infty }^{V_{\theta }} dV_{0}\, G(V,t;\,V_{0},t^{*})\, \frac{\partial P_{0}(V_{0},t^{*})}{\partial V_{0}}. \) (15)

For *n* ≥ 2, the *n*th-*order perturbation* equation is the same as Eq. (13) with *δ**P*_{1}(*V*,*t*) and *P*_{0}(*V*,*t*) replaced by *δ**P*_{ n }(*V*,*t*) and *δ**P*_{n− 1}(*V*,*t*), respectively. A recursive formalism, then, yields:

\(\delta P_{n}(V,t) = \frac{1}{n!}\left( -\frac{A}{\tau_{\mathrm{m}}} \right)^{n} {\int }_{-\infty }^{V_{\theta }} dV_{0}\, G(V,t;\,V_{0},t^{*})\, \frac{\partial^{n} P_{0}(V_{0},t^{*})}{\partial V_{0}^{n}}, \qquad t > t^{*} + {\Delta} t. \) (16)

These corrections, summed over all orders, yield *P*(*V*,*t*). For *t* > *t*^{∗} + Δ*t*, for example, it reads:

\(P(V,t) = {\int }_{-\infty }^{V_{\theta }} dV_{0}\, G(V,t;\,V_{0},t^{*}) \left[ P_{0}(V_{0},t^{*}) + \sum\limits_{n = 1}^{\infty } \frac{1}{n!}\left( -\frac{A}{\tau_{\mathrm{m}}} \right)^{n} \frac{\partial^{n} P_{0}(V_{0},t^{*})}{\partial V_{0}^{n}} \right]. \) (17)

The Taylor series inside the bracket sums to *P*_{0}(*V*_{0} − *A*/*τ*_{m},*t*^{∗}) − *P*_{0}(*V*_{0},*t*^{∗}); thus for *t* > *t*^{∗} + Δ*t*:

\(P(V,t) = {\int }_{-\infty }^{V_{\theta }} dV_{0}\, G(V,t;\,V_{0},t^{*})\, P_{0}(V_{0} - A/\tau_{\mathrm{m}},\, t^{*}). \) (18)
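The summation of the perturbation series into a rigid shift of the initial density can be verified numerically; a minimal sketch with a unit Gaussian standing in for *P*_{0}(*V*,*t*^{∗}) and an arbitrary shift *a* = *A*/*τ*_{m}:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import HermiteE

# Check: sum_{n>=1} (1/n!) (-a)^n d^n f/dV^n = f(V - a) - f(V).
# For a standard Gaussian, f^(n)(V) = (-1)^n He_n(V) f(V), with He_n the
# probabilist's Hermite polynomials.
a, V = 0.5, 1.3
f = lambda x: np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

series = sum((-a) ** n / factorial(n) * (-1.0) ** n * HermiteE.basis(n)(V) * f(V)
             for n in range(1, 25))
assert np.isclose(series, f(V - a) - f(V), atol=1e-9)
```

The identity holds term by term for any smooth density, which is what turns the recursive corrections of the perturbation expansion into the shifted initial condition above.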

For *t*^{∗} ≤ *t* ≤ *t*^{∗} + Δ*t*, a similar reasoning yields the corresponding expression in which only the fraction (*t* − *t*^{∗})/Δ*t* of the net charge *A* has been delivered; and for *t* < *t*^{∗}, *δ**P*_{ n }(*V*,*t*) = 0, so that *P*(*V*,*t*) = *P*_{0}(*V*,*t*). We then use the combination rule for propagators,

\({\int }_{-\infty }^{V_{\theta }} dV_{1}\, G(V,t;\,V_{1},t_{1})\, G(V_{1},t_{1};\,V_{0},t_{0}) = G(V,t;\,V_{0},t_{0}), \)

with which the full solution, *P*(*V*,*t*) = *P*_{0}(*V*,*t*) + Δ*P*(*V*,*t*), simplifies to a single Green’s integral over the shifted initial density.

Finally, we differentiate *P*(*V*,*t*) as in Eq. (8) to obtain the corrected first-passage time density in the presence of the signaling input:

\(J(V_{\theta },t) = -\frac{D}{\tau_{\mathrm{m}}^{2}}\, \frac{\partial P(V,t)}{\partial V}\bigg|_{V=V_{\theta }}. \)

Although the solution holds for *arbitrary values* of the transient signaling input, here we focus on the effect of a strong excitatory/inhibitory signaling input on the postsynaptic neuron’s response. Figure 1c (top-left) shows how voltage trajectories almost uniformly increase during the arrival of an excitatory signaling input (i.e. *t* ∈ [*t*^{∗},*t*^{∗} + Δ*t*]). The short period of signal arrival (i.e. Δ*t* ≪ *τ*_{m}) guarantees this uniform increase and results in an overall rise of *A*/*τ*_{m}. Consequently, if the membrane potential of a particular trajectory is larger than *V*_{ 𝜃 } − *A*/*τ*_{m} upon signal arrival, it passes the threshold during signal arrival: the neuron fires. If, on the other hand, it is smaller than *V*_{ 𝜃 } − *A*/*τ*_{m}, the membrane potential does not cross the threshold by the additional signaling input: the neuron does not fire. This simple picture helps us to understand the results shown in Fig. 2.

Figure 2a and b show that excitatory and inhibitory signaling inputs can result in quite different spiking behaviors, depending on their arrival time. The dashed black curve, in both the right and left panels of Fig. 2a, b, shows the first-passage time density in the absence of the signaling input, *J*_{0}(*V*_{ 𝜃 },*t*). In the left panels, the signaling input arrives at *t*^{∗} = 50 ms; it has *A* = 10 mV × ms, which increases or decreases the membrane potential by *A*/*τ*_{m} = 0.5 mV. In this case, the signaling input shifts the original density leftward for excitation or rightward for inhibition (thick red curves). In contrast, the right panels show the spiking density when the signaling input occurs at *t*^{∗} = 100 ms. This is close to \(t_{\max }= 0.5\times \tau _{\mathrm {m}}\,\ln (\,\tau _{\mathrm {m}} V_{\theta }^{2}/D\,)\simeq 93~\text {ms}\), at which *J*_{0}(*V*_{ 𝜃 },*t*) is maximized. Therefore, when the signal arrives, many trajectories are already close to the threshold potential (i.e., larger than *V*_{ 𝜃 } − *A*/*τ*_{m}). All such trajectories will spike, due to the aforementioned excitatory rise of *A*/*τ*_{m}; this results in a sudden sharp increase of the first-passage time density at *t* = *t*^{∗} (see Fig. 2a, right, and its inset). In contrast, the inhibitory input prevents all trajectories from reaching the threshold, which makes a sharp depletion in the first-passage time density (Fig. 2b, right). The effect of the inhibitory input fades away after a while and the trajectories again approach the threshold; this leads to the second rise of the first-passage time density (Fig. 2b, right). These theoretical predictions were confirmed by numerical simulation of the LIF model (green dots). It is also important to see what happens if the signaling input comes much later than *t*_{max}. In that case, most of the trajectories have already reached the threshold voltage, and only a tiny portion of them remains. Consequently, the signaling input does not induce much change in *J*(*V*_{ 𝜃 },*t*). In other words, if the signaling input comes too late, the postsynaptic neuron has most likely already fired, and the signal cannot change its first spike-time any more (Fig. 2c). This figure also demonstrates that the first-passage time density undergoes the maximum change when the excitatory input arrives around *t*_{max}, the peak time of the no-signaling-input case.

It is also important to note that there is a critical value of the excitatory signal strength above which the postsynaptic neuron fires regardless of the signal arrival time. For a signal with *A* ≥ *τ*_{m}*V*_{ 𝜃 }, the aforementioned rise would be at least *V*_{ 𝜃 }, which results in spiking of almost all trajectories, irrespective of the signal arrival time. This introduces another dimensionless ratio, *A*/(*τ*_{m}*V*_{ 𝜃 }), which quantifies the strength of the signaling input.

Figure 3 examines the role of the scaled diffusion coefficient, \(D/(\tau _{\mathrm {m}} V_{\theta }^{2})\). For stronger noise, the first-passage time density peaks earlier, i.e. *t*_{max} shifts to the left. So signaling inputs that arrive earlier than *t*_{max}, at 50 ms, can modulate the density (Fig. 3a, left). When the signaling input arrives at time *t*^{∗} = 100 ms, which is close to *t*_{max}, the first-passage time density is modulated in the low-diffusion regimes (Fig. 3a, right). In the case of an inhibitory presynaptic spike (Fig. 3b), for a high scaled diffusion coefficient the first-passage time density decreases at the time of the presynaptic spike but, because of the high diffusion, it recovers quickly. As the scaled diffusion decreases, the recovery from inhibition takes longer, which opens a gap between the two separated parts of the distribution.

When the excitatory signaling input arrives (at *t*^{∗}) much earlier than *t*_{max}, the density moves to the left as the amplitude increases (Fig. 4a, left), until the amplitude is large enough to make almost all trajectories spike at the same time (Fig. 4a, middle). When the inhibitory signaling input arrives near *t*_{max}, the density breaks into two parts (Fig. 4b, right). As the amplitude increases, the spiking density is zero not only at the time of the presynaptic spike but also for a duration after it; this duration depends nonlinearly on the strength of the signaling input (Fig. 4b, right).

### 2.4 Arbitrary shapes of the transient signaling input

So far we have considered a square-shaped signaling input; the formalism, however, extends to other transient shapes, such as an exponentially decaying input, Δ*I*(*t*,*t*^{∗}) ∝ exp(−(*t* − *t*^{∗})/*τ*_{s}) (see Appendix II for the derivation and results). Figure 5a and b show how an exponential transient signaling input modifies the first-passage time density.

For early signal arrival, the modification due to excitatory (inhibitory) input is a leftward (rightward) shift, very similar to the changes we had for the square input (Fig. 2a, b). For an excitatory input at *t*^{∗} = 100 ms (Fig. 5a, right) there is a jump upon signal arrival; compared to the square excitatory input (Fig. 2a, right), the jump is less sharp but wider. Similarly, the inhibitory exponential input at *t*^{∗} = 100 ms induces a fall (Fig. 5b, right), similar to but less steep than the fall observed for the square inhibitory input (Fig. 2b, right). The numerical simulations verify these analytic results well (green circles in Fig. 5a, b).

We can also suggest a *general formula* for the probability density in the presence of an *arbitrary input current*, Δ*I*(*t*,*t*^{∗}), which arrives at *t*^{∗}. If the duration of the signal is sufficiently shorter than the membrane time constant (i.e. *τ*_{s} ≪ *τ*_{m}), we suggest keeping the probability density in the form of a Green’s integral over a shifted initial density, with the net charge *A* replaced by the charge accumulated by the arbitrary input.

As an illustration, Fig. 5c shows the result for a *Gamma-function* input current,

\({\Delta} I(t,t^{*}) \propto \left( \frac{t-t^{*}}{\tau_{\mathrm{s}}} \right)^{\gamma } \exp\left( -\frac{t-t^{*}}{\tau_{\mathrm{s}}} \right), \qquad t \ge t^{*}, \)

normalized by Euler’s Gamma function, Γ(*γ* + 1) (Davis 1959), so that it carries the net charge *A*. The signaling inputs are shown in the insets of Fig. 5c, top (bottom) for *γ* = 1 (*γ* = 0.25). The inputs arrive at *t*^{∗} = 100 ms, and have a short duration of *τ*_{s} = 2 ms ≪ *τ*_{m} = 20 ms; we recognize a good agreement between simulation and analytical results. The method is also extendable to the case in which more than one presynaptic spike arrives; the derivation for multiple presynaptic spikes comes in Appendix III.
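The net-charge normalization of a Gamma-shaped input can be checked numerically; a sketch assuming (as a convention adopted here) that the input is divided by Γ(*γ* + 1) so that its net charge is *A*, with hypothetical values *A* = 10 mV·ms and *τ*_{s} = 2 ms:

```python
import numpy as np
from math import gamma

A, tau_s, t_star = 10.0, 2.0, 100.0      # mV*ms, ms, ms (hypothetical)

def delta_I(t, gamma_s):
    """Gamma-shaped transient input, normalized to carry net charge A."""
    u = np.clip((t - t_star) / tau_s, 0.0, None)
    return A / (tau_s * gamma(gamma_s + 1.0)) * u**gamma_s * np.exp(-u)

t = np.linspace(t_star, t_star + 40.0 * tau_s, 200001)
for g in (1.0, 0.25):
    charge = np.trapz(delta_I(t, g), t)
    assert abs(charge - A) < 1e-2 * A     # net charge ~ A for every shape
```

Since the delivered charge is independent of the shape parameter, different *γ* values change only the temporal profile of the perturbation, not its total strength.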

We then expand the membrane potential density around *P*_{0}(*V*) and calculate the integral in Eq. (28) (see Appendix IV). An expression is derived for the first-passage time density in the presence of an arbitrary signaling input (*τ*_{ s } ≪ *τ*_{m}), Eq. (30),

in which the auxiliary functions *κ*(*t*,*t*^{∗}), *ω*(*t*,*t*^{∗}) and *φ*_{±}(*t*,*t*^{∗}) are defined in Eq. (31).

The first-passage time density, Eq. (30), is the probability density of the first spike of the postsynaptic neuron when the signaling input arrives at *t*^{∗}; here *t*^{∗} measures the time elapsed from the *last spike of the postsynaptic neuron* to the *signal arrival*.

There are experimental studies in which the postsynaptic neuron’s spike times both before and after signal arrival are recorded (Blot et al. 2016). In such studies, *t*^{∗} is observed or assumed. For example, studies regarding *timing-dependent plasticity* assume knowledge of the last postsynaptic spike (Froemke and Dan 2002; Wang et al. 2005). The timing of the last spike may also be learned by intrinsic mechanisms (Johansson et al. 2016; Jirenhed et al. 2017). In such cases, it would be possible to directly apply our formalism to the analysis. However, there are experimental studies which do not assume such knowledge of the last postsynaptic spike timing (Panzeri et al. 2014); a downstream neuron may have no access to the information of the neuron’s former spikes. Therefore, we should consider all possible values of *t*^{∗}, a statistical procedure known as marginalization.

### 2.5 First spike-timing density after signaling input’s arrival

We observed that the timing of the signaling input, relative to the last postsynaptic spike, significantly affects the first-spiking density of the postsynaptic neuron. However, the postsynaptic neuron (as well as downstream neurons) may have no access to this elapsed time. Assume that we monitor the arrival of a signaling input; it is then more convenient to consider the arrival time as the time origin. Therefore, we reset the time origin accordingly and ask: what is the probability density *f*(*τ*) that the postsynaptic neuron fires at time *τ* after the signaling input’s arrival?

First, we need the probability that the last postsynaptic spike occurred between *t*^{∗} and *t*^{∗} + *dt*^{∗} before the signal arrival, *P*_{back}(*t*^{∗})*dt*^{∗}. Following the language of voltage trajectories, this probability can be computed by considering that (A) a trajectory has begun from *V*_{r} = 0 in the mentioned time window, but (B) it has not yet reached the threshold at *V* = *V*_{ 𝜃 }. The answer comes as the multiplication of the probabilities associated with the two conditions (A) and (B):

\(P_{\text {back}}(t^{*}) = \frac{1}{\langle T \rangle }\, \left( 1 - {{\int }_{0}^{t^{*}}} J_{0}(V_{\theta },s)\, ds \right), \) (32)

where ⟨*T*⟩ is the mean interspike interval of the postsynaptic neuron and *J*_{0}(*V*_{ 𝜃 },*s*) is given by Eq. (9).

*P*_{back}(*t*^{∗}) is known as the density function of the *backward recurrence* time in point process theory (Cox 1962).
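The backward-recurrence density can be tabulated directly from *J*_{0}; a sketch (with hypothetical parameters) using the renewal-theory form, survival function divided by the mean interspike interval:

```python
import numpy as np

tau_m, V_theta, D = 20.0, 20.0, 160.0    # hypothetical values

def J0(t):
    """Threshold-regime first-passage density (image method)."""
    r = np.exp(-t / tau_m)
    var = (D / tau_m) * (1.0 - r**2)
    return (2.0 * V_theta * r / (tau_m * (1.0 - r**2))
            * np.exp(-V_theta**2 * r**2 / (2.0 * var))
            / np.sqrt(2.0 * np.pi * var))

dt = 0.05
t = np.arange(dt, 800.0, dt)
surv = 1.0 - np.cumsum(J0(t)) * dt       # P(no spike yet at t*)
mean_isi = np.sum(surv) * dt             # <T> = integral of the survival fn
P_back = surv / mean_isi                 # backward-recurrence density

assert abs(np.trapz(P_back, t) - 1.0) < 1e-2   # a proper density
assert np.all(np.diff(P_back) <= 0)            # monotonically decaying
```

Monotone decay is expected: the longer ago the last spike, the less likely that no spike has occurred since.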

Second, we ask what *portion* of the trajectories, addressed above, will reach the threshold at *τ* to *τ* + *dτ* after signal arrival. This sets a temporal distance of *t*^{∗} + *τ* between the beginning point of the trajectories, at *V* = *V*_{r} = 0, and their spiking point at *V* = *V*_{ 𝜃 }. The answer is a conditional probability which reads:

\(f(\tau | t^{*}) = \frac{J(V_{\theta },\, \tau + t^{*})}{1 - {{\int }_{0}^{t^{*}}} J_{0}(V_{\theta },s)\, ds}. \) (33)

Note that *J*(*V*_{ 𝜃 },*τ* + *t*^{∗}) is given by Eq. (30), where the signaling input arrives at *t*^{∗}, after the last postsynaptic spike. The denominator is a normalization term to achieve \({\int }_{0}^{\infty } f (\tau | t^{*} )d\tau = 1\).

Equipped with *f*(*τ*|*t*^{∗}) and *P*_{back}(*t*^{∗}), we should integrate over all possible values of the backward recurrence time, *t*^{∗}, to obtain *f*(*τ*):

\(f(\tau ) = {\int }_{0}^{\infty } f(\tau | t^{*})\, P_{\text {back}}(t^{*})\, dt^{*}. \) (34)
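The marginalization over *t*^{∗} can be checked end-to-end in the no-signal case, where *J* → *J*_{0} and the marginal *f*(*τ*) must collapse onto *P*_{back}(*τ*) itself (the first spike after a random inspection time of a renewal process); a sketch with hypothetical parameters:

```python
import numpy as np

tau_m, V_theta, D = 20.0, 20.0, 160.0    # hypothetical values

def J0(t):
    """Threshold-regime first-passage density (image method)."""
    r = np.exp(-t / tau_m)
    var = (D / tau_m) * (1.0 - r**2)
    return (2.0 * V_theta * r / (tau_m * (1.0 - r**2))
            * np.exp(-V_theta**2 * r**2 / (2.0 * var))
            / np.sqrt(2.0 * np.pi * var))

dt = 0.05
t = np.arange(dt, 1200.0, dt)
surv = np.clip(1.0 - np.cumsum(J0(t)) * dt, 1e-12, None)  # survival fn
P_back = surv / (np.sum(surv) * dt)      # backward-recurrence density

n = len(t) // 2                          # integrate t* up to ~600 ms (30 tau_m)
taus = t[:n:40]
# f(tau) = int f(tau | t*) P_back(t*) dt*, with f(tau|t*) = J0(tau+t*)/surv(t*)
f_tau = np.array([np.sum(J0(tau + t[:n]) / surv[:n] * P_back[:n]) * dt
                  for tau in taus])
assert np.allclose(f_tau, P_back[:n:40], rtol=0.05, atol=1e-5)
```

The collapse of *f*(*τ*) onto *P*_{back}(*τ*) in the absence of a signal is a useful consistency check before adding the signaling-input correction to *J*.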

The result in Eq. (34) presents the probability density of first-spike timing after input arrival; the time of input arrival (stimulus onset) should be known by some mechanisms in the cortex (Van Rullen et al. 2005; Panzeri et al. 2014).

The probability density *P*_{back}(*t*^{∗}) successfully connects us to the existing stationary solution for the membrane potential (Brunel and Hakim 1999; Brunel 2000). We should address the probability of finding the membrane potential between *V*_{0} and *V*_{0} + *dV*_{0} at an arbitrary observation time. We split the task into two questions: First, what is the probability that the last postsynaptic spike happened in a time window of *t*^{∗} to *t*^{∗} + *dt*^{∗} before the observation time? Second, what is the conditional probability that a voltage trajectory, which initiated *t*^{∗} before the observation time, has a potential *V* ∈ [*V*_{0},*V*_{0} + *dV*_{0}] at the time of observation? The answer to the first question is simply given by the probability density of the backward recurrence time. The answer to the second question is a conditional probability:

\(\tilde {p}(V_{0} | t^{*}) = \frac{P_{0}(V_{0},\, t^{*})}{{\int }_{-\infty }^{V_{\theta }} P_{0}(V,\, t^{*})\, dV}, \) (35)

*P*

_{0}(

*V*,

*t*

^{∗})is given by Eq. (7). The denominator is a normalizing factor to ensure: \({\int }_{-\infty }^{V_{\theta }} \tilde {p}(V_{0} | t^{*}) dV_{0}= 1\). It has the very same origin we mentioned for the denominator in Eq. (33); in fact, it is easy to verify that the two denominators are equal, due to the conservation of probability: \({\int }_{-\infty }^{V_{\theta }} P_{0}(V, t^{*}) dV = 1 - {\int }_{0}^{t^{*}} J_{0}(V_{\theta },s) ds\). We combine the answers of two questions, and obtain the

*stationary probability density*as:

*P*_{s}(*V*_{0}) nicely coincides with the existing stationary solution found by Brunel and Hakim (Brunel and Hakim 1999; Brunel 2000). To use their solution, we simply set the mean input current equal to the threshold potential, \(\bar {I}=V_{\theta }\).

*P*_{s}(*V*_{0}) provides an alternative approach to find *f*(*τ*). It determines the probability density that the postsynaptic neuron has a membrane potential of *V* = *V*_{0} upon signal arrival (i.e. *t* = *t*^{∗}). We have also obtained how the probability density evolves after signal arrival (i.e. *t* > *t*^{∗}) for a square (see Eq. (24)) or exponential (see Eq. (55)) signaling input. We note that the framework of solutions which results in Eq. (24) or Eq. (55) does not depend on the initial choice of *P*_{0}(*V*_{0},*t*) or *P*_{s}(*V*_{0}). Consequently, if we want to determine the first-spiking density with no prior knowledge about the last postsynaptic spike, we should simply replace *P*_{0}(*V*_{0},*t*^{∗}) with *P*_{s}(*V*_{0}) (see Appendix V). This lets us follow our expression for *J*(*V*_{ 𝜃 },*t*) in the presence of an arbitrary transient input, Eq. (30), and obtain *f*(*τ*) accordingly:

We compute *f*(*τ*) using both Eq. (37), dashed lines, and Eq. (34), full lines. The two sets of curves coincide nicely, which shows the consistency of the two approaches. Each approach has its own advantage. Since Eq. (34) has just one temporal integral, and *J*(*V*_{ 𝜃 },*τ*) is already well simplified in Eq. (30), it is computationally easier and faster to work with. At first glance, Eq. (37) also has one temporal integral; however, there is another integral in *P*_{s}(*V*) needed to reach the stationary solution (see Eq. (36)). Consequently, Eq. (34) is computationally faster to use, but Eq. (37) provides more intuition about how *f*(*τ*) behaves.

### 2.6 Fisher information

The analytical first-spiking density after input arrival, Eq. (34), allows us to quantify the *minimum error* of any unbiased estimator that decodes the signaling input's properties, such as its amplitude (input strength). By the Cramér-Rao inequality (Rao 1973), the Fisher information provides a lower bound on the estimator's variance (\(\sigma ^{2}_{\text {est}}\geq 1/\mathcal {I}_{FI} \)). Applied to the spike-timing density, maximizing the Fisher information therefore minimizes the error of decoding an input parameter (e.g. the signal's amplitude) from the spiking activity. Spike timing of the postsynaptic neuron contains information that the spike count does not carry (Rieke et al. 1999; van Vreeswijk 2001; Toyoizumi et al. 2006); indeed, discarding spike-timing information (specifically first-spike timing) leads to a loss of information (Panzeri et al. 2001). Hence, we investigate the Fisher information based on spike timing with respect to the strength of the signaling input. In this scenario, the decoder must know the input arrival time and the first spike time after it. As discussed above, the input arrival time may be available as a time reference through, for example, network oscillations or other mechanisms in cortical/sensory systems (Van Rullen et al. 2005; Panzeri et al. 2014). Here, depending on the level of noise, we want to find the amplitudes of signaling inputs for which an optimal decoder can make the best possible discrimination.

The calculation involves the derivative of the first-spiking density with respect to the amplitude, as well as *f*(*τ*) itself (see Eq. (34)). We assume an exponentially decaying signaling input (i.e. Δ*I*(*t*,0) ∝ exp(−*t*/*τ*_{s}); see Eq. (46) for details).
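The Fisher information with respect to the amplitude is a one-dimensional quadrature over *τ* of the standard integrand (*∂f*(*τ*)/*∂A*)²/*f*(*τ*). Below is a minimal numerical sketch of that quadrature. For self-containedness it is tested on a hypothetical stand-in density, an exponential first-spike density with rate *A* (not the paper's Eq. (34)), whose Fisher information is known in closed form, 1/*A*².

```python
import numpy as np

def fisher_information(f, A, taus, dA=1e-4):
    """Numerical Fisher information I_FI(A) = ∫ (∂f(τ;A)/∂A)^2 / f(τ;A) dτ,
    using a central finite difference in A and the trapezoid rule on `taus`."""
    f0 = f(taus, A)
    dfdA = (f(taus, A + dA) - f(taus, A - dA)) / (2.0 * dA)
    # guard against division by (numerically) zero density in the far tail
    integrand = np.where(f0 > 1e-300, dfdA ** 2 / np.maximum(f0, 1e-300), 0.0)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(taus))

# toy check: for f(τ; A) = A exp(-A τ) the Fisher information is exactly 1/A²
def f_toy(tau, A):
    return A * np.exp(-A * tau)

taus = np.linspace(0.0, 100.0, 200001)
I_half = fisher_information(f_toy, A=0.5, taus=taus)  # exact value: 4.0
I_two = fisher_information(f_toy, A=2.0, taus=taus)   # exact value: 0.25
```

In practice `f_toy` would be replaced by the analytic density of Eq. (34) evaluated on a grid; the finite-difference and quadrature machinery stays the same.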

Figure 6a, top, shows the Fisher information as a function of two scaled variables: amplitude, *A*/(*τ*_{m}*V*_{ 𝜃 }), and noise level, \(D / (\tau _{\mathrm {m}}V_{\theta }^{2})\). The dashed black lines locate the local maxima. At high noise levels, \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})>0.088\), the Fisher information is maximized at a single amplitude. This single maximum, however, splits into two maxima as the noise decreases. Figure 6a, bottom, depicts the same \(\mathcal {I}_{\text {FI}}(A)\) as a function of the signal's amplitude for selected values of the noise level. There are two distinct maxima in the dark-red curve, \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})\simeq 0.01\). The two peaks, however, go down and approach each other as the noise level is increased; they finally merge into *one peak* for \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})\gtrsim 0.09\), the blue and dark-blue curves.

The two maxima that appear in the low noise regime show that noise plays a major role in optimal decoding: in contrast to the high noise regime, at low noise the best discrimination occurs for two distinct input strengths. The crucial role of noise has also been studied in the context of mutual information, where the maximally informative solution for a neural population splits into two as the noise level decreases (Kastner et al. 2015). Figure 6a shows two branches for the maximum of the Fisher information. The left branch indicates that the maximizing amplitude diminishes as noise decreases. A similar behavior has been seen using an extension of the *perfect integrate-and-fire* model (Levakova et al. 2016); however, those authors observed a single maximum, not two. Therefore, the existence of the second maximum, caused by strong signaling input, is less expected and needs more exploration. Here, we suggest a heuristic explanation for the existence of the second peak at *strong amplitudes* in low noise levels.

The Fisher information (Eq. (38)) has an integral over *τ*; we may expect that maximizing its integrand over *A*, for certain domains of *τ*, maximizes the whole \(\mathcal {I}_{FI}(A)\). The integrand is a fraction with *∂f*(*τ*)/*∂A* in its numerator and *f*(*τ*) in its denominator. Figure 6c shows, on a *logarithmic scale*, how *f*(*τ*) changes as *A* increases. For large signal amplitudes, as *A*/(*τ*_{m}*V*_{ 𝜃 }) varies from 0.7 to 1, we see no significant change of *f*(*τ*) in the 0 ≤ *τ* ≲ 2.5*τ*_{s} domain. On the contrary, for \(\tau \gtrsim 2.5 \tau _{\mathrm {s}}\), we see a significant decrease in *f*(*τ*); increasing *A*/(*τ*_{m}*V*_{ 𝜃 }) by a constant step of 0.1 always results in a significant downward shift of *f*(*τ*). The shift in the last step (*A*/(*τ*_{m}*V*_{ 𝜃 }) : 0.9 → 1) appears, on the logarithmic scale, almost twice as large as the shift in the previous step. This picture suggests that as *A*/(*τ*_{m}*V*_{ 𝜃 }) → 1, *f*(*τ*) drastically decreases, whereas *∂f*(*τ*)/*∂A* remains finite. This results in the growth of the integrand, and hence of the integral, over the \(\tau \gtrsim 2.5 \tau _{\mathrm {s}}\) domain.

In the low noise limit, the signaling input effectively lifts the membrane potential toward the threshold by *A*/*τ*_{m}. Consequently, those trajectories which are closer to the threshold than *A*/*τ*_{m} upon signal arrival will fire immediately. What remains to fire afterward are the trajectories which were below *V*_{ 𝜃 } − *A*/*τ*_{m} upon signal occurrence. This picture lets us introduce two distinct sources for *f*(*τ*), during and after signal arrival. \(F_{\text {during}}(A)={\int }_{V_{\theta }-A/\tau _{\mathrm {m}}}^{V_{\theta }} P_{\mathrm {s}}(V_{0})dV_{0}\) measures the portion of trajectories which fire during signal arrival (e.g. 0 < *τ* < 2.5*τ*_{s}), whereas \(F_{\text {after}}(A)={\int }_{-\infty }^{V_{\theta }-A/\tau _{\mathrm {m}}} P_{\mathrm {s}}(V_{0})dV_{0}\) measures the portion which will reach the threshold after the signal arrival. For the case of an exponentially decaying input, this roughly implies that:

This rough estimate does not determine how *f*(*τ*) behaves after signal arrival, but it provides a hint on how *f*(*τ*) varies with *A*. Figure 6c, however, helps us go one step further; it shows a linear tail for *f*(*τ*) on the *logarithmic scale*, *f*(*τ*) ∝ exp(−*ατ*), and all tails show almost the same slope, \(\alpha \sim \tau _{\mathrm {m}}^{-1}\). This decouples the dependence of *f*(*τ*) on *A* from its temporal dependence: *f*(*τ*) ∝ *F*_{after}(*A*) × exp(−*ατ*). Consequently, we can estimate *∂f*(*τ*)/*∂A*, which reads (*∂F*_{after}(*A*)/*∂A*) × exp(−*ατ*). The derivative *∂F*_{after}(*A*)/*∂A* is simply − (1/*τ*_{m})*P*_{s}(*V*_{ 𝜃 } − *A*/*τ*_{m}). These points clarify how the integrand in Eq. (38) varies with *A* in the 2.5*τ*_{s} < *τ* domain. The integrand, and the whole integral, would behave like:

The right side of Eq. (40) has a simple geometrical interpretation. Consider the green curve in Fig. 6b (\(D/(\tau _{\mathrm {m}}V_{\theta }^{2})= 0.2\)): the denominator of Eq. (40) equals the hatched area below the curve, while its numerator is simply the square of the height of that curve, *P*_{s}(*V*). As *A*/*τ*_{m} → *V*_{ 𝜃 }, the height of the curve remains finite (i.e. *P*_{s}(0)), whereas the hatched area decreases to \({\int }_{-\infty }^{0} P_{\mathrm {s}}(V_{0})\,dV_{0}\).

We compare how the curve changes as the noise decreases from the green to the red curve (Fig. 6b), \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})\,= 0.2\rightarrow 0.04\). There is almost a ratio of 2 between the heights of the curves at *V* = 0, which means the numerator decreases by a factor of 1/2^{2}. The hatched area, however, decreases by a factor larger than 4, so the right side of Eq. (40) grows as the noise level decreases. This trend is much stronger as the noise level further decreases to \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})\,= 0.004\); the height decreases by a factor of 0.63, whereas the enclosed area shrinks to something hardly recognizable. In fact, we can show that the right side of Eq. (40) diverges like \((\tau _{\mathrm {m}}V_{\theta }^{2}/D)/\ln {(\tau _{\mathrm {m}}V_{\theta }^{2}/D)}\) as the noise level approaches zero. So the integral of the Fisher information over the *after signal arrival* domain should diverge if both *A*/*τ*_{m} → *V*_{ 𝜃 } and \(D/(\tau _{\mathrm {m}}V_{\theta }^{2}) \rightarrow 0\). This intuitive picture explains why the second peak arises for large amplitudes in the low diffusion regime.

Figure 6a shows that for weak amplitudes (0.02 ≤ *A*/(*τ*_{m}*V*_{ 𝜃 }) < 0.1), the Fisher information decreases monotonically as the noise level increases. The same holds when *A*/(*τ*_{m}*V*_{ 𝜃 }) ≃ 1. For the intermediate amplitudes, however, the Fisher information is a non-monotonic function of the noise level (see the dashed lines in Fig. 6a, which trace the maximum of the function); there is a certain noise level that maximizes the Fisher information (stochastic resonance) (Bulsara et al. 1991).

Finally, we can associate the two scaled diffusion and amplitude parameters with measurements from neural data. The diffusion coefficient relates to the variance of the noise distribution as *D* = *σ*^{2}*τ*_{m}/2, so the scaled diffusion coefficient reads \(D/(\tau _{\mathrm {m}}V_{\theta }^{2})=\sigma ^{2}/ (2V_{\theta }^{2})\). If the variance of the noise distribution is known, which differs between *in vivo* and *in vitro* neurons, the scaled diffusion parameter in Fig. 6a can be determined. The scaled amplitude, *A*/(*τ*_{m}*V*_{ 𝜃 }), corresponds to the measured amplitude of the excitatory postsynaptic potential (EPSP), which is available from various experimental studies (Shadlen and Newsome 1994; Song et al. 2005; Lefort et al. 2009; Cossell et al. 2015).
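As a concrete arithmetic sketch, this mapping can be wrapped in a small helper. The 20 mV threshold distance and the example values are illustrative assumptions, not measurements from the cited studies, and reading the scaled amplitude as EPSP/*V*_{ 𝜃 } follows from the voltage kick *A*/*τ*_{m} discussed above.

```python
def scaled_parameters(sigma_mV, epsp_mV, v_theta_mV=20.0):
    """Map measured membrane-noise std (mV) and EPSP amplitude (mV)
    onto the two scaled axes of Fig. 6a.

    Uses D = sigma^2 * tau_m / 2, so D/(tau_m * V_theta^2) = sigma^2 / (2 V_theta^2),
    and reads the scaled amplitude A/(tau_m * V_theta) as EPSP / V_theta.
    The default threshold distance of 20 mV is an illustrative assumption.
    """
    scaled_d = sigma_mV ** 2 / (2.0 * v_theta_mV ** 2)
    scaled_a = epsp_mV / v_theta_mV
    return scaled_d, scaled_a

# e.g. 4 mV of membrane-potential noise and a strong 2 mV EPSP:
d, a = scaled_parameters(sigma_mV=4.0, epsp_mV=2.0)   # d = 0.02, a = 0.1
```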

## 3 Discussion

In this study, we analytically derived the statistical input-output relation of a LIF neuron receiving a transient signaling input on top of noisy balanced inputs. We developed the first-passage time density of the neuron when it receives the signal in the threshold regime. We examined a simple square input signal and then extended it to more physiologically plausible signaling inputs. Our prediction matches the simulations well, which shows the applicability of our model to more realistic signaling input shapes. The first-passage time density is a function of the scaled diffusion coefficient and the scaled amplitude of the signaling input. It also depends on the arrival time of the signaling input, elapsed from the last postsynaptic spike. We further made the analysis independent of the knowledge of the last postsynaptic spike by marginalizing over all possible last postsynaptic spike times relative to the input arrival time. Based on the analytic expression for the first-spiking density after input arrival, we examined the Fisher information with respect to the signaling input's amplitude (efficacy). The result reveals that for each noise level there are specific amplitudes of signaling inputs at which decoding can be done most accurately.

Here, we investigated the LIF neuron model (Stein 1965). Although many studies have shown that extended models, such as the adaptive exponential integrate-and-fire (aEIF) model, better explain neural properties (Izhikevich 2004; Ostojic and Brunel 2011), the LIF neuron model can capture the properties of cortical pyramidal neurons (Rauch et al. 2003; La Camera et al. 2004; Jolivet et al. 2008); it remains the most widely studied neuron model because of its analytical tractability (Burkitt 2006a, 2006b).

Previous studies on the *LIF neuron model* have attempted to find an analytical solution for the first-passage time density or firing rate in the presence of a signaling input in a noisy balanced environment. The effect of a small oscillatory input on the first-passage time problem was studied by Bulsara *et al.*, using the image method to solve the Fokker-Planck equation (Bulsara et al. 1996). The linear response of the LIF neuron to an oscillatory input, and the change of the firing rate starting from the stationary distribution, have also been investigated analytically in the Fokker-Planck formalism (Brunel and Hakim 1999; Brunel et al. 2001; Lindner and Schimansky-Geier 2001). Richardson and Swarbrick provided an analytical result, up to linear order, for the firing-rate modulation of a neuron receiving excitatory and inhibitory synaptic jumps drawn from an exponential distribution (Richardson and Swarbrick 2010). In addition, several studies investigated the effect of a transient input current (Herrmann and Gerstner 2001; Helias et al. 2010). Herrmann and Gerstner showed how the post-stimulus time histogram is changed by a transient input signal on top of noise, using the escape-rate model and the hazard function; they provided a numerical solution for the full model, but the analytical result is achievable only up to first order. Moreover, Helias and coworkers (Helias et al. 2010) found the effect of a delta input kick in the Fokker-Planck equation on the firing rate of the postsynaptic neuron, up to linear order, assuming a steady-state distribution for finite synaptic amplitudes prior to input arrival.

Here, we analytically solved the first-passage time density for the LIF neuron receiving a transient signaling input of arbitrary amplitude and shape on top of a Gaussian background. The solution, however, is obtained under the assumption that the mean input drive equals, or at least is near, the threshold potential (see Appendix I). Further, it is assumed that the synaptic time constant is considerably smaller than the membrane time constant. Within these limitations, we can analytically obtain various features of the LIF neuron's spiking density. Our framework is at first conditioned on the knowledge of the last postsynaptic spike; this is useful for experimental settings in which the last postsynaptic spike is assumed (Froemke and Dan 2002; Wang et al. 2005; Blot et al. 2016) or in which the neuron may learn to time sequential responses (Johansson et al. 2016; Jirenhed et al. 2017). Moreover, we showed that, by using the backward recurrence time distribution, the first-spiking density after input arrival can be obtained without knowledge of the last postsynaptic spike (see Eq. (34)); this form is suited to experiments in which the last postsynaptic spike is not tracked and only the time of input arrival serves as a reference signal for readout by downstream neurons (Van Rullen et al. 2005; Panzeri et al. 2014). We also showed that the result of marginalizing the spiking density by the backward recurrence time, Eq. (34), can be achieved by another approach (see Eq. (37)) that assumes a stationary distribution at the time of input arrival and uses the Green's function from Eq. (6). While Eq. (34) has just one integral over time and is computationally faster, Eq. (37) gives intuition about the spiking density after input arrival, which we used in the Fisher information analysis to describe the second maximum observed in the low noise regime.

In this study, we model the background synaptic activity as white *Gaussian noise*. This is, admittedly, a simplified view of the background activity. For example, background synaptic inputs to neurons may be described as shot noise (Richardson and Swarbrick 2010; Helias et al. 2013) and are synaptically filtered (Brunel and Sergi 1998; Moreno-Bote and Parga 2010). The background can also be modeled as temporally correlated (colored) noise because of the long-lasting time scales of NMDA- and GABA_{B}-generated currents (Lerchner et al. 2006; Dummer et al. 2014; Ostojic 2014). However, white Gaussian noise remains a reasonable assumption for the short-lasting currents generated by AMPA and GABA_{A} receptors (Destexhe et al. 1998). In fact, in the limit of *τ*_{s} ≪ *τ*_{m}, and with numerous small background synaptic amplitudes, the background activity is well approximated by *white Gaussian noise*. Moreover, experimental evidence reports temporally uncorrelated noise for an animal engaged in a task (Poulet and Petersen 2008; Tan et al. 2014). The noise, however, becomes correlated for an anesthetized animal, or when the animal is in quiet wakefulness. Hence it would be important to extend the current formalism to non-Gaussian situations as well.
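The diffusion approximation invoked above can be probed numerically: for many small Poisson-driven synaptic jumps, the free (no-threshold) membrane potential has mean *Jντ*_{m} and, by Campbell's theorem, variance *J*²*ντ*_{m}/2, which is what the white-noise model reproduces. A minimal sketch, with all parameter values assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions)
tau_m = 20.0   # membrane time constant (ms)
J = 0.01       # single EPSP jump size (units of V_theta)
nu = 2.0       # total presynaptic Poisson rate (spikes/ms)
dt, n_steps, n_trials = 0.05, 4000, 4000

# free membrane potential driven by Poisson shot noise (threshold removed)
V = np.full(n_trials, J * nu * tau_m)      # start at the stationary mean
decay = np.exp(-dt / tau_m)                # exact leak over one step
for _ in range(n_steps):
    V = V * decay + J * rng.poisson(nu * dt, n_trials)

# predictions of the diffusion (white Gaussian noise) approximation
mean_pred = J * nu * tau_m            # stationary mean
var_pred = J ** 2 * nu * tau_m / 2.0  # stationary variance (Campbell)
```

As *J* grows at fixed *Jν*, the empirical distribution develops the skew of shot noise and the Gaussian approximation degrades, which is the regime where the shot-noise treatments cited above become relevant.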

Moreover, our solution works for neurons in the threshold regime, where the membrane potential is very close to the threshold. In this regime, even a small fluctuation brings the membrane potential above the threshold voltage and makes the neuron fire. A recent experimental study (Tan et al. 2014) shows that this scenario holds when a stimulus is presented to the monkey. To check how the solution changes when the mean input deviates from the threshold voltage (sub/supra-threshold), we also used a scaling approach and compared it with simulations (Appendix I). The result shows that, with the scaling relation we introduced, we can go beyond the threshold spiking density to near sub/supra-threshold densities, extending our solution to the more plausible sub/supra-threshold regimes.

Finally, we calculated the Fisher information based on spike timing rather than rate or spike count (Rieke et al. 1999; van Vreeswijk 2001; Toyoizumi et al. 2006). The precise spike timing of the postsynaptic neuron relative to the signaling input arrival carries information that may be lost or diminished in spike-count methods, in which spikes (responses) are summed over long time windows (Panzeri et al. 2010). As one application of the first-spiking density, we investigated the Fisher information and found the specific amplitudes of signaling inputs that can be distinguished most accurately by downstream neurons. To maximize the accuracy of decoding at low noise levels, neurons have two choices for their synaptic strength: one weak and one strong amplitude. For higher noise levels, there is one strong amplitude which maximizes the Fisher information; the achieved maximum is robust to small changes of the amplitude (Fig. 6a, bottom, blue curve). This effect may have an advantage for neural decoding and learning; strong amplitudes, which might result from causal Hebbian learning (Dan and Poo 2004), can be discriminated most accurately by downstream neurons, even in the low noise regime. We also revisited stochastic resonance (Douglass et al. 1993; McDonnell and Ward 2011; Teramae et al. 2012; Ikegaya et al. 2013; Levakova et al. 2016); the Fisher information does not behave monotonically with the noise level; instead, there exists a wide range of signaling input amplitudes for which the Fisher information is maximized at a certain, finite noise level (Fig. 6a).

The input-output relation of a single neuron embedded in a network is the building block of the neural activity underlying learning, cognition and behavior. Since strong synaptic inputs are pervasive in the neural system (Song et al. 2005; Lefort et al. 2009; Ikegaya et al. 2013; Buzsáki and Mizuseki 2014; Cossell et al. 2015), an analytic solution that can deal with the effect of strong transient signaling inputs can be widely used in predicting a network's complex activity (Herz et al. 2006). Given that the LIF model describes spike timing with adequate accuracy, the analytic solution presented here is expected to facilitate theoretical investigation of information processing in neural systems.

## Notes

### Acknowledgements

H.S. thanks S. Koyama and R. Kobayashi for valuable discussions; S.RS and SN.R acknowledge T. Fukai's kind support and are grateful to him and H. Maboudi for valuable discussions.

### Compliance with Ethical Standards

### Conflict of interests

The authors declare that they have no conflict of interest.

## References

- Abbott, L., Fusi, S., Miller, K.D. (2012). *Theoretical approaches to neuroscience: examples from single neurons to networks* (Vol. 5, pp. 1601–1618). New York: McGraw-Hill.
- Babadi, B., & Abbott, L. (2013). Pairwise analysis can account for network structures arising from spike-timing dependent plasticity. *PLoS Computational Biology*, *9*, e1002906.
- Blot, A., Solages, C., Ostojic, S., Szapiro, G., Hakim, V., Léna, C. (2016). Time-invariant feed-forward inhibition of Purkinje cells in the cerebellar cortex in vivo. *The Journal of Physiology*, *594*(10), 2729–2749.
- Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. *Journal of Computational Neuroscience*, *8*(3), 183–208.
- Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. *Neural Computation*, *11*, 1621.
- Brunel, N., & Sergi, S. (1998). Firing frequency of leaky integrate-and-fire neurons with synaptic current dynamics. *Journal of Theoretical Biology*, *195*(1), 87–95.
- Brunel, N., Chance, F.S., Fourcaud, N., Abbott, L. (2001). Effects of synaptic noise and filtering on the frequency response of spiking neurons. *Physical Review Letters*, *86*(10), 2186.
- Bulsara, A., Jacobs, E., Zhou, T., Moss, F., Kiss, L. (1991). Stochastic resonance in a single neuron model: theory and analog simulation. *Journal of Theoretical Biology*, *152*(4), 531–555.
- Bulsara, A.R., Elston, T.C., Doering, C.R., Lowen, S.B., Lindenberg, K. (1996). Cooperative behavior in periodically driven noisy integrate-fire models of neuronal dynamics. *Physical Review E*, *53*(4), 3958.
- Burkitt, A.N. (2006a). A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. *Biological Cybernetics*, *95*(1), 1–19.
- Burkitt, A.N. (2006b). A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. *Biological Cybernetics*, *95*(2), 97–112.
- Burkitt, A.N., & Clark, G.M. (1999). Analysis of integrate-and-fire neurons: synchronization of synaptic input and spike output. *Neural Computation*, *11*(4), 871–901.
- Buzsáki, G., & Mizuseki, K. (2014). The log-dynamic brain: how skewed distributions affect network operations. *Nature Reviews Neuroscience*, *15*(4), 264–278.
- Cossell, L., Iacaruso, M.F., Muir, D.R., Houlton, R., Sader, E.N., Ko, H., Hofer, S.B., Mrsic-Flogel, T.D. (2015). Functional organization of excitatory synaptic strength in primary visual cortex. *Nature*, *518*(7539), 399–403.
- Cox, D.R. (1962). *Renewal theory* (Vol. 4). London: Methuen.
- Dan, Y., & Poo, M.-M. (2004). Spike timing-dependent plasticity of neural circuits. *Neuron*, *44*(1), 23–30.
- Davis, P.J. (1959). Leonhard Euler's integral: a historical profile of the gamma function: in memoriam: Milton Abramowitz. *The American Mathematical Monthly*, *66*(10), 849–869.
- De La Rocha, J., Doiron, B., Shea-Brown, E., Josić, K., Reyes, A. (2007). Correlation between neural spike trains increases with firing rate. *Nature*, *448*(7155), 802–806.
- Destexhe, A., Mainen, Z.F., Sejnowski, T.J. (1998). Kinetic models of synaptic transmission. *Methods in Neuronal Modeling*, *2*, 1–25.
- Diesmann, M., Gewaltig, M.O., Aertsen, A. (1999). Stable propagation of synchronous spiking in cortical neural networks. *Nature*, *402*(6761), 529–533.
- Douglass, J.K., Wilkens, L., Pantazelou, E., Moss, F., et al. (1993). Noise enhancement of information transfer in crayfish mechanoreceptors by stochastic resonance. *Nature*, *365*(6444), 337–340.
- Dummer, B., Wieland, S., Lindner, B. (2014). Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity. *Frontiers in Computational Neuroscience*, *8*, 104.
- Froemke, R.C., & Dan, Y. (2002). Spike-timing-dependent synaptic modification induced by natural spike trains. *Nature*, *416*(6879), 433–438.
- Furukawa, S., & Middlebrooks, J.C. (2002). Cortical representation of auditory space: information-bearing features of spike patterns. *Journal of Neurophysiology*, *87*(4), 1749–1762.
- Geisler, W.S., Albrecht, D.G., Crane, A.M. (2007). Responses of neurons in primary visual cortex to transient changes in local contrast and luminance. *Journal of Neuroscience*, *27*(19), 5063–5067.
- Gerstner, W., Kistler, W.M., Naud, R., Paninski, L. (2014). *Neuronal dynamics: from single neurons to networks and models of cognition*. Cambridge: Cambridge University Press.
- Gollisch, T., & Herz, A.M. (2005). Disentangling sub-millisecond processes within an auditory transduction chain. *PLoS Biology*, *3*(1), e8.
- Helias, M., Deger, M., Rotter, S., Diesmann, M. (2010). Instantaneous non-linear processing by pulse-coupled threshold units. *PLoS Computational Biology*, *6*(9), e1000929.
- Helias, M., Deger, M., Rotter, S., Diesmann, M. (2011). Finite post synaptic potentials cause a fast neuronal response. *Frontiers in Neuroscience*, *5*, 19.
- Helias, M., Tetzlaff, T., Diesmann, M. (2013). Echoes in correlated neural systems. *New Journal of Physics*, *15*(2), 023002.
- Herrmann, A., & Gerstner, W. (2001). Noise and the PSTH response to current transients: I. General theory and application to the integrate-and-fire neuron. *Journal of Computational Neuroscience*, *11*(2), 135–151.
- Herz, A.V., Gollisch, T., Machens, C.K., Jaeger, D. (2006). Modeling single-neuron dynamics and computations: a balance of detail and abstraction. *Science*, *314*(5796), 80–85.
- Ikegaya, Y., Sasaki, T., Ishikawa, D., Honma, N., Tao, K., Takahashi, N., Minamisawa, G., Ujita, S., Matsuki, N. (2013). Interpyramid spike transmission stabilizes the sparseness of recurrent network activity. *Cerebral Cortex*, *23*(2), 293–304.
- Iolov, A., Ditlevsen, S., Longtin, A. (2014). Fokker-Planck and Fortet equation-based parameter estimation for a leaky integrate-and-fire model with sinusoidal and stochastic forcing. *Journal of Mathematical Neuroscience*, *4*(1), 4.
- Izhikevich, E.M. (2004). Which model to use for cortical spiking neurons? *IEEE Transactions on Neural Networks*, *15*(5), 1063–1070.
- Jirenhed, D.A., Rasmussen, A., Johansson, F., Hesslow, G. (2017). Learned response sequences in cerebellar Purkinje cells. *Proceedings of the National Academy of Sciences*, *114*(23), 6127–6132.
- Johansson, F., Hesslow, G., Medina, J.F. (2016). Mechanisms for motor timing in the cerebellar cortex. *Current Opinion in Behavioral Sciences*, *8*, 53–59.
- Johansson, R.S., & Birznieks, I. (2004). First spikes in ensembles of human tactile afferents code complex spatial fingertip events. *Nature Neuroscience*, *7*(2), 170–177.
- Jolivet, R., Kobayashi, R., Rauch, A., Naud, R., Shinomoto, S., Gerstner, W. (2008). A benchmark test for a quantitative assessment of simple neuron models. *Journal of Neuroscience Methods*, *169*(2), 417–424.
- Kardar, M. (2007). *Statistical physics of fields*. Cambridge: Cambridge University Press.
- Kastner, D.B., Baccus, S.A., Sharpee, T.O. (2015). Critical and maximally informative encoding between neural populations in the retina. *Proceedings of the National Academy of Sciences*, *112*(8), 2533–2538.
- Kou, S.G., & Wang, H. (2003). First passage times of a jump diffusion process. *Advances in Applied Probability*, *35*(2), 504–531.
- Kumar, A., Schrader, S., Aertsen, A., Rotter, S. (2008). The high-conductance state of cortical networks. *Neural Computation*, *20*(1), 1–43.
- La Camera, G., Rauch, A., Lüscher, H.R., Senn, W., Fusi, S. (2004). Minimal models of adapted neuronal response to in vivo-like input currents. *Neural Computation*, *16*(10), 2101–2124.
- Lefort, S., Tomm, C., Sarria, J.C.F., Petersen, C.C. (2009). The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex. *Neuron*, *61*(2), 301–316.
- Lerchner, A., Ursta, C., Hertz, J., Ahmadi, M., Ruffiot, P., Enemark, S. (2006). Response variability in balanced cortical networks. *Neural Computation*, *18*(3), 634–659.
- Levakova, M., Tamborrino, M., Kostal, L., Lansky, P. (2016). Presynaptic spontaneous activity enhances the accuracy of latency coding. *Neural Computation*, *28*(10), 2162–2180.
- Lindner, B., & Schimansky-Geier, L. (2001). Transmission of noise coded versus additive signals through a neuronal ensemble. *Physical Review Letters*, *86*(14), 2934.
- Lindner, B., Chacron, M.J., Longtin, A. (2005). Integrate-and-fire neurons with threshold noise: a tractable model of how interspike interval correlations affect neuronal signal transmission. *Physical Review E*, *72*(2), 021911.
- Mainen, Z.F., & Sejnowski, T.J. (1995). Reliability of spike timing in neocortical neurons. *Science*, *268*(5216), 1503–1506.
- McCormick, D.A., Connors, B.W., Lighthall, J.W., Prince, D.A. (1985). Comparative electrophysiology of pyramidal and sparsely spiny stellate neurons of the neocortex. *Journal of Neurophysiology*, *54*(4), 782–806.
- McDonnell, M.D., & Ward, L.M. (2011). The benefits of noise in neural systems: bridging theory and experiment. *Nature Reviews Neuroscience*, *12*(7), 415–426.
- Moreno-Bote, R., & Parga, N. (2010). Response of integrate-and-fire neurons to noisy inputs filtered by synapses with arbitrary timescales: firing rate and correlations. *Neural Computation*, *22*(6), 1528–1572.
- Okun, M., & Lampl, I. (2008). Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. *Nature Neuroscience*, *11*(5), 535–537.
- Ostojic, S. (2011). Interspike interval distributions of spiking neurons driven by fluctuating inputs. *Journal of Neurophysiology*, *106*(1), 361–373.
- Ostojic, S. (2014). Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. *Nature Neuroscience*, *17*(4), 594–600.
- Ostojic, S., & Brunel, N. (2011). From spiking neuron models to linear-nonlinear models. *PLoS Computational Biology*, *7*(1), e1001056.
- Panzeri, S., Petersen, R.S., Schultz, S.R., Lebedev, M., Diamond, M.E. (2001). The role of spike timing in the coding of stimulus location in rat somatosensory cortex. *Neuron*, *29*(3), 769–777.
- Panzeri, S., Brunel, N., Logothetis, N.K., Kayser, C. (2010). Sensory neural codes using multiplexed temporal scales. *Trends in Neurosciences*, *33*(3), 111–120.
- Panzeri, S., Ince, R.A., Diamond, M.E., Kayser, C. (2014). Reading spike timing without a clock: intrinsic decoding of spike trains. *Philosophical Transactions of the Royal Society B*, *369*(1637), 20120467.
- Petersen, R.S., Panzeri, S., Diamond, M.E. (2001). Population coding of stimulus location in rat somatosensory cortex. *Neuron*, *32*(3), 503–514.
- Pitkow, X., & Meister, M. (2012). Decorrelation and efficient coding by retinal ganglion cells. *Nature Neuroscience*, *15*(4), 628–635.
- Pitkow, X., Liu, S., Angelaki, D.E., DeAngelis, G.C., Pouget, A. (2015). How can single sensory neurons predict behavior? *Neuron*, *87*(2), 411–423.
- Poulet, J.F., & Petersen, C.C. (2008). Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice.
*Nature*,*454*(7206), 881–885.CrossRefPubMedGoogle Scholar - Rao, C. (1973).
*Linear statistical inference and its applications*, 2nd edn. Hoboken: Wiley Series in Probability and Mathematical Statistics. Wiley.CrossRefGoogle Scholar - Rauch, A., La Camera, G., Lüscher, H R, Senn, W., Fusi, S. (2003). Neocortical pyramidal cells respond as integrate-and-fire neurons to in vivo–like input currents.
*Journal of Neurophysiology*,*90*(3), 1598–1612.CrossRefPubMedGoogle Scholar - Renart, A., de la Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., Harris, K.D. (2010). The asynchronous state in cortical circuits.
*Science*,*327*(5965), 587–590.CrossRefPubMedPubMedCentralGoogle Scholar - Ricciardi, L.M., & Sato, S. (1988). First-passage-time density and moments of the Ornstein-Uhlenbeck process.
*Journal of Applied Probability*,*25*(1), 43–57.CrossRefGoogle Scholar - Richardson, M.J. (2007). Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive.
*Physical Review E*,*76*(2), 021,919.CrossRefGoogle Scholar - Richardson, M.J. (2008). Spike-train spectra and network response functions for non-linear integrate-and-fire neurons.
*Biological Cybernetics*,*99*(4), 381–392.CrossRefPubMedGoogle Scholar - Richardson, M.J., & Swarbrick, R. (2010). Firing-rate response of a neuron receiving excitatory and inhibitory synaptic shot noise.
*Physical Review Letters*,*105*(17), 178,102.CrossRefGoogle Scholar - Rieke, F., Warland, D., de Ruyter van Steveninck, R., Bialek, W. (1999).
*Spikes: exploring the neural code*. Cambridge, MA, USA: MIT Press.Google Scholar - Risken, H. (1984).
*Fokker-planck equation*. Springer.Google Scholar - Salinas, E., & Sejnowski, T.J. (2001). Correlated neuronal activity and the flow of neural information.
*Nature Reviews Neuroscience*,*2*(8), 539–550.CrossRefPubMedPubMedCentralGoogle Scholar - Shadlen, M.N., & Newsome, W.T. (1994). Noise, neural codes and cortical organization.
*Current Opinion in Neurobiology*,*4*(4), 569–579.CrossRefPubMedGoogle Scholar - Shadlen, M.N., & Newsome, W.T. (1998). The variable discharge of cortical neurons: implications for connectivity, computation, and information coding.
*The Journal of Neuroscience*,*18*(10), 3870–3896.PubMedGoogle Scholar - Siegert, A.J.F. (1951). On the first passage time probability problem.
*Physical Review*,*81*(4), 671.CrossRefGoogle Scholar - Silberberg, G., Bethge, M., Markram, H., Pawelzik, K., Tsodyks, M. (2004). Dynamics of population rate codes in ensembles of neocortical neurons.
*Journal of Neurophysiology*,*91*(2), 704–709.CrossRefPubMedGoogle Scholar - Softky, W.R., & Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random epsps.
*The Journal of Neuroscience*,*13*(1), 334–350.PubMedGoogle Scholar - Song, S., Sjöström, P.J., Reigl, M., Nelson, S., Chklovskii, D.B. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits.
*PLoS Biology*,*3*(3), e68.CrossRefPubMedPubMedCentralGoogle Scholar - Stein, R.B. (1965). A theoretical analysis of neuronal variability.
*Biophysical Journal*,*5*(2), 173.CrossRefPubMedPubMedCentralGoogle Scholar - Stern, P., Edwards, F.A., Sakmann, B. (1992). Fast and slow components of unitary epscs on stellate cells elicited by focal stimulation in slices of rat visual cortex.
*The Journal of Physiology*,*449*(1), 247–278.CrossRefPubMedPubMedCentralGoogle Scholar - Stevens, C.F., & Zador, A.M. (1998). Input synchrony and the irregular firing of cortical neurons.
*Nature Neuroscience*,*1*(3), 210–217.CrossRefPubMedGoogle Scholar - Sugiyama, H., Moore, G., Perkel, D. (1970). Solutions for a stochastic model of neuronal spike production.
*Mathematical Biosciences*,*8*(3-4), 323–341.CrossRefGoogle Scholar - Takahashi, N., Kitamura, K., Matsuo, N., Mayford, M., Kano, M., Matsuki, N., Ikegaya, Y. (2012). Locally synchronized synaptic inputs.
*Science*,*335*(6066), 353–356.CrossRefPubMedGoogle Scholar - Tan, A.Y., Chen, Y., Scholl, B., Seidemann, E., Priebe, N.J. (2014). Sensory stimulation shifts visual cortex from synchronous to asynchronous states.
*Nature*,*509*(7499), 226–229.CrossRefPubMedPubMedCentralGoogle Scholar - Teramae, J.-N., Tsubo, Y., Fukai, T. (2012). Optimal spike-based communication in excitable networks with strong-sparse and weak-dense links.
*Scientific Reports*,*2*, 485.CrossRefPubMedPubMedCentralGoogle Scholar - Toyoizumi, T., Aihara, K., Amari, S. (2006). Fisher information for spike-based population decoding.
*Physical Review Letters*,*97*(9), 098,102.CrossRefGoogle Scholar - Tuckwell, H.C. (1988). Introduction to theoretical neurobiology: Nonlinear and stochastic theories, Vol. 2, Cambridge University Press, Cambridge.Google Scholar
- Uhlenbeck, G.E., & Ornstein, L.S. (1930). On the theory of the brownian motion.
*Physical Review*,*36*(5), 823.CrossRefGoogle Scholar - Van Rullen, R., & Thorpe, S.J. (2001). Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex.
*Neural Computation*,*13*(6), 1255–1283.CrossRefPubMedGoogle Scholar - Van Rullen, R., Guyonneau, R., Thorpe, S.J. (2005). Spike times make sense.
*Trends in Neurosciences*,*28*(1), 1–4.CrossRefGoogle Scholar - van Vreeswijk, C. (2001). Information transmission with renewal neurons.
*Neurocomputing*,*38*, 417–422.CrossRefGoogle Scholar - van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity.
*Science*,*274*(5293), 1724–1726.CrossRefPubMedGoogle Scholar - van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits.
*Neural Computation*,*10*(6), 1321–1371.CrossRefPubMedGoogle Scholar - Wang, H.X., Gerkin, R.C., Nauen, D.W., Bi, G.Q. (2005). Coactivation and timing-dependent integration of synaptic potentiation and depression.
*Nature Neuroscience*,*8*(2), 187–193.CrossRefPubMedGoogle Scholar - Wang, M.C., & Uhlenbeck, G.E. (1945). On the theory of the brownian motion ii.
*Reviews of Modern Physics*,*17*(2-3), 323.CrossRefGoogle Scholar - Wehr, M., & Zador, A.M. (2003). Balanced inhibition underlies tuning and sharpens spike timing in auditory cortex.
*Nature*,*426*(6965), 442–446.CrossRefPubMedGoogle Scholar

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.