Estimating latency from inhibitory input
Abstract
Stimulus response latency is the time period between the presentation of a stimulus and the occurrence of a change in the neural firing evoked by the stimulation. The response latency has been explored, and estimation methods proposed, mostly for excitatory stimuli, i.e., when the neuron reacts to the stimulus by an increase in the firing rate. We focus on the estimation of the response latency in the case of inhibitory stimuli. The models used in this paper represent two different descriptions of response latency: we consider the latency either to be constant across trials or to be a random variable. In the case of random latency, special attention is given to models with selective interaction. The aim is to propose methods for estimation of the latency or the parameters of its distribution. Parameters are estimated by four different methods: the method of moments, the maximum-likelihood method, a method comparing an empirical and a theoretical cumulative distribution function, and a method based on the Laplace transform of a probability density function. All four methods are applied to simulated data and compared.
Keywords
Response latency · Selective interaction · Neuronal firing · Inhibition · Maximum likelihood · Laplace transform

1 Introduction
In the nervous system, information is transmitted through the firing of action potentials (spikes) by neurons. The time course of the action potentials themselves varies very little and probably carries no information. We therefore consider the output of the neuron as a sequence of point events, which is called a spike train. Spike trains appear to be stochastic and are thus modeled as realizations of stochastic point processes. Experimentally measured quantities are time intervals between consecutive spikes, so-called interspike intervals (ISIs).
Traditionally, it has been believed that most of the relevant information is contained in the firing rate of the neuron. However, behavioral experiments show that reaction times to some stimuli are often rather short, and it is not possible to evaluate the firing rate in such a short time window (Rullen et al. 1998, 2005). This suggests that the firing rate cannot be the only form of neural code and that the exact timing of spikes plays a role. This is called temporal coding. Temporal coding can be affected by response latency (Bonnasse-Gahot and Nadal 2012; Gautrais and Thorpe 1997), which is in general defined as a time lag between the stimulus onset and the evoked modulation in neural activity. For example, results of Chase and Young (2007) show that first-spike latency codes could be a feasible mechanism for information transfer.
When a neuron is stimulated, its response is in most cases characterized by an increase in firing rate. However, sometimes a change of conditions is followed by an apparent decrease of the neuronal activity. Such an inhibitory response is a common phenomenon and has been observed, e.g., in the olfactory system of many animals. Krofczik et al. (2009) report that about 12 % of odors in their experiment evoked an inhibitory response in lateral projection neurons of the honeybee. Moreover, the suppression of neural activity was so rapid that not a single response action potential was elicited when stimulating with a mixture of odors. Similar responses are not exceptional in olfactory receptor and cortical neurons of the frog Rana ridibunda (Rospars et al. 2000; Duchamp-Viret et al. 1996) and in primary olfactory centers of the moth Manduca sexta (Reisenman et al. 2008).
The existence of the so-called spontaneous activity makes it impossible to measure the latency exactly, rendering its estimation difficult. Methods of estimation based on records of the entire spike train obtained in \(n\) independent trials have been presented (Baker and Gerstein 2001; Friedman and Priebe 1998; Commenges et al. 1986). The disadvantage of these methods is that they are designed primarily for excitatory stimuli. Although it is possible to apply them also when the stimulus is inhibitory, the estimates are often less precise. Different approaches, assuming that the reaction to the stimulus is of short duration, are proposed by Pawlas et al. (2010) and Tamborrino et al. (2012, 2013), where only observations up to the first spike after the stimulus onset are used for estimation. However, these methods too are built on models where an excitatory stimulus is explicitly assumed, and the models cannot easily accommodate inhibitory stimuli. It seems that the problem of latency estimation for inhibitory stimuli has been somewhat neglected. Here, we deal specifically with this situation. Therefore, whenever evoked activity is referred to, it is assumed that the firing activity is lower than the spontaneous activity.
In this paper, only the times of the first spikes following the stimulus onset in repeated trials, rather than entire spike trains, are used for estimation. This approach is especially suitable for inhibitory stimuli, because a response can consist of only a few long ISIs. The usual approach of estimating the firing rate over an extended time window and detecting the change point may be inappropriate for responses consisting of a few spikes with a non-stationary firing rate.
Our approach is based on parametric models of spike trains. The key assumption is the Poissonian character of the spontaneous activity. It is generally agreed that the firing of real neurons is not Poissonian; in many cases it is not even renewal, as has been evidenced in a number of papers, e.g., in the review by Farkhooi et al. (2009). Nevertheless, the Poisson assumption is often used because it makes calculations less difficult, and it is often an acceptable approximation during spontaneous activity. Although a more precise description would be more appropriate, it would lead to major inconveniences when trying to handle the problem mathematically. Moreover, the Poisson assumption is in this case not so strong, because it is required only locally: the proposed models are constructed so that the assumption concerns only the ISI containing the stimulus onset, and only prior to the beginning of the response. In addition, many elaborated mathematical models support this approximation as adequate. The statistical properties of neuronal firing of the classical Hodgkin–Huxley model with stochastic input fluctuations were analyzed by Chow and White (1996), who showed that the spontaneous activity arising from channel fluctuations is well described by the Poisson model. Also, the firing of the leaky integrate-and-fire model without input current (but with stochastic fluctuations of the membrane potential) is described by the Poisson spiking model (the so-called subthreshold regime) (Ditlevsen and Lansky 2005). In general, applications of the Poisson model to neuronal data are numerous.
The ISIs evoked by a stimulus presentation are often assumed gamma distributed. This distribution serves as a typical example in theoretical studies on neuronal firing (Dorval 2008; Kang and Amari 2008; Miura et al. 2006; Nawrot et al. 2008; Shimokawa and Shinomoto 2009), where the point process of the spike times is often called a gamma renewal process. Furthermore, the distribution was often checked in experimental studies, e.g., Hentall (2000), Mandl (1993) and McKeegan (2002).
The response latency is treated as a random variable. This approach can be justified by the fact that noise of all kinds (e.g., synaptic, membrane, or channel noise) influences the actual latency. For example, the impact of channel noise on latency variability was studied theoretically by Wainrib et al. (2010), who investigated the Morris–Lecar model with a finite number of channels, which leads to a random latency whose asymptotic distribution is derived there as well. The approach assuming random latency was employed previously by Nawrot et al. (2003), who proposed a method for the elimination of response latency variability. We discuss the special case of constant latency as well.
Two alternatives of the basic model are considered, and their stochastic properties are investigated. Characteristics of the distribution of the time between the stimulus onset and the first spike following it, namely the probability density function (pdf), its Laplace transform and the cumulative distribution function (cdf), are derived, and the moments (mean and variance) are calculated. Then, four estimation methods for the mean latency are proposed. The first one is nonparametric and only assumes that the latency is constant across trials; it is based on a comparison between the theoretical and the empirical cdf. The remaining estimators are parametric, thus assuming specific models and distributions. The second estimator is obtained by the method of moments, where knowledge of the mean and variance is crucial. This can be particularly useful when the moments are available but explicit expressions for the distribution are not. The two remaining methods are the maximum-likelihood method, which employs the pdf, and the method based on its Laplace transform. Normally, maximum likelihood is the preferred method of choice when available, but in our setting, the usual regularity conditions are not fulfilled, since the likelihood function is discontinuous in the parameter of interest: the mean latency. Thus, it is not obvious that it will behave better than the other estimators, and the usual asymptotic tools to evaluate the quality of the estimators, based on the Hessian, are not available. Moreover, in some of the considered models, the likelihood function is not available, and other methods are necessary.
All estimating routines were implemented in the free statistical software R (see R Core Team 2013) and can be found in the supplementary material of the paper.
2 Character of experimental data
Data are obtained from \(n\) trials under identical experimental conditions. In each trial, the stimulus is presented and the resulting spike train is recorded during a time period spanning from a time instant preceding the stimulus onset to a time instant after it. Before the stimulation, the activity of the neuron is spontaneous: it fires irregularly, but with some stable firing rate. The stimulus is applied at a fixed time, denoted by \(t_\mathrm{s}\), in each trial. During some unknown period of time \(\varTheta \), variable or fixed, after \(t_\mathrm{s}\), the activity of the neuron remains spontaneous and is not influenced by the stimulation. This lag is called the response latency. After that, the spiking activity changes. Henceforth, we assume that the evoked activity is lower than the spontaneous activity, since we focus on inhibitory stimuli. We model the observed spike train by two different random point processes: the first describes the spontaneous activity, and the second characterizes the evoked activity. The observed spike train is a realization of the first (spontaneous) process up to time \(t_\mathrm{s} + \varTheta \), and of the second (evoked) process after time \(t_\mathrm{s} + \varTheta \).
Estimation methods presented here generally require only knowledge of the time interval from \(t_\mathrm{s}\) to the first following spike. In addition, some methods also allow the use of measurements of an entire ISI after \(t_\mathrm{s}\), i.e. the time between the first and the second spike after \(t_\mathrm{s}\), which improves the estimates. Nevertheless, spike times subsequent to the first spike are not necessarily required, which is an advantage if the response lasts only a short time. The time between \(t_\mathrm{s}\) and the first following spike is a random variable denoted by \(T\); its realizations from \(n\) independent trials are \(\{t_1,t_2,\ldots ,t_n\}\). The observations \(t_i\) can be divided into two subgroups with respect to \(\varTheta \). An observation \(t_i\) can be shorter than \(\varTheta \), which means that the first spike after \(t_\mathrm{s}\) belongs to the spontaneous activity, or longer than \(\varTheta \), in which case the first observed spike after \(t_\mathrm{s}\) is influenced by the stimulus. The time between the first and the second spike after \(t_\mathrm{s}\) is a random variable \(X\) with realizations denoted by \(\{x_1,x_2,\ldots ,x_n\}\).
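The extraction of the observations \(t_i\) and \(x_i\) from one recorded trial is straightforward; a minimal sketch in Python (the paper's own routines are written in R; the function name is ours):

```python
def first_two_isis(spike_times, t_s):
    """Return (t, x) for one trial: t is the time from the stimulus onset
    t_s to the first following spike, and x is the ISI between the first
    and second spikes after t_s (None if fewer than two spikes follow t_s)."""
    after = sorted(s for s in spike_times if s > t_s)
    if not after:
        return None, None
    t = after[0] - t_s
    x = after[1] - after[0] if len(after) > 1 else None
    return t, x
```

Applying this to the \(n\) recorded trials yields the samples \(\{t_1,\ldots ,t_n\}\) and \(\{x_1,\ldots ,x_n\}\) used below.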
Two different concepts of latency are considered. Either the latency is assumed to be a constant, and thus fixed in all trials; in that case, it is denoted by the lower case \(\theta \). Or the latency is assumed to be a random variable, denoted by the upper case \(\varTheta \), so that the exact latency differs across trials; then, the mean value of \(\varTheta \) is of interest. In order to estimate the latency, it is necessary to develop a probabilistic model of the after-stimulus spike train and to know its properties.
3 General models of an after-stimulus spike train
To study \(\varTheta \), we first derive the conditional distribution of \(T\) and its moments for a given value \(\theta \) of \(\varTheta \).
3.1 Conditional distribution of \(T\)
3.2 Unconditional distribution of \(T\)
4 Examples of models
In this section, particular examples are considered by assuming specific distributions for the evoked activity \(U\). First, we assume that the latency is constant across trials and equal to \(\theta ^*\); thus, \(\mathrm{Pr}(\varTheta =\theta ^*)=1\). The distribution of \(T\) is found using the formulas derived in Sect. 3.1 with \(\theta =\theta ^*\). The constant \(\theta ^*\) plays the role of a parameter of the distribution of \(T\).
We now relax the assumption that the latency is constant across trials and let it be a realization of an exponentially distributed random variable \(\varTheta \) with mean \(\theta ^*\), different in each trial. The formulas from Sect. 3.2 can be applied.
Model 4: Model B with exponentially distributed latency and gamma distributed \(U\). We generalize Model 2 to a non-constant latency. The pdf is not available. The Laplace transform, mean and variance can be found in “Appendix 2”, Eqs. (50)–(52).
4.1 Model with selective interaction during the response
1. The input to the neuron consists of pulses generated by two independent renewal processes, one excitatory and the other inhibitory.
(a) An interval between two subsequent excitatory pulses is a random variable \(X_E\) with pdf \(f_E(t)\).
(b) An interval between inhibitory pulses is a random variable \(X_I\) with pdf \(f_I(t)\). For simplicity, it is assumed in this paper that inhibitory pulses form a Poisson process with mean interval \(\theta ^*\), i.e. \(f_I(t) = \exp (-t/\theta ^*)/\theta ^*\).
2. Whenever one or more inhibitory pulses occur, the effect of the next excitatory pulse is eliminated.
3. A spike is observed whenever an excitatory pulse occurs, unless it is deleted by a preceding inhibitory pulse.
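The deletion rule above can be simulated directly. The following Python sketch assumes our reading of rule 2, namely that an excitatory pulse is deleted whenever at least one inhibitory pulse has arrived since the preceding excitatory pulse (whether or not that preceding pulse was itself deleted); the function name is ours:

```python
import bisect

def observed_spikes(exc, inh, start=0.0):
    """Selective interaction: return the excitatory pulse times that
    survive deletion.  An excitatory pulse is removed if one or more
    inhibitory pulses occurred since the previous excitatory pulse."""
    exc, inh = sorted(exc), sorted(inh)
    out, prev = [], start
    for e in exc:
        # count inhibitory pulses in the open interval (prev, e)
        lo = bisect.bisect_right(inh, prev)
        hi = bisect.bisect_left(inh, e)
        if hi == lo:          # no inhibitory pulse arrived: spike observed
            out.append(e)
        prev = e
    return out
```

With no inhibitory pulses, every excitatory pulse is observed; each inhibitory pulse silently cancels the next excitatory pulse.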
Model 5: Model with selective interaction and excitatory pulses forming a Gamma process. Suppose that the excitatory pulses form a renewal process, where the ISIs follow a gamma distribution with shape parameter \(k\) and rate parameter \(\lambda \). The pdf of \(T\) can only be found in the special case \(k=1\). The Laplace transforms and means of \(X\) and \(T\), as well as the pdf of \(T\) for \(k=1\), can be found in “Appendix 2”, Eqs. (53)–(57).
Overview of proposed models
Model | Class of model | Latency | Spontaneous activity | Evoked activity
Model 1 | Model A | Constant, \(\varTheta =\theta ^*\) | \(W \sim Exp(\lambda )\) | \(R = \varTheta + U\), \(U \sim Exp(\kappa )\)
Model 2 | Model B | Constant, \(\varTheta =\theta ^*\) | \(W \sim Exp(\lambda )\) | \(R = W + U\), \(U \sim Gamma(k,\lambda )\)
Model 3 | Model A | Random, \(\varTheta \sim Exp(1/\theta ^*)\) | \(W \sim Exp(\lambda )\) | \(R = \varTheta + U\), \(U \sim Exp(\kappa )\)
Model 4 | Model B | Random, \(\varTheta \sim Exp(1/\theta ^*)\) | \(W \sim Exp(\lambda )\) | \(R = W + U\), \(U \sim Gamma(k,\lambda )\)
Model 5 | Model with selective interaction | Random, \(\varTheta \sim Exp(1/\theta ^*)\) | Gamma process, \(X_E \sim Gamma(k,\lambda )\) | Inhibition by a Poisson process, \(X_I \sim Exp(1/\theta ^*)\)
5 Estimation of latency
The aim of this paper is the estimation of \(\theta ^*\), which represents the exact latency in models with constant latency and the mean latency in models with random latency. Four estimation methods are proposed: a nonparametric method based on the cdf of \(T\), the method of moments, maximum-likelihood estimation and a method based on the Laplace transform of the pdf of \(T\). We focus on the estimation of \(\theta ^*\), although there are other parameters. We assume that \(\lambda \) is known. This may not be completely true, but it is possible to estimate it from the record of the spontaneous activity before the stimulus onset. We do not deal with this issue here, since it is beyond the scope of this paper; estimation of the spontaneous firing rate under Poisson, renewal and stationarity assumptions is discussed, e.g., in Tamborrino et al. (2012). Other parameters of the evoked activity, namely \(\kappa \) in Models 1 and 3 and \(k\) in Models 2 and 4, are unknown, and it can be necessary to estimate them too.
5.1 Estimators of \(\theta ^*\) based on cumulative distribution functions
This nonparametric estimator has the advantage of being nearly assumption-free. The only assumptions are the Poissonian character of the spontaneous activity (or any other distribution, as long as it is known) and that the latency is constant across trials. The price one pays is a loss of efficiency, in the sense of a larger variance of the estimator, since no information on the specific model is used. This approach was originally proposed by Tamborrino et al. (2012) for the case of an excitatory stimulus. The idea is to compare the empirical cumulative distribution function (ecdf), denoted by \(\widehat{F}_{T}(t)\) and obtained from the observations \(t_i\) of \(T\) in \(n\) independent trials, to the theoretical cdf of \(W\), denoted by \(F_W(t)\). We have \(F_W(t) = 1-\mathrm{e}^{-\lambda t}\). Note that it is straightforward to use any other distribution function for the spontaneous activity.
The difference \(F_W(t) - \widehat{F}_{T}(t)\) behaves as follows:
1. On the interval \([0, \theta ]\), \(F_W(t) - \widehat{F}_{T}(t)\) oscillates near zero.
2. For \(t \in (\theta , t_{(n)})\), where \(t_{(n)}\) is the maximal observation of \(T\), \(F_W(t)\) is greater than \(F_{T}(t)\) (because of the lower firing rate during the response period), and thus \(F_W(t) - \widehat{F}_{T}(t)\) tends to be positive.
3. For \(t \ge t_{(n)}\), the ecdf \(\widehat{F}_{T}(t)\) is equal to \(1\), while \(F_W(t)\) approaches \(1\) only in the limit. Thus, their difference \(F_W(t) - \widehat{F}_{T}(t)\) is negative and converges to \(0\).
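A minimal Python sketch of the threshold-zero variant of this estimator (the exact decision rule of the paper is not reproduced in this excerpt; guided by properties 1 and 2 above, we take the largest order statistic at which \(F_W(t) - \widehat{F}_{T}(t)\) is still non-positive):

```python
import math

def theta_hat_ecdf(ts, lam):
    """CDF-based latency estimate, assuming Poisson spontaneous activity
    with known rate lam and a latency constant across trials.  Since
    F_W(t) - F_T_hat(t) oscillates around 0 on [0, theta] and turns
    positive afterwards for an inhibitory response, return the last
    observation at which the difference is still <= 0."""
    ts = sorted(ts)
    n = len(ts)
    est = 0.0
    for i, t in enumerate(ts[:-1]):   # skip t_(n), where the ecdf reaches 1
        diff = (1.0 - math.exp(-lam * t)) - (i + 1) / n
        if diff <= 0.0:
            est = t
    return est
```

Replacing the zero threshold by \(\sigma (t)\), the pointwise standard deviation of the ecdf, gives the second variant discussed in the paper.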
Note that it is straightforward to adapt the estimator to a response, where a priori it is unknown if it is excitatory or inhibitory, as well as use it as a test of whether the stimulus has any effect on the neuronal activity at all.
5.2 Estimators obtained by the method of moments
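As an illustration for Model 1: assuming the first-spike time satisfies \(T = \min (W,\theta ^*) + \mathbf{1}_{\{W>\theta ^*\}}U\) (our reading of the model, since the formulas of Sect. 3 are not reproduced in this excerpt), the mean is \(\mathrm{E}\,T = (1-\mathrm{e}^{-\lambda \theta ^*})/\lambda + \mathrm{e}^{-\lambda \theta ^*}/\kappa \), which is strictly decreasing in \(\theta ^*\) for an inhibitory response (\(\kappa < \lambda \)). A moment estimate of \(\theta ^*\) then solves \(\mathrm{E}\,T = \bar{t}\); a hedged Python sketch (the paper's routines are in R):

```python
import math

def mean_T(theta, lam, kappa):
    # E T for Model 1 under our reading of the model:
    # T = min(W, theta) + 1{W > theta} * U, W ~ Exp(lam), U ~ Exp(kappa)
    return (1 - math.exp(-lam * theta)) / lam + math.exp(-lam * theta) / kappa

def theta_hat_moments(tbar, lam, kappa, hi=50.0):
    """Solve mean_T(theta) = tbar for theta by bisection.  For an
    inhibitory response (kappa < lam), mean_T decreases strictly in
    theta from 1/kappa towards 1/lam, so the root is unique."""
    lo_, hi_ = 0.0, hi
    for _ in range(200):
        mid = 0.5 * (lo_ + hi_)
        if mean_T(mid, lam, kappa) > tbar:
            lo_ = mid   # mean still too large: theta must grow
        else:
            hi_ = mid
    return 0.5 * (lo_ + hi_)
```

With \(\kappa \) unknown, a second moment equation (from the variance, or from the observations \(x_i\)) is needed; here \(\kappa \) is taken as given.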
5.3 Maximumlikelihood estimators
When the pdf of \(T\) is available, we can perform maximum-likelihood estimation. Under standard regularity conditions, the maximum-likelihood estimator is the most efficient one, meaning that its asymptotic variance is the smallest among all unbiased estimators. The regularity conditions are not met here, though, since the likelihood function is not differentiable. Nevertheless, we shall see in the simulation study that it still outperforms the other estimators. Because the likelihood functions for models with constant latency have some typical features, so that the maximization is carried out differently than for the models with random latency and selective interaction, it is described in more detail.
5.3.1 Models with constant latency
First, we discuss the maximization of the log-likelihood function generally for Models A and B, without assuming any particular distribution of \(U\). The pdfs of \(T_\mathrm{A}\) and \(T_\mathrm{B}\) are given in (4) and (5), and the log-likelihood functions \(l(\theta ^*)\) are given in “Appendix 4”, Eqs. (63) and (64). To maximize \(l(\theta ^*)\), we set the derivative with respect to \(\theta ^*\) equal to zero. However, \(l(\theta ^*)\) is discontinuous at \(\theta ^*=t_i\) for all \(i=1, \ldots , n\). It is therefore necessary to find all local maxima on the intervals \((0,t_{(1)}), (t_{(1)},t_{(2)}), \ldots , (t_{(n)},\infty )\), where \(t_{(1)}<t_{(2)}<\cdots <t_{(n)}\) are the ordered observations of \(T\). Moreover, the global maximum could be attained on the boundary of any of these intervals; therefore, the one-sided limits for \(\theta \rightarrow t_i^-\) and \(\theta \rightarrow t_i^+\) must be evaluated.
If \(\lambda > \kappa \), which corresponds to an inhibitory response, \(l^{M1}\) is decreasing on the intervals \([0,t_{(1)})\) and \([t_{(i-1)},t_{(i)})\), \(i=2,\ldots ,n\), and has no stationary points within these intervals. Therefore, we only need to examine \(l^{M1}\) at the points \(\theta ^*=t_i\). The same is true for Model 2, as it is a special case of Model B. However, in both models there is another unknown parameter, \(\kappa \) or \(k\), respectively, so we look for the maximum of a two-dimensional function. Therefore, for every \(\theta ^*=t_i\), we first find the maximum with respect to \(\kappa \) (in Model 1) or \(k\) (in Model 2). Then, we determine which of the local maxima is the global one.
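For Model 1, the search over the candidate points \(\theta ^*=t_i\) can be sketched as follows (in Python rather than the paper's R; the pdf used below, \(f(t)=\lambda \mathrm{e}^{-\lambda t}\) for \(t\le \theta ^*\) and \(f(t)=\mathrm{e}^{-\lambda \theta ^*}\kappa \mathrm{e}^{-\kappa (t-\theta ^*)}\) for \(t>\theta ^*\), is our reading of Eq. (4) and an assumption of this sketch). For fixed \(\theta ^*\), the maximization over \(\kappa \) can be done analytically, so only the candidates need to be scanned:

```python
import math

def loglik(theta, ts, lam):
    """Profile log-likelihood of Model 1 in theta, with kappa maximized
    out analytically (kappa_hat = m / sum(t_i - theta) over the m
    observations exceeding theta)."""
    below = [t for t in ts if t <= theta]
    above = [t for t in ts if t > theta]
    ll = sum(math.log(lam) - lam * t for t in below)
    if above:
        s = sum(t - theta for t in above)
        kappa = len(above) / s        # MLE of kappa for this theta
        ll += len(above) * (-lam * theta + math.log(kappa)) - kappa * s
    return ll

def theta_hat_ml(ts, lam):
    # l(theta) is piecewise monotone between observations, so the
    # maximum is attained at one of the ordered observations t_(i);
    # t_(n) is excluded so that at least one observation exceeds theta
    return max(sorted(ts)[:-1], key=lambda t: loglik(t, ts, lam))
```

The scan visits each order statistic once; the global maximum is then the best of the \(n-1\) local candidates.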
5.3.2 Models with random latency
The loglikelihood function for Model 3 is given in “Appendix 4”, Eq. (67). The estimator is obtained by numerical maximization of (67) with respect to \(\theta ^*\) and \(\kappa \).
For Model 4, the pdf of \(T_{M4}\) and thus the likelihood function are not available in a closed form.
5.3.3 Model with selective interaction
In Model 5, it is difficult to find the inverse Laplace transform of \(\widehat{f_T}(t)\), and the maximumlikelihood estimator cannot be calculated in general. However, if \(k=1\), the pdf of \(T\) is available [see (57)] and the loglikelihood function is given in “Appendix 4”, Eq. (68). The estimator of \(\theta ^*\) is obtained by direct numerical maximization of (68).
We can get more precise estimates by doubling the number of observations, using observations \(t_i\) and \(x_i\) together. Because observations of \(T\) and \(X\) come from the same distribution, they can all be inserted into (68).
5.4 Estimators based on the Laplace transform
The empirical moment generating function was first used in estimation problems by Quandt and Ramsey (1978) and later by Epps and Pulley (1985). Parameter estimation in particular distributions has been discussed by Koutrouvelis and Canavos (1997), Koutrouvelis et al. (2005) and Koutrouvelis and Meintanis (2002). We implement this method for the estimation of \(\theta ^*\), but employ the Laplace transform of \(f_T(t)\) instead of the moment generating function. This modification makes no essential difference, since the Laplace transform of a pdf is its moment generating function evaluated at a negated argument.
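For Model 1, under the same reading of the model as above, the Laplace transform of the pdf of \(T\) is \(\lambda (1-\mathrm{e}^{-(\lambda +s)\theta ^*})/(\lambda +s) + \mathrm{e}^{-(\lambda +s)\theta ^*}\kappa /(\kappa +s)\) (our own derivation, not one of the paper's numbered equations). The estimator minimizes the squared distance between this and the empirical transform \(n^{-1}\sum _i \mathrm{e}^{-s t_i}\) over a set of transform arguments; a Python sketch with a crude grid search (the grid points and parameter ranges are illustrative choices):

```python
import math

def laplace_T(s, theta, lam, kappa):
    # Laplace transform of the pdf of T in Model 1 (our derivation)
    a = lam / (lam + s) * (1 - math.exp(-(lam + s) * theta))
    b = math.exp(-(lam + s) * theta) * kappa / (kappa + s)
    return a + b

def theta_hat_laplace(ts, lam, s_grid=(0.5, 1.0, 2.0, 4.0)):
    """Minimize the squared distance between the empirical and the
    theoretical Laplace transform over a (theta, kappa) grid.  The
    accuracy depends strongly on the choice of s_grid (cf. the
    'Disadvantages' column of the summary table)."""
    n = len(ts)
    emp = {s: sum(math.exp(-s * t) for t in ts) / n for s in s_grid}
    best = None
    for theta in [i / 100 for i in range(1, 301)]:
        for kappa in [j / 100 for j in range(1, 101)]:
            d = sum((emp[s] - laplace_T(s, theta, lam, kappa)) ** 2
                    for s in s_grid)
            if best is None or d < best[0]:
                best = (d, theta)
    return best[1]
```

In practice, the grid search would be replaced by a numerical optimizer; the grid version keeps the sketch dependency-free.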
Summary of estimation methods
Estimation method | Description | Advantages | Disadvantages
Method based on cdfs | Returns the \(\theta ^*\) at which the empirical cdf differs significantly from the exponential cdf with rate \(\lambda \) | No assumptions about evoked activity | Less accurate estimates; applicable only if the latency is constant across trials
Method of moments | Returns parameters for which the theoretical mean (and variance) equal their empirical counterparts | Applicable to all models with known moments; can be extended to take subsequent ISIs into account | Needs particular assumptions about evoked activity
Maximum-likelihood method | Returns parameters for which the probability that the data were generated under the given model is maximal | Often has good asymptotic properties such as efficiency and consistency, even when regularity conditions are not met | Needs particular assumptions about evoked activity; the pdf of the data must be known
Laplace method | Returns parameters for which the deviation of the theoretical Laplace transform of the pdf from its empirical estimate is minimal | Applicable to all models with known Laplace transform | Needs particular assumptions about evoked activity; accuracy strongly influenced by the choice of the grid points
6 Simulation studies and numerical results
The performance of the estimators introduced in Sect. 5 is examined for Models 1–5 by simulations. First, \(W\) and \(R\), and in the case of models with random latency also \(\varTheta \), are simulated. Then, Eq. (1) provides a realization of \(T\). We used a sample size of \(n=100\), i.e., the number of trials under identical conditions. For every combination of parameter values, \(N=1{,}000\) samples were simulated, and therefore \(1{,}000\) different estimates of the mean latency were computed.
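The simulation step for Models 1 and 3 can be sketched as follows (Python; we assume Eq. (1), which is not reproduced in this excerpt, takes the form \(T=W\) if \(W\le \varTheta \) and \(T=\varTheta +U\) otherwise for Model A):

```python
import random

def simulate_T(n, lam, kappa, theta_star, random_latency=False, seed=1):
    """Simulate n realizations of T for Model 1 (constant latency) or
    Model 3 (exponential latency with mean theta_star), assuming
    T = W if W <= Theta and T = Theta + U otherwise, which is our
    reading of Eq. (1) for Model A."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        theta = rng.expovariate(1 / theta_star) if random_latency else theta_star
        w = rng.expovariate(lam)
        if w <= theta:
            out.append(w)                               # spontaneous spike
        else:
            out.append(theta + rng.expovariate(kappa))  # first evoked spike
    return out
```

Repeating this \(N\) times with \(n=100\) and applying each estimator to every sample reproduces the structure of the study described above.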
6.1 Parameter setting and results

\(\lambda = 1\), \(\quad \kappa \in \{0.1, 0.25\}\), \(\quad k \in \{3,9\}\); \(\quad \theta ^*\) varying from \(0.1\) to \(1.5\) in steps of \(0.1\)
Results for models with random latency (Models 3 and 4) The results can be seen in Fig. 11e, f. Again, the moment estimator and the Laplace estimator give similar results, with large errors for small values of the mean latency. The Laplace method leads to slightly better estimates. As expected, the maximum-likelihood estimates are the best.
Results for model with selective interaction (Model 5)
Identifiability of parameters Estimates of \(\theta ^*\) are required to be positive. Only the estimator based on the ecdf and the maximum-likelihood estimator for models with constant latency are constructed so that nonnegativity of the estimates is guaranteed. Estimates obtained by the other methods can become negative. In our simulations, all estimates in models with selective interaction were positive. Maximum-likelihood estimates in all models were almost always positive; problems with identifiability arose only in Model 3 for very small \(\theta ^*\) (\(\theta ^*=\lambda /5\) and \(\theta ^*=\lambda /10\)), where at most \(1\,\%\) of the estimates were negative. However, a large proportion of negative estimates occurred among the moment estimates and the estimates obtained from the Laplace transform. The worst results were obtained for Models 1 and 3 (both special cases of Model A) for small \(\theta ^*\).
6.2 Conclusion
Based on the results of the simulation study, we suggest the following ranking of the estimators according to their accuracy. Overall, the best method is maximum likelihood. It is followed by the method of moments and the Laplace method, which are of similar quality, whereas the cdf-based method gives the poorest results. However, there are many exceptions to these rules. The maximum-likelihood method performs best if only the RMSE is taken into consideration. Nevertheless, if the bias is to be minimized, it is outperformed for very short latencies (\(\theta \le \lambda /5\), approximately) by the method of moments and the Laplace method in all models except Model 3. The method of moments and the Laplace method represent basic approaches to inhibitory latency estimation, because they are less limited in their applicability to particular models; e.g., moments are often available even when the distribution is not. The method of moments is better than the Laplace method for the model with selective interaction if there is good reason to assume that the response is long enough, satisfies the renewal assumption, and observations of subsequent intervals are available. On the other hand, the Laplace method avoids the problem of potential unidentifiability of parameters with higher probability. Finally, the method based on cdfs is the best choice if one is not willing to assume a parametric model for the evoked activity.
7 Discussion
Although the notation used in this paper mimics that of Pawlas et al. (2010), the methods for latency estimation presented there cannot be applied to an inhibitory response. The main reason is that in Pawlas et al. (2010) the variable \(R\) denotes both the time to the first evoked spike and the so-called first-spike latency. Therefore, the general aim of the methods proposed there is to distinguish observations of \(W\) from observations of \(R\). When observations of \(W\) are excluded, statistical properties of the variable \(R\) are estimated. By contrast, here we also have observations of \(W\) and \(R\), but the variable of interest is \(\varTheta \), which is not measured directly and influences the observations obtained in the experiment only indirectly.
The models presented in the papers by Tamborrino et al. (2012) and Tamborrino et al. (2013) are more similar to ours, but the difference is that \(T = \min \{W,R\}\) if \(W>\varTheta \). Thus, estimation methods based on their assumptions about the evoked activity cannot be used for our model. The only exception is the estimator based on the comparison of cumulative distribution functions, because it relies only on the assumption that the spontaneous and the evoked activity are different.
The simulation study shows that the maximum-likelihood method, when available, is the best choice, which is in agreement with our expectations. However, the method of moments and the Laplace method can be applied to all considered models, while the maximum-likelihood method cannot. The nonparametric estimators are biased: \(\hat{\theta }^*_{\mathrm{{ECDF}},1}\) underestimates and \(\hat{\theta }^*_{\mathrm{{ECDF}},2}\) overestimates \(\theta ^*\). Since these two estimators differ only in the threshold used for detecting the beginning of the response (\(0\) and \(\sigma (t)\), respectively), this suggests that a less biased estimator (with smaller RMSE for each \(\theta ^*\)) could be obtained if the difference \(F_W(t)-\widehat{F}_T(t)\) were compared with \(\alpha \sigma (t)\), where \(\alpha \) is a suitable constant in \((0,1)\). Recall that this nonparametric method only assumes Poissonian spontaneous activity and constant latency.
It was shown that estimators of \(\theta ^*\) for models with selective interaction that are based on observations of both \(T\) and \(X\) are much better for an arbitrary choice of the mean latency \(\theta ^*\). On the other hand, estimation using only observations of \(T\) can have an advantage over estimators using both \(T\) and \(X\) if the assumption that the spontaneous as well as the evoked activity are given by renewal processes is not satisfied. It can also be better when the response period is relatively short and the corresponding part of the spike train consists of only a few spikes; then it is not certain that the first ISI after the stimulus belongs entirely to the response period. However, it is clear that any estimation method based on measurements of \(T\) alone would fail whenever the true latency is so long that all observations of \(T\) (or a considerable proportion of them) are shorter than \(\varTheta \). In that case, observations of the subsequent ISIs are necessary for estimation.
The assumptions of the presented models could be relaxed at the cost of less explicit estimators and larger computational costs, e.g. by allowing more general distribution families for the spontaneous activity. The sensitivity to the Poisson assumption should be tested, either on real data or on simulated data where this assumption is violated by design. The assumption that the latency follows an exponential distribution is very simplistic, and more realistic models could be considered. The justification for this restriction is that the exponential distribution enables explicit calculations and yields the Laplace transform of the pdf of \(T\) in a manageable form. For other distributions, numerical methods would be required; in particular, Eq. (15) is no longer valid. Another shortcoming is that stationarity across trials is implicitly assumed, whereas experimental data are usually more complicated, e.g. because of adaptation and plasticity. On the other hand, the assumption of reproducibility is intrinsic to statistical methods and is nearly always implicitly made by experimentalists when preparing post-stimulus time histograms.
Some of the presented models could also be used for excitatory responses, namely Model A and its special cases, Model 1 and Model 3. They can describe an inhibitory as well as an excitatory response if the distribution of \(U\) is appropriately chosen, e.g. \(U \sim Exp(\kappa )\) with \(\kappa > \lambda \). In that case, all estimation methods could be applied with only minor alterations. The method based on cdfs would require working with the difference \(\hat{F}_T(t) - F_W(t)\) instead of \(F_W(t) - \hat{F}_T(t)\); in fact, this method was originally proposed this way for an excitatory response (Tamborrino et al. 2012). The method of moments and the Laplace method would not be affected by this change. The maximum-likelihood method is also unchanged; the only difference concerns the likelihood function in Model 1 (with constant latency), which is piecewise decreasing for an inhibitory response and piecewise increasing for an excitatory response. Nevertheless, this change has no impact on the determination of the estimates.
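The cdf comparison for the excitatory case can be sketched as follows, assuming Poisson spontaneous activity, a constant latency, and the illustrative model \(T = \min (W, \varTheta + U)\) with \(W \sim Exp(\lambda )\), \(U \sim Exp(\kappa )\) and \(\kappa > \lambda \). The detection rule below (the first point where the cdf difference exceeds a small threshold) is only a crude stand-in for the actual estimator of Tamborrino et al. (2012); all parameter values are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, kappa, theta = 2.0, 10.0, 0.3   # spontaneous rate, evoked rate, true latency
n = 20000

# Illustrative excitatory model: first spike T = min(W, theta + U),
# W ~ Exp(lam) spontaneous, U ~ Exp(kappa) evoked, with kappa > lam
W = rng.exponential(1 / lam, n)
U = rng.exponential(1 / kappa, n)
T = np.minimum(W, theta + U)

# Empirical cdf of T minus the theoretical spontaneous cdf F_W(t) = 1 - exp(-lam*t);
# the difference is ~0 before the latency and grows positive after it
t_grid = np.linspace(0.0, 1.0, 1001)
F_T_hat = np.searchsorted(np.sort(T), t_grid, side="right") / n
diff = F_T_hat - (1 - np.exp(-lam * t_grid))

# Crude detection rule: first grid point where the difference exceeds a threshold
theta_hat = t_grid[np.argmax(diff > 0.03)]
print(theta_hat)
```

Before \(\varTheta \) the two cdfs coincide up to sampling noise, so the sign of the difference used in the detection rule is exactly what distinguishes the excitatory from the inhibitory variant of the method.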
Acknowledgments
M.L. and P.L. were supported by the Grant Agency of the Czech Republic, project P304/12/G069, and by RVO:67985823. S.D. was supported by the Danish Council for Independent Research | Natural Sciences. The work is part of the Dynamical Systems Interdisciplinary Network, University of Copenhagen.
References
 Baker SN, Gerstein GL (2001) Determination of response latency and its application to normalization of cross-correlation measures. Neural Comput 13:1351–1377
 Bonnasse-Gahot L, Nadal J-P (2012) Perception of categories: from coding efficiency to reaction times. Brain Res 1434:47–61
 Chase SM, Young ED (2007) First-spike latency information in single neurons increases when referenced to population onset. Proc Natl Acad Sci USA 104:5175–5180
 Chow CC, White JA (1996) Spontaneous action potentials due to channel fluctuations. Biophys J 71:3013–3021
 Commenges D, Seal J, Pinatel F (1986) Inference about a change point in experimental neurophysiology. Math Biosci 80:81–108
 Ditlevsen S, Lansky P (2005) Estimation of the input parameters in the Ornstein-Uhlenbeck neuronal model. Phys Rev E 71:011907
 Ditlevsen S, Lansky P (2006) Estimation of the input parameters in the Feller neuronal model. Phys Rev E 73:061910
 Dorval AD (2008) Probability distributions of the logarithm of inter-spike intervals yield accurate entropy estimates from small datasets. J Neurosci Methods 173:129–139
 Duchamp-Viret P, Palouzier-Paulignan B, Duchamp A (1996) Odor coding properties of frog olfactory cortical neurons. Neuroscience 74:885–895
 Epps TW, Pulley LB (1985) Parameter estimates and test of fit for infinite mixture distributions. Commun Stat Theory Methods 14:3125–3145
 Farkhooi F, Strube-Bloss MF, Nawrot MP (2009) Serial correlation in neural spike trains: experimental evidence, stochastic modeling, and single neuron variability. Phys Rev E 79:021905
 Fienberg SE (1974) Stochastic models for single neuron firing trains: a survey. Biometrics 30:399–427
 Friedman HS, Priebe CE (1998) Estimating stimulus response latency. J Neurosci Methods 83:185–194
 Gautrais J, Thorpe S (1997) Rate coding versus temporal order coding: a theoretical approach. Biosystems 48:57–65
 Hentall I (2000) Interactions between brainstem and trigeminal neurons detected by cross-spectral analysis. Neuroscience 96:601–610
 Kang K, Amari S (2008) Discrimination with spike times and ISI distributions. Neural Comput 20:1411–1426
 Koutrouvelis IA, Canavos GC (1997) Estimation in the three-parameter gamma distribution based on the empirical moment generating function. J Stat Comput Simul 59:47–62
 Koutrouvelis IA, Meintanis SG (2002) Estimating the parameters of Poisson-exponential models. Aust NZ J Stat 44:233–245
 Koutrouvelis IA, Canavos GC, Meintanis SG (2005) Estimation in the three-parameter inverse Gaussian distribution. Comput Stat Data Anal 49:1132–1147
 Krofczik S, Menzel R, Nawrot MP (2009) Rapid odor processing in the honeybee antennal lobe network. Front Comput Neurosci 2:9
 Mandl G (1993) Coding for stimulus velocity by temporal patterning of spike discharges in visual cells of cat superior colliculus. Vis Res 33:1451–1475
 McKeegan D (2002) Spontaneous and odour evoked activity in single avian olfactory bulb neurons. Brain Res 929:48–58
 Miura K, Okada M, Amari SI (2006) Estimating spiking irregularities under changing environments. Neural Comput 18:2359–2386
 Nawrot M, Boucsein C, Molina V, Riehle A, Aertsen A, Rotter S (2008) Measurement of variability dynamics in cortical spike trains. J Neurosci Methods 169:374–390
 Nawrot MP, Aertsen A, Rotter S (2003) Elimination of response variability in neuronal spike trains. Biol Cybern 88:321–334
 Pawlas Z, Klebanov LB, Beneš V, Prokešová M, Popelář J, Lánský P (2010) First-spike latency in the presence of spontaneous activity. Neural Comput 22:1675–1697
 Quandt RE, Ramsey JB (1978) Estimating mixtures of normals and switching regressions. J Am Stat Assoc 73:730–738
 R Core Team (2013) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org/
 Reisenman CE, Heinbockel T, Hildebrand JG (2008) Inhibitory interactions among olfactory glomeruli do not necessarily reflect spatial proximity. J Neurophysiol 100:554–564
 Rospars J-P, Lánský P, Duchamp-Viret P, Duchamp A (2000) Spiking frequency versus odorant concentration in olfactory receptor neurons. Biosystems 58:133–141
 Shimokawa T, Shinomoto S (2009) Estimating instantaneous irregularity of neuronal firing. Neural Comput 21:1931–1951
 Tamborrino M, Ditlevsen S, Lansky P (2012) Identification of noisy response latency. Phys Rev E 86:021128
 Tamborrino M, Ditlevsen S, Lansky P (2013) Parametric inference of neuronal response latency in presence of a background signal. Biosystems 112:249–257
 Van Rullen R, Gautrais J, Delorme A, Thorpe S (1998) Face processing using one spike per neurone. Biosystems 48:229–239
 Van Rullen R, Guyonneau R, Thorpe S (2005) Spike times make sense. Trends Neurosci 28:1–4
 Wainrib G, Thieullen M, Pakdaman K (2010) Intrinsic variability of latency to first-spike. Biol Cybern 103:43–56