The steady state and response to a periodic stimulation of the firing rate for a theta neuron with correlated noise

The stochastic activity of neurons is caused by various sources of correlated fluctuations and can be described in terms of simplified, yet biophysically grounded, integrate-and-fire models. One paradigmatic model is the quadratic integrate-and-fire model and its equivalent phase description by the theta neuron. Here we study the theta neuron model driven by a correlated Ornstein-Uhlenbeck noise and by periodic stimuli. We apply the matrix-continued-fraction method to the associated Fokker-Planck equation to develop an efficient numerical scheme to determine the stationary firing rate as well as the stimulus-induced modulation of the instantaneous firing rate. For the stationary case, we identify the conditions under which the colored noise decreases or increases the firing rate and compare our results to existing analytical approximations for limiting cases. For an additional periodic signal we demonstrate how the linear and nonlinear response terms can be computed and report resonant behavior for some of them. We extend the method to the case of two periodic signals, generally with incommensurable frequencies, and present a particular case in which a strong mixed response to both signals is observed, i.e. where the response to the sum of signals differs significantly from the sum of responses to the single signals. We provide Python code for our computational method: https://github.com/jannikfranzen/theta_neuron.


Introduction
Neural spiking is a random process due to the presence of multiple sources of noise. These include the quasi-random input received by a neuron embedded in a recurrent network (network noise), the unreliability of synapses (synaptic noise), and the stochastic opening and closing of ion channels (channel noise) (Gabbiani & Cox, 2017; Koch, 1999). This stochasticity and the resulting response variability is a central feature of neural spiking (Holden, 1976; Tuckwell, 1989). Studies in computational neuroscience therefore have to account for this stochasticity, as it has important implications for the signal-transmission properties.
Computational studies of stochastic neuron models often assume that the driving fluctuations are temporally uncorrelated. This white-noise assumption implies that the correlation time τ_s of the input fluctuations is much smaller than the time scale τ_m of the membrane potential. Put differently, the input noise is regarded as fast compared to every other process in the neural system. This assumption grants a far-reaching mathematical tractability of the problem (Abbott & van Vreeswijk, 1993; Brunel, 2000; Burkitt, 2006; Holden, 1976; Lindner & Schimansky-Geier, 2001; Ricciardi, 1977; Richardson, 2004; Tuckwell, 1989) but is violated in a number of interesting cases. First, fluctuations that arise in a recurrent network often exhibit reduced power at low frequencies (green noise) (Bair et al., 1994; Câteau & Reyes, 2006; Pena et al., 2018; Vellmer & Lindner, 2019). Second, fluctuations in oscillatory systems, e.g. caused by the electroreceptor of the paddlefish, can be band-pass filtered (Bauermeister et al., 2013; Neiman & Russell, 2001). Finally and most prominently, fluctuations that emerge due to synaptic filtering of postsynaptic potentials (Brunel & Sergi, 1998; Lindner, 2004; Lindner & Longtin, 2006; Moreno-Bote & Parga, 2010; Rudolph & Destexhe, 2005) or due to slow ion-channel kinetics (Fisch et al., 2012; Schwalger et al., 2010) have reduced power at high frequencies (red noise).
There are two important types of neuron models with distinct response characteristics: integrators (type I neurons) and resonators (type II neurons) (Izhikevich, 2007). The canonical model for a type I neuron is the quadratic integrate-and-fire model or, mathematically equivalent in terms of a phase variable, the theta neuron. Here we study the response characteristics of the theta neuron driven by a low-pass-filtered noise, the Ornstein-Uhlenbeck (OU) process. This model has been studied analytically by Brunel and Latham (2003) in the limits of very short and very long correlation times. Furthermore, Naundorf et al. (2005a, b) solved the associated Fokker-Planck equation for the voltage and the noise variable for selected parameter sets in order to obtain the stationary firing rate and the firing rate's linear response to a weak periodic stimulus.
Here, we put forward semi-analytical results for the stationary firing rate by means of the matrix-continued-fraction (MCF) method for arbitrary ratios of the two relevant time scales, τ = τ_s/τ_m. We present exhaustive parameter scans of the stationary firing rate with respect to variations of the bifurcation parameter and the correlation time. Furthermore, our method also allows us to calculate how a periodic signal, not necessarily weak, is encoded in the firing rate of the model neuron in the presence of a correlated background noise. Because non-weak signals, for which the linear response does not provide a good approximation to the firing rate, have recently attracted attention (Novikov & Gutkin, 2020; Ostojic & Brunel, 2011; Voronenko & Lindner, 2017), we also develop semi-analytical tools for the linear as well as the nonlinear response of the firing rate to one or two periodic signals. To the best of our knowledge, this is the first application of the MCF method in computational neuroscience.
This paper is organized as follows. In Sect. 2 we introduce the model system and the associated Fokker-Planck equation. In Sect. 3 we compute the stationary firing rate of a theta neuron subject to correlated noise by means of the MCF method. Section 4 generalizes the ideas of the MCF method to the case where the model is driven by the OU noise and an additional periodic signal. Finally, in Sect. 4.3 we compute the firing rate response to two periodic signals. We conclude with a short summary of our results.

Model
The quadratic integrate-and-fire (QIF) model uses the normal form of a saddle-node on invariant circle (SNIC) bifurcation (Izhikevich, 2007) with a time-dependent input I(t):

(1) τ_m dx/dt̃ = x² + I(t̃).
In order to make the connection to physical time units transparent, we have kept on the l.h.s. a time constant, which is of the order of the membrane time τ_m,¹ typically 10 ms. In the following, however, for ease of notation we use a nondimensional time t = t̃/τ_m, i.e. we measure time as well as any other time constants, e.g. the correlation time below, in multiples of the membrane time constant. Similarly, all frequencies and firing rates are given in multiples of the inverse membrane time constant (additional rescalings are considered below, see e.g. Eqs. (7) and (10)).
In the new nondimensional time the QIF model takes the usual form:

(2) dx/dt = x² + I(t).

If the variable x(t) reaches the threshold x_th = ∞, a spike is created at time t_i = t and x(t) is immediately reset to x_re = −∞. If the input is constant, it can serve as a bifurcation parameter and allows the model to switch between the excitable (I < 0) and the mean-driven regime (I > 0). The model for I < 0 is illustrated in Fig. 1A, including the stable and unstable fixed points at x = ±√|I| as well as the reset. The QIF model can be transformed into the theta neuron by the transformation x = tan(θ/2) (cf. Fig. 1A). The advantage of such a phase description is that the threshold θ_th = π and the reset θ_re = −π lie at finite values. We will use this phase description of a canonical type I neuron in the remainder of this paper.
(3) dθ/dt = (1 − cos θ) + (1 + cos θ) I(t).

¹ One way to derive the QIF model is to consider the limit of a large slope factor Δ_v in the exponential integrate-and-fire model, which itself results from a simplification of a conductance-based model (Fourcaud-Trocmé et al., 2003). By choosing a suitably rescaled voltage variable and expanding the exponential function up to second order for v − v_t ≪ Δ_v, one finds √2 τ_m ẋ = μ + x², where τ_m = C/g_L is the membrane time constant. For simplicity we neglected the prefactor √2 in Eq. (1).

The input consists of a constant mean input μ, a temporally correlated noise η(t), and a periodic signal s(t) (see Fig. 1B):

(4) I(t) = μ + η(t) + s(t).

Note that the temporal average of the input, Ī = lim_{T→∞} ∫₀^T I(t)dt/T, is only affected by μ because the temporal averages are set to η̄ = 0 and s̄ = 0 without loss of generality. The correlated noise is given by an Ornstein-Uhlenbeck process with autocorrelation function ⟨η(t)η(t + Δt)⟩ = σ² exp(−|Δt|/τ) and correlation time τ; it can be generated by an extra stochastic differential equation, a trick from statistical physics known as Markovian embedding of a colored noise (see e.g. Dygas et al., 1986; Guardia et al., 1984; Langer, 1969; Mori, 1965; Siegle et al., 2010, and the review by Hänggi & Jung, 1995). We remind the reader that the correlation time is given in terms of the membrane time constant, i.e. τ = τ_s/τ_m is actually the ratio between the true correlation time τ_s (given for instance in ms) and the membrane time τ_m. In the limit τ → 0 the noise η(t) becomes uncorrelated, i.e. white. However, if the variance σ² is held constant, as in Eq. (5), the effect of the noise on the neuron vanishes together with the correlation time. A non-trivial white-noise limit can be more properly described in terms of the noise intensity D = σ²τ; if D is held constant, the noise still affects the dynamics for vanishing correlation times. For such a constant-intensity scaling the effect of the noise vanishes as τ → ∞.
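As a concrete illustration of the Markovian embedding, the OU noise can be generated by Euler-Maruyama integration of the auxiliary stochastic differential equation τ dη/dt = −η + √(2σ²τ) ξ(t). The sketch below is our own minimal illustration (not the paper's repository code); it lets one verify numerically that the stationary variance is σ² and that correlations decay as exp(−Δt/τ).

```python
import numpy as np

def ou_noise(sigma, tau, T, dt, seed=0):
    """Ornstein-Uhlenbeck process with stationary variance sigma^2 and
    correlation time tau, generated by the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    xi = rng.standard_normal(n)               # white-noise increments
    eta = np.empty(n)
    eta[0] = sigma * rng.standard_normal()    # start in the stationary state
    a = 1.0 - dt / tau                        # deterministic relaxation factor
    b = sigma * np.sqrt(2.0 * dt / tau)       # noise amplitude per step
    for i in range(1, n):
        eta[i] = a * eta[i - 1] + b * xi[i]
    return eta
```

For example, with sigma = 1 and tau = 1 the sample variance of a long trace is close to 1, and the correlation at lag Δt = 1 is close to exp(−1) ≈ 0.37.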
For s(t) = 0 the system shows spontaneous spiking (not related to any signal). In this case the parameter space is three-dimensional, i.e. all statistics depend only on (μ, σ, τ). This dependence, however, can be reduced to just two independent parameters (μ̂, τ̂) defined by

(6) μ̂ = μ/σ, τ̂ = √σ τ.

This transformation also affects the phase, tan(θ̂/2) = tan(θ/2)/√σ, and time, t̂ = √σ t, in Eq. (3), and consequently rescales the firing rate,

(7) r = √σ r̂.

Under an additional periodic driving s(t) = ε cos(ωt) the signal will be rescaled as well, ŝ(t) = ε̂ cos(ω̂ t̂), with

(8) ε̂ = ε/σ, ω̂ = ω/√σ.

For several periodic signals the respective amplitudes and frequencies are rescaled in the same manner.
For the constant-intensity scaling we use a similar transformation and set D = 1:

(9) μ̃ = μ/D^{2/3}, τ̃ = D^{1/3} τ, ε̃ = ε/D^{2/3}, ω̃ = ω/D^{1/3}.

Again, the state variables are affected by this scaling as well: tan(θ̃/2) = tan(θ/2)/D^{1/3} and t̃ = D^{1/3} t. The firing rates in the scaled and unscaled parameter spaces are related by

(10) r = D^{1/3} r̃.

We make use of these scalings in the discussion of the results. For ease of notation, we omit the hat and tilde over the parameters.

Fig. 1 Type-I neuron model. A Representation of the deterministic QIF and the equivalent theta neuron model. The blue line shows the QIF model's potential U(x) = −x(x²/3 + I) in the excitable regime (I < 0). Upon reaching the threshold x_th = ∞ a spike is created and x is reset to x_re = −∞. For the equivalent theta neuron model, obtained by the transformation x = tan(θ/2), a spike is created whenever θ passes θ_th = π; no additional reset rule is needed. B Illustration of a theta neuron subject to a temporally correlated OU noise (blue) as well as a periodic signal (red) and the resulting spike train with stochastic spike times (orange)

The Fokker-Planck equation
The stochastic system of interest can be written as two Langevin equations,

(11) dθ/dt = f(θ, η, s(t)), τ dη/dt = −η + √(2σ²τ) ξ(t),

where f(θ, η, s(t)) = (1 − cos θ) + (1 + cos θ)(μ + η(t) + s(t)) and ξ(t) is Gaussian white noise with ⟨ξ(t)ξ(t′)⟩ = δ(t − t′). The governing equation for the probability density function (PDF) is the well-known Fokker-Planck equation (FPE) (Risken, 1984). The PDF denotes the probability to find the phase and noise at time t around certain values. In the neural context the PDF can be related to the instantaneous firing rate r(t) (see for instance Brunel & Sergi, 1998; Naundorf et al., 2005a, b; and Moreno-Bote & Parga, 2010), as we recall in the following. The FPE is given by:

∂_t P(θ, η, t) = L(θ, η, s(t)) P(θ, η, t), L(θ, η, s(t)) = −∂_θ f(θ, η, s(t)) + (1/τ) ∂_η η + (σ²/τ) ∂²_η.

The two-dimensional partial differential equation is completed by two natural boundary conditions, lim_{η→±∞} P(θ, η, t) = 0, a periodic boundary condition, P(θ + 2π, η, t) = P(θ, η, t), and the normalization condition ∫₀^{2π} dθ ∫_{−∞}^{∞} dη P(θ, η, t) = 1. There is a corresponding continuity equation that relates the temporal derivative of the PDF to the spatial derivatives of the probability currents,

∂_t P = −∂_θ J_θ − ∂_η J_η,

where J_θ and J_η are the probability currents in the θ and η direction, respectively:

J_θ = f(θ, η, s(t)) P, J_η = −(η/τ) P − (σ²/τ) ∂_η P.

An important insight is that the probability current in the phase direction, J_θ, at the threshold θ = π is directly related to the instantaneous firing rate:

(21) r(t) = ∫_{−∞}^{∞} dη J_θ(π, η, t) = 2 ∫_{−∞}^{∞} dη P(π, η, t).

In the last equality we have used that the dynamics of the theta neuron becomes independent of the input at the threshold; specifically, we have f(π, η, s(t)) = 2.
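Numerically, the relation r(t) = 2 ∫dη P(π, η, t) means that once the joint PDF is known on a grid, the rate follows from a one-dimensional quadrature over the noise variable at the threshold slice. A tiny sketch (the grid discretization is our own choice, not the paper's):

```python
import numpy as np

def firing_rate_from_pdf(P_at_threshold, eta_grid):
    """Instantaneous firing rate from the joint PDF evaluated at theta = pi:
    r(t) = 2 * integral over eta of P(pi, eta, t).  The factor 2 is the
    drift at threshold, which is independent of the input."""
    d_eta = eta_grid[1] - eta_grid[0]
    return 2.0 * np.sum(P_at_threshold) * d_eta
```

As a sanity check, a PDF that is uniform in θ and standard normal in η gives the threshold slice P(π, η) = N(η; 0, 1)/(2π) and hence r = 1/π.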
The solution of the two-dimensional Fokker-Planck equation with the boundary conditions listed above is a difficult problem, even in the simplest case of the (time-independent) stationary solution in the absence of a periodic stimulus. Different authors have proposed approximate solutions in limit cases, e.g. for very slow or very fast Ornstein-Uhlenbeck noise (Brunel & Latham, 2003), for weak noise in the mean-driven regime (Galán, 2009; Zhou et al., 2013), or, in the case of a periodic modulation of the firing rate, for very low or very high stimulus frequencies (Fourcaud-Trocmé et al., 2003). A numerical method to solve the two-dimensional Fokker-Planck equation in terms of an eigenfunction expansion was presented by Naundorf et al. (2005a, b); similar approaches have been pursued to describe two one-dimensional white-noise-driven neuron models either coupled directly (Ly & Ermentrout, 2009) or subject to a shared input noise (Deniz & Rotter, 2017). Eigenfunction expansions have also been used to describe the activity in neural populations and neural networks, see e.g. Knight (2000) and Doiron et al. (2006). Turning back to the problem of single-neuron models beyond the theta neuron, different approximations to the multidimensional Fokker-Planck equation for neuron models with Ornstein-Uhlenbeck noise have been suggested for the perfect integrate-and-fire model (Fourcaud & Brunel, 2002; Lindner, 2004; Schwalger et al., 2010, 2015) and for the leaky integrate-and-fire model (Alijani & Richardson, 2011; Brunel & Sergi, 1998; Brunel et al., 2001; Moreno et al., 2002; Moreno-Bote & Parga, 2004, 2006; Schuecker et al., 2015; Schwalger & Schimansky-Geier, 2008).
We note that with respect to the driving noise, the related simpler case of an exponentially correlated two-state (dichotomous) noise permits the exact analytical solution for a few statistical measures such as the firing rate and stationary voltage distribution (Droste & Lindner, 2014;Müller-Hansen et al., 2015), the power spectrum and linear response function (Droste & Lindner, 2017), and the serial correlation coefficient of the interspike intervals (Lindner, 2004;Müller-Hansen et al., 2015).

Stationary firing rate
If we consider a system that is subject to a temporally correlated noise but no external signal (s(t) = 0), the probability density asymptotically approaches a stationary distribution P₀(θ, η), which is what we consider now. The FPE for this stationary distribution reads

(22) 0 = L₀(θ, η) P₀(θ, η),

with the stationary Fokker-Planck operator L₀(θ, η) = L(θ, η, 0). Once the stationary probability density is known, it can be used to obtain the stationary firing rate r₀ via Eq. (21).

The MCF method
In the previous section it was shown that the stationary probability density is interesting on its own because it is directly related to the stationary firing rate. Here we outline the core ideas and assumptions that are necessary to compute the stationary PDF P 0 ( , ) by means of the matrix-continued-fraction method, which has been put forward by Risken (1984).
As a first step, the stationary probability density is expanded with respect to the phase and noise in two sets of eigenfunctions, namely the complex exponential functions e^{inθ}/√(2π) and the Hermite functions φ_p(η) (see Bartussek, 1997 for a similar choice):

(24) P₀(θ, η) = Σ_{n=−∞}^{∞} Σ_{p=0}^{∞} c_{n,p} e^{inθ}/√(2π) φ_p(η).

Note that c*_{n,p} = c_{−n,p} because P₀(θ, η) is real. Thus, we must only determine the expansion coefficients for n ≥ 0. Both sets satisfy the periodic and natural boundary conditions in θ and η, respectively. A first application of this result is the determination of the marginal probability density, P₀(θ) = ∫dη P₀(θ, η), which is illustrated for different values of μ and τ in Fig. 2. The stationary firing rate is conveniently expressed by only two of the coefficients. This expression can be derived by inserting the expansion into Eq. (23) and using the properties of the coefficients and eigenfunctions, in particular (80) and (81) for the Hermite functions.
The coefficients can be determined by substituting the expansion Eq. (24) into the stationary FPE (22), which yields a tridiagonal recurrence relation (see Appendix A) for the coefficient vectors c_n = (c_{n,0}, c_{n,1}, …)^T; here 1 is the identity matrix and the matrices Â, B̂ are defined in Appendix A. Solving Eq. (27) for the c_{n,p} is difficult because the matrices are infinite and the equation constitutes a relation between three unknowns. As a first step to find the coefficients, one can truncate the expansion in Eq. (24) to obtain finite matrices. In practice, we assume that all Hermite functions and Fourier modes become negligible for large p or n, so that the corresponding coefficients vanish,² i.e. c_{n,p} = 0 for p > p_max or n > n_max. To solve the second problem (of having three unknowns), we define transition matrices Ŝ_n by c_{n+1} = Ŝ_n c_n, which upon insertion into Eq. (27) yields an equation that is satisfied for any coefficient vector c_n provided the term in square brackets vanishes. The resulting relation between two consecutive transition matrices leads, by recursive insertion, to an infinite matrix continued fraction, where 1/(·) denotes the inverse of a matrix. This fraction is truncated for n > n_max. The matrix Ŝ₀ then determines the coefficients via Eq. (31), which are needed for the computation of the firing rate according to Eq. (26).
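The backward recursion for the transition matrices can be sketched generically. The code below is our own illustration for an abstract tridiagonal recurrence A_n c_{n−1} + B_n c_n + C_n c_{n+1} = 0 with truncation Ŝ_{n_max} = 0; the concrete matrices of the paper (built from Â, B̂ of Appendix A) are not reproduced here, so the callables A, B, C are placeholders.

```python
import numpy as np

def mcf_transition_matrices(A, B, C, n_max):
    """Transition matrices S_n with c_{n+1} = S_n c_n for the tridiagonal
    recurrence A(n) c_{n-1} + B(n) c_n + C(n) c_{n+1} = 0, n = 1..n_max.
    Backward recursion S_{n-1} = -[B(n) + C(n) S_n]^{-1} A(n), truncated
    by S_{n_max} = 0; this realizes the matrix continued fraction."""
    dim = B(1).shape[0]
    S = {n_max: np.zeros((dim, dim))}
    for n in range(n_max - 1, -1, -1):
        S[n] = -np.linalg.solve(B(n + 1) + C(n + 1) @ S[n + 1], A(n + 1))
    return S
```

By construction, coefficient vectors generated from any c_0 via c_{n+1} = S_n c_n satisfy the recurrence exactly up to the truncation order, which is easy to verify with random (well-conditioned) matrices.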

Constant variance scaling
The MCF method provides a fast computational scheme to determine the stationary firing rate r₀ in a large part of the parameter space. Together with different analytical approximations it is possible to cover the complete dependence of r₀ on the parameters μ, σ, and τ. In the following figures, we additionally verify the MCF results by comparison to numerical simulations of Eq. (11) using an Euler-Maruyama scheme with time step Δt = 5·10⁻³ for N_trials = 5·10⁵ trials of length T_max = 500. For more details see the repository. In Fig. 3 we use the constant variance scaling (see Sect. 2) with σ = 1. A different choice for σ would result in a rescaling of the axes according to Eq. (6). As depicted in Fig. 3B, for short as well as for large correlation times, the firing rate approaches limit values indicated by the horizontal lines. For τ → 0, the effect of the correlated noise vanishes, so that the short-time limit is equal to the deterministic firing rate r_det(μ) = Θ(μ)√μ/π, where Θ(·) is the Heaviside function. In the case τ → ∞, the noise causes a slow modulation of the firing rate; computing the long-correlation-time limit then corresponds to averaging the deterministic firing rate over the distribution of the noise (quasi-static noise approximation, see Moreno-Bote & Parga, 2010):

(37) r₀ → ∫_{−∞}^{∞} dη r_det(μ + η) e^{−η²/(2σ²)}/√(2πσ²).

We recall that for a QIF model driven by white noise the firing rate is always larger than the deterministic rate (Lindner et al., 2003). In contrast, a colored noise may decrease the firing rate (Brunel & Latham, 2003; Galán, 2009), as shown in Fig. 3. For large correlation times, the decrease in the firing rate is a direct consequence of the concave curvature of the deterministic firing rate r_det(I) at large μ, as illustrated in Fig. 4. This can be understood as follows.
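A minimal Euler-Maruyama simulation of the Langevin system, Eq. (11), can serve as such a cross-check. The sketch below is our own simplified stand-in for the repository code (single trial, rate from counting threshold crossings at θ = π); function name and defaults are our choices.

```python
import math
import numpy as np

def simulate_theta_neuron(mu, sigma, tau, T=100.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of the theta neuron, Eq. (3), driven by an
    OU noise with variance sigma^2 and correlation time tau.  Returns the
    time-averaged firing rate from threshold crossings at theta = pi."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    theta, eta = 0.0, 0.0
    amp = sigma * math.sqrt(2.0 * dt / tau)   # OU noise increment amplitude
    n_spikes = 0
    for _ in range(n_steps):
        drift = (1.0 - math.cos(theta)) + (1.0 + math.cos(theta)) * (mu + eta)
        theta += dt * drift
        eta += -(eta / tau) * dt + amp * rng.standard_normal()
        if theta >= math.pi:         # threshold crossing counts as a spike
            theta -= 2.0 * math.pi   # wrap the phase back to the reset at -pi
            n_spikes += 1
    return n_spikes / T
```

For σ → 0 and μ = 1 the simulated rate should approach the deterministic value r_det = √μ/π ≈ 0.318, while in the excitable regime (μ < 0) a nonzero noise produces a nonzero rate.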
If we take the linear approximation of the deterministic rate around the operation point, then, not surprisingly, with a symmetric input distribution of the noise, the averaging yields the deterministic firing rate at the operation point. In the relevant range, this linearized rate is larger than the function r_det(I), as can be seen from Fig. 4. Consequently, the resulting integral over the linear approximation in Eq. (38) (i.e. the deterministic firing rate) is larger than the actual firing rate in the long-correlation-time limit, Eq. (37). This is the mechanism by which a colored noise can reduce the firing rate in the mean-driven regime.
For weak noise in the mean-driven regime (σ ≪ μ) this drop in the firing rate can be calculated analytically, as done by Galán (2009). The formula requires the phase response curve (PRC) of the theta neuron, which is well known (Ermentrout, 1996), and results in a compact expression for the firing rate (please note the transition from the cyclic frequencies used in Galán, 2009 to firing rates). The formula clearly predicts a reduction of the firing rate by colored noise; specifically, r₀ decreases monotonically with increasing correlation time. It should be noted, however, that in the strongly mean-driven regime, in which this theory is valid, the changes in the firing rate are very small (see Fig. 5A, B). If the driving is less strong and deviations of the firing rate from r_det are more pronounced, the theory becomes inaccurate (Fig. 5C). Is at least the qualitative prediction of an overall rate reduction due to correlated noise correct? To answer this question, we plot in Fig. 6 the difference between the firing rate and the deterministic limit, r₀ − r_det, for a broad range of correlation times τ and inputs μ. This difference can be both positive and negative. Trivially, in the excitable regime (μ < 0) the firing rate in the presence of noise can only be larger than the vanishing deterministic rate (here r_det = 0). In the mean-driven regime the changes can be both positive (for sufficiently small τ) and negative (for larger τ); the exact line of separation is displayed by a solid line in Fig. 6.
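The quasi-static averaging behind the rate reduction, Eq. (37), is easy to reproduce numerically. The following sketch (our own illustration, simple Riemann-sum quadrature) shows both effects at once: for a mean-driven neuron the Gaussian average of the concave rate lies below r_det, while in the excitable regime the noise makes the averaged rate positive.

```python
import numpy as np

def r_det(mu):
    """Deterministic theta-neuron firing rate: sqrt(mu)/pi for mu > 0, else 0."""
    return np.sqrt(np.maximum(mu, 0.0)) / np.pi

def r_long_tau(mu, sigma, n=20001, width=8.0):
    """Quasi-static (long-correlation-time) limit, cf. Eq. (37): average the
    deterministic rate over the Gaussian stationary density of the OU noise."""
    eta = np.linspace(-width * sigma, width * sigma, n)
    d_eta = eta[1] - eta[0]
    p = np.exp(-eta**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return float(np.sum(r_det(mu + eta) * p) * d_eta)
```

For weak noise the average collapses back onto the deterministic rate, as expected.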

Constant intensity scaling
Instead of a constant variance, we can also keep the noise intensity D = σ²τ fixed. The corresponding stationary firing rate as a function of μ and τ is shown in Fig. 7A. One advantage of the constant-intensity scaling is that it permits a non-trivial white-noise limit (τ → 0), displayed in Fig. 7B, C by the dashed lines (Brunel & Latham, 2003). In the opposite limit of a long correlation time the noise variance vanishes, which implies that r₀ approaches the deterministic rate.
Remarkably, for a sufficiently strong mean input current μ, the rate attains a minimum at intermediate correlation times. Considering the long- as well as the short-correlation-time approximations by Moreno-Bote and Parga (2010) (see our Eq. (37)) and Brunel and Latham (2003) (see Eq. (3.19) therein), respectively, this behavior can be expected. Generally, we find that the firing rate for any τ > 0 is smaller than the white-noise limit.

Response to periodic stimulus
In the previous section we considered a theta neuron with an input current I(t) that consisted of a constant input μ and a colored noise η(t). We now turn to a more general case that involves an additional periodic signal, as illustrated in Fig. 8A, and demonstrate how the MCF method can be used to compute the response of the firing rate.
We consider the time-dependent signal s(t) = ε cos(ωt) as a perturbation with amplitude ε. The respective FPE can be expressed by the stationary Fokker-Planck operator L₀ as defined in the last section and an additional term that represents the effect of the periodic signal:

(41) ∂_t P(θ, η, t) = [L₀(θ, η) + ε cos(ωt) L_per(θ)] P(θ, η, t),

with L_per = −∂_θ (1 + cos θ). As a result of the periodic forcing, we can no longer expect that the probability density converges to a stationary distribution; instead, the probability density approaches a so-called cyclo-stationary state with period T = 2π/ω,

(42) P(θ, η, t + T) = P(θ, η, t).

Since this distribution fully determines the asymptotic firing rate, this implies for the latter r(t + T) = r(t).

Fig. 4 Mechanism for the firing-rate reduction. The decrease of the firing rate due to strongly correlated noise in the mean-driven regime is a consequence of the concave curvature of the deterministic firing rate r_det(I). For large τ the firing rate can be approximated by averaging the deterministic firing rate over the noise distribution according to Eq. (37); this yields the blue point on the dashed line

Fig. 6 Comparison between the firing rate and the deterministic rate. Difference between r(μ, τ) and the deterministic firing rate r_det(μ) for σ² = 1. As expected, in the excitable regime (μ < 0) the firing rate of the stochastic system is increased compared to the deterministic rate. In the mean-driven regime (μ > 0) the firing rate can be either increased or decreased, depending on the particular values of both μ and τ. Parameters of the MCF method: n_max = p_max = 150
To determine the cyclo-stationary PDF we again use a twofold expansion: a Fourier expansion in time that reflects the periodicity, Eq. (42), and a power series in the signal amplitude ε,

(43) P(θ, η, t) = Σ_{ℓ=0}^{∞} ε^ℓ Σ_{k=−∞}^{∞} P_{ℓ,k}(θ, η) e^{−ikωt}.

Note that P_{ℓ,k}(θ, η) = P*_{ℓ,−k}(θ, η) because P(θ, η, t) is real. The expansion Eq. (43) can be substituted into Eq. (41) to obtain a system of coupled differential equations that are no longer time dependent and can be solved iteratively with respect to ℓ:

(44) L_k P_{ℓ,k} = −(1/2) L_per (P_{ℓ−1,k−1} + P_{ℓ−1,k+1}),

with L_k = L₀ + ikω. The normalization of the probability density provides additional conditions for these functions:

(45) ∫₀^{2π} dθ ∫_{−∞}^{∞} dη P_{ℓ,k}(θ, η) = δ_{ℓ,0} δ_{k,0}.

Here δ_{i,j} is the Kronecker delta. Clearly, P_{0,0}(θ, η) = P₀(θ, η) is the stationary probability density. The system of coupled differential equations, Eq. (44), can be solved iteratively (ℓ → ℓ + 1). Notice that whenever P_{ℓ,k}(θ, η) is governed by a homogeneous differential equation, i.e. L_k P_{ℓ,k}(θ, η) = 0, the trivial solution P_{ℓ,k}(θ, η) = 0 satisfies Eq. (45) and is thus a solution (except for k = ℓ = 0). Therefore, for ℓ = 0 we find that all coefficients except P_{0,0}(θ, η) vanish. For ℓ = 1 we find two nonvanishing coefficients, namely P_{1,−1}(θ, η) and P_{1,1}(θ, η). Generally, all coefficients P_{ℓ,k}(θ, η) with |k| > ℓ or k + ℓ odd vanish (see Fig. 16). The remaining inhomogeneous differential equations can be solved by means of the MCF method (see Appendix B).

Fig. 7 Interestingly, the firing rate is always smaller than the corresponding white-noise limit τ → 0 (dashed line) and can show non-monotonic behavior with a minimum depending on μ and τ, see B. Known analytical approximations by Fourcaud-Trocmé et al. (2003) (solid purple lines) and Moreno-Bote and Parga (2010) are also shown
The cyclo-stationary firing rate can now be expressed in terms of the functions P_{ℓ,k}(θ, η) using Eq. (21), exploiting the symmetry P_{ℓ,k} = P*_{ℓ,−k} and P_{ℓ,k>ℓ} = 0:

(46) r(t) = Σ_{ℓ=0}^{∞} ε^ℓ [ r_{ℓ,0} + Σ_{k=1}^{ℓ} |r_{ℓ,k}| cos(kωt − φ_{ℓ,k}) ],

with

(47) φ_{ℓ,k} = arg(r_{ℓ,k}),

where arg(·) is the complex argument. We recover the well-known stationary firing rate for ℓ = k = 0, i.e. r_{0,0} = r₀. Note that some of the terms r_{ℓ,k} in Eq. (46) vanish because of the underlying symmetry of the governing equations, Eq. (44).
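The sparsity pattern that these symmetries impose on the hierarchy can be enumerated directly: a coefficient P_{ℓ,k} can only be nonzero if |k| ≤ ℓ and ℓ + k is even. A tiny sketch (our own illustration of the index structure stated above):

```python
def nonvanishing_modes(l_max):
    """Index pairs (l, k) of the potentially nonvanishing coefficients
    P_{l,k}: |k| <= l and l + k even; all other coefficients vanish."""
    return [(l, k)
            for l in range(l_max + 1)
            for k in range(-l, l + 1)
            if (l + k) % 2 == 0]
```

For ℓ up to 2 this reproduces the pattern described in the text: only P_{0,0} at zeroth order, P_{1,±1} at first order, and P_{2,0}, P_{2,±2} at second order.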

Linear response
For small ε the linear term in the expansion, i.e. the linear response r_{1,1}, already provides a good approximation of the asymptotic firing rate r(t):

(48) r(t) ≈ r₀ + ε|r_{1,1}(ω)| cos(ωt − φ_{1,1}).

Note that all other terms r_{1,k≠1} vanish. The function |r_{1,1}(ω)| is also commonly known as the absolute value of the susceptibility, |χ(ω)|, which quantifies the amplitude response of the firing rate. The phase shift with respect to the signal is described by φ_{1,1}. An exemplary signal s(t) together with the linear response, given in terms of the amplitude and phase shift, is shown in Fig. 8. For the chosen small signal amplitude ε, the linear theory indeed captures very well the cyclo-stationary part of the firing rate. There is also a transient response due to the chosen initial condition of the ensemble; here, however, we focus solely on the cyclo-stationary response.
Before we discuss the rate modulation with respect to different parameters, we compare our numerical results against known approximations (Fourcaud-Trocmé et al., 2003) (see Fig. 9). First we verify the low-frequency limit ω → 0. In this case the signal s(t) is slow and can be considered a quasi-constant input. Expanding the firing rate with respect to the signal amplitude yields r(t) ≈ r₀(μ + ε cos(ωt)) ≈ r₀(μ) + ε cos(ωt) ∂r₀/∂μ.

Fig. 8 Cyclo-stationary firing rate. A Illustration of a theta neuron model subject to a temporally correlated OU noise and a periodic signal. B The firing rate (orange line; simulation) approaches a cyclo-stationary state (black line; MCF method) due to the periodicity of the signal (green line). In the linear regime the firing rate is well approximated by r(t) ≈ r₀ + |χ(ω)| s(t − φ_{1,1}/ω). Parameters: μ = 0.5, σ² = 1, τ = 1, ε = 0.1, and ω = 2. The cyclo-stationary firing rate was calculated by the MCF method with n_max = p_max = 100. Simulation parameters: in this figure, the number of realizations was up-scaled to N_trials = 1·10⁶ for visual purposes. For all realizations, the initial values are θ(t = 0) = 0 and η(t = 0) = −

A comparison with Eq. (48) allows us to identify the low-frequency limits of the susceptibility and phase shift:

|χ(ω → 0)| = ∂r₀/∂μ, φ_{1,1}(ω → 0) = 0.

As we can compute the firing rate r₀ for different values of μ (see Sect. 3), the derivative above can be calculated numerically.
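Such a numerical derivative can be taken by a simple central difference of any rate-versus-μ function. In the sketch below (our own illustration), the deterministic rate stands in for r₀, which is an assumption valid only for weak noise; there the analytical derivative ∂r_det/∂μ = 1/(2π√μ) is available for comparison.

```python
import numpy as np

def susceptibility_dc(rate_fn, mu, h=1e-5):
    """Low-frequency limit of the susceptibility, |chi(0)| = d r0 / d mu,
    via a central finite difference of a rate function rate_fn(mu)."""
    return (rate_fn(mu + h) - rate_fn(mu - h)) / (2.0 * h)

# Weak-noise stand-in for r0 (assumption for illustration only):
r_det = lambda mu: np.sqrt(max(mu, 0.0)) / np.pi
```

At μ = 1 this yields |χ(0)| ≈ 1/(2π).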
Second, in the opposite limit of large frequencies, ω → ∞, the theta neuron acts as a low-pass filter whose susceptibility decays as |χ(ω)| ∝ ω⁻² (Fourcaud-Trocmé et al., 2003). Hence, the susceptibility becomes very small in the high-frequency limit, which is also noticeable by the pronounced random deviations of our simulation results in this limit. Both limit cases are well captured by our method for two values of the correlation time (τ = 0.1, 1) in the mean-driven regime. We see that here the main effect of increasing the correlation time is to diminish the resonance of the response: for τ = 0.1 the susceptibility peaks around ω ≈ 2π r₀ (note that for small σ we have r₀ ≈ r_det); this peak is gone for τ = 1 because the effect of the noise, keeping its variance constant, increases with τ. All these features are confirmed in detail by the results of stochastic simulations (symbols in Fig. 9). The general dependence of the susceptibility, focusing on its magnitude only, is inspected in Fig. 10 for the constant-variance and in Fig. 11 for the constant-intensity scaling. Qualitatively different behavior of |χ(ω)| can be observed between the mean-driven (μ > 0) and the excitable regime (μ < 0). In the mean-driven regime the theta neuron exhibits a strong resonance near ω_det = 2π r_det that increases with decreasing effect of the noise, i.e. in the constant-variance scaling the resonance becomes stronger as τ → 0 (see Fig. 10 top), while for the constant-intensity scaling the resonance increases as τ → ∞ (see Fig. 11 top). In the excitable regime resonances are weak or absent. First of all, the baseline firing rate of the neuron vanishes as the effect of the noise decreases (cf. Figs. 3A and 7A), and so does the susceptibility (see Figs. 10 and 11 bottom). Secondly, the theta neuron becomes a low-pass filter where |χ(ω)| decreases with increasing ω regardless of the correlation time τ.
Right at the bifurcation point μ = 0 there are still no pronounced resonances with respect to ω. However, the dependence of the linear response on the correlation time is somewhat different from the excitable regime: the susceptibility increases as the effect of the noise becomes very weak, i.e. for τ → 0 in the constant-variance scaling (see Fig. 10 middle) and for τ → ∞ in the constant-intensity scaling (see Fig. 11 middle).

Nonlinear response
For larger signal amplitudes ε, nonlinear response functions have to be considered:

(52) r(t) = r₀ + ε|r_{1,1}| cos(ωt − φ_{1,1}) + ε² [ r_{2,0} + |r_{2,2}| cos(2ωt − φ_{2,2}) ] + ε³ [ |r_{3,1}| cos(ωt − φ_{3,1}) + |r_{3,3}| cos(3ωt − φ_{3,3}) ] + …

Here we have included all terms up to third order in ε (cf. Eq. (46)). The nonlinear response features higher Fourier modes and a correction r_{2,0} of the time-averaged firing rate. The response functions r_{ℓ,k} and their respective arguments φ_{ℓ,k} of course depend on the model parameters μ, σ, and τ as well as on the signal frequency ω.

For a neuron in the mean-driven regime the frequency dependence of three selected response functions is shown in Fig. 12B. In contrast to the linear response |r_{1,1}|, the functions |r_{2,2}| and |r_{3,3}| display additional resonances, for instance near ω ≈ π r_det, i.e. at half the fundamental resonance frequency. This behavior is not specific to the theta neuron; such resonances can be observed for the LIF neuron as well (Voronenko & Lindner, 2017). These additional resonances give rise to strong nonlinear effects even if the signal is weak, see Fig. 12A. In the particular case shown in Fig. 12 the signal frequency was chosen to match the resonance frequency of the second-order response |r_{2,2}|, so that the linear response alone no longer provides a good approximation to the firing rate r(t). Instead, the second-order response must be included, illustrating the importance of the nonlinear theory even for comparatively weak signals.
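Once the complex response coefficients r_{ℓ,k} are available, evaluating the truncated series of Eq. (52) is straightforward. The sketch below is our own illustration; the coefficient values in the usage example are made up purely to demonstrate the bookkeeping, not computed from the model.

```python
import numpy as np

def rate_series(t, omega, eps, r0, terms):
    """Evaluate a truncated response series, cf. Eq. (52):
    r(t) = r0 + sum over (l, k) of eps^l * |r_lk| * cos(k*omega*t - phi_lk),
    where terms maps (l, k) -> complex r_lk and phi_lk = arg(r_lk).
    Terms with k = 0 (such as r_20) shift the time-averaged rate."""
    t = np.asarray(t, dtype=float)
    r = np.full_like(t, r0)
    for (l, k), r_lk in terms.items():
        r += eps**l * np.abs(r_lk) * np.cos(k * omega * t - np.angle(r_lk))
    return r
```

Since every term oscillates at a multiple of ω, the reconstructed rate inherits the cyclo-stationary periodicity r(t + 2π/ω) = r(t), which gives a quick consistency check.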
By means of the MCF method it is possible to achieve a near-perfect fit of the actual firing rate by including many correction terms; see Fig. 12A, where we have included all terms up to the 10th order. However, note that the computational cost of each further correction term increases roughly linearly with the order of the signal amplitude.
We now discuss the amplitude response functions |r_{ℓ,k}| up to the third order in ε for varying values of the mean input μ and correlation time τ (cf. Fig. 13). The linear response |r_{1,1}|, already discussed in the preceding section and shown here for completeness (Fig. 13A I, B I, C I), displays in the mean-driven regime (μ = 1), and to a lesser degree also at the bifurcation point (μ = 0), a well-known resonance peak near the firing frequency ω_0 = 2π r_0; it acts as a low-pass filter in the excitable regime (μ = −0.5). Increasing the correlation time τ, and thereby the effect of the noise, diminishes this resonance.
The first nonlinear term, r_{2,0}, describes the effect of the periodic signal on the time-averaged firing rate; we discuss this term first for the mean-driven regime (Fig. 13C II). Similar to the findings for a stochastic LIF model (Voronenko & Lindner, 2017, Fig. 3B), at low noise we find that resonant driving at the frequency ω_0 corresponding to the firing rate does not evoke any change of the time-averaged firing rate, while a frequency slightly below or above evokes a reduction or increase of the rate, respectively. If we deviate too strongly from ω_0, however, the effect of the signal on the time-averaged rate becomes very small. Increasing the correlation time increases the effect of the noise and smears out these nonlinear resonances. The effect of the periodic signal on the time-averaged firing rate in the excitable regime and at the bifurcation point is quite different (Fig. 13A II, B II). Here the rate is always increased by the periodic signal, similar to what was found for an excitable LIF model (Voronenko & Lindner, 2017, Fig. 3A). Furthermore, at the bifurcation point and at low noise intensities (green curve in B II) there is a pronounced maximum as a function of frequency, attained at a frequency higher than ω_0.
Generally, in the higher-order response functions we observe a number of peaks versus frequency (see e.g. Fig. 13A–C V). The resonances in the mean-driven regime (C V) and at low noise (green curve) are found near ω_det, ω_det∕2 and ω_det∕3. Note again that in this regime the deterministic frequency ω_det = 2π r_det of the oscillator and the stationary firing frequency ω_0 are close. In the excitable regime both the linear and nonlinear response functions also exhibit, for most driving frequencies, a nonmonotonic behavior with respect to the correlation time, i.e. with respect to the strength of the noise.

Response to two periodic signals
So far we have discussed the theta neuron's linear and nonlinear firing-rate response to a single periodic signal. In this section we derive a scheme that allows us to calculate the response if the model neuron receives two periodic signals. Calculating the firing rate in this case will not only help to understand how a theta neuron responds to two periodic signals but can also be used to calculate the second-order response to arbitrary signals (Voronenko & Lindner, 2017).
As a starting point we formulate the corresponding FPE. This equation still agrees with Eq. (41), except that s(t) now contains two periodic signals,

(53) s(t) = s_1(t) + s_2(t) = ε_1 cos(ω_1 t) + ε_2 cos(ω_2 t).

Again we are interested in the PDF for which all initial conditions have been forgotten and the time dependence of P(θ, η, t) is only due to the time dependence of the signal s(t). Note that since the sum of two periodic signals is not necessarily periodic, the functions P(θ, η, t) and r(t) are not periodic either. In fact, s(t) is only periodic if the ratio of the two frequencies is a rational number, i.e. ω_1∕ω_2 ∈ ℚ. We choose a Fourier representation with respect to ω_1 t and ω_2 t and expand with respect to the small amplitudes ε_1, ε_2:

For notational convenience we have omitted the arguments of the coefficients P^{ℓ_1,ℓ_2}_{k_1,k_2}(θ, η). Because P(θ, η, t) is a real-valued function, the coefficients obey

(56) P^{ℓ_1,ℓ_2}_{−k_1,−k_2} = (P^{ℓ_1,ℓ_2}_{k_1,k_2})*.

As for the case of a single periodic signal, inserting Eq. (55) into Eq. (54) gives a system of time-independent coupled differential equations,

(57) L_{k_1,k_2} P^{ℓ_1,ℓ_2}_{k_1,k_2} = (L_per∕2) (P^{ℓ_1−1,ℓ_2}_{k_1+1,k_2} + P^{ℓ_1−1,ℓ_2}_{k_1−1,k_2} + P^{ℓ_1,ℓ_2−1}_{k_1,k_2+1} + P^{ℓ_1,ℓ_2−1}_{k_1,k_2−1}),

with L_{k_1,k_2} = L_0 + i(k_1 ω_1 + k_2 ω_2) and P^{ℓ_1,ℓ_2}_{k_1,k_2} = 0 for ℓ_1 < 0 or ℓ_2 < 0. The normalization of the probability density provides again the additional conditions

(58) ∫ dθ ∫ dη P^{ℓ_1,ℓ_2}_{k_1,k_2}(θ, η) = δ_{k_1,0} δ_{k_2,0} δ_{ℓ_1,0} δ_{ℓ_2,0}.

The differential equations (57) are analogous to Eq. (44) and can be solved by means of the MCF method (see Appendix C). In the following we explicitly provide the hierarchy of coupled differential equations up to the second order in ε_1, ε_2, i.e. for ℓ_1 + ℓ_2 ≤ 2. The zeroth-order term, ℓ_1 + ℓ_2 = 0, describes the unperturbed system. As we have already argued for the case of a single periodic signal, the function P^{0,0}_{0,0}, governed by

(59) L_{0,0} P^{0,0}_{0,0} = 0,

is the only non-vanishing zeroth-order term, because for every other value of k_1, k_2 the trivial solution satisfies Eq. (58). Therefore P^{0,0}_{0,0} = P_0 is the stationary probability density from Sect. 3. The stationary PDF in turn determines the two non-vanishing linear (ℓ_1 + ℓ_2 = 1) correction terms:

(60, 61) L_{1,0} P^{1,0}_{1,0} = (L_per∕2) P^{0,0}_{0,0},  L_{0,1} P^{0,1}_{0,1} = (L_per∕2) P^{0,0}_{0,0}.
The second-order (ℓ_1 + ℓ_2 = 2) terms follow from, e.g.,

(62) L_{2,0} P^{2,0}_{2,0} = (L_per∕2) P^{1,0}_{1,0},  L_{1,1} P^{1,1}_{1,1} = (L_per∕2) (P^{1,0}_{1,0} + P^{0,1}_{0,1}).

The resulting firing rate up to second order reads

(68) r(t) ≈ r^{0,0}_{0,0}
 + ε_1 |r^{1,0}_{1,0}| cos(ω_1 t − φ^{1,0}_{1,0})
 + ε_2 |r^{0,1}_{0,1}| cos(ω_2 t − φ^{0,1}_{0,1})
 + ε_1² [ r^{2,0}_{0,0} + |r^{2,0}_{2,0}| cos(2ω_1 t − φ^{2,0}_{2,0}) ]
 + ε_2² [ r^{0,2}_{0,0} + |r^{0,2}_{0,2}| cos(2ω_2 t − φ^{0,2}_{0,2}) ]
 + ε_1 ε_2 [ |r^{1,1}_{1,1}| cos((ω_1 + ω_2)t − φ^{1,1}_{1,1}) + |r^{1,1}_{1,−1}| cos((ω_1 − ω_2)t − φ^{1,1}_{1,−1}) ].

The first five lines represent the first- and second-order responses of the firing rate for a theta neuron that receives a single periodic signal, either s_1(t) or s_2(t). For instance, |r^{1,0}_{1,0}(ω_1, ω_2)| = |r_{1,1}(ω_1)| (the linear response amplitude to s_1) and |r^{2,0}_{2,0}(ω_1, ω_2)| = |r_{2,2}(ω_1)| (the response amplitude at the second harmonic of s_1) do not depend on the frequency ω_2 of the second signal, as can be seen in Fig. 14A I and A II. The response functions r_{ℓ,k}(ω) for a single periodic signal have already been discussed in the previous sections. The last two terms, proportional to ε_1 ε_2, are of particular interest here, because they arise only due to the interaction of the two periodic signals. The corresponding response amplitudes |r^{1,1}_{1,1}| and |r^{1,1}_{1,−1}| are shown in Fig. 14A III and A IV (cf. Eq. (71)).

Fig. 14 caption: Note that the response functions |r^{0,1}_{0,1}| and |r^{0,2}_{0,2}|, which are not shown here, are identical to the response functions shown in A I and A II if the frequencies ω_1 and ω_2 are interchanged (both account for a single signal). The response functions |r^{1,1}_{1,1}| and |r^{1,1}_{1,−1}|, which describe the interaction effect of both signals on the firing rate, exhibit additional resonances near ω_1 + ω_2 = 2π r_det and |ω_1 − ω_2| = 2π r_det. B and C show the firing rate in response to two periodic signals where the sum of the frequencies does and does not match the aforementioned condition ω_1 + ω_2 = 2π r_det, respectively.
If the condition is matched, the sum of responses to each individual signal does not provide a good approximation to the actual firing rate; the full response to the sum of signals has to be calculated (see B). Parameters: μ = 1, σ² = 1, τ = 0.05, ε_1 = 0.3, ε_2 = 0.1, with frequencies ω_1 = 0.5, …

In accordance with previous observations for the leaky integrate-and-fire model with white noise and periodic driving (Voronenko & Lindner, 2017), we find two distinct cases in the mean-driven regime. First, if neither the sum nor the difference ω_1 ± ω_2 is close to the firing frequency 2π r_det, the response to the sum of two signals is well described by the sum of responses to the separate signals. A particular set of frequencies ω_1 and ω_2 for which this is the case is shown in Fig. 14C, where the second-order response to the sum of two signals (black solid line) agrees very well with the sum of the second-order responses to one signal at a time (dashed line). Second, if ω_1 + ω_2 ≈ 2π r_det or |ω_1 − ω_2| ≈ 2π r_det, the firing rate is significantly affected by the interaction of both signals (see Fig. 14A III and A IV). An example of the firing rate as a function of time where these interaction terms are crucial is shown in Fig. 14B. Here the aforementioned response to the sum of two signals and the sum of responses to one signal at a time disagree significantly.
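How the mixed terms enter the two-signal expansion (68) can be sketched in a few lines; the dictionary R of complex response functions below is a hypothetical placeholder filled with made-up values, not output of the MCF method:

```python
import numpy as np

def rate_two_signals(t, e1, e2, w1, w2, R):
    """Evaluate the second-order firing-rate expansion for two periodic
    signals, Eq. (68). R maps the index tuple (l1, l2, k1, k2) to the
    complex response function r^{l1,l2}_{k1,k2}."""
    def osc(key, amp, freq):
        # amplitude |r| and phase arg(r) of one oscillatory contribution
        return amp * np.abs(R[key]) * np.cos(freq * t - np.angle(R[key]))
    r = R[(0, 0, 0, 0)].real                         # stationary rate
    r += osc((1, 0, 1, 0), e1, w1)                   # linear response to s1
    r += osc((0, 1, 0, 1), e2, w2)                   # linear response to s2
    r += e1**2 * R[(2, 0, 0, 0)].real + osc((2, 0, 2, 0), e1**2, 2 * w1)
    r += e2**2 * R[(0, 2, 0, 0)].real + osc((0, 2, 0, 2), e2**2, 2 * w2)
    # mixed terms ~ e1*e2: present only when both signals act together
    r += osc((1, 1, 1, 1), e1 * e2, w1 + w2)
    r += osc((1, 1, 1, -1), e1 * e2, w1 - w2)
    return r
```

Whenever the mixed responses r^{1,1}_{1,±1} are negligible, the full rate reduces to the sum of the two single-signal responses minus the doubly counted baseline; near the resonance conditions ω_1 ± ω_2 ≈ 2π r_det this is no longer the case.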

Summary and outlook
In this paper we have studied the firing rate of the canonical type-I neuron model, the theta neuron, subject to a temporally correlated Ornstein-Uhlenbeck noise and additional periodic signals. We have solved the associated multi-dimensional Fokker-Planck equation numerically by means of the matrix-continued-fraction (MCF) method, put forward by Risken (1984). For our problem the MCF method provided reliable solutions for a wide range of parameters; the main restriction is that the correlation time cannot be too large and, additionally, in the excitable regime the noise intensity cannot be too small (as also known from other applications of the method; see Lindner & Sokolov, 2016 for a recent example). To the best of our knowledge this is the first application of this method in computational neuroscience, advancing the results by Naundorf et al. (2005a, b) on the same model.
When the neuron receives no additional periodic signal, i.e. when the model is driven solely by the correlated noise, our method allows a quick and accurate computation of the stationary firing rate. We investigated the rate for a large part of the parameter space, confirmed the MCF results by comparison with stochastic simulations and discussed the agreement with known analytical approximations (Fourcaud-Trocmé et al., 2003;Galán, 2009;Moreno-Bote & Parga, 2010). We found that, in contrast to the white noise case (Lindner et al., 2003), correlated noise can both increase and decrease the stationary firing rate of a type-I neuron and we identified the conditions under which one or the other behavior can be observed.
In the presence of a single additional periodic signal both the probability density function and the firing rate approach a cyclo-stationary solution, which can be found by extending the MCF method to the time-dependent Fokker-Planck equation. For a weak signal the corresponding rate modulation is given by the linear response function, the well-known susceptibility, which has been addressed before numerically (Naundorf et al., 2005a, b) and analytically in limit cases (Fourcaud-Trocmé et al., 2003). Here we went beyond the linear response and also computed the higher-order response to a single periodic stimulus. Similar to what was found for a periodically driven leaky integrate-and-fire model with white background noise (Voronenko & Lindner, 2017), we identified driving frequencies at which the higher harmonics can be stronger than the firing-rate modulation at the fundamental frequency. For a variety of nonlinear response functions, we observed resonant behavior.
Finally, we generalized the numerical approach to the case of two periodic signals and studied the nonlinear response up to second order. We found that for certain frequency combinations the mixed response to the two signals can lead to a drastically different rate modulation than predicted by pure linear response theory; this is similar to what was observed in a leaky integrate-and-fire neuron with white background noise (Voronenko & Lindner, 2017).
Our method could be extended to neuron models that include more complicated correlated noise, for instance a harmonic noise (Schimansky-Geier & Zülicke, 1990) that can mimic special sources of intrinsic fluctuations (Engel et al., 2009). Another problem that could be addressed by this method is the computation of the spike-train power spectrum in the stationary state. Furthermore, the linear and nonlinear response to the modulation of other parameters, e.g. the noise intensity (Boucsein et al., 2009; Lindner & Schimansky-Geier, 2001; Silberberg et al., 2004; Tchumatchenko et al., 2011), could be of interest and be computed with the methods outlined in this paper.

A. Stationary case - derivation of the tridiagonal recurrence relation
Here we demonstrate how the problem of solving the FPE (22) for the stationary PDF P_0(θ, η) can be translated into an equivalent problem of solving a tridiagonal recurrence relation for the expansion coefficients c_{n,p}. These coefficients can then be found by means of the matrix-continued-fraction method, as demonstrated in Sect. 3.1.
First, we recall the stationary FPE,

(72) 0 = L_0 P_0(θ, η) = (L_θ + L_η) P_0(θ, η),

and the expansion of the PDF by two sets of orthonormal eigenfunctions, the Fourier modes e^{inθ}∕√(2π) and the Hermite functions φ_p(η), which satisfy the periodic boundary condition in θ and natural boundary conditions in η, respectively. Using the orthonormality of these functions, one can show that the normalization condition of the PDF determines c_{0,0} = 1. Before addressing the full problem of finding the recursive relation for the coefficients c_{n,p}, we first calculate L_η P_0(θ, η) using the expansion (75). Remember that the Hermite functions can be expressed by the Hermite polynomials H_p(x) as follows:

(78) φ_p(η) = (α ∕ (2^p p! √π))^{1∕2} H_p(αη) e^{−(αη)²∕2},

where α is an arbitrary scaling factor. Making use of two properties of the Hermite functions, we can derive a handy expression for L_η φ_p(η) by a suitable choice of α; the result (82) shows that φ_p is an eigenfunction of the operator L_η. Combining Eq. (82), the expansion (75) and the FPE (72) yields an equation for the expansion coefficients. We split the sum into three parts, perform an index shift and use Eq. (80) to obtain Eq. (84). Furthermore, we introduce the projection operators Ô_n and Ô_p with Ô_n e^{imθ} = δ_{n,m} and Ô_p φ_q = δ_{p,q}. Multiplying Eq. (84) from the left by Ô_n Ô_p allows us to get rid of the sum over n and to find the following recursive relation.
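The orthonormality of the Hermite functions for an arbitrary scaling factor α can be checked numerically. The following sketch uses NumPy's physicists' Hermite polynomials; the choice α = 0.7 and the integration grid are arbitrary:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def phi(p, x, alpha=1.0):
    """Orthonormal Hermite function built from the physicists' Hermite
    polynomial H_p: phi_p(x) = N_p H_p(alpha*x) exp(-(alpha*x)**2 / 2),
    with N_p = sqrt(alpha / (2**p * p! * sqrt(pi))) chosen so that the
    phi_p are orthonormal for any scaling factor alpha."""
    coeffs = np.zeros(p + 1)
    coeffs[p] = 1.0                      # select H_p in the Hermite series
    norm = sqrt(alpha / (2.0**p * factorial(p) * sqrt(pi)))
    return norm * hermval(alpha * x, coeffs) * np.exp(-(alpha * x)**2 / 2)

# numerical check of orthonormality: integral of phi_p*phi_q = delta_{p,q};
# the Riemann sum is very accurate here because the integrand decays fast
x = np.linspace(-12.0, 12.0, 6001)
dx = x[1] - x[0]
G = np.array([[np.sum(phi(p, x, alpha=0.7) * phi(q, x, alpha=0.7)) * dx
               for q in range(5)] for p in range(5)])
```

The Gram matrix G comes out as the identity to high accuracy, confirming that the normalization factor compensates the rescaling by α.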
The sum can be interpreted as a product of matrices and vectors. We introduce the coefficient vector c_n := (c_{n,0}, c_{n,1}, …)^T and obtain

(90) 0 = [Â + 2n(B̂ − 1̂)] c_n + nB̂ (c_{n−1} + c_{n+1}).

Note that for n = 0 we can readily infer the remaining elements of c_0 (remember that c_{0,0} = 1). Equation (90) can be simplified by multiplication with B̂^{−1}∕n from the left to obtain the expression used in Sect. 3.1,

(92) K_n c_n = c_{n−1} + c_{n+1}.

This is the tridiagonal recurrence relation which we have solved in the main part by the MCF method (illustrated in Fig. 15).
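A tridiagonal recurrence of the form (92) is solved by the ansatz c_{n+1} = Ŝ_n c_n, which turns the recurrence into the matrix continued fraction Ŝ_{n−1} = (K_n − Ŝ_n)^{−1}, started from Ŝ_{n_max} = 0 at the truncation index. A minimal sketch with a generic callable K_n (the concrete matrices of the theta-neuron problem are not reproduced here):

```python
import numpy as np

def mcf_solve(K, c0, n_max):
    """Solve the homogeneous tridiagonal recurrence
    K_n c_n = c_{n-1} + c_{n+1} (cf. Eq. (92)) by the
    matrix-continued-fraction method, truncating with c_n = 0
    for n > n_max.
    K  -- callable returning the matrix K_n for n >= 1
    c0 -- known coefficient vector at n = 0
    """
    dim = len(c0)
    # downward sweep: S_{n-1} = (K_n - S_n)^{-1}, S_{n_max} = 0
    S = {n_max: np.zeros((dim, dim))}
    for n in range(n_max, 0, -1):
        S[n - 1] = np.linalg.inv(K(n) - S[n])
    # upward sweep: c_n = S_{n-1} c_{n-1}, starting from the known c_0
    c = [np.asarray(c0, dtype=float)]
    for n in range(1, n_max + 1):
        c.append(S[n - 1] @ c[-1])
    return c
```

By construction the returned vectors satisfy the recurrence at every interior index, which is a convenient consistency check when experimenting with the truncation n_max.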

B. Cyclo-stationary case - MCF method
In this section we extend the MCF method to the case of an additional periodic signal, s(t) = ε cos(ωt), and calculate the cyclo-stationary firing rate r(t). In Sect. 4 we have already shown that the time-dependent FPE for this problem, i.e. Eq. (41), can be transformed into a set of time-independent differential equations that are recursively related (cf. Eq. (44)),

(93) L_k P_{ℓ,k} = (L_per∕2) (P_{ℓ−1,k+1} + P_{ℓ−1,k−1}),

with P_{ℓ,k} = 0 for ℓ < 0. This hierarchy of coupled differential equations can be solved iteratively, starting at the stationary PDF P_{0,0}. The dependence is illustrated in Fig. 16, where many terms P_{ℓ,k} vanish (grey circles) due to the normalization condition of the PDF, as explained in Sect. 4. In order to solve the corresponding differential equation for each P_{ℓ,k}(θ, η) we choose the same ansatz as in the previous section,

(94) P_{ℓ,k}(θ, η) = Σ_{n,p} c^{(ℓ,k)}_{n,p} e^{inθ}∕√(2π) φ_p(η).

By substituting this ansatz into Eq. (47), a relation between the expansion coefficients c^{(ℓ,k)}_{n,p} and the response functions r_{ℓ,k} (which determine the full firing rate r(t) via Eq. (46)) can be derived. To find the c^{(ℓ,k)}_{n,p}, we again transform the differential equations (93) into coefficient equations by means of the expansion (94) (see the derivation of the tridiagonal recurrence relation in the previous section) and obtain Eq. (96).

Fig. 15 caption: Expanding the probability density using eigenfunctions according to Eq. (75) and inserting it into the FPE (72) leads to a relation for the expansion coefficients c_{n,p} in which only nearest neighbors interact (including diagonals). The coefficients can then be computed by truncating the system and introducing transition matrices Ŝ_n that can be obtained from a matrix continued fraction; see Sect. 3.1.

For notational convenience we have introduced the coefficient vectors and dropped the superscripts, c_n := c^{(ℓ,k)}_n. Further, we denote the sum of the previously computed coefficient vectors by c′_n := c^{(ℓ−1,k+1)}_n + c^{(ℓ−1,k−1)}_n. As in the previous section, Eq. (96) is multiplied by B̂^{−1}∕n from the left to obtain the more compact expression

(98) K_n c_n = c_{n+1} + c_{n−1} + c̃_n,

with K_n given by Eq. (99). In order to solve the two-dimensional coefficient equation (98) we must assume that all Hermite functions and Fourier modes become negligible for large p or n, so that the corresponding coefficients vanish: c_{n,p} := c^{(ℓ,k)}_{n,p} = 0 for p > p_max or |n| > n_max. Specifically, we have checked how key statistics, such as the firing rate, depend on p and n and observed saturation for sufficiently large p and n; we then take these as maximal values.
The normalization condition of the PDF determines the coefficient c_{0,0} := c^{(ℓ,k)}_{0,0}: it equals one for (ℓ, k) = (0, 0) and vanishes otherwise. The remaining elements of c_0 vanish as well. This can be seen from Eq. (96), which simplifies considerably for n = 0. The involved matrix Â_k = Â + ikω 1̂ is diagonal with non-vanishing elements (Â_k)_{p,p} ≠ 0 for p ≠ 0. This implies that Eq. (102) can only be fulfilled if c_{0,p} = 0 for p ≠ 0. The resulting coefficient vector serves as the initial condition in the following. All other coefficient vectors can be derived iteratively with the ansatz

(104) c_{n+1} = Ŝ_n c_n + d_n (and analogously c_{n−1} = Ŝ^R_n c_n + d^R_n for negative n).

Here we have introduced the transition matrices Ŝ_n and Ŝ^R_n, as done in the case of no periodic signal, and the additional vectors d_n and d^R_n, which take the inhomogeneity c̃_n in Eq. (98) into account (Risken, 1984). The ansatz (104) is substituted into Eq. (98), which yields Eq. (105). This equation is satisfied when both expressions in the square brackets vanish, which allows us to derive two recursive relations: first, from the left-hand side, a recursion for Ŝ_{n−1} and d_{n−1} that involves the combination (2c′_n + c′_{n+1} + c′_{n−1}), and second, from the right-hand side, the corresponding relation for the coefficient vectors. Analogous expressions can be derived for Ŝ^R_n and d^R_n. Because all coefficient vectors c_n are assumed to vanish for |n| > n_max, it follows that Ŝ_{n_max} and d_{n_max} vanish. This defines the initial condition that is needed to determine the remaining transition matrices and vectors for 0 < n < n_max and for −n_max < n < 0. To summarize, the rate response functions r_{ℓ,k}, and hence the full firing rate, can be calculated following the iterative scheme illustrated in Fig. 16.

Fig. 16 caption: If the theta neuron is subject to a periodic signal, the cyclo-stationary PDF can be expanded according to Eq. (94). Inserting this ansatz into the FPE leads to a system of time-independent, recursively coupled differential equations (93). The coupling hierarchy is shown here: the stationary PDF P_{0,0} determines P_{1,1}, which in turn determines P_{2,0} and P_{2,2}, and so on. Gray dots represent terms that vanish, as explained in Sect. 4.
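The inhomogeneous recurrence (98) is handled analogously to the stationary case: with the ansatz c_{n+1} = Ŝ_n c_n + d_n one obtains Ŝ_{n−1} = (K_n − Ŝ_n)^{−1} and d_{n−1} = Ŝ_{n−1}(d_n + b_n), where b_n stands for the inhomogeneity. A sketch of the positive-index branch under these assumptions (generic K_n and b_n; the negative-index branch with Ŝ^R_n, d^R_n mirrors it):

```python
import numpy as np

def mcf_solve_inhom(K, b, c0, n_max):
    """Positive-index branch of the MCF scheme for the inhomogeneous
    tridiagonal recurrence K_n c_n = c_{n+1} + c_{n-1} + b_n
    (cf. Eq. (98), with b_n playing the role of the inhomogeneity),
    truncated so that c_n = 0 for n > n_max."""
    dim = len(c0)
    S = {n_max: np.zeros((dim, dim))}   # transition matrices S_n
    d = {n_max: np.zeros(dim)}          # inhomogeneity vectors d_n
    # downward sweep: c_{n+1} = S_n c_n + d_n gives
    # S_{n-1} = (K_n - S_n)^{-1} and d_{n-1} = S_{n-1} (d_n + b_n)
    for n in range(n_max, 0, -1):
        S[n - 1] = np.linalg.inv(K(n) - S[n])
        d[n - 1] = S[n - 1] @ (d[n] + b(n))
    # upward sweep, starting from the known initial vector c_0
    c = [np.asarray(c0, dtype=float)]
    for n in range(1, n_max + 1):
        c.append(S[n - 1] @ c[-1] + d[n - 1])
    return c
```

The vectors d_n vanish whenever the inhomogeneity does, in which case the scheme reduces to the homogeneous continued fraction of the stationary case.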
The starting point is the zeroth-order term in the signal amplitude (ℓ = 0), for which c′_n = 0 is known. For each iteration step, i.e. ℓ → ℓ + 1, the following series of steps is executed.

C. MCF method: response to two periodic signals
In Sect. 4.3 we were interested in the nonlinear response r(t) of the noisy theta neuron subject to two periodic signals. To this end, we need to compute the response functions r^{ℓ_1,ℓ_2}_{k_1,k_2}, which are related to the expansion functions P^{ℓ_1,ℓ_2}_{k_1,k_2} by Eq. (69). The latter in turn obey the differential equation

(115) L_{k_1,k_2} P^{ℓ_1,ℓ_2}_{k_1,k_2} = (L_per∕2) (P^{ℓ_1−1,ℓ_2}_{k_1+1,k_2} + P^{ℓ_1−1,ℓ_2}_{k_1−1,k_2} + P^{ℓ_1,ℓ_2−1}_{k_1,k_2+1} + P^{ℓ_1,ℓ_2−1}_{k_1,k_2−1}).
This system of coupled differential equations can be solved iteratively, as described in Sect. 4.3. We wish to find the function P^{ℓ_1,ℓ_2}_{k_1,k_2} given that the functions on the right-hand side of Eq. (115) have already been computed. For each P^{ℓ_1,ℓ_2}_{k_1,k_2} the differential equation has in principle the same structure as Eq. (93) from the previous section. Hence, it can be solved using the same numerical scheme based on the MCF method, using Eqs. (94)–(114), except for a change in the notation that reflects the expansion with respect to two signals, c^{(ℓ,k)}_{n,p} → c^{(ℓ_1,ℓ_2,k_1,k_2)}_{n,p}. Two further differences are: 1. Replace kω by k_1ω_1 + k_2ω_2, which affects the computation of K_n according to Eq. (99). 2. The function P^{ℓ_1,ℓ_2}_{k_1,k_2} is determined by the four previously computed expansion functions P^{ℓ_1−1,ℓ_2}_{k_1+1,k_2}, P^{ℓ_1−1,ℓ_2}_{k_1−1,k_2}, P^{ℓ_1,ℓ_2−1}_{k_1,k_2+1} and P^{ℓ_1,ℓ_2−1}_{k_1,k_2−1}. This affects the computation of c′_n as follows:

c′_n := c^{(ℓ_1−1,ℓ_2,k_1+1,k_2)}_n + c^{(ℓ_1−1,ℓ_2,k_1−1,k_2)}_n + c^{(ℓ_1,ℓ_2−1,k_1,k_2+1)}_n + c^{(ℓ_1,ℓ_2−1,k_1,k_2−1)}_n.

Note that the known initial coefficient vector in Eq. (103) is still c_0 = (1, 0, …, 0)^T when computing the stationary firing rate r^{0,0}_{0,0}, and c_0 = (0, 0, …, 0)^T otherwise. The hierarchy of coupled differential equations does differ from that of the previous section and is provided up to the second order in the signal amplitudes in Sect. 4.3.