Multi-scale detection of rate changes in spike trains with weak dependencies

Abstract

The statistical analysis of neuronal spike trains by models of point processes often relies on the assumption of constant process parameters. However, it is a well-known problem that the parameters of empirical spike trains can be highly variable, such as the firing rate. In order to test the null hypothesis of a constant rate and to estimate change points, a Multiple Filter Test (MFT) and a corresponding algorithm (MFA) have been proposed that can be applied under the assumption of independent interspike intervals (ISIs). As empirical spike trains often show weak dependencies in the correlation structure of ISIs, we extend the MFT here to point processes with short-range dependencies. By specifically estimating serial dependencies in the test statistic, we show that the new MFT can be applied to a variety of empirical firing patterns, including positive and negative serial correlations as well as tonic and bursty firing. The new MFT is applied to a data set of empirical spike trains with serial correlations, and simulations show improved performance compared with methods that assume independence. In the case of positive correlations, the new MFT is necessary to reduce the number of false positives, which can be strongly inflated when independence is falsely assumed. For the frequent case of negative correlations, the new MFT shows an improved detection probability of change points and thus a higher potential for signal extraction from noisy spike trains.
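
The following minimal simulation sketch (not taken from the paper; the AR(1)-based ISI construction, the function name correlated_isis, and the lag cut-off m = 3 are illustrative assumptions) shows why serial ISI correlations matter for rate-change detection: the long-run variability of weakly dependent ISIs, which involves the autocovariances, can differ substantially from the plain ISI variance that an independence-based test would use.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_isis(n, mu=0.1, sd=0.03, phi=-0.4):
    """Positive ISIs whose lag-1 serial correlation has the sign of phi
    (built from a Gaussian AR(1), then shifted and clipped away from zero)."""
    z = np.zeros(n)
    eps = rng.normal(scale=sd * np.sqrt(1 - phi**2), size=n)
    for i in range(1, n):
        z[i] = phi * z[i - 1] + eps[i]
    return np.clip(mu + z, 1e-4, None)

isis = correlated_isis(100_000)
mu_hat = isis.mean()
sigma2_hat = isis.var(ddof=1)

# long-run variance: ISI variance plus twice the low-lag autocovariances,
# truncated here at an assumed maximal lag m = 3
m = 3
longrun_hat = sigma2_hat + 2 * sum(
    np.cov(isis[:-l], isis[l:])[0, 1] for l in range(1, m + 1)
)

print(f"mean ISI          : {mu_hat:.4f}")
print(f"ISI variance      : {sigma2_hat:.5f}")
print(f"long-run variance : {longrun_hat:.5f}  (smaller here, since phi < 0)")
```

With negative serial correlations the long-run variance is smaller than the ISI variance, so an independence-based normalization is too conservative and change points are missed; with positive correlations the effect reverses and false positives become more likely.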

References

  • Avila-Akerberg, O., & Chacron, M.J. (2011). Nonrenewal spike train statistics: causes and functional consequences on neural coding. Exp. Brain Res., 210, 353–71.

  • Berkes, I., Horváth, L., Kokoszka, P., & Shao, Q.-M. (2005). Almost sure convergence of the Bartlett estimator. Period. Math. Hungar., 51(1), 11–25.

  • Berkes, I., Horváth, L., Kokoszka, P., & Shao, Q.-M. (2006). On discriminating between long-range dependence and changes in mean. Ann. Statist., 34(3), 1140–1165.

  • Billingsley, P. (1999). Convergence of probability measures (2nd ed.). Wiley Series in Probability and Statistics. New York: John Wiley & Sons.

  • Bingmer, M., Schiemann, J., Roeper, J., & Schneider, G. (2011). Measuring burstiness and regularity in oscillatory spike trains. J. Neurosci. Methods, 201, 426–37.

  • Brody, C.D. (1999). Correlations without synchrony. Neural Comput., 11(7), 1537–1551.

  • Brown, E.N., Kass, R.E., & Mitra, P.P. (2004). Multiple neural spike train data analysis: state-of-the-art and future challenges. Nat. Neurosci., 7(5), 456–461.

  • Camproux, A.C., Saunier, F., Chovet, G., Thalabard, J.C., & Thomas, G. (1996). A hidden Markov model approach to neuron firing patterns. Biophys. J., 71(5), 2404–12.

  • Chacron, M.J., Lindner, B., & Longtin, A. (2004). Noise shaping by interval correlations increases information transfer. Phys. Rev. Lett., 92(8), 080601.

  • Chacron, M.J., Longtin, A., & Maler, L. (2001). Negative interspike interval correlations increase the neuronal capacity for encoding time-dependent stimuli. J. Neurosci., 21(14), 5328–5343.

  • Csörgő, M., & Horváth, L. (1987). Asymptotic distributions of pontograms. Math. Proc. Cambridge Philos. Soc., 101(1), 131–139.

  • De Jong, R., & Davidson, J. (2000). Consistency of kernel estimators of heteroscedastic and autocorrelated covariance matrices. Econometrica, 68(2), 407–424.

  • Dehling, H., Rooch, A., & Taqqu, M.S. (2013). Non-parametric change-point tests for long-range dependent data. Scand. J. Stat., 40(1), 153–173.

  • Eden, U.T., Frank, L.M., Barbieri, R., Solo, V., & Brown, E.N. (2004). Dynamic analysis of neural encoding by point process adaptive filtering. Neural Comput., 16(5), 971–98.

  • Farkhooi, F., Strube-Bloss, M., & Nawrot, M. (2009). Serial correlation in neural spike trains: experimental evidence, stochastic modeling, and single neuron variability. Phys. Rev. E Stat. Nonlin. Soft Matter Phys., 79(2 Pt 1), 021905.

  • Frick, K., Munk, A., & Sieling, H. (2014). Multiscale change point inference. Journal of the Royal Statistical Society, 76(3), 495–580.

  • Fryzlewicz, P. (2014). Wild binary segmentation for multiple change-point detection. Ann. Statist., 42(6), 2243–2281.

  • Gonçalves, S., & Politis, D. (2011). Discussion: Bootstrap methods for dependent data: a review. J. Kor. Stat. Soc., 40, 383–6.

  • Grün, S., Diesmann, M., & Aertsen, A. (2002). 'Unitary events' in multiple single-neuron activity. II. Non-stationary data. Neural Comput., 14(1), 81–119.

  • Gut, A., & Steinebach, J. (2002). Truncated sequential change-point detection based on renewal counting processes. Scand. J. Statist., 29(4), 693–719.

  • Gut, A., & Steinebach, J. (2009). Truncated sequential change-point detection based on renewal counting processes. II. J. Statist. Plann. Inference, 139(6), 1921–1936.

  • Hartmann, C., Lazar, A., Nessler, B., & Triesch, J. (2015). Where's the noise? Key features of spontaneous activity and neural variability arise through learning in a deterministic network. PLoS Computational Biology, 11(12).

  • Kendall, D.G., & Kendall, W.S. (1980). Alignments in two-dimensional random sets of points. Adv. in Appl. Probab., 12(2), 380–424.

  • Kirch, C., & Muhsal, B. (2014). A MOSUM procedure for the estimation of multiple random change points. Preprint.

  • Klenke, A. (2008). Probability theory: a comprehensive course. Universitext. London: Springer-Verlag. Translated from the 2006 German original.

  • Koyama, S., Eden, U.T., Brown, E.N., & Kass, R.E. (2010). Bayesian decoding of neural spike trains. Annals of the Institute of Statistical Mathematics, 62(1), 37–59.

  • Kreiss, J.P., & Lahiri, S.N. (2012). Bootstrap methods for time series. In Time Series Analysis: Methods and Applications (Vol. 30, Ch. 1). Elsevier.

  • Lavielle, M. (1999). Detection of multiple changes in a sequence of dependent variables. Stochastic Process. Appl., 83(1), 79–102.

  • Lee, S.H., & Dan, Y. (2012). Neuromodulation of brain states. Neuron, 76(1), 209–222.

  • Lowen, S.B., & Teich, M.C. (1991). Auditory-nerve action potentials form a nonrenewal point process over short as well as long time scales. J. Acoust. Soc. Am., 92, 803–6.

  • Luczak, A., Bartho, P., & Harris, K.D. (2013). Gating of sensory input by spontaneous cortical activity. Journal of Neuroscience, 33(4), 1684–1695.

  • Messer, M., Kirchner, M., Schiemann, J., Roeper, J., Neininger, R., & Schneider, G. (2014). A multiple filter test for the detection of rate changes in renewal processes with varying variance. Ann. Appl. Stat., 8(4), 2027–2067.

  • Nawrot, M.P., Boucsein, C., Rodriguez-Molina, V., Aertsen, A., Grün, S., & Rotter, S. (2007). Serial interval statistics of spontaneous activity in cortical neurons in vivo and in vitro. Neurocomputing, 70(10), 1717–1722.

  • Paninski, L. (2004). Maximum likelihood estimation of cascade point-process neural encoding models. Network: Comp. Neur. Sys., 15, 243–62.

  • Pillow, J.W., Shlens, J., Paninski, L., Sher, A., Litke, A.M., Chichilnisky, E.J., & Simoncelli, E.P. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7202), 995–9.

  • Ratnam, R., & Nelson, M.E. (2000). Nonrenewal statistics of electrosensory afferent spike trains: implications for the detection of weak sensory signals. J. Neurosci., 20(17), 6672–6683.

  • Ray, B.K., & Tsay, R.S. (2002). Bayesian methods for change-point detection in long-range dependent processes. J. Time Ser. Anal., 23(6), 687–705.

  • Schiemann, J., Klose, V., Schlaudraff, F., Bingmer, M., Seino, S., Magill, P.J., Schneider, G., Liss, B., & Roeper, J. (2012). K-ATP channels control in vivo burst firing of dopamine neurons in the medial substantia nigra and novelty-induced behavior. Nat. Neurosci., 15(9), 1272–1280.

  • Schneider, G. (2008). Messages of oscillatory correlograms: a spike-train model. Neural Comput., 20(5), 1211–1238.

  • Schwalger, T., & Lindner, B. (2013). Patterns of interval correlations in neural oscillators with adaptation. Front. Comput. Neurosci., 7, 164.

  • Shiau, L., Schwalger, T., & Lindner, B. (2015). Interspike interval correlation in a stochastic exponential integrate-and-fire model with subthreshold and spike-triggered adaptation. J. Comput. Neurosci., 38, 589.

  • Singh, K. (1981). On asymptotic accuracy of Efron's bootstrap. Ann. Stat., 9, 1187–95.

  • Steinebach, J., & Eastwood, V.R. (1995). On extreme value asymptotics for increments of renewal processes. J. Statist. Plann. Inference, 45(1-2), 301–312.

  • Steinebach, J., & Zhang, H.Q. (1993). On a weighted embedding for pontograms. Stochastic Process. Appl., 47(2), 183–195.

  • Subramaniam, M., Althof, D., Gispert, S., Schwenk, J., Auburger, G., Kulik, A., Fakler, B., & Roeper, J. (2014). Mutant α-synuclein enhances firing frequencies in dopamine substantia nigra neurons by oxidative impairment of A-type potassium channels. The Journal of Neuroscience, 34(41), 13586–99.

  • Tang, S.M., & MacNeill, I.B. (1993). The effect of serial correlation on tests for parameter change at unknown time. The Annals of Statistics, 21(1), 552–75.

  • Truccolo, W., Eden, U.T., Fellows, M.R., Donoghue, J.P., & Brown, E.N. (2004). A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. J. Neurophysiol., 93, 1074–89.

  • Vervaat, W. (1972). Functional central limit theorems for processes with positive drift and their inverses. Z. Wahrsch. Verw. Geb., 23(4), 245–253.

  • Wied, D., Krämer, W., & Dehling, H. (2012). Testing for a change in correlation at an unknown point in time using an extended functional delta method. Econometric Theory, 28(3), 570–589.

  • Wu, W.B., & Pourahmadi, M. (2009). Banding sample autocovariance matrices of stationary processes. Statist. Sinica, 19(4), 1755–1768.

  • Xiao, H., & Wu, W.B. (2012). Covariance matrix estimation for stationary time series. Ann. Statist., 40(1), 466–493.

Acknowledgments

We would like to thank Götz Kersting for helpful comments on weak convergence principles.

Author information

Corresponding author

Correspondence to Gaby Schneider.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Action Editor: Liam Paninski

This work was supported by the German Federal Ministry of Education and Research (BMBF, Funding number: 01ZX1404B) and by the Priority Program 1665 of the German Research Foundation.

Appendix A: Proofs

Here we show consistency of the estimators \(\hat s^{2}\) of \(s^{2}\) in Eqs. (16) and (17). Recall that these were

$$\begin{array}{@{}rcl@{}} \text{global estimator:} \quad & 2h\hat\rho^{2}/\hat\mu^{3} \quad & \text{(see Lemma A.1)}\\ \text{local estimator:} \quad &\left( \frac{\hat\rho_{\text{ri}}^{2}}{\hat\mu_{\text{ri}}^{3}}+\frac{\hat\rho_{\text{le}}^{2}}{\hat\mu_{\text{le}}^{3}}\right)h \quad & \text{(see Lemma A.2)} \end{array} $$

The estimators \(\hat \rho , \hat \mu , \hat \rho _{\text {le}}, \hat \rho _{\text {ri}}, \hat \mu _{\text {le}}, \hat \mu _{\text {ri}}\) are the empirical means and the estimates of \(\rho_{\ell}\) given in Eq. (14), derived from the whole process in the global estimator and from the local right and left windows at time t in the local estimator.
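
For readers who want to compute these quantities, the following sketch implements the two estimators numerically. It assumes that \(\hat\rho^{2}\) in Eq. (14) is a long-run ISI variance truncated at a maximal lag (the exact definition of Eq. (14) is not reproduced in this appendix), and the function names are illustrative, not part of the paper.

```python
import numpy as np

def rho2_hat(isis: np.ndarray, max_lag: int) -> float:
    """Truncated long-run ISI variance (stand-in for the estimator of Eq. (14))."""
    r = isis.var(ddof=1)
    for l in range(1, max_lag + 1):
        r += 2.0 * np.cov(isis[:-l], isis[l:])[0, 1]
    return r

def global_s2_hat(isis: np.ndarray, h: float, max_lag: int) -> float:
    """Global estimator 2*h*rho^2/mu^3, computed from all ISIs of the recording."""
    return 2.0 * h * rho2_hat(isis, max_lag) / isis.mean() ** 3

def local_s2_hat(isis_left: np.ndarray, isis_right: np.ndarray,
                 h: float, max_lag: int) -> float:
    """Local estimator (rho_ri^2/mu_ri^3 + rho_le^2/mu_le^3) * h, computed from
    the ISIs falling into the right and left window of length h around time t."""
    return h * (rho2_hat(isis_right, max_lag) / isis_right.mean() ** 3
                + rho2_hat(isis_left, max_lag) / isis_left.mean() ** 3)
```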

Lemma A.1

Let \(\{\xi_{i}\}_{i\ge 1}\) be an m-dependent process in \(\mathcal P\) and \((\hat s_{nh,nt}^{2})_{t}\) the global estimator as in Eq. (16). Then it holds in \((D[h, T-h], d_{\|\cdot\|})\) almost surely as n → ∞ that

$$(\hat s_{nh,nt}^{2} /n)_{t}\longrightarrow (2h\rho^{2}/\mu^{3})_{t}, $$

where \(d_{\|\cdot\|}\) denotes the supremum norm.

Proof

Note that the global estimator \(\hat s\) does not depend on h and t, i.e., the formulation of \(\hat s\) as a process is artificial. We show that \(\hat \mu \to \mu \) a.s. and \(\hat \rho _{\ell }\to \rho _{\ell }\) a.s. as n → ∞ for ℓ = 0, 1, 2, …, where \(\rho_{0} = \sigma^{2}\). Since \(\{\xi_{i}\}_{i\ge 1}\) is m-dependent and square-integrable, the sequence \(\{\xi_{i}\xi_{i+\ell}\}_{i\ge 1}\) is integrable and (m + ℓ)-dependent, thus ergodic. Then the ergodic theorem, see e.g. Klenke (2008), states almost surely as n → ∞

$$ \frac{1}{n}\sum\limits_{i=1}^{n} \xi_{i} \!\longrightarrow\! \mathbb{E}[\xi_{1}]\,=\,\mu \quad\text{and}\quad \frac{1}{n}\sum\limits_{i=1}^{n} \xi_{i}\xi_{i+\ell}\!\longrightarrow\! \mathbb{E}[\xi_{1}\xi_{1+\ell}]. $$
(19)

Since the life times are a.s. positive and integrable, it follows that \(N_{nT}\to\infty\) a.s. as n → ∞ (cf. the proof of Lemma A.1 in Messer et al. (2014)). Thus, in Eq. (19), the value n can be exchanged with the random number of observations \(N_{nT}\) (respectively \(N_{nT}-(\ell-1)\)). Hence, as n → ∞, we find \(\hat \mu \to \mu \) a.s. and \(\hat \rho _{\ell }\to \rho _{\ell }\) a.s., so that the finite sum \(\hat \rho ^{2}\to \rho ^{2}\) a.s. By construction of \(\hat s^{2}\) the statement holds. □
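
A quick numerical plausibility check of this consistency argument (purely illustrative, not part of the proof) can be run with perhaps the simplest m-dependent process in \(\mathcal P\): moving sums of i.i.d. exponential variables, which are positive, 1-dependent, and have known mean and lag-1 autocovariance.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-dependent, positive ISIs: xi_i = U_i + U_{i+1} with U_i i.i.d. Exp(mean 0.05)
u = rng.exponential(scale=0.05, size=200_001)
isis = u[:-1] + u[1:]

mu_hat = isis.mean()                           # -> E[xi_1] = 0.10
rho1_hat = np.cov(isis[:-1], isis[1:])[0, 1]   # -> Cov(xi_1, xi_2) = Var(U_1) = 0.0025

print(f"mu_hat   = {mu_hat:.4f}")
print(f"rho1_hat = {rho1_hat:.5f}")
```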

Lemma A.2

Let \(\{\xi_{i}\}_{i\ge 1}\) be an m-dependent process in \(\mathcal P\) and for all T > 0 and h ∈ (0, T/2] let \(((\hat s_{nh,nt})^{2})_{t}\) be the local estimator as in Eq. (17). Then it holds in \((D[h, T-h], d_{\|\cdot\|})\) almost surely as n → ∞ that \(((\hat s_{nh,nt})^{2} /n)_{t}\to (2h\rho ^{2}/\mu ^{3})_{t}\).

Proof

We show the uniform a.s. convergence of \((\hat {\mu }_{\text {le}})_{t}\) and \((\hat {\mu }_{\text {ri}})_{t}\) to the constant μ in Lemma A.4, and the uniform a.s. convergence of the summands \((\hat \rho _{\text {le},\ell })_{t}\) and \((\hat \rho _{\text {ri},\ell })_{t}\) of \(\hat \rho _{\text {le}}^{2}\) and \(\hat \rho _{\text {ri}}^{2}\) to the constants \(\rho_{\ell}\) in Lemma A.5. This implies the statement, since uniform almost sure convergence interchanges with finite sums in general and with products if the limits are constant. □

We start with a uniform a.s. result for the scaled counting process \((N_{t})_{t\ge 0}\). Throughout, we use the following approach: First, we state an almost sure convergence result for the finite dimensional marginals of the processes. This essentially results from the ergodic theorem. Then, by a discretization argument, we show uniform a.s. convergence.

Lemma A.3

Let \(\{\xi_{i}\}_{i\ge 1}\) be a process in \(\mathcal{P}\) with \(\mathbb{E}[\xi_{1}]=\mu\). Then we have in \((D[h, T-h], d_{\|\cdot\|})\) almost surely as n → ∞ that

$$\begin{array}{@{}rcl@{}} \left( \frac{N_{nt}-N_{n(t-h)}}{nh/\mu}\right)_{t} &\longrightarrow (1)_{t}, \end{array} $$
(20)
$$\begin{array}{@{}rcl@{}} \left( \frac{N_{n(t+h)}-N_{nt}}{nh/\mu}\right)_{t} &\longrightarrow (1)_{t}. \end{array} $$
(21)

Proof

We show Eq. (21); Eq. (20) follows analogously.

For \(S_{n} := {\sum }_{i=1}^{n}\xi _{i}\), n ≥ 1, the ergodic theorem implies \(S_{n}/n\to\mu\) a.s. as n → ∞. As we have \(N_{t}\to\infty\) a.s. as t → ∞, also \(S_{N_{t}}/N_{t}\to \mu \) a.s. as t → ∞. Now, for all t ≥ 0 we find \(S_{N_{t}} \le t \le S_{N_{t} + 1}\), so that (for all t sufficiently large such that \(N_{t}\ge 1\))

$$\frac{S_{N_{t}}}{N_{t}} \le \frac{t}{N_{t}} \le \frac{S_{N_{t}+1}}{N_{t}+1}\frac{N_{t}+1}{N_{t}}. $$

Since the left hand side and the right hand side tend to μ almost surely, we obtain \(N_{t}/t \to 1/\mu\) a.s. as t → ∞. For 0 ≤ s < t, this implies, as n → ∞, almost surely

$$\begin{array}{@{}rcl@{}} \frac{N_{nt} - N_{ns}}{n(t-s)}& =& \frac{t}{t-s}\frac{N_{nt}}{nt} - \frac{s}{t-s}\frac{N_{ns}}{ns}\\ & \longrightarrow& \frac{t}{t-s}\frac{1}{\mu} -\frac{s}{t-s}\frac{1}{\mu} = \frac{1}{\mu}. \end{array} $$
(22)

This implies the convergence of the finite dimensional marginals of Eq. (21). The uniform convergence follows by a discretization argument analogous to the proof of Lemma A.14 in Messer et al. (2014). □
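
As a sanity check of Lemma A.3 (again only an illustration; it uses i.i.d. exponential ISIs as the simplest member of \(\mathcal P\), and the parameter values are arbitrary), the rescaled window counts can be verified to approach 1:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, h, t = 0.1, 0.5, 2.0

for n in (10, 100, 1000):
    isis = rng.exponential(scale=mu, size=int(3 * n * t / mu))
    spikes = np.cumsum(isis)                          # spike times of the process
    count = (np.searchsorted(spikes, n * t)
             - np.searchsorted(spikes, n * (t - h)))  # N_{nt} - N_{n(t-h)}
    print(n, count / (n * h / mu))                    # tends to 1 as n grows
```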

Next, we show the uniform a.s. convergence of the estimators \((\hat \mu _{\text {ri}})_{t}\), \((\hat \mu _{\text {le}})_{t}\), \((\hat \sigma _{\text {ri}}^{2})_{t}\) and \((\hat \sigma _{\text {le}}^{2})_{t}\).

Lemma A.4

Let \(\{\xi _{i}\}_{i\ge 1} \in \mathcal P\) with \(\mu :=\mathbb {E}[\xi _{1}]\). Then it holds in \((D[h, T-h], d_{\|\cdot\|})\) almost surely as n → ∞ that

$$\left( \hat\mu_{\text{le}}\right)_{t} \longrightarrow (\mu)_{t} \qquad\text{and}\qquad (\hat\mu_{\text{ri}})_{t} \longrightarrow (\mu)_{t}. $$

Proof

Again we prove the statement only for the right window. We find \((1/n){\sum }_{i=1}^{n}\xi _{i}\to \mu \) a.s. as n → ∞, such that \((1/N_{t}){\sum }_{i=1}^{N_{t}}\xi _{i}\to \mu \) a.s. as t → ∞. Then we conclude for all 0 < s < t (the case s = 0 being similar), as n → ∞, almost surely

$$\begin{array}{@{}rcl@{}} &&\frac{1}{N_{nt}-N_{ns}} \sum\limits_{i=N_{ns}+1}^{N_{nt}} \xi_{i} \\ &&\quad\quad\quad\quad= \frac{N_{nt}}{N_{nt}-N_{ns}} \left( \frac{1}{N_{nt}}\sum\limits_{i=1}^{N_{nt}} \xi_{i} - \frac{N_{ns}}{N_{nt}}\frac{1}{N_{ns}} \sum\limits_{i=1}^{N_{ns}} \xi_{i} \right)\\ &&\quad\quad\quad\quad\longrightarrow \frac{t}{t-s}\left( \mu - \frac{s}{t} \mu\right) = \mu, \end{array} $$
(23)

making use of Lemma A.3. Thus, for every fixed t we obtain almost surely as n → ∞

$$ \hat{\mu}_{\text{ri}} =\frac{1}{N_{n(t+h)}-N_{nt}-1}\sum\limits_{i=N_{nt}+2}^{N_{n(t+h)}}\xi_{i}\longrightarrow \mu. $$
(24)

The a.s. convergence holds for finitely many t simultaneously. As above, the uniform convergence follows by a discretization argument analogously to the proof of Lemma A.15 in Messer et al. (2014). □

Now we show the uniform a.s. convergence of covariance estimators.

Lemma A.5

Let \(\{\xi _{i}\}_{i\ge 1}\in \mathcal P\), and let \(\hat \rho _{\text {le},\ell }\) and \(\hat \rho _{\text {ri},\ell }\) be the local estimators of \(\rho_{\ell}\) in the left and right window, see Eqs. (14) and (17), for ℓ = 0, 1, 2, …, where \(\rho_{0} = \sigma^{2}\). Then in \((D[h, T-h], d_{\|\cdot\|})\) a.s. as n → ∞ we have

$$\left( \hat\rho_{\text{le},\ell}\right)_{t} \longrightarrow (\rho_{\ell})_{t} \qquad\text{and}\qquad (\hat\rho_{\text{ri},\ell})_{t} \longrightarrow (\rho_{\ell})_{t}. $$

Proof

Again we conclude \((1/n){\sum }_{i=1}^{n} \xi _{i}\xi _{i+\ell }\to \mathbb {E}[\xi _{1}\xi _{1+\ell }]\) a.s. as n → ∞. Using \(N_{nT}\to\infty\) a.s., we find \((1/N_{nt}){\sum }_{i=1}^{N_{nt}} \xi _{i}\xi _{i+\ell }\to \mathbb {E}[\xi _{1}\xi _{1+\ell }]\) a.s. as n → ∞. With a similar argument as in Eq. (23), we find for all 0 ≤ s < t almost surely as n → ∞

$$ \frac{1}{N_{nt}-N_{ns}-(\ell+1)}\sum\limits_{i=N_{ns}+2}^{N_{nt}-\ell} \xi_{i}\xi_{i+\ell}\to\mathbb{E}[\xi_{1}\xi_{1+\ell}]. $$
(25)

Together with the previous Lemma A.4 this implies the almost sure convergence \(\hat \rho _{\text {ri},\ell }\to \mathbb {E}[\xi _{1}\xi _{1+\ell }]-\mathbb {E}[\xi _{1}]^{2} = \rho _{\ell }\) for every fixed t and thus for the finite dimensional marginals.

In order to obtain the convergence in \((D[h, T-h], d_{\|\cdot\|})\), we show a.s. as n → ∞ that

$$ \left( \frac{\mu}{nh}\sum\limits_{i=N_{nt}+2}^{N_{n(t+h)}} \xi_{i}\xi_{i+\ell}\right)_{t} \longrightarrow (\mathbb{E}[\xi_{1}\xi_{1+\ell}])_{t}. $$
(26)

The convergence of the finite dimensional marginals follows from Eq. (25) together with Lemma A.3 and Slutsky's theorem. We show the uniform convergence (26) even for t ∈ [0, T − h]. It suffices to show almost surely that

$$\begin{array}{@{}rcl@{}} \lim\limits_{n\to\infty} \sup\limits_{t\in[0,T-h]} \frac{\mu}{nh}\sum\limits_{i=N_{nt}+2}^{N_{n(t+h)}} \xi_{i}\xi_{i+\ell} &\le \mathbb{E}[\xi_{1}\xi_{1+\ell}],\\ \lim\limits_{n\to\infty} \inf\limits_{t\in[0,T-h]} \frac{\mu}{nh}\sum\limits_{i=N_{nt}+2}^{N_{n(t+h)}} \xi_{i}\xi_{i+\ell} &\ge \mathbb{E}[\xi_{1}\xi_{1+\ell}]. \end{array} $$
(27)

Again, we make use of a discretization argument as in Messer et al. (2014). We make it explicit here, since the mixing terms \(\xi_{i}\xi_{i+\ell}\) were not explicitly considered in the latter article. For an ε > 0 with \(T/\varepsilon \in \mathbb N\), we decompose the time interval [0, nT] into equidistant sections of length nε. Using the notation \(|\lceil x\rceil |:=\lceil x \rceil +1\), \(x\in \mathbb R\), we bound

$$\begin{array}{@{}rcl@{}} && \sup\limits_{t\in[0,T-h]} \frac{\mu}{nh}\sum\limits_{i=N_{nt}+2}^{N_{n(t+h)}} \xi_{i}\xi_{i+\ell} \\ && \le \max\limits_{j=0,1,\ldots,T/\varepsilon - |\lceil h/\varepsilon \rceil |} \frac{\mu}{nh}\sum\limits_{i=N_{jn\varepsilon}}^{N_{jn\varepsilon+n|\lceil h/\varepsilon\rceil|\varepsilon} } \xi_{i}\xi_{i+\ell}\\ && \le \max\limits_{j=0,1,\ldots,T/\varepsilon - |\lceil h/\varepsilon \rceil |} \frac{\mu}{nh}\sum\limits_{i=N_{jn\varepsilon +nh}}^{N_{jn\varepsilon+n| \lceil h/\varepsilon\rceil | \varepsilon} } \xi_{i}\xi_{i+\ell}\\ &&\qquad + \max\limits_{j=0,1,\ldots,T/\varepsilon - |\lceil h/\varepsilon \rceil |} \frac{\mu}{nh}\sum\limits_{i=N_{jn\varepsilon}}^{N_{jn\varepsilon+nh} } \xi_{i}\xi_{i+\ell}. \end{array} $$

For any δ > 0 we can choose ε > 0 so that \(\max_{j=0,\ldots,T/\varepsilon-|\lceil h/\varepsilon\rceil|}\,\big(N_{jn\varepsilon+n|\lceil h/\varepsilon\rceil|\varepsilon}-N_{jn\varepsilon+nh}\big)/(\delta n/\mu)\to 1\) a.s. as n → ∞. Then, as n → ∞, the first summand in the latter display converges to \((\delta /h)\mathbb {E}[\xi _{1}\xi _{1+\ell }]\) a.s. and the second summand to \(\mathbb {E}[\xi _{1}\xi _{1+\ell }]\) a.s., since convergence (26) holds for finitely many t. Since δ can be chosen arbitrarily small, we find the first inequality of Eq. (27). The second one follows analogously. Thus, the convergence in Eq. (26) follows. We then exchange the normalization according to Lemma A.3. Omitting \(\ell+1\) summands does not change the limit, such that the uniform a.s. convergence of \((\hat \rho _{\text {ri},\ell })_{t}\) is shown. Analogously, the uniform a.s. convergence of \((\hat \rho _{\text {le},\ell })_{t}\) follows. □
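
The same kind of numerical check can be run for the windowed covariance estimators of Lemma A.5 (illustrative only; the 1-dependent moving-sum ISIs and the window placement are ad hoc choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
t, h, scale = 2.0, 0.5, 0.05              # right window (nt, n(t+h)], Exp(mean 0.05)

for n in (100, 1000, 10_000):
    u = rng.exponential(scale=scale, size=int(4 * n * t / scale))
    isis = u[:-1] + u[1:]                 # 1-dependent ISIs as in the check above
    spikes = np.cumsum(isis)
    lo = np.searchsorted(spikes, n * t) + 1        # roughly the first index in the window
    hi = np.searchsorted(spikes, n * (t + h))
    w = isis[lo:hi]                       # ISIs contained in the right window
    rho1_hat = np.cov(w[:-1], w[1:])[0, 1]
    print(n, round(rho1_hat, 5))          # approaches Var(U_1) = 0.0025
```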

Cite this article

Messer, M., Costa, K.M., Roeper, J. et al. Multi-scale detection of rate changes in spike trains with weak dependencies. J Comput Neurosci 42, 187–201 (2017). https://doi.org/10.1007/s10827-016-0635-3
