
Pseudo-Lyapunov exponents and predictability of Hodgkin-Huxley neuronal network dynamics


Abstract

We present a numerical analysis of the dynamics of all-to-all coupled Hodgkin-Huxley (HH) neuronal networks with Poisson spike inputs. Because the dynamical vector of the system contains discontinuous variables, we propose a pseudo-Lyapunov exponent, adapted from the classical definition by using only the continuous dynamical variables, and apply it in our numerical investigation. The numerical results for the largest Lyapunov exponent obtained with this new definition are consistent with the dynamical regimes of the network. Three typical dynamical regimes are found as the synaptic coupling strength increases from weak to strong: asynchronous, chaotic, and synchronous. We use the pseudo-Lyapunov exponent and the power spectrum analysis of voltage traces to characterize these types of network behavior. In the nonchaotic (asynchronous or synchronous) dynamical regimes, i.e., in the weak or strong coupling limits, the pseudo-Lyapunov exponent is negative and our numerical methods achieve good trajectory-wise convergence of the solution. Consequently, in these regimes the evolution of the neuronal network is reliable. In the chaotic dynamical regime at intermediate coupling strength, the pseudo-Lyapunov exponent is positive, there is no trajectory-wise convergence of the solution, and only statistical quantifications of the numerical results are reliable. Finally, we present numerical evidence that the value of the pseudo-Lyapunov exponent coincides with that of the standard Lyapunov exponent for the systems we have been able to examine.



References

  • Aihara, K., & Matsumoto, G. (1986). Chaotic oscillations and bifurcations in squid giant axons. In Chaos, nonlinear science: Theory and applications. Manchester: Manchester University Press.


  • Cai, D., Rangan, A. V., & McLaughlin, D. W. (2005). Architectural and synaptic mechanisms underlying coherent spontaneous activity in v1. Proceedings of the National Academy of Sciences (USA), 102, 5868–5873.


  • Campbell, D., & Rose, H. (Eds.) (1983). Order in chaos. Physica D, 7, 1–362.


  • Compte, A., Sanchez-Vives, M. V., McCormick, D. A., & Wang, X. J. (2003). Cellular and network mechanisms of slow oscillatory activity (< 1 Hz) and wave propagations in a cortical network model. Journal of Neurophysiology, 89, 2707–2725.


  • Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge: MIT.


  • Guckenheimer, J., & Oliva, R. A. (2002). Chaos in the Hodgkin-Huxley model. SIAM Journal on Applied Dynamical Systems, 1, 105–114.


  • Hansel, D., Mato, G., Meunier, C., & Neltner, L. (1998). On numerical simulations of integrate-and-fire neural networks. Neural Computation, 10, 467–483.


  • Hansel, D., & Sompolinsky, H. (1992). Synchronization and computation in a chaotic neural network. Physical Review Letters, 68, 718–721.


  • Hansel, D., & Sompolinsky, H. (1996). Chaos and synchrony in a model of a hypercolumn in visual cortex. Journal of Computational Neuroscience, 3, 7–34.


  • Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117, 500–544.


  • Koch, C. (1999). Biophysics of computation. Oxford: Oxford University Press.


  • Kosmidis, E. K., & Pakdaman, K. (2003). An analysis of the reliability phenomenon in the FitzHugh-Nagumo model. Journal of Computational Neuroscience, 14, 5–22.


  • Lin, K. (2006). Entrainment and chaos in a pulse-driven Hodgkin-Huxley oscillator. SIAM Journal on Applied Dynamical Systems, 5, 179–204.


  • Mainen, Z., & Sejnowski, T. (1995). Reliability of spike timing in neocortical neurons. Science, 268, 1503–1506.


  • Mattia, M., & Del Giudice, P. (2000). Efficient event-driven simulation of large networks of spiking neurons and dynamical synapses. Neural Computation, 12, 2305–2329.


  • McLaughlin, D., Shapley, R., Shelley, M., & Wielaard, J. (2000). A neuronal network model of macaque primary visual cortex (V1): Orientation selectivity and dynamics in the input layer 4Cα. Proceedings of the National Academy of Sciences (USA), 97, 8087–8092.


  • Mueller, P. (1995). Calculation of Lyapunov exponents for dynamic systems with discontinuities. Chaos, Solitons & Fractals, 5, 1671–1681.


  • Ott, E. (1993). Chaos in dynamical systems. New York: Cambridge University Press.


  • Parker, T. S., & Chua, L. O. (1989). Practical numerical algorithms for chaotic systems. New York: Springer.


  • Rangan, A. V., & Cai, D. (2007). Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 22, 81–100.


  • Rangan, A. V., Cai, D., & McLaughlin, D. W. (2005). Modeling the spatiotemporal cortical activity associated with the line-motion illusion in primary visual cortex. Proceedings of the National Academy of Sciences (USA), 102, 18793–18800.


  • Reutimann, J., Giugliano, M., & Fusi, S. (2003). Event-based simulation of spiking neurons with stochastic dynamics. Neural Computation, 15, 811–830.


  • Rudolph, M., & Destexhe, A. (2007). How much can we trust neural simulation strategies? Neurocomputing, 70, 1966–1969.


  • Schuster, H. G., & Just, W. (2005). Deterministic chaos. Weinheim: Wiley-VCH Verlag.


  • Shelley, M. J., & Tao, L. (2001). Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 11, 111–119.


  • Somers, D., Nelson, S., & Sur, M. (1995). An emergent model of orientation selectivity in cat visual cortical simple cells. Journal of Neuroscience, 15, 5448–5465.


  • Sun, Y., Zhou, D., Rangan, A. V., & Cai, D. (2009). Library-based numerical reduction of the Hodgkin-Huxley neuron for network simulation. Journal of Computational Neuroscience, 27, 369–390.


  • Takens, F. (1981). Detecting strange attractors in turbulence. In Dynamical systems and turbulence, lecture notes in mathematics (Vol. 898). Berlin: Springer.


  • Troyer, T., Krukowski, A., Priebe N., & Miller, K. (1998). Contrast invariant orientation tuning in cat visual cortex with feedforward tuning and correlation based intracortical connectivity. Journal of Neuroscience, 18, 5908–5927.



Acknowledgements

The work was supported by NSF grants DMS-0506396 and DMS-0507901, and by a grant from the Swartz Foundation.

Author information

Corresponding author

Correspondence to Yi Sun.

Additional information

Action Editor: David Terman

Appendices

Parameter values for the Hodgkin-Huxley equations

Parameter values or ranges and function definitions of the Hodgkin-Huxley model are as follows (Dayan and Abbott 2001):

$$ \begin{array}{lll} G_{\rm Na}&=&120~{\rm mS/cm^2}, \ \ V_{\rm Na}=50~{\rm mV}, \\ G_{\rm K}&=&36~{\rm mS/cm^2}, \ \ V_{\rm K}=-77~{\rm mV}, \\ G_{\rm L}&=&0.3~{\rm mS/cm^2}, \ \ V_{\rm L}=-54.387~{\rm mV}, \\ C &=&1~\mu{\rm F/cm^2}, \ \ V_{\rm G}^{\rm E} =0~{\rm mV}, \ \ V_{\rm G}^{\rm I} =-80~{\rm mV}, \\ F^{\rm E}&=&0.05\sim0.1~{\rm mS/cm^2}, \ \ S^{\rm E}=0.05\sim1.0~{\rm mS/cm^2}, \\ F^{\rm I}&=&0.01\sim0.05~{\rm mS/cm^2}, \ \ S^{\rm I}=0.05\sim1.0~{\rm mS/cm^2}, \\ \sigma_{\rm r}^{\rm E}&=&0.5~{\rm ms}, \ \ \sigma_{\rm d}^{\rm E}=3.0~{\rm ms}, \\ \sigma_{\rm r}^{\rm I}&=&0.5~{\rm ms}, \ \ \sigma_{\rm d}^{\rm I}=7.0~{\rm ms}, \\ \alpha_m(V) &=& 0.1(V+40)/(1-\exp{(-(V+40)/10)}), \\ \beta_m(V) &=& 4 \exp{(-(V+65)/18)}, \\ \alpha_h(V) &=& 0.07 \exp{(-(V+65)/20)}, \\ \beta_h(V) &=& 1/(1+\exp{(-(V+35)/10)}), \\ \alpha_n(V) &=& 0.01(V+55)/(1-\exp{(-(V+55)/10)}), \\ \beta_n(V) &=& 0.125 \exp{(-(V+65)/80)}. \end{array} $$
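For reference, a minimal Python transcription of the fixed parameters and gating-rate functions above might look as follows. This is a sketch under our own naming conventions, not code from the paper; voltages are in mV and rates in 1/ms.

```python
import numpy as np

# Fixed parameters from the table above (mS/cm^2, mV, muF/cm^2).
G_NA, V_NA = 120.0, 50.0
G_K,  V_K  = 36.0, -77.0
G_L,  V_L  = 0.3,  -54.387
C          = 1.0
V_G_E, V_G_I = 0.0, -80.0

# Gating-rate functions (V in mV, rates in 1/ms).
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
```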

Numerical method for a single neuron

Here we provide details of the numerical method for evolving the dynamics of a single neuron. For simplicity, we use the vector \(X_i\) to represent all the variables in the solution of the ith neuron:

$$ X_i(t)=\big(V_i(t), m_i(t), h_i(t), n_i(t), G_i^{\rm Q}(t), \widetilde{G}_i^{\rm Q}(t)\big). $$
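For illustration only (the index layout below is our own assumption, not the authors' data structure), this state vector can be stored as a flat array with one slot per variable, with the conductance pair \(\big(G_i^{\rm Q}(t), \widetilde{G}_i^{\rm Q}(t)\big)\) repeated for Q = E and Q = I:

```python
import numpy as np

# Illustrative index convention for the state vector X_i (ours, not the paper's):
# V, m, h, n, then (G^E, G~^E) and (G^I, G~^I).
IDX_V, IDX_M, IDX_H, IDX_N = 0, 1, 2, 3
IDX_GE, IDX_GE_AUX, IDX_GI, IDX_GI_AUX = 4, 5, 6, 7

def initial_state(V0=-65.0):
    """Resting initial condition: gating variables set to their approximate
    steady-state values at V0 = -65 mV, conductances set to zero."""
    return np.array([V0, 0.053, 0.596, 0.317, 0.0, 0.0, 0.0, 0.0])
```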

Given an initial time \(t_0\) and time step Δt, initial values \(X_i(t_0)\), and spike times \(T^{\rm F}_{i,k}\) and \(T^{\rm S}_{j\neq i,k}\) from the rest of the network, our method computes a numerical solution for all variables \(X_i(t_0+\Delta t)\), as well as the intervening spike times \(T^{\rm S}_{i,k}\) (if any occurred) for the ith neuron, as follows:

Algorithm 1. (Single neuron scheme)

Step 1: Input: an initial time t0, a time step Δt, a set of spike times \(T^{\rm F}_{i,k}\) and \(T^{\rm S}_{j\neq i,k}\) and associated strengths \(F_i^{\rm Q}\) and \(S_{i,j}^{\rm Q}\).

Step 2: Consider the time interval [t0, t0 + Δt]. Let M denote the total number of feedforward and presynaptic spikes within this interval. Sort these spikes into an increasing list of M spike times \(T^{\rm sorted}_m\) with corresponding spike strengths \(S^{\rm sorted, Q}_m\). In addition, we extend this notation such that \(T^{\rm sorted}_0 := t_0\), \(T^{\rm sorted}_{M+1}:= t_0 + \Delta t\), and \(S^{\rm sorted, Q}_0 =S^{\rm sorted, Q}_{M+1}:= 0\).

Step 3: For m = 1, ..., M + 1, advance the equations for the HH neuron model and its conductances (Eqs. (1)–(7)) from \(T^{\rm sorted}_{m-1}\) to \(T^{\rm sorted}_m\) using the standard RK4 scheme to obtain \(X_i(T^{\rm sorted}_m)\); then update the conductance \(\widetilde{G}_i^{\rm Q}(T^{\rm sorted}_m)\) by adding the appropriate strength \(S^{\rm sorted, Q}_m\).

Step 4: If the calculated values for \(V_i(T^{\rm sorted}_m)\) are each less than Vth, then we can accept \(X_i(T^{\rm sorted}_{M+1})\) as the solution \(X_i(t_0+\Delta t)\). We update t0 ← t0 + Δt, return to step 2, and continue.

Step 5: Otherwise, let \(V_i(T^{\rm sorted}_m)\) be the first calculated voltage greater than Vth. We know that the ith neuron fired somewhere during the interval \([T^{\rm sorted}_{m-1}, T^{\rm sorted}_m]\).

Step 6: In this case we use a high-order polynomial interpolation to find an approximation of the spike time tfire in the interval \([T^{\rm sorted}_{m-1}, T^{\rm sorted}_m]\). For example, we can use the numerical values of \(V_i(T^{\rm sorted}_{m-1})\), \(V_i(T^{\rm sorted}_m)\), \(\frac{d}{dt}V_i(T^{\rm sorted}_{m-1})\), and \(\frac{d}{dt}V_i(T^{\rm sorted}_m)\) to form a cubic polynomial. We record tfire as the (k + 1)th postsynaptic spike time \(T^{\rm S}_{i,k+1}\) of the ith neuron. We update t0 ← t0 + Δt, return to step 2, and continue.
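A minimal Python sketch of this single-neuron scheme is given below. It assumes a function hh_rhs(X, t) (not shown) that evaluates the right-hand side of Eqs. (1)–(7) for the state layout sketched above, and it locates the threshold crossing with a cubic Hermite interpolant built from the endpoint voltages and their derivatives, consistent with Step 6. All names and the threshold value are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

V_TH = -50.0              # illustrative firing threshold (mV)
IDX_V, IDX_GE_AUX = 0, 5  # index convention from the state-vector sketch above

def rk4_step(rhs, X, t, dt):
    """One classical RK4 step for dX/dt = rhs(X, t)."""
    k1 = rhs(X, t)
    k2 = rhs(X + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(X + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(X + dt * k3, t + dt)
    return X + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def hermite_crossing(t0, t1, v0, v1, dv0, dv1, v_th, iters=40):
    """Locate V(t) = v_th in [t0, t1] by bisection on the cubic Hermite
    interpolant through the endpoint voltages and their time derivatives."""
    dt = t1 - t0
    def V(s):  # s in [0, 1]
        h00 = 2 * s**3 - 3 * s**2 + 1
        h10 = s**3 - 2 * s**2 + s
        h01 = -2 * s**3 + 3 * s**2
        h11 = s**3 - s**2
        return h00 * v0 + h10 * dt * dv0 + h01 * v1 + h11 * dt * dv1
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if V(mid) < v_th else (lo, mid)
    return t0 + dt * 0.5 * (lo + hi)

def single_neuron_step(rhs, X, t0, dt, spikes):
    """Advance one neuron over [t0, t0 + dt] (Algorithm 1, schematically).

    `spikes` is a list of (time, strength, conductance_index) events for all
    feedforward and presynaptic spikes in the interval (Step 2). Returns the
    state at t0 + dt and the detected spike time, if any (Steps 4-6)."""
    events = sorted(spikes) + [(t0 + dt, 0.0, IDX_GE_AUX)]  # pad with interval end
    t_prev, fired_at = t0, None
    for t_next, strength, idx in events:
        X_prev, dX_prev = X.copy(), rhs(X, t_prev)
        X = rk4_step(rhs, X, t_prev, t_next - t_prev)        # Step 3: RK4 substep
        if fired_at is None and X_prev[IDX_V] < V_TH <= X[IDX_V]:
            # Steps 5-6: first upward threshold crossing, located by interpolation.
            fired_at = hermite_crossing(t_prev, t_next, X_prev[IDX_V], X[IDX_V],
                                        dX_prev[IDX_V], rhs(X, t_next)[IDX_V], V_TH)
        X[idx] += strength                                   # Step 3: spike kick to G~
        t_prev = t_next
    return X, fired_at
```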

Numerical method for the HH network

Here we briefly outline an algorithm that accounts for spike-spike interactions and can accurately evolve recurrent HH networks governed by Eqs. (1)–(7).

Algorithm 2. (Network model scheme)

Step 1: Given initial values \(X_i(t_0)\) and feedforward input spike times \(T^{\rm F}_{i,k}\) for all i, we apply Algorithm 1 with time step Δt to obtain an estimate of the neuronal trajectories on the interval [t0, t0 + Δt].

Step 2: We use this rough estimate to classify the neurons into two groups: those that are estimated to fire within [t0, t0 + Δt] and those that are not.

Step 3: We sort the approximate spike times of the neurons in the firing group into an increasing list, together with their corresponding coupling strengths.

Step 4: We use the feedforward spikes \(T^{\rm F}_{i,k}\), the approximate spike times from Step 3, Algorithm 1, and the same time step Δt to correct the neuronal trajectories of this subnetwork over the interval [t0, t0 + Δt], and thereby obtain a more accurate approximation to the spike times. This spike-spike correction procedure is equivalent to stepping through the sorted list and computing the effect of each spike on all future spikes. We iterate this correction process until the neuronal trajectories and spike times of the neurons in the subnetwork converge.

Step 5: Finally, we use the corrected estimates of the spike times, together with the feedforward spikes \(T^{\rm F}_{i,k}\), Algorithm 1, and time step Δt, to evolve the remainder of the neurons from t0 to t0 + Δt.
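The following sketch shows one way the spike-spike correction loop of this network scheme could be organized around the single_neuron_step sketch above. The data layout (coupling matrix S[j][i], per-neuron feedforward event lists, target conductance index aux_idx[j]) and the convergence tolerance are our own assumptions, not the authors' implementation.

```python
def network_step(states, rhs, t0, dt, ff_spikes, S, aux_idx, tol=1e-9, max_iter=50):
    """Advance the whole network over [t0, t0 + dt] (Algorithm 2, schematically).

    `states[i]` is the state array of neuron i, `ff_spikes[i]` its list of
    feedforward events (time, strength, conductance index), `S[j][i]` the
    coupling strength from neuron j onto neuron i, and `aux_idx[j]` the
    conductance index that a spike of neuron j drives in its targets."""
    n = len(states)

    def advance(i, syn_spikes):
        return single_neuron_step(rhs, states[i].copy(), t0, dt,
                                  ff_spikes[i] + syn_spikes)

    # Steps 1-2: preliminary pass with feedforward input only; collect the
    # neurons estimated to fire within the interval.
    t_est = {i: advance(i, [])[1] for i in range(n)}
    firing = [i for i in range(n) if t_est[i] is not None]

    # Steps 3-4: iterate the spike-spike correction on the firing subnetwork
    # until the estimated spike times stop changing.
    for _ in range(max_iter):
        new_est = {i: advance(i, [(t_est[j], S[j][i], aux_idx[j])
                                  for j in firing
                                  if j != i and t_est[j] is not None])[1]
                   for i in firing}
        converged = all(
            (new_est[i] is None) == (t_est[i] is None) and
            (new_est[i] is None or abs(new_est[i] - t_est[i]) < tol)
            for i in firing)
        t_est.update(new_est)
        if converged:
            break

    # Step 5: evolve each neuron over the interval with the corrected spike
    # times included (for simplicity, the firing subnetwork is re-run here too).
    new_states = {}
    for i in range(n):
        syn = [(t_est[j], S[j][i], aux_idx[j])
               for j in firing if j != i and t_est[j] is not None]
        new_states[i], t_est[i] = advance(i, syn)
    return new_states, t_est
```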


About this article

Cite this article

Sun, Y., Zhou, D., Rangan, A.V. et al. Pseudo-Lyapunov exponents and predictability of Hodgkin-Huxley neuronal network dynamics. J Comput Neurosci 28, 247–266 (2010). https://doi.org/10.1007/s10827-009-0202-2

