Abstract
Mathematical neuronal models are normally expressed using differential equations. The Parker-Sochacki method is a new technique for the numerical integration of differential equations applicable to many neuronal models. Using this method, the solution order can be adapted according to the local conditions at each time step, enabling adaptive error control without changing the integration timestep. The method has been limited to polynomial equations, but we present division and power operations that expand its scope. We apply the Parker-Sochacki method to the Izhikevich ‘simple’ model and a Hodgkin-Huxley type neuron, comparing the results with those obtained using the Runge-Kutta and Bulirsch-Stoer methods. Benchmark simulations demonstrate an improved speed/accuracy tradeoff for the method relative to these established techniques.
Introduction
Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Spiking neuron models can be described mathematically as hybrid systems (Brette et al. 2007), with continuous evolution of the state variables punctuated by discrete synaptic and/or firing events. The continuous part of the system is generally described by a set of differential equations, and running a simulation involves repeatedly solving these equations using analytical or numerical integration methods.
The Parker-Sochacki (PS) method is a new technique for numerically integrating differential equations. PS computes iterative Taylor series expansions, enabling extraordinary integration accuracy in practical simulation time. The method is broadly applicable in computational modelling but has so far been largely overlooked in the biological sciences.
In this article, we explore the Parker-Sochacki method by applying it to two neuronal models: the Izhikevich ‘simple’ model (Izhikevich 2003), and a Hodgkin-Huxley neuron described in Brette et al. (2007). Benchmark simulations based on those established in Brette et al. (2007) are employed to compare the PS method with the established Runge-Kutta and Bulirsch-Stoer methods.
The Parker-Sochacki method
Most neuronal models can be expressed as initial value ordinary differential equations (ODEs) of the form
Picard’s method of successive approximations was designed to prove the existence of solutions to such equations. The method uses an equivalent integral form for Eq. (1)
whose solution can be obtained as the limit of a sequence of functions y _{ n }(t) given by the following recurrence relation
Provided f(t,y) satisfies the Lipschitz condition locally, this sequence is guaranteed to converge locally to y. However, the iterates become increasingly hard to compute, limiting the practicality of the method in this general form.
Parker and Sochacki (1996) considered a form of Eq. (1) with t _{0} = 0 and polynomial f. Note that the first condition entails no loss of generality, since systems of the form of Eq. (1) can always be translated to the origin with a change of independent variable t → t + t _{0}. Parker and Sochacki showed that polynomial f resulted in Picard iterates that were also polynomial. Furthermore, if y _{ n }(t) is truncated to degree n at each iteration, then the nth Picard iterate is identical to the degree n Maclaurin polynomial for y(t). Using a truncated Picard iteration to compute the Maclaurin series for a polynomial ODE was termed the Modified Picard Method in Parker and Sochacki (1996), but we follow Rudmin (1998) in calling it the Parker-Sochacki method.
For a system of ODEs with all polynomial right hand sides, the PS method can be used to compute the Maclaurin series for each variable to any degree desired, thus enabling arbitrarily accurate solutions for the ODE system within the regions of convergence of the series approximations. Parker and Sochacki (1996) went on to demonstrate that a broad class of analytical ODEs can be converted into polynomial form via variable substitutions, thus rendering them solvable via the PS method. The method was subsequently extended to partial differential equations (Parker and Sochacki 2000).
Rudmin (1998) established the practical utility of the PS method by using it to solve the N-body problem in celestial mechanics. Pruett et al. (2003) developed an adaptive timestepping version of the method for the same problem. Carothers et al. (2005) built on the algorithmic work of Rudmin to derive an efficient, algebraic PS method using Cauchy products to solve for higher order terms.
Application
To apply the PS method to a polynomial ODE system, we first define Maclaurin series for each model variable
with y _{0} = y(0), y _{1} = y′(0), \(y_2 = \frac{y''(0)}{2!}\) and so on. Now, because the Maclaurin series is polynomial, we can write down a series for the first derivative in terms of the original series
Equating terms, we have y′_{ p } = (p + 1)y _{(p + 1)}. Rearranging for coefficients in the original series, we arrive at a relation that lies at the heart of the PS method
The basis of the method is to use the model differential equations to replace y′_{ p } with an expression in terms of the model variables. This is best illustrated through examples.
Example 1
Consider the linear system
Here, y′_{ p } = y _{ p } + z _{ p }, z′_{ p } = − y _{ p } + z _{ p } and the PS solution is
Thus, each coefficient of the Maclaurin series can be computed using the previous coefficients and we can easily obtain solutions of arbitrary order. This is the general principle of the PS method.
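As a concrete sketch of this principle in C (the language of the compiled routines used later in the paper; the function name and array bound here are ours), the recurrence from Example 1 can be iterated and the truncated Maclaurin polynomials evaluated by Horner's rule:

```c
#include <assert.h>
#include <math.h>

#define NMAX 64

/* Solve y' = y + z, z' = -y + z by the Parker-Sochacki recurrence
 * y_{p+1} = (y_p + z_p)/(p+1), z_{p+1} = (z_p - y_p)/(p+1),
 * then evaluate the degree-n Maclaurin polynomials at t. */
static void ps_linear(double y0, double z0, int n, double t,
                      double *y_out, double *z_out)
{
    double y[NMAX + 1], z[NMAX + 1];
    y[0] = y0; z[0] = z0;
    for (int p = 0; p < n; ++p) {
        y[p + 1] = (y[p] + z[p]) / (p + 1);
        z[p + 1] = (z[p] - y[p]) / (p + 1);
    }
    /* Horner evaluation of the truncated series */
    double ys = y[n], zs = z[n];
    for (int p = n - 1; p >= 0; --p) {
        ys = ys * t + y[p];
        zs = zs * t + z[p];
    }
    *y_out = ys; *z_out = zs;
}
```

With y(0) = 1, z(0) = 0 the exact solution is y = e^t cos t, z = −e^t sin t, which a degree-30 series reproduces to near machine precision at t = 0.5.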
Example 2
To demonstrate how to handle constants and higher order terms, we consider
The series for y and y′ are as defined above, but an additional series is also defined for y ^{2}.
with coefficients generated using Cauchy products,
Since we can obtain the value of y ^{2} given y, we refer to y ^{2} as a derived variable, while y is a basic variable. The PS solution to Eq. (9) is given by
with p ≥ 1 and \(\left(y^2\right)_p\) given by Eq. (11). Note that the constant term appears in the initial step but not in the subsequent iterations.
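The Cauchy product itself is a short loop. A sketch in C (helper name ours) returning the pth coefficient of the product of two series:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* pth coefficient of the product of two power series:
 * (y*z)_p = sum_{j=0}^{p} y_j * z_{p-j}  (the Cauchy product).
 * With z = y this gives the (y^2)_p coefficients used above. */
static double cauchy(const double *y, const double *z, size_t p)
{
    double s = 0.0;
    for (size_t j = 0; j <= p; ++j)
        s += y[j] * z[p - j];
    return s;
}
```

Note that the cost grows linearly with p, a point that matters when minimising Cauchy products later. As a check, the series of 1/(1 − t) has all coefficients 1, and its square 1/(1 − t)^2 has pth coefficient p + 1.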
Simulations
In numerical simulations, the PS method is applied at each time step to solve for the system variables using initial conditions given by the solution at the previous time step. Thus, for a step size \(\mathit{\Delta} t\), the variables are updated using truncated series approximations (up to order n), as follows:
In a “clock-driven” simulation with fixed time step, it is always possible to rescale the system such that the effective step size becomes equal to one. Thus,
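In explicit form (our hedged reconstruction of this update, using only symbols already defined), the step update sums the series coefficients scaled by powers of the step size,

```latex
y(t + \mathit{\Delta} t) \approx \sum_{p=0}^{n} y_p \,(\mathit{\Delta} t)^p
= \sum_{p=0}^{n} \hat{y}_p ,
\qquad
\hat{y}_p = y_p \,(\mathit{\Delta} t)^p ,
```

so that with the rescaled coefficients \(\hat{y}_p\) the effective step size is one and the powers of \(\mathit{\Delta} t\) disappear from the evaluation.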
Adaptive order processing
One of the advantages of the Parker-Sochacki method is that the order of the Maclaurin series approximations depends only on the number of iterations, and can therefore be adapted according to the local conditions at each time step (Pruett et al. 2003; Carothers et al. 2005).
The PS solution is a sum over terms \(y_p(\mathit{\Delta} t)^p\), which approximate the local truncation error for variable y on iteration p. However, with floating point numbers, rounding means that the actual change in solution at each iteration will only be approximately equal to \(y_p(\mathit{\Delta} t)^p\). Taking this into account, we apply adaptive error control by incrementally calculating the solution and halting the iterations when the absolute changes in value of all variables are less than or equal to some error tolerance value, ε.
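As an illustration of this halting rule, consider the single decay equation η′ = −λη (the synaptic conductance equation used later, Eq. (24)). A sketch in C, with the step size folded into each term so that the quantity tested against ε is exactly the change added to the solution at that iteration:

```c
#include <assert.h>
#include <math.h>

/* Adaptive-order PS step for eta' = -lambda * eta.
 * term holds eta_p * dt^p; iteration stops once the absolute change
 * added to the solution is <= eps (the error tolerance of the text). */
static double ps_decay_step(double eta0, double lambda, double dt,
                            double eps, int max_order, int *order_used)
{
    double term = eta0, sum = eta0;
    int p = 0;
    while (p < max_order) {
        term *= -lambda * dt / (p + 1);   /* -> eta_{p+1} * dt^{p+1} */
        sum += term;
        ++p;
        if (fabs(term) <= eps)
            break;
    }
    if (order_used)
        *order_used = p;                  /* order actually reached */
    return sum;
}
```

A tight tolerance simply drives the loop to a higher order; the step size never changes, which is the point of the method.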
Variable substitutions
Equations containing exponential and trigonometric functions can often be converted into a form solvable by PS via the substitution of variables (Parker and Sochacki 1996, 2000; Carothers et al. 2005). We illustrate the method using a simple example relevant to neuronal modelling.
Example 3
Consider the system
In order to transform the system, we let x = exp(y). Like y ^{2} in Example 2, x here is a derived variable, while y and z are basic variables. Since the derivative of an exponential function is equal to the function itself, x′ = y′x, and the system can be rewritten:
Power series operations
Application of the Parker-Sochacki method can be viewed in terms of power series operations, and the examples above demonstrate all of the operations required to solve any polynomial system of ODEs. Addition and subtraction operations are applied in termwise fashion, while the Cauchy product performs multiplication. Since integer powers can be obtained using multiplication (y ^{3} = y ^{2} y, y ^{4} = y ^{2} y ^{2} etc.), addition, subtraction and multiplication operations are sufficient to solve polynomial equations.
Knuth (1997) describes further power series operations that can be used to apply the Parker-Sochacki method to non-polynomial equations. First, we consider division. If we take two variables x, y, expressed as power series, and define a new variable to represent the quotient z = x/y, then using the Cauchy product we can write
By rearrangement, we have (for \(y_0 \ne 0\)):
Just as the Cauchy product permits variable multiplication in ODEs solvable by the PS method, this formula adds division to the list of permissible operations. Thus, the ODEs need not be strictly polynomial as suggested in prior works. Rather, PS can be applied to any equation composed only of numbers, variables, and the four basic arithmetic operations (addition, subtraction, multiplication and division), with higher powers handled through iterative multiplication.
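In code, this runs as ordinary long division of series; the recurrence below is our reading of the rearrangement (valid for y_0 ≠ 0), computing each quotient coefficient from previously computed ones:

```c
#include <assert.h>
#include <math.h>

/* Long division of power series: given coefficients x_0..x_n and
 * y_0..y_n (y_0 != 0), fill z with the coefficients of z = x/y using
 * z_p = (x_p - sum_{j=0}^{p-1} z_j * y_{p-j}) / y_0. */
static void series_div(const double *x, const double *y, double *z, int n)
{
    for (int p = 0; p <= n; ++p) {
        double s = x[p];
        for (int j = 0; j < p; ++j)
            s -= z[j] * y[p - j];
        z[p] = s / y[0];
    }
}
```

For example, dividing the constant series 1 by the series of 1 − t yields the geometric series 1 + t + t^2 + …, i.e. 1/(1 − t).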
Alternatively, Knuth (1997) presents a formula, due to Euler, for raising series to powers directly. We consider only positive integer powers here. Briefly, if y _{0} = 1, then the coefficients of z = y ^{α} are given by z _{0} = 1 and (for p > 0)
For series with \(y_0 \ne 1\), \(z_0 = (y_0)^\alpha\) and (for p > 0)
If the ((α + 1)j/p − 1) terms are precalculated, this method uses 2p multiplications and a single division to calculate the pth coefficient. Thus, Euler’s power method can provide a computational saving over iterative multiplication only if more than two Cauchy products are required to calculate the power, i.e. α > 4.
As presented, both division and general power operations run the risk of encountering division by zero. We will return to the issue of division by zero in quotient calculations in the context of the Hodgkin-Huxley neuron model in Section 4. For Euler’s power method, we can circumvent the issue. If y _{ m } is the first nonzero coefficient in y, we define a new series x, with x _{ p } = y _{p + m}. Next, we take w = x ^{α} and calculate the coefficients using Eq. (20). The series z then has a number of leading zeros equal to the number of leading zeros in y multiplied by the power (mα). Finally, z _{p + mα} = w _{ p }.
The presented methods for performing power series division and power operations are not new (though we are not aware of a prior description of the technique for handling leading zeros in the power calculations). However, their incorporation into the Parker-Sochacki method is both novel and powerful, significantly expanding the method’s scope.
The Izhikevich model
The Izhikevich model (Izhikevich 2003, 2007) is a two-variable, phenomenological neuron model, featuring a quadratic membrane potential, \(\emph{v}\), and a linear recovery variable, u. The model is interesting because it has simple equations yet is capable of a rich dynamic repertoire (Izhikevich 2004, 2007). The model can act as either an integrator or a resonator and can exhibit adaptation or support bursting. Indeed, this is claimed to be the simplest model capable of spiking, bursting, and being either an integrator or a resonator (Izhikevich 2007).
Subthreshold behaviour and the upstroke of the action potential can be represented as follows:
where \(\emph{v}\) is the membrane potential minus the resting potential \(\emph{v}_{rest}\) (\(\emph{v}=0\) at rest), \(\emph{v}_t\) is the threshold potential, C is the membrane capacitance, a is the rate constant of the recovery variable, k and b are scaling constants, and I is the total (inward) input current from sources other than \(\emph{v}\) and u. Assuming the threshold potential is greater than the resting potential \((\emph{v}_t>0)\), then when \(\emph{v} > \emph{v}_t\), the quadratic expression in Eq. (21) will be positive, and \(\emph{v}\) will tend to escape towards infinity. This escape process models the action potential upstroke. The action potential downstroke is modelled using an instantaneous reset of the membrane potential, plus a stepping of the recovery variable:
where \(\emph{v}_{max}\) is the action potential peak, \(\emph{v}_{reset}\) is the post-spike reset potential, and u _{ step } is used to model post-spike adaptation effects. Spike times are taken as the times when Eq. (22) is applied.
In our benchmark network simulations, synaptic interactions were modelled using a conductance-based formalism (Vogels and Abbott 2005; Brette et al. 2007). With the addition of fast excitatory (η) and inhibitory (γ) conductance-based synaptic currents, Eq. (21) becomes
where η and γ are the total excitatory/inhibitory conductances, and E _{ η }, E _{ γ } are corresponding reversal potentials. The conductance values are stepped by incoming synaptic events of matching type, and decay exponentially with time
where the λ parameters are decay rate constants.
The Parker-Sochacki solution
In this section, we develop an efficient PS solution for the Izhikevich model system Eqs. (22), (23), (24). Most calculations in the PS method require a fixed number of floating point operations at each iteration, but Cauchy products require a number of operations that scales linearly with the number of iterations. Consequently, we seek to minimise the use of Cauchy products in designing an efficient algorithm.
A straightforward solution based on Eq. (23) would require three Cauchy products: one to compute \(k\emph{v}^2\), one for \(\eta \emph{v}\), and another for \(\gamma \emph{v}\). Noting that these products contain the common factor \(\emph{v}\), we rearrange the membrane equation such that only one Cauchy product is required
where \(\chi = k\emph{v} - \eta - \gamma - k \emph{v}_t\). Thus, \(\chi_0 = k \emph{v}_0 - \eta_0 - \gamma_0 - k \emph{v}_t\) and (for p > 0) \(\chi_p = k \emph{v}_p - \eta_p - \gamma_p\). Then the term \(\chi \emph{v}\) is given by a Cauchy product:
Using this construction, an efficient Parker-Sochacki solution for the Izhikevich model can be written down as:
We can precalculate 1/C, 1/(p + 1) and 1/(C(p + 1)) and solve using only add, subtract and multiply floating point operations.
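Since the display equations are not reproduced here, the sketch below is our reconstruction of one adaptive-order step, not the paper's Eq. (27) verbatim: it assumes a recovery equation u′ = a(b\(\emph{v}\) − u), a constant injected current entering only the p = 0 step, conductances decaying as in Eq. (24), and the χ rearrangement above, so that a single Cauchy product is used per iteration:

```c
#include <assert.h>
#include <math.h>

#define PMAX 64

/* Parameter set; the exact right-hand sides are assumptions
 * for illustration, not the paper's listing verbatim. */
typedef struct {
    double C, k, vt;          /* capacitance, scaling constant, threshold */
    double a, b;              /* recovery: u' = a*(b*v - u) (assumed)     */
    double Eeta, Egamma;      /* synaptic reversal potentials             */
    double leta, lgamma;      /* conductance decay rates (Eq. (24))       */
    double I;                 /* constant injected current                */
} IzhParams;

static void izh_ps_step(const IzhParams *P, double dt, double eps,
                        double *v, double *u, double *eta, double *gam)
{
    double vc[PMAX+1], uc[PMAX+1], ec[PMAX+1], gc[PMAX+1], xc[PMAX+1];
    vc[0] = *v; uc[0] = *u; ec[0] = *eta; gc[0] = *gam;
    xc[0] = P->k * vc[0] - ec[0] - gc[0] - P->k * P->vt;   /* chi_0 */
    double vs = vc[0], us = uc[0], es = ec[0], gs = gc[0], dtp = 1.0;
    for (int p = 0; p < PMAX; ++p) {
        double chv = 0.0;                 /* (chi*v)_p: one Cauchy product */
        for (int j = 0; j <= p; ++j)
            chv += xc[j] * vc[p - j];
        double Ip = (p == 0) ? P->I : 0.0;   /* constant: initial step only */
        vc[p+1] = (chv - uc[p] + Ip + P->Eeta * ec[p] + P->Egamma * gc[p])
                  / (P->C * (p + 1));
        uc[p+1] = P->a * (P->b * vc[p] - uc[p]) / (p + 1);
        ec[p+1] = -P->leta   * ec[p] / (p + 1);
        gc[p+1] = -P->lgamma * gc[p] / (p + 1);
        xc[p+1] = P->k * vc[p+1] - ec[p+1] - gc[p+1];      /* chi_p, p > 0 */
        dtp *= dt;
        double dv = vc[p+1]*dtp, du = uc[p+1]*dtp;
        double de = ec[p+1]*dtp, dg = gc[p+1]*dtp;
        vs += dv; us += du; es += de; gs += dg;
        if (fabs(dv) <= eps && fabs(du) <= eps &&
            fabs(de) <= eps && fabs(dg) <= eps)
            break;
    }
    *v = vs; *u = us; *eta = es; *gam = gs;
}
```

The structure, not the parameter values, is the point: each iteration costs a fixed amount of work plus one Cauchy product of length p + 1.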
Calculating exact spike times
In clock-driven simulations, spike times are normally restricted to discrete time samples, offering limited spike timing accuracy despite accurate integration. Furthermore, Eq. (22) implies that discretisation of spike times will dramatically affect the subsequent accuracy of the solution. Specifically, the membrane potential increases rapidly during an action potential upstroke and, because of the voltage-dependence of the recovery variable, spike timing discretisation will tend to result in significant errors in the values of both \(\emph{v}\) and u prior to the application of Eq. (22). When Eq. (22) is applied, \(\emph{v}\) is reset to a fixed value regardless of its prior state, but u is stepped and thus depends on its prior value. Thus, errors in the value of u are propagated through to the post-spike state. This propagation of errors can be minimised by applying Eq. (22) at the correct times.
We now show how to calculate precise spike times for the Izhikevich model despite using large time steps in simulation. To establish our method, we note the following:

1.
The Maclaurin series solution for \(\emph{v}(t)\) is a polynomial.

2.
Via a shift in \(\emph{v}_0\), locating a voltage threshold crossing can be posed as a polynomial rootfinding problem.

3.
Having found a suprathreshold voltage value at a discrete time point, we know that the threshold crossing must have occurred during the preceding time step.

4.
Because of the escape process used to model the action potential upstroke, the membrane voltage will be monotonically increasing close to the threshold crossing/root.
Given these conditions, it is clear that we can efficiently solve this root-finding problem using the Newton-Raphson method with precalculated polynomial coefficients.
For original step size \(\mathit{\Delta} t = \mathit{\Delta} t_{1}\), this root-finding process returns a value \(\mathit{\Delta} t_{pre}\), in \((0, \mathit{\Delta} t_{1}]\), reflecting the spike time within the local time step, and we solve for u, η, γ at \(t + \mathit{\Delta} t_{pre}\). Next, Eq. (22) is applied to model the action potential downstroke and post-spike adaptation effects. Finally, an additional time step is run using the post-spike variable values as initial conditions and a step size \(\mathit{\Delta} t_{post} = \mathit{\Delta} t_{1} - \mathit{\Delta} t_{pre}\). This returns the solution to time \(t + \mathit{\Delta} t_{1}\).
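The root-finding step reduces to a few lines once the coefficients are in hand. The sketch below evaluates the polynomial and its derivative together by Horner's rule; a fixed iteration count stands in for a stopping rule, which we do not reproduce:

```c
#include <assert.h>
#include <math.h>

/* Newton-Raphson on the precalculated Maclaurin polynomial
 * P(t) = sum_{p=0}^{n} c[p] * t^p, with the derivative built from the
 * same coefficients (P'(t) = sum p * c[p] * t^{p-1}).  c[0] is assumed
 * to have the threshold already subtracted, so the crossing is a zero
 * of P, and P is monotonically increasing near the root (n >= 1). */
static double spike_time(const double *c, int n, double t0, int iters)
{
    double t = t0;
    for (int it = 0; it < iters; ++it) {
        double f = c[n], df = n * c[n];
        for (int p = n - 1; p >= 1; --p) {   /* Horner for P and P' */
            f  = f  * t + c[p];
            df = df * t + p * c[p];
        }
        f = f * t + c[0];
        t -= f / df;                          /* Newton update */
    }
    return t;
}
```

For instance, with coefficients {−2, 0, 1} (i.e. t^2 − 2) the routine converges to √2, and for t^3 + t − 1 it finds the root where the residual vanishes to machine precision.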
Figure 1 illustrates the application of this algorithm to a single neuron under constant current injection, with fixed time step (\(\mathit{\Delta} t_1 = 0.5\) ms). In Fig. 1(a), the cell fires four times in 100 ms, with spike-rate adaptation due to the recovery variable. Figure 1(b) zooms in on the first spike, where a peak voltage crossing occurs between the 19.5 ms and 20 ms time samples. Figure 1(c) illustrates the use of the Newton-Raphson method to find the exact spike time, and the post-spike reduced step up to 20 ms is depicted in Fig. 1(d).
An adaptive order algorithm
With adaptive order processing, the complete algorithm for one Izhikevich neuron over a single time step is as follows:

1. Run Eq. (27) to order n where error checking succeeds
2. Use Eq. (13) to get a new value for \(\emph{v}\)
3. If \(\emph{v} \ge \emph{v}_{max}\):
   (a) \(\emph{v}_0 \leftarrow \emph{v}_0 - \emph{v}_{max}\)
   (b) Apply the Newton-Raphson method to find \(\mathit{\Delta} t_{pre}\)
   (c) \(u(t+\mathit{\Delta} t_{pre}) = u(t) + \sum_{p=1}^n u_p (\mathit{\Delta} t_{pre})^p\)
   (d) \(\eta(t+\mathit{\Delta} t_{pre}) = \eta(t)\, e^{-\lambda_{\eta} \mathit{\Delta} t_{pre}}\)
   (e) \(\gamma(t+\mathit{\Delta} t_{pre}) = \gamma(t)\, e^{-\lambda_{\gamma} \mathit{\Delta} t_{pre}}\)
   (f) \(\emph{v} \leftarrow \emph{v}_{reset}, u \leftarrow u + u_{step}\)
   (g) Run a reduced time step with \(\mathit{\Delta} t_{post} = \mathit{\Delta} t_1 - \mathit{\Delta} t_{pre}\)
4. Else, use Eq. (13) to update u, η and γ
This algorithm omits synaptic events. We have developed a system for scheduling and delivering events at arbitrary, continuous time points despite using a fixed global time step, \(\mathit{\Delta} t_g\). Provided that synaptic transmission delays are always longer than \(\mathit{\Delta} t_g\), we can calculate in advance whether a neuron receives any synaptic events during the time interval \([t, t + \mathit{\Delta} t_g]\) (Morrison et al. 2007). If events are to be delivered, we move through the global step via local substeps separated by synaptic events, with each substep being processed using the algorithm presented above.
A Hodgkin-Huxley model
The Hodgkin and Huxley (1952) (HH) model of the squid giant axon has arguably been the most influential work in the field of computational neuroscience, and their conductance-based modelling framework remains widely employed. HH model equations are more complex than those of the Izhikevich neuron, and they are not generally expressed in a form to which the Parker-Sochacki method can be directly applied. In this section, we show how to apply the power series operations and variable substitutions described in Sections 2.5 and 2.4 to produce a PS solution algorithm.
The particular HH model neuron considered here was described in Brette et al. (2007), as a modification of a hippocampal cell model described by Traub and Miles (1991). With conductancebased synapses, the equations are,
where \(\emph{v}\) is the membrane potential (in mV), n, m, h are gating variables for the voltage-gated sodium (m, h) and potassium (n) currents, and η and γ are excitatory and inhibitory synaptic conductances, respectively. The gating variables evolve according to voltage-dependent rate constants,
where \(\emph{v}_t = -63\) mV sets the threshold (Brette et al. 2007).
The Parker-Sochacki solution
Since Eqs. 29–34 feature exponential functions, variable substitutions are required before PS can be applied. First, we let a = β _{ n } and b = α _{ h }. As described in Section 2.4, the new equations are as follows:
Equation 34 takes the form of a Boltzmann function. Now, letting \(c = exp((\emph{v}_t+40-\emph{v})/5)\), we can write
Applying this substitution, we have β _{ h } = 4/(c + 1), and h′ = b(1 − h) − 4h/(c + 1). Carothers et al. (2005) showed that the substitution z = 1/y yields an equation of the form z′ = − y′z ^{2}, and this substitution can be employed to convert β _{ h }, and hence h′, into polynomial form. However, a simpler solution is obtained via series division using Eq. (18). In this application, we let d = h/(c + 1), and use
Then, h′ = b(1 − h) − 4d. Note, there is no danger of encountering division by zero here since the denominator in Eq. (38) is of the form (exp(x) + 1), which is always positive.
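Concretely, the series division recurrence applies with denominator c + 1, where only the constant coefficient of the denominator changes. A sketch (names chosen for this example) returning each d _{ p } from coefficients already computed:

```c
#include <assert.h>
#include <math.h>

/* pth coefficient of the quotient d = h / (c + 1): series division with
 * denominator coefficients (c_0 + 1, c_1, c_2, ...).  The denominator
 * has the form exp(x) + 1, which is always positive, so c_0 + 1
 * never vanishes. */
static double quot_coeff(const double *h, const double *c,
                         const double *d, int p)
{
    double s = h[p];
    for (int j = 0; j < p; ++j)
        s -= d[j] * c[p - j];
    return s / (c[0] + 1.0);
}
```

As a check, with h = 1 and c the series of 1 + t, the quotient is 1/(2 + t), whose Maclaurin coefficients are 1/2, −1/4, 1/8, …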
Equations 29, 31, 32 can all be written in the form x/(exp(x)−1), multiplied by some scaling constant. As with (34), we begin here by substituting for the exponential terms in the denominators of these equations. Thus, letting \(e = exp((\emph{v}_t+15-\emph{v})/5)\), \(f = exp((\emph{v}_t+13-\emph{v})/4)\), and \(g = exp((\emph{v} - (\emph{v}_t+40))/5)\), we have
Next we introduce variables for the quotient terms. Thus, \(q = (\emph{v}_t+15-\emph{v})/(e-1)\), \(r = (\emph{v}_t+13-\emph{v})/(f-1)\), \(s = (\emph{v} - (\emph{v}_t+40))/(g-1)\), with coefficients given by:
yielding α _{ n } = 0.032q, α _{ m } = 0.32 r, and β _{ m } = 0.28 s. There is a danger of division by zero here, since the denominators in Eqs. (42)–(44) take the form (exp(x) − 1), which equals zero when x = 0. Furthermore, this condition is encountered within the normal voltage range for the model neuron. In examining the stability of the PS method around these singular points, we found that the Taylor series expansions diverged, causing the PS method to fail. To examine whether this problem was specific to the series division operation, an alternative formulation was tested using substitutions of the form z = 1/x, z′ = − x′z ^{2} (Carothers et al. 2005). The same failures were observed. To solve this problem, code was added to first detect series divergence and then substitute in an alternative integration method to repeat the failed step. This system was used in the HH model benchmarking simulations in Section 5.2.
As for the Izhikevich model, we simplify the membrane potential equation by grouping all the terms multiplied by \(\emph{v}\), and defining \(\chi = -g_L - \bar{g}_K n^4 - \bar{g}_{Na} m^3 h - \eta - \gamma\). Thus, \(\chi_0 = -g_L - \bar{g}_K(n^4)_0 - \bar{g}_{Na}(m^3h)_0 - \eta_0 - \gamma_0\) and (for p > 0) \(\chi_p = -\bar{g}_K(n^4)_p - \bar{g}_{Na}(m^3h)_p - \eta_p - \gamma_p\). Similarly, ψ = − (α _{ n } + β _{ n }) = − (0.032q + a), ξ = − (α _{ m } + β _{ m }) = − (0.32 r + 0.28 s).
Equation (28) can now be rewritten as:
The complete PS solution is listed below. Since no powers greater than four are calculated, Cauchy products are used rather than the Euler power operation. Cauchy products will comprise the major computational cost of the solution method, especially in cases where a high-order solution is required. Equation 50 uses one Cauchy product to obtain \((\chi \emph{v})_p\). Two Cauchy products are needed to obtain \((n^4)_p\) (via an intermediate n ^{2} term). Three Cauchy products are needed for \((m^3h)_p\), via intermediate m ^{2} and m ^{3} terms. Equations 51 to 65 each require one Cauchy product to solve (in slightly modified form for the quotient variables). Thus, a total of 19 Cauchy products are required at each iteration to solve this HH model using the PS method.
With adaptive order processing, the complete algorithm for one HH neuron over a single time step is:
 1.
 2.
For the non-power derived variables (a − s), we have the option of either updating using Eq. (13) or Eq. (14), or using the definition of the variable to recalculate its value at each step. For example, for the variable c, we can use
or
Using the latter method, the variable is guaranteed to match its definition at each time step. We term this method tethering, and any variable so updated a tethered variable. In preliminary testing, it was found that the stability of the PS solution was improved by tethering all the variables involved in quotient calculations (c, d, e, f, g, q, r, s), but that tethering a and b produced no improvement in the solution.
Results
In this section we assess the speed and accuracy of our adaptive PS algorithms by running benchmark simulations for the Izhikevich and Hodgkin-Huxley neuron models. The results are compared to those obtained using the 4th-order Runge-Kutta (RK) and Bulirsch-Stoer (BS) methods.
The 4th-order Runge-Kutta method is one of the most commonly used numerical integration methods (Press et al. 1992). The method offers moderate accuracy at moderate computational cost. For each equation, the derivative is evaluated four times per step: once at the start point, twice at trial midpoints, and once at a trial endpoint. These results are then combined in such a way that the first, second and third order error terms cancel. Thus, the solution agrees with the Taylor series expansion up to the 4th-degree term. Derivative evaluations are the major computational cost of RK.
The Bulirsch-Stoer method is another popular method. For smooth ODEs without singular points inside the integration interval, BS is described by Press et al. (1992) as the best known way to obtain high-accuracy solutions to ODEs with minimal computational effort. The method combines the (second order) modified midpoint method with the technique of Richardson extrapolation. In a single BS step, a sequence of crossings of the step is made with an increasing number n of modified midpoint substeps. Following Press et al. (1992), we use the sequence n _{ k } = 2k, where k is the crossing number. After each crossing, a rational function extrapolation is carried out to approximate the solution that would be obtained if the step size were zero. The extrapolation algorithm also returns error estimates. If the latter are acceptable, we terminate the sequence and move to the next step. If not, we continue with the next crossing. For a given step size, BS can be expected to be more accurate but also more computationally expensive than RK.
Both BS and PS can apply adaptive error control without adaptive time stepping. To examine this process, adaptive stepping was not implemented for any of the methods. For PS, adaptive order processing was implemented as described in Section 2.3. Equivalent adaptive error control, based on change in the iterative solution, was employed for BS. PS was limited to a maximum of 200th order, while BS was limited to a maximum of 50 crossings.
Simulations were run in the MATLAB 7.5 environment, with algorithms written in C and compiled as mex files. Code for the Runge-Kutta and Bulirsch-Stoer methods was adapted from the routines provided in Press et al. (1992) by removing adaptive time stepping. The machines used to run the simulations featured 2.2 GHz AMD Opteron processors, and at least 4 GB of memory. Simulation code will be provided in the ModelDB database. Major routines are listed in Appendix A.
Izhikevich model
Two types of simulation were used on the Izhikevich model. In the first type, cells were driven by current injection only. In the second, recurrent synaptic interactions were also modelled. These different simulations enabled us to separate the computational costs of integration and synaptic processing.
Current injection simulations
The current injection model featured 1000 neurons but no functional synapses. All cells had identical parameters, set by fitting the model to the HH neuron in Brette et al. (2007). Details of the cellular model and fitting process are given in Appendix B. All simulations were of one second duration. In one series of experiments, all model cells were driven by a constant, depolarising injection current sufficient to make them fire once within the simulation time period. In another, the cells were driven to fire ten times. These are the one- and ten-spike simulations, respectively. In each case, we applied n _{ c } = 15 different error tolerance conditions. For BS and PS we used a global step size of \(\mathit{\Delta} t_g = 0.25\) ms, and systematically varied the error tolerance, ε. All three methods were stable (no solution divergence to infinity) at this step size. For condition c _{ n }, ε = 1e−(n + 1). For RK, we varied the error tolerance indirectly by changing the global step size. For c _{1..15}, \(\mathit{\Delta} t_g\) for RK was set to 1/4, 1/6, 1/8, 1/10, 1/20, 1/40, 1/60, 1/80, 1/100, 1/200, 1/400, 1/600, 1/800, 1/1000, 1/2000 ms, respectively. For time averaging, all simulations were repeated ten times. All solution algorithms included calculations for exact spike times using the Newton-Raphson method as described in Section 3.2. For RK and BS this required additional integration steps to evaluate \(\emph{v}\) and v′ at different time points.
Figure 2 shows the results from these simulations. In Fig. 2 (top), simulation time is plotted as a function of c. In the one-spike simulations (solid lines), PS was the fastest method for all c, with simulation times monotonically increasing from 0.72 ±0.01 s (c _{1}) to 1.83 ±0.02 s (c _{15}). RK times rose from 0.78 ±0.01 s to 388 ±0.7 s for c _{1..15}. BS times increased gradually from 7 to 11 s across c _{1..12} but rose steeply at tighter error tolerances to 309 ±1 s in condition 15. Results from the ten-spike simulations were similar. PS was again the fastest method in all conditions. However, times here were greater than the one-spike results in equivalent conditions, with gain ranging from 1.09 (c _{1}) to 1.44 (c _{7}). RK times were slightly greater than for the equivalent one-spike simulations for all c, with gain factors ranging from 1.072 (c _{6}) to 1.085 (c _{2}). BS times were reduced relative to the one-spike results in the first few conditions, but were greater in the last four conditions.
In order to explain the variation in simulation times, Fig. 2 (middle) shows how adaptive processing for BS and PS varied with error tolerance by plotting representative statistics for each method. The number of BS crossings was low for c _{1..12} but rose steeply for c > 12. This increase reflects error tolerance failures. In both one- and ten-spike simulations there were no failures for c _{1..12}, but for c _{13,14,15} there were, respectively, 21, 475, 1666 failures per cell in the one-spike simulations and 176, 1088, and 1850 failures per cell in the ten-spike simulations. In contrast, PS never failed to achieve the specified error tolerances. The mean order of the PS method increased gradually with increasing c, and the maximum order across all conditions was 20 in one- and 21 in ten-spike simulations. The mean PS order was greater in the ten- than one-spike simulations and the gain was very similar to the simulation time gain, ranging from 1.10 (c _{1}) to 1.43 (c _{7}).
To quantify the accuracy of the simulation output, we created reference solutions against which to test all other solutions. Since the PS Taylor series were convergent in these simulations, reference solutions were obtained by running PS to complete numerical convergence (ε = 0). There were no tolerance failures and reference simulation times were 1.84 ±0.02 s for one spike and 2.59 ±0.01 s for ten. Mean order was 6.14 and 8.73, respectively, and maximum order was 20 and 21 (as for c _{15}). Simulation error was calculated as the mean absolute membrane voltage divergence between test and reference traces. Figure 2 (bottom) plots simulation accuracy, taken as the reciprocal of the error.
Despite using the same error tolerance conditions, BS was many times more accurate than PS for c_1 to c_10 in both one- and ten-spike simulations. In the one-spike simulations, RK was also more accurate than PS for c_1 to c_10. RK was less accurate in the ten-spike simulations, but was still more accurate than PS for c_1 to c_6. However, both BS and RK accuracy plots plateaued at low tolerances, with peak values always between 1e11 and 1e13. Indeed, BS showed reduced accuracy at the lowest tolerances, where it exhibited failures. In contrast, PS showed progressive accuracy gains with decreasing tolerance until, in condition 15, measured accuracy was infinite, since the reference and test traces were identical for both one- and ten-spike simulations.
The reference PS simulations achieved double precision accuracy in the state variables and yet were faster than the fastest BS simulations. Furthermore, the reference runs were only 2.35 and 3.07 times slower than the fastest RK simulations in the one- and ten-spike simulations, respectively.
Recurrent network model simulations
The network model here was based on Benchmark 1 from Brette et al. (2007), which was inspired by an earlier model (Vogels and Abbott 2005). The network featured 4000 neurons (80% excitatory, 20% inhibitory; parameters as above). All cells were randomly connected with 2% connection probability, and n_s = 5 different network configurations were created.
All simulations were of one second duration. To generate recurrent activity, random stimulation was applied for the first 50 ms, as described by Brette et al. (2007). This initial stimulation was provided here by constant current injection, with each cell independently assigned a random current value in [0, 200] pA. For each network configuration, n_i = 10 different patterns of initial stimulation were applied, and each pairing of network configuration and input pattern defined a single experiment (n_e = n_s × n_i = 50).
In the absence of numerical errors, all simulations from the same experiment should have produced identical output. Repeated experiments therefore allowed us to examine the speed/accuracy tradeoff for each integration method.
Given the results from the current injection simulations, we selected three representative error tolerance conditions to apply here: c_1, c_2, and c_3 were identical to c_1, c_9, and c_15 from the current injection simulations. Thus, for BS and PS, ε = 1e−2, 1e−10, and 1e−16, and for RK, \(\mathit{\Delta} t_g = 1/4, 1/100, 1/2000\). As in the current injection simulations, reference solutions were created using PS with ε = 0.
Single experiment results
In this section, we characterise the outputs from a single experiment, using the reference solution to assess accuracy. Figure 3(a) shows membrane potential traces from a single neuron, with results from conditions 1–3 arranged in separate panels from top to bottom, and with integration methods represented using different colours. For comparison, the trace from the reference solution for this experiment is plotted as a black line in each panel. The reference trace was drawn last so that it would obscure the coloured traces when they were in agreement. Thus, working left to right in a single panel, the appearance of a coloured line is a visual indicator of divergence between the reference solution and the test solution from the method represented by that colour.
A quantitative measure of trace divergence (accuracy) was obtained by recording the time point at which each test trace first differed from the reference trace by more than 1 mV. These divergence points are indicated by vertical lines in Fig. 3(a). For c_1, the divergence times for all three methods were between 140 and 150 ms. For c_2, divergence times were later for all methods, at 433 ms (RK), 443 ms (PS), and 500 ms (BS). In the final condition, the BS and RK results were reversed relative to the previous condition, with RK diverging at 500 ms and BS diverging earlier, at 433 ms. In contrast, the PS test solution agreed with the reference solution over the full one second simulation.
Figure 3(b) shows population raster plots from the first 15 cells in the same network. Once again, the reference solution is plotted in each panel to give a visual indicator of agreement. Raster divergence (vertical lines) was taken as the time point at which a test spike time first differed from the corresponding reference spike time by more than 1 ms. Raster divergence times were generally later than trace divergence times but followed the same trends. This indicates that solution divergence for this model is a network phenomenon rather than a single cell characteristic, as expected given the highly recurrent network activity here.
Both voltage trace and spike time results from each method generally converged towards the reference solution as the error tolerance decreased. The only exception to this rule was the BS result from condition 3, which was worse than that from condition 2. As above, BS exhibited error tolerance failures at the lowest tolerance, and the reduced accuracy probably results from round-off errors accumulated over many crossings (Press et al. 1992). Only the PS solution from condition 3 showed agreement with the reference solution that extended beyond the time limit of the simulation. RK and BS showed neither complete agreement with the reference solution nor agreement between their own test solutions.
Overall performance results
In this section, we examine the overall accuracy of each integration method and derive a performance measure based on both speed and accuracy. In the single experiment results, simple heuristic measures of trace and raster plot divergence were employed to assess accuracy. While those measures were sufficient to highlight important trends in the data, a stricter measure of global solution divergence was obtained here by comparing time-ordered sequences of spikes from different simulations in the same experiment. Spike sequences consisted of {spike time, neuron index} pairs, and the duration of global solution agreement was taken as the time of the last spike at which the test and reference sequence neuron indices were identical. Assuming the reference simulations were at least as accurate as the test simulations, the duration of agreement provides a measure of solution accuracy.
Figure 4 (top) plots simulation time for each method as a function of the error tolerance condition. The general pattern of results here was qualitatively similar to the current injection results in Fig. 2(a). The mean number of spikes across all experiments was 7.66 ±0.4, so we compare with the ten-spike current injection results. RK times were between 6.07 (c_2) and 6.82 (c_1) times larger than in the equivalent current injection simulations. PS times were between 7.51 (c_2, c_3) and 8.31 (c_1) times larger, while BS times were between 5.09 (c_1) and 8.9 (c_3) times larger. These increased times, despite the reduced firing rate, reflect the introduction of synaptic interactions and the fact that the recurrent network had four times as many cells. Furthermore, trace recordings were output to file during the recurrent simulations but not during the current injection simulations. To examine the cost of synaptic interactions more directly, additional simulations were run with smaller recurrent networks of 1000 cells, without file recordings, and with the firing rate adjusted to approximately ten spikes per cell via weight scaling. In these smaller simulations, RK times were between 0.97 (c_2, c_3) and 1.03 (c_1) times larger than in the equivalent current injection simulations. PS times were between 1.24 (c_1) and 1.32 (c_3) times larger, while BS times were between 1.27 (c_1) and 1.96 (c_3) times larger. Thus, the additional cost of synaptic interactions was noticeable for BS and PS, but not for RK.
Figure 4 (middle) plots the mean duration of agreement for each method. These results are broadly similar to those obtained in the single experiment results. RK accuracy increased by a large amount moving from c_1 to c_2, and by a smaller amount from c_2 to c_3, reaching a maximal value of 0.33 s. BS accuracy peaked at 0.30 s for c_2 and decreased at c_3. PS accuracy increased progressively as the error tolerance was tightened. In condition 3, most, but not all, PS simulations agreed with the reference solution over the full duration of the simulation. Thus, unlike in the current injection simulations, the tiny numerical differences between the simulations with ε = 1e−16 and ε = 0 were sufficient here to cause network state divergence in some cases within a one second simulation time period.
Morrison et al. (2007) have argued that, in order to arrive at a relevant measure of the performance of an integration method, simulation time should be analysed as a function of the integration error. Following this principle, overall performance was assessed as a function of both speed and accuracy by dividing the duration of agreement by the simulation time to yield a dimensionless performance measure. For example, a performance score of 1 would be obtained by a method running at real-time speed and showing complete agreement with the reference solution. Figure 4 (bottom) plots mean performance. By this measure, RK was the best performing method for c_1, while PS performed second best for c_1 and much better than the other methods for c_2 and c_3.
The reference PS simulations once again achieved double precision accuracy in the state variables over the simulation period and took 19.50 ±0.7 s to run. This was faster than the fastest BS simulations, and 3.39 times slower than the fastest RK simulations.
Hodgkin–Huxley model
For the Hodgkin–Huxley model, current injection simulations were used to compare methods in the absence of synaptic processing. Ten identical cells were modelled for more accurate time calculations; parameters are listed in Appendix B. As for the Izhikevich model, one- and ten-spike simulations were run and 15 error tolerance conditions were applied. Preliminary testing showed solution divergence to infinity with step sizes larger than 0.01 ms for RK and 0.1 ms for BS/PS. Consequently, \(\mathit{\Delta} t_g\) was set to 0.1 ms for all BS and PS simulations, and to 0.01 ms for RK in condition 1. Error tolerances for BS and PS were identical to those used in Section 5.1.1. For RK, \(\mathit{\Delta} t_g\) was reduced in the same manner as for the Izhikevich model, but from a lower starting point. Thus, for c_15, \(\mathit{\Delta} t_g\) for RK was 1/80,000 ms, or 12.5 ns.
For the PS method, an adaptive algorithm with failure detection was used as described in Section 4.1, with a replacement BS step being run when PS failure was detected.
Unlike in the Izhikevich model simulations, the PS method sometimes failed to achieve the specified error tolerances here, giving no guarantee of accuracy. For this reason, we conducted an analysis of within-method agreement across conditions, using the c_15 results from each method as reference solutions. Voltage traces were recorded from every simulation at 1 ms sampling steps, and the mean absolute difference between test and reference traces was recorded. All methods showed initial convergence towards their own reference as the error tolerance was reduced, suggesting that they were becoming more accurate. The best result in both one- and ten-spike simulations came from PS (c_14), at 2.39e−13 and 1.12e−12 mV, respectively. RK and BS achieved divergences never less than 1e−10 and 1e−9 mV, respectively.
Consequently, general reference solutions for each experiment were produced using PS with ε = 0. Treating all other voltage traces as test solutions, the mean absolute voltage difference between reference and test traces was calculated, and the inverse of this value was taken as an accuracy measure for the test solution. Finally, overall performance was taken as accuracy divided by simulation time.
Figure 5 shows the results of this performance analysis. The top panel shows how simulation time varied with the error tolerance condition. All methods became slower with decreasing error tolerance, as expected. RK was the slowest method in all conditions here. PS was the fastest method, with better than real-time speed in all conditions in the one-spike experiments. In the ten-spike simulations, PS was between 1.26 (c_1) and 3.27 (c_15) times slower than in the equivalent one-spike simulations, but was still faster than RK and BS in all conditions.
The middle panel of Fig. 5 shows mean accuracy values plotted against the error tolerance condition. As with the Izhikevich model results, RK and BS showed convergence towards the reference solution with reducing error tolerance, justifying the choice of reference. As for the Izhikevich model, however, the accuracy measures for both methods plateaued, reaching similar peak levels in each case, with the peak located at c_11 (BS, one-spike) or c_10 (all other simulations). PS accuracy continued to improve beyond this tolerance level, reaching peak values more than two orders of magnitude greater than the alternatives in the one-spike simulations and roughly one order of magnitude greater in the ten-spike experiments.
The bottom panel of Fig. 5 shows overall performance values for each method. PS recorded the best results here in both the one- and ten-spike simulations. Comparing peak performance values for each method, in the one-spike simulations PS performed roughly four orders of magnitude better than BS and five orders of magnitude better than RK. In the ten-spike simulations, PS performed 9 times better than BS and 558 times better than RK.
The reference PS simulations took around ten percent longer to run than the c_15 simulations in both the one- and ten-spike cases. Unlike in the Izhikevich model results, double precision accuracy was not obtained, due to error tolerance failures. The PS method only failed around singular points in the equations, but the replacement BS steps were unable to achieve zero error tolerance when PS failed.
Discussion
The Parker–Sochacki method is a promising new numerical integration technique. We have presented the method and shown how it can be applied to the Izhikevich and Hodgkin–Huxley neuronal models.
In Section 2, we summarised major milestones in the development of the Parker–Sochacki method and illustrated its application through examples. We demonstrated how to implement adaptive error control using adaptive order processing in PS. We also showed how power series division and power operations can be used within the Parker–Sochacki framework. For terms with powers greater than 4, Euler's power method can provide significant computational savings over iterated Cauchy products, but it is the division operation which is likely to be of greater utility. Series division is simple to implement, since the major calculation can be carried out using a standard Cauchy product function. With this operation, PS can be directly applied to any equation composed only of numbers, variables, positive integer powers, and the four basic arithmetic operations (addition, subtraction, multiplication and division). This is a far broader class of equations than the polynomials considered in previous articles (Parker and Sochacki 1996; Carothers et al. 2005). Where other expressions are present, it may still be possible to apply the method, but additional work will be required to discover and apply suitable variable substitutions.
In Section 3, we applied PS to the Izhikevich neuron model: a simple model capable of rich dynamic behaviour. We developed an efficient PS solution using a single Cauchy product, showed how to calculate exact spike times within larger time steps using the Newton–Raphson method, and presented a simple adaptive order algorithm.
Benchmark simulations in Section 5.1 demonstrated that the Parker–Sochacki method is capable of double precision integration accuracy for Izhikevich model neurons in both current injection and recurrent network simulations. Neither the Bulirsch–Stoer nor the Runge–Kutta method was capable of the same level of accuracy. Furthermore, in Section 5.1.2 it was shown that integration accuracy had a major effect on network behaviour in a recurrent network simulation, with small solution errors leading to divergent behaviour within a relatively short simulation time period.
In light of the typical parameter uncertainties in neuronal modelling, the question of whether double precision integration accuracy is useful in a given setting will be a matter for the individual investigator to consider. However, the relative time cost of applying zero error tolerance in PS simulations is small; reference PS simulations were always faster than any BS simulations on this model and took less than four times as long to run as fourth-order RK simulations with the same global step size. In general, we would make the following suggestions. First, PS should always be run with zero error tolerance. Second, even if PS is not used for the main simulations in a study, it may still be useful as a reference solution in pilot work.
In Section 4, we applied PS to a Hodgkin–Huxley model. We showed how variable substitutions transform the equations into a form suitable for the application of PS. We also successfully applied the series division operation. However, for equations of the form x/(exp(x) − 1), we encountered a Taylor series divergence problem that we were unable to solve through variable substitutions. There are at least three ways to work around such a problem. First, as we did, an alternative numerical integration method can be used for steps where the PS method fails. Second, polynomial, spline, or rational function approximation methods can be used. Cubic interpolating splines are one attractive option here due to the low order and high accuracy offered (de Boor 2001). Alternatively, Floater and Hormann (2007) proposed a family of rational interpolants that have no poles and arbitrarily high approximation orders. Finally, there exist alternative equation forms for Hodgkin–Huxley type models that avoid the presence of singular points, one promising option being the Extended Hodgkin–Huxley (EHH) model (Borg-Graham 1999) (Section 8.4.2). Here, the voltage-dependent rate constant equations take on the following generic form:
where K is a positive constant, F is Faraday's constant, R is the gas constant, and T is the temperature in Kelvin. The following method creates a PS solution. First, substitute for the exponential expressions to give α′_x = a_x and β′_x = b_x. Next, define a derived variable c_x = τ_0(a_x + b_x) + 1. Finally, solve for α_x and β_x using series division operations on a_x/c_x and b_x/c_x, respectively. Furthermore, since K is positive, c_x is always positive, avoiding any singularities.
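Reading these substitution steps backwards indicates the shape of the rate equations being solved; the display below is our reconstruction from the steps themselves, not a quotation of the original EHH expressions:

```latex
\alpha_x \;=\; \frac{\alpha'_x}{\tau_0\,(\alpha'_x + \beta'_x) + 1},
\qquad
\beta_x \;=\; \frac{\beta'_x}{\tau_0\,(\alpha'_x + \beta'_x) + 1}
```

With a_x = α′_x and b_x = β′_x, the shared denominator is exactly c_x = τ_0(a_x + b_x) + 1, so the series divisions a_x/c_x and b_x/c_x yield α_x and β_x directly.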
In order to retain the Hodgkin–Huxley model equations used by Brette et al. (2007), benchmark simulations in Section 5.2 used the method-substitution approach. At low error tolerances, this approach appeared to yield greater accuracy than the alternative methods, but was unable to achieve double precision accuracy, due to failures in both PS steps close to singular points and the replacement BS steps. Given the lack of singular points in the Extended Hodgkin–Huxley equations, we conjecture that the Maclaurin series would always be convergent for this modelling framework. If simulation testing proves this conjecture to be correct, then the PS method will be able to achieve arbitrary precision and should also run faster than on the standard Hodgkin–Huxley model, due to an absence of failure and replacement steps.
In all of our simulations, continuous event times were accommodated within a globally clock-driven framework. This modelling approach enables far greater accuracy than traditional clock-driven methods, where events are restricted to discrete time points. Event-driven simulation approaches (Mattia and Del Giudice 2000; Delorme and Thorpe 2003; Makino 2003; Brette 2006, 2007) offer comparable event timing precision but are generally restricted to simple neuronal models, while the present approach is far more widely applicable. Morrison et al. (2007) proposed a similar hybrid clock-driven/event-driven approach but, like many standard event-driven techniques, their method was restricted to linear neuron models. In contrast, by using the Parker–Sochacki method, we were able to combine the flexibility of clock-driven simulation methods with the precision of event-driven approaches.
In Appendix A, we present the major routines used in our implementation of the Parker–Sochacki method. We endeavoured to make the code generic, modular, and simple to adapt. It should be emphasised that the provided code does not constitute a general neuronal modelling package of the type described by Brette et al. (2007). Rather, it is our hope that the Parker–Sochacki method will be adopted into existing simulation packages as an alternative integration method for highly accurate simulations.
As previously noted, the Parker–Sochacki method is directly applicable to any model with polynomial or rational differential equations. In terms of existing spiking neuron models, the class with polynomial equations includes the leaky (Lapicque 1907; Tuckwell 1988) and quadratic (Ermentrout 1996; Latham et al. 2000) integrate-and-fire models, the resonate-and-fire neuron model (Izhikevich 2001), the Fitzhugh–Nagumo model (Fitzhugh 1961; Nagumo et al. 1962), and the models of spiking and bursting by Hindmarsh and Rose (1982, 1984). The exponential integrate-and-fire model (Fourcaud-Trocmé et al. 2003), including the two-dimensional adaptive version (Brette and Gerstner 2005), can be handled using an exponential variable substitution (see Section 2.4). For Hodgkin–Huxley type models, multiple substitutions will usually be required.
The Hodgkin–Huxley formalism can be viewed as a simple Markov kinetic model (Destexhe et al. 1994). More complex kinetic models have been used to model voltage-gated ion channels (see, for example, Vandenberg and Bezanilla 1991; Bezanilla et al. 1994), and it has been suggested that ligand-gated and second-messenger-gated channels can be modelled along the same lines (Destexhe et al. 1994; Destexhe 2000). Like the HH and EHH models, these general kinetic models usually feature exponential functions that can be handled using the same kind of substitutions.
Compartmental models often use equations similar to those of single-compartment models for each compartment, plus linear, resistive coupling terms between neighbours. The PS method handles coupling terms in the same way as any other variables; no additional substitutions or manipulations are required. Indeed, PS has already been successfully applied to the n-body problem (Rudmin 1998; Pruett et al. 2003), which features n coupling terms in the equations describing the motion of each body. Thus, the PS method is applicable to a compartmental model provided it is applicable to the equations of the individual compartments, coupling terms included.
Calcium modelling introduces calcium ion concentrations as model variables, with equations describing concentration changes (Borg-Graham 1999). Once again, PS handles these new variables in the same way as any others.
In conclusion, the Parker–Sochacki method offers unprecedented integration accuracy in neuronal model simulations at moderate computational cost, and is applicable in a variety of computational neuroscience settings. It is our hope that this article will help to facilitate its wider adoption.
References
Bezanilla, F., Perozo, E., & Stefani, E. (1994). Gating of Shaker K+ channels: II. The components of gating currents and a model of channel activation. Biophysical Journal, 66(4), 1011–1021.
Borg-Graham, L. (1999). Interpretations of data and mechanisms for hippocampal pyramidal cell models. In Cerebral cortex (pp. 19–138). New York: Plenum.
Brette, R. (2006). Exact simulation of integrate-and-fire models with synaptic conductances. Neural Computation, 18(8), 2004–2027.
Brette, R. (2007). Exact simulation of integrate-and-fire models with exponential currents. Neural Computation, 19(10), 2604–2609.
Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94(5), 3637–3642.
Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., et al. (2007). Simulation of networks of spiking neurons: A review of tools and strategies. Journal of Computational Neuroscience, 23(3), 349–398.
Carothers, D. C., Parker, G. E., Sochacki, J. S., & Warne, P. G. (2005). Some properties of solutions to polynomial systems of differential equations. Electronic Journal of Differential Equations, 2005(40), 1–17.
de Boor, C. (2001). A practical guide to splines. In Applied mathematical sciences, revised edition (Vol. 27). New York: Springer.
Delorme, A., & Thorpe, S. J. (2003). SpikeNET: An event-driven simulation package for modelling large networks of spiking neurons. Network, 14(4), 613–627.
Destexhe, A. (2000). Kinetic models of membrane excitability and synaptic interactions. In J. M. Bower & H. Bolouri (Eds.), Computational modeling of genetic and biochemical networks (pp. 225–262). Cambridge: MIT.
Destexhe, A., Mainen, Z. F., & Sejnowski, T. J. (1994). Synthesis of models for excitable membranes, synaptic transmission and neuromodulation using a common kinetic formalism. Journal of Computational Neuroscience, 1(3), 195–230.
Ermentrout, B. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Computation, 8(5), 979–1001.
Fitzhugh, R. (1961). Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1, 445–466.
Floater, M. S., & Hormann, K. (2007). Barycentric rational interpolation with no poles and high rates of approximation. Numerische Mathematik, 107(2), 315–331.
Fourcaud-Trocmé, N., Hansel, D., van Vreeswijk, C., & Brunel, N. (2003). How spike generation mechanisms determine the neuronal response to fluctuating inputs. Journal of Neuroscience, 23(37), 11628–11640.
Hindmarsh, J. L., & Rose, R. M. (1982). A model of the nerve impulse using two first-order differential equations. Nature, 296(5853), 162–164.
Hindmarsh, J. L., & Rose, R. M. (1984). A model of neuronal bursting using three coupled first order differential equations. Proceedings of the Royal Society of London. Series B, Biological Sciences, 221(1222), 87–102.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117(4), 500–544.
Izhikevich, E. M. (2001). Resonate-and-fire neurons. Neural Networks, 14(6–7), 883–894.
Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6), 1569–1572.
Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15(5), 1063–1070.
Izhikevich, E. M. (2007). Dynamical systems in neuroscience: The geometry of excitability and bursting. Cambridge: MIT.
Knuth, D. E. (1997). The art of computer programming: Seminumerical algorithms (Vol. 2, 3rd ed.). Boston: Addison-Wesley Longman.
Lapicque, L. (1907). Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. Journal de Physiologie et de Pathologie Générale, 9, 620–635.
Latham, P. E., Richmond, B. J., Nelson, P. G., & Nirenberg, S. (2000). Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology, 83(2), 808–827.
Makino, T. (2003). A discrete-event neural network simulator for general neuron models. Neural Computing & Applications, 11, 210–223.
Mattia, M., & Del Giudice, P. (2000). Efficient event-driven simulation of large networks of spiking neurons and dynamical synapses. Neural Computation, 12(10), 2305–2329.
Morrison, A., Straube, S., Plesser, H. E., & Diesmann, M. (2007). Exact subthreshold integration with continuous spike times in discrete-time neural network simulations. Neural Computation, 19(1), 47–79.
Nagumo, J., Arimoto, S., & Yoshizawa, S. (1962). An active pulse transmission line simulating nerve axon. Proceedings of the IRE, 50, 2061–2070.
Parker, G. E., & Sochacki, J. S. (1996). Implementing the Picard iteration. Neural, Parallel & Scientific Computations, 4(1), 97–112.
Parker, G. E., & Sochacki, J. S. (2000). A Picard–Maclaurin theorem for initial value PDEs. Abstract and Applied Analysis, 5(1), 47–63.
Press, W., Teukolsky, S., Vetterling, W., & Flannery, B. (1992). Numerical recipes in C (2nd ed.). Cambridge: Cambridge University Press.
Pruett, C. D., Rudmin, J. W., & Lacy, J. M. (2003). An adaptive N-body algorithm of optimal order. Journal of Computational Physics, 187(1), 298–317.
Rudmin, J. W. (1998). Application of the Parker–Sochacki method to celestial mechanics. Technical Report, James Madison University.
Traub, R. D., & Miles, R. (1991). Neuronal networks of the hippocampus. New York: Cambridge University Press.
Tuckwell, H. (1988). Introduction to theoretical neurobiology: Linear cable theory and dendritic structure (Vol. 1). Cambridge: Cambridge University Press.
Vandenberg, C. A., & Bezanilla, F. (1991). A sodium channel gating model based on single channel, macroscopic ionic, and gating currents in the squid giant axon. Biophysical Journal, 60(6), 1511–1533.
Vogels, T. P., & Abbott, L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25(46), 10786–10795.
Acknowledgements
Research supported by The Wellcome Trust. We are grateful to Joseph Rudmin, Stephen Lucas and Jim Sochacki for stimulating discussions and helpful advice.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Action Editor: Upinder Bhalla
Appendices
Appendix A: Parker–Sochacki solution code
The routines below form the core of our implementation of the Parker–Sochacki method. The generic solver routine, ps_step, solves a system of differential equations using PS and advances the solution across a single time step. Worker routines first and iter are passed in to ps_step and calculate the first and subsequent terms of the PS solution for a specific system. The ps_step routine was used with different first and iter functions for both the Izhikevich and Hodgkin–Huxley model simulations; the Izhikevich model routines iz_first and iz_iter are included as an example. To use ps_step to solve a different model, all that is required is to define suitable 'first' and 'iter' functions specific to the model in question. Also included below are some generic power series operation functions.
Appendix B: Benchmark model neuron parameters
Cellular model parameters for both the Izhikevich and HH models were taken from Brette et al. (2007), and all cells had identical basic parameters. Briefly, the cell area was 20,000 \(\upmu\)m^{2}, and the input resistance was 100 MΩ. Given a specific capacitance of 1 \(\upmu\)F/cm^{2}, the whole cell capacitance was taken as C = 200 pF. Following the published code accompanying Brette et al. (2007), E_L was set to −65 mV; the value of −60 mV given in the text of the paper was erroneous (Destexhe, personal communication).
The HH neuron model was as described in Section 4. In addition to the basic parameters listed above, parameters specific to the HH model were: g_L = 10 nS, \(\bar{g}_{Na} = 20000\) nS, \(\bar{g}_{K} = 6000\) nS, E_Na = 50 mV, E_K = −90 mV.
The Izhikevich model neuron parameters were obtained by fitting the model to the HH neuron of Brette et al. (2007). First, \(v_{rest}\) was taken as −65 mV to match \(E_L\). The voltage threshold of −50 mV was shifted by \(v_{rest}\) to give \(v_t = 15\) mV. Next, \(v_{max}\) and c were obtained by observing the HH neuron model under constant, suprathreshold current injection. The observed values of 48 mV and −85 mV were shifted relative to \(v_{rest}\) to give \(v_{max} = 113\) mV and c = −20 mV. In the same simulations, the rheobase current was found to be around 19 pA. Since the HH neuron from Brette et al. (2007) lacks spike frequency adaptation, \(u_{step}\) was set to zero, and a was set to 0.03 to match the value given by Izhikevich (2007) for a regular spiking cortical neuron.
Izhikevich (2007, Ch. 5) describes a method for setting b and k given the rheobase current, the input resistance, and the resting and threshold potentials. Using this method, values of k = 1.3 and b = −9.5 were obtained.
Model synapses were conductance-based, and the conductances were summed to form one η and one γ value for each neuron. The conductances decayed exponentially, with time constants of 5 ms for η and 10 ms for γ. When activated, excitatory synapses incremented η by 6 nS, while inhibitory synapses incremented γ by 67 nS.
Rights and permissions
Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
About this article
Cite this article
Stewart, R.D., Bair, W. Spiking neural network simulation: numerical integration with the Parker–Sochacki method. J Comput Neurosci 27, 115–133 (2009). https://doi.org/10.1007/s10827-008-0131-5
Keywords
 Parker–Sochacki
 Spiking neural network
 Numerical integration
 Izhikevich
 Hodgkin–Huxley