Background

It is by now very well known that the biochemical kinetics involving small numbers of molecules can be very different from the kinetics described by the law of mass action and differential equations [13]. This effect is a property of the intrinsic noise of the system and is associated with the uncertainty of knowing when a reaction occurs and what that reaction is. At the molecular level such intrinsic uncertainty is, in turn, a consequence of the stochastic nature of the fluctuations of the potential energy surface for any chemical reaction in the condensed phase [4]. When considering a collection of molecules, the intrinsic noise is accentuated when some chemical species have small numbers, as is often the case in genetic regulatory models where there are small numbers of key transcription factors that can bind to a limited number of operator regions on DNA [5–15]. Kurtz [16] and Gillespie [17] realised this fact and developed discrete methods to deal with this situation. The stochastic simulation algorithm (SSA, see [18] for a review) describes the time evolution of the dynamics of the species in a well-stirred chemically reacting system as a discrete nonlinear Markov process, resulting in an exact method to sample from the probability density function described by the chemical master equation (CME). Gibson and Bruck proposed a more efficient implementation of the SSA called the next reaction method [19].

The basic idea of the SSA is that at each time point a waiting time to the next reaction and the most likely reaction to occur must be sampled from a joint probability density function leading to an appropriate update of the state vector. But if the rate constants and/or the numbers of molecules in the system are large then the waiting time (time step, τ) can be very small [18]. Because of this Gillespie [20] introduced the Poisson τ-leap method, in which all reactions are allowed to fire in a given τ with a frequency extracted from a Poisson distribution. Since then many extensions of this idea have been developed. Cao et al. [21] have considered efficient mechanisms for selecting τ and have developed implicit methods suitable for simulating stiff systems. Tian and Burrage [22] introduced a modification of Poisson τ-leap methods known as Binomial τ-leap methods that avoids the issue of obtaining negative molecular numbers from which Poisson τ-leap methods can suffer. Chatterjee et al. [23] and Auger et al. [24] have considered modifications to Binomial τ-leap methods that improve some of the implementation aspects. On the other hand, Monk [25] and Mackey [26] noted the importance of representing delays, especially when representing processes such as transcription and translation. Accordingly, Bratsun et al. [12] and Barrio et al. [27] developed a delayed version of the Stochastic Simulation Algorithm. Leier et al. [28] and Anderson [29] extended these ideas to a τ-leap setting.

Although τ-leap methods can, in some cases, substantially improve computational efficiency compared with the SSA, when there is moderate stiffness in the system the efficiencies can be quite poor. One could resort to implicit τ-leap methods, but then there are considerable implementation issues and subtleties. A different approach is to explore ideas from the numerical ODE (ordinary differential equations) and numerical SDE (stochastic differential equations) communities. Thus, with ODEs it is well known that stiffness leads to a step size restriction when using explicit methods, and many classes of efficient implicit methods have been designed [30]. However, in the case of moderately stiff systems explicit Runge-Kutta methods with extended stability regions along the negative real axis have proven to be especially effective [31, 32]. Runge-Kutta methods are a class of one step methods which gain their efficacy by computing intermediate approximations to the solution within a step. Explicit Runge-Kutta methods with extended stability regions are based on explicit Runge-Kutta methods whose stability function is a shifted and scaled Chebyshev polynomial or some variant thereof. In the stochastic setting, there are some subtleties in designing fully implicit methods due to the possible unboundedness of the solution, as the Wiener increment can take positive or negative values with equal likelihood [33]. Thus most methods are semi-implicit, that is, implicit in the deterministic component. Abdulle and Cirilli [32] have, with some success, extended the ideas of explicit Chebyshev methods with extended stability regions to the SDE setting via their class of S-ROCK methods.

Here, we use the Runge-Kutta formulation to construct methods with large stability regions so that efficiencies are gained by allowing larger stepsizes. We note that this is exactly what Abdulle and Cirilli [32] do in the SDE setting, that is, they use a Runge-Kutta formulation to construct methods with excellent stability properties, and even though these methods are only weak order 1 they perform very well. It is noteworthy that in this work we are not using the Runge-Kutta formulation to obtain second order accuracy for τ-leap methods. This seems to be a difficult problem, just as is the case for SDEs, and will probably require double integrals of compensated processes to be simulated. In fact, Abdulle and Cirilli [32] also note that it is very difficult to construct weak order 2 methods with good stability properties, and to our knowledge no such methods currently exist in the SDE setting. Note that in a stochastic setting we judge order of accuracy through two mechanisms: strong order (where trajectories are compared with the true solutions) and weak order (where moments are compared). Often a numerical method may have a higher weak order than its strong order. The Euler-Maruyama method is a case in point, with weak order one and strong order one half.

Thus, in this paper, we explore a series of fully explicit multistage Runge-Kutta methods with extended stability for a fixed-step τ-leap stochastic simulation scheme. Our methods involve the same number of Poisson evaluations per integration step as in the original τ-leap formulation but allow increasingly larger step sizes at the cost of an increasing series of deterministic evaluations in the internal stages. First we give some background on Runge-Kutta methods for ODEs and SDEs. In section Results we extend these ideas to the τ-leap methods and present a stability analysis for linear chemical kinetics, including its practical implementation. In section Numerical results we present numerical results for both the linear case and the classical stiff system described by the Schlögl reaction [34]. Finally, in section Discussion we discuss further implications of this work and, in particular, possible extensions to multiscale modelling.

Review of Runge-Kutta methods for ODEs and SDEs

Stability region for RK methods applied to ODEs

Consider the system of initial value ODEs given by

$$ y'(t) = f(t, y), \qquad y(t_0) = y_0. $$
(1)

The class of s-stage Runge-Kutta (RK) methods for approximating the solution to (1) is given by

$$
\begin{aligned}
Y_i &= y_n + h \sum_{j=1}^{s} \alpha_{ij}\, f(t_n + \omega_j h, Y_j), \qquad i = 1, \ldots, s, \\
y_{n+1} &= y_n + h \sum_{j=1}^{s} \beta_j\, f(t_n + \omega_j h, Y_j),
\end{aligned}
$$
(2)

where h is the time step. This class of methods is characterised by the Butcher tableau

$$
\begin{array}{c|c}
w & A \\ \hline
  & b^T
\end{array}
$$

where $b^T = (\beta_1, \ldots, \beta_s)$, $w = Ae$ and $e = (1, \ldots, 1)^T$. Here $A$ is the matrix with entries $\alpha_{ij}$ and $w = (\omega_1, \ldots, \omega_s)^T$ is a column vector. A Runge-Kutta method is said to be explicit if the $s \times s$ matrix $A$ is strictly lower triangular. The method parameters are usually chosen so that a Runge-Kutta method has appropriate efficiency, order and stability characteristics. The $Y_i$ are considered to be approximations to the solution at the intermediate points $t_n + \omega_i h$ for $i = 1, \ldots, s$.

In a stability setting an RK method is often applied to the linear, scalar test equation

$$ y' = \lambda y, \qquad \mathrm{Re}[\lambda] \le 0. $$
(3)

Applying (2) to (3), it is easily seen that

$$ y_{n+1} = R(h\lambda)\, y_n, $$

where

$$ R(z) = 1 + z\, b^T (I - Az)^{-1} e. $$

(4)

Here R(z) is the so-called stability function. This function can be extended to a linear N-dimensional equation y' = Λy in which case it becomes a matrix function of the N × N matrix Λ:

$$ R(h\Lambda) = I_N + h\,(b^T \otimes \Lambda)\,(I_s \otimes I_N - h\,A \otimes \Lambda)^{-1}(e \otimes I_N), $$
(5)

where $e$ is the unit vector, $I_s$ is the identity matrix of order $s$ and $\otimes$ represents the Kronecker tensor product, such that the $(i, j)$ block of $A \otimes B$ is $a_{ij} B$. Notice that, if $\Lambda$ is a scalar value, then taking $z = h\Lambda$, $R(z)$ is a scalar and takes the form (4). Therefore we can refer to $R$ seamlessly irrespective of whether the argument is a matrix or a scalar.

In the case of an explicit method, as $A$ is a strictly lower triangular $s \times s$ matrix, its $s$th power satisfies $A^s = 0$. Therefore, equation (4) can be expanded into a finite power series in $A$:

$$ R(z) = 1 + \sum_{j=1}^{s} z^j\, b^T A^{j-1} e = 1 + \sum_{j=1}^{s} r_j z^j, $$
(6)

where $r_j = b^T A^{j-1} e$, $j = 1, \ldots, s$. Hence, $R(z)$ is a polynomial of degree at most $s$ for any explicit method.

Since (3) is asymptotically stable for all Re [λ] < 0, the stability region of a Runge-Kutta method is defined as

$$ S = \{ z = h\lambda : |R(z)| \le 1 \}. $$
(7)
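For illustration, the following minimal sketch (Python, assuming NumPy; the classical explicit RK4 tableau is used purely as an example and is not one of the methods derived later in this paper) computes the coefficients $r_j = b^T A^{j-1} e$ of (6) and estimates the real stability interval (7) by direct evaluation.

```python
import numpy as np

def stability_poly_coeffs(A, b):
    """Coefficients r_j = b^T A^(j-1) e of R(z) = 1 + sum_j r_j z^j (eq. 6)."""
    s = len(b)
    e = np.ones(s)
    return [b @ np.linalg.matrix_power(A, j - 1) @ e for j in range(1, s + 1)]

def R(z, r):
    """Stability polynomial R(z) = 1 + sum_j r_j z^j of an explicit method."""
    return 1.0 + sum(rj * z ** j for j, rj in enumerate(r, start=1))

# Example: classical explicit RK4 tableau (illustrative choice only)
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1 / 6, 1 / 3, 1 / 3, 1 / 6])

r = stability_poly_coeffs(A, b)                  # here: [1, 1/2, 1/6, 1/24]
zs = np.linspace(-4.0, 0.0, 4001)
stable = np.abs([R(z, r) for z in zs]) <= 1.0    # |R(z)| <= 1, eq. (7)
print("real stability interval approx. (-%.3f, 0]" % -zs[stable][0])
```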

Stability region for RK methods applied to SDEs

In the case of stochastic differential equations (SDEs), we consider the general m dimensional form

$$ dy(t) = f(t, y)\,dt + g(t, y)\,dW(t), \qquad y(t_0) = y_0, $$
(8)

where $W(t) = (W_1(t), \ldots, W_d(t))^T$ is a vector of $d$ independent Wiener processes, in which an individual Wiener process has the properties

$$ \mathrm{E}[W(t)] = 0 \quad \forall t, \qquad \mathrm{Var}[W(t) - W(s)] = t - s, \quad t > s, $$

and non-overlapping Wiener increments are independent of one another. A sample of a Wiener increment W(t + h) - W(t) is simulated from a Normal random variable with mean 0 and variance h, N(0, h).

Equation (8) can arise as the limit of a discrete process through the concept of a diffusion process, in which case $f(t, y)$ represents the mean of this process and $g(t, y)$ is the $m \times d$ matrix such that $gg^T$ is the covariance. Equation (8) can be interpreted in several ways (see [35] for an introduction to SDEs), depending on which integral definition is used. Two such interpretations lead to the Itô and Stratonovich forms of SDEs. In the Itô setting an integral is approximated by summing, over a partition, the areas of rectangles whose width is the increment of the Wiener process on each subinterval and whose height is the value of the integrand at the left-hand endpoint of that subinterval, whereas in the Stratonovich setting the integrand is evaluated at the midpoint of each subinterval. If (8) is interpreted in the Itô sense then the simplest numerical algorithm is given by

$$ y_{n+1} = y_n + h\, f(t_n, y_n) + g(t_n, y_n)\, \Delta W_n, $$
(9)

where $\Delta W_n = (\Delta W_1, \ldots, \Delta W_d)^T$ and $\Delta W_i := W_i(t_n + h) - W_i(t_n)$, $i = 1, \ldots, d$, are normally distributed random numbers with mean 0 and variance $h$. This method is known as the Euler-Maruyama method and it is known to have strong order (pathwise order) $\tfrac{1}{2}$ and weak order (moment order) 1.
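A minimal sketch of the Euler-Maruyama scheme (9), written in Python and assuming NumPy, is given below for a scalar Itô SDE; the linear drift and diffusion used at the end correspond to the test equation (10) introduced next, and the parameter values are purely illustrative.

```python
import numpy as np

def euler_maruyama(f, g, y0, t0, T, h, rng):
    """Euler-Maruyama for the scalar Ito SDE dy = f(t, y) dt + g(t, y) dW (eq. 9)."""
    n_steps = int(round((T - t0) / h))
    y = np.empty(n_steps + 1)
    y[0] = y0
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))            # Wiener increment ~ N(0, h)
        t = t0 + n * h
        y[n + 1] = y[n] + h * f(t, y[n]) + g(t, y[n]) * dW
    return y

# Illustrative linear test problem dy = a*y dt + b*y dW
a, b = -2.0, 0.5
path = euler_maruyama(lambda t, y: a * y, lambda t, y: b * y,
                      y0=1.0, t0=0.0, T=1.0, h=0.01, rng=np.random.default_rng(1))
print(path[-1])
```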

As with the deterministic case, the quality of a numerical method can be partly characterised by its stability region associated with the scalar, linear test equation

$$ dy = a y\,dt + b y\,dW, \qquad y(0) = y_0. $$
(10)

The solutions of (10) in the Itô and Stratonovich cases are, respectively,

$$ y_I(t) = e^{(a - \frac{1}{2} b^2)t + bW(t)}\, y_0 \quad \text{and} \quad y_S(t) = e^{a t + bW(t)}\, y_0. $$

In the latter case, the solution is mean square stable ($\lim_{t \to \infty} \mathrm{E}[|y_S(t)|^2] = 0$) if $\mathrm{Re}[a] + \mathrm{Re}[b^2] \le 0$.

A very general class of stochastic Runge-Kutta methods [36] was constructed for the solution of (8) which, when applied to the scalar test SDE (10), produces

$$ \mathrm{E}[|y_{n+1}|^2] = R(p, q)\,\mathrm{E}[|y_n|^2], $$

where $R$ is a multinomial in $p$ and $q$ if the method is explicit and where $p = ha$, $q = \sqrt{h}\, b$. Analogous to the deterministic case, the mean square stability region of a method is defined as

$$ S = \{ (p, q) : R(p, q) \le 1 \}. $$

In the case of the Euler-Maruyama method

$$ R(p, q) = |1 + p|^2 + |q|^2 $$

and in the $(p, q)$ plane, with $p, q \in \mathbb{R}$, the stability region is the disc of radius 1 centred at $(-1, 0)$.

Results

The τ-leap Runge-Kutta framework with bounded variance and extended stability domain

As stated in the Background section, the SSA describes the time evolution of a vector of integer numbers of molecules in the presence of intrinsic noise. More formally, suppose that there are $N$ chemical species $S_1, \ldots, S_N$ undergoing $m$ chemical reactions. Let $X_i(t)$, $i = 1, \ldots, N$, denote the number of molecules of species $S_i$ and $X(t) = (X_1(t), \ldots, X_N(t))^T$. Now any set of chemical reactions is uniquely characterised by two sets of quantities. These are the update (stoichiometric) vectors $\nu_1, \ldots, \nu_m$ for each of the $m$ reactions and the propensity functions $a_1(X(t)), \ldots, a_m(X(t))$, which are proportional to the probabilities of each of the reactions occurring. For example, given the reaction

$$ A + B \xrightarrow{\;c\;} C $$

then $X(t) = (A(t), B(t), C(t))^T$, $\nu_1 = (-1, -1, 1)^T$ and $a_1(X(t)) = c\,A(t)B(t)$.

Given $X(t)$ at time $t$, the SSA determines a waiting time $\tau$ to the next reaction from an exponential distribution with rate $a_0(X(t)) = \sum_{j=1}^{m} a_j(X(t))$ (so that the probability of no reaction occurring in $[t, t + \tau)$ is $e^{-\tau a_0(X(t))}$), and then selects the index $k$ of the next reaction with probability proportional to $a_k(X(t))$. The state vector is then updated as

$$ X(t + \tau) = X(t) + \nu_k, $$

and the algorithm repeats.
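A compact sketch of this procedure (Gillespie's direct method) is given below in Python, assuming NumPy; the bimolecular reaction $A + B \to C$ above is used as the illustrative system and the function names are ours, not those of any particular package.

```python
import numpy as np

def ssa(x0, nu, propensities, t_final, rng):
    """Gillespie's direct method. nu is an (m x N) array of update vectors and
    propensities(x) returns the vector (a_1(x), ..., a_m(x))."""
    t, x = 0.0, np.array(x0, dtype=float)
    while True:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0.0:                          # no reaction can fire any more
            break
        dt = rng.exponential(1.0 / a0)         # waiting time with rate a_0(X(t))
        if t + dt > t_final:
            break
        t += dt
        k = rng.choice(len(a), p=a / a0)       # reaction k fires with probability a_k/a_0
        x += nu[k]                             # X(t + tau) = X(t) + nu_k
    return x

# Illustrative system: A + B -> C with rate constant c
c = 0.01
nu = np.array([[-1, -1, 1]])                       # single update vector nu_1
prop = lambda x: np.array([c * x[0] * x[1]])       # a_1(X) = c*A*B
print(ssa([100, 80, 0], nu, prop, t_final=10.0, rng=np.random.default_rng(0)))
```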

Since a typical stepsize (waiting time) is of size $1/a_0(X(t))$, it can be very small if some of the rate constants are large and/or some species have large numbers of molecules. Accordingly, τ-leap methods attempt to take a larger step size within which every reaction is allowed to fire a certain number of times. This can be written as

$$ X_{n+1} = X_n + \sum_{j=1}^{m} \nu_j K_j. $$
(11)

Gillespie [20] chose the number of $R_j$ reactions per step, $K_j$, as coming from a Poisson distribution with mean $\tau a_j(X_n)$, that is

$$ K_j \sim \mathcal{P}(\tau a_j(X_n)). $$
(12)
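In code, one fixed-τ Poisson leap (11)–(12) amounts to a single vector of Poisson samples followed by a linear update of the state; a minimal Python sketch (NumPy assumed, with no negativity safeguards or τ-selection heuristics) is:

```python
import numpy as np

def poisson_tau_leap(x0, nu, propensities, tau, n_steps, rng):
    """Fixed-step Poisson tau-leap (eqs. 11-12): K_j ~ Poisson(tau * a_j(X_n))."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        K = rng.poisson(tau * propensities(x))    # number of firings of each channel
        x = x + nu.T @ K                          # X_{n+1} = X_n + sum_j nu_j K_j
    return x

# Illustrative usage with the bimolecular reaction A + B -> C introduced above (c = 0.01)
nu = np.array([[-1, -1, 1]])
prop = lambda x: np.array([0.01 * x[0] * x[1]])
print(poisson_tau_leap([100, 80, 0], nu, prop, tau=0.005, n_steps=1000,
                       rng=np.random.default_rng(0)))
```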

Using the so-called compensated process given by

$$ L(\tau, x) = \mathcal{P}(\tau x) - \tau x, $$
(13)

which satisfies $\mathrm{E}[L(\tau, x)] = 0$ and $\mathrm{E}[L(\tau, x)^2] = \tau x$, equation (11) can be restated as

$$ X_{n+1} = X_n + \tau f(X_n) + \sum_{j=1}^{m} \nu_j L(\tau, a_j(X_n)), $$
(14)

where $f(x) = \sum_{j=1}^{m} \nu_j a_j(x)$.

As noted by Gillespie [20] and Tian and Burrage [22], and as a consequence of the Law of Large Numbers, as $\tau x \to \infty$, $L(\tau, x)$ converges to a normal random variable with zero mean and variance $\tau x$, $N(0, \tau x)$, and this can be considered as a sample $\sqrt{x}\,\Delta W_n$ of $\sqrt{x}\,N(0, \tau)$. Substituting this into (14) gives

$$ X_{n+1} = X_n + \tau f(X_n) + \sum_{j=1}^{m} \nu_j \sqrt{a_j(X_n)}\, \Delta W_j. $$
(15)

This is precisely the Euler-Maruyama method applied to the SDE

$$ dX = \sum_{j=1}^{m} \nu_j a_j(X)\,dt + \sum_{j=1}^{m} \nu_j \sqrt{a_j(X)}\,dW_j. $$
(16)

Thus in the continuous limit the Poisson τ-leap method can be viewed as the Euler-Maruyama method applied to a form of the Chemical Langevin Equation. Indeed Li [37] has shown that the Poisson τ-leap method has mean square strong order $\tfrac{1}{2}$ and weak order 1, and this is consistent with the previous remarks. In addition, equation (16) is a particular case of the general SDE

$$ dX = f(X)\,dt + \sum_{k=1}^{m} g_k(X)\,dW_k. $$

These relationships naturally lead to the introduction of the class of Runge-Kutta τ-leap methods which bears a relationship, similar to the one discussed above, to the general class of Stochastic Runge-Kutta methods for solving SDEs [36]. This general class of explicit s-stage Runge-Kutta τ-leap methods takes the form

$$
\begin{aligned}
d_n &= \sum_{j=1}^{m} \nu_j\, L(\tau, a_j(X_n)), \\
Y_i &= X_n + \tau \sum_{j=1}^{i-1} \alpha_{ij}\, f(Y_j) + \omega_i d_n, \qquad i = 1, \ldots, s, \\
X_{n+1} &= X_n + \tau \sum_{j=1}^{s} \beta_j\, f(Y_j) + d_n,
\end{aligned}
$$
(17)

where $L(\tau, x)$ is given by (13) and $f(x) = \sum_{j=1}^{m} \nu_j a_j(x)$ represents the drift or expected step change. As our focus is on explicit methods, the matrix $A$ is strictly lower triangular. We note that (17) requires the same number of samples of Poisson random variables per step as the Poisson τ-leap method.

The Poisson τ-leap method given by (11) and (12) is equivalent to (17) with

$$ s = 1, \qquad A = 0, \qquad \beta_1 = 1. $$

Indeed, any Runge-Kutta method for solving an ODE can be incorporated into this framework, as can other methods proposed in the literature. For example, the midpoint method of Gillespie [20] can be represented with $s = 2$, $b^T = (0, 1)$, $w = (0, 0.5)^T$ and $A = \begin{pmatrix} 0 & 0 \\ 0.5 & 0 \end{pmatrix}$.
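One step of (17) transcribes directly into code; the Python sketch below (NumPy assumed) takes the tableau arrays A, b and w of the underlying explicit Runge-Kutta method, uses the reversible isomerisation analysed later in the paper as an illustrative system, and omits any negativity checks or τ-selection rules.

```python
import numpy as np

def rk_tau_leap_step(x, nu, propensities, tau, A, b, w, rng):
    """One step of the explicit s-stage RK tau-leap framework (eq. 17)."""
    a = propensities(x)
    d = nu.T @ (rng.poisson(tau * a) - tau * a)   # compensated noise d_n, cf. eq. (13)
    f = lambda y: nu.T @ propensities(y)          # expected step change f(x)
    fY = []
    for i in range(len(b)):                       # internal stages Y_1, ..., Y_s
        Yi = x + tau * sum(A[i, j] * fY[j] for j in range(i)) + w[i] * d
        fY.append(f(Yi))
    return x + tau * sum(bj * fYj for bj, fYj in zip(b, fY)) + d

# Illustrative use with Gillespie's midpoint tableau quoted above and the reversible
# isomerisation S1 <-> S2 (k1 = k2 = 10) that serves as the linear test system later on.
A = np.array([[0.0, 0.0], [0.5, 0.0]])
b = np.array([0.0, 1.0])
w = np.array([0.0, 0.5])
nu = np.array([[-1, 1], [1, -1]])
prop = lambda x: np.array([10.0 * x[0], 10.0 * x[1]])
x, rng = np.array([100.0, 100.0]), np.random.default_rng(0)
for _ in range(100):
    x = rk_tau_leap_step(x, nu, prop, 0.01, A, b, w, rng)
print(x)
```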

The linear case

As in the case of stability settings in the ODE and SDE regimes, we analyse (17) when applied to linear kinetics, which in this case are described by sets of unimolecular reactions. A general set of m unimolecular reactions can be described by m propensity functions given by the following linear functions

$$ a_j(x) = \sum_{i=1}^{N} c_{ij} x_i = c_j^T x, \qquad j = 1, \ldots, m, $$
(18)

where $x$ is the state vector of dimension $N$ and $c_j = (c_{1j}, \ldots, c_{Nj})^T$, $j = 1, \ldots, m$, are $m$ vectors of dimension $N$ defining the propensities. A more convenient way to describe this linear kinetics system is by using the $N \times N$ matrix $W$

$$ W = \sum_{j=1}^{m} \nu_j c_j^T, $$

so that now the drift or expected step-change can be represented as

$$ f(x) = \sum_{j=1}^{m} \nu_j c_j^T x = W x. $$

If the Runge-Kutta method for ODEs underlying a Runge-Kutta τ-leap method (17) has stability function given by (4), then when the latter is applied to (18) we show (Additional file 1) that

$$ \mathrm{E}[X_{n+1}] = R(\tau W)\,\mathrm{E}[X_n], $$
(19)

where $R$ is the multidimensional version of (4) given by (5). Note that this is a natural generalization of the deterministic case, in which a Runge-Kutta method applied to the problem $y' = \Lambda y$ gives $y_n = R(h\Lambda)\, y_{n-1}$. Thus with fixed stepsize τ

$$ \mathrm{E}[X_n] = R(\tau W)^n\, \mathrm{E}[X_0]. $$
(20)

Therefore, boundedness in the mean requires that the spectral radius, ρ, of R(τ W) satisfies

$$ \rho(R(\tau W)) \le 1. $$

In order to analyse the framework (17) from the perspective of both mean and variance behaviour we consider the reversible isomerisation reaction with fixed total number of molecules given by

$$ S_1 \underset{k_2}{\overset{k_1}{\rightleftharpoons}} S_2, $$
(21)

as the linear scalar test equation. It is easy to see that this system is analogous to (3) for ODEs and to (10) for SDEs, but with a constant nonzero term. The system is chosen to have a constant nonzero term so that its variance, which would otherwise decay to zero, can be compared with the variance given by the framework methods (17). In this case

$$ W = \begin{pmatrix} -k_1 & k_2 \\ k_1 & -k_2 \end{pmatrix}. $$

(22)

For this set of reactions, the Chemical Master Equation (which describes the probability density function associated with the evolving Markov process $X$) can be solved analytically [18, 38]. In particular, it can be shown that the stationary state $X^* = (X_1^*, X_2^*)^T$ has a probability density function (PDF) that follows a binomial distribution with

$$ P(X_1^* = x) = \frac{T!}{x!\,(T - x)!}\, p^x (1 - p)^{T - x} $$

where

$$ p = \frac{k_2}{k_1 + k_2} $$

and $T = X_1(t) + X_2(t)$ is the (fixed) total number of molecules in the system. Thus, from the properties of the binomial distribution, with $e = (1, 1)^T$,

$$ \mathrm{E}[X^*] = \frac{T}{k_1 + k_2}\,(k_2, k_1)^T, \qquad \mathrm{Var}[X^*] = \frac{T\, k_1 k_2}{(k_1 + k_2)^2}\, e. $$
(23)

In the case of non-negative coefficients in the underlying RK method, one can show (see details in Additional file 1) that if (17) is applied to (21) with constant τ such that $|R(z)| < 1$, $z = -\tau(k_1 + k_2)$, then in the limit as $n \to \infty$ the mean vector converges to the theoretical mean, that is

$$ \lim_{n \to \infty} \mathrm{E}[X_n] = \mathrm{E}[X^*]. $$

Note that under the constraint $|R(z)| < 1$, $z = -\tau(k_1 + k_2)$, the spectral radius of $R(\tau W)$ is less than or equal to 1, and as only one eigenvalue is equal to one we have boundedness of the mean.
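For the matrix (22) the eigenvalues of $W$ are $0$ and $-(k_1 + k_2)$, so the spectral radius condition reduces to the scalar constraint on $z = -\tau(k_1 + k_2)$; a quick numerical check for the Poisson τ-leap case, in Python with NumPy and purely illustrative parameter values, is:

```python
import numpy as np

k1, k2, tau = 10.0, 10.0, 0.05          # illustrative values, z = -tau*(k1 + k2) = -1
W = np.array([[-k1, k2], [k1, -k2]])

# Poisson tau-leap: R(z) = 1 + z, so R(tau*W) = I + tau*W
R = np.eye(2) + tau * W
print(np.linalg.eigvals(R))             # eigenvalues 1 (conservation of T) and 1 + z = 0
print(abs(1.0 - tau * (k1 + k2)))       # |R(z)| < 1 for the scalar test value z = -1
```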

Furthermore, if $\mathrm{Var}[X]$ denotes the variance of the new method at steady state ($X_1$ and $X_2$ have the same variance) and if $R^2(z) \ne 1$, $z = -\tau(k_1 + k_2)$, then (see details in Additional file 1)

$$ \mathrm{Var}[X] = \psi(z)\,\mathrm{Var}[X^*], $$

where

$$ \psi(z) = \frac{2}{z}\left(\frac{R(z) - 1}{R(z) + 1}\right). $$
(24)

We call this the relative variance at the stationary state associated with $R$.

Let us consider some particular cases of this result:

Poisson τ-leap For this method $R(z) = 1 + z$ and $\psi(z) = \frac{1}{1 + \frac{1}{2}z}$. Thus, the equilibrium variance doubles at $z = -1$, rises fourfold at $z = -1.5$ and is unbounded at $z = -2$.

Two-stage methods with $\alpha_{21} \ne 0$ For the family of explicit two-stage methods with $\alpha_{21} \ne 0$

$$
\begin{array}{cc}
0 & 0 \\
\alpha_{21} & 0 \\ \hline
\beta_1 & \beta_2
\end{array}
$$

the stability function is $R(z) = 1 + z + \gamma z^2$, where $\gamma = \beta_2 \alpha_{21}$, and the variance behaviour is determined by

$$ \psi(z) = \frac{1 + \gamma z}{1 + \frac{1}{2}z + \frac{\gamma}{2} z^2}. $$

In this case we have one free parameter of the method, γ, which allows us to control both the stability function $R$ and the relative variance at steady state. We might be interested in setting γ to a value that both allows large time steps to be used (by maximising the region $(-l, 0]$ in which $z$ fulfils $|R(z)| < 1$) and keeps the relative variance ψ(z) close to one. In the case $\gamma \le \frac{1}{8}$, ψ grows without bound as $z$ becomes more negative, since the denominator of ψ then has a real negative root. More interesting is the case $\gamma > \frac{1}{8}$, where the maximum and minimum of ψ occur for $1 + \gamma z = \pm\sqrt{2\gamma}$, respectively, and in this case

$$ -\frac{2\gamma}{\sqrt{8\gamma} + 1} \le \psi(z) \le \frac{2\gamma}{\sqrt{8\gamma} - 1}. $$

Constraining ψ to lie within a fixed tolerance ϵ of one, $|\psi(z) - 1| < \epsilon$, over a range $z \in (-l, 0]$ that is to be maximised is achieved with

$$ \gamma = (1 + \epsilon)\left[\left(\tfrac{1}{2} + \epsilon\right) - \sqrt{\epsilon(1 + \epsilon)}\right] $$

and with a stability region (-l, 0] with

$$ l = -\frac{1}{1 - \epsilon} + \frac{1}{2\gamma} + \sqrt{\left(\frac{1}{2\gamma} + \frac{1}{1 - \epsilon}\right)^2 - \frac{2}{\gamma}}. $$

For instance, for 0.5 < ψ(z) < 1.5, setting γ = 0.20096 gives a maximum stability region of (-3.68026, 0] and thus the method

$$
\begin{array}{cc}
0 & 0 \\
0.20096 & 0 \\ \hline
0 & 1
\end{array}.
$$

This is the methodology we propose in the following section for the derivation of particular Runge-Kutta methods with $s$ stages. Note that if we required the same limitation on the variance with the standard Poisson τ-leap method we could only take $z \in (-\tfrac{2}{3}, 0]$. Thus with the two-stage method we can take a stepsize almost six times as large. (A numerical check of these expressions is sketched below, after the particular cases.)

Implicit midpoint rule For the implicit midpoint rule

$$
\begin{array}{c|c}
\tfrac{1}{2} & \tfrac{1}{2} \\ \hline
 & 1
\end{array}
\qquad
R(z) = \frac{1 + \tfrac{1}{2}z}{1 - \tfrac{1}{2}z}
\quad \text{and} \quad
\psi(z) = 1 \ \ \forall z.
$$
(25)

This was first shown by Cao et al. [38]. In fact only those Runge-Kutta methods that have a stability function given by (25) can preserve the variance exactly for linear problems. These methods include the implicit midpoint and trapezoidal rules and have to be implicit.
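Returning to the two-stage family above, the closed-form expressions for γ and l can be checked numerically; the following Python sketch (NumPy assumed) reproduces γ ≈ 0.20096 and l ≈ 3.680 for ϵ = 0.5 and confirms that ψ stays within the stated bounds on (−l, 0].

```python
import numpy as np

eps = 0.5
gamma = (1 + eps) * ((0.5 + eps) - np.sqrt(eps * (1 + eps)))
l = (1 / (2 * gamma) - 1 / (1 - eps)
     + np.sqrt((1 / (2 * gamma) + 1 / (1 - eps)) ** 2 - 2 / gamma))

def psi(z, g):
    """Relative stationary variance of the two-stage method with R(z) = 1 + z + g z^2."""
    return (1 + g * z) / (1 + 0.5 * z + 0.5 * g * z ** 2)

z = np.linspace(-l + 1e-9, 0.0, 2001)
print(gamma, l)                                    # approximately 0.20096 and 3.6803
print(psi(z, gamma).min(), psi(z, gamma).max())    # close to the bounds 0.5 and 1.5
```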

Methods with bounded variance and extended stability domain

For the general case of $s$ stages we require ψ(z) to be as close to 1 as possible for as large a range of $z$ as possible, that is, for as large a range of $z$ fulfilling the stability condition $|R(z)| < 1$. We proceed by first showing that if we impose a bound on the relative variance ψ around one, we automatically fulfil the stability condition over a certain range. To this end, let $0 \le \epsilon < 1$; we impose the constraint

$$ |\psi(z) - 1| < \epsilon $$
(26)

and optimise the value of $l_{s,\epsilon}$ such that the range for which this holds is $(-l_{s,\epsilon}, 0]$.

Noticing from (24) that

$$ R(z) = \frac{1 + \frac{z}{2}\psi(z)}{1 - \frac{z}{2}\psi(z)}, $$
(27)

inequality (26) can be restated in terms of R(z)

$$ \frac{1 + \frac{z}{2}(1 + \epsilon)}{1 - \frac{z}{2}(1 + \epsilon)} < R(z) < \frac{1 + \frac{z}{2}(1 - \epsilon)}{1 - \frac{z}{2}(1 - \epsilon)}, $$
(28)

with $z \in (-l_{s,\epsilon}, 0]$. Hence, we can translate constraints on the relative variance into constraints on the stability function. Since we are interested in constructing explicit methods, we can ask how to make ψ(z) close to 1 in an explicit framework, for which we already know the stability function is a polynomial of at most degree $s$ (equation (6))

$$ R(z) = 1 + \sum_{j=1}^{s} r_j z^j. $$

Thus, similarly to the case $s = 2$ in which we had one free parameter, γ, to optimise, if we assume $r_1 = b^T e = 1$ then we have $s - 1$ parameters, $r_2, \ldots, r_s$, that we can optimise. In this case, though, the search for the optimal set of parameters has to be performed with numerical optimisation methods rather than analytically. The problem of finding optimal sets of parameters can be stated as a nonlinear program (NLP) and its solution approximated numerically (see details in Additional file 1).
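The paper's NLP formulation is given in Additional file 1 and is not reproduced here. Purely as a rough illustration of the idea, and assuming SciPy is available, one can maximise l(r) directly with a derivative-free global search; the grid, bounds and solver choice below are ours and are not those used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def R(z, r):
    """Explicit stability polynomial R(z) = 1 + z + r_2 z^2 + ... + r_s z^s (eq. 6)."""
    return 1.0 + z + sum(rj * z ** (j + 2) for j, rj in enumerate(r))

def psi(z, r):
    """Relative stationary variance psi(z) = (2/z)(R(z) - 1)/(R(z) + 1) (eq. 24)."""
    Rz = R(z, r)
    with np.errstate(divide="ignore", invalid="ignore"):
        return (2.0 / z) * (Rz - 1.0) / (Rz + 1.0)

def neg_l(r, eps, z=np.linspace(-15.0, -1e-3, 3000)):
    """Minus the largest l with |psi(z) - 1| < eps for all z in (-l, 0] (grid-based)."""
    ok = np.abs(psi(z, r) - 1.0) < eps
    ok_from_zero = ok[::-1]                  # scan from z close to 0 towards -15
    if ok_from_zero.all():
        return z[0]                          # constraint holds on the whole grid
    first_bad = np.argmax(~ok_from_zero)
    return z[::-1][first_bad]                # this value equals -l

# Crude global search over the free coefficients r_2, ..., r_s (here s = 3, eps = 0.1)
eps, s = 0.1, 3
res = differential_evolution(neg_l, bounds=[(-1.0, 1.0)] * (s - 1),
                             args=(eps,), seed=0, tol=1e-6)
print("l approx.", -res.fun, "  r_2, ..., r_s:", res.x)
```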

Figure 1 shows the stability function and relative variance function for the Poisson τ-leap, and optimal methods for s = 3 and s = 5 under the constraints |ψ(z)-1| < 0.1, 0.25 and 0.5 and Table 1 summarises the numerical values for these conditions.

Table 1 Stability regions for methods with bounded relative variance and optimal stability
Figure 1

Stability and relative variance for the different methods. Stability and relative variance functions for the Poisson τ-leap method (solid line) and RK τ-leap methods with optimal stability regions and bounded relative variance (ψ) with 3 stages (dotted line) and 5 stages (dashed line). Regions fulfilling the bounds on ψ are shown in grey. Square dots correspond to relative variances computed from $10^6$ simulations each. (a), (b): Relative variance bounded by 0.1. (c), (d): Relative variance bounded by 0.25. (e), (f): Relative variance bounded by 0.5.

Efficient methods with bounded variance and extended stability

Runge-Kutta methods with a given stability polynomial $R(z)$ are not unique. This is because the stability polynomial only reflects the application of a Runge-Kutta method to a linear problem. Nonlinear problems require many additional order conditions to be satisfied in order for a method to have a certain order of accuracy. Thus many different methods can have the same stability polynomial. Furthermore, we have already seen that the relative variance ψ does not depend directly on $A$ but on $R(z)$, thus making all methods with the same stability function behave identically in terms of stationary variance for linear problems. In order to distinguish between methods with the same stability function we would have to consider more complicated nonlinear chemistry, and this is beyond the scope of this work. However, we have an explicit way of constructing an efficient method that has a given stability polynomial (i.e. of finding values for $b$ and $A$ of the Butcher tableau; see details in Additional file 1). Furthermore, the tableaux built in this way are such that $\beta_s = 1$, $\beta_j = 0$ for $j = 1, \ldots, s - 1$, and $A$ has all its elements set to zero except those on the first subdiagonal. The Runge-Kutta schemes obtained in this way are very natural, can be regarded as fixed point iterations and allow the following efficient reformulation of (17)

$$
\begin{aligned}
Y_1 &= y_n, \\
Y_i &= y_n + \alpha_{i,i-1}\left(\tau f(Y_{i-1}) + d_n\right), \qquad i = 2, \ldots, s, \\
y_{n+1} &= y_n + \tau f(Y_s) + d_n.
\end{aligned}
$$
(29)

It is thus clear that these methods are computationally more efficient than the general case, as they only require $s - 1$ evaluations of the expected step-change $f(\cdot)$ in the internal stages instead of the $s(s - 1)/2$ required in the general framework (17). A collection of these methods has been implemented in a branch of the ByoDyn package, v.5.0 [39].
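The efficient form (29) can likewise be sketched in a few lines of Python (NumPy assumed); this is our sketch, not the ByoDyn implementation.

```python
import numpy as np

def rk_tau_leap_efficient_step(x, nu, propensities, tau, alpha, rng):
    """One step of the efficient s-stage RK tau-leap reformulation (eq. 29).
    alpha holds the subdiagonal entries (alpha_21, ..., alpha_{s,s-1})."""
    a = propensities(x)
    d = nu.T @ (rng.poisson(tau * a) - tau * a)   # compensated Poisson noise d_n
    f = lambda y: nu.T @ propensities(y)          # expected step change f
    Y = x                                         # Y_1 = X_n
    for a_i in alpha:                             # internal stages Y_2, ..., Y_s
        Y = x + a_i * (tau * f(Y) + d)
    return x + tau * f(Y) + d                     # X_{n+1}
```

For the two-stage method derived earlier ($b^T = (0, 1)$, $\alpha_{21} = 0.20096$), alpha is simply the single entry 0.20096, and the step above reproduces (17) for that tableau.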

Numerical results

Reversible isomerisation

We compare the new Runge-Kutta framework to the Poisson τ-leap method on three systems of chemical reactions. The first is the reversible isomerisation test problem (21), for which we have already developed theoretical results. Numerical simulation of the number of molecules of each of the two components in the system was carried out using the different methods discussed in the previous section with $k_1 = k_2 = 10$ ($z = -20\tau$) and $X(0) = (100, 100)^T$. We sampled $10^6$ trajectories for each of the methods and for different fixed τ values. Figure 2 shows a comparison between the true probability density function (PDF) and the histograms of $X_1$ obtained from the different methods for some of the values of τ. Note that the Poisson τ-leap method becomes unstable for τ > 0.1, as does the RK τ-leap method with three stages for τ > 0.4. Figure 1 shows that the stationary variances obtained by the simulations are in exact accordance with the theoretical values derived in the previous section.

Figure 2

Histogram of X 1 in the Reversible isomerisation reaction. Histogram of X1 in the Reversible isomerisation reaction ($10^6$ samples used) solved by the SSA (grey background), Poisson τ-leap (dashed line), and optimal RK τ-leap methods with bounded relative variance. (a) τ = 0.05 (z = -1), Optimal RK τ-leap s = 3, ϵ = 0.1 (solid line) and s = 5, ϵ = 0.1 ("+" marks), (b) τ = 0.4 (z = -8), Optimal RK τ-leap s = 3, ϵ = 0.5 (solid line) and s = 5, ϵ = 0.1 ("+" marks), Poisson τ-leap is unstable for this time step. (c) τ = 0.6 (z = -12), Optimal RK τ-leap s = 5, ϵ = 0.5 ("+" marks), Poisson τ-leap and Optimal RK τ-leap s = 3 are unstable for this time step.

Schlögl reaction

We also consider Schlögl's autocatalytic reaction system [34, 40] to illustrate the accuracy, for nonlinear systems, of the framework presented here, which was developed for the linear case. We use the same set of parameters as Cao et al. [38], for which this system presents a bimodal PDF for the species X in the stationary state. We have also assumed that the non-autocatalytic species are buffered (held constant), which reduces the system to a scalar problem (see Table 2). We have again performed $10^6$ simulations for each method and τ value. Figure 3 shows histograms computed by the SSA, the Poisson τ-leap and the methods with s = 3, 5. Visual inspection of the plots shows a consistent improvement over the original τ-leap method by the multistage RK methods developed here. A more precise comparison of the plots is given in Figure 4, which shows the estimated Kullback-Leibler divergences between the exact PDF ($P_E$) and the PDFs of each of these methods ($P_M$), given by:

Table 2 Details of the Schlögl reaction system
Figure 3

Histogram of X in the Schlögl reaction. Histogram of X in the Schlögl reaction ($10^6$ samples used) solved by the SSA (grey background), Poisson τ-leap (dashed line), and Optimal RK τ-leap methods. (a) τ = 0.4, Optimal RK τ-leap s = 3, ϵ = 0.1 (solid line) and s = 5, ϵ = 0.1 ("+" marks), (b) τ = 0.8, Optimal RK τ-leap s = 3, ϵ = 0.5 (solid line) and s = 5, ϵ = 0.5 ("+" marks).

Figure 4

Kullback-Leibler divergence for the Schlögl reaction. Kullback-Leibler divergence between the exact stationary distribution of X in the Schlögl reaction (estimated from $10^6$ samples solved by the SSA) and the approximate stationary distributions obtained with the Poisson τ-leap (black), Optimal RK s = 3, ϵ = 0.5 (grey lines) and Optimal RK s = 5, ϵ = 0.5 (white). Bars are shown only for the stable method and τ settings. Asterisks denote methods that have a rate of failure above $10^{-3}$.

$$ D(P_E, P_M) = \sum_{x} P_E(x) \log_2\!\left(\frac{P_E(x)}{P_M(x)}\right). $$
(30)
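For completeness, a small helper to estimate (30) from two histograms over the same set of states (Python, NumPy assumed; restricting the sum to states where both estimates are positive is our choice, made to avoid division by zero) might look like:

```python
import numpy as np

def kl_divergence(p_exact, p_method):
    """D(P_E, P_M) = sum_x P_E(x) log2(P_E(x) / P_M(x)), eq. (30).
    Inputs are histogram counts (or probabilities) over the same states."""
    p = np.asarray(p_exact, dtype=float)
    q = np.asarray(p_method, dtype=float)
    p, q = p / p.sum(), q / q.sum()               # normalise to probabilities
    mask = (p > 0) & (q > 0)                      # skip empty bins
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))
```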

The MAPK cascade

Finally, we have tested the performance of our methods on a larger system of chemical reactions with stiffness due to different reaction time scales and species amounts ranging over several orders of magnitude. For this purpose we considered the Huang and Ferrell model for the mitogen-activated protein kinase (MAPK) cascade [41]. This model is available from the BioModels database [42] and consists of 22 species interacting through 30 reaction channels. The set of parameters used here (see Additional file 1 for details) renders the model stiff, with species amounts ranging from none up to $3 \cdot 10^5$ molecules. With the chosen initial conditions the system undergoes a transient change and finally settles down into a stationary state at around t = 150 minutes. We have simulated the model using the SSA (Gillespie's Direct Method), the Poisson τ-leap and the RK methods presented here. To produce fair comparisons, all methods have been rewritten in ANSI C using the Mersenne twister [43] pseudorandom number generator from the GNU Scientific Library. The GNU C Compiler was used to compile the sources with the -O2 optimisation flag. The algorithms were run on an Intel(R) Core(TM)2 Duo Processor E8500 at 3.16 GHz with 6 MB cache. We have run the system to a final time T = 200. Simulations run with the SSA took 61,841 ± 74 seconds.

We have compared the methods in two distinct situations. First we have run them with the same time step $\tau = 5 \cdot 10^{-5}$. In this case, the Poisson τ-leap method took 51.7 ± 0.4 seconds while the Optimal RK τ-leap methods with s = 3 and s = 5 took 86.1 ± 0.4 seconds and 113.9 ± 0.3 seconds, respectively. Hence, at the same time step the RK methods are approximately 66% and 120% slower than the Poisson τ-leap due to the multiple evaluations of the propensity functions per step. However, there is an important difference in the results. The relative variance at the steady state is 1.3 (see Additional file 1) for the Poisson τ-leap, while for both RK τ-leap methods with s = 3 and s = 5 (ϵ = 0.1) it is less than 1.04.

We have then compared these methods when run at their respective maximum time steps such that the relative variance at the stationary state is bounded by 1.1 (estimated from the simulations). The maximum time steps allowed under this constraint were $\tau = 2 \cdot 10^{-5}$ for the Poisson τ-leap, $\tau = 3.5 \cdot 10^{-4}$ for the Optimal RK τ-leap (3, 0.1) and $\tau = 9.5 \cdot 10^{-4}$ for the Optimal RK τ-leap (5, 0.1). With this setting, the runtimes obtained were 111.9 ± 0.7 seconds for the Poisson τ-leap, 15.7 ± 0.06 seconds for the Optimal RK τ-leap (3, 0.1) and 7.8 ± 0.02 seconds for the Optimal RK τ-leap (5, 0.1). Thus, in this case the Poisson τ-leap is approximately 7.1 and 14.3 times slower than the two RK methods, respectively.

Discussion

Biochemical kinetics typically deals with multiscale problems, in which several scales of time, space and concentration simultaneously affect the dynamical behaviour of the system. Thus, the systems biology community is deeply interested in the development of methods that lead to a multiscale view of biochemical systems. As a first step in this direction, we have presented here a new set of methods that considerably expands the classical τ-leap implementation from a stability perspective. The importance of the results shown here embraces not only the increase in computational speed for stochastic simulations, a key element for understanding intrinsically noisy biological systems, but, more importantly, a way to deal with fast reactions in multiscale settings. The methods developed here have been demonstrated on a first example of a stiff system, the classical Schlögl autocatalytic reaction, and can be straightforwardly incorporated into hybrid SSA-SDE-ODE frameworks.

We see from Table 1 that if we require a bound on the equilibrium variance of 0.1 then the Poisson τ-leap method must take $|z| \le \frac{2}{11}$, while for the RK methods the bounds on |z| are approximately 4 and 10, respectively, with s = 3, 5. This is a very considerable improvement and all the more striking given that the same number of Poisson random variables are simulated per step in all cases.

Initially we had hoped that an approach via Chebyshev methods using ideas from ODEs and SDEs applied to the discrete cases would have been fruitful. It turns out that while such methods have good mean behaviour, the variance behaviour is poor. This is because the variance growth function satisfies (24) and an s-stage Chebyshev method would have s - 1 poles and zeros due to the oscillations in the stability function. Similar issues arise even in the damped forms of the Chebyshev formulation. This means that our optimisation approach is the only way of getting good bounds on ψ(z).

Our results on the nonlinear bimodal Schlögl problem show that the RK methods still behave appropriately even on nonlinear problems. For example, from Figure 3 we see that the Poisson τ-leap method is not very accurate with τ = 0.4 and quite poor in picking up the second peak with τ = 0.8. On the other hand the RK methods match the peak quite well, albeit with a slight shift in that peak. Furthermore, numerical results from the MAPK cascade simulations show that our methods can run an order of magnitude faster than the Poisson τ-leap and still give the same accuracy in the results.

Finally, we note that we could extend our RK methods to allow more than one set of Poisson random variables to be simulated per step. We imagine that this would allow even bigger stepsizes but at the cost of taking more simulation time in that the additional Poisson sampling is expensive. We emphasise that although our analysis of these new methods has been given for unimolecular reactions, the simulations of the nonlinear Schlögl reaction and the MAPK cascade indicate that these methods have a more general applicability and we will consider nonlinear analysis via Taylor series expansions in future work.