1 Introduction

Long-range transport of air pollutants in the atmosphere is often studied (as, for example, in [29, 31]) by applying complex mathematical models described by non-linear systems of partial differential equations (PDEs), which, after spatial discretization, lead to huge systems of ordinary differential equations (ODEs). If some splitting procedure is applied, then the chemical reactions together with the deposition and emission terms can be handled separately, as a non-linear system of ODEs, at every grid-point of the domain in which the discretized system of PDEs is defined. We adopt this approach, considering an arbitrary grid-point, in order to explain in a simple manner how good numerical procedures can be designed for the chemical part of a large air pollution model. Such models can also be used to investigate the impact of future climatic changes on the pollution levels (see [30, 32]).

Different chemical schemes can be implemented in the air pollution models. These schemes play a very important role in air pollution studies. The number of chemical species involved in large-scale air pollution models varies as a rule from 20 to about 200 (see, for example, [1, 2, 5,6,7, 10, 11, 15, 16, 19, 22,23,24]).

The use of fewer than 20 chemical species requires crude parameterization of some of the chemical processes and, therefore, the choice of such chemical schemes is in general not advisable when long-range transport of air pollutants in the atmosphere is to be studied.

On the other hand, the application of chemical schemes with more than 200 species leads to huge computational tasks when long-term simulations, such as those reported in [30, 32], are to be carried out by running the air pollution models over many consecutive years.

A chemical scheme containing 56 species [31] will be used in this study. We shall concentrate on the need to increase the computational efficiency of the treatment of the chemical scheme and shall investigate the implementation of some reliable and accurate numerical algorithms for handling the selected scheme. This can successfully be done by separating the chemical scheme from the other parts of the air pollution models. This approach greatly facilitates the search for optimal numerical algorithms for the numerical treatment of the chemical reactions. There is an additional benefit of treating the problem in this way: once the investigation is successfully finished, the chemical scheme together with the efficient algorithms used to handle it numerically can easily be implemented in any large-scale air pollution model. Finally, reliable information about some interesting phenomena, such as diurnal variations of some chemical species, can sometimes be obtained by using only the chemical scheme. This possibility will be demonstrated at the end of this paper.

2 Mathematical description of the problem

The chemical reactions together with the deposition and emissions terms of a large-scale air pollution model can be described mathematically by the following initial value problem for non-linear ordinary differential equations (ODEs):

$$ \begin{aligned} & {\text{y}}^{\prime } = {\text{f(t}},{\text{y)}},\quad {\text{t}} \in [{\text{a}},{\text{b}}],\quad {\text{a}} < {\text{b}},\quad {\text{y}} \in {\text{D}} \subset {\mathbb{R}}^{{\text{s}}} \\ & \quad {\text{and}}\;{\text{y(a)}} =\upeta ,\quad\upeta \in {\mathbb{R}}^{{\text{s}}} \quad {\text{being a given constant}}{.} \\ \end{aligned} $$
(1)

Here \({\mathrm{f}}\) is a given function defined in some domain \({\mathrm{D}}\subset {\mathbb{R}}\times {\mathbb{R}}^{{\mathrm{s}}}\) (it will always be assumed in this paper that \({\mathrm{f}}\) is single-valued in the whole domain \({\mathrm{D}}\)). Further assumptions about the smoothness of the function f will be made later, see Sect. 4.2. If high-order versions of the Richardson Extrapolation (RE) are to be used, then one must also assume that f, and hence y, is sufficiently many times continuously differentiable. The components of the solution vector \({\mathrm{y}}({\mathrm{t}})\in {\mathbb{R}}^{{\mathrm{s}}}\) are the concentrations of the chemical species at time \({\mathrm{t}}\in \left[{{\mathrm{a}}},{{\mathrm{b}}}\right]\). As mentioned above, \({\mathrm{s}}=56\) is used in this paper, but the main ideas are very general and similar results can easily be obtained when other chemical schemes are used (as, for example, the schemes with \({\mathrm{s}}=35\) and \({\mathrm{s}}=168\) from [31]).

Assume that \({{\mathrm{t}}}_{0}={{\mathrm{a}}}\), \({{\mathrm{t}}}_{{\mathrm{n}}}={{\mathrm{t}}}_{{\mathrm{n}}-1}+{{\mathrm{h}}}_{{\mathrm{n}}}\) (where \({{\mathrm{h}}}_{{\mathrm{n}}}>0\) for \({\mathrm{n}}=1, 2, \cdots ,{\mathrm{N}}\)) and \({{\mathrm{t}}}_{\mathrm{N}}={{\mathrm{b}}}\). Denote \({{\mathrm{y}}}_{0}={\mathrm{y}}\left({{\mathrm{a}}}\right)=\upeta \). Then some numerical method for solving systems of ODEs has to be used to calculate successively approximations \({{\mathrm{y}}}_{1}\approx {\mathrm{y}}\left({{\mathrm{t}}}_{1}\right), {{\mathrm{y}}}_{2}\approx {\mathrm{y}}\left({{\mathrm{t}}}_{2}\right), \cdots , {{\mathrm{y}}}_{\mathrm{N}}\approx {\mathrm{y}}({{\mathrm{t}}}_{\mathrm{N}})\) of the exact solution of (1) on some non-equidistant set of grid-points \(\{{{\mathrm{t}}}_{1}, {{\mathrm{t}}}_{2}, \cdots ,{{\mathrm{t}}}_{\mathrm{N}}\}\). Normally, it is desirable to select at every step the largest possible stepsize \({{\mathrm{h}}}_{{\mathrm{n}}}\) for which the approximation \({{\mathrm{y}}}_{{\mathrm{n}}}\) calculated by the selected numerical method satisfies some accuracy requirement, for example \(\Vert {{\mathrm{y}}({{\mathrm{t}}}_{{\mathrm{n}}})-{\mathrm{y}}}_{{\mathrm{n}}}\Vert /\Vert {\mathrm{y}}({{\mathrm{t}}}_{{\mathrm{n}}})\Vert \approx {\mathrm{TOL}}\), where \({\mathrm{TOL}}>0\) is an error tolerance prescribed in advance. Moreover, we say that the order of accuracy of the numerical solution \({{\mathrm{y}}}_{{\mathrm{n}}}\) is p when

$$ {\text{y}}\left( {{\text{t}}_{{\text{n}}} } \right) - {\text{y}}_{{\text{n}}} = {\text{O}}\left( {{\text{h}}^{{\text{p}}} } \right). $$

This means that two important computational tasks are to be resolved:

  (a) it is necessary to select appropriate numerical methods for solving non-linear systems of ODEs by which sufficiently accurate approximations \({{\mathrm{y}}}_{{\mathrm{n}}}\) can be calculated, and

  (b) it is worthwhile to derive and implement reliable error estimators by which the accuracy of the approximations \({{\mathrm{y}}}_{{\mathrm{n}}}, {\mathrm{n}}=1, 2, \cdots ,{\mathrm{N}}\) can be evaluated.

The solution of these two important tasks will be presented and discussed in the next two sections.

3 Selection of appropriate numerical methods for solving ODEs

The choice of numerical methods that can efficiently be used in the treatment of atmospheric chemical schemes is a very difficult task. There are at least three problems that create great difficulties when atmospheric chemical schemes are treated numerically.

The first problem is caused by the fact that the chemical schemes are very badly scaled. This is demonstrated in Table 1, where the maximum and minimum values of four chemical species are given.

Table 1 Typical minimum and maximum values of four of the 56 chemical species in an interval of 24 h (from 12 o’clock on a given day to 12 o’clock on the next day)

The second problem is caused by the fact that some components of the solution vector vary very quickly in some parts of the interval \(\left[{{\mathrm{a}}},{{\mathrm{b}}}\right]\), forming extremely sharp gradients [29, 31]. This is illustrated in Fig. 1, where the diurnal variations of two chemical species are plotted. It is seen that some components of the solution vector \({\mathrm{y}}({\mathrm{t}})\in {\mathbb{R}}^{{\mathrm{s}}}\) vary very quickly in some rather short intervals (containing, as a rule, the transitions from day to night and from night to day).

Fig. 1

Diurnal variation of two chemical species. Each plot shows both the “exact” solution (the solution calculated with a very small stepsize; see the explanations given in Sect. 6) and the numerical solution obtained with the method that will be discussed in Sect. 5

The third problem is the stiffness of the systems of ODEs by which chemical schemes are described mathematically. The fact that (1) is a stiff system of ODEs implies that implicit numerical methods have to be selected and used in the solution process. The application of stable Implicit Runge–Kutta Methods (IRKMs), see [3, 4, 12, 14, 17, 18, 21], is a good choice, and such methods will be discussed in the remaining part of this section.

We selected three IRKMs and used them in the experiments, the results of which will be reported in Sect. 6:

  (a) EULERB (the First Order Backward Differentiation Formula),

  (b) DIRK23 (a two-stage third-order Diagonally Implicit Runge–Kutta Method), and

  (c) FIRK35 (a three-stage fifth-order Fully Implicit Runge–Kutta Method).

3.1 Introduction of EULERB

The first of the three selected numerical methods, EULERB, is the well-known and often used First Order Backward Differentiation Formula (also called the Implicit Euler Method or the Backward Euler Method). EULERB is a one-stage first-order IRKM based on the following formula:

$$ {\text{y}}_{{\text{n}}} = {\text{y}}_{{{\text{n}} - 1}} + {\text{h}}_{{\text{n}}} {\text{f(t}}_{{\text{n}}} ,{\text{ y}}_{{\text{n}}} {)},\quad {\text{n}} \in \{ 1,{ }2,{ } \ldots ,{\text{N}}\} . $$
(2)

The method is very robust and has good stability properties (L-stable, see [14, 18]), but it is not very accurate.
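Since the systems arising from chemical schemes are stiff, the implicit relation (2) is in practice resolved by a Newton iteration at every step. The following sketch is our own illustration (function names and the stiff test matrix are our choices, not from the paper) of how one EULERB step could be implemented:

```python
import numpy as np

def backward_euler_step(f, jac, t_n, y_prev, h, tol=1e-12, max_iter=50):
    """One EULERB step: solve y = y_prev + h*f(t_n, y), formula (2),
    by Newton iteration starting from the previous solution."""
    y = y_prev.copy()
    for _ in range(max_iter):
        residual = y - y_prev - h * f(t_n, y)
        newton_matrix = np.eye(len(y)) - h * jac(t_n, y)
        dy = np.linalg.solve(newton_matrix, -residual)
        y = y + dy
        if np.linalg.norm(dy) < tol:
            break
    return y

# Illustrative stiff linear test problem y' = A y with one fast and
# one slow mode (eigenvalues -1000 and -1).
A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
f = lambda t, y: A @ y
jac = lambda t, y: A

h = 0.1
y1 = backward_euler_step(f, jac, h, np.array([1.0, 1.0]), h)
# For a linear system this equals (I - h*A)^{-1} y0.
```

For the linear test problem the Newton iteration converges in a single step; for the non-linear chemical scheme several iterations per time-step are needed.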

3.2 Introduction of DIRK23

The second of the three selected numerical methods, DIRK23, is a two-stage third-order Diagonally Implicit Runge–Kutta Method, which is based on the following formulae:

$$ {\text{y}}_{{\text{n}}} = {\text{y}}_{{{\text{n}} - 1}} + 0.5{\text{h}}_{{\text{n}}} \left( {{\text{k}}_{1}^{{\text{n}}} + {\text{k}}_{2}^{{\text{n}}} } \right), \quad {\text{n}} \in \{ 1,{ }2, \ldots ,{\text{N}}\} , $$
(3)
$$ {\text{k}}_{1}^{{\text{n}}} = {\text{f}}\left( {{\text{t}}_{{{\text{n}} - 1}} + \frac{3 + \sqrt 3 }{6}{\text{h}}_{{\text{n}}} ,\quad {\text{y}}_{{{\text{n}} - 1}} + \frac{3 + \sqrt 3 }{6}{\text{h}}_{{\text{n}}} {\text{k}}_{1}^{{\text{n}}} } \right), $$
(4)
$$ {\text{k}}_{2}^{{\text{n}}} = {\text{f}}\left( {{\text{t}}_{{{\text{n}} - 1}} + \frac{3 - \sqrt 3 }{6}{\text{h}}_{{\text{n}}} ,\quad {\text{y}}_{{{\text{n}} - 1}} - \frac{\sqrt 3 }{3}{\text{h}}_{{\text{n}}} {\text{k}}_{1}^{{\text{n}}} + \frac{3 + \sqrt 3 }{6}{\text{h}}_{{\text{n}}} {\text{k}}_{2}^{{\text{n}}} } \right). $$
(5)

DIRK23 has good stability properties (being an A-stable method, [18]) and is more accurate than EULERB, but it is also more time-consuming. At every time-step it is necessary to solve successively the two non-linear systems of algebraic equations (4) and (5), each of which contains \({\mathrm{s}}\) equations, while only one such system has to be solved when EULERB is used.
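For a linear system \(\mathrm{y}'=\mathrm{Ay}\) the two stage systems (4) and (5) reduce to linear solves with the same matrix, which makes the structure of a DIRK23 step easy to sketch. The code below is our own illustration under that linearity assumption; for the non-linear chemical scheme each stage requires a Newton iteration instead:

```python
import numpy as np

g = (3.0 + np.sqrt(3.0)) / 6.0   # the diagonal coefficient (3 + sqrt(3))/6

def dirk23_step_linear(A, y, h):
    """One DIRK23 step, formulas (3)-(5), for the linear case y' = A y."""
    I = np.eye(len(y))
    # Stage 1, formula (4): (I - g*h*A) k1 = A*y
    k1 = np.linalg.solve(I - g * h * A, A @ y)
    # Stage 2, formula (5): (I - g*h*A) k2 = A*(y - (sqrt(3)/3)*h*k1)
    k2 = np.linalg.solve(I - g * h * A, A @ (y - (np.sqrt(3.0) / 3.0) * h * k1))
    return y + 0.5 * h * (k1 + k2)   # formula (3)

# Third-order check on y' = -y, y(0) = 1, integrated to t = 1:
# halving the stepsize should reduce the error by about 2^3 = 8.
errs = []
for h in (0.1, 0.05):
    y = np.array([1.0])
    for _ in range(round(1.0 / h)):
        y = dirk23_step_linear(np.array([[-1.0]]), y, h)
    errs.append(abs(y[0] - np.exp(-1.0)))
```

The observed error ratio close to 8 is the practical signature of the third order of accuracy claimed above.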

3.3 Introduction of FIRK35

The third of the three selected numerical methods, FIRK35, is a three-stage fifth-order Fully Implicit Runge–Kutta Method based on the following formulae:

$$ {\text{y}}_{{\text{n}}} = {\text{y}}_{{{\text{n}} - 1}} + \frac{16 - \sqrt 6 }{{36}}{\text{h}}_{{\text{n}}} {\text{k}}_{1}^{{\text{n}}} + \frac{16 + \sqrt 6 }{{36}}{\text{h}}_{{\text{n}}} {\text{k}}_{2}^{{\text{n}}} + \frac{1}{9}{\text{h}}_{{\text{n}}} {\text{k}}_{3}^{{\text{n}}} { }, {\text{ n}} \in \left\{ {1,{ }2, \ldots ,{\text{N}}} \right\},{ } $$
(6)
$$ {\text{k}}_{1}^{{\text{n}}} = {\text{f}}\left( {{\text{t}}_{{{\text{n}} - 1}} + \frac{4 - \sqrt 6 }{{10}}{\text{h}}_{{\text{n}}} ,\quad {\text{y}}_{{{\text{n}} - 1}} + \frac{88 - 7\sqrt 6 }{{360}}{\text{h}}_{{\text{n}}} {\text{k}}_{1}^{{\text{n}}} + \frac{296 - 169\sqrt 6 }{{1800}}{\text{h}}_{{\text{n}}} {\text{k}}_{2}^{{\text{n}}} + \frac{ - 2 + 3\sqrt 6 }{{225}}{\text{h}}_{{\text{n}}} {\text{k}}_{3}^{{\text{n}}} } \right), $$
(7)
$$ {\text{k}}_{2}^{{\text{n}}} = {\text{f}}\left( {{\text{t}}_{{{\text{n}} - 1}} + \frac{4 + \sqrt 6 }{{10}}{\text{h}}_{{\text{n}}} ,\quad {\text{y}}_{{{\text{n}} - 1}} + \frac{296 + 169\sqrt 6 }{{1800}}{\text{h}}_{{\text{n}}} {\text{k}}_{1}^{{\text{n}}} + \frac{ 88 + 7\sqrt 6 }{{360}}{\text{h}}_{{\text{n}}} {\text{k}}_{2}^{{\text{n}}} + \frac{ - 2 - 3\sqrt 6 }{{225}}{\text{h}}_{{\text{n}}} {\text{k}}_{3}^{{\text{n}}} } \right), $$
(8)
$$ {\text{k}}_{3}^{{\text{n}}} = {\text{f}}\left( {{\text{t}}_{{{\text{n}} - 1}} + {\text{h}}_{{\text{n}}} ,\quad {\text{y}}_{{{\text{n}} - 1}} + \frac{16 - \sqrt 6 }{{36}}{\text{h}}_{{\text{n}}} {\text{k}}_{1}^{{\text{n}}} + \frac{16 + \sqrt 6 }{{36}}{\text{h}}_{{\text{n}}} {\text{k}}_{2}^{{\text{n}}} + \frac{1}{9}{\text{h}}_{{\text{n}}} {\text{k}}_{3}^{{\text{n}}} } \right). $$
(9)

FIRK35 is much more accurate than EULERB and DIRK23. It has excellent stability properties, being L-stable, [14, 18]. However, it is much more time-consuming than the other two numerical methods, because at every step \({\mathrm{n}}\) one has to solve the system of \(3{\mathrm{s}}\) non-linear algebraic equations formed by (7), (8) and (9), while systems of only \({\mathrm{s}}\) non-linear algebraic equations have to be solved at each step when EULERB and DIRK23 are used.
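A FIRK35 step can be sketched in the same spirit for a linear system \(\mathrm{y}'=\mathrm{Ay}\): the three coupled stage systems (7)-(9) then form one linear system of dimension \(3\mathrm{s}\), which is exactly why the method is so much more expensive per step. This is our own illustration (helper names are ours); for non-linear f a Newton iteration on the full \(3\mathrm{s}\) system would be needed:

```python
import numpy as np

r6 = np.sqrt(6.0)
B = np.array([  # the coefficients of the stage formulas (7)-(9)
    [(88 - 7*r6)/360,     (296 - 169*r6)/1800, (-2 + 3*r6)/225],
    [(296 + 169*r6)/1800, (88 + 7*r6)/360,     (-2 - 3*r6)/225],
    [(16 - r6)/36,        (16 + r6)/36,        1.0/9.0],
])
w = B[2]  # the last stage row coincides with the weights in (6)

def firk35_step_linear(A, y, h):
    """One FIRK35 step for y' = A y: the stage equations are solved
    at once as the 3s x 3s system (I - h*kron(B, A)) k = 1 (x) (A y)."""
    s = len(y)
    K = np.linalg.solve(np.eye(3 * s) - h * np.kron(B, A), np.tile(A @ y, 3))
    k = K.reshape(3, s)
    return y + h * (w[0] * k[0] + w[1] * k[1] + w[2] * k[2])

# Fifth-order check on y' = -y, integrated to t = 1: halving the
# stepsize should reduce the error by about 2^5 = 32.
errs = []
for h in (0.2, 0.1):
    y = np.array([1.0])
    for _ in range(round(1.0 / h)):
        y = firk35_step_linear(np.array([[-1.0]]), y, h)
    errs.append(abs(y[0] - np.exp(-1.0)))
```

The error ratio near 32 confirms the fifth order; the price is a dense \(3\mathrm{s}\times 3\mathrm{s}\) solve per step instead of the \(\mathrm{s}\times \mathrm{s}\) solves of EULERB and DIRK23.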

4 Applying advanced versions of the Richardson Extrapolation

Any of the IRKMs introduced in the previous section can be used in combination with the nine advanced versions of the Richardson Extrapolation that were introduced and studied in [33, 34]. These versions can be applied in order to increase the order of accuracy and/or to obtain reliable error estimations.

The theory of the Advanced Richardson Extrapolation can be based on four important theorems, which are listed below without proofs (the proofs can be found in [33, 34]).

It should be mentioned here that all results presented in [33, 34] and formulated below are valid for one-step methods for solving systems of ODEs. In particular, these theorems are valid for Runge–Kutta methods, because the class of Runge–Kutta methods is a sub-class of the class of one-step methods.

The first theorem tells us how to obtain each of the nine advanced versions of the Richardson Extrapolation.

The second theorem provides results related to the accuracy of the advanced versions of the Richardson Extrapolation.

The third theorem establishes rules for the calculation of reliable error estimations by using the advanced versions of the Richardson Extrapolation.

The fourth theorem tells us how to obtain information about the stability properties of the different versions of the Richardson Extrapolation.

4.1 Derivation of nine advanced versions of the Richardson Extrapolation

Theorem 1

Let \({{\mathrm{y}}}_{0}^{\left[{\mathrm{q}}\right]}={\mathrm{y}}\left({{\mathrm{a}}}\right)=\upeta \) for \(\forall {\mathrm{q}}\in \left\{0, 1, \dots , 8\right\}\). Assume that some integer \({\mathrm{q}}\in \left\{0, 1, \dots , 8\right\}\) has been selected and that \({{\mathrm{y}}}_{{\mathrm{n}}-1}^{[{\mathrm{q}}]}\in {\mathbb{R}}^{{\mathrm{s}}}\) is available. Calculate first \({\mathrm{q}}+2\) auxiliary vectors \({\mathrm{z}}_{{\mathrm{n}}}^{[0]}\), \({\mathrm{z}}_{{\mathrm{n}}}^{[1]}\), …, \({\mathrm{z}}_{{\mathrm{n}}}^{[{\mathrm{q}}+1]}\) in \({\mathbb{R}}^{{\mathrm{s}}}\) by using \({{\mathrm{y}}}_{{\mathrm{n}}-1}^{\left[{\mathrm{q}}\right]}\) together with any one-step method and by performing respectively \(1, 2, \dots , {2}^{{\mathrm{q}}+1}\) steps with stepsizes \({\mathrm{h}}\), \({\mathrm{h}}/2\), …, \({\mathrm{h}}/{2}^{{\mathrm{q}}+1}\). Then vector \({{\mathrm{y}}}_{{\mathrm{n}}}^{[{\mathrm{q}}]}\) (the \({\mathrm{q}}\)-Times Repeated Richardson Extrapolation, the \(q\) TRRE) can be obtained by using the following two formulae:

$$ {\text{y}}_{{\text{n}}}^{{[{\text{q}}]}} = \frac{{{\text{P}}_{{\text{n}}}^{{[{\text{q}}]}} \left( {{\text{z}}_{{\text{n}}}^{{[{\text{q + 1}}]}} ,{\text{z}}_{{\text{n}}}^{{[{\text{q}}]}} , \ldots ,{\text{z}}_{{\text{n}}}^{[0]} } \right)}}{{{\text{S}}^{{[{\text{q}}]}} }} , $$
(10)
$$ {\text{S}}^{{[{\text{q}}]}} = \mathop \prod \limits_{{{\text{j}} = 1}}^{{{\text{q}} + 1}} \left( {2^{{{\text{p}} + {\text{j}} - 1}} - 1} \right). $$
(11)

Here \({\mathrm{S}}^{[{\mathrm{q}}]}\) is considered in its expanded form (compare with the denominators of the examples (12)–(14) below). The first term in \({\mathrm{P}}_{{\mathrm{n}}}^{[{\mathrm{q}}]}\) is obtained by multiplying the first term in \({\mathrm{S}}^{[{\mathrm{q}}]}\) by \({\mathrm{z}}_{{\mathrm{n}}}^{[{\mathrm{q}}+1]}\), the second term in \({\mathrm{P}}_{{\mathrm{n}}}^{[{\mathrm{q}}]}\) is obtained by multiplying the second term in \({\mathrm{S}}^{[{\mathrm{q}}]}\) by \({\mathrm{z}}_{{\mathrm{n}}}^{[{\mathrm{q}}]}\), and so on. The last term in \({\mathrm{P}}_{{\mathrm{n}}}^{[{\mathrm{q}}]}\) is obtained by multiplying the last term in \({\mathrm{S}}^{[{\mathrm{q}}]}\) by \({\mathrm{z}}_{{\mathrm{n}}}^{[0]}\).

Note that any of the three IRKMs discussed in the previous section can be used in the calculation of the approximations \({\mathrm{z}}_{{\mathrm{n}}}^{[0]}\), \({\mathrm{z}}_{{\mathrm{n}}}^{[1]}\), …, \({\mathrm{z}}_{{\mathrm{n}}}^{[{\mathrm{q}}+1]}\), but one can also apply any other IRKM or even an arbitrary one-step method, as stated in Theorem 1.

Three examples for advanced Richardson Extrapolation formulae are given below (all other formulae can be found in [33, 34]).


Classical Richardson Extrapolation, CRE:

$$ {\text{y}}_{{\text{n}}}^{\left[ 0 \right]} = \frac{{{\text{P}}_{{\text{n}}}^{\left[ 0 \right]} }}{{{\text{S}}^{\left[ 0 \right]} }} = \frac{{2^{{\text{p}}} {\text{z}}_{{\text{n}}}^{\left[ 1 \right]} - {\text{z}}_{{\text{n}}}^{\left[ 0 \right]} }}{{2^{{\text{p}}} - 1}}. $$
(12)

Repeated Richardson Extrapolation, RRE:

$$ {\text{y}}_{{\text{n}}}^{\left[ 1 \right]} = \frac{{{\text{P}}_{{\text{n}}}^{\left[ 1 \right]} }}{{{\text{S}}^{\left[ 1 \right]} }} = \frac{{2^{{2{\text{p}} + 1}} {\text{z}}_{{\text{n}}}^{\left[ 2 \right]} - 3\left( {2^{{\text{p}}} } \right){\text{z}}_{{\text{n}}}^{\left[ 1 \right]} + {\text{z}}_{{\text{n}}}^{\left[ 0 \right]} }}{{2^{{2{\text{p}} + 1}} - 3\left( {2^{{\text{p}}} } \right) + 1}}. $$
(13)

Two-Times Repeated Richardson Extrapolation, 2TRRE:

$$ {\text{y}}_{{\text{n}}}^{\left[ 2 \right]} = \frac{{{\text{P}}_{{\text{n}}}^{\left[ 2 \right]} }}{{{\text{S}}^{\left[ 2 \right]} }} = \frac{{2^{{3{\text{p}} + 3}} {\text{z}}_{{\text{n}}}^{\left[ 3 \right]} - 7\left( {2^{{2{\text{p}} + 1}} } \right){\text{z}}_{{\text{n}}}^{\left[ 2 \right]} + 7\left( {2^{{\text{p}}} } \right){\text{z}}_{{\text{n}}}^{\left[ 1 \right]} - {\text{z}}_{{\text{n}}}^{\left[ 0 \right]} }}{{2^{{3{\text{p}} + 3}} - 7\left( {2^{{2{\text{p}} + 1}} } \right) + 7\left( {2^{{\text{p}}} } \right) - 1}}. $$
(14)
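The construction in Theorem 1 can be checked programmatically: expand the product (11) as a polynomial in \(\mathrm{x}={2}^{{\mathrm{p}}}\) and pair its terms, highest power first, with \({\mathrm{z}}_{{\mathrm{n}}}^{[{\mathrm{q}}+1]}, \dots , {\mathrm{z}}_{{\mathrm{n}}}^{[0]}\). The sketch below is our own code (the helper name is ours) and reproduces the weights appearing in (12)–(14):

```python
import numpy as np

def qtrre_weights(p, q):
    """Weights w such that y_n^[q] = sum_k w[k] * z_n^[q+1-k].
    S^[q] from (11) is the product of the factors 2^(j-1)*x - 1 with
    x = 2^p; each expanded term, divided by the value of S^[q],
    multiplies one of the z approximations (finest first)."""
    poly = np.array([1.0])                              # polynomial in x, highest power first
    for j in range(1, q + 2):
        poly = np.convolve(poly, [2.0**(j - 1), -1.0])  # multiply by 2^(j-1)*x - 1
    terms = poly * (2.0**p) ** np.arange(len(poly) - 1, -1, -1)
    return terms / terms.sum()                          # terms.sum() equals S^[q]

print(qtrre_weights(1, 0))  # CRE (12) with p = 1: weights 2 and -1
print(qtrre_weights(1, 1))  # RRE (13) with p = 1: weights 8/3, -2, 1/3
```

For \({\mathrm{q}}=2\), \({\mathrm{p}}=1\) the routine returns the weights \(64/21, -56/21, 14/21, -1/21\), in agreement with (14).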

4.2 Accuracy of the advanced versions of the Richardson Extrapolation

Theorem 2

Consider an arbitrary one-step method for solving non-linear systems of ODEs \(\mathrm{dy}/\mathrm{dt}={\mathrm{f}}({\mathrm{t}},{\mathrm{y}})\). Assume that

  (a) the order of accuracy of the selected one-step method is \({\mathrm{p}}\ge 1\), and

  (b) it is used together with a \(q\) TRRE where \(0\le {\mathrm{q}}\le 8\).

Then the order of accuracy of the approximation \({{\mathrm{y}}}_{{\mathrm{n}}}^{[{\mathrm{q}}]}\) (where vector \({{\mathrm{y}}}_{{\mathrm{n}}}^{[{\mathrm{q}}]}\) is calculated as shown in Theorem 1) is at least \({\mathrm{p}}+{\mathrm{q}}+1\) when the function \({\mathrm{f}}\) in the right-hand-side of the system of ODEs \(\mathrm{dy}/\mathrm{dt}={\mathrm{f}}({\mathrm{t}},{\mathrm{y}})\) is at least \({\mathrm{p}}+{\mathrm{q}}+1\) times continuously differentiable.

The nine advanced versions of the Richardson Extrapolation, the abbreviations used in this paper (as well as in [33, 34]) and their orders of accuracy, according to Theorem 2, are listed in Table 2.

Table 2 The nine versions of the Richardson Extrapolation, the abbreviations that will be used in the remaining part of this paper and the orders of accuracy are given in this table

4.3 Error estimators for the advanced versions of the Richardson Extrapolation

Theorem 3

Assume that \({\mathrm{z}}_{{\mathrm{n}}}^{[0]}\), \({\mathrm{z}}_{{\mathrm{n}}}^{[1]}\), \({{\mathrm{y}}}_{{\mathrm{n}}}^{\left[{\mathrm{q}}-1\right]}\) and \({{\mathrm{y}}}_{{\mathrm{n}}}^{\left[{\mathrm{q}}\right]}\) are calculated as shown in Theorem 1, and that the order of accuracy of the underlying one-step method is \({\mathrm{p}}\). Then error estimations at step \({\mathrm{n}}\), where \({\mathrm{n}}=1, 2, \dots ,{\mathrm{N}}\), can be obtained by using the following two formulae:

$$ {\text{EST}}_{{\text{n}}}^{{\left[ {{\text{p}}0} \right]}} = \frac{{{\text{z}}_{{\text{n}}}^{\left[ 1 \right]} - {\text{z}}_{{\text{n}}}^{\left[ 0 \right]} }}{{2^{{\text{p}}} - 1}} $$
(15)

and

$$ {\text{EST}}_{{\text{n}}}^{{\left[ {{\text{pq}}} \right]}} = {\text{y}}_{{\text{n}}}^{{\left[ {\text{q}} \right]}} - {\text{y}}_{{\text{n}}}^{{\left[ {{\text{q}} - 1} \right]}} ,\quad {\text{q}} \in \left\{ {1, 2, \ldots , 8} \right\}. $$
(16)

Remark 1

Formula (16) expresses the general rule used in the calculation of \({\mathrm{EST}}_{{\mathrm{n}}}^{\left[\mathrm{pq}\right]}\), \({\mathrm{q}}\in \left\{1, 2, \dots , 8\right\}\), but in practical computations there is no need to calculate the vector \({{\mathrm{y}}}_{{\mathrm{n}}}^{\left[{\mathrm{q}}-1\right]}\) explicitly. The difference in the right-hand side of (16) can easily be expressed, as in (15), in terms of the vectors \({\mathrm{z}}_{{\mathrm{n}}}^{\left[\mathrm{k}\right]}\), \(\mathrm{k}=0, 1, \dots ,\mathrm{ q}+1\), which must in any case be calculated and used in the computation of \({{\mathrm{y}}}_{{\mathrm{n}}}^{\left[{\mathrm{q}}\right]}\), see Theorem 1.

Two examples, the error estimations for \({\mathrm{q}}=1\) and \({\mathrm{q}}=2\), are given below. All formulae related to the error estimations when different versions of the Richardson Extrapolation are used with \({\mathrm{q}}>2\) are given in [33].

Error estimation for the Repeated Richardson Extrapolation, RRE:

$$ {\text{EST}}_{{\text{n}}}^{{\left[ {{\text{p}}1} \right]}} = \frac{{2^{{\text{p}}} {\text{z}}_{{\text{n}}}^{\left[ 2 \right]} - \left( {2^{{\text{p}}} + 1} \right){\text{z}}_{{\text{n}}}^{\left[ 1 \right]} + {\text{z}}_{{\text{n}}}^{\left[ 0 \right]} }}{{2^{{2{\text{p}} + 1}} - 3\left( {2^{{\text{p}}} } \right) + 1}}. $$
(17)

Error estimation for the Two-Times Repeated Richardson Extrapolation, 2TRRE:

$$ {\text{EST}}_{{\text{n}}}^{{\left[ {{\text{p}}2} \right]}} = \frac{{2^{{2{\text{p}} + 1}} {\text{z}}_{{\text{n}}}^{\left[ 3 \right]} - \left( {2^{{2{\text{p}} + 1}} + 3 \cdot 2^{{\text{p}}} } \right){\text{z}}_{{\text{n}}}^{\left[ 2 \right]} + \left( {3 \cdot 2^{{\text{p}}} + 1} \right){\text{z}}_{{\text{n}}}^{\left[ 1 \right]} - {\text{z}}_{{\text{n}}}^{\left[ 0 \right]} }}{{2^{{3{\text{p}} + 3}} - 7\left( {2^{{2{\text{p}} + 1}} } \right) + 7\left( {2^{{\text{p}}} } \right) - 1}} . $$
(18)
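The estimators translate directly into code. The scalar sketch below is our own (in the real computations the z's are vectors in \({\mathbb{R}}^{{\mathrm{s}}}\)); it also checks the consistency of (17) with the rule (16) for \({\mathrm{q}}=1\), \({\mathrm{p}}=1\), where, as we read Remark 1, \({{\mathrm{y}}}_{{\mathrm{n}}}^{\left[0\right]}\) is the Classical Richardson Extrapolation applied to the two finest approximations \({\mathrm{z}}_{{\mathrm{n}}}^{[2]}\) and \({\mathrm{z}}_{{\mathrm{n}}}^{[1]}\):

```python
def est_p0(z1, z0, p):
    """Formula (15)."""
    return (z1 - z0) / (2.0**p - 1.0)

def est_p1(z2, z1, z0, p):
    """Formula (17)."""
    return ((2.0**p) * z2 - (2.0**p + 1.0) * z1 + z0) / \
           (2.0**(2*p + 1) - 3.0 * (2.0**p) + 1.0)

def est_p2(z3, z2, z1, z0, p):
    """Formula (18)."""
    num = (2.0**(2*p + 1) * z3 - (2.0**(2*p + 1) + 3.0 * 2.0**p) * z2
           + (3.0 * 2.0**p + 1.0) * z1 - z0)
    den = 2.0**(3*p + 3) - 7.0 * 2.0**(2*p + 1) + 7.0 * (2.0**p) - 1.0
    return num / den

# Consistency with (16) for q = 1, p = 1: EST = y^[1] - y^[0].
z0, z1, z2 = 1.0, 2.0, 4.0
y_rre = (8.0 * z2 - 6.0 * z1 + z0) / 3.0   # (13) with p = 1
y_cre = 2.0 * z2 - z1                      # (12) with p = 1 on z^[2], z^[1]
print(est_p1(z2, z1, z0, 1), y_rre - y_cre)  # both are 1.0
```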

4.4 Stability properties of the advanced versions of the Richardson Extrapolation

Theorem 4

If \(0\le {\mathrm{q}}\le 8\) and \(\nu \in {\mathbb{C}}^{-}\), then the value of the stability function of the \({\mathrm{qTRRE}}\) can be calculated by using the following formula:

$$ {\text{R}}^{{\left[ {\text{q}} \right]}} \left( {\upnu } \right) = \frac{{{\overline{\text{P}}}^{{\left[ {\text{q}} \right]}} \left( {{\text{R}}\left( {\upnu } \right)} \right)}}{{{\text{S}}^{{\left[ {\text{q}} \right]}} }} , $$
(19)

where \({\mathrm{S}}^{[{\mathrm{q}}]}\) is defined by (11) and \(\mathrm{R}\left(\upnu \right)\) is the value of the stability function of the underlying numerical method for solving systems of ODEs.

The formulae for all functions \({\overline{\mathrm{P}} }^{\left[{\mathrm{q}}\right]}\left(\mathrm{R}\left(\upnu \right)\right)\), \({\mathrm{q}}\in \left\{0, 1,\dots , 8\right\}\), are listed in [34]. The important point is that if we know the value \(\mathrm{R}\left(\upnu \right)\) of the stability function of the underlying one-step method for some \(\upnu \in {\mathbb{C}}^{-}\), then we can calculate, by using (19), the values of the stability functions \({\mathrm{R}}^{\left[{\mathrm{q}}\right]}\left(\upnu \right)\) of all nine advanced versions of the Richardson Extrapolation, i.e. for all \({\mathrm{q}}\in \left\{0, 1,\dots , 8\right\}\).

There are many different stability definitions (see [3, 4, 13, 14, 17, 21]). The result presented in (19) is valid for any of the stability functions used in the different definitions. However, the classical definition of absolute stability, introduced by G. Dahlquist in 1963, [9], is used in all calculations discussed in the remaining part of this section. Some results from these calculations are presented in Fig. 2. It is explained below how these results were obtained.

Fig. 2

Part of the absolute stability region of the three-stage fifth-order Fully Implicit Runge–Kutta Method (FIRK35) combined with the Seven Times Repeated Richardson Extrapolation (7TRRE). There are only some points in the small square near the origin of the coordinate system where \(\left|{\mathrm{R}}^{\left[7\right]}\left(\upnu \right)\right|\ge 1\). This means that instability may occur if the Jacobian matrix of function \({\mathrm{f}}\) of the system \(\mathrm{dy}/\mathrm{dt}={\mathrm{f}}({\mathrm{t}},{\mathrm{y}})\) has eigenvalues close to the origin of the coordinate system (Color figure online)

If we calculate the values of \({\mathrm{R}}^{[{\mathrm{q}}]}(\upnu )\) for some \({\mathrm{q}}\in \left\{0, 1,\dots , 8\right\}\) on a sufficiently dense regular set of points \(\upnu \in {\mathbb{C}}^{-}\), then we shall be able to obtain reliable information about the stability of the qTRRE in the continuous domain containing this set of points.

The idea sketched in the previous paragraph was implemented in the following way. Consider the square \(\mathrm{SQ}\) in the negative part of the complex plane whose vertices have the coordinates \(\left(0.0, 0.0\right), \left(0.0, {10}^{5}\mathrm{i}\right), \left(-{10}^{5}, {10}^{5}\mathrm{i}\right), \left(-{10}^{5}, 0.0\right)\). Discretize the square \(\mathrm{SQ}\) by using an equidistant set \(\mathrm{PSQ}\) of grid-points on the lines parallel to the coordinate axes such that the distance between two neighbouring grid-points (either in the vertical direction or in the horizontal direction) is \(\mathrm{d}=0.001\). It is clear that the total number of grid-points in \(\mathrm{PSQ}\) is \(\mathrm{O}({10}^{16})\). The values of the function \({\mathrm{R}}^{\left[{\mathrm{q}}\right]}\left(\upnu \right)\) were calculated for all \(\upnu \in {\mathrm{PSQ}}\) by using each of the three underlying Implicit Runge–Kutta Methods (IRKMs) from Sect. 3 with all \({\mathrm{q}}\in \left\{0, 1,\dots , 8\right\}\), i.e. with all nine advanced versions of the Richardson Extrapolation discussed in Sect. 4. The results from the 27 runs (the three underlying methods for solving ODEs, each combined with all nine versions of the Richardson Extrapolation) were very similar. One should expect the computations to remain stable when the condition \(\left|{\mathrm{R}}^{\left[{\mathrm{q}}\right]}\left(\upnu \right)\right|<1.0\) is satisfied. This condition was fulfilled in all runs for all points in the set \({\mathrm{PSQ}}\), except for a few points near the origin of the coordinate system.

One example, which illustrates the conclusion made above, is given in Fig. 2. The plot was obtained by combining the three-stage fifth-order Fully Implicit Runge–Kutta Method, FIRK35, with the Seven-Times Repeated Richardson Extrapolation, the 7TRRE. It shows the domain where one should expect the computational process to remain stable, because \(\left|{\mathrm{R}}^{\left[7\right]}\left(\upnu \right)\right|<1.0\) for all points in this domain. The inequality \(\left|{\mathrm{R}}^{\left[7\right]}\left(\upnu \right)\right|\ge 1.0\) holds only for a few points located near the origin of the coordinate system. This means that one must be careful if the Jacobian matrix of the function \({\mathrm{f}}\) has small eigenvalues with negative real parts that are close to the imaginary axis.

As mentioned above, results very similar to those shown in Fig. 2 were obtained in the other 26 runs.

It should also be mentioned that the inequality \(\left|{\mathrm{R}}^{\left[0\right]}\left(\upnu \right)\right|<1.0\) is satisfied for \(\forall\upnu \in \mathrm{PSQ}\) when the Classical Richardson Extrapolation, CRE, is combined with EULERB or with DIRK23. This indicates that the combinations of the CRE with EULERB and with DIRK23 are stable in the whole square \({\mathrm{SQ}}\).
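This scan is easy to reproduce in a coarse form for the cheapest pair, EULERB with the CRE (\({\mathrm{p}}=1\), \({\mathrm{q}}=0\)). The stability function of EULERB is \(\mathrm{R}(\upnu )=1/(1-\upnu )\); the run with stepsize \(\mathrm{h}/2\) performs two steps, so its stability factor is \(\mathrm{R}(\upnu /2{)}^{2}\), and (12) then gives \({\mathrm{R}}^{[0]}(\upnu )=2\mathrm{R}(\upnu /2{)}^{2}-\mathrm{R}(\upnu )\). The grid in our sketch is, of course, far coarser than the set \(\mathrm{PSQ}\) with \(\mathrm{d}=0.001\) used in the actual computations:

```python
import numpy as np

def R_euler(nu):
    """Stability function of EULERB."""
    return 1.0 / (1.0 - nu)

def R_cre_euler(nu):
    """Stability function (19) of CRE + EULERB (p = 1, q = 0):
    the h/2 run takes two steps, hence the squared factor."""
    return 2.0 * R_euler(nu / 2.0)**2 - R_euler(nu)

# Coarse analogue of the grid PSQ in the square with vertices
# (0, 0), (0, 1e5 i), (-1e5, 1e5 i), (-1e5, 0); the imaginary axis
# itself is avoided.
re = np.linspace(-1.0e5, -1.0e-3, 400)
im = np.linspace(0.0, 1.0e5, 400)
nu = re[None, :] + 1j * im[:, None]
maxmod = np.abs(R_cre_euler(nu)).max()
print(maxmod)  # stays below 1 on the whole grid
```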

It must be emphasized that the investigation of the stability of the advanced versions of the Richardson Extrapolation is very important and must always be carried out. The following fact explains why: the combination of the Classical Richardson Extrapolation, the CRE, with the well-known Trapezoidal Rule (which is A-stable) results in an unstable numerical method, see [9].

5 Development of a variable stepsize variable formula method

It is clear (see the last column in Table 2) that the accuracy of the computational process can be increased very considerably when the value of the parameter q is increased. For example, the order of accuracy becomes \(14\) when the 8TRRE is used together with FIRK35. However, there is a price to pay for the high accuracy: the computational work also increases very considerably. This explains why it is worthwhile to carry out the computations in an attempt to keep both the error of the calculated solution and the number of computational operations sufficiently small. This aim can be achieved by selecting, at each step \({\mathrm{n}}\in \left\{1, 2,\dots ,{\mathrm{N}}\right\}\) of the computational process, the largest possible stepsize \({{\mathrm{h}}}_{{\mathrm{n}}}\) and the qTRRE version with the smallest possible value of the parameter q \(\in \left\{0, 1,\dots , 8\right\}\), under the essential requirement that the computational error remains sufficiently small (according to a prescribed error tolerance \({\mathrm{TOL}}>0\)). This means that it is necessary to develop a Variable Stepsize Variable Formula Method (a VSVFM) in an attempt to optimize the computational process.

VSVFMs have been studied in many papers. Some ideas from [25,26,27,28] are used in the derivation of the algorithm described below.

The major rules used in our Variable Stepsize Variable Formula Method (VSVFM), which is based on the application of any of the three underlying IRKMs presented in Sect. 3 together with the nine advanced versions of the Richardson Extrapolation from Sect. 4, are sketched below.

A key parameter \({\mathrm{RATIO}}\) is calculated at every time-step by using the formula:

$$ {\text{RATIO}} =\updelta \left( {\frac{{{\text{TOL}}}}{{\left\| {{\text{EST}}_{{\text{n}}}^{{\left[ {{\text{pq}}} \right]}} } \right\|}}} \right)^{{\frac{1}{{{\text{p}} + {\text{q}} + 1}}}} , \quad {\text{n}} = 1,{ }2, \ldots ,{\text{N}},\quad {\text{q}} = 0,{ }1,{ } \ldots ,{ }8,\quad {\text{p}} = 1,3\;{\text{or}}\;{ }5{ }. $$
(20)

The value of the integer \({\mathrm{q}}\) indicates which version of the Richardson Extrapolation is used at step \({\mathrm{n}}\). The integer \({\mathrm{p}}\) is the order of the underlying numerical method for solving systems of ODEs. The coefficient \(\updelta \) (\(\updelta <1.0\)) in the right-hand side of (20) is a precaution factor that is always used in the preparation of VSVFMs (see, for example, [18]); \(\updelta \) = \(0.9\) is selected in all experiments whose results are presented in this paper.

The five major actions, which are to be performed at the end of any step \({\mathrm{n}}\) (\({\mathrm{n}}=1, 2, \cdots ,{\mathrm{N}}\)), are formulated in Table 3. It is clearly seen that all these actions depend essentially on the value of the parameter \({\mathrm{RATIO}}\).

Table 3 Major rules for accepting or rejecting the approximation obtained at the current step \({\mathrm{n}}\)

It should be emphasized that only the most important actions are described in Table 3. Several additional rules must also be used. For example, after the stepsize has been increased, no further increase is in general allowed for several steps. This rule is useful, because the theory on which the numerical methods for solving systems of ODEs are based is, strictly speaking, valid only in the case where a constant stepsize is used. Therefore, the stepsize should not be varied too often and/or by too large a factor. There is a parameter WAIT in the program, which is used to keep the same stepsize during at least two consecutive steps. However, if the check of the parameter \({\mathrm{RATIO}}\) indicates that either Case 4 or Case 5 (see Table 3) has taken place, then the step is always rejected and the computations are repeated with a reduced stepsize.
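The accept/reject logic driven by \({\mathrm{RATIO}}\) can be sketched as follows. The numerical thresholds, bounds and the shape of the WAIT counter below are hypothetical placeholders, since the paper's actual bounds are given in Table 3; only the overall structure (reject and redo for the bad cases, keep or enlarge the stepsize otherwise, and block repeated increases) follows the text.

```python
# Sketch of the step controller of Sect. 5. The thresholds 0.5 and 2.0,
# the stepsize bounds and the WAIT handling are ASSUMED placeholders;
# the paper's actual case boundaries are defined in Table 3.

def control_step(ratio, h, h_min=1.0e-8, h_max=500.0, waited=0, WAIT=2):
    """Return (accepted, new_h, new_waited) for the current step."""
    if ratio < 0.5:
        # Cases 4/5 (assumed): reject the step, redo with a reduced stepsize
        return False, max(0.5 * h, h_min), 0
    if ratio < 1.0:
        # accept the step, but shrink the stepsize for the next step
        return True, max(ratio * h, h_min), 0
    if ratio > 2.0 and waited >= WAIT:
        # accept and enlarge, but only if there was no recent increase
        return True, min(ratio * h, h_max), 0
    # accept and keep the same stepsize (also enforces the WAIT rule)
    return True, h, waited + 1
```

A driver would call this routine at the end of every step, repeating the step whenever the first component of the result is `False`.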

6 Numerical results

The chemical scheme with 56 species was run over a time-interval of 24 h, starting at 12 o’clock on a given day and finishing at 12 o’clock on the next day. This is a very relevant and important choice, because this interval contains the periods of change from day-time to night-time and from night-time to day-time, when some chemical species vary very quickly (see the plots given in Fig. 1) and can cause great problems for the numerical methods. The time is measured in seconds. This means that the time-interval actually used in all computations was \(\left[{{\mathrm{a}}},{{\mathrm{b}}}\right]=\left[43200, 129600\right]\). This interval was divided into 168 sub-intervals (the length of each sub-interval being approximately \(514.286\) seconds) and the “exact” solution was calculated at the end of each sub-interval by applying a constant stepsize \({\mathrm{h}}={10}^{-5}\) and by running FIRK35 combined with the 8TRRE.

6.1 Accuracy results

All three numerical methods from Sect. 3 were run with different values of the error tolerance TOL. The code was always forced to reach the end-point of each of the 168 sub-intervals of the interval \(\left[{{\mathrm{a}}},{{\mathrm{b}}}\right]\), and the accuracy achieved at the end of each sub-interval was checked by using the formula \({\Vert {{\mathrm{y}}}^{\mathrm{exact}}-{{\mathrm{y}}}^{{\mathrm{calculated}}}\Vert }_{2}/{\mathrm{max}}({\Vert {{\mathrm{y}}}^{{\mathrm{exact}}}\Vert }_{2},{10}^{-6})\).
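The scaled error norm above can be written out as a small sketch; the vector arguments are plain Python lists here, and the function name is illustrative.

```python
import math

def scaled_error(y_exact, y_calc):
    """Accuracy check used at the end of each sub-interval:
    ||y_exact - y_calc||_2 / max(||y_exact||_2, 1e-6).

    The max(., 1e-6) in the denominator guards against division by a
    (nearly) zero reference solution.
    """
    diff = math.sqrt(sum((e - c) ** 2 for e, c in zip(y_exact, y_calc)))
    norm = math.sqrt(sum(e ** 2 for e in y_exact))
    return diff / max(norm, 1.0e-6)

print(scaled_error([1.0, 2.0], [1.0, 2.0]))  # 0.0 for identical vectors
```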

We started with DIRK23 and used five values of the parameter \({\mathrm{TOL}}\). The results, which are given in Table 4, show clearly that the VSVFM algorithm, the major properties of which were described in Table 3, works very well in this situation.

Table 4 Results obtained by running DIRK23 combined with different versions of the Richardson Extrapolation used as a VSVFM

Much more accurate results can be achieved, as shown in Table 5, when FIRK35 is used with eleven values of the error tolerance \({\mathrm{TOL}}\).

Table 5 Results obtained by running FIRK35 combined with different versions of the Richardson Extrapolation used as a VSVFM

Finally, if one is not interested in achieving high accuracy, then EULERB can be used with large values of the error tolerance. This is demonstrated in Table 6, where five values of \({\mathrm{TOL}}\) are used.

Table 6 Results obtained by running EULERB combined with different versions of the Richardson Extrapolation used as a VSVFM

The results shown in Tables 4, 5 and 6 indicate that the number of steps is in general not changed too much when the value of the error tolerance \({\mathrm{TOL}}\) is varied. The figures presented in Table 7 explain why this is so: decreasing the value of the error tolerance, i.e. increasing the accuracy requirements, leads in most cases not to a decrease of the stepsize, but to the selection of more accurate versions of the Richardson Extrapolation. This is important in the efforts to achieve some computational balance, because the use of Variable Stepsize Variable Formula Methods, VSVFMs, requires some extra checks and calculations at each step, which are needed in order to answer the following questions:

  (a) Is the step successful or should it be rejected?

  (b) Should the stepsize be increased and, if this is the case, by how much should it be increased?

  (c) Should the stepsize be decreased and, if this is the case, by how much should it be decreased?

Table 7 Numbers of calls of different versions of the Richardson Extrapolation when two values of the error tolerance TOL are used in runs where the underlying method is FIRK35

It is important to avoid, if possible, the use of expensive Richardson Extrapolation versions. The results from many runs (not only the runs reported in this section) indicate that the most expensive versions of the Richardson Extrapolation are used only a few times, even in the cases where the error tolerance is extremely small. This is illustrated in Table 7. If the error tolerance is \({\mathrm{TOL}}={10}^{-11}\), then the most time-consuming version of the Richardson Extrapolation, the 8TRRE, is called only two times (see the first line of Table 7). Even if \({\mathrm{TOL}}={10}^{-20}\), the 8TRRE was called only 18 times. This is a very small number compared with the total number of steps (compare the figures given in the second line of Table 7).

It should be mentioned here that all runs whose results were presented in this sub-section were performed by using extended computer precision (i.e. working with about 32 significant digits in the representation of the real numbers).

6.2 Diurnal variations of some chemical species

The diurnal variations of two chemical species were shown in Fig. 1 in order to illustrate the fact that sharp gradients may appear in some short intervals (during the changes from night-time to day-time and from day-time to night-time). However, the diurnal variations do not necessarily lead to the creation of sharp gradients. This fact is demonstrated in some of the plots shown in Fig. 3, where the diurnal variations of six of the 56 chemical species are presented. As in Fig. 1, two curves are given in each plot: one of these curves presents the numerical solution calculated by the code (using the error tolerance \({\mathrm{TOL}}=0.001\) in this particular case), while the other one presents the “exact solution” calculated by using a very small stepsize. It is seen that the calculated solution is very close to the “exact solution” on the whole interval of 24 h (86,400 s).

Fig. 3

Diurnal variation of six chemical species. Both the “exact” solution (the solution calculated with a very small stepsize) and the numerical solution (calculated by using the VSVFM, which was discussed in Sect. 5) are given in the six plots

7 Concluding remarks

Remark 2

About the stability of the computational process. The three underlying numerical methods, the three IRKMs, have excellent stability properties (see Sect. 3). However, there is a danger that the combination of any of these methods with some of the versions of the Richardson Extrapolation may result in an unstable computational process (see Sect. 4 and the comments related to Fig. 2). Our experiments show that the combination of any of the underlying numerical methods with any of the nine versions of the Richardson Extrapolation produces stable results. The instability problems are avoided because the VSVFM automatically changes the stepsize and/or the value of \({\mathrm{q}}\) every time numerical difficulties appear.

Remark 3

About the use of other underlying numerical methods for solving ODEs. The results presented in Sect. 6 were obtained by applying the same rules for changing the stepsize and the version of the Richardson Extrapolation to the three different underlying methods for solving ODEs, which were introduced in Sect. 3. This fact indicates that the strategy for varying the stepsize and the Richardson Extrapolation versions does not depend too much on the underlying method, and other underlying methods can also be selected and successfully used. However, it must be emphasized here that it is important to investigate the stability properties of the combinations of the selected underlying method with all nine versions of the Richardson Extrapolation (the same rules as those explained after Theorem 4 may be used).

Remark 4

About the reduction of the number of advanced versions of the Richardson Extrapolation used in the code. The numerical results indicate that the computationally expensive versions of the Richardson Extrapolation are not used often when the algorithm sketched in Table 3 is applied. If one wishes to be sure that these versions will never be used during the runs, then the algorithm described in Table 3 could be slightly modified in order to prevent their use. If the required accuracy is not very high and/or if the solved problem is not very difficult, then it might be worthwhile to introduce a parameter \({\text{MAX}}\_{\mathrm{q}}\le 8\), so that if, for example, \({\text{MAX}}\_{\mathrm{q}}=2\), then only the first three versions of the Richardson Extrapolation (with \({\mathrm{q}}=0\), \({\mathrm{q}}=1\) and \({\mathrm{q}}=2\)) will be used in the run. Some other values of the parameter \({\text{MAX}}\_{\mathrm{q}}\), with \({\text{MAX}}\_{\mathrm{q}}\in \left\{0, 1,\dots , 8\right\}\), can also be selected and used.
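The MAX_q safeguard amounts to a one-line cap on the version index chosen by the VSVFM; a minimal sketch (the function name is illustrative) could look as follows.

```python
# Sketch of the MAX_q safeguard: whatever version index q the VSVFM would
# like to select is simply capped, so the expensive high-q extrapolation
# versions are never activated.

def capped_version(q_selected, max_q=8):
    """Restrict the chosen Richardson Extrapolation version to 0..max_q."""
    return min(q_selected, max_q)

print(capped_version(5, max_q=2))  # with MAX_q = 2 only 0TRRE..2TRRE are used
```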

Remark 5

About the chemical scheme used. We were mainly interested in the efficient implementation of advanced versions of the Richardson Extrapolation together with three underlying Implicit Runge–Kutta Methods. Therefore, the properties of the chemical scheme used were only briefly described. The important point is that the described algorithms do not depend very much on the particular chemical scheme and can also be applied if other chemical schemes are selected. Some more details about the treatment of the chemical schemes in air pollution models can be found in [29, 31]. Information about other atmospheric chemical schemes, which can be used in large-scale air pollution models, can be found, for example, in [1, 2, 5,6,7, 10, 11, 15, 16, 19, 22,23,24].