Many physical processes, such as chemical kinetics (Chan et al. [1]) and air pollution modeling (Verwer et al. [2]; Sandu et al. [3]), are represented by an initial value problem (IVP) associated with a stiff system of first-order ordinary differential equations (ODEs) given as [4]

$$\begin{aligned} y^{\prime }(x)=f({x,\,y(x)}),\quad x\in \left[ x_{0},\,x_{n}\right] \end{aligned}$$

with the initial condition prescribed as

$$\begin{aligned} y_{0}=y\left( {x_{0}}\right) \end{aligned}$$

If f does not depend explicitly on x, the system is said to be autonomous; otherwise, it is non-autonomous. Solving such stiff problems ordinarily requires implicit methods, whose nonlinear implicit equations must be solved iteratively and can raise convergence difficulties. Rosenbrock [5] first presented a class of methods that generalizes the linearly implicit approach: instead of using iterative Newton steps, stable formulas are derived by working the Jacobian matrix directly into the integration formula. This idea has found widespread use, and the resulting schemes are called Rosenbrock methods (ROW; Hairer and Wanner [6]).
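To see why a linearly implicit step pays off, consider the following illustrative sketch (in Python; the stiff test equation \(y^{\prime }=\lambda (y-\cos x)-\sin x\) with \(\lambda =-1000\) and exact solution \(y=\cos x\) is a standard illustration chosen here, not an example from the references). It contrasts explicit Euler with the simplest Rosenbrock-type step, which works the Jacobian into the formula through a single linear solve per step:

```python
import numpy as np

# Stiff test problem (illustrative, not from the paper):
#   y' = lam*(y - cos x) - sin x,  y(0) = 1,  exact solution y = cos x.
lam = -1000.0

def f(x, y):
    return lam * (y - np.cos(x)) - np.sin(x)

def explicit_euler(h, n):
    """Classical explicit Euler: unstable once h*|lam| exceeds 2."""
    x, y = 0.0, 1.0
    for _ in range(n):
        y, x = y + h * f(x, y), x + h
    return y

def linearly_implicit_euler(h, n):
    """Simplest Rosenbrock-type step with gamma = 1:
    (1 - h*J) k = h*f(x_n, y_n),  y_{n+1} = y_n + k,
    where J = df/dy; one linear 'solve' per step, no Newton iteration."""
    x, y, J = 0.0, 1.0, lam
    for _ in range(n):
        y, x = y + h * f(x, y) / (1.0 - h * J), x + h
    return y

h, n = 0.01, 100  # integrate to x = 1 with h*|lam| = 10
ye = explicit_euler(h, n)
yl = linearly_implicit_euler(h, n)
print(abs(ye), abs(yl - np.cos(1.0)))
```

With \(h=0.01\) we have \(h|\lambda |=10\), well outside the explicit stability interval, so explicit Euler diverges while the linearly implicit step stays close to \(\cos x\).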

Tambue et al. [7] have shown that the exponential Rosenbrock–Euler method and a Rosenbrock-type method are efficient tools for solving geothermal systems. Schippmann and Burchard [8] have investigated the computational efficiency of Rosenbrock methods in comparison with Runge–Kutta and Patankar methods. The numerical solution of a discontinuous differential system by a Rosenbrock method has been obtained by Berardi [9], who further suggests that the technique is also applicable to discontinuous singularly perturbed systems. Rang [10] has derived new order conditions to reduce order reduction and developed new third-order diagonally implicit Runge–Kutta and Rosenbrock–Wanner methods. The new schemes are applied to two stiff problems, namely, the Prothero–Robinson example and the incompressible Navier–Stokes equations.

The non-autonomous form (1) can be converted into autonomous form by appending the equation \(x^{\prime }=1.\) For autonomous systems with an exact Jacobian, an s-stage ROW method is defined as

$$\begin{aligned} \begin{aligned} k_{i}&= hf\left( y_{n} +\sum \limits _{j=1}^{i-1} \alpha _{ij} k_{j} \right) +hJ\sum \limits _{j=1}^{i}\gamma _{ij} k_{j } ,\quad i=1,\ldots ,s \\ y_{n+1}&= y_{n} +\sum \limits _{j=1}^s {b_{j} k_{j} } \end{aligned} \end{aligned}$$

where \(\alpha _{ij},\,\gamma _{ij}\) and \(b_{j}\) are the determining coefficients and \(J = f^{\prime }(y_{0}),\,{\gamma }_{{ii}}={\gamma }.\)

Many methods of this type have been derived, by Calahan [11], Cash [12], Norsett and Wolfbrandt [13], Kaps and Rentrop [14], Kaps and Wanner [15], Shampine [16] and Kaps et al. [17], under the assumption \({\gamma }_{{ii}}=\gamma .\) Parallel machines are computers with more than one processor, a facility that can speed up the computations arising in ODEs. Numerical scientists are therefore interested in developing Rosenbrock methods suited to parallel computers for solving stiff ODEs. Voss [18, 19] proposed parallel Rosenbrock methods (PRMs) of second and fourth order by assuming \({\gamma }_{{ii}}={\gamma }_{{j}},\,j\,=\,1,\,2.\) To raise the efficiency of sequential Rosenbrock methods, Chen and Liu [20] presented a class of PRMs, which also avoids nonlinear systems and is more efficient than the sequential ROW methods mentioned above. Cao et al. [21] proposed modified PRMs (MPROW): third-order two-stage and fourth-order three-stage methods.

In this paper, a new fourth-order, four-stage PRM (NPROS4(4)) for solving stiff ODEs is derived and studied. The organization of this article is as follows. Section “Order Conditions” is followed by section “New PRMs of Order 4(4) (NPROS4(4)) for \(\gamma _{ii} \ne \gamma \)”, which presents the construction of a new class of methods by exploiting a larger number of free parameters, and further explains the parallel implementation of the sequential method. In the section “Derivation of the LTE”, the LTE expression is derived using the fifth-order conditions. Analysis of convergence and stability is described in section “Convergence and Stability of NPROS4(4)”, while numerical experiments are given in the section “Numerical Experimentation”. Further, we compare the present four-stage, fourth-order PRM (NPROS4(4)) with the existing three-stage, fourth-order modified PRM (MPROW) described in Cao et al. [21].

Order Conditions

Consider the IVP given in Eq. (1), where the mapping \(f(y)\) is assumed to satisfy a Lipschitz condition (Jain et al. [22]) and to possess all the continuous derivatives used later. The exact solution of Eq. (1) is denoted by \(y(t),\, x_{0}\le t \le x_{n}.\) To solve it, a class of sequential fourth-order, four-stage Rosenbrock methods is defined as

$$\begin{aligned} \begin{array}{l} \left( I-h\gamma _{ii} J\right) k_{i} =hf\left( y_{n} +\sum \limits _{j=1}^{i-1} \alpha _{ij} k_{j}\right) +hJ\sum \limits _{j=1}^{i-1} \gamma _{ij} k_{j } ,\quad i=1,\,2,\,3,\,4 \\ y_{n+1} =y_{n} +\sum \limits _{j=1}^{4} {b_{j} k_{j} } \\ \end{array} \end{aligned}$$

where \(h>0\) is the integration stepsize, \(t_{n} = x_{0} +nh,\,\alpha _{ij},\,\gamma _{ij}\) and \(b_{j}\) are real coefficients, I denotes the identity matrix, J denotes the Jacobian matrix, \(y_{n}\) is an approximation to \(y(t_{n}),\) and each \(k_{i}\) denotes an approximation to some piece of information about the exact solution \(y(t).\) To obtain the coefficients of NPROS4(4), either the Taylor series expansion or the rooted tree theory (Butcher [23]) can be used. According to Butcher [24, 25], trees play a central role in the theory of Runge–Kutta methods, and they also have applications to more general methods involving multiple values and multiple stages. Keeping this in view, we have given the trees and order conditions up to order 5 in Table 1. The abbreviation \(\beta _{ij} =\alpha _{ij} +\gamma _{ij}\) has been followed. The novelty of the present paper is that we do not assume the condition \(\gamma _{ii} =\gamma .\)

Table 1 Trees and order conditions up to order 5

Using the set of order conditions corresponding to trees of order up to 4 (see Table 1), we derive the new Rosenbrock method NPROS4(4). With the help of the order conditions of trees of order 5 (see Table 1), we obtain the LTE of the NPROS4(4).

The conditions used to construct the four-stage fourth order Rosenbrock methods are written as:

$$\begin{aligned}&b_{1} +b_{2} +b_{3} +b_{4} =1\end{aligned}$$
$$\begin{aligned}&b_{2} \beta _{2} ^{\prime }+b_{3} \beta _{3} ^{\prime }+b_{4} \beta _{4}^{\prime }=\frac{1}{2}-\left( b_{1} \gamma _{11} +b_{2} \gamma _{22} +b_{3} \gamma _{33} +b_{4} \gamma _{44} \right) \end{aligned}$$
$$\begin{aligned}&b_{2} \alpha _{2}^{2} +b_{3} \alpha _{3}^{2} +b_{4} \alpha _{4}^{2}=\frac{1}{3} \end{aligned}$$
$$\begin{aligned}&b_{2} \beta _{2} ^{\prime }\gamma _{11} +b_{2} \beta _{2} ^{\prime }\gamma _{22} +b_{3} \left\{ \beta _{2} ^{\prime }\beta _{32} +\beta _{32} \gamma _{22} +\beta _{32} \gamma _{33} +\beta _{31} \left( \gamma _{11} +\gamma _{33} \right) \right\} \nonumber \\&\qquad +\,b_{4} \left\{ \beta _{2} ^{\prime }\beta _{42} +\beta _{31} \beta _{43} + \beta _{32} \beta _{43} +\beta _{41} \gamma _{11} +\beta _{42} \gamma _{22} +\beta _{43} \gamma _{33} +\beta _{41} \gamma _{44} +\beta _{43} \gamma _{44} \right\} \nonumber \\&\quad =\frac{1}{6}-\left( b_{1} \gamma _{11}^{2} +b_{2} \gamma _{22}^{2} +b_{3} \gamma _{33}^{2} +b_{4} \gamma _{44}^{2}\right) \end{aligned}$$
$$\begin{aligned}&b_{2} \alpha _{2}^{3} +b_{3} \alpha _{3}^{3} +b_{4} \alpha _{4}^{3}=\frac{1}{4} \end{aligned}$$
$$\begin{aligned}&b_{2} \alpha _{2}^{2} \gamma _{11} +b_{3} \alpha _{3} \left\{ \alpha _{31} \gamma _{11} +\alpha _{32} \gamma _{22} +\alpha _{32} \beta _{2}^{\prime }\right\} +b_{4} \alpha _{4} \left\{ \alpha _{41} \gamma _{11} +\alpha _{42}\gamma _{22}\right. \nonumber \\&\left. \qquad +\alpha _{43} \gamma _{33} +\alpha _{42} \beta _{2} ^{\prime }+\alpha _{43} \beta _{3} ^{\prime }\right\} =\frac{1}{8} \end{aligned}$$
$$\begin{aligned}&\alpha _{2}^{2} \left\{ b_{2} \gamma _{22} +b_{3} \beta _{32} +b_{4}\beta _{42} \right\} +\alpha _{3}^{2}\left\{ b_{3} \gamma _{33} +b_{4} \beta _{43} \right\} +b_{4} \alpha _{4}^{2} \gamma _{44} =\frac{1}{12} \end{aligned}$$
$$\begin{aligned}&b_{2} \gamma _{11}^{2} \beta _{2} ^{\prime }+b_{2} \gamma _{11}\beta _{2} ^{\prime }\gamma _{22} +b_{2} \beta _{2} ^{\prime }\gamma _{22}^{2}+b_{3} \gamma _{11}^{2} \beta _{31} +b_{3} \gamma _{11} \beta _{21} \beta _{32}\nonumber \\&\qquad +b_{3} \beta _{21} \gamma _{22} \beta _{32} +b_{3} \gamma _{22}^{2}\beta _{32} +b_{3} \gamma _{11} \beta _{31} \gamma _{33} +b_{3} \beta _{21} \beta _{32} \gamma _{33} +b_{3} \gamma _{22} \beta _{32} \gamma _{33}\nonumber \\&\qquad +\,b_{3} \beta _{31} \gamma _{33}^{2} +b_{3} \beta _{32} \gamma _{33}^{2} +b_{4}\gamma _{11}^{2} \beta _{41} +b_{4} \gamma _{11} \beta _{21} \beta _{42}+b_{4} \beta _{21} \gamma _{22} \beta _{42} +b_{4} \gamma _{22}^{2} \beta _{42} \nonumber \\&\qquad +\,b_{4} \gamma _{11} \beta _{31} \beta _{43} +b_{4} \beta _{2} ^{\prime }\beta _{32} \beta _{43} +b_{4} \gamma _{22} \beta _{32} \beta _{43} +\,b_{4} \beta _{31} \gamma _{33} \beta _{43} +b_{4} \beta _{32} \gamma _{33} \beta _{43} \nonumber \\&\qquad +\,b_{4} \gamma _{33}^{2} \beta _{43} +b_{4} \gamma _{11} \beta _{41} \gamma _{44} +b_{4} \beta _{2} ^{\prime }\beta _{42} \gamma _{44} +b_{4} \gamma _{22} \beta _{42} \gamma _{44} +b_{4} \beta _{31} \beta _{43} \gamma _{44} \nonumber \\&\qquad +\,b_{4} \beta _{32} \beta _{43} \gamma _{44} +b_{4} \gamma _{33} \beta _{43} \gamma _{44} +\,b_{4} \beta _{41} \gamma _{44}^{2} +b_{4} \beta _{42}\gamma _{44}^{2} +b_{4} \beta _{43} \gamma _{44}^{2} \nonumber \\&\quad =\frac{1}{24}-b_{1} \gamma _{11}^{3} -b_{2} \gamma _{22}^{3} -b_{3} \gamma _{33}^{3} -b_{4} \gamma _{44}^{3} \end{aligned}$$


where

$$\begin{aligned} \alpha _{i} =\sum _{j=1}^{i-1} {\alpha _{ij},\quad \beta _{i} ^{\prime }=\sum _{j=1}^{i-1} {\beta _{ij} } } \end{aligned}$$

Internal Stability of the Method

Rosenbrock methods use s function evaluations at the points

$$\begin{aligned} u_{i} =y_{n} +\sum _{j=1}^{i-1} {\alpha _{ij} k_{j} } \end{aligned}$$

To avoid function evaluations far away from the solution, stability at these points is needed. For Eq. (14), we obtain \(u_{i} =R_{i} (z)y_{n},\) where

$$\begin{aligned}&R_{1} =1,\quad R_{2} =1+\alpha _{2}w,\quad R_{3} =1+\alpha _{3} w+\alpha _{32} \beta _{2}^{\prime }w^{2} \\&R_{4} =1+\alpha _{4} w+\left( \alpha _{42} \beta _{2}^{\prime } +\alpha _{43} \beta _{3}^{\prime }\right) w^{2}+\alpha _{43} \beta _{32} \beta _{2}^{\prime } w^{3} \end{aligned}$$


and

$$\begin{aligned} w=\frac{z}{1-\gamma _{ii} z}, \quad \alpha _{i} =\sum _{j=1}^{i-1} {\alpha _{ij}, \quad \beta _{i} ^{\prime }=\sum _{j=1}^{i-1}{\beta _{ij} }} \end{aligned}$$

A method is said to be internally stable if \(R_{i}(z)\) is A-stable for \(1 \le i \le s\) (Verwer [26]).

New PRMs of Order 4(4) (NPROS4(4)) for \(\gamma _{ii} \ne \gamma \)

The computation of the coefficients \(\alpha _{ij},\,\gamma _{ij},\,\beta _{ij }\) and \(b_{j}\) satisfying (5)–(13) has been carried out as follows:

Step 1 We choose \(\gamma _{ii} >0\) with the following stability properties (Kaps et al. [17]):

  1. (i)

    small values for the non-vanishing error terms of the LTE described in section “Derivation of the LTE”;

  2. (ii)

    positive coefficients \(b_{j};\)

  3. (iii)

    \(\alpha _{ij},\,\beta _{ij}\) small in magnitude;

  4. (iv)

    internal stability of the method (described in section “Internal Stability of the Method”).

Step 2 Choose \(\alpha _{2},\,\alpha _{3},\,\alpha _{4}\) and \(b_{1},\,b_{2},\,b_{3},\,b_{4}\) in such a way that the three conditions (5), (7) and (9) are satisfied.

Step 3 Take \(\beta _{43}\) and \(\beta _{31}\) as free parameters and solve conditions (6), (8), (11) and (12) to find \(\beta _{2}^{\prime },\,\beta _{3}^{\prime },\,\beta _{4}^{\prime }\) and \(\beta _{42}.\)

Step 4 Once the \(\beta _{i}^{\prime }\) are known, one can easily find \(\beta _{32},\,\beta _{41}\) by using (13).

Step 5 Find \(\alpha _{41},\,\alpha _{42},\,\alpha _{43}\) by using (10) and (13). Here \(\alpha _{43}\) is taken as a free parameter; the remaining free parameters can be chosen freely.

In particular, we follow steps 1–5 to select the parameters,

$$\begin{aligned} \alpha _{21}&= \frac{1}{2},\quad \alpha _{31} =\frac{1}{2},\quad \alpha _{32}=\frac{1}{2},\quad \alpha _{41} =-0.178278,\quad \gamma _{11} =\frac{1}{6} \\ \gamma _{22}&= \frac{2}{6},\quad \gamma _{33} =\frac{2}{6},\quad \gamma _{44} =\frac{1}{6} \\ \alpha _{42}&= 0.219945,\quad \alpha _{43} =\frac{1}{3},\quad \gamma _{21}=-0.12642566,\\&\gamma _{31} =-0.5,\quad \gamma _{32} =-1.22102225482 \\ b_{1}&= \frac{43}{768},\quad b_{2} =\frac{83}{192},\quad b_{3} =\frac{137}{768},\quad b_{4}=\frac{1}{3} \end{aligned}$$
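As a sanity check, the explicit order conditions selected in Step 2, namely (5), (7) and (9), can be verified numerically for these parameters (a small sketch; since the coefficients above are printed to six decimals, a loose tolerance is appropriate, and \(b_{1}\) is formed from condition (5) as \(1-b_{2}-b_{3}-b_{4}=43/768\)):

```python
# Spot-check of the explicit order conditions (5), (7), (9):
#   sum b_j = 1,  sum b_j*alpha_j^2 = 1/3,  sum b_j*alpha_j^3 = 1/4,
# with alpha_i = sum_j alpha_ij as defined below Eq. (13).
b2, b3, b4 = 83/192, 137/768, 1/3
b1 = 1 - b2 - b3 - b4            # condition (5) forces b1 = 43/768
a2 = 0.5                         # alpha_2 = alpha_21
a3 = 0.5 + 0.5                   # alpha_3 = alpha_31 + alpha_32
a4 = -0.178278 + 0.219945 + 1/3  # alpha_4 = alpha_41 + alpha_42 + alpha_43

cond5 = b1 + b2 + b3 + b4
cond7 = b2*a2**2 + b3*a3**2 + b4*a4**2
cond9 = b2*a2**3 + b3*a3**3 + b4*a4**3
print(cond5, cond7, cond9)  # approximately 1, 1/3, 1/4
```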

Then the new fourth-order, four-stage Rosenbrock method is given as follows:

$$\begin{aligned} \left( I-\frac{h}{6}J\right) k_{1}&= hf\left( y_{n}\right) \nonumber \\ \left( I-\frac{2h}{6}J\right) k_{2}&= hf\left( y_{n} +(0.5)k_{1}\right) -hJ(0.12642566)k_{1} \nonumber \\ \left( I-\frac{2h}{6}J\right) k_{3}&= hf\left( y_{n} +(0.5)k_{1} +(0.5)k_{2}\right) +hJ\left( -0.5k_{1} -1.22102225482k_{2}\right) \nonumber \\ \left( I-\frac{h}{6}J\right) k_{4}&= hf\left( y_{n} -0.178278k_{1} +0.219945 k_{2} +\frac{k_{3} }{3}\right) \nonumber \\&+ hJ\left( 0.178278 k_{1} +0.088797 k_{2} -0.37395 k_{3}\right) \nonumber \\ y_{n+1}&= y_{n} +\frac{43k_{1} }{768}+\frac{83k_{2} }{192}+\frac{137k_{3}}{768}+\frac{k_{4} }{3} \end{aligned}$$
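For illustration, one step of Eq. (15) can be coded directly. The sketch below applies it to the scalar test problem \(y^{\prime }=-y\), for which \(J=-1\) and each stage matrix \(I-h\gamma _{ii}J\) reduces to a scalar, so every "linear solve" is a division; it is meant only to exhibit the stage structure, not to validate the order of the method:

```python
import numpy as np

# One step of the sequential NPROS4(4) scheme on the scalar problem y' = -y.
# In the vector case each division below becomes a linear solve with the
# matrix (I - h*gamma_ii*J); here J is the scalar -1.
def npros44_step(f, J, y, h):
    g = [1/6, 2/6, 2/6, 1/6]  # the gamma_ii
    k1 = h * f(y) / (1 - h*g[0]*J)
    k2 = (h * f(y + 0.5*k1) - h*J*0.12642566*k1) / (1 - h*g[1]*J)
    k3 = (h * f(y + 0.5*k1 + 0.5*k2)
          + h*J*(-0.5*k1 - 1.22102225482*k2)) / (1 - h*g[2]*J)
    k4 = (h * f(y - 0.178278*k1 + 0.219945*k2 + (1/3)*k3)
          + h*J*(0.178278*k1 + 0.088797*k2 - 0.37395*k3)) / (1 - h*g[3]*J)
    return y + (43/768)*k1 + (83/192)*k2 + (137/768)*k3 + (1/3)*k4

y1 = npros44_step(lambda y: -y, -1.0, 1.0, 0.01)
print(y1, np.exp(-0.01))  # one step versus the exact solution
```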

Parallel Implementation

The method described in Eq. (15) is rewritten as

$$\begin{aligned} \left( I-h\gamma _{11} J\right) k_{1,n}&= hf\left( y_{n}\right) \nonumber \\ \left( I-h\gamma _{22} J\right) k_{2,n}&= hf\left( y_{n} +\alpha _{21} k_{1,n-1}\right) -hJ\gamma _{21} k_{1,n-1} \nonumber \\ \left( I-h\gamma _{33} J\right) k_{3,n}&= hf\left( y_{n} +\alpha _{31} k_{1,n-1} +\alpha _{32} k_{2,n-1}\right) +hJ\left( \gamma _{31} k_{1,n-1} +\gamma _{32} k_{2,n-1}\right) \nonumber \\ \left( I-h\gamma _{44} J\right) k_{4,n}&= hf\left( y_{n} +\alpha _{41} k_{1,n-1} +\alpha _{42} k_{2,n-1} +\alpha _{43} k_{3,n-1} \right) \nonumber \\&+hJ\left( \gamma _{41} k_{1,n-1} +\gamma _{42} k_{2,n-1} +\gamma _{43}k_{3,n-1} \right) \nonumber \\ y_{n+1}&= y_{n} +b_{1} k_{1,n} +b_{2} k_{2,n} +b_{3} k_{3,n} +b_{4} k_{4,n} \end{aligned}$$


$$\begin{aligned} \alpha _{21}&= \frac{1}{2},\quad \alpha _{31} =\frac{1}{2},\quad \alpha _{32} =\frac{1}{2},\quad \alpha _{41} =-0.178278,\quad \gamma _{11} =\frac{1}{6}\\ \gamma _{22}&= \frac{2}{6},\quad \gamma _{33}=\frac{2}{6},\quad \gamma _{44} =\frac{1}{6}\\ \alpha _{42}&= 0.219945,\; \alpha _{43} =\frac{1}{3},\; \gamma _{21}=-0.12642566,\; \gamma _{31} =-0.5,\; \gamma _{32} =-1.22102225482\\ b_{1}&= \frac{43}{768},\quad b_{2} =\frac{83}{192},\quad b_{3} =\frac{137}{768},\quad b_{4}=\frac{1}{3} \end{aligned}$$

Each \(k_{i,n}\) denotes an approximation to certain information \(k_{i }(t_{n},\,h)\) about the exact solution \(y(t).\) At each step, the unknowns \(k_{1,n},\, k_{2,n},\, k_{3,n},\, k_{4,n}\) can be calculated in parallel on four processors, with each processor solving only one linear system, provided that the back values \(y_{n},\,k_{1,n-1},\, k_{2,n-1},\, k_{3,n-1},\, k_{4,n-1}\) are known. Initially, the back values \(k_{1,n-1},\, k_{2,n-1},\, k_{3,n-1},\,k_{4,n-1}\) are set to zero.

We write

$$\begin{aligned} u_{i,n} =\sum _{j=1}^{i} {\gamma _{ij} k_{j,n},\quad i=1,\,2,\,3,\,4 \quad {\text {for}}\quad n=1,\,2,\ldots } \end{aligned}$$

If \({\gamma }_{{ii}}\ne 0\) for all i, then the matrix \(\Gamma =({\gamma }_{{ij}})\) is invertible, and we write \(k_{i}\) as

$$\begin{aligned}&k_{i,n} =\frac{u_{i,n} }{\gamma _{ii} }-\sum _{j=1}^{i-1} {c_{ij} u_{j,n-1} }\quad {\text {where}}\\&\quad C= diag\left( \gamma _{11}^{-1},\,\gamma _{22}^{-1},\,\gamma _{33}^{-1},\,\gamma _{44}^{-1}\right) -\Gamma ^{-1},\quad i=1,\,2,\,3,\,4 \end{aligned}$$

Inserting in Eq. (16) and dividing by h, we get

$$\begin{aligned} \left( \frac{I}{h\gamma _{ii} }-J\right) u_{i,n}&= f\left( y_{n}+\sum \limits _{j=1}^{i-1} a_{ij} u_{j,n-1}\right) +\sum \limits _{j=1}^{i-1} {\frac{c_{ij}u_{j,n-1}}{{h}},\quad i=1,\,2,\,3,\,4} \nonumber \\ \end{aligned}$$
$$\begin{aligned}&y_{n+1} =y_{n} +\sum \limits _{j=1}^4 {m_{j} u_{j}}\nonumber \\&\left( a_{ij}\right) =\left( \alpha _{ij}\right) \Gamma ^{-1},\quad \left( m_{1},\,m_{2},\,m_{3},\,m_{4}\right) =\left( b_{1},\,b_{2},\,b_{3},\,b_{4}\right) \Gamma ^{-1},\quad i=1,\,2,\,3,\,4 \end{aligned}$$
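The transformation can be checked numerically. In the sketch below, the fourth-row entries \(\gamma _{41}=0.178278,\,\gamma _{42}=0.088797,\,\gamma _{43}=-0.37395\) are read off from the stage-4 formula of Eq. (15), since they are not listed with the other parameters; the computed C and \((a_{ij})\) come out strictly lower triangular, which is exactly what allows stage i to use only back values \(u_{j,n-1},\,j<i\):

```python
import numpy as np

# Change of variables u = Gamma k for the parallel form:
#   C = diag(1/gamma_ii) - Gamma^{-1},
#   (a_ij) = (alpha_ij) Gamma^{-1},  (m_j) = (b_j) Gamma^{-1}.
G = np.array([[1/6,          0,             0,        0],
              [-0.12642566,  2/6,           0,        0],
              [-0.5,        -1.22102225482, 2/6,      0],
              [0.178278,     0.088797,     -0.37395,  1/6]])  # Gamma
A = np.array([[0,         0,        0,    0],
              [0.5,       0,        0,    0],
              [0.5,       0.5,      0,    0],
              [-0.178278, 0.219945, 1/3,  0]])                # (alpha_ij)
b = np.array([43/768, 83/192, 137/768, 1/3])

Ginv = np.linalg.inv(G)
C = np.diag(1/np.diag(G)) - Ginv
a = A @ Ginv
m = b @ Ginv
print(np.round(C, 6))  # strictly lower triangular, as required
```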

Each \(u_{i}\) requires the solution of a linear system, using a call to an LU decomposition. Therefore, at each step, (i) the Jacobian is evaluated and (ii) four linear systems are solved. The following are the steps taken to implement the algorithm on a parallel machine having four processors P1, P2, P3 and P4.

Step 1

Compute \(h =\frac{x_{n} -x_{0} }{n},\) the step-size of the method on P1 and initially allocate \(k_{1,n-1}= 0\) on P1, \(k_{2,n-1}= 0\) on P2, \(k_{3,n-1 }= 0\) on P3 and \(k_{4,n-1}=0\) on P4.

Step 2


$$\begin{aligned} \left( \frac{I}{h\gamma _{11} }-J\right) u_{1,n}&= f(y_{n} )\,\mathrm{on}\,P1\\ \left( \frac{I}{h\gamma _{22} }-J\right) u_{2,n}&= f\left( y_{n} +a_{21} u_{1,n-1}\right) +\frac{c_{21} u_{1,n-1} }{h} \;\mathrm{on\,P2}\\ \left( \frac{I}{h\gamma _{33} }-J\right) u_{3,n}&= f\left( y_{n} +a_{31} u_{1,n-1}+a_{32} u_{2,n-1} \right) +\frac{c_{31} u_{1,n-1} +c_{32} u_{2,n-1}}{h}\;\mathrm{on\,P3\quad and}\\ \left( \frac{I}{h\gamma _{44} }-J\right) u_{4,n}&= f\left( y_{n} +a_{41} u_{1,n-1}+a_{42} u_{2,n-1} +a_{43} u_{3,n-1}\right) \\&+\frac{c_{41} u_{1,n-1} +c_{42}u_{2,n-1} +c_{43} u_{3,n-1} }{h} \end{aligned}$$

on P4, respectively where \(u_{j,n}\) are defined as in Eq. (18).

Step 3

Finally, the results are combined to yield an approximation to the solution at the next time step. This step requires the computation

$$\begin{aligned} y_{n+1} =y_{n} +\sum _{j=1}^4 {m_{j} u_{j} } \end{aligned}$$

on any of the processors P1, P2, P3 and P4.
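Steps 1–3 can be sketched as follows (Python, with a thread pool standing in for the four processors, and the scalar problem \(y^{\prime }=-y\) so that each linear system is a scalar division; this illustrates only the communication pattern, with the back values initialised to zero as in Step 1):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Gamma, a = alpha*Gamma^{-1}, C and m formed as in Eqs. (18)-(19); the
# fourth-row gammas are read off from the stage-4 formula of Eq. (15).
G = np.array([[1/6, 0, 0, 0],
              [-0.12642566, 2/6, 0, 0],
              [-0.5, -1.22102225482, 2/6, 0],
              [0.178278, 0.088797, -0.37395, 1/6]])
A = np.array([[0, 0, 0, 0],
              [0.5, 0, 0, 0],
              [0.5, 0.5, 0, 0],
              [-0.178278, 0.219945, 1/3, 0]])
b = np.array([43/768, 83/192, 137/768, 1/3])
Ginv = np.linalg.inv(G)
a, C, m = A @ Ginv, np.diag(1/np.diag(G)) - Ginv, b @ Ginv

f, J = (lambda y: -y), -1.0            # scalar test problem y' = -y
h, y, u_old = 0.01, 1.0, np.zeros(4)   # Step 1: back values set to zero

def stage(i, y, u_old):
    # Processor P(i+1): one scalar "solve" of (1/(h*gamma_ii) - J) u_i = rhs,
    # using only back values u_{j,n-1}, j < i (Step 2).
    rhs = f(y + a[i, :i] @ u_old[:i]) + (C[i, :i] @ u_old[:i]) / h
    return rhs / (1.0/(h*G[i, i]) - J)

with ThreadPoolExecutor(max_workers=4) as pool:
    for n in range(100):               # integrate to x = 1
        u = np.array(list(pool.map(lambda i: stage(i, y, u_old), range(4))))
        y, u_old = y + m @ u, u        # Step 3: combine on any processor
print(y, np.exp(-1.0))                 # compare qualitatively with exp(-1)
```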

Derivation of the LTE

By making use of the order conditions for fifth-order trees in Table 1, the LTE can be obtained. Since Rosenbrock schemes are a particular case of implicit or semi-implicit Runge–Kutta methods, the LTE is obtained using the following Theorem 1.

Theorem 1

If the Runge–Kutta method is of order p and if f is (p + 1)-times continuously differentiable, then the LTE can be obtained from

$$\begin{aligned} y^{J}\left( x_{0} +h \right) -y_{1}^{J}=\frac{h^{p+1}}{(p+1)!}\sum \limits _{t\in T_{p+1} } \alpha (t)\, e(t)\, F^{J}(t)\left( y_{0} \right) +O(h^{p+2}) \end{aligned}$$


where

$$\begin{aligned} e(t)= 1 -\gamma (t)\sum \limits _{j=1}^{s} {b_{j} }\, \Phi _{j}(t) \end{aligned}$$

and \(\alpha (t)\) denotes the number of possible different monotonic labellings of \(t,\,\gamma (t)\) the density of the tree \(t \in LT_{q},\) and \(\Phi _{j}(t)\) the elementary weight for stage j. The expressions \(e(t)\) are called the error coefficients (Hairer et al. [27]).

The error coefficients and the derivative terms in the LTE are computed using the elementary differentials corresponding to the fifth-order trees given in Table 1. We assume that \(f^{1}=1\) and \(f^{2}=f\) for \(J = 1,\,2.\) The following are the error coefficients and their corresponding derivatives:

$$\begin{aligned} e_1&= \left\{ { 1-5 \left( b_{2} \alpha _{2,1}^{4} +b_{3} \alpha _{3,1}^{4} +4b_{3} \alpha _{3,1}^{3} \alpha _{3,2} +6b_{3} \alpha _{3,1}^{2} \alpha _{3,2}^{2} +4b_{3} \alpha _{3,1} \alpha _{3,2}^{3} +b_{3} \alpha _{3,2}^{4}\right) }\right\} \end{aligned}$$
$$\begin{aligned} e_{2}&= \left\{ { 1-10 (b_{3} \alpha _{3,1}^{2} \alpha _{3,2} \beta _{2,1}\!+2b_{3} \alpha _{3,1} \alpha _{3,2}^{2} \beta _{2,1} +\!b_{3} \alpha _{3,2}^{3} \beta _{2,1} +b_{2} \alpha _{2,1}^{3} \gamma _{1,1} +\!b_{3} \alpha _{3,1}^{3} \gamma _{1,1} } \right. \nonumber \\&\quad \left. { +\,2b_{3} \alpha _{3,1}^{2} \alpha _{3,2} \gamma _{1,1} +\!b_{3}\alpha _{3,1} \alpha _{3,2}^{2} \gamma _{1,1} +\!b_{3} \alpha _{3,1}^{2}\alpha _{3,2} \gamma _{2,2} +\!2b_{3} \alpha _{3,1} \alpha _{3,2}^{2} \gamma _{2,2} +\!b_{3} \alpha _{3,2}^{3} \gamma _{2,2} ) } \right\} \end{aligned}$$
$$\begin{aligned} e_{3}&= \left\{ { 1-15\left( b_{3} \alpha _{2,1}^{2} \alpha _{3,1}\alpha _{3,2} +b_{3} \alpha _{2,1}^{2} \alpha _{3,2}^{2}\right) } \right\} \end{aligned}$$
$$\begin{aligned} e_{4}&= \left\{ { 1-30(b_{3} \alpha _{3,1} \alpha _{3,2} \beta _{2,1} \gamma _{1,1} +b_{3} \alpha _{3,2}^{2} \beta _{2,1} \gamma _{1,1} +b_{2} \alpha _{2,1}^{2} \gamma _{1,1}^2 +b_{3} \alpha _{3,1}^{2} \gamma _{1,1}^{2}} \right. \nonumber \\&\left. +\, b_{3} \alpha _{3,1} \alpha _{3,2} \gamma _{1,1}^{2} +{ b_{3} \alpha _{3,1} \alpha _{3,2} \beta _{2,1} \gamma _{2,2} +b_{3} \alpha _{3,2}^{2} \beta _{2,1} \gamma _{2,2} +b_{3} \alpha _{3,1} \alpha _{3,2} \gamma _{2,2}^{2} +b_{3} \alpha _{3,2}^{2} \gamma _{2,2}^{2} )} \right\} \end{aligned}$$
$$\begin{aligned} e_{5}&= \left\{ { 1-20(b_{3} \alpha _{2,1}^{3} \beta _{3,2} +b_{4} \alpha _{2,1}^{3} \beta _{4,2} +b_{4} \alpha _{3,1}^{3} \beta _{4,3} +3b_{4} \alpha _{3,1}^{2} \alpha _{3,2} \beta _{4,3} +3b_{4} \alpha _{3,1} \alpha _{3,2}^{2} \beta _{4,3} } \right. \nonumber \\&\left. {+\,b_{4} \alpha _{3,2}^{3} \beta _{4,3} +b_{2} \alpha _{2,1}^{3}\gamma _{2,2} +b_{3} \alpha _{3,1}^{3} \gamma _{3,3} +3b_{3} \alpha _{3,1}^{2} \alpha _{3,2} \gamma _{3,3}}\right. \nonumber \\&\left. {+3b_{3} \alpha _{3,1} \alpha _{3,2}^{2} \gamma _{3,3} +b_{3} \alpha _{3,2}^{3} \gamma _{3,3} ) } \right\} \end{aligned}$$
$$\begin{aligned} e_{6}&= \left\{ { 1-20(b_{3} \alpha _{2,1}^{3} \beta _{3,2} +b_{4} \alpha _{2,1}^{3} \beta _{4,2} +b_{4} \alpha _{3,1}^{3} \beta _{4,3} +3b_{4} \alpha _{3,1}^{2} \alpha _{3,2} \beta _{4,3} +3b_{4} \alpha _{3,1} \alpha _{3,2}^{2} \beta _{4,3}} \right. \nonumber \\&\quad +\,b_{4} \alpha _{3,2}^{3} \beta _{4,2} +\left. {b_{2} \alpha _{2,1}^{3} \gamma _{2,2} +b_{3} \alpha _{3,1}^{3}\gamma _{3,3} +3b_{3} \alpha _{3,1}^{2} \alpha _{3,2} \gamma _{3,3} +3b_{3} \alpha _{3,1} \alpha _{3,2}^{2} \gamma _{3,3}} \right. \nonumber \\&\left. \quad +\,b_{3} \alpha _{3,2}^{3} \gamma _{3,3} )\right\} \end{aligned}$$
$$\begin{aligned} e_{7}&= \left\{ { 1-40(b_{4} \alpha _{3,1} \alpha _{3,2} \beta _{2,1} \beta _{4,3} +b_{4} } \right. \alpha _{3,2}^{2} \beta _{2,1} \beta _{4,3} +b_{3} \alpha _{2,1}^{2} \beta _{3,2} \gamma _{1,1} +b_{4} \alpha _{2,1}^{2}\beta _{4,2} \gamma _{1,1} \nonumber \\&\quad +\,b_{4} \alpha _{3,1}^{2} \beta _{4,3} \gamma _{1,1} + b_{4} \alpha _{3,1} \alpha _{3,2} \beta _{4,3} \gamma _{1,1} +b_{4}\alpha _{3,1} \alpha _{3,2} \beta _{4,3} \gamma _{2,2} +b_{4} \alpha _{3,2}^{2} \beta _{4,3} \gamma _{2,2}\nonumber \\&\quad \left. { +\,b_{2} \alpha _{2,1}^{2} \gamma _{1,1} \gamma _{2,2} +b_{3} \alpha _{3,1} \alpha _{3,2} \beta _{2,1} \gamma _{2,2} +b_{3} \alpha _{3,2}^{2} \beta _{2,1} \gamma _{3,3} +b_{3} \alpha _{3,1}^{2} \gamma _{1,1} \gamma _{3,3}}\right. \nonumber \\&\quad \left. {+b_{3} \alpha _{3,1} \alpha _{3,2} \gamma _{2,2} \gamma _{3,3} + b_{4} \alpha _{3,2}^{2} \gamma _{2,2} \gamma _{3,3} ) } \right\} \end{aligned}$$
$$\begin{aligned} e_{8}&= \left\{ { 1-60(b_{4} \alpha _{2,1}^{2} \beta _{3,2} \beta _{4,3} +b_{3} \alpha _{2,1}^{2} \beta _{3,2} \gamma _{2,2} +b_{4} \alpha _{2,1}^{2}\beta _{4,2} \gamma _{2,2} +b_{2} \alpha _{2,1}^{2} \gamma _{2,2}^{2} } \right. \nonumber \\&\quad +\,b_{3}\alpha _{2,1}^{2} \beta _{3,2} \gamma _{3,3} +b_{4} \alpha _{3,1}^{2} \beta _{4,3} \gamma _{3,3} +2b_{4} \alpha _{3,1}\alpha _{3,2} \beta _{4,3} \gamma _{3,3} +b_{4} \alpha _{3,2}^{2} \beta _{4,3} \gamma _{3,3}\nonumber \\&\quad \left. { +\,b_{3} \alpha _{3,1}^{2} \gamma _{3,3}^{2} +2b_{3}\alpha _{3,1} \alpha _{3,2} \gamma _{3,3}^{2} +b_{3} \alpha _{3,2}^{2} \gamma _{3,3}^{2} +b_{4} \alpha _{2,1}^{2}\beta _{4,2} \gamma _{4,4} +b_{4} \alpha _{3,1}^{2} \beta _{4,3} \gamma _{4,4}} \right. \nonumber \\&\quad \left. +\,2b_{4} \alpha _{3,1} \alpha _{3,2} \beta _{4,3} \gamma _{4,4}+b_{4} \alpha _{3,2}^{2} \beta _{4,3} \gamma _{4,4} ) \right\} \end{aligned}$$
$$\begin{aligned} e_{9}&= \left\{ { 1-120(b_{4} \beta _{2,1} \beta _{3,2} \beta _{4,3}\gamma _{1,1} +b_{3} \beta _{2,1} \beta _{3,2} \gamma _{1,1}^{2} +b_{4}\beta _{2,1} \beta _{4,2} \gamma _{1,1}^{2} +b_{4} \beta _{3,1} \beta _{4,3} \gamma _{1,1}^{2} } \right. \nonumber \\&\quad +\,b_{2} \beta _{2,1} \gamma _{1,1}^{3}+b_{3} \beta _{3,1} \gamma _{1,1}^{3} +b_{4} \beta _{4,1} \gamma _{1,1}^{3}+b_{1} \gamma _{1,1}^{4} +b_{4} \beta _{2,1} \beta _{3,2} \beta _{4,3}\gamma _{2,2} \nonumber \\&\quad +\,b_{3} \beta _{2,1} \beta _{3,2} \gamma _{1,1} \gamma _{2,2} +b_{4} \beta _{2,1} \beta _{4,2} \gamma _{1,1} \gamma _{2,2}+b_{2} \beta _{2,1} \gamma _{1,1}^{2} \gamma _{2,2} +b_{3} \beta _{2,1}\beta _{3,2} \gamma _{2,2}^{2}\nonumber \\&\quad +\,b_{4} \beta _{2,1} \beta _{4,2} \gamma _{2,2}^{2} +b_{4} \beta _{3,2} \beta _{4,3} \gamma _{2,2}^{2} +b_{2} \beta _{2,1} \gamma _{1,1} \gamma _{2,2}^{2} +b_{2} \beta _{2,1} \gamma _{2,2}^{3} +b_{3} \beta _{3,2} \gamma _{2,2}^{3}\nonumber \\&\quad +\,b_{4} \beta _{4,2} \gamma _{2,2}^{3}+b_{2} \gamma _{2,2}^{4} +b_{4} \beta _{2,1} \beta _{3,2} \beta _{4,3}\gamma _{3,3} +b_{3} \beta _{2,1} \beta _{3,2} \gamma _{1,1} \gamma _{3,3} +b_{4} \beta _{3,1} \beta _{4,3} \gamma _{1,1} \gamma _{3,3}\nonumber \\&\quad +\,b_{3} \beta _{3,1} \gamma _{1,1}^{2} \gamma _{3,3} +b_{3} \beta _{2,1}\beta _{3,2} \gamma _{2,2} \gamma _{3,3} +b_{4} \beta _{3,2} \beta _{4,3} \gamma _{2,2} \gamma _{3,3} +b_{3} \beta _{3,2} \gamma _{2,2}^{2}\gamma _{3,3} \nonumber \\&\quad +\,b_{3} \beta _{2,1} \beta _{3,2} \gamma _{3,3}^{2}+b_{4} \beta _{3,1} \beta _{4,3} \gamma _{3,3}^{2} +b_{4} \beta _{3,2}\gamma _{4,3} \gamma _{3,3}^{2} +b_{3} \beta _{3,1} \gamma _{1,1} \gamma _{3,3}^{2} +b_{3} \beta _{3,2} \gamma _{2,2} \gamma _{3,3}^{2} \nonumber \\&\quad +\,b_{3} \beta _{3,1} \gamma _{3,3}^{3} +b_{3} \beta _{3,2} \gamma _{3,3}^{3}+b_{4} \beta _{4,3} \gamma _{3,3}^{3} +b_{3} \gamma _{3,3}^{4} +b_{4} \beta _{2,1} \beta _{3,2} \beta _{4,3} \gamma _{4,4} \nonumber \\&\quad 
+\,b_{4} \beta _{2,1}\beta _{4,2} \gamma _{1,1} \gamma _{4,4} +b_{4} \beta _{3,1} \beta _{4,3} \gamma _{1,1} \gamma _{4,4}+b_{4} \beta _{3,1} \beta _{4,3} \gamma _{1,1} \gamma _{4,4} +b_{4}\beta _{4,1} \gamma _{1,1}^{2} \gamma _{4,4} \nonumber \\&\quad +\,b_{4} \beta _{2,1} \beta _{4,2} \gamma _{2,2} \gamma _{4,4} +b_{4} \beta _{3,2} \beta _{4,3} \gamma _{3,3} \gamma _{4,4} +b_{4} \beta _{4,3} \gamma _{3,3}^{2} \gamma _{4,4}+b_{4} \beta _{2,1} \beta _{4,2} \gamma _{4,4}^{2}\nonumber \\&\quad +\,b_{4} \beta _{3,1}\beta _{4,3} \gamma _{4,4}^{2} +b_{4} \beta _{3,2} \beta _{4,3} \gamma _{4,4}^{2} +b_{4} \beta _{4,1} \gamma _{1,1} \gamma _{4,4}^{2} +b_{4} \beta _{4,2} \gamma _{2,2} \gamma _{4,4}^{2} \nonumber \\&\quad +\,b_{4} \beta _{4,2} \gamma _{2,2} \gamma _{4,4}^{2} +b_{4} \beta _{4,3}\gamma _{3,3} \gamma _{4,4}^{2} +b_{4} \beta _{4,1} \gamma _{4,4}^{3} +b_{4}\beta _{4,2} \gamma _{4,4}^{3} +b_{4} \beta _{4,3} \gamma _{4,4}^{3} +b_{4}\gamma _{4,4}^{4} \left. {) } \right\} \nonumber \\ \end{aligned}$$

The derivative terms are computed and given as follows:

$$\begin{aligned}&f_{xxxx} +4f_{xxxy} f+6f_{xxyy} f^{2}+4f_{xyyy} f^{3}+f_{yyyy} f^{4} \quad \mathrm{for}\quad t \in t_{61} \\&f_{xxy} f_{x} +2f_{xyy} f_{x} f+f_{xxy} f_{y} f+2f_{xyy} f_{y} f^{2}+f_{yyy} f_{x} f^{2}+f_{yyy} f_{y} f^{3}\quad \mathrm{for}\quad t\in t_{62} \\&f_{xy} f_{xx} +2f_{xy}^{2} f+f_{yy} f_{xx} f^{2}+2f_{xy} f_{yy} f^{2}+f_{yy} f_{xy} f^{2}+f_{yy}^{2} f^{3}\quad \mathrm{for}\quad t\in t_{63} \\&f_{xy} f_{y} f_{x} +f_{xy} f_{y}^{2} f+f_{yy} f_{y} f_{x} f+f_{yy} f_{y}^{2} f^{2}\quad \mathrm{for}\quad t\in t_{64} \\&f_{yy} f_{x}^{2} +2f_{yy} ff_{y} f_{x} +f_{yy} f_{y}^{2} f^{2}\quad \mathrm{for}\quad t\in t_{65} \\&f_{y} f_{xxx} +3f_{y} ff_{xxy} +3f_{xyy} f_{y} f^{2}+f_{y} f_{yyy} f^{3}\quad \mathrm{for}\quad t\in t_{66} \\&f_{xy} f_{y} f_{x} +f_{xy} f_{y}^{2} f+f_{yy} f_{y} f_{x} f+f_{yy} f_{y}^{2} f^{2}\quad \mathrm{for}\quad t\in t_{67} \\&f_{y}^{2} f_{xx} +2f_{xy} f_{y}^{2} f+f_{yy} f_{y}^{2} f^{2}\quad \mathrm{for}\quad t\in t_{68} \\&f_{y}^{3} f_{x} +f_{y}^{4} f\quad \mathrm{for}\quad t\in t_{69}. \end{aligned}$$

Using Eqs. (21)–(29) and the derivative terms (given in Table 1), the LTE for the NPROS4(4) is computed. In this manner, the expression of LTE for a parallel fourth order four stage Rosenbrock algorithm is obtained as follows.

$$\begin{aligned} \mathrm{LTE}&= \frac{h^{5}}{5!}\left\{ {\left( e_{1}\right) \left( f_{xxxx} +4f_{xxxy}f+6f_{xxyy} f^{2}+4f_{xyyy} f^{3}+f_{yyyy} f^{4}\right) } \right. \\&\quad +\left( 6e_{2}\right) \left( f_{xxy} f_{x} +\! 2f_{xyy} f_{x} f+f_{xxy} f_{y} f+2f_{xyy} f_{y}f^{2}+f_{yyy} f_{x} f^{2}+f_{yyy} f_{y} f^{3} \right) \\&\quad +\left( 4e_{4}\right) \left( f_{xy} f_{y} f_{x} +f_{xy} f_{y}^{2} f+f_{yy} f_{y} f_{x} f+f_{yy}f_{y}^{2} f^{2} \right) +\left( 3e_{5}\right) \left( f_{yy} f_{x}^{2}\right. \\&\left. \quad + 2f_{yy} ff_{y} f_{x} +f_{yy} f_{y}^{2} f^{2}\right) +\left( e_{6}\right) \left( f_{y} f_{xxx} +3f_{y}ff_{xxy} +3f_{xyy} f_{y} f^{2}\right. \\&\left. \quad +f_{y} f_{yyy} f^{3} \right) +\left( 3e_{7}\right) \left( f_{xy} f_{y} f_{x} +f_{xy} f_{y}^{2} f+f_{yy}f_{y} f_{x} f+f_{yy} f_{y}^{2} f^{2}\right) \\&\quad \left. { +\left( e_{8}\right) \left( f_{y}^{2} f_{xx} +2f_{xy} f_{y}^{2} f+f_{yy} f_{y}^{2} f^{2}\right) +\left( e_{9}\right) \left( f_{y}^{3} f_{x} +f_{y}^{4} f\right) } \right\} \end{aligned}$$

The values of \(e_{1},\,e_{2},\ldots ,e_{9}\) are calculated by using the values of the parameters of NPROS4(4) \(\alpha _{21},\,\alpha _{31},\,\alpha _{32},\,\alpha _{41},\,\alpha _{42},\,\alpha _{43},\,\gamma _{11},\,\gamma _{21},\,\gamma _{22},\,\gamma _{31},\,\gamma _{32},\,\gamma _{33},\,\gamma _{41},\,\gamma _{42},\,\gamma _{43}\) and \(b_{1},\,b_{2},\,b_{3},\,b_{4}.\)

By using the values given below,

$$\begin{aligned}&\alpha _{21} =\frac{1}{2},\quad \alpha _{31} =\frac{1}{2},\quad \alpha _{32}=\frac{1}{2},\quad \alpha _{41} =-0.178278,\quad \gamma _{11} =\frac{1}{6},\quad \gamma _{22} =\frac{2}{6},\quad \gamma _{33} =\frac{2}{6} \\&\gamma _{44} =\frac{1}{6},\quad \alpha _{42} =0.219945,\quad \alpha _{43} =\frac{1}{3},\quad \gamma _{21}=-0.12642566,\quad \gamma _{31} =-0.5 \\&\gamma _{32} =-1.22102225482,\quad b_{1} =\frac{43}{768},\quad b_{2} =\frac{83}{192},\quad b_{3} =\frac{137}{768},\quad b_{4}=\frac{1}{3} \end{aligned}$$

the LTE of NPROS4(4) is obtained as,

$$\begin{aligned}&\frac{1}{120}h^{5}\left( 0.259259259 \left( f_{x} f_{y}^{3} +ff_{y}^{4} \right) -0.13564066\left( f_{xx} f_{y}^{2} +2ff_{xy} f_{y}^{2} +f^{2}f_{y}^{2} f_{yy}\right) \right. \nonumber \\&\quad -1.8948329844\left( f_{x} f_{xy} f_{y} +ff_{xy} f_{y}^{2} +ff_{x} f_{y} f_{yy}+f^{2}f_{y}^{2} f_{yy}\right) \nonumber \\&\quad +\frac{1,363}{512}\left( f_{xx} f_{xy} +2ff_{xy}^{2}+f^{2}f_{xx} f_{yy} +3f^{2}f_{xy} f_{yy} +f^{3}f_{yy}^2 \right) \nonumber \\&\quad -0.5685019\left( f_{x} f_{xxy} +2ff_{x} f_{xyy} +ff_{xxy} f_{y} +2f^{2}f_{xyy}f_{y}+f^{2}f_{x} f_{yyy} +f^{3}f_{y} f_{yyy}\right) \nonumber \\&\quad -\frac{83\left( f_{xxxx} +4ff_{xxxy}+6f^{2}f_{xxyy} +4f^{3}f_{xyyy} +f^{4}f_{yyyy} \right) }{3,072} \nonumber \\&\quad -0.214438\left( f_{xxx} f_{y} +f^{3}f_{yyy} f_{y} +3f^{2}f_{xyy} f_{y} +3f_{y} ff_{xxy}\right) \nonumber \\&\quad \left. -0.643315606\left( f_{x}^{2} f_{yy} +f^{2}f_{y}^{2} f_{yy} +2f_{x} f_{yy} ff_{y}\right) \right) \end{aligned}$$

LTE Bound

If we assume the following bounds for f and its partial derivatives for \(x \in [x_{0},\,x_{n}]\) and \(y \in (-\infty ,\,\infty )\), following Lotkin [28] and Ponalagusamy et al. [4], then we write

$$\begin{aligned} |f(x,\,y)|&< \mathrm{Q} \nonumber \\ \left| {\frac{\partial ^{i+j}f(x,\,y)}{\partial x^{i}\partial y^{j}}} \right|&< \frac{P^{i+j}}{Q^{j-1}}, \quad i+j\le p \end{aligned}$$

where P and Q are positive constants and p is the order of the method; here p = 4. Using Eq. (31), the derivatives in Eq. (30) are bounded as

$$\begin{aligned} \left| {f_{x} f_{y}^{3} } \right|&< \frac{P}{Q^{-1}}\times \left( \frac{P^{3}}{Q^{0}}\right) =P^{4}Q \\ \left| {ff_{y}^{4} } \right|&< Q\times \left( \frac{P}{Q^{1-1}}\right) ^{4}=P^{4}Q \\ \left| {f_{xx} f_{y}^{2} } \right|&< \frac{P^{2}}{Q^{0-1}}\times \frac{P^{2}}{(Q^{0})^{2}}=P^{4}Q \end{aligned}$$

and so on.
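Every product of derivatives appearing in Eq. (30) collapses to the same bound \(P^{4}Q\). The bookkeeping can be sketched numerically as follows; the function name and the sample values of P and Q are illustrative, not from the text:

```python
def deriv_bound(i, j, P, Q):
    """Lotkin-type bound of Eq. (31): |d^{i+j} f / dx^i dy^j| < P**(i+j) / Q**(j-1)."""
    return P**(i + j) * Q**(1 - j)

P, Q = 2.0, 3.0
# |f_x * f_y^3| < (P*Q) * P**3 = P**4 * Q
print(deriv_bound(1, 0, P, Q) * deriv_bound(0, 1, P, Q)**3)
# |f_xx * f_y^2| < (P**2*Q) * P**2 = P**4 * Q
print(deriv_bound(2, 0, P, Q) * deriv_bound(0, 1, P, Q)**2)
```

Both products evaluate to \(P^{4}Q\), as claimed above.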

Therefore, we obtain from Eq. (30),

$$\begin{aligned} \mathrm{LTE}_{\mathrm{NPROS4}(4)} \le \frac{{4.4244256673333}h^{5}P^{4}Q}{120} \end{aligned}$$

This LTE bound is useful in selecting a suitable h according to the error at each step. If we select an error tolerance TOL = 0.0001, then the step-size h can be chosen so that

$$\begin{aligned} \frac{{4.4244256673333}h^{5}P^{4}Q}{120}<\mathrm{TOL} \quad \mathrm{or} \quad h<\left( {\frac{{27.1222}\cdot \mathrm{TOL}}{\mathrm{P}^{{4}}Q}} \right) ^{\frac{1}{5}} \end{aligned}$$
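The step-size selection above is straightforward to automate; a minimal sketch (the function name and the sample values of P and Q are illustrative, not from the text):

```python
# LTE-bound constant of NPROS4(4); note 120 / 4.4244256673333 ≈ 27.1222
C = 4.4244256673333

def initial_step(tol, P, Q):
    """Largest h satisfying C*h**5*P**4*Q/120 < tol, i.e. Eq. (33):
    h < (120*tol / (C*P**4*Q))**(1/5) = (27.1222*tol / (P**4*Q))**(1/5)."""
    return (120.0 * tol / (C * P**4 * Q)) ** 0.2

h = initial_step(1e-4, P=2.0, Q=1.0)  # illustrative Lotkin bounds P, Q
```

Any h slightly below the returned value keeps the estimated LTE under the tolerance.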

Convergence and Stability of NPROS4(4)

To analyze the convergence properties, we write the method given in Eq. (16) as a general multivalue method of the form (Li [29])

$$\begin{aligned} y^{(n)}=Ay^{(n-1)}+h\varphi \left( y^{(n-1)},\,h\right) \end{aligned}$$


$$\begin{aligned} y^{(n)}&= \left( k_{1,n}^{T},\,k_{2,n}^{T},\,k_{3,n}^{T},\,k_{4,n}^{T},\,y_{n+1}^{T}\right) ^{T} \nonumber \\ A&= \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 1 \\ \end{array} }} \right] \quad \varphi \left( {y^{(n-1)},\,h}\right) =\left( {{\begin{array}{c} {\begin{array}{l} \varphi _{1} \left( y^{(n-1)},\,h\right) \\ \varphi _{2}\left( y^{(n-1)},\,h\right) \\ \varphi _{3}\left( y^{(n-1)},\,h\right) \\ \end{array}} \\ {\varphi _{4} \left( y^{(n-1)},\,h\right) } \\ {\sum \nolimits _{i=1}^{4} {b_{i} \varphi _{i}\left( y^{(n-1)},\,h\right) } } \\ \end{array} }} \right) \end{aligned}$$


$$\begin{aligned} \varphi _{i} \left( y^{(n-1)},\,h\right)&= \left( I-h\gamma _{ii} J\right) ^{-1}\left( f\left( y_{n}+\sum _{j=1}^{i-1} {\alpha _{ij}} k_{j,n-1}\right) +J\sum _{j=1}^{i-1} {\beta _{ij}} k_{j,n-1}\right) , \\&i=1,\,2,\,3,\,4 \end{aligned}$$
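The stage computations above can be sketched generically. The helper below implements one step of an s-stage linearly implicit (Rosenbrock-type) scheme for an autonomous system; the strictly lower-triangular coefficient layout mirrors the formulas above, but scaling and sign conventions vary between papers, so this is an illustrative sketch rather than the authors' implementation:

```python
import numpy as np

def rosenbrock_step(f, jac, y, h, alpha, gamma, b):
    """One step of an s-stage Rosenbrock-type method for autonomous y' = f(y):
        (I - h*gamma[i][i]*J) k_i = h*f(y + sum_{j<i} alpha[i][j]*k_j)
                                    + h*J @ sum_{j<i} gamma[i][j]*k_j
        y_next = y + sum_i b[i]*k_i
    Each stage needs only one linear solve -- no Newton iteration."""
    n = len(y)
    J = jac(y)          # Jacobian evaluated once per step
    I = np.eye(n)
    k = []
    for i in range(len(b)):
        inc = np.zeros(n)   # argument shift for f
        lin = np.zeros(n)   # Jacobian correction term
        for j in range(i):
            inc += alpha[i][j] * k[j]
            lin += gamma[i][j] * k[j]
        rhs = h * f(y + inc) + h * (J @ lin)
        k.append(np.linalg.solve(I - h * gamma[i][i] * J, rhs))
    return y + sum(b[i] * k[i] for i in range(len(b)))
```

With s = 1, \(\gamma_{11} = 1\), \(b_{1} = 1\) this reduces to the linearly implicit Euler method.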

It is seen that the minimal polynomial \((\lambda -1)\) of the matrix A satisfies the root condition; therefore the method is convergent, and its convergence order equals its consistency order.

To investigate the numerical stability properties, we use the test equation \(y^{\prime }=\lambda y\), \(h >0\), and set \(z=\lambda h\). The stability function of NPROS4(4) is obtained as

$$\begin{aligned} \frac{(-12.4473+z)(-2.31466+z)(11.2456+z(5.76191+z))}{(18+(-9+z)z)^{2}} \end{aligned}$$

The corresponding stability region (the white portion) is illustrated in Fig. 1. It is found that NPROS4(4) is strongly A-stable. The stability region of the existing three-stage fourth-order method MPROW is also presented in Fig. 2. We observe that the stability region of NPROS4(4) includes the imaginary axis, unlike that of MPROW. It is also possible to find other sets of parameters for the new method NPROS4(4) through the steps given in Sect. 3, and one can determine the intervals of stability for various values of \({\gamma }_{ii}\), which allows comparisons among the various parameter sets.
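The printed stability function can be spot-checked numerically. A small sketch (coefficients copied as printed above; they are rounded, so the consistency condition R(0) = 1 holds only approximately):

```python
def R(z):
    """Stability function of NPROS4(4), as printed (rounded coefficients).
    Accepts real or complex z."""
    num = (-12.4473 + z) * (-2.31466 + z) * (11.2456 + z * (5.76191 + z))
    den = (18.0 + (-9.0 + z) * z) ** 2
    return num / den

print(abs(R(0.0)))   # consistency check: should be close to 1
print(abs(R(-1.0)))  # a point inside the stability region: |R| < 1
```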

Fig. 1 Stability region of NPROS4(4)

Fig. 2 Stability region of MPROW of Cao et al. [16]

Numerical Experimentation

We present the results obtained for the integration of three problems of practical interest. The following adaptive procedure is adopted for error control in all examples.

Adaptive Procedure

(1) Initially, h is selected using Eq. (33) and a local error tolerance \(\epsilon = 10^{-4}\) is imposed.

(2) The algorithm defined by Eq. (16) is implemented, and the LTE is estimated using Eq. (30).

(3) Let h be the current step-size and \(h_{1}\) the step-size chosen for the next step. The relationship between h and \(h_{1}\) is as follows:

    (a) If \(\mathrm{LTE}> \epsilon\), set \(h_{1}=h/2\) and redo the step.

    (b) If \(\epsilon> \mathrm{LTE} > \epsilon /20\), set \(h_{1}=h\).

    (c) If \(\mathrm{LTE} < \epsilon /20\), set \(h_{1} = 2h\).
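Steps (a)-(c) above amount to a simple halving/doubling controller; a minimal sketch (the function name and return convention are illustrative):

```python
def next_step(h, lte, eps=1e-4):
    """Step-size update of the adaptive procedure:
    reject and halve if LTE > eps; accept and double if LTE < eps/20;
    otherwise accept and keep h.  Returns (h_next, accepted)."""
    if lte > eps:
        return h / 2.0, False   # (a) redo the step with h/2
    if lte < eps / 20.0:
        return 2.0 * h, True    # (c) accept and double
    return h, True              # (b) accept and keep h
```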

Example 1

As a first example to verify the sharpness of our error bounds, we consider Robertson's chemical reaction problem. The problem consists of a stiff system of three non-linear ODEs and describes the kinetics of an autocatalytic reaction. It is very popular in numerical studies and is often used as a test problem in comparisons of stiff integrators (Edsberg [30]).

Mathematical Description of the Problem

The problem is of the form

$$\begin{aligned} \left\{ {\begin{array}{l} y^{\prime }=f(y) \\ y(0)=y_{0},\quad 0\le t\le T \\ \end{array}} \right. \end{aligned}$$


$$\begin{aligned} f(y)=\left( {{\begin{array}{c} {-0.04y_{1} +10^{4}y_{2} y_{3} } \\ {0.04y_{1} -10^{4}y_{2} y_{3} -3\times 10^{7}y_{2}^{2} } \\ {3\times 10^{7}y_{2}^{2} } \\ \end{array} }} \right) ,\quad y_{0} =\left( {{\begin{array}{c} 1 \\ 0 \\ 0 \\ \end{array} }} \right) \end{aligned}$$

The large difference among the reaction rate constants is the reason for stiffness. At \(t = 0,\) the three eigenvalues of the Jacobian are \({\uplambda }_{1}={\uplambda }_{2 }= 0\) and \({\uplambda }_{3 }= -0.04\); for large t they approach the limits \({\uplambda }_{1}={\uplambda }_{2}=0\) and \({\uplambda }_{3}= -10^{4}.\) The stiffness ratio is \(10^{4}.\)
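For reference, the right-hand side and Jacobian of the Robertson system transcribe directly from the equations above, and the eigenvalue claim at t = 0 is easy to verify numerically (a sketch, not the authors' code):

```python
import numpy as np

def robertson(t, y):
    """Right-hand side of the Robertson problem."""
    y1, y2, y3 = y
    return np.array([
        -0.04 * y1 + 1e4 * y2 * y3,
         0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2**2,
                                     3e7 * y2**2,
    ])

def robertson_jac(t, y):
    """Analytic Jacobian df/dy."""
    y1, y2, y3 = y
    return np.array([
        [-0.04,             1e4 * y3,   1e4 * y2],
        [ 0.04, -1e4 * y3 - 6e7 * y2,  -1e4 * y2],
        [ 0.0,              6e7 * y2,        0.0],
    ])

# at t = 0, y0 = (1, 0, 0) the eigenvalues are 0, 0, -0.04 as stated above
print(np.sort(np.linalg.eigvals(robertson_jac(0.0, [1.0, 0.0, 0.0])).real))
```

Note also that the components of f sum to zero, reflecting conservation of total mass in the reaction.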

The reference solution, computed in double precision by RADAU on an Alphaserver DS20E with a 667 MHz EV67 processor (Mazzia and Iavernaro [31]), is assumed to be exact. Using adaptive step-size h, the computed results for some of the solution steps over the interval [0, 400] are listed in Table 1. The errors in the solutions obtained by NPROS4(4) and MPROW are illustrated in Figs. 3, 4 and 5.

Fig. 3 Error using NPROS4(4) and MPROW for the first 10 steps of Example 1

Fig. 4 Error using NPROS4(4) for Example 1

Fig. 5 Error using MPROW for Example 1

Example 2

We consider a weakly damped oscillatory stiff system of differential equations given by

$$\begin{aligned} \left\{ {\begin{array}{l} y^{\prime }=Ay \\ y(0)=y_0,\quad 0\le t\le 10 \\ \end{array}} \right. \end{aligned}$$


$$\begin{aligned} A=\left( {{\begin{array}{ccc} {-0.01}&{} {-1}&{} {-1} \\ 2&{} {-100.005}&{} {99.995} \\ 2&{} {99.995}&{} {-100.005} \\ \end{array} }} \right) ,\quad y_{0} =\left( {{\begin{array}{c} 1 \\ 2 \\ 0 \\ \end{array} }} \right) \end{aligned}$$

The exact solution of the system is

$$\begin{aligned} \begin{array}{l} y_{1} (t)=e^{-0.01t}(\cos 2t-\sin 2t) \\ y_{2} (t)=e^{-0.01t}(\cos 2t+\sin 2t)+e^{-200t} \\ y_{3} (t)=e^{-0.01t}(\cos 2t+\sin 2t)-e^{-200t}. \\ \end{array} \end{aligned}$$
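The closed-form solution can be checked against the system numerically; a small sketch (the finite-difference residual check and the step d are illustrative):

```python
import numpy as np

A = np.array([
    [-0.01,     -1.0,      -1.0],
    [ 2.0,  -100.005,    99.995],
    [ 2.0,    99.995,  -100.005],
])

def y_exact(t):
    """Closed-form solution quoted above."""
    c, s = np.cos(2.0 * t), np.sin(2.0 * t)
    slow, fast = np.exp(-0.01 * t), np.exp(-200.0 * t)
    return np.array([slow * (c - s), slow * (c + s) + fast, slow * (c + s) - fast])

# residual of y' = A y at t = 0.7 via a central difference
d, t = 1e-6, 0.7
residual = (y_exact(t + d) - y_exact(t - d)) / (2.0 * d) - A @ y_exact(t)
print(np.max(np.abs(residual)))  # should be tiny

# the fast mode: (0, 1, -1) is an eigenvector of A with eigenvalue -200
print(A @ np.array([0.0, 1.0, -1.0]))
```

The eigenvalue \(-200\) against the slow pair \(-0.01 \pm 2i\) is exactly what makes the system stiff yet weakly damped.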

The solutions using NPROS4(4) and MPROW are presented in Table 2. Figures 6, 7 and 8 represent the errors in solutions computed by the two methods.

Fig. 6 Error using NPROS4(4) and MPROW for Example 2

Fig. 7 Error using NPROS4(4) for Example 2

Fig. 8 Error using MPROW for Example 2

Table 2 NPROS4(4) and MPROW solutions for Example 2

Example 3

The problem consists of a stiff system of three non-linear ODEs and refers to the popular Oregonator model. Field et al. [32] proposed this model for the Belousov–Zhabotinskii (BZ) reaction, formulating the most important parts of the kinetic mechanism that give rise to oscillations in the BZ reaction. This mechanism can be summarized as three concurrent processes (Gray [33]).

Mathematical Description of the Problem

The problem is of the form

$$\begin{aligned} \left\{ {\begin{array}{l} y^{\prime }=f(y) \\ y(0)=y_{0},\quad 0\le t\le 360 \\ \end{array}} \right. \end{aligned}$$


$$\begin{aligned} f(y)=\left( {{\begin{array}{c} {77.27\left( y_{2} -y_{1} y_{2} +y_{1} -8.375\times 10^{-6}y_{1}^{2}\right) } \\ {\frac{1}{77.27}\left( -y_{2} -y_{1} y_{2} +y_{3}\right) } \\ {0.161\left( y_{1} -y_{3}\right) } \\ \end{array} }} \right) ,\quad y_{0} =\left( {{\begin{array}{c} 1 \\ 2 \\ 3 \\ \end{array} }} \right) \end{aligned}$$
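The right-hand side above transcribes directly; a minimal sketch, with a quick evaluation at the initial value as a sanity check:

```python
import numpy as np

def oregonator(t, y):
    """Right-hand side of the Oregonator model."""
    y1, y2, y3 = y
    return np.array([
        77.27 * (y2 - y1 * y2 + y1 - 8.375e-6 * y1**2),
        (-y2 - y1 * y2 + y3) / 77.27,
        0.161 * (y1 - y3),
    ])

print(oregonator(0.0, [1.0, 2.0, 3.0]))
```

The widely different rate constants (77.27, 1/77.27 and 0.161) are the source of the stiffness.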

The reference solution, computed in double precision by RADAU on an Alphaserver DS20E with a 667 MHz EV67 processor, is considered as the exact solution (Mazzia and Magherini [34]).

This problem has been used as a tool to test stiff solvers by several researchers active in the field of stiff differential equations; useful discussions of it can be found in Cash [12].

The problem is solved using NPROS4(4) and MPROW, and the results are tabulated in Table 3. The errors computed using NPROS4(4) and MPROW are presented in Figs. 9, 10 and 11: Fig. 9 shows the errors for both methods, while Figs. 10 and 11 show the errors for NPROS4(4) and MPROW, respectively.

Fig. 9 Error using NPROS4(4) and MPROW for Example 3

Fig. 10 Error using NPROS4(4) for Example 3

Fig. 11 Error using MPROW for Example 3

Table 3 NPROS4(4) and MPROW solutions for Example 3


In the case of Example 1, the total number of steps to reach the solution at \(t = 400\) by NPROS4(4) is 185, which is much smaller than the 239 steps taken by MPROW. When the problem is run sequentially, each step requires four function evaluations, so \(4\times 185 = 740\) function evaluations are needed to complete the integration. It is of interest to point out that on a parallel machine the function evaluations can be performed concurrently, and hence the integration can be completed quickly with 185 function evaluations. Further, Table 1 and Figs. 3, 4 and 5 show that NPROS4(4) is a better stiff solver than MPROW for Robertson's chemical reaction problem (Example 1) in view of faster convergence towards the solution and error control. One can see that the present method NPROS4(4) reduces the number of function evaluations in comparison with MPROW.

It is observed from Example 2 that both NPROS4(4) and MPROW perform well, with solutions correct to 12 decimal places. However, the number of steps taken to integrate the differential equations is considerably smaller for NPROS4(4): it takes 41 steps using adaptive step-size, whereas MPROW takes 97 steps in the interval [0, 10]. Hence, the number of function evaluations performed by NPROS4(4) on a parallel machine remains 41, unlike MPROW, which requires 97. Numerical solutions by NPROS4(4) and MPROW and the exact solutions for the weakly damped oscillatory stiff differential equations are tabulated in Table 2, and the corresponding errors are shown in Figs. 6, 7 and 8.

Another important observation, from Example 3, is that NPROS4(4) requires 248 steps to complete the integration over [0, 360], whereas MPROW requires 791 steps over the same interval. The solutions are given in Table 3 and the errors are plotted in Figs. 9, 10 and 11. It is pertinent to note that the newly developed NPROS4(4) is more suitable and efficient for solving the Oregonator model than MPROW (Table 4).

Table 4 NPROS4(4) and MPROW solutions for Example 3

Further, in a p-processor environment with \(p = s = 4\), NPROS4(4) requires s function evaluations, one Jacobian evaluation, and one LU decomposition per stage. Since one LU decomposition requires \(O(s^{3})\) operations for input size s, the speed-up is calculated as follows:

$$\begin{aligned} \mathrm{S }=\frac{t_{s} }{t_{p=4} }=\frac{s\times O(s^{3})}{O(s^{3})}=s \end{aligned}$$

where the sequential time \(t_{s}\) is the time elapsed between the beginning and the end of the algorithm execution, the parallel time \(t_{p}\) is the time elapsed between the start of the first processor and the finish of the last processor during the execution of the algorithm, and p is the number of processors used.

$$\begin{aligned} \mathrm{Efficiency\,of\,the\,new\,algorithm}=\frac{t_{s} }{4t_{p} }=\frac{s\times O(s^{3})}{s\times O(s^{3})}=1. \end{aligned}$$

Hence perfect speed-up is achievable for this method where \(p =s = 4.\)


We have presented a new approach for constructing an A-stable parallel fourth-order four-stage Rosenbrock method and examined its convergence and stability. The proposed approach allows us to derive the order conditions for high-order methods in a systematic and simple way, and the LTE has been derived. Numerical experimentation demonstrates the efficiency of the method: NPROS4(4) computes the solution of strongly stiff systems of non-linear differential equations with smaller error than MPROW. The new method can further be generalized to a parallel machine of \(2^{n}\) processors, for any n.