1 Introduction

It is the purpose of this work to derive new explicit two-step s-stage peer methods with improved stability properties for the numerical solution of first-order ODEs of the type

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{aligned} y'(t)=&{}f\big (t,y(t)\big ), \quad f:\mathbbm {R}\times \mathbbm {R}^d \rightarrow \mathbbm {R}^d,\\ y(t_0)=&{}y_0, \quad t \in [t_0,T],\quad y_0 \in \mathbbm {R}^d. \end{aligned} \end{array}\right. } \end{aligned}$$
(1)

Peer methods were first introduced in their linearly implicit form (Schmitt and Weiner 2004). Since then, explicit (Horváth et al. 2015; Jebens et al. 2008; Klinge et al. 2017; Weiner et al. 2009), implicit (Jebens et al. 2011; Kulikov and Weiner 2018; Schneider et al. 2017), implicit–explicit (Schneider et al. 2021; Soleimani and Weiner 2017, 2018), and parallelizable (Kulikov and Weiner 2010; Schmitt and Weiner 2010; Schmitt et al. 2009; Weiner et al. 2012) peer methods have been derived. There are also specific techniques that allow one to build peer methods adapted to the problem at hand. For example, if it is a priori known that the analyzed problem has an oscillating solution (Budroni et al. 2021a, b), the Exponential Fitting (EF) technique (Ixaru 1997; Ixaru and Vanden Berghe 2004) leads to peer methods with coefficients depending on the oscillation frequency (Conte et al. 2018, 2019, 2020b), which are much more accurate than the classical ones.

Peer methods employ several stages per step, like Runge–Kutta schemes, but unlike the latter, all stages share the same accuracy and stability properties. They were introduced with the aim of combining the advantages of Runge–Kutta and multistep methods. In particular, peer methods do not suffer from order reduction when applied to highly stiff ODE systems. Furthermore, they are suitable for parallel implementation, since the stages of the current step can be made to depend only on those of the previous one.

In this paper, we focus on explicit peer methods with fixed step-size h. Explicit methods are less expensive than implicit ones, but usually have worse stability properties. For this reason, the main aim of our work is the derivation of new peer methods that are still explicit, but with better stability properties than the classical ones. Considering a time discretization {\(t_n\), \(n=1, ..., N\)} of the integration interval [\(t_0, T\)] related to the ODE (1), classical explicit peer methods can be expressed as

$$\begin{aligned} Y^{[n+1]}=(B\otimes I_d)Y^{[n]}+h(A\otimes I_d)F(Y^{[n]})+h(R\otimes I_d)F(Y^{[n+1]}), \end{aligned}$$
(2)

where \(A=(a_{i,j})_{i,j=1}^s\), \(B=(b_{i,j})_{i,j=1}^s\) and \(R=(r_{i,j})_{i,j=1}^s\) are matrices containing the coefficients of the method. \(I_d\) is the identity matrix of order d. In our case, since we analyze explicit methods, R is strictly lower triangular, i.e., \(r_{i,j}=0\) \(\forall i\le j\). The notation used to represent these methods in vectorial form (2) is the following:

$$\begin{aligned}&Y^{[n]}=(Y_{n,i})_{i=1}^s, \quad F(Y^{[n]})=\big (f(t_{n,i},Y_{n,i})\big )_{i=1}^s,\\&Y_{n,i} \approx y(t_{n,i}), \quad t_{n,i}=t_n+h c_i, \end{aligned}$$

where the nodes \(c_i\) are assumed to be distinct with \(c_s=1\). The stages \(Y^{[n]}\), and therefore \(F(Y^{[n]})\), are column vectors.
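As a concrete illustration, one step of the scheme (2) can be sketched numerically. The following is a minimal numpy sketch (the function name `peer_step` and its interface are our own, not from the paper): since R is strictly lower triangular, the stages of \(Y^{[n+1]}\) can be computed one after the other, with no nonlinear solve.

```python
import numpy as np

def peer_step(A, B, R, c, f, t_n, h, Y_prev):
    """One step of an explicit two-step peer method of the form (2).

    Y_prev has shape (s, d) and holds the stages Y^{[n]}.  Because R is
    strictly lower triangular, each new stage only uses already-computed
    new stages, so the loop below is purely sequential substitutions.
    """
    s, d = Y_prev.shape
    F_prev = np.array([f(t_n + c[j] * h, Y_prev[j]) for j in range(s)])
    Y_new = np.zeros((s, d))
    F_new = np.zeros((s, d))
    for i in range(s):
        # t_{n+1,i} = t_{n+1} + c_i h = t_n + h + c_i h
        Y_new[i] = (B[i] @ Y_prev + h * A[i] @ F_prev
                    + h * R[i, :i] @ F_new[:i])
        F_new[i] = f(t_n + h + c[i] * h, Y_new[i])
    return Y_new
```

For \(f \equiv 0\) the step reduces to \(Y^{[n+1]} = B\,Y^{[n]}\), which gives a quick sanity check of the implementation.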

To determine the methods proposed in this work, we apply a technique that leads to new coefficients with respect to classical schemes, depending on the Jacobian of the problem. This technique briefly consists in modifying the classical case by considering a different expression of the stages of the method. By imposing the order conditions using the new form of the stages, the new coefficients of the method are obtained. We were inspired by Ixaru (2012), in which a similar procedure was applied to explicit Runge–Kutta methods improving their accuracy and stability properties, as evidenced also by the numerous numerical tests conducted in Conte et al. (2020a). An extension of this methodology has also been applied to peer methods in Conte et al. (2021), in which the authors have managed to derive Jacobian-dependent coefficients, slightly improving the stability properties of classical schemes. In this work, we show that by changing the approach proposed in Conte et al. (2021), it is possible to obtain New Explicit Jacobian-Dependent Peer (NEJDP) methods with much better stability and accuracy properties than those obtained in Conte et al. (2021), which will be called Old Explicit Jacobian-Dependent Peer (OEJDP) methods.

Moreover, this paper completes the work done in Conte et al. (2021), where the coefficients of the proposed methods were derived only in the scalar case and for accuracy order and number of stages equal to two. Here, we derive the coefficients of the old methods (which become matrices) also for non-scalar problems. We also provide the order conditions of these methods for any accuracy order p and number of stages s satisfying \(p=s \ge 2\). Therefore, Sect. 2 of this paper is devoted to the extension of the OEJDP methods of Conte et al. (2021), while the following ones are devoted to the derivation of NEJDP methods.

Specifically, this paper is organized as follows: in Sect. 2, we extend the work done in Conte et al. (2021), formulating the OEJDP methods in the non-scalar case and deriving the related order conditions for \( p = s \ge 2 \); in Sect. 3, we derive two-stage NEJDP methods of order two, showing the relative linear stability properties; in Sect. 4, we carry out numerical tests highlighting the advantages of the new methods over the old Jacobian-dependent and classical ones; in Sect. 5, we discuss the obtained results and the possible future research.

2 Old equation-dependent peer methods

In this section, after recalling the formulation of the two-stage OEJDP methods for scalar ODEs (Sect. 2.1), we first of all extend these methods to the case of s-stages (Sect. 2.2), and afterwards, we derive two-stage OEJDP methods able to solve differential problems of any dimension d (Sect. 2.3).

2.1 Two-stage OEJDP methods for scalar ODEs

Consider explicit two-stage peer methods:

$$\begin{aligned} \begin{aligned} Y_{n,1}&= b_{11} Y_{n-1,1} + b_{12} Y_{n-1,2} + h a_{11} f(t_{n-1,1},Y_{n-1,1}) + h a_{12} f(t_{n-1,2},Y_{n-1,2}), \\ Y_{n,2}&= b_{21} Y_{n-1,1} + b_{22} Y_{n-1,2} + h a_{21} f(t_{n-1,1},Y_{n-1,1}) + h a_{22} f(t_{n-1,2},Y_{n-1,2})\\&\quad + h r_{21} f(t_{n,1},Y_{n,1}). \end{aligned} \end{aligned}$$
(3)

By defining the error operators \(\underline{L_1}\) and \(\underline{L_2}\) associated, respectively, with the first and second stages as

$$\begin{aligned} \underline{L_1}\big (y(t)\big ) = y(t+h c_1) - Y_1(t), \quad \underline{L_2}\big (y(t)\big ) = y(t+h) - Y_2(t), \end{aligned}$$
(4)

where \(Y_1\) and \(Y_2\) represent the continuous expression of \(Y_{n,1}\) and \(Y_{n,2}\)

$$\begin{aligned} \begin{aligned} Y_1(t)&= b_{11} y\big (t+h(c_1-1)\big ) + b_{12} y(t) + h a_{11} y'\big (t+h(c_1-1)\big ) + h a_{12} y'(t), \\ Y_2(t)&= b_{21} y\big (t+h(c_1-1)\big ) + b_{22} y(t) + h a_{21} y'\big (t+h(c_1-1)\big ) + h a_{22} y'(t)\\ {}&\quad + h r_{21} y'(t+hc_1), \end{aligned} \end{aligned}$$
(5)

it has been shown that annihilating \(\underline{L_1}(t^k)\) and \(\underline{L_2}(t^k)\), \(k=0,1,2\), at \(t=0\), i.e., annihilating the moments \(L_{i,k}\), \(i=1,2\), \(k=0,1,2\), leads to the following order conditions for classic peer methods of accuracy order equal to two, respectively

$$\begin{aligned}&{\left\{ \begin{array}{ll} 1-b_{11}-b_{12}=0,\\ c_1-b_{11}(c_1-1)-a_{11}-a_{12}=0,\\ c_1^2-b_{11}(c_1-1)^2-2a_{11}(c_1-1)=0, \end{array}\right. } \end{aligned}$$
(6)
$$\begin{aligned}&{\left\{ \begin{array}{ll} 1-b_{21}-b_{22}=0,\\ 1-b_{21}(c_1-1)-a_{21}-a_{22}-r_{21}=0,\\ 1-b_{21}(c_1-1)^2-2a_{21}(c_1-1)-2r_{21}c_1=0. \end{array}\right. } \end{aligned}$$
(7)
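The first system is easy to solve in closed form: given the free parameters \(c_1\) and \(b_{11}\), the three equations of (6) determine \(b_{12}\), \(a_{11}\), and \(a_{12}\). A small numpy sketch (the helper name `first_stage_coeffs` is ours):

```python
import numpy as np

def first_stage_coeffs(c1, b11):
    """Solve the first-stage order conditions (6) for b12, a11, a12,
    treating c1 and b11 as free parameters (c1 != 1)."""
    b12 = 1.0 - b11                                            # 1st equation
    a11 = (c1**2 - b11 * (c1 - 1.0)**2) / (2.0 * (c1 - 1.0))   # 3rd equation
    a12 = c1 - b11 * (c1 - 1.0) - a11                          # 2nd equation
    return b12, a11, a12

# residual check of (6) for sample parameter values
c1, b11 = 0.5, 0.25
b12, a11, a12 = first_stage_coeffs(c1, b11)
assert abs(1 - b11 - b12) < 1e-14
assert abs(c1 - b11*(c1 - 1) - a11 - a12) < 1e-14
assert abs(c1**2 - b11*(c1 - 1)**2 - 2*a11*(c1 - 1)) < 1e-14
```

The resulting \(a_{11}\) and \(a_{12}\) agree with the closed forms reported in (12).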

Instead, in the derivation of equation-dependent methods, it was assumed that the first stage computed at \(t_{n,1}\), i.e. \(Y_{n,1}\), is affected by an error \(\mathrm{{err}}_1\), which has the form

$$\begin{aligned} \begin{aligned} \mathrm{{err}}_1(t)&= t_{\mathrm{{err}}_1}(t)+O(h^4), \\ t_{\mathrm{{err}}_1}(t)&= \displaystyle \frac{h^3}{3!}\big (c_1^3-b_{11}(c_1-1)^3-3a_{11}(c_1-1)^2\big )y'''(t)=\frac{1}{3!} L_{1,3} y'''(t). \end{aligned} \end{aligned}$$
(8)

To prove this, the following relationship between error operators and moments has been used:

$$\begin{aligned} \underline{L_i}\big (y(t)\big )=\frac{1}{k!}L_{i,k}y^{(k)}(t). \end{aligned}$$
(9)

By doing so, the expression of \(Y_2\) (and therefore that of \(\underline{L_2}\)) has changed

$$\begin{aligned} \begin{aligned} Y_2(t)&= b_{21} y\big (t+h(c_1-1)\big ) + b_{22} y(t) + h a_{21} y'\big (t+h(c_1-1)\big ) + h a_{22} y'(t)\\ {}&\quad + h r_{21} f\big (t+hc_1,Y_1(t)\big ), \end{aligned} \end{aligned}$$
(10)

where \(f\big (t+hc_1,Y_1(t)\big )=y'(t+hc_1)-j_1(t)t_{\mathrm{{err}}_1}(t)+O(h^4)\) (obtained through Taylor expansion of \(f\big (t+hc_1,Y_1(t)+\mathrm{{err}}_1(t)\big )=y'(t+hc_1)\) with respect to its second argument). The function \(j_1\) is the Jacobian related to the ODE (1) at \(Y_1\), i.e., \(j_1(t) = f_y(t + hc_1, y)_{|y=Y_1(t)}\).

By annihilating \(L_{1,k}\), \(k=0,1,2\), and the new \(L_{2,k}\), \(k=0,1,2,3\), the same conditions (6) and (7) related to the classical case are obtained, plus an additional one

$$\begin{aligned} \begin{aligned}&1-b_{21}(c_1-1)^3-3a_{21}(c_1-1)^2-3r_{21}c_1^2+hj_1(t)r_{21} \big (c_1^3-b_{11}(c_1-1)^3\\ {}&-3a_{11}(c_1-1)^2\big )=0. \end{aligned} \end{aligned}$$
(11)

The resulting two-stage peer methods are still explicit and of order two, but they exhibit better accuracy and stability properties than the classical ones. The Jacobian-dependent scalar coefficients of these methods, which can be derived by imposing the order conditions (6), (7), and (11), are

$$\begin{aligned} a_{11}= & {} \Big (-\big (b_{11}(-1 + c_1)^2\big ) + c_1^2\Big )/\Big (2(-1 + c_1)\Big ),\nonumber \\ a_{12}= & {} \Big (-\big (b_{11}(-1 + c_1)^2\big ) + (-2 + c_1)c_1\Big )/\Big (2(-1 + c_1)\Big ),\nonumber \\ a_{21}= & {} \Big (b_{11}(-1 + b_{21})hj_1(t) + (-1 + b_{11} + 7b_{21} - 10b_{11}b_{21})c_1^3hj_1(t)- (-1 + b_{11})\nonumber \\&b_{21}c_1^5hj_1(t) + b_{21}c_1^4\big (2 + 5(-1 + b_{11})hj_1(t)\big )+ c_1\big (4 + 3b_{11}hj_1(t) + b_{21} (4 -\nonumber \\&5b_{11}hj_1(t))\big ) + c_1^2\big (-6+ 3hj_1(t) - 3b_{11}hj_1(t) + b_{21}(-6 - 3hj_1(t) + 10b_{11}\nonumber \\&hj_1(t))\big )\Big )/\Big (2(-1 + c_1) (-(b_{11} hj_1(t)) - 3(-1 + b_{11}) c_1^2 hj_1(t) + (-1 + b_{11})\nonumber \\&c_1^3 hj_1(t) + 3c_1(-2 + b_{11}hj_1(t))\big ) \Big ),\nonumber \\ a_{22}= & {} \Big (-10 + 3b_{11}hj_1(t) - 9(-1 + b_{11})c_1^3hj_1(t) + 2(-1 + b_{11})c_1^4 hj_1(t) + c_1(24\nonumber \\&- 11b_{11}hj_1(t)) + 3c_1^2(-4 - 3hj_1(t) + 5b_{11} hj_1(t)) - b_{21}(-1 + c_1)^2\big (-2 -\nonumber \\&b_{11}hj_1(t) - 3 (-1 + b_{11}) c_1^2hj_1(t) + (-1 + b_{11})c_1^3hj_1(t) + c_1(-4 + 3b_{11}hj_1\nonumber \\&(t))\big )\Big ) / \Big (2 (-1 +c_1)\big (-(b_{11}hj_1(t))- 3(-1 + b_{11}) c_1^2 hj_1(t)+ (-1 +b_{11}) c_1^3\nonumber \\&hj_1(t) + 3c_1(-2 + b_{11}hj_1(t))\big )\Big ),\nonumber \\ r_{21}= & {} \Big (-5 - b_{21}(-1 + c_1)^3 + 3c_1\Big )/\Big (-(b_{11}hj_1(t)) - 3(-1 + b_{11})c_1^2 hj_1(t) +\nonumber \\&(-1 + b_{11})c_1^3hj_1(t) + 3c_1(-2 + b_{11}hj_1(t))\Big ). \end{aligned}$$
(12)

Note that the moments associated with \(\underline{L_2}(t^k)\) have been annihilated also for \(k = 3\), i.e., the second stage is computed with accuracy order equal to three. That is why there is one additional order condition (11) compared to the classical case, and why this technique can also lead to better accuracy properties.

2.2 s-stage OEJDP methods for scalar ODEs

In this subsection, we derive the order conditions of the OEJDP methods in the general s-stage case where the order is \(p=s\). To do this, recall the definition of accuracy order of an explicit peer method (Weiner et al. 2008).

Definition 1

The peer method

$$\begin{aligned} \begin{aligned} \displaystyle Y_{n,i}&= \sum _{j=1}^{s} b_{ij} Y_{n-1,j} + h \sum _{j=1}^{s} a_{ij} f(t_{n-1,j},Y_{n-1,j}) +h \sum _{j=1}^{i-1} r_{ij} f(t_{n,j},Y_{n,j}),\\&i=1,...,s, \end{aligned} \end{aligned}$$
(13)

is consistent of order p if \(\Delta _i=O(h^p)\), \(\forall i=1,...,s\), where

$$\begin{aligned} \begin{aligned} h\Delta _i:=&y(t_{n,i})-\sum _{j=1}^{s} b_{ij} y(t_{n-1,j}) - h \sum _{j=1}^{s} a_{ij} y'(t_{n-1,j}) - h \sum _{j=1}^{i-1} r_{ij} y'(t_{n,j}). \end{aligned} \end{aligned}$$
(14)

The values of \(\Delta _i\) are called residuals. Imposing that the method (13) has accuracy order \(p = s\) leads to the following expression of its coefficients (Weiner et al. 2008):

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{aligned} &{} B \mathbbm {1} = \mathbbm {1}, \quad \mathbbm {1}=(1,...,1)^\mathrm{{T}} \quad \text { (pre-consistency condition)},\\ &{} A=(CV_0D^{-1}-RV_0)V_1^{-1}-B(C-I_s)V_1D^{-1}V_1^{-1}, \end{aligned} \end{array}\right. } \end{aligned}$$
(15)
$$\begin{aligned} \begin{aligned}&V_0=(c_i^{j-1})_{i,j=1}^s, \quad V_1=\big ((c_i-1)^{j-1}\big )_{i,j=1}^s, \quad I_s= \text { identity matrix of order s}, \\&D=\mathrm{{diag}}(1,...,s), \quad C=\mathrm{{diag}}(c_i). \end{aligned} \end{aligned}$$
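Formula (15) is straightforward to evaluate numerically. The sketch below (function names ours) builds A from the free parameters B, R and the nodes c, then verifies that the residual moments of Definition 1 vanish for polynomials up to degree s:

```python
import numpy as np

def classical_A(c, B, R):
    """Coefficient matrix A from (15), given nodes c, a pre-consistent B
    (row sums equal to one) and a strictly lower triangular R."""
    s = len(c)
    V0 = np.vander(c, s, increasing=True)        # V0 = (c_i^{j-1})
    V1 = np.vander(c - 1.0, s, increasing=True)  # V1 = ((c_i-1)^{j-1})
    D = np.diag(np.arange(1.0, s + 1.0))
    C = np.diag(c)
    Dinv, V1inv = np.linalg.inv(D), np.linalg.inv(V1)
    return ((C @ V0 @ Dinv - R @ V0) @ V1inv
            - B @ (C - np.eye(s)) @ V1 @ Dinv @ V1inv)

def residual_moment(c, A, B, R, k):
    """Coefficient vector of y^{(k)} in the residuals Delta_i for y(t)=t^k."""
    m = c**k - B @ (c - 1.0)**k
    if k > 0:
        m = m - k * (A @ (c - 1.0)**(k - 1) + R @ c**(k - 1))
    return m

# order p = s = 2: all residual moments up to k = 2 must vanish
c = np.array([0.5, 1.0])
B = np.array([[0.3, 0.7], [0.2, 0.8]])          # B @ 1 = 1
R = np.array([[0.0, 0.0], [0.4, 0.0]])          # strictly lower triangular
A = classical_A(c, B, R)
assert all(np.allclose(residual_moment(c, A, B, R, k), 0.0) for k in range(3))
```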

What we want to do now is to express the coefficients of OEJDP methods with accuracy order \(p=s\) in a similar form. Let us consider the conditions (6), (7), and (11) that come out by canceling the moments \({L_{1,k}}\), \(k=0,1,2\), and \({L_{2,k}}\), \(k=0,1,2,3\), in the simple case of two-stage schemes of order two. All the equations of (6) and (7) correspond to the classical order conditions. Equation (11) returns an additional order condition with respect to the classical case, which arises by assuming that in general, for s-stage methods, the first \(s-1\) stages are affected by error and the last stage is more accurate than the previous ones.

Define the continuous expression of the ith stage

$$\begin{aligned} Y_i(t)=\sum _{j=1}^{s} b_{ij} y\big (t+h(c_j-1)\big ) + h \sum _{j=1}^{s} a_{ij} y'\big (t+h(c_j-1)\big ) +h \sum _{j=1}^{i-1} r_{ij} f\big (t+c_jh,Y_j(t)\big ).\nonumber \\ \end{aligned}$$
(16)

The related error operator is \(\underline{L_i}\big (y(t)\big )=y(t+hc_i)-Y_i(t)\), \(i=1,...,s\). The errors \(\mathrm{{err}}_i\) associated with \(Y_i\), \(i=1,...,s\), assume the following form:

$$\begin{aligned} \begin{aligned} \displaystyle \mathrm{{err}}_i(t)=&t_{\mathrm{{err}}_i}(t)+O(h^{s+2}),\\ t_{\mathrm{{err}}_i}(t)=&\frac{h^{s+1}}{(s+1)!}\Big (c_i^{s+1}-\sum _{k=1}^{s}b_{ik}(c_k-1)^{s+1}-(s+1)\sum _{k=1}^{s}\big (a_{ik}(c_k-1)^s+c_k^sr_{ik}\big )\\ {}&+h\sum _{k=1}^{i-1}j_k(t)r_{ik}t_{\mathrm{{err}}_k}(t)\Big )y^{(s+1)}(t). \end{aligned}\nonumber \\ \end{aligned}$$
(17)

Note that \(\mathrm{{err}}_i\) depends on the products \(j_k t_{\mathrm{{err}}_k}\), \(k \le i-1\), where the Jacobians \(j_k(t) = f_y(t + hc_k, y)_{|y=Y_k(t)}\) arise from the Taylor series expansion of the terms \(y'(t+hc_k)=f\big (t+hc_k,Y_k(t)+\mathrm{{err}}_k(t)\big )\) with respect to the second argument, substituting the result in (16).

To understand why the error associated with \(Y_i\) takes this form, it suffices to extend the procedure shown in Conte et al. (2021) for \(\mathrm{{err}}_1\) (8), observing that it depends on the moment \(L_{1,3}\). Still referring to the simple two-stage case, look at Eq. (11), at the end of which the product between \(j_1\) and \(L_{1,3}\) (deprived of the factor \(h^3\), which has been simplified) appears. In fact, here, the moment \(L_{2,3}\) is annihilated, whose expression contains \(L_{1,3}\), i.e., it depends on \(t_{\mathrm{{err}}_1}\). In the general s-stage case, therefore, the moment \(L_{s,s+1}\) must be annihilated, which depends on all the errors made at the previous stages.

Define the following column vector operr \(\in \mathbbm {R}^{s\times 1}\) to derive a compact notation for the moments associated with the error operators:

$$\begin{aligned} \mathrm{{operr}}=V_0^{[s+1]}-BV_1^{[s+1]}-(s+1)(AV_1^{[s]}+RV_0^{[s]})+h\,\mathrm{{err}}_\mathrm{{cum}}, \end{aligned}$$
(18)
$$\begin{aligned} \begin{aligned}&V_0^{[k]}=(c_i^{k})_{i=1}^s \in \mathbbm {R}^{s\times 1}, k=s,s+1, \quad V_1^{[k]}=\big ((c_i-1)^{k}\big )_{i=1}^s \in \mathbbm {R}^{s\times 1}, k=s,s+1, \\&\mathrm{{err}}_\mathrm{{cum}}=\bigg (\sum _{k=1}^{i-1}j_kr_{ik}\mathrm{{operr}}_k\bigg )_{i=1}^s \in \mathbbm {R}^{s\times 1} \quad (\mathrm{{err}}_\mathrm{{cum1}}=0). \end{aligned} \end{aligned}$$
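For scalar problems, the recursive structure of operr can be evaluated componentwise. Below is a sketch (function name ours) that, for \(s=2\), reproduces the additional order condition (11):

```python
import numpy as np

def operr_vector(c, A, B, R, jac, h):
    """Components of operr from (18) for a scalar ODE; jac[k] is the
    Jacobian j_{k+1} evaluated at the (k+1)-th stage.  Components are
    filled in consecutive order, since operr_i uses operr_k, k < i."""
    s = len(c)
    operr = np.zeros(s)
    for i in range(s):
        operr[i] = (c[i]**(s + 1) - B[i] @ (c - 1.0)**(s + 1)
                    - (s + 1) * (A[i] @ (c - 1.0)**s + R[i] @ c**s)
                    + h * sum(jac[k] * R[i, k] * operr[k] for k in range(i)))
    return operr

# s = 2: the last component matches the additional order condition (11)
c1, b11, b21, a11, a12, a21, a22, r21, j1, h = (0.3, 0.2, 0.5, 0.1, 0.4,
                                                0.6, 0.7, 0.8, 0.9, 0.05)
c = np.array([c1, 1.0])
A = np.array([[a11, a12], [a21, a22]])
B = np.array([[b11, 1 - b11], [b21, 1 - b21]])
R = np.array([[0.0, 0.0], [r21, 0.0]])
lhs = operr_vector(c, A, B, R, [j1, 0.0], h)[1]
rhs = (1 - b21*(c1 - 1)**3 - 3*a21*(c1 - 1)**2 - 3*r21*c1**2
       + h*j1*r21*(c1**3 - b11*(c1 - 1)**3 - 3*a11*(c1 - 1)**2))
assert np.isclose(lhs, rhs)
```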

The vector operr essentially contains in the ith component the moment (deprived of the powers of h) associated with \(\underline{L_i} (t ^ {s + 1})\). Therefore, once we have determined the first \(s-1\) (in consecutive order) components of operr, we can calculate its last component \(\mathrm{{operr}}_s\). Annihilating \( \mathrm{{operr}}_s \) corresponds to the additional order condition we have for the s-stage OEJDP methods.

Summarizing, these methods have order \( p = s \) if their coefficients satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{aligned} &{} B \mathbbm {1} = \mathbbm {1}, \quad \mathbbm {1}=(1,...,1)^\mathrm{{T}} \quad \text { (pre-consistency condition)},\\ &{} A=(CV_0D^{-1}-RV_0)V_1^{-1}-B(C-I_s)V_1D^{-1}V_1^{-1},\\ &{} (V_0^{[s+1]}-BV_1^{[s+1]}-(s+1)(AV_1^{[s]}+RV_0^{[s]})+h\,\mathrm{{err}}_\mathrm{{cum}})_s=0. \end{aligned} \end{array}\right. } \end{aligned}$$
(19)

2.3 Two-stage OEJDP methods for systems of ODEs

In this subsection, we show how the coefficients of the OEJDP methods can be obtained in the multi-dimensional case (\(d\ge 1\)), deriving them explicitly for two-stage methods of order two.

As seen above, explicit peer methods can be expressed in the vectorial form (2), in which the coefficient matrices A, B, and R are multiplied tensorially by the identity matrix of order d (which from now on we simply denote by I instead of \(I_d\)). However, since the Jacobian related to the ODE (1) has dimension d, the Jacobian-dependent coefficients must be re-formulated appropriately, and there is no need to multiply them by the identity matrix.

For example, in the case of two-stage methods, the coefficients take the form (12). Specifically, \(a_{21}\), \(a_{22}\), and \(r_{21}\) depend on the Jacobian and must therefore be re-formulated as matrices. For this reason, we indicate them with the capital letters \(A_{21}\), \(A_{22}\), and \(R_{21}\), respectively. To understand how to derive such matrices, let us show in detail the computation of \(R_{21}\).

The coefficient \(r_{21}\) of (12) can be expressed as \(r_{21}^\mathrm{{num}}/r_{21}^\mathrm{{den}}\), where

$$\begin{aligned} \begin{aligned} r_{21}^\mathrm{{num}}=&-5 - b_{21}(-1 + c_1)^3 + 3c_1,\\ r_{21}^\mathrm{{den}}=&-(b_{11}hj_1(t)) - 3(-1 + b_{11})c_1^2 hj_1(t) + (-1 + b_{11})c_1^3hj_1(t)\\ {}&+ 3c_1(-2 + b_{11}hj_1(t)). \end{aligned} \end{aligned}$$
(20)

Now, isolate the coefficients of \(r_{21}^\mathrm{{num}}\) and \(r_{21}^\mathrm{{den}}\) that do not multiply the Jacobian \(j_1\). In \(r_{21}^\mathrm{{den}}\), the only coefficient with this property is \(-6c_1\), which appears in the last term, while \(r_{21}^\mathrm{{num}}\) does not contain \(j_1\) at all. Therefore, define the coefficient \(r_{21}^c=r_{21}^\mathrm{{num}}/(-6c_1)\).

The next step consists in factoring out the term \(hj_1\) in \(r_{21}^\mathrm{{den}}\), dividing everything by the coefficient \(-6c_1\) determined previously. In general, this operation must also be performed on \(r_{21}^\mathrm{{num}}\) (dividing it by the numerator of \(r_{21}^c\)); however, in this case, we can skip it, as \(r_{21}^\mathrm{{num}}\) does not depend on the Jacobian. This leads to

$$\begin{aligned} \begin{aligned} \displaystyle r_{21}^\mathrm{{den}}=&-6c_1\Bigg (1+\frac{r_{21}^\mathrm{{den1}}}{-6c_1}hj_1(t)\Bigg ),\\ r_{21}^\mathrm{{den1}}=&{-b_{11}}-{3(-1+b_{11})c_1^2}+{(-1+b_{11})c_1^3}+{3c_1b_{11}}. \end{aligned} \end{aligned}$$
(21)

By indicating the Jacobian with the capital letter J in the multi-dimensional case, we obtain the following form for the matrix \(R_{21}\):

$$\begin{aligned} R_{21}=r_{21}^c \times \mathrm{{inv}}\big (I+r_{21}^\mathrm{{den1}}hJ_1(t)\big ). \end{aligned}$$
(22)
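The construction of \(R_{21}\) can be sketched numerically as follows; here we take \(r_{21}^\mathrm{{den1}}\) normalized by the Jacobian-free factor \(-6c_1\), consistently with (21), and check in the scalar case (\(d=1\)) that the result reproduces \(r_{21}\) of (12). The function name is ours:

```python
import numpy as np

def R21_matrix(c1, b11, b21, h, J):
    """Matrix coefficient R21 of the two-stage OEJDP method, assembled
    as in (21)-(22); J is the d x d Jacobian J_1(t).  The scalar
    denominator r21^den1 is normalized by -6*c1, as in (21)."""
    d = J.shape[0]
    r21_num = -5.0 - b21 * (c1 - 1.0)**3 + 3.0 * c1
    r21_c = r21_num / (-6.0 * c1)
    den1 = (-b11 - 3.0*(b11 - 1.0)*c1**2 + (b11 - 1.0)*c1**3
            + 3.0*c1*b11) / (-6.0 * c1)
    return r21_c * np.linalg.inv(np.eye(d) + den1 * h * J)

# scalar consistency check against the closed form r21 of (12)
c1, b11, b21, h, j = 0.3, 0.4, 0.2, 0.01, -2.0
num = -5.0 - b21*(-1 + c1)**3 + 3*c1
den = (-(b11*h*j) - 3*(-1 + b11)*c1**2*h*j + (-1 + b11)*c1**3*h*j
       + 3*c1*(-2 + b11*h*j))
assert np.isclose(R21_matrix(c1, b11, b21, h, np.array([[j]]))[0, 0], num/den)
```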

Similarly, the matrices \(A_{21}\) and \(A_{22}\) can be derived:

$$\begin{aligned} A_{21}=a_{21}^c A_{21}^\mathrm{{num}} \times \mathrm{{inv}}(A_{21}^\mathrm{{den}}), \quad A_{22}=a_{22}^c A_{22}^\mathrm{{num}} \times \mathrm{{inv}}(A_{22}^\mathrm{{den}}), \quad \text {where} \end{aligned}$$
(23)
$$\begin{aligned} \begin{aligned} a_{21}^c&= \frac{2b_{21}c_1^4+4c_1+4c_1b_{21}-6c_1^2-6b_{21}c_1^2}{-12c_1(-1+c_1)},\\ a_{22}^c&= \frac{-10+24c_1-12c_1^2+2b_{21}(-1+c_1)^2+4c_1b_{21}(-1+c_1)^2}{-12c_1(-1+c_1)},\\ A_{21}^\mathrm{{num}}&= I + \frac{a_{21}^\mathrm{{num}}}{2b_{21}c_1^4+4c_1+4c_1b_{21}-6c_1^2-6b_{21}c_1^2} hJ_1(t),\\ A_{22}^\mathrm{{num}}&= I +\frac{a_{22}^\mathrm{{num}}}{-10+24c_1-12c_1^2+2b_{21}(-1+c_1)^2+4c_1b_{21}(-1+c_1)^2}hJ_1(t),\\ A_{21}^\mathrm{{den}}&= A_{22}^\mathrm{{den}}=I +\frac{a_{21}^\mathrm{{den}}}{-12c_1(-1+c_1)}hJ_1(t). \end{aligned} \end{aligned}$$

Here, \(a_{21}^\mathrm{{num}}\), \(a_{22}^\mathrm{{num}}\), and \(a_{21}^\mathrm{{den}}\) correspond to the numerators and denominators of \(a_{21}\) and \(a_{22}\) of (12), deprived of the coefficients that do not multiply \(hj_1\).

Note that the matrices \(A_{21}^\mathrm{{den}}\) and \(A_{22}^\mathrm{{den}}\) coincide, as the denominators of \(a_{21}\) and \(a_{22}\) in (12) are equal. This is convenient, since these matrices need to be inverted at each time-step: the number of matrix inversions required by this two-stage method is thus halved.
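In practice, the shared denominator can be factorized once per step and applied to both numerators. A sketch (names ours), where the right-division by \(A_{21}^\mathrm{{den}}\) is performed via a linear solve on the transposed system rather than by forming the inverse explicitly:

```python
import numpy as np

def apply_shared_denominator(A21_num, A22_num, A_den, a21_c, a22_c):
    """Compute A21 = a21_c * A21_num @ inv(A_den) and the analogous A22,
    exploiting A21^den = A22^den: one solve handles both right-divisions.

    X A_den = N  <=>  A_den^T X^T = N^T, so both numerators are stacked
    and solved against the single (shared) transposed denominator.
    """
    d = A_den.shape[0]
    X = np.linalg.solve(A_den.T, np.hstack([A21_num.T, A22_num.T])).T
    return a21_c * X[:d], a22_c * X[d:]
```

This reproduces the explicit-inverse formulas of (23) while factorizing the shared matrix only once.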

When we deal with numerical tests in the next sections, we will show that these Jacobian-dependent peer methods work well also in the multi-dimensional case, once again outperforming the classical ones.

3 New equation-dependent peer methods

As mentioned in the Introduction, in Conte et al. (2021) the authors derived the OEJDP methods shown in the previous section by applying a technique similar to the one adopted in Ixaru (2012) for Runge–Kutta methods. However, the procedure cannot be identical for Runge–Kutta and peer methods, since for the latter the stages in the current interval depend on those in the previous one. In Conte et al. (2021) this dependence was ignored: the OEJDP methods were derived by assuming that, in the computation of \(Y_{n,i}\), only the stages \(Y_{n,j}\), \(j<i\), are affected by error. Now, we derive NEJDP methods by assuming that the stages \(Y_{n-1,j}\), \(j< i\), are also affected by error in the computation of \(Y_{n,i}\).

3.1 Two-stage NEJDP methods of order two

In this subsection, we determine two-stage NEJDP methods by following the observations just made. The continuous expressions of the stages \(Y_{n,1}\) and \(Y_{n,2}\) in this case are

$$\begin{aligned} \begin{aligned} Y_1(t)&= b_{11} Y_1(t-h) + b_{12} y(t) + h a_{11} f\big (t+h(c_1-1),Y_1(t-h)\big ) + h a_{12} y'(t), \\ Y_2(t)&= b_{21} Y_1(t-h) + b_{22} y(t) + h a_{21} f\big (t+h(c_1-1),Y_1(t-h)\big ) + h a_{22} y'(t) \\&\quad + h r_{21} f\big (t+hc_1,Y_1(t)\big ). \end{aligned} \end{aligned}$$
(24)

Assuming that \(Y_{n-1,1}\) is affected by an error (which we call \(\mathrm{{err}}_{-1}\), while \(t_{\mathrm{{err}}_{-1}}\) denotes the local truncation error obtained by imposing that the first stage has order two) means that the following relations hold:

$$\begin{aligned} \begin{aligned} y\big (t+h(c_1-1)\big )&= Y_1(t-h)+\mathrm{{err}}_{-1}(t-h)=Y_1(t-h)+t_{\mathrm{{err}}_{-1}}(t-h)+O(h^4),\\ y'\big (t+h(c_1-1)\big )&= f\big (t+h(c_1-1),Y_1(t-h)+\mathrm{{err}}_{-1}(t-h)\big )\\&= f\big (t+h(c_1-1),Y_1(t-h)\big )+j_1(t-h)t_{\mathrm{{err}}_{-1}}(t-h)+O(h^4). \end{aligned}\nonumber \\ \end{aligned}$$
(25)

Once again, the Jacobian \(j_1(t-h)=f_y(t+h(c_1-1),y)_{|y=Y_1(t-h)}\) arises from the application of Taylor series expansion. Substituting the expressions (25) in the continuous stages (24) leads to

$$\begin{aligned} \begin{aligned} Y_1(t)&= b_{11} \Big (y\big (t+h(c_1-1)\big )-t_{\mathrm{{err}}_{-1}}(t-h)\Big ) + b_{12} y(t) + h a_{11} \Big (y'\big (t+h(c_1-1)\big )\\&\quad -j_1(t-h)t_{\mathrm{{err}}_{-1}}(t-h)\Big ) + h a_{12} y'(t), \\ Y_2(t)&= b_{21} \Big (y\big (t+h(c_1-1)\big )-t_{\mathrm{{err}}_{-1}}(t-h)\Big ) + b_{22} y(t) + h a_{21} \Big (y'\big (t+h(c_1-1)\big )\\&\quad -j_1(t-h)t_{\mathrm{{err}}_{-1}}(t-h)\Big ) + h a_{22} y'(t) + h r_{21} f\big (t+hc_1,Y_1(t)\big ). \end{aligned}\nonumber \\ \end{aligned}$$
(26)

By defining the error operator related to \(Y_{n-1,1}\) as

$$\begin{aligned} \begin{aligned} \underline{L_{-1}}\big (y(t-h)\big )=&y\big (t+h(c_1-1)\big )-Y_1(t-h), \text { where} \end{aligned} \end{aligned}$$
(27)
$$\begin{aligned} \begin{aligned} Y_1(t-h)&= b_{11} y\big (t+h(c_1-2)\big ) + b_{12} y(t-h) + h a_{11} y'\big (t+h(c_1-2)\big )\\&\quad + h a_{12} y'(t-h), \end{aligned} \end{aligned}$$

it is possible to determine \(\mathrm{{err}}_{-1}\) and therefore \(t_{\mathrm{{err}}_{-1}}\). Following an approach similar to the one detailed in Conte et al. (2021), to obtain an order-two first stage we have to annihilate the moments related to \(\underline{L_{-1}}(t^k)\), \(k=0,1,2\) (it is easy to see that annihilating them leads to conditions equivalent to (6)). The resulting error \(\mathrm{{err}}_{-1}\) is given by the following expression, which involves the moment \({L_{-1,3}}\):

$$\begin{aligned} \mathrm{{err}}_{-1}(t-h)= & {} t_{\mathrm{{err}}_{-1}}(t-h)+O(h^4),\nonumber \\ t_{\mathrm{{err}}_{-1}}(t-h)= & {} \frac{h^3}{3!}\big ((c_1-1)^3-b_{11}(c_1-2)^3+b_{12}-3a_{11}(c_1-2)^2-3a_{12}\big ) y'''(t-h)\nonumber \\= & {} \frac{1}{3!}{L_{-1,3}}y'''(t-h). \end{aligned}$$
(28)

To determine \(\mathrm{{err}}_{-1}\), we have used the property (9) that relates error operators to the corresponding moments.

Note that by defining \(\underline{L_{-1}} \) as done in (27) we have implicitly assumed that, for simplicity, there are no further errors at the earlier stages \(Y_{n-k,1}\), \(k\le 2\). Therefore, the additional error we consider compared to the OEJDP methods concerns \( Y_{n-1,1} \).

It is necessary to define the error operators associated with the stages \(Y_{n,1}\) and \(Y_{n,2}\) to determine the order conditions of the NEJDP methods. Such operators have the same form as (4), but obviously the continuous expression of the stages is now given by (26).

To calculate the first stage with accuracy order equal to two, the moments \(L_{1,k}\), \(k=0,1,2\), must be annihilated. Note that, by (26), \(\underline{L_1}\) is completely known, as the expression of \(t_{\mathrm{{err}}_{-1}}\) is known (28). This operator, evaluated at \(y(t)=t^k\), assumes the following form:

$$\begin{aligned} \underline{L_1}(t^k)= & {} (t + hc_1)^k - b_{11}\Big (\big (t + h(c_1 - 1)\big )^k - (1/6)L_{-1,3}k (k - 1) (k - 2) (t - h)^{k-3}\Big )\nonumber \\&\quad - b_{12}t^k - h a_{11} \Big (k\big (t + h(c_1 - 1)\big )^{k-1} - j_1(t - h) (1/6)L_{-1,3}k (k - 1) (k - 2)\nonumber \\&\quad (t - h)^{k-3}\Big )- h k a_{12}t ^{k-1}. \end{aligned}$$
(29)

Canceling the moments \( L_{1,k} \), \( k = 0,1,2 \), leads to order conditions equivalent to that obtained in the classical case (6) for the first stage. Leveraging property (9), it is possible to conclude that, for NEJDP methods, the error associated with the first stage \(Y_{n,1}\) is

$$\begin{aligned} \mathrm{{err}}_{1}(t)=t_{\mathrm{{err}}_1}(t)+O(h^4)=\frac{1}{3!}L_{1,3}y'''(t)+O(h^4), \end{aligned}$$
(30)
$$\begin{aligned} \begin{aligned} L_{1,3}&= h^3\bigg (c_1^3-b_{11}\Big ((c_1-1)^3-\big ((c_1-1)^3-b_{11}(c_1-2)^3+b_{12}-3a_{11}(c_1-2)^2\\&\quad -3a_{12}\big )\Big )-a_{11} \Big (3(c_1-1)^2-hj_1(t-h)\big ((c_1-1)^3-b_{11}(c_1-2)^3+b_{12}\\&\quad -3a_{11}(c_1-2)^2-3a_{12}\Big )\bigg ). \end{aligned} \end{aligned}$$

The final step consists in deriving the conditions obtained by imposing that the second stage has accuracy order equal to three. Therefore, we consider the continuous expression of the second stage obtained previously (26), assuming that \(Y_{n,1}\) is affected by the error (30):

$$\begin{aligned} \begin{aligned} Y_2(t)&= b_{21} \Big (y\big (t+h(c_1-1)\big )-t_{\mathrm{{err}}_{-1}}(t-h)\Big ) + b_{22} y(t) + h a_{21} \Big (y'\big (t+h(c_1-1)\big )\\ {}&\quad -j_1(t-h)t_{\mathrm{{err}}_{-1}}(t-h)\Big ) + h a_{22} y'(t) + h r_{21} \big (y'(t+hc_1)-j_1(t)t_{\mathrm{{err}}_1}(t)\big ). \end{aligned} \end{aligned}$$
(31)

As usual, the Jacobian \(j_1(t)=f_y(t+hc_1,y)_{|y=Y_1(t)}\) appears by applying Taylor expansion of \(y'(t+hc_1)=f\big (t+hc_1,Y_1(t)+\mathrm{{err}}_1(t)\big )\) with respect to the second argument.

By defining the error operator associated with \(Y_{n,2}\) as before (4), this time with the new continuous expression of the second stage (31), we can determine the moments \(L_{2,k}\), \(k \ge 0\), obtained by evaluating \(\underline{L_2}(t^k)\) at \(t = 0\). We report the expression of \(\underline{L_2}(t^k)\):

$$\begin{aligned} \begin{aligned} \underline{L_2}(t^k)&= (t + h)^k - b_{21}\Big (\big (t + h(c_1 - 1)\big )^k - (1/6)L_{-1,3}k (k - 1) (k - 2) (t - h)^{k-3}\Big )\\ {}&\quad - b_{22}t^k - h a_{21} \Big (k\big (t + h(c_1 - 1)\big )^{k-1} - j_1(t - h) (1/6)L_{-1,3}k (k - 1) (k - 2) \\&\quad (t - h)^{k-3}\Big )- h k a_{22}t^{k-1}-hr_{21} \big (k(t + hc_1)^{k-1} -j_1(t) (1/6)L_{1,3}k (k - 1)\\ {}&\quad (k - 2) t^{k-3}\big ). \end{aligned}\nonumber \\ \end{aligned}$$
(32)

Now, the moments \(L_{2,0}\), \(L_{2,1}\), \(L_{2,2}\), and \(L_{2,3}\) can be easily calculated. Canceling the first three yields exactly the same order conditions as in the classical case (7), while annihilating the last moment leads to the following additional order condition, different from the one derived for OEJDP methods:

$$\begin{aligned} \begin{aligned}&1-b_{21}\Big ((c_1-1)^3-\big ((c_1-1)^3-b_{11}(c_1-2)^3+b_{12}-3a_{11}(c_1-2)^2-3a_{12}\big )\Big )\\ {}&-a_{21}\Big (3(c_1-1)^2-hj_1(t-h) \big ((c_1-1)^3-b_{11}(c_1-2)^3+b_{12}-3a_{11}(c_1-2)^2\\ {}&-3a_{12}\big )\Big )-r_{21}\Bigg (3c_1^2-hj_1(t) \bigg (c_1^3-b_{11}\Big ((c_1-1)^3-\big ((c_1-1)^3-b_{11}(c_1-2)^3\\ {}&+b_{12}-3a_{11}(c_1-2)^2-3a_{12}\big )\Big )-a_{11}\Big (3(c_1-1)^2-hj_1(t-h)\big ((c_1-1)^3-b_{11}\\ {}&(c_1-2)^3+b_{12}-3a_{11}(c_1-2)^2-3a_{12}\Big )\bigg )\Bigg )=0. \end{aligned}\nonumber \\ \end{aligned}$$
(33)

To conclude, therefore, the NEJDP methods that we propose in this paper are obtained by imposing the order conditions (6), (7) and (33). This leads to coefficients which, compared to the OEJDP methods, depend on the Jacobian calculated at the current and the previous grid points:

$$\begin{aligned} \begin{aligned} A_{21} =&\frac{a_{21}^\mathrm{{cnum}}}{a_{21}^\mathrm{{cden}}}\Big (I+\frac{1}{a_{21}^\mathrm{{cnum}}} A_{21}^\mathrm{{num}}hJ_1(t)\Big ) \times \mathrm{{inv}} \Big (I+\frac{1}{a_{21}^\mathrm{{cden}}} \big (A_{21}^\mathrm{{den1}}hJ_1(t-h)+\\ {}&A_{21}^\mathrm{{den2}}hJ_1(t)\big )\Big ), \\ A_{22} =&\frac{a_{22}^\mathrm{{cnum}}}{a_{21}^\mathrm{{cden}}}\Big (I+\frac{1}{a_{22}^\mathrm{{cnum}}} \big (A_{22}^\mathrm{{num1}}hJ_1(t-h)+A_{22}^\mathrm{{num2}}hJ_1(t)\big )\Big )\times \mathrm{{inv}} \Big (I+\frac{1}{a_{21}^\mathrm{{cden}}}\\ {}&\big (A_{21}^\mathrm{{den1}}hJ_1(t-h)+A_{21}^\mathrm{{den2}}hJ_1(t)\big )\Big ), \\ R_{21}=&\frac{r_{21}^\mathrm{{cnum}}}{r_{21}^\mathrm{{cden}}}\Big (I+\frac{1}{r_{21}^\mathrm{{cnum}}} R_{21}^\mathrm{{num}}hJ_1(t-h)\Big )\times \mathrm{{inv}} \Big (I+\frac{1}{r_{21}^\mathrm{{cden}}} \big (R_{21}^\mathrm{{den1}}hJ_1(t-h)\\ {}&+R_{21}^\mathrm{{den2}}hJ_1(t)\big )\Big ), \text { where} \end{aligned}\nonumber \\ \end{aligned}$$
(34)
$$\begin{aligned} \begin{aligned} A_{21}^\mathrm{{num}}=&-\big (-1 + b_{21} (-1 + c_1)^2\big ) \big (b_{11} (-1 + c_1)^3 - (-3 + c_1) c_1^2\big ) \big (2I + b_{11} \\ {}&(-1 + c_1) \big (-2I + (-1 + c_1) hJ_1(t-h)\big ) - c_1 (2I + c_1 hJ_1(t-h))\big ),\\A_{21}^\mathrm{{den1}}=&2 (-1 + c_1) \big (b_{11} (-1 + c_1)^3 - (-3 + c_1) c_1^2\big ) \big (b_{11} hJ_1(t) + c_1 \big (2I + b_{11}\\ {}&(-2 + c_1) hJ_1(t) - c_1 hJ_1(t)\big )\big ),\\A_{21}^\mathrm{{den2}}=&4 \big ((-1 + c_1)^2\big ) (b_{11} - b_{11}^2 (-1 + c_1)^3 - 3 b_{11} c_1 + (-3 + c_1) c_1^2),\\A_{22}^\mathrm{{num1}}=&2 (-1 + c_1) (-b_{11} (-1 + c_1)^3 + (-3 + c_1) c_1^2) \big (1 - 2 c_1 + b_{21} (-1 + c_1^2)\big ),\\ A_{22}^\mathrm{{num2}}=&-(3 + b_{21} (-1 + c_1)^2 - 2 c_1) (b_{11} (-1 + c_1)^3 - (-3 + c_1) c_1^2) \big (2I + b_{11}\\ {}&(-1 + c_1) \big (-2I + (-1 + c_1) hJ_1(t-h)\big ) - c_1 \big (2I + c_1 hJ_1(t-h)\big )\big ),\\a_{21}^\mathrm{{cnum}}=&-4 (-1 + c_1) c_1 \big (2 - 3 c_1 + b_{21} (2 + b_{11} (-1 + c_1)^3 + 3 (-1 + c_1) c_1)\big ),\\a_{21}^\mathrm{{cden}}=&24c_1(-1+c_1)^2,\\a_{22}^\mathrm{{cnum}}=&4 (-1 + c_1) \big (5 + 6 (-2 + c_1) c_1 + b_{21} (-1 + b_{11} (-1 + c_1)^3 - 3 (-2 + c_1) c_1^2)\big ),\\R_{21}^\mathrm{{num}}=&\big (1 - b_{21} (-1 + c_1)^2\big ) \big (b_{11} (-1 + c_1)^3 - (-3 + c_1) c_1^2\big ) ,\\R_{21}^\mathrm{{den1}}=&2 c_1 (b_{11} (-1 + c_1)^3 - (-3 + c_1) c_1^2),\\R_{21}^\mathrm{{den2}}=&(b_{11} (-1 + c_1)^3 - (-3 + c_1) c_1^2) \big (2I + b_{11} (-1 + c_1) (-2I + (-1 + c_1) \\ {}&hJ_1(t-h)) - c_1 \big (2I + c_1 hJ_1(t-h)\big )\big ),\\ r_{21}^\mathrm{{cnum}}=&2 \big (-5 + (8 - 3 c_1) c_1 + b_{21} (-1 + c_1) (-1 + b_{11} (-1 + c_1)^3 + 3 c_1)\big ), \\r_{21}^\mathrm{{cden}}=&{12 (-1 + c_1) c_1}. \end{aligned} \end{aligned}$$

The coefficients \(a_{11}\) and \(a_{12}\) are scalar and coincide with those of the OEJDP methods (12). Furthermore, as in the previous case, the denominators of \(A_{21}\) and \(A_{22}\) coincide. For convenience, we have reported the coefficients of the NEJDP methods directly in the non-scalar case.

The dependence of the coefficients on the Jacobian at the previous grid point as well does not entail any additional computational cost (with respect to the OEJDP methods) in terms of function evaluations. In fact, the Jacobian evaluated at the previous grid points can be stored in an array, without the need to re-evaluate it at each step of the method. We note that, as with the OEJDP methods, we still obtain two-stage methods of order two, and the second stage is always computed by imposing that it has order three. Since we have also taken into account the errors on the stages in the previous interval, it is reasonable to expect the new methods to be more accurate than both the classical and the old equation-dependent ones. This expectation will be confirmed by the numerical tests.

3.2 Stability analysis of NEJDP methods

In this subsection, we show the absolute stability region of the NEJDP methods obtained in this paper. Through a careful selection of the free parameters, the NEJDP methods turn out to be more stable than the OEJDP methods (12) obtained in Conte et al. (2021), and therefore also than the classical ones. Furthermore, the numerical tests will show that they are also more accurate.

To perform the linear stability analysis of the considered peer methods, we apply them to the test equation

$$\begin{aligned} y '= \lambda y, \end{aligned}$$
(35)

where \(\lambda \) is a complex number with negative real part, and determine for which values of \( h \lambda \) the numerical solution remains bounded. The application of the peer method (2) to the test Eq. (35) leads to

$$\begin{aligned} Y^{[n+1]}=BY^{[n]}+zAY^{[n]}+zRY^{[n+1]}, \quad z=h\lambda . \end{aligned}$$
(36)

Therefore, \(Y^{[n+1]}=(I-zR)^{-1}(B+zA)Y^{[n]}\).

By defining the matrix \( M(z)=(I-zR)^{-1}(B+zA) \), it is possible to conclude that the numerical solution remains bounded if the spectral radius of \(M(z)\) is less than one. Obviously, the NEJDP, the OEJDP, and the classical methods have three different stability matrices M, as they have different coefficients. Consequently, in order to analyze the stability properties of these three methods, the eigenvalues of the respective matrices M must be determined, evaluating for which values of z they are all less than one in modulus.
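The spectral-radius check just described is easy to sketch numerically. The snippet below (Python with NumPy as a stand-in for the paper's Matlab setting) builds \(M(z)\) and tests the condition \(\rho(M(z))<1\) on a grid of the left half-plane. The matrices A, B, R here are illustrative placeholders, a toy choice whose stability region reduces to the explicit-Euler disk \(|1+z|<1\); they are not the coefficients of any of the peer methods above.

```python
import numpy as np

def stability_matrix(z, A, B, R):
    """M(z) = (I - z R)^{-1} (B + z A) for a two-stage peer method."""
    I = np.eye(2)
    return np.linalg.solve(I - z * R, B + z * A)

def in_stability_region(z, A, B, R):
    """z belongs to the absolute stability region iff rho(M(z)) < 1."""
    eigvals = np.linalg.eigvals(stability_matrix(z, A, B, R))
    return np.max(np.abs(eigvals)) < 1.0

# Placeholder coefficients (NOT the actual peer-method coefficients):
# with this choice M(z) = B + zA has eigenvalues {0, 1+z}, so the
# stability region is the explicit-Euler disk |1+z| < 1.
B = np.array([[0.0, 1.0], [0.0, 1.0]])
A = np.array([[0.0, 1.0], [0.0, 1.0]])
R = np.zeros((2, 2))

# Scan a rectangle of the complex plane and collect the stable points.
xs = np.linspace(-3.0, 0.5, 71)
ys = np.linspace(-3.0, 3.0, 121)
region = [(x, y) for x in xs for y in ys
          if in_stability_region(complex(x, y), A, B, R)]
```

The same scan, applied to the three stability matrices M of the NEJDP, OEJDP, and classical methods, produces the regions compared in Fig. 1.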

Regarding the OEJDP and classical peer methods, the values of the free parameters for which the corresponding stability region covers as large a portion of the negative real axis as possible have already been determined in Conte et al. (2021). For the classical methods, these parameters are

$$\begin{aligned} b_{11} = -0.52, \quad b_{21} = -1.3, \quad c_1 = 0.3, \quad r_{21} = 0.8, \end{aligned}$$
(37)

while for the OEJDP schemes they assume the following values:

$$\begin{aligned} b_{11} = -0.59, \quad b_{21} = -1, \quad c_1 = 0.3. \end{aligned}$$
(38)

Conducting the same analysis for the NEJDP methods derived in this work, using the Matlab fmincon function (with random initial values for the coefficients), we have found that the parameters maximizing the real-axis extent of the absolute stability region are

$$\begin{aligned} b_{11} = -0.24, \quad b_{21} = -0.31, \quad c_1 = 0.2. \end{aligned}$$
(39)
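The kind of search performed with fmincon can be illustrated on a simpler, self-contained analogue. The sketch below (Python instead of Matlab, and a crude grid search in place of fmincon) maximizes the real stability interval of the toy stability polynomial \(R(z)=1+z+z^2/2+\theta z^3\), which satisfies order-two consistency and has a single free parameter \(\theta\); this polynomial and \(\theta\) are illustrative assumptions, not the NEJDP stability function.

```python
import numpy as np

def real_interval_length(theta, z_max=10.0, n=4000):
    """Length of the interval [-l, 0] of the negative real axis on which
    |R(z)| <= 1, for the toy polynomial R(z) = 1 + z + z^2/2 + theta*z^3."""
    zs = -np.linspace(1e-6, z_max, n)
    stable = np.abs(1 + zs + zs**2 / 2 + theta * zs**3) <= 1.0
    if stable.all():
        return z_max
    k = int(np.argmin(stable))      # index of the first unstable grid point
    return -zs[k - 1] if k > 0 else 0.0

# Crude grid search over the free parameter (stand-in for fmincon):
thetas = np.linspace(0.0, 0.2, 201)
lengths = [real_interval_length(th) for th in thetas]
theta_best = thetas[int(np.argmax(lengths))]
```

Around \(\theta \approx 1/16\) the real stability interval grows from length 2 (the value at \(\theta =0\)) to more than 6, showing how much a single free parameter can enlarge the region; the actual search in the paper optimizes \((b_{11}, b_{21}, c_1)\) over the eigenvalues of the NEJDP stability matrix M.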

Figure 1 shows the absolute stability regions of the three methods with the parameter values just reported. Note that the stability region of the considered OEJDP method in places contains that of the NEJDP method. However, the latter has a larger area and includes a much larger portion of the real axis.

Fig. 1 Absolute stability regions (internal part) of classic (37), OEJDP (38) and NEJDP (39) methods, respectively

4 Numerical tests

In this section, we perform numerical tests to assess the advantages of the NEJDP methods derived in this paper, comparing them with the OEJDP and the classical ones. Since the numerical tests in Conte et al. (2021) were conducted exclusively on scalar ODEs, we now also consider non-scalar problems. Specifically, we focus on the method variants (39), (38) and (37), which guarantee the optimal stability properties demonstrated in the previous subsection.

For each method, we evaluate the absolute error at the final grid point T, comparing the computed numerical solution with the exact one (if known), or with the reference solution determined by the Matlab function ode15s with maximum accuracy requested. In addition, we report tables with the estimated order of each method, calculated using the formula

$$\begin{aligned} p(h)=\frac{cd(h)-cd(2h)}{\mathrm{{log}}_{10}(2)}, \end{aligned}$$
(40)

where \(cd(h)=-\mathrm{{log}}_{10}\)(absolute error) represents the number of correct digits obtained with step-size h.
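Formula (40) is straightforward to apply to successive error columns of the tables below; a minimal sketch (Python as a stand-in for the paper's Matlab setting):

```python
import numpy as np

def correct_digits(abs_err):
    """cd(h) = -log10(absolute error)."""
    return -np.log10(abs_err)

def estimated_order(err_h, err_2h):
    """p(h) = (cd(h) - cd(2h)) / log10(2), formula (40)."""
    return (correct_digits(err_h) - correct_digits(err_2h)) / np.log10(2)

# For a method of order two, halving h reduces the error by a factor ~4:
p = estimated_order(1e-6, 4e-6)   # -> 2.0
```

An error ratio of 4 between step-sizes 2h and h thus yields \(p(h)=2\), and a ratio of 8 yields \(p(h)=3\).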

4.1 Prothero–Robinson equation

The Prothero–Robinson equation (Prothero and Robinson 1974) is often used to evaluate the stability of numerical methods, as it depends on a parameter \( \lambda \) that determines the stiffness of the problem. In fact, the stiffness of the equation is directly proportional to the modulus of \( \lambda \). This equation takes the form

$$\begin{aligned} {\left\{ \begin{array}{ll} y'(t)=\lambda \big (y(t)-\sin (t)\big )+\cos (t), \\ y(0)=0, \quad t \in [0,\pi /2]. \end{array}\right. } \end{aligned}$$
(41)

Moreover, for (41), the exact solution is known: \(y(t)=\sin (t)\).
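The stiffness mechanism of (41) can be seen with any explicit scheme. In the sketch below (Python; explicit Euler is used as a generic explicit stand-in, not as one of the peer methods under study), the same step-size that works for a mildly stiff \(\lambda\) produces blow-up once \(|\lambda|\) is large, because \(|1+h\lambda|>1\):

```python
import numpy as np

def prothero_robinson_rhs(t, y, lam):
    """Right-hand side of the Prothero-Robinson equation (41)."""
    return lam * (y - np.sin(t)) + np.cos(t)

def explicit_euler(lam, h, t_end=np.pi / 2):
    """Fixed-step explicit Euler on (41) with y(0) = 0."""
    n = int(round(t_end / h))
    t, y = 0.0, 0.0
    for _ in range(n):
        y = y + h * prothero_robinson_rhs(t, y, lam)
        t += h
    return y

# Same step-size h = 0.01: stable for h*lam = -0.1, explosive for h*lam = -10,
# since the error is amplified by |1 + h*lam| = 9 at every step.
err_mild = abs(explicit_euler(lam=-10.0, h=0.01) - np.sin(np.pi / 2))
err_stiff = abs(explicit_euler(lam=-1e3, h=0.01) - np.sin(np.pi / 2))
```

This is exactly the regime explored in Figs. 2 and 3, where the enlarged stability region of the NEJDP methods pays off.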

In this case, we do not report error tables for the considered methods; the higher accuracy of the new methods will be assessed through their application to non-scalar problems. Here, instead, we want to evaluate the improved stability of the NEJDP methods with respect to both the OEJDP and the classical schemes.

Fig. 2 Solution behaviour using the considered methods on the Prothero–Robinson equation (41)

Fig. 3 Solution behaviour using the considered methods on the Prothero–Robinson equation (41)

Figures 2 and 3 make this observation evident: for several values of \(\lambda \) and h, the numerical solution determined using the NEJDP methods is very close to the exact one, whereas, for the same values of the parameter and of the step-size, both the OEJDP method and the classical one produce numerical solutions that either explode or in any case exhibit a completely different trend with respect to the exact one.

4.2 Euler problem

The Euler problem (Euler 1758) is represented by the following non-stiff ODE system:

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{aligned} y'_1(t)=&{}-2y_2(t)y_3(t), \\ y'_2(t)=&{}\frac{5}{4}y_1(t)y_3(t), \\ y'_3(t)=&{}-\frac{1}{2}y_1(t)y_2(t), \\ y(0)=&{}[1;0;0.9], \quad t \in [0,10]. \end{aligned} \end{array}\right. } \end{aligned}$$
(42)
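System (42) admits quadratic first integrals, which makes it a convenient accuracy check (differentiating \(y_1^2 + 1.6\,y_2^2\) and \(y_2^2 + 2.5\,y_3^2\) along (42) gives zero in both cases). The sketch below integrates (42) with classical RK4 as a stand-in reference scheme (the peer-method coefficients are not reproduced here) and verifies that both integrals are conserved to high accuracy:

```python
import numpy as np

def euler_rhs(y):
    """Right-hand side of the Euler problem (42)."""
    y1, y2, y3 = y
    return np.array([-2.0 * y2 * y3, 1.25 * y1 * y3, -0.5 * y1 * y2])

def rk4(f, y0, t_end, n_steps):
    """Classical fixed-step RK4 (stand-in for the peer methods of the paper)."""
    h = t_end / n_steps
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

y_end = rk4(euler_rhs, [1.0, 0.0, 0.9], t_end=10.0, n_steps=1000)
# Quadratic first integrals of (42), constant along the exact flow:
I1 = y_end[0]**2 + 1.6 * y_end[1]**2     # = 1 at t = 0
I2 = y_end[1]**2 + 2.5 * y_end[2]**2     # = 2.025 at t = 0
```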

In this case, Table 1 clearly shows the greater accuracy of the NEJDP methods compared to the OEJDP and the classical ones: the error of the new methods is smaller by at least two orders of magnitude. Specifically, there are two orders of magnitude of difference between the new methods and the old ones, and even three between the new methods and the classical peer schemes.

As we expected, however, the order of NEJDP methods remains equal to two, despite the fact that the second stage is calculated with order three. In fact, as mentioned in the Introduction, peer methods are characterized by having all stages with the same properties in terms of accuracy and stability. Therefore, if one stage is determined with order two, and the other with order three, the overall order of the method is still equal to two. We can ascertain this by looking at Table 2. However, note that, although the order tends to two, it is initially close to three. Thus, the higher accuracy in the second-stage calculation is responsible for the higher overall accuracy of the NEJDP methods.

Table 1 Absolute errors at the endpoint T on the Euler problem (42), for several values of the number of grid points (\(N+1\))
Table 2 Estimated order p(h) on the Euler problem (42)

4.3 Brusselator model

The last ODEs system we consider is the Brusselator (Nicolis and Prigogine 1977) problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{aligned} y'_1(t)=&{}1+y_1(t)^2y_2(t)-4y_1(t), \\ y'_2(t)=&{}3y_1(t)-y_1(t)^2y_2(t), \\ y(0)=&{}[1.5;3], \quad t \in [0,20]. \end{aligned} \end{array}\right. } \end{aligned}$$
(43)

In this case, Tables 3 and 4 confirm the previous observations. Therefore, even for non-scalar problems with a non-constant, non-linear Jacobian, the NEJDP methods work well.

Table 3 Absolute errors at the endpoint T on the Brusselator model (43), for several values of the number of grid points (\(N+1\))
Table 4 Estimated order p(h) on the Brusselator model (43)

4.4 Non-linear Burgers equation

To show the behaviour of the new methods also on ODE systems of higher dimension than those discussed so far, we consider a Partial Differential Equation (PDE) and discretize it in space using the method of lines. In particular, we consider the following formulation of the non-linear Burgers PDE (Burgers 1948):

$$\begin{aligned} \begin{aligned}&\frac{\partial y}{\partial t}= \epsilon \frac{\partial ^2 y}{\partial x^2} - \frac{1}{2}\frac{\partial y^2 }{\partial x}, \qquad (x,t) \in [0,2\pi ]\times [0,2], \quad \epsilon =0.1. \end{aligned} \end{aligned}$$

We set periodic boundary conditions and

$$\begin{aligned} \begin{aligned} y(x,0)={\left\{ \begin{array}{ll} 1, \quad x \in [0,\pi ],\\ 0, \quad x \in (\pi , 2 \pi ], \end{array}\right. } \end{aligned} \end{aligned}$$

as initial conditions. We then employ a fourth-order finite-difference spatial semi-discretization, for both the second- and first-order spatial derivatives, on a grid of \(M=2^5\) spatial intervals. The problem semi-discretized in space, which is an ODE system of dimension M, takes the following form:

$$\begin{aligned} y'(t)=\epsilon L_1 y(t) - \frac{1}{2} L_2 y(t)^2. \end{aligned}$$
(44)

Here, \( L_1 \) and \(L_2\) are constant matrices containing the discretization coefficients of the second- and first-order spatial derivatives, respectively. The Jacobian is the non-constant matrix

$$\begin{aligned} J\big (y(t)\big ) = \epsilon L_1 - L_2\, \mathrm{diag}\big (y(t)\big ). \end{aligned}$$
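One possible construction of the semi-discretization is sketched below (Python in place of the paper's Matlab; the standard fourth-order central-difference stencils on a periodic grid are an assumption, as the paper does not list its stencil coefficients). The matrices are built as circulants, which encode the periodic boundary conditions:

```python
import numpy as np

M = 2**5                          # number of spatial intervals
dx = 2 * np.pi / M

def circulant(first_row):
    """Circulant matrix: row i is the first row cyclically shifted by i."""
    return np.array([np.roll(first_row, i) for i in range(M)])

# Fourth-order central-difference stencils (periodic grid):
# y'(x_i)  ~ (-y[i+2] + 8 y[i+1] - 8 y[i-1] + y[i-2]) / (12 dx)
# y''(x_i) ~ (-y[i+2] + 16 y[i+1] - 30 y[i] + 16 y[i-1] - y[i-2]) / (12 dx^2)
row1 = np.zeros(M); row1[[1, 2, M - 2, M - 1]] = [8, -1, 1, -8]
row2 = np.zeros(M); row2[0] = -30; row2[[1, 2, M - 2, M - 1]] = [16, -1, -1, 16]
L2 = circulant(row1) / (12 * dx)          # first-derivative matrix
L1 = circulant(row2) / (12 * dx**2)       # second-derivative matrix

eps = 0.1
def burgers_rhs(y):
    """Semi-discretized system (44): y' = eps*L1*y - (1/2)*L2*(y^2)."""
    return eps * (L1 @ y) - 0.5 * (L2 @ y**2)

def jacobian(y):
    """J(y) = eps*L1 - L2*diag(y): the derivative of (1/2)*L2*y^2 is L2*diag(y)."""
    return eps * L1 - L2 @ np.diag(y)
```

Freezing the Jacobian, as in the rightmost column of Table 5, amounts to returning `eps * L1` instead of the full `jacobian(y)`.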

We solve problem (44) with the NEJDP methods, also comparing the CPU time they require with that of the Matlab ode45 routine, for comparable errors. The errors are evaluated, as before, with respect to the reference solution determined by the ode15s function applied to the semi-discretized ODE system (44) with maximum accuracy requested. The corresponding results are reported in Table 5.

Since ode45 has adaptive step-size control, the comparison is not entirely fair, as the NEJDP methods use a fixed step-size. Indeed, Table 5 shows that the number of grid points taken by ode45 is significantly lower than that of the NEJDP methods. However, the latter still manage to be competitive (on average they require a CPU time about four times larger than ode45, but still low). To show how much time ode45 saves thanks to its step-size control, in Table 6 we report the minimum step-size \( h_{\min } \) it used during the integration. Note that, even though ode45 takes far fewer grid points than the NEJDP methods, it is sometimes forced to choose an even smaller step than theirs to obtain a comparable error.

Table 5 Absolute errors and CPU time on the semi-discretized non-linear Burgers PDE (44), for several numbers of grid points (N+1), obtained by applying the NEJDP methods with frozen and unfrozen Jacobian and the Matlab routine ode45, with the options ‘AbsTol’ and ‘RelTol’ set so that ode45 attains an absolute error similar to that of the NEJDP methods
Table 6 Minimum time-steps \(h_{\min }\) used by ode45 and time-steps h used by the NEJDP methods with reference to the results reported in Table 5

Furthermore, note that in the rightmost column of Table 5 we report the results obtained by applying the NEJDP methods with the Jacobian frozen as \( J = \epsilon L_1\). This choice is customary when the ODE system to be solved has the form (44) and its stiffness is known to be concentrated mainly in the first term. In this case, the NEJDP methods are advantageous with respect to ode45 in terms of computational time, for comparable errors. This holds even though the choice \( J = \epsilon L_1 \) disregards accuracy and could therefore be refined, leading to much lower errors.

This suggests that, with adequate Jacobian-freezing strategies and adaptive step-size control, the NEJDP methods could perform even better than they already do. Indeed, in this test we have shown that, with only a frozen (non-optimized) Jacobian and without an adaptive step, the NEJDP methods are already competitive with ode45. With adaptive step-size control, the computing times would decrease even more markedly.

5 Conclusions and future perspectives

In this paper, we have proposed a new class of explicit peer methods with improved stability and accuracy properties. We have shown that such methods are very advantageous with respect both to the classical ones and to those derived in Conte et al. (2021), which have exactly the same computational cost. In addition, we have extended the work of Conte et al. (2021) by deriving the OEJDP methods also in the non-scalar case and generalizing their order conditions.

Moreover, we have shown how the technique proposed in Ixaru (2012) for Runge–Kutta methods can be suitably generalized to peer methods, which, unlike the former, are characterized by stages that in the current interval depend on those determined in the previous one. In fact, we considered the stages in both the intervals \( [t_n, t_{n+1}] \) and \( [t_{n-1}, t_n] \) to be affected by error. Given the structure of peer methods, we believe that the Jacobian dependence should in some way involve all grid points in \( [0, t_n] \), in order to obtain even better stability and accuracy properties.

Finally, the results reported in the last numerical test motivate the investigation of a variable step-size formulation of the NEJDP methods derived in this paper, possibly combined with a Jacobian-freezing strategy.