1 Introduction

Consider the initial value problems of the form

$$\begin{aligned}& f \bigl(x '(t), x(t), t\bigr) = 0,\qquad x(t _{0}) - a = 0, \quad t \in [t _{0}, T], \end{aligned}$$
(1)

where \(a\in R^{m}\) is a consistent initial value for (1) and the function f: \(R^{{m}} \times R^{{m}} \times [t _{0}, T] \rightarrow R^{{m}}\) is assumed to be sufficiently smooth. If \((\partial f/\partial x')\) is nonsingular, then it is possible to formally solve (1) for \(x'\) in order to obtain an ordinary differential equation (ODE). However, if \((\partial f/\partial x')\) is singular, this is no longer possible and the solution x has to satisfy certain algebraic constraints; therefore, equations (1) are referred to as differential algebraic equations (DAEs).

Systems of differential algebraic equations arise in many applications such as physics, engineering, and circuit analysis. Some systems can be reduced to an ODE system (these are index-zero DAEs) and can be solved by numerical ODE methods after reduction. For other systems, reduction to an explicit differential system of the form \(x' = f(x, t)\) is either impossible or impractical, because the problem is more naturally posed in the form

$$\begin{aligned}& f \bigl(t, x', x, y\bigr) = 0; \end{aligned}$$
(2-a)
$$\begin{aligned}& g(t, x, y) = 0; \end{aligned}$$
(2-b)

and a reduction might reduce the sparseness of Jacobian matrices. These systems are then solved directly [16, 17].

A fundamentally important concept in the algorithms of the numerical solutions of DAEs is the index of a DAE. In a sense, this tells us how far away the DAE is from being an ODE. The index of a DAE is the minimum number of times all or part of the DAE system must be differentiated with respect to time in order to convert the DAE into an explicit ODE. The higher the index is, the further it is from an ODE and the more difficult it is in general to solve the DAE [12].
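As a simple illustration of this notion (a standard textbook example, not taken from the present paper), consider a semi-explicit system of the form (2-a)–(2-b) with \(g_{y}\) nonsingular; a single differentiation of the constraint already yields an explicit ODE, so the index is 1:

```latex
% Semi-explicit system with g_y nonsingular: index 1.
% One differentiation of the algebraic constraint recovers y'.
\[
  x' = f(x, y), \qquad 0 = g(x, y)
  \;\xrightarrow{\;\mathrm{d}/\mathrm{d}t\;}\;
  0 = g_{x}(x, y)\,x' + g_{y}(x, y)\,y'
  \;\Longrightarrow\;
  y' = -\,g_{y}^{-1}(x, y)\, g_{x}(x, y)\, f(x, y).
\]
```

If instead \(y\) does not appear in \(g\), further differentiations are needed and the index is correspondingly higher.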

The first general method applied to the numerical solution of DAEs was the backward differentiation formula (BDF). Ebadi and Gokhale presented in [9,10,11] class \(2+1\) hybrid BDF-like methods, hybrid BDF methods (HBDF), and new hybrid methods for the numerical solution of IVPs. These methods have wide stability regions and good performance in terms of CPU time compared to the extended BDF (EBDF) and modified extended BDF (MEBDF) methods [3].

In Sect. 2 the first hybrid class is derived, its order of convergence is investigated, and its stability is analyzed. In Sect. 3 some basic notions of one-leg schemes and G-stability are recalled. The one-leg twin of the first class is derived and its G-stability is discussed in Sect. 4. The one-leg twin of the second class [14] is derived and its G-stability is discussed in Sect. 5. Numerical tests are presented in Sect. 6. Finally, a conclusion is given in Sect. 7.

2 The first hybrid class

The first hybrid class takes the form

$$\begin{aligned}& y_{n + s} = h\mu f_{n} + \sum_{j = 0}^{k} \gamma_{n - j} y_{n - j}, \end{aligned}$$
(3)
$$\begin{aligned}& y_{n} + \sum_{j = 1}^{k} \alpha_{n - j} y_{n - j} = h (\beta_{s} f _{n + s} + \beta_{1} f_{n} + \beta_{0}f_{n - 1}), \end{aligned}$$
(4)

where \(f_{n+s} = f(t _{n+s}, y _{n+s})\), \(t_{n+s} = t _{n} + s h\), \(-1 < s\), and \(\beta_{s}\), \(\beta_{1}\), \(\alpha_{ n- j}\), \(j = 1, 2, \dots , k\), are parameters to be determined as functions of s and \(\beta _{0}\). The methods with step number k and order \(p=k+1\) (for \(k = 1\) up to 6) will be derived, and \(y_{n+s}\) has order \(k - 1\). To evaluate \(y_{n+s}\) at the off-step point \(t _{n+s}\), we consider the nodes \(t _{n}\) (double node) and \(t_{ n-1 }, \dots , t _{ n- k}\) (simple nodes).

Applying Newton’s interpolation formula to these data gives the following scheme:

$$\begin{aligned} y(t) =& y_{n} + (t - t_{n}) y'_{n} + (t - t_{n})^{2}\frac{h y'_{n} - \nabla y_{n}}{h^{2}} \\ &{}+ (t - t_{n})^{2}(t - t_{n - 1})\frac{h y'_{n} - \nabla y_{n} - \frac{1}{2}\nabla^{2}y_{n}}{2!h^{3}} \\ &{}+ (t - t_{n})^{2}(t - t_{n - 1}) (t - t_{n - 2})\frac{h y'_{n} - \nabla y_{n} - \frac{1}{2}\nabla^{2}y_{n} - \frac{1}{3}\nabla^{3}y _{n}}{3!h^{4}} +\cdots. \end{aligned}$$
(5)

Differentiate (5) with respect to t:

$$\begin{aligned} y'(t) =& y'_{n} + 2(t - t_{n})\frac{h y'_{n} - \nabla y_{n}}{h^{2}} + \bigl(2(t - t{}_{n}) (t - t_{n - 1}) + (t - t_{n})^{2}\bigr)\frac{h y'_{n} - \nabla y_{n} - \frac{1}{2}\nabla^{2}y_{n}}{2!h^{3}} \\ &{}+ \bigl(2(t - t{}_{n}) (t - t_{n - 1}) (t - t_{n - 2}) + (t - t_{n})^{2}(t - t_{n - 2}) + (t - t_{n})^{2}(t - t_{n - 1})\bigr) \\ &{}\times \frac{h y'_{n} - \nabla y _{n} - \frac{1}{2}\nabla^{2}y_{n} - \frac{1}{3}\nabla^{3}y_{n}}{3!h ^{4}} +\cdots. \end{aligned}$$
(6)

Using (5) and (6) to evaluate \(y_{n+s}\) and \(f_{n+s}\) gives

$$\begin{aligned}& y(t_{n} + sh) = y_{n} + s h f_{n} + s^{2} (h f_{n} - \nabla y_{n}) + \frac{s ^{2}(s + 1)}{2!}\biggl(h f_{n} - \nabla y_{n} - \frac{1}{2}\nabla^{2}y_{n}\biggr) \\& \hphantom{y(t_{n} + sh) =}{}+ \frac{s^{2}(s + 1)(s + 2)}{3!}\biggl(h f_{n} - \nabla y_{n} - \frac{1}{2} \nabla^{2}y_{n} - \frac{1}{3}\nabla^{3}y_{n}\biggr) +\cdots, \end{aligned}$$
(7)
$$\begin{aligned}& f(t_{n + s}) = f_{n} + 2s\frac{h f_{n} - \nabla y_{n}}{h} + s(2 + 3s) \frac{h f_{n} - \nabla y_{n} - \frac{1}{2}\nabla^{2}y_{n}}{2!h} \\& \hphantom{f(t_{n + s}) =}{}+ s\bigl(4 + 9s + 4s^{2}\bigr)\frac{h f_{n} - \nabla y_{n} - \frac{1}{2}\nabla ^{2}y_{n} - \frac{1}{3}\nabla^{3}y_{n}}{3!h} + \cdots, \end{aligned}$$
(8)

where f (or \(f(t, y)\)) is regarded as the derivative of the solution \(y(t)\), and \(\nabla y _{n} = y _{n} - y_{ n-1 }\) denotes the backward difference.

Method (4) is of order p if and only if

$$\begin{aligned}& 1 + \sum_{j = 1}^{k} \alpha_{n - j} = 0,\qquad \sum_{j = 1}^{k} - j \alpha_{n - j} = (\beta_{s} + \beta_{1} + \beta_{0}), \\& \sum_{j = 1}^{k} \alpha_{n - j} ( - j)^{q} = q \bigl(\beta_{s} s^{q - 1} + ( - 1)^{q - 1}\beta_{0}\bigr),\quad \text{where } q= 2, \dots , p. \end{aligned}$$
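These conditions can be solved symbolically; as a check, the following sketch (our symbol names, not the paper's notation) treats the smallest case \(k=1\), \(p=2\) and recovers the coefficient formulas reported for method (20) in Sect. 4:

```python
import sympy as sp

# Order conditions of method (4) for k = 1, p = 2, solved symbolically.
# Unknowns: alpha_{n-1} (a1), beta_s (bs), beta_1 (b1) in terms of s, beta_0.
s, b0 = sp.symbols('s beta0')
a1, bs, b1 = sp.symbols('a1 bs b1')

eqs = [
    sp.Eq(1 + a1, 0),                                   # consistency: 1 + sum(alpha) = 0
    sp.Eq(-(-1) * a1 * (-1), bs + b1 + b0),             # q = 1: sum(-j * alpha_{n-j})
    sp.Eq(a1 * (-1)**2, 2 * (bs * s + (-1)**1 * b0)),   # q = 2 condition
]
sol = sp.solve(eqs, [a1, bs, b1], dict=True)[0]

print(sol[a1])                 # -1
print(sp.simplify(sol[bs]))    # equals (2*beta0 - 1)/(2*s)
print(sp.simplify(sol[b1]))    # equals (1 + 2*s - 2*(1 + s)*beta0)/(2*s)
```

For larger k the same linear system in the \(\alpha_{n-j}\), \(\beta_s\), \(\beta_1\) reproduces the tabulated coefficients.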

The coefficients of the methods for steps \(k= 1\) up to 6 are tabulated in Tables 1, 2, 3, and 4.

Table 1 The coefficients of method (4) for orders 2, 3, and 4
Table 2 The coefficients of method (4) for order 5
Table 3 The coefficients of method (4) for order 6
Table 4 The coefficients of method (4) for order 7

Since formula (3) is of order k and formula (4) is of order \(k+1\), it is easy to see that method (3)–(4) has order \(k+1\).

2.1 Stability analysis

Consider the scalar test problem \(y' = \lambda y\), \(y(0) = y_{0}\). From equations (3) and (4) the corresponding characteristic equation is as follows:

$$ y_{n} + \sum_{j = 1}^{k} \alpha_{n - j} y_{n - j} = \overline{h} \Biggl( \beta_{s} \Biggl(\overline{h} \mu y_{n} + \sum_{j = 0}^{k} \gamma_{n - j} y_{n - j} \Biggr) + \beta_{1} y_{n} + \beta_{0}y_{n - 1}\Biggr), $$
(9)

where \(\overline{h} = h\lambda \); substituting \(y_{n - j} = r^{n - j}\) and dividing by \(r^{n - 1}\), we obtain

$$ A \overline{h}^{2} + B \overline{h} + C = 0, $$
(10)

where

$$ A = \beta_{s} \mu r,\qquad B = \beta_{s} \sum _{j = 0}^{k} \gamma_{n - j} r ^{1 - j} + \beta_{1} r + \beta_{0},\qquad C = r + \sum _{j = 1}^{k} \alpha_{n - j} r^{1 - j}. $$
(11)

The absolute stability regions for this class for \(k=1\) up to 7 are given in Fig. 1 for the optimal s and \(\beta_{0}\) using the boundary locus method.
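The boundary locus technique can be illustrated with a short computation. Since the \(\gamma_{n-j}\) coefficients of (3) are not listed here, the sketch below uses BDF2 as a stand-in: for a plain linear multistep method the boundary points are \(\overline{h}(\theta )=\rho (e^{i\theta })/\sigma (e^{i\theta })\), whereas for the class (3)–(4) one would instead take the roots of the quadratic (10) along \(|r|=1\):

```python
import numpy as np

# Boundary locus for BDF2 (illustration only; the hybrid class would use
# the roots of the quadratic (10) in hbar along |r| = 1 instead).
theta = np.linspace(0.0, 2.0 * np.pi, 400)
r = np.exp(1j * theta)          # points on the unit circle

rho = 1.5 * r**2 - 2.0 * r + 0.5   # BDF2: rho(xi) = (3/2)xi^2 - 2xi + 1/2
sigma = r**2                        # sigma(xi) = xi^2
hbar = rho / sigma                  # boundary of the stability region

# Consistency (rho(1) = 0) forces the locus through the origin:
print(abs(hbar[0]))  # 0.0
```

Plotting `hbar` in the complex plane traces the stability boundary; for an A-stable method such as BDF2 the locus stays in the closed right half-plane.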

Figure 1

The absolute stability domain of class (3–4) for \(k=1\) up to 7

The angles α of A(α)-stability for the BDF, EBDF, A-EBDF, MEBDF, Enright, and HEBDF methods and for the class (3)–(4) at various orders are tabulated in Table 5.

Table 5 A(α)-stability for the BDF, EBDF, A-EBDF, MEBDF, Enright, and HEBDF methods and the class (3)–(4) for various orders

We recall some basic notions of one-leg schemes and G-stability.

3 One-leg schemes

Suppose that a linear k-step method

$$ \sum_{i = 0}^{k} \alpha_{i} y_{n + i} = h \sum_{i = 0}^{k} \beta_{i} f( t_{n + i}, y_{n + i}) $$
(12)

is given. One-leg methods can be formulated in a compact form by introducing the polynomials

$$ \rho (\xi ) = \sum_{i = 0}^{k} \alpha_{i} \xi^{i},\qquad \sigma (\xi ) = \sum _{i = 0}^{k} \beta_{i} \xi^{i}, $$
(13)

with real coefficients \(\alpha_{i}, \beta_{i} \in R\) having no common divisor. Throughout, we also assume the normalization

$$ \sigma (1) = 1. $$
(14)

The associated one-leg methods are defined by

$$ \sum_{i = 0}^{k} \alpha_{i} y_{n + i} = h f\Biggl(\sum_{i = 0}^{k} \beta _{i} t_{n + i},\sum_{i = 0}^{k} \beta_{i} y_{n + i} \Biggr). $$
(15)

In the one-leg methods, the derivative f is evaluated at one point only, which makes it easier to analyze. The one-leg method (15) may have stronger nonlinear stability properties such as G-stability [12, 15]. On the other hand, it is known that to obtain a one-leg method of high order, the parameters \(\alpha_{i}\), \(\beta_{i}\) have to satisfy more constraints than those for linear multistep methods, see [7, 8, 14]. The conditions \(\rho (1)= 0\), \(\rho '(1) = \sigma (1)= 1\) imply the consistency of the scheme \(( \rho , \sigma )\).
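A classical instance of the correspondence (12)–(15), due to Dahlquist and not one of the methods of this paper, is that the one-leg twin of the trapezoidal rule is the implicit midpoint rule. A minimal sketch on the linear test equation, where the implicit step can be solved in closed form:

```python
import math

# One-leg twin of the trapezoidal rule (rho(xi) = xi - 1,
# sigma(xi) = (xi + 1)/2): the implicit midpoint rule
#   y_{n+1} - y_n = h * f(t_n + h/2, (y_n + y_{n+1}) / 2).
# For f(t, y) = lam * y the implicit relation is solved exactly.
def one_leg_midpoint(lam, y0, h, n_steps):
    y = y0
    for _ in range(n_steps):
        # (1 - h*lam/2) * y_next = (1 + h*lam/2) * y
        y = (1.0 + 0.5 * h * lam) / (1.0 - 0.5 * h * lam) * y
    return y

y = one_leg_midpoint(lam=-1.0, y0=1.0, h=0.01, n_steps=100)
print(y, math.exp(-1.0))  # the two agree to second-order accuracy
```

Note that f is evaluated at a single point per step, which is exactly the feature that makes the G-stability analysis below tractable.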

3.1 G-stability analysis

The G-stability analysis, announced at the 1975 Dundee conference and published in [6], uses the test problem \(\mathrm{d}y/\mathrm{d}x = f(x, y)\), where \(\langle y - z, f(x, y) - f(x, z)\rangle \leq 0\). In the same publication, one-leg methods were introduced and related to the corresponding linear multistep methods. Stable behavior for this problem was defined as G-stability. The main notions are recalled below.

If the differential equation satisfies the one-sided Lipschitz condition

$$\begin{aligned}& \bigl\langle y - z, f(x, y) - f(x, z)\bigr\rangle \leq \nu \Vert y -z \Vert ^{2}, \end{aligned}$$
(16)

with \(\nu = 0\), then the exact solutions are contractive. Consider the multistep method as a mapping \(R^{n,k}\rightarrow R^{n,k}\). Let \(Y _{m} =(y_{m+k-1}, \dots , y _{m})^{{T}}\) and consider inner product norms on \(R^{n,k}\)

$$\begin{aligned}& \Vert Y_{m} \Vert _{G}^{2} = \sum_{i = 1}^{k} \sum _{j = 1}^{k} g_{ji}\langle y_{m + i - 1},y_{m + j - 1} \rangle, \end{aligned}$$
(17)

where \(\langle \cdot , \cdot \rangle \) is the inner product on \(R^{{n}}\) used in (16) and the \(k \times k\) matrix \(G = (g _{ij})\), \(i, j=1,\dots, k\), is assumed to be real, symmetric, and positive definite. The inner product \(\langle \cdot , \cdot \rangle \), on which \(\langle \cdot , \cdot \rangle_{ {G}}\) is built, has the corresponding norm defined by \(\|u\|^{2} = \langle u, u\rangle \). Similarly, we write \(\|\cdot \|_{{G}}\) for the norm corresponding to \(\langle \cdot , \cdot \rangle_{{G}}\).

Definition 1

([6])

The one-leg method (15) is called G-stable if there exists a real, symmetric, and positive definite matrix G such that, for two numerical solutions \(\{Y_{m}\}\) and \(\{\hat{Y} _{m}\}\), we have

$$ \Vert Y_{m + 1} - \hat{Y}_{m + 1} \Vert _{G} \le \Vert Y_{m} - \hat{Y}_{m} \Vert _{G} $$
(18)

for all step sizes \(h > 0\) and for all differential equations satisfying (16) with \(\nu = 0\).

Theorem 1

([2])

G-stability implies A-stability.

Theorem 2

([12])

Consider a method \(( \rho , \sigma )\). If there exists a real, symmetric, and positive definite matrix G, and real numbers \(a _{0},\dots ,a _{{k}}\) such that

$$ \frac{1}{2}\bigl(\rho (\xi ) \sigma (\omega ) + \rho (\omega )\sigma ( \xi )\bigr) = (\xi \omega - 1) \sum_{i,j = 1}^{k} g_{ij} \xi^{i - 1} \omega^{j - 1} + \Biggl(\sum _{i = 0}^{k} a_{i} \xi^{i} \Biggr) \Biggl(\sum_{j = 0}^{k} a _{j} \omega^{j} \Biggr), $$
(19)

then the corresponding one-leg method is G-stable.

Theorem 3

([5])

If ρ and σ have no common divisor, then the method \(( \rho , \sigma )\) is A-stable if and only if the corresponding one-leg method is G-stable.

4 One-leg method for the first hybrid class

Here, the one-leg twin of the first class is studied when \(k=1\) and \(k=2\).

In the case of \(k=1\), method (4) takes the form

$$\begin{aligned}& \alpha_{n} y_{n} + \alpha_{n - 1} y_{n - 1} = h (\beta_{s} f_{n + s} + \beta_{1} f_{n} + \beta_{0}f_{n - 1}), \end{aligned}$$
(20)
$$\begin{aligned}& y_{n + s} = y_{n} + s h f_{n}, \end{aligned}$$
(21)

where

$$\begin{aligned}& \alpha_{n} = 1,\qquad \alpha_{n - 1} = - 1,\qquad \beta_{s} = \frac{ - 1 + 2\beta_{0}}{2s},\quad \mbox{and} \quad \beta_{1} = \frac{1 + 2s - 2(1 + s)\beta _{0}}{2s}. \end{aligned}$$

Method (20) has order 2, and its truncation error takes the form

$$ T_{2} = \frac{2 + 3 s - 6(1 + s) \beta_{0}}{12}h^{3}y^{(3)}(\eta ). $$

The one-leg twin of (20) takes the form

$$ \alpha_{n} y_{n} + \alpha_{n - 1} y_{n - 1} = h f \bigl(\beta_{s} t_{n + s} + \beta_{1} t_{n} + \beta_{0} t_{n - 1}, \beta_{s} y_{n + s} + \beta_{1} y_{n} + \beta_{0} y_{n - 1}\bigr) $$
(22)

and has order 2 and its truncation error takes the form

$$ T_{2}=(1/24) h^{3} y ^{(3)} (\eta ). $$

To discuss the G-stability of (22), using (8), we have

$$ f_{n + s} = f_{n}. $$

Substituting \(f_{n+s}\) into equation (20), it becomes

$$ \alpha_{n} y_{n} + \alpha_{n - 1} y_{n - 1} = h (\beta_{s} f_{n} + \beta_{1} f_{n} + \beta_{0}f_{n - 1}). $$

The corresponding characteristic equations are

$$ \rho (\xi ) = \alpha_{n} \xi + \alpha_{n - 1},\qquad \sigma (\xi ) = ( \beta_{1} + \beta_{s})\xi + \beta_{0}. $$

Applying Theorem 2, the variables \(a _{i}\), \(i=0,1\) and \(g_{ij}\), \(i,j=1\) satisfy the relations

$$\begin{aligned}& a_{0} = \sqrt{\frac{1}{2} - \beta_{0}},\qquad a_{1} =- \sqrt{ \frac{1}{2} - \beta_{0}}\quad \mbox{and} \quad g_{11}=a_{0} ^{2} + \beta_{0}. \end{aligned}$$

Choosing \(\beta_{0}<1/2\) makes \(a_{0}\), \(a_{1}\) real and gives \(g_{11}=1/2 > 0\). So, method (22) is G-stable.
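The underlying identity (19) can be verified symbolically for this \(k=1\) case. A sketch (our symbol names) with the normalized \(\sigma (\xi ) = (1-\beta_{0})\xi + \beta_{0}\):

```python
import sympy as sp

# Verify identity (19) for the k = 1 one-leg twin (22):
#   rho(xi) = xi - 1,  sigma(xi) = (1 - beta0)*xi + beta0,
# with g11 = 1/2, a0 = sqrt(1/2 - beta0), a1 = -a0.
xi, w, b0 = sp.symbols('xi omega beta0')

rho = lambda z: z - 1
sigma = lambda z: (1 - b0) * z + b0

a0 = sp.sqrt(sp.Rational(1, 2) - b0)
a1 = -a0
g11 = sp.Rational(1, 2)

lhs = sp.Rational(1, 2) * (rho(xi) * sigma(w) + rho(w) * sigma(xi))
rhs = (xi * w - 1) * g11 + (a0 + a1 * xi) * (a0 + a1 * w)

print(sp.simplify(lhs - rhs))  # 0
```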

In the case of \(k=2\), method (4) takes the form

$$\begin{aligned}& \alpha_{n} y_{n} + \alpha_{n - 1} y_{n - 1} + \alpha_{n - 2} y_{n - 2} = h (\beta_{s} f_{n + s} + \beta_{1} f_{n} + \beta_{0}f_{n - 1}), \end{aligned}$$
(23)
$$\begin{aligned}& y_{n + s} = y_{n} + s h f_{n} + s^{2}(h f_{n} - y_{n} + y_{n - 1}). \end{aligned}$$
(24)

After normalization

$$\begin{aligned}& \alpha_{n} = \frac{14 + 9 s}{6(2 + s + (1 + s)\beta_{0})},\qquad \alpha_{n - 1} = \frac{ - 8 - 6s + 3(1 + s)\beta_{0}}{3(2 + s + (1 + s) \beta_{0})}, \\& \alpha_{n - 2} = \frac{2 + 3 s - 6(1 + s)\beta_{0}}{6(2 + s + (1 + s)\beta_{0})}, \\& \beta_{s} = \frac{ - 4 + 5\beta_{0}}{6 s(2 + s + (1 + s)\beta_{0})},\quad \text{and} \quad \beta_{1} = \frac{4 + 6 s(2 + s) - (1 + s)(5 + 3 s) \beta_{0}}{6 s(2 + s + (1 + s)\beta_{0})}. \end{aligned}$$

Method (23) has order 3, and its truncation error takes the form

$$ T_{3} = \frac{8 + 2 s(9 + 4s) + \beta_{0}( - 21 - s(33 + 10s) + 12(1 + s)\beta_{0})}{72(2 + s + (1 + s)\beta_{0})}h^{4}y^{(4)}(\eta ). $$

The one-leg twin of (23) takes the form

$$ \alpha_{n} y_{n} + \alpha_{n - 1} y_{n - 1} + \alpha_{n - 2} y_{n - 2} = h f \bigl(\beta_{s} t_{n + s} + \beta_{1} t_{n} + \beta_{0} t_{n - 1}, \beta_{s} y_{n + s} + \beta_{1} y_{n} + \beta_{0} y_{n - 1}\bigr) $$
(25)

and has order 2 if \(\beta_{0}=(2+3s)/(6(1+s))\), in which case its truncation error takes the form

$$T_{3}=(1/24)h ^{3} y'''( \eta ). $$

To discuss the G-stability of (25), using (8), we have

$$ f_{n + s} = f_{n} + 2s (h f_{n} - y_{n} + y_{n - 1})/h. $$

Substituting \(f_{n+s}\) into equation (23), it becomes

$$ \alpha_{n} y_{n} + \alpha_{n - 1} y_{n - 1} + \alpha_{n - 2} y_{n - 2} = h \bigl(\beta_{s} \bigl(f_{n} + 2s (h f_{n} - y_{n} + y_{n - 1})/h\bigr) + \beta _{1} f_{n} + \beta_{0}f_{n - 1}\bigr). $$

The corresponding characteristic equations are

$$\begin{aligned}& \rho (\xi ) = (\alpha_{n} + 2 s\beta_{s})\xi^{2} + (\alpha_{n - 1} - 2 s \beta_{s})\xi + \alpha_{n - 2}, \\& \sigma (\xi ) = \bigl(\beta_{1} + \beta_{s}(1 + 2s)\bigr) \xi^{2} + \beta_{0} \xi . \end{aligned}$$

Applying Theorem 2, the variables \(a _{i}\), \(i=0,1,2\) and \(g_{ij}\), \(i,j=1,2\) satisfy the relations

$$\begin{aligned}& g_{11}=a_{0}^{2}, \\& g_{12}=\bigl(12a_{0} a_{1}\bigl(2+s+(1+s) \beta _{0}\bigr)+ \beta _{0}\bigl(-2-3s+6(1+s) \beta _{0}\bigr)\bigr)/\bigl(12\bigl(2+s+(1+s) \beta _{0}\bigr) \bigr), \\& g_{22}=-\bigl(36a_{2}^{2}\bigl(2+s+(1+s) \beta _{0}\bigr)^{2}+(6+9s+10 \beta _{0}) \bigl(-4-6s+(-2+3s) \beta _{0}\bigr)\bigr) \\& \hphantom{g_{22}=}{}/ \bigl(36\bigl(2+s+(1+s) \beta _{0} \bigr)^{2}\bigr). \end{aligned}$$

Choosing \(\beta_{0}=0.4\) and \(s=0.9\) gives \(a_{0}=0.079714\), \(a_{1}=-0.0561286\), \(a_{2}=-0.0235855\), \(g_{11}> 0\), and \(\det \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} >0\). Therefore, the matrix G is positive definite and method (25) is G-stable.
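This positive-definiteness claim is easy to check numerically from the \(g_{ij}\) formulas above; a sketch with the stated parameter values (and \(\beta_{0}\) in place of the misprinted \(\beta^{*}\)):

```python
# Numeric sanity check of positive definiteness for the k = 2 case,
# using beta0 = 0.4, s = 0.9 and the stated a_i values.
b0, s = 0.4, 0.9
a0, a1, a2 = 0.079714, -0.0561286, -0.0235855
d = 2 + s + (1 + s) * b0   # the common factor 2 + s + (1 + s)*beta0

g11 = a0**2
g12 = (12 * a0 * a1 * d + b0 * (-2 - 3 * s + 6 * (1 + s) * b0)) / (12 * d)
g22 = -(36 * a2**2 * d**2
        + (6 + 9 * s + 10 * b0) * (-4 - 6 * s + (-2 + 3 * s) * b0)) / (36 * d**2)

det = g11 * g22 - g12**2   # leading principal minors: g11 and det
print(g11 > 0 and det > 0)  # True
```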

5 The second hybrid class

The second hybrid class takes the form

$$\begin{aligned}& y_{n + s} = h\mu f_{n} + \sum_{j = 0}^{k - 2} \gamma_{n - j} y_{n - j}, \end{aligned}$$
(26)
$$\begin{aligned}& y_{n} + \sum_{j = 1}^{k} \alpha_{n - j} y_{n - j} = h \beta_{s} \bigl(f _{n + s} - \beta^{*} f_{n - 1}\bigr), \end{aligned}$$
(27)

where \(f_{n+s} = f(t _{n+s}, y _{n+s})\), \(t_{n+s} = t _{n} + s h\), \(-1 < s < 1\), and \(\beta_{s}\), \(\alpha_{ n- j}\), \(j = 1, 2, \dots , k\), are parameters to be determined as functions of s and \(\beta ^{*}\). The method with step number k has order \(p = k\) and \(y_{n+s}\) has order \(k - 1\). To evaluate \(y_{n+s}\) at the off-step point \(t _{n+s}\), consider the nodes \(t _{n}\) (double node) and \(t_{ n-1 }, \dots , t _{ n- k}\) (simple nodes) [14].

Here, the one-leg twin of the second class is studied when \(k = 2\) and \(k = 3\).

In the case of \(k=2\), the method takes the form

$$\begin{aligned}& \alpha_{n}y_{n} + \alpha_{n - 1}y_{n - 1} + \alpha_{n - 2}y_{n - 2} = h \beta_{s} \bigl(f_{n + s} - \beta^{*}f_{n - 1}\bigr), \end{aligned}$$
(28)
$$\begin{aligned}& y_{n + s} = y_{n} + s h f_{n}, \end{aligned}$$
(29)

where

$$\begin{aligned}& \alpha_{n} = \frac{3 + 2s - \beta^{*}}{2(1 - \beta^{*})},\qquad \alpha_{n - 1} = \frac{ - 2(1 + s)}{(1 - \beta^{*})}, \\& \alpha_{n - 2} = \frac{ - ( - 1 - 2s - \beta^{*})}{2(1 - \beta^{*})}\quad \mbox{and} \quad \beta_{s} = \frac{1}{(1 - \beta^{*})}. \end{aligned}$$

Method (28) has order 2, and its truncation error takes the form

$$ T_{3} = \frac{2 + 3s(2 + s) + \beta^{*}}{6( - 1 + \beta^{*})}h^{3} y'''( \eta ). $$

The one-leg twin of (28) takes the form

$$ \alpha_{n}y_{n} + \alpha_{n - 1}y_{n - 1} + \alpha_{n - 2}y_{n - 2} = h f\bigl(\beta_{s} t_{n + s} - \beta_{s} \beta^{*}t_{n - 1}, \beta_{s} y_{n + s} - \beta_{s} \beta^{*}y_{n - 1} \bigr) $$
(30)

and has order 2; its truncation error takes the form

$$ \bar{T}_{3} = \biggl(\frac{1}{6} - \frac{(1 + s)^{2}}{2( - 1 + \beta^{*})^{2}}\biggr)h ^{3} y'''(\eta ). $$

If

$$ s = \frac{1}{3}\bigl( - 3 + \sqrt{3} \sqrt{1 - 2\beta^{*} + \beta^{*2}} \bigr), $$

then method (30) has order 3 and its truncation error becomes

$$ \bar{T}_{4} = \frac{1}{36\sqrt{3}} h^{4} y^{(4)}(\eta ). $$
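The stated value of s is exactly the root of the \(h^{3}\) error coefficient of \(\bar{T}_{3}\); this can be confirmed symbolically (a sketch, with our symbol names):

```python
import sympy as sp

# Check that the h^3 error coefficient of (30) vanishes at the stated s.
s, bst = sp.symbols('s betastar')

coeff = sp.Rational(1, 6) - (1 + s)**2 / (2 * (-1 + bst)**2)
s_star = sp.Rational(1, 3) * (-3 + sp.sqrt(3) * sp.sqrt(1 - 2 * bst + bst**2))

print(sp.simplify(coeff.subs(s, s_star)))  # 0
```

(Note that \(\sqrt{1-2\beta^{*}+\beta^{*2}} = |1-\beta^{*}|\), so for \(\beta^{*}<1\) the condition reads \(s = -1 + (1-\beta^{*})/\sqrt{3}\).)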

To discuss G-stability of (30), using (8), we have

$$f_{n+s} = f _{n}. $$

Substituting \(f_{n+s}\) into equation (28), it becomes

$$ \sum_{i = 0}^{2} \alpha_{n - i}y_{n - i} = h \beta_{s} \bigl(f_{n} - \beta ^{*}f_{n - 1} \bigr). $$

The corresponding characteristic equations are

$$\rho ( \xi ) = \alpha_{n} \xi ^{2} + \alpha _{n-1} \xi +\alpha _{n-2}\quad \mbox{and}\quad \sigma ( \xi ) = \beta_{s} \xi \bigl(\xi - \beta ^{*}\bigr). $$

Applying Theorem 2, the variables \(a _{i}\), \(i = 0, 1, 2\) and \(g _{ij}\); \(i, j = 1, 2\) satisfy the relations

$$\begin{aligned}& g_{11} = a_{0}^{2}, \\& g_{21} = \frac{ - 4(1 + s) - 4a_{1} a_{2} ( - 1 + \beta^{*})^{2} + \beta^{*}( - 3 - 2s + \beta^{*})}{4( - 1 + \beta^{*})^{2}}, \\& g_{22} = - a_{2}^{2} + \frac{3 + 2 s - \beta^{*}}{2( - 1 + \beta^{*})^{2}}. \end{aligned}$$

Choosing \(\beta ^{*}= 0.3\) and \(s = - 0.1\) gives \(a _{0} = - 0.583636\), \(a_{1} = 1.54524\), \(a_{2} = 0.9616\), \(g_{11} > 0\), and \(\det \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} > 0\).

Therefore, the matrix G is positive definite and method (30) is G-stable.

In the case of \(k = 3\), the method takes the form

$$\begin{aligned}& \alpha_{n}y_{n} + \alpha_{n - 1}y_{n - 1} + \alpha_{n - 2}y_{n - 2} + \alpha_{n - 3}y_{n - 3} = h \beta_{s} \bigl(f_{n + s} - \beta^{*}f_{n - 1} \bigr), \end{aligned}$$
(31)
$$\begin{aligned}& y_{n + s} = y_{n} + s h f_{n} + s^{2}(h f_{n} - y_{n} + y_{n - 1}), \end{aligned}$$
(32)

where

$$\begin{aligned}& \alpha_{n} = \frac{11 + 12s + 3 s^{2} - 2\beta^{*}}{6(1 - \beta^{*})},\qquad \alpha_{n - 1} = \frac{ - (6 + 10s + 3s^{2} + \beta^{*})}{2(1 - \beta ^{*})}, \\& \alpha_{n - 2} = \frac{3 + 8s + 3 s^{2} + 2\beta^{*}}{2(1 - \beta^{*})},\qquad \alpha_{n - 3} = \frac{ - (2 + 6s + 3 s^{2} + \beta^{*})}{6(1 - \beta^{*})} \quad \mbox{and}\quad \\& \beta_{s} = \frac{1}{(1 - \beta^{*})}. \end{aligned}$$

Method (31) has order 3 and its truncation error takes the form

$$ T_{4} = \frac{(3 + 2s)(1 + s(3 + s)) + \beta^{*}}{12( - 1 + \beta^{*})}h ^{4} y^{(4)}(\eta ). $$

The one-leg twin of (31) takes the form

$$\begin{aligned}& \alpha_{n}y_{n} + \alpha_{n - 1}y_{n - 1} + \alpha_{n - 2}y_{n - 2} + \alpha_{n - 3}y_{n - 3} \\& \quad = h f\bigl(\beta_{s} t_{n + s} - \beta_{s} \beta ^{*}t_{n - 1}, \beta_{s} y_{n + s} - \beta_{s} \beta ^{*}y_{n - 1}\bigr) \end{aligned}$$
(33)

and has order 2; its truncation error takes the form

$$ \bar{T}_{3} = - \frac{(1 + s)^{2}\beta^{*}}{2( - 1 + \beta^{*})^{2}}h ^{3} y'''(\eta ). $$

To discuss G-stability of (33), using (8), we have

$$ h f_{n + s} = h f_{n} + 2s (h f_{n} - y_{n} + y_{n - 1}). $$

Substituting \(f_{n+s}\) into equation (31), it becomes

$$ \sum_{i = 0}^{3} \alpha_{n - i}y_{n - i} = \beta_{s} \bigl(\bigl(h f_{n} + 2 s (h f_{n} - y_{n} + y_{n - 1})\bigr) - h\beta^{*}f_{n - 1} \bigr). $$

The corresponding characteristic equations are

$$\begin{aligned}& \rho (\xi ) = ( \alpha_{n} +2 s \beta_{s}) \xi ^{3} + ( \alpha _{n -1}-2s \beta_{s}) \xi ^{2} + \alpha _{n-2} \xi +\alpha _{n-3} \quad \mbox{and} \\& \sigma (\xi ) =(1+2s)\beta_{s} \xi ^{3} - \beta ^{*} \beta_{s} \xi^{2}. \end{aligned}$$

Applying Theorem 2, the variables \(a _{i}\), \(i = 0, 1, 2, 3\) and \(g _{ij} \); \(i, j = 1, 2, 3\) satisfy the relations

$$\begin{aligned}& g_{11} = a_{0}^{2}, \\& g_{12} = g_{21} = a_{0} a_{1}, \\& g_{13} = g_{31} = \frac{ 12 a_{0} a_{2} ( - 1 + \beta^{*})^{2} - \beta ^{*}(2 + 3s(2 + s) + \beta^{*})}{12( - 1 + \beta^{*})^{2}}, \\& g_{23} = g_{32} \\& \hphantom{g_{23}} = \frac{ - 3(1 + 2s)(6 + s(14 + 3s)) - 12 a_{2} a_{3} ( - 1 + \beta^{*})^{2} + \beta^{*}( - 14 - 3s(10 + s) + 2\beta^{*})}{12( - 1 + \beta^{*})^{2}}, \\& g_{22} = a_{0}^{2} + a_{1}^{2}, \\& g_{33} = a_{0}^{2} + a_{1}^{2} + \frac{ 2 a_{2}^{2}( - 1 + \beta^{*})^{2} - \beta^{*}(6 + s(14 + 3s) + \beta^{*})}{2( - 1 + \beta ^{*})^{2}}. \end{aligned}$$

Choosing \(s = - 0.3\) and \(\beta ^{*}= 0.2\) gives \(a _{0} = - 0.231455\), \(a_{1} = 0.613677\), \(a_{2} = 0.53299\), \(a_{3} = 0.150767\), \(g_{11} > 0\), \(\det \begin{pmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{pmatrix} > 0\), and \(\det \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix} > 0\).

Therefore, the matrix G is positive definite and method (33) is G-stable.

6 Numerical tests

Here, some numerical results are presented to evaluate the performance of the proposed technique [1, 4, 13]. The numerical tests are solved after reduction.

Test 1

Consider the differential algebraic equations:

$$\begin{aligned}& y'_{1}(t) - t y'_{2} (t) + t^{2} y'_{3} (t) + y_{1}(t) - (t + 1) y_{2}(t) + \bigl(t^{2}+ 2t\bigr) y_{3}(t) = 0; \\& y'_{2}(t) - t y'_{3} (t) - y_{2}(t) + (t - 1) y_{3}(t) = 0; \\& y _{3}(t) = \sin(t); \end{aligned}$$

with the initial condition \(y_{1}(0) = 1\); \(y_{2}(0) = 1\); \(y_{3}(0) = 0\); and the exact solution is

$$\begin{aligned}& y_{1}(t) = \exp (-t) + t \exp (t); \qquad y_{2}(t) = \exp (t) + t\sin(t); \qquad y _{3}(t) = \sin (t). \end{aligned}$$

Test 2

Consider the differential algebraic equations:

$$\begin{aligned}& x'(t) = 2(1 -y) \sin (y) + x /\sqrt{1-y}; \qquad 0 = x^{2}+ (y - 1) \cos ^{2}(y); \end{aligned}$$

with the initial condition \(x(1)=1\); \(y(1)=0\); and the exact solution is \(x(t)= t \cos (1-t^{2})\), \(y(t) =1-t^{2}\).

Test 3

Consider the differential algebraic equations:

$$\begin{aligned}& x'(t) = f(x; t) - B(x; t)y;\qquad 0 = g(x; t); \end{aligned}$$

where \(x(t) = (x_{1},x_{2})^{\mathrm{T}}\); \(f(x; t) = (1+ (t- 1/2) \exp (t), 2t + (t^{2}-1/4 ) \exp (t))^{\mathrm{T}}\); \(B(x; t) = (x _{1},x_{2})^{\mathrm{T}}\); \(g(x; t) = 1/2 (x^{2} _{1} + x ^{2} _{2} - (t - 1/2)^{2}- (t^{2} - 1/4 )^{2})\); with the initial condition \(x_{1}(0) = - 1/2\); \(x_{2}(0) = -1/4\); and the exact solution is \(x_{1}(t) = (t - 1/2)\); \(x_{2}(t) = t^{2} -1/4\); \(y(t) = \exp (t)\).

The above tests are solved by the two hybrid classes and their one-leg twins with \(k=2\) at different values of t. In the first method, \(\beta_{0} =0.8\), \(s = 2\), and in the second method, \(\beta^{*} = -0.4\), \(s = - 0.3\). The errors of the numerical solutions of tests 1, 2, and 3 are tabulated in Tables 6, 7, and 8, respectively, for different step sizes h.
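To illustrate how the first class is applied in practice, the sketch below uses its simplest member (20)–(21) with the same choices \(\beta_{0}=0.8\), \(s=2\) on the scalar test \(y'=-y\), \(y(0)=1\) (a plain ODE, not one of the DAE tests above); for linear f the predictor–corrector pair collapses to one explicit update per step:

```python
import math

# Method (20)-(21) (k = 1 member of the first class) on y' = -y, y(0) = 1,
# with beta0 = 0.8, s = 2 as in the experiments.
b0, s = 0.8, 2.0
bs = (-1 + 2 * b0) / (2 * s)                 # beta_s
b1 = (1 + 2 * s - 2 * (1 + s) * b0) / (2 * s)  # beta_1

lam, h, T = -1.0, 0.01, 1.0
y = 1.0
for _ in range(int(round(T / h))):
    # predictor (21): y_{n+s} = y_n + s*h*f_n, so f_{n+s} = lam*(1 + s*h*lam)*y_n;
    # corrector (20), implicit in y_n, solved in closed form for f = lam*y:
    y = (1 + h * lam * b0) * y / (1 - h * lam * (bs * (1 + s * h * lam) + b1))

print(y, math.exp(-1.0))  # agree to second-order accuracy
```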

Table 6 The error of the first test
Table 7 The error of the second test
Table 8 The error of the third test

7 Conclusion

In this paper, the first hybrid class is studied for \(k=1\) up to 6. It has large stability regions. Its one-leg twin for \(k=1\) (\(p=2\)) and \(k=2\) (\(p=3\)) is G-stable. In the second class, for \(k = p = 2\), the one-leg twin has order 2, except when \(s =\frac{1}{3} (-3+\sqrt{3} \sqrt{1 -2 \beta ^{*} + \beta ^{*2}})\), where it has order 3. For \(k = p = 3\), the one-leg twin has order 2; if \(\beta^{*} = 0\), it reduces to a one-leg hybrid BDF, and these twins are G-stable. The numerical tests show that the hybrid method (4) gives good results with small step sizes.