1 Introduction

Boundary value problems (BVPs) with higher order differential equations play an important role in a variety of branches of applied mathematics, engineering, and many other fields of the advanced physical sciences. For example, third-order BVPs arise in several physical problems, such as the deflection of a curved beam, the motion of rockets, thin-film flows, electromagnetic waves, and gravity-driven flows (see [3, 4] and references therein). Fifth-order differential equations are used in the mathematical modelling of viscoelastic flows [17]. Seventh-order BVPs arise in modelling an induction motor with two rotor circuits [26]. Ninth-order BVPs are known to arise in hydrodynamic and hydromagnetic stability, and in the mathematical modelling of AFTI-F16 fighters [2, 23]. For more details on high-order BVPs, see [6, 13, 14, 15, 19, 20, 22, 33, 34] and the references therein.

In this paper, we will consider the boundary value problem

$$\begin{aligned} \left\{ \begin{array}{ll} y^{(2r+1)}(x)=f\left( x,y(x),y'(x),\dots , y^{(q)}(x)\right) \\ y^{(2i+1)}(0)=\alpha _i,\qquad i=0,\dots , r-1, \\ y^{(2i)}(1)=\beta _i,\qquad i=0,\dots , r, \end{array} \right. \end{aligned}$$
(1.1)

with \(0\le q\le 2r\) fixed; \(\alpha _i\), \(i=0,\dots , r-1,\) and \(\beta _i\), \(i=0,\dots , r,\) finite real constants. The function f is defined and continuous in \([0,1]\times D\), \(D\subset \mathrm{I\!R}^{q+1}\).

The existence and uniqueness of solutions of high-order BVPs are discussed in [1], but numerical methods and examples are only mentioned there. Problem (1.1) does not appear in [1], owing to the particular form of its boundary conditions. The boundary conditions in (1.1) have a physical meaning: for \(r = 1\), for example, they represent the position and acceleration at the end point and the velocity at the starting point of the system. However, to the best of the authors' knowledge, no concrete physical problems of this type appear, at least in the mathematical literature.

We call the BVP (1.1) the Lidstone–Euler second-type boundary value problem, in contrast to the Lidstone–Euler (first-type) boundary value problem in [10].

The interpolatory theory and qualitative as well as quantitative study of BVPs are directly connected [1, 7, 10, 16]. The boundary conditions in (1.1), that is

$$\begin{aligned} y^{(2i+1)}(0)=\alpha _i,\quad i=0,\dots , r-1, \qquad y^{(2i)}(1)=\beta _i,\quad i=0,\dots , r,\nonumber \\ \end{aligned}$$
(1.2)

represent the Birkhoff-type interpolatory problem with the following incidence matrix, in Schoenberg notation [28]:

$$\begin{aligned} \left( \begin{array} {ccccc} 0 &{}\quad 1 &{}\quad 0 &{}\quad 1 &{}\quad \cdots \\ 1 &{}\quad 0 &{}\quad 1 &{}\quad 0 &{}\quad \cdots \end{array}\right) . \end{aligned}$$
(1.3)

The corresponding interpolation series has been considered in [24, 25, 27, 28], and in [36] for analogous problems.

Our study on the BVP (1.1) starts from the interpolatory problem (1.2).

The paper is organized as follows: in Sect. 2, we consider the Lidstone–Euler interpolation problem (1.2) [8, 11] using the Lidstone–Euler second-type (or even) polynomials, and we give some new results concerning error bounds and convergence. In Sect. 3, we consider the existence and uniqueness of the solution of problem (1.1). In Sect. 4, we discuss some computational aspects and give two algorithms for computing a numerical solution of the problem. In Sect. 5, we present some numerical examples to illustrate the applicability of the proposed methods; the results show that the described procedures produce accurate approximations. Finally, some conclusions are given.

2 Lidstone–Euler Second-Type Polynomials and Related Interpolation Problem

Lidstone–Euler second-type polynomials \({\mathscr {S}}_k(x)\), also called Lidstone–Euler even polynomials, have been introduced in [8, 11, 12]. They satisfy the second-order BVP

$$\begin{aligned} \left\{ \begin{array}{l} {\mathscr {S}}''_k(x)=2k(2k-1){\mathscr {S}}_{k-1}(x),\quad k\ge 1, \\ {\mathscr {S}}_k(1)=0,\quad k\ge 1,\qquad {\mathscr {S}}'_k(0)=0,\quad k\ge 0. \end{array} \right. \end{aligned}$$
(2.1)

Polynomials \({\mathscr {S}}_k(x)\) are connected to Euler polynomials by the identity

$$\begin{aligned} {\mathscr {S}}_k(x)=2^{2k}E_{2k}\left( \frac{x+1}{2}\right) , \qquad k=0,1,\ldots , \end{aligned}$$
(2.2)

where \(E_k(x)\) is the classic Euler polynomial of degree k [8]. Moreover, \({\mathscr {S}}_k(x)\) satisfies

$$\begin{aligned} {\mathscr {S}}_k(x)=\frac{\varepsilon '_k (x)}{2k+1}, \end{aligned}$$

\(\varepsilon _k (x)\) being the Lidstone–Euler first-type polynomial [8, 10, 11].

The first polynomials \({\mathscr {S}}_k(x)\) are

$$\begin{aligned} \begin{aligned}&{\mathscr {S}}_0(x)=1,\\&{\mathscr {S}}_1(x)= -1 + x^2,\\&{\mathscr {S}}_2(x)= 5 - 6 x^2 + x^4, \\&{\mathscr {S}}_3(x)=-61 + 75 x^2 - 15 x^4 + x^6, \\&{\mathscr {S}}_4(x)=1385 -1708 x^2 + 350 x^4 - 28 x^6 + x^8. \end{aligned} \end{aligned}$$

Relations (2.1) and (2.2) justify the name Lidstone–Euler type given to the sequence \(\left\{ {\mathscr {S}}_k\right\} _k\). This sequence is important because it acts as the fundamental polynomial sequence for the Birkhoff interpolation problem given by the incidence matrix (1.3).
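Identity (2.2) makes the sequence easy to generate in a computer algebra system. The following sketch (using SymPy; the range of k checked is an arbitrary choice) reproduces the polynomials listed above and verifies the defining relations (2.1):

```python
import sympy as sp

x = sp.symbols('x')

def S(k):
    # Lidstone-Euler second-type polynomial via the identity (2.2)
    return sp.expand(2**(2*k) * sp.euler(2*k, (x + 1)/2))

# the polynomials listed above
assert sp.expand(S(0) - 1) == 0
assert sp.expand(S(1) - (x**2 - 1)) == 0
assert sp.expand(S(2) - (x**4 - 6*x**2 + 5)) == 0
assert sp.expand(S(3) - (x**6 - 15*x**4 + 75*x**2 - 61)) == 0
assert sp.expand(S(4) - (x**8 - 28*x**6 + 350*x**4 - 1708*x**2 + 1385)) == 0

# the defining BVP (2.1): S_k'' = 2k(2k-1) S_{k-1}, S_k(1) = 0, S_k'(0) = 0
for k in range(1, 5):
    assert sp.expand(sp.diff(S(k), x, 2) - 2*k*(2*k - 1)*S(k - 1)) == 0
    assert S(k).subs(x, 1) == 0
    assert sp.diff(S(k), x).subs(x, 0) == 0
```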

Theorem 1

For the Lidstone–Euler second-type polynomials, the following identity holds:

$$\begin{aligned} {\mathscr {S}}_k(x)=(2k)!\int _0^1 g_{k} (x,t)\, \mathrm{d}t,\qquad k\ge 1, \end{aligned}$$
(2.3)

where

$$\begin{aligned}&g_1(x,t)=\left\{ \begin{array}{lll} x-1 &{} &{} t\le x\\ t-1 &{} &{} t> x, \end{array} \right. \end{aligned}$$
(2.4)
$$\begin{aligned}&g_k(x,t)=\int _0^1 g_{1} (x,s) g_{k-1} (s,t)\, \mathrm{d}s,\qquad k\ge 2. \end{aligned}$$
(2.5)

Proof. The proof follows by induction on k. For \(k=1\), the claim is trivial. Assuming that (2.3) holds for some \(k\ge 1\), we observe that the polynomial \({\mathscr {S}}_{k+1}(x)\) is the solution of the boundary value problem

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle {\mathscr {S}}_{k+1}''(x)=(2k+2)(2k+1){\mathscr {S}}_{k}(x)\\ {\mathscr {S}}_{k+1}(1)={\mathscr {S}}_{k+1}'(0)=0. \end{array} \right. \end{aligned}$$
(2.6)

From the inductive hypothesis, we have

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle {\mathscr {S}}_{k+1}''(x)=(2k+2)!\int _0^1 g_k(x,t) \, \mathrm{d}t\\ {\mathscr {S}}_{k+1}(1)={\mathscr {S}}_{k+1}'(0)=0. \end{array} \right. \end{aligned}$$
(2.7)

From the theory of differential equations, the solution of (2.7) is

$$\begin{aligned} \begin{array}{ll} \displaystyle {\mathscr {S}}_{k+1}(x) &{}= \displaystyle \int _0^1 g_1(x,s) \left[ (2k+2)! \int _0^1 g_k (s,t) \, \mathrm{d}t\right] \mathrm{d}s\\ &{}\displaystyle = (2k+2)! \int _0^1 \left[ \int _0^1 g_1(x,s) g_k (s,t) \, \mathrm{d}s\right] \, \mathrm{d}t\\ &{}\displaystyle = (2k+2)! \int _0^1 g_{k+1} (x,t) \, \mathrm{d}t. \end{array} \end{aligned}$$

Thus, the claim follows.
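Identity (2.3) can also be checked numerically from the recursion (2.4)–(2.5) alone. The sketch below discretizes the integrals with a trapezoidal rule (grid size and tolerances are arbitrary choices) and compares the result with the explicit polynomials \({\mathscr {S}}_1(x)=x^2-1\) and \({\mathscr {S}}_2(x)=x^4-6x^2+5\):

```python
import numpy as np

M = 2000
t = np.linspace(0.0, 1.0, M + 1)
w = np.full(M + 1, 1.0 / M); w[0] = w[-1] = 0.5 / M   # trapezoid weights

def g1(x, tt):
    # (2.4)
    return np.where(tt <= x, x - 1.0, tt - 1.0)

def S1(x):
    # (2.3) with k = 1:  2! * int_0^1 g_1(x,t) dt
    return 2.0 * float(np.sum(w * g1(x, t)))

def S2(x):
    # (2.3) with k = 2, where g_2 comes from the recursion (2.5)
    g2 = np.array([float(np.sum(w * g1(x, t) * g1(t, tt))) for tt in t])
    return 24.0 * float(np.sum(w * g2))

for xv in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(S1(xv) - (xv**2 - 1.0)) < 1e-5
    assert abs(S2(xv) - (xv**4 - 6.0*xv**2 + 5.0)) < 1e-3
```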

Remark 1

From (2.4), \(g_1(x,t)\le 0\), \(0\le x,t\le 1\). Thus, from (2.5),

$$\begin{aligned} 0\le (-1)^k g_k(x,t)=\Bigl | g_k(x,t) \Bigr |,\qquad 0\le x,t\le 1. \end{aligned}$$

Hence, in view of (2.3), we get \( (-1)^k{\mathscr {S}}_k(x)\ge 0\), \( 0\le x\le 1\).

Theorem 2

The Lidstone–Euler second-type polynomials can be written as

$$\begin{aligned} {\mathscr {S}}_k(x)=(2k)!\int _0^1 t G_{k-1} (x,t)\, \mathrm{d}t,\qquad k\ge 1, \end{aligned}$$
(2.8)

where

$$\begin{aligned}&G_0(x,t)=\left\{ \begin{array}{lll} 0 &{} &{} t\le x\\ -1 &{} &{} t>x, \end{array} \right. \end{aligned}$$
(2.9)
$$\begin{aligned}&G_k(x,t)=\int _0^1 g_1 (x,s) G_{k-1} (s,t)\, \mathrm{d}s,\qquad k\ge 1, \end{aligned}$$
(2.10)

with \(g_1\) defined as in (2.4).

Proof. Relation (2.8) can be proved by induction. For \(k=1\), it is trivially true. Assuming that (2.8) holds for some \(k\ge 1\), the boundary value problem (2.6) can be written as

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle {\mathscr {S}}_{k+1}''(x)=(2k+2)!\int _0^1 t G_{k-1}(x,t) \, \mathrm{d}t\\ {\mathscr {S}}_{k+1}(1)={\mathscr {S}}_{k+1}'(0)=0. \end{array} \right. \end{aligned}$$
(2.11)

The solution of (2.11) is

$$\begin{aligned} \begin{array}{ll} \displaystyle {\mathscr {S}}_{k+1}(x) &{}= \displaystyle \int _0^1 g_1(x,s) \left[ (2k+2)! \int _0^1 t \, G_{k-1} (s,t) \, \mathrm{d}t\right] \mathrm{d}s\\ &{}\displaystyle = (2k+2)! \int _0^1 \left[ \int _0^1 g_1(x,s) G_{k-1} (s,t) \, \mathrm{d}s\right] t \, \mathrm{d}t\\ &{}\displaystyle = (2k+2)! \int _0^1 t\, G_{k} (x,t) \, \mathrm{d}t, \end{array} \end{aligned}$$

and this completes the proof.

Remark 2

Since \(g_1(x,t)\le 0\), from (2.10) it holds \((-1)^{k+1} G_k(x,t)\ge 0\).

Proposition 1

Let \(G_k(x,t)\) be the function defined in (2.9)–(2.10). Then

(i) \(\displaystyle G_{k}(1,t)=0,\quad k\ge 1;\)

(ii) \(\displaystyle \left. \frac{\partial }{\partial x} G_{k}(x,t)\right| _{x=0} =0,\quad k\ge 0;\)

(iii) \(\displaystyle \frac{\partial ^{2s}}{\partial x^{2s}}G_{k}(x,t) = G_{k-s}(x,t),\qquad s=0,\dots , k-1,\quad k\ge 1;\)

(iv) \(\displaystyle \frac{\partial ^{2s+1}}{\partial x^{2s+1}}G_{k}(x,t) =\frac{\partial }{\partial x}G_{k-s}(x,t),\qquad s=0,\dots , k-1,\quad k\ge 1.\)

Proof. The first two identities follow from the definition of \(G_k\left( x,t\right) \) and the boundary conditions in (2.1). Property (iii) follows from the first equation in (2.1) and Theorem 2. Relation (iv) follows from (iii).

Proposition 2

For the function \(G_k(x,t)\), the following inequalities hold:

$$\begin{aligned}&\int _0^1 \Bigl |G_k(x,t)\Bigr | \mathrm{d}t\le \frac{1}{2^k},\qquad k\ge 0; \end{aligned}$$
(2.12)
$$\begin{aligned}&\int _0^1 \left| \frac{\partial }{\partial x} G_{k}(x,t)\right| \mathrm{d}t\le \frac{1}{2^{k-1}},\qquad k\ge 1. \end{aligned}$$
(2.13)

Proof. The proof of (2.12) follows by induction. For \(k=0\), the claim is trivial. Assuming that (2.12) holds for some \(k\ge 0\), from (2.4) and (2.10) we get

$$\begin{aligned} \begin{aligned} \int _0^1 |G_{k+1}(x,t)| \mathrm{d}t&= \int _0^1 \left| \int _0^1 g_{1}(x,s) G_k(s,t) \mathrm{d}s\right| \mathrm{d}t\\&\le \int _0^1 \left| g_{1}(x,s)\right| \left( \int _0^1\left| G_k(s,t) \right| \mathrm{d}t\right) \mathrm{d}s\\&\le \frac{1}{2^{k}}\int _0^1 \left| g_{1}(x,s)\right| \mathrm{d}s\le \frac{1}{2^{k+1}}. \end{aligned} \end{aligned}$$

To prove property (2.13), observe that

$$\begin{aligned} G_{k}(x,t)&=\int _0^1 g_1 (x,s) G_{k-1} (s,t)\, \mathrm{d}s\\&=\int _0^x (x-1)G_{k-1}(s,t) \mathrm{d}s +\int _x^1 (s-1) G_{k-1}(s,t) \mathrm{d}s. \end{aligned}$$

By differentiating, we obtain

$$\begin{aligned} \frac{\partial }{\partial x} G_{k}(x,t)=\int _0^x G_{k-1}(s,t) \mathrm{d}s. \end{aligned}$$

Hence

$$\begin{aligned} \int _0^1 \left| \frac{\partial }{\partial x} G_{k}(x,t)\right| \mathrm{d}t\le \int _0^1 \left( \int _0^x \left| G_{k-1}(s,t)\right| \mathrm{d}s\right) \mathrm{d}t= \int _0^x \left( \int _0^1 \left| G_{k-1}(s,t)\right| \mathrm{d}t\right) \mathrm{d}s. \end{aligned}$$

From (2.12), it follows:

$$\begin{aligned} \int _0^1 \left| \frac{\partial }{\partial x} G_{k}(x,t)\right| \mathrm{d}t\le \frac{1}{2^{k-1}}\int _0^1 \mathrm{d}t\le \frac{1}{2^{k-1}}. \end{aligned}$$
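Both Remark 2 and the bounds of Proposition 2 can be observed numerically by discretizing the recursion (2.9)–(2.10); with trapezoidal weights, each application of (2.10) becomes a matrix product. Grid size and tolerance below are arbitrary choices:

```python
import numpy as np

M = 400
s = np.linspace(0.0, 1.0, M + 1)
w = np.full(M + 1, 1.0 / M); w[0] = w[-1] = 0.5 / M   # trapezoid weights

# g1[i,k] = g_1(s_i, s_k) from (2.4); G0[k,j] = G_0(s_k, s_j) from (2.9)
g1 = np.where(s[None, :] <= s[:, None], s[:, None] - 1.0, s[None, :] - 1.0)
G0 = np.where(s[None, :] <= s[:, None], 0.0, -1.0)

A = g1 * w[None, :]          # quadrature of (2.10) as a matrix product
G1 = A @ G0                  # G1[i,j] ~ G_1(s_i, s_j)
G2 = A @ G1                  # G2[i,j] ~ G_2(s_i, s_j)

# Remark 2: (-1)^{k+1} G_k >= 0
assert (G1 >= -1e-8).all() and (G2 <= 1e-8).all()

# Proposition 2: int_0^1 |G_k(x,t)| dt <= 2^{-k}
tol = 5e-3
assert (np.abs(G1) @ w).max() <= 0.5 + tol
assert (np.abs(G2) @ w).max() <= 0.25 + tol
```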

The following two results have been proved in [11].

Theorem 3

The polynomial \(Q_{\ell }\) defined by

$$\begin{aligned} Q_\ell (x)=\sum _{i=0}^\ell \frac{{{\tilde{\beta }}}_i}{(2i)!} {\mathscr {S}}_i(x)+\sum _{i=0}^{\ell -1} \frac{{{\tilde{\alpha }}}_i}{(2i+2)!} {\mathscr {S}}'_{i+1}(1-x) \end{aligned}$$
(2.14)

is the unique polynomial of degree \(2\ell \) that satisfies the interpolatory conditions

$$\begin{aligned} Q_\ell ^{(2i+1)}(0)={{\tilde{\alpha }}}_i,\ \ \ i=0,\dots , \ell -1,\qquad \displaystyle Q_\ell ^{(2i)}(1)={{\tilde{\beta }}}_i,\ \ \ i=0,\dots , \ell ,\nonumber \\ \end{aligned}$$
(2.15)

with \({{\tilde{\alpha }}}_i\), \(i=0,\dots , \ell -1\) and \({{\tilde{\beta }}}_i\), \(i=0,\dots , \ell \), given real numbers.

Hence, we call \(Q_\ell (x)\) the Lidstone–Euler type interpolant polynomial of second kind and the conditions (2.15) the Lidstone–Euler type interpolation conditions of second kind.

Corollary 1

Let f be a \(2\ell \)-differentiable function in [0, 1]. The polynomial

$$\begin{aligned} Q_\ell [f](x)=\sum _{i=0}^\ell \frac{f^{(2i)}(1)}{(2i)!} {\mathscr {S}}_i(x)+ \sum _{i=0}^{\ell -1} \frac{f^{(2i+1)}(0)}{(2i+2)!} {\mathscr {S}}'_{i+1}(1-x) \end{aligned}$$
(2.16)

is the unique polynomial of degree \(2\ell \), such that

$$\begin{aligned} Q_{\ell }^{\left( 2i+1\right) }[f]\left( 0\right) =f^{(2i+1)}(0),\ \ i=0,\dots , \ell -1,\qquad Q_{\ell }^{\left( 2i\right) }[f]\left( 1\right) =f^{(2i)}(1),\ \ i=0,\dots , \ell .\nonumber \\ \end{aligned}$$
(2.17)

The polynomial \(Q_\ell [f](x)\) is called the Lidstone–Euler interpolant polynomial of second kind for the function f.
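Corollary 1 is easy to verify in a computer algebra system. Rather than evaluating the closed form (2.16), the sketch below (SymPy; the choices \(f=\sin \) and \(\ell =2\) are arbitrary) imposes the Birkhoff conditions (2.17) on a generic polynomial of degree \(2\ell \) and confirms that the resulting linear system has exactly one solution:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
ell = 2

# generic polynomial of degree 2*ell
coeffs = sp.symbols(f'c0:{2*ell + 1}')
Q = sum(c * x**j for j, c in enumerate(coeffs))

# Lidstone-Euler interpolation conditions of second kind, (2.17)
eqs = [sp.Eq(sp.diff(Q, x, 2*i + 1).subs(x, 0), sp.diff(f, x, 2*i + 1).subs(x, 0))
       for i in range(ell)]
eqs += [sp.Eq(sp.diff(Q, x, 2*i).subs(x, 1), sp.diff(f, x, 2*i).subs(x, 1))
        for i in range(ell + 1)]

sol = sp.solve(eqs, coeffs, dict=True)
assert len(sol) == 1                      # uniqueness (Theorem 3)
Qf = sp.expand(Q.subs(sol[0]))

# all conditions in (2.17) hold for the solved polynomial
for i in range(ell):
    assert sp.simplify(sp.diff(Qf, x, 2*i + 1).subs(x, 0)
                       - sp.diff(f, x, 2*i + 1).subs(x, 0)) == 0
for i in range(ell + 1):
    assert sp.simplify(sp.diff(Qf, x, 2*i).subs(x, 1)
                       - sp.diff(f, x, 2*i).subs(x, 1)) == 0
```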

Proposition 3

For the derivatives of the Lidstone–Euler type interpolant polynomial of second kind, there exist constants \(C_{2s}\) and \(C_{2s+1}\), such that

$$\begin{aligned} \left| Q_{\ell }^{\left( 2s\right) }[f]\left( x\right) \right| \le C_{2s},\qquad \left| Q_{\ell }^{\left( 2s+1\right) }[f]\left( x\right) \right| \le C_{2s+1},\quad 0\le x\le 1,\quad s=0,\dots , \ell -1, \end{aligned}$$

with

$$\begin{aligned} C_{2s}=\frac{\pi ^{2s+2}}{3\cdot 2^{2s}} \left[ \sum _{i=2s-2}^{\ell } \frac{\left| f^{(2i)}(1)\right| }{(2i-2s)!} \left( \frac{2}{\pi }\right) ^{2i+1} + \sum _{i=2s-2}^{\ell -1} \frac{\left| f^{(2i+1)}(0)\right| }{(2i-2s+1)!} \left( \frac{2}{\pi }\right) ^{2i+2} \right] \nonumber \\ \end{aligned}$$
(2.18)

and

$$\begin{aligned} C_{2s+1}=\frac{\pi ^{2s}}{3\cdot 2^{2s}} \left[ \sum _{i=2s-2}^{\ell } \frac{\left| f^{(2i)}(1)\right| }{(2i-2s-1)!} \left( \frac{2}{\pi }\right) ^{2i} + 4\sum _{i=2s-2}^{\ell -1} \frac{\left| f^{(2i+1)}(0)\right| }{(2i-2s)!} \left( \frac{2}{\pi }\right) ^{2i-1} \right] .\nonumber \\ \end{aligned}$$
(2.19)

Proof

The proof follows after easy calculations, taking into account the inequality \( \Bigl | E_{n}(x)\Bigr | \le \frac{2}{3 \pi ^{n-1}}\) [21, p. 303]. \(\square \)

For any \(x\in [0,1]\), we can define the remainder as

$$\begin{aligned} T_\ell [f](x)=f(x)-Q_\ell [f](x), \qquad \ell \ge 1. \end{aligned}$$
(2.20)

Theorem 4

If \(f\in C^{2\ell +1} [0,1]\), the following identity holds:

$$\begin{aligned} T_\ell [f](x)=\int _0^1 f^{(2\ell +1)}(t)G_\ell (x,t)\, \mathrm{d}t \end{aligned}$$
(2.21)

with \(G_\ell (x,t)\) defined in (2.10).

Proof

From the definition, \(\displaystyle T_\ell ^{(2\ell +1)}[f](x)=f^{(2\ell +1)}(x)\). Integrating over \(\left[ x,1\right] \), we get

$$\begin{aligned} T_\ell ^{(2\ell )}[f](x)=f^{(2\ell )}(x)-f^{(2\ell )}(1)= -\int _x^1 f^{(2\ell +1)}(t)\, \mathrm{d}t= \int _0^1 G_0(x,t) f^{(2\ell +1)}(t)\, \mathrm{d}t, \end{aligned}$$

with \(G_0(x,t)\) defined as in (2.9).

For the second derivative of \(T_\ell ^{(2\ell -2)}[f](x)\), we have

$$\begin{aligned} \left( T_\ell ^{(2\ell -2)}\right) ''[f](x)=T_\ell ^{(2\ell )}[f](x)= \int _0^1 G_0(x,t) f^{(2\ell +1)}(t)\, \mathrm{d}t. \end{aligned}$$

Moreover, from (2.17) and (2.20),

$$\begin{aligned} T_\ell ^{(2\ell -2)}[f](1)=\left[ \left( T_\ell ^{(2\ell -2)}\right) '[f](x)\right] _{x=0}=0; \end{aligned}$$

hence

$$\begin{aligned} \begin{array}{llll} \displaystyle T_\ell ^{(2\ell -2)}[f](x) &{}\displaystyle = \int _0^1 g_1\left( x,s\right) \int _0^1 G_0(s,t) f^{(2\ell +1)}(t)\, \mathrm{d}t\, \mathrm{d}s \\ &{}\displaystyle = \int _0^1 \left( \int _0^1 g_1\left( x,s\right) G_0(s,t)\, \mathrm{d}s\right) f^{(2\ell +1)}(t)\, \mathrm{d}t= \int _0^1 G_1(x,t)\, f^{(2\ell +1)}(t)\, \mathrm{d}t. \end{array} \end{aligned}$$

By repeating the same procedure \(\ell -1\) more times, we get (2.21). \(\square \)

Theorem 5

(Cauchy representation) If \(f\in C^{2\ell +1} [0,1]\), there exists \(\xi \in (0,1)\), such that

$$\begin{aligned} T_\ell [f](x)= f^{(2\ell +1)}(\xi ) \int _0^1 G_\ell (x,t) \, \mathrm{d}t. \end{aligned}$$

Proof

The result follows from Theorem 4, the mean value theorem for integrals (\(f\in C^{2\ell +1} [0,1]\)), and Remark 2, which ensures that \(G_\ell (x,\cdot )\) does not change sign in \([0,1]\). \(\square \)

We can derive a different representation of the remainder.

Theorem 6

(Peano’s representation of the remainder) [11] If \(f\in C^{2\ell +1} [0,1]\), the following identity holds

$$\begin{aligned} T_\ell [f](x)=\int _0^1 K_\ell (x,t) f^{(2\ell +1)}(t)\, \mathrm{d}t, \end{aligned}$$
(2.22)

where

$$\begin{aligned} K_\ell (x,t)=\frac{1}{(2\ell )!}\left[ (x-t)_+^{2\ell }-Q_\ell [ (x-t)_+^{2\ell }](x) \right] \end{aligned}$$

\(\left( \cdot \right) _+\) being the known truncated power function [18].

Corollary 2

[11] For the Peano kernel \(K_\ell (x,t)\), we get

$$\begin{aligned} K_\ell (x,t)=\frac{(x-t)_+^{2\ell }}{(2\ell )!} -\sum _{i=0}^\ell \frac{(1-t)^{2(\ell -i)}}{(2(\ell -i))!(2i)!}{\mathscr {S}}_i(x),\qquad \ell \ge 1; \end{aligned}$$
(2.23)

that is

$$\begin{aligned} K_\ell (x,t)=\left\{ \begin{array}{llll} \displaystyle -\sum _{i=0}^{\ell -1}\frac{t^{2(\ell -i)-1}}{(2(\ell -i)-1)!(2i+2)!}{\mathscr {S}}'_{i+1}(1-x), &{}&{} t\le x\\ \displaystyle -\sum _{i=0}^{\ell }\frac{(1-t)^{2(\ell -i)}}{(2(\ell -i))!(2i)!} {\mathscr {S}}_i(x), &{}&{} t> x. \end{array} \right. \end{aligned}$$
(2.24)

Proposition 4

For any \(\ell \ge 1\), we get

$$\begin{aligned} G_\ell (x,t)=K_\ell (x,t). \end{aligned}$$

Proof

The claim follows from (2.21), (2.22), and the uniqueness of the Peano kernel. \(\square \)

If we set

$$\begin{aligned} M_\ell =\max _{0\le t\le 1}\left| f^{(2\ell +1)}(t)\right| , \end{aligned}$$

the following theorems provide bounds for the remainder and its derivatives.

Theorem 7

With the previous hypotheses and notations, the following bound holds:

$$\begin{aligned} \left| T_\ell [f](x)\right| \le M_\ell \int _0^1 \left| G_\ell \left( x,t\right) \right| \, \mathrm{d}t \le \frac{1}{2^\ell }\, M_\ell . \end{aligned}$$

Proof

The claim follows from the Cauchy representation and Proposition 2. \(\square \)

Theorem 8

With the previous hypotheses and notations, the following bounds hold:

$$\begin{aligned}&\left| T_\ell ^{(2i)}[f](x)\right| \le \gamma _{\ell ,2i}\, M_\ell , \qquad i=0,\ldots ,\ell , \\&\left| T_\ell ^{(2i+1)}[f](x)\right| \le \gamma _{\ell ,2i+1}\, M_\ell , \qquad i=0,\ldots ,\ell -1, \end{aligned}$$

where \(\displaystyle \gamma _{\ell ,2i}=\frac{1}{2^{\ell -i}},\ \gamma _{\ell ,2i+1}=\frac{1}{2^{\ell -i-1}}\). Moreover,

$$\begin{aligned} \left| T_\ell ^{(k)}[f](x)\right| \le M_\ell \gamma _{\ell ,k}, \qquad k=0,\ldots ,2\ell \end{aligned}$$
(2.25)

with

$$\begin{aligned} \gamma _{\ell ,k}=\frac{1}{2^{\ell -\zeta }}, \qquad \zeta =\Bigl \lfloor \frac{k+1}{2}\Bigr \rfloor . \end{aligned}$$
(2.26)

Proof

From Proposition 1 and Theorem 4, for \(i=0,\ldots ,\ell \)

$$\begin{aligned} \left| T_\ell ^{(2i)}[f](x)\right| = \left| \int _0^1 G_{\ell -i}(x,t) f^{(2\ell +1)}(t)\, \mathrm{d}t\right| \le M_\ell \int _0^1 \bigl |G_{\ell -i}(x,t)\bigr |\, \mathrm{d}t, \end{aligned}$$

and for \(i=0,\ldots ,\ell -1\)

$$\begin{aligned} \left| T_\ell ^{(2i+1)}[f](x)\right| = \left| \int _0^1 \frac{\partial }{\partial x}G_{\ell -i}(x,t) f^{(2\ell +1)}(t)\, \mathrm{d}t\right| \le M_\ell \int _0^1 \left| \frac{\partial }{\partial x} G_{\ell -i}(x,t)\right| \, \mathrm{d}t. \end{aligned}$$

From the last two inequalities and Proposition 2, we have

$$\begin{aligned}&\left| T_\ell ^{(2i)}[f](x)\right| \le M_\ell \frac{1}{2^{\ell -i}} , \qquad i=0,\ldots ,\ell , \end{aligned}$$
(2.27)
$$\begin{aligned}&\left| T_\ell ^{(2i+1)}[f](x)\right| \le M_\ell \frac{1}{2^{\ell -i-1}} ,\qquad i=0,\ldots ,\ell -1. \end{aligned}$$
(2.28)

Relations (2.27) and (2.28) can be written as (2.25).\(\square \)

Remark 3

We explicitly note that \(\gamma _{\ell ,0}= \frac{1}{2^{\ell }}\), \(\gamma _{\ell ,1}= \gamma _{\ell ,2}= \frac{1}{2^{\ell -1}}.\)

From the previous inequalities, the following theorem can be proved.

Theorem 9

Let be \(f\in C^\infty \left[ 0,1\right] \). Then, for a fixed k

$$\begin{aligned} \lim _{\ell \rightarrow \infty } Q^{(k)}_\ell [f](x)=f^{(k)}(x) \end{aligned}$$

absolutely and uniformly in \(\left[ 0,1\right] \), provided that there exist a constant \(\lambda \), with \(0<\lambda <2\), and an integer m, such that \(f^{(2\ell +1)}=O\left( \lambda ^{\ell -\zeta +1}\right) \) for all \(\ell \ge m\), with \(\zeta \) as in (2.26).

Remark 4

We observe that the functions \(\sin x\) and \(\cos x\) satisfy Theorem 9.
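For \(f(x)=\sin x\) we have \(M_\ell \le 1\) for every \(\ell \), so the bound of Theorem 7 reads \(\left| T_\ell [f](x)\right| \le 2^{-\ell }\), and Theorem 9 predicts convergence as \(\ell \rightarrow \infty \). A sketch checking both numerically (the interpolant is built from conditions (2.17); the grid and the range of \(\ell \) are arbitrary choices):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)

def max_error(ell):
    # interpolant of degree 2*ell from conditions (2.17), then max |f - Q| on a grid
    coeffs = sp.symbols(f'c0:{2*ell + 1}')
    Q = sum(c * x**j for j, c in enumerate(coeffs))
    eqs = [sp.Eq(sp.diff(Q, x, 2*i + 1).subs(x, 0),
                 sp.diff(f, x, 2*i + 1).subs(x, 0)) for i in range(ell)]
    eqs += [sp.Eq(sp.diff(Q, x, 2*i).subs(x, 1),
                  sp.diff(f, x, 2*i).subs(x, 1)) for i in range(ell + 1)]
    Qf = Q.subs(sp.solve(eqs, coeffs, dict=True)[0])
    err = sp.lambdify(x, f - Qf)
    return max(abs(err(j / 100.0)) for j in range(101))

errs = [max_error(ell) for ell in (1, 2, 3, 4)]
# Theorem 7: |T_ell[f](x)| <= M_ell / 2^ell, with M_ell <= 1 here
assert all(e <= 1.0 / 2**ell for ell, e in zip((1, 2, 3, 4), errs))
# Theorem 9: the error decreases as ell grows
assert all(b < a for a, b in zip(errs, errs[1:]))
```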

3 The Second-Type Lidstone–Euler Boundary Value Problem

In this section, we investigate the existence and uniqueness of the solution of the second-type Lidstone–Euler BVP (1.1). As noted above, to the best of the authors' knowledge, the existence and uniqueness of the solution of (1.1) have not previously been investigated. Similar BVPs with different boundary conditions have been widely studied (see, for example, [1] and references therein).

If \(f\equiv 0\), problem (1.1) has a unique solution \(y\left( x\right) = Q_r \left( x\right) \), where \(Q_r \left( x\right) \) is defined in (2.14).

The following theorem provides sufficient conditions for the existence and uniqueness of the solution of problem (1.1).

Theorem 10

Suppose that

(i) \(k_s>0\), \(0\le s\le q\), are given real numbers, and M is the maximum of \(\Bigl | f\left( x,y_0,\dots , y_{q}\right) \Bigr |\) on the compact set \([0,1]\times \Omega \), where

$$\begin{aligned} \Omega =\left\{ \left( y_0,\dots , y_{q}\right) \Big |\, |y_s|\le 2 k_s,\; s=0,\dots , q\right\} ; \end{aligned}$$

(ii) \(C_{2s}<k_{2s},\; \; C_{2s+1}<k_{2s+1}\), where \(C_{2s}\) and \(C_{2s+1}\) are defined in (2.18) and (2.19), respectively;

(iii) \(\displaystyle \frac{M}{2^{r-s}}<k_{2s},\ \ s=0,\dots , \Bigl \lfloor \frac{q}{2}\Bigr \rfloor ; \quad \frac{M}{2^{r-s-1}}<k_{2s+1},\ \ s=0,\dots , \Bigl \lfloor \frac{q-1}{2}\Bigr \rfloor ; \)

(iv) the function f satisfies a uniform Lipschitz condition in \(\left( y(x), y'(x),\dots , y^{(q)}(x)\right) \), that is, there exists a nonnegative constant L, such that the inequality

$$\begin{aligned} \bigl |f\left( x,y_0,\dots , y_q\right) -f\left( x,{\overline{y}}_0,\dots , {\overline{y}}_q\right) \bigr | \le L\sum _{k=0}^q \bigl |y_k-{\overline{y}}_k\bigr | \end{aligned}$$

holds whenever \(\left( y_0,\dots , y_q\right) \) and \(\left( {\overline{y}}_0,\dots , {\overline{y}}_q\right) \) belong to \(\Omega \);

(v) \(\displaystyle (q+1)DL<1\), where \(\displaystyle D= \max _{0\le s\le q} \left\{ \max _{0\le x,t\le 1} \left| \frac{\partial ^s}{\partial x^s} K_r(x,t)\right| \right\} .\)

Then, the boundary value problem (1.1) has a unique solution on \(\Omega \).

Proof

It is known [16] that problem (1.1) is equivalent to the Fredholm integral equation

$$\begin{aligned} y(x)= & {} Q_r[y](x)+\int _0^1 K_r(x,t) y^{(2r+1)}\left( t\right) \mathrm{d}t \nonumber \\= & {} Q_r[y](x)+\int _0^1 K_r(x,t) f \left( t,y(t),\dots , y^{(q)}\left( t\right) \right) \mathrm{d}t, \end{aligned}$$
(3.1)

where \(Q_r[y]\left( x\right) \) is the polynomial defined in (2.16) and \(K_r(x,t)\) is the Peano kernel as in (2.24).

We define the operator \(T:C^{q}[0,1]\rightarrow C^{2r+1}[0,1]\) as follows:

$$\begin{aligned} T[y](x):=Q_r[y](x)+\int _0^1 K_r(x,t) f\left( t,y(t),\dots , y^{(q)}\left( t\right) \right) \mathrm{d}t. \end{aligned}$$

Obviously, any fixed point of T is a solution of the boundary value problem (1.1).

For all \(y\in C^{q} [0,1]\), we introduce the norm \(\displaystyle \left\| y\right\| =\max _{0\le s\le q} \left\{ \max _{0\le t\le 1} \left| y^{(s)} (t)\right| \right\} \), so that \(C^{q} [0,1]\) becomes a Banach space. Moreover, we consider the set

$$\begin{aligned} B=\left\{ y(t)\in C^{q} [0,1] \Big | \; \max _{0\le t\le 1} \left| y^{(s)} (t)\right| \le 2k_s,\; s=0,\dots , q \right\} . \end{aligned}$$

The operator T maps B into itself. To show this, let \(y(x)\in B\). Then

$$\begin{aligned} \bigl ( T[y]\bigr )^{(2s)}(x)=Q_r^{(2s)}[y](x)+\int _0^1 \frac{\partial ^{2s}}{\partial x^{2s}} K_r(x,t)\, f\left( t,y(t),\dots , y^{(q)}\left( t\right) \right) \mathrm{d}t. \end{aligned}$$

From hypotheses (i) and (iii), and Propositions 1, 2, and 4, we have that \( T B \subseteq B\). In fact

$$\begin{aligned} \Bigl |\bigl ( T[y]\bigr )^{(2s)}(x)\Bigr |&\displaystyle \le&C_{2s} +M\int _0^1 \Bigl | K_{r-s}(x,t)\Bigr | \mathrm{d}t \nonumber \\&\displaystyle \le&k_{2s}+\frac{M}{2^{r-s}}<2k_{2s}, \qquad s=0,\dots , \Bigl \lfloor \frac{q}{2}\Bigr \rfloor , \end{aligned}$$
(3.2)

and

$$\begin{aligned} \Bigl |\bigl ( T[y]\bigr )^{(2s+1)}(x)\Bigr |&\displaystyle \le&C_{2s+1} +M\int _0^1 \left| \frac{\partial }{\partial x} K_{r-s}(x,t)\right| \mathrm{d}t \nonumber \\&\displaystyle \le&k_{2s+1}+\frac{M}{2^{r-s-1}}<2k_{2s+1}, \qquad s=0,\dots , \Bigl \lfloor \frac{q-1}{2}\Bigr \rfloor .\nonumber \\ \end{aligned}$$
(3.3)

From inequalities (3.2)–(3.3), we get that the sets \( \left\{ \bigl ( T[y]\bigr )^{(s)}(x) \; \Big | \; y(x)\in B \right\} \), \(0\le s\le q\), are uniformly bounded and equicontinuous in [0, 1]. From the Ascoli–Arzelà theorem, this implies that \(\overline{T B}\) is compact. Hence, from the Schauder fixed-point theorem, T has a fixed point in B.

Now, we prove uniqueness. Suppose that there exist two distinct solutions y(x) and z(x) of problem (1.1). Then

$$\begin{aligned}\begin{aligned}&y^{(s)}\left( x\right) -z^{(s)}\left( x\right) \\&\quad = \int _0^1 \frac{\partial ^s}{\partial x^s} K_r(x,t)\left[ f\left( t,y\left( t\right) ,y'\left( t\right) ,\ldots ,y^{(q)}\left( t\right) \right) \right. \\&\qquad \left. - f\left( t,z\left( t\right) ,z'\left( t\right) ,\ldots ,z^{(q)}\left( t\right) \right) \right] \,\mathrm{d}t, \end{aligned} \end{aligned}$$

\(s=0,\dots , q\). Hence

$$\begin{aligned} \begin{aligned} \left| y^{(s)}\left( x\right) -z^{(s)}\left( x\right) \right|&\le \max _{0\le s\le q} \left\{ \max _{0\le x,t\le 1} \left| \frac{\partial ^s}{\partial x^s} K_r(x,t)\right| \right\} \, L\, \sum _{i=0}^q \int _0^1 \left| y^{(i)}\left( t\right) -z^{(i)}\left( t\right) \right| \mathrm{d}t\\&\le D\, L\, \sum _{i=0}^q \int _0^1 \max _{0\le i\le q} \left\{ \max _{0\le t\le 1} \left| y^{(i)}\left( t\right) -z^{(i)}\left( t\right) \right| \right\} \mathrm{d}t, \end{aligned} \end{aligned}$$

so that

$$\begin{aligned} \left\| y-z\right\| \le (q+1) D\,L \left\| y-z\right\| . \end{aligned}$$

From hypothesis (v), the uniqueness of the solution follows.\(\square \)
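The integral-equation formulation (3.1) can be checked on a concrete case. The sketch below takes \(r=1\), \(q=0\) and the test problem \(y'''=6\), \(y'(0)=0\), \(y(1)=1\), \(y''(1)=6\), whose solution is \(y(x)=x^3\); the kernel \(K_1=G_1\) (Proposition 4) is discretized from (2.9)–(2.10), and \(Q_1[y]\) is the quadratic matching the boundary data. Grid size and tolerance are arbitrary choices:

```python
import numpy as np

M = 1000
t = np.linspace(0.0, 1.0, M + 1)
w = np.full(M + 1, 1.0 / M); w[0] = w[-1] = 0.5 / M   # trapezoid weights

# K_1 = G_1 (Proposition 4), from the recursion (2.9)-(2.10)
g1 = np.where(t[None, :] <= t[:, None], t[:, None] - 1.0, t[None, :] - 1.0)
G0 = np.where(t[None, :] <= t[:, None], 0.0, -1.0)
K1 = (g1 * w[None, :]) @ G0                  # K1[i,j] ~ K_1(t_i, t_j)

# boundary data of y(x) = x^3:  y'(0) = 0, y(1) = 1, y''(1) = 6
alpha0, beta0, beta1 = 0.0, 1.0, 6.0

# Q_1[y]: the unique quadratic with Q'(0) = alpha0, Q(1) = beta0, Q''(1) = beta1
c = beta1 / 2.0
b = alpha0
a = beta0 - b - c
Q = a + b * t + c * t**2

# right-hand side f(t, y(t)) = y'''(t) = 6, plugged into (3.1)
y = Q + K1 @ (w * 6.0 * np.ones(M + 1))

assert np.max(np.abs(y - t**3)) < 5e-3       # recovers y(x) = x^3
```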

4 Computational Aspects

For the numerical solution of high odd-order boundary value problems, many approaches have been proposed, such as spline interpolation [9, 30], non-polynomial spline techniques [29], Galerkin methods [17], variational iteration techniques [31, 33], and the modified decomposition method [35]. In the following, we present two numerical approaches: Bernstein extrapolation and collocation methods.

4.1 Bernstein Extrapolation Methods

The first approach for the numerical solution of the BVP (1.1) is based on extrapolated Bernstein polynomials [5].

We recall that, given a real function \(g\left( x\right) \) defined in \(\left[ 0,1\right] \), the \(n\)th Bernstein polynomial for g is given by

$$\begin{aligned} B_n\left[ g\right] \left( x\right) =\sum _{k=0}^n b_{n,k}\left( x\right) g\left( \frac{k}{n}\right) ,\quad b_{n,k}\left( x\right) =\left( {\begin{array}{c}n\\ k\end{array}}\right) x^k\left( 1-x\right) ^{n-k}. \end{aligned}$$

The following result is well known.

Theorem 11

[18] Let g be a bounded real function in \(I=[0,1]\). Then

$$\begin{aligned} \lim _{n\rightarrow \infty } B_n\left[ g\right] \left( x\right) =g\left( x\right) , \end{aligned}$$

at any point \(x\in I\) at which g is continuous, and, if we set \({{\overline{R}}}_n\left[ g\right] \left( x\right) =B_n\left[ g\right] \left( x\right) -g(x)\),

$$\begin{aligned} \Bigl |{{\overline{R}}}_n\left[ g\right] \left( x\right) \Bigr |\le \frac{5}{4} \omega \left( g;n^{-\frac{1}{2}}\right) , \end{aligned}$$

where \(\omega \) is the modulus of continuity of g on I. If \(g\in C(I)\), the convergence is uniform in I.

Moreover, if g is twice differentiable in I, then

$$\begin{aligned} \lim _{n\rightarrow \infty } n\bigl [B_n\left[ g\right] \left( x\right) -g(x)\bigr ]=\frac{x(1-x)}{2} g''(x). \end{aligned}$$
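Both the uniform convergence and the second-order limit above can be observed numerically (the test function \(g=e^x\) and the values of n are arbitrary choices):

```python
import math

def bernstein(g, n, x):
    # n-th Bernstein polynomial B_n[g](x) on [0, 1]
    return sum(math.comb(n, k) * x**k * (1.0 - x)**(n - k) * g(k / n)
               for k in range(n + 1))

g = math.exp

# uniform convergence: the max error decreases as n grows
errs = [max(abs(bernstein(g, n, j / 100.0) - g(j / 100.0)) for j in range(101))
        for n in (10, 20, 40, 80)]
assert all(b < a for a, b in zip(errs, errs[1:]))

# Voronovskaya-type limit:  n (B_n[g] - g) -> x(1-x) g''(x) / 2
x0 = 0.5
approx = 500 * (bernstein(g, 500, x0) - g(x0))
assert abs(approx - x0 * (1.0 - x0) / 2.0 * math.exp(x0)) < 1e-2
```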

Theorem 12

[5] If \(g\in C^{2s}(I)\), \(s\ge 1\), then the Bernstein polynomial for g has the following asymptotic expansion:

$$\begin{aligned} B_n[g](x)=g(x)+\sum _{i=1}^s h^i S_i [g]\left( x\right) + h^{s+1} E_h[g](x), \end{aligned}$$

where \(\displaystyle h=\frac{1}{n}\), the functions \(S_i[g]\left( x\right) \), \(i=1,\dots , s\), do not depend on h, and \(E_h[g](x)\rightarrow 0 \) as \(h\rightarrow 0\).

From Theorems 11 and 12, we can prove the following theorem.

Theorem 13

Let y(x) be the solution of problem (1.1). Then

$$\begin{aligned} y\left( x\right) =Q_r\left[ y\right] \left( x\right) +\sum _{k=0}^np_{n,k}(x) f\left( x_k, y(x_k),\dots , y^{(q)}(x_k)\right) +R_n\left[ y\right] \left( x\right) , \end{aligned}$$

where \(\displaystyle x_k=\frac{k}{n}\),

$$\begin{aligned}&p_{n,k}\left( x\right) =\int _0^1 K_r(x,t) b_{n,k}\left( t\right) \mathrm{d}t,\qquad k=0,\ldots ,n,\nonumber \\&\Bigl |R_n\left[ y\right] \left( x\right) \Bigr |\le \frac{5}{2^{r+2}} \omega \left( y^{(2r+1)};n^{-\frac{1}{2}}\right) . \end{aligned}$$
(4.1)

Moreover, if \(y\in C^{2r+3}(I)\), then

$$\begin{aligned} \Bigl |R_n\left[ y\right] \left( x\right) \Bigr |\le \frac{1}{2^{r+3}\, n}\ \max _{0\le x \le 1}\Bigl |y^{(2r+3)}\left( x\right) \Bigr |, \end{aligned}$$

and the convergence is uniform.

Proof

If y(x) is the solution of problem (1.1), we get relation (3.1). The claim follows after easy calculations from Theorem 12 with \(g\equiv y^{(2r+1)}\). \(\square \)

The proposed method for the numerical solution of problem (1.1) is based on the results of the previous theorems.

For every \(n \in \mathrm{I\!N}\), let us set

$$\begin{aligned} \phi _n\left( x\right) =Q_r\left[ y\right] \left( x\right) +\sum _{i=0}^n p_{n,i}(x) f\left( x_i, y(x_i),\dots , y^{(q)}(x_i)\right) ,\quad x_i=\frac{i}{n}.\nonumber \\ \end{aligned}$$
(4.2)

Corollary 3

For the solution y(x) of problem (1.1), we get

$$\begin{aligned} \lim _{n\rightarrow \infty }\phi _n\left( x\right) =y(x), \end{aligned}$$

uniformly in \(x\in I\). Moreover

$$\begin{aligned} \bigl \Vert y-\phi _n\bigr \Vert = \bigl \Vert R_n[y]\bigr \Vert \le \frac{5}{2^{r+2}} \omega \left( y^{(2r+1)};n^{-\frac{1}{2}}\right) . \end{aligned}$$
(4.3)

We call \(\phi _n\left( x\right) \) the first-order approximating solution; its error is bounded by (4.3).
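A minimal sketch of the first-order approximation (4.2), for \(r=1\) and \(q=0\) so that no nonlinear system is needed: the test problem is \(y'''=e^x\), \(y'(0)=1\), \(y(1)=e\), \(y''(1)=e\), with exact solution \(y(x)=e^x\); the kernel \(K_1\) is discretized from (2.9)–(2.10), and \(\sum _k p_{n,k}(x) f(x_k)\) is evaluated as \(\int _0^1 K_1(x,t)B_n[f](t)\,\mathrm{d}t\). Grid sizes and the values of n are arbitrary choices:

```python
import math
import numpy as np

M = 800
t = np.linspace(0.0, 1.0, M + 1)
w = np.full(M + 1, 1.0 / M); w[0] = w[-1] = 0.5 / M   # trapezoid weights

# kernel K_1 = G_1 via (2.9)-(2.10)
g1m = np.where(t[None, :] <= t[:, None], t[:, None] - 1.0, t[None, :] - 1.0)
G0 = np.where(t[None, :] <= t[:, None], 0.0, -1.0)
K1 = (g1m * w[None, :]) @ G0

# test problem: y''' = e^x, y'(0) = 1, y(1) = e, y''(1) = e  ->  y(x) = e^x
f = np.exp
c = math.e / 2.0; b = 1.0; a = math.e - b - c
Q = a + b * t + c * t**2                     # Q_1[y]

def phi(n):
    # (4.2): phi_n = Q_1[y] + int_0^1 K_1(x,t) B_n[y'''](t) dt
    Bn = sum(math.comb(n, k) * t**k * (1.0 - t)**(n - k) * f(k / n)
             for k in range(n + 1))
    return Q + K1 @ (w * Bn)

errs = [np.max(np.abs(phi(n) - np.exp(t))) for n in (5, 10, 20)]
assert all(b2 < a2 for a2, b2 in zip(errs, errs[1:]))  # error decreases with n
assert errs[-1] < 0.02
```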

To obtain approximating functions of higher order, we use the following asymptotic expansion.

Theorem 14

Let \(n,m\) be two positive integers and \(\displaystyle h=\frac{1}{n}\). Moreover, let \(y(x)\in C^{2(r+m+1)}(I)\) be the solution of problem (1.1). Then

$$\begin{aligned} y(x)=\phi _n\left( x\right) +\sum _{i=0}^m h^i {\overline{S}}_i\left[ y\right] \left( x\right) + h^{m+1}{\overline{E}}_h\left[ y\right] \left( x\right) , \end{aligned}$$
(4.4)

where the functions \(\displaystyle {\overline{S}}_i\left[ y\right] \left( x\right) \) do not depend on \(\displaystyle h\), and \({\overline{E}}_h\left[ y\right] \left( x\right) \rightarrow 0\) for \(h\rightarrow 0\).

Proof

The proof follows after easy calculations by applying Theorem 12 to relation (3.1). \(\square \)

The expansion (4.4) in Theorem 14 suggests applying the extrapolation procedure [5, 32] described in the following theorem.

Theorem 15

Let \(y\in C^{2(r+m)}[0,1]\), with m a fixed positive integer. Let \(\left\{ n_k\right\} _k\) be an increasing sequence of positive integers and \(\displaystyle h_k=n_k^{-1}\). We define a sequence of polynomials of degree \(n_{i+k}\) as follows:

$$\begin{aligned} \left\{ \begin{array}{llllll} T_0^{(i)} := T_0^{(i)}[y](x) &{}\displaystyle =\phi _{n_i}(x), &{}&{}i=0,\dots , m,\\ T_k^{(i)} := T_k^{(i)}[y](x) &{}\displaystyle =\frac{h_{i+k} T_{k-1}^{(i)}-h_{i} T_{k-1}^{(i+1)}}{h_{i+k}-h_{i}}, &{}&{} \begin{array}{l} k=1,\dots , m-1,\\ i=0,\dots , m-k. \end{array} \end{array} \right. \end{aligned}$$

Then, for \(i=0,\dots , m-k\)

$$\begin{aligned} \lim _{h_i\rightarrow 0} T_k^{(i)}=y(x), \qquad k=1,2,\dots , m. \end{aligned}$$

Moreover, the following representations of the error and of \(T_k^{(i)}\) hold:

$$\begin{aligned}&T_k^{(i)}-y(x)=(-1)^k h_i\, h_{i+1}\cdots h_{i+k} \left( {\overline{S}}_{k+1}[y](x)+O\left( h_i\right) \right) ,\\&T_k^{(i)}[y](x)=\sum _{j=0}^kl_j\left( 0\right) \phi _{n_j}(x),\qquad \qquad l_j\left( h\right) =\prod _{i=0,i\ne j}^k\frac{h_i-h}{h_i-h_j}. \end{aligned}$$

From Theorem 15, for any \(z\in [0,1]\), y(z) is approximated by \(T_{m}^{(0)}[y](z)\), \(n_m\) being the last element of the considered numerical sequence \(\left\{ n_i\right\} _i\).
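The extrapolation tableau of Theorem 15 can be sketched in a few lines of code. The following Python sketch is our own illustration (the names `extrapolate` and `phi_values` are ours, not from the paper): it builds the columns \(T_k^{(i)}\) by the recursion above, using a model sequence in place of the actual first-order approximations \(\phi_{n_i}\).

```python
# Sketch of the extrapolation tableau of Theorem 15 (illustrative only).
# phi_values[i] plays the role of phi_{n_i}(z) at a fixed point z; h[i] = 1/n_i.
def extrapolate(phi_values, h):
    """Build T_k^(i) = (h_{i+k} T_{k-1}^(i) - h_i T_{k-1}^(i+1)) / (h_{i+k} - h_i)
    and return the top element T_m^(0)."""
    m = len(phi_values) - 1
    T = [list(phi_values)]            # row k = 0: the starting column
    for k in range(1, m + 1):
        prev = T[k - 1]
        row = [(h[i + k] * prev[i] - h[i] * prev[i + 1]) / (h[i + k] - h[i])
               for i in range(m - k + 1)]
        T.append(row)
    return T[m][0]

# Model data: phi_n = y + c*h mimics a first-order error term in h;
# one extrapolation step already removes it (up to rounding).
h_list = [1.0 / n for n in (2, 4, 8, 16)]
approx = extrapolate([3.0 + 0.7 * hh for hh in h_list], h_list)
```

With the model error \(c\,h\), the first column \(T_1^{(i)}\) already reproduces the limit value exactly, which is the point of the procedure: each column cancels one more term of the expansion (4.4).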

5 Algorithm for Practical Calculations

To calculate the first-order approximation \(\phi _n(x)\) by formula (4.2), we need the values \(\displaystyle y_i^{(s)}\approx y^{(s)}\left( x_i\right) \), \(s=0,\dots , q\), \(i=0,\ldots ,n\), excluding \(s= 2j+1,\; j=0,\dots , \Bigl \lfloor \frac{q-1}{2}\Bigr \rfloor ,\) when \(i=0\), and \(s= 2j,\; j=0,\dots , \Bigl \lfloor \frac{q}{2}\Bigr \rfloor ,\) when \(i=n\), since these values are prescribed by the boundary conditions.

To this aim, we consider the algebraic system of dimension \(n(q+1)\)

$$\begin{aligned} y_i^{(s)}=Q_r^{(s)}\left[ y\right] \left( x_i\right) +\sum _{k=0}^{n}p_{n,k}^{(s)}\left( x_i\right) f\left( x_k,y_k,y_k',\ldots ,y_k^{(q)}\right) ,\qquad \begin{array}{l} i=0,\ldots ,n,\\ s=0,\ldots , q, \end{array} \end{aligned}$$
(5.1)

and \(y_0^{(2j+1)}=y^{(2j+1)}(0)=\alpha _j\), \(j=0,\dots , \Bigl \lfloor \frac{q-1}{2}\Bigr \rfloor ,\) \(y_n^{(2j)}=y^{(2j)}(1)=\beta _j\), \(j=0,\dots , \Bigl \lfloor \frac{q}{2}\Bigr \rfloor \).

Let us put \(\displaystyle Y_n^q=({{\overline{Y}}}_0, \dots , \overline{Y}_{q} )^T\), with \(\displaystyle {{\overline{Y}}}_{2j}= \left( y^{(2j)}_0,\dots , y_{n-1}^{(2j)}\right) \), \(j=0,\dots , \Bigl \lfloor \frac{q}{2}\Bigr \rfloor \), \(\displaystyle \overline{Y}_{2j+1}= \left( y^{(2j+1)}_1,\dots , y_{n}^{(2j+1)}\right) \), \(j=0,\dots , \Bigl \lfloor \frac{q-1}{2}\Bigr \rfloor \), and the block-diagonal matrix

$$\begin{aligned} A_n^q=\left( \begin{array}{cccc} A_0 &{}0 &{} \cdots &{} 0 \\ 0 &{}\ddots &{} &{}\vdots \\ \vdots &{} &{} \ddots &{}0 \\ 0 &{}\cdots &{} 0 &{}A_{q} \end{array} \right) , \end{aligned}$$

with \(\displaystyle A_j\in \mathrm{I\!R}^{n\times (n+1)}\)

$$\begin{aligned}&A_{2j}=\left( \begin{array}{ccc} p^{(2j)}_{n,0}(x_0) &{}\cdots &{} p^{(2j)}_{n,n}(x_0) \\ \vdots &{} &{}\vdots \\ p^{(2j)}_{n,0}(x_{n-1}) &{}\cdots &{} p^{(2j)}_{n,n}(x_{n-1}) \\ \end{array} \right) ,\qquad j=0,\dots , \Bigl \lfloor \frac{q}{2}\Bigr \rfloor ,\\&A_{2j+1}=\left( \begin{array}{ccc} p^{(2j+1)}_{n,0}(x_1) &{}\cdots &{} p^{(2j+1)}_{n,n}(x_1) \\ \vdots &{} &{}\vdots \\ p^{(2j+1)}_{n,0}(x_{n}) &{}\cdots &{} p^{(2j+1)}_{n,n}(x_{n}) \\ \end{array} \right) ,\qquad j=0,\dots , \Bigl \lfloor \frac{q-1}{2}\Bigr \rfloor . \end{aligned}$$

Moreover

$$\begin{aligned} \displaystyle F_{Y_n^q}=( \underbrace{F_{n}, \dots , F_{n}}_{q+1} )^T, \end{aligned}$$

with

$$\begin{aligned} F_{n}=\left( f_0, \dots , f_{n}\right) ^T,\qquad f_k=f\left( x_k,y_k,y_k',\ldots ,y_k^{(q)}\right) ,\ \ k=0,\dots , n \end{aligned}$$

and

$$\begin{aligned} C_n^q =\left( C_0,\dots , C_{q}\right) ^T, \end{aligned}$$

with

$$\begin{aligned} \begin{array}{c} C_{2j}=\left( Q^{(2j)}_{r}[y](x_0),\dots , Q^{(2j)}_{r}[y](x_{n-1})\right) ,\quad j=0,\dots , \Bigl \lfloor \frac{q}{2}\Bigr \rfloor ,\\ C_{2j+1}=\left( Q^{(2j+1)}_{r}[y](x_1),\dots , Q^{(2j+1)}_{r}[y](x_{n})\right) ,\quad j=0,\dots , \Bigl \lfloor \frac{q-1}{2}\Bigr \rfloor . \end{array} \end{aligned}$$

Thus, system (5.1) can be written in the form

$$\begin{aligned} Y_n^q - A_n^q F_{Y_n^q} = C_n^q, \end{aligned}$$

or

$$\begin{aligned} Y_n^q=G\left( Y_n^q\right) , \quad \text {with}\quad G\left( Y_n^q\right) =A_n^q\,F_{Y_n^q}+C_n^q. \end{aligned}$$
(5.2)

Proposition 5

For the matrix \(A_n^q\), the following relation holds:

$$\begin{aligned} \Vert A_n^q\Vert _\infty = O (1), \qquad n\rightarrow \infty . \end{aligned}$$

Proof. \(\displaystyle \Vert A_n^q\Vert _\infty = \max \nolimits _{0\le s\le q} \Vert A_s\Vert _\infty .\) Since all Bernstein basis functions \(b_{n,k}(t)\) of the same order have the same definite integral over the interval [0, 1], that is \(\int _0^1 b_{n,k}(t) \mathrm{d}t=\frac{1}{n+1}\), we have that

  • if \(s=2j\), \(j=0,\dots , \Bigl \lfloor \frac{q}{2}\Bigr \rfloor ,\) then

    $$\begin{aligned} \begin{aligned} \left| p_{n,k}^{(2j) }\left( x_{i-1}\right) \right|&\le \int _0^1 \left| \frac{\partial ^{2j}}{\partial x^{2j}}K_{r}(x,t) \right| _{x=x_{i-1}} b_{n,k}(t) \mathrm{d}t\\&=\int _0^1 \left| K_{r-j}(x_{i-1}, t) \right| b_{n,k}(t) \mathrm{d}t \le \frac{1}{n+1} \left| K_{r-j}(x_{i-1},{{\overline{t}}}) \right| , \quad {{\overline{t}}}\in (0,1); \end{aligned} \end{aligned}$$

    hence

    $$\begin{aligned} \Vert A_s\Vert _\infty= & {} \max _{0\le i\le n} \sum _{k=0}^n | p_{n,k}^{(2j) }\left( x_{i-1}\right) |\le \frac{1}{n+1} \sum _{k=0}^n \max _{0\le i\le n} \left| K_{r-j}(x_{i-1},{{\overline{t}}}) \right| \\= & {} \max _{0\le i\le n} \left| K_{r-j}(x_{i-1},{{\overline{t}}}) \right| ; \end{aligned}$$
  • if \(s=2j+1\), \(j=0,\dots , \Bigl \lfloor \frac{q-1}{2}\Bigr \rfloor ,\) then

$$\begin{aligned} \begin{aligned}&\qquad \left| p_{n,k}^{(2j+ 1) }\left( x_{i}\right) \right| \le \int _0^1 \left| \frac{\partial ^{2j+1}}{\partial x^{2j+1}}K_{r}(x,t) \right| _{x=x_{i}} b_{n,k}(t) \mathrm{d}t\\&\qquad =\int _0^1 \left| \frac{\partial }{\partial x}K_{r-j}(x,t) \right| _{x=x_{i}} b_{n,k}(t) \mathrm{d}t \le \frac{1}{n+1} \left| \frac{\partial }{\partial x} K_{r-j}(x_{i},{{\tilde{t}}}) \right| , \quad {{\tilde{t}}}\in (0,1); \end{aligned} \end{aligned}$$

    hence

    $$\begin{aligned} \Vert A_s\Vert _\infty= & {} \max _{0\le i\le n} \sum _{k=0}^n | p_{n,k}^{(2j+1) }\left( x_{i}\right) |\le \frac{1}{n+1} \sum _{k=0}^n \max _{0\le i\le n} \left| \frac{\partial }{\partial x} K_{r-j}(x_{i},{{\tilde{t}}}) \right| \\= & {} \max _{0\le i\le n} \left| \frac{\partial }{\partial x} K_{r-j}(x_{i},{{\tilde{t}}}) \right| . \end{aligned}$$

From the definition of \(K_l(x,t)\), there exist \(M_1, M_2\in \mathrm{I\!R}\) such that \(\left| K_l(x,t)\right| \le M_1\) and \(\displaystyle \left| \frac{\partial }{\partial x} K_l(x,t)\right| \le M_2\), \(l\ge 0\), for all \(0\le x,t\le 1\). The result follows. \(\square \)
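The key fact used in the proof, \(\int_0^1 b_{n,k}(t)\,\mathrm{d}t = \frac{1}{n+1}\) for every k, is easy to check numerically. A minimal Python sketch (our own illustration, not part of the paper's algorithm; `integral_01` is a simple midpoint quadrature we introduce here):

```python
from math import comb

def bernstein(n, k, t):
    """Bernstein basis polynomial b_{n,k}(t) = C(n,k) t^k (1-t)^(n-k)."""
    return comb(n, k) * t**k * (1 - t)**(n - k)

def integral_01(f, steps=20000):
    # composite midpoint rule on [0, 1]
    h = 1.0 / steps
    return h * sum(f((j + 0.5) * h) for j in range(steps))

# every basis function of order n integrates to 1/(n+1) on [0, 1]
n = 7
integrals = [integral_01(lambda t, k=k: bernstein(n, k, t))
             for k in range(n + 1)]
```

Since the \(b_{n,k}\) also sum to 1 pointwise, the \(n+1\) integrals sum to 1, consistent with each being \(\frac{1}{n+1}\).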

Lemma 1

With the previous notation and hypotheses, system (5.2) has a unique solution if \(T=L\left\| A_n^q\right\| _\infty <1\). The solution can be calculated by the iterations

$$\begin{aligned} \left( Y_n^q\right) _{j+1}=G\left( \left( Y_n^q\right) _{j}\right) ,\quad \qquad j=0,\dots , N, \end{aligned}$$

with a fixed \(\left( Y_n^q\right) _{0} \in \mathrm{I\!R}^{n(q+1)}\). Moreover, at the jth iteration, the error satisfies

$$\begin{aligned} \left\| \left( Y_n^q\right) _{j+1}-Y_n^q\right\| _{\infty } \le \frac{T^{j}}{1-T} \left\| \left( Y_n^q\right) _{1}-\left( Y_n^q\right) _{0}\right\| _{\infty } \,. \end{aligned}$$

Proof

The proof follows by a standard technique, applying the well-known contraction principle together with Proposition 5. \(\square \)
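Lemma 1 is the standard Banach fixed-point argument. As an illustration only, the following Python sketch runs the iteration \(\left( Y\right) _{j+1}=G\left( \left( Y\right) _{j}\right) \) on a scalar model map \(G(y)=\frac{1}{2}\cos y\), whose Lipschitz constant \(T=\frac{1}{2}<1\) plays the role of \(L\Vert A_n^q\Vert _\infty\); the function names are ours.

```python
import math

def fixed_point(G, y0, tol=1e-12, max_iter=200):
    """Iterate y_{j+1} = G(y_j) until successive iterates agree to tol."""
    y = y0
    for _ in range(max_iter):
        y_next = G(y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y

# model contraction with T = 1/2 < 1, so the lemma's hypothesis holds
G = lambda y: 0.5 * math.cos(y)
y_star = fixed_point(G, 0.0)
```

By the stated error bound, the distance to the fixed point decays at least geometrically with ratio \(T\), so a modest number of iterations suffices for machine accuracy.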

The previous Lemma allows us to consider the first-order approximating function

$$\begin{aligned} {{\overline{\phi }}}_n\left( x\right) =Q_r\left[ y\right] \left( x\right) +\sum _{k=0}^{n}p_{n,k}\left( x\right) f\left( x_k, y_k, y_k',\ldots , y_k^{(q)}\right) . \end{aligned}$$
(5.3)

Proposition 6

Let y(x) be the solution of problem (1.1) and \(\overline{\phi }_n\left( x\right) \) the first-order approximation. Then

$$\begin{aligned} \Vert y-{{\overline{\phi }}}_n\Vert =o(1), \end{aligned}$$

if \(LZ<1\), where \(\displaystyle Z=(q+1) D\) with D as in Theorem 10.

Proof

For all \(j=0,\dots , q\) and for all \(x \in [0,1]\), from (4.2), we get

$$\begin{aligned} \phi _n^{(j)}\left( x\right) =Q_r^{(j)}\left[ y\right] \left( x\right) +\sum _{k=0}^{n}p_{n,k}^{(j)}\left( x\right) f\left( x_k, y(x_k), y'(x_k),\ldots , y^{(q)}(x_k)\right) , \end{aligned}$$
(5.4)

and by differentiating (5.3)

$$\begin{aligned} {{\overline{\phi }}}_n^{(j)}\left( x\right) =Q_r^{(j)}\left[ y\right] \left( x\right) +\sum _{k=0}^{n}p_{n,k}^{(j)}\left( x\right) f\left( x_k, y_k, y_k',\ldots , y_k^{(q)}\right) . \end{aligned}$$
(5.5)


From (5.4), (5.5) and (4.1), we obtain

$$\begin{aligned} \begin{aligned} \phi _n^{(j)}\left( x\right) -{{\overline{\phi }}}_n^{(j)}\left( x\right) =&\sum _{k=0}^{n}\left[ f\left( x_k, y(x_k), y'(x_k),\ldots , y^{(q)}(x_k)\right) -f\left( x_k, y_k, y_k',\ldots , y_k^{(q)}\right) \right] \\&\times \int _0^1 \frac{\partial ^j}{\partial x^j} K_r(x,t) b_{n,k}(t) \mathrm{d}t. \end{aligned} \end{aligned}$$

Hence, from the Lipschitz property of f

$$\begin{aligned} \begin{array}{lll} &{}\displaystyle \Bigl | \phi _n^{(j)}\left( x\right) -{{\overline{\phi }}}_n^{(j)}\left( x\right) \Bigr | \le \sum _{k=0}^{n}\left( L \sum _{i=0}^q \Bigl |y^{(i)}(x_k)- y_k^{(i)}\Bigr |\right) \max _{0\le x,t\le 1} \Bigl | \frac{\partial ^j}{\partial x^j} K_r(x,t)\Bigr | \frac{1}{n+1} \\ &{}\quad \displaystyle \le L \max _{0\le i\le q} \left\{ \max _{0\le k\le n} \bigl |y^{(i)}(x_k)- y_k^{(i)}\bigr |\right\} (q+1) \max _{0\le x,t\le 1} \Bigl | \frac{\partial ^j}{\partial x^j} K_r(x,t)\Bigr | \\ &{}\quad \displaystyle \le L \max _{0\le i\le q} \left\{ \max _{0\le x\le 1} \bigl |y^{(i)}(x)- {{\overline{\phi }}}_n^{(i)}(x)\bigr |\right\} (q+1) \max _{0\le j\le q} \left\{ \max _{0\le x,t\le 1} \Bigl | \frac{\partial ^j}{\partial x^j} K_r(x,t)\Bigr | \right\} \\ &{}\quad \displaystyle =L Z \left\| y- {{\overline{\phi }}}_n \right\| . \end{array} \end{aligned}$$

Finally, the last inequality yields

$$\begin{aligned} \left\| y-{{\overline{\phi }}}_n \right\| \le \left\| y- \phi _n \right\| +\left\| \phi _n-{{\overline{\phi }}}_n \right\| \le \left\| y- \phi _n \right\| +L Z \left\| y-{{\overline{\phi }}}_n \right\| ; \end{aligned}$$

hence the result follows, by Corollary 3. \(\square \)

The Bernstein extrapolation method for BVP (1.1) can be summarized as follows:

  1. Data input: problem (1.1);

  2. for a fixed \(n\in \mathrm{I\!N}\), solve the algebraic system (5.1);

  3. for a fixed \(m\in \mathrm{I\!N}\), choose a sequence \(n_i\), \(i=0,\dots , m\), and calculate \({{\overline{\phi }}}_{n_i} (x)\);

  4. for \(k=1,\dots , m\), \(i=0,\dots , m-k\), calculate \(T_k^{(i)}\).

5.1 Collocation-Birkhoff–Lagrange Approach

The collocation-Birkhoff–Lagrange approach to BVPs was proposed in [16]. Here, we use this approach for comparison with the method described above.

Let y(x) be the solution of problem (1.1). For any \(n\in \mathrm{I\!N}\), if \(y(x)\in C^{2r+n+2}[0,1]\), we can approximate \(y^{(2r+1)}\left( x\right) \) in \(x\in [0,1]\) by the well-known Lagrange interpolation polynomial

$$\begin{aligned} y^{(2r+1)}(x)=L_n\left[ y\right] \left( x\right) +R_n\left[ y\right] \left( x\right) , \end{aligned}$$
(5.6)

where

$$\begin{aligned} L_n\left[ y\right] \left( x\right) =\sum _{i=0}^n l_i(x) y^{(2r+1)}\left( x_i\right) , \qquad R_n\left[ y\right] \left( x\right) =\frac{1}{(n+1)!} \omega _n(x) y^{(2r+n+2)}\left( \xi _x\right) , \end{aligned}$$

\(l_i\left( x\right) \) being the fundamental Lagrange polynomials and \(\omega _n(x)=\prod _{i=0}^n (x-x_i)\), with \(x_i\), \(i=0,\dots ,n\), \(n+1\) distinct points in [0, 1] and \(\xi _x\) a point in the smallest interval containing x and all the \(x_i\).
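For concreteness, here is a minimal Python sketch of the fundamental Lagrange polynomials \(l_i(x)\) and the interpolant \(L_n[y](x)\) on given nodes (our own illustration; equidistant nodes are assumed, as in the numerical experiments below):

```python
def lagrange_basis(nodes, i, x):
    """Fundamental polynomial l_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)."""
    p = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            p *= (x - xj) / (nodes[i] - xj)
    return p

def lagrange_interp(nodes, values, x):
    """L_n[y](x) = sum_i l_i(x) * values[i]."""
    return sum(v * lagrange_basis(nodes, i, x) for i, v in enumerate(values))

# with n+1 = 4 equidistant nodes, any cubic is reproduced exactly,
# consistent with the remainder R_n[y] vanishing for y^{(2r+1)} of degree <= n
nodes = [0.0, 1.0 / 3, 2.0 / 3, 1.0]
vals = [t**3 - 2 * t for t in nodes]
```

The basis satisfies \(l_i(x_j)=\delta_{ij}\), which is what makes (5.8) a collocation polynomial at the nodes \(x_i\).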

Then, by substituting (5.6) in (3.1), we have

$$\begin{aligned} y(x)=Q_r\left[ y\right] \left( x\right) +\sum _{i=0}^n q_{n,i}(x) f\left( x_i,y(x_i),y'(x_i),\ldots ,y^{(q)} (x_i)\right) + T_{r,n}[y] (x), \end{aligned}$$
(5.7)

where

$$\begin{aligned} q_{n,i}(x)=\int _0^1 K_r(x,t) l_i(t)\, \mathrm{d}t,\qquad i=0,\dots , n, \end{aligned}$$

and the remainder term \(T_{r,n}[y](x)\) is given by

$$\begin{aligned} T_{r,n}[y](x)=\frac{1}{(n+1)!} \int _0^1 K_r(x,t)\omega _n(t) y^{(2r+n+2)}\left( \xi _t\right) \mathrm{d}t. \end{aligned}$$

Relation (5.7) suggests considering the implicitly defined polynomial

$$\begin{aligned} y_{n}(x)=Q_r\left[ y\right] \left( x\right) +\sum _{i=0}^n q_{n,i}(x) f\left( x_i, y_{n}(x_i), y_{n}'(x_i),\ldots , y_{n}^{(q)} (x_i)\right) . \end{aligned}$$
(5.8)

Theorem 16

[16] The polynomial \(y_{n}(x)\) of degree \(2r+n+1\) defined in (5.8) is a collocation polynomial for (1.1) at nodes \(x_i\).

An algorithm for practical calculation is similar to that used in [16].

6 Numerical Examples

As noted above, there are no specific methods in the literature for the numerical solution of problem (1.1), nor have we found specific numerical examples against which to compare our results. In the following, we report some problems that validate the theoretical results given above. Since the analytical solutions of the considered examples are known, for each fixed \(x\in I\) we compute the true errors

$$\begin{aligned} e_{B,n}(x)=\left| y(x)-{{\overline{\phi }}}_n(x)\right| ,\quad E_{B,m}(x)=\left| y(x)-T_m^{(0)}[y](x)\right| ,\quad e_{L,n}(x)=\left| y(x)-{{\overline{y}}}_{n}(x)\right| . \end{aligned}$$

Example 1

Consider the following problem:

$$\begin{aligned} \left\{ \begin{array}{l} y'''(x)+2 e^{-3 y(x)}=4(1+x)^{-3}, \qquad \quad x\in [0,1] \\ y\left( 1\right) =\log 2,\quad y'(0)=1,\\ y''(1)=-\frac{1}{4}. \end{array} \right. \end{aligned}$$
(6.1)

The analytical solution is \(\displaystyle { y\left( x\right) = \log (1+x)}\).
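One can check directly that \(y(x)=\log(1+x)\) satisfies (6.1): \(y'''(x)=2(1+x)^{-3}\) and \(2e^{-3y(x)}=2(1+x)^{-3}\), whose sum is \(4(1+x)^{-3}\), while the boundary values \(y(1)=\log 2\), \(y'(0)=1\), \(y''(1)=-\frac14\) follow from \(y'=(1+x)^{-1}\) and \(y''=-(1+x)^{-2}\). A few lines of Python confirm this (illustrative check only):

```python
import math

# residual of the ODE in (6.1) at the exact solution y(x) = log(1+x):
# y''' + 2 exp(-3y) - 4 (1+x)^(-3) should vanish identically on [0, 1]
def residual(x):
    y = math.log(1.0 + x)
    y3 = 2.0 / (1.0 + x) ** 3          # third derivative of log(1+x)
    return y3 + 2.0 * math.exp(-3.0 * y) - 4.0 * (1.0 + x) ** (-3)

checks = [abs(residual(j / 10.0)) for j in range(11)]
```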

The first approximating polynomials \({{\overline{\phi }}}_n(x)\) are

$$\begin{aligned} {{\overline{\phi }}}_2(x)&=0.10803 + x - 0.70961 x^2 + 0.42560 x^3 - 0.15413 x^4 + 0.023259 x^5\\ {{\overline{\phi }}}_3(x)&=0.06454 + x - 0.62952 x^2 + 0.39201 x^3 - 0.17309 x^4 + 0.04420 x^5 -0.00500 x^6\\ {{\overline{\phi }}}_4(x)&=0.04531 + x - 0.59258 x^2 + 0.37570 x^3 - 0.18558 x^4 + 0.06144 x^5 - 0.01226 x^6 \\&\quad + 0.00112 x^7\\ {{\overline{\phi }}}_5(x)&=0.03473 + x - 0.57175 x^2 + 0.36631 x^3 - 0.19457 x^4 + 0.07557 x^5 - 0.02022 x^6 \\&\quad + 0.00333 x^7 - 0.00025 x^8\\ {{\overline{\phi }}}_6(x)&=0.02808 + x - 0.55847 x^2 + 0.36027 x^3 - 0.20137 x^4 + 0.08726 x^5 - 0.02815 x^6 \\&\quad + 0.00636 x^7- 0.00089 x^8 + 0.00006 x^9. \end{aligned}$$

Figure 1 shows the graphs of the error functions \(e_{B,n_k}(x)\) with \(n_k=4+2k\), \(k=0,\dots , 3\) (Fig. 1a) and the graph of \(E_{B,3}(x)\) (Fig. 1b). Figure 1a shows the slow convergence of the approximating polynomial sequence \(\left\{ {{\overline{\phi }}}_n\right\} \).

Fig. 1 Error functions for \(n_k=4+2k\), \(k=0,\dots , 3\)—Problem (6.1)

The absolute errors in \(x=\frac{1}{2}\) using extrapolation for different sequences \(n_k\), \(k=0,\dots , m\), are displayed in Table 1.

Table 1 Extrapolation error in \(x=\frac{1}{2}\)—Problem (6.1)

Figure 2 shows the graphs of the error functions \(e_{L,n}(x)\) in the case of collocation on equidistant nodes, for several values of the degree n of the approximating polynomials.

Table 2 lists the comparison between the approximation by extrapolated Bernstein polynomials (using one of the sequences considered in Table 1) and collocation-Birkhoff–Lagrange polynomials of degree 10 and 12, respectively.

Example 2

Consider the following problem:

$$\begin{aligned} \left\{ \begin{array}{l} y^{(\textit{v})}(x)=y^2(x) e^{-x}, \qquad \quad x\in [0,1] \\ y\left( 1\right) =y''(1)=y^{(\textit{iv})}(1)=e,\\ y'(0)=y'''(0)=1. \end{array} \right. \end{aligned}$$
(6.2)

The analytical solution is \(\displaystyle { y\left( x\right) = e^x}\).

Figure 3 shows the graphs of the error functions \(e_{B,n_k}(x)\) with \(n_k=2+2k\), \(k=0,\dots , 3\) (Fig. 3a) and the graph of \(E_{B,2}(x)\) (Fig. 3b).

Fig. 2 Error functions \(e_{L,n}(x)\)—Problem (6.1)

Table 2 Comparison between extrapolated Bernstein polynomials and collocation-Birkhoff–Lagrange polynomials—Problem (6.1)

Fig. 3 Error functions for \(n_k=2+2k\), \(k=0,\dots ,3\)—Problem (6.2)

The errors in \(x=\frac{1}{2}\) using extrapolation for different sequences \(n_k\) are displayed in Table 3.

Figure 4 shows the graphs of the error functions \(e_{L,n}(x)\) in the case of collocation on equidistant nodes, for several values of the degree n of the approximating polynomials.

Table 4 lists the comparison between the approximation by extrapolated Bernstein polynomials and collocation-Birkhoff–Lagrange polynomials.

Table 3 Extrapolation error in \(x=\frac{1}{2}\)—Problem (6.2)

Fig. 4 Error functions \(e_{L,n}(x)\)—Problem (6.2)

Table 4 Comparison between extrapolated Bernstein polynomials and collocation-Birkhoff–Lagrange polynomials—Problem (6.2)

7 Conclusions

In this paper, we considered general nonlinear high odd-order differential equations with Lidstone–Euler boundary conditions of the second type. First, we studied the associated interpolation problem and obtained new properties, such as the integral Cauchy and Peano representations of the error, bounds for the error and its derivatives, and the convergence properties of the interpolatory polynomial sequences. Then, we considered the associated Lidstone–Euler second-type boundary value problem from both a theoretical and a computational point of view. In particular, we gave a theorem on the existence and uniqueness of the solution and proposed two different numerical approaches for its approximation: one based on extrapolated Bernstein polynomials, the other on Lagrange interpolation and the collocation principle. We note the convergence properties of the Bernstein extrapolation method and, in some cases, the greater computational accuracy of the Lagrange-collocation method. The numerical examples show that the two methods yield errors of similar order. Further developments, both theoretical and computational, are possible and desirable.