1 Introduction

Many scientific applications in mathematical physics and fluid mechanics can be described by the Emden–Fowler equation

$$ \textstyle\begin{cases} u''(x)+\frac{\beta}{x}u'(x)=H ( x,u(x) ) ,\\ u(0)=u_{0},\qquad u'(0)=0, \end{cases} $$
(1)

where \(H(x,u(x))=f(x)g(u)\); \(f(x)\) and \(g( u )\) are given functions of x and u, respectively, and β is a shape factor. Equation (1) is a singular initial value problem for a second-order ordinary differential equation; such equations have been used to model several phenomena in mathematical physics and astrophysics such as thermal explosions, stellar structure, the thermal behavior of a spherical cloud of gas, isothermal gas spheres, and thermionic currents [1–3].

For \(f(x)=1 \), \(g(u)=u^{m} \), Eq. (1) becomes the standard Lane–Emden equation of the first kind with index m, and for \(f(x)=1 \), \(g(u)=\exp(u) \) it becomes the Lane–Emden equation of the second kind.

The Emden–Fowler equation was studied by Fowler [4] to describe a variety of phenomena in fluid mechanics and relativistic mechanics among others. The singular behavior that occurs at \(x = 0\) is the main difficulty of Eq. (1).

During the last few decades, many analytic and numerical methods have been developed to study and approximate the solutions of various Lane–Emden and Emden–Fowler equations. The Adomian decomposition method (ADM) was presented by Wazwaz [5–7]; the variational iteration method (VIM) was investigated in [8–10]. The authors of [11] solved singular IVPs of Lane–Emden type by the homotopy perturbation method (HPM). Parand et al. investigated nonlinear differential equations of Lane–Emden type by the rational Legendre pseudospectral approach [12].

Based on ideas in [8] and [5], Eq. (1) is derived from the following equation:

$$ \textstyle\begin{cases} x^{-\beta}\,\frac{\mathrm{d}}{\mathrm{d}x}(x^{\beta}\,\frac{\mathrm{d}}{\mathrm{d}x})u=H( x,u(x)),\\ u(0)=u_{0},\qquad u'(0)=0, \end{cases} $$
(2)

where β is called the shape factor.

To derive Emden–Fowler type equations of third and fourth order, we follow the sense of Eq. (2) and set

$$ \textstyle\begin{cases} x^{-\beta}\,\frac{\mathrm{d}^{p}}{\mathrm{d}x^{p}}(x^{\beta}\,\frac{\mathrm{d}^{q}}{\mathrm{d}x^{q}})u=H( x,u(x)),\\ u(0)=u_{0},\qquad u'(0)=0. \end{cases} $$
(3)

To determine third-order equations, it is clear that we should select

$$p+q=3,\quad p, q\geq 1, $$

which leads to the following two choices:

$$p=2, q=1\quad \text{and}\quad p=1, q=2. $$

Substituting \((p=2, q=1) \) in Eq. (3) gives

$$ \textstyle\begin{cases} u'''(x)+\frac{2\beta}{x}u''(x)+\frac{\beta (\beta-1)}{x^{2}}u'(x)=H( x,u(x)),\\ u(0)=u_{0},\qquad u'(0)=u''(0)=0. \end{cases} $$
(4)

Notice that the singular point \(x=0\) appears twice as x and \(x^{2}\) with shape factors 2β and \(\beta (\beta-1)\), respectively.

In the other case, we substitute \((p=1, q=2) \) in Eq. (3) to obtain

$$ \textstyle\begin{cases} u'''(x)+\frac{\beta}{x}u''(x)=H( x,u(x)),\\ u(0)=u_{0},\qquad u'(0)=u''(0)=0. \end{cases} $$
(5)

The singular point \(x=0\) appears once with shape factor β.

Similarly, to derive Emden–Fowler type equations of fourth order from Eq. (3), we should select

$$p+q=4,\quad p, q\geq 1, $$

which leads to the following three choices:

$$p=3, q=1,\qquad p=2, q=2,\quad \text{and}\quad p=1, q=3. $$

Substituting these options in Eq. (3) gives Eqs. (6)–(8) as follows:

$$ \textstyle\begin{cases} u^{(4)}(x)+\frac{3\beta}{x}u'''(x)+\frac{3\beta (\beta-1)}{x^{2}}u''(x)+\frac{\beta (\beta-1)(\beta-2)}{x^{3}}u'(x)=H( x,u(x)),\\ u(0)=u_{0}, \qquad u'(0)=u''(0)=u'''(0)=0, \end{cases} $$
(6)

where the singular point \(x=0\) appears thrice as x, \(x^{2}\), and \(x^{3}\) with shape factors 3β, \(3\beta (\beta-1)\), and \(\beta (\beta-1)(\beta-2) \), respectively.

$$ \textstyle\begin{cases} u^{(4)}(x)+\frac{2\beta}{x}u'''(x)+\frac{\beta (\beta-1)}{x^{2}}u''(x)=H( x,u(x)),\\ u(0)=u_{0}, \qquad u'(0)=u''(0)=u'''(0)=0, \end{cases} $$
(7)

Unlike Eq. (6), the singular point \(x=0\) appears here only twice, as x and \(x^{2}\), with shape factors 2β and \(\beta (\beta-1)\), respectively.

$$ \textstyle\begin{cases} u^{(4)}(x)+\frac{\beta}{x}u'''(x)=H( x,u(x)),\\ u(0)=u_{0}, \qquad u'(0)=u''(0)=u'''(0)=0. \end{cases} $$
(8)

The singular point \(x=0\) appears once with shape factor β.
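The expansions above can be checked symbolically. The following small sketch (an illustration in Python with sympy, not part of the original derivation) expands the operator of Eq. (3) for each choice of \((p, q)\) used above and confirms the shape factors appearing in Eqs. (4)–(8).

```python
# An illustrative sympy sketch (not part of the paper) that expands the operator in
# Eq. (3) for each choice of (p, q) and confirms the shape factors of Eqs. (4)-(8).
import sympy as sp

x, beta = sp.symbols('x beta', positive=True)
u = sp.Function('u')

def emden_fowler_lhs(p, q):
    """Expand x**(-beta) * d^p/dx^p ( x**beta * d^q u/dx^q )."""
    expr = x**(-beta) * sp.diff(x**beta * sp.diff(u(x), x, q), x, p)
    return sp.collect(sp.expand(expr), [sp.diff(u(x), x, k) for k in range(1, p + q)])

for p, q in [(2, 1), (1, 2), (3, 1), (2, 2), (1, 3)]:
    print((p, q), emden_fowler_lhs(p, q))
# e.g. (2, 1) prints u''' + 2*beta*u''/x + (beta**2 - beta)*u'/x**2, matching Eq. (4)
```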

Solving such high-order singular models with standard methods is difficult, so appropriate numerical methods for these equations are needed. The theory of reproducing kernels [13] was used for the first time at the beginning of the twentieth century by Zaremba in his work on boundary value problems for harmonic and biharmonic functions.

This theory has been successfully applied to integral equations [14, 15], partial differential equations [16], boundary value problems [17–22], fractional differential equations [23], and so on [24–29].

In this paper, we generalize the idea of the reproducing kernel Hilbert space method (RKHSM) to provide a numerical solution for Eqs. (4)–(8). The main idea is to construct reproducing kernel spaces satisfying the conditions that determine the solution of the new Emden–Fowler type equations of third and fourth order. The analytical solution is represented as a series through the values of the right-hand side function of the equation. To demonstrate the effectiveness of the RKHSM algorithm, several numerical experiments for Eqs. (4)–(8) are presented.

The outline of the paper is as follows: several reproducing kernel spaces are described in the next section. In Section 3, linear operators, a complete orthonormal system, and some essential results are introduced, and the existence of solutions for Eqs. (4)–(8) in a reproducing kernel space is established. Various numerical examples are presented in Section 4. Section 5 ends this work with a brief conclusion.

2 Several reproducing kernel spaces

In this section, the reproducing kernel spaces needed to solve Eqs. (4)–(8) by the reproducing kernel method are constructed.

1. The reproducing kernel space \(W_{2}^{4,0}[0,1] \) for Eqs. (4)–(5)

Space \(W_{2}^{4,0}[0,1] \) is defined by:

$$\begin{aligned} W_{2}^{4,0}\equiv{}& W_{2}^{4,0}[0,1] \\ ={}&\bigl\{ u(x) |u, u', u'', u''' \text{ are absolutely continuous real-valued functions on } [0, 1], \\ & u^{(4)}\in L^{2}[0, 1], u(0)=u'(0)=u''(0)=0 \bigr\} , \end{aligned} $$

and endowed with inner product

$$ \langle u, v\rangle_{W_{2}^{4,0}}=u^{(3)}(0)v^{(3)}(0)+ \int_{0}^{1} u^{(4)}(x)v^{(4)}(x) \,\mathrm {d}x,\quad \forall u, v\in W_{2}^{4,0}. $$
(9)
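For concreteness, the inner product (9) can be written as a short symbolic routine. The following sympy sketch is an illustration only; the function name inner_w240 is ours, not the paper's.

```python
# An illustrative sympy implementation of the inner product (9) on W_2^{4,0};
# u and v are sympy expressions in the symbol x.
import sympy as sp

x = sp.symbols('x')

def inner_w240(u, v):
    """<u, v> = u'''(0) v'''(0) + integral_0^1 u^(4)(x) v^(4)(x) dx, as in Eq. (9)."""
    boundary = sp.diff(u, x, 3).subs(x, 0) * sp.diff(v, x, 3).subs(x, 0)
    return boundary + sp.integrate(sp.diff(u, x, 4) * sp.diff(v, x, 4), (x, 0, 1))

# toy usage with a function satisfying u(0) = u'(0) = u''(0) = 0
print(inner_w240(x**4, x**4))   # 24**2 * 1 = 576
```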

Theorem 1

The space \(W_{2}^{4,0} \) is a complete reproducing kernel space with reproducing kernel

$$ R_{y}(x) = \textstyle\begin{cases} -\frac{x^{3} (x^{4}-7 x^{3} y+21 x^{2} y^{2}-140 y^{3}-35 x y^{3} )}{5040}, &x\le y,\\ -\frac{y^{3} (21 x^{2} y^{2}-7 x y^{3}+y^{4}-35 x^{3} (4+y) )}{5040},& y< x. \end{cases} $$
(10)

That is, for every \(x\in [0,1] \) and \(u\in W_{2}^{4,0} \), \(\langle u(x), R_{y}(x)\rangle_{W_{2}^{4,0}}=u(y) \) holds.

Proof

The proof of the completeness and reproducing property of \(W_{2}^{4,0} \) is similar to the proof of Theorem 1.3.2 in [29].

Now, let us derive the expression of the reproducing kernel function \(R_{y}(x) \) in \(W_{2}^{4,0} \). Integrating by parts in (9) several times, we have

$$\begin{aligned} \bigl\langle u(x), R_{y}(x)\bigr\rangle _{W_{2}^{4,0}} ={}& \sum _{i=0}^{2}(-1)^{2-i}u^{(i)}(0) \frac{\partial^{7-i}R_{y}(0)}{\partial x^{7-i}}+u^{(3)}(0) \biggl[ \frac{\partial^{3}R_{y}(0)}{\partial x^{3}}- \frac{\partial^{4}R_{y}(0)}{\partial x^{4}} \biggr] \\ &{}+\sum_{i=0}^{3}(-1)^{3-i}u^{(i)}(1) \frac{\partial^{7-i}R_{y}(1)}{\partial x^{7-i}} + \int _{0}^{1} u(x) \frac{\partial^{8}R_{y}(x)}{\partial x^{8}}\,\mathrm {d}x. \end{aligned}$$
(11)

Since \(R_{y}(x)\in W_{2}^{4,0}\), it follows that

$$ \frac{\partial^{i}R_{y}(0)}{\partial x^{i}}=0, \quad i=0,1,2. $$
(12)

Also, since \(u(x)\in W_{2}^{4,0} \), one obtains

$$ u^{(i)}(0)=0, \quad i=0,1,2. $$
(13)

Thus, if

$$ \frac{\partial^{3}R_{y}(0)}{\partial x^{3}}-\frac{\partial^{4}R_{y}(0)}{\partial x^{4}}=0 $$
(14)

and

$$ \frac{\partial^{7-i}R_{y}(1)}{\partial x^{7-i}}=0, \quad i=0,1,2,3, $$
(15)

then

$$ \bigl\langle u(x), R_{y}(x)\bigr\rangle _{W_{2}^{4,0}} = \int _{0}^{1} u(x) \frac{\partial^{8}R_{y}(x)}{\partial x^{8}}\,\mathrm {d}x. $$
(16)

Now, for each \(x\in [0,1] \), if \(R_{y}(x) \) also satisfies

$$ \frac{\partial^{8}R_{y}(x)}{\partial x^{8}}=\delta(x-y), $$
(17)

where δ is the Dirac delta function, then \(\langle u(x), R_{y}(x)\rangle_{W_{2}^{4,0}}=u(y) \). Obviously, \(R_{y}(x) \) is then the reproducing kernel function of the space \(W_{2}^{4,0} \).

The characteristic equation of Eq. (17) is \(\lambda^{8}=0 \), and its root \(\lambda=0 \) has multiplicity eight. Let the representation of \(R_{y}(x) \) be

$$ R_{y}(x) = \textstyle\begin{cases} \sum_{i=1}^{8}c_{i}(y) x^{i-1}, & x\le y,\\ \sum_{i=1}^{8}d_{i}(y) x^{i-1}, & y< x. \end{cases} $$
(18)

Since the derivatives of \(R_{y}(x) \) up to order six are continuous at \(x=y \), \(R_{y}(x) \) satisfies

$$ \frac{\partial ^{i} R_{y}(x)}{\partial x^{i}} \biggm|_{x=y+0}=\frac{\partial ^{i} R_{y}(x)}{\partial x^{i}} \biggm|_{x=y-0}, \quad i=0,1,\ldots,6. $$
(19)

Integrating Eq. (17) from \(y-\epsilon \) to \(y+\epsilon \) with respect to x and letting \(\epsilon\rightarrow 0 \) gives the jump condition for \(\frac{\partial^{7} R_{y}(x)}{\partial x^{7}} \) at \(x=y \):

$$ \frac{\partial^{7} R_{y}(x)}{\partial x^{7}} \biggm|_{x=y+0}-\frac{\partial^{7} R_{y}(x)}{\partial x^{7}} \biggm|_{x=y-0}=1. $$
(20)

Hence, the coefficients \(c_{i}(y) \) and \(d_{i}(y) \) of (18) for \(i=1,2,\ldots,8 \) can be determined by solving Eqs. (12), (14), (15), (19), and (20).

This completes the proof. □
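For readers who wish to reproduce the kernel, the linear system described in the proof can be solved symbolically. The sketch below is an illustration in Python with sympy (not the authors' code): it writes the two polynomial branches of (18), imposes conditions (12), (14), (15), (19), and (20), and solves for the coefficients; the result should agree with the kernel in Eq. (10).

```python
# An illustrative sympy sketch for the coefficients of the piecewise representation (18).
# Each branch is a degree-7 polynomial in x; conditions (12), (14), (15), (19), and (20)
# give 16 linear equations for the 16 unknown coefficients.
import sympy as sp

x, y = sp.symbols('x y')
c = sp.symbols('c1:9')                     # coefficients of the branch x <= y
d = sp.symbols('d1:9')                     # coefficients of the branch y < x
Rl = sum(c[i] * x**i for i in range(8))    # left branch
Rr = sum(d[i] * x**i for i in range(8))    # right branch

eqs = []
eqs += [sp.diff(Rl, x, i).subs(x, 0) for i in range(3)]                        # (12)
eqs += [sp.diff(Rl, x, 3).subs(x, 0) - sp.diff(Rl, x, 4).subs(x, 0)]           # (14)
eqs += [sp.diff(Rr, x, k).subs(x, 1) for k in range(4, 8)]                     # (15)
eqs += [(sp.diff(Rr, x, k) - sp.diff(Rl, x, k)).subs(x, y) for k in range(7)]  # (19)
eqs += [(sp.diff(Rr, x, 7) - sp.diff(Rl, x, 7)).subs(x, y) - 1]                # (20)

sol = sp.solve(eqs, c + d)
print(sp.factor(Rl.subs(sol)))             # kernel for x <= y
print(sp.factor(Rr.subs(sol)))             # kernel for y < x
```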

2. The reproducing kernel space \(W_{2}^{5,0}[0,1] \) for Eqs. (6)–(8)

Space \(W_{2}^{5,0}[0,1] \) is defined as follows:

$$\begin{aligned} W_{2}^{5,0}\equiv {}&W_{2}^{5,0}[0,1] \\ ={}& \bigl\{ u(x) |u, u', u'', u''', u^{(4)} \text{ are absolutely continuous real-valued functions on } \\ & [0, 1], u^{(5)}\in L^{2}[0, 1], u(0)=u'(0)=u''(0)=u'''(0)=0 \bigr\} \end{aligned} $$

endowed with inner product

$$ \langle u, v\rangle_{W_{2}^{5,0}}=u^{(4)}(0)v^{(4)}(0)+ \int_{0}^{1} u^{(5)}(x)v^{(5)}(x) \,\mathrm {d}x,\quad \forall u, v\in W_{2}^{5,0}. $$
(21)

Theorem 2

The space \(W_{2}^{5,0} \) is a complete reproducing kernel space with reproducing kernel

$$ R_{y}(x) = \textstyle\begin{cases} \frac{x^{4} (x^{5}-9 x^{4} y+36 x^{3} y^{2}-84 x^{2} y^{3}+630 y^{4}+126 x y^{4} )}{362\mbox{,}880}, &x\le y,\\ \frac{y^{4} (-84 x^{3} y^{2}+36 x^{2} y^{3}-9 x y^{4}+y^{5}+126 x^{4} (5+y) )}{362\mbox{,}880},& y< x. \end{cases} $$
(22)

That is, for every \(x\in [0,1] \) and \(u\in W_{2}^{5,0} \), \(\langle u(x), R_{y}(x)\rangle_{W_{2}^{5,0}}=u(y) \) holds.

Proof

The proof of this theorem is similar to that of Theorem 1. Therefore the proof is omitted. □

3. The reproducing kernel space \(W_{2}^{1}[0,1] \)

Space \(W_{2}^{1}[0,1] \) is defined as follows:

$$\begin{aligned} W_{2}^{1} \equiv{}& W_{2}^{1}[0,1] \\ ={}& \bigl\{ u(x) |u \text{ is an absolutely continuous real-valued function on } [0, 1],\\ & u'\in L^{2}[0, 1]\bigr\} . \end{aligned} $$

The inner product in \(W_{2}^{1} \) is given by

$$ \langle u, v\rangle_{W_{2}^{1}}=u(0)v(0)+ \int_{0}^{1} u'(x)v'(x) \,\mathrm {d}x, $$
(23)

and the norm \(\Vert u \Vert _{W_{2}^{1}} \) is given by

$$ \Vert u \Vert _{W_{2}^{1}}=\sqrt{\langle u, u \rangle_{W_{2}^{1}}}, $$
(24)

where \(u, v\in W_{2}^{1}\).

In [29], the authors proved that \(W_{2}^{1} \) is a complete reproducing kernel space and its reproducing kernel is

$$ R_{y}(x) = \textstyle\begin{cases} 1+x, &x\le y,\\ 1+y, & y< x. \end{cases} $$
(25)
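The reproducing property of the kernel (25) can be verified directly from the inner product (23). The following small sympy check is an illustration only; the test function and the node \(y=3/10 \) are arbitrary choices, not taken from the paper.

```python
# An illustrative sympy check of the reproducing property of the kernel (25) in W_2^1.
import sympy as sp

x = sp.symbols('x', positive=True)
yval = sp.Rational(3, 10)
u = sp.sin(x) + x**2                                    # arbitrary smooth test function
R = sp.Piecewise((1 + x, x <= yval), (1 + yval, True))  # kernel R_y(x) with y = 3/10
integrand = sp.piecewise_fold(sp.diff(u, x) * sp.diff(R, x))
inner = u.subs(x, 0) * R.subs(x, 0) + sp.integrate(integrand, (x, 0, 1))
print(sp.simplify(inner - u.subs(x, yval)))             # expected output: 0
```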

Theorem 3

([29])

Let \(R_{y}(x) \) be the reproducing kernel of the space \(W_{2}^{m,0} \). Then

$$\frac{\partial ^{i+j} R_{y}(x) }{\partial x^{i}\partial y^{j}}\in W_{2}^{m,0}, \quad i+j=m-1, m\geq 1. $$

3 Solving Eqs. (4)–(8) in reproducing kernel spaces

Equations (4)–(8) cannot be solved directly using the reproducing kernel method, since it is impossible to obtain a reproducing kernel satisfying the nonhomogeneous initial conditions of Eqs. (4)–(8). So, we need to homogenize the conditions of Eqs. (4)–(8).

In Eqs. (4)–(8), put

$$ u(x)=\overline{u}(x)+u_{0}. $$
(26)

Hence Eqs. (4)–(8) can be converted into the following equivalent forms:

$$ \textstyle\begin{cases} L_{1}^{T}\overline{u}(x)=H( x,\overline{u}(x)+u_{0}),\\ \overline{u}(0)=\overline{u}'(0)=\overline{u}''(0)=0, \end{cases} $$
(27)

where \(L_{1}^{T}: W_{2}^{4,0}\rightarrow W_{2}^{1} \) and \(L_{1}^{T}\overline{u}(x)=\overline{u}'''(x)+\frac{2\beta}{x}\overline{u}''(x)+\frac{\beta (\beta-1)}{x^{2}}\overline{u}'(x) \),

$$ \textstyle\begin{cases} L_{2}^{T}\overline{u}(x)=H( x,\overline{u}(x)+u_{0}),\\ \overline{u}(0)=\overline{u}'(0)=\overline{u}''(0)=0, \end{cases} $$
(28)

where \(L_{2}^{T}: W_{2}^{4,0}\rightarrow W_{2}^{1} \) and \(L_{2}^{T}\overline{u}(x)=\overline{u}'''(x)+\frac{\beta}{x}\overline{u}''(x) \),

$$ \textstyle\begin{cases} L_{1}^{F}\overline{u}(x)=H( x,\overline{u}(x)+u_{0}),\\ \overline{u}(0)=\overline{u}'(0)=\overline{u}''(0)=\overline{u}'''(0)=0, \end{cases} $$
(29)

where \(L_{1}^{F}: W_{2}^{5,0}\rightarrow W_{2}^{1} \) and \(L_{1}^{F}\overline{u}(x)=\overline{u}^{(4)}(x)+\frac{3\beta}{x}\overline{u}'''(x)+\frac{3\beta (\beta-1)}{x^{2}}\overline{u}''(x)+\frac{\beta (\beta-1)(\beta-2)}{x^{3}}\overline{u}'(x)\),

$$ \textstyle\begin{cases} L_{2}^{F}\overline{u}(x)=H( x,\overline{u}(x)+u_{0}),\\ \overline{u}(0)=\overline{u}'(0)=\overline{u}''(0)=\overline{u}'''(0)=0, \end{cases} $$
(30)

where \(L_{2}^{F}: W_{2}^{5,0}\rightarrow W_{2}^{1} \) and \(L_{2}^{F}\overline{u}(x)=\overline{u}^{(4)}(x)+\frac{2\beta}{x}\overline{u}'''(x)+\frac{\beta (\beta-1)}{x^{2}}\overline{u}''(x)\), and

$$ \textstyle\begin{cases} L_{3}^{F}\overline{u}(x)=H( x,\overline{u}(x)+u_{0}),\\ \overline{u}(0)=\overline{u}'(0)=\overline{u}''(0)=\overline{u}'''(0)=0, \end{cases} $$
(31)

where \(L_{3}^{F}: W_{2}^{5,0}\rightarrow W_{2}^{1} \) and \(L_{3}^{F}\overline{u}(x)=\overline{u}^{(4)}(x)+\frac{\beta}{x}\overline{u}'''(x)\), respectively.

The operators \(L_{1}^{T}, L_{2}^{T}, L_{1}^{F}, L_{2}^{F} \), and \(L_{3}^{F} \) in Eqs. (27)–(31) are clearly linear; the following theorem shows that, since \(\overline{u}(x)\) is sufficiently smooth, they are also bounded.

Theorem 4

The linear operators \(L_{1}^{T}, L_{2}^{T}, L_{1}^{F}, L_{2}^{F} \), and \(L_{3}^{F} \) defined by Eqs. (27)(31) are bounded linear operators.

Proof

We only need to prove \(\Vert L_{1}^{T}u \Vert _{W_{2}^{1}}^{2}\leq M \Vert u \Vert _{W_{2}^{4,0}}^{2} \), where \(M> 0\) is a positive constant. By Eqs. (23)–(24), we get

$$\bigl\Vert L_{1}^{T}u \bigr\Vert _{W_{2}^{1}}^{2}= \bigl\langle L_{1}^{T}u, L_{1}^{T}u\bigr\rangle _{W_{2}^{1}}=\bigl[L_{1}^{T}u(0) \bigr]^{2}+ \int_{0}^{1} \bigl[\bigl(L_{1}^{T}u\bigr)'(x) \bigr]^{2}\,\mathrm {d}x. $$

By Theorem 1, we have

$$u(y)=\bigl\langle R_{y}(\cdot), u(\cdot) \bigr\rangle _{W_{2}^{4,0}}, $$

and

$$L_{1}^{T}u(y)=\bigl\langle L_{1}^{T} R_{y}(\cdot), u(\cdot) \bigr\rangle _{W_{2}^{4,0}}, $$

so

$$\bigl\vert L_{1}^{T}u(y) \bigr\vert \leq \Vert u \Vert _{W_{2}^{4,0}} \bigl\Vert L_{1}^{T} R_{y}(\cdot) \bigr\Vert _{W_{2}^{4,0}}= M_{1} \Vert u \Vert _{W_{2}^{4,0}}, $$

where \(M_{1} \) is a positive constant. Since

$$\bigl(L_{1}^{T}u\bigr)'(y)=\bigl\langle \bigl(L_{1}^{T} R_{y}\bigr)'(\cdot), u(\cdot) \bigr\rangle _{W_{2}^{4,0}}, $$

then

$$\bigl\vert \bigl(L_{1}^{T}u\bigr)'(y) \bigr\vert \leq \Vert u \Vert _{W_{2}^{4,0}} \bigl\Vert \bigl(L_{1}^{T} R_{y}\bigr)'(\cdot) \bigr\Vert _{W_{2}^{4,0}}= M_{2} \Vert u \Vert _{W_{2}^{4,0}}, $$

where \(M_{2} \) is a positive constant. So, we obtain

$$\bigl[\bigl(L_{1}^{T}u\bigr)'(y)\bigr] ^{2}\leq M_{2}^{2} \Vert u \Vert _{W_{2}^{4,0}}^{2} $$

and

$$\int_{0}^{1}\bigl[\bigl(L_{1}^{T}u \bigr)'(x)\bigr]^{2}\,\mathrm {d}x\leq M_{2}^{2} \Vert u \Vert _{W_{2}^{4,0}}^{2}, $$

that is,

$$\begin{aligned} \bigl\Vert L_{1}^{T}u \bigr\Vert _{W_{2}^{1}}^{2}&= \bigl[L_{1}^{T}u(0) \bigr]^{2}+ \int_{0}^{1}\bigl[\bigl(L_{1}^{T}u \bigr)'(x)\bigr]^{2}\,\mathrm {d}x \\ &\leq \bigl( M_{1}^{2}+M_{2}^{2} \bigr) \Vert u \Vert _{W_{2}^{4,0}}^{2}=M \Vert u \Vert _{W_{2}^{4,0}}^{2}, \end{aligned}$$

where \(M= M_{1}^{2}+M_{2}^{2} \) is a positive constant.

This completes the proof. □

The boundedness of the other linear operators is proved similarly. In order to solve Eqs. (27)–(31), define

$$ \psi_{i}(x)= \textstyle\begin{cases} L_{1}^{T}R_{y}(x)|_{y=(x_{i})}&\text{ for (27)},\\ L_{2}^{T}R_{y}(x)|_{y=(x_{i})}&\text{ for (28)},\\ L_{1}^{F}R_{y}(x)|_{y=(x_{i})}&\text{ for (29)},\\ L_{2}^{F}R_{y}(x)|_{y=(x_{i})}&\text{ for (30)},\\ L_{3}^{F}R_{y}(x)|_{y=(x_{i})}&\text{ for (31)}, \end{cases} $$
(32)

where \(\{x_{i}\}_{i=1}^{\infty}\) is dense in the interval \([0, 1] \). Hence, one gets

$$ \psi_{i}(x)= \textstyle\begin{cases} \frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}+\frac{2\beta}{x}\frac{\partial^{2} R_{x}(x_{i})}{\partial x^{2}}+\frac{\beta(\beta-1)}{x^{2}}\frac{\partial R_{x}(x_{i})}{\partial x}&\text{ for (27)},\\ \frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}+\frac{\beta}{x}\frac{\partial^{2} R_{x}(x_{i})}{\partial x^{2}}&\text{ for (28)},\\ \frac{\partial^{4} R_{x}(x_{i})}{\partial x^{4}}+\frac{3\beta}{x}\frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}+\frac{3\beta(\beta-1)}{x^{2}}\frac{\partial^{2} R_{x}(x_{i})}{\partial x^{2}}+\frac{\beta(\beta-1)(\beta-2)}{x^{3}}\frac{\partial R_{x}(x_{i})}{\partial x}&\text{ for (29)},\\ \frac{\partial^{4} R_{x}(x_{i})}{\partial x^{4}}+\frac{2\beta}{x}\frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}+\frac{\beta(\beta-1)}{x^{2}}\frac{\partial^{2} R_{x}(x_{i})}{\partial x^{2}}&\text{ for (30)},\\ \frac{\partial^{4} R_{x}(x_{i})}{\partial x^{4}}+\frac{\beta}{x}\frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}&\text{ for (31)}. \end{cases} $$
(33)

Lemma

Let \(\{x_{i}\}_{i=1}^{\infty}\) be dense on \([0, 1]\), then

  1. \(\psi_{i}(x)\in W_{2}^{4,0}\) for (27)–(28).

  2. \(\psi_{i}(x)\in W_{2}^{5,0}\) for (29)–(31).

  3. \(\{\psi_{i}(x)\}_{i=1}^{\infty}\) is complete in \(W_{2}^{4,0}\) for (27)–(28).

  4. \(\{\psi_{i}(x)\}_{i=1}^{\infty}\) is complete in \(W_{2}^{5,0}\) for (29)–(31).

Proof

By Theorem 3, one can get the proof of parts 1 and 2 immediately.

3. For each \(u(x) \in W_{2}^{4,0}\), let \(\langle u(x), \psi_{i}(x)\rangle_{W_{2}^{4,0}} = 0\) (\(i =1, 2, \ldots\)), which means

$$\begin{aligned} \bigl\langle u(x), \psi_{i}(x)\bigr\rangle _{W_{2}^{4,0}} &=\bigl\langle u(x), \bigl[L_{1}^{T}R_{x}(y) \bigr](x_{i})\bigr\rangle _{W_{2}^{4,0}} \\ &=L_{1}^{T} \bigl\langle u(x), R_{x_{i}}(y)\bigr\rangle _{W_{2}^{4,0}} \\ & =L_{1}^{T} u(x_{i})=0. \end{aligned}$$
(34)

Note that \(\{x_{i}\}_{i=1}^{\infty}\) is dense on \([0, 1]\), and hence \((L_{1}^{T}u)(x) = 0\). It follows that \(u \equiv 0\) from the existence of \((L_{1}^{T})^{-1}\). So \(\{\psi_{i}(x)\}_{i=1}^{\infty}\) is complete in \(W_{2}^{4,0}\).

4. The proof of this part is similar to that of part 3. □

The orthonormal system \(\{\bar{\psi}_{i}(x)\}_{i=1}^{\infty}\) of \(W_{2}^{4,0}\) or \(W_{2}^{5,0}\) can be derived from the Gram–Schmidt orthogonalization process of \(\{\psi_{i}(x)\}_{i=1}^{\infty}\) as follows:

$$ \bar{\psi}_{i}(x) =\sum_{k=1}^{i} \beta_{ik}\psi_{k}(x), \quad i=1,2,\ldots, $$
(35)

where \(\beta_{ik} \) are the orthogonalization coefficients given by \(\beta_{11}=1/ \Vert \psi_{1} \Vert \), \(\beta_{ii}=1/B_{i} \), and \(\beta_{ij}=-(1/B_{i})\sum_{k=j}^{i-1}c_{ik}\beta_{kj} \) for \(j< i \), where \(B_{i}=\sqrt{ \Vert \psi_{i} \Vert ^{2}-\sum_{k=1}^{i-1}c_{ik}^{2}} \) and \(c_{ik}=\langle \psi_{i}, \bar{\psi}_{k}\rangle_{W_{2}^{m,0}}\), \(m=4,5\).
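In practice, the coefficients \(\beta_{ik}\) can be computed from the Gram matrix \(G_{ij}=\langle \psi_{i},\psi_{j}\rangle \): the matrix \((\beta_{ik})\) is the inverse of the lower-triangular Cholesky factor of G, which is equivalent to the recursion above. The following numpy sketch is an illustration under the assumption that G has already been assembled from the kernel; it is not the authors' code.

```python
# Illustrative sketch: orthonormalization coefficients beta_{ik} of Eq. (35) from the
# Gram matrix G[i, j] = <psi_i, psi_j>, via the inverse Cholesky factor.
import numpy as np

def orthonormalization_coefficients(G: np.ndarray) -> np.ndarray:
    """Return lower-triangular beta with psi_bar_i = sum_k beta[i, k] * psi_k."""
    C = np.linalg.cholesky(G)          # G = C @ C.T with C lower triangular
    return np.linalg.inv(C)            # rows give the coefficients beta_{ik}

# toy usage with a symmetric positive-definite stand-in for G
G = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.0, 0.2],
              [0.1, 0.2, 0.5]])
beta = orthonormalization_coefficients(G)
print(np.allclose(beta @ G @ beta.T, np.eye(3)))   # True: the psi_bar_i are orthonormal
```

For larger N, a modified Gram–Schmidt or QR-based variant is numerically more stable than forming the inverse explicitly.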

Theorem 5

Let \(\{x_{i}\}_{i=1}^{\infty}\) be dense on \([0, 1]\). Then the exact solution of Eqs. (27)–(31) can be represented as

$$ \overline{u}(x)=\sum_{i = 1}^{\infty} \sum_{k = 1}^{i}\beta_{ik}H\bigl( x_{k},\overline{u}(x_{k})+u_{0}\bigr)\bar{ \psi}_{i}(x). $$
(36)

Proof

Let \(\overline{u}(x)\) be a solution of (27) in \(W_{2}^{4,0}\). Since \(\{\bar{\psi}_{i}\}_{i=1}^{\infty}\) is a complete orthonormal system, \(\overline{u}(x)\) can be expanded as a Fourier series:

$$\begin{aligned} \overline{u}(x)&=\sum_{i = 1}^{\infty}\bigl\langle \overline{u}(x),\bar{\psi}_{i}(x)\bigr\rangle _{W_{2}^{4,0}} \bar{\psi}_{i}(x) = \sum_{i = 1}^{\infty} \sum_{k = 1}^{i}\beta_{ik}\bigl\langle \overline{u}(x),\psi_{k}(x)\bigr\rangle _{W_{2}^{4,0}}\bar{ \psi}_{i}(x) \\ &=\sum_{i= 1}^{\infty} \sum _{k = 1}^{i}\beta_{ik} \bigl[L_{1}^{T} \overline{u}(y) \bigr](x_{k})\bar{\psi}_{i}(x)=\sum _{i= 1}^{\infty} \sum_{k = 1}^{i} \beta_{ik}L_{1}^{T} \overline{u}(x_{k}) \bar{\psi}_{i}(x) \\ &=\sum_{i = 1}^{\infty} \sum _{k = 1}^{i}\beta_{ik}H\bigl( x_{k},\overline{u}(x_{k})+u_{0}\bigr)\bar{ \psi}_{i}(x). \end{aligned}$$

Now the approximate solution \(\overline{u}_{n}(x)\) can be obtained by the n-term truncation of the analytical solution \(\overline{u}(x)\), that is,

$$ \overline{u}_{n}(x)=\sum_{i = 1}^{n} \sum_{k = 1}^{i}\beta_{ik}H\bigl( x_{k},\overline{u}(x_{k})+u_{0}\bigr)\bar{ \psi}_{i}(x). $$
(37)

The representations of the exact solutions of Eqs. (28)–(31) are obtained in the same way, so we omit the details. This completes the proof. □

Remark

If Eqs. (27)–(31) are linear, that is, \(H( x,\overline{u}(x)+u_{0})=H( x) \), then the solution of equations can be obtained directly from Eq. (36).

If Eqs. (27)–(31) are nonlinear, the approximate solutions can be obtained using the following method. According to (36), we construct the following iteration formula:

$$ \textstyle\begin{cases} \overline{u}_{0}(x)=0,\\ \overline{u}_{n+1}(x)=\sum_{i = 1}^{\infty} \sum_{k = 1}^{i}\beta_{ik}H( x_{k},\overline{u}_{n}(x_{k})+u_{0})\bar{\psi}_{i}(x),\quad n=0,1,\ldots. \end{cases} $$
(38)

For the proof of convergence of the iterative formula (38), see [22].

Remark

In the iteration process of Eq. (38), we can guarantee that the approximation \(\overline{u}_{n}(x) \) satisfies the initial conditions of Eqs. (27)–(31).

Now, the approximate solution \(\overline{u}_{n}^{N}(x) \) can be obtained by taking finitely many terms in the series representation of \(\overline{u}_{n}(x)\):

$$ \overline{u}_{n}^{N}(x)=\sum _{i = 1}^{N} \sum_{k = 1}^{i} \beta_{ik}H\bigl( x_{k},\overline{u}_{n-1}(x_{k})+u_{0} \bigr)\bar{\psi}_{i}(x),\quad n=1,2,\ldots. $$
(39)

In the following algorithm, we summarize how the method works.

Algorithm

  1. Set \(m=4 \) or \(m=5 \) in \(W_{2}^{m,0}\).

  2. Choose N collocation points in the domain \([0, 1]\).

  3. For \(i=1,2,\ldots, N\), set

    $$ \psi_{i}(x)= \textstyle\begin{cases} \frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}+\frac{2\beta}{x}\frac{\partial^{2} R_{x}(x_{i})}{\partial x^{2}}+\frac{\beta(\beta-1)}{x^{2}}\frac{\partial R_{x}(x_{i})}{\partial x}&\text{ for (27)},\\ \frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}+\frac{\beta}{x}\frac{\partial^{2} R_{x}(x_{i})}{\partial x^{2}}&\text{ for (28)},\\ \frac{\partial^{4} R_{x}(x_{i})}{\partial x^{4}}+\frac{3\beta}{x}\frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}+\frac{3\beta(\beta-1)}{x^{2}}\frac{\partial^{2} R_{x}(x_{i})}{\partial x^{2}}+\frac{\beta(\beta-1)(\beta-2)}{x^{3}}\frac{\partial R_{x}(x_{i})}{\partial x}&\text{ for (29)},\\ \frac{\partial^{4} R_{x}(x_{i})}{\partial x^{4}}+\frac{2\beta}{x}\frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}+\frac{\beta(\beta-1)}{x^{2}}\frac{\partial^{2} R_{x}(x_{i})}{\partial x^{2}}&\text{ for (30)},\\ \frac{\partial^{4} R_{x}(x_{i})}{\partial x^{4}}+\frac{\beta}{x}\frac{\partial^{3} R_{x}(x_{i})}{\partial x^{3}}&\text{ for (31)}. \end{cases} $$

  4. Set \(\bar{\psi}_{i}(x) =\sum_{k=1}^{i} \beta_{ik}\psi_{k}(x)\), \(i=1,2,\ldots,N\).

  5. Choose an initial function \(\overline{u}_{0}(x)\).

  6. Set \(n=1\).

  7. Set \(B_{n}=\sum_{k = 1}^{n}\beta_{nk}H( x_{k},\overline{u}_{n-1}(x_{k})+u_{0})\).

  8. Set \(\overline{u}_{n}^{N}(x)=\sum_{i = 1}^{n}B_{i}\bar{\psi}_{i}(x)\).

  9. If \(n < N\), then set \(n = n + 1\) and go to step 7, else stop.
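A compact Python sketch of steps 5–9 is given below. It assumes that the orthonormal functions \(\bar{\psi}_{i}\) and the coefficients \(\beta_{ik}\) have already been constructed (for instance as in the Gram–Schmidt sketch above); the names rkhs_solve, psi_bar, beta, and nodes are illustrative, not taken from the paper.

```python
# An illustrative Python sketch of steps 5-9 of the algorithm. psi_bar is a list of
# callables for psi_bar_i(x), beta a lower-triangular array of the coefficients
# beta_{ik}, nodes the collocation points x_1,...,x_N, H(x, u) the right-hand side of
# Eqs. (27)-(31), and u0 the initial value u(0).
def rkhs_solve(H, u0, psi_bar, beta, nodes):
    """Incremental construction of u_bar_n^N following steps 5-9."""
    N = len(nodes)
    B = []
    u_prev = lambda x: 0.0                 # step 5: initial guess u_bar_0 = 0
    for n in range(1, N + 1):              # steps 6 and 9
        # step 7: B_n = sum_{k<=n} beta_{nk} H(x_k, u_bar_{n-1}(x_k) + u0)
        Bn = sum(beta[n - 1][k] * H(nodes[k], u_prev(nodes[k]) + u0) for k in range(n))
        B.append(Bn)
        # step 8: u_bar_n^N(x) = sum_{i<=n} B_i psi_bar_i(x); freeze B for this iterate
        u_prev = lambda x, B=tuple(B): sum(B[i] * psi_bar[i](x) for i in range(len(B)))
    return u_prev
```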

4 Numerical experiments

In this section, we present and discuss numerical results obtained by applying the reproducing kernel method to two third-order and three fourth-order Emden–Fowler type equations, in the spaces \(W_{2}^{4,0}\) and \(W_{2}^{5,0}\), respectively. For each example, we show a figure illustrating the convergence of the errors. The results demonstrate that the present method is remarkably effective.

Example 1

([8])

For \(f(x)=6(10+2x^{3}+x^{6}) \), \(g(u)=\exp(-3u) \), and \(\beta=3 \), Eq. (4) is as follows:

$$ \textstyle\begin{cases} {u}'''(x)+\frac{6}{x}{u}''(x)+\frac{6}{x^{2}}{u}'(x)=6(10+2x^{3}+x^{6})\exp(-3{u}),\\ {u}(0)=0, \qquad {u}'(0)=0,\qquad {u}''(0)=0, \end{cases} $$
(40)

which has the exact solution \({u}(x)=\ln(x^{3}+1)\).
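As a quick illustrative check (not part of the original example), one can verify with sympy that this function satisfies Eq. (40):

```python
# An illustrative sympy check that u(x) = ln(x^3 + 1) satisfies Eq. (40).
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.log(x**3 + 1)
lhs = sp.diff(u, x, 3) + 6/x * sp.diff(u, x, 2) + 6/x**2 * sp.diff(u, x)
rhs = 6*(10 + 2*x**3 + x**6) * (x**3 + 1)**-3   # since exp(-3u) = (x^3 + 1)^(-3)
print(sp.simplify(lhs - rhs))                   # expected output: 0
```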

We use the proposed scheme, choose initial approximation \(u_{0}(x)=0 \), and take \(N=30, 45 \); \(n=7 \); and \(x_{i}=\frac{i}{N} \), where \(i=1,2,\ldots,N \). The numerical results are shown in Fig. 1, and a comparison of the errors in the form of maximum absolute error between the method developed in this paper and that of Wazwaz [8] is shown in Table 1.

Figure 1

Figures of absolute errors \(|\overline{u}(x)-\overline{u}^{30}_{7}| \), \(|\overline{u}(x)-\overline{u}^{45}_{7}| \) for Example 1

Table 1 Maximum absolute error for Example 1 (\(n=7\))

Example 2

For \(f(x)=-\frac{9}{8}(x^{6}+8) \), \(g(u)=u^{-5} \), and \(\beta=2 \), Eq. (5) is as follows:

$$ \textstyle\begin{cases} {u}'''(x)+\frac{2}{x}{u}''(x)=-\frac{9}{8}(x^{6}+8)({u})^{-5},\\ {u}(0)=1,\qquad {u}'(0)=0,\qquad {u}''(0)=0. \end{cases} $$
(41)

The exact solution is given in [8] as \({u}(x)=\sqrt{x^{3}+1}\). If we apply Eq. (26) to Eq. (41), then the following equation is obtained:

$$ \textstyle\begin{cases} \overline{u}'''(x)+\frac{2}{x}\overline{u}''(x)=-\frac{9}{8}(x^{6}+8)(\overline{u}+1)^{-5},\\ \overline{u}(0)=\overline{u}'(0)=\overline{u}''(0)=0. \end{cases} $$
(42)

\(\overline{u}(x)=\sqrt{x^{3}+1}-1\) is the true solution of Eq. (42).

We use the proposed method, choose initial approximation \(\overline{u}_{0}(x)=0 \), and take \(N=20, 35 \); \(n=5 \); and \(x_{i}=\frac{i}{N} \), where \(i=1,2,\ldots,N \).

The numerical results are shown in Fig. 2, and a comparison of the errors in the form of maximum absolute error between the method developed in this paper and that of Wazwaz [8] is shown in Table 2.

Figure 2

Figures of absolute errors \(|\overline{u}(x)-\overline{u}^{20}_{5}| \), \(|\overline{u}(x)-\overline{u}^{35}_{5}| \) for Example 2

Table 2 Maximum absolute error for Example 2 (\(n=5\))

Example 3

([5])

For \(f(x)=60(7-18x^{4}+3x^{8}) \), \(g(u)=u^{9} \), and \(\beta=4 \), Eq. (6) is as follows:

$$ \textstyle\begin{cases} {u}^{(4)}(x)+\frac{12}{x}{u}'''(x)+\frac{36}{x^{2}}{u}''(x)+\frac{24}{x^{3}}{u}'(x)=60(7-18x^{4}+3x^{8}) ({u})^{9},\\ {u}(0)=1,\qquad {u}'(0)=0,\qquad {u}''(0)=0,\qquad {u}'''(0)=0, \end{cases} $$
(43)

which has the exact solution \({u}(x)=\frac{1}{\sqrt{x^{4}+1}}\). By Eq. (26), Eq. (43) can be converted into the following equivalent form:

$$ \textstyle\begin{cases} \overline{u}^{(4)}(x)+\frac{12}{x}\overline{u}'''(x)+\frac{36}{x^{2}}\overline{u}''(x)+\frac{24}{x^{3}}\overline{u}'(x)=60(7-18x^{4}+3x^{8}) (\overline{u}+1)^{9},\\ \overline{u}(0)=\overline{u}'(0)=\overline{u}''(0)=\overline{u}'''(0)=0, \end{cases} $$
(44)

which has the exact solution \(\overline{u}(x)=\frac{1}{\sqrt{x^{4}+1}}-1\).

We use the proposed scheme, choose initial approximation \(\overline{u}_{0}(x)=0 \), and take \(N=35, 45 \); \(n=7 \); and \(x_{i}=\frac{i}{N} \), where \(i=1,2,\ldots,N \).

The numerical results are shown in Fig. 3, and a comparison of the errors in the form of maximum absolute error between the method developed in this paper and that of Wazwaz [5] is shown in Table 3.

Figure 3

Figures of absolute errors \(|\overline{u}(x)-\overline{u}^{35}_{7}| \), \(|\overline{u}(x)-\overline{u}^{45}_{7}| \) for Example 3

Table 3 Maximum absolute error for Example 3 (\(n=7\))

Example 4

[5] For \(f(x)=1\), \(g(u)=u^{m} \), and \(\beta=4 \), Eq. (7) is as follows:

$$ \textstyle\begin{cases} {u}^{(4)}(x)+\frac{8}{x}{u}'''(x)+\frac{12}{x^{2}}{u}''(x)=-({u})^{m},\\ {u}(0)=1,\qquad {u}'(0)=0,\qquad {u}''(0)=0,\qquad {u}'''(0)=0, \end{cases} $$
(45)

which, for \(m=0 \), has the exact solution \({u}(x)=1-\frac{x^{4}}{360}\). By Eq. (26), Eq. (45) can be converted into the following equivalent form:

$$ \textstyle\begin{cases} \overline{u}^{(4)}(x)+\frac{8}{x}\overline{u}'''(x)+\frac{12}{x^{2}}\overline{u}''(x)=-(\overline{u}+1)^{m},\\ \overline{u}(0)=\overline{u}'(0)=\overline{u}''(0)=\overline{u}'''(0)=0, \end{cases} $$
(46)

which has the exact solution \(\overline{u}(x)=-\frac{x^{4}}{360}\) for \(m=0 \). We use the proposed scheme, choose initial approximation \(\overline{u}_{0}(x)=0 \), and take \(N=10, 15 \); \(n=5 \); and \(x_{i}=\frac{i}{N} \), where \(i=1,2,\ldots,N \). The numerical results are shown in Fig. 4, and a comparison of the errors in the form of maximum absolute error between the method developed in this paper and that of Wazwaz [5] is shown in Table 4.
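For \(m=0 \), the stated exact solution can be verified with a short sympy check (an illustration only):

```python
# An illustrative sympy check that u(x) = 1 - x^4/360 satisfies Eq. (45) when m = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
u = 1 - x**4 / 360
lhs = sp.diff(u, x, 4) + 8/x * sp.diff(u, x, 3) + 12/x**2 * sp.diff(u, x, 2)
rhs = -1                           # -(u)**m with m = 0
print(sp.simplify(lhs - rhs))      # expected output: 0
```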

Figure 4

Figures of absolute errors \(|\overline{u}(x)-\overline{u}^{10}_{5}| \), \(|\overline{u}(x)-\overline{u}^{15}_{5}| \) for Example 4

Table 4 Maximum absolute error for Example 4 (\(n=5\))

Example 5

[5] We finally consider the following nonlinear Emden–Fowler type equation:

$$ \textstyle\begin{cases} {u}^{(4)}(x)+\frac{3}{x}{u}'''(x)=-96(1-10x^{4}+5x^{8})\exp(-4{u}),\\ {u}(0)=0,\qquad {u}'(0)=0, \qquad {u}''(0)=0,\qquad {u}'''(0)=0. \end{cases} $$
(47)

The exact solution is \({u}(x)=\ln(x^{4}+1) \). We use the proposed method, choose initial approximation \({u}_{0}(x)=0 \), and take \(N=30, 45 \); \(n=5 \); and \(x_{i}=\frac{i}{N} \), where \(i=1,2,\ldots,N \). The numerical results are shown in Fig. 5, and a comparison of the errors in the form of maximum absolute error between the method developed in this paper and that of Wazwaz [5] is shown in Table 5.

Figure 5

Figures of absolute errors \(|\overline{u}(x)-\overline{u}^{30}_{5}| \), \(|\overline{u}(x)-\overline{u}^{45}_{5}| \) for Example 5

Table 5 Maximum absolute error for Example 5 (\(n=5\))

5 Conclusion

In this study, we have presented a numerical scheme based on reproducing kernel spaces for solving high-order nonlinear singular initial value problems of Emden–Fowler type. A useful property of the reproducing kernel spaces is that, for some functions, no further integral computations are required; only the values of a function at the nodes are needed. This simplification not only improves the computational speed but also the computational accuracy. It was observed that the maximum absolute errors are smaller than those of the other developed methods [5, 8].

In addition, it is seen from the figures that the errors decrease as N increases. The numerical results show that the present method is an accurate and reliable technique for high-order nonlinear singular initial value problems of Emden–Fowler type. One of the considerable advantages of the method is that the approximate solutions are found very easily using computer code written in the Mathematica 8.0 software package.