1 Introduction

In 1914, Bratu [1] suggested the following important BVP:

$$\begin{aligned} x^{\prime \prime}(t)+\lambda e^{x(t)}=0, \end{aligned}$$
(1.1)

with the boundary condition (BC):

$$\begin{aligned} x(0)=x(1)=0. \end{aligned}$$
(1.2)

Bratu’s problem (1.1)–(1.2) has attracted the attention of many researchers due to its fruitful applications in many areas of applied science. In [2], Ascher et al. first obtained the exact solution of Bratu’s problem (1.1)–(1.2) as follows:

$$\begin{aligned} x(t)=-2\ln \biggl[\frac{\cosh (\frac{\theta}{2} (t-\frac{1}{2} ) )}{\cosh (\frac{\theta}{4} )} \biggr], \end{aligned}$$
(1.3)

where the number θ in (1.3) satisfies \(\theta =\sqrt{2\lambda}\cosh (\frac{\theta}{4})\). Suppose that \(\lambda _{c}\) is the value of λ for which the following equation holds:

$$\begin{aligned} \frac{\sqrt{2\lambda _{c}}\sinh (\frac{\theta}{4} )}{4}=1. \end{aligned}$$

Then we remark as follows.

Remark 1.1

  1. (i)

    Bratu’s problem admits no solution when \(\lambda >\lambda _{c}\).

  2. (ii)

    Bratu’s problem admits one and only one solution when \(\lambda =\lambda _{c}\).

  3. (iii)

    Bratu’s problem admits two solutions when \(\lambda <\lambda _{c}\).
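Eliminating \(\sqrt{2\lambda}\) between the two relations above shows that, at criticality, \(\frac{\theta}{4}\tanh (\frac{\theta}{4})=1\). The following short Python sketch (pure standard library; the bisection bracket \([1,2]\) is our choice) recovers the well-known critical value \(\lambda _{c}\approx 3.5138\):

```python
import math

# At lambda_c the two relations
#   theta = sqrt(2*lam) * cosh(theta/4)   and   sqrt(2*lam) * sinh(theta/4) / 4 = 1
# combine (eliminating sqrt(2*lam)) into (theta/4) * tanh(theta/4) = 1.
def f(u):
    # u = theta/4; we seek the root of u*tanh(u) - 1
    return u * math.tanh(u) - 1.0

lo, hi = 1.0, 2.0            # bracket: f(1) < 0 < f(2)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        hi = mid
    else:
        lo = mid

u = 0.5 * (lo + hi)          # theta/4 at criticality
theta_c = 4.0 * u
lam_c = theta_c**2 / (2.0 * math.cosh(u) ** 2)  # from theta = sqrt(2*lam)*cosh(theta/4)
print(theta_c, lam_c)        # lam_c ~ 3.5138307
```

For \(\lambda <\lambda _{c}\) the equation \(\theta =\sqrt{2\lambda}\cosh (\frac{\theta}{4})\) has two roots, matching item (iii) of the remark.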

Bratu’s BVP (1.1)–(1.2) has many important applications in various fields of applied science (see, e.g., [3–6] and the references therein). The exact solution of this problem comes from Ascher et al. [2]. Using this exact solution, various researchers investigated different numerical techniques and compared their findings with it [7–9]. However, these techniques do not guarantee the existence of a solution, converge slowly, and require overly complicated sets of parameters. Our alternative approach in this paper is to study the existence and iterative approximation of solutions of Bratu’s BVP (1.1)–(1.2) using a combination of fixed-point and Green’s function techniques. We first show that the sought solution of (1.1)–(1.2) can be cast as a fixed point of a continuous operator. We then show that this operator has a unique fixed point, which solves our problem. Moreover, we use this operator in the well-known Ishikawa [10] iterative scheme to obtain convergence. A stability result together with a comparative numerical experiment is provided, which validates the results and shows the high accuracy of the proposed approach.

Real-world problems can often be described in the form of differential equations. However, finding analytical solutions for such problems is very difficult (or even impossible in some cases), and hence one needs approximate solutions. In such cases, fixed-point theory provides alternative approaches, since the sought solution of such a problem can be viewed as a fixed point of a certain linear or nonlinear operator T, whose domain is normally a suitable Banach space. Hence, once the existence of a fixed point of the involved operator is established, it follows immediately that the problem under consideration has a solution. The famous Banach contraction principle (BCP) [11] states that if X is a Banach space and \(T:X\rightarrow X\) is a contraction map, that is, \(\Vert Tx-Ty\Vert \leq \nu \Vert x-y\Vert \) for all \(x,y\in X\) and some \(\nu \in (0,1)\), then T admits a unique fixed point \(x^{\ast}\in X\), which solves the operator equation \(x=Tx\). Once the existence of a fixed point (the sought solution of the operator equation \(x=Tx\)) is known, one asks for an iterative scheme to approximate its value. The proof of the BCP suggests the iterative scheme of Picard [12], \(x_{k+1}=Tx_{k}\), for approximating the unique fixed point of T. It is known that when T is nonexpansive, that is, \(\Vert Tx-Ty\Vert \leq \Vert x-y\Vert \) for all \(x,y\in X\), then T has a fixed point under some restrictions (see, e.g., [13–15] and others), but the Picard iteration of T generally fails to converge to a fixed point of T. An example of a nonexpansive mapping for which the Picard iteration does not converge is the following.

Example 1

Assume that \(X=[-1,1]\) and define the selfmap T on X by \(Tx=-x\) for all \(x\in X\). Then T is nonexpansive on X and has the unique fixed point \(x^{\ast}=0\), but if \(x_{0}\neq 0\), the Picard iterative scheme produces the divergent sequence \(x_{0},-x_{0},x_{0},-x_{0},\ldots \)
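A two-line computation makes the oscillation in Example 1 concrete:

```python
# Picard iteration x_{k+1} = T x_k for T x = -x: the iterates merely oscillate.
def T(x):
    return -x

x = 0.5                          # any starting point other than the fixed point 0
orbit = [x]
for _ in range(6):
    x = T(x)
    orbit.append(x)
print(orbit)                     # [0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5]
```

No subsequence of this orbit reaches the fixed point 0, although T is nonexpansive.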

On the other hand, the speed of convergence of the Picard iteration is often slow. To overcome these difficulties, Mann [16] and Ishikawa [10] generalized the Picard iterative scheme and proved convergence for nonexpansive mappings. In this paper, we consider the following iterative scheme, which is independent of both the Mann [16] and Ishikawa [10] iterative schemes:

$$\begin{aligned} \textstyle\begin{cases} x_{0}\in X, \\ y_{k}=(1-\beta _{k})x_{k}+\beta _{k}Tx_{k}, \\ x_{k+1}=(1-\alpha _{k})y_{k}+\alpha _{k}Ty_{k},\quad (k=0,1,2,3,\ldots), \end{cases}\displaystyle \end{aligned}$$
(1.4)

where \(\alpha _{k},\beta _{k}\in (0,1)\).
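As a quick scalar illustration of (1.4) (our toy example, not from the paper), take the contraction \(Tx=\cos x\) on \([0,1]\), whose unique fixed point is the Dottie number \(\approx 0.7390851\), with the illustrative constant parameters \(\alpha _{k}=\beta _{k}=0.5\):

```python
import math

def T(x):
    return math.cos(x)           # a contraction on [0, 1]; fixed point ~ 0.7390851

alpha = beta = 0.5               # illustrative constant parameters in (0, 1)
x = 0.0                          # x_0
for _ in range(50):
    y = (1 - beta) * x + beta * T(x)        # first line of (1.4)
    x = (1 - alpha) * y + alpha * T(y)      # second line of (1.4)
print(x)                         # the Dottie number, ~ 0.7390851
```

Each full step contracts the error by roughly the product of the two averaged factors, so the iterates settle to machine precision in a handful of steps.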

Recently, some authors suggested novel approaches to different classes of BVPs by embedding a Green’s function into the Picard, Mann, and Ishikawa iterative schemes. They called these modified schemes the Picard–Green’s, Mann–Green’s, and Ishikawa–Green’s iterative schemes, respectively. In particular, Kafri and Khuri [17] suggested Picard–Green’s and Mann–Green’s iterative schemes for solving the Bratu problem (1.1)–(1.2). They compared the speed of convergence of these new schemes with some classical methods and observed that the new schemes perform considerably better than the corresponding older techniques. Motivated by Kafri and Khuri [17], we first embed a Green’s function into the scheme (1.4) and prove its strong convergence for Bratu’s BVP (1.1)–(1.2). Similar results can be proved for the Picard–Green’s and Mann–Green’s iterative schemes. We then show that our new iterative scheme is weakly \(w^{2}\)-stable. Some numerical experiments supporting the main outcome are given. These numerical experiments also demonstrate the high numerical accuracy of our new iterative scheme.

2 Description of the iterative method

We first construct a Green’s function associated with a general class of second-order BVPs. We then embed this Green’s function into the iterative scheme (1.4) to obtain the desired new iterative scheme. We split this section into subsections as follows.

2.1 Solution of BVPs by Green’s functions

We first assume the following BVP with second order:

$$\begin{aligned} x^{\prime \prime}+y(t)x^{\prime}+z(t)x=q\bigl(t,x,x^{\prime},x^{\prime \prime} \bigr), \end{aligned}$$
(2.1)

and assume its BCs as follows:

$$\begin{aligned} \mathcal{B}_{1}[x]=a_{1}x(a)+a_{2}x^{\prime}(a)= \beta\quad \text{and}\quad \mathcal{B}_{2}[x]=b_{1}x(b)+b_{2}x^{\prime}(b)= \gamma, \end{aligned}$$
(2.2)

where \(0\leq t\leq 1\). We now decompose equation (2.1) into its linear and nonlinear parts as \(\mathcal{L}[x]=x^{\prime \prime}+y(t)x^{\prime}+z(t)x\) and \(\mathcal{N}[x]=q(t,x,x^{\prime},x^{\prime \prime})\), and our aim is to construct a Green’s function for \(\mathcal{L}\).

We now assume that Problem (2.1)–(2.2) admits a general solution of the form

$$\begin{aligned} x=x_{h}+x_{p}, \end{aligned}$$
(2.3)

where \(x_{h}\) denotes a solution of the homogeneous equation \(\mathcal{L}[x]=0\) and \(x_{p}\) is any particular solution of \(\mathcal{L}[x]= q(t,x,x^{\prime},x^{\prime \prime})\). According to [18, p. 568], the particular solution \(x_{p}\) can be expressed as follows:

$$\begin{aligned} x_{p}= \int _{a}^{b}\mathcal{G}(t,s)q\bigl(s,x_{p}(s),x^{\prime}_{p}(s),x^{\prime \prime}_{p}(s) \bigr)\,ds. \end{aligned}$$
(2.4)

The notation \(\mathcal{G}(t,s)\) in (2.4) denotes the Green’s function associated with \(\mathcal{L}[x]=0\). Note that if δ denotes the Dirac delta function, then from [18, p. 572], we have

$$\begin{aligned} \mathcal{L}\bigl[\mathcal{G}(t,s)\bigr]=\delta (t-s). \end{aligned}$$
(2.5)

Also, using the definition of \(\mathcal{L}\) and keeping (2.1), we have

$$\begin{aligned} \mathcal{L}\bigl[\mathcal{G}(t,s)\bigr]=\mathcal{G}^{\prime \prime}(t,s)+y(t) \mathcal{G}^{\prime}(t,s)+z(t)\mathcal{G}(t,s). \end{aligned}$$
(2.6)

Combining (2.5) and (2.6), the Green’s function \(\mathcal{G}(t,s)\) is thus defined as the solution of the differential equation

$$\begin{aligned} \mathcal{L}\bigl[\mathcal{G}(t,s)\bigr]=\mathcal{G}^{\prime \prime}(t,s)+y(t) \mathcal{G}^{\prime}(t,s)+z(t)\mathcal{G}(t,s)=\delta (t-s), \end{aligned}$$
(2.7)

where the new BCs are given as:

$$\begin{aligned} \mathcal{B}_{1}\bigl[\mathcal{G}(t,s)\bigr]=0= \mathcal{B}_{2}\bigl[\mathcal{G}(t,s)\bigr]. \end{aligned}$$
(2.8)

Hence it follows that for the given problem (2.1)–(2.2), \(x_{p}\) must obey the homogeneous BCs and \(x_{h}\) must satisfy the nonhomogeneous BCs \(\mathcal{B}_{1}[x] = \beta \) and \(\mathcal{B}_{2}[x] = \gamma \).

We now show that the proposed solution satisfies the BCs as well as the differential equation, provided certain assumptions are imposed on q. Applying the operator \(\mathcal{L}\) to (2.3) and keeping (2.4) and (2.7) in mind, we have

$$\begin{aligned} \mathcal{L}[x_{h}+x_{p}]= \int _{a}^{b}\delta (t-s)q\bigl(s,x_{p}(s),x^{ \prime}_{p}(s),x^{\prime \prime}_{p}(s) \bigr)\,ds=q\bigl(t,x_{p}(t),x^{\prime}_{p}(t),x^{ \prime \prime}_{p}(t) \bigr). \end{aligned}$$
(2.9)

We assume that q is either a function of t only or that it satisfies the following:

$$\begin{aligned} q\bigl(t,x_{p}(t),x^{\prime}_{p}(t),x^{\prime \prime}_{p}(t) \bigr)=q\bigl(t,x(t),x^{ \prime}(t),x^{\prime \prime}(t)\bigr), \end{aligned}$$
(2.10)

Then one can see that (2.1) is satisfied:

$$\begin{aligned} \mathcal{L}[x_{h}+x_{p}]=q \bigl(t,x(t),x^{\prime}(t),x^{\prime \prime}(t)\bigr). \end{aligned}$$
(2.11)

Note that for our Problem (1.1)–(1.2), the sought solution of (1.1) endowed with the BCs (1.2) has \(x_{h} = 0\). It follows that Condition (2.10) is satisfied, and so (2.11) holds.

Now, considering (2.8), one has

$$\begin{aligned} \mathcal{B}_{1}[x_{h}+x_{p}]= \mathcal{B}_{1}[x_{h}]+\mathcal{B}_{1} \biggl[ \int _{a}^{b}\mathcal{G}(t,s)q\bigl(s,x_{p}(s),x^{\prime}(s),x^{ \prime \prime}(s) \bigr)\,ds \biggr]=\beta \end{aligned}$$
(2.12)

and \(\mathcal{B}_{2}[x_{h}+x_{p}]=\gamma \).

Next we list some basic axioms of a given Green’s function as follows, which can be found in [18].

  1. (a1.)

    If \(x_{1}\) and \(x_{2}\) denote two linearly independent solutions of \(\mathcal{L}[x]=0\), then the associated Green’s function can be written in the following form:

    $$\begin{aligned} \mathcal{G}(t,s)= \textstyle\begin{cases} c_{1}x_{1}+c_{2}x_{2}& \text{when } a< t< s, \\ d_{1}x_{1}+d_{2}x_{2}& \text{when } s< t< b, \end{cases}\displaystyle \end{aligned}$$
    (2.13)
  2. (a2.)

    \(\mathcal{G}(t, s)\) is continuous at \(t = s\); that is,

    $$\begin{aligned} &\lim_{t\rightarrow s^{+}}\mathcal{G}(t,s)=\lim_{t\rightarrow s^{-}}\mathcal{G}(t,s), \end{aligned}$$
    (2.14)
    $$\begin{aligned} &c_{1}x_{1}(s)+c_{2}x_{2}(s)=d_{1}x_{1}(s)+d_{2}x_{2}(s). \end{aligned}$$
    (2.15)
  3. (a3.)

    \(\mathcal{G}^{\prime}(t, s)\) has a jump discontinuity of unit size at \(t=s\).

We now express the jump condition in terms of \(x_{1}\) and \(x_{2}\). To do this, we first compute the value of the jump by integrating (2.7) from \(s^{-}\) to \(s^{+}\):

$$\begin{aligned} \int _{s^{-}}^{s^{+}}\bigl[\mathcal{G}^{\prime \prime}(t,s)+y(t) \mathcal{G}^{\prime}(t,s)+z(t)\mathcal{G}(t,s)\bigr]\,dt= \int _{s^{-}}^{s^{+}} \delta (t-s)\,dt. \end{aligned}$$

It follows that

$$\begin{aligned} \int _{s^{-}}^{s^{+}}\bigl[\mathcal{G}^{\prime \prime}(t,s) \bigr]\,dt+ \int _{s^{-}}^{s^{+}}y(t)\mathcal{G}^{\prime}(t,s) \,dt+ \int _{s^{-}}^{s^{+}}z(t) \mathcal{G}(t,s)\,dt= \int _{s^{-}}^{s^{+}}\delta (t-s)\,dt. \end{aligned}$$
(2.16)

Since \(\mathcal{G}\) is continuous and \(\mathcal{G}^{\prime}\) has only a jump discontinuity, the following equations hold:

$$\begin{aligned} \int _{s^{-}}^{s^{+}}z(t)\mathcal{G}(t,s)\,dt=0 \quad\text{and}\quad \int _{s^{-}}^{s^{+}}y(t) \mathcal{G}^{\prime}(t,s) \,dt=0. \end{aligned}$$
(2.17)

Hence, from (2.16) and (2.17), we have

$$\begin{aligned} \int _{s^{-}}^{s^{+}}\bigl[\mathcal{G}^{\prime \prime}(t,s) \bigr]\,dt= \int _{s^{-}}^{s^{+}} \delta (t-s)\,dt. \end{aligned}$$
(2.18)

Evaluating the integrals in (2.18) gives the value of the required jump:

$$\begin{aligned} \bigl[\mathcal{G}^{\prime}(t,s)\bigr]_{s^{-}}^{s^{+}}= \bigl[\mathcal{H}(t-s)\bigr]_{s^{-}}^{s^{+}} \quad\text{and}\quad \mathcal{G}^{\prime}\bigl(s^{+},s\bigr)-\mathcal{G}^{\prime} \bigl(s^{-},s\bigr)=1, \end{aligned}$$
(2.19)

where \(\mathcal{H}\) is the well-known Heaviside function; more details can be found in the book [18]. Notice that the jump condition can be written as follows:

$$\begin{aligned} c_{1}x^{\prime}_{1}(s)+c_{2}x^{\prime}_{2}(s)-d_{1}x^{\prime}_{1}(s)-d_{2}x^{ \prime}_{2}(s)=1, \end{aligned}$$
(2.20)

where the constants \(c_{i},d_{i}\) (\(i=1,2\)) can be found by solving (2.15), (2.20), and the BCs given in (2.8).

2.2 Green’s functions and novel fixed-point scheme

Now we derive our desired new iterative scheme. To do this, we embed a Green’s function into an integral operator and then apply this operator on the new iterative scheme (1.4). For this purpose, we set

$$\begin{aligned} \mathcal{K}[x]=x_{h}+ \int _{a}^{b}\mathcal{G}(t,s)\bigl[x^{\prime \prime}(s)+y(s)x^{ \prime}(s)+z(s)x(s) \bigr]\,ds. \end{aligned}$$
(2.21)

Now if we add and subtract \(q (s, x,x^{\prime},x^{\prime \prime})\) within the integrand, then one has

$$\begin{aligned} \mathcal{K}[x]={}&x_{h}+ \int _{a}^{b}\mathcal{G}(t,s)\bigl[x^{\prime \prime}(s)+y(s)x^{ \prime}(s)+z(s)x(s)-q \bigl(s, x,x^{\prime},x^{\prime \prime}\bigr)\bigr]\,ds \\ &{}+ \int _{a}^{b}\mathcal{G}(t,s)q \bigl(s, x,x^{\prime},x^{\prime \prime}\bigr)\,ds. \end{aligned}$$
(2.22)

Now suppose that q is either a function of t only or that it satisfies (2.10); then the last integral in (2.22) is the same as that in (2.4) and so can be replaced by \(x_{p}\). Since \(x = x_{h} + x_{p}\), (2.22) becomes:

$$\begin{aligned} \mathcal{K}[x]=x+ \int _{a}^{b}\mathcal{G}(t,s)\bigl[x^{\prime \prime}+y(s)x^{\prime}+z(s)x-q \bigl(s, x,x^{\prime},x^{\prime \prime}\bigr)\bigr]\,ds. \end{aligned}$$
(2.23)

Applying the new fixed-point iterative scheme (1.4) to the latter integral operator and simplifying, we obtain our new Green’s iterative scheme:

$$\begin{aligned} \textstyle\begin{cases} y_{k}=(1-\beta _{k})x_{k}+\beta _{k}\mathcal{K}[x_{k}], \\ x_{k+1}=(1-\alpha _{k})y_{k}+\alpha _{k}\mathcal{K}[y_{k}]. \end{cases}\displaystyle \end{aligned}$$
(2.24)

It follows that

$$\begin{aligned} \textstyle\begin{cases} y_{k}=x_{k}+\beta _{k} \{\int _{a}^{b}\mathcal{G}(t,s)[x_{k}^{ \prime \prime}(s)+y(s)x_{k}^{\prime}(s)+z(s)x_{k}(s)-q(s, x_{k},x_{k}^{\prime},x_{k}^{\prime \prime})]\,ds \}, \\ x_{k+1}=y_{k}+\alpha _{k} \{\int _{a}^{b}\mathcal{G}(t,s)[y_{k}^{ \prime \prime}(s)+y(s)y_{k}^{\prime}(s)+z(s)y_{k}(s)-q (s, y_{k},y_{k}^{\prime},y_{k}^{ \prime \prime})]\,ds \}. \end{cases}\displaystyle \end{aligned}$$
(2.25)

Finally, we derive calculation for the starting iterate \(x_{0}\). To do this, we use a property of the Green’s function in (2.8) to get

$$\begin{aligned} \mathcal{B}_{i} \biggl[ \int _{a}^{b}\mathcal{G}(t,s)\bigl[x^{\prime \prime}_{k}+y(s)x^{ \prime}_{k}+z(s)x_{k}-q \bigl(s,x_{k},x^{\prime}_{k},x^{\prime \prime}_{k} \bigr) \bigr]\,ds \biggr]=0, \end{aligned}$$
(2.26)

where \(i=1,2\). It now follows that the starting iterate \(x_{0}\) should be selected so that it solves \(\mathcal{L}[x]=0\) with BCs \(\mathcal{B}_{1}[x] = \beta \) and \(\mathcal{B}_{2}[x] = \gamma \).

The analysis that led to the iterative scheme (2.25) is restricted to the case when q is a function of t only or satisfies (2.10). However, if q depends on x, then \(x_{p}\) satisfies the following:

$$\begin{aligned} x^{\prime \prime}_{p}+y(s)x^{\prime}_{p}+z(s)x_{p}-q \bigl(s,x_{p},x^{ \prime}_{p},x^{\prime \prime}_{p} \bigr)=0, \end{aligned}$$
(2.27)

so

$$\begin{aligned} x_{p}(t)= \int _{a}^{b}\mathcal{G}(t,s)q\bigl(s,x_{p},x^{\prime}_{p},x^{ \prime \prime}_{p} \bigr)\,ds. \end{aligned}$$
(2.28)

Hence the integral \(\int _{a}^{b}\mathcal{G}(t,s)q(s,x,x^{\prime},x^{\prime \prime})\,ds\) in (2.22) cannot be replaced with \(x_{p}\). Thus, we must redefine the operator in (2.21) as follows:

$$\begin{aligned} Tx= \int _{a}^{b}\mathcal{G}(t,s)\bigl[x^{\prime \prime}_{p}+y(s)x^{\prime}_{p}+z(s)x_{p} \bigr]\,ds. \end{aligned}$$
(2.29)

Similar calculations are now made as for (2.21), but the term \(q(s,x_{p},x^{\prime}_{p},x^{\prime \prime}_{p})\) is subtracted from and added to the integral in (2.29); one has

$$\begin{aligned} Tx=x_{p}+ \int _{a}^{b}\mathcal{G}(t,s)\bigl[x^{\prime \prime}_{p}+y(s)x^{ \prime}_{p}+z(s)x_{p}-q \bigl(s,x_{p},x^{\prime}_{p},x^{\prime \prime}_{p} \bigr)\bigr]\,ds. \end{aligned}$$
(2.30)

Now using the operator T in (2.30), we apply our new fixed-point scheme (1.4).

3 Convergence result

We now prove our main convergence result. We use our proposed scheme (2.25), under some mild conditions, to obtain the approximate solution of Bratu’s problem (1.1)–(1.2). For this purpose, we first construct a Green’s function for the linear part \(x^{\prime \prime}=0\) of (1.1). Two linearly independent solutions of \(x^{\prime \prime}=0\) are \(x_{1}(t)=1\) and \(x_{2}(t)=t\). Hence, by (2.13), the Green’s function takes the following form:

$$\begin{aligned} \mathcal{G}(t,s)= \textstyle\begin{cases} c_{1}+c_{2}t& \text{when } 0< t< s, \\ d_{1}+d_{2}t& \text{when } s< t< 1, \end{cases}\displaystyle \end{aligned}$$
(3.1)

where \(c_{i}\) and \(d_{i}\) (\(i=1,2\)) are unknowns to be determined using the basic axioms of Green’s functions. Using the homogeneous BCs, we get

$$\begin{aligned} c_{1}=0, \qquad d_{1}+d_{2}=0. \end{aligned}$$
(3.2)

Moreover, the continuity of \(\mathcal{G}\) and the unit jump of \(\mathcal{G}^{\prime}\) at \(t=s\) give the following equations:

$$\begin{aligned} c_{1}+c_{2}s=d_{1}+d_{2}s,\qquad c_{2}-d_{2}=1. \end{aligned}$$
(3.3)

Now, solving (3.2) and (3.3), (3.1) becomes:

$$\begin{aligned} \mathcal{G}(t,s)= \textstyle\begin{cases} t(1-s)& \text{when } 0\leq t\leq s, \\ s(1-t)& \text{when } s\leq t\leq 1. \end{cases}\displaystyle \end{aligned}$$
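In closed form, this Dirichlet Green’s function is \(\mathcal{G}(t,s)=\min (t,s) (1-\max (t,s) )\). A short Python check (our sketch; the sample point \(s=0.3\) and the finite-difference step are arbitrary choices) verifies the BCs (2.8), the continuity axiom (a2), and the unit-size derivative jump (a3):

```python
# Closed form covering both branches of the Green's function at once.
def G(t, s):
    return min(t, s) * (1.0 - max(t, s))

s = 0.3
eps = 1e-7

# BCs (2.8): G vanishes at both endpoints for every s in (0, 1)
assert abs(G(0.0, s)) < 1e-12 and abs(G(1.0, s)) < 1e-12

# axiom (a2): continuity across t = s
assert abs(G(s - eps, s) - G(s + eps, s)) < 1e-6

# axiom (a3): G'(t, s) has a jump of unit size at t = s
slope_right = (G(s + 2 * eps, s) - G(s + eps, s)) / eps   # = -s
slope_left = (G(s - eps, s) - G(s - 2 * eps, s)) / eps    # = 1 - s
assert abs(abs(slope_right - slope_left) - 1.0) < 1e-5
print("BCs, continuity, and unit jump all verified")
```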

Now using the Green’s function above, we set the operator \(T_{\mathcal{G}}:C[0,1]\rightarrow C[0,1]\) by

$$\begin{aligned} T_{\mathcal{G}}x=x+ \int _{0}^{1}\mathcal{G}(t,s) \bigl(x^{\prime \prime}(s)+\lambda e^{x(s)}\bigr)\,ds. \end{aligned}$$
(3.4)

Hence, the iterative scheme (2.25) becomes:

$$\begin{aligned} \textstyle\begin{cases} x_{0}\in X, \\ y_{k}=(1-\beta _{k})x_{k}+\beta _{k}T_{\mathcal{G}}x_{k}, \\ x_{k+1}=(1-\alpha _{k})y_{k}+\alpha _{k}T_{\mathcal{G}}y_{k},\quad (k=0,1,2,3,\ldots), \end{cases}\displaystyle \end{aligned}$$
(3.5)

where \(\alpha _{k},\beta _{k}\in (0,1)\).
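Since \(\int _{0}^{1}\mathcal{G}(t,s)x^{\prime \prime}(s)\,ds=-x(t)\) whenever \(x(0)=x(1)=0\) (integrate by parts twice), the operator (3.4) reduces on such functions to \(T_{\mathcal{G}}x=\lambda \int _{0}^{1}\mathcal{G}(t,s)e^{x(s)}\,ds\), which avoids differentiating the iterates. The following Python sketch discretizes (3.5) using this reduction; the grid size, trapezoidal quadrature, and iteration count are our implementation choices, not the paper’s:

```python
import math

lam = 1.0
n = 100                          # number of subintervals (our choice)
h = 1.0 / n
t = [i * h for i in range(n + 1)]

def G(a, b):
    return min(a, b) * (1.0 - max(a, b))

def TG(x):
    # T_G x(t_i) = lam * int_0^1 G(t_i, s) exp(x(s)) ds  (trapezoidal rule)
    ex = [math.exp(v) for v in x]
    out = []
    for i in range(n + 1):
        w = [G(t[i], t[j]) * ex[j] for j in range(n + 1)]
        out.append(lam * h * (0.5 * w[0] + sum(w[1:n]) + 0.5 * w[n]))
    return out

alpha = beta = 0.9               # the parameter values used in Sect. 5
x = [0.0] * (n + 1)              # x_0 = 0 solves x'' = 0 with the BCs (1.2)
for _ in range(30):
    Tx = TG(x)
    y = [(1 - beta) * xi + beta * ti for xi, ti in zip(x, Tx)]
    Ty = TG(y)
    x = [(1 - alpha) * yi + alpha * ti for yi, ti in zip(y, Ty)]

print(x[n // 2])                 # close to the exact x(0.5) ~ 0.1405
```

For \(\lambda =1\) the computed midpoint value agrees with the exact \(x(0.5)=2\ln \cosh (\frac{\theta}{4})\approx 0.1405\) to several digits.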

The iterative scheme (3.5) is the desired new iterative scheme. The main convergence result of the paper is now ready to be established.

Theorem 3.1

Consider a Banach space \(X=C[0,1]\) with the supremum norm. Let \(T_{\mathcal{G}}:X\rightarrow X\) be the operator defined in (3.4) and \(\{x_{k}\}\) be the sequence produced by (3.5). Assume that the following conditions hold:

  1. (i)

    \(\frac{\lambda ^{2}L_{c}}{8}<1\), where \(L_{c}=\max_{t\in [0,1]}|e^{x(t)}|\).

  2. (ii)

    Either \(\sum \alpha _{k}=\infty \) or \(\sum \beta _{k}=\infty \).

Then \(\{x_{k}\}\) converges to the unique fixed point of \(T_{\mathcal{G}}\) and hence to the unique sought solution of Problem (1.1)–(1.2).

Proof

Put \(\nu =\frac{\lambda ^{2}L_{c}}{8}\); then it follows from condition (i) that \(T_{\mathcal{G}}\) is a ν-contraction. Thanks to the BCP, \(T_{\mathcal{G}}\) has a unique fixed point \(x^{\ast}\) in \(X=C[0,1]\), which is the unique solution of the given problem (1.1)–(1.2).

Using condition (ii), we now prove that the sequence generated by our new iterative scheme converges strongly to \(x^{\ast}\). First, we consider the case \(\sum \alpha _{k}=\infty \). We have

$$\begin{aligned} \bigl\Vert y_{k} - x^{\ast} \bigr\Vert &= \bigl\Vert (1-\beta _{k})x_{k} + \beta _{k}T_{ \mathcal{G}}x_{k} -x^{\ast} \bigr\Vert \\ &= \bigl\Vert (1-\beta _{k}) \bigl(x_{k}-x^{\ast} \bigr)+ \beta _{k}\bigl(T_{\mathcal{G}}x_{k} -x^{ \ast}\bigr) \bigr\Vert \\ &\leq (1-\beta _{k}) \bigl\Vert x_{k}-x^{\ast} \bigr\Vert + \beta _{k} \bigl\Vert T_{\mathcal{G}}x_{k} -x^{\ast} \bigr\Vert \\ &\leq (1-\beta _{k}) \bigl\Vert x_{k}-x^{\ast} \bigr\Vert + \beta _{k}\nu \bigl\Vert x_{k} -x^{ \ast} \bigr\Vert \\ &= \bigl[1-\beta _{k}(1-\nu )\bigr] \bigl\Vert x_{k} -x^{\ast} \bigr\Vert . \end{aligned}$$

Now using \(\Vert y_{k} - x^{\ast}\Vert \leq [1-\beta _{k}(1-\nu )]\Vert x_{k} -x^{\ast}\Vert \) and noting that \([1-\beta _{k}(1-\nu )]\leq 1\), we have

$$\begin{aligned} \bigl\Vert x_{k+1}-x^{\ast} \bigr\Vert &= \bigl\Vert (1- \alpha _{k})y_{k} + \alpha _{k}T_{ \mathcal{G}}y_{k} -x^{\ast} \bigr\Vert \\ &= \bigl\Vert (1-\alpha _{k}) \bigl(y_{k}-x^{\ast} \bigr)+ \alpha _{k}\bigl(T_{\mathcal{G}}y_{k} -x^{\ast}\bigr) \bigr\Vert \\ &\leq (1-\alpha _{k}) \bigl\Vert y_{k}-x^{\ast} \bigr\Vert + \alpha _{k} \bigl\Vert T_{ \mathcal{G}}y_{k} -x^{\ast} \bigr\Vert \\ &\leq (1-\alpha _{k}) \bigl\Vert y_{k}-x^{\ast} \bigr\Vert + \alpha _{k}\nu \bigl\Vert y_{k} -x^{ \ast} \bigr\Vert \\ &=\bigl[1-\alpha _{k}(1-\nu )\bigr] \bigl\Vert y_{k} -x^{\ast} \bigr\Vert \\ &\leq \bigl[1-\alpha _{k}(1-\nu )\bigr] \bigl[1-\beta _{k}(1-\nu )\bigr] \bigl\Vert x_{k} -x^{\ast} \bigr\Vert \\ &\leq \bigl[1-\alpha _{k}(1-\nu )\bigr] \bigl\Vert x_{k} -x^{\ast} \bigr\Vert . \end{aligned}$$

Accordingly, we get

$$\begin{aligned} \bigl\Vert x_{k+1}-x^{\ast} \bigr\Vert &\leq \bigl[1-\alpha _{k}(1-\nu )\bigr] \bigl\Vert x_{k} -x^{\ast} \bigr\Vert \\ &\leq \bigl[1-\alpha _{k}(1-\nu )\bigr] \bigl[1-\alpha _{k-1}(1-\nu )\bigr] \bigl\Vert x_{k-1} -x^{ \ast} \bigr\Vert \\ &\leq \bigl[1-\alpha _{k}(1-\nu )\bigr] \bigl[1-\alpha _{k-1}(1-\nu )\bigr] \bigl[1-\alpha _{k-2}(1- \nu )\bigr] \bigl\Vert x_{k-2} -x^{\ast} \bigr\Vert . \end{aligned}$$

Inductively, we obtain

$$\begin{aligned} \bigl\Vert x_{k+1}-x^{\ast} \bigr\Vert \leq \prod _{m=0}^{k}\bigl[1-\alpha _{m}(1-\nu )\bigr] \bigl\Vert x_{0} -x^{ \ast} \bigr\Vert . \end{aligned}$$
(3.6)

It is well known from classical analysis that \(1-x\leq e^{-x}\) for all \(x\in [0,1]\). Taking this fact into account together with (3.6), we get

$$\begin{aligned} \lim_{k\rightarrow\infty}\prod _{m=0}^{k}\bigl[1-\alpha _{m}(1-\nu )\bigr] \bigl\Vert x_{0} -x^{ \ast} \bigr\Vert =0, \end{aligned}$$
(3.7)

because \(\sum_{k=0}^{\infty}\alpha_{k}=\infty \) and \(\nu \in (0,1)\). Hence it follows from (3.6) and (3.7) that

$$\begin{aligned} \lim_{k\rightarrow \infty} \bigl\Vert x_{k+1}-x^{\ast} \bigr\Vert =0. \end{aligned}$$

Accordingly, \(\{x_{k}\}\) converges to the unique fixed point \(x^{\ast}\) of \(T_{\mathcal{G}}\), which is the unique solution of Problem (1.1)–(1.2). The case \(\sum \beta _{k}=\infty \) is similar and hence omitted. □
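The final step of the proof rests on the elementary bound \(1-x\leq e^{-x}\): since \(\sum \alpha _{k}=\infty \) and \(\nu \in (0,1)\), the product in (3.6) is squeezed to zero. A small numerical illustration (the choices \(\alpha _{k}=1/(k+1)\) and \(\nu =0.9\) are ours):

```python
import math

nu = 0.9                         # an illustrative contraction constant in (0, 1)
prod = 1.0
partial_sum = 0.0
for k in range(100000):
    a_k = 1.0 / (k + 1)          # alpha_k with a divergent sum (harmonic series)
    prod *= 1.0 - a_k * (1.0 - nu)
    partial_sum += a_k

# factor by factor, 1 - x <= exp(-x) gives prod <= exp(-(1 - nu) * sum(alpha_k)),
# and the right-hand side tends to 0 because the partial sums diverge
bound = math.exp(-(1.0 - nu) * partial_sum)
print(prod, bound)
```

The product stays below the exponential bound at every truncation, and both tend to 0 as the number of factors grows.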

4 Weak \(w^{2}\)-stability result

In some cases, a numerical scheme may not be stable when we implement it for a certain operator to find the sought solution of a given problem (see, e.g., [19–21] and others). A numerical iterative scheme is said to be stable if the errors arising between any two successive iteration steps do not disturb the convergence of the scheme towards the sought solution. The concept of stability for an iterative scheme has its roots in the work of Urabe [22]. Later, Harder and Hicks [23] suggested the formal notion of stability. Here we recall some of their definitions and concepts, which will be used in the sequel.

Definition 4.1

[23] Suppose T is a selfmap on a Banach space X and \(\{x_{k}\}\) is a sequence of iterates of T generated by

$$\begin{aligned} \textstyle\begin{cases} x_{0}\in X, \\ x_{k+1}=h(T,x_{k}), \end{cases}\displaystyle \end{aligned}$$
(4.1)

where \(x_{0}\) denotes a starting value and h is a function. If \(\{x_{k}\}\) converges to a point \(x^{\ast}\in F_{T}\), then \(\{x_{k}\}\) is called stable if for every sequence \(\{u_{k}\}\) in X, one has the following:

$$\begin{aligned} \lim_{k\rightarrow \infty} \bigl\Vert u_{k+1}-h(T,u_{k}) \bigr\Vert =0\Rightarrow \lim_{k \rightarrow \infty}u_{k}=x^{\ast}. \end{aligned}$$

Definition 4.2

[24] Two sequences \(\{u_{k}\}\) and \(\{x_{k}\}\) in a Banach space are said to be equivalent if the following property holds:

$$\begin{aligned} \lim_{k\rightarrow \infty} \Vert u_{k}-x_{k} \Vert =0. \end{aligned}$$

In [25], Timis suggested a natural notion of stability, which he named weak \(w^{2}\)-stability. He used the concept of equivalent sequences as opposed to the concept of arbitrary sequences in Definition 4.1.

Definition 4.3

[25] Suppose T is a selfmap on a Banach space X and \(\{x_{k}\}\) is a sequence of iterates of T generated by (4.1). If \(\{x_{k}\}\) converges to a point \(x^{\ast}\in F_{T}\), then \(\{x_{k}\}\) is called weakly \(w^{2}\)-stable if for every sequence \(\{u_{k}\}\subseteq X\) equivalent to \(\{x_{k}\}\), one has the following:

$$\begin{aligned} \lim_{k\rightarrow \infty} \bigl\Vert u_{k+1}-h(T,u_{k}) \bigr\Vert =0\Rightarrow \lim_{k \rightarrow \infty}u_{k}=x^{\ast}. \end{aligned}$$
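To see Definition 4.3 at work on a scheme of type (1.4), here is a toy construction of ours: \(Tx=x/2\) on the real line (unique fixed point 0), a perturbed sequence \(u_{k}=x_{k}+1/(k+1)\) equivalent to \(\{x_{k}\}\), and a check that \(\epsilon _{k}\) and \(u_{k}\) both decay:

```python
def T(x):
    return 0.5 * x               # a contraction on the real line; fixed point 0

alpha = beta = 0.5               # illustrative constant parameters

def h(x):
    # one full step of the scheme (1.4)
    y = (1 - beta) * x + beta * T(x)
    return (1 - alpha) * y + alpha * T(y)

xs = [1.0]                       # reference sequence generated by the scheme
for _ in range(60):
    xs.append(h(xs[-1]))

# an equivalent sequence: u_k = x_k + 1/(k+1), so |u_k - x_k| -> 0
us = [xk + 1.0 / (k + 1) for k, xk in enumerate(xs)]

eps = [abs(us[k + 1] - h(us[k])) for k in range(60)]
print(eps[-1], us[-1])           # both decay toward 0, as predicted
```

Here \(\epsilon _{k}\rightarrow 0\) and \(u_{k}\rightarrow 0=x^{\ast}\), which is exactly the implication required by weak \(w^{2}\)-stability.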

Finally, we are in a position to obtain the weak \(w^{2}\)-stability of our proposed scheme (3.5).

Theorem 4.4

Let X, \(T_{\mathcal{G}}\), and \(\{x_{k}\}\) be as given in Theorem 3.1. Then \(\{x_{k}\}\) is weakly \(w^{2}\)-stable with respect to \(T_{\mathcal{G}}\).

Proof

To complete the proof, we consider any sequence \(\{u_{k}\}\) equivalent to \(\{x_{k}\}\), that is, satisfying \(\lim_{k\rightarrow \infty}\Vert u_{k}-x_{k}\Vert =0\). Put

$$\begin{aligned} \epsilon _{k}= \bigl\Vert u_{k+1}-\bigl[(1-\alpha _{k})v_{k}+\alpha_{k}T_{\mathcal{G}}v_{k}\bigr] \bigr\Vert , \end{aligned}$$

where \(v_{k}=(1-\beta _{k})u_{k}+\beta _{k}T_{\mathcal{G}}u_{k}\).

Assume that \(\lim_{k\rightarrow \infty}\epsilon _{k}=0\). We need to prove that \(\lim_{k\rightarrow \infty}\Vert u_{k}-x^{\ast}\Vert =0\). For this, we have

$$\begin{aligned} \Vert v_{k}-y_{k} \Vert &=\bigl\Vert \bigl[(1- \beta _{k})u_{k}+\beta _{k}T_{\mathcal{G}}u_{k} \bigr]-\bigl[(1- \beta _{k})x_{k}+\beta _{k}T_{\mathcal{G}}x_{k} \bigr] \bigr\Vert \\ &=\bigl\Vert (1-\beta _{k}) (u_{k}-x_{k})+ \beta _{k}(T_{\mathcal{G}}u_{k}-T_{ \mathcal{G}}x_{k}) \bigr\Vert \\ &\leq (1-\beta _{k}) \Vert u_{k}-x_{k} \Vert +\beta _{k} \Vert T_{\mathcal{G}}u_{k}-T_{ \mathcal{G}}x_{k} \Vert \\ &\leq (1-\beta _{k}) \Vert u_{k}-x_{k} \Vert +\beta _{k}\nu \Vert u_{k}-x_{k} \Vert \\ &= \bigl[1-\beta _{k}(1-\nu )\bigr] \Vert u_{k}-x_{k} \Vert . \end{aligned}$$

Hence

$$\begin{aligned} &\bigl\Vert u_{k+1}-x^{\ast} \bigr\Vert \\ &\quad\leq \Vert u_{k+1}-x_{k+1} \Vert + \bigl\Vert x_{k+1}-x^{\ast} \bigr\Vert \\ &\quad\leq \bigl\Vert u_{k+1}-\bigl[(1-\alpha _{k})v_{k}+\alpha _{k}T_{\mathcal{G}}v_{k} \bigr] \bigr\Vert + \bigl\Vert \bigl[(1- \alpha _{k})v_{k}+\alpha _{k}T_{\mathcal{G}}v_{k} \bigr]-x_{k+1} \bigr\Vert \\ &\qquad{}+ \bigl\Vert x_{k+1}-x^{\ast} \bigr\Vert \\ &\quad=\epsilon _{k}+ \bigl\Vert \bigl[(1-\alpha _{k})v_{k}+\alpha _{k}T_{\mathcal{G}}v_{k} \bigr]-\bigl[(1- \alpha _{k})y_{k}+\alpha _{k}T_{\mathcal{G}}y_{k} \bigr] \bigr\Vert + \bigl\Vert x_{k+1}-x^{\ast} \bigr\Vert \\ &\quad=\epsilon _{k}+ \bigl\Vert (1-\alpha _{k}) (v_{k}-y_{k})+\alpha _{k}(T_{ \mathcal{G}}v_{k}-T_{\mathcal{G}}y_{k}) \bigr\Vert + \bigl\Vert x_{k+1}-x^{\ast} \bigr\Vert \\ &\quad\leq \epsilon _{k}+(1-\alpha _{k}) \Vert v_{k}-y_{k} \Vert +\alpha _{k} \Vert T_{ \mathcal{G}}v_{k}-T_{\mathcal{G}}y_{k} \Vert + \bigl\Vert x_{k+1}-x^{\ast} \bigr\Vert \\ &\quad\leq \epsilon _{k}+(1-\alpha _{k}) \Vert v_{k}-y_{k} \Vert +\alpha _{k}\nu \Vert v_{k}-y_{k} \Vert + \bigl\Vert x_{k+1}-x^{ \ast} \bigr\Vert \\ &\quad=\epsilon _{k}+\bigl[1-\alpha _{k}(1-\nu )\bigr] \Vert v_{k}-y_{k} \Vert + \bigl\Vert x_{k+1}-x^{ \ast} \bigr\Vert \\ &\quad\leq \epsilon _{k}+\bigl[1-\alpha _{k}(1-\nu )\bigr] \bigl[1-\beta _{k}(1-\nu )\bigr] \Vert u_{k}-x_{k} \Vert + \bigl\Vert x_{k+1}-x^{ \ast} \bigr\Vert \\ &\quad\leq \epsilon _{k}+\bigl[1-\alpha _{k}(1-\nu )\bigr] \Vert u_{k}-x_{k} \Vert + \bigl\Vert x_{k+1}-x^{ \ast} \bigr\Vert . \end{aligned}$$

Subsequently, we obtain

$$\begin{aligned} \bigl\Vert u_{k+1}-x^{\ast} \bigr\Vert \leq \epsilon _{k}+\bigl[1-\alpha _{k}(1-\nu )\bigr] \Vert u_{k}-x_{k} \Vert + \bigl\Vert x_{k+1}-x^{ \ast} \bigr\Vert . \end{aligned}$$
(4.2)

Since \(\lim_{k\rightarrow \infty}\epsilon _{k}=0\) by assumption, \(\lim_{k\rightarrow \infty}\Vert u_{k}-x_{k}\Vert =0\) because \(\{u_{k}\}\) is equivalent to \(\{x_{k}\}\), and \(\lim_{k\rightarrow \infty}\Vert x_{k}-x^{\ast}\Vert =0\) because \(\{x_{k}\}\) converges to \(x^{\ast}\), it follows from (4.2) that \(\lim_{k\rightarrow \infty}\Vert u_{k}-x^{\ast}\Vert =0\). This means that the sequence \(\{x_{k}\}\) generated by the iterative scheme (3.5) is weakly \(w^{2}\)-stable with respect to \(T_{\mathcal{G}}\). □

5 Numerical computations

We now choose different values of λ and apply the Mann–Green’s, Ishikawa–Green’s, and our new iterative schemes to Bratu’s problem (1.1)–(1.2). First, we take \(\alpha _{k}=\beta _{k}=0.9\) and \(x_{0}=0\), which satisfies the equation \(x^{\prime \prime}=0\) and the BCs (1.2). The results are displayed in Tables 1 and 2. We use \(\Vert x_{k}-x^{\ast}\Vert <10^{-6}\) as our stopping criterion, where \(x^{\ast}\) is the sought solution. These results confirm the convergence of all schemes towards the sought solution. Moreover, it is easy to observe that our new scheme is faster than the other two schemes. Graphical comparisons are shown in Figs. 1–4.

Figure 1
figure 1

Graphical comparison for \(\lambda =1\) and \(t=0.1\)

Figure 2
figure 2

Graphical comparison for \(\lambda =1\) and \(t=0.1\)

Figure 3
figure 3

Graphical comparison for \(\lambda =2\) and \(t=0.5\)

Figure 4
figure 4

Graphical comparison for \(\lambda =2\) and \(t=0.5\)

Table 1 Comparison of various iterations for \(\lambda =1\) and \(\alpha _{k}=\beta _{k}=0.9\) for \(k=0\) to 10
Table 2 Comparison of various iterations for \(\lambda =2\) and \(\alpha _{k}=\beta _{k}=0.9\) for \(k=0\) to 10

Finally, we compare our results with some other numerical methods from the literature in Table 3: the DM scheme [7], LDM scheme [26], B-spline scheme [27], LGSM [28], spline approach [29], DTM scheme [30], GVM scheme [31], and CW scheme [32]. Clearly, in all cases, our new approach is highly accurate.

Table 3 Influence of parameters: comparison of various iteration processes

Now we list the following observations.

  • From Tables 1–2, we see that the absolute error of our new iterative scheme is much smaller than that of the Mann–Green’s and Ishikawa–Green’s iterative schemes. This means that our new iterative scheme approaches the solution faster.

  • From Figs. 1–4, we see that for small values of the parameters \(\alpha _{k}\) and \(\beta _{k}\), the rates of convergence of the Mann–Green’s and Ishikawa–Green’s iterative schemes are almost the same. However, our new iterative scheme approaches the solution quickly for all values of \(\alpha _{k}\) and \(\beta _{k}\).

6 Conclusion

We constructed a new, highly accurate numerical scheme based on Green’s functions for the numerical solution of Bratu’s BVPs in a Banach space framework. We established a convergence result under suitable conditions on the mapping and parameters in our scheme. We proved that our new scheme is weakly \(w^{2}\)-stable in this setting. We provided numerical simulations to support our claims and results. It has been shown that our new scheme is very accurate and can be used effectively for all values of the parameters involved.