1 Introduction

In this paper we discuss the variational iteration method as a tool for constructing approximate formulas for positive, bounded and decreasing solutions to the following class of nonlinear ordinary differential equations

$$\begin{aligned} -\left( a(t)u^{\prime }(t)e^{At}\right) ^{\prime }=e^{At}f(t,u(t))\text { for a.e. } \ t\in \left( 0,1\right) \end{aligned}$$
(1)

with the boundary conditions

$$\begin{aligned} u^{\prime }(0)=0\text { and }u^{\prime }(1)+Au(1)=0, \end{aligned}$$
(2)

where f is a Caratheodory function, \(A>0\) and \(a\in C^{1}([0,1])\). It is clear that (1) can be rewritten as follows

$$\begin{aligned} (a(t)u^{\prime }(t))^{\prime }+Aa(t)u^{\prime }(t)+f(t,u(t))=0,\text { for }a.e.\ t\in \left( 0,1\right) . \end{aligned}$$
(3)

For \(a\equiv const\) and a nonlinearity independent of t, (1), together with (2), gives the prototype problem of Markus and Amundson

$$\begin{aligned} \left\{ \begin{array}{c} u^{\prime \prime }(t)+Au^{\prime }(t)+Bf(u(t))=0,\text { for all }t\in \left( 0,1\right) ,\\ u^{\prime }(0)=0,\\ u^{\prime }(1)+Au(1)=0, \end{array} \right. \end{aligned}$$
(4)

where u describes the dimensionless temperature, f represents the rates of chemical production of the species and the constants A, B depend on the properties of the chemical reactor (see e.g. [2, 4, 5] and references therein). Then, on the basis of the solution u and the stoichiometric coefficients of the species that take part in the reaction, one can determine their concentration. Even for \(a\equiv 1\) and \(A=0\) our equation covers a few important models arising in real-life phenomena. If, for example, we consider the exponential function f, we obtain the well-known Bratu equation, which models the instability of the moving jet in the electrospinning process (see e.g. [6] and [11]). Various boundary value problems associated with equations of that type are also investigated in the recent papers [1, 2] (and the references therein) with the help of VIM and its modifications. It is also worth emphasizing that (1) with a polynomial nonlinearity f gives the cubic-quintic Duffing oscillator equation

$$\begin{aligned} u^{\prime \prime }(t)+u(t)+u^{3}(t)+u^{5}(t)=0. \end{aligned}$$
(5)

This equation, together with certain initial conditions, is widely discussed by Chowdhury et al. in [3] and [12]. Moreover, [3] investigates an equation generalizing (5), i.e.

$$\begin{aligned} u^{\prime \prime }(t)+\omega ^{2}u(t)=\varepsilon f(u(t)), \end{aligned}$$

where \(\varepsilon \) is a certain constant and \(\omega \) stands for the angular frequency. Assuming that the nonlinearity f is odd (i.e. \(f(-u)=-f(u)\)) and using the approach based on the harmonic balance method (HBM), the authors obtain a higher-order approximation of the solution. In [1] a similar initial value problem for the general nonlinear oscillator is considered. The authors calculate the first-order numerical solution by applying the Laplace transform together with the variational iteration method.

Furthermore, we have to mention the following nonlinear oscillator with coordinate-dependent mass

$$\begin{aligned} (1+\alpha u^{2}(t))u^{\prime \prime }(t)+\alpha u(t)(u^{\prime }(t))^{2} -u(t)(1-u^{2}(t))=0, \end{aligned}$$
(6)

which is an important cosmological model describing phase transitions in physics. It is discussed in [15] with the initial conditions \(u(0)=A\), \(u'(0)=0\). On the other hand, [13] investigates (6) using the homotopy perturbation method, which gives the existence of periodic solutions to that problem. The authors underline the difficulty of that approach for this problem, as the coefficient of the linear term in (6) is negative.

Last but not least, we would like to mention that in [23] our problem (1)–(2) is discussed with the nonlinearity \(f(t,u)=\frac{nu}{u+k}\), where \(n=0.76129\) is the Michaelis constant and \(k=0.03119\) is associated with the reaction rate (cf. e.g. [16]). Similar BVPs are also widely discussed in the literature (see, among others, [7, 17,18,19] and references therein). It is worth emphasizing that, although the nonlinearities in our problem have a less general form than those considered in the aforementioned papers, we cannot apply those results, since we consider a nonlinearity which can be singular with respect to the first variable and we do not assume any monotonicity conditions on f. The recent paper [20] contains some results for the special case \(a\equiv 1\).

We start with auxiliary theoretical results which describe sufficient conditions for the existence of positive, bounded and monotonic solutions to our problem and describe their dependence on additional functional parameters and on the coefficient a. Here we want to emphasize that we obtain this result without assuming uniqueness of solutions. The goal of this paper is numerical. Having guaranteed the existence of at least one such solution, we obtain an approximate solution by applying the variational iteration method (VIM), developed by J.H. He in 1997 (see [8]). This approach was inspired by the results presented by Inokuti et al. (see [14]), who applied a general Lagrange multiplier method to obtain the solution to a nonlinear problem arising in mathematical physics. Recently, this powerful method has been widely applied in many papers devoted to chemical and physical problems modeled by ODEs and PDEs without any restrictive assumptions (see, among others, [9,10,11,12, 22,23,24]). The VIM gives the solution as the limit of a sequence obtained by a rapidly convergent successive approximation procedure, which can lead to the exact solution if such a solution exists (see e.g. [23]). Thus the theoretical results presented in the first part of this paper can be treated as the starting point of the numerical approach. In the last section we use VIM to obtain a certain approximation of solutions and show how it works in some important cases of the nonlinearity f, namely for Duffing-type and Bratu-type equations. It is worth emphasizing that VIM gives approximate solutions with the properties (e.g. estimates, monotonicity) described in the existence theorem (Th. 2.1).

Finally, we compare the approximations obtained with help of the variational iteration method with results based on the classical fixed point iteration method. The choice of the latter method is natural because of the tools applied in the theoretical part of this paper.

It turns out that VIM is more useful when we need explicit formulas for approximate solutions. This observation is associated with the fact that in many cases this method reaches acceptable accuracy in the second or third step, for which the integral formulas given by the iterative scheme are easier to describe than for further steps. It is also worth emphasizing that these approximate solutions have the same properties as the exact solutions of the class of problems described in Theorem 2.1.

2 Auxiliary theoretical results—existence and properties of solutions

In this section we apply Schauder’s fixed point theorem to prove the existence of positive and nonincreasing solutions to the problem (1)–(2) with values bounded by a threshold positive constant. Next we describe the dependence of solutions on functional parameters. More precisely, we prove that the sequence of positive solutions \(\left\{ u_{n}\right\} _{n\in N},\) corresponding to a sequence of parameters \(\left\{ v_{n}\right\} _{n\in N}\subset L^{2}(0,1)\) and a sequence of coefficients \(\left\{ a_{n}\right\} _{n\in N} \subset C^{1}([0,1]),\) tends uniformly (up to a subsequence) in [0, 1] to \(u_{0}\), provided that \(\left\{ v_{n}\right\} _{n\in N}\) converges almost everywhere in (0, 1) to \(v_{0}\in L^{2}(0,1)\) and \(\left\{ a_{n}\right\} _{n\in N}\) converges uniformly to \(a_{0}\in C^{1}([0,1]).\) Moreover, we prove that \(u_{0}\) is a solution to (1)–(2) with \(v=v_{0}\) and \(a=a_{0}\). Our approach is based on the following assumptions:

  1. (H1)

    \(a\in C^{1}([0,1]),\) \(a_{\min }:=\underset{t\in \left[ 0,1\right] }{\min }a(t)>0.\)

  2. (H2)

    There exist numbers \(b>0,\) \(\alpha ,\beta \in \mathbb {R},\) \(\alpha <\beta ,\) such that \(f:(0,1)\times [0,b]\rightarrow [0,+\infty )\) is a Caratheodory function and for all \(t\in \left( \alpha ,\beta \right) \subset [0,1],\) \(f(t,0)>0.\)

  3. (H3)

    There exists \(\varphi \in L^{2}(0,1)\) such that for all \(t\in [0,1],\) the following conditions hold: \(\underset{u\in \left[ 0,b\right] }{\max }f(t,u)\le \varphi (t)\) and \(\int _{0}^{1}\varphi (t)dt\le \frac{Aa_{\min }}{A+1}b.\)
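For concrete data these assumptions reduce to elementary numerical checks. A minimal sketch in Python, assuming the hypothetical sample choices \(A=1\), \(a(t)=10e^{-t}\), \(f(t,u)=0.2e^{t}+0.1u^{3}\) and \(b=1\) (for this f, which is increasing in u, the majorant is \(\varphi (t)=f(t,b)\)):

```python
import numpy as np

# Hypothetical sample data (for illustration only):
A, b = 1.0, 1.0
a = lambda t: 10.0 * np.exp(-t)                  # coefficient from (H1)
f = lambda t, u: 0.2 * np.exp(t) + 0.1 * u**3    # sample Caratheodory nonlinearity

t = np.linspace(0.0, 1.0, 2001)

# (H1): a is continuous and strictly positive on [0,1]
a_min = a(t).min()
assert a_min > 0

# (H3): since f is increasing in u, phi(t) = max_{u in [0,b]} f(t,u) = f(t,b)
phi = f(t, b)
integral_phi = np.trapz(phi, t)
bound = A * a_min * b / (A + 1)
print(integral_phi <= bound)   # True: (H3) holds for these data
```

The same check applies verbatim to any nonlinearity that is monotone in u; for a non-monotone f one maximizes over a grid in u as well.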

We obtain a solution \(u_{0}\) to (1)–(2) as a fixed point of a certain integral operator T, where \(u_{0}\) belongs to the set

$$\begin{aligned} X:=\{u\in C^{1}([0,1]),0\le u(t)\le b,-b\le u^{\prime }\le 0,u^{\prime }(0)=0,u^{\prime }(1)+Au(1)=0\}. \end{aligned}$$

Theorem 2.1

If (H1), (H2) and (H3) hold then there exists a nontrivial solution \(u_0\in C^{1}([0,1])\cap W^{2,2}(0,1)\) of (1)–(2) such that for all \(t\in \left[ 0,1\right] ,\) \(0\le u_0(t)\le b\) and \(-b\le u_0^{\prime }(t)\le 0.\) Moreover, in the case when \(f(t,u)>0\) for all \(t\in (\alpha ,\beta ) \) and \(u\in [0,b]\), \(u_0\) is decreasing in \((\alpha ,1)\).

Proof

We can treat our problem as a fixed point problem for the following integral operator

$$\begin{aligned} \begin{array}{ll} Tu(t) &{} :=-\int _{0}^{t}e^{-Ar}(a(r))^{-1}\int _{0}^{r}e^{As}\overline{f}(s,u(s))dsdr\\ &{}\quad +\frac{e^{-A}}{A}(a(1))^{-1}\int _{0}^{1}e^{As}{\overline{f}}(s,u(s))ds+\int _{0}^{1}e^{-Ar}(a(r))^{-1}\int _{0}^{r}e^{As}{\overline{f}}(s,u(s))dsdr, \end{array} \end{aligned}$$
(7)

where

$$\begin{aligned} {\overline{f}}(s,z)=\left\{ \begin{array}{rrl} f(s,0)\quad \text { for } &{} z<0, &{}\ s\in (0,1)\\ f(s,z)\quad \text { for } &{} 0\le z\le b, &{}\ s\in (0,1)\\ f(s,b)\quad \text { for } &{} z>b, &{} \ s\in (0,1). \end{array} \right. \end{aligned}$$

Assumptions (H1)–(H3) allow us to infer that the operator \(T:C([0,1])\rightarrow C([0,1])\) is well-defined. Our task is now to show that T is completely continuous and \(TX\subset X\). We start with the proof of the latter fact. Let us note that for an arbitrary \(u\in X,\) the element \({\overline{u}}:=Tu\) satisfies

$$\begin{aligned} {\overline{u}}^{\prime }(t)=-e^{-At}(a(t))^{-1}\int _{0}^{t}e^{As}{\overline{f}}(s,u(s))ds. \end{aligned}$$

In consequence, we can state that \({\overline{u}}^{\prime }(0)=0,\) \(\overline{u}^{\prime }(1)+A{\overline{u}}(1)=0\) and \({\overline{u}}^{\prime }\) is nonpositive in [0, 1]. Additionally, we get for all \(t\in (0,1),\)

$$\begin{aligned} {\overline{u}}^{\prime }(t)=-e^{-At}(a(t))^{-1}\int _{0}^{t}e^{As}\overline{f}(s,u(s))ds\ge -(a_{\min })^{-1}\int _{0}^{1}\underset{u\in \left[ 0,b\right] }{\max }{\overline{f}}(s,u)ds\ge -{b}. \end{aligned}$$

Moreover, the following estimates take place

$$\begin{aligned} {\overline{u}}(1)= & {} \frac{e^{-A}}{A}(a(1))^{-1}\int _{0}^{1}e^{As}\overline{f}(s,u(s))ds\ge 0,\\ {\overline{u}}(0)= & {} \frac{e^{-A}}{A}(a(1))^{-1}\int _{0}^{1}e^{As}\overline{f}(s,u(s))ds+\int _{0}^{1}e^{-Ar}(a(r))^{-1}\int _{0}^{r}e^{As}\overline{f}(s,u(s))dsdr\\\le & {} \frac{1+A}{Aa_{\min }}\int _{0}^{1}{\overline{f}}(s,u(s))ds\le \frac{1+A}{Aa_{\min }}\int _{0}^{1}\varphi (s)ds\le b. \end{aligned}$$

Taking into account the above assertions and the monotonicity of \({\overline{u}}\) (which is nonincreasing) one obtains the chain of inequalities for all \(t\in \left[ 0,1\right] \),

$$\begin{aligned} 0\le {\overline{u}}(1)\le {\overline{u}}(t)\le {\overline{u}}(0)\le b. \end{aligned}$$

Thus we have shown the inclusion \(TX\subset X\). Now we prove that operator T is completely continuous in C([0, 1]). The proof of this fact is based on the standard reasoning (see e.g. [21]) and consists of two steps. We start with the proof of the continuity of T. To this end, we take arbitrary \(u\in C([0,1])\) and a sequence \((u_{n})_{n\in {\mathbb {N}}}\subset C([0,1]),\) such that \(u_{n}\rightarrow u \) in C([0, 1]) with the sup-norm \(||u||_{C}=\underset{t\in [0,1]}{\sup }|u(t)|.\) By the definition of T,  one can obtain the following chain of inequalities

$$\begin{aligned} ||Tu-Tu_{n}||_{C}\le & {} \frac{1}{a_{\min }} \int _{0}^{1}\int _0^r|{\overline{f}}(s,u_{n}(s))-{\overline{f}} (s,u(s))| dsdr\\&+ \frac{1}{A a_{\min }}\int _{0}^{1}|{\overline{f}}(s,u(s))-{\overline{f}}(s,u_{n}(s))|ds\\&+ \frac{1}{a_{\min }} \int _{0}^{1}|{\overline{f}}(s,u(s))-{\overline{f}} (s,u_{n}(s))|ds \\\le & {} (a_{\min })^{-1}\left( 2+\frac{1}{A}\right) \int _{0}^{1}|{\overline{f}}(s,u_{n}(s))-{\overline{f}}(s,u(s))|ds. \end{aligned}$$

Let us consider the sequence \(\psi _{n}(s):=|{\overline{f}}(s,u_{n} (s))-{\overline{f}}(s,u(s))|.\) Bearing in mind the definition of \({\overline{f}},\) (H2) and (H3), we note that for all \(s\in [0,1],\) \(\psi _{n}(s)\rightarrow 0\) when \(n\rightarrow \infty ,\) \(\psi _{n}(s)\le 2\underset{u\in [0,b]}{\max }f(s,u)\) and \(\int _{0} ^{1}\psi _{n}(s)ds\le 2\int _{0}^{1}\varphi (t)dt<+\infty .\) Therefore, by the Lebesgue dominated convergence theorem, we derive that

$$\begin{aligned} \int _{0}^{1}\psi _{n}(s)ds\rightarrow 0\text { for }n\rightarrow \infty \end{aligned}$$

and consequently

$$\begin{aligned} ||Tu-Tu_{n}||_{C}\rightarrow 0\text { for }n\rightarrow \infty . \end{aligned}$$

Thus T is continuous at an arbitrary \(u\in C([0,1]),\) namely T is continuous as an operator from C([0, 1]) into itself. To prove the compactness of T we apply the Ascoli-Arzelà lemma. To this end we show that T maps bounded subsets of C([0, 1]) into relatively compact subsets of C([0, 1]). Let us consider a certain \(R>0\) and a closed ball \(B:=\{u\in C([0,1]),\;||u||_{C}\le R\}.\) Our task is now to show that the image of B: \(T(B):=\{Tu\in C([0,1]),\;||u||_{C}\le R\}\) is relatively compact in C([0, 1]). First, the equicontinuity of functions from the family T(B) will be proved. To this end we take any \(t_{0}\) and \(t_{n} \rightarrow t_{0}^{+}.\) Now, by the definition of \({\overline{f}},\) for all \(u\in B\) we have

$$\begin{aligned}&|Tu(t_{n})-Tu(t_{0})|\\&\quad \le (a_{\min })^{-1}\int _{t_{0}}^{t_{n}}\int _{0}^{r}{\overline{f}} (s,u(s))dsdr\le (a_{\min })^{-1}\int _{t_{0}}^{t_{n}}\int _{0}^{1} \underset{v\in \left[ 0,b\right] }{\max }f(s,v)dsdr\\&\quad \le (a_{\min })^{-1}\frac{Aa_{\min }}{A+1}b\left( t_{n}-t_{0}\right) , \end{aligned}$$

and further

$$\begin{aligned} \underset{n\rightarrow \infty }{\lim }|Tu(t_{n})-Tu(t_{0})|=0\text {, } \end{aligned}$$

uniformly with respect to \(u\in B.\) We obtain the analogous conclusion for \(t_{n}\rightarrow t_{0}^{-}\). Thus, we state the uniform equicontinuity of T(B). Moreover the following estimate takes place

$$\begin{aligned} \underset{t\in [0,1]}{\sup }|Tu(t)|\le \frac{2A+1}{Aa_{\min }}\int _{0}^{1}\underset{v\in \left[ 0,b\right] }{\max }f(s,v)ds<+\infty , \end{aligned}$$

for all \(u\in B,\) which implies the boundedness of the family T(B). Finally, the Ascoli-Arzelà lemma leads to the conclusion that T(B) is relatively compact in C([0, 1]).

Summarizing, we have proved that T is a completely continuous operator mapping the nonempty, closed and convex subset X of C([0, 1]) into X. Now Schauder’s fixed point theorem gives the existence of at least one fixed point \(u_{0}\in X\) of the operator T.

Furthermore, as for all \(t\in \left( \alpha ,\beta \right) \subset [0,1],\) \(f(t,0)>0\) we state that \(u_{0}\ne 0\) for \(t\in (\alpha ,\beta )\). If, in addition, \(f(t,u)>0\) for all \(t\in ( \alpha ,\beta ) \) and \(u\in [0,b]\), we get that \(u_{0}\) is strictly decreasing in \((\alpha ,1)\), which follows from the estimate

$$\begin{aligned} u_{0}^{\prime }(t)&\le -e^{-At}(a(t))^{-1}\int _{\alpha }^{t}e^{As}f(s,u_{0}(s))ds \\&<-e^{-At}(a(t))^{-1}\underset{s\in \left[ \alpha ,t\right] }{\min } f(s,u_{0}(s))\frac{1}{A}(e^{At}-e^{A\alpha })<0. \end{aligned}$$

\(\square \)
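The operator T from (7) is easy to discretize, and iterating it illustrates both the invariance \(TX\subset X\) and the fixed point produced by Schauder's theorem. A minimal sketch in Python, assuming the hypothetical data \(A=1\), \(a(t)=10e^{-t}\), \(f(t,u)=0.2e^{t}+0.1u^{3}\), \(b=1\); the clipping of u implements the truncation \(\overline{f}\):

```python
import numpy as np

A, b, N = 1.0, 1.0, 2001
t = np.linspace(0.0, 1.0, N)
a = lambda s: 10.0 * np.exp(-s)
fbar = lambda s, u: 0.2 * np.exp(s) + 0.1 * np.clip(u, 0.0, b)**3

def cumtrapz(y, x):
    """Cumulative trapezoid rule with value 0 at x[0]."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

def T(u):
    inner = cumtrapz(np.exp(A * t) * fbar(t, u), t)        # int_0^r e^{As} fbar ds
    outer = cumtrapz(np.exp(-A * t) / a(t) * inner, t)     # int_0^t e^{-Ar}/a(r) inner(r) dr
    const = np.exp(-A) / (A * a(1.0)) * inner[-1] + outer[-1]
    return -outer + const                                  # formula (7)

u = np.zeros(N)
for _ in range(30):            # fixed point iteration of T
    u = T(u)

# the iterate inherits the properties stated in Theorem 2.1
assert np.all(u >= 0) and np.all(u <= b)
assert np.all(np.diff(u) <= 0)   # nonincreasing
```

For these data the iteration converges quickly, since the Lipschitz constant of \(\overline{f}\) in u is small; in general Schauder's theorem guarantees existence but not the convergence of this simple iteration.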

The natural question is associated with the dependence of solutions on the coefficient a and parameter which can be included in the nonlinearity f. It is worth emphasizing that we consider this problem in the case when we are not able to guarantee the uniqueness of a solution. Thus, now we extend our problem and consider the following equation

$$\begin{aligned} -\left( a(t)u^{\prime }(t)e^{At}\right) ^{\prime }=e^{At}f(t,u(t),v(t)) \text { for a.a. }t\in \left( 0,1\right) \end{aligned}$$
(8)

with parameter \(v\in V\subset L^{2}(0,1)\) and f satisfying the assumptions below

  1. (H2’)

    There exist numbers \(b>0,\) \(\alpha ,\beta \in \mathbb {R},\) \(\alpha <\beta ,\) such that \(f:(0,1)\times [0,b]\times \mathbb {R}\rightarrow [0,+\infty )\) is a Caratheodory function and for all \(t\in \left( \alpha ,\beta \right) \subset [0,1],\) \(v\in V,\) \(f(t,0,v(t))>0\).

  2. (H3’)

    There exists \(\varphi \in L^{2}(0,1)\) such that for all \(t\in \left( \alpha ,\beta \right) \subset [0,1],\) \(v\in V,\) the following inequalities hold \(\underset{u\in \left[ 0,b\right] }{\max }f(t,u,v(t))\le \varphi (t)\) and \(\int _{0}^{1}\varphi (t)dt\le \frac{A a_{min}}{A+1}b.\)

Theorem 2.2

Assume hypotheses (H2’) and (H3’) and define the set of coefficients \({\mathcal {A}} :=\{a\in C^{1}([0,1]): m\le a(t)\le M\},\) where m and M are given positive numbers (the same for all elements of \({\mathcal {A}})\). Let \(\{v_{n}\}_{n\in {\mathbb {N}}}\subset V\) be a sequence of parameters convergent pointwise to a certain \(v_{0}\in V\) and \(\{a_{n}\}_{n\in {\mathbb {N}}}\subset {\mathcal {A}}\) be a sequence of coefficients convergent uniformly to a certain \(a_{0}\in {\mathcal {A}}.\) Additionally, let us suppose that for each \(n\in {\mathbb {N}},\) \(u_{n}\in X\) denotes a solution to (8) with \(v=v_{n}\) and \(a=a_{n}\). Then the sequence \(\{u_{n}\}_{n\in {\mathbb {N}}}\subset X\) possesses a subsequence converging uniformly to a certain element \(u_{0}\in X\) which is a solution to (8) and (2) corresponding to the parameter \(v=v_{0}\) and the coefficient \(a=a_{0}.\)

Proof

First we note that for each \(v_{n}\) and \(a_{n},\) Theorem 2.1 leads to the existence of at least one nonnegative and nontrivial solution \(u_{n}\in X\) of (8) and (2). The fact that \(u_{n}\in X\) implies the boundedness of \(\{u_{n}\}_{n\in {\mathbb {N}}}\) and \(\{u_{n}^{\prime }\}_{n\in {\mathbb {N}}}\) in [0, 1] and consequently in the \(W^{1,2}(0,1)\)-norm. Therefore, \(\{u_{n}\}_{n\in {\mathbb {N}}}\) (up to a subsequence) converges weakly to a certain \(u_{0}\in W^{1,2}(0,1).\) Owing to the Rellich-Kondrashov theorem we get the uniform convergence of \(\{u_{n}\}_{n\in {\mathbb {N}}}\) to \(u_{0}\) in [0, 1]. Now, let us consider the auxiliary sequence of the following form

$$\begin{aligned} p_{n}(t)=a_{n}(t)u_{n}^{\prime }(t)e^{At},\text { }t\in [0,1]. \end{aligned}$$

Since for all \(t\in \left[ 0,1\right] ,\) the following estimate holds

$$\begin{aligned} |p_{n}(t)|\le M|u_{n}^{\prime }(t)|e^{At}\le Mbe^{A}, \end{aligned}$$

the sequence \(\{p_{n}\}_{n\in {\mathbb {N}}}\) is also bounded. Moreover,  we have the estimate for all \(t\in \left( 0,1\right) ,\)

$$\begin{aligned} |p_{n}^{\prime }(t)|=|\left( a_{n}(t)u_{n}^{\prime }(t)e^{At}\right) ^{\prime }|=|e^{At}f(t,u_{n}(t),v_{n}(t))|\le e^{A}\varphi \left( t\right) , \end{aligned}$$

which gives the boundedness of \(\{p_{n}^{\prime }\}_{n\in {\mathbb {N}}}\) in the \(L^{2}(0,1)\)-norm. Both assertions imply the boundedness of \(\{p_{n} \}_{n\in {\mathbb {N}}}\) in the \(W^{1,2}(0,1)\)-norm. Therefore, passing if necessary to a subsequence, we state that \(\{p_{n}\}_{n\in {\mathbb {N}}}\) is weakly convergent to a certain \(p_{0}\in W^{1,2}(0,1).\) Now, the Rellich-Kondrashov theorem allows us to get the uniform convergence of \(\{p_{n}\}_{n\in {\mathbb {N}}}\) to \(p_{0}\) in [0, 1]. Taking into account the fact that for all \(t\in \left[ 0,1\right] ,\) \(u_{n}^{\prime }(t)=(a_{n}(t))^{-1}e^{-At} p_{n}(t),\) in particular \(u_{n}^{\prime }(0)=(a_{n}(0))^{-1} p_{n}(0),\) which gives \(p_{n}(0)=0,\) we obtain the pointwise convergence of \(\{u_{n}^{\prime }\}_{n\in N }\) to \(u_{0}^{\prime }\) in [0, 1] and the equality

$$\begin{aligned} u_{0}^{\prime }(t)=\left( a_{0}(t)\right) ^{-1}e^{-At}p_{0}(t). \end{aligned}$$

Thus we get \(u_{0}^{\prime }(0)=(a_{0}(0))^{-1}p_{0}(0)=0\) and

$$\begin{aligned} u_{0}^{\prime }(1)+Au_{0}(1)=\underset{n\rightarrow \infty }{\lim }\left( u_{n}^{\prime }(1)+Au_{n}(1)\right) =0. \end{aligned}$$

Finally, \(u_{0}\in X.\) Thus we know that \(u_{0}\) satisfies (2). Our task is now to show that \(u_{0}\) is a solution to (8) with \(v=v_{0}\) and \(a=a_{0}\). To this effect we take an arbitrary \(h\in W_{0}^{1,2}(0,1)\) and note that for each \(t\in \left[ 0,1\right] ,\)

$$\begin{aligned} |e^{At}f(t,u_{n}(t),v_{n}(t))h(t)|\le e^{A}\varphi (t)|h(t)| \end{aligned}$$

and

$$\begin{aligned} \underset{n\rightarrow \infty }{\lim }e^{At}f(t,u_{n}(t),v_{n}(t))h(t)=e^{At} f(t,u_{0}(t),v_{0}(t))h(t). \end{aligned}$$

Thus, the Lebesgue dominated convergence theorem allows us to state that, for all \(h\in W_{0}^{1,2}(0,1)\),

$$\begin{aligned} \underset{n\rightarrow \infty }{\lim }\int _{0}^{1}e^{At}f(t,u_{n}(t),v_{n} (t))h(t)dt=\int _{0}^{1}e^{At}f(t,u_{0}(t),v_{0}(t))h(t)dt, \end{aligned}$$

and finally

$$\begin{aligned} \int _{0}^{1}p_{0}(t)h^{\prime }(t)dt&=\underset{n\rightarrow \infty }{\lim }\int _{0}^{1}p_{n}(t)h^{\prime }(t)dt=\underset{n\rightarrow \infty }{\lim } \int _{0}^{1}e^{At}f(t,u_{n}(t),v_{n}(t))h(t)dt\\&=\int _{0}^{1}e^{At}f(t,u_{0}(t),v_{0}(t))h(t)dt. \end{aligned}$$

With the du Bois-Reymond lemma in mind, we infer that

$$\begin{aligned} -\left( p_{0}(t)\right) ^{\prime }=e^{At}f(t,u_{0}(t),v_{0}(t)) \ \ \text {for a.e. } t \in [0,1] \end{aligned}$$

which can be rewritten as

$$\begin{aligned} -\left( a_0(t)u_{0}^{\prime }(t)e^{At}\right) ^{\prime }=e^{At}f(t,u_{0} (t),v_{0}(t))\ \ \text {for a.e. } t \in [0,1]. \end{aligned}$$

Since \(p_{0}\) is absolutely continuous and \(u_{0}^{\prime }(t)=(a_0(t))^{-1} e^{-At}p_{0}(t),\) we derive that \(u_{0}^{\prime \prime }\) exists almost everywhere in (0, 1) and belongs to \(L^{2}(0,1)\). Thus we get a.e. in (0, 1)

$$\begin{aligned} \left( a_0(t)u_{0}^{\prime }(t)\right) ^{\prime }+A a_0 (t)u_{0}^{\prime } (t)+f(t,u_{0}(t),v_{0}(t))=0. \end{aligned}$$

This means that \(\ u_{0}\in W^{2,2}(0,1)\) is a solution to (8) and (2) with the limit parameter \(v=v_{0}\) and the limit coefficient \(a=a_{0}.\) \(\square \)

Remark 1

It is worth emphasizing that we can apply our results to a nonlinearity f which is not monotonic with respect to the second variable (see e.g. the assumptions in [5]) and which can have exponential growth. Moreover, we are able to consider functions \(f(\cdot ,u,v)\) which are singular at 0.

These observations are illustrated by the example below.

Example 2.1

Assume that

$$\begin{aligned} \left\{ \begin{array}{c} (a(t)u^{\prime }(t))^{\prime }+Aa(t)u^{\prime }(t)+\alpha (t)v(t)e^{u(t)}+\beta (t)\frac{u^{2}(t)+1}{(u(t)+5)(6-u(t))}e^{v(t)} \\ +\frac{\gamma (t)}{\root 6 \of {t}}(u(t)-1)^{4}(2-u(t))v^{2}(t)+\delta (t)\left( u^{5}(t)+u^{3}(t)+u(t)+\frac{1}{\root 7 \of {t}}\right) v^{3}(t)=0, \\ u^{\prime }(0)=0\text { and }u^{\prime }(1)+Au(1)=0, \end{array} \right. \end{aligned}$$
(9)

where \(v\in V:=\{w\in L^{2}(0,1);ess\sup |w|\le 1\},\) functions \(\alpha ,\) \( \beta ,\) \(\gamma ,\) \(\delta \) are nonnegative, belong to \(L^{\infty }(0,1)\), \( \alpha (t)+\beta (t)+\gamma (t)+\delta (t)\ne 0\) a.e. in (0, 1) and

$$\begin{aligned} \left( ess\sup \alpha \right) e+\frac{e}{15}\left( ess\sup \beta \right) +\frac{12}{5}\left( ess\sup \gamma \right) +\frac{25}{6}(ess\sup \delta )\le \frac{A}{A+1}. \end{aligned}$$

Here we have

$$\begin{aligned} f(t,u,v)=&\alpha (t)ve^{u}+\beta (t)\frac{u^{2}+1}{(u+5)(6-u)}e^{v}\nonumber \\&+\frac{\gamma (t)}{\root 6 \of {t}}(u-1)^{4}(2-u)v^{2}+\delta (t)\left( u^{5}+u^{3}+u+\frac{1}{\root 7 \of {t}}\right) v^{3}. \end{aligned}$$
(10)

Let us consider the sequence of parameters \(\{v_{n}\}_{n\in N}\subset V\) given by \(v_{n}(t)=\frac{\cos t}{n}+t^{n}\) and the sequence of coefficients \(a_{n}(t)=\frac{t}{n}+1\), \(t\in \left[ 0,1\right] .\) We can note that \(\{v_{n}\}_{n\in N}\) tends pointwise to \(v_{0}\) such that \(v_{0}(t)=0\) for \(t\in [0,1)\) and \(v_{0}(t)=1\) for \(t=1\), and \(\{a_{n}\}_{n\in N}\subset {\mathcal {A}}\), where

$$\begin{aligned} {\mathcal {A}}:=\{a\in C^{1}([0,1]):1\le a(t)<2 \}, \end{aligned}$$

converges uniformly to \(a_{0}\equiv 1\) in [0, 1]. It is clear that \( v_{0}\in V\) and \(a_{0}\in {\mathcal {A}}.\) Thus, taking \(b=1,\) we have the following estimate for a.a. \(t\in (0,1)\), \(u\in \left[ 0,1\right] \) and \(v\in V,\)

$$\begin{aligned} f(t,u,v)\le \alpha (t)e+\beta (t)\frac{e}{15}+\frac{2\gamma (t)}{\root 6 \of {t}} +\delta (t)\left( 3+\frac{1}{\root 7 \of {t}}\right) , \end{aligned}$$
(11)

which gives

$$\begin{aligned}&\int _{0}^{1}\underset{u\in \left[ 0,1\right] }{\max }f(t,u,v(t))dt \\&\quad \le \left( ess\sup \alpha \right) e+\frac{e}{15}\left( ess\sup \beta \right) + \frac{12}{5}\left( ess\sup \gamma \right) +\frac{25}{6}(ess\sup \delta ) \\&\quad \le \frac{A}{A+1}, \end{aligned}$$

where the last inequality follows from the estimates made on \(\alpha ,\) \( \beta ,\) \(\gamma \) and \(\delta .\) Moreover, \(f(t,u,0)>0\) for all \(t\in (0,1),\) \( u\in \left[ 0,1\right] \) and \(\underset{u\in \left[ 0,1\right] }{\max } f(t,u,v(t))\) belongs to \(L^{2}(0,1).\) Finally, we can state that f satisfies (H2’) and (H3’). Applying Theorem 2.1 we derive that for each \(n\in {\mathbb {N}},\) there exists at least one positive, nonincreasing solution \(u_{n}\) for (9) with the coefficient \( a=a_{n}\) and parameter \(v=v_{n},\) such that \(u_{n}(t)\in (0,1]\) and \( u_{n}^{\prime }(t)\in [-1,0)\) for all \(t\in (0,1).\) Moreover, by Theorem 2.2, we obtain (up to a subsequence) the uniform convergence of \( \{u_{n}\}_{n\in {\mathbb {N}}\text { }}\) to a certain \(u_{0}\) in [0, 1], where \( u_{0}\) is a solution to (9) with \(v=v_{0},\) \(a=a_{0}.\)
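The constants 12/5 and 25/6 in the above estimate come from integrating the singular factors \(t^{-1/6}\) and \(t^{-1/7}\); both integrals and the resulting sufficient condition can be checked by quadrature. A minimal sketch, assuming the hypothetical constant coefficients \(\alpha =\beta =\gamma =\delta =0.01\) and \(A=1\):

```python
import numpy as np
from scipy.integrate import quad

# integrable singularities at t = 0; scipy's quad handles them
I_gamma, _ = quad(lambda t: 2.0 * t**(-1.0 / 6.0), 0.0, 1.0)   # max_u (u-1)^4 (2-u) = 2
I_delta, _ = quad(lambda t: 3.0 + t**(-1.0 / 7.0), 0.0, 1.0)   # max_u (u^5+u^3+u) = 3
print(I_gamma, 12.0 / 5.0)   # both equal 2.4
print(I_delta, 25.0 / 6.0)   # both equal 25/6

# hypothetical choice: alpha = beta = gamma = delta = 0.01 (constants)
A, c = 1.0, 0.01
lhs = c * np.e + c * np.e / 15.0 + c * I_gamma + c * I_delta
print(lhs <= A / (A + 1.0))   # True: the sufficient condition of (H3') holds
```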

Remark 2

Let us note that for \(\beta (t)=\gamma (t)=\delta (t)=0\) we obtain a Bratu-type equation and for \(\alpha (t)=\beta (t)=\gamma (t)=0\) we get an equation of Duffing type.

3 Variational iteration method (VIM)

Having guaranteed the existence of a positive solution to our problem, we can apply VIM to obtain an approximation of a solution. We start this section with a correction functional for Eq. (1) in the form

$$\begin{aligned} u_{n+1}(t)=u_{n}(t)+\int _{0}^{t}\lambda _t(s)\left[ (a(s)e^{As}u_{n}^{\prime }(s))^{\prime }+e^{As}f(s,{\widetilde{u}}_{n}(s))\right] ds, \end{aligned}$$
(13)

where \({\widetilde{u}}_{n}\) is identified as a restricted variation, namely \(\delta f(s,{\widetilde{u}}_{n}(s))=0,\) and \(\lambda \) is a general Lagrange multiplier, which can be calculated optimally with the help of variational theory. We will write \(\lambda =\lambda _t(s)\) in order to underline its functional form (see [1] as well as [22] or [23]). The method comprises two steps. In the first step one needs to derive the formula for the multiplier, which can be done with the use of the restricted variation and integration by parts. In the second step, by application of the iteration scheme, the sequence of successive approximations of the solution is created. For the initial approximation \(u_0\) an arbitrary function can be taken.

Finally, the possible solution is the limit: \(\underset{n\rightarrow \infty }{\lim } u_{n}(t)=u(t)\). Thus, in our case the iteration scheme is given as follows

$$\begin{aligned} u_{n+1}(t)=u_{n}(t)+\int _{0}^{t}\lambda _t(s)\left[ (a(s)e^{As}u_{n}^{\prime }(s))^{\prime }+e^{As}f(s,u_{n}(s))\right] ds. \end{aligned}$$
(14)

Applying the standard approach we determine the optimal value of \(\lambda .\) To this end we calculate the variation of both sides with respect to \(u_{n}\) and get the following formula

$$\begin{aligned} \delta u_{n+1}(t)&=(1-\lambda ^{\prime }_t(t)e^{At}a(t))\delta u_{n} (t)+\lambda _t(t)(a(t)e^{At}\delta u_{n}^{\prime }(t))\\&\quad +\int _{0}^{t}\left( \lambda ^{\prime }_t(s)e^{As}a(s)\right) ^{\prime }\delta u_{n}(s)ds+\delta \int _{0}^{t}\lambda _t(s)e^{As}f(s,{\widetilde{u}}_{n}(s))ds, \end{aligned}$$

where we have \(\delta f(s,{\widetilde{u}}_{n}(s))=0.\) Then the stationary condition gives

$$\begin{aligned} \left\{ \begin{array}{c} \left( \lambda ^{\prime }_t(s)e^{As}a(s)\right) ^{\prime }=0\ \ \ \text {for}\ s\in [0,t],\\ (1-\lambda ^{\prime }_t(t)e^{At}a(t))=0,\\ \lambda _t(t)=0, \end{array} \right. \end{aligned}$$

and further

$$\begin{aligned} \lambda _t(s)=-\int _{s}^{t}\frac{e^{-Ar}dr}{a(r)}. \end{aligned}$$
(15)

Finally, we obtain the following correction functional

$$\begin{aligned} u_{n+1}(t)=u_{n}(t)-\int _{0}^{t}\left( \int _{s}^{t}\frac{dr}{e^{Ar} a(r)}\right) \left[ (a(s)e^{As}u_{n}^{\prime }(s))^{\prime }+e^{As} f(s,u_{n}(s))\right] ds. \end{aligned}$$
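The whole derivation can be reproduced symbolically. A minimal sketch, assuming the toy data \(a\equiv 1\), \(A=1\) and \(f(t,u)=1\) (hypothetical, chosen so that the integrals close in elementary functions); for this choice (15) gives \(\lambda _t(s)=e^{-t}-e^{-s}\), and one step of (14) from a constant initial guess \(u_{0}\equiv c\) already produces an iterate with \(u_{1}^{\prime }(0)=0\):

```python
import sympy as sp

t, s, r, c = sp.symbols('t s r c', positive=True)
A = 1   # toy data: a(t) = 1, f(t, u) = 1

# Lagrange multiplier from (15): lambda_t(s) = -int_s^t e^{-Ar}/a(r) dr
lam = -sp.integrate(sp.exp(-A * r), (r, s, t))
assert sp.simplify(lam - (sp.exp(-t) - sp.exp(-s))) == 0

# one step of the correction functional (14) from the constant guess u0 = c
u0 = c
inner = sp.diff(sp.exp(A * s) * sp.diff(u0, s), s) + sp.exp(A * s) * 1
u1 = sp.simplify(u0 + sp.integrate(lam * inner, (s, 0, t)))
print(u1)                          # c + 1 - t - exp(-t)
print(sp.diff(u1, t).subs(t, 0))   # 0, i.e. u1 satisfies u'(0) = 0
```

Note that the left boundary condition \(u^{\prime }(0)=0\) is built into the scheme automatically, while the condition at \(t=1\) must be enforced by the choice of the constant c, exactly as in the shooting procedure used in the next section.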

4 Numerical approximations: comparison of VIM and FPIM

We present the results of numerical computations for several boundary value problems belonging to the class (2)–(3), for which the existence of at least one positive, nonincreasing (decreasing) and bounded solution is guaranteed by Theorem 2.1. All the calculations have been made in Matlab 2017A. Each of the examples below is divided into two parts that correspond to the numerical approaches we would like to compare.

Example 4.1

We consider the problem (2)–(3) with

$$\begin{aligned} \begin{array}{l} \bullet \ \ \ A=1\\ \bullet \ \ \ a(t)=10\cdot e^{-t}\\ \bullet \ \ \ f(t,x)=0.2\cdot e^t+0.1\cdot x^3. \end{array} \end{aligned}$$
(16)

Let us observe that there exists \(b>0\), for example \(b=1\), such that the conditions (H1)–(H3) are satisfied. Hence, according to Theorem 2.1, there exists at least one positive and nonincreasing solution to the considered problem.

  1. (a)

    Variational iteration method

    In order to apply VIM for finding an approximate solution, we first need to derive the function \(\lambda _t(s)\) (which appears in the correction functional on the right-hand side of (14)); substituting the data into formula (15), we get \(\lambda _t(s)=0.1\cdot (s-t)\). In the second step, choosing the initial approximation \(u_0(t) = c\cdot (3-t^2)\) and iterating (14), we compute three subsequent approximations

    $$\begin{aligned} \begin{array}{l} u_1(t)\ = \ u_0(t)+\int _{0}^{t}0.1 (s-t) \left[ (10 e^{-s}\ e^{s} u_0^\prime (s))^{\prime } + e^{s} (0.2 e^s+0.1 u_0^{3}(s))\right] ds\\ \quad =u_0(t)+\int _{0}^{t}0.1 (s-t) \left[ 10 u_0^{\prime \prime }(s) + e^{s} (0.2 e^s+0.1 u_0^{3}(s))\right] ds\\ \quad =0.005e^{2t}(2t - 1) - 0.02t(0.5e^{2t} - 0.5) - 2.343315\cdot 10^{-7}e^{t}(t^7 - 7t^6 + 33t^5 \\ \qquad - 165t^4 + 687t^3 - 2061t^2 + 4095t - 4095) + 0.01t(2.343315\cdot 10^{-5} e^{t} (t^6 - 6t^5 \\ \quad + 21t^4 - 84t^3 + 279t^2 - 558t + 531) - 1.2443 \cdot 10^{-2}) + 8.988867\cdot 10^{-2},\vspace{0.2cm} \\ u_2(t)\ = \ u_1(t) + \int _{0}^{t}0.1 (s-t) \left[ 10 u_1^{\prime \prime } (s) + 0.2 e^{2s}+0.1 e^s u_1^3(s)\right] ds \\ \quad = u_1(t)+\int _{0}^{t} \left\{ (s-t) u_1^{\prime \prime } (s) + (s-t)\left[ 0.02 e^{2s}+0.01 e^s u_1^3(s)\right] \right\} ds \end{array} \end{aligned}$$

    and \(u_3(t)= u_2(t)+\int _{0}^{t}0.1 (s-t) \left[ 10 u_2^{\prime \prime }(s) + e^{s} (0.2 e^s + 0.1 u_2^{3}(s))\right] ds\).

    At this point \(u_3\) depends on the constant c (as c is the coefficient of \(u_0\)). Then, solving the appropriate algebraic equation, we adjust the value of the parameter c so that \(u_3\) satisfies the boundary conditions (2), i.e. \(u^{\prime } (0)=0\) and \(u^{\prime }(1)+Au(1)=0\) (Matlab's numerical solver gives \(c = 0.028616085125364771580656069082945\)). In other words, we apply the shooting method, which transforms the boundary conditions into initial conditions. Then, to verify the accuracy of the approximation \(u_3\), we calculate the absolute value \(L_3\) of the left-hand side of (3) when \(u=u_3\):

    $$\begin{aligned} L_3(t)=| (a(t)u_3^{\prime }(t))^{\prime }+Aa(t)u_3^{\prime }(t)+f(t,u_3(t)) | \end{aligned}$$

    (see Fig. 1). The formula for \(u_3\) contains nested integrals and hence (as in the rest of the presented examples) cannot be computed explicitly, but for this problem we obtain quite an accurate (32-component) Taylor approximation \(Tlr=Tlr(u_3)\) of \(u_3\) (the corresponding left-hand side error L(Tlr) for the approximation Tlr is depicted in Fig. 1). We get:

    $$\begin{aligned} \begin{array}{l} Tlr(u_3) \\ \quad = 4.776139 \cdot 10^{-19}\ t^{31}+ 2.141772 \cdot 10^{-18}\ t^{30} + 9.208265\cdot 10^{-18}\ t^{29}+ 3.807395\cdot 10^{-17}\ t^{28} \\ \qquad + 1.515022\cdot 10^{-16}\ t^{27}+ 5.797472\cdot 10^{-16}\ t^{26} + 2.129952 \cdot 10^{-15}\ t^{25} + 7.494837 \cdot 10^{-15}\ t^{24} \\ \qquad + 2.517773 \cdot 10^{-14}\ t^{23} + 8.040387 \cdot 10^{-14}\ t^{22} + 2.426423 \cdot 10^{-13}\ t^{21} + 6.857045\cdot 10^{-13}\ t^{20} \\ \qquad + 1.784668\cdot 10^{-12}\ t^{19} + 4.108039\cdot 10^{-12}\ t^{18} + 7.185033\cdot 10^{-12}\ t^{17} + 4.951950\cdot 10^{-13}\ t^{16}\\ \qquad - 1.111934\cdot 10^{-10}\ t^{15} -9.668494\cdot 10^{-10}\ t^{14} - 6.775555\cdot 10^{-9}\ t^{13} - 4.343031\cdot 10^{-8}\ t^{12}\\ \qquad - 4.343031\cdot 10^{-8}\ t^{11} - 1.413868\cdot 10^{-6}\ t^{10} -7.057160\cdot 10^{-6}\ t^{9} - 3.173993\cdot 10^{-5}\ t^{8}\\ \qquad - 1.269488\cdot 10^{-4}\ t^{7} - 4.443513\cdot 10^{-4}\ t^{6} - 1.333202\cdot 10^{-3}\ t^{5} - 3.333413\cdot 10^{-3}\ t^{4}\\ \qquad - 6.667721\cdot 10^{-3}\ t^{3} - 1.000316\cdot 10^{-2}\ t^{2} + 8.584825 \cdot 10^{-2}. \end{array} \end{aligned}$$

    It is easy to see that for each argument \(t\in [0,1]\) the approximate solution Tlr is decreasing, positive and takes values less than 1; that is, it has all the properties of exact solutions described in Theorem 2.1.

  2. (b)

    Fixed point iteration method

    First we set \({{\tilde{u}}}_0=0\) and take the grid of points \(t_{arr}=\{ 0,\ 0.05,\ 0.1,\ 0.15,\ldots ,\ 1\}\). Then we calculate the subsequent approximations \({{\tilde{u}}}_i(t_{arr})\), \(i=1,\ldots ,8\), where each approximation is given as the solution to the following linear boundary value problem:

    $$\begin{aligned} \begin{array}{l} (a(t){{\tilde{u}}}_{i+1}^{\prime }(t))^{\prime }+A a(t)\tilde{u}_{i+1}^{\prime }(t)+f(t,{{\tilde{u}}}_i(t))=0,\text { for }a.e.\ t\in \left( 0,1\right) \\ \quad {{\tilde{u}}}_{i+1}^{\prime }(0)=0\text { and } \tilde{u}_{i+1}^{\prime }(1)+A {{\tilde{u}}}_{i+1}(1)=0. \end{array} \end{aligned}$$
    (17)

    Note that the above procedure is equivalent to applying the scheme \({{\tilde{u}}}_{i+1}(t)=T {{\tilde{u}}}_{i}(t)\), where T is the integral operator defined by formula (7).
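A minimal numerical sketch of this scheme, under the same inferred data as above (\(a(t)=10e^{-t}\), \(A=1\), \(f(t,x)=0.2e^t+0.1x^3\), an assumption on our part), might look as follows; since \(a(t)e^{At}=10\), each step of (17) reduces to a linear problem that SciPy's `solve_bvp` can handle, with the previous iterate carried between steps by interpolation on the grid:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Assumed data of Example 4.1: a(t) = 10*exp(-t), A = 1,
# f(t, x) = 0.2*exp(t) + 0.1*x**3.  Since a(t)*exp(A*t) = 10, each step of
# scheme (17) solves the linear problem
#   10*u'' = -exp(t)*f(t, u_prev(t)),  u'(0) = 0,  u'(1) + u(1) = 0.
t_arr = np.linspace(0.0, 1.0, 21)     # the grid {0, 0.05, ..., 1}
u_prev = np.zeros_like(t_arr)         # u_0 = 0

def f(t, x):
    return 0.2 * np.exp(t) + 0.1 * x**3

for _ in range(8):
    # freeze the previous iterate in the right-hand side (default arg binds it)
    rhs = lambda t, y, up=u_prev: np.vstack(
        [y[1], -np.exp(t) * f(t, np.interp(t, t_arr, up)) / 10.0])
    bc = lambda ya, yb: np.array([ya[1], yb[1] + yb[0]])  # u'(0)=0, u'(1)+u(1)=0
    sol = solve_bvp(rhs, bc, t_arr, np.zeros((2, t_arr.size)))
    u_prev = sol.sol(t_arr)[0]        # values of the next iterate on the grid
```

The iterates stabilize after a few steps; `u_prev` then holds a positive, decreasing approximation of the solution on the grid.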

    We observe that after 5 iterations FPIM stabilizes on quite an accurate approximation \({{\tilde{u}}}_{5}\) of the solution to the problem (1)–(2), i.e. the function \(\tilde{L}_5(t)=|(a(t){{\tilde{u}}}_5^{\prime }(t))^{\prime }+Aa(t)\tilde{u}_5^{\prime }(t)+f(t,{{\tilde{u}}}_5(t))|\), being the absolute value of the left-hand side of (3) when \(u={{\tilde{u}}}_5\), does not exceed \(5\times 10^{-17}\). However, for the sake of comparing the effectiveness of the two considered methods, below we present the results of FPIM obtained in the course of 3 iterations (\(\tilde{L}_3(t)=|(a(t){{\tilde{u}}}_3^{\prime }(t))^{\prime }+Aa(t)\tilde{u}_3^{\prime }(t)+f(t,{{\tilde{u}}}_3(t))|\le 12\cdot 10^{-12}\), contrasted with \(L_3(t)\le 6\cdot 10^{-15}\) in the former method):

Thus, after the third step, VIM gives better accuracy than FPIM (compare Figs. 1b and 2b).

Fig. 1

VIM method in Example 4.1—plots of \(u_3\), \(L_3\) and L(Tlr)

Fig. 2

FPIM method in Example 4.1—plots of \(\tilde{u}_{3}\) and \({{\tilde{L}}}_{3}\)

Example 4.2

For the problem (1)–(2) we specify:

$$\begin{aligned} \begin{array}{l} \bullet \ \ \ A=1\\ \bullet \ \ \ a(t)=t+1\\ \bullet \ \ \ f(t,x)=x^2+0.01. \end{array} \end{aligned}$$
(18)

Here the conditions (H1)–(H3) are fulfilled for \(b\in [b_1,b_2 ]\), where \(b_1=1- \frac{\sqrt{21}}{20}\), \(b_2=1+ \frac{\sqrt{21}}{20}\). Thus, by Theorem 2.1, there exists at least one solution to the considered problem.

  1. (a)

    Variational iteration method

    In the first step of computations, we obtain \(\lambda _t(s)=-\int _s^t \frac{1}{e^r (r+1)}dr\). In the second step, we take the initial approximation \(u_0(t) = 0.0055-c\cdot t^2\) and derive subsequent approximations

    $$\begin{aligned}&\begin{array}{l} u_1(t)\\ \quad =u_0(t)-\int _{0}^{t} \int _s^t \frac{1}{e^r (r+1)}dr [ ((s+1) e^{s} u_0^\prime (s) )^{\prime }+e^{s}(u^2_0(s)+0.01)] ds\\ \quad =u_0(t)-\int _{0}^{t} \int _s^t \frac{1}{e^r (r+1)}dr [ (s+1) e^{s} u_0^{\prime \prime }(s) + (s+2)e^s u_0^{\prime }(s) +e^{s}(u^2_0(s)+0.01)] ds \end{array} \\&\begin{array}{l} u_2(t)\\ \quad =u_1(t)-\int _{0}^{t} \int _s^t \frac{1}{e^r (r+1)}dr [ (s+1) e^{s} u_1^{\prime \prime }(s) + (s+2)e^s u_1^{\prime }(s) +e^{s}(u^2_1(s)+0.01)] ds, \end{array} \end{aligned}$$

    and then find the value of c for which the boundary conditions (2) are approximately satisfied (more precisely, we obtain \(u_2'(0)=0\) and \(u_2'(1)+A u_2(1)\approx 5\times 10^{-5}\) for \(c = 0.0033\)). Once we determine the value of the parameter c in the formula for \(u_2\), we calculate \(L_2\).
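The shooting step for this example can also be carried out purely numerically. The sketch below is an illustration (not the symbolic computation described above): it integrates equation (3) with the data (18) from \(u(0)=\alpha\), \(u'(0)=0\) and adjusts \(\alpha\) until the condition at \(t=1\) holds; the bracket \([0, 0.1]\) is an assumption suggested by the magnitude of the data:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# With A = 1, a(t) = t + 1 and f(t, x) = x**2 + 0.01, equation (3) reads
#   (t+1)*u'' + (t+2)*u' + u**2 + 0.01 = 0.
def residual(alpha):
    rhs = lambda t, y: [y[1], -((t + 2.0) * y[1] + y[0]**2 + 0.01) / (t + 1.0)]
    sol = solve_ivp(rhs, (0.0, 1.0), [alpha, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1] + sol.y[0, -1]    # boundary residual u'(1) + u(1)

alpha = brentq(residual, 0.0, 0.1)        # approximates the solution's value u(0)
```

The root lies close to 0.0055, which agrees with the value \(u_2(0)=0.0055\) fixed by the initial approximation above.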

  2. (b)

    Fixed point iteration method

    We set \({{\tilde{u}}}_0=0\) and calculate subsequent approximations \({{\tilde{u}}}_i\), \(i=1,\ldots ,12\), according to the numerical scheme (17), as in the previous example (on the same grid of points \(t_{arr}\)). We observe that after two iterations the error \({{\tilde{L}}}_2\), corresponding to the approximation \({{\tilde{u}}}_2\), is greater than the analogous error that appears in calculations with the use of VIM (\({{\tilde{L}}}_2(t)\le 14\times 10^{-8}\) compared to \(L_2(t) \le 3\times 10^{-10}\)—see Figs. 3 and 4). However, after 8 iterations the results obtained by the current method stabilize with the left-hand side error \({{\tilde{L}}}_8(t)\) less than or equal to \(4\times 10^{-19}\).

As in the previous example, VIM appears to be more useful for deriving the formula of an approximate solution.

Fig. 3

VIM method in Example 4.2—plots of \(u_2\) and \(L_2\)

Fig. 4

FPIM method in Example 4.2—plots of \(\tilde{u}_{2}\) and \({{\tilde{L}}}_{2}\)

Example 4.3

Here we assume that

$$\begin{aligned} \begin{array}{l} \bullet \ \ \ A=1\\ \bullet \ \ \ a(t)=e^t\\ \bullet \ \ \ f(t,x)=(1-t)^2 + \frac{t}{x+1}. \end{array} \end{aligned}$$
(19)

Since the conditions (H1)–(H3) are fulfilled for \(b\ge \frac{5}{3}\), there exists at least one solution to the given problem (Theorem 2.1).

  1. (a)

    Variational iteration method

    In the first step we obtain \(\lambda _t(s)=-\int _s^t \frac{1}{e^{2r}}dr=\frac{1}{2} [e^{-2t} - e^{-2s}]\). In the second step, we set \(u_0(t) = c \cdot (3 - 1.5\cdot t^2) \). After evaluating \(u_1\) and \(u_2\), i.e.

    $$\begin{aligned}&\begin{array}{l} u_1(t)=\\ u_0(t)+\int _{0}^{t} \frac{1}{2} [e^{-2t} - e^{-2s}] [ (e^{2s} u_0^\prime (s) )^{\prime }+e^{s}((1-s)^2 + \frac{s}{u_0(s)+1} )] ds=\\ u_0(t)+\int _{0}^{t} \frac{1}{2} [e^{-2t} - e^{-2s}] [ e^{2s}(2 u_0^\prime (s)+u_0^{\prime \prime }(s))+e^{s}((1-s)^2 + \frac{s}{u_0(s)+1} )] ds, \end{array} \\&\begin{array}{l} u_2(t)=\\ u_1(t)+\int _{0}^{t} \frac{1}{2} [e^{-2t} - e^{-2s}] [ e^{2s}(2 u_1^\prime (s)+u_1^{\prime \prime }(s))+e^{s}((1-s)^2 + \frac{s}{u_1(s)+1} )] ds, \end{array} \end{aligned}$$

    we calculate the constant c for which \(u_2\) satisfies the boundary conditions (2) (getting \(c = 0.10798064912100120344672221166801\)). Then we substitute the value of c into the formula for \(u_2\) and compute the left-hand side error \(L_2\).
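The same numerical shooting sketch as in Example 4.2, adapted to the data (19), provides an independent check of this value of c (again an illustration under the stated assumptions; the bracket \([0, 1]\) is a guess):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# With A = 1, a(t) = e^t and f(t, x) = (1-t)**2 + t/(x+1), dividing
# -(e^{2t} u')' = e^t f(t, u) by e^{2t} gives
#   u'' + 2*u' + exp(-t)*((1-t)**2 + t/(u+1)) = 0.
def residual(alpha):
    rhs = lambda t, y: [y[1],
                        -2.0 * y[1] - np.exp(-t) * ((1.0 - t)**2 + t / (y[0] + 1.0))]
    sol = solve_ivp(rhs, (0.0, 1.0), [alpha, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1] + sol.y[0, -1]    # boundary residual u'(1) + u(1)

alpha = brentq(residual, 0.0, 1.0)        # approximates u(0) = u_0(0) = 3*c
```

The root lands near \(3c \approx 0.324\), consistent with the value of c reported above.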

  2. (b)

    Fixed point iteration method

    Again we set \({{\tilde{u}}}_0=0\) and, as in the two previous examples, perform computations on the grid of points \(t_{arr}\). Eight iterations are sufficient to obtain stable results with the left-hand side error \({{\tilde{L}}}_8(t)\) bounded by \(9\times 10^{-19}\). As before, we conclude that for the current problem FPIM tends to the solution less quickly than VIM (here we get the left-hand side error \({{\tilde{L}}}_2(t) \le 0.02\) compared to the error \(L_2(t) \le 6\times 10^{-4}\) in VIM; see Figs. 5 and 6).

Fig. 5

VIM method in Example 4.3—plots of \(u_2\) and \(L_2\)

Fig. 6

FPIM method in Example 4.3—plots of \(\tilde{u}_{2}\) and \({{\tilde{L}}}_{2}\)

Example 4.4

The following problem differs from the other examples we consider, since here the assumptions of Theorem 2.1 are not satisfied. Nevertheless, both methods yield a numerical solution.

We assume that

$$\begin{aligned} \begin{array}{l} \bullet \ \ \ A=1\\ \bullet \ \ \ a(t)=e^{-t}\\ \bullet \ \ \ f(t,x)=0.113\cdot (x+1)^2. \end{array} \end{aligned}$$

Notice that condition (H3) is not satisfied for \(f(t,x)=c\cdot (x+1)^2\) with \(c>\frac{1}{8e}\) (and here \(c=0.113>\frac{1}{8e}\)). What is characteristic here is that FPIM converges to the numerical solution very slowly (our implementation of this method needs about 10 times as many iterations as VIM to reduce the left-hand side error to the same level).

5 Conclusions

The first aim of this paper was to prove the existence of positive (nonnegative and nontrivial) solutions to a wide class of, in general, singular problems inspired by the model of nonlinear reactor dynamics. Here we applied classical methods based on the fixed point approach. The second purpose of our work was to extend the class of boundary value problems to which the variational iteration method can be applied. Here we showed the application of VIM to nonlinearities which may also depend on t and can have a more general form with respect to u.

In the calculations for each of the problems presented in the previous section, VIM tends, in terms of the number of iterations, to the approximate solution more rapidly than the fixed point iteration method. According to our experiments, VIM already guarantees acceptable accuracy in the second or third iteration step, while FPIM reaches a similar error only in later steps. It is clear that each additional step makes the integral formula associated with the iterative scheme more and more complicated. Therefore VIM is a better tool for deriving explicit formulas of approximate solutions.

On the other hand, in our implementation of FPIM we approximate the solution on a grid of points and, owing to that, obtain very accurate results much faster (with respect to calculation time). Therefore, if one does not necessarily need a functional formula of an approximate solution, but is only looking for the values of that approximation at particular points, then, in our experience, FPIM would be a better choice due to the shorter calculation time as well as the simplicity of the numerical scheme.

The form of the estimates used in the proof of Theorem 2.1 corresponds to the generality of the class of admissible nonlinearities. Example 4.4 suggests that the next step of this research could be the consideration of nonlinearities for which the estimates (H3) [or \((H3')\)] do not hold. It would be interesting to check whether in such situations VIM would prove to be a more effective numerical procedure. It is clear that such problems require a different theoretical approach.