1 Introduction

Singularly perturbed boundary value problems (SPBVPs) occur frequently in many areas of physical phenomena, in particular in fluid dynamics, elasticity, neurobiology, quantum mechanics, oceanography, and reaction-diffusion processes. The solutions of these problems often exhibit sharp boundary layers. The perturbation parameter multiplies the highest derivative, and its small values cause trouble for standard numerical schemes for the solution of SPBVPs. It is therefore important to find accurate numerical and analytic solutions of these types of problems. Second order SPBVPs appear in several different forms, but we deal with the following:

$$\begin{aligned}& \varepsilon Z^{\bullet \bullet }(u)=p(u)Z^{\bullet }(u)+q(u)Z(u)+g(u), \end{aligned}$$
(1)
$$\begin{aligned}& Z(a)=\alpha _{0}, \qquad Z(b)=\alpha _{1} , \quad a \leqslant u \leqslant b, \end{aligned}$$
(2)

where \(0 < \varepsilon < 1\), while \(p(u)\), \(q(u)\), \(g(u)\) are bounded, real valued functions; \(g(u)\), \(\alpha _{0}\) and \(\alpha _{1}\) depend on ε. We refer to Ascher et al. [1] for more details regarding this type of SPBVP.

Here, we first present a short review of different methods for the solution of second order SPBVPs; then we discuss a subdivision-based solution of SPBVPs.

Second order SPBVPs were solved by cubic spline schemes by Aziz and Khan [2, 6] in 2005. In the same year, Bawa and Natesan [3] solved these problems using quintic-spline-based approximating schemes. Kadalbajoo and Aggarwal [5] and Tirmizi et al. [17] solved self-adjoint SPBVPs by B-spline collocation and non-polynomial spline function schemes, respectively. Kumar and Mehra [7] and Pandit and Kumar [14] solved SPBVPs by a wavelet optimized difference method and a uniform Haar wavelet method, respectively.

Second order SPBVPs were also solved in [9, 12, 13] using finite difference schemes.

Linear SPBVPs were solved in [4, 11, 15] using interpolating subdivision schemes. To date, the solution of second order SPBVPs by subdivision techniques has not been reported. We develop an algorithm by using the 6-point interpolating subdivision scheme (6PISS) [8]. We have

$$\begin{aligned} \textstyle\begin{cases} Q^{k+1}_{2i}=Q^{k}_{i}, \\ Q^{k+1}_{2i+1}=\mu (Q_{i-2}^{k}+Q_{i+3}^{k} )- (3 \mu +\frac{1}{16} )(Q_{i-1}^{k}+Q_{i+2}^{k})+ (2\mu + \frac{9}{16} ) (Q_{i}^{k}+Q_{i+1}^{k}), \end{cases}\displaystyle \end{aligned}$$
(3)

where the scheme is \(C^{2}\)-continuous for \(0<\mu <0.042\), has support width \((-5, 5)\), and has a fourth order of approximation. It satisfies the 2-scale relation

$$\begin{aligned} \rho (u) =&\rho (2u)+ \biggl(2\mu +\frac{9}{16} \biggr) \bigl\{ \rho (2u-1)+\rho (2u+1) \bigr\} - \biggl(3\mu +\frac{1}{16} \biggr) \bigl\{ \rho (2u-3)+\rho (2u+3) \bigr\} \\ &{} +\mu \bigl\{ \rho (2u-5)+\rho (2u+5) \bigr\} ,\quad u\in \mathbb{R}, \end{aligned}$$
(4)

where

$$\begin{aligned} \rho (u)= \textstyle\begin{cases} 1&\text{for } u=0, \\ 0&\text{for } u\neq 0. \end{cases}\displaystyle \end{aligned}$$
(5)
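For concreteness, a minimal Python sketch of one refinement step of rule (3) is given below. The clamped treatment of indices falling outside a finite sequence is our own illustrative assumption and is not part of the scheme itself.

```python
import numpy as np

def refine_6piss(Q, mu=0.04):
    """One refinement step of the 6-point interpolating scheme (3).

    Even-indexed outputs copy the old values; odd-indexed outputs apply
    the 6-point rule.  Out-of-range indices are clamped to the ends
    (an illustrative boundary choice only).
    """
    n = len(Q)
    out = np.empty(2 * n - 1)
    out[0::2] = Q                                   # Q^{k+1}_{2i} = Q^k_i
    clamp = lambda j: min(max(j, 0), n - 1)
    for i in range(n - 1):                          # new midpoints Q^{k+1}_{2i+1}
        out[2 * i + 1] = (mu * (Q[clamp(i - 2)] + Q[clamp(i + 3)])
                          - (3 * mu + 1 / 16) * (Q[clamp(i - 1)] + Q[clamp(i + 2)])
                          + (2 * mu + 9 / 16) * (Q[clamp(i)] + Q[clamp(i + 1)]))
    return out
```

Repeated application of this step to a set of control values produces denser and denser data converging to the \(C^{2}\) limit curve.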

The rest of the paper is organized as follows. In Sect. 2, we first find the derivatives of \(\rho (u)\) and then use them to develop the collocation algorithm. The convergence of the method is discussed in Sect. 3. In Sect. 4, we present the numerical solutions of different problems and compare them with the solutions obtained by other methods. Section 5 contains our conclusions.

2 The numerical algorithm

In this section, we develop an algorithm to deal with second order SPBVPs. First we discuss the derivatives of the 2-scale relation, which serves as the basis function of the subdivision scheme.

2.1 Derivatives of 2-scale relations

The 6PISS is \(C^{2}\)-continuous [8], so its 2-scale relation \(\rho (u)\) is also \(C^{2}\)-continuous. We first find the left and right eigenvectors of the subdivision matrix of the 6PISS and then use them to find the derivatives of \(\rho (u)\). For simplicity, we choose \(\mu =0.04\) when computing the eigenvectors. We follow an approach similar to [4, 11] to find the derivatives. The first two derivatives of the 2-scale relation are given in Table 1.

Table 1 2-scale relation and its derivatives
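As a rough numerical cross-check of the tabulated values (the derivation in this paper goes through the eigenvectors of the subdivision matrix), one can also exploit the interpolating property: subdividing delta data produces samples of ρ at dyadic points, which can then be finite-differenced. A sketch, reusing refine_6piss from the snippet in Sect. 1:

```python
import numpy as np

def rho_samples(levels=8, mu=0.04, half_width=16):
    """Samples of rho on the dyadic grid of spacing 2**(-levels); for an
    interpolating scheme, refining delta initial data produces values
    that lie exactly on the basis function."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    Q = (x == 0).astype(float)                # rho restricted to the integers
    for _ in range(levels):
        Q = refine_6piss(Q, mu)               # sketch from Sect. 1
        x = np.linspace(x[0], x[-1], len(Q))
    return x, Q

def rho_derivatives_at_integers(levels=8, mu=0.04):
    """Central-difference estimates of rho'(k) and rho''(k), k = -4..4."""
    x, Q = rho_samples(levels, mu)
    h = x[1] - x[0]
    d1 = np.gradient(Q, h)
    d2 = np.gradient(d1, h)
    idx = np.rint((np.arange(-4, 5) - x[0]) / h).astype(int)
    return dict(zip(range(-4, 5), d1[idx])), dict(zip(range(-4, 5), d2[idx]))
```

These finite-difference estimates are only approximate and serve as a sanity check on the eigenvector-based values of Table 1.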

2.2 The 6PISS based algorithm

Let m be an indexing parameter, chosen greater than or equal to the last (right end) integer index of the right eigenvector corresponding to the eigenvalue \(\frac{1}{2}\) of the subdivision matrix of (3). Depending on m, we define \(h=1/m\) and the nodes \(\upsilon _{\kappa _{1}}=\kappa _{1}/m=\kappa _{1}h\) with \(\kappa _{1}=0,1,2,\ldots,m\). Finally, we suppose that the approximate solution of (1) is

$$\begin{aligned} D(\upsilon )=\sum_{\kappa _{1}=-4}^{m+4}d_{\kappa _{1}} \rho \biggl(\frac{\upsilon -\upsilon _{\kappa _{1}}}{h} \biggr), \quad 0\leq \upsilon \leq 1, \end{aligned}$$
(6)

where \(\{d_{\kappa _{1}}\}\) are the unknown coefficients to be determined. Substituting D into (1) at the collocation points, and writing a for the perturbation parameter ε, we get

$$\begin{aligned} a D^{\bullet \bullet }(\upsilon _{\kappa })=p(\upsilon _{\kappa }) D^{ \bullet }(\upsilon _{\kappa })+q(\upsilon _{\kappa }) D(\upsilon _{\kappa })+g( \upsilon _{\kappa }), \quad \kappa =0, 1, 2,\ldots, m, \end{aligned}$$
(7)

with given boundary conditions at both ends of the interval

$$\begin{aligned} D(0)=\alpha _{0}, \quad\quad D(1)=\alpha _{1}. \end{aligned}$$

From (6), we have

$$\begin{aligned} \begin{gathered} D^{\bullet }(\upsilon _{\kappa })=\frac{1}{h}\sum_{\kappa _{1}=-4}^{m+4}d_{ \kappa _{1}} \rho ^{\bullet } \biggl( \frac{\upsilon _{\kappa }-\upsilon _{\kappa _{1}}}{h} \biggr), \\ D^{\bullet \bullet }(\upsilon _{\kappa })=\frac{1}{h^{2}}\sum _{ \kappa _{1}=-4}^{m+4}d_{\kappa _{1}}\rho ^{\bullet \bullet } \biggl( \frac{\upsilon _{\kappa }-\upsilon _{\kappa _{1}}}{h} \biggr). \end{gathered} \end{aligned}$$
(8)

Substituting (6) and (8) into (7), we get the system of equations

$$\begin{aligned}& a\sum_{\kappa _{1}=-4}^{m+4}d_{\kappa _{1}}\rho ^{\bullet \bullet } \biggl(\frac{\upsilon _{\kappa }-\upsilon _{\kappa _{1}}}{h} \biggr)-hp_{\kappa }\sum _{\kappa _{1}=-4}^{m+4}d_{\kappa _{1}} \rho ^{\bullet } \biggl( \frac{\upsilon _{\kappa }-\upsilon _{\kappa _{1}}}{h} \biggr) \\& \quad{} -h^{2}q_{ \kappa }\sum _{\kappa _{1}=-4}^{m+4}d_{\kappa _{1}}\rho \biggl( \frac{\upsilon _{\kappa }-\upsilon _{\kappa _{1}}}{h} \biggr)=h^{2}g_{ \kappa }, \end{aligned}$$

where \(p_{\kappa }=p(\upsilon _{\kappa })\), \(q_{\kappa }=q(\upsilon _{\kappa })\) and \(g_{\kappa }=g(\upsilon _{\kappa })\). This implies

$$\begin{aligned} \sum_{\kappa _{1}=-4}^{m+4}d_{\kappa _{1}} \biggl\{ a \rho ^{ \bullet \bullet } \biggl( \frac{\upsilon _{\kappa }-\upsilon _{\kappa _{1}}}{h} \biggr)-hp_{ \kappa }\rho ^{\bullet } \biggl( \frac{\upsilon _{\kappa }-\upsilon _{\kappa _{1}}}{h} \biggr) -h^{2}q_{ \kappa } \rho \biggl( \frac{\upsilon _{\kappa }-\upsilon _{\kappa _{1}}}{h} \biggr) \biggr\} =h^{2}g_{\kappa }. \end{aligned}$$

This further implies

$$\begin{aligned} \sum_{\kappa _{1}=-4}^{m+4}d_{\kappa _{1}} \bigl\{ a\rho ^{ \bullet \bullet }(\kappa -\kappa _{1})-hp_{\kappa } \rho ^{\bullet }( \kappa -\kappa _{1})-h^{2}q_{\kappa } \rho (\kappa -\kappa _{1}) \bigr\} =h^{2}g_{\kappa }, \end{aligned}$$
(9)

where \(\kappa =0, 1, 2,\ldots, m\), \(\upsilon _{\kappa _{1}}=\kappa _{1}h\) and \(\upsilon _{\kappa }=\kappa h\). By using the notation “\(\rho (\kappa _{1})=\rho _{\kappa _{1}}\)”, (9) can be written as

$$\begin{aligned} \sum_{\kappa _{1}=-4}^{m+4}d_{\kappa _{1}} \bigl\{ a\rho ^{ \bullet \bullet }_{\kappa -\kappa _{1}}-hp_{\kappa }\rho ^{\bullet }_{ \kappa -\kappa _{1}}-h^{2}q_{\kappa }\rho _{\kappa -\kappa _{1}} \bigr\} =h^{2}g_{\kappa }, \quad \kappa =0, 1, 2,\ldots, m. \end{aligned}$$
(10)

As we observe from Table 1, \(\rho ^{\bullet }_{-\kappa _{1}}=-\rho ^{\bullet }_{\kappa _{1}}\) and \(\rho ^{\bullet \bullet }_{-\kappa _{1}}=\rho ^{\bullet \bullet }_{ \kappa _{1}}\); hence, for \(\kappa =0, 1, 2,\ldots, m\), (10) becomes

$$\begin{aligned} \sum_{\kappa _{1}=-4}^{m+4}d_{\kappa _{1}} \bigl\{ a\rho ^{ \bullet \bullet }_{\kappa _{1}-\kappa } +hp_{\kappa }\rho ^{\bullet }_{ \kappa _{1}-\kappa }-h^{2}q_{\kappa }\rho _{\kappa _{1}-\kappa } \bigr\} =h^{2}g_{\kappa }. \end{aligned}$$
(11)

The above system of equations is summarized in the following proposition.

Proposition 1

The equivalent form of the system (11) is

$$\begin{aligned} \sum_{\kappa _{1}=-4}^{4}d_{\kappa +\kappa _{1}} \tau ^{ \kappa }_{\kappa _{1}}=h^{2}g_{\kappa }, \quad \kappa = 0, 1, 2,\ldots, m, \end{aligned}$$
(12)

where

$$\begin{aligned} \tau _{\kappa _{1}}^{\kappa }= \textstyle\begin{cases} a\rho ^{\bullet \bullet }_{0}-h^{2}q_{\kappa },&\textit{for } \kappa _{1}=0, \\ a\rho ^{\bullet \bullet }_{\kappa _{1}}+hp_{\kappa }\rho ^{\bullet }_{ \kappa _{1}}, &\textit{for } \kappa _{1}\neq 0. \end{cases}\displaystyle \end{aligned}$$
(13)

Proof

Substituting \(\kappa =0\) in (11), we get

$$\begin{aligned} \sum_{\kappa _{1}=-4}^{m+4}d_{\kappa _{1}} \bigl\{ a \rho ^{ \bullet \bullet }_{\kappa _{1}}+hp_{0}\rho ^{\bullet }_{\kappa _{1}}-h^{2}q_{0} \rho _{\kappa _{1}} \bigr\} =h^{2}g_{0}. \end{aligned}$$

By expanding the above equation, we get

$$\begin{aligned}& d_{-4} \bigl\{ a\rho ^{\bullet \bullet }_{-4}+hp_{0} \rho ^{\bullet }_{-4}-h^{2}q_{0} \rho _{-4} \bigr\} +d_{-3} \bigl\{ a\rho ^{\bullet \bullet }_{-3}+hp_{0} \rho ^{ \bullet }_{-3}-h^{2}q_{0}\rho _{-3} \bigr\} +\cdots \\& \quad{} +d_{0} \bigl\{ a\rho ^{\bullet \bullet }_{0}+hp_{0} \rho ^{\bullet }_{0}-h^{2}q_{0} \rho _{0} \bigr\} +\cdots +d_{m+3} \bigl\{ a\rho ^{\bullet \bullet }_{m+3} +hp_{0} \rho ^{\bullet }_{m+3}-h^{2}q_{0} \rho _{m+3} \bigr\} \\& \quad{} +d_{m+4} \bigl\{ a\rho ^{\bullet \bullet }_{m+4}+hp_{0} \rho ^{\bullet }_{m+4}-h^{2}q_{0} \rho _{m+4} \bigr\} =h^{2}g_{0}. \end{aligned}$$

Since the support width of the 6PISS is \((-5,5)\), the values \(\rho ^{\bullet }_{\kappa _{1}}\) and \(\rho ^{\bullet \bullet }_{\kappa _{1}}\) can be nonzero only for \(\kappa _{1}\in [-4, 4]\) and vanish outside this range. The above equation therefore simplifies to

$$\begin{aligned}& d_{-4} \bigl\{ a\rho ^{\bullet \bullet }_{-4}+hp_{0} \rho ^{\bullet }_{-4}-h^{2}q_{0} \rho _{-4} \bigr\} +d_{-3} \bigl\{ a\rho ^{\bullet \bullet }_{-3}+hp_{0} \rho ^{ \bullet }_{-3}-h^{2}q_{0}\rho _{-3} \bigr\} +d_{-2} \bigl\{ a\rho ^{\bullet \bullet }_{-2} \\& \quad{} +hp_{0}\rho ^{\bullet }_{-2} -h^{2}q_{0}\rho _{-2} \bigr\} +d_{-1} \bigl\{ a\rho ^{ \bullet \bullet }_{-1}+hp_{0}\rho ^{\bullet }_{-1}-h^{2}q_{0} \rho _{-1} \bigr\} +d_{0} \bigl\{ a\rho ^{\bullet \bullet }_{0}+hp_{0} \rho ^{\bullet }_{0} \\& \quad{} -h^{2}q_{0}\rho _{0} \bigr\} +d_{1} \bigl\{ a\rho ^{\bullet \bullet }_{1}+hp_{0} \rho ^{\bullet }_{1}-h^{2}q_{0}\rho _{1} \bigr\} +d_{2} \bigl\{ a\rho ^{\bullet \bullet }_{2}+hp_{0} \rho ^{\bullet }_{2}-h^{2}q_{0}\rho _{2} \bigr\} +d_{3} \bigl\{ a\rho ^{\bullet \bullet }_{3} \\& \quad{} +hp_{0}\rho ^{\bullet }_{3}-h^{2}q_{0} \rho _{3} \bigr\} +d_{4} \bigl\{ a\rho ^{ \bullet \bullet }_{4}+hp_{0} \rho ^{\bullet }_{4}-h^{2}q_{0}\rho _{4} \bigr\} =h^{2}g_{0}. \end{aligned}$$

By substituting the values of \(\rho _{\kappa _{1}}\) from (5) and using \(\rho ^{\bullet }_{0}=0\), we get

$$\begin{aligned}& d_{-4} \bigl\{ a\rho ^{\bullet \bullet }_{-4}+hp_{0} \rho ^{\bullet }_{-4} \bigr\} +d_{-3} \bigl\{ a\rho ^{\bullet \bullet }_{-3}+hp_{0}\rho ^{\bullet }_{-3} \bigr\} +d_{-2} \bigl\{ a\rho ^{\bullet \bullet }_{-2} +hp_{0}\rho ^{\bullet }_{-2} \bigr\} \\& \quad{} +d_{-1} \bigl\{ a\rho ^{\bullet \bullet }_{-1}+hp_{0} \rho ^{\bullet }_{-1} \bigr\} +d_{0} \bigl\{ a\rho ^{\bullet \bullet }_{0}-h^{2}q_{0}\rho _{0} \bigr\} +d_{1} \bigl\{ a \rho ^{\bullet \bullet }_{1}+hp_{0} \rho ^{\bullet }_{1} \bigr\} \\& \quad{} +d_{2} \bigl\{ a\rho ^{\bullet \bullet }_{2}+hp_{0} \rho ^{\bullet }_{2} \bigr\} +d_{3} \bigl\{ a\rho ^{\bullet \bullet }_{3} +hp_{0}\rho ^{\bullet }_{3} \bigr\} +d_{4} \bigl\{ a \rho ^{\bullet \bullet }_{4}+hp_{0} \rho ^{\bullet }_{4} \bigr\} =h^{2}g_{0}. \end{aligned}$$

If

$$\begin{aligned}& \tau ^{0}_{\pm 4}=a\rho ^{\bullet \bullet }_{\pm 4}+hp_{0} \rho ^{ \bullet }_{\pm 4},\quad \quad \tau ^{0}_{\pm 3}=a \rho ^{\bullet \bullet }_{ \pm 3}+hp_{0}\rho ^{\bullet }_{\pm 3}, \quad \quad \tau ^{0}_{\pm 2}=a\rho ^{ \bullet \bullet }_{\pm 2}+hp_{0} \rho ^{\bullet }_{\pm 2}, \\& \tau ^{0}_{\pm 1}=a\rho ^{\bullet \bullet }_{\pm 1}+hp_{0} \rho ^{ \bullet }_{\pm 1},\quad \quad \tau ^{0}_{0}=a \rho ^{\bullet \bullet }_{0}-h^{2}q_{0}, \end{aligned}$$

the above equation becomes

$$\begin{aligned} \sum_{\kappa _{1}=-4}^{4}d_{\kappa _{1}}\tau ^{0}_{\kappa _{1}}=h^{2}g_{0}. \end{aligned}$$

Similarly, for \(\kappa =1, 2, 3,\ldots, m\), we get

$$\begin{aligned} \sum_{\kappa _{1}=-4}^{4}d_{\kappa _{1}+\kappa } \tau _{ \kappa _{1}}^{\kappa }=h^{2}g_{\kappa }, \end{aligned}$$

where, for \(\kappa _{1}= -4, -3,\ldots, 3, 4\) and \(\kappa =1, 2, 3,\ldots, m\), we have

$$\begin{aligned} \tau _{\kappa _{1}}^{\kappa }= \textstyle\begin{cases} a\rho ^{\bullet \bullet }_{0}-h^{2}q_{\kappa }&\text{for } \kappa _{1}=0, \\ a\rho ^{\bullet \bullet }_{\kappa _{1}}+hp_{\kappa }\rho ^{\bullet }_{ \kappa _{1}} &\text{for }\kappa _{1}\neq 0. \end{cases}\displaystyle \end{aligned}$$

This completes the proof. □
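A direct transcription of the stencil (12)-(13) might look as follows; the dictionaries d1rho and d2rho stand for the \(\rho ^{\bullet }(\kappa _{1})\) and \(\rho ^{\bullet \bullet }(\kappa _{1})\) values of Table 1 (not reproduced here), and a denotes the perturbation parameter as in (7).

```python
def tau(kappa, kappa1, a, h, p, q, d1rho, d2rho):
    """Stencil coefficient tau^kappa_{kappa_1} of (12)-(13).

    p, q are callables p(u), q(u); d1rho[k1] = rho'(k1) and
    d2rho[k1] = rho''(k1) for k1 = -4, ..., 4 (Table 1 values).
    """
    u = kappa * h
    if kappa1 == 0:
        return a * d2rho[0] - h**2 * q(u)
    return a * d2rho[kappa1] + h * p(u) * d1rho[kappa1]
```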

2.2.1 Matrix representation of the linear system

The matrix representation of the linear system (12) is given by

$$\begin{aligned} \mathbb{S}\mathbb{D}=\mathbb{G}_{1}, \end{aligned}$$
(14)

where

$$\begin{aligned} \mathbb{S}= \bigl(\xi _{s}^{r-1} \bigr)_{(m+1)\times (m+9)}, \end{aligned}$$
(15)

where \(r=1, 2,\ldots,m+1\) and \(s=-4, -3,\ldots, m+3, m+4\) represent the row and column indices, respectively, and

$$\begin{aligned} \xi ^{r-1}_{s}= \textstyle\begin{cases} \tau _{s-r+1}^{r-1},&\text{for } -4\leqslant s-r+1 \leqslant 4, \\ 0, &\text{otherwise}. \end{cases}\displaystyle \end{aligned}$$

The column matrices \(\mathbb{D}\) and \(\mathbb{G}_{1}\) are given by

$$\begin{aligned} \mathbb{D}=(d_{-4}, d_{-3},\ldots, d_{m+3}, d_{m+4})^{T} \end{aligned}$$
(16)

and

$$\begin{aligned} \mathbb{G}_{1}=h^{2} \times (g_{0}, g_{1},\ldots, g_{m-1}, g_{m})^{T}. \end{aligned}$$
(17)

The system (14) in its present form does not have a unique solution; we need eight extra equations. Two of them come from the boundary conditions (2), i.e., \(D(0)=\alpha _{0}\) and \(D(1)=\alpha _{1}\); the remaining six equations are constructed in the next section.

2.2.2 End point constraints

If the data points are given then the 6PISS fits them with a fourth order of approximation. Therefore we use fourth order polynomials to define the constraints at the end points. Here we suggest two types of polynomials, namely a simple cubic polynomial (i.e. a polynomial of order 4) and a cubic polynomial based on cardinal basis functions, to obtain the constraints.

C-1: Constraints by a polynomial of degree three:

We use the fourth order polynomial \(C_{1}(\upsilon )\) which interpolates the data \((\upsilon _{\kappa _{1}}, d_{\kappa _{1}})\) for \(0\leq \kappa _{1} \leq 3\) to compute the left end coefficients \(d_{-3}\), \(d_{-2}\), \(d_{-1}\). Precisely, we have

$$\begin{aligned} d_{-\kappa _{1}}=C_{1}(-\upsilon _{\kappa _{1}}), \quad \kappa _{1}= 1,2,3, \end{aligned}$$

where

$$\begin{aligned} C_{1}(\upsilon _{\kappa _{1}})=\sum_{\kappa =1}^{4} \begin{pmatrix} 4 \\ \kappa \end{pmatrix}(-1)^{\kappa +1}D({\upsilon _{\kappa _{1}-\kappa }}). \end{aligned}$$

Since, by (6), \(D(\upsilon _{\kappa _{1}})=d_{\kappa _{1}}\) for \(\kappa _{1}=1, 2, 3\), replacing \(\upsilon _{\kappa _{1}}\) by \(-\upsilon _{\kappa _{1}}\) gives

$$\begin{aligned} C_{1}(-\upsilon _{\kappa _{1}})=\sum_{\kappa =1}^{4} \begin{pmatrix} 4 \\ \kappa \end{pmatrix}(-1)^{\kappa +1}d_{-\kappa _{1}+\kappa }. \end{aligned}$$

Hence, we get the following three constraints defined at the left end points:

$$\begin{aligned} \sum_{\kappa =0}^{4} \begin{pmatrix} 4 \\ \kappa \end{pmatrix}(-1)^{\kappa }d_{-\kappa _{1}+\kappa }=0, \quad \kappa _{1}= 1, 2, 3. \end{aligned}$$
(18)

A similar procedure is adopted at the right end, i.e., we define \(d_{\kappa _{1}}=C_{1}(\upsilon _{\kappa _{1}})\), \(\kappa _{1}= m+1, m+2, m+3\), and

$$\begin{aligned} C_{1}(\upsilon _{\kappa _{1}})=\sum_{\kappa =1}^{4} \begin{pmatrix} 4 \\ \kappa \end{pmatrix}(-1)^{\kappa +1}d_{\kappa _{1}-\kappa }. \end{aligned}$$

So the following three constraints are defined at the right end:

$$\begin{aligned} \sum_{\kappa =0}^{4} \begin{pmatrix} 4 \\ \kappa \end{pmatrix}(-1)^{\kappa }d_{\kappa _{1}-\kappa }=0, \quad \kappa _{1}=m+1, m+2, m+3. \end{aligned}$$
(19)

C-2: Constraints by cardinal basis functions:

The following fourth order polynomial \(C_{2}(\upsilon )\) can be used to find the left end coefficients \(d_{-3}\), \(d_{-2}\), \(d_{-1}\):

$$ d_{-\kappa _{1}}=C_{2}(-\upsilon _{\kappa _{1}}), \quad \kappa _{1}=1,2, 3, $$

where

$$\begin{aligned} C_{2}(\upsilon )=d_{0}\zeta _{0} \biggl( \frac{\upsilon -\upsilon _{0}}{h} \biggr)+d_{1}\zeta _{1} \biggl( \frac{\upsilon -\upsilon _{0}}{h} \biggr)+d^{\bullet \bullet }_{0} \zeta ^{*}_{0} \biggl(\frac{\upsilon -\upsilon _{0}}{h} \biggr)+d^{ \bullet \bullet }_{1}\zeta ^{*}_{1} \biggl( \frac{\upsilon -\upsilon _{0}}{h} \biggr), \end{aligned}$$
(20)

while the basis functions are given by

$$\begin{aligned}& \zeta _{0} \biggl(\frac{\upsilon -\upsilon _{0}}{h} \biggr)=1- \biggl( \frac{\upsilon -\upsilon _{0}}{h} \biggr), \\& \zeta _{1} \biggl(\frac{\upsilon -\upsilon _{0}}{h} \biggr)= \biggl( \frac{\upsilon -\upsilon _{0}}{h} \biggr), \\& \zeta ^{*}_{0} \biggl(\frac{\upsilon -\upsilon _{0}}{h} \biggr)=- \frac{1}{6} \biggl(\frac{\upsilon -\upsilon _{0}}{h} \biggr) \biggl( \frac{\upsilon -\upsilon _{0}}{h}-1 \biggr) \biggl( \frac{\upsilon -\upsilon _{0}}{h}-2 \biggr), \\& \zeta ^{*}_{1} \biggl(\frac{\upsilon -\upsilon _{0}}{h} \biggr)= \frac{1}{6} \biggl(\frac{\upsilon -\upsilon _{0}}{h} \biggr) \biggl( \frac{\upsilon -\upsilon _{0}}{h}-1 \biggr) \biggl( \frac{\upsilon -\upsilon _{0}}{h}+1 \biggr), \end{aligned}$$

and for \(t=0, 1\)

$$\begin{aligned} d^{\bullet \bullet }_{t}=p \biggl( \frac{\upsilon _{t}-\upsilon _{0}}{h} \biggr)d_{t}+q \biggl( \frac{\upsilon _{t}-\upsilon _{0}}{h} \biggr)d_{t}+g \biggl( \frac{\upsilon _{t}-\upsilon _{0}}{h} \biggr). \end{aligned}$$

A similar procedure is adopted at the right end, i.e., \(d_{\kappa _{1}}=C_{2}(\upsilon _{\kappa _{1}})\), \(\kappa _{1}= m+1, m+2, m+3\), where

$$\begin{aligned} C_{2}(\upsilon ) =&d_{m}\zeta _{m} \biggl( \frac{\upsilon -\upsilon _{m}}{h} \biggr)+d_{m+1}\zeta _{m+1} \biggl( \frac{\upsilon -\upsilon _{m}}{h} \biggr)+d^{\bullet \bullet }_{m} \zeta ^{*}_{m} \biggl(\frac{\upsilon -\upsilon _{m}}{h} \biggr) \\ &{} +d^{\bullet \bullet }_{m+1}\zeta ^{*}_{m+1} \biggl( \frac{\upsilon -\upsilon _{m}}{h} \biggr) \end{aligned}$$
(21)

and

$$\begin{aligned}& \zeta _{m} \biggl(\frac{\upsilon -\upsilon _{m}}{h} \biggr)=1- \biggl( \frac{\upsilon -\upsilon _{m}}{h} \biggr), \\& \zeta _{m+1} \biggl(\frac{\upsilon -\upsilon _{m}}{h} \biggr)= \biggl( \frac{\upsilon -\upsilon _{m}}{h} \biggr), \\& \zeta ^{*}_{m} \biggl(\frac{\upsilon -\upsilon _{m}}{h} \biggr) =- \frac{1}{6} \biggl(\frac{\upsilon -\upsilon _{m}}{h} \biggr) \biggl(\frac{\upsilon -\upsilon _{m}}{h}-1 \biggr) \biggl( \frac{\upsilon -\upsilon _{m}}{h}-2 \biggr), \\& \zeta ^{*}_{m+1} \biggl(\frac{\upsilon -\upsilon _{m}}{h} \biggr)= \frac{1}{6} \biggl(\frac{\upsilon -\upsilon _{m}}{h} \biggr) \biggl( \frac{\upsilon -\upsilon _{m}}{h}-1 \biggr) \biggl( \frac{\upsilon -\upsilon _{m}}{h}+1 \biggr), \end{aligned}$$

and for \(t=m, m+1\)

$$\begin{aligned} d^{\bullet \bullet }_{t}=p \biggl( \frac{\upsilon _{t}-\upsilon _{m}}{h} \biggr)d_{t}+q \biggl( \frac{\upsilon _{t}-\upsilon _{m}}{h} \biggr)d_{t}+g \biggl( \frac{\upsilon _{t}-\upsilon _{m}}{h} \biggr). \end{aligned}$$

2.2.3 Stable singularly perturbed system

Finally, we get a stable singularly perturbed system of \(m+9\) equations in \(m+9\) unknowns, obtained from (2) and (12) together with either (18) and (19) or (20) and (21).

By C-1 constraints:

If we use (2), (12), (18) and (19) then the system can be expressed as

$$\begin{aligned} \mathbb{S}_{1}\mathbb{D}=\mathbb{G}, \end{aligned}$$
(22)

where \(\mathbb{S}_{1}= (\mathbb{S}_{L_{1}}^{T}, \mathbb{S}^{T}, \mathbb{S}_{R_{1}}^{T})^{T}\) and \(\mathbb{S}\) is defined in (15). The matrix \([\mathbb{S}_{L_{1}}]_{4\times (m+9)}\) is defined as

$$\begin{aligned} \mathbb{S}_{L_{1}} = \left ( \textstyle\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 & 1 & -4 & 6 & -4 & 1 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & -4 & 6 & -4 & 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 1 & -4 & 6 & -4 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 \end{array}\displaystyle \right ), \end{aligned}$$

its first three rows and the fourth row are obtained from (18) and (2), respectively,

$$\begin{aligned} \mathbb{S}_{R_{1}} = \left ( \textstyle\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 & 0 & \cdots &0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0& 0 & 1 & -4 & 6 & -4 & 1 & 0 & 0&0 \\ 0 & 0 & \cdots & 0 & 0 &0 &0 & 1 & -4 & 6 & -4 & 1 & 0 & 0 \\ 0 & 0 & \cdots & 0 &0 &0 & 0 & 0 & 1 & -4 & 6 & -4 & 1 & 0 \end{array}\displaystyle \right ), \end{aligned}$$

its first row and the last three rows are obtained from (2) and (19), respectively,

$$\begin{aligned} \mathbb{G}= \bigl( 0, 0, 0, \alpha _{0}, \mathbb{G}_{1}^{T}, \alpha _{1}, 0, 0, 0 \bigr)^{T}, \end{aligned}$$
(23)

while the matrices \(\mathbb{D}\) and \(\mathbb{G}_{1}\) are defined in (16) and (17), respectively.
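To make the assembly of (22) concrete, the following sketch builds \(\mathbb{S}_{1}\) row by row in the block order \((\mathbb{S}_{L_{1}}, \mathbb{S}, \mathbb{S}_{R_{1}})\) and performs a dense direct solve. It assumes the tau routine sketched after Proposition 1 and the Table 1 values, and it is an illustration of the structure of (22) rather than the implementation used for the numerical results below.

```python
import numpy as np

def solve_c1(m, a, p, q, g, alpha0, alpha1, d1rho, d2rho):
    """Assemble and solve system (22) for d = (d_{-4}, ..., d_{m+4})."""
    h = 1.0 / m
    n = m + 9
    S = np.zeros((n, n))
    G = np.zeros(n)
    col = lambda k1: k1 + 4                        # column index of d_{k1}
    fourth_diff = [1.0, -4.0, 6.0, -4.0, 1.0]      # rows of (18) and (19)

    # S_{L_1}: constraints (18) for kappa_1 = 3, 2, 1 and D(0) = alpha_0
    for r, k1 in enumerate((3, 2, 1)):
        S[r, col(-k1):col(-k1) + 5] = fourth_diff
    S[3, col(0)] = 1.0
    G[3] = alpha0

    # S: collocation equations (12), kappa = 0, ..., m
    for kappa in range(m + 1):
        for k1 in range(-4, 5):
            S[4 + kappa, col(kappa + k1)] = tau(kappa, k1, a, h, p, q, d1rho, d2rho)
        G[4 + kappa] = h**2 * g(kappa * h)

    # S_{R_1}: D(1) = alpha_1 and constraints (19) for kappa_1 = m+1, m+2, m+3
    S[m + 5, col(m)] = 1.0
    G[m + 5] = alpha1
    for r, k1 in enumerate((m + 1, m + 2, m + 3)):
        S[m + 6 + r, col(k1 - 4):col(k1) + 1] = fourth_diff

    return np.linalg.solve(S, G)
```

For large m a banded or sparse solver would be the natural choice; the dense solve above is only meant to expose the structure of the system.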

By C-2 constraints:

If we use (2), (12), (20) and (21) then the system can be expressed as

$$\begin{aligned} \mathbb{S}_{2}\mathbb{D}=\mathbb{G}, \end{aligned}$$
(24)

where \(\mathbb{S}_{2}= (\mathbb{S}_{L_{2}}^{T}, \mathbb{S}^{T}, \mathbb{S}_{R_{2}}^{T})^{T}\) while the first three rows and the last row of \([{\mathbb{S}_{L_{2}}}]_{4\times (m+9)}\) are obtained from (20) and (2), respectively. Similarly the first row and the remaining three rows of \([{\mathbb{S}_{R_{2}}}]_{4\times (m+9)}\) are obtained from (2) and (21), respectively. Now we have two systems, i.e., (22) and (24).

2.3 Existence of the solution

The matrices \(\mathbb{S}_{1}\) and \(\mathbb{S}_{2}\) involved in the systems (22) and (24) are non-singular. Their non-singularity can be checked by computing their eigenvalues. We observe that for \(m\leq 500\) all eigenvalues are nonzero, so by [16] the matrices are non-singular. Non-singularity is not guaranteed for \(m > 500\).
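This check is easy to automate; the sketch below reports the smallest eigenvalue modulus of an assembled matrix and, as an extra safeguard not discussed in the text, its condition number.

```python
import numpy as np

def singularity_check(S):
    """Return (min |eigenvalue|, condition number) of S_1 or S_2; a
    nonzero minimum indicates non-singularity, while a very large
    condition number warns that the solve may still be delicate."""
    eigs = np.linalg.eigvals(S)
    return float(np.abs(eigs).min()), float(np.linalg.cond(S))
```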

3 Error estimation of the algorithm

This section discusses the mathematical results concerning the convergence of the proposed method.

Let the analytic solution of the SPBVP (1) with (2) be denoted by \(Z_{e}\); then

$$\begin{aligned} \varepsilon Z_{e}^{\bullet \bullet }(\upsilon )=p(\upsilon )Z_{e}^{ \bullet }(\upsilon )+q(\upsilon )Z_{e}(\upsilon )+g(\upsilon ). \end{aligned}$$

At the node points \(\upsilon _{\kappa }\), \(\kappa =0, 1,\ldots,m\), this implies

$$\begin{aligned} \varepsilon Z_{e}^{\bullet \bullet }(\upsilon _{\kappa })=p(\upsilon _{\kappa })Z_{e}^{ \bullet }( \upsilon _{\kappa })+q(\upsilon _{\kappa })Z_{e}(\upsilon _{ \kappa })+g(\upsilon _{\kappa }). \end{aligned}$$
(25)

Let the vector \(Z_{e}(\upsilon )\) be defined as

$$\begin{aligned} Z_{e}(\upsilon )= \bigl(Z_{e}(\upsilon _{0}), Z_{e}(\upsilon _{1}),\ldots, Z_{e}(\upsilon _{m}) \bigr)^{T}. \end{aligned}$$

By Taylor’s series

$$\begin{aligned} Z_{e}^{\bullet }(\upsilon _{\kappa }) =&\frac{1}{25{,}878h} \bigl[-256Z_{e}( \upsilon _{\kappa }-4h)-3200Z_{e}( \upsilon _{\kappa }-3h)+19{,}673Z_{e}( \upsilon _{\kappa }-2h) \\ &{} -54{,}600Z_{e}(\upsilon _{\kappa }-h)+54{,}600Z_{e}( \upsilon _{\kappa }+h)-19{,}673Z_{e}(\upsilon _{\kappa }+2h) \\ &{} +3200Z_{e}(\upsilon _{\kappa }+3h)+256Z_{e}( \upsilon _{\kappa }+4h) \bigr]+\mathcal{O} \bigl(h^{4} \bigr) \end{aligned}$$

and

$$\begin{aligned} Z_{e}^{\bullet \bullet }(\upsilon _{\kappa }) =&\frac{1}{8448h^{2}} \bigl[256Z_{e}(\upsilon _{\kappa }-4h)+1600Z_{e}( \upsilon _{\kappa }-3h)-4725Z_{e}( \upsilon _{\kappa }-2h) \\ &{} +17{,}300Z_{e}(\upsilon _{\kappa }-h)-28{,}862Z_{e}( \upsilon _{\kappa })+17{,}300Z_{e}(\upsilon _{\kappa }+h)-4725Z_{e}( \upsilon _{\kappa }+2h) \\ &{} +1600Z_{e}(\upsilon _{\kappa }+3h)+256Z_{e}( \upsilon _{\kappa }+4h) \bigr]+\mathcal{O} \bigl(h^{4} \bigr). \end{aligned}$$

Since \(D(\upsilon )\) is the approximate solution of (1) obtained from the system (22) or (24), by (7) we have, for \(\kappa =0, 1,\ldots, m\),

$$\begin{aligned} \varepsilon D^{\bullet \bullet }(\upsilon _{\kappa })=p(\upsilon _{ \kappa })D^{\bullet }(\upsilon _{\kappa })+q(\upsilon _{\kappa })D( \upsilon _{\kappa })+g( \upsilon _{\kappa }), \end{aligned}$$
(26)

where \(D^{\bullet }(\upsilon _{\kappa })\) and \(D^{\bullet \bullet }(\upsilon _{\kappa })\) are defined as

$$\begin{aligned} D^{\bullet }(\upsilon _{\kappa }) =&\frac{1}{25{,}878h} \bigl[-256d( \upsilon _{\kappa }-4h)-3200d(\upsilon _{\kappa }-3h)+19{,}673d(\upsilon _{\kappa }-2h) \\ &{}-54{,}600d(\upsilon _{\kappa }-h)+54{,}600d(\upsilon _{\kappa }+h)-19{,}673d( \upsilon _{\kappa }+2h) \\ &{} +3200d(\upsilon _{\kappa }+3h)+256d(\upsilon _{\kappa }+4h) \bigr]+\mathcal{O} \bigl(h^{4} \bigr) \end{aligned}$$

and

$$\begin{aligned} D^{\bullet \bullet }(\upsilon _{\kappa }) =&\frac{1}{8448h^{2}} \bigl[256d( \upsilon _{\kappa }-4h)+1600d(\upsilon _{\kappa }-3h)-4725d( \upsilon _{\kappa }-2h) \\ &{}+17{,}300d(\upsilon _{\kappa }-h)-28{,}862d(\upsilon _{\kappa })+17{,}300d( \upsilon _{\kappa }+h)-4725d(\upsilon _{\kappa }+2h) \\ &{} +1600d(\upsilon _{\kappa }+3h)+256d(\upsilon _{\kappa }+4h) \bigr]+\mathcal{O} \bigl(h^{4} \bigr). \end{aligned}$$

Let the error function be \(\Delta (\upsilon )=Z_{e}(\upsilon )-D(\upsilon )\) and let

$$\begin{aligned} \Delta =(\Delta _{-4}, \Delta _{-3},\ldots, \Delta _{m+3}, \Delta _{m+4}). \end{aligned}$$

Then the error vector at the given nodal values is

$$\begin{aligned} \Delta (\upsilon _{\kappa })=Z_{e}(\upsilon _{\kappa })-D( \upsilon _{ \kappa }), \quad -4\leqslant \kappa \leqslant m+4. \end{aligned}$$

This implies

$$\begin{aligned}& \Delta ^{\bullet }(\upsilon _{\kappa })=Z_{e}^{\bullet }( \upsilon _{ \kappa })-D^{\bullet }(\upsilon _{\kappa }), \quad -4 \leqslant \kappa \leqslant m+4, \\& \Delta ^{\bullet \bullet }(\upsilon _{\kappa })=Z_{e}^{\bullet \bullet }( \upsilon _{\kappa })-D^{\bullet \bullet }(\upsilon _{ \kappa }), \quad -4 \leqslant \kappa \leqslant m+4. \end{aligned}$$

The following result is obtained after subtracting (26) from (25):

$$\begin{aligned} \varepsilon \bigl[Z_{e}^{\bullet \bullet }(\upsilon _{\kappa })-D^{ \bullet \bullet }( \upsilon _{\kappa }) \bigr]=p(\upsilon _{\kappa }) \bigl[Z_{e}^{\bullet }( \upsilon _{\kappa })-D^{\bullet }( \upsilon _{ \kappa }) \bigr]+q( \upsilon _{\kappa }) \bigl[Z_{e}(\upsilon _{ \kappa })-D( \upsilon _{\kappa }) \bigr]. \end{aligned}$$

By the definition of the error vector, the above equation can be written as

$$\begin{aligned} \varepsilon \Delta ^{\bullet \bullet }(\upsilon _{\kappa })=p( \upsilon _{\kappa })\Delta ^{\bullet }(\upsilon _{\kappa })+q( \upsilon _{\kappa })\Delta (\upsilon _{\kappa }), \quad 0\leqslant \kappa \leqslant m. \end{aligned}$$

This implies

$$\begin{aligned} \varepsilon \Delta ^{\bullet \bullet }(\upsilon _{\kappa })-p( \upsilon _{\kappa })\Delta ^{\bullet }(\upsilon _{\kappa })-q( \upsilon _{\kappa })\Delta (\upsilon _{\kappa })=0, \quad 0 \leqslant \kappa \leqslant m, \end{aligned}$$
(27)

where for \(0\leqslant \kappa \leqslant m\)

$$\begin{aligned} \Delta ^{\bullet }(\upsilon _{\kappa }) =&\frac{1}{25{,}878h} \bigl[-256 \Delta (\upsilon _{\kappa }-4h)-3200\Delta (\upsilon _{\kappa }-3h)+19{,}673 \Delta (\upsilon _{\kappa }-2h) \\ &{} -54{,}600\Delta (\upsilon _{\kappa }-h)+54{,}600\Delta ( \upsilon _{\kappa }+h)-19{,}673\Delta (\upsilon _{\kappa }+2h) \\ &{} +3200\Delta (\upsilon _{\kappa }+3h)+256\Delta ( \upsilon _{\kappa }+4h) \bigr]+\mathcal{O} \bigl(h^{4} \bigr), \end{aligned}$$

and for \(0\leqslant \kappa \leqslant m\)

$$\begin{aligned} \Delta ^{\bullet \bullet }(\upsilon _{\kappa }) =& \frac{1}{8448h^{2}} \bigl[256 \Delta (\upsilon _{\kappa }-4h)+1600 \Delta (\upsilon _{\kappa }-3h)-4725 \Delta (\upsilon _{\kappa }-2h) \\ &{} +17{,}300\Delta (\upsilon _{\kappa }-h)-28{,}862\Delta ( \upsilon _{\kappa })+17{,}300\Delta (\upsilon _{\kappa }+h)-4725\Delta ( \upsilon _{\kappa }+2h) \\ &{} +1600\Delta (\upsilon _{\kappa }+3h)+256\Delta ( \upsilon _{\kappa }+4h) \bigr]+\mathcal{O} \bigl(h^{4} \bigr). \end{aligned}$$

Since \(0\leq \upsilon \leq 1\) and \(\upsilon _{\kappa }=\kappa h\), \(\kappa =0,1, 2,\ldots, m\), the values \(\Delta _{-4},\ldots, \Delta _{-1}\) and \(\Delta _{m+1},\ldots, \Delta _{m+4}\) correspond to nodes lying outside the interval \([0,1]\). These error values can be assumed to satisfy

$$\begin{aligned} \Delta _{\kappa }= \textstyle\begin{cases} \max_{0 \leq k \leq 4}\{ \vert \Delta _{k} \vert \}\mathcal{O}(h^{4}), & -4 \leq \kappa < 0, \\ \max_{m-4 \leq k \leq m}\{ \vert \Delta _{k} \vert \}\mathcal{O}(h^{4}), & m < \kappa \leq m+4. \end{cases}\displaystyle \end{aligned}$$
(28)

Expanding (27) by a procedure similar to that of Proposition 1, we obtain

$$\begin{aligned} \bigl(\mathbb{S}_{1}+\mathcal{O} \bigl(h^{4} \bigr)- \mathcal{O}(h) \bigr)\Delta =0 \end{aligned}$$

and

$$\begin{aligned} \bigl(\mathbb{S}_{2}+\mathcal{O} \bigl(h^{4} \bigr)- \mathcal{O}(h) \bigr)\Delta =0. \end{aligned}$$

Or equivalently

$$\begin{aligned} \bigl(\mathbb{S}_{1}+\mathcal{O} \bigl(h^{4} \bigr)- \mathcal{O}(h) \bigr)\Delta =\mathcal{O} \bigl(h^{4} \bigr) \Vert \Delta \Vert = \mathcal{O} \bigl(h^{4} \bigr) \end{aligned}$$

and

$$\begin{aligned} \bigl(\mathbb{S}_{2}+\mathcal{O} \bigl(h^{4} \bigr) \bigr) \Delta =\mathcal{O} \bigl(h^{4} \bigr) \Vert \Delta \Vert = \mathcal{O} \bigl(h^{4} \bigr). \end{aligned}$$

The matrix \(\mathbb{S}_{\kappa _{1}}+\mathcal{O}(h^{4})\), \(\kappa _{1}=1, 2\), is non-singular for small h and \(\varepsilon =0.1 \times 10^{-3}\), so

$$\begin{aligned} \Vert \Delta \Vert \leq \bigl( \bigl\Vert \mathbb{S}_{\kappa _{1}}^{-1} \bigr\Vert { \bigl(1- \mathcal{O}(h) \bigr)^{-1}} \bigr) \mathcal{O} \bigl(h^{4} \bigr)=\mathcal{O} \bigl(h^{4} \bigr), \quad \kappa _{1}=1,2. \end{aligned}$$

Hence \(\Vert \Delta \Vert = O(h^{4})\). This discussion is summarized in the following proposition.

Proposition 2

Let \(Z_{e}\) and \(D(\upsilon _{\kappa })\), \(\kappa =0,1,\ldots, m\), be the analytic and approximate solutions of the second order SPBVP defined in (1), respectively; then \(\Vert \Delta \Vert = \Vert Z_{e}(\upsilon )-D(\upsilon ) \Vert \leq \mathcal{O}(h^{4})\).

Remark

The order of the error approximation varies if we use different values of ε.
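In practice, the \(\mathcal{O}(h^{4})\) estimate and its dependence on ε can be monitored empirically by halving h and examining the ratios of the maximum absolute errors; a minimal helper of this kind is sketched below.

```python
import numpy as np

def observed_orders(max_abs_errors):
    """Empirical convergence orders log2(E_h / E_{h/2}) computed from
    maximum absolute errors obtained on successively halved step sizes."""
    e = np.asarray(max_abs_errors, dtype=float)
    return np.log2(e[:-1] / e[1:])
```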

4 Solutions of second order SPBVPs and discussions

In this section, we consider second order SPBVPs and find their numerical solutions by the proposed algorithms. Since we have developed two linear systems, (22) and (24), for the approximate solution of SPBVPs, both systems are used. We also compare the solutions by computing the maximum absolute errors between the analytic and approximate solutions.
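Throughout the tables, the maximum absolute error is measured at the collocation nodes; since the basis is interpolating, \(D(\upsilon _{\kappa })=d_{\kappa }\), so a small helper such as the following suffices. It assumes the coefficient layout of the assembly sketch in Sect. 2.2.3, where index 0 holds \(d_{-4}\).

```python
import numpy as np

def max_abs_error(z_exact, d, m):
    """Maximum absolute error at the nodes u_k = k/m, using D(u_k) = d_k;
    d_0, ..., d_m occupy positions 4, ..., m+4 of the coefficient vector."""
    u = np.linspace(0.0, 1.0, m + 1)
    return float(np.max(np.abs(z_exact(u) - d[4:m + 5])))
```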

Example 4.1

This type of problem has also been solved in [2, 3, 5, 6, 14]:

$$\begin{aligned} \varepsilon Z^{\bullet \bullet }(\upsilon ) = Z+ \cos ^{2}(\pi \upsilon )+2 \varepsilon \pi ^{2} \cos (2\pi \upsilon ), \quad 0 < \upsilon < 1, \end{aligned}$$

where the boundary conditions of the above problem are

$$\begin{aligned} Z(0)= 0= Z(1), \end{aligned}$$

and its analytic solution is

$$ Z(\upsilon )= \frac{ [\operatorname{Exp} (\frac{-(1-\upsilon )}{\sqrt{\varepsilon }} ) +\operatorname{Exp} (\frac{-\upsilon }{\sqrt{\varepsilon }} ) ]}{ [1+\operatorname{Exp} (\frac{-1}{\sqrt{\varepsilon }} ) ]}- \cos ^{2}(\pi \upsilon ). $$
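For reference, the closed form above is already written with non-positive exponents on \([0,1]\), so it can be evaluated directly even for very small ε; a small helper (our own, used only to produce reference values) is:

```python
import numpy as np

def z_exact_41(u, eps):
    """Exact solution of Example 4.1; safe even for eps as small as 1e-10
    because every exponent is non-positive on [0, 1]."""
    s = np.sqrt(eps)
    layers = np.exp(-(1.0 - u) / s) + np.exp(-u / s)
    return layers / (1.0 + np.exp(-1.0 / s)) - np.cos(np.pi * u) ** 2
```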

Example 4.2

Consider the boundary value problem [10, 12, 13]

$$ \varepsilon Z^{\bullet \bullet }(\upsilon )-(1+\upsilon )Z(\upsilon ) = 40 \bigl[ \upsilon \bigl(\upsilon ^{2}-1 \bigr)-2\varepsilon \bigr], \quad 0 < \upsilon < 1, $$

where the boundary conditions of the above problem are

$$\begin{aligned} Z(0) = 0= Z(1), \end{aligned}$$

and its analytic solution is

$$ Z(\upsilon ) = 40\upsilon (1-\upsilon ). $$
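Since the stated exact solution is a simple polynomial, it can be verified symbolically in a few lines (SymPy is our choice of tool here):

```python
import sympy as sp

u, eps = sp.symbols('u epsilon', positive=True)
Z = 40 * u * (1 - u)

# residual of: eps*Z'' - (1 + u)*Z = 40*[u*(u**2 - 1) - 2*eps]
residual = eps * sp.diff(Z, u, 2) - (1 + u) * Z - 40 * (u * (u**2 - 1) - 2 * eps)
assert sp.simplify(residual) == 0 and Z.subs(u, 0) == 0 and Z.subs(u, 1) == 0
```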

Example 4.3

Consider the problem already solved in [7, 9]:

$$\begin{aligned} \varepsilon Z^{\bullet \bullet }(\upsilon )- \bigl\{ 1+\upsilon (1-\upsilon ) \bigr\} Z( \upsilon )&=- \bigl[1+\upsilon (1-\upsilon )+ \bigl\{ 2\sqrt{\varepsilon }- \upsilon ^{2}(1-\upsilon ) \bigr\} \\ & \quad {}\times e^{ \{ \frac{(1-\upsilon )}{\sqrt{\varepsilon }} \} }+ \bigl\{ 2\sqrt{ \varepsilon }-\upsilon (1- \upsilon )^{2} \bigr\} e^{ \{ - \frac{\upsilon }{\sqrt{\varepsilon }} \} } \bigr], \end{aligned}$$

where \(0\leqslant \upsilon \leqslant 1\) and

$$\begin{aligned} Z(0)=Z(1)=0, \end{aligned}$$

and its analytic solution is

$$\begin{aligned} Z(\upsilon )=1+(\upsilon -1)e^{ \{ - \frac{\upsilon }{\sqrt{\varepsilon }} \} }-\upsilon e^{ \{ - \frac{(1-\upsilon )}{\sqrt{\varepsilon }} \} }. \end{aligned}$$

4.1 Discussion and comparison

We solve the SPBVPs by our algorithm and summarize the results as follows.

  • The results for Example 4.1 are shown in Tables 2–6 and in Figs. 1–3. In Tables 2 and 3, the maximum absolute errors (MAE) are given, while in Tables 4–6 the comparison with the methods of [2, 3, 5, 6, 14] is presented. Figure 1 shows the solutions, and Figs. 2 and 3 depict the results for further values of m and ε.

    Figure 1

    Comparison concerning Example 4.1: Analytic and approximate solutions with parametric setting: \(N=10\) and \(\varepsilon =10^{-4}, 10^{-7}, 10^{-10}\)

    Figure 2

    Comparison concerning Example 4.1: Analytic and approximate solutions with parametric setting: \(N=32\) and \(\varepsilon =2^{-25}\)

    Figure 3

    Comparison concerning Example 4.1: Analytic and approximate solutions with parametric setting: \(N=32\) and \(\varepsilon =(2^{-20})^{2}\)

    Table 2 Maximum absolute errors (MAE) for Example 4.1
    Table 3 MAE in the solution of SPBVP in Example 4.1
    Table 4 MAE in the solution of SPBVP in Example 4.1
    Table 5 MAE in the solution of SPBVP in Example 4.1
    Table 6 MAE in the solution of SPBVP in Example 4.1
  • The results for Example 4.2 are presented in Tables 7–10 and in Figs. 4–6. Tables 7 and 8 show the MAE, while Tables 9 and 10 compare the MAE with [10, 12, 13]; this shows that our results are better. Figure 4 shows the solutions, while Figs. 5 and 6 show the results for further values of m and ε.

    Figure 4

    Comparison concerning Example 4.2: Analytic and approximate solutions with parametric setting: \(N=10\) with \(\varepsilon =10^{-4}, 10^{-7}, 10^{-10}\)

    Figure 5

    Comparison concerning Example 4.2: Analytic and approximate solutions with parametric setting: \(N=16\) and \(\varepsilon =10^{-8}\)

    Figure 6

    Comparison concerning Example 4.2: Analytic and approximate solutions with parametric setting: \(N=32\) and \(\varepsilon =10^{-9}\)

    Table 7 MAE in the solution of SPBVP in Example 4.2
    Table 8 MAE in the solution of SPBVP in Example 4.2
    Table 9 MAE in the solution of SPBVP in Example 4.2
    Table 10 MAE in the solution of SPBVP in Example 4.2
  • Tables 11–13 and Figs. 7–9 are related to the solution of Example 4.3. The MAE are shown in Tables 11 and 12. We compare our results with those of [7, 9] and find ours to be better. Figure 7 shows the solutions, and further graphical representations are given in Figs. 8 and 9.

    Figure 7

    Comparison concerning Example 4.3: Analytic and approximate solutions with parametric setting: \(N=10\) with \(\varepsilon =10^{-4}, 10^{-7}, 10^{-10}\) shown in (a), (b) and (c), respectively

    Figure 8

    Comparison concerning Example 4.3: Analytic and approximate solutions with parametric setting: \(N=16\) and \(\varepsilon =10^{-5}\)

    Figure 9

    Comparison concerning Example 4.3: Analytic and approximate solutions with parametric setting: \(N=16\) and \(\varepsilon =10^{-8}\)

    Table 11 MAE in the solution of SPBVP in Example 4.3
    Table 12 MAE in the solution of SPBVP in Example 4.3
    Table 13 MAE in the solution of SPBVP in Example 4.3
  • From these results we conclude that constraint C-2 gives better results than constraint C-1.

  • If we keep m fixed, then the MAE increases with the increase of ε. It is also observed that if we keep ε fixed, then the MAE decreases with the increase of m.

5 Conclusions

In this paper, we introduced a numerical algorithm for the solution of second order SPBVPs. The algorithm is based on the 2-scale relation of a well-known interpolating subdivision scheme and gives approximate solutions of second order SPBVPs with a fourth order of approximation. We compared the maximum absolute errors of the solutions obtained by our subdivision-based method with those of spline [2, 3, 5, 6], finite difference [9, 10, 12, 13] and Haar wavelet [7, 14] algorithms, and concluded that our algorithm gives smaller maximum absolute errors.