1 Introduction

The convection–diffusion equation is a mathematical model that combines the diffusion and convection equations. It describes many physical phenomena in which particles, energy, or other physical quantities are transferred inside a physical system as a result of two distinct processes: diffusion and convection. The name used for the equation may vary depending on the specific context: in some cases it is called the convection–diffusion equation, in others the drift–diffusion equation, and it is also known as the generic scalar transport equation [7, p.64].

The study of singularly perturbed coupled systems of convection–diffusion equations has a rich history in applied mathematics. Such systems arise in various scientific disciplines, including fluid mechanics, chemical engineering, and environmental modeling, among others. In these systems, convection and diffusion act together: convection is the transport of a substance due to the bulk motion of the fluid, and diffusion is the spreading of the substance due to random molecular motion. Typically, one of these effects strongly dominates the other, leading to a disparity in the magnitudes of the associated coefficients. This results in multiscale behavior of the solution, where rapid variations occur in thin transition layers while the solution varies slowly away from these layers. Understanding and accurately estimating the solutions of such systems is challenging because of the sharp gradients involved and the need to resolve these layers accurately.

The choice of boundary conditions is crucial when solving the singularly perturbed coupled system of convection–diffusion equations. Different forms of boundary conditions, including Dirichlet, Neumann, and Robin, can be considered [10, 11, 21, 25]. However, the focus of this paper is on the application of Robin boundary conditions. Robin boundary conditions, also referred to as impedance or mixed boundary conditions, combine Dirichlet and Neumann conditions. They offer modeling flexibility for systems in which the values of the solution and its derivative affect the exchange or transmission of a substance across a boundary [9, 20]. Adding Robin boundary conditions to the coupled convection–diffusion system therefore allows the underlying physical phenomena to be represented more faithfully.

The Robin boundary condition problem for a set of singularly perturbed convection–diffusion equations is the subject of this research. The problem takes the form:

$$\begin{aligned} \left. \begin{array}{ll} &{}\hspace{35pt} \varepsilon _{1}\,u_{1}^{\prime \prime }(x)+F_{1}(x,u_{1}(x),u_{2}(x),u_{1}^{\prime }(x),u_{2}^{\prime }(x))=g_{1}(x),\quad x\in (a,b),\\ &{}\hspace{35pt} \varepsilon _{2}\,u_{2}^{\prime \prime }(x)+F_{2}(x,u_{1}(x),u_{2}(x),u_{1}^{\prime }(x),u_{2}^{\prime }(x))=g_{2}(x),\quad x\in (a,b),\\ \end{array}\right\} \end{aligned}$$
(1.1)

subject to the set of four RBCs:

$$\begin{aligned} \left. \begin{array}{ll} &{}\alpha _{1j}u_{j}(a)+\varepsilon _{j}\,\beta _{1j}u^{\prime }_{j}(a)=\gamma _{1j},\quad j=1,2,\\ &{}\alpha _{2j}u_{j}(b)+\varepsilon _{j}\,\beta _{2j}u^{\prime }_{j}(b)=\gamma _{2j},\quad j=1,2, \end{array}\right\} \end{aligned}$$
(1.2)

where \(F_{1}\) and \(F_{2}\) may be linear or nonlinear functions, \(0\le \varepsilon _{j}\ll 1\) are the perturbation parameters, and \(\alpha _{ij},\,\beta _{ij},\,\gamma _{ij},\,i,j=1,2,\) are all constants.

When \(F_{1}\) and \(F_{2}\) are linear functions, this system has been the subject of several related investigations: the adaptive grid method [21], hybrid difference schemes on a Shishkin mesh [26], an upwind finite difference scheme on Shishkin meshes [6], and a parameter-uniform numerical method [13].

Three well-known spectral methods are the collocation method, the tau method, and the Galerkin method. These methods provide very accurate approximations for a wide range of differential equations. The form of these methods depends on the kind of differential equation being solved and the boundary conditions being considered. The computational cost of solving differential equations using one of these methods may be reduced by using operational matrices to construct efficient approaches (see, for instance, [1,2,3,4,5, 8, 22, 24]).

To the best of our knowledge, a Galerkin operational matrix built from basis functions that fulfill the homogeneous RBCs (i.e., (1.2) with \(\gamma _{ij}= 0,\,i,j=1,2\)) is not available in the literature. This contributes to our interest in such an operational matrix. The numerical treatment of BVP (1.1) and (1.2) using this kind of operational matrix is another motivation.

The primary goals of this study are as follows:

  (i) Using GSLPs to build a new class of basis polynomials that satisfy the four homogeneous Robin boundary conditions; we call them Robin-Modified Legendre polynomials (RMLP).

  (ii) Constructing operational matrices for the derivatives of these polynomials.

  (iii) Developing a numerical technique for solving BVP (1.1) and (1.2) using the collocation method and the derived operational matrices of derivatives.

  (iv) Estimating the error of the approximate solution.

The structure of the paper is as follows. The Legendre polynomials and their generalized shifted counterparts (GSLPs) are reviewed in Sect. 2. Section 3 is devoted to building the RMLP that satisfy homogeneous RBCs. Section 4 focuses on creating a new operational matrix of RMLP derivatives to handle BVP (1.1) and (1.2). Section 5 applies the collocation method to provide a numerical approach for BVP (1.1) and (1.2). Section 6 discusses theoretical convergence and error estimates. Section 7 includes three examples as well as comparisons with other approaches from the literature. Finally, some conclusions are presented in Sect. 8.

2 An Overview of Legendre Polynomials and their Shifted Ones

One of several approaches that may be used to define the Legendre polynomials of degree n, \(L_{n}(t)\), is the recurrence formula, which reads as follows [14]:

$$\begin{aligned} \left. \begin{array}{ll} &{}(n+1)L_{n+1}(t)=(2n+1)t\,L_{n}(t)-n\,L_{n-1}(t),\quad t\in [-1,1],\\ &{}L_{0}(t)=1,\,L_{1}(t)=t, \end{array}\right\} \end{aligned}$$
(2.1)

and they are orthogonal polynomials and satisfy the relation

$$\begin{aligned} \int _{-1}^{1} L_{m}(t)\, L_{n}(t)\, dt= {\left\{ \begin{array}{ll} \dfrac{2}{2n+1},&{}\,n=m,\\ 0,&{} n\ne m. \end{array}\right. } \end{aligned}$$

The generalized shifted Legendre polynomials (GSLPs) \(L^*_{n}(x;a,b)\), are defined as follows:

$$\begin{aligned} L^*_{n}(x;a,b)=L_{n}\Big (\frac{2x-a-b}{b-a}\Big ),\quad x\in [a,b], \end{aligned}$$
(2.2)

and their orthogonality relation is

$$\begin{aligned} \int _{a}^{b}L^*_{m}(x;a,b)\,L^*_{n}(x;a,b)\, dx= {\left\{ \begin{array}{ll} \dfrac{b-a}{2n+1},&{}\,n=m,\\ 0,&{} n\ne m. \end{array}\right. } \end{aligned}$$
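
For readers who wish to experiment numerically, the following short Python sketch (an illustration, not part of the derivation) evaluates \(L^*_{n}(x;a,b)\) directly from the shift (2.2) and the recurrence (2.1), and cross-checks the result on \([0,1]\) against NumPy's built-in Legendre series:

```python
import numpy as np
from numpy.polynomial import legendre as npleg

def shifted_legendre(n, x, a=0.0, b=1.0):
    """Evaluate L*_n(x; a, b) via the shift (2.2) and the recurrence (2.1)."""
    t = (2.0*np.asarray(x, dtype=float) - a - b)/(b - a)   # map [a, b] onto [-1, 1]
    L_prev, L_curr = np.ones_like(t), t                    # L_0(t), L_1(t)
    if n == 0:
        return L_prev
    for k in range(1, n):
        L_prev, L_curr = L_curr, ((2*k + 1)*t*L_curr - k*L_prev)/(k + 1)
    return L_curr

# Cross-check against NumPy's Legendre series on [0, 1] (so that t = 2x - 1):
x = np.linspace(0.0, 1.0, 7)
ref = npleg.legval(2*x - 1, [0, 0, 0, 0, 0, 1])            # L_5(2x - 1)
print(np.max(np.abs(shifted_legendre(5, x) - ref)))        # ~ 1e-16
```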

Lemma 2.1

The GSLPs can be represented as

$$\begin{aligned} L^{*}_{n}(x;a,b)=\displaystyle \sum _{q=0}^{n} \dfrac{L^{*(q)}_{n}(0;a,b)}{q!}x^q,\,n\ge 0, \end{aligned}$$
(2.3)

where

$$\begin{aligned} L^{*(q)}_{n}(0;a,b)=\frac{(-1)^{n+q}(n+q)!}{(b-a)^{q}(q)! (n-q)!}\,_{2}F_{1}\left( \begin{array}{c} q-n,\,n+q+1 \\ q+1 \end{array} \Big |\frac{a}{a-b}\right) . \end{aligned}$$
(2.4)

Proof

The known shifted Legendre polynomials \(L^*_{n}(x;0,1)\) are expressed analytically as

$$\begin{aligned} L^*_{n}(x;0,1)=\,\sum \limits _{k=0}^{n}\frac{ (-1)^{n+k} (n+k)!}{(k!)^2\,(n-k)!}\,x^{k},\qquad x\in [0,1]. \end{aligned}$$
(2.5)

Then

$$\begin{aligned} L^{*}_{n}(x;a,b)=L^{*}_{n}(\frac{x-a}{b-a};0,1)=\,\sum \limits _{k=0}^{n}\frac{ (-1)^{n+k} (n+k)!}{(k!)^2\,(n-k)!}\,\left( \frac{x-a}{b-a}\right) ^{k},\qquad x\in [a,b]. \end{aligned}$$
(2.6)

Substituting the relation

$$\begin{aligned} (x-a)^{k}=\sum \limits _{i=0}^{k}\left( {\begin{array}{c}k\\ i\end{array}}\right) (-a)^{k-i}\,x^{i}, \end{aligned}$$

into Eq. (2.6), expanding and collecting similar terms, one can see, after some manipulation, that \(L^{*(q)}_{n}(0;a,b),\,q\le n,\) take the form (2.4). This completes the proof of Lemma 2.1. \(\square\)

It follows logically from Lemma 2.1 that

$$\begin{aligned} L^{*}_{n}(x;0,\tau )=\sum \limits _{q=0}^{n}\frac{ (-1)^{n+q} (n+q)!}{(q!)^{2}(n-q)!\tau ^{q}}\,x^{q},\qquad x\in [0,\tau ]. \end{aligned}$$
(2.7)
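
The closed form (2.7) is easy to verify symbolically; the sketch below compares it with a direct shift of the classical Legendre polynomial for one arbitrarily chosen degree and interval length (the values \(\tau =3/2\) and \(n=4\) are assumptions made only for this test):

```python
import sympy as sp
from math import factorial

x = sp.symbols('x')
tau, n = sp.Rational(3, 2), 4                        # arbitrary test values

# Coefficients of (2.7): L*_n(x; 0, tau) = sum_q (-1)^(n+q)(n+q)!/((q!)^2 (n-q)! tau^q) x^q
poly_27 = sum(sp.Integer((-1)**(n + q))*factorial(n + q)
              / (factorial(q)**2 * factorial(n - q) * tau**q) * x**q
              for q in range(n + 1))

# Reference: shift the classical Legendre polynomial onto [0, tau]
poly_ref = sp.legendre(n, (2*x - tau)/tau)

print(sp.expand(poly_27 - poly_ref))                 # expected: 0
```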

Note 2.1

Here, it is important to remember that the generalized hypergeometric function is defined as follows (Luke [23]):

$$\begin{aligned} {}_{p}F_{q} \left( \begin{array}{c}a_1,a_2,...,a_p\\ b_1,b_2,...,b_q \end{array}\Big |z\right) =\sum \limits _{k=0}^{\infty }\frac{(a_1)_k...(a_p)_k\,z^k}{(b_1)_k...(b_q)_k\,k!}, \end{aligned}$$

where no \(b_j\), \(1\le j\le q\), is a nonpositive integer.

3 Robin-Modified Legendre Polynomials

This section introduces two new classes of polynomials, denoted by \(\varphi _{j,k}(x),\,j=1,2,\) and referred to as RMLP, which satisfy the homogeneous RBCs:

$$\begin{aligned} \left. \begin{array}{ll} &{}\alpha _{1j}\varphi _{j,k}(a)+\varepsilon _{j}\,\beta _{1j}\varphi _{j,k}^{(1)}(a)=0,\\ &{}\alpha _{2j}\varphi _{j,k}(b)+\varepsilon _{j}\,\beta _{2j}\varphi _{j,k}^{(1)}(b)=0, \end{array}\right\} \end{aligned}$$
(3.1)

respectively. To achieve this aim, it is proposed that RMLP be written as

$$\begin{aligned} \varphi _{j,k}(x)=(x^{2}+\mathbb {A}_{j,k}\,x+\mathbb {B}_{j,k})L^{*}_{k}(x;a,b),\,\,\, k=0,1,2,\dots , \end{aligned}$$
(3.2)

where the constants \(\mathbb {A}_{j,k},\,\mathbb {B}_{j,k},\,j=1,2,\) will be computed such that \(\varphi _{j,k}(x)\) fulfill the conditions (3.1), respectively. Substitution of \(\varphi _{j,k}(x)\) into (3.1) yields the two systems for \(j=1,2\):

$$\begin{aligned} \left. \begin{array}{ll} \alpha _{1j}(a^{2}+\mathbb {A}_{j,k}a+\mathbb {B}_{j,k})L^{*}_{k}(a;a,b)+\varepsilon _{j}\,\beta _{1j}((2a+\mathbb {A}_{j,k})L^{*}_{k}(a;a,b)+(a^{2}+\mathbb {A}_{j,k}a+\mathbb {B}_{j,k})L^{*(1)}_{k}(a;a,b))=0,\\ \alpha _{2j}(b^{2}+\mathbb {A}_{j,k}b+\mathbb {B}_{j,k})L^{*}_{k}(b;a,b)+\varepsilon _{j}\,\beta _{2j}((2b+\mathbb {A}_{j,k})L^{*}_{k}(b;a,b)+(b^{2}+\mathbb {A}_{j,k}b+\mathbb {B}_{j,k})L^{*(1)}_{k}(b;a,b))=0, \end{array}\right\} \end{aligned}$$
(3.3)

respectively, which directly provides

$$\begin{aligned} \left. \begin{array}{ll} &{}\mathbb {A}_{j,k}=\frac{\varepsilon _{j}\,\beta _{1j} \left( \alpha _{2j} L (2 a+r\,d_{k})+\varepsilon _{j}\,\beta _{2j} d_{k} \left( d_{k}+2\right) r\right) -\alpha _{1j} L \left( \varepsilon _{j}\,\beta _{2j} (2 b+r\,d_{k})+\alpha _{2j} L r\right) }{\Delta _{j,k}},\\ &{}\mathbb {B}_{j,k}=-\frac{a \alpha _{1j} L \left( \varepsilon _{j}\,\beta _{2j} \left( a-b \left( d_{k}+2\right) \right) -\alpha _{2j} b L\right) +\varepsilon _{j}\,\beta _{1j} \left( \alpha _{2j} b L \left( a \left( d_{k}+2\right) -b\right) +\varepsilon _{j}\,\beta _{2j} \left( d_{k}+2\right) (a b d_{k}-L^{2})\right) }{\Delta _{j,k}}, \end{array}\right\} \end{aligned}$$
(3.4)

where \(d_{k}=k^2+k\), \(L=b-a\), \(r=a+b\), and

$$\begin{aligned} \Delta _{j,k}=\alpha _{1j} L \left( \varepsilon _{j}\,\beta _{2j} \left( d_{k}+1\right) +\alpha _{2j} L\right) +\varepsilon _{j}\,\beta _{1j} \left( -\varepsilon _{j}\,\beta _{2j} d_{k} \left( d_{k}+2\right) -\left( \alpha _{2j} \left( d_{k}+1\right) L\right) \right) \ne 0. \end{aligned}$$
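
In practice, the pairs \((\mathbb {A}_{j,k},\mathbb {B}_{j,k})\) can also be obtained by solving the \(2\times 2\) linear system (3.3) directly. The sympy sketch below does this for the Robin data of Problem 7.2 (\(\alpha _{ij}=\beta _{ij}=1\), \(a=0\), \(b=1\)) with a symbolic \(\varepsilon \) and the arbitrary test degree \(k=3\); its output can be compared with the coefficients listed after (7.10):

```python
import sympy as sp

# Direct construction of A_{j,k}, B_{j,k} by solving the 2x2 linear system (3.3),
# here with the Robin data of Problem 7.2 (alpha = beta = 1, a = 0, b = 1) and a
# symbolic perturbation parameter; k = 3 is an arbitrary test degree.
x, eps = sp.symbols('x varepsilon')
A, B = sp.symbols('A B')
a, b = 0, 1
alpha1 = alpha2 = beta1 = beta2 = 1
k = 3

phi = (x**2 + A*x + B)*sp.legendre(k, (2*x - a - b)/(b - a))    # ansatz (3.2)
bc_left  = alpha1*phi.subs(x, a) + eps*beta1*sp.diff(phi, x).subs(x, a)
bc_right = alpha2*phi.subs(x, b) + eps*beta2*sp.diff(phi, x).subs(x, b)

sol = sp.solve([bc_left, bc_right], [A, B], dict=True)[0]
print(sp.simplify(sol[A]))   # compare with A_{j,i} listed after (7.10), i = 3
print(sp.simplify(sol[B]))   # compare with B_{j,i} listed after (7.10), i = 3
```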

The proposed RMLP have the special values

$$\begin{aligned} \varphi ^{(q)}_{j,k}(0)=\mathbb {B}_{j,k}\,L^{*(q)}_{k}(0;a,b)+\,q\,\mathbb {A}_{j,k}\,L^{*(q-1)}_{k}(0;a,b)+\,q(q-1)\,L^{*(q-2)}_{k}(0;a,b),\, 1\le q \le k+2. \end{aligned}$$
(3.5)

4 Operational Matrix of Derivatives of RMLP

Operational derivative matrices for \(\varphi _{j,n}(x),\,n=0,1,2,\dots ,\) will be developed in this section. They will be new Galerkin operational matrices of derivatives. The following Theorem must be proven first:

Theorem 4.1

\(D\varphi _{j,n}(x),\,j=1,2,\)  for all \(n\ge 0\), have the following expansions:

$$\begin{aligned} D\varphi _{1,n}(x)=\sum _{j=0}^{n-1}a_{j}(n)\varphi _{1,j}(x)+ \epsilon _{n}(x),\,\,\epsilon _{n}(x)=e_{1}(n)x+e_{0}(n), \end{aligned}$$
(4.1)

and

$$\begin{aligned} D\varphi _{2,n}(x)=\sum _{j=0}^{n-1}\tilde{a}_{j}(n)\varphi _{2,j}(x)+\tilde{\epsilon }_{n}(x),\,\,\tilde{\epsilon }_{n}(x)=\tilde{e}_{1}(n)x+\tilde{e}_{0}(n), \end{aligned}$$
(4.2)

where \(a_{0}(n),\,a_{1}(n),\,\dots ,a_{n-1}(n)\), satisfy the system

$$\begin{aligned} \textbf{G}_{n}{} \textbf{a}_{n}=\textbf{B}_{n}, \end{aligned}$$
(4.3)

while the coefficients \(\tilde{a}_{0}(n),\,\tilde{a}_{1}(n),\,\dots ,\tilde{a}_{n-1}(n)\), satisfy the system

$$\begin{aligned} \tilde{\textbf{G}}_{n}\tilde{\textbf{a}}_{n}=\tilde{\textbf{B}}_{n}, \end{aligned}$$
(4.4)

where \(\textbf{a}_{n}=[a_{0}(n),\,a_{1}(n),\,\dots ,a_{n-1}(n)]^{T}\), \(\tilde{\textbf{a}}_{n}=[\tilde{a}_{0}(n),\,\tilde{a}_{1}(n),\,\dots ,\,\tilde{a}_{n-1}(n)]^{T}\), \(\textbf{G}_{n}=(g_{i,j}(n))_{0\le i,j\le n-1}\), \(\tilde{\textbf{G}}_{n}=(\tilde{g}_{i,j}(n))_{0\le i,j\le n-1}\), \(\textbf{B}_{n}=[b_{0}(n),\,b_{1}(n),\,\dots ,b_{n-1}(n)]^{T}\) and \(\tilde{\textbf{B}}_{n}=[\tilde{b}_{0}(n),\,\tilde{b}_{1}(n),\,\dots ,\tilde{b}_{n-1}(n)]^{T}\). The elements of \(\textbf{G}_{n}\), \(\tilde{\textbf{G}}_{n}\), \(\textbf{B}_{n}\) and \(\tilde{\textbf{B}}_{n}\) are defined as follows:

$$\begin{aligned} g_{i,j}(n)={\left\{ \begin{array}{ll} \varphi ^{(n-i+1)}_{1,n-j-1}(0)&{}\quad i\ge j,\\ 0,&{} \text {otherwise,} \end{array}\right. },\qquad b_{i}(n)=\varphi ^{(n-i+2)}_{1,n}(0), \end{aligned}$$

and

$$\begin{aligned} \tilde{g}_{i,j}(n)={\left\{ \begin{array}{ll} \varphi ^{(n-i+1)}_{2,n-j-1}(0)&{}\quad i\ge j,\\ 0,&{} \text {otherwise,} \end{array}\right. },\qquad \tilde{b}_{i}(n)=\varphi ^{(n-i+2)}_{2,n}(0). \end{aligned}$$

In addition, \(e_{i}(n)\) and \(\tilde{e}_{i}(n)\), \(i=0,1,\) have the forms:

$$\begin{aligned} \left. \begin{array}{ll} &{}e_{i}(n)=\varphi ^{(i+1)}_{1,n}(0)-\sum _{j=0}^{n-1}a_{j}(n)\varphi ^{(i)}_{1,j}(0),\\ &{}\tilde{e}_{i}(n)=\varphi ^{(i+1)}_{2,n}(0)-\sum _{j=0}^{n-1}\tilde{a}_{j}(n)\varphi ^{(i)}_{2,j}(0).\\ \end{array}\right\} \end{aligned}$$
(4.5)

Proof

It is easy to see that \(e_{0}(n)\) and \(e_{1}(n)\) take the forms in (4.5). This enables us to write (4.1) in the form:

$$\begin{aligned} D\varphi _{1,n}(x)-\varphi ^{(1)}_{1,n}(0)-\varphi ^{(2)}_{1,n}(0)x=\sum _{j=0}^{n-1}a^{}_{j}(n)\left( \varphi _{1,j}(x)-\varphi _{1,j}(0)-\varphi ^{(1)}_{1,j}(0)x\right) ,\,n=1,2,\dots . \end{aligned}$$
(4.6)

Using Maclaurin series for \(\varphi _{1,j}(x)\) and \(D\varphi _{1,n}(x)\), Eq.(4.6) takes the form:

$$\begin{aligned} \sum _{r=2}^{n+1}\frac{\varphi ^{(r+1)}_{1,n}(0)}{r!}x^r=\sum _{j=0}^{n-1}a_{j}(n)\left( \sum _{r=2}^{j+2}\frac{\varphi ^{(r)}_{1,j}(0)}{r!}x^r\right) =\sum _{r=2}^{n+1}\left( \sum _{j=r}^{n+1}\frac{\varphi ^{(r)}_{1,j-2}(0)}{r!}a_{j-2}(n)\right) x^r,\,n=1,2,\dots . \end{aligned}$$
(4.7)

This gives the following triangular system of n equations in the unknowns \(a_{j}(n),\,j=0,1,\dots ,n-1\),

$$\begin{aligned} \sum _{j=r}^{n+1}\varphi ^{(r)}_{1,j-2}(0)\,a_{j-2}(n)= \varphi ^{(r+1)}_{1,n}(0),\,r=n+1,n,\dots ,2, \end{aligned}$$
(4.8)

which can be written in the matrix form (4.3). As before, it is possible to show that \(D\varphi _{2,n}(x)\) has the expansion (4.2). Here, \(\tilde{e}_{0}(n)\) and \(\tilde{e}_{1}(n)\) have the forms in (4.5), and \(\tilde{a}_{j}(n),j=0,1,\dots ,n-1\) satisfy the system (4.4). This completes the proof of Theorem 4.1. \(\square\)
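
The construction in Theorem 4.1 can be checked symbolically for a concrete parameter set. The sketch below uses the Robin data of the first equation of Problem 7.1 (\(a=0\), \(b=1\), \(\alpha =10\), \(\beta =1\) and \(-1\), \(\varepsilon _1=1\)), solves the triangular system (4.8) for \(n=4\), and confirms that the remainder \(\epsilon _{n}(x)\) is linear in \(x\):

```python
import sympy as sp

# Symbolic check of Theorem 4.1: expand D phi_{1,n} in {phi_{1,j}}_{j<n} by
# solving the triangular system (4.8) and confirm the remainder is linear in x.
x = sp.symbols('x')
a, b, eps = 0, 1, 1
al1 = al2 = 10
be1, be2 = 1, -1

def phi(k):
    A, B = sp.symbols('A B')
    p = (x**2 + A*x + B)*sp.legendre(k, (2*x - a - b)/(b - a))
    bcs = [al1*p.subs(x, a) + eps*be1*sp.diff(p, x).subs(x, a),
           al2*p.subs(x, b) + eps*be2*sp.diff(p, x).subs(x, b)]
    return sp.expand(p.subs(sp.solve(bcs, [A, B], dict=True)[0]))

n = 4
coeffs = sp.symbols(f'a0:{n}')
resid = sp.expand(sp.diff(phi(n), x) - sum(c*phi(j) for j, c in enumerate(coeffs)))
# Match the coefficients of x^r, r = 2, ..., n+1 (the triangular system (4.8)).
sol = sp.solve([resid.coeff(x, r) for r in range(2, n + 2)], coeffs, dict=True)[0]
print([sol[c] for c in coeffs])              # the a_j(n) of (4.1)
print(sp.expand(resid.subs(sol)))            # remainder: e_1(n)*x + e_0(n)
```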

This section’s primary objective is to introduce the operational matrices of derivatives of

$$\begin{aligned} {\varvec{\Phi }_{j}}(x)=[\varphi _{j,0}(x),\varphi _{j,1}(x),\dots , \varphi _{j,N}(x)]^{T},\quad j=1,2, \end{aligned}$$
(4.9)

which is stated in the following corollary:

Corollary 4.1

The mth derivative of the vectors \(\varvec{\Phi }_{j}(x),\,j=1,2,\) have the forms:

$$\begin{aligned} \frac{d^{m}\varvec{\Phi }_{j}(x)}{dx^{m}}= {\varvec{H}}^{m}_{j}\varvec{\Phi }_{j}(x)+{\varvec{\eta }}^{(m)}_{j}(x),\quad {\varvec{\eta }}^{(m)}_{j}(x)=\sum \limits _{k=0}^{m-1}{\varvec{H}}^{k}_{j}\,\varvec{\epsilon }^{(m-k-1)}_{j}(x), \end{aligned}$$
(4.10)

where \({\varvec{\epsilon }_{1}}(x)=\left[ \epsilon _{0}(x),\epsilon _{1}(x),\dots ,\epsilon _{N}(x)\right] ^T\), \({\varvec{\epsilon }_{2}}(x)=\left[ \tilde{\epsilon }_{0}(x),\tilde{\epsilon }_{1}(x),\dots ,\tilde{\epsilon }_{N}(x)\right] ^T\), \({\varvec{H}}_{1}=\big (h_{i,j}\big )_{0\le i,j\le N}\) and \({\varvec{H}}_{2}=\big (\tilde{h}_{i,j}\big )_{0\le i,j\le N}\), with

$$\begin{aligned} h_{i,j}={\left\{ \begin{array}{ll} a_{j}(i),&{}\quad i> j,\\ 0,&{} \text {otherwise,} \end{array}\right. }, \qquad \tilde{h}_{i,j}={\left\{ \begin{array}{ll} \tilde{a}_{j}(i),&{}\quad i> j,\\ 0,&{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

For instance, if \(N=5, a = 0,b = 1,\,\alpha _{i1}=\beta _{i1}=1,\,\alpha _{i2}=1,\,\beta _{i2}=-1,\,i=1,2\), we get

$$\begin{aligned} {\varvec{H}}_{1}=\left( \begin{array}{cccccc} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 6 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ -\frac{2004}{329} &{} 12 &{} 0 &{} 0 &{} 0 &{} 0 \\ \frac{3670204}{164829} &{} -\frac{12940}{7849} &{} \frac{50}{3} &{} 0 &{} 0 &{} 0 \\ -\frac{1311891740}{72359931} &{} \frac{102206469}{3445711} &{} -\frac{125930}{219939} &{} 21 &{} 0 &{} 0 \\ \frac{2555675305543}{49566552735} &{} -\frac{11509826512}{2360312035} &{} \frac{1006475201}{30131643} &{} -\frac{73422}{300715} &{} \frac{126}{5} &{} 0 \\ \end{array} \right) _{6\times 6}, \end{aligned}$$
(4.11)

and

$$\begin{aligned} {\varvec{H}}_{2}=\left( \begin{array}{cccccc} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 6 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \frac{2004}{329} &{} 12 &{} 0 &{} 0 &{} 0 &{} 0 \\ \frac{3670204}{164829} &{} \frac{12940}{7849} &{} \frac{50}{3} &{} 0 &{} 0 &{} 0 \\ \frac{1311891740}{72359931} &{} \frac{102206469}{3445711} &{} \frac{125930}{219939} &{} 21 &{} 0 &{} 0 \\ \frac{2555675305543}{49566552735} &{} \frac{11509826512}{2360312035} &{} \frac{1006475201}{30131643} &{} \frac{73422}{300715} &{} \frac{126}{5} &{} 0 \\ \end{array} \right) _{6\times 6}. \end{aligned}$$
(4.12)

5 A Collocation Algorithm for Handling the System (1.1) Subject to RBCs (1.2)

In this part, we describe how to obtain numerical solutions for BVP (1.1)–(1.2) using the operational matrices given in Corollary 4.1.

5.1 Homogeneous Boundary Conditions

In this subsection, we consider the homogeneous case of BCs (1.2), i.e., \(\gamma _{i,j}=0,\,i,j=1,2\). In this case, we propose approximations to \(u_{1}(x)\) and \(u_{2}(x)\) as follows:

$$\begin{aligned} u_{1}(x)\simeq u_{1,N}(x)=\displaystyle \sum _{i=0}^{N}c_{i}\, \varphi _{1,i}(x)={\varvec{A}^{T}_{1}}\,{\varvec{\Phi }_{1}}(x),\quad {\varvec{A}_{1}}=\left[ c_0, c_1,\dots ,c_N\right] ^T, \end{aligned}$$
(5.1)

and

$$\begin{aligned} u_{2}(x)\simeq u_{2,N}(x)=\displaystyle \sum _{i=0}^{N}\tilde{c}_{i}\, \varphi _{2,i}(x)={\varvec{A}}^{T}_{2}\,{\varvec{\Phi }_{2}}(x),\quad {\varvec{A}_{2}}=\left[ \tilde{c}_0, \tilde{c}_1,\dots ,\tilde{c}_N\right] ^T. \end{aligned}$$
(5.2)

Corollary 4.1 allows us to estimate the derivatives \(u^{(m)}_{j,N}(x)\), \(m,j=1,2,\) as follows:

$$\begin{aligned} u^{(m)}_{j,N}(x)={\varvec{A}}^{T}_{j}\,{\varvec{H}}^{m}_{j}\,{\varvec{\Phi }_{j}}(x)+\varvec{\eta }^{(m)}_{j}(x), \end{aligned}$$
(5.3)

Using estimates (5.3), we can define the residuals of the two equations in (1.1) as follows:

$$\begin{aligned} \begin{aligned} R_{j,N}(x)=\varepsilon _{j}\,({\varvec{A}}^{T}_{j}\,{\varvec{H}}^{2}_{j}\,{\varvec{\Phi }}_{j}(x)+\varvec{\eta }^{(2)}_{j}(x))+F_{j}(x,{\varvec{A}}^{T}_{1}\,{\varvec{\Phi }}_{1}(x),{\varvec{A}}^{T}_{2}\,{\varvec{\Phi }}_{2}(x),{\varvec{A}}^{T}_{1}\,{\varvec{H}}_{1}\,{\varvec{\Phi }}_{1}(x)+{\varvec{\epsilon }}_{1}(x),{\varvec{A}}^{T}_{2}\,&{\varvec{H}}_{2}\,{\varvec{\Phi }}_{2}(x)+{\varvec{\epsilon }}_{2}(x))\\ {}&-g_{j}(x),\, j=1,2. \end{aligned} \end{aligned}$$
(5.4)

To solve the system (1.1)–(1.2) (with \(\gamma _{i,j}=0,\,i,j=1,2\)) numerically, a spectral technique is proposed: the Robin shifted Legendre collocation operational matrix method (RSLCOMM). The collocation points \(x_{i}\) are chosen to be either the \((N+1)\) zeros of \(L^{*}_{N+1}(x;a,b)\) or \(x_{i}=\dfrac{i+1}{N+2},\,i=0,1,...,N\), so that

$$\begin{aligned} R_{j,N}(x_{i})=0, \quad i=0,1,...,N,\,j=1,2, \end{aligned}$$
(5.5)

Solving the system (5.5) then gives the coefficients \(c_i\) and \(\tilde{c}_i\,(i=0,1,...,N)\).
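
The sketch below illustrates the collocation step (5.1)–(5.5) on an assumed linear test system (not one of the paper's examples); for brevity it differentiates the basis symbolically instead of using the operational matrices, so it should be read as a simplified variant of RSLCOMM rather than the full algorithm:

```python
import numpy as np
import sympy as sp

# Minimal collocation sketch for the homogeneous-RBC case. The Robin data,
# eps values, test operators and right-hand sides g1, g2 below are assumptions
# chosen only for illustration.
x = sp.symbols('x')
a, b, N = 0, 1, 6
eps1 = eps2 = sp.Rational(1, 10)
al1 = al2 = 1
be1, be2 = 1, -1

def basis(N, eps):
    out = []
    for k in range(N + 1):
        A, B = sp.symbols('A B')
        p = (x**2 + A*x + B)*sp.legendre(k, (2*x - a - b)/(b - a))
        bcs = [al1*p.subs(x, a) + eps*be1*sp.diff(p, x).subs(x, a),
               al2*p.subs(x, b) + eps*be2*sp.diff(p, x).subs(x, b)]
        out.append(sp.expand(p.subs(sp.solve(bcs, [A, B], dict=True)[0])))
    return out

c = sp.symbols(f'c0:{N + 1}')
d = sp.symbols(f'd0:{N + 1}')
u1 = sum(ci*p for ci, p in zip(c, basis(N, eps1)))
u2 = sum(di*p for di, p in zip(d, basis(N, eps2)))

g1, g2 = sp.sin(sp.pi*x), sp.cos(sp.pi*x)                 # assumed right-hand sides
R1 = eps1*sp.diff(u1, x, 2) + sp.diff(u1, x) + u1 + u2 - g1
R2 = eps2*sp.diff(u2, x, 2) + sp.diff(u2, x) + u1 + 2*u2 - g2

pts = [sp.Rational(i + 1, N + 2) for i in range(N + 1)]   # x_i = (i+1)/(N+2)
eqs = [R1.subs(x, p) for p in pts] + [R2.subs(x, p) for p in pts]
M, rhs = sp.linear_eq_to_matrix(eqs, list(c) + list(d))
coef = np.linalg.solve(np.array(M.evalf().tolist(), dtype=float),
                       np.array(rhs.evalf().tolist(), dtype=float).ravel())
u1_N = sp.lambdify(x, u1.subs(dict(zip(list(c) + list(d), coef))), 'numpy')
print(u1_N(np.linspace(a, b, 5)))
```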

5.2 Nonhomogeneous Boundary Conditions

Transforming equation (1.1) with the non-homogeneous RBCs (1.2) into a problem with homogeneous conditions is a crucial step in the construction of the suggested method. To accomplish this, the following transformation is suggested:

$$\begin{aligned} \bar{u_{j}}(x)=u_{j}(x)-\lambda _j\,x-\mu _j,\quad j=1,2, \end{aligned}$$
(5.6)

where

$$\begin{aligned} \left. \begin{array}{ll} &{}\lambda _j=\dfrac{1}{\triangle _j}(\alpha _{2j} \gamma _{1j}-\alpha _{1j} \gamma _{2j}),\\ &{}\mu _j=\dfrac{1}{\triangle _j}(\gamma _{2j} \left( a \alpha _{1j}+\varepsilon _{j}\,\beta _{1j}\right) -\gamma _{1j} \left( \alpha _{2j} b\,+\varepsilon _{j}\,\beta _{2j}\right) ),\\ &{}\triangle _j=\alpha _{2j} \left( \alpha _{1j} (a-b)+\varepsilon _{j}\,\beta _{1j}\right) -\alpha _{1j} \,\varepsilon _{j}\,\beta _{2j}\ne 0. \end{array}\right\} \end{aligned}$$
(5.7)

As a result, it is sufficient to solve the system

$$\begin{aligned} \left. \begin{array}{ll} &{}\hspace{35pt} \varepsilon _{1}\,\bar{u_{1}}^{\prime \prime }(x)+F_{1}(x,\bar{u_{1}}(x),\bar{u_{2}}(x),\bar{u_{1}}^{\prime }(x),\bar{u_{2}}^{\prime }(x))={\bar{g}}_{1}(x),\quad x\in (a,b),\\ &{}\hspace{35pt} \varepsilon _{2}\,\bar{u_{2}}^{\prime \prime }(x)+F_{2}(x,\bar{u_{1}}(x),\bar{u_{2}}(x),\bar{u_{1}}^{\prime }(x),\bar{u_{2}}^{\prime }(x))={\bar{g}}_{2}(x),\quad x\in (a,b),\\ \end{array}\right\} \end{aligned}$$
(5.8)

subject to the homogeneous RBCs:

$$\begin{aligned} \left. \begin{array}{ll} &{}\alpha _{1j}{\bar{u}}_{j}(a)+\varepsilon _{j}\,\beta _{1j}{\bar{u}}^{\prime }_{j}(a)=0,\quad j=1,2,\\ &{}\alpha _{2j}{\bar{u}}_{j}(b)\,+\varepsilon _{j}\,\beta _{2j}{\bar{u}}^{\prime }_{j}(b)=0,\quad j=1,2. \end{array}\right\} \end{aligned}$$
(5.9)
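
A quick symbolic check of the shift (5.6)–(5.7), using the boundary data of the first equation of Problem 7.2 (\(\alpha =\beta =1\) on both ends, \(\gamma _{11}=1\), \(\gamma _{21}=2+2\varepsilon _1\)), confirms that \(\bar{u}_{1}\) satisfies the homogeneous RBCs:

```python
import sympy as sp

# Check of the homogenizing shift (5.6)-(5.7) for the first equation of
# Problem 7.2 (alpha = beta = 1 on both ends, gamma_1 = 1, gamma_2 = 2 + 2*eps).
x, eps = sp.symbols('x varepsilon', positive=True)
a, b = 0, 1
alpha1 = alpha2 = beta1 = beta2 = 1
gamma1, gamma2 = 1, 2 + 2*eps

Delta = alpha2*(alpha1*(a - b) + eps*beta1) - alpha1*eps*beta2
lam = (alpha2*gamma1 - alpha1*gamma2)/Delta
mu  = (gamma2*(a*alpha1 + eps*beta1) - gamma1*(alpha2*b + eps*beta2))/Delta

u = 1 - sp.exp(-x/eps) + x**2            # exact u_1 of Problem 7.2
ubar = u - lam*x - mu                    # shifted unknown (5.6)

left  = alpha1*ubar.subs(x, a) + eps*beta1*sp.diff(ubar, x).subs(x, a)
right = alpha2*ubar.subs(x, b) + eps*beta2*sp.diff(ubar, x).subs(x, b)
print(sp.simplify(left), sp.simplify(right))   # expected: 0 0
```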

6 Convergence and Error Estimates For RSLCOMM

In this part, we analyze the convergence and error estimates of the proposed technique. For a nonnegative integer N, consider the two spaces \(S_{j,N},\,j=1,2,\) defined by

$$\begin{aligned} S_{j,N}=Span\{\varphi _{j,0}(x), \varphi _{j,1}(x),...,\varphi _{j,N}(x)\}. \end{aligned}$$

Furthermore, the difference between the function \(u_{j}(x)\) and its approximation \(u_{j,N}(x)\) is denoted by

$$\begin{aligned} E_{j,N}(x)=\left| u_{j}(x)-u_{j,N}(x)\right| ,\,j=1,2. \end{aligned}$$
(6.1)

This study examines the errors of the suggested method by using the \(L_{2}\) norm error estimate,

$$\begin{aligned} \Vert E_{j,N}\Vert _{2}=\Vert u_{j}-u_{j,N}\Vert _{2}=\left( \int _{a}^{b} |u_{j}(x)-u_{j,N}(x)|^{2}\,dx\right) ^{1/2}, \end{aligned}$$
(6.2)

and the \(L_{\infty }\) norm error estimate,

$$\begin{aligned} \Vert E_{j,N}\Vert _{\infty }=\Vert u_{j}-u_{j,N}\Vert _{\infty }=\max _{a\le x \le b}|u_{j}(x)-u_{j,N}(x)|. \end{aligned}$$
(6.3)

The proof of the following theorem has similarities to the proofs of theorems expounded in the research articles [5, 15,16,17,18,19, 27].

Theorem 6.1

Assume that \(u_{j}^{(i)}(x)\in C[a,b],\,i=0,1,...,N+1,\) with \(|u_{j}^{(N+1)}(x)|\le M_j,\,\forall x\in [a,b],\,j=1,2\). Assume also that \(u_{j,N}(x),\,j=1,2,\) given by the expansions (5.1) and (5.2), respectively, are the best possible approximations for \(u_{j}(x)\) out of \(S_{j,N},\,j=1,2\). Then, the estimates obtained are as follows:

$$\begin{aligned} \Vert E_{j,N}\Vert _{\infty }\le \frac{M_j\,(b-a)^{N+1}}{(N+1)!}, \end{aligned}$$
(6.4)

and

$$\begin{aligned} \Vert E_{j,N}\Vert _{2}\le \frac{M_j}{(N+1)!}\frac{(b-a)^{(N+1)+1/2}}{(2N+3)^{1/2}}. \end{aligned}$$
(6.5)

Proof

The proof of this theorem is similar to that of [5, Theorem 6.1]. \(\square\)

The following corollary demonstrates that the obtained errors converge rapidly.

Corollary 6.1

For all \(N\ge 1\), the following two estimates hold:

$$\begin{aligned} \Vert E_{j,N-1}\Vert _{\infty }={\mathcal {O}}(((b-a)e)^{N}/N^{N+1/2}),\,j=1,2, \end{aligned}$$
(6.6)

and

$$\begin{aligned} \Vert E_{j,N-1}\Vert _{2}={\mathcal {O}}(((b-a)e)^{N}/N^{N+1}),\,j=1,2. \end{aligned}$$
(6.7)

Proof

The proof of this corollary is similar to that of [5, Corollary 6.1]. \(\square\)
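
For the reader's convenience, the estimate (6.6) follows from (6.4) (with \(N\) replaced by \(N-1\)) and Stirling's lower bound \(N!\ge \sqrt{2\pi N}\,(N/e)^{N}\) (a sketch):

$$\begin{aligned} \Vert E_{j,N-1}\Vert _{\infty }\le \frac{M_j\,(b-a)^{N}}{N!}\le \frac{M_j\,(b-a)^{N}}{\sqrt{2\pi N}\,(N/e)^{N}}=\frac{M_j}{\sqrt{2\pi }}\,\frac{\left( (b-a)e\right) ^{N}}{N^{N+1/2}}, \end{aligned}$$

and the same substitution in (6.5) yields the \(L_{2}\) rate (6.7).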

7 Computational Simulations

In this section, we show that the proposed algorithm in Sect. 5 has high flexibility and accuracy. In order to evaluate the precision of RSLCOMM, we define the following error:

$$\begin{aligned} E_{N}=\max \{\Vert E_{1,N}\Vert _{\infty },\Vert E_{2,N}\Vert _{\infty }\}, \end{aligned}$$
(7.1)

and the order of convergence \(R_{N}\) by

$$\begin{aligned} R_{N}\approx \frac{Log(E_{N+1}/E_{N})}{Log(E_{N}/E_{N-1})}. \end{aligned}$$
(7.2)
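
As a small illustration of how (7.2) is evaluated in practice (the error values below are placeholders, not entries from the tables):

```python
import math

# Illustrative evaluation of (7.2) at N = 10; E[9], E[10], E[11] stand for
# E_{N-1}, E_N, E_{N+1} and are assumed placeholder values.
E = {9: 3.2e-6, 10: 4.1e-8, 11: 3.9e-10}
R_10 = math.log(E[11]/E[10]) / math.log(E[10]/E[9])
print(R_10)
```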

By solving three numerical problems, we show that RSLCOMM yields reliable results, and when the provided system has polynomial solutions \(u_{j}(x),\,j=1,2,\) of degree less than or equal to N, these solutions have the forms:

$$\begin{aligned} u_{1}(x)= \displaystyle \sum _{i=0}^{N-2}c_{i}\, \varphi _{1,i}(x), \qquad u_{2}(x)= \displaystyle \sum _{i=0}^{N-2}\tilde{c}_{i}\, \varphi _{2,i}(x). \end{aligned}$$
(7.3)

In addition, errors of order \(10^{-15}\) are obtained for \(N = 10,11\) when employing the suggested technique RSLCOMM, as shown in Tables 1 and 3. The computational outcomes in these tables are outstanding. Table 2 provides comparisons between our technique and the approaches in [21], and it shows that RSLCOMM provides more accurate results than these approaches.

Moreover, as illustrated in Figs. 3 and 4, the exact and numerical solutions for the two cases \(\varepsilon _{1}=10^{-8}\), \(\varepsilon _{2}=10^{-5}\) and \(\varepsilon _{1}=10^{-8}\), \(\varepsilon _{2}=10^{-10}\) of Problem 7.2 are in excellent agreement, and the corresponding errors are shown in Figs. 1 and 2. Additionally, Figs. 5a and 6a show the absolute error functions \(E_{j,N}(x),\,j=1,2,\) for different values of N and emphasize the dependence of the error on N. They also demonstrate that the numerical solutions computed by RSLCOMM for Problem 7.3 converge well. Furthermore, Figs. 5b and 6b illustrate the stability of the solutions.

Example 7.1

Consider the following system as a linear test problem to illustrate the theoretical result of RSLCOMM:

$$\begin{aligned} \left. \begin{array}{ll} &{}\hspace{35pt} \,u_{1}^{\prime \prime }(x)-\,u_{1}^{\prime }(x)-\,u_{1}(x)-2x\,u_{2}(x)=-96 x^3-64 x^2+530 x-228,\quad x\in (0,1),\\ &{}\hspace{35pt} \,u_{2}^{\prime \prime }(x)-\,u_{2}^{\prime }(x)+\,u_{1}(x)+3x\,u_{2}(x)=\,112 x^3-144 x^2+17 x+44,\quad x\in (0,1), \end{array}\right\} \end{aligned}$$
(7.4)

subject to the set of four RBCs

$$\begin{aligned} \left. \begin{array}{ll} &{}10\,u_{1}(0)+\,\,\,\,u^{\prime }_{1}(0)=0,\quad 10\,u_{1}(1)-\,\,\,u^{\prime }_{1}(1)=0,\\ &{}16\,u_{2}(0)+3\,u^{\prime }_{2}(0)=0,\quad 16\,u_{2}(1)-3\,u^{\prime }_{2}(1)=0, \end{array}\right\} \end{aligned}$$
(7.5)

where the exact solutions are \(u_{1}(x)=U^{*}_{3}(x;0,1)\) and \(u_{2}(x)=U^{*}_{2}(x;0,1)\). In this problem, the basis polynomials used have the forms

$$\begin{aligned} \varphi _{1,i}(x)= & {} \left( x^2-x+\frac{1}{10-i (i+1)}\right) L^{*}_{i}(x;0,1),\quad i=0,1,2,\dots ,\\ \varphi _{2,i}(x)= & {} \left( x^2-x+\frac{3}{16-3 i (i+1)}\right) L^{*}_{i}(x;0,1),\quad i=0,1,2,\dots . \end{aligned}$$

The application of the proposed method RSLCOMM gives the exact solutions for \(N\ge 1\):

$$\begin{aligned} u_{1}(x)=32\,\varphi _{1,1}(x),\quad u_{2}(x)=16\,\varphi _{2,0}(x). \end{aligned}$$
(7.6)
Table 1 Numerical results with \(\varepsilon _{1}=2^{-9}\) for Problem 7.2
Table 2 Comparison of several approaches of Problem 7.2
Fig. 1 Errors \(E_{1,11}(x)\) and \(E_{2,11}(x)\) using \(\varepsilon _{1}=10^{-8}\) and \(\varepsilon _{2}=10^{-5}\) for Problem 7.2

Fig. 2 Errors \(E_{1,11}(x)\) and \(E_{2,11}(x)\) using \(\varepsilon _{1}=10^{-8}\) and \(\varepsilon _{2}=10^{-10}\) for Problem 7.2

Fig. 3 Exact and approximate solutions for Problem 7.2 using \(N=11\), \(\varepsilon _{1}=10^{-8}\) and \(\varepsilon _{2}=10^{-5}\)

Fig. 4 Exact and approximate solutions for Problem 7.2 using \(N=11\), \(\varepsilon _{1}=10^{-8}\) and \(\varepsilon _{2}=10^{-10}\)

Example 7.2

Consider the singularly perturbed convection–diffusion system:

$$\begin{aligned} \left. \begin{array}{ll} &{}\hspace{35pt} \,-\varepsilon _{1}\,u_{1}^{\prime \prime }(x)+((2x+1)\,u_{1})^{\prime }(x)-(x^2\,u_{2}(x))^{\prime }=\,f_{1}(x),\quad x\in (0,1),\\ &{}\hspace{35pt} \,-\varepsilon _{2}\,u_{2}^{\prime \prime }(x)-(x^2\,u_{1})^{\prime }(x)+(u_{2}(x))^{\prime }=\,f_{2}(x),\quad x\in (0,1),\\ \end{array}\right\} \end{aligned}$$
(7.7)

subject to the set of four RBCs

$$\begin{aligned} \left. \begin{array}{ll} &{}u_{1}(0)+\varepsilon _{1}\, u^{\prime }_{1}(0)=1,\quad u_{1}(1)+\varepsilon _{1}\,u^{\prime }_{1}(1)=2+2\varepsilon _{1},\\ &{}u_{2}(0)+\varepsilon _{2}\,u^{\prime }_{2}(0)=2,\quad u_{2}(1)+\varepsilon _{2}\,u^{\prime }_{2}(1)=4+3\varepsilon _{2}-\varepsilon _{2}\,\cos \,1-\sin \,1, \end{array}\right\} \end{aligned}$$
(7.8)

where \(f_{1}(x)\) and \(f_{2}(x)\) are chosen such that the exact solutions are

$$\begin{aligned} u_{1}(x)=1-e^{-x/\varepsilon _{1}}+x^2, \qquad u_{2}(x)=2-2\,e^{-x/\varepsilon _{2}}+x(1+x)-\sin x. \end{aligned}$$
(7.9)

In this problem, the computed basis polynomials take the form:

$$\begin{aligned} \varphi _{j,i}(x)=(x^2+\mathbb {A}_{j,i}\,x+\mathbb {B}_{j,i})L^{*}_{i}(x;0,1),\quad j=1,2,\,i=0,1,2,\dots , \end{aligned}$$
(7.10)

where

$$\begin{aligned} \mathbb {A}_{j,i}=\dfrac{1+2\,\epsilon _j -i (i+1) \left( i^2+i+2\right) \epsilon _{j}^{2}}{i (i+1) \left( i^2+i+2\right) \epsilon _j^2-1},\qquad \mathbb {B}_{j,i}=\dfrac{\epsilon _j+\left( i^2+i+2\right) \epsilon _{j}^{2}}{1-i (i+1) \left( i^2+i+2\right) \epsilon _j^2}. \end{aligned}$$

The application of RSLCOMM gives the following numerical solutions \(u_{1,11}(x)\) and \(u_{2,11}(x)\) for \(\varepsilon _{1}=10^{-8}\) and \(\varepsilon _{2}=10^{-k}\), \(k=5,10\):

$$\begin{aligned} \left. \begin{array}{ll} u_{1,11}(x)=1&{}+7.06102*10^{-14}\,x+\,x^2+1.61923*10^{-11}\,x^3-1.13497*10^{-10}\,x^4+5.35737*10^{-10}\,x^5\\ &{}-1.76544*10^{-9}\,x^6+4.13893*10^{-9}\,x^7-6.94268*10^{-9}\,x^8+8.27108*10^{-9}\,x^9-6.8327*10^{-9}\,x^{10}\\ &{}+3.72099*10^{-9}\,x^{11}-1.20145*10^{-9}\,x^{12}+1.74233*10^{-10}\,x^{13},\\ \\ u_{2,11}(x)=2&{}-1.15855\,x+\,x^2+0.166667\,x^3+1.44308*10^{-10}\,x^4-0.00833333\,x^5+2.13159*10^{-9}\,x^6\\ &{}+0.000198408\,x^7+7.89061*10^{-9}\,x^8-2.76475*10^{-6}\,x^9+7.06017*10^{-9}\,x^{10}+2.14831*10^{-8}\,x^{11}\\ &{}+1.03523*10^{-9}\,x^{12}-2.86939*10^{-10}\,x^{13}+ \left( \dfrac{200003}{100000}-\sin (1)-\dfrac{\cos (1)}{100000}\right) \,x, \end{array}\right\} \end{aligned}$$

and

$$\begin{aligned} \left. \begin{array}{ll} u_{1,11}(x)=1&{}-8.70415*10^{-14}\,x+ x^2-1.91388*10^{-11}\, x^3+1.28573*10^{-10}\, x^4-5.74032*10^{-10}\, x^5\\ &{}+1.76417*10^{-9}\, x^6-3.79102*10^{-9}\, x^7+5.69559*10^{-9}\, x^8-5.88266*10^{-9}\,x^9+4.01249*10^{-9}\, x^{10}\\ &{}-1.66482*10^{-9}\,x^{11}+3.49732*10^{-10}\,x^{12}-2.05703*10^{-11}\, x^{13},\\ \\ u_{2,11}(x)=2&{}-1.15853\, x+ x^2+0.166667\,x^3+1.96343*10^{-10}\,x^4-0.00833333\,x^5+2.30854*10^{-9}\,x^6\\ &{}+0.000198408\,x^7+5.54202*10^{-9}\,x^8-2.75992*10^{-6}\,x^9+1.259*10^{-9}\,x^{10}+2.56883*10^{-8}\,x^{11}\\ &{}-6.79802*10^{-10}\,x^{12}+1.62319*10^{-11}\,x^{13}+ \left( \dfrac{20000000003}{10000000000}-\sin (1)-\dfrac{\cos (1)}{10000000000}\right) \,x, \end{array}\right\} \end{aligned}$$

respectively. These solutions agree with the exact solutions to within a precision of \(10^{-15}\), as shown in Table 1.

Example 7.3

Consider the singularly perturbed convection–diffusion system:

$$\begin{aligned} \left. \begin{array}{ll} &{}\varepsilon _{1}\,u_{1}^{\prime \prime }(x)+(3\,u_{1}(x)-\frac{1}{4}\,e^{-u_{1}^{2}(x)}-\,u_{2}(x))^{\prime }=\,f_{1}(x),\quad x\in (0,1),\\ &{}\varepsilon _{2}\,u_{2}^{\prime \prime }(x)+(4\,u_{2}(x)-\cos (u_{2}(x))-u_{1}(x))^{\prime }=\,f_{2}(x),\quad x\in (0,1),\\ \end{array}\right\} \end{aligned}$$
(7.11)

subject to the set of four RBCs

$$\begin{aligned} \left. \begin{array}{ll} &{}u_{1}(0)+\varepsilon _{1}\, u^{\prime }_{1}(0)=-\frac{3}{4},\quad u_{1}(1)+\varepsilon _{1}\,u^{\prime }_{1}(1)=\frac{1}{4},\\ &{}u_{2}(0)+\varepsilon _{2}\,u^{\prime }_{2}(0)=2,\quad \quad u_{2}(1)+\varepsilon _{2}\,u^{\prime }_{2}(1)=e\,+\,1, \end{array}\right\} \end{aligned}$$
(7.12)

where \(f_{1}(x)\) and \(f_{2}(x)\) are chosen such that the exact solutions are

$$\begin{aligned} \left. \begin{array}{ll} &{}u_{1}(x)=\dfrac{4 \epsilon _1+3}{4 \left( \epsilon _1+1\right) \left( e \epsilon _1-\epsilon _1-1\right) }e^x-\dfrac{1+3 e}{4 \left( e \epsilon _1-\epsilon _1-1\right) }x,\\ \\ &{}u_{2}(x)=\dfrac{e \epsilon _2+\epsilon _2-2 \epsilon _2 \cos (1)-2 \sin (1)}{\left( \epsilon _2+1\right) \left( e \epsilon _2-\epsilon _2 \cos (1)-\sin (1)\right) }e^x-\dfrac{(1-e) \sin (x)}{e \epsilon _2-\epsilon _2 \cos (1)-\sin (1)}. \end{array}\right\} \end{aligned}$$
(7.13)

In this problem, the computed basis polynomials have the forms (7.10). The application of RSLCOMM gives the following numerical solutions \(u_{1,9}(x)\) and \(u_{2,9}(x)\) for \(\varepsilon _{1}=10^{-10}\) and \(\varepsilon _{2}=10^{-4}\), and \(u_{1,10}(x)\) and \(u_{2,10}(x)\) for \(\varepsilon _{1}=10^{-10}\) and \(\varepsilon _{2}=10^{-8}\):

$$\begin{aligned} \left. \begin{array}{ll} u_{1,9}(x)=&{}-5.38711*10^{-11}+0.538711\, x-0.375\, x^2-0.125\, x^3-0.03125\, x^4-0.00625\, x^5-0.00104165\, x^6\\ &{}-0.000148841\, x^7-0.0000185494\, x^8-2.12122*10^{-6}\, x^9-1.71503*10^{-7}\, x^{10}-3.11824*10^{-8} \,x^{11}\\ &{}+\dfrac{1}{4} \left( x-\dfrac{1}{10000000000}\right) -\dfrac{3}{4} \left( \dfrac{10000000001}{10000000000}-x\right) ,\\ u_{2,9}(x)=&{}0.00017608-1.7608\, x+\, x^2+0.673755\, x^3+0.0833335\, x^4-0.000354345\, x^5+0.00277776\, x^6\\ &{}+0.000802151\, x^7+0.0000495022\, x^8-9.59263*10^{-9}\, x^9+4.79934*10^{-7}\, x^{10}+1.27788*10^{-7}\, x^{11}\\ &{}+2 \left( \dfrac{10001}{10000}-x\right) -(1+e) \left( \dfrac{1}{10000}-x\right) , \end{array}\right\} \end{aligned}$$

and

$$\begin{aligned} \left. \begin{array}{ll} u_{1,10}(x)=&{}-5.38711*10^{-11}+0.538711\, x-0.375\, x^2-0.125\, x^3-0.03125\, x^4-0.00625\, x^5-0.00104167\, x^6\\ &{}-0.000148807\, x^7-0.0000186055\, x^8-2.06079*10^{-6}\, x^9-2.12228*10^{-7}\, x^{10}-1.55721*10^{-8}\, x^{11}\\ &{}-2.59846*10^{-9}\, x^{12}+\dfrac{1}{4} \left( x-\dfrac{1}{10000000000}\right) -\dfrac{3}{4} \left( \dfrac{10000000001}{10000000000}-x\right) ,\\ u_{2,10}(x)=&{}1.76028*10^{-8}-1.76028\, x+\, x^2+0.673666\, x^3+0.0833333\, x^4-0.000349982\, x^5+0.00277778\, x^6\\ &{}+0.000801981\, x^7+0.0000496078\, x^8-1.22015*10^{-7}\, x^9+5.56639*10^{-7}\, x^{10}+9.83842*10^{-8}\, x^{11}\\ &{}+4.8907*10^{-9}\, x^{12}+2 \left( \dfrac{100000001}{100000000}-x\right) -(1+e) \left( \dfrac{1}{100000000}-x\right) , \end{array}\right\} \end{aligned}$$

respectively. These solutions agree with the exact solutions to within a precision of \(10^{-15}\), as shown in Table 3.

Table 3 Numerical results with \(\varepsilon _{1}=10^{-10}\) for Problem 7.3
Fig. 5 Error results at \(\varepsilon _{1}=10^{-10}\), \(\varepsilon _{2}=10^{-8}\) for Problem 7.3

Fig. 6 Error results at \(\varepsilon _{1}=10^{-10}\), \(\varepsilon _{2}=10^{-8}\) for Problem 7.3

8 Conclusion

In this article, two families of RMLP that satisfy the four homogeneous Robin boundary conditions (3.1) were constructed. The combination of these polynomials and the collocation spectral technique results in an approximation to the system (1.1)–(1.2). The proposed technique, RSLCOMM, was tested on three problems, confirming the algorithm’s high accuracy and efficiency. The theoretical insights offered in this paper may be applied to a variety of ordinary, partial, and fractional differential equation systems. Theoretical convergence and error analysis were also studied. The presented numerical problems demonstrated the method’s applicability, utility, and accuracy. By exploring the connection between our research and the field of inverse scattering problems in future studies and incorporating ideas from the works [12, 28, 29], we can further investigate the applicability and potential extensions of our method for solving inverse scattering problems, which would provide valuable insights for the scientific community.