Introduction

Orthogonal Jacobi polynomials

Systems of polynomials remain a very active research area in mathematics, physics, engineering, and other applied sciences, and the orthogonal polynomials are, among others, the most thoroughly studied and widely applied systems [1–3]. Three of these systems, namely Hermite, Laguerre, and Jacobi, are collectively called the classical orthogonal polynomials [4]. There is an extensive literature on these polynomials, and the most comprehensive single account of the classical polynomials is found in the classical treatise of Szegö [5].

The Jacobi polynomials are the most general family of classical orthogonal polynomials, defined by the formula [4]

$$P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = \left( {n!} \right)^{ - 1} \left( { - 2} \right)^{ - n} \left( {1 - x} \right)^{ - \alpha } \left( {1 + x} \right)^{ - \beta } \frac{{{\text{d}}^{n} }}{{{\text{d}}x^{n} }}\left[ {\left( {1 + x} \right)^{n + \beta } \left( {1 - x} \right)^{n + \alpha } } \right]$$
(1)

Here, \(\alpha\) and \(\beta\) are parameters that, for integrability purposes, are restricted to \(\alpha > - 1, \beta > - 1\). However, many of the identities and other formal properties of these polynomials remain valid under the less restrictive condition that neither \(\alpha\) nor \(\beta\) is a negative integer. Among the many special cases, the following are the most important [4]:

  (a) the Legendre polynomials \(\left( {\alpha = \beta = 0} \right)\);

  (b) the Chebyshev polynomials \(\left( {\alpha = \beta = - 1/2} \right)\);

  (c) the Gegenbauer (or ultraspherical) polynomials \(\left( {\alpha = \beta } \right)\).

The Jacobi polynomials \(P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) are orthogonal [6, 7] with respect to the weight function \(\omega^{\alpha ,\beta } \left( x \right) = \left( {1 - x} \right)^{\alpha } \left( {1 + x} \right)^{\beta } (\alpha > - 1,\beta > - 1)\) on \(\left( { - 1,1} \right)\). It can be shown that the Jacobi polynomials satisfy the following relation [7]:

$$P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = \mathop \sum \limits_{k = 0}^{n} B_{k}^{{\left( {\alpha ,\beta ,n} \right)}} \left( {x - 1} \right)^{k} ; \quad \alpha ,\beta > - 1$$
(2)

where

$$B_{k}^{{\left( {\alpha ,\beta ,n} \right)}} = 2^{ - k} \left( {\begin{array}{*{20}c} {n + \alpha + \beta + k} \\ k \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {n + \alpha } \\ {n - k} \\ \end{array} } \right); \quad k = 0, 1, 2, \ldots , n$$
(3)
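As a quick numerical sanity check of (2)-(3), not part of the original derivation, the expansion can be evaluated in Python; the helper name jacobi_via_expansion is our own, and SciPy's eval_jacobi serves only as an independent reference.

```python
import numpy as np
from scipy.special import binom, eval_jacobi

def jacobi_via_expansion(n, alpha, beta, x):
    """Evaluate P_n^(alpha,beta)(x) from expansion (2) with coefficients (3)."""
    return sum(2.0**(-k)
               * binom(n + alpha + beta + k, k)   # generalized binomial, real args
               * binom(n + alpha, n - k)
               * (x - 1.0)**k
               for k in range(n + 1))

x = np.linspace(-1.0, 1.0, 7)
assert np.allclose(jacobi_via_expansion(3, 0.5, -0.5, x),
                   eval_jacobi(3, 0.5, -0.5, x))
```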

These polynomials play a role in rotation matrices [8], in the trigonometric Rosen–Morse potential [9], and in several exactly solvable problems of quantum mechanics [10, 11].

In recent years, several researchers have developed new numerical algorithms for various problems using Jacobi polynomials. Eslahchi et al. [12] gave a numerical solution of some nonlinear ordinary differential equations using a spectral method. Bojdi et al. [13] proposed a Jacobi matrix method for differential-difference equations with variable coefficients. Kazem [14] used the Tau method, by means of Jacobi polynomials, for solving fractional-order differential equations.

Recently, Bhrawy et al. [15–22] have used Jacobi polynomials both in operational matrix methods and in spectral collocation methods for solving several classes of fractional differential equations, for instance, nonlinear sub-diffusion equations, delay fractional optimal control problems, time-fractional KdV equations, Caputo fractional diffusion-wave equations, the fractional nonlinear cable equation, and general fractional differential equations.

Integro-differential-difference equations

Fredholm integro-differential-difference equations (FIDDEs) are encountered in many model problems in biology, physics, and engineering, and they have been investigated by means of various methods [23–28]. Various numerical schemes for solving a partial integro-differential equation are presented by Dehghan [29].

In this study, we develop a procedure to find a Jacobi polynomial solution of the \(n\)th-order linear FIDDE with variable coefficients

$$\mathop \sum \limits_{i = 0}^{n} P_{i} \left( x \right)y^{(i)} \left( x \right) + \mathop \sum \limits_{j = 0}^{m} Q_{j} \left( x \right)y^{(j)} \left( {x - \tau } \right) = g\left( x \right) + \mathop \int \limits_{a}^{b} K\left( {x,t} \right)y\left( {t - \tau } \right){\text{d}}t, \quad \tau \ge 0$$
(4)

under mixed conditions

$$\mathop \sum \limits_{i = 0}^{n - 1} \left[ {\alpha_{ki} y^{\left( i \right)} \left( a \right) + \beta_{ki} y^{\left( i \right)} \left( b \right) + \gamma_{ki} y^{\left( i \right)} \left( \eta \right)} \right] = \mu_{k} , \quad k = 0, 1, \ldots , n - 1$$
(5)

where \(P_{i} \left( x \right)\), \(Q_{j} \left( x \right)\), \(K\left( {x,t} \right)\), and \(g\left( x \right)\) are known functions, \(\alpha_{ki}\), \(\beta_{ki}\), \(\gamma_{ki}\), and \(\mu_{k}\) are appropriate constants, and \(y\left( x \right)\) is the unknown function. Note that \(a \le \eta \le b\) is a given point in the domain of the problem.

The main aim of our study is to provide, using orthogonal Jacobi polynomials, an approximate solution of the problem (4), (5), for which analytical solutions are usually hard to find.

We assume a solution expressed as the truncated series of orthogonal Jacobi polynomials defined by

$$y\left( x \right) \cong y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = \mathop \sum \limits_{n = 0}^{N} a_{n} P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)$$
(6)

where \(P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\), \(n = 0, 1, \ldots , N\), denote the orthogonal Jacobi polynomials defined by (2), (3); \(N\) is chosen such that \(N \ge n\), and \(a_{n}\), \(n = 0, 1, \ldots , N\), are unknown coefficients to be determined. Note that \(\alpha\) and \(\beta\) are arbitrary parameters such that \(\left( {\alpha > - 1,\beta > - 1} \right)\).

Fundamental matrix relations

We can transform the orthogonal Jacobi polynomials \(P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) from algebraic form into matrix form as follows:

$${\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( x \right) = {\mathbf{X}}\left( x \right){\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}}$$
(7)

where

$${\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( x \right) = \left[ {\begin{array}{*{20}c} {P_{0}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} & {P_{1}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} & {P_{2}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} & \cdots & {P_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} \\ \end{array} } \right]$$
(8)
$${\mathbf{X}}\left( x \right) = \left[ {\begin{array}{*{20}c} 1 & {\left( {x - 1} \right)} & {\left( {x - 1} \right)^{2} } & \cdots & {\left( {x - 1} \right)^{N} } \\ \end{array} } \right]$$
(9)

and

$${\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} = \left[ {m_{ij}^{(\alpha ,\beta )} } \right],\quad 1 \le i,\;j \le N + 1$$
(10)

such that

$$m_{{ij}}^{{\left( {\alpha ,\beta } \right)}} = \left\{ \begin{array}{ll} 2^{{1 - i}} \left( {\begin{array}{*{20}c} {\alpha + \beta + i - 2 + j} \\ {i - 1} \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {\alpha + j - 1} \\ {j - i} \\ \end{array} } \right), & i \le j \\ 0, & i > j \\ \end{array} \right.$$
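As a sketch (Python with NumPy/SciPy; the helper name jacobi_transform_matrix is an assumption of ours, not from the source), the matrix \({\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}}\) can be assembled directly from this formula:

```python
import numpy as np
from scipy.special import binom

def jacobi_transform_matrix(N, alpha, beta):
    """M^(alpha,beta) of Eq. (10): P^(alpha,beta)(x) = X(x) @ M,
    with X(x) = [1, (x-1), ..., (x-1)^N]."""
    M = np.zeros((N + 1, N + 1))
    for i in range(1, N + 2):          # 1-based indices as in the text
        for j in range(i, N + 2):      # upper-triangular part only
            M[i - 1, j - 1] = (2.0**(1 - i)
                               * binom(alpha + beta + i + j - 2, i - 1)
                               * binom(alpha + j - 1, j - i))
    return M

# e.g. N = 2, (alpha, beta) = (0.5, -0.5) reproduces the matrix M of Example 1
```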

We write the solution \(y\left( x \right)\), defined by the truncated orthogonal Jacobi series (6), in matrix form as follows

$$\left[ {y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} \right] = {\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( x \right){\mathbf{A}}$$
(11)

where

$${\mathbf{A}} = \left[ {\begin{array}{*{20}c} {a_{0} } & {a_{1} } & {\begin{array}{*{20}c} \ldots & {a_{N} } \\ \end{array} } \\ \end{array} } \right]^{{\mathbf{T}}}$$
(12)

By substituting the matrix form (7) of the Jacobi polynomials into (11), we obtain the fundamental matrix relation of the approximate solution

$$\left[ {y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} \right] = {\mathbf{X}}\left( x \right){\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}}$$
(13)

Matrix representation of the differential-difference part of the problem

The differential-difference part of the problem is \(\mathop \sum \nolimits_{i = 0}^{n} P_{i} \left( x \right)y^{(i)} \left( x \right) + \mathop \sum \nolimits_{j = 0}^{m} Q_{j} \left( x \right)y^{(j)} \left( {x - \tau } \right).\) First, to relate the matrix form of the unknown function to the matrix form of its derivatives \(y^{(i)} \left( x \right)\), we note that the relation between \({\mathbf{X}}\left( x \right)\) and its derivatives \({\mathbf{X}}^{\left( i \right)} \left( x \right)\) can be expressed as

$${\mathbf{X}}^{\left( i \right)} \left( x \right) = {\mathbf{X}}\left( x \right){\mathbf{B}}^{i}$$
(14)

where

$$\textbf{B}= \left[\begin{array}{ccccc} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & N \\ 0 & 0 & 0 & \cdots & 0 \end{array} \right]$$
(15)

Then, using (13) and (14), we may write

$$\left[ {y^{(i)} \left( x \right)} \right] \cong \left[ {\left( {y_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{\left( i \right)} \left( x \right)} \right] = {\mathbf{X}}^{\left( i \right)} \left( x \right){\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}} = {\mathbf{X}}\left( x \right){\mathbf{B}}^{i} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}}$$
(16)

Similarly, the matrix form of the delayed derivatives \(y^{(j)} \left( {x - \tau } \right)\) of the unknown function can be expressed as

$$\begin{aligned} y^{\left( j \right)} \left( {x - \tau } \right) = \;& {\mathbf{X}}^{\left( j \right)} \left( {x - \tau } \right){\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}} \\ = \;& {\mathbf{X}}\left( {x - \tau } \right){\mathbf{B}}^{j} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}} \\ =\; & {\mathbf{X}}\left( x \right){\mathbf{B}}_{\tau } {\mathbf{B}}^{j} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}} \\ \end{aligned}$$
(17)

where

$$\textbf{B}_\tau = \left[\begin{array}{ccccc} \binom{0}{0} (-\tau)^0 & \binom{1}{0} (-\tau)^1 & \binom{2}{0} (-\tau)^2 & \cdots & \binom{N}{0} (-\tau)^N \\ 0 & \binom{1}{1} (-\tau)^0 & \binom{2}{1} (-\tau)^1 & \cdots & \binom{N}{1} (-\tau)^{N-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \binom{N}{N} (-\tau)^0 \end{array} \right]$$
(18)

Thus, it is seen that

$$y^{(j)} \left( {x - \tau } \right) = {\mathbf{X}}\left( x \right){\mathbf{B}}_{\tau } {\mathbf{B}}^{j} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}}$$
(19)
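A small sketch of \({\mathbf{B}}\) and \({\mathbf{B}}_{\tau }\) follows (hypothetical helper names of ours); the final assertion checks the shift identity \({\mathbf{X}}\left( {x - \tau } \right) = {\mathbf{X}}\left( x \right){\mathbf{B}}_{\tau }\) at a sample point.

```python
import numpy as np
from math import comb

def derivative_matrix(N):
    """B of Eq. (15): X'(x) = X(x) @ B for X(x) = [1, (x-1), ..., (x-1)^N]."""
    B = np.zeros((N + 1, N + 1))
    for k in range(1, N + 1):
        B[k - 1, k] = k                # superdiagonal 1, 2, ..., N
    return B

def delay_matrix(N, tau):
    """B_tau of Eq. (18): X(x - tau) = X(x) @ B_tau."""
    Bt = np.zeros((N + 1, N + 1))
    for k in range(N + 1):             # column index
        for m in range(k + 1):         # row index
            Bt[m, k] = comb(k, m) * (-tau)**(k - m)
    return Bt

N, tau, x = 4, 1.0, 0.3
X = lambda s: np.array([(s - 1.0)**k for k in range(N + 1)])
assert np.allclose(X(x - tau), X(x) @ delay_matrix(N, tau))
```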

Using (16) and (19), the matrix form of the differential-difference part of Eq. (4) becomes

$$\mathop \sum \limits_{i = 0}^{n} P_{i} \left( x \right)y^{(i)} \left( x \right) + \mathop \sum \limits_{j = 0}^{m} Q_{j} \left( x \right)y^{(j)} \left( {x - \tau } \right) = \mathop \sum \limits_{i = 0}^{n} P_{i} \left( x \right){\mathbf{X}}\left( x \right){\mathbf{B}}^{i} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}} + \mathop \sum \limits_{j = 0}^{m} Q_{j} \left( x \right){\mathbf{X}}\left( x \right){\mathbf{B}}_{\tau } {\mathbf{B}}^{j} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}}$$
(20)

Matrix representation of the integral part of the problem

The Fredholm integral part of the problem is \(\mathop \int \nolimits_{a}^{b} K\left( {x,t} \right)y\left( {t - \tau } \right){\text{d}}t\), where \(K\left( {x,t} \right)\) is the kernel function of the Fredholm integral part of the main problem. This function can be written using the truncated Taylor series [30] and the truncated orthogonal Jacobi series, respectively, as

$$K\left( {x,t} \right) = \mathop \sum \limits_{m = 0}^{N} \mathop \sum \limits_{n = 0}^{N} k_{mn}^{T} x^{m} t^{n}$$
(21)

and

$$K\left( {x,t} \right) = \mathop \sum \limits_{m = 0}^{N} \mathop \sum \limits_{n = 0}^{N} k_{mn}^{J} P_{m}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( t \right)$$
(22)

where

$$k_{mn}^{T} = \frac{1}{m!n!}\frac{{\partial^{m + n} K\left( {0,0} \right)}}{{\partial x^{m} \partial t^{n} }}, \quad m,n = 0, 1, \ldots , N$$

is the Taylor coefficient and \(k_{mn}^{J}\) is the Jacobi coefficient. The expressions (21) and (22) can be written using matrix forms of the Jacobi polynomials, respectively, as

$$K\left( {x,t} \right) = {\mathbf{X}}\left( x \right){\mathbf{B}}_{ - 1} {\mathbf{K}}_{\text{T}} \left( {{\mathbf{X}}\left( t \right){\mathbf{B}}_{ - 1} } \right)^{\text{T}} ,\quad {\mathbf{K}}_{\text{T}} = \left[ {k_{mn}^{T} } \right]$$
(23)
$$K\left( {x,t} \right) = {\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( x \right){\mathbf{K}}_{\text{J}} \left( {{\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( t \right)} \right)^{\text{T}} ,\quad {\mathbf{K}}_{\text{J}} = \left[ {k_{mn}^{J} } \right]$$
(24)

The following relation can be obtained from Eqs. (7), (23), and (24),

$$\begin{aligned} {\mathbf{X}}\left( x \right){\mathbf{B}}_{ - 1} {\mathbf{K}}_{\text{T}} \left( {{\mathbf{B}}_{ - 1} } \right)^{\text{T}} {\mathbf{X}}^{\text{T}} \left( t \right) & = {\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( x \right){\mathbf{K}}_{\text{J}} \left( {{\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( t \right)} \right)^{\text{T}} \\ & = {\mathbf{X}}\left( x \right){\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{K}}_{\text{J}} \left( {{\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} } \right)^{\text{T}} {\mathbf{X}}^{\text{T}} \left( t \right) \\ \Rightarrow {\mathbf{B}}_{ - 1} {\mathbf{K}}_{\text{T}} \left( {{\mathbf{B}}_{ - 1} } \right)^{\text{T}} & = {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{K}}_{\text{J}} \left( {{\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} } \right)^{\text{T}} \\ \Rightarrow {\mathbf{K}}_{\text{J}} & = \left( {{\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} } \right)^{ - 1} {\mathbf{B}}_{ - 1} {\mathbf{K}}_{\text{T}} \left( {{\mathbf{B}}_{ - 1} } \right)^{\text{T}} \left( {\left( {{\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} } \right)^{\text{T}} } \right)^{ - 1} \\ \text{or} \quad {\mathbf{K}}_{\text{T}} & = \left( {{\mathbf{B}}_{ - 1} } \right)^{ - 1} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{K}}_{\text{J}} \left( {{\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} } \right)^{\text{T}} \left( {\left( {{\mathbf{B}}_{ - 1} } \right)^{\text{T}} } \right)^{ - 1} \end{aligned}$$
(25)
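Numerically, the last identity of (25) is a direct computation. A sketch under the hypothetical helper names above (note that \({\mathbf{B}}_{ - 1}\) is simply \({\mathbf{B}}_{\tau }\) with \(\tau = - 1\)):

```python
import numpy as np

def kernel_jacobi_coeffs(K_T, M, N):
    """K_J of Eq. (25) from the Taylor coefficient matrix K_T of Eq. (21)."""
    B_m1 = delay_matrix(N, -1.0)       # B_{-1}: maps (x-1)-powers to x-powers
    M_inv = np.linalg.inv(M)
    return M_inv @ B_m1 @ K_T @ B_m1.T @ M_inv.T
```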

By substituting Eqs. (19) and (25) into \(\mathop \int \nolimits_{a}^{b} K\left( {x,t} \right)y\left( {t - \tau } \right){\text{d}}t\), we derive the matrix relation

$$\begin{aligned} \mathop \int \limits_{a}^{b} K\left( {x,t} \right)y\left( {t - \tau } \right){\text{d}}t = & \mathop \int \limits_{a}^{b} {\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( x \right){\mathbf{K}}_{\text{J}} \left( {{\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( t \right)} \right)^{\text{T}} \times{\mathbf{X}}\left( t \right){\mathbf{B}}_{\tau } {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}}{\text{d}}t \\ = &\; {\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( x \right){\mathbf{K}}_{\text{J}} {\mathbf{QA}} \\ \end{aligned}$$
(26)

such that

$$\begin{aligned} {\mathbf{Q}} & = \mathop \int \limits_{a}^{b} \left( {{\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( t \right)} \right)^{\text{T}} {\mathbf{X}}\left( t \right){\mathbf{B}}_{\tau } {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\text{d}}t \\ & = \mathop \int \limits_{a}^{b} \left( {{\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} } \right)^{\text{T}} {\mathbf{X}}^{\text{T}} \left( t \right){\mathbf{X}}\left( t \right){\mathbf{B}}_{\tau } {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\text{d}}t \\ & = \left( {{\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} } \right)^{\text{T}} {\mathbf{HB}}_{\tau } {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \end{aligned}$$

where

$$\begin{aligned} {\mathbf{H}} & = \mathop \int \limits_{a}^{b} {\mathbf{X}}^{\text{T}} \left( t \right){\mathbf{X}}\left( t \right){\text{d}}t = \left[ {h_{ij} } \right]; \\ h_{ij} & = \frac{1}{i + j - 1}\left( {\left( {b - 1} \right)^{i + j - 1} - \left( {a - 1} \right)^{i + j - 1} } \right),\quad i,j = 1, 2, \ldots , N + 1 \end{aligned}$$
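A sketch of \({\mathbf{H}}\) and \({\mathbf{Q}}\), again under the hypothetical helper names used in the earlier snippets:

```python
import numpy as np

def moment_matrix(N, a, b):
    """H above: h_ij = ((b-1)^(i+j-1) - (a-1)^(i+j-1)) / (i+j-1)."""
    H = np.zeros((N + 1, N + 1))
    for i in range(1, N + 2):
        for j in range(1, N + 2):
            p = i + j - 1
            H[i - 1, j - 1] = ((b - 1.0)**p - (a - 1.0)**p) / p
    return H

def q_matrix(N, a, b, tau, M):
    """Q = M^T H B_tau M, as derived above."""
    return M.T @ moment_matrix(N, a, b) @ delay_matrix(N, tau) @ M
```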

Finally, substituting the form (7) into expression (26) yields the matrix relation

$$\mathop \int \limits_{a}^{b} K\left( {x,t} \right)y\left( {t - \tau } \right){\text{d}}t = {\mathbf{X}}\left( x \right){\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{K}}_{\text{J}} {\mathbf{QA}}$$
(27)

Matrix representation of conditions

In this section, using the matrix relation (16), we write the matrix form of the mixed conditions (5) of the problem as

$$\mathop \sum \limits_{i = 0}^{n - 1} \left[ {\alpha_{ki} y^{\left( i \right)} \left( a \right) + \beta_{ki} y^{\left( i \right)} \left( b \right) + \gamma_{ki} y^{\left( i \right)} \left( \eta \right)} \right] = \mathop \sum \limits_{i = 0}^{n - 1} \left[ {\alpha_{ki} {\mathbf{X}}\left( a \right) + \beta_{ki} {\mathbf{X}}\left( b \right) + \gamma_{ki} {\mathbf{X}}\left( \eta \right)} \right]{\mathbf{B}}^{i} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}} = \mu_{k} , \quad k = 0, 1, \ldots , n - 1$$
(28)

Method of solution

We substitute the matrix relations (20) and (27), obtained in the previous subsections, into Eq. (4) to build the fundamental matrix equation of the problem. For this purpose, we define the collocation points as follows:

$$x_{s} = a + \frac{b - a}{N}s, \quad s = 0,1, 2, \ldots , N$$

As can be observed, these are the standard collocation points dividing the interval \([a,b]\) of the problem into \(N\) equal parts.

Accordingly, we obtain the system of matrix equations

$$\begin{aligned} &\mathop \sum \limits_{i = 0}^{n} P_{i} \left( {x_{s} } \right){\mathbf{X}}\left( {x_{s} } \right){\mathbf{B}}^{i} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}} + \mathop \sum \limits_{j = 0}^{m} Q_{j} \left( {x_{s} } \right){\mathbf{X}}\left( {x_{s} } \right){\mathbf{B}}_{\tau } {\mathbf{B}}^{j} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{A}} \hfill \\& \quad = g\left( {x_{s} } \right) + {\mathbf{X}}\left( {x_{s} } \right){\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{K}}_{\text{J}} {\mathbf{QA}} \hfill \\ \end{aligned}$$

The fundamental matrix equation becomes

$$\left\{ {\mathop \sum \limits_{i = 0}^{n} {\mathbf{P}}_{\text{i}} {\mathbf{XB}}^{i} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} + \mathop \sum \limits_{j = 0}^{m} {\mathbf{Q}}_{\text{j}} {\mathbf{XB}}_{\tau } {\mathbf{B}}^{j} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} - {\mathbf{XM}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{K}}_{\text{J}} {\mathbf{Q}}} \right\}{\mathbf{A}} = {\mathbf{G}}$$
(29)

where

$$\begin{gathered} {\mathbf{P}}_{{\text{i}}} = \left[ {\begin{array}{*{20}c} {P_{i} \left( {x_{0} } \right)} & 0 & \cdots & 0 \\ 0 & {P_{i} \left( {x_{1} } \right)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & {P_{i} \left( {x_{N} } \right)} \\ \end{array} } \right],\quad {\mathbf{G}} = \left[ {\begin{array}{*{20}c} {g\left( {x_{0} } \right)} \\ {g\left( {x_{1} } \right)} \\ \vdots \\ {g\left( {x_{N} } \right)} \\ \end{array} } \right] \hfill \\ {\mathbf{Q}}_{{\text{j}}} = \left[ {\begin{array}{*{20}c} {Q_{j} \left( {x_{0} } \right)} & 0 & \cdots & 0 \\ 0 & {Q_{j} \left( {x_{1} } \right)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & {Q_{j} \left( {x_{N} } \right)} \\ \end{array} } \right],\quad {\mathbf{X}} = \left[ {\begin{array}{*{20}c} {{\mathbf{X}}\left( {x_{0} } \right)} \\ {{\mathbf{X}}\left( {x_{1} } \right)} \\ \vdots \\ {{\mathbf{X}}\left( {x_{N} } \right)} \\ \end{array} } \right] \hfill \\ \end{gathered}$$

Equation (29), which is the matrix representation of Eq. (4), corresponds to a system of \(N + 1\) algebraic equations in the \(N + 1\) unknown coefficients \(a_{0} , a_{1} , a_{2} , \ldots , a_{N}\). Briefly, if we define

$${\mathbf{W}} = \mathop \sum \limits_{i = 0}^{n} {\mathbf{P}}_{\text{i}} {\mathbf{XB}}^{i} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} + \mathop \sum \limits_{j = 0}^{m} {\mathbf{Q}}_{\text{j}} {\mathbf{XB}}_{\tau } {\mathbf{B}}^{j} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} - {\mathbf{XM}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} {\mathbf{K}}_{\text{J}} {\mathbf{Q}}$$

then, with this definition of \({\mathbf{W}}\), Eq. (29) transforms into the augmented matrix form

$$\left[ {{\mathbf{W}};{\mathbf{G}}} \right] .$$
(30)
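As an end-to-end sketch of assembling (29)-(30) numerically (Python with NumPy, reusing the hypothetical helpers from the earlier snippets; P_funcs and Q_funcs are lists of the coefficient functions \(P_{i}\) and \(Q_{j}\)):

```python
import numpy as np

def assemble_W_G(N, a, b, tau, P_funcs, Q_funcs, g, K_T, alpha, beta):
    """W and G of Eqs. (29)-(30) at the equally spaced collocation points."""
    xs = a + (b - a) * np.arange(N + 1) / N
    M = jacobi_transform_matrix(N, alpha, beta)
    B = derivative_matrix(N)
    B_tau = delay_matrix(N, tau)
    Q = q_matrix(N, a, b, tau, M)
    K_J = kernel_jacobi_coeffs(K_T, M, N)
    X = np.array([[(x - 1.0)**k for k in range(N + 1)] for x in xs])
    W = sum(np.diag([P(x) for x in xs]) @ X
            @ np.linalg.matrix_power(B, i) @ M
            for i, P in enumerate(P_funcs))
    W = W + sum(np.diag([Qf(x) for x in xs]) @ X @ B_tau
                @ np.linalg.matrix_power(B, j) @ M
                for j, Qf in enumerate(Q_funcs))
    W = W - X @ M @ K_J @ Q
    G = np.array([g(x) for x in xs])
    return W, G
```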

Similarly, from (28), the matrix form of mixed conditions can be obtained briefly as

$${\mathbf{U}}_{{\mathbf{k}}} {\mathbf{A}} = \mu_{k} \,{\text{or}}\;\left[ {{\mathbf{U}}_{{\mathbf{k}}} ;\mu_{k} } \right], \quad k = 0, 1, 2, \ldots , n - 1$$
(31)

such that

$${\mathbf{U}}_{{\mathbf{k}}} = \mathop \sum \limits_{i = 0}^{n - 1} \left[ {\alpha_{ki} {\mathbf{X}}\left( a \right) + \beta_{ki} {\mathbf{X}}\left( b \right) + \gamma_{ki} {\mathbf{X}}\left( \eta \right)} \right]{\mathbf{B}}^{i} {\mathbf{M}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}}$$

Consequently, to find the Jacobi polynomial solution of Eq. (4) under the mixed conditions (5), we replace the last \(n\) rows of the augmented matrix (30) by the row matrices (31), which yields the new augmented matrix

$$\left[ {{\tilde{\mathbf{W}}};{\tilde{\mathbf{G}}}} \right] .$$
(32)

If \({\text{rank}}\;{\tilde{\mathbf{W}}} = {\text{rank}}\left[ {{\tilde{\mathbf{W}}};{\tilde{\mathbf{G}}}} \right] = N + 1\), then the matrix of unknown Jacobi coefficients is found via \({\mathbf{A}} = \left( {{\tilde{\mathbf{W}}}} \right)^{ - 1} {\tilde{\mathbf{G}}}\). In this case, the matrix \({\mathbf{A}}\) (and thereby the coefficients \(a_{0} , a_{1} , a_{2} , \ldots , a_{N}\)) is uniquely determined [22], so that Eq. (4) has a unique solution of the form (6) under the conditions (5). Thus, we get the Jacobi polynomial solution for arbitrary parameters \(\alpha\) and \(\beta\):

$$y\left( x \right) \cong y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = \mathop \sum \limits_{n = 0}^{N} a_{n} P_{n}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)$$
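A sketch of the condition rows (31) and of the row-replacement solve (30)-(32), with the same hypothetical helper names as above:

```python
import numpy as np

def condition_row(N, a, b, eta, alphas, betas, gammas, M, B):
    """U_k of (31): one row per mixed condition; alphas[i] is alpha_ki, etc."""
    point = lambda s: np.array([(s - 1.0)**k for k in range(N + 1)])
    row = sum((al * point(a) + be * point(b) + ga * point(eta))
              @ np.linalg.matrix_power(B, i)
              for i, (al, be, ga) in enumerate(zip(alphas, betas, gammas)))
    return row @ M

def solve_with_conditions(W, G, U_rows, mus):
    """Replace the last n rows of [W; G] by [U_k; mu_k] and solve for A."""
    W2, G2 = W.copy(), G.astype(float).copy()
    n = len(mus)
    W2[-n:, :] = np.vstack(U_rows)
    G2[-n:] = np.asarray(mus)
    return np.linalg.solve(W2, G2)     # coefficient vector A of (6)
```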

Error analysis

In this part of the study, we give a useful error estimation procedure for the orthogonal Jacobi polynomial solution of the problem. This procedure is also used to obtain an improved solution of the problem (4), (5) relative to the direct Jacobi polynomial solution. For this purpose, we use the residual correction technique [31, 32] and error estimation by the known Tau method [33, 34].

Recently, Yüzbaşı and Sezer [35] solved a class of the Lane–Emden equations using the improved BCM with a residual error function. Yüzbaşı et al. [36] proposed an improved Legendre method to obtain the approximate solutions of a class of integro-differential equations. Wei and Chen [37] presented a spectral method for classes of Volterra-type integro-differential equations with weakly singular kernels and smooth solutions.

For the purpose of calculating the corrected solution, we now define the residual function, using the Jacobi polynomial solution obtained by our method, as

$$R_{N} \left( x \right) = \mathop \sum \limits_{i = 0}^{n} P_{i} \left( x \right)\left( {y_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{(i)} \left( x \right) + \mathop \sum \limits_{j = 0}^{m} Q_{j} \left( x \right)\left( {y_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{(j)} \left( {x - \tau } \right) - g\left( x \right) - \mathop \int \limits_{a}^{b} K\left( {x,t} \right)y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( {t - \tau } \right){\text{d}}t$$
(33)

where \(y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) is the approximate solution of Eqs. (4, 5) for arbitrary parameters \(\alpha\) and \(\beta\). Hence, \(y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) satisfies the problem

$$\begin{aligned} &\mathop \sum \limits_{i = 0}^{n} P_{i} \left( x \right)\left( {y_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{\left( i \right)} \left( x \right) + \mathop \sum \limits_{j = 0}^{m} Q_{j} \left( x \right)\left( {y_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{\left( j \right)} \left( {x - \tau } \right) - \mathop \int \limits_{a}^{b} K\left( {x,t} \right)y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( {t - \tau } \right){\text{d}}t = g\left( x \right) + R_{N} \left( x \right) \\ &\mathop \sum \limits_{i = 0}^{n - 1} \left[ {\alpha_{ki} \left( {y_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{\left( i \right)} \left( a \right) + \beta_{ki} \left( {y_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{\left( i \right)} \left( b \right) + \gamma_{ki} \left( {y_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{\left( i \right)} \left( \eta \right)} \right] = \mu_{k} , \quad k = 0, 1, \ldots , n - 1 \end{aligned}$$
(34)

The error function \(e_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) can also be defined as

$$e_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = y\left( x \right) - y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)$$
(35)

where \(y\left( x \right)\) is the exact solution of Eqs. (4), (5). Substituting (35) into (4), (5) and also using (33) and (34), we derive the error problem with homogeneous conditions:

$$\mathop \sum \limits_{i = 0}^{n} P_{i} \left( x \right)\left( {e_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{(i)} \left( x \right) + \mathop \sum \limits_{j = 0}^{m} Q_{j} \left( x \right)\left( {e_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{(j)} \left( {x - \tau } \right) - \mathop \int \limits_{a}^{b} K\left( {x,t} \right)e_{N}^{{\left( {\alpha ,\beta } \right)}} \left( {t - \tau } \right){\text{d}}t = - R_{N} \left( x \right)$$
$$\mathop \sum \limits_{i = 0}^{n - 1} \left[ {\alpha_{ki} \left( {e_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{\left( i \right)} \left( a \right) + \beta_{ki} \left( {e_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{\left( i \right)} \left( b \right) + \gamma_{ki} \left( {e_{N}^{{\left( {\alpha ,\beta } \right)}} } \right)^{\left( i \right)} \left( \eta \right)} \right] = 0$$
(36)

By solving the problem (36) with the present method described in the previous section, we get the error estimation function \(e_{N,M}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) approximating \(e_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\). Note that \(M\) must be larger than \(N\), and the error estimate is computed from the residual function \(R_{N} \left( x \right)\). Consequently, by means of the orthogonal Jacobi polynomial functions \(y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) and \(e_{N,M}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\), we obtain the corrected Jacobi solution

$$y_{N,M}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) + e_{N,M}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)$$
(37)
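A brief sketch of this correction step, reusing the hypothetical helpers above; R_N must be a callable that evaluates the residual (33) of the degree-\(N\) solution:

```python
def error_estimate_coeffs(M_deg, a, b, tau, P_funcs, Q_funcs, R_N,
                          K_T, alpha, beta, cond_rows, n_cond):
    """Solve the error problem (36) with degree M_deg > N:
    same operator, right-hand side -R_N(x), homogeneous conditions."""
    W, G = assemble_W_G(M_deg, a, b, tau, P_funcs, Q_funcs,
                        lambda x: -R_N(x), K_T, alpha, beta)
    return solve_with_conditions(W, G, cond_rows, [0.0] * n_cond)
# adding the resulting e_{N,M} to y_N gives the corrected solution (37)
```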

Finally, we construct the Jacobi error function \(e_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) and the corrected Jacobi error function \(E_{N,M}^{(\alpha ,\beta )} \left( x \right)\)

$$e_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) = y\left( x \right) - y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) ,$$
(38)
$$E_{N,M}^{(\alpha ,\beta )} \left( x \right) = y\left( x \right) - y_{N,M}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) .$$
(39)

Illustrative examples

We apply the Jacobi matrix method to five examples via the symbolic computation program Maple [38]. In these examples, \(\left| {e_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} \right|\) denotes the absolute error function, and \(\left| {E_{N,M}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} \right|\) denotes the absolute error function of the corrected Jacobi polynomial solution.

Example 1 As the first example, we consider the FIDDE [23, 24]

$$y^{'} \left( x \right) + xy^{'} \left( {x - 1} \right) - y\left( x \right) + y\left( {x - 1} \right) = g(x) + \mathop \int \limits_{ - 1}^{1} K(x,t)y\left( {t - 1} \right)\text{d} t$$
(40)

with initial condition

$$y\left( 1 \right) - 2y\left( 0 \right) + y\left( { - 1} \right) = 0$$

Here, \(n = 1, m = 1, \tau = 1, P_{1} \left( x \right) = 1, P_{0} \left( x \right) = - 1, g\left( x \right) = x - 2, Q_{1} \left( x \right) = x, Q_{0} \left( x \right) = 1, K\left( {x,t} \right) = x + t, a = - 1, b = 1, \eta = 0, \alpha_{00} = 1, \beta_{00} = 1,\gamma_{00} = - 2, \mu_{0} = 0\). We assume that Eq. (40) has a Jacobi polynomial solution of the form

$$y\left( x \right) = a_{0} P_{0}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) + a_{1} P_{1}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) + a_{2} P_{2}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)$$

where \(N = 2\) and \(\left( {\alpha ,\beta } \right) = \left( {0.5, - 0.5} \right)\), which are chosen arbitrarily; then, according to (8),

$$\begin{aligned} {\mathbf{P}}^{{\left( {\varvec{\alpha},\varvec{\beta}} \right)}} \left( x \right) = & \left[ {\begin{array}{*{20}c} {P_{0}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} & {P_{1}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} & {P_{2}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} \\ \end{array} } \right] \\ = & \left[ {\begin{array}{*{20}c} 1 & {\frac{1}{2} + x} & { - \frac{15}{8} + \frac{15}{4}x + \frac{{3\left( {x - 1} \right)^{2} }}{2}} \\ \end{array} } \right]. \\ \end{aligned}$$

The collocation points are computed as

$$\left\{ {x_{0} = - 1, x_{1} = 0, x_{2} = 1} \right\}$$

and, from Eq. (29), the fundamental matrix equation of Eq. (40) is

$$\left\{ {{\mathbf{P}}_{1} {\mathbf{XBM}} + {\mathbf{P}}_{0} {\mathbf{XM}} + {\mathbf{Q}}_{1} {\mathbf{XB}}_{1} {\mathbf{BM}} + {\mathbf{Q}}_{0} {\mathbf{XB}}_{1} {\mathbf{M}} - {\mathbf{XMK}}_{\text{J}} {\mathbf{Q}}} \right\}\varvec{A} = \varvec{G}$$

where

$${\mathbf{P}}_{1} = {\mathbf{Q}}_{0} = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} } \right],\quad {\mathbf{P}}_{0} = \left[ {\begin{array}{*{20}c} { - 1} & 0 & 0 \\ 0 & { - 1} & 0 \\ 0 & 0 & { - 1} \\ \end{array} } \right],\quad {\mathbf{Q}}_{1} = \left[ {\begin{array}{*{20}c} { - 1} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} } \right],$$
$${\mathbf{X}} = \left[ {\begin{array}{*{20}c} 1 & { - 2} & 4 \\ 1 & { - 1} & 1 \\ 1 & 0 & 0 \\ \end{array} } \right],\quad {\mathbf{B}} = \left[ {\begin{array}{*{20}c} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \\ \end{array} } \right],\quad {\mathbf{B}}_{1} = \left[ {\begin{array}{*{20}c} 1 & { - 1} & 1 \\ 0 & 1 & { - 2} \\ 0 & 0 & 1 \\ \end{array} } \right],\quad {\mathbf{G}} = \left[ {\begin{array}{*{20}c} { - 3} \\ { - 2} \\ { - 1} \\ \end{array} } \right]$$
$${\mathbf{M}} = \left[ {\begin{array}{*{20}c} 1 & {\frac{3}{2}} & {\frac{15}{8}} \\ 0 & 1 & {\frac{15}{4}} \\ 0 & 0 & {\frac{3}{2}} \\ \end{array} } \right],\quad {\mathbf{K}}_{\text{J}} = \left[ {\begin{array}{*{20}c} { - 1} & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} } \right],\quad {\mathbf{Q}} = \left[ {\begin{array}{*{20}c} 2 & { - 1} & {\frac{7}{4}} \\ 1 & {\frac{1}{6}} & { - \frac{5}{8}} \\ {\frac{1}{4}} & {\frac{3}{8}} & { - \frac{81}{160}} \\ \end{array} } \right]$$

Hence, we obtain the matrix \({\mathbf{W}}\) as follows

$${\mathbf{W}} = \left[ {\begin{array}{*{20}c} {2\quad } & {\frac{ - 8}{3}\quad } & {10} \\ {0\quad } & {\frac{ - 2}{3}\quad } & {3 } \\ { - 2\quad } & {\frac{4}{3}\quad } & {2} \\ \end{array} } \right]$$

Using Eq. (31), we can write the matrix form of the condition of the problem as

$$\left[ {{\mathbf{U}}_{0} ;\mu_{0} } \right] = \left[ {\begin{array}{*{20}c} 0 & 0 & {\begin{array}{*{20}c} 3; & 0 \\ \end{array} } \\ \end{array} } \right]$$

Consequently, to find the Jacobi polynomial solution of the problem under the mixed condition, we replace the last row of the matrix \(\left[ {{\mathbf{W}};{\mathbf{G}}} \right]\) by the row matrix \(\left[ {{\mathbf{U}}_{0} ;\mu_{0} } \right]\) and obtain the matrix \(\left[ {{\tilde{\mathbf{W}}};{\tilde{\mathbf{G}}}} \right]\) as

$$\left[ {{\tilde{\mathbf{W}}};{\tilde{\mathbf{G}}}} \right] = \left[ {\begin{array}{*{20}c} 2 & {\frac{ - 8}{3}} & {\begin{array}{*{20}c} {10} ; & { - 3} \\ \end{array} } \\ 0 & {\frac{ - 2}{3}} & {\begin{array}{*{20}c} 3 ; & { - 2} \\ \end{array} } \\ 0 & 0 & {\begin{array}{*{20}c} 3 ; & { 0} \\ \end{array} } \\ \end{array} } \right].$$

Solving the new augmented matrix \(\left[ {{\tilde{\mathbf{W}}};{\tilde{\mathbf{G}}}} \right]\), we obtain the Jacobi polynomial coefficient matrix

$${\mathbf{A}} = \left[ {\begin{array}{*{20}c} {\frac{5}{2}} & 3 & 0 \\ \end{array} } \right]^{\text{T}} .$$

From Eq. (11), the Jacobi polynomial solution of the problem is \(y_{2}^{{\left( {0.5, - 0.5} \right)}} \left( x \right) = 3x + 4\), which is the exact solution of the problem. Furthermore, we can obtain the exact solution of the problem for any value of \(N\) and corresponding suitable values of \(\left( {\alpha ,\beta } \right)\).
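As a cross-check of Example 1 against the hypothetical Python sketches given earlier (the helper names are ours, not from the source):

```python
import numpy as np

N, alpha, beta, tau, a, b = 2, 0.5, -0.5, 1.0, -1.0, 1.0
P_funcs = [lambda x: -1.0, lambda x: 1.0]          # P_0(x) = -1, P_1(x) = 1
Q_funcs = [lambda x: 1.0, lambda x: x]             # Q_0(x) = 1,  Q_1(x) = x
g = lambda x: x - 2.0
K_T = np.array([[0., 1., 0.],                      # K(x,t) = x + t
                [1., 0., 0.],
                [0., 0., 0.]])

W, G = assemble_W_G(N, a, b, tau, P_funcs, Q_funcs, g, K_T, alpha, beta)
M = jacobi_transform_matrix(N, alpha, beta)
B = derivative_matrix(N)
U0 = condition_row(N, a, b, 0.0, [1.0], [1.0], [-2.0], M, B)  # y(-1)+y(1)-2y(0)
A = solve_with_conditions(W, G, [U0], [0.0])
print(A)   # [2.5, 3.0, 0.0], i.e. y(x) = X(x) M A = 3x + 4
```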

Example 2 We consider a third-order FIDDE with variable coefficients

$$y^{'''} \left( x \right) + y^{''} \left( {x - 1} \right) - xy^{'} \left( x \right) - xy\left( {x - 1} \right) = g(x) + \mathop \int \limits_{ - 1}^{1} y\left( {t - 1} \right){\text{d}}t$$
(41)

with the initial conditions \(y\left( 0 \right) = 0,\; y^{'} \left( 0 \right) = 1,\; y^{''} \left( 0 \right) = 0\), where \(g\left( x \right) = - \left( {x + 1} \right)\left( {\sin \left( {x - 1} \right) + \cos \left( x \right)} \right) - \cos \left( 2 \right) + 1\). The exact solution of the problem is \(y\left( x \right) = \sin \left( x \right).\)

After several trials, it was determined that \(\left( {\alpha ,\beta } \right) = \left( { - 0.4, 0.5} \right)\) gives the most accurate result; the approximate solution of the third-order FIDDE has therefore been derived by employing these values as \(y_{6}^{{\left( { - 0.4,0.5} \right)}} \left( x \right) = 0.2993926085 + 0.5377164192x - 0.4041110104\left( {x - 1} \right)^{2} - 0.6837651331e{-} 1\left( {x - 1} \right)^{3} + 0.4006530189e{-} 1\left( {x - 1} \right)^{4} + 0.2888171229e{-} 2\left( {x - 1} \right)^{5} - 0.8352418987e{-} 3\left( {x - 1} \right)^{6}\), and the estimated error function is \(e_{6,7}^{{\left( { - 0.4,0.5} \right)}} \left( x \right) = 0.585645023e{-} 3 + 0.460274411e{-} 2x - 0.1546818100e{-} 1\left( {x - 1} \right)^{2} - 0.2185948038e{-} 1\left( {x - 1} \right)^{3} - 0.4716294762e{-} 2\left( {x - 1} \right)^{4} + 0.2241198695e{-} 2\left( {x - 1} \right)^{5} - 0.1679942991e{-} 3\left( {x - 1} \right)^{6} - 0.1485433260e{-} 3\left( {x - 1} \right)^{7}\). Then, we calculate the corrected solution simply as the sum of the approximate solution and the estimated error function

$$y_{6,7}^{{\left( { - 0.4,0.5} \right)}} \left( x \right) = y_{6}^{{\left( { - 0.4,0.5} \right)}} \left( x \right) + e_{6,7}^{{\left( { - 0.4,0.5} \right)}} \left( x \right)$$

Table 1 shows the relative absolute error function \(\left| {e_{6}^{{\left( { - 0.4,0.5} \right)}} \left( x \right)} \right|\) and the corrected absolute error function \(\left| {E_{6,7}^{{\left( { - 0.4,0.5} \right)}} \left( x \right)} \right|\) for this example.

Table 1 Relative absolute error function \(\left| {e_{6}^{{\left( { - 0.4,0.5} \right)}} \left( x \right)} \right|\) and the corrected absolute error function \(\left| {E_{6,7}^{{\left( { - 0.4,0.5} \right)}} \left( x \right)} \right|\) for Example 2

Now, we determine the maximum error for \(y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) as

$$E_{N}^{{\left( {\alpha ,\beta } \right)}} = \left\| {y_{N}^{{\left( {\alpha ,\beta } \right)}} - y} \right\|_{\infty } = { \hbox{max} }\left\{ {\left| {y_{N}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) - y\left( x \right)} \right|,\quad a \le x \le b} \right\}$$

The maximum errors \(E_{N}^{{\left( {\alpha ,\beta } \right)}}\) for different values of \(N\) are given in Table 2, and it is seen that the error decreases continually as \(N\) increases.

Table 2 Maximum error (\(E_{N}^{{\left( { - 0.4,0.5} \right)}}\)) for Example 2

The maximum error for the corrected Jacobi polynomial solution (37) is calculated in a similar way,

$$E_{N,M}^{{\left( {\alpha ,\beta } \right)}} = \left\| {y_{N,M}^{{\left( {\alpha ,\beta } \right)}} - y} \right\|_{\infty } = { \hbox{max} }\left\{ {\left| {y_{N,M}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) - y\left( x \right)} \right|, \quad a \le x \le b} \right\}$$

and the results are shown in Table 3 for various values of \(N\) and \(M\). The maximum error clearly decreases as \(M\) increases.

Table 3 Maximum error (\(E_{N,M}^{{\left( { - 0.4,0.5} \right)}}\)) for Example 2

Finally, the third-order FIDDE has also been solved using Legendre, Gegenbauer (also Chebyshev), and Jacobi polynomials for comparison purposes. The maximum error values are given in Table 4; the Jacobi-based solution gives slightly better results.

Table 4 Comparison of different polynomial bases for maximum error values for Example 2

Example 3 The third example is a second-order FIDE [24–26] with variable coefficients

$$y^{''} \left( x \right) + 4xy^{'} \left( x \right) = \frac{{ - 8x^{4} }}{{\left( {x^{2} + 1} \right)^{3} }} - 2\mathop \int \limits_{0}^{1} \frac{{t^{2} + 1}}{{\left( {x^{2} + 1} \right)^{2} }}y\left( t \right){\text{d}}t, \quad 0 \le x \le 1$$
(42)

under the boundary conditions

$$y\left( 0 \right) = 1,\quad y\left( 1 \right) = \frac{1}{2}.$$

Here, \(P_{2} \left( x \right) = 1, P_{1} \left( x \right) = 4x, P_{0} \left( x \right) = 0, Q_{0} \left( x \right) = 0, g\left( x \right) = - 8x^{4} /\left( {x^{2} + 1} \right)^{3} , \alpha = 0, \beta = 0, K\left( {x,t} \right) = - 2\left( {t^{2} + 1} \right)/\left( {x^{2} + 1} \right)^{2} , \tau = 0, a = 0, b = 1\).

The exact solution of this problem is \(y\left( x \right) = \left( {x^{2} + 1} \right)^{ - 1}\).

Figure 1 compares the Jacobi polynomial solution \(y_{N}^{{\left( {0,0} \right)}} \left( x \right)\) and the corrected Jacobi polynomial solution \(y_{N,M}^{{\left( {0,0} \right)}} \left( x \right)\), for \(\left( {N,M} \right) = \left( {5,6} \right)\) and \(\alpha = \beta = 0\), with the exact solution \(y(x)\). The corrected Jacobi polynomial solution almost coincides with the exact solution.

Fig. 1 Comparison of the Jacobi polynomial solution \(y_{5}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) and the corrected Jacobi polynomial solution \(y_{5,6}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)\) with the exact solution for Example 3

Table 5 and Fig. 2 show a comparison of the absolute errors with the corrected absolute errors, for \(N = 5, 8\) and \(M = 6, 7, 8, 9\). The parameters are taken as \(\alpha = \beta = 0\). The corrected absolute errors are corrected once more, and the last two columns show these values. It is noticed that sequential corrections tend to decrease the absolute error.

Table 5 Comparison of the absolute error with the corrected absolute errors for Example 3
Fig. 2 Comparison of the absolute error function of the Jacobi polynomial solution \(\left| {e_{5}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} \right|\) with that of the corrected Jacobi polynomial solution \(\left| {E_{5,6}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)} \right|\) for Example 3

Example 4 [25, 26] The fourth example is the second-order Fredholm integro-differential equation

$$x^{2} y^{''} + 50xy^{'} - 35y = \frac{{1 - e^{x + 1} }}{x + 1} + \left( {x^{2} + 50x - 35} \right)e^{x} + \mathop \int \limits_{0}^{1} e^{xt} y\left( t \right){\text{d}}t$$

with conditions

$$y\left( 0 \right) = 1, \, y\left( 1 \right) = e$$

The exact solution of the problem is \(y\left( x \right) = e^{x}\).

Taking \((\alpha = 0.4, \beta = 0.5)\), the absolute errors of Jacobi polynomial solution for \(N = 7\) and the absolute errors of the improved Jacobi polynomial solution for \(N = 7, M = 8\) are compared with those of the wavelet Galerkin, the wavelet collocation, and the Chebyshev finite difference (ChFD) methods [25, 26], in Table 6. Considering the errors of the different methods, it is observed that the smallest errors are obtained using the improved Jacobi polynomial solution.

Table 6 Comparison of the absolute errors of Jacobi polynomial solution, improved Jacobi polynomial solution, wavelet collocation, wavelet Galerkin, and ChFD methods for Example 4

Example 5 As the last example, consider the weakly singular Volterra integral equation [39]

$$\mathop \int \limits_{0}^{x} \frac{y(t)}{{(x - t)^{{\frac{1}{2}}} }}{\text{d}}t = \frac{4}{105}x^{{3/2}} \left( {35 - 24x^{2} } \right),\quad 0 \le x \le 1.$$

We assume that the problem has a Jacobi polynomial solution in the form

$$y\left( x \right) = a_{0} P_{0}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) + a_{1} P_{1}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) + a_{2} P_{2}^{{\left( {\alpha ,\beta } \right)}} \left( x \right) + a_{3} P_{3}^{{\left( {\alpha ,\beta } \right)}} \left( x \right)$$

where \(N = 3\) and \(\left( {\alpha ,\beta } \right) = \left( {0.2, - 0.3} \right)\), which are chosen arbitrarily. Using the method described above, the Jacobi polynomial solution of the problem is obtained as \(y_{3}^{{\left( {0.2, - 0.3} \right)}} \left( x \right) = x - x^{3}\), which is the exact solution of the problem [39]. Furthermore, the exact solution of the problem can be obtained for any \(N \ge 3\) and corresponding suitable values of \(\left( {\alpha ,\beta } \right)\).

Conclusions

A new matrix method based on Jacobi polynomials and collocation points has been introduced to solve high-order linear FIDDEs with variable coefficients. The Jacobi polynomials are the most general family of classical orthogonal polynomials and are among the most extensively studied and widely applied systems. The solution of the FIDDE is expressed as a truncated series of orthogonal Jacobi polynomials, which is transformed from algebraic form into matrix form. The problem and the mixed conditions are also represented in matrix form. Finally, the solution is obtained as a truncated Jacobi series by solving the matrix system built at the collocation points. A new error estimation procedure for the polynomial solution, together with a technique for obtaining a higher-accuracy corrected solution, has been developed.

Most of the previous studies dealt with solutions based on Legendre, Chebyshev, or Gegenbauer polynomials. In this study, however, we have proposed a Jacobi polynomial solution that comprises all of these polynomial solutions as special cases.

The new Jacobi matrix method has been applied to five illustrative examples. These examples show that the method yields either the exact solution or a highly accurate approximate solution for delay integro-differential equation problems. The accuracy of the approximate solution can be increased further using the proposed error analysis technique based on the residual function.