1 Introduction

The predator–prey model, initially proposed by Lotka and Volterra (see [1, 2]), is a fundamental model for studying the population dynamics of many species. It has wide applications in various research areas, such as chemical processes (see [3, 4]), bioparticle granulation (see [5]), and the interaction of microorganisms and ecosystems (see [6, 7]). In recent years, many researchers have worked on Lotka–Volterra type predator–prey systems. In particular, Zhu et al. [8] focused on the competitive Lotka–Volterra model in random environments. Li et al. [7] studied the canard phenomenon for predator–prey systems with Holling-type response functions. Badri et al. [9] dealt with the stabilization of the feasible equilibrium point of a special class of nonlinear quadratic systems known as Lotka–Volterra systems.

In order to obtain a more realistic model, time delay has been taken into account in the predator–prey system. The analytical and dynamical aspects of such time-delay models have been studied extensively by many researchers (see [10–12]). There have also been many studies devoted to obtaining numerical solutions of the predator–prey system. For example, Capobianco [13] computed the numerical solution of the Lotka–Volterra system using high-performance parallel numerical methods. Paul [14] explained how to solve the Lotka–Volterra predator–prey model using the Runge–Kutta–Fehlberg (RKF) method. Gokmen [15] used Taylor's collocation method to find the numerical solution of the predator–prey system with delay.

Nowadays, more and more attention is focused on fractional predator–prey dynamical systems. Few fractional equations can be solved explicitly, and this, together with their broad applicability, has attracted many authors to the numerical treatment of such equations (see [16–30]). Very recently, many numerical methods have been developed for solving fractional predator–prey dynamical systems, notably a hybrid analytic approach [31], Haar wavelet and Adams–Bashforth–Moulton methods [32], Bernstein wavelet and Euler methods [33], and a numerical scheme based on the homotopy analysis transform technique [34].

Chebyshev polynomials have become very important in numerical analysis. They are widely used because of their advantages: interpolation at the Chebyshev points mitigates the Runge phenomenon and yields a near-best uniform polynomial approximation of continuous functions (see [35–37]). The most commonly used techniques involving Chebyshev polynomials are examined in [38–40] and the references therein.

Motivated by the above discussion, we are mainly interested in applying a modified Chebyshev collocation method to the time-delay predator–prey model of [10], given as follows:

$$ \textstyle\begin{cases} y_{1}'(t)=y_{1}(t)[r_{1}-a_{11}y_{1}(t-\tau )-a_{12}y_{2}(t)], \\ y_{2}'(t)=y_{2}(t)[-r_{2}+a_{21}y_{1}(t)-a_{22}y_{2}(t)], \end{cases}\displaystyle t\in [0, T] $$
(1)

with initial conditions

$$\begin{aligned}& y_{1}(0)=\alpha , \\& y_{2}(0)=\beta , \end{aligned}$$

where \(y_{1}(t)\) and \(y_{2}(t)\) are interpreted as the densities of prey and predator, respectively; \(r_{1} > 0\) is the growth rate of prey in the absence of predators, \(a_{11}> 0\) denotes the self-regulation constant of prey, \(a_{12} > 0\) describes the predation of prey by predators, \(r_{2} > 0\) is the death rate of predators in the absence of prey, \(a_{21} > 0 \) is the conversion rate for predators, \(a_{22} > 0\) describes the intraspecific competition among predators, τ is the generation time of the prey species, and α, β are constants.

The objective of this paper is to obtain approximate solutions of system (1) in the form of truncated Chebyshev series. The primary benefit of this method is that the nonlinear terms can be dealt with easily, without any extra effort. Other advantages are that the method involves neither differentiation nor integration and is easily implemented on a computer.

This paper is organized as follows: In Sect. 2, a brief review of the shifted Chebyshev polynomials and their properties is provided. In Sect. 3, we apply the collocation method to system (1) using the shifted Chebyshev polynomials. In Sect. 4, we construct a fundamental matrix equation for system (1). In Sect. 5, we introduce the technique of residual error correction in order to check the accuracy of the method. Finally, a numerical example is presented to verify the efficiency and accuracy of the proposed method.

2 Shifted Chebyshev polynomials and their properties

In this section, we introduce Chebyshev polynomials. The Chebyshev polynomials form a set of orthogonal polynomials and are simply related to the trigonometric functions (see [41, 42]) by the formula

$$ T_{n}(\cos \theta )=\cos (n\theta ) $$

with \(\theta \in [0,\pi ]\). The Chebyshev polynomial \(T_{n}(x)\) of the first kind is a polynomial in x of degree n, defined by the following relation [43]:

$$ T_{n}(x) = \cos \bigl(n \arccos (x)\bigr) ,\quad n = 0, 1, \ldots , x\in [-1,1]. $$

Since we use polynomials on \(t \in [0, L]\) for any real number \(L > 0\), we obtain the shifted Chebyshev polynomials \(T^{\ast }_{n}(t)=T_{n}(\frac{2t}{L}-1)\) by introducing the change of variable \(x = 2t/L - 1\), \(t \in [0,L]\). The shifted Chebyshev polynomials \(T^{\ast }_{n}(t)\) satisfy the recurrence relation

$$ T^{\ast }_{{(n+1)}}(t)=2 \biggl(\frac{2t}{L}-1 \biggr)T^{\ast }_{n}(t)-T^{\ast }_{(n-1)}(t),\quad n\in N, $$
(2)

with the boundary condition

$$ T^{\ast }_{n}(0)=(-1)^{n},\qquad T^{\ast }_{n}(L)=1. $$

Moreover, \(T^{\ast }_{n}(t)\) satisfies the discrete orthogonality condition

$$ \sum_{k=0}^{N}{''}T^{\ast }_{i}(t_{k})T^{\ast }_{j}(t_{k})= \textstyle\begin{cases} 0 ,& i\neq j, \\ N ,&i=j=0 \text{ or } i=j=N, \\ \frac{N}{2} ,& 0< i=j< N,\end{cases} $$
(3)

where the interpolation points \(t_{k}\) are chosen to be the Chebyshev–Gauss–Lobatto points associated with the interval \([0, L]\), namely \(t_{k}=\frac{L}{2}(1-\cos (k\frac{\pi }{N}))\), \(k=0,1,2,\dots , N\). The summation symbol with double primes denotes a sum in which both the first and last terms are halved [43].
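As a quick illustration (a minimal numpy sketch; the function names are ours, not from the source), the recurrence (2), the boundary values, the Chebyshev–Gauss–Lobatto points, and the discrete orthogonality (3) can be realized and checked as follows:

```python
import numpy as np

def shifted_chebyshev(n, t, L):
    # T*_n(t) = T_n(2t/L - 1), evaluated with the three-term recurrence (2)
    x = np.asarray(2.0 * t / L - 1.0, dtype=float)
    if n == 0:
        return np.ones_like(x)
    T_prev, T_curr = np.ones_like(x), x
    for _ in range(n - 1):
        T_prev, T_curr = T_curr, 2.0 * x * T_curr - T_prev
    return T_curr

def cgl_nodes(N, L):
    # Chebyshev–Gauss–Lobatto points t_k = (L/2)(1 - cos(k*pi/N))
    return 0.5 * L * (1.0 - np.cos(np.arange(N + 1) * np.pi / N))
```

For instance, `shifted_chebyshev(3, 0.0, L)` returns \((-1)^{3}=-1\), and a sum over `cgl_nodes(N, L)` with half-weights at the endpoints reproduces the discrete orthogonality relation.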

3 Method of solution

Continuous and bounded functions \(y_{s}(t)\) (\(s=1,2\)) can be approximated in terms of shifted Chebyshev polynomials in the interval \([0,L]\) as follows:

$$ y_{{sN}}(t)=\sum_{k=0}^{N}{''}c_{sk}T^{\ast }_{k}(t). $$
(4)

Using the discrete orthogonality relation (3), coefficient \(c_{sk}\) in (4) is given by

$$ c_{sk}=\frac{2}{N}\sum_{i=0}^{N}{''}y_{s}(t_{i})T^{\ast }_{k}(t_{i}). $$
(5)

Our aim is to obtain the unknown node values \(y_{s}(t_{i})\) for \(i=0,1,2,\dots ,N\); moreover, the method of solution should be programmable on a computer.

From equations (4) and (5) we can obtain the function \(y_{{sN}}(t)\) as follows:

$$ y_{{sN}}(t)=T(t)\cdot P\cdot Y_{s}, $$
(6)

where

$$\begin{aligned}& T(t)=\bigl[T^{\ast }_{0}(t), T^{\ast }_{1}(t) , T^{\ast }_{2}(t), \ldots ,T^{\ast }_{N}(t) \bigr], \\& P= \biggl(\frac{2}{N}\sigma _{k}\sigma _{i}T^{\ast }_{k}(t_{i}) \biggr)_{k,i=0}^{N},\qquad \sigma _{0}=\sigma _{N}=\frac{1}{2},\qquad \sigma _{k}=1\quad (0< k< N), \\& Y_{s}=\bigl[y_{s}(t_{0}),y_{s}(t_{1}), \ldots ,y_{s}(t_{N})\bigr]', \end{aligned}$$

the weights \(\sigma _{k}\), \(\sigma _{i}\) coming from the double-primed sums in (4) and (5).
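The defining property of \(P\) is that the interpolant reproduces its samples: \(T(t_{j})\cdot P\cdot Y_{s}=y_{s}(t_{j})\) at every node. A small numpy sketch (names ours) of \(P\), with the double-primed sums of (4)–(5) giving half-weights to the first and last rows and columns, lets us check this directly:

```python
import numpy as np

def cheb_P(N, L):
    # P[k, i] = (2/N) * sigma_k * sigma_i * T*_k(t_i); the half-weights
    # sigma_0 = sigma_N = 1/2 come from the double-primed sums in (4)-(5)
    k = np.arange(N + 1)
    t = 0.5 * L * (1.0 - np.cos(k * np.pi / N))        # CGL nodes on [0, L]
    x = np.clip(2.0 * t / L - 1.0, -1.0, 1.0)
    T = np.cos(np.outer(k, np.arccos(x)))              # T[k, i] = T*_k(t_i)
    sigma = np.ones(N + 1); sigma[0] = sigma[-1] = 0.5
    return (2.0 / N) * sigma[:, None] * sigma[None, :] * T, T
```

Stacking the node rows \(T(t_{j})\) (i.e. the transpose of the matrix `T` above) and multiplying by `P` gives the identity matrix, which is exactly the interpolation property.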

We know that

$$ T'(t)=T(t)\cdot K, $$

where \(K=\frac{2}{L}M\) and M is the operational matrix of differentiation given by

$$ M= \begin{bmatrix} 0 & 1 &0 &3 &0 &5 &\cdots &m_{1} \\ 0 &0 &4 &0 &8 &0 &\cdots &m_{2} \\ 0 &0 &0 &6 &0 &10 &\cdots & m_{3} \\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\ 0 &0 &0&0&0&0&\cdots &2N \\ 0 &0 &0&0&0&0&\cdots &0 \end{bmatrix}_{(N+1)\times (N+1)}, $$

where \(m_{1}\), \(m_{2}\), and \(m_{3}\) are, respectively, N, 0, and 2N for odd N, and 0, 2N, and 0 for even N. Then from the above equation we can write \(y'_{{sN}}(t)\) as follows:

$$ y'_{{sN}}(t)=T(t)\cdot K\cdot P\cdot Y_{s}. $$
(7)
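As a sketch (helper name ours), the operational matrix \(M\) can be built from the classical identity \(T'_{n}=2n(T_{n-1}+T_{n-3}+\cdots )\), with the \(T_{0}\) term halved, so that column n of \(M\) lists the Chebyshev coefficients of \(T'_{n}\):

```python
import numpy as np

def cheb_diff_M(N):
    # Column n holds the coefficients of T'_n in the Chebyshev basis:
    # T'_n = 2n(T_{n-1} + T_{n-3} + ...), with the T_0 term halved.
    # K = (2/L) M then differentiates in the t variable on [0, L].
    M = np.zeros((N + 1, N + 1))
    for n in range(1, N + 1):
        for m in range(n - 1, -1, -2):
            M[m, n] = 2.0 * n if m > 0 else float(n)
    return M
```

For example, column 3 is \(3T_{0}+6T_{2}\), which indeed equals \(T'_{3}(x)=12x^{2}-3\).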

4 Fundamental matrix equation for system (1)

To obtain the fundamental matrix equations of system (1), we substitute equations (6) and (7) into system (1). We get the fundamental matrix system

$$ \textstyle\begin{cases} T(t)KPY_{1}=T(t)PY_{1}[r_{1}-a_{11}T(t-\tau )PY_{1}-a_{12}T(t)PY_{2}], \\ T(t)KPY_{2}=T(t)PY_{2}[-r_{2}+a_{21}T(t)PY_{1}-a_{22}T(t)PY_{2}]. \end{cases} $$
(8)

Let

$$ PY_{1}=Z_{1},\qquad PY_{2}=Z_{2}. $$

We write equations (8) as follows:

$$ \begin{aligned} &T(t)KZ_{1}-r_{1}T(t)Z_{1}+a_{11}T(t- \tau )Z_{1}T(t)Z_{1}+a_{12}T(t)Z_{2}T(t)Z_{1}=0, \\ &T(t)KZ_{2}+r_{2}T(t)Z_{2}-a_{21}T(t)Z_{1}T(t)Z_{2}+a_{22}T(t)Z_{2}T(t)Z_{2}=0. \end{aligned} $$
(9)

Then we can rewrite equations (9) as follows:

$$ \begin{aligned} & \bigl(D_{1}(t)+A_{11}(t-\tau ,t)Z_{1}+A_{12}(t,t)Z_{2} \bigr)Z_{1}=0, \\ & \bigl(D_{2}(t)+A_{21}(t,t)Z_{1}+A_{22}(t,t)Z_{2} \bigr)Z_{2}=0, \end{aligned} $$
(10)

where

$$\begin{aligned} &D_{1}(t)=T(t)K-r_{1}T(t),\qquad A_{11}(t-\tau ,t)=a_{11}T(t-\tau )T^{\ast }(t), \\ &A_{12}(t,t)=a_{12}T(t)T^{*}(t),\qquad A_{22}(t,t)=a_{22}T(t)T^{*}(t), \\ &D_{2}(t)=T(t)K+r_{2}T(t),\qquad A_{21}(t,t)=-a_{21}T(t)T^{\ast }(t), \end{aligned}$$

and

$$ T^{\ast }(t)= \begin{bmatrix} T(t)&0&\cdots &0 \\ 0&T(t)&\cdots &0 \\ \vdots &&\ddots &\vdots \\ 0&0&\cdots &T(t) \end{bmatrix}. $$

By substituting the interpolation points \(t_{k}\) into equations (10), we have two nonlinear systems

$$ \begin{pmatrix} D_{1}(t_{0})+A_{11}(t_{0}-\tau ,t_{0})Z_{1}+A_{12}(t_{0},t_{0})Z_{2} \\ D_{1}(t_{1})+A_{11}(t_{1}-\tau ,t_{1})Z_{1}+A_{12}(t_{1},t_{1})Z_{2} \\ \cdot \\ \cdot \\ \cdot \\ D_{1}(t_{N})+A_{11}(t_{N}-\tau ,t_{N})Z_{1}+A_{12}(t_{N},t_{N})Z_{2} \end{pmatrix} Z_{1}=0 $$
(11)

and

$$ \begin{pmatrix} D_{2}(t_{0})+A_{21}(t_{0},t_{0})Z_{1}+A_{22}(t_{0},t_{0})Z_{2} \\ D_{2}(t_{1})+A_{21}(t_{1},t_{1})Z_{1}+A_{22}(t_{1},t_{1})Z_{2} \\ \cdot \\ \cdot \\ \cdot \\ D_{2}(t_{N})+A_{21}(t_{N},t_{N})Z_{1}+A_{22}(t_{N},t_{N})Z_{2} \end{pmatrix} Z_{2}=0. $$
(12)

Let

$$ W_{1}= \begin{pmatrix} D_{1}(t_{0})+A_{11}(t_{0}-\tau ,t_{0})Z_{1}+A_{12}(t_{0},t_{0})Z_{2} \\ D_{1}(t_{1})+A_{11}(t_{1}-\tau ,t_{1})Z_{1}+A_{12}(t_{1},t_{1})Z_{2} \\ \cdot \\ \cdot \\ \cdot \\ D_{1}(t_{N})+A_{11}(t_{N}-\tau ,t_{N})Z_{1}+A_{12}(t_{N},t_{N})Z_{2} \end{pmatrix} $$
(13)

and

$$ W_{2}= \begin{pmatrix} D_{2}(t_{0})+A_{21}(t_{0},t_{0})Z_{1}+A_{22}(t_{0},t_{0})Z_{2} \\ D_{2}(t_{1})+A_{21}(t_{1},t_{1})Z_{1}+A_{22}(t_{1},t_{1})Z_{2} \\ \cdot \\ \cdot \\ \cdot \\ D_{2}(t_{N})+A_{21}(t_{N},t_{N})Z_{1}+A_{22}(t_{N},t_{N})Z_{2} \end{pmatrix}. $$
(14)

In addition, by the initial value, we have

$$ T(t_{0})Z_{1}=\alpha ,\qquad T(t_{0})Z_{2}= \beta . $$

Thus, replacing the first rows of the matrices \(W_{1}\), \(W_{2}\) by \(T(t_{0})\), we have

$$ \widetilde{W}_{1}= \begin{pmatrix} T(t_{0}) \\ D_{1}(t_{1})+A_{11}(t_{1}-\tau ,t_{1})Z_{1}+A_{12}(t_{1},t_{1})Z_{2} \\ \cdot \\ \cdot \\ \cdot \\ D_{1}(t_{N})+A_{11}(t_{N}-\tau ,t_{N})Z_{1}+A_{12}(t_{N},t_{N})Z_{2} \end{pmatrix} $$
(15)

and

$$ \widetilde{W}_{2}= \begin{pmatrix} T(t_{0}) \\ D_{2}(t_{1})+A_{21}(t_{1},t_{1})Z_{1}+A_{22}(t_{1},t_{1})Z_{2} \\ \cdot \\ \cdot \\ \cdot \\ D_{2}(t_{N})+A_{21}(t_{N},t_{N})Z_{1}+A_{22}(t_{N},t_{N})Z_{2} \end{pmatrix}. $$
(16)

Then we can rewrite equations (11) and (12) as follows:

$$ WA=B, $$
(17)

where

$$ W= \begin{pmatrix} \widetilde{W}_{1}&0 \\ 0&\widetilde{W}_{2} \end{pmatrix},\qquad A= \begin{pmatrix} Z_{1} \\ Z_{2} \end{pmatrix} ,\qquad B= \begin{pmatrix} b_{1} \\ b_{2} \end{pmatrix} $$

and \(b_{1}=[\alpha ,0,0,\ldots ]^{T}\), \(b_{2}=[\beta ,0,0,\ldots ]^{T}\).
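To make the assembly of the collocation system concrete, the following numpy sketch forms the node-value residuals of system (1) at the Chebyshev–Gauss–Lobatto points, with the first rows replaced by the initial conditions as in (15)–(16), and solves them with a basic Newton iteration using a finite-difference Jacobian. This solver is our illustration, not the authors' implementation; the coefficients mirror Example 1 below, while the delay \(\tau =0.1\) is an assumed value:

```python
import numpy as np

def cgl(N, L):
    return 0.5 * L * (1.0 - np.cos(np.arange(N + 1) * np.pi / N))

def T_row(t, N, L):
    # row [T*_0(t), ..., T*_N(t)] via recurrence (2); also valid for
    # t - tau < 0, where the cosine formula would not apply
    x = 2.0 * t / L - 1.0
    row = np.ones(N + 1)
    if N >= 1:
        row[1] = x
    for n in range(2, N + 1):
        row[n] = 2.0 * x * row[n - 1] - row[n - 2]
    return row

def matrices(N, L):
    t = cgl(N, L)
    T = np.array([T_row(tk, N, L) for tk in t]).T   # T[k, j] = T*_k(t_j)
    sig = np.ones(N + 1); sig[0] = sig[-1] = 0.5
    P = (2.0 / N) * sig[:, None] * sig[None, :] * T  # interpolation matrix
    M = np.zeros((N + 1, N + 1))                     # operational matrix
    for n in range(1, N + 1):
        for m in range(n - 1, -1, -2):
            M[m, n] = 2.0 * n if m > 0 else float(n)
    return t, P, (2.0 / L) * M                       # K = (2/L) M

def residuals(u, N, L, tau, r1, a11, a12, r2, a21, a22, alpha, beta):
    # F(u) = 0 is the discrete system: collocation equations at t_1..t_N
    # plus the two initial conditions in the first rows, as in (15)-(16)
    t, P, K = matrices(N, L)
    Z1, Z2 = P @ u[:N + 1], P @ u[N + 1:]
    F = np.empty(2 * (N + 1))
    T0 = T_row(0.0, N, L)
    F[0] = T0 @ Z1 - alpha
    F[N + 1] = T0 @ Z2 - beta
    for j in range(1, N + 1):
        Tt, Td = T_row(t[j], N, L), T_row(t[j] - tau, N, L)
        y1, y2, y1d = Tt @ Z1, Tt @ Z2, Td @ Z1
        F[j] = Tt @ K @ Z1 - y1 * (r1 - a11 * y1d - a12 * y2)
        F[N + 1 + j] = Tt @ K @ Z2 - y2 * (-r2 + a21 * y1 - a22 * y2)
    return F

def solve(N, L, tau, pars, alpha, beta, iters=30):
    # undamped Newton with a finite-difference Jacobian, starting from
    # the constant initial-value guess
    f = lambda v: residuals(v, N, L, tau, *pars, alpha, beta)
    u = np.concatenate([np.full(N + 1, alpha), np.full(N + 1, beta)])
    for _ in range(iters):
        F = f(u)
        if np.linalg.norm(F) < 1e-12:
            break
        J = np.empty((u.size, u.size))
        for i in range(u.size):
            du = np.zeros_like(u); du[i] = 1e-7
            J[:, i] = (f(u + du) - F) / 1e-7
        u = u - np.linalg.solve(J, F)
    return u

# coefficients of Example 1 below; tau = 0.1 is an assumed delay value
u = solve(N=8, L=5.0, tau=0.1, pars=(1.0, 1.0, 0.5, 1.0, 2.0, 4.0),
          alpha=1.0, beta=0.2)
```

The unknown vector stacks the node values \(Y_{1}\), \(Y_{2}\); evaluating \(T(t)\cdot P\) on the solution then gives the approximate solutions anywhere on \([0,L]\).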

5 Error estimation and residual correction

This section is devoted to checking the accuracy of our method. Since the exact solution of system (1) cannot be obtained, we use residual correction to obtain better approximate solutions. Residual correction is a process in which the obtained approximate solution is substituted into the original equation, yielding a system whose solution is the error of that approximate solution. Accordingly, substituting the approximate solutions \(y_{{sN}}(t)\) (\(s=1,2\)) into system (1), we obtain

$$\begin{aligned} &E_{{1N}}(t)=y_{{1N}}'(t)-y_{{1N}}(t) \bigl[r_{1}-a_{11}y_{{1N}}(t- \tau )-a_{12}y_{{2N}}(t)\bigr], \\ &E_{{2N}}(t)=y_{{2N}}'(t)-y_{{2N}}(t) \bigl[-r_{2}+a_{{21}}y_{{1N}}(t)-a_{22}y_{{2N}}(t) \bigr], \end{aligned}$$

where \(E_{sN}(t)\) (\(s=1,2\)) denotes the residual functions.
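For instance (a small sketch with hypothetical callables; any representation of the approximate solutions, such as (6)–(7), supplies them), the residual functions can be evaluated on a grid:

```python
import numpy as np

def residual_functions(y1, y2, dy1, dy2, y1_delay, t, pars):
    # E_1N, E_2N on a grid t; y1, y2, dy1, dy2, y1_delay are callables for
    # the approximate solutions, their derivatives, and the delayed prey
    # term (all obtainable from the representations (6) and (7))
    r1, a11, a12, r2, a21, a22 = pars
    E1 = dy1(t) - y1(t) * (r1 - a11 * y1_delay(t) - a12 * y2(t))
    E2 = dy2(t) - y2(t) * (-r2 + a21 * y1(t) - a22 * y2(t))
    return E1, E2
```

As a sanity check, constant functions at a positive equilibrium of the coefficients make both residuals vanish identically.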

We define the error corresponding to \(y_{{1N}}(t)\) and \(y_{{2N}}(t)\) as follows:

$$ e_{{1N}}(t)=y_{1}(t)-y_{{1N}}(t) $$

and

$$ e_{{2N}}(t)=y_{2}(t)-y_{{2N}}(t). $$

Substituting \(y_{1}(t)\) and \(y_{2}(t)\) into system (1), we have

$$ \begin{aligned} &(y_{{1N}}+e_{{1N}})'(t)=(y_{{1N}}+e_{{1N}}) (t)\bigl[r_{1}-a_{11}(y_{{1N}}+e_{{1N}}) (t- \tau )-a_{12}(y_{{2N}}+e_{{2N}}) (t)\bigr], \\ &(y_{{2N}}+e_{{2N}})'(t)=(y_{{2N}}+e_{{2N}}) (t)\bigl[-r_{2}+a_{21}(y_{{1N}}+e_{{1N}}) (t)-a_{22}(y_{{2N}}+e_{{2N}}) (t)\bigr]. \end{aligned} $$
(18)

We can rewrite Eq. (18) as follows:

$$\begin{aligned}& \begin{aligned} e_{{1N}}'(t)&=r_{1}e_{{1N}}(t)-a_{11}y_{{1N}}(t)e_{{1N}}(t-\tau )-a_{11}y_{{1N}}(t- \tau )e_{{1N}}(t)-a_{11}e_{{1N}}(t)e_{{1N}}(t-\tau ) \\ &\quad{}-a_{12}y_{{1N}}(t)e_{{2N}}(t)-a_{12}y_{{2N}}(t)e_{{1N}}(t)-a_{12}e_{{1N}}(t)e_{{2N}}(t)-E_{{1N}}(t), \end{aligned} \\ & \\ & \begin{aligned} e_{{2N}}'(t)&=-r_{2}e_{{2N}}(t)+a_{21}y_{{2N}}(t)e_{{1N}}(t)+a_{21}y_{{1N}}(t)e_{{2N}}(t)+a_{21}e_{{1N}}(t)e_{{2N}}(t) \\ &\quad{}-2a_{22}y_{{2N}}(t)e_{{2N}}(t)-a_{22}e_{{2N}}(t)e_{{2N}}(t)-E_{{2N}}(t). \end{aligned} \end{aligned}$$
(19)

Similar to system (1), this is also a nonlinear delay differential system, with initial values \(e_{{1N}}(0)=0\) and \(e_{{2N}}(0)=0\) and unknown functions \(e_{{1N}}(t)\) and \(e_{{2N}}(t)\). We apply the method of Sect. 3 to Eq. (19) in order to obtain approximate solutions. Let \(e_{{1N,M}}(t)\) and \(e_{{2N,M}}(t)\) be the estimated errors \(e_{{1N}}(t)\) and \(e_{{2N}}(t)\). We then obtain the new approximate solutions as follows:

$$\begin{aligned} &y_{{1N,M}}(t)=y_{{1N}}(t)+e_{{1N,M}}(t), \\ &y_{{2N,M}}(t)=y_{{2N}}(t)+e_{{2N,M}}(t). \end{aligned}$$

Then \(y_{{1N,M}}(t)\) and \(y_{{2N,M}}(t)\) are corrected solutions, which are more accurate than \(y_{{1N}}(t)\) and \(y_{{2N}}(t)\). We measure the accuracy of the numerical solutions by evaluating the residual functions with \(y_{{1N,M}}(t)\) and \(y_{{2N,M}}(t)\) in place of \(y_{{1N}}(t)\) and \(y_{{2N}}(t)\). In the next section we use an example to demonstrate this idea.

6 Numerical application

In this section, we demonstrate our method by a detailed example. We give the values of the approximate solutions \(y_{{sN}}(t)\) (\(s= 1, 2\)) at selected points of the given interval for different values of N.

Example 1

([44])

We consider the following system:

$$ \textstyle\begin{cases} y_{1}'(t)=y_{1}(t)[1-y_{1}(t-\tau )-0.5y_{2}(t)], \\ y_{2}'(t)=y_{2}(t)[-1+2y_{1}(t)-4y_{2}(t)], \end{cases}\displaystyle 0< t< 5, $$
(20)

with \(\alpha =1\) and \(\beta =0.2\). In order to obtain \(y_{{1N}}(t)\) and \(y_{{2N}}(t)\) with \(N=5,6\), and 7, we apply the method of Sect. 3 to Eq. (20). Then we have

$$\begin{aligned} &\begin{aligned} y_{{15}}(t) & =0.9141-9.0596\times 10^{-3}T^{\ast }_{1}(t) + 9.1229 \times 10^{-2}T^{\ast }_{2}(t) \\ &\quad{}-4.6658\times 10^{-3}T^{\ast }_{3}(t)-1.6735 \times 10^{-2} T^{\ast }_{4}(t) +2.3208\times 10^{-3}T^{\ast }_{5}(t), \end{aligned} \\ &\begin{aligned} y_{{25}}(t)&=0.19536 -1.1644\times 10^{-2}T^{\ast }_{1}(t)+1.046 \times 10^{-2}T^{\ast }_{2}(t) \\ &\quad{}+1.2809\times 10^{-2}T^{\ast }_{3}(t)-4.6031 \times 10^{-3}T^{\ast }_{4}(t)+5.4663 \times 10^{-5}T^{\ast }_{5}(t) \end{aligned} \end{aligned}$$

for \(N=5\),

$$\begin{aligned} &\begin{aligned} y_{{16}}(t)&=0.91389-5.1691\times 10^{-3}T^{\ast }_{1}(t) +8.3874 \times 10^{-2}T^{\ast }_{2}(t) \\ &\quad{}-8.3620\times 10^{-3}T^{\ast }_{3}(t)-1.3075 \times 10^{-2}T^{\ast }_{4}(t)+ 4.7651\times 10^{-4}T^{\ast }_{5}(t) \\ &\quad{}+ 2.2600\times 10^{-3} T^{\ast }_{6}(t), \end{aligned} \\ &\begin{aligned} y_{{26}}(t)&=0.195521-8.7581\times 10^{-3}T^{\ast }_{1}(t) + 1.0145 \times 10^{-2}T^{\ast }_{2}(t) \\ &\quad{}+ 1.0997\times 10^{-2}T^{\ast }_{3}(t)-3.9419 \times 10^{-3}T^{\ast }_{4}(t)-3.0842 \times 10^{-4}T^{\ast }_{5}(t) \\ &\quad{}+2.0641\times 10^{-4} T^{\ast }_{6}(t) \end{aligned} \end{aligned}$$

for \(N=6\), and

$$\begin{aligned} &\begin{aligned} y_{{17}}(t)&=0.9322-8.8302\times 10^{-3} T^{\ast }_{1}(t)+8.0857 \times 10^{-2}T^{\ast }_{2}(t) \\ &\quad{}-7.2591\times 10^{-3}T^{\ast }_{3}(t)-1.1632 \times 10^{-2}T^{\ast }_{4}(t)-2.7616 \times 10^{-4}T^{\ast }_{5}(t) \\ &\quad{}+1.0363\times 10^{-3} T^{\ast }_{6}(t)-1.4866 \times 10^{-4}T^{\ast }_{7}(t), \end{aligned} \\ &\begin{aligned} y_{{27}}(t)&=0.195691-9.2694\times 10^{-3}T^{\ast }_{1}(t)+9.1533 \times 10^{-3}T^{\ast }_{2}(t) \\ &\quad{}+ 1.0748\times 10^{-2}T^{\ast }_{3}(t)-3.6569 \times 10^{-3}T^{\ast }_{4}(t)-1.7423 \times 10^{-4}T^{\ast }_{5}(t) \\ &\quad{}+8.6092\times 10^{-5}T^{\ast }_{6}(t)-3.3297 \times 10^{-5}T^{\ast }_{7}(t) \end{aligned} \end{aligned}$$

for \(N=7\).

The approximate solutions for the prey and predator populations and a comparison with the results of Ref. [44] are presented in Fig. 1 and Fig. 2. The figures show that the proposed method preserves the positivity of the solutions, which is a qualitative property of the solutions of Eq. (20). To examine accuracy, we consider the absolute residual errors of these approximate solutions. Figure 3 plots the absolute residual errors for Example 1. In Table 1, we list the absolute residual errors of the present method and of Ref. [44]. It is seen from the table and figures that the absolute residual error values decrease as the parameter N increases, in good agreement with the results given in Ref. [44].

Figure 1
figure 1

Approximate solutions for the prey population with \(N=5, 6\), and 7 by the present method and the Taylor collocation method [44]

Figure 2
figure 2

Approximate solutions for the predator population with \(N=5, 6\), and 7 by the present method and the Taylor collocation method [44]

Figure 3
figure 3

Absolute residual errors corresponding to the approximate solutions of prey and predator population with \(N= 5, 6\), and 7

Table 1 Comparison of absolute errors obtained by the present method and [44] for Example 1

To implement the residual error correction of Sect. 5, we apply the method of Sect. 3 again to Eq. (20), choosing \(N=4\) and \(M=5, 6\). The approximate solutions \(y_{{14}}(t)\) and \(y_{{24}}(t)\) are found as follows:

$$\begin{aligned} &\begin{aligned} y_{{14}}(t)&=0.91279-1.1906\times 10^{-2}T^{\ast }_{1}(t) +6.5714 \times 10^{-2}T^{\ast }_{2}(t) \\ &\quad{}-1.4116\times 10^{-2}T^{\ast }_{3}(t)-4.5221 \times 10^{-3}T^{\ast }_{4}(t), \end{aligned} \\ &\begin{aligned} y_{{24}}(t)&= 0.19698-6.3365\times 10^{-3}T^{\ast }_{1}(t)+7.2034 \times 10^{-3}T^{\ast }_{2}(t) \\ &\quad{}+7.5775\times 10^{-3}T^{\ast }_{3}(t)-2.9467 \times 10^{-3}T^{\ast }_{4}(t). \end{aligned} \end{aligned}$$

Realizing the error estimation concept of Sect. 5 with \(N=4\) and \(M=5, 6\), the estimated errors are obtained, namely

$$\begin{aligned}& \begin{aligned} e_{{14,5}}(t)&= 0.0013143+2.8461\times 10^{-3}T^{\ast }_{1}(t)+2.5515 \times 10^{-2}T^{\ast }_{2}(t) \\ &\quad{}-9.4498\times 10^{-3}T^{\ast }_{3}(t)-1.2212 \times 10^{-2}T^{\ast }_{4}(t)+2.3208 \times 10^{-3}T^{\ast }_{5}(t), \end{aligned} \\& \begin{aligned} e_{{24,5}}(t) &=-0.0016213-5.3077\times 10^{-3}T^{\ast }_{1}(t)+3.2564 \times 10^{-3}T^{\ast }_{2}(t) \\ &\quad{}+5.2317\times 10^{-3}T^{\ast }_{3}(t)-1.6564 \times 10^{-3}T^{\ast }_{4}(t)+5.4663 \times 10^{-5}T^{\ast }_{5}(t), \end{aligned} \\& e_{{14,6}}(t)= 2.8176\times 10^{-3} +5.1572 \times 10^{-2}T^{\ast }_{1}(t) +1.557\times 10^{-2}T^{\ast }_{2}(t) \\& \hphantom{e_{{14,6}}(t)=}{}-3.149\times 10^{-2}T^{\ast }_{3}(t)+1.6031 \times 10^{-4}T^{\ast }_{4}(t)-2.6291 \times 10^{-4}T^{\ast }_{5}(t) \\& \hphantom{e_{{14,6}}(t)=}{}+1.2712\times 10^{-3}T^{\ast }_{6}(t), \\& \begin{aligned} e_{{24,6}}(t)&=-1.0873\times 10^{-3}+1.0813\times 10^{-2}T^{\ast }_{1}(t)+1.3781 \times 10^{-2}T^{\ast }_{2}(t) \\ &\quad{}-2.2536\times 10^{-3}T^{\ast }_{3}(t)-3.2036 \times 10^{-3}T^{\ast }_{4}(t)+ 7.4238\times 10^{-4}T^{\ast }_{5}(t) \\ &\quad{}-1.8855\times 10^{-4}T^{\ast }_{6}(t). \end{aligned} \end{aligned}$$

Then we can obtain our improved approximate solutions:

$$ y_{{14,5}}(t)=y_{{14}}(t)+e_{{14,5}}(t),\qquad y_{{24,5}}(t)=y_{{24}}(t)+e_{{24,5}}(t) $$

and

$$ y_{{14,6}}(t)=y_{{14}}(t)+e_{{14,6}}(t),\qquad y_{{24,6}}(t)=y_{{24}}(t)+e_{{24,6}}(t). $$

The improvements of \(y_{{14}}(t)\) and \(y_{{24}}(t)\) with \(M=5,6\) are shown in Fig. 4. They reveal that the proposed technique preserves the positivity of the solutions of the given delayed predator–prey system. In order to see how much improvement this scheme provides, the absolute residual errors of the original approximate solutions \(y_{{14}}(t)\) and \(y_{{24}}(t)\) are shown together with those of the corrected solutions in Fig. 5. In Table 2, we list the residual errors of the improved solutions at some points of the interval. In view of Fig. 5 and Table 2, the absolute residual errors of \(y_{{14,5}}(t)\) and \(y_{{24,5}}(t)\) are smaller than those of \(y_{{14}}(t)\) and \(y_{{24}}(t)\), respectively, and those of \(y_{{14,6}}(t)\) and \(y_{{24,6}}(t)\) are smaller still. Hence, residual correction in general provides a definite improvement of the approximate solutions for Eq. (20).

Figure 4
figure 4

Graphics of the approximate solutions for the prey and predator with \(N = 4\) and their two improvements obtained by \(M = 5, 6\)

Figure 5
figure 5

Absolute residual errors for the prey and predator population corresponding to the approximate solutions with \(N=4\) and their improvement \(M= 5, 6\)

Table 2 Absolute residual error values of \(y_{{s4,M}}(t)\) with \(s=1,2\) and \(N=4\) and \(M=5, 6 \) at some points (for comparison, one can see [44])

7 Conclusion

In this paper, a modified Chebyshev collocation method based on the residual correction technique is presented to solve the Lotka–Volterra model with delay. An efficient error estimate can be made using this technique. The key advantages of this approach are its low computational cost and simplicity of implementation. The method also converts the given problem into a system of algebraic equations, which can be solved easily using MATLAB or MAPLE software. Our numerical results are compared with those of [44] and show good correspondence. Based on these facts, the modified Chebyshev collocation method is a powerful mathematical tool for obtaining numerical solutions of nonlinear systems. In the future, the proposed method will be applied to the fractional Lotka–Volterra biological model with and without time delay; it is hoped that biologically relevant features of the numerical results, such as stability and chaotic behavior, can then be examined. Similarly, numerical techniques may be designed for fractional reaction–diffusion systems.