Introduction

Fractional calculus is the generalization of ordinary calculus that studies integrals and derivatives of non-integer (real or complex) order. Various numerical methods have been used to solve functional equations containing fractional derivatives; see, e.g., [18, 22, 27, 31]. In a wide class of physical, dynamical, and biological models and complex problems, constant-order fractional equations cannot adequately characterize the problem. In 1993, the intellectual curiosity of Samko and Ross led to an extension of classical fractional calculus [32]. They introduced a continuously varying order q(x) for differential and integral operators and defined these operators in two ways: the first was a direct definition, and the second was based on Fourier transforms. Variable-order (VO) fractional calculus is well suited to representing memory properties that change with time or location. Applications of the VO idea have been developed by several researchers, such as Lorenzo and Hartley [25], Ingman and Suzdalnitsky [17], Coimbra [9], and others. As a powerful tool, VO fractional calculus has recently been applied to fluid dynamics and the control of nonlinear viscoelasticity [29], mechanics [9], medical imaging [38], the processing of geographical data [10], etc. Analytic results on the existence and uniqueness of solutions of generalized fractional differential equations with VO operators are discussed in [30].

Solving VO fractional problems is quite difficult; therefore, efficient numerical techniques need to be developed. A finite difference technique has been used to solve VO fractional integro-differential equations [37]. Recently, spectral methods using continuous orthogonal polynomials, such as Jacobi, Chebyshev, and Legendre polynomials, have been developed for various kinds of VO fractional differential equations. Chen et al. proposed Legendre wavelet functions to solve a class of nonlinear VO fractional differential equations [8]. Bernstein polynomials were used by Wang et al. to solve VO fractional partial differential equations numerically [35]. In [7], numerical solutions of the VO linear cable equation are presented using a Bernstein polynomial basis on the interval [0, R]. Although continuous orthogonal polynomials are used as basis functions for the approximate solution of equations in most cases, discrete orthogonal polynomials have recently attracted attention for solving stochastic differential equations [36] and numerical fluid dynamics problems [13] because of their favorable behavior and properties [15].

Discrete orthogonal polynomials are orthogonal with respect to a weighted discrete inner product. Classical orthogonal polynomials (continuous and discrete) are the class of orthogonal polynomials associated with the Wiener–Askey scheme, including Hermite, Laguerre, and Jacobi polynomials in the continuous case, and Charlier, Meixner, Krawtchouk, and Hahn polynomials in the discrete case [24]. In this paper, we focus on Hahn polynomials.

The advection–dispersion equation and its extensions (such as the mobile–immobile equation) combine the diffusion and advection equations. These equations are used to model the transport of pollutants and energy, subsurface water flows, deeper river flows, streams, and groundwater [4, 11, 12, 19, 23]. Recently, the VO fractional diffusion equation has been extended to describe time-dependent anomalous diffusion, diffusion processes in inhomogeneous porous media, and complex dynamical problems [40].

In this work, we investigate the VO fractional problem arising in a mobile–immobile advection–dispersion equation (VOFMIE) [39]. This model is obtained from the standard advection–dispersion equation by adding a variable time-fractional derivative in the Caputo sense of order \(0< q(x,t)\le 1\). In [26], this model is used to simulate solute transport in watershed catchments and rivers. Some numerical methods have been developed for the solution of the VOFMIE. The authors of [20] used reproducing kernel theory and a collocation method to solve the VOFMIE. An implicit Euler approximation for the VOFMIE is described in [39]. Also, a Chebyshev wavelets method [16] has been employed to solve this VO equation. In [34], the VOFMIE on 2-D arbitrary domains is introduced, and a meshless method based on the MLS approach is proposed to solve it. In [1], a method based on shifted Jacobi Gauss–Lobatto and shifted Jacobi Gauss–Radau spectral methods is presented for solving the mobile–immobile advection–dispersion model with a Coimbra time variable fractional derivative.

The VO fractional mobile–immobile advection–dispersion equation is defined as follows [39]:

$$\begin{aligned}&\beta _1D_t^{q(x,t)}u(x,t)+ \beta _2\frac{\partial u(x,t)}{\partial t}+ \beta _3\frac{\partial u(x,t)}{\partial x} +\beta _4\frac{\partial ^2 u(x,t)}{\partial x^2}=f(x,t),\nonumber \\&\quad (x,t)\in \Omega :=[0,X]\times [0,T], \end{aligned}$$
(1)

with the following initial and boundary conditions:

$$\begin{aligned} u(x,0)&=\phi (x),&0\le x\le X,\end{aligned}$$
(2)
$$\begin{aligned} u(0,t)&=\psi _1(t),\ \ u(X,t)=\psi _2(t),&0\le t\le T, \end{aligned}$$
(3)

where \(\beta _1\ge 0\), \(\beta _2\ge 0\), \(\beta _3>0\), \(\beta _4<0\), and \(D_t^{q(x,t)}\) denotes the VO fractional derivative in the Caputo sense of order \(0< q(x,t)\le 1\), defined by [7]

$$\begin{aligned} D_t^{q(x,t)}\left( f(x,t)\right) =\frac{1}{\Gamma (1-q(x,t))} \int _0^t(t-\tau )^{-q(x,t)}\frac{\partial f(x,\tau )}{\partial \tau }\mathrm{d}\tau ,\quad t>0. \end{aligned}$$

This paper is organized as follows. In section “Hahn polynomials”, we review definitions and some properties of Hahn polynomials. In section “Operational matrix of derivatives for Hahn polynomials”, the operational matrix of the VO fractional derivative for the Hahn polynomials is derived. Section “Description of the proposed method”, as the main section, is devoted to applying the operational matrices to solve a class of VO fractional PDEs. Convergence analysis is carried out in section “Error bound”. Some examples are presented in section “Numerical examples” to illustrate the efficiency of the method. Finally, some conclusions are given in section “Discussion and conclusion”.

Hahn polynomials

Definition and properties of Hahn polynomials

First, we state some definitions.

Definition 1

The Pochhammer symbol is defined by the following relations:

$$\begin{aligned} (a)_0&:=1,\end{aligned}$$
(4)
$$\begin{aligned} (a)_k&:=a(a+1)(a+2)\cdots (a+k-1), \ \ k\in \mathbb {N},\ a\in \mathbb {C}. \end{aligned}$$
(5)

Definition 2

For real numbers \(\alpha >-1\), \(\beta >-1\), and a non-negative integer N, the Hahn polynomials \(H_n(x;\alpha ,\beta ,N)\), \(n=0,1,\ldots ,N\), are defined in [3] by

$$\begin{aligned} H_n(x;\alpha ,\beta ,N)=\sum _{k=0}^{n}\frac{(-n)_k(n+\alpha +\beta +1)_k(-x)_k}{(\alpha +1)_k(-N)_k k!}. \end{aligned}$$
(6)

From the explicit formula (6) and the definition of the Pochhammer symbol, it can be seen that \(H_n(x;\alpha ,\beta ,N)\) is a polynomial in x of degree exactly n. The Hahn polynomials \(H_n(x;\alpha ,\beta ,N)\) are classical discrete orthogonal polynomials on the node set \(I:=\{0,1,\ldots ,N\}\). They are orthogonal on I with respect to the discrete inner product

$$\begin{aligned} {\langle f,g \rangle }:=\sum _{x=0}^N {f(x)}{g(x)}w(x), \end{aligned}$$

where \(w(x):I\rightarrow \mathbb {R}\) is the non-negative weight function given by

$$\begin{aligned} w(x)={\alpha +x \atopwithdelims ()x}{\beta +N-x \atopwithdelims ()N-x}. \end{aligned}$$

Their squared norms are given by

$$\begin{aligned} {\langle H_n(x;\alpha ,\beta ,N),H_n(x;\alpha ,\beta ,N) \rangle } = \frac{(-1)^n(n+\alpha +\beta +1)_{N+1}(\beta +1)_n n!}{(2n+\alpha +\beta +1)(\alpha +1)_n(-N)_n N!}. \end{aligned}$$

The orthogonality property of these basis polynomials is given in [21].
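
To make these definitions concrete, the following sketch (assuming only NumPy and SciPy; the helpers `poch`, `hahn`, and `weight` are our own names, and the parameter values are arbitrary) evaluates \(H_n\) through (6) and checks the discrete orthogonality numerically:

```python
import math
import numpy as np
from scipy.special import binom

def poch(a, k):
    """Pochhammer symbol (a)_k = a(a+1)...(a+k-1), with (a)_0 = 1."""
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def hahn(n, x, alpha, beta, N):
    """H_n(x; alpha, beta, N) via the explicit sum (6)."""
    return sum(poch(-n, k) * poch(n + alpha + beta + 1, k) * poch(-x, k)
               / (poch(alpha + 1, k) * poch(-N, k) * math.factorial(k))
               for k in range(n + 1))

def weight(x, alpha, beta, N):
    """Discrete Hahn weight w(x) on the nodes 0, ..., N."""
    return binom(alpha + x, x) * binom(beta + N - x, N - x)

alpha, beta, N = 0.5, 1.5, 8
xs = np.arange(N + 1)
# Gram matrix of the discrete inner product <f, g> = sum_x f(x) g(x) w(x):
G = np.array([[sum(hahn(m, s, alpha, beta, N) * hahn(n, s, alpha, beta, N)
                   * weight(s, alpha, beta, N) for s in xs)
               for n in range(N + 1)] for m in range(N + 1)])
off = G - np.diag(np.diag(G))
print(np.max(np.abs(off)) / np.max(np.abs(G)))   # ~ 1e-15: off-diagonals vanish
```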

Also, the Hahn polynomials are related to the Jacobi polynomials \(P_k^{\beta ,\alpha }(x)\) through the limit

$$\begin{aligned} \lim _{N\rightarrow \infty }(-1)^k {k+\alpha \atopwithdelims ()k} H_k\left( \frac{N}{2}(1+x);\alpha , \beta ,N\right) =P_k^{\beta ,\alpha }(x). \end{aligned}$$
(7)

Expansion of Hahn polynomials in terms of Taylor basis

To obtain the expansion of Hahn polynomials as a Taylor series, we use the relation:

$$\begin{aligned} (-x)_k=(-1)^k\sum _{m=0}^k S_k^{(m)}x^m, \end{aligned}$$
(8)

where \(S_k^{(m)}\) are the Stirling numbers of the first kind (see page 824 of [2]). Closed forms for the Stirling numbers of the first kind are as follows [2]:

$$\begin{aligned} S_k^{(i)}=\sum _{r=0}^{k-i}(-1)^{r}{k-1+r \atopwithdelims ()k-i+r}{2k-i \atopwithdelims ()k-i-r}s_ {k-i+r}^{(r)}, \end{aligned}$$
(9)

where \(s_k^{(i)}\) are the Stirling numbers of the second kind, namely

$$\begin{aligned} s_k^{(i)}=\frac{1}{i!} \sum _{r=0}^{i}(-1)^{i-r}{i \atopwithdelims ()r} r^{k}. \end{aligned}$$
(10)
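
Relation (8) can be verified symbolically; the following is a short check assuming SymPy, whose `stirling(..., kind=1, signed=True)` returns the signed Stirling numbers of the first kind, i.e. the \(S_k^{(m)}\) used here:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
for k in range(6):
    lhs = sp.expand(sp.rf(-x, k))                  # rising factorial (-x)_k
    rhs = sp.expand((-1) ** k * sum(stirling(k, m, kind=1, signed=True) * x ** m
                                    for m in range(k + 1)))
    assert sp.simplify(lhs - rhs) == 0             # relation (8) holds
print("relation (8) verified for k = 0, ..., 5")
```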

Using Eq. (8), we have

$$\begin{aligned} H_n(x;\alpha ,\beta ,N)=\sum _{k=0}^{n}\sum _{m=0}^k(-1)^k\frac{(-n)_k(n+\alpha +\beta +1)_k}{(\alpha +1)_k(-N)_k k!}S_k^{(m)}x^m. \end{aligned}$$

Now, we define

$$\begin{aligned} \mathbf{H}(x):=[H_0(x),H_1(x),\ldots ,H_N(x)]^T, \end{aligned}$$

where \(H_i(x):=H_i(x;\alpha ,\beta ,N)\). If we define the \((N+1)\times (N+1)\) matrix \(\mathbf{A}=[a_{i,j}]_{(N+1)\times (N+1)}\) such that

$$\begin{aligned}{}[{a_{i + 1,j + 1}}] = \left\{ { \begin{array}{cc} {\sum\nolimits _{k = j}^i {{{( - 1)}^k}\frac{{{{( - i)}_k}{{(i + \alpha + \beta + 1)}_k}}}{{{{(\alpha + 1)}_k}{{( - N)}_k}k!}}S_k^{(j)}} }&\quad {i \ge j,}\\ 0&\quad {i < j,} \end{array}} \right. \,\,i,j = 0,\ldots ,N, \end{aligned}$$
(11)

then

$$\begin{aligned} \mathbf{H}(x)=\mathbf{A}\mathbf{T}(x), \end{aligned}$$
(12)

where \(\mathbf{T}(x)=\left[ 1, x, \ldots , x^N\right] ^T\).

Remark 1

Matrix \(\mathbf{A}\) is a lower triangular matrix (by (11), \(a_{i+1,j+1}=0\) for \(i<j\)) and

$$\begin{aligned} \det (\mathbf{A})= \prod _{i=0}^N a_{i+1, i+1} =\prod _{i=0}^N (-1)^i\frac{(-i)_i(i+\alpha +\beta +1)_i}{(\alpha +1)_i(-N)_i i!}S_i^{(i)}, \end{aligned}$$
(13)

also, from Definition 1, for \(i=0, 1, \ldots ,N\) and \(\alpha , \beta > -1\), we have

$$\begin{aligned} (-i)_i\ne 0, \\ (i+\alpha +\beta +1)_i\ne 0, \\ (\alpha +1)_i\ne 0, \\ (-N)_i\ne 0. \end{aligned}$$

It can easily be seen from (9) that \(S_i^{(i)}=1\) for every i. Therefore, \(\det (\mathbf{A})\ne 0\) and \(\mathbf{A}\) is an invertible matrix.

From (12), we have

$$\begin{aligned} \mathbf{T}(x)=\mathbf{A}^{-1}\mathbf{H}(x). \end{aligned}$$
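
The construction of \(\mathbf{A}\) can be checked symbolically for a small N. The sketch below (assuming SymPy; `hahn_sym` and the parameter values are our own choices) builds \(\mathbf{A}\) from (11) and verifies (12) together with the invertibility asserted in Remark 1:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
a, b, N = sp.Rational(1, 2), sp.Rational(3, 2), 4

def hahn_sym(n):
    """H_n(x; a, b, N) as an exact symbolic polynomial, from (6)."""
    return sum(sp.rf(-n, k) * sp.rf(n + a + b + 1, k) * sp.rf(-x, k)
               / (sp.rf(a + 1, k) * sp.rf(-N, k) * sp.factorial(k))
               for k in range(n + 1))

A = sp.zeros(N + 1, N + 1)
for i in range(N + 1):
    for j in range(i + 1):                     # nonzero only for i >= j, per (11)
        A[i, j] = sum((-1) ** k
                      * sp.rf(-i, k) * sp.rf(i + a + b + 1, k)
                      * stirling(k, j, kind=1, signed=True)
                      / (sp.rf(a + 1, k) * sp.rf(-N, k) * sp.factorial(k))
                      for k in range(j, i + 1))

T = sp.Matrix([x ** m for m in range(N + 1)])
H = sp.Matrix([sp.expand(hahn_sym(n)) for n in range(N + 1)])
assert (A * T - H).expand() == sp.zeros(N + 1, 1)   # Eq. (12)
assert sp.det(A) != 0                               # Remark 1: A is invertible
print("H(x) = A T(x) verified; det(A) =", sp.det(A))
```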

Function approximations by Hahn polynomials

In Theorem 1.1 of [14], Goertz and Öffner describe the expansion of a function in Hahn polynomials and present the following result: let \(u\in \mathcal {C}[a,b]\) be a function of bounded variation. Then the series expansion \(\sum _{i=0}^{N}c_{i}H_i(x)\) of u in Hahn polynomials converges pointwise, provided the series expansion \(\sum _{i=0}^{N}d_{i}P_i(x)\) of u in Jacobi polynomials converges pointwise and \(n^4/N \rightarrow 0\) as \(n,N\rightarrow {\infty }\). From Theorem 3.1 in [15], spectral accuracy follows directly for Hahn polynomials. Therefore, a continuous function u(x) of bounded variation can be expanded in a truncated Hahn series as follows:

$$\begin{aligned} u(x)\approx \sum _{i=0}^{N}c_{i}H_i(x)=\mathbf{C}^T\mathbf{H}(x), \end{aligned}$$
(14)

where

$$\begin{aligned} \mathbf{C}^T=[c_0,c_1,\ldots ,c_N],\ c_i=\frac{\langle u,H_i\rangle }{\langle H_i,H_i\rangle }. \end{aligned}$$

In the approximation of a function u(x) on [a, b] in terms of continuous orthogonal polynomials \(\phi _i(x)\), we use the expansion \(u(x)\approx \sum _{i=0}^{N}c_{i}\phi _i(x)\), where the coefficients \(c_{i}\) are computed by

$$\begin{aligned} c_{i}=\frac{1}{\Vert \phi _i\Vert ^2}\int _{a}^b w(x)u(x)\phi _i(x)\mathrm{d}x. \end{aligned}$$
(15)

The integral in (15) is computed by numerical quadrature, which is usually not exact. Using discrete orthogonal polynomials, by contrast, we only have to compute the finite sums in the discrete inner products, so the coefficients in (14) are calculated exactly. This is our motivation for using discrete Hahn polynomials in the presented method.
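
The following continuation of the numeric sketch above illustrates this point: the coefficients in (14) are finite sums, and the resulting truncated series reproduces u at the nodes up to rounding (the sample function u is our choice):

```python
import numpy as np

# Reusing the numeric hahn/weight helpers from the sketch above.
alpha, beta, N = 0.5, 1.5, 8
xs = np.arange(N + 1)
u = lambda s: np.exp(-s) + s ** 2                   # sample function to expand

Hmat = np.array([[hahn(i, s, alpha, beta, N) for s in xs] for i in range(N + 1)])
w = np.array([weight(s, alpha, beta, N) for s in xs])
norms = (Hmat ** 2 * w).sum(axis=1)                 # <H_i, H_i>
c = (Hmat * w) @ u(xs) / norms                      # exact finite sums, no quadrature
recon = c @ Hmat                                    # sum_i c_i H_i at the nodes
print(np.max(np.abs(recon - u(xs))))                # ~ 0 (up to rounding)
```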

Similarly, a function u(x, t) of two independent variables defined on \(\Omega :=[0,X]\times [0,T]\) may be expanded in terms of the double Hahn polynomials as:

$$\begin{aligned} u(x,t)\approx \sum _{i=0}^N\sum _{j=0}^N u_{ij}H_i(x)H_j(t)=\mathbf{H}^T(x)\mathbf{U}\mathbf{H}(t), \end{aligned}$$
(16)

where the coefficient matrix \(\mathbf{U}\) is given by

$$\begin{aligned} \mathbf{U}= \left[ \begin{array}{cccc} u_{00} &{}\quad u_{01} &{}\quad {\dots }&{}\quad u_{0N}\\ u_{10} &{}\quad u_{11} &{}\quad {\dots }&{}\quad u_{1N}\\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ u_{N0} &{}\quad u_{N1} &{}\quad {\dots }&{}\quad u_{NN}\\ \end{array} \right] . \end{aligned}$$
(17)

with \(u_{ij}=\frac{\langle H_i(x),\langle u(x,t),H_j(t)\rangle \rangle }{\langle H_i,H_i\rangle \langle H_j,H_j\rangle }\), the inner product with respect to t being taken first.
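
A sketch of the bivariate expansion (16), again reusing the `hahn` and `weight` helpers from above, with an arbitrary sample function:

```python
import numpy as np

alpha, beta, N = 0.5, 1.5, 6
xs = np.arange(N + 1)
Hmat = np.array([[hahn(i, s, alpha, beta, N) for s in xs] for i in range(N + 1)])
w = np.array([weight(s, alpha, beta, N) for s in xs])
norms = (Hmat ** 2 * w).sum(axis=1)

u = lambda x, t: np.exp(x) * t ** 2                  # sample smooth function
Uvals = u(xs[:, None], xs[None, :])
# u_ij = <H_i(x), <u(x,t), H_j(t)>> normalized by <H_i,H_i><H_j,H_j>:
U = (Hmat * w) @ Uvals @ (Hmat * w).T / np.outer(norms, norms)
recon = Hmat.T @ U @ Hmat                            # H^T(x) U H(t) at the nodes
print(np.max(np.abs(recon - Uvals)))                 # ~ 0 (up to rounding)
```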

Operational matrix of derivatives for Hahn polynomials

Here, we present the ordinary and VO differentiation matrices of the Hahn polynomials in the Caputo sense.

We need the following formula for the VO fractional derivative of the function \(t^n\), in the Caputo sense of order \(0<q(x,t)\le 1\) [28]:

$$\begin{aligned} D_t^{q(x,t)}t^n=\left\{ \begin{array}{ll} \frac{\Gamma (n+1)}{\Gamma (n+1-q(x,t))}t^{n-q(x,t)},&{}\quad n=1,2,\ldots \\ 0,&{}\quad \ n=0. \end{array}\right. \end{aligned}$$
(18)
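
Formula (18) is easy to check against the integral definition; a minimal sketch assuming SciPy (the sample values of n, q, and t are arbitrary, with q standing in for q(x, t) at a fixed point):

```python
from scipy.integrate import quad
from scipy.special import gamma

def caputo_power(n, q, t):
    """D_t^q t^n from the integral definition, for 0 < q <= 1 and n >= 1."""
    # weight='alg' integrates g(tau) * (tau - 0)^0 * (t - tau)^(-q)
    val, _ = quad(lambda tau: n * tau ** (n - 1), 0.0, t,
                  weight='alg', wvar=(0.0, -q))
    return val / gamma(1.0 - q)

n, q, t = 3, 0.85, 0.7
print(caputo_power(n, q, t),
      gamma(n + 1) / gamma(n + 1 - q) * t ** (n - q))   # the two values agree
```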

Lemma 1

The Caputo VO derivative of the Hahn vector \(\mathbf{H}(t)\) is

$$\begin{aligned} D_t^{q(x,t)}\mathbf{H}(t) =\mathbf{Q}^{q(x,t)}\mathbf{H}(t), \end{aligned}$$
(19)

where \(\mathbf{Q}^{q(x,t)}=\mathbf{A}\mathbf{M}^{q(x,t)}\mathbf{A}^{-1}\) is an \((N+1)\times (N+1)\) matrix and \(\mathbf{M}^{q(x,t)}\) is defined as:

$$\mathbf{M}^{q(x,t)}=\left[ {\begin{matrix} 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad \frac{\Gamma (2)}{\Gamma (2-q(x,t))}t^{-q(x,t)}&{}\quad 0&{}\quad 0\\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0&{}\quad 0&{}\quad \ldots &{}\quad \frac{\Gamma (N+1)}{\Gamma (N+1-q(x,t))}t^{-q(x,t)}\end{matrix}}\right] .$$
(20)

Proof

By Eq. (12), we have

$$\begin{aligned} D_t^{q(x,t)}\mathbf{H}(t) =D_t^{q(x,t)}\left( \mathbf{A}\mathbf{T}(t)\right) =\mathbf{A}\ D_t^{q(x,t)}\mathbf{T}(t). \end{aligned}$$
(21)

Now, using Eq. (18), we get

$$\begin{aligned} D_t^{q(x,t)}\mathbf{T}(t)= \left[ {\begin{matrix} 0\\ \frac{\Gamma (2)}{\Gamma (2-q(x,t))}t^{1-q(x,t)}\\ \vdots \\ \frac{\Gamma (N+1)}{\Gamma (N+1-q(x,t))}t^{N-q(x,t)} \end{matrix}}\right] . \end{aligned}$$
(22)

The last vector can be written as:

$$\begin{aligned} \left[ {\begin{matrix} 0\\ \frac{\Gamma (2)}{\Gamma (2-q(x,t))}t^{1-q(x,t)}\\ \vdots \\ \frac{\Gamma (N+1)}{\Gamma (N+1-q(x,t))}t^{N-q(x,t)} \end{matrix}}\right]&= \left[ {\begin{matrix} 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad \frac{\Gamma (2)}{\Gamma (2-q(x,t))}t^{-q(x,t)}&{}\quad 0&{}\quad 0\\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0&{}\quad 0&{}\quad \ldots &{}\quad \frac{\Gamma (N+1)}{\Gamma (N+1-q(x,t))}t^{-q(x,t)}\end{matrix}}\right] \left[ \begin{array}{c} 1\\ t\\ \vdots \\ t^{N} \end{array}\right] \\ {}&=\mathbf{M}^{q(x,t)}\mathbf{T}(t), \end{aligned}$$

where the matrix \(\mathbf{M}^{q(x,t)}\) is defined in (20); thus, Eq. (21) becomes

$$\begin{aligned} D_t^{q(x,t)}\mathbf{H}(t)= \mathbf{A}\mathbf{M}^{q(x,t)}\mathbf{T}(t)= \mathbf{A}\mathbf{M}^{q(x,t)}\mathbf{A}^{-1}\mathbf{H}(t)=\mathbf{Q}^{q(x,t)}\mathbf{H}(t). \end{aligned}$$

\(\square\)

Corollary 1

Since the ordinary derivative is a special case of the fractional derivative, Lemma 1 with \(q(x,t)\equiv 1\) gives

$$\begin{aligned} \mathbf{H}'(x)=\mathbf{Q}^{1}\mathbf{H}(x). \end{aligned}$$
(23)
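
The following sketch (assuming NumPy/SciPy and the numeric `hahn` helper above; all parameter values are our choices) assembles \(\mathbf{Q}^{q}\) at a fixed point \(t_0\) and checks Lemma 1 against an independent quadrature evaluation of the Caputo derivative of each \(H_n\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, beta, N = 1.0, 1.0, 4
t0, q = 0.6, 0.75                      # fixed evaluation point; q = q(x, t0)
xs = np.arange(N + 1).astype(float)

# Monomial coefficients of H_n from values at N+1 nodes (Vandermonde solve):
V = np.vander(xs, N + 1, increasing=True)            # V[j, m] = x_j^m
Hvals = np.array([[hahn(n, s, alpha, beta, N) for s in xs] for n in range(N + 1)])
A = np.linalg.solve(V, Hvals.T).T                    # row n: coefficients of H_n

# M^q and Q^q = A M^q A^{-1} as in (19)-(20):
M = np.diag([0.0] + [gamma(m + 1) / gamma(m + 1 - q) * t0 ** (-q)
                     for m in range(1, N + 1)])
Q = A @ M @ np.linalg.inv(A)
lhs = Q @ np.array([hahn(n, t0, alpha, beta, N) for n in range(N + 1)])

def caputo_hahn(n):
    """Caputo derivative of H_n at t0 by weighted quadrature."""
    dH = lambda tau: sum(m * A[n, m] * tau ** (m - 1) for m in range(1, N + 1))
    val, _ = quad(dH, 0.0, t0, weight='alg', wvar=(0.0, -q))  # (t0-tau)^(-q) weight
    return val / gamma(1.0 - q)

rhs = np.array([caputo_hahn(n) for n in range(N + 1)])
print(np.max(np.abs(lhs - rhs)))       # ~ quadrature tolerance
```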

Description of the proposed method

In this section, we use the derivative operational matrices of Hahn polynomials to solve (1) with initial and boundary conditions (2) and (3). First, we approximate \(D_t^{q(x,t)}u(x,t)\), \(\frac{\partial u(x,t)}{\partial t}\), \(\frac{\partial u(x,t)}{\partial x}\), and \(\frac{\partial ^2 u(x,t)}{\partial x^2}\) by Eqs. (19) and (23) as follows:

$$\begin{aligned} D_t^{q(x,t)}u(x,t)\approx \mathbf{H}^T(x)\mathbf{U}D_t^{q(x,t)}\mathbf{H}(t)=\mathbf{H}^T(x)\mathbf{U}\mathbf{Q}^{q(x,t)}\mathbf{H}(t), \end{aligned}$$
(24)
$$\begin{aligned} \frac{\partial u(x,t)}{\partial t}\approx \mathbf{H}^T(x)\mathbf{U}\frac{\partial \mathbf{H}(t)}{\partial t}=\mathbf{H}^T(x)\mathbf{U}\mathbf{Q}^1\mathbf{H}(t), \end{aligned}$$
(25)
$$\begin{aligned} \frac{\partial u(x,t)}{\partial x}\approx \frac{\partial \mathbf{H}^T(x)}{\partial x}\mathbf{U}\mathbf{H}(t)= \mathbf{H}^T(x)\left( \mathbf{Q}^{1}\right) ^T\mathbf{U}\mathbf{H}(t), \end{aligned}$$
(26)

and:

$$\begin{aligned} \frac{\partial ^2 u(x,t)}{\partial x^2}\approx \frac{\partial ^2 \mathbf{H}^T(x)}{\partial x^2}\mathbf{U}\mathbf{H}(t)= \mathbf{H}^T(x)\left( \left( \mathbf{Q}^1\right) ^T\right) ^2\mathbf{U}\mathbf{H}(t). \end{aligned}$$
(27)

Substituting Eqs. (24)–(27) into Eq. (1) leads to the following relation:

$$\begin{aligned}&\beta _1\mathbf{H}^T(x)\mathbf{U}\mathbf{Q}^{q(x,t)}\mathbf{H}(t) +\beta _2\mathbf{H}^T(x)\mathbf{U}\mathbf{Q}^1\mathbf{H}(t) \nonumber \\&\quad + \beta _3\mathbf{H}^T(x)\left( \mathbf{Q}^1\right) ^T\mathbf{U}\mathbf{H}(t) +\beta _4\mathbf{H}^T(x)\left( \left( \mathbf{Q}^1\right) ^T\right) ^2\mathbf{U}\mathbf{H}(t)\approx f(x,t). \end{aligned}$$
(28)

To discretize Eq. (28), we use the nodes \((x_i,t_j)\), for \(i,j= 0,1,\ldots ,N\), where \(x_i=i\frac{X}{N}\) and \(t_j=j\frac{T}{N}\) are equidistant points on [0, X] and [0, T], respectively. Discretizing Eq. (28) yields a matrix equation of the general form \(\mathbf{AUB+CUD=E}\), where \(\mathbf{A}\), \(\mathbf{B}\), \(\mathbf{C}\), \(\mathbf{D}\), and \(\mathbf{E}\) are known matrices of compatible dimensions and \(\mathbf{U}\) is unknown.

Notice that if \(t_j=0\), or \(x_i=0\) or \(x_i=X\), the corresponding equations in the matrix equation are replaced by the discretized initial and boundary conditions:

$$\begin{aligned} u(x_i,0)&\approx \mathbf{H}^T(x_i)\mathbf{U}\mathbf{H}(0)=\phi (x_i),\quad i=0,1,\ldots , N,\\ u(0,t_j)&\approx \mathbf{H}^T(0)\mathbf{U}\mathbf{H}(t_j)=\psi _1(t_j),\quad j=0,1,\ldots , N,\\ u(X,t_j)&\approx \mathbf{H}^T(X)\mathbf{U}\mathbf{H}(t_j)=\psi _2(t_j),\quad j=0,1,\ldots , N. \end{aligned}$$

Equation \(\mathbf{AUB+CUD=E}\) is a linear system in the unknown matrix \(\mathbf{U}\). To solve this system, we use Theorem 2.12 and Corollary 2.4 in [33].
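
We do not reproduce [33] here, but one standard way to solve such a generalized Sylvester equation is vectorization via the Kronecker identity \(\mathrm{vec}(\mathbf{AUB})=(\mathbf{B}^T\otimes \mathbf{A})\,\mathrm{vec}(\mathbf{U})\); a minimal sketch:

```python
import numpy as np

def solve_aub_cud(A, B, C, D, E):
    """Solve A U B + C U D = E by column-major (Fortran-order) vectorization."""
    K = np.kron(B.T, A) + np.kron(D.T, C)
    u = np.linalg.solve(K, E.flatten(order='F'))
    return u.reshape(A.shape[1], B.shape[0], order='F')

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((4, 4)) for _ in range(4))
U_true = rng.standard_normal((4, 4))
E = A @ U_true @ B + C @ U_true @ D
print(np.allclose(solve_aub_cud(A, B, C, D, E), U_true))   # True
```

For larger N, structured solvers avoid forming the \((N+1)^2\times (N+1)^2\) Kronecker matrix explicitly; the dense version above is only meant to make the linear-algebra step transparent.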

Error bound

In this section, an upper bound for the error of the presented approximation scheme with equidistant nodes is obtained by a procedure similar to that in [5].

Suppose that u(x, t) is a smooth function of bounded variation on \(\Omega\). Consider

$$\begin{aligned} \mathbb {H}:=\hbox {Span}\{H_i(x)H_j(t)|i=0, 1, \ldots , I, j=0, 1, \ldots ,J\}. \end{aligned}$$

Also, we assume that \(u_{I, J}(x,t)\in \mathbb {H}\) is the best approximation of u(x, t), and that the nodes \((x_i,t_j)\), for \(i= 0,1,\ldots ,I\) and \(j=0,\ldots ,J\), are equidistant points on [0, X] and [0, T], respectively. Let

$$\begin{aligned} h_1&=x_{i+1}-x_i, \quad i=0,1,\ldots , I-1, \\ h_2&=t_{j+1}-t_j, \quad j=0,1,\ldots , J-1. \end{aligned}$$

By the definition of the best approximation, we have

$$\begin{aligned} \forall v_{I, J}(x,t)\in \mathbb {H}:\Vert u(x,t)-u_{I, J}(x,t)\Vert _{\infty }\le \Vert u(x,t)-v_{I, J}(x,t)\Vert _{\infty }. \end{aligned}$$
(29)

The last inequality also holds if \(v_{I, J}(x,t)\) denotes the interpolating polynomial of u(x, t) at the equidistant points \((x_i, t_j)\), \(0\le i\le I\), \(0\le j\le J\). Then, we have

$$\begin{aligned} u(x,t)-v_{I, J}(x, t)= & {} \frac{\partial ^{I+1} u(\mu , t)}{\partial x^{I+1}}\frac{\prod _{i=0}^I(x-x_i)}{(I+1)!} \nonumber \\&+\,\frac{\partial ^{J+1} u(x, \xi )}{\partial t^{J+1}}\frac{\prod _{j=0}^J(t-t_j)}{(J+1)!}\nonumber \\&-\,\frac{\partial ^{I+J+2}u(\mu ',\xi ')}{\partial x^{I+1}\partial t^{J+1}}\dfrac{\prod _{i=0}^I(x-x_i)\prod _{j=0}^J(t-t_j)}{(I+1)!(J+1)!}, \end{aligned}$$
(30)

where \(\mu , \mu '\in [0, X]\) and \(\xi , \xi '\in [0, T]\). Taking the infinity norm of (30) and using (29), we obtain

$$\begin{aligned} \Vert u(x, t)-u_{I, J}(x,t)\Vert _{\infty }\le & {} \max _{(x,t)\in \Omega }\left|\frac{\partial ^{I+1} u(\mu ,t)}{\partial x^{I+1}}\right|\frac{\Vert \prod _{i=0}^I(x-x_i)\Vert _{\infty }}{(I+1)!} \nonumber \\&+\,\max _{(x, t)\in \Omega }\left|\frac{\partial ^{J+1} u(x, \xi )}{\partial t^{J+1}}\right|\frac{\Vert \prod _{j=0}^J(t-t_j)\Vert _{\infty }}{(J+1)!} \nonumber \\&+\,\max _{(x,t)\in \Omega }\left|\frac{\partial ^{I+J+2} u(\mu ', \xi ')}{\partial x^{I+1}\partial t^{J+1}}\right|\nonumber \\&\times \,\dfrac{\Vert \prod _{i=0}^I(x-x_i)\Vert _{\infty }\Vert \prod _{j=0}^J(t-t_j)\Vert _{\infty }}{(I+1)!(J+1)!}. \end{aligned}$$
(31)

Since u(x, t) is a smooth function on \(\Omega\), there exist constants \(K_1\), \(K_2\), and \(K_3\) such that

$$\begin{aligned}&\max _{(x, t)\in \Omega }\left|\frac{\partial ^{I+1} u(\mu , t)}{\partial x^{I+1}}\right|\le K_1,\end{aligned}$$
(32)
$$\begin{aligned}&\max _{(x, t)\in \Omega }\left|\frac{\partial ^{J+1} u(x, \xi )}{\partial t^{J+1}}\right|\le K_2,\end{aligned}$$
(33)
$$\begin{aligned}&\max _{(x, t)\in \Omega }\left|\frac{\partial ^{I+J+2} u(\mu ', \xi ')}{\partial x^{I+1}\partial t^{J+1}}\right|\le K_3. \end{aligned}$$
(34)

On the other hand, for equidistant points, we have

$$\begin{aligned} \frac{\Vert \prod _{i=0}^I(x-x_i)\Vert _{\infty }}{(I+1)!}&\le \frac{h_1^{I+1}}{4(I+1)},\end{aligned}$$
(35)
$$\begin{aligned} \frac{\Vert \prod _{j=0}^J(t-t_j)\Vert _{\infty }}{(J+1)!}&\le \frac{h_2^{J+1}}{4(J+1)}, \end{aligned}$$
(36)

and

$$\begin{aligned} \dfrac{\Vert \prod _{i=0}^I(x-x_i)\Vert _{\infty }\Vert \prod _{j=0}^J(t-t_j)\Vert _{\infty }}{(I+1)!(J+1)!}\le \frac{h_1^{I+1}h_2^{J+1}}{4^2(I+1)(J+1)}. \end{aligned}$$
(37)

Now, from Eqs. (32)–(37), Eq. (31) becomes

$$\begin{aligned} \Vert u(x, t)-u_{I, J}(x,t)\Vert _{\infty }\le \frac{K_1h_1^{I+1}}{4(I+1)}+ \frac{K_2h_2^{J+1}}{4(J+1)} +\frac{K_3h_1^{I+1}h_2^{J+1}}{4^2(I+1)(J+1)}. \end{aligned}$$
(38)

In this manner, an upper bound on the absolute error of the approximate solution with equidistant points is obtained.
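
A quick numerical illustration of the equidistant-node bound (35) (a sketch assuming only NumPy; the interval and the values of I are arbitrary):

```python
import math
import numpy as np

for I in (3, 5, 8):
    nodes = np.linspace(0.0, 1.0, I + 1)
    h = nodes[1] - nodes[0]
    xx = np.linspace(0.0, 1.0, 20001)                # dense sample grid
    lhs = np.max(np.abs(np.prod(xx[:, None] - nodes, axis=1))) / math.factorial(I + 1)
    rhs = h ** (I + 1) / (4 * (I + 1))
    print(I, lhs <= rhs)                             # True for each I
```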

Numerical examples

In this section, numerical examples are provided to illustrate the efficiency of the proposed approach. We compare our results with the exact solutions at time T via the following norm:

$$\begin{aligned} \Vert e_N\Vert _{\infty }&=\max _{0\le i\le N}|u_{\mathrm{Approx}}(x_i, T)-u_{\mathrm{Exact}}(x_i, T)|, \end{aligned}$$

where \(\Vert e_{N}\Vert _{\infty }\) denotes the error corresponding to polynomials of degree N. Also, to show the efficiency, we report the convergence order (CO) of our method for the last two examples. The CO is defined by [6]

$$\begin{aligned} {\text {CO}=\frac{\log \frac{\Vert e_{N_1}\Vert _{\infty }}{\Vert e_{N_2}\Vert _{\infty }}}{\log \frac{N_2}{N_1}}.} \end{aligned}$$
(39)
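
For the tables below, the CO between two runs with \(N_1<N_2\) follows directly from (39); for example, with hypothetical error values decaying like \(N^{-4}\):

```python
import math

def convergence_order(e1, e2, N1, N2):
    """CO from (39) for errors e1, e2 obtained with N1 < N2 basis functions."""
    return math.log(e1 / e2) / math.log(N2 / N1)

print(convergence_order(1.0e-3, 6.25e-5, 4, 8))   # 4.0
```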

Example 1

Consider the following time VO fractional mobile–immobile advection–dispersion model [39]:

$$\begin{aligned} D_t^{q(x, t)}u(x, t)+ \frac{\partial u(x, t)}{\partial t}+ \frac{\partial u(x, t)}{\partial x} -\frac{\partial ^2 u(x, t)}{\partial x^2}=f(x, t), \ \ (x, t)\in \Omega , \end{aligned}$$
(40)

with the following initial and boundary conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} u(x, 0)=10x^2(1-x)^2, &{}0\le x\le 1, \\ u(0, t)=u(1, t)=0, &{}0\le t\le T, \end{array}\right. } \end{aligned}$$
(41)

where \(\Omega =[0, 1]\times [0, T]\), \(q(x, t)=1-\frac{1}{2} e^{-xt}\), and \(f=f_1+f_2\), with:

$$\begin{aligned} f_1(x,t)&= 10x^2(1-x)^2+\frac{10x^2(1-x)^2t^{1-q(x,t)}}{\Gamma (2-q(x,t))},\\ f_2(x,t)&= 10(t+1)(2x-6x^2+4x^3)-10(t+1)(2-12x+12x^2). \end{aligned}$$

The exact solution is \(u_E(x,t)=10(t+1)x^2(1-x)^2\).
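
As a consistency check (our own SymPy sketch, not part of [39]): applying the operators in (40) to \(u_E\), with the VO Caputo term obtained from the power rule applied to \(t+1\), reproduces \(f_1+f_2\):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
q = 1 - sp.exp(-x * t) / 2
uE = 10 * (t + 1) * x ** 2 * (1 - x) ** 2

# Caputo term: D_t^q [t + 1] = t^(1-q)/Gamma(2-q), since constants vanish
caputo_uE = 10 * x ** 2 * (1 - x) ** 2 * t ** (1 - q) / sp.gamma(2 - q)
f = caputo_uE + sp.diff(uE, t) + sp.diff(uE, x) - sp.diff(uE, x, 2)

f1 = 10 * x ** 2 * (1 - x) ** 2 * (1 + t ** (1 - q) / sp.gamma(2 - q))
f2 = (10 * (t + 1) * (2 * x - 6 * x ** 2 + 4 * x ** 3)
      - 10 * (t + 1) * (2 - 12 * x + 12 * x ** 2))
print(sp.simplify(f - (f1 + f2)))    # 0
```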

The absolute errors of the numerical solutions at \(T=1,2,4\), for \(\alpha =0.01\), \(\beta =0.01\), and \(N=5\), are shown in Table 1. A comparison of the numerical solutions at \(T=1\) is made with the results reported in [39] and [20], obtained by the finite difference method (FDM) and the reproducing kernel method (RKM), respectively. Also, the results of the presented method at \(T=2\) and \(T=4\) show that the method is accurate even on larger domains. Figure 1 shows the approximate and exact solutions at \(T=1,2,4\). The absolute error of this approach is shown in Fig. 2.

Table 1 Error at \(T = 1,2,4\) for Eq. (40)
Fig. 1 Approximate and exact solutions of Eq. (40) at \(T=1,2,4\)

Fig. 2 Absolute error of the numerical solution of Eq. (40)

Example 2

Consider another time VO fractional mobile–immobile advection–dispersion model as follows [39]:

$$\begin{aligned} D_t^{q(x,t)}u(x,t)+ \frac{\partial u(x,t)}{\partial t}+ \frac{\partial u(x,t)}{\partial x} -\frac{\partial ^2 u(x,t)}{\partial x^2}=f(x,t),\ \ (x,t)\in \Omega , \end{aligned}$$
(42)

where \(\Omega =[0,1]\times [0,T]\), with the following initial and boundary conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} u(x,0)=5x(1-x), &{}\quad 0\le x\le 1,\\ u(0,t)=u(1,t)=0, &{}\quad 0\le t\le T, \end{array}\right. } \end{aligned}$$

where \(q(x,t)=0.8+0.005\cos (xt)\sin (x)\), and \(f=f_1+f_2\), with:

$$\begin{aligned} f_1(x,t)&= 5x(1-x)+\frac{5x(1-x)t^{1-q(x,t)}}{\Gamma (2-q(x,t))},\\ f_2(x,t)&= 5(t+1)(1-2x)+10(t+1). \end{aligned}$$

The exact solution is \(u_E(x,t)=5(t+1)x(1-x)\).

The errors of the numerical solutions for \(\alpha =1\), \(\beta =1\), and \(N=10\), at \(T=1, 2, 4\), are shown in Table 2. Also, a comparison of the numerical solutions at \(T=1\) with the results of the RKM [20] is given in Table 2. In Fig. 3, the approximate and exact solutions at \(T=1, 2, 4\) are compared. The absolute errors of this approach are shown in Fig. 4.

Table 2 Absolute error at \(T = 1, 2, 4\) for Eq. (42)
Fig. 3 Approximate and exact solutions of Eq. (42) at \(T = 1,2,4\)

Fig. 4 Absolute error of the numerical solution of Eq. (42)

Example 3

Now consider the following time VO fractional PDE, arising from the mobile–immobile advection–dispersion model [39]:

$$\begin{aligned} D_t^{q(x,t)}u(x,t)+ \frac{\partial u(x,t)}{\partial t}+ \frac{\partial u(x,t)}{\partial x} -\frac{\partial ^2 u(x,t)}{\partial x^2}=0,\ \ (x,t)\in \Omega , \end{aligned}$$
(43)

with the following initial and boundary conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} u(x,0)=\sin (\pi x), &{}\quad 0\le x\le 1,\\ u(0,t)=u(1,t)=0, &{}\quad 0\le t\le 1, \end{array}\right. } \end{aligned}$$

where \(\Omega =[0,1]\times [0,1]\), and \(q(x,t)=0.8+0.05e^{-x}\sin (t)\).

Figure 5 shows the numerical solution of Eq. (43), which displays typical mobile–immobile behavior.

Fig. 5 Approximate solution of Eq. (43) for \(N=5\)

Example 4

Consider the following VOFMIE [16]:

$$\begin{aligned} {\frac{1}{2}D_t^{q(x,t)}u(x,t)+ \frac{\partial u(x,t)}{\partial t}+ \frac{\partial u(x,t)}{\partial x} -2\frac{\partial ^2 u(x,t)}{\partial x^2}=f(x,t),\ \ (x,t)\in \Omega ,} \end{aligned}$$
(44)

where \(\Omega =[0,1]\times [0,1]\), with the following initial and boundary conditions:

$$\begin{aligned} {{\left\{ \begin{array}{ll} u(x,0)=0, &{}\quad 0\le x\le 1,\\ u(0,t)=t^3,\quad u(1,t)=et^3, &{}\quad 0\le t\le 1, \end{array}\right. }} \end{aligned}$$

where \(q(x,t)=0.8+0.2e^{-x}\sin (t)\), and

$$\begin{aligned} {f(x,t)=\left( 3t^2+\frac{3t^{3-q(x,t)}}{\Gamma (4-q(x,t))}-t^3\right) e^x}. \end{aligned}$$

The exact solution is \(u_E(x,t)=t^3e^x\).

The errors of the numerical solutions for \(\alpha =\beta =1\) and different N are shown in Table 3. We investigate the convergence order of our method, and a comparison of the numerical solutions with the results of the Chebyshev wavelets method (CWs) [16] is given in Table 4. The absolute errors of this approach at \(T=1\) are shown in Fig. 6.

Table 3 Absolute errors at \(T = 1\) and different N, for Eq. (44)
Table 4 Comparison of \(\Vert e_N\Vert _{\infty }\) at \(T = 1\), and convergence order (for our method), for Eq. (44)
Fig. 6 Absolute error of the numerical solutions of Eq. (44) at \(T=1\)

Example 5

Now consider the following VOFMIE [26]:

$$\begin{aligned} {D_t^{q(x,t)}u(x,t) -0.01\frac{\partial ^2 u(x,t)}{\partial x^2}=f(x,t),\ \ (x,t)\in \Omega ,} \end{aligned}$$
(45)

where \(\Omega =[0,L]\times [0,T]\), with \(L=10\), \(T=0.5\), and the following initial and boundary conditions:

$$\begin{aligned} {{\left\{ \begin{array}{ll} u(x,0)=0, &{}\quad 0\le x\le L,\\ u(0,t)=u(L,t)=0, &{}\quad 0\le t\le T, \end{array}\right. }} \end{aligned}$$

where \(q(x,t)=0.8+0.2xt/(LT)\), \(D=0.01\) is the dispersion coefficient appearing in (45), and

$$\begin{aligned} {f(x,t)=\left( \frac{2t^{2-q(x,t)}}{\Gamma (3-q(x,t))}+\frac{D\pi ^2t^2}{L^2}\right) \sin \left( \frac{\pi x}{L}\right) .} \end{aligned}$$

The exact solution is \(u_E(x,t)=t^2\sin (\pi x/L)\).

The errors of the numerical solutions for \(\alpha =\beta =1\) and different N are shown in Table 5. The convergence order of our method is reported in Table 6. Also, the results of this approach at \(T=0.5\) are shown in Fig. 7.

Table 5 Absolute error at \(T=0.5\) and different N, for Eq. (45)
Table 6 Convergence order for Eq. (45)
Fig. 7 Approximate and exact solutions of Eq. (45) at \(T =0.5\) and \(x=1,2,\ldots ,10\)

Discussion and conclusion

In this paper, we present an operational matrix method based on Hahn polynomials for solving the time VO fractional mobile–immobile advection–dispersion model. The method converts the VO fractional equation into an algebraic system, which is solved by standard linear algebra techniques. Because the expansion coefficients of discrete Hahn polynomials are finite sums rather than integrals requiring numerical quadrature, the projection step itself introduces no additional numerical error.

As the main result, a new operational matrix of the VO fractional derivative, in the Caputo sense, is derived for the Hahn polynomials. It is then used to obtain an approximate solution of the problem under study. Also, an upper bound for the error of the presented method with equidistant nodes is established. Numerical examples illustrate the accuracy and efficiency of the presented method. An advantage of using the Hahn polynomials is that accurate results are achieved with only a small number of basis functions, even on larger intervals. A comparison of the numerical solutions with those of the FDM [39] and the RKM [20] shows that this technique is accurate and competitive. The method can be applied to solve other types of VO fractional functional equations.