1 Introduction

The study of fractional differential equations is an important area of research because of their appearance in various fields such as fluid mechanics, physics, engineering, and biology [1–11]. In particular, fractional calculus has been shown to be a powerful tool for modeling various natural phenomena, because of the non-local nature and long-range history dependence of fractional differential operators. Tan and Xu [12] considered a class of unsteady flows of a generalized second grade fluid between two parallel plates with a fractional derivative model and obtained exact analytical solutions by using Laplace and Fourier transformations. In addition, Shen et al. [13] presented the fractional-order Rayleigh-Stokes (R-S) problem for a heated generalized second grade fluid, and the solution of this problem is obtained by the Fourier and fractional Laplace transforms. Stokes’ first problem has been studied by Qi and Xu [14] for a viscoelastic fluid with the generalized Oldroyd-B model.

Several analytical methodologies, based on the Laplace, Mellin, and Fourier transforms, have been proposed and developed by many authors to provide analytical solutions of fractional differential equations; see [11, 13, 15–20]. These methods apply to linear fractional differential equations and cannot be used for nonlinear equations. Moreover, most analytical solutions contain infinite series and special functions, and thus are complicated and inconvenient for computational purposes. Therefore, in most cases, it is desirable to develop numerical methods for solving such problems. Wu [21] proposed an implicit numerical approach for Stokes’ first problem of fractional order. Chen et al. [15] proposed two numerical approaches for the same problem. Mohebbi et al. [22] proposed a combination of a difference scheme and the radial basis function with the Kansa approach for solving the fractional-order R-S problem.

It is well known that spectral methods have gained increasing popularity over several decades, especially for solving differential equations and in the field of computational fluid dynamics (see, e.g., [23–25] and the references therein). Spectral methods have also been applied to constant-order fractional differential equations when the exact solutions are smooth; see, e.g., [26–29]. As in traditional spectral methods for integer-order differential equations, it is extremely important to choose an appropriate basis for the solution of constant-order fractional differential equations. Recently, spectral methods were developed to solve fractional partial differential equations in which the basis functions are chosen as Jacobi polynomials; see [30, 31].

Nowadays, numerical approximation theory for variable-order fractional differential equations is attracting more and more attention from the research community [32–36]. Inspired by [37] and [38], this paper aims to numerically solve the variable-order R-S problem for a heated generalized second grade fluid subject to initial-boundary and non-local conditions. The proposed method presents a novel coupling of the shifted Jacobi-Gauss scheme for the spatial discretization with the shifted Jacobi-Gauss-Radau scheme for the temporal discretization. This treatment greatly improves the accuracy of the scheme. The aforementioned problem with its non-local conditions is thereby reduced to a system of linear algebraic equations. The method yields an accurate solution that is continuous in the temporal and spatial domains and is computationally efficient.

The paper is laid out as follows. The definitions of fractional calculus and some properties of Jacobi polynomials are introduced in Section 2. The spectral collocation methods for the variable-order R-S problem subject to boundary, non-local, and mixed conditions are presented in Section 3 and then illustrated with three examples in Section 4. The conclusion is given in Section 5.

2 Preliminaries

We first recall some definitions and preliminaries of the variable-order fractional differential and integral operators and some knowledge of orthogonal shifted Jacobi polynomials that are most relevant to spectral approximations.

Definition 2.1

The Riemann-Liouville and Caputo differential operators of constant order γ, when \(n-1\leq\gamma< n\), of \(f(t)\) are given, respectively, by

$$ \begin{aligned} &{}_{0} D_{t}^{\gamma}f(t)= \frac{1}{{\Gamma(n - \gamma)}}\frac{{d^{n} }}{{dt^{n} }} \int _{0}^{t} {\frac{{f(s)}}{{(t - s )^{\gamma- n + 1} }}} \,ds, \\ &{}_{0}^{\mathrm{C}} D^{\gamma}_{t}f(t) = \frac{1}{{\Gamma(n - \gamma)}} \int _{0}^{t} {\frac{{f^{(n)} (s )}}{{(t - s )^{\gamma- n + 1} }}} \,ds, \end{aligned} $$
(2.1)

where \(\Gamma(\cdot)\) represents the Euler gamma function.
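As a concrete illustration (a numerical check added here, not part of the original text), the Caputo derivative of a monomial has the closed form \({}_{0}^{\mathrm{C}}D_{t}^{\gamma}t^{m}=\frac{\Gamma(m+1)}{\Gamma(m+1-\gamma)}t^{m-\gamma}\) for \(0<\gamma<1\); a minimal Python sketch, assuming SciPy is available, compares this with direct quadrature of the integral in (2.1):

```python
import math
from scipy.integrate import quad

def caputo_monomial(m, gamma, t):
    # Closed form: {}^C_0 D_t^gamma t^m = Gamma(m+1)/Gamma(m+1-gamma) t^(m-gamma)
    return math.gamma(m + 1) / math.gamma(m + 1 - gamma) * t ** (m - gamma)

def caputo_quad(f_prime, gamma, t):
    # Direct evaluation of the Caputo integral in (2.1) for 0 < gamma < 1 (n = 1)
    val, _ = quad(lambda s: f_prime(s) / (t - s) ** gamma, 0.0, t)
    return val / math.gamma(1 - gamma)

# f(t) = t^2, so f'(s) = 2 s; gamma = 0.5, evaluated at t = 1
approx = caputo_quad(lambda s: 2.0 * s, 0.5, 1.0)
exact = caputo_monomial(2, 0.5, 1.0)
```

The two values agree to quadrature accuracy despite the integrable endpoint singularity.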

Definition 2.2

The left Riemann-Liouville variable-order fractional differential operator of order \(\gamma(t)\) is given by

$$ {}_{0} D_{t}^{\gamma(t)} f(t) = \frac{1}{{\Gamma(n - \gamma (t))}}\frac{{d^{n} }}{{dt^{n} }} \int _{0}^{t} \frac{{f(s)}}{{(t - s)^{\gamma(t) - n + 1} }}\,ds, $$
(2.2)

where \(n-1 < \gamma_{\mathrm{min}} < \gamma(t) < \gamma_{\mathrm{max}} < n \), \(n \in\mathbb{N}\) for all \(t \in[0,\tau]\).

Definition 2.3

The Caputo variable-order fractional differential operator is given by [39]

$$ {}_{0}^{\mathrm{C}} D_{t}^{\gamma(t)} f(t) = \frac{1}{{\Gamma(1 - \gamma (t))}} \int _{0 }^{t} \frac{{f'(s)}}{{(t - s)^{\gamma(t)} }}\,ds , $$
(2.3)

where \(0< \gamma(t) \leq1\) for all \(t \in[0,\tau]\).

It is important to note here that the constant-order fractional derivative can be seen as a special case of the variable-order fractional derivative. These two definitions are related by the following relation:

$$ {}_{0} D_{t}^{\gamma(t)}f(t) = \sum _{k = 0}^{n - 1} {\frac{{f^{(k)} (0)t^{k - \gamma(t)} }}{{\Gamma(k + 1 - \gamma (t))}}} + {}_{0}^{\mathrm{C}} D_{t}^{\gamma(t)} f(t). $$
(2.4)
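Relation (2.4) can be confirmed for a simple example (an added check, not from the original): take \(f(t)=1+t\) with constant \(0<\gamma<1\) (so \(n=1\)), using the known monomial formula \({}_{0}D_{t}^{\gamma}t^{m}=\frac{\Gamma(m+1)}{\Gamma(m+1-\gamma)}t^{m-\gamma}\):

```python
import math

def rl_monomial(m, gamma, t):
    # Riemann-Liouville derivative of t^m (valid also for m = 0)
    return math.gamma(m + 1) / math.gamma(m + 1 - gamma) * t ** (m - gamma)

gamma, t = 0.4, 0.8
# Left-hand side of (2.4): RL derivative of f(t) = 1 + t, term by term
lhs = rl_monomial(0, gamma, t) + rl_monomial(1, gamma, t)
# Right-hand side: initial-value term f(0) t^{-gamma}/Gamma(1-gamma) plus the
# Caputo derivative, which annihilates the constant and gives t^{1-gamma}/Gamma(2-gamma)
rhs = t ** (-gamma) / math.gamma(1 - gamma) + t ** (1 - gamma) / math.gamma(2 - gamma)
```

The left- and right-hand sides agree to machine precision, illustrating that the Riemann-Liouville derivative differs from the Caputo one only by the initial-value terms.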

The Jacobi polynomials, denoted by \(P_{j}^{(\theta,\vartheta)}(x)\) (\(j=0,1,\ldots\)) with \(\theta>-1\), \(\vartheta>-1\), are defined on the interval \([-1,1]\) and generated by the three-term recurrence relation:

$$\begin{aligned}& P^{(\theta,\vartheta)}_{i+1}(x)=\bigl(a^{(\theta,\vartheta)}_{i} x-b^{(\theta,\vartheta)}_{i}\bigr)P^{(\theta,\vartheta)}_{i}(x)-c^{(\theta ,\vartheta)}_{i} P^{(\theta,\vartheta)}_{i-1}(x),\quad i\geq1, \\& P^{(\theta,\vartheta)}_{0}(x)=1,\qquad P^{(\theta,\vartheta )}_{1}(x)= \frac{1}{2 }(\theta+\vartheta+2)x+\frac{1}{2}(\theta -\vartheta), \end{aligned}$$

where

$$\begin{aligned}& a^{(\theta,\vartheta)}_{i}= \frac{(2i+\theta+\vartheta+1)(2i+\theta +\vartheta+2)}{2(i+1)(i+\theta+\vartheta+1)}, \\& b^{(\theta,\vartheta)}_{i}= \frac{(2i+\theta+\vartheta+1)({\vartheta }^{2}-{\theta}^{2})}{2(i+1)(i+\theta+\vartheta+1)(2i+\theta+\vartheta )}, \\& c^{(\theta,\vartheta)}_{i}= \frac{(2i+\theta+\vartheta+2)(i+\theta )(i+\vartheta)}{(i+1)(i+\theta+\vartheta+1)(2i+\theta+\vartheta)}. \end{aligned}$$
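The recurrence can be implemented directly; the following sketch (added for illustration, assuming SciPy is available) compares it against SciPy's `scipy.special.eval_jacobi`:

```python
import numpy as np
from scipy.special import eval_jacobi

def jacobi_recurrence(n, th, vt, x):
    # Evaluate P_n^{(th,vt)}(x) by the three-term recurrence above
    if n == 0:
        return np.ones_like(x)
    Pm = np.ones_like(x)
    P = 0.5 * (th + vt + 2) * x + 0.5 * (th - vt)
    for i in range(1, n):
        a = (2*i+th+vt+1) * (2*i+th+vt+2) / (2*(i+1)*(i+th+vt+1))
        b = (2*i+th+vt+1) * (vt**2 - th**2) / (2*(i+1)*(i+th+vt+1)*(2*i+th+vt))
        c = (2*i+th+vt+2) * (i+th) * (i+vt) / ((i+1)*(i+th+vt+1)*(2*i+th+vt))
        Pm, P = P, (a * x - b) * P - c * Pm
    return P

x = np.linspace(-1.0, 1.0, 7)
max_err = np.max(np.abs(jacobi_recurrence(5, 0.5, -0.3, x)
                        - eval_jacobi(5, 0.5, -0.3, x)))
```

The recurrence reproduces the library values to machine precision for the tested parameters.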

The formula that relates Jacobi polynomials and their derivatives is

$$ D^{(q)} P_{k}^{(\theta,\vartheta)}(x)= P_{k}^{(\theta,\vartheta ,q)}(x)=2^{-q}\frac{\Gamma(k+\theta+\vartheta+q+1)}{\Gamma (k+\theta+\vartheta+1)}P_{k-q}^{(\theta+q,\vartheta+q)}(x). $$
(2.5)

The orthogonality condition is

$$ \bigl(P_{k}^{(\theta,\vartheta )}(x),P_{l}^{(\theta,\vartheta)}(x) \bigr)_{w^{(\theta,\vartheta)} }= \int _{-1}^{1}P_{k}^{(\theta,\vartheta)}(x) P_{l}^{(\theta ,\vartheta)}(x) w^{(\theta,\vartheta)} (x)\,dx =h_{k}^{(\theta ,\vartheta)} \delta_{lk}, $$
(2.6)

where \(w^{(\theta,\vartheta)}=(1-x)^{\theta}(1+x)^{\vartheta}\), \(h_{k}^{(\theta,\vartheta)} =\frac{2^{\theta+\vartheta+1}\Gamma (k+\theta+1)\Gamma(k+\vartheta+1)}{(2k+\theta+\vartheta+1) k!\Gamma(k+\theta+\vartheta+1)}\).

Let the shifted Jacobi polynomials \(P^{(\theta,\vartheta)}_{i}{(\frac {2x}{L}-1)}\) be denoted by \(P^{(\theta,\vartheta)}_{L,i}{(x)}\), then they can be obtained with the aid of the following recurrence formula:

$$ \begin{aligned} &P_{L,i+1}^{(\theta,\vartheta)}{(x)}= \biggl(a_{i}^{(\theta,\vartheta )}\biggl(\frac{2x}{L}-1 \biggr)-b_{i}^{(\theta,\vartheta)}\biggr)P_{L,i}^{(\theta ,\vartheta)}{(x)}-c_{i}^{(\theta,\vartheta)} P_{L,i-1}^{(\theta,\vartheta)}{(x)},\quad i\geq1, \\ &P_{L,0}^{(\theta,\vartheta)}{(x)}=1,\qquad P_{L,1}^{(\theta,\vartheta )}{(x)}= \frac{1}{L}(\theta+\vartheta+2)x-(\vartheta+1). \end{aligned} $$
(2.7)

The analytic form of the shifted Jacobi polynomials \(P^{(\theta ,\vartheta)}_{L,i}{(x)}\) of degree i is given by

$$ P_{L,i}^{(\theta,\vartheta)}{(x)}=\sum _{k=0}^{i }{(-1)}^{i+k}\frac{ \Gamma{(i+\vartheta+1)}\Gamma{(i+k+\theta +\vartheta+1)}}{\Gamma{(k+\vartheta+1)}\Gamma{(i+\theta+\vartheta +1)}(i-k)!k! L^{k}} x^{k}, $$
(2.8)

and the orthogonality condition is

$$ \int_{0}^{L} P^{(\theta,\vartheta)}_{L,j} (x)P^{(\theta,\vartheta)}_{L,k}(x) w^{(\theta,\vartheta)}_{L} (x)\,dx = \hbar^{(\theta,\vartheta)}_{L,k} \delta_{jk}, $$
(2.9)

where \(w_{L}^{(\theta,\vartheta)} (x)={x}^{\vartheta}(L-x)^{\theta }\) and \(\hbar^{(\theta,\vartheta)}_{L,k} =\frac {L^{\theta+\vartheta+1}\Gamma(k+\theta+1)\Gamma(k+\vartheta +1)}{(2k+\theta+\vartheta+1) k!\Gamma(k+\theta+\vartheta+1)}\).
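The orthogonality relation (2.9) can be verified numerically; the sketch below (an added illustration with arbitrarily chosen parameters \(L=2\), \(\theta=1\), \(\vartheta=0.5\), assuming SciPy) checks one off-diagonal and one diagonal entry:

```python
import math
from scipy.integrate import quad
from scipy.special import eval_jacobi

L, th, vt = 2.0, 1.0, 0.5   # illustrative parameters

def P_shift(k, x):
    # shifted Jacobi polynomial P^{(th,vt)}_{L,k}(x) on [0, L]
    return eval_jacobi(k, th, vt, 2 * x / L - 1)

def hbar(k):
    # normalization constant from (2.9)
    return (L ** (th + vt + 1) * math.gamma(k + th + 1) * math.gamma(k + vt + 1)
            / ((2 * k + th + vt + 1) * math.factorial(k) * math.gamma(k + th + vt + 1)))

w = lambda x: x ** vt * (L - x) ** th
off, _ = quad(lambda x: P_shift(2, x) * P_shift(3, x) * w(x), 0, L)
diag, _ = quad(lambda x: P_shift(2, x) ** 2 * w(x), 0, L)
```

The off-diagonal integral vanishes and the diagonal one matches \(\hbar^{(\theta,\vartheta)}_{L,2}\) to quadrature accuracy.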

The shifted Jacobi-Gauss quadrature is commonly used to evaluate the previous integrals accurately. For any \(\phi\in S_{2N+1}[0,L]\), we have

$$\int_{0}^{L} {\phi(x)w_{L}^{(\theta,\vartheta)} (x)\,dx} = \sum_{j = 0}^{N} { \varpi_{G,L,j}^{(\theta,\vartheta)} \phi \bigl(x_{G,L,j}^{(\theta,\vartheta)} \bigr)}, $$

where \(S_{N}[0,L]\) is the set of polynomials of degree less than or equal to N, and \(x_{G ,L,j}^{(\theta,\vartheta)}\) and \(\varpi_{G ,L,j}^{(\theta,\vartheta)}\) (\(0\leq j \leq N \)) are, as usual, the nodes and the corresponding Christoffel numbers in the interval \([0,L]\), respectively.

For the shifted Jacobi-Gauss (SJ-G) case, \(x_{G ,L,j}^{(\theta,\vartheta)}\) (\(0\leq j \leq N \)) are the zeros of \(P_{L,N+1}^{(\theta,\vartheta)}(x)\) and the weights

$$ \varpi_{G ,L,j}^{(\theta,\vartheta)}=\frac{{C_{L,N}^{(\theta ,\vartheta)} }}{ {(L - x_{G ,L,j}^{(\theta,\vartheta)}) x_{G ,L,j}^{(\theta ,\vartheta)} [ {\partial_{x} P_{N + 1}^{(\theta,\vartheta)} (x_{G ,L,j}^{(\theta,\vartheta)})} ]^{2} }},\quad 0 \leq j \leq N, $$
(2.10)

where

$$C_{L,N}^{(\theta,\vartheta)} = \frac{{L^{\theta+ \vartheta+ 1} \Gamma(N + \theta+ 2)\Gamma(N + \vartheta+ 2)}}{ {(N + 1)!\Gamma(N + \theta+ \vartheta+ 2)}}, $$

while the nodes and the corresponding Christoffel numbers in the shifted Jacobi-Gauss-Radau (SJ-GR) quadrature are given by \(x_{R ,L,0}^{(\theta,\vartheta)}=0\), \(x_{R ,L,j}^{(\theta ,\vartheta)}\) (\(1\leq j \leq N \)) are the zeros of \(P_{L,N}^{(\theta ,\vartheta+1)}(x)\), and the weights

$$ \begin{aligned} \varpi_{R ,L,0}^{(\theta,\vartheta)}&= \frac{{(L )^{\theta+ \vartheta+ 1} (\vartheta+ 1) \Gamma^{2} (\vartheta+ 1)\Gamma (N+1)\Gamma(N + \theta+ 1)}}{ {\Gamma(N + \vartheta+ 2)\Gamma(N + \theta+ \vartheta+ 2)}}, \\ \varpi_{R ,L,j}^{(\theta,\vartheta)}&=\frac{{C_{L,N - 1}^{(\theta ,\vartheta+ 1)} }}{ {(L - x_{R ,L,j}^{(\theta,\vartheta)})(x_{R ,L,j}^{(\theta ,\vartheta)})^{2} \partial_{x} [ {P_{N}^{(\theta,\vartheta+ 1)} (x_{R ,L,j}^{(\theta,\vartheta)})} ]^{2} }},\quad 1 \leq j \leq N. \end{aligned} $$
(2.11)
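In practice the shifted Gauss nodes and weights can be obtained from the standard Jacobi ones by mapping \([-1,1]\) to \([0,L]\) and rescaling the weights by \((L/2)^{\theta+\vartheta+1}\). The sketch below (added for illustration, assuming SciPy's `roots_jacobi`) verifies the exactness of the resulting rule for a polynomial of degree at most \(2N+1\):

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

L, th, vt, N = 1.5, 1.0, 0.5, 6
y, w = roots_jacobi(N + 1, th, vt)            # Gauss nodes/weights on [-1, 1]
x = L * (y + 1) / 2                           # shift nodes to [0, L]
ws = (L / 2) ** (th + vt + 1) * w             # rescale Christoffel numbers

phi = lambda z: z ** 5 - 2 * z ** 2 + 1       # degree 5 <= 2N + 1, so exact
approx = np.sum(ws * phi(x))
exact, _ = quad(lambda z: phi(z) * z ** vt * (L - z) ** th, 0, L)
```

The quadrature sum reproduces the weighted integral to rounding error, as the exactness statement above guarantees.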

A function \(u(x)\), square integrable on \([0,L]\), may be expressed in terms of shifted Jacobi polynomials as

$$ u{(x)}=\sum_{j=0}^{\infty}c_{j} P^{(\theta,\vartheta)}_{L,j}(x), $$

where the coefficients \(c_{j}\) are given by

$$ c_{j}= \frac{1}{ \hbar^{(\theta,\vartheta)}_{L,j}} \int_{0}^{L}u (x)P^{(\theta,\vartheta)}_{L,j}(x)w_{L}^{(\theta,\vartheta)}(x) \,dx,\quad j=0,1,2, \ldots. $$
(2.12)
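The coefficients (2.12) can be computed with the shifted Jacobi-Gauss quadrature above; the following added sketch (illustrative parameters, assuming SciPy) expands \(u(x)=x^{3}\) and reconstructs it exactly from four coefficients:

```python
import math
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

L, th, vt, N = 1.0, 0.5, 0.5, 8
y, w = roots_jacobi(N + 1, th, vt)
x = L * (y + 1) / 2                           # shifted Gauss nodes
ws = (L / 2) ** (th + vt + 1) * w             # shifted Christoffel numbers

def hbar(j):
    # normalization constant from (2.9)
    return (L ** (th + vt + 1) * math.gamma(j + th + 1) * math.gamma(j + vt + 1)
            / ((2 * j + th + vt + 1) * math.factorial(j) * math.gamma(j + th + vt + 1)))

u = lambda z: z ** 3                           # sample polynomial function
c = [np.sum(ws * u(x) * eval_jacobi(j, th, vt, 2 * x / L - 1)) / hbar(j)
     for j in range(4)]                        # coefficients from (2.12)
xs = np.linspace(0.0, L, 5)
recon = sum(c[j] * eval_jacobi(j, th, vt, 2 * xs / L - 1) for j in range(4))
```

Since \(u\) is a cubic, the truncated series with \(j\leq3\) reproduces it to machine precision.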

The qth derivative of \(P_{L,k}^{(\theta,\vartheta)}(x)\) can be written as

$$ D^{q}P_{L,k}^{(\theta,\vartheta)}{(x)}=P_{L,k}^{(\theta,\vartheta ,q)}{(x)}= \frac{ \Gamma(q+k+\theta+\vartheta+1)}{L^{q}\Gamma (k+\theta+\vartheta+1)}P_{L,k-q}^{(\theta+q,\vartheta+q)}{(x)}. $$
(2.13)
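Formula (2.13) can be checked against a finite-difference derivative (an added sanity check with arbitrary parameters, assuming SciPy):

```python
import math
from scipy.special import eval_jacobi

L, th, vt, k, q = 2.0, 0.3, 0.7, 5, 1   # illustrative parameters, q = 1
x0, h = 0.8, 1e-6

P = lambda x: eval_jacobi(k, th, vt, 2 * x / L - 1)   # P^{(th,vt)}_{L,k}(x)
# right-hand side of (2.13) with q = 1
formula = (math.gamma(q + k + th + vt + 1)
           / (L ** q * math.gamma(k + th + vt + 1))
           * eval_jacobi(k - q, th + q, vt + q, 2 * x0 / L - 1))
fd = (P(x0 + h) - P(x0 - h)) / (2 * h)                # central finite difference
```

The factor \(L^{-q}\) arises from combining the factor \(2^{-q}\) in (2.5) with the chain-rule factor \((2/L)^{q}\) of the shift \(x\mapsto 2x/L-1\).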

Accordingly, we can calculate the Caputo derivative of the shifted Jacobi polynomials from

$$\begin{aligned}& {}_{0}^{\mathrm{C}} D_{t}^{\gamma(t)} P_{L,i}^{(\theta,\vartheta )}{(x)} \\& \quad =P_{L,i}^{(\theta,\vartheta,\gamma(t))}{(x)} \\& \quad =\sum_{k=1}^{i }\frac{ {(-1)}^{i+k}\Gamma{(i+\vartheta +1)}\Gamma{(i+k+\theta+\vartheta+1)}}{\Gamma{(k+\vartheta +1)}\Gamma{(i+\theta+\vartheta+1)}(i-k)! L^{k} \Gamma{(k-\gamma (t)+1)}} x^{k-\gamma(t)}. \end{aligned}$$
(2.14)
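As a consistency check of the series (2.14) (added here, not from the original), note that for order exactly 1 it must reduce to the first derivative of the shifted Jacobi polynomial; a Python sketch with a constant order and illustrative parameters, assuming SciPy:

```python
import math
from scipy.special import eval_jacobi

L, th, vt, i = 1.0, 0.2, 0.4, 4   # illustrative parameters

def caputo_shifted_jacobi(gamma, x):
    # series (2.14) evaluated for a constant order gamma
    s = 0.0
    for k in range(1, i + 1):
        s += ((-1) ** (i + k) * math.gamma(i + vt + 1) * math.gamma(i + k + th + vt + 1)
              / (math.gamma(k + vt + 1) * math.gamma(i + th + vt + 1)
                 * math.factorial(i - k) * L ** k * math.gamma(k - gamma + 1))
              * x ** (k - gamma))
    return s

# for gamma = 1 the series must equal the first derivative of P^{(th,vt)}_{L,i}
x0, h = 0.6, 1e-6
P = lambda x: eval_jacobi(i, th, vt, 2 * x / L - 1)
fd = (P(x0 + h) - P(x0 - h)) / (2 * h)
```

Note that the factor \(k!\) from (2.8) has cancelled against the \(\Gamma(k+1)\) produced by differentiating \(x^{k}\), which is why it is absent from the denominator of (2.14).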

3 Jacobi collocation method

This section introduces three numerical schemes based on the shifted Jacobi collocation method for the numerical solution of the 2D variable-order fractional R-S problem with different types of boundary conditions. It is important to mention that the shifted Jacobi-Gauss points are employed for the spatial approximation, while the shifted Jacobi-Gauss-Radau points are employed for the temporal approximation.

Now, we present the methodology of the shifted Jacobi collocation scheme for solving the 2D variable-order fractional R-S problem with initial-boundary conditions.

3.1 R-S problem with Dirichlet boundary conditions

The main objective is to extend the Jacobi collocation method for handling the variable-order fractional R-S problem:

$$\begin{aligned} \frac{\partial u(x,y,t)}{\partial t} =& {}_{0}D^{1-\gamma (x,y,t)}_{t} \biggl(k_{1}\frac{\partial^{2}u(x,y,t)}{\partial x^{2}}+k_{2}\frac{\partial^{2}u(x,y,t)}{\partial y^{2}} \biggr)+k_{3}\frac {\partial^{2}u(x,y,t)}{\partial x^{2}} \\ &{}+k_{4} \frac{\partial ^{2}u(x,y,t)}{\partial y^{2}}+f(x,y,t), \quad (x,y)\in\Omega, t \in[0,T], \end{aligned}$$
(3.1)

subject to

$$ u(x,y,0)=g_{0}(x,y),\quad {(x,y)\in\Omega}, $$
(3.2)

and the Dirichlet boundary conditions

$$ \begin{aligned} &u(0,y,t)=g_{1}(y,t),\qquad u(L,y,t)=g_{2}(y,t), \\ &u(x,0,t)=g_{3}(x,t), \qquad u(x,L,t)=g_{4}(x,t), \end{aligned} $$
(3.3)

where \(\Omega=\{(x,y)|0\leq x,y\leq L\}\), \(\gamma(x,y,t)\) satisfies \(0<\gamma_{\mathrm{min}}<\gamma(x,y,t)<\gamma_{\mathrm{max}}<1\), and \(k_{1},k_{2},k_{3},k_{4}>0\) are the diffusion coefficients. Here, \({}_{0}D^{1-\gamma(x,y,t)}_{t}u(x,y,t)\) is the variable-order Riemann-Liouville fractional partial derivative of order \(1-\gamma(x,y,t)\).

By virtue of (2.4), we have

$$ \begin{aligned} &{}_{0}D^{1-\gamma(x,y,t)}_{t} \frac{{\partial^{2}}u(x,y,t)}{\partial x^{2}} ={}_{0}^{\mathrm{C}}D^{1-\gamma(x,y,t)}_{t} \frac{{\partial ^{2}}u(x,y,t)}{\partial x^{2}}+\frac{{\partial^{2}}u(x,y,t)}{\partial x^{2}}\bigg|_{t=0} \frac{t^{\gamma(x,y,t)-1}}{\Gamma(\gamma(x,y,t))}, \\ &{}_{0}D^{1-\gamma(x,y,t)}_{t}\frac{{\partial^{2}}u(x,y,t)}{\partial y^{2}} ={}_{0}^{\mathrm{C}}D^{1-\gamma(x,y,t)}_{t} \frac{{\partial^{2}}u(x,y,t)}{\partial y^{2}}+\frac{{\partial ^{2}}u(x,y,t)}{\partial y^{2}}\bigg|_{t=0} \frac{t^{\gamma (x,y,t)-1}}{\Gamma(\gamma(x,y,t))}. \end{aligned} $$
(3.4)

Substituting (3.4) into (3.1), we have

$$\begin{aligned} \frac{\partial u(x,y,t)}{\partial t} =&{k_{1}}\, {}_{0}^{\mathrm{C}}D^{1-\gamma (x,y,t)}_{t} \frac{{\partial^{2}}u(x,y,t)}{\partial x^{2}}+{k_{2}}\, {}_{0}^{\mathrm{C}}D^{1-\gamma(x,y,t)}_{t} \frac{{\partial^{2}}u(x,y,t)}{\partial y^{2}} \\ &{}+{k_{3}} \frac{{\partial^{2}}u(x,y,t)}{\partial x^{2}} +{k_{4}} \frac{{\partial^{2}}u(x,y,t)}{\partial y^{2}}+{k_{1}}\frac {{\partial^{2}}u(x,y,t)}{\partial x^{2}}\bigg|_{t=0} \frac{t^{\gamma(x,y,t)-1}}{\Gamma(\gamma(x,y,t))} \\ &{}+{k_{2}}\frac {{\partial^{2}}u(x,y,t)}{\partial y^{2}}\bigg|_{t=0} \frac{t^{\gamma(x,y,t)-1}}{\Gamma(\gamma(x,y,t))}+f(x,y,t) \end{aligned}$$
(3.5)

subject to the conditions (3.2) and (3.3).

Let us expand the approximate solution in a tensor-product shifted Jacobi series,

$$\begin{aligned} u(x,y,t)&=\sum_{i,j,k = 0}^{N}{ \hat{u}_{i,j,k} P_{L,i}^{(\theta {_{1}},\vartheta_{1})}(x) P_{L,j}^{(\theta_{2},\vartheta_{2})}(y) P_{T,k}^{(\theta_{3},\vartheta_{3})}(t)} \\ &=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{0}^{i,j,k}(x,y,t), \end{aligned}$$
(3.6)

where \(\mathcal{P}_{0}^{i,j,k}(x,y,t)=P_{L,i}^{(\theta_{1},\vartheta _{1})}(x) P_{L,j}^{(\theta_{2},\vartheta_{2})}(y) P_{T,k}^{(\theta _{3},\vartheta_{3})}(t)\).
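The tensor-product basis function \(\mathcal{P}_{0}^{i,j,k}\) is straightforward to evaluate; a minimal sketch (added for illustration, with hypothetical default parameters \(L=T=1\) and all Jacobi indices zero, assuming SciPy):

```python
from scipy.special import eval_jacobi

def P_shift(k, th, vt, S, z):
    # shifted Jacobi polynomial P^{(th,vt)}_{S,k}(z) on [0, S]
    return eval_jacobi(k, th, vt, 2 * z / S - 1)

def tensor_basis(i, j, k, x, y, t, L=1.0, T=1.0,
                 params=((0.0, 0.0), (0.0, 0.0), (0.0, 0.0))):
    # evaluate P_0^{i,j,k}(x, y, t) for the given (theta_m, vartheta_m) pairs
    (t1, v1), (t2, v2), (t3, v3) = params
    return (P_shift(i, t1, v1, L, x) * P_shift(j, t2, v2, L, y)
            * P_shift(k, t3, v3, T, t))
```

For example, with all indices zero the basis function is identically 1, and with the default (Legendre) parameters \(P^{(0,0)}_{1,1}(z)=2z-1\).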

The approximation of the temporal partial derivative \(\frac{\partial u(x,y,t)}{\partial t}\) can easily be computed by using (2.13) as follows:

$$\begin{aligned} \frac{\partial u(x,y,t)}{\partial t}&=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} P_{L,i}^{(\theta_{1},\vartheta_{1})}(x) P_{L,j}^{(\theta_{2},\vartheta_{2})}(y) P_{T,k}^{(\theta _{3},\vartheta_{3},1)}(t) \\ &=\sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal{P}_{1}^{i,j,k}(x,y,t), \end{aligned}$$
(3.7)

where \(\mathcal{P}_{1}^{i,j,k}(x,y,t)=P_{L,i}^{(\theta_{1},\vartheta _{1})}(x) P_{L,j}^{(\theta_{2},\vartheta_{2})}(y) P_{T,k}^{(\theta _{3},\vartheta_{3},1)}(t)\).

Also, it is easy to approximate the spatial partial derivatives \(\frac {\partial u(x,y,t)}{\partial x}\) and \(\frac{\partial u(x,y,t)}{\partial y}\) as follows:

$$\begin{aligned}& \frac{\partial u(x,y,t)}{\partial x} =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} P_{L,i}^{(\theta_{1},\vartheta_{1},1)}(x) P_{L,j}^{(\theta_{2},\vartheta_{2})}(y) P_{T,k}^{(\theta _{3},\vartheta_{3})}(t) =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{2}^{i,j,k}(x,y,t), \end{aligned}$$
(3.8)
$$\begin{aligned}& \frac{\partial u(x,y,t)}{\partial y} =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} P_{L,i}^{(\theta_{1},\vartheta_{1})}(x) P_{L,j}^{(\theta_{2},\vartheta_{2},1)}(y) P_{T,k}^{(\theta _{3},\vartheta_{3})}(t) =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{3}^{i,j,k}(x,y,t), \end{aligned}$$
(3.9)

where

$$\mathcal{P}_{2}^{i,j,k}(x,y,t)= P_{L,i}^{(\theta_{1},\vartheta _{1},1)}(x) P_{L,j}^{(\theta_{2},\vartheta_{2})}(y) P_{T,k}^{(\theta_{3},\vartheta_{3})}(t) $$

and

$$\mathcal{P}_{3}^{i,j,k}(x,y,t)= P_{L,i}^{(\theta_{1},\vartheta _{1})}(x) P_{L,j}^{(\theta_{2},\vartheta_{2},1)}(y) P_{T,k}^{(\theta _{3},\vartheta_{3})}(t). $$

Identical steps can be applied to the second spatial partial derivatives, to get

$$\begin{aligned}& \frac{\partial^{2} u(x,y,t)}{\partial x^{2}} =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} P_{L,i}^{(\theta_{1},\vartheta_{1},2)}(x) P_{L,j}^{(\theta_{2},\vartheta_{2})}(y) P_{T,k}^{(\theta _{3},\vartheta_{3})}(t) =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{4}^{i,j,k}(x,y,t), \end{aligned}$$
(3.10)
$$\begin{aligned}& \frac{\partial^{2} u(x,y,t)}{\partial y^{2}} =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} P_{L,i}^{(\theta_{1},\vartheta_{1})}(x) P_{L,j}^{(\theta_{2},\vartheta_{2},2)}(y) P_{T,k}^{(\theta _{3},\vartheta_{3})}(t) =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{5}^{i,j,k}(x,y,t), \end{aligned}$$
(3.11)

where

$$\mathcal{P}_{4}^{i,j,k}(x,y,t)= P_{L,i}^{(\theta_{1},\vartheta _{1},2)}(x) P_{L,j}^{(\theta_{2},\vartheta_{2})}(y) P_{T,k}^{(\theta_{3},\vartheta_{3})}(t) $$

and

$$\mathcal{P}_{5}^{i,j,k}(x,y,t)= P_{L,i}^{(\theta_{1},\vartheta _{1})}(x) P_{L,j}^{(\theta_{2},\vartheta_{2},2)}(y) P_{T,k}^{(\theta _{3},\vartheta_{3})}(t). $$

A straightforward calculation shows that the fractional derivative of variable order of the approximate solution can be computed by

$$\begin{aligned}& {}_{0}^{\mathrm{C}}D_{t}^{1-\gamma(x,y,t)} \frac{\partial^{2} u(x,y,t)}{\partial x^{2}} =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} P_{L,i}^{(\theta _{1},\vartheta_{1},2)}(x) P_{L,j}^{(\theta_{2},\vartheta _{2})}(y) P_{T,k}^{(\theta_{3},\vartheta_{3},{1-\gamma (x,y,t)})}(t) \\& \hphantom{{}_{0}^{\mathrm{C}}D_{t}^{1-\gamma(x,y,t)}\frac{\partial^{2} u(x,y,t)}{\partial x^{2}}}=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{6}^{i,j,k}(x,y,t), \end{aligned}$$
(3.12)
$$\begin{aligned}& {}_{0}^{\mathrm{C}}D_{t}^{1-\gamma(x,y,t)} \frac{\partial^{2} u(x,y,t)}{\partial y^{2}} =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} P_{L,i}^{(\theta _{1},\vartheta_{1})}(x) P_{L,j}^{(\theta_{2},\vartheta _{2},2)}(y) P_{T,k}^{(\theta_{3},\vartheta_{3},{1-\gamma (x,y,t)})}(t) \\& \hphantom{{}_{0}^{\mathrm{C}}D_{t}^{1-\gamma(x,y,t)}\frac{\partial^{2} u(x,y,t)}{\partial y^{2}}}=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{7}^{i,j,k}(x,y,t), \end{aligned}$$
(3.13)

where

$$\begin{aligned}& \mathcal{P}_{6}^{i,j,k}(x,y,t)= P_{L,i}^{(\theta_{1},\vartheta _{1},2)}(x) P_{L,j}^{(\theta_{2},\vartheta_{2})}(y) P_{T,k}^{(\theta_{3},\vartheta_{3},{1-\gamma(x,y,t)})}(t), \\& \mathcal{P}_{7}^{i,j,k}(x,y,t)= P_{L,i}^{(\theta_{1},\vartheta _{1})}(x) P_{L,j}^{(\theta_{2},\vartheta_{2},2)}(y) P_{T,k}^{(\theta_{3},\vartheta_{3},{1-\gamma(x,y,t)})}(t). \end{aligned}$$

Therefore, based on (3.6)-(3.13), we can write (3.5) in the form

$$\begin{aligned}& \sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{1}^{i,j,k}(x,y,t) \\& \quad = k_{1} \sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{6}^{i,j,k}(x,y,t)+k_{2} \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal{P}_{7}^{i,j,k}(x,y,t) \\& \qquad {}+k_{3} \sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{4}^{i,j,k}(x,y,t)+k_{4} \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal{P}_{5}^{i,j,k}(x,y,t) \\& \qquad {}+f(x,y,t)+k_{1} \sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{4}^{i,j,k}(x,y,0) \frac{t^{\gamma(x,y,t)-1}}{\Gamma(\gamma (x,y,t))} \\& \qquad {}+k_{2} \sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{5}^{i,j,k}(x,y,0) \frac{t^{\gamma(x,y,t)-1}}{\Gamma(\gamma(x,y,t))}, \end{aligned}$$
(3.14)

while the numerical treatment of the initial and boundary conditions gives

$$\begin{aligned}& u(x,y,0)=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x,y,0)=g_{0}(x,y), \\& u(0,y,t)=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(0,y,t)=g_{1}(y,t), \\& u(L,y,t)=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(L,y,t)=g_{2}(y,t), \\& u(x,0,t)=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x,0,t)=g_{3}(x,t), \\& u(x,L,t)=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x,L,t)=g_{4}(x,t). \end{aligned}$$
(3.15)

In the proposed shifted Jacobi collocation method, the residual of (3.14) is set to zero at \(N\times(N-1)^{2}\) collocation points. Moreover, the initial-boundary conditions in (3.15) are collocated at the corresponding collocation points. Thus, we have \(N\times (N-1)^{2}\) algebraic equations for the \((N+1)^{3}\) unknowns \(\hat{u}_{i,j,k}\),

$$\begin{aligned}& \sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} F_{r,s,\tau }^{i,j,k}=f\bigl(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},t_{R,T,\tau}^{(\theta,\vartheta)} \bigr), \\& \quad r=1,\ldots ,N-1; s=1,\ldots,N-1, \tau=1,\ldots,N, \end{aligned}$$
(3.16)

where

$$\begin{aligned} F_{r,s,\tau}^{i,j,k} =&\mathcal{P}_{1}^{i,j,k} \bigl(x_{G,L,r}^{(\theta ,\vartheta)},y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau}^{(\theta ,\vartheta)} \bigr)-k_{1} \mathcal{P}_{6}^{i,j,k} \bigl(x_{G,L,r}^{(\theta ,\vartheta)},y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau}^{(\theta ,\vartheta)} \bigr) \\ &{}-k_{2} \mathcal{P}_{7}^{i,j,k} \bigl(x_{G,L,r}^{(\theta,\vartheta )},y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau}^{(\theta,\vartheta )} \bigr)-k_{3} \mathcal{P}_{4}^{i,j,k} \bigl(x_{G,L,r}^{(\theta,\vartheta )},y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau}^{(\theta,\vartheta )} \bigr) \\ &{}-k_{4} \mathcal{P}_{5}^{i,j,k} \bigl(x_{G,L,r}^{(\theta,\vartheta )},y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau}^{(\theta,\vartheta )} \bigr)-\frac{{( t_{R,T,\tau}^{(\theta,\vartheta)})}^{\gamma (x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)})-1}}{ \Gamma(\gamma(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},t_{R,T,\tau}^{(\theta,\vartheta)}))} \\ &{}\times \bigl[k_{1} \mathcal{P}_{4}^{i,j,k} \bigl(x_{G,L,r}^{(\theta ,\vartheta)},y_{G,L,s}^{(\theta,\vartheta)},0 \bigr)+k_{2} \mathcal {P}_{5}^{i,j,k} \bigl(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},0\bigr) \bigr]. \end{aligned}$$
(3.17)

Also, the initial condition produces \((N+1)^{2}\) algebraic equations:

$$ \sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k} \bigl(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},0 \bigr)=g_{0}\bigl(x_{G,L,r}^{(\theta,\vartheta )},y_{G,L,s}^{(\theta,\vartheta)} \bigr), \quad r=0,\ldots,N; s=0,\ldots,N. $$
(3.18)

Furthermore, using the boundary conditions, we have \(4\times N^{2}\) algebraic equations

$$ \begin{aligned} &\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k} \bigl(0,y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau }^{(\theta,\vartheta)}\bigr) =g_{1}\bigl(y_{G,L,s}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)} \bigr), \\ &\sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}\bigl(L,y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau }^{(\theta,\vartheta)} \bigr) =g_{2}\bigl(y_{G,L,s}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)} \bigr), \\ &\sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}\bigl(x_{G,L,r}^{(\theta,\vartheta)},0,t_{R,T,\tau }^{(\theta,\vartheta)} \bigr) =g_{3}\bigl(x_{G,L,r}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)} \bigr), \\ &\sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}\bigl(x_{G,L,r}^{(\theta,\vartheta)},L,t_{R,T,\tau }^{(\theta,\vartheta)} \bigr) =g_{4}\bigl(x_{G,L,r}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)} \bigr), \end{aligned} $$
(3.19)

where \(r=0,\ldots,N-1\); \(s=0,\ldots,N-1\); \(\tau=1,\ldots,N\).

This in turn produces \((N +1)^{3}\) algebraic equations

$$ \textstyle\begin{cases} \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} F_{r,s,\tau }^{i,j,k}=f(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},t_{R,T,\tau}^{(\theta,\vartheta)}) , & r=1,\ldots ,N-1; s=1,\ldots,N-1; \\ &\tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},0)= g_{0}(x_{G,L,r}^{(\theta,\vartheta )},y_{G,L,s}^{(\theta,\vartheta)}), & r=0,\ldots,N; s=0,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(0,y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau }^{(\theta,\vartheta)})=g_{1}(y_{G,L,s}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & s=0,\ldots,N-1; \tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(L,y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau }^{(\theta,\vartheta)})=g_{2}(y_{G,L,s}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & s=0,\ldots,N-1; \tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x_{G,L,r}^{(\theta,\vartheta)},0,t_{R,T,\tau }^{(\theta,\vartheta)})=g_{3}(x_{G,L,r}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & r=0,\ldots,N-1; \tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x_{G,L,r}^{(\theta,\vartheta)},L,t_{R,T,\tau }^{(\theta,\vartheta)})=g_{4}(x_{G,L,r}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & r=0,\ldots,N-1; \tau=1,\ldots,N. \end{cases} $$
(3.20)
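As a quick consistency check (added here), the counts of residual, initial, and boundary collocation equations in (3.16), (3.18), and (3.19) indeed sum to the \((N+1)^{3}\) unknowns for every N:

```python
# interior residual, initial, and boundary equations sum to (N+1)^3 unknowns
for N in range(1, 30):
    interior = N * (N - 1) ** 2   # residual equations (3.16)
    initial = (N + 1) ** 2        # initial-condition equations (3.18)
    boundary = 4 * N ** 2         # boundary-condition equations (3.19)
    assert interior + initial + boundary == (N + 1) ** 3
```

This confirms that the square linear system (3.20) is fully determined.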

3.2 R-S problem with Neumann boundary conditions

Since many application problems in science and engineering involve Neumann boundary conditions [28, 40], it is important to extend the results of the present section to more general boundary conditions, so that the shifted Jacobi collocation method can be used efficiently to simulate these models.

Let us consider the R-S problem (3.1) subject to (3.2) and Neumann boundary conditions,

$$ \begin{aligned} &\frac{\partial u(0,y,t)}{\partial x}=g_{1}(y,t), \qquad \frac{\partial u(L,y,t)}{\partial x}=g_{2}(y,t), \\ &\frac{\partial u(x,0,t)}{\partial y}=g_{3}(x,t),\qquad \frac{\partial u(x,L,t)}{\partial y}=g_{4}(x,t). \end{aligned} $$
(3.21)

Following the procedure of the previous subsection, we get

$$\begin{aligned}& \sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} F_{r,s,\tau }^{i,j,k}=f\bigl(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)}, t_{R,T,\tau}^{(\theta,\vartheta)}\bigr), \\ & \quad r=1,\ldots ,N-1; s=1,\ldots,N-1; \tau=1,\ldots,N, \end{aligned}$$
(3.22)

where \(F_{r,s,\tau}^{i,j,k}\) is given in equation (3.17). In addition to the previous equation, we get the following:

$$\begin{aligned}& u(x,y,0)=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x,y,0)=g_{0}(x,y), \\& \frac{\partial u(0,y,t)}{\partial x}=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{2}^{i,j,k}(0,y,t)=g_{1}(y,t), \\& \frac{\partial u(L,y,t)}{\partial x}=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{2}^{i,j,k}(L,y,t)=g_{2}(y,t), \\& \frac{\partial u(x,0,t)}{\partial y}=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{3}^{i,j,k}(x,0,t)=g_{3}(x,t), \\& \frac{\partial u(x,L,t)}{\partial y}=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{3}^{i,j,k}(x,L,t)=g_{4}(x,t). \end{aligned}$$
(3.23)

By collocating these equations at the collocation points, the approximate solution can be obtained by solving the generated algebraic system:

$$ \textstyle\begin{cases} \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} F_{r,s,\tau }^{i,j,k}=f(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},t_{R,T,\tau}^{(\theta,\vartheta)}) , & r=1,\ldots ,N-1; s=1,\ldots,N-1; \\ &\tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},0)= g_{0}(x_{G,L,r}^{(\theta,\vartheta )},y_{G,L,s}^{(\theta,\vartheta)}), & r=0,\ldots,N; s=0,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{2}^{i,j,k}(0,y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau }^{(\theta,\vartheta)})=g_{1}(y_{G,L,s}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & s=0,\ldots,N-1; \tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{2}^{i,j,k}(L,y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau }^{(\theta,\vartheta)})=g_{2}(y_{G,L,s}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & s=0,\ldots,N-1; \tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{3}^{i,j,k}(x_{G,L,r}^{(\theta,\vartheta)},0,t_{R,T,\tau }^{(\theta,\vartheta)})=g_{3}(x_{G,L,r}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & r=0,\ldots,N-1; \tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{3}^{i,j,k}(x_{G,L,r}^{(\theta,\vartheta)},L,t_{R,T,\tau }^{(\theta,\vartheta)})=g_{4}(x_{G,L,r}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & r=0,\ldots,N-1; \tau=1,\ldots,N. \end{cases} $$
(3.24)

3.3 R-S problem with mixed boundary conditions

This subsection develops the shifted Jacobi collocation method for the numerical solution of the R-S problem (3.1) subject to (3.2) and mixed boundary conditions, two of Neumann type and two non-local integral conditions:

$$\begin{aligned}& \frac{\partial u(0,y,t)}{\partial x} =g_{1}(y,t),\qquad \int _{0}^{L}u(x,y,t)\,dx=g_{2}(y,t), \\& \frac{\partial u(x,0,t)}{\partial y} =g_{3}(x,t),\qquad \int _{0}^{L}u(x,y,t)\,dy=g_{4}(x,t), \quad t\in[0,T]. \end{aligned}$$
(3.25)

Based on the previous initial and mixed boundary conditions, we get the following approximations:

$$\begin{aligned}& u(x,y,0) =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x,y,0)=g_{0}(x,y), \\& \frac{\partial u(0,y,t)}{\partial x} =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{2}^{i,j,k}(0,y,t)=g_{1}(y,t), \\& \int_{0}^{L}u(x,y,t)\,dx =\sum _{i,j,k = 0}^{N}\hat{u}_{i,j,k} \biggl( \int_{0}^{L}P_{L,i}^{(\theta,\vartheta)}(x)\,dx \biggr) P_{L,j}^{(\theta ,\vartheta)}(y) P_{T,k}^{(\theta,\vartheta)}(t) \\& \hphantom{\int_{0}^{L}u(x,y,t)\,dx}=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{8}^{i,j,k}(y,t)=g_{2}(y,t), \\& \frac{\partial u(x,0,t)}{\partial y} =\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal{P}_{3}^{i,j,k}(x,0,t)=g_{3}(x,t), \\& \int_{0}^{L}u(x,y,t)\,dy = \sum _{i,j,k = 0}^{N}\hat{u}_{i,j,k}P_{L,i}^{(\theta,\vartheta)}(x) \biggl( \int _{0}^{L}P_{L,j}^{(\theta,\vartheta)}(y)\,dy \biggr) P_{T,k}^{(\theta ,\vartheta)}(t) \\& \hphantom{\int_{0}^{L}u(x,y,t)\,dy}=\sum_{i,j,k = 0}^{N} \hat{u}_{i,j,k} \mathcal {P}_{9}^{i,j,k}(x,t)=g_{4}(x,t), \end{aligned}$$
(3.26)

where

$$ \begin{aligned} &\mathcal{P}_{8}^{i,j,k}(y,t) = \biggl( \int_{0}^{L}P_{L,i}^{(\theta,\vartheta)}(x)\,dx \biggr) P_{L,j}^{(\theta,\vartheta)}(y) P_{T,k}^{(\theta,\vartheta)}(t), \\ &\mathcal{P}_{9}^{i,j,k}(x,t) =P_{L,i}^{(\theta,\vartheta)}(x) \biggl( \int_{0}^{L}P_{L,j}^{(\theta,\vartheta)}(y)\,dy \biggr) P_{T,k}^{(\theta ,\vartheta)}(t). \end{aligned} $$
(3.27)
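The integral factors appearing in (3.27) can be evaluated exactly by ordinary Gaussian quadrature, since the integrands are polynomials. The following is an illustrative sketch (not part of the paper's derivation), assuming the standard shift convention \(P_{L,i}^{(\theta,\vartheta)}(x)=P_{i}^{(\theta,\vartheta)}(2x/L-1)\):

```python
import numpy as np
from scipy.special import eval_jacobi

def shifted_jacobi(i, theta, vartheta, x, L):
    """Shifted Jacobi polynomial P_{L,i}^{(theta,vartheta)}(x) on [0, L],
    assuming the usual affine map x -> 2x/L - 1."""
    return eval_jacobi(i, theta, vartheta, 2.0 * x / L - 1.0)

def integral_over_0_L(i, theta, vartheta, L, n_quad=50):
    """int_0^L P_{L,i}^{(theta,vartheta)}(x) dx via Gauss-Legendre
    quadrature, which is exact since the integrand is a polynomial."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    x = 0.5 * L * (nodes + 1.0)          # map [-1, 1] -> [0, L]
    return 0.5 * L * np.sum(weights * shifted_jacobi(i, theta, vartheta, x, L))
```

For the Legendre case \(\theta=\vartheta=0\), orthogonality gives \(\int_{0}^{L}P_{L,i}\,dx=0\) for \(i\geq1\) and \(L\) for \(i=0\), which the sketch reproduces.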

Accordingly, the following system of \((N+1)^{3}\) algebraic equations is obtained:

$$ \textstyle\begin{cases} \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} F_{r,s,\tau }^{i,j,k}=f(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},t_{R,T,\tau}^{(\theta,\vartheta)}) , & r=1,\ldots ,N-1; s=1,\ldots,N-1; \\ &\tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{0}^{i,j,k}(x_{G,L,r}^{(\theta,\vartheta)},y_{G,L,s}^{(\theta ,\vartheta)},0)= g_{0}(x_{G,L,r}^{(\theta,\vartheta )},y_{G,L,s}^{(\theta,\vartheta)}), & r=0,\ldots,N; s=0,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{2}^{i,j,k}(0,y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau }^{(\theta,\vartheta)})=g_{1}(y_{G,L,s}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & s=0,\ldots,N-1; \tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{8}^{i,j,k}(y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau}^{(\theta ,\vartheta)})=g_{2}(y_{G,L,s}^{(\theta,\vartheta)},t_{R,T,\tau }^{(\theta,\vartheta)}), & s=0,\ldots,N-1; \tau=1,\ldots ,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{3}^{i,j,k}(x_{G,L,r}^{(\theta,\vartheta)},0,t_{R,T,\tau }^{(\theta,\vartheta)})=g_{3}(x_{G,L,r}^{(\theta,\vartheta )},t_{R,T,\tau}^{(\theta,\vartheta)}), & r=0,\ldots,N-1; \tau=1,\ldots,N, \\ \sum_{i,j,k = 0}^{N}\hat{u}_{i,j,k} \mathcal {P}_{9}^{i,j,k}(x_{G,L,r}^{(\theta,\vartheta)},t_{R,T,\tau}^{(\theta ,\vartheta)})=g_{4}(x_{G,L,r}^{(\theta,\vartheta)},t_{R,T,\tau }^{(\theta,\vartheta)}), & r=0,\ldots,N-1; \tau=1,\ldots,N. \end{cases} $$
(3.28)
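As a sanity check (an illustrative count, not part of the algorithm itself), the collocation equations in (3.28) exactly match the \((N+1)^{3}\) unknown coefficients \(\hat{u}_{i,j,k}\), \(0\leq i,j,k\leq N\):

```python
# Count the collocation equations in (3.28) and compare with the
# (N+1)^3 unknown coefficients u_hat_{i,j,k}, 0 <= i,j,k <= N.
def equation_count(N):
    interior = (N - 1) * (N - 1) * N   # PDE residual: r,s = 1..N-1, tau = 1..N
    initial  = (N + 1) * (N + 1)       # u(x,y,0) = g_0: r,s = 0..N
    boundary = 4 * N * N               # four mixed conditions: N*N equations each
    return interior + initial + boundary

for N in (2, 4, 8, 16):
    assert equation_count(N) == (N + 1) ** 3
```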

The above system may be solved with Mathematica's built-in function FindRoot, whose default method is Newton's method. Starting from a zero initial approximation for the solution, FindRoot produces an accurate approximate solution of the problem.
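The same Newton-type strategy can be reproduced outside Mathematica. Below is a minimal sketch using SciPy's hybrid Newton solver on a small stand-in system, started from a zero initial guess; the system F here is purely illustrative and is not the collocation system (3.28):

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative stand-in for the nonlinear algebraic system F(u) = 0:
# solved by a Newton-type iteration from a zero initial approximation,
# mirroring the FindRoot usage described above.
def F(u):
    return np.array([
        u[0] + u[1] ** 2 - 1.0,
        u[0] ** 2 - u[1],
    ])

u0 = np.zeros(2)                       # zero initial approximation
u_star, info, ier, msg = fsolve(F, u0, full_output=True)
assert ier == 1                        # solver reports convergence
residual = np.max(np.abs(F(u_star)))   # residual near machine precision
```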

4 Numerical examples

This section reports several numerical examples to demonstrate the high accuracy and applicability of the present scheme. The results obtained with the present method are compared with those reported in the recent literature.

Example 1

Consider the following problem [32]:

$$\begin{aligned}& \frac{\partial u(x,y,t)}{\partial t}= {}_{0}D^{1-\gamma (x,y,t)}_{t} \biggl( \frac{\partial^{2}{ u(x,y,t)}}{\partial x^{2}} +\frac{\partial^{2}{ u(x,y,t)}}{\partial y^{2}} \biggr) +\frac{\partial^{2}{ u(x,y,t)}}{\partial x^{2}} \\& \hphantom{\frac{\partial u(x,y,t)}{\partial t}={}}{}+\frac{\partial ^{2}{ u(x,y,t)}}{\partial y^{2}}+f(x,y,t),\quad (x,y,t)\in[0,1]\times [0,1] \times[0,1], \\& u(0,y,t) =e^{y} t^{2},\qquad u(1,y,t)=e^{y+1} t^{2}, \\& u(x,0,t) =e^{x} t^{2},\qquad u(x,1,t)=e^{x+1} t^{2},\quad (x,y,t)\in[0,1]\times[0,1]\times[0,1], \\& u(x,y,0) =0,\quad (x,y)\in[0,1]\times[0,1], \end{aligned}$$
(4.1)

where

$$ f(x,y,t) =2 e^{x+y} \biggl(t-t^{2}-\frac{2 t^{1+\gamma (x,y,t)}}{\Gamma(2+\gamma(x,y,t))} \biggr). $$

The exact solution is

$$ u(x,y,t)=e^{x+y} t^{2}, \quad (x,y,t)\in[0,1] \times[0,1]\times[0,1]. $$
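Under the Riemann-Liouville definition, the fractional term in f above reduces to \({}_{0}D^{1-\gamma}_{t}t^{2}=2t^{1+\gamma}/\Gamma(2+\gamma)\). The following is a quick numerical check of this closed form (an illustrative sketch, not from the paper):

```python
from math import gamma
from scipy.integrate import quad

# For u = e^{x+y} t^2, the fractional term in f requires
# D_t^{1-g} t^2 = 2 t^{1+g} / Gamma(2+g).  Substituting s = t*v in the
# Riemann-Liouville integral gives I^g t^2 = t^{2+g} B(3, g) / Gamma(g);
# differentiating in t yields the closed form checked below.
def rl_derivative_t2(t, g):
    """D_t^{1-g} t^2 computed from the integral representation."""
    beta_3_g, _ = quad(lambda v: (1 - v) ** (g - 1) * v ** 2, 0, 1)
    # I^g t^2 = t^{2+g} * beta_3_g / Gamma(g); d/dt gives:
    return (2 + g) * t ** (1 + g) * beta_3_g / gamma(g)

t, g = 0.5, 0.7
closed_form = 2 * t ** (1 + g) / gamma(2 + g)
assert abs(rl_derivative_t2(t, g) - closed_form) < 1e-6
```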

Chen et al. [32] developed implicit and explicit schemes to obtain numerical solutions of the above problem for different choices of \(\Delta_{t}\) and \(\Delta_{x}\), where \(\Delta_{t}\) and \(\Delta_{x}\) are the time and space step sizes, respectively. In Table 1, we compare the maximum absolute errors (MAEs) obtained by the present method with the shifted Jacobi parameters \(\theta_{i}=\vartheta_{i}=0\), \(i=1,2,3\), against the corresponding results of the implicit and explicit schemes of [32] at \(\gamma(x,y,t)=\sin (xyt+\frac{2\pi}{5} )\). For different choices of \(\gamma(x,y,t)\), the MAEs with \(\theta_{i}=\vartheta_{i}=0\), \(i=1,2,3\), are listed in Table 2.

Table 1 Comparing MAEs of the present method at \(\pmb{\theta_{i}=\vartheta _{i}=0}\) , \(\pmb{i=1,2,3}\) , and the method in [ 32 ] for Example 1
Table 2 MAEs of problem ( 4.1 ) at \(\pmb{\theta_{i}=\vartheta_{i}=0}\) , \(\pmb{i=1,2,3}\)

In addition, to demonstrate the convergence and high accuracy of the present algorithm, Figure 1 presents the logarithmic graph of the MAEs \((\log_{10} \mathrm{Error})\) at \(\gamma(x,y,t)=\sin (xyt+\frac{2\pi}{5} )\) for different values of the shifted Jacobi parameters. The numerical errors for all chosen shifted Jacobi parameters \(\theta_{i}\), \(\vartheta_{i}\), \(i=1,2,3\), decay rapidly as N increases, confirming that the present method yields a highly accurate numerical solution of the variable-order partial FDE.
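The rapid error decay seen in Figure 1 is the usual spectral behavior for smooth solutions. A minimal one-dimensional illustration of this decay (not from the paper), interpolating the smooth function \(e^{x}\) at Gauss-Legendre nodes:

```python
import numpy as np

# Spectral (faster-than-algebraic) error decay for a smooth function:
# interpolate e^x at N+1 Gauss-Legendre nodes and measure the maximum
# error on a fine grid; the error drops rapidly as N grows.
def interp_error(N):
    nodes, _ = np.polynomial.legendre.leggauss(N + 1)
    coeffs = np.polyfit(nodes, np.exp(nodes), N)   # degree-N interpolant
    xs = np.linspace(-1, 1, 1001)
    return np.max(np.abs(np.polyval(coeffs, xs) - np.exp(xs)))

errs = [interp_error(N) for N in (4, 8, 12)]
assert errs[0] > errs[1] > errs[2]     # monotone, fast decay with N
```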

Figure 1
figure 1

Convergence of problem ( 4.1 ) at \(\pmb{\gamma(x,y,t)=\sin (xyt+\frac {2\pi}{5} )}\) and various choices of \(\pmb{\theta_{i}=\vartheta_{i}}\) , \(\pmb{i=1,2,3}\) .

Example 2

Consider the following problem:

$$\begin{aligned} \frac{\partial u(x,y,t)}{\partial t} =& {}_{0}D^{1-\gamma (x,y,t)}_{t} \biggl(\frac{\partial^{2}{ u(x,y,t)}}{\partial x^{2}} +2 \frac{\partial^{2}{ u(x,y,t)}}{\partial y^{2}} \biggr) +\frac{\partial^{2}{ u(x,y,t)}}{\partial x^{2}} \\ &{}+\frac{\partial ^{2}{ u(x,y,t)}}{\partial y^{2}}+f(x,y,t),\quad (x,y,t)\in[0,1]\times [0,1]\times[0,1], \end{aligned}$$
(4.2)

where

$$\begin{aligned} f(x,y,t) =&3e^{t} \sin(x + y) + \frac{{3t^{\gamma(x,y,t) - 1} \sin(x + y)}}{ {\Gamma(\gamma(x,y,t))}} \\ &{}+ \frac{{3e^{t} (\Gamma(\gamma(x,y,t)) - \Gamma(\gamma(x,y,t),t))\sin(x + y)}}{{\Gamma(\gamma (x,y,t))}}, \end{aligned}$$

where \(\Gamma(r,t)\) is the upper incomplete gamma function. The initial and Neumann boundary conditions can be extracted from

$$u(x,y,t)=e^{t} \sin(x + y). $$
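The incomplete-gamma terms in f above stem from the identity \(I^{\gamma}e^{t}=e^{t}(\Gamma(\gamma)-\Gamma(\gamma,t))/\Gamma(\gamma)\) for the Riemann-Liouville fractional integral, whose t-derivative produces the \(t^{\gamma-1}/\Gamma(\gamma)\) and incomplete-gamma terms. A hedged numerical check of the identity (illustrative, not from the paper):

```python
from math import exp, gamma
from scipy.integrate import quad
from scipy.special import gammainc   # regularized lower incomplete gamma

# Check I^g e^t = e^t * (Gamma(g) - Gamma(g,t)) / Gamma(g), i.e. the
# Riemann-Liouville integral of e^t equals e^t times the regularized
# lower incomplete gamma function P(g, t).
def frac_integral_exp(t, g):
    """I^g e^t = (1/Gamma(g)) * int_0^t (t-s)^{g-1} e^s ds."""
    val, _ = quad(lambda s: (t - s) ** (g - 1) * exp(s), 0, t)
    return val / gamma(g)

t, g = 0.8, 0.6
closed = exp(t) * gammainc(g, t)     # gammainc is already divided by Gamma(g)
assert abs(frac_integral_exp(t, g) - closed) < 1e-6
```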

In Table 3, we present the MAEs of \(u(x, y, t)\) for problem (4.2) at \(\gamma(x,y,t)=0.7\), \(\gamma(x,y,t)=\frac{{9 + (yt)^{3} - \cos^{2} (xt)}}{{60}}\) and \(\theta_{1}=\vartheta_{1}=-\theta_{2}=-\vartheta_{2}=-\theta_{3}=-\vartheta_{3}=-1/2\) for various choices of N. The results in this table show that the approximate solutions are very accurate even for small values of N. Figure 2 demonstrates that the errors remain very small even for the small number of collocation nodes used.

Figure 2
figure 2

The space-time graphs of the error function at various values of t with \(\pmb{\gamma(x,y,t)=\frac{{11 - \cos^{2} (xt)}}{{11}}}\) , \(\pmb{N=10}\) and \(\pmb{\theta_{i}=\vartheta_{i}=0}\) , \(\pmb{i=1,2,3}\) , for Example 2 .

Table 3 MAEs at \(\pmb{\theta_{1}=\vartheta_{1}=-\theta_{2}=-\vartheta _{2}=-\theta_{3}=-\vartheta_{3}=-1/2}\) with various values of \(\pmb{\gamma (x,y,t)}\) and N for Example 2

Example 3

Finally, we consider the following problem with mixed boundary conditions:

$$\begin{aligned} \frac{\partial u(x,y,t)}{\partial t} =& {}_{0}D^{1-\gamma (y,t)}_{t} \biggl(\frac{\partial^{2}{ u(x,y,t)}}{\partial x^{2}} + \frac{\partial^{2}{ u(x,y,t)}}{\partial y^{2}} \biggr) +\frac{\partial^{2}{ u(x,y,t)}}{\partial x^{2}} \\ &{}+\frac{\partial ^{2}{ u(x,y,t)}}{\partial y^{2}}+f(x,y,t), \quad (x,y,t)\in[0,1]\times [0,1]\times[0,1], \end{aligned}$$
(4.3)

with the initial and non-local boundary conditions

$$\begin{aligned}& u(x,y,0)=0 , \\& \frac{\partial u(0,y,t)}{\partial x}=0,\qquad \int^{1}_{0} u(x,y,t) \,dx =t^{3} \cos(y) \sin(1), \\& \frac{\partial u(x,0,t)}{\partial y}=0, \qquad \int^{1}_{0} u(x,y,t) \,dy =t^{3} \cos(x) \sin(1), \end{aligned}$$
(4.4)

where

$$f(x,y,t)= \bigl(3t^{2} + 2t^{3} \bigr)\cos(x)\cos(y) + \frac{{12t^{2 + \gamma (y,t)} \cos(x)\cos(y)}}{ {\Gamma(3 + \gamma(y,t))}} $$

and

$$\gamma(y,t) = \frac{{11 - \cos^{2} (yt)}}{{11}}. $$

The exact solution is

$$u(x,y,t)={t^{3} \cos(x)\cos(y)}. $$
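As a quick consistency check (illustrative only), the non-local integral data in (4.4) follow from this exact solution by direct integration:

```python
from math import cos, sin
from scipy.integrate import quad

# The non-local conditions follow from the exact solution
# u = t^3 cos(x) cos(y): integrating over x leaves a function of (y, t),
# and integrating over y leaves a function of (x, t).
def u(x, y, t):
    return t ** 3 * cos(x) * cos(y)

y, t = 0.3, 0.5
int_dx, _ = quad(lambda x: u(x, y, t), 0, 1)     # integrate over x
assert abs(int_dx - t ** 3 * sin(1) * cos(y)) < 1e-10

x = 0.7
int_dy, _ = quad(lambda yy: u(x, yy, t), 0, 1)   # integrate over y
assert abs(int_dy - t ** 3 * cos(x) * sin(1)) < 1e-10
```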

Table 4 lists the results obtained by the proposed algorithm for different values of \(\theta_{i}\), \(\vartheta_{i}\) and N. An excellent approximation of the exact solution is achieved with a limited number of collocation nodes, demonstrating that the present method provides an accurate approximation for problems with non-local conditions.

Table 4 Maximum absolute errors of Example 3

5 Conclusions

We presented a collocation method for obtaining an accurate numerical solution of the variable-order R-S problem for a heated generalized second grade fluid subject to initial-boundary and non-local conditions. One significant advantage of the present technique is that a fully spectral method is implemented, using SJ-GR-C and SJ-G-C approximations for the time and space variables, respectively. The problem together with its conditions is thereby reduced to a system of algebraic equations. A further key feature of the scheme is that close agreement between the approximate and exact solutions is achieved with only a few SJ-G and SJ-GR collocation points. Through the numerical examples, and especially through comparisons between the obtained approximate solutions and those produced by other methods, we demonstrated the validity and high accuracy of the presented method.