1 Introduction

The most famous equation in quantum mechanics, describing the behavior of particles in Hilbert spaces, is the Schrödinger equation. It can be obtained in several ways in quantum physics: for example, in the canonical quantization of classical mechanics, the time evolution of the wave function leads to the Schrödinger equation, and Feynman [1] derived it by applying a path-integral approach to a Gaussian probability distribution function. Mathematically, the equation resembles a diffusion equation that can be derived by taking probability distributions into account.

Recently, the word fractional has been widely used in various sciences, especially physics and mathematics. The concept was first introduced by Mandelbrot [2] in a long series of works, and Feynman's path integral first brought the notion of fractals into quantum mechanics.

The fractional Schrödinger equation was introduced by Laskin [3–7] two decades ago as an extension of the Feynman path-integral formalism; this prominent formulation of quantum mechanics is the basis of fractional quantum mechanics. As is well known, the Schrödinger equation has two parts: the first contains a first-order time derivative, and the second a second-order space derivative. Laskin [3–7] studied the time-fractional Schrödinger equation and showed that the statistical description of the quantum paths is of the same nature as the basic equations of quantum mechanics. In Laskin's scenario, the time-fractional Schrödinger equation corresponds to non-Markovian evolution; he also introduced the space-fractional derivative through Lévy distributions over all possible paths. Furthermore, it was shown that the fractional Hamiltonian and parity remain conserved, and the energy spectra of the hydrogen atom and the harmonic oscillator were computed for the time-fractional equation; Laskin also investigated the energy levels of the fractional three-dimensional Coulomb (Bohr-atom) potential; see [3–7]. The fractional Laplacian–Schrödinger equation for initial value problems was investigated by Hu and Kallianpur [8], where the solutions are represented probabilistically. Also, the Green function in quantum scattering and the influence of a barrier were computed by Guo and Xu [9].

Some properties of the time-fractional nonlinear Schrödinger equation, such as the fractional oscillator, were investigated by Naber [10]. This was done by replacing the first-order time derivative with a Caputo fractional derivative [11], while the second-order space derivative remained unchanged. In [10], the order of the time-fractional derivative is \(0< \alpha < 1\), and with this order the Schrödinger equation was solved for a free particle in a box and for a finite potential well. After that, the Schrödinger equation with both space- and time-fractional derivatives was developed by Wang and Xu [12] and solved for a free particle and an infinite rectangular potential well. The solutions of the fractional Schrödinger equation for the Coulomb, linear, and δ-potentials were investigated in [13].

In this paper, we study the Schrödinger equation based on fractional time. This equation was discussed in [3–7, 10, 14–21]. In [14], based on Caputo fractional derivatives, the Planck mass and Planck constant were represented by fractal relations whose dimensions are themselves fractal quantities; with this technique, it was shown that the time-dependent fractal Schrödinger equation for a particle in a potential field exactly matches the standard form of the equation. In [10], the wave function of the time-fractional Schrödinger equation for a free particle and for potential wells was obtained by using the Mittag-Leffler function with a complex argument, and the corresponding eigenvalues were also given. In [3–7], Laskin considered a time-independent fractional Schrödinger equation and obtained, for \(0 < \alpha < 1\), applications of this equation such as the shape of the Schrödinger wave function and its exact solution, the wave function and eigenvalues for infinite potential wells, and, in particular, the values for a linear potential field. In all of these papers, the time-fractional Schrödinger equation was investigated, and the wave function and eigenvalues were specified for different potentials. In this paper, we solve the general form of an inhomogeneous time-fractional differential equation corresponding to an inhomogeneous time-fractional Schrödinger equation using the B-spline method [22] and obtain numerical solutions. To make this concrete, we solve the time-fractional Schrödinger equation for a linear potential field with this method and present the exact solutions.

2 The inhomogeneous nonlinear time-fractional Schrödinger equation

One way to obtain a fractional form of the time-fractional nonlinear Schrödinger equation is to apply the natural units \(\hbar =c=1\) to the wave function. Some authors, such as [10], used Planck units instead; for our purposes the two choices are equivalent. Planck's units [23] are defined in terms of three constants of theoretical physics \(G,\hbar, c\), where G is the gravitational constant, ħ is Planck's constant, and c denotes the speed of light, as follows:

$$ l_{P}= \sqrt{\frac{\hbar G}{c^{3}}},\qquad t_{P}= \sqrt{\frac{\hbar G}{c^{5}}},\qquad m_{P}= \sqrt{\frac{\hbar c}{G}},\qquad E_{P}=m_{P} c^{2}= \sqrt{\frac{\hbar c^{5}}{G}}, $$
(1)

where the last equation denotes the energy at the Planck scale. Indeed, in natural units, \(\hbar =c=1\).

In our model, to construct the time-fractional derivative for a nonlinear Schrödinger equation in natural units, we start from the primary definition of this equation. The Schrödinger equation in one-dimensional space-time was introduced in [24] as follows:

$$ i\hbar \frac{\partial {\varPsi }}{\partial {t}}=-\frac{{\hbar }^{2}}{2m} \frac{{\partial }^{2}\varPsi }{{\partial x}^{2}} +V(x)\varPsi, $$
(2)

where \(V(x)\) indicates the potential of the system. In the quantum mechanics literature, various potential functions are chosen to describe particular models. In this work, to construct a nonlinear time-fractional Schrödinger equation, we consider Eq. (2) in natural units as follows:

$$ i\frac{\partial {\varPsi }}{\partial {t}}=- \frac{1}{2m} \frac{{\partial }^{2}\varPsi }{{\partial x}^{2}} +{V(x)}\varPsi. $$
(3)

Now, we apply the Caputo fractional derivative to introduce the nonlinear Schrödinger equation in one-dimensional space as follows:

$$ iD^{\alpha }_{t}\varPsi =- \frac{1}{2m} \frac{{\partial }^{2}\varPsi }{{\partial x}^{2}} +{V(x)}\varPsi. $$
(4)

In Eq. (4), \(D^{\alpha }_{t}=\frac{\partial ^{\alpha }}{\partial t^{\alpha }}\) denotes the Caputo fractional time derivative of order \(0<\alpha <1\). The main aim of this paper is to study the abstract time-fractional evolution equation (4) on a potential well.

Now, we consider the motion of a particle in a linear potential field [25] which is generated by an external field. The potential arising from this field reads as follows:

$$ V(x)= \textstyle\begin{cases} Fx,& x\geq 0, (F>0), \\ \infty,& x< 0, \end{cases} $$
(5)

where F is the force that acts on the particle in the external field. Thus, as an example, one obtains the inhomogeneous nonlinear time-fractional Schrödinger equation for a linear potential given by

$$ i\frac{\partial ^{\alpha } \varPsi (x,t)}{{\partial t}^{\alpha }}+ \frac{{\partial }^{2}\varPsi (x,t)}{{\partial x}^{2}} -{V(x)} \varPsi (x,t)- \bigl(1- \cos ({\pi x})\bigr)=0. $$
(6)

The Schrödinger equation is the basic equation of quantum mechanics; it generates the probability function that describes the temporal and spatial evolution of a mechanical system. In this article, we consider the time-fractional Schrödinger equation

$$\begin{aligned} i\frac{\partial ^{\alpha }\varPsi (x,t)}{\partial t^{\alpha }} + \frac{\partial ^{2} \varPsi (x,t)}{\partial x^{2}} + r \bigl\vert \varPsi (x,t) \bigr\vert ^{2} \varPsi (x,t) - \delta R(x,t)=0,\quad 0 < \alpha < 1, \end{aligned}$$
(7)

where δ is a real constant. Also, the initial, boundary, and overspecified conditions, respectively, read as follows:

$$\begin{aligned} \begin{aligned} &\varPsi (x,0)= f(x), \quad 0< x < 1, \\ &\varPsi (0,t)= p (t),\qquad \varPsi (1,t)= q (t),\quad 0 \leq t \leq T, \\ &\varPsi \bigl(x^{\ast },t\bigr)= g (t),\quad 0< x^{\ast }< 1, 0 \leq t \leq T. \end{aligned} \end{aligned}$$
(8)

Therefore, we have

$$\begin{aligned} \begin{aligned}&\varPsi (x,t)=u(x,t)+i v(x,t), \quad u(x,t), v(x,t)\in C\bigl([0,1] \times [0,T], \mathbb{R}\bigr), \\ &R(x,t)=f(x,t)+i g(x,t),\quad f(x,t), g(x,t)\in C\bigl([0,1]\times [0,T], \mathbb{R} \bigr), \\ &p(t) = p_{1} (t) + i p_{2} (t), \quad p_{1} (t), p_{2} (t)\in C\bigl([0,T], \mathbb{R}\bigr), \\ &q(t) = q_{1} (t) + i q_{2} (t), \quad q_{1} (t), q_{2} (t)\in C\bigl([0,T], \mathbb{R}\bigr), \\ &f(x) = f_{1} (x) + i f_{2} (x), \quad f_{1} (x), f_{2} (x)\in C\bigl([0,1], \mathbb{R}\bigr), \\ &g(t) = g_{1} (t) + i g_{2} (t), \quad g_{1} (t), g_{2} (t)\in C\bigl([0,T], \mathbb{R}\bigr). \end{aligned} \end{aligned}$$
(9)

By substitution of (9) into (7), the coupled real differential equations are obtained as follows:

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial ^{\alpha } u (x,t)}{\partial t^{\alpha }} + \frac{\partial ^{2} v(x,t)}{\partial x^{2}} + r ( u^{2} (x,t) + v^{2} (x,t) ) v(x,t) -\delta g(x,t)=0, \\ \frac{\partial ^{\alpha } v (x,t)}{\partial t^{\alpha }} - \frac{\partial ^{2} u(x,t)}{\partial x^{2}} - r ( u^{2} (x,t) + v^{2} (x,t) ) u(x,t) +\delta f(x,t)=0. \end{cases}\displaystyle \end{aligned}$$
(10)

For any positive integer m, the Caputo fractional derivatives of order α read as follows:

$$\begin{aligned} \frac{\partial ^{\alpha } u (x,t)}{\partial t^{\alpha }}= \textstyle\begin{cases} \frac{1}{\varGamma (m-\alpha )} \int _{0}^{t} (t-s )^{m-\alpha -1} \frac{\partial ^{m} u (x,s)}{\partial s^{m}} \,ds, & m-1 < \alpha < m, \\ \frac{\partial ^{m} u (x,t)}{\partial t^{m}}, & \alpha =m \in \mathbb{N}. \end{cases}\displaystyle \end{aligned}$$

If \(h=\frac{1}{N}\) is the step size on the x-axis and \(\Delta t=\frac{T}{M}\) is the step size on the t-axis, then, for any grid point \((x_{i}, t_{k})\), we have

$$\begin{aligned} x_{i}= i h, \quad i=0,1,\ldots,N,\qquad t_{k}= k \Delta t,\quad k=0,1, \ldots,M. \end{aligned}$$

By using the discretization of the time-fractional derivative term in [26], we have

$$\begin{aligned} D^{\alpha }_{t} u (x_{i}, t_{k+1}) \approx \frac{(\Delta t)^{-\alpha }}{\varGamma (2-\alpha )} \Biggl[ u_{i}^{k+1} - u_{i}^{k} + \sum_{j=1}^{k} a_{j}^{\alpha } \bigl(u_{i}^{k+1-j} - u_{i}^{k-j} \bigr) \Biggr], \end{aligned}$$
(11)

where \(u_{i}^{k}\) is the numerical approximation of \(u(x_{i}, t_{k})\) and

$$\begin{aligned} a_{j}^{\alpha } = (j+1)^{1-\alpha } - (j)^{1-\alpha }. \end{aligned}$$
(12)
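To make the discretization (11)–(12) concrete, the following minimal Python sketch (our own illustration, not code from the paper) computes the weights \(a_{j}^{\alpha }\) and evaluates the right-hand side of (11) from a stored history of time levels, including the newest one:

```python
import math

def l1_weights(alpha: float, k: int):
    """Weights a_j^alpha = (j+1)^(1-alpha) - j^(1-alpha) from Eq. (12)."""
    return [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(k + 1)]

def caputo_l1(u_hist, alpha: float, dt: float):
    """Approximate D_t^alpha u at t_{k+1} from u_hist = [u^0, ..., u^{k+1}], Eq. (11)."""
    k = len(u_hist) - 2
    a = l1_weights(alpha, k)
    s = u_hist[k + 1] - u_hist[k]                    # j = 0 term, since a_0^alpha = 1
    for j in range(1, k + 1):
        s += a[j] * (u_hist[k + 1 - j] - u_hist[k - j])
    return s * dt ** (-alpha) / math.gamma(2 - alpha)
```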

3 Implementation of the cubic B-spline functions

In this section, we introduce a set of nodes over \([0,1]\),

$$ 0=x_{0} < x_{1} < \cdots < x_{N}=1, $$

where \(h=x_{i+1} - x_{i}, i=0,1,\ldots, N-1 \), is the step length. We extend this set of nodes as

$$ x_{-3} < x_{-2} < x_{-1} < x_{0}\quad \text{and}\quad x_{N} < x_{N+1} < x_{N+2} < x_{N+3}. $$

Definition 3.1

Let

$$\begin{aligned} &A_{1}(x)=h^{3}+3h^{2}(x-x_{i-1})+3h(x-x_{i-1})^{2}-3(x-x_{i-1})^{3}, \\ &A_{2}(x)=h^{3}+3h^{2}(x_{i+1}-x)+3h(x_{i+1}-x)^{2}-3(x_{i+1}-x)^{3}. \end{aligned}$$

Then the cubic B-spline is defined as

$$\begin{aligned} B_{i}(x) = \frac{1}{h^{3}} \textstyle\begin{cases} (x-x_{i-2})^{3}, &x \in [x_{i-2},x_{i-1}], \\ A_{1}(x),&x \in [x_{i-1},x_{i}], \\ A_{2}(x),&x \in [x_{i},x_{i+1}], \\ (x_{i+2}-x)^{3}, &x \in [x_{i+1},x_{i+2}], \\ 0, &\text{otherwise} \end{cases}\displaystyle \end{aligned}$$
(13)

for \(i=-1, 0, \ldots, N+1\).

Also, by using Definition 3.1, the values and derivatives of \(B_{i} (x)\) at the nodes \(x_{i}\) are given by

$$\begin{aligned} \begin{aligned} &B_{m}(x_{i}) = \textstyle\begin{cases} 4 & \text{if } m=i, \\ 1 & \text{if } \vert m-i \vert =1, \\ 0 & \text{if } \vert m-i \vert \geq 2, \end{cases}\displaystyle \qquad B'_{m}(x_{i}) = \textstyle\begin{cases} 0 & \text{if } m=i, \\ -\frac{3}{h} & \text{if } m=i-1, \\ \frac{3}{h} & \text{if } m=i+1, \\ 0 & \text{if } \vert m-i \vert \geq 2, \end{cases}\displaystyle \\ & B''_{m}(x_{i}) = \textstyle\begin{cases} -\frac{12}{h^{2}} & \text{if } m=i, \\ \frac{6}{h^{2}} & \text{if } \vert m-i \vert =1, \\ 0 & \text{if } \vert m-i \vert \geq 2. \end{cases}\displaystyle \end{aligned} \end{aligned}$$
(14)
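As a quick check of Definition 3.1 and the node values in (14), the following Python sketch (our own illustration, assuming a uniform grid; not the authors' code) evaluates \(B_{i}(x)\) piecewise and reproduces the values 4, 1, 1 at the center node and its neighbours:

```python
def cubic_bspline(x: float, xi: float, h: float) -> float:
    """B_i(x) from Eq. (13), centered at node xi with uniform spacing h."""
    if xi - 2 * h <= x <= xi - h:
        return (x - (xi - 2 * h)) ** 3 / h ** 3
    if xi - h <= x <= xi:
        s = x - (xi - h)
        return (h ** 3 + 3 * h ** 2 * s + 3 * h * s ** 2 - 3 * s ** 3) / h ** 3  # A_1(x)
    if xi <= x <= xi + h:
        s = (xi + h) - x
        return (h ** 3 + 3 * h ** 2 * s + 3 * h * s ** 2 - 3 * s ** 3) / h ** 3  # A_2(x)
    if xi + h <= x <= xi + 2 * h:
        return ((xi + 2 * h) - x) ** 3 / h ** 3
    return 0.0

h = 0.1
# Values at the center node and at x_{i-1}, x_{i+1} should be 4, 1, 1 as in (14).
print(cubic_bspline(0.5, 0.5, h), cubic_bspline(0.5 - h, 0.5, h), cubic_bspline(0.5 + h, 0.5, h))
```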

If \(c_{i}\) and \(d_{i}\) are unknown time-dependent coefficients to be determined, then the approximate solutions take the form

$$\begin{aligned} U_{n}(x,t) = \sum_{i=-1}^{N+1} c^{n}_{i} (t) B_{i} (x),\qquad V_{n}(x,t) = \sum_{i=-1}^{N+1} d^{n}_{i} (t) B_{i} (x). \end{aligned}$$
(15)

By discretizing the time-fractional derivative in Eq. (10) with the finite-difference formula (11), we have

$$\begin{aligned} \textstyle\begin{cases} \frac{(\Delta t)^{-\alpha }}{\varGamma (2-\alpha )} ( u^{n+1} - u^{n} + \sum_{j=1}^{n} a_{j}^{\alpha } (u^{n+1-j} - u^{n-j} ) ) \\ \quad{} + (v_{xx} )^{n} + r v^{n} (u^{2}+v^{2} )^{n} - \delta g(x, t_{n+1})= 0, \\ \frac{(\Delta t)^{-\alpha }}{\varGamma (2-\alpha )} ( v^{n+1} - v^{n} + \sum_{j=1}^{n} a_{j}^{\alpha } (v^{n+1-j} - v^{n-j} ) ) \\ \quad{} - (u_{xx} )^{n} - r u^{n} (u^{2}+v^{2} )^{n} + \delta f(x, t_{n+1})= 0, \end{cases}\displaystyle \end{aligned}$$
(16)

where Δt is the time step. By applying (15) and (16) at the point \(x = x_{m}\) and using (14), we have

$$\begin{aligned} &\bigl(c_{m-1}^{n+1}+ 4 c_{m}^{n+1} + c_{m+1}^{n+1} \bigr) + \gamma _{ \alpha } \frac{6}{h^{2}} \bigl(d_{m-1}^{n}-2 d_{m}^{n} + d_{m+1}^{n} \bigr) \\ &\qquad{}+\gamma _{\alpha } r \bigl(d_{m-1}^{n}+ 4 d_{m}^{n} + d_{m+1}^{n} \bigr) \bigl( \bigl(c_{m-1}^{n}+ 4 c_{m}^{n} + c_{m+1}^{n} \bigr)^{2} \\ &\qquad{}+ \bigl(d_{m-1}^{n}+ 4 d_{m}^{n} + d_{m+1}^{n} \bigr)^{2} \bigr)\\ &\quad =\gamma _{\alpha } \delta g(x_{m}, t_{n+1})+ \bigl(1-a_{1}^{\alpha }\bigr) \bigl(c_{m-1}^{n}+ 4 c_{m}^{n} + c_{m+1}^{n}\bigr) \\ &\qquad{}+ \sum_{j=1}^{n-1} \bigl(a_{j}^{\alpha } - a_{j+1}^{\alpha } \bigr) \bigl( c_{m-1}^{n-j}+ 4 c_{m}^{n-j} + c_{m+1}^{n-j} \bigr) + a_{n}^{\alpha } u_{m}^{0}, \\ &\bigl(d_{m-1}^{n+1}+ 4 d_{m}^{n+1} + d_{m+1}^{n+1} \bigr) - \gamma _{ \alpha } \frac{6}{h^{2}} \bigl(c_{m-1}^{n}-2 c_{m}^{n} + c_{m+1}^{n} \bigr) \\ &\qquad{}- \gamma _{\alpha } r \bigl(c_{m-1}^{n}+ 4 c_{m}^{n} + c_{m+1}^{n} \bigr) \bigl( \bigl(c_{m-1}^{n}+ 4 c_{m}^{n} + c_{m+1}^{n} \bigr)^{2} \\ &\qquad{}+ \bigl(d_{m-1}^{n}+ 4 d_{m}^{n} + d_{m+1}^{n} \bigr)^{2} \bigr)\\ &\quad =-\gamma _{\alpha } \delta f(x_{m}, t_{n+1}) + \bigl(1-a_{1}^{\alpha }\bigr) \bigl(d_{m-1}^{n}+ 4 d_{m}^{n} + d_{m+1}^{n}\bigr) \\ &\qquad{}+ \sum_{j=1}^{n-1} \bigl(a_{j}^{ \alpha } - a_{j+1}^{\alpha } \bigr) \bigl( d_{m-1}^{n-j}+ 4 d_{m}^{n-j} + d_{m+1}^{n-j} \bigr) + a_{n}^{\alpha } v_{m}^{0}, \end{aligned}$$

where \(\gamma _{\alpha }=(\Delta t)^{\alpha } \varGamma (2-\alpha )\). For simplicity, we set

$$\begin{aligned} \mathbf{X}_{m}^{n} ={}&{-}\gamma _{\alpha } \frac{6}{h^{2}} \bigl(d_{m-1}^{n}-2 d_{m}^{n} + d_{m+1}^{n} \bigr) + \gamma _{\alpha } \delta g(x_{m}, t_{n+1}) \\ &{}+\bigl(1-a_{1}^{\alpha }\bigr) \bigl(c_{m-1}^{n}+ 4 c_{m}^{n} + c_{m+1}^{n} \bigr) \\ &{}+ \sum_{j=1}^{n-1} \bigl(a_{j}^{\alpha } - a_{j+1}^{\alpha } \bigr) \bigl( c_{m-1}^{n-j}+ 4 c_{m}^{n-j} + c_{m+1}^{n-j} \bigr) + a_{n}^{ \alpha } u_{m}^{0} \\ &{} - \gamma _{\alpha } r \bigl(d_{m-1}^{n}+ 4 d_{m}^{n} + d_{m+1}^{n} \bigr) \\ &{}\times \bigl( \bigl(c_{m-1}^{n}+ 4 c_{m}^{n} + c_{m+1}^{n} \bigr)^{2} + \bigl(d_{m-1}^{n}+ 4 d_{m}^{n} + d_{m+1}^{n} \bigr)^{2} \bigr) \end{aligned}$$

and

$$\begin{aligned} \mathbf{R}_{m}^{n}= {}& \gamma _{\alpha } \frac{6}{h^{2}} \bigl(c_{m-1}^{n}-2 c_{m}^{n} + c_{m+1}^{n} \bigr) -\gamma _{\alpha } \delta f(x_{m}, t_{n+1}) \\ &{}+ \bigl(1-a_{1}^{\alpha }\bigr) \bigl(d_{m-1}^{n}+ 4 d_{m}^{n} + d_{m+1}^{n}\bigr) \\ &{} + \sum_{j=1}^{n-1} \bigl(a_{j}^{\alpha } - a_{j+1}^{\alpha } \bigr) \bigl( d_{m-1}^{n-j}+ 4 d_{m}^{n-j} + d_{m+1}^{n-j} \bigr) + a_{n}^{\alpha } v_{m}^{0} \\ & {}+ \gamma _{\alpha } r \bigl(c_{m-1}^{n}+ 4 c_{m}^{n} + c_{m+1}^{n} \bigr) \\ &{}\times \bigl( \bigl(c_{m-1}^{n}+ 4 c_{m}^{n} + c_{m+1}^{n} \bigr)^{2} + \bigl(d_{m-1}^{n}+ 4 d_{m}^{n} + d_{m+1}^{n} \bigr)^{2} \bigr). \end{aligned}$$

Then we have

$$\begin{aligned} \bigl(c_{m-1}^{n+1}+ 4 c_{m}^{n+1} + c_{m+1}^{n+1} \bigr) = \mathbf{X}_{m}^{n},\qquad \bigl(d_{m-1}^{n+1}+ 4 d_{m}^{n+1} + d_{m+1}^{n+1} \bigr) = \mathbf{R}_{m}^{n}. \end{aligned}$$
(17)

System (17) consists of \(2(N + 1)\) equations in \(2(N + 3)\) unknown coefficients. In addition, by imposing the overspecified and boundary conditions in (8), we obtain the following additional equations:

$$\begin{aligned} &u\bigl(x^{\ast },t_{n+1}\bigr)=U_{n+1} (x_{s})= c_{s-1}^{n+1}+ 4 c_{s}^{n+1} + c_{s+1}^{n+1}=g_{1}(t_{n+1}), \\ &v\bigl(x^{\ast },t_{n+1}\bigr)=V_{n+1} (x_{s})= d_{s-1}^{n+1}+ 4 d_{s}^{n+1} + d_{s+1}^{n+1}=g_{2}(t_{n+1}), \\ &u(1,t_{n+1})=U_{n+1} (x_{N})= c_{N-1}^{n+1}+ 4 c_{N}^{n+1} + c_{N+1}^{n+1}=q_{1}(t_{n+1}), \\ &v(1,t_{n+1})=V_{n+1} (x_{N})= d_{N-1}^{n+1}+ 4 d_{N}^{n+1} + d_{N+1}^{n+1}=q_{2}(t_{n+1}), \end{aligned}$$

where \(x_{s}=x^{*}, 1\leq s \leq N-1\). This system can be rewritten as

$$\begin{aligned} A X=B, \end{aligned}$$
(18)

where

$$\begin{aligned} A_{2(N+3)\times 2(N+3)}= \begin{pmatrix} \mathbf{M}_{1} & \vert & O \\ - & - & - \\ O & \vert & \mathbf{M}_{1} \end{pmatrix}. \end{aligned}$$
(19)

The matrices \(\mathbf{M}_{1}\) and O have the same size \((N + 3)\times (N + 3)\), where O is the zero matrix and

$$ \mathbf{M}_{1} = \begin{pmatrix} 0 & \ldots & 0 & 1 & 4 & 1 & 0 & \ldots & 0 \\ 1 & 4 & 1 \\ 0 & 1 & 4 & 1 \\ & & \ldots & \ldots & \ldots \\ & & & \ldots & \ldots & \ldots \\ & & & & \ldots & \ldots & \ldots \\ & & & & &\ldots & \ldots & \ldots \\ \vdots & & & & & &1 & 4 & 1 \\ 0 & \ldots & & & & &1 & 4 & 1 \end{pmatrix} $$

and

$$\begin{aligned} &X= \bigl[c_{-1}^{n+1}, c_{0}^{n+1}, c_{1}^{n+1}, \ldots,c_{N+1}^{n+1}, d_{-1}^{n+1},d_{0}^{n+1}, d_{1}^{n+1},\ldots, d_{N+1}^{n+1} \bigr]^{T}, \\ &B= \bigl[\mathbf{X}_{-1}^{n}, \mathbf{X}_{0}^{n}, \mathbf{X}_{1}^{n}, \ldots, \mathbf{X}_{N+1}^{n}, \mathbf{R}_{-1}^{n},\mathbf{R}_{0}^{n}, \mathbf{R}_{1}^{n},\ldots, \mathbf{R}_{N+1}^{n} \bigr]^{T}, \end{aligned}$$

where

$$\begin{aligned} &\mathbf{X}_{-1}^{n}=g_{1} (t_{n+1}),\qquad \mathbf{X}_{N+1}^{n}=q_{1}(t_{n+1}), \\ &\mathbf{R}_{-1}^{n}=g_{2}(t_{n+1}),\qquad \mathbf{R}_{N+1}^{n}=q_{2}(t_{n+1}). \end{aligned}$$

Also,

$$\begin{aligned} \mathbf{M}_{1}[1,s+1]=1,\qquad \mathbf{M}_{1}[1,s+2]=4,\qquad \mathbf{M}_{1}[1,s+3]=1. \end{aligned}$$
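For clarity, here is a small Python sketch (the function names and the 0-based indexing are our own assumptions) of how \(\mathbf{M}_{1}\) and the block matrix A of (19) can be assembled from the 1–4–1 stencil, with the overspecified-condition row at \(x_{s}=x^{\ast }\) and the boundary row at \(x=1\):

```python
import numpy as np

def build_M1(N: int, s: int) -> np.ndarray:
    """(N+3) x (N+3) block: condition row at x*, collocation rows, boundary row."""
    M1 = np.zeros((N + 3, N + 3))
    M1[0, s:s + 3] = [1.0, 4.0, 1.0]          # overspecified condition at x* = x_s
    for m in range(N + 1):                     # collocation rows for x_0, ..., x_N
        M1[m + 1, m:m + 3] = [1.0, 4.0, 1.0]   # coefficients c_{m-1}, c_m, c_{m+1}
    M1[N + 2, N:N + 3] = [1.0, 4.0, 1.0]       # boundary condition at x = 1
    return M1

def build_A(N: int, s: int) -> np.ndarray:
    """Block-diagonal matrix A of Eq. (19)."""
    M1 = build_M1(N, s)
    O = np.zeros_like(M1)
    return np.block([[M1, O], [O, M1]])
```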

By solving (18), the coefficients \(c_{j}\) and \(d_{j}\) are obtained.

Hence, for \(j=0, 1, \ldots, N\), we have

$$\begin{aligned} U(x_{j}, t_{n+1}) =& c_{j-1}^{(n+1)}+4c_{j}^{(n+1)}+c_{j+1}^{(n+1)}, \\ V(x_{j}, t_{n+1}) =& d_{j-1}^{(n+1)}+4d_{j}^{(n+1)}+d_{j+1}^{(n+1)}. \end{aligned}$$

By using the initial and boundary conditions in (8), the initial vector \(X^{0}\) can be obtained from

$$\begin{aligned} &U(x_{s},0) = c_{s-1}^{(0)} + 4 c_{s}^{(0)} + c_{s+1}^{(0)} = g_{1} (0), \\ &V(x_{s},0) = d_{s-1}^{(0)} + 4 d_{s}^{(0)} + d_{s+1}^{(0)} = g_{2} (0), \\ &U(x_{j},0) = c_{j-1}^{(0)} + 4 c_{j}^{(0)} + c_{j+1}^{(0)} = f_{1} (x_{j}), \quad 0 \leq j \leq N, \\ &V(x_{j},0) = d_{j-1}^{(0)} + 4 d_{j}^{(0)} + d_{j+1}^{(0)} = f_{2} (x_{j}),\quad 0 \leq j \leq N, \\ &U(x_{N},0) = c_{N-1}^{(0)} + 4 c_{N}^{(0)} + c_{N+1}^{(0)} = q_{1} (0), \\ &V(x_{N},0) = d_{N-1}^{(0)} + 4 d_{N}^{(0)} + d_{N+1}^{(0)} = q_{2} (0), \end{aligned}$$

or

$$\begin{aligned} A X^{0} = B^{\ast }, \end{aligned}$$
(20)

where

$$\begin{aligned} &X^{0}= \bigl[c_{-1}^{(0)}, c_{0}^{(0)}, c_{1}^{(0)}, \ldots,c_{N+1}^{(0)}, d_{-1}^{(0)},d_{0}^{(0)}, d_{1}^{(0)},\ldots, d_{N+1}^{(0)} \bigr]^{T}, \\ &B^{\ast }= \bigl[g_{1} (0), f_{1} (x_{0}), \ldots, f_{1} (x_{N}),q_{1}(0),g_{2}(0),f_{2} (x_{0}),\ldots, f_{2} (x_{N}),q_{2}(0) \bigr]^{T}. \end{aligned}$$

Since the matrix A is singular and the problem is ill-posed, the estimate of \(X^{0}\) obtained from (20) will be unstable. In this work, we adapt the Tikhonov regularization method (TRM) to solve matrix equations (18) and (20); the corresponding functionals are given by

$$\begin{aligned} &F_{\sigma } (X) = \Vert A X - B \Vert _{2}^{2} + \sigma \bigl\Vert R^{(z)} X \bigr\Vert _{2}^{2} , \\ &F_{\sigma } \bigl(X^{0}\bigr) = \bigl\Vert A X^{0} - B^{\ast } \bigr\Vert _{2}^{2} + \sigma \bigl\Vert R^{(z)} X^{0} \bigr\Vert _{2}^{2}. \end{aligned}$$

By using the first- and second-order TRM, the matrices \(R^{(1)}\) and \(R^{(2)}\) are given by [27]

$$\begin{aligned} &R^{(1)} = \begin{pmatrix} -1 &1 & 0 & \ldots & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \ldots & 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & \ldots & 0 & -1 & 1 \end{pmatrix} \in \mathbb{R}^{(M-1)\times (M)}, \\ &R^{(2)} = \begin{pmatrix} 1 & -2 & 1 & \ldots & 0 & 0 & 0 \\ 0 & 1 & -2 & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \ldots & 1 & -2 & 1 & 0 \\ 0 & 0 & 0 & \ldots & 1 & -2 & 1 \end{pmatrix} \in \mathbb{R}^{(M-2)\times (M)}, \end{aligned}$$

where \(M=2(N+3)\). Therefore for equations (18) and (20), by using TRM, we have

$$\begin{aligned} & X_{\sigma } = \bigl[A^{T} A + \sigma \bigl(R^{(z)}\bigr)^{T} R^{(z)}\bigr]^{-1} A^{T} B, \end{aligned}$$
(21)
$$\begin{aligned} &X_{\sigma }^{0} = \bigl[A^{T} A + \sigma \bigl(R^{(z)}\bigr)^{T} R^{(z)}\bigr]^{-1} A^{T} B^{\ast }. \end{aligned}$$
(22)

To determine a suitable value of the regularization parameter σ, we use the generalized cross-validation (GCV) scheme (see [28–30]).
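As an illustration, the following hedged Python sketch implements the regularized solves (21)–(22) with the second-order matrix \(R^{(2)}\) and a simple grid-search GCV criterion; the GCV functional below is the standard one from the cited literature, not a formula stated in this paper:

```python
import numpy as np

def reg_matrix_2(M: int) -> np.ndarray:
    """R^(2) of size (M-2) x M with rows (..., 1, -2, 1, ...)."""
    R = np.zeros((M - 2, M))
    for i in range(M - 2):
        R[i, i:i + 3] = [1.0, -2.0, 1.0]
    return R

def tikhonov_solve(A, B, sigma, R):
    """X_sigma = (A^T A + sigma R^T R)^{-1} A^T B, as in Eq. (21)."""
    return np.linalg.solve(A.T @ A + sigma * (R.T @ R), A.T @ B)

def gcv_sigma(A, B, R, sigmas):
    """Pick sigma minimizing ||A X_sigma - B||^2 / trace(I - A H_sigma)^2."""
    best, best_val = None, np.inf
    for s in sigmas:
        H = np.linalg.solve(A.T @ A + s * (R.T @ R), A.T)   # X_sigma = H @ B
        resid = A @ (H @ B) - B
        val = (resid @ resid) / (len(B) - np.trace(A @ H)) ** 2
        if val < best_val:
            best, best_val = s, val
    return best
```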

4 Convergence analysis

In this section, we discuss the convergence of the proposed scheme for problem (7). For this purpose, we introduce a partition Δ of the interval \([a, b]\) as

$$ \Delta = \{a=x_{0} < x_{1} < \cdots < x_{n}=b \}. $$

Also, let \(S_{\Delta }\) denote the spline that interpolates the values of a function \(u\in C^{4} [a, b]\) at the nodes of Δ. First we begin with a useful theorem proved by Stoer and Bulirsch [31].

Theorem 4.1

Suppose that \(u \in C^{4} [a, b]\) and that \(|u^{(4)} (x)| \leq L\) for \(x \in [a, b]\). Then there exist constants \(\lambda _{j} \leq 2 \), which do not depend on the partition Δ, such that for \(x \in [a, b]\) it follows that

$$\begin{aligned} \bigl\Vert u^{(j)} (x) - S_{\Delta }^{(j)} (x) \bigr\Vert _{\infty } \leq \lambda _{j} L h^{4-j},\quad j=0,1,2,3. \end{aligned}$$
(23)

Theorem 4.2

The collocation approximations \(U_{n}(x)\) and \(V_{n}(x)\) of the solutions \(u_{n}(x)\) and \(v_{n}(x)\) of problem (7) satisfy, for some constant \(\mu >0\), the following error estimate:

$$\begin{aligned} \bigl\Vert (u_{n}-U_{n}, v_{n}-V_{n}) \bigr\Vert _{\infty } \leq \mu h^{2}. \end{aligned}$$
(24)

Proof

Let \(\hat{U}_{n} (x,t)\) and \(\hat{V}_{n} (x,t)\) be the computed splines for \(U_{n}(x,t)\) and \(V_{n} (x,t)\) defined in (15), namely

$$ \hat{U}_{n} (x,t) = \sum_{j=-1}^{N+1} \hat{c}^{n}_{j} (t) B_{j} (x) $$

and

$$ \hat{V}_{n} (x,t) = \sum_{j=-1}^{N+1} \hat{d}^{n}_{j} (t) B_{j} (x). $$

Following (18), for \(\hat{U}_{n}\) and \(\hat{V}_{n}\), we have

$$\begin{aligned} A \hat{X} = \hat{B}, \end{aligned}$$
(25)

where

$$\begin{aligned} &\hat{X}= \bigl[\hat{c}_{-1}^{n+1}, \hat{c}_{0}^{n+1}, \hat{c}_{1}^{n+1}, \ldots,\hat{c}_{N+1}^{n+1}, \hat{d}_{-1}^{n+1},\hat{d}_{0}^{n+1}, \hat{d}_{1}^{n+1},\ldots, \hat{d}_{N+1}^{n+1} \bigr]^{T}, \\ &\hat{B}= \bigl[\mathbf{X}_{-1}^{n}, \hat{ \mathbf{X}}_{0}^{n}, \hat{\mathbf{X}}_{1}^{n}, \ldots,\hat{\mathbf{X}}_{N}^{n}, \mathbf{X}_{N+1}^{n}, \mathbf{R}_{-1}^{n},\hat{\mathbf{R}}_{0}^{n}, \hat{\mathbf{R}}_{1}^{n}, \ldots, \hat{ \mathbf{R}}_{N}^{n}, \mathbf{R}_{N+1}^{n} \bigr]^{T}, \end{aligned}$$

and

$$ \mathbf{X}_{-1}^{n}=g_{1} (t_{n}),\qquad \mathbf{X}_{N+1}^{n}=q_{1}(t_{n}),\qquad \mathbf{R}_{-1}^{n}=g_{2}(t_{n}),\qquad \mathbf{R}_{N+1}^{n}=q_{2}(t_{n}). $$

By subtracting (18) and (25), we have

$$\begin{aligned} A (X-\hat{X} ) = (B - \hat{B} ), \end{aligned}$$
(26)

where

$$\begin{aligned} B - \hat{B}= {}&\bigl[0, \mathbf{X}_{0}^{n}- \hat{\mathbf{X}}_{0}^{n}, \\ &\mathbf{X}_{1}^{n} - \hat{\mathbf{X}}_{1}^{n}, \ldots,\mathbf{X}_{N}^{n}- \hat{\mathbf{X}}_{N}^{n}, 0, 0, \mathbf{R}_{0}^{n}-\hat{\mathbf{R}}_{0}^{n}, \mathbf{R}_{1}^{n} - \hat{\mathbf{R}}_{1}^{n}, \ldots, \mathbf{R}_{N}^{n}- \hat{\mathbf{R}}_{N}^{n}, 0 \bigr], \end{aligned}$$
(27)

and for every \(0 \leq m \leq N\), we have

$$\begin{aligned} \mathbf{X}_{m}^{n} = {}& \Biggl( \gamma _{\alpha } \bigl( - V''_{n} (x_{m}) - r V_{n}(x_{m}) \bigl( U_{n}^{2} (x_{m}) + V_{n}^{2} (x_{m}) \bigr) + \delta g(x_{m}, t_{n+1}) \bigr) \\ & {}+ \bigl(1-a_{1}^{\alpha }\bigr) U_{n} (x_{m}) + a_{n}^{\alpha } U_{0} (x_{m}) + \sum_{j=1}^{n-1} \bigl(a_{j}^{\alpha } - a_{j+1}^{\alpha } \bigr) U_{n-j} (x_{m}) \Biggr), \\ \hat{\mathbf{X}}_{m}^{n} ={}& \Biggl( \gamma _{\alpha } \bigl( - \hat{V}''_{n} (x_{m}) - r \hat{V}_{n}(x_{m}) \bigl( \hat{U}_{n}^{2} (x_{m}) + \hat{V}_{n}^{2} (x_{m}) \bigr) + \delta g(x_{m}, t_{n+1}) \bigr) \\ &{} + \bigl(1-a_{1}^{\alpha }\bigr) \hat{U}_{n} (x_{m}) + a_{n}^{\alpha } \hat{U}_{0} (x_{m}) + \sum_{j=1}^{n-1} \bigl(a_{j}^{\alpha } - a_{j+1}^{ \alpha } \bigr) \hat{U}_{n-j} (x_{m}) \Biggr), \\ \mathbf{R}_{m}^{n} ={}& \Biggl( \gamma _{\alpha } \bigl( U''_{n} (x_{m}) + r U_{n}(x_{m}) \bigl( U_{n}^{2} (x_{m}) + V_{n}^{2} (x_{m}) \bigr) - \delta f(x_{m}, t_{n+1}) \bigr) \\ &{} + \bigl(1-a_{1}^{\alpha }\bigr) V_{n} (x_{m}) + a_{n}^{\alpha } V_{0} (x_{m}) + \sum_{j=1}^{n-1} \bigl(a_{j}^{\alpha } - a_{j+1}^{\alpha } \bigr) V_{n-j} (x_{m}) \Biggr), \\ \hat{\mathbf{R}}_{m}^{n} ={}& \Biggl( \gamma _{\alpha } \bigl( \hat{U}''_{n} (x_{m}) + r \hat{U}_{n}(x_{m}) \bigl( \hat{U}_{n}^{2} (x_{m}) + \hat{V}_{n}^{2} (x_{m}) \bigr) - \delta f(x_{m}, t_{n+1}) \bigr) \\ & {}+ \bigl(1-a_{1}^{\alpha }\bigr) \hat{V}_{n} (x_{m}) + a_{n}^{\alpha } \hat{V}_{0} (x_{m}) + \sum_{j=1}^{n-1} \bigl(a_{j}^{\alpha } - a_{j+1}^{ \alpha } \bigr) \hat{V}_{n-j} (x_{m}) \Biggr). \end{aligned}$$

Therefore

$$\begin{aligned} \bigl\vert \mathbf{X}_{m}^{n} - \hat{ \mathbf{X}}_{m}^{n} \bigr\vert ={}& \Biggl\vert \gamma _{\alpha } \bigl( - \bigl( V''_{n} (x_{m}) - \hat{V}''_{n} (x_{m}) \bigr) \\ &{} - r V_{n}(x_{m}) \bigl( U_{n}^{2} (x_{m}) + V_{n}^{2} (x_{m}) \bigr) + r \hat{V}_{n}(x_{m}) \bigl( \hat{U}_{n}^{2} (x_{m}) + \hat{V}_{n}^{2} (x_{m}) \bigr) \bigr) \\ &{} + \bigl(1-a_{1}^{\alpha }\bigr) \bigl( U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr) + a_{n}^{\alpha } \bigl( U_{0} (x_{m})- \hat{U}_{0} (x_{m}) \bigr) \\ &{} + \sum_{j=1}^{n-1} \bigl(a_{j}^{\alpha } - a_{j+1}^{\alpha } \bigr) \bigl( U_{n-j} (x_{m}) - \hat{U}_{n-j} (x_{m}) \bigr) \Biggr\vert . \end{aligned}$$

By using the triangle inequality, we have

$$\begin{aligned} &\bigl\vert \mathbf{X}_{m}^{n} - \hat{ \mathbf{X}}_{m}^{n} \bigr\vert \\ &\quad \leq \gamma _{\alpha } \bigl\vert V''_{n} (x_{m}) - \hat{V}''_{n} (x_{m}) \bigr\vert + \bigl\vert \bigl(1-a_{1}^{\alpha }\bigr) \bigr\vert \bigl\vert U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr\vert \\ &\qquad{}+ \bigl\vert a_{n}^{\alpha } \bigr\vert \bigl\vert U_{0} (x_{m}) - \hat{U}_{0} (x_{m}) \bigr\vert \\ &\qquad{}+ \sum_{j=1}^{n-1} \bigl\vert \bigl(a_{j}^{\alpha } - a_{j+1}^{\alpha } \bigr) \bigr\vert \bigl\vert U_{n-j} (x_{m}) - \hat{U}_{n-j} (x_{m}) \bigr\vert \\ &\qquad{}+ \gamma _{\alpha } r \underbrace{ \bigl\vert V_{n}(x_{m}) \bigl( U_{n}^{2} (x_{m}) + V_{n}^{2} (x_{m}) \bigr) - \hat{V}_{n}(x_{m}) \bigl( \hat{U}_{n}^{2} (x_{m}) + \hat{V}_{n}^{2} (x_{m}) \bigr) \bigr\vert }_{(\mathrm{I})}, \end{aligned}$$

where

$$\begin{aligned} (\mathrm{I}) ={}& \bigl\vert V^{3}_{n} (x_{m}) - \hat{V}_{n}^{3} (x_{m}) + V_{n}(x_{m}) U_{n}^{2} (x_{m}) - \hat{V}_{n}(x_{m}) \hat{U}_{n}^{2} (x_{m}) \bigr\vert \\ ={}& \bigl\vert \bigl( V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr)^{3} - V_{n} (x_{m}) \bigl(V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr)^{2} \\ &{}+ V_{n}^{2} (x_{m}) \bigl(V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr) + U_{n}^{2}(x_{m}) \bigl( V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr) \\ &{}+ U_{n} (x_{m}) V_{n} (x_{m}) \bigl( U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr) - V_{n} (x_{m}) \bigl( U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr)^{2} \\ &{}- U_{n}(x_{m}) \bigl( U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr) \bigl( V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr) \\ &{}+ \bigl( U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr)^{2} \bigl( V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr) \bigr\vert . \end{aligned}$$

Then, after simplification, we have

$$\begin{aligned} &\bigl\vert \mathbf{X}_{m}^{n} - \hat{ \mathbf{X}}_{m}^{n} \bigr\vert \\ &\quad\leq \gamma _{\alpha } \bigl\vert V''_{n} (x_{m}) - \hat{V}''_{n} (x_{m}) \bigr\vert + \bigl\vert \bigl(1-a_{1}^{\alpha } \bigr) \bigr\vert \bigl\vert U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr\vert \\ &\qquad{} + \bigl\vert a_{n}^{\alpha } \bigr\vert \bigl\vert U_{0} (x_{m}) - \hat{U}_{0} (x_{m}) \bigr\vert \\ &\qquad{} + \sum_{j=1}^{n-1} \bigl\vert \bigl(a_{j}^{\alpha } - a_{j+1}^{\alpha } \bigr) \bigr\vert \bigl\vert U_{n-j} (x_{m}) - \hat{U}_{n-j} (x_{m}) \bigr\vert \\ &\qquad{} + \gamma _{\alpha } r ( \bigl\vert \bigl( V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr)^{3} \bigr\vert \\ &\qquad{}+ \bigl\vert V_{n} (x_{m}) \bigr\vert \bigl\vert \bigl(V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr)^{2} \bigr\vert \\ &\qquad{} + \bigl\vert V_{n}^{2} (x_{m}) \bigr\vert \bigl\vert \bigl(V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr) \bigr\vert \\ &\qquad{} + \bigl\vert U_{n}^{2}(x_{m}) \bigr\vert \bigl\vert \bigl( V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr) \bigr\vert \\ &\qquad{} + \bigl\vert U_{n} (x_{m}) \bigr\vert \bigl\vert V_{n} (x_{m}) \bigr\vert \bigl\vert \bigl( U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr) \bigr\vert \\ &\qquad{} + \bigl\vert V_{n} (x_{m}) \bigr\vert \bigl\vert \bigl( U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr)^{2} \bigr\vert \\ &\qquad{} + \bigl\vert U_{n}(x_{m}) \bigr\vert \bigl\vert \bigl( U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr) \bigr\vert \bigl\vert \bigl( V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr) \bigr\vert \\ &\qquad{} + \bigl\vert \bigl( U_{n} (x_{m}) - \hat{U}_{n} (x_{m}) \bigr)^{2} \bigr\vert \bigl\vert \bigl( V_{n} (x_{m}) - \hat{V}_{n} (x_{m}) \bigr) \bigr\vert ). \end{aligned}$$

By using Theorem 4.1, we have

$$\begin{aligned} \bigl\vert \mathbf{X}_{m}^{n} - \hat{ \mathbf{X}}_{m}^{n} \bigr\vert \leq {}& \gamma _{\alpha } \Biggl( \lambda _{2} L_{v} h^{2} + \bigl(1-a_{1}^{ \alpha }\bigr) \lambda _{0} L_{u} h^{4} \\ &{}+ a_{0}^{\alpha } \lambda _{0} L_{u} h^{4} + \bigl(\lambda _{0} L_{u} h^{4}\bigr) \sum_{j=1}^{n-1} \bigl\vert \bigl(a_{j}^{\alpha } - a_{j+1}^{\alpha } \bigr) \bigr\vert \Biggr) \\ &{} + \gamma _{\alpha } r \bigl( \bigl(\lambda _{0} L_{v} h^{4}\bigr)^{3} + M_{v} \bigl( \lambda _{0} L_{v} h^{4}\bigr)^{2} + M_{v}^{2} \bigl(\lambda _{0} L_{v} h^{4}\bigr) \\ &{}+ M_{u}^{2} \bigl(\lambda _{0} L_{u} h^{4}\bigr) + M_{u} M_{v} \bigl( \lambda _{0} L_{u} h^{4}\bigr) + M_{v} \bigl( \lambda _{0} L_{u} h^{4}\bigr)^{2} \\ &{} + M_{u} \bigl(\lambda _{0} L_{u} h^{4}\bigr) \bigl(\lambda _{0} L_{v} h^{4}\bigr)+ \bigl(\lambda _{0} L_{u} h^{4}\bigr)^{2} \bigl(\lambda _{0} L_{v} h^{4}\bigr) \bigr). \end{aligned}$$

After simplification, we get

$$\begin{aligned} & \bigl\vert \mathbf{X}_{m}^{n} - \hat{ \mathbf{X}}_{m}^{n} \bigr\vert \\ &\quad\leq h^{2} \Biggl[ \gamma _{\alpha } \Biggl( \lambda _{2} L_{v} + \bigl(1-a_{1}^{ \alpha }\bigr) \lambda _{0} L_{u} h^{2} + a_{0}^{\alpha } \lambda _{0} L_{u} h^{2} \\ & \qquad{}+ \bigl(\lambda _{0} L_{u} h^{2}\bigr) \sum _{j=1}^{n-1} \bigl\vert \bigl(a_{j}^{ \alpha } - a_{j+1}^{\alpha } \bigr) \bigr\vert \Biggr) \\ & \qquad{}+\gamma _{\alpha } r \bigl( \bigl(\lambda _{0} L_{v} h^{ \frac{10}{3}}\bigr)^{3} + M_{v} \bigl(\lambda _{0} L_{v} h^{3}\bigr)^{2} + M_{v}^{2} \bigl(\lambda _{0} L_{v} h^{2}\bigr) + M_{u}^{2} \bigl(\lambda _{0} L_{u} h^{2}\bigr) \\ &\qquad{} + M_{u} M_{v} \bigl(\lambda _{0} L_{u} h^{2}\bigr) + M_{v} \bigl( \lambda _{0} L_{u} h^{3}\bigr)^{2} + M_{u} \bigl(\lambda _{0} L_{u} h^{4} \bigr) \bigl(\lambda _{0} L_{v} h^{2}\bigr) \\ &\qquad{} + \bigl(\lambda _{0} L_{u} h^{4} \bigr)^{2} \bigl(\lambda _{0} L_{v} h^{2}\bigr) \bigr) \Biggr]. \end{aligned}$$
(28)

So we can rewrite (28) as follows:

$$\begin{aligned} \bigl\vert \mathbf{X}_{m}^{n} - \hat{ \mathbf{X}}_{m}^{n} \bigr\vert \leq h^{2} M_{1}, \end{aligned}$$

where

$$\begin{aligned} M_{1}= {}&\Biggl[ \gamma _{\alpha } \Biggl( \lambda _{2} L_{v} + \bigl(1-a_{1}^{ \alpha }\bigr) \lambda _{0} L_{u} h^{2} + a_{0}^{\alpha } \lambda _{0} L_{u} h^{2} \\ & {}+ \bigl(\lambda _{0} L_{u} h^{2}\bigr) \sum _{j=1}^{n-1} \bigl\vert \bigl(a_{j}^{ \alpha } - a_{j+1}^{\alpha } \bigr) \bigr\vert \Biggr) \\ &{} +\gamma _{\alpha } r \bigl( \bigl(\lambda _{0} L_{v} h^{2}\bigr)^{3} + M_{v} \bigl(\lambda _{0} L_{v} h^{3}\bigr)^{2} + M_{v}^{2} \bigl(\lambda _{0} L_{v} h^{2}\bigr) + M_{u}^{2} \bigl(\lambda _{0} L_{u} h^{2}\bigr) \\ &{} + M_{u} M_{v} \bigl(\lambda _{0} L_{u} h^{2}\bigr) + M_{v} \bigl( \lambda _{0} L_{u} h^{3}\bigr)^{2} + M_{u} \bigl(\lambda _{0} L_{u} h^{2} \bigr) \bigl(\lambda _{0} L_{v} h^{2}\bigr) \\ & {}+ \bigl(\lambda _{0} L_{u} h^{3} \bigr)^{2} \bigl(\lambda _{0} L_{v} h^{2}\bigr) \bigr) \Biggr]. \end{aligned}$$

Similarly, we have

$$\begin{aligned} \bigl\vert \mathbf{R}_{m}^{n} - \hat{ \mathbf{R}}_{m}^{n} \bigr\vert \leq h^{2} M_{2}. \end{aligned}$$

If \(\mathbf{M} = \max \{M_{1}, M_{2}\}\), then

$$\begin{aligned} \bigl\vert \mathbf{X}_{m}^{n} - \hat{ \mathbf{X}}_{m}^{n} \bigr\vert \leq \mathbf{M} h^{2},\qquad \bigl\vert \mathbf{R}_{m}^{n} - \hat{ \mathbf{R}}_{m}^{n} \bigr\vert \leq \mathbf{M} h^{2}. \end{aligned}$$
(29)

From (27) and (29), we deduce

$$\begin{aligned} \Vert B - \hat{B} \Vert _{\infty } \leq \mathbf{M} h^{2}. \end{aligned}$$
(30)

By applying (26) and (21), we have

$$\begin{aligned} ( X-\hat{X} ) = \bigl[A^{T} A + \sigma \bigl(R^{(z)} \bigr)^{T} R^{(z)}\bigr]^{-1} A^{T} (B - \hat{B}). \end{aligned}$$

Using relation (30) and taking the infinity norm, we find

$$\begin{aligned} \Vert X-\hat{X} \Vert _{\infty } &\leq \bigl\Vert \bigl(A^{T} A + \sigma \bigl(R^{(z)}\bigr)^{T} R^{(z)}\bigr)^{-1} A^{T} \bigr\Vert _{\infty } \Vert B - \hat{B} \Vert _{\infty } \\ & \leq \bigl\Vert \bigl(A^{T} A + \sigma \bigl(R^{(z)} \bigr)^{T} R^{(z)}\bigr)^{-1} A^{T} \bigr\Vert _{ \infty } \mathbf{M} h^{2} \leq \mathbf{M}_{1} h^{2}, \end{aligned}$$
(31)

where

$$ \mathbf{M}_{1} = \bigl\Vert \bigl(A^{T} A + \sigma \bigl(R^{(z)}\bigr)^{T} R^{(z)}\bigr)^{-1} A^{T} \bigr\Vert _{\infty } \mathbf{M}. $$

Thus

$$\begin{aligned} \bigl\Vert (u_{n}-U_{n}, v_{n}-V_{n} ) \bigr\Vert _{\infty } ={}& \Vert u_{n}-U_{n} \Vert _{\infty } + \Vert v_{n}-V_{n} \Vert _{\infty } \\ \leq{}& \Vert u_{n}-\hat{U}_{n} \Vert _{\infty } + \Vert \hat{U}_{n} - U_{n} \Vert _{\infty } + \Vert v_{n}-\hat{V}_{n} \Vert _{\infty } \\ &{} + \Vert \hat{V}_{n}-V_{n} \Vert _{\infty }, \end{aligned}$$

such that

$$\begin{aligned} &U_{n}(x)-\hat{U}_{n} (x) = \sum _{i=-1}^{N+1} \bigl(c^{n}_{i} - \hat{c}^{n}_{i}\bigr) B_{i} (x), \\ &\bigl\vert U_{n}(x_{m})-\hat{U}_{n} (x_{m}) \bigr\vert \leq \max_{-1 \leq i \leq N+1} \bigl\{ \bigl\vert c^{n}_{i} - \hat{c}^{n}_{i} \bigr\vert \bigr\} \sum_{i=-1}^{N+1} \bigl\vert B_{i} (x_{m}) \bigr\vert , \quad 0 \leq m \leq N, \end{aligned}$$

and

$$\begin{aligned} &V_{n}(x)-\hat{V}_{n} (x) = \sum _{i=-1}^{N+1} \bigl(d^{n}_{i} - \hat{d}^{n}_{i}\bigr) B_{i} (x), \\ &\bigl\vert V_{n}(x_{m})-\hat{V}_{n} (x_{m}) \bigr\vert \leq \max_{-1 \leq i \leq N+1} \bigl\{ \bigl\vert d^{n}_{i} - \hat{d}^{n}_{i} \bigr\vert \bigr\} \sum_{i=-1}^{N+1} \bigl\vert B_{i} (x_{m}) \bigr\vert ,\quad 0 \leq m \leq N. \end{aligned}$$

We can easily see that

$$ \sum_{i=-1}^{N+1} \bigl\vert B_{i} (x_{m}) \bigr\vert \leq 10,\quad 0\leq m \leq N, $$

so

$$\begin{aligned} \bigl\Vert U_{n}(x_{m})- \hat{U}_{n} (x_{m}) \bigr\Vert _{\infty } \leq 10 \mathbf{M}_{1} h^{2}, \qquad\bigl\Vert V_{n}(x_{m})- \hat{V}_{n} (x_{m}) \bigr\Vert _{\infty } \leq 10 \mathbf{M}_{1} h^{2}. \end{aligned}$$
(32)

Therefore, according to (23) and (32), we obtain

$$\begin{aligned} \bigl\Vert (u_{n}-U_{n}, v_{n}-V_{n} ) \bigr\Vert _{\infty } &\leq \lambda _{0} L h^{4} + 10 \mathbf{M}_{1} h^{2} + \lambda _{0} L h^{4} + 10 \mathbf{M}_{1} h^{2} \\ &= h^{2} \bigl( 2 \lambda _{0} L h^{2} + 20 \mathbf{M}_{1}\bigr). \end{aligned}$$

Setting \(\mu = 2 \lambda _{0} L h^{2} + 20 \mathbf{M}_{1} \), we have

$$\begin{aligned} \bigl\Vert (u_{n}-U_{n}, v_{n}-V_{n} ) \bigr\Vert _{\infty } \leq \mu h^{2}. \end{aligned}$$
(33)

In addition, we analyze the error of the time discretization used in (17). For this purpose, we discretize system (7) in the time variable:

$$\begin{aligned} D^{\alpha }_{t} u (x_{i}, t_{n})={}& \frac{1}{\varGamma (1-\alpha )} \int _{0}^{t_{n}} \frac{\partial u(x_{i}, s)}{\partial t} (t_{n}-s)^{-\alpha } \,ds \\ = {}& \frac{1}{\varGamma (1-\alpha )} \sum_{j=1}^{n} \int _{(j-1)\Delta t}^{j \Delta t} \biggl[ \frac{u_{i}^{j} - u_{i}^{j-1}}{\Delta t} + O(\Delta t) \biggr] (n\Delta t-s)^{-\alpha } \,ds \\ = {}& \frac{1}{\varGamma (1-\alpha )} \sum_{j=1}^{n} \biggl\{ \biggl[ \frac{u_{i}^{j} - u_{i}^{j-1}}{\Delta t} + O(\Delta t) \biggr] \\ &{}\times \bigl[ (n-j+1 )^{1-\alpha } - (n-j )^{1-\alpha } \bigr] \biggr\} \frac{\Delta t^{1-\alpha }}{1-\alpha } \\ ={} & \frac{\Delta t^{-\alpha }}{\varGamma (2-\alpha )} \sum_{j=1}^{n} \bigl( {u_{i}^{j} - u_{i}^{j-1}} \bigr) \bigl( (n-j+1 )^{1- \alpha } - (n-j )^{1-\alpha } \bigr) \\ &{} + \frac{1}{\varGamma (2-\alpha )} \sum_{j=1}^{n} \bigl( (n-j+1 )^{1-\alpha } - (n-j )^{1-\alpha } \bigr) O\bigl(\Delta t^{2- \alpha }\bigr). \end{aligned}$$
(34)

Hence, according to Theorem 4.2 and Eqs. (33) and (34), we have

$$ \bigl\Vert (u_{n}-U_{n}, v_{n}-V_{n} ) \bigr\Vert _{\infty } \leq \tau \bigl( \Delta t^{2-\alpha } + h^{2}\bigr), $$

where τ is a constant; hence the order of convergence is \(O(\Delta t^{2-\alpha } +h^{2})\). □
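The \(O(\Delta t^{2-\alpha })\) time order in (34) can be checked numerically by applying the L1 formula to a function whose Caputo derivative is known in closed form. A self-contained Python sketch (our own check; it uses \(u(t)=t^{2}\), whose Caputo derivative of order α is \(2t^{2-\alpha }/\varGamma (3-\alpha )\)):

```python
import math

def l1_caputo_at_end(u, alpha, dt):
    """L1 approximation of D_t^alpha u at the final time level."""
    n = len(u) - 1
    g = dt ** (-alpha) / math.gamma(2 - alpha)
    return g * sum(
        ((j + 1) ** (1 - alpha) - j ** (1 - alpha)) * (u[n - j] - u[n - j - 1])
        for j in range(n)
    )

alpha, T = 0.5, 1.0
exact = 2 * T ** (2 - alpha) / math.gamma(3 - alpha)
for M in (100, 200, 400):
    dt = T / M
    u = [(k * dt) ** 2 for k in range(M + 1)]
    # Halving dt should shrink the error roughly by 2^(2 - alpha).
    print(M, abs(l1_caputo_at_end(u, alpha, dt) - exact))
```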

5 Illustrative examples

The results of the proposed method are compared with the radial basis function method (RBFM), and the accuracy of the method is shown in all examples. Also, we define the root mean square (RMS) error as follows:

$$ \operatorname{RMS} = \sqrt{ \frac{1}{M} \sum _{i=1}^{M} \bigl(p(t_{i})^{\mathrm{exact}} - p(t_{i})^{\mathrm{numerical}} \bigr)^{2}}, $$

where M is the total number of estimated values. Also, we take \(T=1\), \(h=\frac{1}{10}\) (i.e., \(N=10\)), and \(\Delta t=\frac{1}{1000}\).
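For completeness, the RMS above as a small helper (a sketch; the function name is ours):

```python
import numpy as np

def rms(exact: np.ndarray, numerical: np.ndarray) -> float:
    """Root mean square error over the M estimated values."""
    return float(np.sqrt(np.mean((exact - numerical) ** 2)))
```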

Example 5.1

Let

$$\begin{aligned} &i\frac{\partial ^{\alpha } \varPsi (x,t)}{\partial t^{\alpha }} + \frac{\partial ^{2} \varPsi (x,t)}{\partial x^{2}} + \bigl\vert \varPsi (x,t) \bigr\vert ^{2} \varPsi (x,t) \\ &\quad{} - t^{2} \biggl( \bigl( t^{4} - 1 \bigr) \sin (x) + \frac{2 \cos (x) t^{-\alpha }}{\varGamma (3-\alpha )} \biggr)- i t^{2} \biggl( \bigl( t^{4} - 1 \bigr) \cos (x) + \frac{2 \sin (x) t^{-\alpha }}{\varGamma (3-\alpha )} \biggr)=0 \end{aligned}$$

with the initial and boundary conditions as follows:

$$\begin{aligned} &\varPsi (x,0)=0, \quad 0\leq x\leq 1, \\ &\varPsi (0.1,t)=t^{2} \bigl( \sin (0.1) + i \cos (0.1) \bigr),\qquad \varPsi (1,t)=t^{2} \bigl( \sin (1) + i \cos (1) \bigr) \end{aligned}$$

and the exact solution is \(\varPsi (x,t)= t^{2} ( \sin (x) + i \cos (x))\).

The plots of the approximate and exact solutions of \(p_{1}(t)\), \(p_{2}(t)\) and of \(u(x,t)\), \(v(x,t)\) for Example 5.1 with noisy data are drawn in Fig. 1 and Fig. 2, respectively. Also, the plots of the errors of \(u(x,t)\) and \(v(x,t)\) for Example 5.1 with noisy data are drawn in Fig. 3. In addition, the numerical solutions of Example 5.1 with \(\alpha =0.5\) are shown in Table 1.

Figure 1

The plots of the approximate and exact solutions of \(p_{1}(t)\) and \(p_{2}(t)\) for Example 5.1 with the noisy data

Figure 2

The plots of the approximate and exact solutions of \(u(x,t)\) and \(v(x,t)\) for Example 5.1 with the noisy data

Figure 3

The plots of the errors of \(u(x,t)\) and \(v(x,t)\) for Example 5.1 with the noisy data

Table 1 Numerical solutions of Example 5.1 with \(\alpha =0.5\)

Example 5.2

Let

$$\begin{aligned} &i\frac{\partial ^{\alpha } \varPsi (x,t)}{\partial t^{\alpha }} + \frac{\partial ^{2} \varPsi (x,t)}{\partial x^{2}} + \bigl\vert \varPsi (x,t) \bigr\vert ^{2} \varPsi (x,t) - R(x,t)=0, \end{aligned}$$

with the initial and boundary conditions as follows:

$$\begin{aligned} &\varPsi (x,0)=\cos (x)+i \sin (x),\quad 0\leq x\leq 1, \\ &\varPsi (0.1,t)= \bigl( \cos (0.1+t)+i \sin (0.1+t) \bigr),\qquad \varPsi (1,t)= \bigl(\cos (1+t) + i \sin (1+t) \bigr). \end{aligned}$$

The exact solution is

$$\begin{aligned} \varPsi (x,t)= \bigl( \cos (x+t) + i \sin (x+t) \bigr),\quad 0\leq x \leq 1, 0\leq t \leq T. \end{aligned}$$

Also, \(R(x,t)\) is obtained from the conditions of the problem.
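Since \(\varPsi (x,t)=e^{i(x+t)}\) satisfies \(\varPsi _{xx}+ \vert \varPsi \vert ^{2}\varPsi =0\), the equation of this example leaves \(R(x,t)= i D^{\alpha }_{t}\varPsi (x,t)\). A Python sketch (our own illustration; the Caputo integral is evaluated by weighted quadrature, with scipy's `quad` algebraic weight handling the \((t-s)^{-\alpha }\) endpoint singularity):

```python
import math
from scipy.integrate import quad

def caputo_exp(x: float, t: float, alpha: float) -> complex:
    """Caputo derivative in t of e^{i(x+t)}:
    (1/Gamma(1-alpha)) * int_0^t [d/ds e^{i(x+s)}] (t-s)^(-alpha) ds,
    where d/ds e^{i(x+s)} = -sin(x+s) + i cos(x+s)."""
    w = dict(weight="alg", wvar=(0.0, -alpha))   # weight (s-0)^0 * (t-s)^(-alpha)
    re = quad(lambda s: -math.sin(x + s), 0.0, t, **w)[0]
    im = quad(lambda s: math.cos(x + s), 0.0, t, **w)[0]
    return (re + 1j * im) / math.gamma(1 - alpha)

def forcing_R(x: float, t: float, alpha: float) -> complex:
    return 1j * caputo_exp(x, t, alpha)
```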

Similar to the previous example, the plots of the approximate and exact solutions of \(p_{1}(t)\), \(p_{2}(t)\) and of \(u(x,t)\), \(v(x,t)\) for Example 5.2 with noisy data are drawn in Fig. 4 and Fig. 5, respectively. Also, the plots of the errors of \(u(x,t)\) and \(v(x,t)\) for Example 5.2 with noisy data are drawn in Fig. 6. In addition, the numerical solutions of Example 5.2 with \(\alpha =0.75\) are shown in Table 2.

Figure 4

The plots of the approximate and exact solutions of \(p_{1}(t)\) and \(p_{2}(t)\) for Example 5.2 with the noisy data

Figure 5

The plots of the approximate and exact solutions of \(u(x,t)\) and \(v(x,t)\) for Example 5.2 with the noisy data

Figure 6

The plots of the errors of \(u(x,t)\) and \(v(x,t)\) for Example 5.2 with the noisy data

Table 2 Numerical solutions of Example 5.2 with \(\alpha =0.75\)

6 Conclusion

It is a fact that the time evolution of a system, i.e., of its wave function, in quantum mechanics is described by the Schrödinger equation. By applying a finite-difference formula to the time discretization and cubic B-splines for the spatial variable, we obtained a numerical method for solving the time-fractional Schrödinger equation; simplicity of implementation and low computational cost can be mentioned as its main advantages. We proved the convergence of the method by deriving an upper bound on the error, and we computed the order of convergence in the convergence analysis section. Finally, in the illustrative examples section, we presented two examples with approximate solutions obtained by using cubic B-splines for the spatial variable; the plots of the approximate and exact solutions with noisy data are shown in the figures.