1 Introduction

In the last few decades, many scientists have demonstrated that fractional models describe natural phenomena more precisely and systematically than classical integer-order models with ordinary time derivatives [1,2,3,4]. In this paper, we consider the following integral boundary problem for the fractional Sobolev equation:

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial ^{\beta }_t \big (u+\rho (-\Delta )^{s}u \big )(x_{1},x_{2},t)+(-\Delta )^{s} u(x_{1},x_{2},t)=F(x_{1},x_{2},t),\qquad &{}\text { in }\Omega \times (0,T],\\ u(x_{1},x_{2},t)= 0 \qquad &{}\text { on }\partial \Omega \times (0,T], \\ \theta _1 u(x_{1},x_{2},T)+\theta _2 \displaystyle \int _{0}^{T}u(x_{1},x_{2},t)~\hbox {d}t=\ell (x_{1},x_{2}),&{}\text { in }\Omega . \end{array}\right. } \end{aligned}$$
(1.1)

where \(\rho > 0\), \(s \in (0,1)\), \(\Omega = (0,\pi ) \times (0,\pi ) \subset \mathbb {R}^{2}\) and \( \ell \in L^{2}(\Omega )\). The symbol \(\partial _{t}^{\beta }u\) denotes the Caputo fractional derivative of order \(\beta \), defined by

$$\begin{aligned} \partial _{t}^{\beta }u(x_{1},x_{2},t) = \frac{1}{\Gamma (1-\beta )} \int _{0}^{t} \frac{u_{\tau }(x_{1},x_{2},\tau )}{(t-\tau )^{\beta }} \hbox {d}\tau ,~~0< \beta < 1. \end{aligned}$$
(1.2)
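As a side illustration (not part of the paper's analysis), the Caputo derivative (1.2) can be approximated numerically by the standard L1 scheme, which replaces \(u_{\tau }\) by its piecewise-linear interpolant on a uniform grid; for a linear function \(u(t)=t\), whose Caputo derivative is \(t^{1-\beta }/\Gamma (2-\beta )\), the scheme is exact. A minimal sketch:

```python
import math

def caputo_l1(u_vals, tau, beta):
    """L1 approximation of the Caputo derivative of order beta at t_n = n*tau,
    obtained by integrating (1.2) with u replaced by its piecewise-linear
    interpolant on the uniform grid t_k = k*tau."""
    n = len(u_vals) - 1
    coeff = tau ** (-beta) / math.gamma(2.0 - beta)
    acc = 0.0
    for k in range(n):
        # weight of the increment u(t_{n-k}) - u(t_{n-k-1})
        w = (k + 1) ** (1.0 - beta) - k ** (1.0 - beta)
        acc += w * (u_vals[n - k] - u_vals[n - k - 1])
    return coeff * acc

beta, tau, n = 0.5, 0.01, 100
u = [k * tau for k in range(n + 1)]                  # u(t) = t on [0, 1]
exact = (n * tau) ** (1.0 - beta) / math.gamma(2.0 - beta)
approx = caputo_l1(u, tau, beta)
```

For non-linear \(u\), the L1 scheme carries an \(O(\tau ^{2-\beta })\) discretization error instead of being exact.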

The symbol \(\Gamma (\cdot )\) denotes the standard Gamma function, and the two constants \( \theta _1,\theta _2 \ge 0 \) satisfy \( \theta _1^2+\theta _2^2>0\). By taking \( \beta =s=1 \) in the main equation of problem (1.1), the first equation becomes the classical pseudo-parabolic (also called Sobolev) equation

$$\begin{aligned} u_t -\Delta u - \rho \Delta u_t =f. \end{aligned}$$
(1.3)

Pseudo-parabolic equations have many applications in science and technology, especially in physical phenomena such as the flow of homogeneous fluids through fissured rock and the aggregation of populations; see, e.g., [17] and the references therein. To our knowledge, the literature on Eq. (1.3) is quite abundant. Most of the results study the existence of solutions and the properties of solutions to the initial value problem.

For the reader’s convenience, we can state our problem (1.1) as follows. Assume that \(\ell \) is given input data in some suitable space. We need to determine the initial value \(u(x_1, x_2, 0)\). Our study is based on the insights provided by [34]. The third condition of our problem is restated as follows:

$$\begin{aligned} \theta _{1} u(x_{1},x_{2},T) + \theta _{2} \int \limits _{0}^{T} u(x_{1},x_{2},t)~\hbox {d}t = \ell (x_{1},x_{2}). \end{aligned}$$
(1.4)

The above condition is also called a non-local in-time condition. Some observations and comments on this condition are as follows:

  • If \(\theta _1=1\) and \(\theta _2=0\), the condition (1.4) becomes the terminal condition

    $$\begin{aligned} u(x_{1},x_{2},T)=\ell (x_{1},x_{2}) . \end{aligned}$$
    (1.5)

    PDEs with a terminal value condition are known as backward problems, a famous class of problems in applications. We refer the reader to some interesting papers on the terminal value problem for various PDEs: Kaltenbacher et al. [19, 20], Jia et al. [15], Liu et al. [43], Yamamoto [16, 23,24,25], Janno [13, 14], Rundell et al. [29, 30], Amar Debbouche et al. [9, 12, 18, 22, 31, 37], Triet [38, 39], Chao Yang [40], Au [41] and Thach [42], and the references therein. The results for the pseudo-parabolic equation (1.3) with a terminal condition are still limited. The most interesting recent works on determining the initial function are those of Tuan and Caraballo [6,7,8, 36, 44,45,46,47].

  • If \(\theta _1=0\) and \(\theta _2 =1\) then we obtain the condition

    $$\begin{aligned} \int \limits _{0}^{T} u(x_{1},x_{2},t)~\hbox {d}t=\ell (x_{1},x_{2}) . \end{aligned}$$
    (1.6)

    The non-local condition (1.6) appears in some models in N. Dokuchaev’s works; see, e.g., his article [10].

  • The meaning of the appearance of the condition (1.4) with \(\theta _1>0\) and \(\theta _2>0\) has been thoroughly discussed in [34].

There are two main settings for the problem of recovering the initial value of a parabolic or pseudo-parabolic equation from observed data \(\ell \). When the function \(\ell \) is observed experimentally, its approximation arises in two ways. The first is the deterministic case: there exists an approximation \(\ell ^\varepsilon \) of \(\ell \) such that the norm of the difference between \(\ell ^\varepsilon \) and \(\ell \) is smaller than \(\varepsilon \), where \(\varepsilon \) is the noise level. The second is the random case, in which we obtain the observation data in the form

$$\begin{aligned} \ell ^{\text {obs}}= \ell + \text {random noise}. \end{aligned}$$

Solution methods for stochastic models are often more complex than for deterministic models. Continuing the main ideas of the previous articles [32, 33], we investigate the random noise problem for the model (1.1). Owing to the complexity of this model, a number of cumbersome calculations are required in this paper. In this study, we consider the input data in the following form. Let

$$\begin{aligned} (c_{k},d_{l}) = \Big (\dfrac{2k-1}{2i}\pi , \dfrac{2l-1}{2j}\pi \Big ), {k = 1,2,3,\ldots ,i,~l=1,2,3,\ldots ,j}, \end{aligned}$$

be discrete points in \(\Omega = (0,\pi ) \times (0,\pi )\). Here, the observed random data \(\big ( \widetilde{\ell }_{k l}, \widetilde{F}_{k l}(t)\big )\) are noisy versions of the exact data \(\big (\ell (c_{k},d_{l}), F(c_{k},d_{l},t) \big )\) satisfying the following model

$$\begin{aligned} \widetilde{\ell }_{k l}&= \ell \big (c_{k},d_{l} \big ) + \theta _{kl} \mathcal {Y}_{kl},\nonumber \\ \widetilde{F}_{k l}(t)&= F( c_{k},d_{l},t) + \varsigma {\mathcal {X}_{kl}(t)},~~\text {for}~ {k = 1,2,3,\ldots ,i,~l=1,2,3,\ldots ,j}, \end{aligned}$$
(1.7)

where \(\mathcal Y_{kl}\) are mutually independent and identically distributed random variables and \(\mathcal X_{kl}(t)\) are independent one-dimensional standard Brownian motions. The above random model is inspired by similar models in [26, 27, 32, 33]. Due to the additional appearance of the integral in the non-local condition, the formulation of our solution becomes more complicated, which creates significant difficulties in showing the ill-posedness as well as in regularizing our problem. This paper investigates problem (1.1), and the main results of this work are as follows:
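For intuition, the observation model (1.7) can be simulated directly. The following sketch is our own illustration (the noise levels `theta` and `sigma` are hypothetical choices): it samples i.i.d. standard Gaussian variables for \(\mathcal Y_{kl}\) and builds standard Brownian paths for \(\mathcal X_{kl}(t)\) at the grid points \((c_k,d_l)\):

```python
import math
import random

def grid_points(i, j):
    """Sampling points (c_k, d_l) of the grid used in the observation model."""
    c = [(2 * k - 1) * math.pi / (2 * i) for k in range(1, i + 1)]
    d = [(2 * l - 1) * math.pi / (2 * j) for l in range(1, j + 1)]
    return c, d

def noisy_data(ell, F, i, j, t_grid, theta=0.1, sigma=0.1, seed=0):
    """Simulate (1.7): Gaussian perturbations of ell and Brownian-motion
    perturbations of F at the grid points.  theta and sigma are hypothetical
    noise levels chosen for this illustration."""
    rng = random.Random(seed)
    c, d = grid_points(i, j)
    ell_obs = [[ell(ck, dl) + theta * rng.gauss(0.0, 1.0) for dl in d] for ck in c]
    F_obs = []
    for ck in c:
        row = []
        for dl in d:
            w = 0.0                       # Brownian path value, W(0) = 0
            path = []
            for t_a, t_b in zip(t_grid, t_grid[1:]):
                path.append(F(ck, dl, t_a) + sigma * w)
                w += rng.gauss(0.0, 1.0) * math.sqrt(t_b - t_a)
            path.append(F(ck, dl, t_grid[-1]) + sigma * w)
            row.append(path)
        F_obs.append(row)
    return ell_obs, F_obs
```

Note that the Brownian perturbation vanishes at \(t=0\), so the first sample of each path is noise-free, matching \(\mathcal X_{kl}(0)=0\).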

  • A proof of the ill-posedness of the problem and an estimate of the solution in \(L^{2}\big ( (0,\pi ) \times (0,\pi ) \big )\).

  • A regularized method together with a convergence rate under an a priori parameter choice rule.

This paper is organized as follows. Section 2 gives some preliminaries and derives the mild solution of problem (1.1). In Sect. 3, we give an example demonstrating the ill-posedness of the problem in the space \(L^{2}(\Omega )\). In Sect. 4, we study the Fourier method to solve problem (1.1) and show the rate of convergence. Finally, we present a numerical experiment to verify the theoretical results.

2 Preliminaries

Let us collect some well-known properties of the negative Laplacian operator defined by

$$\begin{aligned} \mathcal {L} f := -\Delta f = - \bigg ( \dfrac{\partial ^{2} f}{\partial x_{1}^{2}} + \dfrac{\partial ^{2} f}{\partial x_{2}^{2}} \bigg ), \end{aligned}$$
(2.8)

and recall the definition of the Hilbert scale space, which is a subset of \(L^2(\Omega )\). The operator \(\mathcal {L}\) on \(\Omega = (0,\pi ) \times (0,\pi )\) with the Dirichlet boundary condition has eigenvalues \(\lambda _{m,n} = m^{2} + n^{2}\), \(m,n \in \mathbb {N}\), and the corresponding eigenfunctions \( e _{m,n}\) satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal L}^{s} e _{m,n}(x_{1},x_{2})=\lambda ^{s}_{m,n} e _{m,n}(x_{1},x_{2}),\quad &{}(x_{1},x_{2})\in \Omega ,\\ e _{m,n}(x_{1},x_{2})=0,&{}(x_{1},x_{2})\in \partial \Omega , \end{array}\right. } \end{aligned}$$
(2.9)

where \( e _{m,n}(x_{1},x_{2}) = e _{m}(x_{1}) e _{n}(x_{2}) = \dfrac{2}{\pi } \sin (m {x_1}) \sin (n{x_2}).\)
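The orthonormality of the eigenfunctions \(e_{m,n}\) in \(L^2(\Omega )\) can be checked numerically. The sketch below is our own illustration; it approximates the inner product with a midpoint rule, which happens to be exact (up to rounding) for these trigonometric products when \(m,n < N\):

```python
import math

def e(m, n, x1, x2):
    """Eigenfunctions e_{m,n} of the Dirichlet Laplacian on (0, pi)^2,
    with eigenvalues lambda_{m,n} = m^2 + n^2."""
    return (2.0 / math.pi) * math.sin(m * x1) * math.sin(n * x2)

def l2_inner(f, g, N=100):
    """Midpoint-rule approximation of the L^2((0, pi)^2) inner product."""
    h = math.pi / N
    xs = [(k + 0.5) * h for k in range(N)]
    return h * h * sum(f(x, y) * g(x, y) for x in xs for y in xs)

norm_sq = l2_inner(lambda x, y: e(2, 3, x, y), lambda x, y: e(2, 3, x, y))
cross = l2_inner(lambda x, y: e(2, 3, x, y), lambda x, y: e(4, 1, x, y))
```

Here `norm_sq` should be 1 and `cross` should vanish, reflecting \(\big <e_{m,n}, e_{m',n'}\big > = \delta _{mm'}\delta _{nn'}\).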

Definition 2.1

For any \(\nu > 0\), we define the fractional Hilbert scale space by

$$\begin{aligned} \mathcal {H}^{\nu }(\Omega )=\Big \{f\in {L^2(\Omega )}:\sum \limits _{m=1}^{\infty } \sum \limits _{n=1}^{\infty } \lambda _{m,n}^{\nu }\big |\big<f, e _{m,n}\big >|^{2}<\infty \Big \}. \end{aligned}$$

The space \( \mathcal {H}^{\nu }(\Omega ) \) is a Hilbert space equipped with the norm

$$\begin{aligned} \Vert f \Vert _{\mathcal {H}^{\nu }(\Omega )} :=\Big (\sum \limits _{m=1}^{\infty } \sum \limits _{n=1}^{\infty } \lambda _{m,n}^{\nu }\big |\big <f, e _{m,n}\big >\big |^{2}\Big )^{\frac{1}{2}}. \end{aligned}$$

In what follows, several properties of the Mittag–Leffler function are collected.

Definition 2.2

(see [21]) For \(0< \beta < 1\) and an arbitrary constant \( \alpha \in \mathbb {R}\), the Mittag–Leffler function is defined by

$$\begin{aligned} E_{\beta ,\alpha }(z)= {\sum _{j=0}^{\infty }} \frac{z^j}{\Gamma (\beta j+\alpha )},\quad z\in \mathbb {C}, \end{aligned}$$
(2.10)

where \( \Gamma \) is the usual Gamma function.
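Definition 2.2 can be turned into a simple numerical routine by truncating the series (2.10). This sketch is our own illustration (reliable only for moderate \(|z|\)); it recovers the classical special cases \(E_{1,1}(z)=e^{z}\) and \(E_{1,2}(z)=(e^{z}-1)/z\):

```python
import math

def mittag_leffler(z, beta, alpha, n_terms=100):
    """Truncated power series (2.10) for E_{beta,alpha}(z).
    Accurate for moderate |z|; large negative arguments require
    dedicated algorithms instead of direct summation."""
    return sum(z ** j / math.gamma(beta * j + alpha) for j in range(n_terms))

e11 = mittag_leffler(1.0, 1.0, 1.0)   # should equal exp(1)
e12 = mittag_leffler(2.0, 1.0, 2.0)   # should equal (exp(2) - 1)/2
```

For large negative arguments the partial sums suffer catastrophic cancellation, which is why production codes use integral representations or rational approximations instead.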

Lemma 2.1

[35] For \( 0<\beta _1<\beta _2<1 \) and \(\beta \in [\beta _1,\beta _2],\) there exist positive constants \(\mathcal {M}_{1}\), \(\mathcal {M}_{2}\), and \( \mathcal {M}_{3}\), depending only on \( \beta \) such that

  (i)

    \(E_{\beta ,1}(-z)>0 \),    for \( z>0. \)

  (ii)

    \( \dfrac{\mathcal {M}_{1}}{1+z} \le E_{\beta ,1}(-z)\le \dfrac{\mathcal {M}_{2}}{1+z},\) for \( z>0. \)

  (iii)

    \(E_{\beta ,\alpha }(-z) \le \dfrac{ \mathcal {M}_{3}}{1+z}\), for all \(z > 0, \alpha \in \mathbb {R}\).

Lemma 2.2

[28] For \(\lambda _{m,n} > 0\), \(0< \beta < 1\), and positive integers \(m,n \in \mathbb {N}\), we have:

$$\begin{aligned}&\frac{\hbox {d}}{\hbox {d}t}\big (tE_{\beta ,2}(-\lambda _{m,n}t^{\beta }) \big ) = E_{\beta ,1}(-\lambda _{m,n}t^{\beta }),\\ {}&\frac{\hbox {d}}{\hbox {d}t} \big (E_{\beta ,1}(-\lambda _{m,n}t^{\beta }) \big ) = -\lambda _{m,n}t^{\beta -1}E_{\beta ,\beta }(-\lambda _{m,n}t^{\beta }). \end{aligned}$$
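Both identities of Lemma 2.2 can be spot-checked with a central finite difference. The sketch below is our own illustration, with arbitrarily chosen values of \(\beta \), \(\lambda \), and \(t\), and a locally defined truncated Mittag–Leffler series:

```python
import math

def ml(z, beta, alpha, terms=80):
    """Truncated Mittag-Leffler series, adequate for the moderate |z| used here."""
    return sum(z ** j / math.gamma(beta * j + alpha) for j in range(terms))

beta, lam, t, h = 0.5, 2.0, 0.8, 1e-5

# First identity: d/dt [ t E_{beta,2}(-lam t^beta) ] = E_{beta,1}(-lam t^beta)
f = lambda s: s * ml(-lam * s ** beta, beta, 2.0)
lhs1 = (f(t + h) - f(t - h)) / (2.0 * h)        # central difference, O(h^2)
rhs1 = ml(-lam * t ** beta, beta, 1.0)

# Second identity:
# d/dt E_{beta,1}(-lam t^beta) = -lam t^{beta-1} E_{beta,beta}(-lam t^beta)
g = lambda s: ml(-lam * s ** beta, beta, 1.0)
lhs2 = (g(t + h) - g(t - h)) / (2.0 * h)
rhs2 = -lam * t ** (beta - 1.0) * ml(-lam * t ** beta, beta, beta)
```

Both identities also follow from termwise differentiation of the series (2.10), using \(\Gamma (\beta j+1)=\beta j\,\Gamma (\beta j)\).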

Lemma 2.3

[34] Let \(\beta \in (0,1)\) and \(0< s < 1\), then we have

$$\begin{aligned} \theta _1E_{\beta ,1}\Big (-\frac{\lambda ^{s}_{m,n}}{1+\rho \lambda ^{s}_{m,n}} T^{\beta } \Big )+\theta _2\int _{0}^{T}E_{\beta ,1}\Big (\frac{-\lambda ^{s}_{m,n}}{1+\rho {\lambda ^{s}}_{m,n}} t^{\beta } \Big )\hbox {d}t \lesssim \dfrac{1}{\lambda _{m,n}^{s}} \cdot \end{aligned}$$
(2.11)

where \(\lesssim \) means “less than or equal to, up to a generic multiplicative constant \(C\)”.

Proof

We refer the reader to [34] for a detailed proof. \(\square \)

Lemma 2.4

[35] For \(0< \beta < 1\), using Lemma 2.1, we have

$$\begin{aligned} \frac{E_{\beta ,1}\big (-\frac{\lambda _{m,n}^{s}}{1+\rho \lambda _{m,n}^{s}} t^{\beta }\big ) }{E_{\beta ,1}\big (-\frac{\lambda _{m,n}^{s}}{1+\rho \lambda _{m,n}^{s}} T^{\beta }\big )} \le \frac{\mathcal {M}_{2} \big (1 + \frac{\lambda _{m,n}^{s}}{1+\rho \lambda _{m,n}^{s}} T^{\beta } \big ) }{\mathcal {M}_{1} \big (1 + \frac{\lambda _{m,n}^{s}}{1+\rho \lambda _{m,n}^{s}} t^{\beta } \big ) } \le \frac{\mathcal {M}_{2}}{\mathcal {M}_{1}} \Big (\frac{T}{t}\Big )^{\beta }, \end{aligned}$$
(2.12)

where we put \(\mathcal {M}:= \dfrac{\mathcal {M}_{2}}{\mathcal {M}_{1}}\Big (\dfrac{T}{t}\Big )^{\beta }\).

Lemma 2.5

[20] For \(0< \beta < 1\) and \(\rho > 0\), \(E_{\beta ,1}\big (-\rho \big )\) is completely monotonic:

$$\begin{aligned} (-1)^{k} \frac{d^{k}}{d\rho ^{k}} E_{\beta ,1} (-\rho ) \ge 0, \quad \forall k \in \mathbb {N}\cup \{0\}. \end{aligned}$$
(2.13)

Lemma 2.6

[20] For \(0< \beta < 1\) and \(\rho > 0\), one has \(0 \le E_{\beta ,\beta }(-\rho ) \le \dfrac{1}{\Gamma (\beta )} \cdot \) Moreover, \(E_{\beta ,\beta }\big (-\rho \big )\) is monotonically decreasing in \(\rho \).

Lemma 2.7

(see Lemma 3.3 of [5]) Assume that \(m<i\), \(n<j\) and \(\Phi \in L^2(\Omega )\). Then,

$$\begin{aligned}&\frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \Phi (c_{k},d_{l}) e _{m,n}(c_{k},d_{l})-\Phi _{m,n} \nonumber \\&\quad =\sum \limits _{p=1}^{\infty } (-1)^{p} \Big [ \Phi _{2pi+m,n}-\Phi _{2pi-m,n} \Big ] + \sum \limits _{q=1}^{\infty } (-1)^{q} \Big [ \Phi _{m,2qj+n}-\Phi _{m,2qj-n} \Big ] \nonumber \\&\qquad +\,\sum \limits _{p=1}^{\infty } \sum \limits _{q=1}^{\infty } (-1)^{p+q} \Big [ \Phi _{2pi+m,2qj+n}-\Phi _{2pi+m,2qj-n} + \Phi _{2pi-m,2qj-n}-\Phi _{2pi-m,2qj+n} \Big ] \cdot \end{aligned}$$
(2.14)

Proof

This lemma can be proved by applying Lemma 3.5 on page 145 of [11]; therefore, we omit the details here. \(\square \)

2.1 The Mild Solution of Problem (1.1)

Assume that problem (1.1) has a solution u of the following form

$$\begin{aligned} u(x_{1},x_{2},t)=\sum \limits _{m=1}^{\infty } \sum \limits _{n=1}^{\infty } \big <u(\cdot ,\cdot ,t), e _{m,n}\big > e _{m,n}(x_{1},x_{2}). \end{aligned}$$

in which \(u_{m,n}(t) = \big <u(\cdot ,\cdot ,t), e _{m,n}\big >,~m,n \in \mathbb {N}^{*}\). Then, by applying the Laplace transform method, we obtain a formulation for the Fourier coefficients \(u_{m,n}(t)\) depending on \(\big <u_{0}, e _{m,n}\big>\) as follows:

$$\begin{aligned} u_{m,n}(t) =&\, E_{\beta ,1}\big (- \frac{\lambda _{m,n}^{s}}{1+\rho \lambda _{m,n}^{s}} t^{\beta }\big )\big<u_{0}, e _{m,n}\big>\nonumber \\&+\, \Big (\int _{0}^{t}\frac{(t-\tau )^{\beta -1}}{1+\rho \lambda ^{s}_{m,n}}E_{\beta ,\beta }\Big (\frac{-\lambda ^{s}_{m,n}(t-\tau )^{\beta }}{1+\rho \lambda ^{s}_{m,n}}\Big ) \big <F(\cdot ,\cdot ,\tau ), e _{m,n}\big > \hbox {d}\tau \Big ) {.} \end{aligned}$$
(2.15)

From now on, for brevity, we put \(a(\rho ,z) = {z^s~b(\rho ,z)}\) with \(b(\rho ,z) = \frac{1}{1+\rho z^{s}}\), \(z=\lambda _{m,n}\); then

$$\begin{aligned} a(\rho ,\lambda _{m,n})&= \frac{\lambda _{m,n}^{s}}{1+\rho \lambda _{m,n}^{s}}~\text {and}~b(\rho ,\lambda _{m,n}) = \frac{1}{1+\rho \lambda _{m,n}^{s}} {,} \nonumber \\ \mathcal {W}_{\beta ,m,n}(t)&= E_{\beta ,1} \big ( - a(\rho ,\lambda _{m,n}) t^{\beta } \big ),\nonumber \\&\overline{\mathcal {W}}_{\beta ,m,n}(t) = t^{\beta -1}E_{\beta ,\beta } \big ( - a(\rho ,\lambda _{m,n}) t^{\beta } \big ),~\forall ~ 0 \le t \le T. \end{aligned}$$
(2.16)
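As a sanity check on the notation (2.16), the kernel \(\mathcal W_{\beta ,m,n}\) can be evaluated through a truncated Mittag–Leffler series. This is our own illustration with arbitrary sample values of \(\rho \), \(\lambda _{m,n}\), and \(s\): at \(t=0\) the kernel equals 1, and for \(\beta =1\) it reduces to the classical exponential \(e^{-a(\rho ,\lambda _{m,n})t}\):

```python
import math

def ml(z, beta, alpha, terms=80):
    """Truncated Mittag-Leffler series; fine for moderate |z|."""
    return sum(z ** j / math.gamma(beta * j + alpha) for j in range(terms))

def a(rho, lam, s):
    """a(rho, lambda_{m,n}) = lambda^s / (1 + rho * lambda^s), as in (2.16)."""
    return lam ** s / (1.0 + rho * lam ** s)

def W(beta, rho, lam, s, t):
    """W_{beta,m,n}(t) = E_{beta,1}(-a(rho, lambda_{m,n}) t^beta)."""
    return ml(-a(rho, lam, s) * t ** beta, beta, 1.0)

rho, lam, s, t = 1.0, 5.0, 0.5, 2.0    # lam plays the role of lambda_{m,n}
w0 = W(0.5, rho, lam, s, 0.0)           # should equal E_{beta,1}(0) = 1
w1 = W(1.0, rho, lam, s, t)             # should equal exp(-a t) when beta = 1
```

Note that \(a(\rho ,\lambda _{m,n}) < 1/\rho \) uniformly in \(m,n\), so the series argument stays bounded on \([0,T]\).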

Then, (2.15) can be written as follows:

$$\begin{aligned} u_{m,n}(t) = \mathcal {W}_{\beta ,m,n}(t)\big<u_{0}, e _{m,n}\big> + \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau )\big <F(\cdot ,\cdot ,\tau ), e _{m,n}\big > \hbox {d}\tau . \end{aligned}$$
(2.17)

This leads to

$$\begin{aligned} u(x_{1},x_{2},t)&=\sum \limits _{m=1}^{\infty }\sum \limits _{n=1}^{\infty }\mathcal {W}_{\beta ,m,n}(t)\big<u_{0}, e _{m,n}\big> e _{m,n}(x_{1},x_{2})\nonumber \\&\quad +\,\sum \limits _{m=1}^{\infty }\sum \limits _{n=1}^{\infty }\Big (\int _{0}^{t}b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \big <F(\cdot ,\cdot ,\tau ), e _{m,n}\big > \hbox {d}\tau \Big ) e _{m,n}(x_{1},x_{2}). \end{aligned}$$
(2.18)

From formula (2.18), using the non-local condition (1.4) and some transformations, we obtain

$$\begin{aligned} \big<\ell , e _{m,n}\big>&= \big<u_{0}, e _{m,n}\big> \bigg [ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t \bigg ] \nonumber \\&\quad +\,\theta _{1} \int \limits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \big< F(\cdot ,\cdot ,\tau ), e _{m,n}\big>\hbox {d}\tau \nonumber \\&\quad +\, \theta _{2} \int \limits _{0}^{T} \int \limits _{0}^{t} b(\rho ,\lambda _{m,n})\overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \big <F(\cdot ,\cdot ,\tau ), e _{m,n}\big > \hbox {d}\tau \hbox {d}t. \end{aligned}$$
(2.19)

Substituting (2.19) into (2.18), we get

$$\begin{aligned}&u(x_{1},x_{2},t) = \sum \limits _{m=1}^{\infty }\sum \limits _{n=1}^{\infty }\frac{ \mathcal {W}_{\beta ,m,n}(t)\big<\ell , e _{m,n}\big> }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } e _{m,n}(x_{1},x_{2}) \nonumber \\&\quad -\, \sum \limits _{m=1}^{\infty }\sum \limits _{n=1}^{\infty }\frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \limits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \big<F(\cdot ,\cdot ,\tau ), e _{m,n}\big> \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } e _{m,n}(x_{1},x_{2}) \nonumber \\&\quad -\, \sum \limits _{m=1}^{\infty }\sum \limits _{n=1}^{\infty }\frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \limits _{0}^{T} \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \big<F(\cdot ,\cdot ,\tau ), e _{m,n}\big> \hbox {d}\tau \hbox {d}t }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t)\hbox {d}t } e _{m,n}(x_{1},x_{2}) \nonumber \\&\quad +\, \sum \limits _{m=1}^{\infty }\sum \limits _{n=1}^{\infty }\bigg ( \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \big <F(\cdot ,\cdot ,\tau ), e _{m,n}\big > \hbox {d}\tau \bigg ) e _{m,n}(x_{1},x_{2}). \end{aligned}$$
(2.20)

This completes the derivation of the mild solution to problem (1.1).

2.2 Discretization Formula of the Solution

Here, we derive a formula for the solution u in terms of the discrete data \(\ell (c_{k},d_{l})\) and \(F(c_{k},d_{l},t)\) instead of the unavailable Fourier coefficients \(\ell _{m,n}\) and \(F_{m,n}(t)\). To do this, we need the following lemmas:

Lemma 2.8

Let \(m,n\) be positive integers such that \(m <i\) and \(n<j\). Setting

$$\begin{aligned} \mathbb M_1&:= \{ (m',n') \in \mathbb N^2: \quad (m^{\prime },n^{\prime }) = \pm (m,n) + (2pi,2qj) \}, \\ \mathbb M_2&:= \{ (m',n') \in \mathbb N^2: \quad (m^{\prime },n^{\prime }) = \pm (-m,n) + (2pi,2qj) \}. \end{aligned}$$

Then, the following property holds true

$$\begin{aligned}&\frac{1}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} e _{m,n} (c_{k},d_{l} ) e _{m^{\prime },n^{\prime }} (c_{k},d_{l}) = \frac{ (-1)^{p+q} }{ \pi ^{2}} \textbf{1}_{\mathbb M_1}(m',n') + \frac{(-1)^{p+q+1}}{\pi ^{2}} \textbf{1}_{\mathbb M_2}(m',n'), \end{aligned}$$
(2.21)

where \(\textbf{1}\) is the indicator function.

Proof

The lemma can be proved by using Lemma 3.5 in page 145 of [11]. \(\square \)
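Lemma 2.8 can also be verified numerically for specific indices. The sketch below is our own illustration: it evaluates the left-hand side of (2.21) and checks the unaliased case \((m',n')=(m,n)\), a case where both indicator functions vanish, and the aliased case \((m',n')=(2i-m,n)\), which belongs to \(\mathbb M_2\) with \(p=1\), \(q=0\):

```python
import math

def e(m, n, x1, x2):
    """Eigenfunctions e_{m,n}(x1, x2) = (2/pi) sin(m x1) sin(n x2)."""
    return (2.0 / math.pi) * math.sin(m * x1) * math.sin(n * x2)

def discrete_inner(m, n, mp, nq, i, j):
    """Grid average (1/(ij)) sum_{k,l} e_{m,n}(c_k,d_l) e_{mp,nq}(c_k,d_l)
    over the sampling points c_k = (2k-1)pi/(2i), d_l = (2l-1)pi/(2j)."""
    total = 0.0
    for k in range(1, i + 1):
        ck = (2 * k - 1) * math.pi / (2 * i)
        for l in range(1, j + 1):
            dl = (2 * l - 1) * math.pi / (2 * j)
            total += e(m, n, ck, dl) * e(mp, nq, ck, dl)
    return total / (i * j)

i, j = 8, 9
same = discrete_inner(2, 3, 2, 3, i, j)            # expect 1/pi^2
zero = discrete_inner(2, 3, 5, 3, i, j)            # expect 0: no aliasing
alias = discrete_inner(2, 3, 2 * i - 2, 3, i, j)   # aliased: also 1/pi^2
```

The aliased case shows why the grid averages in Sect. 2.2 pick up the infinite correction sums of Lemma 2.9: frequencies above the grid resolution fold back onto low frequencies.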

Theorem 2.1

Let \(\mathcal {N}_{tr},\mathcal {M}_{tr}\) be positive integers such that \(\mathcal {N}_{tr} < i \) and \(\mathcal {M}_{tr} < j\). We assume \(\ell \in C^{1}(\overline{\Omega })\) and \(F \in C([0,T];C^{1}(\overline{\Omega }))\). Then, we have

$$\begin{aligned} u(x_{1},x_{2},t) = \mathcal {U}_{dis}(x_{1},x_{2},t) + \mathcal O^{I}_{res}(x_{1},x_{2},t) + \mathcal O^{II}_{res}(x_{1},x_{2},t), \end{aligned}$$
(2.22)

where \(\mathcal {U}_{dis}\) is constructed based on discrete values of \(\ell \) and \(F(\cdot ,\cdot ,t)\) at \(i \times j\) grid points \((c_k,d_l) \)

$$\begin{aligned}&\mathcal {U}_{dis}(x_{1},x_{2},t)\nonumber \\ {}&\quad := \sum \limits _{m=1}^{\mathcal {N}_{tr}} \sum \limits _{n=1}^{\mathcal {M}_{tr}} \bigg [ \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \frac{ \mathcal {W}_{\beta ,m,n}(t) \ell (c_{k},d_{l}) }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t)\hbox {d}t } e _{m,n}(c_{k},d_{l}) \nonumber \\&\qquad - \frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) F(c_{k},d_{l},\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } e _{m,n}(c_{k},d_{l}) \nonumber \\&\qquad - \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} \int \nolimits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) F(c_{k},d_{l},\tau ) \hbox {d}\tau \hbox {d}t}{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } e _{m,n}(c_{k},d_{l}) \nonumber \\&\qquad + \Big (\displaystyle \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) F(c_{k},d_{l},\tau ) \hbox {d}\tau \Big ) e _{m,n}(c_{k},d_{l}) \bigg ] e _{m,n}(x_{1},x_{2}), \end{aligned}$$
(2.23)

the first residual sum \(\mathcal O^I_{res}\) is the difference \(\displaystyle \sum \nolimits _{m=1}^{ \mathcal N_{tr}} \sum \nolimits _{n=1}^{ \mathcal M_{tr}} u_{m,n}(t) e_{m,n} -\mathcal U_{dis}\)

$$\begin{aligned} \mathcal O^I_{res}(x_1,x_2,t) =&-\sum \limits _{m=1}^{\mathcal {N}_{tr}} \sum \limits _{n=1}^{\mathcal {M}_{tr}} \Bigg [ \frac{ \mathcal {W}_{\beta ,m,n}(t) ~\delta _{i,j,m,n} }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&- \frac{\theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \Theta _{i,j,m,n}(\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&- \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} \int \nolimits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Theta _{i,j,m,n}(\tau )\hbox {d}\tau \hbox {d}t }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&+ \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Theta _{i,j,m,n}(\tau )\hbox {d}\tau \Bigg ] e _{m,n}(x_{1},x_{2}), \end{aligned}$$
(2.24)

and the second residual sum \(\mathcal O^{II}_{res}\) is the difference \(u- \displaystyle \sum \nolimits _{m=1}^{ \mathcal N_{tr}} \sum \nolimits _{n=1}^{ \mathcal M_{tr}} u_{m,n}(t) e_{m,n}\)

$$\begin{aligned} \mathcal O^{II}_{res}(x_1,x_2,t)&= \sum \limits _{m> \mathcal {N}_{tr} \text { or } n > \mathcal {M}_{tr}} u_{m,n}(t) e _{m,n}(x_{1},x_{2}). \end{aligned}$$
(2.25)

Here, \(\delta _{i,j,m,n}\) and \(\Theta _{i,j,m,n}(t)\) are the following differences

$$\begin{aligned} \delta _{i,j,m,n}&= \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \ell (c_{k},d_{l}) e _{m,n}(c_{k},d_{l}) - \ell _{m,n}, \end{aligned}$$
(2.26)

and

$$\begin{aligned}&\Theta _{i,j,m,n}(t) = \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} F(c_{k},d_{l},t) e _{m,n}(c_{k},d_{l}) - F_{m,n}(t). \end{aligned}$$
(2.27)

Proof

We replace \(\big <\ell (\cdot ,\cdot ), e _{m,n}\big>\) and \(\big <F(\cdot ,\cdot ,t), e _{m,n}\big>\) in formula (2.20) by the discrete-data expressions in Lemma 2.9 and Lemma 2.10; this yields the discretization formula for the solution u and finishes the proof. \(\square \)

Lemma 2.9

Let \(\ell \) satisfy the assumptions of Theorem 2.1 and assume that \(m<i\), \(n<j\). Then, we get

$$\begin{aligned} \ell _{m,n}&= \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \ell (c_{k},d_{l}) e _{m,n}(c_{k},d_{l}) - \sum \limits _{p=1}^{\infty } (-1)^{p} \Big [ \ell _{2pi+m,n} - \ell _{2pi-m,n} \Big ] \nonumber \\&\quad -\, \sum \limits _{q=1}^{\infty } (-1)^{q} \Big [ \ell _{m,2qj+n} - \ell _{m,2qj-n} \Big ] \nonumber \\&\quad -\, \sum \limits _{p=1}^{\infty } \sum \limits _{q=1}^{\infty } (-1)^{p+q} \Big [ \ell _{2pi+m,2qj+n} - \ell _{2pi+m,2qj-n} + \ell _{2pi-m,2qj-n} - \ell _{2pi-m,2qj+n} \Big ] \cdot \end{aligned}$$
(2.28)

Lemma 2.10

Let F satisfy the assumptions of Theorem 2.1 and assume that \(m<i\), \(n<j\). Then, we get

$$\begin{aligned} F_{m,n}(t)&= \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} F(c_{k},d_{l},t) e _{m,n}(c_{k},d_{l}) - \sum \limits _{p=1}^{\infty } (-1)^{p} \Big [ F_{2pi+m,n}(t) - F_{2pi-m,n}(t) \Big ] \nonumber \\&\quad -\, \sum \limits _{q=1}^{\infty } (-1)^{q}\Big [ F_{m,2qj+n}(t) - F_{m,2qj-n}(t) \Big ] \nonumber \\&\quad -\, \sum \limits _{p=1}^{\infty } \sum \limits _{q=1}^{\infty } (-1)^{p+q} \Big [F_{2pi+m,2qj+n}(t) - F_{2pi+m,2qj-n}(t) + F_{2pi-m,2qj-n}(t) - F_{2pi-m,2qj+n}(t) \Big ] \cdot \end{aligned}$$
(2.29)

Proof

The proofs of these two lemmas are similar to those in [32]; we omit them here. \(\square \)

Remark 2.1

The explicit form (2.22) of the mild solution is more convenient than (2.20) for showing the ill-posedness of our problem and for constructing an approximate solution later. We would like to emphasize that it is split into a finite series and the residual between the series, in which the first part \(\mathcal U_{dis}\) is expressed in terms of the values of \(\ell \) and \(F(\cdot ,\cdot ,t)\) at the discrete points \((c_k,d_l)\) instead of on the whole domain \(\Omega \). In this way, several new terms appear in the residual term \(\mathcal O^{I}_{res}\), namely \(\delta _{i,j,m,n}\) and \(\Theta _{i,j,m,n}(t)\) defined in (2.26) and (2.27). Fortunately, this pair of terms possesses explicit forms, as in Lemma 2.9.

3 The Ill-Posedness of Problem (1.1) with Random Noise Data

Theorem 3.1

The problem (1.1) is ill-posed.

Proof

Let us design the exact data and the corresponding random noise data as follows:

$$\begin{aligned}&\text {Exact data :}~~\ell (x_{1},x_{2}) = 0,~~F(x_{1},x_{2},t) = 0, \end{aligned}$$
(3.30)
$$\begin{aligned}&\text {Random noise data :}~~\widetilde{\ell }_{k,l} = (ij)^{-\frac{1}{4}} \mathcal {Y}_{kl},~~ \widetilde{F}_{kl}(t) = (ij)^{-\frac{1}{2}} {\mathcal {X}}_{kl}(t), \end{aligned}$$
(3.31)

and \(s=1\). Additionally, we design approximations of the exact data \(\ell \) and F from the discrete points \((c_{k},d_{l})\) as follows:

$$\begin{aligned}&\widetilde{\ell }^{i,j}(x_{1},x_{2}) := \sum \limits _{m=1}^{i-1} \sum \limits _{n=1}^{j-1} \Bigg [ \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \widetilde{\ell }_{k,l} e _{m,n}(c_{k},d_{l}) \Bigg ] e _{m,n}(x_{1},x_{2}), \end{aligned}$$
(3.32)
$$\begin{aligned}&\widetilde{F}^{i,j}(x_{1},x_{2},t) := \sum \limits _{m=1}^{i-1} \sum \limits _{n=1}^{j-1} \Bigg [ \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \widetilde{F}_{k,l}(t) e _{m,n}(c_{k},d_{l}) \Bigg ] e _{m,n}(x_{1},x_{2}) \cdot \end{aligned}$$
(3.33)

Let u be the exact solution of problem (1.1) with respect to the exact data (3.30). Then, it is obvious that \(u \equiv 0\). Let \(\widetilde{u}^{i,j}\) be the solution of problem (1.1) with the data (3.32)–(3.33). Owing to Theorem 2.1 with \(\mathcal N_{tr} = i - 1\) and \(\mathcal M_{tr} = j - 1\), we have

$$\begin{aligned} \widetilde{u}^{i,j}(x_{1},x_{2},t)&= \sum \limits _{m=1}^{i-1} \sum \limits _{n=1}^{j-1} \Bigg [ \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \bigg ( \frac{ \mathcal {W}_{\beta ,m,n}(t) {\widetilde{\ell }_{k,l}}}{\theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad -\, \frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \widetilde{F}_{k,l}(\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad -\, \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} \int \nolimits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \widetilde{F}_{k,l}(\tau ) \hbox {d}\tau \hbox {d}t }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad +\, \Big (\displaystyle \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \widetilde{F}_{k,l}(\tau ) \hbox {d}\tau \Big ) \bigg ) e_{m,n} (c_k,d_l) \Bigg ] e _{m,n}(x_{1},x_{2}). \end{aligned}$$
(3.34)

Next, we will show that \(\mathbb {E} \big \Vert \widetilde{\ell }^{i,j} - \ell \big \Vert _{ {L^2(\Omega )}}^{2} \rightarrow 0\) and \(\mathbb {E}\big \Vert \widetilde{F}^{i,j}(\cdot ,\cdot ,t) - F(\cdot ,\cdot ,t) \big \Vert ^{2}_{ {L^2(\Omega )}} \rightarrow 0\) but \(\mathbb {E}\big \Vert \widetilde{u}^{i,j}(\cdot ,\cdot ,t) - u(\cdot ,\cdot ,t) \big \Vert ^{2}_{ {L^2(\Omega )}} \nrightarrow 0\) as \(i,j \rightarrow \infty \) for any \(t \in [0,T]\), where \(\mathbb {E}\) denotes the expectation.

\(\underline{{\textbf {Part 1:}}}\) We show that

$$\begin{aligned} \mathbb {E} \big \Vert \widetilde{\ell }^{i,j} - \ell \big \Vert ^{2}_{L^{2}(\Omega )} \rightarrow 0,~~ \text {and}~ \mathbb {E} \big \Vert \widetilde{F}^{i,j} - F \big \Vert ^{2}_{L^{2}(\Omega )} \rightarrow 0. \end{aligned}$$

First, we can find that

$$\begin{aligned} \big \Vert \widetilde{\ell }^{i,j} - \ell \big \Vert _{L^{2}(\Omega )}^{2}&= \sum \limits _{m=1}^{i-1}\sum \limits _{n=1}^{j-1} \Bigg ( \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j}\widetilde{\ell }_{k,l} e _{m,n}(c_{k},d_{l}) \Bigg )^{2} \nonumber \\&= \frac{\pi ^{4}}{i^{\frac{5}{2}}j^{\frac{5}{2}}} \sum \limits _{m=1}^{i-1}\sum \limits _{n=1}^{j-1} \Bigg ( \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \mathcal {Y}_{kl} e _{m,n}(c_{k},d_{l}) \Bigg )^{2}. \end{aligned}$$
(3.35)

Using the properties of the random variables \(\mathcal {Y}_{kl}\), namely \(\mathbb {E}(\mathcal {Y}_{kl}\mathcal {Y}_{k'l'}) = \delta _{kk'} \delta _{ll'}\) for all \(k,l,k',l'\), where \(\delta \) denotes the Kronecker delta, we have

$$\begin{aligned} \mathbb {E} \big \Vert \widetilde{\ell }^{i,j} - \ell \big \Vert ^{2}_{L^{2}(\Omega )} = \frac{\pi ^{4}}{i^{\frac{5}{2}}j^{\frac{5}{2}}}\sum \limits _{m=1}^{i-1}\sum \limits _{n=1}^{j-1} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \mathbb {E}\mathcal {Y}^{2}_{kl} e ^{2}_{m,n}(c_{k},d_{l}) \cdot \end{aligned}$$
(3.36)

From Lemma 2.8, it follows that

$$\begin{aligned} \frac{1}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} e ^{2}_{m,n} (c_{k},d_{l})= \frac{1}{\pi ^{2}},\quad 1 \le m \le i-1,~ 1 \le n \le j-1 \cdot \end{aligned}$$
(3.37)

Therefore, we get

$$\begin{aligned} \mathbb {E} \big \Vert \widetilde{\ell }^{i,j} - \ell \big \Vert ^{2}_{L^{2}(\Omega )} = \frac{ \pi ^{2} (i-1) (j-1) }{ i^{\frac{3}{2}} j^{\frac{3}{2}} } \rightarrow 0, \text { as}~ i,j \rightarrow \infty \cdot \end{aligned}$$
(3.38)

By using similar techniques as above, we have

$$\begin{aligned} \mathbb {E} \big \Vert \widetilde{F}^{i,j}(\cdot ,\cdot ,t) - F(\cdot ,\cdot ,t) \big \Vert ^{2}_{L^{2}(\Omega )} = \frac{t \pi ^{2} (i-1) (j-1) }{i^{2} j^{2}} \rightarrow 0,\text { as}~ i,j \rightarrow \infty , \end{aligned}$$
(3.39)

where we have used the fact that

$$\begin{aligned} \mathbb {E} \Big [ \mathcal {X}_{kl}(t) \mathcal {X}_{pq}(t) \Big ] = \delta _{kp}\delta _{lq}t,~\text {for all}~k,l,p,q. \end{aligned}$$
(3.40)

\(\underline{{\textbf {Part 2:}}}\) We estimate \(\mathbb {E} \big \Vert \widetilde{u}^{i,j}(\cdot ,\cdot ,0) - u(\cdot ,\cdot ,0) \big \Vert _{L^{2}(\Omega )}^{2}\). From (3.34), we have

$$\begin{aligned} \big \Vert \widetilde{u}^{i,j}(\cdot ,\cdot ,t) \big \Vert _{L^{2}(\Omega )}^{2}&= \sum \limits _{m=1}^{i-1} \sum \limits _{n=1}^{j-1} \Bigg [ \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \bigg ( \frac{ \mathcal {W}_{\beta ,m,n}(t) \widetilde{\ell }_{k,l} }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad -\, \frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \widetilde{F}_{k,l}(\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad -\, \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} \int \nolimits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \widetilde{F}_{k,l}(\tau )\hbox {d}\tau \hbox {d}t }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad +\, \Big (\displaystyle \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \widetilde{F}_{k,l}(\tau ) \hbox {d}\tau \Big ) \bigg ) e_{m,n}(c_k,d_l) \Bigg ]^{2} \cdot \end{aligned}$$
(3.41)

We now show the instability of the solution at \(t=0\). From (3.41), one has

$$\begin{aligned}&\big \Vert \widetilde{u}^{i,j}(\cdot ,\cdot ,0) \big \Vert _{L^{2}(\Omega )}^{2} \ge \Bigg [ \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \Big [ \theta _{1} \mathcal {W}_{\beta ,i-1,j-1}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,i-1,j-1}(t) \hbox {d}t \Big ]^{-1}\nonumber \\&\quad \times \Big ( \widetilde{\ell }_{k,l} -\theta _{1}\int \limits _{0}^{T} b(\rho ,\lambda _{i-1,j-1}) \overline{\mathcal {W}}_{\beta ,i-1,j-1}(T-\tau ) \widetilde{F}_{k,l}(\tau ) \hbox {d}\tau \Big ) e _{i-1,j-1}(c_{k},d_{l}) \bigg ]^{2} \cdot \end{aligned}$$

With the inequality \((a+b)^{2} \ge \dfrac{a^{2}}{2} - b^{2}\), one has

$$\begin{aligned} \big \Vert \widetilde{u}^{i,j}(\cdot ,\cdot ,0) \big \Vert _{L^{2}(\Omega )}^{2}&\ge \frac{\pi ^{4}}{i^{2}j^{2}} \bigg [ \theta _{1} \mathcal {W}_{\beta ,i-1,j-1}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,i-1,j-1}(t) \hbox {d}t \bigg ]^{-2}~ \Big (\mathcal {L}_{1} - \mathcal {L}_{2}\Big ) \cdot \end{aligned}$$

where

$$\begin{aligned} \mathcal {L}_{1}&= 2^{-1} \bigg ( \sum \limits _{k=1}^{i}\sum \limits _{l=1}^{j} \widetilde{\ell }_{k,l} e _{i-1,j-1}(c_{k},d_{l}) \bigg )^{2}, \nonumber \\ \mathcal {L}_{2}&= \bigg ( \theta _{1}\int \limits _{0}^{T} b(\rho ,\lambda _{i-1,j-1}) \overline{\mathcal {W}}_{\beta ,i-1,j-1}(T-\tau ) \sum \limits _{k=1}^{i}\sum \limits _{l=1}^{j} \widetilde{F}_{k,l}(\tau ) e _{i-1,j-1}(c_{k},d_{l})\hbox {d}\tau \bigg )^{2}\cdot \end{aligned}$$

We now estimate \(\mathcal {L}_{1}\) and \(\mathcal {L}_{2}\). Using the formula (3.40), we obtain

$$\begin{aligned} \mathbb {E} \mathcal {L}_{1}&= 2^{-1} \mathbb {E} \Bigg (\sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \frac{1}{(ij)^{\frac{1}{4}}} \mathcal {Y}_{kl} e _{i-1,j-1}(c_{k},d_{l}) \Bigg )^{2} \nonumber \\&= 2^{-1} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} (ij)^{-\frac{1}{2}}\mathbb {E}\mathcal {Y}^{2}_{kl} e ^{2}_{i-1,j-1}(c_{k},d_{l}) = \frac{(ij)^{\frac{1}{2}}}{2\pi ^{2}} \cdot \end{aligned}$$
(3.42)

Next, thanks to Hölder’s inequality, we have

$$\begin{aligned} \mathcal {L}_{2} \le \theta _{1}^{2} \int \limits _{0}^{T} b^{2}(\rho ,\lambda _{i-1,j-1}) \overline{\mathcal {W}}^{2}_{\beta ,i-1,j-1}(T-\tau ) \hbox {d}\tau ~\int \limits _{0}^{T} \Big ( \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} (ij)^{-\frac{1}{2}} {\mathcal {X}_{kl}} (\tau ) e _{i-1,j-1}(c_{k},d_{l}) \Big )^{2}\hbox {d}\tau . \end{aligned}$$

For the first factor, we have

$$\begin{aligned} \theta _{1}^{2} \int \limits _{0}^{T} b^{2}(\rho ,\lambda _{i-1,j-1}) \overline{\mathcal {W}}^{2}_{\beta ,i-1,j-1}(T-\tau ) \hbox {d}\tau \le \theta _{1}^{2}\mathcal {M}_{3}^{2} \frac{T^{2\beta -1}}{2\beta -1} \cdot \end{aligned}$$

We know that

$$\begin{aligned} \mathbb {E} \big [{\mathcal {X}}_{kl}(t) {\mathcal {X}}_{pq}(t) \big ]= \delta _{kp}\delta _{lq} t, \quad \quad \forall k,l,p,q. \end{aligned}$$

Hence, we can find that

$$\begin{aligned} \mathbb {E}\bigg ( \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} (ij)^{-\frac{1}{2}} {\mathcal {X}_{kl} (\tau )} e _{i-1,j-1} (c_{k},d_{l}) \bigg )^{2}&= (ij)^{-1} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \mathbb {E} {\mathcal {X}^2_{kl} (\tau )} e ^{2}_{i-1,j-1} (c_{k},d_{l})\\ {}&= \frac{\tau }{\pi ^{2}} \cdot \end{aligned}$$

It is easy to see that

$$\begin{aligned} \mathbb {E} \mathcal {L}_{2} \le \theta _{1}^{2}~\mathcal {M}_{3}^{2} \frac{T^{2\beta -1}}{2\beta -1} \int \limits _{0}^{T} \frac{\tau }{\pi ^{2}} \hbox {d}\tau \le \frac{\theta _{1}^{2} \mathcal {M}_{3}^{2}T^{2\beta +1}}{\pi ^{2}(2\beta -1)} \cdot \end{aligned}$$

Therefore, we get

$$\begin{aligned} \mathbb {E} \big (\mathcal {L}_{1} - \mathcal {L}_{2}\big ) \ge \frac{(ij)^{\frac{1}{2}}}{2\pi ^{2}} - \frac{\theta _{1}^{2} \mathcal {M}_{3}^{2}T^{2\beta +1}}{\pi ^{2}(2\beta -1)} \cdot \end{aligned}$$
(3.43)
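To see the divergence concretely, the lower bound in (3.43) can be evaluated numerically; a sketch with illustrative constants \(\theta_1=0.45\), \(\mathcal{M}_3=1\), \(T=1\), \(\beta=0.52\) (the constants are not fixed at this point of the paper, only \(\beta>1/2\) matters):

```python
import math

# Lower bound (3.43): (ij)^{1/2}/(2 pi^2) - theta1^2 M3^2 T^{2 beta + 1} / (pi^2 (2 beta - 1)).
# The constants below are illustrative; the first term grows like sqrt(i*j)
# while the second is fixed, so the bound diverges as i, j -> infinity.
def lower_bound(i, j, theta1=0.45, M3=1.0, T=1.0, beta=0.52):
    return (math.sqrt(i * j) / (2 * math.pi ** 2)
            - theta1 ** 2 * M3 ** 2 * T ** (2 * beta + 1)
            / (math.pi ** 2 * (2 * beta - 1)))

print(lower_bound(10, 10), lower_bound(1000, 1000))  # the bound grows with i, j
```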

On the other hand, it is clear that

$$\begin{aligned}&\frac{\pi ^{4} (ij)^{-2} }{ \Big | \theta _1 \mathcal {W}_{\beta ,i-1,j-1}(T)+\theta _2\int _{0}^{T} \mathcal {W}_{\beta ,i-1,j-1}(t) \hbox {d}t \Big |^{2} } \ge \frac{ \pi ^{4} }{ (ij)^{2} } {\lambda _{i-1,j-1}^{2s}} \cdot \end{aligned}$$
(3.44)

Combining (3.43) and (3.44), and choosing \(s=1\), we conclude that

$$\begin{aligned} \mathbb {E} \big \Vert \widetilde{u}^{i,j}(\cdot ,\cdot ,0) \big \Vert ^{2}_{L^{2}(\Omega )} \ge \frac{ \pi ^{4} }{ (ij)^{2} } \Big ( (i-1)^{2} + (j-1)^{2} \Big )^{2} \Big ( \frac{(ij)^{\frac{1}{2}}}{2\pi ^{2}} - \frac{\theta _{1}^{2} \mathcal {M}_{3}^{2}T^{2\beta +1}}{\pi ^{2}(2\beta -1)} \Big ) \cdot \end{aligned}$$

Thus, \(\mathbb {E} \big \Vert \widetilde{u}^{i,j}(\cdot ,\cdot ,0) - u(\cdot ,\cdot ,0) \big \Vert _{L^{2}(\Omega )} \rightarrow \infty \) as \(i,j \rightarrow \infty \). Therefore, the problem (1.1) is ill-posed in the Hadamard sense. \(\square \)

4 Regularized Solution and Convergent Rate

Theorem 4.1

Let \(\delta >2\), and let \( \mathcal {N}_{tr}, \mathcal {M}_{tr} \in \mathbb {Z}^{+}\) be such that \(\mathcal {N}_{tr}< i\) and \(\mathcal {M}_{tr} < j\). Assume that \(\ell \in \mathcal {H}^{\delta }(\Omega )\) and \(F \in L^{\infty }(0,T; {L^2(\Omega )})\); then problem (1.1) has a unique solution in \(C([0,T]; {L^2(\Omega )})\). The regularized solution is constructed as follows:

$$\begin{aligned}&\widetilde{U}_{\mathcal {N}_{tr},\mathcal {M}_{tr}}(x_{1},x_{2},t)\nonumber \\&\quad = \sum \limits _{m=1}^{\mathcal {N}_{tr}} \sum \limits _{n=1}^{\mathcal {M}_{tr}} \Bigg [ \frac{ \mathcal {W}_{\beta ,m,n}(t)~ \overline{\ell }^{i,j}_{m,n} }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\qquad - \frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \limits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \overline{F}_{m,n}^{i,j}(\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\qquad - \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \limits _{0}^{T} \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \overline{F}_{m,n}^{i,j}(\tau ) \hbox {d}\tau \hbox {d}t}{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\qquad + \Big (\displaystyle \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \overline{F}_{m,n}^{i,j}(\tau ) \hbox {d}\tau \Big ) \Bigg ] e _{m,n}(x_{1},x_{2}) \cdot \end{aligned}$$
(4.45)

where

$$\begin{aligned} \overline{\ell }_{m,n}^{i,j} = \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \widetilde{\ell } _{kl} e _{m,n}(c_{k},d_{l}), \quad \overline{F}_{m,n}^{i,j}(t) = \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j}\widetilde{F}_{kl}(t) e _{m,n}(c_{k},d_{l}), \end{aligned}$$

then we have the following estimate

$$\begin{aligned}&\mathbb {E} \big \Vert \widetilde{U}_{\mathcal {N}_{tr},\mathcal {M}_{tr}}(\cdot ,\cdot ,t) - u(\cdot ,\cdot ,t) \big \Vert _{L^{2}(\Omega )}^{2}~\text {satisfies the bound given in (4.72)}\cdot \end{aligned}$$

Proof

Using the method of least squares, we get

$$\begin{aligned}&\ell \approx \sum \limits _{m=1}^{\mathcal {N}_{tr}} \sum \limits _{n=1}^{\mathcal {M}_{tr}} \Big ( \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \widetilde{\ell }_{k,l} e _{m,n}(c_{k},d_{l}) \Big ) e _{m,n} := \overline{\ell }^{\mathcal {N}_{tr},\mathcal {M}_{tr}}_{i,j}, \nonumber \\&F(\cdot ,\cdot ,t) \approx \sum \limits _{m=1}^{\mathcal {N}_{tr}} \sum \limits _{n=1}^{\mathcal {M}_{tr}} \Big ( \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \widetilde{F}_{kl}(t) e _{m,n}(c_{k},d_{l}) \Big ) e _{m,n} := \overline{F}_{i,j}^{\mathcal {N}_{tr},\mathcal {M}_{tr}}(t) \cdot \end{aligned}$$
(4.46)

Remark 4.1

The error \(\big \Vert \widetilde{U}_{\mathcal {N}_{tr},\mathcal {M}_{tr}}(\cdot ,\cdot ,t) - u(\cdot ,\cdot ,t) \big \Vert _{L^{2}(\Omega )}^{2}\) is of order

$$\begin{aligned} \max \Big \{ ( i j )^{-1} \mathcal {N}_{j} \mathcal {M}_{i}, \big (\mathcal {N}_{j}+1\big )^{-2\nu } , \big (\mathcal {M}_{i}+1\big )^{-2\nu } \Big \} \cdot \end{aligned}$$
(4.47)

Next, we choose

$$\begin{aligned} \mathcal {N}_{j} = \lfloor \log ^{\frac{1}{2} } j \rfloor ,~~~ \mathcal {M}_{i} = \lfloor \log ^{\frac{1}{2} } i \rfloor , \end{aligned}$$
(4.48)

in which \(\lfloor y \rfloor \) denotes the greatest integer less than or equal to \(y\); then \(\big \Vert \widetilde{U}_{\mathcal {N}_{tr},\mathcal {M}_{tr}}(\cdot ,\cdot ,t) - u(\cdot ,\cdot ,t) \big \Vert _{L^{2}(\Omega )}^{2}\) is of order

$$\begin{aligned} \max \bigg ( \frac{\log (ij)}{ij}, \big (\log j\big )^{-\nu }, \big ( \log i \big )^{-\nu } \bigg ). \end{aligned}$$
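The truncation choice (4.48) and the resulting rate are easy to tabulate; a small sketch (treating \(\nu\) as a free parameter) shows both behave as claimed:

```python
import math

def truncation_levels(i, j):
    # Choice (4.48): N_j = floor(log^{1/2} j), M_i = floor(log^{1/2} i).
    return math.floor(math.sqrt(math.log(j))), math.floor(math.sqrt(math.log(i)))

def error_order(i, j, nu):
    # Theoretical order max( log(ij)/(ij), (log j)^{-nu}, (log i)^{-nu} ).
    return max(math.log(i * j) / (i * j),
               math.log(j) ** (-nu),
               math.log(i) ** (-nu))

print(truncation_levels(10**4, 10**4))   # the truncation levels grow very slowly
print(error_order(10**2, 10**2, nu=2.0),
      error_order(10**3, 10**3, nu=2.0))  # the bound improves as i, j grow
```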

Next, the proof of Theorem 4.1 proceeds through the following steps:

Part 1: In this part, subtracting (2.18) from (4.45), we have

$$\begin{aligned}&\widetilde{U}_{\mathcal {N}_{tr},\mathcal {M}_{tr}} (x_{1},x_{2},t) - u(x_{1},x_{2},t) \nonumber \\&\quad = \sum \limits _{m=1}^{\mathcal {N}_{tr}} \sum \limits _{n=1}^{\mathcal {M}_{tr}} \Bigg [ \frac{ \mathcal {W}_{\beta ,m,n}(t) \Psi _{m,n}^{i,j} }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\qquad - \frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\qquad - \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} \int \nolimits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \hbox {d}t}{ \theta _{1}\mathcal {W}_{\beta ,m,n}(T) + \theta _{2}\int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\qquad + \Big (\displaystyle \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \Big ) \Bigg ] e _{m,n}(x_{1},x_{2}) \nonumber \\&\qquad + \sum \limits _{m> \mathcal {N}_{tr}} \sum \limits _{n \le \mathcal {M}_{tr}} u_{m,n}(t) e _{m,n}(x_{1},x_{2})+\sum \limits _{m \le \mathcal {N}_{tr}} \sum \limits _{n> \mathcal {M}_{tr}} u_{m,n}(t) e _{m,n}(x_{1},x_{2}) \nonumber \\&\qquad + \sum \limits _{m> \mathcal {N}_{tr}} \sum \limits _{n > \mathcal {M}_{tr}} u_{m,n}(t) e _{m,n}(x_{1},x_{2}). \end{aligned}$$
(4.49)

where \(\Psi _{m,n}^{i,j} = \overline{\ell }_{m,n}^{i,j} - \ell _{m,n}\) and \(\Upsilon _{m,n}^{i,j} = \overline{F}_{m,n}^{i,j}(\tau ) - F_{m,n}(\tau )\). We now estimate \(\Psi _{m,n}^{i,j}\) and \(\Upsilon _{m,n}^{i,j}\) in two steps.

\({\underline{{\textbf {Step 1:}}}}\) \(\mathbb {E} \big [\Psi _{m,n}^{i,j}\big ]^{2}\) can be bounded as follows:

$$\begin{aligned} \Psi _{m,n}^{i,j}&= \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \widetilde{\ell } _{k,l} e _{m,n}\big ( c_{k},d_{l} \big ) - \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \ell (c_{k},d_{l}) e _{m,n}(c_{k},d_{l}) + \delta _{i,j,m,n} \nonumber \\&= \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \theta _{k,l} \mathcal {Y}_{k,l} e _{m,n}(c_{k},d_{l}) + \delta _{i,j,m,n}. \end{aligned}$$
(4.50)

By the familiar inequality \((a+b)^{2} \le 2a^{2}+2b^{2}\), valid for all \(a,b\), one has

$$\begin{aligned} \big [ \Psi _{m,n}^{i,j} \big ]^{2} \le 2 \big [ \frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \theta _{k,l} \mathcal {Y}_{k,l} e _{m,n}(c_{k},d_{l}) \big ]^{2} + 2 |\delta _{i,j,m,n}|^{2}. \end{aligned}$$

Noting that \(0 < \mathcal {B}_{min} \le \theta _{k,l} \le \mathcal {B}_{max}\), that \(\mathbb {E} \big (\mathcal {Y}_{kl} \mathcal {Y}_{pq}\big ) = \delta _{kp} \delta _{lq} \), and that \(\dfrac{1}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} e _{m,n}^{2} (c_{k},d_{l}) = \pi ^{-2}\), we get

$$\begin{aligned} \mathbb {E} \big [ \Psi _{m,n}^{i,j} \big ]^{2} \le \frac{2\pi ^{4}}{(ij)^{2}} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \theta ^{2}_{k,l} \mathbb {E} \mathcal {Y}^{2}_{k,l} e ^{2}_{m,n}(c_{k},d_{l}) + 2 \big | \delta _{i,j,m,n} \big |^{2} \le \dfrac{2\pi ^{2}\mathcal {B}_{max}}{ij} + 2 |\delta _{i,j,m,n}|^{2}. \end{aligned}$$
(4.51)
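The discrete orthogonality identity \(\frac{1}{ij}\sum_{k=1}^{i}\sum_{l=1}^{j} e^{2}_{m,n}(c_k,d_l)=\pi^{-2}\) used above can be checked numerically; the sketch below assumes a midpoint grid \(c_k=(2k-1)\pi/(2i)\), \(d_l=(2l-1)\pi/(2j)\), since the sample points are not specified in this excerpt:

```python
import numpy as np

# Numerical check of (1/(ij)) * sum_{k,l} e_{m,n}(c_k, d_l)^2 = 1/pi^2,
# with e_{m,n}(x, y) = (2/pi) sin(mx) sin(ny).  The midpoint grid below is
# an assumption; the exact identity requires m < i and n < j.
i, j, m, n = 50, 60, 3, 7
c = (2 * np.arange(1, i + 1) - 1) * np.pi / (2 * i)
d = (2 * np.arange(1, j + 1) - 1) * np.pi / (2 * j)
e = (2 / np.pi) * np.outer(np.sin(m * c), np.sin(n * d))
val = (e ** 2).sum() / (i * j)
print(val, 1 / np.pi ** 2)
```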

From the assumption \(\ell \in \mathcal {H}^{\nu }(\Omega )\), there exists a constant \(\mathcal {A}_{4}\) such that

$$\begin{aligned} \big | \delta _{i,j,m,n} \big | \le \mathcal {A}_{4} \big ( i^{-\nu } + j^{-\nu } \big ) \big \Vert \ell \big \Vert _{\mathcal {H}^{\nu }(\Omega )} . \end{aligned}$$
(4.52)

Moreover, since \(\ell \in \mathcal {H}^{\nu }(\Omega )\), we have

$$\begin{aligned} \big | \ell _{m,n} \big | \le \lambda _{m,n}^{-\frac{\nu }{2}} \Vert \ell \Vert _{\mathcal {H}^{\nu }(\Omega )}, \quad \quad \forall m,n \in \mathbb {Z}^{+}. \end{aligned}$$

From (4.51) and the formula (2.26), we can see that

$$\begin{aligned} |\delta _{i,j,m,n}|&\le \bigg ( 2\sum \limits _{p=1}^{\infty } \lambda ^{-\frac{\nu }{2}}_{2pi-m,n} + 2\sum \limits _{q=1}^{\infty } \lambda ^{-\frac{\nu }{2}}_{m,2qj-n} + 4\sum \limits _{p=1}^{\infty } \sum \limits _{q=1}^{\infty } \lambda ^{-\frac{\nu }{2}}_{2pi-m,2qj-n} \bigg ) \Vert \ell \Vert _{\mathcal {H}^{\nu }(\Omega )} \nonumber \\&\le \underbrace{\bigg ( 2\sum \limits _{p=1}^{\infty } (pi)^{-\nu } + 2\sum \limits _{q=1}^{\infty } (qj)^{-\nu } + 4\sum \limits _{p=1}^{\infty } \sum \limits _{q=1}^{\infty } \Big [ (pi)^{2} + (qj)^{2} \Big ]^{-\nu } \bigg )}_{\mathcal {A}_{1}} \Vert \ell \Vert _{\mathcal {H}^{\nu }(\Omega )}. \end{aligned}$$
(4.53)

We estimate \(\mathcal {A}_{1}\) as follows:

$$\begin{aligned} \mathcal {A}_{1}&\le \underbrace{\bigg ( \max \Big \{2\sum \limits _{p=1}^{\infty } p^{-\nu } ,2\sum \limits _{q=1}^{\infty } q^{-\nu } \Big \} + 4\sum \limits _{p=1}^{\infty } \sum \limits _{q=1}^{\infty } p^{-\frac{\nu }{2} } q^{-\frac{\nu }{2}} \bigg )}_{\mathcal {A}_{2}} \big (i^{-\nu } + j^{-\nu }\big ). \end{aligned}$$
(4.54)

Combining (4.52) and (4.54), we conclude that

$$\begin{aligned} \mathbb {E} \big [ \Psi _{m,n}^{i,j} \big ]^{2} \le \dfrac{2\pi ^{2}\mathcal {B}_{max}}{ij} + \frac{2 \mathcal {A}_{4}^{2} \big \Vert \ell \big \Vert ^{2}_{\mathcal {H}^{\nu }(\Omega )}}{ij},~\text {for}~i,j ~~\text {large enough}. \end{aligned}$$
(4.55)

\(\underline{{\textbf {Step 2:}}}\) We estimate \(\mathbb {E} \big [ \Upsilon _{m,n}^{i,j}(\tau ) \big ]^{2} \) similarly. We have

$$\begin{aligned} \Big [ \Upsilon _{m,n}^{i,j}(\tau ) \Big ]^{2} \le 2 \bigg (\frac{\pi ^{2}}{ij} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \varsigma {\mathcal {X}}_{k,l}(\tau ) e _{m,n}(c_{k},d_{l}) \bigg )^{2} + 2 \big |\Theta _{i,j,m,n}(\tau )\big |^{2} \end{aligned}$$
(4.56)

Since \(F \in L^{\infty }(0,T;L^{2}(\Omega ))\), we have

$$\begin{aligned} \big |F_{m,n}(\tau ) \big | \le \lambda _{m,n}^{-\frac{\nu }{2}} \big \Vert F \big \Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}. \end{aligned}$$

Here, \(\Theta _{i,j,m,n}\) is estimated similarly to (4.53):

$$\begin{aligned} \big | \Theta _{i,j,m,n}(\tau ) \big | \le \mathcal {A}_{4} \Big ( i^{-\nu } + j^{-\nu } \Big ) \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}. \end{aligned}$$
(4.57)

Recalling that \(\mathbb {E}\big ( {\mathcal {X}}_{kl}(\tau ) {\mathcal {X}}_{pq}(\tau ) \big ) = \delta _{kp}\delta _{lq} \tau \), one has

$$\begin{aligned} \mathbb {E} \big [\Upsilon _{m,n}^{i,j} (\tau ) \big ]^{2}&\le \frac{2\pi ^{4}}{i^{2} j^{2}} \sum \limits _{k=1}^{i} \sum \limits _{l=1}^{j} \varsigma ^{2} \mathbb {E} {\mathcal {X}}_{kl}^{2}(\tau ) e ^{2}_{m,n}(c_{k},d_{l}) + 2 \big |\Theta _{i,j,m,n}\big |^{2} \nonumber \\&\le \frac{1}{ij} \Big ( 2 \pi ^{2} \varsigma ^{2} \tau + 2 \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \Big ),~\text {for}~i,j~\text {large enough}. \end{aligned}$$
(4.58)

From (4.49), we have

$$\begin{aligned} \big \Vert \widetilde{U}_{\mathcal {N}_{tr},\mathcal {M}_{tr}}(\cdot ,\cdot ,t)&- u(\cdot ,\cdot ,t) \big \Vert _{L^{2}(\Omega )}^{2} \le \sum \limits _{m=1}^{\mathcal {N}_{tr}} \sum \limits _{n=1}^{\mathcal {M}_{tr}} \Bigg [ \frac{ \mathcal {W}_{\beta ,m,n}(t) \Psi _{m,n}^{i,j} }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&- \frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} b(\rho ,\lambda _{m,n} ) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&- \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} \int \nolimits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \hbox {d}t}{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&+ \Big (\displaystyle \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \Big ) \Bigg ]^{2} \nonumber \\&+ \sum \limits _{m> \mathcal {N}_{tr}} \sum \limits _{n \le \mathcal {M}_{tr}} u^{2}_{m,n}(t) + \sum \limits _{m \le \mathcal {N}_{tr}} \sum \limits _{n> \mathcal {M}_{tr}} u^{2}_{m,n}(t) + \sum \limits _{m> \mathcal {N}_{tr}} \sum \limits _{n > \mathcal {M}_{tr}} u^{2}_{m,n}(t) \nonumber \\&= \text {E}_{1} + \text {E}_{2} . \end{aligned}$$

where

$$\begin{aligned} \text {E}_{1}&= \sum \limits _{m=1}^{\mathcal {N}_{tr}} \sum \limits _{n=1}^{\mathcal {M}_{tr}} \Bigg [ \frac{ \mathcal {W}_{\beta ,m,n}(t) \Psi _{m,n}^{i,j} }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad -\, \frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} b(\rho ,\lambda _{m,n} ) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad -\, \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} \int \nolimits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \hbox {d}t}{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad +\, \Big (\displaystyle \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \Big ) \Bigg ]^{2}, \nonumber \\ \text {E}_{2}&= \sum \limits _{m> \mathcal {N}_{tr}} \sum \limits _{n \le \mathcal {M}_{tr}} u^{2}_{m,n}(t) + \sum \limits _{m \le \mathcal {N}_{tr}} \sum \limits _{n> \mathcal {M}_{tr}} u^{2}_{m,n}(t) + \sum \limits _{m> \mathcal {N}_{tr}} \sum \limits _{n > \mathcal {M}_{tr}} u^{2}_{m,n}(t). \end{aligned}$$
(4.59)

Estimate (4.59) is carried out below; for convenience, we divide the evaluation into two parts.

Part 1 Estimate of \(\text {E}_{1}\). By the inequality \((a+b+c+d)^{2} \le 4 (a^{2}+b^{2}+c^{2}+d^{2})\), valid for all \(a,b,c,d\), we get

$$\begin{aligned} {\text {E}_{1} } \le&\sum \limits _{m=1}^{\mathcal {N}_{tr}} \sum \limits _{n=1}^{\mathcal {M}_{tr}} \Bigg [ \frac{ \mathcal {W}_{\beta ,m,n}(t) \Psi _{m,n}^{i,j} }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad -\, \frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \limits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad -\, \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \limits _{0}^{T} \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \hbox {d}t}{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \limits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \nonumber \\&\quad +\, \Big (\displaystyle \int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \Big ) \Bigg ]^{2} \nonumber \\&\le 4 \Big ( \mathcal {B}^{m,n}_{i,j} + \mathcal {C}^{m,n}_{i,j}(T) + \mathcal {D}^{m,n}_{i,j}(t) + \mathcal {E}^{m,n}_{i,j}(t) \Big ). \end{aligned}$$
(4.60)

We now estimate \(\mathcal {B}^{m,n}_{i,j}\), \(\mathcal {C}_{i,j}^{m,n}(T)\), \(\mathcal {D}^{m,n}_{i,j}(t)\), and \(\mathcal {E}^{m,n}_{i,j}(t)\), proceeding through the following claims.

\({\underline{Claim 1:}}\) First of all, from Lemmas 2.1 and 2.4, we have

$$\begin{aligned} \mathbb {E} \left( \frac{ \mathcal {W}_{\beta ,m,n}(t)~\Psi _{m,n}^{i,j} }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \right) ^{2} \le \frac{2\mathcal {M}^{2}}{\theta _{1}} \left( \dfrac{\pi ^{2}\mathcal {B}_{\max }}{ij} + \frac{\mathcal {A}_{4}^{2} \big \Vert \ell \big \Vert ^{2}_{\mathcal {H}^{\nu }(\Omega )}}{ij} \right) . \end{aligned}$$
(4.61)

This leads to

$$\begin{aligned} \mathbb {E}(\mathcal {B}_{i,j}^{m,n}) \le \frac{2}{\theta _{1}} \frac{\mathcal {M}^{2}}{ij} \bigg ( \pi ^{2}\mathcal {B}_{\max } + \mathcal {A}_{4}^{2} \Vert \ell \Vert ^{2}_{\mathcal {H}^{\nu }(\Omega )} \bigg )\mathcal {N}_{tr}\mathcal {M}_{tr} \cdot \end{aligned}$$
(4.62)

\({\underline{Claim 2:}}\) Next, we see that \(\mathcal {C}_{i,j}^{m,n}(T)\) can be bounded as follows:

$$\begin{aligned} \mathcal {C}_{i,j}^{m,n}(T)&\le \left( \frac{ \theta _{1} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau }{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \right) ^{2} \nonumber \\&\le \mathcal {M}^{2} \left( \int \limits _{0}^{T} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(T-\tau ) \Upsilon _{m,n}^{i,j}(\tau )\hbox {d}\tau \right) ^{2} \cdot \end{aligned}$$
(4.63)

Using the inequality \(b(\rho ,\lambda _{m,n}) \le 1\) together with H&ouml;lder&rsquo;s inequality, we obtain

$$\begin{aligned} \mathcal {C}_{i,j}^{m,n}(T) \le \big ( \mathcal {M} \mathcal {M}_{3}\big )^{2} \int \limits _{0}^{T} (T-\tau )^{2(\beta -1)} \hbox {d}\tau \int \limits _{0}^{T} \big |\Upsilon _{m,n}^{i,j}(\tau )\big |^{2} \hbox {d}\tau \cdot \end{aligned}$$
(4.64)

From (4.64), we have

$$\begin{aligned} \mathbb {E} \big (\mathcal {C}_{i,j}^{m,n}(T)\big )&\le \frac{\big ( \mathcal {M} \mathcal {M}_{3}\big )^{2} T^{2\beta -1}}{2\beta -1} \int \limits _{0}^{T} \mathbb {E}\big |\Upsilon _{m,n}^{i,j}(\tau )\big |^{2} \hbox {d}\tau \nonumber \\&\le \Big ( \frac{1}{ij} \Big )~ \frac{\big ( \mathcal {M} \mathcal {M}_{3}\big )^{2} T^{2\beta }}{2\beta -1} ~\Big ( 2 \pi ^{2} \varsigma ^{2} T + 2 \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \Big )\mathcal {N}_{tr}\mathcal {M}_{tr} \cdot \end{aligned}$$
(4.65)

\({\underline{Claim 3:}}\) Estimate of \(\mathcal {D}_{i,j}^{m,n}(t)\). We have

$$\begin{aligned} \big (\mathcal {D}_{i,j}^{m,n}\big )(t) \le \left( \frac{ \theta _{2} \mathcal {W}_{\beta ,m,n}(t) \int \nolimits _{0}^{T} \int \nolimits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \hbox {d}t}{ \theta _{1} \mathcal {W}_{\beta ,m,n}(T) + \theta _{2} \int \nolimits _{0}^{T} \mathcal {W}_{\beta ,m,n}(t) \hbox {d}t } \right) ^{2}. \end{aligned}$$
(4.66)

Hence, from Lemma 2.4, combining (4.66) with (4.68), we get

$$\begin{aligned} \mathbb {E} \big ( \mathcal {D}_{i,j}^{m,n}(t) \big ) \le \Big (\frac{2}{ij} \Big ) \frac{\mathcal {M}^{2}\mathcal {M}_{3}^{2}}{2\beta -1} \bigg ( \pi ^{2} \varsigma ^{2} \frac{T^{2\beta +2}}{2\beta +2} + \frac{T^{2\beta +1}}{2\beta +1} \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \bigg ) \mathcal {N}_{tr}\mathcal {M}_{tr} \cdot \end{aligned}$$
(4.67)

\({\underline{Claim 4:}}\) Estimate of \(\mathcal {E}_{i,j}^{m,n}(t)\). We have

$$\begin{aligned} (\mathcal {E}_{i,j}^{m,n})(t)&\le \bigg (\int \limits _{0}^{t} b(\rho ,\lambda _{m,n}) \overline{\mathcal {W}}_{\beta ,m,n}(t-\tau ) \Upsilon _{m,n}^{i,j}(\tau ) \hbox {d}\tau \bigg )^{2} \nonumber \\&\le \mathcal {M}_{3}^{2} \int \limits _{0}^{t} (t-\tau )^{2\beta -2} \hbox {d}\tau \int \limits _{0}^{t} \big |\Upsilon _{m,n}^{i,j}(\tau ) \big |^{2}\hbox {d}\tau \le \mathcal {M}_{3}^{2} \frac{t^{2\beta -1}}{2\beta -1} \int \limits _{0}^{t} \big |\Upsilon _{m,n}^{i,j}(\tau ) \big |^{2}\hbox {d}\tau . \end{aligned}$$

This implies that

$$\begin{aligned} \mathbb {E} \big ( (\mathcal {E}_{i,j}^{m,n})(t) \big )&\le \Big (\frac{1}{ij} \Big ) \frac{\mathcal {M}_{3}^{2}~t^{2\beta -1}}{2\beta -1} \int \limits _{0}^{t} \mathbb {E}\big |\Upsilon _{m,n}^{i,j}(\tau ) \big |^{2}\hbox {d}\tau \nonumber \\&\le \Big (\frac{1}{ij} \Big ) \frac{\mathcal {M}_{3}^{2}}{2\beta -1} \Big ( 2 \pi ^{2} \varsigma ^{2} t^{2\beta +1} + 2t^{2\beta } \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \Big ), \end{aligned}$$

and obviously

$$\begin{aligned} \int \limits _{0}^{T} \mathbb {E} \big (\mathcal {E}_{i,j}^{m,n}\big )(t) \hbox {d}t \le \Big (\frac{1}{ij} \Big ) \frac{2\mathcal {M}_{3}^{2}}{2\beta -1} \Big ( \pi ^{2} \varsigma ^{2} \frac{T^{2\beta +2}}{2\beta +2} + \frac{T^{2\beta +1}}{2\beta +1} \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \Big ). \end{aligned}$$
(4.68)

\({\underline{Claim 5:}}\) Arguing as in the evaluation (4.64), we obtain

$$\begin{aligned} \mathbb {E} \big (\mathcal {E}^{m,n}_{i,j}(t)\big ) \le \Big ( \frac{1}{ij} \Big )~ \frac{\mathcal {M}^{2} \mathcal {M}_{3}^{2} t^{2\beta }}{2\beta -1} ~\Big ( 2 \pi ^{2} \varsigma ^{2} t + 2 \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \Big )\mathcal {N}_{tr}\mathcal {M}_{tr} \cdot \end{aligned}$$
(4.69)

Combining (4.60) to (4.69), we conclude that

$$\begin{aligned} \mathbb {E}(\text {E}_{1})&\le \frac{8 \mathcal {N}_{tr}\mathcal {M}_{tr}}{ij} \Bigg ( \frac{1}{\theta _{1}} \mathcal {M}^{2} \bigg ( \pi ^{2}\mathcal {B}_{max} + \mathcal {A}_{4}^{2} \big \Vert \ell \big \Vert ^{2}_{\mathcal {H}^{\nu }(\Omega )} \bigg ) \nonumber \\&\quad +\, \frac{\mathcal {M}^{2}\mathcal {M}_{3}^{2}T^{2\beta }}{2\beta -1} ~\Big ( 2 \pi ^{2} \varsigma ^{2} T + 2 \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \Big ) \nonumber \\&\quad +\, \frac{ \mathcal {M}^{2}\mathcal {M}_{3}^{2} }{2\beta -1} \Big ( \pi ^{2} \varsigma ^{2} \frac{T^{2\beta +2}}{2\beta +2} + \frac{T^{2\beta +1}}{2\beta +1} \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \Big ) \Bigg ) \cdot \end{aligned}$$
(4.70)

Part 2 Estimate of \(\text {E}_{2}\). Assuming that \(u \in C([0,T];\mathcal {H}^{\nu }(\Omega ))\), we have

$$\begin{aligned} \big |u_{m,n}(t)\big | \le \frac{\Vert u(\cdot ,\cdot ,t)\Vert _{\mathcal {H}^{\nu }(\Omega )}}{\lambda _{m,n}^{\frac{\nu }{2}}},~~\forall m,n \in \mathbb {Z}^{+}. \end{aligned}$$

It is easy to see that

$$\begin{aligned} \text {E}_{2} \le \Big ( (\mathcal {N}_{tr}+1)^{-2\nu } + (\mathcal {M}_{tr}+1)^{-2\nu } + (\mathcal {N}_{tr}+1)^{-2\nu }(\mathcal {M}_{tr}+1)^{-2\nu } \Big ) \Vert u(\cdot ,\cdot ,t)\Vert ^{2}_{\mathcal {H}^{\nu }(\Omega )} \cdot \end{aligned}$$
(4.71)

Combining (4.70) and (4.71), we conclude that

$$\begin{aligned}&\mathbb {E} \big \Vert \widetilde{U}_{\mathcal {N}_{tr},\mathcal {M}_{tr}}(\cdot ,\cdot ,t) - u(\cdot ,\cdot ,t) \big \Vert _{L^{2}(\Omega )}^{2} \nonumber \\&\quad \le \frac{8 \mathcal {N}_{tr}\mathcal {M}_{tr}}{ij} \Bigg ( \frac{1}{\theta _{1}} \mathcal {M}^{2} \bigg ( \pi ^{2}\mathcal {B}_{max} + \mathcal {A}_{4}^{2} \big \Vert \ell \big \Vert ^{2}_{\mathcal {H}^{\nu }(\Omega )} \bigg ) \nonumber \\&\qquad +\, \frac{\mathcal {M}^{2}\mathcal {M}_{3}^{2}T^{2\beta }}{2\beta -1} ~\Big ( 2 \pi ^{2} \varsigma ^{2} T + 2 \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \Big ) \nonumber \\&\qquad +\, \frac{ \mathcal {M}^{2}\mathcal {M}_{3}^{2} }{2\beta -1} \Big ( \pi ^{2} \varsigma ^{2} \frac{T^{2\beta +2}}{2\beta +2} + \frac{T^{2\beta +1}}{2\beta +1} \mathcal {A}_{4}^{2} \Vert F\Vert _{L^{\infty }(0,T;L^{2}(\Omega ))}^{2} \Big ) \Bigg ) \nonumber \\&\qquad +\, \Big ( (\mathcal {N}_{tr}+1)^{-2\nu } + (\mathcal {M}_{tr}+1)^{-2\nu } + (\mathcal {N}_{tr}+1)^{-2\nu }(\mathcal {M}_{tr}+1)^{-2\nu } \Big ) \Vert u(\cdot ,\cdot ,t)\Vert ^{2}_{\mathcal {H}^{\nu }(\Omega )} \cdot \end{aligned}$$
(4.72)

The proof of the theorem is complete. \(\square \)

5 Simulation

We present a numerical example. We choose \(\Omega = (0,\pi ) \times (0,\pi )\), \(T = 1\), \(\lambda _{m,n} = m^{2}+n^{2}\), \( e _{m,n}(x,y) = \dfrac{2}{\pi } \sin (mx) \sin (ny)\), \(s=1\), \(\theta _{1}=\theta _{2} = 0.45\), and \(\beta =0.52\). In this example, we test the following problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial ^{\beta }_t \big (u+\rho (-\Delta )u \big )(x_{1},x_{2},t)+(-\Delta ) u(x_{1},x_{2},t)=F(x_{1},x_{2},t),\qquad &{}\text { in } (0,\pi ) \times (0,\pi ) \times (0,1],\\ u= 0 \qquad &{}\text { in } (0,\pi ) \times (0,\pi )\times (0,1), \\ \theta _1 u(x_{1},x_{2},1)+\theta _2 \displaystyle \int _{0}^{1}u(x_{1},x_{2},t)~\hbox {d}t=\ell (x_{1},x_{2}),&{}\text { in } (0,\pi ) \times (0,\pi ). \end{array}\right. } \end{aligned}$$
(5.73)

For the discretization, we take grid points \((x_{i},y_{j})\) on \((0,\pi ) \times (0,\pi )\), for \(i=\overline{1,N_{x}+1}\), \(j=\overline{1,N_{y}+1}\), satisfying

$$\begin{aligned} \Delta x = \frac{\pi }{2m}, \Delta y = \frac{\pi }{2n},~x_{0}=0,x_{N_{x}+1}=\pi ,~y_{0}=0,~y_{N_{y}+1}=\pi . \end{aligned}$$

where

$$\begin{aligned} \ell (x,y) = \sin (2x) \sin (4y) ,~~F(x,y,t) = t^{2} \sin (2x) \sin (4y). \end{aligned}$$
(5.74)

The pair of noisy measurements \(\big (\ell _{\epsilon }(x_{1},x_{2}), F_{\epsilon }(x_{1},x_{2},t) \big )\) is described as follows:

$$\begin{aligned} \ell _{\epsilon }(\cdot ,\cdot ) = \ell (\cdot ,\cdot ) + ``\text {noise}_{1}'',~~~ F_{\epsilon }(\cdot ,\cdot ,t) = F(\cdot ,\cdot ,t) + ``\text {noise}_{2}''. \end{aligned}$$
(5.75)

where \( {\text {noise}_{1} } = \alpha \mathcal {N}(0,1)\) and \({\text {noise}_{2} } = \vartheta \Psi (t)\), with \(\Psi (t)\) a Brownian motion and \( \alpha , \vartheta \in (0,1)\) constants. The behavior of the input functions and their noisy versions is shown in Fig. 1. Here, we generate \(\mathcal{N}(0, 1)\) with the function numpy.random.randn(.) in Python. With the functions chosen as in (5.74) and the exact solution as in (2.20), we can easily compute the solution \(u(x,y,t)\) of problem (5.73). In the same way, with the functions \(\ell _{\epsilon }(x_ {1}, x_{2}) \) and \( F_{\epsilon }(x_{1},x_{2},t)\) given as in (5.75), we obtain the noisy solution \(u_ {\epsilon }(x_{1},x_{2},t)\). During the calculation, we also need the following integral identity, see [21]:

$$\begin{aligned} \int \limits _{0}^{x} (x-t)^{\beta -1} E^{\gamma }_{\alpha ,\beta }\big [a(x-t)^{\alpha }\big ] t^{\nu -1}\hbox {d}t = x^{\beta +\nu -1} E_{\alpha ,\beta +\nu }^{\gamma }(ax^{\alpha }). \end{aligned}$$
(5.76)
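The three-parameter (Prabhakar) Mittag-Leffler function appearing in (5.76) can be evaluated, for moderate arguments, by truncating its defining power series \(E^{\gamma }_{\alpha ,\beta }(z)=\sum _{k\ge 0}\frac{(\gamma )_{k}}{k!\,\Gamma (\alpha k+\beta )}z^{k}\). A minimal sketch (the helper name and the truncation length are our own choices, not from the paper):

```python
from math import gamma

def mittag_leffler(z, a, b, g=1.0, terms=100):
    """Three-parameter Mittag-Leffler E^g_{a,b}(z) via its power series.

    Adequate for moderate |z|; the Pochhammer symbol (g)_k and k! are
    updated incrementally to avoid recomputing factorials.
    """
    s, poch, fact = 0.0, 1.0, 1.0      # (g)_0 = 1 and 0! = 1
    for k in range(terms):
        s += poch * z**k / (fact * gamma(a * k + b))
        poch *= g + k                  # (g)_{k+1} = (g)_k * (g + k)
        fact *= k + 1                  # (k+1)!
    return s
```

As a sanity check, \(E^{1}_{1,1}(z)=e^{z}\) and \(E^{1}_{1,2}(z)=(e^{z}-1)/z\).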

We measure the relative root-mean-square error

$$\begin{aligned} Error(t) = \frac{ \bigg ( \displaystyle \sum \nolimits _{m=1}^{N_{x}}\sum \nolimits _{n=1}^{N_{y}} \big |u_{\epsilon }(x_{m},y_{n},t) - u(x_{m},y_{n},t)\big |^{2} \bigg )^{\frac{1}{2}} }{ \Big ( \displaystyle \sum \nolimits _{m=1}^{N_{x}}\sum \nolimits _{n=1}^{N_{y}} \big |u(x_{m},y_{n},t)\big |^{2} \Big )^{\frac{1}{2}} } \cdot \end{aligned}$$
(5.77)
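In NumPy this error can be computed in one line over the grid values at a fixed time \(t\) (a minimal sketch; the function name is ours):

```python
import numpy as np

def relative_rmse(u_eps, u):
    """Relative root-mean-square error (5.77) between the regularized
    solution u_eps and the exact solution u, both sampled on the
    (N_x, N_y) spatial grid at a fixed time t."""
    return np.sqrt(np.sum((u_eps - u) ** 2)) / np.sqrt(np.sum(u ** 2))
```

For example, if every grid value is off by 10%, the error is 0.1.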

The time integrals are approximated by the composite Simpson rule, with an even number \(n\) of subintervals of length \(h=(b-a)/n\):

$$\begin{aligned} \int \limits _{a}^{b} f(x)\hbox {d}x \approx \frac{h}{3} \Bigg [ f(x_{0}) + 4 \Big (\sum \limits _{i=1,i~ \text {odd}}^{n-1} f(x_{i})\Big ) + 2 \Big ( \sum \limits _{i=2,i~\text {even}}^{n-2} f(x_{i}) \Big ) + f\big (x_{n} \big ) \Bigg ]. \end{aligned}$$
(5.78)
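A short vectorized implementation of this rule (the function name and default \(n\) are our own choices):

```python
import numpy as np

def simpson(f, a, b, n=200):
    """Composite Simpson rule (5.78) on [a, b] with n subintervals (n even)."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    fx = f(x)
    # endpoints + 4 * (odd-index nodes) + 2 * (interior even-index nodes)
    return h / 3 * (fx[0] + 4 * fx[1:-1:2].sum() + 2 * fx[2:-1:2].sum() + fx[-1])
```

For instance, \(\int _0^{\pi } \sin x \,\hbox {d}x = 2\) is reproduced to high accuracy.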

Choosing \( N_{x} = N_{y} = 200\), the values of the regularized solution on the grid are stored in the matrix below; the matrix for \(u_{\epsilon }\) is formed in the same way:

$$\begin{aligned} \hspace{-1cm}\widehat{U}=\left[ \begin{array}{ccccc} \widehat{u}^{N}\left( x^{1}, y^{1}, t\right) &{} \widehat{u}^{N}\left( x^{1}, y^{2}, t\right) &{} \cdots &{} \widehat{u}^{N}\left( x^{1}, y^{N_{y}}, t\right) \\ \widehat{u}^{N}\big (x^{2}, y^{1}, t\big ) &{} \widehat{u}^{N}\left( x^{2}, y^{2}, t\right) &{} \cdots &{} \widehat{u}^{N}\big (x^{2}, y^{N_{y}}, t\big ) \\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ \widehat{u}^{N}\big (x^{N_{x}}, y^{1}, t\big ) &{} \widehat{u}^{N}\left( x^{N_{x}}, y^{2}, t\right) &{} \cdots &{} \widehat{u}^{N}\big (x^{N_{x}}, y^{N_{y}}, t\big ) \end{array}\right] _{N_{x} \times N_{y}} \end{aligned}$$
(5.79)
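Assembling such a grid matrix is straightforward with a meshgrid; in the sketch below `u_hat` is only a placeholder for the truncated-series evaluation of the regularized solution, not the paper's actual formula:

```python
import numpy as np

Nx = Ny = 200
# interior grid points x^1, ..., x^{N_x} and y^1, ..., y^{N_y}
x = np.linspace(0.0, np.pi, Nx + 2)[1:-1]
y = np.linspace(0.0, np.pi, Ny + 2)[1:-1]
X, Y = np.meshgrid(x, y, indexing="ij")   # rows index x, columns index y

def u_hat(X, Y, t):
    # placeholder for the truncated Fourier-series solution (hypothetical)
    return t ** 2 * np.sin(2 * X) * np.sin(4 * Y)

U_hat = u_hat(X, Y, 0.5)   # the N_x-by-N_y matrix of (5.79) at t = 0.5
```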

The results of this section are presented in Table 1 and Figs. 1, 2, 3 and 4, which illustrate the input data of the problem and its approximation. We report the computation in detail, with full figures and a table, for the specific case \(t=0.5\); the cases \(t=0.3\) and \(t=0.8\) are treated in exactly the same way, so for them we only list the errors in the table and omit the plots. Observing the plots of the input data, the exact solution, and the regularized solution, we see that as \(\epsilon \rightarrow 0\) the approximation gradually converges to the exact quantity; this holds both for the nonlocal input datum \(\ell \) and for the regularized solution itself. From these observations we conclude that the proposed method is effective and stable.

Fig. 1
figure 1

Graph of \(\ell (x,y)\) and its approximation

Fig. 2
figure 2

Graph of \(u(x,y,0.5)\) and its approximation at \(\epsilon =0.5\)

Fig. 3
figure 3

Graph of \(u(x,y,0.5)\) and its approximation at \(\epsilon =0.25\)

Fig. 4
figure 4

Graph of \(u(x,y,0.5)\) and its approximation at \(\epsilon =0.0125\)

Table 1 Error estimates between the solutions for \(\beta =0.52\), \(\epsilon \in \{0.5,0.25, 0.0125\}\), \(t \in \{0.3, 0.5, 0.8\}\)

6 Conclusion

In this article, we considered the Fourier truncation method for solving the backward problem (1.1). We first showed that problem (1.1) is not well-posed and illustrated this with an example. Applying the Fourier truncation method and relying on an a priori assumption on the exact solution, we then derived an error estimate between the exact solution and the regularized solution. An additional numerical illustration confirms the effectiveness of our method.