On a Non-local Sobolev–Galpern-Type Equation Associated with Random Noise

This paper aims to retrieve the initial value for a non-local fractional Sobolev–Galpern problem. The given data are perturbed by noise through a discrete random model. We show that the problem is ill-posed in the sense of Hadamard. We apply the Fourier truncation method to construct a regularized solution and estimate the convergence between the exact solution and the regularized solution. In addition, a numerical example is provided to assess the efficiency of the theory.


Introduction
In the last few decades, many scientists have demonstrated that fractional models describe natural phenomena more precisely and systematically than classical integer-order models with regular time derivatives [1][2][3][4]. In this paper, we consider the following integral boundary problem for the fractional Sobolev equation:

∂_t^β u + (−Δ)^s u + ρ(−Δ)^s ∂_t^β u = f(x₁, x₂, t),  in Ω × (0, T],
u = 0,  on ∂Ω × (0, T],    (1.1)

where ρ > 0, s ∈ (0, 1), Ω = (0, π) × (0, π) ⊂ R², and φ ∈ L²(Ω). The symbol ∂_t^β u denotes the Caputo derivative, defined for 0 < β < 1 by

∂_t^β u(t) = (1 / Γ(1 − β)) ∫_0^t (t − τ)^{−β} ∂_τ u(τ) dτ.    (1.2)

The symbol Γ(·) denotes the standard Gamma function, and the two constants θ₁, θ₂ ≥ 0 satisfy θ₁² + θ₂² > 0. In the main equation of problem (1.1), taking β = s = 1, the first equation becomes the classical pseudo-parabolic (also called Sobolev) equation

u_t − Δu − ρΔu_t = f.    (1.3)

Pseudo-parabolic equations have many applications in science and technology, especially in physical phenomena such as the flow of homogeneous fluids through fissured rock and the aggregation of populations; see, e.g., [17] and its references. To our knowledge, the results for Eq. (1.3) are quite abundant. Most of them study the existence of solutions and the properties of solutions to the initial value problem.
For the reader's convenience, we state our problem (1.1) as follows. Let us assume that φ is the given input data in some suitable space. We need to determine the initial value u(x₁, x₂, 0). Our study is based on the insights provided by the publication [34]. Let us look at the third condition of our problem, which reads

θ₁ u(x₁, x₂, T) + θ₂ ∫_0^T u(x₁, x₂, t) dt = φ(x₁, x₂).    (1.4)

The above condition is also called a non-local in-time condition. Some observations on this condition are as follows:
• If θ₁ = 1 and θ₂ = 0, condition (1.4) becomes the terminal condition u(x₁, x₂, T) = φ(x₁, x₂). (1.5) Many PDEs with a terminal value condition are called backward problems, a famous class of problems in applications. We refer the reader to some interesting papers on the terminal value problem for various PDEs: Kaltenbacher et al. [19,20], Jia et al. [15], Liu et al. [43], Yamamoto [16,23–25], Janno [13,14], Rundell et al. [29,30], Amar Debbouche et al. [9,12,18,22,31,37], Triet [38,39], Chao Yang [40], Au [41], Thach [42], and references therein.
The results on the pseudo-parabolic equation (1.3) with a terminal condition are still limited. The most interesting recent works on determining the initial function are those of Tuan and Caraballo [6–8, 36, 44–47].
• If θ₁ = 0 and θ₂ = 1, we obtain the condition

∫_0^T u(x₁, x₂, t) dt = φ(x₁, x₂).    (1.6)

The non-local condition (1.6) appears in some models in N. Dokuchaev's works, similar to his article [10].
• The meaning of condition (1.4) with θ₁ > 0 and θ₂ > 0 has been thoroughly discussed in [34].
There are two main approaches to recovering the initial value for a parabolic or pseudo-parabolic equation from observed data φ. When we observe the function φ experimentally, we generate an approximation of it in one of two settings. The first is the deterministic case: there exists an observed function whose distance to φ in norm is smaller than ε, where ε is the noise level. The second is the random case: we obtain the observation data in the form φ_obs = φ + random noise.
Solving methods for stochastic models are often more complex than for deterministic ones. Continuing the main ideas of our previous articles [32,33], we investigate the random noise problem for the model (1.1). The complexity of this model requires a number of cumbersome calculations in this paper. In this study, we consider the input data as follows. Let (c_k, d_l) be discrete points in Ω = (0, π) × (0, π). The observed random data φ_kl, F_kl(t) are noisy versions of the exact data φ(c_k, d_l), F(c_k, d_l, t), satisfying models in which Y_kl are mutually independent and identically distributed random variables and X_kl(t) are independent one-dimensional standard Brownian motions. This random model is inspired by similar models in [26,27,32,33]. Due to the additional integral in the non-local condition, the formulation of our solution becomes more complicated, which creates significant difficulties in showing the ill-posedness as well as in regularizing the problem. Our paper investigates problem (1.1), and the main results of this work are as follows:
• A proof of the ill-posedness of the solution and an estimate of the solution in L²((0, π) × (0, π)).
• A regularized method together with a convergence rate under an a priori parameter choice rule.
This paper is organized as follows. Section 2 gives some preliminaries. In Sect. 3, we derive the mild solution of problem (1.1) and give an example illustrating the ill-posedness of the problem, together with an estimate in L²(Ω). In Sect. 4, we apply the Fourier method to problem (1.1) and show the rate of convergence. Finally, we present a numerical experiment to verify the theoretical results.
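To make the discrete random data model concrete, the following Python sketch generates noisy observations on a uniform interior grid. The grid points, the Gaussian choice for Y_kl, and all function names here are our own illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

def observe_phi(phi, n, eps, rng):
    """Noisy discrete observations phi_kl = phi(c_k, d_l) + eps * Y_kl,
    with Y_kl i.i.d. standard normal (an assumed distribution) on an
    assumed uniform interior grid of (0, pi) x (0, pi)."""
    c = np.pi * np.arange(1, n + 1) / (n + 1)   # assumed grid points c_k
    C, D = np.meshgrid(c, c, indexing="ij")
    Y = rng.standard_normal((n, n))             # i.i.d. noise Y_kl
    return phi(C, D) + eps * Y

def brownian_paths(n, t_grid, rng):
    """Independent standard Brownian motions X_kl(t) on a time grid
    starting at t = 0, built from independent Gaussian increments."""
    dt = np.diff(t_grid, prepend=0.0)
    dW = rng.standard_normal((n, n, len(t_grid))) * np.sqrt(dt)
    return np.cumsum(dW, axis=-1)

rng = np.random.default_rng(0)
phi = lambda x, y: np.sin(x) * np.sin(y)
obs = observe_phi(phi, 50, 0.01, rng)                       # noisy phi_kl
X = brownian_paths(50, np.linspace(0.0, 1.0, 101), rng)     # paths X_kl(t)
```

The same two ingredients (pointwise i.i.d. noise for φ and Brownian noise for the source F) are all that later experiments need.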

Preliminary
Let us collect some well-known properties of the negative Laplacian operator and recall the definition of the Hilbert scale space, a subset of L²(Ω). Let L = −Δ be the Laplacian operator on Ω = (0, π) × (0, π) with Dirichlet boundary conditions. The eigenvalues of L are λ_{m,n} = m² + n², m, n ∈ N, and the corresponding eigenfunctions e_{m,n} satisfy L e_{m,n} = λ_{m,n} e_{m,n}, where e_{m,n}(x₁, x₂) = e_m(x₁) e_n(x₂) = (2/π) sin(mx₁) sin(nx₂).
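As a quick numerical sanity check (our own sketch, not part of the paper), the eigenfunctions e_{m,n} above can be verified to be orthonormal in L²(Ω) with a simple tensor-product trapezoidal rule:

```python
import numpy as np

N = 400                                # quadrature subintervals per axis
x = np.linspace(0.0, np.pi, N + 1)
X, Y = np.meshgrid(x, x, indexing="ij")

def e(m, n):
    # Dirichlet eigenfunctions of the negative Laplacian on (0, pi)^2
    return (2.0 / np.pi) * np.sin(m * X) * np.sin(n * Y)

def inner(f, g):
    # tensor-product trapezoidal approximation of the L^2(Omega) inner product
    h = np.pi / N
    w = np.full(N + 1, h)
    w[0] = w[-1] = h / 2.0
    return float((f * g * w[:, None] * w[None, :]).sum())

print(inner(e(1, 2), e(1, 2)))   # ~1.0 (normalization)
print(inner(e(1, 2), e(3, 1)))   # ~0.0 (orthogonality)
```

Because the integrands are trigonometric polynomials, the trapezoidal rule on a uniform grid reproduces these inner products essentially exactly.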
Definition 2.1 For any ν > 0, we define the fractional Hilbert scale space by

H^ν(Ω) = { v ∈ L²(Ω) : Σ_{m,n=1}^∞ λ_{m,n}^{2ν} |⟨v, e_{m,n}⟩|² < ∞ }.

The space H^ν(Ω) is a Hilbert space equipped with the norm

‖v‖_{H^ν(Ω)} = ( Σ_{m,n=1}^∞ λ_{m,n}^{2ν} |⟨v, e_{m,n}⟩|² )^{1/2}.

In what follows, several properties of the Mittag-Leffler function are collected.
Definition 2.2 (see [21]) For 0 < β < 1 and an arbitrary constant α ∈ R, the Mittag-Leffler function is defined by

E_{β,α}(z) = Σ_{k=0}^∞ z^k / Γ(kβ + α),  z ∈ C,

where Γ(·) is the usual Gamma function.
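A minimal numerical sketch of this definition (ours, not the paper's): truncating the series gives usable values for moderate |z|. The function name and term count are arbitrary choices.

```python
import math

def mittag_leffler(z, beta, alpha, n_terms=100):
    """Truncated series E_{beta,alpha}(z) ~ sum_{k < n_terms} z^k / Gamma(k*beta + alpha).
    The full series converges for every z; truncation is adequate for moderate |z|."""
    return sum(z**k / math.gamma(k * beta + alpha) for k in range(n_terms))

# sanity checks against known special cases
print(mittag_leffler(1.0, 1.0, 1.0))   # ~ e,            since E_{1,1}(z) = exp(z)
print(mittag_leffler(2.0, 1.0, 2.0))   # ~ (e^2 - 1)/2,  since E_{1,2}(z) = (exp(z) - 1)/z
```

For large |z| or small β, more terms (or a dedicated algorithm) would be needed; this sketch only illustrates the definition.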
Proof The reader can refer to [34] for a detailed proof.

The Mild Solution of Problem (1.1)
Assume that problem (1.1) has a solution u of the form

u(x₁, x₂, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ u_{m,n}(t) e_{m,n}(x₁, x₂),

in which u_{m,n}(t) = ⟨u(·, ·, t), e_{m,n}⟩, m, n ∈ N*. Then, applying the Laplace transform method, we obtain a formulation of the Fourier coefficients u_{m,n}(t) depending on u₀ and e_{m,n}. From now on, for brevity, we put a(ρ, z) as in (2.16). Then (2.15) can be written as (2.17), which leads to (2.18). From formula (2.18), using the non-local condition (1.4) and some transformations, we obtain (2.20). This completes the derivation of the mild solution to problem (1.1).
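The Fourier coefficients u_{m,n}(t) = ⟨u(·,·,t), e_{m,n}⟩ in the expansion above can be approximated by quadrature from grid values of u. The following Python sketch (grid size and names are our own assumptions) recovers known coefficients of a test field:

```python
import numpy as np

N = 200
x = np.linspace(0.0, np.pi, N + 1)
X, Y = np.meshgrid(x, x, indexing="ij")

def e(m, n):
    # Dirichlet eigenfunctions e_{m,n} on (0, pi)^2
    return (2.0 / np.pi) * np.sin(m * X) * np.sin(n * Y)

def fourier_coeff(u_grid, m, n):
    """Trapezoidal approximation of u_{m,n} = <u, e_{m,n}> on (0, pi)^2."""
    h = np.pi / N
    w = np.full(N + 1, h)
    w[0] = w[-1] = h / 2.0
    return float((u_grid * e(m, n) * w[:, None] * w[None, :]).sum())

u = 3.0 * e(2, 1) - 0.5 * e(4, 4)    # a test field with known coefficients
print(fourier_coeff(u, 2, 1))        # ~ 3.0
print(fourier_coeff(u, 4, 4))        # ~ -0.5
```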

Discretization Formula of the Solution
Here, we show a formula for the solution u with the discrete data φ(c_k, d_l) and F(c_k, d_l, t) in place of the unknown coefficients φ_{m,n} and F_{m,n}(t). To do this, we need the following lemmas.
Lemma 2.8 Let m, n be positive integers such that m < i and n < j. With the setting below, the following property holds, where 1 is the indicator function.
Proof The lemma can be proved by using Lemma 3.5 on page 145 of [11].
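Lemma 2.8 rests on discrete orthogonality of the sine system at the grid points. A standard identity of this kind (our sketch; the paper's grid and the indicator-function cases may differ) is Σ_{k=1}^{i−1} sin(m c_k) sin(p c_k) = (i/2) δ_{mp} for c_k = kπ/i and 1 ≤ m, p ≤ i − 1, which the following code verifies:

```python
import numpy as np

i = 16
c = np.arange(1, i) * np.pi / i          # assumed uniform grid c_k = k*pi/i

def discrete_inner(m, p):
    # discrete inner product of sin(m c_k) and sin(p c_k) over interior points
    return float(np.sum(np.sin(m * c) * np.sin(p * c)))

# orthogonality: (i/2) on the diagonal, 0 off the diagonal
for m in range(1, i):
    for p in range(1, i):
        expected = i / 2.0 if m == p else 0.0
        assert abs(discrete_inner(m, p) - expected) < 1e-9
print("discrete sine orthogonality verified for i =", i)
```

Identities of this type are what allow the exact Fourier coefficients to be replaced by sums over the discrete observation points.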
Theorem 2.1 Let N_tr, M_tr be positive integers such that N_tr < i and M_tr < j. We assume the decomposition in which U_dis is constructed from the discrete values of φ and F(·, ·, t), and the first residual sum O_res^I is the corresponding difference. Here, δ_{i,j,m,n} and Δ_{i,j,m,n}(t) are the differences defined below.
Lemma 2.10 Let F satisfy the assumptions of Theorem 2.1 and assume that m < i and n < j; then we get the corresponding estimates.
Proof The proofs of these two lemmas are similar to those in [32]; we omit them here.

Remark 2.1
The explicit form of the mild solution (2.22) is more convenient than (2.20) for showing the ill-posedness of our problem and for constructing an approximate solution later. We emphasize that it is divided into two parts, a finite series and the bias between the two series, in which the first part U_dis is expressed based on the values of φ and F at the discrete points.

Proof of Theorem 3.1 Let us design the exact data and the corresponding random noise data as follows, with random noise data as below and s = 1. Additionally, we design approximations of the exact data φ and F at the discrete points (x₁, x₂). Let u be the exact solution of Problem (1.1) with respect to the exact data (3.30)-(3.31). Then it is obvious that u ≡ 0. Let u^{i,j} be the solution of problem (1.1) with the data (3.32)-(3.33), obtained via Lemma 2.9 with N_tr = i − 1 and M_tr = j − 1.
Next, we will show that the means satisfy the following, where E denotes the expectation.
Part 1: We estimate the first term. Using the properties of the random variables Y_kl, namely E(Y_kl Y_ij) = δ_ki δ_lj for all k, l, i, j, where δ is the Kronecker delta, together with Lemma 2.8, we obtain the first bound. By similar techniques as above, and using the fact that

E[X_kl(t) X_pq(t)] = δ_kp δ_lq t, for all k, l, p, q,    (3.40)

we present the ill-posedness of u(·, ·, t) in the case t = 0: from (3.41), one has the corresponding identity. We then estimate the errors L₁ and L₂ using formula (3.40). Next, thanks to Hölder's inequality, we obtain further bounds, and the remaining estimates follow easily. Combining (3.43) and (3.44), and choosing s = 1, we conclude that the error blows up. Therefore, the problem (1.1) is ill-posed in the Hadamard sense.

Regularized Solution and Convergent Rate
Theorem 4.1 Let δ > 2, and let N_tr, M_tr ∈ Z⁺ be such that N_tr < i, M_tr < j. Assume that φ ∈ H^δ(Ω) and F ∈ L^∞(0, T; L²(Ω)), and that problem (1.1) has a unique solution in C([0, T]; L²(Ω)). The regularized solution is constructed as below, and we have the following estimate.
Proof Using the method of least squares, we choose the truncation parameters, in which ⌊y⌋ denotes the greatest integer less than or equal to y. The proof of Theorem 4.1 then proceeds through the following steps.
Part 1: In this part, from (2.18)-(4.45), we obtain the first estimates.
Step 2: After that, the second term can be bounded as follows. We notice that 0 < B_min ≤ θ_{k,l} ≤ B_max and E[Y_kl Y_pq] = δ_kp δ_lq, in addition to (4.51). From the assumption φ ∈ H^ν(Ω), there exists a constant D_ν such that the corresponding bound holds. From (4.51) and formula (2.26), we obtain (4.53), and the estimate of A₁ follows as in (4.54). Combining (4.52) and (4.54), we conclude the bound for i, j large enough (4.55).
Step 2: To evaluate E|ϒ^{i,j}_{m,n}(τ)|², we note that Δ_{i,j,m,n} is estimated similarly to (4.53). Using E[X_kl(τ) X_pq(τ)] = δ_kp δ_lq τ, one has the corresponding bound. From (4.49), we obtain (4.59), which is shown right below; for convenience, we divide the evaluation into two parts.
Part 1: Estimate of E₁. Next, we continue with the evaluation of B^{m,n}_{i,j}, C^{m,n}_{i,j}(T), D^{m,n}_{i,j}(t), and E^{m,n}_{i,j}(t). We evaluate the error (4.60) through four steps as follows.
Claim 1: First of all, from Lemmas 2.1 and 2.4, we obtain the first bound, which leads to the next estimate.
Claim 2: Next, we see that C^{m,n}_{i,j}(T) can be bounded using the inequality b(ρ, λ_{m,n}) ≤ 1. Thus, from (4.64), we obtain (4.66). Hence, from Lemma 2.4, combining (4.66) to (4.68), we get the desired estimate, which implies (4.68).
Claim 5: Similarly to the evaluation (4.64), we obtain the remaining bound. Combining (4.60) to (4.69), we conclude the result. The proof of this theorem is complete.
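The effect of Fourier truncation can be illustrated in a few lines. This is a schematic sketch under our own assumptions (via Parseval's identity, L² errors equal coefficient-vector errors): keeping only the modes m ≤ N_tr, n ≤ M_tr discards most of the noise while retaining a smooth solution's energy.

```python
import numpy as np

def truncate(coeffs, Ntr, Mtr):
    """Fourier truncation: keep coefficients with m <= Ntr, n <= Mtr
    (0-based array slots stand in for the 1-based modes) and zero the rest."""
    out = np.zeros_like(coeffs)
    out[:Ntr, :Mtr] = coeffs[:Ntr, :Mtr]
    return out

rng = np.random.default_rng(1)
M = 64
exact = np.zeros((M, M))
exact[0, 0], exact[1, 2] = 2.0, -1.0            # energy in a few low modes
noisy = exact + 0.01 * rng.standard_normal((M, M))

err_full = np.linalg.norm(noisy - exact)                   # noise in all 64*64 modes
err_trunc = np.linalg.norm(truncate(noisy, 4, 4) - exact)  # noise in only 16 modes
print(err_full, err_trunc)                                 # truncation shrinks the error
```

The trade-off studied in Theorem 4.1 is exactly this: a larger truncation level keeps more of the true solution but admits more noise, so N_tr, M_tr must be tied to the noise level.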
In the same way, with the functions φ(x₁, x₂) and F(x₁, x₂, t) given as in (5.75), we compute the approximation of u(x₁, x₂, t); during the calculation, we also need the integrals in (5.76), see [21]. We present the absolute root mean square errors. When calculating these integrals, we approximate them by the composite Simpson rule as in (5.78). Choosing N_x = N_y = 200, the matrix elements of u are represented accordingly, and the regularized solution is represented similarly. As the noise level tends to 0, the approximation gradually converges to the exact quantity; this holds not only for the input data of the non-local condition but also for the regularized solution. From the observed data above, we conclude that the proposed method is effective and stable.
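The two numerical ingredients of this section, the composite Simpson rule and the absolute root-mean-square error, can be sketched as follows (function names are ours):

```python
import numpy as np

def simpson(f_vals, h):
    """Composite Simpson rule on a uniform grid; len(f_vals) must be odd
    (i.e., an even number of subintervals)."""
    n = len(f_vals) - 1
    assert n % 2 == 0, "Simpson's rule needs an even number of subintervals"
    return (h / 3.0) * (f_vals[0] + f_vals[-1]
                        + 4.0 * f_vals[1:-1:2].sum()
                        + 2.0 * f_vals[2:-1:2].sum())

def rmse(u_approx, u_exact):
    # absolute root-mean-square error over the grid values
    diff = np.asarray(u_approx) - np.asarray(u_exact)
    return float(np.sqrt(np.mean(diff ** 2)))

x = np.linspace(0.0, np.pi, 201)
print(simpson(np.sin(x), x[1] - x[0]))   # ~ 2.0 (the integral of sin on [0, pi])
```

With N_x = N_y = 200 as in the paper, the same quadrature and error measure apply over the 2-D grid by iterating over each axis.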

Conclusion
In this article, we considered the Fourier truncation method for solving the backward problem (1.1). We showed the non-well-posedness of problem (1.1) and presented an illustrative example. Applying the Fourier truncation method, and based on an a priori assumption on the exact solution, we established the error estimate between the exact solution and the regularized solution. An additional numerical illustration confirms the effectiveness of our method.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
(2.24) and the second residual sum O_res^II is the difference u − Σ_{m=1}^{M_tr} Σ_{n=1}^{N_tr} u_{m,n}(t) e_{m,n}.

The Non-well-Posedness of Problem (1.1) with Random Noise Data

Theorem 3.1 The problem (1.1) is ill-posed.

Fig. 3 Graph of u(x, y, 0.5) and its approximation at noise level 0.25.
Fortunately, these couple of terms possess explicit forms, as in Lemma 2.9.
U_dis uses F(·, ·, t) only at the discrete points (c_k, d_l) rather than on the whole domain Ω. In this way, several new terms appear in the residual term O_res^I, including δ_{i,j,m,n} and Δ_{i,j,m,n}(t) defined in (2.26) and (2.27).
It is easy to see that E₂ ≤ [(N_tr + 1)^{−2ν} + (M_tr + 1)^{−2ν} + (N_tr + 1)^{−2ν}(M_tr + 1)^{−2ν}] ‖u(·, ·, t)‖², for all m, n ∈ Z⁺.
With N_x = N_y = 200, the values u(x_k, y_l, t) are assembled into an N_x × N_y matrix, and the regularized solution is represented in the same way. The results of this section are presented in Table 1 and Figs. 1, 2, 3 and 4, which illustrate the input data of the problem and its approximation. In our calculation, we treat the specific case t = 0.5 with full figures and a table; the cases t = 0.3 and t = 0.8 are computed similarly, so we only present their results in the table without drawings. Observing the plots of the input data, the exact solution, and the regularized solution, we see that the approximation improves as the noise level decreases.