1 Introduction

In this paper, we consider an inverse problem of identifying a pollution source from data measured at some points in a watershed. The pollution source causes water contamination in some region. In all industrial countries, groundwater pollution is a serious environmental problem that puts the whole ecosystem, including humans, in jeopardy. The quality and quantity of groundwater strongly affect human life and may lead to changes in the natural environment (see, e.g., [1]). Most efforts to describe pollutant transport rely on mathematical modeling. Solute transport in a uniform groundwater flow can be described by the one-dimensional (1D) linear parabolic equation

$$\frac{\partial \tilde{u}}{\partial t}-D\frac{\partial^2\tilde{u}}{\partial x^2}+V\frac{\partial\tilde{u}}{\partial x}+R\tilde{u}=F_0(x,t),\qquad x\in\Omega,\ 0<t<T,$$
(1)

where $\Omega$ is the spatial domain, $\tilde{u}$ is the solute concentration, $V$ represents the velocity of the watershed flow, $R$ denotes the self-purifying capacity of the watershed, and $F_0(x,t)$ is a source term causing the pollution $\tilde{u}(x,t)$. Putting

$$\tilde{u}(x,t)=u(x,t)\,e^{\frac{V}{2D}x-\left(\frac{V^2}{4D}+R\right)t},$$

we can transform equation (1) into

$$u_t-D\frac{\partial^2 u}{\partial x^2}=F(x,t),$$
(2)

where $F(x,t)=F_0(x,t)\,e^{-\frac{V}{2D}x+\left(\frac{V^2}{4D}+R\right)t}$; we still call it the source function. In view of this relationship between equations (1) and (2), in the present paper we look for a pair of functions $(u,F)$ satisfying (2) subject to the initial and final conditions

$$u(x,0)=0,\qquad u(x,T)=g(x),\qquad x\in(0,\pi),$$
(3)

and the boundary condition $u(0,t)=u(\pi,t)=0$. To consider a more general case, we will replace $D$ in (2) by a given function $a(t)$, which is defined later.
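For completeness, we note how the change of variables above turns (1) into (2) (a short check using the product and chain rules). With $\tilde{u}(x,t)=u(x,t)\,e^{\frac{V}{2D}x-(\frac{V^2}{4D}+R)t}$ one has

$$\tilde{u}_t=\Big(u_t-\big(\tfrac{V^2}{4D}+R\big)u\Big)e^{\frac{V}{2D}x-(\frac{V^2}{4D}+R)t},\qquad \tilde{u}_x=\Big(u_x+\tfrac{V}{2D}u\Big)e^{\frac{V}{2D}x-(\frac{V^2}{4D}+R)t},\qquad \tilde{u}_{xx}=\Big(u_{xx}+\tfrac{V}{D}u_x+\tfrac{V^2}{4D^2}u\Big)e^{\frac{V}{2D}x-(\frac{V^2}{4D}+R)t},$$

so that $\tilde{u}_t-D\tilde{u}_{xx}+V\tilde{u}_x+R\tilde{u}=(u_t-Du_{xx})\,e^{\frac{V}{2D}x-(\frac{V^2}{4D}+R)t}$: the first-order and zeroth-order terms cancel, and dividing equation (1) by the exponential factor yields (2) with $F$ as above.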

This inverse source problem is ill posed: a solution corresponding to the given data may not exist, and even when a solution exists (and is unique), it need not depend continuously on the data. Because the problem is severely ill posed and difficult, assumptions on the form of the heat source are commonly imposed. In fact, let $\{\varphi_n(t)\}$ be a basis of $L^2(0,T)$. Then the function $F$ can be written as

$$F(\xi,t)=\sum_{n=0}^{\infty}\varphi_n(t)f_n(\xi).$$
(4)

In the simplest case, one reduces this expansion to its first term, $F(x,t)=\varphi(t)f(x)$, where the function $\varphi$ is given. Source terms of this form appear frequently, for example as control terms in parabolic equations.

In another context, this problem is known as the identification of a heat source; it has received considerable attention since the 1970s from researchers in a variety of fields using different methods. If the pollution source has the form $f=f(u)$, the inverse source problem was studied in [2]. In [3], the authors considered a heat source depending on both the space and time variables, in additive or separable form. Many researchers viewed the source as a function of space or time only. In [4, 5], the authors determined a heat source depending on one variable in a bounded domain by the boundary-element method and an iterative algorithm. In [6], the authors recovered a purely time-dependent heat source by the method of fundamental solutions.

Many authors considered uniqueness and stability conditions for the determination of the heat source in this separable form. In spite of the uniqueness and stability results, the regularization problem for the unstable cases remains difficult. For a long time, it has been investigated for a heat source depending on time only [4, 5, 7] or on space only [1, 3, 8-10]. As regards regularization methods, there are few papers with a rigorous theoretical analysis of identifying the heat source $F(x,t)=\varphi(t)f(x)$, where $\varphi$ is a given function. Trong et al. [11, 12] considered this problem by the Fourier transform method. Recently, the case $a(t)=1$ and $\varphi(t)=e^{-\lambda t}$ ($\lambda>0$), in which problem (1) describes a heat process of radioisotope decay with decay rate $\lambda$, was considered by Qian and Li [13]. In [14], Hasanov identified a heat source of the form $F(x,t)=F(x)H(t)$ for the variable-coefficient heat conduction equation $u_t=(k(x)u_x)_x+F(x)H(t)$ using a variational method. However, the more general case of a time-dependent coefficient multiplying $\Delta u$ in the main equation is still largely open. In this paper, we consider the following generalized equation:

$$u_t-a(t)u_{xx}=F(x,t)$$
(5)

and $u$ satisfies the condition (3). Equation (5) has many applications in groundwater pollution: it is a simplified form of the advection-dispersion model (1), which arises in groundwater pollution source identification problems (see [1]). Such a model is related to the detection of the pollution source causing water contamination in some region.

The remainder of the paper is organized as follows. In Section 2, we apply the quasi-boundary value method and a truncation method to solve the problem (3)-(5), and we estimate the error between the exact solution and the regularized solution, with a logarithmic rate and a Hölder rate, respectively. Finally, some numerical experiments are given in Section 3.

2 Identification and regularization for inhomogeneous source depending on time variable

Let $\|\cdot\|$ and $\langle\cdot,\cdot\rangle$ be the norm and the inner product in $L^2(0,\pi)$, respectively. Let $a:[0,T]\to\mathbb{R}$ be a continuous function on $[0,T]$. We set $A(t)=\int_0^t a(s)\,ds$. The problem (5) can be transformed into

$$\begin{cases}\dfrac{d}{dt}\langle u(x,t),\sin nx\rangle+n^2a(t)\langle u(x,t),\sin nx\rangle=\varphi(t)\langle f(x),\sin nx\rangle, & 0<t<T,\\[2mm] \langle u(x,0),\sin nx\rangle=0,\qquad \langle u(x,T),\sin nx\rangle=\langle g(x),\sin nx\rangle.\end{cases}$$
(6)

By an elementary calculation, we can solve the ordinary differential equation (6) to get

$$\langle f(x),\sin nx\rangle=e^{n^2A(T)}\left[\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right]^{-1}\langle g(x),\sin nx\rangle$$

or

$$f(x)=\sum_{n=1}^{\infty}e^{n^2A(T)}\left[\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right]^{-1}g_n\sin nx,$$
(7)

where $g_n=\frac{2}{\pi}\langle g(x),\sin nx\rangle$. Note that $e^{n^2A(T)}$ grows very quickly as $n$ becomes large. Thus the exact data function $g(x)$ must be such that $\langle g(x),\sin nx\rangle$ decays rapidly. In applications, however, the input data $g(x)$ can only be measured and is never exact. We assume that the data functions $g^\epsilon(x)\in L^2(0,\pi)$ and $\varphi,\varphi^\epsilon\in L^2(0,T)$ satisfy

$$\|g^\epsilon-g\|\le\epsilon,\qquad \|\varphi^\epsilon-\varphi\|\le\epsilon$$
(8)

and $\varphi(t)>C_0$, $\varphi^\epsilon(t)>C_0$ for all $t\in(0,T)$, where the constant $\epsilon$ represents a noise level and $C_0>0$.
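For the reader's convenience, we recall the elementary calculation behind (7). Writing $u_n(t)=\langle u(x,t),\sin nx\rangle$ and multiplying the differential equation in (6) by the integrating factor $e^{n^2A(t)}$ gives

$$\frac{d}{dt}\left(e^{n^2A(t)}u_n(t)\right)=e^{n^2A(t)}\varphi(t)\langle f(x),\sin nx\rangle.$$

Integrating from $0$ to $T$ and using $u_n(0)=0$ and $u_n(T)=\langle g(x),\sin nx\rangle$ yields

$$e^{n^2A(T)}\langle g(x),\sin nx\rangle=\langle f(x),\sin nx\rangle\int_0^Te^{n^2A(t)}\varphi(t)\,dt,$$

which is exactly (7) after dividing by the integral.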

Lemma 1 Let $s>0$ and $X\ge0$. Then for all $0\le t\le T$ and $0<\epsilon<1$, we have

$$\frac{\epsilon}{(1+X)^s\left(\epsilon+e^{-TX}\right)}\le s^se^{1-s}\left(1+T^{-s}\right)\left(\frac{T}{\ln(1/\epsilon)}\right)^s.$$
(9)

Proof Case 1: $X\in[0,\frac{1}{T}]$. It is clear that

$$\frac{\epsilon}{(1+X)^s\left(\epsilon+e^{-TX}\right)}\le\frac{\epsilon}{(1+X)^se^{-TX}}\le\epsilon e^{TX}\le e\epsilon.$$
(10)

From the inequality $\epsilon\le\left(\frac{s}{e}\right)^s\left(\frac{1}{\ln(1/\epsilon)}\right)^s$, we get

$$\frac{\epsilon}{(1+X)^s\left(\epsilon+e^{-TX}\right)}\le s^se^{1-s}\left(\frac{1}{\ln(1/\epsilon)}\right)^s\le s^se^{1-s}\left(1+T^{-s}\right)\left(\frac{T}{\ln(1/\epsilon)}\right)^s.$$

Case 2: $X>\frac{1}{T}$. Set $e^{-TX}=\epsilon Y$. Then we obtain

$$\frac{\epsilon}{(1+X)^s\left(\epsilon+e^{-TX}\right)}=\frac{\epsilon}{\epsilon+\epsilon Y}\left(\frac{T}{T-\ln(\epsilon Y)}\right)^s=\frac{1}{1+Y}\left(\frac{T}{T-\ln(\epsilon Y)}\right)^s=\frac{1}{1+Y}\left(\frac{T}{\ln(1/\epsilon)}\right)^s\left(\frac{-\ln\epsilon}{T-\ln(\epsilon Y)}\right)^s=\left(\frac{T}{\ln(1/\epsilon)}\right)^s\frac{1}{1+Y}\left(\frac{-\ln\epsilon}{T-\ln(\epsilon Y)}\right)^s.$$

We continue by estimating the term $\frac{1}{1+Y}\left(\frac{-\ln\epsilon}{T-\ln(\epsilon Y)}\right)^s$.

If $0<Y\le1$ then $0<-\ln\epsilon\le-\ln(\epsilon Y)$, and thus

$$\frac{1}{1+Y}\left(\frac{-\ln\epsilon}{T-\ln(\epsilon Y)}\right)^s<1;$$

if $Y>1$ then $\ln Y>0$ and $\ln(\epsilon Y)=-TX<-1$ due to the assumption $X\in(\frac{1}{T},\infty)$. Therefore, $\ln Y\left(1+\ln(\epsilon Y)\right)\le0$. This implies that

$$0<\frac{-\ln\epsilon}{T-\ln(\epsilon Y)}<\frac{-\ln\epsilon}{-\ln(\epsilon Y)}<1+\ln Y.$$
(11)

Hence, in this case, we get

$$\frac{1}{1+Y}\left(\frac{-\ln\epsilon}{T-\ln(\epsilon Y)}\right)^s<\frac{(1+\ln Y)^s}{Y}=(1+\ln Y)^sY^{-1}.$$
(12)

Set $g(Y)=(1+\ln Y)^sY^{-1}$ for $Y>e^{-1}$. Taking the derivative of this function, we get

$$g'(Y)=(1+\ln Y)^{s-1}Y^{-2}\left(s-1-\ln Y\right).$$
(13)

The function $g$ attains its maximum at the point $Y_0$ with $g'(Y_0)=0$, which gives $Y_0=e^{s-1}$. Therefore

$$\sup_{Y\ge1}(1+\ln Y)^sY^{-1}\le g(Y_0)=s^se^{1-s}.$$
(14)

Combining (12) and (14), we have

$$\frac{1}{1+Y}\left(\frac{-\ln\epsilon}{T-\ln(\epsilon Y)}\right)^s\le s^se^{1-s}.$$

Together with the identity above, this gives

$$\frac{\epsilon}{(1+X)^s\left(\epsilon+e^{-TX}\right)}\le s^se^{1-s}\left(\frac{T}{\ln(1/\epsilon)}\right)^s\le s^se^{1-s}\left(1+T^{-s}\right)\left(\frac{T}{\ln(1/\epsilon)}\right)^s.$$

 □
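The bound (9) can also be checked numerically. The following short Python sketch (not part of the original argument; it assumes NumPy is available) samples random admissible parameters and confirms that the left-hand side never exceeded the right-hand side in our runs:

import numpy as np

def lhs(eps, X, s, T):
    # Left-hand side of (9): eps / ((1 + X)^s (eps + e^{-T X})).
    return eps / ((1.0 + X) ** s * (eps + np.exp(-T * X)))

def rhs(eps, s, T):
    # Right-hand side of (9): s^s e^{1-s} (1 + T^{-s}) (T / ln(1/eps))^s.
    return s ** s * np.exp(1.0 - s) * (1.0 + T ** (-s)) * (T / np.log(1.0 / eps)) ** s

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(20000):
    eps = 10.0 ** rng.uniform(-8.0, -0.01)   # 0 < eps < 1
    X = 10.0 ** rng.uniform(-3.0, 3.0)       # X >= 0
    s = 10.0 ** rng.uniform(-1.0, 1.0)       # s > 0
    T = 10.0 ** rng.uniform(-1.0, 1.0)       # T > 0
    worst = max(worst, lhs(eps, X, s, T) / rhs(eps, s, T))
print("largest sampled LHS/RHS ratio:", worst)  # remained <= 1 in our runs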

Lemma 2 Let $a:[0,T]\to\mathbb{R}$ be a continuous function on $[0,T]$, and let $p=\inf_{0\le t\le T}a(t)$, $q=\sup_{0\le t\le T}a(t)$. Then we have

$$\text{(i)}\quad \left[\int_0^T\exp\left(n^2\int_0^ta(s)\,ds\right)dt\right]^{-1}\le\frac{1}{T},$$
(15)
$$\text{(ii)}\quad \frac{1}{(1+n^2)^k\left(\alpha(\epsilon)+e^{-n^2A(T)}\right)}\le\frac{B(q,k,T)}{\alpha(\epsilon)}\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{k},$$
(16)

where

$$B(q,k,T)=k^ke^{1-k}\left(1+(qT)^{-k}\right).$$

Proof (i) Since $a(t)\ge p$, we have

$$\left[\int_0^T\exp\left(n^2\int_0^ta(s)\,ds\right)dt\right]^{-1}=\frac{1}{\int_0^T\exp\left(n^2\int_0^ta(s)\,ds\right)dt}\le\frac{1}{\int_0^T\exp\left(n^2\int_0^tp\,ds\right)dt}=\frac{1}{\int_0^Te^{pn^2t}\,dt}=\frac{pn^2}{e^{pn^2T}-1}\le\frac{1}{T}.$$
(17)

(ii) Since $a(t)\le q$, we get $e^{-n^2A(T)}\ge e^{-n^2qT}$. Then, using Lemma 1 with $X=n^2$, $s=k$, and $T$ replaced by $qT$, we get

$$\frac{1}{(1+n^2)^k\left(\alpha(\epsilon)+e^{-n^2A(T)}\right)}\le\frac{1}{(1+n^2)^k\left(\alpha(\epsilon)+e^{-n^2qT}\right)}\le\frac{B(q,k,T)}{\alpha(\epsilon)}\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{k}.$$
(18)

 □

2.1 Regularization by a quasi-boundary value method

Denote by $\|\cdot\|_k$ the norm in the Sobolev space $H^k(0,\pi)$ defined by

$$\|f\|_k=\left(\sum_{n=1}^{\infty}(1+n^2)^k|f_n|^2\right)^{\frac{1}{2}},$$

where $f_n=\frac{2}{\pi}\langle f(x),\sin nx\rangle$.

We modify the problem (3)-(5) by perturbing the Fourier expansion of the final value $g$ as follows:

$$\begin{cases}\dfrac{\partial u^\epsilon}{\partial t}-\dfrac{\partial}{\partial x}\left(a(t)\dfrac{\partial u^\epsilon}{\partial x}\right)=\varphi^\epsilon(t)f^\epsilon(x), & x\in(0,\pi),\ 0<t<T,\\ u^\epsilon(x,0)=0, & x\in(0,\pi),\\ u^\epsilon(0,t)=u^\epsilon(\pi,t)=0, & t\in(0,T),\\ u^\epsilon(x,T)=\displaystyle\sum_{n=1}^{\infty}\frac{e^{-A(T)n^2}}{\alpha(\epsilon)+e^{-A(T)n^2}}\,g_n^\epsilon\sin nx, & x\in(0,\pi),\end{cases}$$
(19)

where $g_n^\epsilon=\frac{2}{\pi}\langle g^\epsilon(x),\sin nx\rangle$ and $\alpha(\epsilon)$ is a regularization parameter such that $\lim_{\epsilon\to0}\alpha(\epsilon)=0$. This problem is based on the quasi-boundary value regularization method given in [11]. This method has been studied for solving various types of inverse problems [11, 15]. The solution of this problem is given by

$$f^\epsilon(x)=\sum_{n=1}^{\infty}\frac{1}{\alpha(\epsilon)+e^{-n^2A(T)}}\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^{-1}g_n^\epsilon\sin nx.$$
(20)
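To make (20) concrete, the following Python sketch (not the authors' FORTRAN 95 code; the function name, the truncation level, and the trapezoidal quadrature are our own illustrative choices) evaluates a truncated version of (20) on a grid:

import numpy as np

def _trapz(y, x):
    # Simple trapezoidal rule, kept local so the sketch is self-contained.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def quasi_boundary_source(g_eps, phi_eps, a, T, alpha, n_max=30, n_t=2001, n_x=201):
    # g_eps, phi_eps, a: callables for the measured data g^eps(x), phi^eps(t) and the coefficient a(t);
    # alpha: regularization parameter alpha(eps); n_max: number of Fourier modes retained.
    t = np.linspace(0.0, T, n_t)
    x = np.linspace(0.0, np.pi, n_x)
    # A(t) = int_0^t a(s) ds via cumulative trapezoidal quadrature.
    A = np.concatenate(([0.0], np.cumsum(0.5 * (a(t[1:]) + a(t[:-1])) * np.diff(t))))
    f_eps = np.zeros_like(x)
    for n in range(1, n_max + 1):
        gn_eps = (2.0 / np.pi) * _trapz(g_eps(x) * np.sin(n * x), x)
        # Evaluate g_n^eps / [(alpha + e^{-n^2 A(T)}) * int_0^T e^{n^2 A(t)} phi^eps(t) dt] stably,
        # multiplying numerator and denominator by e^{-n^2 A(T)} <= 1 to avoid overflow.
        I_n = _trapz(np.exp(n ** 2 * (A - A[-1])) * phi_eps(t), t)
        coeff = gn_eps * np.exp(-n ** 2 * A[-1]) / ((alpha + np.exp(-n ** 2 * A[-1])) * I_n)
        f_eps += coeff * np.sin(n * x)
    return x, f_eps

For the example of Section 3 ($a(t)=t+1$, $\varphi(t)=t^2+t+1$, $g(x)=\sin x$, $T=1$), a call such as quasi_boundary_source(np.sin, lambda t: t**2 + t + 1.0, lambda t: t + 1.0, 1.0, alpha=1e-2) should return a curve close to $\sin x$.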

Now we give an error estimate between the regularized solution and the exact solution in the following theorem.

Theorem 1 Suppose that $f,g\in L^2(0,\pi)$ are such that $\|f\|_{2k}<\infty$ and $\|g\|_{2k+1}<\infty$ for some $k\ge0$. Let $g^\epsilon\in L^2(0,\pi)$ be measured data at $t=T$ satisfying (8). Let $f^\epsilon$ be the regularized solution given by (20). If we select $\alpha(\epsilon)$ such that

$$\lim_{\epsilon\to0}\frac{\epsilon}{\alpha(\epsilon)}=0,$$

then $\lim_{\epsilon\to0}\|f^\epsilon-f\|=0$ and we have the following estimate:

$$\|f^\epsilon-f\|\le\frac{\epsilon}{C_0T\alpha(\epsilon)}+C(p,q,k,T)\frac{\epsilon}{\alpha(\epsilon)}\left|\frac{1}{\ln(1/\alpha(\epsilon))}\right|^{k}\|g\|_{2k+1}+|B(q,k,T)|\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{k}\|f\|_{2k}.$$
(21)

Proof We define

$$h^\epsilon(x)=\sum_{n=1}^{\infty}\frac{1}{\alpha(\epsilon)+e^{-n^2A(T)}}\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^{-1}g_n\sin nx$$
(22)

and

$$p^\epsilon(x)=\sum_{n=1}^{\infty}\frac{1}{\alpha(\epsilon)+e^{-n^2A(T)}}\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^{-1}g_n\sin nx.$$
(23)

We divide the proof into three steps.

Step 1. Estimate $\|f^\epsilon-h^\epsilon\|$. From (20) and (22), we have

$$\|f^\epsilon-h^\epsilon\|^2=\sum_{n=1}^{\infty}\frac{1}{\left(\alpha(\epsilon)+e^{-n^2A(T)}\right)^2}\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^{-2}\left(g_n^\epsilon-g_n\right)^2\le\frac{1}{|\alpha(\epsilon)|^2}\,\frac{1}{\left(\int_0^TC_0\,dt\right)^2}\,\|g^\epsilon-g\|^2\le\frac{\epsilon^2}{C_0^2T^2|\alpha(\epsilon)|^2}.$$
(24)

Step 2. Estimate $\|h^\epsilon-p^\epsilon\|$. From (22), (23), and (18), we have

$$\begin{aligned}\|h^\epsilon-p^\epsilon\|^2&=\sum_{n=1}^{\infty}\frac{1}{\left(\alpha(\epsilon)+e^{-n^2A(T)}\right)^2}\left[\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^{-1}-\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^{-1}\right]^2g_n^2\\&=\sum_{n=1}^{\infty}\frac{1}{(1+n^2)^{2k}\left(\alpha(\epsilon)+e^{-n^2A(T)}\right)^2}\,\frac{\left(\int_0^Te^{n^2A(t)}\left(\varphi(t)-\varphi^\epsilon(t)\right)dt\right)^2}{\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^2\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^2}\,(1+n^2)^{2k}g_n^2\\&\le\left|\frac{B(q,k,T)}{\alpha(\epsilon)}\right|^2\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{2k}\sum_{n=1}^{\infty}\frac{\left[\int_0^Te^{2n^2A(t)}\,dt\right]\left[\int_0^T\left|\varphi^\epsilon(t)-\varphi(t)\right|^2dt\right]}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^2\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^2}\,(1+n^2)^{2k}g_n^2\\&\le\left|\frac{B(q,k,T)}{\alpha(\epsilon)}\right|^2\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{2k}\sum_{n=1}^{\infty}\frac{\left[\int_0^Te^{2n^2A(t)}\,dt\right]\left[\int_0^T\left|\varphi^\epsilon(t)-\varphi(t)\right|^2dt\right]}{C_0^4T^4\left(\int_0^Te^{n^2A(t)}\,dt\right)^2}\,(1+n^2)^{2k}g_n^2.\end{aligned}$$
(25)

On the other hand, we have

$$e^{n^2A(T)}-e^{n^2A(0)}=e^{n^2A(T)}-1=\int_0^T\left(e^{n^2A(t)}\right)'\,dt=\int_0^Tn^2A'(t)e^{n^2A(t)}\,dt=\int_0^Tn^2a(t)e^{n^2A(t)}\,dt.$$

Since $p\le a(t)\le q$, we get

$$p\int_0^Te^{n^2A(t)}\,dt\le\int_0^Ta(t)e^{n^2A(t)}\,dt\le q\int_0^Te^{n^2A(t)}\,dt.$$

Hence

$$\frac{e^{n^2A(T)}-1}{qn^2}\le\int_0^Te^{n^2A(t)}\,dt\le\frac{e^{n^2A(T)}-1}{pn^2}.$$
(26)

It follows from (25) and (26) that

$$\|h^\epsilon-p^\epsilon\|^2\le\left|\frac{B(q,k,T)}{\alpha(\epsilon)}\right|^2\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{2k}\sum_{n=1}^{\infty}\frac{q^2\left(e^{2n^2A(T)}-1\right)\|\varphi^\epsilon-\varphi\|^2}{2pC_0^4T^4\left(e^{n^2A(T)}-1\right)^2}\,(1+n^2)^{2k+1}g_n^2.$$
(27)

Since

$$\frac{e^{2n^2A(T)}-1}{\left(e^{n^2A(T)}-1\right)^2}=\frac{1-e^{-2n^2A(T)}}{\left(1-e^{-n^2A(T)}\right)^2}\le\left(\frac{1}{1-e^{-n^2A(T)}}\right)^2\le\left(\frac{1}{1-e^{-A(T)}}\right)^2\le\left(\frac{1}{1-e^{-pT}}\right)^2$$

and $\|\varphi^\epsilon-\varphi\|^2\le\epsilon^2$, we obtain

$$\|h^\epsilon-p^\epsilon\|^2\le\frac{q^2}{2pC_0^4T^4\left(1-e^{-pT}\right)^2}\left|\frac{\epsilon B(q,k,T)}{\alpha(\epsilon)}\right|^2\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{2k}\sum_{n=1}^{\infty}(1+n^2)^{2k+1}g_n^2=|C(p,q,k,T)|^2\left|\frac{\epsilon}{\alpha(\epsilon)}\right|^2\left|\frac{1}{\ln(1/\alpha(\epsilon))}\right|^{2k}\|g\|_{2k+1}^2.$$

Here

$$C(p,q,k,T)=\frac{q}{\sqrt{2p}\,C_0^2T^2\left(1-e^{-pT}\right)}\,B(q,k,T)\,(qT)^k.$$

Hence

$$\|h^\epsilon-p^\epsilon\|\le C(p,q,k,T)\frac{\epsilon}{\alpha(\epsilon)}\left|\frac{1}{\ln(1/\alpha(\epsilon))}\right|^{k}\|g\|_{2k+1}.$$
(28)

Step 3. Estimate $\|p^\epsilon-f\|$. Using the Fourier expansion of $f$, we have

$$\begin{aligned}\|p^\epsilon-f\|^2&=\sum_{n=1}^{\infty}\left(\frac{1}{\alpha(\epsilon)+e^{-n^2A(T)}}-e^{n^2A(T)}\right)^2\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^{-2}g_n^2\\&=\sum_{n=1}^{\infty}\left(\frac{\alpha(\epsilon)}{\alpha(\epsilon)+e^{-n^2A(T)}}\right)^2\left(\frac{e^{n^2A(T)}}{\int_0^Te^{n^2A(t)}\varphi(t)\,dt}\right)^2g_n^2=\sum_{n=1}^{\infty}\left(\frac{\alpha(\epsilon)}{\alpha(\epsilon)+e^{-n^2A(T)}}\right)^2f_n^2.\end{aligned}$$

Using Lemma 2, we obtain

$$\|p^\epsilon-f\|^2=\sum_{n=1}^{\infty}\frac{|\alpha(\epsilon)|^2}{(1+n^2)^{2k}\left(\alpha(\epsilon)+e^{-n^2A(T)}\right)^2}\,(1+n^2)^{2k}f_n^2\le|B(q,k,T)|^2\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{2k}\|f\|_{2k}^2.$$

This implies that

$$\|p^\epsilon-f\|\le|B(q,k,T)|\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{k}\|f\|_{2k}.$$
(29)

Combining Steps 1, 2, and 3 and using the triangle inequality, we get

$$\|f^\epsilon-f\|\le\|f^\epsilon-h^\epsilon\|+\|h^\epsilon-p^\epsilon\|+\|p^\epsilon-f\|\le\frac{\epsilon}{C_0T\alpha(\epsilon)}+C(p,q,k,T)\frac{\epsilon}{\alpha(\epsilon)}\left|\frac{1}{\ln(1/\alpha(\epsilon))}\right|^{k}\|g\|_{2k+1}+|B(q,k,T)|\left|\frac{qT}{\ln(1/\alpha(\epsilon))}\right|^{k}\|f\|_{2k}.$$
(30)

 □

Remark 1 If we choose $\alpha(\epsilon)=\epsilon^m$ with $0<m<1$, then $\epsilon/\alpha(\epsilon)=\epsilon^{1-m}\to0$ as $\epsilon\to0$, so the assumption of Theorem 1 is satisfied and the estimate (21) holds.
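Indeed, with $\alpha(\epsilon)=\epsilon^m$ we have $\ln(1/\alpha(\epsilon))=m\ln(1/\epsilon)$, so, up to the constants of Theorem 1, the three terms on the right-hand side of (21) behave like

$$\frac{\epsilon^{1-m}}{C_0T},\qquad \epsilon^{1-m}\left|\frac{1}{m\ln(1/\epsilon)}\right|^{k}\|g\|_{2k+1},\qquad \left|\frac{qT}{m\ln(1/\epsilon)}\right|^{k}\|f\|_{2k},$$

and the last term dominates as $\epsilon\to0$, so the convergence rate is of logarithmic order $\left(\ln(1/\epsilon)\right)^{-k}$.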

Remark 2 In this theorem, under the assumption $f\in H^{2k}(0,\pi)$, the error $\|f^\epsilon-f\|$ is of logarithmic order. In the next subsection, we introduce a truncation method which improves the order of the error: we obtain an error estimate of Hölder type (of order $\epsilon^{\alpha}$, $0<\alpha<1$) under a weaker assumption on $f$, namely $f\in H^1(0,\pi)$.

2.2 Regularization by a truncation method

Theorem 2 Suppose that $f\in H^1(0,\pi)$. Let $g^\epsilon\in L^2(0,\pi)$ be measured data at $t=T$ satisfying (8). Put

$$f^\epsilon(x)=\sum_{n=1}^{N}e^{n^2A(T)}\left[\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right]^{-1}g_n^\epsilon\sin nx,$$
(31)

where $N=\left[\epsilon^{\frac{k-1}{3}}\right]+1$, $k\in(0,1)$. Then the following estimate holds:

$$\|f^\epsilon-f\|\le Q\epsilon^{\frac{1-k}{6}}+2P\epsilon^{k},$$
(32)

where

$$P=\sqrt{\frac{4}{C_0^2}+\frac{q^4\|g\|^2}{2pC_0^4\left(1-e^{-pT}\right)^4}},\qquad Q=\left(2\sqrt{\pi}+\sqrt{\frac{\pi}{2}}\right)\|f\|_{H^1(0,\pi)}.$$

Proof From (7) and (31), we have

$$\begin{aligned}f(x)-f^\epsilon(x)&=\sum_{n=1}^{\infty}\frac{e^{n^2A(T)}}{\int_0^Te^{n^2A(t)}\varphi(t)\,dt}g_n\sin nx-\sum_{n=1}^{N}\frac{e^{n^2A(T)}}{\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt}g_n^\epsilon\sin nx\\&=\sum_{n=N+1}^{\infty}\frac{e^{n^2A(T)}}{\int_0^Te^{n^2A(t)}\varphi(t)\,dt}g_n\sin nx+\sum_{n=1}^{N}\frac{e^{n^2A(T)}}{\int_0^Te^{n^2A(t)}\varphi(t)\,dt}g_n\sin nx-\sum_{n=1}^{N}\frac{e^{n^2A(T)}}{\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt}g_n^\epsilon\sin nx\\&=I_1+I_2,\end{aligned}$$
(33)

where

$$I_1=\sum_{n=N+1}^{\infty}\frac{e^{n^2A(T)}}{\int_0^Te^{n^2A(t)}\varphi(t)\,dt}g_n\sin nx$$
(34)

and

$$I_2=\sum_{n=1}^{N}\frac{e^{n^2A(T)}}{\int_0^Te^{n^2A(t)}\varphi(t)\,dt}g_n\sin nx-\sum_{n=1}^{N}\frac{e^{n^2A(T)}}{\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt}g_n^\epsilon\sin nx.$$
(35)

Step 1. We estimate $\|I_1\|$. From (34), we get

$$\|I_1\|^2=\sum_{n=N+1}^{\infty}\frac{e^{2n^2A(T)}}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^2}g_n^2=\sum_{n=N+1}^{\infty}f_n^2.$$
(36)

Using integration by parts, we have

$$f_n=\int_0^\pi f(x)\sin nx\,dx=-\frac{\cos nx}{n}f(x)\Big|_{x=0}^{x=\pi}+\frac{1}{n}\int_0^\pi f'(x)\cos nx\,dx=\frac{1}{n}f(0)-\frac{(-1)^n}{n}f(\pi)+\frac{1}{n}\int_0^\pi f'(x)\cos nx\,dx.$$
(37)

Hence

$$|f_n|\le\frac{|f(0)|+|f(\pi)|}{n}+\sqrt{\frac{\pi}{2}}\,\frac{1}{n}\left\|f'\right\|.$$
(38)

On the other hand, since $H^1(0,\pi)$ is embedded continuously in $C[0,\pi]$, we may regard $f$ as an element of $C[0,\pi]$. Hence there exists an $m\in[0,\pi]$ such that $f(m)=\frac{1}{\pi}\int_0^\pi f(x)\,dx$, and we have

$$f(\pi)=f(m)+\int_m^\pi f'(x)\,dx,\qquad f(0)=f(m)-\int_0^mf'(x)\,dx.$$
(39)

It follows that

$$|f(\pi)|\le|f(m)|+\int_m^\pi|f'(x)|\,dx\le\frac{1}{\pi}\int_0^\pi|f(x)|\,dx+\int_0^\pi|f'(x)|\,dx\le\sqrt{\pi}\sqrt{\int_0^\pi\left(|f(x)|^2+|f'(x)|^2\right)dx}=\sqrt{\pi}\,\|f\|_{H^1(0,\pi)}.$$
(40)

In a similar way, we also obtain $|f(0)|\le\sqrt{\pi}\|f\|_{H^1(0,\pi)}$. Hence $|f_n|\le\frac{2\sqrt{\pi}+\sqrt{\pi/2}}{n}\|f\|_{H^1(0,\pi)}$. This implies that

$$\|I_1\|^2\le\sum_{n=N+1}^{\infty}\frac{\left(2\sqrt{\pi}+\sqrt{\frac{\pi}{2}}\right)^2}{n^2}\|f\|_{H^1(0,\pi)}^2\le\left(2\sqrt{\pi}+\sqrt{\frac{\pi}{2}}\right)^2\|f\|_{H^1(0,\pi)}^2\sum_{n=N+1}^{\infty}\frac{1}{n^2-n}\le\left(2\sqrt{\pi}+\sqrt{\frac{\pi}{2}}\right)^2\|f\|_{H^1(0,\pi)}^2\,\frac{1}{N}.$$
(41)

Step 2. We estimate $\|I_2\|$. The term (35) can be rewritten as follows:

$$\begin{aligned}I_2&=\sum_{n=1}^{N}\frac{e^{n^2A(T)}\left[g_n\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt-g_n^\epsilon\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right]}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)}\sin nx\\&=\sum_{n=1}^{N}\frac{e^{n^2A(T)}\left[\left(g_n-g_n^\epsilon\right)\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt+g_n\int_0^Te^{n^2A(t)}\left(\varphi^\epsilon(t)-\varphi(t)\right)dt\right]}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)}\sin nx.\end{aligned}$$

Then

$$\|I_2\|^2\le2\sum_{n=1}^{N}\frac{e^{2n^2A(T)}\left(g_n-g_n^\epsilon\right)^2}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^2}+2\sum_{n=1}^{N}\frac{e^{2n^2A(T)}g_n^2\left[\int_0^Te^{n^2A(t)}\left(\varphi^\epsilon(t)-\varphi(t)\right)dt\right]^2}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^2\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^2}.$$
(42)

Using $\int_0^Te^{n^2A(t)}\,dt\ge\frac{e^{n^2A(T)}-1}{qn^2}$, we have

$$\begin{aligned}\sum_{n=1}^{N}\frac{e^{2n^2A(T)}\left(g_n-g_n^\epsilon\right)^2}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^2}&\le\sum_{n=1}^{N}\frac{e^{2n^2A(T)}}{C_0^2\left(\int_0^Te^{n^2A(t)}\,dt\right)^2}\left(g_n-g_n^\epsilon\right)^2\le\sum_{n=1}^{N}\frac{n^4q^2e^{2n^2A(T)}}{C_0^2\left(e^{n^2A(T)}-1\right)^2}\left(g_n-g_n^\epsilon\right)^2\\&\le\sum_{n=1}^{N}\frac{n^4q^2}{C_0^2\left(1-e^{-n^2A(T)}\right)^2}\left(g_n-g_n^\epsilon\right)^2\le\frac{4N^4q^2\epsilon^2}{C_0^2}.\end{aligned}$$
(43)

In a similar way, and using (26), we also obtain

$$\begin{aligned}\sum_{n=1}^{N}\frac{e^{2n^2A(T)}g_n^2\left[\int_0^Te^{n^2A(t)}\left(\varphi^\epsilon(t)-\varphi(t)\right)dt\right]^2}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^2\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^2}&\le\sum_{n=1}^{N}\frac{e^{2n^2A(T)}g_n^2\left[\int_0^Te^{2n^2A(t)}\,dt\right]\left[\int_0^T\left|\varphi^\epsilon(t)-\varphi(t)\right|^2dt\right]}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^2\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^2}\\&\le\sum_{n=1}^{N}\frac{e^{2n^2A(T)}g_n^2\left[\int_0^Te^{2n^2A(t)}\,dt\right]\left[\int_0^T\left|\varphi^\epsilon(t)-\varphi(t)\right|^2dt\right]}{C_0^4\left(\int_0^Te^{n^2A(t)}\,dt\right)^4}\\&\le\sum_{n=1}^{N}\frac{n^6q^4e^{2n^2A(T)}\left(e^{2n^2A(T)}-1\right)g_n^2\epsilon^2}{2pC_0^4\left(e^{n^2A(T)}-1\right)^4}\le\sum_{n=1}^{N}\frac{n^6q^4\left(1-e^{-2n^2A(T)}\right)g_n^2\epsilon^2}{2pC_0^4\left(1-e^{-n^2A(T)}\right)^4}.\end{aligned}$$
(44)

It is easy to see that $\frac{1}{1-e^{-n^2A(T)}}\le\frac{1}{1-e^{-pT}}$. This implies that

$$\sum_{n=1}^{N}\frac{e^{2n^2A(T)}g_n^2\left[\int_0^Te^{n^2A(t)}\left(\varphi^\epsilon(t)-\varphi(t)\right)dt\right]^2}{\left(\int_0^Te^{n^2A(t)}\varphi(t)\,dt\right)^2\left(\int_0^Te^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^2}\le\sum_{n=1}^{N}\frac{N^6q^4g_n^2\epsilon^2}{2pC_0^4\left(1-e^{-pT}\right)^4}\le\frac{N^6q^4\epsilon^2}{2pC_0^4\left(1-e^{-pT}\right)^4}\sum_{n=1}^{\infty}g_n^2\le\frac{N^6q^4\epsilon^2\|g\|^2}{2pC_0^4\left(1-e^{-pT}\right)^4}.$$
(45)

Therefore

$$\|I_2\|^2\le\frac{4N^2\epsilon^2}{C_0^2}+\frac{N^6q^4\epsilon^2\|g\|^2}{2pC_0^4\left(1-e^{-pT}\right)^4}\le N^6\epsilon^2P^2,$$

where $P=\sqrt{\frac{4}{C_0^2}+\frac{q^4\|g\|^2}{2pC_0^4\left(1-e^{-pT}\right)^4}}$. Hence

$$\|I_2\|\le N^3\epsilon P.$$
(46)

Combining (33), (41), and (46), we obtain

$$\|f-f^\epsilon\|=\|I_1+I_2\|\le\|I_1\|+\|I_2\|\le\left(2\sqrt{\pi}+\sqrt{\frac{\pi}{2}}\right)\|f\|_{H^1(0,\pi)}\frac{1}{\sqrt{N}}+PN^3\epsilon.$$
(47)

Since $N=\left[\epsilon^{\frac{k-1}{3}}\right]+1$, we obtain

$$\|f-f^\epsilon\|\le Q\epsilon^{\frac{1-k}{6}}+2P\epsilon^{k},$$
(48)

where $Q=\left(2\sqrt{\pi}+\sqrt{\frac{\pi}{2}}\right)\|f\|_{H^1(0,\pi)}$. □
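The choice $N=[\epsilon^{\frac{k-1}{3}}]+1$ balances the two competing terms in (47): the tail term behaves like $N^{-1/2}\lesssim\epsilon^{\frac{1-k}{6}}$, while the data-error term behaves like $N^3\epsilon\lesssim\epsilon^{k}$. A Python sketch of the truncated reconstruction (31) (again not the authors' FORTRAN 95 implementation; the names, grid sizes, and quadrature are our own illustrative choices) could read:

import numpy as np

def _trapz(y, x):
    # Simple trapezoidal rule, kept local so the sketch is self-contained.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def truncated_source(g_eps, phi_eps, a, T, eps, k=0.5, n_t=2001, n_x=201):
    # Evaluate the truncated expansion (31) with N = [eps^((k-1)/3)] + 1.
    N = int(eps ** ((k - 1.0) / 3.0)) + 1
    t = np.linspace(0.0, T, n_t)
    x = np.linspace(0.0, np.pi, n_x)
    A = np.concatenate(([0.0], np.cumsum(0.5 * (a(t[1:]) + a(t[:-1])) * np.diff(t))))
    f_eps = np.zeros_like(x)
    for n in range(1, N + 1):
        gn_eps = (2.0 / np.pi) * _trapz(g_eps(x) * np.sin(n * x), x)
        # e^{n^2 A(T)} / int_0^T e^{n^2 A(t)} phi^eps(t) dt, rewritten with the
        # bounded factor e^{n^2 (A(t) - A(T))} <= 1 to avoid overflow.
        I_n = _trapz(np.exp(n ** 2 * (A - A[-1])) * phi_eps(t), t)
        f_eps += gn_eps / I_n * np.sin(n * x)
    return x, f_eps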

3 Numerical results

In this section, we present a numerical example to illustrate the theory of Section 2. In the numerical experiments, we measure the error between the exact source and the approximate source by the root mean square error (RMSE):

$$\mathrm{RMSE}\left(f,f^\epsilon\right):=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(f(x_n)-f^\epsilon(x_n)\right)^2},$$

where $f(x_n)$ and $f^\epsilon(x_n)$ are the values of $f$ and $f^\epsilon$ at the grid points $x_n$ of a discretization of $(0,\pi)$.
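In Python (rather than the FORTRAN 95 used for the actual experiments), this error measure could be computed as follows:

import numpy as np

def rmse(f_exact, f_approx):
    # Root mean square error between two functions sampled on the same grid.
    f_exact = np.asarray(f_exact, dtype=float)
    f_approx = np.asarray(f_approx, dtype=float)
    return float(np.sqrt(np.mean((f_exact - f_approx) ** 2)))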

Now, we consider

$$\begin{cases}u_t-a(t)u_{xx}=\varphi(t)f(x), & x\in(0,\pi),\ t\in(0,1),\\ u(0,t)=u(\pi,t)=0, & t\in(0,1),\\ u(x,T)=g(x), & x\in(0,\pi),\end{cases}$$

where

$$a(t)=t+1,\qquad \varphi(t)=t^2+t+1,\qquad g(x)=\sin x.$$

The corresponding exact source is

$$f(x)=\sin x;$$

indeed, $u(x,t)=t\sin x$ satisfies the equation together with the boundary conditions and $u(x,1)=\sin x=g(x)$.

Using FORTRAN 95, we generate noisy data with the routine rand(), which returns a random number uniformly distributed on [0,1]. The measured data with noise are therefore

$$g^\epsilon(x)=\sin x+\epsilon\,\mathrm{rand}(),\qquad \varphi^\epsilon(t)=t^2+t+1+\epsilon\,\mathrm{rand}(),$$

where $\epsilon=10^{-r}$, with $r=1,2,3,4$, plays the role of the noise amplitude.

We can easily see that

$$\left\|g-g^\epsilon\right\|<\epsilon\sqrt{\pi},\qquad \left\|\varphi-\varphi^\epsilon\right\|<\epsilon\sqrt{\pi},$$

so both errors tend to zero as $\epsilon\to0$.
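An analogous way to generate such noisy data in Python (a sketch mirroring the FORTRAN rand() construction, with NumPy's uniform generator playing the role of rand()) is:

import numpy as np

rng = np.random.default_rng(1)

def g_eps(x, eps):
    # Noisy final data: g^eps(x) = sin(x) + eps * rand().
    return np.sin(x) + eps * rng.uniform(0.0, 1.0, size=np.shape(x))

def phi_eps(t, eps):
    # Noisy coefficient: phi^eps(t) = t^2 + t + 1 + eps * rand().
    return t ** 2 + t + 1.0 + eps * rng.uniform(0.0, 1.0, size=np.shape(t))

To use these with the reconstruction sketches of Section 2, one fixes the noise level first, e.g. lambda x: g_eps(x, 1e-2) and lambda t: phi_eps(t, 1e-2).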

Figure 1 compares the exact data and the measured data.

Figure 1. Data for the problem.

We consider the source approximation given by the quasi-reversibility regularization, i.e., (20) with $\alpha(\epsilon)=\epsilon$:

$$f^\epsilon(x)=\sum_{n=1}^{\infty}\frac{1}{\epsilon+e^{-n^2A(1)}}\left(\int_0^1e^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^{-1}g_n^\epsilon\sin nx.$$

The errors for $\epsilon=10^{-1},10^{-2},10^{-3},10^{-4}$ are reported in Table 1 and illustrated in Figure 2.

Figure 2. The approximate source: red shows the exact solution and green the approximation obtained by the quasi-reversibility regularization.

Table 1. The error between the exact solution and the regularized solution obtained by the quasi-reversibility method.

On the other hand, we have the source approximation given by the truncated Fourier regularization (31):

$$f^\epsilon(x)=\sum_{n=1}^{N}\left(\int_0^1e^{n^2A(t)}\varphi^\epsilon(t)\,dt\right)^{-1}e^{n^2A(1)}g_n^\epsilon\sin nx.$$

The errors for $\epsilon=10^{-1},10^{-2},10^{-3},10^{-4}$ are reported in Table 2 and illustrated in Figure 3.

Figure 3. The approximate source: red shows the exact solution and green the approximation obtained by the truncated Fourier regularization.

Table 2. The error between the exact solution and the regularized solution obtained by the truncation method.