1 Introduction

The Poisson equation is

Δu=f,
(1)

where Δ is the Laplace operator and f and u are real- or complex-valued functions on a manifold.

In the case f=0, equation (1) is called Laplace's equation, which arises naturally in many areas of engineering and science, especially in wave propagation and vibration phenomena such as the vibration of a structure [1], the acoustic cavity problem [2], radiation waves [3] and the scattering of waves [4]. For example, certain problems related to the search for mineral resources, which involve interpretation of the earth's gravitational and magnetic fields, are equivalent to the Cauchy problem for Laplace's equation. In another application, geophysical underground prospection by the geoelectrical method has been developed in recent years (see the historical account in Zhdanov and Keller [5]). In fact, even in its most basic formulation, the geoelectrical method involves the solution of the Cauchy problem for Laplace's equation.

Nowadays, the Cauchy problem for Laplace's equation, and more generally for elliptic equations, has a central position in inverse boundary value problems such as electrical impedance tomography and optical tomography; while transient phenomena evolve in a time-like variable, elliptic equations describe steady-state processes in physical fields.

In physics, a gravitational field is a model used to explain the influence that a massive body extends into the space around itself, producing a force on another massive body. Therefore, a gravitational field characterizes gravitational phenomena and is measured in Newtons per kilogram (N/kg). In its original concept, gravity was a force between point masses. Following Newton, Laplace attempted to model gravity as some kind of radiation field or fluid, although the 19th century explanations of gravity have usually been sought in terms of a field model rather than a point attraction. A gravitational field is the force field created around massive bodies that causes attraction of other massive bodies. A distribution of matter of density ρ=ρ(x,y,z) gives rise to a gravitational potential ϕ which satisfies the three-dimensional (3D) Poisson equation

Δϕ=4πGρ
(2)

at the points inside the distribution, where G is the universal gravitational constant.

Motivated by important applications of the Poisson equation, in this paper we are interested in the Cauchy problem of identifying the gravitational field ϕ which satisfies a Poisson equation (in the two-dimensional or three-dimensional case). As first pointed out by Hadamard, the Cauchy problem for Laplace's equation is ill-posed: a solution does not always exist and, whenever it does exist, it does not depend continuously on the given data. A small perturbation in the Cauchy data can therefore affect the solution significantly. Readers are referred to [1–4, 6–13] for earlier material on the Cauchy problem for Laplace's equation. For a homogeneous source term, the elliptic problem was considered in a series of articles analyzing stability and convergence (see, e.g., [6–8, 10, 11, 14]). Similar homogeneous versions of Laplace's equation were considered by Regińska et al. [15, 16], Lesnic et al. [17], Tautenhahn [18] and Wei et al. [19–21].

Although there are many works on the homogeneous case of the elliptic problem, the literature on the inhomogeneous case, for example the Poisson equation, is quite scarce. Earlier work on the abstract second-order elliptic equation with an inhomogeneous source was given by Showalter [22] (see p.469). The main aim of this paper is to present a general regularization method and to investigate the error estimate between the regularized solution and the exact solution.

Our paper is organized as follows. In Sections 2 and 3, we construct stable approximate solutions of the equation and give convergence estimates for the 2D and 3D cases, respectively. Finally, in Section 4, one numerical example for each of the 2D and 3D cases is devised to test the effectiveness of the proposed methods.

2 The 2D case of the Poisson equation

2.1 Mathematical model

We consider the problem of finding ϕ(x,y) such that

\Delta\phi = \phi_{xx} + \phi_{yy} = f(x,y), \quad (x,y) \in \Omega \times (0,1),
(3)

subject to the boundary condition ϕ(x,y)=0 for x ∈ ∂Ω and the Cauchy data ϕ(x,0)=φ(x), ϕ_y(x,0)=g(x). Here Ω=(0,π) and φ, g ∈ L²(0,π) are given functions. We will derive the solution of Problem (3) for a source term f ∈ L²(0,1; L²(0,π)). Let {u_p(x)} be an orthogonal system of L²(0,π). Then

\phi(x,y) = \sum_{p=1}^{\infty} \langle \phi(x,y), u_p(x) \rangle\, u_p(x),

and we have

\langle \phi_{xx} + \phi_{yy}, u_p(x) \rangle = \langle f(x,y), u_p(x) \rangle.
(4)

Set ϕ_p(y) = ⟨ϕ(x,y), u_p(x)⟩ = \int_0^{\pi} \phi(x,y)\, u_p(x)\,dx. Integrating (4) by parts twice with respect to x, we have

\phi_p''(y) + \phi_x(\pi,y)\, u_p(\pi) - \phi_x(0,y)\, u_p(0) - \phi(\pi,y)\, u_p'(\pi) + \phi(0,y)\, u_p'(0) + \int_0^{\pi} \phi(x,y)\, u_p''(x)\,dx = \langle f(x,y), u_p(x) \rangle.

By choosing u p and λ p such that

u_p''(x) = -\lambda_p u_p(x), \qquad u_p(0) = u_p(\pi) = 0,
(5)

we have the following system:

\phi_p''(y) - \lambda_p \phi_p(y) = f_p(y), \qquad \phi_p(0) = \varphi_p, \qquad \phi_p'(0) = g_p,
(6)

where f_p(y) = ⟨f(x,y), u_p(x)⟩, φ_p = ⟨φ(x), u_p(x)⟩, g_p = ⟨g(x), u_p(x)⟩. Solving (5), we obtain λ_p = p² and u_p(x) = sin(px). This leads to

\phi_p(y) = \cosh(py)\,\varphi_p + \frac{\sinh(py)}{p}\, g_p + \int_0^y \frac{\sinh(p(y-s))}{p}\, f_p(s)\,ds.
(7)

Hence, we obtain the solution of Problem (3) as follows:

\phi(x,y) = \sum_{p=1}^{\infty}\Bigl[\cosh(py)\,\varphi_p + \frac{\sinh(py)}{p}\, g_p + \int_0^y \frac{\sinh(p(y-s))}{p}\, f_p(s)\,ds\Bigr]\sin(px).
(8)

From (8), we see that the data error can be arbitrarily amplified by the 'kernel' function cosh(py). That is the reason why equation (3) is ill-posed in the sense of Hadamard. In his paper, Hadamard provided a fundamental example showing that a solution of a Cauchy problem for Laplace's equation does not depend continuously on the data. The example is as follows:

\Delta u = 0, \quad (x,y) \in \mathbb{R}^2,\ y > 0,
(9)
u(x,0)=0,
(10)
u_y(x,0) = A_n \sin(nx), \quad x \in \mathbb{R}.
(11)

We have

u_n(x,y) = \frac{A_n}{n}\,\sin(nx)\,\sinh(ny).
(12)

If we choose A_n = 1/n^p for some p > 0, then u_{n,y}(x,0) → 0 uniformly as n → ∞; whereas, for any y > 0, the function u_n(x,y), containing the factor sinh(ny), blows up as n → ∞.
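To make this instability concrete, the following minimal Python sketch evaluates Hadamard's example (12) at a fixed interior point; the specific choice A_n = 1/n (i.e. p = 1) and the odd values of n are illustrative assumptions of ours, not part of the original example.

import numpy as np

# Data u_y(x,0) = A_n sin(nx) with A_n = 1/n tends to 0 as n grows,
# while the solution u_n(x,y) = (A_n/n) sin(nx) sinh(ny) blows up for y > 0.
x, y = np.pi / 2.0, 0.5          # fixed interior point; sin(n*pi/2) = +/-1 for odd n
for n in [1, 5, 9, 21, 41]:
    A_n = 1.0 / n
    data = A_n * np.sin(n * x)                        # Cauchy datum at y = 0
    u_n = (A_n / n) * np.sin(n * x) * np.sinh(n * y)  # solution (12)
    print(f"n={n:2d}  |u_y(x,0)|={abs(data):.2e}  |u_n(x,y)|={abs(u_n):.2e}")

The printed values show the datum shrinking like 1/n while |u_n(x,y)| grows exponentially, which is exactly the loss of continuous dependence described above.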

2.2 A general filter regularization method

To find regularized solutions, we replace the 'unstable' kernels cosh(py), sinh(py), sinh(p(y−s)) by 'stable' kernels A(β,p,y), B(β,p,y), C(β,p,y,s) that satisfy the following properties:

(P1): \quad \lim_{\beta \to 0} A(\beta,p,y) = \cosh(py), \qquad \lim_{\beta \to 0} B(\beta,p,y) = \sinh(py), \qquad \lim_{\beta \to 0} C(\beta,p,y,s) = \sinh(p(y-s)),
(13)

and some suitable conditions which are given later. Following property (P1), one can construct other kernels. Furthermore, the idea behind this property can be applied to other ill-posed problems, such as the backward heat conduction problem [23].

Throughout this section, we assume that the functions φ, g ∈ L²(0,π) and f ∈ L²(0,1; L²(0,π)). In reality, they can only be measured with some measurement error, and we would actually have the noisy data

\varphi^{\epsilon}(x) = \sum_{p=1}^{\infty} \varphi_p^{\epsilon}\,\sin(px), \qquad g^{\epsilon}(x) = \sum_{p=1}^{\infty} g_p^{\epsilon}\,\sin(px)

for which

\|\varphi^{\epsilon} - \varphi\| \le \epsilon, \qquad \|g^{\epsilon} - g\| \le \epsilon.

Here the constant ε > 0 represents a bound on the measurement error and ‖·‖ denotes the norm in L²(Ω). As noted above, we present the following general regularized solution:

\phi^{\epsilon}(x,y) = \sum_{p=1}^{\infty}\Bigl[A(\beta,p,y)\,\varphi_p^{\epsilon} + \frac{B(\beta,p,y)}{p}\, g_p^{\epsilon} + \int_0^y \frac{C(\beta,p,y,s)}{p}\, f_p(s)\,ds\Bigr]\sin(px),
(14)

where

A(\beta,p,y) = \frac{P(\beta,p,y)\,e^{py} + e^{-py}}{2}, \qquad B(\beta,p,y) = \frac{P(\beta,p,y)\,e^{py} - e^{-py}}{2}, \qquad C(\beta,p,y,s) = \frac{P(\beta,p,y)\,e^{p(y-s)} - e^{p(s-y)}}{2},
(15)

and P(β,p,y) is chosen suitably.
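As an illustration of how the regularized solution (14) with the kernels (15) can be evaluated numerically, the following Python sketch sums the series for a fixed level y, given the noisy sine coefficients and a user-supplied filter P. The function names, the truncation of the series to the coefficients actually passed in, and the trapezoidal quadrature for the integral term are our own assumptions, not part of the method itself.

import numpy as np

def regularized_solution(x, y, phi_eps, g_eps, f_coeff, s_grid, P, beta):
    """Evaluate the regularized solution (14) at points x for a fixed level y.

    phi_eps[p-1], g_eps[p-1] : noisy sine coefficients of the Cauchy data,
    f_coeff[p-1]             : array of samples of f_p(s) on s_grid,
    P(beta, p, y)            : the chosen filter function.
    """
    x = np.asarray(x, dtype=float)
    phi = np.zeros_like(x)
    for idx, (phi_p, g_p) in enumerate(zip(phi_eps, g_eps)):
        p = idx + 1
        Pb = P(beta, p, y)
        # stabilized kernels (15)
        A = 0.5 * (Pb * np.exp(p * y) + np.exp(-p * y))
        B = 0.5 * (Pb * np.exp(p * y) - np.exp(-p * y))
        mask = s_grid <= y
        s = s_grid[mask]
        C = 0.5 * (Pb * np.exp(p * (y - s)) - np.exp(p * (s - y)))
        # approximates the integral of C(beta,p,y,s) f_p(s) over [0, y]
        integral = np.trapz(C * f_coeff[idx][mask], s)
        phi += (A * phi_p + B * g_p / p + integral / p) * np.sin(p * x)
    return phi

Any filter satisfying (P1) and the conditions of Theorem 1 below can be supplied through the argument P.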

Theorem 1 (A general regularization method)

Assume that ϕ(x,y) is the exact solution of Problem (3) and M_2(p) is a real-valued function such that

\sum_{p=1}^{\infty}\Bigl(M_2^2(p)\,\bigl|\langle \phi(x,y), \sin(px)\rangle\bigr|^2 + \frac{M_2^2(p)}{p^2}\,\bigl|\langle \phi_y(x,y), \sin(px)\rangle\bigr|^2\Bigr) \le 2E^2,
(16)

where E is a positive number. Let P(β,p,y) be a function such that, for any β > 0, there exist M_1(β) and M_3(β) satisfying

(a) \quad 1 - P(\beta,p,y) \le M_1(\beta)\, M_2(p),
(17)
(b) \quad P(\beta,p,y) \le M_3(\beta)\, e^{-py}.
(18)

Then ϕ ϵ (x,y) defined by equation (14) fulfills the following estimate:

\|\phi^{\epsilon} - \phi\| \le \epsilon\sqrt{2 M_3^2(\beta) + 2} + M_1(\beta)\,E.
(19)

A choice β=β(ϵ) is admissible if

\lim_{\epsilon \to 0} \beta(\epsilon) = \lim_{\epsilon \to 0} M_1(\beta) = \lim_{\epsilon \to 0} \epsilon\, M_3(\beta) = 0.
(20)

Proof The proof will be split into two parts as follows.

Part 1. We estimate ‖ϕ^ε − v^ε‖, where v^ε is defined as

v^{\epsilon}(x,y) = \sum_{p=1}^{\infty}\Bigl[A(\beta,p,y)\,\varphi_p + \frac{B(\beta,p,y)}{p}\, g_p + \int_0^y \frac{C(\beta,p,y,s)}{p}\, f_p(s)\,ds\Bigr]\sin(px).
(21)

By a simple calculation, we get

\|\phi^{\epsilon}(\cdot,y) - v^{\epsilon}(\cdot,y)\|^2 = \sum_{p=1}^{\infty}\Bigl[A(\beta,p,y)(\varphi_p^{\epsilon} - \varphi_p) + \frac{B(\beta,p,y)}{p}(g_p^{\epsilon} - g_p)\Bigr]^2 \le 2\sum_{p=1}^{\infty} A^2(\beta,p,y)(\varphi_p^{\epsilon} - \varphi_p)^2 + 2\sum_{p=1}^{\infty} B^2(\beta,p,y)(g_p^{\epsilon} - g_p)^2 \le \frac{M_3^2(\beta) + 1}{2}\bigl(2\|\varphi^{\epsilon} - \varphi\|^2 + 2\|g^{\epsilon} - g\|^2\bigr) \le 2\epsilon^2\bigl(M_3^2(\beta) + 1\bigr).
(22)

Hence

\|\phi^{\epsilon}(\cdot,y) - v^{\epsilon}(\cdot,y)\| \le \epsilon\sqrt{2 M_3^2(\beta) + 2}.
(23)

Part 2. We estimate ‖v^ε − ϕ‖. In fact, we have

\langle \phi(x,y) - v^{\epsilon}(x,y), \sin(px)\rangle = \bigl(\cosh(py) - A(\beta,p,y)\bigr)\varphi_p + \frac{\sinh(py) - B(\beta,p,y)}{p}\, g_p + \int_0^y \frac{\sinh(p(y-s)) - C(\beta,p,y,s)}{p}\, f_p(s)\,ds = \frac{(1 - P(\beta,p,y))\,e^{py}}{2}\Bigl(\varphi_p + \frac{g_p}{p} + \frac{1}{p}\int_0^y e^{-ps}\, f_p(s)\,ds\Bigr).
(24)

From (8), the partial derivative of ϕ with respect to y is

\phi_y(x,y) = \sum_{p=1}^{\infty}\Bigl[p\,\sinh(py)\,\varphi_p + \cosh(py)\, g_p + \int_0^y \cosh(p(y-s))\, f_p(s)\,ds\Bigr]\sin(px).

This implies that

\langle \phi(x,y), \sin(px)\rangle + \frac{1}{p}\,\langle \phi_y(x,y), \sin(px)\rangle = e^{py}\Bigl(\varphi_p + \frac{g_p}{p} + \frac{1}{p}\int_0^y e^{-ps}\, f_p(s)\,ds\Bigr).
(25)

Combining (24) and (25), we obtain

\langle \phi(x,y) - v^{\epsilon}(x,y), \sin(px)\rangle = \frac{1 - P(\beta,p,y)}{2}\Bigl(\langle \phi(x,y), \sin(px)\rangle + \frac{1}{p}\,\langle \phi_y(x,y), \sin(px)\rangle\Bigr).
(26)

Hence

\|\phi(\cdot,y) - v^{\epsilon}(\cdot,y)\|^2 = \sum_{p=1}^{\infty}\bigl|\langle \phi(x,y) - v^{\epsilon}(x,y), \sin(px)\rangle\bigr|^2 = \sum_{p=1}^{\infty}\frac{(1 - P(\beta,p,y))^2}{4}\Bigl(\langle\phi(x,y),\sin(px)\rangle + \frac{1}{p}\langle\phi_y(x,y),\sin(px)\rangle\Bigr)^2 \le \sum_{p=1}^{\infty}\frac{M_1^2(\beta)\, M_2^2(p)}{2}\Bigl(|\langle\phi(x,y),\sin(px)\rangle|^2 + \frac{1}{p^2}|\langle\phi_y(x,y),\sin(px)\rangle|^2\Bigr) = \frac{M_1^2(\beta)}{2}\sum_{p=1}^{\infty}\Bigl(M_2^2(p)\,|\langle\phi(x,y),\sin(px)\rangle|^2 + \frac{M_2^2(p)}{p^2}\,|\langle\phi_y(x,y),\sin(px)\rangle|^2\Bigr) \le M_1^2(\beta)\,E^2.
(27)

Combining (23) and (27), we have

\|\phi(\cdot,y) - \phi^{\epsilon}(\cdot,y)\| \le \|\phi(\cdot,y) - v^{\epsilon}(\cdot,y)\| + \|\phi^{\epsilon}(\cdot,y) - v^{\epsilon}(\cdot,y)\| \le \epsilon\sqrt{2 M_3^2(\beta) + 2} + M_1(\beta)\,E.
(28)

This completes the proof. □

Theorem 2 (The first regularized solution)

Let P(β,p,y) = e^{-βp^2 y}. Assume that ϕ is the exact solution of Problem (3) such that ‖ϕ_{xx}(·,y)‖² + ‖ϕ_{xy}(·,y)‖² ≤ 2E² for y ∈ [0,1]. If we select β = 1/(4k ln(1/ε)) (0 < k < 1), then

\|\phi^{\epsilon} - \phi\| \le \frac{E}{4k\ln(1/\epsilon)} + \sqrt{2\epsilon^2 + 2\epsilon^{2-2k}}.
(29)

Proof First, we find functions M_1, M_2, M_3 such that P(β,p,y) satisfies (17), (18). Using the inequality 1 − e^{-m} ≤ m, we have

1 - P(\beta,p,y) = 1 - e^{-\beta p^2 y} \le \beta p^2 y \le \beta p^2.

In view of (17), we can choose M_1(β) = β and M_2(p) = p². The condition

\sum_{p=1}^{\infty}\Bigl(M_2^2(p)\,\bigl|\langle \phi(x,y), \sin(px)\rangle\bigr|^2 + \frac{M_2^2(p)}{p^2}\,\bigl|\langle \phi_y(x,y), \sin(px)\rangle\bigr|^2\Bigr) \le 2E^2
(30)

is equivalent to

\sum_{p=1}^{\infty}\bigl(p^4\,|\langle \phi(x,y), \sin(px)\rangle|^2 + p^2\,|\langle \phi_y(x,y), \sin(px)\rangle|^2\bigr) = \|\phi_{xx}(\cdot,y)\|^2 + \|\phi_{xy}(\cdot,y)\|^2 \le 2E^2.
(31)

Using the inequality p − βp² ≤ 1/(4β), we get

e^{py}\, P(\beta,p,y) = e^{(p - \beta p^2)y} \le e^{\frac{y}{4\beta}} \le e^{\frac{1}{4\beta}}.

In view of (18), we can choose M_3(β) = e^{1/(4β)}. The regularization parameter β = 1/(4k ln(1/ε)) (0 < k < 1) is admissible. In fact, it is easy to check that

\lim_{\epsilon \to 0}\beta(\epsilon) = \lim_{\epsilon \to 0} M_1(\beta) = \lim_{\epsilon \to 0} \frac{1}{4k\ln(1/\epsilon)} = 0

and

\lim_{\epsilon \to 0} \epsilon\, M_3(\beta) = \lim_{\epsilon \to 0} \epsilon\, e^{\frac{1}{4\beta}} = \lim_{\epsilon \to 0} \epsilon^{1-k} = 0.

Applying Theorem 1, we obtain

\|\phi^{\epsilon} - \phi\| \le \epsilon\sqrt{2 M_3^2(\beta) + 2} + M_1(\beta)\,E \le \frac{E}{4k\ln(1/\epsilon)} + \sqrt{2\epsilon^2 + 2\epsilon^{2-2k}}.
(32)

 □
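For later reference in the numerical experiments, the first filter and the parameter rule of Theorem 2 can be collected in the following small Python sketch; the function names and the default k = 1/2 are our own choices.

import numpy as np

def P_first(beta, p, y):
    # first filter of Theorem 2: P(beta, p, y) = exp(-beta * p^2 * y)
    return np.exp(-beta * p**2 * y)

def beta_first(eps, k=0.5):
    # admissible parameter choice of Theorem 2: beta = 1 / (4 k ln(1/eps))
    return 1.0 / (4.0 * k * np.log(1.0 / eps))

Passing P_first and beta_first(eps) to the series sketch after (15) yields the first regularized solution.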

Theorem 3 (The second regularized solution)

Let P(β,p,y) = e^{-py}/(βp + e^{-py}). Assume that ϕ is the exact solution of Problem (3) such that ‖ϕ_x(·,y)‖² + ‖ϕ_y(·,y)‖² ≤ 2E² for y ∈ [0,1]. If we select β = ε^k (0 < k < 1), then

\|\phi^{\epsilon} - \phi\| \le \frac{E}{\ln(1/\epsilon^k)} + \sqrt{2\epsilon^2 + \frac{2\epsilon^{2-2k}}{\ln^2(1/\epsilon^k)}}.
(33)

Proof First, we find functions M_1, M_2, M_3 such that P(β,p,y) satisfies (17), (18). We have

1 - P(\beta,p,y) = 1 - \frac{e^{-py}}{\beta p + e^{-py}} = \frac{\beta p}{\beta p + e^{-py}}.

On the other hand, for 0 ≤ y ≤ 1, we have

\frac{1}{\beta p + e^{-py}} \le \frac{1}{\beta p + e^{-p}} \le \frac{1}{\beta\ln(\frac{1}{\beta})}, \quad \beta \in (0,1),
since the function p ↦ βp + e^{-p} attains its minimum β ln(1/β) + β ≥ β ln(1/β) at p = ln(1/β).

Hence

1 - P(\beta,p,y) \le \frac{p}{\ln(\frac{1}{\beta})}.
(34)

In view of (17), we can choose M_1(β) = 1/ln(1/β) and M_2(p) = p. The condition

\sum_{p=1}^{\infty}\Bigl(M_2^2(p)\,\bigl|\langle \phi(x,y), \sin(px)\rangle\bigr|^2 + \frac{M_2^2(p)}{p^2}\,\bigl|\langle \phi_y(x,y), \sin(px)\rangle\bigr|^2\Bigr) \le 2E^2
(35)

is equivalent to

\sum_{p=1}^{\infty}\bigl(p^2\,|\langle \phi(x,y), \sin(px)\rangle|^2 + |\langle \phi_y(x,y), \sin(px)\rangle|^2\bigr) = \|\phi_x(\cdot,y)\|^2 + \|\phi_y(\cdot,y)\|^2 \le 2E^2.
(36)

Moreover, e^{py} P(β,p,y) = 1/(βp + e^{-py}) obeys the same bound, so in view of (18) we can choose M_3(β) = 1/(β ln(1/β)). Next, we prove that the regularization parameter β = ε^k (0 < k < 1) is admissible by checking condition (20). In fact,

\lim_{\epsilon \to 0}\beta(\epsilon) = \lim_{\epsilon \to 0} M_1(\beta) = \lim_{\epsilon \to 0} \frac{1}{\ln(1/\epsilon^k)} = 0

and

\lim_{\epsilon \to 0} \epsilon\, M_3(\beta) = \lim_{\epsilon \to 0} \frac{\epsilon}{\beta\ln(1/\beta)} = \lim_{\epsilon \to 0} \frac{\epsilon^{1-k}}{\ln(1/\epsilon^k)} = 0.

Applying Theorem 1, we obtain

\|\phi^{\epsilon} - \phi\| \le \epsilon\sqrt{2 M_3^2(\beta) + 2} + M_1(\beta)\,E \le \frac{E}{\ln(1/\epsilon^k)} + \sqrt{2\epsilon^2 + \frac{2\epsilon^{2-2k}}{\ln^2(1/\epsilon^k)}}.
(37)

 □
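The second filter can likewise be written as a small Python sketch; again the names and the default k = 1/2 are our own assumptions. The filter is evaluated directly as e^{-py}/(βp + e^{-py}), which only involves decaying exponentials and is therefore safe to compute for large p.

import numpy as np

def P_second(beta, p, y):
    # second filter of Theorem 3: P(beta, p, y) = exp(-p y) / (beta p + exp(-p y));
    # exp(-p*y) underflows harmlessly to 0 for large p*y, giving P ~ 0 there.
    e = np.exp(-p * y)
    return e / (beta * p + e)

def beta_second(eps, k=0.5):
    # admissible parameter choice of Theorem 3: beta = eps**k
    return eps ** k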

3 The 3D Poisson equation

Let ρ be the given mass density. For simplicity, we consider the following problem: determine the gravitational potential ϕ satisfying the Poisson equation

Δϕ(x,y,z)=4πGρ(x,y,z)
(38)

subject to the homogeneous Dirichlet boundary condition ϕ(x,y,z) = 0 for (x,y) ∈ ∂Ω, z ∈ (0,1), where Ω = (0,π)×(0,π). The data of ϕ at z = 0 are ϕ(x,y,0) = φ(x,y) and ϕ_z(x,y,0) = g(x,y), where φ and g are known.

In a similar way as for (8), we find that the solution of (38) is

\phi(x,y,z) = \sum_{p,q=1}^{\infty}\Bigl[\cosh\bigl(z\sqrt{p^2+q^2}\bigr)\,\varphi_{pq} + \frac{\sinh(z\sqrt{p^2+q^2})}{\sqrt{p^2+q^2}}\, g_{pq} + 4\pi G\int_0^z \frac{\sinh((z-s)\sqrt{p^2+q^2})}{\sqrt{p^2+q^2}}\, \rho_{pq}(s)\,ds\Bigr]\sin(px)\sin(qy).
(39)

Physically, φ and g can only be measured with some measurement error, and we actually have the disturbed data functions

\varphi^{\epsilon}(x,y) = \sum_{p,q=1}^{\infty} \varphi_{pq}^{\epsilon}\,\sin(px)\sin(qy) \in L^2(\Omega), \qquad g^{\epsilon}(x,y) = \sum_{p,q=1}^{\infty} g_{pq}^{\epsilon}\,\sin(px)\sin(qy) \in L^2(\Omega)

for which

\|\varphi^{\epsilon} - \varphi\| \le \epsilon, \qquad \|g^{\epsilon} - g\| \le \epsilon,

where the constant ε > 0 represents a bound on the measurement error and ‖·‖ denotes the L²(Ω)-norm.

By a similar method as in Section 2.2, we present the following general regularized solution:

\phi^{\epsilon}(x,y,z) = \sum_{p,q=1}^{\infty}\Bigl[A(\beta,p,q,z)\,\varphi_{pq}^{\epsilon} + \frac{B(\beta,p,q,z)}{\sqrt{p^2+q^2}}\, g_{pq}^{\epsilon} + 4\pi G\int_0^z \frac{C(\beta,p,q,z,s)}{\sqrt{p^2+q^2}}\, \rho_{pq}(s)\,ds\Bigr]\sin(px)\sin(qy),
(40)

where

A(\beta,p,q,z) = \frac{P(\beta,p,q,z)\,e^{z\sqrt{p^2+q^2}} + e^{-z\sqrt{p^2+q^2}}}{2}, \qquad B(\beta,p,q,z) = \frac{P(\beta,p,q,z)\,e^{z\sqrt{p^2+q^2}} - e^{-z\sqrt{p^2+q^2}}}{2}, \qquad C(\beta,p,q,z,s) = \frac{P(\beta,p,q,z)\,e^{(z-s)\sqrt{p^2+q^2}} - e^{(s-z)\sqrt{p^2+q^2}}}{2},
(41)

and P(β,p,q,z) is chosen suitably.
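The 3D kernels (41) differ from the 2D case only by replacing p with \sqrt{p^2+q^2}; a minimal Python sketch of their evaluation is given below (the function name and calling convention are our own), and the double series (40) can then be assembled exactly as in the 2D sketch after (15).

import numpy as np

def kernels_3d(P, beta, p, q, z, s):
    """Stabilized kernels (41) for the 3D regularized solution (40);
    P is the chosen filter P(beta, p, q, z)."""
    lam = np.hypot(p, q)                      # sqrt(p^2 + q^2)
    Pb = P(beta, p, q, z)
    A = 0.5 * (Pb * np.exp(z * lam) + np.exp(-z * lam))
    B = 0.5 * (Pb * np.exp(z * lam) - np.exp(-z * lam))
    C = 0.5 * (Pb * np.exp((z - s) * lam) - np.exp((s - z) * lam))
    return A, B, C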

Theorem 4 Assume that ϕ(x,y,z) is the exact solution of Problem (38) and N 2 (p,q) is a real-valued function such that

\sum_{p,q=1}^{\infty}\Bigl(N_2^2(p,q)\,\bigl|\langle \phi(x,y,z), \sin(px)\sin(qy)\rangle\bigr|^2 + \frac{N_2^2(p,q)}{p^2+q^2}\,\bigl|\langle \phi_z(x,y,z), \sin(px)\sin(qy)\rangle\bigr|^2\Bigr) \le 2E^2,
(42)

where E is a positive number. Let P(β,p,q,z) be a function such that for any β>0, there exist functions N 1 (β) and N 3 (β) satisfying

(a) \quad 1 - P(\beta,p,q,z) \le N_1(\beta)\, N_2(p,q),
(43)
(b) \quad P(\beta,p,q,z) \le N_3(\beta)\, e^{-z\sqrt{p^2+q^2}},
(44)

then ϕ ϵ (x,y,z) defined by equation (40) fulfills the following estimate:

\|\phi^{\epsilon} - \phi\| \le \epsilon\sqrt{2 N_3^2(\beta) + 2} + N_1(\beta)\,E.
(45)

A choice β=β(ϵ) is admissible if

\lim_{\epsilon \to 0}\beta(\epsilon) = \lim_{\epsilon \to 0} N_1(\beta) = \lim_{\epsilon \to 0} \epsilon\, N_3(\beta) = 0.
(46)

Theorem 5 Let P(β,p,q,z) = e^{-z\sqrt{p^2+q^2}} / (β(p^2+q^2)^m + e^{-z\sqrt{p^2+q^2}}) for m ≥ 1. Assume that ϕ is the exact solution of Problem (38) such that

\sum_{p,q=1}^{\infty}\Bigl((p^2+q^2)^m\,\bigl|\langle \phi(x,y,z), \sin(px)\sin(qy)\rangle\bigr|^2 + (p^2+q^2)^{m-1}\,\bigl|\langle \phi_z(x,y,z), \sin(px)\sin(qy)\rangle\bigr|^2\Bigr) \le 2E^2
(47)

for z ∈ [0,1]. If we select β = ε^k (0 < k < 1), then

\|\phi^{\epsilon} - \phi\| \le \frac{E}{\ln(1/\epsilon^k)} + \sqrt{2\epsilon^2 + \frac{2\epsilon^{2-2k}}{\ln^2(1/\epsilon^k)}}.
(48)

Proof First, we find functions N_1, N_2, N_3 such that P(β,p,q,z) satisfies (43), (44). We have

1 - P(\beta,p,q,z) = 1 - \frac{e^{-z\sqrt{p^2+q^2}}}{\beta(p^2+q^2)^m + e^{-z\sqrt{p^2+q^2}}} = \frac{\beta(p^2+q^2)^m}{\beta(p^2+q^2)^m + e^{-z\sqrt{p^2+q^2}}}.

On the other hand, for 0 ≤ z ≤ 1, we have

\frac{1}{\beta(p^2+q^2)^m + e^{-z\sqrt{p^2+q^2}}} \le \frac{1}{\beta(p^2+q^2)^m + e^{-\sqrt{p^2+q^2}}}.

Now, we prove that

\frac{1}{\beta(p^2+q^2)^m + e^{-\sqrt{p^2+q^2}}} \le \frac{m^m}{\beta\ln^m(\frac{1}{m\beta})}.

In fact, for M > 0 let the function h be defined by h(x) = 1/(βx^m + e^{-Mx}). Taking the derivative of h, one has

h'(x) = -\frac{\beta m x^{m-1} - M e^{-Mx}}{(\beta x^m + e^{-Mx})^2}.
(49)

The equation h'(x) = 0 has a unique solution x_0, determined by βm x_0^{m-1} − M e^{-Mx_0} = 0, that is, x_0^{m-1} e^{Mx_0} = M/(mβ). Thus the function h attains its maximum at the unique point x = x_0, and therefore

h(x) \le \frac{1}{\beta x_0^m + e^{-Mx_0}}.
(50)

Since e^{-Mx_0} = (mβ/M)\, x_0^{m-1}, one has

h(x) \le \frac{1}{\beta x_0^m + e^{-Mx_0}} = \frac{1}{\beta x_0^m + \frac{m\beta}{M}\, x_0^{m-1}}.
(51)

By using the inequality e^{Mx_0} ≥ Mx_0, we get

\frac{M}{m\beta} = x_0^{m-1}\, e^{Mx_0} \le \frac{1}{M^{m-1}}\, e^{(m-1)Mx_0}\, e^{Mx_0} = \frac{1}{M^{m-1}}\, e^{mMx_0}.

This gives e^{mMx_0} ≥ M^m/(mβ), or mMx_0 ≥ ln(M^m/(mβ)). Therefore

x_0 \ge \frac{1}{mM}\,\ln\Bigl(\frac{M^m}{m\beta}\Bigr).

Hence, we obtain

h(x) \le \frac{1}{\beta x_0^m} \le \frac{(mM)^m}{\beta\ln^m(\frac{M^m}{m\beta})}.
(52)

Using this inequality for M=1, we have

1 - P(\beta,p,q,z) \le \frac{m^m}{\ln^m(\frac{1}{m\beta})}\,(p^2+q^2)^m
(53)

and

P(\beta,p,q,z) \le \frac{m^m}{\beta\ln^m(\frac{1}{m\beta})}\, e^{-z\sqrt{p^2+q^2}}.
(54)

By choosing N_1(β) = m^m / ln^m(1/(mβ)), N_2(p,q) = (p²+q²)^m and N_3(β) = m^m / (β ln^m(1/(mβ))), the conditions (43), (44) hold. It is easy to check that (46) also holds.

Applying Theorem 4, we obtain

\|\phi^{\epsilon} - \phi\| \le \epsilon\sqrt{2 N_3^2(\beta) + 2} + N_1(\beta)\,E \le \frac{E}{\ln(1/\epsilon^k)} + \sqrt{2\epsilon^2 + \frac{2\epsilon^{2-2k}}{\ln^2(1/\epsilon^k)}}.
(55)

 □

4 Numerical experiment

In this section, simple examples for the 2D and 3D Poisson equations are devised for verifying the validity of our proposed methods.

4.1 Example 1

Consider the 2D Poisson equation as follows:

\begin{cases} \phi_{xx} + \phi_{yy} = f(x,y), & (x,y) \in (0,\pi)\times(0,1), \\ \phi(x,0) = \varphi(x), & x \in (0,\pi), \\ \phi_y(x,0) = g(x), & x \in (0,\pi), \end{cases}
(56)

where

\varphi(x) = 0,
f(x,y) = (4x^2 + 4y^2 - \pi^2 - 5)\, e^{-(x^2+y^2)}\sin(x)\sin(\pi y) - (2\pi y + \pi^2)\, e^{-(x^2+y^2)}\sin(x)\cos(\pi y) - 4x\, e^{-(x^2+y^2)}\cos(x)\sin(\pi y) + e^{-2(x^2+y^2)}\sin(2x)\sin(2\pi y)\,(16y^2 - 4x^2 - 4\pi^2 - 8) - e^{-2(x^2+y^2)}\sin(2x)\cos(2\pi y)\,(8y\pi + 4\pi^2) - 12x\, e^{-2(x^2+y^2)}\cos(2x)\sin(2\pi y),
g(x) = \pi e^{-x^2}\sin(x) + 2\pi e^{-2x^2}\sin(2x).

Problem (56) has a unique solution as follows:

\phi(x,y) = e^{-(x^2+y^2)}\sin(x)\sin(\pi y) + e^{-2(x^2+y^2)}\sin(2x)\sin(2\pi y).

Now we are seeking a solution of the following problem:

\begin{cases} \phi^{\epsilon}_{xx} + \phi^{\epsilon}_{yy} = f^{\epsilon}(x,y), & (x,y) \in (0,\pi)\times(0,1), \\ \phi^{\epsilon}(x,0) = \varphi^{\epsilon}(x), & x \in (0,\pi), \\ \phi^{\epsilon}_y(x,0) = g^{\epsilon}(x), & x \in (0,\pi). \end{cases}
(57)

We take the measured (noisy) data h^ε, with h standing for either φ or g, in the form

h^{\epsilon}(x) = \sum_{p=1}^{p_0}\bigl(h_p + \epsilon\,\mathrm{rand}(p)\bigr)\sin(px),

where p_0 ≤ p_max and {rand(p)} is an array of pseudo-random numbers satisfying

\sum_{p=1}^{p_0} |\mathrm{rand}(p)|^2 \le 1.

Let (p_0, p_max) satisfy

\mathcal{O}(h) := \sum_{p=p_0+1}^{p_{\max}} |h_p|^2 \le \epsilon.
(58)

In this paper, we choose p_0 = 30 and p_max = 100; then (58) is satisfied.
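A minimal Python sketch of this noisy-data construction is given below; the uniform distribution of the pseudo-random numbers and the explicit normalization enforcing the bound above are our own assumptions about how rand(p) is generated.

import numpy as np

def noisy_data(h_coeff, eps, p0, x):
    """Return h_eps(x) = sum_{p=1}^{p0} (h_p + eps * rand(p)) sin(p x)."""
    r = np.random.uniform(-1.0, 1.0, size=p0)
    r /= max(1.0, np.linalg.norm(r))          # enforce sum_p |rand(p)|^2 <= 1
    p = np.arange(1, p0 + 1)
    coeff = np.asarray(h_coeff[:p0], dtype=float) + eps * r
    return np.sum(coeff[:, None] * np.sin(np.outer(p, x)), axis=0)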

Step 1. Choose K and L to generate the discretization in the x and y directions in such a manner that

x_i = i\Delta x, \quad \Delta x = \frac{\pi}{K}, \quad i = 0,\dots,K; \qquad y_j = j\Delta y, \quad \Delta y = \frac{1}{L}, \quad j = 0,\dots,L.

Of course, higher values of K and L provide more accurate and stable numerical results; in the following numerical examples K = L = 101 is used.

Step 2. We choose the following functions:

\varphi^{\epsilon} = \varphi + \epsilon\,\mathrm{rand}(\cdot), \qquad g^{\epsilon} = g + \epsilon\,\mathrm{rand}(\cdot), \qquad f^{\epsilon} = f + \epsilon\,\mathrm{rand}(\cdot,\cdot).

Step 3. Set ϕ_β^ε(x_i) = ϕ_{β,i}^ε and ϕ(x_i) = ϕ_i, and construct two vectors containing all discrete values of ϕ_β^ε and ϕ, denoted by Λ_β^ε and Ψ, respectively,

\Lambda_{\beta}^{\epsilon} = \bigl[\phi_{\beta,0}^{\epsilon}\ \ \phi_{\beta,1}^{\epsilon}\ \ \cdots\ \ \phi_{\beta,K}^{\epsilon}\bigr]^{\top} \in \mathbb{R}^{K+1}, \qquad \Psi = \bigl[\phi_0\ \ \phi_1\ \ \cdots\ \ \phi_{K-1}\ \ \phi_K\bigr]^{\top} \in \mathbb{R}^{K+1}.

Step 4. Error estimate between the exact solution and the regularized solutions. At a fixed value of y, the relative error estimation δ_ε in L² between the exact solution ϕ and the regularized solution ϕ_β^ε is given by the following formula:

\delta_{\epsilon} = \sqrt{\frac{\sum_{i=0}^{K} |\phi_{\beta}^{\epsilon}(x_i) - \phi(x_i)|^2}{\sum_{i=0}^{K} |\phi(x_i)|^2}}.
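A direct Python sketch of this discrete relative error is shown below, assuming the regularized and exact values at the fixed level of y are stored in two arrays of equal length; the function name is our own.

import numpy as np

def relative_error(phi_reg, phi_exact):
    # discrete relative L2 error between regularized and exact grid values
    return np.sqrt(np.sum(np.abs(phi_reg - phi_exact) ** 2)
                   / np.sum(np.abs(phi_exact) ** 2))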

In the first method, we use the first regularized solution (defined in Theorem 2); in the second method, we use the second regularized solution (defined in Theorem 3). With parameter k = 1/2, in the first method we choose β = 1/(2 ln(1/ε)), and in the second method we choose β = ε^{1/2}.

Tables 1 and 2 show the computed error estimations δ_ε at each fixed value y = j/10, j = 0,…,10. The errors are significantly small when ε = 10^{-3}. Comparing the errors of the two regularized solutions in the tables, we can see that the second one is better. We show the error between the exact solution and the regularized solution at β = 1/(2 ln(1/ε)). For the purpose of better illustration, we also present some graphical figures. Figure 1 is the 3D representation of the exact solution and the regularized solutions when ε = 10^{-1}, ε = 10^{-2} and ε = 10^{-3}. Figure 2 shows graphs of a section cut of these solutions at the value y = 0.5 when ε = 10^{-1}, ε = 10^{-2} and ε = 10^{-3}. It is easy to see that our methods are stably convergent.

Figure 1

3D graphs of the exact solution ϕ(x,y) and the regularized solutions ϕ_{i,ε}(x,y), i = 1, 2.

Figure 2

Section cut of the exact solution and the regularized solutions at y = 0.5, for ε = 10^{-1}, ε = 10^{-2}, ε = 10^{-3}.

Table 1 Discrete relative error estimations for the regularized solution at fixed values from y=0.0 to y=0.5
Table 2 Discrete relative error estimations for the regularized solution at fixed values from y=0.6 to y=1.0

4.2 Example 2

We consider the following 3D problem:

\begin{cases} \phi_{xx} + \phi_{yy} + \phi_{zz} = f(x,y,z), & (x,y,z) \in \Omega\times(0,2), \\ \phi(x,y,z) = 0, & (x,y) \in \partial\Omega, \\ \phi(x,y,0) = \varphi(x,y), \quad \phi_z(x,y,0) = g(x,y), \end{cases}
(59)

where Ω=(0,π)×(0,π).

Step 1. Choose Q, K and L (in our computations, Q = K = 101 are chosen) to have

x_i = i\Delta x, \quad \Delta x = \frac{\pi}{Q}, \quad i = \overline{0,Q}; \qquad y_j = j\Delta y, \quad \Delta y = \frac{\pi}{K}, \quad j = \overline{0,K}; \qquad z_k = k\Delta z, \quad \Delta z = \frac{2}{L}, \quad k = \overline{0,L}.

Step 2. We choose the following functions and fix z,

\varphi^{\epsilon} = \varphi + \epsilon\,\mathrm{rand}(\cdot,\cdot), \qquad g^{\epsilon} = g + \epsilon\,\mathrm{rand}(\cdot,\cdot), \qquad f^{\epsilon}(\cdot,\cdot,z) = f(\cdot,\cdot,z) + \epsilon\,\mathrm{rand}(\cdot,\cdot).

Step 3. In this example, we fix z = z*. We put ϕ_{β(ε)}^ε(x_i, y_j, z*) = ϕ_{β(ε),i,j}^ε and ϕ(x_i, y_j, z*) = ϕ_{i,j}, and construct two matrices containing all discrete values of ϕ_{β(ε)}^ε and ϕ at the level z = z*, denoted by Λ_{β(ε)}^ε and Ξ, respectively,

\Lambda_{\beta(\epsilon)}^{\epsilon} = \begin{bmatrix} \phi_{\beta(\epsilon),0,0}^{\epsilon} & \phi_{\beta(\epsilon),0,1}^{\epsilon} & \cdots & \phi_{\beta(\epsilon),0,K}^{\epsilon} \\ \phi_{\beta(\epsilon),1,0}^{\epsilon} & \phi_{\beta(\epsilon),1,1}^{\epsilon} & \cdots & \phi_{\beta(\epsilon),1,K}^{\epsilon} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{\beta(\epsilon),Q,0}^{\epsilon} & \phi_{\beta(\epsilon),Q,1}^{\epsilon} & \cdots & \phi_{\beta(\epsilon),Q,K}^{\epsilon} \end{bmatrix} \in \mathbb{R}^{(Q+1)\times(K+1)}, \qquad \Xi = \begin{bmatrix} \phi_{0,0} & \phi_{0,1} & \cdots & \phi_{0,K} \\ \phi_{1,0} & \phi_{1,1} & \cdots & \phi_{1,K} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{Q,0} & \phi_{Q,1} & \cdots & \phi_{Q,K} \end{bmatrix} \in \mathbb{R}^{(Q+1)\times(K+1)}.

Step 4. The error estimation.

Relative error estimation:

\delta_2 = \sqrt{\frac{\sum_{i=1}^{Q}\sum_{j=1}^{K} |\phi^{\epsilon}(x_i, y_j, z^*) - \phi(x_i, y_j, z^*)|^2}{\sum_{i=1}^{Q}\sum_{j=1}^{K} |\phi(x_i, y_j, z^*)|^2}},
(60)

where

\Omega = (0,\pi)\times(0,\pi), \qquad f(x,y,z) = -\Bigl(\pi^2 + \Bigl(\frac{\pi}{5}\Bigr)^2\Bigr)\sin(\pi(x-\pi))\sin\Bigl(\frac{\pi}{5}(y-\pi)\Bigr)\Bigl[1 - z\Bigl(\frac{\pi}{3}\Bigr)^2\cos\Bigl(\frac{\pi}{3}(x-\pi)\Bigr)\cos\Bigl(\frac{\pi}{3}(y-\pi)\Bigr)\Bigr], \qquad \varphi(x,y) = \sin(\pi(x-\pi))\sin\Bigl(\frac{\pi}{5}(y-\pi)\Bigr), \qquad g(x,y) = \sin(\pi(x-\pi))\sin\Bigl(\frac{\pi}{5}(y-\pi)\Bigr)\cos\Bigl(\frac{\pi}{3}(x-\pi)\Bigr)\cos\Bigl(\frac{\pi}{3}(y-\pi)\Bigr).

From (59) and the data above, we get the exact solution

\phi(x,y,z) = \sin(\pi(x-\pi))\sin\Bigl(\frac{\pi}{5}(y-\pi)\Bigr)\Bigl[1 + z\cos\Bigl(\frac{\pi}{3}(x-\pi)\Bigr)\cos\Bigl(\frac{\pi}{3}(y-\pi)\Bigr)\Bigr].

Now we are seeking a solution of the following problem:

\begin{cases} \phi^{\epsilon}_{xx} + \phi^{\epsilon}_{yy} + \phi^{\epsilon}_{zz} = f^{\epsilon}(x,y,z), & (x,y,z) \in \Omega\times(0,2), \\ \phi^{\epsilon}(x,y,0) = \varphi^{\epsilon}(x,y), & (x,y) \in \Omega, \\ \phi^{\epsilon}_z(x,y,0) = g^{\epsilon}(x,y), & (x,y) \in \Omega. \end{cases}
(61)

Due to the computational cost, we only compute the values of the regularized solution ϕ^ε(x,y,z) at the fixed values y* = π/12 and z* = 1.5. The discrete relative error estimation in one dimension is defined as follows:

\delta_{\epsilon}(y^*, z^*) = \sqrt{\frac{\sum_{i=1}^{p_0} |\phi(x_i, y^*, z^*) - \phi^{\epsilon}(x_i, y^*, z^*)|^2}{\sum_{i=1}^{p_0} |\phi(x_i, y^*, z^*)|^2}}
(62)

with N = p_0 = 30 being the number of grid points along the x axis. The regularized solution is calculated by formula (40) and Theorem 5 with parameter β = ε^{1/2}. Computational results are shown in Tables 3 and 4 (the relative error) and in Figure 3 (section cut graphs). In this problem, the regularized solution is already very accurate for ε = 10^{-1}, ε = 10^{-2} and ε = 10^{-3}. In Figures 4 and 5, we show the 3D representations of the exact φ and the regularized φ^ε, and of the exact g(x,y) and the regularized g^ε(x,y), at ε = 10^{-1}, ε = 10^{-2} and ε = 10^{-3}. In Figure 6, we show the 3D representation of the exact solution and the regularized solution when ε = 10^{-1}, ε = 10^{-2}, ε = 10^{-3}.

Figure 3

Section cut of the exact solution and the regularized solutions at y = π/12 and z = 1.5.

Figure 4

3D graphs of φ(x,y) and three regularized solutions φ_{i,ε}(x,y).

Figure 5

3D graphs of g(x,y) and three regularized solutions g_{i,ε}(x,y).

Figure 6

3D graphs of ϕ(x,y) and two regularized solutions ϕ_{i,ε}(x,y).

Table 3 Discrete relative error estimations for the regularized solution at fixed values y=π/12 and z=1.5 from y=0.0 to y=0.5
Table 4 Discrete relative error estimations for the regularized solution at fixed values y=π/12 and z=1.5 from y=0.6 to y=1.0

5 Conclusion

The study of the inverse Poisson-type problem with an inhomogeneous source in 2D and 3D is still limited. This work is a continuation of our previous study.

On the theoretical side, we have suggested a general filter regularization method (Section 2.2), identified the conditions under which problem (3) can be solved stably, and obtained error estimates of logarithmic type. From the general method we deduced two concrete regularized solutions: the first method in Theorem 2 and the second method in Theorem 3, both in Section 2.2. The numerical results confirm the efficiency of the theoretical results, i.e., the regularized solutions converge stably to the exact solution.