Introduction

Consider the following nonclassical parabolic problem:

$$u_t = \frac{\partial^2 u}{\partial x^2} + g(x,t), \qquad 0 < x < X,\ 0 < t \le T,$$
(1.1)

subject to the initial condition

$$u(x,0) = f(x)$$
(1.2)

and the nonlocal boundary conditions

$$\begin{aligned}
u(0,t) + \mu_1 u_x(0,t) &= \int_0^X h_1(x)\,u(x,t)\,dx + q_1(t),\\
u(X,t) + \mu_2 u_x(X,t) &= \int_0^X h_2(x)\,u(x,t)\,dx + q_2(t),
\end{aligned}$$
(1.3)

where $\mu_i$ ($i = 1,2$) are given constants, and $g$, $f$, $q_i$ ($i = 1,2$) are given continuous functions. Here we consider only the case $q_i(t) = 0$ ($i = 1,2$), since the general nonlocal boundary conditions reduce to this case easily by homogenization of the initial and boundary conditions. We assume that $g$, $f$, $q_i$ ($i = 1,2$) satisfy conditions ensuring that the solution of this problem exists and is unique.

Various problems arising in heat conduction [1-3], chemical engineering [4], thermo-elasticity [5], and plasma physics [6] can be reduced to nonlocal problems. Boundary value problems with integral conditions constitute a very interesting and important class of problems, and partial differential equations with nonlocal boundary conditions have therefore received much attention in the last 20 years. We deal here with parabolic partial differential equations with nonlocal boundary conditions. These nonlocal conditions arise mainly when the data on the boundary cannot be measured directly, and many physical phenomena are modeled by parabolic boundary value problems with such conditions.

The theoretical aspects of the solutions to one-dimensional partial differential equations (PDEs) with integral conditions have been studied by several authors [7-10]. Lin, Cui, and Zhou [11, 12] studied the numerical solution of a class of PDEs with integral conditions. Golbabai and Javidi [13] developed a numerical method based on Chebyshev polynomials and local interpolating functions for solving one-dimensional parabolic PDEs subject to nonclassical conditions. Dehghan [14-19], together with Tatari [17], presented some effective methods for solving PDEs with nonlocal conditions.

Reproducing kernel theory has important applications in numerical analysis, differential equations, probability and statistics, and other fields [20-31]. Recently, several authors have presented reproducing kernel methods (RKMs) for solving linear and nonlinear differential equations [22-31].

In this work, we approximate the solution of the nonclassical parabolic problem (1.1) to (1.3) based on the transverse method of lines and the reproducing kernel method.

The rest of the paper is organized as follows: In the next section, the method for nonclassical parabolic problems (1.1) to (1.3) is introduced. The numerical examples are presented in the ‘Results and discussion’ section. The last section ends this paper with a brief conclusion.

Methods

Analysis of the RKM for ODEs with integral boundary conditions (1.3)

In this section, we illustrate how to solve the following linear second-order ordinary differential equations (ODEs) with integral boundary conditions (1.3) using the RKM:

$$\begin{cases}
Lu(x) = F(x), & 0 < x < X,\\[2pt]
u(0) + \mu_1 u'(0) = \displaystyle\int_0^X h_1(s)\,u(s)\,ds,\\[2pt]
u(X) + \mu_2 u'(X) = \displaystyle\int_0^X h_2(s)\,u(s)\,ds,
\end{cases}$$
(2.1)

where $Lu = u''(x) + b(x)u'(x) + c(x)u(x)$, and $b(x)$, $c(x)$, and $F(x)$ are continuous.

In order to solve (2.1) using the RKM, it is necessary to construct a reproducing kernel space $W_2^3[0,X]$ in which every function satisfies the integral boundary conditions of (2.1).

First, we construct the following reproducing kernel space.

Definition 2.1

$W^3[0,X] = \{u(x) \mid u''(x)$ is an absolutely continuous real-valued function, $u'''(x) \in L^2[0,X]\}$. The inner product and norm in $W^3[0,X]$ are given, respectively, by

$$(u(y), v(y))_{W^3} = u(0)v(0) + u'(0)v'(0) + u(X)v(X) + \int_0^X u'''\,v'''\,dy$$

and

$$\|u\|_{W^3} = \sqrt{(u,u)_{W^3}}, \qquad u, v \in W^3[0,X].$$

By [22, 24], $W^3[0,X]$ is clearly a reproducing kernel space, and its reproducing kernel is

$$k(x,y) = \begin{cases} h_1(x,y), & y \le x,\\ h_1(y,x), & y > x, \end{cases}$$
(2.2)

where $h_1(x,y) = \dfrac{y^5(X^2 - x^2)}{120X^2} - \dfrac{x^2(X^2 - y^2)\left(x^5X^2 - 5x^4X^3 + 10x^3X^4 - 6x^2X^5 + 20X^2 + 40 + 120xX^3 + 120X^2\right)}{120X^4} + \dfrac{xy^4(x - X)}{24X} + \dfrac{xy(X - x)}{X} + 1.$

Next, we construct a reproducing kernel space $W_1^3[0,X]$ in which every function satisfies $u(0) + \mu_1 u'(0) = \int_0^X h_1(s)\,u(s)\,ds$.

Definition 2.2

$W_1^3[0,X] = \left\{u(x) \mid u(x) \in W^3[0,X],\ u(0) + \mu_1 u'(0) = \int_0^X h_1(s)\,u(s)\,ds\right\}$.

Clearly, $W_1^3[0,X]$ is a closed subspace of $W^3[0,X]$, and therefore it is also a reproducing kernel space.

Put $L_1 u(x) = u(0) + \mu_1 u'(0) - \int_0^X h_1(s)\,u(s)\,ds$.

Theorem 2.1

If $L_{1x}L_{1y}k(x,y) \ne 0$, then the reproducing kernel $k_1(x,y)$ of $W_1^3[0,X]$ is given by

$$k_1(x,y) = k(x,y) - \frac{L_{1x}k(x,y)\,L_{1y}k(x,y)}{L_{1x}L_{1y}k(x,y)},$$
(2.3)

where the subscript $x$ on the operator $L_1$ indicates that $L_1$ acts on the function of $x$.

Proof

It is easy to see that $L_{1x}k_1(x,y) = 0$, and therefore $k_1(x,y) \in W_1^3[0,X]$.

For all $u(y) \in W_1^3[0,X]$, obviously $L_{1y}u(y) = 0$, and it follows that

$$(u(y), k_1(x,y))_{W^3} = (u(y), k(x,y))_{W^3} = u(x).$$

That is, $k_1(x,y)$ has the reproducing property. Thus, $k_1(x,y)$ is the reproducing kernel of $W_1^3[0,X]$, and the proof is complete. □
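To make the reduction in Theorem 2.1 concrete, the following sympy sketch applies formula (2.3) mechanically. The kernel used here is a toy smooth symmetric kernel, not the $W^3$ kernel of (2.2), and the domain length, $\mu_1$, and $h_1$ are illustrative assumptions; only the reduction formula itself comes from the text.

```python
import sympy as sp

x, y, s = sp.symbols('x y s', real=True)
X_len = 1                       # assumed domain length
mu1 = sp.Rational(1, 2)         # assumed value of mu_1
h1 = lambda t: t                # assumed weight h_1(s) = s

# Toy smooth symmetric kernel standing in for k(x, y) of Eq. (2.2)
k = 1 + x*y + (x*y)**2 / 4

def L1_in(expr, var):
    """Apply L1 u = u(0) + mu1*u'(0) - int_0^X h1(s) u(s) ds in the variable `var`."""
    at0 = expr.subs(var, 0)
    d_at0 = sp.diff(expr, var).subs(var, 0)
    integ = sp.integrate(h1(s) * expr.subs(var, s), (s, 0, X_len))
    return sp.simplify(at0 + mu1 * d_at0 - integ)

L1x_k = L1_in(k, x)             # a function of y
L1y_k = L1_in(k, y)             # a function of x
L1xL1y_k = L1_in(L1x_k, y)      # a nonzero constant for this toy kernel

# Reduced kernel of Eq. (2.3); functions built from k1 satisfy L1 u = 0
k1 = sp.simplify(k - L1x_k * L1y_k / L1xL1y_k)
print(sp.simplify(L1_in(k1, x)))    # prints 0
```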

Similarly, we construct a reproducing kernel space which is a closed subspace of $W_1^3[0,X]$.

Definition 2.3

$W_2^3[0,X] = \left\{u(x) \mid u(x) \in W_1^3[0,X],\ u(X) + \mu_2 u'(X) = \int_0^X h_2(s)\,u(s)\,ds\right\}$.

Put $L_2 u(x) = u(X) + \mu_2 u'(X) - \int_0^X h_2(s)\,u(s)\,ds$. By the proof of Theorem 2.1, it is easy to see the following.

Theorem 2.2

The reproducing kernel $k_2(x,y)$ of $W_2^3[0,X]$ is given by

$$k_2(x,y) = k_1(x,y) - \frac{L_{2x}k_1(x,y)\,L_{2y}k_1(x,y)}{L_{2x}L_{2y}k_1(x,y)}.$$
(2.4)

In [22], Cui and Lin defined a reproducing kernel space $W^1[0,X]$ and gave its reproducing kernel

$$\bar{k}(x,y) = \begin{cases} 1 + y, & y \le x,\\ 1 + x, & y > x. \end{cases}$$

It is clear that $L: W_2^3[0,X] \to W^1[0,X]$ is a bounded linear operator. Put $\varphi_i(x) = \bar{k}(x, x_i)$ and $\psi_i(x) = L^{*}\varphi_i(x)$, where $L^{*}$ is the adjoint operator of $L$. The orthonormal system $\{\bar{\psi}_i(x)\}_{i=1}^{\infty}$ of $W_2^3[0,X]$ can be derived from the Gram-Schmidt orthogonalization of $\{\psi_i(x)\}_{i=1}^{\infty}$:

$$\bar{\psi}_i(x) = \sum_{k=1}^{i} \beta_{ik}\,\psi_k(x), \qquad (\beta_{ii} > 0,\ i = 1, 2, \ldots).$$
(2.5)
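In practice, the coefficients $\beta_{ik}$ in (2.5) can be computed from the Gram matrix $G_{ij} = (\psi_i, \psi_j)_{W_2^3}$ by a triangular factorization rather than by symbolic Gram-Schmidt. The sketch below assumes that Gram matrix is already assembled (a random symmetric positive definite matrix stands in for it here) and only shows the linear-algebra step.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
G = A @ A.T + n * np.eye(n)          # stand-in for the Gram matrix (psi_i, psi_j)

# If psi_bar = B @ psi is orthonormal, then B G B^T = I with B lower triangular
# and positive diagonal, so B = inv(L) where G = L L^T (Cholesky factorization).
L = np.linalg.cholesky(G)            # lower triangular, positive diagonal
B = np.linalg.inv(L)                 # B[i, k] = beta_{ik}, lower triangular, beta_ii > 0

print(np.allclose(B @ G @ B.T, np.eye(n)))   # True: the psi_bar_i are orthonormal
```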

Theorem 2.3

For (2.1), if $\{x_i\}_{i=1}^{\infty}$ is dense in $[0,X]$, then $\{\psi_i(x)\}_{i=1}^{\infty}$ is a complete system of $W_2^3[0,X]$ and $\psi_i(x) = L_y R_x(y)\big|_{y = x_i}$, where $R_x(y) = k_2(x,y)$ denotes the reproducing kernel of $W_2^3[0,X]$.

Proof

For the proof, we refer to [22]. □

Theorem 2.4

If $\{x_i\}_{i=1}^{\infty}$ is dense in $[0,X]$ and the solution of (2.1) is unique, then the solution of (2.1) is

$$u(x) = \sum_{i=1}^{\infty}\sum_{k=1}^{i} \beta_{ik}\,F(x_k)\,\bar{\psi}_i(x).$$
(2.6)

Proof

By Theorem 2.3, $\{\bar{\psi}_i(x)\}_{i=1}^{\infty}$ is a complete orthonormal basis of $W_2^3[0,X]$. Note that $(v(x), \varphi_i(x))_{W^1} = v(x_i)$ for each $v(x) \in W^1[0,X]$. Hence, we have

$$\begin{aligned}
u(x) &= \sum_{i=1}^{\infty} \left(u(x), \bar{\psi}_i(x)\right)\bar{\psi}_i(x)
      = \sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\left(u(x), L^{*}\varphi_k(x)\right)\bar{\psi}_i(x)\\
     &= \sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\left(Lu(x), \varphi_k(x)\right)\bar{\psi}_i(x)
      = \sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\left(F(x), \varphi_k(x)\right)\bar{\psi}_i(x)\\
     &= \sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\,F(x_k)\,\bar{\psi}_i(x),
\end{aligned}$$
(2.7)

and the proof of the theorem is complete. □

Now, the approximate solution $u_N(x)$ can be obtained as the $N$-term truncation of the exact solution $u(x)$:

$$u_N(x) = \sum_{i=1}^{N}\sum_{k=1}^{i} \beta_{ik}\,F(x_k)\,\bar{\psi}_i(x).$$
(2.8)
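As an illustration of how (2.8) is assembled once the $\beta_{ik}$, the nodes $x_k$, and the orthonormalized functions $\bar{\psi}_i$ are in hand, here is a small sketch; the stand-in $\bar{\psi}_i$ used to make it executable are hypothetical and are not the functions produced by (2.5).

```python
import numpy as np

def u_N(x, beta, F_vals, psi_bar):
    """Evaluate u_N(x) = sum_i ( sum_{k<=i} beta[i, k] * F(x_k) ) * psi_bar_i(x)."""
    coeffs = np.tril(beta) @ F_vals          # c_i = sum_{k=1}^{i} beta_{ik} F(x_k)
    return sum(c * phi(x) for c, phi in zip(coeffs, psi_bar))

# Toy stand-ins purely to make the sketch runnable (not the real psi_bar_i)
N = 4
beta = np.tril(np.full((N, N), 0.1)) + np.eye(N)
F_vals = np.ones(N)
psi_bar = [lambda x, j=j: x**j for j in range(N)]
print(u_N(0.5, beta, F_vals, psi_bar))
```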

Algorithm for nonclassical parabolic problems (1.1) to (1.3)

To solve problems (1.1) to (1.3) numerically, we first consider a finite difference discretization in the time variable. For simplicity, assume a uniform mesh with $\Delta t = T/m$, and let $u_i(x)$ approximate $u(x, t_i)$, where $t_i = i\Delta t$, $i = 0, 1, 2, \ldots, m$. Then, replacing the time derivative $u_t$ by a simple backward difference approximation with time step $\Delta t$, we obtain

$$\begin{cases}
\dfrac{d^2 u_i}{dx^2} - \dfrac{m}{T}\,u_i(x) = -g(x, t_i) - \dfrac{m}{T}\,u_{i-1}(x) \equiv G_i(x), & 0 < x < X,\ i = 1, 2, \ldots, m,\\[4pt]
u_i(0) + \mu_1 u_i'(0) = \displaystyle\int_0^X h_1(s)\,u_i(s)\,ds,\\[4pt]
u_i(X) + \mu_2 u_i'(X) = \displaystyle\int_0^X h_2(s)\,u_i(s)\,ds,
\end{cases}$$
(2.9)

with $u_0(x) = f(x)$.

Therefore, to solve problems (1.1) to (1.3), it suffices for us to solve problem (2.9).

Problem (2.9) is an ODE boundary value problem in space variable x. By using the RKM presented in the ‘Analysis of the RKM for ODEs with integral boundary conditions (1.3)’ section, one can obtain the solution of problem (2.9):

$$u_i(x) = \sum_{j=1}^{\infty} A_j\,\bar{\psi}_j(x),$$
(2.10)

where $A_j = \sum_{k=1}^{j} \beta_{jk}\,G_i(x_k)$.

Therefore, the $N$-term approximation $u_{i,N}(x)$ to $u_i(x)$ is obtained as

$$u_{i,N}(x) = \sum_{j=1}^{N} A_j\,\bar{\psi}_j(x).$$
(2.11)
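The overall time-marching procedure can be summarized by the following sketch. It assumes a routine `solve_rkm_ode` (hypothetical here, not defined in this paper) that solves one spatial problem of the form (2.9) by the RKM of the previous subsection and returns the computed $u_i$ as a callable.

```python
def transverse_mol(f, g, T, m, X, solve_rkm_ode):
    """March (2.9) in time: u_0 = f, then solve one ODE boundary value problem per step."""
    dt = T / m
    u_prev = f                                   # u_0(x) = f(x)
    solutions = []
    for i in range(1, m + 1):
        t_i = i * dt
        # Right-hand side of (2.9): G_i(x) = -g(x, t_i) - (m/T) * u_{i-1}(x)
        G_i = lambda x, t=t_i, up=u_prev: -g(x, t) - up(x) / dt
        # Assumed solver for u_i'' - (1/dt) u_i = G_i(x) with the integral boundary conditions
        u_i = solve_rkm_ode(G_i, 1.0 / dt, X)
        solutions.append(u_i)
        u_prev = u_i
    return solutions
```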

Results and discussion

In this section, we present and discuss the numerical results by employing the present method for two examples. The results demonstrate that the present method is remarkably effective.

Example 1

Consider the following example problem from [15]:

$$u_t = \frac{\partial^2 u}{\partial x^2} - \frac{2(x^2 + t + 1)}{(t+1)^3}, \qquad (0 < x < 1,\ 0 < t \le 1),$$
(3.1)

subject to the initial condition

$$u(x,0) = x^2,$$
(3.2)

and the nonlocal boundary conditions

$$\begin{aligned}
u(0,t) &= \int_0^1 x\,u(x,t)\,dx - \frac{1}{4(t+1)^2},\\
u(1,t) &= \int_0^1 x\,u(x,t)\,dx + \frac{3}{4(t+1)^2}.
\end{aligned}$$
(3.3)

It is easy to see that the exact solution is $u(x,t) = \dfrac{x^2}{(t+1)^2}$.
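A quick sympy check confirms that this $u$ satisfies the PDE (3.1) and both nonlocal boundary conditions (3.3) as written above:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
u = x**2 / (t + 1)**2

# Residual of u_t = u_xx - 2(x^2 + t + 1)/(t+1)^3
pde_residual = sp.diff(u, t) - sp.diff(u, x, 2) + 2*(x**2 + t + 1)/(t + 1)**3
# Residuals of the two integral boundary conditions
bc0 = u.subs(x, 0) - (sp.integrate(x*u, (x, 0, 1)) - sp.Rational(1, 4)/(t + 1)**2)
bc1 = u.subs(x, 1) - (sp.integrate(x*u, (x, 0, 1)) + sp.Rational(3, 4)/(t + 1)**2)

print(sp.simplify(pde_residual), sp.simplify(bc0), sp.simplify(bc1))   # 0 0 0
```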

Using the present method, take $x_i = (i-1)h$, $h = 1/(N-1)$, $i = 1, 2, \ldots, N$. Taking $h = 0.05$ and $0.025$ with $\Delta t = 0.4h^2$, the relative errors of the numerical values of $u(0.6, 1.0)$ obtained by the present method and by the method in [15] are compared in Table 1. Table 2 shows the maximum errors of the numerical values of $u(x,t)$, defined as

$$E(t_i) = \max_{0 \le x \le 1} \left|u(x, t_i) - u_i(x)\right|, \qquad i = 1, 2, \ldots, m.$$

The numerical values of $u(x,t)$ are obtained using $h = 0.05$ and various values of the time step $\Delta t$.
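For reference, the error measures reported in the tables can be computed as in the following sketch, assuming the exact and numerical values are available on the spatial grid:

```python
import numpy as np

def max_error(u_exact_on_grid, u_num_on_grid):
    """E(t_i) = max_j |u(x_j, t_i) - u_i(x_j)| over the spatial grid points x_j."""
    return np.max(np.abs(np.asarray(u_exact_on_grid) - np.asarray(u_num_on_grid)))

def relative_error(u_exact_val, u_num_val):
    """Relative error at a single point, e.g. u(0.6, 1.0)."""
    return abs(u_exact_val - u_num_val) / abs(u_exact_val)
```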

Table 1 Relative errors of numerical values of u(0.6, 1.0) by the present method and [15], for Example 1
Table 2 Maximum errors of numerical values using h = 0.05 and various values of the time step for Example 1

Example 2

Consider the following example problem from [15]:

$$u_t = \frac{\partial^2 u}{\partial x^2} - e^{-(x + \sin t)}(1 + \cos t), \qquad (0 < x < 1,\ 0 < t \le 1),$$
(3.4)

subject to the initial condition

$$u(x,0) = e^{-x},$$
(3.5)

and the nonlocal boundary conditions

$$\begin{aligned}
u(0,t) &= \int_0^1 3.784423\,x\,u(x,t)\,dx,\\
u(1,t) &= \int_0^1 0.6623722\,\cos x\;u(x,t)\,dx.
\end{aligned}$$
(3.6)

It is easy to see that the exact solution is $u(x,t) = e^{-(x + \sin t)}$.
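As with Example 1, one can spot-check the problem data numerically; here the boundary-condition residuals are only close to zero, because the constants 3.784423 and 0.6623722 in (3.6) are rounded. The test point below is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import quad

u = lambda x, t: np.exp(-(x + np.sin(t)))
x0, t0 = 0.4, 0.3                                  # arbitrary test point

# PDE residual u_t - u_xx + exp(-(x+sin t))*(1+cos t), using u_t = -cos(t)*u and u_xx = u
pde = -np.cos(t0)*u(x0, t0) - u(x0, t0) + np.exp(-(x0 + np.sin(t0)))*(1 + np.cos(t0))
bc0 = u(0.0, t0) - quad(lambda x: 3.784423 * x * u(x, t0), 0, 1)[0]
bc1 = u(1.0, t0) - quad(lambda x: 0.6623722 * np.cos(x) * u(x, t0), 0, 1)[0]
print(pde, bc0, bc1)                               # all of order 1e-7 or smaller
```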

Using the present method, take $x_i = (i-1)h$, $h = 1/(N-1)$, $i = 1, 2, \ldots, N$. Taking $h = 0.05$ and $0.025$ with $\Delta t = 0.4h^2$, the relative errors of the numerical values of $u(0.6, 0.1)$ obtained by the present method and by the method in [15] are compared in Table 3. Table 4 shows the maximum errors of the numerical values of $u(x,t)$, defined as

$$E(t_i) = \max_{0 \le x \le 1} \left|u(x, t_i) - u_i(x)\right|, \qquad i = 1, 2, \ldots, m.$$

The numerical values of $u(x,t)$ are obtained using $h = 0.05$ and various values of the time step $\Delta t$.

Table 3 Relative errors of numerical values of u(0.6, 0.1) by the present method and [15], for Example 2
Table 4 Maximum errors of numerical values using h = 0.05 and various values of the time step for Example 2

Conclusion

In this paper, the combination of the transverse method of lines and the reproducing kernel method was employed successfully for solving parabolic problems with integral boundary conditions. Using the transverse method of lines, the nonclassical parabolic problem is first converted into boundary value ODE problems in the space variable; these ODE problems with integral boundary conditions are then solved by the reproducing kernel method. The numerical results show that the present method is an accurate and reliable technique.