1 Introduction

We consider the following Sturm-Liouville problem:

$$\tau(u) := -u'' + q(x)u = \lambda u, \qquad x \in I,$$
(1.1)

with one of the boundary conditions including an eigenparameter:

$$B_a(u) := \beta_1 u(a) + \beta_2 u'(a) = 0,$$
(1.2)
$$B_b(u) := \lambda\bigl(\alpha_1' u(b) - \alpha_2' u'(b)\bigr) + \bigl(\alpha_1 u(b) - \alpha_2 u'(b)\bigr) = 0,$$
(1.3)

and transmission conditions at the two points of discontinuity $\theta_{-\varepsilon}$ and $\theta_{+\varepsilon}$:

$$T_{-\varepsilon}(u) := u(\theta_{-\varepsilon}-) - \delta u(\theta_{-\varepsilon}+) = 0,$$
(1.4)
$$T'_{-\varepsilon}(u) := u'(\theta_{-\varepsilon}-) - \delta u'(\theta_{-\varepsilon}+) = 0,$$
(1.5)
$$T_{+\varepsilon}(u) := \delta u(\theta_{+\varepsilon}-) - \gamma u(\theta_{+\varepsilon}+) = 0,$$
(1.6)
$$T'_{+\varepsilon}(u) := \delta u'(\theta_{+\varepsilon}-) - \gamma u'(\theta_{+\varepsilon}+) = 0,$$
(1.7)

where $I := [a,\theta_{-\varepsilon}) \cup (\theta_{-\varepsilon},\theta_{+\varepsilon}) \cup (\theta_{+\varepsilon},b]$; $\lambda \in \mathbb{C}$ is a complex spectral parameter; $q(x)$ is a given real-valued function which is continuous on $[a,\theta_{-\varepsilon})$, $(\theta_{-\varepsilon},\theta_{+\varepsilon})$, and $(\theta_{+\varepsilon},b]$ and has finite limits $q(\theta_{-\varepsilon}\pm)$, $q(\theta_{+\varepsilon}\pm)$; $\beta_i,\alpha_i,\alpha_i',\delta,\gamma \in \mathbb{R}$ $(i=1,2)$; $|\beta_1|+|\beta_2| \neq 0$; $\delta,\gamma \neq 0$; $\theta := (a+b)/2$; $\theta_{\pm\varepsilon}\pm := (\theta\pm\varepsilon)\pm 0$; $0 < \varepsilon < (b-a)/2$, and

$$\rho := \alpha_1'\alpha_2 - \alpha_1\alpha_2' > 0.$$

In the literature, the Whittaker-Kotel'nikov-Shannon (WKS) sampling theorem and its generalizations (see [1]–[3]) have been investigated extensively (see also [4]–[8]). Sampling theorems associated with Sturm-Liouville problems were investigated in [9]–[13]; likewise, [14]–[17] and [18]–[21] are examples of work on sampling analysis for continuous and discontinuous eigenproblems, respectively. Sampling series associated with strings were investigated and compared with those associated with Sturm-Liouville problems in [20]. In [21] the author studied sampling analysis for a discontinuous Sturm-Liouville problem with transmission conditions at one point of discontinuity and an eigenparameter in two boundary conditions. In the present paper, we derive sampling theorems associated with a new Sturm-Liouville problem with moving discontinuity points. The problem studied here was presented in detail for the first time in [22]; it has symmetrically located discontinuities, defined in terms of a parameter in a neighborhood of the midpoint of the interval, and an eigenparameter appearing in one boundary condition. There are many published works on sampling theorems associated with various generalized Sturm-Liouville boundary value problems, but the present paper deals with a case that has not been studied before. To derive sampling theorems for the problem (1.1)-(1.7), we briefly establish some spectral properties and construct the Green's function of the problem (1.1)-(1.7). We then derive two sampling theorems using the solutions and the Green's function, respectively.

2 An operator formulation and asymptotic formulas

Some properties of the eigenvalues and asymptotic formulas for the eigenvalues and the corresponding eigenfunctions for the same problem were given in [22]. We state the results briefly in this section.

To formulate an operator-theoretic approach to the problem (1.1)-(1.7) we define the Hilbert space $H := L^2(a,b) \oplus \mathbb{C}$ with the inner product

$$\langle \mathbf{f}(\cdot), \mathbf{g}(\cdot)\rangle_H := \int_a^{\theta_{-\varepsilon}} f(x)\bar{g}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} f(x)\bar{g}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} f(x)\bar{g}(x)\,dx + \frac{\gamma^2}{\rho}\,h\bar{k},$$
(2.1)

where $\mathbf{f}(x) = \binom{f(x)}{h}$, $\mathbf{g}(x) = \binom{g(x)}{k} \in H$, $f(\cdot),g(\cdot) \in L^2(a,b)$, $h,k \in \mathbb{C}$. For convenience we put

$$R(u) := \alpha_1 u(b) - \alpha_2 u'(b), \qquad R'(u) := \alpha_1' u(b) - \alpha_2' u'(b).$$
(2.2)

Let $D(A) \subseteq H$ be the set of all $\mathbf{f}(x) = \binom{f(x)}{h} \in H$ such that $f$ and $f'$ are absolutely continuous on each of the intervals $[a,\theta_{-\varepsilon})$, $(\theta_{-\varepsilon},\theta_{+\varepsilon})$, $(\theta_{+\varepsilon},b]$ with finite one-sided limits at $\theta_{\mp\varepsilon}$, and $\tau(f) \in L^2(a,b)$, $h = R'(f)$, $B_a(f) = 0$, $T_{\pm\varepsilon}(f) = T'_{\pm\varepsilon}(f) = 0$. We define the operator $A: D(A) \to H$ by

$$A\binom{f(x)}{R'(f)} := \binom{\tau(f)}{-R(f)}, \qquad \binom{f(x)}{R'(f)} \in D(A).$$

Thus, the operator $A: D(A) \to H$ is equivalent to the eigenvalue problem (1.1)-(1.7) in the sense that the eigenvalues of $A$ are exactly those of the problem (1.1)-(1.7).

Following [23], [24], one can prove that $A$ is symmetric in $H$; consequently, all eigenvalues of the problem are real (see [22]).
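As a brief sketch of our own (under the sign conventions fixed above, and for real-valued elements of $D(A)$), the weight $\gamma^{2}/\rho$ in (2.1) is exactly what makes the boundary contribution at $x = b$ cancel: after the interior Wronskian terms cancel via (1.4)-(1.7) and the term at $x = a$ vanishes by (1.2), Green's identity leaves

$$\langle A\mathbf{f},\mathbf{g}\rangle_H - \langle \mathbf{f},A\mathbf{g}\rangle_H = \gamma^{2}W(f,g;b) + \frac{\gamma^{2}}{\rho}\bigl[R'(f)R(g) - R(f)R'(g)\bigr],$$

and, since

$$R'(f)R(g) - R(f)R'(g) = (\alpha_1\alpha_2' - \alpha_1'\alpha_2)\bigl(f(b)g'(b) - f'(b)g(b)\bigr) = -\rho\,W(f,g;b),$$

the right-hand side vanishes, so $A$ is symmetric.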

Let $\phi_\lambda(\cdot)$ and $\chi_\lambda(\cdot)$ be the two solutions of (1.1) given by

$$\phi_\lambda(x) = \begin{cases} \phi_{-\varepsilon,\lambda}(x), & x \in [a,\theta_{-\varepsilon}),\\ \phi_{\varepsilon,\lambda}(x), & x \in (\theta_{-\varepsilon},\theta_{+\varepsilon}),\\ \phi_{+\varepsilon,\lambda}(x), & x \in (\theta_{+\varepsilon},b], \end{cases} \qquad
\chi_\lambda(x) = \begin{cases} \chi_{-\varepsilon,\lambda}(x), & x \in [a,\theta_{-\varepsilon}),\\ \chi_{\varepsilon,\lambda}(x), & x \in (\theta_{-\varepsilon},\theta_{+\varepsilon}),\\ \chi_{+\varepsilon,\lambda}(x), & x \in (\theta_{+\varepsilon},b], \end{cases}$$

satisfying the following conditions, respectively:

$$\begin{aligned}
&\phi_{-\varepsilon,\lambda}(a) = \beta_2, \qquad \phi'_{-\varepsilon,\lambda}(a) = -\beta_1,\\
&\phi_{\varepsilon,\lambda}(\theta_{-\varepsilon}) = \delta^{-1}\phi_{-\varepsilon,\lambda}(\theta_{-\varepsilon}), \qquad \phi'_{\varepsilon,\lambda}(\theta_{-\varepsilon}) = \delta^{-1}\phi'_{-\varepsilon,\lambda}(\theta_{-\varepsilon}),\\
&\phi_{+\varepsilon,\lambda}(\theta_{+\varepsilon}) = \delta\gamma^{-1}\phi_{\varepsilon,\lambda}(\theta_{+\varepsilon}), \qquad \phi'_{+\varepsilon,\lambda}(\theta_{+\varepsilon}) = \delta\gamma^{-1}\phi'_{\varepsilon,\lambda}(\theta_{+\varepsilon}),
\end{aligned}$$
(2.3)

and

$$\begin{aligned}
&\chi_{+\varepsilon,\lambda}(b) = \lambda\alpha_2' + \alpha_2, \qquad \chi'_{+\varepsilon,\lambda}(b) = \lambda\alpha_1' + \alpha_1,\\
&\chi_{\varepsilon,\lambda}(\theta_{+\varepsilon}) = \gamma\delta^{-1}\chi_{+\varepsilon,\lambda}(\theta_{+\varepsilon}+), \qquad \chi'_{\varepsilon,\lambda}(\theta_{+\varepsilon}) = \gamma\delta^{-1}\chi'_{+\varepsilon,\lambda}(\theta_{+\varepsilon}+),\\
&\chi_{-\varepsilon,\lambda}(\theta_{-\varepsilon}) = \delta\chi_{\varepsilon,\lambda}(\theta_{-\varepsilon}+), \qquad \chi'_{-\varepsilon,\lambda}(\theta_{-\varepsilon}) = \delta\chi'_{\varepsilon,\lambda}(\theta_{-\varepsilon}+).
\end{aligned}$$
(2.4)

These functions are entire in $\lambda$ for each $x \in I$.

Let $W(\phi_\lambda,\chi_\lambda;x)$ denote the Wronskian of $\phi_\lambda(x)$ and $\chi_\lambda(x)$; it is independent of $x$ on each of the three subintervals, since the coefficient of $u'$ in (1.1) is zero. Denote its values on $[a,\theta_{-\varepsilon})$, $(\theta_{-\varepsilon},\theta_{+\varepsilon})$, and $(\theta_{+\varepsilon},b]$ by $\omega_{-\varepsilon}(\lambda)$, $\omega_{\varepsilon}(\lambda)$, and $\omega_{+\varepsilon}(\lambda)$, respectively, and let

$$\omega(\lambda) := \omega_{-\varepsilon}(\lambda) = \delta^{2}\omega_{\varepsilon}(\lambda) = \gamma^{2}\omega_{+\varepsilon}(\lambda), \qquad W(\phi_\lambda,\chi_\lambda;x) = \phi_\lambda(x)\chi'_\lambda(x) - \phi'_\lambda(x)\chi_\lambda(x).$$
(2.5)

Now $\omega(\lambda)$ is an entire function of $\lambda$ whose zeros are precisely the eigenvalues of the operator $A$. Using techniques similar to those of Titchmarsh [25] (see also [22]–[24]), one can show that the zeros of $\omega(\lambda)$ are real and simple. If $\lambda_n$, $n = 0,1,2,\ldots$, denote the zeros of $\omega(\lambda)$, then the two-component vectors

$$\Phi_n(x) := \binom{\phi_{\lambda_n}(x)}{R'(\phi_{\lambda_n})}$$

are the corresponding eigenvectors of the operator A satisfying the orthogonality relation

$$\langle \Phi_n(\cdot), \Phi_m(\cdot)\rangle_H = 0 \quad \text{for } n \neq m.$$

Here $\{\phi_{\lambda_n}(\cdot)\}_{n=0}^{\infty}$ is the sequence of eigenfunctions of the problem (1.1)-(1.7) corresponding to the eigenvalues $\{\lambda_n\}_{n=0}^{\infty}$, and we denote by $\Psi_n(x)$ the normalized eigenvectors of $A$, i.e.,

$$\Psi_n(x) := \frac{\Phi_n(x)}{\|\Phi_n(\cdot)\|_H} = \binom{\Psi_n(x)}{R'(\Psi_n)}.$$

Let $k_n \neq 0$ be the real constants for which

$$\chi_{\lambda_n}(x) = k_n\phi_{\lambda_n}(x), \qquad x \in I,\ n = 0,1,2,\ldots.$$
(2.6)

Since $\phi_\lambda(\cdot)$ is the solution determined by (2.3), the following integral equations hold for $k = 0$ and $k = 1$:

$$\begin{aligned}
\frac{d^k}{dx^k}\phi_{-\varepsilon,\lambda}(x) ={}& \beta_2\frac{d^k}{dx^k}\cos\bigl(\sqrt{\lambda}(x-a)\bigr) - \frac{\beta_1}{\sqrt{\lambda}}\frac{d^k}{dx^k}\sin\bigl(\sqrt{\lambda}(x-a)\bigr) + \frac{1}{\sqrt{\lambda}}\int_a^x \frac{d^k}{dx^k}\sin\bigl(\sqrt{\lambda}(x-y)\bigr)q(y)\phi_{-\varepsilon,\lambda}(y)\,dy,\\
\frac{d^k}{dx^k}\phi_{\varepsilon,\lambda}(x) ={}& \delta^{-1}\phi_{-\varepsilon,\lambda}(\theta_{-\varepsilon})\frac{d^k}{dx^k}\cos\bigl(\sqrt{\lambda}(x-\theta_{-\varepsilon})\bigr) + \frac{\delta^{-1}}{\sqrt{\lambda}}\phi'_{-\varepsilon,\lambda}(\theta_{-\varepsilon})\frac{d^k}{dx^k}\sin\bigl(\sqrt{\lambda}(x-\theta_{-\varepsilon})\bigr)\\
&+ \frac{1}{\sqrt{\lambda}}\int_{\theta_{-\varepsilon}}^x \frac{d^k}{dx^k}\sin\bigl(\sqrt{\lambda}(x-y)\bigr)q(y)\phi_{\varepsilon,\lambda}(y)\,dy,\\
\frac{d^k}{dx^k}\phi_{+\varepsilon,\lambda}(x) ={}& \delta\gamma^{-1}\phi_{\varepsilon,\lambda}(\theta_{+\varepsilon})\frac{d^k}{dx^k}\cos\bigl(\sqrt{\lambda}(x-\theta_{+\varepsilon})\bigr) + \frac{\delta\gamma^{-1}}{\sqrt{\lambda}}\phi'_{\varepsilon,\lambda}(\theta_{+\varepsilon})\frac{d^k}{dx^k}\sin\bigl(\sqrt{\lambda}(x-\theta_{+\varepsilon})\bigr)\\
&+ \frac{1}{\sqrt{\lambda}}\int_{\theta_{+\varepsilon}}^x \frac{d^k}{dx^k}\sin\bigl(\sqrt{\lambda}(x-y)\bigr)q(y)\phi_{+\varepsilon,\lambda}(y)\,dy,
\end{aligned}$$

and $\phi_\lambda(\cdot)$ has the following asymptotic representations as $|\lambda| \to \infty$, which hold uniformly for $x \in I$ (here $t := \operatorname{Im}\sqrt{\lambda}$):

$$\begin{aligned}
\frac{d^k}{dx^k}\phi_{-\varepsilon,\lambda}(x) &= \beta_2\frac{d^k}{dx^k}\cos\bigl(\sqrt{\lambda}(x-a)\bigr) + O\bigl(|\lambda|^{(k-1)/2}e^{|t|(x-a)}\bigr),\\
\frac{d^k}{dx^k}\phi_{\varepsilon,\lambda}(x) &= \beta_2\delta^{-1}\frac{d^k}{dx^k}\cos\bigl(\sqrt{\lambda}(x-a)\bigr) + O\bigl(|\lambda|^{(k-1)/2}e^{|t|(x-a)}\bigr),\\
\frac{d^k}{dx^k}\phi_{+\varepsilon,\lambda}(x) &= \beta_2\gamma^{-1}\frac{d^k}{dx^k}\cos\bigl(\sqrt{\lambda}(x-a)\bigr) + O\bigl(|\lambda|^{(k-1)/2}e^{|t|(x-a)}\bigr),
\end{aligned}$$
(2.7)

if $\beta_2 \neq 0$,

$$\begin{aligned}
\frac{d^k}{dx^k}\phi_{-\varepsilon,\lambda}(x) &= -\frac{\beta_1}{\sqrt{\lambda}}\frac{d^k}{dx^k}\sin\bigl(\sqrt{\lambda}(x-a)\bigr) + O\bigl(|\lambda|^{(k-2)/2}e^{|t|(x-a)}\bigr),\\
\frac{d^k}{dx^k}\phi_{\varepsilon,\lambda}(x) &= -\frac{\beta_1\delta^{-1}}{\sqrt{\lambda}}\frac{d^k}{dx^k}\sin\bigl(\sqrt{\lambda}(x-a)\bigr) + O\bigl(|\lambda|^{(k-2)/2}e^{|t|(x-a)}\bigr),\\
\frac{d^k}{dx^k}\phi_{+\varepsilon,\lambda}(x) &= -\frac{\beta_1\gamma^{-1}}{\sqrt{\lambda}}\frac{d^k}{dx^k}\sin\bigl(\sqrt{\lambda}(x-a)\bigr) + O\bigl(|\lambda|^{(k-2)/2}e^{|t|(x-a)}\bigr),
\end{aligned}$$
(2.8)

if $\beta_2 = 0$.
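For completeness, here is a sketch (our own, following the standard argument) of how the first line of (2.7) is obtained; the remaining lines follow in the same way across the transmission points. Iterating the first integral equation and applying Gronwall's inequality gives the a priori bound

$$\bigl|\phi_{-\varepsilon,\lambda}(x)\bigr| \le C\,e^{|t|(x-a)} \qquad \text{for } |\lambda| \text{ sufficiently large},$$

and substituting this bound back into the integral term shows that this term, together with $-(\beta_1/\sqrt{\lambda})\sin(\sqrt{\lambda}(x-a))$, is $O(|\lambda|^{-1/2}e^{|t|(x-a)})$, which is the case $k = 0$ of the first line of (2.7); differentiating the integral equation in $x$ gives the case $k = 1$.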

We then obtain four distinct cases for the asymptotic behavior of $\omega(\lambda)$ as $|\lambda| \to \infty$, namely

$$\omega(\lambda) = \begin{cases}
\lambda\sqrt{\lambda}\,\alpha_2'\beta_2\gamma\,\sin\bigl(\sqrt{\lambda}(b-a)\bigr) + O\bigl(|\lambda|\,e^{|t|(b-a)}\bigr), & \text{if } \beta_2 \neq 0,\ \alpha_2' \neq 0,\\[2pt]
\lambda\,\alpha_1'\beta_2\gamma\,\cos\bigl(\sqrt{\lambda}(b-a)\bigr) + O\bigl(|\lambda|^{1/2}e^{|t|(b-a)}\bigr), & \text{if } \beta_2 \neq 0,\ \alpha_2' = 0,\\[2pt]
\lambda\,\alpha_2'\beta_1\gamma\,\cos\bigl(\sqrt{\lambda}(b-a)\bigr) + O\bigl(|\lambda|^{1/2}e^{|t|(b-a)}\bigr), & \text{if } \beta_2 = 0,\ \alpha_2' \neq 0,\\[2pt]
-\sqrt{\lambda}\,\alpha_1'\beta_1\gamma\,\sin\bigl(\sqrt{\lambda}(b-a)\bigr) + O\bigl(e^{|t|(b-a)}\bigr), & \text{if } \beta_2 = 0,\ \alpha_2' = 0.
\end{cases}$$

Consequently, if $\lambda_0 < \lambda_1 < \cdots$ are the zeros of $\omega(\lambda)$, then we have the following asymptotic formulas for sufficiently large $n$:

$$\sqrt{\lambda_n} = \begin{cases}
\dfrac{(n-1)\pi}{b-a} + O\bigl(n^{-1}\bigr), & \text{if } \beta_2 \neq 0,\ \alpha_2' \neq 0,\\[6pt]
\dfrac{(n-1/2)\pi}{b-a} + O\bigl(n^{-1}\bigr), & \text{if } \beta_2 \neq 0,\ \alpha_2' = 0,\\[6pt]
\dfrac{(n-1/2)\pi}{b-a} + O\bigl(n^{-1}\bigr), & \text{if } \beta_2 = 0,\ \alpha_2' \neq 0,\\[6pt]
\dfrac{n\pi}{b-a} + O\bigl(n^{-1}\bigr), & \text{if } \beta_2 = 0,\ \alpha_2' = 0.
\end{cases}$$
(2.9)
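The construction of $\phi_\lambda$ through (2.3) and of $\omega(\lambda)$ through (2.4)-(2.5) is directly computable, since $\omega(\lambda) = \gamma^{2}W(\phi_{+\varepsilon,\lambda},\chi_{+\varepsilon,\lambda};b) = \gamma^{2}\bigl[\phi_{+\varepsilon,\lambda}(b)(\lambda\alpha_1'+\alpha_1) - \phi'_{+\varepsilon,\lambda}(b)(\lambda\alpha_2'+\alpha_2)\bigr]$. The following minimal numerical sketch is our own illustration; the interval, potential, and coefficients are made-up values chosen only to satisfy the assumptions above, not data from the paper.

```python
# Sketch: build phi_lambda piecewise via (2.3), evaluate omega(lambda) via (2.4)-(2.5),
# and locate its first zeros (the eigenvalues). Only real, positive lambda are scanned.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

a, b = 0.0, 2.0                          # interval [a, b]
theta = 0.5 * (a + b); eps = 0.4         # midpoint theta and parameter epsilon
tm, tp = theta - eps, theta + eps        # discontinuity points theta_{-eps}, theta_{+eps}
b1, b2 = 1.0, 1.0                        # beta_1, beta_2
a1, a2, a1p, a2p = 1.0, 0.0, 0.0, -1.0   # alpha_1, alpha_2, alpha_1', alpha_2'  (rho = a1p*a2 - a1*a2p = 1 > 0)
delta, gamma = 2.0, 0.5
q = lambda x: np.cos(x)                  # any potential continuous on the pieces

def propagate(lam, y0, x0, x1):
    """Solve -u'' + q u = lam u, i.e. u'' = (q - lam) u, from x0 to x1."""
    rhs = lambda x, y: [y[1], (q(x) - lam) * y[0]]
    sol = solve_ivp(rhs, (x0, x1), y0, rtol=1e-9, atol=1e-11)
    return sol.y[:, -1]                  # [u(x1), u'(x1)]

def omega(lam):
    u = propagate(lam, [b2, -b1], a, tm)                            # phi(a)=beta_2, phi'(a)=-beta_1
    u = propagate(lam, [u[0] / delta, u[1] / delta], tm, tp)        # jump 1/delta at theta_{-eps}
    u = propagate(lam, [u[0] * delta / gamma, u[1] * delta / gamma], tp, b)  # jump delta/gamma at theta_{+eps}
    # omega = gamma^2 * W(phi_{+eps}, chi_{+eps}; b), with chi-data at b taken from (2.4)
    return gamma**2 * (u[0] * (lam * a1p + a1) - u[1] * (lam * a2p + a2))

grid = np.linspace(0.01, 100.0, 1000)
vals = [omega(l) for l in grid]
eigs = [brentq(omega, grid[i], grid[i + 1])
        for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]
print("first eigenvalues:", np.round(eigs[:5], 4))
print("sqrt-spacing vs pi/(b-a):", np.round(np.diff(np.sqrt(eigs[:5])), 4), np.pi / (b - a))
```

The last line compares the spacing of $\sqrt{\lambda_n}$ with $\pi/(b-a)$, which is what (2.9) predicts for large $n$.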

3 Green’s function

To study the completeness of the eigenvectors of A and hence the completeness of the eigenfunctions of the problem (1.1)-(1.7), we construct the resolvent of A as well as the Green’s function of the problem (1.1)-(1.7). We assume without any loss of generality that λ=0 is not an eigenvalue of A.

Now let $\lambda \in \mathbb{C}$ not be an eigenvalue of $A$, and consider the inhomogeneous problem for $\mathbf{f}(x) = \binom{f(x)}{h} \in H$, $\mathbf{u}(x) = \binom{u(x)}{R'(u)} \in D(A)$:

$$(\lambda I - A)\mathbf{u}(x) = \mathbf{f}(x), \qquad x \in I,$$

where I is the identity operator. Since

$$(\lambda I - A)\mathbf{u}(x) = \lambda\binom{u(x)}{R'(u)} - \binom{\tau(u)}{-R(u)} = \binom{f(x)}{h},$$

we have

$$(\lambda - \tau)u(x) = f(x), \qquad x \in I,$$
(3.1)
$$\lambda R'(u) + R(u) = h.$$
(3.2)

We can represent the general solution of the homogeneous differential equation (1.1), corresponding to (3.1), in the following form:

$$u(x,\lambda) = \begin{cases}
c_1\phi_{-\varepsilon,\lambda}(x) + c_2\chi_{-\varepsilon,\lambda}(x), & x \in [a,\theta_{-\varepsilon}),\\
c_3\phi_{\varepsilon,\lambda}(x) + c_4\chi_{\varepsilon,\lambda}(x), & x \in (\theta_{-\varepsilon},\theta_{+\varepsilon}),\\
c_5\phi_{+\varepsilon,\lambda}(x) + c_6\chi_{+\varepsilon,\lambda}(x), & x \in (\theta_{+\varepsilon},b],
\end{cases}$$

in which $c_i$ $(i = \overline{1,6})$ are arbitrary constants. Applying the method of variation of constants, we seek the general solution of the non-homogeneous linear differential equation (3.1) in the form

$$u(x,\lambda) = \begin{cases}
c_1(x,\lambda)\phi_{-\varepsilon,\lambda}(x) + c_2(x,\lambda)\chi_{-\varepsilon,\lambda}(x), & x \in [a,\theta_{-\varepsilon}),\\
c_3(x,\lambda)\phi_{\varepsilon,\lambda}(x) + c_4(x,\lambda)\chi_{\varepsilon,\lambda}(x), & x \in (\theta_{-\varepsilon},\theta_{+\varepsilon}),\\
c_5(x,\lambda)\phi_{+\varepsilon,\lambda}(x) + c_6(x,\lambda)\chi_{+\varepsilon,\lambda}(x), & x \in (\theta_{+\varepsilon},b],
\end{cases}$$
(3.3)

where the functions $c_i(x,\lambda)$ $(i = \overline{1,6})$ satisfy the following linear system of equations:

$$\begin{cases}
c_1'(x,\lambda)\phi_{-\varepsilon,\lambda}(x) + c_2'(x,\lambda)\chi_{-\varepsilon,\lambda}(x) = 0,\\
c_1'(x,\lambda)\phi'_{-\varepsilon,\lambda}(x) + c_2'(x,\lambda)\chi'_{-\varepsilon,\lambda}(x) = f(x),
\end{cases} \quad \text{for } x \in [a,\theta_{-\varepsilon}),$$
$$\begin{cases}
c_3'(x,\lambda)\phi_{\varepsilon,\lambda}(x) + c_4'(x,\lambda)\chi_{\varepsilon,\lambda}(x) = 0,\\
c_3'(x,\lambda)\phi'_{\varepsilon,\lambda}(x) + c_4'(x,\lambda)\chi'_{\varepsilon,\lambda}(x) = f(x),
\end{cases} \quad \text{for } x \in (\theta_{-\varepsilon},\theta_{+\varepsilon}),$$
$$\begin{cases}
c_5'(x,\lambda)\phi_{+\varepsilon,\lambda}(x) + c_6'(x,\lambda)\chi_{+\varepsilon,\lambda}(x) = 0,\\
c_5'(x,\lambda)\phi'_{+\varepsilon,\lambda}(x) + c_6'(x,\lambda)\chi'_{+\varepsilon,\lambda}(x) = f(x),
\end{cases} \quad \text{for } x \in (\theta_{+\varepsilon},b].$$
(3.4)

Since $\lambda$ is not an eigenvalue, we have $\omega_{-\varepsilon}(\lambda) \neq 0$, $\omega_{\varepsilon}(\lambda) \neq 0$, and $\omega_{+\varepsilon}(\lambda) \neq 0$, so each of the linear systems in (3.4) has a unique solution, which leads to

$$\begin{cases}
c_1(x,\lambda) = \dfrac{1}{\omega_{-\varepsilon}(\lambda)}\displaystyle\int_x^{\theta_{-\varepsilon}} \chi_{-\varepsilon,\lambda}(y)f(y)\,dy + c_1(\lambda),\\[8pt]
c_2(x,\lambda) = \dfrac{1}{\omega_{-\varepsilon}(\lambda)}\displaystyle\int_a^{x} \phi_{-\varepsilon,\lambda}(y)f(y)\,dy + c_2(\lambda),
\end{cases} \quad \text{for } x \in [a,\theta_{-\varepsilon}),$$
$$\begin{cases}
c_3(x,\lambda) = \dfrac{1}{\omega_{\varepsilon}(\lambda)}\displaystyle\int_x^{\theta_{+\varepsilon}} \chi_{\varepsilon,\lambda}(y)f(y)\,dy + c_3(\lambda),\\[8pt]
c_4(x,\lambda) = \dfrac{1}{\omega_{\varepsilon}(\lambda)}\displaystyle\int_{\theta_{-\varepsilon}}^{x} \phi_{\varepsilon,\lambda}(y)f(y)\,dy + c_4(\lambda),
\end{cases} \quad \text{for } x \in (\theta_{-\varepsilon},\theta_{+\varepsilon}),$$
$$\begin{cases}
c_5(x,\lambda) = \dfrac{1}{\omega_{+\varepsilon}(\lambda)}\displaystyle\int_x^{b} \chi_{+\varepsilon,\lambda}(y)f(y)\,dy + c_5(\lambda),\\[8pt]
c_6(x,\lambda) = \dfrac{1}{\omega_{+\varepsilon}(\lambda)}\displaystyle\int_{\theta_{+\varepsilon}}^{x} \phi_{+\varepsilon,\lambda}(y)f(y)\,dy + c_6(\lambda),
\end{cases} \quad \text{for } x \in (\theta_{+\varepsilon},b],$$
(3.5)
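As a sketch of the step from (3.4) to (3.5) (our own spelling-out, written for the first subinterval; the other two are analogous), Cramer's rule applied to the first system in (3.4) gives

$$c_1'(x,\lambda) = \frac{-\chi_{-\varepsilon,\lambda}(x)f(x)}{\omega_{-\varepsilon}(\lambda)}, \qquad c_2'(x,\lambda) = \frac{\phi_{-\varepsilon,\lambda}(x)f(x)}{\omega_{-\varepsilon}(\lambda)},$$

and integrating from $x$ to $\theta_{-\varepsilon}$ and from $a$ to $x$, respectively, yields the first pair of formulas in (3.5) up to the integration constants $c_1(\lambda)$, $c_2(\lambda)$.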

Here $c_i(\lambda)$ $(i = \overline{1,6})$ are arbitrary constants. Substituting (3.5) into (3.3), we obtain the solution of (3.1) as

$$u(x,\lambda) = \begin{cases}
\dfrac{\phi_{-\varepsilon,\lambda}(x)}{\omega_{-\varepsilon}(\lambda)}\displaystyle\int_x^{\theta_{-\varepsilon}} \chi_{-\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\chi_{-\varepsilon,\lambda}(x)}{\omega_{-\varepsilon}(\lambda)}\displaystyle\int_a^{x} \phi_{-\varepsilon,\lambda}(y)f(y)\,dy\\
\qquad + c_1(\lambda)\phi_{-\varepsilon,\lambda}(x) + c_2(\lambda)\chi_{-\varepsilon,\lambda}(x), & x \in [a,\theta_{-\varepsilon}),\\[8pt]
\dfrac{\phi_{\varepsilon,\lambda}(x)}{\omega_{\varepsilon}(\lambda)}\displaystyle\int_x^{\theta_{+\varepsilon}} \chi_{\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\chi_{\varepsilon,\lambda}(x)}{\omega_{\varepsilon}(\lambda)}\displaystyle\int_{\theta_{-\varepsilon}}^{x} \phi_{\varepsilon,\lambda}(y)f(y)\,dy\\
\qquad + c_3(\lambda)\phi_{\varepsilon,\lambda}(x) + c_4(\lambda)\chi_{\varepsilon,\lambda}(x), & x \in (\theta_{-\varepsilon},\theta_{+\varepsilon}),\\[8pt]
\dfrac{\phi_{+\varepsilon,\lambda}(x)}{\omega_{+\varepsilon}(\lambda)}\displaystyle\int_x^{b} \chi_{+\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\chi_{+\varepsilon,\lambda}(x)}{\omega_{+\varepsilon}(\lambda)}\displaystyle\int_{\theta_{+\varepsilon}}^{x} \phi_{+\varepsilon,\lambda}(y)f(y)\,dy\\
\qquad + c_5(\lambda)\phi_{+\varepsilon,\lambda}(x) + c_6(\lambda)\chi_{+\varepsilon,\lambda}(x), & x \in (\theta_{+\varepsilon},b].
\end{cases}$$
(3.6)

Then, from the boundary conditions (3.2), (1.2), and the transmission conditions (1.4)-(1.7), we get

$$\begin{aligned}
c_1(\lambda) &= \frac{1}{\omega_{\varepsilon}(\lambda)}\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} \chi_{\varepsilon,\lambda}(y)f(y)\,dy + \frac{1}{\omega_{+\varepsilon}(\lambda)}\int_{\theta_{+\varepsilon}}^{b} \chi_{+\varepsilon,\lambda}(y)f(y)\,dy + \frac{h}{\omega_{+\varepsilon}(\lambda)}, \qquad c_2(\lambda) = 0,\\
c_3(\lambda) &= \frac{1}{\omega_{+\varepsilon}(\lambda)}\int_{\theta_{+\varepsilon}}^{b} \chi_{+\varepsilon,\lambda}(y)f(y)\,dy + \frac{h}{\omega_{+\varepsilon}(\lambda)}, \qquad c_4(\lambda) = \frac{1}{\omega_{-\varepsilon}(\lambda)}\int_a^{\theta_{-\varepsilon}} \phi_{-\varepsilon,\lambda}(y)f(y)\,dy,\\
c_5(\lambda) &= \frac{h}{\omega_{+\varepsilon}(\lambda)}, \qquad c_6(\lambda) = \frac{1}{\omega_{-\varepsilon}(\lambda)}\int_a^{\theta_{-\varepsilon}} \phi_{-\varepsilon,\lambda}(y)f(y)\,dy + \frac{1}{\omega_{\varepsilon}(\lambda)}\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} \phi_{\varepsilon,\lambda}(y)f(y)\,dy.
\end{aligned}$$
(3.7)

Substituting (3.7) into (3.6) and using (2.5), we can rewrite (3.6) as

$$u(x,\lambda) = \begin{cases}
\dfrac{\phi_{-\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_x^{\theta_{-\varepsilon}} \chi_{-\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\chi_{-\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_a^{x} \phi_{-\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\delta^2\phi_{-\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} \chi_{\varepsilon,\lambda}(y)f(y)\,dy\\
\qquad + \dfrac{\gamma^2\phi_{-\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_{\theta_{+\varepsilon}}^{b} \chi_{+\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\gamma^2 h}{\omega(\lambda)}\phi_{-\varepsilon,\lambda}(x), & x \in [a,\theta_{-\varepsilon}),\\[10pt]
\dfrac{\delta^2\phi_{\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_x^{\theta_{+\varepsilon}} \chi_{\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\delta^2\chi_{\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_{\theta_{-\varepsilon}}^{x} \phi_{\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\chi_{\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_a^{\theta_{-\varepsilon}} \phi_{-\varepsilon,\lambda}(y)f(y)\,dy\\
\qquad + \dfrac{\gamma^2\phi_{\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_{\theta_{+\varepsilon}}^{b} \chi_{+\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\gamma^2 h}{\omega(\lambda)}\phi_{\varepsilon,\lambda}(x), & x \in (\theta_{-\varepsilon},\theta_{+\varepsilon}),\\[10pt]
\dfrac{\gamma^2\phi_{+\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_x^{b} \chi_{+\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\gamma^2\chi_{+\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_{\theta_{+\varepsilon}}^{x} \phi_{+\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\chi_{+\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_a^{\theta_{-\varepsilon}} \phi_{-\varepsilon,\lambda}(y)f(y)\,dy\\
\qquad + \dfrac{\delta^2\chi_{+\varepsilon,\lambda}(x)}{\omega(\lambda)}\displaystyle\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} \phi_{\varepsilon,\lambda}(y)f(y)\,dy + \dfrac{\gamma^2 h}{\omega(\lambda)}\phi_{+\varepsilon,\lambda}(x), & x \in (\theta_{+\varepsilon},b].
\end{cases}$$

Hence we have

$$\mathbf{u}(x) = (\lambda I - A)^{-1}\mathbf{f}(x) = \begin{pmatrix}
\displaystyle\int_a^{\theta_{-\varepsilon}} G(x,y;\lambda)f(y)\,dy + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} G(x,y;\lambda)f(y)\,dy + \gamma^2\int_{\theta_{+\varepsilon}}^{b} G(x,y;\lambda)f(y)\,dy + \gamma^2 h\,\dfrac{\phi_\lambda(x)}{\omega(\lambda)}\\[8pt]
R'(u)
\end{pmatrix},$$
(3.8)

where

$$G(x,y;\lambda) = \begin{cases}
\dfrac{\phi_\lambda(y)\chi_\lambda(x)}{\omega(\lambda)}, & a \le y \le x \le b,\ x \neq \theta_{\mp\varepsilon},\ y \neq \theta_{\mp\varepsilon},\\[8pt]
\dfrac{\phi_\lambda(x)\chi_\lambda(y)}{\omega(\lambda)}, & a \le x \le y \le b,\ x \neq \theta_{\mp\varepsilon},\ y \neq \theta_{\mp\varepsilon},
\end{cases}$$
(3.9)

is the Green’s function of the problem (1.1)-(1.7).
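As a consistency check (our own verification, not part of the source argument), the weights $\delta^2$, $\gamma^2$ in (3.8) compensate for the piecewise values of the Wronskian in (2.5). For $y$ in the middle subinterval, say, the kernel in (3.8) produces at $x = y$ the jump

$$\delta^{2}\Bigl(\frac{\partial G}{\partial x}(y+0,y;\lambda) - \frac{\partial G}{\partial x}(y-0,y;\lambda)\Bigr)
= \delta^{2}\,\frac{\phi_{\varepsilon,\lambda}(y)\chi'_{\varepsilon,\lambda}(y) - \phi'_{\varepsilon,\lambda}(y)\chi_{\varepsilon,\lambda}(y)}{\omega(\lambda)}
= \frac{\delta^{2}\omega_{\varepsilon}(\lambda)}{\omega(\lambda)} = 1,$$

which is exactly the unit jump required for $(\lambda-\tau)u = f$ to hold there; the cases $y \in [a,\theta_{-\varepsilon})$ and $y \in (\theta_{+\varepsilon},b]$ work in the same way with the weights $1$ and $\gamma^{2}$.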

4 The sampling theorems

In this section we derive two sampling theorems associated with the problem (1.1)-(1.7). For convenience we may assume that the eigenvectors of A are real valued.

Theorem 1

Consider the problem (1.1)-(1.7), and let

$$\phi_\lambda(x) = \begin{cases} \phi_{-\varepsilon,\lambda}(x), & x \in [a,\theta_{-\varepsilon}),\\ \phi_{\varepsilon,\lambda}(x), & x \in (\theta_{-\varepsilon},\theta_{+\varepsilon}),\\ \phi_{+\varepsilon,\lambda}(x), & x \in (\theta_{+\varepsilon},b], \end{cases}$$

be the solution defined above. Let $g(\cdot) \in L^2(a,b)$ and

$$F(\lambda) = \int_a^{\theta_{-\varepsilon}} g(x)\phi_{-\varepsilon,\lambda}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} g(x)\phi_{\varepsilon,\lambda}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} g(x)\phi_{+\varepsilon,\lambda}(x)\,dx.$$
(4.1)

Then $F(\lambda)$ is an entire function of exponential type $(b-a)$ that can be reconstructed from its values at the points $\{\lambda_n\}_{n=0}^{\infty}$ via the sampling formula

$$F(\lambda) = \sum_{n=0}^{\infty} F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\omega'(\lambda_n)}.$$
(4.2)

The series (4.2) converges absolutely on $\mathbb{C}$ and uniformly on compact subsets of $\mathbb{C}$. Here $\omega(\lambda)$ is the entire function defined in (2.5).

Proof

The relation (4.1) can be rewritten as an inner product in $H$ as follows:

$$F(\lambda) = \langle \mathbf{g}(\cdot), \Phi_\lambda(\cdot)\rangle_H = \int_a^{\theta_{-\varepsilon}} g(x)\phi_{-\varepsilon,\lambda}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} g(x)\phi_{\varepsilon,\lambda}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} g(x)\phi_{+\varepsilon,\lambda}(x)\,dx,$$
(4.3)

where

$$\mathbf{g}(x) = \binom{g(x)}{0}, \qquad \Phi_\lambda(x) = \binom{\phi_\lambda(x)}{R'(\phi_\lambda)} \in H.$$

Since both $\mathbf{g}(\cdot)$ and $\Phi_\lambda(\cdot)$ are in $H$, they have the Fourier expansions

$$\mathbf{g}(x) = \sum_{n=0}^{\infty} \hat{g}(n)\frac{\Phi_n(x)}{\|\Phi_n(\cdot)\|_H^2}, \qquad \Phi_\lambda(x) = \sum_{n=0}^{\infty} \langle \Phi_\lambda(\cdot),\Phi_n(\cdot)\rangle_H\,\frac{\Phi_n(x)}{\|\Phi_n(\cdot)\|_H^2},$$

where

$$\hat{g}(n) = \langle \mathbf{g}(\cdot),\Phi_n(\cdot)\rangle_H = \int_a^{\theta_{-\varepsilon}} g(x)\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} g(x)\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} g(x)\phi_{+\varepsilon,\lambda_n}(x)\,dx = F(\lambda_n).$$
(4.4)

Applying Parseval’s identity to (4.3) and using (4.4), we obtain

$$F(\lambda) = \sum_{n=0}^{\infty} F(\lambda_n)\frac{\langle \Phi_n(\cdot),\Phi_\lambda(\cdot)\rangle_H}{\|\Phi_n(\cdot)\|_H^2}.$$

We now calculate $\langle \Phi_n(\cdot),\Phi_\lambda(\cdot)\rangle_H$ and $\|\Phi_n(\cdot)\|_H$. To prove formula (4.2), we need to show that

$$\frac{\langle \Phi_n(\cdot),\Phi_\lambda(\cdot)\rangle_H}{\|\Phi_n(\cdot)\|_H^2} = \frac{\omega(\lambda)}{(\lambda-\lambda_n)\omega'(\lambda_n)}, \qquad n = 0,1,2,\ldots.$$
(4.5)

By the definition of the inner product of H, we have

$$\langle \Phi_\lambda(\cdot),\Phi_n(\cdot)\rangle_H = \int_a^{\theta_{-\varepsilon}} \phi_{-\varepsilon,\lambda}(x)\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} \phi_{\varepsilon,\lambda}(x)\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} \phi_{+\varepsilon,\lambda}(x)\phi_{+\varepsilon,\lambda_n}(x)\,dx + \frac{\gamma^2}{\rho}R'(\phi_\lambda)R'(\phi_{\lambda_n}).$$
(4.6)

From the Green’s identity [26], we have

$$\begin{aligned}
&\int_a^{\theta_{-\varepsilon}} \tau(\phi_{-\varepsilon,\lambda})\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} \tau(\phi_{\varepsilon,\lambda})\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} \tau(\phi_{+\varepsilon,\lambda})\phi_{+\varepsilon,\lambda_n}(x)\,dx\\
&\quad = \int_a^{\theta_{-\varepsilon}} \phi_{-\varepsilon,\lambda}(x)\tau(\phi_{-\varepsilon,\lambda_n})\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} \phi_{\varepsilon,\lambda}(x)\tau(\phi_{\varepsilon,\lambda_n})\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} \phi_{+\varepsilon,\lambda}(x)\tau(\phi_{+\varepsilon,\lambda_n})\,dx\\
&\qquad + W(\phi_{-\varepsilon,\lambda},\phi_{-\varepsilon,\lambda_n};\theta_{-\varepsilon}-) - W(\phi_{-\varepsilon,\lambda},\phi_{-\varepsilon,\lambda_n};a)\\
&\qquad + \delta^2 W(\phi_{\varepsilon,\lambda},\phi_{\varepsilon,\lambda_n};\theta_{+\varepsilon}-) - \delta^2 W(\phi_{\varepsilon,\lambda},\phi_{\varepsilon,\lambda_n};\theta_{-\varepsilon}+)\\
&\qquad + \gamma^2 W(\phi_{+\varepsilon,\lambda},\phi_{+\varepsilon,\lambda_n};b) - \gamma^2 W(\phi_{+\varepsilon,\lambda},\phi_{+\varepsilon,\lambda_n};\theta_{+\varepsilon}+),
\end{aligned}$$
(4.7)

and then, using (2.3) and (2.4), the equality (4.7) becomes

$$(\lambda - \lambda_n)\biggl(\int_a^{\theta_{-\varepsilon}} \phi_{-\varepsilon,\lambda}(x)\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} \phi_{\varepsilon,\lambda}(x)\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} \phi_{+\varepsilon,\lambda}(x)\phi_{+\varepsilon,\lambda_n}(x)\,dx\biggr) = \gamma^2 W(\phi_{+\varepsilon,\lambda},\phi_{+\varepsilon,\lambda_n};b).$$

Thus

$$\int_a^{\theta_{-\varepsilon}} \phi_{-\varepsilon,\lambda}(x)\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} \phi_{\varepsilon,\lambda}(x)\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} \phi_{+\varepsilon,\lambda}(x)\phi_{+\varepsilon,\lambda_n}(x)\,dx = \frac{\gamma^2 W(\phi_{+\varepsilon,\lambda},\phi_{+\varepsilon,\lambda_n};b)}{\lambda - \lambda_n}.$$
(4.8)

From (2.2), (2.4)-(2.6), we have

$$\begin{aligned}
W(\phi_{+\varepsilon,\lambda},\phi_{+\varepsilon,\lambda_n};b) &= \phi_{+\varepsilon,\lambda}(b)\phi'_{+\varepsilon,\lambda_n}(b) - \phi'_{+\varepsilon,\lambda}(b)\phi_{+\varepsilon,\lambda_n}(b)\\
&= k_n^{-1}\bigl\{\phi_{+\varepsilon,\lambda}(b)\chi'_{+\varepsilon,\lambda_n}(b) - \phi'_{+\varepsilon,\lambda}(b)\chi_{+\varepsilon,\lambda_n}(b)\bigr\}\\
&= k_n^{-1}\bigl\{\phi_{+\varepsilon,\lambda}(b)(\lambda_n\alpha_1' + \alpha_1) - \phi'_{+\varepsilon,\lambda}(b)(\lambda_n\alpha_2' + \alpha_2)\bigr\}\\
&= k_n^{-1}\bigl\{\omega(\lambda) + (\lambda_n - \lambda)R'(\phi_\lambda)\bigr\}.
\end{aligned}$$
(4.9)

Equations (2.5), (2.6), and $R'(\chi_{\lambda_n}) = \rho$ yield

$$\frac{\gamma^2}{\rho}R'(\phi_\lambda)R'(\phi_{\lambda_n}) = \frac{\gamma^2}{\rho}k_n^{-1}R'(\phi_\lambda)R'(\chi_{\lambda_n}) = \gamma^2 k_n^{-1}R'(\phi_\lambda).$$
(4.10)

Substituting from (4.8), (4.9), and (4.10) into (4.6), we get

$$\langle \Phi_\lambda(\cdot),\Phi_n(\cdot)\rangle_H = \gamma^2 k_n^{-1}\frac{\omega(\lambda)}{\lambda - \lambda_n}.$$
(4.11)

Letting λ λ n in (4.11), since the zeros of ω(λ) are simple, we get

$$\langle \Phi_n(\cdot),\Phi_n(\cdot)\rangle_H = \|\Phi_n(\cdot)\|_H^2 = \gamma^2 k_n^{-1}\omega'(\lambda_n).$$
(4.12)

Therefore, from (4.11) and (4.12) we get (4.5); hence (4.2) is proved with pointwise convergence on $\mathbb{C}$. Now we investigate the convergence of (4.2). First we prove that it is absolutely convergent on $\mathbb{C}$. Using the Cauchy-Schwarz inequality, for $\lambda \in \mathbb{C}$,

$$\Biggl|\sum_{n=0}^{\infty} F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\omega'(\lambda_n)}\Biggr| \le \Biggl(\sum_{n=0}^{\infty}\frac{|\langle \mathbf{g}(\cdot),\Phi_n(\cdot)\rangle_H|^2}{\|\Phi_n(\cdot)\|_H^2}\Biggr)^{1/2}\Biggl(\sum_{n=0}^{\infty}\frac{|\langle \Phi_n(\cdot),\Phi_\lambda(\cdot)\rangle_H|^2}{\|\Phi_n(\cdot)\|_H^2}\Biggr)^{1/2}.$$
(4.13)

Since $\mathbf{g}(\cdot),\Phi_\lambda(\cdot) \in H$, both series on the right-hand side of (4.13) converge. Thus the series (4.2) converges absolutely on $\mathbb{C}$. For uniform convergence, let $M \subset \mathbb{C}$ be compact, let $\lambda \in M$, and let $N > 0$. Define $\sigma_N(\lambda)$ by

$$\sigma_N(\lambda) := \Biggl|F(\lambda) - \sum_{n=0}^{N} F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\omega'(\lambda_n)}\Biggr|.$$

Using the same method as above,

$$\sigma_N(\lambda) \le \Biggl(\sum_{n=N+1}^{\infty}\frac{|\langle \mathbf{g}(\cdot),\Phi_n(\cdot)\rangle_H|^2}{\|\Phi_n(\cdot)\|_H^2}\Biggr)^{1/2}\Biggl(\sum_{n=N+1}^{\infty}\frac{|\langle \Phi_n(\cdot),\Phi_\lambda(\cdot)\rangle_H|^2}{\|\Phi_n(\cdot)\|_H^2}\Biggr)^{1/2}.$$

Therefore

$$\sigma_N(\lambda) \le \|\Phi_\lambda(\cdot)\|_H\Biggl(\sum_{n=N+1}^{\infty}\frac{|\langle \mathbf{g}(\cdot),\Phi_n(\cdot)\rangle_H|^2}{\|\Phi_n(\cdot)\|_H^2}\Biggr)^{1/2}.$$

Since $[a,b]\times M$ is compact, we can find a positive constant $C_M$ such that

$$\|\Phi_\lambda(\cdot)\|_H \le C_M \quad \text{for all } \lambda \in M.$$

Therefore,

$$\sigma_N(\lambda) \le C_M\Biggl(\sum_{n=N+1}^{\infty}\frac{|\langle \mathbf{g}(\cdot),\Phi_n(\cdot)\rangle_H|^2}{\|\Phi_n(\cdot)\|_H^2}\Biggr)^{1/2}$$

uniformly on M. In view of Parseval’s equality,

$$\Biggl(\sum_{n=N+1}^{\infty}\frac{|\langle \mathbf{g}(\cdot),\Phi_n(\cdot)\rangle_H|^2}{\|\Phi_n(\cdot)\|_H^2}\Biggr)^{1/2} \to 0 \quad \text{as } N \to \infty.$$

Thus $\sigma_N(\lambda) \to 0$ uniformly on $M$. Hence (4.2) converges uniformly on $M$. As a result, $F(\lambda)$ is analytic on compact subsets of $\mathbb{C}$ and hence is an entire function. From the relation

$$|F(\lambda)| \le \int_a^{\theta_{-\varepsilon}} |g(x)||\phi_{-\varepsilon,\lambda}(x)|\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} |g(x)||\phi_{\varepsilon,\lambda}(x)|\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} |g(x)||\phi_{+\varepsilon,\lambda}(x)|\,dx,$$

and the fact that $\phi_{-\varepsilon,\lambda}(x)$, $\phi_{\varepsilon,\lambda}(x)$, and $\phi_{+\varepsilon,\lambda}(x)$ are entire functions of $\lambda$ of exponential type $(b-a)$, we conclude that $F(\lambda)$ is also of exponential type $(b-a)$. □

Remark 1

To see that the expansion (4.2) is a Lagrange type interpolation, we may replace ω(λ) by the canonical product

$$\varpi(\lambda) = \begin{cases}
\displaystyle\prod_{n=0}^{\infty}\Bigl(1 - \frac{\lambda}{\lambda_n}\Bigr), & \text{if zero is not an eigenvalue},\\[8pt]
\lambda\displaystyle\prod_{n=1}^{\infty}\Bigl(1 - \frac{\lambda}{\lambda_n}\Bigr), & \text{if } \lambda_0 = 0 \text{ is an eigenvalue}.
\end{cases}$$
(4.14)

From Hadamard’s factorization theorem (see [27]), ω(λ)=h(λ)ϖ(λ), where h(λ) is an entire function with no zeros. Thus,

$$\frac{\omega(\lambda)}{\omega'(\lambda_n)} = \frac{h(\lambda)\varpi(\lambda)}{h(\lambda_n)\varpi'(\lambda_n)},$$

and (4.1), (4.2) remain valid for the function F(λ)/h(λ). Hence

$$F(\lambda) = \sum_{n=0}^{\infty} F(\lambda_n)\frac{h(\lambda)\varpi(\lambda)}{(\lambda-\lambda_n)h(\lambda_n)\varpi'(\lambda_n)}.$$

We may redefine (4.1) by taking the kernel $\bar{\phi}_\lambda(\cdot) := \phi_\lambda(\cdot)/h(\lambda)$ to get

$$\bar{F}(\lambda) = \frac{F(\lambda)}{h(\lambda)} = \sum_{n=0}^{\infty}\bar{F}(\lambda_n)\frac{\varpi(\lambda)}{(\lambda-\lambda_n)\varpi'(\lambda_n)}.$$

The next theorem gives an interpolation sampling expansion associated with the problem (1.1)-(1.7) defined in terms of the Green's function (these steps were introduced for the first time in [4], [10] and more recently in [21], [28]). As we see in (3.9), the Green's function $G(x,y;\lambda)$ of the problem (1.1)-(1.7) has simple poles at $\{\lambda_n\}_{n=0}^{\infty}$. Define the function $G(x,\lambda) := \omega(\lambda)G(x,y_0;\lambda)$, where $y_0 \in I$ is a fixed point and $\omega(\lambda)$ is the function defined in (2.5) or the canonical product (4.14).

Theorem 2

Let $g(\cdot) \in L^2(a,b)$ and let $F(\lambda)$ be the integral transform

$$F(\lambda) = \int_a^{\theta_{-\varepsilon}} G(x,\lambda)\bar{g}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} G(x,\lambda)\bar{g}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} G(x,\lambda)\bar{g}(x)\,dx.$$
(4.15)

Then $F(\lambda)$ is an entire function of exponential type $(b-a)$ which admits the sampling representation

$$F(\lambda) = \sum_{n=0}^{\infty} F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\omega'(\lambda_n)}.$$
(4.16)

The series (4.16) converges absolutely on $\mathbb{C}$ and uniformly on compact subsets of $\mathbb{C}$.

Proof

The integral transform (4.15) can be rewritten as

$$F(\lambda) = \langle \mathbf{G}(\cdot,\lambda), \mathbf{g}(\cdot)\rangle_H,$$
(4.17)

where

$$\mathbf{g}(x) = \binom{g(x)}{0}, \qquad \mathbf{G}(x,\lambda) = \binom{G(x,\lambda)}{R'(G)} \in H.$$

Applying Parseval's identity to (4.17) with respect to $\{\Phi_n(\cdot)\}_{n=0}^{\infty}$, we obtain

$$F(\lambda) = \sum_{n=0}^{\infty}\frac{\langle \mathbf{G}(\cdot,\lambda),\Phi_n(\cdot)\rangle_H\,\overline{\langle \mathbf{g}(\cdot),\Phi_n(\cdot)\rangle_H}}{\|\Phi_n(\cdot)\|_H^2}.$$
(4.18)

Let $\lambda \neq \lambda_n$. Since each $\Phi_n(\cdot)$ is an eigenvector of $A$,

$$(\lambda I - A)\Phi_n(x) = (\lambda - \lambda_n)\Phi_n(x).$$

Thus

$$(\lambda I - A)^{-1}\Phi_n(x) = \frac{1}{\lambda - \lambda_n}\Phi_n(x).$$
(4.19)

From (3.8) and (4.19), we obtain

$$\int_a^{\theta_{-\varepsilon}} G(x,y_0;\lambda)\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} G(x,y_0;\lambda)\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} G(x,y_0;\lambda)\phi_{+\varepsilon,\lambda_n}(x)\,dx + \frac{\gamma^2}{\omega(\lambda)}\phi_\lambda(y_0)R'(\phi_{\lambda_n}) = \frac{1}{\lambda - \lambda_n}\phi_{\lambda_n}(y_0).$$
(4.20)

Using $R'(\phi_{\lambda_n}) = k_n^{-1}\rho$, (4.20) becomes

$$\int_a^{\theta_{-\varepsilon}} G(x,y_0;\lambda)\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} G(x,y_0;\lambda)\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} G(x,y_0;\lambda)\phi_{+\varepsilon,\lambda_n}(x)\,dx + \frac{\gamma^2 k_n^{-1}\rho}{\omega(\lambda)}\phi_\lambda(y_0) = \frac{1}{\lambda - \lambda_n}\phi_{\lambda_n}(y_0).$$
(4.21)

Hence (4.21) can be rewritten as

$$\int_a^{\theta_{-\varepsilon}} G(x,\lambda)\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} G(x,\lambda)\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} G(x,\lambda)\phi_{+\varepsilon,\lambda_n}(x)\,dx + \gamma^2 k_n^{-1}\rho\,\phi_\lambda(y_0) = \frac{\omega(\lambda)}{\lambda - \lambda_n}\phi_{\lambda_n}(y_0).$$
(4.22)

From the definition of G(,λ), we have

$$\langle \mathbf{G}(\cdot,\lambda),\Phi_n(\cdot)\rangle_H = \int_a^{\theta_{-\varepsilon}} G(x,\lambda)\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} G(x,\lambda)\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} G(x,\lambda)\phi_{+\varepsilon,\lambda_n}(x)\,dx + \frac{\gamma^2}{\rho}R'(G)R'(\phi_{\lambda_n}).$$
(4.23)

From (3.8), we have

$$R'(G) = \phi_\lambda(y_0)R'(\chi_{+\varepsilon,\lambda}).$$
(4.24)

Combining (4.24), $R'(\chi_{+\varepsilon,\lambda}) = \rho$, and (2.6) with (4.23) yields

$$\langle \mathbf{G}(\cdot,\lambda),\Phi_n(\cdot)\rangle_H = \int_a^{\theta_{-\varepsilon}} G(x,\lambda)\phi_{-\varepsilon,\lambda_n}(x)\,dx + \delta^2\int_{\theta_{-\varepsilon}}^{\theta_{+\varepsilon}} G(x,\lambda)\phi_{\varepsilon,\lambda_n}(x)\,dx + \gamma^2\int_{\theta_{+\varepsilon}}^{b} G(x,\lambda)\phi_{+\varepsilon,\lambda_n}(x)\,dx + \gamma^2 k_n^{-1}\rho\,\phi_\lambda(y_0).$$
(4.25)

Combining (4.22) and (4.25), we get

$$\langle \mathbf{G}(\cdot,\lambda),\Phi_n(\cdot)\rangle_H = \frac{\omega(\lambda)}{\lambda - \lambda_n}\phi_{\lambda_n}(y_0).$$
(4.26)

As an element of $H$, $\mathbf{G}(\cdot,\lambda)$ has the eigenvector expansion

$$\mathbf{G}(x,\lambda) = \sum_{i=0}^{\infty}\frac{\langle \mathbf{G}(\cdot,\lambda),\Phi_i(\cdot)\rangle_H\,\Phi_i(x)}{\|\Phi_i(\cdot)\|_H^2} = \sum_{i=0}^{\infty}\frac{\omega(\lambda)}{\lambda - \lambda_i}\phi_{\lambda_i}(y_0)\frac{\Phi_i(x)}{\|\Phi_i(\cdot)\|_H^2}.$$
(4.27)

Taking the limit as $\lambda \to \lambda_n$ in (4.17), we get

$$F(\lambda_n) = \lim_{\lambda\to\lambda_n}\langle \mathbf{G}(\cdot,\lambda),\mathbf{g}(\cdot)\rangle_H.$$
(4.28)

Making use of (4.27), we may rewrite (4.28) as

$$F(\lambda_n) = \lim_{\lambda\to\lambda_n}\sum_{i=0}^{\infty}\frac{\omega(\lambda)}{\lambda - \lambda_i}\phi_{\lambda_i}(y_0)\frac{\langle \Phi_i(\cdot),\mathbf{g}(\cdot)\rangle_H}{\|\Phi_i(\cdot)\|_H^2} = \omega'(\lambda_n)\phi_{\lambda_n}(y_0)\frac{\langle \Phi_n(\cdot),\mathbf{g}(\cdot)\rangle_H}{\|\Phi_n(\cdot)\|_H^2}.$$
(4.29)

The interchange of the limit and summation is justified by the asymptotic behavior of $\phi_{\lambda_n}(x)$ and $\omega(\lambda)$. If $\phi_{\lambda_n}(y_0) \neq 0$, then (4.29) gives

$$\frac{\overline{\langle \mathbf{g}(\cdot),\Phi_n(\cdot)\rangle_H}}{\|\Phi_n(\cdot)\|_H^2} = \frac{F(\lambda_n)}{\omega'(\lambda_n)\phi_{\lambda_n}(y_0)}.$$
(4.30)

Combining (4.26), (4.30), and (4.18), we obtain (4.16) under the assumption that $\phi_{\lambda_n}(y_0) \neq 0$ for all $n$. If $\phi_{\lambda_n}(y_0) = 0$ for some $n$, the same expansion holds with $F(\lambda_n) = 0$. The convergence properties, as well as the analytic and growth properties, can be established as in Theorem 1. □

Now we give an example to illustrate the sampling transform.

Example

Consider the boundary value problem:

$$\begin{aligned}
&-u'' = \lambda u, \qquad -2 \le x \le 4,\\
&u'(-2) = 0, \qquad \lambda u'(4) - u(4) = 0,\\
&u(0-) - 2u(0+) = 0, \qquad u'(0-) - 2u'(0+) = 0,\\
&2u(2-) - \tfrac{1}{2}u(2+) = 0, \qquad 2u'(2-) - \tfrac{1}{2}u'(2+) = 0.
\end{aligned}$$
(4.31)

This is a special case of the problem (1.1)-(1.7) with $\theta_{-\varepsilon} = 0$ and $\theta_{+\varepsilon} = 2$, i.e., $\theta = 1$ and $\varepsilon = 1$ (so that $0 < \varepsilon < 3$), with $\delta = 2$ and $\gamma = 1/2$. The eigenvalues $\lambda_n$ of the problem (4.31) are the zeros of the function

$$\omega(\lambda) = \cos(6\sqrt{\lambda}) + \lambda\sqrt{\lambda}\sin(6\sqrt{\lambda}).$$
(4.32)

By Theorem 1, the transform

$$F(\lambda) = \int_{-2}^{0} g(x)\cos\bigl(\sqrt{\lambda}(x+2)\bigr)\,dx + 2\int_{0}^{2} g(x)\cos\bigl(\sqrt{\lambda}(x+2)\bigr)\,dx + \frac{1}{2}\int_{2}^{4} g(x)\cos\bigl(\sqrt{\lambda}(x+2)\bigr)\,dx$$

has the following expansion:

$$F(\lambda) = \sum_{n=0}^{\infty} F(\lambda_n)\,\frac{\cos(6\sqrt{\lambda}) + \lambda\sqrt{\lambda}\sin(6\sqrt{\lambda})}{(\lambda-\lambda_n)\Bigl(3\lambda_n\cos(6\sqrt{\lambda_n}) + \dfrac{3(\lambda_n - 2)}{2\sqrt{\lambda_n}}\sin(6\sqrt{\lambda_n})\Bigr)},$$

where $\lambda_n$ are the zeros of (4.32).
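The expansion above is easy to check numerically. The following sketch is our own illustration (the test function $g$, the truncation level, and the test point $\lambda$ are arbitrary choices, not data from the paper): it locates the first zeros of (4.32), evaluates the samples $F(\lambda_n)$ by quadrature, and compares the truncated series with a direct evaluation of the transform.

```python
# Numerical check of the sampling expansion for the example (4.31)-(4.32).
# The test function g, the number of samples, and lam_test are illustrative choices.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

omega = lambda lam: np.cos(6 * np.sqrt(lam)) + lam * np.sqrt(lam) * np.sin(6 * np.sqrt(lam))

def omega_prime(lam):
    # d/d(lambda) of omega, i.e. the bracket in the denominator of the expansion above
    s = np.sqrt(lam)
    return 3 * lam * np.cos(6 * s) + 3 * (lam - 2) / (2 * s) * np.sin(6 * s)

def F_direct(lam, g):
    # the transform F(lambda) with the weights 1, 2, 1/2 of the example
    s = np.sqrt(lam)
    ker = lambda x: g(x) * np.cos(s * (x + 2))
    return (quad(ker, -2, 0, limit=200)[0]
            + 2 * quad(ker, 0, 2, limit=200)[0]
            + 0.5 * quad(ker, 2, 4, limit=200)[0])

def eigenvalues(n_max):
    # zeros of (4.32), scanned in s = sqrt(lambda) where they are roughly pi/6 apart
    f = lambda s: omega(s * s)
    s_grid = np.linspace(0.05, 40.0, 20000)
    vals = f(s_grid)
    eigs = []
    for i in range(len(s_grid) - 1):
        if vals[i] * vals[i + 1] < 0:
            eigs.append(brentq(f, s_grid[i], s_grid[i + 1]) ** 2)
            if len(eigs) == n_max:
                break
    return np.array(eigs)

g = lambda x: np.exp(-x) * (4 - x)         # arbitrary test function in L^2(-2, 4)
lam_n = eigenvalues(60)
samples = np.array([F_direct(l, g) for l in lam_n])

lam_test = 3.0                              # a non-eigenvalue test point
series = np.sum(samples * omega(lam_test) / ((lam_test - lam_n) * omega_prime(lam_n)))
print("truncated series:", series)
print("direct transform:", F_direct(lam_test, g))
```

With 60 samples the two printed values should agree to several significant digits, since the terms of the series decay rapidly; increasing the truncation level improves the agreement further.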