1 Introduction

By a sampling theorem we mean a representation of a function in terms of its values at a discrete set of points. In communication theory, it means the reconstruction of a signal (information) from a discrete set of data. This has several applications, especially in the transmission of information. If the signal is band-limited, the sampling process can be carried out via the celebrated Whittaker-Kotel'nikov-Shannon (WKS) sampling theorem [1–3]. By a band-limited signal with bandwidth $\tau$, $\tau>0$, we mean a function in the Paley-Wiener space

$$B_{\tau}:=\Bigl\{f\ \text{entire}:\ |f(\lambda)|\le C e^{\tau|\lambda|},\ \int_{\mathbb{R}}\bigl|f(\lambda)\bigr|^{2}\,d\lambda<\infty\Bigr\}.$$
(1.1)

The WKS sampling theorem is a fundamental result in information theory. It states that any $f\in B_{\tau}$ can be reconstructed from its sampled values $f(x_{k})$, where $x_{k}=k\pi/\tau$ and $k\in\mathbb{Z}$, by the formula

$$f(x)=\sum_{k\in\mathbb{Z}}f(x_{k})\,\operatorname{sinc}(\tau x/\pi-k),\qquad x\in\mathbb{R},$$
(1.2)

where

$$\operatorname{sinc}(x):=\begin{cases}\dfrac{\sin\pi x}{\pi x}, & x\in\mathbb{R}\setminus\{0\},\\[2pt] 1, & x=0,\end{cases}$$
(1.3)

and the series converges absolutely and uniformly on any finite interval of $\mathbb{R}$. Expansion (1.2) is used in several approximation problems, known collectively as sinc methods; see, e.g., [4–7]. In particular, the sinc method is used to approximate eigenvalues of boundary value problems; see, for example, [8–14]. The sinc function decays slowly at infinity, only as fast as $O(|x|^{-1})$. There have been several attempts to improve this rate of decay, one of the most interesting being to multiply the sinc function in (1.2) by a kernel function; see, e.g., [15–17]. Let $h\in(0,\pi/\tau]$ and $\gamma\in(0,\pi-h\tau)$. Assume that $\Phi\in B_{\gamma}$ with $\Phi(0)=1$; then for $f\in B_{\tau}$ we have the expansion [18]

$$f(x)=\sum_{n=-\infty}^{\infty}f(nh)\,\operatorname{sinc}\bigl(h^{-1}\pi x-n\pi\bigr)\,\Phi\bigl(h^{-1}x-n\bigr).$$
(1.4)
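Before turning to improved kernels, note that the classical expansion (1.2) is easy to test numerically. The following sketch (the test function, step size, and truncation length are our own illustrative choices, not from the cited works) uses $\tau=\pi$, so that $x_{k}=k$ and NumPy's `np.sinc` is exactly the kernel in (1.3):

```python
import numpy as np

def wks_truncated(f, x, K=200):
    # Truncated WKS series (1.2) with tau = pi, so x_k = k and the kernel
    # sinc(tau x/pi - k) is exactly np.sinc(x - k) = sin(pi(x-k))/(pi(x-k)).
    k = np.arange(-K, K + 1)
    return float(np.sum(f(k) * np.sinc(x - k)))

# f(x) = (sin(pi x/2)/(pi x/2))^2 lies in B_pi and is square integrable
f = lambda x: np.sinc(np.asarray(x) / 2.0) ** 2
x0 = 0.3
err = abs(wks_truncated(f, x0) - f(x0))
print(err)  # small, though the terms decay only polynomially
```

The slow $O(|x|^{-1})$ decay of the kernel is what forces the large truncation length $K$ here.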

The speed of convergence of the series in (1.4) is determined by the decay of $|\Phi(x)|$. However, an entire function of exponential type cannot decay as fast as $e^{-c|x|}$ as $|x|\to\infty$ for any positive $c$ [18]. In [19], Qian introduced the following regularized sampling formula. For $h\in(0,\pi/\tau]$, $N\in\mathbb{N}$ and $r>0$, Qian defined the operator [19]

$$(G_{h,N}f)(x)=\sum_{n\in Z_{N}(x)}f(nh)\,S_{n}\bigl(h^{-1}\pi x\bigr)\,G\Bigl(\frac{x-nh}{\sqrt{2}\,rh}\Bigr),\qquad x\in\mathbb{R},$$
(1.5)

where $G(t):=\exp(-t^{2})$ is the Gaussian function, $S_{n}(h^{-1}\pi x):=\operatorname{sinc}(h^{-1}\pi x-n\pi)$, $Z_{N}(x):=\{n\in\mathbb{Z}:|[h^{-1}x]-n|\le N\}$, and $[x]$ denotes the integer part of $x\in\mathbb{R}$; see also [20, 21]. Qian also derived the following error bound. If $f\in B_{\tau}$, $h\in(0,\pi/\tau]$ and $a:=\min\{r(\pi-h\tau),(N-2)/r\}\ge 1$, then [19, 20]

$$\bigl|f(x)-(G_{h,N}f)(x)\bigr|\le\frac{2\sqrt{\tau/\pi}\,\|f\|_{2}}{\pi^{2}a^{2}}\Bigl(2\sqrt{\pi}\,a+\frac{e^{3/2}}{r^{2}}\Bigr)e^{-a^{2}/2},\qquad x\in\mathbb{R}.$$
(1.6)
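A sketch of the operator (1.5) on a band-limited test function. Our reading of the Gaussian factor is $G\bigl((x-nh)/(\sqrt{2}\,rh)\bigr)=\exp\bigl(-(x-nh)^{2}/(2r^{2}h^{2})\bigr)$, and all parameter values are illustrative assumptions:

```python
import numpy as np

def qian_gauss(f, x, h, N, r):
    # Localized expansion (1.5): samples with |[x/h] - n| <= N, each damped
    # by G(t) = exp(-t^2) with t = (x - nh)/(sqrt(2) r h)  (our reading).
    c = int(np.floor(x / h))
    n = np.arange(c - N, c + N + 1)
    gauss = np.exp(-((x - n * h) / (np.sqrt(2.0) * r * h)) ** 2)
    return float(np.sum(f(n * h) * np.sinc(x / h - n) * gauss))

def sinc_local(f, x, h, N):
    # same truncated window without the Gaussian, for comparison
    c = int(np.floor(x / h))
    n = np.arange(c - N, c + N + 1)
    return float(np.sum(f(n * h) * np.sinc(x / h - n)))

f = lambda x: np.sinc(np.asarray(x) / 2.0) ** 2   # band-limited with tau = pi
x0, h, N, r = 0.3, 0.5, 15, 3.0                    # h <= pi/tau = 1
err_gauss = abs(qian_gauss(f, x0, h, N, r) - f(x0))
err_plain = abs(sinc_local(f, x0, h, N) - f(x0))
print(err_gauss, err_plain)  # compare the two truncation errors
```

The point of the Gaussian factor is that only $2N+1$ samples near $x$ are needed while the error decays like $e^{-a^{2}/2}$ in (1.6).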

In [18] Schmeisser and Stenger extended the operator (1.5) to the complex domain $\mathbb{C}$. For $\tau>0$, $h\in(0,\pi/\tau]$ and $\omega:=(\pi-h\tau)/2$, they defined the operator [18]

$$(G_{h,N}f)(z):=\sum_{n\in Z_{N}(z)}f(nh)\,S_{n}\Bigl(\frac{\pi z}{h}\Bigr)\,G\Bigl(\sqrt{\frac{\omega}{N}}\,\frac{z-nh}{h}\Bigr),$$
(1.7)

where $Z_{N}(z):=\{n\in\mathbb{Z}:|[h^{-1}\Re z+1/2]-n|\le N\}$ and $N\in\mathbb{N}$. Note that the summation limits in (1.7) depend on the real part of $z$. Schmeisser and Stenger [18] proved that if $f$ is an entire function such that

$$\bigl|f(\xi+i\eta)\bigr|\le\phi\bigl(|\xi|\bigr)e^{\tau|\eta|},\qquad \xi,\eta\in\mathbb{R},$$
(1.8)

where $\phi$ is a non-decreasing, non-negative function on $[0,\infty)$ and $\tau\ge 0$, then for $h\in(0,\pi/\tau)$, $\omega:=(\pi-h\tau)/2$, $N\in\mathbb{N}$ and $h^{-1}|\Im z|<N$, we have

$$\bigl|f(z)-(G_{h,N}f)(z)\bigr|\le 2\bigl|\sin\bigl(h^{-1}\pi z\bigr)\bigr|\,\phi\bigl(|\Re z|+h(N+1)\bigr)\,\frac{e^{-\omega N}}{\sqrt{\pi\omega N}}\,\beta_{N}\bigl(h^{-1}\Im z\bigr),\qquad z\in\mathbb{C},$$
(1.9)

where

$$\beta_{N}(t):=\cosh(2\omega t)+\frac{2e^{\omega t^{2}/N}}{\sqrt{\pi\omega N}\,\bigl[1-(t/N)^{2}\bigr]}+\frac{1}{2}\biggl[\frac{e^{2\omega t}}{e^{2\pi(N-t)}-1}+\frac{e^{-2\omega t}}{e^{2\pi(N+t)}-1}\biggr].$$
(1.10)
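For real arguments only $\beta_{N}(0)$ is needed later. The following sketch (parameter values are illustrative) evaluates the factor $e^{-\omega N}\beta_{N}(t)/\sqrt{\pi\omega N}$ appearing in (1.9) and shows its rapid decay in $N$:

```python
import math

def beta_N(t, omega, N):
    # beta_N(t) as in (1.10)
    return (math.cosh(2 * omega * t)
            + 2 * math.exp(omega * t * t / N)
              / (math.sqrt(math.pi * omega * N) * (1 - (t / N) ** 2))
            + 0.5 * (math.exp(2 * omega * t) / math.expm1(2 * math.pi * (N - t))
                     + math.exp(-2 * omega * t) / math.expm1(2 * math.pi * (N + t))))

h, tau = 0.5, math.pi                  # h in (0, pi/tau)
omega = (math.pi - h * tau) / 2.0
bounds = [math.exp(-omega * N) / math.sqrt(math.pi * omega * N) * beta_N(0.0, omega, N)
          for N in (5, 10, 20)]
print(bounds)  # decreases roughly like exp(-omega N)
```

This exponential decay in $N$ is what makes the sinc-Gaussian variant so much faster than the polynomial rates of the classical sinc method.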

The amplitude error arises when the exact values $f(nh)$ in (1.7) are replaced by approximations $\tilde f(nh)$. We assume that the $\tilde f(nh)$ are close to $f(nh)$, i.e., there is a sufficiently small $\varepsilon>0$ such that

$$\sup_{n\in Z_{N}(z)}\bigl|f(nh)-\tilde f(nh)\bigr|<\varepsilon.$$
(1.11)

Let $h\in(0,\pi/\tau)$, $\omega:=(\pi-h\tau)/2$ and $N\in\mathbb{N}$ be fixed. The authors of [22] proved that if (1.11) holds, then for $h^{-1}|\Im z|<N$ we have

$$\bigl|(G_{h,N}f)(z)-(G_{h,N}\tilde f)(z)\bigr|\le A_{\varepsilon,N}(z),$$
(1.12)

where

$$A_{\varepsilon,N}(z)=2\varepsilon e^{\omega/4}\sqrt{N}\Bigl(1+\sqrt{\frac{N}{\pi\omega}}\Bigr)\exp\bigl((\omega+\pi)h^{-1}|\Im z|\bigr).$$
(1.13)

It is well known that many topics in mathematical physics require the investigation of the eigenvalues and eigenfunctions of Sturm-Liouville type boundary value problems. Sturmian theory is therefore one of the most active and extensively developing fields of theoretical and applied mathematics. In recent years, highly important results in this field have been obtained for the case when the eigenparameter appears not only in the differential equation but also in the boundary conditions. The literature on such results is voluminous; we refer to [23–27] and the bibliographies cited therein. In particular, [24, 26, 28, 29] contain many references to problems in physics and mechanics. Our task is to use formula (1.7) to compute numerically the eigenvalues of the differential equation

$$-y''(x,\mu)+q(x)y(x,\mu)=\mu^{2}y(x,\mu),\qquad x\in[-1,0)\cup(0,1],$$
(1.14)

with boundary conditions

$$L_{1}(y):=\bigl(\alpha_{1}\mu^{2}-\alpha_{1}'\bigr)y(-1,\mu)-\bigl(\alpha_{2}\mu^{2}-\alpha_{2}'\bigr)y'(-1,\mu)=0,$$
(1.15)
$$L_{2}(y):=\bigl(\beta_{1}\mu^{2}+\beta_{1}'\bigr)y(1,\mu)-\bigl(\beta_{2}\mu^{2}+\beta_{2}'\bigr)y'(1,\mu)=0,$$
(1.16)

and transmission conditions

$$L_{3}(y):=\gamma_{1}y\bigl(0^{-},\mu\bigr)-\delta_{1}y\bigl(0^{+},\mu\bigr)=0,$$
(1.17)
$$L_{4}(y):=\gamma_{2}y'\bigl(0^{-},\mu\bigr)-\delta_{2}y'\bigl(0^{+},\mu\bigr)=0,$$
(1.18)

where $\mu$ is a complex spectral parameter; $q(x)$ is a given real-valued function which is continuous on $[-1,0)$ and $(0,1]$ and has finite limits $q(0^{\pm})=\lim_{x\to 0^{\pm}}q(x)$; $\gamma_{i},\delta_{i},\alpha_{i},\beta_{i},\alpha_{i}',\beta_{i}'$ ($i=1,2$) are real numbers; $\gamma_{i}\neq 0$, $\delta_{i}\neq 0$ ($i=1,2$); $\gamma_{1}\gamma_{2}=\delta_{1}\delta_{2}$; and

$$\det\begin{pmatrix}\alpha_{1}&\alpha_{1}'\\ \alpha_{2}&\alpha_{2}'\end{pmatrix}>0,\qquad \det\begin{pmatrix}\beta_{1}&\beta_{1}'\\ \beta_{2}&\beta_{2}'\end{pmatrix}>0.$$
(1.19)

The eigenvalue problem (1.14)-(1.18) with $(\alpha_{1},\alpha_{2})\neq(0,0)\neq(\beta_{1},\beta_{2})$ is a Sturm-Liouville problem which contains the eigenparameter $\mu$ in two boundary conditions, in addition to an internal point of discontinuity. In [30], Tharwat proved that the eigenvalue problem (1.14)-(1.18) has a denumerable set of real and simple eigenvalues, using techniques similar to those established in [23, 24, 31], where sampling theorems have also been established. Tharwat et al. [14] computed the eigenvalues of problem (1.14)-(1.18) using the sinc method. The basic idea of the sinc method is as follows. The eigenvalues are characterized as the zeros of an analytic function $F(\mu)$ which can be written in the form $F(\mu)=K(\mu)+U(\mu)$, where $K(\mu)$ (the known part) is the function for the case $q\equiv 0$. The ingenuity of the approach lies in choosing $F(\mu)$ so that the unknown part $U(\mu)$ belongs to $B_{\tau}$ and can be approximated by the WKS sampling theorem once its values at some equally spaced points are known; see [8–14].

Our goal in this paper is to improve the results of Tharwat et al. [14] under weaker conditions. We use the sinc-Gaussian sampling formula (1.7) to compute the eigenvalues of (1.14)-(1.18) numerically. As expected, the new method reduces the error bounds remarkably; see the examples at the end of this paper. We use the same splitting idea, but here the unknown part $U(\mu)$ is an entire function of exponential type satisfying (1.8); that is, $U(\mu)$ need not be an $L^{2}$-function. We then approximate $U(\mu)$ using (1.7) and obtain better results. We would like to mention that only a few papers compute eigenvalues by the sinc-Gaussian method; see [22, 32, 33]. In Section 2 we derive the sinc-Gaussian technique for computing the eigenvalues of (1.14)-(1.18), with error estimates. The last section contains some illustrative examples.

2 Treatment of the eigenvalue problem (1.14)-(1.18)

In this section we derive approximate values of the eigenvalues of the eigenvalue problem (1.14)-(1.18). Recall that the problem (1.14)-(1.18) has a denumerable set of real and simple eigenvalues, cf. [30]. Let

$$y(x,\mu)=\begin{cases}y_{1}(x,\mu), & x\in[-1,0),\\ y_{2}(x,\mu), & x\in(0,1],\end{cases}$$

denote the solution of (1.14) satisfying the following initial conditions:

$$\begin{pmatrix}y_{1}(-1,\mu) & y_{2}(0^{+},\mu)\\ y_{1}'(-1,\mu) & y_{2}'(0^{+},\mu)\end{pmatrix}=\begin{pmatrix}\mu^{2}\alpha_{2}-\alpha_{2}' & \dfrac{\gamma_{1}}{\delta_{1}}y_{1}(0^{-},\mu)\\ \mu^{2}\alpha_{1}-\alpha_{1}' & \dfrac{\gamma_{2}}{\delta_{2}}y_{1}'(0^{-},\mu)\end{pmatrix}.$$
(2.1)

Since $y(\cdot,\mu)$ satisfies (1.15), (1.17) and (1.18), the eigenvalues of problem (1.14)-(1.18) are the zeros of the characteristic determinant, cf. [30],

$$\Omega(\mu):=\bigl(\beta_{1}\mu^{2}+\beta_{1}'\bigr)y_{2}(1,\mu)-\bigl(\beta_{2}\mu^{2}+\beta_{2}'\bigr)y_{2}'(1,\mu).$$
(2.2)

According to [30] (see also [34–40]), the function $\Omega(\mu)$ is an entire function of $\mu$ whose zeros are real and simple. We aim to approximate $\Omega(\mu)$, and hence its zeros, i.e., the eigenvalues, by using (1.7). The idea is to split $\Omega(\mu)$ into two parts, one known and the other unknown, where the unknown part is an entire function of exponential type satisfying (1.8). We then approximate the unknown part using (1.7) to obtain an approximation of $\Omega(\mu)$, and then compute the approximate zeros. By the method of variation of constants, the solution $y(\cdot,\mu)$ satisfies the Volterra integral equations, cf. [30],

$$y_{1}(x,\mu)=\bigl(\mu^{2}\alpha_{2}-\alpha_{2}'\bigr)\cos\bigl[\mu(x+1)\bigr]+\bigl(\mu^{2}\alpha_{1}-\alpha_{1}'\bigr)\frac{\sin[\mu(x+1)]}{\mu}+(T_{1}y_{1})(x,\mu),$$
(2.3)
$$y_{2}(x,\mu)=\frac{\gamma_{1}}{\delta_{1}}y_{1}\bigl(0^{-},\mu\bigr)\cos[\mu x]+\frac{\gamma_{2}}{\delta_{2}}y_{1}'\bigl(0^{-},\mu\bigr)\frac{\sin[\mu x]}{\mu}+(T_{2}y_{2})(x,\mu),$$
(2.4)

where T 1 and T 2 are the Volterra operators

$$(T_{1}y_{1})(x,\mu):=\int_{-1}^{x}\frac{\sin[\mu(x-t)]}{\mu}q(t)y_{1}(t,\mu)\,dt,$$
(2.5)
$$(T_{2}y_{2})(x,\mu):=\int_{0}^{x}\frac{\sin[\mu(x-t)]}{\mu}q(t)y_{2}(t,\mu)\,dt.$$
(2.6)

Differentiating (2.3) and (2.4), we obtain

$$y_{1}'(x,\mu)=-\bigl(\mu^{2}\alpha_{2}-\alpha_{2}'\bigr)\mu\sin\bigl[\mu(x+1)\bigr]+\bigl(\mu^{2}\alpha_{1}-\alpha_{1}'\bigr)\cos\bigl[\mu(x+1)\bigr]+(\widetilde{T}_{1}y_{1})(x,\mu),$$
(2.7)
$$y_{2}'(x,\mu)=-\frac{\gamma_{1}}{\delta_{1}}\mu y_{1}\bigl(0^{-},\mu\bigr)\sin[\mu x]+\frac{\gamma_{2}}{\delta_{2}}y_{1}'\bigl(0^{-},\mu\bigr)\cos[\mu x]+(\widetilde{T}_{2}y_{2})(x,\mu),$$
(2.8)

where T ˜ 1 and T ˜ 2 are the Volterra-type integral operators

$$(\widetilde{T}_{1}y_{1})(x,\mu):=\int_{-1}^{x}\cos\bigl[\mu(x-t)\bigr]q(t)y_{1}(t,\mu)\,dt,$$
(2.9)
$$(\widetilde{T}_{2}y_{2})(x,\mu):=\int_{0}^{x}\cos\bigl[\mu(x-t)\bigr]q(t)y_{2}(t,\mu)\,dt.$$
(2.10)

Define $\vartheta_{i}(\cdot,\mu)$ and $\widetilde{\vartheta}_{i}(\cdot,\mu)$, $i=1,2$, by

$$\vartheta_{i}(x,\mu):=(T_{i}y_{i})(x,\mu),\qquad \widetilde{\vartheta}_{i}(x,\mu):=(\widetilde{T}_{i}y_{i})(x,\mu).$$
(2.11)

In the following, we make use of the known estimates [41]

$$|\cos z|\le e^{|\Im z|},\qquad \Bigl|\frac{\sin z}{z}\Bigr|\le\frac{c_{0}}{1+|z|}e^{|\Im z|},$$
(2.12)

where $c_{0}$ is some constant (we may take $c_{0}\approx 1.72$, cf. [41]). For convenience, we define the constants

$$\begin{aligned}&q_{1}:=\int_{-1}^{0}\bigl|q(t)\bigr|\,dt,\qquad q_{2}:=\int_{0}^{1}\bigl|q(t)\bigr|\,dt,\qquad c_{1}:=\max\bigl(|\alpha_{1}|,|\alpha_{2}|,|\alpha_{1}'|,|\alpha_{2}'|\bigr),\\ &c_{2}:=\exp(c_{0}q_{1}),\qquad c_{3}:=1+c_{0}c_{2}q_{1},\qquad c_{4}:=(1+c_{0})\biggl[\frac{|\gamma_{1}|}{|\delta_{1}|}c_{3}+\frac{|\gamma_{2}|}{|\delta_{2}|}c_{0}(1+c_{3}q_{1})\biggr],\\ &c_{5}:=\exp(c_{0}q_{2}),\qquad c_{6}:=1+c_{0}q_{2}c_{5}.\end{aligned}$$
(2.13)

As in [14], we split Ω(μ) into two parts via

Ω(μ):=K(μ)+U(μ),
(2.14)

where K(μ) is the known part

$$\begin{aligned}K(\mu):={}&\bigl(\beta_{1}\mu^{2}+\beta_{1}'\bigr)\biggl[\bigl(\mu^{2}\alpha_{2}-\alpha_{2}'\bigr)\Bigl(\frac{\gamma_{1}}{\delta_{1}}\cos^{2}\mu-\frac{\gamma_{2}}{\delta_{2}}\sin^{2}\mu\Bigr)+\bigl(\mu^{2}\alpha_{1}-\alpha_{1}'\bigr)\Bigl(\frac{\gamma_{1}}{\delta_{1}}+\frac{\gamma_{2}}{\delta_{2}}\Bigr)\frac{\cos\mu\sin\mu}{\mu}\biggr]\\ &{}+\bigl(\beta_{2}\mu^{2}+\beta_{2}'\bigr)\biggl[\bigl(\mu^{2}\alpha_{2}-\alpha_{2}'\bigr)\Bigl(\frac{\gamma_{1}}{\delta_{1}}+\frac{\gamma_{2}}{\delta_{2}}\Bigr)\mu\cos\mu\sin\mu-\bigl(\mu^{2}\alpha_{1}-\alpha_{1}'\bigr)\Bigl(\frac{\gamma_{2}}{\delta_{2}}\cos^{2}\mu-\frac{\gamma_{1}}{\delta_{1}}\sin^{2}\mu\Bigr)\biggr]\end{aligned}$$
(2.15)

and U(μ) is the unknown one

$$\begin{aligned}U(\mu):={}&\frac{\gamma_{1}}{\delta_{1}}\bigl[\bigl(\beta_{1}\mu^{2}+\beta_{1}'\bigr)\cos\mu+\bigl(\beta_{2}\mu^{2}+\beta_{2}'\bigr)\mu\sin\mu\bigr]\vartheta_{1}(0,\mu)+\bigl(\beta_{1}\mu^{2}+\beta_{1}'\bigr)\vartheta_{2}(1,\mu)\\ &{}+\frac{\gamma_{2}}{\delta_{2}}\biggl[\bigl(\beta_{1}\mu^{2}+\beta_{1}'\bigr)\frac{\sin\mu}{\mu}-\bigl(\beta_{2}\mu^{2}+\beta_{2}'\bigr)\cos\mu\biggr]\widetilde{\vartheta}_{1}(0,\mu)-\bigl(\beta_{2}\mu^{2}+\beta_{2}'\bigr)\widetilde{\vartheta}_{2}(1,\mu).\end{aligned}$$
(2.16)

The function $U(\mu)$ is entire in $\mu$ and satisfies, cf. [14],

$$\bigl|U(\mu)\bigr|\le\phi\bigl(|\mu|\bigr)e^{2|\Im\mu|},\qquad \mu\in\mathbb{C},$$
(2.17)

where

$$\phi\bigl(|\mu|\bigr):=M\bigl(1+|\mu|^{2}\bigr)^{2},$$
(2.18)

and

$$M:=c_{1}c(1+c_{0})^{2}q_{1}\biggl[c_{0}c_{2}\frac{|\gamma_{1}|}{|\delta_{1}|}+c_{3}\frac{|\gamma_{2}|}{|\delta_{2}|}\biggr]+c_{1}c_{4}cq_{2}(c_{6}+c_{0}c_{5}),\qquad c:=\max\bigl\{|\beta_{1}|,|\beta_{2}|,|\beta_{1}'|,|\beta_{2}'|\bigr\}.$$
(2.19)

Thus $U(\mu)$ is an entire function of exponential type $\tau=2$. In the following we let $\mu\in\mathbb{R}$, since all eigenvalues are real. We now approximate the function $U(\mu)$ using the operator (1.7), with $h\in(0,\pi/2)$ and $\omega:=(\pi-2h)/2$; then, from (1.9), we obtain

$$\bigl|U(\mu)-(G_{h,N}U)(\mu)\bigr|\le T_{h,N}(\mu),$$
(2.20)

where

$$T_{h,N}(\mu):=2\bigl|\sin\bigl(h^{-1}\pi\mu\bigr)\bigr|\,\phi\bigl(|\mu|+h(N+1)\bigr)\,\frac{e^{-\omega N}}{\sqrt{\pi\omega N}}\,\beta_{N}(0),\qquad \mu\in\mathbb{R}.$$
(2.21)

The samples $U(nh)=\Omega(nh)-K(nh)$, $n\in Z_{N}(\mu)$, cannot in general be computed explicitly. We approximate them numerically by solving the initial value problems defined by (1.14) and (2.1), obtaining approximate values $\widetilde{U}(nh)$, $n\in Z_{N}(\mu)$, i.e., $\widetilde{U}(nh)=\widetilde{\Omega}(nh)-K(nh)$. Here we use a computer algebra system, Mathematica, to obtain the approximate solutions with the required accuracy. A separate study of the effect of different numerical schemes and of the computational cost would nevertheless be interesting. Accordingly, we have the explicit expansion

$$(G_{h,N}\widetilde{U})(\mu):=\sum_{n\in Z_{N}(\mu)}\widetilde{U}(nh)\,S_{n}\Bigl(\frac{\pi\mu}{h}\Bigr)\,G\Bigl(\sqrt{\frac{\omega}{N}}\,\frac{\mu-nh}{h}\Bigr).$$
(2.22)

Therefore we get, cf. (1.12),

$$\bigl|(G_{h,N}U)(\mu)-(G_{h,N}\widetilde{U})(\mu)\bigr|\le A_{\varepsilon,N}(0),\qquad \mu\in\mathbb{R}.$$
(2.23)
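The operator in (2.22) is straightforward to evaluate for real $\mu$. A sketch (tested on the known entire function $U(\mu)=\sin 2\mu$ of exponential type $2$ rather than on the actual unknown part, and with illustrative parameter values):

```python
import numpy as np

def sinc_gauss(u, mu, h, N):
    # (G_{h,N} u)(mu) for real mu, cf. (1.7)/(2.22), with tau = 2:
    # window Z_N(mu), sinc kernel, Gaussian multiplier exp(-(omega/N) t^2).
    omega = (np.pi - 2.0 * h) / 2.0
    c = int(np.floor(mu / h + 0.5))
    n = np.arange(c - N, c + N + 1)
    t = mu / h - n
    return float(np.sum(u(n * h) * np.sinc(t) * np.exp(-(omega / N) * t**2)))

U = lambda x: np.sin(2.0 * np.asarray(x))   # entire, of exponential type 2
h, N, mu0 = 0.5, 20, 0.7                     # h in (0, pi/2)
err = abs(sinc_gauss(U, mu0, h, N) - U(mu0))
print(err)
```

The observed error is of the order of the factor $e^{-\omega N}/\sqrt{\pi\omega N}$ in (2.21), i.e., already negligible for moderate $N$.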

Now let $\widetilde{\Omega}_{N}(\mu):=K(\mu)+(G_{h,N}\widetilde{U})(\mu)$. From (2.20) and (2.23) we obtain

$$\bigl|\Omega(\mu)-\widetilde{\Omega}_{N}(\mu)\bigr|\le T_{h,N}(\mu)+A_{\varepsilon,N}(0),\qquad \mu\in\mathbb{R}.$$
(2.24)

Let $(\mu^{*})^{2}$ be an eigenvalue and $\mu_{N}$ its desired approximation, i.e., $\Omega(\mu^{*})=0$ and $\widetilde{\Omega}_{N}(\mu_{N})=0$. From (2.24) we have $|\widetilde{\Omega}_{N}(\mu^{*})|\le T_{h,N}(\mu^{*})+A_{\varepsilon,N}(0)$. Define the curves

$$a_{\pm}(\mu)=\widetilde{\Omega}_{N}(\mu)\pm\bigl(T_{h,N}(\mu)+A_{\varepsilon,N}(0)\bigr).$$
(2.25)

The curves $a_{+}(\mu)$ and $a_{-}(\mu)$ trap the curve of $\Omega(\mu)$ for suitably large $N$. Hence the closure interval is determined by solving $a_{\pm}(\mu)=0$, which gives the interval

$$I_{\varepsilon,N}:=[a_{-},a_{+}].$$

It is worth mentioning that the simplicity of the eigenvalues guarantees the existence of approximate eigenvalues, i.e., of the $\mu_{N}$ for which $\widetilde{\Omega}_{N}(\mu_{N})=0$. Next we estimate the error $|\mu^{*}-\mu_{N}|$ for the eigenvalue $\mu^{*}$.

Theorem 2.1 Let $(\mu^{*})^{2}$ be an eigenvalue of (1.14)-(1.18) and $\mu_{N}$ its approximation. Then, for $\mu\in\mathbb{R}$, we have the estimate

$$\bigl|\mu^{*}-\mu_{N}\bigr|\le\frac{T_{h,N}(\mu_{N})+A_{\varepsilon,N}(0)}{\inf_{\zeta\in I_{\varepsilon,N}}|\Omega'(\zeta)|},$$
(2.26)

where the interval I ε , N is defined above.

Proof Replacing μ by μ N in (2.24), we obtain

$$\bigl|\Omega(\mu_{N})-\Omega\bigl(\mu^{*}\bigr)\bigr|\le T_{h,N}(\mu_{N})+A_{\varepsilon,N}(0),$$
(2.27)

where we have used $\widetilde{\Omega}_{N}(\mu_{N})=\Omega(\mu^{*})=0$. The mean value theorem yields that, for some $\zeta\in J_{\varepsilon,N}:=[\min(\mu^{*},\mu_{N}),\max(\mu^{*},\mu_{N})]$,

$$\bigl|\bigl(\mu^{*}-\mu_{N}\bigr)\Omega'(\zeta)\bigr|\le T_{h,N}(\mu_{N})+A_{\varepsilon,N}(0),\qquad \zeta\in J_{\varepsilon,N}\subset I_{\varepsilon,N}.$$
(2.28)

Since $\mu^{*}$ is a simple zero of $\Omega$ and $N$ is sufficiently large, $\inf_{\zeta\in I_{\varepsilon,N}}|\Omega'(\zeta)|>0$, and (2.26) follows. □
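In practice $\mu_{N}$ and the endpoints of $I_{\varepsilon,N}$ are located by root finding. A sketch using bisection, with a toy stand-in for $\widetilde{\Omega}_{N}$ and a constant stand-in for $T_{h,N}+A_{\varepsilon,N}(0)$ (both purely illustrative):

```python
import math

def bisect(g, a, b, tol=1e-12):
    # standard bisection; assumes g(a) and g(b) have opposite signs
    fa = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * g(m) <= 0:
            b = m
        else:
            a, fa = m, g(m)
    return 0.5 * (a + b)

omega_tilde = lambda mu: math.sin(2.0 * mu)   # toy stand-in for Omega_N
bound = 0.05                                   # toy T_{h,N} + A_{eps,N}(0)

mu_N = bisect(omega_tilde, 1.2, 1.9)                       # root near pi/2
a_minus = bisect(lambda m: omega_tilde(m) + bound, 1.2, 1.9)
a_plus = bisect(lambda m: omega_tilde(m) - bound, 1.2, 1.9)
lo, hi = min(a_minus, a_plus), max(a_minus, a_plus)
print(mu_N, lo, hi)  # mu_N lies inside the enclosure [lo, hi]
```

The two auxiliary roots play the role of $a_{-}$ and $a_{+}$ above, and the exact zero is guaranteed to lie between them whenever the bound is valid.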

3 Examples

This section includes two examples illustrating the sinc-Gaussian method. Both examples were computed in [14] with the classical sinc method, and it is clearly seen that the sinc-Gaussian method gives remarkably better results. In these two examples we indicate the effect of the amplitude error on the method by determining enclosure intervals for different values of $\varepsilon$, and the effect of $N$ and $h$ by several choices. We would like to mention that Mathematica was used to obtain the exact values in these examples, where the eigenvalues cannot be computed in closed form. Mathematica was also used in rounding the exact eigenvalues, which are square roots. Each example is accompanied by figures that illustrate the procedure near some of the approximated eigenvalues. More explanations are given below.

Example 1 Consider the boundary value problem

$$-y''(x,\mu)+q(x)y(x,\mu)=\mu^{2}y(x,\mu),\qquad x\in[-1,0)\cup(0,1],$$
(3.1)
$$\mu^{2}y(-1,\mu)+y'(-1,\mu)=0,\qquad \mu^{2}y(1,\mu)-y'(1,\mu)=0,$$
(3.2)
$$y\bigl(0^{-},\mu\bigr)-y\bigl(0^{+},\mu\bigr)=0,\qquad y'\bigl(0^{-},\mu\bigr)-y'\bigl(0^{+},\mu\bigr)=0.$$
(3.3)

Here $\alpha_{1}=\beta_{1}=\alpha_{2}'=\beta_{2}'=1$, $\alpha_{1}'=\alpha_{2}=\beta_{1}'=\beta_{2}=0$, $\gamma_{1}=\delta_{1}=2$, $\gamma_{2}=\delta_{2}=\frac{1}{2}$ and

$$q(x)=\begin{cases}-1, & x\in[-1,0),\\ -2, & x\in(0,1].\end{cases}$$
(3.4)

The characteristic function is

$$\begin{aligned}\Omega(\mu)={}&\frac{1}{\sqrt{1+\mu^{2}}\sqrt{2+\mu^{2}}}\biggl[\sin\sqrt{1+\mu^{2}}\Bigl(\sqrt{2+\mu^{2}}\bigl(\mu^{4}-\mu^{2}-1\bigr)\cos\sqrt{2+\mu^{2}}+\mu^{2}\bigl(3+2\mu^{2}\bigr)\sin\sqrt{2+\mu^{2}}\Bigr)\\ &{}-\sqrt{1+\mu^{2}}\cos\sqrt{1+\mu^{2}}\Bigl(2\mu^{2}\sqrt{2+\mu^{2}}\cos\sqrt{2+\mu^{2}}-\bigl(\mu^{4}-\mu^{2}-2\bigr)\sin\sqrt{2+\mu^{2}}\Bigr)\biggr].\end{aligned}$$
(3.5)

The function K(μ) will be

$$K(\mu)=\mu\bigl(1+\mu^{2}\bigr)\sin 2\mu.$$
(3.6)

As is clearly seen, the eigenvalues cannot be computed explicitly. Tables 1, 2, 3 illustrate the application of our technique to this problem and the effect of $\varepsilon$. By exact we mean the zeros of $\Omega(\mu)$ computed by Mathematica.

Table 1 The approximation $\mu_{k,N}$ and the exact solution $\mu_{k}$ for different choices of $h$ and $N$
Table 2 Absolute error $|\mu_{k}-\mu_{k,N}|$
Table 3 For $N=20$ and $h=0.1$, the exact solutions $\mu_{k}$ all lie inside the interval $[a_{-},a_{+}]$ for different values of $\varepsilon$
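Since some of the boundary coefficient signs above are our reconstruction, the following sketch is only meant to illustrate how $\Omega(\mu)$ for this example can be evaluated by direct shooting and how its zeros are localized; it does not attempt to reproduce Table 1:

```python
import numpy as np

def shoot_omega(mu, steps=150):
    # Omega(mu) for our reading of Example 1: q = -1 on [-1,0), q = -2 on (0,1],
    # y(-1) = -1, y'(-1) = mu^2 (cf. (2.1)), continuity at x = 0,
    # Omega(mu) = mu^2 y(1) - y'(1).  All signs follow our reconstruction.
    def rk4(qc, u, x0, x1):
        hs = (x1 - x0) / steps
        F = lambda v: np.array([v[1], (qc - mu**2) * v[0]])
        for _ in range(steps):
            k1 = F(u); k2 = F(u + hs*k1/2); k3 = F(u + hs*k2/2); k4 = F(u + hs*k3)
            u = u + hs*(k1 + 2*k2 + 2*k3 + k4)/6
        return u
    u = rk4(-1.0, np.array([-1.0, mu**2]), -1.0, 0.0)
    u = rk4(-2.0, u, 0.0, 1.0)
    return float(mu**2 * u[0] - u[1])

# locate sign changes of Omega on (0, 5] and refine them by bisection
grid = np.linspace(0.1, 5.0, 120)
vals = [shoot_omega(m) for m in grid]
roots = []
for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if fa * fb < 0:
        for _ in range(30):
            m = 0.5 * (a + b)
            fm = shoot_omega(m)
            if fa * fm <= 0:
                b = m
            else:
                a, fa = m, fm
        roots.append(0.5 * (a + b))
print(roots)  # approximate square roots of the eigenvalues
```

In the paper's method these grid evaluations are replaced by the far cheaper expansion $\widetilde{\Omega}_{N}=K+G_{h,N}\widetilde{U}$, which needs shooting only at the sample points $nh$.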

Figures 1 and 2 illustrate the enclosure intervals dominating $\mu_{1}$ for $N=20$, $h=0.1$ and $\varepsilon=10^{-2}$, $\varepsilon=10^{-5}$, respectively. The middle curve represents $\Omega(\mu)$, while the upper and lower curves represent $a_{+}(\mu)$ and $a_{-}(\mu)$, respectively. We notice that for $\varepsilon=10^{-5}$ the three curves are almost identical. Similarly, Figures 3 and 4 illustrate the enclosure intervals dominating $\mu_{4}$ for $h=0.1$, $N=20$ and $\varepsilon=10^{-2}$, $\varepsilon=10^{-5}$, respectively.

Figure 1 The enclosure interval dominating $\mu_{1}$ for $h=0.1$, $N=20$ and $\varepsilon=10^{-2}$.

Figure 2 The enclosure interval dominating $\mu_{1}$ for $h=0.1$, $N=20$ and $\varepsilon=10^{-5}$.

Figure 3 The enclosure interval dominating $\mu_{4}$ for $h=0.1$, $N=20$ and $\varepsilon=10^{-2}$.

Figure 4 The enclosure interval dominating $\mu_{4}$ for $h=0.1$, $N=20$ and $\varepsilon=10^{-5}$.

Example 2 Consider the boundary value problem

$$-y''(x,\mu)+q(x)y(x,\mu)=\mu^{2}y(x,\mu),\qquad x\in[-1,0)\cup(0,1],$$
(3.7)
$$y(-1,\mu)+\mu^{2}y'(-1,\mu)=0,\qquad y(1,\mu)+\mu^{2}y'(1,\mu)=0,$$
(3.8)
$$y\bigl(0^{-},\mu\bigr)-y\bigl(0^{+},\mu\bigr)=0,\qquad y'\bigl(0^{-},\mu\bigr)-y'\bigl(0^{+},\mu\bigr)=0,$$
(3.9)

where $\alpha_{1}'=-1$, $\beta_{1}'=1$, $\alpha_{2}=\beta_{2}=-1$, $\alpha_{1}=\alpha_{2}'=\beta_{1}=\beta_{2}'=0$, $\gamma_{1}=\delta_{1}=3$, $\gamma_{2}=\delta_{2}=\frac{1}{3}$ and

$$q(x)=\begin{cases}-2, & x\in[-1,0),\\ x, & x\in(0,1].\end{cases}$$
(3.10)

The function K(μ) will be

$$K(\mu)=\bigl(1+\mu^{6}\bigr)\frac{\sin 2\mu}{\mu}.$$
(3.11)

The characteristic determinant of the problem is

$$\begin{aligned}\Omega(\mu)={}&\frac{\pi}{\sqrt{2+\mu^{2}}}\biggl[\bigl(\operatorname{Bi}\bigl(1-\mu^{2}\bigr)+\mu^{2}\operatorname{Bi}'\bigl(1-\mu^{2}\bigr)\bigr)\Bigl(\operatorname{Ai}'\bigl(-\mu^{2}\bigr)\bigl(\mu^{2}\sqrt{2+\mu^{2}}\cos\sqrt{2+\mu^{2}}-\sin\sqrt{2+\mu^{2}}\bigr)\\ &\qquad{}+\operatorname{Ai}\bigl(-\mu^{2}\bigr)\bigl(\sqrt{2+\mu^{2}}\cos\sqrt{2+\mu^{2}}+\mu^{2}\bigl(2+\mu^{2}\bigr)\sin\sqrt{2+\mu^{2}}\bigr)\Bigr)\\ &{}+\bigl(\operatorname{Ai}\bigl(1-\mu^{2}\bigr)+\mu^{2}\operatorname{Ai}'\bigl(1-\mu^{2}\bigr)\bigr)\Bigl(\operatorname{Bi}'\bigl(-\mu^{2}\bigr)\bigl(\sin\sqrt{2+\mu^{2}}-\mu^{2}\sqrt{2+\mu^{2}}\cos\sqrt{2+\mu^{2}}\bigr)\\ &\qquad{}-\operatorname{Bi}\bigl(-\mu^{2}\bigr)\bigl(\sqrt{2+\mu^{2}}\cos\sqrt{2+\mu^{2}}+\mu^{2}\bigl(2+\mu^{2}\bigr)\sin\sqrt{2+\mu^{2}}\bigr)\Bigr)\biggr],\end{aligned}$$
(3.12)

where $\operatorname{Ai}(z)$ and $\operatorname{Bi}(z)$ are the Airy functions, and $\operatorname{Ai}'(z)$ and $\operatorname{Bi}'(z)$ are their derivatives. As in the previous example, the three tables (Tables 4, 5, 6) illustrate the application of our technique to this problem and the effect of $\varepsilon$.

Table 4 The approximation $\mu_{k,N}$ and the exact solution $\mu_{k}$ for different choices of $h$ and $N$
Table 5 Absolute error $|\mu_{k}-\mu_{k,N}|$
Table 6 For $N=40$ and $h=0.3$, the exact solutions $\mu_{k}$ all lie inside the interval $[a_{-},a_{+}]$ for different values of $\varepsilon$

Here Figures 5, 6 and 7, 8 illustrate the enclosure intervals dominating $\mu_{2}$ and $\mu_{3}$ for $h=0.3$, $N=40$ and $\varepsilon=10^{-2}$, $\varepsilon=10^{-5}$, respectively.

Figure 5 The enclosure interval dominating $\mu_{2}$ for $h=0.3$, $N=40$ and $\varepsilon=10^{-2}$.

Figure 6 The enclosure interval dominating $\mu_{2}$ for $h=0.3$, $N=40$ and $\varepsilon=10^{-5}$.

Figure 7 The enclosure interval dominating $\mu_{3}$ for $h=0.3$, $N=40$ and $\varepsilon=10^{-2}$.

Figure 8 The enclosure interval dominating $\mu_{3}$ for $h=0.3$, $N=40$ and $\varepsilon=10^{-5}$.

4 Conclusion

With a simple analysis, and with values of the solutions of initial value problems computed at only a few values of the eigenparameter, we have computed the eigenvalues of discontinuous Sturm-Liouville problems containing an eigenparameter that appears linearly in two boundary conditions, with an estimated error. The proposed method is a shooting procedure: because of the interior discontinuity, the problem is reformulated as two initial value problems of size two, and a miss-distance, whose roots are the eigenvalues to be computed, is defined at the right end of the interval of integration. The unknown part $U(\mu)$ of the miss-distance can be written in terms of an entire function of exponential type. We therefore approximate this term by a truncated cardinal series whose sampling values are obtained by numerically solving suitable initial value problems. Finally, in Section 3 we presented two instructive examples. The computations show that, compared with the classical sampling expansion in [14], the variant with the Gaussian multiplier provides a striking improvement in accuracy.