1 Introduction

The extrapolation of band-limited signals is widely applied in engineering, seismology, industrial electronics and other fields [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19].

In [19], it is pointed out that the extrapolation of band-limited signals is one of the essential research topics in signal processing, wireless communication, and positioning scenarios, where the transmitted signals are always band-limited.

In [1] and [2], Papoulis and Gerchberg presented an iterative algorithm that converges to the extrapolation of the band-limited signal. In [3], an improved version of the Papoulis–Gerchberg algorithm is given. In [4], the application of extrapolation to spectral estimation is described. In [5,6,7,8,9,10,11,12,13,14], other extrapolation algorithms are presented. In [15, 16], applications of extrapolation and the regularization method to image restoration are given. In [20], the discrete iteration of the Papoulis–Gerchberg algorithm is studied. Fast iterative and noniterative methods are presented in [21]. In [22], the connections between two approaches to band-limited interpolation are clarified. In [19], a piecewise extrapolation method is proposed to address the error variation and error accumulation within the iterations of the Papoulis–Gerchberg algorithm: the extrapolation process is divided into several pieces, and a minimum-energy least-squares-error extrapolation based on the Papoulis–Gerchberg algorithm is given to reduce the accumulated error. In [23], the Papoulis–Gerchberg algorithm is extended to the extrapolation of signals in higher dimensions, and the regularization method is applied in the noisy case because of the instability. In [24] and [25], a fast convergence algorithm is presented; it has been proved to converge faster than the Papoulis–Gerchberg algorithm. In [26], the fast convergence algorithm is taken to be a generalization of the Papoulis–Gerchberg algorithm, and in [27] it is cited among the nonpatent citations.

In the extrapolation algorithms described above, we need to compute the integrals in the Fourier transform. The novelty of this paper is to revise the fast convergence algorithm of [24] and [25] by the Shannon Sampling Theorem: in the case of oversampling, the iteration formula can be simplified. In the new algorithm, we replace the Fourier transform [1, 2, 24, 25] by a Fourier series. In this way, we obtain a fast algorithm in the form of a Fourier series instead of the Fourier transform of the Papoulis–Gerchberg algorithm, so the computation of the integrals in the previous algorithms [1, 2, 24, 25] is omitted. The amount of computation is reduced and the accuracy is even better. We also consider the truncation of the Fourier series, in which the Gibbs phenomenon occurs, and present a fast algorithm to remove it.

In this paper, we will improve the fast convergence algorithm of [24] by the Shannon Sampling Theorem in the case of oversampling [28], and thereby obtain a fast algorithm that is more effective than the fast convergence algorithm of [24] and [25]. In Sect. 2, we describe the band-limited extrapolation problem. In Sect. 3, we describe how the sampling theorem is used to obtain a fast algorithm and explain why the fast algorithm yields more accurate solutions. In Sect. 4, the method and an experiment for extrapolation are given. In Sect. 5, the Gibbs phenomenon is analyzed and illustrated by examples, and another algorithm is presented to control it. In Sect. 6, we discuss the experimental results. Finally, the conclusion is given in Sect. 7.

2 Extrapolation of band-limited signals

In this section, we introduce the problem of extrapolation of band-limited signals.

The definition of band-limited functions is as follows:

Definition

For \(\Omega =const.>0\), a function \(f\in L^2({\textbf {R}})\) is said to be \(\Omega\)-band-limited if \({\hat{f}}(\omega )=0 \,\,\forall \omega \in {\textbf {R}}\backslash [-\Omega ,\Omega ]\). Here \({\hat{f}}\) is the Fourier transform of f [29]:

$$\begin{aligned} {\mathcal{F}} (f)(\omega )=\hat{f}(\omega ):=\int _{-\infty }^{+\infty }f(t)e^{-j\omega t}\textrm{d}t,\, \omega \in {\textbf {R}}, \end{aligned}$$
(1)

where \({\textbf {R}}\) is the set of all real numbers.

We then have the inversion formula:

$$\begin{aligned} \mathcal{F}\ ^{-1} ({\hat{f}})(t) = f(t)=\frac{1}{2\pi } \int _{-\Omega }^{\Omega }{\hat{f}}(\omega )e^{j\omega t}\textrm{d}\omega ,\,\mathrm{a.e.}\, t\in {\textbf {R}}. \end{aligned}$$
(2)

By the derivative

$$\begin{aligned} f'(t)=\frac{1}{2\pi } \int _{-\Omega }^{\Omega }(j\omega ){\hat{f}}(\omega )e^{j\omega t}\textrm{d}\omega \end{aligned}$$

we can see that f(t) is an analytic function if we regard t as a complex variable. For analytic functions, we have the uniqueness theorem [30].

The Uniqueness Theorem. Let f(z) and g(z) be analytic in a region \(\mathcal R\). If the set of points z in \(\mathcal R\) where \(f(z) = g(z)\) has a limit point in \(\mathcal R\), then \(f(z) = g(z)\) for all \(z \in \mathcal R\). In particular, if the set of zeros of f(z) in \(\mathcal R\) has a limit point in \(\mathcal R\), then f(z) is identically zero in \(\mathcal R\).

In this paper, we consider only real t.

By the uniqueness theorem for analytic functions, f(t) is uniquely determined by its values on \([-T,\,T]\), where \(T=const.>0\), since every point of \([-T,\,T]\) is a limit point. So we pose the band-limited extrapolation problem:

Assume that \(f:{\textbf {R}}\rightarrow {\textbf {R}}\) is a band-limited function and T is a positive constant:

$$\begin{aligned} \textrm{given}\, f(t)\,\,\, t\in [-T,T],\, \textrm{find}\,\, f(t)\,\,\, t\in {\textbf {R}}\backslash [-T,T]. \end{aligned}$$
(3)

Remark 1

The problem (3) is the problem of extrapolation in the time domain. In some papers, the problem of extrapolation in the frequency domain to obtain a higher resolution signal in the time domain is discussed [31, 32].

Here we give an example of a band-limited signal.

Example

Suppose

$$\begin{aligned} f(t):=\frac{1-\cos t}{\pi t^2} \end{aligned}$$

is the signal in Fig. 1.

Then its Fourier transform is

$$\begin{aligned} {\hat{f}}(\omega )= \left\{ \begin{array}{ll} 1- |\omega |, &{} \quad \omega \in [-\Omega ,\Omega ] \\ 0,&{} \quad \omega \in {\textbf {R}}\backslash [-\Omega ,\Omega ]. \end{array} \right. \end{aligned}$$

Here \(\Omega =1\). Thus f(t) is a band-limited signal, and it is uniquely determined by its values on \([-T,T]\).

Fig. 1
figure 1

Example of band-limited signal
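As a quick sanity check (ours, not part of the paper), the inversion formula (2) applied to the triangular spectrum above can be verified numerically against the closed form \(f(t)=(1-\cos t)/(\pi t^2)\); the grid size below is an arbitrary illustrative choice:

```python
import numpy as np

w = np.linspace(-1.0, 1.0, 20001)       # frequency grid on [-Omega, Omega], Omega = 1
dw = w[1] - w[0]
spec = 1.0 - np.abs(w)                  # the triangular spectrum hat{f}(w)

def f_inv(t):
    # inversion formula (2); spec vanishes at the endpoints, so the plain
    # Riemann sum agrees with the trapezoidal rule
    return np.sum(spec * np.cos(w * t)) * dw / (2.0 * np.pi)

def f_closed(t):
    return (1.0 - np.cos(t)) / (np.pi * t * t)

for t in (0.5, 1.0, 3.0):
    assert abs(f_inv(t) - f_closed(t)) < 1e-6
```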

The Papoulis–Gerchberg algorithm in [1, 2] is as follows:

$$\begin{aligned} f_0:=\mathcal{P}_Tf. \end{aligned}$$

For \(k=0,1,2,...\),

$$\begin{aligned} f_{k+1}:=\mathcal{P}_Tf+(\mathcal{I}-\mathcal{P}_T)\mathcal{F} ^{-1} \mathcal{P}_{\Omega } \mathcal{F} f_k, \end{aligned}$$

where \(\mathcal I\) is the identity operator,

$$\begin{aligned} \mathcal{P}_Tf(t):= \left\{ \begin{array}{ll} f(t), &{} \quad t\in [-T,T] \\ 0,&{} \quad t\in {\textbf {R}}\backslash [-T,T] \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \mathcal{P}_{\Omega }{\hat{f}}(\omega ):= \left\{ \begin{array}{ll} {\hat{f}}(\omega ), &{} \quad \omega \in [-\Omega ,\Omega ] \\ 0,&{} \quad \omega \in {\textbf {R}}\backslash [-\Omega ,\Omega ]. \end{array} \right. \end{aligned}$$

The convergence \(||f_k -f||_{L^2} \rightarrow 0\) is proven in [1] by Papoulis.
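A minimal discrete sketch of this iteration may be helpful (our own illustration; the paper works with the continuous operators, and the grid, T and iteration count below are arbitrary choices, with the FFT playing the role of \(\mathcal{F}\)):

```python
import numpy as np

n = 1024
t = np.linspace(-20.0, 20.0, n, endpoint=False)   # time grid containing t = 0
dt = t[1] - t[0]
safe = np.where(t == 0, 1.0, t)
f = np.where(t == 0, 1.0/(2*np.pi), (1 - np.cos(t)) / (np.pi * safe**2))

T, Omega0 = 4.0, 1.0
known = np.abs(t) <= T                            # support of P_T
freqs = 2*np.pi*np.fft.fftfreq(n, d=dt)
band = np.abs(freqs) <= Omega0                    # support of P_Omega

fk = np.where(known, f, 0.0)                      # f_0 := P_T f
for _ in range(200):
    g = np.fft.ifft(np.fft.fft(fk) * band).real   # F^{-1} P_Omega F f_k
    fk = np.where(known, f, g)                    # reimpose the known segment

err0 = np.sum((np.where(known, f, 0.0) - f)**2) * dt
err = np.sum((fk - f)**2) * dt
assert err < err0                                 # extrapolation error decreased
```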

3 A fast extrapolation algorithm

In this section, we present a fast extrapolation algorithm for the case of oversampling by the sampling theorem. We first recall the Shannon Sampling Theorem [29].

Shannon Sampling Theorem. The \(\Omega _0\)-band-limited signal \(f(t)\in L^2({\textbf {R}})\) can be exactly reconstructed from its samples \(f(nh_0)\), and

$$\begin{aligned} f(t)=\sum _{n=-\infty }^\infty f(nh_0)\frac{\sin \Omega _0(t-nh_0)}{\Omega _0(t-nh_0)} \end{aligned}$$

where \(h_0:= \pi /\Omega _0\).

Remark 2

Since \(\Omega _0\)-band-limited signals are also \(\Omega\)-band-limited for \(\Omega \ge \Omega _0\), we have the formula for oversampling

$$\begin{aligned} f(t)=\sum _{n=-\infty }^\infty \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} f(nh) \end{aligned}$$

where \(h:= \pi /\Omega\) and \(h\le h_0\).
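A small numerical illustration of this oversampling formula (ours; the truncation range of the infinite series and the test points are arbitrary choices):

```python
import numpy as np

Omega0, Omega = 1.0, 10.0
h = np.pi / Omega                              # oversampled step, h < h0 = pi/Omega0

def f(t):                                      # band-limited test signal, Omega_0 = 1
    t = np.asarray(t, dtype=float)
    safe = np.where(t == 0, 1.0, t)
    return np.where(t == 0, 1.0/(2*np.pi), (1 - np.cos(t)) / (np.pi * safe**2))

n = np.arange(-2000, 2001)                     # truncation of the infinite series

def sinc_series(t):
    x = Omega * (t - n * h)
    kernel = np.where(x == 0, 1.0, np.sin(x) / np.where(x == 0, 1.0, x))
    return float(np.sum(f(n * h) * kernel))

for t in (0.3, 1.7, -5.2):                     # off-grid points
    assert abs(sinc_series(t) - float(f(t))) < 1e-4
```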

In [24] and [25], a fast convergence algorithm is presented (Chen Fast Convergence Algorithm):

$$\begin{aligned} f^C_0:=\mathcal{P}_Tf. \end{aligned}$$

For \(k=0,1,2,...\)

$$\begin{aligned} f^C_{k+1}:=\mathcal{P}_Tf+(\mathcal{I}-\mathcal{P}_T){\mathcal{F}} ^{-1}\mathcal{P}_{\Omega }{\hat{f}}^C_k \end{aligned}$$

where

$$\begin{aligned} {\hat{f}}^C_{k+1}(\omega ):= & {} \int _{-T}^T f(t)e^{-j\omega t}\textrm{d}t\nonumber \\{} & {} +\sum _{|nh|\le T} f(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] \nonumber \\{} & {} + \sum _{|nh|>T} g^C_k(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] \end{aligned}$$
(4)

in which \(g^C_k={\mathcal{F}} ^{-1}\mathcal{P}_{\Omega }{\hat{f}}^C_k.\)

For each \(\omega\), in

$$\begin{aligned} \int _{-T}^T f(t)e^{-j\omega t}\textrm{d}t \approx \sum _{i=-M}^{M-1} f(t_i)e^{-j\omega t_i}\Delta t \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (4a) \end{aligned}$$

where \(t_i=i\Delta t\) and \(\Delta t = T /M\), we need \(8M+1\) real multiplications and \(6M-1\) real additions. Here M is a large positive integer.

We assume we have \(2N_T+1\) terms in

$$\begin{aligned} \sum _{|nh|\le T} f(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] \end{aligned}$$

then we need \((2N_T+1)(8M+1)+2N_T+1\) real multiplications and \((2N_T+1)(6M+1)+2N_T\) real additions. We assume we have chosen \(2N_1\) terms in

$$\begin{aligned} \sum _{|nh|>T} g^C_k(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] , \end{aligned}$$

then we need \((2N_1)(8M+1)+2N_1\) real multiplications and \((2N_1)(6M+1)+2N_1-1\) real additions.

Now, we try to find a fast algorithm by the Shannon Sampling Theorem.

In (4), we need the computation of the integrals

$$\begin{aligned} \int _{-T}^T f(t)e^{-j\omega t}\textrm{d}t \end{aligned}$$

for extrapolations. We can use the rectangular formula, the trapezoidal formula or Simpson's formula. In (4a) we use the rectangular formula.

If we replace f(t) in (4) by its approximation

$$\begin{aligned} f(t)\approx \sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} \end{aligned}$$
(5)

and choose \(\Omega\) large enough that \(\Delta t = h= \pi /\Omega\), then we will obtain the same result in computation, since the values of f(t) at \(t=t_i,\, i= -M, -M+1,...,M-1, M\), are unchanged.

In (5), we have taken

$$\begin{aligned} \sum _{|nh|> T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}=0. \end{aligned}$$

However, in the k-th step of the iteration, the signal out of \([-T,T]\) has been updated by \(g^C_k\). We can add

$$\begin{aligned} \sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} \end{aligned}$$
(6)

into (5) and obtain another approximation of f(t)

$$\begin{aligned} f(t)\approx \sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} + \sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}. \end{aligned}$$
(7)

We can replace f(t) in (4) by (7):

$$\begin{aligned} {\hat{f}}^C_{k+1}(\omega ):= & {} \int _{-T}^T \left[ \sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} + \sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right] e^{-j\omega t}\textrm{d}t\\{} & {} +\sum _{|nh|\le T} f(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] \\{} & {} + \sum _{|nh|>T} g^C_k(nh)\left[ he^{-jnh\omega } -\int _{|t|\le T} \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} e^{-j\omega t} \textrm{d}t\right] . \end{aligned}$$

By canceling like terms, we obtain another expression of \({\hat{f}}^C_{k+1}(\omega )\):

$$\begin{aligned} {\hat{f}}^C_{k+1}(\omega ):=\sum _{|nh|\le T}f(nh) he^{-jnh\omega } + \sum _{|nh|> T} g^C_k(nh) he^{-jnh\omega }. \end{aligned}$$
(8)

This is a Fourier series in the case of oversampling, and it is much simpler than (4).

Based on the formula (8), we present a fast algorithm:

$$\begin{aligned} {\hat{f}}^C_0(\omega ):=\sum _{|nh|\le T} f(nh) he^{-jnh\omega }. \end{aligned}$$
(9)

For \(k=0,1,2,...\),

$$\begin{aligned} {\hat{f}}^C_{k+1}(\omega ):={\hat{f}}^C_0(\omega ) + \sum _{|nh|> T} g^C_k(nh) he^{-jnh\omega } \end{aligned}$$
(10)

where

$$\begin{aligned} g^C_k=\mathcal{F}^{-1}\mathcal{P}_{\Omega _0}{\hat{f}}^C_k. \end{aligned}$$

This algorithm should give a more accurate approximation, since we have added (6) to (5). This can be seen from the theorem below.

Theorem 1

The error energy of (5) is

$$\begin{aligned} \left\| f(t)-\sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right\| ^2 =h\sum _{|nh|> T} |f(nh)|^2. \end{aligned}$$

The error energy of (7) is

$$\begin{aligned} \left\| f(t)-\sum _{|nh|\le T} f(nh)\frac{\sin \Omega (t-nh)}{\Omega (t-nh)} -\sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right\| ^2 =h\sum _{|nh|> T} |f(nh)- g^C_k(nh)|^2 \end{aligned}$$

where \(h=\frac{\pi }{\Omega }\) and \(g^C_k\) is the k-th approximation of f for \(|t|>T\).

Proof

Assume s(t) is \(\Omega\)-band-limited. By Parseval’s theorem for Fourier transform, we have

$$\begin{aligned} \int _{-\infty }^{\infty }|s(t)|^2 \textrm{d}t=\frac{1}{2\pi }\int _{-\Omega }^{\Omega }|{\hat{s}}(\omega )|^2 \textrm{d}\omega . \end{aligned}$$

By Parseval’s theorem for Fourier series, we have

$$\begin{aligned} \int _{-\Omega }^{\Omega }|{\hat{s}}(\omega )|^2 \textrm{d}\omega =\frac{2\pi ^2}{\Omega }\sum _{n=-\infty }^{\infty } |s(nh)|^2. \end{aligned}$$

Then

$$\begin{aligned} \int _{-\infty }^{\infty }|s(t)|^2 \textrm{d}t =h \sum _{n=-\infty }^{\infty } |s(nh)|^2. \end{aligned}$$

Let

$$\begin{aligned} s(t)=f(t)-\sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} =\sum _{|nh|> T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} \end{aligned}$$

we have

$$\begin{aligned} \left\| f(t)-\sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right\| ^2 =h\sum _{|nh|> T} |f(nh)|^2. \end{aligned}$$

Let

$$\begin{aligned} s(t)= & {} f(t)-\sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} -\sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\\= & {} \sum _{|nh|> T} (f(nh)-g^C_k(nh)) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)} \end{aligned}$$

we have

$$\begin{aligned} \left\| f(t)-\sum _{|nh|\le T} f(nh)\frac{\sin \Omega (t-nh)}{\Omega (t-nh)} -\sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\right\| ^2 =h\sum _{|nh|> T} |f(nh)- g^C_k(nh)|^2. \end{aligned}$$

\(\square\)
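The identity \(\int |s|^2\,\textrm{d}t = h\sum _n |s(nh)|^2\) used in this proof can be checked numerically; the sketch below (our own, with illustrative parameters) does so for \(s(t)=(1-\cos t)/(\pi t^2)\), whose energy is \(1/(3\pi )\) by Parseval's theorem applied to the triangular spectrum:

```python
import numpy as np

Omega = 10.0                    # any Omega >= Omega0 = 1 works; h = pi/Omega
h = np.pi / Omega

n = np.arange(-200000, 200001)  # truncation of the infinite sample sum
t = n * h
safe = np.where(t == 0, 1.0, t)
s = np.where(t == 0, 1.0/(2*np.pi), (1 - np.cos(t)) / (np.pi * safe**2))

lhs = 1.0 / (3.0 * np.pi)       # exact energy: (1/2pi) * int_{-1}^{1} (1-|w|)^2 dw
rhs = h * np.sum(s**2)          # h * sum |s(nh)|^2
assert abs(lhs - rhs) < 1e-6
```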

Theorem 2

The error energy

$$\begin{aligned} ||f^C_k(t)-f(t)||^2\ge ||f^C_{k+1}(t)-f(t)||^2. \end{aligned}$$

Proof

By Parseval’s identity

$$\begin{aligned} ||f^C_k(t)-f(t)||^2= & {} \frac{1}{2\pi }||{\hat{f}}^C_k(\omega )-{\hat{f}}(\omega )||^2\\= & {} \frac{1}{2\pi }\int _{-\infty } ^{\infty }|{\hat{f}}^C_k(\omega )-{\hat{f}}(\omega )|^2 d\omega \ge \frac{1}{2\pi }\int _{-\Omega _0} ^{\Omega _0}|{\hat{f}}^C_k(\omega )-{\hat{f}}(\omega )|^2\textrm{d}\omega \\= & {} \frac{1}{2\pi }\int _{-\Omega _0} ^{\Omega _0}|{\hat{g}}^C_k(\omega )-{\hat{f}}(\omega )|^2\textrm{d}\omega =||g^C_k(t)-f(t)||^2. \end{aligned}$$

By the Shannon Sampling Theorem

$$\begin{aligned} g_k ^C(t)=\sum _{n=-\infty }^\infty g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}. \end{aligned}$$

By the inversion of the Fourier transform on (10)

$$\begin{aligned} f_{k+1} ^C(t)= & {} \sum _{|nh|\le T} f(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}\\{} & {} + \sum _{|nh|> T} g^C_k(nh) \frac{\sin \Omega (t-nh)}{\Omega (t-nh)}. \end{aligned}$$

By Parseval’s theorem

$$\begin{aligned}{} & {} ||g^C_k(t)-f(t)||^2=h\sum _{n=-\infty }^\infty | g^C_k(nh)-f(nh)|^2 \\{} & {} ||f^C_{k+1}(t)-f(t)||^2=h\sum _{|nh|>T} | g^C_k(nh)-f(nh)|^2. \end{aligned}$$

So

$$\begin{aligned} ||g^C_k(t)-f(t)||^2\ge ||f^C_{k+1}(t)-f(t)||^2. \end{aligned}$$

\(\square\)

Remark 3

This theorem shows that the error energy decreases at each step of the iteration. Equality can hold only if \(f^C_k(t)=f(t)\). So, before we have reached the exact extrapolation, we have

$$\begin{aligned} ||f^C_k(t)-f(t)||^2>||f^C_{k+1}(t)-f(t)||^2. \end{aligned}$$

4 Method and experiment for extrapolation

In this section, we give the method for the algorithm. In computation we can only use finitely many terms of (10). We choose a large integer \(N>0\) and approximate (10) by:

$$\begin{aligned} {\hat{f}}^C_{k+1}(\omega )={\hat{f}}^C_0(\omega ) + \sum _{|nh|> T \,\textrm{and}\, |n|\le N} g^C_k(nh) he^{-jnh\omega }. \end{aligned}$$
(11)

We refer to it as Chen Fast Algorithm 1.

figure a

We have \(2M+1\) terms in \({\hat{f}}^C_0(\omega _l)\). For each \(\omega _l\), in \({\hat{f}}^C_0(\omega _l)\) we need 8M real multiplications and 6M real additions. In \(g^C_k(t_m)\), for each \(t_m\), we need 8L multiplications and 6L additions; in \({\hat{f}}^C_{k+1}(\omega _l)\), for each \(\omega _l\), we need \(8(N-M)\) real multiplications and \(6(N-M)\) real additions.
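The iteration (9)–(11) can be sketched as follows (our own illustrative implementation, not the paper's code; the trapezoidal rule approximates the inversion integral defining \(g^C_k\), and all grid sizes are our choices):

```python
import numpy as np

T, Omega, Omega0, N, L, iters = np.pi/5, 10.0, 1.0, 10, 50, 6
h = np.pi / Omega

def f(t):                                          # band-limited test signal
    t = np.asarray(t, dtype=float)
    safe = np.where(t == 0, 1.0, t)
    return np.where(t == 0, 1.0/(2*np.pi), (1 - np.cos(t)) / (np.pi * safe**2))

w = np.linspace(-Omega0, Omega0, 2*L + 1)          # pass-band frequency grid
dw = w[1] - w[0]
n = np.arange(-N, N + 1)
inside = np.abs(n * h) <= T                        # known samples, |nh| <= T

E = h * np.exp(-1j * np.outer(n * h, w))           # h e^{-j n h w}
fhat0 = f(n[inside] * h) @ E[inside]               # formula (9), truncated
fhat = fhat0.copy()
for _ in range(iters):
    ker = np.exp(1j * np.outer(n[~inside] * h, w)) # inversion integral for g_k(nh)
    ker[:, 0] *= 0.5; ker[:, -1] *= 0.5            # trapezoidal end weights
    g = (ker @ fhat).real * dw / (2 * np.pi)
    fhat = fhat0 + g @ E[~inside]                  # formula (11)

tri = 1.0 - np.abs(w)                              # the true spectrum hat{f}(w)
err0 = np.sum(np.abs(fhat0 - tri)**2) * dw
err = np.sum(np.abs(fhat - tri)**2) * dw
assert err < err0                                  # iteration improved the estimate
```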

Now we give an example of an application of Chen Fast Algorithm 1 and compare it with Chen Fast Convergence Algorithm and Papoulis–Gerchberg algorithm.

Example 1

Suppose

$$\begin{aligned} f(t):=\frac{1-\cos t}{\pi t^2}. \end{aligned}$$

Then

$$\begin{aligned} {\hat{f}}(\omega )=\mathcal{P}_{\Omega _0}(1- |\omega |). \end{aligned}$$

Here \(\Omega _0=1\).

Suppose the signal is known on \([-T, T]=[-\pi /5, \pi /5]\). In Chen Fast Algorithm 1, we input: \(T=\pi /5\), \(\Omega =10\), \(L=50\), \(N=10\) and \(NumofIterations=6\). Since \(\Omega =10\gg \Omega _0=1\) and the sampling step is \(h=\pi /\Omega\), this is the case of oversampling.

The numerical results by Papoulis–Gerchberg algorithm, Chen Fast Convergence Algorithm and Chen Fast Algorithm 1 for the iterations 1, 2, 3, 4 are in Figs. 2, 3, 4, 5, 6, 7, 8 and 9.

The result of the Papoulis–Gerchberg algorithm is given by the black dotted lines. The result of the Chen Fast Convergence Algorithm is given by the red dash-dot lines. The result of Chen Fast Algorithm 1 is given by the blue dashed lines.

The error energy

$$\begin{aligned} \int _{-\Omega _0}^{\Omega _0}|{\hat{f}}_k(\omega )-{\hat{f}}(\omega )|^2\textrm{d}\omega \end{aligned}$$

of the Papoulis–Gerchberg algorithm, the Chen Fast Convergence Algorithm and Chen Fast Algorithm 1 for iterations 1–6 is given in Table 1.

In the first iteration, the results of the Papoulis–Gerchberg algorithm and the Chen Fast Convergence Algorithm are the same, but Chen Fast Algorithm 1 is better. In iterations 2–4, the Papoulis–Gerchberg algorithm is the worst and Chen Fast Algorithm 1 is the best. This can also be seen in Table 1.

Table 1 Error energy in Example 1
Fig. 2
figure 2

The results of the first iteration in the frequency domain

Fig. 3
figure 3

The results of the first iteration in the time domain

Fig. 4
figure 4

The results of the second iteration in the frequency domain

Fig. 5
figure 5

The results of the second iteration in the time domain

Fig. 6
figure 6

The results of the third iteration in the frequency domain

Fig. 7
figure 7

The results of the third iteration in the time domain

Fig. 8
figure 8

The results of the fourth iteration in the frequency domain

Fig. 9
figure 9

The results of the fourth iteration in the time domain

5 Method to remove the Gibbs phenomenon

In this section, we give some examples to show the Gibbs phenomenon. In Chen Fast Algorithm 1, we truncated the Fourier series. This gives rise to the Gibbs phenomenon in the frequency domain if \({\hat{f}}(\omega )\) is not continuous at \(\omega =\Omega _0\) and \(\omega =-\Omega _0\). This happens when \(f\in L^2\) but \(f\notin L^1\).

Example 2

Suppose

$$\begin{aligned} f(t)=\frac{\sin t}{\pi t}. \end{aligned}$$

Then

$$\begin{aligned} {\hat{f}}(\omega ):= \left\{ \begin{array}{l} 1,\,\,\,\,\,\omega \in [-\Omega _0,\Omega _0] \\ 0,\,\,\,\,\,\omega \in {\textbf {R}}\backslash [-\Omega _0,\Omega _0]. \end{array} \right. \end{aligned}$$

Here \(\Omega _0=1\).

Suppose the signal is known on \([-T, T]=[-\pi /5, \pi /5]\). In Chen Fast Algorithm 1, we input \(T=\pi /5\), \(\Omega =10\), \(L=50\), \(N=200\) and \(NumofIterations=6\). The numerical results by the algorithm for the iterations 1, 2, 3, 4 are in Fig. 10. The results of Chen Fast Algorithm 1 are given by dashed lines.

Fig. 10
figure 10

The results of Chen Fast Algorithm 1 in Example 2 for \(\Omega =10\)
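The overshoot seen here is the classical Gibbs behavior of truncated Fourier series. The following sketch (our own demonstration, not the paper's figures) takes \(f(t)=\sin (\Omega _0 t)/(\pi t)\) with \(\Omega _0=1\), so that \({\hat{f}}\) is the indicator of \([-1,1]\), and shows that the roughly 9% overshoot near the jump persists however many terms N are kept:

```python
import numpy as np

Omega, Omega0 = 10.0, 1.0
h = np.pi / Omega

def f(t):                                     # hat{f} jumps at w = +-Omega0
    t = np.asarray(t, dtype=float)
    safe = np.where(t == 0, 1.0, t)
    return np.where(t == 0, Omega0/np.pi, np.sin(Omega0 * t) / (np.pi * safe))

w = np.linspace(0.8, 1.3, 2501)               # zoom on the jump at w = Omega0
for N in (100, 200, 400):
    n = np.arange(-N, N + 1)
    S = (h * f(n * h)) @ np.exp(-1j * np.outer(n * h, w))  # truncated series (8)
    overshoot = S.real.max() - 1.0
    assert 0.05 < overshoot < 0.14            # ~9% overshoot for every N
```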

Now, we present a fast algorithm that avoids the truncation error arising from computing only finitely many terms of the Fourier series. We can change (10) to

$$\begin{aligned} {\hat{f}}^C_{k+1}(\omega )= & {} {\hat{f}}^C_0(\omega ) - \sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega } + \sum _{n=-\infty }^\infty g^C_k(nh) he^{-jnh\omega } \nonumber \\= & {} {\hat{f}}^C_0(\omega )- \sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega } + {\hat{g}}^C_k(\omega ) \nonumber \\= & {} {\hat{f}}^C_0(\omega )- \sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega } + \mathcal{P}_{\Omega _0}\hat{f^C_k}(\omega ). \end{aligned}$$
(12)

So we present another new method and we refer to it as Chen Fast Algorithm 2:

figure b

We have \(2M+1\) terms in \({\hat{f}}^C_0(\omega _l)\). For each \(\omega _l\), in \({\hat{f}}^C_0(\omega _l)\) we need 8M real multiplications and 6M real additions. In \(g^C_k(t_m)\), for each \(t_m\), we need 8L multiplication and 6L addition, in \({\hat{f}}^C_{k+1}(\omega _l)\) for each \(\omega _l\), we need 8M real multiplications and 6M real additions.

To compare the complexity of Chen Fast Algorithm 1 and Chen Fast Algorithm 2: if the number of terms in \(\sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega }\) is less than N, then Chen Fast Algorithm 2 has lower complexity.
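One way to realize formula (12) on the pass band \([-\Omega _0,\Omega _0]\), where \(\mathcal{P}_{\Omega _0}{\hat{f}}^C_k={\hat{f}}^C_k\), is sketched below (our own illustrative implementation; the trapezoidal rule approximates the inversion integral for \(g^C_k\), and the grids are our choices):

```python
import numpy as np

T, Omega, Omega0, L, iters = np.pi/5, 10.0, 1.0, 50, 6
h = np.pi / Omega

def f(t):                                      # sin(Omega0 t)/(pi t)
    t = np.asarray(t, dtype=float)
    safe = np.where(t == 0, 1.0, t)
    return np.where(t == 0, Omega0/np.pi, np.sin(Omega0 * t) / (np.pi * safe))

w = np.linspace(-Omega0, Omega0, 2*L + 1)      # pass-band grid
dw = w[1] - w[0]
m = np.arange(-int(T/h), int(T/h) + 1)         # |mh| <= T
E = h * np.exp(-1j * np.outer(m * h, w))       # h e^{-j m h w}

fhat0 = f(m * h) @ E                           # formula (9)
fhat = fhat0.copy()
for _ in range(iters):
    ker = np.exp(1j * np.outer(m * h, w))      # inversion integral for g_k(mh)
    ker[:, 0] *= 0.5; ker[:, -1] *= 0.5        # trapezoidal end weights
    g = (ker @ fhat).real * dw / (2 * np.pi)
    fhat = fhat0 - g @ E + fhat                # formula (12) on [-Omega0, Omega0]

err0 = np.sum(np.abs(fhat0 - 1.0)**2) * dw     # hat{f} = 1 on the pass band
err = np.sum(np.abs(fhat - 1.0)**2) * dw
assert err < err0
```

Because no tail sum is truncated, no Gibbs overshoot is introduced on the pass band.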

Theorem 3

For each integer \(k\ge 0\), the limits

$$\begin{aligned} \lim _{\omega \rightarrow \Omega ^-}{\hat{f}}^C _k(\omega ) \,\,\,\textrm{and} \,\, \lim _{\omega \rightarrow -\Omega ^+}{\hat{f}}^C _k(\omega ) \end{aligned}$$

exist.

Proof

By mathematical induction: first, for \(k=0\),

$$\begin{aligned} {\hat{f}}^C_0(\omega )=\sum _{|nh|\le T} f(nh) he^{-jnh\omega }, \end{aligned}$$

we can see the limits exist. Assume the limits exist for \({\hat{f}}^C_k(\omega )\), by (12)

$$\begin{aligned} {\hat{f}}^C_{k+1}(\omega )={\hat{f}}^C_0(\omega )- \sum _{|nh|\le T} g^C_k(nh) he^{-jnh\omega } + \mathcal{P}_{\Omega _0}\hat{f^C_k}(\omega ), \end{aligned}$$

we can see the limits exist for \({\hat{f}}^C_{k+1}(\omega )\). \(\square\)

Remark 4

By this theorem we can see that the limits exist and are independent of the N in (11), so the Gibbs phenomenon disappears.

Example 3

We choose the function of Example 2.

Suppose the signal is known on \([-T, T]=[-\pi /5, \pi /5]\).

In Chen Fast Algorithm 2, we input \(T=\pi /5\), \(\Omega =10\), \(L=50\) and \(NumofIterations=6\).

Since \(\Omega =10\gg \Omega _0=1\) and the sampling step is \(h=\pi /\Omega\), this is the case of oversampling.

The numerical results by Chen Fast Algorithm 2 for the iterations 1, 2, 3, 4 are in Fig. 11.

The error energy

$$\begin{aligned} \int _{-\Omega _0}^{\Omega _0}|{\hat{f}}_k(\omega )-{\hat{f}}(\omega )|^2\textrm{d}\omega \end{aligned}$$

of Chen Fast Algorithms 1 and 2 for iterations 1–6 is given in Table 2. We give the results of Chen Fast Algorithm 1 for the case \(T=10\) in Fig. 12.

Fig. 11
figure 11

The results of Chen Fast Algorithm 2 in Example 3

Table 2 Error energy in Examples 2 and 3
Fig. 12
figure 12

The results of Chen Fast Algorithm 1 in Example 3 for T=10

6 Results and discussion

In (11) and (12), the computation of the integrals in (4) is omitted, so the amount of computation is greatly reduced. In the computation of (11), the FFT can be used; the amount of computation is then of order \(O(N\log N)\).

In Example 1, we can see that the numerical result of Chen Fast Algorithm 1 is more accurate than those of the Papoulis–Gerchberg algorithm and the Chen Fast Convergence Algorithm. The amount of computation is also smaller in Chen Fast Algorithm 1, since it requires no computation of integrals.

In Example 2, the Gibbs phenomenon occurs. It can be kept under control by Chen Fast Algorithm 2.

In Example 3, we can see that the Gibbs phenomenon severely affects the accuracy of the numerical results even if we choose a large enough T. This is not due to the error energy in the residual sequence beyond \(|t| = T\), but to the truncation error in the Fourier series, which occurs when \({\hat{f}}(-\Omega _0+0)\ne 0\) or \({\hat{f}}(\Omega _0-0)\ne 0\).

7 Conclusion

Fast extrapolation algorithms for band-limited signals are introduced via the Shannon Sampling Theorem in this paper. By adding the updated term into the sampling series during the iteration, we obtain more accurate extrapolated signals and reduce the amount of computation, as the experimental results show. The Gibbs phenomenon can be kept under control when the computation of the infinite Fourier series is not required.