1 Introduction

In many applications, we are confronted with wave phenomena, such as wave scattering, signal processing, and antenna theory, which are characterized by bandlimited functions (whose Fourier transforms are compactly supported) [3, 14]. It is well known that the natural tool for effectively representing bandlimited functions on an interval is the family of prolate spheroidal wave functions (PSWFs) [7, 28, 32]. Hence, there has been growing interest in developing numerical methods based on PSWFs, which also offer an alternative to Chebyshev and other orthogonal polynomials for pseudospectral/collocation and spectral-element algorithms [5].

It is well known that a simple way of approximating a function f(x) is to choose a sequence of points \(\{x_{j}\}_{j=0}^{n}\) and construct a function P(x) from the values of f(x) at these interpolation nodes, i.e., to set P(xj) = f(xj) for 0 ≤ j ≤ n. Standard tools for such interpolation and approximation algorithms were investigated in [15, 32]. We highlight that a very popular alternative nowadays is the barycentric interpolation formula, whose favourable numerical properties are summarized by Berrut and Trefethen [1, 24]. The related issues are therefore worthy of investigation.

The purpose of this paper is to have new insights into prolate interpolation and pseudospectral differentiation based on the Prolate-Gauss-Lobatto points. The inspiration behind the proposed numerical method is the remarkable advantage of barycentric interpolation formula [1]. The main contributions reside in the following aspects.

  • We give the barycentric prolate interpolation and differentiation formulas, which enjoy more stable approximation and better efficiency than the formulas given earlier [32, 34].

  • We give the error analysis of the barycentric prolate interpolation and differentiation based on the error analysis of the standard prolate interpolation and differentiation [22].

  • We offer a preconditioning matrix that nearly inverts the second-order prolate pseudospectral differentiation matrix, leading to a well-conditioned collocation approach for second-order boundary value problems.

The structure of the paper is as follows. In Section 2, we review some results on PSWFs and prolate interpolation and pseudospectral differentiation, while Section 3 combines the barycentric form with the prolate interpolation, which yields the new barycentric prolate interpolation and pseudospectral differentiation scheme; the convergence analysis of barycentric prolate interpolation and pseudospectral differentiation is also given there. In Section 4, we introduce the preconditioning matrix that nearly inverts the second-order barycentric prolate differentiation matrix. Sections 5 and 6 demonstrate the analysis via several numerical experiments and apply the new schemes to second-order boundary value problems and a Helmholtz problem.

2 Prolate interpolation and pseudospectral differentiation

2.1 Preliminaries

The PSWFs were introduced in the 1960s by Slepian et al. in a series of papers [20, 21]. First, we briefly recall some preliminary properties of the PSWFs; all of these can be found in [2, 6, 13,14,15,16, 18, 20, 21, 30, 31, 34].

Prolate spheroidal wave functions of order zero are the eigenfunctions ψj(x) of the Helmholtz equation in prolate spheroidal coordinates:

$$ \left[(x^{2}-1)\frac{d^{2}}{dx^{2}}+2x\frac{d}{dx}+c^{2}x^{2}\right]\psi_{j}(x)=\chi_{j}(c)\psi_{j}(x), $$
(2.1)

for x ∈ (− 1,1) and c ≥ 0. A series of papers [18, 32, 34] have shown that the ψj(x) are also the eigenfunctions of the following integral eigenvalue problem:

$$ {\int}_{-1}^{1}e^{icxt}\psi_{j}(t)dt=\lambda_{j}(c)\psi_{j}(x), $$
(2.2)

Here, \(\{\chi _{j}:=\chi _{j}(c)\}_{j=0}^{\infty }\) and \(\{\lambda _{j}:=\lambda _{j}(c)\}_{j=0}^{\infty }\) are the eigenvalues associated with the differential operator and the integral operator, respectively, and the constant c is known as the “bandwidth parameter.” The eigenvalues \(\{\lambda _{j}\}_{j=0}^{\infty }\) satisfy |λ0| > |λ1| > |λ2| > … > 0 and decay super-geometrically to zero. Specifically, based on the work of Wang and Zhang [31], we have:

$$ \lambda_{n}\simeq \sqrt{\frac{\pi e}{2n+3}}\left( \frac{ce}{4n+2}\right)^{n}, $$
(2.3)

where the notation \(A\simeq B\) means that for B≠ 0, the ratio \(\frac {A}{B}\rightarrow 1\) in the sense of some limiting process.
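To get a feel for this decay, the following illustrative Python snippet (ours, not part of the paper's codebase) evaluates the asymptotic formula (2.3) for a fixed bandwidth c:

```python
import math

def lam_asymptotic(n, c):
    """Asymptotic eigenvalue magnitude from (2.3):
    lambda_n ~ sqrt(pi*e/(2n+3)) * (c*e/(4n+2))**n."""
    return math.sqrt(math.pi * math.e / (2 * n + 3)) * (c * math.e / (4 * n + 2)) ** n

# For a fixed bandwidth c, the magnitudes shrink super-geometrically in n.
values = [lam_asymptotic(n, c=10.0) for n in (10, 20, 30)]
```

For c = 10 the values drop by many orders of magnitude between n = 10 and n = 30, which is the super-geometric decay referred to above.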

An important issue related to the PSWFs is the choice of the bandwidth parameter c. For general functions, there is no simple optimal c, since an arbitrary function has many different modes and each mode has a distinct optimal c [7]. Regardless of whether the function being represented is bandlimited or not, all the useful choices of c must satisfy [5]:

$$ 0\leq c<c_{\ast}(N):=\frac{\pi}{2}\left( N+\frac{1}{2}\right). $$
(2.4)

As recommended in [30, 32], a quite safe choice in practice is \(c=\frac {N}{2}\), and we adopt this choice throughout. Guidelines on the suitable choice of c can be found in [3], and a practical rule for pairing up (c,N) is given in [13, 32].

We denote the zeros of (1 − x2)ψn(x) as the Prolate-Gauss-Lobatto points (PGL points). For their computation, Boyd [6] described Newton’s iteration with some care in selecting initial guesses; more generally, [12] gives efficient algorithms for computing zeros of special functions such as the PSWFs. With the Prolate-Gauss-Lobatto points at our disposal, we now introduce the prolate interpolation and pseudospectral differentiation.

2.2 Prolate interpolation and pseudospectral differentiation

In this subsection, we first review some facts about prolate interpolation and pseudospectral differentiation.

The key idea for interpolation is to seek the prolate cardinal functions \(\ell_{i}(x):=\ell_{i}(x;c)\), which are designed to satisfy the interpolation property:

$$ \ell_{i}(x_{j})=\delta_{ij},\quad 0\leq i,j\leq N. $$
(2.5)

Then, the function f(x) is approximated by

$$ (\mathcal{I}_{n}f)(x)=\sum\limits_{j=0}^{N}\ell_{j}(x)f(x_{j}). $$
(2.6)

The standard route to the derivatives is to directly differentiate the prolate cardinal basis \(\ell_{i}(x)\).

Generally, we define the prolate cardinal functions \(\ell_{j}(x)\) as

$$ \ell_{j}(x)=\frac{s_{p}(x)}{s_{p}^{\prime}(x_{j})(x-x_{j})}, \quad j=0,\ldots,N, $$
(2.7)

where \(\{x_{k}\}_{k=0}^{N}\) are the Prolate-Gauss-Lobatto points, i.e., the zeros of sp(x). It follows that the standard interpolant is [15, 32]:

$$ P_{N}(x)=\sum\limits_{j=0}^{N}\frac{s_{p}(x)}{s_{p}^{\prime}(x_{j})(x-x_{j})}f(x_{j}). $$
(2.8)

The standard differentiation generated from the cardinal basis (2.7) can be computed by:

$$ \ell^{\prime}_{j}(x_{i})=\left\{ \begin{array}{ll} \displaystyle \frac{s_{p}^{\prime}(x_{i})}{s_{p}^{\prime}(x_{j})(x_{i}-x_{j})}, ~~\text{if}\ i\neq j,\\ \frac{s_{p}^{\prime\prime}(x_{j})}{2s_{p}^{\prime}(x_{j})}, ~~~~~~~~~~~\text{if} ~\ i=j. \end{array} \right. $$
(2.9)

Differentiating \(s_{p}(x)=s_{p}^{\prime }(x_{j})(x-x_{j})\ell _{j}(x)\) twice yields

$$ \ell^{\prime\prime}_{j}(x_{i})=\left\{ \begin{array}{ll} \displaystyle \frac{s_{p}^{\prime\prime}(x_{i})}{s_{p}^{\prime}(x_{j})(x_{i}-x_{j})}-\frac{2\ell^{\prime}_{j}(x_{i})}{x_{i}-x_{j}}, ~~\text{if}\ i\neq j,\\ \frac{s_{p}^{\prime\prime\prime}(x_{j})}{3s_{p}^{\prime}(x_{j})}, ~~~~~~~~~~~\text{if} ~\ i=j, \end{array} \right. $$
(2.10)

In the following, let us consider the prolate interpolation and pseudospectral differentiation scheme through the barycentric form.

3 Barycentric prolate interpolation and differentiation

In this section, we start with the barycentric interpolation [1, 9, 11], which is an important piece of the puzzle for our new approach, and then give new insights into prolate interpolation via what we call the barycentric prolate interpolation. The differentiation matrices are derived through the barycentric interpolation formula. Then, the convergence analysis of barycentric prolate interpolation and differentiation is given.

3.1 Barycentric interpolation formula

Let \(\{x_{j}\}_{j=0}^{N}\) be a set of distinct nodes in [− 1,1], arranged in ascending order, together with corresponding values f(xj). We assume that the nodes are real and that they are zeros of a function s(x), i.e., s(xj) = 0 for 0 ≤ j ≤ N. Thus, the Lagrange interpolating basis is defined by:

$$ l_{j}(x)=\frac{s(x)}{s^{\prime}(x_{j}) (x-x_{j})},\quad 0\le j\le N. $$
(3.11)

Accordingly, the interpolation in Lagrange form for the function f(x) is

$$ \mathcal{I}_{N}^{L} f(x)=\sum\limits_{j=0}^{N} l_{j}(x)f(x_{j}) . $$
(3.12)

The barycentric formula is an alternative to the Lagrange form, and for computations it is generally recommended that one use the barycentric interpolation formula [1, 11], which has stability and robustness properties that prove advantageous in many applications. The barycentric interpolation is defined as

$$ \mathcal{I}_{N}^{B} f(x)=\frac{{\sum}_{j=0}^{N}\frac{w_{j}}{x-x_{j}}f(x_{j})}{{\sum}_{k=0}^{N}\frac{w_{k}}{x-x_{k}}}, $$
(3.13)

where \(\{w_{j}\}_{j=0}^{N}\) are the barycentric weights. The barycentric weights \(\{w_{j}\}_{j=0}^{N}\) admit different closed forms depending on the nodes. As with polynomial interpolation [1, 24], s(x) can be written as:

$$s(x)=(x-x_{0})(x-x_{1}) {\ldots} (x-x_{N}),$$

such that barycentric weights become

$$ w_{j}=\frac{1}{s^{\prime}(x_{j})}=\frac{1}{{\prod}_{k\neq j}(x_{j}-x_{k})},\quad 0\leq j\leq N. $$

For certain special sets of nodes \(\{x_{j}\}_{j=0}^{N}\), explicit expressions for the barycentric weights wj are available in [1, 26, 27]. For general point sets, the barycentric weights wj can be evaluated by the fast multipole method [23]. These observations lead to an efficient method for computing prolate interpolants based on the Prolate-Gauss-Lobatto points through a new definition of non-zero barycentric weights.
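To make the mechanics concrete, here is a minimal Python sketch (the function names are ours) of the product-formula weights and the evaluation of the barycentric formula (3.13). It applies to any distinct nodes; as a check, we use Chebyshev points, for which the polynomial interpolant reproduces a cubic exactly:

```python
import numpy as np

def bary_weights(x):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    x = np.asarray(x, dtype=float)
    return np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(len(x))])

def bary_eval(x, w, fvals, t):
    """Evaluate the barycentric formula (3.13) at the points t."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.empty_like(t)
    for m, tm in enumerate(t):
        diff = tm - x
        hit = np.where(diff == 0.0)[0]
        if hit.size:                 # evaluation point coincides with a node
            out[m] = fvals[hit[0]]
        else:
            out[m] = np.sum(w * fvals / diff) / np.sum(w / diff)
    return out

# Degree-10 interpolation at Chebyshev points reproduces f(x) = x^3 exactly.
x = np.cos(np.pi * np.arange(11) / 10)
w = bary_weights(x)
approx = bary_eval(x, w, x**3, [0.3])
```

Note that, as stated in Remark 1 below, once the weights are known each evaluation costs only O(N) operations.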

3.2 Barycentric prolate interpolation and pseudospectral differentiation

Using the barycentric form, this subsection gives a new definition of the barycentric prolate weights, which leads to remarkably simple and efficient schemes for the construction of a rational barycentric interpolant, which we call the barycentric prolate interpolation.

The first question is how to choose the barycentric weights. In the same manner as the barycentric formula (3.13) is derived from the Lagrange interpolation (3.12), since the Prolate-Gauss-Lobatto points are the roots of (1 − x2)ψN− 1(x), it is straightforward to let s(x) := sp(x) = (1 − x2)ψN− 1(x). Correspondingly, we define the prolate barycentric weights to be

$$ w_{j}=\frac{1}{s_{p}^{\prime}(x_{j})}=\frac{1}{-2x_{j}\psi_{N-1}(x_{j})+(1-{x_{j}^{2}})\psi^{\prime}_{N-1}(x_{j})}. $$
(3.14)

According to the foregoing observations, it is natural to define a new interpolant, which we call the barycentric prolate interpolation. Moreover, the interpolation property is stable with respect to the nonzero weights, as noticed in [33].

Definition 3.1

The barycentric prolate interpolation can be expressed as

$$ G_{N}(x)=\frac{{\sum}_{j=0}^{N}\frac{w_{j}}{x-x_{j}}f(x_{j})}{{\sum}_{k=0}^{N}\frac{w_{k}}{x-x_{k}}}, $$
(3.15)

where \(\{x_{j}\}_{j=0}^{N}\) are the Prolate-Gauss-Lobatto points and \(w_{j}=\frac {1}{-2x_{j}\psi _{N-1}(x_{j})+(1-{x_{j}^{2}})\psi ^{\prime }_{N-1}(x_{j})}\).

The error analysis will be derived in the next subsection. In fact, the numerical evidence in Figs. 3 and 4 shows that the barycentric prolate interpolation gives better approximations.

Remark 1

The barycentric prolate interpolation enjoys several advantages, which make it very efficient in practice. (i) The barycentric prolate interpolation is scale-invariant, thus avoiding any problems of underflow and overflow. (ii) Once the wj are computed, the interpolant at any point x takes only \(\mathcal {O}(N)\) floating point operations to evaluate.

Remark 2

The barycentric formula has natural advantages for applications of the fast multipole method (FMM) [1], which is a useful and efficient tool to reduce the complexity of evaluating sums such as (3.15) from \(\mathcal {O}(N^{2})\) to \(\mathcal {O}(N)\). The idea of using the FMM to accelerate interpolation and pseudospectral differentiation can be traced back to [4, 8], and we see from [15] that the FMM was used to accelerate the standard prolate interpolation and differentiation. It is noteworthy that the new scheme (3.15) can also be accelerated by the FMM through a process very similar to that in [17].

Remark 3

In the standard interpolation formula (2.8), we have to calculate sp(x) = (1 − x2)ψN− 1(x). Since \(\psi _{N}(x)={\sum }_{k}{\alpha ^{N}_{k}}\overline {P}_{k}(x)\), where \(\overline {P}_{k}(x)\) is the normalized Legendre polynomial and \({\alpha ^{N}_{k}}\) comes from the eigenvector of a matrix, this evaluation is complicated and time-consuming. However, the factor sp(x) drops out of (3.15), and this feature has practical consequences.

Furthermore, we define the cardinal basis function of the barycentric prolate interpolation (3.15) as

$$ h_{j}(x)=\frac{\frac{w_{j}}{x-x_{j}}}{{\sum}_{k=0}^{N}\frac{w_{k}}{x-x_{k}}}. $$
(3.16)

This leads to the differentiation matrices

$$ \begin{aligned} \textbf{D}^{(m)}=(h_{j}^{(m)}(x_{i}))_{0\leq i,j\leq N}, \quad m=1,2, \end{aligned} $$
(3.17)

which have the explicit formulas [23]:

$$ \begin{aligned} \textbf{D}^{(1)}_{ij}=\frac{w_{j}/w_{i}}{x_{i}-x_{j}},\ i\neq j, \quad \quad \textbf{D}^{(1)}_{ii}=-\sum\limits_{j=0,j\neq i}^{N}\frac{w_{j}/w_{i}}{x_{i}-x_{j}}, \end{aligned} $$
(3.18)
$$ \begin{aligned} \textbf{D}_{ij}^{(2)}=&-2\frac{w_{j}/w_{i}}{x_{i}-x_{j}}\left( \sum\limits_{k\neq i}\frac{w_{k}/w_{i}}{x_{i}-x_{k}}+\frac{1}{x_{i}-x_{j}}\right),\ i\neq j ,\\ \textbf{D}_{ii}^{(2)}=&-\sum\limits_{j=0,j\neq i}^{N}\left( -2\frac{w_{j}/w_{i}}{x_{i}-x_{j}}\left( \sum\limits_{k\neq i}\frac{w_{k}/w_{i}}{x_{i}-x_{k}}+\frac{1}{x_{i}-x_{j}}\right)\right), \end{aligned} $$
(3.19)

where \(\{x_{j}\}_{j=0}^{N}\) are the Prolate-Gauss-Lobatto points and \(w_{j}=\frac {1}{-2x_{j}\psi _{N-1}(x_{j})+(1-{x_{j}^{2}})\psi ^{\prime }_{N-1}(x_{j})}\).
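A minimal sketch of assembling D(1) and D(2) from (3.18)–(3.19), assuming the nodes and weights are given. Since the prolate weights (3.14) require PSWF values not computed here, the check below uses polynomial barycentric weights, for which the resulting matrices differentiate polynomials of degree < N exactly. Note that the off-diagonal of (3.19) can be written compactly as \(2\textbf{D}^{(1)}_{ij}(\textbf{D}^{(1)}_{ii}-1/(x_{i}-x_{j}))\), since the inner sum there equals \(-\textbf{D}^{(1)}_{ii}\):

```python
import numpy as np

def diff_matrices(x, w):
    """Assemble D1 and D2 from the barycentric formulas (3.18)-(3.19)."""
    N = len(x)
    D1 = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                D1[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D1[i, i] = -np.sum(D1[i, :])          # "negative sum" diagonal of (3.18)
    D2 = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                # compact form of the off-diagonal entries of (3.19)
                D2[i, j] = 2.0 * D1[i, j] * (D1[i, i] - 1.0 / (x[i] - x[j]))
        D2[i, i] = -np.sum(D2[i, :])          # diagonal of (3.19)
    return D1, D2

# Polynomial check: with product-formula weights, D1 and D2 are exact
# on polynomials of degree < N.
x = np.linspace(-1.0, 1.0, 8)
w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(8)])
D1, D2 = diff_matrices(x, w)
```

With the prolate weights (3.14) substituted for w, the same routine produces the matrices used in the rest of the paper.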

Remark 4

It is obvious that the standard differentiation method (2.10) involves the first-order differentiation values \(\ell ^{\prime }_{j}(x_{i})\) of (2.9), which causes error propagation for large N. The barycentric prolate differentiation involves only the barycentric weights. Hence, the barycentric prolate differentiation form is stable even for large N, as shown in Fig. 6.

3.3 Convergence properties of barycentric prolate interpolation and differentiation

Results can also be obtained for the convergence properties of barycentric prolate interpolation and differentiation.

Lemma 3.1

[22] Let f be an entire function, ΓR be the boundary of the square [−RK,RK] × [−iRK,iRK], \(R_{K}>\frac {\pi }{2c}+\frac {8(c+1)}{c\cdot \lambda _{n}}\), and ψn(RK)≠ 0, and set \(C_{1}=\max \limits _{z\in {\Gamma }_{R}}|f(z)|\). Suppose Pn(x) is the interpolant (2.8) of f(x) at the Prolate-Gauss-Lobatto points; then it follows for χn > c2 and − 1 < x < 1 that

$$ |f(x)-P_{n}(x)|<\frac{2\cdot C_{1}\cdot|\lambda_{n}|}{{R^{2}_{K}}-1}\left( 1+4\cdot c\cdot R_{K}\cdot e^{-c\cdot R_{K}}\right), $$
(3.20)
$$ |f^{\prime}(x)-P_{n}^{\prime}(x)|<\left( 2+\widetilde{C}\cdot\sqrt{2}\cdot n^{3}+\frac{1}{R_{K}}\right)\frac{2\cdot C_{1}\cdot|\lambda_{n}|}{{R^{2}_{K}}-1}\left( 1+4\cdot c\cdot R_{K}\cdot e^{-cR_{K}}\right), $$
(3.21)

where \(\widetilde {C}\) is a constant.

Remark 5

We remark that the condition “Let f be the entire function” in Lemma 3.1 can be refined as “Let f be analytic in a region bounded by the square [−Rk,Rk] × [−iRk,iRk]” [18, 35].

Theorem 3.1

Let f be analytic in a region bounded by the square [−RK,RK] × [−iRK,iRK], ΓR be the boundary of this square, \(R_{K}>\frac {\pi }{2c}+\frac {8(c+1)}{c\cdot \lambda _{n}}\), and ψn(RK)≠ 0. Suppose Pn(x) and Gn(x) are the interpolants of f(x) at the Prolate-Gauss-Lobatto points by formulas (2.8) and (3.15), respectively; then it follows for χn > c2 and − 1 < x < 1 that

$$ |f(x)-G_{n}(x)|\leq\frac{\varepsilon_{n}}{1-\varepsilon_{n}}\|f\|_{\infty}+\frac{1}{1-\varepsilon_{n}}\|f-P_{n}\|_{\infty}, $$
(3.22)
$$ |f^{\prime}(x)-G_{n}^{\prime}(x)|\leq\frac{(\|f^{\prime}\|_{\infty}\varepsilon_{n}+\|f\|_{\infty}\varepsilon_{n}^{\prime}+ \|f^{\prime}-P_{n}^{\prime}\|_{\infty})(1+\varepsilon_{n})+\|f\|_{\infty}\varepsilon_{n}\varepsilon_{n}^{\prime}+\|f-P_{n} \|_{\infty}\varepsilon_{n}^{\prime}}{(1-\varepsilon_{n})^{2}}; $$
(3.23)

where \(\varepsilon _{n}:=\frac {2\cdot |\lambda _{n}|}{{R^{2}_{K}}-1}\left (1+4\cdot c\cdot R_{K}\cdot e^{-c\cdot R_{K}}\right )\) and \(\varepsilon _{n}^{\prime }=\left (2+\widetilde {C}\cdot \sqrt {2}\cdot n^{3}+\frac {1}{R_{K}}\right )\frac {2\cdot |\lambda _{n}|}{{R^{2}_{K}}-1}\left (1+4\cdot c\cdot R_{K}\cdot e^{-cR_{K}}\right )\).

Proof

By Lemma 3.1, let \(P_{n}^{[1]}(x)\) denote the interpolant of the constant function f(x) = 1 and write \(P_{n}^{[1]}(x)=1+E_{n}(x)\); then the error satisfies

$$ \begin{aligned} &|E_{n}(x)|\leq\frac{2\cdot|\lambda_{n}|}{{R^{2}_{K}}-1}\left( 1+4\cdot c\cdot R_{K}\cdot e^{-c\cdot R_{K}}\right):=\varepsilon_{n}, \\ &|E_{n}^{\prime}(x)|\leq\left( 2+\widetilde{C}\cdot\sqrt{2}\cdot n^{3}+\frac{1}{R_{K}}\right)\frac{2\cdot|\lambda_{n}|}{{R^{2}_{K}}-1}\left( 1+4\cdot c\cdot R_{K}\cdot e^{-cR_{K}}\right):=\varepsilon_{n}^{\prime}. \end{aligned} $$
(3.24)

It follows that

$$ -\varepsilon_{n}\leq E_{n}(x)\leq \varepsilon_{n}, \quad -\varepsilon_{n}^{\prime}\leq E_{n}^{\prime}(x)\leq \varepsilon_{n}^{\prime}. $$
(3.25)

Then, we have:

$$|f(x)-G_{n}(x)|= \left|f(x)-\frac{P_{n}(x)}{1+E_{n}(x)}\right|=\left|\frac{f(x)E_{n}(x)+(f(x)-P_{n}(x))}{1+E_{n}(x)}\right|$$

and

$$ \begin{array}{@{}rcl@{}} &&|f^{\prime}(x)-G_{n}^{\prime}(x)|\\ &=&\left|\frac{[f^{\prime}(x)E_{n} (x)+f(x)E_{n}^{\prime}(x)+(f^{\prime}(x)-P_{n}^{\prime}(x))](1+E_{n}(x))-(f(x)E_{n}(x)+(f(x)-P_{n}(x)))E_{n}^{\prime}(x)}{(1+ E_{n}(x))^{2}}\right|. \end{array} $$

Combining with (3.25), we obtain:

$$ |f(x)-G_{n}(x)|\leq\frac{\varepsilon_{n}}{1-\varepsilon_{n}}\|f\|_{\infty}+\frac{1}{1-\varepsilon_{n}}\|f-P_{n}\|_{\infty}, $$

and

$$ |f^{\prime}(x)-G_{n}^{\prime}(x)|\leq\frac{(\|f^{\prime}\|_{\infty}\varepsilon_{n}+\|f\|_{\infty}\varepsilon_{n}^{\prime}+\| f^{\prime}-P_{n}^{\prime}\|_{\infty})(1+\varepsilon_{n})+\|f\|_{\infty}\varepsilon_{n}\varepsilon_{n}^{\prime}+\|f-P_{n}\|_{\infty} \varepsilon_{n}^{\prime}}{(1-\varepsilon_{n})^{2}}, $$

where εn and \(\varepsilon _{n}^{\prime }\) are defined in (3.24). The proof is completed. □

Remark 6

Theorem 3.1 shows a close connection between the barycentric prolate interpolation (3.15) and the standard prolate interpolation (2.8). Roughly speaking, since λn satisfies (2.3), |f(x) − Gn(x)| decays exponentially with respect to n when c satisfies (2.4).

Remark 7

A function f may be less smooth than the case we have considered; numerical results illustrate that fast convergence may still occur. However, exactly how the convergence rate of barycentric prolate interpolation depends on the degree of smoothness of f appears to be an open question.

4 A well-conditioned prolate-collocation method

It is well known that the second-order prolate differentiation matrix is ill-conditioned even for moderately large N [19]. Fortunately, Wang et al. [32] offered a new basis leading to well-conditioned collocation linear systems. In this section, we give a different way to evaluate the Birkhoff interpolation basis, which generates a preconditioner Pin such that the eigenvalues of \(P_{in}D_{in}^{(2)}\) are nearly concentrated around one.

Consider the second-order BVPs with Dirichlet boundary conditions:

$$ f^{\prime\prime}(x)+r(x)f^{\prime}(x)+s(x)f(x)={g}(x),\quad x\in[-1,1],\quad f(\pm1)=f_{\pm1}. $$
(4.26)

Following the work of Wang et al. [32], the Birkhoff interpolant p(x) of f(x) can be uniquely determined by:

$$ p(x)=f(-1)B_{0}(x)+\sum\limits_{j=1}^{N-1}f^{\prime\prime}(x_{j})B_{j}(x)+f(1)B_{N}(x),\quad x\in[-1,1], $$
(4.27)

where \(\{B_{j}\}_{j=0}^{N}\) are the Birkhoff interpolation basis and satisfy:

$$ B_{0}(-1)=1, \quad B_{0}(1)=0, \quad B^{\prime\prime}_{0}(x_{i})=0, \quad 1\leq i\leq N-1; $$
(4.28)
$$ B_{j}(-1)=0, \quad B_{j}(1)=0, \quad B^{\prime\prime}_{j}(x_{i})=\delta_{ij}, \quad 1\leq i\leq N-1; $$
(4.29)
$$ B_{N}(-1)=0, \quad B_{N}(1)=1, \quad B^{\prime\prime}_{N}(x_{i})=0, \quad 1\leq i\leq N-1. $$
(4.30)

Proposition 4.1

Let \(\{x_{j}\}_{j=0}^{n}\) be a set of Prolate-Gauss-Lobatto points. The Birkhoff interpolation basis \(\{B_{j}\}_{j=0}^{N}\) defined in (4.28)–(4.30) is given by:

$$ B_{0}(x)=\frac{1-x}{2}, \quad B_{N}(x)=\frac{1+x}{2}; $$
(4.31)
$$ B_{j}(x)=\frac{1+x}{2}{\int}_{-1}^{1}(t-1)\widetilde{h}_{j}(t)dt+{\int}_{-1}^{x}(x-t)\widetilde{h}_{j}(t)dt,\quad 1\leq j\leq N-1, $$
(4.32)

where \(\{\widetilde {h}_{j}\}_{j=1}^{N-1}\) are the prolate barycentric interpolation basis at \(\{x_{j}\}_{j=1}^{N-1}\)

$$ \widetilde{h}_{j}(x)=\frac{\frac{\lambda_{j}}{x-x_{j}}}{\sum\limits_{k=1}^{N-1}\frac{\lambda_{k}}{x-x_{k}}}, \quad $$
(4.33)

and \(\lambda _{j}=\frac {1}{\psi ^{\prime }_{N-1}(x_{j})}\) for 1 ≤ j ≤ N − 1. Moreover,

$$B_{0}^{(1)}(x)=-B_{N}^{(1)}(x)=-\frac{1}{2},$$
$$B_{j}^{(1)}(x)=\frac{1}{2}{\int}_{-1}^{1}(t-1)\widetilde{h}_{j}(t)dt+{\int}_{-1}^{x}\widetilde{h}_{j}(t)dt,\quad 1\leq j\leq N-1.$$

We omit the proof, since it is very similar to that in [29]. In order to avoid the instability and inefficiency of the Lagrange interpolation, the barycentric form is used, as recommended in [1].

To construct the Birkhoff interpolation basis, we give the numerical scheme for the integral (4.32) at xi:

$$ B_{j}(x_{i})=\frac{1+x_{i}}{2}{\int}_{-1}^{1}(t-1)\widetilde{h}_{j}(t)dt+{\int}_{-1}^{x_{i}}(x_{i}-t)\widetilde{h}_{j}(t)dt,\quad 1\leq j\leq N-1, $$
(4.34)

Introducing the change of variable

$$ t=\frac{x_{i}+1}{2}y+\frac{x_{i}-1}{2}, $$
(4.35)

allows us to rewrite the definite integrals (4.34) further as

$$ \begin{array}{@{}rcl@{}} B_{j}(x_{i})&=&\frac{1+x_{i}}{2}{\int}_{-1}^{1}(t-1)\widetilde{h}_{j}(t)dt+\frac{1+x_{i}}{2}{\int}_{-1}^{1}\left( x_{i}-\frac{x_{i}+1}{2}y- \frac{x_{i}-1}{2}\right)\\ &&\widetilde{h}_{j}\left( \frac{x_{i}+1}{2}y+\frac{x_{i}-1}{2}\right)dy,\quad 1\leq j\leq N-1. \end{array} $$
(4.36)

The integrands in (4.36) can be computed accurately using Gauss quadrature at Legendre points. Based on the fast \(\mathcal {O}(N)\) algorithm for the computation of Gaussian quadrature due to Hale and Townsend [10], we obtain a fast scheme for the Birkhoff interpolation bases \(\{B_{j}\}_{j=0}^{N}\) and \(\{B^{(1)}_{j}\}_{j=0}^{N}\).
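As an illustration of the quadrature step, the following Python sketch (our helper, with a smooth stand-in integrand g in place of \(\widetilde{h}_{j}\)) computes the second integral in (4.34) via the map (4.35) using Gauss–Legendre nodes from NumPy:

```python
import numpy as np

def second_moment(xi, g, n_quad=20):
    """Approximate int_{-1}^{xi} (xi - t) g(t) dt, the second term of (4.34),
    via the change of variable (4.35): t = (xi+1)/2 * y + (xi-1)/2."""
    y, wq = np.polynomial.legendre.leggauss(n_quad)
    t = 0.5 * (xi + 1.0) * y + 0.5 * (xi - 1.0)
    # dt = (xi+1)/2 dy, so the Jacobian multiplies the quadrature sum,
    # exactly as in the prefactor of the second integral in (4.36).
    return 0.5 * (xi + 1.0) * np.sum(wq * (xi - t) * g(t))

# Check against the closed form for g(t) = t:
# int_{-1}^{x} (x - t) t dt = x^3/6 - x/2 - 1/3.
val = second_moment(0.5, lambda t: t)
```

In the actual scheme, g would be the barycentric basis function \(\widetilde{h}_{j}\) of (4.33) evaluated at the mapped quadrature nodes.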

Let \(b^{(k)}_{ij}:=B^{(k)}_{j}(x_{i})\), and define the matrices

$$ \textbf{B}^{(k)}=(b^{(k)}_{ij})_{0\leq i,j\leq N},\quad \textbf{B}^{(k)}_{in}=(b^{(k)}_{ij})_{1\leq i,j\leq N-1}. $$
(4.37)

Due to (4.27), hk(x) in (3.16) can be approximated by

$$ h_{k}(x)\approx\sum\limits_{j=1}^{N-1}h^{\prime\prime}_{k}(x_{j})B_{j}(x),\quad 1\leq k\leq N-1. $$
(4.38)

Since hj(xi) = δij, it follows that

$$ \textbf{B}_{in}\textbf{D}_{in}^{(2)}\approx\textbf{I}_{N-1}, $$
(4.39)

where IM is the M × M identity matrix, and \(\textbf {D}_{in}^{(2)}\) is the interior block (1 ≤ i,j ≤ N − 1) of \(\textbf{D}^{(2)}\) in (3.17). We depict in Fig. 1 the distribution of the largest and smallest eigenvalues of \(\textbf {B}_{in}\textbf {D}_{in}^{(2)}\) at the Prolate-Gauss-Lobatto points. This agrees with (4.39).

Fig. 1
figure 1

Distribution of the largest and smallest eigenvalues of \(\textbf {B}_{in}\textbf {D}_{in}^{(2)}\) for various N = 10:6:210 (c = N/2)

As we know, the usual collocation scheme is to find f = (f(x1),…,f(xN− 1))T by solving

$$ (\textbf{D}_{in}^{(2)}+\boldsymbol{\Lambda}_{r}\textbf{D}_{in}^{(1)}+\boldsymbol{\Lambda}_{s})\textbf{f}=\textbf{g}- \mathbf{f}_{b}, $$
(4.40)

where g = (g(x1),…,g(xN− 1))t, Λr = diag(r(x1),…,r(xN− 1)), Λs = diag(s(x1),…,s(xN− 1)),

$$ \mathbf{f_{b}}=f_{-}(h_{0}^{(2)}(x_{j})+r(x_{j})h_{0}^{(1)}(x_{j}))+f_{+}(h_{N}^{(2)}(x_{j})+r(x_{j})h_{N}^{(1)}(x_{j})), \quad 1\leq j\leq N-1. $$

It is well known that the coefficient matrix of the usual collocation method has a high condition number. Below, let us consider preconditioning methods for solving the BVP. On the one hand, since \(\textbf{B}_{in}\textbf{D}_{in}^{(2)}\approx \textbf{I}_{N-1}\), the matrix Bin can be used to precondition the ill-conditioned system via:

$$ \textbf{B}_{in}(\textbf{D}_{in}^{(2)}+\boldsymbol{\Lambda}_{r}\textbf{D}_{in}^{(1)}+\boldsymbol{\Lambda}_{s})\textbf{f}=\textbf{B}_{in} (\textbf{g}-\mathbf{f}_{b}), $$
(4.41)

where \(\mathbf{f}_{b}\) is the same boundary term as defined below (4.40).

On the other hand, recalling the formula (4.27), one can directly use {Bk} as the basis. Then, the collocation scheme for the BVP can be expressed as:

$$ (\mathbf{I}_{N-1}+\boldsymbol{\Lambda}_{r}\mathbf{B}_{in}^{(1)}+\boldsymbol{\Lambda}_{s}\mathbf{B}_{in})\mathbf{u}=\mathbf{g}-f_{-} \mathbf{u}_{-}-f_{+}\mathbf{u}_{+}, $$
(4.42)

where \(\mathbf {u}=(f^{\prime \prime }_{N}(x_{1}),f^{\prime \prime }_{N}(x_{2}),\ldots ,f^{\prime \prime }_{N}(x_{N-1}))^{T}\), and

$$ \mathbf{u}_{-}=\left( -\frac{r(x_{1})}{2}+s(x_{1})\frac{1-x_{1}}{2},\ldots,-\frac{r(x_{N-1})}{2}+s(x_{N-1})\frac{1-x_{N-1}}{2} \right)^{T}, $$
$$ \mathbf{u}_{+}=\left( \frac{r(x_{1})}{2}+s(x_{1})\frac{1+x_{1}}{2},\ldots,\frac{r(x_{N-1})}{2}+s(x_{N-1})\frac{1+x_{N-1}}{2} \right)^{T}. $$

We can obtain u by solving this system, and then recover f = (fN(x1),…,fN(xN− 1))T from

$$ \mathbf{f}=\mathbf{B}_{in}\mathbf{u}+f_{-}\mathbf{b}_{0}+f_{+}\mathbf{b}_{N}, $$
(4.43)

where bj = (Bj(x1),Bj(x2),…,Bj(xN− 1))T for j = 0,N.

Remark 8

Obviously, the new system (4.42) does not involve direct multiplication by the preconditioner, so the round-off errors incurred in forming differentiation matrices can be alleviated.

Remark 9

The use of the Birkhoff interpolation basis for deriving a preconditioner is similar to the preconditioning technique in [32]. However, [32] searches for the Birkhoff interpolation basis {Bj(x)} through expansion in a different finite-dimensional space and then solves for the coefficients via the interpolation conditions; this process involves inverting a matrix of PSWF values, which is time-consuming. Our idea of constructing the basis {Bj(x)} in (4.31)–(4.32) is actually inspired by the polynomial-based algorithms in [29], and the new insights reside in two aspects. First, in order to avoid the instability of the Lagrange interpolation, the barycentric form is used. Second, through the change of variable, the integrals in (4.32) are computed by the fast Gauss quadrature of Hale and Townsend.

5 Numerical tests

In this section, we illustrate the numerical results of this paper. All numerical results are obtained using Matlab R2014a on a desktop (4.0 GB RAM, 2 Core2 (64-bit) processors at 3.17 GHz) with the Windows 7 operating system.

Example 1

Figure 2 illustrates the convergence of the barycentric prolate interpolation formulas for the two analytic functions:

$$ f(x)=e^{\sin(6x)} $$

and

$$ f(x)=2\sin(10x). $$
Fig. 2
figure 2

Convergence rate for the barycentric prolate interpolation methods of two analytic functions using different values of c

For each n, the error is defined by

$$ \max\limits_{x\in [-1,1]}|f(x)-G_{n}(x)|, $$

which is measured at 1000 random points in [− 1,1]. As we can see, the convergence is exponential and almost indistinguishable for different c. Moreover, it is known that the optimal c depends on the function being approximated [7]. In the following, we take c = n/2 for general functions, as recommended in [30, 32].

Example 2

For the functions:

$$ f(x)=\sin(25x), $$
(5.44)
$$ f(x)=\frac{1}{1 + 25x^{2}}, $$
(5.45)
$$ f(x)=e^{x}/\cos(x), $$
(5.46)

and

$$ f(x)=e^{-1/x^{2}}, $$
(5.47)

we focus on comparing the new barycentric prolate interpolation (3.15) (with c = n/2) to the standard interpolation (2.8) in terms of the approximation error in the \(L^{\infty }\) norm, measured at 1000 random points in [− 1,1]. Numerical results are shown in Figs. 3 and 4. It is seen that the errors of both approaches decrease very fast. Furthermore, the barycentric prolate interpolation has better stability than the Lagrange formulation for large numbers of points.

Fig. 3
figure 3

Convergence rate for the different interpolation methods in the semilogy scale for N = 11:6:1211

Fig. 4
figure 4

Convergence rate for the different interpolation methods in the semilogy scale for N = 11:6:1211

Example 3

For the wave functions \(f(x)=\frac {\sin \limits (25x)}{2-x^{2}}\) and \(f(x)=(\cos \limits (25x)+\sin \limits (x))/(1+4x^{2})\), we compare the barycentric prolate interpolation (c = n/2) with the barycentric interpolation in the polynomial case, whose nodes and barycentric weights are computed in the Chebfun system by the command legpts [24]. Figure 5 illustrates that the barycentric prolate interpolation yields spectrally accurate results using even fewer points than the barycentric interpolation in the polynomial case.

Fig. 5
figure 5

Errors of the barycentric prolate interpolation formula and the barycentric interpolation formula

Example 4

We compare the absolute errors of the derivatives for

$$ f(x)=e^{\sin(3x)} $$
(5.48)

at the Prolate-Gauss-Lobatto points by the barycentric prolate differentiation (3.18)–(3.19) and the standard method (2.9)–(2.10). Results of these calculations are shown in Fig. 6. As can be seen, since the standard method (2.10) involves the first-order differentiation values, it causes error propagation for large n. The good performance of the prolate barycentric differentiation motivates the applications in the next section.

Fig. 6
figure 6

Maximum absolute error of the first-order and the second-order differentiation of \(f(x)=e^{\sin \limits (3x)}\) (c = n/2) by the standard differentiation method and the barycentric prolate differentiation method in the semilogy scale for N = 1:2:1003

6 Application

Different from the usual collocation scheme using the standard Lagrange differentiation, the barycentric prolate differentiation (3.18)–(3.19), combined with the usual spectral collocation method and GMRES, has been implemented and tested on a highly oscillatory problem and a two-dimensional Helmholtz problem. A comparison with the Chebyshev (CC) points-based method is given for the case where the solution is highly oscillatory.

Example 5

We first consider a problem whose solution is highly oscillatory:

$$ \begin{aligned} u^{\prime\prime}(x)+5u^{\prime}(x)+10000u(x)=-500\cos(100x)e^{-5x}, \quad x\in [0,1], \end{aligned} $$
(6.49)
$$ \begin{aligned} u(0)=0,\quad u(1)=\sin(100)e^{-5}. \end{aligned} $$
(6.50)

The exact solution is

$$ \begin{aligned} u(x)=\sin(100x)e^{-5x}. \end{aligned} $$
(6.51)

The behavior of the prolate barycentric differentiation matrix is demonstrated in Fig. 7. It is clear that this method is rapidly convergent and stable, and performs better than the usual collocation method based on CC points.

Fig. 7
figure 7

The exact solution of Example 5 (Left). The convergence rate at C-C points and PGL points (c = N/2) when N = 46:2:100 in the log-log scale

Example 6

We extend the barycentric prolate pseudospectral method to 2D Helmholtz problem [25], which arises in the analysis of wave propagation:

$$ u_{xx}+u_{yy}+k^{2}u=f(x,y), \quad -1<x,y<1, $$
(6.52)

where u = 0 on the boundary and k is a real parameter. For such a problem, we set up a grid based on the Prolate-Gauss-Lobatto points independently in each direction, called a tensor product grid. We solve the problem for the particular choices k = 9 and f(x,y) = exp(− 10[(y − 1)2 + (x − 1/2)2]). The solution appears as a mesh plot in Fig. 8. While the value u(0,0) is accurate to nine digits on the Chebyshev grid [25] when N = 24, the new barycentric prolate differentiation scheme (3.19) achieves eleven digits of accuracy with the same number of points. On the right side of Fig. 8, the absolute error at u(0,0) is illustrated for N = 4:2:38, which shows the fast convergence rate at the Prolate-Gauss-Lobatto points.
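The tensor-product construction can be sketched as follows. Since the prolate matrices require PSWF values not reproduced here, this Python sketch (ours, with a manufactured-solution check in place of the paper's f) uses the Chebyshev differentiation matrix as a stand-in for D(2); the Kronecker-product assembly of the 2D Helmholtz operator is identical for the PGL grid:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points (a stand-in here for
    the prolate differentiation matrix; the assembly below is the same)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def helmholtz_2d(N, k, f):
    """Solve u_xx + u_yy + k^2 u = f on (-1,1)^2 with u = 0 on the
    boundary, collocated on an (N+1) x (N+1) tensor-product grid."""
    D, x = cheb(N)
    D2 = (D @ D)[1:N, 1:N]                  # restrict to interior nodes
    I = np.eye(N - 1)
    L = np.kron(I, D2) + np.kron(D2, I) + k**2 * np.eye((N - 1) ** 2)
    X, Y = np.meshgrid(x[1:N], x[1:N])      # interior tensor grid
    u = np.linalg.solve(L, f(X, Y).ravel())
    return u.reshape(N - 1, N - 1), X, Y
```

A convenient correctness check is the manufactured solution u = sin(πx)sin(πy), which vanishes on the boundary and gives f = (k² − 2π²)u.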

Fig. 8
figure 8

Solution of the Helmholtz problem (N = 24, c = 12) (left) and the absolute error at u(0,0) by differentiation at CC points and PGL points in the semilogy scale (c = N/2)(right)

Example 7

We consider

$$ u^{\prime\prime}(x)-(1+\sin(x))u^{\prime}(x)+e^{x}u(x)=f(x), \quad x\in(-1,1);\quad u(\pm1)=1, $$
(6.53)

with the exact solution \(u(x)=e^{(x^{2}-1)/2}\). Table 1 compares the condition numbers and errors of the spectral collocation (SC) scheme (4.40), the direct preconditioned (M-PC) scheme (4.41), and the new basis preconditioned collocation (B-PC) scheme (4.42). We also report the iteration counts for solving the systems by GMRES. Table 1 clearly indicates that the two preconditioned schemes are well-conditioned and that the B-PC scheme has the desired performance.
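A sketch of the plain collocation scheme (4.40) for this example, again with the Chebyshev differentiation matrix standing in for the prolate matrices (the assembly is identical); the right-hand side f is manufactured from the exact solution, using u′ = xu and u″ = (1 + x²)u. The helper names are ours:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points, used here as a
    stand-in for the prolate matrices D^(1) and D^(2)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def solve_bvp(N):
    """Collocation scheme (4.40) for Example 7:
    u'' - (1+sin x) u' + e^x u = f,  u(+-1) = 1,
    with f manufactured from u = exp((x^2-1)/2). Returns the max error."""
    D, x = cheb(N)
    D2 = D @ D
    r = -(1.0 + np.sin(x))
    s = np.exp(x)
    u_exact = np.exp((x**2 - 1.0) / 2.0)
    f = u_exact * (1.0 + x**2 + r * x + s)    # u'' + r u' + s u
    A = D2 + np.diag(r) @ D + np.diag(s)
    idx = np.arange(1, N)                      # interior nodes
    # move the known boundary values u(+-1) = 1 to the right-hand side
    rhs = f[idx] - A[np.ix_(idx, [0, N])] @ np.array([1.0, 1.0])
    u = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
    return np.max(np.abs(u - u_exact[idx]))
```

This direct solve corresponds to the SC scheme; the M-PC and B-PC variants additionally apply Bin as in (4.41)–(4.42).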

Table 1 Condition number, absolute errors, and iteration steps of spectral collocation method (SC), direct preconditioned scheme (4.41), and the new basis preconditioned collocation (B-PC) scheme (4.42)

7 Conclusion

In this paper, we have developed a new scheme for prolate interpolation and prolate spectral differentiation. The solver is based on the barycentric interpolation, which allows for stable approximation, and the error analysis of the barycentric prolate interpolation and differentiation is given. Moreover, a new preconditioning technique is proposed for the usual prolate-collocation scheme. The numerical examples demonstrate the performance of the proposed algorithms.