1 Introduction

Black holes in general relativity are simple spacetime objects, fully specified by only a handful of constants. When the spacetime around a black hole is disturbed by the complex distributions of matter and fields that surround it in nature, these disturbances generically evolve as damped oscillations known as quasinormal modes (QNMs).

Quasinormal modes are the characteristic ringing of the spacetime around a black hole. They are independent of the initial excitation that generated them and depend only on the parameters of the black hole. A wealth of information can be extracted from the quasinormal mode spectrum of a black hole, so it serves as a probe of the validity of general relativity and its extensions in the strong-gravity regime. Two excellent reviews on the topic with an emphasis on astrophysics can be found in [1] and [2]. A review on higher dimensional black holes and their connections to strongly coupled quantum fields can be found in [3].

In general, the quasinormal mode spectrum of a black hole is obtained by solving an ordinary differential equation (ODE) eigenvalue problem, which usually takes the form of a Schrödinger-like equation,

$$\begin{aligned} -\dfrac{\text {d} ^2 R }{\text {d} r_*^2} + V(r, \omega )R = \omega ^2 R. \end{aligned}$$
(1)

where \(r_*\) is called a tortoise coordinate.
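For concreteness (a standard example, not spelled out at this point in the text): for a Schwarzschild black hole of mass M in units \(G = c = 1\), the tortoise coordinate is defined through \(\text {d} r_*/\text {d} r = (1 - 2M/r)^{-1}\), which integrates, up to an additive constant, to

$$\begin{aligned} r_* = r + 2M \ln \left( \frac{r}{2M} - 1 \right) , \end{aligned}$$

mapping the horizon \(r = 2M\) to \(r_* \rightarrow -\infty \) and spatial infinity to \(r_* \rightarrow +\infty \).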

Various numerical methods have been developed to solve (1), such as the WKB approach, shooting methods, continued-fraction methods, and the use of Pöschl-Teller potentials. A review article with an emphasis on this topic can be found in [4]. In this paper, we shall be solving (1) using a pseudospectral method.

The use of spectral and pseudospectral methods in gravitational problems is well established [5, 6], and they have been applied in numerous numerical experiments such as [7,8,9], to name a few. Here we extend this library of methods to include the Bernstein polynomial basis, whose particular properties lend themselves to mixed-type boundary-value problems.

Likewise, Bernstein polynomials have been used as a function basis in the numerical solution of various differential [10,11,12,13,14], fractional differential [15], integral [16,17,18,19], integro-differential [20, 21] and fractional integro-differential [22] equations. Multiple methods have been deployed in this context, such as the Bernstein–Petrov–Galerkin (BPG) method, the collocation method, operational matrices and direct integration. Our work extends the range of the Bernstein basis by exploring its use in ODE eigenvalue problems.

The aim of this paper is two-fold. First, it is a primer on how Bernstein polynomials (BPs) may be used for boundary-value problems in a general relativity setting. Second, it is an introduction to a Mathematica package we call SpectralBP that implements the pseudospectral method based on Bernstein polynomials. For examples and benchmarks, we have applied SpectralBP to a selection of eigenvalue problems in quantum mechanics and general relativity: the infinite square well, harmonic and anharmonic oscillators, and quasinormal modes of various fields in a Schwarzschild black hole. Particularly noteworthy is that SpectralBP is able to find, with modest resources, eigenvalues that other numerical methods find only with difficulty, such as the algebraically special modes for gravitational perturbations of the Schwarzschild geometry [23, 24]. As will be explained below, this should be expected of any spectral method for eigenvalue problems.

The method introduced in this paper, and the accompanying Mathematica package, has seen use in general relativity [25,26,27,28,29,30,31,32,33] and quantum mechanics [34]. It has been particularly useful in finding purely imaginary quasinormal modes [28, 32], finding new branches of solutions hitherto unknown [26, 29], and revealing novel and critical behaviors like spectrum bifurcation [27] and instability [30]. In quantum mechanics applications, it has been shown to generate exceedingly accurate solutions where other methods require vast resources in memory and compute time [34].

To fix ideas, we now state the class of problems we consider. Let \(\hat{L}(u,\omega )\) be an \(n \times n\) matrix of linear differential operators, dependent on a single independent variable u and polynomial in the eigenvalue \(\omega \) of some maximal integer order m,

$$\begin{aligned} \begin{array}{l} \hat{L}_{i,j} (u,\omega ) = \hat{f}_{i,j,0} + \omega \hat{f}_{i,j,1} + \dots + \omega ^m \hat{f}_{i,j,m}, \\ \hat{f}_{i,j,k} = f_{i,j,k}(u, \dfrac{\text {d} }{\text {d} u},\dfrac{\text {d} ^2 }{\text {d} u^2},\dots ), \end{array} \end{aligned}$$
(2)

and let \(\Phi (u)\) be a vector of n functions dependent on u

$$\begin{aligned} \Phi (u) = (\phi _1 (u), \phi _2 (u), \dots , \phi _n(u))^T. \end{aligned}$$
(3)

We wish to solve the following eigenvalue problem for \(\omega \),

$$\begin{aligned} \hat{L}(u,\omega ) \Phi (u) = 0, \end{aligned}$$
(4)

provided the problem satisfies the following criteria:

  1. The domain of the solution is compact, \(u \in [a,b]\), and the solution is analytic over the whole domain.

  2. The boundary conditions for all eigenfunctions \(\psi _i(u)\) specify that \(\lim _{u \rightarrow a} \psi _i(u) \sim (u-a)^{q}\) and \(\lim _{u \rightarrow b} \psi _i(u) \sim (b-u)^{r}\) for some \(q, r \ge 0\).

  3. The eigenvalues \(\omega \) form a discrete spectrum.

The calculation of the bound state energies of quantum mechanical particles and of the quasinormal modes of black hole spacetimes are examples of such problems.

To solve (4) we use a pseudospectral method, in which the solution of the differential equation is approximated as a weighted sum of a set of basis functions, say \(\{\phi _i(r)\}\), as in,

$$\begin{aligned} R(r) \approx \sum _i C_i \phi _i(r). \end{aligned}$$
(5)

This turns the initial differential problem into a system of algebraic equations that the set of expansion coefficients \(\{C_i\}\) must satisfy. Since (4) is linear, these algebraic equations can generically be cast as a matrix equation in the form of a generalized eigenvalue problem (GEP),

$$\begin{aligned} \varvec{M}(\omega ) \varvec{C} = 0. \end{aligned}$$
(6)

We have developed a Mathematica package we call SpectralBP, written to streamline the numerical solution of ODE eigenvalue problems. The package utilizes Bernstein polynomials and exploits the properties that make them particularly powerful in the context of boundary value problems.

A similar Mathematica package, QNMSpectral, can be found in [35]; it implements a pseudospectral method using a Chebyshev polynomial basis. This open-source package served as the initial inspiration for our work, and so the two codes unavoidably overlap in some of their functionality. We developed SpectralBP to be a superset of QNMSpectral, with the intent of developing a spectral solver that is not tailored solely to quasinormal mode calculations. It also serves to introduce the Bernstein method to the general relativity community. Aside from methods specifically tied to the Bernstein basis, SpectralBP also implements a novel algorithm for efficiently tackling transcendental and polynomial eigenvalue problems, which we shall discuss in detail in a future paper [36].

This paper is organized in two parts. We first establish how the Bernstein polynomial basis may be used in ODE eigenvalue problems with boundary conditions. In Sect. 2, we fix our notation and enumerate the properties of the Bernstein basis relevant to the method. In Sect. 3, we explain how the Bernstein basis handles boundary conditions. In Sect. 4, we review standard methods for translating (4) into a generalized eigenvalue problem using a collocation method. We then enumerate the advantages and disadvantages of the Bernstein polynomial basis compared to other bases, such as Fourier or Chebyshev, in Sect. 5.

The rest of the paper involves the implementation and application of SpectralBP. Section 6 introduces the SpectralBP package and its general features. We then show in detail how SpectralBP can be used in Sects. 7 and 8, introducing functionalities of the package by working out some model problems in quantum mechanics and calculating quasinormal modes respectively. In Sect. 9, we look at the algebraically special modes of the Regge–Wheeler equation. In the final section, we show miscellaneous details implemented in SpectralBP: closed-form expressions of the spectral matrices, matrix inversion, and eigenfunction calculation and manipulation.

2 Bernstein polynomials

We review some of the key properties of Bernstein polynomials. We shall not be exhaustive, selecting only those properties useful to the development of SpectralBP. This section also fixes our notation for the rest of the paper. A useful reference is [12], which describes all of the properties listed here for a Bernstein basis over the interval [0, 1]. The generalization to a Bernstein basis over an arbitrary interval [a, b] is straightforward.

The Bernstein basis of degree N defined over the interval \(u \in [a,b]\) is a set of \(N+1\) polynomials, \(\{B^N_k(u)\}\), given by

$$\begin{aligned} \begin{array}{l} B^N_k(u) = \begin{pmatrix} N \\ k \end{pmatrix} \dfrac{(u-a)^k (b-u)^{N-k}}{(b-a)^N}, \\ k = 0,1,\dots ,N, \qquad \begin{pmatrix} N \\ k \end{pmatrix} = \dfrac{N!}{(k)!(N-k)!}. \end{array} \end{aligned}$$
(7)

For convenience, we also set \(B^N_k(u) = 0\) and \(\begin{pmatrix} N \\ k \end{pmatrix} = 0\) when either \(k < 0\) or \(k > N\).

Fig. 1 Bernstein polynomials of degree 10

The Bernstein basis of degree 10 is shown in Fig. 1. It is clear that at the boundaries \(u=a\) and \(u=b\), Bernstein polynomials satisfy

$$\begin{aligned} B^N_k(a) = \delta _{k,0}, \qquad B^N_k(b) = \delta _{k,N}. \end{aligned}$$
(8)
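As a quick check of (7) and (8), the basis can be built directly from the definition. The following Mathematica snippet is purely illustrative (the helper bern is ours and is not part of SpectralBP):

  bern[nn_, k_, u_, a_, b_] :=
    If[0 <= k <= nn, Binomial[nn, k] (u - a)^k (b - u)^(nn - k)/(b - a)^nn, 0]

  (* boundary values, Eq. (8): {1, 0, ..., 0} at u = a and {0, ..., 0, 1} at u = b *)
  Table[bern[10, k, u, a, b], {k, 0, 10}] /. u -> a
  Table[bern[10, k, u, a, b], {k, 0, 10}] /. u -> b

  (* the degree-10 basis over [0, 1], as plotted in Fig. 1 *)
  Plot[Evaluate@Table[bern[10, k, x, 0, 1], {k, 0, 10}], {x, 0, 1}]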

The derivative of a Bernstein polynomial of degree N can be expressed in terms of Bernstein polynomials of degree \(N-1\), satisfying the following recurrence relation,

$$\begin{aligned} \dfrac{\text {d} B^N_k }{\text {d} u} = \dfrac{N}{b-a} \left( B^{N-1}_{k-1}(u) - B^{N-1}_{k}(u) \right) . \end{aligned}$$
(9)

Repeated differentiation also gives

$$\begin{aligned} \dfrac{\text {d} ^m B^N_k }{\text {d} u^m}= & {} \dfrac{1}{(b-a)^m} \dfrac{N!}{(N-m)!} \nonumber \\{} & {} \times \sum _{l=0}^m (-1)^l \begin{pmatrix} m \\ l \end{pmatrix} B^{N-m}_{k+l-m}(u). \end{aligned}$$
(10)

A Bernstein polynomial of degree N can be expressed as a sum of Bernstein polynomials of a higher degree [37],

$$\begin{aligned} B^N_k(u) = \sum _{j=0}^{m} \dfrac{\begin{pmatrix} N \\ k \end{pmatrix} \begin{pmatrix} m \\ j \end{pmatrix}}{\begin{pmatrix} N+m \\ k+j \end{pmatrix}} B^{N+m}_{k+j}(u). \end{aligned}$$
(11)

The integrals of the basis polynomials in a Bernstein basis of degree N over [a, b] are all equal,

$$\begin{aligned} \int _a^b B^N_k(u) du = \dfrac{b-a}{N+1}. \end{aligned}$$
(12)

Finally, the product of two Bernstein polynomials can be expressed as a single Bernstein polynomial of higher degree,

$$\begin{aligned} B^N_j (u) B^M_k (u) = \frac{\begin{pmatrix} N \\ j \end{pmatrix}\begin{pmatrix} M \\ k \end{pmatrix}}{\begin{pmatrix} N+M \\ j+k \end{pmatrix}} B^{N+M}_{j+k} (u). \end{aligned}$$
(13)
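These identities are straightforward to verify symbolically for particular degrees. A minimal Mathematica check (using the same illustrative bern helper as above, repeated so the snippet is self-contained; each Simplify should return True or a list of zeros):

  bern[nn_, k_, u_, a_, b_] :=
    If[0 <= k <= nn, Binomial[nn, k] (u - a)^k (b - u)^(nn - k)/(b - a)^nn, 0]

  (* derivative, Eq. (9), for N = 5, k = 2 *)
  Simplify[D[bern[5, 2, u, a, b], u] ==
    5/(b - a) (bern[4, 1, u, a, b] - bern[4, 2, u, a, b])]

  (* degree elevation, Eq. (11), for N = 3, k = 1, m = 2 *)
  Simplify[bern[3, 1, u, a, b] ==
    Sum[Binomial[3, 1] Binomial[2, j]/Binomial[5, 1 + j] bern[5, 1 + j, u, a, b], {j, 0, 2}]]

  (* equal integrals, Eq. (12): every entry should simplify to zero *)
  Simplify[Table[Integrate[bern[5, k, u, a, b], {u, a, b}] - (b - a)/6, {k, 0, 5}]]

  (* product formula, Eq. (13) *)
  Simplify[bern[3, 1, u, a, b] bern[4, 2, u, a, b] ==
    Binomial[3, 1] Binomial[4, 2]/Binomial[7, 3] bern[7, 3, u, a, b]]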

3 Boundary conditions

When the Bernstein basis is used in mixed-type boundary-value problems, we shall see that the boundary conditions act only on a subset of the basis. This lets us solve the boundary conditions separately from the differential equation, which is what makes the Bernstein basis particularly useful for such problems. For the particular boundary-value problem described in Sect. 1, the Bernstein method reduces to a form in which each basis function satisfies the boundary conditions.

We begin by approximating the solution \(\phi (u)\) as a weighted sum of Bernstein polynomials,

$$\begin{aligned} \phi (u) \approx \sum _{k=0}^N C_{k} B^N_k(u). \end{aligned}$$
(14)

Let there be q boundary conditions at \(u=a\) and r boundary conditions at \(u=b\) of the following form,

$$\begin{aligned} \hspace{20pt} \begin{array}{l} \phi (a) = a_0, \dfrac{\text {d} \phi (a) }{\text {d} u} = a_1, \dots , \dfrac{\text {d} ^{q-1} \phi (a) }{\text {d} u^{q-1}} = a_{q-1}, \\ \phi (b) = b_0, \dfrac{\text {d} \phi (b) }{\text {d} u} = b_1, \dots , \dfrac{\text {d} ^{r-1} \phi (b) }{\text {d} u^{r-1}} = b_{r-1}. \end{array} \end{aligned}$$
(15)

These constants may be interrelated. A common example would be a two-point boundary value problem of a second-order differential equation subject to mixed linear boundary conditions,

$$\begin{aligned} \,\, \begin{array}{r} c_{1,k} \phi (a) + c_{2,k} \phi '(a) + c_{3,k} \phi (b) + c_{4,k} \phi '(b) = c_{5,k}, \\ (k = 1,2,3,4), \end{array} \end{aligned}$$
(16)

which fixes \(a_0, a_1, b_0,\) and \(b_1\).

Combining (10) and (14), the \(m{\textrm{th}}\) derivative of \(\phi (u)\) is given by

$$\begin{aligned} \dfrac{\text {d} ^m \phi }{\text {d} u^m} = \sum _{k=0}^N \sum _{l=0}^m \dfrac{C_{k}}{(b-a)^m} \dfrac{N!}{(N-m)!} (-1)^l \begin{pmatrix} m \\ l \end{pmatrix} B^{N-m}_{k+l-m}(u). \end{aligned}$$
(17)

We use (8) to simplify evaluating \(\phi (u)\) at the boundaries. At \(u=a\) and \(u=b\), we get

$$\begin{aligned} \dfrac{\text {d} ^m\phi }{\text {d} u^m} \bigg |_a = \dfrac{1}{(b-a)^m} \dfrac{N!}{(N-m)!} \sum _{l=0}^m C_{m-l} (-1)^l \begin{pmatrix} m \\ l \end{pmatrix} \end{aligned}$$
(18)

and

$$\begin{aligned} \dfrac{\text {d} ^m \phi }{\text {d} u^m} \bigg |_b = \dfrac{1}{(b-a)^m} \dfrac{N!}{(N-m)!} \sum _{l=0}^m C_{N-l} (-1)^l \begin{pmatrix} m \\ l \end{pmatrix}. \end{aligned}$$
(19)

Thus, the boundary conditions act only on the first q and last r elements of the Bernstein basis, whose expansion coefficients are fixed via the matrix equations

$$\begin{aligned} {\textbf {A}} {\textbf {C}} = {\textbf {a}}, \qquad {\textbf {B}} \tilde{{\textbf {C}}} = {\textbf {b}}, \end{aligned}$$
(20)

where

$$\begin{aligned} \begin{array}{l} {\textbf {A}}_{l,m} = \dfrac{1}{(b-a)^l} \dfrac{N!}{(N-l)!}(-1)^{l-m} \begin{pmatrix} l \\ l-m \end{pmatrix},\\ {\textbf {C}}_m = C_m, \, {\textbf {a}}_l = a_l, \\ m,l \in \{0,1,\dots ,q-1\}, \end{array} \end{aligned}$$
(21)

and

$$\begin{aligned} \begin{array}{l} {\textbf {B}}_{l,m} = \dfrac{1}{(b-a)^l} \dfrac{N!}{(N-l)!}(-1)^{m} \begin{pmatrix} l \\ m \end{pmatrix},\\ \tilde{{\textbf {C}}}_m = C_{N-m}, \, {\textbf {b}}_l = b_l, \\ m,l \in \{0,1,\dots ,r-1\}. \end{array} \end{aligned}$$
(22)
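As a concrete illustration (a special case worked out here for orientation, not taken from the text): for \(q = 2\), Eq. (21) together with (18) gives

$$\begin{aligned} \begin{pmatrix} 1 &{} 0 \\ -\dfrac{N}{b-a} &{} \dfrac{N}{b-a} \end{pmatrix} \begin{pmatrix} C_0 \\ C_1 \end{pmatrix} = \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} \quad \Rightarrow \quad C_0 = a_0, \qquad C_1 = a_0 + \dfrac{(b-a)\, a_1}{N}, \end{aligned}$$

and in the homogeneous case \(a_0 = a_1 = 0\) both coefficients vanish, which is precisely the fall-off (25) with \(q = 2\).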

When the differential operator is linear, the modified ODE eigenvalue problem

$$\begin{aligned} \hat{L}(u,\omega )\psi (u) = g(u,\omega ), \quad \psi (u) = \sum _{k=q}^{N-r} C_k B^N_k(u) \end{aligned}$$
(23)

determines the rest of the expansion coefficients, where the residual function \(g(u,\omega )\) is given by

$$\begin{aligned} g(u,\omega ) = -\hat{L}(u,\omega )\left( \sum _{k=0}^{q-1} C_k B^N_k(u) + \sum _{k=N-r+1}^N C_k B^N_k(u) \right) . \end{aligned}$$
(24)

We consider the case where \(g(u,\omega )\) vanishes, or equivalently

$$\begin{aligned} \lim _{u \rightarrow a} \phi (u) \sim (u-a)^q, \qquad \lim _{u \rightarrow b} \phi (u) \sim (b-u)^r. \end{aligned}$$
(25)

We arrive at an ODE eigenvalue problem identical to the one we started with, but over a smaller set of basis functions

$$\begin{aligned} \hat{L}(u,\omega )\psi (u) = 0, \quad \psi (u) = \sum _{k=q}^{N-r} C_k B^N_k(u). \end{aligned}$$
(26)

It should be noted that for more standard basis functions, imposing the boundary conditions considered in (15) would involve the entire basis set. To determine the expansion coefficients, the differential equations and the boundary conditions must be solved simultaneously. In the Bernstein basis, the boundary conditions act only on the first q and last r basis polynomials, and we get their corresponding expansion coefficients for free even before considering the ODE. Though we do not prove that this advantage is unique to the Bernstein basis, we believe that any other basis must behave like Bernstein polynomials to enjoy it. That is, the nth basis function of a basis of size N must asymptote to \((u-a)^n\) towards the lower boundary and to \((b-u)^{N-n}\) towards the upper boundary.

We express a similar sentiment for other basis functions where the condition (25) would make the residual function vanish. In the Bernstein basis, the problem is simplified since each basis polynomial satisfies the boundary conditions exactly.

Finally, we note that when the differential operator is not dependent on \(\omega \), Eq. (23) serves as a general recipe for solving boundary value problems using Bernstein polynomials. One may modify the many methods found in Sect. 1 to solve for the remaining undetermined coefficients.

4 Pseudospectral method

In this section, we review how one starts with the ODE eigenvalue problem in (4) and ends up with the generalized eigenvalue problem in (6). We derive a general recipe for mapping a differential operator and function pair to a matrix and vector pair \((\tilde{\varvec{\mathcal {M}}}(\omega ),\tilde{\varvec{\mathcal {C}}})\) via a collocation method in the Bernstein basis, whose closed form can be found in the last section. In the context of Chebyshev basis polynomials and Fourier basis functions, the standard reference is [38].

We start with a linear eigenvalue ODE, then show how it can be extended to polynomial eigenvalue ODEs. We extend this to include problems involving a set of dependent functions. We elaborate on special cases in A, used to convert the polynomial generalized eigenvalue problem to an eigenvalue problem.

4.1 Linear eigenvalue problem

Consider the ODE eigenvalue problem in (26), specifically of the form

$$\begin{aligned} \hat{L}(u,\omega ) \psi (u) = (\hat{f}_0(u) + \omega \hat{f}_1(u))\psi (u) = 0. \end{aligned}$$
(27)

To arrive at a spectral matrix of size \(N+1\), we expand in a basis of degree \(N_{\textrm{max}} = N + q + r\),

$$\begin{aligned} \psi (u) \approx \sum _{k = 0}^{N} C_{k+q} B^{N_{\textrm{max}}}_{k+q}(u). \end{aligned}$$
(28)
Fig. 2 The set of 11 Bernstein basis polynomials appropriate when \(q = 30\) and \(r = 30\), and their derivatives. The basis functions are localized around the center of [a, b], as are their derivatives

A straightforward implementation of the collocation method would be to define a grid of \(N+1\) points in the interval [a, b]. Since the first q and last r Bernstein basis functions dominate the behaviour of the solution near the boundaries, we propose instead to select collocation points in the region dominated by the basis functions whose weights are still unknown.

As an illustrative example, consider the case when \(N = 10\), \(q = 30\) and \(r = 30\). One can imagine rescaling a solution \(\phi (u)\) finite at both boundaries via the transformation,

$$\begin{aligned} \phi (u) = \dfrac{\tilde{\phi }(u)}{(u-a)^{30}(b-u)^{30}}. \end{aligned}$$
(29)

The basis of \(\phi (u)\) is shown in Fig. 1, while the basis of \(\tilde{\phi }(u)\) is shown in Fig. 2; its derivatives are similarly localized. We construct our collocation grid by considering a Chebyshev or equally spaced grid of \(N_{\textrm{max}} + 1\) points over [a, b],

$$\begin{aligned} \{u_0, u_1, \dots , u_{N_{\textrm{max}}}\}, \qquad u_0 = a, \, \, \,u_{N_{\textrm{max}}}=b, \end{aligned}$$
(30)

and then select the grid points \(u_q\) through \(u_{N+q}\).

Let us now endeavour to convert the differential operator and function pair \((\hat{f}(u),\psi (u))\) into a matrix and vector pair \((\varvec{M},\varvec{C})\). Suppose \(\hat{f}(u)\) is of the form,

$$\begin{aligned} \hat{f}(u) = \sum _{n=0}^{n_{\textrm{max}}} f_n(u) \dfrac{\text {d} ^n }{\text {d} u^n}. \end{aligned}$$
(31)

A generic term in \(\hat{f}(u)\psi (u)\) is of the form \(f_n(u) \dfrac{\text {d} ^n \psi }{\text {d} u^n}\). Combining (10) and (28), we get

$$\begin{aligned} f_n(u) \dfrac{\text {d} ^n \psi (u) }{\text {d} u^n}= & {} \dfrac{f_n(u)}{(b-a)^n} \dfrac{(N_{\textrm{max}})!}{(N_{\textrm{max}}-n)!} \nonumber \\{} & {} \times \sum _{k = 0}^{N} \sum _{l=0}^n (-1)^l \begin{pmatrix} n \\ l \end{pmatrix} B^{N_{\textrm{max}}-n}_{k+q+l-n}(u) C_{k+q}. \nonumber \\ \end{aligned}$$
(32)

We may assign to each term in \(\hat{f}(u)\psi (u)\) a matrix acting on the coefficient vector, with the condition that the differential equation is satisfied at each collocation point,

$$\begin{aligned} f_n(u) \dfrac{\text {d} ^n \psi (u) }{\text {d} u^n} \rightarrow \varvec{T}^{(n)} \varvec{C}, \end{aligned}$$
(33)

where \(\varvec{C}_k = C_{k+q}\) and the matrix components of \(\varvec{T}^{(n)}\) are given by

$$\begin{aligned} \varvec{T}^{(n)}_{j,k}= & {} \dfrac{f_n(u_{j+q})}{(b-a)^n} \dfrac{(N_{\text {max}})!}{(N_{\text {max}}-n)!} \nonumber \\{} & {} \times \sum _{l=0}^n (-1)^l \begin{pmatrix} n \\ l \end{pmatrix} B^{N_{\text {max}}-n}_{k+q+l-n}(u_{j+q}), \end{aligned}$$
(34)

for \(j,k \in \{0,1,\dots ,N\}\).

To use (34), Bernstein basis polynomials of degrees \(N_{\textrm{max}}-n\) through \(N_{\textrm{max}}\) must be evaluated at each collocation point. Since in many applications \(n_{\textrm{max}} \ll N\), it is more cost-efficient to use (11) and rewrite (34) in terms of a single Bernstein basis degree, as in

$$\begin{aligned} \varvec{T}^{(n)}_{j,k}= & {} \dfrac{f_n(u_{j+q})}{(b-a)^n} \dfrac{(N_{\text {max}})!}{(N_{\text {max}}-n)!}\sum _{l=0}^n (-1)^l \begin{pmatrix} n \\ l \end{pmatrix} \nonumber \\{} & {} \times \sum _{m=0}^n \dfrac{\begin{pmatrix} n \\ m \end{pmatrix} \begin{pmatrix} N_{\text {max}}-n \\ k+q+l-n \end{pmatrix}}{\begin{pmatrix} N_{\text {max}} \\ k+q+l+m-n \end{pmatrix}} B^{N_{\text {max}}}_{k+q+l+m-n}(u_{j+q}). \nonumber \\ \end{aligned}$$
(35)

By choosing this degree to be \(N_{\textrm{max}}\), only a subset of the Bernstein basis needs to be evaluated at each collocation point, specifically those indexed in the range \([q - \min (n_{\textrm{max}},q), N + q + \min (n_{\textrm{max}},r)]\). Thus,

$$\begin{aligned} (\hat{f}(u),\psi (u)) \rightarrow (\varvec{M},\varvec{C}), \qquad \varvec{M} = \sum _{n=0}^{n_{\textrm{max}}} \varvec{T}^{(n)}. \end{aligned}$$
(36)

The ODE linear eigenvalue problem in (27) may be written as a generalized eigenvalue problem,

$$\begin{aligned} \varvec{M}(\omega ) \varvec{C} = (\varvec{M}_0 + \omega \varvec{M}_1)\varvec{C} = 0. \end{aligned}$$
(37)
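To make the recipe concrete, here is a small self-contained Mathematica sketch, independent of SpectralBP, that builds the matrices \(\varvec{T}^{(n)}\) of (34) for the infinite square well of Sect. 7.1, \(\tfrac{1}{2}\psi '' + \omega \psi = 0\) on [0, 1] with \(q = r = 1\), and solves the resulting GEP (37). All names (bern, tMat, nb) are our own illustrative choices:

  bern[nn_, k_, u_, a_, b_] :=
    If[0 <= k <= nn, Binomial[nn, k] (u - a)^k (b - u)^(nn - k)/(b - a)^nn, 0]

  nb = 20; q = 1; r = 1; aa = 0; bb = 1;              (* spectral matrix of size nb + 1 *)
  nmax = nb + q + r;
  grid = Table[aa + (bb - aa) j/nmax, {j, 0, nmax}];  (* equally spaced grid, Eq. (30) *)

  (* T^(n) of Eq. (34) for a coefficient function fn of the n-th derivative *)
  tMat[n_, fn_] := Table[
     fn[grid[[j + q + 1]]]/(bb - aa)^n * nmax!/(nmax - n)! *
       Sum[(-1)^l Binomial[n, l] bern[nmax - n, k + q + l - n, grid[[j + q + 1]], aa, bb],
         {l, 0, n}],
     {j, 0, nb}, {k, 0, nb}];

  m0 = tMat[2, 1/2 &];   (* the operator (1/2) d^2/du^2 *)
  m1 = tMat[0, 1 &];     (* the identity operator multiplying the eigenvalue *)

  (* (m0 + w m1).C = 0 is the GEP m0.C = w (-m1).C *)
  evals = Select[Eigenvalues[N[{m0, -m1}]], Abs[Im[#]] < 10^-8 &];
  Take[Sort[Re[evals]], UpTo[5]]   (* approximately Pi^2 n^2/2: 4.93, 19.7, 44.4, 78.9, 123.4 *)

The lowest generalized eigenvalues approximate the exact square-well energies (54), illustrating how the collocation grid of Sect. 4.1 and the matrices of (34) combine into (37).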

4.2 Polynomial eigenvalue problem

Consider a polynomial eigenvalue problem of order m,

$$\begin{aligned} (\hat{f}_0(u) + \omega \hat{f}_1(u) + \omega ^2 \hat{f}_2(u) + \dots + \omega ^m \hat{f}_m(u) )\psi (u) = 0. \end{aligned}$$
(38)

Using the recipe discussed in the previous section, this corresponds to an eigenvalue problem of a matrix pencil of order m,

$$\begin{aligned} (\varvec{M}_0 + \omega \varvec{M}_1 + \omega ^2 \varvec{M}_2 + \dots + \omega ^m \varvec{M}_m)\varvec{C} = 0. \end{aligned}$$
(39)

We linearize the matrix pencil by defining the following matrices,

$$\begin{aligned} \varvec{\mathcal {M}}'= & {} \left( \begin{array}{cccc} \varvec{M}_0 &{} \varvec{M}_1 &{} \dots &{} \varvec{M}_{m-1} \\ 0 &{} \mathbb {1} &{} \dots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \dots &{} \mathbb {1} \\ \end{array}\right) , \end{aligned}$$
(40)
$$\begin{aligned} \varvec{\mathcal {M}}''= & {} \left( \begin{array}{cccc} 0 &{} \dots &{} 0 &{} \varvec{M}_m \\ -\mathbb {1} &{} \dots &{} 0 &{} 0 \\ \vdots &{} \ddots &{} \vdots &{} \vdots \\ 0 &{} \dots &{} -\mathbb {1} &{} 0 \\ \end{array} \right) , \end{aligned}$$
(41)

and the vector,

$$\begin{aligned} \varvec{\mathcal {C}} = \left( \begin{array}{c} \varvec{C} \\ \omega \varvec{C} \\ \vdots \\ \omega ^{m-1} \varvec{C} \\ \end{array} \right) . \end{aligned}$$
(42)

This transforms the matrix pencil (39) to another GEP,

$$\begin{aligned} \varvec{\mathcal {M}}(\omega ) \varvec{\mathcal {C}} = (\varvec{\mathcal {M}}' + \omega \varvec{\mathcal {M}}'') \varvec{\mathcal {C}} = 0. \end{aligned}$$
(43)

For clarity, we typeset matrices and vectors generated from linearizing a matrix pencil in a calligraphic typeface.
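The linearization itself is mechanical. A toy Mathematica sketch for \(m = 2\) with random \(4\times 4\) blocks (purely illustrative; the names mP and mPP stand for \(\varvec{\mathcal {M}}'\) and \(\varvec{\mathcal {M}}''\), and the example is not SpectralBP code):

  SeedRandom[1];
  {m0, m1, m2} = Table[RandomReal[{-1, 1}, {4, 4}], 3];   (* a random order-2 matrix pencil *)
  id = IdentityMatrix[4]; zero = ConstantArray[0., {4, 4}];
  mP  = ArrayFlatten[{{m0, m1}, {zero, id}}];              (* Eq. (40) for m = 2 *)
  mPP = ArrayFlatten[{{zero, m2}, {-id, zero}}];           (* Eq. (41) for m = 2 *)
  evals = Eigenvalues[N[{mP, -mPP}]];                      (* the 8 roots of the pencil *)
  Max[Abs[Det[m0 + # m1 + #^2 m2] & /@ evals]]             (* should be numerically tiny *)

The last line checks that every generalized eigenvalue of the linearized GEP (43) is indeed a root of the determinant of the original pencil (39).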

Generalized eigenvalue problems are more difficult to solve than regular eigenvalue problems. We describe a method to transform the above GEP to an EP in A.2 contingent on the invertibility of either \(\varvec{M}_0\) or \(\varvec{M}_m\), leading to a modest improvement in speed.

4.3 Polynomial eigenvalue problem over several dependent functions

Consider the full problem in Sect. 1. In matrix form, this becomes the set of simultaneous equations,

$$\begin{aligned} \varvec{\mathcal {M}}_{1,1}(\omega ) \varvec{\mathcal {C}}_1 + \dots + \varvec{\mathcal {M}}_{1,n}(\omega ) \varvec{\mathcal {C}}_n= & {} 0, \nonumber \\ \varvec{\mathcal {M}}_{2,1}(\omega ) \varvec{\mathcal {C}}_1 + \dots + \varvec{\mathcal {M}}_{2,n}(\omega ) \varvec{\mathcal {C}}_n= & {} 0, \nonumber \\&\vdots&\nonumber \\ \varvec{\mathcal {M}}_{n,1}(\omega ) \varvec{\mathcal {C}}_1 + \dots + \varvec{\mathcal {M}}_{n,n}(\omega ) \varvec{\mathcal {C}}_n= & {} 0, \end{aligned}$$
(44)

where each matrix \(\varvec{\mathcal {M}}_{j,k}(\omega )\) is constructed by linearizing the matrix pencil of the kth dependent function of the jth equation, as in

$$\begin{aligned} \varvec{\mathcal {M}}_{j,k}(\omega ) = \varvec{\mathcal {M}}_{j,k}' + \omega \varvec{\mathcal {M}}_{j,k}''. \end{aligned}$$
(45)

The set of simultaneous equations can be written as a single matrix equation by defining the following matrices,

$$\begin{aligned} \tilde{\varvec{\mathcal {M}}}'= & {} \left( \begin{array}{cccc} \varvec{\mathcal {M}}'_{1,1} &{} \varvec{\mathcal {M}}'_{1,2} &{} \dots &{} \varvec{\mathcal {M}}'_{1,n} \\ \varvec{\mathcal {M}}'_{2,1} &{} \varvec{\mathcal {M}}'_{2,2} &{} \dots &{} \varvec{\mathcal {M}}'_{2,n} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \varvec{\mathcal {M}}'_{n,1} &{} \varvec{\mathcal {M}}'_{n,2} &{} \dots &{} \varvec{\mathcal {M}}'_{n,n} \\ \end{array} \right) , \end{aligned}$$
(46)
$$\begin{aligned} \tilde{\varvec{\mathcal {M}}}''= & {} \left( \begin{array}{cccc} \varvec{\mathcal {M}}''_{1,1} &{} \varvec{\mathcal {M}}''_{1,2} &{} \dots &{} \varvec{\mathcal {M}}''_{1,n} \\ \varvec{\mathcal {M}}''_{2,1} &{} \varvec{\mathcal {M}}''_{2,2} &{} \dots &{} \varvec{\mathcal {M}}''_{2,n} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \varvec{\mathcal {M}}''_{n,1} &{} \varvec{\mathcal {M}}''_{n,2} &{} \dots &{} \varvec{\mathcal {M}}''_{n,n} \\ \end{array} \right) , \end{aligned}$$
(47)

and vector,

$$\begin{aligned} \tilde{\varvec{\mathcal {C}}} = \left( \begin{array}{c} \varvec{\mathcal {C}}_1 \\ \varvec{\mathcal {C}}_2 \\ \vdots \\ \varvec{\mathcal {C}}_n \end{array} \right) . \end{aligned}$$
(48)

We arrive at the GEP of the full problem introduced in Sect. 1,

$$\begin{aligned} \tilde{\varvec{\mathcal {M}}}(\omega )\tilde{\varvec{\mathcal {C}}} = (\tilde{\varvec{\mathcal {M}}}' + \omega \tilde{\varvec{\mathcal {M}}}'')\tilde{\varvec{\mathcal {C}}} = 0. \end{aligned}$$
(49)

The GEP of the full problem is much more complicated: unlike the GEP of the previous subsection, \(\tilde{\varvec{\mathcal {M}}}'\) is always singular, as we show in A.2.

5 Advantages and disadvantages of the Bernstein basis

Having elaborated on how the Bernstein basis fits into solving a differential problem like (4), we discuss in this section what its properties cost and afford us, and how they compare to those of more standard basis functions. We also discuss some results that may be found in A.4. One may read through that section first, and then return here.

  1. Bernstein polynomials are not orthogonal. This follows from (12) and (13). This complicates an extension of the current method to partial differential equations, where the weights may be made to vary in time.

  2. The Bernstein basis polynomials depend on the basis degree. We cannot naively apply derivatives without incurring additional numerical cost; we need to fold in an application of (11) so that we remain in a single common basis degree. No operation similar to (11) is needed for classical orthogonal polynomials, because those basis functions do not depend on the size of the basis.

  3. The zeros of the Bernstein basis, if they occur, are located at the boundaries. There are no nodes we can take advantage of in constructing a collocation grid, so the implemented spectral matrices (103) and (104) are dense.

  4. Many of the properties of the Bernstein basis have equivalent forms for other basis functions. Boundary values, derivative recurrence relations and integrals similar to (8), (10) and (12) are well known for classical orthogonal polynomials and the Fourier basis. A simple product formula like (13) exists for the Chebyshev and Fourier bases.

  5. The specific form of these properties gives the Bernstein basis an advantage over other basis functions when dealing with the mixed boundary value problems outlined in Sect. 3. In the Bernstein basis, the boundary constraints act only on a subset of the basis set, whose weights can be fully determined independently of the differential equation. Such a luxury is not enjoyed by more standard basis functions: for classical orthogonal polynomials and the Fourier basis, imposing the boundary conditions (15) would involve the entire basis set, and the differential equation and the boundary conditions must be solved simultaneously.

  6. The lack of a residual term in (26) and the lack of additional constraints on the expansion coefficients let us write down the algebraic equations these expansion coefficients must satisfy as a generalized eigenvalue problem in Sect. 4.

  7. There are manipulations which can only be easily done in the Bernstein basis, discussed in A.4. For example, a tau method using Chebyshev polynomials can impose the boundary condition \(\lim _{u \rightarrow a} \psi (u) \sim (u-a)\) exactly. However, one cannot naively divide out a \((u-a)\) term by term, since each Chebyshev polynomial is finite at the lower boundary. Such a rescaling can be exactly carried out in the Bernstein basis, as shown in A.4. This lets us calculate the weighted \(L^2\)-norm of a function in the Bernstein basis in closed form, even in cases where the weight has a pole of integer degree at the boundaries. This is useful, for example, when normalizing wavefunctions in a compactified coordinate system, as in Sect. 7.2.

  8. The numerical convergence of the Bernstein basis has been established in the context of other differential problems. Interestingly, in some cases, the Bernstein method outperforms other basis functions (including Chebyshev and Fourier) in terms of numerical cost or the accuracy of the solutions [15, 17,18,19, 21]. We do not perform a similarly comprehensive analysis here, concentrating instead on general ideas on how the Bernstein basis may be adapted to ODE eigenvalue problems and on introducing the package SpectralBP. We do, however, demonstrate numerical convergence for some of the cases we tackle below.

6 The SpectralBP package

The SpectralBP package exploits the properties of the Bernstein basis and is written to streamline the calculation of the eigenvalues and eigenfunctions of (4). It is primarily distributed as a Mathematica paclet and is publicly available [39].

SpectralBP commands are documented, and the package is bundled with two tutorial notebooks. After installation, the details and options of each command may be explored by prefixing a command with a question mark, as in ?GetModes, similar to built-in commands in Mathematica.

There are three types of commands in SpectralBP: Get commands, Compare commands and Print commands. The basic workflow is as follows.

  1. Begin with some ODE eigenvalue problem

    $$\begin{aligned} \hat{L}'(x,\omega )\Psi '(x) = 0 \end{aligned}$$
    (50)

    which may not satisfy the three properties required in Sect. 1.

  2. If the domain of the eigenfunctions \(\psi '_i(x)\) is not compact, define an invertible change of variables \(f(x) = u\) so that the domain in u is compact.

  3. If the resulting eigenfunctions are non-analytic, one may rescale as in

    $$\begin{aligned} \psi '_i(u) = f_i(u) \psi _i(u) \end{aligned}$$
    (51)

    so that the resulting eigenfunctions \(\psi _i(u)\) are analytic. One also defines \(f_i(u)\) so that all eigenfunctions \(\psi _i(u)\) satisfy the same boundary conditions. The result should be an eigenvalue problem of the type described in Sect. 1.

  4. Use Get commands to calculate eigenvalues and eigenfunctions at different BP orders.

  5. Use Compare commands to filter out spurious eigenvalues and eigenfunctions.

  6. Use Print commands to quickly glean information from the prior calculations.

We will discuss each command type in the following subsections before going into applications. Example notebooks can be found in the next two sections.

6.1 Get commands

The first input of a Get command is a list of differential equations. The command automatically identifies the dependent functions, the independent variable and the eigenvariable. The command halts whenever it identifies more than one independent variable or eigenvariable, or whenever the number of dependent functions underdetermines or overdetermines the problem.

There are three Get commands,

  1. GetModes[eqn,N]: Calculates the eigenvalues of the ODE eigenvalue problem stored in eqn using a basis degree of N.

  2. GetEigenfunctions[eqn,modes,N]: Calculates the eigenvectors corresponding to each eigenvalue in the list modes, using a basis degree of N. As discussed in the Appendix, we advise that N be identical to the basis degree with which the eigenvalues in modes were calculated.

  3. GetAccurateModes[eqn,N1,N2]: Calculates the eigenvalues using basis degrees of N1 and N2, then applies a CompareModes command to filter the eigenvalues.

By replacing the basis degree inputs with a pair of numbers, which we call a basis tuple of the form {N,prec}, eigenvalues are calculated using a basis degree of N with prec-precision numbers. That is, an alternative input scheme for the above commands is given by,

$$\begin{aligned}{} & {} \texttt {GetModes[eqn,\{N,prec\}]}, \\{} & {} \texttt {GetAccurateModes[eqn,\{N1,prec1\},}\\{} & {} \texttt {\{N2,prec2\}]}. \end{aligned}$$

The default behaviors of \(\texttt {GetModes}\) and \(\texttt {GetAccurateModes}\) are as follows

$$\begin{aligned}{} & {} \texttt {GetModes[eqn,N]} = \texttt {GetModes[eqn,\{N,N/2\}]},\\{} & {} \texttt {GetAccurateModes[eqn,N1,N2]} = \\{} & {} \quad \texttt {GetAccurateModes[eqn,\{N1,N1/2\},\{N2,N2/2\}]}. \end{aligned}$$

In calculating the eigenvalues and eigenvectors, Get commands must be supplied with the correct domain and boundary conditions. These are controlled by four options,

  1. LowerBound and UpperBound: define the domain [a, b], which defaults to [0, 1].

  2. LBPower and UBPower: define the leading polynomial powers q and r at each boundary, which default to \(q = 0\) and \(r = 0\).

The option Normalization lets one choose how eigenfunctions are normalized. The option may take four values,

  1. “UB”: sets the coefficient of the leading polynomial expansion of the eigenfunctions at b to 1.

  2. “LB”: sets the coefficient of the leading polynomial expansion of the eigenfunctions at a to 1.

  3. “L2Norm”: sets the \(L^2\)-norm of the eigenfunctions to 1.

  4. {“L2Norm”,{A,B,C}}: sets the \(L^2\)-norm of the eigenfunctions to 1, with a weight function underneath the integral of the form \(A (u-a)^B (b-u)^C\).

The option FinalAsymptotics lets one change the outputted eigenfunctions’ asymptotics, according to manipulations detailed in A.4.

Table 1 Input scheme for the various eigenvalue ODE problems discussed in Sects. 7, 8 and 9. The potential \(V^*\) was chosen to be (53) for the base infinite square well problem, and (56) for the perturbed infinite square well problem. The potential \(V^\dagger \) was chosen to be (58) for the quantum harmonic oscillator problem. The potential \(V^\ddagger \) was chosen to be (71) as the \(\mathcal{P}\mathcal{T}\)-symmetric anharmonic potential for specific values of \(\lambda \), and (72) as the quartic anharmonic potential for specific values of \(\beta \). The different variables used mark certain coordinate transformations effected to compactify an infinite domain

6.2 Compare commands

The spectrum calculated from a finite basis degree will contain eigenvalues that have not converged as well as spurious eigenvalues. We have provided two ways to filter these out. These are the two Compare commands,

  1. CompareModes[modes1,modes2]: Checks whether eigenvalues in the two inputted spectra share common digits, then keeps only eigenvalues that share at least 3 digits.

  2. CompareEigenfunctions[eqn,{modes1,modes2},{N1,N2}]: Calculates the eigenfunctions of the eigenvalues approximately common to modes1 and modes2 using basis degrees of N1 and N2, respectively. If the \(L^2\)-norm of their difference is less than \(10^{-3}\), the eigenvalues are kept.

There are two relevant options,

  1. Cutoff: controls the minimum number of common digits for eigenvalues to be kept, which defaults to 3.

  2. L2Cutoff: controls the maximum difference between two eigenfunctions, of the form \(10^{-n}\), for their corresponding eigenvalues to be kept, which defaults to \(n=3\).

We call eigenvalues of different spectra approximately common when they share at least a Cutoff number of common digits.

One may also input a list of spectra into CompareModes, as in

$$\begin{aligned} \texttt {CompareModes[\{modes1,modes2,\dots \}]}. \end{aligned}$$

6.3 Print commands

There are four Print commands,

  1. PrintFrequencies[modes]: plots the eigenvalues in modes on the complex plane.

  2. PrintEigenfunctions[eqn,modes,N]: plots the real and imaginary parts of the corresponding eigenfunctions.

  3. PrintTable[convergedmodes]: generates a table of eigenvalues, categorizing them into purely real, purely imaginary, and complex eigenvalues, and groups together eigenvalues satisfying \(\omega ^* = \omega \) and \(\omega ^* = - \omega \). The input must be a pair of lists of approximately common eigenvalues, usually coming from the output of a CompareModes command.

  4. PrintAll[eqn,convergedmodes,N]: a shortcut that performs the previous three commands in a single command.

There are three relevant options,

  1. FreqName: specifies the symbol for the eigenvariable, which defaults to \(\omega \).

  2. NSpectrum: specifies how many eigenvalues would be plotted, which defaults to plotting everything.

  3. NEigenFunc: specifies how many eigenfunctions would be plotted, which defaults to plotting everything.

The PrintTable command automatically prints out only significant digits, defined to be the digits common to both inputted spectra. When the inputted spectra come from two adjacent basis degrees, say N and \(N+1\), the right-most digits of the output may be incorrect. This occurs because the absolute errors of the two spectra overlap.

We recommend using basis degrees that are far apart in the sense that the absolute error of the higher basis degree spectrum is much smaller than the absolute error of the lower basis degree spectrum. Although the practice would be numerically more costly, in this way we increase our chances that the right-most significant digit outputted is correct.

6.4 Summary of implementations

(Notebook listing, shown as a figure in the published version.)

In Table 1, we summarize the different inputs needed to solve the ODE eigenvalue problems that we shall look at in the succeeding sections. Hopefully, the examples considered there leave one with an impression of the general-purpose applicability and ease of use of SpectralBP. As shall be demonstrated, three lines of code can yield a wealth of information about the ODE eigenvalue problem under consideration. The difference between the examples given amounts to swapping differential equations in and out, applying a change of variables in cases where the domain is infinite, and specifying the necessary boundary conditions.

7 Applications in quantum mechanics

We first illustrate how SpectralBP is used by working through standard problems in quantum mechanics. In the first two subsections, we solve numerically for the eigenenergies and eigenfunctions of the infinite square well and the quantum harmonic oscillator. Calculations are compared with well-known analytic results, as can be found in standard quantum mechanics textbooks like [40].

In the last two subsections, we compute the eigenenergies of the anharmonic potentials considered in [41] and [42]. We compare ground state eigenenergies calculated with SpectralBP to the results of the aforementioned papers, which were both obtained perturbatively using a combination of Padé approximation and Stieltjes series. In [42], Milne’s method [43] was used as an independent test.

7.1 Infinite square well

Consider the time-independent Schrödinger equation

$$\begin{aligned} \dfrac{1}{2}\dfrac{\text {d} ^2 }{\text {d} x^2}\phi (x) + (E - V(x))\phi (x) = 0. \end{aligned}$$
(52)

For the infinite square well, the potential is chosen to be

$$\begin{aligned} V(x) = \left\{ \begin{array}{ll} 0 &{} 0 \le x \le 1 \\ \infty &{} \textrm{otherwise}. \end{array} \right. \end{aligned}$$
(53)

Its eigenenergies are

$$\begin{aligned} E_n = \dfrac{\pi ^2 n^2}{2}, \qquad n = 1,2,3,\dots . \end{aligned}$$
(54)

The domain of solutions is the interval [0, 1] with boundary conditions,

$$\begin{aligned} \lim _{x \rightarrow 0} \phi (x) \sim x, \qquad \lim _{x \rightarrow 1} \phi (x) \sim (1-x). \end{aligned}$$
(55)

7.1.1 SpectralBP-basic implementation

A simple implementation to solve the infinite square well problem is schematically found in Notebook 1.

Lines 2 and 3 solve the ODE eigenvalue problem (52) with potential (53), using basis degrees of 50 and 80, respectively.

The boundary conditions (55) are set by the option values

$$\begin{aligned} \texttt {LBPower }\rightarrow \texttt { 1}, \qquad \texttt {UBPower }\rightarrow \texttt { 1}, \end{aligned}$$

which must be specified whenever eigenvalues and eigenvectors are calculated.

Line 4 selects eigenvalues that are approximately common to modes50 and modes80. As described in Sect. 6.3, this may serve as input for the PrintTable command in line 7. We have chosen to rescale the eigenenergies in lines 5 and 7 so that the output would be the first 10 perfect squares.

Line 6 plots the eigenfunctions of the lowest three eigenvalues of modes50 using a basis degree of 50. The Print commands found in the last three lines output Fig. 3.

As described in Sect. 6.3, the command PrintTable only prints out significant digits. As an illustrative example, consider the lowest rescaled eigenenergy. Its absolute error in modes50 is \(3.27\times 10^{-22}\), and in modes80 it is \(4.97\times 10^{-31}\). PrintTable compares the two eigenvalues, detects a difference of \(\sim 10^{-22}\), and prints out the eigenvalue up to the \(21^{st}\) decimal place.
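For orientation, the following is a schematic sketch in the spirit of Notebook 1 (our own reconstruction, not the verbatim notebook; the exact input scheme is the one summarized in Table 1 and in the package documentation). It uses only commands and options described in Sect. 6; the variable names are ours:

  eqn = 1/2 ψ''[x] + ω ψ[x] == 0;   (* Eq. (52) with the potential (53); ω is the default eigenvariable symbol *)

  modes50 = GetModes[eqn, 50, LBPower -> 1, UBPower -> 1];   (* default domain [0, 1] *)
  modes80 = GetModes[eqn, 80, LBPower -> 1, UBPower -> 1];
  common  = CompareModes[modes50, modes80];                  (* keep approximately common eigenvalues *)
  rescaled = 2 common/Pi^2;                                  (* exact values are the perfect squares n^2 *)

  PrintFrequencies[modes50, NSpectrum -> 10];
  PrintEigenfunctions[eqn, modes50, 50, LBPower -> 1, UBPower -> 1, NEigenFunc -> 3];
  PrintTable[rescaled];

Swapping in a different potential or domain only changes eqn and the boundary options, as emphasized in Sect. 6.4.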

7.1.2 SpectralBP-quick commands

Fig. 3 Output of Notebook 1. Top (PrintFrequencies): the first 10 eigenenergies calculated using a basis degree of 50, plotted on the complex plane. Middle (PrintEigenfunctions): the eigenfunctions of the first 3 eigenenergies, calculated using a basis degree of 50, normalized according to their \(L^2\)-norm. Bottom (PrintTable): rescaled eigenvalues common to basis degrees of 50 and 80. There are 28 eigenenergies that share a minimum of 3 significant digits (not shown); we tabulate only the lowest 10. The calculated spectrum is in excellent agreement with (54)

Three commands can do the calculations in Notebook 1. We have omitted the relevant options for boundary conditions and printing for conciseness. Notebook 2 outputs the same figures as in Notebook 1.

(Notebook 2: code listing, shown as a figure in the published version.)

7.1.3 A note on machine precision

As described in Sect. 6.1, one may use arbitrary precision numbers by inputting a basis tuple of the form {N,prec}. This would calculate eigenvalues using a basis degree of N with prec-precision numbers, as in Notebook 3.

(Notebook 3: code listing, shown as a figure in the published version.)

The PrintTable command in line 3 outputs Fig. 4. The number of common modes remains at 28 (not shown), but there are more significant digits for the lowest eigenenergies.

This is because the error due to floating point arithmetic at machine precision is generally small enough to resolve approximately common eigenenergies between basis degrees. When higher precision numbers are used, this error is pushed down further and may reveal more significant digits. The absolute error from approximating the solution space in a finite polynomial basis eventually dominates, and may only be corrected by using higher and higher basis degrees.

Briefly, increasing the working precision increases the number of significant digits (up to a point), while increasing the Bernstein basis degree increases the number of converged modes (up to a point).

Fig. 4 Calculated eigenvalues common to basis tuples \(\{50,50\}\) and \(\{80,80\}\) (described in Sect. 6.1). There are 28 eigenenergies that share a minimum of 3 significant digits (not shown) – similar to Fig. 3 – while the number of significant digits for the lower eigenvalues has increased

7.1.4 Test on non-analytic solutions

For completeness, let us explore the case when the exact solution is non-analytic. Suppose we perturb the potential by lifting half of the infinite square well,

$$\begin{aligned} V(x) = \left\{ \begin{array}{ll} 0 &{} 0 \le x< \dfrac{1}{2} \\ 1 &{} \dfrac{1}{2} \le x < 1 \\ \infty &{} \textrm{otherwise}. \end{array} \right. \end{aligned}$$
(56)

The exact solution can be derived by starting with a pair of free-particle solutions on \(0 \le x \le 1/2\) and \(1/2 \le x \le 1\), then imposing the correct boundary conditions at the walls of the infinite square well and continuity relations at \(x = 1/2\). One then finds that, for the boundary conditions and the continuity relations to be satisfied, the eigenenergies must be solutions to the transcendental equation,

$$\begin{aligned} \sqrt{2(E-1)} \cot \left( \dfrac{\sqrt{2(E-1)}}{2}\right) + \sqrt{2E} \cot {\dfrac{\sqrt{2E}}{2}} = 0. \nonumber \\ \end{aligned}$$
(57)
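Explicitly, the matching behind (57) can be sketched as follows (our own recap of the standard derivation): with \(k_1 = \sqrt{2E}\) and \(k_2 = \sqrt{2(E-1)}\), take

$$\begin{aligned} \phi _I(x) = A \sin (k_1 x), \quad 0 \le x \le \tfrac{1}{2}, \qquad \phi _{II}(x) = B \sin \left( k_2 (1-x)\right) , \quad \tfrac{1}{2} \le x \le 1, \end{aligned}$$

which already vanish at the walls; demanding continuity of \(\phi \) and \(\phi '\) at \(x = 1/2\) and eliminating A and B reproduces (57).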

Exact solutions are non-analytic since they are not twice differentiable at \(x = 1/2\).

On the other hand, one may simply swap in the potential (56) and use a GetAccurateModes command to numerically solve for these eigenenergies. We benchmark SpectralBP against the Mathematica built-in function NSolve in Table 2. NSolve is a zero-finding algorithm, which we use to find solutions to (57). There is excellent agreement between the two methods.

The non-analyticity of the solutions has adversely affected how quickly the eigenenergies converge to the correct values, which is expected of a spectral method. SpectralBP was able to find all eigenenergies below 1000. On the other hand, NSolve will not find the eigenenergies indicated by *’s by default. These roots are sensitive, since one must start close to them for NSolve to find them. The eigenenergies indicated by *’s were found by sampling the range [0,1000] with a resolution of 0.01.

We note that we have chosen odd basis tuples in the calculation so that the corresponding collocation grids avoid the point \(x=1/2\). Choosing even basis tuples degrades the accuracy of odd-numbered eigenenergies, and one would need to reach a basis degree of around 400 to determine the ground state energy accurate to 3 digits.

Table 2 Comparison between NSolve (a zero-finding algorithm in Mathematica 11.3) and SpectralBP, for eigenenergies in the range \(0 \le E \le 1000\). Eigenenergies with *’s were found by NSolve by sampling the range [0,1000] with a resolution of 0.01. Unmarked eigenenergies can be found by default. Eigenenergies found by SpectralBP used basis tuples of \(\{61,61\}\) and \(\{101,101\}\) (described in Sect. 6.1). There is excellent agreement between the eigenenergies found by both methods

7.2 Quantum harmonic oscillator

Consider the harmonic oscillator potential,

$$\begin{aligned} V(x) = \dfrac{1}{2} x^2. \end{aligned}$$
(58)

Its eigenenergies are,

$$\begin{aligned} E_n = n + \dfrac{1}{2}, \qquad n = 0,1,2,\dots \end{aligned}$$
(59)

The domain of the solutions is the entire real line \((-\infty ,\infty )\) with boundary conditions

$$\begin{aligned} \lim _{x \rightarrow -\infty } \phi (x) \sim 0, \qquad \lim _{x \rightarrow \infty } \phi (x) \sim 0. \end{aligned}$$
(60)

7.2.1 Compactification and boundary conditions

One may swap in the harmonic oscillator potential in the example notebooks we have presented to calculate eigenenergies and eigenfunctions, except one must include an additional step of compactifying the domain. Let us compare the spectrum calculated using two different ways of compactifying the interval \((-\infty ,\infty )\). The first,

$$\begin{aligned} v_1 = \tanh (x), \end{aligned}$$
(61)

has a domain of \([-1,1]\). As described in Sect. 6.1, one may change the default domain of [0, 1] to \([-1,1]\) by setting the option value of LowerBound to \(-1\). The second,

$$\begin{aligned} v_2 = \dfrac{1}{1+ \exp (-x)} \end{aligned}$$
(62)

has a domain of [0, 1].
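The extra step amounts to rewriting (52) in the compactified variable. For example (the transformed equation is not written out in the text, but follows from the chain rule), under (61) one has \(\text {d} /\text {d} x = (1-v_1^2)\, \text {d} /\text {d} v_1\), so that (52) becomes

$$\begin{aligned} \dfrac{1}{2}\left[ (1-v_1^2)^2 \dfrac{\text {d} ^2 \phi }{\text {d} v_1^2} - 2 v_1 (1-v_1^2) \dfrac{\text {d} \phi }{\text {d} v_1}\right] + \left( E - V(v_1)\right) \phi = 0, \end{aligned}$$

with \(V(v_1)\) as in (64); the analogous substitution for (62) uses \(\text {d} v_2/\text {d} x = v_2(1-v_2)\).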

Some comments are in order. First, note that the exact solution in both compactified coordinates is flat at both boundaries. All derivatives vanish at either boundary. However, it is sufficient to specify at least

$$\begin{aligned} \lim _{v_k \rightarrow a_k} \psi (v_k) \sim (v_k-a_k), \quad \lim _{v_k \rightarrow b_k} \psi (v_k) \sim (b_k-v_k),\nonumber \\ \end{aligned}$$
(63)

where \(a_k,b_k\) are the corresponding boundary locations for \(k=1,2\). Second, note that the potential is singular at the boundaries in both compactified coordinates, with

$$\begin{aligned} V(v_1) = \dfrac{1}{2} \left( \textrm{tanh}^{-1}(v_1)\right) ^2, \,\, V(v_2) = \dfrac{1}{2} \left( \ln \left( \dfrac{v_2}{1 - v_2}\right) \right) ^2. \end{aligned}$$
(64)

A consequence of using the collocation grid we proposed in Sect. 4.1 is that we have avoided evaluating at these singularities by expanding the Bernstein basis order and choosing collocation points in the interior of the relevant domain.

Table 3 Comparison between compactifying using (61) and (62), using Bernstein tuples \(\{50,50\}\) and \(\{100,100\}\) (described in Sect. 6.1). For conciseness we indicate eigenenergies found using (61) with a dagger\(^\dagger \), and mark in square brackets the additional significant digits calculated using (62). Compactifying using (62) performs better, finding more eigenvalues with more significant digits

Finally, we observe that the rate of convergence of the method depends on the coordinate transformation used, as can be seen in Table 3. We may attribute this discrepancy to how features such as the maxima and nodes of higher energy eigenfunctions are distributed in the compactified coordinate, in relation to how the collocation points are distributed.

Consider, for either transformation, the distance of the right-most maximum or node of a high energy eigenfunction from the upper bound,

$$\begin{aligned}{} & {} \lim _{x \rightarrow \infty } 1 {-} v_1(x) \sim 2 \exp (-2x), \nonumber \\{} & {} \lim _{x \rightarrow \infty } 1 - v_2(x) \sim \exp (-x). \end{aligned}$$
(65)

That is, in proportion to the length of the interval, these features are closer to the upper bound of the interval with (61) than in (62),

$$\begin{aligned} (61) \rightarrow \exp (-2x), \qquad (62) \rightarrow \exp (-x) \end{aligned}$$
(66)

The same is true for the lower bound.

Thus, a collocation grid defined on \(v_1\) is less able to resolve higher energy eigenfunctions than one defined on \(v_2\), since the collocation points are less densely located where the maxima and nodes are expected to appear – i.e., closer to the edge, in proportion to the length of the interval, for (61) than for (62).

We note that a transformation such as

$$\begin{aligned} v_3 = \dfrac{1}{1 + \exp (-x/2)} \end{aligned}$$
(67)

‘spreads’ these features further away from the upper and lower bounds. An identical calculation on \(v_3\) yields more accurate eigenenergies than on \(v_2\), as well as finding more eigenenergies (up to 26).

7.2.2 Eigenfunctions – normalization and manipulation

Consider the eigenfunctions calculated from (61) and (62). To properly normalize the eigenfunctions in the original coordinates x, one must introduce a weight function underneath the integral of their \(L^2\)-norms in the compactified coordinates, respectively of the form

$$\begin{aligned} w(v_1)= & {} (v_1 + 1)^{-1} (1 - v_1)^{-1} \end{aligned}$$
(68)
$$\begin{aligned} w(v_2)= & {} v_2^{-1} (1 - v_2)^{-1} \end{aligned}$$
(69)
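These weights are simply the Jacobians of the compactifications (61) and (62): since \(\text {d} x = \text {d} v_1/(1-v_1^2)\) and \(\text {d} x = \text {d} v_2/\big (v_2(1-v_2)\big )\),

$$\begin{aligned} \int _{-\infty }^{\infty } |\phi |^2\, \text {d} x = \int _{-1}^{1} \dfrac{|\phi |^2\, \text {d} v_1}{(v_1+1)(1-v_1)} = \int _{0}^{1} \dfrac{|\phi |^2\, \text {d} v_2}{v_2(1-v_2)}, \end{aligned}$$

which is exactly the weight structure in (68) and (69).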

As described in Sect. 6.1, the option value for Normalization should be {“L2Norm”,{1,-1,-1}} for both \(v_1\) and \(v_2\).

The eigenfunctions of the three lowest eigenenergies in Table 3 may be calculated using the GetEigenfunctions command. The output is a Bernstein polynomial in the compactified variable \(v_2\), which may be reverted to the original uncompactified coordinates by a change of variables. The eigenfunctions in x are plotted in Fig. 5, together with their absolute error compared with the exact eigenfunctions. The absolute error is bounded from above, with a maximum difference between \(10^{-11}\) and \(10^{-9}\).

7.3 Anharmonic potentials

Table 4 Spectra for anharmonic potentials found in (71) and (72), with \(\lambda = 1/7\) and \(\beta = 40/49\), calculated using basis tuples \(\{250,250\}\) and \(\{300,300\}\) (described in Sect. 6.1). Only common eigenvalues with at least 5 significant digits were kept. For (72), there are 79 such eigenvalues. We have chosen to show only the lowest 10 eigenvalues up to 40 digits, rounded up

We now benchmark SpectralBP against other numerical methods, here in the context of anharmonic potentials. We perform test calculations also done in [41] and [42], in which the time-independent Schrödinger equation has been rescaled such that,

$$\begin{aligned} \phi ''(x) + (E - V(x)) \phi (x) = 0, \end{aligned}$$
(70)

and the anharmonic potentials,

$$\begin{aligned} V(x)= & {} \dfrac{1}{4} x^2 + i \lambda x^3, \end{aligned}$$
(71)
$$\begin{aligned} V(x)= & {} x^2 + \beta x^4 \end{aligned}$$
(72)

were considered. In the papers cited, Padé approximation and Milne’s method [43] were used to calculate the ground state energies.

Fig. 5 The calculated eigenfunctions \(\phi ^{\textrm{BP}}_n(x)\) in the uncompactified coordinate system are plotted above, while the absolute difference between \(\phi ^{\textrm{BP}}_n\) and the exact eigenfunctions \(\phi ^{\textrm{exact}}_n\) is plotted below. The eigenfunctions were calculated with a basis degree of 100

The potential (71) is interesting. Although the corresponding Hamiltonian,

$$\begin{aligned} H = p^2 + \dfrac{1}{4} x^2 + i \lambda x^3, \end{aligned}$$
(73)

is not Hermitian, its eigenenergies remain real and positive. This is because of its underlying \(\mathcal{P}\mathcal{T}\) symmetry [44]: the combined parity (\(\mathcal {P}: p \rightarrow -p\), \(x \rightarrow -x\)) and time reversal (\(\mathcal {T}: p \rightarrow -p\), \(x \rightarrow x\), \(i \rightarrow -i\)) transformation leaves H invariant.

For both potentials, we compactify our domain via the transformation in (62). To recreate Table II of [41], we set \(\lambda = 1/7\) and \(\beta = 40/49\) and use basis tuples \(\{250,250\}\) and \(\{300,300\}\) (described in Sect. 6.1). The spectra of both potentials are found in Table 4. For a more direct comparison to Table II of [41], we use Equations (8) and (9) of [41] to calculate \(P(\lambda ^2)\) and \(P(\beta )\) for the ground state energy. Comparing the values from the two basis tuples for significant digits, we arrive at the expressions

$$\begin{aligned} P(\lambda ^2)= & {} 5.524167213060[22] \nonumber \\ P(\beta )= & {} 0.41924941603348[0802587964456...]. \nonumber \end{aligned}$$

where the last expression goes on for 21 more digits. These values are in excellent agreement with the values calculated in [41]. The digits enclosed in square brackets are additional significant digits calculated by SpectralBP.

The anharmonic potential (72) was used in [42], but for different values of \(\beta \). We calculated spectra using basis tuples of {150,150} and {200,200}, keeping only eigenvalues with at least 5 significant digits. In Table 5, we show only the ground state energies for a direct comparison with Table II and Table IV of [42].

The results are in excellent agreement with the “Exact” values of [42], which were calculated using Milne’s method [43]. At the digits where they differ, which we have indicated in square brackets, the difference is within the error bars in both tables. The calculations took an average of 68 s each, running on a single 2.50 GHz Intel Core i5 with 8.00 GB of RAM.

With modest resources, we are able to calculate the ground state energies to high precision. At the same time, we obtain an abundance of excited state energies: the calculation at \(\beta = 1/10\) yielded 47 eigenenergies with at least 5 significant digits, while the calculation at \(\beta = 100\) yielded 69 eigenenergies with at least 5 significant digits.

Table 5 Ground state energies calculated using the anharmonic potential (72) for different values of \(\beta \), using basis tuples \(\{150,150\}\) and \(\{200,200\}\) (described in Sect. 6.1). For conciseness, we have enclosed in square brackets additional significant digits calculated by SpectralBP compared to an application of Milne’s method in [42]

8 Applications in quasinormal modes

In general relativity, spacetime itself is treated as a dynamical entity, interacting with the matter that is placed within it. Thus, black holes found in nature are always interacting with the complex distributions of matter and fields around them. In active galactic nuclei, accretion disks transport matter inward and angular momentum outward, heating the disk into a hot plasma and immersing the black hole in a complex gravitational and electromagnetic environment. Even in the absence of matter and fields, the black hole interacts with the vacuum around it, slowly evaporating due to Hawking radiation.

The standard treatment is to decompose the spacetime as in

$$\begin{aligned} g_{\mu \nu } = g^{0}_{\mu \nu } + \delta g_{\mu \nu }, \end{aligned}$$
(74)

where the metric \(g^{0}_{\mu \nu }\) is that of an unperturbed black hole, such as the Schwarzschild or Kerr solution. In the linear approximation \(\delta g_{\mu \nu } \ll g^{0}_{\mu \nu }\) (so called because the perturbation \(\delta g_{\mu \nu }\) does not back-react on the background metric), these small perturbations generically take the form of damped oscillations known as quasinormal modes. When \(g^{0}_{\mu \nu }\) is spherically symmetric, the equations for \(\delta g_{\mu \nu }\) reduce to one-dimensional wave equations in certain potentials. These are the famous Regge–Wheeler and Zerilli equations for odd- and even-parity perturbations, respectively.

Quasinormal modes arise as the characteristic ringing of spacetime as it is perturbed by some external field. For a given external field, these oscillations are independent of the initial excitation; their frequencies and damping rates are determined solely by the mass, spin and charge of the black hole. As such, quasinormal modes are used as probes of the validity of general relativity in the strong gravity regime.

From a more theoretical perspective, quasinormal modes provide a test of the linear stability of more exotic spacetimes (such as black branes, black rings and black strings): when all quasinormal modes are damped (\(\texttt {Im}(\omega ) \le 0\)), the spacetime is linearly stable. In the context of the AdS/CFT duality, the onset of an instability of the AdS spacetime corresponds to a thermodynamic phase transition in the dual CFT.

For review articles on quasinormal modes in an astrophysical setting (black holes, stars, and other compact objects), we cite [1] and [2]. An emphasis on higher dimensional black holes and their connection to strongly coupled quantum fields is found in [3], while [4] emphasizes the various numerical and analytical techniques that have been developed to calculate quasinormal modes. The papers [5, 6] focus on the application of spectral and pseudospectral methods in gravity, of which SpectralBP is an example.

8.1 Regge–Wheeler equation

In Sect. 6, we described a general work flow starting from an ODE eigenvalue problem. In this subsection we go through the first three steps of this work flow, starting from a standard ODE eigenvalue problem for quasinormal modes. We focus on the Regge–Wheeler equation as an illustrative example; a treatment of the Zerilli equation would proceed in a similar manner. The Regge–Wheeler equation governs linear perturbations of a Schwarzschild black hole of mass M by a perturbing field of spin s and angular momentum l (for \(s = 2\), these are the axial or odd-parity metric perturbations),

$$\begin{aligned} \partial _t^2 \Phi (t,r_*) + \left( - \partial _{r_*}^2 + V(r_*) \right) \Phi (t,r_*) = 0, \end{aligned}$$
(75)
$$\begin{aligned} V(r_*) = \left( 1 - \dfrac{1}{r}\right) \left( \dfrac{l(l+1)}{r^2} + \dfrac{1-s^2}{r^3}\right) , \end{aligned}$$
(76)

where \(r_* = r + r_s \ln (r/r_s - 1)\). We are interested in solutions of the form \(\Phi (t,r_*) = R(r) \exp (-i \epsilon t)\). This then turns (75) into the ODE eigenvalue problem of the form,

$$\begin{aligned} \dfrac{\epsilon ^2 r^4 - l(l+1)r^2 + (l(l+1) + s^2 - 1)r + 1 - s^2}{r^2 (r-1)^2} R(r) + \dfrac{1}{r(r-1)} \dfrac{\text {d} R }{\text {d} r} + \dfrac{\text {d} ^2 R }{\text {d} r^2} = 0, \end{aligned}$$
(77)

with \(\epsilon = 2M \omega \). The domain of the solutions relevant to us is non-compact, stretching from the black hole horizon at \(r = 1\) to spatial infinity at \(r = \infty \). Note also that the solutions are generically non-analytic at these boundaries. The point \(r=0\) and the black hole horizon at \(r=1\) are both regular singular points of the ODE, while spatial infinity \(r=\infty \) is an irregular singular point of the ODE.

We may peel away the non-analytic parts by rescaling out the asymptotic behaviour of R(r) at the black hole horizon and at spatial infinity. The asymptotic behaviour of R(r) at \(r = \infty \) can be easily determined to be

$$\begin{aligned} R^{\textrm{out}}(r) \sim r^{ i \omega } \exp ( i \omega r) \quad R^{\textrm{in}}(r) \sim r^{- i \omega } \exp (- i \omega r), \end{aligned}$$
(78)

where we have indicated in superscript which solution is outgoing or ingoing at spatial infinity when the time dependence is restored.
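
For the reader's convenience we spell out this step: at large \(r\) the coefficient of \(R\) in (77) behaves as \(\omega ^2 + 2 \omega ^2/r + \mathcal {O}(r^{-2})\) (recall that \(\epsilon = 2M\omega \) reduces to \(\omega \) in the units \(r_s = 1\) used here), while the first-derivative term is subleading, so that

$$ \dfrac{\text {d} ^2 R }{\text {d} r^2} + \left( \omega ^2 + \dfrac{2 \omega ^2}{r} \right) R \simeq 0 \quad \Longrightarrow \quad R \sim r^{\pm i \omega } \exp (\pm i \omega r), $$

which is precisely (78).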

Since the singularity at \(r=1\) is regular, we may write an indicial equation \(f(x)=0\) at \(r = 1\). This can be shown to be simply

$$\begin{aligned} x^2 + \omega ^2 = 0 \end{aligned}$$
(79)

which defines two solutions around \(r=1\),

$$\begin{aligned} R_{\textrm{in}}(r) \sim (r-1)^{-i \omega } \qquad R_{\textrm{out}}(r) \sim (r-1)^{i \omega }, \end{aligned}$$
(80)

where we have indicated in subscript which solution is outgoing or ingoing at the black hole horizon when the time dependence is restored.
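
For completeness, (79) is obtained by inserting \(R \sim (r-1)^x\) into (77): near \(r = 1\) the coefficient of \(R\) behaves as \(\omega ^2/(r-1)^2\) and that of \(\text {d} R/\text {d} r\) as \(1/(r-1)\), so the most singular terms give

$$ x(x-1) + x + \omega ^2 = 0 \quad \Longleftrightarrow \quad x^2 + \omega ^2 = 0, $$

whose roots \(x = \pm i \omega \) yield the two behaviours in (80).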

We expect a perturbation to originate from a finite location outside the black hole. As this perturbation propagates, we expect it to either fall into the black hole or escape to spatial infinity. This defines the behaviour of the causal solution, and corresponds to the quasinormal mode boundary conditions

$$\begin{aligned} \lim _{r \rightarrow 1} R(r) \sim R_{\textrm{in}}(r), \qquad \lim _{r \rightarrow \infty } R(r) \sim R^{\textrm{out}}(r). \end{aligned}$$
(81)

An acausal solution would contain parts that are either propagating out of the black hole, or propagating in from spatial infinity. We rescale out the non-analytic parts of the desired solution,

$$\begin{aligned} R(r) = r^{2 i \omega } (r-1)^{- i \omega } \exp (i \omega r) \phi (r), \end{aligned}$$
(82)

leaving us with a differential equation in \(\phi (r)\). We note that the additional factor of \(r^{i \omega }\) is there to cancel out the asymptotic behavior of \((r - 1)^{-i \omega }\) around spatial infinity.

Explicitly, the rescaled solutions at the boundaries have the following behaviours, obtained by dividing (80) and (78) by the rescaling factor in (82):

$$\begin{aligned} \phi _{\textrm{in}}(r) \sim 1, \qquad \phi _{\textrm{out}}(r) \sim (r-1)^{2 i \omega }, \end{aligned}$$
(83)
$$\begin{aligned} \phi ^{\textrm{out}}(r) \sim 1, \qquad \phi ^{\textrm{in}}(r) \sim r^{- 2 i \omega } \exp (- 2 i \omega r). \end{aligned}$$
(84)

For generic values of \(\omega \), these four solutions have very distinct behaviours. Consider the acausal solutions near their corresponding limits,

$$\begin{aligned} \lim _{r \rightarrow 1} \left|\phi _{\textrm{out}}(r)\right| = \left\{ \begin{array}{ll} \infty , &{} \qquad \textrm{Im}~\omega > 0 \\ 0, &{} \qquad \textrm{Im}~\omega < 0 \end{array} \right. \end{aligned}$$
(85)
$$\begin{aligned} \lim _{r \rightarrow \infty } \left|\phi ^{\textrm{in}}(r)\right| = \left\{ \begin{array}{ll} \infty , &{} \qquad \textrm{Im}~\omega > 0 \\ 0, &{} \qquad \textrm{Im}~\omega < 0. \end{array} \right. \end{aligned}$$
(86)

When \(\textrm{Im}(\omega ) = 0\), both solutions are highly oscillatory. Thus, the boundary conditions,

$$\begin{aligned} \lim _{r \rightarrow 1} \phi (r) \sim 1, \qquad \lim _{r \rightarrow \infty } \phi (r) \sim 1 \end{aligned}$$
(87)

filter out both undesired acausal solutions, since these solutions cannot be approximated in a Bernstein basis of finite degree. Thus, with the boundary conditions in (87), we may identify our solutions with quasinormal mode eigenfunctions,

$$\begin{aligned} \phi (r) = \phi ^{\textrm{out}}_{\textrm{in}}(r). \end{aligned}$$
(88)
Fig. 6

Benchmarking for performance using basis tuples \(\{N,N\}\). The blue line comes from (89), in which the coefficient functions are complex. The orange line corresponds to the replacement \(\omega \rightarrow i \lambda \), i.e. to solving (90), in which the coefficient functions are real. Both follow power laws of the form \(T(N) \sim N^{3.2}\), with the latter performing faster. Calculations were done on a single core of a 2.50 GHz Intel i5 with 8.00 GB RAM

Finally, we compactify the region \([1,\infty )\) to [0, 1] via the change of variables \(r \rightarrow 1/u\), leaving us with

$$\begin{aligned} \left( -l - l^2 + 4 \omega ^2 + u (s^2 + (i + 2 \omega )^2)\right) \phi (u) + (- 2 i \omega + 2 u + u^2 (-3 + 4 i \omega ))\phi '(u) - (u-1) u^2 \phi ''(u) = 0. \end{aligned}$$
(89)

This change of variables moves the regular singularity at \(r = 0\) to \(u = \infty \) and the irregular singularity at \(r = \infty \) to \(u = 0\).
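
As a check on the algebra (purely illustrative; this is not how SpectralBP is invoked), the rescaling (82) and the substitution \(r \rightarrow 1/u\) can be verified symbolically to map (77) into (89), up to an overall factor of \(1/(r(r-1))\); below, w stands for \(\omega = \epsilon \) in the units \(r_s = 1\):

P[r_] = r^(2 I w) (r - 1)^(-I w) Exp[I w r];      (* rescaling factor of (82) *)
R[r_] = P[r] f[1/r];                              (* f plays the role of phi(u) with u = 1/r *)
ode77 = (w^2 r^4 - l (l + 1) r^2 + (l (l + 1) + s^2 - 1) r + 1 - s^2)/(r^2 (r - 1)^2) R[r] +
    R'[r]/(r (r - 1)) + R''[r];                   (* left-hand side of (77) *)
eq89 = (-l - l^2 + 4 w^2 + u (s^2 + (I + 2 w)^2)) phi[u] +
    (-2 I w + 2 u + u^2 (-3 + 4 I w)) phi'[u] - (u - 1) u^2 phi''[u];   (* left-hand side of (89) *)
Simplify[ode77/P[r] - (eq89 /. {phi -> f, u -> 1/r})/(r (r - 1))]       (* -> 0 *)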

We may use Eq. (89) as the ODE eigenvalue problem we feed into SpectralBP. However, we may improve our calculations with the transformation \(\omega \rightarrow i \lambda \), which yields an ODE eigenvalue problem whose coefficient functions are all real,

$$\begin{aligned} \left( -l - l^2 - 4 \lambda ^2 + u (s^2 - (1 + 2 \lambda )^2)\right) \phi (u) + (2 u - u^2 (3 + 4 \lambda ) + 2 \lambda )\phi '(u) - (u-1) u^2 \phi ''(u) = 0, \end{aligned}$$
(90)

and boundary conditions

$$\begin{aligned} \lim _{u \rightarrow 0} \phi (u) \sim 1, \qquad \lim _{u \rightarrow 1} \phi (u) \sim 1. \end{aligned}$$
(91)

The spectral matrices constructed from (90) are strictly real. This has two consequences. First, the calculation of the spectrum is quicker, as demonstrated in Fig. 6: solving a generalized eigenvalue problem with strictly real matrices is computationally cheaper than solving one with complex matrices. Second, the calculated eigenvalues come in only two flavours: real eigenvalues, or complex conjugate pairs. Their eigenvectors are likewise real, or come in complex conjugate pairs.
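
The speed-up can be illustrated with generic dense matrices (the size and seed below are arbitrary; this is only meant to show that a real generalized eigenvalue problem is typically solved faster than a complex one of the same size):

SeedRandom[1]; n = 600;
{Ar, Br} = RandomReal[{-1, 1}, {2, n, n}];
{Ac, Bc} = {Ar + I RandomReal[{-1, 1}, {n, n}], Br + I RandomReal[{-1, 1}, {n, n}]};
First@AbsoluteTiming[Eigenvalues[{Ar, Br}];]   (* real pencil *)
First@AbsoluteTiming[Eigenvalues[{Ac, Bc}];]   (* complex pencil, noticeably slower *)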

When we restore the factor of i, the eigenvalues \(\omega \) are expected to be either strictly imaginary or to come in pairs satisfying \(\omega = - \omega ^*\). In the succeeding subsections, we calculate all eigenvalues and eigenfunctions using (90), and then multiply the resulting spectra by i to retrieve the spectrum of (89).

8.2 Scalar perturbations


We now calculate the quasinormal modes of a scalar perturbation (\(s=0\)) for \(l=3\). A simple Mathematica implementation is in Notebook 4.

The spectrum derived from using a basis tuple of {50, 50} (described in Sect. 6.1) is plotted on the complex plane in Fig. 7. Since the ODE eigenvalue problem is quadratic in \(\omega \), there are 102 eigenvalues as follows from the discussion in Sect. 4.2.

Fig. 7

Calculated spectrum of a scalar field in a Schwarzschild spacetime for \(l=3\) using the basis tuple {50, 50} (described in Sect. 6.1); many of these eigenvalues are spurious. Eigenvalues appear distributed along the negative-imaginary axis because of the continuum of eigenvalues present there

Table 6 Result of a CompareModes command on 2 and 3 basis tuples (discussed in Sect. 6.1). (a) The filtered spectrum for the pair of basis tuples includes purely imaginary modes, which we know to be spurious. These modes may be filtered out using a CompareEigenfunctions command. (b) The filtered spectrum for the trio of basis tuples does not include purely imaginary modes. We have printed here the significant digits shared by basis tuples {80,80} and {100,100}

8.2.1 Filtering spurious modes

In Sect. 6.2, we described two ways to filter out spurious eigenvalues: the CompareModes command and the CompareEigenfunctions command. In Sect. 7, the CompareModes command on a pair of spectra was sufficient to filter out all the spurious modes.

In the current case the CompareModes command at line 6 is not sufficient. Its output in Table 6 (a) includes purely imaginary modes, which are well-known not to exist for scalar perturbations given the boundary conditions we have chosen [45].

Recall that Eq. (77) comes from choosing a stationary ansatz for (75). It has been shown that the retarded Green function of this wave equation possesses a branch cut on the negative-imaginary axis [46, 47]. It is the ‘shadow’ of this continuum of eigenvalues that SpectralBP feels, as can be observed in Fig. 7.

To filter these modes out, we demonstrate two solutions in Notebook 4. These can be found in lines 8 and 11.

The first method is straightforward: calculate the spectrum of a third basis tuple and select eigenvalues common to all three spectra. We have chosen {100,100} as our third basis tuple, and the corresponding output is in Table 6 (b). The purely imaginary modes are successfully filtered out.
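
In either case, the underlying eigenvalue-matching step can be pictured as the following minimal filter (illustrative only; the packaged CompareModes command is more careful than this): keep the entries of one spectrum that agree with some entry of another spectrum to at least the requested number of significant digits.

commonModes[spec1_List, spec2_List, digits_Integer] :=
  Select[spec1, Function[w1, AnyTrue[spec2, Abs[# - w1] <= 10^-digits Abs[w1] &]]];
commonModes[{1.2345678 - 0.321 I, -18.67 I}, {1.2345679 - 0.321 I, -17.99 I}, 5]
(* keeps only 1.2345678 - 0.321 I *)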

The second method is to compare eigenfunctions between two basis tuples. This is the purpose of the CompareEigenfunctions command, whose output on line 8 is an empty set. This confirms that these modes are indeed spurious: their eigenfunctions are not approximately equal. One is then justified in filtering out the purely imaginary modes in Table 6 (a).

The calculation of a third spectrum may be numerically prohibitive, especially when only a small subset of eigenvalues is suspected to be spurious. In such cases, testing only the eigenfunctions of the suspected spurious eigenvalues, as is done in line 10, is preferable to computing a third spectrum.

This second filter works because the rescaling in Eq. (82) keeps the other solutions of our ODE eigenvalue problem non-analytic. In the case of the branch-cut eigenvalues, the corresponding eigenfunctions remain singular at one of the boundaries after the rescaling [48]. Thus, the approximation of these eigenfunctions in a Bernstein basis fails to converge near that boundary. This idea is explored further in Sect. 8.2.2.

This failure to converge is shown explicitly in Fig. 8, where we compare the eigenfunctions of the spurious eigenvalue \(-18.67 i\) and the non-spurious eigenvalue \(\pm 1.3507\dots \) \(-0.1930\dots i\).

Using a GetEigenfunctions command, we plotted the absolute difference between the eigenfunctions of approximately common eigenvalues for two spectral basis orders. The maximum error for the spurious eigenvalue is indicative of the presence of a singularity in the eigenfunction,

$$\begin{aligned} ||\phi ^{80}_1(u) - \phi ^{50}_1(u)||_\infty&\sim 10^{14}, \\ ||\phi ^{80}_2(u) - \phi ^{50}_2(u)||_\infty&\sim 10^{-17}. \end{aligned}$$
Fig. 8

The absolute difference between eigenfunctions of approximately equal eigenvalues using Bernstein basis orders 50 and 80. \(\phi _1(u)\) corresponds to the eigenvalue \(\omega = -18.67 i\), while \(\phi _2(u)\) corresponds to the eigenvalue \(\omega = \pm 1.3507\dots \) \(-0.1930\dots i\). The former indicates that the eigenfunctions do not converge to a non-singular function, while the latter indicates convergence

8.2.2 On the discrete spectrum condition

We echo an idea from [35]. One must be careful that, after the rescaling, the boundary conditions remain capable of filtering out the undesired solutions. For example, there are instances when peeling off an extra factor of \((r-1)^{-1}\), so that \(\phi (r) \sim (r-1)\), is desirable. This boundary condition would fail to filter out the acausal solution at the black hole horizon, since both the acausal and causal solutions vanish at \(r = 1\). The spectral method would then try to solve for solutions of the form,

$$\begin{aligned} \phi (r) = A \phi ^{\textrm{out}}_{\textrm{in}}(r) + B \phi ^{\textrm{out}}_{\textrm{out}} (r), \end{aligned}$$
(92)

which generally is a mixture of causal and acausal parts at the black hole horizon. The ultimate consequence is that the boundary-value problem no longer has a discrete spectrum of eigenvalues. Continuing to calculate the spectrum using {50, 50} and {80, 80} would result in Fig. 9. As expected, SpectralBP is unable to find the desired discrete spectrum.

Fig. 9

Spectrum calculated when \(\phi (r)\) is rescaled so that \(\lim _{r \rightarrow 1} \phi (r) \sim (1-r)\), for basis tuples {50, 50} (blue circles) and {80, 80} (red squares). The problem has become ill-posed since the rescaling no longer imposes the correct boundary conditions corresponding to a discrete spectrum

9 Algebraically special modes

It is well known that the standing wave equation for odd- and even-parity gravitational perturbations (\(s=2\)) has an exact solution at what Chandrasekhar called the algebraically special mode. It is a purely imaginary frequency which appears to separate two different branches of the quasinormal mode spectrum: a lower branch that spirals towards the imaginary axis and an upper branch corresponding to the asymptotic high-damping regime.

It is a curious mode, whose frequencies can be shown analytically [49,50,51] to be

$$\begin{aligned} M\omega _l = -i \dfrac{(l-1)l(l+1)(l+2)}{12}, \end{aligned}$$
(93)

and whose corresponding eigenfunctions, with singularities properly scaled out, can be expressed analytically as a truncated polynomial. For example, for \(l = 2\),

$$\begin{aligned} \phi _2(u) &= 1 + \dfrac{115}{7} (u-1) + \dfrac{860}{7} (u-1)^2 + \dfrac{11572}{21} (u-1)^3 + \dfrac{34486}{21} (u-1)^4 + \dfrac{356662}{105} (u-1)^5 \\ &\quad + \dfrac{44372}{9} (u-1)^6 + \dfrac{44372}{9} (u-1)^7 + \dfrac{77651}{27} (u-1)^8 + \dfrac{11093}{9} (u-1)^9. \end{aligned}$$
(94)
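
Since the exact eigenfunction is available in this case, the claim can be checked directly. The following snippet (illustrative, using only built-in Mathematica commands) verifies that (94) satisfies (90) for \(s = 2\), \(l = 2\) at \(\lambda = -4\), i.e. that it solves (89) at the algebraically special frequency \(\omega = -4i\):

phi2[u_] = 1 + 115/7 (u - 1) + 860/7 (u - 1)^2 + 11572/21 (u - 1)^3 +
    34486/21 (u - 1)^4 + 356662/105 (u - 1)^5 + 44372/9 (u - 1)^6 +
    44372/9 (u - 1)^7 + 77651/27 (u - 1)^8 + 11093/9 (u - 1)^9;   (* Eq. (94) *)
With[{s = 2, l = 2, lam = -4},
 Simplify[(-l - l^2 - 4 lam^2 + u (s^2 - (1 + 2 lam)^2)) phi2[u] +
   (2 u - u^2 (3 + 4 lam) + 2 lam) phi2'[u] - (u - 1) u^2 phi2''[u]]]
(* -> 0 *)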

Various numerical investigations [23, 24] are hard-pressed to converge towards this exact result. It has been argued [50] that the discrepancy can be traced to two causes: (1) the algebraically special mode is sensitive to the exact form of the gravitational potential (affecting WKB and Pöschl–Teller potential fitting) and (2) some methods are sensitive to a properly defined mode number (affecting the continued-fraction method of Leaver).

In fact, numerical methods that are able to find eigenvalues on the complex plane do not generally work when those eigenvalues are located exactly on the imaginary axis. For example, the continued fraction method is not convergent for modes on the imaginary axis [24, 52, 53]. This disputes previous analytic and numerical results concerning Kerr QNMs on the negative-imaginary axis. One can, however, deduce the existence of these modes by finding ‘mode sequences’ that get arbitrarily close to the negative-imaginary axis, including the algebraically special mode [52, 54]. How these modes move around the negative-imaginary axis is not accessible to Leaver’s method.

Table 7 Gravitational perturbations with \(l=2\) and \(l=3\), calculated using basis tuples {350, 350} and {400, 400}. In units where the horizon is at \(r=1\), we have \(M=1/2\), so that \(\omega _2 = -4i\) and \(\omega _3 = -20i\) according to (93). Our numerical results agree with these values to 295 and 227 significant digits, respectively

In this respect, spectral methods enjoy a significant advantage over Leaver’s method: an algorithm such as SpectralBP is capable of finding eigenvalues on the imaginary axis. Unlike Leaver’s method, which is based on a local power series expansion at one of the horizons, spectral methods find solutions globally. This has been reported before in [35], where the spectral algorithm QNMspectral finds a novel infinite set of purely imaginary modes for massless scalar perturbations in a Schwarzschild–de Sitter background. Because the spectral method is able to find these overdamped modes, one is able to observe complex bifurcation events in which quasinormal modes collide on the negative-imaginary axis, sink into it, move along it and emerge out of it. We have also used SpectralBP to uncover an interesting scenario of this kind in a Schwarzschild–AdS background [27].

Table 8 Gravitational perturbations with \(l=4\) and \(l=5\), calculated using basis tuples {350, 350} and {400, 400}. In units where the horizon is at \(r=1\), we have \(M=1/2\), so that \(\omega _4 = -60i\) and \(\omega _5 = -140i\) according to (93). Our numerical results agree with these values to 137 and 115 significant digits, respectively

9.1 Algebraically special eigenvalues

We now solve (90) for \(s = 2\) and \(l = 2,3,4,5\), and reverse the transformation \(\omega \rightarrow i \lambda \) to retrieve the eigenvalues of (89). We have used basis tuples of {350,350} and {400,400} (described in Sect. 6.1) for all calculations, and we have filtered out spurious eigenvalues on the negative-imaginary axis using CompareEigenfunctions. The resulting spectra can be seen in Tables 7 and 8. We show only the ten least-damped eigenvalues, using Mathematica’s notation for significant digits for the purely imaginary eigenvalues.

The coincidence of the numerically calculated purely imaginary mode \(\omega _l'\) with the algebraically special mode \(\omega _l\) is very strong: \(\omega _2' \approx \omega _2 = - 4 i\) to within 295 significant digits, \(\omega _3' \approx \omega _3 = -20 i\) to within 227 significant digits, \(\omega _4' \approx \omega _4 = - 60i\) to within 137 significant digits and \(\omega _5' \approx \omega _5 = - 140i\) to within 115 significant digits. This is to be expected, since we are using a polynomial basis to numerically find a solution whose exact form is a truncated polynomial.

As an additional check, we have verified that the eigenfunction found by SpectralBP using a basis tuple of {400,400} for \(l = 2\) is consistent with (94) to within an error of \(10^{-250}\). The eigenfunctions for \(l = 3,4\) are also truncated polynomials, of expected degrees 41 and 121 respectively. One might need higher-precision arithmetic to confirm that the \(l=5\) eigenfunction is of degree 281.

As we have described in Sect. 8.1, the eigenvalues of (90) are either purely real or come in complex conjugate pairs. As a consequence, when we transform back to \(\omega \) from \(\lambda \) the calculated purely imaginary eigenvalues have exactly no real part. This avoids a criticism levelled at numerical calculations that find a single mode near the ASM with a finite real part, whose symmetric partner \(\omega = - \omega ^*\) is unexpectedly absent.

The main lesson here is that SpectralBP manages exceptionally well to find eigenvalues on the negative-imaginary axis while filtering out spurious overdamped modes, as would other spectral or pseudospectral methods. This is in contrast with continued fraction methods, which cannot converge when the real part of the eigenvalue vanishes.

As a final note, and to illustrate the resources required to calculate one of the tables in this section, a single spectrum calculation for a basis tuple of \(\{400,400\}\) takes around 1 h, running on a single core of a 2.50 GHz Intel i5 with 8.00 GB RAM.

9.2 Boundary behavior of the eigenfunctions

For completeness, we warn against labelling solutions found by spectral methods as bona fide quasinormal modes whenever the calculated eigenvalues render the indicial equation (79) non-generic at one or more of the singular points. This may affect whether or not the solution found satisfies the quasinormal mode boundary conditions.

For example, the finiteness of the eigenfunctions of the algebraically special modes at the boundaries can be folded back into (82), seemingly implying that the quasinormal mode boundary conditions are satisfied and that these imaginary frequencies correspond to quasinormal modes.

The indicial equation (79) is said to be generic when its two solutions, \(\pm i \omega \), do not differ by an integer. This is manifestly true for general complex values of \(\omega \). In this case, the power series expansion at \(r = 1\) (equivalently \(u = 1\)) of the rescaled function \(\phi \) in (82) converges, whether for the dominant or the subdominant solution. At the algebraically special mode, however, the indicial equation is non-generic. From (93) and \(M = 1/2\), the solutions of the indicial equation are both integers,

$$\begin{aligned} \pm i \omega _l = \pm \dfrac{(l-1)l(l+1)(l+2)}{6}. \end{aligned}$$
(95)

In this case, only one of the two solutions is assured to admit a convergent power series expansion at \(u = 1\). The other solution, say \(\tilde{\phi }(u)\), may then acquire a logarithmic divergence, of the form

$$\begin{aligned} \tilde{\phi }(u) \sim c_0 \phi (u) \ln (u-1) + a_0 (u-1)^8 + a_1 (u-1)^9 + \cdots \end{aligned}$$
(96)

However, a miraculous cancellation may occur [55], in which case the logarithmic term vanishes. Both solutions may then be expressed as power series expansions at \(u = 1\). It is this latter case that occurs at the algebraically special mode of the Regge–Wheeler equation. This means that the ingoing and outgoing solutions at the black hole horizon may be rescaled to have the form,

$$\begin{aligned} \begin{array}{l} \phi _{\textrm{in}}(u) \sim b_0 + b_1 (u-1) + \cdots \\ \phi _{\textrm{out}}(u) \sim a_0 (u-1)^8 + a_1 (u-1)^9 + \cdots \end{array} \end{aligned}$$
(97)

Note that for \(l = 2\) the indicial exponents \(\pm i \omega _2 = \pm 4\) differ by the integer 8, which is precisely the power at which the outgoing series in (97) begins. For the specific case of the ASM, the following two statements are then not mutually exclusive: (1) the ASM eigenfunction, properly rescaled, has a regular, well-behaved Frobenius expansion in powers of \((u-1)\), and (2) it is an inextricable mixture of the two linearly independent solutions at the black hole horizon, corresponding to a causal ingoing mode and an acausal outgoing mode. The reconciliation between the analytic and numerical results is thus simple but subtle; there is no contradiction. While SpectralBP has indeed found an eigenvalue-eigenfunction pair of the Regge–Wheeler equation, this solution is an inseparable mixture of both ingoing and outgoing solutions at the black hole horizon, and therefore is not a quasinormal mode.

In summary, SpectralBP picks up the algebraically special frequency to an incredible degree of accuracy, but because of the peculiar nature of the algebraically special mode, the corresponding eigenfunction does not satisfy the quasinormal mode boundary conditions, as would be expected from [55].

10 Conclusion

This work makes a case for the use of Bernstein polynomials as a basis for spectral and pseudospectral methods applied to ordinary differential eigenvalue problems. A prime example of such problems is the calculation of quasinormal modes in black hole spacetimes. The Bernstein polynomials constitute a non-orthogonal spectral basis, which may explain why they are much less utilized than Chebyshev or Fourier basis functions. In contrast to its more popular counterparts, though, a Bernstein basis allows one to decouple some of the spectral weights relevant to the boundary conditions of ordinary differential eigenvalue problems. More specifically, the weights of the first q and the last r basis polynomials can be fixed for free, without recourse to the differential equation. For some applications, this proves to be a significant advantage.

We developed a user-friendly Mathematica package, SpectralBP, as a general spectral solver for eigenvalue problems. This package fully utilizes the properties of Bernstein polynomials and several other algorithmic enhancements (such as a novel inverse iteration method) that we shall describe in a later paper. As far as we know, SpectralBP is unique among existing spectral codes in its use of a Bernstein basis. We described its key functionalities and showcased several examples of its use. In particular, to serve both as tutorials and benchmarks, we featured applications of SpectralBP to a number of model eigenvalue problems in quantum mechanics. Most importantly, we have also applied SpectralBP to quasinormal mode problems in the Schwarzschild geometry. In all of our example cases, SpectralBP succeeded in providing very accurate results. Remarkably, with only modest resources, we are able to calculate the algebraically special modes of Schwarzschild gravitational perturbations. Purely imaginary modes are notoriously difficult to calculate with more conventional numerical methods, even when the solution is straightforward to obtain analytically, as is the case for the Schwarzschild ASM. To the best of our knowledge, ours is the most accurate numerical calculation of these algebraically special modes in the extant literature, agreeing with the analytical prediction to a staggering 295 significant digits (cf. Table 7). We have supplemented our calculations with a discussion of the subtleties of the boundary conditions at the algebraically special mode. Moving forward, spectral methods should be a very useful tool in finding quasinormal modes on the negative imaginary axis.

Encouraged by these successes, we believe that SpectralBP may serve as a useful tool for the black-hole physics community, or for anyone seeking to solve a differential eigenvalue problem. Future work will look into applications of SpectralBP to the Kerr spacetime, as well as the algorithmic enhancements mentioned above (such as a novel inverse iteration method), to be described in a later paper. We have also used SpectralBP to discover new and interesting properties of the quasinormal modes of Schwarzschild-anti-de Sitter spacetime, which will also be discussed in a later paper.