1 Introduction

This paper is a sequel to two articles of the second author, written over a period of more than 25 years. The first, [12], presents numerical methods for self-adjoint Sturm-Liouville-type equations with matrix coefficients, while the second, [13], analyses the application of a dissipative barrier scheme to a Schrödinger equation on a half-line. In the years since [12] appeared, there has been considerable activity on Schrödinger-type equations with matrix-valued coefficients—see, e.g., Clark and Gesztesy [4] and Clark, Gesztesy, Holden and Levitan [5], together with the substantial bibliographies therein. Some results from the scalar case carry across to the matrix case in a straightforward way; some require new proofs; and some are simply no longer true. As a simple example, the usual spectral data only determines the Titchmarsh-Weyl coefficient, and hence the matrix-valued potential, up to unitary equivalence. Our concern in this article is to examine which of the results in [13] are still true in the case of a matrix-valued potential, and which are not. At the end of the article, we indicate some directions for future work on quantum waveguide problems.

To fix notation, we consider the dissipative matrix Schrödinger equation on the half-line \([0,\infty )\),

$$ -\underline{u}^{\prime\prime}+(Q+i\gamma S)\underline{u}=\lambda \underline{u}, $$
(1)

with a regular self-adjoint-type boundary condition at the origin. The precise form of this condition is not important for our results, so we shall use

$$ \cos(\alpha)\underline{u}(0)-\sin(\alpha)\underline{u}^{\prime}(0)=\underline{0}, $$
(2)

where α ∈ [0,π), even though this is not the most general form. Here \(\underline {u}\) is a vector-valued function in a subspace of \(L^{2}([0,\infty ))^{n}\), the parameter γ is a non-zero real, and the coefficients Q, S satisfy the following hypotheses:

  • (A1): Q(x) is Hermitian-valued, integrable over compact subsets of \([0,\infty )\), and eventually periodic with period a > 0, i.e., there exists R0 ≥ 0 such that

    $$ Q(x+a)=Q(x), \quad \forall x \geq R_{0}. $$
    (3)
  • (A2): S is a cutoff function with support in [0,R] for some R ≥ R0: there exists 0 < c < 1 such that

    $$ S(x)= \left\{\begin{array}{ll} I, & x< cR;\\ 0, & x\geq R. \end{array}\right. $$
    (4)

When x ∈ (cR,R), we assume that elements of S are measurable and their values lie in [0,1].

The hypothesis (A2) in particular is stronger than is really needed for most of our results, but sufficient to analyse most dissipative barrier schemes.

We define an operator L0 by:

$$ L_{0}\underline{u}=-\underline{u}^{\prime\prime}+Q\underline{u}, $$
(5)

with domain:

$$ \begin{array}{@{}rcl@{}} D(L_{0})= \{{\kern1.7pt} \underline{u} \in L^{2}(0,\infty) | -\underline{u}^{\prime\prime}+Q\underline{u} \in L^{2}(0,\infty),\\ \cos(\alpha)\underline{u}(0)-\sin(\alpha)\underline{u}^{\prime}(0)= \underline{0} {\kern1.7pt}\}. \end{array} $$
(6)

Our aim is to present a substantial analysis of the interaction between the dissipative barrier approach to the numerical approximation of the spectrum of L0 and interval truncation methods. Our methods are based on Floquet theory and Weyl-Titchmarsh functions.

2 Summary of results

We investigate the following results for an eigenvalue λγ of our problem with the dissipative term iγS(⋅), which develops from the eigenvalue λ of the problem with γ = 0:

  1.

    For our non-truncated problem, if λ is an isolated eigenvalue of multiplicity ν, where 1 ≤ ν ≤ n, and Q is a compactly supported perturbation of a Hermitian periodic function, then for all sufficiently small γ > 0 the approximations λγ,j, j = 1,…,ν, satisfy the bound:

    $$ |\lambda+i\gamma-\lambda_{\gamma,j}|\leq C_{1}\gamma \exp(-c C_{2}R), $$
    (7)

    where C1 and C2 are positive constants.

  2.

    If our problem is truncated to some interval [0,X], X > R, by imposing a boundary condition at x = X, then any eigenvalue λγ,X,good of the truncated problem which converges to λγ as \(X\to \infty \) satisfies:

    $$ |\lambda_{\gamma}-\lambda_{\gamma,X,\text{good}}|\leq C_{3} \exp\left( -C_{4}(X-R)\right), $$
    (8)

    where C3 and C4 are positive constants which depend on λγ. Moreover, the total algebraic multiplicity of all λγ,X,good converging to λγ is equal to the algebraic multiplicity of λγ.

  3.

    If our problem is truncated to some interval [0,X], X > R, by imposing a boundary condition at x = X, then any eigenvalue λγ,X,bad of the truncated problem which converges as \(X\to \infty \) to a point which is not in the spectrum of L0 + iγS satisfies

    $$ |\Im(\lambda_{\gamma,X,\text{bad}})| \leq C_{5} \exp(-C_{6}(X-R)), $$
    (9)

    where C5 and C6 are positive constants.

  4.

    One crucial difference between the operators considered here and the scalar-coefficient operators in [13] concerns the behaviour of eigenvalues of L0 + iγS as γ ↘ 0. In [13], it is shown that if an eigenvalue λγ of L0 + iγS evolves continuously to become an interior point of a spectral band of L0 when γ = 0, then a threshold effect occurs: there exists γcrit > 0 such that, as γ ↘ γcrit, λγ converges to the interior point of the spectral band. We give an example to show that this is not generally true for the operators considered in the present article.

The paper is organised as follows. Section 3 is devoted to Floquet theory and the Glazman decomposition for the main equation. Sections 4 and 5 contain the analysis of the method for the truncated and non-truncated problems respectively. Finally, Section 6 presents some numerical experiments to illustrate our results.

3 Floquet theory and Glazman decomposition for matrix Schrödinger operators

The essential spectrum of L0 can be described using Floquet theory, by studying the solutions of the differential equation (1) over just one period. We shall review the elements of Floquet theory below, primarily to introduce the notation which we require for an analysis of the domain truncation technique.

For the point spectrum of L0 + iγS, we shall apply the Glazman decomposition technique [1, Appendix 2].

Recall the parameter R > 0 from hypothesis (A2). For a fixed \(\lambda \in \mathbb {C}\) and any non-zero constant vector \(\underline {h}\), consider the following two boundary value problems:

$$ P_{\text{left}}: \left\{\begin{array}{lll} -\underline{v}^{\prime\prime}+(Q+i\gamma S)\underline{v}=\lambda \underline{v},& x\in (0,R); \\ \cos(\alpha)\underline{v}(0)-\sin(\alpha)\underline{v}^{\prime}(0)=\underline{0} \in \mathbb{C}^{n}; \\ \underline{v}(R)=\underline{h} \in \mathbb{C}^{n}; \end{array}\right. $$
(10)
$$ P_{\text{right}}: \left\{\begin{array}{lll} -\underline{w}^{\prime\prime}+Q \underline{w}=\lambda \underline{w}, & x\in (R,\infty); \\ \underline{w}(R)=\underline{h} \in \mathbb{C}^{n}; \\ \underline{w}\in L^{2}(R,\infty). \end{array}\right. $$
(11)

If these problems can be solved uniquely for every \(\underline {h}\), then the maps \(\underline {v}(R) \longmapsto \underline {v}^{\prime }(R) \in \mathbb {C}^{n}\) and \(\underline {w}(R) \longmapsto -\underline {w}^{\prime }(R) \in \mathbb {C}^{n}\) are linear operators (Weyl-Titchmarsh operators or Dirichlet to Neumann maps) which admit representations by n × n matrices:

\(M_{\text {left}}(\lambda )\underline {v}(R)=\underline {v}^{\prime }(R);\) \(M_{\text {right}}(\lambda )\underline {w}(R)=-\underline {w}^{\prime }(R).\)

The matrices Mleft(λ) and Mright(λ) are analytic functions of λ on suitable subsets of \({\mathbb C}\). In particular Mleft is meromorphic with poles in \({\mathbb {C}}^{+}\) when γ > 0, and on the real axis when γ = 0; the function Mright is analytic outside Specess(L0) except at a set of poles (see [4, 9]).

If λ is an eigenvalue of L0 + iγS, then there exists an eigenfunction \(\underline {u};\) we can take \(\underline {v}(x)=\underline {u}(x), x\in [0,R],\) and \(\underline {w}(x)=\underline {u}(x), x\in [R,\infty )\) so that:

\([ M_{\text {left}}(\lambda )+M_{\text {right}}(\lambda )]\underline {u}(R)=\underline {u}^{\prime }(R)+(-\underline {u}^{\prime }(R))=\underline {0}\).

Assuming that \(\underline {u}(R)\) is not zero, this leads to the condition that \({\ker }(M_{\text {left}}(\lambda )+M_{\text {right}}(\lambda ))\) is not trivial. In fact, if \(\underline {u}(R)\) were zero, then both Mleft(⋅) and Mright(⋅) would be undefined at λ, so the condition that \(\underline {u}(R)\) be non-zero is satisfied automatically if Mleft(λ) and Mright(λ) are well defined.

Conversely, suppose there exists \(\mu \in \mathbb {C}\) such that

$$ {\ker} (M_{\text{left}}(\mu)+M_{\text{right}}(\mu))\neq \{0\}. $$
(12)

Take \(\underline {h}\in \ker (M_{\text {left}}(\mu )+M_{\text {right}}(\mu ))\) and define a non-trivial vector \(\underline {u}\) by:

$$ \underline{u}(x)= \left\{\begin{array}{ll} \underline{v}(x), & x\leq R, \\ \underline{w}(x), & x\geq R, \end{array}\right. $$
(13)

where \(\underline {v}\) and \(\underline {w}\) are the solutions of Pleft and Pright respectively for the case λ = μ. Then \(\underline {u}\) is a continuous solution of the differential equation \(-\underline {u}^{\prime \prime }+(Q+i\gamma S)\underline {u}=\mu \underline {u}\) on each of the intervals (0,R) and \((R,\infty )\). Moreover, it has a continuous first derivative at x = R: this follows from (12), since \(\underline {h}\in \ker (M_{\text {left}}(\mu )+M_{\text {right}}(\mu ))\), together with the definitions of Mleft(μ) and Mright(μ). This means that \(\underline {u}\) is an eigenfunction of L0 + iγS with eigenvalue μ. We have proved the following result.

Lemma 1

Suppose that Mleft(λ) and Mright(λ) are well defined at λ = μ. Then μ is an eigenvalue of L0 + iγS if and only if the kernel of Mleft(μ) + Mright(μ) is non-trivial.

We now describe how to calculate Mleft and Mright. For Mleft, the procedure is (at least in principle) straightforward:

$$ M_{\text{left}}(\lambda)=U^{\prime}(R)U(R)^{-1}, $$
(14)

where U is the n × n matrix-valued solution of the initial value problem

$$ \begin{array}{@{}rcl@{}} &-&U^{\prime\prime}+(Q+i\gamma S)U=\lambda U,\\ &&U(0)= I \sin\alpha \in \mathbb{C}^{n\times n},\\ &&U^{\prime}(0)=I \cos\alpha \in \mathbb{C}^{n\times n}. \end{array} $$
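The following minimal sketch (not the authors' code; the potential Q, the barrier S and the parameters n, R, a, γ, α below are illustrative placeholders) shows how Mleft(λ) can be computed from (14) by integrating the matrix initial value problem numerically.

```python
# Sketch: M_left(lambda) = U'(R) U(R)^{-1} from (14), with U solving the matrix IVP above.
# All data (Q, S, n, R, a, gamma, alpha) are illustrative assumptions, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

n, R, a, gamma, alpha = 2, 10.0, 2*np.pi, 0.25, 0.0      # hypothetical problem data

def Q(x):                                   # Hermitian, periodic for x >= R_0 = 0 (A1)
    return np.diag([0.0, np.sin(x)])

def S(x):                                   # dissipative barrier, supported in [0, R] (A2)
    return np.eye(n) if x < R else np.zeros((n, n))

def M_left(lam):
    """Dirichlet-to-Neumann map at x = R for the left problem (10)."""
    def rhs(x, y):                          # y stacks U and U' column-wise
        U, V = y[:n*n].reshape(n, n), y[n*n:].reshape(n, n)
        return np.concatenate([V.ravel(),
                               ((Q(x) + 1j*gamma*S(x) - lam*np.eye(n)) @ U).ravel()])
    U0 = np.sin(alpha)*np.eye(n, dtype=complex)           # U(0)  = I sin(alpha)
    V0 = np.cos(alpha)*np.eye(n, dtype=complex)           # U'(0) = I cos(alpha)
    sol = solve_ivp(rhs, (0.0, R), np.concatenate([U0.ravel(), V0.ravel()]),
                    rtol=1e-10, atol=1e-12)
    UR, VR = sol.y[:n*n, -1].reshape(n, n), sol.y[n*n:, -1].reshape(n, n)
    return VR @ np.linalg.inv(UR)           # = U'(R) U(R)^{-1}
```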

In order to find Mright, we bear in mind that Q(x) is periodic for x ≥ R0 and hence for x ≥ R ≥ R0; also the dissipative perturbation S(x) = 0 for x ≥ R. Therefore, we can apply Floquet theory [6] to the system of differential equations. We rewrite (1) as a first-order differential system:

$$ \left( \begin{array}{c} \underline{u}(x) \\ \underline{u}^{\prime}(x) \end{array}\right)' = \left( \begin{array}{cc} 0 & I\\ Q +i\gamma S-\lambda I & 0 \end{array}\right) \left( \begin{array}{c} \underline{u}(x) \\ \underline{u}^{\prime}(x) \end{array}\right). $$
(15)

Let Φ(x,λ) be the fundamental matrix of this equation, i.e.

$$ {\Phi} '(x,\lambda)= \left( \begin{array}{cc} 0 & I\\ Q +i\gamma S -\lambda I & 0 \end{array}\right) {\Phi}(x,\lambda), {\kern2.2pt}{\kern2.2pt}{\kern2.2pt} {\Phi}(0,\lambda) = I_{2n\times 2n}, $$
(16)

where I2n×2n is the 2n × 2n identity. Define a non-singular matrix A(λ) by

$$ A(\lambda)={\Phi}(R,\lambda)^{-1}{\Phi}(R+a,\lambda). $$
(17)

Then we can find the eigenvalues of A(λ), say ϱ1(λ),ϱ2(λ),…,ϱ2n(λ). These eigenvalues are called Floquet multipliers.
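Numerically, A(λ) is simply the transition matrix of the system (15) over one period starting at x = R (where S vanishes and Q is a-periodic), so the Floquet multipliers are its eigenvalues. A minimal sketch, reusing the illustrative data of the previous sketch:

```python
# Sketch: A(lambda) of (17) as the transition matrix of (15) from x = R to x = R + a;
# its eigenvalues are the Floquet multipliers (Q, R, a, n as in the previous sketch).
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(lam, Q, R, a, n):
    def rhs(x, y):
        B = np.block([[np.zeros((n, n)), np.eye(n)],
                      [Q(x) - lam*np.eye(n), np.zeros((n, n))]])   # S = 0 for x >= R
        return (B @ y.reshape(2*n, 2*n)).ravel()
    y0 = np.eye(2*n, dtype=complex).ravel()
    sol = solve_ivp(rhs, (R, R + a), y0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2*n, 2*n)

def floquet_multipliers(lam, Q, R, a, n):
    return np.linalg.eigvals(monodromy(lam, Q, R, a, n))           # their product equals 1
```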

Write A(λ) in Jordan canonical form, i.e.

$$ A(\lambda)=F(\lambda)J(\lambda)F(\lambda)^{-1} $$
(18)

where J(λ) is a Jordan matrix and F(λ) is a non-singular matrix. It may then be shown that

$$ {\Phi}(x+a,\lambda)={\Phi}(x,\lambda)F(\lambda)J(\lambda)F(\lambda)^{-1}, $$

so Φ(x + a,λ)F(λ) = Φ(x,λ)F(λ)J(λ). In fact, the columns of Φ(⋅,λ)F(λ) are called Floquet solutions of (15).

The following lemma summarises some standard facts (for more information about Floquet theory for Hamiltonian systems, see [5, 16]):

Lemma 2

  1.

    The Floquet multipliers ϱ1(λ),…,ϱ2n(λ) satisfy ϱ1(λ)…ϱ2n(λ) = 1.

  2.

    For λ∉Specess(L0), there exist precisely n of the ϱj(λ), say ϱ1(λ),…,ϱn(λ), such that |ϱj(λ)| < 1.

Proof 1

  1.

    This statement holds because \(\det (A(\lambda ))=1\), which follows from the fact that \(\det ({\Phi }(x,\lambda ))\) is a non-zero constant (the coefficient matrix on the right-hand side of (16) having zero trace).

  2.

    Equation (1) is in the limit-point case at infinity, and hence for λ outside the essential spectrum it has precisely an n-dimensional space of solutions in \(L^{2}(0,\infty )\). None of the Floquet multipliers ϱ1(λ),…,ϱ2n(λ) has absolute value 1, for otherwise it would be possible to construct a Weyl singular sequence of oscillatory solutions from the corresponding Floquet solution; this is impossible as λ lies outside the essential spectrum. Thus, precisely n of the Floquet multipliers must have absolute value strictly less than 1, precisely n have absolute value strictly greater than 1, and we can order them so that \(|{\varrho _{1}(\lambda )}|,\dots ,|{\varrho _{n}(\lambda )}|<1\) and \(|\varrho _{n+1}(\lambda )|,\dots ,|\varrho _{2n}(\lambda )|>1.\)

It follows from Lemma 2 that if λ lies outside the essential spectrum, then the Jordan matrix J(λ) in (18) decomposes into n × n blocks as

$$ J(\lambda)=\left( \begin{array}{cc} J_{1}(\lambda) & 0 \\ 0 & J_{2}(\lambda) \end{array}\right), $$
(19)

where J1(λ) corresponds to ϱ1(λ),…,ϱn(λ) and J2(λ) corresponds to ϱn+1(λ), …, ϱ2n(λ). In this case, the matrix

$$ {\Gamma}(x,\lambda)={\Phi}(x,\lambda)F(\lambda)\left( \begin{array}{c} J_{1}(\lambda) \\ 0 \end{array}\right), $$

has columns which span the n-dimensional space of square-integrable solutions; moreover, for \(N\in \mathbb {N}:\)

$$ {\Gamma}(x+Na,\lambda)={\Gamma}(x,\lambda)J_{1}(\lambda)^{N}. $$
(20)

We can partition Γ(x,λ) as

$$ {\Gamma}(x,\lambda)=\left( \begin{array}{c} {\Psi}(x,\lambda) \\ {\Psi}^{\prime}(x,\lambda) \end{array}\right), $$
(21)

where Ψ(x,λ) is an n × n matrix solution of

$$ -{\Psi}^{\prime\prime}+(Q+i\gamma S){\Psi}=\lambda {\Psi}, $$
(22)

whose columns (as we mentioned above) span the space of all square-integrable solutions of (1). Hence, by direct verification, if Ψ(R,λ) is invertible, then the function

$$ \underline{w}(x) = {\Psi}(x,\lambda){\Psi}(R,\lambda)^{-1}\underline{h} $$

is the solution of the problem Pright. A simple calculation now shows that \(-\underline {w}^{\prime }(R) = M_{\text {right}}(\lambda )\underline {w}(R)\), where

$$ M_{\text{right}}(\lambda) = -{\Psi}^{\prime}(R,\lambda){\Psi}(R,\lambda)^{-1}. $$
(23)
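As a concrete illustration (again a sketch under the same illustrative assumptions, and additionally assuming that A(λ) is diagonalizable so that eigenvectors can replace the Jordan machinery), Ψ(R,λ) and Ψ′(R,λ) can be read off from the eigenvectors of A(λ) associated with the multipliers of modulus less than one:

```python
# Sketch of (23): columns of the decaying Floquet data at x = R give Psi(R) and Psi'(R);
# monodromy() is the function from the sketch following (17).
import numpy as np

def M_right(lam, Q, R, a, n):
    A = monodromy(lam, Q, R, a, n)                 # transition matrix over one period
    rho, V = np.linalg.eig(A)                      # Floquet multipliers / eigenvectors
    W = V[:, np.argsort(np.abs(rho))[:n]]          # the n columns with |rho| < 1 (Lemma 2)
    PsiR, dPsiR = W[:n, :], W[n:, :]               # Psi(R, lambda), Psi'(R, lambda)
    return -dPsiR @ np.linalg.inv(PsiR)            # M_right(lambda), Eq. (23)
```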

We immediately have the following corollary to Lemma 1.

Corollary 1

Suppose that Mleft(λ) is well defined and that Ψ(R,λ)− 1 exists. Then λ is an eigenvalue of L0 + iγS if and only if

$$ {\ker}\left( M_{\text{left}}(\lambda)-{\Psi}^{\prime}(R,\lambda){\Psi}(R,\lambda)^{-1}\right) \neq \{ 0 \}. $$
(24)
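In practice, Corollary 1 suggests locating eigenvalues as complex zeros of det(Mleft(λ) + Mright(λ)). A minimal usage sketch (assuming the functions M_left and M_right from the previous sketches are in scope; the starting guesses are purely illustrative):

```python
# Sketch: refine an eigenvalue guess by a secant iteration on D(lambda) (Corollary 1).
import numpy as np

def D(lam):
    return np.linalg.det(M_left(lam) + M_right(lam, Q, R, a, n))

lam0, lam1 = -0.10 + 0.20j, -0.11 + 0.22j          # hypothetical starting guesses
for _ in range(30):
    f0, f1 = D(lam0), D(lam1)
    lam0, lam1 = lam1, lam1 - f1*(lam1 - lam0)/(f1 - f0)
print("approximate eigenvalue:", lam1)
```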

4 Approximation of spectral-gap eigenvalues using truncated problems

In this section, we truncate the problem over \([0,\infty )\) to a problem on [0,X] for some X > R. At x = X we impose, for some \(\upbeta \in \mathbb {R}\), a self-adjoint artificial boundary condition

$$ \cos(\upbeta)\underline{u}(X)-\sin(\upbeta)\underline{u}^{\prime}(X)=\underline{0}. $$
(25)

The operator L0 is replaced by L0,X defined by:

$$ L_{0,X}\underline{u}=-\underline{u}^{\prime\prime}+Q\underline{u} , $$
(26)

with domain:

$$ \begin{array}{ll} D(L_{0,X})= \{{\kern1.7pt} \underline{u} \in L^{2}(0,X) | -\underline{u}^{\prime\prime}+Q\underline{u} \in L^{2}(0,X), \\ \cos(\alpha)\underline{u}(0)-\sin(\alpha)\underline{u}^{\prime}(0)= \underline{0}=\cos(\upbeta)\underline{u}(X)-\sin(\upbeta)\underline{u}^{\prime}(X) {\kern1.7pt}\}. \end{array} $$
(27)

In this case, the spectra of L0,X and L0,X + iγS are purely discrete. To characterise the eigenvalues of L0,X + iγS, we replace Pright in (11) by:

$$ P_{\text{right,X}}: \quad \left\{\begin{array}{lll} -\underline{w}^{\prime\prime}+Q \underline{w}=\lambda \underline{w}, & x\in (R,X); \\ \underline{w}(R)=\underline{h}; \\ \cos(\upbeta)\underline{w}(X)-\sin(\upbeta)\underline{w}^{\prime}(X) =\underline{0}. \end{array}\right. $$
(28)

Let Λ(x,λ) be a 2n × n matrix of non-\(L^{2}(0,\infty )\) solutions corresponding to the Floquet multipliers ϱn+1,…,ϱ2n of A(λ) in (17); that is,

$$ {\Lambda}(x,\lambda)={\Phi}(x,\lambda)F(\lambda)\left( \begin{array}{c} 0 \\ J_{2}(\lambda) \end{array}\right), $$

where J2(λ) is the corresponding Jordan matrix introduced in (19). Moreover, for \(N\in \mathbb {N}:\)

$$ {\Lambda}(x+Na,\lambda)={\Lambda}(x,\lambda)J_{2}(\lambda)^{N}. $$
(29)

We may partition Λ(x,λ) as

$$ {\Lambda}(x,\lambda)=\left( \begin{array}{c} {\Theta}(x,\lambda) \\ {\Theta}^{\prime}(x,\lambda) \end{array}\right); $$
(30)

and construct the matrix:

$$ {\Psi}_{X}(x,\lambda)={\Psi}(x,\lambda)-{\Theta}(x,\lambda)C_{X}(\lambda), $$
(31)

choosing CX(λ) so that \(\cos (\upbeta ){\Psi }_{X}(X)-\sin (\upbeta ){\Psi }^{\prime }_{X}(X)=0\):

$$ C_{X}(\lambda)=(\cos(\upbeta){\Theta}(X,\lambda)-\sin(\upbeta){\Theta}^{\prime}(X,\lambda))^{-1} (\cos(\upbeta){\Psi}(X,\lambda)-\sin(\upbeta){\Psi}^{\prime}(X,\lambda)). $$
(32)

Hence, the solution \(\underline {w}\) of the boundary value problem (28) exists whenever ΨX(R,λ)− 1 exists, and is given by:

\(\underline {w}(x)= {\Psi }_{X}(x,\lambda ){\Psi }_{X}(R,\lambda )^{-1}\underline {h}.\)

Thus, the eigenvalues of L0,X + iγS may be characterised by an analogue of Lemma 1 if we replace Mright by

$$ M_{\text{right,X}}(\lambda) := -{\Psi}^{\prime}_{X}(R,\lambda){\Psi}_{X}(R,\lambda)^{-1}. $$
(33)
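The construction (31)–(33) can be implemented directly when X = R + Na. The following sketch (same illustrative assumptions as the previous sketches, with A(λ) diagonalizable) propagates Ψ and Θ from R to X using the multipliers, forms CX from (32) and evaluates (33):

```python
# Sketch of (31)-(33) for X = R + N*a (illustrative; monodromy() as in the earlier sketch).
import numpy as np

def M_right_X(lam, Q, R, a, n, N, beta):
    A = monodromy(lam, Q, R, a, n)
    rho, V = np.linalg.eig(A)
    order = np.argsort(np.abs(rho))
    rho_s, Ws = rho[order[:n]], V[:, order[:n]]        # decaying data -> Psi at x = R
    rho_u, Wu = rho[order[n:]], V[:, order[n:]]        # growing data  -> Theta at x = R
    PsiR, dPsiR = Ws[:n, :], Ws[n:, :]
    ThR,  dThR  = Wu[:n, :], Wu[n:, :]
    # Psi(X) = Psi(R) J1^N and Theta(X) = Theta(R) J2^N, cf. (35)
    J1N, J2N = np.diag(rho_s**N), np.diag(rho_u**N)
    PsiX, dPsiX = PsiR @ J1N, dPsiR @ J1N
    ThX,  dThX  = ThR  @ J2N, dThR  @ J2N
    CX = np.linalg.solve(np.cos(beta)*ThX - np.sin(beta)*dThX,
                         np.cos(beta)*PsiX - np.sin(beta)*dPsiX)   # (32)
    PsiXR, dPsiXR = PsiR - ThR @ CX, dPsiR - dThR @ CX             # (31) evaluated at R
    return -dPsiXR @ np.linalg.inv(PsiXR)                          # (33)
```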

Corollary 1 also has the following analogue.

Lemma 3

Suppose that Mleft(λ) and ΨX(R,λ)− 1 exist. Then λ is an eigenvalue of L0,X + iγS if and only if

$$ {\ker}\left( M_{\text{left}}(\lambda)-{\Psi}^{\prime}_{X}(R,\lambda){\Psi}_{X}(R,\lambda)^{-1}\right)\neq \{ 0 \}. $$
(34)

We analyse the effect of interval truncation through a sequence of intermediate results and technical lemmas.

Proposition 1

If X = R + Na where \(N\in \mathbb {N} \), and J1(λ) and J2(λ) are defined as in (19) then

$$ {\Psi}(R+Na,\lambda)={\Psi}(R,\lambda)J_{1}(\lambda)^{N}; {\kern2.2pt}{\kern2.2pt}{\kern2.2pt} {\Theta}(R+Na,\lambda)={\Theta}(R,\lambda)J_{2}(\lambda)^{N}. $$
(35)

Furthermore,

$$ C_{X}(\lambda)=J_{2}(\lambda)^{-N}C_{R}(\lambda)J_{1}(\lambda)^{N}; $$
(36)

in particular,

$$ \left\lVert C_{X}(\lambda)\right\rVert \leq C\left\lVert C_{R}(\lambda)\right\rVert \biggl (\frac{\varrho(\lambda)}{\tilde{\varrho}(\lambda)}\biggr)^{N} \biggl (N^{2} \frac{\tilde{\varrho}(\lambda)}{\varrho(\lambda)}\biggr)^{n-1}, $$
(37)

where C is a positive constant, \(\varrho (\lambda )=\max \limits (|\varrho _{1}(\lambda )|,\ldots ,|\varrho _{n}(\lambda )|)\) and \(\tilde {\varrho }(\lambda )=\min \limits (|\varrho _{n+1}(\lambda )|,\ldots ,|\varrho _{2n}(\lambda )|).\)

Proof 2

Equation (35) follows directly from (20) and (29) upon using the definitions (21) and (30). Substituting (35) into (32) yields (36). In order to estimate the norm of CX(λ), we need to estimate the norm of J1(λ)N. The expression for J1(λ)N may contain terms which, in the worst case, are of the form:

$$ \varrho(\lambda)^{N},N\varrho(\lambda)^{N-1},\dots,N(N-1)\dots(N-n+2)\varrho(\lambda)^{N-n+1}, $$

where \(\varrho (\lambda )=\max \limits (|\varrho _{1}(\lambda )|,\ldots ,|\varrho _{n}(\lambda )|).\) Thus,

$$ \|J_{1}(\lambda)^{N}\|\leq c_{1}\varrho(\lambda)^{N}\biggl(\frac{N}{\varrho(\lambda)}\biggr)^{n-1}. $$

An analogous argument yields an estimate for the norm of J2(λ)−N. Therefore,

$$ \|J_{2}(\lambda)^{-N}\|\leq c_{2}\biggl(\frac{1}{\tilde{\varrho}(\lambda)}\biggr)^{N}\biggl(N\tilde{\varrho}(\lambda)\biggr)^{n-1}, $$

where \(\tilde {\varrho }(\lambda )=\min \limits (|\varrho _{n+1}(\lambda )|,\ldots ,|\varrho _{2n}(\lambda )|).\) Finally, (37) follows from the expressions of the norms of J1(λ)N and J2(λ)N. □
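For reference, the bounds used above follow from the explicit entries of the Nth power of a single m × m Jordan block with eigenvalue ϱ (a standard formula, recorded here for convenience):

$$ \bigl(J_{m}(\varrho)^{N}\bigr)_{jk}=\binom{N}{k-j}\,\varrho^{N-(k-j)}, \qquad 0\le k-j\le \min(m-1,N), $$

and zero otherwise. Since each binomial coefficient is at most Nm−1, this yields \(\|J_{m}(\varrho )^{N}\|\leq c |\varrho |^{N}(N/|\varrho |)^{m-1}\) for |ϱ|≤ 1, in agreement with the estimates above.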

Lemma 4

Let Ψ(x,λ) be defined as in (22) and suppose ℑ(λ) > 0. Then Ψ(R,λ) is invertible.

Proof 3

Assume that Ψ(R,λ) is not invertible: then we can find a non-zero vector \(\underline {c} \in \mathbb {C}^{n}\) such that \({\Psi }(R,\lambda )\underline {c}=\underline {0}.\) Define a function \(\underline {u}(x,\lambda )={\Psi }(x,\lambda )\underline {c},\) which satisfies the differential equation for all x; also, \( \underline {u}(R,\lambda )=\underline {0}.\) Moreover, the fact that |ϱj(λ)| < 1 for j = 1,…,n, means that \(\underline {u}(\cdot ,\lambda )\in L^{2}(R,\infty ).\) Hence \(\underline {u}(x,\lambda )\) is an eigenfunction of the problem \(-\underline {u}^{\prime \prime }+Q\underline {u}=\lambda \underline {u}\) on \([R,\infty )\) with Dirichlet condition at R. This problem is self-adjoint, so ℑ(λ) = 0, which is a contradiction. □

Lemma 5

Let ΨX(x,λ) be defined as in (31) and suppose ℑ(λ) > 0. Then ΨX(R,λ) is invertible.

Proof 4

Assume ΨX(R,λ) is not invertible. Then there exists a non-zero \(\underline {c}\in \mathbb {C}^{n}\) such that \({\Psi }_{X}(R,\lambda )\underline {c}=\underline {0}.\) Define a function \(\underline {u}_{X}(x,\lambda )={\Psi }_{X}(x,\lambda )\underline {c},\) so that \(\underline {u}_{X}(R,\lambda )=\underline {0}.\) Hence, \(\underline {u}_{X}(x,\lambda )\) is an eigenfunction of the problem on [R,X] with Dirichlet condition at R and the boundary condition (25) at X with \(\upbeta \in \mathbb {R}.\) This is also a self-adjoint problem, so again we have the contradiction ℑ(λ) = 0. □

Finally, we have a quantitative lemma on continuity of determinants, which will be needed in the proof of Theorem 1. We shall use this lemma with X = R + Na and large N.

Lemma 6

Suppose that \(A,A_{X} \in \mathbb {M}_{n}(\mathbb {C})\) have the property:

$$ \left\lVert A-A_{X}\right\rVert \leq b {\tau}^{X}(X^{2}\frac{1}{\tau})^{n-1}, $$

where b is a positive constant and 0 < τ < 1. Then

$$ \lvert{\det(A)-\det(A_{X})}\rvert \leq \tilde{b} {\tau}^{X}(X^{2}\frac{1}{\tau})^{n-1}, $$

where \(\tilde {b}\) is a positive constant.

Proof 5

The proof follows directly from [8], by letting \(\tilde {b} =nb[\left \lVert A\right \rVert +b\tau ]^{n-1}\). □

Theorem 1

Suppose that assumptions (A1) and (A2) hold. For γ > 0, let λγ be an eigenvalue of the non-self-adjoint Schrödinger operator L0 + iγS defined in (5)–(6). Then there exist approximations λγ,X,good to λγ, whose total algebraic multiplicity is equal to the algebraic multiplicity of λγ, obtained as eigenvalues of the operator L0,X + iγS defined in (26)–(27), which satisfy

$$ |\lambda_{\gamma}-\lambda_{\gamma,X,\text{good}}|\leq C_{3} \exp\left( -C_{4}(X-R)\right). $$
(38)

Here C3 and C4 are positive constants which depend on λγ.

Proof 6

Without loss of generality, it is sufficient to check the cases X = R + Na where \(N\in \mathbb {N}\) is sufficiently large. The other cases follow by exploiting the freedom in the choice of the constants c and R in hypothesis (A2) (see (4)). For example, if X = R + Na + b, with 0 < b < a, then we can replace R by R + b and use a smaller constant c in (4).

First, we observe that for γ > 0, λγ has strictly positive imaginary part. If \(\underline {u}_{\gamma }\) is the corresponding normalised eigenfunction, then a standard integration by parts yields: \(\Im (\lambda _{\gamma })=\gamma {{\int \limits }_{0}^{R}} \underline {u}^{*}_{\gamma }(x)S(x)\underline {u}_{\gamma }(x) dx >0,\) where \( \underline {u}^{*}_{\gamma }(x)\) is the Hermitian conjugate of \(\underline {u}_{\gamma }(x).\) Next, we observe that, as a consequence of Lemma 4 and Lemma 5, Ψ(R,⋅) and ΨX(R,⋅) are invertible for all λ in a neighbourhood of λγ. It follows that Mright(⋅) and Mright,X(⋅) are well defined in a neighbourhood of λγ.

If Mleft(λγ) is well defined, then from Corollary 1,

$$ {\ker} \left( M_{\text{left}}(\lambda_{\gamma})+M_{\text{right}}(\lambda_{\gamma})\right) \neq \{ 0 \}; $$
(39)

from Lemma 3, we seek points λγ,X,good which satisfy (38) together with the truncated problem eigenvalue condition:

$$ {\ker}\left( M_{\text{left}}(\lambda_{\gamma ,X,\text{good}})+M_{\text{right,X}}(\lambda_{\gamma ,X,\text{good}})\right)\neq \{ 0 \}. $$
(40)

Using (31) and the definitions of Mright and Mright,X for a fixed λ, we obtain

$$ \begin{array}{rcl} M_{\text{right,X}}(\lambda) & = & \left( M_{\text{right}}(\lambda)+{\Theta}^{\prime}(R,\lambda)C_{X}(\lambda){\Psi}(R,\lambda)^{-1} \right) \\ & & \times \left( I - {\Theta}(R,\lambda)C_{X}(\lambda){\Psi}(R,\lambda)^{-1} \right)^{-1}. \end{array} $$
(41)

Now we exploit the fact that X = R + Na, which allows us to use Proposition 1 (Eq. (37)). By Lemma 2 part 2, since λ is outside the essential spectrum, and since the Floquet multipliers may be chosen to be continuous functions of λ, there exist constants c− < 1 and c+ > 1 such that \(\max \limits (|\varrho _{1}(\lambda )|,\ldots ,|\varrho _{n}(\lambda )|)<c_{-}<1\) and \(\min \limits (|\varrho _{n+1}(\lambda )|,\ldots ,|\varrho _{2n}(\lambda )|)=(\max \limits (|\varrho _{1}(\lambda )|,\ldots ,|\varrho _{n}(\lambda )|))^{-1} >c_{+}>1\) uniformly with respect to λ in a neighbourhood of λγ which does not intersect any spectral band. Thus, in addition to (39), we have from (37) and (41),

$$ \left\lVert M_{\text{right}}(\lambda)-M_{\text{right,X}}(\lambda)\right\rVert\leq b\biggl (\frac{c_{-}}{c_{+}}\biggr)^{N}\biggl (N^{2}\frac{c_{+}}{c_{-}}\biggr)^{n-1}, $$
(42)

uniformly with respect to λ in a neighbourhood of λγ, where b is a positive constant. Using Lemma 6 and since \(0< (\frac {c_{-}}{c_{+}} )<1,\)

$$ |\det(M_{\text{left}}(\lambda)+M_{\text{right}}(\lambda)) - \det(M_{\text{left}}(\lambda)+M_{\text{right,X}}(\lambda))|\leq \tilde{b}\biggl (\frac{c_{-}}{c_{+}}\biggr)^{N}\biggl (N^{2}\frac{c_{+}}{c_{-}}\biggr)^{n-1}, $$

uniformly with respect to λ in a neighbourhood of λγ, where \(\tilde {b}\) is a positive constant. It follows by a standard zero-counting argument for analytic functions (Lemma 3 in [13]) that there exist points λγ,X,good which satisfy (40) and are such that

$$ |\lambda_{\gamma}-\lambda_{\gamma,X,\text{good}}|\leq C\biggl (\frac{c_{-}}{c_{+}}\biggr)^{N/\nu}\biggl (N^{2}\frac{c_{+}}{c_{-}}\biggr)^{(n-1)/\nu}, $$
(43)

where C > 0 and ν is the order of the zero of \(\det (M_{\text {left}}(\cdot )+M_{\text {right}}(\cdot ))\) at λγ. Moreover, the total order of the λγ,X,good as zeros of \( \det (M_{\text {left}}(\cdot )+M_{\text {right,X}}(\cdot ))\) in a neighbourhood of λγ is ν. In view of (43), given 𝜖 > 0 such that \((1+\epsilon )\frac {c_{-}}{c_{+}}<1,\) we have

$$\biggl ((1+\epsilon)\frac{c_{-}}{c_{+}}\biggr)^{N/\nu}>\biggl (\frac{c_{-}}{c_{+}}\biggr)^{N/\nu}\biggl (N^{2}\frac{c_{+}}{c_{-}}\biggr)^{(n-1)/\nu},$$

for all sufficiently large N. Hence (43) becomes:

$$ |\lambda_{\gamma}-\lambda_{\gamma,X,\text{good}}|\leq O\biggl (\frac{c_{-}}{c_{+}}\biggr)^{N/\nu}; $$
(44)

furthermore, any solution of (40) which converges to λγ must satisfy (44). This completes the proof. □

Remark 1

When Mleft(⋅) has a pole at λ, it is still possible for λ to be an eigenvalue of L0 + iγS. However, Mright(λ) and Mright,X(λ) cannot have poles off the real axis, as Lemma 4 and Lemma 5 show that Ψ(R,λ) and ΨX(R,λ) are invertible for ℑ(λ)≠ 0.

Theorem 2

Suppose that assumptions (A1) and (A2) hold. For γ > 0, let λγ,X,bad be an eigenvalue of the non-self-adjoint Schrödinger operator L0,X + iγS defined in (26)–(27) which converges, as \(X\rightarrow +\infty \), to a point which is not in the spectrum of L0 + iγS. Then for some positive constants C5 and C6:

$$ |\Im(\lambda_{\gamma,X,\text{bad}})| \leq C_{5} \exp(-C_{6}(X-R)). $$
(45)

Proof 7

Similarly to the proof of Theorem 1, it is sufficient to consider the case X = R + Na where \(N\in \mathbb {N}\). Since λγ,X,bad has the property:

$$ {\ker}\left( M_{\text{left}}(\lambda_{\gamma,X,\text{bad}})+M_{\text{right,X}}(\lambda_{\gamma,X,\text{bad}})\right)\neq \{ 0 \}, $$

in particular Mleft(λγ,X,bad) + Mright,X(λγ,X,bad) has an eigenvalue 0. It is known (see, e.g., [3]) that spectral pollution must lie on the real axis, since L0 + iγS is a relatively compact perturbation of the semi-bounded self-adjoint operator L0: thus, \(\lambda _{\gamma ,X,\text {bad}}\rightarrow \mu \) as \(X\rightarrow \infty \), where μ is real and lies in a spectral gap. Additionally, since μ is not in the spectrum of L0 + iγS, the matrix Mleft(μ) + Mright(μ) is invertible if it is well defined, i.e. if Ψ(R,μ) and U(R,μ) are invertible (see (14) and (23)). We assume without loss of generality that this is indeed so, for if not then one may simply use a greater value of R for the Glazman decomposition.

Hence, there is a compact neighbourhood of μ, say \(\overline {B(\mu ,r)}\), r > 0, such that Mleft(λ) + Mright(λ) is invertible for all \(\lambda \in \overline {B(\mu ,r)}\). Following the reasoning which led to (42), we deduce that, provided CR(μ) is well defined, Mright,X converges locally uniformly to Mright in a neighbourhood of μ. Since such uniform convergence excludes spectral pollution, it follows that CR(μ) cannot be well defined, i.e.

$$ \cos(\upbeta){\Theta}(R,\mu) - \sin(\upbeta){\Theta}^{\prime}(R,\mu) \quad \text{is not invertible.} $$

Now \(\cos \limits (\upbeta ){\Theta }(R,\lambda ) - \sin \limits (\upbeta ){\Theta }^{\prime }(R,\lambda )\) has at worst branch-cut singularities as a function of λ, so its inverse must admit a bound

$$ \left\|\left( \cos(\upbeta){\Theta}(R,\lambda) - \sin(\upbeta){\Theta}^{\prime}(R,\lambda)\right)^{-1}\right\| \leq \frac{C}{|\lambda - \mu|^{\nu}}, $$

for some C, ν > 0, in a neighbourhood of λ = μ. Combining this with (37) and using the notation of (42), we see that, for X = R + Na,

$$ \| C_{X}(\lambda_{\gamma,X,\text{bad}}) \| \leq \frac{C}{|\lambda_{\gamma,X,\text{bad}}-\mu|^{\nu}}\biggl (\frac{c_{-}}{c_{+}}\biggr)^{N}\biggl (N^{2}\frac{c_{+}}{c_{-}}\biggr)^{n-1}. $$

The fact that the eigenvalues λγ,X,bad form a polluting sequence means that Mright,X(λγ,X,bad) cannot converge to Mright(μ) as \(X\rightarrow \infty \), so in view of (41) the norms ∥CX(λγ,X,bad)∥ cannot converge to zero. This implies a bound

$$ |\Im(\lambda_{\gamma,X,\text{bad}})| \leq |\lambda_{\gamma,X,\text{bad}}-\mu| \leq C \biggl (\frac{c_{-}}{c_{+}}\biggr)^{N/\nu}\biggl (N^{2}\frac{c_{+}}{c_{-}}\biggr)^{(n-1)/\nu}, $$

and the required result follows since the exponential decay of \(\biggl (\frac {c_{-}}{c_{+}}\biggr )^{N/\nu }\) overcomes the power N2(n−1)/ν. This completes the proof. □

5 Eigenvalues in spectral gaps for non-truncated problems

The purpose of this section is to study the evolution of the point spectrum of L0 + iγS with respect to the coupling constant γ. Throughout this section, we drop the assumption of eventual periodicity (A1).

Theorem 3

Suppose that assumption (A2) holds (see (4)). Let λ be an isolated eigenvalue of L0 with multiplicity ν, where 1 ≤ ν ≤ n, and normalised eigenvectors uj, j = 1,…,ν. For each sufficiently small γ > 0, let λγ,j, j = 1,…,ν, be eigenvalues of the non-self-adjoint operator L0 + iγS defined in (5)–(6) with eigenvectors uγ,j, j = 1,…,ν, and suppose \(\lambda _{\gamma ,j}\rightarrow \lambda \) as \(\gamma \rightarrow 0.\) Then for each 1 ≤ j ≤ ν, the projection of uγ,j onto Span{u1,…,uν} remains bounded away from zero, uniformly with respect to R and γ for sufficiently small γ.

If, additionally, the assumption

  • (A1’):

    $$ \left\lVert u_{j}(x)\right\rVert \leq C\exp(-C_{2}x),{\kern2.2pt}{\kern2.2pt} x\in[0,\infty),{\kern2.2pt}{\kern2.2pt}j=1,\ldots,\nu, $$
    (46)

    holds for some positive constants C and C2, then there exists C1 > 0, independent of R, such that for all R > 0,

    $$ |\lambda+i\gamma-\lambda_{\gamma,j}|\leq C_{1}\gamma \exp(-c C_{2}R), $$
    (47)

    where c ∈ (0,1) is the constant appearing in assumption (A2).

Proof 8

In fact, we prove more than is stated: only the estimate (47) depends on (A2); the rest of the theorem requires only that S be bounded independently of R as a multiplication operator. The existence of λγ,j with \(|\lambda _{\gamma ,j}-\lambda |\rightarrow 0\) as \(\gamma \rightarrow 0\) is a consequence of results in [11] on analytic families. For sufficiently small γ, let Γ be a contour which encloses the spectral point λ of L0 and is such that λγ,j, j = 1,…,ν, are the only spectral points of L0 + iγS inside Γ. Clearly, \(\left \lVert S\right \rVert =1\) independently of R, and since L0 is a self-adjoint operator, |λ − λγ,j|≤ γ independently of R; thus, the contour Γ can be chosen independently of R. Suppose uγ,j, j = 1,…,ν are eigenvectors of L0 + iγS, linearly independent with \(\left \lVert u_{\gamma ,j}\right \rVert =1.\) Following Kato [11, VII,§3], let P(γ) be the projection onto the eigenspace of L0 + iγS spanned by the uγ,j, j = 1,…,ν, and P(0) be the projection onto the eigenspace of L0 associated with λ; the projection P(γ) is analytic as a function of γ, so that

$$ \left\lVert P(\gamma)-P(0)\right\rVert\leq O(\gamma). $$
(48)

Since

$$ P(0)u_{\gamma,j}=u_{\gamma,j}+(P(0)-P(\gamma))u_{\gamma,j}, $$
(49)

taking the norm of (49) and using (48), we conclude that P(0)uγ,j is bounded away from zero uniformly with respect to R and γ for sufficiently small γ.

Now, since (L0 + iγS)uγ,j = λγ,juγ,j and (L0 − iγI)uk = (λ − iγ)uk, taking the inner product of the first equation with uk, we obtain:

$$ \langle (L_{0}+i \gamma S)u_{\gamma,j},u_{k}\rangle=\lambda_{\gamma,j}\langle u_{\gamma,j},u_{k}\rangle. $$
(50)

Similarly, taking the inner product of the second equation with uγ,j, we obtain:

$$ \langle (L_{0}-i \gamma I)u_{k},u_{\gamma,j}\rangle=(\lambda-i \gamma)\langle u_{k},u_{\gamma,j}\rangle. $$
(51)

Because L0 and S are self-adjoint, and uk and uγ,j lie in the domain of L0, which is contained in the domain of S, we obtain from (51):

$$ \langle (L_{0}+i \gamma I)u_{\gamma,j},u_{k}\rangle=(\lambda+i \gamma)\langle u_{\gamma,j},u_{k} \rangle. $$
(52)

From (50) and (52), we obtain:

$$ (\lambda+i \gamma-\lambda_{\gamma,j})\langle u_{\gamma,j},u_{k} \rangle=i \gamma \langle(I-S)u_{\gamma,j},u_{k} \rangle =i \gamma \langle u_{\gamma,j},(I-S)u_{k} \rangle. $$

Since P(0)uγ,j is bounded away from zero, we may choose k (possibly depending on γ) such that 〈uγ,j,uk〉 is bounded away from zero for small γ, uniformly with respect to R; furthermore, from the assumption (46) and (A2), we deduce

$$ \left\lVert(I-S)u_{k}\right\rVert\leq C\exp(-cC_{2}R), $$

for some positive constants C and C2. The result is proved. □

Remark 2

We may also ask what happens when an eigenvalue λγ merges into the essential spectrum with decreasing γ. For the scalar problem, Theorem 5 in [13] states that in this situation, there is a strictly positive value γcrit > 0 of the coupling constant at which λγ merges into the essential spectrum.

In our current setup, this is false. Consider the following system:

$$ -\underline{u}^{\prime\prime}(x)+(Q(x)+i\gamma S(x))\underline{u}(x)=\lambda \underline{u}(x), \quad \underline{u}(0)=\underline{0}, $$

with \(Q(x)=\left (\begin {array}{cc} 0 & 0 \\ 0 & \frac {-40}{1+x^{2}}\chi _{[0,40]}(x)+\sin (x) \end {array} \right )\) and \(S(x)=\left \{\begin {array}{ll} I_{2} & \text{in } [0,50], \\ 0 & \text{in } (50,\infty ). \end {array}\right .\) The system can be seen as two scalar problems with q1(x) = 0 and \(q_{2}(x)=\frac {-40}{1+x^{2}}\chi _{[0,40]}(x)+\sin (x)\). The problem with potential q2 and γ = 0 has a spectral gap which is approximately (− 0.340363,0.595942), with infinitely many eigenvalues accumulating from below at the top end of the gap [15]. However, these eigenvalues all lie in the essential spectrum \([0,\infty )\) of the problem with potential q1 and hence also lie in the essential spectrum of the matrix Schrödinger system. Nevertheless, they emerge from the real axis with positive speed, into the upper half-plane, as soon as γ is increased from zero, in accordance with the estimate (47) of Theorem 3.

6 Numerical examples

In this section, we present some numerical examples to demonstrate our theoretical results. We generally apply the 3-point finite difference scheme to our system; the number of steps-per-period is fixed when the number of periods increases. The main advantage of using ‘low tech’ finite differences rather than a spectral method or Galerkin method is that there exists a Floquet theory for periodic Jacobi matrices, so the theorems of the previous section may be expected to have analogues for the (infinite) discretised system.
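For concreteness, a minimal sketch of such a discretisation (not the authors' code; the potentials and parameters passed in are placeholders): the operator L0 + iγS on [0,X] with Dirichlet conditions at both ends is replaced by a block-tridiagonal matrix whose eigenvalues are then computed directly.

```python
# Sketch: 3-point finite-difference discretisation of -u'' + (Q + i*gamma*S)u on [0, X]
# with Dirichlet conditions at x = 0 and x = X (illustrative only).
import numpy as np

def fd_eigenvalues(Q, S, gamma, X, h, n):
    m = int(round(X/h)) - 1                              # interior grid points x_j = j*h
    H = np.zeros((m*n, m*n), dtype=complex)
    for j in range(m):
        x = (j + 1)*h
        H[j*n:(j+1)*n, j*n:(j+1)*n] = 2*np.eye(n)/h**2 + Q(x) + 1j*gamma*S(x)
        if j + 1 < m:
            H[j*n:(j+1)*n, (j+1)*n:(j+2)*n] = -np.eye(n)/h**2
            H[(j+1)*n:(j+2)*n, j*n:(j+1)*n] = -np.eye(n)/h**2
    return np.linalg.eigvals(H)

# usage (hypothetical): lam = fd_eigenvalues(Q, S, 0.25, 100.0, 0.1, n)
```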

Example 1

Consider the following matrix Schrödinger systems:

  1. (P):
    $$ -\underline{u}^{\prime\prime}(x)+\biggl(\frac{-40}{1+x^{2}}\chi_{[0,40]}(x)I_{3}+Q(x)\biggr)\underline{u}(x)=\lambda\underline{u}(x), \quad \underline{u}(0)=\underline{0}; $$
  2. (P’):
    $$ -\underline{u}^{\prime\prime}(x)+\biggl(\frac{-40}{1+x^{2}}I_{3}+Q(x)\biggr)\underline{u}(x)=\lambda\underline{u}(x), \quad \underline{u}(0)=\underline{0}; $$

    Problem P satisfies hypothesis (A1), while problem P' satisfies (A1'). The fact that P' satisfies (A1') can be proved by an ODE version of the Agmon-type results of Janas et al. [10]. In both these equations, we use:

$$ Q(x)=\left( \begin{array}{ccc} \frac{\sin(x)+\cos(x)}{6} & \frac{\sin(x)-\cos(x)}{2\sqrt{3}} & -\frac{\sin(x)+\cos(x)}{3\sqrt{2}}\\ \frac{\sin(x)-\cos(x)}{2\sqrt{3}} & \frac{\sin(x)+\cos(x)}{2} & \frac{\cos(x)-\sin(x)}{\sqrt{2}\sqrt{3}} \\ -\frac{\sin(x)+\cos(x)}{3\sqrt{2}}& \frac{\cos(x)-\sin(x)}{\sqrt{2}\sqrt{3}} &\frac{\sin(x)+\cos(x)}{3} \end{array} \right). $$

Both problems have exactly the same essential spectrum, though their point spectra are slightly different due to the compactly supported potential well in P. The first two spectral bands are approximately [13]:

$$ [-0.378514,-0.340363],[0.595942,0.912391]. $$

For problem P’, we expect an eigenvalue close to λ ≈− 0.1076 in a gap, and an eigenvalue embedded in a spectral band near λ ≈ 0.6336. For both problems, we use the perturbation:

$$ Q(x) \mapsto Q(x)+i\gamma S(x); \quad S(x)=\left\{\begin{array}{ll} I_{3} & \text{in } [0,R],\\ 0 & \text{in } (R,\infty), \end{array}\right. $$

and compute the resulting eigenvalues with γ = 1/4. Figure 1 shows eigenvalue computations for both P and P'. These show that our estimate (47) holds with \(C_{1}^{\prime } \approx 1293.6\) and \(C_{2}^{\prime } \approx 0.5386\) for (P'), and C1 ≈ 1291.9 and C2 ≈ 0.5384 for (P). These figures were calculated using X = 100 and a step-size h = 0.1, both of which were chosen to ensure that the effects of discretisation error would not be seen in the estimated constants.

Fig. 1 a, b Logarithmic plot of |λ + iγ − λγ| against R

For the embedded eigenvalues, Fig. 2 shows the effects of interval truncation. In particular we observe that for P the prediction of Theorem 1 holds with C3 ≈ 0.0039 and C4 ≈ 0.3842. Theorem 1 has not been proved for P’, which does not have an eventually periodic Q. However, the experiments indicate that the result still holds, with \(C_{3}^{\prime }\approx 0.0047\) and \(C_{4}^{\prime }\approx 0.3842\). For these experiments, the step-size was h = 0.1.

Fig. 2 a, b Logarithmic plot of |λγ − λγ,X,good| against X − R

Finally, we shall show that the prediction of Theorem 2 for a spurious eigenvalue also holds; in fact, Fig. 3 indicates that it holds not only for P, but also for P’, for which it is not proved. In these figures, we consider the value λbad ≈− 0.1847, which lies in a spectral gap but is not an eigenvalue of either problem. We fixed X = 55 and varied R from 19 to 43 in steps of 4. The value of h was again chosen small enough to suppress effects of discretisation error. The constants C5 and C6 predicted by Theorem 2 are C5 ≈ 0.0017 and C6 ≈ 0.5345; it seems also that \(C_{5}^{\prime }\approx 0.0017\) and \(C_{6}^{\prime }\approx 0.5345\) for P’.

Fig. 3 a, b Logarithmic plot of |ℑ(λγ,X,bad)| against X − R

Example 2

Consider the system:

$$ -\underline{u}^{\prime\prime}(x)+Q(x)\underline{u}(x)=\lambda\underline{u}(x), \quad \underline{u}(0)=\underline{0}, $$

with the following perturbed periodic potential:

$$ Q(x)=\left( \begin{array}{lllll} 4.8876-5.9996k(x) & -1.8348\times10^{-4}k(x)-1.5641 & 3.1428\times 10^{-4}k(x)-8.05947\times 10^{-3} & 2.2452\times 10^{-4}k(x)-1.3334 & 2.2686 \times 10^{-4} k(x)+0.4743 \\ -1.8348\times10^{-4}k(x)-1.5641 & 3.1766-5.9993k(x) & 8.562 \times 10^{-5}k(x)-0.0318 & 0.0905-1.6524 \times 10^{-4}k(x) & 1.5527-2.5128 \times 10^{-4}k(x) \\ 3.143 \times 10^{-4}k(x)-8.06\times 10^{-3} & 8.562 \times 10^{-5}k(x)-0.0318 & 0.7789-5.9997 k(x) & 0.13405-1.8348 \times 10^{-4}k(x) & 3.711 \times 10^{-4}k(x)+0.5288 \\ 2.2452\times 10^{-4}k(x)-1.3334 & 0.0905-1.6524 \times 10^{-4}k(x) & 0.13405-1.8348 \times 10^{-4}k(x) & 2.8067-5.9999 k(x) & 5.358 \times 10^{-5} k(x)-0.1412 \\ 2.2686 \times 10^{-4} k(x)+0.4743 & 1.5527-2.5128 \times 10^{-4}k(x) & 3.711 \times 10^{-4}k(x)+0.5288 & 5.358 \times 10^{-5} k(x)-0.1412 & 2.1111-6.0001k(x) \end{array}\right). $$

Here k(x) = sech²(x)χ[0,5](x).

This rather unwieldy formula was obtained from the expression

$$ Q(x) = T \tilde{Q}(x) T^{*}, $$

in which \(\tilde {Q}(x) = -6k(x) I_{5} + \text {diag}(n^{2}/4,n=1,\ldots ,5)\) and T is the matrix of orthonormal eigenvectors of a randomly chosen real, symmetric 5 × 5 matrix. According to [2], one eigenvalue for the scalar problem with q(x) = − 6k(x) is − 1; from this, we know that one of the eigenvalues of our Hamiltonian system is 1.25 which is an embedded eigenvalue in the essential spectrum \([1/4,\infty )\) of this multi-channel problem. We expect that if the dissipative barrier method is working well, then we should see an eigenvalue close to 1.25 + 0.25i when γ = 1/4. Figure 4 shows the plot of the corresponding eigenfunction, for which the calculated eigenvalue was approximately 1.24998 + 0.24993i. This illustrates the usefulness of the dissipative barrier method for lifting embedded eigenvalues out of the essential spectrum and making them easier to calculate, even bearing in mind Remark 2.
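A sketch of this construction (illustrative only: the random seed, and hence T and the resulting entries of Q, will differ from the matrix printed above):

```python
# Sketch: Q(x) = T Qtilde(x) T^*, with T the orthonormal eigenvectors of a random
# real symmetric 5 x 5 matrix and Qtilde(x) = -6 k(x) I + diag(j^2/4, j = 1..5).
import numpy as np

nch = 5
rng = np.random.default_rng(0)                      # hypothetical seed
B = rng.standard_normal((nch, nch)); B = (B + B.T)/2
_, T = np.linalg.eigh(B)                            # orthonormal eigenvectors of B

def k(x):
    return 1/np.cosh(x)**2 if 0 <= x <= 5 else 0.0  # sech^2 cut off outside [0, 5]

def Q(x):
    Qt = -6*k(x)*np.eye(nch) + np.diag([j**2/4 for j in range(1, nch + 1)])
    return T @ Qt @ T.T                             # T real orthogonal, so T^* = T^T
```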

Fig. 4 Plot of a finite difference approximation of a genuine eigenfunction

Example 3

We consider an equation of the form

$$ -\underline{u}^{\prime\prime} + D\underline{u}= \lambda(W(x) -i S(x))\underline{u}, {\kern2.2pt}{\kern2.2pt}{\kern2.2pt} x\in (0,\infty), $$
$$ \underline{u}(0) = \textbf{0}. $$

Here W is a continuous, periodic, strictly positive definite matrix, so that the corresponding weighted L2 space is equivalent to \(L^{2}(0,\infty )\). The matrix S(x) was chosen to be compactly supported and bounded. The matrix D is a diagonal matrix with

$$ D(j,j) = j^{2}, {\kern2.2pt}{\kern2.2pt}{\kern2.2pt} j = 1,\ldots, n; $$

qualitatively this is a second-order differentiation matrix.

The main differences compared to the cases considered in our theorems are, firstly, that the weight W(x) is no longer the identity; and secondly, that the compactly supported dissipative barrier S(x) is now also multiplied by the spectral parameter. The form of this problem is inspired by work in optics with complex index of refraction (see, e.g., [7]).

It is not difficult to show that Floquet theory still holds for the problem on \([R,\infty )\), and so Mright,X(λ) still converges exponentially fast to Mright(λ) as \(X\to \infty \). For the problem on [0,R], there is now an additional λ-dependence of the barrier term, but Mleft(λ) is still meromorphic. We therefore expect to see fast convergence of the eigenvalues lying well away from the real axis.

Figure 5 shows the results of computations in the purely diagonal case

$$ W(x) = (2+\sin(x))I, {\kern2.2pt}{\kern2.2pt}{\kern2.2pt} S(x) = I\chi_{[0,1]}(x), $$
(53)

with all matrices being of dimension 5 × 5. These results were computed using the Numerov discretisation [14], with a uniform mesh of 80 intervals per period (mesh size 2π/80). Because the values of λ in Fig. 5 are not large, this mesh size is sufficient to ensure that the points plotted in Fig. 5 will not move in the ‘eyeball norm’ if the mesh size is halved.
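After discretisation the problem becomes a generalized matrix eigenvalue problem Au = λBu with a non-Hermitian B. The sketch below (illustrative only; it uses a plain 3-point scheme and a coarser mesh rather than the Numerov scheme actually used for Fig. 5) shows the structure of the computation for the diagonal coefficients (53):

```python
# Sketch: discretise -u'' + D u = lambda (W(x) - i S(x)) u on [0, X] with Dirichlet
# conditions, and solve the generalized eigenproblem A u = lambda B u (illustrative).
import numpy as np
from scipy.linalg import eig

def weighted_eigenvalues(W, S, D, X, h, n):
    m = int(round(X/h)) - 1
    A = np.zeros((m*n, m*n), dtype=complex)          # -d^2/dx^2 + D
    B = np.zeros((m*n, m*n), dtype=complex)          # W(x) - i S(x)
    for j in range(m):
        x = (j + 1)*h
        A[j*n:(j+1)*n, j*n:(j+1)*n] = 2*np.eye(n)/h**2 + D
        B[j*n:(j+1)*n, j*n:(j+1)*n] = W(x) - 1j*S(x)
        if j + 1 < m:
            A[j*n:(j+1)*n, (j+1)*n:(j+2)*n] = -np.eye(n)/h**2
            A[(j+1)*n:(j+2)*n, j*n:(j+1)*n] = -np.eye(n)/h**2
    return eig(A, B, right=False)                    # generalized eigenvalues

n = 5
D = np.diag([j**2 for j in range(1, n + 1)])
W = lambda x: (2 + np.sin(x))*np.eye(n)              # diagonal case (53)
S = lambda x: np.eye(n) if 0 <= x <= 1 else np.zeros((n, n))
lam = weighted_eigenvalues(W, S, D, 20*np.pi, 2*np.pi/40, n)   # coarse illustrative mesh
```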

Fig. 5 Eigenvalue approximations for coefficients in (53) computed using [0,20π] and [0,40π] as approximations to \([0,\infty )\). Eigenvalues are marked with asterisks for [0,20π] and circles for [0,40π]

In Fig. 5, we see that the asterisks (shorter interval approximations) and circles (longer interval approximations) are essentially coincident for the eigenvalues well away from the real axis. We expected this, due to the exponentially small error which interval truncation causes. The more interesting parts are the ‘loops’ in the upper half plane, one of which starts at approximately λ = 1.75 and returns to the real axis around λ = 2.6. Here the asterisk loop (approximating interval [0,20π]) is approximately twice as far from the real axis as the circle loop (approximating interval [0,40π]). These loops are approximations to a spectral band. Though we have not proved this, they appear to converge at a rate 1/X, as the interval [0,X] goes to infinity. Note that there are also other approximations to (parts of) spectral bands, due to the fact that the spectral multiplicity of the higher bands can be greater than 1. It seems that, in this picture, only the bands near 0.5, and from 0.75 to just below 1, are simple.

In Fig. 6, we repeated the experiments using non-diagonal W(x) given by

$$ W_{j,k}(x) = 2\delta_{j,k} + \frac{j+k}{2n}\sin(x). $$
(54)

The same phenomena are noted as in the diagonal coefficient case, though the different scale on the vertical axis makes the slow convergence to the essential spectrum more stark.

Fig. 6 Eigenvalue approximations for coefficients in (54) computed using [0,20π] and [0,40π] as approximations to \([0,\infty )\). Eigenvalues are marked with asterisks for [0,20π] and circles for [0,40π]

7 Conclusion and future work

We have introduced a method to calculate eigenvalues in spectral gaps of matrix-valued Schrödinger operators. Theoretically, we have shown that the relatively compact dissipative perturbation technique, together with domain truncation, yields approximations of isolated eigenvalues which are close to those of the original problem. Moreover, spurious eigenvalues can be detected with this method, since they are characterised by exponentially small imaginary parts. These approximations have been computed using finite difference schemes for some numerical examples and show excellent agreement with the theoretical predictions. An additional remark on this procedure is that the numerical results for fast decaying perturbations of periodic potentials and for compactly supported perturbations of periodic potentials (see, e.g., Example 1) are essentially the same.

We have also observed the effectiveness of the presented method when the weight matrix is different from the identity and the dissipative barrier is multiplied by the spectral parameter, as in Example 3. The only caveat in these cases is that the approximations to the spectral bands converge slowly, so if very high accuracy is required then one should use λ-dependent non-reflecting boundary conditions [7].

One of the main sources of Hamiltonian eigenvalue problems is the use of semi-discretisation for PDE problems on waveguides. In future work, we will consider the PDE:

$$ -{\Delta} u+Q(x,y)u=\lambda u, $$
(55)

on a semi-infinite waveguide. This gives rise to a Hamiltonian system in which the Q(x) of our problem is an operator in \(l^{2}(\mathbb {N})\); such problems can also be studied directly in PDE form.