1 Introduction

The paper is devoted to operators of the form

$$\begin{aligned} L=-\partial _x^2+V(x) \end{aligned}$$
(1.1)

on ]a,b[, where \(a<b\); a can be \(-\infty \) and b can be \(\infty \). The potential V can be complex, may have low regularity, and may behave rather arbitrarily at the boundary of the domain: we assume only that \(V\in L_\mathrm {loc}^1]a,b[\). We study realizations of L as closed operators on \(L^2]a,b[\).

Operators of the form (1.1) are commonly called one-dimensional Schrödinger operators or, for short, 1d Schrödinger operators. They are special cases of Sturm–Liouville operators, that is, operators of the form

$$\begin{aligned} -\frac{1}{w(x)}\partial _xp(x)\partial _x+\frac{q(x)}{w(x)}. \end{aligned}$$
(1.2)

Note, however, that if \(\frac{p(x)}{w(x)}\) is real, then under rather weak assumptions on w, p, q, a simple unitary transformation reduces (1.2) to (1.1). Therefore, it is not a serious restriction to consider 1d Schrödinger operators instead of Sturm–Liouville operators.
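To make the reduction concrete, here is a sketch of one standard way to carry it out (the classical Liouville transformation), written under simplifying assumptions not made in this paper, namely that w and p are positive and sufficiently smooth; the complex case with \(p/w\) real is analogous.

```latex
% Liouville transformation: a sketch, assuming w, p > 0 and smooth.
% New variable t and a unitary map U : L^2(]a,b[, w\,dx) \to L^2(dt):
t(x) = \int^x \sqrt{\frac{w(s)}{p(s)}}\,\mathrm{d}s,
\qquad
(Uf)(t) := \big(p\,w\big)^{1/4}(x(t))\, f(x(t)).
% A direct computation then brings (1.2) to the Schr\"odinger form (1.1):
U\Big(-\frac{1}{w}\,\partial_x\, p\,\partial_x + \frac{q}{w}\Big)U^{-1}
  = -\partial_t^2 + V,
\qquad
V = \frac{q}{w} + \frac{\ddot m}{m},
\qquad m(t) := (p\,w)^{1/4}(x(t)),
% where dots denote derivatives with respect to t.
```

When \(p=w=1\), the transformation is the identity and \(V=q\); in general V combines \(q/w\) with a curvature-type term coming from the weight.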

The 1d Schrödinger operator is a classic subject with an extensive literature. Most of the literature is devoted to real V, when L can be realized as a self-adjoint operator. It is, however, quite striking that the usual theory, well-known from the real (self-adjoint) case, works almost equally well in the complex case. In particular, essentially the same theory of boundary conditions and the same formulas for Green’s operators [right inverses of (1.1)] hold as in the real case. We will describe these topics in detail in this paper.

A large part of the literature on 1d Schrödinger operators assumes that potentials are \(L^1\) near finite endpoints. Under this condition, one can impose the so-called regular boundary conditions (Dirichlet, Neumann or Robin). In this case, it is natural to use the so-called Weyl–Titchmarsh function and the formalism of the so-called boundary triplets, see, e.g., [2] and references therein. We are interested in general boundary conditions, such as those considered in [4, 9, 10], where the above approach does not directly apply. See the discussion at the end of Sect. 5.2.

One of the motivations of the present work is the study of exactly solvable Schrödinger operators, such as those given by the Bessel equation [4, 9] or the Whittaker equation [10]. Analysis of those operators indicates that non-real potentials are just as good, from the point of view of exact solvability, as real ones. It is also natural to organize exactly solvable Schrödinger operators into holomorphic families, whose elements are self-adjoint only in exceptional cases. Therefore, a theory of 1d Schrödinger operators with complex potentials and general boundary conditions provides a natural framework for the study of exactly solvable Hamiltonians.

As we mentioned above, we suppose that \(V\in L_\mathrm {loc}^1]a,b[\). The theory is much easier if \(V\in L_\mathrm {loc}^2]a,b[\), because one could then assume that the operator acts on \(C_\mathrm {c}^2]a,b[\). Dealing with potentials in \(L_\mathrm {loc}^1\) causes a lot of trouble—this is, however, a rather natural assumption. We think that handling the more general case forces us to better understand the problem. Actually, one could consider even more singular potentials: it is easy to generalize our results to potentials V that are Borel measures on ]a,b[.

In the preliminary Sect. 2, we study the inhomogeneous problem given by the operator (1.1) by basic ODE methods. We introduce some distinguished Green’s operators: The two-sided Green’s operators are related to boundary conditions on both sides. The forward and backward Green’s operators are related to the Cauchy problem at the endpoints of the interval. These operators are among the most frequently used objects in mathematics. Usually they appear under the guise of Green’s functions, which are the integral kernels of Green’s operators.

Note that in the Hilbert space \( L^2]a,b[\), we have a natural conjugation \(f\mapsto {\overline{f}}\) and a bilinear form \(\langle f|g\rangle : =\int fg\). For an operator T, it is natural to define its transpose \(T^\#:=\overline{T^*}\), where the bar denotes complex conjugation. We say that T is self-transposed if \(T^\#=T\) (in the literature, the alternative name J-self-adjoint is also used). These concepts play an important role in the theory of differential operators on \( L^2]a,b[\). Therefore, we devote Sect. 3 to a general theory of operators in a Hilbert space with a conjugation. We briefly recall the theory of restrictions/extensions of unbounded operators. The concept of a self-transposed operator turns out to be a natural alternative to the concept of self-adjointness. It is well-known that self-adjoint operators are well-posed (possess a non-empty resolvent set). Not every self-transposed operator is well-posed; however, they often are.

The remaining sections are devoted to realizations of L given by (1.1) as closed operators on the Hilbert space \(L^2]a,b[\). The most obvious realizations are the minimal one \(L_{\min }\) and the maximal one \(L_{\max }\). We prove that these operators are closed and densely defined. Under the assumption \(V\in L_\mathrm {loc}^1]a,b[\), the proof is quite long and technical but, in our opinion, instructive. If we assumed \(V\in L_\mathrm {loc}^2]a,b[\), the proof would be easy.

At this point, it is helpful to recall the basic theory of 1d Schrödinger operators for real potentials. One is usually interested in self-adjoint extensions of the Hermitian operator \(L_{\min }\). They are situated “half-way” between \(L_{\min }\) and \(L_{\max }\). More precisely, we have three possibilities:

  (1) \(L_{\min }=L_{\max }\): then, \(L_{\min }\) is already self-adjoint.

  (2) The codimension of \(\mathcal {D}(L_{\min })\) in \(\mathcal {D}(L_{\max })\) is 2: if \(L_\bullet \) is a self-adjoint extension of \(L_{\min }\), the inclusions \(\mathcal {D}(L_{\min })\subset \mathcal {D}(L_\bullet )\subset \mathcal {D}(L_{\max })\) are of codimension 1.

  (3) The codimension of \(\mathcal {D}(L_{\min })\) in \(\mathcal {D}(L_{\max })\) is 4: if \(L_\bullet \) is a self-adjoint extension of \(L_{\min }\), the inclusions \(\mathcal {D}(L_{\min })\subset \mathcal {D}(L_\bullet )\subset \mathcal {D}(L_{\max })\) are of codimension 2.

Note that in the literature, it is common to use the theory of deficiency indices. The cases (1), (2), resp., (3) correspond to \(L_{\min }\) having the deficiency indices (0, 0), (1, 1) and (2, 2). However, the deficiency indices do not have a straightforward generalization to the complex case.

Let us go back to complex potentials. Note that the Hermitian conjugate of an operator T, denoted \(T^*\), turns out to be less useful than its transpose \(T^\#\). In particular, the role of self-adjoint operators is taken up by self-transposed operators.

By choosing a subspace of \(\mathcal {D}(L_{\max })\) closed in the graph topology and restricting \(L_{\max }\) to this subspace, we can define a closed operator. Such operators will be called closed realizations of L. We will show that in the complex case closed realizations of L possess a theory quite analogous to that of the real case.

We are mostly interested in realizations of L whose domain contains \(\mathcal {D}(L_{\min })\), that is, operators \(L_\bullet \) satisfying \(L_{\min }\subset L_\bullet \subset L_{\max }\). Such realizations are defined by specifying boundary conditions. Similarly as in the real case, boundary conditions are given by functionals on \(\mathcal {D}(L_{\max })\) that vanish on \(\mathcal {D}(L_{\min })\). For each of the endpoints a and b, there is a space of functionals describing boundary conditions. We call the dimension of this space the boundary index at a, resp., b, and denote it \(\nu _a(L)\), resp., \(\nu _b(L)\). These indices can take only the values 0 or 2. Therefore, we have the following classification of operators L:

$$\begin{aligned} \begin{array}{l} \hbox {(1) } \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })=0,\hbox { or }L_{\min }=L_{\max },\\ \hbox {(2) } \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })=2,\\ \hbox {(3) } \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })=4. \end{array} \end{aligned}$$
(1.3)

Let \(\lambda \in \mathbb C\). It is natural to consider the space of solutions of \((L-\lambda )u=0\) that are square integrable near a, resp., b. We denote these spaces by \(\mathcal {U}_a(\lambda )\), resp., \(\mathcal {U}_b(\lambda )\). We will prove that

$$\begin{aligned}&\nu _a(L)=0 \Longleftrightarrow \dim \mathcal {U}_a(\lambda )\le 1 \ \forall \lambda \in \mathbb C\Longleftrightarrow \dim \mathcal {U}_a(\lambda )\le 1 \text { for some } \lambda \in \mathbb C, \end{aligned}$$
(1.4)
$$\begin{aligned}&\nu _a(L)=2 \Longleftrightarrow \dim \mathcal {U}_a(\lambda )=2 \ \forall \lambda \in \mathbb C\Longleftrightarrow \dim \mathcal {U}_a(\lambda )=2 \text { for some } \lambda \in \mathbb C. \end{aligned}$$
(1.5)

The most useful realizations of L are well-posed. Not all L possess such realizations. One can classify such L’s as follows. If L possesses a well-posed realization \(L_\bullet \), then one of the following conditions holds:

$$\begin{aligned} \begin{array}{l} \hbox {(1) } \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )=0,\hbox { or }L_\bullet =L_{\max }\\ \hbox {(2) } \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )=1,\\ \hbox {(3) } \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )=2. \end{array} \end{aligned}$$
(1.6)

There is a strict correspondence between (1), (2) and (3) of (1.3) and (1), (2) and (3) of (1.6). In cases (1) and (2) from Table (1.6), we describe all realizations with non-empty resolvent set and their resolvents. We prove that if \(L_\bullet \) is such a realization, then we can find \(u\in \mathcal {U}_a(\lambda )\) and \(v\in \mathcal {U}_b(\lambda )\) with the Wronskian equal to 1, so that the integral kernel of \((L_\bullet -\lambda )^{-1}\) can then be easily expressed in terms of u and v.

The case (3) is much richer. We describe all realizations of L that have separated boundary conditions (given by independent boundary conditions at a and b). If in addition they are self-transposed, then essentially the same formula as in (1) and (2) gives \((L_\bullet -\lambda )^{-1}\). There are, however, two other separated realizations of L, which are denoted \(L_a\) and \(L_b\), with boundary conditions only at a, resp., b. They are not self-transposed; in fact, they satisfy \(L_a^\#=L_b\). Their resolvents are given by what we call forward and backward Green’s operators, which incidentally are cousins of the retarded and advanced Green’s functions, well-known from the theory of the wave equation.

In the last section, we discuss potentials with a negative imaginary part. We show that under some weak conditions they define dissipative 1d Schrödinger operators. We also describe Weyl’s limit point–limit circle method for such potentials. For real potentials, this method allows us to determine the dimension of \(\mathcal {U}_a(\lambda )\) for \({\text {Im}}(\lambda )>0\): if a is limit point, then \(\dim \mathcal {U}_a(\lambda )=1\); if a is limit circle, then \(\dim \mathcal {U}_a(\lambda )=2\). The picture is more complicated if the potential is complex: there are examples where the endpoint a is limit point and \(\mathcal {U}_a(\lambda )\) is two-dimensional.

The 1d Schrödinger operator is one of the most classic topics in mathematics. Already in the first half of the 19th century, Sturm and Liouville considered second-order differential operators on a finite interval with various boundary conditions. The theory was extended to the half-line and the line in a celebrated work by Weyl.

Second-order ODEs and 1d Schrödinger operators are considered in many textbooks, including Atkinson [1], Coddington–Levinson [5], Dunford–Schwartz [12, 13], Naimark [20], Pryce [21], de Alfaro–Regge [7], Reed–Simon [23], Stone [25], Titchmarsh [27], Teschl [26], Gitman–Tyutin–Voronov [17]. However, in the literature complex potentials are rarely studied in detail, and when they are, little attention is paid to nontrivial boundary conditions. The monograph by Edmunds–Evans [14] is often considered one of the most up-to-date sources for results on this subject. Many results presented in our article have their counterparts in the literature, especially in [14]. Let us try to make a more detailed comparison of our paper with the literature.

Most of the material of Sect. 2 is standard. However, we have not seen a separate discussion of semiregular boundary conditions, as described in Proposition 2.5 (2) and (3). The definitions of the canonical bisolution \(G_\leftrightarrow \), the various Green’s operators \(G_{u,v}\), \(G_\leftarrow \), \(G_\rightarrow \) and the relations (2.21)–(2.23) between them are implicit in many works on the subject; however, they are rarely emphasized separately.

The material of our Sect. 3 on Hilbert spaces with conjugation is to a large extent contained in Chap. 3, Sects. 3 and 5 of [14]. It is based on earlier results of Vishik [28], Galindo [16] and Knowles [19]. However, our presentation seems to be somewhat different. It shows in particular that the existence of a self-transposed extension follows almost trivially from the basic theory of symplectic spaces described in Appendix A. Another special feature of our Sect. 3 is a discussion of properties of right inverses of an unbounded operator.

The deepest result described in our paper is probably Theorem 6.15 about the characterization of boundary conditions by square integrable solutions. This result is actually not contained in [14]. It is based on Everitt and Zettl [15, Theorem 9.1] and uses [1, Theorem 9.11.2] of Atkinson and [22, Theorem 5.4] of Race.

The study of Green’s operators contained in Sect. 7 is probably to some extent new.

A separate subject that we discuss is potentials with negative imaginary part studied by means of the so-called Weyl limit circle/limit point method. Here, the main reference is Sims [24], see also [3, 14].

The present manuscript grew out of the Appendix of [4], devoted to 1d Schrödinger operators with the potential \(\frac{1}{x^2}\). That paper and its follow-ups [9, 10] illustrated that 1d Schrödinger operators with complex potentials and unusual boundary conditions appear naturally in various situations. Motivated by these applications, we decided to write an exposition of the basic general theory of 1d Schrödinger operators.

We decided to make our exposition as complete and self-contained as possible, explaining things that are perhaps obvious to experts, but often difficult for many readers. We freely use modern operator theory—this is not the case in a large part of the literature, which often sticks to old-fashioned approaches. We also use terminology and notation which, we believe, are as natural as possible in the context we consider. For instance, we prefer the name “self-transposed” to the “J-self-adjoint” found in a large part of the literature. We believe that our treatment of the subject is quite different from the one found in the literature. One of the main tools that we have found useful, and which rarely appears in the literature, is right inverses, which in the context of 1d Schrödinger operators we call by the traditional name of Green’s operators.

2 Basic ODE Theory

2.1 Notations

Recall that \(a<b\); a can be \(-\infty \) and b can be \(\infty \). The notation [a,b] stands for the interval including the endpoints a and b, while ]a,b[ for the interval without endpoints. [a,b[ and ]a,b] have the obvious meaning.

In some cases, one could use the notation involving either [a,b] or ]a,b[ without a change of meaning. For instance, \(L^p\big ([a,b]\big )=L^p\big (]a,b[\big )\). For esthetic reasons, we try to use a uniform notation—we usually write \(L^p]a,b[\), dropping the round brackets for brevity.

In other cases, the choice of [a,b] or ]a,b[ influences the meaning of a symbol: for instance, \(C[a,b]\subsetneq C]a,b[\). In particular, \(f\in C[-\infty ,b]\) implies that \(\lim \nolimits _{x\rightarrow -\infty }f(x)=:f(-\infty )\) exists.

\(f\in L_\mathrm {loc}^p]a,b[\) iff for any \(a<a_1<b_1<b\), we have \(f\Big |_{]a_1,b_1[}\in L^p]a_1,b_1[\). \(f\in L_\mathrm {loc}^p[a,b[\) iff in addition \(f\in L^p[a,a_1[\), and similarly for \(L_\mathrm {loc}^p]a,b]\).

\(f\in L_\mathrm {c}^p]a,b[\) iff \(f\in L^p]a,b[\) and \(\mathrm {supp}f\) is a compact subset of ]a,b[. The subscript \(\mathrm {c}\) has the analogous meaning in other situations.

\(\oplus \) will mean the topological direct sum of two spaces.

2.2 Absolutely Continuous Functions

We will denote by \(f'\) or \(\partial f\) the derivative of a distribution f. We will denote by \(\mathrm{AC}]a,b[\) the space of absolutely continuous functions on ]a,b[, that is, distributions on ]a,b[ whose derivative is in \(L_\mathrm {loc}^1]a,b[\). This definition is equivalent to the more common one: for every \(\epsilon >0\) there exists \(\delta >0\) such that for any family of non-intersecting intervals \([x_i,x_i']\) in ]a,b[ satisfying \(\sum _{i=1}^n|x_i'-x_i|<\delta \) we have \(\sum _{i=1}^n|f(x_i')-f(x_i)|<\epsilon \).

We have \(\mathrm{AC}]a,b[\subset C]a,b[\). If \(f,g\in \mathrm{AC}]a,b[\), then \(fg\in \mathrm{AC}]a,b[\) and the Leibniz rule holds:

$$\begin{aligned} (fg)'= f'g+fg' . \end{aligned}$$
(2.1)

\(\mathrm{AC}^n]a,b[\) will denote the space of distributions whose nth derivative is in \(\mathrm{AC}]a,b[\).

Lemma 2.1

Let \(f_n\in \mathrm{AC}]a,b[\) be a sequence such that for any \(a<a_1<b_1<b\), \(f_n\rightarrow f\) uniformly on \([a_1,b_1]\) and \(f_n'\rightarrow g\) in \(L^1[a_1,b_1]\). Then, \(f\in \mathrm{AC}]a,b[\) and \(g=f'\).

We will denote by \(\mathrm{AC}[a,b]\) the space of functions on [a,b] whose (distributional) derivative is in \(L^1]a,b[\). Clearly, \(\mathrm{AC}[a,b]\subset C[a,b]\). If \(f\in \mathrm{AC}[a,b]\), then

$$\begin{aligned} \int _{a}^{b} f'(x)\mathrm dx&=f(b)-f(a). \end{aligned}$$
(2.2)

Note that a can be \(-\infty \) and b can be \(\infty \).

Obviously, if \(f\in \mathrm{AC}]a,b[\) and \(a<a_1<b_1<b\), then \(f\Big |_{[a_1,b_1]}\) belongs to \(\mathrm{AC}[a_1,b_1]\).

2.3 Choice of Functional-Analytic Setting

Throughout the section, we assume that \(V\in L_\mathrm {loc}^1]a,b[\) and we consider the differential expression

$$\begin{aligned} L:=-\partial ^2+V . \end{aligned}$$
(2.3)

Sometimes we restrict our operator to a smaller interval, say ]c,d[, where \(a\le c<d\le b\). Then, (2.3) restricted to ]c,d[ is denoted \(L^{c,d}\).

Eventually, we would like to study operators in \(L^2]a,b[\) associated with L, which in many respects seems the most natural setting for one-dimensional Schrödinger operators. This introductory section, however, is devoted mostly to the equation \(Lf=g\). We postpone considerations related to operator realizations of L to the later sections.

Suppose that we choose \(L_\mathrm {loc}^1]a,b[\) as the target space for (2.3), which seems to be a rather general function space. Note that if \(f\in L_\mathrm {loc}^\infty ]a,b[\), the product fV is well-defined and belongs to \(L_\mathrm {loc}^1]a,b[\). Moreover, this is the best we can do if V is an arbitrary locally integrable function, i.e., we cannot replace \(L_\mathrm {loc}^\infty \) by a larger space. Then, if we consider \(L_\mathrm {loc}^\infty \) as the initial space for (2.3) and we require that the target space for (2.3) is \(L_\mathrm {loc}^1]a,b[\), we are forced to work with functions \(f\in L_\mathrm {loc}^\infty ]a,b[\) such that \(Lf \in L_\mathrm {loc}^1]a,b[\). But then \(f''\in L_\mathrm {loc}^1]a,b[\), and hence \(f\in \mathrm{AC}^1]a,b[\).

Therefore, it is natural to consider (2.3) as an operator \(L:\mathrm{AC}^1]a,b[\,\rightarrow L_\mathrm {loc}^1]a,b[\), which we will do throughout this paper. Restrictions of L to subspaces of \(\mathrm{AC}^1]a,b[\) which are sent into \(L^2]a,b[\) by L are the objects of main interest in our study.

We equip \(L^1_\mathrm {loc}]a,b[\) with the topology of local \(L^1\) convergence, i.e., a sequence \(\{f_n\}\) converges to f if and only if \(\lim \nolimits _{n\rightarrow \infty }\Vert f_n-f\Vert _{L^1(J)}=0\) for any compact \(J\subset ]a,b[\). Clearly, this is a complete space. It is convenient to think of L as an operator in \(L^1_\mathrm {loc}]a,b[\) with domain \(\mathrm{AC}^1]a,b[\). Then, L is densely defined, and later on we will prove that it is closed (see Corollary 2.17).

2.4 The Cauchy Problem

For \( g\in L_\mathrm {loc}^1]a,b[\), we consider the problem

$$\begin{aligned} Lf=g. \end{aligned}$$
(2.4)

Proposition 2.2

Let \(a<d<b\). Then, for any \(p_0\), \(p_1\) there exists a unique \(f\in \mathrm{AC}^1]a,b[\) satisfying (2.4) and

$$\begin{aligned} f(d)=p_0,\quad f'(d)=p_1. \end{aligned}$$
(2.5)

Proof

Define the operators \(Q_d\) and \(T_d\) by their integral kernels

$$\begin{aligned} Q_d(x,y):={\left\{ \begin{array}{ll} (y-x)V(y) ,&{}\quad x<y<d,\\ (x-y)V(y),&{}\quad x>y>d,\\ 0&{}\quad \text {otherwise; } \end{array}\right. }\end{aligned}$$
(2.6)
$$\begin{aligned} T_d(x,y):={\left\{ \begin{array}{ll} (x-y) ,&{}\quad x<y<d,\\ (y-x),&{}\quad x>y>d,\\ 0&{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(2.7)

The Cauchy problem can be rewritten as \(F(f)=f\), where F is a map on \(C]a,b[\) given by

$$\begin{aligned} F(f)(x):=p_0+p_1(x-d)+Q_df(x) +T_dg(x). \end{aligned}$$
(2.8)

If \(a\le a_1<d<b_1\le b\) and we view \(Q_d\) as an operator on \(C[a_1,b_1]\) with the supremum norm, then

$$\begin{aligned} \Vert Q_d\Vert \le \max \left\{ \int _{a_1}^d|V(y)(y-a_1)|\mathrm dy,\, \int _d^{b_1}|V(y)(y-b_1)|\mathrm dy\right\} . \end{aligned}$$
(2.9)

If the interval \([a_1,b_1]\) is finite, the operator \(T_d\) is bounded from \(L^1]a_1,b_1[\) into \(C[a_1,b_1]\).

Thus, by choosing a sufficiently small interval \([a_1,b_1]\) containing d, we can make F well-defined and contractive on \(C[a_1,b_1]\). (F is contractive iff \(\Vert Q_d\Vert <1\)). By Banach’s fixed point theorem (or the convergence of an appropriate Neumann series), there exists \(f\in C[a_1,b_1]\) such that \(f=F(f)\). Then, note that we have

$$\begin{aligned} f'(x)=F(f)'(x) =p_1+\int _d^xV(y)f(y)\mathrm dy-\int _d^xg(y)\mathrm dy \end{aligned}$$

hence \(f'\in \mathrm{AC}[a_1,b_1]\) and \(f\in \mathrm{AC}^1[a_1,b_1]\).

Thus, for every \(d\in ]a,b[\), we can find an open interval containing d on which there exists a unique solution to the Cauchy problem. We can cover ]a,b[ with intervals \(]a_j,b_j[\) containing \(d_j\) with the analogous property. This allows us to extend the solution with initial conditions at any \(d\in ]a,b[\) to the whole of ]a,b[. \(\square \)
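The contraction argument above is easy to run numerically. The following is an illustrative sketch (not from the paper) of the iteration \(f\mapsto p_0+p_1(x-d)+Q_df\) for the homogeneous problem \(g=0\), with the sample data \(V=1\), \(d=0\), \(p_0=1\), \(p_1=0\), whose exact solution is \(\cosh x\):

```python
import numpy as np

# Picard iteration for the Cauchy problem of L = -d^2/dx^2 + V with g = 0.
# Fixed-point map: f |-> p0 + p1*(x - d) + \int_d^x (x - y) V(y) f(y) dy.
# Illustrative data (not from the paper): V = 1, so the solution with
# f(0) = 1, f'(0) = 0 is cosh(x); the interval is short, so the map contracts.
d, p0, p1 = 0.0, 1.0, 0.0
x = np.linspace(d, 0.5, 2001)
V = np.ones_like(x)

def cumtrapz(h):
    """Cumulative trapezoid approximation of the integral of h from d to x."""
    return np.concatenate([[0.0], np.cumsum((h[1:] + h[:-1]) / 2 * np.diff(x))])

f = np.full_like(x, p0)          # initial guess
for _ in range(60):
    vf = V * f
    # \int_d^x (x - y) vf(y) dy = x \int_d^x vf dy - \int_d^x y vf(y) dy
    f_new = p0 + p1 * (x - d) + x * cumtrapz(vf) - cumtrapz(x * vf)
    if np.max(np.abs(f_new - f)) < 1e-14:
        f = f_new
        break
    f = f_new

err = np.max(np.abs(f - np.cosh(x)))   # compare with the exact solution
```

The iterates converge geometrically, in agreement with the bound (2.9) for \(\Vert Q_d\Vert \) on a short interval.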

2.5 Regular and Semiregular Endpoints

One-dimensional Schrödinger operators possess the simplest theory when \(-\infty<a<b<\infty \) and \(V\in L^1]a,b[\). Then, we say that L is a regular operator. Most of the classical Sturm–Liouville theory is devoted to such operators. More generally, the following standard terminology will be convenient.

Definition 2.3

The endpoint a is called regular, or L is called regular at a, if a is finite and \(V\in L_\mathrm {loc}^1[a,b[\) (i.e., V is integrable around a). Similarly for b. Hence, L is regular if both endpoints are regular.

1d Schrödinger operators satisfying the following conditions are also relatively well behaved:

Definition 2.4

The endpoint a is called semiregular if a is finite and \((x{-}a)V\in L_\mathrm {loc}^1[a,b[\) (i.e., \((x{-}a)V\) is integrable around a). Similarly for b.

Proposition 2.5

Let \(g\in L_\mathrm {loc}^1[a,b[\).

  (1) Let a be a regular endpoint. Let \(p_0\), \(p_1\) be given. Then, there exists a unique \(f\in \mathrm{AC}^1[a,b[\) satisfying \(Lf=g\) and \(f(a)=p_0, f'(a)=p_1\).

  (2) Let a be a semiregular endpoint. Then, all solutions f of \(Lf=g\) have a limit at a.

  (3) Let a be a semiregular endpoint. Let \(p_1\) be given. Then, there exists a unique \(f\in \mathrm{AC}^1[a,b[\) satisfying \(Lf=g\) and \(f(a)=0, f'(a)=p_1\).

Proof

  (1) is proven as in Proposition 2.2, choosing \(d=a\); the operators (2.6) and (2.7) are now

    $$\begin{aligned}&Q_a(x,y):={\left\{ \begin{array}{ll} (x-y)V(y),&{}\quad x>y>a,\\ 0&{}\quad \text {otherwise; } \end{array}\right. } \end{aligned}$$
    (2.10)
    $$\begin{aligned}&T_a(x,y):={\left\{ \begin{array}{ll} (y-x),&{}\quad x>y>a,\\ 0&{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
    (2.11)

To prove (2), we choose d inside ]a,b[ such that \(\int _a^d|V(y)(y-a)|\mathrm dy<1\). This guarantees that the operator \(Q_d\) is contractive on \(C[a,d]\).

To prove (3), we modify the Proof of Proposition 2.2. We choose \(d=a\) and use the Banach space \(|x-a|^{-1}C]a,b_1[:= \{f\in C]a,b_1[ \ \mid \Vert f\Vert := \sup \frac{|f(x)|}{|x-a|}<\infty \}\), where \(a<b_1<b\) is finite and will be chosen later. The Cauchy problem can be rewritten as \(F(f)=f\), where F is the map on \(|x-a|^{-1}C]a,b_1[\) given by

$$\begin{aligned} F(f)(x):=p_1(x-a)+Q_af(x) +T_ag(x). \end{aligned}$$
(2.12)

\(Q_a\) is an operator on \(|x-a|^{-1}C[a,b_1]\) whose norm satisfies

$$\begin{aligned} \Vert Q_a\Vert \le \int _{a}^{b_1}|V(y)(y-a)|\mathrm dy. \end{aligned}$$
(2.13)

The operator \(T_a\) is bounded from \(L^1]a,b_1[\) into \(|x-a|^{-1}C[a,b_1]\). Therefore, F is a well-defined map on \(|x-a|^{-1}C[a,b_1]\). Then, we argue similarly as in the Proof of Proposition 2.2. For \(b_1\) close enough to a, the map F is contractive and we can apply Banach’s fixed point theorem. From

$$\begin{aligned} f'(x)=F(f)'(x) =p_1+\int _a^x|y-a|V(y)|y-a|^{-1}f(y)\mathrm dy-\int _a^xg(y)\mathrm dy, \end{aligned}$$

we see that \(f'(a)=p_1\). \(\square \)

An example of a potential for which a finite endpoint is not semiregular is \(V(x)=\frac{c}{x^2}\) on \(]0,\infty [\). For its theory, see [4, 9].

2.6 Wronskian

Definition 2.6

The Wronskian of two differentiable functions u, v is

$$\begin{aligned} W(u,v;x)= W_x(u,v)=u(x)v'(x)-u'(x)v(x). \end{aligned}$$
(2.14)

Proposition 2.7

Let \(u,v\in \mathrm{AC}^1]a,b[\). Then, the Lagrange identity holds:

$$\begin{aligned} \partial _x W(u,v;x)=(Lu)(x) v(x)-u(x) (Lv)(x). \end{aligned}$$
(2.15)

Consequently, if \(Lu=Lv=0\), then W(uv) is a constant function.

Proof

Since \(u,u',v,v'\in \mathrm{AC}]a,b[\,\), the Wronskian can be differentiated and a simple computation yields (2.15). \(\square \)
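Numerically, the constancy of the Wronskian is easy to observe. Below is an illustrative sketch (not from the paper): two solutions of \(Lf=0\) with the sample potential \(V(x)=x\) are integrated by a classical Runge–Kutta scheme, and \(W(u,v)=uv'-u'v\) stays at its initial value 1 along the interval:

```python
import numpy as np

# Two solutions of -f'' + V f = 0, i.e. f'' = V f, with the illustrative
# potential V(x) = x; by the Lagrange identity their Wronskian is constant.
V = lambda s: s
h = 1e-3
xs = np.arange(0.0, 1.0, h)

def flow(f0, fp0):
    """Integrate y = (f, f') with y' = (f', V f) by classical RK4."""
    F = lambda s, y: np.array([y[1], V(s) * y[0]])
    ys = [np.array([f0, fp0])]
    for s in xs:
        y = ys[-1]
        k1 = F(s, y); k2 = F(s + h/2, y + h/2 * k1)
        k3 = F(s + h/2, y + h/2 * k2); k4 = F(s + h, y + h * k3)
        ys.append(y + h/6 * (k1 + 2*k2 + 2*k3 + k4))
    return np.array(ys)

u = flow(1.0, 0.0)                          # u(0) = 1, u'(0) = 0
v = flow(0.0, 1.0)                          # v(0) = 0, v'(0) = 1, so W = 1 at 0
W = u[:, 0] * v[:, 1] - u[:, 1] * v[:, 0]   # Wronskian along [0, 1]
drift = np.max(np.abs(W - 1.0))
```

The residual drift is of the order of the integrator's error, illustrating that W(u,v) is a conserved quantity of the flow.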

Definition 2.8

The set of solutions in \(\mathrm{AC}^1]a,b[\) of the homogeneous equation \(Lf=0\) will be denoted \(\mathcal {N}( L)\).

\(\mathcal {N}(L)\) is a two-dimensional complex space, and the map \(W:\mathcal {N}( L)\times \mathcal {N}( L)\rightarrow \mathbb {C}\) is bilinear and antisymmetric. \(u,v\in \mathcal {N}( L)\) are linearly independent if and only if \(W(u,v)\ne 0\). If \(u_2=\alpha u_1+\beta v_1, v_2=\gamma u_1+\delta v_1\), then

$$\begin{aligned} W(u_2,v_2)=W(\alpha u_1,\delta v_1)+W(\beta v_1,\gamma u_1) = (\alpha \delta -\beta \gamma ) W(u_1,v_1). \end{aligned}$$

Hence, if \(W(u_1,v_1)=1\), then \(W(u_2,v_2)=1\) if and only if \(\alpha \delta -\beta \gamma =1\), and in this case a simple computation gives

$$\begin{aligned} u_2(x)v_2(y)-u_2(y)v_2(x)=u_1(x) v_1(y)-u_1(y) v_1(x),\quad x,y\in ]a,b[. \end{aligned}$$
(2.16)

Thus, the function

$$\begin{aligned} G_\leftrightarrow (x,y):=u(x) v(y)-u(y) v(x) \end{aligned}$$
(2.17)

is independent of the choice of the solutions uv of the homogeneous equation \(Lf=0\) if they satisfy \(W(u,v)=1\). (2.17) can be interpreted as the integral kernel of an operator \(G_\leftrightarrow : L_{\mathrm {c}}^1]a,b[\rightarrow \mathrm{AC}^1]a,b[\) and will be called the canonical bisolution of L. It satisfies

$$\begin{aligned} LG_\leftrightarrow =0,\quad G_\leftrightarrow L=0,\quad G_\leftrightarrow (x,y)= - G_\leftrightarrow (y,x). \end{aligned}$$
(2.18)

2.7 Green’s Operators

The expression “Green’s function” is commonly used to denote the integral kernel of a right inverse of a differential operator, usually of second order. We will use the expression “Green’s operator” for a right inverse of L.

Definition 2.9

An operator \(G_\bullet : L_{\mathrm {c}}^1]a,b[\rightarrow \mathrm{AC}^1]a,b[\) is called a Green’s operator of L if

$$\begin{aligned} LG_\bullet g=g,\quad g\in L_{\mathrm {c}}^1]a,b[. \end{aligned}$$
(2.19)

Note that we do not require that \( G_\bullet L=\mathbb {1}\). Note also that \(G_\leftrightarrow \) is not a Green’s operator—it is a bisolution. However, it is so closely related to various Green’s operators that its symbol contains the same letter G.

There are many Green’s operators. If \(G_\bullet \) is a Green’s operator, uv are two solutions of the homogeneous equation, and \(\phi ,\psi \in L_\mathrm {loc}^\infty ]a,b[\) are arbitrary, then

$$\begin{aligned} G_\bullet +|u\rangle \langle \phi |+|v\rangle \langle \psi | \end{aligned}$$

is also a Green’s operator. Recall that if E, F are vector spaces, g belongs to the dual of E, and \(f\in F\), then \(|{f}\rangle \langle {g}|\) is the linear map \(E\rightarrow F\) defined by \(e\mapsto g(e) f\).

Let us define some distinguished Green’s operators. Let uv be two solutions of the homogeneous equation such that \(W(v ,u)=1\). We easily check that the operators \(G_{u,v}\), \(G_a\) and \(G_b\) defined below are Green’s operators in the sense of Definition 2.9:

Definition 2.10

The Green’s operator associated with u at a and v at b, denoted \(G_{u,v}\), is defined by its integral kernel

$$\begin{aligned} G_{u,v}(x,y):={\left\{ \begin{array}{ll} u(x)v (y),&{}\quad x<y,\\ v (x)u(y),&{}\quad x>y. \end{array}\right. } \end{aligned}$$

Operators of the form \(G_{u,v}\) will sometimes be called two-sided Green’s operators.
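Definition 2.9 can be verified for \(G_{u,v}\) in a concrete case. The sketch below (an illustrative choice, not from the paper) takes \(V=0\) on \(]0,\pi [\), \(u(x)=x\), \(v(x)=1\), so that \(W(v,u)=vu'-v'u=1\), and checks by central differences that \(f=G_{u,v}g\) satisfies \(-f''=g\) for \(g=\sin \):

```python
import numpy as np

# Illustrative case (not from the paper): V = 0 on ]0, pi[, with the
# homogeneous solutions u(x) = x, v(x) = 1, normalized by W(v, u) = 1.
# Then (G_{u,v} g)(x) = v(x) * int_0^x u g dy + u(x) * int_x^pi v g dy.
N = 4001
x = np.linspace(0.0, np.pi, N)
g = np.sin(x)

def cumtrapz(h):
    """Cumulative trapezoid approximation of the integral of h from 0 to x."""
    return np.concatenate([[0.0], np.cumsum((h[1:] + h[:-1]) / 2 * np.diff(x))])

I = cumtrapz(x * g)              # int_0^x u(y) g(y) dy
J = cumtrapz(g)                  # int_0^x g(y) dy
f = I + x * (J[-1] - J)          # f = G_{u,v} g; here f(x) = sin(x) + x exactly

# check that L f = -f'' reproduces g in the interior, by central differences
h = x[1] - x[0]
resid = np.max(np.abs(-(f[2:] - 2 * f[1:-1] + f[:-2]) / h**2 - g[1:-1]))
```

Note that f vanishes at 0 because u does, a first glimpse of how the choice of u at a and v at b encodes boundary conditions.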

Definition 2.11

The forward Green’s operator \(G_\rightarrow \) has the integral kernel

$$\begin{aligned} G_\rightarrow (x,y):={\left\{ \begin{array}{ll} 0,&{}\quad x<y,\\ v (x)u(y)-u(x)v (y),&{}\quad x>y. \end{array}\right. } \end{aligned}$$
(2.20)

Definition 2.12

The backward Green’s operator \(G_\leftarrow \) has the integral kernel

$$\begin{aligned} G_\leftarrow (x,y):={\left\{ \begin{array}{ll} u (x)v(y)-v(x)u (y),&{}\quad x<y,\\ 0,&{}\quad x>y.\end{array}\right. } \end{aligned}$$

By the comment after (2.16), the operators \(G_\rightarrow \) and \(G_\leftarrow \) are independent of the choice of u, v. For \(a<b_1<b\), \(G_\rightarrow \) maps \(L_{\mathrm {c}}^1]b_1,b[\) into functions that are zero on \(]a,b_1]\). Similarly, for \(a<a_1<b\), \(G_\leftarrow \) maps \(L_{\mathrm {c}}^1]a,a_1[\) into functions that are zero on \([a_1,b[\).

Note also some formulas for differences of two kinds of Green’s operators:

$$\begin{aligned} G_{u,v}-G_\rightarrow&=|u\rangle \langle v|, \end{aligned}$$
(2.21)
$$\begin{aligned} G_{u,v}-G_\leftarrow&=|v\rangle \langle u| , \end{aligned}$$
(2.22)
$$\begin{aligned} G_\rightarrow -G_\leftarrow&=|v\rangle \langle u|-|u\rangle \langle v|= G_\leftrightarrow , \end{aligned}$$
(2.23)
$$\begin{aligned} G_{u,v}-G_{u_1,v_1}&=|u\rangle \langle v|-|u_1\rangle \langle v_1|. \end{aligned}$$
(2.24)
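Relations (2.21)–(2.23) are identities of integral kernels and can be checked pointwise on a grid. In the sketch below (an illustrative choice, not from the paper) we take \(V=0\), \(u(x)=x\), \(v(x)=1\), so \(W(v,u)=1\); the canonical bisolution built from this pair has kernel \(v(x)u(y)-v(y)u(x)\):

```python
import numpy as np

# Illustrative pair for V = 0: u(x) = x, v(x) = 1, with W(v, u) = 1.
x = np.linspace(0.0, 1.0, 101)
u, v = x, np.ones_like(x)
X, Y = np.meshgrid(x, x, indexing="ij")
UV = np.outer(u, v)                        # kernel of |u><v|, i.e. u(x) v(y)
VU = np.outer(v, u)                        # kernel of |v><u|, i.e. v(x) u(y)

Guv  = np.where(X < Y, UV, VU)             # two-sided Green's operator, Def. 2.10
Gfwd = np.where(X > Y, VU - UV, 0.0)       # forward Green's operator (2.20)
Gbwd = np.where(X < Y, UV - VU, 0.0)       # backward Green's operator
Gbis = VU - UV                             # canonical bisolution for this pair

err1 = np.max(np.abs(Guv - Gfwd - UV))     # (2.21): G_{u,v} - G_fwd = |u><v|
err2 = np.max(np.abs(Guv - Gbwd - VU))     # (2.22): G_{u,v} - G_bwd = |v><u|
err3 = np.max(np.abs(Gfwd - Gbwd - Gbis))  # (2.23)
```

All three errors vanish up to rounding, since the relations hold exactly for the kernels themselves.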

The following definition introduces another class of Green’s operators in the sense of Definition 2.9, which are generalizations of forward and backward Green’s operators.

Definition 2.13

The Green’s operator associated with \(d\in ]a,b[\) is defined by the integral kernel

$$\begin{aligned}G_d(x,y):={\left\{ \begin{array}{ll} u (x)v(y)-v(x)u (y),&{}\quad x<y<d,\\ v (x)u(y)-u(x)v(y),&{}\quad x>y>d,\\ 0,&{}\quad \text {otherwise.} \end{array}\right. }\end{aligned}$$

As in the case of \(G_\rightarrow \) and \(G_\leftarrow \), these operators are independent of the choice of u, v. Note that if \(a<a_1<d<b_1<b\), then \(G_d\) maps \(L_{\mathrm {c}}^1]a,a_1[\) into functions that are zero on \([a_1,b[\), and \(L_{\mathrm {c}}^1] b_1,b[\) into functions that are zero on \(]a,b_1]\).

The Proof of Proposition 2.2 suggests how to construct \(G_d\), without knowing u, v, using the operators \(Q_d\) and \(T_d\) defined there. We have, at least formally,

$$\begin{aligned} G_d=(\mathbb {1}-Q_d)^{-1}T_d. \end{aligned}$$
(2.25)

If we choose \(a\le a_1<d<b_1\le b\) with \([a_1,b_1]\) finite and

$$\begin{aligned} \max \left\{ \int _{a_1}^d|V(x)(x-a_1)|\mathrm dx,\, \int _d^{b_1}|V(x)(x-b_1)|\mathrm dx\right\} <1, \end{aligned}$$
(2.26)

then (2.25) is given by a convergent Neumann series in the sense of an operator from \(L^1]a_1,b_1[\) to \(C[a_1,b_1]\).

Remark 2.14

The one-dimensional Schrödinger equation can be interpreted as the Klein–Gordon equation on a \(1+0\) dimensional spacetime (no spatial dimensions, only time). The operators \(G_\leftrightarrow \), \(G_\rightarrow \) and \(G_\leftarrow \) have important generalizations to globally hyperbolic spacetimes of any dimension, where they are usually called the Pauli–Jordan, the retarded, and the advanced propagator, respectively; see, e.g., [11].

2.8 Some Estimates

The following elementary estimates will be useful later on.

Lemma 2.15

Let J be an open interval of length \(\nu <\infty \) and \(f\in L^1(J)\) with \(f''\in L^1(J)\). Then, f and \(f'\) are continuous functions on the closure of J, and if a is an endpoint of J, then

$$\begin{aligned} |f(a)|+\nu |f'(a)|\le C\int _J \left( \nu |f''(x)| + \nu ^{-1} |f(x)| \right) \mathrm dx, \end{aligned}$$
(2.27)

where C is a real number independent of f and J.

Proof

By a scaling argument, we may assume \(\nu =1\). It suffices to assume that f is a distribution on \(\mathbb R\) such that \(f''=0\) outside J. Let \(\theta :\mathbb R\rightarrow \mathbb R\) be of class \(C^\infty \) outside of 0 and such that \(\theta (x)=0\) if \(x\le 0\), \(\theta (x)=x\) if \(0<x<1/2\), \(\theta (x)=0\) if \(x\ge 1\). Define \(\eta \) by \(\theta ''=\delta -\eta \) where \(\delta \) is the Dirac measure at the origin. Clearly, \(\eta \) is of class \(C^\infty \) with support in [1/2, 1]. For any distribution f, we have

$$\begin{aligned} f=\delta *f=\theta '' * f + \eta * f= \theta * f'' + \eta * f \ \text { hence also } \ f'= \theta ' * f'' + \eta ' * f. \end{aligned}$$

This clearly implies (2.27) for \(\nu =1\) and a the right endpoint of J. \(\square \)

Lemma 2.16

Assume that \(\ell :=\sup _J\int _J|V(x)|\mathrm dx<\infty \) where J runs over all the intervals \(J\subset ]a,b[\) of length \(\le 1\). Then, there are numbers \(C,\nu _0>0\) such that

$$\begin{aligned} \Vert f\Vert _{L^{\infty }(J)} + \nu \Vert f'\Vert _{L^{\infty }(J)} \le C\nu \Vert Lf\Vert _{L^1(J)} + C\nu ^{-1}\Vert f\Vert _{L^1(J)} \end{aligned}$$
(2.28)

for all \(f\in L_\mathrm {loc}^\infty ]a,b[\), all \(\nu \le \nu _0\), and all intervals \(J\subset ]a,b[\) of length \(\nu \).

Proof

Note that for a continuous f, we have \(f''\in L^1_{\text {loc}}\) if and only if \(Lf\in L^1_{\text {loc}}\), and then \(f'\) is absolutely continuous. We take \(\nu _0\le 1\) and strictly less than half the length of \(]a,b[\). If \(\nu \le \nu _0\), then (2.27) gives for x such that \(]x,x+\nu [\subset ]a,b[\):

$$\begin{aligned} |f(x)|+\nu |f'(x)|&\le C\int _x^{x+\nu } \left( \nu |Lf| + \nu |Vf| + \nu ^{-1} |f(y)| \right) \mathrm dy \\&\le C \Vert \nu |Lf| + \nu ^{-1} |f| \Vert _{L^1(x,x+\nu )} + C \ell \nu \Vert f\Vert _{L^\infty (x,x+\nu )} \\&\le C \nu \Vert Lf\Vert _{L^1(J)} + C \nu ^{-1} \Vert f \Vert _{L^1(J)} + C \ell \nu \Vert f\Vert _{L^\infty (J)}. \end{aligned}$$

If \(x\in ]a,b[\) and \(]x,x+\nu [\,\not \subset \, ]a,b[\), then \(]x-\nu ,x[\,\subset \, ]a,b[\) and we have an estimate as above with \(]x,x+\nu [\) replaced by \(]x-\nu ,x[\). Hence,

$$\begin{aligned} \Vert f\Vert _{L^\infty (J)} + \nu \Vert f'\Vert _{L^\infty (J)} \le C \nu \Vert Lf\Vert _{L^1(J)} + C \nu ^{-1} \Vert f \Vert _{L^1(J)} + C \ell \nu \Vert f\Vert _{L^\infty (J)}. \end{aligned}$$

If \(\nu _0\) is such that \(C\ell \nu _0<1\), we get the required estimate. \(\square \)

Recall (see Sect. 2.3) that \(L^1_\mathrm {loc}]a,b[\) is equipped with the topology of \(L^1\) convergence on compact subintervals and that we think of L as an operator in \(L^1_\mathrm {loc}]a,b[\) with domain \(\mathrm{AC}^1]a,b[\). The next result says that this operator is closed.

Corollary 2.17

Let \(\{f_n\}\) be a sequence in \(\mathrm{AC}^1]a,b[\) such that the sequences \(\{f_n\}\) and \(\{Lf_n\}\) are Cauchy in \(L^1_\mathrm {loc}]a,b[\). Then, the limits \(f:=\lim \nolimits _{n\rightarrow \infty }f_n\) and \(g:=\lim \nolimits _{n\rightarrow \infty }Lf_n\) exist in \( L_\mathrm {loc}^1]a,b[\) and we have \(f\in \mathrm{AC}^1]a,b[\) and \(Lf=g\).

Proof

The estimate (2.28) implies that on every compact interval J we have uniform convergence of \(f_n\) to f (and also of \(f_n'\) to \(f'\)). Therefore, \(Vf_n\big |_J\rightarrow Vf\big |_J\) in \(L^1(J)\) for any such J. Hence, \(-f_n''=Lf_n-Vf_n\) converges in \(L^1(J)\) to \(g-Vf\). Therefore, by Lemma 2.1, \(-f''=g-Vf\). We know that \(g-Vf\in L_\mathrm {loc}^1]a,b[\), hence \(f\in \mathrm{AC}^1]a,b[\). \(\square \)

3 Hilbert Spaces with Conjugation

3.1 Bilinear Scalar Product

Let \(\mathcal {H}\) be a Hilbert space equipped with a scalar product \((\cdot |\cdot )\). One says that J is a conjugation if it is an anti-linear operator on \(\mathcal {H}\) satisfying \(J^2=\mathbb {1}\) and \((Jf|Jg)=\overline{(f|g)}\). In a Hilbert space with a conjugation J, an important role is played by the natural bilinear form

$$\begin{aligned} \langle f|g\rangle :=(Jf|g). \end{aligned}$$
(3.1)

In our paper, we usually consider the Hilbert space \(\mathcal {H}=L^2]a,b[\), which has the obvious conjugation \(f\mapsto {\overline{f}}\). Its scalar product and its bilinear form are as follows:

$$\begin{aligned} ( f|g)&:=\int _a^b \overline{f(x)}g(x)\mathrm dx, \end{aligned}$$
(3.2)
$$\begin{aligned} \langle f|g\rangle&:=\int _a^b f(x)g(x)\mathrm dx=({\overline{f}}| g). \end{aligned}$$
(3.3)

Thus, we use round brackets for the sesquilinear scalar product and angular brackets for the bilinear form. Note that in some sense the latter plays a more important role in our paper (and in similar problems) than the former. See, e.g., [8, 10], where the same notation is used.
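A quick numerical illustration (ours, not from the paper) of the difference between (3.2) and (3.3): on a complex function, the sesquilinear product and the bilinear form give genuinely different values.

```python
import numpy as np

# Discretization of ]0,1[: sesquilinear (f|f) vs bilinear <f|f>
# for the complex function f(x) = e^{ix}.
x = np.linspace(0.0, 1.0, 10001)
h = x[1] - x[0]
f = np.exp(1j * x)

sesqui = np.sum(np.conj(f) * f) * h   # (f|f): conjugate-linear in the left slot
bilin = np.sum(f * f) * h             # <f|f> = (conj(f)|f): no conjugation

assert abs(sesqui - 1.0) < 1e-3       # |e^{ix}| = 1, so (f|f) = 1
assert abs(bilin - 1.0) > 0.1         # <f|f> = (e^{2i}-1)/(2i), far from 1
```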

If \(\mathcal {G}\subset L^2]a,b[\), we will write

$$\begin{aligned} \mathcal {G}^\perp&:=\{f\in L^2]a,b[\ |\ ( f|g)=0,\ g\in \mathcal {G}\}, \end{aligned}$$
(3.4)
$$\begin{aligned} \mathcal {G}^\mathrm {perp}&:=\{f\in L^2]a,b[ \ |\ \langle f|g\rangle =0,\ g\in \mathcal {G}\} =\overline{\mathcal {G}^\perp }. \end{aligned}$$
(3.5)

3.2 Transposition of Operators

If T is an operator, then \(\mathcal {D}(T)\), \(\mathcal {N}(T)\) and \(\mathcal {R}(T)\) will denote the domain, the nullspace (kernel) and the range of T. \({\overline{T}}\) denotes the complex conjugate of T. This means,

$$\begin{aligned} \mathcal {D}(\overline{T}):=\{\overline{f}\mid f\in \mathcal {D}(T)\},\quad {\overline{T}}f:=\overline{T{\overline{f}}},\quad f\in \mathcal {D}(\overline{T}). \end{aligned}$$
(3.6)

Suppose that T is densely defined. We say that \(u\in \mathcal {D}(T^\#)\) if

$$\begin{aligned} \langle u|Tv\rangle = \langle w|v\rangle , \quad v\in \mathcal {D}(T), \end{aligned}$$
(3.7)

for some \(w\in \mathcal {H}\). Then, we set \(T^\#u:=w\). The operator \(T^\#\) is called the transpose of T. Clearly, if \(T^*\) denotes the usual Hermitian adjoint of T, then \(T^\#=\overline{T^*}=\overline{T}^*\).

If T is a bounded linear operator with

$$\begin{aligned} \big (T f\big )(x):=\int _a^b T(x,y)f(y)\mathrm dy, \end{aligned}$$

then

$$\begin{aligned} \big (T^* f\big )(x)&=\int _a^b\overline{ T (y,x)}f(y)\mathrm dy, \end{aligned}$$
(3.8)
$$\begin{aligned} \big (T^\# f\big )(x)&=\int _a^b T (y,x)f(y)\mathrm dy, \end{aligned}$$
(3.9)
$$\begin{aligned} \big ({\overline{T}} f\big )(x)&=\int _a^b \overline{T (x,y)}f(y)\mathrm dy. \end{aligned}$$
(3.10)

An operator T is self-adjoint if \(T=T^*\). We will say that it is self-transposed if \( T^\#=T.\) It is useful to note that a holomorphic function of a self-transposed operator is self-transposed.
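For bounded operators given by kernels, the formulas (3.8)–(3.10) reduce in a finite-dimensional discretization to familiar matrix operations. The following sketch is our illustration, using a 2×2 matrix as the kernel.

```python
import numpy as np

# Finite-dimensional analogue of (3.8)-(3.10): for a matrix T acting as
# an integral kernel, the adjoint T^* conjugate-transposes, the transpose
# T^# merely transposes, and the conjugate T-bar merely conjugates entries.
T = np.array([[1 + 2j, 3j],
              [4.0, 5 - 1j]])

T_star = T.conj().T    # (3.8):  kernel conj(T(y, x))
T_sharp = T.T          # (3.9):  kernel T(y, x)
T_bar = T.conj()       # (3.10): kernel conj(T(x, y))

# The identity T^# = conj(T^*) = (T-bar)^* noted in the text:
assert np.allclose(T_sharp, T_star.conj())
assert np.allclose(T_sharp, T_bar.conj().T)
```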

Remark 3.1

It would be natural to call an operator satisfying \(T\subset T^\#\) symmetric. The natural name for an operator satisfying \(T\subset T^*\) would then be Hermitian. Unfortunately and confusingly, in a large part of mathematical literature the word symmetric is reserved for an operator satisfying the latter condition.

Many properties of the transposition have their exact analogs for the Hermitian conjugation and are proven similarly. Below, we describe some of them.

Proposition 3.2

Let T be a densely defined operator.

  1. (1)

    \(T^\#\) is closed.

  2. (2)

    If T is closed, then \(T=T^{\#\#}\).

  3. (3)

    Let \(T\subset S\). Then, \(S^\#\subset T^\#\) and

    $$\begin{aligned} \dim \mathcal {D}(S)/\mathcal {D}(T)=\dim \mathcal {D}(T^\#)/\mathcal {D}(S^\#). \end{aligned}$$
    (3.11)

Lemma 3.3

Let ST be linear operators on a Hilbert space \(\mathcal {H}\) such that:

  1. (1)

    \(\langle Sf|g\rangle =\langle f|Tg\rangle \) for all \(f\in \mathcal {D}(S)\) and \(g\in \mathcal {D}(T)\),

  2. (2)

    T is surjective,

  3. (3)

    \(\mathcal {R}(S)^\mathrm {perp}\subset \mathcal {N}( T)\).

Then, S is densely defined.

Proof

We must show:

$$\begin{aligned} \langle f|h\rangle =0,\ \forall f\in \mathcal {D}(S)\Rightarrow h=0. \end{aligned}$$

Since T is surjective, there is \(g\in \mathcal {D}(T)\) such that \(h=Tg\) and then we get \(0=\langle f|h\rangle =\langle f|Tg\rangle =\langle Sf|g\rangle \) by (1) for all \(f\in \mathcal {D}(S)\). Thus, \(g\in \mathcal {R}(S)^\mathrm {perp}\) and (3) gives \(Tg=0\) hence \(h=0\). \(\square \)

Here is a version of the closed range theorem [29, Sect. VII.5], which we will use in Sect. 3.4.

Theorem 3.4

If T is a closed densely defined operator in \(\mathcal {H}\), then the following assertions are equivalent:

  1. (1)

    \(\mathcal {R}(T)\) is closed,

  2. (2)

    \(\mathcal {R}(T^\#)\) is closed,

  3. (3)

    \(\mathcal {R}(T)=\mathcal {N}(T^\#)^\mathrm {perp}\),

  4. (4)

    \(\mathcal {R}( T^\#)=\mathcal {N}( T)^\mathrm {perp}\).

Remark 3.5

In a large part of the literature [14, 19], a different terminology and notation is used. If T is an operator, then \(JT^*J\) is called the J-adjoint of T; an operator T satisfying \(T=JT^*J\) is called J-self-adjoint, etc. In the context of our paper, \(Jf={\overline{f}}\). Moreover, (Jf|g) and \(JT^*J\) are denoted \(\langle f|g\rangle \), resp., \(T^\#\). Our notation and terminology stresses the naturalness of the bilinear product \(\langle \cdot |\cdot \rangle \) and of the transposition \(T\mapsto T^\#\). Therefore, we prefer them to the notation and terminology of, e.g., [14, 19], which puts J in many places.

3.3 Spectrum

Let T be a closed operator. We say that G is an inverse of T if G is bounded, \(GT=\mathbb {1}\) on \(\mathcal {D}(T)\), and \(TG=\mathbb {1}\) on \(\mathcal {H}\). An inverse of T, if it exists, is unique. If T possesses an inverse, we say that it is invertible.

We will denote by \(\mathrm {rs}(T)\) the resolvent set, that is the set of \(\lambda \in \mathbb C\) such that \(T-\lambda \) is invertible. The spectrum of T is \(\mathrm {sp}(T):=\mathbb C\backslash \mathrm {rs}(T)\).

Let T be densely defined. Then, T is invertible iff \(T^\#\) is; in that case \((T^{-1})^\#=(T^\#)^{-1}\) and \(\mathrm {sp}(T)=\mathrm {sp}(T^\#)\).

We say that a closed operator T is Fredholm if \(\dim \mathcal {N}(T)<\infty \) and \(\dim \mathcal {H}/\mathcal {R}(T)<\infty \). If T is a Fredholm operator, we define its index

$$\begin{aligned} \mathrm {ind}(T)=\dim \mathcal {N}(T)-\dim \mathcal {H}/\mathcal {R}(T). \end{aligned}$$
(3.12)

If a closed operator T is densely defined, then we have an equivalent, more symmetric definition: T is Fredholm if \(\mathcal {R}(T)\) is closed (equivalently, \(\mathcal {R}(T^\#)\) is closed), \(\dim \mathcal {N}(T)<\infty \) and \(\dim \mathcal {N}(T^\#)<\infty \). Besides,

$$\begin{aligned} \mathrm {ind}(T)=\dim \mathcal {N}(T)-\dim \mathcal {N}(T^\#). \end{aligned}$$
(3.13)
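The Fredholm notions above have a transparent finite-dimensional analogue. The following sketch (ours, not the paper's) checks (3.12) and (3.13) for a matrix regarded as a map between finite-dimensional spaces:

```python
import numpy as np

# Finite-dimensional sketch of (3.12)-(3.13): a matrix T, viewed as a map
# C^3 -> C^2, is automatically Fredholm, and its index
# dim N(T) - dim(H / R(T)) equals 3 - 2 regardless of the entries.
T = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])             # rank 1
r = np.linalg.matrix_rank(T)

dim_ker = T.shape[1] - r                    # dim N(T) = 2
codim_ran = T.shape[0] - r                  # dim C^2 / R(T) = 1
ind = dim_ker - codim_ran                   # index = 1 = 3 - 2
assert ind == T.shape[1] - T.shape[0]

# (3.13): the codimension of the range equals dim N(T^#),
# computed here as the nullity of the transpose.
dim_ker_sharp = T.shape[0] - np.linalg.matrix_rank(T.T)
assert ind == dim_ker - dim_ker_sharp
```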

One can introduce various varieties of the essential resolvent set and essential spectrum, [14]. One of them is

$$\begin{aligned} \mathrm {rs}_{\mathrm {F}0}(T):=\{\lambda \in \mathbb C\mid T-\lambda \quad \text {is Fredholm of index }0\}, \; \mathrm {sp}_{\mathrm {F}0}(T):=\mathbb C\backslash \mathrm {rs}_{\mathrm {F}0}(T).\nonumber \\ \end{aligned}$$
(3.14)

Clearly, \(\mathrm {rs}(T)\subset \mathrm {rs}_{\mathrm {F}0}(T)\).

3.4 Restrictions of Closed Operators

In this subsection, we fix a closed operator on a Hilbert space \(\mathcal {H}\). For consistency with the rest of the paper, this operator will be denoted by \(L_{\max }\). Note that \(\mathcal {D}(L_{\max })\) can be treated as a Hilbert space with the scalar product

$$\begin{aligned} (f|g)_L:=(f|g)+(L_{\max }f|L_{\max }g). \end{aligned}$$
(3.15)

We will investigate closed operators \(L_\bullet \) contained in \(L_{\max }\). Obviously, \(L_\bullet \subset L_{\max }\) if and only if \(L_\bullet -\lambda \subset L_{\max }-\lambda \), where \(\lambda \) is a complex number. Therefore, many of the statements in this and the next subsection have obvious generalizations, where \(L_{\max }\) is replaced with \(L_{\max }-\lambda \). For simplicity of presentation, we keep \(\lambda =0\).

Proposition 3.6

  1. (1)

    We have a 1–1 correspondence between closed subspaces \(\mathcal {L}_\bullet \) of \(\mathcal {D}(L_{\max })\) and closed operators \(L_\bullet \subset L_{\max }\) given by \(\mathcal {L}_\bullet =\mathcal {D}(L_\bullet )\).

  2. (2)

    If \(L_{\max }\) is Fredholm and \(L_\bullet \subset L_{\max }\) is closed, then \(L_\bullet \) is Fredholm if and only if \(\dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )<\infty \), and then

    $$\begin{aligned} \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )=\mathrm {ind}(L_{\max })-\mathrm {ind}(L_\bullet ) \end{aligned}$$
    (3.16)
  3. (3)

    If \(\mathcal {R}(L_{\max })=\mathcal {H}\) and \(L_\bullet \subset L_{\max }\) is closed, then \(\mathrm {ind}(L_\bullet )=0\) iff

    $$\begin{aligned} \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )=\dim \mathcal {N}(L_{\max }). \end{aligned}$$
    (3.17)

Proof

  1. (1)

    is obvious.

(2) Since \(L_\bullet \) is a restriction of \(L_{\max }\), we have \(\mathcal {N}(L_\bullet )\subset \mathcal {N}(L_{\max })\) and, \(\mathcal {N}(L_\bullet )\) being a finite-dimensional subspace of the Banach space \(\mathcal {D}(L_\bullet )\), there is a closed subspace \(\mathcal {D}_\bullet \) of \(\mathcal {D}(L_\bullet )\) such that \(\mathcal {D}(L_\bullet )=\mathcal {N}(L_\bullet ) \oplus \mathcal {D}_\bullet \). We have \(\mathcal {D}_\bullet +\mathcal {N}(L_{\max })\subset \mathcal {D}(L_{\max })\) with \(\mathcal {D}_\bullet \cap \mathcal {N}(L_{\max })=\{0\}\); hence, since \(\mathcal {N}(L_{\max })\) is finite dimensional, there is a closed subspace \(\mathcal {D}_{\max }\) of \(\mathcal {D}(L_{\max })\) which contains \(\mathcal {D}_\bullet \) and such that \(\mathcal {D}(L_{\max })=\mathcal {N}(L_{\max }) \oplus \mathcal {D}_{\max }\). Then, we clearly get

$$\begin{aligned} \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )&=\dim \mathcal {N}(L_{\max })/\mathcal {N}(L_\bullet ) + \dim \mathcal {D}_{\max }/\mathcal {D}_\bullet \\&=\dim \mathcal {N}(L_{\max })-\dim \mathcal {N}(L_\bullet ) + \dim \mathcal {D}_{\max }/\mathcal {D}_\bullet . \end{aligned}$$

The map \(L_{\max }:\mathcal {D}_{\max }\rightarrow \mathcal {R}(L_{\max })\) is bijective and its restriction to \(\mathcal {D}_\bullet \) has \(\mathcal {R}(L_\bullet )\) as image; by the open mapping theorem, it is a homeomorphism, so

$$\begin{aligned} \dim \mathcal {D}_{\max }/\mathcal {D}_\bullet =\dim \mathcal {R}(L_{\max })/\mathcal {R}(L_\bullet ) =\dim \mathcal {H}/\mathcal {R}(L_\bullet )-\dim \mathcal {H}/\mathcal {R}(L_{\max }). \end{aligned}$$

The last two relations imply (3.16).

Finally, (3) is an immediate consequence of (2). \(\square \)

The following concept is useful in the study of invertible operators contained in \(L_{\max }\):

Definition 3.7

We say that \(G_\bullet \) is a right inverse of \(L_{\max }\) if it is a bounded operator on \(\mathcal {H}\) such that \(\mathcal {R}(G_\bullet )\subset \mathcal {D}(L_{\max })\) and

$$\begin{aligned} L_{\max }G_\bullet =\mathbb {1}. \end{aligned}$$
(3.18)

Note that \(L_{\max }\) can have many right inverses or none.

Proposition 3.8

The following conditions are equivalent:

  1. (1)

    \(L_{\max }\) has right inverses,

  2. (2)

    \(\mathcal {R}(L_{\max })=\mathcal {H}\),

    If in addition \(L_{\max }\) is densely defined, then (1) and (2) are equivalent to

  3. (3)

    \(L_{\max }^\#:\mathcal {D}(L_{\max }^\#)\rightarrow \mathcal {H}\) is injective, and \(\mathcal {R}(L_{\max }^\#)\) is closed.

  4. (4)

    \(\mathcal {N}(L_{\max }^\#)=\{0\}\) and \(\mathcal {R}(L_{\max }^\#)=\mathcal {N}( L_{\max })^\mathrm {perp}\).

    Under these conditions, we have a bijective correspondence between

  5. (a)

    right inverses \(G_\bullet \) of \(L_{\max }\);

  6. (b)

    invertible operators \(L_\bullet \) contained in \(L_{\max }\);

  7. (c)

    closed subspaces \(\mathcal {L}_\bullet \) of \(\mathcal {D}(L_{\max })\) such that \(\mathcal {D}(L_{\max })=\mathcal {L}_\bullet \oplus \mathcal {N}( L_{\max }) \).

    This correspondence is given by \(G_\bullet =L_\bullet ^{-1}\) and \(\mathcal {D}(L_\bullet )=\mathcal {L}_\bullet \).

Proof

If \(G_\bullet \) is a right inverse for \(L_{\max }\), then \(L_{\max }\) is surjective due to (3.18). Conversely, assume \(L_{\max }\) is surjective. Let \(\mathcal {L}_\bullet \) be the orthogonal complement of the closed subspace \(\mathcal {N}( L_{\max }) \) in the Hilbert space \(\mathcal {D}(L_{\max })\). Then, \(\mathcal {D}(L_{\max })=\mathcal {L}_\bullet \oplus \mathcal {N}( L_{\max })\). Now, \(L_\bullet = L_{\max }\big |_{\mathcal {L}_\bullet }\) is a bijective map \(\mathcal {L}_\bullet \rightarrow \mathcal {H}\) and \(G_\bullet := L_\bullet ^{-1}\) is a right inverse of \(L_{\max }\). This proves \((1)\Leftrightarrow (2)\). The equivalence with (3) and (4) follows by the closed range theorem (see Theorem 3.4). \(\square \)

Proposition 3.9

Let \(G_\bullet \) be a right inverse of \(L_{\max }\). Then,

  1. (1)

    \(\mathcal {N}( G_\bullet )=\{0\}\).

  2. (2)

    \(G_\bullet \) is bounded from \(\mathcal {H}\) to \(\mathcal {D}(L_{\max })\).

  3. (3)

    \(P_\bullet :=G_\bullet L_{\max }\) is a bounded projection in the space \(\mathcal {D}(L_{\max })\) such that

    $$\begin{aligned} \mathcal {R}(P_\bullet ) =\mathcal {R}( G_\bullet ),\quad \mathcal {N}( P_\bullet ) =\mathcal {N}( L_{\max }). \end{aligned}$$
  4. (4)

    \(\mathcal {R}(G_\bullet )\) is closed in \(\mathcal {D}(L_{\max })\).

  5. (5)

    \(\mathcal {D}(L_{\max })=\mathcal {R}(G_\bullet )\oplus \mathcal {N}( L_{\max })\).

Proof

  1. (1)

    is obvious and

    $$\begin{aligned} \Vert G_\bullet f\Vert _L^2= \Vert L_{\max }G_\bullet f\Vert ^2+\Vert G_\bullet f\Vert ^2\le \big (1+\Vert G_\bullet \Vert ^2\big )\Vert f\Vert ^2 \end{aligned}$$
    (3.19)

    implies (2). Since \(L_{\max }: \mathcal {D}(L_{\max }) \rightarrow \mathcal {H}\) is bounded, \(P_\bullet \) is bounded on \(\mathcal {D}(L_{\max })\). Then,

    $$\begin{aligned} P_\bullet ^2=G_\bullet ( L_{\max }G_\bullet ) L_{\max }=G_\bullet L_{\max }=P_\bullet \end{aligned}$$

    hence, \(P_\bullet \) is a projection.

    It is obvious that \(\mathcal {R}(P_\bullet )\subset \mathcal {R}( G_\bullet )\). If \(g\in \mathcal {H}\), then

    $$\begin{aligned} G_\bullet g=G_\bullet L_{\max }G_\bullet g=P_\bullet G_\bullet g. \end{aligned}$$

    Hence, \(\mathcal {R}(G_\bullet )\subset \mathcal {R}( P_\bullet )\). This shows that \(\mathcal {R}( G_\bullet )= \mathcal {R}( P_\bullet )\).

    It is obvious that \(\mathcal {N}(L_{\max })\subset \mathcal {N}( P_\bullet )\). If \(0=P_\bullet f\), then

    $$\begin{aligned} 0=L_{\max }P_\bullet f=(L_{\max }G_\bullet ) L_{\max }f= L_{\max }f. \end{aligned}$$

    Hence, \(\mathcal {N}(P_\bullet )\subset \mathcal {N}(L_{\max })\). This shows that \(\mathcal {N}( P_\bullet )= \mathcal {N}(L_{\max })\).

    Thus, we have shown (3), which implies immediately (4) and (5).

\(\square \)
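Proposition 3.9(3) has a simple finite-dimensional illustration. In the sketch below (our construction, not from the paper), a surjective matrix A plays the role of \(L_{\max }\) and its Moore–Penrose pseudoinverse is one choice of right inverse.

```python
import numpy as np

# Sketch of Proposition 3.9(3): P = G A is a projection with range R(G)
# and nullspace N(A), for any right inverse G of a surjective A.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])       # surjective map C^3 -> C^2
G = np.linalg.pinv(A)                 # a right inverse: A G = 1
assert np.allclose(A @ G, np.eye(2))

P = G @ A
assert np.allclose(P @ P, P)          # P^2 = P: a projection

ns = np.array([1.0, -1.0, 1.0])       # spans N(A)
assert np.allclose(A @ ns, 0)
assert np.allclose(P @ ns, 0)         # N(A) is contained in N(P)
```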

Proposition 3.10

Let \(L_\bullet \) be a closed operator such that \(L_\bullet \subset L_{\max }\) and \(L_\bullet \) is invertible. Then,

$$\begin{aligned} \mathcal {D}(L_{\max })=\mathcal {D}(L_\bullet )\oplus \mathcal {N}(L_{\max }). \end{aligned}$$
(3.20)

Proof

\(G_\bullet :=L_\bullet ^{-1}\) is a right inverse of \(L_{\max }\). Now, (3.20) is the same as Proposition 3.9.(5). \(\square \)

Proposition 3.11

Let \(G_\bullet \) be a right inverse of \(L_{\max }\). If \(K:\mathcal {H}\rightarrow \mathcal {N}(L_{\max }) \) is a linear continuous map, then \(G_\bullet +K\) is also a right inverse of \(L_{\max }\). Conversely, all right inverses of \(L_{\max }\) are of this form.

In particular, suppose that \(\mathcal {N}(L_{\max })\) is n-dimensional and spanned by \(u_1,\ldots ,u_n\). Then, if \(G_1,G_2\) are two right inverses of \(L_{\max }\), there exist \(\phi _1,\ldots ,\phi _n\in \mathcal {H}\) such that

$$\begin{aligned} G_1-G_2=\sum _{j=1}^n|u_j\rangle \langle \phi _j| . \end{aligned}$$
(3.21)
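In finite dimensions, Proposition 3.11 can be checked directly. In the sketch below (ours; the matrix A, the vector u spanning its nullspace, and \(\phi \) are all illustrative choices), adding \(|u\rangle \langle \phi |\) to a right inverse produces another right inverse.

```python
import numpy as np

# Finite-dimensional sketch of Proposition 3.11: a surjective map with a
# nontrivial nullspace has many right inverses, differing by rank-one
# corrections |u><phi| with u in the nullspace (real data, so the
# bilinear pairing <phi|.| is the plain outer product).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])        # surjective: C^3 -> C^2
u = np.array([0.0, 0.0, 1.0])          # spans N(A)

G1 = np.linalg.pinv(A)                 # one right inverse
phi = np.array([2.0, -1.0])            # arbitrary phi in H
G2 = G1 + np.outer(u, phi)             # G1 + |u><phi|

assert np.allclose(A @ G1, np.eye(2))
assert np.allclose(A @ G2, np.eye(2))  # still a right inverse
```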

Proposition 3.12

Suppose that \(G_\bullet \) is a bounded operator on \(\mathcal {H}\) and \(\mathcal {D}\subset \mathcal {H}\) a dense subspace such that \(G_\bullet \mathcal {D}\subset \mathcal {D}(L_{\max })\) and

$$\begin{aligned} L_{\max }G_\bullet g=g,\quad g\in \mathcal {D}. \end{aligned}$$
(3.22)

Then, \(G_\bullet \) is a right inverse of \(L_{\max }\).

Proof

Let \(f\in \mathcal {H}\) and \((f_n)\subset \mathcal {D}\) such that \(f_n\underset{n\rightarrow \infty }{\rightarrow }f\). Then, \(G_\bullet f_n\underset{n\rightarrow \infty }{\rightarrow }G_\bullet f\) and \(L_{\max }G_\bullet f_n=f_n\underset{n\rightarrow \infty }{\rightarrow } f\). By the closedness of \(L_{\max }\), \(G_\bullet f\in \mathcal {D}(L_{\max })\) and \(L_{\max }G_\bullet f=f\). \(\square \)

3.5 Nested Pairs of Operators

In this subsection, we assume that \(L_{\min }\) and \(L_{\max }\) are two densely defined closed operators forming a nested pair,

$$\begin{aligned} L_{\min }\subset L_{\max }. \end{aligned}$$
(3.23)

Note that along with (3.23) we have a second nested pair

$$\begin{aligned} L_{\max }^\#\subset L_{\min }^\# . \end{aligned}$$
(3.24)

In this subsection, we do not assume that \(L_{\min }^\#=L_{\max }\), so that the two nested pairs can be different.

Remark 3.13

The notion of a nested pair is closely related to the notion of conjugate pair, often introduced in the literature, e.g., in [14]. Two operators AB form a conjugate pair if \(A\subset B^*\), and hence \(B\subset A^*\). The pair of operators \(L_{\min }\), \( L_{\max }^*\) is an example of a conjugate pair.

Proposition 3.14

  1. (1)

    We have a direct decomposition

    $$\begin{aligned} \mathcal {D}(L_{\max })=\mathcal {D}(L_{\min })\oplus \mathcal {N}(L_{\min }^* L_{\max }+1), \end{aligned}$$
    (3.25)

    where

    $$\begin{aligned} \mathcal {N}(L_{\min }^* L_{\max }+1) =\{u\in \mathcal {D}(L_{\max })\mid L_{\max }u\in \mathcal {D}(L_{\min }^*) \text { and } L_{\min }^*L_{\max }u+u=0\}. \end{aligned}$$
  2. (2)

    If in addition \(\mathcal {R}(L_{\min }^\#)=\mathcal {H}\), then

    $$\begin{aligned} \mathcal {D}(L_{\max })=\mathcal {D}(L_{\min })\oplus \mathcal {N}(L_{\min }^* L_{\max }). \end{aligned}$$
    (3.26)

    where \( \mathcal {N}(L_{\min }^* L_{\max }) =\{u\in \mathcal {D}(L_{\max })\mid L_{\max }u\in \mathcal {D}(L_{\min }^*) \text { and } L_{\min }^*L_{\max }u=0\}\).

Proof

  1. (1)

    We will show that

    $$\begin{aligned} \mathcal {N}(L_{\min }^* L_{\max }+1)=\mathcal {D}(L_{\min })^{\perp _L}, \end{aligned}$$
    (3.27)

    where the superscript \(\perp _L\) denotes the orthogonal complement with respect to the scalar product (3.15). In fact, \(v\in \mathcal {D}(L_{\min })^{\perp _L}\) if and only if, for all \(u\in \mathcal {D}(L_{\min })\),

    $$\begin{aligned} 0=(v|u)+(L_{\max }v|L_{\max }u)=(v|u)+(L_{\max }v|L_{\min }u). \end{aligned}$$
    (3.28)

    This means \(L_{\max }v\in \mathcal {D}(L_{\min }^*)\) and

    $$\begin{aligned} 0=(L_{\min }^*L_{\max }v+v| u). \end{aligned}$$
    (3.29)

    Since \(\mathcal {D}(L_{\min })\) is dense, we obtain

    $$\begin{aligned} 0=v+L_{\min }^*L_{\max }v. \end{aligned}$$
    (3.30)
  2. (2)

    \(\mathcal {R}(L_{\min }^\#)=\mathcal {H}\) implies that \(\mathcal {R}( L_{\min })\) is closed. Therefore,

    $$\begin{aligned} \mathcal {H}=\mathcal {R}(L_{\min })\oplus \mathcal {N}(L_{\min }^*). \end{aligned}$$
    (3.31)

    Let \(u\in \mathcal {D}(L_{\max })\). By (3.31), there exist \(v\in \mathcal {D}(L_{\min })\) and \(w\in \mathcal {N}(L_{\min }^*)\) such that

    $$\begin{aligned} L_{\max }u=L_{\min }v+w. \end{aligned}$$
    (3.32)

    Hence, \(L_{\max }(u-v)=w\). Therefore, \(u-v\in \mathcal {N}(L_{\min }^*L_{\max })\). Thus, we have proven that

    $$\begin{aligned} \mathcal {D}(L_{\max })=\mathcal {D}(L_{\min })+\mathcal {N}(L_{\min }^* L_{\max }). \end{aligned}$$
    (3.33)

    Suppose now that \(u\in \mathcal {D}(L_{\min })\cap \mathcal {N}(L_{\min }^*L_{\max })\). Thus,

    $$\begin{aligned} 0=L_{\min }^*L_{\max }u=L_{\min }^*L_{\min }u. \end{aligned}$$
    (3.34)

    This implies \(L_{\min }u=0\). But, \(\mathcal {R}(L_{\min }^\#)=\mathcal {H}\) implies \(\mathcal {N}(L_{\min })=\{0\}\). Hence, \(u=0\).

\(\square \)

Our main goal in this and the next subsection is to study closed operators \(L_\bullet \) satisfying \(L_{\min }\subset L_\bullet \subset L_{\max }\). Such operators are the subject of the following proposition.

Proposition 3.15

Let \(L_\bullet \) be a closed operator such that \(L_\bullet \subset L_{\max }\). Then, \(L_{\min }\subset L_\bullet \) if and only if \(L_\bullet ^\#\subset L_{\min }^\#\).

Let us reformulate Proposition 3.15 for invertible \(L_\bullet \), using right inverses as the basic concept:

Proposition 3.16

Let \(G_\bullet \) be a right inverse of \(L_{\max }\) and \(L_\bullet ^{-1}=G_\bullet \). Then, \(L_{\min }\subset L_\bullet \) if and only if \(G_\bullet ^\#\) is a right inverse of \(L_{\min }^\#\).

The following proposition should be compared with Proposition 3.11.

Proposition 3.17

Suppose that \(G_1\) is a right inverse of \(L_{\max }\) and \(G_1^\#\) is a right inverse of \(L_{\min }^\#\). Then, \(G_2\) is also a right inverse of \(L_{\max }\) and \(G_2^\#\) is a right inverse of \(L_{\min }^\#\) if and only if

$$\begin{aligned} G_1-G_2=K, \end{aligned}$$
(3.35)

where K is a bounded operator in \(\mathcal {H}\) with \(\mathcal {R}(K)\subset \mathcal {N}(L_{\max })\) and \(\mathcal {R}(K^\#)\subset \mathcal {N}( L_{\min }^\#)\).

In particular, let \(\mathcal {N}( L_{\max })\) and \(\mathcal {N}( L_{\min }^\#)\) be finite dimensional. Choose a basis \((u_1,\ldots ,u_n)\) of \(\mathcal {N}( L_{\max })\) and a basis \((w_1,\ldots ,w_m)\) of \(\mathcal {N}( L_{\min }^\#)\). Then,

$$\begin{aligned} G_1-G_2=\sum _{i,j}\alpha _{ij}|u_i\rangle \langle w_j| \quad \text { for some matrix }\quad [\alpha _{ij}]. \end{aligned}$$

3.6 Nested Pairs Consisting of an Operator and Its Transpose

In this subsection, we assume that the pair of densely defined closed operators \(L_{\min }\), \(L_{\max }\) satisfies

$$\begin{aligned} L_{\min }^\#=L_{\max },\quad L_{\min }\subset L_{\max }. \end{aligned}$$
(3.36)

Note that this is a special case of conditions of the previous subsection—now the two nested pairs (3.23) and (3.24) coincide with one another.

We will use the terminology related to symplectic vector spaces introduced in “Appendix A.”

Lemma 3.18

Let \(u,v\in \mathcal {D}(L_{\max })\). Consider

$$\begin{aligned} \langle L_{\max }u|v\rangle - \langle u| L_{\max }v\rangle . \end{aligned}$$
(3.37)

Then, (3.37) is zero if \(u\in \mathcal {D}(L_{\min })\) or \(v\in \mathcal {D}(L_{\min })\). If we fix u and (3.37) is zero for all \(v \in \mathcal {D}(L_{\max })\), then \(u\in \mathcal {D}(L_{\min })\). Besides,

$$\begin{aligned} |\langle L_{\max }u|v\rangle - \langle u| L_{\max }v\rangle | \le \Vert u\Vert _L\Vert v\Vert _L. \end{aligned}$$
(3.38)

If \(\phi ,\psi \in \mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })\) are represented by \(u,v\in \mathcal {D}(L_{\max })\), we set

$$\begin{aligned} {[\![}\phi |\psi {]\!]}:= \langle L_{\max }u|v\rangle - \langle u| L_{\max }v\rangle . \end{aligned}$$
(3.39)

By Lemma 3.18, \({[\![}\cdot |\cdot {]\!]}\) is a well-defined continuous symplectic form on \(\mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })\).
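For the differential expression (1.1), the form (3.37) can be made explicit by Lagrange’s identity. The following computation is our illustration, with the boundary terms understood as limits (which exist for u, v in the maximal domain, since \(uv''-u''v=(Lu)v-u(Lv)\) is then integrable):

```latex
\begin{aligned}
\langle L_{\max}u|v\rangle - \langle u|L_{\max}v\rangle
&= \int_a^b \bigl( (-u''+Vu)\,v - u\,(-v''+Vv) \bigr)\,\mathrm dx
 = \int_a^b \bigl( u v'' - u'' v \bigr)\,\mathrm dx \\
&= \lim_{x\to b}\bigl( u(x)v'(x)-u'(x)v(x) \bigr)
 - \lim_{x\to a}\bigl( u(x)v'(x)-u'(x)v(x) \bigr).
\end{aligned}
```

Thus, in the concrete case, the symplectic form on \(\mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })\) is a difference of boundary Wronskians, which is the mechanism by which boundary conditions enter.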

To every closed operator \(L_\bullet \) such that \(L_{\min }\subset L_\bullet \subset L_{\max }\), we associate a closed subspace \(\mathcal {W}_\bullet \) of \(\mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })\) by

$$\begin{aligned} \mathcal {W}_\bullet :=\mathcal {D}(L_\bullet )/\mathcal {D}(L_{\min }). \end{aligned}$$

Proposition 3.19

  1. (1)

    The above correspondence is bijective.

  2. (2)

    If \(L_\bullet \) is mapped to \(\mathcal {W}_\bullet \), then \(L_\bullet ^\#\) is mapped to \(\mathcal {W}_\bullet ^{\,\mathrm {s}\!\perp }\) (the symplectic orthogonal complement of \(\mathcal {W}_\bullet \)).

  3. (3)

    Self-transposed operators are mapped to Lagrangian subspaces.

The following result is quite striking and shows that in a certain respect the concept of self-transposedness is superior to the concept of self-adjointness. It is due to Galindo [16], with a simplified proof given by Knowles [19], see also [14]. It is a generalization of a well-known property of real Hermitian operators: they have a self-adjoint extension which commutes with the usual conjugation.

Theorem 3.20

There exists a self-transposed operator \(L_\bullet \) such that \(L_{\min }\subset L_\bullet \subset L_{\max }\). Moreover, \(\dim \mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })\) is even or infinite and

$$\begin{aligned} \dim \mathcal {D}(L_\bullet )/\mathcal {D}(L_{\min })= \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )= \frac{1}{2}\dim \mathcal {D}(L_{\max })/\mathcal {D}(L_{\min }). \end{aligned}$$
(3.40)

Proof

By Proposition A.1, there exists a Lagrangian subspace \(\mathcal {W}_\bullet \) contained in the symplectic space \(\mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })\). By Proposition A.2, it is closed. The corresponding operator \(L_\bullet \) is self-transposed (Proposition 3.19). \(\square \)

Proposition 3.21

Suppose that \(L_{\min }\subset L_\bullet \subset L_{\max }\) and

$$\begin{aligned} \dim \mathcal {D}(L_\bullet )/\mathcal {D}(L_{\min })= \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )= \frac{1}{2}\dim \mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })=1. \end{aligned}$$
(3.41)

Then, \(L_\bullet \) is self-transposed.

Proof

All one-dimensional subspaces in a two-dimensional symplectic space are Lagrangian. \(\square \)

Here is a version of Proposition 3.14 adapted to the present context:

Proposition 3.22

  1. (1)

    We have a direct decomposition

    $$\begin{aligned} \mathcal {D}(L_{\max })=\mathcal {D}(L_{\min })\oplus \mathcal {N}({\overline{L}}_{\max }L_{\max }+1). \end{aligned}$$
    (3.42)
  2. (2)

    If in addition \(\mathcal {R}(L_{\max })=\mathcal {H}\), then

    $$\begin{aligned} \mathcal {D}(L_{\max })=\mathcal {D}(L_{\min })\oplus \mathcal {N}({\overline{L}}_{\max }L_{\max }). \end{aligned}$$
    (3.43)

Here is a consequence of Proposition 3.16:

Proposition 3.23

Suppose that \(L_\bullet \) is invertible and \( L_{\min }\subset L_\bullet \subset L_{\max }\). Then, there exists a self-transposed invertible \(L_1\) such that \( L_{\min }\subset L_1\subset L_{\max }\).

Proof

\(G_\bullet :=L_\bullet ^{-1}\) is a right inverse of \(L_{\max }\). By Proposition 3.16, \(G_\bullet ^\#\) is also a right inverse of \(L_{\max }\). Therefore, also \(G_1: =(G_\bullet +G_\bullet ^\#)/2\) is a right inverse, which in addition is self-transposed. Now, \(L_1\) such that \(L_1^{-1}=G_1\) has the required properties. \(\square \)

Here is a version of Proposition 3.17 adapted to the present context:

Proposition 3.24

Suppose that both \(G_1\) and \(G_1^\#\) are right inverses of \(L_{\max }\). Then, both \(G_2\) and \(G_2^\#\) are right inverses of \(L_{\max }\) if and only if

$$\begin{aligned} G_1-G_2=K, \end{aligned}$$
(3.44)

where K and \(K^\#\) are bounded from \(\mathcal {H}\) to \(\mathcal {N}(L_{\max })\).

In particular, let \(\mathcal {N}( L_{\max })\) be finite dimensional. Choose a basis \((u_1,\ldots ,u_n)\) of \(\mathcal {N}(L_{\max })\). Then,

$$\begin{aligned} G_1-G_2=\sum _{i,j}\alpha _{ij}|u_i\rangle \langle u_j| \quad \text { for some matrix }\quad [\alpha _{ij}]. \end{aligned}$$
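A finite-dimensional analogue may help to visualize this formula: if a surjective matrix A plays the role of \(L_{\max }\), then two right inverses of A differ exactly by an operator whose range lies in \(\mathcal {N}(A)\). The matrices below are illustrative choices (the condition on \(K^\#\) has no analogue in this simplified rectangular setting).

```python
import numpy as np

# Surjective map A: R^3 -> R^2 with one-dimensional null space,
# standing in for L_max with dim N(L_max) = 1.
A = np.array([[1., 0., 2.],
              [0., 1., 3.]])
u = np.array([-2., -3., 1.])           # basis of N(A): A @ u = 0

G1 = np.array([[1., 0.],
               [0., 1.],
               [0., 0.]])              # one right inverse: A @ G1 = I
assert np.allclose(A @ G1, np.eye(2))

# Perturbing G1 by a rank-one term  alpha * |u><w|  keeps it a right inverse.
w = np.array([5., -7.])
G2 = G1 + 0.3 * np.outer(u, w)
assert np.allclose(A @ G2, np.eye(2))

# Conversely, the difference K = G1 - G2 maps everything into N(A).
K = G1 - G2
assert np.allclose(A @ K, 0.0)
```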

Theorem 3.25

Suppose that \( L_{\min }\subset L_\bullet \subset L_{\max }\).

  1. (1)

    If \(L_\bullet \) is Fredholm of index 0, then

    $$\begin{aligned} \dim \mathcal {D}(L_{\max })/\mathcal {D}(L_\bullet )=\dim \mathcal {D}(L_\bullet )/\mathcal {D}(L_{\min }) =\frac{1}{2}\dim \mathcal {D}(L_{\max })/\mathcal {D}(L_{\min }) . \end{aligned}$$
    (3.45)
  2. (2)

    If \(\dim \mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })<\infty \) and (3.45) holds, then \(L_\bullet \) is Fredholm of index 0.

Remark 3.26

Theorem  3.25 implies that if \( L_{\min }\subset L_\bullet \subset L_{\max }\) and \(\mathrm {rs}_{\mathrm {F}0}(L_\bullet )\ne \emptyset \), then \(L_\bullet \) has to satisfy (3.45) [see (3.14) for the definition of \(\mathrm {rs}_{\mathrm {F}0}\)]. Thus, the most “useful” operators (which usually means well-posed ones) are “in the middle” between the minimal and maximal operator.

4 Basic \(L^2\) Theory of 1D Schrödinger Operators

4.1 The Maximal and Minimal Operator

As before, we assume that \(V\in L_\mathrm {loc}^1]a,b[\). Recall that L is the differential expression

$$\begin{aligned} L:=-\partial ^2+V . \end{aligned}$$
(4.1)

In this section, we present basic realizations of L as closed operators on \(L^2]a,b[\).

Definition 4.1

The maximal operator\(L_{\max }\) is defined by

$$\begin{aligned} \mathcal {D}(L_{\max })&:=\big \{f\in L^2]a,b[\,\cap \, \mathrm{AC}^1]a,b[ \ \mid L f\in L^2]a,b[\big \}, \end{aligned}$$
(4.2)
$$\begin{aligned} L_{\max }f&:=Lf,\quad f\in \mathcal {D}(L_{\max }). \end{aligned}$$
(4.3)

We equip \(\mathcal {D}(L_{\max })\) with the graph norm

$$\begin{aligned} \Vert f\Vert _{L}^2:=\Vert f\Vert ^2+\Vert Lf\Vert ^2. \end{aligned}$$

Remark 4.2

Note that \(L^2]a,b[\subset L_\mathrm {loc}^1]a,b[\). Therefore, as explained in Sect. 2.3, \(f\in L_\mathrm {loc}^\infty ]a,b[\) and \(Lf\in L^2]a,b[\) imply \(f\in \mathrm{AC}^1]a,b[\). Hence, in (4.2) we can replace \(\mathrm{AC}^1]a,b[\) with \( L_\mathrm {loc}^\infty ]a,b[\) (or \(C]a,b[\), or \(C^1]a,b[\,\)).

Recall that \(AC_\mathrm {c}^1]a,b[\) denotes the space of functions in \(\mathrm{AC}^1]a,b[\) with compact support.

Definition 4.3

We set

$$\begin{aligned} \mathcal {D}(L_{\mathrm {c}}):= AC_{\mathrm {c}}^1]a,b[\,\cap \, \mathcal {D}(L_{\max }). \end{aligned}$$

Let \(L_{\mathrm {c}}\) be the restriction of \(L_{\max }\) to \(\mathcal {D}(L_{\mathrm {c}})\). Finally, \(L_{\min }\) is defined as the closure of \(L_{\mathrm {c}}\).

The next theorem is the main result of this subsection:

Theorem 4.4

The operators \(L_{\min },L_{\max }\) have the following properties.

  1. (1)

    The operators \(L_{\max }\) and \(L_{\min }\) are closed, densely defined and \({L_{\min }\subset L_{\max }}\).

  2. (2)

    \(L_{\max }^\#=L_{\min }\) and \(L_{\min }^\#=L_{\max }\).

  3. (3)

    Suppose that \(f_1,f_2\in \mathcal {D}(L_{\max })\). Then, there exist

    $$\begin{aligned} W(f_1,f_2;a)&:=\lim \limits _{d\searrow a}W(f_1,f_2;d), \end{aligned}$$
    (4.4)
    $$\begin{aligned} W(f_1,f_2;b)&:=\lim \limits _{d\nearrow b}W(f_1,f_2;d), \end{aligned}$$
    (4.5)

    and the so-called Green’s identity (the integrated form of the Lagrange identity) holds:

    $$\begin{aligned} \langle L_{\max }f_1|f_2\rangle - \langle f_1|L_{\max }f_2\rangle =W(f_1,f_2;b)-W(f_1,f_2;a). \end{aligned}$$
    (4.6)
  4. (4)

    We set \(W_d(f_1,f_2)= W(f_1,f_2;d)\) for any \(d\in [a,b]\) and \(f_1,f_2\in \mathcal {D}(L_{\max })\). Then for any \(d\in [a,b]\), the map \(W_d:\mathcal {D}(L_{\max })\times \mathcal {D}(L_{\max })\rightarrow \mathbb C\) is a continuous bilinear antisymmetric form, in particular

    $$\begin{aligned} |W_d(f_1,f_2)|\le C_d\Vert f_1\Vert _{L}\Vert f_2\Vert _{L}. \end{aligned}$$
    (4.7)
  5. (5)

    \( \mathcal {D}(L_{\min })\) coincides with

    $$\begin{aligned} \big \{f\in \mathcal {D}(L_{\max })\ \mid \ W(f,g;a)=0 \text { and } W(f,g;b)=0 \text { for all }g\in \mathcal {D}(L_{\max })\big \}. \nonumber \\ \end{aligned}$$
    (4.8)
  6. (6)

    \(\overline{L}_{\min }=\overline{L_{\min }}=L_{\max }^*\) and \(\overline{L}_{\max }=\overline{L_{\max }}=L_{\min }^*\).

One of the things we will need to prove is the density of \( \mathcal {D}(L_\mathrm {c})\) in \(L^2]a,b[\). This is easy if \(V\in L_\mathrm {loc}^2]a,b[\) (see Proposition 4.12), but under our assumptions on the potential the proof is not so trivial, because the idea of approximating an \(f\in L^2]a,b[\) by smooth functions does not work: \(\mathcal {D}(L_{\max })\) may not contain any “nice” function, as the example described below shows.

Example 4.5

Let \(V(x)=\sum _{\sigma } c_\sigma |x-\sigma |^{-1/2}\), where \(\sigma \) runs over the set of rational numbers and the coefficients \(c_\sigma >0\) satisfy \(\sum _\sigma c_\sigma <\infty \). Then, \(V\in L^1_\mathrm {loc}(\mathbb R)\), but V is not square integrable on any non-empty open set. Hence, there is no nonzero \(C^2\) function in the domain of L in \(L^2(\mathbb R)\).
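The integrability claims rest on the model singularity \(|x|^{-1/2}\): it is integrable near 0, while its square \(|x|^{-1}\) is not. A quick numerical sketch (the cutoffs below are arbitrary choices):

```python
import numpy as np

# Truncated integrals over (eps, 1), computed from the antiderivatives:
#   int_eps^1 x^(-1/2) dx = 2 - 2*sqrt(eps)  -> 2     (V is L^1 near sigma)
#   int_eps^1 x^(-1)   dx = -log(eps)        -> +inf  (V^2 is not L^1)
eps = np.logspace(-12, -2, 6)
int_abs = 2.0 - 2.0 * np.sqrt(eps)     # L^1 mass of |x|^(-1/2) on (eps, 1)
int_sq = -np.log(eps)                  # L^1 mass of its square on (eps, 1)

assert np.all(int_abs < 2.0)           # stays bounded: x^(-1/2) is integrable
assert int_sq[0] > 25.0                # blows up as eps -> 0
```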

Before proving Theorem 4.4, we first state an immediate consequence of Lemma 2.16:

Lemma 4.6

  1. (1)

    Let J be a finite interval whose closure is contained in ]ab[. Then,

    $$\begin{aligned}&\big \Vert f\big |_J\big \Vert \le C_J\Vert f\Vert _{L}, \end{aligned}$$
    (4.9)
    $$\begin{aligned}&\big \Vert f'\big |_J\big \Vert \le C_J\Vert f\Vert _{L}. \end{aligned}$$
    (4.10)
  2. (2)

    Let \(\chi \in C^\infty ]a,b[\) with \(\chi '\in C_\mathrm {c}^\infty ]a,b[\). Then,

    $$\begin{aligned} \Vert \chi f\Vert _{L}\le C_\chi \Vert f\Vert _{L}. \end{aligned}$$
    (4.11)

As in the previous section, we fix \(u ,v\in \mathrm{AC}^1]a,b[\) that span \(\mathcal {N}(L)\) and satisfy \(W(v ,u)=1\).

Our proof of Theorem 4.4 uses ideas from [25, Theorem 10.11] and [20, Sect. 17.4] and is based on an abstract result described in Lemma 3.3. The following lemma about the regular case (cf. Definition 2.3) contains the key arguments of the proof of (1) and (2) of Theorem 4.4:

Lemma 4.7

If \(V\in L^1]a,b[\) and \(a,b\in \mathbb R\), then

  1. (1)

    \(\mathcal {N}(L_{\max })=\mathcal {N}( L)\).

  2. (2)

    \(\mathcal {R}(L_{\max })=L^2]a,b[\).

  3. (3)

    \(\langle L_\mathrm {c}f|g\rangle = \langle f|L_{\max }g\rangle \), \(f\in \mathcal {D}(L_\mathrm {c})\), \(g\in \mathcal {D}(L_{\max })\).

  4. (4)

    \(\mathcal {R}(L_\mathrm {c})=L_\mathrm {c}^2]a,b[\,\cap \,\mathcal {N}( L)^\mathrm {perp}\).

  5. (5)

    \(\mathcal {R}( L_\mathrm {c})^\mathrm {perp}=\mathcal {N}( L)\).

  6. (6)

    \(\mathcal {D}(L_\mathrm {c})\) is dense in \(L^2]a,b[\).

Proof

Clearly, \(\mathcal {N}(L)=\mathrm {Span}(u,v)\subset \mathrm{AC}^1[a,b]\subset L^2]a,b[\). Therefore, \(\mathcal {N}(L)\subset \mathcal {D}(L_{\max })\). This proves (1).

Recall that in (2.20) we defined the forward Green’s operator \(G_\rightarrow \). Under the assumptions of the present lemma, it maps \(L^2]a,b[\) into \(\mathrm{AC}^1[a,b]\). Therefore, for any \(g\in L^2]a,b[\), \(\alpha ,\beta \in \mathbb C\),

$$\begin{aligned} f=\alpha u+\beta v+G_\rightarrow g \end{aligned}$$

belongs to \(\mathrm{AC}^1[a,b]\) and verifies \(Lf=g\). Therefore, \(f\in \mathcal {D}(L_{\max })\). Hence, \(L_{\max }\) is surjective. This proves (2).

To obtain (3), we integrate twice by parts. This is allowed by (2.1) and (2.2), since \(f,g\in \mathrm{AC}^1[a,b]\).

It is obvious that \(\mathcal {R}(L_\mathrm {c})\subset L_\mathrm {c}^2]a,b[\). \(\mathcal {R}( L_\mathrm {c})\subset \mathcal {N}( L_{\max })^\mathrm {perp}\) follows from (3).

Let us prove the converse inclusions. Let \(g\in L_\mathrm {c}^2]a,b[\,\cap \,\mathcal {N}( L)^\mathrm {perp}\). Set \(f:=G_\rightarrow g\). Clearly, \(Lf=g\). Using \(\int _a^b g u=\int _a^b g v=0\), we see that f has compact support. Hence, \(f\in \mathcal {D}(L_\mathrm {c})\). This proves (4).

\(L_\mathrm {c}^2]a,b[\) is dense in \(L^2]a,b[\), and \(\mathcal {N}( L)^\mathrm {perp}\) has finite codimension. Therefore, by Lemma 4.8, given below, \(L_\mathrm {c}^2]a,b[\,\cap \,\mathcal {N}( L)^\mathrm {perp}\) is dense in \(\mathcal {N}( L)^\mathrm {perp}\). This implies (5).

By applying Lemma 3.3 with \(T:=L_{\max }\) and \(S:=L_\mathrm {c}\), we obtain (6). \(\square \)

Lemma 4.8

Let \(\mathcal {H}\) be a Hilbert space and \(\mathcal {K}\) a closed subspace of finite codimension. If \(\mathcal {Z}\) is a dense subspace of \(\mathcal {H}\), then \(\mathcal {Z}\cap \mathcal {K}\) is dense in \(\mathcal {K}\).

Proof

The lemma is obvious if the codimension is 1. The general case follows by induction.

\(\square \)

Proof of Theorem 4.4

It follows from Lemma 4.7 (6) that \(\mathcal {D}(L_{\mathrm {c}})\) is dense in \(L^2]a,b[\). We have

$$\begin{aligned} L_\mathrm {c}^\#\supset L_{\max } \end{aligned}$$
(4.12)

by integration by parts, as in the proof of (3), Lemma 4.7.

Suppose that \(h,k\in L^2]a,b[\) are such that

$$\begin{aligned} \langle L_\mathrm {c}f|h\rangle =\langle f|k\rangle ,\quad f\in \mathcal {D}(L_\mathrm {c}). \end{aligned}$$
(4.13)

In other words, \(h\in \mathcal {D}(L_\mathrm {c}^\#)\) and \(L_\mathrm {c}^\#h=k\). Choose \(d\in ]a,b[\). We set \(h_d:=G_dk\), where \(G_d\) is defined in Definition 2.13. Clearly, \(Lh_d=k\). For \(f\in \mathcal {D}(L_\mathrm {c})\), set \(g:=L_\mathrm {c}f\). We can assume that \(\mathrm {supp}f\subset [a_1,b_1]\) for \(a<a_1<b_1<b\). Now,

$$\begin{aligned} \langle g|h_d\rangle = \langle L_\mathrm {c}f|h_d\rangle = \langle f|Lh_d\rangle = \langle f|k\rangle = \langle L_\mathrm {c} f|h\rangle = \langle g|h\rangle . \end{aligned}$$

By Lemma 4.7 (4) applied to \([a_1,b_1]\),

$$\begin{aligned} h=h_d+\alpha u+\beta v \end{aligned}$$
(4.14)

on \([a_1,b_1]\). But since \(a_1,b_1\) were arbitrary under the condition \(a<a_1<b_1<b\), (4.14) holds on \(]a,b[\). Hence, \(Lh=k\). Therefore, \(h\in \mathcal {D}(L_{\max })\) and \(L_{\max }h=k\). This proves that

$$\begin{aligned} L_\mathrm {c}^\#\subset L_{\max }. \end{aligned}$$
(4.15)

From (4.12) and (4.15), we see that \(L_\mathrm {c}^\#= L_{\max }\). In particular, \(L_{\max }\) is closed and \(L_\mathrm {c}\) is closable. We have

$$\begin{aligned} L_{\min }=L_\mathrm {c}^{\#\#}= L_{\max }^\# . \end{aligned}$$
(4.16)

This ends the proof of (1) and (2).

For \(f,g\in \mathcal {D}(L_{\max })\) and \(a<a_1<b_1<b\), we have

$$\begin{aligned} \int _{a_1}^{b_1}(Lf(x) g(x)-f(x) Lg(x))\mathrm dx&= \int _{a_1}^{b_1}(f(x)g'(x)-f'(x)g(x))'\mathrm dx\nonumber \\&= W(f,g;b_1)-W(f,g;a_1). \end{aligned}$$
(4.17)

The lhs of (4.17) clearly converges as \(a_1\searrow a\). Therefore, the limit (4.4) exists. Similarly, by taking \(b_1\nearrow b\) we show that the limit (4.5) exists. Taking both limits, we obtain (4.6). This proves (3).

If \(d\in ]a,b[\), then (4.7) is an immediate consequence of (4.9) and (4.10). We can rewrite (4.17) as

$$\begin{aligned} W(f,g;a)=-\int _{a}^{d}\big ((Lf)(x) g(x)-f(x) Lg(x)\big )\mathrm dx + W(f,g;d). \end{aligned}$$
(4.18)

Now, both terms on the right of (4.18) can be estimated by \(C \Vert f\Vert _{L}\Vert g\Vert _{L}\). This shows (4.7) for \(d=a\). The proof for \(d=b\) is analogous.

Let \(L_w\) be L restricted to (4.8). By (4.7), (4.8) is a closed subspace of \(\mathcal {D}( L_{\max })\). Hence, \(L_w\) is closed. Obviously, \(L_\mathrm {c}\subset L_w\). By (4.6), \(L_w\subset L_{\max }^\#\). By (2), we know that \( L_{\max }^\#=L_{\min }\). But \(L_{\min }\) is the closure of \(L_\mathrm {c}\). Hence, \(L_w=L_{\min }\). This proves (5). \(\square \)

Remark 4.9

Here is an alternative, more direct proof of the closedness of \(L_{\max }\). Let \(f_n\in \mathcal {D}(L_{\max })\) be a Cauchy sequence wrt the graph norm. This means that \(f_n\) and \(Lf_n\) are Cauchy sequences in \(L^2]a,b[\). Let \(f:=\lim \nolimits _{n\rightarrow \infty }f_n\), \(g:=\lim \nolimits _{n\rightarrow \infty }Lf_n\). Let J be an arbitrary sufficiently small closed interval in \(]a,b[\). We have

$$\begin{aligned} \Vert f_n-f_m\Vert _{L^1(J)}&\le \sqrt{|J|} \Vert f_n-f_m\Vert _{L^2(J)} , \end{aligned}$$
(4.19)
$$\begin{aligned} \Vert Lf_n-Lf_m\Vert _{L^1(J)}&\le \sqrt{|J|} \Vert Lf_n-Lf_m\Vert _{L^2(J)}. \end{aligned}$$
(4.20)

Hence, \(f_n\) satisfies the conditions of Corollary 2.17. Hence, \(f\in \mathrm{AC}^1]a,b[\) and \(g=Lf\). Hence, \(f\in \mathcal {D}(L_{\max })\) and it is the limit of \(f_n\) in the sense of the graph norm. Therefore, \(\mathcal {D}(L_{\max })\) is complete. Hence, \(L_{\max }\) and \(L_{\min }\) are closed.

4.2 Smooth Functions in the Domain of \(L_{\max }\)

We point out a certain pathology of the operators \(L_{\max }\) and \(L_{\min }\) if V is only locally integrable.

Lemma 4.10

  1. (1)

    The imaginary part of V is locally square integrable if and only if \(\mathcal {D}(L_{\mathrm {c}})\) is stable under conjugation; in this case, \(\mathcal {D}(L_{\min })\) and \(\mathcal {D}(L_{\max })\) are also stable under conjugation.

  2. (2)

    If the imaginary part of V is not square integrable on any open set, then for \(f\in \mathcal {D}(L_{\max })\) we have \(\overline{f}\in \mathcal {D}(L_{\max })\) only if \(f=0\). In other words, \(\mathcal {D}(L_{\max })\cap \mathcal {D}(\overline{L}_{\max })=\{0\}\). Hence, \(\mathcal {D}(L_{\max })\) does not contain any nonzero real function.

Proof

  1. (1):

    Write \(Lf=-f''+V_1f+iV_2f\), where \(V=V_1+iV_2\) with \(V_1,V_2\) real. If \(V_2\in L^2_\mathrm {loc}]a,b[\) and \(f\in \mathrm{AC}^1_{\mathrm {c}}]a,b[\), then \(V_2f\in L^2]a,b[\), so \(-f''+Vf\in L^2]a,b[\) if and only if \(-f''+V_1f\in L^2]a,b[\). Since \(V_1\) is real, the latter condition also gives \(-\overline{f}''+V_1\overline{f}\in L^2]a,b[\), hence \(-\overline{f}''+V\overline{f}\in L^2]a,b[\). Thus, \(\mathcal {D}(L_{\mathrm {c}})\) is stable under conjugation. The corresponding assertion concerning \(\mathcal {D}(L_{\min })\) follows by taking the closure, and that concerning \(\mathcal {D}(L_{\max })\) follows by taking the transpose.

    Conversely, assume that \(\mathcal {D}(L_{\mathrm {c}})\) is stable under conjugation and let \(x_0\in ]a,b[\). Then, there is \(f\in \mathcal {D}(L_{\mathrm {c}})\) such that \(f(x_0)\ne 0\), and we may assume that its real part \(g=(f+\overline{f})/2\) does not vanish on a neighborhood of \(x_0\). Then, \(g\in \mathcal {D}(L_{\mathrm {c}})\), hence \(-g''+V_1g+iV_2g\in L^2]a,b[\). Its imaginary part \(V_2g\) must then also belong to \(L^2]a,b[\), hence \(V_2\) is square integrable on a neighborhood of \(x_0\).

  2. (2):

    Assume now that \(V_2\) is not square integrable on any open set. If \(f\in \mathrm{AC}^1\) is real, then \(-f''+Vf\in L^2\) if and only if both \(-f''+V_1f\in L^2\) and \(V_2f\in L^2\), and the second condition forces \(f=0\). Finally, if \(f\in \mathcal {D}(L_{\max })\) and \(\overline{f}\in \mathcal {D}(L_{\max })\), then the real functions \(f+\overline{f}\) and \(i(f-\overline{f})\) are zero by the above argument, hence \(f=0\).

\(\square \)

Remark 4.11

Clearly, \(L_{\min }^*=\overline{L}_{\max }\). Thus, by Lemma 4.10 (2), if the imaginary part of V is not square integrable on any open set, then \(\mathcal {D}(L_{\min })\cap \mathcal {D}(L_{\min }^*)=\{0\}\). On the other hand, if the imaginary part of V is locally square integrable, then \(\mathcal {D}(L_{\min })\subset \mathcal {D}(L_{\min }^*)\).

If \(V\in L_\mathrm {loc}^2\), many things simplify:

Proposition 4.12

If \(V\in L^2_\mathrm {loc}]a,b[\), then \(C_\mathrm {c}^\infty ]a,b[\) is a dense subspace of \(\mathcal {D}(L_{\min })\).

Proof

Clearly, \(C_\mathrm {c}^\infty ]a,b[\,\subset \mathcal {D}(L_\mathrm {c})\). Let \(f\in \mathcal {D}(L_{\mathrm {c}})\). Since \(V\in L^2_\mathrm {loc}]a,b[\), we have \(Vf\in L^2]a,b[\), so \(Lf\in L^2]a,b[\) if and only if \(f''\in L^2]a,b[\). Fix some \(\theta \in C_\mathrm {c}^\infty (\mathbb R)\) with \(\int \theta =1\) and let \(\theta _n(x):=n\theta (nx )\) with \(n\ge 1\). Then, for n large, \(f_n:=\theta _n*f\in C_\mathrm {c}^\infty ]a,b[\) and has support in a fixed small neighborhood of \(\mathrm {supp}f\). Moreover, \(f_n\rightarrow f\) in \(C^1_\mathrm {c}]a,b[\); in particular, \(f_n\rightarrow f\) uniformly with supports in a fixed compact set, which clearly implies \(Vf_n\rightarrow Vf\) in \(L^2]a,b[\). Moreover, \(f_n''\rightarrow f''\) in \(L^2]a,b[\), hence \(f_n\rightarrow f\) in the graph norm. \(\square \)
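The mollification argument can be illustrated numerically. The grid, the bump \(\theta \), and the test function below are arbitrary choices; the point is that \(\theta _n*f\rightarrow f\) uniformly as \(n\rightarrow \infty \).

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 4001)       # symmetric grid, x[2000] = 0
h = x[1] - x[0]

def bump(t):
    """The standard C_c^infty bump, supported in [-1, 1]."""
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside]**2))
    return out

f = bump(x)                            # a compactly supported smooth function

def mollify(f, n):
    theta_n = n * bump(n * x)          # theta_n(x) = n * theta(n x)
    theta_n /= theta_n.sum() * h       # normalize: int theta_n = 1
    return np.convolve(f, theta_n, mode='same') * h

errs = [np.max(np.abs(mollify(f, n) - f)) for n in (4, 16, 64)]
assert errs[0] > errs[1] > errs[2]     # uniform error shrinks as n grows
```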

4.3 Closed Operators Contained in \(L_{\max }\)

If \(\mathcal {D}(L_\bullet )\) is a subspace of \(\mathcal {D}(L_{\max })\) closed in the \(\Vert \cdot \Vert _{L}\) norm, then the operator

$$\begin{aligned} L_\bullet :=L_{\max }\Big |_{\mathcal {D}(L_\bullet )} \end{aligned}$$
(4.21)

is closed and contained in \(L_{\max }\). We can call such an operator \(L_\bullet \) a closed realization of L.

We will be mostly interested in operators \(L_\bullet \) that satisfy

$$\begin{aligned} \mathcal {D}(L_{\min })\subset \mathcal {D}(L_\bullet ) \subset \mathcal {D}(L_{\max }) \end{aligned}$$
(4.22)

so that

$$\begin{aligned} L_{\min }\subset L_\bullet \subset L_{\max }. \end{aligned}$$
(4.23)

They are automatically densely defined. Note that we can use the theory developed in Sects. 3.5 and 3.6. In particular, as described in Proposition 3.15, one can easily check if a realization of L contains \(L_{\min }\) with the help of the following criterion:

Proposition 4.13

Suppose that \(L_\bullet \) is a closed densely defined operator contained in \(L_{\max }\). Then, \(L_\bullet ^\#\) is contained in \(L_{\max }\) if and only if \(L_{\min }\subset L_\bullet \). In particular, if \(L_\bullet ^\#=L_\bullet \), then \(L_{\min }\subset L_\bullet \).

The most obvious examples of such operators are given by one-sided boundary conditions:

Definition 4.14

Set

$$\begin{aligned} \mathcal {D}(L_a)&:=\{f\in \mathcal {D}(L_{\max })\ \mid \ W(f,g;a)=0 \text { for all }g\in \mathcal {D}(L_{\max })\}, \end{aligned}$$
(4.24)
$$\begin{aligned} \mathcal {D}(L_b)&:=\{f\in \mathcal {D}(L_{\max })\ \mid \ W(f,g;b)=0 \text { for all }g\in \mathcal {D}(L_{\max })\}. \end{aligned}$$
(4.25)

Let \(L_a\), resp., \(L_b\) be \(L_{\max }\) restricted to \(\mathcal {D}(L_a)\), resp., \(\mathcal {D}(L_b)\).

Proposition 4.15

\(L_a\) and \(L_b\) are closed and densely defined operators satisfying

$$\begin{aligned} L_a^\#=L_b,&\quad L_b^\#=L_a, \end{aligned}$$
(4.26)
$$\begin{aligned} L_{\min }\subset L_a\subset L_{\max },&\quad L_{\min }\subset L_b\subset L_{\max }. \end{aligned}$$
(4.27)

5 Boundary Conditions

5.1 Regular Endpoints

Recall that the endpoint a is regular if it is finite and V is integrable close to a.

Proposition 5.1

If L is regular at a, then any function \(f\in \mathcal {D}(L_{\max })\) extends to a function of class \(C^1\) on the left-closed interval \([a,b[\), hence f(a) and \(f'(a)\) are well-defined, and for \(f,g\in \mathcal {D}(L_{\max })\) we have \(W_a(f,g)=f(a)g'(a)-f'(a)g(a)\). Similarly, if L is regular at b. Thus, if L is regular, then \(\mathcal {D}(L_{\max })\subset C^1[a,b]\) and Green’s identity (4.6) has the classical form

$$\begin{aligned} \langle {L_{\max }f_1}|{f_2}\rangle - \langle {f_1}|{L_{\max }f_2}\rangle= & {} \big (f_1(b)f_2'(b)-f_1'(b)f_2(b)\big )\\&- \big (f_1(a)f_2'(a)-f_1'(a)f_2(a)\big ). \end{aligned}$$

Thus, if L is a regular operator, then we have four continuous linear functionals on \(\mathcal {D}(L_{\max })\):

$$\begin{aligned} f\mapsto f(a),&\quad f\mapsto f'(a), \end{aligned}$$
(5.1)
$$\begin{aligned} f\mapsto f(b),&\quad f\mapsto f'(b), \end{aligned}$$
(5.2)

which give a convenient description of closed operators \(L_\bullet \) such that \(L_{\min }\subset L_\bullet \subset L_{\max }\). In particular, \(\mathcal {D}(L_{\min })\) is the intersection of the kernels of (5.1) and (5.2), \(\mathcal {D}(L_a)\) is the intersection of the kernels of (5.1) and \(\mathcal {D}(L_b)\) is the intersection of the kernels of (5.2).
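In the regular case, these boundary conditions are easy to realize numerically. The following finite-difference sketch imposes the Dirichlet conditions \(f(a)=f(b)=0\) (the kernels of the functionals \(f\mapsto f(a)\), \(f\mapsto f(b)\)) and solves \(-f''+Vf=g\); the potential and right-hand side are illustrative choices, not taken from the paper.

```python
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n + 2)       # grid on [a, b] = [0, 1], both endpoints
h = x[1] - x[0]
V = 1.0 + x[1:-1]                      # any potential integrable up to a, b
g = np.ones(n)

# Second-difference matrix on the interior points; the Dirichlet conditions
# are encoded by dropping the boundary values from the unknowns.
main = 2.0 / h**2 + V
off = -np.ones(n - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

f = np.linalg.solve(A, g)
f_full = np.concatenate(([0.0], f, [0.0]))   # reinstate f(0) = f(1) = 0

# The discrete ODE -f'' + V f = g holds at every interior grid point.
residual = (-(f_full[2:] - 2*f_full[1:-1] + f_full[:-2]) / h**2
            + V * f_full[1:-1] - g)
assert np.max(np.abs(residual)) < 1e-8
```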

5.2 Boundary Functionals

It is possible to extend the strategy described above to the case of an arbitrary L by using an abstract version of the notion of boundary value of a function. We shall do it in this section.

The abstract theory of boundary value functionals goes back to J. W. Calkin’s thesis [6] who used it for the classification of self-adjoint extensions of Hermitian operators. The theory was adapted to Hermitian differential operators of any order by Naimark [20] and to operators with complex coefficients of class \(C^\infty \) by Dunford and Schwartz in [12, ch. XIII]. In this section, we shall use this technique in the case of second-order operators with potentials which are only locally integrable: this loss of regularity is a problem for some arguments in [12].

Recall that \(\mathcal {D}(L_{\max })\) is equipped with the Hilbert space structure associated with the norm \(\Vert f\Vert _L=\sqrt{\Vert f\Vert ^2+\Vert Lf\Vert ^2}\). Following [12, §XIII.2], we introduce the following notions.

Definition 5.2

A boundary functional for L is any linear continuous form on \(\mathcal {D}(L_{\max })\) which vanishes on \(\mathcal {D}(L_{\min })\). A boundary functional at a is a boundary functional \(\phi \) such that \(\phi (f)=0\) for all \(f\in \mathcal {D}(L_{\max })\) with \(f(x)=0\) near a; boundary functionals at b are defined similarly. \(\mathcal {B}(L)\) is the set of boundary functionals for L and \(\mathcal {B}_a(L),\mathcal {B}_b(L)\) the subsets of boundary functionals at a and b.

\(\mathcal {B}(L)\) is a closed linear subspace of the topological dual \(\mathcal {D}(L_{\max })'\) of \(\mathcal {D}(L_{\max })\), and \(\mathcal {B}_a(L),\mathcal {B}_b(L)\) are closed linear subspaces of \(\mathcal {B}(L)\). By using a partition of unity on \(]a,b[\), it is easy to prove that

$$\begin{aligned} \mathcal {B}(L)=\mathcal {B}_a(L)\oplus \mathcal {B}_b(L), \end{aligned}$$
(5.3)

a topological direct sum.

Definition 5.3

We define

$$\begin{aligned} \hbox {the boundary index for }L\hbox { at }a,&\quad \nu _a(L):=\dim \mathcal {B}_a(L),\\ \hbox {the boundary index for }L\hbox { at }b,&\quad \nu _b(L):=\dim \mathcal {B}_b(L),\\ \hbox {and the total boundary index for }L,&\quad \nu (L):=\dim \mathcal {B}(L)=\nu _a(L)+\nu _b(L). \end{aligned}$$

By definition, the subspace \(\mathcal {B}(L)\subset \mathcal {D}(L_{\max })'\) is the polar set of the closed subspace \(\mathcal {D}(L_{\min })\) of \(\mathcal {D}(L_{\max })\). Hence, it is canonically identified with the dual space of \(\mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })\):

$$\begin{aligned} \mathcal {B}(L)=\big (\mathcal {D}(L_{\max })/\mathcal {D}(L_{\min })\big )' . \end{aligned}$$
(5.4)

Clearly, one may also define \(\mathcal {B}_a(L)\) as the set of continuous linear forms on \(\mathcal {D}(L_{\max })\) which vanish on the closed subspace \(\mathcal {D}(L_a)\), and similarly for \(\mathcal {B}_b(L)\). Thus,

$$\begin{aligned} \mathcal {B}_a(L)=\big (\mathcal {D}(L_{\max })/\mathcal {D}(L_a)\big )' . \end{aligned}$$
(5.5)

Definition 5.4

For each \(f\in \mathcal {D}(L_{\max })\) and \(x\in [a,b]\), we introduce the functional

$$\begin{aligned} \mathbf {f}_x:\mathcal {D}(L_{\max })\rightarrow \mathbb C\quad \text {defined by}\quad \mathbf {f}_x (g)=W_x(f,g). \end{aligned}$$
(5.6)

By Theorem 4.4 it is a well-defined linear continuous form on \(\mathcal {D}(L_{\max })\).

Remember that if \(x\in ]a,b[\), then we can write

$$\begin{aligned} \mathbf {f}_x(g)=f(x)g'(x)-f'(x)g(x)=W_x(f,g) \quad \forall g\in C^1]a,b[ . \end{aligned}$$
(5.7)

If \(x=a\), in general we cannot write (5.7) (unless a is regular). However, we know that the functional (5.6) depends weakly continuously on \(x\in [a,b]\). Thus, in general

$$\begin{aligned} \mathrm {w}-\lim _{x\rightarrow a}\mathbf {f}_x=\mathbf {f}_a. \end{aligned}$$
(5.8)

It is easy to see that \(\mathbf {f}_a\in \mathcal {B}_a\), cf. (4.24) for example. We shall prove below that any boundary functional at the endpoint a is of this form.

Theorem 5.5

  1. (i)

    \(f\mapsto \mathbf {f}_a\) is a linear surjective map \(\mathcal {D}(L_{\max })\rightarrow \mathcal {B}_a(L)\).

  2. (ii)

    \(W_a(f,g)=0\) for all \(f,g\in \mathcal {D}(L_{\max })\) if and only if \(\mathcal {B}_a(L)=\{0\}\).

  3. (iii)

    \(W_a(f,g)\ne 0\) if and only if the functionals \(\mathbf {f}_a, \mathbf {g}_a\) are linearly independent.

  4. (iv)

    If \(W_a(f,g)\ne 0\), then \(\{\mathbf {f}_a, \mathbf {g}_a\}\) is a basis in \(\mathcal {B}_a(L)\); then \(\forall h\in \mathcal {D}(L_{\max })\), we have

    $$\begin{aligned} \mathbf {h}_a=cW_a(g,h)\mathbf {f}_a+cW_a(h,f) \mathbf {g}_a \quad \text { with } c=-1/W_a(f,g). \end{aligned}$$
    (5.9)

Proof

Let \(\mathcal {W}_a\) be the set of linear forms of the form \(\mathbf {f}_a\); this is a vector subspace of \(\mathcal {B}_a(L)\), and we shall prove later that \(\mathcal {W}_a=\mathcal {B}_a(L)\). For the moment, note that \(\mathcal {W}_a\) separates the points of \(\mathcal {Y}_a:=\mathcal {D}(L_{\max })/\mathcal {D}(L_a)\), i.e., we have \(W_a(f,g)=0\) for all f if and only if \(g\in \mathcal {D}(L_a)\), cf. (4.8) and (4.24). On the other hand, (5.5) implies that \(\mathcal {B}_a(L)=\{0\}\) is equivalent to \(\mathcal {D}(L_{\max })=\mathcal {D}(L_a)\), which in turn is equivalent to \(W_a(f,g)=0\) for all \(f,g\in \mathcal {D}(L_{\max })\) by (4.24). This proves (ii).

For the rest of the proof, we need Kodaira’s identity [21, pp. 151–152], namely: if \(f,g,h,k\) are \(C^1\) functions on \(]a,b[\), then

$$\begin{aligned} W(f,g)W(h,k) +W(g,h) W(f,k)+W(h,f) W(g,k)=0 , \end{aligned}$$
(5.10)

with the usual definition \(W(f,g)=fg'-f'g\). The relation obviously holds pointwise on \(]a,b[\). If \(f,g,h,k\in \mathcal {D}(L_{\max })\), then the relation extends to \([a,b]\), in particular

$$\begin{aligned} W_a(f,g)W_a(h,k) +W_a(g,h) W_a(f,k)+W_a(h,f) W_a(g,k)=0, \end{aligned}$$
(5.11)

and similarly at b. This implies (5.9) if \(W_a(f,g)\ne 0\), from which it follows that \(\{\mathbf {f}_a, \mathbf {g}_a\}\) is a basis in the vector space \(\mathcal {W}_a\); in particular, \(\mathcal {W}_a\) has dimension 2. But \(\mathcal {W}_a\subset \mathcal {Y}_a'\) separates the points of \(\mathcal {Y}_a\), hence \(\mathcal {W}_a=\mathcal {Y}_a'=\mathcal {B}_a(L)\), which proves the surjectivity of the map \(f\mapsto \mathbf {f}_a\). This proves (i) and (iv) completely and also one implication in (iii). It remains to prove that \(\mathbf {f}_a, \mathbf {g}_a\) are linearly dependent if \(W_a(f,g)=0\).

We prove this with a different notation, which allows us to use what we have already shown. Let f be such that \(\mathbf {f}_a\ne 0\). Then, \(\mathbf {f}_a\) is part of a basis in \(\mathcal {W}_a=\mathcal {B}_a(L)\); hence, there is g such that \(\{\mathbf {f}_a, \mathbf {g}_a\}\) is a basis in \(\mathcal {B}_a(L)\). Then, \(W_a(f,g)\ne 0\) and we have (5.9). Thus, if \(W_a(h,f)=0\), then \(\mathbf {h}_a=cW_a(g,h)\mathbf {f}_a\), so \(\mathbf {h}_a,\mathbf {f}_a\) are linearly dependent. \(\square \)
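Kodaira's identity (5.10) is a purely algebraic identity in the eight values \(f,f',g,g',h,h',k,k'\), so it can be checked symbolically (a small sympy sketch):

```python
import sympy as sp

# Treat the values of f, g, h, k and their derivatives at a point as
# independent symbols (f1 stands for f', etc.).
f, f1, g, g1, h, h1, k, k1 = sp.symbols("f f1 g g1 h h1 k k1")

def W(u, u1, v, v1):
    """Pointwise Wronskian W(u, v) = u v' - u' v."""
    return u * v1 - u1 * v

expr = (W(f, f1, g, g1) * W(h, h1, k, k1)
        + W(g, g1, h, h1) * W(f, f1, k, k1)
        + W(h, h1, f, f1) * W(g, g1, k, k1))

assert sp.expand(expr) == 0            # the three products cancel identically
```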

The space \(\mathcal {B}_a\) is naturally a symplectic space. In fact, if \(\mathcal {B}_a\) is nontrivial, then we can find k, h with \(W_a(k,h)\ne 0\). By the Kodaira identity,

$$\begin{aligned} W_a(f,g)&=\frac{-W_a(f,k)W_a(g,h)+W_a(f,h)W_a(g,k)}{W_a(h,k)}\nonumber \\&=\frac{-\mathbf {f}_a(k)\mathbf {g}_a(h)+\mathbf {f}_a(h)\mathbf {g}_a(k)}{W_a(h,k)}. \end{aligned}$$
(5.12)

Thus, if we set for \(\phi ,\psi \in \mathcal {B}_a\) with \(\mathbf {f}_a=\phi \), \(\mathbf {g}_a=\psi \),

$$\begin{aligned} {[\![}\phi |\psi {]\!]}_a:=W_a(f,g), \end{aligned}$$
(5.13)

then \({[\![}\cdot |\cdot {]\!]}_a\) is a well-defined symplectic form on \(\mathcal {B}_a\). Moreover, \(f\mapsto \mathbf {f}_a\) maps the form \(W_a\) onto \({[\![}\cdot |\cdot {]\!]}_a\). If \({[\![}\phi |\psi {]\!]}_a=1\), then by the Kodaira identity

$$\begin{aligned} W_a(h,k)=\phi (h)\psi (k)-\psi (h)\phi (k). \end{aligned}$$
(5.14)

In the literature, boundary functionals are usually described using the notion of boundary triplet. Let us make a comment on this concept. Suppose, for definiteness, that \(\nu _a=\nu _b=2\). Choose bases

$$\begin{aligned} \phi _a,\psi _a \text { of } \mathcal {B}_a \text { and } \phi _b,\psi _b \text { of } \mathcal {B}_b \end{aligned}$$
(5.15)

such that \({[\![}\phi _a|\psi _a{]\!]}_a={[\![}\phi _b|\psi _b{]\!]}_b=1\). We have the maps

$$\begin{aligned} \mathcal {D}(L_{\max })\ni f\mapsto \phi ( f)&:=\big (\phi _a(f),\phi _b(f)\big )\in \mathbb {C}^2;\end{aligned}$$
(5.16)
$$\begin{aligned} \mathcal {D}(L_{\max })\ni f\mapsto \psi ( f)&:=\big (\psi _a(f),\psi _b(f)\big )\in \mathbb {C}^2. \end{aligned}$$
(5.17)

Then, we can rewrite Green’s formula (4.6) as

$$\begin{aligned} \langle L_{\max }f|g\rangle - \langle f|L_{\max }g\rangle = \langle \psi ( f)|\phi ( g)\rangle - \langle \phi ( f)|\psi ( g)\rangle . \end{aligned}$$
(5.18)

The triplet \((\mathbb {C}^2,\phi ,\psi )\) is often called a boundary triplet in the literature, see, e.g., [2] and references therein. It can be used to characterize operators between \(L_{\min }\) and \(L_{\max }\).

Thus, a boundary triplet is essentially a choice of a basis (5.15) in the space of boundary functionals. Such a choice is often natural: in particular, this is the case of regular boundary conditions, see (5.1), (5.2). In our paper, we consider rather general potentials for which there may be no natural choice for (5.15). Therefore, we do not use the boundary triplet formalism.

5.3 Classification of Endpoints and of Realizations of L

The next fact is a consequence of Theorem 5.5. One may think of the assertion “\(\nu _a(L)\) can only take the values 0 or 2” as a version of Weyl’s dichotomy, cf. Sect. 6.2.

Theorem 5.6

\(\nu _a(L)\) can be 0 or 2: we have \(\nu _a(L)=0 \Leftrightarrow W_a=0\) and \(\nu _a(L)=2\Leftrightarrow W_a\ne 0\). Similarly for \(\nu _b(L)\), hence \(\nu (L)\in \{0,2,4\}\).

Remark 5.7

According to the terminology in [12], we might say that L has no boundary values at a if \(\nu _a(L)=0\) and that L has two boundary values at a if \(\nu _a(L)=2\).

Example 5.8

If L is regular at the endpoint a, then \(\nu _a(L)=2\). It is clear that \(f\mapsto f(a)\) and \(f\mapsto f'(a)\) are linearly independent, and Theorem 5.6 implies that they form a basis in \(\mathcal {B}_a(L)\).

Example 5.9

If L is semiregular at a, then we also have \(\nu _a(L)=2\). Indeed, \(\dim \mathcal {U}_a(\lambda )=2\) (Proposition 2.5) and this implies \(\nu _a(L)=2\) by (the easy part of) Theorem 6.15. We also have the distinguished boundary functional \(f\mapsto f(a)\), as shown in Proposition 2.5 (2). If u is the solution of \(Lu=0\) satisfying \(u(a)=0\), \(u'(a)=1\), whose existence is guaranteed by Proposition 2.5 (3), then this functional coincides with \(\mathbf {u}_a\). However, in general, we do not have another, linearly independent distinguished boundary functional.

As a consequence of Theorem 5.6, we get the following classification of 1d Schrödinger operators in terms of the boundary functionals.

  1. (1)

    \(\nu _a(L)=\nu _b(L)=0\). This is equivalent to \(L_{\min }=L_a=L_b=L_{\max }\).

  2. (2)

    \(\nu _a(L)=0\), \(\nu _b(L)=2\). Then, \(\mathcal {D}(L_{\min })\) is a subspace of codimension 2 in \(\mathcal {D}(L_{\max })\). This is equivalent to \(L_a=L_{\max }\), and to \(L_{\min }=L_b\).

  3. (3)

    \(\nu _a(L)=2\), \(\nu _b(L)=0\). Then, \(\mathcal {D}(L_{\min })\) is a subspace of codimension 2 in \(\mathcal {D}(L_{\max })\). This is equivalent to \(L_b=L_{\max }\), and to \(L_{\min }=L_a\).

  4. (4)

    \(\nu _a(L)=\nu _b(L)=2\). Then, \(\mathcal {D}(L_{\min })\) is a subspace of codimension 4 in \(\mathcal {D}(L_{\max })\).

In case (2), the operators \(L_\bullet \) with \(L_{\min }\subsetneq L_\bullet \subsetneq L_{\max }\) are defined by nonzero boundary value functionals \(\phi \) at b: \(\mathcal {D}(L_\bullet )=\{f\in \mathcal {D}(L_{\max }) \mid \phi (f)=0\}\). Similarly in case (3), with functionals at a.

Consider now the case (4). The domain of a nontrivial realization \(L_\bullet \) can then be of codimension 1, 2, or 3 in \(\mathcal {D}(L_{\max })\). We will see that realizations of codimension 2 are the most important.

Each realization of L extending \(L_{\min }\) is defined by a subspace \(\mathcal {C}_\bullet \subset \mathcal {B}_a\oplus \mathcal {B}_b\):

$$\begin{aligned} \mathcal {D}(L_\bullet ):=\{f\in \mathcal {D}(L_{\max })\ \mid \phi (f)=0,\quad \phi \in \mathcal {C}_\bullet \} \end{aligned}$$
(5.19)

The space \(\mathcal {C}_\bullet \) is called the space of boundary conditions for \(L_\bullet \). The dimension of \(\mathcal {C}_\bullet \) coincides with the codimension of \(\mathcal {D}(L_\bullet )\) in \(\mathcal {D}(L_{\max })\).

Definition 5.10

We say that the boundary conditions \(\mathcal {C}_\bullet \) are separated if

$$\begin{aligned} \mathcal {C}_\bullet =\mathcal {C}_\bullet \cap \mathcal {B}_a\,\oplus \,\mathcal {C}_\bullet \cap \mathcal {B}_b. \end{aligned}$$
(5.20)

For instance, \(L_a\) and \(L_b\) are given by separated boundary conditions \(\mathcal {B}_a\), resp., \(\mathcal {B}_b\).

Definition 5.11

Let \(\phi \in \mathcal {B}_a\) and \(\psi \in \mathcal {B}_b\). Then, the realization of L with the boundary condition \(\mathbb C\phi \oplus \mathbb C\psi \) will be denoted \(L_{\phi ,\psi }\).

Clearly, \(L_{\phi ,\psi }\) has separated boundary conditions and depends only on the complex lines determined by \(\phi \) and \(\psi \). More explicitly,

$$\begin{aligned} \mathcal {D}(L_{\phi ,\psi })=\{f\in \mathcal {D}(L_{\max })\mid \phi (f)=\psi (f)=0\}. \end{aligned}$$

Recall that if \(\phi \ne 0\) then \(\phi (f)=0 \Leftrightarrow \exists c(f)\in \mathbb C\) such that \(\mathbf {f}_a=c(f)\phi \). We abbreviate \(L_\phi =L_{\phi ,0}\) if \(\psi =0\) and define similarly \(L_\psi \) if \(\phi =0\). Thus, \(L_\phi \) involves no boundary condition at b:

$$\begin{aligned} \mathcal {D}(L_{\phi })&=\{f\in \mathcal {D}(L_{\max })\mid \phi (f)=0\}\\&=\{f\in \mathcal {D}(L_{\max })\mid \exists c(f) \text { such that } \mathbf {f}_a=c(f)\phi \} \end{aligned}$$
(5.21)

where the second equality holds if \(\phi \ne 0\). Note that \(L_{0,0}=L_{\max }\).

5.4 Properties of Boundary Functionals

The next proposition is a version of [12, XIII.2.27] in our context.

Proposition 5.12

If \(\phi \in \mathcal {B}_a(L)\), then there are continuous functions \(\alpha ,\beta :\,]a,b[\rightarrow \mathbb C\) such that

$$\begin{aligned} \phi (f)=\lim _{x\rightarrow a}\big (\alpha (x)f(x)+\beta (x)f'(x) \big ) \quad \forall f\in \mathcal {D}(L_{\max }). \end{aligned}$$

Reciprocally, if \(\alpha ,\beta \) are complex functions on ]a,b[ and \(\lim _{x\rightarrow a}\big (\alpha (x)f(x)+\beta (x)f'(x) \big )=:\phi (f)\) exists \(\forall f\in \mathcal {D}(L_{\max })\), then \(\phi \in \mathcal {B}_a(L)\).

Proof

The first assertion follows from Theorem 5.5-(i) and relations (5.8), (5.7), while the second one is a consequence of the Banach–Steinhaus theorem.

\(\square \)

Recall that for \(d\in [a,b]\) the symbol \(L^{a,d}\) denotes the operator \(-\partial ^2+V\) on the interval ]a,d[.

Lemma 5.13

Let \(d\in ]a,b[\). Then,

$$\begin{aligned} \dim \mathcal {B}_a(L)=\dim \mathcal {B}_a(L^{a,d}) . \end{aligned}$$
(5.22)

Proof

Since d is a regular endpoint for \(L^{a,d}\), the maximal operator \(L^{a,d}_{\max }\) associated with \(L^{a,d}\) has the property \(\mathcal {D}(L^{a,d}_{\max })\subset C^1]a,d]\). Thus, the restriction map \(R:f\mapsto f\big |_{]a,d[}\) is a surjective map \(\mathcal {D}(L_{\max })\rightarrow \mathcal {D}(L^{a,d}_{\max })\) such that \(R\mathcal {D}(L_a)=\mathcal {D}(L^{a,d}_a)\). If \(\phi \) is a boundary value functional at a for \(L^{a,d}\), then clearly \(\phi \circ R\) is a boundary value functional at a for L, and the map \(\phi \mapsto \phi \circ R\) is a bijective map \(\mathcal {B}_a(L^{a,d})\rightarrow \mathcal {B}_a(L)\). \(\square \)

We note that the space \(\mathcal {B}(L)\) and its subspaces \(\mathcal {B}_a(L),\mathcal {B}_b(L)\) depend on L only through the domains \(\mathcal {D}(L_{\max })\) and \(\mathcal {D}(L_{\min })\). So, in order to compute them one can sometimes change the potential and consider an operator \(L^U:=-\partial ^2+U\) instead of \(L:=-\partial ^2+V\). This is especially useful if U is real: for example, U could be the real part of V, if its imaginary part is bounded.

Proposition 5.14

Let \(U:\,]a,b[\,\rightarrow \mathbb C\) be measurable and such that \(\Vert (U-V)f\Vert \le \alpha \Vert Lf\Vert +\beta \Vert f\Vert \) for some real numbers \(\alpha ,\beta \) with \(\alpha <1\) and all \(f\in \mathcal {D}(L_{\max })\). Then, \(\mathcal {D}(L_{\max })=\mathcal {D}(L^U_{\max })\) and \(\mathcal {D}(L_{\min })=\mathcal {D}(L^U_{\min })\). Hence, \(\mathcal {B}(L)=\mathcal {B}(L^U)\) and

$$\begin{aligned} \nu _a(L)=\nu _a(L^U), \quad \nu _b(L)=\nu _b(L^U) . \end{aligned}$$
(5.23)

Proof

We have

$$\begin{aligned} (1-\alpha )\Vert Lf\Vert -\beta \Vert f\Vert \le \Vert L^Uf\Vert \le (1+\alpha )\Vert Lf\Vert +\beta \Vert f\Vert \end{aligned}$$

so the norms \(\Vert \cdot \Vert _L\) and \(\Vert \cdot \Vert _{L^U}\) are equivalent. Then, we use (5.4). \(\square \)

5.5 Infinite Endpoints

Suppose now that our interval is right-infinite. We will show that if the potential stays bounded in average at infinity, then all elements of the maximal domain, together with their derivatives, converge to zero at \(\infty \); in particular, the Wronskian of any two such elements converges to zero.

Proposition 5.15

Suppose that \(b=\infty \) and

$$\begin{aligned} \limsup _{c\rightarrow \infty }\int _{c}^{c+1}|V(x)|\mathrm dx<\infty . \end{aligned}$$
(5.24)

Then,

$$\begin{aligned} f\in \mathcal {D}(L_{\max })\quad \Rightarrow \quad \lim _{x\rightarrow \infty }f(x)=0,\ \lim _{x\rightarrow \infty }f'(x)=0. \end{aligned}$$
(5.25)

Hence, \(\nu _b=0\).

Of course, an analogous statement is true for \(a=-\infty \) on left-infinite intervals.

Proof of Proposition 5.15

Let \(\nu <\nu _0\), fix \(c\in \,]a,b[\), and let \(J_n:=[c+n\nu ,c+(n+1)\nu ]\). Then, using first (2.28) and then the Schwarz inequality, we obtain

$$\begin{aligned} \Vert f\Vert _{L^\infty (J_n)}+\nu \Vert f'\Vert _{L^\infty (J_n)}&\le C_1\Vert Lf\Vert _{L^1(J_n)}+ C_2\Vert f\Vert _{L^1(J_n)}\\&\le C_1\sqrt{\nu }\Vert Lf\Vert _{L^2(J_n)}+ C_2\sqrt{\nu }\Vert f\Vert _{L^2(J_n)}\,\underset{n\rightarrow \infty }{\rightarrow }\,0. \end{aligned}$$

This implies (5.25). \(\square \)

6 Solutions Square Integrable Near Endpoints

6.1 Spaces \(\mathcal {U}_a(\lambda )\) and \(\mathcal {U}_b(\lambda )\)

In this section, we will show that one can compute the boundary indices with the help of eigenfunctions of the operator L which are square integrable around a given endpoint.

Definition 6.1

If \(\lambda \in \mathbb C\), then \(\mathcal {U}_a(\lambda )\) is the set of \(f\in \mathrm{AC}^1]a,b[\) such that \((L-\lambda )f=0\) and f is \(L^2\) on ]a,d[ for some, hence for all d such that \(a<d<b\). Similarly, we define \(\mathcal {U}_b(\lambda )\).

Proposition 6.2

If a is a semiregular endpoint for L, then \(\dim \mathcal {U}_a(\lambda )=2\) for all \(\lambda \in \mathbb C\). Besides, if a is regular, we can choose \(u,v\in \mathcal {N}( L-\lambda )\) such that

$$\begin{aligned} u(a)=1,&\quad u'(a)=0, \end{aligned}$$
(6.1)
$$\begin{aligned} v(a)=0,&\quad v'(a)=1. \end{aligned}$$
(6.2)

Similarly for b.

Proof

We apply Proposition 2.5. \(\square \)

6.2 Two-Dimensional \(\mathcal {U}_a(\lambda )\)

The next proposition contains the main technical fact about the dimensions of the \(\mathcal {U}_a(\lambda )\).

Proposition 6.3

Assume that all the solutions of \(Lf=0\) are square integrable near a. If \(f\in C^1]a,b[\) and \(|Lf| \le B|f|\) for some \(B>0\), then f is square integrable near a. In particular, if \(U\in L^\infty ]a,b[\), then all the solutions of \((L+U)f=0\) are square integrable near a.

Proof

We may clearly assume that b is a regular endpoint and \(f\in C^1]a,b]\). Let \(G_\leftarrow \) be the backward Green’s operator of L (Definition 2.12). If \(Lf=g\), then \(L(f-G_{\leftarrow } g)=0\). Therefore,

$$\begin{aligned} f(x)=\alpha u(x)+\beta v(x)+\int _x^b \big (u(x)v(y)-v(x)u(y)\big ) g(y)\mathrm dy, \end{aligned}$$
(6.3)

for some \(\alpha ,\beta \). Set \(A:=\sqrt{|\alpha |^2+|\beta |^2}\) and \(\mu (x):=\sqrt{|u(x)|^2+|v(x)|^2}\). Then,

$$\begin{aligned} |f(x)|\le A\mu (x)+\mu (x)\int _x^b\mu (y)|g(y)|\mathrm dy \le \mu (x)\Big (A+B\int _x^b\mu (y)|f(y)|\mathrm dy\Big ), \end{aligned}$$

and the Gronwall Lemma applied to \(|f|/\mu \) implies

$$\begin{aligned} |f(x)|\le A\mu (x)\exp \left( B\int _x^b\mu ^2(y)\mathrm dy\right) . \end{aligned}$$
(6.4)

Clearly, the right hand side of (6.4) is square integrable. \(\square \)
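The Gronwall step can be made concrete with a small numerical sketch (the constants \(A\), \(B\) and the coefficient \(\beta \) below are illustrative choices, not from the text): a nonnegative function satisfying the backward integral inequality \(h(x)\le A+B\int _x^b h(y)\,\mathrm dy\) obeys \(h(x)\le A\,\mathrm e^{B(b-x)}\), which is the bound applied above to \(|f|/\mu \).

```python
import numpy as np

# Backward Gronwall inequality, in the form used above for |f|/mu:
# if h >= 0 satisfies h(x) <= A + B*int_x^b h(y) dy, then h(x) <= A*exp(B*(b-x)).
# Sketch: h(x) = A*exp(int_x^b beta) solves h(x) = A + int_x^b beta(y) h(y) dy
# with 0 <= beta <= B, hence satisfies the hypothesis; we verify the bound.
A, B, b = 2.0, 0.5, 3.0                             # illustrative constants
x = np.linspace(0.0, b, 3001)
dx = x[1] - x[0]
beta = B * np.sin(x) ** 2                           # sample coefficient <= B
tail = dx * (np.cumsum(beta[::-1])[::-1] - beta)    # ~ int_x^b beta(y) dy
h = A * np.exp(tail)
assert np.all(h <= A * np.exp(B * (b - x)) + 1e-9)  # the Gronwall bound holds
```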

The above proposition has the following important consequence.

Proposition 6.4

If \(\dim \mathcal {U}_a(\lambda )=2\) for some \(\lambda \in \mathbb C\), then \(\dim \mathcal {U}_a(\lambda )=2\) for all \(\lambda \in \mathbb C\). Besides, if this is the case, then \(\nu _a(L)=2\).

6.3 The Kernel of \(L_{\max }\)

Let us describe the relationship between the dimension of the kernel of \(L_{\max }-\lambda \) and the dimensions of spaces \(\mathcal {U}_a(\lambda )\) and \(\mathcal {U}_b(\lambda )\).

The first proposition is a corollary of Proposition 6.4:

Proposition 6.5

The following statements are equivalent:

  1. (1)

    \(\dim \mathcal {N}(L_{\max }-\lambda )=2\) for some \(\lambda \in \mathbb C\).

  2. (2)

    \(\dim \mathcal {N}(L_{\max }-\lambda )=2\) for all \(\lambda \in \mathbb C\).

  3. (3)

    \(\dim \mathcal {U}_a(\lambda _a)=\dim \mathcal {U}_b(\lambda _b)=2\) for some \(\lambda _a,\lambda _b \in \mathbb C\).

  4. (4)

    \(\dim \mathcal {U}_a(\lambda )=\dim \mathcal {U}_b(\lambda )=2\) for all \(\lambda \in \mathbb C\).

Besides, if this is the case, then \(\nu _a(L)=\nu _b(L)=2\).

The next two propositions are essentially obvious:

Proposition 6.6

Let \(\lambda \in \mathbb C\). We have \(\dim \mathcal {N}(L_{\max }-\lambda )=1\) if and only if one of the following statements is true:

  1. (1)

    \(\dim \mathcal {U}_a(\lambda )=\dim \mathcal {U}_b(\lambda )=1\) and \(\mathcal {U}_a(\lambda )=\mathcal {U}_b(\lambda )\).

  2. (2)

    \(\dim \mathcal {U}_a(\lambda )=2\) and \(\dim \mathcal {U}_b(\lambda )=1\).

  3. (3)

    \(\dim \mathcal {U}_a(\lambda )=1\) and \(\dim \mathcal {U}_b(\lambda )=2\).

Proposition 6.7

Let \(\lambda \in \mathbb C\) and suppose \(\mathcal {U}_a(\lambda )\ne \{0\}\) and \(\mathcal {U}_b(\lambda )\ne \{0\}\). Then, \(\dim \mathcal {N}(L_{\max }-\lambda )=0\) if and only if \(\dim \mathcal {U}_a(\lambda )=\dim \mathcal {U}_b(\lambda )=1\) and \(\mathcal {U}_a(\lambda )\ne \mathcal {U}_b(\lambda )\).
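Proposition 6.7 is illustrated by the free operator on the whole line (a standard example, not taken from the text): for \(V=0\), \(a=-\infty \), \(b=+\infty \) and \(\lambda =-1\), the solutions of \((L-\lambda )f=0\) are \(\mathrm e^{x}\) and \(\mathrm e^{-x}\), and a symbolic check confirms \(\dim \mathcal {U}_a(-1)=\dim \mathcal {U}_b(-1)=1\) with \(\mathcal {U}_a(-1)\ne \mathcal {U}_b(-1)\).

```python
import sympy as sp

# V = 0 on ]-oo, oo[, lambda = -1 (illustrative example, not from the text):
# the solutions of (L - lambda) f = -f'' + f = 0 are e^x and e^(-x).
x = sp.symbols('x', real=True)
sols = [sp.exp(x), sp.exp(-x)]
l2_near_a = [sp.integrate(f**2, (x, -sp.oo, 0)).is_finite for f in sols]
l2_near_b = [sp.integrate(f**2, (x, 0, sp.oo)).is_finite for f in sols]
assert l2_near_a == [True, False]   # U_a(-1) = C*e^x is one-dimensional
assert l2_near_b == [False, True]   # U_b(-1) = C*e^(-x) is one-dimensional
# Since U_a(-1) != U_b(-1), Proposition 6.7 gives N(L_max + 1) = {0}.
```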

6.4 First-Order ODEs

We will need some properties of vector-valued ordinary differential equations. We will denote by \(B(\mathbb C^n)\) the space of \(n\times n\) matrices.

The following statement can be proven by the same methods as Proposition 2.2. Clearly, in the following proposition \(\mathbb C^n\) can be easily replaced by an arbitrary Banach space.

Proposition 6.8

Let \(u_0\in \mathbb C^n\) and let the function \([a,b[\ni x\mapsto A(x)\in B(\mathbb C^n)\) be in \(L_\mathrm {loc}^1\big ([a,b[,B(\mathbb C^n)\big )\). Then, there exists a unique solution in \(\mathrm{AC}\big ([a,b[,\mathbb C^n\big )\) of the following Cauchy problem:

$$\begin{aligned} \partial _xu(x)=A(x)u(x),\quad u(a)=u_0. \end{aligned}$$
(6.5)

In particular, the dimension of the space of solutions of \(\partial _xu(x)=A(x)u(x)\) is n.

If \(A \in L^1\big (]a,b[,B(\mathbb C^n)\big )\), then \(u\in \mathrm{AC}\big ([a,b],\mathbb C^n\big )\); hence, u is continuous up to b.
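A minimal numerical sketch of Proposition 6.8 (the coefficient matrix \(A(x)\) below is a sample smooth, hence \(L_\mathrm {loc}^1\), choice): integrating the Cauchy problem with the n canonical initial vectors produces a fundamental matrix, and its invertibility confirms that the solution space is n-dimensional.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Solve u' = A(x)u columnwise with u(a) = e_j, j = 1..n; the columns of the
# resulting fundamental matrix Phi are n linearly independent solutions.
n, a, b = 3, 0.0, 2.0
def A(x):                                  # sample smooth coefficient matrix
    return np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [-np.sin(x), -1.0, 0.0]])

def rhs(x, y):                             # matrix ODE Phi' = A(x) Phi
    return (A(x) @ y.reshape(n, n)).ravel()

sol = solve_ivp(rhs, (a, b), np.eye(n).ravel(), rtol=1e-9, atol=1e-12)
Phi = sol.y[:, -1].reshape(n, n)           # fundamental matrix at x = b
assert abs(np.linalg.det(Phi)) > 1e-8      # invertible: n independent solutions
```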

The following theorem is much more interesting. It is borrowed from Atkinson [1, Th. 9.11.2]. Note that in this theorem the finite dimensionality of the space \(\mathbb C^n\) seems essential.

Theorem 6.9

Suppose that AB are functions \([a,b[\,\rightarrow B(\mathbb C^n)\) belonging to \(L_\mathrm {loc}^1([a,b[,B(\mathbb C^n))\) satisfying \(A(x)=A^*(x)\ge 0\), \(B(x)=B^*(x)\). Let J be an invertible matrix satisfying \(J^*=-J\) and such that \(J^{-1}A(x)\) is real. If for some \(\lambda \in \mathbb C\) all solutions of

$$\begin{aligned} J\partial _x\phi (x)=\lambda A(x)\phi (x)+B(x)\phi (x) \end{aligned}$$
(6.6)

satisfy

$$\begin{aligned} \int _a^b\big (\phi (x)|A(x)\phi (x)\big )\mathrm dx<\infty , \end{aligned}$$
(6.7)

then for all \(\lambda \in \mathbb C\) all solutions of (6.6) satisfy (6.7).

Proof

For \(\lambda \in \mathbb C\), let \([a,b[\ni x\mapsto Y_\lambda (x)\in B(\mathbb C^n)\) be the solution of

$$\begin{aligned} J\partial _xY_\lambda (x)=\lambda A(x)Y_\lambda (x)+B(x)Y_\lambda (x),\quad Y_\lambda (a)=\mathbb {1}. \end{aligned}$$
(6.8)

Then, the theorem is equivalent to the following statement: if for some \(\lambda \in \mathbb C\), we have

$$\begin{aligned} \int _a^b\mathrm {Tr}\,Y_\lambda ^*(x)A(x)Y_\lambda (x)\mathrm dx<\infty , \end{aligned}$$
(6.9)

then (6.9) holds for all \(\lambda \in \mathbb C\). Let us now prove this statement.

First note that

$$\begin{aligned} \overline{\mathrm {Tr}\,J^{-1}B(x)}=\mathrm {Tr}\,(J^{-1}B(x))^*= -\mathrm {Tr}\,B(x)J^{-1}=-\mathrm {Tr}\,J^{-1}B(x). \end{aligned}$$
(6.10)

Therefore, \(\mathrm {Tr}\,J^{-1}B(x)\in \mathrm {i}\mathbb R\). By the same argument, \(\mathrm {Tr}\,J^{-1}A(x)\in \mathrm {i}\mathbb R\). But \(\mathrm {Tr}\,J^{-1}A(x)\) is real. Hence, \(\mathrm {Tr}\,J^{-1}A(x)=0\), and so for arbitrary \(\lambda \in \mathbb C\) we have \(\mathrm {Tr}\,J^{-1}\big (\lambda A(x)+B(x)\big )\in \mathrm {i}\mathbb R\). Therefore,

$$\begin{aligned} \partial _x\det Y_\lambda (x) =\mathrm {Tr}\,\big (J^{-1}(\lambda A(x)+B(x))\big )\det Y_\lambda (x) \end{aligned}$$
(6.11)

implies

$$\begin{aligned} | \det Y_\lambda (x)|= | \det Y_\lambda (a)|=1. \end{aligned}$$
(6.12)

Therefore, \(Y_\lambda (x)\) is invertible for all \(x\in [a,b[\).

Now, let \(\mu \in \mathbb C\) and assume that (6.9) holds for \(\lambda =\mu \). We have

$$\begin{aligned} \partial _xY_\mu ^*(x)JY_\mu (x)= (\mu -{\overline{\mu }})Y_\mu ^*(x)A(x)Y_\mu (x),\quad Y_\mu ^*(a)JY_\mu (a)=J. \end{aligned}$$
(6.13)

Hence,

$$\begin{aligned} Y_\mu ^*(x)JY_\mu (x)= J+ (\mu -{\overline{\mu }})\int _a^xY_\mu ^*(y)A(y)Y_\mu (y)\mathrm dy. \end{aligned}$$
(6.14)

Using (6.9), we see that \( Y_\mu ^*(x)JY_\mu (x)\) is bounded uniformly in \(x\in [a,b[\). By (6.12), its inverse is also bounded uniformly in \(x\in [a,b[\). (Here, we use the finiteness of the dimension of \(\mathbb C^n\)!)

Set \(Z_\lambda (x):=Y_\mu ^{-1}(x)Y_\lambda (x).\) We have

$$\begin{aligned} \partial _xZ_\lambda =(\lambda -\mu )Y_\mu ^{-1}J^{-1}AY_\lambda =(\lambda -\mu )\big (Y^*_\mu J Y_\mu \big )^{-1}Y_\mu ^*AY_\mu Z_\lambda . \end{aligned}$$
(6.15)

We have proven that \(\big (Y^*_\mu J Y_\mu \big )^{-1}\) is uniformly bounded. By (6.9), the norm of \(Y_\mu ^*AY_\mu \) is in \(L^1]a,b[\). Hence, by the second part of Proposition 6.8, \(\Vert Z_\lambda \Vert \) is uniformly bounded on [a,b[. Now, by using

$$\begin{aligned} Y_\lambda ^*(x)A(x)Y_\lambda (x)= Z_\lambda ^*(x)Y_\mu ^*(x)A(x)Y_\mu (x)Z_\lambda (x) \end{aligned}$$
(6.16)

we see that (6.9) for \(\lambda =\mu \) implies (6.9) for all \(\lambda \). \(\square \)

Remark 6.10

Set

$$\begin{aligned} J:=\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix},\quad A:=\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix},\quad B(x):=\begin{pmatrix} -V(x) & 0 \\ 0 & 1\end{pmatrix}. \end{aligned}$$
(6.17)

Then, \(Lf=\lambda f\) can be rewritten as (6.6), that is,

$$\begin{aligned} J\partial _x\phi (x)=\lambda A(x)\phi (x)+B(x)\phi (x), \end{aligned}$$
(6.18)

with \(\phi =\begin{pmatrix} f \\ f'\end{pmatrix}\). Moreover,

$$\begin{aligned} \int _a^b\big (\phi (x)|A(x)\phi (x)\big )\mathrm dx=\int _a^b|f(x)|^2\mathrm dx, \end{aligned}$$
(6.19)

hence, the condition (6.7) means that \(f\in L^2]a,b[\). If V is real, then the conditions of Theorem 6.9 on J, A and B are satisfied (the Hermiticity of B requires \(V=\overline{V}\)). Theorem 6.9 therefore implies that if all solutions of \(Lf=\lambda f\) are square integrable for one \(\lambda \), they are square integrable for all \(\lambda \). We thus obtain an alternative proof of Proposition 6.4 for real potentials.
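The hypotheses of Theorem 6.9 and the invariance \(|\det Y_\lambda (x)|=1\) from its proof can be checked numerically for the system above; the sample potential \(V(x)=x^2\) below is real (so that \(B(x)=B^*(x)\) holds), while \(\lambda \) is an arbitrary complex number.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 2x2 system of Remark 6.10 for the sample real potential V(x) = x**2
# (a real V makes B(x) Hermitian); lambda may be any complex number.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
A = np.array([[1.0, 0.0], [0.0, 0.0]])
def B(x):
    return np.array([[-x**2, 0.0], [0.0, 1.0]])

Jinv = np.linalg.inv(J)
assert np.allclose(J.T, -J)                        # J* = -J
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)     # A = A* >= 0
assert np.allclose((Jinv @ A).imag, 0.0)           # J^{-1} A is real

lam = 0.7 + 0.3j                                   # arbitrary complex lambda
def rhs(x, y):                                     # Y' = J^{-1}(lam*A + B(x)) Y
    return (Jinv @ (lam * A + B(x)) @ y.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, 2.0), np.eye(2, dtype=complex).ravel(),
                rtol=1e-11, atol=1e-13)
Y = sol.y[:, -1].reshape(2, 2)                     # Y_lambda at x = 2
assert abs(abs(np.linalg.det(Y)) - 1.0) < 1e-6    # |det Y_lambda(x)| = 1
```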

6.5 Von Neumann Decomposition

Von Neumann’s theory for the classification of self-adjoint extensions of a Hermitian operator is well-known, cf. [12, 25]. In the present subsection, we will investigate how to adapt it to the case of complex potentials.

First recall that \(\mathcal {D}(L_{\max })\) has a Hilbert space structure inherited from its graph, which is a closed subspace of \(L^2]a,b[\,\oplus L^2]a,b[\,\), namely

$$\begin{aligned} (f|g)_{L}:=(Lf|Lg)+(f|g)=\langle {\overline{L}}{\overline{f}}|Lg\rangle +\langle {\overline{f}}|g\rangle . \end{aligned}$$
(6.20)

Therefore, by the Riesz representation theorem \(\mathcal {D}({\overline{L}}_{\max })\) can be identified with the dual of \(\mathcal {D}(L_{\max })\):

$$\begin{aligned} \mathcal {D}({\overline{L}}_{\max })\ni f\mapsto ({\overline{f}}| \cdot )_L \in \mathcal {D}(L_{\max })'. \end{aligned}$$
(6.21)

Hence, the space of boundary functionals \(\mathcal B(L)\subset \mathcal {D}(L_{\max })'\) can be viewed as a subspace of \( \mathcal {D}(\overline{L}_{\max })\).

The following lemma follows from Proposition 3.22 (1):

Lemma 6.11

With the identification (6.21), there is a canonical linear isomorphism

$$\begin{aligned} \mathcal {B}(L)&\simeq \mathcal {N}(L_{\max }{\overline{L}}_{\max }+1)\\&=\{f\in \mathcal {D}(\overline{L}_{\max })\mid \overline{L}f\in \mathcal {D}(L_{\max }) \text { and } L\overline{L}f+f=0\} . \end{aligned}$$
(6.22)

Von Neumann’s formalism is particularly efficient for real potentials and gives more precise results than in the complex case, so for completeness we begin with some comments on the real case. Then, we explore what can be done for arbitrary complex potentials. The differences between the real and complex case are significant, the difficulties being related to the fact that in the complex case there is no simple relation between the (geometric) limit point/circle method and the dimension of the spaces \(\mathcal {U}_a(\lambda )\), cf. Sect. 8.5.

If V is real, then \(\overline{L}=L\), \(L_{\min }\) is Hermitian, and \(L_{\max }=L_{\min }^*\), hence

$$\begin{aligned} \mathcal {B}(L) \simeq \mathcal {N}(L_{\max }^2+1) . \end{aligned}$$
(6.23)

Then by using the relation \(L^2+1=(L-\mathrm {i})(L+\mathrm {i})\), it is easy to prove that

$$\begin{aligned} \mathcal {B}(L)\simeq \mathcal {N}(L_{\max }-\mathrm {i})+\mathcal {N}(L_{\max }+\mathrm {i}) . \end{aligned}$$
(6.24)

The last sum is obviously algebraically direct but also orthogonal for the scalar product (6.20); hence, we have an orthogonal direct sum decomposition

$$\begin{aligned} \mathcal {D}(L_{\max })=\mathcal {D}(L_{\min })\oplus \mathcal {B}(L) =\mathcal {D}(L_{\min })\oplus \mathcal {N}(L_{\max }-\mathrm {i})\oplus \mathcal {N}(L_{\max }+\mathrm {i}) . \end{aligned}$$
(6.25)

The map \(f\mapsto \overline{f}\) is a real linear isomorphism of \(\mathcal {N}(L_{\max }-\mathrm {i})\) onto \(\mathcal {N}(L_{\max }+\mathrm {i})\); hence, these spaces have equal dimension \(\le 2\) and so \(\dim \mathcal {B}(L)=2\dim \mathcal {N}(L_{\max }-\mathrm {i})\in \{0,2,4\}\). Of course, we have already proved this in a much simpler way, but (6.25) also gives via a simple argument the following: if V is real then

  1. (1)

    \(\nu _a(L)=0 \Leftrightarrow \dim \mathcal {U}_a(\lambda )=1\ \forall \lambda \in \mathbb C{\setminus }\mathbb R\);

  2. (2)

    \(\nu _a(L)=2 \Leftrightarrow \dim \mathcal {U}_a(\lambda )=2\ \forall \lambda \in \mathbb C\).

The simplicity of the treatment in the real case is due to the possibility of working with (6.24), which involves only the second-order operators \(L_{\max }\pm \mathrm {i}\), instead of (6.23), which involves the operator \(L_{\max }^2\) of order 4. We do not have such a simplification in the complex case where \(L_{\max }\overline{L}_{\max }+1\) is formally a fourth-order differential operator with very singular coefficients, since V is only locally \(L^1\).
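The real-case dichotomy (1)–(2) can be illustrated by the classical inverse-square potential (a standard example, not taken from the text): on \(]0,1[\) with \(V(x)=c/x^2\), the functions \(x^s\) with \(s(s-1)=c\) solve \(Lf=0\), and both solutions are square integrable near 0 precisely when \(c<3/4\).

```python
import sympy as sp

# V(x) = c/x**2 on ]0,1[ (classical example, not from the text):
# f = x**s solves -f'' + (c/x**2) f = 0 iff s*(s-1) = c, and x**s is
# square integrable near 0 iff 2*s > -1; both roots pass iff c < 3/4.
x = sp.symbols('x', positive=True)
s = sp.Symbol('s')

def l2_solutions(c):
    """is_finite of int_0^1 x^(2s) dx for each root s of s(s-1) = c."""
    roots = sp.solve(sp.Eq(s * (s - 1), c), s)
    return [sp.integrate(x**(2 * r), (x, 0, 1)).is_finite for r in roots]

assert all(l2_solutions(sp.Rational(1, 2)))   # c = 1/2 < 3/4: dim U_a = 2
assert not all(l2_solutions(2))               # c = 2   > 3/4: dim U_a = 1
```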

Let us show how to generalize von Neumann’s analysis to the complex case. We will follow [15, Theorem 9.1] which in turn is a consequence of [1, Theorem 9.11.2]. The nontrivial part of Theorem 6.15 is due to Race [22, Theorem 5.4].

We need to study the equation

$$\begin{aligned} (L\overline{L}+\lambda )f=0. \end{aligned}$$
(6.26)

More precisely, by a solution of (6.26) we will mean \(f\in \mathrm{AC}^1]a,b[\) such that \(\overline{L}f\in \mathrm{AC}^1]a,b[\) and \(L(\overline{L}f)+\lambda f=0\) holds.

Let us rewrite (6.26) as a second-order system of 2 equations. To this end, introduce

$$\begin{aligned}&Q:=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\qquad \mathcal {I}:=\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\qquad W:= \begin{pmatrix} 0 & V \\ \overline{V} & -1\end{pmatrix}, \end{aligned}$$
(6.27)
$$\begin{aligned}&\mathcal {L}:=-Q\partial ^2+W =\begin{pmatrix} 0 & L\\ \overline{L} & -1\end{pmatrix}. \end{aligned}$$
(6.28)

Consider the equation

$$\begin{aligned} (\mathcal {L}+\lambda \mathcal {I})F=0, \end{aligned}$$
(6.29)

on \(\mathrm{AC}^1\big (]a,b[,\mathbb C^2\big )\). The Eqs. (6.26) and (6.29) are equivalent in the following sense:

Lemma 6.12

The map \(f\mapsto F:=\left( {\begin{matrix}f\\ \overline{L}f \end{matrix}}\right) \) is an isomorphism of the space of solutions of \({(L\overline{L}+\lambda )f=0}\) onto the space of solutions of \((\mathcal {L}+\lambda \mathcal {I})F=0\).

Proof

It is immediate to see that if \((L\overline{L}+\lambda ) f=0\), then \(\left( {\begin{matrix}f\\ \overline{L}f \end{matrix}}\right) \in \mathrm{AC}^1(]a,b[,\mathbb C^2)\) and \({(\mathcal {L}+\lambda \mathcal {I})\left( {\begin{matrix}f\\ \overline{L}f \end{matrix}}\right) =0}\).

Reciprocally, if \((\mathcal {L}+\lambda \mathcal {I})\left( {\begin{matrix} f_1\\ f_2 \end{matrix}}\right) =0\), then \(f_1,f_2\in \mathrm{AC}^1]a,b[\) and \(\begin{pmatrix} \lambda f_1 +L f_2 \\ \overline{L} f_1-f_2\end{pmatrix}=\begin{pmatrix} 0\\ 0 \end{pmatrix} \). Thus, \(f_2=\overline{L}f_1\) and \((L \overline{L}+\lambda ) f_1=0\). \(\square \)

We further transform (6.29) into a first-order system of 4 equations. To this end, we introduce

$$\begin{aligned} J=\begin{pmatrix} 0 & -Q \\ Q & 0 \end{pmatrix},\quad A=\begin{pmatrix}\mathcal {I}& 0\\ 0 & 0 \end{pmatrix}, \quad B=\begin{pmatrix} W & 0\\ 0 & - Q \end{pmatrix}. \end{aligned}$$

Consider the equation

$$\begin{aligned} J\partial _x\phi =(\lambda A +B)\phi , \end{aligned}$$
(6.30)

where \(\phi \in \mathrm{AC}\big (]a,b[,\mathbb C^4\big )\).

Lemma 6.13

The map \(F\mapsto \phi :=\begin{pmatrix} F\\ -F' \end{pmatrix}\) is an isomorphism of the space of solutions of \((\mathcal {L}+\lambda \mathcal {I})F=0\) onto the space of solutions of \(J\partial _x\phi =(\lambda A +B)\phi \).

Proof

It is immediate to see that if \((\mathcal {L}+\lambda \mathcal {I})F=0\), then

$$\begin{aligned} \begin{pmatrix} F\\ -F' \end{pmatrix}\in \mathrm{AC}\big (]a,b[,\mathbb C^4\big ) \quad \text {and}\quad (-J\partial _x+\lambda A+B)\begin{pmatrix} F\\ -F' \end{pmatrix}=0. \end{aligned}$$

Reciprocally, if \((-J\partial _x+\lambda A+B)\begin{pmatrix} F\\ G \end{pmatrix}=0\), then

$$\begin{aligned} F,G\in \mathrm{AC}\big (]a,b[,\mathbb C^2\big ) \quad \text {and}\quad \begin{pmatrix}QG'+(W+\lambda \mathcal {I})F\\ -QF'-QG \end{pmatrix}= \begin{pmatrix} 0\\ 0 \end{pmatrix}. \end{aligned}$$

Therefore, \(G=-F'\), so that \(F\in \mathrm{AC}^1\big (]a,b[,\mathbb C^2\big )\), and \(-QF''+(W+\lambda \mathcal {I})F=0\), that is, \((\mathcal {L}+\lambda \mathcal {I})F=0\).

\(\square \)

Lemma 6.14

Suppose that L is regular at b. If all solutions of \((L\overline{L}+\lambda )f=0\) are square integrable for some \(\lambda \), then all solutions of \((L\overline{L}+\lambda )f=0\) are square integrable for all \(\lambda \).

Moreover, if this is the case, then all solutions of \(Lf=0\) are square integrable.

Proof

By Lemmas 6.12 and 6.13, instead of \((L\overline{L}+\lambda )f=0\), we can consider \(J\partial _x\phi =(\lambda A +B)\phi \), and the square integrability of f is equivalent to the integrability of \(\big (\phi (x)|A\phi (x)\big )= |\phi _1(x)|^2\), since \(f=\phi _1\) under the identification. Note that J, A are constant \(4\times 4\) matrices with \(J^*=-J\), \(A^*=A\), \(J^{-1}A\) is a real matrix, and \(B(x)^*=B(x)\) belongs to \(L_\mathrm {loc}^1]a,b]\). Thus, Eq. (6.30) satisfies the assumptions of Theorem 6.9. The theorem says that if, for some \(\lambda \), \(\big (\phi (x)|A\phi (x)\big )\) is integrable for all solutions \(\phi \) of (6.30), then this is so for all \(\lambda \). This proves the first statement of the lemma.

Now, suppose that all solutions of \((L\overline{L}+\lambda )f=0\) belong to \(L^2]a,b[\) for all \(\lambda \). In particular, all solutions of \(L\overline{L}f=0\) are square integrable. Since \(\overline{L}f=0\Rightarrow L\overline{L}f=0\), any solution of \(\overline{L}f=0\) is square integrable. Hence, also any solution of \(Lf=0\). \(\square \)

Theorem 6.15

The following equivalences hold:

$$\begin{aligned}&\nu _a(L)=0 \Longleftrightarrow \dim \mathcal {U}_a(\lambda )\le 1 \ \forall \lambda \in \mathbb C\Longleftrightarrow \dim \mathcal {U}_a(\lambda )\le 1 \text { for some } \lambda \in \mathbb C, \end{aligned}$$
(6.31)
$$\begin{aligned}&\nu _a(L)=2 \Longleftrightarrow \dim \mathcal {U}_a(\lambda )=2 \ \forall \lambda \in \mathbb C\Longleftrightarrow \dim \mathcal {U}_a(\lambda )=2 \text { for some } \lambda \in \mathbb C. \end{aligned}$$
(6.32)

If V is a real function, then

$$\begin{aligned} \nu _a(L)=0 \Longleftrightarrow \dim \mathcal {U}_a(\lambda )=1 \ \forall \lambda \in \mathbb C{\setminus }\mathbb R. \end{aligned}$$
(6.33)

Proof

The equivalences (6.31) follow from (6.32) by taking into account Theorem 5.6 and the fact that the dimension of \(\mathcal {U}_a(\lambda )\) is \(\le 1\) if it is not 2. Thus, we only have to discuss (6.32). The second equivalence from (6.32) is a consequence of Proposition 6.4.

It is easy to see that \(\nu _a(L)=2\) if \(\dim \mathcal {U}_a(\lambda )=2\) for some complex \(\lambda \). Indeed, let u, v be solutions of the equation \((L-\lambda )f=0\) such that \(W(u,v)=1\). Then, if all the solutions of \((L-\lambda )f=0\) are square integrable near a, we get \(W_a(u,v)=1\), hence \(W_a\ne 0\), so that \(\nu _a(L)=2\).

In what follows, we consider the nontrivial part of the theorem: we assume \(\nu _a(L)=2\) and show that \(\dim \mathcal {U}_a(0)=2\). Clearly, we may assume that b is a regular endpoint; if not, we replace b by any number between a and b. Then, \(\nu (L)=2\Leftrightarrow \nu _a(L)=0\) and \(\nu (L)=4\Leftrightarrow \nu _a(L)=2\), so we have to show that \(\nu (L)=4\Rightarrow \dim \mathcal {N}(L_{\max })=2\). Since \(\nu (L)=\dim \mathcal {B}(L)\) and \(\mathcal {N}(L_{\max }\overline{L}_{\max }+1)\simeq \mathcal {B}(L)\) by (6.22), it suffices to prove

$$\begin{aligned} \dim \mathcal {N}(L_{\max }\overline{L}_{\max }+1)=4 \Rightarrow \dim \mathcal {N}(L_{\max })=2 . \end{aligned}$$
(6.34)

By Proposition 6.8, the space of solutions of the first-order system (6.30) is four dimensional. Therefore, Lemmas 6.12 and 6.13 imply that the space of solutions of \(L\overline{L}f+\lambda f=0\) is four dimensional. Hence, \(\dim \mathcal {N}(L_{\max }\overline{L}_{\max }+1)=4\) implies that all solutions of \({(L\overline{L}+1)f=0}\) are square integrable. Now by Lemma 6.14 applied to \(\lambda =1\), all solutions of \(Lf=0\) are square integrable. \(\square \)

7 Spectrum and Green’s Operators

7.1 Integral Kernel of Green’s Operators

Recall that in Definition 3.7, we introduced the concept of a right inverse of a closed operator. In the context of 1d Schrödinger operators, right inverses of \(L_{\max }\) will be called \(L^2\) Green’s operators. Thus, \(G_\bullet \) is an \(L^2\) Green’s operator if it is bounded, \(\mathcal {R}(G_\bullet )\subset \mathcal {D}(L_{\max })\) and \(L_{\max }G_\bullet =\mathbb {1}\).

Let \(G_\bullet \) be a Green’s operator in the sense of Definition 2.9. Clearly, \(L_\mathrm {c}^2]a,b[\) is contained in \(L_\mathrm {c}^1]a,b[\). Besides, \(L_{\mathrm {c}}^2]a,b[\) is dense in \(L^2]a,b[\). Therefore, if the restriction of \(G_\bullet \) to \(L_{\mathrm {c}}^2]a,b[\) is bounded, then it has a unique extension to a bounded operator on \(L^2]a,b[\). This extension, which by Proposition 3.12 is an \(L^2\) Green’s operator, will be denoted by the same symbol \(G_\bullet \).

Note that the pair \(L_{\max },L_{\min }\) satisfies \(L_{\min }=L_{\max }^\#\subset L_{\max }\), which are precisely the properties discussed in Sect. 3.6. Recall from that subsection that \(L^2\) Green’s operators whose inverse contains \(L_{\min }\) correspond to realizations of L that are between \(L_{\min }\) and \(L_{\max }\). The following proposition is devoted to properties of such Green’s operators.

Recall that for any \(x\in ]a,b[\) we denote by \(L^{a,x}\), resp., \(L^{x,b}\) the restriction of L to \(L^2]a,x[\), resp., \(L^2]x,b[\). We can also define \(L_{\max }^{a,x}\) and \(L_{\max }^{x,b}\), etc. Note that x is a regular endpoint of both \(L^{a,x}\) and \(L^{x,b}\) (V is integrable on a neighborhood of x).

Proposition 7.1

Suppose that \(L_{\min }\subset L_\bullet \subset L_{\max }\), \(L_\bullet \) is invertible and \(G_\bullet :=L_\bullet ^{-1}\). Then, \(G_\bullet \) is an integral operator whose integral kernel

$$\begin{aligned} ]a,b[\times ]a,b[\ni (x,y)\mapsto G_\bullet (x,y)\in \mathbb C\end{aligned}$$

is a function separately continuous in x and y which has the following properties:

  1. (1)

    for each \(a<x<b\), the function \(G_\bullet (x,\cdot )\) restricted to ]ax[, resp., ]xb[ belongs to \(\mathcal {D}(L_{\max }^{a,x})\), resp., \(\mathcal {D}(L_{\max }^{x,b})\) and satisfies \(LG_\bullet (x,\cdot )=0\) outside x. Besides, \(G_\bullet (x,\cdot )\) and its derivative have limits at x from the left and the right satisfying

    $$\begin{aligned} G_\bullet (x,x-0)-G_\bullet (x,x+0)&=0,\\ \partial _2G_\bullet (x,x-0)-\partial _2G_\bullet (x,x+0)&=1; \end{aligned}$$
  2. (2)

    for each \(a<y<b\), the function \(G_\bullet (\cdot ,y)\) restricted to ]ay[, resp., ]yb[ belongs to \(\mathcal {D}(L_{\max }^{a,y})\), resp., \(\mathcal {D}(L_{\max }^{y,b})\) and satisfies \(LG_\bullet (\cdot ,y)=0\) outside y. Besides, \(G_\bullet (\cdot ,y)\) and its derivative have limits at y from the left and the right satisfying

    $$\begin{aligned} G_\bullet (y-0,y)-G_\bullet (y+0,y)&=0,\\ \partial _1G_\bullet (y-0,y)-\partial _1G_\bullet (y+0,y)&=1; \end{aligned}$$
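Properties (1) and (2) can be verified symbolically in the simplest case (an illustration, not from the text): the Dirichlet realization of \(-\partial _x^2\) on \(]0,1[\) has the classical kernel \(G(x,y)=\min (x,y)\,(1-\max (x,y))\).

```python
import sympy as sp

# Green's kernel of the Dirichlet realization of L = -d^2/dx^2 on ]0,1[
# (classical example): G(x,y) = y(1-x) for y < x and x(1-y) for y > x.
x, y, t = sp.symbols('x y t', positive=True)
G_lower = y * (1 - x)   # region y < x
G_upper = x * (1 - y)   # region y > x
# continuity across the diagonal and jump of the derivative in y:
assert sp.simplify(G_lower.subs(y, x) - G_upper.subs(y, x)) == 0
jump = sp.diff(G_lower, y).subs(y, x) - sp.diff(G_upper, y).subs(y, x)
assert sp.simplify(jump) == 1            # d2G(x,x-0) - d2G(x,x+0) = 1
# G is a right inverse: for f = 1, u = G f satisfies -u'' = 1, u(0) = u(1) = 0
u = sp.integrate(G_lower.subs(y, t), (t, 0, x)) + \
    sp.integrate(G_upper.subs(y, t), (t, x, 1))
assert sp.simplify(-sp.diff(u, x, 2) - 1) == 0
assert u.subs(x, 0) == 0 and u.subs(x, 1) == 0
```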

Proof

We shall use ideas from the proof of Lemma 4 p. 1315 in [12]. \(G_\bullet \) is a continuous linear map \(G_\bullet :L^2]a,b[ \rightarrow \mathcal {D}(L_{\max })\) and for each \(x\in ]a,b[\) we have a continuous linear form \(\varepsilon _x:f\mapsto f(x)\) on \(\mathcal {D}(L_{\max })\); hence, we get a continuous linear form \(\varepsilon _x\circ G_\bullet :L^2]a,b[\rightarrow \mathbb C\). Thus, for each \(x\in ]a,b[\) there exists a unique \(\phi _x\in L^2]a,b[\) such that

$$\begin{aligned} (G_\bullet f)(x)=\int _a^b \phi _x(y)f(y) \mathrm dy, \quad \forall f\in L^2]a,b[\, . \end{aligned}$$

We get a map \(\phi :\,]a,b[\rightarrow L^2]a,b[\) which is continuous, and even locally Lipschitz, because if \(J\subset \,]a,b[\) is compact and \(x,y\in J\), then

$$\begin{aligned} \left| \int _a^b (\phi _x(z)-\phi _y(z))f(z) \mathrm dz \right|&= |(G_\bullet f)(x)-(G_\bullet f)(y)| \le \Vert (G_\bullet f)'\Vert _{L^\infty (J)}|x-y| \\&\le C_1 \Vert G_\bullet f\Vert _{\mathcal {D}(L_{\max })}|x-y| \le C_2 \Vert f\Vert |x-y|, \end{aligned}$$

hence \(\Vert \phi _x-\phi _y\Vert \le C_2|x-y|\). By taking \(f=L_\bullet g\), \( g\in \mathcal {D}(L_\bullet )\), we get

$$\begin{aligned} g(x)=\int _a^b\phi _x(y)(L_\bullet g)(y) \mathrm dy. \end{aligned}$$
(7.1)

Set \(\phi _x^a:=\phi _x\big |_{]a,x[}\) and \(\phi _x^b:=\phi _x\big |_{]x,b[}\). (7.1) can be rewritten as

$$\begin{aligned} g(x)=\int _a^x\phi _x^a(y)(L_\bullet g)(y) \mathrm dy +\int _x^b\phi _x^b(y)(L_\bullet g)(y) \mathrm dy. \end{aligned}$$
(7.2)

Since \(L_{\min }\subset L_\bullet \), we may take \(g\in \mathcal {D}(L_{\min })\) in (7.2). Assuming in addition that \(g(y)=0\) in a neighborhood of x, we can rewrite (7.2) as

$$\begin{aligned} 0=\int _a^x\phi _x^a(y)(L_{\min }^{a,x} g)(y) \mathrm dy +\int _x^b\phi _x^b(y)(L_{\min }^{x,b} g)(y) \mathrm dy. \end{aligned}$$
(7.3)

Such functions g are dense in \(\mathcal {D}(L_{\min }^{a,x})\oplus \mathcal {D}(L_{\min }^{x,b})\). Therefore, \(\phi _x^a\) belongs to \(\mathcal {D}(L_{\max }^{a,x})\) and \(\phi _x^b\) belongs to \(\mathcal {D}(L_{\max }^{x,b})\). Since x is a regular end of both intervals ]a,x[ and ]x,b[, the function \(\phi _x\) and its derivative \(\phi _x'\) extend to continuous functions on ]a,x] and [x,b[. However, these extensions are not necessarily continuous on ]a,b[, i.e., we must distinguish the left and right limits at x, denoted \(\phi _x(x\pm 0)\) and \(\phi '_x(x\pm 0)\).

We now take \(g\in \mathcal {D}(L_{\min })\) in (7.1). Taking into account (5) of Theorem 4.4 and what we proved above, we have \(W(\phi _x,g;a)=0\) and \(W(\phi _x,g;b)=0\). Then, using Green's identity on ]a,x[ and ]x,b[ in (7.2), we get

$$\begin{aligned} g(x) =-W(\phi ^a_x,g;x) + W(\phi ^b_x,g;x) . \end{aligned}$$

We may compute the last two terms explicitly because x is a regular end of both intervals:

$$\begin{aligned} W(\phi ^a_x,g;x)&=\phi _x(x-0)g'(x)-\phi _x'(x-0)g(x), \\ W(\phi ^b_x,g;x)&=\phi _x(x+0)g'(x)-\phi _x'(x+0)g(x). \end{aligned}$$

Thus, we get

$$\begin{aligned} g(x)=(\phi _x(x+0)-\phi _x(x-0))g'(x)+(\phi _x'(x-0) -\phi _x'(x+0))g(x). \end{aligned}$$

The values g(x) and \(g'(x)\) may be specified in an arbitrary way under the condition \(g\in \mathcal {D}(L_{\min })\), so we get \(\phi _x(x+0)-\phi _x(x-0)=0\) and \(\phi _x'(x-0) -\phi _x'(x+0)=1\). Thus, \(\phi _x\) must be a continuous function which is continuously differentiable outside x and its derivative has a jump \(\phi _x'(x+0) -\phi _x'(x-0)=-1\) at x.

Thus, \(G_\bullet \) is an integral operator with kernel \(G_\bullet (x,y)=\phi _x(y)\). But \(G_\bullet ^\#\) is also an \(L^2\) Green’s operator and clearly \(G_\bullet ^\#\) has kernel \(G_\bullet ^\#(x,y)=\phi _y(x)\). Repeating the above arguments applied to \(G_\bullet ^\#\), we obtain the remaining statements of the proposition. \(\square \)

Let us describe a consequence of the above proposition; we use the notation of Definition 6.1.

Proposition 7.2

If there exists a realization of L such that \(\lambda \in \mathbb C\) is in its resolvent set, then \(\dim \mathcal {U}_a(\lambda )\ge 1\) and \(\dim \mathcal {U}_b(\lambda )\ge 1\).

Proof

Suppose that L possesses a realization with \(\lambda \in \mathbb C\) contained in its resolvent set. This means that \(L-\lambda \) possesses an \(L^2\) Green's operator \(G_\bullet \). By Proposition 3.23, it can be chosen to satisfy \(G_\bullet =G_\bullet ^\#\). Then, Proposition 7.1 implies that for any \(x\in ]a,b[\) the function \(G_\bullet (x,\cdot )\) belongs to \(L^2]a,b[\) and satisfies \(LG_\bullet (x,\cdot )=0\) on ]a,x[ and ]x,b[. We will prove that there is x such that \(G_\bullet (x,\cdot )\big |_{]x,b[}\ne 0\), which implies \(\dim \mathcal {U}_b(\lambda )\ge 1\). In order to prove that \(\dim \mathcal {U}_a(\lambda )\ge 1\), it suffices to show that there is x such that \(G_\bullet (x,\cdot )\big |_{]a,x[}\ne 0\), and the argument is similar.

If the required assertion is not true, then \(G_\bullet (x,\cdot )\big |_{]x,b[}=0\) for every x; in other words, \(G_\bullet (x,y)=0\) for all \(a<x<y<b\). Since \(G_\bullet \) is self-transposed, (3.9) gives \(G_\bullet (x,y)=G_\bullet (y,x)\) for all x, y. Hence, we also have \(G_\bullet (x,y)=0\) for \(a<y<x<b\). But this means \(G_\bullet =0\), which is false. \(\square \)

7.2 Forward and Backward Green’s Operators

Let us study the \(L^2\) theory of the forward Green's operator \(G_\rightarrow \). Recall that if u, v span \(\mathcal {N}(L)\) with \(W(v,u)=1\), then \(G_\rightarrow \) is given by

$$\begin{aligned} G_\rightarrow g(x)=v(x)\int _a^xu(y)g(y)\mathrm dy-u(x)\int _a^xv(y)g(y)\mathrm dy. \end{aligned}$$
(7.4)

Note that elements of \(\mathcal {N}(L)\) do not have to be square integrable. We have \(\mathcal {N}(L_{\max })=\mathcal {N}(L)\cap L^2]a,b[\). In the following proposition, we consider the case \(\mathcal {N}(L)=\mathcal {N}(L_{\max })\):
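As a concrete sanity check (our example, under the assumptions \(V=0\) on ]0,1[, \(u=1\), \(v=-x\), normalized so that \(W(v,u)=1\)), formula (7.4) reduces to \(G_\rightarrow g(x)=\int _0^x(y-x)g(y)\mathrm dy\), which satisfies \(-(G_\rightarrow g)''=g\) and vanishes together with its derivative at the left endpoint:

```python
# Sketch of (7.4) for V = 0 on ]0,1[ with u = 1, v = -x (our choice,
# so that W(v,u) = v u' - v' u = 1).
def G_forward(g, x, n=2000):
    # midpoint rule for  v(x)*int_0^x u g - u(x)*int_0^x v g = int_0^x (y-x) g(y) dy
    h = x / n
    return sum(((k + 0.5) * h - x) * g((k + 0.5) * h) for k in range(n)) * h

x = 0.5
val = G_forward(lambda y: 1.0, x)
print(abs(val - (-x**2 / 2)) < 1e-9)   # True: for g = 1, G_>g(x) = -x^2/2
```

For \(g\equiv 1\) the exact value is \(-x^2/2\), and the midpoint rule reproduces it to rounding error since the integrand is linear.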

Proposition 7.3

Assume \(\dim \mathcal {N}(L_{\max })=2\). Then,

  1. (1)

    \(G_\rightarrow \) is Hilbert–Schmidt. In particular, it is an \(L^2\) Green’s operator of L.

  2. (2)

    Let \(L_a\) be the operator defined in Definition 4.14. Then, \(L_a\) has empty spectrum, \((L_a-\lambda )^{-1}\) is compact for every \(\lambda \in \mathbb C\), and we have \(L_a^{-1}=G_\rightarrow \).

  3. (3)

    Every \(f\in \mathcal {D}(L_{\max })\) has a unique decomposition as

    $$\begin{aligned} f=\alpha u+\beta v+f_a,\quad f_a\in G_\rightarrow L^2]a,b[ \, =\mathcal {D}(L_a). \end{aligned}$$
    (7.5)
  4. (4)

    \(G_\leftarrow \) has analogous properties. In particular, we have

    $$\begin{aligned} G_\rightarrow ^\#=G_\leftarrow ,\quad L_b^{-1}=G_\leftarrow . \end{aligned}$$
    (7.6)

Proof

By hypothesis, \(u,v\in L^2]a,b[\). The Hilbert–Schmidt norm of \(G_\rightarrow \) is clearly bounded by \(\sqrt{2}\Vert u\Vert _2\Vert v\Vert _2\). Then, by Proposition 3.10, zero belongs to the resolvent set of \(L_a\), \(L_a^{-1}=G_\rightarrow \) and

$$\begin{aligned} \mathcal {D}(L_{\max })=\mathcal {D}(L_a)\oplus \mathcal {N}(L_{\max }), \end{aligned}$$
(7.7)

which can be restated as the decomposition (7.5). If \(\lambda \in \mathbb C\) and V is replaced by \(V-\lambda \), then the new \(G_\rightarrow \) will be the resolvent at \(\lambda \) of \(L_a\), which proves the second assertion in (2). Finally, (7.6) is proved by a simple computation. \(\square \)
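The Hilbert–Schmidt bound quoted at the start of the proof can be checked directly (a sketch of ours): by (7.4), the integral kernel of \(G_\rightarrow \) is \(\big (v(x)u(y)-u(x)v(y)\big )\mathbb {1}_{\{y<x\}}\); hence,

$$\begin{aligned} \Vert G_\rightarrow \Vert _{\mathrm {HS}}^2&=\int _a^b\int _a^x\big |v(x)u(y)-u(x)v(y)\big |^2\mathrm dy\,\mathrm dx\\&\le 2\iint _{y<x}\big (|v(x)|^2|u(y)|^2+|u(x)|^2|v(y)|^2\big )\mathrm dy\,\mathrm dx =2\Vert u\Vert _2^2\Vert v\Vert _2^2, \end{aligned}$$

where the last equality follows by swapping the names of the variables in the second term, which turns the region \(y<x\) into \(x<y\); the two regions together fill \(]a,b[^{2}\).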

Proposition 7.4

\(G_\rightarrow \) is bounded if and only if \(\dim \mathcal {N}(L_{\max })=2\) (so that the assumptions of Proposition 7.3 are valid).

Proof

Let \(G_\rightarrow \) be bounded. Then, so is \(G_\rightarrow ^\#=G_\leftarrow \). Let us recall the identity (2.23):

$$\begin{aligned} G_\rightarrow -G_\leftarrow =|v\rangle \langle u|-|u\rangle \langle v|. \end{aligned}$$
(7.8)

But the boundedness of the right-hand side of (7.8) implies \(v,u\in L^2]a,b[\). \(\square \)

\(G_\rightarrow \) is useful even if it is not a bounded operator, especially if \(\dim \mathcal {U}_a(0)=2\):

Proposition 7.5

Assume that \(\dim \mathcal {U}_a(0)=2\). Then, \(G_\rightarrow \) extends to a map from \(L^2]a,b[\) to \(C^1]a,b[\) satisfying the bounds

$$\begin{aligned} |G_\rightarrow g(x)|&\le \Big (|u(x)|\Vert v\Vert _x+|v(x)|\Vert u\Vert _x\Big )\Vert g\Vert _x,\end{aligned}$$
(7.9)
$$\begin{aligned} |\partial _xG_\rightarrow g(x)|&\le \Big (|u'(x)|\Vert v\Vert _x+|v'(x)|\Vert u\Vert _x\Big )\Vert g\Vert _x, \end{aligned}$$
(7.10)

where \(\Vert g\Vert _x:=\Big (\int _a^x |g(y)|^2\mathrm dy\Big )^{\frac{1}{2}}\). If \(\chi \in C_\mathrm {c}^\infty [a,b[\), \(\chi =1\) around a, then every \(f\in \mathcal {D}(L_{\max })\) has a unique decomposition as

$$\begin{aligned} f=\alpha \chi u+\beta \chi v+f_a,\quad f_a\in \mathcal {D}(L_a). \end{aligned}$$
(7.11)

Proof

Let \(a<d<b\). Then, we can restrict our problem to ]a,d[. Now, \(\dim \mathcal {U}_a(0)=\dim \mathcal {U}_d(0)=2\). Therefore, we can apply Proposition 7.3, using the fact that \(G_\rightarrow \) restricted to \(L^2]a,d[\) is an \(L^2\) Green's operator of \(L^{a,d}\). \(\square \)

The main assertion of Theorem 6.15 is, technically speaking, that \(\dim \mathcal {U}_a(0)=2\) if \(\nu _a(L)=2\). We may state an improved version of this assertion as a boundary value problem, which is of some interest: it says that if \(\nu _a(L)=2\), then the endpoint a behaves almost as if it were a regular end (in the regular case one works with \(L^1\) instead of \(L^2\)). Since only the behavior of the solutions near a matters, we may assume that b is a regular endpoint.

Proposition 7.6

Suppose that \(\nu _a(L)=2\) and b is a regular endpoint for L. Let \(\phi ,\psi \in \mathcal {B}_a(L)\) be a pair of linearly independent boundary value functionals. Then, the linear continuous map

$$\begin{aligned} \mathcal {D}(L_{\max })\ni f \mapsto (Lf,\phi (f),\psi (f))\in L^2]a,b[\times \mathbb C\times \mathbb C\end{aligned}$$
(7.12)

is bounded and invertible. In particular, for any \(g\in L^2]a,b[\) and any \(\alpha ,\beta \in \mathbb C\), there is a unique \(f\in \mathcal {D}(L_{\max })\) such that \(Lf=g\), \(\phi (f)=\alpha \), and \(\psi (f)=\beta \).

Proof

By Proposition 7.3, the operator \(L_a:\mathcal {D}(L_a)\rightarrow L^2]a,b[\) is bijective; hence, the map (7.12) is injective. Since the map is clearly continuous, by the open mapping theorem it suffices to prove its surjectivity. Let \(g\in L^2]a,b[\) and \(\alpha ,\beta \in \mathbb C\). Since \(L_a\) is surjective, there is \(h\in \mathcal {D}(L_a)\) such that \(Lh=g\). Now, it suffices to show that there is \(k\in \mathcal {N}(L_{\max })\) such that \(\phi (k)=\alpha ,\psi (k)=\beta \), because then \(f=h+k\in \mathcal {D}(L_{\max })\) will satisfy \(Lf=g\), \(\phi (f)=\alpha \), and \(\psi (f)=\beta \). Clearly, it suffices to prove this just for one couple \(\phi ,\psi \). Since \(\mathcal {N}(L_{\max })\) is two dimensional, there are \(u,v\in \mathcal {N}(L_{\max })\) with \(W(u,v)=1\), and we may take \(\phi =\mathbf {u}_a\) and \(\psi =\mathbf {v}_a\) since, by Theorem 5.5, the boundary value functionals \(\mathbf {u}_a,\mathbf {v}_a\in \mathcal {B}_a(L)\) are linearly independent. Then, it suffices to take \(k=-\beta u+\alpha v\). \(\square \)
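In the regular case, the unique solvability asserted by Proposition 7.6 is the familiar initial value problem at a. A minimal numerical sketch (ours, assuming \(V=0\), \(a=0\), \(\phi (f)=f(0)\), \(\psi (f)=f'(0)\)): the unique f with \(-f''=g\), \(f(0)=\alpha \), \(f'(0)=\beta \) is \(f(x)=\alpha +\beta x+\int _0^x(y-x)g(y)\mathrm dy\), i.e., \(f=k+h\) with \(k\in \mathcal {N}(L_{\max })\) fitting the boundary data and h solving \(Lh=g\) with vanishing data at a.

```python
# Our regular-endpoint sketch: solve -f'' = g, f(0) = alpha, f'(0) = beta.
def solve(g, alpha, beta, x, n=4000):
    h = x / n
    # midpoint rule for int_0^x (y - x) g(y) dy
    integral = sum(((k + 0.5) * h - x) * g((k + 0.5) * h) for k in range(n)) * h
    return alpha + beta * x + integral

# exact solution for g = 2 is f(x) = alpha + beta*x - x^2
x, alpha, beta = 0.7, 1.0, -0.5
print(abs(solve(lambda y: 2.0, alpha, beta, x) - (alpha + beta * x - x**2)) < 1e-9)
```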

7.3 Green’s Operators with Two-Sided Boundary Conditions

Recall from Definition 5.11 that if \(\phi \in \mathcal {B}_a\) and \(\psi \in \mathcal {B}_b\) are nonzero functionals, then \(L_{\phi ,\psi }\) is the operator \( L_{\phi ,\psi }\subset L_{\max }\) with

$$\begin{aligned} \mathcal {D}(L_{\phi ,\psi })&:= \{f\in \mathcal {D}(L_{\max })\ \mid \phi (f)= \psi (f)=0\}. \end{aligned}$$

Note that \(L_{\phi ,\psi }^\#=L_{\phi ,\psi }\).

Recall also that if uv are solutions of the equation \(Lf=0\) with \(W(v,u)=1\), we defined in Definition 2.10 the two-sided Green’s operator \(G_{u,v}\)

$$\begin{aligned} G_{u,v}g(x):=\int _x^bu(x)v (y)g(y)\mathrm dy+ \int _a^x v (x)u(y) g(y)\mathrm dy. \end{aligned}$$

Clearly, there exists a close relationship between realizations of L of the form \(L_{\phi ,\psi }\) and Green’s operators of the form \(G_{u,v}\).
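For orientation (our example, under the assumptions \(V=0\) on ]0,1[, \(u(x)=x\in \mathcal {U}_a(0)\), \(v(x)=1-x\in \mathcal {U}_b(0)\), \(W(v,u)=1\)): the functionals \(\mathbf {u}_a,\mathbf {v}_b\) then encode the Dirichlet conditions \(f(0)=f(1)=0\), and \(G_{u,v}\) is the classical Dirichlet Green's operator.

```python
# Our sketch of G_{u,v} for V = 0 on ]0,1[, with u = x, v = 1 - x.
def G_uv(g, x, n=2000):
    u = lambda t: t
    v = lambda t: 1.0 - t
    h1, h2 = (1.0 - x) / n, x / n
    # u(x) * int_x^1 v(y) g(y) dy  +  v(x) * int_0^x u(y) g(y) dy
    right = u(x) * sum(v(x + (k + 0.5) * h1) * g(x + (k + 0.5) * h1)
                       for k in range(n)) * h1
    left = v(x) * sum(u((k + 0.5) * h2) * g((k + 0.5) * h2)
                      for k in range(n)) * h2
    return right + left

# for g = 1:  G_{u,v} g(x) = x(1-x)/2, with -(G g)'' = 1 and G g(0) = G g(1) = 0
x = 0.25
print(abs(G_uv(lambda y: 1.0, x) - x * (1 - x) / 2) < 1e-9)   # True
```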

Proposition 7.7

Suppose \(\phi \in \mathcal {B}_a\), \(\psi \in \mathcal {B}_b\) and \(0\in \mathrm {rs}( L_{\phi ,\psi })\). Then, there exist \(u\in \mathcal {U}_a(0)\) and \(v\in \mathcal {U}_b(0)\) with \(W(v,u)\ne 0\) such that, in the notation of Definition 5.4,

$$\begin{aligned} \phi =\mathbf {u}_a,\quad \psi =\mathbf {v}_b. \end{aligned}$$
(7.13)

Proof

Let us prove the existence of u. Note that by Proposition 7.2 we have \(\dim \mathcal {U}_a(0)\ge 1\). Then, by Proposition 7.1, the operator \(L_{\phi ,\psi }^{-1}\) has an integral kernel \(G_{\phi ,\psi }(\cdot ,\cdot )\) such that for any \(a<c<b\) the restriction of \(G_{\phi ,\psi }(c,\cdot )\) to ]a,c[ belongs to \(\mathcal {D}(L^{a,c}_{\max })\) and satisfies \(LG_{\phi ,\psi }(c,\cdot )=0\) there. If \(f\in \mathcal {D}(L_{\phi ,\psi })\), the relation (7.1) gives

$$\begin{aligned} f(x)=\int _a^b G_{\phi ,\psi }(x,y)(L_{\phi ,\psi } f)(y) \mathrm dy \end{aligned}$$

hence, if \(f(y)=0\) for \(y\ge c\), then taking \(x=c\) we get

$$\begin{aligned} 0=\int _a^c G_{\phi ,\psi }(c,y)(L_{\phi ,\psi } f)(y) \mathrm dy. \end{aligned}$$
(7.14)

Denote by \(L^{a,c}_{\phi ,c}\) the operator in \(L^2]a,c[\) defined by L and the boundary conditions \(\phi (f)=0\) and \(f(c)=f'(c)=0\). Clearly, any function f satisfying these conditions extends to a function in \(\mathcal {D}(L_{\phi ,\psi })\) if we set \(f(x)=0\) for \(x>c\); hence, (7.14) is equivalent to

$$\begin{aligned} \int _a^c G_{\phi ,\psi }(c,\cdot ) L^{a,c}_{\phi ,c} f \mathrm dx=0 \quad \forall f\in \mathcal {D}(L^{a,c}_{\phi ,c}) . \end{aligned}$$

We noted above that \(L_{\phi ,\psi }^\#=L_{\phi ,\psi }\) and by a simple argument this implies \((L^{a,c}_{\phi ,c})^\#=L^{a,c}_{\phi ,0}\equiv L^{a,c}_{\phi }\); hence, the preceding relation means \(G_{\phi ,\psi }(c,\cdot )|_{]a,c[} \in \mathcal {N}(L^{a,c}_{\phi })\). Now, recall that during the proof of Proposition 7.2 we have seen that c may be chosen such that \(G_{\phi ,\psi }(c,\cdot )|_{]a,c[}\ne 0\). Finally, if we fix such a c and denote \(u=G_{\phi ,\psi }(c,\cdot )\), then we get a nonzero element \(u\in \mathcal {U}_a(0)\) such that \(\phi (u)=0\), which, since \(u\ne 0\), is equivalent to \(\phi =\alpha \mathbf {u}_a\) for some \(\alpha \ne 0\); rescaling u, we obtain (7.13).

In an analogous way, we prove the existence of v. Both are nonzero. If u is proportional to v, then they are eigenvectors of \(L_{\phi ,\psi }\) for the eigenvalue 0, which contradicts \(0\in \mathrm {rs}( L_{\phi ,\psi })\). Hence, they are not proportional to one another, so that \(W(v,u)\ne 0\). \(\square \)

Note that in the above proposition we can have \(\phi =0\) or \(\psi =0\), or both. However, u and v are always nonzero.

Suppose now that we start from a two-sided Green’s operator.

Proposition 7.8

Let \(G_{u,v}\) be bounded on \(L^2]a,b[\). Then, \(u\in \mathcal {U}_a(0)\) and \(v\in \mathcal {U}_b(0)\).

Proof

Let \(a<d<b\). If \(G_{u,v}\) is bounded, then so is \(\mathbb {1}_{]a,d]}( x)G_{u,v} \mathbb {1}_{]d,b]}(x)\), where x denotes the operator of multiplication by the variable in ]a,b[. But its integral kernel is

$$\begin{aligned} u(x)\mathbb {1}_{]a,d]}(x)v (y) \mathbb {1}_{]d,b]}(y) \end{aligned}$$

where x and y denote the variables in ]a,b[. This is a rank one operator with norm

$$\begin{aligned} \left( \int _a^d|u|^2(x)\mathrm dx\right) ^{\frac{1}{2}} \left( \int _d^b|v |^2(x)\mathrm dx\right) ^{\frac{1}{2}}. \end{aligned}$$

Both factors are nonzero, since a nonzero solution of \(Lf=0\) cannot vanish on an open interval; since the product is bounded by \(\Vert G_{u,v}\Vert \), both factors are finite. Hence, \(u\in L^2]a,d[\) and \(v\in L^2]d,b[\), that is, \(u\in \mathcal {U}_a(0)\) and \(v\in \mathcal {U}_b(0)\). \(\square \)

Until the end of this subsection, we assume that \(u\in \mathcal {U}_a(0)\) and \(v\in \mathcal {U}_b(0)\) and the functionals \(\phi ,\psi \) are given by (7.13). Thus, we have both the Green's operator \(G_{u,v}\) and the operator \(L_{\phi ,\psi }\).

Let \(\chi \in C^\infty ]a,b[\) be such that \(\chi =1\) close to a and \(\chi =0\) close to b. Clearly,

$$\begin{aligned} \mathcal {D}(L_{\phi ,\psi })=\mathcal {D}(L_{\min })+\mathrm {Span}\big \{ \chi u,(1-\chi ) v\big \}. \end{aligned}$$
(7.15)

We will show that \(G_{u,v}\) is bounded if and only if \(0\in \mathrm {rs}(L_{\phi ,\psi })\). Note, however, that there is no guarantee that \(G_{u,v}\) is bounded.

Proposition 7.9

$$\begin{aligned} G_{u,v}L_\mathrm {c}^2]a,b[&\subset \mathcal {D}(L_{\phi ,\psi }), \end{aligned}$$
(7.16)
$$\begin{aligned} G_{u,v}L^2]a,b[&\subset \mathrm{AC}^1]a,b[. \end{aligned}$$
(7.17)

Moreover, \(G_{u,v}\) is bounded if and only if there exists \(c>0\) such that

$$\begin{aligned} \Vert L_{\phi ,\psi }f\Vert \ge c\Vert f\Vert ,\quad f\in \mathcal {D}(L_{\phi ,\psi }). \end{aligned}$$
(7.18)

If this is the case, then 0 belongs to the resolvent set of \(L_{\phi ,\psi }\), we have \(G_{u,v}=L_{\phi ,\psi }^{-1}\), \(G_{u,v}^\#=G_{u,v}\) and

$$\begin{aligned} \mathcal {D}(L_{\phi ,\psi })=G_{u,v}L^2]a,b[. \end{aligned}$$
(7.19)

Proof

It is easy to see that

$$\begin{aligned} G_{u,v}L_\mathrm {c}^2]a,b[\subset \mathcal {D}(L_\mathrm {c})+\mathrm {Span}\big \{ \chi u,(1-\chi ) v\big \}, \end{aligned}$$

which implies (7.16).

Let \(g\in L^2]a,b[\). For \(a<x<b\), we compute:

$$\begin{aligned} \partial _xG_{u,v} g(x) =u'(x)\int _{x}^bv (y)g(y)\mathrm dy+ v' (x)\int _a^{x}u(y)g(y)\mathrm dy. \end{aligned}$$
(7.20)

Now, the functions \(x\mapsto u'(x),\ v'(x),\ \int _{x}^bv (y)g(y)\mathrm dy,\ \int _a^{x}u(y)g(y)\mathrm dy\) belong to \(\mathrm{AC}]a,b[\). Hence, the right-hand side of (7.20) belongs to \(\mathrm{AC}]a,b[\). Therefore, (7.17) is true. Next, let

$$\begin{aligned} f=f_\mathrm {c}+\alpha \chi u+\beta (1-\chi )v,\quad f_\mathrm {c}\in \mathcal {D}(L_\mathrm {c}). \end{aligned}$$
(7.21)

We compute, integrating by parts,

$$\begin{aligned} G_{u,v}L_{\phi ,\psi }f(x)&= \int _a^b\Big (\big (-\partial _y^2+V(y)\big )G_{u,v}(x,y)\Big )f(y)\mathrm dy \end{aligned}$$
(7.22)
$$\begin{aligned}&+\lim _{y\rightarrow a}\big (G_{u,v}(x,y)f'(y)-\partial _yG_{u,v}(x,y)f(y)\big ) \end{aligned}$$
(7.23)
$$\begin{aligned}&-\lim _{y\rightarrow b}\big (G_{u,v}(x,y)f'(y)-\partial _yG_{u,v}(x,y)f(y)\big ) \end{aligned}$$
(7.24)
$$\begin{aligned}&=f(x)+v(x)W(u,f;a)-u(x)W(v,f;b) \;= \; f(x). \end{aligned}$$
(7.25)

Moreover, functions of the form (7.21) are dense in \(\mathcal {D}(L_{\phi ,\psi })\). Therefore, if \(G_{u,v}\) is bounded, then (7.25) extends to

$$\begin{aligned} G_{u,v}L_{\phi ,\psi }f=f,\quad f\in \mathcal {D}(L_{\phi ,\psi }). \end{aligned}$$
(7.26)

Hence, \(\Vert f\Vert =\Vert G_{u,v}L_{\phi ,\psi }f\Vert \le \Vert G_{u,v}\Vert \Vert L_{\phi ,\psi }f\Vert \) which gives (7.18).

Assume now that \(G_{u,v}\) is bounded on \(L^2]a,b[\). By Proposition 3.12, \(G_{u,v}\) is an \(L^2\) Green's operator. By Proposition 3.9, it is also bounded from \(L^2]a,b[\) to \(\mathcal {D}(L_{\max })\). Therefore, (7.16) extends to

$$\begin{aligned} G_{u,v}L^2]a,b[\subset \mathcal {D}(L_{\phi ,\psi }), \end{aligned}$$
(7.27)

so that

$$\begin{aligned} L_{\phi ,\psi } G_{u,v}g=g,\quad g\in L^2]a,b[. \end{aligned}$$
(7.28)

By (7.26) and (7.28), \(G_{u,v}\) is a (bounded) inverse of \(L_{\phi ,\psi }\) so that (7.18) and (7.19) are true.

Now, assume that (7.18) holds. By (7.16), we then have

$$\begin{aligned} g=L_{\phi ,\psi }G_{u,v}g,\quad g\in L_\mathrm {c}^2]a,b[. \end{aligned}$$
(7.29)

Hence,

$$\begin{aligned} \Vert g\Vert =\Vert L_{\phi ,\psi }G_{u,v}g\Vert \ge c\Vert G_{u,v} g\Vert \end{aligned}$$
(7.30)

on \(L_\mathrm {c}^2]a,b[\), which is dense in \(L^2]a,b[\). Therefore, \(G_{u,v}\) is bounded. \(\square \)

7.4 Classification of Realizations with Non-empty Resolvent Set

In applications, well-posed operators (possessing non-empty resolvent set) are by far the most useful. The following theorem describes a classification of realizations of L with this property.

Theorem 7.10

Suppose that \(L_\bullet \) is a realization of L with a non-empty resolvent set. Then, exactly one of the following statements is true.

  1. (1)

    \(L_\bullet =L_{\max }\).

    Then, also \(L_{\min }=L_\bullet \), so that L possesses a unique realization. We have \(\nu (L)=0\).

    If \(\lambda \in \mathrm {rs}(L_\bullet )\), then \(\dim \mathcal {N}(L_{\max }-\lambda )=0\), \(\dim \mathcal {U}_a(\lambda )=\dim \mathcal {U}_b(\lambda )=1\) and \(\mathcal {U}_a(\lambda )\ne \mathcal {U}_b(\lambda )\). If \(u\in \mathcal {U}_a(\lambda )\) and \(v\in \mathcal {U}_b(\lambda )\) with \(W(v,u)=1\), then

    $$\begin{aligned} (L_\bullet -\lambda )^{-1}=G_{u,v}. \end{aligned}$$

    \(L_\bullet \) is self-transposed and has separated boundary conditions. (See Definition 5.10 for separated boundary conditions.)

  2. (2)

    The inclusion \(\mathcal {D}(L_\bullet )\subset \mathcal {D}(L_{\max })\) is of codimension 1.

    Then, the inclusion \(\mathcal {D}(L_{\min })\subset \mathcal {D}(L_\bullet )\) is of codimension 1 and we have \(\nu (L)=2\).

    If \(\lambda \in \mathrm {rs}(L_\bullet )\), then \(\dim \mathcal {N}(L_{\max }-\lambda )=1\), \(\dim \mathcal {U}_a(\lambda )=2\) and \(\dim \mathcal {U}_b(\lambda )=1\), or \(\dim \mathcal {U}_a(\lambda )=1\) and \(\dim \mathcal {U}_b(\lambda )=2\). We can find \(u\in \mathcal {U}_a(\lambda )\) and \(v\in \mathcal {U}_b(\lambda )\) with \(W(v,u)=1\) such that

    $$\begin{aligned} (L_\bullet -\lambda )^{-1}=G_{u,v}. \end{aligned}$$

    \(L_\bullet \) is self-transposed and has separated boundary conditions.

  3. (3)

    The inclusion \(\mathcal {D}(L_\bullet )\subset \mathcal {D}(L_{\max })\) is of codimension 2.

    Then, the inclusion \(\mathcal {D}(L_{\min })\subset \mathcal {D}(L_\bullet )\) is of codimension 2. We have \(\nu (L)=4\).

    The spectrum of \(L_\bullet \) is discrete, and its resolvents are Hilbert–Schmidt. For any \(\lambda \in \mathbb C\), we have \(\dim \mathcal {N}(L_{\max }-\lambda )=2\), \(\dim \mathcal {U}_a(\lambda )=2\) and \(\dim \mathcal {U}_b(\lambda )=2\).

    If in addition \(L_\bullet \) is self-transposed, has separated boundary conditions, and \(\lambda \in \mathrm {rs}(L_\bullet )\), then we can find \(u\in \mathcal {U}_a(\lambda )\) and \(v\in \mathcal {U}_b(\lambda )\) with \(W(v,u)=1\), such that

    $$\begin{aligned} (L_\bullet -\lambda )^{-1}=G_{u,v}. \end{aligned}$$

    If, instead, \(L_\bullet \) is not self-transposed and has separated boundary conditions, then it has empty spectrum and one of the following possibilities holds:

    1. (i)

      \(L_\bullet =L_a\) and \((L_\bullet -\lambda )^{-1}\) are given by the forward Green’s operator.

    2. (ii)

      \(L_\bullet =L_b\) and \((L_\bullet -\lambda )^{-1}\) are given by the backward Green’s operator.

    We have \(L_a^\#=L_b\), and both (i) and (ii) are described in Proposition 7.5.

7.5 Existence of Realizations with Non-empty Resolvent Set

\(\mathbb C\backslash \mathbb R\) is contained in the resolvent set of every self-adjoint operator. The following proposition gives a generalization of this fact.

Proposition 7.11

Let \(V_{\mathrm {R}}\) and \(V_{\mathrm {I}}\) be the real and imaginary parts of V. Let \(\Vert V_\mathrm {I}\Vert _\infty =:\beta <\infty \). Then,

$$\begin{aligned} \{\lambda \in \mathbb C\mid |{\text {Im}}\lambda |>\beta \} \end{aligned}$$
(7.31)

is contained in the resolvent set of some realization of L. All realizations of L possess only discrete spectrum in (7.31).

Proof

Let \(L_\mathrm {R}:=-\partial _x^2+V_\mathrm {R}\). By Theorem 4.4, \(L_{\mathrm {R},\min }\) is densely defined and \( L_{\mathrm {R},\min }^\# =L_{\mathrm {R},\max }\supset L_{\mathrm {R},\min }\). By the reality of \(V_\mathrm {R}\), \(L_{\mathrm {R},\min }^*=L_{\mathrm {R},\min }^\#\). Therefore, \(L_{\mathrm {R},\min }^*\supset L_{\mathrm {R},\min }\). This means that \(L_{\mathrm {R},\min }\) is Hermitian (symmetric). Let us now apply the well-known theory of self-adjoint extensions of Hermitian operators. Let \(d_\pm :=\dim \mathcal {N}\big (L_{\mathrm {R},\min }^*\mp \mathrm {i}\big )\) be the deficiency indices. Using the fact that \(L_{\mathrm {R},\min }\) is real, we conclude that \(d_+=d_-\). Therefore, \(L_{\mathrm {R},\min }\) possesses at least one self-adjoint extension, which we denote \(L_{\mathrm {R},\bullet }\). By the self-adjointness of \(L_{\mathrm {R},\bullet }\), we have \( \Vert ( L_{\mathrm {R},\bullet }-\lambda )^{-1}\Vert \le |{\text {Im}}\lambda |^{-1}\) for all \(\lambda \not \in \mathbb R\). Set \(L_\bullet := L_{\mathrm {R},\bullet }+\mathrm {i}V_\mathrm {I}\). Clearly,

$$\begin{aligned} L_{\max }\supset L_\bullet \supset L_{\min }. \end{aligned}$$
(7.32)

For \(|{\text {Im}}\lambda |>\beta \), we have \(\Vert \mathrm {i}V_\mathrm {I}( L_{\mathrm {R},\bullet }-\lambda )^{-1}\Vert \le \beta |{\text {Im}}\lambda |^{-1}<1\), so the factor \(\mathbb {1}+\mathrm {i}V_\mathrm {I}( L_{\mathrm {R},\bullet }-\lambda )^{-1}\) is invertible by a Neumann series. Hence, \(\lambda \) belongs to the resolvent set of \(L_\bullet \), and its resolvent is given by

$$\begin{aligned} (L_\bullet - \lambda )^{-1}= ( L_{\mathrm {R},\bullet }-\lambda )^{-1}\big (\mathbb {1}+\mathrm {i}V_\mathrm {I} ( L_{\mathrm {R},\bullet }-\lambda )^{-1}\big )^{-1}. \end{aligned}$$

\(\square \)
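The factorization in the proof can be illustrated in a finite-dimensional toy model (ours; plain \(2\times 2\) matrices stand in for \(L_{\mathrm {R},\bullet }\) and \(V_\mathrm {I}\)):

```python
# Toy check (ours): (L_R + iV_I - lam)^{-1}
#                   = (L_R - lam)^{-1} (1 + iV_I (L_R - lam)^{-1})^{-1}.
def mul(A, B):  # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def inv(A):     # inverse of a 2x2 matrix via the adjugate
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

I2 = [[1.0, 0.0], [0.0, 1.0]]
LR = [[0.0, 1.0], [1.0, 2.0]]        # self-adjoint, plays L_{R,bullet}
iVI = [[0.3j, 0.0], [0.0, -0.2j]]    # i V_I, with beta = 0.3
lam = 0.5 + 1.0j                     # |Im lam| = 1 > beta
lamI = [[lam, 0.0], [0.0, lam]]

R = inv(sub(LR, lamI))                         # (L_R - lam)^{-1}
factored = mul(R, inv(add(I2, mul(iVI, R))))
direct = inv(sub(add(LR, iVI), lamI))          # (L_R + iV_I - lam)^{-1}
err = max(abs(factored[i][j] - direct[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)   # True
```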

Note that the above proposition can be improved to cover some singularities of \(V_\mathrm {I}\). In fact, if there are numbers \(\alpha ,\beta \) with \(0\le \alpha <1\) such that

$$\begin{aligned} \Vert V_{\mathrm {I}}f\Vert ^2\le \alpha ^2\big (\Vert L_{\mathrm {R},\bullet } f\Vert ^2+\beta ^2\Vert f\Vert ^2\big ),\quad \forall f\in \mathcal {D}(L_{\mathrm {R},\bullet }), \end{aligned}$$

then still

$$\begin{aligned} \Vert V_\mathrm {I} ( L_{\mathrm {R},\bullet }-\lambda )^{-1}\Vert \le \alpha <1, \end{aligned}$$

and the conclusion of Proposition 7.11 holds.

7.6 “Pathological” Spectral Properties

We construct now 1d Schrödinger operators whose realizations have an empty resolvent set. Such operators seem to be rather pathological and not very interesting for applications.

Proposition 7.12

There is \(V\in L_\mathrm {loc}^\infty [0,\infty [\) such that if \(L=-\partial ^2+V\), then any operator \(L_\bullet \) on \(L^2]0,\infty [\) with \(L_{\min }\subset L_\bullet \subset L_{\max }\) has empty resolvent set, hence \(\sigma (L_\bullet )= \mathbb C\).

Proof

Let \(I_n=]n^2-n,n^2+n[\) with \(n\ge 1\) integer. Then, \(I_n\) is an open interval of length \(|I_n|=2n\) and \(I_{n+1}\) starts with the point \(n^2+n\) which is the endpoint of \(I_n\). Thus, \(\cup _n I_n\) is a disjoint union equal to \(]0,\infty [\,\setminus \{n^2+n\mid n\ge 1\}\). Let \(\mathbb {P}\) be the set of prime numbers \(\mathbb {P}=\{2,3,5,\ldots \}\) and for each prime p denote \(J_p=\cup _{k\ge 1}I_{p^k}\). We get a family of open subsets \(J_p\) of \(]0,\infty [\) which are pairwise disjoint and each of them contains intervals of length as large as we wish. Now, let \(p\mapsto c_p\) be a bijective map from \(\mathbb {P}\) to the set of complex rational numbers and let us define a function \(V:[0,\infty [\,\rightarrow \mathbb C\) by the following rules: if \(x\in J_p\) for some prime p, then \(V(x)=c_p\) and \(V(x)=0\) if \(x\notin \cup _pJ_p\). Then, V is a locally bounded function whose range contains all the complex rational numbers. We set \(L=-\partial ^2+V(x)\) and we prove that the spectrum of any \(L_\bullet \) with \(L_{\min }\subset L_\bullet \subset L_{\max }\) is equal to \(\mathbb C\). Since the spectrum is closed, it suffices to show that any complex rational number c belongs to the spectrum of any \(L_\bullet \). If not, there is a number \(\alpha >0\) such that \(\Vert (L_\bullet -c)\phi \Vert \ge \alpha \Vert \phi \Vert \) for any \(\phi \in \mathcal {D}(L_\bullet )\). If r is a (large) positive number then there is an open interval I of length \(\ge r\) such that \(V(x)=c\) on I. Let \(\phi \in C^\infty _{\mathrm {c}}(I)\) be such that \(\phi (x)=1\) for x at distance \(\ge 1\) from the boundary of I and with \(|\phi ''|\le \beta \) with a constant \(\beta \) independent of r (take \(r>3\) for example). 
Then, \(\phi \in \mathcal {D}(L_{\min })\) and \((L-c)\phi =-\phi ''+V\phi -c\phi =-\phi ''\); hence, \(\Vert \phi ''\Vert =\Vert (L_\bullet -c)\phi \Vert \ge \alpha \Vert \phi \Vert \) so \(\alpha \Vert \phi \Vert \le 2\beta \) which is impossible because the left hand side is of order \(\sqrt{r}\). One may choose V of class \(C^\infty \) by a simple modification of this construction. \(\square \)
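The bookkeeping of the intervals in the construction is elementary and can be sketched as follows (our code; it checks that \(|I_n|=2n\), that consecutive intervals abut, and that distinct primes have disjoint sets of powers, so the \(J_p\) are pairwise disjoint):

```python
# Our sketch of the intervals I_n = ]n^2 - n, n^2 + n[ used in the proof.
def I(n):
    return (n * n - n, n * n + n)

# |I_n| = 2n, and the right end of I_n is the left end of I_{n+1}
assert all(I(n)[1] - I(n)[0] == 2 * n for n in range(1, 200))
assert all(I(n)[1] == I(n + 1)[0] for n in range(1, 200))

# J_p = union over k of I_{p^k}: the index sets {p^k} for distinct
# primes are disjoint by unique factorization
powers = lambda p, K=6: {p ** k for k in range(1, K)}
print(powers(2).isdisjoint(powers(3)))   # True
```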

8 Potentials with a Negative Imaginary Part

8.1 Dissipative 1D Schrödinger Operators

Recall that an operator A is called dissipative if

$$\begin{aligned} {\text {Im}}(f|Af)\le 0,\quad f\in \mathcal {D}(A), \end{aligned}$$
(8.1)

that is, if its numerical range is contained in \(\{\lambda \in \mathbb C\mid {\text {Im}}\lambda \le 0\}\). It is called maximal dissipative if in addition its spectrum is contained in \(\{\lambda \in \mathbb C\mid {\text {Im}}\lambda \le 0\}\). The following criterion is well-known [18].

Proposition 8.1

Assume A is closed, densely defined and dissipative. Then, A is maximal dissipative if and only if \(-A^*\) is dissipative, and then \(-A^*\) is maximal dissipative.

Let us now consider \(L=-\partial _x^2+V(x)\) with \(V\in L_\mathrm {loc}^1]a,b[\).

Proposition 8.2

The operator \(L_{\min }\) is dissipative if and only if \({\text {Im}}V\le 0\). It is maximal dissipative if in addition \(L_{\min }=L_{\max }\).

Proof

If \(f\in \mathcal {D}(L_{\max })\), then

$$\begin{aligned} (f|L_{\max }f)&=\int _a^b \big (\overline{f}'f'-(\overline{f}f')' +V\overline{f}f\big ) \\&=\lim _{\begin{array}{c} a_1\rightarrow a \\ b_1\rightarrow b \end{array}} \left( \overline{f}(a_1)f'(a_1) -\overline{f}(b_1)f'(b_1) +\int _{a_1}^{b_1} \left( |f'|^2+V|f|^2 \right) \right) ; \end{aligned}$$

hence,

$$\begin{aligned} {\text {Im}}(f|L_{\max }f) = \lim _{\begin{array}{c} a_1\rightarrow a \\ b_1\rightarrow b \end{array}} \left( \int _{a_1}^{b_1}{\text {Im}}(V)|f|^2 + {\text {Im}}(\overline{f}(a_1)f'(a_1)) -{\text {Im}}(\overline{f}(b_1)f'(b_1)) \right) . \end{aligned}$$
(8.2)

Thus,

$$\begin{aligned} {\text {Im}}(f|L_{\min }f) = \int _{a}^{b}{\text {Im}}(V)|f|^2\le 0,\quad f\in \mathcal {D}(L_{\mathrm {c}}). \end{aligned}$$
(8.3)

By continuity, (8.3) extends to \(f\in \mathcal {D}(L_{\min })\), which clearly implies that \(L_{\min }\) is dissipative. The same argument as above shows that \(-{\overline{L}}_{\min }\) is dissipative.

If \(L_{\min }=L_{\max }\), then \({\overline{L}}_{\min }={\overline{L}}_{\max }\). But \(L_{\min }^*=\overline{L}_{\max }\). Hence, \(-L_{\min }^*\) is dissipative. By Proposition 8.1, this proves that \(L_{\min }\) is maximal dissipative.

If \(L_{\min }\ne L_{\max }\), then the spectrum of \(L_{\min }\) is \(\mathbb C\), so \(L_{\min }\) is not maximal dissipative. \(\square \)

A convenient criterion for dissipativity holds if \(\mathcal {D}(A)\subset \mathcal {D}(A^*)\): then, A is dissipative if and only if \(\frac{1}{2\mathrm {i}}(A-A^*)\le 0\). Unfortunately, an operator A can be dissipative even if \(\mathcal {D}(A)\cap \mathcal {D}(A^*)=\{0\}.\)
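In finite dimensions, where \(\mathcal {D}(A)=\mathcal {D}(A^*)\) is automatic, the criterion is easy to test (our illustration; the matrix A mimics a discretized \(-\partial _x^2+V\) with \({\text {Im}}V\le 0\) on the diagonal):

```python
# Our finite-dimensional check: A is dissipative in the sense of (8.1),
# Im(f|Af) <= 0, iff the Hermitian matrix S = (A - A*)/(2i) is <= 0.
A = [[1.0 - 0.5j, 2.0], [2.0, 3.0 - 0.1j]]
S = [[(A[i][j] - A[j][i].conjugate()) / 2j for j in range(2)] for i in range(2)]
# here S = diag(-0.5, -0.1); eigenvalues via the characteristic polynomial
tr = (S[0][0] + S[1][1]).real
det = (S[0][0] * S[1][1] - S[0][1] * S[1][0]).real
disc = (tr * tr - 4 * det) ** 0.5
eigs = ((tr - disc) / 2, (tr + disc) / 2)
print(all(e <= 1e-12 for e in eigs))   # True: A is dissipative
```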

This is related to a certain difficulty when one tries to study the dissipativity of Schrödinger operators with singular complex potentials. Suppose that we want to check that \(L_\bullet \) is dissipative using this criterion. If \(L_{\min }\subset L_\bullet \subset L_{\max }\), then \(\overline{L}_{\min }\subset L_\bullet ^*\subset \overline{L}_{\max }\). But we may have \(\mathcal {D}(\overline{L}_{\max }) \cap \mathcal {D}(L_{\max })=\{0\}\) (Lemma 4.10), hence we could have \(\mathcal {D}(L_\bullet )\cap \mathcal {D}(L_\bullet ^*)=\{0\}\) which is annoying. Indeed, although \(W_{x}(\overline{f},f)=2\mathrm {i}{\text {Im}}(\overline{f}(x)f'(x))\), we cannot use in (8.2) the existence of the limits (4.4) and (4.5) because in general \({\overline{f}}\not \in \mathcal {D}(L_{\max })\).

Let us describe the action of conjugation on boundary conditions. The conjugate of \(\alpha \in \mathcal {B}(L)\) is the boundary functional \({\overline{\alpha }}\in \mathcal {B}({\overline{L}})\) given by

$$\begin{aligned} \overline{\alpha }(f):=\overline{\alpha ({\overline{f}})} . \end{aligned}$$
(8.4)

Clearly, \(\alpha \mapsto {\overline{\alpha }}\) is a bijective anti-linear map \(\mathcal {B}(L)\rightarrow \mathcal {B}({\overline{L}})\) which sends \(\mathcal {B}_a(L)\) into \(\mathcal {B}_a(\overline{L})\) and \(\mathcal {B}_b(L)\) into \(\mathcal {B}_b({\overline{L}})\). Then, if \(g\in \mathcal {D}(L_{\max })\) is a representative of \(\alpha \in \mathcal {B}_a\), so that

$$\begin{aligned} \alpha (f)=W_a(g,f),\quad f\in \mathcal {D}(L_{\max }), \end{aligned}$$
(8.5)

then

$$\begin{aligned} {\overline{\alpha }}(f)=W_a({\overline{g}},f) ,\quad f\in \mathcal {D}(\overline{L}_{\max }) . \end{aligned}$$
(8.6)

Recall that \(\mathcal {B}_a\) and \(\mathcal {B}_b\) are equipped with symplectic forms \({[\![}\cdot |\cdot {]\!]}_a\) and \({[\![}\cdot |\cdot {]\!]}_b\), see (5.13).

Fix \(\alpha \in \mathcal {B}_a\) and \(\beta \in \mathcal {B}_b\). Consider \(L_{\alpha ,\beta }\), the realization of L introduced in Definition 5.11. Recall that it is the restriction of \(L_{\max }\) to \(\mathcal {D}(L_{\alpha \beta })=\{f\in \mathcal {D}(L_{\max })\mid \alpha (f)=\beta (f)=0\}\).

Proposition 8.3

\(L_{\alpha \beta }^*=\overline{L}_{{\overline{\alpha }}\,{\overline{\beta }}}\).

Proof

By Proposition 7.9, \(L_{\alpha \beta }^\#={L}_{\alpha \,\beta }\). Clearly, \(\overline{L_{\alpha \beta } } =\overline{L}_{{\overline{\alpha }}\,{\overline{\beta }}}\). Therefore, the proposition follows from \(L_{\alpha \beta }^*=\overline{L_{\alpha \,\beta }^\#}\). \(\square \)

In the following two subsections, we will describe boundary conditions that guarantee the dissipativity of \(L_{\alpha \beta }\). We will consider separately two classes of potentials: \({\text {Im}}V\in L_\mathrm {loc}^2]a,b[\) and \(V\in L^1]a,b[\).

8.2 Dissipative Boundary Conditions for Locally \(L^{2}\) Potentials

Note first the following sesquilinear version of Green’s identity (4.6).

Lemma 8.4

Suppose that \(f,\overline{f},g\in \mathcal {D}(L_{\max })\). Then, \({\text {Im}}( V) f\in L^2]a,b[\) and

$$\begin{aligned} (L_{\max }f|g)-(f|L_{\max }g) =-2\mathrm {i}\int _a^b{\text {Im}}(V){\overline{f}}g+W_b(\overline{f},g)-W_a({\overline{f}},g). \end{aligned}$$
(8.7)

Proof

The left hand side of (8.7) is

$$\begin{aligned} \langle \overline{L_{\max }f}|g\rangle -\langle {\overline{f}}|L_{\max }g\rangle&= \langle \overline{L_{\max }f}|g\rangle -\langle L_{\max }{\overline{f}}|g\rangle \end{aligned}$$
(8.8)
$$\begin{aligned}&\quad +\langle L_{\max }{\overline{f}}|g\rangle -\langle {\overline{f}}|L_{\max }g\rangle . \end{aligned}$$
(8.9)

Then, we apply \(\overline{L_{\max }}-L_{\max }=-2\mathrm {i}{\text {Im}}(V)\) to (8.8) and Green’s identity (4.6) to (8.9). \(\square \)
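As a quick numerical sanity check of (8.7), one can test the identity on explicit smooth functions. In the sketch below, the potential \(V(x)=x-\mathrm {i}x^2\) and the functions f, g are arbitrary illustrative choices (not from the paper), picked so that f, \(f'\) and g vanish at the endpoints, making the Wronskian terms \(W_a\), \(W_b\) drop out.

```python
import numpy as np

# Illustrative check of the sesquilinear Green's identity (8.7) on ]0,1[.
# V, f, g below are assumed toy data; f = f' = 0 and g = 0 at both
# endpoints, so W_a(f-bar, g) = W_b(f-bar, g) = 0.
xs = np.linspace(0.0, 1.0, 20001)
h = xs[1] - xs[0]
V = xs - 1j * xs**2                      # Im V <= 0 on ]0,1[

f   = (1 + 2j) * xs**2 * (1 - xs)**2     # smooth, f = f' = 0 at 0 and 1
d2f = (1 + 2j) * (2 - 12*xs + 12*xs**2)  # exact f''
g   = np.sin(np.pi * xs)                 # smooth, g = 0 at 0 and 1
d2g = -np.pi**2 * g                      # exact g''

Lf = -d2f + V * f                        # L_max f
Lg = -d2g + V * g                        # L_max g

trap = lambda y: (y.sum() - y[0]/2 - y[-1]/2) * h   # trapezoid rule

lhs = trap(np.conj(Lf) * g) - trap(np.conj(f) * Lg)  # (L_max f|g) - (f|L_max g)
rhs = -2j * trap(V.imag * np.conj(f) * g)            # Wronskian terms vanish here
```

Up to quadrature error, `lhs` and `rhs` agree, which is exactly (8.7) with vanishing boundary terms.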

For the rest of the argument, we need the equality of the domains \(\mathcal {D}(\overline{L}_{\max })=\mathcal {D}(L_{\max })\) which, by Lemma 4.10, is equivalent to \({\text {Im}}V\in L_\mathrm {loc}^2]a,b[\). Then, we have \(\mathcal {B}(L)=\mathcal {B}({\overline{L}})\), hence \(\alpha \mapsto {\overline{\alpha }}\) is a conjugation in \(\mathcal {B}(L)\) which leaves invariant the subspaces \(\mathcal {B}_a(L)\) and \(\mathcal {B}_b(L)\). Recall that in (5.13), we equipped \(\mathcal {B}_a(L)\) with a symplectic form \({[\![}\cdot |\cdot {]\!]}_a\). Note that \({[\![}{\overline{\alpha }}|\alpha {]\!]}_a\) is well-defined for any \(\alpha \in \mathcal {B}_a(L)\).

Lemma 8.5

If \({\text {Im}}V\in L_\mathrm {loc}^2]a,b[\) and \(\alpha \in \mathcal {B}_a\), then the number \({[\![}{\overline{\alpha }}|\alpha {]\!]}_a\) is purely imaginary and

$$\begin{aligned} \frac{1}{2\mathrm {i}}{[\![}{\overline{\alpha }}|\alpha {]\!]}_a\ge 0 \ \Longleftrightarrow \ \frac{1}{2\mathrm {i}} W_a({\overline{f}},f)\ge 0 \ \forall f\in \mathcal {D}(L_{\max }) \text { with } \alpha (f)=0 . \end{aligned}$$

Proof

Let \(g\in \mathcal {D}(L_{\max })\) be a representative of \(\alpha \), so that (8.5) and (8.6) are true. Then,

$$\begin{aligned} {[\![}{\overline{\alpha }}|\alpha {]\!]}_a=W_a({\overline{g}},g) =\lim _{c\searrow a}\big ({\overline{g}}(c)g'(c)-{\overline{g}}'(c)g(c)\big ), \end{aligned}$$

which proves that \({[\![}{\overline{\alpha }}|\alpha {]\!]}_a\) is purely imaginary. Now, by the Kodaira identity

$$\begin{aligned} W_a({\overline{g}},g)W_a({\overline{f}},f)= |W_a(g,f)|^2-|W_a({\overline{g}},f)|^2. \end{aligned}$$

But \(\alpha (f)=0\) means \(W_a(g,f)=0\). Therefore, \({[\![}{\overline{\alpha }}|\alpha {]\!]}_a W_a({\overline{f}},f)\le 0\), so the purely imaginary numbers \({[\![}{\overline{\alpha }}|\alpha {]\!]}_a\) and \(W_a({\overline{f}},f)\) cannot lie on opposite sides of 0; taking \(f=g\) settles the implication \(\Leftarrow \). \(\square \)

Theorem 8.6

Let \(\alpha \in \mathcal {B}_a\) and \(\beta \in \mathcal {B}_b\). If \({\text {Im}}V \in L^2_\mathrm {loc}]a,b[\), we have

$$\begin{aligned} L_{\alpha \beta } \text { is dissipative } \Longleftrightarrow {\text {Im}}V\le 0, \ \frac{1}{2\mathrm {i}}{[\![}{\overline{\alpha }}|\alpha {]\!]}_a\le 0,\text { and } \frac{1}{2\mathrm {i}}{[\![}{\overline{\beta }}|\beta {]\!]}_b\ge 0. \end{aligned}$$
(8.10)

And then \(L_{\alpha \beta }\) is maximal dissipative.

Proof

We consider only the case \(\alpha \ne 0,\beta \ne 0\). Lemma 8.4 gives

$$\begin{aligned} {\text {Im}}(f|L_{\max }f) =\int _a^b{\text {Im}}(V)|f|^2+\frac{1}{2\mathrm {i}}W_a({\overline{f}},f) - \frac{1}{2\mathrm {i}}W_b({\overline{f}},f) \quad \forall f\in \mathcal {D}(L_{\max }) \end{aligned}$$
(8.11)

and this implies that \(L_{\alpha \beta }\) is dissipative if and only if

$$\begin{aligned} \frac{1}{2\mathrm {i}}W_a({\overline{f}},f) - \frac{1}{2\mathrm {i}}W_b({\overline{f}},f) \le \int _a^b{\text {Im}}(-V)|f|^2 \quad \forall f\in \mathcal {D}(L_{\alpha \beta }) . \end{aligned}$$
(8.12)

If \(L_{\alpha \beta }\) is dissipative, by taking \(f\in \mathcal {D}(L_{\mathrm {c}})\) in (8.12) we get \({\text {Im}}(-V)\ge 0\). Then, by choosing \(f\in \mathcal {D}(L_{\alpha \beta })\) equal to zero near b, we get \(\frac{1}{2\mathrm {i}}W_a({\overline{f}},f)\le \int _a^b{\text {Im}}(-V)|f|^2\). If we fix such an f and replace it in this estimate by \(f\theta \) where \(\theta \in C^\infty (\mathbb R)\) with \(0\le \theta \le 1\) and \(\theta (x)=1\) on a neighborhood of a, then we get \(\frac{1}{2\mathrm {i}}W_a({\overline{f}},f)\le \int _a^b{\text {Im}}(-V)|f\theta |^2\). Since the right hand side here can be made as small as we wish by taking \(\theta \) equal to zero for \(x>d>a\) with d close to a, we see that we must have \(\frac{1}{2\mathrm {i}}W_a({\overline{f}},f)\le 0\) and this clearly implies the same inequality for any \(f\in \mathcal {D}(L_{\alpha \beta })\). Then, we get \(\frac{1}{2\mathrm {i}}{[\![}{\overline{\alpha }}|\alpha {]\!]}_a\le 0\) by Lemma 8.5. We similarly prove \(\frac{1}{2\mathrm {i}}{[\![}{\overline{\beta }}|\beta {]\!]}_b\ge 0\).

We proved the implication \(\Rightarrow \) in (8.10), and \(\Leftarrow \) is clear by (8.12). It remains to show the maximal dissipativity assertion. Due to Propositions 8.1 and 8.3, it suffices to prove that the operator \(-L^*_{\alpha \beta }=-\overline{L}_{{\overline{\alpha }}\,{\overline{\beta }}}\) is dissipative. Observe first that the relation \(\mathcal {D}(\overline{L}_{\max })=\mathcal {D}(L_{\max })\) implies \(\mathcal {D}(\overline{L}_{{\overline{\alpha }}\,{\overline{\beta }}})=\mathcal {D}(L_{{\overline{\alpha }}\,{\overline{\beta }}})\). Then, (8.11) gives

$$\begin{aligned} {\text {Im}}(f|-\overline{L}_{\max }f) =\int _a^b{\text {Im}}(V)|f|^2-\frac{1}{2\mathrm {i}}W_a({\overline{f}},f) +\frac{1}{2\mathrm {i}}W_b({\overline{f}},f) \quad \forall f\in \mathcal {D}(L_{\max }) \end{aligned}$$
(8.13)

hence instead of (8.12) we get the condition

$$\begin{aligned} -\frac{1}{2\mathrm {i}}W_a({\overline{f}},f) + \frac{1}{2\mathrm {i}}W_b({\overline{f}},f) \le \int _a^b{\text {Im}}(-V)|f|^2 \quad \forall f\in \mathcal {D}(L_{{\overline{\alpha }}\,{\overline{\beta }}}) . \end{aligned}$$

As above we get \(\frac{1}{2\mathrm {i}}W_a({\overline{f}},f) \ge 0\) and \(\frac{1}{2\mathrm {i}}W_b({\overline{f}},f)\le 0\) for any \(f\in \mathcal {D}(L_{{\overline{\alpha }}\,{\overline{\beta }}})\). Thus, if \(f\in \mathcal {D}(L_{\max })\) and \({\overline{\alpha }}(f)=0\), then \(\frac{1}{2\mathrm {i}}W_a({\overline{f}},f) \ge 0\) and by Lemma 8.5 this means \(\frac{1}{2\mathrm {i}}{[\![}\alpha |{\overline{\alpha }}{]\!]}_a\ge 0\), which is equivalent to \(\frac{1}{2\mathrm {i}}{[\![}{\overline{\alpha }}|\alpha {]\!]}_a\le 0\). Similarly, we get \(\frac{1}{2\mathrm {i}}{[\![}{\overline{\beta }}|\beta {]\!]}_b\ge 0\), and the last two conditions are satisfied by the assumptions in the right hand side of (8.10). Hence, \(-L^*_{\alpha \beta }\) is dissipative. \(\square \)

8.3 Dissipative Regular Boundary Conditions

Suppose that the operator L has a regular left endpoint at a. As we noted several times, for regular boundary conditions \(\mathcal {B}_a\) can be identified with \(\mathbb C^2\). Indeed,

$$\begin{aligned} \alpha (f)=\alpha _0f'(a)-\alpha _1f(a), \end{aligned}$$

is the general form of a boundary functional, with \(\alpha =(\alpha _0,\alpha _1)\in \mathbb C^2\) and \(f\in \mathcal {D}(L_{\max })\).

The space \(\mathcal {B}_a\) is equipped with the symplectic form \({[\![}\cdot |\cdot {]\!]}_a\), which coincides with the usual (two dimensional) vector product:

$$\begin{aligned} {[\![}\alpha |\beta {]\!]}_a=\alpha _0\beta _1-\alpha _1\beta _0=\alpha \times \beta . \end{aligned}$$

Thus, if we write \(\mathbf {f}_a:=\big (f(a),f'(a)\big )\), an alternative notation for \(\alpha (f)\) is

$$\begin{aligned} \alpha (f)=\alpha \times \mathbf {f}_a. \end{aligned}$$

Note that there is no guarantee that \(\mathcal {D}(L_{\min })\) and \(\mathcal {D}(L_{\max })\) are invariant with respect to complex conjugation. However, the space \(\mathcal {B}_a\simeq \mathbb C^2\) is equipped with the obvious complex conjugation:

$$\begin{aligned} {\overline{\alpha }}(f)={\overline{\alpha }}_0f'(a)-{\overline{\alpha }}_1f(a)={\overline{\alpha }}\times \mathbf {f}_a. \end{aligned}$$

Lemma 8.7

(1) \(\alpha \times \beta =0\) if and only if the vectors \(\alpha ,\beta \) are collinear.

(2) \({\overline{\alpha }}\times \alpha \in \mathrm {i}\mathbb R\), and \({\overline{\alpha }}\times \alpha =0\) if and only if \(\alpha \) is proportional to a real vector.

(3) \(({\overline{\alpha }}\times \alpha )({\overline{\beta }}\times \beta )=|\alpha \times \beta |^2 -|{\overline{\alpha }}\times \beta |^2\).

Proof

(1) If \(\alpha _0\beta _1=\alpha _1\beta _0\) and \(\beta \ne 0\), then \(\beta _k=0\Rightarrow \alpha _k=0\), and if \(\beta _0\ne 0\ne \beta _1\), then \(\alpha _0/\beta _0=\alpha _1/\beta _1\). (2) If \({\overline{\alpha }}\times \alpha =0\), we get \({\overline{\alpha }}=c^2\alpha \) for some complex c with \(|c|=1\), which implies \(\overline{c\alpha }=c\alpha \). (3) follows by the Kodaira identity. \(\square \)
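These identities are elementary algebra in \(\mathbb C^2\) and are easy to confirm numerically; the sketch below (illustrative, with randomly drawn complex vectors) checks (1)–(3).

```python
import numpy as np

# Numerical confirmation of Lemma 8.7 for the complex "vector product"
#   alpha x beta = alpha_0 * beta_1 - alpha_1 * beta_0   on C^2.
def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

rng = np.random.default_rng(0)
for _ in range(100):
    a = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    b = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    # (2): conj(alpha) x alpha is purely imaginary
    assert abs(cross(np.conj(a), a).real) < 1e-12
    # (3): the Kodaira identity
    lhs = cross(np.conj(a), a) * cross(np.conj(b), b)
    rhs = abs(cross(a, b))**2 - abs(cross(np.conj(a), b))**2
    assert abs(lhs - rhs) < 1e-10

# (1): collinear vectors have vanishing product ...
a = np.array([1 + 2j, 3 - 1j])
assert abs(cross(a, (2 - 1j) * a)) < 1e-12
# ... and a vector proportional to a real one satisfies conj(a) x a = 0
a_real = (2 + 1j) * np.array([1.0, 3.0])
assert abs(cross(np.conj(a_real), a_real)) < 1e-12
```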

Here is a version of Lemma 8.4 for the regular case.

Lemma 8.8

Let \(V\in L^1]a,b[\). Suppose that \(f,g\in \mathcal {D}(L_{\max })\). Then,

$$\begin{aligned} (L_{\max }f|g)-(f|L_{\max }g)&=-2\mathrm {i}\int _a^b{\text {Im}}(V){\overline{f}}g+W_b(\overline{f},g)-W_a({\overline{f}},g). \end{aligned}$$
(8.14)

Next, we have a version of Theorem 8.6 for the regular case. Fix nonzero vectors \(\alpha ,\beta \in \mathbb C^2\) and define \(L_{\alpha \beta }\) by imposing the boundary conditions at a and b:

$$\begin{aligned} f(a)\alpha _1-f'(a)\alpha _0=0, \quad f(b)\beta _1-f'(b)\beta _0=0 . \end{aligned}$$

In this context, it is quite easy to prove that \(L^*_{\alpha \beta }=\overline{L}_{{\overline{\alpha }}\,{\overline{\beta }}}\).

Theorem 8.9

Suppose that \(a,b\) are finite and \(V\in L^1]a,b[\). Then,

$$\begin{aligned} L_{\alpha \beta }\text { is dissipative }\Leftrightarrow {\text {Im}}V\le 0,\quad {\text {Im}}({\overline{\alpha }}_0\alpha _1)\le 0,\text { and } {\text {Im}}({\overline{\beta }}_0\beta _1)\ge 0. \end{aligned}$$

And in this case \(L_{\alpha \beta }\) is maximal dissipative.

Proof

The proof is similar to that of Theorem 8.6, but much simpler. We use Lemma 8.8 instead of Lemma 8.4 and get the same relation (8.12) as necessary and sufficient condition for dissipativity. Then, we use

$$\begin{aligned} \frac{1}{2\mathrm {i}}{[\![}{\overline{\alpha }}|\alpha {]\!]}_a = \frac{1}{2\mathrm {i}}({\overline{\alpha }}_0\alpha _1-{\overline{\alpha }}_1\alpha _0)= {\text {Im}}({\overline{\alpha }}_0\alpha _1) \end{aligned}$$

and a similar relation for \(\beta \). Finally, when checking the dissipativity of \(-L^*_{\alpha \beta }\), note that this operator is associated with the differential expression \(\partial _x^2-\overline{V}\), which explains the difference of sign. \(\square \)
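To make the sign conditions concrete, here is a small numerical illustration (assumed toy data, not part of the proof): Dirichlet conditions \(\alpha =\beta =(0,1)\) give \({\text {Im}}({\overline{\alpha }}_0\alpha _1)={\text {Im}}({\overline{\beta }}_0\beta _1)=0\), so for a constant potential \(V=-\mathrm {i}\) the theorem predicts dissipativity, and a standard finite-difference discretization of L indeed satisfies \({\text {Im}}(f|Lf)\le 0\) for every vector f.

```python
import numpy as np

# Assumed toy model: three-point discretization of L = -d^2/dx^2 + V on
# ]0,1[ with constant V = -1j and Dirichlet conditions f(0) = f(1) = 0,
# i.e. alpha = beta = (0,1).
n = 200
h = 1.0 / (n + 1)
lap = (np.diag(np.full(n, 2.0))
       - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h**2      # real symmetric -Laplacian
A = lap + (-1j) * np.eye(n)                       # discrete L

rng = np.random.default_rng(1)
forms = [np.vdot(f, A @ f)                        # (f|Lf); vdot conjugates f
         for f in (rng.standard_normal(n) + 1j * rng.standard_normal(n)
                   for _ in range(50))]
max_imag = max(q.imag for q in forms)             # should be negative
```

The real symmetric part contributes nothing to the imaginary part of the quadratic form, so \({\text {Im}}(f|Lf)={\text {Im}}(V)\Vert f\Vert ^2<0\) here, in accordance with (8.12).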

8.4 Weyl Circle in the Regular Case

In this subsection, we fix a regular operator L whose potential has a negative imaginary part. We study solutions of \((L-\lambda )f=0\) for \({\text {Im}}\lambda >0\) with real boundary conditions. In Theorem 8.10, we show that they define a certain circle in the complex plane called the Weyl circle. This result will be needed in Sect. 8.5, where general boundary conditions are studied.

We will use an argument essentially due to H. Weyl in the real case, cf. [5, 20, 21] for example. The Weyl circle for potentials with semi-bounded imaginary part was first treated in [24], see [3] for more recent results.

Let us denote \(U={\text {Im}}(\lambda -V)\) and

$$\begin{aligned} ({f}|{g})_U=\int _a^b \overline{f}gU . \end{aligned}$$
(8.15)

We set \(\Vert f\Vert _U^2=({f}|{f})_U\) and note that if \(U\ge 0\), then \(({\cdot }|{\cdot })_U\) is a positive Hermitian form and \(\Vert \cdot \Vert _U\) is the corresponding seminorm. Now, if \(f,g\in \mathcal {D}(L_{\max })\) and \(Lf=\lambda f\), \(Lg=\lambda g\) for some complex number \(\lambda \), then (8.7) can be rewritten as

$$\begin{aligned} 2\mathrm {i}({f}|{g})_U= W_a({\overline{f}},g) -W_b({\overline{f}},g) . \end{aligned}$$
(8.16)

Theorem 8.10

Assume that \({\text {Im}}V\le 0\) and \({\text {Im}}\lambda >0\). Let \(u,v\) be solutions of the equation \(Lf=\lambda f\) with real boundary conditions at a and satisfying \(W(v,u)=1\). If w is a solution of \(Lf=\lambda f\) with a real boundary condition at b, normalized so that \(W(w,u)=1\), then there is a unique \(m\in \mathbb C\) such that \(w=mu+v\); this number lies on the circle

$$\begin{aligned} \int _a^b|mu+v|^2{\text {Im}}(\lambda -V) ={\text {Im}}m, \end{aligned}$$
(8.17)

which has

$$\begin{aligned} \text {center } c=\frac{\mathrm {i}/2-({u}|{v})_U}{\Vert u\Vert ^2_U}=\frac{W_b(\overline{u},v)}{2\mathrm {i}\Vert u\Vert _U^2} \quad \text {and radius } r=\frac{1}{2\Vert u\Vert _U^2} . \end{aligned}$$
(8.18)

Conversely, let m be a complex number on the circle (8.17), and define w by \(w=mu+v\). Then, w has a real boundary condition at b and \(W(w,u)=1\).

Proof

From Lemma 8.7 (2) and the reality of the boundary conditions at a, we get

$$\begin{aligned} W_a({\overline{u}},u)=0,\quad W_a({\overline{v}},v)=0. \end{aligned}$$
(8.19)

This implies

$$\begin{aligned} \Vert u\Vert _U^2=\frac{\mathrm {i}}{2}W_b({\overline{u}},u),\quad \Vert v\Vert _U^2=\frac{\mathrm {i}}{2}W_b({\overline{v}},v), \end{aligned}$$
(8.20)

due to (8.16). And if w is as in the first part of the theorem, then the same argument gives

$$\begin{aligned} \Vert w\Vert _U^2 =\frac{1}{2\mathrm {i}} W_a({\overline{w}},w). \end{aligned}$$
(8.21)

Since \(u,v\) are linearly independent solutions of \(Lf=\lambda f\), if w is another solution, then we have \(w=mu+nv\) for uniquely determined complex numbers \(m,n\). Since \(W(v,u)=1\) and \(W(w,u)=mW(u,u)+nW(v,u)=n\), the normalization \(W(w,u)=1\) gives \(n=1\).

Now, fix \(w=mu+v\). Using (8.19) and \(W_a(u,v)=-1\), we get

$$\begin{aligned} W_a({\overline{w}},w) =|m|^2W_a({\overline{u}},u)+\overline{m}W_a({\overline{u}},v)+ mW_a({\overline{v}},u)+W_a({\overline{v}},v) = 2\mathrm {i}{\text {Im}}m . \end{aligned}$$
(8.22)

From (8.21) and (8.22), we get

$$\begin{aligned} \Vert w\Vert _U^2 = {\text {Im}}m . \end{aligned}$$
(8.23)

From this relation, we get

$$\begin{aligned} {\text {Im}}m=\Vert mu+v\Vert ^2_U =|m|^2\Vert u\Vert ^2_U + 2{\text {Re}}\big (\overline{m}({u}|{v})_U\big )+ \Vert v\Vert ^2_U \end{aligned}$$
(8.24)

and since \({\text {Im}}m = 2{\text {Re}}(\overline{m}\,\mathrm {i}/2)\) we may rewrite this as

$$\begin{aligned} |m|^2\Vert u\Vert ^2_U -2{\text {Re}}\big (\overline{m}(\mathrm {i}/2-({u}|{v})_U)\big ) +\Vert v\Vert ^2_U =0 . \end{aligned}$$
(8.25)

Clearly, \(\Vert w\Vert _U>0\) hence \({\text {Im}}m>0\) by (8.23) so (8.25) may be rewritten

$$\begin{aligned} |m|^2 -2{\text {Re}}\left( \overline{m}\frac{\mathrm {i}/2-({u}|{v})_U}{\Vert u\Vert ^2_U}\right) +\frac{\Vert v\Vert ^2_U}{\Vert u\Vert ^2_U} =0 . \end{aligned}$$
(8.26)

If \(c\in \mathbb C\) and \(d\in \mathbb R\), then \( |m|^2-2{\text {Re}}(\overline{m}c) +d=|m-c|^2 - (|c|^2-d) \). Hence, there is m such that \(|m|^2-2{\text {Re}}(\overline{m}c) +d=0\) if and only if \(d\le |c|^2\), and then \(|m|^2-2{\text {Re}}(\overline{m}c) +d=0\) is the equation of a circle with center c and radius \(\sqrt{|c|^2 -d}\). Thus, (8.26) is the equation of the circle with

$$\begin{aligned} \text {center}\quad c=\frac{\mathrm {i}/2-({u}|{v})_U}{\Vert u\Vert ^2_U} \quad \text {and square of radius}\quad r^2=\frac{|\mathrm {i}/2-({u}|{v})_U|^2 - \Vert u\Vert ^2_U \Vert v\Vert ^2_U}{\Vert u\Vert ^4_U}. \end{aligned}$$

From (8.16), we get \( 2\mathrm {i}({u}|{v})_U= W_a({\overline{u}},v) -W_b({\overline{u}},v)=-1 -W_b({\overline{u}},v); \) hence, \(\mathrm {i}/2-({u}|{v})_U = W_b({\overline{u}},v)/2\mathrm {i}\). Then, (8.20) implies

$$\begin{aligned} \Vert u\Vert ^2_U \Vert v\Vert ^2_U=-\frac{1}{4}W_b({\overline{u}},u) W_b({\overline{v}},v) , \end{aligned}$$

hence

$$\begin{aligned} |\mathrm {i}/2-({u}|{v})_U|^2 - \Vert u\Vert ^2_U \Vert v\Vert ^2_U= \big (|W_b(\overline{u},v)|^2 +W_b({\overline{u}},u) W_b({\overline{v}},v) \big )/4 . \end{aligned}$$

But by the Kodaira identity \(W_b({\overline{u}}, u)W_b({\overline{v}},v) =1-|W_b({\overline{u}},v)|^2\), hence we get

$$\begin{aligned} |\mathrm {i}/2-({u}|{v})_U|^2 - \Vert u\Vert ^2_U \Vert v\Vert ^2_U=1/4 \end{aligned}$$

so (8.26) is just the circle described by (8.18).

To prove the converse part of the theorem, consider a point m on this circle and let \(w=mu+v\). Clearly, \(Lw=\lambda w\) and \(W(w,u)=1\) and the computation (8.22) gives us \(W_a({\overline{w}},w)=2\mathrm {i}{\text {Im}}m\). We also have (8.24) because this just says that m is on the circle (8.18). Thus, we have

$$\begin{aligned} \Vert w\Vert _U^2={\text {Im}}m = W_a({\overline{w}},w)/2\mathrm {i}, \end{aligned}$$

and then (8.16) implies \(W_b({\overline{w}},w)=0\). Therefore, by Lemma 8.7 (2), w has a real boundary condition at b. This proves the final assertion of the theorem. \(\square \)
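The circle (8.18) is easy to observe numerically. The following sketch (all data assumed for illustration: constant potential \(V=-\mathrm {i}\), \(\lambda =\mathrm {i}\), interval ]0,1[, so \({\text {Im}}V\le 0<{\text {Im}}\lambda \)) integrates \(Lf=\lambda f\) by a fourth-order Runge–Kutta scheme, builds u, v as in the theorem, and checks that the coefficients m obtained from two different real boundary conditions at b both lie on the circle with center and radius (8.18).

```python
import numpy as np

# Assumed toy data: V = -1j constant, lambda = 1j on ]0,1[;
# then U = Im(lambda - V) = 2.
a0, b0 = 0.0, 1.0
lam, Vc = 1j, -1j
U = (lam - Vc).imag

def integrate(f0, df0, xs):
    """RK4 for the system f' = p, p' = (V - lambda) f; returns f, f' on xs."""
    h = xs[1] - xs[0]
    f = np.empty(xs.size, complex)
    df = np.empty(xs.size, complex)
    y = np.array([f0, df0], complex)
    f[0], df[0] = y
    rhs = lambda y: np.array([y[1], (Vc - lam) * y[0]])
    for i in range(xs.size - 1):
        k1 = rhs(y); k2 = rhs(y + h/2*k1)
        k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
        y = y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
        f[i+1], df[i+1] = y
    return f, df

xs = np.linspace(a0, b0, 4001)
h = xs[1] - xs[0]
u, du = integrate(0.0, 1.0, xs)   # real (Dirichlet-type) condition at a
v, dv = integrate(1.0, 0.0, xs)   # real (Neumann-type) condition at a; W(v,u)=1

def norm2_U(f):                   # ||f||_U^2 via the trapezoid rule
    y = np.abs(f)**2 * U
    return float((y.sum() - y[0]/2 - y[-1]/2) * h)

nu2 = norm2_U(u)
c = (np.conj(u[-1])*dv[-1] - np.conj(du[-1])*v[-1]) / (2j * nu2)  # W_b(u-bar,v)/(2i ||u||_U^2)
r = 1.0 / (2 * nu2)

m_dir = -v[-1] / u[-1]    # w = m u + v with w(b) = 0   (real condition at b)
m_neu = -dv[-1] / du[-1]  # w = m u + v with w'(b) = 0  (real condition at b)
```

Both the Dirichlet value `m_dir` and the Neumann value `m_neu` land on the circle, and \(\Vert mu+v\Vert _U^2={\text {Im}}\,m\) reproduces (8.17), as the theorem asserts.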

8.5 Limit Point/Circle

In this section, we again assume that \({\text {Im}}V\le 0\) and \({\text {Im}}\lambda >0\). We allow b to be an irregular endpoint. We also assume that a is a regular endpoint. Thus, we assume that \(V\in L_\mathrm {loc}^1[a,b[\). This class of potentials was first considered in [24]; see [3] for more general conditions.

Using Theorem 8.10, we will obtain a classification of the properties of L around b into three categories. This classification can be called the Weyl trichotomy. It replaces the Weyl dichotomy, the well-known classification of irregular endpoints for real potentials.

Note that this classification depends only on the behavior of V close to b. In particular, the assumption of regularity of a is made only for convenience. If a is also irregular, then the analysis should be done separately on intervals \(]a,a_1]\) and \([b_1,b[\).

Let \(u,v\) be solutions of \(Lf=\lambda f\) on ]a,b[ with real boundary conditions at a and such that \(W(v,u)=1\).

Definition 8.11

For any \(d\in ]a,b[\), we define

$$\begin{aligned} \text {the Weyl circle}\quad {\mathscr {C}}_d&:=\Big \{m\in \mathbb C\mid \int _a^d|mu+v|^2{\text {Im}}(\lambda -V) ={\text {Im}}m\Big \},\\ \text {the open Weyl disk}\quad {\mathscr {C}}_d^\circ&:=\Big \{m\in \mathbb C\mid \int _a^d|mu+v|^2{\text {Im}}(\lambda -V) <{\text {Im}}m\Big \},\\ \text {the closed Weyl disk}\quad {\mathscr {C}}_d^\bullet&:=\Big \{m\in \mathbb C\mid \int _a^d|mu+v|^2{\text {Im}}(\lambda -V) \le {\text {Im}}m\Big \}\\&={\mathscr {C}}_d^\circ \cup {\mathscr {C}}_d. \end{aligned}$$

Thus, the Weyl circle is given by the condition (8.17) with b replaced by d. Since the left hand side of (8.17) grows like \(|m|^2\) when \(m\rightarrow \infty \), it follows that \({\mathscr {C}}_d^\circ \) is the region inside \({\mathscr {C}}_d\). If \(d_1<d_2\), then

$$\begin{aligned} \int _a^{d_2}|mu+v|^2{\text {Im}}(\lambda -V) \le {\text {Im}}m \Rightarrow \int _a^{d_1}|mu+v|^2{\text {Im}}(\lambda -V) <{\text {Im}}m. \end{aligned}$$

Hence, \({\mathscr {C}}^\bullet _{d_2}\subset {\mathscr {C}}^\circ _{d_1}\) strictly if \(d_1<d_2<b\).
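This strict nesting can be watched numerically. In the sketch below, all data are assumed for illustration: the constant potential \(V=-\mathrm {i}\) and \(\lambda =\mathrm {i}\) on ]0,1[, for which the solutions are elementary since \(k^2=V-\lambda =-2\mathrm {i}\) gives \(k=1-\mathrm {i}\); the centers and radii of two Weyl circles are computed from (8.18) with b replaced by d.

```python
import numpy as np

# Assumed example: V = -1j, lambda = 1j on ]0,1[; u = sinh(kx)/k and
# v = cosh(kx) with k = 1 - 1j solve Lf = lambda f with real conditions
# at a = 0 and W(v,u) = 1.
k = 1 - 1j                 # k**2 = V - lambda = -2j
U = 2.0                    # Im(lambda - V)
u  = lambda x: np.sinh(k*x) / k
du = lambda x: np.cosh(k*x)
v  = lambda x: np.cosh(k*x)
dv = lambda x: k * np.sinh(k*x)

def disk(d, n=4000):
    """Center and radius of the Weyl circle C_d (formula (8.18) with b = d)."""
    xs = np.linspace(0.0, d, n + 1)
    h = xs[1] - xs[0]
    y = np.abs(u(xs))**2 * U
    nu2 = (y.sum() - y[0]/2 - y[-1]/2) * h          # ||u||_U^2 on ]0,d[
    Wd = np.conj(u(d))*dv(d) - np.conj(du(d))*v(d)  # W_d(u-bar, v)
    return Wd / (2j * nu2), 1.0 / (2 * nu2)

c1, r1 = disk(0.5)
c2, r2 = disk(1.0)
gap = r1 - (abs(c1 - c2) + r2)   # > 0 iff the closed disk at d = 1.0
                                 # lies strictly inside the open disk at d = 0.5
```

A positive `gap` certifies \({\mathscr {C}}^\bullet _{1.0}\subset {\mathscr {C}}^\circ _{0.5}\) for this example; the radii shrink as d increases because \(\Vert u\Vert _U^2\) grows.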

Definition 8.12

We set

$$\begin{aligned} {\mathscr {C}}_b^\bullet&:=\bigcap _{d<b}{\mathscr {C}}^\bullet _d,\\ {\mathscr {C}}_b&:=\text {the boundary of }{\mathscr {C}}_b^\bullet . \end{aligned}$$

It follows that either \({\mathscr {C}}_b^\bullet ={\mathscr {C}}_b\) is a point, or \({\mathscr {C}}_b^\bullet \) is a disk and \({\mathscr {C}}_b\) is a circle of radius \(>0\).

Definition 8.13

We say that b is limit point if \({\mathscr {C}}_b\) is a point. We say that b is limit circle if \({\mathscr {C}}_b\) is a circle of positive radius.

Lemma 8.14

Let \(m\in {\mathscr {C}}_b^\bullet \) and set \(w=mu+v\). Then,

$$\begin{aligned} \int _a^b|w|^2{\text {Im}}(\lambda -V)\le {\text {Im}}(m). \end{aligned}$$
(8.27)

If b is limit point, then \(\int _a^b|u|^2{\text {Im}}(\lambda -V)=\infty \).

Proof

For any \(d\in ]a,b[\), we have \( {\mathscr {C}}_b^\bullet \subset {\mathscr {C}}_d^\bullet \). Therefore,

$$\begin{aligned} \int _a^d|w|^2{\text {Im}}(\lambda -V)\le {\text {Im}}(m). \end{aligned}$$
(8.28)

Then, we take the limit \(d\nearrow b\). If b is limit point, then the radius of the Weyl circle \({\mathscr {C}}_d\) tends to zero as \(d\rightarrow b\); hence, \(\lim _{d\rightarrow b}\int _a^d|u|^2{\text {Im}}(\lambda -V)=\infty \) by the last relation in (8.18). \(\square \)

The above lemma implies immediately the following theorem:

Theorem 8.15

If b is limit circle, then every solution w of \((L-\lambda )w=0\) satisfies

$$\begin{aligned} \int _a^b|w|^2{\text {Im}}(\lambda -V)<\infty . \end{aligned}$$
(8.29)

If b is limit point, then, modulo a complex factor, there exists only one solution of \((L-\lambda )f=0\) satisfying (8.29).

Note that \({\text {Im}}(\lambda -V)\ge {\text {Im}}\lambda >0\). Therefore, (8.29) implies the square integrability of w.

Thus, for potentials with a negative imaginary part instead of Weyl’s dichotomy we have three possibilities for solutions of \((L-\lambda )f=0\) (we consider solutions modulo a complex factor):

(1) limit point case: only one solution satisfies (8.29), and only one solution is square integrable;

(2) limit point case: only one solution satisfies (8.29), but all solutions are square integrable;

(3) limit circle case: all solutions satisfy (8.29), and hence all solutions are square integrable.

If V is real, then the case (2) is absent and we have the usual Weyl dichotomy: L is limit point at b iff for any \(\lambda \) there is at most one solution of \(Lf=\lambda f\) which is square integrable near b. But this is not the case if V is complex.

We emphasize that the limit point/circle terminology is interpreted here in the geometric sense described above (based on Theorem 8.10).

There exist examples of (2) in the literature. In the limit point case, it is possible that we have only one nonzero solution satisfying (8.29), whereas all solutions are square integrable with respect to the Lebesgue measure. Indeed, Sims [24, p. 257] has shown that this happens in simple examples like \(V(x)= x^6-3\mathrm {i}x^2/2\) on \(]1,\infty [\). See also the discussion in [3].

Remark 8.16

We also note that if V is real, then for any non-real \(\lambda \) there is at least one nonzero solution of \(Lf=\lambda f\) which is square integrable near b. But it does not seem to be known whether for arbitrary complex V there is \(\lambda \) such that \(Lf=\lambda f\) has a nonzero solution which is square integrable near b.