One-Dimensional Schrödinger Operators with Complex Potentials

We discuss realizations of L := -∂_x² + V(x) as closed operators on L²]a,b[, where V is complex, locally integrable and may have an arbitrary behavior at (finite or infinite) endpoints a and b. The main tool of our analysis is Green's operators, that is, various right inverses of L.


Introduction
The paper is devoted to operators of the form

L := -∂_x² + V(x)  (1.1)

on ]a, b[, where a < b, a can be −∞ and b can be ∞. The potential V can be complex, has low regularity and a rather arbitrary behavior at the boundary of the domain: we assume only that V ∈ L¹_loc]a, b[. We study realizations of L as closed operators on L²]a, b[.
Operators of the form (1.1) are commonly called one-dimensional Schrödinger operators or, shorter, 1d Schrödinger operators. They are special cases of Sturm-Liouville operators, that is, operators of the form -∂_x p(x)∂_x + q(x); after a standard change of variables, it is not a serious restriction to consider 1d Schrödinger operators instead of Sturm-Liouville operators. The 1d Schrödinger operator is a classic subject with a large literature. Most of the literature is devoted to real V, when L can be realized as a self-adjoint operator. It is, however, quite striking that the usual theory, well known from the real (self-adjoint) case, works almost equally well in the complex case. In particular, essentially the same theory of boundary conditions and the same formulas for Green's operators [right inverses of (1.1)] hold as in the real case. We will describe these topics in detail in this paper.
A large part of the literature on 1d Schrödinger operators assumes that potentials are L¹ near finite endpoints. Under this condition, one can impose the so-called regular boundary conditions (Dirichlet, Neumann or Robin). In this case, it is natural to use the so-called Weyl-Titchmarsh function and the formalism of the so-called boundary triplets, see, e.g., [2] and references therein. We are interested in general boundary conditions, such as those considered in [4,9,10], where the above approach does not directly apply. See the discussion at the end of Sect. 5.2.
One of the motivations of the present work is the study of exactly solvable Schrödinger operators, such as those given by the Bessel equation [4,9], or the Whittaker equation [10]. Analysis of those operators indicates that non-real potentials are as good from the point of view of the exact solvability as real ones. It is also natural to organize exactly solvable Schrödinger operators in holomorphic families, whose elements are self-adjoint only in exceptional cases. Therefore, a theory for 1d Schrödinger operators with complex potentials and general boundary conditions provides a natural framework for the study of exactly solvable Hamiltonians.
As we mentioned above, we suppose that V ∈ L¹_loc]a, b[. The theory is much easier if V ∈ L²_loc]a, b[, because one could then assume that the operator acts on C²_c]a, b[. Dealing with potentials in L¹_loc causes a lot of trouble; it is, however, a rather natural assumption. We think that handling the more general case forces us to understand the problem better. Actually, one could consider even more singular potentials: it is easy to generalize our results to potentials V that are Borel measures on ]a, b[.
In the preliminary Sect. 2, we study the inhomogeneous problem given by the operator (1.1) by basic ODE methods. We introduce some distinguished Green's operators: the two-sided Green's operators are related to boundary conditions on both sides; the forward and backward Green's operators are related to the Cauchy problem at the endpoints of the interval. These operators are among the most frequently used objects in mathematics. Usually they appear under the guise of Green's functions, which are the integral kernels of Green's operators.
Note that in the Hilbert space L²]a, b[, we have a natural conjugation f → f̄ and a bilinear form ⟨f|g⟩ := ∫ fg. For an operator T, it is natural to define its transpose T^# := (T̄)*, where the bar denotes complex conjugation. We say that T is self-transposed if T^# = T (in the literature, the alternative name J-self-adjoint is also used). These concepts play an important role in the theory of differential operators on L²]a, b[. Therefore, we devote Sect. 3 to a general theory of operators in a Hilbert space with a conjugation. We briefly recall the theory of restrictions/extensions of unbounded operators. The concept of a self-transposed operator turns out to be a natural alternative to the concept of self-adjointness. It is well known that self-adjoint operators are well-posed (possess a non-empty resolvent set). Not every self-transposed operator is well-posed; however, they often are.
The remaining sections are devoted to realizations of L given by (1.1) as closed operators on the Hilbert space L²]a, b[. The most obvious realizations are the minimal one, L_min, and the maximal one, L_max. We prove that these operators are closed and densely defined. Under the assumption V ∈ L¹_loc]a, b[, the proof is quite long and technical but, in our opinion, instructive. If we assumed V ∈ L²_loc]a, b[, the proof would be easy. At this point, it is helpful to recall the basic theory of 1d Schrödinger operators with real potentials. One is usually interested in self-adjoint extensions of the Hermitian operator L_min. They are situated "half-way" between L_min and L_max. More precisely, we have three possibilities: (1) L_min = L_max: then L_min is already self-adjoint; (2) dim D(L_max)/D(L_min) = 2; (3) dim D(L_max)/D(L_min) = 4. Note that in the literature it is common to use the theory of deficiency indices: the cases (1), (2), resp. (3) correspond to L_min having the deficiency indices (0, 0), (1, 1), resp. (2, 2). However, the deficiency indices do not have a straightforward generalization to the complex case. Let us go back to complex potentials. Note that the Hermitian conjugate of an operator T, denoted T*, turns out to be less useful than its transpose T^#. In particular, the role of self-adjoint operators is taken up by self-transposed operators.
By choosing a subspace of D(L max ) closed in the graph topology and restricting L max to this subspace, we can define a closed operator. Such operators will be called closed realizations of L. We will show that in the complex case closed realizations of L possess a theory quite analogous to that of the real case.
We are mostly interested in realizations of L whose domain contains D(L_min), that is, operators L_• satisfying L_min ⊂ L_• ⊂ L_max. Such realizations are defined by specifying boundary conditions. Similarly as in the real case, boundary conditions are given by functionals on D(L_max) that vanish on D(L_min). For each of the endpoints, a and b, there is a space of functionals describing boundary conditions. We call the dimension of this space the boundary index at a, resp. b, and denote it ν_a(L), resp. ν_b(L). These indices can take only the values 0 or 2. Therefore, we have the following classification of operators L:
(1) dim D(L_max)/D(L_min) = 0, or L_min = L_max;
(2) dim D(L_max)/D(L_min) = 2;
(3) dim D(L_max)/D(L_min) = 4.  (1.3)
Let λ ∈ C. It is natural to consider the spaces of solutions of (L − λ)u = 0 that are square integrable near a, resp. b. We denote these spaces by U_a(λ), resp. U_b(λ). We will prove that
ν_a(L) = 0 ⟺ dim U_a(λ) ≤ 1 for all λ ∈ C ⟺ dim U_a(λ) ≤ 1 for some λ ∈ C,  (1.4)
ν_a(L) = 2 ⟺ dim U_a(λ) = 2 for all λ ∈ C ⟺ dim U_a(λ) = 2 for some λ ∈ C,  (1.5)
and similarly at b. This yields the classification
(1) ν_a(L) = ν_b(L) = 0; (2) ν_a(L) + ν_b(L) = 2; (3) ν_a(L) = ν_b(L) = 2.  (1.6)
There is a strict correspondence between (1), (2) and (3) of (1.3) and (1), (2) and (3) of (1.6). In cases (1) and (2) of (1.6), we describe all realizations with non-empty resolvent set and their resolvents. We prove that if L_• is such a realization, then we can find u ∈ U_a(λ) and v ∈ U_b(λ) with Wronskian equal to 1, so that the integral kernel of (L_• − λ)⁻¹ can be easily expressed in terms of u and v. The case (3) is much richer. We describe all realizations of L that have separated boundary conditions (given by independent boundary conditions at a and b). If in addition they are self-transposed, then essentially the same formula as in cases (1) and (2) gives (L_• − λ)⁻¹. There are, however, two other separated realizations of L, denoted L_a and L_b, with boundary conditions only at a, resp. b. They are not self-transposed; in fact, they satisfy L_a^# = L_b.
Their resolvents are given by what we call forward and backward Green's operators, which incidentally are cousins of the retarded and advanced Green's functions, well-known from the theory of the wave equation.
In the last section, we discuss potentials with a negative imaginary part. We show that under some weak conditions they define dissipative 1d Schrödinger operators. We also describe Weyl's limit point-limit circle method for such potentials. For real potentials, this method allows us to determine the dimension of U a (λ) for Im(λ) > 0: if a is limit point, then dim U a (λ) = 1; if a is limit circle, then dim U a (λ) = 2. The picture is more complicated if the potential is complex: there are examples where the endpoint a is limit point and U a (λ) is two-dimensional.
The 1d Schrödinger operator is one of the most classic topics in mathematics. Already in the first half of the 19th century, Sturm and Liouville considered second-order differential operators on a finite interval with various boundary conditions. The theory was extended to the half-line and the line in celebrated work by Weyl.
Second-order ODEs and 1d Schrödinger operators are considered in many textbooks, including Atkinson [1], Coddington-Levinson [5], Dunford-Schwartz [12,13], Naimark [20], Pryce [21], de Alfaro-Regge [7], Reed-Simon [23], Stone [25], Titchmarsh [27], Teschl [26], Gitman-Tyutin-Voronov [17]. However, in the literature complex potentials are rarely studied in detail, and when they are, nontrivial boundary conditions receive little attention. The monograph by Edmunds-Evans [14] is often considered one of the most up-to-date sources for results on this subject. Many results presented in our article have their counterpart in the literature, especially in [14]. Let us try to make a more detailed comparison of our paper with the literature.
Most of the material of Sect. 2 is standard. However, we have not seen a separate discussion of semiregular boundary conditions, as described in Proposition 2.5 (2) and (3). The definitions of the canonical bisolution G_↔, the various Green's operators G_{u,v}, G_←, G_→ and the relations between them (2.21)-(2.23) are implicit in many works on the subject; however, they are rarely emphasized separately.
The material of our Sect. 3 on Hilbert spaces with conjugation is to a large extent contained in Chap. 3, Sects. 3 and 5 of [14]. It is based on previous results of Vishik [28], Galindo [16] and Knowles [19]. However, our presentation seems to be somewhat different. It shows in particular that the existence of a self-transposed extension follows almost trivially from the basic theory of symplectic spaces described in Appendix A. Another special feature of our Sect. 3 is a discussion of properties of right inverses of an unbounded operator.
The deepest result described in our paper is probably Theorem 6.15 about the characterization of boundary conditions by square integrable solutions. This result is actually not contained in [14]. It is based on a result of Everitt. The study of Green's operators contained in Sect. 7 is probably to some extent new.
A separate subject that we discuss is potentials with negative imaginary part studied by means of the so-called Weyl limit circle/limit point method. Here, the main reference is Sims [24], see also [3,14].
The present manuscript grew out of the Appendix of [4], devoted to 1d Schrödinger operators with the potential 1/x². [4] and its follow-up papers [9,10] illustrated that 1d Schrödinger operators with complex potentials and unusual boundary conditions appear naturally in various situations. Motivated by these applications, we decided to write an exposition of the basic general theory of 1d Schrödinger operators.
We decided to make our exposition as complete and self-contained as possible, explaining things that are perhaps obvious to experts, but often difficult for many readers. We freely use modern operator theory; this is not the case for a large part of the literature, which often sticks to old-fashioned approaches. We also use terminology and notation which, we believe, are as natural as possible in the context we consider. For instance, we prefer the name "self-transposed" to "J-self-adjoint", found in a large part of the literature. We believe that our treatment of the subject is quite different from the one found in the literature. One of the main tools that we have found useful, rarely appearing in the literature, is right inverses, which in the context of 1d Schrödinger operators we call by the traditional name of Green's operators.

Notations
Recall that a < b, a can be −∞ and b can be ∞.
The subscript c indicates compact support: for instance, C_c]a, b[ consists of the continuous functions f on ]a, b[ such that supp f is a compact subset of ]a, b[. The subscript c has the analogous meaning in other situations.
⊕ will mean the topological direct sum of two spaces.

Absolutely Continuous Functions
We will denote by f′ or ∂f the derivative of a distribution f. We will denote by AC]a, b[ the space of absolutely continuous functions on ]a, b[, that is, distributions on ]a, b[ whose derivative is in L¹_loc]a, b[. This definition is equivalent to the more common one, applied on every compact subinterval: for every ε > 0 there exists δ > 0 such that for any finite family of non-intersecting intervals [x_i, y_i] with Σ_i (y_i − x_i) < δ we have Σ_i |f(y_i) − f(x_i)| < ε. We will denote by AC[a, b] the space of functions on [a, b] whose (distributional) derivative is in L¹[a, b]. Note that a can be −∞ and b can be ∞.

Choice of Functional-Analytic Setting
Throughout the section, we assume that V ∈ L¹_loc]a, b[ and we consider the differential expression L = −∂_x² + V(x). Proof. Define the operators Q_d and T_d by their integral kernels, supported in x > y > d (and 0 otherwise). The Cauchy problem can then be rewritten as a fixed-point equation f = F(f). Thus, by choosing a sufficiently small interval [a₁, b₁] containing d, we can make F well defined and contractive on C[a₁, b₁] (F is contractive iff ‖Q_d‖ < 1). By Banach's fixed point theorem (or the convergence of an appropriate Neumann series), there exists f ∈ C[a₁, b₁] such that f = F(f).
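The fixed-point argument can be illustrated by a minimal numerical sketch (the discretization and sample potential are our own choices, not the paper's): the Cauchy problem for −f″ + Vf = 0 with data f(d) = p₀, f′(d) = p₁ is rewritten in Volterra form and solved by Picard iteration, which converges on a small interval.

```python
import numpy as np

# Sketch: solve -f'' + V f = 0 with f(d) = p0, f'(d) = p1 via the Volterra
# form f(x) = p0 + p1 (x - d) + ∫_d^x (x - y) V(y) f(y) dy; the iteration
# map is a contraction on a small enough interval.
trap = lambda y, x: np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2   # trapezoid rule

d, p0, p1 = 0.0, 1.0, 0.0
x = np.linspace(d, d + 0.5, 501)
V = np.ones_like(x)           # constant V = 1: then f'' = f and f = cosh(x - d)

f = np.zeros_like(x)          # arbitrary starting point of the iteration
for _ in range(20):           # Picard / Neumann-series iteration
    f = np.array([p0 + p1 * (xi - d)
                  + trap((xi - x[:i+1]) * V[:i+1] * f[:i+1], x[:i+1])
                  for i, xi in enumerate(x)])

print(np.max(np.abs(f - np.cosh(x - d))))   # tiny: the iteration has converged
```

On a larger interval one would patch together solutions obtained on small subintervals, exactly as in the classical existence proof.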

Regular and Semiregular Endpoints
One-dimensional Schrödinger operators possess the simplest theory when −∞ < a < b < ∞ and V ∈ L¹]a, b[. Then, we say that L is a regular operator. Most of the classical Sturm-Liouville theory is devoted to such operators. More generally, the following standard terminology will be convenient: 1d Schrödinger operators for which the endpoint a is finite and ∫ |x − a| |V(x)| dx < ∞ near a (the endpoint a is then called semiregular) are also relatively well behaved. In the proof, a < b₁ < b is finite and will be chosen later. The Cauchy problem can be rewritten as F(f) = f, where F involves an operator T_a acting on the weighted space |x − a|⁻¹ C]a, b₁[, on which T_a is bounded. Then, we argue similarly as in the proof of Proposition 2.2: for b₁ close enough to a, the map F is contractive and we can apply Banach's fixed point theorem. From the fixed-point equation, we see that f(a) = p₁.
An example of a potential with a finite endpoint which is not semiregular is V(x) = c/x² on ]0, ∞[. For its theory, see [4,9].

Wronskian
Definition 2.6. The Wronskian of two differentiable functions u, v is W(u, v)(x) := u(x)v′(x) − u′(x)v(x). Proof. Since u, u′, v, v′ ∈ AC]a, b[, the Wronskian can be differentiated, and a simple computation yields (2.15).
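The constancy of the Wronskian of two solutions of the same equation can be checked numerically (a sketch; the RK4 integrator and the sample complex potential are our own choices): for u, v solving w″ = V(x)w, the quantity W(u, v) = uv′ − u′v stays constant, also for complex V.

```python
import numpy as np

# For two solutions u, v of the same equation w'' = V(x) w, the Wronskian
# W(u, v) = u v' - u' v is constant in x, even when V is complex.
def solve(V, w0, dw0, x):
    w = np.empty(len(x), dtype=complex)
    dw = np.empty(len(x), dtype=complex)
    w[0], dw[0] = w0, dw0
    rhs = lambda xi, y: np.array([y[1], V(xi) * y[0]])   # y = (w, w')
    for i in range(len(x) - 1):                          # classical RK4 steps
        h, y = x[i+1] - x[i], np.array([w[i], dw[i]])
        k1 = rhs(x[i], y)
        k2 = rhs(x[i] + h/2, y + h/2 * k1)
        k3 = rhs(x[i] + h/2, y + h/2 * k2)
        k4 = rhs(x[i] + h, y + h * k3)
        w[i+1], dw[i+1] = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return w, dw

V = lambda xi: xi + 1j                 # a complex potential, for illustration
x = np.linspace(0.0, 2.0, 2001)
u, du = solve(V, 1.0, 0.0, x)
v, dv = solve(V, 0.0, 1.0, x)
W = u * dv - du * v
print(np.max(np.abs(W - W[0])))        # ≈ 0: the Wronskian is constant
```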

Green's Operators
The expression "Green's function" is commonly used to denote the integral kernel of a right inverse of a differential operator, usually of second order. We will use the expression "Green's operator" for a right inverse of L.
Note that we do not require that G • L = 1. Note also that G ↔ is not Green's operator-it is a bisolution. However, it is so closely related to various Green's operators that its symbol contains the same letter G.
There are many Green's operators. If G_• is a Green's operator, u, v are two solutions of the homogeneous equation, and φ, ψ ∈ L^∞_loc]a, b[ are arbitrary, then G_• + |u⟩⟨φ| + |v⟩⟨ψ| is also a Green's operator. Recall that if E, F are vector spaces, g belongs to the dual of E, and f ∈ F, then |f⟩⟨g| is the linear map E → F defined by e → g(e)f.
Let us define some distinguished Green's operators. Let u, v be two solutions of the homogeneous equation such that W(v, u) = 1. We easily check that the operators G_{u,v}, G_a and G_b defined below are Green's operators in the sense of Definition 2.9. Definition 2.10. The Green's operator associated with u at a and v at b, denoted G_{u,v}, is defined by its integral kernel G_{u,v}(x, y) = u(y)v(x) for x > y, and u(x)v(y) for x < y.
Operators of the form G_{u,v} will sometimes be called two-sided Green's operators.
Definition 2.11. The forward Green's operator G_→ has the integral kernel G_→(x, y) = u(y)v(x) − u(x)v(y) for x > y, and 0 otherwise. (2.20) Definition 2.12. The backward Green's operator G_← has the integral kernel G_←(x, y) = u(x)v(y) − u(y)v(x) for x < y, and 0 otherwise.
By the comment after (2.16), the operators G_→ and G_← are independent of the choice of u, v. Note also the formulas (2.21)-(2.23) for the differences of the various kinds of Green's operators. The following definition introduces another class of Green's operators in the sense of Definition 2.9, which generalizes the forward and backward Green's operators.
As in the case of G_→ and G_←, these operators are independent of the choice of u, v. Note that if a < a₁ < d < b₁ < b, then (2.25) is given by a convergent Neumann series in an appropriate operator norm. Remark 2.14. The one-dimensional Schrödinger equation can be interpreted as the Klein-Gordon equation on a 1 + 0 dimensional spacetime (no spatial dimensions, only time). The operators G_↔, G_→ and G_← have important generalizations to globally hyperbolic spacetimes of any dimension; they are then usually called the Pauli-Jordan, retarded, resp. advanced propagators, see, e.g., [11].
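The forward Green's operator can be checked numerically in the simplest case (our own sketch: for V = 0 we take u(x) = x, v(x) = 1, so W(v, u) = 1, and we use the retarded kernel u(y)v(x) − u(x)v(y) supported on x > y):

```python
import numpy as np

# For V = 0, the retarded kernel is u(y)v(x) - u(x)v(y) = y - x on x > y, so
# f(x) = (G→ g)(x) = ∫_0^x (y - x) g(y) dy should solve -f'' = g with
# f(0) = f'(0) = 0. For g = sin, the exact answer is f(x) = sin(x) - x.
trap = lambda y, x: np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2   # trapezoid rule

x = np.linspace(0.0, 3.0, 3001)
g = np.sin(x)
f = np.array([trap((x[:i+1] - xi) * g[:i+1], x[:i+1]) for i, xi in enumerate(x)])
print(np.max(np.abs(f - (np.sin(x) - x))))   # ≈ 0
```

Note that G_→ g vanishes to second order at the left endpoint: it is the solution of the Cauchy problem there, which is the defining feature of the forward Green's operator.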

Some Estimates
The following elementary estimates will be useful later on.
where C is a real number independent of f and J.
Proof. By a scaling argument, we may assume ν = 1. It suffices to assume that f is a distribution on R such that f = 0 outside J. Let θ : R → R be of class C ∞ outside of 0 and such that .
If ν 0 is such that C ν 0 < 1, we get the required estimate.

Bilinear Scalar Product
Let H be a Hilbert space equipped with a scalar product (·|·). One says that J is a conjugation if it is an anti-linear operator on H satisfying J² = 1 and (Jf|Jg) = (g|f). In a Hilbert space with a conjugation J, an important role is played by the natural bilinear form ⟨f|g⟩ := (Jf|g). (3.1) In our paper, we usually consider the Hilbert space H = L²]a, b[, which has the obvious conjugation f → f̄. Its scalar product and its bilinear form are as follows: (f|g) = ∫ f̄(x)g(x) dx, ⟨f|g⟩ = ∫ f(x)g(x) dx. Thus, we use round brackets for the sesquilinear scalar product and angular brackets for the bilinear form. Note that in some sense the latter plays a more important role in our paper (and in similar problems) than the former. See, e.g., [8,10], where the same notation is used.
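In finite dimension, the distinction between the two pairings is easy to see numerically (a toy sketch, with C⁵ in place of L²]a, b[ and a random complex symmetric matrix in place of a differential operator): a complex symmetric matrix is self-transposed with respect to ⟨·|·⟩ but in general not Hermitian.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = A + A.T                                   # complex symmetric: T^T = T

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

bilin = lambda f, g: np.sum(f * g)            # <f|g> = (Jf|g), with Jf = conj(f)
sesq = lambda f, g: np.sum(np.conj(f) * g)    # the sesquilinear scalar product

print(abs(bilin(T @ f, g) - bilin(f, T @ g))) # ≈ 0: T is self-transposed
print(abs(sesq(T @ f, g) - sesq(f, T @ g)))   # far from 0: T is not Hermitian
```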

Transposition of Operators
If T is an operator, then D(T), N(T) and R(T) will denote the domain, the nullspace (kernel) and the range of T. T̄ denotes the complex conjugate of T; this means T̄ := JTJ, where Jf = f̄. The transpose of a densely defined T is T^# := (T̄)* = JT*J; if T is a bounded linear operator, then ⟨T^#f|g⟩ = ⟨f|Tg⟩. It is useful to note that a holomorphic function of a self-transposed operator is self-transposed.
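For matrices (a finite-dimensional toy model, our own illustration), T^# = (T̄)* is simply the transpose, and the statement about holomorphic functions can be checked on a polynomial:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = A + A.T                                   # self-transposed: T^# = T^T = T

T_sharp = np.conj(np.conj(T).T)               # T^# = conj(T^*) = the transpose
print(np.allclose(T_sharp, T.T))              # True

# p(z) = z^3 + 2i z^2 - 3z + 1, a stand-in for a holomorphic function
p = lambda M: M @ M @ M + 2j * (M @ M) - 3 * M + np.eye(4)
print(np.allclose(p(T).T, p(T)))              # True: p(T) is self-transposed
```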
Remark 3.1. It would be natural to call an operator satisfying T ⊂ T # symmetric. The natural name for an operator satisfying T ⊂ T * would then be Hermitian. Unfortunately and confusingly, in a large part of mathematical literature the word symmetric is reserved for an operator satisfying the latter condition.
Many properties of the transposition have their exact analogs for the Hermitian conjugation and are proven similarly. Below, we describe some of them.

Proposition 3.2. Let T be a densely defined operator.
(1) T # is closed.

Lemma 3.3. Let S, T be linear operators on a Hilbert space H such that:
Proof. Since T is surjective, there is g ∈ D(T) such that h = Tg, and then the claim follows. Here is a version of the closed range theorem [29, Sect. VII.5], which we will use in Sect. 3.4.

Theorem 3.4. If T is a closed densely defined operator in H, then the following assertions are equivalent:
In a large part of the literature [14,19], a different terminology and notation is used. If T is an operator, then JT*J is called the J-adjoint of T; an operator T satisfying T = JT*J is called J-self-adjoint, etc. In the context of our paper, Jf = f̄. Moreover, (Jf|g) and JT*J are denoted ⟨f|g⟩, resp. T^#. Our notation and terminology stress the naturalness of the bilinear product ⟨·|·⟩ and of the transposition T → T^#. Therefore, we prefer them to the notation and terminology of, e.g., [14,19], which puts J in many places.

Spectrum
Let T be a closed operator. We say that G is an inverse of T if G is bounded, GT = 1 on D(T), and TG = 1 on H. An inverse of T, if it exists, is unique. If T possesses an inverse, we say that it is invertible. We will denote by rs(T) the resolvent set, that is, the set rs(T) := {λ ∈ C : T − λ is invertible}. If a closed operator T is densely defined, then we have an equivalent, more symmetric definition. One can introduce various varieties of the essential resolvent set and essential spectrum [14]. One of them is rs_{F0}(T), the set of λ such that T − λ is Fredholm of index 0. (3.14) Clearly, rs(T) ⊂ rs_{F0}(T).
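A toy numerical sketch (the grid size and the sample potential are our own assumptions): discretizing L = −d²/dx² + V on ]0, 1[ with Dirichlet conditions and a purely imaginary V gives a non-self-adjoint, complex symmetric matrix; a λ away from its spectrum lies in the resolvent set, i.e., T − λ is invertible.

```python
import numpy as np

# Finite-difference Dirichlet model of -d²/dx² + V on ]0, 1[ with complex V.
n = 200
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)
D2 = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
T = D2 + np.diag(1j * x)                      # non-self-adjoint, complex symmetric

lam = -1.0                                    # safely away from the spectrum,
                                              # which lies in {Re z > 0}
R = np.linalg.inv(T - lam * np.eye(n))        # the resolvent (T - λ)^{-1}
print(np.allclose((T - lam * np.eye(n)) @ R, np.eye(n)))   # True
```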

Restrictions of Closed Operators
In this subsection, we fix a closed operator on a Hilbert space H. For consistency with the rest of the paper, this operator will be denoted L_max. Note that D(L_max) can be treated as a Hilbert space with the scalar product (f|g)_L := (f|g) + (L_max f|L_max g). Many of the statements in this and the next subsection have obvious generalizations, where L_max is replaced with L_max − λ for a complex number λ. For simplicity of presentation, we keep λ = 0.

Proposition 3.6. (1) We have a 1-1 correspondence between closed subspaces
The map L_max : D_max → R(L_max) is bijective, and its restriction to D_• has R(L_•) as image; hence, by the closed graph theorem, it is a homeomorphism, so the last two relations imply (3.16). Finally, (3) is an immediate consequence of (2).
The following concept is useful in the study of invertible operators contained in L max : Note that L max can have many right inverses or none.
Proposition 3.8. The following conditions are equivalent: (1) L max has right inverses, (1) and (2) are equivalent to This correspondence is given by . The equivalence with (3) and (4) follows by the closed range theorem (see Theorem 3.4).
In particular, suppose that N (L max ) is n-dimensional and spanned by

Nested Pairs of Operators
In this subsection, we assume that L_min and L_max are two densely defined closed operators. We assume that they form a nested pair, L_min ⊂ L_max. (3.23) Note that along with (3.23) we have a second nested pair L_max^# ⊂ L_min^#. (3.24) In this subsection, we do not assume that L_min^# = L_max, so that the two nested pairs can be different.
Remark 3.13. The notion of a nested pair is closely related to the notion of conjugate pair, often introduced in the literature, e.g., in [14]. Two operators A, B form a conjugate pair if A ⊂ B * , and hence B ⊂ A * . The pair of operators L min , L * max is an example of a conjugate pair.

Proposition 3.14. (1) We have a direct decomposition
where the superscript ⊥_L denotes the orthogonal complement with respect to the scalar product (3.15); u ∈ D(L_min) and v ∈ D(L_min)^{⊥_L} are characterized by explicit conditions. Let u ∈ D(L_max). By (3.31), there exist v ∈ D(L_min) and w ∈ N(L_min^*) such that u = v + w. Our main goal in this and the next subsection is to study closed operators L_• satisfying L_min ⊂ L_• ⊂ L_max. Such operators are the subject of the following proposition.
Let us reformulate Proposition 3.15 for invertible L • , using right inverses as the basic concept: The following proposition should be compared with Proposition 3.11.

Nested Pairs Consisting of an Operator and Its Transpose
In this subsection, we assume that the pair of densely defined closed operators L_min, L_max satisfies

L_min^# = L_max. (3.36)

Note that this is a special case of the conditions of the previous subsection: now the two nested pairs (3.23) and (3.24) coincide with one another. We will use the terminology related to symplectic vector spaces introduced in "Appendix A." (1) The above correspondence is bijective.
The following result is quite striking and shows that in a certain respect the concept of self-transposedness is superior to the concept of self-adjointness. It is due to Galindo [16], with a simplified proof given by Knowles [19], see also [14]. It is a generalization of a well-known property of real Hermitian operators: they have a self-adjoint extension which commutes with the usual conjugation.
Proof. All one-dimensional subspaces in a two-dimensional symplectic space are Lagrangian.
Here is a version of Proposition 3.14 adapted to the present context. Here is a consequence of Proposition 3.16. Here is a version of Proposition 3.17 adapted to the present context [cf. (3.14) for the definition of rs_{F0}]. Thus, the most "useful" operators (which usually means the well-posed ones) are "in the middle" between the minimal and the maximal operator.

The Maximal and Minimal Operator
As before, we assume that V ∈ L¹_loc]a, b[. Recall that L is the differential expression L = −∂_x² + V(x). In this section, we present basic realizations of L as closed operators on L²]a, b[.
We equip D(L_max) with the graph norm given by ‖f‖_L² = ‖f‖² + ‖Lf‖². The elements of AC¹_c]a, b[ are once absolutely differentiable functions of compact support. (1) The operators L_max and L_min are closed, densely defined and L_min ⊂ L_max. The boundary limits (4.4) and (4.5) exist, and the so-called Green's identity (the integrated form of the Lagrange identity) holds. (6) L_min^# = L_max and L_max^# = L_min; equivalently, L̄_min = L_max^* and L̄_max = L_min^*. One of the things we will need to prove is the density of D(L_min) (Proposition 4.12), but with our assumptions on the potential the proof is not so trivial, because the idea of approximating an f ∈ L²(I) by smooth functions does not work: D(L_max) may not contain any "nice" function, as the example described below shows.
where σ runs over the set of rational numbers and the c_σ > 0 satisfy Σ_σ c_σ < ∞. Then, V ∈ L¹_loc(R), but V is not square integrable on any non-empty open set. Hence, there is no nonzero C² function in the domain of L in L²(R).
Before proving Theorem 4.4, we first state an immediate consequence of Lemma 2.16. As in the previous section, we fix u, v ∈ AC¹]a, b[ that span N(L) and satisfy W(v, u) = 1.
Our proof of Theorem 4.4 uses ideas from [25, Theorem 10.11] and [20, Sect. 17.4] and is based on an abstract result described in Lemma 3.3. The following lemma about the regular case (cf. Definition 2.3) contains the key arguments of the proof of (1) and (2) of Theorem 4.4. This proves (1).
Recall that in (2.20) we defined the forward Green's operator G_→. Under the assumptions of the present lemma, for every g the function f := G_→ g belongs to AC¹[a, b] and verifies Lf = g. Therefore, f ∈ D(L_max). Hence, L_max is surjective. This proves (2).
To obtain (3), we integrate twice by parts; this is allowed by (2.1). In other words, h ∈ D(L_c^#) and Lh = k on [a₁, b₁]. But since a₁, b₁ were arbitrary subject to a < a₁ < b₁ < b, (4.14) holds on ]a, b[. Hence, Lh = k. Therefore, h ∈ D(L_max) and L_max h = k. This proves that L_c^# ⊂ L_max. From (4.12) and (4.15), we see that L_c^# = L_max. In particular, L_max is closed and L_c is closable. This ends the proof of (1) and (2). For f, g ∈ D(L_max) and a < a₁ < b₁ < b, the lhs of (4.17) clearly converges as a₁ ↓ a. Therefore, the limit (4.4) exists. Similarly, by taking b₁ ↑ b, we show that the limit (4.5) exists. Taking both limits, we obtain (4.6). This proves (3).
If d ∈ ]a, b[, then (4.7) is an immediate consequence of (4.9) and (4.10). We can rewrite (4.17) in the form (4.18), and both terms on the right of (4.18) can be estimated by C‖f‖_L‖g‖_L. This shows (4.7) for d = a. The proof for d = b is analogous. Let L_w be L restricted to (4.8). By (4.7), (4.8) is a closed subspace of D(L_max). Hence, L_w is closed. Obviously, L_c ⊂ L_w. By (4.6), L_w ⊂ L_max^#. By (2), we know that L_max^# = L_min. But L_min is the closure of L_c. Hence, L_w = L_min. This proves (5).
Thus, f_n satisfies the conditions of Corollary 2.17, so f ∈ AC¹]a, b[ and g = Lf. Hence, f ∈ D(L_max), and f is the limit of f_n in the sense of the graph norm. Therefore, D(L_max) is complete, and L_max and L_min are closed.

Smooth Functions in the Domain of L max
We point out a certain pathology of the operators L max and L min if V is only locally integrable.
is stable under conjugation. The corresponding assertion concerning D(L_min) follows by taking the closure, and that concerning D(L_max) follows by taking the transposition. Conversely, assume that D(L_c) is stable under conjugation and let x₀ ∈ ]a, b[. Then, there is f ∈ D(L_c) such that f(x₀) ≠ 0, and we may assume that its real part g = (f + f̄)/2 does not vanish on a neighborhood of x₀. Then, g ∈ D(L_c), hence −g″ + V₁g + iV₂g ∈ L²]a, b[ (where V = V₁ + iV₂), and so must be the imaginary part of this function; hence V₂ is square integrable on a neighborhood of x₀. If V ∈ L²_loc, many things simplify: let θ ∈ C_c^∞(R) with ∫θ = 1 and let θ_n(x) := nθ(nx) with n ≥ 1. Then, for n large, f_n := θ_n ∗ f ∈ C_c^∞]a, b[ and has support in a fixed small neighborhood of supp f. Moreover, f_n → f in C¹_c]a, b[; in particular, f_n → f uniformly with supports in a fixed compact set, which clearly implies convergence in the graph norm.
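The mollifier argument can be illustrated numerically (a sketch; the bump profile and the tent function are our own choices):

```python
import numpy as np

# θ ∈ C_c^∞ with ∫θ = 1, θ_n(x) = nθ(nx); then θ_n * f → f uniformly for a
# continuous f of compact support.
xs = np.linspace(-3.0, 3.0, 6001)
h = xs[1] - xs[0]

def theta(x):                                  # the standard bump on ]-1, 1[
    y = np.exp(-1.0 / np.maximum(1.0 - x**2, 1e-12))
    return np.where(np.abs(x) < 1.0, y, 0.0)

f = np.maximum(0.0, 1.0 - np.abs(xs))          # a tent function, support [-1, 1]

errs = []
for n in (4, 16, 64):
    th = n * theta(n * xs)
    th /= np.sum(th) * h                       # normalize ∫θ_n = 1 on the grid
    conv = np.convolve(f, th, mode='same') * h # discrete θ_n * f
    errs.append(np.max(np.abs(conv - f)))
print(errs[0] > errs[1] > errs[2])             # True: uniform error shrinks
```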

Closed Operators Contained in L max
If D(L • ) is a subspace of D(L max ) closed in the · L norm, then the operator is closed and contained in L max . We can call such an operator L • a closed realization of L.
We will be mostly interested in operators L • that satisfy

Regular Endpoints
Recall that the endpoint a is regular if it is finite and V is integrable close to a.
and Green's identity (4.6) takes the classical form. Thus, if L is a regular operator, then we have four continuous linear functionals f → f(a), f → f′(a) (5.1) and f → f(b), f → f′(b) (5.2), which give a convenient description of closed operators L_• such that L_min ⊂ L_• ⊂ L_max. In particular, D(L_min) is the intersection of the kernels of (5.1) and (5.2), D(L_a) is the intersection of the kernels of (5.1), and D(L_b) is the intersection of the kernels of (5.2).
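As a sanity check in the regular case (a toy finite-difference sketch, not from the paper): the Dirichlet realization of −d²/dx² on ]0, π[, i.e., the boundary conditions f(a) = f(b) = 0, has eigenvalues k², k = 1, 2, ...

```python
import numpy as np

# Dirichlet boundary conditions f(0) = f(π) = 0 for L = -d²/dx² select the
# eigenfunctions sin(kx), with eigenvalues k², k = 1, 2, ...
n = 1000
h = np.pi / (n + 1)
D2 = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
evals = np.sort(np.linalg.eigvalsh(D2))[:4]
print(np.round(evals, 3))                      # ≈ [1, 4, 9, 16]
```

Other regular boundary conditions (Neumann, Robin) would change the first row and last row of the matrix accordingly.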

Boundary Functionals
It is possible to extend the strategy described above to the case of an arbitrary L by using an abstract version of the notion of boundary value of a function. We shall do it in this section. The abstract theory of boundary value functionals goes back to J. W. Calkin's thesis [6], who used it for the classification of self-adjoint extensions of Hermitian operators. The theory was adapted to Hermitian differential operators of any order by Naimark [20] and to operators with complex coefficients of class C^∞ by Dunford and Schwartz in [12, ch. XIII]. In this section, we shall use this technique in the case of second-order operators with potentials which are only locally integrable: this loss of regularity is a problem for some arguments in [12].
Recall that D(L_max) is equipped with the Hilbert space structure associated with the norm ‖f‖_L given by ‖f‖_L² = ‖f‖² + ‖Lf‖². Following [12, Sect. XIII.2], we introduce the following notions.

Remember that if x ∈]a, b[, then we can write
If x = a, in general we cannot write (5.7) (unless a is regular). However, we know that (5.6) depends weakly continuously on x for all x ∈ [a, b]. Thus, in general, it is easy to see that f_a ∈ B_a, cf. (4.24) for example. We shall prove below that any boundary value functional at the endpoint a is of this form (5.11), and similarly at b. This implies (5.9) if W_a(f, g) ≠ 0, from which it follows that {f_a, g_a} is a basis of the vector space W_a; in particular, W_a has dimension 2. But W_a ⊂ Y_a separates the points of Y_a, hence W_a = Y_a = B_a(L), which proves the surjectivity of the map f → f_a. This proves (i) and (iv) completely, and also one implication in (iii). It remains to prove that f_a, g_a are linearly dependent if W_a(f, g) = 0. We prove this with a different notation, which allows us to use what we have already shown. Let f be such that f_a ≠ 0. Then, f_a is part of a basis of W_a = B_a(L); hence, there is g such that {f_a, g_a} is a basis of B_a(L). Then, W_a(f, g) ≠ 0 and we have (5.9). Thus, if W_a(h, f) = 0, then h_a = cW_a(g, h) f_a, so h_a, f_a are linearly dependent.
The space B_a is naturally a symplectic space. In fact, if B_a is nontrivial, then we can find k, h with W_a(k, h) ≠ 0. By the Kodaira identity, we may thus define a form on B_a by setting, for φ, ψ ∈ B_a with f_a = φ and g_a = ψ,

In the literature, boundary functionals are usually described using the notion of a boundary triplet. Let us comment on this concept. Suppose, for definiteness, that ν_a = ν_b = 2. Choose bases φ_a, ψ_a of B_a and φ_b, ψ_b of B_b. Then, we can rewrite Green's formula (4.6) in terms of these bases. The triplet (C², φ, ψ) is often called a boundary triplet in the literature; see, e.g., [2] and references therein. It can be used to characterize the operators between L_min and L_max. Thus, a boundary triplet is essentially a choice of a basis (5.15) in the space of boundary functionals. Such a choice is often natural: in particular, this is the case for regular boundary conditions, see (5.1), (5.2). In our paper, we consider rather general potentials for which there may be no natural choice of (5.15). Therefore, we do not use the boundary triplet formalism.
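The elided display defining the form on B_a can plausibly be reconstructed as follows (our reading; the Kodaira identity makes the right-hand side independent of the chosen representatives):

```latex
% Reconstruction (ours) of the symplectic form on B_a:
% for \varphi,\psi\in B_a choose f,g\in D(L_{\max}) with
% f_a=\varphi and g_a=\psi, and set
[[\varphi\,|\,\psi]]_a \;:=\; \mathcal W_a(f,g).
% The Kodaira identity shows that the value depends only on
% (\varphi,\psi); the form is antisymmetric and nondegenerate on B_a,
% i.e., a symplectic form.
```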

Classification of Endpoints and of Realizations of L
The next fact is a consequence of Theorem 5.5. One may think of the assertion "ν_a(L) can only take the values 0 or 2" as a version of Weyl's dichotomy, cf. Sect. 6.2.

Example 5.9. If L is semiregular at a, then we also have ν_a(L) = 2. Indeed, dim U_a(λ) = 2 (Proposition 2.5), and this implies ν_a(L) = 2 by (the easy part of) Theorem 6.15. We also have the distinguished boundary functional f ↦ f(a), as shown in Proposition 2.5 (2). If u is the solution of Lu = 0 satisfying u(a) = 0, u'(a) = 1, whose existence is guaranteed by Proposition 2.5 (3), then this functional coincides with u_a. However, in general, we do not have another, linearly independent distinguished boundary functional.
As a consequence of Theorem 5.6, we get the following classification of 1d Schrödinger operators in terms of the boundary functionals.
(1) ν_a(L) = ν_b(L) = 0. This is equivalent to L_min = L_a = L_b = L_max.

Consider now the case (4). The domain of a nontrivial realization L_• can then be of codimension 1, 2, or 3 in D(L_max). We will see that the realizations of codimension 2 are the most important.
Each realization of L extending L_min is defined by a subspace C_•. The space C_• is called the space of boundary conditions for L_•. The dimension of C_• coincides with the codimension of D(L_•) in D(L_max).
Definition 5.10. We say that the boundary conditions C_• are separated if

For instance, L_a and L_b are given by the separated boundary conditions B_a, resp., B_b.

Definition 5.11. Let φ ∈ B_a and ψ ∈ B_b. Then, the realization of L with the boundary condition Cφ ⊕ Cψ will be denoted L_{φ,ψ}.
Clearly, L_{φ,ψ} has separated boundary conditions and depends only on the complex lines determined by φ and ψ. More explicitly, recall that if φ ≠ 0, then φ(f) = 0 ⇔ ∃ c(f) ∈ C such that f_a = c(f)φ. We abbreviate L_φ = L_{φ,0} if ψ = 0 and define L_ψ similarly if φ = 0. Thus, L_φ involves no boundary condition at b (the second equality holding if φ ≠ 0). Note that L_{0,0} = L_max.

Properties of Boundary Functionals
The next proposition is a version of [12, XIII.2.27] in our context.

Proof. The first assertion follows from Theorem 5.5 (i) and the relations (5.8). If φ is a boundary value functional at a for L_{a,d}, then clearly φ∘R is a boundary value functional at a for L, and φ ↦ φ∘R is a bijection between B_a(L_{a,d}) and B_a(L).
We note that the space B(L) and its subspaces B a (L), B b (L) depend on L only through the domains D(L max ) and D(L min ). So, in order to compute them one can sometimes change the potential and consider an operator L U := −∂ 2 + U instead of L := −∂ 2 + V . This is especially useful if U is real: for example, U could be the real part of V , if its imaginary part is bounded.
so the norms ∥·∥_L and ∥·∥_{L_U} are equivalent. Then, we use (5.4).
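The comparison of the two graph norms behind this equivalence is elementary; a sketch (ours), assuming W := V − U is bounded:

```latex
\|L_U f\| \;\le\; \|Lf\| + \|W\|_\infty\,\|f\|
\quad\Longrightarrow\quad
\|f\|_{L_U}^2 \;\le\; 2\bigl(1+\|W\|_\infty^2\bigr)\,\|f\|_{L}^2,
```

and the same bound holds with L and L_U interchanged, so the graph norms are equivalent and D(L_max) = D(L_{U,max}) with equivalent Hilbert space structures.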

Infinite Endpoints
Suppose now that our interval is right-infinite. We will show that if the potential stays bounded in average at infinity, then all elements of the maximal domain converge to zero at ∞ together with their derivative, which obviously implies that their Wronskian converges to zero. Of course, an analogous statement is true for a = −∞ on left-infinite intervals.

Spaces U a (λ) and U b (λ)
In this section, we will show that one can compute the boundary indices with the help of eigenfunctions of the operator L which are square integrable around a given endpoint.

Two-Dimensional U a (λ)
The next proposition contains the main technical fact about the dimensions of the spaces U_a(λ).

Proof. We may clearly assume that b is a regular endpoint and f ∈ C¹]a, b]. Let G_← be the backward Green's operator of L (Definition 2.12). If Lf = g, then L(f − G_←g) = 0. Therefore, f − G_←g = αu + βv for some α, β. Set A := |α|² + |β|² and μ(x) := |u(x)|² + |v(x)|². Then, one obtains an estimate of |f(x)| in terms of ∫_x^b μ(y)|f(y)|dy, and the Gronwall Lemma applied to |f|/μ implies (6.4). Clearly, the right-hand side of (6.4) is square integrable.
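For completeness, the backward form of the Gronwall Lemma invoked here reads as follows (standard statement; our formulation):

```latex
% Backward Gronwall Lemma: if \varphi\ge 0 is bounded, k\in L^1, k\ge 0, and
\varphi(x)\;\le\; C+\int_x^{b} k(y)\,\varphi(y)\,dy \qquad (x_0<x<b),
% then
\varphi(x)\;\le\; C\exp\!\Bigl(\int_x^{b} k(y)\,dy\Bigr).
```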
The above proposition has the following important consequence.

The Kernel of L max
Let us describe the relationship between the dimension of the kernel of L_max − λ and the dimensions of the spaces U_a(λ) and U_b(λ). The first proposition is a corollary of Proposition 6.4.

Proposition 6.5. The following statements are equivalent:

The next two propositions are essentially obvious.

Proposition 6.6. Let λ ∈ C. We have dim N(L_max − λ) = 1 if and only if one of the following statements is true: (2) dim U_a(λ) = 2 and dim U_b(λ) = 1.

First-Order ODEs
We will need some properties of vector-valued ordinary differential equations. We will denote by B(C n ) the space of n × n matrices.
The following statement can be proven by the same methods as Proposition 2.2. Clearly, in the following proposition C n can be easily replaced by an arbitrary Banach space.
In particular, the dimension of the space of solutions of

The following theorem is much more interesting. It is borrowed from Atkinson [1, Th. 9.11.2]. Note that in this theorem the finite dimensionality of the space C^n seems essential.
Then, the theorem is equivalent to the following statement: if (6.9) holds for some λ ∈ C, then (6.9) holds for all λ ∈ C. This is what we are going to prove.
Now, let μ ∈ C and assume that (6.9) holds for λ = μ. We have

Y_μ^*(x) J Y_μ(x) = Y_μ^*(a) J Y_μ(a) + (μ − μ̄) ∫_a^x Y_μ^*(y) A(y) Y_μ(y) dy. (6.14)

Using (6.9), we see that Y_μ^*(x) J Y_μ(x) is bounded uniformly in x ∈ [a, b[. By (6.12), its inverse is also bounded uniformly in x ∈ [a, b[. (Here, we use the finiteness of the dimension of C^n!) We have thus proven that (Y_μ^* J Y_μ)^{-1} is uniformly bounded. By (6.9), the norm of Y_μ^* A Y_μ is in L¹]a, b[. Hence, by the second part of Proposition 6.8, Z_λ is uniformly bounded on [a, b[. Now, by using (6.16) we see that (6.9) for λ = μ implies (6.9) for all λ.

Then, Lf = λf can be rewritten as (6.6); hence, the condition (6.7) means that f ∈ L²]a, b[. Note also that the conditions of Theorem 6.9 on J, A and B are satisfied. Theorem 6.9 therefore implies that if all solutions of Lf = λf are square integrable for one λ, then they are square integrable for all λ. We thus obtain an alternative proof of Proposition 6.4.

Von Neumann Decomposition
Von Neumann's theory for the classification of self-adjoint extensions of Hermitian operators is well-known, cf. [12,25]. In the present subsection, we investigate how to adapt it to the case of complex potentials. First recall that D(L_max) has a Hilbert space structure inherited from its graph, which is a closed subspace of L²]a, b[ ⊕ L²]a, b[. Von Neumann's formalism is particularly efficient for real potentials and gives more precise results than in the complex case, so for completeness we begin with some comments on the real case. Then, we explore what can be done for arbitrary complex potentials. The differences between the real and the complex case are significant, the difficulties being related to the fact that in the complex case there is no simple relation between the (geometric) limit point/circle method and the dimension of the spaces U_a(λ), cf. Sect. 8.5.
If V is real, then L̄ = L, L_min is Hermitian, and L_max = L*_min; hence B(L) ≃ N(L²_max + 1). (6.23) Then, by using the relation L² + 1 = (L − i)(L + i), it is easy to prove that

The last sum is obviously algebraically direct, but it is also orthogonal for the scalar product (6.20); hence, we have an orthogonal direct sum decomposition

The map f ↦ f̄ is a real linear isomorphism of N(L_max − i) onto N(L_max + i); hence, these spaces have equal dimension ≤ 2, and so dim B(L) = 2 dim N(L_max − i) ∈ {0, 2, 4}. Of course, we have already proved this in a much simpler way, but (6.25) also gives, via a simple argument, the following: if V is real then

The simplicity of the treatment in the real case is due to the possibility of working with (6.24), which involves only the second-order operators L_max ± i, instead of (6.23), which involves the operator L²_max of order 4. We do not have such a simplification in the complex case, where L̄_max L_max + 1 is formally a fourth-order differential operator with very singular coefficients, since V is only locally L¹.
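For the reader's convenience, the classical von Neumann formulas to which (6.24) and (6.25) refer can be written as follows (our reconstruction for the real case; both sums are orthogonal for the graph scalar product (6.20)):

```latex
N(L_{\max}^2+1) \;=\; N(L_{\max}-i)\,\oplus\, N(L_{\max}+i),
\qquad \text{(cf. (6.24))}
```

```latex
D(L_{\max}) \;=\; D(L_{\min})\,\oplus\, N(L_{\max}-i)\,\oplus\, N(L_{\max}+i).
\qquad \text{(cf. (6.25))}
```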
Let us show how to generalize von Neumann's analysis to the complex case. We will follow [15, Theorem 9.1] which in turn is a consequence of [1, Theorem 9.11.2]. The nontrivial part of Theorem 6.15 is due to Race [22,Theorem 5.4].
We need to study the equation (6.26). More precisely, by a solution of (6.26) we will mean f ∈ AC¹]a, b[ such that

We still prefer to transform (6.29) into a first-order system of 4 equations. To this end, we introduce

Proof. It is immediate to see that if (L + λI)F = 0, then

Consider the equation
Therefore, G = F', so that F ∈ AC¹(]a, b[, C²) and QF' + (W + λI)F = 0.

Proof. By Lemmas 6.12 and 6.13, instead of (L̄L + λ)f = 0 we can consider J∂_x φ = (λA + B)φ, and the square integrability of f is equivalent to the integrability of ⟨φ(x)|Aφ(x)⟩ = |φ₁(x)|², since f = φ₁ under the identification. Note that J, A are constant 4×4 matrices with J* = −J, A* = A, J^{-1}A a real matrix, and B(x)* = B(x) belongs to L¹_loc]a, b]. Thus, Eq. (6.30) satisfies the assumptions of Theorem 6.9. That theorem says that if for some λ the functions ⟨φ(x)|Aφ(x)⟩ are integrable for all solutions φ of (6.30), then this is so for every λ. This proves the first statement of the lemma. Now, suppose that all solutions of (L̄L + λ)f = 0 belong to L²]a, b[ for all λ. In particular, all solutions of L̄Lf = 0 are square integrable. Since Lf = 0 ⇒ L̄Lf = 0, any solution of Lf = 0 is square integrable, and hence so is any solution of L̄f = 0.

Theorem 6.15. The following assertions are equivalent and true:

Proof. The equivalences (6.31) follow from (6.32) by taking into account Theorem 5.6 and the fact that the dimension of U_a(λ) is ≤ 1 if it is not 2. Thus, we only have to discuss (6.32). The second equivalence in (6.32) is a consequence of Proposition 6.4. It is easy to see that ν_a(L) = 2 if dim U_a(λ) = 2 for some complex λ. Indeed, let u, v be solutions of the equation (L − λ)f = 0 such that W(u, v) = 1. Then, if all the solutions of (L − λ)f = 0 are square integrable near a, we get W_a(u, v) = 1, hence W_a ≠ 0, so that ν_a(L) = 2.
In what follows, we consider the nontrivial part of the theorem: we assume ν_a(L) = 2 and show that dim U_a(0) = 2. Clearly, we may assume that b is a regular endpoint; if not, we replace it by any number between a and b. Then, ν(L) = 2 ⇔ ν_a(L) = 0 and ν(L) = 4 ⇔ ν_a(L) = 2, so we have to show that ν(L) = 4 ⇒ dim N(L_max) = 2. Since ν(L) = dim B(L) and N(L̄_max L_max + 1) ≃ B(L) by (6.22), it suffices to prove dim N(L̄_max L_max + 1) = 4 ⇒ dim N(L_max) = 2.
(6.34) By Proposition 6.8, the space of solutions of the first-order system (6.30) is four dimensional. Therefore, Lemmas 6.12 and 6.13 imply that the space of solutions of (L̄L + λ)f = 0 is four dimensional. Hence, dim N(L̄_max L_max + 1) = 4 implies that all solutions of (L̄L + 1)f = 0 are square integrable. Now, by Lemma 6.14 applied to λ = 1, all solutions of Lf = 0 are square integrable.

Integral Kernel of Green's Operators
Recall that in Definition 3.7 we introduced the concept of a right inverse of a closed operator. In the context of 1d Schrödinger operators, right inverses of L_max will be called L² Green's operators. Thus, G_• is an L² Green's operator if it is bounded, R(G_•) ⊂ D(L_max), and L_max G_• = 1.
Let G_• be a Green's operator in the sense of Definition 2.9. Clearly, if G_• is bounded, then it has a unique extension to a bounded operator on L²]a, b[. This extension, which by Proposition 3.12 is an L² Green's operator, will be denoted by the same symbol G_•.
Note that the pair L max , L min satisfies L min = L # max ⊂ L max , which are precisely the properties discussed in Sect. 3.6. Recall from that subsection that L 2 Green's operators whose inverse contains L min correspond to realizations of L that are between L min and L max . The following proposition is devoted to properties of such Green's operators.
Recall that for any x ∈]a, b[ we denote by L a,x , resp., L x,b the restriction of L to L 2 ]a, x[, resp., L 2 ]x, b[. We also can define L a,x max and L x,b max , etc. Note that x is a regular point of both L a,x and L x,b (V is integrable on a neighborhood of x).
is a function separately continuous in x and y which has the following properties:

(1) for each a < x < b, the function G_•(x, ·) restricted to ]a, x[, resp., ]x, b[, belongs to D(L^{a,x}_max), resp., D(L^{x,b}_max), and satisfies LG_•(x, ·) = 0 outside x. Besides, G_•(x, ·) and its derivative have limits at x from the left and from the right satisfying

(2) for each a < y < b, the function G_•(·, y) restricted to ]a, y[, resp., ]y, b[, belongs to D(L^{a,y}_max), resp., D(L^{y,b}_max), and satisfies LG_•(·, y) = 0 outside y. Besides, G_•(·, y) and its derivative have limits at y from the left and from the right satisfying

Proof. We shall use ideas from the proof of Lemma 4, p. 1315 in [12]. G_• is a continuous linear map G_• : L²]a, b[ → D(L_max), and for each x ∈ ]a, b[ we have a continuous linear form ε_x : f ↦ f(x) on D(L_max); hence, we get a map φ : ]a, b[ → L²]a, b[ which is continuous, and even locally Lipschitz, and (7.1) can be rewritten as

Since G^#_• is also an L² Green's operator, we have L_min ⊂ L_• ⊂ L_max. Assuming that g ∈ D(L_min) and g(y) = 0 in a neighborhood of x, we can rewrite (7.2) accordingly. We may compute the last two terms explicitly because x is a regular end of both intervals. Thus, we get

The values g(x) and g'(x) may be specified in an arbitrary way under the condition g ∈ D(L_min), so we get φ_x(x + 0) − φ_x(x − 0) = 0 and φ'_x(x − 0) − φ'_x(x + 0) = 1. Thus, φ_x must be a continuous function which is continuously differentiable outside x and whose derivative has a jump at x. G^#_• is also an L² Green's operator, and clearly G^#_• has kernel G^#_•(x, y) = φ_y(x). Repeating the above arguments applied to G^#_•, we obtain the remaining statements of the proposition.
Let us describe a consequence of the above proposition; we use the notation of Definition 6.1.

Proposition 7.2.
If there exists a realization of L such that λ ∈ C is in its resolvent set, then dim U a (λ) ≥ 1 and dim U b (λ) ≥ 1.
Proof. Suppose that L possesses a realization with λ ∈ C contained in its resolvent set. This means that L − λ possesses an L² Green's operator G_•. By Proposition 3.23, it can be chosen to satisfy G_• = G^#_•. Then, Proposition 7.1 implies that for any

In order to prove that dim U_a(λ) ≥ 1, it suffices to show that there is x such that G_•(x, ·)|_{]a,x[} ≠ 0; the argument for the other endpoint is similar.
If the required assertion were not true, then we would have G_•(x, ·)|_{]x,b[} = 0 for every x, in other terms G_•(x, y) = 0 for all a < x < y < b. Since G_• is self-transposed, (3.9) gives G_•(x, y) = G_•(y, x) for all x, y. Hence, we would also have G_•(x, y) = 0 for a < y < x < b. But this means G_• = 0, which is false.

Forward and Backward Green's Operators
Let us study the L² theory of the forward Green's operator G_→. Recall that if u, v span N(L) with W(v, u) = 1 (for the Wronskian convention W(f, g) = fg' − f'g), then G_→ is given by

(G_→ g)(x) = v(x) ∫_a^x u(y)g(y)dy − u(x) ∫_a^x v(y)g(y)dy.
(7.5) (4) G_← has analogous properties. In particular, we have

Proof. By hypothesis, u, v ∈ L²]a, b[. The Hilbert–Schmidt norm of G_→ is clearly bounded by √2 ∥u∥₂ ∥v∥₂. Then, by Proposition 3.10, zero belongs to the resolvent set of L_a, L_a^{-1} = G_a, and D(L_max) = D(L_a) ⊕ N(L_max), (7.7) which can be restated as the decomposition (7.5). If λ ∈ C and V is replaced by V − λ, then the new G_→ will be the resolvent at λ of L_a, which proves the second assertion in (2). Finally, (7.6) is proved by a simple computation.

Proof. Let G_→ be bounded. Then, so is G^#_→ = G_←. Let us recall the identity (2.23):

But the boundedness of the rhs of (7.8) implies v, u ∈ L²]a, b[.

G_→ is useful even if it is not a bounded operator, especially if dim U_a(0) = 2:

Proof. Let a < d < b. Then, we can restrict our problem to ]a, d[. Now, dim U_a(0) = dim U_d(0) = 2. Therefore, we can apply Proposition 7.3, using the fact that G_→ restricted to L²]a, d[ is an L² Green's operator of L_{a,d}.
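As a quick sanity check of the variation-of-constants formula for G_→, here is a small numerical sketch (ours, not from the paper) for the free operator L = −d²/dx² on ]0, 1[, where u(x) = x and v(x) = 1 span N(L) and W(v, u) = 1 for the convention W(f, g) = fg' − f'g:

```python
# Forward Green's operator for L = -d^2/dx^2 on ]0,1[ (our sketch):
#   (G_> g)(x) = v(x) int_0^x u g dy - u(x) int_0^x v g dy
#              = int_0^x (y - x) g(y) dy,
# so that -f'' = g with f(0) = f'(0) = 0 (no boundary condition at b).
import numpy as np

def cumtrapz(y, x):
    """Cumulative trapezoid rule: out[i] ~ integral_{x[0]}^{x[i]} of y."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

x = np.linspace(0.0, 1.0, 2001)
g = np.ones_like(x)                             # right-hand side g == 1
f = cumtrapz(x * g, x) - x * cumtrapz(g, x)     # (G_> g)(x); exactly -x^2/2 here
```

For g ≡ 1 the integrals are exact under the trapezoid rule, and one checks that f(0) = 0 and −f'' = g in the interior, as the formula predicts.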
The main assertion of Theorem 6.15 is, technically speaking, that dim U_a(0) = 2 if ν_a(L) = 2. We may restate an improved version of this assertion as a boundary value problem, and this is of a certain interest: it says that if ν_a(L) = 2, then the endpoint a behaves almost as if it were a regular endpoint (in the regular case one works with L¹ instead of L²). Note that since only the behavior of the solutions near a matters, we may assume that b is a regular endpoint.

Proposition 7.6. Suppose that ν_a(L) = 2 and b is a regular endpoint for L. Let φ, ψ ∈ B_a(L) be a pair of linearly independent boundary value functionals. Then, the linear continuous map

is bounded and invertible. In particular, for any g ∈ L²]a, b[ and any α, β ∈ C, there is a unique f ∈ D(L_max) such that Lf = g, φ(f) = α, and ψ(f) = β.
Proof. By Proposition 7.3, the operator L_a : D(L_a) → L²]a, b[ is bijective; hence, the map (7.12) is injective. Since the map is clearly continuous, by the open mapping theorem it suffices to prove its surjectivity. Let g ∈ L²]a, b[ and α, β ∈ C. Since L_a is surjective, there is h ∈ D(L_a) such that Lh = g. It now suffices to show that there is k ∈ N(L_max) such that φ(k) = α and ψ(k) = β, because then f = h + k ∈ D(L_max) will satisfy Lf = g, φ(f) = α, and ψ(f) = β. Clearly, it suffices to prove this for just one couple φ, ψ. Since N(L_max) is two dimensional, there are u, v ∈ N(L_max) with W(u, v) = 1, and we may take φ = u_a and ψ = v_a since, by Theorem 5.5, the boundary value functionals u_a, v_a ∈ B_a(L) are linearly independent. Then, it suffices to take k = −βu + αv.

Green's Operators with Two-Sided Boundary Conditions
Recall from Definition 5.11 that if φ ∈ B_a and ψ ∈ B_b are nonzero functionals, then L_{φ,ψ} is the operator L_{φ,ψ} ⊂ L_max with

Clearly, there exists a close relationship between realizations of L of the form L_{φ,ψ} and Green's operators of the form G_{u,v}.

Proposition 7.7. Suppose φ ∈ B_a, ψ ∈ B_b and 0 ∈ rs(L_{φ,ψ}). Then, there exist u ∈ U_a(0) and v ∈ U_b(0) with W(v, u) ≠ 0 such that, in the notation of Definition 5.4,

Proof. Let us prove the existence of u. Note that by Proposition 7.2 we have dim U_a(0) ≥ 1. Then, by Proposition 7.1, the operator L^{-1}_{φ,ψ} has an integral kernel G_{φ,ψ}(·, ·) such that for any a < c < b the restriction of G_{φ,ψ}(c, ·) to ]a, c[ belongs to D(L^{a,c}_max) and satisfies LG_{φ,ψ}(c, ·) = 0. If f ∈ D(L_{φ,ψ}), the relation (7.1) gives; hence, if f(x) = 0 for x > c, we have 0 = ∫_a^c G_{φ,ψ}(c, y)(L_{φ,ψ}f)(y)dy. (7.14) Denote by L^{a,c}_{φ,c} the operator in L²]a, c[ defined by L and the boundary conditions φ(f) = 0 and f(c) = f'(c) = 0. Clearly, any function f satisfying such conditions extends to a function in D(L_{φ,ψ}) if we set f(x) = 0 for x > c; hence, (7.14) is equivalent to

We noted above that L^#_{φ,ψ} = L_{φ,ψ}, and by a simple argument this implies (L^{a,c}_{φ,c})^# = L^{a,c}_{φ,0} ≡ L^{a,c}_φ; hence, the preceding relation means G_{φ,ψ}(c, ·)|_{]a,c[} ∈ N(L^{a,c}_φ). Now, recall that in the proof of Proposition 7.2 we have seen that c may be chosen such that G_{φ,ψ}(c, ·)|_{]a,c[} ≠ 0. Finally, if we fix such a c and set u = G_{φ,ψ}(c, ·), then we get a nonzero element u ∈ U_a(0) such that φ(u) = 0, which, since u ≠ 0, is equivalent to φ = α u_a.
In an analogous way, we prove the existence of v. Both u and v are nonzero. If u were proportional to v, then they would be eigenvectors of L_{φ,ψ} for the eigenvalue 0, which contradicts 0 ∈ rs(L_{φ,ψ}). Hence, they are not proportional to one another, so that W(v, u) ≠ 0.
Note that in the above proposition we can have φ = 0 or ψ = 0, or both. However, u and v are always nonzero.
Suppose now that we start from a two-sided Green's operator. Until the end of this subsection, we assume that u ∈ U a (0) and v ∈ U b (0) and the functionals φ, ψ are given by (7.13). Thus, we have both Green's operator G u,v and the operator L φ,ψ .
Let χ ∈ C^∞]a, b[ be such that χ = 1 close to a and χ = 0 close to b. Clearly,

We will show that G_{u,v} is bounded if and only if 0 ∈ rs(L_{φ,ψ}). However, it seems that there is no guarantee that G_{u,v} is bounded in general.
If this is the case, then 0 belongs to the resolvent set of L_{φ,ψ}, and we have

Proof. It is easy to see that

∫_a^x u(y)g(y)dy.
Assume that G_{u,v} is bounded on L²]a, b[. By Proposition 3.12, G_{u,v} is an L² Green's operator. By Proposition 3.9, it is also bounded from L²]a, b[ to D(L_max). Therefore, (7.16) then extends to (7.27), so that (7.29) holds.

Classification of Realizations with Non-empty Resolvent Set
In applications, well-posed operators (those possessing a non-empty resolvent set) are by far the most useful. The following theorem gives a classification of the realizations of L with this property.
Theorem 7.10. Suppose that L • is a realization of L with a non-empty resolvent set. Then, exactly one of the following statements is true.
Then, also L_min = L_•, so that L possesses a unique realization.

L_• is self-transposed and has separated boundary conditions. (See Definition 5.10 for separated boundary conditions.) Then, the inclusion D(L_min) ⊂ D(L_•) is of codimension 1 and we have ν(L) = 2.
Then, the inclusion D(L min ) ⊂ D(L • ) is of codimension 2. We have ν(L) = 4. The spectrum of L • is discrete, and its resolvents are Hilbert-Schmidt.
If in addition L • is self-transposed, has separated boundary conditions, and λ ∈ rs(L • ), then we can find u ∈ U a (λ) and v ∈ U b (λ) with W (v, u) = 1, such that If, instead, L • is not self-transposed and has separated boundary conditions, then it has empty spectrum and one of the following possibilities holds:

Existence of Realizations with Non-empty Resolvent Set
The set C\R is contained in the resolvent set of every self-adjoint operator. The following proposition generalizes this fact.
Proposition 7.11. Let V_R and V_I be the real and imaginary parts of V. Assume ∥V_I∥_∞ =: β < ∞. Then, the region (7.31) is contained in the resolvent set of some realization of L, and all realizations of L possess only discrete spectrum in (7.31).
Let L_R := −∂²_x + V_R. By Theorem 4.4, L_{R,min} is densely defined and L^#_{R,min} = L_{R,max} ⊃ L_{R,min}. By the reality of V_R, L*_{R,min} = L^#_{R,min}. Therefore, L*_{R,min} ⊃ L_{R,min}, which means that L_{R,min} is Hermitian (symmetric). Let us now apply the well-known theory of self-adjoint extensions of Hermitian operators. Let d_± := dim N(L*_{R,min} ∓ i) be the deficiency indices. Using the fact that L_{R,min} is real, we conclude that d_+ = d_−. Therefore, L_{R,min} possesses at least one self-adjoint extension, which we denote L_{R,•}. By the self-adjointness of L_{R,•}, we have ∥(L_{R,•} − λ)^{-1}∥ ≤ |Im λ|^{-1} for all λ ∉ R. Set L_• := L_{R,•} + iV_I. Then, for |Im λ| > β, λ belongs to the resolvent set of L_•, and its resolvent is given by a convergent Neumann series. Note that the above proposition can be improved to cover some singularities of V_I: if there are numbers α, β with 0 ≤ α < 1 such that a corresponding relative bound holds, then the conclusion of Proposition 7.11 remains valid.
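The elided resolvent formula is presumably the Neumann series obtained from the factorization L_• − λ = (1 + iV_I(L_{R,•} − λ)^{-1})(L_{R,•} − λ); a sketch (our reconstruction):

```latex
(L_\bullet-\lambda)^{-1}
=(L_{R,\bullet}-\lambda)^{-1}
\sum_{n=0}^{\infty}\bigl(-iV_I\,(L_{R,\bullet}-\lambda)^{-1}\bigr)^{n},
```

which converges in norm because ∥V_I(L_{R,•} − λ)^{-1}∥ ≤ β/|Im λ| < 1 when |Im λ| > β.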

"Pathological" Spectral Properties
We now construct 1d Schrödinger operators all of whose realizations have an empty resolvent set. Such operators seem rather pathological and not very interesting for applications.
Proof. Let I_n := ]n² − n, n² + n[ with n ≥ 1 an integer. Then, I_n is an open interval of length |I_n| = 2n, and I_{n+1} starts at the point n² + n, which is the right endpoint of I_n. Thus, ∪_n I_n is a disjoint union equal to ]0, ∞[ \ {n² + n | n ≥ 1}. Let P = {2, 3, 5, ...} be the set of prime numbers, and for each prime p put J_p := ∪_{k≥1} I_{p^k}. We get a family of pairwise disjoint open subsets J_p of ]0, ∞[, each of which contains intervals of length as large as we wish. Now, let p ↦ c_p be a bijective map from P onto the set of complex rational numbers, and define a function V : [0, ∞[ → C by the following rules: V(x) = c_p if x ∈ J_p for some prime p, and V(x) = 0 if x ∉ ∪_p J_p. Then, V is a locally bounded function whose range contains all complex rational numbers.

We set L = −∂² + V(x) and prove that the spectrum of any L_• with L_min ⊂ L_• ⊂ L_max is equal to C. Since the spectrum is closed, it suffices to show that any complex rational number c belongs to the spectrum of any L_•. If not, there is a number α > 0 such that ∥(L_• − c)φ∥ ≥ α∥φ∥ for any φ ∈ D(L_•). If r is a (large) positive number, then there is an open interval I of length ≥ r such that V(x) = c on I. Let φ ∈ C^∞_c(I) be such that φ(x) = 1 for x at distance ≥ 1 from the boundary of I and with |φ''| ≤ β for a constant β independent of r (take r > 3, for example). Then, φ ∈ D(L_min) and α∥φ∥ ≤ ∥(L_• − c)φ∥ = ∥φ''∥ stays bounded as r → ∞, which is impossible because the left-hand side is of order √r. One may choose V of class C^∞ by a simple modification of this construction.
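The interval bookkeeping in this construction can be checked mechanically; a small sketch (ours, not from the paper):

```python
# I_n = ]n^2 - n, n^2 + n[ has length 2n, and I_{n+1} begins exactly at
# the right endpoint n^2 + n of I_n, so the I_n are pairwise disjoint
# and their union is ]0, oo[ minus the points n^2 + n.
def I(n):
    """Endpoints of the open interval I_n = ]n^2 - n, n^2 + n[."""
    return (n * n - n, n * n + n)

intervals = [I(n) for n in range(1, 50)]
lengths = [right - left for (left, right) in intervals]
# J_p = union over k >= 1 of I_{p^k} contains intervals of length
# 2 p^k, hence intervals as long as we wish.
```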

Recall that an operator
that is, if its numerical range is contained in {λ ∈ C | Im λ ≤ 0}. It is called maximal dissipative if, in addition, its spectrum is contained in {λ ∈ C | Im λ ≤ 0}. The following criterion is well-known [18].

Proof. If f ∈ D(L_max), then, integrating by parts on ]a₁, b₁[,

Im ∫_{a₁}^{b₁} (Lf)f̄ = ∫_{a₁}^{b₁} Im(V)|f|² + Im(f'(a₁)f̄(a₁)) − Im(f'(b₁)f̄(b₁)).

Thus, (8.3) follows. By continuity, (8.3) extends to f ∈ D(L_min), which clearly implies that L_min is dissipative. The same argument as above shows that −L̄_min is dissipative. If L_min = L_max, then L̄_min = L̄_max. But L*_min = L̄_max. Hence, −L*_min is dissipative. This proves the maximal dissipativity of L_min.
If L_min ≠ L_max, then the spectrum of L_min is C, so L_min is not maximal dissipative. This is related to a certain difficulty arising when one tries to study the dissipativity of Schrödinger operators with singular complex potentials. Suppose that we want to check that L_• is dissipative using this criterion.

Proof. By Proposition 7.9, L^#_{αβ} = L_{αβ}. Clearly, L̄_{αβ} = L_{ᾱβ̄}. Therefore, the proposition follows from L*_{αβ} = L̄^#_{αβ}.

In the following two subsections, we describe boundary conditions that guarantee the dissipativity of L_{αβ}. We consider separately two classes of potentials: V ∈ L²_loc]a, b[ and V ∈ L¹]a, b[.

Dissipative Boundary Conditions for Locally L 2 Potentials
Note first the following sesquilinear version of Green's identity (4.6).
And then L αβ is maximal dissipative.
Then, by choosing f ∈ D(L_{αβ}) equal to zero near b, we get (1/2i)W_a(f, f̄) ≤ ∫_a^b Im(−V)|f|². If we fix such an f and replace it in this estimate by fθ, where θ ∈ C^∞(R) with 0 ≤ θ ≤ 1 and θ(x) = 1 on a neighborhood of a, then we get (1/2i)W_a(f, f̄) ≤ ∫_a^b Im(−V)|fθ|². Since the right-hand side here can be made as small as we wish by taking θ equal to zero for x > d > a with d close to a, we see that we must have (1/2i)W_a(f, f̄) ≤ 0, and this clearly implies the same inequality for any f ∈ D(L_{αβ}). Then, we get (1/2i)[[α|ᾱ]]_a ≤ 0 by Lemma 8.5. We similarly prove (1/2i)[[β|β̄]]_b ≥ 0. This proves the implication ⇒ in (8.10), and ⇐ is clear by (8.12). It remains to show the maximal dissipativity assertion. Due to Propositions 8.1 and 8.3, it suffices to prove that the operator −L*_{αβ} = −L̄_{ᾱβ̄} is dissipative. Observe first that D(L̄_max) consists of the complex conjugates of the elements of D(L_max), and similarly for the domains of the realizations. Then, (8.11) gives, instead of (8.12), the analogous condition with ∫ Im(−V)|f|² for all f in the corresponding domain.
As above we get

Dissipative Regular Boundary Conditions
Suppose that the operator L has a regular left endpoint a. As we noted several times, for regular boundary conditions B_a can be identified with C². Indeed,

is the general form of a boundary functional, with α = (α₀, α₁) ∈ C² and f ∈ D(L_max). The space B_a is equipped with the symplectic form [[·|·]]_a, which coincides with the usual (two-dimensional) vector product:

Thus, if we write f_a := (f(a), f'(a)), an alternative notation for α(f) is

Note that there is no guarantee that D(L_min) and D(L_max) are invariant with respect to complex conjugation. However, the space B_a ≃ C² is equipped with

Here is a version of Lemma 8.4 for the regular case.

Next, we have a version of Theorem 8.6 for the regular case. Fix nonzero vectors α, β and define L_{αβ} by imposing the boundary conditions at a and b:

In this context, it is quite easy to prove that L*_{αβ} = L_{ᾱβ̄}. L_{αβ} is dissipative ⇔ Im V ≤ 0, Im(α₀ᾱ₁) ≤ 0, and Im(β₀β̄₁) ≥ 0.
And in this case L αβ is maximal dissipative.
Proof. The proof is similar to that of Theorem 8.6, but much simpler. We use Lemma 8.8 instead of Lemma 8.4 and get the same relation (8.12) as a necessary and sufficient condition for dissipativity. Then, we use (1/2i)[[α|ᾱ]]_a = (1/2i)(α₀ᾱ₁ − α₁ᾱ₀) = Im(α₀ᾱ₁) and a similar relation for β. Finally, when checking the dissipativity of −L*_{αβ}, note that this operator is associated with the differential expression ∂² − V̄, which explains the difference of sign.
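A finite-difference sanity check (ours, not from the paper) of the dissipativity mechanism: for a discretized −d²/dx² + V with Im V ≤ 0 and Dirichlet conditions f = 0 at both ends (for which the boundary terms Im(f'(a)f̄(a)) and Im(f'(b)f̄(b)) vanish), the numerical range of the matrix lies in the closed lower half-plane:

```python
# The numerical range of a matrix A lies in {Im z <= 0} iff the
# Hermitian matrix (A - A*)/2i is negative semidefinite; for the model
# below that matrix is simply diag(Im V), with Im V < 0.
import numpy as np

n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
V = x**2 - 1j * np.exp(x)                  # an arbitrary potential with Im V < 0

# central-difference -f'' with Dirichlet boundary conditions:
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2 + np.diag(V)

imag_part = (A - A.conj().T) / 2j          # equals diag(Im V) here
top = np.linalg.eigvalsh(imag_part).max()  # largest eigenvalue of Im-part
```

Since `top` < 0, every vector f satisfies Im⟨f, Af⟩ ≤ 0, the discrete analogue of dissipativity.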

Weyl Circle in the Regular Case
In this subsection, we fix a regular operator L whose potential has a negative imaginary part. We study solutions of (L − λ)f = 0 for Im λ > 0 with real boundary conditions. In Theorem 8.10, we show that they define a certain circle in the complex plane called the Weyl circle. This result will be needed in the next subsection, Sect. 8.5, where general boundary conditions are studied.
We will use an argument essentially due to H. Weyl in the real case, cf. [5,20,21] for example. The Weyl circle for potentials with semi-bounded imaginary part was first treated in [24], see [3] for more recent results.
Let us denote U := Im(λ − V) and (f|g)_U := ∫_a^b f ḡ U. (8.15) We set ∥f∥²_U = (f|f)_U and note that if U ≥ 0, then (·|·)_U is a positive Hermitian form; we denote by ∥·∥_U the corresponding seminorm. Now, if f, g ∈ D(L_max) and Lf = λf, Lg = λg for some complex number λ, then (8.7) can be rewritten as (8.16).

Proof. From Lemma 8.7 (2) and the reality of the boundary conditions at a, we get W_a(u, ū) = 0, W_a(v, v̄) = 0. (8.19) This implies (8.20) due to (8.16). And if w is as in the first part of the theorem, then the same argument gives

Since u, v are linearly independent solutions of Lf = λf, if w is another solution, then w = mu + nv for uniquely determined complex numbers m, n. Since W(v, u) = 1, we see that n = 1. Now, fix w = mu + v. Using (8.19) and W_a(u, v̄) = −1, we get W_a(w, w̄) = |m|²W_a(u, ū) + mW_a(u, v̄) + m̄W_a(v, ū) + W_a(v, v̄) = 2i Im m.
To prove the converse part of the theorem, consider a point m on this circle and let w = mu + v. Clearly, Lw = λw and W(w, u) = 1, and the computation (8.22) gives W_a(w, w̄) = 2i Im m. We also have (8.24), because it just says that m is on the circle (8.18). Thus, we have ∥w∥²_U = Im m = W_a(w, w̄)/2i, and then (8.16) implies W_b(w, w̄) = 0. Therefore, by Lemma 8.7, w satisfies a real boundary condition at b. This proves the final assertion of the theorem.
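The identity behind (8.16) can be derived directly (our sketch, for the Wronskian convention W(f, g) = fg' − f'g): if Lf = λf and Lg = λg, then

```latex
\frac{d}{dx}\,W(f,\bar g)
  \;=\; f\bar g''-f''\bar g
  \;=\; 2i\,\mathrm{Im}(\lambda-V)\,f\bar g
  \;=\; 2i\,U f\bar g,
\qquad\text{hence}\qquad
W_b(f,\bar g)-W_a(f,\bar g)\;=\;2i\,(f|g)_U .
```

Taking f = g = w, this is exactly the relation that links ∥w∥²_U to the boundary Wronskians used in the proof.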

Limit Point/Circle
In this section, we again assume that Im V ≤ 0 and Im λ > 0. We allow b to be an irregular endpoint, but we assume that a is a regular endpoint; thus, V ∈ L¹_loc[a, b[. This class of potentials was first considered in [24]; see [3] for more general conditions. Using Theorem 8.10, we will obtain a classification of the possible behaviors of L around b into three categories. This classification can be called the Weyl trichotomy. It replaces the Weyl dichotomy, the well-known classification of irregular endpoints for real potentials.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The following proposition is well-known and easy to prove. Perhaps the only nontrivial point is (4) for an infinite-dimensional V, where the usual induction argument needs the Zorn Lemma.

Appendix A. Symplectic Spaces
Proposition A.1. (1) If V is a symplectic space, then dim V is even or infinite.
(2) If W is a subspace of V, then dim W = dim V/W s⊥ .