Abstract
By developing the method of multipliers, we establish sufficient conditions on the magnetic field and the complex, matrix-valued electric potential, which guarantee that the corresponding system of Schrödinger operators has no point spectrum. In particular, this allows us to prove analogous results for Pauli operators under the same electromagnetic conditions and, in turn, as a consequence of the supersymmetric structure, also for magnetic Dirac operators.
1 Introduction
1.1 Objectives and state of the art
Understanding electromagnetic phenomena has played a fundamental role in quantum mechanics. The simplest mathematical model for the Hamiltonian of an electron, subject to an external electric field described by a scalar potential \(V:\mathbb {R}^3\rightarrow \mathbb {R}\) and an external magnetic field \(B={{\,\mathrm{curl}\,}}A\) with a vector potential \(A:\mathbb {R}^3 \rightarrow \mathbb {R}^3\), is given by the Schrödinger operator
$$\begin{aligned} H_{\text {S}}(A,V) := -\nabla _{\!A}^2 + V \qquad \text {in}\qquad L^2(\mathbb {R}^3), \end{aligned}$$(1.1)
where \(\nabla _{\!A} := \nabla + i A\) is the magnetic gradient.
Unfortunately, the mathematically elegant model (1.1) is not sufficient to explain finer electromagnetic effects, for it disregards an inner structure of electrons, namely their spin. A partially successful attempt to take the spin into account is to enrich the algebraic structure of the Hilbert space and consider the Pauli operator
$$\begin{aligned} H_{\text {P}}(A,\varvec{V}) := -\nabla _{\!A}^2\, \varvec{I}_{\mathbb {C}^{\varvec{2}}} + \sigma \cdot B + \varvec{V} \qquad \text {in}\qquad L^2(\mathbb {R}^3;\mathbb {C}^2), \end{aligned}$$(1.2)
where \(\sigma := (\varvec{\sigma }_{\varvec{1}},\varvec{\sigma }_{\varvec{2}},\varvec{\sigma }_{\varvec{3}})\) are Pauli matrices. Here the term \(\sigma \cdot B\) describes the interaction of the spin with the magnetic field and \(\varvec{V} := V \varvec{I}_{\mathbb {C}^{\varvec{2}}}\) stands for the electric interaction as above.
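The spin-magnetic term arises upon squaring the magnetic momentum contracted with the Pauli matrices; a standard computation, using the identity \(\varvec{\sigma }_{\varvec{j}}\varvec{\sigma }_{\varvec{k}} = \delta _{jk}\varvec{I}_{\mathbb {C}^{\varvec{2}}} + i \epsilon _{jkl}\varvec{\sigma }_{\varvec{l}}\), gives
$$\begin{aligned} \bigl (\sigma \cdot (-i\nabla +A)\bigr )^2 = (-i\nabla +A)^2\, \varvec{I}_{\mathbb {C}^{\varvec{2}}} + \sigma \cdot B = -\nabla _{\!A}^2\, \varvec{I}_{\mathbb {C}^{\varvec{2}}} + \sigma \cdot B , \end{aligned}$$
so the purely magnetic Pauli operator is precisely the square of the spin-contracted magnetic momentum.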
To get a more realistic description of the electron, subject to an external electromagnetic field, one has to take relativistic effects into account. A highly successful model is given by the Dirac operator
$$\begin{aligned} H_{\text {D}}(A,\varvec{V}) := -i\, \alpha \cdot \nabla _{\!A} + \frac{1}{2}\, \beta + \varvec{V} \qquad \text {in}\qquad L^2(\mathbb {R}^3;\mathbb {C}^4), \end{aligned}$$(1.3)
where \(\alpha := (\varvec{\alpha }_{\varvec{1}},\varvec{\alpha }_{\varvec{2}},\varvec{\alpha }_{\varvec{3}})\) and \(\beta \) are Dirac matrices and \(\varvec{V} := V \varvec{I}_{\mathbb {C}^{\varvec{4}}}\).
The principal objective of this paper is to develop the so-called method of multipliers in order to establish spectral properties of the Pauli and Dirac operators. This technique comes from partial differential equations, but it seems to be much less known in spectral theory. We are primarily interested in physically relevant sufficient conditions, which guarantee the absence of point spectra (including possibly embedded eigenvalues).
As far as the absence of embedded eigenvalues is concerned, the method of multipliers can nowadays be considered a valid alternative to the routine approach based on Carleman estimates, which, after the appearance of Kato’s work [23] on Schrödinger operators, has been consistently used to disprove the presence of positive eigenvalues in the continuous spectrum of diverse Hamiltonians (refer, for instance, to [27], Section 15). We should emphasise that proving a Carleman estimate, in particular in contexts in which one seeks to treat optimal (or close to optimal) conditions on the potentials, is a highly nontrivial task: it requires the construction of a parametrix for which suitable mapping properties have to be established (in relation to this issue we refer to the remarkable work by Koch and Tataru [26]). On the contrary, the method of multipliers is a rather direct approach which, at least at a formal level, asks for clever algebraic manipulations only. Another advantage of the method of multipliers over the Carleman-based scheme is that it allows one, quite intuitively, to single out repulsivity conditions on the potentials which permit the inclusion of long-range perturbations in the analysis. On the other hand, it is a known fact that long-range potentials, unlike short-range ones, cannot be easily handled with the method based on Carleman estimates, as they cannot be treated as small perturbations. This results in the need of including the long-range potential in the proof of the Carleman estimates, which represents a rather challenging issue (see [26] for the most up-to-date available results and [19,20,21] for more standard references).
Although some of our results are new even in the self-adjoint setting, we proceed in greater generality by allowing \(V:\mathbb {R}^3 \rightarrow \mathbb {C}\) to be complex-valued in (1.1) and \(\varvec{V}:\mathbb {R}^3 \rightarrow \mathbb {C}^{2 \times 2}\) to be a general matrix-valued potential, possibly non-Hermitian, in (1.2). Since the spin-magnetic term \(\sigma \cdot B\) can be included in \(\varvec{V}\), we simultaneously consider matrix electromagnetic Schrödinger operators
$$\begin{aligned} H_{\text {S}}(A,\varvec{V}) := -\nabla _{\!A}^2\, \varvec{I}_{\mathbb {C}^{\varvec{2}}} + \varvec{V} \qquad \text {in}\qquad L^2(\mathbb {R}^3;\mathbb {C}^2). \end{aligned}$$(1.4)
Since the operator acts on spinors, we occasionally call the corresponding spectral problem the spinor Schrödinger equation.
As the last but not least generalisation to mention, in the main body of the paper, we shall consider the Pauli and Dirac operators in the Euclidean space \(\mathbb {R}^d\) of arbitrary dimension \(d \ge 1\).
The study of spectral properties of scalar Schrödinger operators (1.1) constitutes a traditional domain of mathematical physics and the literature on the subject is enormous. Much less is known in the mathematically challenging and still physically relevant situations where V is allowed to be complex-valued, see [15, 16] and references therein. Works concerning non-self-adjoint Pauli operators are much more sparse in the literature, see [36] and references therein. More results are available in the case of non-self-adjoint Dirac operators, see [5,6,7,8, 10,11,12, 14, 35]. The paper [16] represents a first, physically satisfactory, application of the method of multipliers to spectral theory: the authors established sufficient conditions which guarantee the total absence of eigenvalues of (1.1). Furthermore, those conditions are compatible with the well-established gauge invariance of electromagnetic models, in the sense that they involve the magnetic field B rather than the vector potential A. This last remarkable fact represents a distinguishing feature of [16] compared to previous works on the subject (see, for instance, Roze [34], where the method of multipliers was used to prove the absence of embedded eigenvalues under Kato-type decay conditions on both the electric potential V and the magnetic potential A, thereby introducing a counterintuitive gauge-dependent constraint). The two-dimensional situation was covered later in [15]. The robustness of the method of multipliers has been demonstrated in its successful application to the half-space instead of the whole Euclidean space in [4] and to Lamé instead of Schrödinger operators in [3]. In the present paper, we push the analysis forward by investigating how this unconventional method provides meaningful and interesting results in the same direction also in the less explored setting of spinorial Hamiltonians.
1.2 The strategy
The main ingredient in our proofs is the method of multipliers as developed in [16] for scalar Schrödinger operators (1.1). In the present paper, however, we carefully revisit the technique and provide all the painful details which were missing in previous works. We identify various technical hypotheses about the electromagnetic potentials that justify the otherwise formal manipulations. We believe that this part of the paper will be of independent interest to communities working in spectral theory as well as in partial differential equations.
The next, completely new contribution is the adaptation of the method to the matrix electromagnetic Schrödinger operators (1.4). The Pauli Hamiltonians (1.2) are then covered as a particular case.
The method of multipliers does not seem to apply directly to Dirac operators, because of the lack of positivity of certain commutators. Our strategy is to employ the supersymmetric structure of Dirac operators (cf. [38, Ch. 5]). More specifically, using the standard representation
$$\begin{aligned} \varvec{\alpha }_{\varvec{j}} = \begin{pmatrix} \varvec{0} &{} \varvec{\sigma }_{\varvec{j}} \\ \varvec{\sigma }_{\varvec{j}} &{} \varvec{0} \end{pmatrix}, \quad j=1,2,3, \qquad \beta = \begin{pmatrix} \varvec{I}_{\mathbb {C}^{\varvec{2}}} &{} \varvec{0} \\ \varvec{0} &{} -\varvec{I}_{\mathbb {C}^{\varvec{2}}} \end{pmatrix}, \end{aligned}$$(1.5)
and the commutation properties of the Pauli matrices, it is easy to see that the square of the purely magnetic Dirac operator \(H_{\text {D}}(A,\varvec{0}) =: H_{\text {D}}(A)\) satisfies
$$\begin{aligned} H_{\text {D}}(A)^2 = \begin{pmatrix} H_{\text {P}}(A) + \frac{1}{4}\, \varvec{I}_{\mathbb {C}^{\varvec{2}}} &{} \varvec{0} \\ \varvec{0} &{} H_{\text {P}}(A) + \frac{1}{4}\, \varvec{I}_{\mathbb {C}^{\varvec{2}}} \end{pmatrix}, \end{aligned}$$(1.6)
where \(H_{\text {P}}(A) := H_{\text {P}}(A,\varvec{0})\) is just the purely magnetic Pauli operator (1.2). This allows us to ensure the absence of the point spectrum of the Dirac operator \(H_{\text {D}}(A)\), once the corresponding result for the Pauli operator \(H_{\text {P}}(A)\) is available, which, in turn, follows as a consequence of the corresponding result for the general Schrödinger operators \(H_{\text {S}}(A, \varvec{V})\) with matrix-valued potentials \(\varvec{V}\). Notice that, in this way, we are not able to treat magnetic Dirac operators with electric perturbations.
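In short, the argument runs as follows: if \(\psi = (\psi _+,\psi _-)\) with \(\psi _\pm \in L^2(\mathbb {R}^3;\mathbb {C}^2)\) were an eigenfunction of the Dirac operator, then by (1.6)
$$\begin{aligned} H_{\text {D}}(A)\psi = \lambda \psi \quad \Longrightarrow \quad H_{\text {D}}(A)^2\psi = \lambda ^2 \psi \quad \Longrightarrow \quad \Big (H_{\text {P}}(A) + \frac{1}{4}\Big )\psi _\pm = \lambda ^2 \psi _\pm , \end{aligned}$$
so \(\lambda ^2 - \frac{1}{4}\) would be an eigenvalue of \(H_{\text {P}}(A)\); hence \(\sigma _{\text {p}}(H_{\text {P}}(A)) = \varnothing \) forces \(\psi _\pm = 0\), i.e. \(\psi =0\).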
1.3 The results in three dimensions
As usual, the sums on the right-hand sides of (1.1), (1.2) and (1.4) should be interpreted in a form sense (cf. [24, Ch. VI]). More specifically, the operators are introduced as the Friedrichs extension of the operators initially defined on smooth functions of compact support. The regularity hypotheses and the functional inequalities stated in the theorems below ensure that the operators are well defined as m-sectorial operators. The Dirac operator (1.3) with \(\varvec{V}=\varvec{0}\) is a closed symmetric operator under the stated assumptions.
Henceforth, we use the notation \(r(x) := |x|\) for the distance function from the origin of \(\mathbb {R}^d\) and \(\partial _r f(x) := \frac{x}{|x|}\cdot \nabla f(x)\) for the radial derivative of a function \(f:\mathbb {R}^d\rightarrow \mathbb {C}\). We also set \(f_\pm (x) := \max \{\pm f(x),0\}\) if f is real-valued.
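In particular, with this convention one has, pointwise for real-valued f,
$$\begin{aligned} f = f_+ - f_- , \qquad |f| = f_+ + f_- , \qquad f_+ \, f_- = 0 . \end{aligned}$$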
For matrix Schrödinger operators (1.4), we prove the following result.
Theorem 1.1
(Spinor Schrödinger equation). Let \(A\in L^2_{\text {loc}}(\mathbb {R}^3; \mathbb {R}^3)\) be such that \(B\in L^2_{\text {loc}}(\mathbb {R}^3;\mathbb {R}^3).\) Suppose that \(\varvec{V}\in L^1_{\text {loc}}(\mathbb {R}^3; \mathbb {C}^{2\times 2})\) admits the decomposition \(\varvec{V} = \varvec{V}^{\varvec{(1)}} + \varvec{V}^{\varvec{(2)}}\) with components \(\varvec{V}^{\varvec{(1)}}\in L^1_{\text {loc}}(\mathbb {R}^3)\) and \(\varvec{V}^{\varvec{(2)}}=V^{(2)} \varvec{I}_{\mathbb {C}^{\varvec{2}}}\), where \(V^{(2)} \in L^1_{\text {loc}}(\mathbb {R}^3)\) is such that \([\partial _r(r {{\,\mathrm{Re}\,}}V^{(2)})]_+\in L^1_{\text {loc}}(\mathbb {R}^3)\) and \(r \varvec{V}^{\varvec{(1)}}, r({{\,\mathrm{Re}\,}}V^{(2)})_-, r{{\,\mathrm{Im}\,}}V^{(2)}\in L^2_{\text {loc}}(\mathbb {R}^3).\) Assume that there exist numbers \(a,b, \beta , {\mathfrak {b}}, c\in [0,1)\) satisfying
such that, for all two-vectors u with components in \(C^\infty _0(\mathbb {R}^3),\) the inequalities
and
hold true. If in addition \(A\in W^{1,3}_{\text {loc}}(\mathbb {R}^3)\) and \(V^{(2)}\in W^{1, 3/2}_{\text {loc}}(\mathbb {R}^3),\) then \(H_{\text {S}}(A, \varvec{V})\) has no eigenvalues, i.e. \(\sigma _{\text {p}}(H_{\text {S}}(A, \varvec{V}))= \varnothing .\)
As a consequence of the previous result, one has the corresponding theorem for Pauli operators.
Theorem 1.2
(Pauli equation). Under the hypotheses of Theorem 1.1, with (1.7) being replaced by
the operator \(H_{\text {P}}(A,\varvec{V})\) has no eigenvalues, i.e. \(\sigma _{\text {p}}(H_{\text {P}}(A,\varvec{V}))=\varnothing .\)
Due to the supersymmetric structure (1.6) of the Dirac operator, the spectra of the Dirac and Pauli operators are intimately related. In particular, we deduce the following result from the previous theorem.
Theorem 1.3
(Dirac equation). Let \(A\in L^2_{\text {loc}}(\mathbb {R}^3;\mathbb {R}^3)\) be such that \(B\in L^2_{\text {loc}}(\mathbb {R}^3; \mathbb {R}^3).\) Assume that there exists a number \(c\in [0,1)\) satisfying
such that, for all four-vectors u with components in \(C^\infty _0(\mathbb {R}^3),\) the inequality
holds true. If in addition \(A\in W^{1,3}_{\text {loc}}(\mathbb {R}^3),\) then \(H_{\text {D}}(A)\) has no eigenvalues, i.e. \(\sigma _{\text {p}}(H_{\text {D}}(A))=\varnothing .\)
Remark 1.1
Notice that the conditions in (1.12) are overabundant, in the sense that if c is such that the second inequality of (1.12) holds true, then \(4\sqrt{3}c<1\) is automatically satisfied. Indeed, the second inequality of (1.12) requires \(c < c_{1}^*\) where \(c_1^*\approx 0.075,\) whereas the first requires \(c<c_2^*\) where \(c_2^*\approx 0.14.\) We decided to keep both conditions anyway in order to have a faster comparison with the corresponding results concerning the other theorems.
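For the record, the first condition of (1.12) alone amounts to
$$\begin{aligned} 4\sqrt{3}\, c< 1 \quad \Longleftrightarrow \quad c < \frac{1}{4\sqrt{3}} = \frac{\sqrt{3}}{12} \approx 0.144 , \end{aligned}$$
which is indeed weaker than the constraint \(c < c_1^* \approx 0.075\) coming from the second inequality.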
1.4 Organisation of the paper
Even though so far we have considered only the three-dimensional framework, in this work we shall actually provide variants of the results presented above in any dimension. (We anticipate already now that the two-dimensional framework will be excluded in the settings of Pauli and Dirac operators because of the well-known Aharonov–Casher effect.) In order to state our results in any dimension, however, some auxiliary material is needed to introduce the general framework for the Pauli and Dirac Hamiltonians. We therefore postpone the presentation of the general results to Sect. 3, while Sect. 2 is devoted to the definition of the Dirac and Pauli operators in any dimension (this section can be skipped by an experienced reader). The method of multipliers for scalar Schrödinger operators is revisited with all the necessary details in Sect. 4. The development of the method for Schrödinger operators with matrix-valued potentials is performed in Sect. 5. The application of this general result to Pauli and Dirac operators is given in Sect. 6.
1.5 Notations
Here we summarise specific notations and conventions that we use in this paper.
-
We adopt the convention to write matrices in boldface.
-
For any dimension \(d\ge 2,\) the physically relevant quantity associated to a given magnetic vector potential \(A:\mathbb {R}^d\rightarrow \mathbb {R}^d\) is the \(d\times d\) matrix-valued quantity
$$\begin{aligned} \varvec{B} := (\nabla A) - (\nabla A)^t . \end{aligned}$$
Here, as usual, \((\nabla A)_{jk}= \partial _j A_k\) and \((\nabla A)^t_{jk}=(\nabla A)_{kj}\) with \(j,k=1,2\dots , d\). In \(d=2\) and \(d=3\) the magnetic tensor \(\varvec{B}\) can be identified with the scalar field \(B_{12} = \partial _1 A_2 - \partial _2 A_1\) or the vector field \(B={{\,\mathrm{curl}\,}}A,\) respectively. More specifically, one has
$$\begin{aligned} \varvec{B} w= \left\{ \begin{array}{rll} &{}B_{12}\, w^\perp &{}\text {if}\quad d=2,\, \quad w\in \mathbb {R}^2\\ &{}-B \times w &{}\text {if}\quad d=3,\, \quad w\in \mathbb {R}^3, \end{array}\right. \end{aligned}$$
where for any \(w=(w_1,w_2)\in \mathbb {R}^2,\) \(w^\perp := (w_2,-w_1)\) and the symbol \(\times \) denotes the cross product in \(\mathbb {R}^3.\) Notice that we did not comment on the case \(d=1.\) In one dimension, in fact, the addition of a magnetic potential is trivial, in the sense that it is always possible to remove it by a suitable gauge transformation. We refer to [2] for a complete survey on the concept of magnetic field in any dimension and its definition in terms of differential forms and tensor fields.
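For instance, for \(d=3\) and \(B = {{\,\mathrm{curl}\,}}A\), the first component of \(\varvec{B} w\) can be checked directly:
$$\begin{aligned} (\varvec{B} w)_1 = (\partial _1 A_2 - \partial _2 A_1)\, w_2 + (\partial _1 A_3 - \partial _3 A_1)\, w_3 = B_3\, w_2 - B_2\, w_3 = -(B\times w)_1 , \end{aligned}$$
and analogously for the other two components.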
-
We adopt the standard notation \(|\cdot |\) for the Euclidean norm on \(\mathbb {C}^d.\) We use the same symbol \(|\cdot |\) for the operator norm: if \(\varvec{M}\) is a \(d\times d\) matrix, we set
$$\begin{aligned} |\varvec{M}|:=\sup _{\begin{array}{c} {v\in \mathbb {C}^d}\\ v \not = 0 \end{array}} \frac{|\varvec{M} v|}{|v|} . \end{aligned}$$
-
For \(v, w\in \mathbb {R}^d,\) the centered-dot operation \(v\cdot w\) designates the scalar product of the two vectors v, w in \(\mathbb {R}^d.\)
-
Given two vectors \(v, w\in \mathbb {R}^d\) and a \(d\times d\) matrix \(\varvec{M},\) the double-centered dot operation \(v \cdot \varvec{M}\cdot w\) stands for the vector-matrix-vector product which returns the following scalar number
$$\begin{aligned} v\cdot \varvec{M} \cdot w:=\sum _{j,k=1}^d v_k M_{k j} w_j. \end{aligned}$$
-
We use the following definition for the \(L^2\)-norm of a vector-valued function \(u=(u_1,u_2, \dots , u_n)\) on \(\mathbb {R}^d\):
$$\begin{aligned} \Vert u\Vert _{[L^2(\mathbb {R}^d)]^n}:=\Bigg (\sum _{j=1}^n \Vert u_j\Vert _{L^2(\mathbb {R}^d)}^2 \Bigg )^{1/2}. \end{aligned}$$
2 Definition of Dirac and Pauli Hamiltonians in any Dimension
As already mentioned, our results will be stated in all dimensions \(d\ge 1.\) In particular, this requires a more careful analysis of the Dirac and Pauli operators, as their explicit form changes according to the underlying dimension (see the Appendix in [22]). Since here we are just interested in identifying the correct action of the operators, we disregard issues with the operator domains for a moment.
2.1 The Dirac operator
Generalising the expression (1.3) to arbitrary dimensions requires ensuring the existence of \(d+1\) Hermitian matrices \(\alpha :=(\varvec{\alpha }_{\varvec{1}},\varvec{\alpha }_{\varvec{2}}, \dots , \varvec{\alpha }_{\varvec{d}})\) and \(\varvec{\beta }\) satisfying the anticommutation relations
$$\begin{aligned} \varvec{\alpha }_{\varvec{\mu }} \varvec{\alpha }_{\varvec{\nu }} + \varvec{\alpha }_{\varvec{\nu }} \varvec{\alpha }_{\varvec{\mu }} = 2\delta _{\mu \nu }\, \varvec{I}, \qquad \varvec{\alpha }_{\varvec{\mu }} \varvec{\beta } + \varvec{\beta }\varvec{\alpha }_{\varvec{\mu }} = \varvec{0}, \qquad \varvec{\beta }^2 = \varvec{I}, \end{aligned}$$(2.1)
for \(\mu , \nu \in \{1,2,\dots ,d\}\), where \(\delta _{\mu \nu }\) represents the Kronecker symbol. The possibility of finding such matrices clearly depends on the dimension n(d) of the matrices themselves. In this regard one can verify that the following distinction is needed:
$$\begin{aligned} n(d)= \left\{ \begin{array}{ll} 2^{(d+1)/2} &{}\quad \text {if } d \text { is odd},\\ 2^{d/2} &{}\quad \text {if } d \text { is even}. \end{array}\right. \end{aligned}$$(2.2)
Even though all that really matters are the anticommutation relations that the Dirac matrices satisfy, for the purpose of visualising the supersymmetric structure of the Dirac operator we shall rely on a particular representation of these matrices, the so-called standard representation. According to the standard representation, one defines the \(d+1\) matrices \(\alpha =(\varvec{\alpha }_{\varvec{1}},\varvec{\alpha }_{\varvec{2}}, \dots , \varvec{\alpha }_{\varvec{d}})\) and \(\varvec{\beta }\) iteratively (with respect to the dimension), distinguishing between odd and even dimensions. For the sake of clarity, in the following the Dirac matrices are written with a superscript \(^{(d)}\) to stress that they are constructed at the step corresponding to working in d dimensions, e.g., \(\alpha =(\varvec{\alpha }_{\varvec{1}}^{\varvec{(d)}}, \varvec{\alpha }_{\varvec{2}}^{\varvec{(d)}}, \dots , \varvec{\alpha }_{\varvec{d}}^{\varvec{(d)}})\) and \(\varvec{\beta }^{\varvec{(d)}}\) are the \(d+1\) Dirac matrices constructed in d dimensions. Moreover, for notational convenience, we denote the matrix \(\varvec{\beta }^{\varvec{(d)}}\) as the \((d+1)\)-th \(\alpha \)-matrix, namely \(\varvec{\beta }^{\varvec{(d)}}:=\varvec{\alpha }_{\varvec{d+1}}^{\varvec{(d)}}.\)
2.1.1 Odd dimensions
If d is odd, let us assume that we are given the \(n(d-1) \times n(d-1)\) matrices \(\varvec{\alpha }_{\varvec{1}}^{\varvec{(d-1)}}, \varvec{\alpha }_{\varvec{2}}^{\varvec{(d-1)}}, \dots , \varvec{\alpha }_{\varvec{d}}^{\varvec{(d-1)}}\) corresponding to the previous step in the iteration. We then define \(n(d)\times n(d)\) matrices (where, according to (2.2), \(n(d)=2 n(d-1)\)) in the following way:
2.1.2 Even dimensions
If d is even, we define \(n(d)\times n(d)\) matrices (where, according to (2.2), \(n(d)=n(d-1)=2n(d-2)\)) as follows:
and
Notice that we are also using the convention that \(n(0)=1\) and that the \(1\times 1\) matrix \(\alpha _1^{(0)}=(1).\) This allows us to use the previous rule to construct the Dirac matrices corresponding to the standard representation also in \(d=1\) and \(d=2.\)
According to the construction above, one recognises that the Dirac matrices, regardless of the dimension, have all the following structure
where \(\varvec{a}_{\varvec{\mu }}\) are \(n(d)/2 \times n(d)/2\) matrices (Hermitian if d is odd) such that
$$\begin{aligned} \varvec{a}_{\varvec{\mu }}^{\varvec{*}} \varvec{a}_{\varvec{\nu }} + \varvec{a}_{\varvec{\nu }}^{\varvec{*}} \varvec{a}_{\varvec{\mu }} = 2\delta _{\mu \nu }\, \varvec{I}_{\mathbb {C}^{\varvec{n(d)/2}}} = \varvec{a}_{\varvec{\mu }} \varvec{a}_{\varvec{\nu }}^{\varvec{*}} + \varvec{a}_{\varvec{\nu }} \varvec{a}_{\varvec{\mu }}^{\varvec{*}} \end{aligned}$$(2.4)
for \(\mu , \nu \in \{1,2,\dots , d\}.\) Here, as usual, \(\varvec{a}_{\varvec{\mu }}^{\varvec{*}}\) denotes the adjoint to \(\varvec{a}_{\varvec{\mu }},\) that is the conjugate transpose of \(\varvec{a}_{\varvec{\mu }}.\) We set \(a:=(\varvec{a}_{\varvec{1}},\dots ,\varvec{a}_{\varvec{d}})\).
Remark 2.1
Notice that, as a consequence of the fact that the \(\varvec{\alpha _\mu }\) are Hermitian (in any dimension) and that \(\varvec{\alpha _\mu }^2=\varvec{I}_{\mathbb {C}^{\varvec{n(d)}}},\) one has \(|\varvec{\alpha _\mu }|=1,\) \(\mu =1,2,\dots , d.\) Therefore, due to the iterative construction above, the submatrices \(\varvec{a_\mu }\) and \(\varvec{a_\mu ^*}\) also have norm one, i.e. \(|\varvec{a_\mu }|=|\varvec{a_\mu ^*}|=1.\)
In the standard representation, that is using expression (2.3) for the Dirac matrices, the purely magnetic Dirac operator can be defined through the following block-matrix differential expression
where
Notice that in odd dimensions, since the submatrices \(\varvec{a_\mu }\) are Hermitian, one has \(\varvec{D}=\varvec{D^*}.\)
2.2 The square of the Dirac operator
From representation (2.5), it can be easily seen that \(H_{\text {D}}(A)\) can be decomposed as the sum of a \(2\times 2\) block-diagonal operator and a \(2\times 2\) block-off-diagonal operator. More specifically, one has
where
As one may readily check, \(H_{\text {diag}}\) and \(H_{\text {off-diag}}\) satisfy the anticommutation relation
This distinguishing feature places the Dirac operator within the class of operators with supersymmetry. It is a consequence of the supersymmetric condition (2.6) that squaring out the Dirac operator gives
where
Therefore, \(H_{\text {D}}(A)^2\) turns out to have the following favorable form
From property (2.4) of the Dirac submatrices, one can show that
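For completeness, the block computation behind (2.6) and (2.7) is elementary. Writing \(\varvec{X}\) and \(\varvec{Y}\) for the upper-right and lower-left blocks of \(H_{\text {off-diag}}\), one has
$$\begin{aligned} H_{\text {diag}} H_{\text {off-diag}} + H_{\text {off-diag}} H_{\text {diag}} = \frac{1}{2}\begin{pmatrix} \varvec{0} &{} \varvec{X} \\ -\varvec{Y} &{} \varvec{0} \end{pmatrix} + \frac{1}{2}\begin{pmatrix} \varvec{0} &{} -\varvec{X} \\ \varvec{Y} &{} \varvec{0} \end{pmatrix} = \varvec{0}, \end{aligned}$$
and consequently
$$\begin{aligned} H_{\text {D}}(A)^2 = H_{\text {diag}}^2 + H_{\text {off-diag}}^2 = \begin{pmatrix} \varvec{X}\varvec{Y} + \frac{1}{4}\,\varvec{I} &{} \varvec{0} \\ \varvec{0} &{} \varvec{Y}\varvec{X} + \frac{1}{4}\,\varvec{I} \end{pmatrix}. \end{aligned}$$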
2.3 Low-dimensional illustrations
In order to illustrate the previous construction, we present explicitly the situations of dimensions \(d=1\) and \(d=2\) in the next two subsections. (Dimension \(d=3\) was already discussed above.)
2.3.1 Dimension one
In the Hilbert space \(L^2(\mathbb {R};\mathbb {C}^2)\), the 1d Dirac operator reads
$$\begin{aligned} H_{\text {D}}(0) := -i\, \varvec{\alpha }\, \nabla + \frac{1}{2}\, \varvec{\beta }, \end{aligned}$$
where \(\nabla \) is just an alternative notation for the ordinary derivative. With the notation \(H_{\text {D}}(0)\) we emphasise that the magnetic potential A has been chosen to be identically equal to zero, since in one dimension it can always be removed by choosing a suitable gauge. One can immediately verify that squaring out the operator \(H_{\text {D}}(0)\) yields
$$\begin{aligned} H_{\text {D}}(0)^2 = \Big (-\nabla ^2 + \frac{1}{4}\Big )\, \varvec{I}_{\mathbb {C}^{\varvec{2}}}. \end{aligned}$$
According to the rule provided above, in the standard representation, one chooses \( \varvec{\alpha }:= \varvec{\sigma _1}\) and \(\varvec{\beta }:= \varvec{\sigma _3},\) where \(\varvec{\sigma _1}\) and \(\varvec{\sigma _3}\) are two of the three Pauli matrices. Thus, one conveniently writes
$$\begin{aligned} H_{\text {D}}(0) = \begin{pmatrix} \frac{1}{2} &{} D \\ D &{} -\frac{1}{2} \end{pmatrix}, \end{aligned}$$
where \(D :=-i \nabla \) and
$$\begin{aligned} H_{\text {D}}(0)^2 = \begin{pmatrix} H_{\text {P}}(0) + \frac{1}{4} &{} 0 \\ 0 &{} H_{\text {P}}(0) + \frac{1}{4} \end{pmatrix}, \end{aligned}$$
with the Pauli operator
$$\begin{aligned} H_{\text {P}}(0) := D^2 = -\nabla ^2 . \end{aligned}$$
Hence, in one dimension, the Pauli operator coincides with the free one-dimensional Schrödinger operator acting in \(L^2(\mathbb {R})\).
2.3.2 Dimension two
In the Hilbert space \(L^2(\mathbb {R}^2;\mathbb {C}^2)\), the 2d Dirac operator reads
$$\begin{aligned} H_{\text {D}}(A) := -i\, \alpha \cdot \nabla _{\!A} + \frac{1}{2}\, \varvec{\beta }, \end{aligned}$$
where \(\alpha :=(\varvec{\alpha _1}, \varvec{\alpha _2})\) and \(\varvec{\beta }\) are \(2\times 2\) Hermitian matrices satisfying (2.1). Squaring out \(H_{\text {D}}(A)\) yields
$$\begin{aligned} H_{\text {D}}(A)^2 = -\nabla _{\!A}^2\, \varvec{I}_{\mathbb {C}^{\varvec{2}}} - \frac{i}{2}\, B_{12}\, [\varvec{\alpha _1},\varvec{\alpha _2}] + \frac{1}{4}\, \varvec{I}_{\mathbb {C}^{\varvec{2}}}. \end{aligned}$$
According to the rule provided above, in the standard representation, one chooses \(\varvec{\alpha _1}:=\varvec{\sigma _1},\)\(\varvec{\alpha _2}:=\varvec{\sigma _2}\) and \(\varvec{\beta }:=\varvec{\sigma _3}\). This gives \([\varvec{\alpha _1},\varvec{\alpha _2}]=2i\varvec{\sigma _3}\) and
where
and \(\partial _{j,A}:= \partial _j + i A_j,\)\(j=1,2.\) Thus
with the Pauli operator
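Concretely, combining the identity \([\varvec{\alpha _1},\varvec{\alpha _2}]=2i\varvec{\sigma _3}\) with the commutator relation \([\partial _{1,A},\partial _{2,A}] = i B_{12}\), a direct computation gives
$$\begin{aligned} H_{\text {D}}(A)^2 = -\nabla _{\!A}^2\, \varvec{I}_{\mathbb {C}^{\varvec{2}}} + B_{12}\, \varvec{\sigma _3} + \frac{1}{4}\, \varvec{I}_{\mathbb {C}^{\varvec{2}}} , \qquad \text {i.e.}\qquad H_{\text {P}}(A) = \begin{pmatrix} -\nabla _{\!A}^2 + B_{12} &{} 0 \\ 0 &{} -\nabla _{\!A}^2 - B_{12} \end{pmatrix}, \end{aligned}$$
which is the familiar form of the two-dimensional Pauli operator.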
2.4 The Pauli operator
After these illustrations, let us come back to the general dimension \(d \ge 1\). Recall that the Dirac operator \(H_{\text {D}}(A)\) has been introduced via (2.5) and that its square satisfies (2.7). The following lemma specifies the form of the square according to the parity of the dimension and offers a natural definition for the Pauli operator in any dimension.
Lemma 2.1
(Algebraic definition of Pauli operators) Let \(d\ge 1\) and let n(d) be as in (2.2).
-
If d is odd, then
$$\begin{aligned} H_{\text {D}}^{\text {odd}}(A)^2= \begin{pmatrix} H_{\text {P}}^{\text {odd}}(A) + \frac{1}{4}\varvec{I}_{\mathbb {C}^{\varvec{n(d)/2}}} &{} \varvec{0}\\ \varvec{0} &{} H_{\text {P}}^{\text {odd}}(A) + \frac{1}{4}\varvec{I}_{\mathbb {C}^{\varvec{n(d)/2}}} \end{pmatrix}, \end{aligned}$$(2.11)
where we define
$$\begin{aligned} H_{\text {P}}^{\text {odd}}(A):= -\nabla _{\!A}^2 \varvec{I}_{\mathbb {C}^{\varvec{n(d)/2}}} - \frac{i}{2}\, a \cdot \varvec{B} \cdot a . \end{aligned}$$(2.12) -
If d is even, then
$$\begin{aligned} H_{\text {D}}^{\text {even}}(A)^2=H_{\text {P}}^{\text {even}}(A) + \frac{1}{4}\varvec{I}_{\mathbb {C}^{\varvec{n(d)}}}, \end{aligned}$$(2.13)
where we define
$$\begin{aligned} H_{\text {P}}^{\text {even}}(A):= \begin{pmatrix} -\nabla _{\!A}^2 \varvec{I}_{\mathbb {C}^{\varvec{n(d)/2}}} - \frac{i}{2}\, a^*\! \cdot \varvec{B} \cdot a, &{} \varvec{0}\\ \varvec{0} &{} -\nabla _{\!A}^2 \varvec{I}_{\mathbb {C}^{\varvec{n(d)/2}}} - \frac{i}{2}\, a\cdot \varvec{B} \cdot a^*\end{pmatrix}.\qquad \end{aligned}$$(2.14)
Proof
In odd dimensions one has that \(\varvec{D^*}=\varvec{D},\) therefore
Thus, defining
and using (2.7) one immediately gets the desired representation in odd dimensions. In even dimensions one defines
Hence, from (2.7) and (2.8) the claim readily follows. \(\quad \square \)
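The way the magnetic tensor enters (2.12) and (2.14) can be seen directly from the commutator identity \([\partial _{\mu ,A},\partial _{\nu ,A}] = i B_{\mu \nu }\). For instance, in the odd case, where the \(\varvec{a_\mu }\) are Hermitian and \(\varvec{D} = -i\, a\cdot \nabla _{\!A}\) (cf. (2.5)),
$$\begin{aligned} \varvec{D}^2 = -\sum _{\mu ,\nu =1}^d \varvec{a_\mu }\varvec{a_\nu }\, \partial _{\mu ,A}\partial _{\nu ,A} = -\nabla _{\!A}^2\, \varvec{I}_{\mathbb {C}^{\varvec{n(d)/2}}} - \frac{1}{2}\sum _{\mu ,\nu =1}^d \varvec{a_\mu }\varvec{a_\nu }\, [\partial _{\mu ,A},\partial _{\nu ,A}] = -\nabla _{\!A}^2\, \varvec{I}_{\mathbb {C}^{\varvec{n(d)/2}}} - \frac{i}{2}\, a \cdot \varvec{B} \cdot a , \end{aligned}$$
where the symmetric part of the product of derivatives was reduced by means of (2.4).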
Notice that in even dimensions the Pauli operator is a matrix operator with the same dimension as the Dirac Hamiltonian. In odd dimensions the dimension of the Pauli operator is half that of the Dirac operator. Recalling (2.2), we therefore set
$$\begin{aligned} n'(d) := \left\{ \begin{array}{ll} n(d)/2 &{}\quad \text {if } d \text { is odd},\\ n(d) &{}\quad \text {if } d \text { is even}. \end{array}\right. \end{aligned}$$(2.15)
2.5 Domains of the operators
Finally, we specify the domains of the Dirac and Pauli operators. Notice that the rather formal manipulations of the preceding subsections can be justified when the action of the operators is considered on smooth functions of compact support. Therefore, we shall define each of the operators as an extension of the operator initially defined on such a restricted domain. We always assume that the vector potential \(A \in L_\mathrm {loc}^2(\mathbb {R}^d;\mathbb {R}^d)\) is such that \(\varvec{B} \in L_\mathrm {loc}^1(\mathbb {R}^d;\mathbb {R}^{d \times d})\).
We define the Pauli operator \(H_{\text {P}}(A)\) acting on the Hilbert space \(L^2(\mathbb {R}^d;\mathbb {C}^{n'(d)})\) as the self-adjoint Friedrichs extension of the operator initially considered on the domain \(C_0^\infty (\mathbb {R}^d;\mathbb {C}^{n'(d)})\); notice that this initial operator is symmetric. Disregarding the spin-magnetic term for a moment, the form domain can be identified with the magnetic Sobolev space (cf. [30, Sec. 7.20])
$$\begin{aligned} H_{\!A}^1(\mathbb {R}^d;\mathbb {C}^{n'(d)}) := \bigl \{ \psi \in L^2(\mathbb {R}^d;\mathbb {C}^{n'(d)}) :\ \nabla _{\!A}\psi _j \in L^2(\mathbb {R}^d;\mathbb {C}^{d}), \ j=1,\dots ,n'(d) \bigr \}. \end{aligned}$$(2.16)
The operator domain is the subset of \(H_{\!A}^1(\mathbb {R}^d;\mathbb {C}^{n'(d)})\) consisting of functions \(\psi \) such that \(\nabla _{\!A}^2 \psi \in L^2(\mathbb {R}^d;\mathbb {C}^{n'(d)})\). To include the spin-magnetic term, we make the hypothesis that there exist numbers \(a<1\) and \(b \in \mathbb {R}\) such that, for every \(\psi \in C_0^\infty (\mathbb {R}^d)\),
Then the spin-magnetic term is a relatively form-bounded perturbation of the already defined operator with the relative bound less than one (recall Remark 2.1), so the Pauli operator \(H_{\text {P}}(A)\) with the same form domain (2.16) is indeed self-adjoint.
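A subordination bound of the following type serves this purpose (we state it here as a sample sufficient condition, with \(a<1\) and \(b\in \mathbb {R}\) as above; the point is that, by Remark 2.1, the quadratic form of the spin-magnetic term is controlled by the left-hand side):
$$\begin{aligned} \int _{\mathbb {R}^d} |\varvec{B}(x)|\, |\psi (x)|^2 \, dx \le a\, \Vert \nabla _{\!A}\psi \Vert _{L^2(\mathbb {R}^d)}^2 + b\, \Vert \psi \Vert _{L^2(\mathbb {R}^d)}^2 . \end{aligned}$$
A KLMN-type argument (cf. [24, Ch. VI]) then yields the claimed self-adjointness.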
For the domain of the Dirac operator (2.5) we take
$$\begin{aligned} \mathcal {D}(H_{\text {D}}(A)) := H_{\!A}^1(\mathbb {R}^d;\mathbb {C}^{n(d)}). \end{aligned}$$(2.18)
Notice that \(H_{\text {D}}(A)\) is symmetric. Using Lemma 2.1, for every \(\psi \in C_0^\infty (\mathbb {R}^d;\mathbb {C}^{n(d)})\), which is dense in \(\mathcal {D}(H_{\text {D}}(A))\), we have the identity (with a slight abuse of notation)
Since the quadratic form of the Pauli operator \(H_{\text {P}}(A)\) is closed on the space (2.16), it follows that the Dirac operator \(H_{\text {D}}(A)\) with (2.18) is a closed symmetric operator. Under further assumptions about the vector potential (see [38, Sec. 4.3]), one can ensure that \(H_{\text {D}}(A)\) is actually self-adjoint, but our results hold under the present more general setting.
3 Statement of the Main Results in any Dimension
Now we are in a position to state our main results in any dimension. As anticipated, in order to do that, we shall consider the three spinorial Hamiltonians separately.
3.1 The spinor Schrödinger equation
Let us start by considering the matrix Schrödinger operator
$$\begin{aligned} H_{\text {S}}(A,\varvec{V}) := -\nabla _{\!A}^2\, \varvec{I}_{\mathbb {C}^{\varvec{n}}} + \varvec{V} \qquad \text {in}\qquad L^2(\mathbb {R}^d;\mathbb {C}^n), \end{aligned}$$(3.1)
which is an extension of (1.4) to any dimension \(d \ge 1\) and \(n \ge 1\). Here \(\varvec{V} \in L_\mathrm {loc}^1(\mathbb {R}^d;\mathbb {C}^{n \times n})\) and \(A \in L_\mathrm {loc}^2(\mathbb {R}^d;\mathbb {R}^d)\). The operator is properly introduced as the Friedrichs extension of the operator initially defined on \(C_0^\infty (\mathbb {R}^d;\mathbb {C}^n)\). The hypotheses in the theorems below ensure that \(H_{\text {S}}(A, \varvec{V})\) is well defined as an m-sectorial operator.
3.1.1 A general result in any dimension
Theorem 3.1
Given any \(d, n\ge 1\), let \(A\in L^2_{\text {loc}}(\mathbb {R}^d;\mathbb {R}^d)\) be such that \(\varvec{B}\in L^2_{\text {loc}}(\mathbb {R}^d;\mathbb {R}^{d\times d}).\) Suppose that \(\varvec{V}\in L^1_{\text {loc}}(\mathbb {R}^d; \mathbb {C}^{n\times n})\) admits the decomposition \(\varvec{V}= \varvec{V}^{\varvec{(1)}} + \varvec{V}^{\varvec{(2)}}\) with components \(\varvec{V}^{\varvec{(1)}}\in L^1_{\text {loc}}(\mathbb {R}^d)\) and \(\varvec{V}^{\varvec{(2)}}=V^{(2)}\varvec{I}_{\mathbb {C}^{\varvec{n}}}\), where \(V^{(2)}\in L^1_{\text {loc}}(\mathbb {R}^d)\) is such that \([\partial _r(r {{\,\mathrm{Re}\,}}V^{(2)})]_+\in L^1_{\text {loc}}(\mathbb {R}^d)\) and \(r \varvec{V}^{\varvec{(1)}}, r({{\,\mathrm{Re}\,}}V^{(2)})_-, r{{\,\mathrm{Im}\,}}V^{(2)}\in L^2_{\text {loc}}(\mathbb {R}^d).\) Assume that there exist numbers \(a_1, a_2, b_1, b_2, {\mathfrak {b}}, \beta _1, \beta _2, c \in [0,1)\) satisfying
such that, for all n-vectors u with components in \(C^\infty _0(\mathbb {R}^d),\)
If \(d=2\) assume also that the inequality
holds true. If, in addition, one has
then \(H_{\text {S}}(A,\varvec{V})\) has no eigenvalues, i.e. \(\sigma _{\text {p}}(H_{\text {S}}(A,\varvec{V}))=\varnothing .\)
Remark 3.1
In order to exclude zero modes we need to replace the second condition in (3.2) with
(See Remark 5.1)
The theorem is further commented on in the following subsections.
3.1.2 Criticality of low dimensions
Because of the criticality of the Laplacian in \(L^2(\mathbb {R}^d)\) with \(d=1,2\), the lower dimensional scenarios are a bit special.
First of all, due to the absence of magnetic phenomena in \(\mathbb {R}^1,\) the corresponding assumptions (3.3)–(3.7) in dimension \(d=1\) come with the classical gradient \(\nabla \) replacing the magnetic gradient \(\nabla _{\!A}.\) Consequently, because of the criticality of the Laplacian in \(L^2(\mathbb {R})\), necessarily \(\varvec{V}^{\varvec{(1)}}=0\), \(({{\,\mathrm{Re}\,}}V^{(2)})_-=0\), \([\partial _r (r {{\,\mathrm{Re}\,}}V^{(2)})]_+=0\) and \({{\,\mathrm{Im}\,}}V^{(2)}=0\). Moreover, (3.7) is always satisfied if \(d=1\), since \(\varvec{B}\) is identically zero. Hence, if \(d=1\), the theorem essentially says that the scalar Schrödinger operator \(-\nabla ^2+V\) in \(L^2(\mathbb {R})\) has no eigenvalues, provided that V is non-negative and the radial derivative \(\partial _r (r V)\) is non-positive. The requirements respectively exclude non-positive and positive eigenvalues. The latter is a sort of classical repulsiveness requirement (cf. [33, Thm. XIII.58]).
Similarly, if \(d=2\) and there is no magnetic field (i.e. \(\varvec{B}=\varvec{0}\)), the theorem essentially says that the scalar Schrödinger operator \(-\nabla ^2+V\) in \(L^2(\mathbb {R}^2)\) has no eigenvalues, provided that V is non-negative and the radial derivative \(\partial _r (r V)\) is non-positive (again, the conditions exclude non-positive and positive eigenvalues, respectively). On the other hand, in two dimensions, the situation becomes interesting if the magnetic field is present. Indeed, the magnetic Laplacian in \(L^2(\mathbb {R}^2)\) is subcritical due to the existence of magnetic Hardy inequalities (see [28] for the pioneering work and [2] for the most recent developments). The latter guarantee a source of sufficient conditions to make the hypotheses (3.3)–(3.7) non-trivial (cf. [15]).
3.1.3 An alternative statement in dimension two
We comment further on the additional condition (3.8) in dimension \(d=2.\) Using the 2d weighted Hardy inequality
it is easy to check that requiring “enough” positivity of \({{\,\mathrm{Re}\,}}V^{(2)}\) guarantees the validity of (3.8). More specifically, the pointwise bound
valid for almost every \(x \in \mathbb {R}^2\), is sufficient for (3.8) to hold. On the other hand, without the positivity of \({{\,\mathrm{Re}\,}}V^{(2)}\), condition (3.8) is quite restrictive. Indeed, if one assumes \(V^{(2)}=0,\) then ensuring the validity of (3.8) would require the existence of vector potentials A for which an improvement of the weighted Hardy inequality (3.11) holds true (indeed, (3.8) with \(V^{(2)}=0\) is nothing but (3.11) with a better constant).
For this reason, following an idea introduced in [15, Sec. 3.2], we provide an alternative result which avoids condition (3.8), at the price of assuming a hypothesis stronger than (3.2).
Theorem 3.2
Let \(d=2\) and let \(n, A, \varvec{B}\) and \(\varvec{V}\) be as in Theorem 3.1. Assume that there exist numbers \(a_1, a_2, b_1, b_2, {\mathfrak {b}}, \beta _1, \beta _2, c , \epsilon \in [0,1)\) satisfying
such that, for every n-vector u with components in \(C^\infty _0(\mathbb {R}^2),\) inequalities (3.3)–(3.7) hold true. If, in addition, one has
then \(H_{\text {S}}(A, \varvec{V})\) has no eigenvalues, i.e. \(\sigma _{\text {p}}(H_{\text {S}}(A, \varvec{V}))=\varnothing .\)
3.1.4 A simplification in higher dimensions
In dimensions \(d\ge 3,\) as a consequence of the diamagnetic inequality (see [25] and [30, Thm. 7.21])
together with the classical Hardy inequality
applied to \(|\psi |,\) one can prove the following magnetic Hardy inequality
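The chain of implications just described can be sketched as follows (a standard argument; \((d-2)^2/4\) is the classical Hardy constant for \(d\ge 3\)):

```latex
% diamagnetic inequality, pointwise for a.e. x:
\big|\nabla |\psi|(x)\big| \;\le\; \big|\nabla_{\!A}\,\psi(x)\big|,
% classical Hardy inequality applied to |psi|:
\int_{\mathbb{R}^d} \big|\nabla |\psi|\big|^2\,dx
\;\ge\; \frac{(d-2)^2}{4}\int_{\mathbb{R}^d}\frac{|\psi|^2}{|x|^2}\,dx,
% combining the two yields the magnetic Hardy inequality:
\int_{\mathbb{R}^d} |\nabla_{\!A}\,\psi|^2\,dx
\;\ge\; \frac{(d-2)^2}{4}\int_{\mathbb{R}^d}\frac{|\psi|^2}{|x|^2}\,dx.
```

Note that, in contrast with the two-dimensional magnetic Hardy inequalities discussed above, no condition on the magnetic field is needed here: the constant is inherited directly from the free Hardy inequality.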
Using (3.15), it is easy to check that the first inequalities in (3.3), (3.4) and (3.6) follow respectively as a consequence of the second inequalities in (3.3), (3.4) and (3.6) with
and assuming \(a_2, b_2, \beta _2<(d-2)/2.\) Hence, in dimensions \(d\ge 3,\) the conditions in (3.2) simplify to
In particular, this justifies the fact that in Theorem 1.1, which is a special case of Theorem 3.1 for \(d=3\) (and \(n=2\)), we assume only the validity of (1.8), (1.9) and (1.10); moreover, (3.2) is replaced by (1.7) (notice that dropping the subscript \(\cdot _2\) in the constants and fixing \(d=3\) in (3.16) gives (1.7)).
3.1.5 The Aharonov–Bohm field
Let us come back to dimension two and consider the Aharonov–Bohm magnetic potential
where \((x,y)=(r\cos \theta , r\sin \theta )\) is the parametrisation via polar coordinates, \(r\in (0,\infty ),\) \(\theta \in [0,2\pi ),\) and \(\alpha :[0,2\pi )\rightarrow \mathbb {R}\) is an arbitrary bounded function. In this specific case, there is an explicit magnetic Hardy-type inequality (see [28, Thm. 3])
where \({\bar{\alpha }}\) has the physical meaning of the total magnetic flux:
Notice that in this case the magnetic field B equals zero everywhere except for \(x=0\); indeed
in the sense of distributions, where \(\delta \) is the Dirac delta function.
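For orientation, in the simplest case of a constant function \(\alpha\), the objects above take the following standard form (a hedged sketch in the spirit of [28]; the normalisation of the flux may differ from the conventions fixed in (3.17)–(3.18)):

```latex
A(x,y) \;=\; \alpha\left(-\frac{y}{r^2},\,\frac{x}{r^2}\right),
\qquad
\bar{\alpha} \;=\; \frac{1}{2\pi}\int_0^{2\pi}\alpha(\theta)\,d\theta,
% Hardy-type inequality of Laptev--Weidl type:
\int_{\mathbb{R}^2} |\nabla_{\!A}\,\psi|^2\,dx
\;\ge\; \gamma^2 \int_{\mathbb{R}^2}\frac{|\psi|^2}{|x|^2}\,dx,
\qquad
\gamma := {{\,\mathrm{dist}\,}}\{\bar{\alpha},\mathbb{Z}\},
```

valid for \(\psi\in C_0^\infty(\mathbb{R}^2\setminus\{0\})\); the constant \(\gamma^2\) is non-trivial precisely when the total flux \(\bar\alpha\) is not an integer.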
The Aharonov–Bohm potential (3.17) is not in \(L^2_{\text {loc}}(\mathbb {R}^2),\) so the matrix Schrödinger operator is not well defined as described below (3.1), and Theorem 3.1 does not apply to it as such. In this case the Schrödinger operator \(H_{\text {S}}(A, \varvec{V})\) is introduced as the Friedrichs extension of the operator (1.4), initially defined on \(C_0^\infty (\mathbb {R}^2 {{\setminus }} \{0\};\mathbb {C}^n)\). At the same time, the method of multipliers can be adapted in such a way that it covers this situation as well. The following result can be considered as an extension of [15, Thm. 5] from the scalar case to the spinorial Schrödinger equation.
Theorem 3.3
Let \(d=2\) and let A be as in (3.17) with \({\bar{\alpha }}\notin {\mathbb {Z}}\) and \(\varvec{V}\) as in Theorem 3.1. Assume that there exist numbers \(a, b, {\mathfrak {b}}, \beta , \epsilon \in [0,1)\) satisfying
with \(\gamma :={{\,\mathrm{dist}\,}}\{{\bar{\alpha }}, {\mathbb {Z}}\},\) such that, for every n-vector u with components in \(C^\infty _0(\mathbb {R}^2{\setminus } \{0\}),\) inequalities
and
hold true. If, in addition, one has
then \(H_{\text {S}}(A, \varvec{V})\) has no eigenvalues, i.e. \(\sigma _{\text {p}}(H_{\text {S}}(A, \varvec{V}))=\varnothing .\)
3.1.6 On the regularity conditions (3.9) and their replacement
As we will see in more detail later on (see Sect. 4.2), the additional local regularity assumptions (3.9) on the potentials are needed in order to justify rigorously the algebraic manipulations that the method of multipliers introduces. A formal proof of Theorem 3.1 would require just the weaker conditions \(A \in L^2_{\text {loc}}(\mathbb {R}^d)\) and \(\varvec{V}\in L^1_{\text {loc}}(\mathbb {R}^d).\)
The unpleasant conditions (3.9) can be removed if we consider potentials \(\varvec{V}\) and A with just one singularity at the origin (see Sect. 4.5). This specific case is worth investigating, as it covers a large class of non-vanishing potentials, e.g., \(\varvec{V(x)}= a/ |x|^\alpha \varvec{I_{\mathbb {C}^n}}\) with \(a\ne 0\) and \(\alpha >0,\) as well as the Aharonov–Bohm vector fields (3.17), which would otherwise be ruled out by conditions (3.9). In particular, the Coulomb singularity \(-ze^2/r\) at \(r=0,\) with \(z|e|\) the nuclear charge, is also included in the class of admissible potentials. This fact is remarkable in view of the great interest in quantum mechanics in the stability of atoms, both in the purely electric framework and when magnetic interactions are included (the interested reader is referred to the monograph [31], which provides a thorough account of stability of matter, to the original papers [37], and to [18] and [32], where magnetic effects are also analysed).
3.1.7 An alternative general result in the self-adjoint setting
Obviously, Theorem 3.1 above is valid, with clear simplifications, also in the self-adjoint situation, namely considering Hermitian matrix-valued potentials \(\varvec{V}\). In this case, however, we also have an alternative result that we have decided to present because the “repulsivity” condition (3.5) is replaced by a “more classical” assumption in terms of \(r\partial _r V^{(2)}.\) Furthermore, condition (3.8) is not needed in this context. More precisely we have the following result.
Theorem 3.4
Let \(d, n\ge 1\) and let \(A\in L^2_{\text {loc}}(\mathbb {R}^d;\mathbb {R}^d)\) be such that \(\varvec{B}\in L^2_{\text {loc}}(\mathbb {R}^d;\mathbb {R}^{d\times d}).\) Suppose that \(\varvec{V}\in L^1_{\text {loc}}(\mathbb {R}^d; \mathbb {R}^{n\times n})\) admits the decomposition \(\varvec{V}= \varvec{V}^{\varvec{(1)}} + \varvec{V}^{\varvec{(2)}}\) with components \(\varvec{V}^{\varvec{(1)}}\in L^1_{\text {loc}}(\mathbb {R}^d)\) and \(\varvec{V}^{\varvec{(2)}}=V^{(2)}\varvec{I_{\mathbb {C}^n}}\), where \(V^{(2)}\in L^1_{\text {loc}}(\mathbb {R}^d)\) is such that \([r\partial _r V^{(2)}]_+\in L^1_{\text {loc}}(\mathbb {R}^d)\) and \(r V^{(1)}\in L^2_{\text {loc}}(\mathbb {R}^d).\) Assume that there exist numbers \(a_1, a_2, b, {\mathfrak {b}}, c \in [0,1)\) satisfying
such that, for every n-vector u with components in \(C^\infty _0(\mathbb {R}^d),\) (3.3) and (3.7) hold and, moreover,
If in addition (3.9) holds true, then \(H_{\text {S}}(A,\varvec{V})\) has no eigenvalues, i.e. \(\sigma _{\text {p}}(H_{\text {S}}(A,\varvec{V}))=\varnothing .\)
Remark 3.2
Here, the first condition in (3.25) is not explicitly used in the proof of the theorem, but it is needed to give sense to the Hamiltonian \(H_{\text {S}}(A, \varvec{V})\). We refer to Sect. 4.1 for details.
3.2 The Pauli equation
Recall that the definition of the Pauli operator depends on the parity of the dimension, cf. Lemma 2.1.
Theorem 3.5
Let \(d\ge 3\) be an integer and let \(n'(d)\) be as in (2.15). Let \(A\in L^2_{\text {loc}}(\mathbb {R}^d; \mathbb {R}^d)\) be such that \(\varvec{B}\in L^2_{\text {loc}}(\mathbb {R}^d;\mathbb {R}^{d\times d}).\) Suppose that \(\varvec{V}\in L^1_{\text {loc}}(\mathbb {R}^d; \mathbb {C}^{n'(d)\times n'(d)})\) admits the decomposition \(\varvec{V}=\varvec{V}^{\varvec{(1)}} + \varvec{V}^{\varvec{(2)}}\) with components \(\varvec{V}^{\varvec{(1)}}\in L^1_{\text {loc}}(\mathbb {R}^d; \mathbb {C}^{n'(d)\times n'(d)})\) and \(\varvec{V}^{\varvec{(2)}}=V^{(2)}\varvec{I}_{\mathbb {C}^{\varvec{n'(d)}}}\), where \(V^{(2)}\in L^1_{\text {loc}}(\mathbb {R}^d)\) is such that \([\partial _r(r {{\,\mathrm{Re}\,}}V^{(2)})]_+\in L^1_{\text {loc}}(\mathbb {R}^d)\) and \(r \varvec{V}^{\varvec{(1)}}, r({{\,\mathrm{Re}\,}}V^{(2)})_-, r{{\,\mathrm{Im}\,}}V^{(2)}\in L^2_{\text {loc}}(\mathbb {R}^d).\) If d is even, we additionally require \(\varvec{V}^{\varvec{(1)}}=V^{(1)}\varvec{I}_{\mathbb {C}^{\varvec{n'(d)}}}\). Assume that there exist numbers \(a, b, \beta , {\mathfrak {b}}, c\in [0,1)\) satisfying
such that, for every \(n'(d)\)-vector u with components in \(C^\infty _0(\mathbb {R}^d),\) the inequalities
and
hold true. If, in addition, one has
then \(H_{\text {P}}(A,\varvec{V})\) has no eigenvalues, i.e. \(\sigma _{\text {p}}(H_{\text {P}}(A,\varvec{V}))=\varnothing .\)
Remark 3.3
(Even parity) Observe that in the even dimensional case we assume also the component \(\varvec{V}^{\varvec{(1)}}\) to be diagonal. This is needed in order not to spoil the diagonal form in the definition (2.14) of the free Pauli operator, which will represent a crucial point in the strategy underlying the proof (we refer to Sect. 6.2 for more details).
The low-dimensional cases \(d=1, 2\) are intentionally absent from Theorem 3.5 for the following reasons.
Remark 3.4
(Dimension one) As discussed in Sect. 2.3.1, the one-dimensional Pauli operator coincides with the scalar potential-free Schrödinger operator \(-\nabla ^2\) (i.e. the one-dimensional Laplacian), hence the absence of the point spectrum is trivial in this case. Formally, it is already guaranteed by Theorem 3.1 with \(d=n=1\) (see also Sect. 3.1.2).
Remark 3.5
(Dimension two) The two-dimensional case is rather special because of the paramagnetism of the Pauli operator. As a matter of fact, the total absence of the point spectrum is no longer guaranteed even in the purely magnetic case (i.e. \(\varvec{V}=\varvec{0}\)). In this case the Pauli operator has the form (see Sect. 2.3.2)
For smooth vector potentials, the supersymmetry says that the operators \(-\nabla _{\!A}^2 \pm B_{12}\) have the same spectrum except perhaps at zero (see [9, Thm. 6.4]). Hence the absence of the point spectrum for the two-dimensional Pauli operator is in principle governed by our Theorem 3.1 with \(d=2\) and \(n=1\) (or Theorem 3.2) or its self-adjoint counterpart Theorem 3.4 for the special choice \(\varvec{V} = B_{12} \varvec{I}_{\mathbb {C}^2}\). Unfortunately, we do not see how to derive any non-trivial condition on \(B_{12}\) to guarantee the total absence of eigenvalues (cf. Remark 5.2). Physically, this does not come as a big surprise because of the celebrated Aharonov–Casher effect, which states that the number of zero-eigenstates is equal to the integer part of the total magnetic flux (see [9, Sec. 6.4]). Nevertheless, the absence of negative eigenvalues does follow as an immediate consequence of the standard lower bound
which holds for either sign ± (see, e.g., [1, Sec. 2.4]).
Notice that when an attractive potential is added to the two-dimensional Pauli operator, it has been proved [17, 39] that the perturbed Hamiltonian always possesses negative eigenvalues, no matter how small the coupling constant is chosen (these arise not only from the Aharonov–Casher zero modes turning into negative eigenvalues, but also from the contribution of the essential spectrum). This fact can be seen as a quantification of the aforementioned paramagnetic effect of Pauli operators, in contrast with the diamagnetic effect valid for magnetic Schrödinger operators.
3.3 The Dirac equation
Finally, we state our results for the purely magnetic Dirac operator (2.5).
Theorem 3.6
Let \(d\ge 3\) and let n(d) be as in (2.2). Let \(A\in L^2_{\text {loc}}(\mathbb {R}^d;\mathbb {R}^d)\) be such that \(\varvec{B}\in L^2_{\text {loc}}(\mathbb {R}^d; \mathbb {R}^{d\times d}).\) Assume that there exists a number \(c\in [0,1)\) satisfying
such that, for every n(d)-vector u with components in \(C^\infty _0(\mathbb {R}^d),\) the inequality
holds true. If in addition \(A\in W^{1,d}_{\text {loc}}(\mathbb {R}^d),\) then \(H_{\text {D}}(A)\) has no eigenvalues, i.e. \(\sigma _{\text {p}}(H_{\text {D}}(A))=\varnothing .\)
As discussed in Sect. 2.3.1, the square of the one-dimensional Dirac operator is just the one-dimensional Laplacian shifted by a constant (cf. (2.9)), hence the absence of the point spectrum follows at once in this case. On the other hand, the two-dimensional analogue of Theorem 3.6 is unavailable, because of the absence of a two-dimensional variant of Theorem 3.5 in the Pauli case, cf. Remark 3.5.
4 Scalar Electromagnetic Schrödinger Operators Revisited
In this section, we leave aside the operators acting on spinor Hilbert spaces and focus on scalar electromagnetic Schrödinger operators (1.1). This will be useful later on when, in the following sections, we reduce our analysis to the level of components. We provide a careful and thorough analysis of the method of multipliers, stressing the major outcomes that the technique provides in this context. Our goal is to present a reader-friendly overview of the original ideas and main outcomes of [15, 16] for tackling the issue of the total absence of eigenvalues of scalar Schrödinger operators. Furthermore, we go through the more technical parts by rigorously establishing some results that were only sketched in the previous works.
4.1 Definition of the operators
For the sake of completeness, we start with recalling some basic facts on the rigorous definition of the scalar electromagnetic Schrödinger operators.
Let \(d\ge 1\) be any natural number. Let \(A\in L^2_{{\text {loc}}}(\mathbb {R}^d;\mathbb {R}^d)\) and \(V\in L^1_{{\text {loc}}}(\mathbb {R}^d; \mathbb {C})\) be respectively a vector potential and a scalar potential (the latter possibly complex-valued). The quantum Hamiltonian apt to describe the motion of a non-relativistic particle interacting with the electric field \(-\nabla V\) and the magnetic field \(\varvec{B}:=(\nabla A)- (\nabla A)^t\) is represented by the scalar electromagnetic Schrödinger operator
Observe that the magnetic field is absent in \(\mathbb {R}^1\) and A can be chosen to be equal to zero without loss of generality. Therefore the two-dimensional framework is the lowest in which the introduction of a magnetic field is non-trivial.
As usual, the sum in (4.1) should be understood in the sense of forms after assuming that V is relatively form-bounded with respect to the magnetic Laplacian \(-\nabla _{\!A}^2\) with the relative bound less than one. We shall often proceed more restrictively by assuming the form-subordination condition
where \(a\in [0,1)\) is a constant independent of u. Assumption (4.2) in particular implies that the quadratic form
is relatively bounded with respect to the quadratic form
with the relative bound less than one. Consequently, the sum \(h_{A,V}:=h_A + h_V\) with domain \(\mathcal {D}(h_{A,V}):=\mathcal {D}_A\) is a closed and sectorial form. Therefore \(H_{A,V}\) as defined in (4.1) makes sense as the m-sectorial operator associated to \(h_{A,V}\) via the representation theorem (cf. [24, Thm. VI.2.1]).
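For orientation, a form-subordination condition of the kind (4.2) is typically of the following shape (a hedged sketch; the exact powers and constants are those fixed in the displayed condition (4.2)):

```latex
\int_{\mathbb{R}^d} |V|\,|u|^2\,dx
\;\le\; a^2 \int_{\mathbb{R}^d} |\nabla_{\!A}\,u|^2\,dx
\qquad \text{for all } u \in C_0^\infty(\mathbb{R}^d),
```

with \(a\in[0,1)\) independent of u; a bound of this type immediately makes the form \(h_V\) relatively bounded with respect to \(h_A\) with relative bound \(a^2<1\), which is what the representation theorem requires.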
With the aim of including also potentials which are not necessarily subordinated in the spirit of (4.2), now we present an alternative way to give a meaning to the operator \(H_{A,V}\) assuming different conditions on the electric potential V. We introduce the form
with
The form \(h_{A,V}^{(1)}\) is closed by definition. Now, instead of assuming the smallness condition (4.2) for the whole V, we take advantage of the splitting of the potential into real (positive and negative) and imaginary parts to require the following more natural subordination: there exist \(b, \beta \in [0,1)\) with
such that, for any \(u\in \mathcal {D}(h_{A,V}^{(1)}),\)
In other words, we require the subordination just for the parts \(({{\,\mathrm{Re}\,}}V)_-\) and \({{\,\mathrm{Im}\,}}V\) of the potential V. Hence, defining
the form \(h_{A,V}^{(2)}\) is relatively bounded with respect to \(h_{A,V}^{(1)}\), with the relative bound less than one (see (4.3)). Consequently, as above, the sum \(h_{A,V}=h_{A,V}^{(1)} + h_{A,V}^{(2)}\) is a closed and sectorial form and \(\mathcal {D}(h_{A,V})=\mathcal {D}(h_{A,V}^{(1)}).\) Therefore, also in this more general setting, \(H_{A,V}\) is the m-sectorial operator associated with \(h_{A,V}.\)
In order to consider simultaneously both these two possible configurations, we introduce the decomposition \(V=V^{(1)} + V^{(2)}\) and assume that there exist \(a, b, \beta \in [0,1)\) satisfying
such that, for any \(u\in \mathcal {D}_A,\)
and
Let us define \( h_{A,V}^{(1)}[u]:= \int _{\mathbb {R}^d} |\nabla _{\!A} u|^2 + \int _{\mathbb {R}^d} ({{\,\mathrm{Re}\,}}V^{(2)})_+ |u|^2\) with \(\mathcal {D}(h_{A,V}^{(1)}):= \overline{C^\infty _0(\mathbb {R}^d)}^{{\left| \left| \left| \cdot \right| \right| \right| }},\) where
and \(h_{A,V}^{(2)}[u]:= \int _{\mathbb {R}^d} V^{(1)}|u|^2 -\int _{\mathbb {R}^d} ({{\,\mathrm{Re}\,}}V^{(2)})_-|u|^2 + i \int _{\mathbb {R}^d} {{\,\mathrm{Im}\,}}V^{(2)}|u|^2 \) with \(\mathcal {D}(h_{A,V}^{(2)}):=\mathcal {D}(h_{A,V}^{(1)})\). By the same reasoning as above, one has that \(H_{A,V}\) is the m-sectorial operator associated with the closed and sectorial form \(h_{A,V} := h_{A,V}^{(1)}+h_{A,V}^{(2)}\) with \(\mathcal {D}(h_{A,V}):=\mathcal {D}(h_{A,V}^{(1)}).\) Since the form h will not be used explicitly any more, we drop the dependence on it in the notation of the domain, and from now on we will denote
4.2 Further hypotheses on the potentials
As we shall see below, in order to justify rigorously the algebraic manipulations that the method of multipliers introduces, we need to assume more regularity on the magnetic potential A and on the electric potential \(V=V^{(1)} + V^{(2)}\) than the ones required to give a meaning to the electromagnetic Hamiltonian (4.1).
4.2.1 Further hypotheses on the magnetic potential
We assume
In particular, these assumptions ensure that, for any \(u\in \mathcal {D}_A\),
and the same can be said for \(\partial _l A u,\) with \(l=1,2,\dots , d.\) Indeed, from the Hölder inequality, one has that for any \(k=1,2,\dots , d\)
Observe that the diamagnetic inequality (3.13) and \(u\in \mathcal {D}_A\) guarantee \(|u|\in H^1(\mathbb {R}^d).\) By the Sobolev embeddings
Consequently, if one chooses q as in (4.11), then \(\Vert u\Vert _{L^q(\mathbb {R}^d)}\) is finite. If, moreover, the Hölder conjugated exponent p is as in our assumption (4.8), then \(\Vert A_k\Vert _{L_{\text {loc}}^p(\mathbb {R}^d)}\) is finite and therefore, from (4.10), \(A_k u\in L_{\text {loc}}^2(\mathbb {R}^d).\)
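The Hölder step just described can be summarised schematically as follows (a sketch; the precise exponents p and q are those fixed in (4.8) and (4.11), related by \(1/p+1/q=1/2\)):

```latex
\|A_k u\|_{L^2(K)}
\;\le\; \|A_k\|_{L^p(K)}\,\|u\|_{L^q(\mathbb{R}^d)},
\qquad \frac{1}{p}+\frac{1}{q}=\frac{1}{2},
```

for every compact set \(K\subset\mathbb{R}^d\) and every \(k=1,\dots,d\); the norm \(\|u\|_{L^q}\) on the right-hand side is controlled by the Sobolev embedding of \(H^1(\mathbb{R}^d)\) applied to \(|u|\), which lies in \(H^1(\mathbb{R}^d)\) by the diamagnetic inequality.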
Notice that, given any function \(u\in \mathcal {D}_A\), as soon as \(Au\in L^2(\mathbb {R}^d)\) one has \(\nabla u\in L^2(\mathbb {R}^d)\) and therefore \(u\in H^1(\mathbb {R}^d).\) In other words
4.2.2 Further hypotheses on the electric potential
Recalling the decomposition \(V = V^{(1)}+V^{(2)}\), we assume the following condition on the real part of the second component:
By the same reasoning as above for the magnetic potential, one can observe that assumption (4.13) ensures that, for any \(u\in H^1_A(\mathbb {R}^d),\)
and the same can be said for \(\partial _k {{\,\mathrm{Re}\,}}V^{(2)},\) with \(k=1,2,\dots , d.\)
4.3 The method of multipliers: main ingredients
The purpose of this subsection is to provide, in a unified and rigorous way, the proof of the common crucial starting point of the series of works [3, 4, 15, 16] for proving the absence of the point spectrum of the electromagnetic Hamiltonians \(H_{A,V}\) in various settings.
Since this section is intended as a review of already known results on scalar Schrödinger Hamiltonians, here we will be concerned almost exclusively with the most interesting and most troublesome case of the spectral parameter \(\lambda \in \mathbb {C}\) within the sector of the complex plane given by
On the other hand, how to deal with the complementary sector, i.e., \(\{\lambda \in \mathbb {C}:{{\,\mathrm{Re}\,}}\lambda < |{{\,\mathrm{Im}\,}}\lambda |\}\) can be seen explicitly in the proof of our original results (see Sects. 5 and 6).
The proof of the absence of eigenvalues within the sector defined in (4.14) is based on the following crucial result obtained by means of the method of multipliers. It basically provides an integral identity for weak solutions u to the resolvent equation \((H_{A,V}-\lambda ) u = f\), where \(f:\mathbb {R}^d \rightarrow \mathbb {C}\) is a suitable function. More specifically, \(u\in \mathcal {D}_{A,V}\) is such that the identity
holds for any \(v\in \mathcal {D}_{A,V},\) where f is any suitable function for which the last integral in (4.15) is finite. The crucial result reads as follows.
Lemma 4.1
Let \(d\ge 1\), let \(A\in L^2_{\text {loc}}(\mathbb {R}^d;\mathbb {R}^d)\) be such that \(\varvec{B}\in L^2_{\text {loc}}(\mathbb {R}^d; \mathbb {R}^{d\times d})\) and (4.8) holds. Suppose that \(V\in L^1_{\text {loc}}(\mathbb {R}^d;\mathbb {C})\) admits the decomposition \(V=V^{(1)}+ V^{(2)}\) with \({{\,\mathrm{Re}\,}}V^{(2)}\) satisfying (4.13). Let \(u\in \mathcal {D}_{A,V}\) be a solution to (4.15), with \(|{{\,\mathrm{Im}\,}}\lambda |\le {{\,\mathrm{Re}\,}}\lambda \) and \(r f\in L^2(\mathbb {R}^d),\) satisfying
Then also \(r|\nabla _{\!A} u^-|^2 + r^{-1}|u|^2 + [\partial _r(r {{\,\mathrm{Re}\,}}V^{(2)})]_-|u|^2 + r[{{\,\mathrm{Re}\,}}V^{(2)}]_+|u|^2 \in L^1(\mathbb {R}^d)\) and the identity
holds true with
and \(f^-\) defined in the analogous way.
Remark 4.1
(Dimension one). Since the addition of a magnetic potential is trivial in \(\mathbb {R}^1,\) the corresponding identity (4.16) with \(d=1\) comes with the classical gradient \(\nabla \) as a replacement of the magnetic gradient \(\nabla _{\!A}\); moreover, the term involving \(\varvec{B}\) is not present.
The proof of Lemma 4.1 can be found in Sect. 4.3.1; here we just list its main steps:
-
Step one: Approximation of u with a sequence of compactly supported functions \(u_R\) (see definition (4.28) below) which satisfy a related problem with small (in a suitable topology) corrections. This first step is necessary in order to justify rigorously the algebraic manipulations that the method of multipliers introduces when the test function v is chosen to be possibly unbounded (so that it is not even clear a priori whether this specific choice of v belongs to \(L^2(\mathbb {R}^d)\)).
-
Step two: Development of the method of multipliers for \(u_R\) (main core of the proof) in order to produce the analogue of identity (4.16) for the approximating sequence. This step will require a further approximation procedure which will ensure that the chosen multiplier v (see (4.51) below) is in \(\mathcal {D}_{A,V}\) and therefore allowed to be taken as a test function.
-
Step three: Proof of (4.16) by taking the limit as \(R \rightarrow \infty \) in the previous identity and using the smallness of the corrections which is quantified in Lemma 4.3 below.
As a byproduct of the crucial identity of Lemma 4.1, we get the following inequality. For the sake of completeness, we provide it with a proof.
Lemma 4.2
Under the hypotheses of Lemma 4.1 the following estimate
holds true.
Proof of Lemma 4.2
Let us consider identity (4.16) with \(V^{(1)}=0.\) In passing, notice that requiring \(V^{(1)}=0\) does not entail any loss of generality. Indeed, since, according to our notation, \(V^{(1)}\) represents the component of the electric potential V which is fully subordinated to the magnetic Dirichlet form (in the sense given by (4.6)), it can be treated on the same level as the forcing term f.
After splitting \({{\,\mathrm{Re}\,}}V^{(2)}\) in its positive and negative parts, namely using \({{\,\mathrm{Re}\,}}V^{(2)}= ({{\,\mathrm{Re}\,}}V^{(2)})_+ - ({{\,\mathrm{Re}\,}}V^{(2)})_-,\) identity (4.16) with \(V^{(1)}=0\) reads as follows
We consider first
By the Cauchy–Schwarz inequality, it immediately follows that
Now we consider the terms in (4.19) involving \(V^{(2)},\) that is
Using that \(|u|=|u^-|,\) the term \(I\!I_1\) can be easily estimated in this way:
By the Cauchy–Schwarz inequality one has
Finally, if \({{\,\mathrm{Im}\,}}\lambda \ne 0,\) we also need to estimate \(I\!I_3.\) First notice that choosing \(v=\frac{{{\,\mathrm{Im}\,}}\lambda }{|{{\,\mathrm{Im}\,}}\lambda |} u\) in (4.15) (with \(V^{(1)}=0\)) and taking the imaginary part of the resulting identity gives the following \(L^2\)-bound
Using the Cauchy–Schwarz inequality, the \(L^2\)-bound (4.23), the fact that we are working in the sector \(|{{\,\mathrm{Im}\,}}\lambda |\le {{\,\mathrm{Re}\,}}\lambda ,\) and again using that \(|u|=|u^-|,\) we have
Now we estimate the terms in (4.19) involving f, namely
Arguing as in the estimates of \(I\!I_1, I\!I_2\) and \(I\!I_3,\) one gets
and
Applying estimates (4.20), (4.21), (4.22) and (4.24), together with (4.25) and (4.26), in (4.19), we obtain the claim. \(\quad \square \)
Now we are in a position to prove Lemma 4.1 on the basis of the three steps presented above.
4.3.1 Proof of Lemma 4.1
-
Step one. The desired approximation by compactly supported functions is achieved by a usual “horizontal cut-off.” Let \(\mu :[0,\infty )\rightarrow [0,1]\) be a smooth function such that
$$\begin{aligned} \mu (r)= \left\{ \begin{array}{ll} 1 &{}\quad \text {if}\quad 0\le r\le 1,\\ 0 &{}\quad \text {if}\quad r\ge 2. \end{array}\right. \end{aligned}$$Given a positive number R, we set \(\mu _R(x):=\mu (|x|R^{-1}).\) Then \(\mu _R:\mathbb {R}^d \rightarrow [0,1]\) is such that
$$\begin{aligned} \mu _R= & {} 1 \quad \text {in}\quad B_R(0), \qquad \mu _R= 0 \quad \text {in}\quad \mathbb {R}^d{\setminus } B_{2R}(0), \nonumber \\ \qquad |\nabla \mu _R|\le & {} cR^{-1}, \qquad |\Delta \mu _R|\le c R^{-2}, \end{aligned}$$(4.27)where \(B_R(0)\) stands for the open ball centered at the origin and with radius \(R>0\) and \(c> 1\) is a suitable constant independent of R. For any function \(h:\mathbb {R}^d \rightarrow \mathbb {C}\) we then define the compactly supported approximating family of functions by setting
$$\begin{aligned} h_R:=\mu _R h. \end{aligned}$$(4.28)If \(u\in \mathcal {D}_{A,V}\) is a weak solution to \(-\nabla _{\!A}^2 u + Vu=\lambda u + f\), it is not difficult to show that the compactly supported function \(u_R\) belongs to \(\mathcal {D}_{A,V}\) and solves in a weak sense the following related problem
$$\begin{aligned} -\nabla _{\!A}^2u_R + Vu_R=\lambda u_R + f_R + {\text {err}}(R) \quad \text {in}\quad \mathbb {R}^d, \end{aligned}$$(4.29)where
$$\begin{aligned} {\text {err}}(R):= -2\nabla _{\!A} u \cdot \nabla \mu _R - u \Delta \mu _R. \end{aligned}$$(4.30)The next easy result shows that the extra terms (4.30), which originate from the introduction of the horizontal cut-off \(\mu _R\), become negligible as R increases.
Lemma 4.3
Given \(u\in \mathcal {D}_{A,V}\), let \({\text {err}}(R)\) be as in (4.30). Then the following limits
hold true.
Proof
By (4.27) we have
Since \(u\in L^2(\mathbb {R}^d)\) and \(\nabla _{\!A} u\in \big [L^2(\mathbb {R}^d)\big ]^d\), the right-hand side tends to zero as R goes to infinity.
Similarly,
and again the right-hand side goes to zero as R approaches infinity. \(\quad \square \)
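The estimates behind this proof are of the following type (a sketch; c is the constant from (4.27), and the annulus \(\mathcal{A}_R:=B_{2R}(0)\setminus B_R(0)\), where \(\nabla\mu_R\) and \(\Delta\mu_R\) are supported, is a notation introduced here):

```latex
\|{\rm err}(R)\|_{L^2(\mathbb{R}^d)}
\;\le\; 2\,\|\nabla \mu_R\|_{L^\infty}\,\|\nabla_{\!A}\,u\|_{L^2(\mathcal{A}_R)}
      + \|\Delta \mu_R\|_{L^\infty}\,\|u\|_{L^2(\mathcal{A}_R)}
\;\le\; \frac{2c}{R}\,\|\nabla_{\!A}\,u\|_{L^2(\mathcal{A}_R)}
      + \frac{c}{R^2}\,\|u\|_{L^2(\mathcal{A}_R)},
```

and the right-hand side vanishes as \(R\to\infty\) precisely because \(u\in L^2(\mathbb{R}^d)\) and \(\nabla_{\!A}u\in \big[L^2(\mathbb{R}^d)\big]^d\), so their tails over \(\mathcal{A}_R\) tend to zero.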
-
Step two. This second step represents the main body of the section: it is here that the method of multipliers is fully developed. Informally speaking, the method of multipliers produces integral identities by choosing different test functions v in (4.15) (see Lemma 4.4 below) and later combines them in a refined way to get, in our case, the analogue of (4.16). By virtue of the previous step, we shall develop the method for compactly supported solutions \(u\in \mathcal {D}_{A,V}\) to (4.15); it is in Step three below that we will extend the result to solutions which are not necessarily compactly supported. As a starting point, we state the aforementioned identities, collected in the following lemma. Notice that the lemma is stated for any \(\lambda \in \mathbb {C}\), not necessarily just for \(\lambda \) in the sector (4.14).
Lemma 4.4
Let \(d\ge 1,\) let \(A\in L^2_{\text {loc}}(\mathbb {R}^d;\mathbb {R}^d)\) be such that \(\varvec{B}\in L^2_{\text {loc}}(\mathbb {R}^d; \mathbb {R}^{d\times d})\) and assume also (4.8). Suppose that \(V\in L^1_{\text {loc}}(\mathbb {R}^d;\mathbb {C})\) admits the decomposition \(V=V^{(1)}+ V^{(2)}\) with \({{\,\mathrm{Re}\,}}V^{(2)}\) satisfying (4.13). Let \(u\in \mathcal {D}_{A,V}\) be any compactly supported solution of (4.15), with \(\lambda \) any complex constant and \(|x|f\in L^2_{\text {loc}}(\mathbb {R}^d),\) satisfying
Then \(|x|^{-1}|u|^2\in L^1_{\text {loc}}(\mathbb {R}^d)\) and the following identities
hold true.
Now we show how to use these identities to prove the analogue of identity (4.16) for compactly supported solutions of (4.15). For the sake of clarity, the technical proof of Lemma 4.4 is postponed to Sect. 4.4.
Let us start our algebraic manipulation of identities (4.32)–(4.36) by taking the sum
This gives
Recalling definition (4.17) of \(u^-\), one observes that
and therefore
Moreover one has
where the last equality follows from the fact that, \(\varvec{B}\) being anti-symmetric, one has \(x\cdot \varvec{B} \cdot x=0.\)
Reintegrating (4.39) over \(\mathbb {R}^d\), we obtain
Adding equation (4.33) multiplied by \(({{\,\mathrm{Re}\,}}\lambda )^{-1/2}|{{\,\mathrm{Im}\,}}\lambda |\) to (4.37), plugging in (4.41), and using again (4.39) and (4.40), we get
Then, using (4.38) in the fourth, antepenultimate and last lines of the previous identity, we obtain
where \(f^-(x):=e^{-i({{\,\mathrm{Re}\,}}\lambda )^{1/2}{{\,\mathrm{sgn}\,}}({{\,\mathrm{Im}\,}}\lambda )|x|} f(x).\)
-
Step three. Now we want to come back to our approximating sequence \(u_R.\) Recalling that \(u_R\) is a weak solution to (4.29), identity (4.43), rewritten in terms of \(u_R,\) \(f_R\) and \({\text {err}}(R),\) gives
$$\begin{aligned}&\int _{\mathbb {R}^d} |\nabla _{\!A} u_R^-|^2\, dx +({{\,\mathrm{Re}\,}}\lambda )^{-1/2} |{{\,\mathrm{Im}\,}}\lambda | \int _{\mathbb {R}^d} |x||\nabla _{\!A} u_R^-|^2\, dx\nonumber \\&\qquad -\frac{(d-1)}{2}({{\,\mathrm{Re}\,}}\lambda )^{-1/2}|{{\,\mathrm{Im}\,}}\lambda | \int _{\mathbb {R}^d} \frac{|u_R|^2}{|x|}\,dx\nonumber \\&\qquad +2{{\,\mathrm{Im}\,}}\int _{\mathbb {R}^d} x \cdot \varvec{B} \cdot u_R^- \overline{\nabla _{\!A} u_R^-}\, dx\nonumber \\&\qquad +(d-1)\int _{\mathbb {R}^d} {{\,\mathrm{Re}\,}}V^{(1)}|u_R|^2\, dx +2{{\,\mathrm{Re}\,}}\int _{\mathbb {R}^d} x\cdot V^{(1)} u_R^- \overline{\nabla _{\!A} u_R^-}\, dx\nonumber \\&\qquad +({{\,\mathrm{Re}\,}}\lambda )^{-1/2} |{{\,\mathrm{Im}\,}}\lambda | \int _{\mathbb {R}^d}|x| {{\,\mathrm{Re}\,}}V^{(1)}|u_R|^2\, dx\nonumber \\&\qquad -\int _{\mathbb {R}^d} \partial _r (|x| {{\,\mathrm{Re}\,}}V^{(2)})|u_R|^2\, dx -2{{\,\mathrm{Im}\,}}\int _{\mathbb {R}^d} x {{\,\mathrm{Im}\,}}V^{(2)} u_R^- \overline{\nabla _{\!A} u_R^-}\, dx\nonumber \\&\qquad +({{\,\mathrm{Re}\,}}\lambda )^{-1/2} |{{\,\mathrm{Im}\,}}\lambda |\int _{\mathbb {R}^d}|x| {{\,\mathrm{Re}\,}}V^{(2)} |u_R|^2\, dx\nonumber \\&\quad =(d-1) {{\,\mathrm{Re}\,}}\int _{\mathbb {R}^d} f_R \overline{u_R}\, dx + 2 {{\,\mathrm{Re}\,}}\int _{\mathbb {R}^d} x \cdot f_R^- \overline{\nabla _{\!A} u_R^-}\, dx\nonumber \\&\qquad +({{\,\mathrm{Re}\,}}\lambda )^{-1/2} |{{\,\mathrm{Im}\,}}\lambda | {{\,\mathrm{Re}\,}}\int _{\mathbb {R}^d} |x| f_R \overline{u_R}\, dx\nonumber \\&\qquad +(d-1) {{\,\mathrm{Re}\,}}\int _{\mathbb {R}^d} {\text {err}}(R) \overline{u_R}\, dx + 2 {{\,\mathrm{Re}\,}}\int _{\mathbb {R}^d} x \cdot {\text {err}}(R)^- \overline{\nabla _{\!A} u_R^-}\, dx \nonumber \\&\qquad +({{\,\mathrm{Re}\,}}\lambda )^{-1/2} |{{\,\mathrm{Im}\,}}\lambda | {{\,\mathrm{Re}\,}}\int _{\mathbb {R}^d} |x| {\text {err}}(R) \overline{u_R}\, dx. 
\end{aligned}$$(4.44)Letting R go to infinity, the claim follows from the dominated and monotone convergence theorems and Lemma 4.3.
4.4 The method of multipliers: proof of the crucial Lemma 4.4
This part is entirely devoted to the rigorous proof of the crucial identities contained in Lemma 4.4. Let us start by proving (4.32) and (4.33). We choose in (4.15) \(v:= \varphi u,\) with \(\varphi :\mathbb {R}^d \rightarrow \mathbb {R}\) a radial function such that \(v\in \mathcal {D}_{A,V}\) (since the support of u is compact, any locally bounded \(\varphi \) with locally bounded first-order partial derivatives is admissible). Using the generalised Leibniz rule for the magnetic gradient, namely
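Since this rule is invoked repeatedly below, we record it in the form in which it is commonly stated (a sketch in the present notation, with \(\nabla _{\!A}=\nabla +iA\)):

```latex
\nabla_{\!A}(g\,h) \;=\; g\,\nabla h \;+\; h\,\nabla_{\!A}\,g ,
\qquad\text{i.e., componentwise}\qquad
(\partial_l + iA_l)(g\,h) \;=\; g\,\partial_l h \;+\; h\,(\partial_l + iA_l)g ,
```

so that the magnetic part acts on only one of the two factors; exchanging the roles of g and h gives the symmetric variant.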
valid for any \(g, h:\mathbb {R}^d \rightarrow \mathbb {C},\) we get
Taking the real part of the obtained identity, using that, since A is a real-valued vector field, one has
and performing an integration by parts give
Taking \(\varphi :=1\) and \(\varphi (x):=|x|,\) we get (4.32) and (4.33). Equations (4.34) and (4.35) are obtained as in the previous case by choosing in (4.15) \(v:=\psi u,\) with \(\psi :\mathbb {R}^d\rightarrow \mathbb {R}\) being a radial function such that \(v\in \mathcal {D}_{A,V},\) and taking the imaginary part of the resulting identity. Finally, one chooses \(\psi :=1\) and \(\psi (x):=|x|,\) respectively.
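For orientation, with \(\varphi :=1\) (respectively \(\psi :=1\)) the resulting identities are of the familiar energy-balance type; a sketch consistent with the notation above (the precise statements are (4.32) and (4.34)) reads:

```latex
\int_{\mathbb{R}^d} |\nabla_{\!A} u|^2\,dx
+ \int_{\mathbb{R}^d} \operatorname{Re} V\,|u|^2\,dx
= \operatorname{Re}\lambda \int_{\mathbb{R}^d} |u|^2\,dx
+ \operatorname{Re}\int_{\mathbb{R}^d} f\,\overline{u}\,dx ,
\qquad
\int_{\mathbb{R}^d} \operatorname{Im} V\,|u|^2\,dx
= \operatorname{Im}\lambda \int_{\mathbb{R}^d} |u|^2\,dx
+ \operatorname{Im}\int_{\mathbb{R}^d} f\,\overline{u}\,dx .
```

The weighted versions with \(\varphi (x)=\psi (x)=|x|\) carry the additional commutator terms produced by the integration by parts.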
The remaining identity (4.36) is formally obtained by plugging into (4.15) the multiplier
taking the real part and integrating by parts. However, such a v need not belong to \(\mathcal {D}_A\) (and therefore not to \(\mathcal {D}_{A,V}\) either). Indeed, while the unboundedness of the radial function \(\phi \) poses no problem, because the support of u is assumed to be compact at this step, \(\nabla _{\!A} u\) does not necessarily belong to \(\mathcal {D}_A.\) Following the strategy developed in [4], we replace (4.47) by its regularised version
where
and where
with \(\delta \in \mathbb {R}{\setminus } \{0\}\) is the standard difference quotient of u (we refer to [13, Sec. 5.8.2] or [29, Sec. 10.5] for basic facts about the difference quotients) and the Lipschitz continuous function
with \(N>0\) is the usual truncation function. After the second equality of (4.48) and in the sequel, we use the Einstein summation convention.
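Explicitly, the regularising objects just introduced are the standard ones (a sketch; cf. [13, Sec. 5.8.2] for the difference quotient):

```latex
\partial_k^{\delta} u(x) \;:=\; \frac{u(x+\delta e_k)-u(x)}{\delta}
\;=\; \frac{(\tau_k^{\delta}u - u)(x)}{\delta},
\qquad
T_N(s) \;:=\;
\begin{cases}
 s & \text{if } |s|\le N,\\[2pt]
 N\,\operatorname{sgn}(s) & \text{if } |s|> N,
\end{cases}
```

where \(\tau_k^{\delta}u(x):=u(x+\delta e_k)\) denotes the translation operator appearing below.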
We start by showing that v defined as in (4.48) belongs to \(\mathcal {D}_{A,V},\) which means that \(v\in L^2(\mathbb {R}^d),\) \(\partial _{l,A}v:=(\partial _l+iA_l)v\in L^2(\mathbb {R}^d)\) for any \(l=1, \dots , d\) and \(\sqrt{({{\,\mathrm{Re}\,}}V^{(2)})_+}v\in L^2(\mathbb {R}^d).\) To see this, let us rewrite (4.48) explicitly with the choice \(\phi (x):=|x|^2,\) that is
Clearly, since \(u\in \mathcal {D}_{A,V},\) the first term in v belongs to \(\mathcal {D}_{A,V},\) and therefore we only need to comment further on the second term of the sum, namely \(x_k\,\partial _{k,A}^{\delta ,N} u\) (the part involving \(\partial _{k,A}^{-\delta ,N}u\) is analogous). One can check that \(x_k \partial _{k,A}^{\delta , N}u:=x_k(\partial _k^\delta + i T_N(A_k))u\in L^2(\mathbb {R}^d)\); this is a consequence of \(u\in L^2(\mathbb {R}^d)\) being compactly supported and of the boundedness of \(T_N(A_k).\) It is less trivial to prove that, for any \(l=1,2,\dots ,d,\) one has \(\partial _{l,A}[x_k \partial _{k,A}^{\delta , N}u]\in L^2(\mathbb {R}^d).\)
To begin with, it is easy to check that the following commutation relation between the magnetic gradient \(\partial _{l,A}\) and its regularised version \(\partial _{k,A}^{\delta , N}\) holds true
Here \([\cdot , \cdot ]\) denotes the usual commutator operator, for any given subset \(S\subseteq \mathbb {R}^d,\) the function \(\chi _S\) is the characteristic function of the set S and \(\tau _k^\delta \) is the translation operator as defined in (4.50).
Using (4.45), the fact that, by definition of the commutator, \(\partial _{l,A} \partial _{k,A}^{\delta , N}= \partial _{k,A}^{\delta , N} \partial _{l,A} + [\partial _{l,A}, \partial _{k,A}^{\delta , N}],\) and finally (4.52), one has
where
Here and henceforth, \(\delta _{l,k}\) for every \(k,l=1,2,\dots , d\) denotes the Kronecker symbol.
Now, since \(u\in \mathcal {D}_{A,V}\) (thus in particular \(u\in L^2(\mathbb {R}^d)\)) and \(T_N(A_k)\in L^\infty (\mathbb {R}^d),\) the term
is clearly in \(L^2(\mathbb {R}^d).\) Moreover, since \(u\in \mathcal {D}_{A,V}\) (thus in particular \(\partial _{l,A} u\in L^2(\mathbb {R}^d)\)) is compactly supported, one can conclude the same for \(v_2.\) As for \(v_3,\) since \(A_k\in W^{1,p}_{\text {loc}}(\mathbb {R}^d)\) with p as in (4.8), one has \((\partial _l A_k) u\in L^2(\mathbb {R}^d)\) (see (4.9)). Similar reasoning shows that also \((\partial _k^\delta A_l) \tau _k^\delta u\in L^2(\mathbb {R}^d).\) Therefore \(v_3\in L^2(\mathbb {R}^d).\)
It remains only to show that \(\sqrt{({{\,\mathrm{Re}\,}}V^{(2)})_+} [x_k \partial _{k,A}^{\delta , N} u]\in L^2(\mathbb {R}^d).\) First let us write
where
Observe that, since \(u\in \mathcal {D}_{A,V}\) (thus in particular \(\sqrt{({{\,\mathrm{Re}\,}}V^{(2)})_+} u \in L^2(\mathbb {R}^d)\)) is compactly supported and \(T_N(A_k)\in L^\infty (\mathbb {R}^d),\) one has \(v_5\in L^2(\mathbb {R}^d).\) Writing the difference quotient \(\partial _k^\delta u\) out explicitly, one can also see that \(v_4\in L^2(\mathbb {R}^d),\) using that \(({{\,\mathrm{Re}\,}}V^{(2)})_+\in L^p_{\text {loc}}(\mathbb {R}^d)\) with p as in (4.13) and the fact that \(|u|\in H^1(\mathbb {R}^d).\)
Gathering these facts together, we have shown that our multiplier v defined in (4.51) belongs to \(\mathcal {D}_{A,V},\) and hence we have justified its choice as a test function in the weak formulation (4.15).
Now we are in a position to prove identity (4.36). For a moment, we proceed in greater generality by considering \(\phi \) in (4.48) to be an arbitrary smooth function \(\phi :\mathbb {R}^d\rightarrow \mathbb {R}.\) We plug (4.48) into (4.15) and take the real part. Below, for the sake of clarity, we consider each integral of the resulting identity separately.
4.4.1 \(\bullet \) Kinetic term
Let us start with the “kinetic” part of (4.15):
Using
we write \(K=K_1+K_2+K_3+K_4\) with
Using (4.46) and integrating by parts in \(K_1\) give
Now we consider \(K_4.\) Using simply the definition of the commutator operator, we write
where
We start by considering \(K_{4,1}.\) Using the analogue of (4.46) for the regularised magnetic gradient, namely
and the identity
valid for every \(\psi :\mathbb {R}^d \rightarrow \mathbb {C}\), we write \(K_{4,1}=K_{4,1,1} + K_{4,1,2}\) with
Making use of the integration-by-parts formula for difference quotients (see [13, Sec. 5.8.2])
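In its standard form this discrete integration-by-parts formula reads

```latex
\int_{\mathbb{R}^d} \varphi\,\bigl(\partial_k^{\delta}\psi\bigr)\,dx
\;=\; -\int_{\mathbb{R}^d} \bigl(\partial_k^{-\delta}\varphi\bigr)\,\psi\,dx ,
```

and it follows from the change of variables \(x\mapsto x-\delta e_k\) in the shifted term; no boundary contributions arise, since the integrals extend over the whole of \(\mathbb {R}^d.\)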
which holds true for every \(\varphi ,\psi \in L^2(\mathbb {R}^d),\) one gets
At the same time, writing the difference quotient out explicitly and changing variables in \(K_{4,1,2}\) give (with summation over both k and l)
Now we choose the multiplier \(\phi (x):=|x|^2\) and observe that
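Indeed, for \(\phi (x)=|x|^2\) one has the elementary derivatives

```latex
\partial_k \phi(x) = 2x_k, \qquad
\partial_l\partial_k \phi(x) = 2\,\delta_{l,k}, \qquad
\Delta \phi(x) = 2d .
```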
Consequently,
and
In summary,
Now we want to see what happens when \(\delta \) goes to zero and N goes to infinity. To do so, we need the following lemma.
Lemma 4.5
Under the hypotheses of Lemma 4.4, the following limits hold true:
and
Proof
Let us start with (4.59). Using the explicit expression (4.49) for \(\partial _{l,A}^{\delta , N} u,\) one easily has
Now, as a consequence of the strong \(L^2\)-convergence of the difference quotients (which can be used here because \(u\in H^1(\mathbb {R}^d)\); see (4.12)), the first integral converges to zero as \(\delta \) goes to zero. As regards the second integral, we use that, by definition, \(T_N(s)\) converges to s as N tends to infinity, the bound \(|T_N(s)|\le |s|,\) and the fact that, by virtue of (4.8), \(A_l u\in L^2(\mathbb {R}^d)\); these facts allow us to conclude, via the dominated convergence theorem, that the integral goes to zero as N goes to infinity. This concludes the proof of (4.59).
Now we prove (4.60). Observe that (4.60) follows as soon as one proves that the limits
and
hold true. As hypothesis (4.8) implies that \(\partial _l A_k u \in L^2(\mathbb {R}^d),\) the first limit is an immediate consequence of the dominated convergence theorem. As for the second one, one has
and the two integrals tend to zero as \(\delta \) goes to zero as a consequence of the \(L^q\)-continuity of translations for \(1\le q<\infty \) and the strong \(L^p\)-convergence of the difference quotients for \(1\le p<\infty ,\) together with assumption (4.8). \(\quad \square \)
With Lemma 4.5 at hand, it follows as a mere consequence of the Cauchy–Schwarz inequality that
4.4.2 \(\bullet \) Source term
Let us now consider simultaneously the “source” and “eigenvalue” parts of (4.15), that is,
This can be written as \(F=F_1+F_2+F_3+F_4\) with
Applying (4.55) and (4.56), we further split \(F_2=F_{2,1} + F_{2,2},\) where
Using the integration-by-parts formula (4.57), we get
Choosing \(\phi (x):=|x|^2\) in the previous identities and using (4.58) gives
Using limit (4.59) in Lemma 4.5, one gets from the Cauchy–Schwarz inequality that
4.4.3 \(\bullet \) Electric potential term
Let us now consider the contribution of the “potential” part of (4.15), that is,
Using the decomposition \(V=V^{(1)} + V^{(2)},\) it can be written as \(J=J_1+J_2\) with
First of all,
Let us consider now the part involving \(V^{(2)}.\) We can write
where
Let us consider \(J_{2,2}.\) Using (4.55), (4.56) and integrating by parts we get
Choosing \(\phi (x):=|x|^2\) in the previous identities and using (4.58) we can write
where
Moreover
By virtue of hypothesis (4.31), \(|x||V^{(1)}| |u|\in L^2_{\text {loc}}(\mathbb {R}^d)\) and then, using the Cauchy–Schwarz inequality and limit (4.59) in Lemma 4.5, one has
Similarly, using that \(|x||{{\,\mathrm{Im}\,}}V^{(2)}||u|\in L^2_{\text {loc}}(\mathbb {R}^d)\) (see (4.31)) and again (4.59), via the Cauchy–Schwarz inequality one also has
Since \(x_k {{\,\mathrm{Re}\,}}V^{(2)}\in W_{\text {loc}}^{1,p}(\mathbb {R}^d)\) with p as in (4.13), using the strong \(L^p\)-convergence of the difference quotients with \(1\le p<\infty \) and via the Hölder inequality, it is not difficult to see that
where the last identity follows from the Leibniz rule applied to \(\partial _k(x_k {{\,\mathrm{Re}\,}}V^{(2)}).\)
In summary, gathering the previous limits together, one gets
and
Passing to the limit \(\delta \rightarrow 0\) and \(N\rightarrow \infty \) in (4.15) and multiplying the resulting identity by 1/2, one obtains (4.36). \(\quad \square \)
4.5 Potentials with just one singularity: alternative proof of the crucial Lemma 4.4
In this section we consider the case of potentials (both electric and magnetic) whose set of singularities has zero capacity, in fact with just one singularity at the origin. This will allow us to remove the unpleasant hypotheses (4.8) and (4.13). Since a point has positive capacity in dimension one, here we exclusively consider \(d\ge 2\). (As a matter of fact, if \(d=1\), hypothesis (4.13) is rather natural, while (4.8) is automatically satisfied because of the absence of magnetic fields on the real line.)
To be more specific, in the sequel we consider the following setup. Let \(A\in L^{2}_{\text {loc}}(\mathbb {R}^d{\setminus }\{0\};\mathbb {R}^d)\) and \(V\in L^1_{\text {loc}}(\mathbb {R}^d{\setminus } \{0\};\mathbb {C})\) and assume
Notice that assumption (4.65) is satisfied by a large class of non-vanishing potentials, such as \(V(x)=a/|x|^\alpha \) with \(a\ne 0\) and \(\alpha >0,\) and by the Aharonov–Bohm vector field (3.17).
Observe that, since it is no longer necessarily true that \(V\in L^1_{\text {loc}}(\mathbb {R}^d;\mathbb {C})\) and \(A\in L^2_{\text {loc}}(\mathbb {R}^d;\mathbb {R}^d),\) the procedure developed in Sect. 4.1 in order to rigorously introduce the Hamiltonian \(H_{A,V}\) formally defined in (4.1) must be adapted. The modification consists merely in taking the Friedrichs extension of the operator initially defined on \(C^{\infty }_0(\mathbb {R}^d{\setminus } \{0\})\) instead of \(C^{\infty }_0(\mathbb {R}^d)\). To be more specific, we first introduce the closed quadratic form
where
Assume that there exist \(b,\beta \in [0,1)\) with
such that, for any \(u\in \mathcal {D}(h_{A,V}^{(1)}),\)
Then, defining
the form \(h_{A,V}^{(2)}\) is relatively bounded with respect to \(h_{A,V}^{(1)}\), with the relative bound less than one. Consequently, the sum \(h_{A,V}:=h_{A,V}^{(1)} + h_{A,V}^{(2)}\) with domain \(\mathcal {D}(h_{A,V}):=\mathcal {D}(h_{A,V}^{(1)})\) is a closed and sectorial form and \(H_{A,V}\) is understood as the m-sectorial operator associated with \(h_{A,V}\) via the representation theorem. Again, we abbreviate
4.5.1 Proof of identity (4.36)
This subsection is concerned with the proof of Lemma 4.4 in the present alternative framework. More specifically, we will provide only the proof of identity (4.36), which is the one where the changes are significant. For the sake of clarity, we restate it with the alternative hypotheses assumed in this section. (Without loss of generality, we consider just the situation in which \(V^{(1)}=0\); indeed, the assumption (4.13) that we now remove concerned the component \(V^{(2)}\) only.)
Lemma 4.6
Let \(d\ge 2\). Let \(A\in L^2_{\text {loc}}(\mathbb {R}^d{\setminus } \{0\})\) be such that \(\varvec{B}\in L^2_{\text {loc}}(\mathbb {R}^d{\setminus } \{0\})\) and let \(V\in W^{1,1}_{\text {loc}}(\mathbb {R}^d{\setminus } \{0\})\) be potentials satisfying (4.65). Let \(u\in \mathcal {D}_{A,V}\) be any compactly supported solution of (4.15), with \(\lambda \) being any complex constant and \(|x|f\in L^2_{\text {loc}}(\mathbb {R}^d),\) satisfying
Then \([x\cdot \nabla {{\,\mathrm{Re}\,}}V]_-|u|^2\in L^1_{\text {loc}}(\mathbb {R}^d)\) and the following identity
holds true.
Proof
For \(d\ge 3\) we define \(\xi :[0,\infty ) \rightarrow [0,1]\) to be a smooth function such that
and set \(\xi _\varepsilon (x):=\xi (|x|/\varepsilon ).\) For \(d=2,\) let \(\xi \in C^\infty ([0,1])\) such that \(\xi =0\) in a right neighborhood of 0 and \(\xi =1\) in a left neighborhood of 1; then we define the smooth function
A straightforward computation shows that, in both cases, there exists a constant \({\widetilde{c}}>0\) such that the following control on the first derivatives
holds true.
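In the case \(d\ge 3,\) for instance, the bound is of the scale-invariant form one expects (a sketch of the type of estimate recorded in (4.68)):

```latex
|\nabla \xi_\varepsilon(x)|
\;=\; \frac{1}{\varepsilon}\,
\Bigl|\,\xi'\!\Bigl(\frac{|x|}{\varepsilon}\Bigr)\Bigr|\,
\;\le\; \frac{\widetilde{c}}{|x|}
\qquad \text{for a.e. } x\in\mathbb{R}^d ,
```

since \(\nabla \xi _\varepsilon \) is supported in an annulus where \(|x|\) is comparable to \(\varepsilon .\)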
We take as the test function in (4.15) a slight modification of the multiplier (4.48) chosen above, namely
where
with \(\partial _k^\delta \) defined as in (4.50). More specifically,
Observe that in this framework we do not need the truncation of the magnetic potential.
Mimicking the arguments of Sect. 4.4, one can show that v defined as in (4.70) belongs to \(\mathcal {D}_{A,V}.\) In fact, one has \(v\in L^2(\mathbb {R}^d),\) \(\partial _{l,A}v:=(\partial _l+iA_l)v\in L^2(\mathbb {R}^d)\) for any \(l=1, \dots , d\) and \(\sqrt{({{\,\mathrm{Re}\,}}V)_+}v\in L^2(\mathbb {R}^d)\). We comment only on the term \(\xi _\varepsilon x_k \partial _{k,A}^\delta u\) in (4.70). Since \(\xi _\varepsilon \) is supported away from the origin, we have \(A_k\in L^\infty ({{\,\mathrm{supp}\,}}\xi _\varepsilon )\), and therefore \(\xi _\varepsilon x_k \partial _{k,A}^\delta u:=\xi _\varepsilon x_k (\partial _k^\delta + i A_k)u\in L^2(\mathbb {R}^d).\) Now we want to show that \(\partial _{l,A}[\xi _\varepsilon x_k \partial _{k,A}^\delta u]\in L^2(\mathbb {R}^d).\) First observe that, using the generalised Leibniz rule for magnetic derivatives (4.45), one can write
where
Clearly, exactly as above, \(v_2\in L^2(\mathbb {R}^d).\) Using again that \(\Vert A_k\Vert _{L^\infty ({{\,\mathrm{supp}\,}}\xi _\varepsilon )}<\infty \) and the fact that \(x_k \partial _{k,A}^\delta u= x_k \partial _{k,A}^{\delta , N} u\) with \(N= \Vert A_k\Vert _{L^\infty ({{\,\mathrm{supp}\,}}\xi _\varepsilon )},\) where \(\partial _{k,A}^{\delta , N}\) is defined as in (4.49), one can reason as in Sect. 4.4 to conclude that \(v_1\in L^2(\mathbb {R}^d)\) as well (observe that here the assumption \(\partial _l A_k\in L^\infty (\mathbb {R}^d{\setminus } \{0\})\) comes into play, just as in the previous section the assumption \(\partial _l A_k\in L^p_{\text {loc}}(\mathbb {R}^d)\) with p as in (4.8) did). It remains only to prove that \(\sqrt{({{\,\mathrm{Re}\,}}V)_+} [\xi _\varepsilon x_k \partial _{k,A}^\delta u]\in L^2(\mathbb {R}^d)\), but this follows immediately by observing that \(({{\,\mathrm{Re}\,}}V)_+\) is bounded on the support of \(\xi _\varepsilon .\)
Now we are in a position to prove identity (4.36’). In this section too, we proceed in greater generality by considering \(\phi \) in (4.69) to be an arbitrary smooth function \(\phi :\mathbb {R}^d \rightarrow \mathbb {R};\) afterwards we will plug in our choice \(\phi (x)=|x|^2.\) We consider identity (4.15) with the test function v as in (4.70) and take the real part. Each of the resulting integrals is treated separately.
4.5.2 \(\bullet \) Kinetic term
Let us start with the “kinetic” part of (4.15), i.e. (4.53). Using
we write \(K=K_0^\varepsilon + K_1 + K_2 + K_3^\varepsilon + K_4^\varepsilon \) with \(K_1\) and \(K_2\) as in (4.54) and
As regards \(K_4^\varepsilon ,\) proceeding in the same way as in Sect. 4.4 for the term \(K_4\), we end up with
where
and
Now we choose \(\phi (x):= |x|^2.\) Using (4.58) we get
and
Now we need the following analogous version to Lemma 4.5.
Lemma 4.7
Under the hypotheses of Lemma 4.6, the limits
and
hold true.
Using Lemma 4.7 and letting \(\delta \) go to zero, it is easy to see that
Now we want to see what happens in the limit as \(\varepsilon \) approaches zero. In order to do that, we will use the following lemma.
Lemma 4.8
Let \(g\in L^1(\mathbb {R}^d)\) and let \(\xi _\varepsilon \) be defined as above. Then
Proof
The first limit in (4.72) immediately follows from the definition of \(\xi _\varepsilon \) via the dominated convergence theorem. On the other hand, using (4.68), one has
which yields the second limit in (4.72), again from the dominated convergence theorem. \(\quad \square \)
Using Lemma 4.8 and passing to the limit in (4.71), one easily gets
Notice that here we have used that, by hypothesis, \(|x|^2 |\varvec{B}|^2|u|^2\in L^1_{\text {loc}}(\mathbb {R}^d).\)
4.5.3 \(\bullet \) Source term
Now consider simultaneously the “source” and “eigenvalue” parts of (4.15), i.e. (4.61). Plugging in (4.61) our chosen test function v defined in (4.69), we can write \(F=F_1+F_2^\varepsilon +F_3^\varepsilon +F_4^\varepsilon \) with \(F_1\) as in (4.62) and
As regards \(F_2^\varepsilon ,\) proceeding as in Sect. 4.4 when we treated \(F_2,\) we end up with
with
Choosing \(\phi (x):=|x|^2\) in the previous identities and using (4.58) give
Reasoning as above, one gets
Using Lemma 4.8, we conclude that
4.5.4 \(\bullet \) Electric potential term
Let us now consider the contribution of the “potential” part of (4.15), i.e. (4.63). Plugging v defined as in (4.69) into (4.63), we write \(J=J_1 + J_2^\varepsilon \) with
Choosing \(\phi (x):=|x|^2\) in the previous identities and using (4.58), we obtain
Now we write
where
Using that \({{\,\mathrm{Re}\,}}V\) is bounded on \({{\,\mathrm{supp}\,}}\xi _\varepsilon \) and taking the limit as \(\delta \) goes to zero, it follows from Lemma 4.7 that
where in the last identity we have just integrated by parts. Moreover, using that by hypothesis \(|x|^2|{{\,\mathrm{Im}\,}}V|^2|u|^2\in L^1_{\text {loc}}(\mathbb {R}^d)\), we have
Finally, using that \({{\,\mathrm{Re}\,}}V |u|^2\) and \([x_k\partial _k {{\,\mathrm{Re}\,}}V]_+ |u|^2\in L^1(\mathbb {R}^d)\) and again \(|x|^2|{{\,\mathrm{Im}\,}}V|^2|u|^2\in L^1_{\text {loc}}(\mathbb {R}^d),\) then Lemma 4.8 gives
Observe that, in order to pass to the limit in the integral involving \([x_k \partial _k {{\,\mathrm{Re}\,}}V]_-,\) we have used the monotone convergence theorem, since \(\xi _\varepsilon \nearrow 1\) as \(\varepsilon \) tends to zero.
In summary, passing to the limit \(\delta \rightarrow 0\) and \(\varepsilon \rightarrow 0\) in (4.15) and multiplying the resulting identity by 1/2, one obtains (4.36’). This concludes the proof of Lemma 4.6. \(\quad \square \)
5 Absence of Eigenvalues of Matrix Schrödinger Operators
We start our investigation of Schrödinger operators by considering first the most delicate case, represented by the non-self-adjoint results Theorem 3.1 (and its particular case Theorem 1.1) and the alternatives in \(d=2\) given by Theorem 3.2 and Theorem 3.3. The self-adjoint situation is treated afterward (Sect. 5.2).
5.1 Non self-adjoint case
Proof of Theorem 3.1
Let u be any weak solution to the eigenvalue equation
with \(H_{\text {S}}(A, \varvec{V})\) being defined as in (1.4) and \(\lambda \) being any complex constant. More precisely, u satisfies
for \(j=1,2\dots ,n\) and for any \(v_j\in \mathcal {D}_{A,V}.\)
Here, since we want to use directly the estimate in Lemma 4.2, we have defined \(f:=-V^{(1)} u.\) In passing, observe that by virtue of our hypothesis (3.3), it is not difficult to check that f, so defined, satisfies
with \(a_1\) and \(a_2\) as in (3.3) and \(u^-\) as in (4.17). Notice that here we have used that \(|u|=|u^-|.\)
The strategy of our proof is to show that, under the hypotheses of Theorem 3.1, u is identically zero. In order to do that, as customary, we split the proof into two cases: \(|{{\,\mathrm{Im}\,}}\lambda |\le {{\,\mathrm{Re}\,}}\lambda \) and \(|{{\,\mathrm{Im}\,}}\lambda |>{{\,\mathrm{Re}\,}}\lambda .\) \(\quad \square \)
5.1.1 \(\bullet \) Case \(|{{\,\mathrm{Im}\,}}\lambda |\le {{\,\mathrm{Re}\,}}\lambda .\)
Since \(u_j\), for \(j=1,2,\dots , n\), is a solution to (5.2), we can use directly Lemma 4.2 to get the estimate
Summing over \(j=1,2, \dots , n\) and using the Cauchy–Schwarz inequality for discrete measures, we easily obtain
Using assumptions (3.4)–(3.7) together with (5.3), one has
Now we need to estimate the squared bracket of the latter inequality, namely
Notice that, since I appears as a “coefficient” of the positive spectral quantity \(({{\,\mathrm{Re}\,}}\lambda )^{-1/2}|{{\,\mathrm{Im}\,}}\lambda |,\) we would like to extract a positive contribution from it, so as to eventually discard this term in the previous estimate. Notice that only the second term in I could spoil such positivity, and therefore our aim is to control its size by means of the positivity of the other terms in I.
To do so, we proceed by distinguishing the cases \(d=1,\) \(d=2\) and \(d\ge 3.\)
Let us start with the easiest case, \(d=1.\) In this situation the second term in I vanishes and therefore \(I\ge 0.\)
We continue with the case \(d\ge 3.\) Here we employ the weighted magnetic Hardy inequality
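In the form in which such weighted inequalities are usually stated, this reads (a sketch of (5.6); the constant follows from the classical weighted Hardy inequality combined with the diamagnetic inequality):

```latex
\int_{\mathbb{R}^d} |x|\,\bigl|\nabla_{\!A}\,\psi\bigr|^2\,dx
\;\ge\; \Bigl(\frac{d-1}{2}\Bigr)^{2}
\int_{\mathbb{R}^d} \frac{|\psi|^2}{|x|}\,dx .
```

With this constant, the combination \(\int |x||\nabla _{\!A} u^-|^2\,dx - \frac{d-1}{2}\int \frac{|u|^2}{|x|}\,dx\) appearing in I is bounded below by \(\frac{(d-1)(d-3)}{4}\int \frac{|u|^2}{|x|}\,dx,\) which is non-negative precisely when \(d\ge 3.\)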
More specifically, using (5.6) we have
which again is positive because we are considering \(d\ge 3.\)
Observe that in both cases treated so far, namely \(d=1\) and \(d\ge 3,\) the positivity of the real part of \(V^{(2)},\) namely the term \(\int _{\mathbb {R}^d} |x|[{{\,\mathrm{Re}\,}}V^{(2)}]_+|u|^2\,dx,\) did not really enter the proof of the positivity of I. The situation is different when considering \(d=2.\) Indeed, although (5.6) is valid also for \(d=2,\) in this case the right-hand side of estimate (5.7) is not necessarily positive. Thus assumption (3.8) comes into play here. Indeed, thanks to (3.8), it is immediate that
Hence we have proved that in any dimension \(d\ge 1\) we have \(I\ge 0.\) This yields that
which, by virtue of (3.2), implies that \(u^-\) (and therefore u) is identically equal to zero.
Remark 5.1
Before passing to the remaining case \(|{{\,\mathrm{Im}\,}}\lambda |>{{\,\mathrm{Re}\,}}\lambda ,\) we must comment on the absence of zero modes, i.e. \(\lambda =0,\) which clearly cannot be deduced directly from the argument above (note that we consistently divided by \({{\,\mathrm{Re}\,}}\lambda \)). Actually, the proof in this situation is easier: it basically follows the same strategy adopted to prove the self-adjoint result Theorem 3.4 and is based on the use of a single identity. We provide the main steps here for the sake of completeness. From (4.36) (with \(f=0\)) we have
Observing that \(\partial _r(r {{\,\mathrm{Re}\,}}V^{(2)})={{\,\mathrm{Re}\,}}V^{(2)} + r\partial _r {{\,\mathrm{Re}\,}}V^{(2)},\) one has
Plugging the latter in the former, then using the Cauchy–Schwarz inequality and summing over \(j=1,2,\dots , n,\) we get
Now, using (3.3), the first in (3.4), (3.5), the second in (3.6) and (3.7), one easily gets
This gives a contradiction by virtue of (3.10).
5.1.2 \(\bullet \) Case \(|{{\,\mathrm{Im}\,}}\lambda |> {{\,\mathrm{Re}\,}}\lambda .\)
Let \(u_j,\) \(j=1,2,\dots , n,\) be a solution to (5.2). Choosing \(v_j:= u_j\) as a test function and adding to/subtracting from the real part of the resulting identity its imaginary part, one gets
Summing over \(j=1,2,\dots , n\) and discarding the positive term on the left-hand side involving \(({{\,\mathrm{Re}\,}}V^{(2)})_+\), one easily gets
Using the first inequalities in (3.4), (3.6) and (5.3), we have
Therefore, since by the first inequality in (3.2) we have \(b_1^2 + \beta _1^2 + 2 a_1^2<1,\) it follows that \({{\,\mathrm{Re}\,}}\lambda \pm {{\,\mathrm{Im}\,}}\lambda \ge 0\) unless \(u=0.\) But since \(|{{\,\mathrm{Im}\,}}\lambda |>{{\,\mathrm{Re}\,}}\lambda ,\) we conclude that \(u=0.\)
This concludes the proof of Theorem 3.1. \(\quad \square \)
Now we prove the alternative Theorem 3.2 valid in \(d=2.\)
Proof of Theorem 3.2
Since the proof follows analogously to the one of Theorem 3.1 presented above, except for the analysis in the sector \(|{{\,\mathrm{Im}\,}}\lambda |\le {{\,\mathrm{Re}\,}}\lambda ,\) we shall comment just on this situation.
As in the proof of Theorem 3.1, we want to estimate the term I defined in (5.5), which appears multiplied by the spectral coefficient \(({{\,\mathrm{Re}\,}}\lambda )^{-1/2}|{{\,\mathrm{Im}\,}}\lambda |\) in (5.4). A first application of the weighted inequality (5.6) gives
where the last inequality follows by discarding the positive term involving the potential \(V^{(2)}.\) Now we proceed by estimating the term \(\int _{\mathbb {R}^2} \frac{|u^-|^2}{|x|}\, dx.\) In order to do that, we will make crucial use of the following Hardy–Poincaré-type inequality
valid for all \(\psi \in W^{1,2}_0(B_R),\) where \(B_R:=\{x\in \mathbb {R}^2:|x|<R\}\) denotes the open disk of radius \(R>0\) (see [15] for an explicit proof of (5.9)).
Following the strategy of [15], given two positive numbers \(R_1<R_2,\) we introduce the function \(\eta :[0,\infty )\rightarrow [0,1]\) such that \(\eta =1\) on \([0, R_1],\)\(\eta =0\) on \([R_2, \infty )\) and \(\eta (r)=(R_2-r)/(R_2-R_1)\) for \(r\in (R_1, R_2).\) We denote by the same symbol \(\eta \) the radial function \(\eta \circ r:\mathbb {R}^2 \rightarrow [0,1].\) Now, writing \(u^-= \eta u^- +(1-\eta )u^-\) and using (5.9), we have
Choosing \(R_1=R_2/2\) and using the diamagnetic inequality (3.13) give
Now we fix \(R_2\) conveniently; namely, given any positive number \(\epsilon \), we set \(R_2:= \epsilon ({{\,\mathrm{Re}\,}}\lambda )^{1/2}/|{{\,\mathrm{Im}\,}}\lambda |\) in the previous inequality. Then, multiplying the resulting inequality by \(({{\,\mathrm{Re}\,}}\lambda )^{-1/2} |{{\,\mathrm{Im}\,}}\lambda | \frac{1}{4}\), we get
where in the first inequality we have used the restriction to the sector \(|{{\,\mathrm{Im}\,}}\lambda |\le {{\,\mathrm{Re}\,}}\lambda ,\) the second estimate follows from (4.34) with \(f=0\) and the third inequality from (3.3) and (3.6).
Using that, from (5.8) and (5.10), one has
and plugging this last bound in (5.4), we get
From hypothesis (3.12), we therefore conclude that \(u=0\) as above. \(\quad \square \)
Finally, we prove the two-dimensional result in which the magnetic potential is fixed to be the Aharonov–Bohm one.
Proof of Theorem 3.3
As in the proof of Theorem 3.2, we need to estimate the term I defined in (5.5), which appears in (5.4). Notice that in this specific case (due to the triviality of the magnetic field everywhere except at the origin, see (3.20)), the constant c related to the smallness condition assumed on B does not appear in (5.4). In order to estimate I, we will use the following weighted Hardy inequality, which is also an improvement upon (3.11); it reads
where \(\gamma :={{\,\mathrm{dist}\,}}\{{\bar{\alpha }}, {\mathbb {Z}}\}\) and \({\bar{\alpha }}\) is as in (3.19) (see [15, Lem. 3] for a proof of (5.11)).
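In the present notation, the inequality of [15, Lem. 3] is of the form (a sketch; the key point is the improvement of the constant by \(\gamma ^2\) produced by the nontrivial Aharonov–Bohm flux):

```latex
\int_{\mathbb{R}^2} |x|\,\bigl|\nabla_{\!A}\,\psi\bigr|^2\,dx
\;\ge\; \Bigl(\gamma^2 + \frac{1}{4}\Bigr)
\int_{\mathbb{R}^2} \frac{|\psi|^2}{|x|}\,dx .
```

This is consistent with the deficit \(\frac{1}{2}-\bigl(\gamma ^2+\frac{1}{4}\bigr)=\frac{1}{4}-\gamma ^2\) which is compensated in the estimates below.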
A first application of (5.11) gives
where we discarded the positive term in I involving the potential \(V^{(2)}.\) Notice that, since we are assuming \({\bar{\alpha }}\notin {\mathbb {Z}},\) one has \(\gamma \in (0, 1/2],\) which gives \(1/4 - \gamma ^2 \ge 0.\)
Now we proceed by estimating the term \(\int _{\mathbb {R}^2} \frac{|u^-|^2}{|x|}\, dx.\) Given any positive number R, we write
where, also here, \(B_R\) denotes the open disk of radius \(R>0.\)
Choosing in the previous inequality \(R:=\epsilon \gamma ^2 ({{\,\mathrm{Re}\,}}\lambda )^{1/2}/ |{{\,\mathrm{Im}\,}}\lambda |\) with any positive constant \(\epsilon ,\) and multiplying the resulting estimate by the quantity \(({{\,\mathrm{Re}\,}}\lambda )^{-1/2} |{{\,\mathrm{Im}\,}}\lambda | \left( \frac{1}{4} - \gamma ^2\right) ,\) we get
In the first inequality we have used the restriction to the sector \(|{{\,\mathrm{Im}\,}}\lambda |\le {{\,\mathrm{Re}\,}}\lambda ,\) while in the second inequality we have used first the Hardy inequality (3.18) and then the hypotheses on the potential (3.22) together with the second inequality of (3.23)–(3.24). Plugging the last estimate in (5.12) and the resulting estimate in (5.4), and using an analogous reasoning as in Remark 3.1.4, give
From hypothesis (3.21) we therefore conclude that \(u=0\) as above. \(\quad \square \)
5.2 Self-adjoint case: Proof of Theorem 3.4
Now we prove the much simpler and less involved analogue of Theorem 3.1 for self-adjoint Schrödinger operators, namely Theorem 3.4.
Proof of Theorem 3.4
Let u be any weak solution to the eigenvalue equation (5.1), with \(\varvec{V}\) real-valued.
The proof of this theorem is based exclusively on identity (4.36). More precisely, using that \(\varvec{V}\) is real-valued, and hence necessarily \({{\,\mathrm{Im}\,}}\lambda =0,\) from (4.36) (with \(f=0\)) we get
Observing that
using the Cauchy–Schwarz inequality and summing over \(j=1,2,\dots , n,\) one has
Now, using (3.3), (3.7) and (3.26), one easily gets
This immediately gives a contradiction by virtue of (3.25). This concludes the proof. \(\quad \square \)
In passing, observe that here we did not need to split the proof and prove separately the absence of positive and non-positive eigenvalues. Indeed, we obtained the absence of the whole point spectrum in just one step.
Remark 5.2
(Two-dimensional Pauli operators as a special case) One reason for investigating matrix self-adjoint Schrödinger operators in this work comes from our interest in pointing out a pathological behavior of the two-dimensional purely magnetic (and therefore self-adjoint) Pauli Hamiltonian. From the explicit expression (3.31) of the two-dimensional Pauli operators, one readily sees the relation with the scalar Schrödinger operator
In this specific situation identity (5.13), which was the crucial identity to prove absence of point spectrum in the self-adjoint situation, reads (after multiplying by 1/2)
We stress that, differently from the proof presented above, here the presence of the second term on the right-hand side involving the magnetic field does not allow us to reach a contradiction. Indeed, roughly speaking, all the positivity coming from the left-hand side, which is customarily used to reach the contradiction under the smallness assumption on the magnetic field, is exploited to control the second term on the right-hand side (due to inequality (3.32)); therefore, using (3.7), one is left with a term of the type
which leads to no contradiction, however small the constant c is chosen.
6 Absence of Eigenvalues of Pauli and Dirac Operators
This section is devoted to proving that the point spectrum of Pauli and Dirac Hamiltonians is empty.
6.1 Warm-up in the 3d case
Even though the three-dimensional setting proposed in the introduction is clearly covered by the more general results Theorem 3.5 and Theorem 3.6, we decided to dedicate a separate section to the 3d case. Due to the physical relevance of this framework, we want to make it easier to spot the conditions which guarantee the absence of the point spectrum in this case, sparing the interested reader the need to work through the statements of the theorems in the general setting.
6.1.1 Absence of eigenvalues of Pauli operators: proof of Theorem 1.2
Let u be any weak solution to the eigenvalue equation
with \(H_{\text {P}}(A, \varvec{V})\) defined as in (1.2) and where \(\lambda \) is any complex constant.
Using (1.2) and the decomposition \(\varvec{V}=\varvec{V}^{\varvec{(1)}} + \varvec{V}^{\varvec{(2)}},\) problem (6.1) can be written as an eigenvalue problem for matrix Schrödinger operators, namely
where \(H_{\text {S}}(A,\varvec{W})\) is defined in (1.4) and where \(\varvec{W}=\varvec{W}^{(1)} + \varvec{W^{(2)}}\) with
In light of the assumptions in (1.8) about \(\varvec{V}^{\varvec{(1)}}\) and B, which intrinsically are both full-subordination conditions to the magnetic Dirichlet form, it is indeed natural to treat \(\varvec{V}^{\varvec{(1)}}\) and B in a unified way defining \(\varvec{W^{(1)}}\) as in (6.2).
Assuming the hypotheses of Theorem 1.2 and using that \(|\sigma |=\sqrt{3}\), which follows from the Pauli matrices having norm one, one easily verifies the bound
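For the reader's convenience, here is a sketch of where the constant \(\sqrt{3}\) comes from: for any \(\xi \in \mathbb {C}^2\), the triangle and Cauchy–Schwarz inequalities give

```latex
\[
  |\sigma \cdot B \, \xi|
  \le \sum_{j=1}^{3} |B_j| \, |\sigma_j \xi|
  = |\xi| \sum_{j=1}^{3} |B_j|
  \le \sqrt{3} \, |B| \, |\xi| ,
\]
```

where the middle equality uses that each Pauli matrix has norm one (indeed \(\sigma_j^2 = I_{\mathbb {C}^2}\)), and the last step is the Cauchy–Schwarz inequality in \(j\), consistently with \(|\sigma |=\sqrt{3}\) as stated above.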
Hence, the hypotheses of Theorem 1.1 are satisfied (with \(\varvec{W}\) instead of \(\varvec{V}\) and with \(a+ \sqrt{3}c\) as a replacement for a in (3.28)). From this we conclude the absence of eigenvalues of \(H_{\text {S}}(A, \varvec{W})\) and, in turn, of \(H_{\text {P}}(A, \varvec{V}),\) which is the claim. \(\quad \square \)
6.1.2 Absence of eigenvalues of Dirac operators: proof of Theorem 1.3
Now we are in a position to prove Theorem 1.3. As we will see, it follows as a consequence of the corresponding result for Pauli operators, namely Theorem 1.2.
Let u be any solution to the eigenvalue equation
with \(H_{\text {D}}(A):=H_{\text {D}}(A,\varvec{0})\) the three-dimensional self-adjoint Dirac operator defined in (1.3) and where k is any real constant. A second application of the Dirac operator to the eigenvalue problem (6.3) shows that if u is a solution to (6.3), then it satisfies
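Schematically, the squaring step is just the observation that an eigenfunction of \(H_{\text {D}}(A)\) is also an eigenfunction of its square:

```latex
\[
  H_{\text{D}}(A)\, u = k u
  \quad \Longrightarrow \quad
  H_{\text{D}}(A)^2 u = H_{\text{D}}(A)\,(k u) = k \, H_{\text{D}}(A)\, u = k^2 u .
\]
```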
More explicitly, using expression (1.6) and defining \(u_{1,2}:=(u_1,u_2)\) and \(u_{3,4}:=(u_3,u_4)\), the two-vectors whose components are respectively the first two and the last two components of \(u=(u_1,u_2,u_3,u_4)\), one gets that \(u_{1,2}\) and \(u_{3,4}\) satisfy
In other words, the two-vectors \(u_{1,2}\) and \(u_{3,4}\) are solutions to the eigenvalue problems associated with the shifted Pauli operator \(H_{\text {P}}(A) + \frac{1}{4}\varvec{I}_{\mathbb {C}^{\varvec{2}}}\) with eigenvalue \(k^2.\)
Notice that since (1.13) holds for any \(u=(u_1,u_2,u_3,u_4),\) in particular it holds for the four-vectors \((u_1,u_2,0,0)\) and \((0,0,u_3,u_4).\) This implies that the second condition in (1.8) of Theorem 1.2 holds with the same constant c as in (1.13). We are therefore in the hypotheses of Theorem 1.2 (once we set a purely magnetic framework, namely \(\varvec{V}=0\)), so \(H_{\text {P}}(A)\) has no eigenvalues. As a consequence, the shifted operator \(H_{\text {P}}(A) + \frac{1}{4}\varvec{I}_{\mathbb {C}^{\varvec{2}}}\) has no eigenvalues either. Hence \(u_{1,2}\) and \(u_{3,4}\) vanish, and with them \(u=(u_{1,2}, u_{3,4})\) itself.
This concludes the proof of Theorem 1.3. \(\quad \square \)
6.2 Absence of eigenvalues of Pauli operators in any dimension
Now we are in a position to prove the general Theorem 3.5.
Proof of Theorem 3.5
We divide the proof depending on the parity of the space dimension.
6.2.1 Odd dimensions
In odd dimensions, the proof follows the same scheme as the one presented in the three-dimensional case.
Looking at expression (2.12) and using the decomposition of \(\varvec{V}=\varvec{V}^{\varvec{(1)}}+\varvec{V}^{\varvec{(2)}},\) one defines \(\varvec{W}=\varvec{W^{(1)}} + \varvec{W^{(2)}}\) such that
It is easy to see that
where we have used the validity of (3.28) and the fact that \(|a|=\sqrt{d}\) (see Remark 2.1).
Thus, the proof proceeds exactly as that of Theorem 1.2, using this time the general result for Schrödinger operators, Theorem 3.1.
6.2.2 Even dimensions
Let u be any solution to the eigenvalue problem
where \(H_{\text {P}}^{\text {even}}(A, \varvec{V})\) is defined in (2.14) and \(\lambda \) is any complex constant. In passing, notice that according to (2.15), since d is even, \(n'(d)=n(d).\)
Defining \(u_{\text {up}}:=(u_1, u_2, \dots , u_{n(d)/2})\) and \(u_{\text {down}}:=(u_{n(d)/2 +1}, u_{n(d)/2 +2}, \dots , u_{n(d)})\), the \(n(d)/2\)-vectors consisting respectively of the first and the second half of the components of \(u=(u_1, u_2, \dots , u_{n(d)}),\) one gets
where \(\varvec{W}_{\text {up}}=\varvec{W}_{\text {up}}^{\varvec{(1)}} + \varvec{W}_{\text {up}}^{\varvec{(2)}}\) with
and where \(\varvec{W}_{\text {down}}=\varvec{W}_{\text {down}}^{\varvec{(1)}} + \varvec{W}_{\text {down}}^{\varvec{(2)}}\) with
Notice that here we have also used that the components \(\varvec{V}^{\varvec{(1)}}\) and \(\varvec{V}^{\varvec{(2)}}\) of \(\varvec{V}=\varvec{V}^{\varvec{(1)}}+\varvec{V}^{\varvec{(2)}}\) are diagonal by hypothesis.
It is easy to see that
and
where we have used (3.28) for the vectors \((u_{\text {up}}, 0)\) and \((0, u_{\text {down}}),\) respectively, and the fact that \(|a|=\sqrt{d}.\)
This means that we are in the hypotheses of Theorem 3.1 (once we replace \(\varvec{V}\) with \(\varvec{W}_{\text {up}}\) and \(\varvec{W}_{\text {down}}\) and with \(a + \frac{d}{2}c\) instead of \(a_2\) in (3.3)), and therefore \(H_{\text {S}}(A, \varvec{W}_{\text {up}})\) and \(H_{\text {S}}(A, \varvec{W}_{\text {down}})\) have no eigenvalues. Hence \(u_{\text {up}}\) and \(u_{\text {down}}\) vanish, and with them \(u=(u_{\text {up}}, u_{\text {down}}).\)
This concludes the proof of Theorem 3.5. \(\quad \square \)
6.3 Absence of eigenvalues of Dirac operators in any dimension
Now we can conclude our discussion by proving the absence of eigenvalues of Dirac operators in the general case, namely proving Theorem 3.6.
Let us start by commenting on the odd-dimensional case. Due to expression (2.11) for the squared Dirac operator in odd dimensions and the analogy with (1.6) in the three-dimensional case, one can proceed as in the proof of Theorem 1.3, using the corresponding result for Pauli operators, Theorem 3.5, to get the conclusion.
Turning to the even-dimensional situation, one realises from (2.13) that the squared Dirac operator equals a shifted Pauli operator. Therefore Theorem 3.6 follows as a consequence of Theorem 3.5 for even-dimensional Pauli operators.
Acknowledgements
The first author (L.C.) gratefully acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG) through CRC 1173. The research of the second author (D.K.) was partially supported by the GACR Grant No. 18-08835S.
Funding
Open access funding provided by Università degli Studi di Roma La Sapienza within the CRUI-CARE Agreement.
Communicated by W. Schlag
Cite this article
Cossetti, L., Fanelli, L. & Krejčiřík, D. Absence of Eigenvalues of Dirac and Pauli Hamiltonians via the Method of Multipliers. Commun. Math. Phys. 379, 633–691 (2020). https://doi.org/10.1007/s00220-020-03853-7