Absence of eigenvalues of Dirac and Pauli Hamiltonians via the method of multipliers

By developing the method of multipliers, we establish sufficient conditions on the magnetic field and the complex, matrix-valued electric potential, which guarantee that the corresponding system of Schrödinger operators has no point spectrum. In particular, this allows us to prove analogous results for Pauli operators under the same electromagnetic conditions and, in turn, as a consequence of the supersymmetric structure, also for magnetic Dirac operators.


Objectives and state of the art
Understanding electromagnetic phenomena has played a fundamental role in quantum mechanics. The simplest mathematical model for the Hamiltonian of an electron, subject to an external electric field described by a scalar potential V : R^3 → R and an external magnetic field B = curl A with a vector potential A : R^3 → R^3, is given by the Schrödinger operator

H_S(A, V) := −∇_A^2 + V, (1.1)

where ∇_A := ∇ + iA is the magnetic gradient. Unfortunately, the mathematically elegant model (1.1) is not sufficient to explain finer electromagnetic effects, for it disregards an inner structure of electrons, namely their spin. A partially successful attempt to take the spin into account is to enrich the algebraic structure of the Hilbert space and consider the Pauli operator

H_P(A, V) := −∇_A^2 I_{C^2} + σ · B + V, (1.2)

where σ := (σ_1, σ_2, σ_3) are the Pauli matrices. Here the term σ · B describes the interaction of the spin with the magnetic field and V := V I_{C^2} stands for the electric interaction as above.
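The algebra behind the spin-magnetic term is the product rule σ_j σ_k = δ_{jk} I + i ε_{jkl} σ_l, which is what turns the square of σ · (−i∇ + A) into −∇_A^2 + σ · B. A minimal numerical check of this product rule (an illustration with numpy, not taken from the paper):

```python
import numpy as np

sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_3
]

def eps(j, k, l):
    """Levi-Civita symbol on the indices 0, 1, 2."""
    return (j - k) * (k - l) * (l - j) / 2

I = np.eye(2)
for j in range(3):
    for k in range(3):
        # sigma_j sigma_k = delta_jk I + i eps_jkl sigma_l
        rhs = (j == k) * I + 1j * sum(eps(j, k, l) * sigma[l] for l in range(3))
        assert np.allclose(sigma[j] @ sigma[k], rhs)
print("Pauli product rule verified")
```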
To get a more realistic description of the electron subject to an external electromagnetic field, one has to take relativistic effects into account. A highly successful model is given by the Dirac operator

H_D(A, V) := −i α · ∇_A + β + V, (1.3)

where α := (α_1, α_2, α_3) and β are Dirac matrices and V := V I_{C^4}. The principal objective of this paper is to develop the so-called method of multipliers in order to establish spectral properties of the Pauli and Dirac operators. This technique comes from partial differential equations, but it seems to be much less known in spectral theory. We are primarily interested in physically relevant sufficient conditions which guarantee the absence of point spectra (including possibly embedded eigenvalues). We proceed in greater generality by allowing V : R^3 → C to be complex-valued in (1.1) and V : R^3 → C^{2×2} to be a general matrix-valued potential, possibly non-Hermitian, in (1.2). However, some of our results are new even in the self-adjoint setting. Since the spin-magnetic term σ · B can be included in V, we simultaneously consider matrix electromagnetic Schrödinger operators

H_S(A, V) := −∇_A^2 I_{C^2} + V. (1.4)

Since the operator acts on spinors, we occasionally call the corresponding spectral problem the spinor Schrödinger equation.
As a last but not least generalisation, in the main body of the paper we shall consider the Pauli and Dirac operators in the Euclidean space R^d of arbitrary dimension d ≥ 1.
The study of spectral properties of scalar Schrödinger operators (1.1) constitutes a traditional domain of mathematical physics and the literature on the subject is enormous. Much less is known in the mathematically challenging and still physically relevant situations where V is allowed to be complex-valued, see [16,15] and references therein. Works concerning non-self-adjoint Pauli operators are much more sparse in the literature, see [26] and references therein. More results are available in the case of non-self-adjoint Dirac operators, see [8,11,6,25,7,12,9,14].
The paper [16] represents a first application of the method of multipliers to spectral theory: the authors established sufficient conditions which guarantee the total absence of eigenvalues of (1.1). It is remarkable that the conditions are physically relevant in the sense that they involve the magnetic field B rather than the vector potential A. The two-dimensional situation was covered later in [15]. The robustness of the method of multipliers has been demonstrated by its successful application to the half-space instead of the whole Euclidean space in [5], and to Lamé instead of Schrödinger operators in [4]. In the present paper, we push the analysis forward by showing that this unconventional method also yields meaningful and interesting results in the less explored setting of spinorial Hamiltonians.

The strategy
The main ingredient in our proofs is the method of multipliers as developed in [16] for scalar Schrödinger operators (1.1). In the present paper, however, we carefully revisit the technique and provide all the painful details which were missing in the previous works. We identify various technical hypotheses about the electromagnetic potentials needed to justify the otherwise formal manipulations. We believe that this part of the paper will be of independent interest for communities interested in spectral theory as well as in partial differential equations.
The next, completely new contribution is the adaptation of the method to the matrix electromagnetic Schrödinger operators (1.4). The Pauli Hamiltonians (1.2) are then covered as a particular case.
The method of multipliers does not seem to apply directly to Dirac operators, because of the lack of positivity of certain commutators. Our strategy is to employ the supersymmetric structure of Dirac operators (cf. [27, Ch. 5]). More specifically, using the standard representation of the Dirac matrices, the square of the magnetic Dirac operator is block-diagonal with Pauli-type operators on the diagonal, so the absence of eigenvalues of the Dirac operator follows from the analogous result for the Pauli operators, which, in turn, follows as a consequence of the corresponding result for the general Schrödinger operators H_S(A, V) with matrix-valued potentials V. Notice that, in this way, we are not able to treat magnetic Dirac operators with electric perturbations.

The results in three dimensions
As usual, the sums on the right-hand sides of (1.1), (1.2) and (1.4) should be interpreted in a form sense (cf. [18, Ch. VI]). More specifically, the operators are introduced as the Friedrichs extensions of the operators initially defined on smooth functions of compact support. The regularity hypotheses and the functional inequalities stated in the theorems below ensure that the operators are well defined as m-sectorial operators. The Dirac operator (1.3) with V = 0 is a closed symmetric operator under the stated assumptions. Henceforth, we use the notation r(x) := |x| for the distance function from the origin of R^d and ∂_r f(x) := (x/|x|) · ∇f(x) for the radial derivative of a function f : R^d → C. We also set f_±(x) := max{±f(x), 0} if f is real-valued. For matrix Schrödinger operators (1.4), we prove the following result.

Organisation of the paper
Even though so far we have considered only the three-dimensional framework, in this work we shall actually provide variants of the results presented above in any dimension. (We anticipate already now that the two-dimensional framework will be excluded in the settings of Pauli and Dirac operators because of the well-known Aharonov-Casher effect.) In order to state our results in any dimension, however, auxiliary material is needed to introduce the general framework for the Pauli and Dirac Hamiltonians. We therefore postpone the presentation of the general results to Section 3, while Section 2 is devoted to the definition of Dirac and Pauli operators in any dimension (this section can be skipped by an experienced reader). The method of multipliers for scalar Schrödinger operators is revisited with all the necessary details in Section 4. The development of the method for Schrödinger operators with matrix-valued potentials is performed in Section 5. The application of this general result to Pauli and Dirac operators is given in Section 6.

Notations
Here we summarise specific notations and conventions that we use in this paper.
• We adopt the convention to write matrices in boldface.
• For any dimension d ≥ 2, the physically relevant quantity associated to a given magnetic vector potential A : R^d → R^d is the matrix-valued field

B := ∇A − (∇A)^t.

Here, as usual, (∇A)_{jk} = ∂_j A_k and (∇A)^t_{jk} = (∇A)_{kj} with j, k = 1, 2, ..., d. In d = 2 and d = 3 the magnetic tensor B can be identified with the scalar field B_12 = ∂_1 A_2 − ∂_2 A_1 or the vector field B = curl A, respectively. More specifically, one has

B v = B_12 v^⊥ if d = 2, B v = v × B if d = 3,

where for any w = (w_1, w_2) ∈ R^2, w^⊥ := (w_2, −w_1) and the symbol × denotes the cross product in R^3.
Notice that we did not comment on the case d = 1. In one dimension, in fact, the addition of a magnetic potential is trivial, in the sense that it is always possible to remove it by a suitable gauge transformation. We refer to [3] for a complete survey on the concept of magnetic field in any dimensions and its definition in terms of differential forms and tensor fields.
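For a linear potential A(x) = Mx the entries of ∇A are constant, so the three-dimensional identification above can be checked directly. A small numpy sanity check (a hypothetical example, not from the paper), using the convention (∇A)_{jk} = ∂_j A_k:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((3, 3))      # G[j, k] = d_j A_k for a linear A(x) = M x
B = G - G.T                          # magnetic tensor B = (grad A) - (grad A)^t

# the vector field curl A, read off from the entries of grad A
curlA = np.array([G[1, 2] - G[2, 1],
                  G[2, 0] - G[0, 2],
                  G[0, 1] - G[1, 0]])

v = rng.standard_normal(3)
# identification of the tensor with the vector field: B v = v x curl A
assert np.allclose(B @ v, np.cross(v, curlA))
print("B v = v x curl A verified for a random linear A")
```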
• We adopt the standard notation | · | for the Euclidean norm on C^d. We use the same symbol | · | for the operator norm: if M is a d × d matrix, we set |M| := sup_{v ≠ 0} |M v| / |v|.
• Given two vectors v, w ∈ R^d, the centered dot operation v · w designates the scalar product of the two vectors in R^d.
• Given two vectors v, w ∈ R^d and a d × d matrix M, the double-centered dot operation v · M · w stands for the vector-matrix-vector product which returns the scalar

v · M · w := Σ_{j,k=1}^d v_j M_{jk} w_k.

• We use the following definition for the L^2-norm of a vector-valued function u = (u_1, u_2, ..., u_n) on R^d:

‖u‖_{L^2(R^d)}^2 := Σ_{j=1}^n ‖u_j‖_{L^2(R^d)}^2.

Definition of Dirac and Pauli Hamiltonians in any dimension
As already mentioned, our results will be stated in all dimensions d ≥ 1. In particular, this requires a more careful analysis of the Dirac and Pauli operators, as their explicit form changes according to the underlying dimension. Since here we are just interested in identifying the correct action of the operators, we disregard issues with the operator domains for the moment.

The Dirac operator
Generalising the expression (1.3) to arbitrary dimensions requires ensuring the existence of d + 1 Hermitian matrices α := (α_1, α_2, ..., α_d) and β satisfying the anticommutation relations

α_µ α_ν + α_ν α_µ = 2δ_{µν} I, α_µ β + β α_µ = 0, β^2 = I, (2.1)

for µ, ν ∈ {1, 2, ..., d}, where δ_{µν} represents the Kronecker symbol. The possibility of finding such matrices clearly depends on the dimension n(d) of the matrices themselves. In this regard one can verify that the following distinction is needed:

n(d) = 2^{(d+1)/2} if d is odd, n(d) = 2^{d/2} if d is even. (2.2)

Even though all that really matters is the anticommutation relations that the Dirac matrices satisfy, for the purpose of visualising the supersymmetric structure of the Dirac operator we shall rely on a particular representation of these matrices, namely the so-called standard representation. According to the standard representation, one defines the d + 1 matrices α = (α_1, α_2, ..., α_d) and β iteratively (with respect to the dimension), distinguishing between odd and even dimensions. For the sake of clarity, in the following the Dirac matrices are written with a superscript (d) to stress that they are constructed at the step corresponding to working in d dimensions, e.g., α^(d) = (α_1^(d), ..., α_d^(d)) and β^(d) are the d + 1 Dirac matrices constructed in d dimensions. Moreover, for notational convenience, we denote the matrix β^(d) as the (d + 1)-th α-matrix, namely

α_{d+1}^(d) := β^(d).

Odd dimensions
If d is odd, assume the n(d−1) × n(d−1) matrices α^(d−1) = (α_1^(d−1), ..., α_d^(d−1)) corresponding to the previous step in the iteration to be known (recall the convention α_d^(d−1) = β^(d−1)). We then define n(d) × n(d) matrices (where, according to (2.2), n(d) = 2n(d−1)) in the following way:

α_µ^(d) := [ 0, α_µ^(d−1) ; α_µ^(d−1), 0 ] for µ = 1, 2, ..., d, and β^(d) := [ I, 0 ; 0, −I ].

Even dimensions
If d is even, we define n(d) × n(d) matrices (where, according to (2.2), n(d) = n(d − 1) = 2n(d − 2)) as follows: the matrices α_µ^(d) for µ = 1, ..., d − 1 and β^(d) are kept from the previous step, while α_d^(d) is a new matrix completing the anticommutation relations (2.1). Notice that we are also using the convention that n(0) = 1 and that the 1 × 1 matrix α_1^(0) = (1). This allows us to use the previous rule to construct the Dirac matrices corresponding to the standard representation also in d = 1 and d = 2.
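An explicit family of matrices with these properties can be built from tensor products of Pauli matrices (a Brauer-Weyl/Jordan-Wigner-type construction; it satisfies the relations (2.1) and the sizes n(d) = 2^{⌈d/2⌉} of (2.2), though it need not coincide matrix-by-matrix with the iteration above). A numerical verification in dimensions 1 through 5:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(mats):
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

def dirac_matrices(d):
    """Return (alphas, beta): d+1 Hermitian n(d) x n(d) matrices
    satisfying the anticommutation relations (2.1)."""
    k = (d + 1) // 2                 # number of Pauli factors, n(d) = 2**k
    gammas = []
    for j in range(k):
        gammas.append(kron_chain([s3] * j + [s1] + [I2] * (k - j - 1)))
        gammas.append(kron_chain([s3] * j + [s2] + [I2] * (k - j - 1)))
    gammas.append(kron_chain([s3] * k))
    return gammas[:d], gammas[-1]    # alphas, beta

for d in range(1, 6):
    alphas, beta = dirac_matrices(d)
    mats = alphas + [beta]           # beta plays the role of alpha_{d+1}
    n = 2 ** ((d + 1) // 2)          # n(d) as in (2.2)
    assert mats[0].shape == (n, n)
    for mu in range(d + 1):
        assert np.allclose(mats[mu], mats[mu].conj().T)      # Hermitian
        for nu in range(d + 1):
            anti = mats[mu] @ mats[nu] + mats[nu] @ mats[mu]
            target = 2 * np.eye(n) if mu == nu else np.zeros((n, n))
            assert np.allclose(anti, target)                 # relations (2.1)
print("anticommutation relations verified for d = 1..5")
```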
According to the construction above, one recognises that the Dirac matrices, regardless of the dimension, all have the following structure:

α_µ = [ 0, a_µ* ; a_µ, 0 ], β = [ I, 0 ; 0, −I ], (2.3)

with submatrices a_µ satisfying

a_µ* a_ν + a_ν* a_µ = 2δ_{µν} I = a_µ a_ν* + a_ν a_µ* (2.4)

for µ, ν ∈ {1, 2, ..., d}. Here, as usual, a_µ* denotes the adjoint of a_µ, that is, the conjugate transpose of a_µ. We set a := (a_1, ..., a_d).
In the standard representation, that is, using expression (2.3) for the Dirac matrices, the purely magnetic Dirac operator can be defined through the following block-matrix differential expression:

H_D(A) := [ I, D* ; D, −I ] with D := −i a · ∇_A. (2.5)

Notice that in odd dimensions, the submatrices a_µ being Hermitian, one has D = D*.

The square of the Dirac operator
From representation (2.5), it can easily be seen that H_D(A) can be decomposed as the sum of a 2 × 2 diagonal block operator and a 2 × 2 off-diagonal block operator. More specifically, one has

H_D(A) = H_diag + H_off-diag, H_diag := [ I, 0 ; 0, −I ], H_off-diag := [ 0, D* ; D, 0 ].

As one may readily check, H_diag and H_off-diag satisfy the anticommutation relation

H_diag H_off-diag + H_off-diag H_diag = 0. (2.6)

This distinguishing feature places the Dirac operator within the class of operators with supersymmetry. It is a consequence of the supersymmetric condition (2.6) that squaring the Dirac operator gives

H_D(A)^2 = H_diag^2 + H_off-diag^2, (2.7)

where H_diag^2 = I and H_off-diag^2 = [ D*D, 0 ; 0, DD* ]. Therefore, H_D(A)^2 turns out to have the following favourable block-diagonal form:

H_D(A)^2 = [ D*D + I, 0 ; 0, DD* + I ].

From property (2.4) of the Dirac submatrices, one can show that D*D and DD* act as magnetic Schrödinger operators perturbed by the magnetic tensor B (cf. (2.8)).
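The mechanism by which the anticommutation (2.6) kills the cross terms in the square is purely algebraic, so it can be illustrated on finite matrices: replace the differential expression D by an arbitrary complex matrix (a toy stand-in, not the operator of the paper) and square the block structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

Z = np.zeros((n, n))
I = np.eye(n)
H_off = np.block([[Z, D.conj().T], [D, Z]])   # off-diagonal part, carries D
H_diag = np.block([[I, Z], [Z, -I]])          # beta-like diagonal part

# supersymmetric anticommutation, as in (2.6)
assert np.allclose(H_diag @ H_off + H_off @ H_diag, np.zeros((2 * n, 2 * n)))

# squaring kills the cross terms: H^2 = diag(D*D + I, DD* + I)
H = H_diag + H_off
H2_expected = np.block([[D.conj().T @ D + I, Z], [Z, D @ D.conj().T + I]])
assert np.allclose(H @ H, H2_expected)
print("H^2 is block-diagonal with blocks D*D + I and DD* + I")
```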

Low-dimensional illustrations
In order to become more confident with the previous construction, we decided to present explicitly the situations of dimensions d = 1 and d = 2 in the next two subsections. (Dimension d = 3 was already discussed above.)

Dimension one
In the Hilbert space L^2(R; C^2), the 1d Dirac operator reads

H_D(0) := −i α ∇ + β,

where ∇ is just a weird notation for an ordinary derivative. With the notation H_D(0) we emphasise that the magnetic potential A has been chosen identically equal to zero, since in one dimension it can always be removed by a suitable gauge choice. One can immediately verify that squaring the operator H_D(0) yields

H_D(0)^2 = (−∇^2 + 1) I_{C^2}. (2.9)

According to the rule provided above, in the standard representation one chooses α := σ_1 and β := σ_3, where σ_1 and σ_3 are two of the three Pauli matrices. Thus, one conveniently writes

H_D(0) = [ 1, −i∇ ; −i∇, −1 ].

Hence, in one dimension, the Pauli operator coincides with the free one-dimensional Schrödinger operator −∇^2 acting in L^2(R).
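The identity H_D(0)^2 = (−∇^2 + 1) I can be checked on a Hermitian discretisation of −i d/dx (a toy finite-difference illustration, not part of the paper): since σ_1 σ_3 + σ_3 σ_1 = 0, the cross terms cancel exactly, whatever matrix stands in for the derivative.

```python
import numpy as np

# Hermitian central-difference discretisation of -i d/dx on a periodic grid
N, h = 64, 0.1
P = np.zeros((N, N), dtype=complex)
for j in range(N):
    P[j, (j + 1) % N] = -1j / (2 * h)
    P[j, (j - 1) % N] = 1j / (2 * h)

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

H = np.kron(s1, P) + np.kron(s3, np.eye(N))   # H_D(0) = sigma_1 (-i d/dx) + sigma_3
H2 = H @ H

# the cross terms vanish because sigma_1 sigma_3 + sigma_3 sigma_1 = 0
assert np.allclose(H2, np.kron(np.eye(2), P @ P + np.eye(N)))
print("H_D(0)^2 = (-d^2/dx^2 + 1) I on the discretised line")
```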

The Pauli operator
After these illustrations, let us come back to the general dimension d ≥ 1. Recall that the Dirac operator H_D(A) has been introduced via (2.5) and that its square satisfies (2.7). The following lemma specifies the form of the square according to the parity of the dimension and offers a natural definition for the Pauli operator in any dimension.

Lemma 2.1.
• If d is odd, then

H_D(A)^2 = [ H_P^odd(A) + I, 0 ; 0, H_P^odd(A) + I ], (2.11)

where we define

H_P^odd(A) := D*D. (2.12)

• If d is even, then

H_D(A)^2 = H_P^even(A) + I, (2.13)

where we define

H_P^even(A) := [ D*D, 0 ; 0, DD* ]. (2.14)

Proof. In odd dimensions one has D* = D, therefore DD* = D*D. Thus, defining H_P^odd(A) := D*D and using (2.7), one immediately gets the desired representation in odd dimensions. In even dimensions one defines H_P^even(A) as in (2.14). Hence, from (2.7) and (2.8) one readily has the thesis.
Notice that in even dimensions the Pauli operator is a matrix operator of the same dimension as the Dirac Hamiltonian, while in odd dimensions the dimension of the Pauli operator is half that of the Dirac operator. Recalling (2.2), we therefore set

n'(d) := n(d)/2 if d is odd, n'(d) := n(d) if d is even. (2.15)

Domains of the operators
Finally, we specify the domains of the Dirac and Pauli operators. Notice that the rather formal manipulations of the preceding subsections can be justified when the action of the operators is considered on smooth functions of compact support. Therefore, we define each of the operators as an extension of the operator initially defined on such a restricted domain. We always assume that the vector potential A ∈ L^2_loc(R^d; R^d) is such that B ∈ L^1_loc(R^d; R^{d×d}). We define the Pauli operator H_P(A), acting in the Hilbert space L^2(R^d; C^{n'(d)}), as the self-adjoint Friedrichs extension of the operator initially considered on the domain C_0^∞(R^d; C^{n'(d)}); notice that this initial operator is symmetric. Disregarding the spin-magnetic term for a moment, the form domain can be identified with the magnetic Sobolev space (cf. [22, Sec. 7.20])

H^1_A(R^d) := { ψ ∈ L^2(R^d) : ∇_A ψ ∈ L^2(R^d) }. (2.16)

The operator domain is then a subset of H^1_A(R^d). To include the spin-magnetic term, we make the hypothesis that there exist numbers a < 1 and b ∈ R such that, for every ψ ∈ C_0^∞(R^d),

∫_{R^d} |B| |ψ|^2 ≤ a ∫_{R^d} |∇_A ψ|^2 + b ∫_{R^d} |ψ|^2.

Then the spin-magnetic term is a relatively form-bounded perturbation of the already defined operator with relative bound less than one (recall Remark 2.1), so the Pauli operator H_P(A) with the same form domain (2.16) is indeed self-adjoint. For the domain of the Dirac operator (2.5) we take the maximal one; on C_0^∞(R^d; C^{n(d)}), which is dense in D(H_D(A)), the operator acts as the differential expression (2.5) (with a slight abuse of notation).

Statement of the main results in any dimension
Now we are in position to state our main results in any dimension. As anticipated, in order to do that, we shall consider separately the three spinorial Hamiltonians.

The spinor Schrödinger equation
Let us start by considering the matrix Schrödinger operator

H_S(A, V) := −∇_A^2 I_{C^n} + V in L^2(R^d; C^n), (3.1)

which is an extension of (1.4) to any dimension d ≥ 1 and n ≥ 1. Here V ∈ L^1_loc(R^d; C^{n×n}) and A ∈ L^2_loc(R^d; R^d). The operator is properly introduced as the Friedrichs extension of the operator initially defined on C_0^∞(R^d; C^n). The hypotheses in the theorems below ensure that H_S(A, V) is well defined as an m-sectorial operator. Decompose V = V^(1) + V^(2) and suppose that [∂_r(r Re V^(2))]_+ ∈ L^1_loc(R^d) and r V^(1), r (Re V^(2))_−, r Im V^(2) ∈ L^2_loc(R^d). Assume that there exist numbers a_1, a_2, b_1, b_2, β_1, β_2 satisfying (3.2)

A general result in any dimension
such that, for all n-vectors u with components in C_0^∞(R^d), the inequalities (3.3)-(3.7) hold true; if d = 2, assume also that the inequality (3.8) holds. Then H_S(A, V) has no eigenvalues, i.e. σ_p(H_S(A, V)) = ∅.

The theorem is commented on in the following subsections.

Criticality of low dimensions
Because of the criticality of the Laplacian in L 2 (R d ) with d = 1, 2, the lower dimensional scenarios are a bit special.
First of all, due to the absence of magnetic phenomena in R^1, the corresponding assumptions (3.3)-(3.7) in dimension d = 1 come with the classical gradient ∇ as a replacement of the magnetic gradient ∇_A. Consequently, because of the criticality of the Laplacian in L^2(R), necessarily V^(1) = 0, (Re V^(2))_− = 0, [∂_r(r Re V^(2))]_+ = 0 and Im V^(2) = 0. Moreover, (3.7) is always satisfied if d = 1, since B vanishes identically. Hence, if d = 1, the theorem essentially says that the scalar Schrödinger operator −∇^2 + V in L^2(R) has no eigenvalues, provided that V is non-negative and the radial derivative ∂_r(rV) is non-positive. The two requirements respectively exclude non-positive and positive eigenvalues. The latter is a sort of classical repulsiveness requirement (cf. [24, Thm. XIII.58]).
Similarly, if d = 2 and there is no magnetic field (i.e. B = 0), the theorem essentially says that the scalar Schrödinger operator −∇ 2 + V in L 2 (R 2 ) has no eigenvalues, provided that V is non-negative and the radial derivative ∂ r (rV ) is non-positive (again, the conditions exclude non-positive and positive eigenvalues, respectively). On the other hand, in two dimensions, the situation becomes interesting if the magnetic field is present. Indeed, the magnetic Laplacian in L 2 (R 2 ) is subcritical due to the existence of magnetic Hardy inequalities (see [20] for the pioneering work and [3] for the most recent developments). The latter guarantee a source of sufficient conditions to make the hypotheses (3.3)-(3.7) non-trivial (cf. [15]).

An alternative statement in dimension two
We want to comment more on the additional condition (3.8) in dimension d = 2. Using the 2d weighted Hardy inequality, it is easy to check that requiring "enough" positivity of Re V^(2) guarantees the validity of (3.8). More specifically, the pointwise bound, valid for almost every x ∈ R^2, is sufficient for (3.8) to hold. On the other hand, without the positivity of Re V^(2), condition (3.8) is quite restrictive. Indeed, if one assumes V^(2) = 0, then ensuring the validity of (3.8) would require ensuring the existence of vector potentials A for which an improvement of the weighted Hardy inequality (3.10) holds true (for (3.8) with V^(2) = 0 is nothing but (3.10) with a better constant). For this reason, following an idea introduced in [15, Sec. 3.2], we provide an alternative result which avoids condition (3.8), at the cost of a stronger hypothesis than (3.2).
such that, for all n-vector u with components in

A simplification in higher dimensions
In dimensions d ≥ 3, as a consequence of the diamagnetic inequality (see [19] and [22, Thm. 7.21]) together with the classical Hardy inequality applied to |ψ|, one can prove the following magnetic Hardy inequality: for every ψ ∈ C_0^∞(R^d),

((d − 2)/2)^2 ∫_{R^d} |ψ(x)|^2/|x|^2 dx ≤ ∫_{R^d} |∇_A ψ|^2 dx. (3.14)

Using (3.14), it is easy to check that the first inequalities in (3.2) are automatically satisfied once a_2, b_2, β_2 < (d − 2)/2. Hence, in the higher dimensions d ≥ 3, the conditions in (3.2) simplify to (3.15). In particular, this justifies the fact that in Theorem 1.1, which is a special case of Theorem 3.1 for d = 3 (and n = 2), we assume only the validity of (1.8), (1.9) and (1.10); moreover, (3.2) is replaced by (1.7) (notice that dropping the subscript ·_2 in the constants and fixing d = 3 in (3.15) gives (1.7)).
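The two-step derivation indicated above can be written out explicitly; assuming only d ≥ 3 and ψ ∈ C_0^∞(R^d), the diamagnetic inequality followed by the classical Hardy inequality applied to |ψ| gives:

```latex
\int_{\mathbb{R}^d} |\nabla_A \psi|^2 \, dx
\;\ge\; \int_{\mathbb{R}^d} \bigl|\nabla |\psi|\bigr|^2 \, dx
\;\ge\; \left(\frac{d-2}{2}\right)^2 \int_{\mathbb{R}^d} \frac{|\psi(x)|^2}{|x|^2} \, dx .
```

The first inequality holds pointwise-integrated for any A ∈ L^2_loc, which is why the resulting Hardy weight is independent of the magnetic potential.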

The Aharonov-Bohm field
Let us come back to dimension two and consider the Aharonov-Bohm magnetic potential

A(x, y) := (α(θ)/r) (−sin θ, cos θ), (3.16)

where (x, y) = (r cos θ, r sin θ) is the parametrisation via polar coordinates, r ∈ (0, ∞), θ ∈ [0, 2π), and α : [0, 2π) → R is an arbitrary bounded function. In this specific case, there is an explicit magnetic Hardy-type inequality (see [20, Thm. 3]), with constant dist(ᾱ, Z)^2 in front of the weight |x|^{−2}, where ᾱ has the physical meaning of the total magnetic flux:

ᾱ := (1/2π) ∫_0^{2π} α(θ) dθ. (3.18)

Notice that in this case the magnetic field B equals zero everywhere except at x = 0; indeed

B = 2π ᾱ δ (3.19)

in the sense of distributions, where δ is the Dirac delta function. The Aharonov-Bohm potential (3.16) is not in L^2_loc(R^2), so the matrix Schrödinger operator is not well defined as described below (3.1) and Theorem 3.1 does not apply to it as such. Now the Schrödinger operator H_S(A, V) is introduced as the Friedrichs extension of the operator (1.4) initially defined on C_0^∞(R^2 \ {0}; C^n). At the same time, it is possible to adapt the method of multipliers in such a way that it covers this situation as well. The following result can be considered as an extension of [15, Thm. 5] in the scalar case to the spinorial Schrödinger equation.
with γ := dist{ᾱ, Z}, such that, for all n-vectors u with components in C_0^∞(R^2 \ {0}), the corresponding assumptions hold.

On the regularity conditions (3.9) and their replacement

As we will see in more detail later on (see Section 4.2), the additional local regularity assumptions (3.9) on the potentials are needed in order to rigorously justify the algebraic manipulations that the method of multipliers introduces. A formal proof of Theorem 3.1 would require just the weaker conditions A ∈ L^2_loc(R^d) and V ∈ L^1_loc(R^d). The unpleasant conditions (3.9) can be removed if we consider the situation of potentials V and A with just one singularity at the origin (see Section 4.5). This specific case is worth investigating, as it allows one to cover a large class of repulsive potentials, e.g., V(x) = a/|x|^α I_{C^n} with a > 0 and α > 0, and also the Aharonov-Bohm vector fields (3.16), which would otherwise be ruled out by conditions (3.9).

An alternative general result in the self-adjoint setting
Obviously, Theorem 3.1 above is valid, with clear simplifications, also in the self-adjoint situation, namely considering Hermitian matrix-valued potentials V . In this case, however, we also have an alternative result that we have decided to present because the "repulsivity" condition (3.5) is replaced by a "more classical" assumption in terms of r∂ r V (2) . Furthermore, condition (3.8) is not needed in this context. More precisely we have the following result.
such that, for all n-vectors u with components in C_0^∞(R^d), the inequalities (3.3) and (3.7) hold and, moreover, the conditions (3.24) are satisfied.

Remark 3.1. Here, the first condition in (3.24) is not explicitly used in the proof of the theorem, but it is needed to give sense to the Hamiltonian H_S(A, V). We refer to Section 4.1 for details.

The Pauli equation
Recall that the definition of the Pauli operator depends on the parity of the dimension, cf. Lemma 2.1.
Theorem 3.5. Let d ≥ 3 be an integer and let n ′ (d) be as in (2.15).
If d is even, we additionally require the component V^(1) to be diagonal and the corresponding conditions to hold true. If, in addition, the analogous smallness condition holds, then H_P(A, V) has no eigenvalues, i.e. σ_p(H_P(A, V)) = ∅.
Remark 3.2 (Even parity). Observe that in the even dimensional case we assume also the component V (1) to be diagonal. This is needed in order not to spoil the diagonal form in the definition (2.14) of the free Pauli operator, which will represent a crucial point in the strategy underlying the proof (we refer to Section 6.2 for more details).
The case of low dimensions d = 1, 2 is intentionally not present in Theorem 3.5 for the following reasons.
Remark 3.3 (Dimension one). As discussed in Section 2.3.1, the one-dimensional Pauli operator coincides with the scalar potential-free Schrödinger operator −∇ 2 (i.e. the one-dimensional Laplacian), hence the absence of the point spectrum is trivial in this case. Formally, it is already guaranteed by Theorem 3.1 with d = n = 1 (see also Section 3.1.2).
Remark 3.4 (Dimension two). The two-dimensional case is rather special because of the paramagnetism of the Pauli operator. As a matter of fact, the total absence of the point spectrum is no longer guaranteed even in the purely magnetic case (i.e. V = 0). In this case the Pauli operator has the form (see Section 2.3.2)

H_P(A) = [ −∇_A^2 + B_12, 0 ; 0, −∇_A^2 − B_12 ].

For smooth vector potentials, the supersymmetry says that the operators −∇_A^2 ± B_12 have the same spectrum except perhaps at zero (see [10, Thm. 6.4]). Hence the absence of the point spectrum for the two-dimensional Pauli operator is in principle governed by our Theorem 3.1 with d = 2 and n = 1 (or Theorem 3.2), or by its self-adjoint counterpart Theorem 3.4 for the special choice V = B_12 I_{C^2}. Unfortunately, we do not see how to derive any non-trivial condition on B_12 to guarantee the total absence of eigenvalues (cf. Remark 5.1). Physically, this does not come as a big surprise because of the celebrated Aharonov-Casher effect, which states that the number of zero-eigenstates is equal to the integer part of the total magnetic flux (see [10, Sec. 6.4]). On the other hand, the absence of negative eigenvalues does follow as an immediate consequence of the standard lower bound −∇_A^2 ± B_12 ≥ 0, which holds with either of the signs ± (see, e.g., [2, Sec. 2.4]).
Notice that when an attractive potential is added to the two-dimensional Pauli operator, it has been proved [28, 17] that the perturbed Hamiltonian always possesses negative eigenvalues, no matter how small the coupling constant is chosen (not only due to the Aharonov-Casher zero modes turning into negative eigenvalues; the essential part of the spectrum also contributes to their appearance). This fact can be seen as a quantification of the aforementioned paramagnetic effect of Pauli operators, in contrast to the diamagnetic effect which holds true for magnetic Schrödinger operators.

The Dirac equation
Finally, we state our results for the purely magnetic Dirac operator (2.5).
Theorem 3.6. Under the purely magnetic hypotheses of Theorem 3.5 (i.e. with V = 0), the operator H_D(A) has no eigenvalues, i.e. σ_p(H_D(A)) = ∅.

As discussed in Section 2.3.1, the square of the one-dimensional Dirac operator is just the one-dimensional Laplacian shifted by a constant (cf. (2.9)), hence the absence of the point spectrum follows at once in this case. On the other hand, the two-dimensional analogue of Theorem 3.6 is unavailable, because of the absence of a two-dimensional variant of Theorem 3.5 in the Pauli case, cf. Remark 3.4.

Scalar electromagnetic Schrödinger operators revisited
In this section, we leave aside the operators acting on spinor Hilbert spaces and focus on scalar electromagnetic Schrödinger operators (1.1). This will be useful later on when, in the following sections, we reduce our analysis to the level of components. We provide a careful and thorough analysis of the method of multipliers, stressing the major outcomes that the technique provides in this context. Our goal is to present a reader-friendly overview of the original ideas and main outcomes of [16, 15] for tackling the issue of the total absence of eigenvalues of scalar Schrödinger operators. Furthermore, we go through the more technical parts by rigorously establishing some results that were only sketched in the previous works.

Definition of the operators
For the sake of completeness, we start with recalling some basic facts on the rigorous definition of the scalar electromagnetic Schrödinger operators.
Let d ≥ 1 be any natural number. Let A ∈ L^2_loc(R^d; R^d) and V ∈ L^1_loc(R^d; C) be respectively a vector potential and a scalar potential (the latter possibly complex-valued). The quantum Hamiltonian apt to describe the motion of a non-relativistic particle interacting with the electric field −∇V and the magnetic field B := (∇A) − (∇A)^t is represented by the scalar electromagnetic Schrödinger operator

H_{A,V} := −∇_A^2 + V in L^2(R^d). (4.1)

Observe that the magnetic field is absent in R^1 and A can then be chosen equal to zero without loss of generality. Therefore the two-dimensional framework is the lowest one in which the introduction of a magnetic field is non-trivial. As usual, the sum in (4.1) should be understood in the sense of forms, after assuming that V is relatively form-bounded with respect to the magnetic Laplacian −∇_A^2 with relative bound less than one. We shall often proceed more restrictively by assuming the form-subordination condition

∫_{R^d} |V| |u|^2 ≤ a ∫_{R^d} |∇_A u|^2, (4.2)

where a ∈ [0, 1) is a constant independent of u. Assumption (4.2) in particular implies that the quadratic form associated with V is relatively bounded with respect to the magnetic form u ↦ ‖∇_A u‖^2 with relative bound less than one. Consequently, the sum h_{A,V} of the two forms is closed and sectorial, and H_{A,V} can be introduced as the m-sectorial operator associated with it.

With the aim of including also potentials which are not necessarily subordinated in the spirit of (4.2), we now present an alternative way to give a meaning to the operator H_{A,V}, assuming different conditions on the electric potential V. We introduce the form h_{A,V} as the sum of the magnetic form and the forms of the positive part, negative part and imaginary part of V, on its natural domain; the form h_{A,V} is closed by definition. Instead of assuming the smallness condition (4.2) for the whole of V, we take advantage of the splitting of the potential into real (positive and negative) and imaginary parts and require the following more natural subordination: there exist b, β ∈ [0, 1) such that inequalities analogous to (4.2) hold for (Re V)_− and Im V with constants b and β, respectively. In other words, we require the subordination just for the parts (Re V)_− and Im V of the potential V. Hence the corresponding perturbation is relatively bounded with respect to the already defined closed form, with relative bound less than one (see (4.3)).
Consequently, as above, the sum h_{A,V} is a closed and sectorial form. Therefore, also in this more general setting, H_{A,V} is the m-sectorial operator associated with h_{A,V}.
In order to consider simultaneously both of these possible configurations, we introduce the decomposition V = V^(1) + V^(2) and assume that there exist a, b, β ∈ [0, 1) such that, for any u ∈ D_A, the component V^(1) is form-subordinated with constant a, while (Re V^(2))_− and Im V^(2) are form-subordinated with constants b and β, respectively. By the same reasoning as above, one has that H_{A,V} is the m-sectorial operator associated with the closed and sectorial form h_{A,V} := h^(1)_{A,V} + h^(2)_{A,V}. Since the dependence on the form h in the notation of the domain will not be used explicitly any more, from now on we denote D_{A,V} := D(h_{A,V}).

Further hypotheses on the potentials
As we shall see below, in order to justify rigorously the algebraic manipulations that the method of multipliers introduces, we need to assume more regularity on the magnetic potential A and on the electric potential V = V (1) + V (2) than the ones required to give a meaning to the electromagnetic Hamiltonian (4.1).

Further hypotheses on the magnetic potential
We assume (4.8). In particular, these assumptions ensure that A u ∈ L^2_loc(R^d) for any u ∈ D_A, and the same can be said for (∂_l A) u, with l = 1, 2, ..., d. Indeed, from the Hölder inequality, one has that for any k = 1, 2, ..., d and any compact set K ⊂ R^d,

‖A_k u‖_{L^2(K)} ≤ ‖A_k‖_{L^p(K)} ‖u‖_{L^q(R^d)}, 1/p + 1/q = 1/2. (4.10)

Observe that the diamagnetic inequality (3.12) and u ∈ D_A guarantee |u| ∈ H^1(R^d). By the Sobolev embeddings, ‖u‖_{L^q(R^d)} is finite for

q ∈ [2, 2d/(d−2)] if d ≥ 3, q ∈ [2, ∞) if d = 2, q ∈ [2, ∞] if d = 1. (4.11)

Consequently, if one chooses q as in (4.11), then ‖u‖_{L^q(R^d)} is finite. If, moreover, the Hölder-conjugated exponent p is as in our assumption (4.8), then ‖A_k‖_{L^p(K)} is finite and therefore, from (4.10), A_k u ∈ L^2_loc(R^d). Notice that, given any function u ∈ D_A, as soon as A u ∈ L^2(R^d), then ∇u ∈ L^2(R^d) and therefore

u ∈ H^1(R^d). (4.12)

Further hypotheses on the electric potential
Recalling the decomposition V = V^(1) + V^(2), we assume the condition (4.13) on the real part of the second component. By the same reasoning as done above for the magnetic potential, one can observe that assumption (4.13) ensures that Re V^(2) |u|^2 ∈ L^1_loc(R^d) for any u ∈ H^1_A(R^d), and the same can be said for (∂_k Re V^(2)) |u|^2, with k = 1, 2, ..., d.

The method of multipliers: main ingredients
The purpose of this subsection is to provide, in a unified and rigorous way, the proof of the common crucial starting point of the series of works [16,15,4,5] on the absence of point spectrum of the electromagnetic Hamiltonians H A,V in various settings.
Since this section is intended as a review of already known results on scalar Schrödinger Hamiltonians, here we will be concerned almost exclusively with the most interesting and most troublesome case of the spectral parameter λ ∈ C lying in the sector of the complex plane given by {λ ∈ C : Re λ ≥ |Im λ|}. (4.14) How to deal with the complementary sector, i.e., {λ ∈ C : Re λ < |Im λ|}, can be seen explicitly in the proofs of our original results (see Sections 5 and 6). The proof of the absence of eigenvalues within the sector defined in (4.14) is based on the following crucial result obtained by means of the method of multipliers. It basically provides an integral identity for weak solutions u to the resolvent equation (4.15), which holds for any v ∈ D A,V , where f is any suitable function for which the last integral in (4.15) is finite. The crucial result reads as follows.
Remark 4.1 (Dimension one). Since the addition of a magnetic potential is trivial in R 1 , the corresponding identity (4.16) with d = 1 comes with the classical gradient ∇ replacing the magnetic gradient ∇ A ; moreover, the term involving B is absent.
The proof of Lemma 4.1 can be found in Subsection 4.3.1; here we just outline its main steps:

• Step one: Approximation of u by a sequence of compactly supported functions u R (see definition (4.28) below) which satisfy a related problem with small (in a suitable topology) corrections. This first step is necessary in order to justify rigorously the algebraic manipulations that the method of multipliers introduces when the test function v is chosen to be possibly unbounded (so that it is not even a priori clear whether this specific choice of v belongs to L 2 (R d )).

• Step two: Development of the method of multipliers for u R (the main core of the proof) in order to produce the analogue of identity (4.16) for the approximating sequence. This step requires a further approximation procedure ensuring that the chosen multiplier v (see (4.51) below) lies in D A,V and is therefore allowed as a test function.

• Step three: Proof of (4.16) by taking the limit as R → ∞ in the previous identity and using the smallness of the corrections, which is quantified in Lemma 4.3 below.
As a byproduct of the crucial identity of Lemma 4.1, we get the following inequality. For the sake of completeness, we provide it with a proof.
holds true.
Proof of Lemma 4.2. Let us consider identity (4.16) with V (1) = 0. In passing, notice that requiring V (1) = 0 does not entail any loss of generality. Indeed, since, according to our notation, V (1) represents the component of the electric potential V which is fully subordinated to the magnetic Dirichlet form (in the sense given by (4.6)), it can be treated on the same footing as the forcing term f. After splitting Re V (2) into its positive and negative parts, namely Re V (2) = (Re V (2) ) + − (Re V (2) ) − , identity (4.16) with V (1) = 0 reads as follows. We consider first the term I. By the Cauchy–Schwarz inequality, it immediately follows that (4.20) holds. Now we consider the terms in (4.19) involving V (2) . Using that |u| = |u − |, the term II 1 can easily be estimated, and by the Cauchy–Schwarz inequality one bounds II 2 . Finally, if Im λ ≠ 0, we also need to estimate II 3 . First notice that choosing v = (Im λ/|Im λ|) u in (4.15) (with V (1) = 0) and taking the imaginary part of the resulting identity gives the L 2 -bound (4.23). Using the Cauchy–Schwarz inequality, the L 2 -bound (4.23), the fact that we are working in the sector |Im λ| ≤ Re λ, and again that |u| = |u − |, we obtain (4.24). Now we estimate the terms in (4.19) involving f. Arguing as in the estimates of II 1 , II 2 and II 3 , one gets the analogous bounds.

• Step one. Given a positive number R, we set µ R (x) := µ(|x|R −1 ). Then µ R : R d → [0, 1] satisfies (4.27), where B R (0) stands for the open ball centred at the origin with radius R > 0 and c > 1 is a suitable constant independent of R. For any function h : R d → C we then define the compactly supported approximating family of functions by setting (4.28). It is not difficult to show that the compactly supported function u R belongs to D A,V and solves, in a weak sense, the related problem (4.29), where err(R) := −2∇ A u · ∇µ R − u∆µ R .
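For the reader's convenience, the cut-off just introduced and the scaling bounds it carries can be summarised as follows (the derivative bounds are our reading of (4.27), the standard scaling estimates for such a cut-off, with c > 1 as above):

```latex
% Radial cut-off at scale R and the error term it generates.
\mu_R(x) := \mu\big(|x| R^{-1}\big), \qquad
\mu_R \equiv 1 \ \text{on } B_R(0), \qquad
\operatorname{supp}\mu_R \subset B_{cR}(0),
\qquad
|\nabla\mu_R| \le \frac{c}{R}, \qquad |\Delta\mu_R| \le \frac{c}{R^2},

u_R := \mu_R\,u, \qquad
\operatorname{err}(R) = -\,2\,\nabla_{\!A} u \cdot \nabla\mu_R \;-\; u\,\Delta\mu_R .
```

These bounds are what makes err(R) small in Lemma 4.3: both terms are supported in B cR (0) \ B R (0) and carry inverse powers of R.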
The next easy result shows that the extra terms (4.30), which originate from the introduction of the horizontal cut-off µ R , become negligible as R increases.
Proof. By (4.27) we have the first bound. Since u ∈ L 2 (R d ) and ∇ A u ∈ L 2 (R d ) d , the right-hand side tends to zero as R goes to infinity. Similarly for the second bound, whose right-hand side again goes to zero as R approaches infinity.

• Step two. This second step represents the main body of the section; it is here that the method of multipliers is fully developed. Informally speaking, the method of multipliers is based on producing integral identities by choosing different test functions v in (4.15) (see Lemma 4.4 below) and later combining them in a refined way to get, in our case, the analogue of (4.16). By virtue of the previous step, we shall develop the method for compactly supported solutions u ∈ D A,V of (4.15); it is in Step three below that we will obtain the result also for not necessarily compactly supported solutions.
As a starting point we state the aforementioned identities; they are collected in the following lemma. Notice that the lemma is stated for any λ ∈ C, not necessarily just for λ in the sector (4.14).
and assume also (4.8). Suppose that V ∈ L 1 loc (R d ; C) admits the decomposition V = V (1) + V (2) with Re V (2) satisfying (4.13). Let u ∈ D A,V be any compactly supported solution of (4.15), with λ any complex constant and |x|f ∈ L 2 loc (R d ), satisfying (4.31). Then |x| −1 |u| 2 ∈ L 1 loc (R d ) and the following identities hold. This gives the stated bound. Recalling definition (4.17) of u − , one observes the corresponding identities, where the last one follows from the fact that, B being antisymmetric, x · B · x = 0.
Integrating (4.39) over R d , we obtain (4.41). Adding equation (4.33) multiplied by (Re λ) −1/2 |Im λ| to (4.37), plugging in (4.41), and using again (4.39) and (4.40), we get (4.42). Then, using (4.38) in the fourth, third-to-last and last lines of the previous identity, we obtain the stated identity, where f − (x) := e^{−i(Re λ)^{1/2} sgn(Im λ)|x|} f (x).

• Step three. Now we want to come back to our approximating sequence u R . Recalling that u R is a weak solution of (4.29), identity (4.43), rewritten in terms of u R , f R and err(R), gives the corresponding approximate identity. Letting R go to infinity, the claim follows from the dominated and monotone convergence theorems and Lemma 4.3.

The method of multipliers: proof of the crucial Lemma 4.4
This part is entirely devoted to the rigorous proof of the crucial identities contained in Lemma 4.4. Let us start by proving (4.32) and (4.33). We choose in (4.15) v := ϕu, with ϕ : R d → R a radial function such that v ∈ D A,V (since the support of u is compact, any locally bounded ϕ with locally bounded first-order partial derivatives is admissible). Using the generalised Leibniz rule for the magnetic gradient (4.45), taking the real part of the obtained identity, using that, A being a real-valued vector field, one has Re(ū∇ A u) = Re(ū∇u), (4.46) and performing an integration by parts give the desired identities. Taking ϕ := 1 and ϕ(x) := |x|, we get (4.32) and (4.33), respectively. Equations (4.34) and (4.35) are obtained as in the previous case, choosing in (4.15) v := ψu, with ψ : R d → R a radial function such that v ∈ D A,V , and taking the imaginary part of the resulting identity; finally, one chooses ψ := 1 and ψ(x) := |x|, respectively. The remaining identity (4.36) is formally obtained by plugging the multiplier (4.47) into (4.15), taking the real part and integrating by parts. However, such a v need not belong to D A (and therefore not to D A,V ). Indeed, though on the one hand the unboundedness of the radial function φ does not pose any problem, because the support of u is assumed to be compact at this step, on the other hand ∇ A u does not necessarily belong to D A . Following the strategy developed in [5], we replace (4.47) by its regularised version (4.48), with the regularised derivatives defined in (4.49) and (4.50). Clearly, u being in D A,V , the first term in v belongs to D A,V , and therefore we need to comment further just on the second term of the sum, namely x k ∂ δ,N k,A u (the part involving ∂ −δ,N k,A u is analogous). One can check that x k ∂ δ,N k,A u ∈ L 2 (R d ); this is a consequence of u ∈ L 2 (R d ) being compactly supported and of the boundedness of T N (A k ). It is less trivial to prove that, for any l = 1, 2, . . . , d, one has ∂ l,A [x k ∂ δ,N k,A u] ∈ L 2 (R d ).
To begin with, it is easy to check that the commutation relation (4.52) between the magnetic gradient ∂ l,A and its regularised version ∂ δ,N k,A holds true. Here [·, ·] denotes the usual commutator; for any given subset S ⊆ R d , the function χ S is the characteristic function of S, and τ δ k is the translation operator defined in (4.50). Using (4.45), the fact that, by definition of the commutator, ∂ l,A ∂ δ,N k,A = ∂ δ,N k,A ∂ l,A + [∂ l,A , ∂ δ,N k,A ], and eventually (4.52), one obtains the stated decomposition. Here δ l,k , for k, l = 1, 2, . . . , d, denotes the Kronecker symbol. Now, u being in D A,V (thus in particular u ∈ L 2 (R d )) and compactly supported, one can conclude the same for v 2 . As for v 3 , since A k ∈ W 1,p loc (R d ) with p as in (4.8), then (∂ l A k )u ∈ L 2 (R d ) (see (4.9)). Similar reasoning allows us to conclude that also (∂ δ k A l )τ δ k u ∈ L 2 (R d ). Therefore v 3 ∈ L 2 (R d ). We are now left to show only that the term involving (Re V (2) ) + is in L 2 (R d ). Observe that u ∈ D A,V (thus in particular (Re V (2) ) + u ∈ L 2 (R d )) is compactly supported. Making the difference quotient ∂ δ k u explicit, one can also see that v 4 ∈ L 2 (R d ), by using that (Re V (2) ) + ∈ L p loc (R d ) with p as in (4.13) and the fact that |u| ∈ H 1 (R d ). Gathering these facts together, we have guaranteed that our multiplier v, as defined in (4.51), belongs to D A,V , and hence we have justified its choice as a test function in the weak formulation (4.15). Now we are in a position to prove identity (4.36). For a moment, we proceed in greater generality by considering φ in (4.48) to be an arbitrary smooth function φ : R d → R. We plug (4.48) into (4.15) and take the real part. Below, for the sake of clarity, we consider each integral of the resulting identity separately.
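Before turning to the individual integrals, it may help to record the elementary identities (4.45)–(4.46) used repeatedly above (ϕ real-valued, A a real vector field):

```latex
% Generalised Leibniz rule for the magnetic gradient, and why the
% magnetic term drops out after taking real parts.
\nabla_{\!A}(\varphi u) = \varphi\,\nabla_{\!A} u + u\,\nabla\varphi,
\qquad
\bar u\,\nabla_{\!A} u = \bar u\,\nabla u + i A\,|u|^2
\;\Longrightarrow\;
\operatorname{Re}\big(\bar u\,\nabla_{\!A} u\big)
 = \operatorname{Re}\big(\bar u\,\nabla u\big).
```

The implication holds simply because iA|u|² is purely imaginary when A is real-valued.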

• Kinetic term
Let us start with the "kinetic" part of (4.15): (4.53) (4.54). Using (4.46) and integrating by parts in K 1 gives the first identity. Now we consider K 4 . Simply using the definition of the commutator, we write K 4 as a sum of terms, and we start with K 4,1 . Using the analogue of (4.46) for the regularised magnetic gradient, namely Re(ū ∂ δ,N k,A u) = Re(ū ∂ δ k u), k = 1, 2, . . . , d, (4.55) and the identity 2 Re(ψ̄ ∂ δ k ψ) = ∂ δ k |ψ| 2 − δ|∂ δ k ψ| 2 , (4.56) valid for every ψ : R d → C, we write K 4,1 = K 4,1,1 + K 4,1,2 . Making use of the integration-by-parts formula for difference quotients (see [13, Sec. 5]), which holds true for every ϕ, ψ ∈ L 2 (R d ), one gets the corresponding expression. At the same time, making the difference quotient explicit and changing variables in K 4,1,2 gives (with summation over both k and l) the second expression. Now we choose the multiplier φ(x) := |x| 2 and observe the resulting simplifications. In summary, we obtain the expression for K. Now we want to see what happens when δ goes to zero and N goes to infinity. To do so, we need the following lemma, asserting the limits (4.59) and (4.60).

Proof. Let us start with (4.59). Using the explicit expression (4.49) for ∂ δ,N l,A u, one easily gets the corresponding bound. Now, as a consequence of the strong L 2 -convergence of the difference quotients (which can be used here because u ∈ H 1 (R d ), see (4.12)), the first integral converges to zero as δ goes to zero. As regards the second integral, we use that, by definition, T N (s) converges to s as N tends to infinity, the bound |T N (s)| ≤ |s|, and the fact that, by virtue of (4.8), A l u ∈ L 2 (R d ); these allow us to conclude, via the dominated convergence theorem, that the integral goes to zero as N goes to infinity. This concludes the proof of (4.59). Now we prove (4.60). Observe that (4.60) follows as soon as one proves that the two stated limits hold true. As hypothesis (4.8) implies that (∂ l A k )u ∈ L 2 (R d ), the first limit is an immediate consequence of the dominated convergence theorem.
As for the second limit, one has the stated bound, and the two integrals tend to zero as δ goes to zero, as a consequence of the L q -continuity of translations for 1 ≤ q < ∞ and of the strong L p -convergence of the difference quotients for 1 ≤ p < ∞, together with assumption (4.8).
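For completeness, the difference-quotient toolkit used throughout this step, namely the quotient itself, the integration-by-parts formula (4.57) and the identity (4.56), reads in coordinates (δ ≠ 0, e_k the k-th canonical unit vector):

```latex
% Difference quotient, its integration by parts, and the squared-modulus identity;
% the last line follows by expanding |psi(x+delta e_k) - psi(x)|^2.
\partial_k^{\delta}\psi(x) := \frac{\psi(x+\delta e_k)-\psi(x)}{\delta},
\qquad
\int_{\mathbb{R}^d} \varphi\,\partial_k^{\delta}\psi \,dx
 = -\int_{\mathbb{R}^d} \big(\partial_k^{-\delta}\varphi\big)\,\psi \,dx,

2\operatorname{Re}\big(\bar\psi\,\partial_k^{\delta}\psi\big)
 = \partial_k^{\delta}|\psi|^2 - \delta\,\big|\partial_k^{\delta}\psi\big|^2 .
```

The integration-by-parts formula follows from a change of variables x ↦ x − δe_k in the translated term; no boundary terms arise on R d .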
With Lemma 4.5 at hand, it follows as a mere consequence of the Cauchy–Schwarz inequality that the corresponding remainder terms vanish in the limit.

• Source term. Let us now consider simultaneously the "source" and "eigenvalue" parts of (4.15), that is, (4.61). This can be written as F = F 1 + F 2 + F 3 + F 4 with (4.62). Applying (4.55) and (4.56), we further split F 2 = F 2,1 + F 2,2 . Using the integration-by-parts formula (4.57), we get the corresponding expressions. Choosing φ(x) := |x| 2 in the previous identities and using (4.58) gives the simplified forms. Using limit (4.59) in Lemma 4.5, one gets from the Cauchy–Schwarz inequality that the remainder terms vanish.

• Electric potential term. Let us now consider the contribution of the "potential" part of (4.15), that is, (4.63). Using the decomposition V = V (1) + V (2) , it can be written as J = J 1 + J 2 . First of all, J 1 is handled directly; let us then consider the part involving V (2) , writing J 2 = J 2,1 + J 2,2 . For J 2,2 , using (4.55), (4.56) and integrating by parts, we get the corresponding identity. Choosing φ(x) := |x| 2 in the previous identities and using (4.58), we can simplify further. By virtue of hypothesis (4.31), |x||V (1) ||u| ∈ L 2 loc (R d ); then, using the Cauchy–Schwarz inequality and limit (4.59) in Lemma 4.5, one has (4.64). Similarly, using that |x||Im V (2) ||u| ∈ L 2 loc (R d ) (see (4.31)) and again (4.59), via the Cauchy–Schwarz inequality one also obtains the analogous limit. Since x k Re V (2) ∈ W 1,p loc (R d ) with p as in (4.13), using the strong L p -convergence of the difference quotients for 1 ≤ p < ∞ and the Hölder inequality, it is not difficult to see that the last limit holds, where the final identity follows from the Leibniz rule applied to ∂ k (x k Re V (2) ).
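For the multiplier φ(x) = |x|², the derivatives entering (4.58) and the Leibniz-rule step just mentioned are explicit:

```latex
% Derivatives of the quadratic multiplier, and the summed Leibniz rule
% for the term x_k Re V^{(2)}.
\nabla\varphi(x) = 2x, \qquad
\partial_k\partial_l\varphi = 2\,\delta_{kl}, \qquad
\Delta\varphi = 2d,

\sum_{k=1}^{d}\partial_k\big(x_k \operatorname{Re} V^{(2)}\big)
 = d\,\operatorname{Re} V^{(2)} + x\cdot\nabla\operatorname{Re} V^{(2)} .
```

The constancy of the Hessian of φ is what makes all the curvature-type terms in the multiplier identities collapse to dimensional constants.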
In summary, gathering all the previous limits, one gets the final expression. Passing to the limits δ → 0 and N → ∞ in (4.15) and multiplying the resulting identity by 1/2, one obtains (4.36).

Potentials with just one singularity: alternative proof of the crucial Lemma 4.4
In this section we consider the case of potentials (both electric and magnetic) whose set of singularities has zero capacity, in fact with just one singularity at the origin. This will allow us to remove the unpleasant hypotheses (4.8) and (4.13). Since a point has positive capacity in dimension one, here we exclusively consider d ≥ 2. (As a matter of fact, if d = 1, hypothesis (4.13) is rather natural, while (4.8) is automatically satisfied because of the absence of magnetic fields on the real line.) To be more specific, in the sequel we consider the following setup. Let A ∈ L 2 loc (R d \ {0}; R d ) and V ∈ L 1 loc (R d \ {0}; C), and assume (4.65). Notice that assumption (4.65) is satisfied by a large class of potentials, for instance V (x) = a/|x| α with a > 0 and α > 0, and by the Aharonov–Bohm vector field (3.16).
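Explicitly, the model examples are as follows; the Aharonov–Bohm field is written here in its standard two-dimensional form, which is what (3.16) presumably records (an assumption on our part):

```latex
% Inverse-power potential and its radial derivative; Aharonov--Bohm field.
V(x) = \frac{a}{|x|^{\alpha}}, \quad a>0,\ \alpha>0,
\qquad
x\cdot\nabla V(x) = -\,\alpha\,V(x),

A(x) = \frac{\bar\alpha}{|x|^{2}}\,(-x_2,\;x_1),
\qquad x \in \mathbb{R}^2\setminus\{0\}.
```

Both are smooth away from the origin, which is their only singularity.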
Observe that, since it is no longer necessarily true that V ∈ L 1 loc (R d ; C) and A ∈ L 2 loc (R d ; R d ), the procedure developed in Subsection 4.1 in order to rigorously introduce the Hamiltonian H A,V formally defined in (4.1) must be adapted. The modification of the procedure consists merely in taking the Friedrichs extension of the operator initially defined on functions supported away from the origin. To be more specific, we first introduce the closed quadratic form h (1) A,V and observe that the remaining part of the form is relatively bounded with respect to h (1) A,V , with relative bound less than one. Consequently, the sum h A,V of the two forms is closed and sectorial. This subsection is concerned with the proof of Lemma 4.4 in the present alternative framework. More specifically, we will provide the proof of identity (4.36) only, which is the one whose proof changes significantly. For the sake of clarity, we restate it with the alternative hypotheses assumed in this section. (Without loss of generality, we consider just the situation in which V (1) = 0; indeed, the assumption (4.13) that we remove now concerned the component V (2) only.) Let A and V be potentials satisfying (4.65). Let u ∈ D A,V be any compactly supported solution of (4.15), with λ any complex constant and |x|f ∈ L 2 loc (R d ), satisfying the analogue of (4.31). Then [x · ∇ Re V ] − |u| 2 ∈ L 1 loc (R d ) and the identity (4.36') below holds. A straightforward computation shows that, in both cases, there exists a constant c > 0 such that the control (4.68) on the first derivatives holds true. We take as the test function in (4.15) a slight modification of the multiplier (4.48) chosen above, namely (4.70), where ∂ δ k,A u := ∂ δ k u + iA k u, k = 1, 2, . . . , d, with ∂ δ k defined as in (4.50). Observe that in this framework we do not need the truncation of the magnetic potential.
Mimicking the arguments of Section 4.4, one can show that v defined as in (4.70) belongs to D A,V . In fact, one has v ∈ L 2 (R d ), ∂ l,A v := (∂ l + iA l )v ∈ L 2 (R d ) for any l = 1, . . . , d, and (Re V ) + v ∈ L 2 (R d ). We comment just on ξ ε x k ∂ δ k,A u in (4.70). Since ξ ε is supported away from the origin, A k ∈ L ∞ (supp ξ ε ), and therefore ξ ε x k ∂ δ k,A u ∈ L 2 (R d ). First observe that, using the chain rule for magnetic derivatives (4.45), one can write the corresponding decomposition. Clearly, exactly as above, v 2 ∈ L 2 (R d ). Using again that ‖A k ‖ L ∞ (supp ξ ε ) < ∞ and the definition of ∂ δ,N k,A as in (4.49), one can reason as in Section 4.4 to conclude that v 1 ∈ L 2 (R d ) as well (observe that here the assumption ∂ l A k ∈ L ∞ (R d \ {0}) comes into play, just as the assumption ∂ l A k ∈ L p loc (R d ), with p as in (4.8), did in the previous section). It remains just to prove that (Re V ) + [ξ ε x k ∂ δ k,A u] ∈ L 2 (R d ), but this follows immediately by observing that (Re V ) + is bounded on the support of ξ ε . Now we are in a position to prove identity (4.36'). Also in this section we proceed in greater generality by considering φ in (4.69) to be an arbitrary smooth function φ : R d → R; afterwards we will plug in our choice φ(x) := |x| 2 . We consider identity (4.15) with the test function v as in (4.70) and take the real part. Each of the resulting integrals is treated separately.

• Kinetic term
Let us start with the "kinetic" part of (4.15), i.e. (4.53). Using v as in (4.70), we write K = K ε 0 + K 1 + K 2 + K ε 3 + K ε 4 , with K 1 and K 2 as in (4.54) and the remaining terms defined accordingly. As regards K ε 4 , proceeding in the same way as in Section 4.4 for the term K 4 , we end up with the corresponding expression. Now we choose φ(x) := |x| 2 ; using (4.58) we get the simplified identity. Now we need the following analogue of Lemma 4.5.
Lemma 4.7. Under the hypotheses of Lemma 4.6, the limits analogous to those of Lemma 4.5 hold. Using Lemma 4.7 and letting δ go to zero, it is easy to see that (4.71) holds. Now we want to see what happens as ε approaches zero. In order to do that, we will use the following lemma.
Lemma 4.8. Let g ∈ L 1 (R d ) and let ξ ε be defined as above. Then (4.72) holds. Proof. The first limit in (4.72) follows immediately from the definition of ξ ε via the dominated convergence theorem. On the other hand, using (4.68), one has the pointwise bound which yields the second limit in (4.72), again by the dominated convergence theorem.
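A natural choice of ξ ε consistent with the properties used here (this is a guess on our part; the precise definition, with its derivative bound (4.68), is given in the text) is

```latex
% A radial cut-off vanishing near the origin, increasing to 1 as eps -> 0.
\xi_\varepsilon(x) := \xi\big(|x|/\varepsilon\big), \qquad
\xi \equiv 0 \ \text{on } [0,1], \quad \xi \equiv 1 \ \text{on } [2,\infty),
\quad 0 \le \xi \le 1,

|\nabla\xi_\varepsilon(x)| \le \frac{c}{\varepsilon}\,
  \chi_{\{\varepsilon \le |x| \le 2\varepsilon\}}
\;\le\; \frac{2c}{|x|}\,\chi_{\{\varepsilon \le |x| \le 2\varepsilon\}} .
```

With such a choice both integrands in (4.72) are dominated by a fixed multiple of |g| and vanish pointwise as ε → 0, so dominated convergence applies.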
Using Lemma 4.8 and passing to the limit in (4.71), one easily gets the stated identity. Notice that here we have used that, by hypothesis, |x| 2 |B| 2 |u| 2 ∈ L 1 loc (R d ).

• Source term
Now consider simultaneously the "source" and "eigenvalue" parts of (4.15), i.e. (4.61). Plugging our chosen test function v into (4.61), we can write F = F 1 + F ε 2 + F ε 3 + F ε 4 , with F 1 as in (4.62). As regards F ε 2 , proceeding as in Section 4.4 when we treated F 2 , we end up with the corresponding expression. Choosing φ(x) := |x| 2 in the previous identities and using (4.58) gives the simplified forms. Reasoning as above, one gets the analogous limits, and using Lemma 4.8 we conclude.

• Electric potential term. Let us now consider the contribution of the "potential" part of (4.15), i.e. (4.63). Plugging v into (4.63), we write J = J 1 + J ε 2 . Choosing φ(x) := |x| 2 in the previous identities and using (4.58), we obtain the corresponding expressions. Using that Re V is bounded on supp ξ ε and taking the limit as δ goes to zero, it follows from Lemma 4.7 that the stated identity holds, where in the last step we have simply integrated by parts. Moreover, using that, by hypothesis, |x| 2 |Im V | 2 |u| 2 ∈ L 1 loc (R d ), and finally that Re V |u| 2 and [x k ∂ k Re V ] + |u| 2 belong to L 1 (R d ), Lemma 4.8 gives the remaining limits. Observe that, in order to pass to the limit in the integral involving [x k ∂ k Re V ] − , we have used the monotone convergence theorem, since ξ ε ր 1 as ε tends to zero.
In summary, passing to the limits δ → 0 and ε → 0 in (4.15) and multiplying the resulting identity by 1/2, one obtains (4.36'). This concludes the proof of Lemma 4.6.

Absence of eigenvalues of matrix Schrödinger operators
We start our investigation of Schrödinger operators by considering first the most delicate case, represented by the non-self-adjoint result Theorem 3.1 (and its particular case Theorem 1.1) and the alternatives in d = 2 given by Theorem 3.2 and Theorem 3.3. The self-adjoint situation is treated afterwards (Subsection 5.2).

Non-self-adjoint case
for j = 1, 2, . . . , n and for any v j ∈ D A,V .
Here, since we want to use directly the estimate in Lemma 4.2, we have defined f := −V (1) u. In passing, observe that, by virtue of our hypothesis (3.3), it is not difficult to check that f, so defined, satisfies the bound (5.3) on the quantity Σ_{j=1}^{n} ‖|f j |^{1/2} |u j |^{1/2}‖ 2 , with a 1 and a 2 as in (3.3) and u − as in (4.17). Notice that here we have used that |u| = |u − |.
The strategy of our proof is to show that, under the hypotheses of Theorem 3.1, u is identically zero. In order to do that, as customary, we split the proof into two cases: |Im λ| ≤ Re λ and |Im λ| > Re λ.
Since u j , for j = 1, 2, . . . , n, is a solution of (5.2), we can use Lemma 4.2 directly to get the corresponding estimate. Summing over j = 1, 2, . . . , n and using the Cauchy–Schwarz inequality for discrete measures, we easily obtain (5.4). Using assumptions (3.4)–(3.7) together with (5.3), one then bounds the right-hand side. Now we need to estimate the squared bracket in the latter inequality, namely the quantity I defined in (5.5). Notice that, since I appears as a "coefficient" of the positive spectral quantity (Re λ) −1/2 |Im λ|, we would like to extract a positive contribution from it, so as to eventually discard this term in the previous estimate. Only the second term in I could spoil such positivity, and therefore our aim is to control its magnitude by means of the positivity of the other terms in I.
To do so, we will distinguish the cases d = 1, d = 2 and d ≥ 3. Let us start with the easiest case, d = 1. In this situation the second term in I cancels out and therefore I ≥ 0. We continue with the case d ≥ 3. Here we employ the weighted magnetic Hardy inequality (5.6). More specifically, using (5.6) we obtain (5.7), whose right-hand side is again positive because we are considering d ≥ 3.
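For orientation, a standard non-magnetic version of the weighted Hardy inequality behind (5.6) reads as follows (the precise constant in (5.6) may differ; the magnetic version follows via the diamagnetic inequality |∇|u|| ≤ |∇ A u|):

```latex
% Weighted Hardy inequality with weight |x|; the constant ((d-1)/2)^2
% is the classical one for this weight.
\int_{\mathbb{R}^d} |x|\,|\nabla u|^2\,dx
 \;\ge\; \Big(\frac{d-1}{2}\Big)^{2}
 \int_{\mathbb{R}^d} \frac{|u|^2}{|x|}\,dx,
\qquad u \in C_0^{\infty}(\mathbb{R}^d), \quad d \ge 2 .
```

Note that the constant is strictly positive already for d = 2, but, as explained next, positivity of the constant alone does not suffice to make the right-hand side of (5.7) positive in two dimensions.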
Observe that in both cases treated so far, namely d = 1 and d ≥ 3, the positivity of the real part of V (2) , namely the term ∫ R d |x| [Re V (2) ] + |u| 2 dx, did not really enter the proof of the positivity of I. The situation is different for d = 2. Indeed, although (5.6) is valid also for d = 2, in this case the right-hand side of estimate (5.7) is not necessarily positive; thus assumption (3.8) comes into play here. Indeed, thanks to (3.8), it is immediate that I ≥ 0 also in this case. Hence we have proved that in any dimension d ≥ 1 we have I ≥ 0. This yields an estimate which, by virtue of (3.2), implies that u − (and therefore u) is identically equal to zero.
Let u j , for j = 1, 2, . . . , n, be a solution of (5.2). Choosing as a test function v j := u j , taking the real part of the resulting identity, and adding/subtracting the imaginary part of the resulting identity, one gets the corresponding bounds. Summing over j = 1, 2, . . . , n and discarding the positive term on the left-hand side involving (Re V (2) ) + , one easily gets an estimate for Re λ ± Im λ. Using the first inequalities in (3.4), (3.6) and (5.3), and since by the first inequality in (3.2) we have b₁² + β₁² + 2a₁² < 1, we deduce that Re λ ± Im λ ≥ 0 unless u = 0. But since |Im λ| > Re λ, we conclude that u = 0.
This concludes the proof of Theorem 3.1.
Now we prove the alternative Theorem 3.2 valid in d = 2.
Proof of Theorem 3.2. Since the proof is analogous to that of Theorem 3.1 presented above, except for the analysis in the sector |Im λ| ≤ Re λ, we shall comment just on this situation. As in the proof of Theorem 3.1, we want to estimate the term I defined in (5.5), which appears multiplied by the spectral coefficient (Re λ) −1/2 |Im λ| in (5.4). A first application of the weighted inequality (5.6) gives a lower bound, where the last inequality follows by discarding the positive term involving the potential V (2) . Now we proceed to estimate the term ∫ R 2 |u − | 2 /|x| dx. In order to do that, we will rely heavily on the following Hardy–Poincaré-type inequality (5.9), valid for all ψ ∈ W 1,2 0 (B R ), where B R := {x ∈ R 2 : |x| < R} denotes the open disk of radius R > 0 (see [15] for an explicit proof of (5.9)).
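While we do not restate (5.9) with its exact constant, an inequality of this type can be obtained by combining the two-dimensional weighted Hardy inequality with the trivial bound |x| ≤ R on B R (a sketch, not necessarily the sharp form used in [15]):

```latex
% Hardy--Poincare-type bound on the disk B_R: the weighted Hardy
% inequality in d = 2 has constant 1/4, and |x| <= R on B_R.
\int_{B_R} \frac{|\psi|^2}{|x|}\,dx
 \;\le\; 4 \int_{B_R} |x|\,|\nabla\psi|^2\,dx
 \;\le\; 4R \int_{B_R} |\nabla\psi|^2\,dx,
\qquad \psi \in W_0^{1,2}(B_R).
```

The point is that the bound degenerates only linearly in R, which is what allows the scale R to be chosen in terms of the spectral parameter later on.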
Finally, we prove the two dimensional result in which the magnetic potential is fixed to be the Aharonov-Bohm one.
Proof of Theorem 3.3. As in the proof of Theorem 3.2, we need to estimate the term I defined in (5.5), which appears in (5.4). Notice that in this specific case (due to the triviality of the magnetic field everywhere except at the origin, see (3.19)), the constant c related to the smallness condition assumed on B does not appear in (5.4). In order to estimate I, we will use the following weighted Hardy inequality, which is also an improvement upon (3.10); it reads (5.11), where γ := dist(ᾱ, Z) and ᾱ is as in (3.18) (see [15, Lem. 3] for a proof of (5.11)). A first application of (5.11) gives a lower bound on (Re λ) −1/2 |Im λ| I, where we discarded the positive term in I involving the potential V (2) . Notice that, since we are assuming ᾱ ∉ Z, we have γ ∈ (0, 1/2], which gives 1/4 − γ 2 ≥ 0. Now we proceed to estimate the term ∫ R 2 |u − | 2 /|x| dx. Given any positive number R, we split the integral over B R and its complement, where, also here, B R denotes the open disk of radius R > 0. Choosing in the previous inequality R := ǫγ 2 (Re λ) 1/2 /|Im λ| with any positive constant ǫ, and multiplying the resulting estimate by the quantity (Re λ) −1/2 |Im λ| (1/4 − γ 2 ), we get the desired bound. In the first inequality we have used the restriction to the sector |Im λ| ≤ Re λ, while in the second we have used first the Hardy inequality (3.17) and then the hypotheses on the potential (3.21) together with the second inequality of (3.22). Plugging the last estimate into (5.12), the resulting estimate into (5.4), and reasoning as in Remark 3.1.4, we conclude from hypothesis (3.20) that u = 0 as above.
Self-adjoint case: Proof of Theorem 3.4

Now we prove the much simpler and less involved analogue of Theorem 3.1 for self-adjoint Schrödinger operators, namely Theorem 3.4.
Proof of Theorem 3.4. Let u be any weak solution of the eigenvalue equation (5.1), with V real-valued. The proof of this theorem is based exclusively on the identity (4.36). More precisely, using that V is real-valued, so that necessarily Im λ = 0, from (4.36) (with f = 0) we get an identity which immediately gives a contradiction by virtue of (3.24). This concludes the proof.
In passing, observe that here we did not need to split the proof into separately proving the absence of positive and of non-positive eigenvalues. Indeed, we obtained the absence of the whole point spectrum in just one step.
Remark 5.1 (Two-dimensional Pauli operators as a special case). One reason for investigating matrix self-adjoint Schrödinger operators in this work comes from our interest in pointing out a pathological behaviour of the two-dimensional purely magnetic (and so self-adjoint) Pauli Hamiltonian. From the explicit expression (3.30) of the two-dimensional Pauli operators, the relation with the scalar Schrödinger operator is evident. In this specific situation, identity (5.13), which was the crucial identity for proving the absence of point spectrum in the self-adjoint situation, reads (after multiplying by 1/2) as an identity whose right-hand side involves the term ∫ x · B 12 ū ∇ A u dx.
We stress that, in contrast to the proofs presented above, here the presence of the second term on the right-hand side involving the magnetic field does not allow us to reach a contradiction. Indeed, roughly speaking, all the positivity coming from the left-hand side, which is customarily used to reach the contradiction under the smallness assumption on the magnetic field, is exploited to control the second term on the right-hand side (due to inequality (3.31)); therefore, using (3.7), one is left with a term of the type −2c ‖∇ A u‖ 2 L 2 (R 2 ) ≤ 0, which leads to no contradiction, however small the constant c is chosen.

Absence of eigenvalues of Pauli and Dirac operators
This section is devoted to the proof of emptiness of the point spectrum of Pauli and Dirac Hamiltonians.

Warm-up in the 3d case
Even though the three-dimensional setting presented in the introduction is clearly covered by the more general results, Theorem 3.5 and Theorem 3.6, we have decided to dedicate a separate section to the 3d case. Indeed, due to the physical relevance of this framework, we want to make it easy to spot the conditions which guarantee the absence of the point spectrum in this case, sparing the interested reader from working his/her way through the statements of the theorems in the general setting.

Absence of eigenvalues of Dirac operators in any dimension
Now we can conclude our discussion by proving the absence of eigenvalues of Dirac operators in the general case, namely by proving Theorem 3.6. Let us start by commenting on the odd-dimensional case. Due to expression (2.11) for the squared Dirac operator in odd dimensions, and due to the analogy with (1.6) in the three-dimensional case, one can proceed as in the proof of Theorem 1.3, using the corresponding result for Pauli operators, Theorem 3.5, to get the result.
Turning to the even-dimensional situation, one realises from (2.13) that the squared Dirac operator equals a shifted Pauli operator. Therefore Theorem 3.6 follows as a consequence of Theorem 3.5 for even-dimensional Pauli operators.