1 Introduction

In this paper, we study the existence of localized states for a class of magnetic Laplacians

$$\begin{aligned} H_\varepsilon := -(\nabla + i \varepsilon ^{-2}a_\varepsilon ) \cdot (\nabla + i \varepsilon ^{-2}a_\varepsilon ) \ \ \ \ \text {in }\mathbb {R}^2 \end{aligned}$$
(1.1)

in the semi-classical regime \(\varepsilon \ll 1\). Here, the vector potential \(a_\varepsilon : \mathbb {R}^2 \rightarrow \mathbb {R}^2\) is such that \({\bar{a}}_\varepsilon = (a_\varepsilon , 0)\) satisfies \(\nabla \times {\bar{a}}_\varepsilon = b_\varepsilon e_3\), with \(e_3\) being the canonical unit vector in \(\mathbb {R}^3\) that is orthogonal to the plane. The intensity \(b_\varepsilon : \mathbb {R}^2 \rightarrow \mathbb {R}\) of the magnetic field is bounded and has a jump or fast transition across a simple curve \(\Gamma \) that is closed and \(C^4\). If \(\Omega \) denotes the compact set of \(\mathbb {R}^2\) such that \(\partial \Omega =\Gamma \), the simplest example of a magnetic field that we consider in this paper is of the form

$$\begin{aligned} b_\varepsilon (\cdot ) = b(\cdot ), \ \ \ b(x) = {\left\{ \begin{array}{ll} b_+ &{}\text {if }x \in \Omega \\ b_- &{}\text {otherwise} \end{array}\right. }, \ \ \ \ \text {for two values }b_+, b_- > 0, b_+ \ne b_-. \end{aligned}$$
(1.2)

Our study, however, also includes intensities \(b_\varepsilon \) that are not piecewise constant and such that, at any point \(x_0 \in \Gamma \), the function \(b_\varepsilon \) changes abruptly along the normal direction (cf. (2.2)).

Given \(H_\varepsilon \) as in (1.1), we consider the spectral problem

$$\begin{aligned} H_\varepsilon \Psi = \lambda \Psi \ \ \text {in }\mathbb {R}^2, \end{aligned}$$
(1.3)

for some \(\lambda > 0\) and \(\Psi \in H^2_{loc}(\mathbb {R}^2; \mathbb {C})\) such that

$$\begin{aligned} \int _{\mathbb {R}^2} |\Psi |^2 + \int _{\mathbb {R}^2} |(\nabla + i\varepsilon ^{-2}a_\varepsilon )\Psi |^2 < +\infty . \end{aligned}$$
(1.4)

For \((\Psi , \lambda )\) solving (1.3), the eigenfunction \(\Psi \) is an edge state whenever its mass (i.e. its \(L^2\)-norm) is localized at scale \(\varepsilon \) close to \(\Gamma \) and is distributed along this curve. From the point of view of the associated Schrödinger equation, this means that at the energy level \(\lambda \), the solution \(e^{-i \lambda t}\Psi \) describes a current localized on \(\Gamma \) and propagating along it (cf. Sect. 2.1).

The main goal of this paper is to provide a rigorous and detailed description of the edge states for Hamiltonians as in (1.1). In particular, we focus on how the variation of the magnetic field \(b_\varepsilon \) along \(\Gamma \) influences the mass distribution of \(\Psi \).

If the curve \(\Gamma \) is a straight line and the magnetic field is as in (1.2), problem (1.3) belongs to the class of Iwatsuka models that were first studied in [15]. In this setting, the operator \(H_\varepsilon \) may be diagonalized and its spectrum contains an essential part given by the two sets of Landau levels \(\left\{ \frac{b_+}{\varepsilon ^2} \left( n+\frac{1}{2}\right) \right\} _{n\in \mathbb {N}} \cup \left\{ \frac{b_-}{\varepsilon ^2}\left( n+ \frac{1}{2}\right) \right\} _{n\in \mathbb {N}}\) that correspond to the behaviour of the Hamiltonian \(H_\varepsilon \) at infinity. The presence of the interface \(\Gamma \), across which the magnetic field jumps, gives rise to an absolutely continuous part of the spectrum that fills the gaps between the Landau levels. The (generalized) eigenfunctions associated with the absolutely continuous part are localized at scale \(\varepsilon \) close to \(\Gamma \).
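To fix ideas, the interleaving of the two Landau-level families and the gaps between them can be tabulated numerically. The following short sketch is our own illustration, not part of the paper: the values \(b_+=2\), \(b_-=1\) and the energy cutoff are arbitrary, and energies are expressed in the rescaled units \(\varepsilon ^2\lambda \).

```python
# Our illustration (not from the paper): list the rescaled Landau levels
# b_+(n + 1/2) and b_-(n + 1/2) below a cutoff, together with the gaps
# between consecutive bulk energies, where edge spectrum may appear.
def bulk_levels(b_plus, b_minus, e_max):
    levels = set()
    for b in (b_plus, b_minus):
        n = 0
        while b * (n + 0.5) <= e_max:
            levels.add(b * (n + 0.5))
            n += 1
    return sorted(levels)

def gaps(levels):
    # open intervals between consecutive distinct bulk energies
    return [(lo, hi) for lo, hi in zip(levels, levels[1:]) if hi > lo]

lv = bulk_levels(b_plus=2.0, b_minus=1.0, e_max=5.0)
# lv = [0.5, 1.0, 1.5, 2.5, 3.0, 3.5, 4.5, 5.0]
```

For instance, with these sample values the first gap is \((0.5, 1.0)\), between the lowest levels of the two families.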

The setting of this paper may thus be considered as a generalization of the Iwatsuka models to a regular and compact curve \(\Gamma \) and to magnetic fields that are not translation-invariant and may slowly change along \(\Gamma \). In the main result of this paper (cf. Theorem 2.2), we identify a subset \(\Sigma \subset \mathbb {R}\) such that, if \((\Psi , \lambda )\) solves (1.3) and \(\lambda \notin \Sigma \), then \(\Psi \) is an edge state and its \(L^2\)-norm changes macroscopically along \(\Gamma \) according to an explicit function. The set \(\Sigma \) (cf. definition (2.6)) contains the bulk part \(\left\{ \frac{b_+}{\varepsilon ^2} \left( n+\frac{1}{2}\right) \right\} _{n\in \mathbb {N}} \cup \left\{ \frac{b_-}{\varepsilon ^2}\left( n+ \frac{1}{2}\right) \right\} _{n\in \mathbb {N}}\) and an additional set whose exclusion ensures that the function \(\Psi \) is localized along all of \(\Gamma \) and not only on one portion of it.

In fact, we expect that for values \(\lambda \in \Sigma \backslash \left( \left\{ \frac{b_+}{\varepsilon ^2} \left( n+\frac{1}{2}\right) \right\} _{n\in \mathbb {N}} \cup \left\{ \frac{b_-}{\varepsilon ^2}\left( n+ \frac{1}{2}\right) \right\} _{n\in \mathbb {N}}\right) \) the variation of the magnetic field along \(\Gamma \) may obstruct the propagation of the corresponding eigenfunction \(\Psi \) throughout the full interface \(\Gamma \). In analogy with WKB theory in the semi-classical regime for the Schrödinger operator \(L_\varepsilon :=-\Delta + \frac{V}{\varepsilon ^{2}}\), the points of \(\Gamma \) where the propagation is obstructed play the role of turning points (e.g. [11, Chapter 15, Subsection 15.4.1]). As explicitly shown in our main result, the function \(\Psi \) does not vanish along \(\Gamma \) as long as the solutions to the equivalent of the eikonal equation (cf. (2.7)) satisfy a suitable non-transversality condition. We plan to address this scenario in a future paper.

The original Iwatsuka model was introduced in [15] for a magnetic Schrödinger operator \(-(\nabla + i a) \cdot (\nabla + i a)\) in \(\mathbb {R}^2\) with a magnetic field \(b e_3\) having positive and bounded intensity \(b(x_1, x_2)= b(x_2)\), \((x_1, x_2) \in \mathbb {R}^2\), that converges to two distinct constants when \(x_2 \rightarrow \pm \infty \). This setting provides an example of a magnetic Schrödinger operator with purely absolutely continuous spectrum that is generated by the transition of the magnetic field between the two values attained at infinity.

Since [15], an extensive literature has been devoted to the study of this class of models and its generalizations. In [13], the authors consider a magnetic field of the form \(b_\varepsilon = \frac{b_\varepsilon (x_2)}{\varepsilon ^2}\), \(x_2 \in \mathbb {R}\), that has a fast transition along the line \(\Gamma = \{ x_2 = 0\}\): for every \(\varepsilon >0\), the magnetic field \(b_\varepsilon \) is such that \(b_\varepsilon = b_+\) when \(x_2 > \varepsilon \) and \(b_\varepsilon = b_-\) when \(x_2 < -\varepsilon \). In this setting, the authors study the localization at scale \(\varepsilon \) around \(\Gamma \) for the eigenfunctions corresponding to suitable energies and provide lower bounds for the edge currents carried by such edge states. A similar analysis is performed in [16] for a magnetic field of intensity \(b= b(x_2) \in C^\infty (\mathbb {R})\) that is monotone and satisfies \(\lim _{x_2 \rightarrow \pm \infty } b(x_2) = b_{\pm }\). The study in [16] relies on a detailed description of the band functions (cf. the curves \(\{ \nu _n: \mathbb {R}\rightarrow \mathbb {R}\}_{n\in \mathbb {N}}\) in Sect. 3.3) associated with the spectrum of \(-(\nabla + i a) \cdot (\nabla + i a)\). Furthermore, it allows for suitable perturbations of the previous operator by a non-negative electric potential V.

In [6, 7], the localization properties of the edge states are studied in the case of a magnetic field b as in (1.2) with \(\Gamma \) being a line and the two constant limit values satisfying \(b_+ > 0, b_- < 0\). In this case, the change in sign of the magnetic field gives rise to the so-called snake orbits. These were first introduced in [19] and correspond to trajectories of the classical Hamiltonian that lie half on one side of \(\Gamma \) and half on the other. In [6], an analysis of the snake orbits is carried out in the case of a magnetic field that is anti-symmetric with respect to the line \(\Gamma \). The additional symmetry of the system allows describing the snake orbits and the band functions associated with the magnetic Hamiltonian in detail. Similar results are also obtained in [1, 2] and references therein, for a magnetic field as in (1.2) with \(b_+ =1\) and \(b_- \in [-1, 0)\). In these works, the authors show that the band functions possess a global minimum, study the asymptotic expansion for the principal eigenvalue and analyse the localization of the corresponding eigenfunction.

We believe that the main novelty of the present paper is an accurate analysis of the edge states, together with an asymptotic expansion for the associated eigenvalues, that is performed at any energy level in the gaps of the set \(\left\{ \frac{b_+}{\varepsilon ^2} \left( n+\frac{1}{2}\right) \right\} _{n\in \mathbb {N}} \cup \left\{ \frac{b_-}{\varepsilon ^2}\left( n+ \frac{1}{2}\right) \right\} _{n\in \mathbb {N}}\). In addition, our model covers the case of magnetic fields \(b_\varepsilon \) that have a sharp transition along the normal direction to a general curve \(\Gamma \) and that may also change along the direction tangential to it. We refer to (2.2) in the next section for the detailed assumptions on \(b_\varepsilon \).

The techniques used in this paper are an extension of the methods developed in [9]. In the latter, the study of edge states is carried out in the case of a constant magnetic field \(b_\varepsilon = \frac{{\bar{b}}}{\varepsilon ^2}\), \({\bar{b}} \in \mathbb {R}\), and when (1.3) is solved in a bounded (regular) domain \(\Omega \) with Dirichlet boundary conditions. In this case, the boundary \(\partial \Omega \) plays the role of the interface \(\Gamma \) in the current paper and gives rise to a discrete part of the spectrum that “fills” the gaps between the Landau levels \(\left\{ \frac{{\bar{b}}}{\varepsilon ^2}\left( n + \frac{1}{2}\right) \right\} _{n\in \mathbb {N}}\). The main result of [9] shows that, whenever \(\lambda \) lies between any two Landau levels, the corresponding eigenfunction is an edge state. In contrast with the current paper, its mass is distributed asymptotically uniformly along \(\partial \Omega \). For other results in the literature related to this setting, we refer to [4, 12, 17] and to the introduction of [9] for a more detailed overview.

Structure of the paper and notation This paper is organized as follows: In the next section, we introduce the main setting and the main results (Theorems 2.2–2.5). Theorem 2.2 provides a description of the edge states, while Theorem 2.5 gives an asymptotic approximation to the associated eigenvalues. In Sect. 2.2, we comment on how the two previous theorems greatly simplify in the case of magnetic fields that are constant along \(\Gamma \) (e.g. the one in (1.2)), while in Sect. 2.3 we provide the precise asymptotic approximation for the eigenfunctions close to the interface \(\Gamma \) (Proposition 2.9). In Sect. 3, we prove the main results and carefully comment on the analogies and differences between the current strategy and the one used in our previous paper [9]. Finally, in the Appendix, we state and prove the auxiliary results that we use throughout the proofs of Sect. 3, including an overview of the main well-known results for the standard Iwatsuka model (Sect. 3.3).

Throughout this paper, we adopt the following notation:

  • We denote by \(\mathbb {T}\) the unit circle and, for every \(\xi , \tilde{\xi }\in \mathbb {T}\), we write \(d(\xi , \tilde{\xi })\) for the distance on \(\mathbb {T}\) between the two points.

  • Given a bounded set \(U \subset \mathbb {R}^d\), we denote by \(\fint _U\) the averaged (Lebesgue) integral \(|U|^{-1} \int _U\), where |U| is the usual (Lebesgue) measure of the set U.

  • Given two families \(\{ \alpha _\varepsilon \}_{\varepsilon>0}, \{\beta _\varepsilon \}_{\varepsilon >0} \subset \mathbb {R}\) such that \(\alpha _\varepsilon \rightarrow 0\), we use the notation \(\beta _\varepsilon = o( \alpha _\varepsilon )\) if \(\frac{\beta _\varepsilon }{\alpha _\varepsilon } \rightarrow 0\) when \(\varepsilon \rightarrow 0\).

  • For \(a, b \in \mathbb {R}\), we use the notation \(a \wedge b\) for the minimum between a and b.

2 Setting and Main Results

Let \(\Gamma \) be a \(C^4\) closed and simple curve in \(\mathbb {R}^2\). With no loss of generality, we assume that the curve \(\Gamma \) has unit length. We denote by \(f=(f_1(\xi ), f_2(\xi ))\), \(\xi \in \mathbb {T}\), a parametrization of \(\Gamma \) by arc-length and write \(\mathbf {T}(\xi )\), \(\mathbf {N}(\xi )\) and \(\kappa (\xi )\) for the tangent, the (outer) normal and the curvature of \(\Gamma \) at a point \(\xi \in \mathbb {T}\). Since \(\Gamma \) is \(C^4\), there exists a tubular neighbourhood \(U \subset \mathbb {R}^2\) of \(\Gamma \) where the change of coordinates

$$\begin{aligned} U \ni x \mapsto (\xi , s) \in \mathbb {T}\times \mathbb {R}, \ \ \ \ x = f(\xi ) - s \mathbf {N}(\xi )\end{aligned}$$
(2.1)

is well defined.
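For the simplest admissible \(\Gamma \), a circle of unit length (radius \(1/2\pi \)), the change of coordinates (2.1) can be written down explicitly. The sketch below is our own, purely illustrative: it verifies numerically that the map is invertible for \(|s|\) smaller than the radius \(1/\kappa \).

```python
import math

# Our illustration of the tubular coordinates (2.1) for a circle of unit
# length: f(xi) = R(cos 2*pi*xi, sin 2*pi*xi) with R = 1/(2*pi), outer
# normal N(xi) = (cos 2*pi*xi, sin 2*pi*xi), so x = f(xi) - s*N(xi) lies
# at distance R - s from the centre and the map is invertible for |s| < R.
R = 1.0 / (2.0 * math.pi)

def to_cartesian(xi, s):
    c, w = math.cos(2 * math.pi * xi), math.sin(2 * math.pi * xi)
    return ((R - s) * c, (R - s) * w)

def to_tubular(x, y):
    xi = (math.atan2(y, x) / (2 * math.pi)) % 1.0  # arc-length parameter on T
    s = R - math.hypot(x, y)                       # normal coordinate
    return xi, s

xi0, s0 = 0.3, 0.05          # note s0 < R = 1/kappa, with kappa = 2*pi
x, y = to_cartesian(xi0, s0)
xi1, s1 = to_tubular(x, y)   # round trip recovers (xi0, s0)
```

With this sign convention, \(s>0\) corresponds to the interior of \(\Omega \), consistently with assumption (A2) below.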

Let \(b: \mathbb {T}\times \mathbb {R}\rightarrow \mathbb {R}\) be any function such that

  (A1)

    For almost every \(s\in \mathbb {R}\), b is twice differentiable in the periodic variable \(\xi \in \mathbb {T}\) and \(b, \partial _\xi b, \partial _\xi ^2 b \in L^\infty (\mathbb {T}\times \mathbb {R})\);

  (A2)

    There exist two values \(b_+, b_- > 0\), \(b_+ \ne b_-\), and \(M > 0\) such that for every \(\xi \in \mathbb {T}\) we have \(b(\xi , s) = b_+\) in \(\{s > M\}\) and \(b(\xi , s) = b_-\) in \(\{s < -M \}\).

  (A3)

    There exists \(m >0\) such that \(b(\xi ,s) \ge m\) for almost every \((\xi ,s) \in \mathbb {T}\times \mathbb {R}\).

Given the tubular neighbourhood U and the function b introduced above, for every \(\varepsilon >0\) we define the magnetic field \(b_\varepsilon \) as

$$\begin{aligned} b_\varepsilon (\xi , s) = b\left( \xi , \frac{s}{\varepsilon }\right) \ \ \ \ \text {in }U \end{aligned}$$
(2.2)

and continuously extend it to be \(b_+\) or \(b_-\) in the two connected components of \(\mathbb {R}^2 \backslash U\). With no loss of generality, we assume throughout the paper that \(b_- < b_+\).

For every \(\varepsilon > 0\), we thus consider the Hamiltonian \(H_\varepsilon \) in (1.1) with \(b_\varepsilon \) as above. The spectrum \(\sigma ( H_\varepsilon )\) has an essential part given by \(\sigma _{\text {ess}}= \left\{ \frac{b_-}{\varepsilon ^2}\left( n + \frac{1}{2}\right) \right\} _{n\in \mathbb {N}}\) and, away from this set, the spectrum is discrete (see, for instance, [20] and [3, Theorem 2.1]).

If an edge state for \(H_\varepsilon \) is localized at scale \(\varepsilon \) around \(\Gamma \), after a suitable blow-up around a point of \(\Gamma \), the new “magnified” problem (1.3) is expected to resemble the Iwatsuka model of Appendix A.1 with the choice \(b= b(\xi , \cdot )\), \( s\in \mathbb {R}\). This motivates the introduction of the following notation, which is needed to state the main theorems.

For every \(\xi \in \mathbb {T}\) fixed, let \(H_{\text {Iwa}}\) be the Hamiltonian of Appendix A.1 with magnetic field \(b(\xi , \cdot )\). Let \(\{ \mathcal {O}(\xi , k)\}_{k\in \mathbb {R}}\) be the family of associated one-dimensional operators

$$\begin{aligned} \mathcal {O}(\xi , k):= - \partial _s^2 + \left( \int _0^s b(\xi , t) \, {\mathrm {d}}t - k\right) ^2 \ \ \ \text {in }\mathbb {R}. \end{aligned}$$
(2.3)

For every \(k \in \mathbb {R}\) fixed, the assumptions on b yield that \(\mathcal {O}(\xi , k)\) has a discrete spectrum \(\{ \nu _n(\xi , k) \}_{n\in \mathbb {N}}\) (cf. Appendix A.1 and Lemma 3.1). For every \(n \in \mathbb {N}\), we thus define the functions:

$$\begin{aligned}&\nu _n : \mathbb {T}\times \mathbb {R}\rightarrow \mathbb {R}, \ \ (\xi , k) \mapsto \nu _n(\xi , k) \ \text {the }n{\mathrm{th}}\text { eigenvalue of }\mathcal {O}(\xi , k) \nonumber \\&\quad \text { with magnetic field }b(\xi , \cdot ). \end{aligned}$$
(2.4)
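As a concrete complement, the band functions \(\nu _n(\xi , k)\) can be approximated by discretizing the one-dimensional operator (2.3) with finite differences. The sketch below is our own illustration and not part of the paper's arguments; the grid sizes and field profiles are arbitrary choices. For a constant field \(b \equiv 1\) the potential is a shifted harmonic well, so the bands are flat in k (with the normalization of (2.3), the harmonic-oscillator values \(2n+1\)), while for a step profile the lowest band is no longer flat and interpolates between the two limiting values as k varies.

```python
import numpy as np

# Our illustrative sketch (not from the paper): approximate the band
# functions nu_n(xi, k) of the fibre operator (2.3),
#   O(xi, k) = -d^2/ds^2 + (B(s) - k)^2,  B(s) = int_0^s b(xi, t) dt,
# by a second-order finite-difference discretization on a truncated
# interval [-L, L] with Dirichlet conditions (harmless, since the
# eigenfunctions decay like Gaussians away from the potential well).
def band_values(b, k, L=15.0, n_grid=1500, n_levels=3):
    s = np.linspace(-L, L, n_grid)
    h = s[1] - s[0]
    # primitive B with B(0) = 0 via a trapezoidal cumulative sum
    B = np.concatenate(([0.0], np.cumsum(0.5 * (b(s[1:]) + b(s[:-1])) * h)))
    B -= np.interp(0.0, s, B)
    V = (B - k) ** 2
    H = (np.diag(2.0 / h**2 + V)
         + np.diag(-np.ones(n_grid - 1) / h**2, 1)
         + np.diag(-np.ones(n_grid - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)[:n_levels]

# Constant field b = 1: O(xi, k) is a shifted harmonic oscillator, so the
# bands are flat in k, with values 2n + 1 in this normalization.
flat = band_values(lambda s: np.ones_like(s), k=0.0)

# Step field b = 1 for s < 0, b = 2 for s > 0: the lowest band varies in k.
step = lambda s: np.where(s > 0, 2.0, 1.0)
low = band_values(step, k=-3.0)[0]   # well sits in the region where b = 1
high = band_values(step, k=3.0)[0]   # well sits in the region where b = 2
```

The non-flatness of the bands for the step profile is precisely what makes \(\partial _k \nu _n \ne 0\) possible, the condition entering the definition of \(\sigma _{\text {sing}}\) below.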

By Lemma 3.1 applied to \(b= b(\xi , \cdot )\), the previous curves are differentiable in the variable k. Equipped with this notation, we define the sets

$$\begin{aligned} \begin{aligned} \sigma _{\text {bulk}}&:= \left\{ b_- \left( n+\frac{1}{2}\right) \right\} _{n\in \mathbb {N}} \cup \left\{ b_+\left( n + \frac{1}{2}\right) \right\} _{n\in \mathbb {N}},\\ \sigma _{\text {sing}}&:= \{ \lambda \in \mathbb {R}\, \, :\, \, \text {there exists }n\in \mathbb {N}, (\xi , k) \in \mathbb {T}\times \mathbb {R}\text { such that }\\ \nu _n(\xi , k)&= \lambda , \, \partial _k \nu _n(\xi , k) = 0 \}, \end{aligned} \end{aligned}$$
(2.5)

and

$$\begin{aligned} \Sigma := \sigma _{\text {bulk}} \cup \sigma _{\text {sing}}. \end{aligned}$$
(2.6)

The definition of \(\Sigma \) yields that if \(\lambda \notin \Sigma \), then there exists \(N \in \mathbb {N}\) (possibly zero) such that for every \(\xi \in \mathbb {T}\) there exist exactly N values \(\{ k_j(\xi ) \}_{j =1}^N \subset \mathbb {R}\) such that for each \(j =1 ,\ldots , N\), there exists a unique \(n_j \in \mathbb {N}\) such that

$$\begin{aligned} \nu _{n_j}(\xi , k_j(\xi ))= \lambda . \end{aligned}$$
(2.7)

Furthermore, the curves \(k_j : \mathbb {T}\rightarrow \mathbb {R}\), \(\xi \mapsto k_j(\xi )\), are well defined and \(C^2\) for every \(j=1, \ldots , N\). The previous claims are an easy consequence of the regularity of the surfaces \(\nu _l: \mathbb {T}\times \mathbb {R}\rightarrow \mathbb {R}\) (see Lemma 3.1), standard topological arguments and the implicit function theorem. We stress, in fact, that the definition (2.6) allows the implicit function theorem to be applied in order to infer the existence of the curves \(\{ k_j\}_{j=1}^N\). We postpone the detailed proof to the Appendix (cf. Lemma 3.6).

Remark 2.1

For a general magnetic field as in (2.2), having limit values \(b_+, b_- > 0\), and a given \(\lambda \notin \Sigma \), the number N of solutions to (2.7) admits the lower bound \(N \ge n_1 - n_2\), where \(n_1, n_2 \in \mathbb {N}\) are such that

$$\begin{aligned} b_-\left( n_1 + \frac{1}{2}\right)< \lambda< b_-\left( n_1 + \frac{3}{2}\right) , \ \ \ \ b_+\left( n_2 + \frac{1}{2}\right)< \lambda < b_+\left( n_2 + \frac{3}{2}\right) . \end{aligned}$$

We stress that \(n_1, n_2 \in \mathbb {N}\) do exist since \(\lambda \notin \sigma _{\text {bulk}}\). The value \(n_1- n_2\) may be characterized using the so-called Chern number [5, Chapter 3]. We also remark that, if the magnetic field \(b_\varepsilon \) is monotone in the variable s, then the functions \(\nu _l(\xi , \cdot )\) are monotone in \(k \in \mathbb {R}\) as well (cf. Lemma 3.1) and \(N= n_1-n_2\).
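For \(\lambda \) in a gap, the indices \(n_1, n_2\) above are determined explicitly by \(n_i = \lfloor \lambda /b_i - 1/2 \rfloor \). The following minimal sketch is our own illustration (assuming \(\lambda > (b_-\wedge b_+)/2\), so that both indices are non-negative), with arbitrary sample values.

```python
import math

# Our illustration of Remark 2.1: for lambda not on a Landau level, the
# unique integer n with b*(n + 1/2) < lambda < b*(n + 3/2) is
# n = floor(lambda / b - 1/2).
def gap_index(lam, b):
    n = math.floor(lam / b - 0.5)
    assert b * (n + 0.5) < lam < b * (n + 1.5), "lambda lies on a Landau level"
    return n

# Sample values b_- = 1, b_+ = 2, lambda = 3.2 (rescaled units):
n1 = gap_index(3.2, 1.0)   # 1.0*(2+1/2) < 3.2 < 1.0*(2+3/2)
n2 = gap_index(3.2, 2.0)   # 2.0*(1+1/2) < 3.2 < 2.0*(1+3/2)
N_min = n1 - n2            # lower bound on the number of solutions to (2.7)
```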

The next result states that, for energies away from the set \(\Sigma \), the corresponding eigenfunction \(\Psi _\varepsilon \) is an edge state. More precisely, \(\Psi _\varepsilon \) is localized around \(\Gamma \) (i.e. (2.8)) and we give a precise description of how its \(L^2\)-norm is asymptotically distributed along \(\Gamma \) (i.e. (2.9)).

Theorem 2.2

(Asymptotic behaviour of the edge states). Let \(b_\varepsilon \) and \(\Sigma \) be as above. Let \((\Psi _\varepsilon , \lambda _\varepsilon )_{\varepsilon > 0}\) be a family of solutions to (1.3)–(1.4) such that \(\Vert \Psi _\varepsilon \Vert _{L^2(\mathbb {R}^2)} =1\). We assume that \(\varepsilon ^2\lambda _\varepsilon \rightarrow \lambda \) with \(\lambda \notin \Sigma \). Then, \(\Psi _\varepsilon \) is an edge state and:

  1. (a)

    if \(d_{\Gamma }\) denotes the distance function from the curve \(\Gamma \), then for every \(n\in \mathbb {N}\) there exists a constant \(C=C(n, \lambda , \Gamma )\) such that

    $$\begin{aligned}&\Vert (d_{\Gamma } \wedge 1)^n \Psi _\varepsilon \Vert _{L^2(\mathbb {R}^2 ; \mathbb {C})} + \varepsilon \Vert (d_{\Gamma } \wedge 1)^n \left( \nabla + i \frac{a_\varepsilon }{\varepsilon ^2}\right) \Psi _\varepsilon \Vert _{L^2(\mathbb {R}^2; \mathbb {C})} \nonumber \\&\quad + \varepsilon ^2 \Vert (d_{\Gamma } \wedge 1)^n (H_\varepsilon \Psi _\varepsilon )\Vert _{L^2(\mathbb {R}^2; \mathbb {C})} \le C \varepsilon ^n. \end{aligned}$$
    (2.8)
  2. (b)

    Let \(\{r_\varepsilon \}_{\varepsilon >0}\) be such that \(\varepsilon ^{-1} r_\varepsilon \rightarrow +\infty \) and \(\varepsilon ^{-\frac{1}{2}}r_\varepsilon \rightarrow 0\). Let \(\{k_{l}\}_{l=1}^N \subset C^1(\mathbb {T})\) be the curves that solve (2.7) for the limit value \(\lambda \). Then, there exist a sequence \(\{\varepsilon _j\}_{j\in \mathbb {N}}\) and \( \{ A_\ell \}_{\ell =1}^N \subset \mathbb {C}\) with \(\sum _{\ell =1}^N |A_\ell |=1\), such that for every \(x_0=(\xi , 0) \in \Gamma \) we have

    (2.9)

Remark 2.3

In contrast with the analogous result for a constant magnetic field and Dirichlet boundary conditions [9, Theorem 2.5 and limit (2.24)], the change of the magnetic field b along \(\Gamma \) yields that the amplitude of the \(L^2\)-norm of \(\Psi _\varepsilon \) changes macroscopically along \(\Gamma \). The amplitude of \(\Psi _\varepsilon \), in particular, depends on the values of the derivatives \(\partial _k \nu _{n_j}(\xi , k_j(\xi ))\). Thanks to the assumption \(\lambda \notin \Sigma \), these are bounded both from above and away from zero. This implies that all the ratios in the sum on the right-hand side of (2.9) are bounded and never vanish along \(\mathbb {T}\).

Remark 2.4

We stress that if the number of curves solving (2.7) is \(N=1\), then the sequence \(\{r_\varepsilon \}_{\varepsilon >0}\) in Theorem 2.2, part (b), may be chosen as \(r_\varepsilon = \varepsilon \). As further discussed in Proposition 2.9, Theorem 2.2, (b) follows from a detailed asymptotic formula for the eigenfunctions \(\Psi _\varepsilon \) close to the interface \(\Gamma \). This formula allows us to approximate \(\Psi _\varepsilon \) by a superposition of functions that oscillate in the variable \(\xi \) as the wave functions \(e^{\frac{i}{\varepsilon }\int _0^\xi k_\ell (x) \, {\mathrm {d}}x}\), \(\ell =1, \ldots , N\). Therefore, when \(N> 1\) the previous waves may interact along length scales \(\xi \sim \varepsilon \), but do become decoupled along any mesoscopic scale \(r_\varepsilon \gg \varepsilon \). In other words, for \(\ell , j =1 ,\ldots , N\) such that \(\ell \ne j\) it holds

$$\begin{aligned} \left| \int _{|\xi | < r_\varepsilon } e^{\frac{i}{\varepsilon }\int _0^\xi (k_\ell (x)- k_j(x)) \, {\mathrm {d}}x } \, {\mathrm {d}}\xi \right| \le \frac{\varepsilon }{\delta r_\varepsilon }, \end{aligned}$$
(2.10)

whenever \(|k_\ell (\xi )- k_j(\xi )| > \delta \) for every \(\xi \in \mathbb {T}\). Since this last inequality is satisfied by the curves \(\{ k_{\ell }\}_{\ell =1}^N\) thanks to Lemma 3.6, the right-hand side above vanishes in the limit whenever \(\frac{r_\varepsilon }{\varepsilon } \rightarrow +\infty \). This technical issue is the same that arises in [9] and distinguishes [9, Theorem 2.4] from [9, Theorem 2.5] (see also [9, Formulas (2.20)–(2.21)] for a further comment on this).
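The estimate (2.10) can be checked numerically. The sketch below is our own (with a constant phase difference \(k_\ell - k_j \equiv \delta \) chosen purely for illustration): it confirms that the normalized cross term stays below \(\varepsilon /(\delta r_\varepsilon )\) and shrinks as \(\varepsilon \rightarrow 0\) along the mesoscopic choice \(r_\varepsilon = \varepsilon ^{3/4}\).

```python
import numpy as np

# Our numerical check of the decoupling estimate (2.10): the averaged
# cross term (2 r)^{-1} | int_{|xi| < r} exp(i Phi(xi)/eps) d xi | with
# Phi(xi) = int_0^xi (k_l - k_j), here for a constant difference delta.
def cross_term(phase, eps, r, n=20001):
    xi = np.linspace(-r, r, n)
    y = np.exp(1j * phase(xi) / eps)
    integral = ((y[:-1] + y[1:]) * 0.5 * (xi[1] - xi[0])).sum()  # trapezoid
    return abs(integral) / (2 * r)

delta = 0.7
vals = []
for eps in (1e-2, 1e-4):
    r = eps ** 0.75                        # mesoscopic: eps << r << sqrt(eps)
    v = cross_term(lambda xi: delta * xi, eps, r)
    assert v <= eps / (delta * r) + 1e-6   # the bound in (2.10)
    vals.append(v)
# vals decreases: the waves decouple as eps -> 0
```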

The next main result provides an asymptotic expansion for the eigenvalues of \(H_\varepsilon \) in (1.3) that correspond to edge states. This result should be compared with [9, Corollary 2.6], which is the analogue in the case of constant magnetic fields and Dirichlet boundary conditions. We stress that the generality of the magnetic fields \(b_\varepsilon \) considered in this paper yields that the asymptotic expansion for the eigenvalues \(\lambda _\varepsilon \in \sigma (H_\varepsilon )\) is given in terms of functions \(\Lambda _i\), \(i=1, \ldots , N\), that are implicitly defined. In fact, in the case of magnetic fields that do not change along \(\Gamma \), the next theorem reduces to a simpler asymptotic approximation for the eigenvalues (see Corollary 2.8 in the next subsection).

Theorem 2.5

(Asymptotic expansion for the eigenvalues). Let \(I \subset \mathbb {R}\) be an open and bounded interval such that \(\text {dist}(I, \Sigma ) > 0\). Then:

  1. (a)

    There exist \(N \in \mathbb {N}\) and smooth curves \(K_j= K_j(\lambda , \xi )\), \(j=1, \ldots , N\), such that, for every \(\lambda \in I\), the function \(K_j(\lambda , \cdot )\) solves equation (2.7). Moreover, for every \(j=1, \ldots , N\) the map

    $$\begin{aligned} \Lambda _j: I \rightarrow \mathbb {R}, \ \ \ \ \lambda \mapsto \Lambda _j(\lambda ):= \int _{\mathbb {T}} K_j(\lambda , \xi ) \, {\mathrm {d}}\xi \end{aligned}$$
    (2.11)

    is differentiable and invertible.

  2. (b)

    There exists \(\varepsilon _0\) such that for all \(\varepsilon \le \varepsilon _0\), every \(\lambda _\varepsilon \in \sigma ( H_\varepsilon )\) such that \(\varepsilon ^2 \lambda _\varepsilon \in I\) satisfies

    $$\begin{aligned} \varepsilon ^2\lambda _\varepsilon = \Lambda _j^{-1}(q_\varepsilon ) + \varepsilon \int _\mathbb {T}\frac{B_{n_j}(\xi , K_j(\Lambda _j^{-1}(q_\varepsilon ), \xi ))}{D_{n_j}(\xi , K_j(\Lambda _j^{-1}(q_\varepsilon ), \xi ))} + o(\varepsilon ), \end{aligned}$$

    for some \(j=1, \ldots , N\), \(q_\varepsilon \in 2\pi \varepsilon \mathbb {Z}\) and where, for every \(n\in \mathbb {N}\), the functions \(D_n, B_n: \mathbb {T}\times \mathbb {R}\rightarrow \mathbb {R}\) are defined as

    $$\begin{aligned} \begin{aligned} B_n (\xi , k)&= 2 \kappa (\xi ) \int \biggl (k \int _0^t (\tilde{t}-\mu ) b(\xi , \tilde{t}) \, {\mathrm {d}}\tilde{t} + (\int _0^t \tilde{t}\, b(\xi , \tilde{t}) \, {\mathrm {d}}\tilde{t})(\int _0^t b(\xi , \tilde{t}) \, {\mathrm {d}}\tilde{t})\biggr )\\&\quad H_n(\xi , k, t)^2 \, {\mathrm {d}}t\\ D_n(\xi , k)&= \frac{1}{\partial _k \nu _n(\xi , k)}\biggl ( \int _{\mathbb {T}} \frac{1}{\partial _k \nu _{n}(\tilde{\xi }, k)} \, {\mathrm {d}}\tilde{\xi }\biggr )^{-1}. \end{aligned} \end{aligned}$$
    (2.12)

    We recall that here \(\kappa \) denotes the curvature of \(\Gamma \) and, for every \(n\in \mathbb {N}\) and \(k\in \mathbb {R}\), the functions \(H_n(\xi , k, \cdot )\) are the eigenfunctions of the operator \(\mathcal {O}(\xi , k)\) in (2.3) associated with the eigenvalue \(\nu _n(\xi , k)\).

Remark 2.6

The previous theorem does not provide information on the existence of the eigenvalues. However, it is possible to infer the existence of eigenvalues for those \(\bar{\lambda }\in \mathbb {R}\) such that there is at least one smooth curve \(k(\xi )\) satisfying (2.7). More precisely, given such \(\bar{\lambda }\) and for every \(\varepsilon \) small enough, there exists an eigenvalue \(\lambda \in \sigma (H_\varepsilon )\) that is close to \(\bar{\lambda }\) in the sense that

$$\begin{aligned} \Big | \varepsilon ^2\lambda - \bar{\lambda }- \varepsilon \int _\mathbb {T}\frac{B_{n_j}(\xi , k( \xi ))}{D_{n_j}(\xi , k(\xi ))} \Big | = o(\varepsilon ). \end{aligned}$$
(2.13)

The proof of this claim follows the same lines as [9, Proof of Corollary 2.6, (4.17)–(4.18)], where we construct a suitable approximate eigenfunction satisfying, up to a small error, the spectral problem (1.3) with \(\lambda = \bar{\lambda }\). In this case, the approximate eigenfunction may be chosen as \(\Psi _{flat,\varepsilon }\) in (2.19).

For a general magnetic field considered in this paper, the previous argument yields that there are (rescaled) eigenvalues in \(\left( \frac{b_- \wedge b_+}{2}, +\infty \right) \backslash \Sigma \), with \(\Sigma \) as in (2.6). For every \(\bar{\lambda }\) in this set, in fact, we may find a smooth solution to (2.7) (see Remark 2.1 and Lemma 3.6). Furthermore, in the case of magnetic fields that are constant along the interface, namely \(b = b(s)\) as considered in Sect. 2.2, the set \(\Sigma \) usually contains only countably many isolated points. This follows from the fact that the branches \(\{ \nu _n \}_{n\in \mathbb {N}}\) in (2.4) do not depend on the angular component \(\xi \in \mathbb {T}\). If, in addition, \(b=b(s)\) is monotone, then the branches \(\{ \nu _n \}_{n\in \mathbb {N}}\) are monotone as well and the subset \(\sigma _{\text {sing}} \subset \Sigma \) is empty. In this case, we may conclude that we find eigenvalues in all the gaps between the Landau levels \(\left\{ \frac{b_-}{\varepsilon ^2}(n + \frac{1}{2})\right\} _{n\in \mathbb {N}} \cup \left\{ \frac{b_+}{\varepsilon ^2}\left( n+\frac{1}{2}\right) \right\} _{n\in \mathbb {N}}\).

2.1 Edge Currents

In this subsection, we show that the edge states described in Theorem 2.2 do carry a current. Let \((\Psi _\varepsilon , \lambda _\varepsilon )_{\varepsilon > 0}\) solve (1.3)–(1.4): in line with [13, 14], we define the flux

$$\begin{aligned} j_\varepsilon (x):= 2 \text {Im}\bigg (\overline{\Psi }_\varepsilon (\nabla + i \frac{a_\varepsilon }{\varepsilon ^2}) \Psi _\varepsilon \bigg ), \end{aligned}$$
(2.14)

where \(\overline{\Psi }_{\varepsilon }\) denotes the complex conjugate of \(\Psi _\varepsilon \) and \(\text {Im}(z)\), \(z \in \mathbb {C}\), is the imaginary part of z.

Let \((\Psi _\varepsilon , \lambda _\varepsilon )\) be edge states as in Theorem 2.2, and let \(j_\varepsilon \) be the flux associated with this choice of \(\Psi _\varepsilon \). Appealing to Theorem 2.2, (a), it is an easy consequence of (2.14) that for every \(\gamma \in (0,1)\) and every \(n\in \mathbb {N}\) there exists a constant \(C= C(n, \gamma , \lambda , \Gamma )\) such that

$$\begin{aligned} \int _{\mathop {dist}(x, \Gamma ) > \varepsilon ^{\gamma }} |j_\varepsilon (x)|&\le C \varepsilon ^n, \end{aligned}$$
(2.15)

namely the fluxes concentrate around \(\Gamma \). We postpone the short proof of this estimate to the proof of the next result. Corollary 2.7 states that \(j_\varepsilon \) does not vanish on \(\Gamma \) and that it is asymptotically concentrated along this curve in the tangential direction \(\mathbf {T}\):

Corollary 2.7

Let \((\Psi _\varepsilon , \lambda _\varepsilon )_{\varepsilon >0}\) satisfy the hypotheses of Theorem 2.2. Then, up to a subsequence, there exist \(\{ A_\ell \}_{\ell =1}^N \subset \mathbb {C}\) with \(\sum _{\ell =1}^N |A_\ell |^2 = 1\) such that

$$\begin{aligned} \varepsilon j_\varepsilon \rightarrow \beta \delta _{\Gamma } \mathbf {T}\ \ \ \text { in }\mathcal {D}'(\mathbb {R}^2; \mathbb {R}^2), \end{aligned}$$

where \(\delta _\Gamma \) denotes the Dirac measure concentrated on \(\Gamma \) and the function \(\beta : \mathbb {T}\rightarrow \mathbb {R}\) is defined as

Here, the space \(\mathcal {D}'(\mathbb {R}^2; \mathbb {R}^2)\) is the space of \(\mathbb {R}^2\)-valued distributions on \(\mathbb {R}^2\).

We recall that, thanks to the definition (2.6) of the set \(\Sigma \), the function \(\beta \) never vanishes on \(\mathbb {T}\).

2.2 Magnetic Fields that are Constant Along \(\Gamma \)

In the special case \(b(\xi ,s)=b(s)\), which includes example (1.2) in the Introduction, the curves \(\{k_i(\xi ) \}_{i=1}^N\) solving (2.7) are constant (i.e. \(k_i(\xi ) \equiv k_i \in \mathbb {R}\) for every \(\xi \in \mathbb {T}\)) and the functions \(\nu _n\), \(n\in \mathbb {N}\), do not depend on the variable \(\xi \in \mathbb {T}\). In this case, the limit (2.9) of Theorem 2.2, part (b), turns into:

$$\begin{aligned} \lim _{j\rightarrow \infty } (2 r_{\varepsilon _j})^{-1} \int _{ |x - x_0| < r_{\varepsilon _j}} |\Psi _{\varepsilon _j}(x)|^2 \, {\mathrm {d}}x = 1, \end{aligned}$$

which means that in this case the mass of the eigenfunctions \(\Psi _\varepsilon \) is asymptotically uniformly distributed along \(\Gamma \).

Moreover, the statement of Theorem 2.5 may be simplified into the following:

Corollary 2.8

Let \((\Psi _\varepsilon , \lambda _\varepsilon )\) be as in Theorem 2.5. Then, for some \(l \in \mathbb {N}\) and \(q_\varepsilon \in 2\pi \varepsilon \mathbb {Z}\)

$$\begin{aligned} \varepsilon ^2\lambda _\varepsilon = \nu _{l}(q_\varepsilon ) + 4\pi \varepsilon {\bar{B}}_{l}(q_\varepsilon ) + o(\varepsilon ), \end{aligned}$$
(2.16)

where the function \({\bar{B}}_l: \mathbb {R}\rightarrow \mathbb {R}\) is defined as

$$\begin{aligned} {\bar{B}}_l (k) = \int \biggl (k \int _0^t (\tilde{t}-\mu ) b(\tilde{t}) \, {\mathrm {d}}\tilde{t} + \left( \int _0^t \tilde{t}\, b(\tilde{t}) \, {\mathrm {d}}\tilde{t}\right) \left( \int _0^t b(\tilde{t}) \, {\mathrm {d}}\tilde{t} \right) \biggr ) H_l(k, t)^2 \, {\mathrm {d}}t , \ \ \ k \in \mathbb {R}. \end{aligned}$$
(2.17)

We also stress that, in this case, the function \(\beta \) in Corollary 2.7 is constant and equals

$$\begin{aligned} \beta \equiv \sum _{\ell =1}^N |A_\ell |^2 \partial _k \nu _{n_\ell }( k_\ell ). \end{aligned}$$

We finally remark that if b is monotone (as in (1.2)), then Lemma 3.1 yields that the branches \(\nu _l(\xi , k) \equiv \nu _l(k)\) are monotone as well. This implies that the set \(\sigma _{\text {sing}}\) is empty and that \(\Sigma \) coincides with the bulk part \(\sigma _{\text {bulk}}\).
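For intuition, the dispersion branches \(\nu _n(k)\) of the fiber operator \(\mathcal {O}(k) = -\partial _t^2 + (k + \int _0^t b)^2\) can be approximated numerically. The following sketch is not part of the analysis above: it uses hypothetical step-field values \(b_- = 1\), \(b_+ = 3\) and a finite-difference discretization on a truncated interval with Dirichlet conditions. For such a monotone step, the lowest branch interpolates monotonically between the extreme Landau levels \(b_+\) (as \(k \rightarrow -\infty \)) and \(b_-\) (as \(k \rightarrow +\infty \)).

```python
import numpy as np

def nu_curves(ks, b_minus=1.0, b_plus=3.0, L=12.0, n=400, num=3):
    """Lowest branches nu_n(k) of the fiber operator
    O(k) = -d^2/dt^2 + (k + A(t))^2,  A(t) = int_0^t b,
    for a step field b(t) = b_minus (t < 0), b_plus (t > 0).
    Finite differences with Dirichlet conditions on [-L, L]."""
    t = np.linspace(-L, L, n)
    h = t[1] - t[0]
    A = np.where(t < 0, b_minus * t, b_plus * t)  # primitive of b, A(0) = 0
    # second-derivative matrix (Dirichlet boundary conditions)
    D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    out = []
    for k in ks:
        H = -D2 + np.diag((k + A) ** 2)
        out.append(np.linalg.eigvalsh(H)[:num])  # ascending eigenvalues
    return np.array(out)  # shape (len(ks), num)

ks = np.linspace(-6.0, 6.0, 13)
nus = nu_curves(ks)
```

With these illustrative parameters, `nus[:, 0]` decreases from roughly \(b_+ = 3\) to roughly \(b_- = 1\) as k crosses the transition region, consistently with the monotonicity of the branches for monotone b.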

2.3 Precise Asymptotic Expansion

In analogy with the main results in [9], Theorem 2.2, part (b) and Theorem 2.5 are an easy consequence of precise asymptotic information on the behaviour of the eigenfunctions \(\Psi _\varepsilon \) close to \(\Gamma \): Let \((\Psi _\varepsilon , \lambda _\varepsilon )\) satisfy the assumptions of Theorem 2.2. Then, for every \(\varepsilon \) small enough, also \(\varepsilon ^2\lambda _\varepsilon \notin \Sigma \) and there are exactly N smooth curves \(\{ k_{i,\varepsilon } \}_{i=1}^N\) satisfying (2.7) with \(\lambda \) replaced by \(\varepsilon ^2 \lambda _\varepsilon \).

Before stating the next proposition, we need the following notation: for every curve \(k_{j,\varepsilon }=k_{j,\varepsilon }(\xi )\), \(j=1, \ldots , N\), solving (2.7), we define the function

$$\begin{aligned} W_{j,\varepsilon }(\xi , s):= e^{\frac{i}{\varepsilon }\int _0^\xi k_{j,\varepsilon }(y) \, {\mathrm {d}}y} H_{n_j}(\xi , k_{j,\varepsilon }(\xi ), s), \ \ \ \ (\xi , s) \in \mathbb {T}\times \mathbb {R}. \end{aligned}$$
(2.18)

We recall that for every \(n \in \mathbb {N}\) and \((\xi , k) \in \mathbb {T}\times \mathbb {R}\), \(H_n(\xi , k, \cdot )\) is the eigenfunction of \(\mathcal {O}(\xi , k)\) associated with the eigenvalue \(\nu _n(\xi , k)\). We refer to Lemma 3.1 in Sect. A.1 for the properties of these functions.

Proposition 2.9

Let \((\Psi _\varepsilon , \lambda _\varepsilon )_{\varepsilon >0}\) and the sequence \(\{r_\varepsilon \}_{\varepsilon >0} \subset \mathbb {R}_+\) be as in Theorem 2.2. Then, there exists a global gauge \(\theta _\varepsilon \) such that the function \(\tilde{\Psi }_\varepsilon = e^{i\theta _\varepsilon }\Psi _\varepsilon \) satisfies the following property: for every subsequence \(\{\varepsilon _j\}_{j \in \mathbb {N}}\), there exist \(A_1, \ldots , A_N \in \mathbb {C}\) with \(\sum _{j=1}^N |A_j|^2 =1\) such that

(2.19)

where \(B_l\) is as in (2.12), and the convergence holds in the sense that

(2.20)

3 Proofs

In the remaining part of the paper, for any \(a, b \in \mathbb {R}\) we use the notation \(a \lesssim b\) and \(a \gtrsim b\) if there exists a constant C, depending on \(\Gamma \), \(|\lambda |\), \(\mathop {dist}(\lambda , \Sigma )\) and on the constants m, M in assumptions (A1)–(A3) for b, such that \(a \le C b\) and \(a \ge C b\), respectively.

3.1 Proofs of Theorems 2.2, 2.5 and Corollary 2.8

In this section, we show how to adapt the proofs of [9] to the current setting. Both the strategy and most of the auxiliary results contained in [9] may be easily adapted also to the case of magnetic fields as in (2.2). Below, we thus provide an overview of the strategy that we use to prove Theorems 2.2 and 2.5 and give the details for the parts that conceptually differ from the previous paper.

Proof of Theorem 2.2

The proof for Part (a) is similar to the one for [9, Proposition 2.3], and the main difference in this setting is that the domain is the whole space \(\mathbb {R}^2\). If \(\Omega \) is the bounded set that has boundary \(\Gamma \), we argue the inequality of part (a) separately in \(\Omega \) and \(\mathbb {R}^2 \backslash \Omega \).

The argument for the set \(\Omega \) is analogous to the one for [9, Proposition 2.3], and the only difference is that the cut-off function \(\phi = \phi _\varepsilon \) used for [9, formula (3.1)] solves [9, boundary value problem (2.12)] in \(\Omega _\varepsilon = \{ x \in \Omega \, :\, \mathop {dist}(x; \Gamma ) > M\varepsilon \}\), where M is as in (A2). Since \(\Gamma \) is \(C^4\), this set is regular enough for \(\varepsilon \) sufficiently small, and the magnetic field satisfies \(b_\varepsilon \equiv b_+\) in \(\Omega _\varepsilon \) (c.f. (2.2)). We also stress that the boundedness of \(b_\varepsilon \) (c.f. (A1)) is enough to infer, by standard elliptic regularity, that \(\Psi _\varepsilon \in H^2_{loc}(\mathbb {R}^2)\).

We now turn to the set \(\mathbb {R}^2 \backslash \Omega \): By a standard partition argument, it suffices to prove that for every \(\eta \in C^\infty _0(\mathbb {R}^2 \backslash \Omega )\)

$$\begin{aligned}&\int \eta ^2 |d_{\Gamma } \wedge 1|^{2n} |\Psi _\varepsilon |^2 + \varepsilon \int \eta ^2 |d_{\Gamma } \wedge 1|^{2n} |(\nabla + i \frac{a_\varepsilon }{\varepsilon ^2}) \Psi _\varepsilon |^2\\&\quad + \varepsilon ^2 \int \eta ^2 |d_{\Gamma } \wedge 1|^{2n} |H_\varepsilon \Psi _\varepsilon |^2 \le C(n) \varepsilon ^n \int |\Psi _\varepsilon |^2 \mathbf {1}_{\mathop {supp}(\eta )}. \end{aligned}$$

The argument for this inequality is similar to the one above, provided that we use the cut-off function \(\phi = \eta \phi _\varepsilon \), with \(\eta \) as above and \(\phi _\varepsilon \) the solution of [9, boundary value problem (2.12)] in the exterior domain \(\{ x \notin \Omega \, :\, \mathop {dist}(x ; \Gamma ) > M\varepsilon \}\).

We now turn to the proof of Theorem 2.2, (b): In the case \(N=1\), this follows easily from (a) and from the asymptotic expansion of Proposition 2.9, together with the properties of the functions \(W_{j,\varepsilon }\) (c.f. also Lemma 3.1). For \(N > 1\), the proof is similar and relies on the choice of the mesoscale \(\{ r_\varepsilon \}_{\varepsilon >0}\): thanks to (2.10), Lemma 3.6 implies that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \bigg |\int _{|\xi | < r_\varepsilon } e^{\frac{i}{\varepsilon }\int _0^\xi (k_i(x)- k_j(x)) \, {\mathrm {d}}x } \, {\mathrm {d}}\xi \bigg | = 0. \end{aligned}$$

\(\square \)

Proof of Theorem 2.5

We begin by showing part (a): The existence of the value N and of the curves \(\{ k_i(\lambda , \xi ) \}_{i=1}^N \in C^1(I \times \mathbb {T})\) follows by the same argument as in Lemma 3.6, if we apply the implicit function theorem to the functions \(F_{n_i}: I \times \mathbb {T}\times \mathbb {R}\rightarrow \mathbb {R}\), \(F_{n_i}(\lambda , \xi , k):= \nu _{n_i}(\xi , k) - \lambda \). Also in this case, the fact that I is an interval satisfying \(\mathop {dist}(I, \Sigma ) > 0\) allows us to define the curves \(\{k_i\}_{i=1}^N\) globally over the set \(I \times \mathbb {T}\). We remark that, since \(\lambda \notin \Sigma \), the partial derivatives satisfy

$$\begin{aligned} \partial _\lambda k_i(\lambda , \xi ) = \frac{\partial _\lambda F_{n_i}(\lambda , \xi , k_i(\lambda , \xi ))}{\partial _k F_{n_i}(\lambda , \xi , k_i(\lambda , \xi ))}= - \frac{1}{\partial _k\nu _{n_i}(\xi , k_i(\lambda , \xi ))} \ne 0 \ \ \text { for all }\xi \in \mathbb {T}\text { and } \lambda \in I, \end{aligned}$$

and, by continuity, they have a constant sign.

By the previous argument, it follows that, for each \(i=1, \ldots , N\), the function \(\Lambda _i\) defined as in the statement of Theorem 2.5 (a) is well defined, and its derivative

$$\begin{aligned} \Lambda _i'(\lambda ) = \int _\mathbb {T}\partial _\lambda k_i(\lambda , \xi ) \, {\mathrm {d}}\xi = - \int _\mathbb {T}\frac{1}{\partial _k\nu _{n_i}(\xi , k_i(\lambda , \xi ))} \, {\mathrm {d}}\xi \end{aligned}$$
(3.1)

has a constant sign. This implies that \(\Lambda _i\) is strictly monotone in I, so that its inverse is well defined and continuous. This establishes the statement of part (a).

We now turn to part (b): Using the expansion of Proposition 2.9, we may argue as for [9, Proof of Corollary 2.6, (4.19)–(4.20)] and infer that the periodicity in the variable \(\xi \) of the functions \(\Psi _\varepsilon \) and \(\nu _n\) yields

$$\begin{aligned}&\int _\mathbb {T}k_{i}( \varepsilon ^2\lambda _\varepsilon , \xi ) \, {\mathrm {d}}\xi + \varepsilon \int _\mathbb {T}\frac{B_i(\xi , k_{i}( \varepsilon ^2\lambda _\varepsilon , \xi ))}{\partial _k \nu _{n_i}(\xi , k_i(\varepsilon ^2\lambda _\varepsilon , \xi ))} {\mathrm {d}}\xi = q_\varepsilon + o(\varepsilon ), \\&\quad \text {for some }q_\varepsilon \in 2\pi \varepsilon \mathbb {Z}. \end{aligned}$$

Above, we used the fact that the functions \(k_{i,\varepsilon }\) of Proposition 2.9 may be rewritten, with the notation of part (a), as \(k_{i,\varepsilon }(\xi )= k_i(\varepsilon ^2\lambda _\varepsilon , \xi )\). Using the definition of \(\Lambda _i\), we rewrite the previous identity as

$$\begin{aligned} \Lambda _i(\varepsilon ^2 \lambda _\varepsilon ) = q_\varepsilon - \varepsilon \int _\mathbb {T}\frac{B_i(\xi , k_{i}( \varepsilon ^2\lambda _\varepsilon , \xi ))}{\partial _k \nu _{n_i}(\xi , k_i(\varepsilon ^2\lambda _\varepsilon , \xi ))} {\mathrm {d}}\xi + o(\varepsilon ), \ \ \ \ \text {for some }q_\varepsilon \in 2\pi \varepsilon \mathbb {Z}. \end{aligned}$$

By part (a) and (3.1), this also implies that

$$\begin{aligned} \varepsilon ^2 \lambda _\varepsilon&= \Lambda _i^{-1}(q_\varepsilon ) \\&\quad + \varepsilon \bigg (\int _\mathbb {T}\frac{1}{\partial _k\nu _{n_i}(\xi , k_i(\Lambda _i^{-1}(q_\varepsilon ), \xi ))} \, {\mathrm {d}}\xi \bigg )^{-1} \int _\mathbb {T}\frac{B_i(\xi , k_{i}( \varepsilon ^2\lambda _\varepsilon , \xi ))}{\partial _k \nu _{n_i}(\xi , k_i(\varepsilon ^2\lambda _\varepsilon , \xi ))} {\mathrm {d}}\xi + o(\varepsilon ). \end{aligned}$$

By the regularity of the functions \(k_i\), \(\nu _{n_i}\) and since \(\lambda \in I\), the second term on the right-hand side is of size \(\varepsilon \). Hence, \(\varepsilon ^2\lambda _\varepsilon = \Lambda _i^{-1}(q_\varepsilon ) + O(\varepsilon )\). Inserting this into the term

$$\begin{aligned} \bigg (\int _\mathbb {T}\frac{1}{\partial _k\nu _{n_i}(\xi , k_i(\Lambda _i^{-1}(q_\varepsilon ), \xi ))} \, {\mathrm {d}}\xi \bigg )^{-1} \int _\mathbb {T}\frac{B_i(\xi , k_{i}( \varepsilon ^2\lambda _\varepsilon , \xi ))}{\partial _k \nu _{n_i}(\xi , k_i(\varepsilon ^2\lambda _\varepsilon , \xi ))} {\mathrm {d}}\xi \end{aligned}$$

and using the regularity of all the functions involved in the above formula, we infer that

$$\begin{aligned} \varepsilon ^2 \lambda _\varepsilon&= \Lambda _i^{-1}(q_\varepsilon ) + \varepsilon \bigg (\int _\mathbb {T}\frac{1}{\partial _k\nu _{n_i}(\xi , k_i(\Lambda _i^{-1}(q_\varepsilon ), \xi ))} \, {\mathrm {d}}\xi \bigg )^{-1} \\&\quad \int _\mathbb {T}\frac{B_i(\xi , k_{i}( \Lambda _i^{-1}(q_\varepsilon ), \xi ))}{\partial _k \nu _{n_i}(\xi , k_i(\Lambda _i^{-1}(q_\varepsilon ), \xi ))} {\mathrm {d}}\xi + o(\varepsilon ), \end{aligned}$$

i.e. the desired formula. \(\square \)

Proof of Corollary 2.8

This statement is an immediate consequence of Theorem 2.5: Since the magnetic field \(b_\varepsilon \) does not depend on the angular variable \(\xi \in \mathbb {T}\), in this setting the curves \(\{ k_i \}_{i=1}^N\) of Theorem 2.5, part (a) do not depend on \(\xi \). The map \(\Lambda _i\) defined there thus turns into

$$\begin{aligned} \Lambda _i(\lambda ) = \int _\mathbb {T}k_i(\lambda , \xi ) \, {\mathrm {d}}\xi = k_i(\lambda ), \ \ \ \ \nu _{n_i}(k_i(\lambda ))=\lambda , \end{aligned}$$

which implies that \(\Lambda _i= \nu _{n_i}^{-1}\) in I. Inserting this into the asymptotic expansion of Theorem 2.5, part (b) and using that, in this case

$$\begin{aligned}&\bigg (\int _\mathbb {T}\frac{1}{\partial _k\nu _{n_i}(\xi , k_i(\Lambda _i^{-1}(q_\varepsilon ), \xi ))} \, {\mathrm {d}}\xi \bigg )^{-1} \\&\quad \int _\mathbb {T}\frac{B_i(\xi , k_{i}( \Lambda _i^{-1}(q_\varepsilon ), \xi ))}{\partial _k \nu _{n_i}(\xi , k_i(\Lambda _i^{-1}(q_\varepsilon ), \xi ))} {\mathrm {d}}\xi \overset{(2.12)}{=} \int _\mathbb {T}{B_i(k_{i}( \Lambda _i^{-1}(q_\varepsilon )), \xi )} {\mathrm {d}}\xi \\&\quad \overset{(2.17)}{=} {\bar{B}}_i(k_{i}( \Lambda _i^{-1}(q_\varepsilon ))) \int _\mathbb {T}\kappa (\xi ) {\mathrm {d}}\xi = 2\pi \bar{B}_i(k_{i}( \Lambda _i^{-1}(q_\varepsilon ))), \end{aligned}$$

we establish Corollary 2.8. \(\square \)
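At leading order, Corollary 2.8 says that the admissible energies are obtained by sampling the dispersion branch on a lattice of quasi-momenta: \(\varepsilon ^2\lambda _\varepsilon \approx \nu _l(q_\varepsilon )\) with \(q_\varepsilon \in 2\pi \varepsilon \mathbb {Z}\). The following numerical sketch of this quantization rule drops the correction \(4\pi \varepsilon {\bar{B}}_l(q_\varepsilon )\) and uses purely illustrative step-field values \(b_- = 1\), \(b_+ = 3\) and \(\varepsilon = 0.05\); it is not taken from the paper.

```python
import numpy as np

def nu0(k, b_minus=1.0, b_plus=3.0, L=12.0, n=400):
    """Ground-state eigenvalue nu_0(k) of O(k) = -d^2/dt^2 + (k + A(t))^2
    for the step field b = b_minus (t < 0), b_plus (t > 0),
    discretized by finite differences on [-L, L] (Dirichlet)."""
    t = np.linspace(-L, L, n)
    h = t[1] - t[0]
    A = np.where(t < 0, b_minus * t, b_plus * t)  # A(t) = int_0^t b
    D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.eigvalsh(-D2 + np.diag((k + A) ** 2))[0]

eps = 0.05
# admissible quasi-momenta q in 2*pi*eps*Z, and leading-order energies nu_0(q)
qs = 2 * np.pi * eps * np.arange(-8, 9)
energies = np.array([nu0(q) for q in qs])  # approximates eps^2 * lambda_eps
```

Since the branch \(\nu _0\) is strictly monotone for this monotone step field, consecutive lattice points \(q_\varepsilon \) give distinct energies lying strictly between the two Landau levels, with spacing of order \(\varepsilon \partial _k \nu _0\).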

3.2 Proof of Corollary 2.7

The proof of Corollary 2.7 relies on Proposition 2.9 and on Theorem 2.2, (a).

Proof of Corollary 2.7

We start with the proof of (2.15): Let \(\gamma , n\) be fixed. Appealing to (2.14) and the Cauchy–Schwarz inequality, we bound

$$\begin{aligned}&\int _{\mathop {dist}(x, \Gamma )> \varepsilon ^{\gamma }} |j_\varepsilon (x)| \le \biggl (\int _{\mathop {dist}(x, \Gamma )> \varepsilon ^{\gamma }} |\Psi _\varepsilon (x)|^2 \biggr )^{\frac{1}{2}}\\&\quad \biggl (\int _{\mathop {dist}(x, \Gamma ) > \varepsilon ^{\gamma }} |(\nabla + i \frac{a_\varepsilon }{\varepsilon ^2})\Psi _\varepsilon (x)|^2 \biggr )^{\frac{1}{2}} \end{aligned}$$

and, since in the domain \(\{ x \, :\, \mathop {dist}(x, \Gamma ) > \varepsilon ^{\gamma }\}\) it holds \(\varepsilon ^{-\gamma }\mathop {dist}(x, \Gamma ) > 1\), we conclude that for every \(N \in \mathbb {N}\)

$$\begin{aligned} \int _{\mathop {dist}(x, \Gamma ) > \varepsilon ^{\gamma }} |j_\varepsilon (x)|&\le \varepsilon ^{- 2\gamma N} \biggl (\int (\mathop {dist}(x, \Gamma ) \wedge 1)^{2N} |\Psi _\varepsilon (x)|^2 \biggr )^{\frac{1}{2}}\\&\quad \biggl (\int (\mathop {dist}(x, \Gamma ) \wedge 1)^{2N}|(\nabla + i \frac{a_\varepsilon }{\varepsilon ^2})\Psi _\varepsilon (x)|^2 \biggr )^{\frac{1}{2}}. \end{aligned}$$

To obtain (2.15), it remains to use Theorem 2.2, (a) on the right-hand side and use that \(\gamma < 1\) to pick N such that \(2(1-\gamma )N -1 \ge n\).

Equipped with (2.15), we begin by arguing the statement in the case \(N=1\). The general case \(N \in \mathbb {N}\) is only technically more involved and requires a modification similar to the one implemented in the proof of Theorem 2.5.

We argue that

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0} \int | \varepsilon j_\varepsilon (x)|&\lesssim 1. \end{aligned}$$
(3.2)

Using (2.15) with, e.g. \(\gamma = \frac{2}{3}\), it suffices to prove that

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0} \int _{\mathop {dist}(x, \Gamma ) < \varepsilon ^{\frac{2}{3}}} |\varepsilon j_\varepsilon | \lesssim 1 \end{aligned}$$
(3.3)

so that we only work in a small neighbourhood of \(\Gamma \) that is contained in the set U where the curvilinear coordinates (2.1) are well defined (c.f. (2.1)). We thus appeal to Proposition 2.9 and decompose the eigenfunction \(\Psi _\varepsilon \) as

$$\begin{aligned} \Psi _\varepsilon := \Psi _{\varepsilon , \text {flat}}+ R_\varepsilon , \end{aligned}$$
(3.4)

with \(\Psi _{\varepsilon , \text {flat}}\) as in the statement of Proposition 2.9. We now argue that the error term \(R_\varepsilon \) satisfies

$$\begin{aligned} \Vert R_\varepsilon \Vert _{L^2(U)} + \varepsilon \Vert (\nabla + i \varepsilon ^{-2}a_\varepsilon )R_\varepsilon \Vert _{L^2(\tilde{U})} = o(1), \end{aligned}$$
(3.5)

where \(\tilde{U}: = \{ x \in U \, :\, \mathop {dist}(x, \partial U) > \delta \}\) for some small \(\delta >0\) fixed such that \(\tilde{U} \ne \emptyset \). The first estimate for \(R_\varepsilon \) is an immediate consequence of (2.20) in Proposition 2.9, after we sum over a suitable partition of the curve \(\Gamma \) made of intervals of size comparable to \(r_\varepsilon \). For the second estimate above, we argue as follows: Writing (1.3) in the rescaled local coordinates \((\mu , \theta )\) defined in (A.6), Lemma 3.2 implies that

$$\begin{aligned} (H_0 + \varepsilon H_{1,\varepsilon } + \varepsilon ^{2}H_{2,\varepsilon }) \Psi _\varepsilon = \varepsilon ^2 \lambda _\varepsilon \Psi _\varepsilon \ \ \ \ \text {in }U. \end{aligned}$$
(3.6)

Using the definition of \(\Psi _{\varepsilon , \text {flat}}\) and the properties of \(\nu _1\), \(k_1\) and \(H_1(\cdot , \cdot )\) (c.f. Lemmas 3.1 and 3.6), we infer that, in the rescaled coordinates \((\mu , \theta )\), we also have

$$\begin{aligned} (H_0 + \varepsilon H_{1,\varepsilon } + \varepsilon ^{2}H_{2,\varepsilon }) \Psi _{\varepsilon , \text {flat}}= \varepsilon ^2\lambda _\varepsilon \Psi _{\varepsilon , \text {flat}}+ (\varepsilon H_{1,\varepsilon } + \varepsilon ^2 H_{2,\varepsilon }) \Psi _{\varepsilon , \text {flat}}+ \varepsilon f_\varepsilon , \end{aligned}$$
(3.7)

with \(\Vert f_\varepsilon \Vert _{L^2} \lesssim 1\). Using the properties of the operators \(H_{1,\varepsilon }\) and \(H_{2,\varepsilon }\) and of \(\Psi _{\varepsilon , \text {flat}}\), it follows that

$$\begin{aligned} (H_0 + \varepsilon H_{1,\varepsilon } + \varepsilon ^{2}H_{2,\varepsilon }) \Psi _{\varepsilon , \text {flat}}= \varepsilon ^2\lambda _\varepsilon \Psi _{\varepsilon , \text {flat}}+ \varepsilon F_\varepsilon , \end{aligned}$$
(3.8)

with \(\Vert F_\varepsilon \Vert _{L^2} \lesssim 1\). Hence, subtracting this equation from the one for \(\Psi _\varepsilon \) above and switching back to the macroscopic local coordinates, we infer by (3.4) that

$$\begin{aligned} H_\varepsilon R_\varepsilon = \lambda _\varepsilon R_\varepsilon + \varepsilon ^{-1} F_\varepsilon \ \ \ \ \text {in }U. \end{aligned}$$
(3.9)

We now test this equation with \(\eta ^2 R_\varepsilon \), where \(\eta \) is any cut-off for \(\tilde{U}\) in U. This yields

$$\begin{aligned}&\Vert (\nabla + i \varepsilon ^{-2}a_\varepsilon ) R_\varepsilon \Vert _{L^2(\tilde{U})}^2 \lesssim (\lambda _\varepsilon + \varepsilon ^{-1}) \Vert R_\varepsilon \Vert _{L^2(U)} \nonumber \\&\quad + \Vert (\nabla + i \varepsilon ^{-2}a_\varepsilon ) R_\varepsilon \Vert _{L^2(U \backslash \tilde{U})} + \Vert R_\varepsilon \Vert _{L^2(U \backslash \tilde{U})} . \end{aligned}$$
(3.10)
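Schematically, (3.10) follows by testing (3.9) with \(\eta ^2 \overline{R}_\varepsilon \) and integrating by parts for the magnetic Dirichlet form. A sketch with constants suppressed, writing \(\nabla _a := \nabla + i\varepsilon ^{-2}a_\varepsilon \):

```latex
\begin{aligned}
\int \eta^2 |\nabla_a R_\varepsilon|^2
 &= \mathrm{Re}\int \eta^2 \,\overline{R}_\varepsilon\, H_\varepsilon R_\varepsilon
   - 2\,\mathrm{Re}\int \eta\, \overline{R}_\varepsilon\, \nabla\eta \cdot \nabla_a R_\varepsilon \\
 &\le \lambda_\varepsilon \int \eta^2 |R_\varepsilon|^2
   + \varepsilon^{-1} \int \eta^2 |R_\varepsilon|\,|F_\varepsilon|
   + 2 \int_{U \setminus \tilde U} |\nabla \eta|\, |R_\varepsilon|\, |\nabla_a R_\varepsilon|,
\end{aligned}
```

where the last term is supported in \(U \setminus \tilde{U}\) (since \(\eta = 1\) on \(\tilde{U}\)) and accounts for the lower-order terms on the right-hand side of (3.10).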

Since every point in \(U \backslash \tilde{U}\) is at distance \(\sim 1\) away from \(\Gamma \), the localization estimate of Theorem 2.2, (a) for \(\Psi _\varepsilon \) and the decay of the functions \(H_n\) appearing in the definition of \(\Psi _{\varepsilon , \text {flat}}\) (c.f. Lemma 3.1) imply that for every \(n \in \mathbb {N}\)

$$\begin{aligned}&\Vert (\nabla + i \varepsilon ^{-2}a_\varepsilon ) R_\varepsilon \Vert _{L^2(U \backslash \tilde{U})} + \Vert R_\varepsilon \Vert _{L^2(U \backslash \tilde{U})}\\&\quad \lesssim \Vert (\nabla + i \varepsilon ^{-2}a_\varepsilon ) \Psi _\varepsilon \Vert _{L^2(U \backslash \tilde{U})} + \Vert \Psi _\varepsilon \Vert _{L^2(U \backslash \tilde{U})} \\&\qquad + \Vert (\nabla + i \varepsilon ^{-2}a_\varepsilon ) \Psi _{\varepsilon , \text {flat}}\Vert _{L^2(U \backslash \tilde{U})} + \Vert \Psi _{\varepsilon , \text {flat}}\Vert _{L^2(U \backslash \tilde{U})} \lesssim _n \varepsilon ^n. \end{aligned}$$

Inserting this into (3.10) implies the second inequality in (3.5).

We now insert (3.4) into (2.14) so that

$$\begin{aligned} \int _{\mathop {dist}(x, \Gamma )< \varepsilon ^{\frac{2}{3}}} |\varepsilon j_\varepsilon |&\le \varepsilon \int _{\mathop {dist}(x, \Gamma )< \varepsilon ^{\frac{2}{3}}} |\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\nabla + i \varepsilon ^{-2}a_\varepsilon ) \Psi _{\varepsilon , \text {flat}} )|\\&\quad + \varepsilon \int _{\mathop {dist}(x, \Gamma )< \varepsilon ^{\frac{2}{3}}} |\text {Im}((\overline{\Psi }_{\varepsilon , \text {flat}} + \overline{R}_\varepsilon ) (\nabla + i \varepsilon ^{-2}a_\varepsilon ) R_\varepsilon )| \\&\quad + \varepsilon \int _{\mathop {dist}(x, \Gamma ) < \varepsilon ^{\frac{2}{3}}} |\text {Im}(\overline{R}_\varepsilon (\nabla + i \varepsilon ^{-2}a_\varepsilon )\Psi _{\varepsilon , \text {flat}})|. \end{aligned}$$

By the Cauchy–Schwarz inequality, the definition of \(\Psi _{\varepsilon , \text {flat}}\) and (3.5), the last two terms vanish in the limit \(\varepsilon \rightarrow 0\). Hence,

$$\begin{aligned} \int _{\mathop {dist}(x, \Gamma )< \varepsilon ^{\frac{2}{3}}} |\varepsilon j_\varepsilon | \le \varepsilon \int _{\mathop {dist}(x, \Gamma ) < \varepsilon ^{\frac{2}{3}}} |\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\nabla + i \varepsilon ^{-2}a_\varepsilon ) \Psi _{\varepsilon , \text {flat}} )| + o(1). \end{aligned}$$
(3.11)

We establish (3.2) from this inequality relying on the explicit formula for \(\Psi _{\varepsilon , \text {flat}}\): We first remark that, thanks to Lemma 3.3, the operator \((\nabla + i \varepsilon ^{-2}a_\varepsilon )\) may be written in curvilinear coordinates as

$$\begin{aligned} \nabla + i \varepsilon ^{-2}a_\varepsilon&= \biggl (\partial _s + \frac{i}{2}\varepsilon ^{-2} (\kappa '(\xi )\alpha (\xi ) + \kappa (\xi )\alpha '(\xi ))s^2\biggr ) \mathbf {N}\\&\quad + \frac{1}{1+ \kappa (\xi ) s}\biggl (\partial _\xi + i \varepsilon ^{-1}\int _0^{\frac{s}{\varepsilon }} (1 + \varepsilon \kappa (\xi ) t) b(\xi , t) \, {\mathrm {d}}t \biggr ) \mathbf {T}\\&\quad + i V_1 + i \varepsilon ^{-2} s^3 V_2, \end{aligned}$$

with \(V_1, V_2\) satisfying the bounds in Lemma 3.3 and the functions \(\kappa , \alpha \in C^1(\mathbb {T})\) defined there. This, together with the change of variable \(s = \varepsilon \mu \), implies that

$$\begin{aligned}&\varepsilon \int _{\mathop {dist}(x, \Gamma )< \varepsilon ^{\frac{2}{3}}} |\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\nabla + i \varepsilon ^{-2}a_\varepsilon ) \Psi _{\varepsilon , \text {flat}} )| \\&\quad \lesssim \varepsilon ^2 \int _\mathbb {T}\int _{|\mu |< \varepsilon ^{-\frac{1}{3}}} \Big |\text {Im}\Big ( \overline{\Psi }_{\varepsilon , \text {flat}} \frac{1}{1+ \varepsilon \kappa (\xi )\mu }\Big (\partial _\xi + i\varepsilon ^{-1} \int _0^{\mu }(1 + \varepsilon \kappa (\xi ) t) b(\xi , t) \, {\mathrm {d}}t \Big ) \Psi _{\varepsilon , \text {flat}}\Big )\Big |\\&\qquad + \varepsilon ^2 \int _\mathbb {T}\int _{|\mu |< \varepsilon ^{-\frac{1}{3}}} |\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\varepsilon ^{-1}\partial _\mu + i \mu ^2) \Psi _{\varepsilon , \text {flat}})| \\&\qquad + \varepsilon ^2 \int _\mathbb {T}\int _{|\mu | < \varepsilon ^{-\frac{1}{3}}} |\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (i V_1 + i \varepsilon ^2 \mu ^2 V_2) \Psi _{\varepsilon , \text {flat}})|. \end{aligned}$$

Since the functions \(\Psi _{\varepsilon , \text {flat}}\) may be written as \(\Psi _{\varepsilon , \text {flat}}= F(\xi ) G(\mu , \xi )\) with G a real-valued function, the term containing the derivative \(\partial _\mu \) vanishes. Furthermore, using the properties of the functions \(H_1(\cdot , \cdot )\) in the definition of \(\Psi _{\varepsilon , \text {flat}}\) (c.f. Lemma 3.1), both the last term and the remaining part of the second-to-last term vanish in the limit \(\varepsilon \rightarrow 0\). This yields that

$$\begin{aligned}&\varepsilon \int _{\mathop {dist}(x, \Gamma )< \varepsilon ^{\frac{2}{3}}} |\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\nabla + i \varepsilon ^{-2}a_\varepsilon ) \Psi _{\varepsilon , \text {flat}} )| \\&\quad \lesssim \varepsilon ^2 \int _\mathbb {T}\int _{|\mu | < \varepsilon ^{-\frac{1}{3}}} \Big |\text {Im}\Big (\overline{\Psi }_{\varepsilon , \text {flat}} \frac{1}{1+ \varepsilon \kappa (\xi )\mu }\Big (\partial _\xi + i\varepsilon ^{-1} \int _0^{\mu }(1 + \varepsilon \kappa (\xi ) t) b(\xi , t) \, {\mathrm {d}}t \Big ) \Psi _{\varepsilon , \text {flat}}\Big )\Big | + o(1). \end{aligned}$$

Using a similar argument, and relying again on the explicit formulation of \(\Psi _{\varepsilon , \text {flat}}\), on the properties of the functions \(H_1(\cdot , \cdot )\) (Lemma 3.1) and on the boundedness of the functions \(k_1\) (Lemma 3.6) and \(B_1\) (c.f. (2.12)), the above inequality further reduces to

$$\begin{aligned}&\varepsilon \int _{\mathop {dist}(x, \Gamma )< \varepsilon ^{\frac{2}{3}}} |\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\nabla + i \varepsilon ^{-2}a_\varepsilon ) \Psi _{\varepsilon , \text {flat}} )| \\&\quad \lesssim \varepsilon ^2 \int _\mathbb {T}\int _{|\mu | < \varepsilon ^{-\frac{1}{3}}} \Big |\text {Im}\Big (\overline{\Psi }_{\varepsilon , \text {flat}} \Big (\partial _\xi + i\varepsilon ^{-1} \int _0^{\mu } b(\xi , t) \, {\mathrm {d}}t \Big ) \Psi _{\varepsilon , \text {flat}}\Big )\Big | + o(1). \end{aligned}$$

Using again the formulation for \(\Psi _{\varepsilon , \text {flat}}\), we notice that

$$\begin{aligned}&\text{ Im }(\overline{\Psi }_{\varepsilon , \text{ flat }} \biggl (\partial _\xi + i\varepsilon ^{-1} \int _0^{\mu } b(\xi , t) {\mathrm {d}}t \biggr ) \Psi _{\varepsilon , \text{ flat }}) \nonumber \\ {}&= \varepsilon ^{-2}\frac{|\partial _k\nu _{1}(k_1(\xi ), \xi )|^2}{\int _\mathbb {T}|\partial _k\nu _1(k_1(y), y)|^2 \, {\mathrm {d}}y} (k_1(\xi ) + \int _0^\mu b(\xi , t) \, {\mathrm {d}}t) |H_1(k_1(\xi ), \mu )|^2 \nonumber \\ {}&\quad + \varepsilon ^{-\frac{1}{2}}\text{ Im }\left( \overline{\Psi }_{\varepsilon , \text{ flat }} e^{\frac{i}{\varepsilon } \int _0^\xi k_1(y) \, {\mathrm {d}}y} \right. \nonumber \\ {}&\left. \quad \times \partial _\xi \left( \frac{|\partial _k\nu _{1}(k_1(\xi ), \xi )|}{\bigg (\int _\mathbb {T}|\partial _k\nu _1(k_1(y), y)|^2 \, {\mathrm {d}}y\bigg )^{\frac{1}{2}}} e^{i\int _0^\xi B_1(k_1(y),y)\, {\mathrm {d}}y} H_1(k_1(\xi ), \mu )\right) \right) \nonumber \\ \end{aligned}$$
(3.12)

Using again the regularity of the curve \(k_1\) and of \(B_1\) and the properties of \(H_1(\cdot , \cdot )\), we infer that

$$\begin{aligned}&\varepsilon ^{\frac{3}{2}} \int _\mathbb {T}\int _{|\mu | < \varepsilon ^{-\frac{1}{3}}} \nonumber \\&\quad |\text {Im}\left( \overline{\Psi }_{\varepsilon , \text {flat}} e^{\frac{i}{\varepsilon } \int _0^\xi k_1(y) \, {\mathrm {d}}y}\right. \nonumber \\&\qquad \times \left. \partial _\xi \left( \frac{|\partial _k\nu _{1}(k_1(\xi ), \xi )|}{\left( \int _\mathbb {T}|\partial _k\nu _1(k_1(y), y)|^2 \, {\mathrm {d}}y\right) ^{\frac{1}{2}}} e^{i \int _0^\xi B_1(k_1(y),y)\, {\mathrm {d}}y} H_1(k_1(\xi ), \mu )\right) \right) | \nonumber \\&\quad = o(1) \end{aligned}$$
(3.13)

so that

$$\begin{aligned}&\varepsilon \int _{\mathop {dist}(x, \Gamma )< \varepsilon ^{\frac{2}{3}}} |\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\nabla + i \varepsilon ^{-2}a_\varepsilon ) \Psi _{\varepsilon , \text {flat}})| \\&\quad \lesssim \varepsilon ^2 \int _\mathbb {T}\int _{|\mu | < \varepsilon ^{-\frac{1}{3}}} \frac{|\partial _k\nu _{1}(k_1(\xi ), \xi )|^2}{\int _\mathbb {T}|\partial _k\nu _1(k_1(y), y)|^2 \, {\mathrm {d}}y} \Big |k_1(\xi ) + \int _0^\mu b(\xi , t) \, {\mathrm {d}}t\Big | \, |H_1(k_1(\xi ), \mu )|^2 + o(1). \end{aligned}$$

By the definition of the set \(\Sigma \), the term \( \frac{|\partial _k\nu _{1}(k_1(\xi ), \xi )|^2}{\int _\mathbb {T}|\partial _k\nu _1(k_1(y), y)|^2 \, {\mathrm {d}}y}\) is bounded in \(\mathbb {T}\). Moreover, since \(b \in L^\infty (\mathbb {R}^2)\), \(k_1 \in C^0(\mathbb {T})\) and the function \(H_1(k_1(\xi ), \cdot )\) decays exponentially fast, we infer that the first term on the right-hand side is uniformly bounded. Inserting this inequality into (3.11), we establish (3.2).

Equipped with (2.15) and (3.2), we are now ready to prove the main statement. Let \(\rho \in \mathcal {D}(\mathbb {R}^2)\): Since the function \(\rho \) is uniformly continuous in any compact set, (3.2) also implies that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \varepsilon \int j_\varepsilon \rho = \lim _{\varepsilon \rightarrow 0} \varepsilon \int _\mathbb {T}\rho (\xi , 0) \int _{|s| < \varepsilon ^{\frac{2}{3}}} j_\varepsilon . \end{aligned}$$

From this identity, the proof of the corollary follows by a computation very similar to the one for (3.2): As in the proof of the latter, since \(\rho \) is bounded in a neighbourhood of \(\Gamma \), we reduce the previous identity to

$$\begin{aligned}&\lim _{\varepsilon \rightarrow 0} \varepsilon \int _\mathbb {T}\rho (\xi , 0) \int _{|s|< \varepsilon ^{\frac{2}{3}}} j_\varepsilon \\&\quad = \lim _{\varepsilon \rightarrow 0} \varepsilon ^{2} \int _\mathbb {T}\rho (\xi , 0) \int _{|\mu |< \varepsilon ^{-\frac{1}{3}}} \text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\partial _\xi + i \varepsilon ^{-1} \int _0^\mu b(\xi , t) \, {\mathrm {d}}t) \Psi _{\varepsilon , \text {flat}}) \end{aligned}$$

and, using (3.12) and (3.13), also to

$$\begin{aligned}&\lim _{\varepsilon \rightarrow 0} \varepsilon \int _\mathbb {T}\rho (\xi , 0) \int _{|s|< \varepsilon ^{\frac{2}{3}}} j_\varepsilon \nonumber \\&\quad = \lim _{\varepsilon \rightarrow 0} \int _\mathbb {T}\rho (\xi , 0) \frac{|\partial _k\nu _{1}(k_1(\xi ), \xi )|^2}{\int _\mathbb {T}|\partial _k\nu _1(k_1(y), y)|^2 \, {\mathrm {d}}y} \int _{|\mu |< \varepsilon ^{-\frac{1}{3}}} \Big (k_1(\xi ) + \int _0^\mu b(\xi , t) \, {\mathrm {d}}t\Big ) |H_1(k_1(\xi ), \mu )|^2 \end{aligned}$$
(3.14)

By (A.22) in Lemma 3.5, we have the convergence

$$\begin{aligned} \int _{|\mu |< \varepsilon ^{-\frac{1}{3}}} (k_1(\xi ) + \int _0^\mu b(\xi , t) \, {\mathrm {d}}t) |H_1(k_1(\xi ), \mu )|^2 \rightarrow \partial _{k}\nu _1(k_1(\xi ), \xi ). \end{aligned}$$
(3.15)

This, together with identity (3.14) and the boundedness of \(\rho \) and of \( \frac{|\partial _k\nu _{1}(k_1(\xi ), \xi )|^2}{\int _\mathbb {T}|\partial _k\nu _1(k_1(y), y)|^2 \, {\mathrm {d}}y}\), implies the statement of the corollary in the case \(N=1\).
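The convergence (3.15) is, up to the paper's normalization of \(H_1\), an instance of the Feynman–Hellmann identity for the fiber operator \(\mathcal {O}(k) = -\partial _t^2 + (k + \int _0^t b)^2\): for an \(L^2\)-normalized eigenfunction h, \(\partial _k \nu = \langle h, \partial _k \mathcal {O}(k)\, h\rangle = 2\int (k + \int _0^t b)\, |h|^2 \, {\mathrm {d}}t\). A numerical sanity check of this identity with an illustrative step field (the values \(b_-=1\), \(b_+=3\), \(k=0.5\) are hypothetical):

```python
import numpy as np

def ground(k, b_minus=1.0, b_plus=3.0, L=12.0, n=400):
    """Ground eigenpair of the discretized fiber operator
    O(k) = -d^2/dt^2 + (k + A(t))^2 on [-L, L] (Dirichlet)."""
    t = np.linspace(-L, L, n)
    h = t[1] - t[0]
    A = np.where(t < 0, b_minus * t, b_plus * t)  # A(t) = int_0^t b
    D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    w, v = np.linalg.eigh(-D2 + np.diag((k + A) ** 2))
    return w[0], v[:, 0], A  # eigenvector is l2-normalized

k, dk = 0.5, 1e-4
nu_p, _, _ = ground(k + dk)
nu_m, _, _ = ground(k - dk)
slope = (nu_p - nu_m) / (2 * dk)     # central difference for d(nu_0)/dk
nu, u, A = ground(k)
fh = 2 * np.sum((k + A) * u**2)      # Feynman-Hellmann: <u, dO/dk u>
```

For the discrete matrix the identity \(\partial _k \nu _0 = \langle u, \partial _k \mathcal {O}\, u\rangle \) is exact (first-order perturbation of a simple eigenvalue), so `slope` and `fh` agree up to the \(O(dk^2)\) error of the central difference.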

In the general case \(N \ge 1\), the proof of the corollary may be argued in a similar way, with only a few modifications when we compute the term \(\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\partial _\xi + i \varepsilon ^{-1} \int _0^\mu b(\xi , t) \, {\mathrm {d}}t ) \Psi _{\varepsilon , \text {flat}})\) in (3.12). To simplify the notation, let us write

$$\begin{aligned} \Psi _{\varepsilon , \text {flat}}(\xi , s)&= \varepsilon ^{-\frac{1}{2}} \sum _{\ell =1}^N e^{\frac{i}{\varepsilon }\int _0^\xi k_{\varepsilon ,\ell }(y) \, {\mathrm {d}}y} F_{\ell }(\xi ) H_{n_\ell }(k_{\ell }(\xi ), \frac{s}{\varepsilon }), \ \ \\ F_{\ell }(\xi )&:= A_\ell e^{i\int _0^\xi B_\ell (k_\ell (y), y) \, {\mathrm {d}}y} \frac{\partial _k\nu _{n_\ell }(k_\ell (\xi ), \xi )}{\int _\mathbb {T}\partial _k \nu _{n_\ell }(k_{\varepsilon ,\ell }(y),y) \, {\mathrm {d}}y}. \end{aligned}$$

With this notation, (3.12) turns into

$$\begin{aligned}&\text {Im}(\overline{\Psi }_{\varepsilon , \text {flat}} (\partial _\xi + i \varepsilon ^{-1} \int _0^\mu b(\xi , t) \, {\mathrm {d}}t ) \Psi _{\varepsilon , \text {flat}}) \\&= \varepsilon ^{-1} \sum _{\ell , m =1}^N \overline{F_m}(\xi ) F_\ell (\xi ) e^{\frac{i}{\varepsilon }\int _0^\xi (k_\ell (y) - k_m(y)) \, {\mathrm {d}}y} \Big (k_\ell (\xi ) +\int _0^\mu b(\xi , t) \, {\mathrm {d}}t\Big ) H_{n_\ell }(k_\ell (\xi ), \mu ) H_{n_m}(k_m(\xi ),\mu ) + o(1). \end{aligned}$$

The proof of (3.2) for \(N \ge 1\) follows from this identity with an argument similar to the one for case \(N=1\). The proof of the main statement follows again from (3.2) and the identity above as done for the case \(N=1\) provided that

$$\begin{aligned}&\sum _{m \ne \ell } \int _\mathbb {T}\rho (\xi , 0)\overline{F_m}(\xi ) F_\ell (\xi ) e^{\frac{i}{\varepsilon }\int _0^\xi (k_\ell (y) - k_m(y)) \, {\mathrm {d}}y} \int _{|\mu | < \varepsilon ^{-\frac{1}{3}}} \left( k_\ell (\xi ) + \int _0^\mu b(\xi , t) \, {\mathrm {d}}t\right) H_{n_\ell }(k_\ell (\xi ), \mu ) H_{n_m}(k_m(\xi ),\mu ) = o(1). \end{aligned}$$

This identity may be obtained by the Riemann–Lebesgue lemma [10, Proposition 3.2.1], since the function \(g(\xi ) := \rho (\xi , 0) \overline{F_m}(\xi ) F_\ell (\xi ) \int _{|\mu | < \varepsilon ^{-\frac{1}{3}}} (k_\ell (\xi ) + \int _0^\mu b(\xi , t) \, {\mathrm {d}}t) H_{n_\ell }(k_\ell (\xi ), \mu ) H_{n_m}(k_m(\xi ), \mu ) \, {\mathrm {d}}\mu \in C^1(\mathbb {T})\) and \(|k_\ell (\xi ) - k_m(\xi )| > \delta \) for every \(\xi \in \mathbb {T}\) (c.f. also Remark 2.4). \(\square \)
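The Riemann–Lebesgue mechanism used in this last step can be illustrated numerically: for a smooth \(2\pi \)-periodic amplitude, the oscillatory integral decays rapidly as the frequency \(m \sim 1/\varepsilon \) grows. In the sketch below, the hypothetical amplitude \(g(\xi ) = (2+\cos \xi )^{-1}\) stands in for the smooth prefactor \(\rho \, \overline{F_m} F_\ell \, (\cdots )\) in the integrand; it is chosen only because its Fourier coefficients are known to decay exponentially.

```python
import numpy as np

def osc_integral(m, num=4096):
    """|int_0^{2 pi} g(xi) e^{i m xi} d xi| for the smooth periodic
    amplitude g(xi) = 1/(2 + cos xi), via a midpoint sum
    (spectrally accurate for periodic integrands)."""
    xi = (np.arange(num) + 0.5) * 2 * np.pi / num
    g = 1.0 / (2.0 + np.cos(xi))
    return abs(np.sum(g * np.exp(1j * m * xi)) * 2 * np.pi / num)

# frequencies m ~ 1/eps: the integral decays like (2 - sqrt(3))^m
vals = [osc_integral(m) for m in (1, 10, 100)]
```

For this amplitude the integral equals \(\frac{2\pi }{\sqrt{3}}(2-\sqrt{3})^{m}\) in absolute value, so the computed values drop from order one at \(m=1\) to below \(10^{-5}\) at \(m=10\) and to numerical roundoff at \(m=100\).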

3.3 Proof of Proposition 2.9

The proof of Proposition 2.9 is very similar to the one for [9, Theorem 2.4], and we refer to [9, Subsection 2.3] for a detailed discussion of the general strategy behind these proofs. We thus merely sketch the steps of the argument that only require a trivial modification of the proof of [9, Theorem 2.4] and focus on the new parts. We stress that the main technical challenge in the current paper is the macroscopic change of the magnetic field \(b_\varepsilon \) along the curve \(\Gamma \). In the blow-up analysis, the magnified asymptotic problem depends on the point \(\xi \in \Gamma \) around which we perform the blow-up: for every \(\xi \in \Gamma \), the limit problem resembles the standard Iwatsuka model described in Sect. A.1 with magnetic field \(b= b(\xi ; \cdot )\). As the microscopic limit problems do depend on the macroscopic coordinate \(\xi \in \mathbb {T}\), also the solution k to the eikonal equation (A.23) depends on \(\xi \in \mathbb {T}\).

The proof of Proposition 2.9 relies on the analogue of [9, Proposition 5.1] adapted to this setting. This means, in particular, that the exact same statements of [9, Proposition 5.1] remain true if we replace the harmonic oscillator \(\mathcal {O}(k):= - \partial _x^2 + ( x - k)^2\) with \(\mathcal {O}(k, \xi )\) as defined in (A.3). This result may be proved exactly as [9, Proposition 5.1] if we rely on Lemma 3.1 instead of [9, Lemma 2.1 and Lemma 5.4]. Throughout the proof below, we thus refer to [9, Proposition 5.1] with the understanding that it holds for the operator \(\mathcal {O}(k, \xi )\).

Proof of Proposition 2.9

We start by focussing on the case \(N=1\): This means that there is only one curve \(k = k_1(\xi )\) solving (2.7). The general case \(N \ge 1\) is only technically more challenging: We may upgrade the argument for \(N=1\) to any number of curves \(N \in \mathbb {N}\) via the same adaptations used in [9] to pass from [9, Theorem 2.4] to [9, Theorem 2.5]. We briefly comment on this issue at the end of the proof.

We follow the same argument of [9, Theorem 2.4]: Using the localization result of Theorem 2.2, (a), we may reduce to studying (1.3) for the family \((\tilde{\Psi }_\varepsilon , \lambda _\varepsilon )\) in the neighbourhood U where the local coordinates are well defined (c.f. [9, (3.17)–(3.18)]). Thanks to this, we may apply the results of Sect. 3.3 and appeal to Lemma 3.2 to rewrite, up to a change of gauge \(\Psi _\varepsilon \mapsto e^{i\theta _\varepsilon } \Psi _\varepsilon \), the equation for \(e^{i\theta _\varepsilon } \Psi _\varepsilon \) in the microscopic coordinates \((\mu , \theta )\) introduced in (A.6). Throughout this proof, we simplify the notation by writing \(\Psi _\varepsilon \) instead of \(e^{i\theta _\varepsilon } \Psi _\varepsilon \).

We define the quantity

$$\begin{aligned} m_\varepsilon := \bigg (\max _{\xi \in \mathbb {T}} \int _{d(\tilde{\xi }, \xi ) < \varepsilon }\int |\Psi _\varepsilon |^2\bigg )^{\frac{1}{2}} \end{aligned}$$
(3.16)

and, for every \(\xi \in \mathbb {T}\) fixed, we consider the rescaled function

$$\begin{aligned} \tilde{\Psi }_\varepsilon (\xi , \theta , \mu ):= \frac{\varepsilon }{m_\varepsilon }\Psi _\varepsilon ( \xi + \varepsilon \theta , \varepsilon \mu ). \end{aligned}$$
(3.17)
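As a consistency check on the normalization chosen in (3.16)–(3.17) (a sketch; the exact constants play no role in what follows), a change of variables shows that the rescaled function carries at most unit local mass:

$$\begin{aligned} \int _{|\theta |< 1} \int |\tilde{\Psi }_\varepsilon (\xi , \theta , \mu )|^2 \, {\mathrm {d}}\mu \, {\mathrm {d}}\theta = \frac{1}{m_\varepsilon ^2} \int _{d(\tilde{\xi }, \xi )< \varepsilon }\int |\Psi _\varepsilon |^2 \le 1, \end{aligned}$$

with equality at any \(\xi \) attaining the maximum in (3.16). This is the uniform bound that feeds into the compactness argument of Step 1 below.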
  1.

    The first step is the analogue of [9, Proposition 3.3] and amounts to showing that for every \(\xi \in \mathbb {T}\) and \(|\omega | \ll \varepsilon ^{-1}\), we have

    $$\begin{aligned}&\int _{|\theta - \omega | \le 1}\int (1 + |\mu |)^{-6} |\tilde{\Psi }_\varepsilon (\xi ; \theta , \mu ) - e^{\frac{i}{\varepsilon }\int _\xi ^{\xi +\varepsilon \theta } k(y) \, {\mathrm {d}}y } A_\varepsilon (\xi ) \nonumber \\&\quad (1 + i \varepsilon \theta C_1(\xi )) H(\xi + \varepsilon \theta , k(\xi + \varepsilon \theta ), \mu )|^2 \, {\mathrm {d}}\mu {\mathrm {d}}\theta \nonumber \\&\quad \lesssim \varepsilon ^2|\omega |^3 + \varepsilon \end{aligned}$$
    (3.18)

    for some \(A_\varepsilon (\xi ) \in \mathbb {C}\), \(|A_\varepsilon (\xi ) | \lesssim 1\) and where

    $$\begin{aligned} C_1(\xi ) = \frac{B_1(\xi , k_1(\xi ))}{\partial _k \nu _1(\xi , k_1(\xi ))} + i \frac{d}{d\xi }\log \bigg (\partial _k \nu _1(\xi , k_1(\xi ))\bigg ), \ \ \text { where }B_1\text { is as in }(2.12). \end{aligned}$$
    (3.19)

    We prove (3.18) as done in [9, proof of Proposition 3.3, Step 1]: Using definitions (3.17) and (3.16), and the equation for \(\tilde{\Psi }_\varepsilon \), we have that \(\tilde{\Psi }_\varepsilon \) is uniformly bounded in \(H^1( \{|\theta | < R\} \times \mathbb {R})\) for every \(R> 0\). Hence, up to a subsequence, we have that \(\tilde{\Psi }_\varepsilon \rightharpoonup \Psi _0\) in \(H^1( \{|\theta | < R\} \times \mathbb {R})\), for every \(R> 0\). We thus use Lemma 3.2 to pass to the limit in the equation for \(\tilde{\Psi }_\varepsilon \) and infer that \(\Psi _0(\theta , \mu ) = A(\xi ) e^{i k(\xi ) \theta } H_1(\xi , k(\xi ), \mu )\) for some \(A(\xi ) \in \mathbb {C}\) such that \(|A(\xi )| \le 1\) for every \(\xi \in \mathbb {T}\).

    Arguing as in [9, Proof of Proposition 3.3, Step 2], we now proceed to consider the next-order approximation: We define the term \(\Psi _{\varepsilon ,1}:= \frac{\tilde{\Psi }_\varepsilon - \Psi _{\varepsilon ,0}}{\varepsilon }\), with \(\Psi _{\varepsilon ,0}= A_\varepsilon (\xi ) e^{i k_\varepsilon (\xi ) \theta } H_1(\xi , k_\varepsilon (\xi ), \mu )\) and \(A_\varepsilon (\xi ) \rightarrow A(\xi )\) defined as in [9, (3.52)]. We use again the decomposition for \(H_\varepsilon \) of Lemma 3.2 to write:

    $$\begin{aligned} (H_0 - \varepsilon ^2\lambda _\varepsilon )\Psi _{\varepsilon ,1} = H_{1,\varepsilon }\Psi _\varepsilon + \varepsilon H_{2,\varepsilon } \Psi _\varepsilon + f_{0,\varepsilon } + \theta f_{1,\varepsilon } + \varepsilon \theta ^2 f_{2,\varepsilon }, \end{aligned}$$
    (3.20)

    with \(H_{1,\varepsilon }, H_{2,\varepsilon }\) defined as in Lemma 3.2 and

    $$\begin{aligned} f_{0,\varepsilon }&:= \left( \int _0^\mu \partial _\xi b(\xi , t) \, {\mathrm {d}}t \right) \Psi _\varepsilon ,\\ f_{1,\varepsilon }&:= - 2 i \bigg (\int _0^\mu \frac{b(\xi + \varepsilon \theta , t) - b(\xi , t)}{\varepsilon \theta } \, {\mathrm {d}}t \bigg ) \bigg (\partial _\theta + i \int _0^\mu b(\xi , t) \, {\mathrm {d}}t\bigg ) \Psi _\varepsilon , \\ f_{2,\varepsilon }&:= \bigg (\int _0^\mu \frac{b(\xi + \varepsilon \theta , t) - b(\xi , t)}{\varepsilon \theta } \, {\mathrm {d}}t \bigg )^2 \Psi _\varepsilon . \end{aligned}$$

    We stress that the additional terms \(f_{0,\varepsilon }, f_{1,\varepsilon }, f_{2,\varepsilon }\) appear in the equation since the operator \(H_{0,\varepsilon }\) in Lemma 3.2 does depend on the variable \(\theta \). In contrast with [9, (3.57) and (3.58)], these new terms imply that the right-hand side in the previous equation grows as \(\varepsilon \theta ^2 + \theta \). By [9, Proposition 5.1], this yields that the function \(\Psi _{\varepsilon ,1}\) satisfies, for every \(R > 0\), a weighted bound analogous to [9, (3.60)], now with right-hand side growing at most quadratically in \(\theta \).

    By the same arguments of [9, Proof of Proposition 3.3, (3.60)], we may pass to the (weak) limit \(\varepsilon \rightarrow 0\). By the previous bound, the limit function \(\Psi _1\) grows at most quadratically in the variable \(\theta \) and, by (3.20), it solves

    $$\begin{aligned} (H_0 - \lambda )\Psi _{1}&= H_{1}\Psi _0 + \left( \int _0^\mu \partial _\xi b(\xi , t) \, {\mathrm {d}}t \right) \Psi _0 \nonumber \\&\quad - 2 i \theta \bigg (\int _0^\mu \partial _\xi b(\xi , t) \, {\mathrm {d}}t \bigg ) \bigg (\partial _\theta + i \int _0^\mu b(\xi , t) \, {\mathrm {d}}t\bigg )\Psi _0. \end{aligned}$$
    (3.21)

    We now want to identify \(\Psi _1\), starting from the previous equation: we claim that

    $$\begin{aligned} \Psi _1(\xi , \theta , \mu ):&= A(\xi ) \biggl ((i \theta C_1(\xi ) + i\theta ^2 \frac{k'(\xi )}{2}) H(\xi , k(\xi ), \mu ) \nonumber \\&\quad + ( \theta \frac{d}{d\xi } H(\xi , k(\xi ), \mu ) + W(\xi , \mu )) \biggr ) e^{i k(\xi ) \theta } \end{aligned}$$
    (3.23)

    with \(|C_1(\xi )| \lesssim 1\) and \(W(\xi , \cdot ) \perp H(\xi , k(\xi ), \cdot )\), \(\Vert W(\xi , \cdot ) \Vert _{L^2} \lesssim 1\). This is the analogue of [9, (3.54)], which is proved by appealing to [9, Lemma 5.2]. Here, we appeal to the analogue of that lemma adapted to the current setting. We thus sketch the argument below: Applying the Fourier transform in the variable \(\theta \) to equation (3.21), the distribution \(\hat{\Psi }_1= \hat{\Psi }_1(k , \mu )\) solves, in the sense of Schwartz distributions as in [9, (5.28), proof of Lemma 5.2], an equation of the form

    $$\begin{aligned} (\mathcal {O}(k) - \lambda )\hat{\Psi }_{1}&= f_1(\mu ) \delta (k - k(\xi )) + f_2(\mu ) \partial _k \delta (k - k(\xi )). \end{aligned}$$
    (3.24)

    Here, the functions \(f_1, f_2 : \mathbb {R}\rightarrow \mathbb {C}\) are defined as

    $$\begin{aligned} f_1(\mu )&:=\biggl ( - \kappa (\xi ) \partial _\mu - 2 \kappa (\xi ) \mu |k(\xi )|^2 - 2 \kappa (\xi ) \left( \int _0^\mu (\mu - t) b(\xi , t) \, {\mathrm {d}}t \right) k(\xi )\\&\quad - i \left( 3 \alpha '(\xi ) \kappa (\xi ) + \alpha (\xi ) \kappa '(\xi )\right) \mu ^2 \partial _\mu ^2 \\&\quad - i (3 \alpha '(\xi ) \kappa (\xi ) + \alpha (\xi ) \kappa '(\xi ))\mu - 2\kappa (\xi ) \left( \int _0^\mu (\mu - t) b(\xi , t) \, {\mathrm {d}}t \right) \\&\quad \left( \int _0^\mu b(\xi , t) \, {\mathrm {d}}t\right) \biggr )H_1(\xi , k(\xi ), \mu )\\&\quad + i \left( \int _0^\mu \partial _\xi b(\xi , t) \, {\mathrm {d}}t\right) H_1(\xi , k(\xi ), \mu ),\\ f_2(\mu )&:= - 2 i \bigg (\int _0^\mu \partial _\xi b(\xi , t) \, {\mathrm {d}}t \bigg )\bigg ( \int _0^\mu b(\xi , t) \, {\mathrm {d}}t - k(\xi )\bigg )H_1(\xi , k(\xi ), \mu ). \end{aligned}$$

    We recall that here the function \(\kappa \) denotes the curvature of \(\Gamma \) and \(\alpha \) is defined as in (A.7). Using the same argument as in [9, Proof of Lemma 5.2] for \(\Psi _1\), we infer that

    $$\begin{aligned} \hat{\Psi }_1(k,\mu )&:= C_2 H(\xi , k(\xi ), \mu ) \partial _k^2 \delta ( k - k(\xi )) \nonumber \\&\quad + (C_1 H(\xi , k(\xi ), \mu ) + W_2(\mu )) \partial _k \delta (k - k(\xi )) \nonumber \\&\quad + W_1(\mu ) \delta (k - k(\xi )) \end{aligned}$$
    (3.25)

    with \(C_2\) and \(W_2\) solving

    $$\begin{aligned}&\int f_2(\mu ) H(\xi , k(\xi ), \mu ) \, {\mathrm {d}}\mu - 4 C_2 \int \Big (k(\xi ) - \int _0^\mu b(\xi , t) \, {\mathrm {d}}t\Big ) |H(\xi , k(\xi ), \mu )|^2 \, {\mathrm {d}}\mu = 0, \nonumber \\&\quad (\mathcal {O}(k(\xi )) - \lambda )W_2 = f_2 - 4 C_2 \Big (k(\xi )- \int _0^\mu b(\xi , t) \, {\mathrm {d}}t\Big ) H(\xi , k(\xi ), \mu ) \ \ \ \text {in }\mathbb {R}, \ \ \ W_2 \perp H(\xi , k(\xi ), \cdot ) \end{aligned}$$
    (3.26)

    and \(C_1\) such that

    $$\begin{aligned}&\int f_1(\mu ) H(\xi , k(\xi ), \mu ) \, {\mathrm {d}}\mu - 4 C_1 \int (k(\xi ) \nonumber \\&\quad - \int _0^\mu b(\xi , t) \, {\mathrm {d}}t) |H(\xi , k(\xi ), \mu )|^2\, {\mathrm {d}}\mu \nonumber \\&\quad - 2 \int (k(\xi )- \int _0^\mu b(\xi , t) \, {\mathrm {d}}t)\nonumber \\&\quad H(\xi , k(\xi ), \mu ) W_2(\mu ) \, {\mathrm {d}}\mu - 2 C_2= 0. \end{aligned}$$
    (3.27)

    We stress that the term \(\delta (k - k(\xi ))\) in (3.25) does not contain a term proportional to \(H_1(\xi , k(\xi ), \cdot )\) due to the choice of \(A_\varepsilon \) (see [9, (3.52)] and [9, Second identity in (3.64)]).
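    The appearance of \(\delta (k - k(\xi ))\) and its derivatives in (3.24)–(3.25) may be traced back to elementary distributional identities (we assume here the normalization \(\hat{u}(k) := \int _\mathbb {R}u(\theta ) e^{-ik\theta } \, {\mathrm {d}}\theta \)):

    $$\begin{aligned} \widehat{e^{i k(\xi ) \theta }} = 2\pi \, \delta (k - k(\xi )), \ \ \ \widehat{\theta \, e^{i k(\xi ) \theta }} = 2\pi i \, \partial _k \delta (k - k(\xi )), \ \ \ \widehat{\theta ^2 e^{i k(\xi ) \theta }} = - 2\pi \, \partial _k^2 \delta (k - k(\xi )). \end{aligned}$$

    In particular, the right-hand side of (3.21), which combines \(\Psi _0\) and \(\theta \Psi _0\), transforms into the \(\delta \) and \(\partial _k \delta \) terms of (3.24), while the (at most quadratic) growth of \(\Psi _1\) in \(\theta \) accounts for the term \(\partial _k^2 \delta \) in the ansatz (3.25).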

    Using the first identity in (3.26) and formula (A.22) in Lemma 3.5, we infer that

    $$\begin{aligned} C_2 = \frac{i}{2} k'(\xi ). \end{aligned}$$
    (3.28)

    Inserting this into the second equation in (3.26), and appealing to (A.26) of Lemma 3.5, we get that

    $$\begin{aligned} W_2(\mu ) = i \frac{d}{d\xi }H(\xi , k(\xi ), \mu ). \end{aligned}$$
    (3.29)

    Finally, the previous two formulas, together with (3.27) and Lemma 3.5, yield that \(C_1\) is as in (3.19). Inserting (3.28), (3.29) and (3.19) into (3.25), we conclude that

    $$\begin{aligned} \Psi _1(\xi , \theta , \mu )&= A(\xi ) \biggl ((i \theta C_1(\xi ) + i\theta ^2 \frac{k'(\xi )}{2}) H(\xi , k(\xi ), \mu ) \\&\quad + \left( \theta \frac{d}{d\xi } H(\xi , k(\xi ), \mu ) + W(\xi , \mu )\right) \biggr ) e^{i k(\xi ) \theta } \end{aligned}$$

    with \(W(\xi , \cdot ) \perp H(\xi , k(\xi ), \cdot )\), \(\Vert W(\xi , \cdot ) \Vert _{L^2} \lesssim 1\).

    As in [9, Proof of Proposition 3.3, Step 3] we now turn to the second-order approximation and consider the term \(\Psi _{\varepsilon ,2}:= \frac{\Psi _{\varepsilon ,1}- \Psi _{1}}{\varepsilon }\). As for \(\Psi _{\varepsilon ,1}\) above, also in this case the change in the magnetic field produces new terms on the right-hand side that have a higher growth in \(\theta \) with respect to the analogous problem in [9, (3.67) and display above]. This, in particular, implies a bound on \(\Psi _{\varepsilon ,2}\) analogous to the one in [9], now with a right-hand side of higher growth in \(\theta \).

    Spelling out the definition of \(\Psi _{\varepsilon ,2}\) and using the properties of the exponential, this bound yields, in turn, inequality (3.18).

  2.

    We now claim that the function \(F_\varepsilon (\cdot ) := e^{-\frac{i}{\varepsilon }\int _0^{\cdot } k(y) \, {\mathrm {d}}y} A_\varepsilon (\cdot )\) satisfies, for every \(\xi \in \mathbb {T}\) and \(|\omega | > 1\) such that \(\varepsilon |\omega | \ll 1\), the inequality

    $$\begin{aligned} | F_\varepsilon ( \xi + \varepsilon \omega ) - F_\varepsilon (\xi )( 1 + i\varepsilon \omega \, C_1(\xi ))| \lesssim \varepsilon + \varepsilon ^2 |\omega |^3. \end{aligned}$$
    (3.30)

    The term \(C_1\) is the same as in (3.18). We remark that, with this definition of \(F_\varepsilon \), inequality (3.18) may be rewritten as

    $$\begin{aligned}&\int _{|\theta - \omega | \le 1}\int (1 + |\mu |)^{-6} |\tilde{\Psi }_\varepsilon (\xi ; \theta , \mu ) - e^{\frac{i}{\varepsilon }\int _0^{\xi +\varepsilon \theta } k(y) \, {\mathrm {d}}y } F_\varepsilon (\xi )\nonumber \\&\quad (1 + i \varepsilon \theta C_1(\xi )) H(\xi + \varepsilon \theta , k(\xi + \varepsilon \theta ), \mu )|^2\nonumber \\&\quad \lesssim \varepsilon ^2|\omega |^3 + \varepsilon . \end{aligned}$$
    (3.31)

    The proof of (3.30) is very similar to the one for [9, (3.34), proof of Lemma 3.4]: We remark that the function

    $$\begin{aligned} f(\xi ; \varepsilon \omega ):= \int _{|\theta - \omega | < 1} \int \tilde{\Psi }_\varepsilon (\xi , \theta , \mu ) e^{- \frac{i}{\varepsilon }\int _\xi ^{\xi +\varepsilon \theta } k(y) \, {\mathrm {d}}y} H_1(\xi + \varepsilon \theta , k(\xi + \varepsilon \theta ), \mu ) \, {\mathrm {d}}\theta \, {\mathrm {d}}\mu \nonumber \\ \end{aligned}$$
    (3.32)

    satisfies

    $$\begin{aligned}&|f(\xi , \varepsilon \omega ) - A(\xi )( 1 + i C_1(\xi ) \varepsilon \omega )| \lesssim \varepsilon + \varepsilon ^2|\omega |^3, \ \ \ \ |f(\xi , 0) - A(\xi )| \lesssim \varepsilon \\&f(\xi , \varepsilon \omega ) = e^{-\frac{i}{\varepsilon }\int _\xi ^{\xi +\varepsilon \omega } k} f(\xi +\varepsilon \omega , 0). \end{aligned}$$

    The first inequality above follows directly from (3.18) and the properties of the eigenfunctions \(H_1(\xi , k(\xi ), \cdot )\); the second inequality is a consequence of the first with the choice \(\omega = 0\). The final identity follows from a simple change of variables in the definition of \(f(\cdot , \cdot )\). Wrapping together the previous relations yields (3.30).

  3.

    We now show that \(F_\varepsilon \rightarrow F\) uniformly on \(\mathbb {T}\) with

    $$\begin{aligned} F' = i C_1 F \ \ \ \ \text {in } \mathbb {T}, \ \ \ \ |F(0)| = 1. \end{aligned}$$

    This implies that every limit function F satisfies

    $$\begin{aligned} F = A_1 \frac{\partial _k \nu _1(\xi , k_1(\xi ))}{\partial _k\nu _1(0, k_1(0))} e^{i \int _0^\xi \frac{B_1(y, k_1(y))}{\partial _k\nu _1(y, k_1(y))} \, {\mathrm {d}}y}, \end{aligned}$$

    for some \(A_1 \in \mathbb {C}\) with \(|A_1| =1\). The previous ODE follows from Step 2 by an argument similar to the one in [9, Proof of Lemma 3.4] using the Arzelà–Ascoli theorem and [9, (3.33)]: The main difference, in this case, is that the increments cannot immediately be chosen to be macroscopic (i.e. \(\omega \sim \varepsilon ^{-1}\)). We first claim that \(C_1'\) is continuous on \(\mathbb {T}\) and hence that the function \(C_1: \mathbb {T}\rightarrow \mathbb {C}\) is Lipschitz. The continuity of \(C_1'\) is a simple consequence of the definition (3.19), the regularity of the functions \(\nu _1(\cdot , \cdot )\) and \(k_1\) (c.f. Lemma 3.1 and Lemma 3.6), and the assumption \(\lambda \in \Sigma \), which implies that \(\partial _k \nu _1(\xi , k(\xi )) > \epsilon \) for every \(\xi \in \mathbb {T}\) and for some \(\epsilon >0\).

    We now define \(\delta _\varepsilon := \varepsilon \omega \) and remark that, whenever \(\varepsilon \ll \delta _\varepsilon \ll \varepsilon ^{\frac{1}{2}}\), inequality (3.30) may be rewritten as

    $$\begin{aligned} | F_\varepsilon ( \xi + \delta _\varepsilon ) - F_\varepsilon (\xi )( 1 + i \delta _\varepsilon C_1(\xi ))| = o(\delta _\varepsilon ), \ \ \ \text {for every }\xi \in \mathbb {T}, \delta _\varepsilon \text { as above}. \end{aligned}$$

    Since \(C_1(\cdot )\) is Lipschitz, the inequality above and a telescopic sum imply that also

    $$\begin{aligned} | F_\varepsilon ( \xi + \delta ) - F_\varepsilon (\xi )( 1 + i \delta C_1(\xi ))| = o(\delta ) , \ \ \ \text {for every }\xi \in \mathbb {T}, \delta \sim 1. \end{aligned}$$

    From this, the argument follows as in [9, Proof of Lemma 3.4].
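    For completeness, we record the elementary computation behind the formula for F above (a sketch, assuming for simplicity that \(B_1(\cdot , k_1(\cdot ))\) is real-valued): the linear ODE \(F' = i C_1 F\) integrates explicitly to

    $$\begin{aligned} F(\xi ) = F(0)\, e^{i \int _0^\xi C_1(y) \, {\mathrm {d}}y}, \ \ \ \ |F(\xi )| = |F(0)|\, e^{- \int _0^\xi {\text {Im}}\, C_1(y) \, {\mathrm {d}}y}. \end{aligned}$$

    By (3.19), the real part of \(C_1\) is the ratio \(B_1/\partial _k \nu _1\), which produces the oscillating factor in the formula for F, while the imaginary part is a logarithmic derivative of \(\partial _k \nu _1(\cdot , k_1(\cdot ))\), whose integral exponentiates to the ratio of values of \(\partial _k \nu _1\) appearing as a prefactor.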

  4.

    Conclusion. From inequality (3.31) and an argument similar to the one used to pass from [9, (3.31)] to [9, (3.22)], we infer that for every \(\xi ^*\in \mathbb {T}\)

    $$\begin{aligned} \begin{aligned} \quad \int _{|\theta | \le 1}\int |\tilde{\Psi }_\varepsilon (\xi ^*; \theta , \mu ) - e^{\frac{i}{\varepsilon }\int _0^{\xi ^*+\varepsilon \theta } k(y) \, {\mathrm {d}}y } F_\varepsilon (\xi ^*) H(\xi ^* + \varepsilon \theta , k(\xi ^*+ \varepsilon \theta ), \mu )|^2 \lesssim \varepsilon . \end{aligned} \end{aligned}$$

    In this case, we get rid of the weight \((1 + |\mu |)^{-6}\) using the properties of the eigenfunctions \(H_n(\xi , k, \cdot )\) in Lemma 3.1. We now combine the previous inequality with Step 3 to infer that also

    $$\begin{aligned} \begin{aligned} \quad \int _{|\theta | \le 1}\int |\tilde{\Psi }_\varepsilon (\xi ^*; \theta , \mu ) - e^{\frac{i}{\varepsilon }\int _0^{\xi ^*+\varepsilon \theta } k(y) \, {\mathrm {d}}y } F_\varepsilon (\xi ^*) H(\xi ^* + \varepsilon \theta , k(\xi ^*+ \varepsilon \theta ), \mu )|^2 = o(1), \end{aligned} \end{aligned}$$

    where we abuse notation by writing \(\varepsilon \) instead of a sequence \(\{\varepsilon _j\}_{j\in \mathbb {N}}\), and where o(1) does not depend on the point \(\xi ^*\in \mathbb {T}\) but might depend on \(\{ \varepsilon _j \}_{j\in \mathbb {N}}\).

    Using the definition of \(\tilde{\Psi }_\varepsilon \) and of the limit function F of Step 3, we infer, after a change of coordinates, that

    $$\begin{aligned} \begin{aligned}&\int _{|\xi -\xi _*| \le \varepsilon }\int |m_\varepsilon ^{-1}\Psi _\varepsilon - A_1 e^{\frac{i}{\varepsilon }\int _0^{\xi } k(y) \, {\mathrm {d}}y + i \int _0^\xi \frac{B(y)}{\partial _k \nu (y, k(y))}\, {\mathrm {d}}y} \\&\quad \frac{\partial _k \nu (\xi _*, k(\xi _*))}{\partial _k\nu (0, k(0))} H(\xi , k(\xi ), \frac{s}{\varepsilon })|^2 \, {\mathrm {d}}\xi \, {\mathrm {d}}s = o(1) \end{aligned} \end{aligned}$$
    (3.33)

    for \(A_1 \in \mathbb {C}\) such that \(|A_1|=1\). To conclude the proof of Proposition 2.9, it thus remains to show that

    $$\begin{aligned} \varepsilon ^{-1} m_\varepsilon ^2 = \bigg (\int _\mathbb {T}\bigg |\frac{\partial _k \nu (\xi , k(\xi ))}{\partial _k\nu (0, k(0))}\bigg |^2 \, {\mathrm {d}}\xi \bigg )^{-1} + o(1). \end{aligned}$$
    (3.34)

    We show (3.34) as follows: From (3.33), for every \(\xi _* \in \mathbb {T}\), the triangle inequality, the fact that \(|A_1|=1\), and the normalization of the eigenfunctions \(H(\xi , k(\xi ), \cdot )\) yield that

    $$\begin{aligned} \int _{|\xi -\xi _*| \le \varepsilon }\int |\Psi _\varepsilon |^2 = m_\varepsilon ^2 \bigg (|\frac{\partial _k \nu (\xi _*, k(\xi _*))}{\partial _k\nu (0, k(0))}|^2 + o(1)\bigg ). \end{aligned}$$

    Since \(\Vert \Psi _\varepsilon \Vert _{L^2(\mathbb {R}^2)} =1\), we may now find \(n_\varepsilon \) intervals \(\{ I_{i,\varepsilon }\}_{i=1}^{n_\varepsilon }\) of size \(\varepsilon \) centred at points \(\{ \xi _i^\varepsilon \}_{i=1}^{n_\varepsilon }\) such that

    $$\begin{aligned} \varepsilon n_\varepsilon \rightarrow 1, \ \ \ \ \ S_\varepsilon :=\sum _{i=1}^{n_\varepsilon } \Vert \Psi _\varepsilon \Vert _{L^2( I_{\varepsilon ,i} \times \mathbb {R})}^2 \rightarrow 1. \end{aligned}$$

    Using this construction and the identity two displays above, we infer that

    $$\begin{aligned} \varepsilon ^{-1} m_\varepsilon ^2 S_\varepsilon ^{-1} = \bigg ( \varepsilon \sum _{i=1}^{n_\varepsilon }\bigg |\frac{\partial _k \nu (\xi _i, k(\xi _i))}{\partial _k\nu (0, k(0))}\bigg |^2 + o(1)\bigg )^{-1}. \end{aligned}$$

    Using that the functions \(\partial _k \nu (\cdot , k(\cdot ))\) are continuous and differentiable (c.f. Lemma 3.1 and Lemma 3.6), we know that

    $$\begin{aligned} \varepsilon \sum _{i=1}^{n_\varepsilon }\bigg |\frac{\partial _k \nu (\xi _i, k(\xi _i))}{\partial _k\nu (0, k(0))}\bigg |^2 \rightarrow \int _\mathbb {T}\bigg |\frac{\partial _k \nu (\xi , k(\xi ))}{\partial _k\nu (0, k(0))}\bigg |^2 \, {\mathrm {d}}\xi , \end{aligned}$$

    and hence also (3.34). This establishes Proposition 2.9 when \(N=1\).

We conclude by quickly remarking on the case \(N > 1\), namely when (2.7) admits more than one solution. This case may be treated as in [9, Theorem 2.5 and Proposition 4.1] by relying on the fact that, by Lemma 3.1, the curves \(k_1, \ldots , k_N\) are uniformly separated, i.e. \(|k_i(\xi ) - k_j(\xi )| > \delta \) for every \(\xi \in \mathbb {T}\) and every \(i \ne j\) (c.f. also Remark 2.4).

The proof of Step 1 may be adapted to this case as done in [9, Proof of Proposition 4.1]. In this case, for every \(\xi \in \mathbb {T}\) the blow-up limit is of the form \(\Psi _0(\xi ; \theta , \mu )= \sum _{j=1}^N A_j(\xi ) e^{i k_{j}(\xi ) \theta } H_j(\xi , k_j(\xi ), \mu )\) for \(A_1(\xi ), \ldots , A_N(\xi ) \in \mathbb {C}\). We stress that, in this case, in the analogue of (3.18) the right-hand side also contains a term of the form \(C\varepsilon \).

Step 2 may be argued similarly for each function of the form \(F_{j, C, \varepsilon }(\cdot ) := e^{-\frac{i}{\varepsilon }\int _0^{\cdot } k_{j,\varepsilon }(y) \, {\mathrm {d}}y} A_j(\cdot )\), \(j=1, \ldots , N\). The argument is an adaptation of the one above and of [9, Proof of Proposition 4.1, Step 2]. We stress that, in this instance, it is convenient to replace the function in (3.32) with

$$\begin{aligned} f_j(\xi ; \varepsilon \omega ):= \int \int \eta (\theta - \omega ) \tilde{\Psi }_\varepsilon (\xi , \theta , \mu ) e^{- \frac{i}{\varepsilon }\int _\xi ^{\xi +\varepsilon \theta } k_{j,\varepsilon }(y) \, {\mathrm {d}}y} H_j(\xi + \varepsilon \theta , k_{j,\varepsilon }(\xi + \varepsilon \theta ), \mu ) \, {\mathrm {d}}\theta \, {\mathrm {d}}\mu \end{aligned}$$
(3.35)

where \(\eta \) is any smooth cut-off function for \(\{|\theta | < 1 \}\) in \(\{ |\theta | < 2\}\) and the index \(j=1, \ldots ,N\). This choice, indeed, yields that the products of two different waves satisfy for every \(n\in \mathbb {N}\) and \(i, j =1, \ldots , N\), \(i\ne j\)

(3.36)

Steps 3 and 4 may be argued as in the case \(N=1\) if we choose an appropriate sequence \(C_\varepsilon \rightarrow +\infty \) such that \(\varepsilon C_\varepsilon \ll \varepsilon ^{\frac{1}{2}}\) and set \(r_\varepsilon := \varepsilon C_\varepsilon \). We stress that the choice \(r_\varepsilon \ll \varepsilon ^{\frac{1}{2}}\) is crucial for the argument of Step 3 to work also in this setting. \(\square \)