1 Introduction

We consider the 3-dimensional Euler equations for an ideal incompressible homogeneous fluid on a time interval (0, T), with a smooth initial condition \(u_0\), given by

$$\begin{aligned} \begin{aligned} \mathbf{u}_t + (\mathbf{u}\cdot {\nabla } )\mathbf{u}&= - {\nabla } p&{\quad \hbox {in } }\mathbb R^3 \times (0,T) , \\ \mathrm{div}\, \mathbf{u}&= 0&{\quad \hbox {in } }\mathbb R^3\times (0,T) , \\ \mathbf{u}(\cdot ,0)&= u_0&{\quad \hbox {in } }\mathbb R^3 . \end{aligned} \end{aligned}$$
(1.1)

For a solution \(\mathbf{u}\) of (1.1), its vorticity is defined as \( \vec \omega = \mathrm{curl}\, \mathbf{u} \). Then \(\vec \omega \) solves the Euler system (1.1) in vorticity form,

$$\begin{aligned} \begin{aligned} \vec \omega _t + (\mathbf{u}\cdot {\nabla } ){\vec \omega }&=( \vec \omega \cdot {\nabla } ) \mathbf{u}&{\quad \hbox {in } }\mathbb R^3\times (0,T), \\ \quad \mathbf{u} = \mathrm{curl}\vec \psi ,\&-\Delta \vec \psi = \vec \omega&{\quad \hbox {in } }\mathbb R^3\times (0,T),\\ {\vec \omega }(\cdot ,0)&= \mathrm{curl}u_0&{\quad \hbox {in } }\mathbb R^3 . \end{aligned} \end{aligned}$$
(1.2)

We are interested in solutions of the Euler equations whose vorticities are large and uniformly concentrated near an evolving smooth curve \(\Gamma (t)\) embedded in entire \(\mathbb R^3\), and whose associated velocity field vanishes as the distance to the curve goes to infinity. Such solutions, known as vortex filaments, are classical objects of study in fluid dynamics, whose analysis traces back to Helmholtz and Kelvin. In 1867 Helmholtz considered with great attention the situation where the vorticity is concentrated in a circular vortex filament with small section. He observed that these vortex rings have an approximately steady form and travel with a large constant velocity along the axis of the ring. In 1894, Hill found an explicit axially symmetric solution of (1.2) supported in a sphere (Hill’s vortex ring).

In 1970, Fraenkel [18] provided the first construction of a vortex ring concentrated around a torus with fixed radius and a small, nearly singular section \(\varepsilon >0\), travelling with constant speed \(\sim |\log \varepsilon |\), rigorously establishing the behaviour predicted by Helmholtz. Vortex rings have been analyzed in larger generality in [1, 13, 17, 30].

Da Rios [9] in 1906, and Levi-Civita [26] in 1908, formally found the general law of motion of a vortex filament with a thin section of radius \(\varepsilon >0\), uniformly distributed around an evolving curve \(\Gamma (t)\). Roughly speaking, Da Rios demonstrated that under suitable assumptions on the solution, the curve evolves by its binormal flow, with a large velocity of order \(|\log \varepsilon |\). More precisely, if \(\Gamma (t)\) is parametrized as \(x= \gamma (s,t)\) where s designates its arclength parameter, then \(\gamma (s,t)\) asymptotically obeys a law of the form

$$\begin{aligned} \gamma _t = 2c|\log \varepsilon | (\gamma _s\times \gamma _{ss}) \end{aligned}$$
(1.3)

as \(\varepsilon \rightarrow 0\), or scaling \(t= |\log \varepsilon |^{-1}\tau \),

$$\begin{aligned} \gamma _\tau = 2c (\gamma _s\times \gamma _{ss})= 2c\kappa \mathbf{b}_{\Gamma (\tau )}. \end{aligned}$$
(1.4)

Here c corresponds to the circulation of the velocity field on the boundary of cross-sections of the filament, which is assumed to be a constant independent of \(\varepsilon \). For the curve \(\Gamma (\tau )\) parametrized as \(x=\gamma (s,\tau )\), we designate by \(\mathbf{t}_{\Gamma (\tau )},\, \mathbf{n}_{\Gamma (\tau )},\, \mathbf{b}_{\Gamma (\tau )}\) the usual tangent, normal and binormal unit vectors, and by \(\kappa \) its curvature.

The law (1.3) was formally rediscovered several times during the 20th century, and it is by now classical. See for instance the book by Majda and Bertozzi [29] and the survey paper by Ricca [31].

In [20], Jerrard and Seis gave a precise form to Da Rios’ computation under the weakest known conditions on a solution to (1.2) which remains suitably concentrated around an evolving curve. Their result considers a solution \(\vec \omega _\varepsilon (x,t)\) under a set of conditions that imply that, as \(\varepsilon \rightarrow 0\),

$$\begin{aligned} \vec \omega _\varepsilon (\cdot ,|\log \varepsilon |^{-1}\tau )\rightharpoonup \delta _{\Gamma (\tau )} \mathbf{t}_{\Gamma (\tau )}, \quad 0\le \tau \le T , \end{aligned}$$
(1.5)

where \(\Gamma (\tau )\) is a sufficiently regular curve and \(\delta _{\Gamma (\tau )}\) denotes a uniform Dirac measure on the curve. They prove that \(\Gamma (\tau )\) does indeed evolve by the law (1.4). See [22] and its references for results on the flow (1.4).

The existence of true solutions of (1.2) that satisfy (1.5) near a given curve \(\Gamma (\tau )\) that evolves by the binormal flow (1.4) is an outstanding open question sometimes called the vortex filament conjecture. See [5] and [20]. This statement is unknown except for very special cases. A basic example is given by a circle \(\Gamma (\tau )\) with radius R translating with constant speed along its axis parametrized as

$$\begin{aligned} \gamma (s,\tau ) = \left( \begin{matrix} R \cos \big (\frac{s}{R}\big ) \\ R \sin \big (\frac{ s}{R} \big ) \\ \frac{2c}{R} \tau \end{matrix} \right) , \quad \ 0<s \le 2\pi R. \end{aligned}$$
(1.6)

We check that

$$\begin{aligned} \gamma _s \times \gamma _{ss} = R^{-1} \left( \begin{matrix} 0 \\ 0 \\ 1 \end{matrix} \right) , \end{aligned}$$

and hence \(\gamma \) satisfies equation (1.4). Fraenkel’s result [18] on thin vortex rings yields the existence of a solution \(\vec \omega _\varepsilon (x,t)\) without change of form, such that the asymptotic behavior (1.5) holds for this curve. In other words, the statement of the vortex filament conjecture holds true for the case of the traveling circle. It is also known for more general, suitably prepared axially symmetric initial data, see [5]. The vortex filament conjecture is also true in the case of a straight line: it suffices to consider a concentrated steady state constrained to a plane normal to the line and then to extend it trivially in the remaining variable.

Another known solution of the binormal flow (1.4) that does not change its form in time is the rotating-translating helix, the curve \(\Gamma (\tau )\) parametrized as

$$\begin{aligned} \gamma (s,\tau ) = \left( \begin{matrix} R \cos \big ( \frac{s-a_1 \tau }{\sqrt{h^2 + R^2}} \big ) \\ R \sin \big ( \frac{s-a_1 \tau }{\sqrt{h^2 + R^2}} \big ) \\ \frac{h s + b_1 \tau }{\sqrt{h^2+ R^2}} \end{matrix} \right) , \quad a_1 = \frac{2 c h}{ h^2 + R^2 },\quad b_1 = \frac{2 c R^2}{ h^2 + R^2 }. \end{aligned}$$
(1.7)

Here \(c>0\) and \(R>0\), \(h \ne 0\) are constants that are in correspondence with the curvature and torsion of the helix, respectively given by \( \frac{R}{h^2+R^2}\) and \( \frac{h}{h^2+R^2} \). We readily check that the parametrization (1.7) satisfies equation (1.4). This helix degenerates into the traveling circle (1.6) when \(h\rightarrow 0\) and into a straight line when \(|h|\rightarrow +\infty \).
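The claim that (1.7) solves (1.4) can be confirmed symbolically. The following is a minimal sympy sketch (an illustration only, not part of the argument), using the same symbols as above.

```python
# Symbolic check (sketch) that the helix (1.7) satisfies the binormal law (1.4):
#   gamma_tau = 2c (gamma_s x gamma_ss).
import sympy as sp

s, tau, c, R, h = sp.symbols('s tau c R h', positive=True)
sigma = sp.sqrt(h**2 + R**2)
a1 = 2*c*h/(h**2 + R**2)
b1 = 2*c*R**2/(h**2 + R**2)

gamma = sp.Matrix([R*sp.cos((s - a1*tau)/sigma),
                   R*sp.sin((s - a1*tau)/sigma),
                   (h*s + b1*tau)/sigma])

# gamma.diff(s) is the unit tangent (s is arclength), gamma.diff(s, 2) the curvature vector
residual = gamma.diff(tau) - 2*c*gamma.diff(s).cross(gamma.diff(s, 2))
print(residual.applyfunc(sp.simplify))   # -> Matrix([[0], [0], [0]])
```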

The helices (1.7) were observed to possibly describe vortex filaments in the sense (1.5) more than 100 years ago by Joukowsky [24], Da Rios [10] and Levi-Civita [27]. The fact that they do not change form can be described using the rotation matrices

$$\begin{aligned} P_\theta = \left[ \begin{matrix} \cos \theta &{} -\sin \theta \\ \sin \theta &{} \cos \theta \end{matrix} \right] , \quad Q_\theta = \left[ \begin{matrix} P_\theta &{} 0 \\ 0 &{} 1 \end{matrix} \right] . \end{aligned}$$
(1.8)

Then \(\gamma (s,\tau )\) can be recovered from \(\gamma (s,0)\) by a rotation and a vertical translation, in the sense that

$$\begin{aligned} \gamma (s,\tau )= Q_{- a_2 \tau } \gamma (s,0) + \left[ \begin{matrix} 0 \\ b_2 \tau \end{matrix} \right] , \end{aligned}$$
$$\begin{aligned} a_2 = \frac{2 c h}{ (h^2 + R^2)^{\frac{3}{2}}},\quad b_2 = \frac{2 c R^2}{ (h^2 + R^2)^{\frac{3}{2}} }. \end{aligned}$$
(1.9)
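As a further illustration (again not part of the argument), the same kind of symbolic computation confirms the rigid-motion representation (1.9).

```python
# Sketch: gamma(s,tau) from (1.7) equals Q_{-a2 tau} gamma(s,0) + (0,0,b2 tau),
# with a2, b2 as in (1.9).
import sympy as sp

s, tau, c, R, h = sp.symbols('s tau c R h', positive=True)
sigma = sp.sqrt(h**2 + R**2)
a1, b1 = 2*c*h/sigma**2, 2*c*R**2/sigma**2      # constants in (1.7)
a2, b2 = 2*c*h/sigma**3, 2*c*R**2/sigma**3      # constants in (1.9)

gamma = lambda s, tau: sp.Matrix([R*sp.cos((s - a1*tau)/sigma),
                                  R*sp.sin((s - a1*tau)/sigma),
                                  (h*s + b1*tau)/sigma])

Q = lambda th: sp.Matrix([[sp.cos(th), -sp.sin(th), 0],   # Q_theta of (1.8)
                          [sp.sin(th),  sp.cos(th), 0],
                          [0,           0,          1]])

residual = gamma(s, tau) - (Q(-a2*tau)*gamma(s, 0) + sp.Matrix([0, 0, b2*tau]))
residual = residual.applyfunc(lambda e: sp.simplify(sp.expand_trig(sp.expand(e))))
print(residual)   # -> Matrix([[0], [0], [0]])
```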

Renewed interest in helical filaments has arisen in the last two decades, see [35] for a recent survey. However, there has been no proof of their existence.

The purpose of this paper is to construct a true helical filament \(\vec \omega _\varepsilon (x,t)\) satisfying (1.5). This solution does not change form and travels along the helix in the sense that the vector field

$$\begin{aligned} F(x,\tau ) = \vec \omega _\varepsilon (x, |\log \varepsilon |^{-1}\tau ), \quad \end{aligned}$$
(1.10)

satisfies

$$\begin{aligned} \begin{aligned} F(x,\tau )\ =&\ Q_{a_2 \tau } F ( P_{- a_2 \tau }x', x_3 + b_2 \tau , 0), \\ F(x, \tau )\ =&\ F(x', x_3 + 2\pi h , \tau ) \end{aligned} \end{aligned}$$
(1.11)

with \(a_2,b_2\) given by (1.9), \(x=(x',x_3),\ x'=(x_1,x_2)\).

Theorem 1

Let \(c>0\), \(R>0\), \(h\not =0\) and let \(\Gamma (\tau )\) be the helix parametrized by equation (1.7). Then there exists a smooth solution \(\vec \omega _\varepsilon (x,t)\) to (1.2), defined for \(t\in (-\infty , \infty )\), that does not change form and follows the helix in the sense (1.10)–(1.11), and such that for all \(\tau \),

$$\begin{aligned} \vec \omega _\varepsilon (x, \tau |\log \varepsilon |^{-1}) \rightharpoonup c\delta _{\Gamma (\tau )}{} \mathbf{t}_{\Gamma (\tau )} \quad \text{ as }\quad \varepsilon \rightarrow 0. \end{aligned}$$

This result extends to the situation of several helices symmetrically arranged. Let us consider the curve \(\Gamma (\tau )\) parametrized by \(\gamma (s,\tau )\) in (1.7). Let us define for \(j=1,\ldots , k \) the curves \(\Gamma _j(\tau )\) parametrized by

$$\begin{aligned} \gamma _j(s,\tau ) = Q_{ 2\pi \frac{j-1}{k} } \gamma (s,\tau ) . \end{aligned}$$
(1.12)

The following result extends that of Theorem 1.

Theorem 2

Let \(c>0\), \(R>0\) and \(h\not =0\). Let \(\Gamma _j(\tau )\) be the helices parametrized by equation (1.12), for \(j=1,\ldots ,k\). Then there exists a smooth solution \(\vec \omega _\varepsilon (x,t)\) to (1.2), defined for \(t\in (-\infty , \infty )\), that does not change form and follows the helices in the sense (1.10)–(1.11), and such that for all \(\tau \),

$$\begin{aligned} \vec \omega _\varepsilon (x, \tau |\log \varepsilon |^{-1}) \rightharpoonup c \sum _{j=1}^k \delta _{\Gamma _j(\tau )}{} \mathbf{t}_{\Gamma _j(\tau )} \quad \text{ as }\quad \varepsilon \rightarrow 0. \end{aligned}$$

Our construction takes advantage of a screw-driving symmetry invariance enjoyed by the Euler equations, observed by Dutrifoy [14] and Ettinger-Titi [16]; see also the more recent results in [6, 23]. From the analysis in the latter reference, it follows that a function of the form

$$\begin{aligned} \vec \omega (x,t) = \frac{1}{h} w( P_{-\frac{x_3}{h}} x', t) \left( \begin{matrix} P_{\frac{\pi }{2}} x' \\ h \end{matrix} \right) , \qquad x'= (x_1,x_2) , \end{aligned}$$

solves (1.2) if the scalar function \(w(x',t)\) satisfies the transport equation

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{l} w_t + \nabla ^\perp \psi \cdot \nabla w =0 \quad {\text{ in }} \quad \mathbb R^2 \times (0, T) \\ - {\text{ div }} (K \nabla \psi ) = w \quad {\text{ in }} \quad \mathbb R^2 \times (0, T), \end{array} \right. \end{aligned} \end{aligned}$$
(1.13)

where \((a,b)^\perp = (b,-a)\) and \(K (x_1,x_2) \) is the matrix

$$\begin{aligned} K(x_1,x_2) = \frac{1}{h^2+x_1^2+x_2^2} \left( \begin{matrix} h^2+x_2^2 &{} -x_1 x_2\\ -x_1 x_2 &{} h^2+x_1^2 \end{matrix} \right) . \end{aligned}$$
(1.14)

which we can also write

$$\begin{aligned} K(x') = \frac{1}{h^2+ |x'|^2} \left( h^2 I + (x')^\perp \otimes (x')^\perp \right) . \end{aligned}$$

For completeness we prove this fact in Sect. 2.
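As a quick illustration (not part of the proof), one can check symbolically that, with the convention \((a,b)^\perp = (b,-a)\), the compact expression above coincides with the matrix (1.14).

```python
# Sketch: equivalence of the two expressions for K.
import sympy as sp

x1, x2, h = sp.symbols('x1 x2 h', real=True)
xperp = sp.Matrix([x2, -x1])                                  # (x')^perp = (x_2, -x_1)
K_compact  = (h**2*sp.eye(2) + xperp*xperp.T) / (h**2 + x1**2 + x2**2)
K_explicit = sp.Matrix([[h**2 + x2**2, -x1*x2],
                        [-x1*x2,       h**2 + x1**2]]) / (h**2 + x1**2 + x2**2)
print((K_compact - K_explicit).applyfunc(sp.simplify))        # -> Matrix([[0, 0], [0, 0]])
```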

The proof of Theorems 1 and 2 is reduced to finding solutions of (1.13) concentrated at the vertices of a rotating k-regular polygon, which do not change form. We devote the rest of the paper to building such a solution by means of elliptic singular perturbation techniques.

Solutions concentrated near helices in other PDE settings have been built in [7, 8, 11, 21, 37]. The construction of stationary solutions such that the flow contains certain vortex tubes that are arbitrarily close to a given collection of (generally knotted and linked) closed curves has been achieved in [15]. Connected with Theorem 2, the formal law for the dynamics of nearly parallel interacting filaments in the Euler equations has been found in [25], which is the same law governing almost parallel vortex filaments for the Gross-Pitaevskii equation, see [21]. The filaments in Theorem 2 are not nearly parallel. The motion of vortex filaments is the natural generalization of the motion of point vortices for the 2D incompressible Euler equations. Their desingularization has been rigorously analyzed in [5, 28, 32,33,34, 36] and [12]. Nonlinear stability of point vortices has been recently established in [19]. Refined linear analysis around a smooth radial vortex has been carried out in [4]. The associated Hamiltonian dynamics has also been the subject of numerous studies, see [2, 3] and references therein.

2 Solutions with helical symmetry

In this section we recall how to find solutions of the incompressible Euler equations in 3d with helical symmetry, following Dutrifoy [14] and Ettinger-Titi [16].

Let \(h>0\). We say that a scalar function \(f:\mathbb R^3\rightarrow \mathbb R\) has helical symmetry if

$$\begin{aligned} f( P_\theta x' , x_3+ h\theta ) = f(x',x_3) \quad \text{ for } \text{ all }\quad \theta \in \mathbb R, \ \ x = (x',x_3) \in \mathbb R^3 , \end{aligned}$$
(2.1)

where \(P_\theta \) is defined in (1.8). For a scalar function satisfying (2.1) we have \(f(x) = f( P_{- \frac{x_3}{h}}x',0)\), and so f is determined by its values on the horizontal plane

$$\begin{aligned} \{ \, x = (x',x_3) \ | \ x_3 = 0 \, \}. \end{aligned}$$

Also, if f is \(C^1\), then f satisfies (2.1) if and only if \(\nabla f \cdot \vec \xi = 0\) where \(\vec \xi \) is the vector field

$$\begin{aligned} \vec \xi (x)= (- x_2,x_1,h) , \qquad x =(x_1,x_2,x_3). \end{aligned}$$

A vector field \(F:\mathbb R^3\rightarrow \mathbb R^3\) is said to have helical symmetry if

$$\begin{aligned} F( P_{\theta } x' , x_3+ h\theta ) = Q_\theta F(x',x_3), \quad \forall \theta \in \mathbb R, \quad \forall x = (x',x_3) \in \mathbb R^3 , \end{aligned}$$
(2.2)

where \(Q_\theta \) is the matrix defined in (1.8). If F satisfies (2.2) then

$$\begin{aligned} F ( x) = Q_{\frac{x_3}{h}} F( P_{- \frac{x_3}{h}}x',0) , \end{aligned}$$

so that F is determined by its values on the plane \(\{ \, x = (x',x_3) \ | \ x_3 = 0 \, \}\). Again, if \(F = (F_1,F_2,F_3)\) is \(C^1\), then it satisfies (2.2) if and only if

$$\begin{aligned} \nabla F_1 \cdot \vec \xi = - F_2, \quad \nabla F_2 \cdot \vec \xi = F_1, \quad \nabla F_3 \cdot \vec \xi = 0 . \end{aligned}$$
(2.3)

The following result is a consequence of the analysis in [16].

Lemma 2.1

Suppose that \(w(x',t)\) satisfies the transport equation (1.13). Then there exists a vector field \(\mathbf{u}\) with helical symmetry (2.2) such that

$$\begin{aligned} \vec \omega (x,t) = \frac{1}{h} w( P_{-\frac{x_3}{h}} x',t) \vec \xi (x) , \quad x = (x',x_3) \end{aligned}$$
(2.4)

satisfies the Euler equation (1.2), which in terms of the scalar function \(w(x',t)\) becomes

$$\begin{aligned} \left\{ \begin{array}{l} w_t + \nabla ^\perp \psi \cdot \nabla w =0 \quad {\text{ in } } \,\,\,\mathbb R^2 \times (-\infty , \infty ) \\ - \mathrm{div}( K \nabla \psi ) = w \quad {\text{ in } } \,\,\,\mathbb R^2 \times (-\infty , \infty ), \end{array}\right. \end{aligned}$$
(2.5)

where \(K(x') \) is given by (1.14).

Proof

We note that the vorticity \(\vec \omega \) defined by (2.4) has the helical symmetry (2.2). Moreover the scalar function w satisfies (2.1) and hence \(\nabla w \cdot \vec \xi =0\).

The velocity vector field \(\mathbf{u}\) associated to \(\omega \) will be constructed such that it has the helical symmetry (2.2) and satisfies in addition

$$\begin{aligned} \mathbf{u}\cdot \vec \xi = 0 . \end{aligned}$$
(2.6)

This condition and (2.4) say that \(\vec \omega \) and \(\mathbf{u}\) are always orthogonal.

To construct \(\mathbf{u} = ( u_1,u_2,u_3)\), we first define \(u_1(x',0)\), and \(u_2(x',0)\) by the following relation

$$\begin{aligned} \left( \begin{matrix} u_1\\ u_2 \end{matrix} \right) = \frac{1}{h^2+x_1^2+x_2^2} \left( \begin{matrix} - x_1 x_2 &{} h^2 + x_1^2 \\ - h^2 - x_2^2 &{} x_1 x_2 \end{matrix} \right) \left( \begin{matrix} \partial _{x_1} \psi \\ \partial _{x_2} \psi \end{matrix} \right) . \end{aligned}$$
(2.7)

Next we define \(u_3(x',0)\) using (2.6):

$$\begin{aligned} u_3 = \frac{1}{h}( x_2 u_1 - x_1 u_2 ) . \end{aligned}$$

In this way \(\mathbf{u}(x',0)\) is defined for all \(x' \in \mathbb R^2\) and then we extend \(\mathbf{u}\) to \(\mathbb R^3\) imposing that (2.2) is satisfied.

Let us explain formula (2.7). The incompressibility condition \(\nabla \cdot \mathbf{u} = 0\) is rewritten as

$$\begin{aligned} 0&= \partial _{x_1} u_1 + \partial _{x_2} u_2 + \partial _{x_3} u_3 = \partial _{x_1} u_1 + \partial _{x_2} u_2 + \frac{x_2}{h} \partial _{x_1} u_3- \frac{x_1}{h} \partial _{x_2} u_3 , \end{aligned}$$

where we have used that \(- x_2 \partial _{x_1} u_3 + x_1 \partial _{x_2} u_3 + h \partial _{x_3} u_3 = 0\) (third formula in (2.3)). Then from (2.6) we get

$$\begin{aligned} 0&= \partial _{x_1} u_1 + \partial _{x_2} u_2 + \frac{x_2}{h^2} \partial _{x_1} ( x_2 u_1 - x_1 u_2) - \frac{x_1}{h^2 } \partial _{x_2} ( x_2 u_1 - x_1 u_2) \\&= \frac{1}{h^2} \partial _{x_1}[ ( h^2 + x_2^2) u_1 - x_1 x_2 u_2 ] + \frac{1}{h^2} \partial _{x_2} [ -x_1 x_2 u_1 + (h^2 + x_1^2) u_2 ] . \end{aligned}$$

This motivates taking \(\psi :\mathbb R^2 \rightarrow \mathbb R\) such that

$$\begin{aligned} \begin{aligned} \psi _{x_1}&= \frac{1}{h^2} [ x_1 x_2 u_1 - (h^2 + x_1^2) u_2 ] \\ \psi _{x_2}&= \frac{1}{h^2} [ (h^2 + x_2^2) u_1 - x_1 x_2 u_2 ] . \end{aligned} \end{aligned}$$
(2.8)

This is equivalent to (2.7). Later on we verify that \(\psi \) satisfies \(-\mathrm{div}( K \nabla \psi ) = w\).

We claim that \(\omega \), \(\mathbf{u}\) satisfy the Euler equations (1.2). We check the first equation component by component and on the plane \(x_3=0\). The equality for all \(x_3\) follows from the helical symmetry. First we note that

$$\begin{aligned} ( \vec \omega \cdot \nabla ) \mathbf{u} = \frac{1}{h} w \left( \begin{matrix} -u_2 \\ u_1 \\ 0 \end{matrix}\right) \end{aligned}$$
(2.9)

by the form of \(\omega \) in (2.4) and the equations (2.3) satisfied by \(\mathbf{u}\). Using (2.8) and (1.13) we get

$$\begin{aligned} w_t - \frac{1}{h^2}[ - (h^2+x_2^2) u_1 + x_1 x_2 u_2] w_{x_1} + \frac{1}{h^2}[ -x_1x_2u_1 + (h^2+x_1^2) u_2 ] w_{x_2} = 0 , \end{aligned}$$

which gives, using that \(\nabla w \cdot \vec \xi = 0\) and \(\mathbf{u} \cdot \xi = 0\), that

$$\begin{aligned} w_t + u_1 w_{x_1} + u_2 w_{x_2} + u_3 w_{x_3} = 0. \end{aligned}$$

By (2.9) this is the third component in the first equation in (1.2). The first and second components are handled similarly.

Next we verify that \(\vec \omega = \mathrm{curl}\mathbf{u}\). Indeed, we look first at the third component of \(\mathrm{curl}\mathbf{u}\), which is

$$\begin{aligned} \partial _{x_1} u_2 - \partial _{x_2} u_1&= \partial _{x_1} \Big [ \frac{1}{h^2+x_1^2+x_2^2} ( - ( h^2 + x_2^2) \psi _{x_1} + x_1 x_2 \psi _{x_2}) \Big ] \\&\quad - \partial _{x_2} \Big [ \frac{1}{h^2+x_1^2+x_2^2} ( - x_1 x_2 \psi _{x_1} + ( h^2 + x_1^2) \psi _{x_2}) \Big ] \\&= - \mathrm{div}( K \nabla \psi ) \\&= w. \end{aligned}$$

The first and second component of \(\mathrm{curl}\mathbf{u}\) are computed similarly. The validity of equation (2.5) is directly checked. \(\square \)
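For completeness, the identity used in the last computation can also be verified symbolically. The following sympy sketch (an illustration only; \(\psi \) is a generic smooth function) checks that, with \(u_1,u_2\) given by (2.7), one has \(\partial _{x_1} u_2 - \partial _{x_2} u_1 = - \mathrm{div}( K \nabla \psi )\) with K as in (1.14).

```python
# Sketch: with (u1, u2) from (2.7), the planar curl equals -div(K grad psi).
import sympy as sp

x1, x2, h = sp.symbols('x1 x2 h', real=True)
psi = sp.Function('psi')(x1, x2)
den = h**2 + x1**2 + x2**2

u1 = (-x1*x2*psi.diff(x1) + (h**2 + x1**2)*psi.diff(x2)) / den     # (2.7)
u2 = (-(h**2 + x2**2)*psi.diff(x1) + x1*x2*psi.diff(x2)) / den

K = sp.Matrix([[h**2 + x2**2, -x1*x2],
               [-x1*x2,       h**2 + x1**2]]) / den                # (1.14)
flux = K*sp.Matrix([psi.diff(x1), psi.diff(x2)])
div_K_grad_psi = flux[0].diff(x1) + flux[1].diff(x2)

print(sp.simplify(u2.diff(x1) - u1.diff(x2) + div_K_grad_psi))     # -> 0
```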

3 Equation (1.13) in a rotational frame

To find a solution \(\vec \omega _\varepsilon (x,t)\) as described in Theorem 1, of the form

$$\begin{aligned} \vec \omega _\varepsilon (x,t) = \frac{1}{h} w( P_{-\frac{x_3}{h}} x', t) \left( \begin{matrix} P_{\frac{\pi }{2}} x' \\ h \end{matrix} \right) , \qquad x'= (x_1,x_2) , \end{aligned}$$

we need to find a solution \(w(x',t) \) of (1.13) such that, after the change of variable \(t = |\log \varepsilon |^{-1}\tau \), it is concentrated near a single point in the plane which rotates with constant angular speed around the origin. Here \(P_\theta \) is the matrix defined in (1.8). In terms of \(\tau \), equation (1.13), or (2.5), becomes

$$\begin{aligned} \left\{ \begin{array}{l} |\log \varepsilon | w_\tau + \nabla ^\perp \psi \cdot \nabla w =0 \quad {\text{ in } }\,\,\, \mathbb R^2 \times (-\infty , \infty ) \\ - \mathrm{div}( K \nabla \psi ) = w\quad {\text{ in } } \,\,\, \mathbb R^2 \times (-\infty , \infty ), \end{array}\right. \end{aligned}$$
(3.1)

where K was defined in (1.14).

For notational simplicity, in what follows we will restrict ourselves to the case

$$\begin{aligned} h^2 =1. \end{aligned}$$

Let \(\alpha \) be a fixed constant. We look for rotating solutions to problem (3.1) of the form

$$\begin{aligned} w (x', \tau ) = W \left( P_{ \alpha \tau } x' \right) , \quad \psi (x',\tau ) = \Psi \left( P_{ \alpha \tau } x' \right) , \end{aligned}$$
(3.2)

where \(P_\theta \) is defined in (1.8).

Let \(\tilde{x} = P_{\alpha \tau } x'\). For the rest of this computation we will denote differential operators with respect to \(\tilde{x}\) with a subscript \(\tilde{x}\), and differential operators with respect to \(x'\) without any subscript. In terms of \((W, \Psi )\), the second equation in (3.1) becomes

$$\begin{aligned} -\mathrm{div}_{\tilde{x}} ( K(\tilde{x}) \nabla _{\tilde{x}} \Psi ) = W. \end{aligned}$$

Let us see how the first equation gets transformed. Observe that

$$\begin{aligned} w_\tau = \nabla _{\tilde{x}} W \cdot (\alpha P_{\frac{\pi }{2}+\alpha \tau } x' ) = - \alpha \nabla _{\tilde{x}} W \cdot \tilde{x}^\perp , \end{aligned}$$

and

$$\begin{aligned} (\nabla w)^T = (\nabla _{\tilde{x}} W)^T P_{\alpha \tau }, \quad (\nabla \psi )^T = (\nabla _{\tilde{x}} \Psi )^T P_{\alpha \tau }. \end{aligned}$$

Let \(J = P_{-\frac{\pi }{2}}\). Since \(\nabla \psi ^\perp \cdot \nabla w= (\nabla w )^T J \nabla \psi \), and \( P_{\alpha \tau } J P_{\alpha \tau }^T = J \), we have

$$\begin{aligned} \nabla \psi ^\perp \cdot \nabla w&= (\nabla w )^T J \nabla \psi = (\nabla _{\tilde{x}} W)^T P_{\alpha \tau } J P_{\alpha \tau }^T \nabla _{\tilde{x}} \Psi \\&= (\nabla _{\tilde{x}} W )^T J \nabla _{\tilde{x}} \Psi = (\nabla _{\tilde{x}} \Psi )^\perp \cdot (\nabla _{\tilde{x}} W ) . \end{aligned}$$

We conclude that equation \(|\log \varepsilon |w_\tau + \nabla ^\perp \psi \cdot \nabla w =0\) becomes

$$\begin{aligned} -\alpha |\log \varepsilon | \nabla _{\tilde{x}} W(\tilde{x}) \cdot \tilde{x}^\perp + \nabla _{\tilde{x}}^\perp \Psi \cdot \nabla _{\tilde{x}} W= 0, \end{aligned}$$

or equivalently

$$\begin{aligned} \nabla _{\tilde{x}} W \cdot \nabla _{\tilde{x}}^\perp \left( \Psi - \alpha |\log \varepsilon | \frac{|\tilde{x}|^2 }{ 2} \right) = 0 . \end{aligned}$$
(3.3)

Now, if \(W(\tilde{x}) = F(\Psi (\tilde{x}) - \alpha |\log \varepsilon | \frac{ |\tilde{x}|^2}{2} )\), for some function F, then automatically (3.3) holds. We conclude that if \(\Psi \) is a solution to

$$\begin{aligned} - \mathrm{div}_{\tilde{x}} (K(\tilde{x}) \nabla _{\tilde{x}} \Psi ) = F (\Psi - \frac{\alpha }{2} |\log \varepsilon | |\tilde{x}|^2 ) \quad {\text{ in }} \quad \mathbb R^2, \end{aligned}$$
(3.4)

for some function F, and W is given by

$$\begin{aligned} W(\tilde{x})= F\left( \Psi (\tilde{x}) - \alpha |\log \varepsilon | \frac{|\tilde{x}|^2}{2} \right) \end{aligned}$$

then \( (w, \psi )\) defined by (3.2) is a solution for (3.1).
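The reduction rests on the elementary chain-rule fact that \(\nabla _{\tilde{x}} W\) is parallel to \(\nabla _{\tilde{x}} \big ( \Psi - \frac{\alpha }{2} |\log \varepsilon | |\tilde{x}|^2 \big )\) whenever W is a function of this quantity, so that (3.3) holds automatically. A small sympy sketch (illustration only; F and \(\Psi \) are generic placeholders and \(\mu \) stands for \(\frac{\alpha }{2} |\log \varepsilon |\)):

```python
# Sketch of the identity behind (3.3): if W = F(Psi - mu*|x|^2/2), then
# grad W . (grad(Psi - mu*|x|^2/2))^perp = 0 identically, with (a,b)^perp = (b,-a).
import sympy as sp

x1, x2, mu = sp.symbols('x1 x2 mu', real=True)
F = sp.Function('F')
Psi = sp.Function('Psi')(x1, x2)

G = Psi - mu*(x1**2 + x2**2)/2
W = F(G)

identity = W.diff(x1)*G.diff(x2) - W.diff(x2)*G.diff(x1)
print(sp.simplify(identity))   # -> 0
```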

In (3.4) we take

$$\begin{aligned} F (s) = \varepsilon ^2 f(s) , \quad f(s) = e^s, \end{aligned}$$

where \(\varepsilon >0\). In the sequel we write \(\tilde{x} \) as \(x = (x_1,x_2) \in \mathbb R^2\) and consider the equation

$$\begin{aligned} -\mathrm{div}(K \nabla \Psi ) = \varepsilon ^2 e^{(\Psi -{\alpha \over 2} |\log \varepsilon | |x|^2 )} \quad {\text{ in }} \quad \mathbb R^2, \end{aligned}$$

where \(\varepsilon >0\).

4 Construction of an approximate solution

The rest of the paper is devoted to finding a solution to

$$\begin{aligned} -\nabla \cdot (K \nabla \Psi ) = \varepsilon ^2 e^{(\Psi -{\alpha \over 2} |\log \varepsilon | |x|^2 )} \quad {\text{ in }} \quad \mathbb R^2, \end{aligned}$$
(4.1)

where \(\varepsilon >0\) is small, and such that the solution is concentrated near a fixed point \(q_0 = (x_0 , 0)\) with \(x_0>0\). The parameter \(x_0\) is fixed and coincides with R in the definition of the helices (1.7). The number \(\alpha \) corresponds to the angular velocity of the rotating solution described in Sect. 3, and will be adjusted suitably in the course of the proof. In the case \(h=1\), which we are considering for simplicity, we will find a posteriori that \(\alpha = {2 \over 1+ R^2 }+ O(|\log \varepsilon |^{-{1\over 2}})\), as \(\varepsilon \rightarrow 0\).

We start with the construction of a global approximate solution to (4.1). Towards this end, consider the change of variables

$$\begin{aligned} x_1 = x_0 + z_1 , \quad x_2 = \sqrt{1+ x_0^2} z_2. \end{aligned}$$

Let

$$\begin{aligned} \Gamma _\varepsilon (z) = \log {8 \over (\varepsilon ^2 + |z|^2 )^2} \end{aligned}$$
(4.2)

be the Liouville profile which satisfies

$$\begin{aligned} \Delta \Gamma _\varepsilon + {8 \varepsilon ^2 \over (\varepsilon ^2 + |z|^2)^2 }= 0 \quad {\text{ in } } \quad \mathbb R^2. \end{aligned}$$
(4.3)
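This classical fact is easy to confirm with a symbolic computation; a short sympy sketch (illustration only, with \(\varepsilon \) treated as a positive symbol):

```python
# Sketch: the Liouville profile (4.2) satisfies (4.3), i.e.
#   Delta Gamma_eps + 8 eps^2/(eps^2+|z|^2)^2 = Delta Gamma_eps + eps^2 e^{Gamma_eps} = 0.
import sympy as sp

z1, z2 = sp.symbols('z1 z2', real=True)
eps = sp.symbols('epsilon', positive=True)
r2 = z1**2 + z2**2
Gamma = sp.log(8/(eps**2 + r2)**2)

lap = Gamma.diff(z1, 2) + Gamma.diff(z2, 2)
print(sp.simplify(lap + 8*eps**2/(eps**2 + r2)**2))   # -> 0
print(sp.simplify(lap + eps**2*sp.exp(Gamma)))        # -> 0
```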

For a fixed \(\delta >0\), we define

$$\begin{aligned} \Psi _{1\varepsilon } (x ) = {\alpha \over 2} |\log \varepsilon | x_0^2 - \log (1+ x_0^2) + \Gamma _\varepsilon (z) \left( 1+ c_1 z_1 \right) \, \end{aligned}$$
(4.4)

in the region \(|z| < \delta \). Here \(c_1\) is a constant to be determined later on. Actually we will choose \(c_1 ={1\over 2} {x_0 \over 1+ x_0^2}\).

For a function \(\Psi \), we define the error-function as

$$\begin{aligned} S [\Psi ] (z) = L \Psi + \varepsilon ^2 \, f \left( \Psi -{\alpha \over 2} |\log \varepsilon | ( |x_0+ z_1|^2 + (1+ x_0^2) |z_2|^2) \right) , \end{aligned}$$
(4.5)

where

$$\begin{aligned} L \Psi = \mathrm{div}( K \nabla \Psi ). \end{aligned}$$
(4.6)

We would like to describe explicitly the error function \(S[\Psi ]\), for \(\Psi = \Psi _{1\varepsilon }\), in the region \(|z| < \delta \). Using the notation \(K = \left( \begin{matrix} K_{11} &{} K_{12} \\ K_{12} &{} K_{22} \end{matrix} \right) \), we compute in the original variables

$$\begin{aligned} \begin{aligned} L= \nabla \cdot (K \nabla \cdot )&= K_{11} \, {\partial } _{x_1}^2 + K_{22} \, {\partial } _{x_2}^2 + 2 K_{12} \, {\partial } _{x_1 x_2}^2\\&\quad + ( {\partial } _{x_1} K_{11} + {\partial } _{x_2} K_{12} ) \, {\partial } _{x_1} + ( {\partial } _{x_2} K_{22} + {\partial } _{x_1} K_{12}) \, {\partial } _{x_2} \end{aligned} \end{aligned}$$

Letting \(r^2 = |x|^2\), we get that

$$\begin{aligned} \begin{aligned} L&= {1+ x_2^2 \over 1+ r^2} {\partial } _{x_1 x_1} + {1+ x_1^2 \over 1+ r^2} {\partial } _{x_2 x_2} - 2{ x_1 x_2 \over 1+r^2} {\partial } _{x_1 x_2}\\&\quad + \left( {\partial } _{x_1} ({1+x_2^2 \over 1+r^2} ) - {\partial } _{x_2} ({x_1 x_2 \over 1+r^2} ) \right) {\partial } _{x_1}\\&\quad + \left( {\partial } _{x_2} ({1+x_1^2 \over 1+r^2} ) - {\partial } _{x_1} ({x_1 x_2 \over 1+r^2} ) \right) {\partial } _{x_2} . \end{aligned} \end{aligned}$$
(4.7)

Since

$$\begin{aligned} {\partial } _{x_1} \left( {1+x_2^2 \over 1+r^2} \right) - {\partial } _{x_2} \left( {x_1 x_2 \over 1+r^2} \right) = - {x_1 \over 1+ r^2} \left( {2\over 1+r^2} +1\right) \end{aligned}$$

and

$$\begin{aligned} {\partial } _{x_2} \left( {1+x_1^2 \over 1+r^2} \right) - {\partial } _{x_1} \left( {x_1 x_2 \over 1+r^2}\right) = - {x_2 \over 1+ r^2} \left( {2\over 1+r^2} +1\right) \end{aligned}$$

formula (4.7) simplifies to

$$\begin{aligned} \begin{aligned} L&= {1+ x_2^2 \over 1+ r^2} {\partial } _{x_1 x_1} + {1+ x_1^2 \over 1+ r^2} {\partial } _{x_2 x_2} - 2{ x_1 x_2 \over 1+r^2} {\partial } _{x_1 x_2}\\&\quad - {x_1 \over 1+ r^2} \left( {2\over 1+r^2} +1\right) {\partial } _{x_1} - {x_2 \over 1+ r^2} \left( {2\over 1+r^2} +1\right) {\partial } _{x_2} . \end{aligned} \end{aligned}$$
(4.8)

In the z-variable, from (4.8) we read the operator L

$$\begin{aligned} L = {1\over 1+ x_0^2} \left[ {\partial } _{z_1 z_1} + {\partial } _{z_2 z_2} + B_0 \right] \end{aligned}$$
(4.9)

where

$$\begin{aligned} B_0 [\phi ] = {\partial } _{z_i} \left( b_{ij}^0 {\partial } _{z_j} \phi \right) , \quad {\text{ and }} \quad b_{ij}^0 (z) = K_{ij} - \delta _{ij} . \end{aligned}$$

For later purpose, we observe that, in a region where z is bounded, say \(|z| < \delta \), for a fixed \(\delta \) small, the operator \(B_0\) has the form

$$\begin{aligned} \begin{aligned} B_0 =&- \left( {2 x_0 \over 1+ x_0^2} z_1+ O(|z|^2) \right) {\partial } _{z_1 z_1} + O(|z|^2) {\partial } _{z_2 z_2} - \left( 2 x_0 z_2 + O(|z|^2 ) \right) {\partial } _{z_1 z_2} \\&\quad - \left( x_0 (1+{2\over 1+ x_0^2} ) + O(|z|) \right) {\partial } _{z_1} + O(|z| ) {\partial } _{z_2}. \end{aligned} \end{aligned}$$
(4.10)

Using (4.9) and the explicit form of the nonlinearity f, we obtain that the error function \(S[\Psi ]\) defined in (4.5) is given by

$$\begin{aligned} (1+ x_0^2) S[\Psi ] (z)&= ( {\partial } _{z_1 z_1} + {\partial } _{z_2 z_2} ) \Psi + B_0 [\Psi ] \\&\quad + \varepsilon ^2 (1+ x_0^2 ) e^{-{\alpha \over 2} |\log \varepsilon | x_0^2} e^\Psi e^{-{\alpha \over 2} |\log \varepsilon | \ell _0(z) } , \end{aligned}$$

where \(\ell _0(z) \) is defined by

$$\begin{aligned} \ell _0(z) = 2 x_0 z_1 + z_1^2 + (1+ x_0^2) z_2^2 . \end{aligned}$$
(4.11)

Using (4.2) and (4.3), we have that

$$\begin{aligned} ( {\partial } _{z_1 z_1} + {\partial } _{z_2 z_2} ) (\Gamma _\varepsilon ) +\varepsilon ^2 (1+ x_0^2 ) e^{-{\alpha \over 2} |\log \varepsilon | x_0^2} e^{\Gamma _\varepsilon + {\alpha \over 2} |\log \varepsilon | x_0^2 - \log (1+ x_0^2)} = 0, \quad {\text{ in }} \quad \mathbb R^2. \end{aligned}$$

Thus, if for a moment we take \( c_1 = 0 \) in the definition of \(\Psi _{1\varepsilon }\) in (4.4), we get

$$\begin{aligned} \begin{aligned} (1+ x_0^2) S[\Psi _{1\varepsilon } ] (z)&= B_0 [\Gamma _\varepsilon ] + \varepsilon ^2 e^{\Gamma _\varepsilon } \left[ e^{-{\alpha \over 2} |\log \varepsilon | \ell _0(z)} -1 \right] , \end{aligned} \end{aligned}$$

where \(\ell _0\) is defined in (4.11). Using (4.10), we write \(B_0 [\Gamma _\varepsilon ]\) as follows

$$\begin{aligned} B_0 [ \Gamma _\varepsilon ] = - {2 x_0 \over 1+x_0^2} z_1 {\partial } _{z_1 z_1} \Gamma _\varepsilon - 2 x_0 z_2 {\partial } _{z_1 z_2} \Gamma _\varepsilon - x_0 (1+ {2\over 1+ x_0^2} ) {\partial } _{z_1} \Gamma _\varepsilon + E_1, \end{aligned}$$
(4.12)

where \(E_1\) is a smooth function, uniformly bounded for \(\varepsilon \) small in any bounded region of z. The constant \(c_1\) in (4.4) will be chosen to partially cancel the part of the error given by \(-{2 x_0 \over 1+x_0^2} z_1 {\partial } _{z_1 z_1} \Gamma _\varepsilon - 2 x_0 z_2 {\partial } _{z_1 z_2} \Gamma _\varepsilon - x_0 (1+ {2\over 1+ x_0^2} ) {\partial } _{z_1} \Gamma _\varepsilon \). Using the explicit expression of \(\Gamma _\varepsilon \),

$$\begin{aligned} {\partial } _1 \Gamma _\varepsilon (z) = -{4z_1 \over \varepsilon ^2 +|z|^2} , \qquad z_1 {\partial } _{11} \Gamma _\varepsilon (z) = -{4z_1 \over \varepsilon ^2 +|z|^2} +{8z_1^3 \over (\varepsilon ^2 +|z|^2)^2}, \\ z_2 {\partial } _{12} \Gamma _\varepsilon (z) = {8z_2^2 z_1 \over (\varepsilon ^2 +|z|^2)^2} , \end{aligned}$$

and the identity

$$\begin{aligned} z_1 z_2^2 = { |z|^2 z_1 \over 4} - { {\text{ Re } } (z^3 )\over 4} \end{aligned}$$

(which holds for \(z = z_1 + i z_2\), since \({\text{ Re } } (z^3) = z_1^3 - 3 z_1 z_2^2\) and \(|z|^2 z_1 = z_1^3 + z_1 z_2^2\)),

we obtain

$$\begin{aligned} -&{2 x_0 \over 1+x_0^2} z_1 {\partial } _{z_1 z_1} \Gamma _\varepsilon - 2 x_0 z_2 {\partial } _{z_1 z_2} \Gamma _\varepsilon - x_0 \left( 1+ {2\over 1+ x_0^2} \right) {\partial } _{z_1} \Gamma _\varepsilon \\&=4x_0 \left( 1+{4\over 1+x_0^2}\right) {z_1 \over \varepsilon ^2 + |z|^2} -4x_0 \left( 1+ {3\over 1+x_0^2} \right) {|z|^2 z_1 \over (\varepsilon ^2 + |z|^2)^2} + {4x_0^3 \over 1+x_0^2} {{\text{ Re } } (z^3 ) \over (\varepsilon ^2 + |z|^2)^2}\\&= {4x_0 \over 1+ x_0^2} {z_1 \over \varepsilon ^2 + |z|^2} + {4 x_0^3 \over 1+ x_0^2} {{\text{ Re } } (z^3 ) \over (\varepsilon ^2 + |z|^2)^2} + {4 x_0 (4+ x_0^2) \over 1+ x_0^2} { \varepsilon ^2 \, z_1 \over (\varepsilon ^2 + |z|^2 )^2} . \end{aligned}$$

Now we consider the computation including a general \(c_1\ne 0\). We shall choose the constant \(c_1\) to eliminate the first term of the last line in the error, using the fact that

$$\begin{aligned} ( {\partial } _{z_1 z_1} + {\partial } _{z_2 z_2} ) (c_1 z_1 \Gamma _\varepsilon ) = - 8 \, c_1 \, {z_1 \over \varepsilon ^2 + |z|^2} - 8\, c_1 \, {\varepsilon ^2 z_1 \over (\varepsilon ^2 + |z|^2 )^2}. \end{aligned}$$
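The Laplacian identity just displayed can be confirmed symbolically as well; a minimal sympy sketch (illustration only):

```python
# Sketch: Delta(z1*Gamma_eps) = -8 z1/(eps^2+|z|^2) - 8 eps^2 z1/(eps^2+|z|^2)^2,
# the identity used to select c_1.
import sympy as sp

z1, z2 = sp.symbols('z1 z2', real=True)
eps = sp.symbols('epsilon', positive=True)
r2 = z1**2 + z2**2
Gamma = sp.log(8/(eps**2 + r2)**2)

lap = (z1*Gamma).diff(z1, 2) + (z1*Gamma).diff(z2, 2)
target = -8*z1/(eps**2 + r2) - 8*eps**2*z1/(eps**2 + r2)**2
print(sp.simplify(lap - target))   # -> 0
```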

We take \(c_1\) in (4.4) to be

$$\begin{aligned} c_1 ={1\over 2} {x_0 \over 1+ x_0^2}. \end{aligned}$$
(4.13)

With this choice of \(c_1\) in the definition of \(\Psi _{1\varepsilon }\) in (4.4), we get

$$\begin{aligned} \begin{aligned} (1+ x_0^2) S[\Psi _{1\varepsilon } ] (z)&= - (8 c_1 -{4 x_0 (4+x_0^2) \over 1+ x_0^2} ) {\varepsilon ^2 z_1 \over (\varepsilon ^2 + |z|^2 )^2} + {4 x_0^3 \over 1+ x_0^2} {{\text{ Re } } (z^3) \over (\varepsilon ^2 + |z|^2 )^2} \\&\quad + E_1 + B_0 [c_1 z_1 \Gamma _\varepsilon ] \\&\quad +{ 8 \varepsilon ^2 \over (\varepsilon ^2 + |z|^2 )^2} \left[ e^{-{\alpha \over 2} |\log \varepsilon | \ell _0(z) + c_1 z_1 \Gamma _\varepsilon } -1 \right] , \end{aligned} \end{aligned}$$

where \(E_1\) is the explicit smooth and bounded function, given by (4.12), and \(\ell _0\) is defined in (4.11). A careful look at \(B_0 (c_1 z_1 \Gamma _\varepsilon )\) gives that

$$\begin{aligned} B_0 (c_1 z_1 \Gamma _\varepsilon ) = - c_1 \, x_0 \, (1+ {2\over 1+ x_0^2 } ) \Gamma _\varepsilon (z) + E_2 \end{aligned}$$

where \(E_2\) is smooth in the variable z and uniformly bounded, as \(\varepsilon \rightarrow 0\). We conclude that, taking \(c_1\) in (4.4) as defined in (4.13)

$$\begin{aligned} \begin{aligned} (1+ x_0^2) S [\Psi _{1\varepsilon } ] (z)&= - c_1 x_0 (1+ {2\over 1+ x_0^2 } ) \Gamma _\varepsilon (z) + {4 x_0^3 \over 1+ x_0^2} {{\text{ Re } } (z^3) \over (\varepsilon ^2 + |z|^2 )^2} \\&\quad - (8 c_1 - {4 x_0 (4 + x_0^2) \over 1+ x_0^2} ) {\varepsilon ^2 z_1 \over (\varepsilon ^2 + |z|^2 )^2} + E_1 + E_2 \\&\quad +{ 8 \varepsilon ^2 \over (\varepsilon ^2 + |z|^2 )^2} \left[ e^{-{\alpha \over 2} |\log \varepsilon | \ell _0(z )+ c_1 z_1 \Gamma _\varepsilon } -1 \right] . \end{aligned} \end{aligned}$$
(4.14)

Observe that the first term in (4.14) has size \(| \log \varepsilon |\), while the second term decays, in the expanded variable \(z=\varepsilon y\), as \({1\over 1+ |y|}\). We introduce a further modification to our approximate solution \(\Psi _{1\varepsilon }\) in (4.4) to eliminate those two terms in the error.

To eliminate the first term, we introduce a new constant \(c_2\), which it is convenient to take as

$$\begin{aligned} c_2 = \frac{1}{4} c_1 \, x_0 \, (1+ {2\over 1+ x_0^2} ) \end{aligned}$$
(4.15)

with \(c_1\) given in (4.13). We find

$$\begin{aligned} \Delta _z \left( c_2 |z|^2 \Gamma _\varepsilon (z) \right) - c_1 x_0 (1+ {2\over 1+ x_0^2}) \Gamma _\varepsilon (z) = - 8 c_2 \left[ {\varepsilon ^2 |z|^2 \over (\varepsilon ^2 + |z|^2 )^2} + {2 |z|^2 \over \varepsilon ^2 + |z|^2} \right] . \end{aligned}$$

To correct the second term, we introduce \(h_\varepsilon (|z|)\), the solution to

$$\begin{aligned} h'' +{1\over s} h' - {9 \over s^2} h + {s^3 \over (\varepsilon ^2 + s^2)^2 }=0 \end{aligned}$$

defined by

$$\begin{aligned} h_\varepsilon (s ) = s^3 \int _s^1 {dx \over x^7} \int _0^x {\eta ^7 \over (\varepsilon ^2 + \eta ^2)^2} \, d \eta . \end{aligned}$$

Indeed, writing \(h = s^3 v\) reduces the equation to \((s^7 v')' = - {s^7 \over (\varepsilon ^2 + s^2)^2} \), which integrates to the formula above. The function \(h_\varepsilon \) is smooth and uniformly bounded as \(\varepsilon \rightarrow 0\), and \(h_\varepsilon (s ) = O(s)\) as \(s \rightarrow 0\). Writing \(z = |z| e^{i\theta }\), so that \( {\text{ Re } } (z^3) = |z|^3 \cos 3\theta \), we have that

$$\begin{aligned} H_{1\varepsilon } (z ) := h_\varepsilon (|z|) \cos 3 \theta \quad {\text{ solves }} \quad \Delta _z \left( H_{1\varepsilon } \right) + {{\text{ Re } } (z^3) \over (\varepsilon ^2 + |z|^2 )^2} = 0. \end{aligned}$$
(4.16)

We define the following improved approximation

$$\begin{aligned} \begin{aligned} \Psi _{2\varepsilon } (x )&= {\alpha \over 2} |\log \varepsilon | x_0^2 - \log (1+ x_0^2) + \Gamma _\varepsilon (z) \left( 1+ c_1 z_1 + c_2 |z|^2 \right) \\&\quad + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z) , \end{aligned} \end{aligned}$$

with \(H_{1\varepsilon }\) defined in (4.16) and \(c_2\) as in (4.15). The new error function becomes

$$\begin{aligned} \begin{aligned} (1+ x_0^2) \, S [\Psi _{2\varepsilon } ] (z)&= E_3 - \left( 8 c_1 - {4 x_0 (4+x_0^2) \over 1+ x_0^2} \right) {\varepsilon ^2 z_1 \over (\varepsilon ^2 + |z|^2 )^2} \\&\quad +{ 8 \varepsilon ^2 \over (\varepsilon ^2 + |z|^2 )^2} \left[ e^{f_{x_0} (z) } -1 \right] , \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} f_{x_0} (z )&= -{\alpha \over 2} |\log \varepsilon | \ell _0(z )+ (c_1 z_1 + c_2 |z|^2 ) \Gamma _\varepsilon + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z) \\&= (- \alpha x_0 + 4 c_1) |\log \varepsilon | z_1 + c_1 z_1 \log {8\over (1+ |{z \over \varepsilon }|^2)^2} + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z)\\&\quad -{\alpha \over 2} |\log \varepsilon | ( z_1^2 + (1+ x_0^2) z_2^2) + c_2 |z|^2 \left( 4|\log \varepsilon | + \log {8\over (1+ |{z \over \varepsilon }|^2)^2}\right) \end{aligned} \end{aligned}$$
(4.17)

with \(\ell _0\) given by (4.11) and \(H_{1\varepsilon }\) in (4.16). As before, the term \(E_3\) above is a smooth function, which is uniformly bounded as \(\varepsilon \rightarrow 0\).

Finally we introduce a global correction to cancel the bounded term \(E_3\) in the error. Our global final approximation is

$$\begin{aligned} \begin{aligned} \Psi _{ \alpha } (x)&= {\alpha \over 2} |\log \varepsilon | x_0^2 - \log (1+ x_0^2) \\&+ \eta _\delta (x) \left[ \Gamma _\varepsilon (z) \left( 1+ c_1 z_1 + c_2 |z|^2 \right) + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z) \right] + H_{2\varepsilon } (x) \end{aligned} \end{aligned}$$
(4.18)

where

$$\begin{aligned} \eta _\delta (x ) = \eta ({|z| \over \delta } ), \end{aligned}$$
(4.19)

with \(\eta \) a fixed smooth function with

$$\begin{aligned} \eta (s) = 1 , \quad {\text{ for }} \quad s \le {1 \over 2}, \quad \eta (s) = 0 , \quad {\text{ for }} \quad s \ge 1. \end{aligned}$$
(4.20)

Let g be the function with compact support defined by

$$\begin{aligned} \begin{aligned} g(x)&= \eta _\delta (x) E_3+ 2 \nabla \eta _\delta \nabla \left( \Gamma _\varepsilon (z) \left( 1+ c_1 z_1 + c_2 |z|^2 \right) + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z) \right) \\&\quad + \left( \Gamma _\varepsilon (z) \left( 1+ c_1 z_1 + c_2 |z|^2 \right) + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z) \right) \Delta \eta _\delta \\&\quad + B_0 \left[ \eta _\delta \left( \Gamma _\varepsilon (z) \left( 1+ c_1 z_1 + c_2 |z|^2 \right) + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z)\right) \right] \\&\quad - \eta _\delta B_0 \left[ \Gamma _\varepsilon (z) \left( 1+ c_1 z_1 + c_2 |z|^2 \right) + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z) \right] . \end{aligned} \end{aligned}$$

It is easy to check that

$$\begin{aligned} \Vert g (x) \Vert _{L^\infty } \le C_\delta \end{aligned}$$

for some positive constant which depends on \(\delta \). Proposition 7.1 guarantees the existence of a solution to problem

$$\begin{aligned} \begin{aligned} \Delta H_{2\varepsilon } + B_0 [H_{2\varepsilon } ] + g&= 0 , \quad {\text{ in }} \quad \mathbb R^2 \end{aligned} \end{aligned}$$

satisfying

$$\begin{aligned} | H_{2\varepsilon } (x) | \le C_\delta (1+ |x|^2). \end{aligned}$$

The solution is given up to the addition of a constant. We define the function \(H_{2\varepsilon } (x)\) in (4.18) to be the one which furthermore satisfies

$$\begin{aligned} H_{2\varepsilon } ( (x_0, 0) ) = 0. \end{aligned}$$

With this choice for our final approximation \(\Psi _\alpha \) in (4.18), the error function takes the form

$$\begin{aligned} \begin{aligned}&(1+ x_0^2) S[\Psi _\alpha ] (x) = \eta _\delta \left( (1+ x_0^2 ) S[\Psi _{2\varepsilon } ] - E_3 \right) \\&+ (1-\eta _\delta ) {8\varepsilon ^2 \over (\varepsilon ^2 + |z|^2 )^2} e^{f_{x_0} (z)} e^{(\eta _\delta -1) ( \Gamma _\varepsilon (z) \left( 1+ c_1 z_1 + c_2 |z|^2 \right) + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z)) + H_{2\varepsilon } } \\&+\eta _\delta {8\varepsilon ^2 \over (\varepsilon ^2 + |z|^2 )^2} e^{f_{x_0} (z) }\left( e^{(\eta _\delta -1) ( \Gamma _\varepsilon (z) \left( 1+ c_1 z_1 + c_2 |z|^2 \right) + {4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z)) + H_{2\varepsilon } } -1 \right) \end{aligned} \end{aligned}$$

where \(\eta _\delta \) is given in (4.19) and \(f_{x_0}\) in (4.17).

When \(1-\eta _\delta =0\), it is important to realize that one has

$$\begin{aligned} \begin{aligned} f_{x_0} (z)&= (-\alpha x_0 + 4 c_1 ) |\log \varepsilon | z_1 + c_1 \log {8 \over (1+ |{z\over \varepsilon } |^2)^2} z_1 + O(|z|). \end{aligned} \end{aligned}$$

and

$$\begin{aligned} H_{2\varepsilon } (z) = O(|z|). \end{aligned}$$

In the complementary region, where \(1-\eta _\delta \not = 0\), we have that

$$\begin{aligned} e^{f_{x_0} (z) } \le C e^{-|z|^2}, \end{aligned}$$

We conclude that the error \(S[\Psi _\alpha ]\) of the approximate solution \(\Psi _\alpha \) defined in (4.18) can be estimated as follows: in the region \(|z| < \delta \) it has the form

$$\begin{aligned} \begin{aligned} (1+ x_0^2) S [\Psi _\alpha ] (z)&= (-\alpha x_0 + 4 c_1 ) |\log \varepsilon | { 8 \varepsilon ^2 \, z_1 \over (\varepsilon ^2 + |z|^2 )^2} \\ {}&\quad + { \varepsilon ^2 \over (\varepsilon ^2 + |z|^2 )^2} \, O \left( |z| \log (2+ |{z \over \varepsilon } |) \right) . \end{aligned} \end{aligned}$$
(4.21)

and in the region \(|z|>\delta \),

$$\begin{aligned} |(1+ x_0^2 ) S[\Psi _\alpha ] (z) |\le C {\varepsilon ^2 \over (\varepsilon ^2 + |z|^2)^2} e^{-|z|^2}, \end{aligned}$$
(4.22)

for some constant \(C>0\).

5 The inner-outer gluing system

We consider the approximate solution \(\Psi _\alpha (x)\) we have built in Sect. 4 and look for a solution \(\Psi (x) \) of the equation

$$\begin{aligned} S[\Psi ] := L[\Psi ] + \varepsilon ^2 f( \Psi - \mu |x|^2 )=0 {\quad \hbox {in } }\mathbb R^2 \end{aligned}$$
(5.1)

where

$$\begin{aligned} f(u)= e^u, \quad \mu = \frac{\alpha }{2} |\log \varepsilon |. \end{aligned}$$

We look for \(\Psi \) of the form

$$\begin{aligned} \Psi (x) = \Psi _{ \alpha } (x) + \varphi (x). \end{aligned}$$
(5.2)

Observe that, by construction, the function \(\Psi _\alpha \) is symmetric with respect to \(x_2\), in the sense that \(\Psi _\alpha (x_1, x_2) = \Psi _\alpha (x_1 , -x_2)\). We thus also ask \(\Psi (x)\) to belong to the class of functions that are symmetric with respect to \(x_2\).

Here \(\varphi (x)\) is a smaller perturbation of the first approximation, which we choose of the form

$$\begin{aligned} \varphi (x)= \eta _\delta (x) \phi \left( {z \over \varepsilon } \right) + \psi (x) . \end{aligned}$$
(5.3)

We recall that

$$\begin{aligned} z_1 = x_1- x_0 , \quad z_2 = {x_2 \over \sqrt{1+ x_0^2}}, \quad \eta _\delta (x) = \eta \left( {|z| \over \delta } \right) , \end{aligned}$$

with \(\eta \) fixed in (4.20). Thus, our aim is to find \(\varphi (x)\) so that

$$\begin{aligned} S[\Psi _\alpha + \varphi ] = {\mathcal L}_{\Psi _\alpha } [\varphi ] +N_{\Psi _\alpha } [\varphi ] + E_\alpha = 0 \quad {\text{ in }} \quad \mathbb R^2 \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} E_\alpha&= S (\Psi _\alpha )\\ {\mathcal L}_{\Psi _\alpha } [\varphi ]&= L [\varphi ] + \varepsilon ^2 f' (\Psi _\alpha - \mu |x|^2 ) \varphi \\ N_{\Psi _\alpha } (\varphi )&= \varepsilon ^2 \left[ f (\Psi _\alpha - \mu |x|^2 + \varphi ) - f (\Psi _\alpha - \mu |x|^2 ) - f' (\Psi _\alpha - \mu |x|^2 ) \varphi \right] . \end{aligned} \end{aligned}$$

The following expansion holds:

$$\begin{aligned} \begin{aligned} S( \Psi _\alpha +\varphi ) \ =&\ \eta _\delta \big [ L[\phi ] + \varepsilon ^2 f'( \Psi _\alpha - \mu |x|^2 ) (\phi + \psi ) + E_\alpha + N_{\Psi _\alpha } (\eta _\delta \phi + \psi ) \big ] \\&+ L[\psi ] + (1-\eta _\delta )\left[ \varepsilon ^2 f'( \Psi _\alpha - \mu |x|^2 ) \psi + E_\alpha + N_{\Psi _\alpha } (\eta _\delta \phi + \psi ) \right] \\&+ L[\eta _\delta ] \phi + K_{ij} (x) {\partial } _{x_i} \eta _\delta {\partial } _{x_j}\phi . \end{aligned} \end{aligned}$$

Thus \(\Psi \) given by (5.2)–(5.3) solves (5.1) if the pair \((\phi ,\psi )\) satisfies the system of equations

$$\begin{aligned} L[\phi ] + \varepsilon ^2 f'( \Psi _\alpha - \mu |x|^2 ) (\phi + \psi ) + E_\alpha + N_{\Psi _\alpha } (\eta _\delta \phi + \psi ) \, =\, 0, \quad |z|< 2\delta , \end{aligned}$$
(5.4)

and

$$\begin{aligned} \begin{aligned} L[\psi ] \ +&\, (1-\eta _\delta ) \big [ \varepsilon ^2 f'( \Psi _\alpha - \mu |x|^2 ) \psi \, + \, N_{\Psi _\alpha } (\eta _\delta \phi + \psi )\, + \, E_\alpha \big ] \\ +&\ L[\eta _\delta ] \phi \, +\, K_{ij} (x) {\partial } _{x_i} \eta _\delta {\partial } _{x_j}\phi \ =\ 0 {\quad \hbox {in } }\mathbb R^2 ,\end{aligned} \end{aligned}$$
(5.5)

which we respectively call the “inner” and “outer” problems.

Let us write (5.4) in terms of the variable \(y=\frac{z}{\varepsilon }\). First we recall that for a function \(\varphi = \varphi (z) \) we have

$$\begin{aligned} L[\varphi ] = \frac{1}{1+x_0^2} \big [ \,\Delta _z \varphi + {\partial } _{z_i} ( b^0_{ij} (z) {\partial } _{z_j} \varphi ) \,\big ] \end{aligned}$$

for smooth functions \(b^0_{ij}(z)\) with \( b^0_{ij}(0) =0\). Hence for \(\phi (y)=\varphi (\varepsilon y) \) we have

$$\begin{aligned} \varepsilon ^2(1+ x_0^2) L[\phi ] = \Delta _y \phi + {\partial } _{y_i} ( b^0_{ij} (\varepsilon y) {\partial } _{y_j} \phi ) \end{aligned}$$

We notice that, in terms of the variable y we can write

$$\begin{aligned} \Psi _\alpha (x) - \mu |x|^2 = \Gamma _0(y) - 4\log \varepsilon -\log (1+ x_0^2) + |\log \varepsilon | \varepsilon y_1 (- \alpha x_0 + 4 c_1) + {{\mathcal R}}(y) \end{aligned}$$

where

$$\begin{aligned} \Gamma _0 (y) = \log {8 \over (1+ |y|^2)^2}, \end{aligned}$$

and \({{\mathcal R}}(y)\) can be bounded as

$$\begin{aligned} | D_y {{\mathcal R}}(y)| + |{{\mathcal R}}(y)| \,\le \, C \varepsilon |y| \log (2+|y|). \end{aligned}$$
(5.6)

At this point we make the following choice for the parameter \(\alpha \). We set

$$\begin{aligned} \alpha \ = \ \frac{4c_1}{x_0} + {\alpha _1} = {2 \over 1+ x_0^2} + \alpha _1 , \qquad |\alpha _1| \le \frac{1}{|\log \varepsilon |^{\frac{1}{2}} } . \end{aligned}$$
(5.7)

In the expanded variable y and with this choice of \(\alpha \), from estimates (4.21) and (4.22) for \(E_\alpha = S[\Psi _\alpha ]\) we obtain, in the region \(|y|<{\delta \over \varepsilon }\),

$$\begin{aligned} \tilde{E}_\alpha :=\varepsilon ^2(1+ x_0^2) E_\alpha \ = \ - \frac{ 8 \alpha _1 x_0 \varepsilon y_1 }{(1+|y|^2)^2}|\log \varepsilon | \ + \ O \big ( \frac{\varepsilon }{1+ |y|^{3-a}} \big ) \end{aligned}$$
(5.8)

for \(a \in (0,1)\). Indeed, we first observe that

$$\begin{aligned} \begin{aligned} \varepsilon ^4(1+ x_0^2) f'( \Psi _\alpha (x) - \mu |x|^2) = e^{\Gamma _0(y)} e^{ - \varepsilon y_1 |\log \varepsilon | \alpha _1 x_0 + {{\mathcal R}}(y)} , \end{aligned} \end{aligned}$$

with \({{\mathcal R}}\) as in (5.6). For \(|y|< \frac{1}{\varepsilon |\log \varepsilon |} \) we can expand

$$\begin{aligned} e^{ - \varepsilon y_1 |\log \varepsilon | \alpha _1 x_0 + {{\mathcal R}}(y)} = (1 - \varepsilon y_1 |\log \varepsilon | \alpha _1 x_0 + O( \varepsilon |y| \log (2+|y|) + \varepsilon ^2 \alpha _1^2|y|^2 \log ^2\varepsilon ). \end{aligned}$$

Hence

$$\begin{aligned} \begin{aligned} \varepsilon ^4(1+ x_0^2) f'( \Psi _\alpha (x) - \mu |x|^2) = e^{\Gamma _0} + b_0(y), \quad {\text{ with }} \\ b_0(y) = e^{\Gamma _0} (- \varepsilon y_1 |\log \varepsilon | \alpha _1 x_0) + O \left( \frac{ \varepsilon }{1+|y|^3} \log (2+|y|) \right) . \end{aligned} \end{aligned}$$

For \( \frac{1}{\varepsilon |\log \varepsilon |}< |y| < \frac{\delta }{\varepsilon }\) we have

$$\begin{aligned} \varepsilon ^4(1+ x_0^2) f'( \Psi _\alpha (x) - \mu |x|^2) = e^{\Gamma _0(y)} \varepsilon ^{-c\delta } = O\left( \frac{\varepsilon }{ 1 +|y|^{3-a}}\right) \end{aligned}$$

for some small \(0<a<1\). Hence we globally have

$$\begin{aligned} \varepsilon ^4 (1+ x_0^2) f'( \Psi _\alpha (x) - \mu |x|^2) = e^{\Gamma _0(y)} + b_0(y) \end{aligned}$$

where

$$\begin{aligned} b_0(y) = - \frac{ 8 \alpha _1 x_0 \varepsilon y_1 |\log \varepsilon |}{(1+|y|^2)^2} + O \left( \frac{\varepsilon }{1+ |y|^{3-a}} \right) . \end{aligned}$$
(5.9)

Expansion (5.8) readily follows. Similarly, for \(b_0\) of this type we get the expansion

$$\begin{aligned} \mathcal N (\varphi ) := \varepsilon ^2(1+ x_0^2) N_{\Psi _\alpha } (\varphi ) = ( e^{\Gamma _0(y)} + b_0) ( e^\varphi -1 -\varphi ) . \end{aligned}$$
(5.10)

Then, the inner problem (5.4) becomes

$$\begin{aligned} \Delta _y \phi + f'(\Gamma _0) \phi + B[\phi ] + \mathcal N (\psi + \eta _\delta \phi ) + \tilde{E}_\alpha + (f'(\Gamma _0) + b_0 ) \psi = 0 {\quad \hbox {in } }B_R \end{aligned}$$
(5.11)

where \(R= \frac{2\delta }{\varepsilon } \) and

$$\begin{aligned} B[\phi ] = {\partial } _{y_i} (b_{ij}^0(\varepsilon y) {\partial } _j\phi ) + b_0(y) \phi . \end{aligned}$$
(5.12)

The idea is to solve this equation, coupled with the outer problem (5.5) in such a way that \(\phi \) has the size of the error \(\tilde{E}_\alpha \) with two powers less of decay in y, say

$$\begin{aligned} (1+|y|) |D_y\phi (y)|+ |\phi (y)| \le \frac{C \varepsilon |\log \varepsilon | }{1+ |y|^{1-a}}. \end{aligned}$$

We write the outer problem as

$$\begin{aligned} L[\psi ] \ + \, G(\psi , \phi ,\alpha ) \ =\ 0 {\quad \hbox {in } }\mathbb R^2 \end{aligned}$$
(5.13)

where

$$\begin{aligned} G(\psi , \phi ,\alpha ) = V(x)\psi \, + \, N(\eta _\delta \phi + \psi ) \, + \, E^o (x)\, + \ A[\phi ], \end{aligned}$$
(5.14)

with

$$\begin{aligned} \begin{aligned} V(x)\ =&\ (1-\eta _\delta ) \varepsilon ^2 f'( \Psi _\alpha - \mu |x|^2 ) , \\ N(\varphi ) \ =&\ (1-\eta _\delta ) N_{\Psi _\alpha } (\varphi )\\ E^o (x)\ =&\ (1-\eta _\delta ) E_\alpha \\ A[\phi ]\ =&\ L[\eta _\delta ] \phi \, +\, K_{ij} (x) {\partial } _{x_i} \eta _\delta {\partial } _{x_j}\phi . \end{aligned} \end{aligned}$$

We observe that for \(|z|> \frac{\delta }{2} \) we have

$$\begin{aligned} \Psi _\alpha (x) - \frac{\alpha }{2} |\log \varepsilon ||x|^2 = O(1) + \frac{\alpha }{2} |\log \varepsilon ||x_0|^2 - \frac{\alpha }{2} |\log \varepsilon ||x|^2 \end{aligned}$$

An important point is that from the choice of \(\alpha \) in (5.7) we have that

$$\begin{aligned} \frac{\alpha }{2}|x_0|^2 < \frac{x_0^2+\sigma '}{1+x_0^2}, \end{aligned}$$

for some \(\sigma '>0\) arbitrarily small. It follows that there exists \(0<b<1\), namely \(b={1-\sigma ' \over 1+ x_0^2}\), which depends on \(x_0\), such that

$$\begin{aligned} \varepsilon ^2\exp ( \Psi _\alpha (x) - \frac{\alpha }{2} |\log \varepsilon ||x|^2 ) = O(\varepsilon ^{1+b } e^{-|\log \varepsilon | |x|^2} ). \end{aligned}$$

Hence the following bounds hold.

$$\begin{aligned} \begin{aligned} |V(x)|\ \le&\ \varepsilon ^{1+b} e^{ - |x|^2} , \\ |N(\eta _\delta \phi + \psi )| \ \le&\ \varepsilon ^{1+b} e^{ - |x|^2} (|\phi |^2\eta _\delta ^2 + |\psi |^2) \\ |E_\alpha ^o (x)|\ \le&\ \varepsilon ^{1+b} e^{ - |x|^2}. \end{aligned} \end{aligned}$$
(5.15)

In the next two sections we will establish linear results that are the basic tools to solve system (5.11)–(5.13) by means of a fixed point scheme.

6 Linearized inner problem

In this section we consider the problem

$$\begin{aligned} \begin{aligned} \Delta \phi + e^{\Gamma _0(y)}\phi + h(y)= 0 {\quad \hbox {in } }\mathbb R^2. \end{aligned} \end{aligned}$$
(6.1)

We want to solve this problem in a topology of decaying functions in Hölder sense. For numbers \(m>2\), \(0<\alpha <1\) we consider the following norms

$$\begin{aligned} \begin{aligned} \Vert h\Vert _{m} =&\sup _{y\in \mathbb R^2} (1+|y|)^m|h(y)|, \\ \Vert h\Vert _{m,\alpha } =&\Vert h\Vert _{m} + \sup _{y\in \mathbb R^2} (1+|y|)^{m + \alpha }[h]_{B_1(y),\alpha } , \end{aligned} \end{aligned}$$
(6.2)

where we use the standard notation

$$\begin{aligned}{}[h]_{A,\alpha } = \sup _{z_1,z_2\in A} \frac{|h(z_1) -h(z_2)| }{|z_1-z_2|^\alpha } \end{aligned}$$

We consider the functions \(Z_i(y)\) defined as

$$\begin{aligned} Z_i(y) = {\partial } _{y_i} \Gamma _0(y) = -4 \frac{y_i}{|y|^2+1}, \ i=1,2, \quad Z_0(y) = 2+ y\cdot {\nabla } \Gamma _0(y) = \frac{2(1 -|y|^2)}{|y|^2+1} . \end{aligned}$$
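Each of these functions lies in the kernel of the linearized Liouville operator \(\phi \mapsto \Delta \phi + e^{\Gamma _0}\phi \) appearing in (6.1). A quick sympy sketch (illustration only) confirming \(\Delta Z_i + e^{\Gamma _0} Z_i = 0\) for \(i=0,1,2\):

```python
# Sketch: Z_0, Z_1, Z_2 defined above satisfy Delta Z + e^{Gamma_0} Z = 0.
import sympy as sp

y1, y2 = sp.symbols('y1 y2', real=True)
r2 = y1**2 + y2**2
Gamma0 = sp.log(8/(1 + r2)**2)

Z1 = Gamma0.diff(y1)                                   # -4 y1/(1+|y|^2)
Z2 = Gamma0.diff(y2)
Z0 = 2 + y1*Gamma0.diff(y1) + y2*Gamma0.diff(y2)       # 2(1-|y|^2)/(1+|y|^2)

for Z in (Z0, Z1, Z2):
    lap = Z.diff(y1, 2) + Z.diff(y2, 2)
    print(sp.simplify(lap + sp.exp(Gamma0)*Z))         # -> 0 (three times)
```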

Lemma 6.1

Given \(m>2\) and \(0<\alpha < 1\), there exist \(C>0\) and, for each h with \(\Vert h\Vert _m <+\infty \), a solution \(\phi = {\mathcal T} [ h]\) of problem (6.1) that defines a linear operator of h and satisfies the estimate

$$\begin{aligned} \begin{aligned}&(1+|y|) | {\nabla } \phi (y)| + | \phi (y)| \\&\, \le \, C \left[ \, \log (2+|y|) \,\big |\int _{\mathbb R^2} h Z_0\big | + (1+|y|) \sum _{i=1}^2 \big |\int _{\mathbb R^2} h Z_i\big | + (1+|y|)^{2-m} \Vert h\Vert _{m} \,\right] . \end{aligned} \end{aligned}$$
(6.3)

In addition, if \(\Vert h\Vert _{m,\alpha } <+\infty \), we have

$$\begin{aligned} \begin{aligned}&(1+|y|^{2+\alpha }) [D^2_y \phi ]_{B_1(y),\alpha } +(1+|y|^2) |D^2_y \phi (y)| \\&\, \le \, C \big [ \, \log (2+|y|) \,\big |\int _{\mathbb R^2} h Z_0\big | + (1+|y|) \sum _{i=1}^2 \big |\int _{\mathbb R^2} h Z_i\big | + (1+|y|)^{2-m} \Vert h\Vert _{m,\alpha } \,\big ]. \end{aligned} \end{aligned}$$
(6.4)

Proof

The existence of this inverse is essentially known; we provide a proof. We assume that h is complex-valued and set \(y=re^{i\theta }\). We write

$$\begin{aligned} h(y) = \sum _{k=-\infty }^{\infty } h_k (r) e^{ik\theta } , \quad \phi (y)= \sum _{k=-\infty }^{\infty } \phi _k (r) e^{ik\theta } \end{aligned}$$

The equation is equivalent to

$$\begin{aligned} {\mathcal L} _k [\phi _k] + h_k(r) = 0 , \quad r\in (0,\infty ) \end{aligned}$$
(6.5)

where

$$\begin{aligned} {\mathcal L} _k [\phi _k] = \phi _k'' + \frac{1}{r} \phi _k' + e^{\Gamma _0} \phi _k - \frac{k^2}{r^2} \phi _k \end{aligned}$$

Using the variation of parameters formula, the following expression (continuously extended to \(r= 1\)) defines a smooth solution of (6.5) for \(k=0\):

$$\begin{aligned} \phi _0 (r) = -z(r)\int _{1}^r \frac{ds}{ s z(s)^2} \int _0^s h_0(\rho ) z(\rho ) \rho \,d\rho , \quad z(r) = \frac{2(1-r^2)}{ 1 + r^2} \end{aligned}$$

Noting that \(\int _0^\infty h_0(\rho ) z(\rho ) \rho \,d\rho = \frac{1}{2\pi }\int _{\mathbb R^2} h(y)Z_0(y)\, dy \) we see that this function satisfies

$$\begin{aligned} |\phi _0(r)| \, \le \, C\big [ \, \log (2 + r) \Big | \int _{\mathbb R^2} h(y)Z_0(y)\, dy \Big | \, +\, (1+r)^{2-m} \Vert h\Vert _{ m} \big ]. \end{aligned}$$

Now we observe that

$$\begin{aligned} \phi _k (r) = -z(r)\int _r^\infty \frac{ds}{ s z(s)^2} \int _0^s h_k(\rho ) z(\rho ) \rho \,d\rho , \quad z(r) = \frac{4r}{ 1 + r^2} \end{aligned}$$

solves (6.5) for \(k =-1,1\) and satisfies

$$\begin{aligned} |\phi _k(r)| \, \le \, C\big [ \, (1+ r) \sum _{j=1}^2 \Big | \int _{\mathbb R^2} h(y)Z_j(y)\, dy \Big | \, +\, (1+r)^{2-m} \Vert h\Vert _{ m} \big ]. \end{aligned}$$

For \(k=2\) there is a function z(r) such that \(\mathcal L_2[z] = 0 \), \(z(r)\sim r^{2}\) as \(r\rightarrow 0\) and as \(r\rightarrow \infty \). For \(|k|\ge 2\) we have that

$$\begin{aligned} \bar{\phi }_k (r) = \frac{4}{ k^2} z(r)\int _0^r \frac{ds}{ s z(s)^2} \int _0^s |h_k(\rho )| z(\rho ) \rho \,d\rho , \end{aligned}$$

is a positive supersolution for equation (6.5), hence the equation has a unique solution \(\phi _k\) with \(|\phi _k(r)| \le \bar{\phi }_k (r)\). Thus

$$\begin{aligned} |\phi _k(r)| \, \le \, \frac{C}{k^2} (1+r)^{2-m} \Vert h\Vert _{ m}, \quad |k|\ge 2. \end{aligned}$$

Thus, with the functions \(\phi _k\) defined above,

$$\begin{aligned} \phi (y)= \sum _{k=-\infty }^{\infty } \phi _k (r) e^{ik\theta } \end{aligned}$$

defines a linear operator of h which solves equation (6.1) and, adding up the individual estimates above, satisfies the estimate

$$\begin{aligned} \begin{aligned} | \phi (y)| \, \le \, C \left[ \, \log (2+|y|) \,\big |\int _{\mathbb R^2} h Z_0\big | + (1+|y|) \sum _{i=1}^2 \big |\int _{\mathbb R^2} h Z_i\big | + (1+|y|)^{2-m} \Vert h\Vert _{m} \,\right] . \end{aligned} \end{aligned}$$
(6.6)

As a corollary we find that similar bounds hold for the first and second derivatives. In fact, for \(y = R e\) with \(R=|y|\gg 1\) and \(|e|=1\), let us set \( \phi _R(z) = {R^{m-2}} \phi (R(e+z)) . \) Then we find

$$\begin{aligned} \Delta _z\phi _R + R^{2} e^{\Gamma _0 (R(e+ z))}\phi _R + h_R (z) = 0, \quad |z| <\frac{1}{2} \end{aligned}$$

where \(h_R(z) = R^{m}h (R(e+z)) \). Let us set,

$$\begin{aligned} \delta _i = \Big | \int _{\mathbb R^2} h Z_i \Big | , \quad i=0,1,2. \end{aligned}$$

Then from (6.6), and a standard elliptic estimate we find

$$\begin{aligned} \Vert {\nabla } _z \phi _R \Vert _{L^\infty ( B_{\frac{1}{4}}(0) )} + \Vert \phi _R \Vert _{L^\infty ( B_{\frac{1}{2}}(0) )} \, \le \, C\left[ \delta _0 \log R + \sum _{i=1}^2\delta _i R + \Vert h\Vert _{m} \right] . \end{aligned}$$

Clearly we have

$$\begin{aligned} \Vert h_R \Vert _{L^\infty (B_{\frac{1}{2}} (0))} \ \le \ C\Vert h\Vert _{m}, \quad [h_R ]_{B_{\frac{1}{2}} (0)} \ \le \ C\Vert h\Vert _{m,\alpha } . \end{aligned}$$

From interior Schauder estimates and the bound for \(\phi _R\) we then find

$$\begin{aligned} \Vert D^2_{z} \phi _R \Vert _{L^\infty ( B_{\frac{1}{4}}(0) ) } + [ D^2_{z} \phi _R ]_{B_{\frac{1}{4}}(0), \alpha } \, \le \, C \left[ \delta _0 \log R + \sum _{i=1}^2\delta _i R + \Vert h\Vert _{m,\alpha } \right] . \end{aligned}$$

From these relations, estimates (6.3) and (6.4) follow. \(\square \)

Next, for a fixed number \(\delta >0\) and a sufficiently large \(R>0\), we consider the equation

$$\begin{aligned} \Delta \phi + e^{\Gamma _0} \phi + B[\phi ] + h(y) = \sum _{i=0}^2 c_i e^{\Gamma _0} Z_i {\quad \hbox {in } }B_R \end{aligned}$$
(6.7)

where \(c_i\) are real numbers,

$$\begin{aligned} B[\phi ]= {\partial } _{i} ( b_{ij} (y) {\partial } _j \phi ) + b_0(y)\phi \end{aligned}$$
(6.8)

and the coefficients satisfy the bounds

$$\begin{aligned} \begin{aligned}&(1+|y|)^{-1} |b_{ij}(y)| + | D_yb_{ij}(y)| \\&\qquad +(1+|y|)^2 |b_{0}(y)| + (1+|y|)^3| D_yb_{0}(y)| \\&\quad \le \, \delta R^{-1} {\quad \hbox {in } }B_R\,. \end{aligned} \end{aligned}$$
(6.9)

For a function h defined in \(A\subset \mathbb R^2\) we denote by \(\Vert h\Vert _{m,A}\) and \(\Vert h\Vert _{m,\alpha ,A}\) the quantities defined in (6.2), but with the suprema taken over A only, namely

$$\begin{aligned} \Vert h \Vert _{m, A}= \sup _{y\in A} | (1+|y|)^m h(y)| , \quad \Vert h \Vert _{m,\alpha , A} = \sup _{y\in A} (1+|y|)^{m+\alpha }[ h]_{B(y,1)\cap A} + \Vert h \Vert _{m, A} \end{aligned}$$

Let us also define, for a function of class \(C^{2,\alpha }(A)\),

$$\begin{aligned} \Vert \phi \Vert _{*,m-2, A} = \Vert D^2\phi \Vert _{m,\alpha , A} + \Vert D\phi \Vert _{m-1,A} + \Vert \phi \Vert _{m-2,A} . \end{aligned}$$
(6.10)

In this notation we omit the dependence on A when \(A= \mathbb R^2\). The following is the main result of this section.

Proposition 6.1

There are numbers \(\delta ,C>0\) such that, for all sufficiently large R and any differential operator B as in (6.8) with bounds (6.9), Problem (6.7) has a solution \(\phi = T[h]\), for certain scalars \(c_i= c_i[h]\), which defines a linear operator of h and satisfies

$$\begin{aligned} \Vert \phi \Vert _{*,m-2, B_R} \ \le \ C \Vert h \Vert _{m,\alpha , B_R} . \end{aligned}$$

In addition, the linear functionals \(c_i\) can be estimated as

$$\begin{aligned} c_0[h]\, =&\, \gamma _0\int _{B_{R}} h Z_0 + O( R^{2-m}) \Vert h \Vert _{m,\alpha , B_R} , \\ c_i[h]\, =&\, \gamma _i\int _{B_{R}} h Z_i + O( R^{1-m}) \Vert h \Vert _{m,\alpha , B_R}, \ i=1,2. \end{aligned}$$

where \(\gamma _i^{-1} = \int _{\mathbb R^2} e^{\Gamma _0} Z_i^2\), \(i=0,1,2\).

Proof

We consider a standard linear extension operator \(h\mapsto \tilde{h} \) to entire \(\mathbb R^2\), in such a way that the support of \(\tilde{h}\) is contained in \(B_{2R}\) and \(\Vert \tilde{h}\Vert _{m,\alpha } \le C\Vert h\Vert _{m,\alpha , B_R}\) with C independent of all large R. In a similar way, we assume with no loss of generality that the coefficients of B are of class \(C^1\) in entire \(\mathbb R^2\), have compact support in \(B_{2R}\) and globally satisfy bounds (6.9). Then we consider the auxiliary problem in entire space

$$\begin{aligned} \Delta \phi + e^{\Gamma _0} \phi + B[\phi ] + \tilde{h}(y) = \sum _{i=0}^2 c_i e^{\Gamma _0} Z_i {\quad \hbox {in } }\mathbb R^2 \end{aligned}$$
(6.11)

where, assuming that \(\Vert h\Vert _m<+\infty \) and \(\phi \) is of class \(C^2\), \(c_i = c_i[h,\phi ]\) are the scalars defined so that

$$\begin{aligned} \gamma _i \int _{\mathbb R^2} (B[\phi ] + \tilde{h}(y))Z_i = c_i , \quad \gamma _i^{-1} = \int _{\mathbb R^2} e^{\Gamma _0} Z_i^2 \end{aligned}$$
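This choice of the scalars is made precisely so that the right hand side of (6.11) has no component along the \(Z_j\). Indeed, since the functions \(Z_i\) correspond to distinct Fourier modes, so that \(\int _{\mathbb R^2} e^{\Gamma _0} Z_i Z_j = 0\) for \(i\ne j\), we get

$$\begin{aligned} \int _{\mathbb R^2} \Big ( B[\phi ] + \tilde{h} - \sum _{i=0}^2 c_i e^{\Gamma _0} Z_i \Big ) Z_j = 0 , \quad j=0,1,2, \end{aligned}$$

and hence the first two terms in the bounds (6.3)–(6.4) of Lemma 6.1 do not contribute when the operator \(\mathcal T\) is applied to this right hand side.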

For a function \(\phi \) of class \(C^{2,\alpha } (\mathbb R^2)\) we define

$$\begin{aligned} \Vert \phi \Vert _{*, m-2,\alpha } = \Vert D^2_y\phi \Vert _{m,\alpha } + \Vert D_y\phi \Vert _{m-1}+ \Vert \phi \Vert _{m-2} \end{aligned}$$

Since \(B[Z_i]= O((1+|y|)^{-2}) \) and \( \int _{\mathbb R^2} \frac{dy}{1+|y|^{m} } <+\infty \) we get

$$\begin{aligned} \int _{\mathbb R^2} B[\phi ]Z_i = \int _{\mathbb R^2} \phi B[Z_i] = O(\Vert \phi \Vert _{m-2}) R^{-1} . \end{aligned}$$

In addition, we readily check that

$$\begin{aligned} \Vert B[\phi ] \Vert _{m,\alpha } \le C\delta \Vert \phi \Vert _{*, m-2,\alpha } . \end{aligned}$$
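For instance, for the leading term of \(B[\phi ] = b_{ij}{\partial } _{ij}\phi + ({\partial } _i b_{ij}){\partial } _j \phi + b_0\phi \), using (6.9) and the fact that the coefficients are supported in \(B_{2R}\),

$$\begin{aligned} (1+|y|)^m\, |b_{ij}(y)\, {\partial } _{ij}\phi (y)| \, \le \, \delta R^{-1} (1+|y|)\, \Vert D^2\phi \Vert _{m} \, \le \, C\delta \, \Vert \phi \Vert _{*, m-2,\alpha } {\quad \hbox {in } }B_{2R}, \end{aligned}$$

and the terms involving \(({\partial } _i b_{ij}){\partial } _j\phi \) and \(b_0\phi \), as well as the Hölder seminorm, are handled in the same way.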

Let us consider the Banach space X of all \(C^{2,\alpha }(\mathbb R^2)\) functions with \( \Vert \phi \Vert _{*, m-2,\alpha } <+\infty . \) We find a solution of (6.11) if we solve the equation

$$\begin{aligned} \phi = \mathcal A [\phi ] + \mathcal H ,\quad \phi \in X \end{aligned}$$
(6.12)

where

$$\begin{aligned} \mathcal A [\phi ] = \mathcal T\Big [B[\phi ] -\sum _{i=0}^2 c_i[0, \phi ] e^{\Gamma _0}Z_i \Big ] ,\quad \mathcal H = \mathcal T\Big [ \tilde{h} - \sum _{i=0}^2 c_i[\tilde{h},0] e^{\Gamma _0}Z_i \Big ] , \end{aligned}$$

and \(\mathcal T\) is the operator built in Lemma 6.1. We observe that

$$\begin{aligned} \Vert \mathcal A [\phi ] \Vert _{*, m-2,\alpha } \le C\delta \Vert \phi \Vert _{*, m-2,\alpha }, \quad \Vert \mathcal H \Vert _{*, m-2,\alpha } \le C \Vert h \Vert _{ m,\alpha , B_R}. \end{aligned}$$

Fixing \(\delta \) so that \(\delta C<1\), we find that Eq. (6.12) has a unique solution, which defines a linear operator of h and satisfies

$$\begin{aligned} \Vert \phi \Vert _{*, m-2,\alpha } \ \le \ C \Vert h \Vert _{ m,\alpha , B_R} \end{aligned}$$

The result of the proposition follows by just setting \(T[h] = \phi \big |_{B_R}\). The proof is concluded. \(\square \)

7 The linear outer problem

Let us recall the operator L defined in (4.6),

$$\begin{aligned} L[\psi ] = {\text{ div }} (K \nabla \psi ) \end{aligned}$$

where \(K = K(x_1 , x_2)\) is the matrix

$$\begin{aligned} K(x_1 , x_2 ) = \frac{1}{1+x_1^2+x_2^2} \left( \begin{matrix} 1+x_2^2 &{} -x_1 x_2\\ -x_1 x_2 &{} 1+x_1^2 \end{matrix} \right) , \end{aligned}$$

that is, the same matrix (1.14) with \(h=1\).

In this section we consider the Poisson equation for the operator L

$$\begin{aligned} L[\psi ] + g(x) = 0 {\quad \hbox {in } }\mathbb R^2 , \end{aligned}$$
(7.1)

for a bounded function g.

It is sufficient to restrict our attention to the case of functions g(x) that satisfy the decay condition

$$\begin{aligned} \Vert g\Vert _\nu \, :=\, \sup _{x\in \mathbb R^2} (1+|x|)^\nu |g(x)|\, <\,+\infty \,, \end{aligned}$$

where \(\nu >2\). We have the validity of the following result.

Proposition 7.1

There exists a solution \(\psi (x)\) to problem (7.1), which is of class \(C^{1,\alpha }(\mathbb R^2)\) for any \(0<\alpha <1\), that defines a linear operator \(\psi = {\mathcal T}^o (g) \) of g and satisfies the bound

$$\begin{aligned} |\psi (x)| \,\le \, C{ \Vert g\Vert _\nu } (1+ |x|^2), \end{aligned}$$
(7.2)

for some positive constant C.

Proof

To solve Eq. (7.1) we decompose g and \(\psi \) into Fourier modes as

$$\begin{aligned} g(x) = \sum _{j=-\infty }^{\infty } g_j(r) e^{ji\theta }, \quad \psi (x) = \sum _{j=-\infty }^{\infty } \psi _j (r) e^{ji\theta },\quad x=re^{i\theta } . \end{aligned}$$

It is useful to derive an expression for the operator \(L[\psi ]\) in polar coordinates. With some abuse of notation we write \(\psi (r,\theta ) = \psi (x)\) with \(x= r \, e^{i\theta }\). The following expression holds:

$$\begin{aligned} L[\psi ] = {1\over 1 + r^2} \left( {1 \over r^2} +1 \right) {\partial } _\theta ^2 \psi + {1 \over r} {\partial } _r \left( {r \over 1 + r^2} {\partial } _r\psi \right) . \end{aligned}$$
(7.3)

To prove this, recall that for a vector field \(\vec F = F_r \hat{r} + F_\theta \hat{\theta }\) where

$$\begin{aligned} \hat{r} = {1\over r} (x_1,x_2), \quad \hat{\theta }= {1\over r} (-x_2,x_1), \end{aligned}$$

we have

$$\begin{aligned} {\text{ div }}(\vec F)\ =\ {1\over r} {\partial } _r ( r F_r) + {1\over r} {\partial } _\theta F_\theta , \quad {\nabla } \psi = \psi _r \hat{r} + \frac{ \psi _\theta }{r}\hat{\theta }. \end{aligned}$$
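It is also convenient to note that, writing \(x_1 = r\cos \theta \), \(x_2 = r\sin \theta \), the matrix appearing in K splits as

$$\begin{aligned} \left( \begin{matrix} x_2^2 &{} -x_1 x_2\\ -x_1 x_2 &{} x_1^2 \end{matrix} \right) = r^2 \, \hat{\theta }\, \hat{\theta }^{\, T} , \qquad \hbox {so that} \qquad K = \frac{1}{1+r^2} \left( I + r^2\, \hat{\theta }\, \hat{\theta }^{\, T} \right) , \end{aligned}$$

which is precisely the decomposition \(L = L_1 + L_2\) used in the computation below.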

Then

$$\begin{aligned} \begin{aligned} L[\psi ]&= {\text{ div }} (K \nabla \psi ) = {\text{ div }}\left( \frac{1}{1+x_1^2+x_2^2} \left( \begin{matrix} 1+x_2^2 &{} -x_1x_2 \\ -x_1x_2 &{} 1+x_1^2 \end{matrix} \right) \nabla \psi \right) \\&= {\text{ div }}\left( \frac{1}{1+x_1^2+x_2^2} \left( \begin{matrix} x_2^2 &{} -x_1x_2 \\ -x_1x_2 &{} x_1^2 \end{matrix} \right) \nabla \psi \right) + {\text{ div }} \left( \frac{1}{1+ x_1^2+x_2^2} \nabla \psi \right) \\&= L_1 [\psi ] + L_2 [\psi ]. \end{aligned} \end{aligned}$$

Using the polar coordinates formalism, we get

$$\begin{aligned} \begin{aligned} L_1 [\psi ]&= {\text{ div }}\left( {r^2 \over 1 + r^2} \left[ \begin{matrix} \sin ^2 \theta &{} -\sin \theta \cos \theta \\ -\sin \theta \cos \theta &{} \cos ^2 \theta \end{matrix} \right] (\psi _r \hat{r} + {\psi _\theta \over r} \hat{\theta }) \right) \\&= {\text{ div }}\left( {r^2 \over 1 + r^2} \left[ \begin{matrix} -\sin \theta \, \hat{\theta }^T \\ \cos \theta \, \hat{\theta }^T \end{matrix} \right] (\psi _r \hat{r} + {\psi _\theta \over r} \hat{\theta }) \right) \\&=\, {\text{ div }} ( \psi _\theta {r \over 1 + r^2} \hat{\theta })\ =\ {1\over 1 + r^2} \psi _{\theta \theta } . \end{aligned} \end{aligned}$$

Similarly,

$$\begin{aligned} L_2 [\psi ] = {1\over r} \left( { r \over r^2 + 1} \psi _r \right) _r + {1 \over 1 + r^2 } {\psi _{\theta \theta } \over r^2}. \end{aligned}$$

Combining the above relations, expression (7.3) follows. A nice feature of this operator is that it decouples the Fourier modes. In fact, equation (7.1) becomes equivalent to the following infinite set of ODEs:

$$\begin{aligned} \begin{aligned}&L_k[\psi _k] + g_k(r) = 0 , \quad r\in (0,\infty ), \\&L_k[\psi _k]:= {1\over r} \big ( {r \over r^2 + 1} \psi _k'\big )' - {k^2\over 1 + r^2} \left( {1 \over r^2} +1 \right) \psi _k , \quad k\in \mathbb Z. \end{aligned} \end{aligned}$$
(7.4)

When \(r\rightarrow 0\) or \(r\rightarrow +\infty \), the operator \(L_k \) resembles, respectively,

$$\begin{aligned} \begin{aligned} L_k[p]\ \sim&\ \frac{1}{r}(rp')' - \frac{k^2p}{r^2} \quad \text{ as }\quad r\rightarrow 0 \\ L_k[p]\ \sim&\ \frac{1}{r}\big (\frac{1}{r} p'\big )' - \frac{k^2p}{r^2} \quad \text{ as }\quad r\rightarrow +\infty . \end{aligned} \end{aligned}$$

For \(k\ge 1\), thanks to the maximum principle, this implies the existence of a positive function \(z_k(r)\) with \(L_k[z_k]=0\) and

$$\begin{aligned} \begin{aligned} z_k(r) \sim r^k \quad \text{ as }\quad r\rightarrow 0 \\ z_k(r)\sim r^{\frac{1}{2} } e^{kr} \quad \text{ as }\quad r\rightarrow +\infty . \end{aligned} \end{aligned}$$
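The growth rate at infinity can be read off, for instance, by inserting the ansatz \(p(r) = r^a e^{kr}\) into the limiting equation \(p'' - \frac{1}{r} p' - k^2 p = 0\) (the second line above multiplied by \(r^2\)):

$$\begin{aligned} p'' - \frac{p'}{r} - k^2 p = \Big ( (2a-1)\, k\, r^{a-1} + a(a-2)\, r^{a-2} \Big ) e^{kr}, \end{aligned}$$

and the leading term vanishes exactly when \(a=\frac{1}{2}\), consistently with \(z_k(r)\sim r^{\frac{1}{2}} e^{kr}\).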

Let us consider the equation (for a barrier) for \(k=1\),

$$\begin{aligned} L_1[\bar{\psi }] + \frac{1}{1+ r^\nu } = 0 \end{aligned}$$

A solution of this equation is given by

$$\begin{aligned} \bar{\psi }(r) = z_1(r) \int _r^\infty \frac{ds}{ s z_1(s)^2} \int _0^s \frac{1}{1+ \rho ^\nu } z_1(\rho ) \rho \, d\rho . \end{aligned}$$

and we have

$$\begin{aligned} \begin{aligned} \bar{\psi }(r) =O( r^2 ) \quad \text{ as }\quad r\rightarrow 0 \\ \bar{\psi }(r) = O( r^{\nu -2}) \quad \text{ as }\quad r\rightarrow +\infty . \end{aligned} \end{aligned}$$

We also observe that for \(|k|\ge 2\) this function works as a barrier. In fact, we have

$$\begin{aligned} |g_k(r)| \le \frac{\Vert g\Vert _{\nu }}{1+ r^\nu } \end{aligned}$$

hence, for \(|k|\ge 2\), equation (7.4) admits \(\Vert g\Vert _{\nu }\, \bar{\psi }\) as a positive barrier. Using the maximum principle, the function

$$\begin{aligned} \psi _k(r) = z_k(r) \int _r^\infty \frac{ds}{ s z_k(s)^2} \int _0^s g_k(\rho ) z_k(\rho ) \rho \, d\rho , \end{aligned}$$

which is the unique decaying solution of (7.4), satisfies the estimate

$$\begin{aligned} |\psi _k(r)| \le \frac{4}{k^2} {\Vert g\Vert _{\nu }} \bar{\psi }(r) . \end{aligned}$$

Finally, let us consider the case \(k=0\). In that case, the following explicit formula yields a solution

$$\begin{aligned} \psi _0(r) = -\int _0^r \frac{1 + s^2}{ s} \, ds \int _0^s g_0(\rho ) \rho \, d\rho . \end{aligned}$$
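Indeed, the mode zero equation in (7.4) can be integrated directly:

$$\begin{aligned} \frac{1}{r} \Big ( \frac{r}{1+r^2} \psi _0' \Big )' = - g_0(r) \quad \Longrightarrow \quad \psi _0'(r) = - \frac{1+r^2}{r} \int _0^r g_0(\rho ) \rho \, d\rho , \end{aligned}$$

and one further integration over \((0,r)\) gives the formula above.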

In this case we get the bound

$$\begin{aligned} |\psi _0(r)| \le C\Vert g\Vert _{\nu } (1+ r^2) . \end{aligned}$$

The function

$$\begin{aligned} \psi (x) := \sum _{j=-\infty }^{\infty } \psi _j (r) e^{ji\theta }, \end{aligned}$$

with the \(\psi _k\) being the functions built above, clearly defines a linear operator of g and satisfies estimate (7.2). The proof is concluded. \(\square \)

8 Solving the inner-outer gluing system

In this section we will recast System (5.11)–(5.13) (for an appropriate value of the parameter \(\alpha \) in (5.7)) as a fixed point problem that involves the linear operators defined in the previous sections.

We let \(X^o\) be the Banach space of all functions \(\psi \in C^{2,\alpha }(\mathbb R^2)\) such that

$$\begin{aligned} \Vert \psi \Vert _\infty < +\infty , \end{aligned}$$

and formulate the outer equation (5.13) as the fixed point problem in \(X^o\),

$$\begin{aligned} \psi = {\mathcal T} ^o [ G(\psi , \phi ,\alpha ) ], \quad \psi \in X^o \end{aligned}$$

where \({\mathcal T}^o\) is defined in Proposition 7.1, while G is the operator given by (5.14). We decompose the inner problem (5.11) as follows: consider

$$\begin{aligned} \Delta _y \phi + f'(\Gamma _0) \phi + B[\phi ] + H(\phi , \psi ,\alpha ) = 0 {\quad \hbox {in } }B_R \end{aligned}$$

where \(R={2\delta \over \varepsilon }\) and

$$\begin{aligned} H(\phi , \psi ,\alpha ) = {\mathcal N} (\psi + \eta _\delta \phi ) + \tilde{E}_\alpha + (f'(\Gamma _0) + b_0 ) \psi \end{aligned}$$

Let \(X_*\) be the Banach space of functions \(\phi \in C^{2, \alpha } (B_R)\) such that

$$\begin{aligned} \Vert \phi \Vert _{*,m-2,B_R} <\infty \end{aligned}$$

(see (6.10)). We introduce constants \(c_i\) and decompose \(\phi = \phi _1 + \phi _2\), where \(\phi _1\) satisfies

$$\begin{aligned} \Delta _y \phi _1 + f'(\Gamma _0) \phi _1 + B[\phi _1] + B[\phi _2]+ H(\phi _1+\phi _2, \psi ,\alpha ) -\sum _{i=0}^2 c_i e^{\Gamma _0} Z_i = 0 {\quad \hbox {in } }B_R \end{aligned}$$
(8.1)

and we formulate this problem using the operator T in Proposition 6.1, with \(c_i = c_i [ H(\phi _1+\phi _2, \psi ,\alpha ) +B(\phi _2) ]\), \(i=0,1,2\), and \(\phi _1 \in X_*\), namely

$$\begin{aligned} \phi _1 = T( H(\phi _1+\phi _2, \psi ,\alpha ) + B[\phi _2] ) \end{aligned}$$

and we require that \(\phi _2\) solves

$$\begin{aligned} \Delta _y \phi _2 + f'(\Gamma _0) \phi _2 + c_0 e^{\Gamma _0} Z_0 = 0 {\quad \hbox {in } }\mathbb R^2 . \end{aligned}$$
(8.2)
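Note that, by linearity of B, adding (8.1) and (8.2) shows that \(\phi = \phi _1 + \phi _2\) satisfies

$$\begin{aligned} \Delta _y \phi + f'(\Gamma _0) \phi + B[\phi ] + H(\phi , \psi ,\alpha ) = \sum _{i=1}^2 c_i e^{\Gamma _0} Z_i {\quad \hbox {in } }B_R , \end{aligned}$$

so that \(\phi \) solves the inner problem above as soon as \(c_1 = c_2 = 0\); these are precisely the conditions imposed below.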

We solve Problem (8.2) by using the operator \(\mathcal {T}\) in Lemma 6.1. We write

$$\begin{aligned} \phi _2 = {\mathcal T} [ c_0[ H(\phi _1+\phi _2, \psi ,\alpha ) +B(\phi _2) ]e^{\Gamma _0} Z_0 ]. \end{aligned}$$

Having in mind the a priori bounds (6.3), (6.4) in Lemma 6.1, it is natural to require that \(\phi _2\) belongs to the class of functions \(\phi \in C^{2,\alpha }(\mathbb R^2)\) such that, for some constant C,

$$\begin{aligned} (1+|y|^{2+\alpha }) [D^2_y \phi ]_{B_1(y),\alpha }&+(1+|y|^2) |D^2_y \phi (y)| +(1+|y|) | {\nabla } \phi (y)| + | \phi (y)| \\&\le C \log (1+ |y|) . \end{aligned}$$

We call \(\Vert \phi \Vert _{**,m-2,\alpha }\) the infimum of the constants C that satisfy the above inequality, and denote by \(X_{**}\) the Banach space of functions \(\phi \in C^{2,\alpha }\) with \(\Vert \phi \Vert _{**, m-2 , \alpha } <\infty \).

We couple equations (5.13), (8.1) and (8.2) with the relations

$$\begin{aligned} c_1[ H(\phi _1+\phi _2, \psi ,\alpha ) +B(\phi _2) ]=0 , \quad c_2[ H(\phi _1+\phi _2, \psi ,\alpha ) +B(\phi _2) ]=0. \end{aligned}$$

The second equation \(c_2= 0\) is automatically satisfied thanks to the fact that all the functions involved are even in the variable \(y_2\) (or equivalently in the variable \(z_2\), and in \(x_2\)). The first equation \(c_1=0\) becomes an equation in the parameter \(\alpha _1\). Recall that in (5.7) the parameter \(\alpha \) is chosen of the form

$$\begin{aligned} \alpha \ = \ \frac{4c_1}{x_0} + {\alpha _1} = {2 \over 1+ x_0^2} + \alpha _1 , \qquad |\alpha _1| \le \frac{1}{|\log \varepsilon |^{\frac{1}{2}} } . \end{aligned}$$

Equation \(c_1=0\) becomes

$$\begin{aligned} \alpha _1 \varepsilon |\log \varepsilon | \beta= & {} F(\psi , \phi _1 , \phi _2 , \alpha _1) , \quad {\text{ where }}\nonumber \\ F(\psi , \phi _1 , \phi _2 , \alpha _1)= & {} \gamma _1\int _{B_{R}} \left[ {\mathcal N} (\psi + \eta _\delta \phi ) + (f'(\Gamma _0) + b_0 ) \psi +B(\phi _2) \right] Z_1 \nonumber \\&+ O( R^{1-m}) \Vert {\mathcal N} (\psi + \eta _\delta \phi ) + \tilde{E}_\alpha + (f'(\Gamma _0) + b_0 ) \psi +B(\phi _2) \Vert _{m,\alpha , B_R},\nonumber \\ \end{aligned}$$
(8.3)

and \(\beta \) is given by

$$\begin{aligned} \beta = -8 \gamma _1 \left( \int _{B_R} \frac{ y_1 }{(1+|y|^2)^2} Z_1 \ + \ O \big ( {1\over |\log \varepsilon |^{1\over 2}} \big ) \right) . \end{aligned}$$

The final step to conclude the proof of our result is to find \(\psi \), \(\phi _1\), \(\phi _2\) and \(\alpha _1\) solution of the fixed point problem

$$\begin{aligned} (\psi , \phi _1 , \phi _2 , \alpha _1) = {\mathcal A} (\psi , \phi _1 , \phi _2 , \alpha _1) \end{aligned}$$
(8.4)

given by

$$\begin{aligned} \begin{aligned} \psi&= {\mathcal T} ^o [ G(\psi , \phi _1+ \phi _2 ,\alpha ) ], \quad \psi \in X^o\\ \phi _1&= T[ H(\phi _1+\phi _2, \psi ,\alpha ) +B(\phi _2)]\\ \phi _2&= {\mathcal T} [ c_0[ H(\phi _1+\phi _2, \psi ,\alpha ) +B(\phi _2) ]e^{\Gamma _0} Z_0 ]\\ \alpha _1&= {1\over \varepsilon |\log \varepsilon | \beta } \, F(\psi , \phi _1 , \phi _2 , \alpha _1). \end{aligned} \end{aligned}$$
(8.5)

Let \(b\in (0,1)\) be the number introduced in (5.15), let \(m>2\), and define

$$\begin{aligned} \begin{aligned} B_M&=\{ (\psi , \phi _1 , \phi _2 , \alpha _1 ) \in X^o\times X_*\times X_{**} \times \mathbb R\, : \, \\&\Vert \psi \Vert _\infty \le M \varepsilon ^{1+b} , \, \Vert \phi _1\Vert _{*, m-2, B_R} \le M \varepsilon |\log \varepsilon |^{1\over 2} , \\&\, \Vert \phi _2\Vert _{**, m-2, \alpha } \le M \varepsilon |\log \varepsilon |^{1\over 2} , \, |\alpha _1 | \le {M \over |\log \varepsilon |^{1\over 2}} \}, \end{aligned} \end{aligned}$$
(8.6)

for some positive constant M independent of \(\varepsilon \). We shall solve (8.4)–(8.5) in \(B_M\).

We first show that \({\mathcal A} (B_M) \subset B_M\). Assume that \((\psi , \phi _1 , \phi _2 , \alpha _1 ) \in B_M\); we want to show that \({\mathcal A} (\psi , \phi _1 , \phi _2 , \alpha _1 ) \in B_M\). From (5.14) and (5.15), we get that

$$\begin{aligned} |G(\psi , \phi _1 + \phi _2 , \alpha _1 )|&\le {C \over 1+ |x|^\nu } \varepsilon ^{1+b} (1+ |\phi _1+\phi _2|^2 \eta _\delta + |\psi |^2 ) \\&+ {C \over 1+ |x|^\nu } \varepsilon ^{2-m} (\Vert \phi _1\Vert _{*, m-2, B_R} + \Vert \phi _2 \Vert _{**, m-2, \alpha } ) \end{aligned}$$

for some \(\nu >2\). From Proposition 7.1, we get that

$$\begin{aligned} \begin{aligned} \Vert \psi \Vert _\infty&= \Vert {\mathcal T}^o ( G(\psi , \phi _1 + \phi _2 , \alpha _1 ) ) \Vert _\infty \le C \varepsilon ^{1+b}. \end{aligned} \end{aligned}$$
(8.7)

From (5.10), (5.8), (5.9), we get, for some \(a\in (0,1)\),

$$\begin{aligned} |H(\phi _1 + \phi _2 , \psi , \alpha _1) |&\le C \left( {\alpha _1 \varepsilon |\log \varepsilon | \over (1+ |y|^2)^2} + {\varepsilon \over (1+|y| )^{3-a}} \right) \\&\quad + {C \over 1+ |y|^{3-a}} \left( {1\over (1+ |y|)^{1+a} } + {\alpha _1 \varepsilon |\log \varepsilon | \over (1+ |y|)^a} + \varepsilon \right) |\psi | \\&\quad + {C \over (1+ |y|)^4} \left( |\psi |^2 + |\eta _\delta \phi _1|^2 + |\eta _\delta \phi _2|^2 \right) . \end{aligned}$$

Using the assumptions on \(\psi \), \(\phi _1\) and \(\phi _2\), we get that

$$\begin{aligned} \Vert H(\phi _1 + \phi _2 , \psi , \alpha _1) \Vert _{m, \alpha , B_R} \le C \varepsilon |\log \varepsilon |^{1\over 2} . \end{aligned}$$
(8.8)

From (5.12), we get

$$\begin{aligned} | B[\phi _2] |&\le C {\varepsilon ^2 |\log \varepsilon | \over 1+ |y|} \Vert \phi _2 \Vert _{**, m-2, \alpha }, \end{aligned}$$

and

$$\begin{aligned} \Vert B(\phi _2) \Vert _{m, \alpha , B_R} \le C \varepsilon ^{3-m} |\log \varepsilon | \Vert \phi _2 \Vert _{**, m-2, \alpha } \le ( C\varepsilon ^{3-m} M |\log \varepsilon | ) \, \varepsilon |\log \varepsilon |^{1\over 2} . \end{aligned}$$
(8.9)

Observe also that \(c_0[ H(\phi _1+\phi _2, \psi ,\alpha ) +B(\phi _2) ]\) satisfies

$$\begin{aligned} |c_0[ H(\phi _1+\phi _2, \psi ,\alpha ) +B(\phi _2) ]|\le C \varepsilon |\log \varepsilon |^{1\over 2} (1+ \varepsilon ^{3-m} |\log \varepsilon | \Vert \phi _2 \Vert _{**, m-2, \alpha } ). \end{aligned}$$
(8.10)

From Proposition 6.1 and (8.8)–(8.9) we conclude that

$$\begin{aligned} \Vert \phi _1 \Vert _{*, m-2, B_R} \le C \varepsilon |\log \varepsilon |^{1\over 2}, \end{aligned}$$
(8.11)

while from Lemma 6.1 and (8.10) we get

$$\begin{aligned} \Vert \phi _2 \Vert _{**, m-2, \alpha } \le C \varepsilon |\log \varepsilon |^{1\over 2}. \end{aligned}$$
(8.12)

Moreover, using (8.3), we get that

$$\begin{aligned} |\alpha _1 | \le {C \over |\log \varepsilon |^{1\over 2}} . \end{aligned}$$
(8.13)

Combining (8.7), (8.11), (8.12) and (8.13), we conclude that \({\mathcal A} (\psi , \phi _1 , \phi _2 , \alpha _1 ) \in B_M\) if we choose M large enough (but independently of \(\varepsilon \)) in the definition of the set \(B_M\) in (8.6).

We next show that \({\mathcal A}\) is a contraction map in \(B_M\). Let \(\varphi ^i = \eta _\delta (\phi _1^i + \phi _2^i) +\psi ^i \) and \(\alpha _1^i\), for \(i=1,2\), be such that \((\psi ^i, \phi _1^i, \phi _2^i ,\alpha _1^i ) \in B_M\). Let \(G(\varphi ^i, \alpha _1^i ) = G(\psi ^i, \phi _1^i + \phi _2^i,\alpha _1^i)\) and observe that

$$\begin{aligned} \begin{aligned} \left| G(\varphi ^1, \alpha _1^1 ) - G(\varphi ^2, \alpha _1^2 ) \right|&\le |V_{\alpha _1} (x) (\psi ^1 - \psi ^2)| \\&\quad + |V_{\alpha _1} (x) - V_{\alpha _2} (x) | |\psi ^1| + (1-\eta _\delta ) |N_{\Psi _{\alpha _1} } (\varphi ^1 ) - N_{\Psi _{\alpha _2}} (\varphi ^1) |\\&\quad + (1-\eta _\delta ) | N_{\Psi _{\alpha _2}} (\varphi ^1 ) - N_{\Psi _{\alpha _2}} (\varphi ^2 ) |\\&\quad +|A[\phi _1^1-\phi _1^2]| + |A[\phi _2^1 - \phi _2^2]| \end{aligned} \end{aligned}$$
(8.14)

where \(V_{\alpha } (x) =(1-\eta _\delta ) \varepsilon ^2 f'( \Psi _\alpha - \mu |x|^2 )\), \(\mu = {\alpha \over 2} |\log \varepsilon |\), and the other terms are defined in (5.14). A direct computation gives that

$$\begin{aligned} |V_{\alpha _1} (x) - V_{\alpha _2} (x) | |\psi ^1|&+ (1-\eta _\delta ) |N_{\Psi _{\alpha _1} } (\varphi ^1 ) - N_{\Psi _{\alpha _2}} (\varphi ^1) | \\&\le \varepsilon ^{1+b} e^{-|x|^2} |\alpha _1 - \alpha _2| \left( | \psi ^1 | + |\varphi ^1| \right) , \end{aligned}$$
$$\begin{aligned} |V_{\alpha _1} (x) (\psi ^1 - \psi ^2)|&+ (1-\eta _\delta ) | N_{\Psi _{\alpha _2}} (\varphi ^1 ) - N_{\Psi _{\alpha _2}} (\varphi ^2 ) | \\&\le \varepsilon ^{1+b } e^{-|x|^2} \left( |\psi ^1 - \psi ^2| + \eta _\delta ^2 (|\phi _1^1 - \phi _1^2|^2 + |\phi _2^1 - \phi _2^2|^2 ) \right) \end{aligned}$$

and

$$\begin{aligned} |A[\phi _1^1-\phi _1^2]|&\le C \left( |L (\eta _\delta )|\, |\phi _1^1 - \phi _1^2| + |K_{ij} {\partial } _i \eta _\delta {\partial } _j (\phi _1^1 - \phi _1^2) | \right) \\&\le { C \varepsilon \over 1+ |x|^\nu } \Vert \phi _1^1 - \phi _1^2 \Vert _{*, m-2, B_R} . \end{aligned}$$

In order to estimate \(A[\phi _2^1 - \phi _2^2]\), we observe that

$$\begin{aligned} \Delta _y&[\phi _2^1 - \phi _2^2] + f'(\Gamma _0) [\phi _2^1 - \phi _2^2] + c_0^{12} e^{\Gamma _0} Z_0 = 0 {\quad \hbox {in } }\mathbb R^2 \\&{\text{ where }} \quad c_0^{12}= c_0 [H(\phi _1+\phi _2^1, \psi ,\alpha ) +B(\phi _2^1) ] - c_0 [H(\phi _1+\phi _2^2, \psi ,\alpha ) +B(\phi _2^2 )] . \end{aligned}$$

By definition,

$$\begin{aligned} c_0^{12}= \int _{B_R} \left[ B[\phi _2^1 - \phi _2^2] + \mathcal N (\psi + \eta _\delta (\phi _1+ \phi _2^1) ) - \mathcal N (\psi + \eta _\delta (\phi _1+ \phi _2^2) ) \right] Z_0. \end{aligned}$$

Using (5.12) and (5.10), we get

$$\begin{aligned} |c_0^{12} |&\le C ( \int _{B_R} {\varepsilon \over 1+|y|^2} \, dy ) \Vert \phi _2^1 - \phi _2^2\Vert _{**, \alpha } + C \Vert \phi _2^1 - \phi _2^2\Vert _{**, \alpha }^2 \end{aligned}$$

Thus we can conclude that

$$\begin{aligned} |A[\phi _2^1-\phi _2^2]|&\le C \left( |L (\eta _\delta )|\, |\phi _2^1 - \phi _2^2| + |K_{ij} {\partial } _i \eta _\delta {\partial } _j (\phi _2^1 - \phi _2^2) | \right) \\&\le { C \varepsilon ^\sigma \over 1+ |x|^\nu } \Vert \phi _2^1 - \phi _2^2 \Vert _{**, \alpha } . \end{aligned}$$

for some \(\sigma >0\). Combining all these estimates in (8.14), we obtain that

$$\begin{aligned} \left| G(\varphi ^1, \alpha _1^1 ) - G(\varphi ^2, \alpha _1^2 ) \right|&\le {C\varepsilon ^\sigma \over 1+|x|^\nu } \Bigl ( \Vert \psi ^1-\psi ^2\Vert _\infty + \Vert \phi _1^1 - \phi _1^2 \Vert _{*,m-2, B_R} \\&+ \Vert \phi _2^1 - \phi _2^2 \Vert _{**,\alpha } + |\alpha _1^1 - \alpha _1^2 |\Bigr ), \end{aligned}$$

which, in view of Proposition 7.1, gives

$$\begin{aligned} \Vert {\mathcal T} ^o [G(\varphi ^1, \alpha _1^1 )] - {\mathcal T} ^o [G(\varphi ^2, \alpha _1^2 )] \Vert _\infty&\le C\varepsilon ^\sigma \Bigl ( \Vert \psi ^1-\psi ^2\Vert _\infty + \Vert \phi _1^1 - \phi _1^2 \Vert _{*,m-2, B_R} \\&+ \Vert \phi _2^1 - \phi _2^2 \Vert _{**,\alpha } + |\alpha _1^1 - \alpha _1^2 |\Bigr ), \end{aligned}$$

for some \(\sigma >0\). Carrying out a similar analysis for each one of the operators in (8.5), we get that \({\mathcal A}\) is a contraction mapping in \(B_M\).

As a consequence, Problem (8.4)–(8.5) has a fixed point, which concludes the proof of Theorem 1. \(\square \)

9 Proof of Theorem 2

The proof of Theorem 2 is similar to that of Theorem 1, exploiting the rotational symmetry of equation (4.1), which can be written in polar coordinates as

$$\begin{aligned} {1\over h^2 + r^2} \left( {h^2 \over r^2} +1 \right) {\partial } _\theta ^2 \Psi + {h^2 \over r} {\partial } _r \left( {r \over h^2 + r^2} {\partial } _r\Psi \right) =\varepsilon ^2 e^{(\Psi -{\alpha \over 2} |\log \varepsilon | |x|^2 )} \quad {\text{ in }} \quad \mathbb R^2. \end{aligned}$$
(9.1)

It is clear that equation (9.1) is invariant under the rotation \(\theta \rightarrow \theta +\frac{2\pi }{k}\) and under the reflection \(\theta \rightarrow -\theta \).

To construct solutions with multiple concentration points arranged with polygonal symmetry, we work in the following space

$$\begin{aligned} \mathcal {X}_k = \{ \Psi \in L^\infty _{loc} (\mathbb R^2) \ | \ \Psi ( z e^{\sqrt{-1} \frac{2\pi }{k}})=\Psi (z), \Psi (\bar{z})=\Psi (z), \ z =(z_1, z_2) \in \mathbb R^2 \} \end{aligned}$$
(9.2)

where \( k\ge 2\) is an integer.
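Equivalently, in polar coordinates \(z = re^{i\theta }\), a function \(\Psi \in \mathcal {X}_k\) satisfies \(\Psi (r,\theta +\frac{2\pi }{k}) = \Psi (r,\theta )\) and \(\Psi (r,-\theta )=\Psi (r,\theta )\), so that, formally,

$$\begin{aligned} \Psi (r,\theta ) = \sum _{j=0}^{\infty } a_j(r) \cos (jk\theta ) . \end{aligned}$$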

As in the proof of Theorem 1 we modify the approximate solution as follows: let \( \bar{H}_{1\varepsilon } (z)=\Gamma _\varepsilon (z) \left( 1+ c_1 z_1 + c_2 |z|^2 \right) +{4 x_0^3 \over 1+ x_0^2} H_{1\varepsilon } (z)\), where \(\Gamma _\varepsilon (z), c_1, c_2, x_0, H_{1\varepsilon } (z)\) are defined in Sect. 3. Then the new approximate solution is

$$\begin{aligned} \begin{aligned} \Psi _{ \alpha } (x)&= {\alpha \over 2} |\log \varepsilon | x_0^2 - \log (1+ x_0^2) \\&\quad + \sum _{j=1}^k \eta _\delta (x- x_0 e^{\sqrt{-1} \frac{2(j-1)\pi }{k}} ) \bar{H}_{1\varepsilon } (z e^{\sqrt{-1} \frac{2(j-1)\pi }{k}} ) + H_{2\varepsilon } (x) \in \mathcal {X}_k . \end{aligned} \end{aligned}$$

Using the symmetry class (9.2), the rest of the proof is exactly the same as that of Theorem 1. We omit the details.