1 Introduction and statement of the main results

Let us consider the Gel’fand problem,

$$\begin{aligned} \left\{ \!\! \begin{array}{ll} -\Delta u = \lambda e^{u}&{} \quad \text{ in } \Omega \\ u=0 &{} \quad \text{ on } \partial \Omega , \end{array} \right. \end{aligned}$$
(1.1)

where \(\Omega \subset \mathbb {R}^2\) is a bounded domain with smooth boundary \(\partial \Omega \) and \(\lambda >0\) is a real parameter. This problem appears in a wide variety of areas of mathematics, such as the conformal embedding of a flat domain into a sphere [1], self-dual gauge field theories [13], equilibrium states of a large number of vortices [3, 4, 14, 15, 19, 20], stationary states of chemotaxis motion [21], and so forth. See [12] for more about our motivation and [22] for further background material.

Let \(\{\lambda _n\}_{n\in \mathbb {N}}\) be a sequence of positive values such that \(\lambda _n\rightarrow 0\) as \(n\rightarrow \infty \) and let \(u_n=u_n(x)\) be a sequence of solutions of (1.1) for \(\lambda = \lambda _n\). In [17], the authors studied solutions \(\{u_{n}\}\) which blow up at \(m\) points (see the next section for more details). This means that there is a set \(\mathcal{S}=\{ \kappa _1, \ldots , \kappa _m\}\subset \Omega \) of \(m\) distinct points such that

  1. (i)

    \( \Vert u_n\Vert _{L^{\infty }(\omega )}=O(1) \quad \hbox {for any } \omega \Subset \overline{ \Omega }\setminus \mathcal {S},\)

  2. (ii)

    \( u_n{|_{\mathcal {S}}}\rightarrow +\infty \quad \hbox { as }n\rightarrow \infty .\)

In [2, 8, 18], and [7], some sufficient conditions ensuring the existence of this type of solution are given. Throughout the paper, we will consider solutions \(u_n\) to (1.1) with \(m\) blow-up points and investigate the eigenvalue problem

$$\begin{aligned} \left\{ \!\! \begin{array}{ll} -\Delta v_n^k= \mu _n^k\lambda _ne^{u_n}v_n^k &{} \quad \text{ in } \Omega \\ \Vert v_n^k\Vert _{\infty }=\max \nolimits _{\overline{\Omega }}v_n^k=1 &{}\\ v_n^k=0 &{}\quad \text{ on } \partial \Omega \end{array} \right. \end{aligned}$$
(1.2)

which admits a sequence of eigenvalues \(\mu _n^1<\mu _n^2\le \mu _n^3\le \cdots \), where \(v_n^k\) is the \(k\)th eigenfunction of (1.2) corresponding to the eigenvalue \(\mu _n^k\). We also assume the orthogonality in Dirichlet norm,

$$\begin{aligned} \int _\Omega \nabla v_n^k\cdot \nabla v_n^{k'}=0\quad \text {if}\;\, k\ne k'. \end{aligned}$$

In order to state our results, we need to introduce some notation and recall some well-known facts.

Let \(R>0\) be such that \(B_{2R}(\kappa _i)\subset \subset \Omega \) for \(i=1,\ldots ,m\) and \(B_R(\kappa _i)\cap B_R(\kappa _j)=\emptyset \) if \(i\ne j\). For each \(\kappa _j\in \mathcal {S}\) there exists a sequence \(\{x_{j,n}\}\subset B_R(\kappa _j)\) such that

$$\begin{aligned} u_n(x_{j,n})=\sup _{B_R(x_{j,n})}u_n(x)\rightarrow +\infty \quad \text { and }\quad x_{j,n}\rightarrow \kappa _j\quad \text { as }\;n\rightarrow +\infty . \end{aligned}$$

For any \(j=1,\ldots ,m\), we rescale \(u_n\) around \(x_{j,n}\), letting

$$\begin{aligned} \tilde{u}_{j,n}(\tilde{x}):= u_n\left( \delta _{j,n} \tilde{x}+x_{j,n}\right) - u_n(x_{j,n})\quad \hbox { in }B_{\frac{R}{\delta _{j,n}}}(0), \end{aligned}$$
(1.3)

where the scaling parameter \(\delta _{j,n}\) is determined by

$$\begin{aligned} \lambda _ne^{u_n(x_{j,n})}\delta _{j,n}^2=1. \end{aligned}$$
(1.4)

It is known that \(\delta _{j,n}\longrightarrow 0\) and for any \(j=1,\ldots ,m\)

$$\begin{aligned} \tilde{u}_{j,n}(\tilde{x})\rightarrow U(\tilde{x})=\log \frac{1}{\left( 1+\frac{|\tilde{x}|^2}{8}\right) ^2} \quad \hbox { in }\quad C^{2,\alpha }_{loc}(\mathbb {R}^2). \end{aligned}$$
(1.5)
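As a quick sanity check of (1.5), the following sympy sketch (an addition of ours, not part of the argument; all symbol names are ours) verifies that \(U\) solves the Liouville equation \(-\Delta U=e^U\) in \(\mathbb {R}^2\) and that \(\int _{\mathbb {R}^2}e^U\, \mathrm{d}\tilde{x}=8\pi \).

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r = sp.symbols('r', positive=True)

# the limit profile U of (1.5)
U = sp.log(1 / (1 + (x**2 + y**2) / 8)**2)

# U solves the Liouville equation -Delta U = e^U on R^2
residual = -(sp.diff(U, x, 2) + sp.diff(U, y, 2)) - sp.exp(U)
print(sp.simplify(residual))                                    # 0

# total mass of the bubble, computed in radial coordinates: \int_{R^2} e^U = 8*pi
mass = sp.integrate(2 * sp.pi * r / (1 + r**2 / 8)**2, (r, 0, sp.oo))
print(mass)                                                     # 8*pi
```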

As we did for \(u_n\), we also rescale the eigenfunctions \(v_n^k\) around \(x_{j,n}\) for any \(j=1,\ldots ,m\). So we define

$$\begin{aligned} \tilde{v}_{j,n}^k(\tilde{x}):=v_n^k\left( \delta _{j,n} \tilde{x}+x_{j,n}\right) \quad \hbox { in }B_{\frac{R}{\delta _{j,n}}}(0), \end{aligned}$$
(1.6)

where \(\delta _{j,n}\) is as in (1.4). The rescaled eigenfunctions \(\tilde{v}_{j,n}^k(\tilde{x})\) satisfy

$$\begin{aligned} \left\{ \!\! \begin{array}{ll} -\Delta \tilde{v}_{j,n}^k=\mu _n^k e^{\tilde{u}_{j,n}}\tilde{v}_{j,n}^k \quad \hbox { in }B_{\frac{R}{\delta _{j,n}}}(0)\\ \Vert \tilde{v}_{j,n}^k \Vert _{L^{\infty }\big (B_{\frac{R}{\delta _{j,n}}}(0)\big )}\le 1. \end{array} \right. \end{aligned}$$
(1.7)

One of the main results of this paper concerns pointwise estimates of the eigenfunctions. In particular, we are interested in the number of peaks of \(v_n^k\) for \(k=1,\ldots ,m\). Let us recall that, by Corollary 2.9 in [12] (see also [9]), we have that

$$\begin{aligned} v_n^k\rightarrow 0\quad \text {in}\;\; C^{1}\left( \overline{{\Omega }}\backslash \cup _{j=1}^mB_{R}\left( \kappa _j\right) \right) . \end{aligned}$$

This means that \(v_n^k\) can concentrate only at \(\kappa _j\), \(j=1,\ldots ,m\). This leads to the following definition,

Definition 1

We say that an eigenfunction \(v_n^k\) concentrates at \(\kappa _j\in \Omega \) if there exist a constant \(C>0\) and a sequence \(\kappa _{j,n}\rightarrow \kappa _j\) such that

$$\begin{aligned} \left| v_n^k(\kappa _{j,n})\right| \ge C>0\quad \hbox {for } n \hbox { large}. \end{aligned}$$
(1.8)

A problem that arises naturally is the following,

Question 1

Let us suppose that \(u_n\) blows up at the points \(\left\{ {\kappa _1,\ldots ,\kappa _m}\right\} \). Is the same true for the eigenfunctions \(v_n^k\), \(k=1,\ldots ,m\)?

A first partial answer related to this question was given in [12], where the following result was proved.

Theorem 1.1

For each \(k\in \{1,\ldots ,m\}\) we have that \(\mu _n^k\rightarrow 0\) and there exists a vector

$$\begin{aligned} \mathfrak {c}^k=(c_1^k,\ldots ,c_m^k)\in [-1,1]^m\subset \mathbb {R}^m,\quad \mathfrak {c}^k\ne \mathbf{0} \end{aligned}$$
(1.9)

such that for each \(j\in \{1,\ldots ,m\}\), there exists a sub-sequence satisfying

$$\begin{aligned}&\tilde{v}_{j,n}^k(x)\rightarrow c_j^k \quad \text { in }C^{2,\alpha }_{loc}\left( \mathbb {R}^2\right) \end{aligned}$$
(1.10)
$$\begin{aligned}&\mathfrak {c}^k\cdot \mathfrak {c}^h=0 \quad \quad \quad \hbox {if }h\ne k \end{aligned}$$
(1.11)

and

$$\begin{aligned} \frac{v_n^k}{\mu _n^k}\rightarrow 8\pi \sum _{j=1}^m c_j^k \, G(\cdot ,\kappa _j)\quad \text { in }C^{2,\alpha }_{loc}\left( \overline{\Omega } \setminus \{\kappa _1,\ldots ,\kappa _m\}\right) . \end{aligned}$$
(1.12)

Here \(G(x,y)\) denotes the Green function of \(-\Delta \) in \(\Omega \) with Dirichlet boundary condition, i.e.,

$$\begin{aligned} G(x,y)=\frac{1}{2\pi }\log {|x-y|^{-1}}+K(x,y), \end{aligned}$$
(1.13)

\(K(x,y)\) is the regular part of \(G(x,y)\) and \(R(x)=K(x,x)\) the Robin function. A consequence of Theorem 1.1 and Proposition 2.11 of [12] is that

$$\begin{aligned} v_n^k\hbox { concentrates at }\kappa _j\hbox { if and only if }c_j^k\ne 0. \end{aligned}$$
(1.14)
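Before proceeding, and purely for orientation, the following sympy sketch spells out the decomposition (1.13) in the one case where \(G\) is explicit, namely the unit disk. This is an illustrative addition of ours: the domain \(\Omega \) of the paper is a general smooth bounded domain, and nothing below is used in the proofs.

```python
import sympy as sp

x1, x2, y1, y2, t = sp.symbols('x1 x2 y1 y2 t', real=True)
ry2 = y1**2 + y2**2                     # |y|^2
a1, a2 = y1 / ry2, y2 / ry2             # reflected point y/|y|^2

# Green function of -Delta on the unit disk with Dirichlet boundary condition
K = (sp.log(ry2) + sp.log((x1 - a1)**2 + (x2 - a2)**2)) / (4 * sp.pi)   # regular part
G = -sp.log((x1 - y1)**2 + (x2 - y2)**2) / (4 * sp.pi) + K              # (1/2pi) log|x-y|^{-1} + K

# K is harmonic in x, as the regular part in (1.13) must be
print(sp.simplify(sp.diff(K, x1, 2) + sp.diff(K, x2, 2)))               # 0

# G vanishes on |x| = 1: the two squared distances inside the logs coincide there
num = ry2 * ((x1 - a1)**2 + (x2 - a2)**2)
den = (x1 - y1)**2 + (x2 - y2)**2
print(sp.simplify((num - den).subs({x1: sp.cos(t), x2: sp.sin(t)})))    # 0

# Robin function R(x) = K(x,x); for |x| < 1 this equals (1/2pi) log(1 - |x|^2)
# (sympy may print an equivalent form)
print(sp.simplify(K.subs({y1: x1, y2: x2})))
```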

In this paper, we characterize the values \(c_j^k\) in terms of the Green function, and this will allow us to determine whether or not \(c_j^k\) is equal to \(0\).

Theorem 1.2

For each \(k\in \{1,\ldots ,m\}\), we have that

  1. (i)

    The vector \(\mathfrak {c}^k=(c_1^k,\ldots ,c_m^k)\in [-1,1]^m\setminus \{\mathbf{0}\}\) is a \(k\)th eigenvector of the matrix

    $$\begin{aligned} h_{ij}=\left\{ \begin{array}{ll} R(\kappa _i)+2\sum _{\begin{array}{c} 1\le h\le m\\ h\ne i \end{array}} G(\kappa _h,\kappa _i) &{}\quad \text {if }i=j,\\ -G(\kappa _i,\kappa _j)&{}\quad \text {if }i\ne j, \end{array}\right. \end{aligned}$$
    (1.15)
  2. (ii)

    A sub-sequence of \(\{v_n^k\}\) satisfies

    $$\begin{aligned}&\tilde{v}_{j,n}^k(\tilde{x})=v_n^k(x_{j,n})+\mu _n^k c_j^k U(\tilde{x})+o\left( \mu _n^k\right) \quad \text { in }\quad C^{2,\alpha }_{loc}\left( \mathbb {R}^2\right) \end{aligned}$$
    (1.16)

for each \(j\in \{1,\ldots ,m\}\), where \(U(\tilde{x})\) is as defined in (1.5).

Let us observe that (1.16) is a second-order estimate for \(v_n^k\). We stress that this is new even for the case of one-peak solutions (\(k=1\)). From Theorem 1.2, we can deduce an answer to Question 1,

Corollary 1.3

Let \(\mathfrak {c}^k=(c^k_1,\ldots ,c^k_m)\) be the \(k\)th eigenvector of the matrix \((h_{ij})\) associated with a simple eigenvalue. Then, if \(c_j^k\ne 0\), \(v_n^k\) concentrates at \(\kappa _j\).
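To make Theorem 1.2(i) and Corollary 1.3 concrete, here is a small numerical sketch (an addition of ours) that builds the matrix \((h_{ij})\) of (1.15) for \(m=3\) from hypothetical values of \(R(\kappa _i)\) and \(G(\kappa _i,\kappa _j)\) (invented for illustration, since for a general \(\Omega \) these quantities are not explicit) and extracts the eigenpairs \((\Lambda ^k,\mathfrak {c}^k)\).

```python
import numpy as np

# hypothetical data for m = 3 blow-up points (illustration only):
# R[i] plays the role of R(kappa_i), Gm[i, j] of G(kappa_i, kappa_j)
R = np.array([0.30, 0.25, 0.28])
Gm = np.array([[0.00, 0.12, 0.07],
               [0.12, 0.00, 0.09],
               [0.07, 0.09, 0.00]])        # symmetric, positive off the diagonal
m = len(R)

# the matrix (h_ij) of (1.15)
h = -Gm.copy()
for i in range(m):
    h[i, i] = R[i] + 2 * Gm[i].sum()

Lam, C = np.linalg.eigh(h)                 # Lam[k-1] = Lambda^k, C[:, k-1] = c^k (unit norm)
print(Lam)                                 # ordered eigenvalues Lambda^1 <= Lambda^2 <= Lambda^3
print(C)                                   # columns: limiting vectors, up to normalization
print(np.round(C.T @ C, 12))               # identity: the c^k are mutually orthogonal, cf. (1.11)
```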

Our next aim is to understand better when \(c_j^k\ne 0\). The following result gives some information in this direction.

Theorem 1.4

Let \(k\in \{1,\ldots ,m\}\) and \(v_n^k\) be the corresponding eigenfunction.

Then we have that,

  1. (i)

    \(v_n^1\) concentrates at all \(m\) points \(\kappa _1,\ldots ,\kappa _m\),

  2. (ii)

    every \(v_n^k\) concentrates at at least two points \(\kappa _i\), \(\kappa _j\) with \(i,j\in \{1,\ldots ,m\}\), \(i\ne j\).

However, there are other interesting questions. One is the following:

Question 2

Let us suppose that \(\mu _n^k\) is a multiple eigenvalue of (1.2). What can be said about its multiplicity?

We will give an answer to this question in the case where \(\Omega \) is an annulus.

Let us fix an integer \(m>2\). In [18], an \(m\)-mode solution \(u_n\) to (1.1) was constructed, i.e., a solution which is invariant under a rotation of angle \(\frac{2\pi }{m}\) in \(\mathbb {R}^2\),

$$\begin{aligned} u(r,\theta )=u\left( r,\theta + \frac{2\pi }{m}\right) . \end{aligned}$$
(1.17)

Reasoning as in [8], one can construct, in an annulus, an \(m\)-mode solution verifying (2.1) with the symmetry property (1.17).

Theorem 1.5

Let \(\Omega \) be an annulus and let \(u_n\) be the \(m\)-mode solutions of (1.1) that verify (1.17). Let \(V^k_n\) be the eigenspace associated with \(\mu _n^k\) and let \(\dim \left( V^k_n\right) \) denote its dimension. Then,

  • If \(m\) is odd, then \(\dim \left( V^k_n\right) \ge 2\) for any \(k\ge 2\).

  • If \(m\) is even and \(\mu _n^h\) is simple for some \(h\ge 2\), then the limiting eigenvector \(\mathfrak {c}^h\) verifies \(\mathfrak {c}^h=(-1,1,-1,1,\ldots ,-1,1)\). All the other eigenvalues satisfy \(\dim \left( V^k_n\right) \ge 2\) for any \(k\ge 2,\ k\ne h\).

The previous results rely on the next theorem, which is a refinement, up to second order, of some estimates proved in [12]. In our opinion, this result is interesting in itself.

Theorem 1.6

For each \(k\in \{1,\ldots ,m\}\), it holds that

$$\begin{aligned} \mu _n^k=-\frac{1}{2} \frac{1}{\log \lambda _n}+\left( 2\pi \Lambda ^k-\frac{3\log 2-1}{2}\right) \frac{1}{\left( \log \lambda _n\right) ^2}+o\left( \frac{1}{\left( \log \lambda _n\right) ^2}\right) \end{aligned}$$
(1.18)

as \(n\rightarrow +\infty \), where \(\Lambda ^k\) is the \(k\)th eigenvalue of the \(m\times m\) matrix \((h_{ij})\) defined in (1.15) assuming \(\Lambda ^1\le \cdots \le \Lambda ^m\).

So the effect of the domain \(\Omega \) on the eigenvalues \(\mu _n^k\) appears in the second-order term of the expansion of \(\mu _n^k\).

The paper is organized as follows: in Sect. 2, we give some definitions and we recall some known facts. In Sect. 3, we prove Theorem 1.6 and some results on the vector \(\mathfrak {c}^k\) introduced in Theorem 1.2. In Sect. 4, we complete the proof of Theorem 1.2, and in Sect. 5, we prove Theorems 1.4 and 1.5.

2 Preliminaries and known facts

Let us recall some results about the asymptotic behavior of \(u_n=u_n(x)\) as \(n\rightarrow +\infty \). In [17], the authors proved that, along a sub-sequence,

$$\begin{aligned} \lambda _n\int _{\Omega }e^{u_n}\, \mathrm{d}x\rightarrow 8\pi m \end{aligned}$$
(2.1)

for some \(m=0,1,2, \ldots , +\infty \). Moreover

  • If \(m=0\), the pair \((\lambda _{n},u_{n})\) converges to \((0,0)\) as \(\lambda _n\rightarrow 0\).

  • If \(m=+\infty \), entire blow-up of the solutions \(\{u_n\}\) occurs, i.e., \(\inf _K u_n\rightarrow +\infty \) for any \(K\Subset \Omega \).

  • If \(0<m<\infty \), the solutions \(\{ u_{n}\}\) blow up at \(m\) points. Thus there is a set \(\mathcal{S}=\{ \kappa _1, \ldots , \kappa _m\}\subset \Omega \) of \(m\) distinct points such that \(\Vert u_n\Vert _{L^{\infty }(\omega )}=O(1)\) for any \(\omega \Subset \overline{ \Omega }\setminus \mathcal {S}\),

    $$\begin{aligned} u_n{|_{\mathcal {S}}}\rightarrow +\infty \quad \hbox { as } \quad n\rightarrow \infty , \end{aligned}$$

    and

    $$\begin{aligned} u_n\rightarrow \sum _{j=1}^m 8\pi \,G(\cdot , \kappa _j)\quad \hbox { in }\quad C^{2}_\mathrm{loc}(\overline{ \Omega } \setminus \mathcal {S}). \end{aligned}$$
    (2.2)

In [17], it is also proved that the blow-up points \(\mathcal{S}=\{ \kappa _1, \ldots , \kappa _m\}\) satisfy

$$\begin{aligned} \nabla H^m (\kappa _1,\ldots ,\kappa _m)=0, \end{aligned}$$
(2.3)

where

$$\begin{aligned} H^m(x_1,\ldots , x_m)=\frac{1}{2} \sum _{j=1}^m R(x_j)+\frac{1}{2} \sum _{\begin{array}{c} 1\le j,h\le m\\ j\ne h \end{array}}G(x_j,x_h). \end{aligned}$$

Here \(H^m\) is the Hamiltonian function of the theory of vortices with equal intensities, see [3, 4, 14, 15, 20] and references therein.

As we did in the introduction, let \(R>0\) be such that \(B_{2R}(\kappa _i)\subset \subset \Omega \) for \(i=1,\ldots ,m\) and \(B_R(\kappa _i)\cap B_R(\kappa _j)=\emptyset \) if \(i\ne j\), and let \(x_{j,n}\), \(\tilde{u}_{j,n}\), and \(\delta _{j,n}\) be as in (1.3) and (1.4). In [11], Corollary 4.3, it is shown that there exists a constant \(d_j>0\) such that

$$\begin{aligned} \delta _{j,n}=d_j \lambda _n^{\frac{1}{2}}+o\left( \lambda _n^{\frac{1}{2}}\right) \end{aligned}$$
(2.4)

as \(n\rightarrow \infty \) along a sub-sequence; in particular, \(\delta _{j,n}\rightarrow 0\). In [11], the exact value of \(d_j\) was not computed, but for our purposes it is crucial to know it; we compute it in (3.10). From (1.4) and (2.4), we have

$$\begin{aligned} u_n(x_{j,n})=-2 \log \lambda _n-2\log d_j +o(1) \end{aligned}$$
(2.5)

as \(n\rightarrow \infty \) for any \(j=1,\ldots ,m\).
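The passage from (1.4) and (2.4) to (2.5) is elementary; here is a one-line symbolic check (an addition of ours, with symbols named after the quantities they stand for).

```python
import sympy as sp

lam, d = sp.symbols('lambda_n d_j', positive=True)
u = sp.symbols('u', real=True)              # stands for u_n(x_{j,n})

delta = d * sp.sqrt(lam)                    # (2.4), at leading order
u_val = sp.solve(sp.Eq(lam * sp.exp(u) * delta**2, 1), u)[0]   # solve (1.4) for u
print(sp.expand_log(u_val, force=True))     # -2*log(d_j) - 2*log(lambda_n), i.e. (2.5)
```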

The function \(\tilde{u}_{j,n}\) defined in the Introduction satisfies

$$\begin{aligned} \left\{ \!\! \begin{array}{ll} -\Delta \tilde{u}_{j,n}=e^{\tilde{u}_{j,n}}&{} \quad \hbox { in }B_{\frac{R}{\delta _{j,n}}}(0)\\ \tilde{u}_{j,n}\le \tilde{u}_{j,n}(0)=0 &{}\quad \hbox { in }B_{\frac{R}{\delta _{j,n}}}(0). \end{array} \right. \end{aligned}$$

Using the result of [5], it is easy to see that, for any \(j=1,\ldots ,m\)

$$\begin{aligned} \tilde{u}_{j,n}(\tilde{x})\rightarrow U(\tilde{x})=\log \frac{1}{\left( 1+\frac{|\tilde{x}|^2}{8}\right) ^2} \quad \hbox { in }\quad C^{2,\alpha }_{loc}(\mathbb {R}^2). \end{aligned}$$
(2.6)

Moreover, it holds

$$\begin{aligned} \big | \tilde{u}_{j,n}(\tilde{x})- U(\tilde{x})\big |\le C\quad \forall \tilde{x}\in B_{\frac{R}{\delta _{j,n}}}(0) \end{aligned}$$
(2.7)

for any \(j=1,\ldots ,m\) for a suitable positive constant \(C\), see [16].

Let us consider the eigenfunction \(v_n^k\) defined in (1.2) and recall the following result:

Theorem 2.1

([12]) For \(\lambda _n\rightarrow 0\), it holds that

$$\begin{aligned} \mu _n^k=-\frac{1}{2} \frac{1}{\log \lambda _n}+o\left( \frac{1}{\log \lambda _n}\right) \quad \text {for}\; 1\le k\le m, \end{aligned}$$
(2.8)
$$\begin{aligned} \mu _n^{k}=1-48\pi \eta ^{2m-(k-m)+1}\lambda _n+o\left( \lambda _n\right) \quad \text {for}\; m+1\le k\le 3m, \end{aligned}$$
(2.9)

and

$$\begin{aligned} \mu _n^k>1\quad \text {for}\quad k\ge 3m+1, \end{aligned}$$
(2.10)

where \(\eta ^k\) \((k=1,\ldots ,2m)\) is the \(k\)th eigenvalue of the matrix \(D(\mathrm {Hess}H^m)D\) at \((\kappa _1,\ldots ,\kappa _m)\). Here \(D=(D_{ij})\) is the diagonal matrix \(\mathrm {diag}[d_1,d_1,d_2,d_2,\ldots ,d_m,d_m]\) (see (2.4) for the definition of the constants \(d_j\) and (3.10) for their precise values).

One of the purposes of this paper is to refine (2.8) (see Theorem 1.6 in the introduction).

3 Fine behavior of eigenvalues

We start from the following proposition, which plays a crucial role in our argument.

Proposition 3.1

For any \(k=1,\ldots ,m\) we have

$$\begin{aligned}&\left\{ \frac{1}{\mu _n^k}-u_n(x_{j,n})\right\} \lambda _n\int _{B_R(x_{j,n})}\!\!\!\!\!\!\!\!\!e^{u_n}v_n^k \, dx \nonumber \\&\qquad =(8\pi )^2 \sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}}(c_i^k-c_j^k)G(\kappa _j,\kappa _i)-16\pi c_j^k +o(1). \end{aligned}$$
(3.1)

Proof

From (1.1) and (1.2), we have

$$\begin{aligned} \int _{\partial B_R(x_{j,n})} \Big \{ \Big .\frac{\partial u_n}{\partial \nu } \frac{v_n^k}{\mu _n^k}-u_n\frac{\partial }{\partial \nu }\left( \frac{v_n^k}{\mu _n^k}\right) \Big .\Big \}\, \mathrm{d}\sigma&=\int _{ B_R(x_{j,n})} \left\{ \Delta u_n\frac{v_n^k}{\mu _n^k}-u_n\Delta \frac{v_n^k}{\mu _n^k}\right\} \, \mathrm{d}x\nonumber \\&=-\frac{1}{\mu _n^k}\lambda _n\int _{ B_R(x_{j,n})}e^{u_n}v_n^k \, \mathrm{d}x\nonumber \\&\qquad +\lambda _n\int _{ B_R(x_{j,n})}e^{u_n}v_n^k u_n\, \mathrm{d}x\nonumber \\&=-\frac{1}{\mu _n^k}\lambda _n\int _{ B_R(x_{j,n})}e^{u_n}v_n^k \, \mathrm{d}x\nonumber \\&\qquad +u_n(x_{j,n})\lambda _n\int _{ B_R(x_{j,n})}e^{u_n}v_n^k \, \mathrm{d}x\nonumber \\&\qquad +\int _{B_{\frac{R}{\delta _{j,n}}}(0)} e^{\tilde{u}_{j,n}}\tilde{v}^k_{j,n}\tilde{u}_{j,n}\, \mathrm{d}\widetilde{x} \end{aligned}$$
(3.2)

and

$$\begin{aligned} \int _{B_{\frac{R}{\delta _{j,n}}}(0)} e^{\tilde{u}_{j,n}}\tilde{v}^k_{j,n}\tilde{u}_{j,n}\, \mathrm{d}x\rightarrow \int _{\mathbb {R}^2}e^Uc_j^k U\, \mathrm{d}x= -16 \pi c_j^k. \end{aligned}$$
(3.3)
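The value \(-16\pi c_j^k\) in (3.3) comes from the radial integral \(\int _{\mathbb {R}^2}e^UU\, \mathrm{d}\tilde{x}=-16\pi \). Here is a short symbolic check (an addition of ours), with the change of variables carried out in the comments:

```python
import sympy as sp

s = sp.symbols('s', positive=True)

# With U as in (1.5), e^{U(r)} = (1 + r^2/8)^{-2} and U(r) = -2*log(1 + r^2/8).
# In polar coordinates, substituting s = 1 + r^2/8 (so r dr = 4 ds),
#   \int_{R^2} e^U U dx = 2*pi \int_0^oo e^U U r dr = -16*pi \int_1^oo log(s)/s^2 ds.
J = sp.integrate(sp.log(s) / s**2, (s, 1, sp.oo))
print(J)                    # 1
print(-16 * sp.pi * J)      # -16*pi, the value used in (3.3)
```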

On the other hand, from (2.2) and (1.12), we have

$$\begin{aligned}&\int _{\partial B_R(x_{j,n})} \Big \{\frac{\partial u_n}{\partial \nu } \frac{v_n^k}{\mu _n^k}-u_n\frac{\partial }{\partial \nu }\left( \frac{v_n^k}{\mu _n^k}\right) \Big \}\, \mathrm{d}\sigma \nonumber \\&\rightarrow (8\pi )^2 \sum _{i=1}^m \sum _{h=1}^m c_h^k\int _{\partial B_R(\kappa _{j})} \!\!\!\left\{ \frac{\partial }{\partial \nu }G(x,\kappa _i)G(x,\kappa _h)-G(x,\kappa _i)\frac{\partial }{\partial \nu } G(x,\kappa _h)\right\} \, \mathrm{d}\sigma . \end{aligned}$$
(3.4)

We let

$$\begin{aligned} I_{i,h}=\int _{\partial B_R(\kappa _{j})} \left\{ \frac{\partial }{\partial \nu }G(x,\kappa _i)G(x,\kappa _h)-G(x,\kappa _i)\frac{\partial }{\partial \nu } G(x,\kappa _h)\right\} \, \mathrm{d}\sigma . \end{aligned}$$

Then we have

Case 1 \(i=h\). In this case,

$$\begin{aligned} I_{i,h}=0. \end{aligned}$$

Case 2 \(i\ne h\). In this case, we have

$$\begin{aligned} I_{i,h}&=\int _{B_R(\kappa _{j})} \Big \{\Delta G(x,\kappa _i) G(x,\kappa _h)- G(x,\kappa _i) \Delta G(x,\kappa _h)\Big \}\, \mathrm{d}x \\&=-G(\kappa _j,\kappa _h)\delta _i ^j+G(\kappa _j,\kappa _i)\delta _j^h, \end{aligned}$$

where \(\delta _a^b=1\) if \(a=b\) and \(\delta _a^b=0\) otherwise.

Therefore, from (3.4) we have

$$\begin{aligned}&\int _{\partial B_R(x_{j,n})} \left\{ \frac{\partial u_n}{\partial \nu } \frac{v_n^k}{\mu _n^k}-u_n\frac{\partial }{\partial \nu }\left( \frac{v_n^k}{\mu _n^k}\right) \right\} \, \mathrm{d}\sigma \nonumber \\&\qquad =(8\pi )^2\sum _{i=1}^m \sum _{\begin{array}{c} 1\le h\le m\\ h\ne i \end{array}} c_h^k\left\{ -G(\kappa _j,\kappa _h)\delta _i^j+G(\kappa _j,\kappa _i) \delta _j^h\right\} +o(1)\nonumber \\&\qquad =(8\pi )^2\Big \{-\sum _{\begin{array}{c} 1\le h\le m\\ h\ne j \end{array}} c_h^kG(\kappa _j,\kappa _h)+\sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}} c_j^kG(\kappa _j,\kappa _i)\Big \}+o(1)\nonumber \\&\qquad =-(8\pi )^2 \sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}} \left( c_i^k-c_j^k\right) G(\kappa _j,\kappa _i)+o(1). \end{aligned}$$
(3.5)

The proof follows from (3.2), (3.3), and (3.5). \(\square \)

Next we compute the precise value of \(d_j\) in (2.4). For this purpose, we need to strengthen (2.5).

Proposition 3.2

(cf. Estimate D in [6]) Let \(u_n\) be a solution of (1.1) corresponding to \(\lambda _n\), and let \(x_{j,n}\) and \(R\) be as in Sect. 1. Then, for any \(j=1,\ldots ,m\) we have

$$\begin{aligned} u_n(x_{j,n})=-\frac{\sigma _{j,n}}{\sigma _{j,n}-4\pi } \log \lambda _n-8\pi \Big \{ R(x_{j,n})+\sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}} G(x_{j,n},x_{i,n})\Big \}+6\log 2 +o(1),\nonumber \\ \end{aligned}$$
(3.6)

where

$$\begin{aligned} \sigma _{j,n}=\lambda _n\int _{B_R(x_{j,n})}e^{u_n}\, dx \rightarrow 8\pi . \end{aligned}$$
(3.7)

Proof

Using the Green representation formula, from (1.1), we have

$$\begin{aligned} u_n(x_{j,n})=&\int _{\Omega }G(x_{j,n},y) \lambda _ne^{u_n(y)}\, dy\\ =&\frac{1}{2\pi } \int _{B_R(x_{j,n})}\log |x_{j,n}-y|^{-1}\lambda _ne^{u_n(y)}\, \mathrm{d}y\\&+\int _{B_R(x_{j,n})}K(x_{j,n},y)\lambda _ne^{u_n(y)}\, \mathrm{d}y\\&+\sum _{\begin{array}{c} 1\le i\le m \\ i\ne j \end{array}}\int _{B_R(x_{i,n})}G(x_{j,n},y) \lambda _ne^{u_n(y)}\, \mathrm{d}y\\&+\int _{\Omega \setminus \bigcup _{i=1}^mB_R(x_{i,n})} G(x_{j,n},y) \lambda _ne^{u_n(y)}\, \mathrm{d}y\\ =&-\frac{\sigma _{j,n}}{2\pi }\log \delta _{j,n}+\frac{1}{2\pi }\int _{B_{\frac{R}{\delta _{j,n}}}(0)} \log |\tilde{y}|^{-1}e^{\tilde{u}_{j,n}(\tilde{y})}\, \mathrm{d}\tilde{y}\\&+ 8\pi \left\{ R(x_{j,n})+\sum _{\begin{array}{c} 1\le i\le m\\ i\ne \ j \end{array}}G(x_{j,n},x_{i,n})\right\} +o(1). \end{aligned}$$

Using the estimate (2.7), by dominated convergence we get

$$\begin{aligned} \frac{1}{2\pi }\int _{B_{\frac{R}{\delta _{j,n}}}(0)}\log |\tilde{y}|^{-1}e^{\tilde{u}_{j,n}(\tilde{y})}\, \mathrm{d}\tilde{y}\rightarrow \frac{1}{2\pi }\int _{\mathbb {R}^2}\log |\tilde{y}|^{-1}e^{U(\tilde{y})}\, \mathrm{d}\tilde{y}=-6\log 2. \end{aligned}$$
(3.8)

Then the conclusion follows by (1.4) and (3.7). \(\square \)
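The constant \(-6\log 2\) in (3.8) can also be checked symbolically. The following sketch (an addition of ours) reduces the integral to one-dimensional ones after the substitution \(s=|\tilde{y}|^2/8\).

```python
import sympy as sp

s = sp.symbols('s', positive=True)

# In polar coordinates, with s = r^2/8 (so r dr = 4 ds and log r = log(8 s)/2),
#   (1/2pi) \int_{R^2} log|y|^{-1} e^{U(y)} dy = -2 \int_0^oo log(8 s)/(1 + s)^2 ds
#                                             = -2*log(8) - 2 \int_0^oo log(s)/(1 + s)^2 ds.
A = sp.integrate(1 / (1 + s)**2, (s, 0, sp.oo))            # 1
B = sp.integrate(sp.log(s) / (1 + s)**2, (s, 0, sp.oo))    # 0
print(-2 * (sp.log(8) * A + B))                            # -2*log(8) = -6*log(2), i.e. (3.8)
```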

Here we recall a fine behavior of the local mass \(\sigma _{j,n}\) defined in (3.7).

Proposition 3.3

For any \(j\in \{1,\ldots ,m\}\), we have

$$\begin{aligned} \sigma _{j,n}=8\pi +o\!\left( \lambda _n\right) . \end{aligned}$$
(3.9)

Proof

See (3.56) of [6]. \(\square \)

Using Proposition 3.2 and Proposition 3.3, we get the precise value of \(d_j\) given in (2.4).

Proposition 3.4

For any \(j=1,\ldots ,m\), it holds that

$$\begin{aligned} d_j=\frac{1}{8}\exp \left\{ 4\pi R(\kappa _j)+4\pi \sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}}G(\kappa _j,\kappa _i)\right\} . \end{aligned}$$
(3.10)

Proof

From (3.6), we get

$$\begin{aligned} u_n(x_{j,n})=&-2\log \lambda _n+\frac{\sigma _{j,n}-8\pi }{\sigma _{j,n}-4\pi }\log \lambda _n\nonumber \\&\quad -8\pi \left\{ R(\kappa _{j})+\sum _{\begin{array}{c} 1\le i\le m\\ i\ne \ j \end{array}}G(\kappa _{j},\kappa _{i})\right\} +6\log 2+o(1). \end{aligned}$$
(3.11)

From (3.9), it follows that \(\frac{\sigma _{j,n}-8\pi }{\sigma _{j,n}-4\pi }\log \lambda _n=o(1)\). Therefore, the claim follows from (2.5). \(\square \)
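The algebra behind (3.10) is the comparison of the constant terms of (2.5) and (3.11). Here is a symbolic check (an addition of ours), writing \(\Gamma _j\) for \(R(\kappa _j)+\sum _{i\ne j}G(\kappa _j,\kappa _i)\):

```python
import sympy as sp

d, Gamma = sp.symbols('d_j Gamma_j', positive=True)

# constant terms of (2.5) and (3.11):  -2*log(d_j) = -8*pi*Gamma_j + 6*log(2)
sol = sp.solve(sp.Eq(-2 * sp.log(d), -8 * sp.pi * Gamma + 6 * sp.log(2)), d)
print(sol)    # d_j = exp(4*pi*Gamma_j)/8, which is (3.10)
```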

As a consequence of (2.5) and Proposition 3.4, we get, using (3.1)

$$\begin{aligned}&\left\{ \frac{1}{\mu _n^k}+ 2\log \lambda _n\right\} \int _{B_R(x_{j,n})}\!\!\!\!\!\!\!\lambda _ne^{u_n}v_n^k \, \mathrm{d}x=(8\pi )^2 \sum _{\begin{array}{c} 1\le i\le m \\ i\ne j \end{array}}c_i^kG(\kappa _j,\kappa _i)\nonumber \\&\quad -(8\pi )^2c_j^k \Big \{R(\kappa _j)+2\sum _{\begin{array}{c} 1\le i\le m \\ i\ne j \end{array}}G(\kappa _j,\kappa _i)\Big \}+48 \pi c_j^k \log 2 -16 \pi c_j^k +o(1) \nonumber \\&=-(8\pi )^2\sum _{i=1}^m h_{ji}c_i^k+16\pi c_j^k(3\log 2-1)+o(1), \end{aligned}$$
(3.12)

(see the definition of the matrix \((h_{ij})\) in (1.15)).

Proposition 3.5

For any \(j,h\in \{1,\ldots ,m\}\), it holds that

$$\begin{aligned} c_h^k\sum _{i=1}^m h_{ji}c_i^k=c_j^k\sum _{i=1}^m h_{hi}c_i^k \end{aligned}$$
(3.13)

Proof

Multiplying (3.12) by \(\int _{B_R(x_{h,n})} \lambda _ne^{u_n}v_n^k \, \mathrm{d}x \), multiplying (3.12) with \(j\) replaced by \(h\) by \(\int _{B_R(x_{j,n})} \lambda _ne^{u_n}v_n^k \, \mathrm{d}x \), and then subtracting the latter from the former, we get the conclusion from (1.5) and (1.10). \(\square \)
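The cancellation behind Proposition 3.5 can be made explicit: in the limit, (3.12) reads \(\{\frac{1}{\mu _n^k}+2\log \lambda _n\}\,8\pi c_j^k=-(8\pi )^2S_j+16\pi c_j^k(3\log 2-1)+o(1)\) with \(S_j=\sum _ih_{ji}c_i^k\), and cross-multiplying the two instances removes both the singular factor and the constant term. A sketch of this algebra (an addition of ours; the symbols \(S_j\), \(S_h\), \(X\) are ours):

```python
import sympy as sp

cj, ch, Sj, Sh, X = sp.symbols('c_j c_h S_j S_h X', real=True)   # X = 1/mu + 2*log(lambda)

# limit form of (3.12) for the indices j and h
lhs_j, rhs_j = X * 8 * sp.pi * cj, -(8 * sp.pi)**2 * Sj + 16 * sp.pi * cj * (3 * sp.log(2) - 1)
lhs_h, rhs_h = X * 8 * sp.pi * ch, -(8 * sp.pi)**2 * Sh + 16 * sp.pi * ch * (3 * sp.log(2) - 1)

# multiply the j-equation by c_h, the h-equation by c_j, and subtract:
diff = sp.expand(ch * (lhs_j - rhs_j) - cj * (lhs_h - rhs_h))
print(sp.factor(diff))    # proportional to S_j*c_h - S_h*c_j; since this is o(1), (3.13) follows
```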

Proposition 3.6

The vector \(\mathfrak {c}^k\), defined in (1.9), is an eigenvector of \((h_{ij})\).

Proof

First we assume that there are \(c_j^k\ne 0\) and \(c_h^k\ne 0\) for \(j\ne h\). Then (3.13) gives

$$\begin{aligned} \frac{1}{c_j^k} \sum _{i=1}^m h_{ji}c_i^k=\frac{1}{c_h^k} \sum _{i=1}^m h_{hi}c_i^k=\Lambda ^k. \end{aligned}$$
(3.14)

In particular, if \(c_j^k\ne 0\) for all \(j=1,\ldots ,m\), then \(\Lambda ^k\) is an eigenvalue of \((h_{ij})\) with eigenvector \(\mathfrak {c}^k\).

On the other hand, for \(j\in \{1,\ldots ,m\}\) satisfying \(c_j^k=0\), we can choose \(h\) such that \(c_h^k\ne 0\) (see (1.9)), and (3.13) then gives

$$\begin{aligned} \sum _{i=1}^m h_{ji}c_i^k=0\quad \hbox { if }c_j^k=0. \end{aligned}$$
(3.15)

From (3.14) and (3.15), we get that \(\mathfrak {c}^k\) is an eigenvector of \((h_{ij})\) if there are at least two \(j\) satisfying \(c_j^k\ne 0\).

The last case is that there is only one \(j\) satisfying \(c_j^k\ne 0\), but this never happens. Indeed, in this case (3.13) becomes

$$\begin{aligned} \sum _{i=1}^mh_{hi}c_i^k=h_{hj}c_j^k=0 \quad (j\ne h) \end{aligned}$$

which contradicts \(h_{hj}=-G(\kappa _h,\kappa _j)\ne 0\). \(\square \)

Proof of Theorem 1.6

Take \(j\) such that \(c_j^k\ne 0\). Then Proposition 3.6 implies that \(\sum _{i=1}^m h_{ji}c_i^k=\Lambda ^k c_j^k\), and therefore (3.12) implies that

$$\begin{aligned} \frac{1}{\mu _n^k}=-2\log \lambda _n-8\pi \Lambda ^k+2(3\log 2-1)+o(1). \end{aligned}$$
(3.16)

Indeed, letting \(L=-8\pi \Lambda ^k+2(3\log 2-1)\), (3.16) yields

$$\begin{aligned} \mu _n^k=\frac{1}{-2\log \lambda _n+L+o(1)} =-\frac{1}{2\log \lambda _n}-\frac{L}{4}\cdot \frac{1}{(\log \lambda _n)^2}+o\left( \frac{1}{(\log \lambda _n)^2}\right) . \end{aligned}$$
(3.17)

Therefore (1.18) follows.

Formula (1.18) gives \(\Lambda ^1\le \cdots \le \Lambda ^m\), since \(\mu _n^1<\mu _n^2\le \cdots \le \mu _n^m\). Consequently, \(\Lambda ^k\) is indeed the \(k\)th eigenvalue of \((h_{ij})\). Since \(\Lambda ^k\) depends only on \((h_{ij})\), equation (1.18) holds without passing to a sub-sequence. \(\square \)
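The elementary expansion (3.17) can be double-checked symbolically (an addition of ours; here \(x\) stands for \(\log \lambda _n\)).

```python
import sympy as sp

x, L = sp.symbols('x L', real=True)          # x plays the role of log(lambda_n) -> -oo

mu = 1 / (-2 * x + L)                        # (3.16), dropping the o(1) term
two_terms = -1 / (2 * x) - L / (4 * x**2)    # the two-term expansion in (3.17)

# the remainder is o(1/x^2) as x -> -oo, i.e. o(1/(log lambda_n)^2)
print(sp.limit((mu - two_terms) * x**2, x, -sp.oo))    # 0
```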

4 Fine behavior of eigenfunctions

We start this section with the following

Proposition 4.1

For any \(k,j\in \{1,\ldots ,m\}\), we have

$$\begin{aligned} \frac{v_n^k(x_{j,n})}{\mu _n^k}&=\frac{1}{2\pi }\log \delta _{j,n}^{-1} \int _{B_R(x_{j,n})}\lambda _ne^{u_n(y)} v_n^k(y)\, \mathrm{d}y\nonumber \\&\qquad +8\pi \Big \{c_j^kR(\kappa _j)+\sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}}c_i^kG(\kappa _j,\kappa _i)\Big \} -6c_j^k\log 2 +o(1). \end{aligned}$$
(4.1)

Proof

Using the Green representation formula and (1.2), we have, as in the proof of Proposition 3.2,

$$\begin{aligned} \frac{v_n^k(x_{j,n})}{\mu _n^k}=&\int _{\Omega }G(x_{j,n},y) \lambda _ne^{u_n(y)}v_n^k(y) \, \mathrm{d}y\\ =&\frac{1}{2\pi } \log \delta _{j,n}^{-1} \int _{B_R(x_{j,n})}\lambda _ne^{u_n(y)}v_n^k(y) \, \mathrm{d}y\\&+\frac{1}{2\pi } \int _{B_{\frac{R}{\delta _{j,n}}}(0)} \log |\tilde{y}|^{-1} e^{\tilde{u}_{j,n}(\tilde{y})}\tilde{v}^k_{j,n}(\tilde{y}) \, \mathrm{d}\tilde{y}\\&+ \Big \{ 8\pi c_j^k R(\kappa _j)+8\pi \sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}}c_i^kG(\kappa _j,\kappa _i)\Big \}+o(1) \end{aligned}$$

and the claim follows. \(\square \)

Remark 4.2

From (4.1), (1.4), (3.6), and Proposition 3.3, we get

$$\begin{aligned} \frac{1}{\mu _n^k}\int _{B_R(x_{j,n})}\lambda _ne^{u_n}v_n^k(x_{j,n})\, \mathrm{d}x&=\frac{\sigma _{j,n}v_n^k(x_{j,n})}{\mu _n^k} \nonumber \\&=-2\log \lambda _n\int _{B_R(x_{j,n})} \lambda _ne^{u_n}v_n^k\, \mathrm{d}x -(8\pi )^2\sum _{i=1}^mh_{ji}c_i^k\nonumber \\&\qquad +48 \pi c_j^k\log 2+o(1). \end{aligned}$$
(4.2)

Proposition 4.3

For any \(k,j\in \{1,\ldots ,m\}\) we have

$$\begin{aligned} \lambda _n\int _{B_R(x_{j,n})} e^{u_n}\frac{v_n^k(x)-v_n^k(x_{j,n})}{\mu _n^k}\, \mathrm{d}x =-16 \pi c_j^k +o(1). \end{aligned}$$

Proof

Subtracting (4.2) from (3.12), we get the claim. \(\square \)

Proof of Theorem 1.2

Set

$$\begin{aligned} \tilde{z}_n:=\frac{\tilde{v}_{j,n}^k -v_n^k(x_{j,n})}{\mu _n^k}\quad \text { in } B_{\frac{R}{\delta _{j,n}}}(0). \end{aligned}$$

Then

$$\begin{aligned} -\Delta \tilde{z}_n=\mu _n^ke^{\tilde{u}_{j,n}}\tilde{z}_n+v_n^k(x_{j,n})e^{\tilde{u}_{j,n}}. \end{aligned}$$
(4.3)

The claim follows from elliptic estimates once we prove that

$$\begin{aligned} \tilde{z}_n=c_j^kU(\tilde{x})+o(1)\quad \hbox { locally uniformly in}\; \mathbb {R}^2. \end{aligned}$$
(4.4)

Using again the Green representation formula for (1.2), we have for \(x\in \omega \subset \subset B_R(x_{j,n})\)

$$\begin{aligned} \frac{v_n^k(x)}{\mu _n^k}=&\lambda _n\int _{\Omega }G(x,y) e^{u_n(y)} v_n^k(y) \, \mathrm{d}y\\ =&\int _{B_{\frac{R}{\delta _{j,n}}}(0)}\frac{1}{2\pi } \log \frac{1}{ | x-(\delta _{j,n}\tilde{y}+x_{j,n})|} e^{\tilde{u}_n} \tilde{v}_{j,n}^k \, \mathrm{d}\tilde{y}\\&+8\pi c_j^kK(x,\kappa _j)+8\pi \sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}} c_i^kG(x,\kappa _i)+o(1). \end{aligned}$$

Therefore, letting \(x=\delta _{j,n}\tilde{x}+x_{j,n}\), we have for every \(\tilde{x}\in \tilde{\omega }\subset \subset \mathbb {R}^2\) that

$$\begin{aligned} \frac{\tilde{v}_{j,n}^k(\tilde{x})}{\mu _n^k} =&\frac{1}{2\pi } \int _{B_{\frac{R}{\delta _{j,n}}}(0)}\log {\frac{1}{| \delta _{j,n}\tilde{x}+x_{j,n}-\delta _{j,n}\tilde{y}-x_{j,n}|}} e^{\tilde{u}_{j,n}} \tilde{v}_{j,n}^k \, \mathrm{d}\tilde{y}\\&+8\pi c_j^k K(\delta _{j,n}\tilde{x}+x_{j,n},\kappa _j) +8\pi \sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}} c_i^kG(\delta _{j,n}\tilde{x}+x_{j,n} ,\kappa _i)+o(1)\\ =&\frac{1}{2\pi } \log {\frac{1}{\delta _{j,n}}} \int _{B_{\frac{R}{\delta _{j,n}}}(0)} e^{\tilde{u}_{j,n}} \tilde{v}_{j,n}^k \, \mathrm{d}\tilde{y}+\frac{1}{2\pi } \int _{B_{\frac{R}{\delta _{j,n}}}(0)} \log \frac{1}{|\tilde{x}-\tilde{y}|}\,e^{\tilde{u}_{j,n}} \tilde{v}_{j,n}^k \, \mathrm{d}\tilde{y}\\&+8\pi c_j^k R(\kappa _j)+8\pi \sum _{\begin{array}{c} 1\le i\le m\\ i\ne j \end{array}} c_i^kG(\kappa _j,\kappa _i)+o(1)\quad \left( \hbox {using (4.1)}\right) \\ =&\frac{v_n^k(x_{j,n})}{\mu _n^k} +\frac{1}{2 \pi } \int _{B_{\frac{R}{\delta _{j,n}}}(0)}\log \frac{1}{|\tilde{x}-\tilde{y}|}e^{\tilde{u}_{j,n}} \tilde{v}_{j,n}^k \, \mathrm{d}\tilde{y}+6c_j^k\log 2 +o(1). \end{aligned}$$

Then recalling the definition of \(\tilde{z}_n\), we have

$$\begin{aligned} \tilde{z}_n= \frac{1}{2 \pi } \int _{B_{\frac{R}{\delta _{j,n}}}(0)}\log \frac{1}{|\tilde{x}-\tilde{y}|}e^{\tilde{u}_{j,n}} \tilde{v}_{j,n}^k \, \mathrm{d}\tilde{y}+6c_j^k\log 2 +o(1) \end{aligned}$$

so that

$$\begin{aligned} \tilde{z}_n= \frac{1}{2 \pi } c _{j}^k \int _{\mathbb {R}^2}\log \frac{1}{|\tilde{x}-\tilde{y}|}e^{U} \, \mathrm{d}\tilde{y}+6c_j^k\log 2 +o(1) \end{aligned}$$

locally uniformly with respect to \(\tilde{x}\), since, by (2.7), \(e^{\tilde{u}_{j,n}(\tilde{y})}=O(|\tilde{y}|^{-4})\) uniformly as \(|\tilde{y}|\rightarrow \infty \).

Observe that

$$\begin{aligned} \tilde{\Psi }(\tilde{x}):= \frac{1}{2 \pi } \int _{\mathbb {R}^2}\log \frac{1}{|\tilde{x}-\tilde{y}|}e^{U} \, \mathrm{d}\tilde{y}\end{aligned}$$

satisfies

$$\begin{aligned} -\Delta \tilde{\Psi }=e^U\quad \hbox { in }\quad \mathfrak {D}'(\mathbb {R}^2). \end{aligned}$$

Moreover, \(\tilde{\Psi }\) is radially symmetric. Since \(-\Delta U=e^U\) as well, and both \(\tilde{\Psi }\) and \(U\) behave like \(-4\log |\tilde{x}|\) at infinity, the difference \(\tilde{\Psi }-U\) is a bounded radial harmonic function, hence a constant; using \(U(0)=0\) we get \(\tilde{\Psi }-\tilde{\Psi }(0)=U\), where \(\tilde{\Psi }(0)=-6\log 2\) by (3.8). Therefore, \(\tilde{\Psi }=U-6\log 2\). This implies that \(\tilde{z}_n \rightarrow c_j^kU\) locally uniformly, which proves (4.4). Combining this with Proposition 3.6, the proof of Theorem 1.2 is complete. \(\square \)

5 Proof of Theorems 1.4 and 1.5

Proof of Theorem 1.4

The final part of the proof of Proposition 3.6 shows that, for any vector \(\mathfrak {c}^{k}\), at least two components of \(\mathfrak {c}^{k}\) are different from zero. This proves (ii). Now we prove (i).

We can assume that \(v_n^1>0\), and then \(c_j^1\ge 0\) for any \(j=1,\ldots ,m\). We want to prove that \(c_j^1>0\) for any \(j=1,\ldots ,m\); by contradiction, let us assume that \(c_1^1=0\) (the other cases are analogous). By (3.13) with \(j=1\), we deduce that \(c_h^1\sum _{i=2}^m h_{1i}c_i^1=0\) for every \(h\). Since \(\mathfrak {c}^1\ne \mathbf{0}\), there exists \(h\ge 2\) such that \(c_h^1>0\), and hence \(\sum _{i=2}^m h_{1i}c_i^1=0\). But \(h_{1i}<0\) for any \(i\ge 2\) and \(c_i^1\ge 0\) with \(c_h^1>0\), so this sum is strictly negative, a contradiction. \(\square \)

Proof of Theorem 1.5

Without loss of generality, we may assume that \(\Omega =\left\{ x\in \mathbb {R}^2 : 0<a<|x|<1\right\} \). The \(m\) blow-up points \(\kappa _1, \ldots , \kappa _m\) are located on a circle concentric with the annulus and are the vertices of a regular polygon with \(m\) sides. So we can assume that \(\kappa _1=(r_0,0)\), \(\kappa _2=r_0\left( \cos \frac{2\pi }{m}, \sin \frac{2\pi }{m}\right) ,\ldots , \kappa _m=r_0\left( \cos \frac{2(m-1)\pi }{m}, \sin \frac{2(m-1)\pi }{m}\right) \) for some \(r_0\in (a,1)\).

Observe that, since \(G(x,\kappa _1)\) is symmetric with respect to the \(x_1\)-axis (see Lemma 2.1 in [10]), we get \(G(\kappa _j,\kappa _1)= G(\kappa _{m-j+2},\kappa _1)\), \(j=2,\ldots ,m\). Similarly, the value \(G(\kappa _i,\kappa _j)\) depends only on the distance between \(\kappa _i\) and \(\kappa _j\). For example, \(G(x,\kappa _2)=G(L_{-\frac{2\pi }{m}}x,\kappa _1)\) and consequently \(G(\kappa _{i+1},\kappa _2)=G(\kappa _{i},\kappa _1)\), where \(L_\theta \) denotes the rotation about the origin by the angle \(\theta \). Similarly, \(G(\kappa _{i+k},\kappa _{1+k})=G(\kappa _{i},\kappa _1)\). Note also that, since \(\Omega \) is an annulus, the Robin function \(R(x)\) is radial, so that \(R(\kappa _1)=\cdots =R(\kappa _m)=R\).

Here we set \(G(\kappa _i,\kappa _1)=G_i\) and \(R_l=R+4\sum _{h=2}^l G_h\).

The matrix \((h_{ij})\) becomes

if \(m=2l\,(l=1,2,\ldots )\),

$$\begin{aligned} (h_{ij})= \begin{pmatrix} R_l+2G_{l+1} &{} -G_2 &{} -G_3 &{} \dots &{} -G_{l+1}&{} \dots &{}-G_2\\ -G_2 &{} R_l+2G_{l+1} &{}-G_2&{}\dots &{}\dots &{}\dots &{} -G_3\\ &{} \dots \\ -G_2 &{} -G_3 &{}\dots &{}\dots &{}\dots &{}\dots &{} R_l+2G_{l+1} \end{pmatrix}, \end{aligned}$$

and for \(m=2l+1\,(l=1,2,\ldots )\),

$$\begin{aligned} (h_{ij})= \begin{pmatrix} R_l &{} -G_2 &{} -G_3 &{} \dots &{} -G_{l}&{} -G_{l}&{} \dots &{}-G_2\\ -G_2 &{} R_l &{}-G_2&{}\dots &{}\dots &{}\dots &{}\dots &{} -G_3\\ &{} \dots \\ -G_2 &{} -G_3 &{}\dots &{}\dots &{}\dots &{}\dots &{}-G_2&{} R_l \end{pmatrix}, \end{aligned}$$

A straightforward computation shows that the first eigenvalue of \((h_{ij})\) is \(\Lambda ^1=R+2\sum _{h=2}^l G_h+G_{l+1}\) for \(m=2l\) and \(\Lambda ^1=R+2\sum _{h=2}^l G_h\) for \(m=2l+1\), and that it is simple. It is easy to see that the eigenspace corresponding to \(\Lambda ^1\) is spanned by \(\mathfrak {c}^1=(1,1,\ldots ,1)\).

Now consider separately the cases where \(m\) is odd and \(m\) is even.

Case 1 \(m\) is odd.

Let \(v_n^k\) be an eigenfunction related to the eigenvalue \(\mu _n^k\) with \(k\ge 2\), and rotate it by an angle of \(\frac{2\pi }{m}\). By the symmetry of the problem, the rotated function \(\bar{v}_n^k(r, \theta )=v_n^k\left( r, \theta +\frac{2\pi }{m}\right) \) is still an eigenfunction associated with the same eigenvalue \(\mu _n^k\). If, by contradiction, the eigenvalue \(\mu _n^k\) is simple, we have that

$$\begin{aligned} \bar{v}_n^k={\alpha _n} v_n^k, \end{aligned}$$
(5.1)

for some \(\alpha _n\ne 0\) with \(|\alpha _n|\le 1\).

Let \(\mathfrak {c}^k\) and \(\bar{\mathfrak {c}}^k\) be the limiting eigenvectors of the matrix (1.15) associated with \(v_{n}^k\) and \(\bar{v}_n^k\), respectively. Writing \(\mathfrak {c}^k=(c_1^k,\ldots ,c_m^k)\), we have, by the definition of \(\bar{v}_n^k\),

$$\begin{aligned} \bar{\mathfrak {c}}^k=\left( c_2^k,c_3^k\ldots ,c_m^k,c_1^k\right) . \end{aligned}$$
(5.2)

By (5.1) and (5.2), we derive that

$$\begin{aligned} \alpha c_i^k=c_{i+1}^k\quad \hbox {for }i=1,\ldots ,m,\hbox { where we set }c_{m+1}^k=c_1^k, \end{aligned}$$
(5.3)

where \(\alpha =\lim _{n\rightarrow \infty }\alpha _n\) (along a sub-sequence). From (5.3), we get that \(c_i^k=\alpha ^mc_i^k\). Since \(\mathfrak {c}^k\ne \mathbf{0}\), we get \(\alpha ^m=1\), and since \(m\) is odd we derive that \(\alpha =1\), so that \(\mathfrak {c}^k\) is a multiple of \((1,1,\ldots ,1)=\mathfrak {c}^1\). This contradicts (1.11), since \(k\ge 2\).

Case 2 \(m\) is even.

Let \(v_n^k\) be an eigenfunction related to the eigenvalue \(\mu _n^k\) with \(k\ge 2\) and define \(\bar{v}_n^k\) as in the previous case. Repeating the previous argument step by step, assuming that \(\mu _n^k\) is a simple eigenvalue, we again deduce that \(\alpha ^m=1\). Since \(m\) is even, besides \(\alpha =1\), which leads to a contradiction as before, we now also have the possibility \(\alpha =-1\), and in this case (5.3) gives \(\mathfrak {c}^k=(-1,1,-1,1,\ldots ,-1,1)\). \(\square \)

Remark 5.1

When \(m=2l\), the eigenvalue \(\Lambda ^k\) corresponding to \(\mathfrak {c}^k=(-1,1,-1,1,\ldots , -1,1)\) is given by

$$\begin{aligned} \Lambda ^k= R+\left( 2+(-1)^{\frac{m+2}{2}}\right) G_{\frac{m+2}{2}}+2\sum \limits _{h=2}^l\left( 2+(-1)^h\right) G_h. \end{aligned}$$

A direct computation proves that, for \(m=4\), the eigenvalue \(\Lambda ^k\) is simple if \(G_2\ne G_3\). For \(m>4\), similar conditions hold. However, these conditions are not easy to check, because the Green function of the annulus is not known explicitly. For this reason, we do not investigate this point further.
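To illustrate the Remark and the structure of \((h_{ij})\) used in the proof of Theorem 1.5, the following numerical sketch (an addition of ours) builds the \(m=4\) matrix from hypothetical values of \(R\), \(G_2\), \(G_3\) (invented for illustration, since the Green function of the annulus is not explicit) and checks the eigenvalues and the alternating eigenvector.

```python
import numpy as np

# hypothetical annulus data (illustration only); G2 != G3 is assumed
R, G2, G3 = 0.40, 0.15, 0.09

# m = 4: symmetric circulant matrix (h_ij) with first row (R+4G2+2G3, -G2, -G3, -G2)
row = [R + 4 * G2 + 2 * G3, -G2, -G3, -G2]
h = np.array([[row[(j - i) % 4] for j in range(4)] for i in range(4)])

lam, _ = np.linalg.eigh(h)
print(np.round(lam, 10))
# expected: R+2*G2+G3 (simple, eigenvector (1,1,1,1)),
#           R+4*G2+3*G3 (double),
#           R+6*G2+G3 (simple when G2 != G3, eigenvector (-1,1,-1,1))
print(np.round([R + 2*G2 + G3, R + 4*G2 + 3*G3, R + 6*G2 + G3], 10))

# the alternating vector of Theorem 1.5 / Remark 5.1 is indeed an eigenvector
c = np.array([-1.0, 1.0, -1.0, 1.0])
print(np.allclose(h @ c, (R + 6*G2 + G3) * c))    # True
```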