1 Introduction

In recent years, improvements in industrial techniques have made it possible to obtain more efficient materials constructed by assembling different constituents. Indeed, the mechanical, thermal or electrical properties of these composites are definitely superior to those of the single components. However, this bonding does not in general give rise to perfect contact between the different components, so that discontinuities in the involved physical fields can appear. All these discontinuities significantly change the properties of the composites and, hence, of the resulting macroscopic materials. Therefore, these problems call for a theoretical investigation. The classical approach to treating such discontinuities is to assume the presence of a thin layer between the different physical phases, which allows a smooth transition from one phase to the other. However, since the thickness of these thin layers is assumed to be very small, they can be replaced, by means of a concentration procedure, by so-called imperfect interfaces, across which some of the physical fields exhibit jump conditions reproducing the original discontinuities.

A great number of papers concerning problems with imperfect contact conditions can be found in the literature. On the applications side, we refer, for instance, to [18, 19, 31, 32, 35, 36, 39, 40]. On the other hand, in the rigorous mathematical setting, some pioneering papers are, among others, [16, 33, 37, 38].

The most common models dealing with imperfect contact involve a jump of the solution and continuity of the flux across the interface (see [2, 3, 6, 7, 10, 11, 15, 20, 25, 26, 27, 41, 42, 45]), or a jump of the flux and continuity of the solution (see [4, 28, 34]). In some of these models, the Laplace-Beltrami operator also appears, due for instance to the presence of highly conducting interfaces (as in [1, 4, 5, 12, 13, 33]). A unifying approach to such problems, involving jumps in both the solution and the flux simultaneously, has been proposed in [21, 22, 29, 43] (see also the references therein).

More recently, models also involving the mean value of the physical fields governing the different phases have been considered, for instance, in [8, 9, 17, 31, 39, 44]. All these models were originally proposed in the engineering context and then, in some cases, rigorously justified by means of different mathematical tools.

The model analyzed in this paper has its origins in the description of composites made of a hosting medium containing a periodic array of inclusions of size \(\varepsilon \). In order to make the composite more efficient, the inclusions are coated by a thin layer consisting of two sublayers of different materials (with thickness of order \(\varepsilon \eta \) and \(\varepsilon \delta \), respectively), arranged in such a way that one of them is encapsulated in the other. This two-phase coating is such that the external part has low diffusivity in the orthogonal direction, while the internal one has high diffusivity in the tangential direction. In this original material, we assume perfect transmission conditions between the different phases of the physical components. All the parameters \(\varepsilon ,\delta \) and \(\eta \) are assumed to be very small, but of different orders. In particular, the smallness of \(\eta \) and \(\delta \) with respect to \(\varepsilon \) leads us to perform, for fixed \(\varepsilon \), a two-step concentration procedure. The limit \(\delta \rightarrow 0\) is essentially the result contained in [14] (see, also, [12]) and, therefore, is not explicitly reproduced here. This first concentration replaces the internal layer with an interface, involving an imperfect contact condition, governed by a tangential Laplace equation for the heat potential having as a source the jump of the normal flux (see (2.12)).

The paper then starts from the concentration, with respect to \(\eta \), of the resulting model. In order to simplify the presentation, we set the concentration problem in a flat geometry, but our result holds also in a more general setting, as the one addressed in Sect. 3. The main feature of this concentration procedure is the appearance of new effects on the resulting interface between the hosting material and the inclusions, involving a new surface heat potential, similarly to [43], and the mean value of the two bulk potentials and of their fluxes, as in [8, 9] (see (3.5)–(3.7)). We stress again that similar problems, involving simultaneously jumps in the solution and in the flux, the Laplace-Beltrami operator and the mean value of the physical fields governing the different phases of a composite material, have already appeared in the engineering literature, where they are justified by numerical simulations and by asymptotic analysis (see [31, 39, 44]); however, to the best of our knowledge, they have not yet been fully analyzed from the mathematical point of view. The main difficulty in achieving the concentrated model (3.1)–(3.7) consists, besides the construction of the proper test functions, in guessing and then rigorously obtaining suitable estimates for the involved unknowns.

After the second concentration step, we proceed, via the periodic unfolding method, with the homogenization of the concentrated model, which is far from being standard (see (3.8)–(3.9)). Also in this case, the main difficulties are connected with guessing the macroscopic model, in order to understand which types of estimates are needed (see Theorem 4.1) to achieve the final result (see Theorem 6.6). Moreover, differently from more common situations, we also have to construct a separate surface test function (see (6.37)), due to the non-standard form of the problem to be homogenized. Although the limit problem (6.29)–(6.30) looks like a standard Dirichlet problem for an elliptic equation, a very delicate analysis is required in the construction of the homogenized matrix and of the source term. For example, the local problems involved in the homogenization procedure are, in our case, highly non-standard and call for properly adapted functional settings (see Sect. 5).

The paper is organized as follows. In Sect. 2, we present, in a simplified geometrical setting (a layered geometry), the model governing the composite material, which is essentially the model obtained via the concentration procedure of [14]. Starting from this model, which already involves an imperfect contact condition, we perform a second concentration procedure, in order to obtain the microscopic model to be further homogenized. In Sect. 3, we state our microscopic problem in a more general geometrical setting, consisting of a connected hosting material containing a periodic array of disconnected inclusions. In Sect. 4, we prove the main energy inequalities required for the convergence of the heat potentials and their fluxes. Section 5 is devoted to the construction of the cell functions needed for the homogenization procedure. Finally, in Sect. 6, we state and prove our homogenization result.

2 The two-layer problem

In this section, we show that the problem we are concerned with in this paper can be obtained as the limit of an elliptic problem exhibiting an interface and a thin layer around it, where conduction in the orthogonal direction degenerates. For the sake of brevity, we work in a simplified geometry, but the results obtained here hold also in a more general one, as the geometry considered in Sect. 3 (see, also, [3, 14]).

For \(\varepsilon \), \(\eta \in (0,1)\), we let here

$$\begin{aligned}{} & {} G=(0,1)\times (-1,1), \quad G^int _{\eta }=(0,1)\times (-1,-\varepsilon \eta ) , \quad G^out _{\eta }=(0,1)\times (\varepsilon \eta ,1) , \\{} & {} \quad \varSigma =(0,1)\times \{0\} , \quad \varSigma ^int _{\eta }=(0,1)\times (-\varepsilon \eta ,0) , \quad \varSigma ^out _{\eta }=(0,1)\times (0,\varepsilon \eta ) , \end{aligned}$$

and

$$\begin{aligned} G^int =(0,1)\times (-1,0) , \quad G^out =(0,1)\times (0,1) . \end{aligned}$$

In this section, the quantity \(\varepsilon >0\) is a constant, and thus we do not denote explicitly the dependence on \(\varepsilon \). Instead, as already mentioned in the Introduction and similarly to [31, 39], we perform the concentration limit \(\eta \rightarrow 0\).

For \(f^out \in L^{2}(G^out )\), \(f^int \in L^{2}(G^int )\), \(g^out _{\eta }\in L^{2}(\varSigma ^out _{\eta })\), \(g^int _{\eta }\in L^{2}(\varSigma ^int _{\eta })\), we consider the following problem for \( u_{\eta }\in H^1_0(G)\), with \(u^{\varSigma }_{\eta }:={u_{\eta }}_{\mid \varSigma }\in H^1_0(\varSigma )\): in the outer and inner domains, we let

$$\begin{aligned} {}- {{\,\mathrm{\Delta }\,}}u^out _{\eta }&= f^out \,, \quad \text {in } G^out _{\eta }, \end{aligned}$$
(2.1)
$$\begin{aligned} {}- {{\,\mathrm{\Delta }\,}}u^int _{\eta }&= f^int \,, \quad \text {in } G^int _{\eta }, \end{aligned}$$
(2.2)
$$\begin{aligned} {}u^out _{\eta }&= 0 \,, \qquad \text {on }\partial G^out _{\eta }\setminus \{y=\varepsilon \eta \}, \end{aligned}$$
(2.3)
$$\begin{aligned} {}u^int _{\eta }&= 0 \,, \qquad \text {on }\partial G^int _{\eta }\setminus \{y=-\varepsilon \eta \}. \end{aligned}$$
(2.4)

In the thick interfaces \(\varSigma ^out _{\eta }\) and \(\varSigma ^int _{\eta }\), we prescribe instead

$$\begin{aligned} {}- \frac{\partial ^{2}u^out _{\eta }}{\partial x^{2}} - \eta \frac{\partial ^{2}u^out _{\eta }}{\partial y^{2}}&= g^out _{\eta }\,, \quad \text {in }\varSigma ^out _{\eta }, \end{aligned}$$
(2.5)
$$\begin{aligned} - \frac{\partial ^{2}u^int _{\eta }}{\partial x^{2}} - \eta \frac{\partial ^{2}u^int _{\eta }}{\partial y^{2}}&= g^int _{\eta }\,, \quad \text {in }\varSigma ^int _{\eta }, \end{aligned}$$
(2.6)
$$\begin{aligned} u^out _{\eta }&= 0 \,, \qquad \text {on }\partial \varSigma ^out _{\eta }\setminus \big (\{y=\varepsilon \eta \}\cup \{y=0\}\big ), \end{aligned}$$
(2.7)
$$\begin{aligned} u^int _{\eta }&= 0 \,, \qquad \text {on }\partial \varSigma ^int _{\eta }\setminus \big (\{y=-\varepsilon \eta \}\cup \{y=0\}\big ). \end{aligned}$$
(2.8)

Here, \(\eta \), which has been introduced as a geometrical scaling parameter, related to the characteristic dimension of the thin layer, appears also in (2.5) and (2.6) as a degeneration parameter, accounting for small diffusivity in the orthogonal direction.

On the interfaces \(y=\pm \varepsilon \eta \) between the bulk domains and the thick interfaces, we prescribe the perfect contact conditions, for \(x\in (0,1)\),

$$\begin{aligned} u^out _{\eta }(x,\varepsilon \eta +)&= u^out _{\eta }(x,\varepsilon \eta -) \,,&\quad u^int _{\eta }(x,-\varepsilon \eta +)&= u^int _{\eta }(x,-\varepsilon \eta -) \,, \end{aligned}$$
(2.9)
$$\begin{aligned} \frac{\partial u^out _{\eta }}{\partial y}(x,\varepsilon \eta +)&= \eta \frac{\partial u^out _{\eta }}{\partial y}(x,\varepsilon \eta -) \,,&\quad \eta \frac{\partial u^int _{\eta }}{\partial y}(x,-\varepsilon \eta +)&= \frac{\partial u^int _{\eta }}{\partial y}(x,-\varepsilon \eta -) \,. \end{aligned}$$
(2.10)
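
Conditions (2.9)–(2.10) may be read as continuity of the potential and of the vertical component of the flux across \(y=\pm \varepsilon \eta \): indeed, the diffusion matrix is the identity in \(G^{out}_{\eta }\cup G^{int}_{\eta }\) and equals \(\mathrm {diag}(1,\eta )\) in \(\varSigma ^{out}_{\eta }\cup \varSigma ^{int}_{\eta }\) (this is the matrix \({H_{\eta }}\) introduced in (2.16) below), so that, for instance, the first condition in (2.10) amounts to

$$\begin{aligned} \big ( {H_{\eta }}{\nabla }u_{\eta }\cdot e_{2} \big ) (x,\varepsilon \eta +) = \big ( {H_{\eta }}{\nabla }u_{\eta }\cdot e_{2} \big ) (x,\varepsilon \eta -) , \qquad x\in (0,1) , \end{aligned}$$

where \(e_{2}=(0,1)\) denotes the vertical unit vector.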

Finally, we prescribe the conditions on the interface \(\varSigma \). Here, we have continuity of the unknown, that is

$$\begin{aligned} u^out _{\eta }(x,0) = u^int _{\eta }(x,0) , \qquad x\in (0,1) , \end{aligned}$$
(2.11)

and on \(\varSigma \) the function \(u^{\varSigma }_{\eta }=u^out _{\eta }=u^int _{\eta }\) is required to satisfy the problem

$$\begin{aligned} - \varepsilon \frac{\partial ^{2}u^{\varSigma }_{\eta }}{\partial x^{2}}&= \eta \left[ \frac{\partial u_{\eta }}{\partial y}\right] _{\varSigma } \,,&\qquad&\text {on } \varSigma , \end{aligned}$$
(2.12)
$$\begin{aligned} u^{\varSigma }_{\eta }(x)&= 0 \,,&\qquad&x=0 \,, x=1 \,. \end{aligned}$$
(2.13)

The presence of the small parameter \(\varepsilon \) in (2.12) is due to the first concentration step mentioned in the Introduction, for which we refer to [12], while the right-hand side of (2.12) is just the jump of the normal flux, in accordance with (2.5) and (2.6).

Here and in the following, we denote for any function F and surface S

$$\begin{aligned}{}[F]_{S} = F^{out }_{\mid S}-F^{int }_{\mid S} , \qquad \{F\}_{S} = F^{out }_{\mid S}+F^{int }_{\mid S} , \end{aligned}$$
(2.14)

where \(F^{out }\) [respectively, \(F^{int } \)] is the restriction of F to the outer [respectively, inner] domain. We will also use this notation for functions F defined only on S, in which case we understand \([F]_{S}=0\), \(\{F\}_{S}=2F\). We make use in the following of the elementary properties

$$\begin{aligned} \begin{aligned}{}[F_{1}F_{2}]_{S} = \frac{1}{2} [F_{1}]_{S} \{F_{2}\}_{S} + \frac{1}{2} \{F_{1}\}_{S} [F_{2}]_{S} , \\ \{F_{1}F_{2}\}_{S} = \frac{1}{2} [F_{1}]_{S} [F_{2}]_{S} + \frac{1}{2} \{F_{1}\}_{S} \{F_{2}\}_{S} . \end{aligned} \end{aligned}$$
(2.15)
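
These identities follow from a direct computation; for instance, denoting by \(F_{k}^{out}\), \(F_{k}^{int}\) the one-sided traces on S, the first one reads

$$\begin{aligned} F_{1}^{out} F_{2}^{out} - F_{1}^{int} F_{2}^{int} = \frac{1}{2} \big ( F_{1}^{out} - F_{1}^{int} \big ) \big ( F_{2}^{out} + F_{2}^{int} \big ) + \frac{1}{2} \big ( F_{1}^{out} + F_{1}^{int} \big ) \big ( F_{2}^{out} - F_{2}^{int} \big ) , \end{aligned}$$

and the second one is verified analogously.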

Let us now introduce the functions

$$\begin{aligned} {h_{\eta }}(y) = \left\{ \begin{array}{cc} &{}1, \qquad |y|\ge \varepsilon \eta , \\ &{}\eta , \qquad |y|< \varepsilon \eta , \end{array} \right. \quad {H_{\eta }}(y) = \begin{pmatrix} 1 &{} 0 \\ 0 &{} {h_{\eta }}(y) \end{pmatrix} . \end{aligned}$$
(2.16)

Let us also consider a test function \(\varphi \) and denote by \(\varphi ^out \) and \(\varphi ^int \) its restrictions to \(G^out \) and \(G^int \), respectively; we assume that such restrictions are separately Lipschitz continuous and also that \(\varphi =0\) on \(\partial G\), but we do not require that \([\varphi ]_{\varSigma }=0\).

We arrive, by the usual computations, at the following integral equality, where we have used (2.1)–(2.10) and (2.15):

$$\begin{aligned} \begin{aligned}&\int \limits _{G} {H_{\eta }}{\nabla }u_{\eta }\cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \frac{1}{2} \int \limits _{\varSigma } [\varphi ]_{\varSigma } \left\{ \eta \frac{\partial u_{\eta }}{\partial y}\right\} _{\varSigma } \,{\textrm{d}}x + \frac{1}{2} \int \limits _{\varSigma } \{\varphi \}_{\varSigma } \left[ \eta \frac{\partial u_{\eta }}{\partial y}\right] _{\varSigma } \,{\textrm{d}}x \\&\quad = \int \limits _{G^out _{\eta }} f^out \varphi ^out \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{G^int _{\eta }} f^int \varphi ^int \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{\varSigma ^out _{\eta }} g^out _{\eta }\varphi ^out \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{\varSigma ^int _{\eta }} g^int _{\eta }\varphi ^int \,{\textrm{d}}x \,{\textrm{d}}y . \end{aligned} \end{aligned}$$
(2.17)

Upon using also (2.12) and selecting \(\varphi \) such that \([\varphi ]_{\varSigma }=0\), we get the usual weak formulation of problem (2.1)–(2.13) for the solution \(u_{\eta }\in H^{1}_0(G)\), with \(u^{\varSigma }_{\eta }={u_{\eta }}_{\mid \varSigma }\in H^{1}_0(\varSigma )\), given by

$$\begin{aligned} \begin{aligned}&\int \limits _{G} {H_{\eta }}{\nabla }u_{\eta }\cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \varepsilon \int \limits _{\varSigma } \frac{\partial \varphi }{\partial x} \frac{\partial u^{\varSigma }_{\eta }}{\partial x} \,{\textrm{d}}x \\&\quad = \int \limits _{G^out _{\eta }} f^out \varphi \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{G^int _{\eta }} f^int \varphi \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{\varSigma ^out _{\eta }} g^out _{\eta }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{\varSigma ^int _{\eta }} g^int _{\eta }\varphi \,{\textrm{d}}x \,{\textrm{d}}y , \end{aligned} \end{aligned}$$
(2.18)

for all test functions \(\varphi \in H^{1}_0(G)\) with \({\varphi }_{\mid \varSigma }\in H^{1}_0(\varSigma )\).

Existence and uniqueness of a weak solution of problem (2.18) can be obtained as a standard consequence of the Lax-Milgram lemma.

We note that

$$\begin{aligned} \begin{aligned}&\int \limits _{\varSigma ^out _{\eta }} |u^out _{\eta }|^{2} \,{\textrm{d}}x \,{\textrm{d}}y = \int \limits _{\varSigma ^out _{\eta }} \left( u^{\varSigma }_{\eta }+ \int \limits _{0}^{y} \frac{\partial u^out _{\eta }}{\partial z}(x,z) \,{\textrm{d}}z \right) ^{2} \,{\textrm{d}}x \,{\textrm{d}}y \\&\quad \le 2 \varepsilon \eta \int \limits _{\varSigma } |u^{\varSigma }_{\eta }|^{2} \,{\textrm{d}}x + 2 \varepsilon ^{2} \eta \int \limits _{\varSigma ^out _{\eta }} \eta \left| \frac{\partial u^out _{\eta }}{\partial z} \right| ^{2} \,{\textrm{d}}x \,{\textrm{d}}z . \end{aligned} \end{aligned}$$
(2.19)
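
In (2.19) we used the elementary inequality \((a+b)^{2}\le 2a^{2}+2b^{2}\) together with the Cauchy-Schwarz estimate, valid for \(0<y<\varepsilon \eta \),

$$\begin{aligned} \left( \int \limits _{0}^{y} \frac{\partial u^{out}_{\eta }}{\partial z}(x,z) \,{\textrm{d}}z \right) ^{2} \le y \int \limits _{0}^{\varepsilon \eta } \left| \frac{\partial u^{out}_{\eta }}{\partial z}(x,z) \right| ^{2} \,{\textrm{d}}z \le \varepsilon \eta \int \limits _{0}^{\varepsilon \eta } \left| \frac{\partial u^{out}_{\eta }}{\partial z}(x,z) \right| ^{2} \,{\textrm{d}}z , \end{aligned}$$

which, once integrated over \(\varSigma ^{out}_{\eta }\), accounts for the last term in (2.19).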

A similar estimate holds true in \(\varSigma ^int _{\eta }\). On selecting (formally) \(\varphi =u_{\eta }\) and invoking Young's inequality, we obtain

$$\begin{aligned} \begin{aligned}&\int \limits _{G} \left( \left| \frac{\partial u_{\eta }}{\partial x} \right| ^{2} + {h_{\eta }}\left| \frac{\partial u_{\eta }}{\partial y} \right| ^{2} \right) \,{\textrm{d}}x \,{\textrm{d}}y + \varepsilon \int \limits _{\varSigma } \left| \frac{\partial u^{\varSigma }_{\eta }}{\partial x} \right| ^{2} \,{\textrm{d}}x \le \frac{C}{\delta } \int \limits _{G^out _{\eta }} |f^out |^{2} \,{\textrm{d}}x \,{\textrm{d}}y \\&\quad +{C\delta } \int \limits _{G^out _{\eta }} |u^out _{\eta }|^{2} \,{\textrm{d}}x \,{\textrm{d}}y + \frac{C}{\delta } \int \limits _{G^int _{\eta }} |f^int |^{2} \,{\textrm{d}}x \,{\textrm{d}}y + {C\delta } \int \limits _{G^int _{\eta }} |u^int _{\eta }|^{2} \,{\textrm{d}}x \,{\textrm{d}}y \\&\quad + \frac{C\eta }{\delta } \int \limits _{\varSigma ^out _{\eta }} |g^out _{\eta }|^{2} \,{\textrm{d}}x \,{\textrm{d}}y +{C\eta \delta } \int \limits _{\varSigma ^out _{\eta }} |u^out _{\eta }|^{2} \,{\textrm{d}}x \,{\textrm{d}}y \\&\quad + \frac{C\eta }{\delta } \int \limits _{\varSigma ^int _{\eta }} |g^int _{\eta }|^{2} \,{\textrm{d}}x \,{\textrm{d}}y +{C\eta \delta } \int \limits _{\varSigma ^int _{\eta }} |u^int _{\eta }|^{2} \,{\textrm{d}}x \,{\textrm{d}}y , \end{aligned} \end{aligned}$$

for any \(\delta >0\). Then, using also the Poincaré inequality and (2.19), and choosing \(\delta \) sufficiently small, we can absorb the terms containing the solution \(u_{\eta }\) on the right-hand side into the left-hand side, thus arriving at

$$\begin{aligned} \int \limits _{G} \left( \left| \frac{\partial u_{\eta }}{\partial x} \right| ^{2} + {h_{\eta }}\left| \frac{\partial u_{\eta }}{\partial y} \right| ^{2} \right) \,{\textrm{d}}x \,{\textrm{d}}y + \varepsilon \int \limits _{\varSigma } \left| \frac{\partial u^{\varSigma }_{\eta }}{\partial x} \right| ^{2} \,{\textrm{d}}x \le C_{\varepsilon } , \end{aligned}$$
(2.20)

where the last inequality relies on a boundedness assumption on the sources. In the previous inequalities, \(C>0\) denotes constants independent of \(\varepsilon \) and \(\eta \), while \(C_{\varepsilon }\) is independent of \(\eta \) only.

In order to keep the effects of the sources \(g^int _{\eta }\) and \(g^out _{\eta }\) in the concentrated problem, we have to scale them by a factor \(1/\eta \); then, for the sake of simplicity, we assume

$$\begin{aligned} g^out _{\eta }(x,y) = \frac{1}{\eta } g^out _{1}(x) g^out _{2}(y) , \qquad g^int _{\eta }(x,y) = \frac{1}{\eta } g^int _{1}(x) g^int _{2}(y) , \end{aligned}$$
(2.21)

for \(g^out _{1}\), \(g^int _{1}\in L^{2}(\varSigma )\), \(g^out _{2}\), \(g^int _{2}\in C(\mathbb {R})\).
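
This scaling indeed produces a surface source of order \(\varepsilon \) in the limit: for a test function \(\varphi \) continuous up to \(\varSigma \), the change of variables \(y=\varepsilon \eta z\) gives

$$\begin{aligned} \int \limits _{\varSigma ^{out}_{\eta }} g^{out}_{\eta }\varphi \,{\textrm{d}}x \,{\textrm{d}}y = \varepsilon \int \limits _{0}^{1} g^{out}_{1}(x) \int \limits _{0}^{1} g^{out}_{2}(\varepsilon \eta z) \varphi (x,\varepsilon \eta z) \,{\textrm{d}}z \,{\textrm{d}}x \rightarrow \varepsilon g^{out}_{2}(0) \int \limits _{\varSigma } g^{out}_{1}(x) \varphi (x,0) \,{\textrm{d}}x , \end{aligned}$$

as \(\eta \rightarrow 0\), consistently with the limit equation (2.31) below.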

Let us denote by \(\widetilde{u^out _{\eta }}\in H^{1}(G^out )\) [respectively, \(\widetilde{u^int _{\eta }}\in H^{1}(G^int )\)] the standard extension of \({u^out _{\eta }}_{\mid G^out _{\eta }}\) to \(G^out \) [respectively, of \({u^int _{\eta }}_{\mid G^int _{\eta }}\) to \(G^int \)] obtained by reflection. For such an extension we have \(\Vert \widetilde{u^out _{\eta }}\Vert _{H^{1}(G^out )}\le C\Vert {u^out _{\eta }}\Vert _{H^1(G^out _{\eta })}\) (an analogous estimate is valid also for \(\widetilde{u^int _{\eta }}\)). Thus we may conclude, up to subsequences, as \(\eta \rightarrow 0\):

$$\begin{aligned}&\widetilde{u^out _{\eta }}\rightarrow u^out _{0}\,, \qquad \text {strongly in }L^{2}(G^out ), \end{aligned}$$
(2.22)
$$\begin{aligned}&\widetilde{u^int _{\eta }}\rightarrow u^int _{0}\,, \qquad \text {strongly in }{L^{2}(G^int ),} \end{aligned}$$
(2.23)
$$\begin{aligned}&{\nabla }\widetilde{u^out _{\eta }}\rightharpoonup {\nabla }u^out _{0}\,, \qquad \text {weakly in }{L^{2}(G^out ),} \end{aligned}$$
(2.24)
$$\begin{aligned}&{\nabla }\widetilde{u^int _{\eta }}\rightharpoonup {\nabla }u^int _{0}\,, \qquad \text {weakly in }{L^{2}(G^int ),} \end{aligned}$$
(2.25)
$$\begin{aligned}&\widetilde{u^out _{\eta }}(\cdot ,\varepsilon \eta )\rightarrow u^out _{0}(\cdot ,0) \,, \qquad \text {strongly in }{L^{2}(0,1),} \end{aligned}$$
(2.26)
$$\begin{aligned}&\widetilde{u^int _{\eta }}(\cdot ,-\varepsilon \eta )\rightarrow u^int _{0}(\cdot ,0) \,, \qquad \text {strongly in }{L^{2}(0,1),} \end{aligned}$$
(2.27)

for suitable \(u^out _{0}\in H^{1}(G^out )\), \(u^int _{0}\in H^{1}(G^int )\). Moreover in the same limit

$$\begin{aligned}&u^{\varSigma }_{\eta }\rightarrow u^{\varSigma }_{0}\,, \qquad \text {strongly in } L^{2}(0,1), \end{aligned}$$
(2.28)
$$\begin{aligned}&\frac{\partial u^{\varSigma }_{\eta }}{\partial x}\rightharpoonup \frac{\partial u^{\varSigma }_{0}}{\partial x} \,, \qquad \text {weakly in }{L^{2}(0,1),} \end{aligned}$$
(2.29)

for a suitable \(u^{\varSigma }_{0}\in H^{1}(0,1)\). The null boundary conditions on \(\partial G\) are of course preserved in the limit.

In order to take the limit in (2.18), we remark that, by taking a smooth test function \(\varphi \) and recalling (2.20),

$$\begin{aligned} \begin{aligned}&\left| \int \limits _{\varSigma ^out _{\eta }\cup \varSigma ^int _{\eta }} |{H_{\eta }}{\nabla }u_{\eta }\cdot {\nabla }\varphi | \,{\textrm{d}}x \,{\textrm{d}}y \right| ^{2} \le C(\varphi ) \varepsilon \eta \int \limits _{\varSigma ^out _{\eta }\cup \varSigma ^int _{\eta }} \left| \frac{\partial u_{\eta }}{\partial x} \right| ^{2} \,{\textrm{d}}x \,{\textrm{d}}y\\&\quad + C(\varphi ) \varepsilon \eta ^{2} \int \limits _{\varSigma ^out _{\eta }\cup \varSigma ^int _{\eta }} \eta \left| \frac{\partial u_{\eta }}{\partial y} \right| ^{2} \,{\textrm{d}}x \,{\textrm{d}}y \le C_{\varepsilon }(\varphi ) \eta \rightarrow 0 , \end{aligned} \end{aligned}$$
(2.30)

where \(C(\varphi )=\Vert \nabla \varphi \Vert ^2_{L^\infty (G)}\). Thus, we have the limiting equation

$$\begin{aligned} \begin{aligned}&\int \limits _{G} {\nabla }u_{0}\cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \varepsilon \int \limits _{\varSigma } \frac{\partial \varphi }{\partial x} \frac{\partial u^{\varSigma }_{0}}{\partial x} \,{\textrm{d}}x \\&\quad = \int \limits _{G} f\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \varepsilon \int \limits _{\varSigma } ( g^out _{1}(x) g^out _{2}(0) + g^int _{1}(x) g^int _{2}(0) ) \varphi (x,0) \,{\textrm{d}}x . \end{aligned} \end{aligned}$$
(2.31)

Let us next derive (formally) the distributional formulation of the limiting problem and the conditions relating \(u^{\varSigma }_{0}\) to \(u_{0}\). First, on taking test functions supported away from \(\varSigma \), we obtain

$$\begin{aligned} {}- {{\,\mathrm{\Delta }\,}}u^out _{0}&= f^out \,, \qquad \text {in }{ G^out ,} \end{aligned}$$
(2.32)
$$\begin{aligned} {}- {{\,\mathrm{\Delta }\,}}u^int _{0}&= f^int \,, \qquad \text {in }{ G^int ,} \end{aligned}$$
(2.33)
$$\begin{aligned} u^out _{0}&= 0 \,, \qquad \text {on }{\partial G^out \setminus \varSigma ,} \end{aligned}$$
(2.34)
$$\begin{aligned} u^int _{0}&= 0 \,, \qquad \text {on }{\partial G^int \setminus \varSigma .} \end{aligned}$$
(2.35)

Thus for a test function \(\varphi \) as in (2.18) we obtain

$$\begin{aligned} \int \limits _{G} {\nabla }u_{0}\cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{\varSigma } \left[ \frac{\partial u_{0}}{\partial y}\right] _{\varSigma } \varphi \,{\textrm{d}}x = \int \limits _{G} f\varphi \,{\textrm{d}}x \,{\textrm{d}}y . \end{aligned}$$
(2.36)

By comparing (2.31) and (2.36), we conclude

$$\begin{aligned} \varepsilon \int \limits _{\varSigma } \frac{\partial \varphi }{\partial x} \frac{\partial u^{\varSigma }_{0}}{\partial x} \,{\textrm{d}}x = \int \limits _{\varSigma } \left[ \frac{\partial u_{0}}{\partial y}\right] _{\varSigma } \varphi \,{\textrm{d}}x + \varepsilon \int \limits _{\varSigma } ( g^out _{1}(x) g^out _{2}(0) + g^int _{1}(x) g^int _{2}(0) ) \varphi \,{\textrm{d}}x , \end{aligned}$$
(2.37)

whose distributional formulation is

$$\begin{aligned} {}- \varepsilon \frac{\partial ^{2}u^{\varSigma }_{0}}{\partial x^{2}}&= \left[ \frac{\partial u_{0}}{\partial y}\right] _{\varSigma } + \varepsilon g^out _{1}(x) g^out _{2}(0) + \varepsilon g^int _{1}(x) g^int _{2}(0) \,,&\qquad&\text {on }\varSigma , \end{aligned}$$
(2.38)
$$\begin{aligned} u^{\varSigma }_{0}(x)&= 0 \,,&\qquad&x=0 \,, x=1 \,. \end{aligned}$$
(2.39)

However, since we actually have three unknowns, namely \(u^{\varSigma }_{0}\), \(u^out _{0}\) and \(u^int _{0}\), we need two more interface conditions.

We cannot extract them from (2.31), and thus we go back to the limiting procedure. In (2.18), we take as test function

$$\begin{aligned} \varphi (x,y) = \left\{ \begin{array}{cc} &{}\psi (x)\zeta (y) , \qquad \varepsilon \eta \le y\le 1 , \\ &{}\psi (x)\frac{\max (y,0)}{\varepsilon \eta } , \qquad -1\le y <\varepsilon \eta , \end{array} \right. \end{aligned}$$
(2.40)

where \(\psi \in C^{1}_{0}(0,1)\) and \(\zeta \in C^{1}(\mathbb {R})\), with \(\zeta (1)=0\) and \(\zeta (y)=1\) for \(y<1/2\); here we assume \(\eta <1/2\), so that \(\zeta (\varepsilon \eta )=1\). We obtain

$$\begin{aligned}{} & {} \int \limits _{G^out _{\eta }} {\nabla }u^out _{\eta }\cdot {\nabla }(\psi \zeta ) \,{\textrm{d}}x \,{\textrm{d}}y + \frac{1}{\varepsilon } \int \limits _{\varSigma ^out _{\eta }} \left( \frac{\partial u^out _{\eta }}{\partial x} \psi '\frac{y}{\eta } + \frac{\partial u^out _{\eta }}{\partial y} \psi \right) \,{\textrm{d}}x \,{\textrm{d}}y \nonumber \\{} & {} \quad = \int \limits _{G^out _{\eta }} f^out \psi \zeta \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{\varSigma ^out _{\eta }} g^out _{\eta }\psi \frac{y}{\varepsilon \eta } \,{\textrm{d}}x \,{\textrm{d}}y , \end{aligned}$$
(2.41)

where we may calculate

$$\begin{aligned} \int \limits _{\varSigma ^out _{\eta }} \frac{\partial u^out _{\eta }}{\partial y} \psi \,{\textrm{d}}x \,{\textrm{d}}y = \int \limits _{\varSigma } ( u^out _{\eta }(x,\varepsilon \eta ) - u^{\varSigma }_{\eta }(x) ) \psi \,{\textrm{d}}x . \end{aligned}$$
(2.42)

Since \(|y/(\varepsilon \eta )|\le 1\) in \(\varSigma ^out _{\eta }\), on invoking again (2.20) and the convergences established above, as \(\eta \rightarrow 0\), we arrive at

$$\begin{aligned}{} & {} \int \limits _{G^out } {\nabla }u^out _{0}\cdot {\nabla }(\psi \zeta ) \,{\textrm{d}}x \,{\textrm{d}}y + \frac{1}{\varepsilon } \int \limits _{\varSigma } ( u^out _{0}(x,0) - u^{\varSigma }_{0}(x) ) \psi \,{\textrm{d}}x\nonumber \\{} & {} \quad = \int \limits _{G^out } f^out \psi \zeta \,{\textrm{d}}x \,{\textrm{d}}y + \varepsilon \frac{g^out _{2}(0)}{2} \int \limits _{\varSigma } g^out _{1} \psi \,{\textrm{d}}x , \end{aligned}$$
(2.43)

where we have used (2.21) and

$$\begin{aligned} \frac{1}{\eta } \int \limits _{0}^{\varepsilon \eta } g^out _{2}(y) \frac{y}{\varepsilon \eta } \,{\textrm{d}}y = \varepsilon \int \limits _{0}^{1} g^out _{2}(\varepsilon \eta z) z \,{\textrm{d}}z \rightarrow \varepsilon \frac{g^out _{2}(0)}{2} , \quad \text {as } \eta \rightarrow 0. \end{aligned}$$
(2.44)

But on integrating (2.32) by parts, we get

$$\begin{aligned} \int \limits _{G^out } {\nabla }u^out _{0}\cdot {\nabla }(\psi \zeta ) \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{\varSigma } \frac{\partial u^out _{0}}{\partial y} \psi \,{\textrm{d}}x = \int \limits _{G^out } f^out \psi \zeta \,{\textrm{d}}x \,{\textrm{d}}y , \end{aligned}$$
(2.45)

whence

$$\begin{aligned} \frac{1}{\varepsilon } \int \limits _{\varSigma } ( u^out _{0}(x,0) - u^{\varSigma }_{0}(x) ) \psi \,{\textrm{d}}x = \int \limits _{\varSigma } \frac{\partial u^out _{0}}{\partial y} \psi \,{\textrm{d}}x + \varepsilon \frac{g^out _{2}(0)}{2} \int \limits _{\varSigma } g^out _{1} \psi \,{\textrm{d}}x . \end{aligned}$$
(2.46)

Clearly, we may argue in a similar way in \(G^int \), finally arriving at the required (distributional) conditions on \(\varSigma \):

$$\begin{aligned} \frac{1}{\varepsilon } (u^out _{0}-u^{\varSigma }_{0})&= \frac{\partial u^out _{0}}{\partial y} + \varepsilon \frac{g^out _{2}(0)}{2} g^out _{1} \,, \end{aligned}$$
(2.47)
$$\begin{aligned} \frac{1}{\varepsilon } (u^{\varSigma }_{0}-u^int _{0})&= \frac{\partial u^int _{0}}{\partial y} - \varepsilon \frac{g^int _{2}(0)}{2} g^int _{1} \,. \end{aligned}$$
(2.48)

Conditions (2.47)–(2.48) are equivalent to

$$\begin{aligned} \left[ \frac{\partial u_{0}}{\partial y}\right] _{\varSigma }&= \frac{1}{\varepsilon } \{u_{0}\}_{\varSigma } - \frac{2}{\varepsilon } u^{\varSigma }_{0}- \frac{\varepsilon }{2} \{g\}_{\varSigma } \,, \end{aligned}$$
(2.49)
$$\begin{aligned} \left\{ \frac{\partial u_{0}}{\partial y}\right\} _{\varSigma }&= \frac{1}{\varepsilon } [u_{0}]_{\varSigma } - \frac{\varepsilon }{2} [g]_{\varSigma } \,. \end{aligned}$$
(2.50)
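
Indeed, writing, consistently with (2.14), \(\{g\}_{\varSigma }=g^{out}_{1}g^{out}_{2}(0)+g^{int}_{1}g^{int}_{2}(0)\) and \([g]_{\varSigma }=g^{out}_{1}g^{out}_{2}(0)-g^{int}_{1}g^{int}_{2}(0)\), subtracting (2.48) from (2.47) and adding the two conditions yield, respectively,

$$\begin{aligned} \frac{1}{\varepsilon } \big ( \{u_{0}\}_{\varSigma } - 2 u^{\varSigma }_{0}\big ) = \left[ \frac{\partial u_{0}}{\partial y}\right] _{\varSigma } + \frac{\varepsilon }{2} \{g\}_{\varSigma } , \qquad \frac{1}{\varepsilon } [u_{0}]_{\varSigma } = \left\{ \frac{\partial u_{0}}{\partial y}\right\} _{\varSigma } + \frac{\varepsilon }{2} [g]_{\varSigma } , \end{aligned}$$

which are exactly (2.49) and (2.50).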

Note that, on substituting (2.49) into (2.38), we infer

$$\begin{aligned} - \varepsilon \frac{\partial ^{2}u^{\varSigma }_{0}}{\partial x^{2}} = \frac{1}{\varepsilon } \{u_{0}\}_{\varSigma } - \frac{2}{\varepsilon } u^{\varSigma }_{0}+ \frac{\varepsilon }{2} \{g\}_{\varSigma } . \end{aligned}$$
(2.51)

In order to introduce a weak formulation for the complete problem, let us select again a test function possibly with \([\varphi ]_{\varSigma }\not =0\). From (2.32)–(2.35), we have by means of standard integration by parts and of (2.15)

$$\begin{aligned} \begin{aligned}&\int \limits _{G} {\nabla }u_{0}\cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \frac{1}{2} \int \limits _{\varSigma } [\varphi ]_{\varSigma } \left\{ \frac{\partial u_{0}}{\partial y}\right\} _{\varSigma } \,{\textrm{d}}x + \frac{1}{2} \int \limits _{\varSigma } \{\varphi \}_{\varSigma } \left[ \frac{\partial u_{0}}{\partial y}\right] _{\varSigma } \,{\textrm{d}}x \\&\quad = \int \limits _{G} f\varphi \,{\textrm{d}}x \,{\textrm{d}}y . \end{aligned} \end{aligned}$$
(2.52)

We are led to the following result.

Proposition 2.1

Let \(f=(f^out ,f^int )\), with \(f^out \in L^{2}(G^out )\) and \(f^int \in L^{2}(G^int )\), and let \(g^out _{1}\), \(g^int _{1}\in L^{2}(\varSigma )\), \(g^out _{2}\), \(g^int _{2}\in C(\mathbb {R})\). Then the limiting problem (2.32)–(2.35), (2.38)–(2.39), (2.49)–(2.50) admits a unique weak solution \((u^out _{0},u^int _{0},u^{\varSigma }_{0})\), with \(u^out _{0}\in H^{1}(G^out )\), \(u^int _{0}\in H^{1}(G^int )\), \(u^{\varSigma }_{0}\in H^{1}_{0}(0,1)\), \({u^out _{0}}=0\) on \(\partial G^out \cap \{y>0\}\) and \(u^int _{0}=0\) on \(\partial G^int \cap \{y<0\}\); i.e., the triplet \((u^out _{0},u^int _{0},u^{\varSigma }_{0})\) satisfies

$$\begin{aligned} \begin{aligned}&\int \limits _{G} {\nabla }u_{0}\cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \frac{1}{2} \int \limits _{\varSigma } [\varphi ]_{\varSigma } \Big ( \frac{[u_{0}]_{\varSigma }}{\varepsilon } - \frac{\varepsilon }{2} [g]_{\varSigma } \Big ) \,{\textrm{d}}x \\&\quad + \frac{1}{2} \int \limits _{\varSigma } \{\varphi \}_{\varSigma } \Big ( \frac{\{u_{0}\}_{\varSigma }}{\varepsilon } - \frac{2}{\varepsilon } u^{\varSigma }_{0}- \frac{\varepsilon }{2} \{g\}_{\varSigma } \Big ) \,{\textrm{d}}x = \int \limits _{G} f\varphi \,{\textrm{d}}x \,{\textrm{d}}y \end{aligned} \end{aligned}$$
(2.53)

and

$$\begin{aligned} \varepsilon \int \limits _{\varSigma } \frac{\partial u^{\varSigma }_{0}}{\partial x} \frac{\partial \psi }{\partial x} \,{\textrm{d}}x = \int \limits _{\varSigma } \psi \Big ( \frac{\{u_{0}\}_{\varSigma }}{\varepsilon } - \frac{2}{\varepsilon } u^{\varSigma }_{0}+ \frac{\varepsilon }{2} \{g\}_{\varSigma } \Big ) \,{\textrm{d}}x , \end{aligned}$$
(2.54)

for every test function \(\psi \in H^{1}_{0}(0,1)\) and every test function \(\varphi =(\varphi ^out ,\varphi ^int )\), with \(\varphi ^out \) and \(\varphi ^int \) in the same class as \(u^out _{0}\) and \(u^int _{0}\), respectively.

Remark 2.2

Clearly, if we take in (2.53) a test function \(\varphi \) such that \(\varphi ^out =\varphi ^int =\psi \) on \(\varSigma \) and add the resulting equality to (2.54), we recover (2.31).

Proof

The proof is carried out by means of a concentration procedure starting from the solution \(u_{\eta }\) of problem (2.18), where we select the test function

$$\begin{aligned} \varphi _{\eta }(x,y) = \left\{ \begin{array}{ll} &{}\varphi ^out (x,y) , \qquad (x,y)\in G^out _{\eta }, \\ &{}\frac{y}{\varepsilon \eta } \varphi ^out (x,\varepsilon \eta ) + \Big (1-\frac{y}{\varepsilon \eta }\Big ) \psi (x) , \qquad (x,y)\in \varSigma ^out _{\eta }, \\ &{}-\frac{y}{\varepsilon \eta } \varphi ^int (x,-\varepsilon \eta ) + \Big (1+\frac{y}{\varepsilon \eta }\Big ) \psi (x) , \qquad (x,y)\in \varSigma ^int _{\eta }, \\ &{}\varphi ^int (x,y) , \qquad (x,y)\in G^int _{\eta }, \end{array} \right. \end{aligned}$$
(2.55)

with \(\varphi =(\varphi ^out ,\varphi ^int )\), \(\psi \) as in the statement. We obtain

$$\begin{aligned} \begin{aligned}&\int \limits _{G^out _{\eta }\cup G^int _{\eta }} {\nabla }u_{\eta }\cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{\varSigma ^out _{\eta }\cup \varSigma ^int _{\eta }} \frac{\partial u_{\eta }}{\partial x} \frac{\partial \varphi _{\eta }}{\partial x} \,{\textrm{d}}x \,{\textrm{d}}y \\&\quad + \frac{1}{\varepsilon } \int \limits _{\varSigma ^out _{\eta }} \frac{\partial u^out _{\eta }}{\partial y} (\varphi ^out (x,\varepsilon \eta )-\psi (x)) \,{\textrm{d}}x \,{\textrm{d}}y + \frac{1}{\varepsilon } \int \limits _{\varSigma ^int _{\eta }} \frac{\partial u^int _{\eta }}{\partial y} (\psi (x)-\varphi ^int (x,-\varepsilon \eta )) \,{\textrm{d}}x \,{\textrm{d}}y \\&\quad + \varepsilon \int \limits _{\varSigma } \frac{\partial \psi }{\partial x} \frac{\partial u^{\varSigma }_{\eta }}{\partial x} \,{\textrm{d}}x:= \sum _{k=1}^{5} J_{k}(\eta ) = I(\eta ) , \end{aligned} \end{aligned}$$
(2.56)

where \(I(\eta )\) is the right-hand side of (2.18), i.e., the contribution of the sources, for which, recalling (2.21) and (2.44), we immediately get

$$\begin{aligned} I(\eta ) \rightarrow \int \limits _{G} f \varphi \,{\textrm{d}}x \,{\textrm{d}}y + \frac{\varepsilon }{2} \int \limits _{\varSigma } ( \varphi ^out g^out _{2}(0) g^out _{1} + \varphi ^int g^int _{2}(0) g^int _{1} ) \,{\textrm{d}}x + \frac{\varepsilon }{2} \int \limits _{\varSigma } \psi \{g\}_{\varSigma } \,{\textrm{d}}x . \end{aligned}$$
(2.57)

As to the other terms, owing to the convergences in (2.22)–(2.29) and to estimate (2.20), and also since \(|\partial \varphi _{\eta }/\partial x|\) is bounded uniformly in \(\eta \), we immediately get

$$\begin{aligned} J_{1}(\eta ) \rightarrow \int \limits _{G} {\nabla }u_{0}\cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y , \quad J_{2}(\eta ) \rightarrow 0 , \quad J_{5}(\eta ) \rightarrow \varepsilon \int \limits _{\varSigma } \frac{\partial \psi }{\partial x} \frac{\partial u^{\varSigma }_{0}}{\partial x} \,{\textrm{d}}x . \end{aligned}$$
(2.58)

Next, we note that in \(J_{3}\) and \(J_{4}\) we may explicitly integrate \(\partial u_{\eta }/\partial y\) to infer

$$\begin{aligned} \begin{aligned} J_{3}(\eta ) + J_{4}(\eta )&= \frac{1}{\varepsilon } \int \limits _{\varSigma } ( u^out _{\eta }(x,\varepsilon \eta )-u^{\varSigma }_{\eta }(x) ) ( \varphi ^out (x,\varepsilon \eta )-\psi (x) ) \,{\textrm{d}}x\\&\quad + \frac{1}{\varepsilon } \int \limits _{\varSigma } ( u^{\varSigma }_{\eta }(x) - u^int _{\eta }(x,-\varepsilon \eta ) ) ( \psi (x) - \varphi ^int (x,-\varepsilon \eta ) ) \,{\textrm{d}}x\\&\rightarrow \frac{1}{\varepsilon } \int \limits _{\varSigma } ( u^out _{0}\varphi ^out + u^int _{0}\varphi ^int - u^{\varSigma }_{0}\{\varphi \}_{\varSigma } ) \,{\textrm{d}}x + \frac{1}{\varepsilon } \int \limits _{\varSigma } ( 2u^{\varSigma }_{0}- \{u_{0}\}_{\varSigma } ) \psi \,{\textrm{d}}x , \end{aligned} \end{aligned}$$
(2.59)

where we use (2.26), (2.27) and the fact that \(u^int _{\eta }(x,-\varepsilon \eta )=\widetilde{u^int _{\eta }}(x,-\varepsilon \eta )\) and \(u^out _{\eta }(x,\varepsilon \eta )=\widetilde{u^out _{\eta }}(x,\varepsilon \eta )\).

Since \(\varphi \) and \(\psi \) can be chosen independently, we first select \(\varphi =0\), to get at once (2.54). Then, we select \(\psi =0\), and gather (2.56)–(2.59) to conclude

$$\begin{aligned} \begin{aligned}&\int \limits _{G} {\nabla }u_{0}\cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \frac{1}{\varepsilon } \int \limits _{\varSigma } ( u^out _{0}\varphi ^out + u^int _{0}\varphi ^int - u^{\varSigma }_{0}\{\varphi \}_{\varSigma } ) \,{\textrm{d}}x \\&\quad = \int \limits _{G} f \varphi \,{\textrm{d}}x \,{\textrm{d}}y + \frac{\varepsilon }{2} \int \limits _{\varSigma } ( \varphi ^out g^out _{2}(0) g^out _{1} + \varphi ^int g^int _{2}(0) g^int _{1} ) \,{\textrm{d}}x , \end{aligned} \end{aligned}$$
(2.60)

which, using (2.15), reduces to (2.53).

In order to prove uniqueness of the solution, we need an energy estimate. To this end, we take \(\varphi =u_{0}\) and \(\psi =u^{\varSigma }_{0}\) in (2.53) and (2.54), respectively. By adding the resulting formulas to each other, we get

$$\begin{aligned} \begin{aligned}&\int \limits _{G} |{\nabla }u_{0}|^{2} \,{\textrm{d}}x \,{\textrm{d}}y + \varepsilon \int \limits _{\varSigma } \left| \frac{\partial u^{\varSigma }_{0}}{\partial x} \right| ^{2} \,{\textrm{d}}x + \frac{1}{2\varepsilon } \int \limits _{\varSigma } [u_{0}]_{\varSigma }^{2} \,{\textrm{d}}x \\&\quad + \frac{1}{2\varepsilon } \int \limits _{\varSigma } \{u_{0}\}_{\varSigma }^{2} \,{\textrm{d}}x + \frac{2}{\varepsilon } \int \limits _{\varSigma } |u^{\varSigma }_{0}|^{2} \,{\textrm{d}}x = \frac{\varepsilon }{4} \int \limits _{\varSigma } ( [u_{0}]_{\varSigma } [g]_{\varSigma } + \{u_{0}\}_{\varSigma } \{g\}_{\varSigma } ) \,{\textrm{d}}x \\&\quad + \frac{2}{\varepsilon } \int \limits _{\varSigma } \{u_{0}\}_{\varSigma } u^{\varSigma }_{0}\,{\textrm{d}}x + \frac{\varepsilon }{2} \int \limits _{\varSigma } u^{\varSigma }_{0}\{g\}_{\varSigma } \,{\textrm{d}}x + \int \limits _{G} fu_{0}\,{\textrm{d}}x \,{\textrm{d}}y . \end{aligned} \end{aligned}$$
(2.61)

Note that

$$\begin{aligned} 2 \{u_{0}\}_{\varSigma } u^{\varSigma }_{0}\le \frac{1}{2} \{u_{0}\}_{\varSigma }^{2} + 2|u^{\varSigma }_{0}|^{2} \end{aligned}$$

so that the second integral on the right-hand side of (2.61) is absorbed by the last two terms on the left-hand side. The remaining terms on the right-hand side are treated similarly and can be absorbed into the left-hand side by means of the Poincaré and trace inequalities. Eventually, we obtain

$$\begin{aligned}{} & {} \int \limits _{G} |{\nabla }u_{0}|^{2} \,{\textrm{d}}x \,{\textrm{d}}y + \varepsilon \int \limits _{\varSigma } \left| \frac{\partial u^{\varSigma }_{0}}{\partial x} \right| ^{2} \,{\textrm{d}}x + \frac{1}{2\varepsilon } \int \limits _{\varSigma } [u_{0}]_{\varSigma }^{2} \,{\textrm{d}}x\nonumber \\{} & {} \quad \le C\big ( \Vert f\Vert _{L^{2}(G)}^{2} + \varepsilon \Vert \{g\}_{\varSigma }\Vert _{L^{2}(\varSigma )}^{2} + \varepsilon ^{3} \Vert [g]_{\varSigma }\Vert _{L^{2}(\varSigma )}^{2} \big ) . \end{aligned}$$
(2.62)
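
The weights \(\varepsilon \) and \(\varepsilon ^{3}\) on the right-hand side of (2.62) come from Young's inequality with a suitable small parameter; for instance, the contribution of \([u_{0}]_{\varSigma }[g]_{\varSigma }\) in (2.61) may be bounded as

$$\begin{aligned} \frac{\varepsilon }{4} \int \limits _{\varSigma } [u_{0}]_{\varSigma } [g]_{\varSigma } \,{\textrm{d}}x \le \frac{1}{8\varepsilon } \int \limits _{\varSigma } [u_{0}]_{\varSigma }^{2} \,{\textrm{d}}x + \frac{\varepsilon ^{3}}{8} \int \limits _{\varSigma } [g]_{\varSigma }^{2} \,{\textrm{d}}x , \end{aligned}$$

where the first term is absorbed into the left-hand side and the second one is bounded by the right-hand side of (2.62).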

We point out that (2.62) proves uniqueness of the solution of problem (2.53)–(2.54), owing to its linear character. \(\square \)

3 The microscopic problem

Our geometrical setting is rather standard: we denote by \(Y=(0,1)^{N}\) the unit cell in \(\mathbb {R}^N\), \(N\ge 2\). We introduce a smooth connected open subset \(E_{int }\) such that \({\overline{E_{int }}} \subset Y\). Then, we set \(E_{out }=Y\setminus \overline{E_{int }}\) and \(\varGamma =\partial E_{int }\). In what follows, we refer to \(E_{int }\) as the inclusion, to \(E_{out }\) as the outer domain and to \(\varGamma \) as the interface.

We set our problem in the smooth bounded domain \(\varOmega \subset \mathbb {R}^{N}\). For any \(\varepsilon \in (0,1)\), we define the set

$$\begin{aligned} \Xi ^{\varepsilon } = \left\{ \xi \in \mathbb {Z}^N, \quad \varepsilon (\xi +Y)\subset \varOmega \right\} \end{aligned}$$

and for \(\xi \in \Xi ^\varepsilon \) we let

$$\begin{aligned} E_{int }^{\varepsilon ,\xi }:=\varepsilon (\xi +E_{int }) , \qquad \varGamma ^{\varepsilon }_{\xi }:=\partial E_{int }^{\varepsilon ,\xi } \end{aligned}$$

and

$$\begin{aligned} \varOmega _{int }^{\varepsilon }=\bigcup _{\xi \in \Xi ^{\varepsilon }} E_{int }^{\varepsilon ,\xi } , \qquad \varGamma ^{\varepsilon }=\partial \varOmega _{int }^{\varepsilon }=\bigcup _{\xi \in \Xi ^{\varepsilon }} \varGamma ^{\varepsilon }_{\xi } , \qquad \varOmega _{out }^{\varepsilon }= \varOmega \setminus \overline{\varOmega _{int }^{\varepsilon }} . \end{aligned}$$

In this paper, \(\nu \) is the normal unit vector to \(\varGamma \) pointing into \(E_{out }\); we denote by \(\nu _\varepsilon \) the normal unit vector to \(\varGamma ^{\varepsilon }\) pointing into \(\varOmega _{out }^{\varepsilon }\). Note that \(\varOmega _{out }^{\varepsilon }\) is connected and \(\varOmega _{int }^{\varepsilon }\) is disconnected.

We look at the following problem, which we state in several steps and eventually summarize in a rigorous weak formulation. In the following, \(f\in L^{2}(\varOmega )\), \(g^out _{\varepsilon }\), \(g^int _{\varepsilon }\in C(\overline{\varOmega })\) are given data.

The equations in the bulk of the domain, together with the outer boundary data, are as follows:

$$\begin{aligned} -{{\,\mathrm{\Delta }\,}}u_{\varepsilon }^{out }&= f\,, \qquad \text {in } \varOmega _{out }^{\varepsilon }, \end{aligned}$$
(3.1)
$$\begin{aligned} -{{\,\mathrm{\Delta }\,}}u_{\varepsilon }^{int }&= f\,, \qquad \text {in } \varOmega _{int }^{\varepsilon }, \end{aligned}$$
(3.2)
$$\begin{aligned} u_{\varepsilon }^{out }&= 0 \,, \qquad \text {on }\partial \varOmega . \end{aligned}$$
(3.3)

On \(\varGamma ^{\varepsilon }\), we prescribe for the unknown \(u_{\varepsilon }^{\varGamma }\)

$$\begin{aligned} - \varepsilon {{\,\mathrm{\Delta _{\mathcal {S}}}\,}}u_{\varepsilon }^{\varGamma }= \left[ \frac{\partial u_{\varepsilon }}{\partial {\nu _{\varepsilon }}}\right] _{\varGamma ^{\varepsilon }} + \varepsilon \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} , \qquad \text {on }\varGamma ^{\varepsilon }. \end{aligned}$$
(3.4)

The unknowns \(u_{\varepsilon }^{out }\), \(u_{\varepsilon }^{int }\) and \(u_{\varepsilon }^{\varGamma }\) are connected by the interface conditions

$$\begin{aligned} \left[ \frac{\partial u_{\varepsilon }}{\partial {\nu _{\varepsilon }}}\right] _{\varGamma ^{\varepsilon }}&= \frac{1}{\varepsilon } \{u_{\varepsilon }- u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }} - \frac{\varepsilon }{2} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,,&\qquad&\text {on }\varGamma ^{\varepsilon }, \end{aligned}$$
(3.5)
$$\begin{aligned} \left\{ \frac{\partial u_{\varepsilon }}{\partial {\nu _{\varepsilon }}}\right\} _{\varGamma ^{\varepsilon }}&= \frac{1}{\varepsilon } [u_{\varepsilon }]_{\varGamma ^{\varepsilon }} - \frac{\varepsilon }{2} [g_{\varepsilon }]_{\varGamma ^{\varepsilon }} \,,&\qquad&\text {on }\varGamma ^{\varepsilon }. \end{aligned}$$
(3.6)

It is perhaps interesting to note that (3.5) and (3.6) share some symmetry, given that \([u_{\varepsilon }]_{\varGamma ^{\varepsilon }}=[u_{\varepsilon }-u_{\varepsilon }^{\varGamma }]_{\varGamma ^{\varepsilon }}\). Also, we keep in (3.4)–(3.6) the coefficients of the given sources found in Sect. 2, since they actually follow from the concentration process.

When we combine (3.4) with (3.5), we obtain

$$\begin{aligned} - \varepsilon {{\,\mathrm{\Delta _{\mathcal {S}}}\,}}u_{\varepsilon }^{\varGamma }= \frac{1}{\varepsilon } \{u_{\varepsilon }- u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }} + \frac{\varepsilon }{2} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} , \qquad \text {on }\varGamma ^{\varepsilon }. \end{aligned}$$
(3.7)

We remark that (3.7) makes clear that we do not need to impose any additional condition, e.g., on the average of \(u_{\varepsilon }^{\varGamma }\) on each \(\varGamma ^{\varepsilon }_{\xi }\). Indeed, the unknown \(u_{\varepsilon }^{\varGamma }\) appears on the right-hand side, too, so that the associated energy functional vanishes only if \(u_{\varepsilon }^{\varGamma }\) does.

By means of the usual process of formal integration by parts, appealing also to (2.15), we obtain the following weak formulation of problem (3.1)–(3.7) for the solution \((u_{\varepsilon }^{out },u_{\varepsilon }^{int },u_{\varepsilon }^{\varGamma })\), with \(u_{\varepsilon }^{out }\in H^{1}(\varOmega _{out }^{\varepsilon })\), \(u_{\varepsilon }^{out }=0\) on \(\partial \varOmega \), \(u_{\varepsilon }^{int }\in H^{1}(\varOmega _{int }^{\varepsilon })\), \(u_{\varepsilon }^{\varGamma }\in H^{1}(\varGamma ^{\varepsilon })\), given by

$$\begin{aligned}{} & {} \int \limits _{\varOmega } {\nabla }u_{\varepsilon }\cdot {\nabla }\varphi \,{\textrm{d}}x + \frac{1}{2} \int \limits _{\varGamma ^{\varepsilon }} \Big ( \frac{1}{\varepsilon } \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }} - \frac{\varepsilon }{2} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \Big ) \{\varphi \}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma \nonumber \\{} & {} \quad + \frac{1}{2} \int \limits _{\varGamma ^{\varepsilon }} \Big ( \frac{1}{\varepsilon } [u_{\varepsilon }]_{\varGamma ^{\varepsilon }} - \frac{\varepsilon }{2} [g_{\varepsilon }]_{\varGamma ^{\varepsilon }} \Big ) [\varphi ]_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma = \int \limits _{\varOmega } f\varphi \,{\textrm{d}}x \end{aligned}$$
(3.8)

and

$$\begin{aligned} \varepsilon \int \limits _{\varGamma ^{\varepsilon }} {\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }\cdot {\nabla _{\mathcal {S}}}\varphi ^{\varGamma } \,{\textrm{d}}\sigma = \int \limits _{\varGamma ^{\varepsilon }} \Big ( \frac{1}{\varepsilon } \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }} + \frac{\varepsilon }{2} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \Big ) \varphi ^{\varGamma } \,{\textrm{d}}\sigma , \end{aligned}$$
(3.9)

for all test functions \((\varphi ^out ,\varphi ^int ,\varphi ^{\varGamma })\) in the same class as the solution. Here, \(\varphi ^{\varGamma }\) can be chosen independently of \(\varphi ^out \) and \(\varphi ^int \).

4 Energy inequalities

In this section, we collect some standard inequalities, which will be useful in the following. For the first two we refer, for instance, to [2], while the third one can be obtained by rescaling from the standard Poincaré-Wirtinger inequality. They are valid in the geometry used in this paper.

Poincaré inequality ([2, Lemma 7.1]): for all \(v\in L^{2}(\varOmega )\) such that \(v_{\mid \varOmega _{out }^{\varepsilon }}\) belongs to \(H^{1}(\varOmega _{out }^{\varepsilon })\), with \(v=0\) on \(\partial \varOmega \), and \(v_{\mid \varOmega _{int }^{\varepsilon }}\) belongs to \(H^{1}(\varOmega _{int }^{\varepsilon })\), we have

$$\begin{aligned} \int \limits _{\varOmega } v^{2} \,{\textrm{d}}x \le C\int \limits _{\varOmega } |{\nabla }v|^{2} \,{\textrm{d}}x + \frac{C}{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} [v]_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma . \end{aligned}$$
(4.1)

Trace inequality ([2, Formula 7.4]): for all \(v\in L^{2}(\varOmega )\) such that \(v_{\mid \varOmega _{out }^{\varepsilon }}\) belongs to \(H^{1}(\varOmega _{out }^{\varepsilon })\) and \(v_{\mid \varOmega _{int }^{\varepsilon }}\) belongs to \(H^{1}(\varOmega _{int }^{\varepsilon })\), we have

$$\begin{aligned} \int \limits _{\varGamma ^{\varepsilon }} ( |v^{out }|^{2} + |v^{int }|^{2} ) \,{\textrm{d}}\sigma \le \frac{C}{\varepsilon } \int \limits _{\varOmega } v^{2} \,{\textrm{d}}x + C\varepsilon \int \limits _{\varOmega } |{\nabla }v|^{2} \,{\textrm{d}}x . \end{aligned}$$
(4.2)

Poincaré inequality on \(\varGamma ^{\varepsilon }_{\xi }\): for all \(v\in H^{1}(\varGamma ^{\varepsilon }_{\xi })\), we have

$$\begin{aligned} \int \limits _{\varGamma ^{\varepsilon }_{\xi }} v^{2} \,{\textrm{d}}\sigma \le C\varepsilon ^{2} \int \limits _{\varGamma ^{\varepsilon }_{\xi }} |{\nabla _{\mathcal {S}}}v|^{2} \,{\textrm{d}}\sigma + \frac{C}{|\varGamma ^{\varepsilon }_{\xi }|} \left| \int \limits _{\varGamma ^{\varepsilon }_{\xi }} v \,{\textrm{d}}\sigma \right| ^{2} . \end{aligned}$$
(4.3)
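
Inequality (4.3) can be obtained, for instance, by applying the standard Poincaré-Wirtinger inequality on \(\varGamma \) to the rescaled function \(w(y)=v(\varepsilon \xi +\varepsilon y)\), \(y\in \varGamma \), and then using the scalings

$$\begin{aligned} \int \limits _{\varGamma ^{\varepsilon }_{\xi }} v^{2} \,{\textrm{d}}\sigma = \varepsilon ^{N-1} \int \limits _{\varGamma } w^{2} \,{\textrm{d}}\sigma _{y} , \qquad \int \limits _{\varGamma ^{\varepsilon }_{\xi }} |{\nabla _{\mathcal {S}}}v|^{2} \,{\textrm{d}}\sigma = \varepsilon ^{N-3} \int \limits _{\varGamma } |{\nabla _{\mathcal {S}}}_{y} w|^{2} \,{\textrm{d}}\sigma _{y} , \qquad |\varGamma ^{\varepsilon }_{\xi }| = \varepsilon ^{N-1} |\varGamma | , \end{aligned}$$

which account for the factor \(\varepsilon ^{2}\) in front of the tangential gradient in (4.3).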

Theorem 4.1

For the weak solution of problem (3.8)–(3.9), we have

$$\begin{aligned}{} & {} \int \limits _{\varOmega } |{\nabla }u_{\varepsilon }|^{2} \,{\textrm{d}}x + \varepsilon \int \limits _{\varGamma ^{\varepsilon }} |{\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }|^{2} \,{\textrm{d}}\sigma + \frac{1}{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} ( \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }}^{2} + [u_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} ) \,{\textrm{d}}\sigma \nonumber \\{} & {} \quad \le C\int \limits _{\varOmega } f^{2} \,{\textrm{d}}x + C\varepsilon \int \limits _{\varGamma ^{\varepsilon }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma + C\varepsilon ^{3} \int \limits _{\varGamma ^{\varepsilon }} [g_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma . \end{aligned}$$
(4.4)

Proof

First, on taking \(\varphi =u_{\varepsilon }\) and \(\varphi ^{\varGamma }=u_{\varepsilon }^{\varGamma }\) in (3.8)–(3.9), and then on adding the resulting equalities, we arrive at

$$\begin{aligned} \begin{aligned}&\int \limits _{\varOmega } |{\nabla }u_{\varepsilon }|^{2} \,{\textrm{d}}x + \frac{1}{2\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} ( \{u_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} + [u_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} + 4 |u_{\varepsilon }^{\varGamma }|^{2} ) \,{\textrm{d}}\sigma + \varepsilon \int \limits _{\varGamma ^{\varepsilon }} |{\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }|^{2} \,{\textrm{d}}\sigma \\&\quad = \frac{2}{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} u_{\varepsilon }^{\varGamma }\{u_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma + \frac{\varepsilon }{4} \int \limits _{\varGamma ^{\varepsilon }} \{u_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma + \frac{\varepsilon }{4} \int \limits _{\varGamma ^{\varepsilon }} [u_{\varepsilon }]_{\varGamma ^{\varepsilon }} [g_{\varepsilon }]_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma \\&\qquad + \frac{\varepsilon }{2} \int \limits _{\varGamma ^{\varepsilon }} u_{\varepsilon }^{\varGamma }\{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma + \int \limits _{\varOmega } fu_{\varepsilon }\,{\textrm{d}}x =: \sum _{h=1}^{5} I_{h} . \end{aligned} \end{aligned}$$
(4.5)

We may easily compute

$$\begin{aligned} \frac{1}{2} \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }}^{2} = \frac{1}{2} \{u_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} - 2 u_{\varepsilon }^{\varGamma }\{u_{\varepsilon }\}_{\varGamma ^{\varepsilon }} + 2 |u_{\varepsilon }^{\varGamma }|^{2} . \end{aligned}$$
(4.6)

Thus, (4.5) immediately yields

$$\begin{aligned} \int \limits _{\varOmega } |{\nabla }u_{\varepsilon }|^{2} \,{\textrm{d}}x + \frac{1}{2\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} ( \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }}^{2} + [u_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} ) \,{\textrm{d}}\sigma + \varepsilon \int \limits _{\varGamma ^{\varepsilon }} |{\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }|^{2} \,{\textrm{d}}\sigma = \sum _{h=2}^{5} I_{h} . \end{aligned}$$
(4.7)

Then, we observe, by means of (4.1) and (4.2), that

$$\begin{aligned} \begin{aligned} I_{2}&\le \delta \varepsilon \int \limits _{\varGamma ^{\varepsilon }} ( |u_{\varepsilon }^{out }|^{2} + |u_{\varepsilon }^{int }|^{2} ) \,{\textrm{d}}\sigma + \frac{C\varepsilon }{\delta } \int \limits _{\varGamma ^{\varepsilon }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma \\&\le C\delta \int \limits _{\varOmega } u_{\varepsilon }^{2} \,{\textrm{d}}x + C\delta \varepsilon ^{2} \int \limits _{\varOmega } |{\nabla }u_{\varepsilon }|^{2} \,{\textrm{d}}x + \frac{C\varepsilon }{\delta } \int \limits _{\varGamma ^{\varepsilon }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma \\&\le \frac{C\delta }{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} [u_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma + C\delta \int \limits _{\varOmega } |{\nabla }u_{\varepsilon }|^{2} \,{\textrm{d}}x + \frac{C\varepsilon }{\delta } \int \limits _{\varGamma ^{\varepsilon }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma , \end{aligned} \end{aligned}$$
(4.8)

for a small \(\delta >0\), independent of \(\varepsilon \), to be chosen later. Next, we bound

$$\begin{aligned} I_{3} \le \frac{\delta }{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} [u_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma + \frac{C}{\delta } \varepsilon ^{3} \int \limits _{\varGamma ^{\varepsilon }} [g_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma . \end{aligned}$$
(4.9)

In order to bound \(I_{4}\), we need an estimate of the mean value of \(u_{\varepsilon }^{\varGamma }\) on each component \(\varGamma ^{\varepsilon }_{\xi }\); from (3.9), with \(\varphi ^{\varGamma }=1\), we get

$$\begin{aligned} \frac{1}{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }_{\xi }} u_{\varepsilon }^{\varGamma }\,{\textrm{d}}\sigma = \frac{1}{2\varepsilon } \int \limits _{\varGamma ^{\varepsilon }_{\xi }} \{u_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma + \frac{\varepsilon }{4} \int \limits _{\varGamma ^{\varepsilon }_{\xi }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma , \end{aligned}$$
(4.10)

implying

$$\begin{aligned} \frac{1}{|\varGamma ^{\varepsilon }_{\xi }|} \left| \int \limits _{\varGamma ^{\varepsilon }_{\xi }} u_{\varepsilon }^{\varGamma }\,{\textrm{d}}\sigma \right| ^{2} \le C\int \limits _{\varGamma ^{\varepsilon }_{\xi }} ( |u_{\varepsilon }^{out }|^{2} + |u_{\varepsilon }^{int }|^{2} ) \,{\textrm{d}}\sigma + C\varepsilon ^{4} \int \limits _{\varGamma ^{\varepsilon }_{\xi }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma . \end{aligned}$$
(4.11)
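
Indeed, (4.11) follows from (4.10) by means of the elementary inequality \((a+b)^{2}\le 2a^{2}+2b^{2}\) and the Cauchy-Schwarz inequality:

$$\begin{aligned} \left| \int \limits _{\varGamma ^{\varepsilon }_{\xi }} u_{\varepsilon }^{\varGamma }\,{\textrm{d}}\sigma \right| ^{2} \le \frac{1}{2} \left| \int \limits _{\varGamma ^{\varepsilon }_{\xi }} \{u_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma \right| ^{2} + \frac{\varepsilon ^{4}}{8} \left| \int \limits _{\varGamma ^{\varepsilon }_{\xi }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma \right| ^{2} \le \frac{|\varGamma ^{\varepsilon }_{\xi }|}{2} \int \limits _{\varGamma ^{\varepsilon }_{\xi }} \{u_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma + \frac{\varepsilon ^{4} |\varGamma ^{\varepsilon }_{\xi }|}{8} \int \limits _{\varGamma ^{\varepsilon }_{\xi }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma ; \end{aligned}$$

dividing by \(|\varGamma ^{\varepsilon }_{\xi }|\) and using \(\{u_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2}\le 2(|u_{\varepsilon }^{out}|^{2}+|u_{\varepsilon }^{int}|^{2})\), we recover (4.11).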

Thus, (4.3), together with (4.11), yields

$$\begin{aligned} \begin{aligned} I_{4}&\le \delta \varepsilon \int \limits _{\varGamma ^{\varepsilon }} |u_{\varepsilon }^{\varGamma }|^{2} \,{\textrm{d}}\sigma + C\frac{\varepsilon }{\delta } \int \limits _{\varGamma ^{\varepsilon }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma \\&\le C\delta \varepsilon ^{3} \int \limits _{\varGamma ^{\varepsilon }} |{\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }|^{2} \,{\textrm{d}}\sigma + C\delta \varepsilon \int \limits _{\varGamma ^{\varepsilon }} ( |u_{\varepsilon }^{out }|^{2} + |u_{\varepsilon }^{int }|^{2} ) \,{\textrm{d}}\sigma + \Big ( C\delta \varepsilon ^{5} + \frac{C\varepsilon }{\delta } \Big ) \int \limits _{\varGamma ^{\varepsilon }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma . \end{aligned} \end{aligned}$$
(4.12)

Note that we may reason again as in (4.8) to bound the second term on the last right-hand side of (4.12). Finally, on using also (4.1), we have the standard inequality

$$\begin{aligned}{} & {} I_{5} \le \delta \int \limits _{\varOmega } u_{\varepsilon }^{2} \,{\textrm{d}}x + \frac{1}{4\delta } \int \limits _{\varOmega } f^{2} \,{\textrm{d}}x \nonumber \\{} & {} \le C\delta \int \limits _{\varOmega } |{\nabla }u_{\varepsilon }|^{2} \,{\textrm{d}}x + \frac{C\delta }{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} [u_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} \,{\textrm{d}}\sigma + \frac{1}{4\delta } \int \limits _{\varOmega } f^{2} \,{\textrm{d}}x . \end{aligned}$$
(4.13)

Eventually, on selecting \(\delta \) small enough in the above estimates, from (4.7)–(4.13) we infer (4.4). \(\square \)

Corollary 4.2

Assume that for a constant C independent of \(\varepsilon \)

$$\begin{aligned} \int \limits _{\varGamma ^{\varepsilon }} ( \varepsilon \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }}^{2} + \varepsilon ^{3} [g_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} ) \,{\textrm{d}}\sigma \le C . \end{aligned}$$
(4.14)

Then, for the weak solution of problem (3.8)–(3.9), we have

$$\begin{aligned}{} & {} \int \limits _{\varOmega } |{\nabla }u_{\varepsilon }|^{2} \,{\textrm{d}}x + \varepsilon \int \limits _{\varGamma ^{\varepsilon }} |{\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }|^{2} \,{\textrm{d}}\sigma + \frac{1}{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} ( \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }}^{2} + [u_{\varepsilon }]_{\varGamma ^{\varepsilon }}^{2} ) \,{\textrm{d}}\sigma \nonumber \\{} & {} \quad + \varepsilon \int \limits _{\varGamma ^{\varepsilon }} ( |u_{\varepsilon }^{out }|^{2} + |u_{\varepsilon }^{int }|^{2} + |u_{\varepsilon }^{\varGamma }|^{2} ) \,{\textrm{d}}\sigma \le C. \end{aligned}$$
(4.15)

Proof

Estimate (4.15) follows at once from (4.14), (4.4), (4.3), (4.11) and (4.2). \(\square \)

5 Existence for the differential problems

In this section, we state and prove some well-posedness results concerning the differential problems.

5.1 Existence for the cell problems

For a given function \(\widetilde{\varphi }\in L^2(\varGamma )\), we set \( \mathcal {M}_{\varGamma }(\widetilde{\varphi })=\frac{1}{|\varGamma |}\int \limits _{\varGamma }\widetilde{\varphi }\,\,{\textrm{d}}\sigma _y\). We consider the problem

$$\begin{aligned} - {{\,\textrm{div}\,}}_{y}\big ( {\nabla }_{y} ({\widehat{\chi }}_{i}+y^\varGamma _{i}) \big )&= 0 \,,&\qquad&\text {in } E_{out }\cup E_{int }, \end{aligned}$$
(5.1)
$$\begin{aligned}{}[{\nabla }_{y}({\widehat{\chi }}_{i}+y^\varGamma _{i})\cdot \nu ]_{\varGamma }&= \{{\widehat{\chi }}_{i}-{\widetilde{\chi }}_{i}\}_{\varGamma }\,,&\qquad&\text {on }\varGamma , \end{aligned}$$
(5.2)
$$\begin{aligned} {\nabla }_{y}({\widehat{\chi }}^{int }_{i}+y^\varGamma _{i})\cdot \nu&= \frac{1}{2} [{\widehat{\chi }}_{i}]_{\varGamma } - \frac{1}{2} \{{\widehat{\chi }}_{i}-{\widetilde{\chi }}_{i}\}_{\varGamma } \,,&\qquad&\text {on }\varGamma , \end{aligned}$$
(5.3)
$$\begin{aligned} -{{\,\mathrm{div_{\mathcal {S}}}\,}}_{y}\big ( {\nabla _{\mathcal {S}}}_{y}({\widetilde{\chi }}_{i}+y^\varGamma _{i}) \big )&= [{\nabla }_{y}({\widehat{\chi }}_{i}+y^\varGamma _{i})\cdot \nu ]_{\varGamma } \,,&\qquad&\text {on }\varGamma , \end{aligned}$$
(5.4)

for a fixed \(i\in \{1,\dots ,N\}\), where \(y^\varGamma = y- \mathcal {M}_{\varGamma }(y)\).

We note for further use that (5.2) and (5.3) are equivalent to (5.2) and

$$\begin{aligned} \{{\nabla }_{y}({\widehat{\chi }}_{i}+y^\varGamma _{i})\cdot \nu \}_{\varGamma } = [{\widehat{\chi }}_{i}]_{\varGamma } , \qquad \text {on }\varGamma . \end{aligned}$$
(5.5)
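
Indeed, since, for any quantity \(a\) defined on both sides of \(\varGamma \), one has \(a^{int }=(\{a\}_{\varGamma }-[a]_{\varGamma })/2\), condition (5.3) can be rewritten as

$$\begin{aligned} \frac{1}{2} \{{\nabla }_{y}({\widehat{\chi }}_{i}+y^\varGamma _{i})\cdot \nu \}_{\varGamma } - \frac{1}{2} [{\nabla }_{y}({\widehat{\chi }}_{i}+y^\varGamma _{i})\cdot \nu ]_{\varGamma } = \frac{1}{2} [{\widehat{\chi }}_{i}]_{\varGamma } - \frac{1}{2} \{{\widehat{\chi }}_{i}-{\widetilde{\chi }}_{i}\}_{\varGamma } , \end{aligned}$$

which, in view of (5.2), reduces exactly to (5.5).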

For the sake of notational simplicity, we write \({\widehat{\chi }}_{i}={{\widehat{v}}}\), \({\widetilde{\chi }}_{i}={{\widetilde{v}}}\). We introduce the space where we seek our solution \(({{\widehat{v}}},{{\widetilde{v}}})\) as

$$\begin{aligned} \mathcal {H}= \{ ({\widehat{\varphi }},{\widetilde{\varphi }}) \in H^{1}_{\#}(Y\setminus \varGamma )\times H^{1}(\varGamma ) \mid \mathcal {M}_{\varGamma }(\{{\widehat{\varphi }}\}_{\varGamma }) = 0 , \mathcal {M}_{\varGamma }({\widetilde{\varphi }}) = 0 \} , \end{aligned}$$
(5.6)

where \(H^{1}_{\#}(Y\setminus \varGamma )\) denotes the space of \(Y\)-periodic functions belonging, separately, to \(H^{1}(E_{out })\) and \(H^{1}(E_{int })\). By a routine integration by parts, we arrive at the integral equations (owing also to (2.15), (5.5))

$$\begin{aligned}{} & {} B_{1}\big (({{\widehat{v}}},{{\widetilde{v}}}), ({\widehat{\varphi }},{\widetilde{\varphi }})\big ):= \int \limits _{Y} {\nabla }_{y}{{\widehat{v}}}\cdot {\nabla }_{y}{\widehat{\varphi }}\,{\textrm{d}}y + \frac{1}{2} \int \limits _{\varGamma } \big ( \{{{\widehat{v}}}-{{\widetilde{v}}}\}_{\varGamma } \{{\widehat{\varphi }}\}_{\varGamma } + [{{\widehat{v}}}]_{\varGamma } [{\widehat{\varphi }}]_{\varGamma } \big ) \,{\textrm{d}}\sigma _{y} \nonumber \\{} & {} \quad = - \int \limits _{Y} {\nabla }_{y}y^\varGamma _{i} \cdot {\nabla }_{y}{\widehat{\varphi }}\,{\textrm{d}}y =: F_{1}\big (({\widehat{\varphi }},{\widetilde{\varphi }})\big ) , \end{aligned}$$
(5.7)

and

$$\begin{aligned}{} & {} B_{2}\big (({{\widehat{v}}},{{\widetilde{v}}}), ({\widehat{\varphi }},{\widetilde{\varphi }})\big ):= \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y} {{\widetilde{v}}}\cdot {\nabla _{\mathcal {S}}}_{y} {\widetilde{\varphi }}\,{\textrm{d}}\sigma _{y} - \int \limits _{\varGamma } \{{{\widehat{v}}}-{{\widetilde{v}}}\}_{\varGamma } {\widetilde{\varphi }}\,{\textrm{d}}\sigma _{y} \nonumber \\{} & {} \quad = - \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y} y^\varGamma _{i} \cdot {\nabla _{\mathcal {S}}}_{y} {\widetilde{\varphi }}\,{\textrm{d}}\sigma _{y} =: F_{2}\big (({\widehat{\varphi }},{\widetilde{\varphi }})\big ) . \end{aligned}$$
(5.8)
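
We briefly sketch the computation leading to (5.7): testing (5.1) with \({\widehat{\varphi }}\), integrating by parts separately in \(E_{out }\) and \(E_{int }\) (the contributions on \(\partial Y\) cancel by periodicity) and using the product rule (2.15), as in (6.43) below, we get

$$\begin{aligned} \int \limits _{Y} {\nabla }_{y}({{\widehat{v}}}+y^\varGamma _{i})\cdot {\nabla }_{y}{\widehat{\varphi }}\,{\textrm{d}}y = - \frac{1}{2} \int \limits _{\varGamma } \big ( [{\widehat{\varphi }}]_{\varGamma } \{{\nabla }_{y}({{\widehat{v}}}+y^\varGamma _{i})\cdot \nu \}_{\varGamma } + \{{\widehat{\varphi }}\}_{\varGamma } [{\nabla }_{y}({{\widehat{v}}}+y^\varGamma _{i})\cdot \nu ]_{\varGamma } \big ) \,{\textrm{d}}\sigma _{y} , \end{aligned}$$

and (5.7) follows on replacing the normal fluxes by means of (5.5) and (5.2). Equation (5.8) is obtained in the same way, testing (5.4) with \({\widetilde{\varphi }}\), integrating by parts on \(\varGamma \) and using (5.2).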

Note that, in order to derive (5.7), (5.8), we do not need the normalization condition on \({\widehat{\varphi }}\). However, in the following result we check explicitly that our notion of weak solution is indeed the correct one, which is perhaps not obvious given the restrictions we place on the test functions.

Lemma 5.1

Assume that the pair \(({{\widehat{v}}},{{\widetilde{v}}})\in \mathcal {H}\) satisfies (5.7)–(5.8), for all \(({\widehat{\varphi }},{\widetilde{\varphi }})\in \mathcal {H}\), and that \({{\widehat{v}}}_{\mid E_{out }}\in C^{2}(\overline{E_{out }})\), \({{\widehat{v}}}_{\mid E_{int }}\in C^{2}(\overline{E_{int }})\), \({{\widetilde{v}}}\in C^{2}(\varGamma )\). Then, (5.1)–(5.5) are fulfilled in a classical pointwise sense.

Proof

Take first in (5.7) \({\widehat{\varphi }}\in H^{1}_{\#}(Y\setminus \varGamma )\), with \({\widehat{\varphi }}^{out }={\widehat{\varphi }}^{int }=0\) on \(\varGamma \). The differential equations in (5.1) follow by integration by parts, owing to the assumed regularity of \({{\widehat{v}}}={\widehat{\chi }}_{i}\). Then, we integrate these equations by parts once more, using a general test function \({\widehat{\varphi }}\in C^{1}_{\#}(Y)\) such that \(\mathcal {M}_{\varGamma }({\widehat{\varphi }})=0\), to obtain

$$\begin{aligned}{} & {} \int \limits _{Y} {\nabla }_{y}({{\widehat{v}}}+ y^\varGamma _{i} ) \cdot {\nabla }_{y}{\widehat{\varphi }}\,{\textrm{d}}y = - \int \limits _{\varGamma } [{\widehat{\varphi }}{\nabla }_{y}({{\widehat{v}}}+ y^\varGamma _{i}) \cdot \nu ]_{\varGamma } \,{\textrm{d}}\sigma _{y} \nonumber \\{} & {} \quad = - \int \limits _{\varGamma } {\widehat{\varphi }}[{\nabla }_{y}({{\widehat{v}}}+ y^\varGamma _{i}) \cdot \nu ]_{\varGamma } \,{\textrm{d}}\sigma _{y} . \end{aligned}$$
(5.9)

On comparing (5.7) and (5.9), we infer that, since \(\{{\widehat{\varphi }}\}_{\varGamma }=2{\widehat{\varphi }}\), \([{\widehat{\varphi }}]_{\varGamma }=0\),

$$\begin{aligned}{}[{\nabla }_{y}({{\widehat{v}}}+ y^\varGamma _{i}) \cdot \nu ]_{\varGamma } - \{{{\widehat{v}}}-{{\widetilde{v}}}\}_{\varGamma } = c , \qquad \text {on } \varGamma , \end{aligned}$$
(5.10)

for a suitable constant c. But each one of the quantities on the left-hand side of (5.10) has zero integral on \(\varGamma \): the first one owing to (5.1) (and the periodicity of \({{\widehat{v}}}\)), and the second one owing to our definition of \(\mathcal {H}\). Hence \(c=0\), and condition (5.2) is proved.

Next, we note that for an arbitrary \({\widehat{\varphi }}^{\varGamma }\in C^{1}(\varGamma )\) we may easily construct a \({\widehat{\varphi }}\in H^{1}_{\#}(Y\setminus \varGamma )\) with \(\{{\widehat{\varphi }}\}_{\varGamma }=0\), \([{\widehat{\varphi }}]_{\varGamma }=2{\widehat{\varphi }}^{\varGamma }\). On writing (5.7) for this test function and comparing it to the first equality in (5.9), which holds true for all \(({\widehat{\varphi }},{\widetilde{\varphi }})\in \mathcal {H}\), we find on account of (2.15)

$$\begin{aligned} \frac{1}{2} \int \limits _{\varGamma } [{{\widehat{v}}}]_{\varGamma } [{\widehat{\varphi }}]_{\varGamma } \,{\textrm{d}}\sigma _{y} = \frac{1}{2} \int \limits _{\varGamma } \{{\nabla }_{y}({{\widehat{v}}}+ y^\varGamma _{i}) \cdot \nu \}_{\varGamma } [{\widehat{\varphi }}]_{\varGamma } \,{\textrm{d}}\sigma _{y} , \end{aligned}$$
(5.11)

whence (5.5) follows.

Finally, directly from (5.8), we obtain

$$\begin{aligned} -{{\,\mathrm{div_{\mathcal {S}}}\,}}_{y}\big ( {\nabla _{\mathcal {S}}}_{y}({{\widetilde{v}}}+y^\varGamma _{i}) \big ) - \{{{\widehat{v}}}-{{\widetilde{v}}}\}_{\varGamma } = c , \qquad \text {on }\varGamma , \end{aligned}$$
(5.12)

for a suitable constant c. But each one of the terms on the left-hand side of (5.12) has zero integral on \(\varGamma \), thus \(c=0\) and (on recalling the already established (5.2)) (5.4) is proved. \(\square \)

Given that for any \({\widehat{\varphi }}\in H^{1}_{\#}(Y\setminus \varGamma )\) we have \({\widehat{\varphi }}^{out }=(\{{\widehat{\varphi }}\}_{\varGamma }+[{\widehat{\varphi }}]_{\varGamma })/2\), \({\widehat{\varphi }}^{int }=(\{{\widehat{\varphi }}\}_{\varGamma }-[{\widehat{\varphi }}]_{\varGamma })/2\), from standard results we have

$$\begin{aligned} \int \limits _{Y} {\widehat{\varphi }}^{2} \,{\textrm{d}}y \le C\int \limits _{Y} |{\nabla }_{y}{\widehat{\varphi }}|^{2} \,{\textrm{d}}y + C\int \limits _{\varGamma } (\{{\widehat{\varphi }}\}_{\varGamma }^{2} + [{\widehat{\varphi }}]_{\varGamma }^{2} ) \,{\textrm{d}}\sigma _{y} . \end{aligned}$$
(5.13)
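
Estimate (5.13) can be obtained, for instance, by applying in each of the two subdomains the Poincaré-type inequality with boundary term,

$$\begin{aligned} \int \limits _{E_{out }} {\widehat{\varphi }}^{2} \,{\textrm{d}}y \le C\int \limits _{E_{out }} |{\nabla }_{y}{\widehat{\varphi }}|^{2} \,{\textrm{d}}y + C\int \limits _{\varGamma } |{\widehat{\varphi }}^{out }|^{2} \,{\textrm{d}}\sigma _{y} \end{aligned}$$

(and its analogue in \(E_{int }\)), and then estimating the traces by means of the identity \(|{\widehat{\varphi }}^{out }|^{2}+|{\widehat{\varphi }}^{int }|^{2}=\frac{1}{2}(\{{\widehat{\varphi }}\}_{\varGamma }^{2}+[{\widehat{\varphi }}]_{\varGamma }^{2})\), which follows from the formulas recalled above.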

Next, we prove our existence result.

Theorem 5.2

There exists a unique \(({{\widehat{v}}},{{\widetilde{v}}})\in \mathcal {H}\) that satisfies (5.7)–(5.8), for all \(({\widehat{\varphi }},{\widetilde{\varphi }})\in \mathcal {H}\).

Proof

Let us equip \(\mathcal {H}\) with the inner product

$$\begin{aligned} B\big (({{\widehat{v}}},{{\widetilde{v}}}), ({\widehat{\varphi }},{\widetilde{\varphi }})\big ) = B_{1}\big (({{\widehat{v}}},{{\widetilde{v}}}), ({\widehat{\varphi }},{\widetilde{\varphi }})\big ) + B_{2}\big (({{\widehat{v}}},{{\widetilde{v}}}), ({\widehat{\varphi }},{\widetilde{\varphi }})\big ) , \qquad ({{\widehat{v}}},{{\widetilde{v}}}), ({\widehat{\varphi }},{\widetilde{\varphi }})\in \mathcal {H}, \end{aligned}$$

which induces the norm

$$\begin{aligned} \Vert ({\widehat{\varphi }},{\widetilde{\varphi }})\Vert _{\mathcal {H}}^{2} = \Vert {\nabla }_{y}{\widehat{\varphi }}\Vert _{L^{2}(Y)}^{2} + \Vert {\nabla _{\mathcal {S}}}_{y}{\widetilde{\varphi }}\Vert _{L^{2}(\varGamma )}^{2} + \frac{1}{2} \Vert \{{\widehat{\varphi }}-{\widetilde{\varphi }}\}_{\varGamma }\Vert _{L^{2}(\varGamma )}^{2} + \frac{1}{2} \Vert [{\widehat{\varphi }}]_{\varGamma }\Vert _{L^{2}(\varGamma )}^{2} . \end{aligned}$$

We claim that \(\mathcal {H}\) is a Hilbert space; to this end, we check that the inner product B is positive definite and that \(\mathcal {H}\) is complete. If \(\Vert ({\widehat{\varphi }},{\widetilde{\varphi }})\Vert _{\mathcal {H}}=0\), then \({\widetilde{\varphi }}\) is constant, since its gradient vanishes; but, then, \({\widetilde{\varphi }}=0\), owing to the normalization condition \(\mathcal {M}_{\varGamma }({\widetilde{\varphi }})=0\). Hence, both \(\{{\widehat{\varphi }}\}_{\varGamma }=0\) and \([{\widehat{\varphi }}]_{\varGamma }=0\), whence \({\widehat{\varphi }}=0\) in \(Y\) on invoking (5.13). Next, let \(\{({\widehat{\varphi }}_{n},{\widetilde{\varphi }}_{n})\}\) be a Cauchy sequence in \(\mathcal {H}\). The standard Poincaré inequality on \(\varGamma \) yields \({\widetilde{\varphi }}_{n}\rightarrow {\widetilde{\varphi }}\) in \(H^{1}(\varGamma )\), with \(\mathcal {M}_{\varGamma }({\widetilde{\varphi }})=0\). Thus, from the definition of Cauchy sequence in \(\mathcal {H}\), we get that both \(\{\{{\widehat{\varphi }}_{n}\}_{\varGamma }\}\) and \(\{[{\widehat{\varphi }}_{n}]_{\varGamma }\}\) are Cauchy sequences in \(L^{2}(\varGamma )\). Hence, we may appeal to (5.13) to obtain that \(\{{\widehat{\varphi }}_{n}\}\) converges in \(H^{1}_{\#}(Y\setminus \varGamma )\). The standard trace inequality then implies that \(\mathcal {M}_{\varGamma }(\{{\widehat{\varphi }}\}_{\varGamma })=0\), i.e., \(({\widehat{\varphi }},{\widetilde{\varphi }})\in \mathcal {H}\); we have, thus, proved the completeness of \(\mathcal {H}\).

Owing to the Riesz theorem, there exists a unique element \(({{\widehat{v}}},{{\widetilde{v}}})\in \mathcal {H}\) such that, for all \(({\widehat{\varphi }},{\widetilde{\varphi }})\in \mathcal {H}\),

$$\begin{aligned} B\big (({{\widehat{v}}},{{\widetilde{v}}}), ({\widehat{\varphi }},{\widetilde{\varphi }})\big ) = F\big (({\widehat{\varphi }},{\widetilde{\varphi }})\big ):= F_{1}\big (({\widehat{\varphi }},{\widetilde{\varphi }})\big ) + F_{2}\big (({\widehat{\varphi }},{\widetilde{\varphi }})\big ) ; \end{aligned}$$
(5.14)

indeed, F is a linear continuous functional on \(\mathcal {H}\).

We are left with the task of showing that (5.14) implies (5.7) and (5.8), the converse and thus uniqueness being obvious. But this is accomplished by selecting separately \({\widehat{\varphi }}=0\) and \({\widetilde{\varphi }}=0\). \(\square \)

Next, for \(j_{g}\in L^{2}(\varGamma )\), we consider the problem

$$\begin{aligned} {}- {{\,\mathrm{\Delta }\,}}_{y} J&= 0 \,,&\qquad&\text {in }E_{out }\cup E_{int }, \end{aligned}$$
(5.15)
$$\begin{aligned}{}[{\nabla }_{y}J\cdot \nu ]_{\varGamma }&= \{J-H\}_{\varGamma } \,,&\qquad&\text {on }\varGamma , \end{aligned}$$
(5.16)
$$\begin{aligned} {\nabla }_{y}J^{int }\cdot \nu&= - \frac{1}{4} j_{g}+ \frac{1}{2} [J]_{\varGamma } - \frac{1}{2} \{J-H\}_{\varGamma } \,,&\qquad&\text {on }\varGamma , \end{aligned}$$
(5.17)
$$\begin{aligned} - {{\,\mathrm{div_{\mathcal {S}}}\,}}_{y}( {\nabla _{\mathcal {S}}}_{y} H)&= \{J-H\}_{\varGamma } \,,&\qquad&\text {on }\varGamma . \end{aligned}$$
(5.18)

As above, we note for further use that (5.16) and (5.17) are equivalent to (5.16) and

$$\begin{aligned} \{{\nabla }_{y}J\cdot \nu \}_{\varGamma } = - \frac{1}{2} j_{g}+ [J]_{\varGamma } , \qquad \text {on } \varGamma . \end{aligned}$$
(5.19)

Reasoning as above (on appealing to (5.19), too), we may rewrite problem (5.15)–(5.18) in the following weak form, where we separate the bilinear part from the linear functional:

$$\begin{aligned} \int \limits _{Y} {\nabla }_{y}J\cdot {\nabla }_{y}{\widehat{\varphi }}\,{\textrm{d}}y + \frac{1}{2} \int \limits _{\varGamma } \big ( \{J-H\}_{\varGamma } \{{\widehat{\varphi }}\}_{\varGamma } + [J]_{\varGamma } [{\widehat{\varphi }}]_{\varGamma } \big ) \,{\textrm{d}}\sigma _{y} = \frac{1}{4} \int \limits _{\varGamma } j_{g}[{\widehat{\varphi }}]_{\varGamma } \,{\textrm{d}}\sigma _{y} , \end{aligned}$$
(5.20)

and

$$\begin{aligned} \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y} H\cdot {\nabla _{\mathcal {S}}}_{y} {\widetilde{\varphi }}\,{\textrm{d}}\sigma _{y} - \int \limits _{\varGamma } \{J-H\}_{\varGamma } {\widetilde{\varphi }}\,{\textrm{d}}\sigma _{y} = 0 . \end{aligned}$$
(5.21)

Here, for \(({\widehat{\varphi }},{\widetilde{\varphi }})\in \mathcal {H}\), the bilinear part is exactly the same as in (5.7) and (5.8), therefore the following result can be proved as in Theorem 5.2.

Theorem 5.3

For any \(j_{g}\in L^{2}(\varGamma )\) there exists a unique \((J,H)\in \mathcal {H}\) that satisfies (5.20)–(5.21) for all \(({\widehat{\varphi }},{\widetilde{\varphi }})\in \mathcal {H}\).

As in Lemma 5.1, we may see that weak solutions in the sense of Theorem 5.3 are in fact classical if they are smooth enough.

Remark 5.4

We introduce in the definition of the space \(\mathcal {H}\) in (5.6) the normalization condition \(\mathcal {M}_{\varGamma }(\{\widehat{\varphi }\}_{\varGamma })=0\) since, in fact, for the solution \((\widehat{\chi }_i,\widetilde{\chi }_i)\) this condition is automatically satisfied, being a byproduct of (5.1), (5.2) and the normalization condition \(\mathcal {M}_{\varGamma }(\widetilde{\chi }_i)=0\).

It follows from (5.3) that \(\mathcal {M}_{\varGamma }([{\widehat{\chi }}_{i}]_{\varGamma })=0\); this, together with the condition \(\mathcal {M}_{\varGamma }(\{{\widehat{\chi }}_{i}\}_{\varGamma })=0\), implies \(\mathcal {M}_{\varGamma }({\widehat{\chi }}_{i}^{out })=0\) and \(\mathcal {M}_{\varGamma }({\widehat{\chi }}_{i}^{int })=0\).

Remark 5.5

Problem (5.15)–(5.18) is set in the unit reference cell, so that the solution \((J,H)\) depends only on y. However, in Sect. 6 the source \(j_{g}\) will be assumed to be a function in \(L^{2}(\varOmega \times \varGamma )\) (see Theorem 6.6); thus, the pair \((J,H)\) will depend also on x, which in problem (5.15)–(5.18) can be considered as a parameter.

5.2 Existence for the microscopic problem

The same approach employed above for the cell problems also provides existence and uniqueness of solutions for the microscopic problem (3.8)–(3.9), for fixed \(\varepsilon >0\).

The natural functional setting for our problem is the space

$$\begin{aligned} \mathcal {H}_{\varepsilon } = \{ (\varphi ,\varphi ^{\varGamma }) \in H^{1}(\varOmega \setminus \varGamma ^{\varepsilon })\times H^{1}(\varGamma ^{\varepsilon }) \mid \varphi _{\mid \partial \varOmega } = 0 \} , \end{aligned}$$
(5.22)

which, as proven in the theorem below, is a Hilbert space equipped with the norm

$$\begin{aligned} \Vert (\varphi ,\varphi ^{\varGamma })\Vert _{\mathcal {H}_{\varepsilon }}^{2} = \int \limits _{\varOmega } |{\nabla }\varphi |^{2} \,{\textrm{d}}x + \frac{1}{2\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} ( \{\varphi -\varphi ^{\varGamma }\}_{\varGamma ^{\varepsilon }}^{2} + [\varphi ]_{\varGamma ^{\varepsilon }}^{2} ) \,{\textrm{d}}\sigma + \varepsilon \int \limits _{\varGamma ^{\varepsilon }} |{\nabla _{\mathcal {S}}}\varphi ^{\varGamma }|^{2} \,{\textrm{d}}\sigma . \end{aligned}$$
(5.23)

This is the space in which solutions are sought and from which test functions are taken.

Theorem 5.6

For any \(f\in L^{2}(\varOmega )\), \(g^out _{\varepsilon }\), \(g^int _{\varepsilon }\in C(\overline{\varOmega })\), there exists a unique weak solution to problem (3.8)–(3.9).

Proof

We note that the two equations (3.8) and (3.9) can be rewritten as, respectively,

$$\begin{aligned}{} & {} B_{1}^{\varepsilon }\big ((u_{\varepsilon },u_{\varepsilon }^{\varGamma }),(\varphi ,\varphi ^{\varGamma })\big ):= \int \limits _{\varOmega } {\nabla }u_{\varepsilon }\cdot {\nabla }\varphi \,{\textrm{d}}x + \frac{1}{2\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} \Big ( \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }} \{\varphi \}_{\varGamma ^{\varepsilon }} + [u_{\varepsilon }]_{\varGamma ^{\varepsilon }} [\varphi ]_{\varGamma ^{\varepsilon }} \Big ) \,{\textrm{d}}\sigma \nonumber \\{} & {} \quad = \frac{\varepsilon }{4} \int \limits _{\varGamma ^{\varepsilon }} \Big ( \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \{\varphi \}_{\varGamma ^{\varepsilon }} + [g_{\varepsilon }]_{\varGamma ^{\varepsilon }} [\varphi ]_{\varGamma ^{\varepsilon }} \Big ) \,{\textrm{d}}\sigma + \int \limits _{\varOmega } f\varphi \,{\textrm{d}}x =: F_{1}^{\varepsilon }\big ((\varphi ,\varphi ^{\varGamma })\big ) , \end{aligned}$$
(5.24)

and

$$\begin{aligned}{} & {} B_{2}^{\varepsilon }\big ((u_{\varepsilon },u_{\varepsilon }^{\varGamma }),(\varphi ,\varphi ^{\varGamma })\big ):= \varepsilon \int \limits _{\varGamma ^{\varepsilon }} {\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }\cdot {\nabla _{\mathcal {S}}}\varphi ^{\varGamma } \,{\textrm{d}}\sigma - \frac{1}{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }} \varphi ^{\varGamma } \,{\textrm{d}}\sigma \nonumber \\{} & {} \quad = \frac{\varepsilon }{2} \int \limits _{\varGamma ^{\varepsilon }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \varphi ^{\varGamma } \,{\textrm{d}}\sigma =: F_{2}^{\varepsilon }\big ((\varphi ,\varphi ^{\varGamma })\big ) . \end{aligned}$$
(5.25)

Moreover, the inner product \(B_{1}^{\varepsilon }+B_{2}^{\varepsilon }\) induces the norm defined in (5.23); this is proved by the calculations already used to arrive at (4.7). Let us first prove that the inner product is positive definite. Clearly, if the quantity in (5.23) vanishes, then \(\varphi _{\mid \varOmega _{out }^{\varepsilon }}=0\), since it is constant, due to \({\nabla }{\varphi }=0\), and owing to the null boundary condition. By the same token, \(\varphi _{\mid \varOmega _{int }^{\varepsilon }}\) is constant in each component of \(\varOmega _{int }^{\varepsilon }\); however, \([\varphi ]_{\varGamma ^{\varepsilon }}=0\) on \(\varGamma ^{\varepsilon }\), so that also \(\varphi _{\mid \varOmega _{int }^{\varepsilon }}=0\) in \(\varOmega _{int }^{\varepsilon }\). It follows that also \(\varphi ^{\varGamma } =0\), due to the definition of the norm.

Then, we prove that \(\mathcal {H}_{\varepsilon }\) is complete; let \((\varphi _{n},\varphi ^{\varGamma }_{n})\) be a Cauchy sequence in \(\mathcal {H}_{\varepsilon }\). Then, on invoking again the fact that \(\varphi _{n\mid \partial \varOmega }=0\), we clearly have \(\varphi _{n\mid \varOmega _{out }^{\varepsilon }}\rightarrow \varphi _{out }\) in \(H^{1}(\varOmega _{out }^{\varepsilon })\), for some \(\varphi _{out }\) with \(\varphi _{out }=0\) on \(\partial \varOmega \). Next, we remark that the traces of \(\varphi _{n\mid \varOmega _{out }^{\varepsilon }}\) on \(\varGamma ^{\varepsilon }\) as well as the jumps \([\varphi _{n}]_{\varGamma ^{\varepsilon }}\) converge in \(L^{2}(\varGamma ^{\varepsilon })\); thus, also the traces of \(\varphi _{n\mid \varOmega _{int }^{\varepsilon }}\) on \(\varGamma ^{\varepsilon }\) converge in \(L^{2}(\varGamma ^{\varepsilon })\). It follows, on appealing again to the convergence of \({\nabla }{\varphi }_{n}\), that \(\varphi _{n\mid \varOmega _{int }^{\varepsilon }}\rightarrow \varphi _{int }\) in \(H^{1}(\varOmega _{int }^{\varepsilon })\), for some \(\varphi _{int }\). Then, we know that \(\{\varphi _{n}-\varphi _{n}^{\varGamma }\}_{\varGamma ^{\varepsilon }}\) and \(\{\varphi _{n}\}_{\varGamma ^{\varepsilon }}\) converge in \(L^{2}(\varGamma ^{\varepsilon })\), implying the convergence of \(\varphi _{n}^{\varGamma }=(\{\varphi _{n}\}_{\varGamma ^{\varepsilon }}-\{\varphi _{n}-\varphi _{n}^{\varGamma }\}_{\varGamma ^{\varepsilon }})/2\) to some \(\varphi ^{\varGamma }\); the convergence in fact takes place in \(H^{1}(\varGamma ^{\varepsilon })\), by virtue of the Cauchy estimate on \({\nabla _{\mathcal {S}}}{\varphi }_{n}^{\varGamma }\). Finally, it is straightforward to check that the convergence of \((\varphi _{n},\varphi _{n}^{\varGamma })\) takes place in the norm of \(\mathcal {H}_{\varepsilon }\).

Next, we remark that \(F_{1}^{\varepsilon }\) and \(F_{2}^{\varepsilon }\) are continuous functionals on \(\mathcal {H}_{\varepsilon }\); to this end, we only need to check that the norms \(\Vert \{\varphi \}_{\varGamma ^{\varepsilon }}\Vert _{L^{2}(\varGamma ^{\varepsilon })}\) and \(\Vert \varphi ^{\varGamma }\Vert _{L^{2}(\varGamma ^{\varepsilon })}\) are bounded from above by the norm in (5.23). Indeed, \(\Vert \{\varphi \}_{\varGamma ^{\varepsilon }}\Vert _{L^{2}(\varGamma ^{\varepsilon })}\) can be estimated in this sense by means of (4.1) and (4.2). In turn, \(\Vert \varphi ^{\varGamma }\Vert _{L^{2}(\varGamma ^{\varepsilon })}\) is again bounded by noting that \(\varphi ^{\varGamma }=(\{\varphi \}_{\varGamma ^{\varepsilon }}-\{\varphi -\varphi ^{\varGamma }\}_{\varGamma ^{\varepsilon }})/2\).

The proof is completed as in Theorem 5.2. \(\square \)

6 Homogenization

In order to deal with our homogenization results, we need to recall the definition and the main properties of the unfolding operators studied in [23,24,25,26].

For \(\xi \in \Xi _\varepsilon \), set

$$\begin{aligned} \widehat{\varOmega }_{\varepsilon }= \text {interior}\left\{ \bigcup _{\xi \in \Xi _{\varepsilon }} \varepsilon (\xi +\overline{Y}) \right\} . \end{aligned}$$

Denoting by [r] the integer part and by \(\{r\}\) the fractional part of \(r\in \mathbb {R}\), we define for \(x\in \mathbb {R}^{N}\)

$$\begin{aligned} \left[ \frac{x}{\varepsilon }\right] _{Y} = \Big ( \left[ \frac{x_{1}}{\varepsilon }\right] , \dots , \left[ \frac{x_N}{\varepsilon }\right] \Big ) \qquad \text {and}\qquad \left\{ \frac{x}{\varepsilon }\right\} _{Y} = \Big ( \left\{ \frac{x_{1}}{\varepsilon }\right\} , \dots , \left\{ \frac{x_N}{\varepsilon }\right\} \Big ) , \end{aligned}$$

so that

$$\begin{aligned} x = \varepsilon \left( \left[ \frac{x}{\varepsilon }\right] _{Y}+\left\{ \frac{x}{\varepsilon }\right\} _{Y}\right) . \end{aligned}$$

Definition 6.1

For \(w\) Lebesgue-measurable on \(\varOmega \), the periodic unfolding operator \(\mathcal {T}_{\varepsilon }\) is defined as

$$\begin{aligned} \mathcal {T}_{\varepsilon }(w)(x,y) = \left\{ \begin{array}{cc} &{}w\left( \varepsilon \left[ \frac{x}{\varepsilon }\right] _{Y}+\varepsilon y\right) , \quad (x,y)\in \widehat{\varOmega }_{\varepsilon }\times Y, \\ &{}0 , \quad \text {otherwise.} \end{array} \right. \end{aligned}$$

For \(w\) Lebesgue-measurable on \(\varGamma ^{\varepsilon }\), the boundary unfolding operator \(\mathcal {T}^b_{\varepsilon }\) is defined as

$$\begin{aligned} \mathcal {T}^b_{\varepsilon }(w)(x,y) = \left\{ \begin{array}{cc} &{}w\left( \varepsilon \left[ \frac{x}{\varepsilon }\right] _{Y}+\varepsilon y\right) , \quad (x,y)\in \widehat{\varOmega }_{\varepsilon }\times \varGamma , \\ &{}0 , \quad \text {otherwise.} \end{array} \right. \end{aligned}$$
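
As a simple illustration of the action of \(\mathcal {T}_{\varepsilon }\) and of the decomposition above, consider, for instance, the one-dimensional case \(N=1\), \(Y=(0,1)\), with \(\varepsilon =1/10\) and \(x=0.37\in \widehat{\varOmega }_{\varepsilon }\): then

$$\begin{aligned} \left[ \frac{x}{\varepsilon }\right] _{Y} = 3 , \qquad \left\{ \frac{x}{\varepsilon }\right\} _{Y} = 0.7 , \qquad \mathcal {T}_{\varepsilon }(w)(x,y) = w\Big ( \frac{3+y}{10} \Big ) , \quad y\in Y , \end{aligned}$$

so that \(\mathcal {T}_{\varepsilon }(w)(x,\cdot )\) records the values of \(w\) on the whole cell \(\varepsilon (3+Y)\) containing the point x.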

Proposition 6.2

Let \(w_{\varepsilon }=(w_{\varepsilon }^{int },w_{\varepsilon }^{out })\in H^1(\varOmega _{int }^{\varepsilon })\times H^1(\varOmega _{out }^{\varepsilon })\). Assume that there exists \(C>0\) (independent of \(\varepsilon \)) such that

$$\begin{aligned} \int \limits _{\varOmega }|w_{\varepsilon }|^2\,{\textrm{d}}x+\int \limits _{\varOmega }|\nabla w_{\varepsilon }|^2\,{\textrm{d}}x\le C, \quad \quad \forall \varepsilon >0. \end{aligned}$$
(6.1)

Then, there exist \(w^int \in L^2(\varOmega )\), \(w^out \in H^1(\varOmega )\), \(\widetilde{w}_\mathrm{{int}}\in L^2(\varOmega ;H^1(E_{int }))\) and \(\overline{w}_\mathrm{{out}}\in L^2(\varOmega ;H^1_\#(E_{out }))\), with \( {\mathcal M}_{\varGamma }(\widetilde{w}_\mathrm{{int}}) = {\mathcal M}_{\varGamma }(\overline{w}_\mathrm{{out}})=0\), such that, up to a subsequence,

$$\begin{aligned}&\mathcal {T}_{\varepsilon }(\chi _{\varOmega _{int }^{\varepsilon }}w_{\varepsilon })\rightharpoonup w^int ,\qquad \hbox {weakly in } L^2(\varOmega \times E_{int })\,; \end{aligned}$$
(6.2)
$$\begin{aligned}&\mathcal {T}_{\varepsilon }(\chi _{\varOmega _{out }^{\varepsilon }}w_{\varepsilon })\rightharpoonup w^out ,\qquad \hbox {weakly in } L^2(\varOmega \times E_{out })\,; \end{aligned}$$
(6.3)
$$\begin{aligned}&\mathcal {T}_{\varepsilon }(\chi _{\varOmega _{int }^{\varepsilon }}\nabla w_{\varepsilon })\rightharpoonup \nabla _y\widetilde{w}_\mathrm{{int}}, \qquad \hbox {weakly in } L^2(\varOmega \times E_{int })\,; \end{aligned}$$
(6.4)
$$\begin{aligned}&\mathcal {T}_{\varepsilon }(\chi _{\varOmega _{out }^{\varepsilon }}\nabla w_{\varepsilon })\rightharpoonup \nabla w^out +\nabla _y\overline{w}_\mathrm{{out}}, \qquad \hbox {weakly in } L^2(\varOmega \times E_{out })\,, \end{aligned}$$
(6.5)

for \(\varepsilon \rightarrow 0\). Moreover, due to (6.1), we have

$$\begin{aligned} \varepsilon \int \limits _{\varGamma ^{\varepsilon }} [w_{\varepsilon }]^2\,{\textrm{d}}\sigma \le C,\qquad \forall \varepsilon >0, \end{aligned}$$
(6.6)

with \(C\) independent of \(\varepsilon \), and, therefore,

$$\begin{aligned} \mathcal {T}^b_{\varepsilon }([w_{\varepsilon }])\rightharpoonup w^out -w^int ,\qquad \hbox {weakly in } L^2(\varOmega \times \varGamma ). \end{aligned}$$
(6.7)

Finally,

$$\begin{aligned} \frac{1}{\varepsilon } \left[ \mathcal {T}_{\varepsilon }(w^out _\varepsilon )-{\mathcal M}_{\varGamma }(\mathcal {T}_{\varepsilon }(w^out _\varepsilon ))\right] \rightharpoonup y^\varGamma \cdot \nabla w^out +\overline{w}_\mathrm{{out}}, \quad \text {weakly in} \, L^2(\varOmega ;H^1(E_{out })). \end{aligned}$$
(6.8)

We notice that, as in [25, Theorem 2.20], we can set \(\overline{w}_\mathrm{{int}}=\widetilde{w}_\mathrm{{int}} - y^\varGamma \cdot \nabla w^out -\xi _{\varGamma }\), for a suitable function \(\xi _{\varGamma }\in L^{2}(\varOmega )\). Therefore, we can rewrite (6.4) as

$$\begin{aligned} \mathcal {T}_{\varepsilon }(\chi _{\varOmega _{int }^{\varepsilon }}\nabla w_{\varepsilon })\rightharpoonup \nabla w^out +\nabla _y\overline{w}_\mathrm{{int}}, \quad \hbox {weakly in } L^2(\varOmega \times E_{int }). \end{aligned}$$
(6.9)

Moreover, we can further modify \(\overline{w}_\mathrm{{int}},\overline{w}_\mathrm{{out}}\), without affecting (6.5) and (6.9), by adding \(\xi _{\varGamma }/2\) to both, in such a way that the sum of the two new correctors has null mean value on \(\varGamma \). More precisely, we redefine \(\widehat{w}_\mathrm{{int}}=\overline{w}_\mathrm{{int}}+\xi _{\varGamma }/2=\widetilde{w}_\mathrm{{int}} - y^\varGamma \cdot \nabla w^out -\xi _{\varGamma }/2\) and \(\widehat{w}_\mathrm{{out}}=\overline{w}_\mathrm{{out}}+\xi _{\varGamma }/2\), so that \(\mathcal {M}_{\varGamma }(\{\widehat{w}\}_{\varGamma })=0\).

From now on, let \((u_{\varepsilon }^{out },u_{\varepsilon }^{int },u_{\varepsilon }^{\varGamma })\) be the unique solution of problem (3.8)–(3.9). Owing to the estimates of Corollary 4.2 and to the previous proposition, we have the following results.

Proposition 6.3

Assume (4.14). Then, there exist \(u^out _{0}\in H^{1}_{0}(\varOmega )\), \(u^int _{0}\in L^{2}(\varOmega )\), \({\widehat{u}}\in L^{2}(\varOmega ;H^1_\#(Y{\setminus }\varGamma ))\) such that \(\mathcal {M}_{\varGamma }(\{{\widehat{u}}\}_{\varGamma })=0\) and, up to subsequences,

$$\begin{aligned} \mathcal {T}_{\varepsilon }(\chi _{\varOmega _{out }^{\varepsilon }}u_{\varepsilon })&\rightharpoonup u^out _{0}\,, \qquad \text {weakly in }L^{2}(\varOmega \times E_{out }); \end{aligned}$$
(6.10)
$$\begin{aligned} \mathcal {T}_{\varepsilon }(\chi _{\varOmega _{int }^{\varepsilon }}u_{\varepsilon })&\rightharpoonup u^int _{0}\,, \qquad \text {weakly in }L^{2}(\varOmega \times E_{int }); \end{aligned}$$
(6.11)
$$\begin{aligned} \mathcal {T}_{\varepsilon }(\chi _{\varOmega _{out }^{\varepsilon }}{\nabla }u_{\varepsilon })&\rightharpoonup {\nabla }_{x}u^out _{0}+ {\nabla }_{y} {\widehat{u}}_{out }\,, \qquad \text {weakly in } L^{2}(\varOmega \times E_{out }); \end{aligned}$$
(6.12)
$$\begin{aligned} \mathcal {T}_{\varepsilon }(\chi _{\varOmega _{int }^{\varepsilon }}{\nabla }u_{\varepsilon })&\rightharpoonup {\nabla }_{x}u^out _{0}+ {\nabla }_{y} {\widehat{u}}_{int }\,, \qquad \text {weakly in } L^{2}(\varOmega \times E_{int }). \end{aligned}$$
(6.13)

Moreover, there exists \(u^{\varGamma }_{0}\in H^1_0(\varOmega )\) such that

$$\begin{aligned} \mathcal {T}^b_{\varepsilon }([u_{\varepsilon }]_{\varGamma ^{\varepsilon }})&\rightarrow 0 \,, \qquad \text {in }L^{2}(\varOmega \times \varGamma ); \end{aligned}$$
(6.14)
$$\begin{aligned} \mathcal {T}^b_{\varepsilon }(u_{\varepsilon }^{\varGamma })&\rightarrow u^{\varGamma }_{0}\,, \qquad \text {in }L^{2}(\varOmega \times \varGamma ); \end{aligned}$$
(6.15)
$$\begin{aligned} \mathcal {T}^b_{\varepsilon }(u_{\varepsilon }^{out }-u_{\varepsilon }^{\varGamma })&\rightarrow 0 \,, \qquad \text {in }L^{2}(\varOmega \times \varGamma ); \end{aligned}$$
(6.16)
$$\begin{aligned} \mathcal {T}^b_{\varepsilon }(u_{\varepsilon }^{int }-u_{\varepsilon }^{\varGamma })&\rightarrow 0 \,, \qquad \text {in }L^{2}(\varOmega \times \varGamma ). \end{aligned}$$
(6.17)

Actually,

$$\begin{aligned} u^{\varGamma }_{0}= {u^out _{0}} = {u^int _{0}} . \end{aligned}$$
(6.18)

Furthermore, there exists a function \(U\in L^{2}(\varOmega \times \varGamma )\) such that

$$\begin{aligned} \mathcal {T}^b_{\varepsilon }\Big ( \left[ \frac{u_{\varepsilon }}{\varepsilon }\right] _{\varGamma ^{\varepsilon }} \Big )&\rightharpoonup [{\widehat{u}}]_{\varGamma } \,, \qquad \text {weakly in } L^{2}(\varOmega \times \varGamma ); \end{aligned}$$
(6.19)
$$\begin{aligned} \mathcal {T}^b_{\varepsilon }\Big ( \frac{\{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }}}{\varepsilon } \Big )&\rightharpoonup U\,, \qquad \text {weakly in }L^{2}(\varOmega \times \varGamma ). \end{aligned}$$
(6.20)

Finally, there exists a function \(w\in L^2(\varOmega ;H^{1}(\varGamma ))\), with \(\mathcal {M}_{\varGamma }(w)=0\), such that

$$\begin{aligned} \mathcal {T}^b_{\varepsilon }({\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }) \rightharpoonup {\nabla _{\mathcal {S}}}_{y} w , \qquad \text {weakly in }L^{2}(\varOmega \times \varGamma ). \end{aligned}$$
(6.21)

Note that (6.16), (6.17) follow from (6.14) and (4.15). Thus, from (6.15), we get (6.18). For (6.19), see [5, 25, 26]. The limit in (6.20) follows from the estimate (4.15). Finally, (6.21) follows from [30].

Lemma 6.4

We have, for a suitable function \(\widetilde{\xi }_{\varGamma }\in L^{2}(\varOmega )\),

$$\begin{aligned} U= 2y^\varGamma \cdot {\nabla }u^out _{0}+ \{{\widehat{u}}\}_{\varGamma } - 2w + \widetilde{\xi }_{\varGamma } . \end{aligned}$$
(6.22)

Proof

We calculate

$$\begin{aligned} \begin{aligned} \mathcal {T}^b_{\varepsilon }\Big ( \frac{u_{\varepsilon }^{out }-u_{\varepsilon }^{\varGamma }}{\varepsilon } \Big )&= \frac{1}{\varepsilon } \big ( \mathcal {T}^b_{\varepsilon }(u_{\varepsilon }^{out }) - \mathcal {M}_{\varGamma }\big (\mathcal {T}^b_{\varepsilon }(u_{\varepsilon }^{out })\big ) \big ) \\&\quad + \frac{1}{\varepsilon } \big ( \mathcal {M}_{\varGamma }\big (\mathcal {T}^b_{\varepsilon }(u_{\varepsilon }^{out })\big ) - \mathcal {M}_{\varGamma }\big (\mathcal {T}^b_{\varepsilon }(u_{\varepsilon }^{\varGamma })\big ) \big ) \\&\quad + \frac{1}{\varepsilon } \big ( \mathcal {M}_{\varGamma }\big (\mathcal {T}^b_{\varepsilon }(u_{\varepsilon }^{\varGamma })\big ) - \mathcal {T}^b_{\varepsilon }(u_{\varepsilon }^{\varGamma }) \big ) =: J_{1} + J_{2} + J_{3} , \end{aligned} \end{aligned}$$
(6.23)

and take into account that, weakly in \(L^{2}(\varOmega \times \varGamma )\),

$$\begin{aligned} J_{1}&\rightharpoonup y^\varGamma \cdot {\nabla }u^out _{0}+ {\widehat{u}}_{out }-\frac{\xi _{\varGamma }}{2}, \end{aligned}$$
(6.24)
$$\begin{aligned} J_{2}&\rightharpoonup {\overline{\xi }}_{\varGamma } \,, \end{aligned}$$
(6.25)
$$\begin{aligned} J_{3}&\rightharpoonup -w \,, \end{aligned}$$
(6.26)

for a suitable \({\overline{\xi }}_{\varGamma }\in L^{2}(\varOmega )\); (6.25) is a consequence of a standard Hölder inequality, when we also take into account the estimate (4.15); for (6.26), see [30, Theorem 3.4]. Then, from (6.19)–(6.20), we have

$$\begin{aligned}{} & {} \mathcal {T}^b_{\varepsilon }\Big ( \frac{\{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }}}{\varepsilon } \Big ) = 2\mathcal {T}^b_{\varepsilon }\Big ( \frac{u_{\varepsilon }^{out }-u_{\varepsilon }^{\varGamma }}{\varepsilon } \Big ) - \mathcal {T}^b_{\varepsilon }\Big ( \left[ \frac{u_{\varepsilon }}{\varepsilon }\right] _{\varGamma ^{\varepsilon }} \Big ) \\{} & {} \quad \rightharpoonup 2( y^\varGamma \cdot {\nabla }u^out _{0}+ {\widehat{u}}_{out }) - \xi _{\varGamma } + 2{\overline{\xi }}_{\varGamma } - 2w - [{\widehat{u}}]_{\varGamma } , \end{aligned}$$

that is (6.22), by setting \(\widetilde{\xi }_{\varGamma }=-\xi _{\varGamma }+2{\overline{\xi }}_{\varGamma }\). \(\square \)

Remark 6.5

In fact, in Lemma 6.4 we have \(\widetilde{\xi }_{\varGamma }=0\), since, taking \(\varPsi _{2}=1\) in (6.39), we obtain

$$\begin{aligned} \widetilde{\xi }_{\varGamma } = \mathcal {M}_{\varGamma }( U) = 0 . \end{aligned}$$
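
Indeed, the choice \(\varPsi _{2}=1\) cancels the gradient term in (6.39), so that \(\int _{\varOmega \times \varGamma }\varPsi _{1}\, U\,{\textrm{d}}x\,{\textrm{d}}\sigma _{y}=0\) for every \(\varPsi _{1}\), i.e. \(\mathcal {M}_{\varGamma }(U)=0\) a.e. in \(\varOmega \); on the other hand, applying \(\mathcal {M}_{\varGamma }\) to (6.22) and recalling that \(\mathcal {M}_{\varGamma }(y^\varGamma )=0\), \(\mathcal {M}_{\varGamma }(\{{\widehat{u}}\}_{\varGamma })=0\) and \(\mathcal {M}_{\varGamma }(w)=0\), we get

$$\begin{aligned} \mathcal {M}_{\varGamma }( U) = 2\mathcal {M}_{\varGamma }(y^\varGamma )\cdot {\nabla }u^out _{0}+ \mathcal {M}_{\varGamma }(\{{\widehat{u}}\}_{\varGamma }) - 2\mathcal {M}_{\varGamma }(w) + \widetilde{\xi }_{\varGamma } = \widetilde{\xi }_{\varGamma } . \end{aligned}$$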

Theorem 6.6

Let \(g^out _{\varepsilon }\), \(g^int _{\varepsilon }\in C(\overline{\varOmega })\) and assume that there exist \( j_{g},s_{g}\in L^2(\varOmega \times \varGamma ) \) such that

$$\begin{aligned} \varepsilon \mathcal {T}^b_{\varepsilon }([g_{\varepsilon }]_{\varGamma ^{\varepsilon }})&\rightharpoonup j_{g}\,, \qquad \text {weakly in }L^{2}(\varOmega \times \varGamma ); \end{aligned}$$
(6.27)
$$\begin{aligned} \mathcal {T}^b_{\varepsilon }(\{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }})&\rightharpoonup s_{g}\,, \qquad \text {weakly in }L^{2}(\varOmega \times \varGamma ). \end{aligned}$$
(6.28)

Then, the function \(u^out _{0}\in H^{1}_{0}(\varOmega )\) obtained in Proposition 6.3 is the unique solution of

$$\begin{aligned} {}- {{\,\textrm{div}\,}}(\mathcal {A}{\nabla }u^out _{0})&= \mathcal {F} \,, \qquad \text {in }\varOmega , \end{aligned}$$
(6.29)
$$\begin{aligned} u^out _{0}&= 0 \,, \qquad \text {on } \partial \varOmega . \end{aligned}$$
(6.30)

Here,

$$\begin{aligned} \mathcal {A}= id + \int \limits _{Y} {\nabla }_{y} {\widehat{\chi }}\,{\textrm{d}}y + \int \limits _{\varGamma } ( id - \nu \otimes \nu + {\nabla _{\mathcal {S}}}_{y} {\widetilde{\chi }}) \,{\textrm{d}}\sigma _{y} \end{aligned}$$
(6.31)

and

$$\begin{aligned} \mathcal {F} = f+ \int \limits _{\varGamma } s_{g}\,{\textrm{d}}\sigma _{y} + {{\,\textrm{div}\,}}\Big ( \int \limits _{Y} {\nabla }_{y}J\,{\textrm{d}}y + \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y} H\,{\textrm{d}}\sigma _{y} \Big ) , \end{aligned}$$
(6.32)

where the pair of cell functions \(({\widehat{\chi }},{\widetilde{\chi }})\in \mathcal {H}\) has been defined in problem (5.1)–(5.4) and the functions

$$\begin{aligned} J\in L^2(\varOmega ;H^1_\#(Y\setminus \varGamma ))\qquad \text {and}\qquad H\in L^2(\varOmega ;H^1(\varGamma )), \end{aligned}$$

with \(\mathcal {M}_{\varGamma }(\{J\}_{\varGamma })=0\) and \(\mathcal {M}_{\varGamma }(H)=0\), have been defined in problem (5.15)–(5.18) (see, also, Remark 5.5).

Proof

We remark preliminarily that (6.27)–(6.28) imply also (4.14) and, therefore, the estimate (4.15) and the convergence results of Proposition 6.3 and Lemma 6.4.

1) Take as test function in (3.8)

$$\begin{aligned} \varphi _{\varepsilon }(x) = \varepsilon \varPhi _{1}(x) \varPhi _{2}\Big (\frac{x}{\varepsilon }\Big ) , \end{aligned}$$
(6.33)

where \(\varPhi _{1}\in C_{0}^{\infty }(\varOmega )\) and \(\varPhi _{2}\) is of class \(C^{1}\), separately, in \(\overline{E_{out }}\) and \(\overline{E_{int }}\), and is periodic over \(Y\). We obtain

$$\begin{aligned} \begin{aligned}&\int \limits _{\varOmega } {\nabla }u_{\varepsilon }\cdot ( \varepsilon \varPhi _{2} {\nabla }\varPhi _{1} + \varPhi _{1} {\nabla }_{y}\varPhi _{2} ) \,{\textrm{d}}x + \frac{\varepsilon }{2\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} \varPhi _{1} \{\varPhi _{2}\}_{\varGamma ^{\varepsilon }} \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma \\&\quad + \frac{\varepsilon }{2\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} \varPhi _{1} [\varPhi _{2}]_{\varGamma ^{\varepsilon }} [u_{\varepsilon }]_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma = \frac{\varepsilon ^{2}}{4} \int \limits _{\varGamma ^{\varepsilon }} \varPhi _{1} \big ( \{\varPhi _{2}\}_{\varGamma ^{\varepsilon }} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} + [\varPhi _{2}]_{\varGamma ^{\varepsilon }} [g_{\varepsilon }]_{\varGamma ^{\varepsilon }} \big ) \,{\textrm{d}}\sigma \\&\quad + \varepsilon \int \limits _{\varOmega } f\varPhi _{1} \varPhi _{2} \,{\textrm{d}}x . \end{aligned} \end{aligned}$$
(6.34)

We unfold the integrals on the left-hand side of (6.34) and the term containing \([g_{\varepsilon }]_{\varGamma ^{\varepsilon }}\) on the right-hand side. Indeed, the remaining terms clearly vanish in the limit \(\varepsilon \rightarrow 0\), owing to our assumption (4.14). We obtain

$$\begin{aligned}{} & {} \int \limits _{\varOmega \times Y} \mathcal {T}_{\varepsilon }({\nabla }u_{\varepsilon }) \cdot \mathcal {T}_{\varepsilon }({\nabla }_{y}\varPhi _{2}) \mathcal {T}_{\varepsilon }(\varPhi _{1}) \,{\textrm{d}}x \,{\textrm{d}}y + \frac{1}{2} \int \limits _{\varOmega \times \varGamma } \mathcal {T}^b_{\varepsilon }(\varPhi _{1}\{\varPhi _{2}\}_{\varGamma ^{\varepsilon }}) \mathcal {T}^b_{\varepsilon }\Big ( \frac{\{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }}}{\varepsilon } \Big ) \,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} \nonumber \\{} & {} \quad + \frac{1}{2} \int \limits _{\varOmega \times \varGamma } \mathcal {T}^b_{\varepsilon }(\varPhi _{1}[\varPhi _{2}]_{\varGamma ^{\varepsilon }}) \mathcal {T}^b_{\varepsilon }\Big ( \frac{[u_{\varepsilon }]_{\varGamma ^{\varepsilon }}}{\varepsilon } \Big ) \,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} = \frac{\varepsilon }{4} \int \limits _{\varOmega \times \varGamma } \mathcal {T}^b_{\varepsilon }(\varPhi _{1}[\varPhi _{2}]_{\varGamma ^{\varepsilon }}) \mathcal {T}^b_{\varepsilon }([g_{\varepsilon }]_{\varGamma ^{\varepsilon }}) \,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} + o(1) . \end{aligned}$$
(6.35)

As \(\varepsilon \rightarrow 0\), owing to our assumption (6.27) and to Proposition 6.3,

$$\begin{aligned}{} & {} \int \limits _{\varOmega \times Y} \varPhi _{1} ( {\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}}) \cdot {\nabla }_{y}\varPhi _{2} \,{\textrm{d}}x \,{\textrm{d}}y + \frac{1}{2} \int \limits _{\varOmega \times \varGamma } \varPhi _{1} \{\varPhi _{2}\}_{\varGamma } U\,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} \nonumber \\{} & {} \quad + \frac{1}{2} \int \limits _{\varOmega \times \varGamma } \varPhi _{1} [\varPhi _{2}]_{\varGamma } [{\widehat{u}}]_{\varGamma } \,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} = \frac{1}{4} \int \limits _{\varOmega \times \varGamma } \varPhi _{1} [\varPhi _{2}]_{\varGamma } j_{g}\,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} . \end{aligned}$$
(6.36)

2) As for (3.9), we choose in it the test function

$$\begin{aligned} \varphi _{\varepsilon }^{\varGamma }(x) = \varepsilon \varPsi _{1}(x) \varPsi _{2}\Big (\frac{x}{\varepsilon }\Big ) , \end{aligned}$$
(6.37)

where \(\varPsi _{1}\in C^{1}(\overline{\varOmega })\), \(\varPsi _{2}\in C^{1}(\varGamma )\). We obtain

$$\begin{aligned}{} & {} \varepsilon \int \limits _{\varGamma ^{\varepsilon }} {\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }\cdot ( \varepsilon \varPsi _{2} {\nabla _{\mathcal {S}}}\varPsi _{1} + \varPsi _{1} {\nabla _{\mathcal {S}}}_{y}\varPsi _{2} ) \,{\textrm{d}}\sigma - \frac{\varepsilon }{\varepsilon } \int \limits _{\varGamma ^{\varepsilon }} \varPsi _{1} \varPsi _{2} \{u_{\varepsilon }-u_{\varepsilon }^{\varGamma }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma \nonumber \\{} & {} \quad = \frac{\varepsilon ^{2}}{2} \int \limits _{\varGamma ^{\varepsilon }} \varPsi _{1} \varPsi _{2} \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma . \end{aligned}$$
(6.38)

Note that, according to our assumption (4.14), the right-hand side of (6.38) vanishes as \(\varepsilon \rightarrow 0\). Then, unfolding and taking the limit we arrive at

$$\begin{aligned} \int \limits _{\varOmega \times \varGamma } \varPsi _{1} {\nabla _{\mathcal {S}}}_{y} w \cdot {\nabla _{\mathcal {S}}}_{y} \varPsi _{2} \,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} - \int \limits _{\varOmega \times \varGamma } \varPsi _{1} \varPsi _{2} U\,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} = 0 . \end{aligned}$$
(6.39)

3) In order to derive the macroscopic limiting differential equation, we choose in (3.8) a test function \(\varphi \in C^{1}_{0}(\varOmega )\), and in (3.9) we let \(\varphi ^{\varGamma }=\varphi _{\mid \varGamma ^{\varepsilon }}\); on adding the two integral equations, we find

$$\begin{aligned} \int \limits _{\varOmega } {\nabla }u_{\varepsilon }\cdot {\nabla }\varphi \,{\textrm{d}}x + \varepsilon \int \limits _{\varGamma ^{\varepsilon }} {\nabla _{\mathcal {S}}}u_{\varepsilon }^{\varGamma }\cdot {\nabla _{\mathcal {S}}}\varphi \,{\textrm{d}}\sigma = \varepsilon \int \limits _{\varGamma ^{\varepsilon }} \varphi \{g_{\varepsilon }\}_{\varGamma ^{\varepsilon }} \,{\textrm{d}}\sigma + \int \limits _{\varOmega } \varphi f\,{\textrm{d}}x . \end{aligned}$$
(6.40)

Then, we unfold all the integrals and, on using (6.28), we find as \(\varepsilon \rightarrow 0\)

$$\begin{aligned} \int \limits _{\varOmega \times Y} ( {\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}}) \cdot {\nabla }\varphi \,{\textrm{d}}x \,{\textrm{d}}y + \int \limits _{\varOmega \times \varGamma } {\nabla _{\mathcal {S}}}_{y} w \cdot {\nabla _{\mathcal {S}}}\varphi \,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} = \int \limits _{\varOmega \times \varGamma } \varphi s_{g}\,{\textrm{d}}x \,{\textrm{d}}\sigma _{y} + \int \limits _{\varOmega } \varphi f\,{\textrm{d}}x . \end{aligned}$$
(6.41)

4) We next rewrite our limiting problem in a distributional formulation. From (6.36), we get

$$\begin{aligned} - {{\,\textrm{div}\,}}_{y} ( {\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}}) = 0 , \qquad \text {in }E_{int }\cup E_{out }. \end{aligned}$$
(6.42)

Multiplying (6.42) by \(\varPhi _{2}\), integrating (formally) by parts, and using (2.15) for \(\varPhi _{2}({\nabla }{u^out _{0}}+{\nabla }_{y}{\widehat{u}})\cdot \nu \), we find, for each fixed \(x\in \varOmega \),

$$\begin{aligned}{} & {} \int \limits _{Y} ({\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}}) \cdot {\nabla }_{y}\varPhi _{2} \,{\textrm{d}}y = - \int \limits _{\varGamma } [\varPhi _{2}({\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}})\cdot \nu ]_{\varGamma } \,{\textrm{d}}\sigma _{y} \nonumber \\{} & {} \quad = - \frac{1}{2} \int \limits _{\varGamma } ( [\varPhi _{2}]_{\varGamma } \{({\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}})\cdot \nu \}_{\varGamma } + \{\varPhi _{2}\}_{\varGamma } [({\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}})\cdot \nu ]_{\varGamma } ) \,{\textrm{d}}\sigma _{y} , \end{aligned}$$
(6.43)

whence, on comparing with (6.36),

$$\begin{aligned} \left[ ({\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}})\cdot \nu \right] _{\varGamma }&= U\,,&\qquad&\text {on }\varGamma , \end{aligned}$$
(6.44)
$$\begin{aligned} \left\{ ({\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}})\cdot \nu \right\} _{\varGamma }&= [{\widehat{u}}]_{\varGamma } - \frac{1}{2} j_{g}\,,&\qquad&\text {on }\varGamma . \end{aligned}$$
(6.45)

It follows immediately from (6.44)–(6.45), on recalling that \(a^{int }=(\{a\}_{\varGamma }-[a]_{\varGamma })/2\) for any quantity \(a\) defined on both sides of \(\varGamma \), that

$$\begin{aligned} ({\nabla }u^out _{0}+{\nabla }_{y}{\widehat{u}}_{int })\cdot \nu = \frac{1}{2} [{\widehat{u}}]_{\varGamma } - \frac{1}{4} j_{g}- \frac{1}{2} U, \qquad \text {on }\varGamma . \end{aligned}$$
(6.46)

Next, from (6.39), we obtain

$$\begin{aligned} -{{\,\mathrm{div_{\mathcal {S}}}\,}}_{y} ({\nabla _{\mathcal {S}}}_{y} w) = U, \qquad \text {on }\varGamma . \end{aligned}$$
(6.47)

In (6.42)–(6.47), the unknowns are \(u^out _{0}\), \({\widehat{u}}\) and w, since, owing to Lemma 6.4, Remark 6.5, we get

$$\begin{aligned} U= 2y^\varGamma \cdot {\nabla }u^out _{0}+ \{{\widehat{u}}\}_{\varGamma } - 2w . \end{aligned}$$
(6.48)

Finally, from (6.41) we get

$$\begin{aligned} - {{\,\textrm{div}\,}}\Big ( {\nabla }u^out _{0}+ \int \limits _{Y} {\nabla }_{y}{\widehat{u}}\,{\textrm{d}}y + \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y} w \,{\textrm{d}}\sigma _{y} \Big ) = f+ \int \limits _{\varGamma } s_{g}\,{\textrm{d}}\sigma _{y} , \qquad \text {in }\varOmega . \end{aligned}$$
(6.49)

Indeed, since \({\nabla _{\mathcal {S}}}_{y} w\) is tangential to \(\varGamma \), in (6.41) we may write

$$\begin{aligned} {\nabla _{\mathcal {S}}}_{y} w \cdot {\nabla _{\mathcal {S}}}\varphi = {\nabla _{\mathcal {S}}}_{y} w \cdot {\nabla }\varphi . \end{aligned}$$

In the following, we use the representations

$$\begin{aligned}{} & {} w(x,y) = y^\varGamma \cdot {\nabla }u^out _{0}(x) + {\widetilde{u}}(x,y) , \end{aligned}$$
(6.50)
$$\begin{aligned}{} & {} {\widetilde{u}}(x,y) = {\widetilde{\chi }}(y) \cdot {\nabla }u^out _{0}(x) + H(x,y) , \quad {\widehat{u}}(x,y) = {\widehat{\chi }}(y) \cdot {\nabla }u^out _{0}(x) + J(x,y) . \end{aligned}$$
(6.51)

Note that (6.48) can now be rewritten as

$$\begin{aligned} U= \{{\widehat{u}}-{\widetilde{u}}\}_{\varGamma } . \end{aligned}$$
(6.52)
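
Indeed, since \({\widetilde{u}}\) is defined on \(\varGamma \) only, (6.50) yields

$$\begin{aligned} \{{\widehat{u}}-{\widetilde{u}}\}_{\varGamma } = \{{\widehat{u}}\}_{\varGamma } - 2{\widetilde{u}}= \{{\widehat{u}}\}_{\varGamma } - 2w + 2y^\varGamma \cdot {\nabla }u^out _{0}, \end{aligned}$$

which is exactly the right-hand side of (6.48).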

Then, we identify the problems solved by the cell functions, by recalling (6.42)–(6.49) and separating the various contributions. First, we find that the functions \({\widetilde{\chi }}_{i}\) and \({\widehat{\chi }}_{i}\) are coupled, for \(i=1,\dots ,N\), through the problems (5.1)–(5.4). In addition, we require that \({\widehat{\chi }}\) is periodic in \(Y\), and that \(\mathcal {M}_{\varGamma }(\{{\widehat{\chi }}\}_{\varGamma })=0\), \(\mathcal {M}_{\varGamma }({\widetilde{\chi }})=0\).

Similarly, the functions \(J\) and \(H\) are coupled through the problems (5.15)–(5.18); moreover, \(J\) is assumed to be periodic in \(Y\), with \(\mathcal {M}_{\varGamma }(\{J\}_{\varGamma })=0\) and \(\mathcal {M}_{\varGamma }(H)=0\).

The well-posedness of the cell problems for \({\widehat{\chi }}\), \({\widetilde{\chi }}\), \(J\), \(H\) is dealt with in Sect. 5.

Next, we identify the limiting diffusion matrix in terms of the cell functions. We note first that

$$\begin{aligned}{} & {} {\nabla _{\mathcal {S}}}_{y} w = {\nabla _{\mathcal {S}}}_{y} y^\varGamma \cdot {\nabla }u^out _{0}+ {\nabla _{\mathcal {S}}}_{y} {\widetilde{u}}\nonumber \\{} & {} \quad = ( id - \nu \otimes \nu ) {\nabla }u^out _{0}+ {\nabla _{\mathcal {S}}}_{y} {\widetilde{u}}= {\nabla _{\mathcal {S}}}u^out _{0}+ {\nabla _{\mathcal {S}}}_{y} {\widetilde{u}}. \end{aligned}$$
(6.53)

Finally, using (6.50), (6.51) and (6.53), we write the vector in (6.49) as

$$\begin{aligned} \begin{aligned}&{\nabla }u^out _{0}+ \int \limits _{Y} {\nabla }_{y}{\widehat{u}}\,{\textrm{d}}y + \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y} w \,{\textrm{d}}\sigma _{y}\\&\quad = {\nabla }u^out _{0}+ \int \limits _{Y} ( {\nabla }_{y}{\widehat{\chi }}{\nabla }u^out _{0}+ {\nabla }_{y} J) \,{\textrm{d}}y + \int \limits _{\varGamma } ( {\nabla _{\mathcal {S}}}u^out _{0}+ {\nabla _{\mathcal {S}}}_{y} {\widetilde{\chi }}{\nabla }u^out _{0}+ {\nabla _{\mathcal {S}}}_{y} H) \,{\textrm{d}}\sigma _{y} \\&\quad = \Big ( id + \int \limits _{Y} {\nabla }_{y} {\widehat{\chi }}\,{\textrm{d}}y + \int \limits _{\varGamma } ( id - \nu \otimes \nu + {\nabla _{\mathcal {S}}}_{y} {\widetilde{\chi }}) \,{\textrm{d}}\sigma _{y} \Big ) {\nabla }u^out _{0}\\&\qquad + \int \limits _{Y} {\nabla }_{y} J\,{\textrm{d}}y + \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y} H\,{\textrm{d}}\sigma _{y} . \end{aligned} \end{aligned}$$
(6.54)

Hence, we obtain that \(u^out _{0}\) satisfies the limit problem (6.29), (6.30), which admits a unique solution, since the homogenized matrix \(\mathcal {A}\) is symmetric and positive definite, as proved in the following proposition. Therefore, all the above convergences hold true for the whole sequences, and not only for subsequences. \(\square \)

Remark 6.7

We notice that, from (6.32), the limits \(j_{g}\) and \(s_{g}\) of the original interface sources affect the source term of the homogenized problem in different ways. Indeed, while \(s_{g}\) appears directly in the definition of \(\mathcal {F}\), the function \(j_{g}\) enters through the solution \((J,H)\) of the coupled system (5.15)–(5.18). Moreover, we remark that, if \(j_{g}\) is independent of x, the last term in (6.32) vanishes, so that the pair \((J,H)\) plays no role in the macroscopic equation; however, it still enters the corrector formulas (6.50) and (6.51). A similar effect appears in the problem studied in [24, Chapter 5] and in [21].

Proposition 6.8

The matrix \(\mathcal {A}=(a_{ij})\) in (6.31) is given by

$$\begin{aligned}{} & {} a_{ij} = \int \limits _{Y} {\nabla }_{y}(y^\varGamma _{i}+{\widehat{\chi }}_{i}) \cdot {\nabla }_{y}(y^\varGamma _{j}+{\widehat{\chi }}_{j}) \,{\textrm{d}}y + \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y} (y^\varGamma _{i}+{\widetilde{\chi }}_{i}) \cdot {\nabla _{\mathcal {S}}}_{y}(y^\varGamma _{j}+{\widetilde{\chi }}_{j}) \,{\textrm{d}}\sigma _{y}\nonumber \\{} & {} \quad + \frac{1}{2} \int \limits _{\varGamma } \{{\widehat{\chi }}_{i}-{\widetilde{\chi }}_{i}\}_{\varGamma } \{{\widehat{\chi }}_{j}-{\widetilde{\chi }}_{j}\}_{\varGamma } \,{\textrm{d}}\sigma _{y} + \frac{1}{2} \int \limits _{\varGamma } [{\widehat{\chi }}_{i}]_{\varGamma } [{\widehat{\chi }}_{j}]_{\varGamma } \,{\textrm{d}}\sigma _{y} \, \end{aligned}$$
(6.55)

and, therefore, it is symmetric. Moreover, it is also positive definite.

Proof

Let us write the matrix \(\mathcal {A}=(a_{ij})\) appearing in (6.54) as \(a_{ij}=a_{ij}^{0}+a_{ij}^{1}\), where

$$\begin{aligned} a_{ij}^{0} = \delta _{ij} + \int \limits _{Y} \frac{\partial {\widehat{\chi }}_{i}}{\partial y_{j}} \,{\textrm{d}}y , \quad a_{ij}^{1} = \int \limits _{\varGamma } \big ( \delta _{ij} - \nu _{i} \nu _{j} + ({\nabla _{\mathcal {S}}}_{y}{\widetilde{\chi }}_{i})_{j} \big ) \,{\textrm{d}}\sigma _{y} . \end{aligned}$$

Note that

$$\begin{aligned} a_{ij}^{0} = \int \limits _{Y} \frac{\partial }{\partial y_{j}} (y^\varGamma _{i}+{\widehat{\chi }}_{i}) \,{\textrm{d}}y = \int \limits _{Y} {\nabla }_{y} (y^\varGamma _{i}+{\widehat{\chi }}_{i}) \cdot {\nabla }_{y} y_{j} \,{\textrm{d}}y , \end{aligned}$$

and that

$$\begin{aligned} a_{ij}^{1} = \int \limits _{\varGamma } \big ( {\nabla _{\mathcal {S}}}_{y}(y^\varGamma _{i}+{\widetilde{\chi }}_{i}) \big )_{j} \,{\textrm{d}}\sigma _{y} = \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y}(y^\varGamma _{i}+{\widetilde{\chi }}_{i}) \cdot {\nabla _{\mathcal {S}}}_{y} y^\varGamma _{j} \,{\textrm{d}}\sigma _{y} , \end{aligned}$$

since

$$\begin{aligned} {\nabla _{\mathcal {S}}}_{y}y^\varGamma _{i} = \varvec{e}_{i} - \nu _{i} \nu . \end{aligned}$$
(6.56)

Thus we conclude that

$$\begin{aligned} a_{ij} = \int \limits _{Y} {\nabla }_{y} (y^\varGamma _{i}+{\widehat{\chi }}_{i}) \cdot {\nabla }_{y} y_{j} \,{\textrm{d}}y + \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y}(y^\varGamma _{i}+{\widetilde{\chi }}_{i}) \cdot {\nabla _{\mathcal {S}}}_{y} y^\varGamma _{j} \,{\textrm{d}}\sigma _{y} . \end{aligned}$$
(6.57)

In order to show that the matrix \(\mathcal {A}\) is symmetric, let us use \({\widehat{\chi }}_{j}\) as a test function in (5.1) to get, on appealing to (2.15) once more,

$$\begin{aligned}{} & {} \int \limits _{Y} {\nabla }_{y}(y^\varGamma _{i}+{\widehat{\chi }}_{i}) \cdot {\nabla }_{y}{\widehat{\chi }}_{j} \,{\textrm{d}}y + \frac{1}{2} \int \limits _{\varGamma } [{\nabla }_{y}(y^\varGamma _{i}+{\widehat{\chi }}_{i})\cdot \nu ]_{\varGamma } \{{\widehat{\chi }}_{j}\}_{\varGamma } \,{\textrm{d}}\sigma _{y} \nonumber \\{} & {} \quad + \frac{1}{2} \int \limits _{\varGamma } \{{\nabla }_{y}(y^\varGamma _{i}+{\widehat{\chi }}_{i})\cdot \nu \}_{\varGamma } [{\widehat{\chi }}_{j}]_{\varGamma } \,{\textrm{d}}\sigma _{y} = 0 . \end{aligned}$$
(6.58)

On applying the interface conditions (5.2) and (5.5), we obtain

$$\begin{aligned}{} & {} \int \limits _{Y} {\nabla }_{y}(y^\varGamma _{i}+{\widehat{\chi }}_{i}) \cdot {\nabla }_{y}{\widehat{\chi }}_{j} \,{\textrm{d}}y + \frac{1}{2} \int \limits _{\varGamma } \{{\widehat{\chi }}_{i}-{\widetilde{\chi }}_{i}\}_{\varGamma } \{{\widehat{\chi }}_{j}\}_{\varGamma } \,{\textrm{d}}\sigma _{y} \nonumber \\{} & {} \quad + \frac{1}{2} \int \limits _{\varGamma } [{\widehat{\chi }}_{i}]_{\varGamma } [{\widehat{\chi }}_{j}]_{\varGamma } \,{\textrm{d}}\sigma _{y} = 0 . \end{aligned}$$
(6.59)

Then, we use \({\widetilde{\chi }}_{j}\) as a test function in (5.4) to infer, also by means of (5.2),

$$\begin{aligned} \int \limits _{\varGamma } {\nabla _{\mathcal {S}}}_{y} (y^\varGamma _{i}+{\widetilde{\chi }}_{i}) \cdot {\nabla _{\mathcal {S}}}_{y}{\widetilde{\chi }}_{j} \,{\textrm{d}}\sigma _{y} - \int \limits _{\varGamma } \{{\widehat{\chi }}_{i}-{\widetilde{\chi }}_{i}\}_{\varGamma } {\widetilde{\chi }}_{j} \,{\textrm{d}}\sigma _{y} = 0 . \end{aligned}$$
(6.60)

Finally, collecting (6.57), (6.59), (6.60), we infer (6.55) and, thus, the symmetry of \(\mathcal {A}\). The positive definiteness of the matrix \(\mathcal {A}\) can be obtained as usual; a brief sketch is given below. \(\square \)
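
For the reader's convenience, we sketch the usual argument behind the last assertion. For \(\lambda \in \mathbb {R}^{N}\), set \({\widehat{\chi }}_{\lambda }=\sum _{i}\lambda _{i}{\widehat{\chi }}_{i}\), \({\widetilde{\chi }}_{\lambda }=\sum _{i}\lambda _{i}{\widetilde{\chi }}_{i}\); then, by (6.55) and linearity,

$$\begin{aligned} \lambda \cdot \mathcal {A}\lambda = \int \limits _{Y} |{\nabla }_{y}(y^\varGamma \cdot \lambda +{\widehat{\chi }}_{\lambda })|^{2} \,{\textrm{d}}y + \int \limits _{\varGamma } |{\nabla _{\mathcal {S}}}_{y}(y^\varGamma \cdot \lambda +{\widetilde{\chi }}_{\lambda })|^{2} \,{\textrm{d}}\sigma _{y} + \frac{1}{2} \int \limits _{\varGamma } \{{\widehat{\chi }}_{\lambda }-{\widetilde{\chi }}_{\lambda }\}_{\varGamma }^{2} \,{\textrm{d}}\sigma _{y} + \frac{1}{2} \int \limits _{\varGamma } [{\widehat{\chi }}_{\lambda }]_{\varGamma }^{2} \,{\textrm{d}}\sigma _{y} \ge 0 , \end{aligned}$$

and \(\lambda \cdot \mathcal {A}\lambda =0\) forces \({\nabla }_{y}(y\cdot \lambda +{\widehat{\chi }}_{\lambda })=0\) in \(Y\), which is incompatible with the \(Y\)-periodicity of \({\widehat{\chi }}_{\lambda }\) unless \(\lambda =0\).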