1 Introduction

In this paper, we are interested in analyzing the asymptotic behavior of the solutions of a semilinear parabolic problem with homogeneous Neumann boundary conditions posed in a thin domain \(R^\epsilon \) whose boundary presents a highly oscillatory behavior, as illustrated in Fig. 1.

Fig. 1 Thin domain with a highly oscillatory boundary

Let \(G_\epsilon \), \(H_\epsilon : (0,1) \mapsto (0,\infty )\) be two positive smooth functions satisfying \(0< G_0 \le G_\epsilon (x) \le G_1\) and \(0 < H_0 \le H_\epsilon (x) \le H_1\) for all \(x \in (0,1)\) and \(\epsilon > 0\), where \(G_0\), \(G_1\), \(H_0\) and \(H_1\) are constants independent of \(\epsilon \), and consider the bounded open region \(R^\epsilon \) given by

$$\begin{aligned} R^\epsilon = \{ (x,y) \in {\mathbb {R}}^2 \; | \; x\in (0,1) \hbox { and } - \epsilon \, G_\epsilon (x) < y< \epsilon \, H_\epsilon (x) \}. \end{aligned}$$
(1.1)

Note that the functions \(G_\epsilon \) and \(H_\epsilon \) define the lower and upper boundaries of the 2-dimensional thin domain \(R^\epsilon \), whose thickness is of order \(\epsilon \). We allow \(G_\epsilon \) and \(H_\epsilon \) to present different orders and profiles of oscillation. The upper boundary, given by \(\epsilon \, H_\epsilon \), oscillates with amplitude and period of the same order \(\epsilon \) as the thickness, whereas the lower boundary, given by \(\epsilon \, G_\epsilon \), oscillates with an order higher than the compression order \(\epsilon \) of the thin domain. We express this by assuming that

$$\begin{aligned} G_\epsilon (x)&= G(x,x/\epsilon ^\alpha ), \quad \alpha > 1, \\ \hbox { and } \quad H_\epsilon (x)&= H(x,x/\epsilon ), \end{aligned}$$

where \(G\), \(H:[0,1] \times {\mathbb {R}} \mapsto (0,\infty )\) are smooth functions, with \(y \mapsto G(x,y)\) and \(y \mapsto H(x,y)\) periodic in the variable \(y\) with constant periods \(l_g\) and \(l_h\), respectively.

In the thin domain \(R^{\epsilon }\), we look at the semilinear parabolic evolution equation

$$\begin{aligned} \left\{ \begin{array}{ll} w^\epsilon _t - \Delta w^\epsilon + w^\epsilon = f(w^\epsilon ), &{}\quad \hbox { in } R^\epsilon , \\ \frac{\partial w^\epsilon }{\partial \nu ^\epsilon } = 0&{} \quad \hbox { on } \partial R^\epsilon , \end{array} \right. t>0, \end{aligned}$$
(1.2)

where \(\nu ^\epsilon \) is the unit outward normal to \(\partial R^\epsilon \), \(\frac{\partial }{\partial \nu ^\epsilon }\) is the outward normal derivative and \(f:{\mathbb {R}}\mapsto {\mathbb {R}}\) is a \(\mathcal {C}^2\)-function with bounded derivatives. Since we are interested in the behavior of solutions as \(t\rightarrow \infty \) and in its dependence on the small parameter \(\epsilon \), we require the solutions of (1.2) to be bounded for large values of time. A natural assumption guaranteeing this boundedness is the following dissipative condition

$$\begin{aligned} \limsup _{|s| \rightarrow \infty } \frac{f(s)}{s} < 0. \end{aligned}$$
(1.3)

From the point of view of investigating the asymptotic dynamics, assuming that \(f\) has bounded derivatives does not impose any restriction, since we are interested in dissipative nonlinearities. Indeed, it follows from [3, 7] that, under the usual growth assumptions, the attractors are uniformly bounded in \(L^\infty \) with respect to \(\epsilon \), and we may truncate the nonlinearities in a suitable way, making them bounded with bounded derivatives. Recall that an attractor is a compact invariant set which attracts all bounded sets of the phase space. It contains all the asymptotic dynamics of the system, and all globally defined bounded solutions lie in the attractor.
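
A standard example of a nonlinearity satisfying the dissipative condition (1.3), mentioned here only as an illustration and not assumed anywhere in the paper, is the bistable function

$$\begin{aligned} f(s) = s - s^3, \qquad \frac{f(s)}{s} = 1 - s^2 \rightarrow -\infty \quad \hbox { as } |s| \rightarrow \infty , \end{aligned}$$

which, after the truncation procedure described above, fits the standing assumption of a \(\mathcal {C}^2\)-function with bounded derivatives without affecting the asymptotic dynamics.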

In order to analyze problem (1.2) and its related linear elliptic and parabolic problems, we first perform a simple change of variables which consists in stretching the domain in the \(y\)-direction by a factor of \(1/\epsilon \). As in [28, 37–39], we use \(x_1=x\), \(x_2=y/\epsilon \) to transform \(R^\epsilon \) into the domain

$$\begin{aligned} \Omega ^\epsilon = \{ (x_1,x_2) \in {\mathbb {R}}^2 \; | \; x_1 \in (0,1) \hbox { and } - G_\epsilon (x_1) < x_2 < H_\epsilon (x_1) \}. \end{aligned}$$
(1.4)

By doing so, we obtain a domain which is no longer thin, although it presents a very highly oscillatory behavior, since its upper and lower boundaries are the graphs of the oscillating functions \(H_\epsilon \) and \(G_\epsilon \). Under this change, Eq. (1.2) is transformed into

$$\begin{aligned} \left\{ \begin{array}{ll} u^\epsilon _{t} - \frac{\partial ^2 u^\epsilon }{{\partial x_1}^2} - \frac{1}{\epsilon ^2} \frac{\partial ^2 u^\epsilon }{{\partial x_2}^2} + u^\epsilon = f(u^\epsilon )&{} \quad \hbox { in } \Omega ^\epsilon \\ \frac{\partial u^\epsilon }{\partial x_1} N_1^\epsilon + \frac{1}{\epsilon ^2} \frac{\partial u^\epsilon }{\partial x_2}N_2^\epsilon = 0 &{}\quad \hbox { on } \partial \Omega ^\epsilon \end{array} \right. t>0, \end{aligned}$$
(1.5)

where \(N^\epsilon =(N^\epsilon _1,N^\epsilon _2)\) is the outward normal to the boundary of \(\Omega ^\epsilon \).

Observe the factor \(1/\epsilon ^2\) in front of the second derivative in the \(x_2\)-direction, which corresponds to a very fast diffusion in the vertical direction. In some sense, we have substituted the thin domain \(R^\epsilon \) by the non-thin domain \(\Omega ^\epsilon \), at the expense of a very strong diffusion mechanism in the \(x_2\)-direction. Because of the presence of this very strong diffusion mechanism, it is expected that the solutions of (1.5) become homogeneous in the \(x_2\)-direction, so that the limiting solution will not depend on this direction, and therefore the limiting problem will be one dimensional. This fact is in agreement with the intuitive idea that an equation in a thin domain should approach an equation on a line segment.
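
Indeed, this factor is a direct consequence of the chain rule: writing \(u^\epsilon (x_1,x_2) = w^\epsilon (x_1, \epsilon x_2)\), that is, \(w^\epsilon (x,y) = u^\epsilon (x, y/\epsilon )\), we have

$$\begin{aligned} \frac{\partial ^2 w^\epsilon }{\partial x^2} = \frac{\partial ^2 u^\epsilon }{\partial x_1^2} \quad \hbox { and } \quad \frac{\partial ^2 w^\epsilon }{\partial y^2} = \frac{1}{\epsilon ^2} \frac{\partial ^2 u^\epsilon }{\partial x_2^2}, \end{aligned}$$

so that \(w^\epsilon _t - \Delta w^\epsilon + w^\epsilon = f(w^\epsilon )\) becomes the first equation in (1.5).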

We obtain the following limit problem for (1.5) as \(\epsilon \) goes to zero:

$$\begin{aligned} \left\{ \begin{array}{l} u_{t} - \frac{1}{p(x)} \left( q(x) \, u_x \right) _x + u = f(u), \quad x \in (0,1), \\ u_x(0) = u_x(1) = 0, \end{array} \right. \quad t>0, \end{aligned}$$
(1.6)

where the smooth positive functions \(p\) and \(q:(0,1) \mapsto (0,\infty )\) are given by

$$\begin{aligned} q(x)&= \frac{1}{l_h} \int \limits _{Y^*(x)} \left\{ 1 - \frac{\partial X(x)}{\partial y_1}(y_1,y_2) \right\} \hbox {d}y_1 \hbox {d}y_2, \\ p(x)&= \frac{|Y^*(x)|}{l_h} + \frac{1}{l_g} \int \limits _0^{l_g} G(x,y) \, \hbox {d}y - G_0(x), \\ G_0(x)&= \min _{y \in {\mathbb {R}}} G(x,y), \end{aligned}$$

and \(X(x)\) is the unique solution of the problem

$$\begin{aligned} \left\{ \begin{array}{ll} - \Delta X(x) = 0&{} \hbox { in } Y^*(x) \\ \frac{\partial X(x)}{\partial N} = 0&{} \hbox { on } B_2(x) \\ \frac{\partial X(x)}{\partial N} = N_1&{} \hbox { on } B_1(x) \\ X(x)\quad l_{h}-\hbox {periodic} &{}\hbox { on } B_0(x) \\ \int \nolimits _{Y^*(x)} X(x) \; \hbox {d}y_1 \hbox {d}y_2 = 0&{} \end{array} \right. \end{aligned}$$

in the representative cell \(Y^*(x)\) given by

$$\begin{aligned} Y^*(x) = \{ (y_1,y_2) \in {\mathbb {R}}^2 \; | \; 0< y_1 < l_h, \quad -G_0(x) < y_2 < H(x,y_1) \}, \end{aligned}$$

where \(B_0(x)\), \(B_1(x)\) and \(B_2(x)\) are the lateral, upper and lower boundaries of \(\partial Y^*(x)\) for \(x \in (0,1)\). Note that the auxiliary solution \(X(x)\) and the representative cell \(Y^*(x)\) depend on the variable \(x\), defining a non-constant homogenized coefficient \(q(x)\) for the homogenized equation (1.6).
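
To illustrate the formulas above, consider the particular case, used here only as an illustration, in which the upper profile does not oscillate, that is, \(H(x,y)=H(x)\). Then \(N_1 = 0\) on the flat upper boundary \(B_1(x)\), the unique solution of the auxiliary problem is \(X(x) \equiv 0\), and the coefficients reduce to

$$\begin{aligned} q(x) = \frac{|Y^*(x)|}{l_h} = H(x) + G_0(x) \quad \hbox { and } \quad p(x) = H(x) + \frac{1}{l_g} \int \limits _0^{l_g} G(x,y) \, \hbox {d}y. \end{aligned}$$

Hence, the very fast oscillations of the lower boundary enter the diffusion coefficient \(q\) only through their minimum level \(G_0(x)\), while they contribute their average to \(p\).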

If the nonlinearity \(f\) satisfies the dissipative condition (1.3), then both equations (1.5) and (1.6) define nonlinear semigroups possessing global attractors \(\fancyscript{A}_\epsilon \subset H^1(\Omega ^\epsilon )\) and \(\fancyscript{A}_0\subset H^1(0,1)\), respectively. In this work, we obtain the continuity of the nonlinear semigroups, as well as the upper semicontinuity of the family of attractors \(\fancyscript{A}_\epsilon \) and of the set of equilibria at \(\epsilon = 0\), obtaining convergence properties for the dynamics defined by problems (1.5) and (1.6).

There are several works in the literature dealing with partial differential equations in thin domains presenting oscillating boundaries. We mention [31, 32], which studied asymptotic approximations of solutions to parabolic and elliptic problems in thin perforated domains with rapidly varying thickness, and [14–16], which consider nonlinear monotone problems in a multidomain with a highly oscillating boundary. In addition, we also cite [1, 12, 17], in which the asymptotic description of nonlinearly elastic thin films with a fast-oscillating profile was successfully obtained in the context of \(\Gamma \)-convergence [24].

Recently, we have studied several classes of oscillating thin domains, discussing limit problems and convergence properties [6, 8–10, 36]. In [11], the authors deal with a linear elliptic problem in a thin domain presenting a doubly oscillatory behavior, related to the present one, but with constant profile, that is, assuming \(G_\epsilon (x)=g(x/\epsilon )\) and \(H_\epsilon (x)=h(x/\epsilon )\) for periodic functions \(g\) and \(h\). We call this situation the purely periodic case.

Our goal here is to consider a semilinear parabolic problem in \(R^\epsilon \) also presenting a doubly oscillatory behavior, but now with variable profile, usually called the locally periodic case. We allow much more complicated geometries, combining different oscillation orders, and we establish the limit problem as well as its dependence on the geometry of the thin domain. Indeed, we obtain an explicit relationship among the limit equation and the oscillations, the profile and the thickness of the thin domain. It is worth observing that this is not an easy task. In order to do so, we first need to combine different techniques introduced in [9, 10] and [11] to investigate the linear elliptic problem. We use extension operators and oscillating test functions from homogenization theory, together with boundary perturbation results, to obtain the limit problem for the elliptic equation. Next, we apply the theory of dissipative systems and attractors to obtain the continuity of the nonlinear semigroups and the upper semicontinuity of the attractors and of the stationary states of the parabolic problem proposed here.

We refer to [13, 19, 22, 23, 27, 29, 35, 40, 44] for general introductions to homogenization theory and to the theory of dissipative systems and attractors, respectively. There are not many results on the behavior of global attractors of dissipative systems under perturbations related to homogenization; we would like to cite [20, 21, 25, 26].

Finally, we point out that thin structures with rough contours (thin rods, plates or shells), fluids filling thin domains (lubrication) or chemical diffusion processes in the presence of grainy narrow strips (catalytic processes) are very common in engineering and applied science. Analyzing the properties of these structures and of the processes taking place on them, and understanding how the micro-geometry of the thin structure affects the macroscopic properties of the material, is a very relevant issue in engineering and material design. Thus, obtaining the limiting equation of a prototype equation in different structures where the micro-geometry is not necessarily smooth, and analyzing how the different microscales affect the limiting problem, goes in this direction and will allow the study and understanding of more complicated situations. See [16, 18, 30, 33] for some concrete applied problems.

This paper is organized as follows. In Sect. 2, we set up the notation and state some technical results which will be used later in the proofs. In Sect. 3, we investigate the linear elliptic problem in thin domains assuming, in addition, that \(G_\epsilon \) and \(H_\epsilon \) are piecewise periodic functions, obtaining Lemma 3.1. Next, in Sect. 4, we use Lemma 3.1 and the continuous dependence result on the domain given by Proposition 2.4 in order to prove the main result concerning the linear elliptic problem associated with (1.5), namely Theorem 4.1. In Sect. 5, we obtain the continuity of the linear semigroups defined by (1.5) from Theorem 4.1, and in Sect. 6, we prove the main result of the paper related to the parabolic problem (1.5), obtaining the upper semicontinuity of the family of attractors and of the stationary states in Theorem 6.1.

We also note that, although we deal with Neumann boundary conditions, we may consider different conditions on the lateral boundaries of the thin domain \(R^\epsilon \), as long as we preserve the Neumann type boundary condition on the upper and lower boundaries. Homogeneous Dirichlet or even Robin conditions can be set on the lateral boundaries of problem (1.5). The limit problem will preserve these boundary conditions as point conditions.

2 Basic facts and notations

Let us consider two families of positive functions \(G_\epsilon \), \(H_\epsilon : (0,1) \rightarrow (0,\infty )\), with \(\epsilon \in (0, \epsilon _0)\) for some \(\epsilon _0>0\) satisfying the following hypothesis

(H) There exist nonnegative constants \(G_0\), \(G_1\), \(H_0\) and \(H_1\) such that

$$\begin{aligned} 0< G_0 \le G_\epsilon (x) \le G_1 \quad \hbox { and } \quad 0< H_0 \le H_\epsilon (x) \le H_1, \end{aligned}$$

for all \(x\in (0,1)\) and \(\epsilon \in (0,\epsilon _0)\). Moreover, the functions \(G_\epsilon \) and \(H_\epsilon \) are of the type

$$\begin{aligned} G_\epsilon (x) = G(x,x/\epsilon ^\alpha ), \quad \hbox { for some } \alpha >1, \hbox { and } \quad H_\epsilon (x)=H(x,x/\epsilon ), \end{aligned}$$
(2.1)

where the functions \(H\), \(G:[0,1]\times {\mathbb {R}}\mapsto (0,+\infty )\) are periodic in the second variable, that is, there exist positive constants \(l_g\) and \(l_h\) such that \(G(x,y+l_g)=G(x,y)\) and \(H(x,y+l_h)=H(x,y)\) for all \((x,y) \in [0,1] \times {\mathbb {R}}\). We also suppose that \(G\) and \(H\) are piecewise \(C^1\) with respect to the first variable, that is, there exists a finite number of points \(0=\xi _0<\xi _1<\cdots <\xi _{N-1}<\xi _N=1\) such that the functions \(G\) and \(H\) restricted to each set \((\xi _i,\xi _{i+1}) \times {\mathbb {R}}\) are \(C^1\), with \(G\), \(H\), \(G_x\), \(H_x\), \(G_y\) and \(H_y\) uniformly bounded in \((\xi _i,\xi _{i+1}) \times {\mathbb {R}}\) and having limits as we approach \(\xi _i\) and \(\xi _{i+1}\).

In this work, we consider the highly oscillating thin domain \(R^\epsilon \) defined in (1.1) as the open set bounded by the graphs of the functions \(-\epsilon \, G_\epsilon \) and \(\epsilon \, H_\epsilon \). Since we take \(\alpha > 1\) to define \(G_\epsilon \) in (2.1), we allow the lower boundary of the thin domain \(R^\epsilon \) to present a very high oscillatory behavior. In fact, as \(\epsilon \rightarrow 0\), the period of these oscillations (of order \(\sim \epsilon ^\alpha \)) is much smaller than their amplitude (of order \(\sim \epsilon \)), than the height of the thin domain (of order \(\sim \epsilon \)) and than the period of the oscillations of the upper boundary (of order \(\sim \epsilon \)) given by the function \(H_\epsilon \).

A function satisfying the above conditions is \(F(x,y)=a(x)+\sum _{r=1}^N b_r(x)g_r(y)\) where \(a\), \(b_1,\ldots ,b_N\) are piecewise \(C^1\) with \(g_1,\ldots ,g_N\) also \(C^1\) and \(l\)-periodic for some \(l>0\).

In order to study the dynamics defined by (1.2) in \(R^\epsilon \), we first study the solutions of the linear elliptic equation associated with the equivalent problem introduced by (1.5). We consider the following elliptic problem with homogeneous Neumann boundary condition

$$\begin{aligned} \left\{ \begin{array}{ll} - \frac{\partial ^2 u^\epsilon }{{\partial x_1}^2} - \frac{1}{\epsilon ^2} \frac{\partial ^2 u^\epsilon }{{\partial x_2}^2} + u^\epsilon = f^\epsilon &{}\quad \hbox { in } \Omega ^\epsilon \\ \frac{\partial u^\epsilon }{\partial x_1} N_1^\epsilon + \frac{1}{\epsilon ^2} \frac{\partial u^\epsilon }{\partial x_2}N_2^\epsilon = 0 &{}\quad \hbox { on } \partial \Omega ^\epsilon \end{array} \right. \end{aligned}$$
(2.2)

where \(N^\epsilon =(N_1^\epsilon ,N_2^\epsilon )\) is the outward unit normal to \(\partial \Omega ^\epsilon \), and \(\Omega ^\epsilon \) is the oscillating domain (1.4). Moreover, we are taking \(f^\epsilon \in L^2(\Omega ^\epsilon )\) satisfying the uniform condition

$$\begin{aligned} \Vert f^\epsilon \Vert _{L^2(\Omega ^\epsilon )}\le C, \quad \forall \epsilon > 0, \end{aligned}$$
(2.3)

for some \(C>0\) independent of \(\epsilon \). By the Lax–Milgram Theorem, problem (2.2) has a unique solution for each \(\epsilon > 0\). We first analyze the behavior of these solutions as \(\epsilon \rightarrow 0\), that is, as the domain gets thinner and thinner, although with a highly oscillating boundary.

Recall that the equivalence between problems (1.2) and (1.5) is established by rescaling the domain \(R^\epsilon \) in the vertical direction through the change of variables \(x_1=x\), \(x_2=y/\epsilon \); see [28] for more details. Also, the domain \(\Omega ^\epsilon \) is not thin anymore, but presents very wild oscillations at the top and bottom boundaries, although the presence of a high diffusion coefficient in front of the derivative with respect to the second variable balances the effect of these wild oscillations.
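
For the reader's convenience, we recall the elementary effect of this rescaling on the norms: if \(v \in H^1(R^\epsilon )\) and \(u(x_1,x_2)=v(x_1,\epsilon x_2)\), then a change of variables gives

$$\begin{aligned} \Vert v \Vert ^2_{L^2(R^\epsilon )} = \epsilon \, \Vert u \Vert ^2_{L^2(\Omega ^\epsilon )} \quad \hbox { and } \quad \Vert \nabla v \Vert ^2_{L^2(R^\epsilon )} = \epsilon \, \Big \{ \Big \Vert \frac{\partial u}{\partial x_1} \Big \Vert ^2_{L^2(\Omega ^\epsilon )} + \frac{1}{\epsilon ^2} \Big \Vert \frac{\partial u}{\partial x_2} \Big \Vert ^2_{L^2(\Omega ^\epsilon )} \Big \} , \end{aligned}$$

which explains why the weight \(1/\epsilon ^2\) in front of the \(x_2\)-derivatives is the natural one for problem (2.2) and its variational formulation.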

The variational formulation of (2.2) is: find \(u^\epsilon \in H^1(\Omega ^\epsilon )\) such that

$$\begin{aligned} \int \limits _{\Omega ^\epsilon } \Big \{ \frac{\partial u^\epsilon }{\partial x_1} \frac{\partial \varphi }{\partial x_1} + \frac{1}{\epsilon ^2} \frac{\partial u^\epsilon }{\partial x_2} \frac{\partial \varphi }{\partial x_2} + u^\epsilon \varphi \Big \} \hbox {d}x_1 \hbox {d}x_2 = \int \limits _{\Omega ^\epsilon } f^\epsilon \varphi \hbox {d}x_1 \hbox {d}x_2, \, \forall \varphi \in H^1(\Omega ^\epsilon ). \end{aligned}$$
(2.4)

Thus, we obtain that the solutions \(u^\epsilon \) satisfy an a priori estimate which is uniform in \(\epsilon \). Indeed, taking \(\varphi = u^\epsilon \) in expression (2.4), we obtain

$$\begin{aligned} \Big \Vert \frac{\partial u^\epsilon }{\partial x_1} \Big \Vert _{L^2(\Omega ^\epsilon )}^2 + \frac{1}{\epsilon ^2}\Big \Vert \frac{\partial u^\epsilon }{\partial x_2} \Big \Vert _{L^2(\Omega ^\epsilon )}^2 + \Vert u^\epsilon \Vert _{L^2(\Omega ^\epsilon )}^2 \le \Vert f^\epsilon \Vert _{L^2(\Omega ^\epsilon )} \Vert u^\epsilon \Vert _{L^2(\Omega ^\epsilon )}. \end{aligned}$$
(2.5)

Consequently, it follows from (2.3) that

$$\begin{aligned} \Vert u^\epsilon \Vert _{L^2(\Omega ^\epsilon )}, \Big \Vert \frac{\partial u^\epsilon }{\partial x_1} \Big \Vert _{L^2(\Omega ^\epsilon )} \hbox { and } \frac{1}{\epsilon } \Big \Vert \frac{\partial u^\epsilon }{\partial x_2} \Big \Vert _{L^2(\Omega ^\epsilon )} \le C, \quad \forall \epsilon >0. \end{aligned}$$
(2.6)
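
For completeness, let us indicate how (2.6) follows from (2.3) and (2.5). Since the left-hand side of (2.5) dominates \(\Vert u^\epsilon \Vert ^2_{L^2(\Omega ^\epsilon )}\), we first get

$$\begin{aligned} \Vert u^\epsilon \Vert ^2_{L^2(\Omega ^\epsilon )} \le \Vert f^\epsilon \Vert _{L^2(\Omega ^\epsilon )} \Vert u^\epsilon \Vert _{L^2(\Omega ^\epsilon )} \quad \Longrightarrow \quad \Vert u^\epsilon \Vert _{L^2(\Omega ^\epsilon )} \le \Vert f^\epsilon \Vert _{L^2(\Omega ^\epsilon )} \le C, \end{aligned}$$

and then each term on the left-hand side of (2.5) is bounded by \(\Vert f^\epsilon \Vert _{L^2(\Omega ^\epsilon )} \Vert u^\epsilon \Vert _{L^2(\Omega ^\epsilon )} \le C^2\), which gives (2.6) with a possibly different constant.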

Since we have to compare functions defined on the varying domains \(\Omega ^\epsilon \) for \(\epsilon > 0\), we need to introduce suitable extension operators \(P_\epsilon \). We note that this approach is very common in homogenization theory. For the current analysis, we extend the functions only above the upper boundary of the domain \(\Omega ^\epsilon \), namely, into the open set \(\widetilde{\Omega }^\epsilon \) defined by

$$\begin{aligned} \begin{array}{l} \widetilde{\Omega }^\epsilon = \{ (x_1,x_2) \in {\mathbb {R}}^2 \; | \; x_1 \in (0,1), \; - G_\epsilon (x_1) < x_2 < H_1 \} \setminus \\ \displaystyle \qquad \qquad \cup _{i=1}^N \{ (\xi _i,x_2) \, | \, \min \{ H_{0,i-1}, H_{0,i} \} < x_2 < H_1\}, \end{array} \end{aligned}$$
(2.7)

where \( H_{0,i} = \min _{y \in {\mathbb {R}}} H(\xi _i, y)\), and the points \(0=\xi _0<\xi _1<\cdots <\xi _{N-1}<\xi _N=1\) and the positive constant \(H_1\) are given by hypothesis (H).

Lemma 2.1

Under the conditions described above, there exists an extension operator

$$\begin{aligned} P_{\epsilon } \in \mathcal {L}(L^p(\Omega ^\epsilon ),L^p(\widetilde{\Omega }^\epsilon )) \cap \mathcal {L}(W^{1,p}(\Omega ^\epsilon ),W^{1,p}(\widetilde{\Omega }^\epsilon )) \end{aligned}$$

and a constant \(K\) independent of \(\epsilon \) and \(p\) such that

$$\begin{aligned}&\Vert P_{\epsilon } \varphi \Vert _{L^p(\widetilde{\Omega }^\epsilon )} \le K \, \Vert \varphi \Vert _{L^p(\Omega ^\epsilon )} \nonumber \\&\Big \Vert \frac{\partial P_{\epsilon } \varphi }{\partial x_1} \Big \Vert _{L^p(\widetilde{\Omega }^\epsilon )} \le K \, \Big \{ \Big \Vert \frac{\partial \varphi }{\partial x_1} \Big \Vert _{L^p(\Omega ^\epsilon )} + \eta (\epsilon ) \, \Big \Vert \frac{\partial \varphi }{\partial x_2} \Big \Vert _{L^p(\Omega ^\epsilon )} \Big \} \\&\Big \Vert \frac{\partial P_{\epsilon } \varphi }{\partial x_2} \Big \Vert _{L^p(\widetilde{\Omega }^\epsilon )} \le K \, \Big \Vert \frac{\partial \varphi }{\partial x_2} \Big \Vert _{L^p(\Omega ^\epsilon )}\nonumber \end{aligned}$$
(2.8)

for all \(\varphi \in W^{1,p}(\Omega ^\epsilon )\) where \(1 \le p \le \infty \) and \( \eta (\epsilon ) = \sup _{x \in (0,1)} \{ | H'_\epsilon (x) | \}, \quad \epsilon > 0. \)

Proof

This result can be obtained using a reflection procedure over the upper oscillating boundary of \(\Omega ^\epsilon \). See [6, 9] for details. \(\square \)
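
Let us also record the order of \(\eta (\epsilon )\) in our setting, as a simple consequence of hypothesis (H): at every point where \(H_\epsilon \) is differentiable,

$$\begin{aligned} H'_\epsilon (x) = H_x(x,x/\epsilon ) + \frac{1}{\epsilon } H_y(x,x/\epsilon ), \quad \hbox { so that } \quad \eta (\epsilon ) \le \sup |H_x| + \frac{1}{\epsilon } \sup |H_y| \le \frac{C}{\epsilon }, \end{aligned}$$

for some \(C>0\) independent of \(\epsilon \). Combined with the bound \(\frac{1}{\epsilon } \Vert \frac{\partial u^\epsilon }{\partial x_2} \Vert _{L^2(\Omega ^\epsilon )} \le C\) in (2.6), the term \(\eta (\epsilon ) \Vert \frac{\partial u^\epsilon }{\partial x_2} \Vert _{L^2(\Omega ^\epsilon )}\) appearing in (2.8) remains bounded uniformly in \(\epsilon \) when the operator is applied to the solutions \(u^\epsilon \).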

Remark 2.2

  1. (i)

    Note that operator \(P_\epsilon \) preserves periodicity in the \(x_1\) variable. Indeed, under this reflection procedure, we have that if the function \(\varphi \) is periodic in \(x_1\), then the extended function \(P_\epsilon \varphi \) is also periodic in \(x_1\).

  2. (ii)

    Lemma 2.1 can also be applied to the case where \(G_\epsilon \) and \(H_\epsilon \) are independent of \(\epsilon \). In particular, we can still apply this extension operator to the representative cell \(Y^*\).

Remark 2.3

  1. (i)

    If for each \(w \in W^{1,p}(\mathcal {O})\) we denote by \(||| \cdot |||\) the norm

    $$\begin{aligned} ||| w |||_{W^{1,p}(\mathcal {O})}^p = \Vert w \Vert _{L^p(\mathcal {O})}^p + \Big \Vert \frac{\partial w}{\partial x_1} \Big \Vert _{L^p(\mathcal {O})}^p + \eta (\epsilon ) \Big \Vert \frac{\partial w}{\partial x_2} \Big \Vert _{L^p(\mathcal {O})}^p, \end{aligned}$$

    then the extension operator \(P_\epsilon \) satisfies \( ||| P_\epsilon w |||_{W^{1,p}(\widetilde{\Omega }^\epsilon )} \le K_0 ||| w |||_{W^{1,p}(\Omega ^\epsilon )} \) for some \(K_0>0\) independent of \(\epsilon \). The norm \(||| \cdot |||_{W^{1,p}}\) is equivalent to the usual one.

  2. (ii)

    Analogously, we can set \(H^1_\epsilon (\mathcal {O})\) as the Sobolev space \(H^1(\mathcal {O})\) with the equivalent norm

    $$\begin{aligned} \Vert w \Vert _{H^1_\epsilon (\mathcal {O})}^2 = \Vert w \Vert _{L^2(\mathcal {O})}^2 + \Big \Vert \frac{\partial w}{\partial x_1} \Big \Vert _{L^2(\mathcal {O})}^2 + \frac{1}{\epsilon ^2} \Big \Vert \frac{\partial w}{\partial x_2} \Big \Vert _{L^2(\mathcal {O})}^2. \end{aligned}$$

Now let us discuss how the solutions of (2.2) depend on the domain \(\Omega ^\epsilon \) and, more precisely, on the functions \(G_\epsilon \) and \(H_\epsilon \). As a matter of fact, we have a continuous dependence result with respect to \(L^\infty \) perturbations of these functions, uniformly in \(\epsilon \). Assume \(G_\epsilon \), \(\widehat{G}_\epsilon \), \(H_\epsilon \) and \(\widehat{H}_\epsilon \) are piecewise continuous functions satisfying hypothesis (H) and consider the associated oscillating domains \(\Omega ^\epsilon \) and \(\widehat{\Omega }^\epsilon \) given by

$$\begin{aligned}&\Omega ^\epsilon = \{ (x_1,x_2) \in {\mathbb {R}}^2 \; | \; x_1 \in (0,1), \quad -G_\epsilon (x_1) < x_2 < H_\epsilon (x_1) \}, \\&\widehat{\Omega }^\epsilon = \{ (x_1,x_2) \in {\mathbb {R}}^2 \; | \; x_1 \in (0,1), \quad -\widehat{G}_\epsilon (x_1) < x_2 < \widehat{H}_\epsilon (x_1) \}. \end{aligned}$$

Let \(u^\epsilon \) and \(\widehat{u}^\epsilon \) be the solutions of the problem (2.2) in the oscillating domains \(\Omega ^\epsilon \) and \(\widehat{\Omega }^\epsilon \), respectively, with \(f^\epsilon \in L^2({\mathbb {R}}^2)\). Then we have the following result:

Proposition 2.4

There exists a positive real function \(\rho :[0,\infty ) \mapsto [0,\infty )\) such that

$$\begin{aligned} \Vert u^\epsilon -\widehat{u}^\epsilon \Vert ^2_{H^1_\epsilon (\Omega ^\epsilon \cap \widehat{\Omega }^\epsilon )} + \Vert u^\epsilon \Vert ^2_{H^1_\epsilon (\Omega ^\epsilon \setminus \widehat{\Omega }^\epsilon )} + \Vert \widehat{u}^\epsilon \Vert ^2_{H^1_\epsilon (\widehat{\Omega }^\epsilon \setminus \Omega ^\epsilon )} \le \rho (\delta ) \end{aligned}$$

with \(\rho (\delta )\rightarrow 0\) as \(\delta \rightarrow 0\) uniformly for all

  1. (i)

    \(\epsilon > 0\);

  2. (ii)

    piecewise \(C^1\) functions \(G_\epsilon \), \(\widehat{G}_\epsilon \), \(H_\epsilon \) and \(\widehat{H}_\epsilon \) with

    $$\begin{aligned}&0\le G_0 \le G_\epsilon (x), \widehat{G}_\epsilon (x) \le G_1, \quad 0< H_0 \le H_\epsilon (x), \widehat{H}_\epsilon (x) \le H_1, \\&\Vert G_\epsilon -\widehat{G}_\epsilon \Vert _{L^\infty (0,1)} \le \delta \quad \hbox { and } \quad \Vert H_\epsilon -\widehat{H}_\epsilon \Vert _{L^\infty (0,1)} \le \delta ; \end{aligned}$$
  3. (iii)

    \(f^\epsilon \in L^2({\mathbb {R}}^2)\), \(\Vert f^\epsilon \Vert _{L^2({\mathbb {R}}^2)}\le 1\).

Proof

The proof is quite analogous to the one performed in [9, Theorem 4.1], since we are taking functions \(G\) and \(H\) satisfying \(\mathbf{(H)}\) with constant periods \(l_g\) and \(l_h\), respectively. \(\square \)

Remark 2.5

The important part of this result is that the positive function \(\rho (\delta )\) does not depend on \(\epsilon \). It only depends on the nonnegative constants \(G_0\), \(G_1\), \(H_0\) and \(H_1\).

Finally, we mention some important estimates on the solutions of an elliptic problem posed in rectangles of the type

$$\begin{aligned} Q_\epsilon =\{ (x,y) \in {\mathbb {R}}^2 \; | \; -\epsilon ^\alpha <x<\epsilon ^\alpha , \, 0<y<1\} \end{aligned}$$

with \(\alpha >1\). For each \(u_0 \in H^1(-\epsilon ^\alpha ,\epsilon ^\alpha )\), let us define \(u^\epsilon (x,y)\) as the unique solution of

$$\begin{aligned} \left\{ \begin{array}{ll} - \frac{\partial ^2 u^\epsilon }{{\partial x}^2} - \frac{1}{\epsilon ^2} \frac{\partial ^2 u^\epsilon }{{\partial y}^2}= 0 &{}\quad \hbox { in } Q_\epsilon , \\ \qquad u^\epsilon (x,0)=u_0(x),&{}\quad \hbox { on } \Gamma _\epsilon ,\\ \frac{\partial u^\epsilon }{\partial \nu }=0,&{}\quad \hbox { on } \partial Q_\epsilon \setminus \Gamma _\epsilon \end{array} \right. \end{aligned}$$
(2.9)

where \(\nu \) is the outward unit normal to \(\partial Q_\epsilon \) and \( \Gamma _\epsilon = \{ (x,0) \in {\mathbb {R}}^2 \, | \, -\epsilon ^\alpha <x<\epsilon ^\alpha \}. \)

Lemma 2.6

With the notations above, if we denote by \(\bar{u}_0\) the average of \(u_0\) in \(\Gamma _\epsilon \), that is

$$\begin{aligned} \bar{u}_0=\frac{1}{2\epsilon ^\alpha } \int \limits _{-\epsilon ^\alpha }^{\epsilon ^\alpha }u_0(x) \, \hbox {d}x, \end{aligned}$$

then, there exists a constant \(C\), independent of \(\epsilon \) and \(u_0\), such that

$$\begin{aligned}&\int \limits _{-\epsilon ^\alpha }^{\epsilon ^\alpha }|u^\epsilon (x,y)-\bar{u}_0|^2 \, \hbox {d}x \le C \exp \left\{ -\frac{2y\pi }{\epsilon ^{\alpha -1}}\right\} \Vert u_0\Vert _{L^2(-\epsilon ^\alpha ,\epsilon ^\alpha )}^2\\&\int \limits _0^1\int \limits _{-\epsilon ^\alpha }^{\epsilon ^\alpha }|u^\epsilon (x,y)-\bar{u}_0|^2 \, \hbox {d}x\hbox {d}y \le C\epsilon ^{\alpha -1} \Vert u_0\Vert _{L^2(-\epsilon ^\alpha ,\epsilon ^\alpha )}^2 \end{aligned}$$

and

$$\begin{aligned} \left\| \frac{\partial u^\epsilon }{\partial x}\right\| _{L^2(Q_\epsilon )}^2+\frac{1}{\epsilon ^2}\left\| \frac{\partial u^\epsilon }{\partial y}\right\| ^2_{L^2(Q_\epsilon )}\le C \epsilon ^{\alpha -1} \left\| \frac{\partial u_0}{\partial x}\right\| _{L^2(-\epsilon ^\alpha ,\epsilon ^\alpha )}^2. \end{aligned}$$
(2.10)

Proof

The proof follows from the known fact that the solution of the problem (2.9) can be found explicitly and admits a Fourier decomposition of the form

$$\begin{aligned} u^\epsilon (x,y) = \frac{1}{2\epsilon ^\alpha }\int \limits _{-\epsilon ^\alpha }^{\epsilon ^\alpha } u_0(\tau ) d\tau + \sum _{ k=1 }^\infty (u_0,\varphi _k^\epsilon )\varphi _k^\epsilon (x) \frac{\cosh (\frac{k\pi (1-y)}{\epsilon ^{\alpha -1}})}{\cosh (\frac{k\pi }{\epsilon ^{\alpha -1}})} \end{aligned}$$

where \(\varphi _k^\epsilon (x)=\epsilon ^{-\alpha /2}\cos (\frac{k\pi x}{\epsilon ^\alpha }),\) \(k=1,2, \ldots ,\) and \((u_0,\varphi _k^\epsilon )=(u_0,\varphi _k^\epsilon )_{L^2(-\epsilon ^\alpha , \epsilon ^\alpha )}\). \(\square \)
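
For the reader's convenience, we sketch how the first estimate of Lemma 2.6 follows from this decomposition. Writing \(\mu _k = k\pi /\epsilon ^{\alpha -1}\), each mode \(u_k^\epsilon (x,y)=(u_0,\varphi _k^\epsilon )\varphi _k^\epsilon (x) \cosh (\mu _k(1-y))/\cosh (\mu _k)\) satisfies the equation in (2.9) and the homogeneous Neumann conditions on \(\partial Q_\epsilon \setminus \Gamma _\epsilon \), since

$$\begin{aligned} - \frac{\partial ^2 u_k^\epsilon }{\partial x^2} - \frac{1}{\epsilon ^2} \frac{\partial ^2 u_k^\epsilon }{\partial y^2} = \Big [ \Big ( \frac{k\pi }{\epsilon ^\alpha } \Big )^2 - \frac{\mu _k^2}{\epsilon ^2} \Big ] u_k^\epsilon = 0. \end{aligned}$$

Moreover, \(\cosh (\mu _k(1-y))/\cosh (\mu _k) \le 2 e^{-\mu _k y} \le 2 e^{-\pi y/\epsilon ^{\alpha -1}}\) for \(k \ge 1\) and \(0 \le y \le 1\), so that Parseval's identity yields

$$\begin{aligned} \int \limits _{-\epsilon ^\alpha }^{\epsilon ^\alpha } |u^\epsilon (x,y)-\bar{u}_0|^2 \, \hbox {d}x = \sum _{k=1}^\infty (u_0,\varphi _k^\epsilon )^2 \Big [ \frac{\cosh (\mu _k(1-y))}{\cosh (\mu _k)} \Big ]^2 \le 4 \, e^{-\frac{2\pi y}{\epsilon ^{\alpha -1}}} \Vert u_0\Vert ^2_{L^2(-\epsilon ^\alpha ,\epsilon ^\alpha )}, \end{aligned}$$

and the second estimate follows by integrating this inequality in \(y \in (0,1)\).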

3 The piecewise periodic case

In this section, we establish the limit of the sequence \(\{ u^\epsilon \}_{\epsilon > 0}\) of solutions of the elliptic problem (2.2) as \(\epsilon \) goes to zero, in the case where the oscillating boundary of \(\Omega ^\epsilon \) is defined by piecewise periodic functions, that is, assuming that \(G_\epsilon \) and \(H_\epsilon \) are piecewise periodic.

More precisely, we suppose that the functions \(G\) and \(H\) satisfy hypothesis (H) and, in addition, that they are independent of the first variable in each of the open sets \((\xi _{i-1},\xi _i)\times {\mathbb {R}}\). Thus, with \(0=\xi _0<\xi _1<\cdots <\xi _{N-1}<\xi _{N}=1\) as in hypothesis (H), the functions \(G\) and \(H\) satisfy

$$\begin{aligned} G(x, y)=G_i(y) \quad \hbox { and } \quad H(x,y) = H_i(y), \quad \hbox { for } x \in (\xi _{i-1},\xi _i), \end{aligned}$$
(3.1)

with \(G_i(y+l_g)=G_i(y)\) and \(H_i(y+l_h)=H_i(y)\) for all \(y\in {\mathbb {R}}\). The functions \(G_i\) and \(H_i\) are \(C^1\)-functions satisfying \(0<G_0\le G_i(\cdot )\le G_1\) and \(0<H_0\le H_i(\cdot )\le H_1\) for all \(i=1,\ldots , N\), and then, the oscillating domain \(\Omega ^\epsilon \) is now

$$\begin{aligned} \begin{array}{l} \Omega ^\epsilon =\left\{ (x,y) \, | \, \xi _{i-1}<x<\xi _i, -G_i(x/\epsilon ^\alpha )<y<H_i(x/\epsilon ), i=1,\ldots , N\right\} \cup \\ \displaystyle \qquad \cup _{i=1}^{N-1}\left\{ (\xi _i, y) \, | \, -\min \{G_{i}(\xi _i/\epsilon ^\alpha ), G_{i+1}(\xi _i/\epsilon ^\alpha )\} < y < \min \{H_{i}(\xi _i/\epsilon ), H_{i+1}(\xi _i/\epsilon )\}\right\} , \end{array} \end{aligned}$$

as illustrated in Fig. 2. Also, the region \(\widetilde{\Omega }^\epsilon \), previously introduced in (2.7), is now given by

$$\begin{aligned} \begin{array}{l} \widetilde{\Omega }^\epsilon = \left\{ (x,y) \, | \, \xi _{i-1}<x<\xi _i, -G_i(x/\epsilon ^\alpha )<y<H_1, i=1,\ldots , N \right\} \cup \\ \displaystyle \qquad \cup _{i=1}^{N-1} \left\{ (\xi _i, y) \, | \, -\min \{G_{i}(\xi _i/\epsilon ^\alpha ), G_{i+1}(\xi _i/\epsilon ^\alpha )\} < y < \min \{ H_{0,i}, H_{0,i+1} \} \right\} , \end{array} \end{aligned}$$

with \(H_{0,i} = \min _{y \in {\mathbb {R}}} H_i(y)\), \(i=1, \ldots , N\).

We also denote by \(\Omega _0\) the convenient open set without oscillating boundaries given by

$$\begin{aligned} \Omega _0&= \left\{ (x,y) \, | \, \xi _{i-1}<x<\xi _i, -G_{0,i} <y<H_1, i=1,\ldots , N \right\} \cup \nonumber \\&\cup _{i=1}^{N-1} \left\{ (\xi _i, y) \, | \, -\min \{ G_{0,i}, G_{0,i+1} \} < y < \min \{ H_{0,i}, H_{0,i+1} \} \right\} , \end{aligned}$$
(3.2)

where the positive constants \(G_{0,i}\), \(i=1,\ldots ,N\), are given by \(G_{0,i} = \min _{y \in {\mathbb {R}}} G_i(y)\). With them, we define the following step function

$$\begin{aligned} G_0(x) = G_{0,i} = \min _{y \in {\mathbb {R}}} G_i(y), \quad \hbox { if } x \in (\xi _{i-1},\xi _i). \end{aligned}$$
(3.3)

Notice \(\Omega _0 \subset \widetilde{\Omega }^\epsilon \) for all \(\epsilon > 0\).

It is also important to observe that we still have the extension operator \(P_\epsilon \) constructed in Lemma 2.1, from the open region \(\Omega ^\epsilon \) into \(\widetilde{\Omega }^\epsilon \).

Fig. 2 A piecewise periodic domain \(\Omega ^\epsilon \)

Now we can prove the following result

Lemma 3.1

Assume that \(f^\epsilon \in L^2(\Omega ^\epsilon )\) satisfies (2.3) and that the function

$$\begin{aligned} {\hat{f}}^\epsilon (x) = \int \limits _{-G_\epsilon (x)}^{H_\epsilon (x)} f^\epsilon (x,s) \, \hbox {d}s, \quad x \in (0,1), \end{aligned}$$
(3.4)

satisfies \({\hat{f}}^\epsilon \rightharpoonup \hat{f}\), w-\(L^2(0,1)\).

Then, there exists \(\hat{u} \in H^1(0,1)\) such that, if \(P_\epsilon \) is the extension operator given by Lemma 2.1, then

$$\begin{aligned} \Vert P_{\epsilon } u^\epsilon - \hat{u} \Vert _{L^2(\widetilde{\Omega }^\epsilon )} \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$

where \(\hat{u}\) is the unique weak solution of the Neumann problem

$$\begin{aligned} \int \limits _0^1 \Big \{ q(x) \; u_x(x) \, \varphi _x(x) + p(x) \, u(x) \, \varphi (x) \Big \} \hbox {d}x = \int \limits _0^1 \hat{f}(x) \, \varphi (x) \, \hbox {d}x \end{aligned}$$
(3.5)

for all \(\varphi \in H^1(0,1)\), where \(p(x)\) and \(q(x)\) are piecewise constant functions defined a.e. in \((0,1)\) as follows: with \(0 = \xi _0 < \xi _1 < \cdots < \xi _N =1\) as above, \(p(x)=p_i\) for all \(x\in (\xi _{i-1},\xi _i)\), where

$$\begin{aligned} \begin{array}{l} p_i=\frac{|Y_i^*|}{l_h} + \frac{1}{l_g} \int \limits _0^{l_g} G_i(s) \, \hbox {d}s - G_{0,i}, \\ G_{0,i} = \min _{y \in {\mathbb {R}}} G_i(y), \end{array} \quad i=1,\ldots ,N, \end{aligned}$$
(3.6)

\(Y_i^*\) is the basic cell for \(x\in (\xi _{i-1},\xi _i)\), that is,

$$\begin{aligned} Y^*_i = \{ (y_1,y_2) \in {\mathbb {R}}^2 \; | \; 0< y_1 < l_h, \quad - G_{0,i} < y_2 < H_i(y_1) \}, \end{aligned}$$

and \(q(x)=q_i\) for all \(x\in (\xi _{i-1},\xi _i)\) where

$$\begin{aligned} q_i = \frac{1}{l_h} \int \limits _{Y^*_i} \Big \{ 1 - \frac{\partial X_i}{\partial y_1}(y_1,y_2) \Big \} \hbox {d}y_1 \hbox {d}y_2 \end{aligned}$$

and the function \(X_i\) is the unique solution of

$$\begin{aligned} \left\{ \begin{array}{ll} - \Delta X_i = 0 &{}\hbox { in } Y_i^* \\ \frac{\partial X_i}{\partial N} = 0&{} \hbox { on } B^i_2 \\ \frac{\partial X_i}{\partial N} = N_1&{} \hbox { on } B^i_1 \\ X_i \quad l_h-\hbox { periodic} &{}\hbox { on } B^i_0 \\ \int \limits _{Y_i^*} X_i \; \hbox {d}y_1 \hbox {d}y_2 = 0&{} \end{array} \right. \end{aligned}$$
(3.7)

where \(B_0^i\), \(B_1^i\) and \(B_2^i\) are the lateral, upper and lower boundaries of \(\partial Y^*_i\), respectively.

Remark 3.2

Note that if we call \(f_0(x)=\hat{f}(x)/p(x)\), then problem (3.5) is equivalent to

$$\begin{aligned} - r_i \, u_{xx}(x) + u(x) = f_0(x), \quad x \in (\xi _{i-1},\xi _i), \end{aligned}$$

for \(i = 1, \ldots , N\), where \(r_i=q_i/p_i\), together with the following boundary and transmission conditions

$$\begin{aligned} \left\{ \begin{array}{l} u_x(\xi _0) = u_x(\xi _N) = 0 \\ q_i \, u_x(\xi _i-) - q_{i+1} \, u_x(\xi _i+) = 0, \quad i = 1,\ldots , N -1. \end{array} \right. \end{aligned}$$

Here, \(u_x(\xi _i +)\) and \(u_x(\xi _i -)\) denote the right- and left-hand side limits of \(u_x\) at \(\xi _i\), respectively.
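
This can be seen by integrating by parts in (3.5) on each subinterval: for \(\varphi \in H^1(0,1)\),

$$\begin{aligned} \int \limits _0^1 \Big \{ q \, u_x \varphi _x + p \, u \, \varphi \Big \} \hbox {d}x = \sum _{i=1}^N \Big \{ \big [ q_i \, u_x \varphi \big ]_{\xi _{i-1}}^{\xi _i} + \int \limits _{\xi _{i-1}}^{\xi _i} \big ( - q_i \, u_{xx} + p_i \, u \big ) \varphi \, \hbox {d}x \Big \} , \end{aligned}$$

so the interior equation \(-q_i u_{xx} + p_i u = \hat{f}\), that is, \(-r_i u_{xx} + u = f_0\), holds in each \((\xi _{i-1},\xi _i)\), while the vanishing of the boundary terms for every \(\varphi \) yields \(u_x(\xi _0)=u_x(\xi _N)=0\) and the flux condition \(q_i u_x(\xi _i-) = q_{i+1} u_x(\xi _i+)\) at the interior nodes.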

Proof

In order to prove Lemma 3.1, we have to pass to the limit in the variational formulation of problem (2.2) given by (2.4). For this, we first divide the domain \(\widetilde{\Omega }^\epsilon \) into two open sets using an appropriate step function \(G_0^\epsilon \), depending on \(\epsilon \), which converges uniformly to the step function \(G_0\) defined in (3.3), the latter being independent of the parameter \(\epsilon \).

Let us denote by \(m_\epsilon \) the largest integer such that \(m_\epsilon l_g \epsilon ^{\alpha } \le 1\). Now, for each \(i=1,\ldots ,N\) and \(m=1,\ldots ,m_\epsilon \), we take the following point

$$\begin{aligned} \gamma _{\epsilon ,m}^i \in [(m-1) l_g \epsilon ^{\alpha }, m l_g \epsilon ^{\alpha }] \cap (\xi _{i-1},\xi _i), \end{aligned}$$
(3.8)

a minimum point of the piecewise periodic function \(G_\epsilon \) restricted to \([(m-1) l_g \epsilon ^{\alpha }, m l_g \epsilon ^{\alpha }] \cap (\xi _{i-1},\xi _i)\), a set that can be empty depending on the values of \(i\) and \(m\). As a consequence of this construction, it is easy to see that

$$\begin{aligned} G_i(\gamma _{\epsilon ,m}^i/\epsilon ^\alpha ) = \min _{y \in {\mathbb {R}}} G_i(y) = G_{0,i}. \end{aligned}$$
(3.9)

Since the interval \((\xi _{i-1}, \xi _i)\) is bounded and \(G_\epsilon |_{(\xi _{i-1}, \xi _i)}\) is continuous, there exists only a finite number of points \(\gamma _{\epsilon ,m}^i \in (\xi _{i-1},\xi _i)\). We can rename them so that

$$\begin{aligned} \{ \gamma _{\epsilon ,0}^i, \gamma _{\epsilon ,1}^i, \ldots , \gamma _{\epsilon ,m_\epsilon ^i + 1}^i \} \end{aligned}$$
(3.10)

defines a partition of the subinterval \([\xi _{i-1},\xi _i]\) for some \(m_\epsilon ^i \in {\mathbb {N}}\), \(m_\epsilon ^i \le m_\epsilon \), where \(\gamma _{\epsilon ,0}^i=\xi _{i-1}\) and \(\gamma _{\epsilon , m_\epsilon ^i+1}^i=\xi _i\). Note that \(\gamma _{\epsilon ,m}^i\) need not be uniquely defined.

Consequently, we can take the union of all the partitions (3.10), obtaining a partition of the unit interval \([0,1]\)

$$\begin{aligned} \{ \gamma _{\epsilon ,0}, \gamma _{\epsilon ,1}, \ldots , \gamma _{\epsilon , \hat{m}_\epsilon +1} \}, \end{aligned}$$

with \(\gamma _{\epsilon ,0}=0\) and \(\gamma _{\epsilon ,\hat{m}_\epsilon +1}=1\) for some \(\hat{m}_\epsilon \in {\mathbb {N}}\) that we still denote by \(m_\epsilon \). Also, we have

$$\begin{aligned} \{ (\gamma _{m,\epsilon }, x_2) \; | \; - G_1< x_2 < - G_{0,i} \}\cap \Omega ^\epsilon =\emptyset , \end{aligned}$$

for all \(m=1,2,\ldots , m_\epsilon \).

Next, we take \(\epsilon \) small enough, and then we consider the convenient step function

$$\begin{aligned} G_{0}^\epsilon (x)\!=\! \left\{ \begin{array} {ll} G_{0,1}, &{} x\in [0, \gamma _{\epsilon ,1}] \\ \max \{ G(\gamma _{\epsilon ,m},\frac{\gamma _{\epsilon ,m}}{\epsilon ^\alpha }), G(\gamma _{\epsilon ,m\!+\!1},\frac{\gamma _{\epsilon ,m\!+\!1}}{\epsilon ^\alpha }) \}, &{} x\in (\gamma _{\epsilon ,m}, \gamma _{\epsilon ,m\!+\!1}], m\!=\!1,2\ldots , m_\epsilon \!-\! 1 \\ G(1,1/\epsilon ^\alpha ) , &{} x\in (\gamma _{\epsilon ,m_\epsilon -1}, 1] \end{array} \right. . \end{aligned}$$

Due to (3.9), we have \(G(\gamma _{\epsilon ,m}, \frac{\gamma _{\epsilon ,m}}{\epsilon ^\alpha }) = G_i(\gamma _{\epsilon ,m}/\epsilon ^\alpha ) = \min _{y \in {\mathbb {R}}} G_i(y) = G_{0,i}\), whenever \(\gamma _{\epsilon ,m} \in (\xi _{i-1},\xi _i)\) for some \(i=1,\ldots ,N\), and so, \(G_\epsilon (x) \ge G_0^\epsilon (x) \ge G_0(x)\) in \((0,1)\) where \(G_0\) is the step function given by (3.3). Consequently, we have constructed a suitable step function \(G^\epsilon _0\) that converges uniformly to \(G_0\). More precisely, we have obtained

$$\begin{aligned} \Vert G_0 - G^\epsilon _0 \Vert _{L^\infty (0,1)} \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
(3.11)

Using the step function \(G_0^\epsilon \), we can introduce now the following open sets

$$\begin{aligned} \begin{aligned}&\widetilde{\Omega }^\epsilon _+ = \{ (x_1, x_2) \in {\mathbb {R}}^2 \, | \, x_1 \in (0,1), \, - G_0^\epsilon (x_1) < x_2 < H_1 \} \hbox { and } \\&\widetilde{\Omega }^\epsilon _- = \{ (x_1, x_2) \in {\mathbb {R}}^2 \, | \, x_1 \in (0,1), \, - G_\epsilon (x_1) < x_2 < - G_0^\epsilon (x_1) \}. \end{aligned} \end{aligned}$$
(3.12)

Notice that

$$\begin{aligned} \widetilde{\Omega }^\epsilon = \text{ Int } \left( \overline{\widetilde{\Omega }^\epsilon _+ \cup \widetilde{\Omega }^\epsilon _-} \right) . \end{aligned}$$

Hence, if we denote by \(\widetilde{\cdot }\) the standard extension by zero and by \(\chi ^\epsilon \) the characteristic function of \(\Omega ^\epsilon \), we can rewrite (2.4) as

$$\begin{aligned}&\int \limits _{\widetilde{\Omega }^\epsilon _-} \left\{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi }{\partial x_2} \right\} \hbox {d}x_1 \hbox {d}x_2 + \int \limits _{\widetilde{\Omega }^\epsilon _+} \left\{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi }{\partial x_2}\right\} \hbox {d}x_1 \hbox {d}x_2 \nonumber \\&\quad \quad + \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, P_\epsilon u^\epsilon \, \varphi \, \hbox {d}x_1 \hbox {d}x_2 = \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, f^\epsilon \varphi \hbox {d}x_1 \hbox {d}x_2, \quad \forall \varphi \in H^1(\Omega ^\epsilon ), \end{aligned}$$
(3.13)

where \(P_\epsilon \) is the extension operator constructed in Lemma 2.1.

Now, let us pass to the limit in the different functions that form the integrands of (3.13) in order to get the homogenized problem. It is worth observing that we will combine here techniques from [9–11, 44], establishing suitable oscillating test functions to accomplish our goal.

(a). Limit of \(P_\epsilon u^\epsilon \) in \(L^2(\widetilde{\Omega }^\epsilon )\).

First we observe that, due to (2.6) and Lemma 2.1, there exists \(K>0\) independent of \(\epsilon \) such that \(P_\epsilon u^\epsilon \) satisfies

$$\begin{aligned} \Vert P_\epsilon u^\epsilon \Vert _{L^2(\widetilde{\Omega }^\epsilon )}, \Big \Vert \frac{\partial P_\epsilon u^\epsilon }{\partial x_1} \Big \Vert _{L^2(\widetilde{\Omega }^\epsilon )}\quad \hbox { and }\quad \frac{1}{\epsilon } \Big \Vert \frac{\partial P_\epsilon u^\epsilon }{\partial x_2} \Big \Vert _{L^2(\widetilde{\Omega }^\epsilon )} \le K, \quad \forall \epsilon > 0. \end{aligned}$$
(3.14)

Hence, if \(\Omega _0\) is the open set given by (3.2), independent of \(\epsilon \), \(P_\epsilon u^\epsilon |_{\Omega _0} \in H^1(\Omega _0)\), and we can extract a subsequence, still denoted by \(P_\epsilon u^\epsilon \), such that

$$\begin{aligned} \begin{array}{c} P_\epsilon u^\epsilon \rightharpoonup u_0 \quad w-H^1(\Omega _0) \\ P_\epsilon u^\epsilon \rightarrow u_0 \quad s-H^\beta (\Omega _0) \hbox { for all } \beta \in [0, 1) \hbox { and }\\ \frac{\partial P_\epsilon u^\epsilon }{\partial x_2} \rightarrow 0 \quad s-L^2(\Omega _0) \\ \end{array} \end{aligned}$$
(3.15)

as \(\epsilon \rightarrow 0\), for some \(u_0 \in H^1(\Omega _0)\). Note that \(u_0(x_1,x_2)\) does not depend on the variable \(x_2\), that is, \( \frac{\partial u_0}{\partial x_2}(x_1,x_2) = 0 \hbox { a.e. } \Omega _0. \) Indeed, for all \(\varphi \in \mathcal {C}^\infty _0(\Omega _0)\), we have from (3.15) that

$$\begin{aligned} \int \limits _{\Omega _0} u_0 \, \frac{\partial \varphi }{\partial x_2} \, \hbox {d}x_1 \hbox {d}x_2 = \lim _{\epsilon \rightarrow 0} \int \limits _{\Omega _0} P_\epsilon u^\epsilon \, \frac{\partial \varphi }{\partial x_2} \, \hbox {d}x_1 \hbox {d}x_2 = - \lim _{\epsilon \rightarrow 0} \int \limits _{\Omega _0} \frac{\partial P_\epsilon u^\epsilon }{\partial x_2} \, \varphi \, \hbox {d}x_1 \hbox {d}x_2 = 0, \end{aligned}$$
(3.16)

and then, \(u_0(x_1,x_2) = u_0(x_1)\) for all \((x_1,x_2) \in \Omega _0\) implying \(u_0 \in H^1(0,1)\).

From (3.15), we also have that the restriction of \(P_\epsilon u^\epsilon \) to the coordinate axis \(x_1\) converges to \(u_0\); that is, if \(\Gamma = \{ (x_1,0) \in {\mathbb {R}}^2 \, | \, x_1 \in (0,1) \}\), then

$$\begin{aligned} P_\epsilon u^\epsilon |_{\Gamma } \rightarrow u_0 \quad s-H^\beta (\Gamma ), \quad \forall \beta \in [0,1/2). \end{aligned}$$
(3.17)

Thus, using (3.17) with \(\beta =0\), we can obtain the \(L^2\)-convergence of \(P_\epsilon u^\epsilon \) to \(u_0\) in \(\widetilde{\Omega }^\epsilon \). In fact, due to (3.17), we have that

$$\begin{aligned}&\Vert P_\epsilon u^\epsilon |_{\Gamma } - u_0\Vert ^2_{L^2(\widetilde{\Omega }_\epsilon )} = \int \limits _0^1 \int \limits _{-G_\epsilon (x_1)}^{H_1} | P_\epsilon u^\epsilon (x_1,0) - u_0(x_1) |^2 \, \hbox {d}x_2 \hbox {d}x_1 \\&\qquad \le C(G,H)\, \Vert P_\epsilon u^\epsilon |_{\Gamma } - u_0 \Vert _{L^2(\Gamma )}^2 \rightarrow 0, \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$

where \(C(G,H)=G_1+H_1\). Also,

$$\begin{aligned} | P_\epsilon u^\epsilon (x_1,x_2) - P_\epsilon u^\epsilon (x_1,0) |^2 = \left| \int \limits _0^{x_2} \frac{\partial P_\epsilon u^\epsilon }{\partial x_2}(x_1,s) \, \hbox {d}s \right| ^2 \le \left( \int \limits _0^{x_2} \left| \frac{\partial P_\epsilon u^\epsilon }{\partial x_2}(x_1,s) \right| ^2 \hbox {d}s \right) \, |x_2|. \end{aligned}$$

Consequently, integrating in \(\widetilde{\Omega }^\epsilon \) and using (3.14), we get

$$\begin{aligned}&\Vert P_\epsilon u^\epsilon - P_\epsilon u^\epsilon |_{\Gamma } \Vert _{L^2(\widetilde{\Omega }^\epsilon )}^2 \quad \le \int \limits _0^1 \int \limits _{-G_\epsilon (x_1)}^{H_1} \left( \int \limits _0^{x_2} \left| \frac{\partial P_\epsilon u^\epsilon }{\partial x_2}(x_1,s) \right| ^2 \hbox {d}s \right) \, |x_2| \, \hbox {d}x_2 \hbox {d}x_1 \\&\qquad \le C(G,H) \, \left\| \frac{\partial P_\epsilon u^\epsilon }{\partial x_2} \right\| _{L^2(\widetilde{\Omega }^\epsilon )}^2 \rightarrow 0 \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$

Finally, since

$$\begin{aligned} \Vert P_\epsilon u^\epsilon - u_0 \Vert _{L^2(\widetilde{\Omega }^\epsilon )} \le \Vert P_\epsilon u^\epsilon - P_\epsilon u^\epsilon |_{\Gamma } \Vert _{L^2(\widetilde{\Omega }^\epsilon )}+ \Vert P_\epsilon u^\epsilon |_{\Gamma } - u_0\Vert _{L^2(\widetilde{\Omega }^\epsilon )}, \end{aligned}$$

we conclude that

$$\begin{aligned} \Vert P_\epsilon u^\epsilon - u_0 \Vert _{L^2(\widetilde{\Omega }^\epsilon )} \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
(3.18)

(b). Limit of \(\chi ^\epsilon \).

Let us consider the family of representative cells \(Y^*_i\), \(i=1,2,\ldots ,N\), defined by

$$\begin{aligned} Y^*_i=\{ (y_1,y_2) \in {\mathbb {R}}^2 \; | \; 0<y_1<l_h \hbox { and } -G_{0,i} < y_2 < H_i(y_1)\} \end{aligned}$$

and, for each \(i=1,\ldots , N\), let \(\chi _i\) be its characteristic function, extended periodically in the variable \(y_1 \in {\mathbb {R}}\). When convenient, we will consider the family of representative cells \(Y^*(x) = Y^*_i\) whenever \(x \in (\xi _{i-1}, \xi _i)\).

If we denote by \(\chi ^\epsilon _i\) the characteristic function of the set

$$\begin{aligned} \Omega ^\epsilon _{i,+} = \{(x_1,x_2) \, | \, \xi _{i-1}<x_1<\xi _{i}, \, -G_{0,i}<x_2<H_i(x_1/\epsilon )\}, \end{aligned}$$

we easily see that

$$\begin{aligned} \chi ^\epsilon (x_1,x_2) = \chi ^\epsilon _i(x_1,x_2) \quad \hbox { and } \quad \chi ^\epsilon _i(x_1,x_2) = \chi _i \left( \frac{x_1}{\epsilon },x_2\right) \end{aligned}$$
(3.19)

whenever \((x_1,x_2) \in \Omega ^\epsilon _{i,+}\). Thus, due to (3.19) and the Average Theorem [22, Theorem 2.6], we have, for each \(i=1,\ldots ,N\) and \(x_2 \in (-G_{0,i}, H_1)\), that

$$\begin{aligned} \chi ^\epsilon _i( \cdot , x_2) \mathop {\rightharpoonup }\limits ^{\epsilon \rightarrow 0} \theta _i(x_2) := \frac{1}{l_h} \int \limits _0^{l_h} \chi _i(s,x_2) \, \hbox {d}s, \quad w^*-L^\infty (\xi _{i-1},\xi _i). \end{aligned}$$
(3.20)

Note that the limit function \(\theta _i\) does not depend on the variable \(x_1\in (\xi _{i-1},\xi _i)\), although it depends on each \(i=1,\ldots ,N\), and it is related to the area of the open set \(Y^*_i\) by the formula

$$\begin{aligned} l_h \int \limits _{-G_{0,i}}^{H_1} \theta _i(x_2) \hbox {d}x_2 = |Y^*_i|. \end{aligned}$$
(3.21)

Moreover, using Lebesgue’s Dominated Convergence Theorem and (3.20), we can get that

$$\begin{aligned} \chi ^\epsilon \mathop { \rightharpoonup }\limits ^{\epsilon \rightarrow 0} \theta , \quad w^*-L^\infty (\Omega _0), \end{aligned}$$
(3.22)

where \(\theta (x_1,x_2)=\theta _i(x_2)\) if \(x_1\in (\xi _{i-1},\xi _i)\), \(i=1,2,\ldots ,N\). Indeed, from (3.20) we have

$$\begin{aligned} \mathcal {F}^\epsilon _i(x_2) = \int \limits _{\xi _{i-1}}^{\xi _i} \varphi (x_1,x_2) \, \Big \{ \chi ^\epsilon _i(x_1,x_2) - \theta _i(x_2) \Big \} \, \hbox {d}x_1 \rightarrow 0, \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$
(3.23)

a.e. \(x_2 \in (-G_{0,i}, H_1)\) and for all \(\varphi \in L^1(\Omega _0)\). Thus, (3.22) is a consequence of (3.23) and

$$\begin{aligned} \int \limits _{\Omega _i} \varphi (x_1,x_2) \,\Big \{ \chi ^\epsilon _i(x_1,x_2) - \theta _i(x_2) \Big \} \, \hbox {d}x_1 \hbox {d}x_2 = \int \limits _{-G_{0,i}}^{H_1} \mathcal {F}^\epsilon _i(x_2) \hbox {d}x_2, \end{aligned}$$

since \(|\mathcal {F}^\epsilon _i(x_2)| \le \int \limits _{\xi _{i-1}}^{\xi _i}| \varphi (x_1,x_2)| \hbox {d}x_1\).

Notice that (3.21) implies that the measure of the representative cell \(Y^*(x)\) satisfies

$$\begin{aligned} |Y^*(x)| = l_h \int \limits _{-G_0(x)}^{H_1} \theta (x,x_2) \, \hbox {d}x_2, \quad x \in (0,1). \end{aligned}$$
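
To illustrate, take, purely as an example and not as an assumption of the paper, the profile \(H_i(y) = a_i + b_i \cos (2\pi y/l_h)\) with constants \(a_i> b_i > 0\). A direct computation gives

$$\begin{aligned} \theta _i(x_2) = \left\{ \begin{array}{ll} 1, &{} -G_{0,i} < x_2 \le a_i - b_i, \\ \frac{1}{\pi } \arccos \Big ( \frac{x_2 - a_i}{b_i} \Big ), &{} a_i - b_i< x_2 < a_i + b_i, \\ 0, &{} a_i + b_i \le x_2 < H_1, \end{array} \right. \end{aligned}$$

and one checks directly that \(l_h \int \nolimits _{-G_{0,i}}^{H_1} \theta _i(x_2) \, \hbox {d}x_2 = l_h (a_i + G_{0,i}) = |Y^*_i|\), in agreement with (3.21).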

(c) Limit in the tilde functions.

Since \(\Vert f^{\epsilon }\Vert _{L^{2}(\Omega ^\epsilon )}\) is uniformly bounded, we get from (2.5) that there exists a constant \(K>0\) independent of \(\epsilon \) such that

$$\begin{aligned} \Vert \widetilde{u^\epsilon } \Vert _{L^2(\Omega _0)}, \Big \Vert \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \Big \Vert _{L^2(\Omega _0)}\quad \hbox { and }\quad \frac{1}{\epsilon } \Big \Vert \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \Big \Vert _{L^2(\Omega _0)} \le K \hbox { for all } \epsilon > 0. \end{aligned}$$

Then, we can extract a subsequence, still denoted by \(\widetilde{u^\epsilon }\), \(\widetilde{\frac{\partial u^\epsilon }{\partial x_1}}\) and \(\widetilde{\frac{\partial u^\epsilon }{\partial x_2}}\), such that

$$\begin{aligned} \begin{array}{c} \widetilde{u^\epsilon } \rightharpoonup u^* \quad w-L^2(\Omega _0) \\ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \rightharpoonup \xi ^* \quad w-L^2(\Omega _0) \hbox { and }\\ \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \rightarrow 0 \quad s-L^2(\Omega _0) \\ \end{array} \end{aligned}$$
(3.24)

as \(\epsilon \rightarrow 0\), for some \(u^*\) and \(\xi ^* \in L^2(\Omega _0)\).

(d) Test functions.

Here we introduce the first class of test functions needed to pass to the limit in the variational formulation (3.13). For each \(\phi \in H^1(0,1)\) and \(\epsilon > 0\), we define the following test function in \(H^1(\widetilde{\Omega }^\epsilon )\)

$$\begin{aligned} \varphi ^\epsilon (x_1,x_2) = \left\{ \begin{array}{ll} \phi (x_1), &{} (x_1,x_2) \in \widetilde{\Omega }_+^\epsilon \\ Z^\epsilon _m(x_1,x_2), &{} (x_1,x_2) \in \widetilde{\Omega }^\epsilon _-\cap Q^\epsilon _m, \quad m=0,1,2,\ldots \end{array} \right. \end{aligned}$$
(3.25)

where \(Q^\epsilon _m\) is the rectangle defined from the step function \(G_0^\epsilon \),

$$\begin{aligned} Q^\epsilon _m=\{ (x_1,x_2) \in {\mathbb {R}}^2 \, | \, \gamma _{m,\epsilon }<x_1<\gamma _{m+1,\epsilon }, \, - G_1 < x_2 < - G_0^\epsilon (x_1) \}, \end{aligned}$$
(3.26)

and the function \(Z^\epsilon _m\) is the solution of the problem

$$\begin{aligned} \left\{ \begin{array}{ll} - \frac{\partial ^2 Z^\epsilon }{\partial x_1^2} - \frac{1}{\epsilon ^2} \frac{\partial ^2 Z^\epsilon }{\partial x_2^2} = 0, &{}\quad \hbox { in } Q^\epsilon _m \\ \frac{\partial Z^\epsilon }{\partial N^\epsilon }=0, &{}\quad \hbox { on } \partial Q^\epsilon _m \backslash \Gamma _m^\epsilon \\ Z^\epsilon = \phi , &{}\quad \hbox { on } \Gamma _m^\epsilon \end{array} \right. \end{aligned}$$
(3.27)

where \(\Gamma _m^\epsilon \) is the top of the rectangle \(Q^\epsilon _m\) given by

$$\begin{aligned} \Gamma _m^\epsilon = \{ (x_1, - G_0^\epsilon (x_1)) \, | \, \gamma _{m,\epsilon } < x_1 < \gamma _{m+1,\epsilon }\}. \end{aligned}$$

It is a direct consequence of (3.8) and estimate (2.10) that the functions \(Z^\epsilon _m\) satisfy

$$\begin{aligned} \left\| \frac{\partial Z^\epsilon _m}{\partial x_1} \right\| ^2_{L^2(Q^\epsilon _m)} + \frac{1}{\epsilon ^2} \left\| \frac{\partial Z^\epsilon _m}{\partial x_2} \right\| ^2_{L^2(Q^\epsilon _m)} \le C \epsilon ^{\alpha -1} \Vert \phi ' \Vert ^2_{L^2(\gamma _{m,\epsilon },\gamma _{m+1,\epsilon })}. \end{aligned}$$
(3.28)

Hence, if we set \(Q^\epsilon = \cup _{m=0}^{m_\epsilon } Q^\epsilon _m\), we have \(\widetilde{\Omega }^\epsilon _-= Q^\epsilon \cap \widetilde{\Omega }^\epsilon \), and then,

$$\begin{aligned}&\displaystyle \left\| \frac{\partial \varphi ^\epsilon }{\partial x_1} \right\| ^2_{L^2(\widetilde{\Omega }^\epsilon _-)} + \frac{1}{\epsilon ^2} \left\| \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\| ^2_{L^2(\widetilde{\Omega }^\epsilon _-)} = \sum ^{m_\epsilon }_{m=0} \left( \left\| \frac{\partial \varphi ^\epsilon }{\partial x_1} \right\| ^2_{L^2(Q^\epsilon _m)} + \frac{1}{\epsilon ^2} \left\| \frac{\partial \varphi ^\epsilon }{\partial x_2}\right\| ^2_{L^2(Q^\epsilon _m)}\right) \nonumber \\&\displaystyle \qquad \quad \le \sum ^{m_\epsilon }_{m=0} C \, \epsilon ^{\alpha -1} \, \left\| \phi ' \right\| ^2_{L^2(\gamma _{m,\epsilon },\gamma _{m+1,\epsilon })} \le C \, \epsilon ^{\alpha -1} \left\| \phi ' \right\| ^2_{L^2(0,1)}. \end{aligned}$$
(3.29)

When convenient, we will use \(Z^\epsilon \) to denote \(Z^\epsilon (x_1,x_2) = Z^\epsilon _m(x_1,x_2)\) whenever \((x_1,x_2) \in \widetilde{\Omega }^\epsilon _- \cap Q^\epsilon _m\).

Consequently, we can argue as in (3.18) to show

$$\begin{aligned} \Vert \varphi ^\epsilon - \phi \Vert _{L^2(\widetilde{\Omega }^\epsilon )} \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
(3.30)

Indeed, since

$$\begin{aligned} \varphi ^\epsilon (x_1,x_2) - \phi (x_1) = \varphi ^\epsilon (x_1,x_2) - \varphi ^\epsilon (x_1,0) = \int \limits _0^{x_2} \frac{\partial \varphi ^\epsilon }{\partial x_2}(x_1,s) \, \hbox {d}s, \end{aligned}$$

we have from (3.25) and (3.29) that

$$\begin{aligned} \Vert \varphi ^\epsilon - \phi \Vert _{L^2(\widetilde{\Omega }^\epsilon )}^2 \le C(G,H) \, \left\| \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\| _{L^2(\widetilde{\Omega }^\epsilon )}^2 \le C \, C(G,H) \, \epsilon ^{1+\alpha } \, \left\| \phi ' \right\| ^2_{L^2(0,1)} \rightarrow 0, \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$

(e). Passing to the limit in the weak formulation.

Now let us perform our first evaluation of the variational formulation (3.13) of the elliptic problem (2.2), using the test functions \(\varphi ^\epsilon \) defined in (3.25). For this, we analyze the different functions that form the integrands in (3.13), using the computations previously established.

  • First integrand: we obtain

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon _-} \Big \{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi ^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi ^\epsilon }{\partial x_2} \Big \} \hbox {d}x_1 \hbox {d}x_2 \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
    (3.31)

Indeed, from (3.28), \(\alpha > 1\) and (2.6), we have that there exists \(C>0\) independent of \(\epsilon \) such that

$$\begin{aligned}&\left| \int \limits _{\widetilde{\Omega }^\epsilon _-} \left\{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi ^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\} \hbox {d}x_1 \hbox {d}x_2\right| \\&\qquad \le \left( \int \limits _{\Omega ^\epsilon } \left\{ \left( \frac{\partial u^\epsilon }{\partial x_1} \right) ^2 + \frac{1}{\epsilon ^2} \left( \frac{\partial u^\epsilon }{\partial x_2} \right) ^2 \right\} \hbox {d}x_1 \hbox {d}x_2 \right) ^{1/2} \\&\left( \int \limits _{\widetilde{\Omega }^\epsilon _-} \left\{ \left( \frac{\partial Z^\epsilon }{\partial x_1} \right) ^2 + \frac{1}{\epsilon ^2} \left( \frac{\partial Z^\epsilon }{\partial x_2} \right) ^2 \right\} \hbox {d}x_1 \hbox {d}x_2 \right) ^{1/2} \\&\qquad \le C \, \epsilon ^{(\alpha - 1)/2} \, \Vert \phi ' \Vert _{L^2(0,1)} \rightarrow 0, \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
  • Second integrand: we have

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon _+} \left\{ \frac{\partial u^\epsilon }{\partial x_1} \frac{\partial \varphi ^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} \frac{\partial u^\epsilon }{\partial x_2} \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\} \hbox {d}x_1 \hbox {d}x_2 \rightarrow \int \limits _{\Omega _0} \xi ^* \, \phi '(x_1) \, \hbox {d}x_1 \hbox {d}x_2, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
    (3.32)

To see this, we first observe that (3.25) implies

$$\begin{aligned} \frac{\partial \varphi ^\epsilon }{\partial x_1} \Big |_{\widetilde{\Omega }^\epsilon _+} = \frac{\partial \phi }{\partial x_1} = \phi ' \quad \hbox { and } \quad \frac{\partial \varphi ^\epsilon }{\partial x_2} \Big |_{\widetilde{\Omega }^\epsilon _+} = \frac{\partial \phi }{\partial x_2} = 0. \end{aligned}$$

Then, since \(G^\epsilon _0 \ge G_0\) in \((0,1)\), we have \(\Omega _0 \subset \widetilde{\Omega }^\epsilon _+\) and

$$\begin{aligned}&\int \limits _{\widetilde{\Omega }^\epsilon _+} \left\{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi ^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\} \hbox {d}x_1 \hbox {d}x_2 = \int \limits _{\widetilde{\Omega }^\epsilon _+} \widetilde{\frac{\partial u^\epsilon }{\partial x_1}}(x_1,x_2) \, \phi '(x_1) \, \hbox {d}x_1 \hbox {d}x_2\nonumber \\&\quad \quad = \int \limits _{\Omega _0} \widetilde{\frac{\partial u^\epsilon }{\partial x_1}}(x_1,x_2) \, \phi '(x_1) \, \hbox {d}x_1 \hbox {d}x_2 + \int \limits _{\widetilde{\Omega }^\epsilon _+ \backslash \Omega _0} \frac{\partial u^\epsilon }{\partial x_1}(x_1,x_2) \, \phi '(x_1) \,\hbox {d}x_1 \hbox {d}x_2. \end{aligned}$$
(3.33)

Thus, from (3.24), we pass to the limit as \(\epsilon \rightarrow 0\) in the first integral of (3.33) to get

$$\begin{aligned} \int \limits _{\Omega _0} \frac{\partial u^\epsilon }{\partial x_1}(x_1,x_2) \, \phi '(x_1) \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow \int \limits _{\Omega _0} \xi ^* \, \phi '(x_1) \, \hbox {d}x_1 \hbox {d}x_2. \end{aligned}$$
(3.34)

Hence, we will prove (3.32) if we show that the remaining integral of (3.33) goes to zero as \(\epsilon \rightarrow 0\). Let us evaluate it. From (2.6), (3.2), (3.11) and (3.12), we have

$$\begin{aligned} \left| \int \limits _{\widetilde{\Omega }^\epsilon _+ \backslash \Omega _0} \widetilde{\frac{\partial u^\epsilon }{\partial x_1}}(x_1,x_2) \, \phi '(x_1) \, \hbox {d}x_1 \hbox {d}x_2 \right|&\le \left\| \frac{\partial u^\epsilon }{\partial x_1} \right\| _{L^2(\Omega ^\epsilon )} \, \Vert \phi ' \Vert _{L^2(\Omega ^\epsilon _+ \backslash \Omega _0)} \nonumber \\&\le C \, \Vert \phi ' \Vert _{L^2(0,1)} \, \Vert G^\epsilon _0 - G_0 \Vert _{L^\infty (0,1)}^{1/2} \rightarrow 0, \end{aligned}$$
(3.35)

as \(\epsilon \rightarrow 0\). Therefore, (3.32) follows from (3.33), (3.34) and (3.35).

  • Third integrand: if \(p(x)\) is the function defined in (3.6), then

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, P_\epsilon u^\epsilon \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow \int \limits _0^1 p(x) \, u_0(x) \, \phi (x) \, \hbox {d}x, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
    (3.36)

We start by observing that \(P_\epsilon u^\epsilon |_{\Omega ^\epsilon } = u^\epsilon \), and so

$$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, P_\epsilon u^\epsilon \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2&= \int \limits _{\Omega ^\epsilon } \left( u^\epsilon - u_0 \right) \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 + \int \limits _{\Omega ^\epsilon } u_0 \, \left( \varphi ^\epsilon - \phi \right) \, \hbox {d}x_1 \hbox {d}x_2 \\&+ \int \limits _{\Omega ^\epsilon } u_0 \, \phi \, \hbox {d}x_1 \hbox {d}x_2. \end{aligned}$$

Moreover, due to (3.18) and (3.30), we have

$$\begin{aligned} \int \limits _{\Omega ^\epsilon } \left( u^\epsilon - u_0 \right) \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow 0 \hbox { and } \int \limits _{\Omega ^\epsilon } u_0 \, \left( \varphi ^\epsilon - \phi \right) \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow 0, \end{aligned}$$

as \(\epsilon \rightarrow 0\), since \(\Omega ^\epsilon \subset \widetilde{\Omega }^\epsilon \), and so

$$\begin{aligned} \Vert u^\epsilon - u_0 \Vert _{L^2(\Omega ^\epsilon )} \le \Vert P_\epsilon u^\epsilon - u_0 \Vert _{L^2(\widetilde{\Omega }^\epsilon )} \quad \hbox { and } \quad \Vert \varphi ^\epsilon - \phi \Vert _{L^2(\Omega ^\epsilon )} \le \Vert \varphi ^\epsilon - \phi \Vert _{L^2(\widetilde{\Omega }^\epsilon )}. \end{aligned}$$

Thus, we need only to pass to the limit in

$$\begin{aligned} \int \limits _{\Omega ^\epsilon } u_0(x_1) \, \phi (x_1) \, \hbox {d}x_1 \hbox {d}x_2 = \int \limits _0^1 u_0(x) \, \phi (x) \, \left( H_\epsilon (x) + G_\epsilon (x) \right) \, \hbox {d}x, \end{aligned}$$
(3.37)

and then obtain (3.36). For this, we use the Average Theorem from [10, Lemma 4.2], as well as condition (3.1). Indeed,

$$\begin{aligned}&H_\epsilon (x) + G_\epsilon (x) = H(x, x/\epsilon ) + G(x,x/\epsilon ^\alpha ) \\&\quad \rightharpoonup \frac{1}{l_h} \int \limits _0^{l_h} H(x,y) \, \hbox {d}y + \frac{1}{l_g} \int \limits _0^{l_g} G(x,y) \, \hbox {d}y, \quad w^* - L^\infty (0,1), \end{aligned}$$

as \(\epsilon \rightarrow 0\). Hence, since \(\frac{|Y^*(x)|}{l_h} - G_0(x) = \frac{1}{l_h} \int \limits _0^{l_h} H(x,y) \, \hbox {d}y\), we have

$$\begin{aligned} H_\epsilon (x) + G_\epsilon (x) \rightharpoonup p(x), \quad w^* - L^\infty (0,1). \end{aligned}$$
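
For clarity, note that, by the definition of \(p\) in (3.6),

$$\begin{aligned} \frac{1}{l_h} \int \limits _0^{l_h} H(x,y) \, \hbox {d}y + \frac{1}{l_g} \int \limits _0^{l_g} G(x,y) \, \hbox {d}y = \frac{|Y^*(x)|}{l_h} - G_0(x) + \frac{1}{l_g} \int \limits _0^{l_g} G(x,y) \, \hbox {d}y = p(x), \end{aligned}$$

and so, passing to the limit in (3.37), we obtain (3.36).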
  • Fourth integrand: we claim that

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, f^\epsilon \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow \int \limits _0^1 \hat{f}(x) \, \phi (x) \, \hbox {d}x, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
    (3.38)

Since

$$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, f^\epsilon \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 = \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, f^\epsilon \, \left( \varphi ^\epsilon - \phi \right) \, \hbox {d}x_1 \hbox {d}x_2 + \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, f^\epsilon \, \phi \, \hbox {d}x_1 \hbox {d}x_2 \end{aligned}$$

and

$$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon f^\epsilon \, \phi \, \hbox {d}x_1 \hbox {d}x_2 = \int \limits _0^1 \left( \int \limits _{-G_\epsilon (x_1)}^{H_\epsilon (x_1)} f^\epsilon (x_1,x_2) \, \hbox {d}x_2 \right) \phi (x_1) \, \hbox {d}x_1 = \int \limits _0^1 \hat{f}^\epsilon (x) \, \phi (x) \, \hbox {d}x, \end{aligned}$$

we obtain (3.38) from (3.4) and (3.30).

Consequently, we can use (3.31), (3.32), (3.36) and (3.38) to pass to the limit in (3.13) to obtain the following limit variational formulation

$$\begin{aligned} \int \limits _{\Omega _0} \xi ^* \, \phi '(x_1) \, \hbox {d}x_1 \hbox {d}x_2 + \int \limits _0^1 p(x) \, u_0(x) \, \phi (x) \, \hbox {d}x = \int \limits _0^1 \hat{f}(x) \, \phi (x) \, \hbox {d}x, \end{aligned}$$
(3.39)

for all \(\phi \in H^1(0,1)\).

Next, we need to evaluate the relationship between the functions \(\xi ^*\) and \(u_0\) in order to complete our proof by obtaining the limit problem (3.5).

(f) Relationship between \(\xi ^*\) and \(u_0\).

First, let us denote by \(\Omega \) the rectangle \(\Omega = (0,1) \times (-G_1,H_1)\) and recall the oscillating regions \(\Omega ^\epsilon _{i,+}\) given by

$$\begin{aligned} \Omega ^\epsilon _{i,+} = \left\{ (x_1, x_2) \, | \, \xi _{i-1} < x_1 < \xi _i, \, -G_{0,i} < x_2 < H_i(x_1/\epsilon ) \right\} , \quad i=1,\ldots ,N. \end{aligned}$$

Here we are taking the positive constants \(G_1\) and \(H_1\) from hypothesis \(\mathbf{(H)}\), and \(G_{0,i}\) is defined in (3.3). We also consider the family of isomorphisms \(T^{\epsilon }_k : A^{\epsilon }_k \mapsto Y\) given by

$$\begin{aligned} T^{\epsilon }_k(x_1,x_2) = \left( \frac{x_1 - \epsilon k l_h}{\epsilon },x_2\right) \end{aligned}$$
(3.40)

where

$$\begin{aligned}&A^{\epsilon }_k = \{ (x_1,x_2) \in {\mathbb {R}}^2 \; | \; \epsilon k l_h \le x_1 < \epsilon l_h (k+1) \hbox { and } -G_1 < x_2 < H_1 \} \\&Y = (0, l_h) \times (-G_1, H_1) \end{aligned}$$

with \(k \in {\mathbb {N}}\). Let us recall the auxiliary problem in the representative cell \(Y^*_i\)

$$\begin{aligned} \left\{ \begin{array}{ll} - \Delta X_i = 0 &{}\hbox { in } Y^*_i \\ \frac{\partial X_i}{\partial N} = 0 &{}\hbox { on } B_2^i \\ \frac{\partial X_i}{\partial N} = - \frac{H_i'(y_1)}{\sqrt{1+H_i'(y_1)^2}} &{}\hbox { on } B_1^i \\ X_i \quad l_h-\hbox {periodic}&{}\hbox { on } B_0^i \\ \int \limits _{Y^*_i} X_i \; \hbox {d}y_1 \hbox {d}y_2 = 0&{} \end{array} \right. \end{aligned}$$
(3.41)

where \(B_0^i\), \(B_1^i\) and \(B_2^i\) are the lateral, upper and lower boundaries of \(\partial Y^*_i\), respectively.

Applying the same reflection procedure used in Lemma 2.1, we can define the extension operators

$$\begin{aligned} P^i \in \mathcal {L}(H^1(Y^*_i),H^1(Y)) \cap \mathcal {L}(L^2(Y^*_i),L^2(Y)), \end{aligned}$$
(3.42)

which are obtained by reflection in the negative direction along the line \(x_2=-G_{0,i}\), and in the positive direction along the graph of the function \(H_i\), as indicated in Remark 2.2.

Thus, using the isomorphisms (3.40) and the extension operators (3.42), we can define the function

$$\begin{aligned} \omega ^{\epsilon } (x_1,x_2)&= x_1 - \epsilon \Big ( P^iX_i \circ T^\epsilon _k (x_1,x_2) \Big ) \\&= x_1 - \epsilon \Big ( P^i X_i \left( \frac{x_1 - \epsilon l_h k}{\epsilon },x_2 \right) \Big ), \hbox { for } (x_1,x_2)\in \Omega _i\cap A^\epsilon _k, \quad i=1,\ldots , N, \end{aligned}$$

where

$$\begin{aligned} \Omega _i = (\xi _{i-1},\xi _i) \times (-G_1,H_1). \end{aligned}$$

Clearly, the function \(\omega ^{\epsilon }\) is well defined in \(\cup _{i=1}^N \Omega _i\), since if \((x_1,x_2) \in \Omega _i\) for some \(i=1,\ldots ,N\), then there exists a unique \(k\in {\mathbb {N}}\) such that \((x_1,x_2)\in A^\epsilon _k\). Furthermore, we have

$$\begin{aligned} \omega ^\epsilon \in H^1(\cup _{i=1}^N \Omega _i). \end{aligned}$$

We introduce now the vector \(\eta ^\epsilon = (\eta _1^\epsilon ,\eta _2^\epsilon )\) defined by

$$\begin{aligned} \eta _r^\epsilon (x_1,x_2) = \frac{\partial \omega ^\epsilon }{\partial x_r}(x_1,x_2), \quad (x_1,x_2)\in \cup _{i=1}^N \Omega _i, \quad r=1,2. \end{aligned}$$
(3.43)

Since \(\frac{\partial }{\partial x_1} = \frac{1}{\epsilon } \frac{\partial }{\partial y_1}\) and \(\frac{\partial }{\partial x_2} = \frac{\partial }{\partial y_2}\), we have that

$$\begin{aligned}&\eta _1^\epsilon (x_1,x_2)= 1 - \frac{\partial X_i}{\partial y_1}\left( \frac{x_1 - \epsilon k l_h}{\epsilon },x_2\right) = 1 - \frac{\partial X_i }{\partial y_1}\left( \frac{x_1}{\epsilon },x_2\right) := \eta _1(y_1,y_2), \nonumber \\&\eta _2^\epsilon (x_1,x_2)= - \epsilon \frac{\partial X_i }{\partial y_2}\left( \frac{x_1 - \epsilon k l_h}{\epsilon },x_2\right) = - \epsilon \frac{\partial X_i }{\partial y_2}\left( \frac{x_1}{\epsilon },x_2\right) := \eta _2(y_1,y_2), \end{aligned}$$
(3.44)

for \((y_1,y_2) = (\frac{x_1 - \epsilon k l_h}{\epsilon },x_2) \in Y^*_i\), \((x_1,x_2) \in \Omega _{i,+}^\epsilon \), \(i=1,\ldots ,N\).

Then, performing standard computations, we get from (3.41) that \(\eta _1^\epsilon \) and \(\eta _2^\epsilon \) satisfy

$$\begin{aligned} \frac{\partial \eta _1^\epsilon }{\partial {x_1}} + \frac{1}{\epsilon ^2} \frac{\partial \eta _2^\epsilon }{\partial {x_2}} = 0&\hbox { in }\Omega ^\epsilon _{i,+}, \nonumber \\ \eta _1^\epsilon N^\epsilon _1 + \frac{1}{\epsilon ^2} \eta _2^\epsilon N^\epsilon _2 = 0&\hbox { on } \left( x_1, H_i \left( \frac{x_1}{\epsilon } \right) \right) ,\, \\ \eta _1^\epsilon N^\epsilon _1 + \frac{1}{\epsilon ^2} \eta _2^\epsilon N^\epsilon _2 = 0&\hbox { on } (x_1, -G_{0,i}),\nonumber \end{aligned}$$
(3.45)

for each \(i=1,\ldots ,N\), where

$$\begin{aligned} N^\epsilon = (N^\epsilon _1, N^\epsilon _2)&= \left( - \frac{H_i'(\frac{x_1}{\epsilon })}{(\epsilon ^2+{H_i'(\frac{x_1}{\epsilon })}^2)^{\frac{1}{2}}}, \frac{\epsilon }{(\epsilon ^2+{H_i'(\frac{x_1}{\epsilon })}^2)^{\frac{1}{2}}} \right) \hbox { on } \left( x_1, H_i \Big (\frac{x_1}{\epsilon } \Big ) \right) , \\&\qquad N^\epsilon = ( 0 , -1) \hbox { on } (x_1, -G_{0,i}). \end{aligned}$$
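
For completeness, these computations can be sketched as follows (here \(H_i' = H_i'(x_1/\epsilon )\) and the derivatives of \(X_i\) are evaluated at \((y_1,y_2) = (x_1/\epsilon ,x_2)\)). In the interior, (3.44) gives \(\frac{\partial \eta _1^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} \frac{\partial \eta _2^\epsilon }{\partial x_2} = - \frac{1}{\epsilon } \Delta X_i = 0\), by the first equation in (3.41). On the oscillating upper boundary, (3.44) and the Neumann condition on \(B_1^i\) in (3.41) give

$$\begin{aligned} \eta _1^\epsilon N^\epsilon _1 + \frac{1}{\epsilon ^2} \eta _2^\epsilon N^\epsilon _2&= - \frac{1}{(\epsilon ^2+{H_i'}^2)^{\frac{1}{2}}} \left\{ H_i' \Big ( 1 - \frac{\partial X_i}{\partial y_1} \Big ) + \frac{\partial X_i}{\partial y_2} \right\} \\&= - \frac{(1+{H_i'}^2)^{\frac{1}{2}}}{(\epsilon ^2+{H_i'}^2)^{\frac{1}{2}}} \left\{ \frac{H_i'}{(1+{H_i'}^2)^{\frac{1}{2}}} + \frac{\partial X_i}{\partial N} \right\} = 0, \end{aligned}$$

while on the flat lower boundary, where \(N^\epsilon = (0,-1)\), the left-hand side reduces to \(\frac{1}{\epsilon } \frac{\partial X_i}{\partial y_2}\), which vanishes by the Neumann condition on \(B_2^i\).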

Therefore, multiplying the first equation of (3.45) by a test function \(\psi \in H^1(\Omega )\) with \(\psi =0\) in a neighborhood of the set \(\cup _{i=0}^N \{ (\xi _i,x_2) \, | \, -G_1 \le x_2\le H_1 \}\) and integrating by parts, we obtain

$$\begin{aligned} 0&= \int \limits _{\Omega ^\epsilon _+} \psi \left( \frac{\partial \eta _1^\epsilon }{\partial {x_1}} + \frac{1}{\epsilon ^2} \frac{\partial \eta _2^\epsilon }{\partial {x_2}} \right) \hbox {d}x_1 \hbox {d}x_2 \\&= \int \limits _{\partial \Omega ^\epsilon _+} \psi \left( \eta _1^\epsilon N^\epsilon _1 + \frac{1}{\epsilon ^2} \eta _2^\epsilon N^\epsilon _2 \right) \hbox {d}S - \int \limits _{\Omega ^\epsilon _+} \left( \frac{\partial \psi }{\partial x_1} \eta _1^\epsilon + \frac{1}{\epsilon ^2} \frac{\partial \psi }{\partial x_2} \eta _2^\epsilon \right) \hbox {d}x_1 \hbox {d}x_2 \\&= 0 - \int \limits _{\Omega ^\epsilon _+} \left( \frac{\partial \psi }{\partial x_1} \eta _1^\epsilon + \dfrac{1}{\epsilon ^2} \frac{\partial \psi }{\partial x_2} \eta _2^\epsilon \right) \hbox {d}x_1 \hbox {d}x_2, \end{aligned}$$

where

$$\begin{aligned} \Omega ^\epsilon _+ = \text{ Int } \left( \overline{ \cup _{i=1}^N \, \Omega ^\epsilon _{i,+} }\right) . \end{aligned}$$

Then, for all \(\psi \in H^1(\Omega )\) with \(\psi =0\) in a neighborhood of \(\cup _{i=0}^N \{ (\xi _i,x_2) \, | \, -G_1 \le x_2\le H_1 \}\),

$$\begin{aligned} \int \limits _{\Omega ^\epsilon _+} \left( \eta _1^\epsilon \frac{\partial \psi }{\partial x_1} + \eta _2^\epsilon \dfrac{1}{\epsilon ^2} \frac{\partial \psi }{\partial x_2} \right) \hbox {d}x_1 \hbox {d}x_2=0. \end{aligned}$$
(3.46)

Consequently, using identity (3.46), we can rewrite the variational formulation (2.4) as

$$\begin{aligned}&\int \limits _{\widetilde{\Omega }^\epsilon } \left\{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi }{\partial x_2} + \chi ^\epsilon \, P_\epsilon u^\epsilon \, \varphi \right\} \hbox {d}x_1 \hbox {d}x_2 - \int \limits _{\Omega ^\epsilon _+} \left( \eta _1^\epsilon \frac{\partial \psi }{\partial x_1} + \eta _2^\epsilon \dfrac{1}{\epsilon ^2} \frac{\partial \psi }{\partial x_2} \right) \hbox {d}x_1 \hbox {d}x_2 \nonumber \\&\qquad \qquad \qquad = \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, f^\epsilon \varphi \hbox {d}x_1 \hbox {d}x_2, \quad \forall \varphi \in H^1(\Omega ^\epsilon ). \end{aligned}$$
(3.47)

Now, in order to accomplish our goal, we will pass to the limit in (3.47). For this, we introduce a second class of suitable test functions which will allow us to get our limit problem.

Let \(\phi = \phi (x) \in \mathcal {C}^\infty _0(\cup _{i=1}^{N} (\xi _{i-1},\xi _{i}))\) and consider the following test function

$$\begin{aligned} \varphi ^\epsilon (x_1,x_2) = \left\{ \begin{array}{ll} \phi (x_1) \, \omega ^\epsilon (x_1,x_2), &{} (x_1,x_2) \in \widetilde{\Omega }_+^\epsilon \\ Z^\epsilon _m(x_1,x_2), &{} (x_1,x_2) \in \widetilde{\Omega }^\epsilon _- \cap Q^\epsilon _m, \quad m=0,1,2,\ldots \end{array} \right. \end{aligned}$$
(3.48)

where \(Q^\epsilon _m\) is the rectangle defined by the step function \(G_0^\epsilon \) previously introduced in (3.26), with \(\widetilde{\Omega }_+^\epsilon \) and \(\widetilde{\Omega }_-^\epsilon \) given in (3.12). The function \(Z^\epsilon _m\) here is the solution of the problem

$$\begin{aligned} \left\{ \begin{array}{ll} - \frac{\partial ^2 Z^\epsilon }{\partial x_1^2} - \frac{1}{\epsilon ^2} \frac{\partial ^2 Z^\epsilon }{\partial x_2^2} = 0, &{}\quad \hbox { in } Q^\epsilon _m \\ \frac{\partial Z^\epsilon }{\partial N^\epsilon }=0,&{} \quad \hbox { on } \partial Q^\epsilon _m \backslash \Gamma _m^\epsilon \\ Z^\epsilon = \phi \, \omega ^\epsilon ,&{} \quad \hbox { on } \Gamma _m^\epsilon \end{array} \right. \end{aligned}$$
(3.49)

where \(\Gamma _m^\epsilon \) is the top of the rectangle \(Q^\epsilon _m\). Hereafter, we may use the notation \(Z^\epsilon (x_1,x_2) = Z^\epsilon _m(x_1,x_2)\) whenever \((x_1,x_2) \in \widetilde{\Omega }^\epsilon _- \cap Q^\epsilon _m\). Moreover, we observe that \(\phi \, \omega ^\epsilon |_{\Gamma _m^\epsilon } \in H^1(\Gamma _m^\epsilon )\), and that the auxiliary problems (3.27) and (3.49) differ only in the condition on the top boundary \(\Gamma _m^\epsilon \).

Now let us pass to the limit in the functions \(\omega ^\epsilon \) and \(\eta _1^\epsilon \). Due to the definition of \(\omega ^\epsilon \), we have, for each \(i=1,\ldots ,N\),

$$\begin{aligned} \int \limits _{A^\epsilon _k\cap \Omega _i} |\omega ^\epsilon - x_1|^2 \hbox {d}x_1 \hbox {d}x_2 = \int \limits _{Y} \epsilon ^3 |(P^i X_i)(y_1,y_2)|^2 \hbox {d}y_1 \hbox {d}y_2 \le \int \limits _{Y^*_i} C \epsilon ^3 |X_i(y_1,y_2)|^2 \hbox {d}y_1 \hbox {d}y_2 \end{aligned}$$

and so,

$$\begin{aligned}&\int \limits _{\Omega _i} |\omega ^\epsilon - x_1|^2 \hbox {d}x_1 \hbox {d}x_2 \approx \sum _{k=1}^{\frac{C}{\epsilon l_h}} \int \limits _{Y^*_i} C \epsilon ^3 |X_i(y_1,y_2)|^2 \hbox {d}y_1 \hbox {d}y_2 \\&\qquad \approx \epsilon ^2 \int \limits _{Y^*_i} C |X_i(y_1,y_2)|^2 \hbox {d}y_1 \hbox {d}y_2 \rightarrow 0 \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$

Analogously,

$$\begin{aligned} \int \limits _{A^\epsilon _k\cap \Omega _i} \Big |\frac{\partial }{\partial x_1} \left( \omega ^\epsilon - x_1 \right) \Big |^2 \hbox {d}x_1 \hbox {d}x_2&= \int \limits _{Y} \Big | \frac{\partial (P^i X_i)}{\partial y_1} (y_1,y_2) \Big |^2 \, \epsilon \, \hbox {d}y_1 \hbox {d}y_2 \\&\le \epsilon \int \limits _{Y^*_i} C \Big |\frac{\partial X_i}{\partial y_1}(y_1,y_2)\Big |^2 \hbox {d}y_1 \hbox {d}y_2 \end{aligned}$$

and

$$\begin{aligned} \int \limits _{A^\epsilon _k\cap \Omega _i} \Big |\frac{\partial }{\partial x_2} \left( \omega ^\epsilon - x_1 \right) \Big |^2 \hbox {d}x_1 \hbox {d}x_2&= \int \limits _{Y} \epsilon ^3 \Big |\frac{\partial (P^i X_i)}{\partial y_2} (y_1,y_2)\Big |^2 \, \hbox {d}y_1 \hbox {d}y_2 \\&\le \epsilon ^3 \int \limits _{Y^*_i} C \Big |\frac{\partial X_i}{\partial y_2}(y_1,y_2)\Big |^2 \hbox {d}y_1 \hbox {d}y_2. \end{aligned}$$

Therefore,

$$\begin{aligned}&\int \limits _{\Omega _i} \Big |\frac{\partial }{\partial x_1} \left( \omega ^\epsilon - x_1 \right) \Big |^2 \hbox {d}x_1 \hbox {d}x_2 \approx \sum _{k=1}^{\frac{C}{\epsilon l_h}} \epsilon \int \limits _{Y^*_i} C\Big |\frac{\partial X_i}{\partial y_1}(y_1,y_2) \Big |^2 \hbox {d}y_1 \hbox {d}y_2\\&\qquad \approx \int \limits _{Y^*_i} \tilde{C} \Big |\frac{\partial X_i}{\partial y_1}(y_1,y_2)\Big |^2 \hbox {d}y_1 \hbox {d}y_2 \end{aligned}$$

for all \(\epsilon > 0\) and

$$\begin{aligned} \int \limits _{\Omega _i} \Big |\frac{\partial }{\partial x_2} \left( \omega ^\epsilon - x_1 \right) \Big |^2 \hbox {d}x_1 \hbox {d}x_2 \le \epsilon ^2 \int \limits _{Y^*_i} \tilde{C} \Big |\frac{\partial X_i}{\partial y_2}(y_1,y_2)\Big |^2 \hbox {d}y_1 \hbox {d}y_2 \rightarrow 0\quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$

Consequently, we conclude that, as \(\epsilon \rightarrow 0\),

$$\begin{aligned} \omega ^\epsilon \rightarrow x_1 \quad s-L^2(\Omega ) \quad \hbox { and } \quad w-H^1(\Omega _i), \quad i=1,\ldots ,N, \end{aligned}$$
(3.50)

and

$$\begin{aligned} \frac{\partial \omega ^\epsilon }{\partial x_2} \rightarrow 0 \quad s-L^2(\Omega ). \end{aligned}$$
(3.51)

In particular, \(\omega ^\epsilon \) is uniformly bounded in \(H^1(\cup _{i=1}^N \Omega _i)\) for all \(\epsilon >0\).

Next, let \(\widetilde{\eta }^\epsilon = \eta ^\epsilon \chi _0\) be the extension by zero of the vector \(\eta ^\epsilon \) to the region \(\Omega _0\), which is independent of \(\epsilon \). Since \(X_i\) is \(l_h\)-periodic in the variable \(y_1\), we can apply the Average Theorem to (3.44), obtaining

$$\begin{aligned} \widetilde{\eta }_1^\epsilon (x_1,x_2) \rightharpoonup \frac{1}{l_h} \int \limits _0^{l_h} \Big ( 1 - \frac{\partial X_i}{\partial y_1} (s,x_2) \Big ) \chi _i(s,x_2)\hbox {d}s := \hat{q}_i(x_2), \quad w^*-L^\infty (\xi _{i-1},\xi _i), \end{aligned}$$

where \(\chi _i\) is the characteristic function of \(Y^*_i\). Hence, we can argue as in (3.22) to get

$$\begin{aligned} \widetilde{\eta }_1^\epsilon \rightharpoonup \hat{q}, \quad w^*-L^\infty (\Omega _0), \end{aligned}$$
(3.52)

where \(\hat{q}(x_1,x_2) \equiv \hat{q}_i(x_2)\) if \((x_1,x_2)\in \Omega _i\), for \(i=1,\ldots ,N\).

Now we evaluate the test functions \(\varphi ^\epsilon \) as \(\epsilon \rightarrow 0\). It follows from estimate (2.10) that

$$\begin{aligned} \left\| \frac{\partial Z_m^\epsilon }{\partial x_1} \right\| ^2_{L^2(Q^\epsilon _m)} + \frac{1}{\epsilon ^2} \left\| \frac{\partial Z_m^\epsilon }{\partial x_2} \right\| ^2_{L^2(Q^\epsilon _m)} \le C \epsilon ^{\alpha -1} \left\| \frac{\partial (\phi \, \omega ^\epsilon )}{\partial x_1} \right\| ^2_{L^2(\Gamma _m^\epsilon )}. \end{aligned}$$
(3.53)

Denoting \(Q^\epsilon = \cup _{m=0}^{m_\epsilon } Q^\epsilon _m\), we have \(\Omega ^\epsilon _+= Q^\epsilon \cap \Omega ^\epsilon \), and so, due to (3.48), (3.50) and (3.53),

$$\begin{aligned}&\displaystyle \left\| \frac{\partial \varphi ^\epsilon }{\partial x_1} \right\| ^2_{L^2(\Omega ^\epsilon _-)} + \frac{1}{\epsilon ^2} \left\| \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\| ^2_{L^2(\Omega ^\epsilon _-)} = \sum ^{m_\epsilon }_{m=0} \left( \left\| \frac{\partial \varphi ^\epsilon }{\partial x_1} \right\| ^2_{L^2(Q^\varepsilon _m)} + \frac{1}{\epsilon ^2} \left\| \frac{\partial \varphi ^\epsilon }{\partial x_2}\right\| ^2_{L^2(Q^\epsilon _m)}\right) \nonumber \\&\displaystyle \qquad \quad \le C \, \epsilon ^{\alpha -1} \, \max \left\{ \left\| \phi \right\| ^2_\infty , \left\| \phi ' \right\| ^2_{\infty } \right\} \left\| \omega ^\epsilon \right\| ^2_{H^1(\cup _{i=1}^N \Omega _i)} \nonumber \\&\displaystyle \qquad \quad \le \widetilde{C} \, \epsilon ^{\alpha -1}, \end{aligned}$$
(3.54)

for some \(\widetilde{C}>0\) independent of \(\epsilon \). Consequently, we can argue as in (3.18) to show

$$\begin{aligned} \Vert \varphi ^\epsilon - x_1 \, \phi \Vert _{L^2(\widetilde{\Omega }^\epsilon )} \rightarrow 0\quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
(3.55)

Indeed, for \((x_1,x_2) \in \{ (x_1,x_2) \, | \, \gamma _{m,\epsilon }<x_1<\gamma _{m+1,\epsilon }, \, - G_\epsilon (x_1) < x_2 < H_1 \}\),

$$\begin{aligned} \varphi ^\epsilon (x_1,x_2) - \phi (x_1) \, \omega ^\epsilon (x_1, -w^\epsilon _m) = \varphi ^\epsilon (x_1,x_2) - \varphi ^\epsilon (x_1,-w^\epsilon _m) = \int \limits _{-w^\epsilon _m}^{x_2} \frac{\partial \varphi ^\epsilon }{\partial x_2}(x_1,s) \, \hbox {d}s, \end{aligned}$$

where \(w_m^\epsilon \) is the constant given by the step function \(G^\epsilon _0\) in \((\gamma _{m,\epsilon }, \gamma _{m+1,\epsilon })\), that is,

$$\begin{aligned} w_m^\epsilon = G^\epsilon _0(x), \quad \hbox { for } x \in (\gamma _{m,\epsilon }, \gamma _{m+1,\epsilon }). \end{aligned}$$

Hence, if \(\Gamma ^\epsilon \subset {\mathbb {R}}^2\) is the graph of \(-G_0^\epsilon \), we have \(\varphi ^\epsilon |_{\Gamma ^\epsilon } = \varphi ^\epsilon (x_1,-w^\epsilon _m) = \phi (x_1) \, \omega ^\epsilon (x_1, -w^\epsilon _m)\) for \(x_1 \in (\gamma _{m,\epsilon }, \gamma _{m+1,\epsilon })\), and so

$$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } |\varphi ^\epsilon - \varphi ^\epsilon |_{\Gamma ^\epsilon } |^2 \hbox {d}x_1 \hbox {d}x_2&\le \sum _{m=0}^{m_\epsilon }\int \limits _{\gamma _{m,\epsilon }}^{\gamma _{m+1,\epsilon }} \int \limits _{-G_\epsilon (x_1)}^{H_1} |x_2+w_m^\epsilon | \int \limits _{-w_m^\epsilon }^{x_2} \left| \frac{\partial \varphi ^\epsilon }{\partial x_2}(x_1,s) \right| ^2 \hbox {d}s \hbox {d}x_2 \hbox {d}x_1\nonumber \\&\le |H_1+G_1|^2 \int \limits _0^1 \int \limits _{-G_\epsilon (x_1)}^{H_1} \left| \frac{\partial \varphi ^\epsilon }{\partial x_2}(x_1,s) \right| ^2 \hbox {d}s \hbox {d}x_1 \nonumber \\&\le |H_1+G_1|^2 \left\| \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\| ^2_{L^2(\widetilde{\Omega }^\epsilon )}. \end{aligned}$$
(3.56)

On the other hand,

$$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } |\phi \, \omega ^\epsilon - \varphi ^\epsilon |_{\Gamma ^\epsilon } |^2 \hbox {d}x_1 \hbox {d}x_2&\le \int \limits _{\widetilde{\Omega }^\epsilon } |\phi \left( \omega ^\epsilon - \omega ^\epsilon |_{\Gamma ^\epsilon } \right) |^2 \hbox {d}x_1 \hbox {d}x_2 \nonumber \\&\le |H_1+G_1|^2 \Vert \phi \Vert _\infty \left\| \frac{\partial \omega ^\epsilon }{\partial x_2} \right\| ^2_{L^2(\Omega )}. \end{aligned}$$
(3.57)

Then, it follows from (3.56) and (3.57) that there exists \(C>0\) independent of \(\epsilon \) such that

$$\begin{aligned} \Vert \varphi ^\epsilon - x_1 \, \phi \Vert _{L^2(\widetilde{\Omega }^\epsilon )}^2&\le \Vert \varphi ^\epsilon - \varphi ^\epsilon |_{\Gamma ^\epsilon } \Vert ^2_{L^2(\widetilde{\Omega }^\epsilon )} + \Vert \varphi ^\epsilon |_{\Gamma ^\epsilon } - \phi \omega ^\epsilon \Vert ^2_{L^2(\widetilde{\Omega }^\epsilon )} + \Vert \phi \omega ^\epsilon - x_1 \phi \Vert _{L^2(\widetilde{\Omega }^\epsilon )}\nonumber \\&\le C \left\{ \left\| \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\| ^2_{L^2(\widetilde{\Omega }^\epsilon )} + \left\| \frac{\partial \omega ^\epsilon }{\partial x_2} \right\| ^2_{L^2(\Omega )} + \Vert \omega ^\epsilon - x_1 \Vert _{L^2(\Omega )}\right\} . \end{aligned}$$
(3.58)

Hence, we can conclude (3.55) from (3.48), (3.50), (3.51), (3.54) and (3.58).

Now we are in a position to pass to the limit in (3.47). Taking \(\varphi = \varphi ^\epsilon \) and \(\psi = \phi \, u^\epsilon \) as test functions in (3.47), we get

$$\begin{aligned}&\int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon f^\epsilon \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 \nonumber \\&\quad = \int \limits _{\widetilde{\Omega }^\epsilon } \Big \{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi ^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi ^\epsilon }{\partial x_2} + \chi ^\epsilon P_\epsilon u^\epsilon \varphi ^\epsilon \Big \} \hbox {d}x_1 \hbox {d}x_2 \nonumber \\&\qquad - \int \limits _{\Omega _+^\epsilon } \Big \{ \eta ^\epsilon _1 \frac{\partial (\phi u^\epsilon )}{\partial x_1} + \frac{1}{\epsilon ^2} \eta ^\epsilon _2 \frac{\partial (\phi u^\epsilon )}{\partial x_2} \Big \} \hbox {d}x_1 \hbox {d}x_2 \nonumber \\&\quad = \int \limits _{\widetilde{\Omega }^\epsilon _+} \Big \{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \phi '\omega ^\epsilon + \phi \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \omega ^\epsilon }{\partial x_1} +\frac{1}{\epsilon ^2} \phi \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \omega ^\epsilon }{\partial x_2} \Big \} \hbox {d}x_1 \hbox {d}x_2 \nonumber \\&\qquad + \int \limits _{\widetilde{\Omega }^\epsilon _- } \Big \{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi ^\epsilon }{\partial x_1}+ \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi ^\epsilon }{\partial x_2} \Big \} \hbox {d}x_1 \hbox {d}x_2 + \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon {P_\epsilon u^\epsilon } \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2\nonumber \\&\qquad - \int \limits _{\Omega ^\epsilon _+} \Big \{ {\eta _{1}^\epsilon } \phi ' u^\epsilon + {\eta _{1}^\epsilon } \phi \frac{\partial u^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} {\eta _{2}^\epsilon } \phi \frac{\partial u^\epsilon }{\partial x_2} \Big \} \hbox {d}x_1 \hbox {d}x_2. \end{aligned}$$
(3.59)

Consequently, due to (3.59), (3.43) and \(\Omega ^\epsilon _+ \subset \widetilde{\Omega }^\epsilon _+\), we can rewrite (3.47) as

$$\begin{aligned}&\int \limits _{\widetilde{\Omega }^\epsilon _+} \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \, \omega ^\epsilon \, \phi ' \, \hbox {d}x_1 \hbox {d}x_2 + \int \limits _{\widetilde{\Omega }^\epsilon _-} \left\{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi ^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\} \hbox {d}x_1 \hbox {d}x_2 + \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, P_\epsilon u^\epsilon \, \varphi ^\epsilon \, \hbox {d}x_1\hbox {d}x_2 \nonumber \\&\qquad - \int \limits _{\Omega ^\epsilon _+} \eta _1^\epsilon \phi ' \, u^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 = \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, f^\epsilon \varphi ^\epsilon \hbox {d}x_1 \hbox {d}x_2, \quad \forall \phi \in \mathcal {C}^\infty _0(\cup _{i=1}^{N} (\xi _{i-1},\xi _{i})). \end{aligned}$$
(3.60)

Let us now evaluate (3.60) as \(\epsilon \) goes to zero.

  • First integrand: we claim

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon _+} \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \, \omega ^\epsilon \, \phi ' \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow \int \limits _{\Omega _0} \xi ^* x_1 \phi ' \, \hbox {d}x_1 \hbox {d}x_2, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
    (3.61)

    Notice \(\Omega _0 \subset \widetilde{\Omega }^\epsilon _+\), and so,

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon _+} \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \, \omega ^\epsilon \, \phi ' \, \hbox {d}x_1 \hbox {d}x_2&= \int \limits _{\Omega _0} \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \, \omega ^\epsilon \, \phi ' \, \hbox {d}x_1 \hbox {d}x_2 + \int \limits _{\widetilde{\Omega }^\epsilon _+ \setminus \Omega _0} \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \, \omega ^\epsilon \, \phi ' \, \hbox {d}x_1 \hbox {d}x_2. \end{aligned}$$

    Due to (3.24) and (3.50), it is easy to see that \( \int \nolimits _{\Omega _0} \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \, \omega ^\epsilon \, \phi ' \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow \int \nolimits _{\Omega _0} \xi ^* x_1 \phi ' \hbox {d}x_1 \hbox {d}x_2. \) On the other hand, it follows from (2.6), (3.2), (3.11), (3.12) and (3.50) that

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon _+ \setminus \Omega _0} \left| \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \, \omega ^\epsilon \, \phi ' \right| \hbox {d}x_1 \hbox {d}x_2&\le \left\| \frac{\partial u^\epsilon }{\partial x_1} \right\| _{L^2(\Omega ^\epsilon )} \Vert \phi ' \omega ^\epsilon \Vert _{L^2(\widetilde{\Omega }^\epsilon _+ \setminus \Omega _0)} \\&\le \Vert u^\epsilon \Vert _{H^1(\Omega ^\epsilon )}\Vert \omega ^\epsilon \Vert _{H^1(\cup _{i=1}^N \Omega _i)} \Vert \phi ' \Vert _\infty ^2 \left| \widetilde{\Omega }^\epsilon _+ \setminus \Omega _0 \right| ^{1/2}\\&\rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$

    proving (3.61).

  • Second integrand: we have

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon _-} \Big \{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi ^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi ^\epsilon }{\partial x_2} \Big \} \hbox {d}x_1 \hbox {d}x_2 \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
    (3.62)

    Indeed, it follows from estimates (3.54) and (2.6) that there exists \(C>0\) such that

    $$\begin{aligned}&\left| \int \limits _{\widetilde{\Omega }^\epsilon _-} \left\{ \widetilde{\frac{\partial u^\epsilon }{\partial x_1}} \frac{\partial \varphi ^\epsilon }{\partial x_1} + \frac{1}{\epsilon ^2} \widetilde{\frac{\partial u^\epsilon }{\partial x_2}} \frac{\partial \varphi ^\epsilon }{\partial x_2} \right\} \hbox {d}x_1 \hbox {d}x_2\right| \nonumber \\&\quad \le \left( \int \limits _{\Omega ^\epsilon } \left\{ \left( \frac{\partial u^\epsilon }{\partial x_1} \right) ^2 + \frac{1}{\epsilon ^2} \left( \frac{\partial u^\epsilon }{\partial x_2} \right) ^2 \right\} \hbox {d}x_1 \hbox {d}x_2 \right) ^{1/2} \\&\quad \left( \int \limits _{\widetilde{\Omega }^\epsilon _-} \left\{ \left( \frac{\partial \varphi ^\epsilon }{\partial x_1} \right) ^2 + \frac{1}{\epsilon ^2} \left( \frac{\partial \varphi ^\epsilon }{\partial x_2} \right) ^2 \right\} \hbox {d}x_1 \hbox {d}x_2 \right) ^{1/2} \nonumber \\&\quad \le C \, \epsilon ^{(\alpha - 1)/2} \rightarrow 0, \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$

    since \(\alpha > 1\).

  • Third integrand: if \(p(x)\) is the function defined in (3.6), then

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, P_\epsilon u^\epsilon \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow \int \limits _0^1 p(x) \, u_0(x) \, x \phi (x) \, \hbox {d}x, \quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
    (3.63)

    In fact, we can proceed as in the proof of (3.36), since we have (3.18), (3.55), \(P_\epsilon u^\epsilon |_{\Omega ^\epsilon } = u^\epsilon \) and

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, P_\epsilon u^\epsilon \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2&= \int \limits _{\Omega ^\epsilon } \left( u^\epsilon - u_0 \right) \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 + \int \limits _{\Omega ^\epsilon } u_0 \, \left( \varphi ^\epsilon - x_1 \phi \right) \, \hbox {d}x_1 \hbox {d}x_2 \\&\qquad + \int \limits _{\Omega ^\epsilon } u_0 \, x_1 \phi \, \hbox {d}x_1 \hbox {d}x_2. \end{aligned}$$
  • Fourth integrand: due to (3.18) and (3.52), we can easily obtain

    $$\begin{aligned} \int \limits _{\Omega ^\epsilon _+} \eta _1^\epsilon \, \phi ' \, u^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow \int \limits _{\Omega _0} \hat{q} \, \phi ' \, u_0 \, \hbox {d}x_1 \hbox {d}x_2, \quad \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$
    (3.64)

    since \(\Omega ^\epsilon _+ \subset \Omega _0\), and

    $$\begin{aligned} \int \limits _{\Omega ^\epsilon _+} \eta _1^\epsilon \, \phi ' \, u^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 = \int \limits _{\Omega _0} \widetilde{\eta }_1^\epsilon \, \phi ' \, P_\epsilon u^\epsilon \, \hbox {d}x_1 \hbox {d}x_2. \end{aligned}$$
  • Fifth integrand: we have

    $$\begin{aligned} \int \limits _{\widetilde{\Omega }^\epsilon } \chi ^\epsilon \, f^\epsilon \, \varphi ^\epsilon \, \hbox {d}x_1 \hbox {d}x_2 \rightarrow \int \limits _0^1 \hat{f}(x) \, x \phi (x) \, \hbox {d}x, \quad \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$
    (3.65)

    which is derived from (3.4) and (3.55) in the same way as (3.38).

Therefore, due to the convergences obtained in (3.61), (3.62), (3.63), (3.64) and (3.65), we can pass to the limit in (3.60), getting the following relation

$$\begin{aligned} \int \limits _{\Omega _0} \xi ^* \, x_1 \phi ' \, \hbox {d}x_1 \hbox {d}x_2 + \int \limits _0^1 p \, u_0 \, x \phi \, \hbox {d}x - \int \limits _{\Omega _0} \hat{q} \, \phi ' \, u_0 \, \hbox {d}x_1 \hbox {d}x_2 = \int \limits _0^1 \hat{f} x \phi \, \hbox {d}x, \end{aligned}$$
(3.66)

for all \(\phi \in \mathcal {C}^\infty _0(\cup _{i=1}^{N} (\xi _{i-1},\xi _{i}))\), where the step functions \(p\) and \(\hat{q}\) are given in (3.6) and (3.52), respectively, by

$$\begin{aligned} p(x) = p_i&= \frac{|Y_i^*|}{l_h} + \frac{1}{l_g} \int \limits _0^{l_g} G_i(s)\, \hbox {d}s - G_{0,i}, \nonumber \\ G_{0,i}&= \min _{y \in {\mathbb {R}}} G_i(y), \qquad \qquad \qquad \qquad \qquad \qquad \qquad x \in (\xi _{i-1},\xi _i),\\ \hat{q}(x, y) = \hat{q}_i(y)&= \frac{1}{l_h} \int \limits _0^{l_h} \Big ( 1 - \frac{\partial X_i}{\partial y_1} (s,y) \Big ) \chi _i(s,y) \, \hbox {d}s,\nonumber \end{aligned}$$
(3.67)

for \(i=1,\ldots ,N\). Thus, if we take \(x_1 \phi (x_1)\) as a test function in (3.39), we obtain

$$\begin{aligned} \int \limits _{\Omega _0} \xi ^* \frac{\partial }{\partial x_1} \left( x_1 \phi (x_1) \right) \, \hbox {d}x_1 \hbox {d}x_2 + \int \limits _0^1 p \, u_0 \, x \phi \, \hbox {d}x = \int \limits _0^1 \hat{f} \, x \phi \, \hbox {d}x. \end{aligned}$$
(3.68)

Combining (3.66) and (3.68), we get

$$\begin{aligned} \int \limits _{\Omega _0} \left\{ \hat{q} \, \phi ' \, u_0 + \phi \, \xi ^* \right\} \hbox {d}x_1 \hbox {d}x_2 = 0, \quad \forall \phi \in \mathcal {C}^\infty _0(\cup _{i=1}^{N} (\xi _{i-1},\xi _{i})). \end{aligned}$$
(3.69)

Hence, integrating by parts, we have \(\int \nolimits _{\Omega _0} \hat{q} \, \phi ' \, u_0 \, \hbox {d}x_1 \hbox {d}x_2 = - \int \nolimits _{\Omega _0} \hat{q} \, \frac{\partial u_0}{\partial x_1} \, \phi \, \hbox {d}x_1 \hbox {d}x_2\), and so, by iterated integration and (3.69), we obtain

$$\begin{aligned} \sum _{i=1}^N \int \limits _{\xi _{i-1}}^{\xi _i} \int \limits _{-G_{0,i}}^{H_1} \left\{ \hat{q}_i(x_2) \, \frac{\partial u_0}{\partial x_1}(x_1) - \xi ^*(x_1,x_2) \right\} \phi (x_1) \, \hbox {d}x_1 \hbox {d}x_2 = 0, \end{aligned}$$
(3.70)

for all \(\phi \in \mathcal {C}^\infty _0(\cup _{i=1}^{N} (\xi _{i-1},\xi _{i}))\).

Then, consider the step function \(q:(0,1) \mapsto {\mathbb {R}}\) defined by \(q(x)=q_i\) for \(x \in (\xi _{i-1},\xi _i)\), where

$$\begin{aligned} q_i = \frac{1}{l_h} \int \limits _{Y^*_i} \Big ( 1 - \frac{\partial X_i}{\partial y_1} (y_1,y_2) \Big ) \hbox {d}y_1 \hbox {d}y_2, \end{aligned}$$
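
Note that integrating \(\hat{q}_i\) in the vertical variable recovers precisely this constant, since \(\chi _i\) is the characteristic function of \(Y^*_i\):

$$\begin{aligned} \int \limits _{-G_{0,i}}^{H_1} \hat{q}_i(x_2) \, \hbox {d}x_2 = \frac{1}{l_h} \int \limits _{-G_{0,i}}^{H_1} \int \limits _0^{l_h} \Big ( 1 - \frac{\partial X_i}{\partial y_1} (s,x_2) \Big ) \chi _i(s,x_2) \, \hbox {d}s \, \hbox {d}x_2 = \frac{1}{l_h} \int \limits _{Y^*_i} \Big ( 1 - \frac{\partial X_i}{\partial y_1} \Big ) \hbox {d}y_1 \hbox {d}y_2 = q_i. \end{aligned}$$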

Hence, it follows from (3.70) and (3.67) that

$$\begin{aligned} \int \limits _0^1 \left\{ q(x_1) \, \frac{\partial u_0}{\partial x_1}(x_1) \!-\! \left( \int \limits _{-G_0(x_1)}^{H_1} \xi ^*(x_1,x_2) \, \hbox {d}x_2 \right) \right\} \phi (x_1) \, \hbox {d}x_1 \!=\! 0, \quad \forall \phi \in \mathcal {C}^\infty _0(\cup _{i=1}^{N} (\xi _{i\!-\!1},\xi _{i})), \end{aligned}$$

where \(G_0(x) = G_{0,i}\) if \(x \in (\xi _{i-1},\xi _i)\). Therefore,

$$\begin{aligned} \int \limits _{-G_0(x_1)}^{H_1}\xi ^*(x_1,x_2) \, \hbox {d}x_2 = q(x_1) \frac{\partial u_0(x_1)}{\partial x_1},\quad \hbox { a.e. } x_1 \in (0,1). \end{aligned}$$
(3.71)

Finally, since \(\int \nolimits _{\Omega _0} \xi ^*(x_1,x_2) \, \phi '(x_1) \, \hbox {d}x_1 \hbox {d}x_2 = \int \nolimits _0^1 \left( \int \nolimits _{-G_0(x_1)}^{H_1} \xi ^*(x_1,x_2) \, \hbox {d}x_2 \right) \phi '(x_1) \, \hbox {d}x_1\), we can plug (3.71) into (3.39), obtaining our limit problem (3.5), written here as

$$\begin{aligned} \sum _{i=1}^N\int \limits _{\xi _{i-1}}^{\xi _i} \left\{ q_i \frac{\partial u_0}{\partial x_1}\frac{\partial \phi }{\partial x_1} + p_i \, u_0 \, \phi \right\} \hbox {d}x_1 = \int \limits _0^1 \hat{f} \, \phi \, \hbox {d}x_1,\quad \forall \phi \in H^1(0,1). \end{aligned}$$

\(\square \)

4 The general homogenized limit

Now we are in a position to obtain our main result concerning the elliptic Eq. (2.2) under hypothesis (H). Using approximation arguments on the functions \(G_\epsilon \) and \(H_\epsilon \), the boundary perturbation result given by Proposition 2.4, and Lemma 3.1, we are able to accomplish our goal using techniques previously discussed in [9–11].

Theorem 4.1

Let \(u^\epsilon \) be the solution of (2.2) with \(f^\epsilon \in L^2(\Omega ^\epsilon )\) satisfying condition (2.3), and assume that the function

$$\begin{aligned} \hat{f}^\epsilon (x) = \int \limits _{-G_\epsilon (x)}^{H_\epsilon (x)} f^\epsilon (x,s) \, \hbox {d}s, \quad x \in (0,1), \end{aligned}$$
(4.1)

satisfies that \(\hat{f}^\epsilon \rightharpoonup \hat{f}\), w-\(L^2(0,1)\), as \(\epsilon \rightarrow 0\).

Then, there exists \(\hat{u} \in H^1(0,1)\) such that, if \(P_\epsilon \) is the extension operator introduced in Lemma 2.1, then

$$\begin{aligned} \Vert P_{\epsilon } u^\epsilon - \hat{u} \Vert _{L^2(\widetilde{\Omega }^\epsilon )} \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$
(4.2)

where \(\hat{u}\) is the unique solution of the Neumann problem

$$\begin{aligned} \int \limits _0^1 \Big \{ q(x) \, u_x(x) \, \varphi _x(x) + p(x) \, u(x) \, \varphi (x) \Big \} \hbox {d}x = \int \limits _0^1 \, \hat{f}(x) \, \varphi (x) \, \hbox {d}x \end{aligned}$$
(4.3)

for all \(\varphi \in H^1(0,1)\), where

$$\begin{aligned} q(x)&= \frac{1}{l_h} \int \limits _{Y^*(x)} \left\{ 1 - \frac{\partial X(x)}{\partial y_1}(y_1,y_2) \right\} \hbox {d}y_1 \hbox {d}y_2, \nonumber \\ p(x)&= \frac{|Y^*(x)|}{l_h} + \frac{1}{l_g} \int \limits _0^{l_g} G(x,y) \, \hbox {d}y - G_0(x), \\ G_0(x)&= \min _{y \in {\mathbb {R}}} G(x,y),\nonumber \end{aligned}$$
(4.4)

and \(X(x)\) is the unique solution of the problem

$$\begin{aligned} \left\{ \begin{array}{l} - \Delta X(x) = 0\quad \hbox { in } Y^*(x) \\ \frac{\partial X(x)}{\partial N} = 0\quad \quad \quad \hbox { on } B_2(x) \\ \frac{\partial X(x)}{\partial N} = N_1 \quad \quad \hbox { on } B_1(x) \\ X(x)\quad l_h-\hbox {periodic on } B_0(x) \\ \int \nolimits _{Y^*(x)} X(x) \; \hbox {d}y_1 \hbox {d}y_2 = 0 \end{array} \right. \end{aligned}$$
(4.5)

in the representative cell \(Y^*(x)\) given by

$$\begin{aligned} Y^*(x) = \{ (y_1,y_2) \in {\mathbb {R}}^2 \; | \; 0< y_1 < l_h, \quad -G_0(x) < y_2 < H(x,y_1) \}, \end{aligned}$$

\(B_0(x)\) is the lateral boundary, \(B_1(x)\) is the upper boundary and \(B_2(x)\) is the lower boundary of \(\partial Y^*(x)\) for each \(x \in (0,1)\).

Remark 4.2

  1. (i)

    If the function \( q(x)\) is continuous, then the integral formulation (4.3) is the weak formulation of the problem

    $$\begin{aligned} \left\{ \begin{array}{l} - \frac{1}{p(x)} \left( q(x) \, u_x(x) \right) _x + u(x) = f(x), \quad x \in (0,1), \\ u_x(0) = u_x(1) = 0, \end{array} \right. \end{aligned}$$

    with \(f(x)=\hat{f}(x)/p(x)\). Indeed, integrating by parts in (4.3) and using \(u_x(0) = u_x(1) = 0\), we get \(\int \nolimits _0^1 \{ - ( q \, u_x )_x + p \, u - \hat{f} \} \varphi \, \hbox {d}x = 0\) for all \(\varphi \in H^1(0,1)\), from which the strong form above follows after dividing by \(p\).

  2. (ii)

    Also, if we initially assume that \(f^\epsilon \) does not depend on the vertical variable \(y\), that is, \(f^\epsilon (x,y)=f_0(x)\), then it is not difficult to see that

    $$\begin{aligned} \hat{f}^\epsilon (x) = \left( H_\epsilon (x) + G_\epsilon (x) \right) f_0(x) \end{aligned}$$

    and so, due to the Average Theorem discussed for example in [10, Lemma 4.2],

    $$\begin{aligned} H_\epsilon (x) + G_\epsilon (x) \rightharpoonup \frac{1}{l_h} \int \limits _0^{l_h} H(x,y) \, \hbox {d}y + \frac{1}{l_g} \int \limits _0^{l_g} G(x,y) \, \hbox {d}y, \quad w^* - L^\infty (0,1), \end{aligned}$$

    as \(\epsilon \rightarrow 0\). Thus, \(H_\epsilon (x) + G_\epsilon (x) \rightharpoonup p(x)\), \( w^* - L^\infty (0,1)\), and \(\hat{f}(x)=p(x)f_0(x)\) as discussed in (3.37).

  3. (iii)

    Moreover, if we combine the uniform estimate (2.6) in \(H^1(\Omega ^\epsilon )\) and Lemma 2.1, we obtain that \(P_{\epsilon } u^\epsilon \) is uniformly bounded in \(H^1(\widetilde{\Omega }^\epsilon )\). Hence, from the convergence result (4.2) in \(L^2(\widetilde{\Omega }^\epsilon )\), we obtain by interpolation [29, Section 1.4] that

    $$\begin{aligned} \Vert P_{\epsilon } u^\epsilon - \hat{u} \Vert _{H^{\beta }(\widetilde{\Omega }^\epsilon )} \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$

    for all \(0 \le \beta < 1\).

Remark 4.3

In fact, problem (4.3) is well posed in the sense that the diffusion coefficient \(q\) is uniformly positive and continuous in \([0,1]\). To see this, we use the variational formulation of the auxiliary problem (4.5), given by the bilinear form

$$\begin{aligned} a_{Y^*}(\varphi ,\phi ) = \int \limits _{Y^*(x)} \nabla \varphi \cdot \nabla \phi \, \hbox {d}y_1 \hbox {d}y_2, \quad \forall \varphi , \phi \in V, \end{aligned}$$

defined in the Hilbert space \(V\) given by \(V = V_{Y^*} / {\mathbb {R}}\),

$$\begin{aligned} V_{Y^*} = \{ \varphi \in H^1(Y^*) \; | \; \varphi \text { is } l_h \hbox { - periodic in variable } y_1 \}, \end{aligned}$$

with norm

$$\begin{aligned} \Vert \varphi \Vert _V = \left( \int \limits _{Y^*} \left| \nabla \varphi \right| ^2 \, \hbox {d}y_1 \hbox {d}y_2 \right) ^{1/2}. \end{aligned}$$

Due to hypothesis (H), we have that the representative cell \(Y^* = Y^*(x)\) is defined for all \(x \in [0,1]\). Hence, for all \(\phi \in V\) and \(x \in [0,1]\), we have

$$\begin{aligned} a_{Y^*}(X,\phi ) = \int \limits _{{B_{1}}} \, N_1 \phi \, \hbox {d}S, \end{aligned}$$

where \(B_1(x)\) is the upper boundary of the basic cell \(Y^*\). Consequently, \(y_1 - X(x)\) satisfies

$$\begin{aligned} a_{Y^*}(y_1 - X, \phi ) = \int \limits _{{B_{1}}} N_1 \phi \, \hbox {d}S - \int \limits _{Y^*} \nabla X \cdot \nabla \phi \, \hbox {d}y_1 \hbox {d}y_2 = \int \limits _{{B_{1}}} N_1 \phi \, \hbox {d}S - \int \limits _{{B_{1}}} N_1 \, \phi \, \hbox {d}S = 0, \quad \forall \phi \in V, \end{aligned}$$
(4.6)

where we used that \(\int \nolimits _{Y^*} \frac{\partial \phi }{\partial y_1} \, \hbox {d}y_1 \hbox {d}y_2 = \int \nolimits _{{B_{1}}} N_1 \phi \, \hbox {d}S\), since \(\phi \) is \(l_h\)-periodic in the \(y_1\) variable and \(N_1 = 0\) on \(B_2\), together with the identity \(a_{Y^*}(X,\phi ) = \int \nolimits _{{B_{1}}} N_1 \phi \, \hbox {d}S\). Also, we have that

$$\begin{aligned} q \, l_h&= \int \limits _{Y^*} \frac{\partial }{\partial y_1}(y_1 - X(y_1,y_2)) \, \frac{\partial y_1}{\partial y_1} \, \hbox {d}y_1 \hbox {d}y_2 = \int \limits _{Y^*} \nabla (y_1 - X(y_1,y_2)) \cdot \nabla y_1 \, \hbox {d}y_1 \hbox {d}y_2 \nonumber \\&= a_{Y^*}(y_1 - X, y_1). \end{aligned}$$
(4.7)

Hence, due to relation (4.6) with \(\phi = - X\) and identity (4.7), we get, for all \(x \in [0,1]\),

$$\begin{aligned} q \, l_h&= a_{Y^*}(y_1 - X, y_1) + a_{Y^*}(y_1 - X, - X) \\&= a_{Y^*}(y_1 - X, y_1 - X) = \Vert y_1-X\Vert _V^2 > 0. \end{aligned}$$

Thus, since \(x \mapsto \Vert y_1-X\Vert _V\) is a continuous function on \([0,1]\) (see [10, Proposition A.1]) and \(|Y^*| > 0\), we have that the homogenization coefficient \(q\) is uniformly positive and continuous in \([0,1]\), implying, in particular, that problem (4.3) is well posed, with \(\hat{u}\) its unique solution.

We now provide a proof of Theorem 4.1.

Proof

From estimate (2.6) and Lemma 2.1, we have \(P_{\epsilon } u^\epsilon |_{\widehat{\Omega }_0} \in H^1(\widehat{\Omega }_0)\) satisfying

$$\begin{aligned} \Vert P_{\epsilon } u^\epsilon \Vert _{L^2(\widehat{\Omega }_0)}, \Big \Vert \frac{\partial P_{\epsilon } u^\epsilon }{\partial x_1} \Big \Vert _{L^2(\widehat{\Omega }_0)} \hbox { and } \frac{1}{\epsilon } \Big \Vert \frac{\partial P_{\epsilon } u^\epsilon }{\partial x_2} \Big \Vert _{L^2(\widehat{\Omega }_0)} \le M\quad \hbox { for all } \epsilon > 0, \end{aligned}$$

with \(M>0\) independent of \(\epsilon \), where \(\widehat{\Omega }_0 \subset \widetilde{\Omega }^\epsilon \) is given here by \( \widehat{\Omega }_0 = (0,1) \times (-G_0, H_1). \) Then, there exist \(u_0\in H^1(\widehat{\Omega }_0)\) and a subsequence, still denoted by \(P_{\epsilon } u^\epsilon \), satisfying

$$\begin{aligned} P_\epsilon u^\epsilon \rightharpoonup u_0 \quad w-H^1(\widehat{\Omega }_0), \quad \hbox { and } \quad \frac{\partial P_\epsilon u^\epsilon }{\partial x_2} \rightarrow 0 \quad s-L^2(\widehat{\Omega }_0). \end{aligned}$$
(4.8)

Thus, arguing as in (3.16), we get \(u_0(x_1,x_2) = u_0(x_1)\) on \(\widehat{\Omega }_0\), and so, \(u_0 \in H^1(0,1)\).

We will show that \(u_0\) satisfies the Neumann problem (4.3) using a discretization argument on the oscillating boundary of the domain.

For this, let us fix a small \(\delta >0\) and consider piecewise periodic functions \(G^\delta (x,y)\) and \(H^\delta (x,y)\), as described at the beginning of Sect. 3, satisfying hypothesis (H) and the condition

$$\begin{aligned} \begin{array}{l} 0\le G^\delta (x,y)-G(x,y) \le \delta , \\ 0\le H^\delta (x,y)-H(x,y) \le \delta , \end{array} \quad \forall (x,y) \in [0,1] \times {\mathbb {R}}. \end{aligned}$$

In order to construct these functions, we may proceed as follows. The functions \(G\) and \(H\) are uniformly \(C^1\) in each set \((\xi _{i-1},\xi _i) \times (0,1)\), being periodic in the second variable. In particular, for \(\delta >0\) small enough and for a fixed \(z\in (\xi _{i-1},\xi _i)\), there exists a small interval \((z-\eta ,z+\eta )\), with \(\eta \) depending only on \(\delta \), such that \(|G(x,y)-G(z,y)|+|\partial _y G(x,y)- \partial _y G(z,y)|<\delta /2\) and \(|H(x,y)-H(z,y)|+|\partial _y H(x,y)-\partial _y H(z,y)|<\delta /2\) for all \(x\in (z-\eta ,z+\eta )\cap (\xi _{i-1},\xi _i)\) and for all \(y\in {\mathbb {R}}\). This allows us to select a finite number of points \(\xi _{i-1}=\xi _{i-1}^1< \xi _{i-1}^2<\ldots <\xi _{i-1}^r=\xi _i\) such that \(\xi _{i-1}^{k+1}-\xi _{i-1}^{k}<\eta \) for \(k=1,\ldots ,r-1\). Therefore, defining \(G^\delta (x,y)=G(\xi _{i-1}^k,y)+\delta /2\) and \(H^\delta (x,y)=H(\xi _{i-1}^k,y)+\delta /2\) for all \(x\in (\xi _{i-1}^k,\xi _{i-1}^{k+1})\), we get \(0\le G^\delta (x,y)-G(x,y)\le \delta \), \(|\partial _y G^\delta (x,y)-\partial _y G(x,y)|\le \delta \), \(0\le H^\delta (x,y)-H(x,y)\le \delta \) and \(|\partial _y H^\delta (x,y)-\partial _y H(x,y)|\le \delta \) for all \((x,y)\in (\xi _{i-1},\xi _i)\times {\mathbb {R}}\).

Note that this construction can be done for all \(i=1,\ldots , N\). In particular, if we rename all the points \(\xi _i^k\) constructed above by \(0=z_0<z_1<\ldots <z_m=1\), observing that \(m=m(\delta )\), then the functions \(G^\delta \) and \(H^\delta \) satisfy \(G^\delta (x,y)=G^\delta _i(y)\) and \(H^\delta (x,y)=H^\delta _i(y)\) for \((x,y) \in (z_{i-1},z_i) \times {\mathbb {R}}\), \(i=1,\ldots , m\), where \(G^\delta _i\) and \(H^\delta _i\) are \(C^1\)-functions, \(l_g\)- and \(l_h\)-periodic, respectively. At each point \(z_i\), we set \(G^\delta \) and \(H^\delta \) to be the minimum of the two lateral limits at \(z_i\).

Let us now denote \(G_\epsilon ^\delta (x)=G^\delta (x,x/\epsilon ^\alpha )\), \(\alpha > 1\), and \(H_\epsilon ^\delta (x)=H^\delta (x,x/\epsilon )\), in order to introduce the following oscillating domains

$$\begin{aligned}&\Omega ^{\epsilon ,\delta } = \{ (x,y) \in {\mathbb {R}}^2 \; | \; x \in (0,1), \; - G_{\epsilon }^{\delta }(x) < y < H_{\epsilon }^{\delta }(x) \}, \\&\widetilde{\Omega }^{\epsilon ,\delta } = \{ (x,y) \in {\mathbb {R}}^2 \; | \; x \in (0,1), \; - G_{\epsilon }^{\delta }(x) < y < H_1 \}. \end{aligned}$$

Since \(H_{\epsilon }^{\delta }\) satisfies the hypotheses of Lemma 2.1, there exists an extension operator

$$\begin{aligned} P_{\epsilon ,\delta } \in \mathcal {L}(L^p(\Omega ^{\epsilon ,\delta }),L^p(\widetilde{\Omega }^{\epsilon ,\delta })) \cap \mathcal {L}(W^{1,p}(\Omega ^{\epsilon ,\delta }),W^{1,p}(\widetilde{\Omega }^{\epsilon ,\delta })) \end{aligned}$$

satisfying the uniform estimate (2.8) with \(\eta (\epsilon ) \sim 1/\epsilon \).

Taking \(f^\epsilon \in L^2(\Omega ^\epsilon )\) satisfying \(\Vert f^\epsilon \Vert _{L^2(\Omega ^\epsilon )}\le C\), extending it by zero outside \(\Omega ^\epsilon \) and still denoting the extended function by \(f^\epsilon \), and using that \(G^\delta \ge G\) and \(H^\delta \ge H\), we have that \(\hat{f}^\epsilon _\delta (x)=\int ^{H^\delta _\epsilon (x)}_{-G_\epsilon ^\delta (x)} f^\epsilon (x,y)\hbox {d}y=\int ^{H_\epsilon (x)}_{-G_\epsilon (x)} f^\epsilon (x,y) \hbox {d}y = \hat{f}^\epsilon (x)\) and, by hypothesis, \(\hat{f}^\epsilon _\delta \equiv \hat{f}^\epsilon \rightharpoonup \hat{f}\) w-\(L^2(0,1)\).

Therefore, it follows from Lemma 3.1 that, for each fixed \(\delta > 0\), there exists \(u^\delta \in H^1(0,1)\) such that the solutions \(u^{\epsilon ,\delta }\) of (2.2) in \(\Omega ^{\epsilon ,\delta }\) satisfy

$$\begin{aligned} \Vert P_{\epsilon ,\delta } u^{\epsilon ,\delta } - u^\delta \Vert _{L^2(\widetilde{\Omega }^{\epsilon ,\delta })} \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$
(4.9)

where \(u^\delta \in H^1(0,1)\) is the unique solution of the Neumann problem

$$\begin{aligned} \int \limits _0^1 \Big \{ q^\delta (x) \; u_x^\delta (x) \, \varphi _x(x) + p^\delta (x) \, u^\delta (x) \, \varphi (x) \Big \} \hbox {d}x = \int \limits _0^1 \, \hat{f}(x) \, \varphi (x) \, \hbox {d}x, \quad \forall \varphi \in H^1(0,1), \end{aligned}$$
(4.10)

where \(q^\delta \), \(p^\delta : (0,1) \mapsto {\mathbb {R}}\) are strictly positive, locally constant functions given by

$$\begin{aligned} \left\{ \begin{array}{l} q^\delta (x) = \frac{1}{l_h} \int \limits _{Y^*_i} \Big \{ 1 - \frac{\partial X_i}{\partial y_1}(y_1,y_2) \Big \} \hbox {d}y_1 \hbox {d}y_2, \\ p^\delta (x) =\frac{|Y_i^*|}{l_h} + \frac{1}{l_g} \int \limits _0^{l_g} G_i^\delta (s) \, \hbox {d}s - G^\delta _{0,i}, \\ G_{0,i}^\delta = \min _{y \in {\mathbb {R}}} G_i^\delta (y), \end{array}\right. \quad x \in (z_{i-1},z_i), \end{aligned}$$

where the function \(X_i\) is the unique solution of (3.7) in the representative cell \(Y^*_i\) given by

$$\begin{aligned} Y^*_i = \{ (y_1,y_2) \in {\mathbb {R}}^2 \; | \; 0< y_1 < l_h, \quad - G_{0,i}^\delta < y_2 < H_i^\delta (y_1) \}, \quad i=1, \ldots , m. \end{aligned}$$

Now, let us pass to the limit in (4.10) as \(\delta \rightarrow 0\). To do this, we consider the functions \(q^\delta \) and \(p^\delta \) defined on \((0,1)\) and the functions \(q\) and \(p\) defined in (4.4). We have that \(q^\delta \) and \(p^\delta \) converge to \(q\) and \(p\) uniformly in \((0,1)\): the uniform convergence of \(q^\delta \) to \(q\) follows from [9, Proposition A.1], while the uniform convergence of \(p^\delta \) to \(p\) follows from the uniform convergence of \(G^\delta \) and \(H^\delta \) to \(G\) and \(H\), respectively, as \(\delta \rightarrow 0\).

Therefore, we obtain from [13, p. 8] or [23, p. 1] the following limit variational formulation: find \(u \in H^1(0,1)\) such that

$$\begin{aligned} \int \limits _0^1 \Big \{ q(x) \; u_x(x) \, \varphi _x(x) + p(x) \, u(x) \, \varphi (x) \Big \} \hbox {d}x = \int \limits _0^1 \, \hat{f}(x) \, \varphi \, \hbox {d}x \end{aligned}$$
(4.11)

for all \(\varphi \in H^1(0,1)\). Hence, there exists \(u^* \in H^1(0,1)\) such that

$$\begin{aligned} u^\delta \rightarrow u^* \hbox { in } H^1(0,1) \end{aligned}$$
(4.12)

where \(u^*\) is the unique solution of the Neumann problem (4.11).

We will complete the proof by showing that \(u^* = u_0\) in \((0,1)\), where \(u_0\) is the function obtained in (4.8). In order to do so, we observe that \(\Vert u^*-u_0\Vert _{L^2(0,1)}^2 = \left\{ H_1 + G_0\right\} ^{-1}\Vert u^*-u_0\Vert _{L^2(\widehat{\Omega }_0)}^2\), and therefore, to show that \(u^*=u_0\), it is enough to show that \(\Vert u^*-u_0\Vert _{L^2(\widehat{\Omega }_0)}^2=0\). Adding and subtracting appropriate functions, we have, for all \(\epsilon \) and \(\delta > 0\), that

$$\begin{aligned} \Vert u^* - u_0 \Vert _{L^2(\widehat{\Omega }_0)}&\le \Vert u^* - u^{\delta } \Vert _{L^2(\widehat{\Omega }_0)}+\Vert u^\delta - u^{\epsilon ,\delta } \Vert _{L^2(\widehat{\Omega }_0)}\nonumber \\&+ \Vert u^{\epsilon ,\delta } - u^\epsilon \Vert _{L^2(\widehat{\Omega }_0)} + \Vert u^\epsilon - u_0 \Vert _{L^2(\widehat{\Omega }_0)}. \end{aligned}$$
(4.13)

Let \(\eta \) now be a small positive number. From (4.12) and Proposition 2.4, we can choose a fixed, small \(\delta >0\) such that \(\Vert u^* - u^{\delta } \Vert _{L^2(\widehat{\Omega }_0)} \le \eta \) and \(\Vert u^{\epsilon ,\delta } - u^\epsilon \Vert _{L^2(\widehat{\Omega }_0)} \le \eta \) uniformly for all \(\epsilon >0\). For this particular value of \(\delta \), we can choose, by (4.9), \(\epsilon _1>0\) small enough such that \(\Vert u^\delta - u^{\epsilon ,\delta } \Vert _{L^2(\widehat{\Omega }_0)}\le \eta \) for \(0<\epsilon <\epsilon _1\). Moreover, from (4.8), there exists \(\epsilon _2>0\) such that \( \Vert u^\epsilon - u_0 \Vert _{L^2(\widehat{\Omega }_0)} \le \eta \) for all \(0<\epsilon <\epsilon _2\). Hence, applying (4.13) with \(0<\epsilon <\min \{\epsilon _1,\epsilon _2\}\), we get \(\Vert u^* - u_0 \Vert _{L^2(\widehat{\Omega }_0)} \le 4\eta \). Since \(\eta \) is arbitrarily small, we get \(\Vert u^*-u_0\Vert _{L^2(\widehat{\Omega }_0)}^2=0\). \(\square \)

5 Convergence of linear semigroups

In order to accomplish our goal, we consider here the linear parabolic problems associated with the perturbed Eq. (1.5) and its limit problem (1.6) in the abstract framework given by [27, 29], and we show that, under an appropriate notion of convergence, the linear semigroup given by (1.5) converges to the one established by (1.6) as \(\epsilon \rightarrow 0\). The convergence concept that we adopt here was first introduced in the works [41–43, 45, 46] and then successfully applied in [2–5, 19] to concrete perturbation problems given by parabolic equations.

To do so, let us first consider a family of Hilbert spaces \(\{ Z_\epsilon \}_{\epsilon > 0}\) defined by \(Z_\epsilon = L^2(\Omega ^\epsilon )\) under the canonical inner product

$$\begin{aligned} ( u, v )_{\epsilon } = \int \limits _{\Omega ^\epsilon } u(x_1,x_2) \, v(x_1,x_2) \, \hbox {d}x_1 \hbox {d}x_2 \end{aligned}$$

and let \(Z_0 = L^2(0,1)\) be the limiting Hilbert space with the inner product \(( \cdot , \cdot )_0\) given by

$$\begin{aligned} ( u, v )_{{0}} = \int \limits _0^1 p(x) \, u(x) \, v(x) \, \hbox {d}x \end{aligned}$$

where

$$\begin{aligned} p(x) = \frac{|Y^*|}{l_h} + \frac{1}{l_g} \int \limits _0^{l_g} G(x, y) \, \hbox {d}y - G_0(x) \end{aligned}$$

is the positive function previously defined in (4.4).

We write the elliptic problem (2.4) as an abstract equation \(L_\epsilon u = f^{\epsilon }\), where \(L_\epsilon : \mathcal {D}(L_\epsilon ) \subset L^2(\Omega ^\epsilon ) \mapsto L^2(\Omega ^\epsilon )\) is the self-adjoint, positive linear operator with compact resolvent given by

$$\begin{aligned} \mathcal {D}(L_\epsilon )&= \left\{ u \in H^2(\Omega ^\epsilon ) \, | \, \frac{\partial u}{\partial x_1} N_1^\epsilon + \frac{1}{\epsilon ^2} \frac{\partial u}{\partial x_2}N_2^\epsilon = 0 \hbox { on } \partial \Omega ^\epsilon \right\} \nonumber \\ L_\epsilon u&= - \frac{\partial ^2 u}{{\partial x_1}^2} - \frac{1}{\epsilon ^2} \frac{\partial ^2 u}{{\partial x_2}^2} + u, \quad u \in \mathcal {D}(L_\epsilon ). \end{aligned}$$
(5.1)

Analogously, we associate the limit elliptic problem (4.3) with the limit linear operator \(L_0: \mathcal {D}(L_0) \subset Z_0 \mapsto Z_{0}\) defined by

$$\begin{aligned} \mathcal {D}(L_0)&= \left\{ u \in H^2(0,1) \, | \, u'(0) = u'(1) = 0 \right\} \nonumber \\ L_0 u&= - \frac{1}{p(x)} \left( q(x) u_x \right) _x + u, \quad u \in \mathcal {D}(L_0) \end{aligned}$$
(5.2)

where \(p\) and \(q\) are the homogenized coefficients established in (4.4). Due to Remark 4.3, it is clear that \(L_0\) is a positive self-adjoint operator with compact resolvent.
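
Note also that the weight \(p\) in \(( \cdot , \cdot )_0\) makes the operator \(L_0\) compatible with the limit problem (4.3): for \(u \in \mathcal {D}(L_0)\) and \(\varphi \in H^1(0,1)\), integrating by parts and using \(u'(0)=u'(1)=0\), we have

$$\begin{aligned} ( L_0 u, \varphi )_{0} = \int \limits _0^1 p(x) \Big \{ - \frac{1}{p(x)} \left( q(x) \, u_x(x) \right) _x + u(x) \Big \} \varphi (x) \, \hbox {d}x = \int \limits _0^1 \Big \{ q(x) \, u_x(x) \, \varphi _x(x) + p(x) \, u(x) \, \varphi (x) \Big \} \hbox {d}x, \end{aligned}$$

which is precisely the bilinear form appearing in (4.3).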

In order to simplify the notation, we denote by \(Z_{\epsilon }^{\alpha }\) the fractional power scale associated with the operators \(L_{\epsilon }\), with \(0\leqslant \alpha \leqslant 1\) and \(0\leqslant \epsilon \leqslant 1\). We also write \(Z_{\epsilon }:=Z_{\epsilon }^0\) for all \(0\leqslant \epsilon \leqslant 1\). Notice that \( Z_{\epsilon }^{1/2}\) is the Sobolev space \(H^1(\Omega ^\epsilon )\) with the norm

$$\begin{aligned} \Vert u\Vert _{Z_{\epsilon }^{1/2}}^2=\left\| \frac{\partial u}{\partial x_{1}}\right\| _{Z_{\epsilon }}^2 + \frac{1}{\epsilon ^2} \left\| \frac{\partial u}{\partial x_{2}}\right\| _{Z_{\epsilon }}^2 + \left\| u \right\| _{Z_{\epsilon }}^2. \end{aligned}$$

Remark 5.1

It follows from Remark 2.3 that the extension operators \(P_{\epsilon } \in \mathcal {L}(Z_{\epsilon }^{{1/2}}, H^1(\widetilde{\Omega }^\epsilon )) \cap \mathcal {L}(Z_{\epsilon }, L^2(\widetilde{\Omega }^\epsilon ))\) given by Lemma 2.1 are uniformly bounded in \(\epsilon \). Therefore, we obtain by interpolation that

$$\begin{aligned} \sup _{0\leqslant \epsilon \leqslant 1}\Vert P_{\epsilon }\Vert _{\mathcal {L}(Z_{\epsilon }^\alpha , H^{2\alpha }(\widetilde{\Omega }^\epsilon ))} < \infty , \quad 0 \leqslant \alpha \leqslant \frac{1}{2}. \end{aligned}$$
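
More precisely, identifying \(Z_\epsilon ^\alpha \) and \(H^{2\alpha }(\widetilde{\Omega }^\epsilon )\) with the corresponding complex interpolation spaces, the bound above can be obtained, for instance, from the standard interpolation inequality

$$\begin{aligned} \Vert P_{\epsilon }\Vert _{\mathcal {L}(Z_{\epsilon }^\alpha , H^{2\alpha }(\widetilde{\Omega }^\epsilon ))} \leqslant \Vert P_{\epsilon }\Vert _{\mathcal {L}(Z_{\epsilon }, L^{2}(\widetilde{\Omega }^\epsilon ))}^{1-2\alpha } \, \Vert P_{\epsilon }\Vert _{\mathcal {L}(Z_{\epsilon }^{1/2}, H^{1}(\widetilde{\Omega }^\epsilon ))}^{2\alpha }, \quad 0 \leqslant \alpha \leqslant \frac{1}{2}, \end{aligned}$$

whose right-hand side is bounded uniformly in \(\epsilon \) by Remark 2.3.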

So far, we have passed to the limit in the variational problem (2.4) as \(\epsilon \rightarrow 0\), obtaining the limit Eq. (4.3). Here, we apply the concept of compact convergence to obtain convergence properties of the linear semigroups generated by the operators \(L_\epsilon \) and \(L_0\).

For this, let us consider the family of linear continuous operators \(E_\epsilon : Z_0 \mapsto Z_\epsilon \) given by

$$\begin{aligned} (E_\epsilon u)(x_1,x_2) = u(x_1) \hbox { on } \Omega ^\epsilon \end{aligned}$$

for each \(u \in Z_0\). Since

$$\begin{aligned} \Vert E_\epsilon u \Vert ^2_{Z_\epsilon } = \int \limits _{\Omega ^\epsilon } u^2(x_1) \, \hbox {d}x_1 \hbox {d}x_2 = \int \limits _0^1 \left\{ H_\epsilon (x_1) + G_\epsilon (x_1) \right\} u^2(x_1) \, \hbox {d}x_1, \end{aligned}$$

and \(H_\epsilon + G_\epsilon \rightharpoonup p\), \(w^*\)-\(L^\infty (0,1)\) (see Remark 4.2), we have that \(\Vert E_\epsilon u \Vert _{Z_\epsilon } \rightarrow \Vert u \Vert _{Z_0} \hbox { as } \epsilon \rightarrow 0\). Observe that \(E_\epsilon \) is a kind of inclusion operator from \(Z_0\) into \(Z_\epsilon \). Similarly, we can consider \(E_{\epsilon }: Z^1_{0} \rightarrow Z^1_{\epsilon }\), and so, taking in \( Z^1_{0}\) the equivalent norm \(\Vert u\Vert _{Z^1_0}= \Vert - u_{xx} + u \Vert _{ Z_0}\), we obtain

$$\begin{aligned} \Vert E_{\epsilon } u\Vert _{Z^{1}_\epsilon } \rightarrow \Vert u\Vert _{Z^{1}_{0}}. \end{aligned}$$

Consequently, since

$$\begin{aligned} \sup _{0 \leqslant \epsilon \leqslant 1} \{\Vert E_\epsilon \Vert _{\mathcal {L}(Z_0,Z_\epsilon )}, \Vert E_\epsilon \Vert _{\mathcal {L}(Z^1_0,Z^1_\epsilon )} \} < \infty , \end{aligned}$$

we get by interpolation that

$$\begin{aligned} C = \sup _{\epsilon >0}\Vert E_{\epsilon } \Vert _{\mathcal {L}(Z_{0}^\alpha ,Z_{\epsilon }^\alpha )} < \infty \, \, \hbox { for } 0\leqslant \alpha \leqslant 1. \end{aligned}$$

Now we are in a position to introduce the following notions of convergence, compactness and compact convergence of operators, associated with the family of operators \(\{ E_\epsilon \}_{\epsilon > 0}\).

Definition 5.2

We say that a sequence of elements \(\{ u^\epsilon \}_{\epsilon > 0}\) with \(u^\epsilon \in Z_\epsilon \) is E-convergent to \(u \in Z_0\), if \(\Vert u^\epsilon - E_\epsilon u \Vert _{Z_\epsilon } \rightarrow 0\) as \(\epsilon \rightarrow 0\). We write \(u^\epsilon \mathop {\rightarrow }\limits ^{E} u\).

Definition 5.3

A sequence \(\{ u_n \}_{n \in {\mathbb {N}}}\) with \(u_n \in Z_{\epsilon _n}\) is said to be E-precompact if for any subsequence \(\{ u_{n'} \}\) there exist a further subsequence \(\{ u_{n''} \}\) and \(u \in Z_0\) such that \(u_{n''} \mathop {\rightarrow }\limits ^{E} u\) as \(n'' \rightarrow \infty \). A family \(\{ u^\epsilon \}_{\epsilon > 0}\) is called E-precompact if each sequence \(\{ u^{\epsilon _n} \}\), with \(\epsilon _n \rightarrow 0\), is E-precompact.

Definition 5.4

We say that a family of operators \(\{ B_\epsilon \in \mathcal {L}(Z_\epsilon ) \, | \, \epsilon > 0 \}\) E-converges to \(B \in \mathcal {L}(Z_0)\) as \(\epsilon \rightarrow 0\), if \(B_\epsilon f^\epsilon \mathop {\rightarrow }\limits ^{E} B f\) whenever \(f^\epsilon \mathop {\rightarrow }\limits ^{E} f \in Z_0\). We write \(B_\epsilon \mathop {\rightarrow }\limits ^{EE} B\).

Definition 5.5

We say that a family of compact operators \(\{ B_\epsilon \in \mathcal {L}(Z_\epsilon ) \, | \, \epsilon > 0 \}\) converges compactly to a compact operator \(B \in \mathcal {L}(Z_0)\), if for any family \(\{ f^\epsilon \}_{\epsilon > 0}\) with \(\Vert f^\epsilon \Vert _{Z_\epsilon } \le 1\), we have that the family \(\{ B_\epsilon f^\epsilon \}\) is E-precompact and \(B_\epsilon \mathop {\rightarrow }\limits ^{EE} B\). We write \(B_\epsilon \mathop {\rightarrow }\limits ^{CC} B\).

We finally note that this notion of convergence can also be extended to sets, following [5, 19].

Definition 5.6

Let \(\mathcal {O}_\epsilon \subset Z_\epsilon ^\alpha \), \(\epsilon \in [0,1]\), and \(\mathcal {O}_0 \subset Z_0^\alpha \), \(\alpha \in [0,1)\). We say that the family of sets \(\{ \mathcal {O}_\epsilon \}_{\epsilon \in [0,1]}\) is \(E\)-upper semicontinuous or just upper semicontinuous at \(\epsilon =0\) if

$$\begin{aligned} \sup _{w^\epsilon \in \mathcal {O}_\epsilon } \Big [ \inf _{w \in \mathcal {O}_0} \left\{ \Vert w^\epsilon - E_\epsilon w \Vert _{Z_\epsilon ^\alpha } \right\} \Big ] \rightarrow 0,\quad \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$

Let us also recall a useful characterization of upper semicontinuity of sets: if any sequence \(\{ u^\epsilon \}\) with \(u^\epsilon \in \mathcal {O}_\epsilon \) has an \(E\)-convergent subsequence with limit belonging to \(\mathcal {O}_0\), then \(\{ \mathcal {O}_\epsilon \}\) is \(E\)-upper semicontinuous at zero.

The following result is basically Theorem 4.1 rewritten according to the framework introduced above.

Corollary 5.7

The family of compact operators \(\{ L_\epsilon ^{-1} \in \mathcal {L}(Z_\epsilon ) \}_{\epsilon > 0}\) converges compactly to the compact operator \(L^{-1}_0 \in \mathcal {L}(Z_0)\) as \(\epsilon \rightarrow 0\).

Proof

Let us take \(\{ f^{\epsilon } \}_{\epsilon > 0} \subset Z_\epsilon \) with \(\Vert f^{\epsilon } \Vert _{Z_\epsilon } \le 1\) and define \(u^\epsilon = L_\epsilon ^{-1} f^{\epsilon }\). Then, \(L_\epsilon u^\epsilon = f^{\epsilon }\) and \(u^\epsilon \) satisfies the problem (2.4). Consequently, we get from Theorem 4.1 and Remark 4.2 that there exist \(f_0 \in Z_{0}\) and \(u_0 \in H^1(0,1)\) such that \(L_0 u_0 = f_0\), \(\Vert P_\epsilon u^\epsilon - u_0\Vert _{L^2(\widetilde{\Omega }^\epsilon )} \rightarrow 0\), as \(\epsilon \rightarrow 0\), where \(u_0(x_1,x_2) = u_0(x_1)\). Recall that \(P_{\epsilon }\) is the extension operator given by Lemma 2.1. Hence, we can conclude from the inequality

$$\begin{aligned} \Vert u^\epsilon - E_\epsilon u_0 \Vert _{Z_\epsilon } = \Vert \left( P_\epsilon u^\epsilon - u_0 \right) |_{\Omega ^\epsilon } \Vert _{Z_\epsilon } \le \Vert P_\epsilon u^\epsilon - u_0 \Vert _{L^2(\widetilde{\Omega }^\epsilon )} \end{aligned}$$

that \(u^\epsilon \mathop {\rightarrow }\limits ^{E} u_0\) proving that the family \(\{ L_\epsilon ^{-1} f^{\epsilon } \}_{\epsilon > 0}\) is E-precompact.

Finally, we have to show that \(L^{-1}_\epsilon \mathop {\rightarrow }\limits ^{EE} L^{-1}_0\). For this, let us suppose

$$\begin{aligned} f^\epsilon \mathop {\rightarrow }\limits ^{E} f_0. \end{aligned}$$
(5.3)

Due to (4.1) and (5.3), we have for any \(\varphi \in L^2(0,1)\) that

$$\begin{aligned} \int \limits _{\Omega ^\epsilon } \left\{ f^\epsilon (x_1,x_2) \!-\! f_0(x_1) \right\} \varphi (x_1) \, \hbox {d}x_1 \hbox {d}x_2 \!=\! \int \limits _0^1 \left\{ \hat{f}^\epsilon (x) \!-\! \left( H_\epsilon (x) \!+\! G_\epsilon (x) \right) f_0(x) \right\} \varphi (x) \, \hbox {d}x \!\rightarrow \! 0, \end{aligned}$$

as \(\epsilon \rightarrow 0\). Hence, since \(\left( H_\epsilon + G_\epsilon \right) f_0 \rightharpoonup p f_0\), \(w^* - L^\infty (0,1)\), see Remark 4.2, we can conclude \(\hat{f}^\epsilon \rightharpoonup p f_0\), \(w^* - L^\infty (0,1)\). Thus, it follows from Theorem 4.1 and Remark 4.2 that \(L^{-1}_\epsilon f^\epsilon \rightarrow L^{-1}_0 f_0\), and then \(L^{-1}_\epsilon \mathop {\rightarrow }\limits ^{EE} L^{-1}_0\) as \(\epsilon \rightarrow 0\). \(\square \)

Now, let us take the positive coefficient \(p(x)\) from (4.4) and consider the operator \(M_\epsilon : L^r(\Omega ^\epsilon ) \mapsto L^r(0,1)\), \(1 \le r \le \infty \), given by

$$\begin{aligned} (M_\epsilon f^\epsilon )(x) = \frac{1}{p(x)} \int \limits _{-G_\epsilon (x)}^{H_\epsilon (x)} f^\epsilon (x, s) \, \hbox {d}s \quad x \in (0,1). \end{aligned}$$

It is easy to see that \(M_\epsilon \) is a well-defined bounded linear operator with

$$\begin{aligned} \Vert M_\epsilon f^\epsilon \Vert _{L^r(0,1)} \le C \Vert f^\epsilon \Vert _{L^r(\Omega ^\epsilon )} \end{aligned}$$
(5.4)

for some \(C>0\) depending only on \(r\), \(G_0\), \(H_0\), \(G_1\) and \(H_1\). A similar operator was considered in [3, 4]. We also note that \(M_\epsilon f^\epsilon \) is a multiple (by the factor \(1/p\)) of the function \(\hat{f}^\epsilon \) defined by expression (4.1).
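
For instance, for \(r = 2\) the estimate (5.4) can be obtained directly from the Cauchy–Schwarz inequality: writing \(p_0 := \inf _{x \in (0,1)} p(x) > 0\) (recall that \(p\) is the positive function from (4.4)), we have for a.e. \(x \in (0,1)\)

$$\begin{aligned} |(M_\epsilon f^\epsilon )(x)|^2 \leqslant \frac{H_\epsilon (x) + G_\epsilon (x)}{p(x)^2} \int \limits _{-G_\epsilon (x)}^{H_\epsilon (x)} |f^\epsilon (x, s)|^2 \, \hbox {d}s \leqslant \frac{H_1 + G_1}{p_0^2} \int \limits _{-G_\epsilon (x)}^{H_\epsilon (x)} |f^\epsilon (x, s)|^2 \, \hbox {d}s, \end{aligned}$$

and integrating in \(x\) gives (5.4), with \(C = (H_1+G_1)^{1/2}/p_0\) as one admissible (not necessarily optimal) choice of the constant.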

In this setting, we can appeal once more to Theorem 4.1 to obtain the following result:

Lemma 5.8

Let \(\{ f^\epsilon \} \subset Z_\epsilon \) be a sequence and suppose that \(\Vert f^\epsilon \Vert _{Z_{\epsilon }} \leqslant C\), for some \(C\) independent of \(\epsilon \). Then, there exists a subsequence such that

$$\begin{aligned} \Vert L^{-1}_\epsilon f^\epsilon - E_\epsilon L^{-1}_0 M_\epsilon f^\epsilon \Vert _{Z_\epsilon } \rightarrow 0 \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$

Proof

Since \(f^\epsilon \) is uniformly bounded in \(L^2(\Omega ^\epsilon )\) and \(M_\epsilon \) is a bounded operator, we can extract a subsequence such that \(M_\epsilon f^\epsilon \rightharpoonup f_0\), w-\(L^{2}(0,1)\), for some \(f_0 \in L^2(0,1)\). Then, from Theorem 4.1 and Remark 4.2, we have \(\Vert L^{-1}_\epsilon f^\epsilon - L^{-1}_0 f_{0}\Vert _{L^{2}(\Omega ^\epsilon )} \rightarrow 0\), as \(\epsilon \rightarrow 0\). Finally, since \(L^{-1}_0\) is compact, \(L^{-1}_0 M_\epsilon f^\epsilon \rightarrow L^{-1}_0 f_0\) strongly in \(Z_0\), and the uniform boundedness of \(E_\epsilon \) yields the desired result. \(\square \)

As a consequence of Lemma 5.8, we get the main result of this section, namely, the convergence of the resolvent operators of \(L_\epsilon \) and \(L_0\).

Corollary 5.9

There exist \(\epsilon _0 > 0\), and a function \(\vartheta : (0,\epsilon _0) \mapsto (0,\infty )\), with \(\vartheta (\epsilon ) \rightarrow 0\) as \(\epsilon \rightarrow 0\), such that

$$\begin{aligned} \Vert L^{-1}_\epsilon - E_\epsilon L^{-1}_0 M_\epsilon \Vert _{\mathcal {L}(Z_\epsilon )} \le \vartheta (\epsilon ), \quad \forall \epsilon \in (0,\epsilon _0). \end{aligned}$$

Proof

Let us show it by contradiction. To do so, suppose there exist a \(\delta >0\) and sequences \(\{ \epsilon _n \}_{n \in {\mathbb {N}}} \subset (0,\infty )\), \(\epsilon _n \rightarrow 0\) as \(n \rightarrow \infty \), and \(\{ f^{n} \}_{n \in {\mathbb {N}}} \subset Z_{\epsilon _n}\) with \(\Vert f^{n} \Vert _{Z_{\epsilon _n}} = 1\), such that

$$\begin{aligned} \Vert L^{-1}_{\epsilon _n} f^{n}- E_{\epsilon _n} L^{-1}_0 M_{\epsilon _n} f^{n} \Vert _{Z_{\epsilon _n}} \geqslant \delta , \quad \hbox { for all } n \in {\mathbb {N}}. \end{aligned}$$

On the other hand, from Lemma 5.8, we can extract a subsequence satisfying

$$\begin{aligned} \Vert L^{-1}_{\epsilon _{n_i}} f^{n_i} - E_{\epsilon _{n_i}} L^{-1}_0 M_{\epsilon _{n_i}} f^{n_i} \Vert _{Z_{\epsilon _{n_i}}} \mathop {\longrightarrow }\limits ^{i\rightarrow \infty } 0 \end{aligned}$$

which gives us a contradiction, completing the proof. \(\square \)

Remark 5.10

Note that Corollary 5.7 implies that \(L_\epsilon \) satisfies the following condition

(C) \(L_\epsilon \) is a closed operator, has compact resolvent, the number zero belongs to its resolvent set \(\rho (L_\epsilon )\) for all \(\epsilon \in [0,1]\), and \(L^{-1}_\epsilon \mathop {\rightarrow }\limits ^{CC} L^{-1}_0\).

It is known that the spectrum of \(L_\epsilon \) or \(L_0\), denoted by \(\sigma (L_\epsilon )\) or \(\sigma (L_0)\), consists only of isolated eigenvalues. Hence, if we consider an isolated point \(\lambda _{0} \in \sigma (L_0)\) and its generalized eigenspace \(W(\lambda _{0},L_0) = Q(\lambda _0,L_0) Z_0\), where

$$\begin{aligned} Q(\lambda _{0},L_0) = \frac{1}{2 \pi i} \int \limits _{S_\delta } (\xi \, I - L_0)^{-1} d\xi , \end{aligned}$$

\(S_\delta = \{ \xi \in {\mathbb {C}}\, | \, |\xi - \lambda _{0}| = \delta \}\) and \(\delta \) is chosen small enough such that there is no other point of \(\sigma (L_0)\) in the disc \(\{ \xi \in {\mathbb {C}}\, | \, |\xi - \lambda _{0}| \le \delta \}\), then, by condition (C) and [3, Lemma 4.9], we have that there exists \(\epsilon _0 > 0\) such that \(\rho (L_\epsilon ) \supset S_\delta \) for all \(\epsilon \in (0, \epsilon _0)\). Thus, we can denote by \(W(\lambda _{0},L_\epsilon ) = Q(\lambda _{0},L_\epsilon ) Z_\epsilon \) where

$$\begin{aligned} Q(\lambda _{0},L_\epsilon ) = \frac{1}{2 \pi i} \int \limits _{S_\delta } (\xi \, I - L_\epsilon )^{-1} d\xi . \end{aligned}$$

Remark 5.11

Moreover, the following statements about the spectral convergence of the operators \(L_\epsilon \) follow from condition (C) and [3, Lemma 4.10]:

  (i) For any \(\lambda _0 \in \sigma (L_0 )\), there is a sequence \(\lambda _{\epsilon } \in \sigma (L_\epsilon )\) such that \(\lambda _\epsilon \rightarrow \lambda _0\) as \(\epsilon \rightarrow 0\).

  (ii) If \(\lambda _\epsilon \rightarrow \lambda _0\), with \(\lambda _\epsilon \in \sigma (L_\epsilon )\), then \(\lambda _0 \in \sigma (L_0 )\).

  (iii) There is \(\epsilon _0>0\) such that \(\dim {W(\lambda _{0},L_\epsilon )}= \dim {W(\lambda _{0},L_0 )}\) for all \(0 < \epsilon \leqslant \epsilon _0\).

  (iv) For any \(u \in W(\lambda _0,L_0 )\), there is a sequence \({u^{\epsilon }} \in W(\lambda _0,L_\epsilon )\) such that \({u^{\epsilon }} \overset{E}{\longrightarrow } u\).

  (v) If \(u^\epsilon \in W(\lambda _{0},L_\epsilon )\) satisfies \(\left\Vert u^\epsilon \right\Vert_{Z_{\epsilon }}=1\), then \(\{ u^\epsilon \}\) has an \(E\)-convergent subsequence and any limit point of this sequence belongs to \(W(\lambda _0,L_0 )\).

Finally, we note that the first eigenvalue of both \(L_\epsilon \) and \(L_0\) is \(1\), with associated normalized eigenfunctions given by the constants \(|\Omega ^\epsilon |^{-1/2}\) and \(( \int \nolimits _0^1 p(x) \, \hbox {d}x )^{-1/2}\), respectively, and \(|\Omega ^\epsilon |^{-1/2} \rightarrow ( \int \nolimits _0^1 p(x) \, \hbox {d}x )^{-1/2}\) as \(\epsilon \rightarrow 0\) by Remark 4.2.
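
This can be checked directly from (5.1) and (5.2): constant functions \(c\) satisfy the boundary conditions and are eigenfunctions of both \(L_\epsilon \) and \(L_0\) associated with the eigenvalue \(1\), and the normalization conditions read

$$\begin{aligned} \Vert c \Vert _{Z_\epsilon }^2 = c^2 \, |\Omega ^\epsilon | = 1 \quad \hbox { and } \quad \Vert c \Vert _{Z_0}^2 = c^2 \int \limits _0^1 p(x) \, \hbox {d}x = 1, \end{aligned}$$

which give the constants above; moreover, \(1\) is the smallest point of the spectrum, since both operators are bounded below by the identity.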

Now we are in a position to discuss the convergence properties of the linear semigroups generated by the operators \(L_\epsilon \) and \(L_0\) considered in (5.1) and (5.2), respectively. We proceed here as in [5, 6]. Using standard arguments, discussed for example in [34], it is easy to see that there exists \(\epsilon _0 > 0\) such that the numerical ranges of the operators \(-L_\epsilon \) are contained in \((-\infty , -1] \subset {\mathbb {C}}\) for all \(\epsilon \in (0,\epsilon _0)\). Thus, we get from [34, Theorem 3.9] that there exist \(M>0\) and \(\frac{\pi }{2}<\phi <\pi \), independent of \(\epsilon \), such that

$$\begin{aligned} \Vert \left( \mu + L_\epsilon \right) ^{-1}\Vert _{\mathcal {L}(Z_{\epsilon })} \leqslant \frac{M}{|\mu + 1|}, \quad \forall \mu \in \Sigma _{-1 , \phi }, \end{aligned}$$
(5.5)

where \(\Sigma _{-1 , \phi }=\{\mu \in {\mathbb {C}}\, | \, 0<|\mathrm {arg}(\mu + 1)|\leqslant \phi \}\). Here, and in what follows, we understand \(Z_{\epsilon }\) as \(Z_0\) when \(\epsilon =0\). Hence, the operators \(L_\epsilon \) are sectorial for all \(\epsilon \in [0,\epsilon _{0}]\), with estimates for the resolvent operators \(\left( \mu - L_\epsilon \right) ^{-1}\) on the sector \({\mathbb {C}}\backslash \Sigma _{1 , \pi -\phi }\) that are uniform in \(\epsilon \).
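
The assertion on the numerical range can be verified directly from (5.1): for \(u \in \mathcal {D}(L_\epsilon )\), integrating by parts and using the boundary condition in \(\mathcal {D}(L_\epsilon )\), we get

$$\begin{aligned} ( L_\epsilon u, u )_\epsilon = \left\| \frac{\partial u}{\partial x_{1}}\right\| _{Z_{\epsilon }}^2 + \frac{1}{\epsilon ^2} \left\| \frac{\partial u}{\partial x_{2}}\right\| _{Z_{\epsilon }}^2 + \left\| u \right\| _{Z_{\epsilon }}^2 = \Vert u\Vert _{Z_{\epsilon }^{1/2}}^2 \geqslant \Vert u \Vert _{Z_\epsilon }^2 \end{aligned}$$

for every \(\epsilon > 0\), which places the numerical range of \(-L_\epsilon \) inside \((-\infty ,-1]\).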

We also get from Remark 5.10 that, if \(\lambda \in \rho (L_0)\), there exists \(\epsilon _0 > 0\) such that \(\lambda \in \rho (L_\epsilon )\) for all \(0 \leqslant \epsilon < \epsilon _0\), and so, we can use the resolvent identity given by [5, Lemma 3.5] to obtain

$$\begin{aligned}&(\lambda - L_\epsilon )^{-1} - E_\epsilon (\lambda - L_0)^{-1} M_\epsilon = [I - \lambda (\lambda - L_\epsilon )^{-1}]\\&[E_{\epsilon }L_0^{-1} M_\epsilon - L_\epsilon ^{-1}] [I - \lambda E_\epsilon (\lambda - L_0)^{-1} M_\epsilon ]. \end{aligned}$$

Consequently, since (5.5) implies

$$\begin{aligned}&\Vert I - \lambda (\lambda - L_\epsilon )^{-1} \Vert _{\mathcal {L}(Z_\epsilon )} \le 1 + M, \\&\Vert I - \lambda E_\epsilon (\lambda - L_0)^{-1} M_\epsilon \Vert _{\mathcal {L}(Z_\epsilon )} \le 1 + \Vert E_\epsilon \Vert \, \Vert M_\epsilon \Vert \, M, \end{aligned}$$

we have by Corollary 5.9 that there exists \(\vartheta :(0,\epsilon _0)\rightarrow {\mathbb {R}}^+\), \(\vartheta (\epsilon )\rightarrow 0\) as \(\epsilon \rightarrow 0\), such that

$$\begin{aligned} \Vert (\lambda - L_\epsilon )^{-1} - E_\epsilon (\lambda - L_0)^{-1} M_\epsilon \Vert _{\mathcal {L}(Z_{\epsilon })} \leqslant \vartheta (\epsilon ). \end{aligned}$$
(5.6)

Moreover, if \(\{ {\mathrm {e}}^{-L_\epsilon t} \, | \, t\geqslant 0 \}\) denotes the exponentially decaying analytic semigroup in \(Z_\epsilon \) generated by the sectorial operator \(L_\epsilon \), then we obtain from [29, Theorem 1.4.3] that for any \(0< \omega < 1\), there exists a constant \(C=C(\omega )\), independent of \(\epsilon \), such that

$$\begin{aligned} \Vert {\mathrm {e}}^{-L_\epsilon t} \Vert _{\mathcal {L}(Z_{\epsilon },Z_{\epsilon }^\alpha )} \leqslant C \, t^{-\alpha }\, {\mathrm {e}}^{-\omega t} \hbox { for all } t>0, \; 0\leqslant \alpha \leqslant 1 \hbox { and } 0 \leqslant \epsilon \leqslant \epsilon _{0}. \end{aligned}$$
(5.7)

Finally, the continuity of the resolvent operators allows us to obtain the continuity of the linear semigroups associated with the family of sectorial operators \(\{ L_\epsilon \}_{\epsilon \ge 0}\) in appropriate spaces.

Theorem 5.12

Suppose \(0 \leqslant \alpha < \frac{1}{2}\). Then there exists a function \(\vartheta _\alpha :(0, \epsilon _0] \mapsto (0,\infty )\), \(\vartheta _\alpha (\epsilon ) \rightarrow 0\), as \(\epsilon \rightarrow 0\), such that

$$\begin{aligned} \Vert \mathrm{e}^{-{L_\epsilon } t} - E_{\epsilon } \mathrm{e}^{-L_0 t} M_\epsilon \Vert _{\mathcal {L}(Z_{\epsilon },Z_{\epsilon }^{\alpha })} \leqslant \vartheta _ \alpha (\epsilon ) \mathrm{e}^{-\omega t} t^{\alpha -1}, \quad \hbox { for all } t > 0. \end{aligned}$$

Consequently, there exists a constant \(K>0\), independent of \(\epsilon \), such that

$$\begin{aligned} \Vert P_{\epsilon } \mathrm{e}^{-{L_\epsilon } t} - \mathrm{e}^{-L_0 t} M_\epsilon \Vert _{\mathcal {L}(L^2(\Omega ^\epsilon ),H^{2\alpha }(\widetilde{\Omega }^\epsilon ))} \le K \vartheta _\alpha (\epsilon ) \mathrm{e}^{-\omega t} t^{{\alpha -1}} \hbox { for all } t > 0. \end{aligned}$$

Proof

For a sectorial operator such as \({L_\epsilon }\), it is known that, for any \(0<\bar{\omega }< 1\),

$$\begin{aligned} \hbox {e}^{(-{L_\epsilon } + \bar{\omega } I)t} = \frac{1}{2\pi i} \underset{\tilde{\Gamma }}{\int }\hbox {e}^{(\mu + \bar{\omega }) t} (\mu +\bar{\omega } + {L_\epsilon } - \bar{\omega } )^{-1} \mathrm {d}\mu , \end{aligned}$$

where \(\tilde{\Gamma }\) is the oriented border of the sector \(\Sigma _{-1,\phi }=\{\mu \in {\mathbb {C}}: |\mathrm {arg}(\mu +1)|\le \phi \}\), \(\frac{\pi }{2} <\phi < \pi \), such that the imaginary part of \(\mu \) increases when \(\mu \) describes the curve \(\tilde{\Gamma }\). We perform the change of variable \(\mu +\bar{\omega } \mapsto \mu \) and set \({B_{\epsilon }} := {L_\epsilon } - \bar{\omega }\) in order to evaluate

$$\begin{aligned} 2 \pi \Vert \hbox {e}^{-B_{\epsilon } t} u^\epsilon \!-\! E_{\epsilon } \hbox {e}^{-B_0 t} M_\epsilon u^\epsilon \Vert _{Z_{\epsilon }^{\alpha }} \!=\! \left\| \underset{\Gamma _0}{\int }\hbox {e}^{\mu t} [(\mu +{B_\epsilon })^{-1} u^\epsilon \!-\! E_{\epsilon } (\mu +B_0)^{-1} M_\epsilon u^\epsilon ] \mathrm {d} \mu \right\| _{Z_{\epsilon }^{\alpha }}\nonumber \\ \end{aligned}$$
(5.8)

where \(\Gamma _0\) is the border of \(\Sigma _{0,\phi }\). For this, let us first collect some estimates involving \(B_\epsilon \).

Due to (5.5), we get for all \(\mu \in \Gamma _0\) and \(\epsilon \in [0,\epsilon _0]\) that \(\Vert (\mu +{B_\epsilon })^{-1}\Vert _{\mathcal {L}(Z_{\epsilon })} \le {\frac{C}{|\mu |}}\), and then,

$$\begin{aligned} \Vert (\mu +{B_\epsilon })^{-1} u^\epsilon - E_{\epsilon } (\mu +B_0)^{-1} M_\epsilon u^\epsilon \Vert _{Z_{\epsilon }}&\le \frac{C + \Vert E_\epsilon \Vert \, \Vert M_\epsilon \Vert }{|\mu |} \Vert u^\epsilon \Vert _{Z_\epsilon } \nonumber \\&\le \frac{C_1}{|\mu |} \Vert u^\epsilon \Vert _{Z_\epsilon }. \end{aligned}$$
(5.9)

We also have that

$$\begin{aligned} \Vert B_{\epsilon } (\mu +{B_\epsilon })^{-1} u^\epsilon \Vert _{Z_\epsilon }&= \Vert (I-\mu (\mu +{B_\epsilon })^{-1}) u^\epsilon \Vert _{Z_\epsilon } \\&\le \Vert u^\epsilon \Vert _{Z_\epsilon } + |\mu | \Vert (\mu +{B_\epsilon })^{-1} u^\epsilon \Vert _{Z_\epsilon } \\&\le (1 + C) \Vert u^\epsilon \Vert _{Z_\epsilon }. \end{aligned}$$

Now, using Moment’s Inequality from [29, Section 1.4], we get

$$\begin{aligned} \Vert B_{\epsilon }^{1/2} (\mu +{B_\epsilon })^{-1} u^\epsilon \Vert _{Z_{\epsilon }}&\le \Vert (\mu +{B_\epsilon })^{-1} u^\epsilon \Vert _{Z_{\epsilon }}^{{1/2}} \, \Vert (\mu +{B_\epsilon })^{-1} u^\epsilon \Vert _{Z_{\epsilon }^1}^{{1/2}} \\&\le \frac{C^{{1/2}}}{|\mu |^{{1/2}}} (1 + C)^{1/2} \Vert u^\epsilon \Vert _{Z_\epsilon } . \end{aligned}$$

Consequently, since for each \(u^\epsilon \in Z_{\epsilon }\), \((\mu +B_{0})^{-1} M_\epsilon u^\epsilon \in \mathcal {D}(L_0) \subset H^2(0,1)\), we also obtain,

$$\begin{aligned} \Vert B_{\epsilon }^{1/2} E_{\epsilon }(\mu +B_{0})^{-1} M_\epsilon u^\epsilon \Vert _{Z_{\epsilon }}&\le (H_1+G_1)^{1/2} \Vert B_{0}^{1/2} (\mu +B_{0})^{-1} M_\epsilon u^\epsilon \Vert _{Z_{0}} \\&\le (H_1+G_1)^{1/2} \frac{C^{1/2}}{|\mu |^{1/2}} (1 + C)^{1/2} \Vert M_\epsilon \Vert \, \Vert u^\epsilon \Vert _{Z_\epsilon }. \end{aligned}$$

Thus, we can conclude that

$$\begin{aligned} \Vert (\mu +{B_\epsilon })^{-1} u^\epsilon - E_{\epsilon } (\mu +B_0)^{-1} M_\epsilon u^\epsilon \Vert _{Z_{\epsilon }^{1/2}} \le \frac{C_2}{|\mu |^{1/2}} \Vert u^\epsilon \Vert _{Z_\epsilon }. \end{aligned}$$
(5.10)

Next let us denote \(x=(\mu +{B_\epsilon })^{-1} u^\epsilon - E_{\epsilon } (\mu +B_0)^{-1} M_\epsilon u^\epsilon \). Again using Moment’s Inequality

$$\begin{aligned} \Vert x\Vert _{Z_{\epsilon }^\alpha }&\le C_{3} \Vert x\Vert _{Z_{\epsilon }^{1/2}}^{2\alpha } \Vert x\Vert _{Z_{\epsilon }}^{1-2\alpha } . \end{aligned}$$

Therefore, due to estimates (5.6), (5.9) and (5.10), we get for \(0 \le \alpha \le 1/2\) that

$$\begin{aligned} \Vert (\mu +{B_\epsilon })^{-1} - E_{\epsilon } (\mu +B_0)^{-1} M_\epsilon \Vert _{\mathcal {L}(Z_{\epsilon },Z_{\epsilon }^\alpha )}\leqslant \frac{C_{3} \, \vartheta (\epsilon )^{(1 - 2 \alpha )}}{| \mu |^{\alpha }}. \end{aligned}$$
(5.11)

Now performing the change of variable \(\beta = \mu t\) in the integral given by (5.8), we get

$$\begin{aligned} \left\| \underset{\Gamma _0}{\int }\hbox {e}^\beta \left[ \left( {\beta t^{-1}}+{B_{\epsilon }}\right) ^{-1} u^\epsilon - E_{\epsilon } \left( {\beta t^{-1}}+B_0 \right) ^{-1} M_\epsilon u^\epsilon \right] \frac{\mathrm {d}\beta }{t} \right\| _{Z_{\epsilon }^{\alpha }}. \end{aligned}$$

Hence, it follows from (5.11) that

$$\begin{aligned} \;\Big \Vert t^{-1} \underset{\Gamma _0}{\int }\hbox {e}^\beta \big [\left( {\beta t^{-1}}+{B_{\epsilon }}\right) ^{-1}&- E_{\epsilon } \left( {\beta t^{-1}}+B_0\right) ^{-1} M_\epsilon \big ]\mathrm {d} \beta \Big \Vert _{\mathcal {L}(Z_\epsilon ,Z_{\epsilon }^{\alpha })} \\&\le C_{3} \, t^{{\alpha -1}} \vartheta (\epsilon )^{(1 - 2 \alpha )} \underset{\Gamma _0}{\int }\frac{|\hbox {e}^\beta |}{|\beta |^{\alpha }} \mathrm {d} |\beta |, \end{aligned}$$

and then,

$$\begin{aligned} \Vert \hbox {e}^{-{B_{\epsilon }} t} - E_{\epsilon } \hbox {e}^{-{B_0} t} M_\epsilon \Vert _{\mathcal {L}(Z_\epsilon ,Z_{\epsilon }^{\alpha })} \le C_{4} t^{{\alpha -1}} \vartheta (\epsilon )^{(1 - 2 \alpha )}, \quad t > 0. \end{aligned}$$

Consequently, for all \(\alpha \in [0,{1/2})\) and \(\omega \in (0,1)\), there exists a function \(\vartheta _\alpha : (0,\epsilon _0] \rightarrow {\mathbb {R}}^+\) with \(\vartheta _\alpha (\epsilon ) \overset{\epsilon \rightarrow 0}{\longrightarrow }0\) such that

$$\begin{aligned} \Vert \hbox {e}^{-{L_\epsilon } t} - E_{\epsilon } \hbox {e}^{-L_0 t} M_\epsilon \Vert _{\mathcal {L}(Z_{\epsilon },Z_{\epsilon }^{\alpha })} \le \vartheta _\alpha (\epsilon ) \hbox {e}^{-\omega t} t^{{\alpha -1}} \hbox { for all } t > 0. \end{aligned}$$

Finally, we conclude the proof by noting that Remark 5.1 implies the existence of \(K\) such that

$$\begin{aligned} \Vert P_{\epsilon } \hbox {e}^{-{L_\epsilon } t} - \hbox {e}^{-L_0 t} M_\epsilon \Vert _{\mathcal {L} (Z_\epsilon ,H^{2\alpha }(\widetilde{\Omega }^\epsilon ))}&= \Vert P_{\epsilon } \hbox {e}^{-{L_\epsilon } t} - P_{\epsilon } E_{\epsilon } \hbox {e}^{-L_0 t} M_\epsilon \Vert _{\mathcal {L}(Z_\epsilon ,H^{2\alpha }(\widetilde{\Omega }^\epsilon ))} \nonumber \\&\leqslant \Vert P_{\epsilon }\Vert _{\mathcal {L}(Z_{\epsilon }^\alpha ,H^{2\alpha }(\widetilde{\Omega }^\epsilon ))} \Vert \hbox {e}^{-{L_\epsilon } t} - E_{\epsilon } \hbox {e}^{-L_0 t} M_\epsilon \Vert _{\mathcal {L}(Z_\epsilon ,Z_\epsilon ^\alpha )} \nonumber \\&\leqslant K \Vert \hbox {e}^{-{L_\epsilon } t} - E_{\epsilon } \hbox {e}^{-L_0 t} M_\epsilon \Vert _{\mathcal {L}(Z_\epsilon ,Z_\epsilon ^\alpha )}. \end{aligned}$$
(5.12)

\(\square \)

Corollary 5.13

Suppose \(0\leqslant \alpha < {1/2}\) and \(u^{\epsilon } \mathop {\longrightarrow }\limits ^{E} u\). Then there is a function \(\vartheta :{(0, \epsilon _0]} \mapsto (0,\infty ),\; \vartheta (\epsilon ) \rightarrow 0\), as \(\epsilon \rightarrow 0\), such that

$$\begin{aligned} \left\Vert \mathrm{e}^{-{L_\epsilon } t} u^{\epsilon } - E_{\epsilon } \mathrm{e}^{-L_0 t} u \right\Vert_{Z_{\epsilon }^{\alpha }} \le \vartheta (\epsilon ) \mathrm{e}^{-\omega t} t^{\alpha -1}, \quad \hbox { for all } t > 0. \end{aligned}$$
(5.13)

Proof

It is a direct consequence of Theorem 5.12 and of the estimates (5.7) and (5.4), since

$$\begin{aligned} \Vert \hbox {e}^{-L_\epsilon t} u^{\epsilon } - E_{\epsilon } \hbox {e}^{-L_0 t} u \Vert _{Z_{\epsilon }^{\alpha }} \leqslant \Vert \hbox {e}^{-L_\epsilon t} u^{\epsilon } - E_\epsilon \hbox {e}^{-L_0 t} M_{\epsilon } u^\epsilon \Vert _{Z_{\epsilon }^{\alpha }} + \Vert E_\epsilon \hbox {e}^{-L_0 t} \left( M_{\epsilon } u^\epsilon - u\right) \Vert _{Z_{\epsilon }^{\alpha }}, \end{aligned}$$

and \(M_\epsilon u^\epsilon - u = M_\epsilon \left( u^\epsilon - E_\epsilon u \right) \). \(\square \)

6 Upper semicontinuity of attractors and the set of equilibria

Let \(f:{\mathbb {R}}\mapsto {\mathbb {R}}\) be a bounded \(\mathcal {C}^2\)-function with bounded derivatives up to second order also satisfying the dissipative condition (1.3). Let us also consider the perturbed domain \(\Omega ^\epsilon \) defined in (1.4) by the functions \(G_\epsilon \) and \(H_\epsilon \) introduced in Sect. 2.

In the previous sections, we have studied the behavior of the linear parts of problem (1.5) as \(\epsilon \) tends to zero, and we have proved results on the continuity of the linear semigroups associated with (1.5) and (1.6). It is known that under these growth and dissipative conditions, the solutions of problems (1.5) and (1.6) are globally defined, and so, we can associate with them the nonlinear semigroups \(\{T_\epsilon (t) \, | \, t\ge 0\}\) and \(\{T_0(t) \, | \, t\ge 0\}\), well defined in \(H^{2 \alpha }(\Omega ^\epsilon )\) and \(H^{2 \alpha }(0,1)\), respectively, for all \(0 \le \alpha \le 1/2\) and \(t>0\). These dynamical systems are gradient and possess a family of compact global attractors \(\{ \fancyscript{A}_{\epsilon } \, | \, \epsilon \in [0,\epsilon _0] \} \), \(\fancyscript{A}_{\epsilon } \subset Z_\epsilon \) and \(\fancyscript{A}_0 \subset Z_0\) which lie in more regular spaces, namely \(L^\infty (\Omega ^\epsilon )\) and \(L^\infty (0,1)\). Also, we can rewrite (1.5) and (1.6) in the abstract form

$$\begin{aligned} \left\{ \begin{array}{l} \dot{u}^\epsilon + L_\epsilon u^\epsilon = \hat{f}_\epsilon (u^\epsilon ) \\ u^\epsilon (0) = u_0^\epsilon \in Z_\epsilon ^\alpha \end{array} \right. \quad \hbox { and } \quad \left\{ \begin{array}{l} \dot{u} + L_0 u = \hat{f}_0(u) \\ u(0) = u_0 \in Z_0^\alpha \end{array} \right. \end{aligned}$$

where \(\hat{f}_\epsilon : Z_\epsilon ^\alpha \mapsto Z_\epsilon \), \(u^\epsilon \mapsto f(u^\epsilon )\), is the Nemitskĭi operator defined by \(f\) (see [7, 28]).
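
We note, for later use (cf. the constant \(K\) in the proof of Theorem 6.1 below), that, since \(f\) has bounded derivatives, these Nemitskĭi operators are globally Lipschitz from \(Z_\epsilon \) into \(Z_\epsilon \) (in particular, from \(Z_\epsilon ^\alpha \) into \(Z_\epsilon \)) with a Lipschitz constant independent of \(\epsilon \); indeed, by the mean value theorem,

$$\begin{aligned} \Vert \hat{f}_\epsilon (u^\epsilon ) - \hat{f}_\epsilon (v^\epsilon ) \Vert _{Z_\epsilon }^2 = \int \limits _{\Omega ^\epsilon } | f(u^\epsilon ) - f(v^\epsilon ) |^2 \, \hbox {d}x_1 \hbox {d}x_2 \leqslant \Vert f' \Vert _{L^\infty ({\mathbb {R}})}^2 \, \Vert u^\epsilon - v^\epsilon \Vert _{Z_\epsilon }^2 \end{aligned}$$

for all \(u^\epsilon , v^\epsilon \in Z_\epsilon \), and the same computation, now with the weight \(p\), applies to \(\hat{f}_0\) in \(Z_0\).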

In this section, we relate the continuity of the linear semigroups to the continuity of the nonlinear semigroups using the variation of constants formula, establishing at the end the upper semicontinuity of the family of attractors, as well as the upper semicontinuity of the set of stationary states, at \(\epsilon =0\).

Theorem 6.1

Suppose \(0 \leqslant \alpha < {1/2}\), and let \(u^\epsilon \in Z_\epsilon \) satisfy

$$\begin{aligned} \Vert u^\epsilon \Vert _{Z_\epsilon } \le C \end{aligned}$$
(6.1)

for some positive constant \(C\) independent of \(\epsilon \).

Then, for each \(\tau > 0\), there exists a function \(\bar{\vartheta }_\alpha :(0, \epsilon _0] \rightarrow (0,\infty ),\; \bar{\vartheta }_\alpha (\epsilon ) \rightarrow 0\), as \(\epsilon \rightarrow 0\), such that

$$\begin{aligned} \Vert T_\epsilon (t)u^\epsilon - E_\epsilon T_0(t) M_\epsilon u^\epsilon \Vert _{Z_\epsilon ^\alpha } \le \bar{\vartheta }_\alpha (\epsilon ) t^{{\alpha -1}} \end{aligned}$$
(6.2)

for all \(t \in (0, \tau )\).

Moreover, the family of attractors \(\{ \fancyscript{A}_\epsilon \, | \, \epsilon \in [0, \epsilon _0] \}\) of problems (1.5) and (1.6) is upper semicontinuous at \(\epsilon = 0\) in \(Z_\epsilon ^\alpha \), in the sense that

$$\begin{aligned} \sup _{\varphi ^\epsilon \in \fancyscript{A}_\epsilon } \Big [ \inf _{\varphi \in \fancyscript{A}_0} \left\{ \Vert \varphi ^\epsilon - E_\epsilon \varphi \Vert _{Z_\epsilon ^\alpha } \right\} \Big ] \rightarrow 0, \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
(6.3)

Also, if we call \(\mathcal {E}_\epsilon \) the set of stationary states of problems (1.5), for \(\epsilon \in (0,\epsilon _0]\), and (1.6), for \(\epsilon =0\), then the family of sets \(\{ \mathcal {E}_\epsilon \, | \, \epsilon \in [0, \epsilon _0] \}\) is upper semicontinuous at \(\epsilon =0\), that is,

$$\begin{aligned} \sup _{\varphi ^\epsilon \in \mathcal {E}_\epsilon } \Big [ \inf _{\varphi \in \mathcal {E}_0} \left\{ \Vert \varphi ^\epsilon - E_\epsilon \varphi \Vert _{Z_\epsilon ^\alpha } \right\} \Big ] \rightarrow 0, \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
(6.4)

Consequently, there exists a constant \(K\) independent of \(\epsilon \) such that

$$\begin{aligned} \Vert P_{\epsilon } T_\epsilon (t)u^\epsilon - T_0(t) M_\epsilon u^\epsilon \Vert _{H^{2 \alpha }(\widetilde{\Omega }^\epsilon )} \le K \bar{\vartheta }_\alpha (\epsilon ) t^{{\alpha -1}} \end{aligned}$$
(6.5)

for all \(t \in (0, \tau )\) and all \(0\leqslant \alpha < {1/2}\). Furthermore,

$$\begin{aligned} \sup _{\varphi ^\epsilon \in \fancyscript{A}_\epsilon } \Big [ \inf _{\varphi \in \fancyscript{A}_0} \left\{ \Vert P_{\epsilon } \varphi ^\epsilon - \varphi \Vert _{H^{2\alpha }(\widetilde{\Omega }^\epsilon )} \right\} \Big ] \rightarrow 0, \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$
(6.6)

and

$$\begin{aligned} \sup _{\varphi ^\epsilon \in \mathcal {E}_\epsilon } \Big [ \inf _{\varphi \in \mathcal {E}_0} \left\{ \Vert P_{\epsilon } \varphi ^\epsilon - \varphi \Vert _{H^{2\alpha }(\widetilde{\Omega }^\epsilon )} \right\} \Big ] \rightarrow 0, \hbox { as } \epsilon \rightarrow 0. \end{aligned}$$
(6.7)

Proof

First, we observe that (6.5), (6.6) and (6.7) follow from (6.2), (6.3) and (6.4) arguing as in (5.12). Next, let us show (6.2). Using the variation of constants formula

$$\begin{aligned} T_\epsilon (t)u^\epsilon = \hbox {e}^{-{L_\epsilon } t} u^\epsilon + \int \limits _0^t \hbox {e}^{-{L_\epsilon }(t-s)} \hat{f}_\epsilon ({T_\epsilon (s) u^\epsilon })\;\hbox {d}s, \quad \hbox { for } \epsilon \in [0, 1], \end{aligned}$$

we obtain

$$\begin{aligned}&\Vert {T_\epsilon (t) u_\epsilon } - E_\epsilon {T_0(t)M_\epsilon u^\epsilon } \Vert _{Z_\epsilon ^\alpha } \leqslant \Vert \hbox {e}^{-{L_\epsilon } t} u^\epsilon - E_\epsilon \hbox {e}^{-L_0 t} M_\epsilon u^\epsilon \Vert _{Z_\epsilon ^\alpha } \\&\quad \quad + \int \limits _0^t \Vert \hbox {e}^{-{L_\epsilon }(t-s)} \hat{f}_\epsilon ({T_\epsilon (s) u^\epsilon }) - E_\epsilon \hbox {e}^{-L_0(t-s)} \hat{f}_0({T_0(s)M_\epsilon u^\epsilon }) \Vert _{Z_\epsilon ^\alpha } \hbox {d}s. \end{aligned}$$

It follows from Theorem 5.12 that there exist \(\epsilon _0 > 0\) and \(\vartheta :(0,\epsilon _0] \mapsto (0,\infty )\), \(\vartheta \mathop {\rightarrow }\limits ^{\epsilon \rightarrow 0} 0\), such that

$$\begin{aligned} \Vert \hbox {e}^{-{L_\epsilon } t} - E_{\epsilon } \hbox {e}^{-L_0 t} M_\epsilon \Vert _{\mathcal {L}(Z_\epsilon ,Z_{\epsilon }^{\alpha })} \le \vartheta (\epsilon ) \hbox {e}^{-\omega t} t^{{\alpha -1}}, \hbox { for } t > 0. \end{aligned}$$

Furthermore, we have

$$\begin{aligned} \int \limits _0^t&\Vert \hbox {e}^{-{L_\epsilon }(t-s)} \hat{f}_\epsilon ({T_\epsilon (s)u^\epsilon }) - E_\epsilon {\hbox {e}^{-L_0(t-s)}}\hat{f}_0({T_0(s)M_\epsilon u^\epsilon }) \Vert _{Z_\epsilon ^\alpha } \hbox {d}s \\&\leqslant \int \limits _0^t \Vert \left( {\hbox {e}^{-{L_\epsilon }(t-s)}} - E_\epsilon {\hbox {e}^{-{L_0}(t-s)}} M_\epsilon \right) \hat{f}_\epsilon ({T_{\epsilon }(s) u^\epsilon }) \Vert _{Z^\alpha _\epsilon } \hbox {d}s \\&\quad + \int \limits _0^t \Vert E_\epsilon {\hbox {e}^{-{L_0}(t-s)}} \Big ( M_\epsilon \hat{f}_\epsilon ({T_{\epsilon }(s) u^\epsilon }) - \hat{f}_0({T_0(s)M_\epsilon u^\epsilon }) \Big ) \Vert _{Z^\alpha _\epsilon }\hbox {d}s. \end{aligned}$$

Since \(u^\epsilon \) satisfies (6.1) for all \(\epsilon > 0\), \(T_\epsilon \) is globally defined, and \(f\) is a bounded function, we have that \(\{\hat{f}_\epsilon ({T_{\epsilon }(s) u^\epsilon }) \in Z_\epsilon \, | \, s\in [0,t]\}\) is uniformly bounded. Hence, we obtain by Theorem 5.12 that there exists a constant \(\hat{C}_1 = \hat{C}_1(\tau , C)\) such that

$$\begin{aligned} \int \limits _0^t \Vert&\Big ( {\hbox {e}^{-{L_\epsilon }(t-s)}} - E_\epsilon {\hbox {e}^{-L_0(t-s)}} M_\epsilon \Big ) \hat{f}_\epsilon ({T_\epsilon (s) u^\epsilon })\Vert _{Z^\alpha _\epsilon }\hbox {d}s \\&\leqslant \int \limits _0^t \vartheta _\alpha (\epsilon ) \hbox {e}^{-\omega (t-s)} (t-s)^{{\alpha -1}} \Vert \hat{f}_\epsilon ({T_\epsilon (s)u^\epsilon })\Vert _{Z_{\epsilon }} \hbox {d}s \leqslant \hat{C}_1 \vartheta _\alpha (\epsilon ) t^{\alpha - 1} \ \quad \hbox { for all } t \in (0,\tau ). \end{aligned}$$

If \(K\) is the uniform Lipschitz constant of the Nemitskĭi operator \(\hat{f}_\epsilon \), independent of \(\epsilon \), we can use \(E_{\epsilon } \hat{f}_0 = \hat{f}_\epsilon E_{\epsilon }\) and \(M_\epsilon E_\epsilon = I\) to get

$$\begin{aligned} \int \limits _0^t&\Vert E_\epsilon {\hbox {e}^{-{L_0}(t-s)}} \Big ( M_\epsilon \hat{f}_\epsilon ({T_\epsilon (s) u^\epsilon }) - \hat{f}_0({T_0(s) M_\epsilon u^\epsilon }) \Big ) \Vert _{Z^\alpha _\epsilon }\hbox {d}s\\&= \int \limits _0^t \Vert E_\epsilon {\hbox {e}^{-{L_0}(t-s)}} M_\epsilon \Big ( \hat{f}_\epsilon ({T_\epsilon (s) u^\epsilon }) - \hat{f}_\epsilon (E_\epsilon {T_0(s)M_\epsilon u^\epsilon }) \Big ) \Vert _{Z^\alpha _\epsilon }\hbox {d}s \\&\le \int \limits _0^t \hat{C}_2 \, \Vert E_\epsilon \Vert \, \Vert M_\epsilon \Vert \, K \hbox {e}^{-\omega (t-s)} (t-s)^{-\alpha } \Vert T_\epsilon (s) u^\epsilon - E_\epsilon {T_0(s)M_\epsilon u^\epsilon } \Vert _{Z^\alpha _\epsilon } \, \hbox {d}s, \end{aligned}$$

for some constant \(\hat{C}_2=\hat{C}_2(\omega )\). Hence,

$$\begin{aligned} \varphi (t) \leqslant (1 + \hat{C}_1) \vartheta _\alpha (\epsilon ) t^{{\alpha -1}} + \hat{C}_2 \, \Vert E_\epsilon \Vert \, \Vert M_\epsilon \Vert \, K \int \limits _0^t\;(t- s)^{-\alpha } \varphi (s) \, \hbox {d}s \hbox { on } (0,\tau ), \end{aligned}$$

where \(\varphi (t):= \hbox {e}^{\omega t}\left\Vert {T_\epsilon (t) u^\epsilon } - E_\epsilon {T_0(t)M_\epsilon u^\epsilon } \right\Vert_{Z^\alpha _\epsilon }\). Thus, due to Gronwall’s Inequality from [29, Section 7.1], we get

$$\begin{aligned} \varphi (t) \leqslant \hat{C}_3 \vartheta _\alpha (\epsilon ) t^{{\alpha -1}} \end{aligned}$$

where \(\hat{C}_3 = \hat{C}_3(\hat{C}_1, \hat{C}_2, K, \tau , \Vert E_\epsilon \Vert , \Vert M_\epsilon \Vert )\) is a constant, and so, (6.2) follows.

In order to show the upper semicontinuity of the attractors \(\fancyscript{A}_\epsilon \), we first note that, by the uniform \(L^\infty (\Omega ^\epsilon )\) bounds for the attractors given by [7, Theorem 2.6] and Remark 5.11, together with (5.4), the set \(\bigcup _{0 \le \epsilon \le \epsilon _0} M_\epsilon \fancyscript{A}_\epsilon \) is bounded in \(L^\infty (0,1)\). Then, using the attractivity property of \(\fancyscript{A}_0\) in \(Z_0\), we have that for any \(\eta > 0\) there exists \(\tau > 0\) such that

$$\begin{aligned} \inf _{\varphi \in \fancyscript{A}_0} \Vert T_0(\tau ) M_\epsilon \varphi ^\epsilon - \varphi \Vert _{Z_0^\alpha } \le (H_1+G_1)^{-1/2} \eta /2, \quad \forall \varphi ^\epsilon \in \fancyscript{A}_\epsilon \quad \hbox { and }\quad 0 \le \epsilon \le \epsilon _0. \end{aligned}$$

Thus,

$$\begin{aligned} \inf _{\varphi \in \fancyscript{A}_0} \Vert E_\epsilon T_0(\tau ) M_\epsilon \varphi ^\epsilon - E_\epsilon \varphi \Vert _{Z_\epsilon ^\alpha } \le \eta /2, \quad \forall {\varphi }^\epsilon \in \fancyscript{A}_\epsilon \quad \hbox { and }\quad 0 \le \epsilon \le \epsilon _0. \end{aligned}$$

Now, due to the convergence of the nonlinear semigroups (6.2) with \(t = \tau \), we have that there exists \(\epsilon _1 > 0\) such that for all \(0 \le \epsilon \le \epsilon _1\)

$$\begin{aligned} \Vert T_\epsilon (\tau )\varphi ^\epsilon - E_\epsilon T_0(\tau ) M_\epsilon \varphi ^\epsilon \Vert _{Z_\epsilon ^\alpha } \le \eta /2, \quad \forall \varphi ^\epsilon \in \fancyscript{A}_\epsilon . \end{aligned}$$

Consequently, since \(\fancyscript{A}_\epsilon \) is invariant under the flow, each \(\varphi ^\epsilon \in \fancyscript{A}_\epsilon \) can be written as \(\varphi ^\epsilon = T_\epsilon (\tau ) \psi ^\epsilon \) for some \(\psi ^\epsilon \in \fancyscript{A}_\epsilon \), and so, applying the two previous estimates to \(\psi ^\epsilon \), we get

$$\begin{aligned} \inf _{\varphi \in \fancyscript{A}_0} \Vert \varphi ^\epsilon - E_\epsilon \varphi \Vert _{Z_\epsilon ^\alpha } \le \eta , \quad \forall \varphi ^\epsilon \in \fancyscript{A}_\epsilon \quad \hbox { and }\quad 0 \le \epsilon \le \epsilon _1. \end{aligned}$$

Finally, we show the upper semicontinuity of the set of stationary states \(\mathcal {E}_\epsilon \). Let us use here the characterization discussed after Definition 5.6. First, note that \(u^\epsilon \in \mathcal {E}_\epsilon \) if and only if it satisfies

$$\begin{aligned} \int \limits _{\Omega ^\epsilon } \Big \{ \frac{\partial u^\epsilon }{\partial x_1} \frac{\partial \varphi }{\partial x_1} + \frac{1}{\epsilon ^2} \frac{\partial u^\epsilon }{\partial x_2} \frac{\partial \varphi }{\partial x_2} + u^\epsilon \varphi \Big \} \hbox {d}x_1 \hbox {d}x_2 = \int \limits _{\Omega ^\epsilon } f(u^\epsilon ) \varphi \, \hbox {d}x_1 \hbox {d}x_2, \quad \forall \varphi \in H^1(\Omega ^\epsilon ). \end{aligned}$$
(6.8)

Hence, substituting \(\varphi = u^\epsilon \) in (6.8), we get

$$\begin{aligned} \Big \Vert \frac{\partial u^\epsilon }{\partial x_1} \Big \Vert _{L^2(\Omega ^\epsilon )}^2 + \frac{1}{\epsilon ^2}\Big \Vert \frac{\partial u^\epsilon }{\partial x_2} \Big \Vert _{L^2(\Omega ^\epsilon )}^2 + \Vert u^\epsilon \Vert _{L^2(\Omega ^\epsilon )}^2 \le \Vert f(u^\epsilon ) \Vert _{L^2(\Omega ^\epsilon )} \Vert u^\epsilon \Vert _{L^2(\Omega ^\epsilon )}, \end{aligned}$$

Thus, since \(f\) is bounded, there exists \(C=C(f) > 0\), independent of \(\epsilon > 0\), such that

$$\begin{aligned} \Vert u^\epsilon \Vert _{Z_\epsilon ^{1/2}} \le C. \end{aligned}$$
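
Indeed, the left-hand side of the inequality obtained above by taking \(\varphi = u^\epsilon \) coincides with \(\Vert u^\epsilon \Vert _{Z_\epsilon ^{1/2}}^2\), so that, using \(|\Omega ^\epsilon | \leqslant H_1 + G_1\), one may take, for instance, \(C = \Vert f \Vert _{L^\infty ({\mathbb {R}})} (H_1 + G_1)^{1/2}\), because

$$\begin{aligned} \Vert u^\epsilon \Vert _{Z_\epsilon ^{1/2}}^2 \leqslant \Vert f(u^\epsilon ) \Vert _{L^2(\Omega ^\epsilon )} \, \Vert u^\epsilon \Vert _{L^2(\Omega ^\epsilon )} \leqslant \Vert f \Vert _{L^\infty ({\mathbb {R}})} \, |\Omega ^\epsilon |^{1/2} \, \Vert u^\epsilon \Vert _{Z_\epsilon ^{1/2}} . \end{aligned}$$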

Therefore, given any sequence \(u^\epsilon \in \mathcal {E}_\epsilon \), we obtain from (6.2) that there exist \(u_0 \in \mathcal {E}_0\) and a subsequence, still denoted \(u^\epsilon \), with \(\Vert u^\epsilon - E_\epsilon u_0 \Vert _{Z_\epsilon ^{\alpha }} \rightarrow 0\), as \(\epsilon \rightarrow 0\), for all \(0 \le \alpha < 1/2\).

Indeed, since \(T_\epsilon (t) u^\epsilon = u^\epsilon \) for each \(t>0\), we have

$$\begin{aligned} \Vert u^\epsilon - E_\epsilon T_0(t) M_\epsilon u^\epsilon \Vert _{Z^\alpha _\epsilon } \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$
(6.9)

and then \(T_0(t) M_\epsilon u^\epsilon = M_\epsilon u^\epsilon \) for each \(t>0\), implying that the uniformly bounded sequence \(\{ M_\epsilon u^\epsilon \}_{ \epsilon >0 } \subset Z_0\) converges along a subsequence and that \(\{ u^\epsilon \}\) is \(E\)-convergent, in accordance with (6.9).

Notice that we can take \(u_0 \in Z_0\) to be the limit of such a subsequence of \(\{ M_\epsilon u^\epsilon \}_{\epsilon >0} \subset Z_0\).

Let us show now that \(u_0 \in \mathcal {E}_0\). Using once more \(T_\epsilon (t) u^\epsilon = u^\epsilon \) for any \(t>0\), we have

$$\begin{aligned} \Vert u^\epsilon - E_\epsilon T_0(t) u_0 \Vert _{Z^\alpha _\epsilon } = \Vert T_\epsilon (t) u^\epsilon - E_\epsilon T_0(t) u_0 \Vert _{Z^\alpha _\epsilon } \rightarrow 0, \quad \hbox { as } \epsilon \rightarrow 0, \end{aligned}$$

for any \(t>0\). Thus, \(T_0(t) u_0 = u_0\) for all \(t>0\) and \(u_0 \in \mathcal {E}_0\) completing the proof. \(\square \)