1 Introduction

The emergence and understanding of the theory of maximal regularity for parabolic differential equations, which took place within the last three or so decades, have provided a firm basis for a successful handling of many challenging nonlinear problems. Among them, phase transition issues play a particularly prominent role. The impressive progress which has been made in this field with the help of maximal regularity techniques is well documented in the book by Prüss and Simonett [32]. The reader may also consult the extensive list of references and the ‘Bibliographic Comments’ in [32] for works of other authors and historical developments.

The relevant mathematical setup is usually placed in the framework of parabolic equations in bounded Euclidean domains, the interface being modeled as a hypersurface. In most works known to the author, it is assumed that the interface lies in the interior of the domain. Noteworthy exceptions are the papers by Wilke [37], Prüss et al. [33], Abels et al. [1], and Rauchecker [34] who study various important parabolic free boundary problems, presupposing that the membrane makes a ninety-degree boundary contact. In addition, in all of them, except for [1], a capillary (i.e., cylindrical) geometry is being studied. The same ninety-degree condition is employed by Garcke and Rauchecker [25] who carry out a linearized stability computation at a stationary solution of a Mullins–Sekerka flow in a two-dimensional bounded domain.

The assumption of the ninety-degree contact considerably simplifies the analysis since it allows the use of reflection arguments. This does not apply in the case of general transversal intersection.

The only paper we are aware of in which a general contact angle is considered is the one by Laurençot and Walker [28]. These authors establish the unique solvability in the strong \(L_2\) sense of a two-dimensional stationary transmission problem, taking advantage of a particularly favorable geometric setting.

Elliptic problems with boundary and transmission conditions have also been investigated in a series of papers by Nistor et al. [21, 29,30,31]. The motivation for these works stems from the desire to get optimal convergence rates for approximations used for numerical computations. Although these authors employ weighted \(L_2\) Sobolev spaces, their methods and results are quite different from the ones presented here.

In this paper, we establish the maximal regularity of linear inhomogeneous parabolic transmission boundary value problems for the case where the interface intersects the boundary transversally. This is achieved by allowing the equations to degenerate near the intersection manifold and working in suitable weighted Sobolev spaces. We restrict ourselves to the simplest case of a fixed membrane and a single reaction–diffusion equation.

In a forthcoming publication, we shall use our present result to establish the local well-posedness of quasilinear equations with nonlinear boundary and transmission conditions.

The author is deeply grateful to G. Simonett for carefully reading the first draft of this paper, valuable suggestions, and pointing out misprints, errors, and the above references to related moving boundary problems.

2 The main result

Now we outline—in a slightly sketchy way—the main result of this paper. Precise definitions of notions, facts, and function spaces which we use here without further explanation are given in the subsequent sections.

Let \(\varOmega \) be a bounded domain in \({{\mathbb {R}}}^m\),  \(m\ge 2\), with a smooth boundary \(\varGamma \) such that \(\varOmega \) lies locally on one side of \(\varGamma \). By a membrane in \(\overline{\varOmega }\), we mean a smooth oriented hypersurface S of (the manifold) \(\overline{\varOmega }\) with a (possibly empty) boundary \(\varSigma \) such that \(S\cap \varGamma =\varSigma \). Thus, S lies in \(\varOmega \) if \(\varSigma =\emptyset \). Otherwise, \(\varSigma \) is an \((m-2)\)-dimensional oriented smooth submanifold of \(\varGamma \). In this case, it is assumed that S and \(\varGamma \) intersect transversally. Note that we do not require that S be connected. Hence, even if \(\varSigma \ne \emptyset \), there may exist interior membranes. However, the focus in this paper is on membranes with boundary. Thus, we assume until further notice that \(\varSigma \ne \emptyset \).

We denote by \(\nu \) the inner (unit) normal (vector field) on \(\varGamma \) and by \(\nu _S\) the positive normal on S. (Thus, \(\nu _S(x)\in T_x\overline{\varOmega }=T_x{{\mathbb {R}}}^m=\{x\}\times {{\mathbb {R}}}^m\), the latter being also identified with \(x+{{\mathbb {R}}}^m\subset {{\mathbb {R}}}^m\) for \(x\in S\)). As usual, \(\llbracket \cdot \rrbracket =\llbracket \cdot \rrbracket _S\) is the jump across S. We fix any \(T\in (0,\infty )\) and set \(J=J_T:=[0,T]\).

Of concern in this paper are linear reaction–diffusion equations with nonhomogeneous boundary and transmission conditions of the following form.

Set

$$\begin{aligned} \begin{aligned} {{\mathcal {A}}}u&:=-\mathop {{\mathrm{div}}}\nolimits (a\mathop {{\mathrm{grad}}}\nolimits u),\ {{\mathcal {B}}}u:=\gamma a\partial _\nu u,\\ {{\mathcal {C}}}^0 u&:=\llbracket u \rrbracket ,\ {{\mathcal {C}}}^1u:=\llbracket a\partial _{\nu _S}u \rrbracket ,\ {{\mathcal {C}}}=({{\mathcal {C}}}^0,{{\mathcal {C}}}^1), \end{aligned} \end{aligned}$$

with \(\gamma \) being the trace operator on \(\varGamma \). We assume (for the moment) that \(a\in {{\bar{C}}}^1\bigl ((\overline{\varOmega }\setminus S)\times J\bigr )\) and \(a>0\). A bar over a symbol for a standard function space means that its elements may undergo jumps across S. (The usual definitions based on decompositions of \(\overline{\varOmega }\setminus S\) into ‘inner’ and ‘outer’ domains cannot be used since \(\overline{\varOmega }\setminus S\) may be connected). Then, the problem under investigation reads:

$$\begin{aligned} \begin{aligned} \partial _tu+{{\mathcal {A}}}u&=f \quad \text { on }(\overline{\varOmega }\setminus S)\times J,\\ {{\mathcal {B}}}u&=\varphi \quad \text { on }(\varGamma \setminus \varSigma )\times J,\\ {{\mathcal {C}}}u&=\psi \quad \text { on }(S\setminus \varSigma )\times J,\\ \gamma _0u&=u_0 \quad \text { on }(\overline{\varOmega }\setminus S)\times \{0\},\\ \end{aligned} \end{aligned}$$
(2.1)

where \(\gamma _0\) is the trace operator at \(t=0\).

We are interested in the strong \(L_p\) solvability of (2.1), that is, in solutions possessing second order space derivatives in \(L_p\). However, since S intersects \(\varGamma \), we cannot hope to get solutions which possess this regularity up to \(\varSigma \). Instead, it is to be expected that the derivatives of u blow up as we approach \(\varSigma \). For this reason, we set up our problem in weighted Sobolev spaces where the weights control the behavior of \(\partial ^\alpha u\) for \(0\le |\alpha |\le 2\) in relation to the distance from \(\varSigma \). This requires that the differential operator be adapted to such a setting, which means that the adapted ‘diffusion coefficient’ tends to zero near \(\varSigma \). In other words, we will have to deal with parabolic problems which degenerate near \(\varSigma \). To describe the situation precisely, we introduce curvilinear coordinates near \(\varSigma \) as follows.

Since \(\varSigma \) is an oriented hypersurface in \(\varGamma \), there exists a unique positive normal vector field \(\mu \) on \(\varSigma \) in \(\varGamma \). Given \(\sigma \in \varSigma \), we write \(\mu (\cdot ,\sigma )\) for the unique geodesic in \(\varGamma \) satisfying \(\mu (0,\sigma )=\sigma \) and \({{\dot{\mu }}}(0,\sigma )=\mu (\sigma )\). Similarly, for each \(y\in \varGamma \), we set \(\nu (\xi ,y):=y+\xi \nu (y)\) for \(\xi \ge 0\). Then, we can choose \(\varepsilon \in (0,1)\) and a neighborhood \(\tilde{U}(\varepsilon )\) of \(\varSigma \) in \(\overline{\varOmega }\) with the following properties: for each \(x\in \tilde{U}(\varepsilon )\), there exists a unique triple

$$\begin{aligned} (\xi ,\eta ,\sigma )\in N(\varepsilon )\times \varSigma ,\qquad N(\varepsilon ):=[0,\varepsilon )\times (-\varepsilon ,\varepsilon ), \end{aligned}$$

such that

$$\begin{aligned} x=x(\xi ,\eta ,\sigma ):=\nu \bigl (\xi ,\mu (\eta ,\sigma )\bigr ). \end{aligned}$$
(2.2)

Thus, \(x\in \varGamma \cap \tilde{U}(\varepsilon )\) iff \((\xi ,\eta ,\sigma )\in \{0\}\times (-\varepsilon ,\varepsilon )\times \varSigma \).

Now we define curvilinear derivatives for \(u\in C^2\bigl (\tilde{U}(\varepsilon )\bigr )\) by

$$\begin{aligned} \partial _\nu u(x):=\partial _1(u\circ x)(\xi ,\eta ,\sigma ) ,\quad \partial _\mu u(x):=\partial _2(u\circ x)(\xi ,\eta ,\sigma ) \end{aligned}$$
(2.3)

for \(x\in \tilde{U}(\varepsilon )\). It follows that

$$\begin{aligned} {{\mathcal {A}}}u =-\bigl (\partial _\nu (a\partial _\nu u)+\partial _\mu (a\partial _\mu u) +\mathop {{\mathrm{div}}}\nolimits _\varSigma (a\mathop {{\mathrm{grad}}}\nolimits _\varSigma u)\bigr ) \end{aligned}$$
(2.4)

on \(\tilde{U}(\varepsilon )\), where \(\mathop {{\mathrm{div}}}\nolimits _\varSigma \) and \(\mathop {{\mathrm{grad}}}\nolimits _\varSigma \) denote the divergence and the gradient, respectively, in \(\varSigma \) (with respect to the Riemannian metric \(g_\varSigma \) induced by the one of \(\varGamma \) which, in turn, is induced by the Euclidean metric on \(\overline{\varOmega }\)).

For x given by (2.2), we set

$$\begin{aligned} r(x):=\sqrt{\xi ^2+\eta ^2} ,\qquad (\xi ,\eta )\in N(\varepsilon ), \end{aligned}$$
(2.5)

which is the geodesic distance in \(\overline{\varOmega }\) from x to \(\varSigma \) (and not, in general, the distance in the ambient space \({{\mathbb {R}}}^m\)). We fix \(\omega \in C^\infty \bigl (N(\varepsilon ),[0,1]\bigr )\), depending only on r, such that \(\omega |N(\varepsilon /3)=1\) and \(\mathop {{\mathrm{supp}}}\nolimits (\omega )\subset N(2\varepsilon /3)\) and set

$$\begin{aligned} \rho :=1-\omega +r\omega . \end{aligned}$$
(2.6)
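
For orientation, we record the behavior of \(\rho \); this is immediate from the choice of \(\omega \) (recall that \(\omega \) depends only on r, \(\omega |N(\varepsilon /3)=1\), and \(\mathop {{\mathrm{supp}}}\nolimits (\omega )\subset N(2\varepsilon /3)\)):

$$\begin{aligned} \rho =r\ \text { on }N(\varepsilon /3) ,\qquad \rho =1\ \text { on }N(\varepsilon )\setminus N(2\varepsilon /3) ,\qquad 0<\rho \le 1\ \text { on }N(\varepsilon )\setminus \{(0,0)\}. \end{aligned}$$

Thus, \(\rho \) behaves like the distance r near \(\varSigma \) and equals 1 away from it.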

Then, we define on

$$\begin{aligned} U:=U(\varepsilon ):=\tilde{U}(\varepsilon )\setminus \varSigma \end{aligned}$$
(2.7)

a singular linear reaction–diffusion operator \({{\mathcal {A}}}_U\) by

$$\begin{aligned} {{\mathcal {A}}}_Uu:=-\rho ^2\bigl (\partial _\nu (a\partial _\nu u)+\partial _\mu (a\partial _\mu u)\bigr ) -\mathop {{\mathrm{div}}}\nolimits _\varSigma (a\mathop {{\mathrm{grad}}}\nolimits _\varSigma u) \end{aligned}$$
(2.8)

for \(u\in {{\bar{C}}}^2(U\setminus S)\). The corresponding singular boundary operator is given by

$$\begin{aligned} {{\mathcal {B}}}_Uu:=\gamma a\rho \partial _\nu u. \end{aligned}$$
(2.9)

Since S intersects \(\varGamma \) transversally, it follows that there exists a smooth function \(s:[0,\varepsilon )\times \varSigma \rightarrow (-\varepsilon ,\varepsilon )\) such that \(s(0,\sigma )=0\) for \(\sigma \in \varSigma \) and

$$\begin{aligned} x\in \tilde{U}(\varepsilon )\cap S \quad \text {iff}\quad x=x\bigl (\xi ,s(\xi ,\sigma ),\sigma \bigr ) ,\qquad (\xi ,\sigma )\in [0,\varepsilon )\times \varSigma . \end{aligned}$$
(2.10)

Using this, we associate with \({{\mathcal {A}}}_U\) a transmission operator \({{\mathcal {C}}}_U\) on U by setting

for \(u\in {{\bar{C}}}^2(U\setminus S)\), where \((\cdot |\cdot )_\varSigma =g_\varSigma \) and

$$\begin{aligned} (\nu _S^1,\nu _S^2,\nu _S^3) :=(\partial _\nu s,-1,1)\Big /\sqrt{1+(\partial _\nu s)^2+|\mathop {{\mathrm{grad}}}\nolimits _\varSigma s|_\varSigma ^2}. \end{aligned}$$

Now we define a singular transmission boundary value problem on \(\overline{\varOmega }\setminus S\) by putting \(V:=\overline{\varOmega }\,\big \backslash \tilde{U}(2\varepsilon /3)\) and

$$\begin{aligned} ({{\mathcal {A}}}_r,{{\mathcal {B}}}_r,{{\mathcal {C}}}_r):= \left\{ \begin{array} {ll} ({{\mathcal {A}}},{{\mathcal {B}}},{{\mathcal {C}}}) &{}\quad \text {on }V,\\ ({{\mathcal {A}}}_U,{{\mathcal {B}}}_U,{{\mathcal {C}}}_U) &{}\quad \text {on }U. \end{array} \right. \end{aligned}$$

It follows from (2.4) and the properties of \(\rho \) that this definition is unambiguous.
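
Indeed, points of \(V\cap U\) lie outside \(\tilde{U}(2\varepsilon /3)\), where \(\omega =0\) and hence \(\rho =1\). Consequently, by (2.4),

$$\begin{aligned} {{\mathcal {A}}}_Uu =-\bigl (\partial _\nu (a\partial _\nu u)+\partial _\mu (a\partial _\mu u)\bigr ) -\mathop {{\mathrm{div}}}\nolimits _\varSigma (a\mathop {{\mathrm{grad}}}\nolimits _\varSigma u) ={{\mathcal {A}}}u ,\qquad {{\mathcal {B}}}_Uu=\gamma a\partial _\nu u={{\mathcal {B}}}u \end{aligned}$$

on \(V\cap U\), and the same applies to the transmission operators.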

To introduce weighted Sobolev spaces on \(U\setminus S\), we put

$$\begin{aligned} \begin{aligned} {} \langle u\rangle ^2&:=|u|^2 +|r\partial _\nu u|^2 +|r\partial _\mu u|^2\\&\quad +|(r\partial _\nu )^2u|^2 +|r\partial _\nu (r\partial _\mu u)|^2 +|r\partial _\mu (r\partial _\nu u)|^2 +|(r\partial _\mu )^2u|^2\\&\quad +|\nabla _{\varSigma }u|^2+|\nabla _{\varSigma }^2u|^2, \end{aligned} \end{aligned}$$
(2.11)

where \(\nabla _{\varSigma }\) is the Levi–Civita connection on \(\varSigma \) for the metric \(g_\varSigma \). Also, \(1<p<\infty \) and

$$\begin{aligned} \Vert u\Vert _{{{\bar{W}}}_{p}^2(U\setminus S;r)} :=\Bigl (\int _{U\setminus S}\langle u\rangle ^p\,\frac{d(\xi ,\eta )}{r^2} \,\mathrm{d}\mathop {{\mathrm{vol}}}\nolimits _\varSigma \Bigr )^{1/p}. \end{aligned}$$
(2.12)

Then, \({{\bar{W}}}_{p}^2(U\setminus S;r)\) is the completion of \({{\bar{C}}}^2(U\setminus S)\) in \(L_{1,\mathrm{loc}}(U\setminus S)\) with respect to the norm  \(\Vert \cdot \Vert _{{{\bar{W}}}_{p}^2(U\setminus S;r)}\).
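
The nature of the weight becomes transparent in the polar coordinates \(\xi =r\cos \alpha \), \(\eta =r\sin \alpha \) (so that \(\alpha \in [-\pi /2,\,\pi /2]\); cf. Sect. 5):

$$\begin{aligned} \frac{d(\xi ,\eta )}{r^2}=\frac{r\,dr\,d\alpha }{r^2}=\frac{dr\,d\alpha }{r}. \end{aligned}$$

Thus, near \(\varSigma \), the radial part of the measure in (2.12) is dr/r, matching the scaled derivatives \(r\partial _\nu \) and \(r\partial _\mu \) occurring in (2.11).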

The (global) weighted Sobolev space

$$\begin{aligned} {{\mathcal {X}}}_p^2:={{\bar{W}}}_{p}^2(\overline{\varOmega }\setminus S;r) \end{aligned}$$

consists of all \(u\in L_{1,\mathrm{loc}}(\overline{\varOmega }\setminus S)\) with \(u\big |U\in {{\bar{W}}}_{p}^2(U\setminus S;r)\) and \(u\big |V\in {{\bar{W}}}_{p}^2(V\setminus S)\). It is a Banach space with the norm

$$\begin{aligned} u\mapsto \big \Vert u|U\big \Vert _{{{\bar{W}}}_{p}^2(U\setminus S;r)} +\big \Vert u|V\big \Vert _{{{\bar{W}}}_{p}^2(V\setminus S)}, \end{aligned}$$

whose topology is independent of the specific choice of \(\varepsilon \) and \(\omega \). Similarly, the Lebesgue space

$$\begin{aligned} {{\mathcal {X}}}_p^0:={{\bar{W}}}_{p}^0(\varOmega \setminus S;r) \end{aligned}$$

is obtained by replacing \(\langle u\rangle \) in (2.12) by |u|. Moreover,

$$\begin{aligned} {{\mathcal {X}}}_p^{2-2/p}:= {{\bar{W}}}_{p}^{2-2/p}(\overline{\varOmega }\setminus S;r) :=({{\mathcal {X}}}_p^0,{{\mathcal {X}}}_p^2)_{1-1/p,p}, \end{aligned}$$
(2.13)

where \((\cdot ,\cdot )_{\theta ,p}\) is the real interpolation functor of exponent \(\theta \).

We also need time-dependent anisotropic spaces. For this, we use the notation \(s/\varvec{2}:=(s,\,s/2)\), \(0\le s\le 2\). Then,

$$\begin{aligned} {{\mathcal {X}}}_p^{2/\varvec{2}} :={{\bar{W}}}_{p}^{2/\varvec{2}}\bigl ((\overline{\varOmega }\setminus S)\times J;r\bigr ) :=L_p(J,{{\mathcal {X}}}_p^2)\cap W_{p}^1(J,{{\mathcal {X}}}_p^0) \end{aligned}$$

and \({{\mathcal {X}}}_p^{0/\varvec{2}}:=L_p(J,{{\mathcal {X}}}_p^0)\). If \(X\in \{\varGamma ,S\}\) and \(s\in \{1-1/p,\ 2-1/p\}\), then

$$\begin{aligned} {{\bar{W}}}_{p}^{s/\varvec{2}}\bigl ((X\setminus \varSigma )\times J;r\bigr ) :=L_p\bigl (J,{{\bar{W}}}_{p}^s(X\setminus \varSigma ;r)\bigr ) \cap W_{p}^{s/2}\bigl (J,L_p(X\setminus \varSigma ;r)\bigr ). \end{aligned}$$

Here, the \({{\bar{W}}}_{p}^s(X\setminus \varSigma ;r)\) are trace spaces of \({{\mathcal {X}}}_p^2\) (cf. (14.11) and (14.12)). Moreover,

$$\begin{aligned} \begin{aligned} {{\mathcal {Y}}}_p :={{\bar{W}}}_{p}^{(1-1/p)/\varvec{2}}\bigl ((\varGamma \setminus \varSigma )\times J;r\bigr )&\oplus {{\bar{W}}}_{p}^{(2-1/p)/\varvec{2}}\bigl ((S\setminus \varSigma )\times J;r\bigr )\\&\oplus {{\bar{W}}}_{p}^{(1-1/p)/\varvec{2}}\bigl ((S\setminus \varSigma )\times J;r\bigr ). \end{aligned} \end{aligned}$$

By \({\bar{BC}}(\overline{\varOmega }\setminus S)\), we mean the space of bounded and continuous functions (with possible jumps across S), endowed with the maximum norm. Then, \({\bar{BC}}^1(\overline{\varOmega }\setminus S;r)\) is the Banach space of all \(u\in {\bar{BC}}(\overline{\varOmega }\setminus S)\) with \(\partial _ju\in {\bar{BC}}(V\setminus S)\), \(1\le j\le m\), and

$$\begin{aligned} \rho \partial _\nu u,\rho \partial _\mu u\in {\bar{BC}}(U\setminus S) ,\quad u|\varSigma \in BC^1(\varSigma ). \end{aligned}$$

Furthermore,

$$\begin{aligned} {\bar{BC}}^{1/\varvec{2}}\bigl ((\overline{\varOmega }\setminus S)\times J;r\bigr ) :=C\bigl (J,{\bar{BC}}^1(\overline{\varOmega }\setminus S;r)\bigr ) \cap C^{1/2}\bigl (J,{\bar{BC}}(\overline{\varOmega }\setminus S)\bigr ). \end{aligned}$$

To indicate the nonautonomous structure of (2.1), we write \(a(t):=a(\cdot ,t)\) and, correspondingly, \({{\mathcal {A}}}(t)\), \({{\mathcal {B}}}(t)\), and \({{\mathcal {C}}}(t)\).

Now we are ready to formulate the main result of this paper, the optimal solvability of linear reaction–diffusion transmission boundary value problems:

Theorem 2.1

Let \(1<p<\infty \) with \(p\notin \{3/2,\,3\}\) and

$$\begin{aligned} a\in {\bar{BC}}^{1/\varvec{2}}\bigl ((\overline{\varOmega }\setminus S)\times J;r\bigr ) ,\qquad a\ge \underline{\alpha }, \end{aligned}$$

for some \(\underline{\alpha }\in (0,1)\). Suppose

$$\begin{aligned} \bigl (f,(\varphi ,\psi ^0,\psi ^1),u_0\bigr ) \in {{\mathcal {X}}}_p^{0/\varvec{2}}\oplus {{\mathcal {Y}}}_p\oplus {{\mathcal {X}}}_p^{2-2/p} \end{aligned}$$

and that the following compatibility conditions are satisfied:

$$\begin{aligned} \begin{aligned} {\mathrm{(i)}}\quad {{\mathcal {C}}}_r^0(0)u_0&=\psi ^0(0), \quad \text {if } 3/2<p<3,\\ {\mathrm{(ii)}}\quad {{\mathcal {B}}}_r(0)u_0&=\varphi (0), \ {{\mathcal {C}}}_r(0)u_0=\psi (0), \quad \text {if } p>3, \end{aligned} \end{aligned}$$

where \(\psi :=(\psi ^0,\psi ^1)\). Then,

$$\begin{aligned} \begin{aligned} \partial _tu+{{\mathcal {A}}}_r(t)u&=f \quad \text { on }(\overline{\varOmega }\setminus S)\times J,\\ {{\mathcal {B}}}_r(t)u&=\varphi \quad \text { on }(\varGamma \setminus \varSigma )\times J,\\ {{\mathcal {C}}}_r(t)u&=\psi \quad \text { on }(S\setminus \varSigma )\times J,\\ \gamma _0u&=u_0 \quad \text { on }(\overline{\varOmega }\setminus S)\times \{0\} \end{aligned} \end{aligned}$$
(2.14)

has a unique solution \(u\in {{\mathcal {X}}}_p^{2/\varvec{2}}\). It depends continuously on the data.

Corollary 2.2

Suppose a is independent of t, that is, \(a\in {\bar{BC}}^1(\overline{\varOmega }\setminus S;r)\). Set

$$\begin{aligned} {{\mathcal {X}}}_{p,0}^2:= \left\{ \begin{array}{ll} {{\mathcal {X}}}_p^2, &{}\qquad \, 1<p<3/2,\\ \{\,u\in {{\mathcal {X}}}_p^2\ ;\ {{\mathcal {C}}}^0u=0\,\}, &{}\quad 3/2<p<3,\\ \{\,u\in {{\mathcal {X}}}_p^2\ ;\ {{\mathcal {B}}}u=0,\ {{\mathcal {C}}}u=0\,\}, &{}\qquad \; 3<p<\infty , \end{array} \right. \end{aligned}$$

and \(A_r:={{\mathcal {A}}}_r|{{\mathcal {X}}}_{p,0}^2\). Then, \(-A_r\), considered as a linear operator in \({{\mathcal {X}}}_p^0\) with domain \({{\mathcal {X}}}_{p,0}^2\), generates on \({{\mathcal {X}}}_p^0\) a strongly continuous analytic semigroup.

Proof

The theorem implies that \(A_r\) has the property of maximal \({{\mathcal {X}}}_p^0\) regularity. This fact is well known to imply the claim (e.g., [7, Chapter III] or [23]).\(\square \)

Theorem 2.1 is a special case of the much more general Theorems 7.1 and 14.1. They also include Dirichlet boundary conditions and apply to transmission problems in general Riemannian manifolds with boundary and bounded geometry.

The situation is considerably simpler if \(\varSigma =\emptyset \), that is, if only interior transmission hypersurfaces are present. Of course, if \(S=\emptyset \), then (2.14) reduces to a linear reaction–diffusion equation with inhomogeneous boundary conditions. In these cases, no degeneration occurs.

We refrain from considering operators \(({{\mathcal {A}}}_r,{{\mathcal {B}}}_r)\) with lower order terms. This case will be covered by the forthcoming quasilinear result.

In the case of an interior transmission surface (that is, \(\varSigma =\emptyset \)) and if a is independent of t, Theorem 2.1 is a special case of Theorem 6.5.1 in [32]. The latter theorem applies to systems and provides an \(L_p\)-\(L_q\) theory.

If \(\varSigma \ne \emptyset \), then the basic difficulty in proving Theorem 2.1 stems from the fact that \(\overline{\varOmega }\setminus \varSigma \) and, consequently, \(S\setminus \varSigma \) and \(\varGamma \setminus \varSigma \), are no longer compact. The fundamental observation which makes the proofs work is the fact that we can consider \(\overline{\varOmega }\setminus \varSigma \) as a (noncompact) Riemannian manifold with a metric g which coincides on \(U(\varepsilon /3)\) with the singular metric \(r^{-2}(\mathrm{d}\nu ^2+\mathrm{d}\mu ^2)+g_\varSigma \) and on V with the Euclidean metric. With respect to this metric, \({{\mathcal {A}}}_r\) is a uniformly elliptic operator.
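
A purely formal model computation (for \(a\equiv 1\) and with the flat metric in the \((\nu ,\mu )\) variables; it is not used in the proofs) may illustrate this. Writing \(\hat{g}_2:=r^{-2}(\mathrm{d}\xi ^2+\mathrm{d}\eta ^2)\) on \(N(\varepsilon )\setminus \{(0,0)\}\), we have \(\sqrt{\hat{g}_2}=r^{-2}\) and \(\hat{g}_2^{ij}=r^2\delta ^{ij}\), so that the Laplace–Beltrami operator of \(\hat{g}_2\) is

$$\begin{aligned} \Delta _{\hat{g}_2}u =\frac{1}{\sqrt{\hat{g}_2}}\,\partial _i\bigl (\sqrt{\hat{g}_2}\,\hat{g}_2^{ij}\partial _ju\bigr ) =r^2\,\partial _i\bigl (r^{-2}\,r^2\,\partial _iu\bigr ) =r^2(\partial _\xi ^2+\partial _\eta ^2)u. \end{aligned}$$

The factor \(r^2\) produced by the singular metric is precisely the factor \(\rho ^2\) in (2.8) near \(\varSigma \), whereas the \(\varSigma \)-part of (2.8) carries no such factor, matching the undistorted summand \(g_\varSigma \) of g.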

Theorems 4.4 and 5.1 show that \((\overline{\varOmega }\setminus \varSigma ,\,g)\) is a uniformly regular Riemannian manifold in the sense of [10]. Thus, we are led to consider linear parabolic equations with boundary and transmission conditions on such manifolds. As in the compact case, by means of local coordinates the problem is reduced to Euclidean settings. However, since we have to deal with noncompact manifolds, we have to handle simultaneously infinitely many model problems. In order for this technique to work, we have to establish uniform estimates which are in a suitable sense independent of the specific local coordinates. In addition, special care has to be taken in ‘gluing together the local model problems.’ These issues do not arise in the compact case.

In our earlier paper [12], we have established an optimal existence theory for linear parabolic equations on uniformly regular Riemannian manifolds without boundary. The present proof extends those arguments to the case of manifolds with boundary. The presence of boundary and transmission conditions adds considerably to the complexity of the problem and makes the paper rather heavy.

In Sect. 3, we collect the needed background information. In the subsequent two sections, we establish the differential geometric foundation of transmission surfaces in uniformly regular and singular Riemannian manifolds.

After having introduced the relevant function spaces in Sect. 6, we present in Sect. 7 the basic maximal regularity theorem in anisotropic Sobolev spaces for linear nonautonomous reaction–diffusion equations with nonhomogeneous boundary and transmission conditions on general uniformly regular Riemannian manifolds. Its rather complex proof occupies the next five sections. Finally, in the last section, it is shown that our general results apply to the Euclidean setting presented here.

3 Uniformly regular Riemannian manifolds

In this section, we recall the definition of uniformly regular Riemannian manifolds and collect those properties of which we will make use. Details can be found in [9,10,11], and in the comprehensive presentation [15]. Thus, we shall be rather brief.

We use standard notation from differential geometry and function space theory. In particular, an upper, resp. lower, asterisk on a symbol for a diffeomorphism denotes the corresponding pullback, resp. push-forward (of tensors). By c, resp. \(c(\alpha )\) etc., we denote constants  \(\ge 1\) which can vary from occurrence to occurrence. Assume S is a nonempty set. On the cone of nonnegative functions on S, we define an equivalence relation  \({}\sim {}\) by \(f\sim g\) iff \(f(s)/c\le g(s)\le cf(s)\), \(s\in S\).

An m-dimensional manifold is a separable metrizable space equipped with an m-dimensional smooth structure. We always work in the smooth category.

Let M be an m-dimensional manifold with or without boundary. If \(\kappa \) is a local chart, then we use \(U_{\kappa }\) for its domain, the coordinate patch associated with \(\kappa \). The chart is normalized if \(\kappa (U_{\kappa })=Q_\kappa ^m\), where \(Q_\kappa ^m=(-1,1)^m\) if \(U_{\kappa }\subset \mathring{M}\), the interior of M, and \(Q_\kappa ^m=[0,1)\times (-1,1)^{m-1}\) otherwise. An atlas \({{\mathfrak {K}}}\) is normalized if it consists of normalized charts. It is shrinkable if it is normalized and there exists \(r\in (0,1)\) such that \(\bigl \{\,\kappa ^{-1}(rQ_\kappa ^m)\ ;\ \kappa \in {{\mathfrak {K}}}\,\bigr \}\) is a covering of M. It has finite multiplicity if there exists \(k\in {{\mathbb {N}}}\) such that any intersection of more than k coordinate patches is empty.

The atlas \({{\mathfrak {K}}}\) is uniformly regular (ur) if

$$\begin{aligned} \begin{aligned} {\mathrm{(i)}}\quad&\text {it is shrinkable and has finite multiplicity;}\\ {\mathrm{(ii)}}\quad&{\tilde{\kappa }}\circ \kappa ^{-1}\in BUC^\infty \bigl (\kappa (U_{\kappa \tilde{\kappa }}),{{\mathbb {R}}}^m\bigr ) \text { and }\Vert {\tilde{\kappa }}\circ \kappa ^{-1}\Vert _{k,\infty }\le c(k)\\&\text {for }\kappa ,{\tilde{\kappa }}\in {{\mathfrak {K}}}\text { and }k\in {{\mathbb {N}}}, \text { where }U_{\kappa \tilde{\kappa }}:=U_{\kappa }\cap U_{\tilde{\kappa }}. \end{aligned} \end{aligned}$$
(3.1)

Two ur atlases \({{\mathfrak {K}}}\) and \(\tilde{{{\mathfrak {K}}}}\) are equivalent if

$$\begin{aligned} \begin{aligned} {\mathrm{(i)}}\quad&\text {there exists }k\in {{\mathbb {N}}}\text { such that each coordinate patch of }{{\mathfrak {K}}}\\&\text {meets at most }k\text { coordinate patches of }\tilde{{{\mathfrak {K}}}}\text {, and vice versa;}\\ {\mathrm{(ii)}}\quad&\text {condition}~(3.1)\text {(ii) holds for all } (\kappa ,\tilde{\kappa })\text { and }(\tilde{\kappa },\kappa )\text { belonging to } {{\mathfrak {K}}}\times \tilde{{{\mathfrak {K}}}}. \end{aligned} \end{aligned}$$
(3.2)

This defines an equivalence relation in the class of all ur atlases. An equivalence class thereof is a ur structure. By a ur manifold, we mean a manifold equipped with a ur structure. A Riemannian metric g on a ur manifold M is ur if, given a ur atlas \({{\mathfrak {K}}}\),

$$\begin{aligned} \begin{aligned} {\mathrm{(i)}}\quad&\kappa _*g\sim g_m,\ \kappa \in {{\mathfrak {K}}};\\ {\mathrm{(ii)}}\quad&\Vert \kappa _*g\Vert _{k,\infty }\le c(k),\ \kappa \in {{\mathfrak {K}}},\ k\in {{\mathbb {N}}}. \end{aligned} \end{aligned}$$
(3.3)

Here, \(g_m:=(\cdot |\cdot )=\mathrm{d}x^2\) is the Euclidean metric on \({{\mathbb {R}}}^m\) and (i) is understood in the sense of quadratic forms. This concept is well-defined, independently of the specific \({{\mathfrak {K}}}\). A uniformly regular Riemannian (urR) manifold is a ur manifold, endowed with a urR metric.

Remarks 3.1

  (a)

    Given a (nonempty) subset S of M and an atlas \({{\mathfrak {K}}}\),

    $$\begin{aligned} {{\mathfrak {K}}}_S:=\{\,\kappa \in {{\mathfrak {K}}}\ ;\ U_{\kappa }\cap S\ne \emptyset \,\}\ . \end{aligned}$$

    We say that \({{\mathfrak {K}}}\) is normalized on S, resp. has finite multiplicity on S, resp. is shrinkable on S if \({{\mathfrak {K}}}_S\) possesses the respective properties. Moreover, \({{\mathfrak {K}}}\) is ur on S if (3.1) applies with \({{\mathfrak {K}}}\) replaced by \({{\mathfrak {K}}}_S\). Similarly, two atlases \({{\mathfrak {K}}}\) and \(\tilde{{{\mathfrak {K}}}}\), which are ur on S, are equivalent on S if (3.2) holds with \({{\mathfrak {K}}}\) and \(\tilde{{{\mathfrak {K}}}}\) replaced by \({{\mathfrak {K}}}_S\) and \(\tilde{{{\mathfrak {K}}}}_S\), respectively. This induces a ur structure on S. Finally, M is ur on S if it is equipped with a ur structure on S.

  (b)

    Suppose \({{\mathfrak {K}}}\) is a ur atlas for M on S. Given any \(\varepsilon >0\), there exists a ur atlas \({{\mathfrak {K}}}'\) on S such that \(\mathop {{\mathrm{diam}}}\nolimits _g(U_{\kappa })<\varepsilon \) for \(\kappa \in {{\mathfrak {K}}}'\), where \(\mathop {{\mathrm{diam}}}\nolimits _g\) is the diameter with respect to the Riemannian distance \(d_g\). \(\square \)

In the following examples, we use the natural ur structure (e.g., the product ur structure in Example 3.2(c)) if nothing else is mentioned.

Example 3.2

  (a)

    Each compact Riemannian manifold is a urR manifold, and its ur structure is unique.

  (b)

    Let \(\varOmega \) be a bounded domain in \({{\mathbb {R}}}^m\) with a smooth boundary such that \(\varOmega \) lies locally on one side of it. Then, \((\overline{\varOmega },g_m)\) is a urR manifold.

More generally, suppose that \(\varOmega \) is an unbounded open subset of \({{\mathbb {R}}}^m\) whose boundary is ur in the sense of Browder [20] (also see [27, IV.§4]). Then, \((\overline{\varOmega },g_m)\) is a urR manifold. In particular, \(({{\mathbb {R}}}^m,g_m)\) and \(({{\mathbb {H}}}^m,g_m)\) are urR manifolds, where \({{\mathbb {H}}}^m:={{\mathbb {R}}}_+\times {{\mathbb {R}}}^{m-1}\) is the closed right half-space in \({{\mathbb {R}}}^m\).

  (c)

    If \((M_i,g_i)\), \(i=1,2\), are urR manifolds and at most one of them has a nonempty boundary, then \((M_1\times M_2,\ g_1+g_2)\) is a urR manifold.

  (d)

    Assume M is a manifold and N a topological space. Let \(f:N\rightarrow M\) be a homeomorphism. If \({{\mathfrak {K}}}\) is an atlas for M, then \(f^*{{\mathfrak {K}}}:=\{\,f^*\kappa \ ;\ \kappa \in {{\mathfrak {K}}}\,\}\) is an atlas for N which induces the smooth ‘pullback’ structure on N. If \({{\mathfrak {K}}}\) is ur, then \(f^*{{\mathfrak {K}}}\) also is ur.

Let \((M,g)\) be a urR manifold. Then, \(f^*(M,g):=(N,f^*g)\) is a urR manifold and the map \(f:(N,f^*g)\rightarrow (M,g)\) is an isometric diffeomorphism. \(\square \)

It follows from these examples, for instance, that the cylinders \({{\mathbb {R}}}\times M_1\) or \({{\mathbb {R}}}_+\times M_2\), where \(M_i\) are compact Riemannian manifolds with \(\partial M_2=\emptyset \), are urR manifolds. More generally, Riemannian manifolds with cylindrical ends are urR manifolds (see [11] where more examples are discussed).

Without going into detail, we mention that a Riemannian manifold without boundary is a urR manifold iff it has bounded geometry (see [10] for one half of this assertion and [24] for the other half). Thus, for example,  is not a urR manifold. A Riemannian manifold with boundary is a urR manifold iff it has bounded geometry in the sense of Schick [35] (also see [17,18,19, 26] for related definitions). Detailed proofs of these equivalences can be found in [15].

4 Uniformly regular hypersurfaces

Let \((M,g)\) be an oriented Riemannian manifold with (possibly empty) boundary \(\varGamma \). If it is not empty, then there exists a unique inner (unit) normal vector field \(\nu =\nu _\varGamma \) on \(\varGamma \), that is, a smooth section of \(T_\varGamma M\), the restriction of the tangent bundle TM of M to \(\varGamma \). Furthermore, \(\varGamma \) is oriented by the inner normal in the usual sense.

Suppose that S is an oriented hypersurface in \(\mathring{M}\), an embedded submanifold of codimension 1. Then, there is a unique positive (unit) normal vector field \(\nu _S\) on S, where ‘positive’ means that \(\bigl [\nu _S(p),\beta _1,\ldots ,\beta _{m-1}\bigr ]\) is a positive basis for \(T_pM\) if \([\beta _1,\ldots ,\beta _{m-1}]\) is one for \(T_pS\).

Let \(Z\in \{\varGamma ,S\}\). Then, we write

$$\begin{aligned} \gamma _p^Z(t):=\exp _p\bigl (t\nu _Z(p)\bigr ) ,\qquad t\in I_Z\bigl (\varepsilon (p)\bigr ). \end{aligned}$$

This means that, given \(p\in Z\), \(\gamma _p^Z\) is the unique geodesic in M satisfying \(\gamma _p^Z(0)=p\) and \({{\dot{\gamma }}}_p^Z(0)=\nu _Z(p)\) and being defined (at least) on \(I_Z\bigl (\varepsilon (p)\bigr )\), where \(I_\varGamma \bigl (\varepsilon (p)\bigr )=\bigl [0,\varepsilon (p)\bigr )\) and \(I_S\bigl (\varepsilon (p)\bigr )=\bigl (-\varepsilon (p),\varepsilon (p)\bigr )\) for some \(\varepsilon (p)>0\). Note that for \(t>0\).

We say that Z has a uniform normal geodesic tubular neighborhood of width \(\varepsilon \) if the following is true: there exist \(\varepsilon >0\) and an open neighborhood \(Z(\varepsilon )\) of Z in M such that

$$\begin{aligned} \varphi _Z:Z(\varepsilon )\rightarrow I_Z(\varepsilon )\times Z \quad \text {with}\quad \varphi _Z^{-1}(t,p)=\gamma _p^Z(t) \end{aligned}$$
(4.1)

is a diffeomorphism satisfying \(\varphi _Z(Z)=\{0\}\times Z\). If \(Z=\varGamma \), then a uniform tubular neighborhood is a uniform collar.

Given any embedded submanifold C of M, with or without boundary, we denote by \(g_C\) the pullback metric \(\iota ^*g\), where \(\iota :C\hookrightarrow M\) is the natural embedding.

Now we suppose that

$$\begin{aligned} (M,g)\text { is an }m\text {-dimensional oriented urR manifold}. \end{aligned}$$
(4.2)

This means that there exists an oriented ur atlas for M.

Let S be a hypersurface with boundary \(\varSigma \) such that \(\varSigma =S\cap \varGamma \). Thus, \(S\subset \mathring{M}\) if \(\varSigma =\emptyset \). An atlas \({{\mathfrak {K}}}\) for M is S-adapted if for each \(\kappa \in {{\mathfrak {K}}}_S\) one of the following alternatives applies:

$$\begin{aligned} \begin{aligned} {\mathrm{(i)}}\quad&\kappa \notin {{\mathfrak {K}}}_\varGamma .\text { Then, }Q_\kappa ^m=(-1,1)^m\text { and}\\&\kappa (S\cap U_{\kappa })=\{0\}\times (-1,1)^{m-1};\\ {\mathrm{(ii)}}\quad&\kappa \in {{\mathfrak {K}}}_\varGamma . \text { Then, }Q_\kappa ^m=[0,1)\times (-1,1)^{m-1},\\&\kappa (\varGamma \cap U_{\kappa })=\{0\}\times (-1,1)^{m-1},\text { and}\\&\kappa (\varSigma \cap U_{\kappa })=\{0\}^2\times (-1,1)^{m-2}. \end{aligned} \end{aligned}$$

Then, S is a regularly embedded hypersurface in M, a membrane for short, if there exists an oriented ur atlas \({{\mathfrak {K}}}\) for M which is S-adapted.

Let S be a membrane. Each S-adapted atlas for M induces (by restriction) a ur structure and a (natural) orientation on S. Moreover, the ur structure and the orientation of S are independent of the specific choice of \({{\mathfrak {K}}}\).

For the proof of all this and the following theorem, we refer to [15].

Theorem 4.1

Let (4.2) be satisfied and suppose S is a membrane in M. Assume \(Z\in \{\varGamma ,S\}\). Then,

  (i)

    \((Z,g_Z)\) is an (\(m-1\))-dimensional oriented urR manifold.

  (ii)

    If \(\varSigma =\partial S\ne \emptyset \), then \(\varSigma \) is a membrane in \(\varGamma \) without boundary.

  (iii)

    Let \(\varSigma =\emptyset \) if \(Z=S\). Then, Z has a uniform tubular neighborhood

    $$\begin{aligned} \varphi _Z:Z(\varepsilon )\rightarrow I_Z(\varepsilon )\times Z \end{aligned}$$

    and \(\varphi _{Z*}g\sim \mathrm{d}s^2+g_Z\). Moreover, \(\varphi _Z\) is an orientation-preserving diffeomorphism.

  (iv)

    Suppose \(\varSigma \ne \emptyset \). Then, given \(\rho >0\), there exists \(\varepsilon (\rho )>0\) such that

    $$\begin{aligned} S\cap \bigl \{\,q\in M\ ;\ d_g(q,\varGamma )>\rho \,\bigr \} \end{aligned}$$

    has a uniform tubular neighborhood of width \(\varepsilon (\rho )\) in \(\mathring{M}\).

Now we suppose that S is a membrane with \(\varSigma \ne \emptyset \). It follows from (ii) and (iii) that \(\varSigma \) has a uniform tubular neighborhood \(\psi :\varSigma ^\varGamma (\varepsilon )\rightarrow (-\varepsilon ,\varepsilon )\times \varSigma \) in \(\varGamma \) for some \(\varepsilon >0\). By part (iii), \(\varGamma \) has a uniform collar \(\varphi :\varGamma (\varepsilon )\rightarrow [0,\varepsilon )\times \varGamma \) in M, where we assume without loss of generality that \(\varphi \) and \(\psi \) are of the same width. Then,

$$\begin{aligned} \varSigma (\varepsilon ):=\bigl \{\,\gamma _q^\varGamma (t)\ ;\ q\in \varSigma ^\varGamma (\varepsilon ),\ 0\le t<\varepsilon \,\bigr \} \end{aligned}$$

is an open neighborhood of \(\varSigma \) in M and

$$\begin{aligned} \chi :\varSigma (\varepsilon )\rightarrow [0,\varepsilon )\times (-\varepsilon ,\varepsilon )\times \varSigma \quad \text {with}\quad \chi ^{-1}(x,y,\sigma ):=\varphi ^{-1}\bigl (x,\psi ^{-1}(y,\sigma )\bigr ) \end{aligned}$$

is an orientation-preserving diffeomorphism, a tubular neighborhood of \(\varSigma \) in M of width \(\varepsilon \).

We refer once more to [15] for the proof of the next theorem. Henceforth, \(h:=g_\varSigma \) and \(N(\varepsilon ):=[0,\varepsilon )\times (-\varepsilon ,\varepsilon )\).

Theorem 4.2

Assume (4.2) and S is a membrane with nonempty boundary \(\varSigma \). Then,

$$\begin{aligned} \chi _*g\sim \mathrm{d}x^2+\mathrm{d}y^2+h. \end{aligned}$$
(4.3)

It follows that

$$\begin{aligned} \varSigma _\sigma (\varepsilon ):=\chi ^{-1}\bigl (N(\varepsilon )\times \{\sigma \}\bigr ) \end{aligned}$$

is for each \(\sigma \in \varSigma \) a two-dimensional submanifold of \(\varSigma (\varepsilon )\) and \(S\cap \varSigma _\sigma (\varepsilon )\) is a one-dimensional submanifold of \(\varSigma _\sigma (\varepsilon )\).

Remark 4.3

(The two-dimensional case) Suppose \(\dim (M)=2\) and \(\varSigma \ne \emptyset \). It follows from Theorem 4.1(ii) and the fact that M has a countable base that \(\varSigma \) is a countable discrete subspace of M. Thus, we can find \(\varepsilon >0\) with the following properties: If we denote by \(\psi ^{-1}(\cdot ,\sigma ):(-\varepsilon ,\varepsilon )\rightarrow \varGamma \) the local arc-length parametrization of \(\varGamma \) with \(\psi ^{-1}(0,\sigma )=\sigma \) for \(\sigma \in \varSigma \), then the above definition of \(\chi \) applies and defines a tubular neighborhood of \(\varSigma \) in M of width \(\varepsilon \).

Note that \(\varSigma (\varepsilon )\) is the countable pairwise disjoint union of \(\varSigma _\sigma (\varepsilon )\), \(\sigma \in \varSigma \). The term \(+h\) in (4.3) (and everywhere else) has to be disregarded, and the volume measure of \(\varSigma \) is the counting measure. Thus, in this case, integration with respect to \(\mathrm{d}\mathop {{\mathrm{vol}}}\nolimits _\varSigma \) reduces to summation over \(\sigma \in \varSigma \). \(\square \)

Now we restrict the class of membranes under consideration by requiring that S intersects \(\varGamma \) uniformly transversally. This means the following: there exists \(f\in C^\infty \bigl ([0,\varepsilon )\times \varSigma ,\,(-\varepsilon ,\varepsilon )\bigr )\) such that, setting \(f_\sigma :=f(\cdot ,\sigma )\),

$$\begin{aligned} \begin{aligned} {\mathrm{(i)}}\quad&f_\sigma (0)=0,\ \sigma \in \varSigma ;\\ {\mathrm{(ii)}}\quad&\text {Given }\overline{\varepsilon }\in (0,\varepsilon )\text {, there exists } \overline{\rho }\in (0,\varepsilon )\text { with}\\&\qquad |f_\sigma (x)|\le \overline{\rho },\ 0\le x\le \overline{\varepsilon },\ \sigma \in \varSigma ;\\ {\mathrm{(iii)}}\quad&|\partial f_\sigma (0)|\le c,\ \sigma \in \varSigma ;\\ {\mathrm{(iv)}}\quad&\chi \bigl (S\cap \varSigma _\sigma (\varepsilon )\bigr )=\mathop {{\mathrm{graph}}}\nolimits (f_\sigma )\times \{\sigma \},\ \sigma \in \varSigma . \end{aligned} \end{aligned}$$
(4.4)

In general, a submanifold C of a manifold B intersects \(\partial B\) transversally if

$$\begin{aligned} T_pC+T_p\partial B=T_pB ,\qquad p\in \partial B. \end{aligned}$$

The following theorem furnishes an important large class of urR manifolds and membranes intersecting the boundary uniformly transversally.

Theorem 4.4

Let \((M,g)\) be a compact oriented Riemannian manifold with boundary \(\varGamma \). Assume S is an oriented hypersurface in M with nonempty boundary \(\varSigma \subset \varGamma \) and S intersects \(\varGamma \) transversally. Then, \((M,g)\) is a urR manifold and S is a membrane intersecting \(\varGamma \) uniformly transversally.

Proof

Example 3.2(a) guarantees that \((M,g)\) is an oriented urR manifold. Hence, \((\varGamma ,g_\varGamma )\) is an oriented urR manifold by Theorem 4.1(i). Since S intersects \(\varGamma \) transversally, it is a well-known consequence of the implicit function theorem that \(\varSigma \) is a compact hypersurface in \(\varGamma \) without boundary. It is oriented, being the boundary of the oriented manifold S. Hence, invoking Example 3.2(a) once more, \((\varSigma ,g_\varSigma )\) is an oriented urR manifold. As it is compact, it has a uniform tubular neighborhood in \(\varGamma \). Thus, \(\varGamma \) having a uniform collar, \(\varSigma \) has a uniform tubular neighborhood \(\chi \) in M of some width \(\varepsilon \).

Since S intersects \(\varGamma \) transversally, \(\chi \bigl (S\cap \varSigma _\sigma (\varepsilon )\bigr )\) can be represented as the graph of a smooth function \(f_\sigma :[0,\varepsilon )\rightarrow (-\varepsilon ,\varepsilon )\) with \(f_\sigma (0)=0\), and \(f_\sigma \) depends smoothly on \(\sigma \in \varSigma \). The compactness of \(\varSigma \) implies that (4.4) is true. Hence, S intersects \(\varGamma \) uniformly transversally. Now, due to the compactness of S, it is not difficult to see that S is a regularly embedded submanifold of M. The theorem is proved.\(\square \)

Remarks 4.5

  (a)

    This theorem applies to the case \((M,g)=(\overline{\varOmega },g_m)\) considered in Sect. 2.

  (b)

    Suppose \((M,g)\) is an oriented urR manifold and S a membrane without boundary. Then, the fact that S has a uniform tubular neighborhood in \(\mathring{M}\) prevents S from either reaching \(\varGamma \) or ‘collapsing’ at infinity.\(\square \)

5 The singular manifold

In this section,

$$\begin{aligned} \begin{aligned} { }&(M,g)\text { is an oriented urR manifold and}\\&S\text { a membrane with nonempty boundary }\varSigma \\&\text {such that }S\text { intersects }\varGamma \text { uniformly transversally}. \end{aligned} \end{aligned}$$
(5.1)

By Theorem 4.1 and the considerations following it, we can choose a uniform tubular neighborhood

$$\begin{aligned} \chi :\varSigma (\varepsilon )\rightarrow N(\varepsilon )\times \varSigma . \end{aligned}$$
(5.2)

We write \(D(\varepsilon ):=\bigl \{\,(x,y)\in {{\mathbb {R}}}^2\ ;\ x^2+y^2<\varepsilon ^2,\ x\ge 0\,\bigr \}\). Then,

$$\begin{aligned} \tilde{U}(\varepsilon ):=\chi ^{-1}\bigl (D(\varepsilon )\times \varSigma \bigr ) \end{aligned}$$
(5.3)

is an open neighborhood of \(\varSigma \) in M contained in \(\varSigma (\varepsilon )\). We put

$$\begin{aligned} \hat{M}:=M\setminus \varSigma ,\quad U(\varepsilon ):=\tilde{U}(\varepsilon )\cap \hat{M}=\tilde{U}(\varepsilon )\setminus \varSigma . \end{aligned}$$

Furthermore, r and \(\rho \) are given by (2.5) and (2.6), respectively. Then, we define a Riemannian metric \(\hat{g}\) on \(\hat{M}\) by

$$\begin{aligned} \hat{g}:= \left\{ \begin{array}{ll} g &{}\quad \text {on }M\big \backslash \tilde{U}(\varepsilon ),\\ \chi ^*\Bigl (\frac{\mathrm{d}x^2+\mathrm{d}y^2}{\rho ^2(x,y)}+h\Bigr ) &{}\quad \text {on }U(\varepsilon ). \end{array}\right. \end{aligned}$$
(5.4)

Note that, see Theorem 4.2,

$$\begin{aligned} \hat{g}\sim g\text { on }M\big \backslash \tilde{U}(\varepsilon /3) \end{aligned}$$
(5.5)

and

$$\begin{aligned} \hat{g}=\chi ^*\Bigl (\frac{\mathrm{d}x^2+\mathrm{d}y^2}{x^2+y^2}+h\Bigr ) \text { on }U(\varepsilon /3). \end{aligned}$$

Hence, \(({{\hat{M}}},{{\hat{g}}})\) is a Riemannian manifold with a wedge singularity near \(\varSigma \).

The following theorem is the basis for our approach. It implies that it suffices to study transmission problems for membranes without boundary on urR manifolds.

Theorem 5.1

\(({{\hat{M}}},{{\hat{g}}})\) is an oriented urR manifold, and \(\hat{S}:=S\setminus \varSigma \) is a membrane in \(\hat{M}\) without boundary.

Proof

(1) We set \(\dot{D}(\varepsilon ):=D(\varepsilon )\setminus \{(0,0)\}\) and define \(\delta \in C^\infty [0,\varepsilon )\) by

$$\begin{aligned} \rho (x,y)=\delta \bigl (r(x,y)\bigr ) ,\qquad (x,y)\in D(\varepsilon ). \end{aligned}$$

Then, we fix \(\hat{\varepsilon }\in (2\varepsilon /3,\,\varepsilon )\), define a diffeomorphism

$$\begin{aligned} s:(0,\hat{\varepsilon }\,]\rightarrow {{\mathbb {R}}}_+ ,\quad r\mapsto \int _r^{\hat{\varepsilon }}\frac{dt}{\delta (t)}, \end{aligned}$$

and set \(t:=s^{-1}\). It follows, see [14, Lemma 5.1], that

$$\begin{aligned} t^*\Bigl (\frac{dr^2}{\delta ^2(r)}\Bigr )=\mathrm{d}s^2. \end{aligned}$$
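
For the reader's convenience: since \(s'(r)=-1/\delta (r)\) and \(t=s^{-1}\), we have \(t'(s)=-\delta \bigl (t(s)\bigr )\), so that

$$\begin{aligned} t^*\Bigl (\frac{dr^2}{\delta ^2(r)}\Bigr ) =\frac{\bigl (t'(s)\bigr )^2}{\delta ^2\bigl (t(s)\bigr )}\,\mathrm{d}s^2 =\mathrm{d}s^2. \end{aligned}$$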

We also consider the polar coordinate diffeomorphism

$$\begin{aligned} R:(0,\hat{\varepsilon }\,)\times [-\pi /2,\,\pi /2]\rightarrow \dot{D}(\hat{\varepsilon }) ,\quad (r,\alpha )\mapsto r(\cos \alpha ,\sin \alpha )\ . \end{aligned}$$

Then,

$$\begin{aligned} R^*(\mathrm{d}x^2+\mathrm{d}y^2)=dr^2+r^2\mathrm{d}\alpha ^2 =\delta ^2\Bigl (\frac{dr^2}{\delta ^2}+\frac{r^2}{\delta ^2}\,\mathrm{d}\alpha ^2\Bigr ), \end{aligned}$$

that is,

$$\begin{aligned} R^*\Bigl (\frac{\mathrm{d}x^2+\mathrm{d}y^2}{\rho ^2}\Bigr ) =\frac{dr^2}{\delta ^2}+\frac{r^2}{\delta ^2}\,\mathrm{d}\alpha ^2. \end{aligned}$$
(5.6)

Hence,

$$\begin{aligned} \lambda :=R\circ (t\times {{\mathrm{id}}}) :(0,\infty )\times [-\pi /2,\,\pi /2]\rightarrow \dot{D}(\hat{\varepsilon }) \end{aligned}$$

is a diffeomorphism satisfying

$$\begin{aligned} \lambda ^*\Bigl (\frac{\mathrm{d}x^2+\mathrm{d}y^2}{\rho ^2}\Bigr ) =(t\times {{\mathrm{id}}})^*R^*\Bigl (\frac{\mathrm{d}x^2+\mathrm{d}y^2}{\rho ^2}\Bigr ) =\mathrm{d}s^2+\beta ^2(s)\mathrm{d}\alpha ^2=:\gamma ^2, \end{aligned}$$

where \(\beta :=t^*(r/\delta )\). By (2.6), \(r/\delta =r/(1-\omega +r\omega )\) for \(0<r\le \hat{\varepsilon }\). Hence, \(\beta \) is smooth and \(\beta \sim 1\). Thus, \(\gamma \) is a metric on \(N:=(0,\infty )\times [-\pi /2,\,\pi /2]\) which is uniformly equivalent to \(g_2=\mathrm{d}s^2+\mathrm{d}\alpha ^2\). By Examples 3.2 (a)–(c),

$$\begin{aligned} \bigl ({{\mathbb {R}}}\times [-\pi /2,\,\pi /2],\,\mathrm{d}s^2+\mathrm{d}\alpha ^2\bigr ) \end{aligned}$$

is a urR manifold. From this, we deduce, see Remark 3.1(a), that \((N,\gamma )\) is a urR manifold on \((\overline{s},\infty )\) for each \(\overline{s}>0\).

It follows that

$$\begin{aligned} w:=(\lambda ^{-1}\times {{\mathrm{id}}})\circ \chi :\bigl (U(\hat{\varepsilon }),\hat{g}\bigr ) \rightarrow (N\times \varSigma ,\,\gamma +h) \end{aligned}$$

is an isometric isomorphism. Hence, we derive from Example 3.2(d) and Remark 3.1(a) that \(U(\hat{\varepsilon })\) is a urR manifold on \(U(\overline{\varepsilon })\), where \(\overline{\varepsilon }:=5\varepsilon /6\). Since \((M,g)\) is a urR manifold, it is a urR manifold on \(M\big \backslash \tilde{U}(\varepsilon /3)\). Thus, it is a consequence of (5.5) that \(({{\hat{M}}},{{\hat{g}}})\) is a urR manifold. The first assertion is now clear.

(2) Fix \(\overline{\varepsilon }\in (\varepsilon /3,\,\hat{\varepsilon })\). It can be assumed that (4.4) applies with this choice of \(\overline{\varepsilon }\). Set \(\tilde{f}_\sigma :=t^*f_\sigma \). Note that (4.4)(ii) implies

$$\begin{aligned} \tilde{f}_\sigma :\bigl [s(\overline{\varepsilon }),\infty \bigr )\rightarrow [-\overline{\rho },\overline{\rho }] ,\qquad \sigma \in \varSigma . \end{aligned}$$
(5.7)

Also note that \(t(s)=ce^{-s}\) for \(s\ge s(\varepsilon /3)\) and some \(c>0\). Hence,

$$\begin{aligned} \partial \tilde{f}_\sigma (s)=-ce^{-s}\partial f_\sigma (ce^{-s}) ,\qquad s\ge s(\varepsilon /3). \end{aligned}$$

Thus, it follows from (4.4)(iii) that

$$\begin{aligned} \lim _{s\rightarrow \infty }\partial \tilde{f}_\sigma (s)=0 \qquad \sigma \text {-unif.}, \end{aligned}$$
(5.8)

that is, uniformly with respect to \(\sigma \in \varSigma \).

We write \(\tilde{G}_\sigma \) for the graph of \(\tilde{f}_\sigma \) in \(\bigl [s(\overline{\varepsilon }),\infty \bigr )\times [-\pi /2,\,\pi /2]\). We can assume that

$$\begin{aligned} \tilde{\nu }_\sigma (s) :=\frac{(\partial \tilde{f}_\sigma (s),-1)}{(1+|\partial \tilde{f}_\sigma (s)|^2)^{1/2}} ,\qquad s\ge s(\overline{\varepsilon }), \end{aligned}$$
(5.9)

is the positive normal for \(\tilde{G}_\sigma \) at \(\bigl (s,\tilde{f}_\sigma (s)\bigr )\) (otherwise replace \(\tilde{\nu }_\sigma (s)\) by \(-\tilde{\nu }_\sigma (s)\)). It follows from (5.8) that

$$\begin{aligned} \lim _{s\rightarrow \infty }\tilde{\nu }_\sigma (s)=(0,-1) \qquad \sigma \text {-unif.} \end{aligned}$$
(5.10)

From this and (5.7), we deduce that \(\tilde{G}_\sigma \) has a uniform tubular neighborhood in \(\bigl ([s(\overline{\varepsilon }),\infty )\times [-\pi /2,\,\pi /2],\,\mathrm{d}s^2+\mathrm{d}\alpha ^2\bigr )\) whose width is independent of \(\sigma \in \varSigma \). It follows from step (1) that its pullback by w is a uniform tubular neighborhood of \(\hat{S}\) in \(U(\overline{\varepsilon })\). Now the second part of the assertion is a consequence of Theorem 4.1(iv), since, given any \(\delta >0\),  \(\hat{g}\) and g are equivalent on \(M\big \backslash \tilde{U}(\delta )\).\(\square \)

6 Function spaces

Let \((M,g)\) be a Riemannian manifold. We consider the tensor bundles

$$\begin{aligned} T_0^1M:=TM ,\quad T_1^0M:=T^*M ,\quad T_0^0M:=M\times {{\mathbb {R}}}, \end{aligned}$$

the tangent, cotangent, and trivial bundle, respectively, and

$$\begin{aligned} T_\tau ^\sigma M:=(TM)^{\otimes \sigma }\otimes (T^*M)^{\otimes \tau } ,\qquad \sigma ,\tau \ge 1, \end{aligned}$$

endow \(T_\tau ^\sigma M\) with the tensor bundle metric \(g_\sigma ^\tau :=g^{\otimes \sigma }\otimes g^{*\,\otimes \tau }\),  \(\sigma ,\tau \in {{\mathbb {N}}}\), and set

$$\begin{aligned} |a|_{g_\sigma ^\tau }=\sqrt{(a|a)_{g_\sigma ^\tau }} :=\sqrt{g_\sigma ^\tau (a,a)} ,\qquad a\in C(T_\tau ^\sigma M). \end{aligned}$$
(6.1)

By \(\nabla =\nabla _{g}\), we denote the Levi–Civita connection and interpret it as covariant derivative. Then, given a smooth function u on M, \(\nabla ^ku\in C^\infty (T_k^0M)\) is defined by \(\nabla ^0u:=u\), \(\nabla ^1u=\nabla u:=\mathrm{d}u\), and \(\nabla ^{k+1}u:=\nabla (\nabla ^ku)\) for \(k\in {{\mathbb {N}}}\).

Let \(\kappa =(x^1,\ldots ,x^m)\) be a local coordinate system and set \(\partial _i:=\partial /\partial x^i\). Then,

$$\begin{aligned} \nabla ^1u=\partial _iu\,\mathrm{d}x^i ,\quad \nabla ^2u=\nabla _{ij}u\,\mathrm{d}x^i\otimes \mathrm{d}x^j =(\partial _i\partial _ju-\varGamma _{ij}^k\partial _ku)\mathrm{d}x^i\otimes \mathrm{d}x^j, \end{aligned}$$

where

$$\begin{aligned} \varGamma _{ij}^k=\frac{1}{2}\,g^{k\ell }(\partial _ig_{j\ell }+\partial _jg_{i\ell }-\partial _\ell g_{ij}) ,\qquad 1\le i,j,k\le m, \end{aligned}$$

are the Christoffel symbols. It follows that

$$\begin{aligned} |\nabla u|_{g_0^1}^2=|\nabla u|_{g^*}^2=g^{ij}\partial _iu\partial _ju \end{aligned}$$
(6.2)

and

$$\begin{aligned} |\nabla ^2u|_{g_0^2}^2 =g^{i_1j_1}g^{i_2j_2}\nabla _{i_1i_2}u\nabla _{j_1j_2}u. \end{aligned}$$
(6.3)

As usual, \(\mathrm{d}\mathop {{\mathrm{vol}}}\nolimits _g=\sqrt{g}\,\mathrm{d}x\) is the Riemann–Lebesgue volume element on \(U_{\kappa }\).
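
For example, in the Euclidean case \((M,g)=({{\mathbb {R}}}^m,g_m)\) all Christoffel symbols vanish, so that (6.2) and (6.3) reduce to the familiar expressions

$$\begin{aligned} |\nabla u|_{g_0^1}^2=\sum _{i=1}^m|\partial _iu|^2 ,\qquad |\nabla ^2u|_{g_0^2}^2=\sum _{i,j=1}^m|\partial _i\partial _ju|^2, \end{aligned}$$

consistently with Lemma 6.1 below.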

Let \(\sigma ,\tau \in {{\mathbb {N}}}\), put \(V:=T_\tau ^\sigma M\), and write \(\vert \cdot \vert _V:=\vert \cdot \vert _{g_\sigma ^\tau }\). Then, \({{\mathcal {D}}}(V)\) is the linear subspace of \(C^\infty (V)\) of compactly supported sections.

For \(1\le q<\infty \), we set

$$\begin{aligned} \Vert u\Vert _{L_q(V)}=\Vert u\Vert _{L_q(V,g)} :=\Bigl (\int _M|u|_V^q\,\mathrm{d}\mathop {{\mathrm{vol}}}\nolimits _g\Bigr )^{1/q}. \end{aligned}$$

Then,

$$\begin{aligned} L_q(V)=L_q(V,g):= \bigl (\bigl \{\,u\in L_{1,\mathrm{loc}}(V)\ ;\ \Vert u\Vert _{L_q(V,g)}<\infty \,\bigr \},\ \Vert \cdot \Vert _{L_q(V,g)}\bigr ) \end{aligned}$$

is the usual Lebesgue space of \(L_q\) sections of V. Hence, \(L_q(M,g)=L_q(V,g)\) for \(V=T_0^0M=M\times {{\mathbb {R}}}\). If \(k\in {{\mathbb {N}}}\), then

$$\begin{aligned} \Vert u\Vert _{W_{q}^k(V)}=\Vert u\Vert _{W_{q}^k(V,g)} :=\sum _{j=0}^k\big \Vert \,|\nabla ^ju|_{g_\sigma ^{\tau +j}}\big \Vert _{L_q(M,g)} \end{aligned}$$

and

$$\begin{aligned} \Vert u\Vert _{BC^k(V)}=\Vert u\Vert _{BC^k(V,g)} :=\sum _{j=0}^k\big \Vert \,|\nabla ^ju|_{g_\sigma ^{\tau +j}}\big \Vert _\infty . \end{aligned}$$

The Sobolev space \(W_{q}^k(V)=W_{q}^k(V,g)\) is the completion of \({{\mathcal {D}}}(V)\) in \(L_q(V)\) with respect to the norm  \(\Vert \cdot \Vert _{W_{q}^k(V)}\). If \(k<s<k+1\), the Slobodeckii space \(W_{q}^s(V)\) is obtained by real interpolation:

$$\begin{aligned} W_{q}^s(V)=W_{q}^s(V,g):=\bigl (W_{q}^k(V),W_{q}^{k+1}(V)\bigr )_{s-k,q}. \end{aligned}$$
(6.4)

We also need the time-dependent function spaces

$$\begin{aligned} W_{q}^{s/\varvec{2}}(M\times J) :=L_q\bigl (J,W_{q}^s(M)\bigr )\cap W_{q}^{s/2}\bigl (J,L_q(M)\bigr ) ,\qquad 0\le s\le 2. \end{aligned}$$
(6.5)

Thus, \(W_{q}^{0/\varvec{2}}(M\times J)\doteq L_q\bigl (J,L_q(M)\bigr )\), where \({}\doteq {}\) means ‘equivalent norms.’

By \(BC^k(V)=BC^k(V,g)\), we denote the Banach space of all \(u\in C^k(V)\) for which \(\Vert u\Vert _{BC^k(V)}\) is finite, and \(BC:=BC^0\). Then,

$$\begin{aligned} BC^{1/\varvec{2}}(M\times J) :=C\bigl (J,BC^1(M)\bigr )\cap C^{1/2}\bigl (J,BC(M)\bigr ) \end{aligned}$$
(6.6)

with the usual Hölder space \(C^{1/2}\).

The following lemma shows that in the Euclidean setting these definitions return the classical spaces.

Lemma 6.1

Suppose that \({{\mathbb {X}}}\in \{{{\mathbb {R}}}^m,{{\mathbb {H}}}^m\}\), \((M,g):=({{\mathbb {X}}},g_m)\), and \(V:={{\mathbb {X}}}\times F\) with \(F={{\mathbb {R}}}^{m^\sigma \times m^\tau }\simeq T_\tau ^\sigma {{\mathbb {X}}}\). Then,

$$\begin{aligned} W_{q}^s(V)=W_{q}^s({{\mathbb {X}}},F),\qquad s\in {{\mathbb {R}}}_+ ,\quad 1\le q<\infty , \end{aligned}$$

the standard Sobolev–Slobodeckii spaces, and

$$\begin{aligned} BC^k(V)=BC^k({{\mathbb {X}}},F),\qquad k\in {{\mathbb {N}}}. \end{aligned}$$

Proof

The second assertion is obvious.

If \(k\in {{\mathbb {N}}}\), then the above definition of \(W_{q}^k(V)\) coincides with the one in [13, (VII.1.2.2)]. Now the first assertion follows from (6.4), Theorems VII.2.7.4 and VII.2.8.3 in [13], and the fact that the Besov space \(B_q^s=B_{qq}^s\) coincides with \(W_{q}^s\) for \(s\notin {{\mathbb {N}}}\).\(\square \)

Now we suppose that

$$\begin{aligned} \begin{aligned} {}&(M,g)\text { is an oriented urR manifold and}\\&S\text { is a membrane without boundary}. \end{aligned} \end{aligned}$$
(6.7)

By Theorem 4.1(iii), we can choose a uniform tubular neighborhood

$$\begin{aligned} \varphi :S(\varepsilon )\rightarrow (-\varepsilon ,\varepsilon )\times S \end{aligned}$$
(6.8)

in \(\mathring{M}\). We set

$$\begin{aligned} M_+:=\varphi ^{-1}\bigl ([0,\varepsilon )\times S\bigr ) ,\quad M_-:=\varphi ^{-1}\bigl ((-\varepsilon ,0]\times S\bigr ) ,\quad M_0:=M\big \backslash \overline{S(\varepsilon /2)}. \end{aligned}$$

By \(V_\pm :=V_{M_\pm }\) and \(V_0:=V_{M_0}\), we denote the restrictions of V to \(M_\pm \) and \(M_0\), respectively. Then, \({{\bar{W}}}_{q}^s(M\setminus S,\,V)\),  \(s\in {{\mathbb {R}}}_+\), resp. \({\bar{BC}}^k(M\setminus S,\,V)\),  \(k\in {{\mathbb {N}}}\), is the closed linear subspace of

$$\begin{aligned} W_{q}^s(V_0)\oplus W_{q}^s(V_+)\oplus W_{q}^s(V_-), \quad \text {resp.}\quad BC^k(V_0)\oplus BC^k(V_+)\oplus BC^k(V_-), \end{aligned}$$

consisting of all \(u=u_0\oplus u_+\oplus u_-\) satisfying \((u_0-u_\pm )|M_0\cap M_\pm =0\). Definitions analogous to (6.5) and (6.6) give the Banach spaces \({{\bar{W}}}_{q}^{s/\varvec{2}}\bigl ((M\setminus S)\times J\bigr )\) and \({\bar{BC}}^{1/\varvec{2}}\bigl ((M\setminus S)\times J\bigr )\), respectively. Note that \({{\bar{W}}}_{p}^0(M\setminus S,\,V)=L_p(M,V)\), since \(\mathop {{\mathrm{vol}}}\nolimits _g(S)=0\).

Remark 6.2

Let \((M,g):=({{\mathbb {R}}}^m,g_m)\) and \(S:=\partial {{\mathbb {H}}}^m\). We can set \(\varepsilon =\infty \) in (6.8) to get \(M_+={{\mathbb {H}}}^m\), \(M_-=-{{\mathbb {H}}}^m\), and \(M_0=\emptyset \). Then,

$$\begin{aligned} {{\bar{W}}}_{q}^s(M\setminus S,\,V) =W_{q}^s({{\mathbb {H}}}^m,F)\oplus W_{q}^s(-{{\mathbb {H}}}^m,F) \end{aligned}$$

and

$$\begin{aligned} {\bar{BC}}^k(M\setminus S,\,V) =BC^k({{\mathbb {H}}}^m,F)\oplus BC^k(-{{\mathbb {H}}}^m,F), \end{aligned}$$

as follows from Lemma 6.1. \(\square \)

Assume \(a\in {\bar{BC}}(M\setminus S,\,V)\). Then, the one-sided limits

$$\begin{aligned} \lim _{t\rightarrow 0\pm }a\bigl (\gamma _p^S(t)\bigr )=:a_\pm (p) ,\qquad p\in S, \end{aligned}$$

exist and \(a_\pm \in BC(S)\). Hence, the jump across S,

$$\begin{aligned} \llbracket a \rrbracket :=\bigl (p\mapsto \llbracket a \rrbracket (p) :=a_+(p)-a_-(p)\bigr )\in BC(S), \end{aligned}$$

is well-defined. Note that \(a_\pm \) is the trace of a on S ‘from the positive/negative side of S.’

Let \(u\in {\bar{BC}}^1(M\setminus S)\). Then, \(u\circ \gamma _p^S\in {\bar{BC}}^1\bigl ((-\varepsilon ,\varepsilon )\setminus \{0\}\bigr )\). We set

$$\begin{aligned} \frac{\partial u}{\partial \nu _S}(q) :=\partial _1(u\circ \varphi ^{-1})(\tau ,p) ,\qquad q\in (M_+\cup M_-)\setminus S, \end{aligned}$$

for \(\varphi (q)=(\tau ,p)\in (-\varepsilon ,\varepsilon )\times S\) with \(\tau \ne 0\). Thus, \(\partial u/\partial \nu _S\) is the normal derivative of u in \((M_+\cup M_-)\setminus S\), that is, the derivative along the normal geodesic \(\gamma _p^S\). Hence,

$$\begin{aligned} \frac{\partial u}{\partial \nu _S}(q) =\bigl \langle \mathrm{d}u(q),{{\dot{\gamma }}}_p^S(\tau )\bigr \rangle =\bigl ({{\dot{\gamma }}}_p^S(\tau )\big |\mathop {{\mathrm{grad}}}\nolimits u(q)\bigr )_{g(q)}. \end{aligned}$$

Consequently, the jump of the normal derivative, \(\llbracket \partial u/\partial \nu _S \rrbracket \), is also well-defined.

7 The parabolic problem on manifolds

We presuppose (6.7) and assume

$$\begin{aligned} \begin{aligned} {\mathrm{(i)}}\quad&a\in {\bar{BC}}^{1/\varvec{2}}\bigl ((M\setminus S)\times J\bigr ).\\ {\mathrm{(ii)}}\quad&a\ge \underline{\alpha }>0, \end{aligned} \end{aligned}$$
(7.1)

where \(\underline{\alpha }\le 1\). Then, we set

$$\begin{aligned} {{\mathcal {A}}}u:=-\mathop {{\mathrm{div}}}\nolimits (a\mathop {{\mathrm{grad}}}\nolimits u). \end{aligned}$$

Fix \(\delta \in C\bigl (\varGamma ,\{0,1\}\bigr )\). Then, \(\varGamma _j:=\delta ^{-1}(j)\),  \(j=0,1\), is open and closed in \(\varGamma \), and \(\varGamma _0\cup \varGamma _1=\varGamma \). Any of \(\varGamma \), \(\varGamma _0\), and \(\varGamma _1\) may be empty; in such a case, all references to the empty sets are to be disregarded. Recall that \(\gamma \) denotes the trace operator on \(\varGamma \) (for any manifold).

We introduce an operator \({{\mathcal {B}}}=({{\mathcal {B}}}^0,{{\mathcal {B}}}^1)\) on \(\varGamma \) by \({{\mathcal {B}}}^0u=\gamma _{\varGamma _0}u\), the Dirichlet boundary operator, on \(\varGamma _0\), and a Neumann boundary operator

$$\begin{aligned} {{\mathcal {B}}}^1u:=\bigl (\nu \big |\gamma _{\varGamma _1}(a\mathop {{\mathrm{grad}}}\nolimits u)\bigr )\text { on }\varGamma _1. \end{aligned}$$

On S, we consider the transmission operator \({{\mathcal {C}}}=({{\mathcal {C}}}^0,{{\mathcal {C}}}^1)\), where

Note that \({{\mathcal {C}}}^1u\) equals \(\llbracket a\partial _{\nu _S}u \rrbracket \) and not \(a\,\llbracket \partial _{\nu _S}u \rrbracket \).

Of concern in this paper is the inhomogeneous linear transmission problem

$$\begin{aligned} \begin{aligned} \partial _tu+{{\mathcal {A}}}u&=f \quad \text { on }(M\setminus S)\times J,\\ {{\mathcal {B}}}u&=\varphi \quad \text { on }\varGamma \times J,\\ {{\mathcal {C}}}u&=\psi \quad \text { on }S\times J,\\ \gamma _0u&=u_0 \quad \text { on }(M\setminus S)\times \{0\}. \end{aligned} \end{aligned}$$
(7.2)

We assume that

$$\begin{aligned} 1<p<\infty ,\ p\notin \{3/2,\,3\}, \ (6.7)\text { and }(7.1) \text { are satisfied}. \end{aligned}$$
(7.3)

For abbreviation, we set, for \(1<q<\infty \),

$$\begin{aligned} {{\bar{W}}}_{q}^{k/\varvec{2}} :={{\bar{W}}}_{q}^{k/\varvec{2}}\bigl ((M\setminus S)\times J\bigr ) ,\qquad k=0,2, \end{aligned}$$

and introduce the trace spaces

$$\begin{aligned} \begin{aligned} \partial W_{q}&:=W_{q}^{(2-1/q)/\varvec{2}}(\varGamma _0\times J) \oplus W_{q}^{(1-1/q)/\varvec{2}}(\varGamma _1\times J),\\ \partial _SW_{q}&:=W_{q}^{(2-1/q)/\varvec{2}} (S\times J)\oplus W_{q}^{(1-1/q)/\varvec{2}}(S\times J), \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \gamma _0{{\bar{W}}}_{q} :={{\bar{W}}}_{q}^{2-2/q}(M\setminus S). \end{aligned}$$

As a rule, we drop the index q if \(q=p\). Thus, \({{\bar{W}}}^{2/\varvec{2}}={{\bar{W}}}_{p}^{2/\varvec{2}}\), \(\partial W=\partial W_{p}\), etc. Finally,

$$\begin{aligned} \partial _{{{\mathcal {B}}},{{\mathcal {C}}}}W =[\partial W\oplus \partial _SW\oplus \gamma _0{{\bar{W}}}]_{{{\mathcal {B}}},{{\mathcal {C}}}} \end{aligned}$$

is the closed linear subspace of \(\partial W\oplus \partial _SW\oplus \gamma _0{{\bar{W}}}\) consisting of all \((\varphi ,\psi ,u_0)\) satisfying the compatibility conditions

$$\begin{aligned} \begin{aligned} {{\mathcal {B}}}^0u_0&=\varphi ^0(0),&\quad {{\mathcal {C}}}^0u_0&=\psi ^0(0),&\quad \text {if}\quad&3/2&<p<3,\\ {{\mathcal {B}}}(0)u_0&=\varphi (0),&\quad {{\mathcal {C}}}(0)u_0&=\psi (0),&\quad \text {if}\quad&3&<p<\infty , \end{aligned} \end{aligned}$$

where \(\varphi =(\varphi ^0,\varphi ^1)\) and \(\psi =(\psi ^0,\psi ^1)\). It follows from the anisotropic trace theorem ([13, Example VIII.1.8.6]) that \(\partial _{{{\mathcal {B}}},{{\mathcal {C}}}}W\) is well-defined.
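The exceptional values \(3/2\) and \(3\) in (7.3), and the case distinction above, can be understood by a simple counting argument (a heuristic only; the precise statement is the quoted trace theorem): \(u_0\) has \(2-2/p\) derivatives in \(L_p\), taking a trace on \(\varGamma \) or \(S\) costs \(1/p\) derivatives, and a boundary or transmission operator of order \(j\in \{0,1\}\) costs \(j\) more. Thus, the condition of order \(j\) is meaningful for \(u_0\) precisely when

$$\begin{aligned} 2-\frac{2}{p}-j-\frac{1}{p}>0 ,\qquad \text {that is,}\quad p>\frac{3}{2}\ (j=0) ,\qquad p>3\ (j=1). \end{aligned}$$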

Given Banach spaces E and F, we write \({{\mathcal {L}}}{{\mathrm{is}}}(E,F)\) for the set of all isomorphisms in \({{\mathcal {L}}}(E,F)\), the Banach space of continuous linear maps from E into F.

Now we can formulate the following maximal regularity theorem for problem (7.2). Its proof, which needs considerable preparation, is found in Sect. 13.

Theorem 7.1

Let (7.3) be satisfied. Then,

$$\begin{aligned} \bigl (\partial _t+{{\mathcal {A}}},\,({{\mathcal {B}}},{{\mathcal {C}}},\gamma _0)\bigr ) \in {{\mathcal {L}}}{{\mathrm{is}}}\bigl ({{\bar{W}}}_{p}^{(2,1)}, L_p(J,L_p(M))\times \partial _{{{\mathcal {B}}},{{\mathcal {C}}}}W_{p}\bigr ). \end{aligned}$$

8 The uniform Lopatinskii–Shapiro condition

In the proof of Theorem 7.1, we need to consider systems of elliptic boundary value problems. For this, we have to be precise about the concept of uniform ellipticity.

Let \((M,g)\) be any Riemannian manifold. We consider a general second-order linear differential operator on M,

(8.1)

with \(u=(u^1,\ldots ,u^n)\) and

$$\begin{aligned} a_i\in C(T_0^iM)^{n\times n} ,\qquad i=0,1,2 ,\quad a_2:=a. \end{aligned}$$

Here, \(\nabla ^iu=(\nabla ^iu^1,\ldots ,\nabla ^iu^n)\) so that, for example,

where s is summed from 1 to n and denotes complete contraction, that is, summation over all twice occurring indices in any local coordinate representation.

The (principal) symbol \({{\mathfrak {s}}}{{\mathcal {A}}}\) of \({{\mathcal {A}}}\) is the \((n\times n)\)-matrix-valued function defined by

Then, \({{\mathcal {A}}}\) is uniformly normally elliptic if there exists an ‘ellipticity constant’ \(\underline{\alpha }\in (0,1)\) such that

$$\begin{aligned} \sigma \bigl ({{\mathfrak {s}}}{{\mathcal {A}}}(p,\xi )\bigr )\subset [\mathop {{\mathrm{Re}}}\nolimits z\ge \underline{\alpha }] =\{\,z\in {{\mathbb {C}}}\ ;\ \mathop {{\mathrm{Re}}}\nolimits z\ge \underline{\alpha }\,\} \end{aligned}$$

for all \(p\in M\) and \(\xi \in T_p^*M\) with \(|\xi |_{g^*(p)}^2=1\), where \(\sigma (\cdot )\) denotes the spectrum.

Suppose \(\varGamma \ne \emptyset \) and \({{\mathcal {B}}}=({{\mathcal {B}}}^1,\ldots ,{{\mathcal {B}}}^n)\) is a linear boundary operator of order at most 1. More precisely, we assume that there is \(k\in \{0,\ldots ,n\}\) such that

with

$$\begin{aligned} b_i^r\in C(T_0^i\varGamma )^n ,\qquad 1\le r\le n ,\quad i=0,1. \end{aligned}$$

Then, the (principal) symbol of \({{\mathcal {B}}}\) is the \((n\times n)\)-matrix-valued function \({{\mathfrak {s}}}{{\mathcal {B}}}\) given by

Observe that if X is a vector, \(\omega \) a covector field, and \(\langle {}\cdot {},{}\cdot {}\rangle \) the duality pairing.

We denote by \(\nu ^\flat \in T_\varGamma ^*M\) the inner conormal on \(\varGamma \) defined in local coordinates by \(\nu ^\flat =g_{ij}\nu ^j\mathrm{d}x^i\). Given \(q\in \varGamma \), we write \({{\mathbb {B}}}(q)\) for the set of all

$$\begin{aligned} \begin{aligned} {}&(\xi ,\lambda )\in T_q^*M\times [\mathop {{\mathrm{Re}}}\nolimits z\ge 0]\text { satisfying}\\&\xi \perp \nu ^\flat (q)\text { and }|\xi |^2+|\lambda |=1. \end{aligned} \end{aligned}$$
(8.2)

Then, if \((\xi ,\lambda )\in {{\mathbb {B}}}(q)\), we introduce linear differential operators on \({{\mathbb {R}}}\) by

$$\begin{aligned} \begin{aligned} A(\partial ;q,\xi ,\lambda )&:=\lambda +{{\mathfrak {s}}}{{\mathcal {A}}}\bigl (q,\,\xi +i\nu ^\flat (q)\partial \bigr ),\\ B(\partial ;q,\xi ,\lambda )&:={{\mathfrak {s}}}{{\mathcal {B}}}\bigl (q,\,\xi +i\nu ^\flat (q)\partial \bigr ), \end{aligned} \end{aligned}$$
(8.3)

where \(i=\sqrt{-1}\).
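For the scalar operators considered below (\(n=1\)), these substitutions can be made completely explicit; the following sketch anticipates (11.2) and uses only \(\xi \perp \nu ^\flat (q)\) and \(|\nu ^\flat (q)|_{g^*(q)}=1\):

$$\begin{aligned} A(\partial ;q,\xi ,\lambda )v =\lambda v+a(q)\bigl (\xi +i\nu ^\flat (q)\partial \,\big |\,\xi +i\nu ^\flat (q)\partial \bigr )_{g^*(q)}v =\lambda v+a(q)\bigl (|\xi |_{g^*(q)}^2-\partial ^2\bigr )v, \end{aligned}$$

so that (8.4) becomes the ordinary differential equation \(\ddot{v}=\bigl (\lambda /a(q)+|\xi |_{g^*(q)}^2\bigr )v\), in accordance with (11.7) and (11.8).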

As usual, \(C_0({{\mathbb {R}}}_+,{{\mathbb {C}}}^n)\) is the closed linear subspace of \(BC({{\mathbb {R}}}_+,{{\mathbb {C}}}^n)\) consisting of the functions that vanish at infinity.

Suppose \({{\mathcal {A}}}\) is uniformly normally elliptic. Then, it follows that the homogeneous problem

$$\begin{aligned} A(\partial ;q,\xi ,\lambda )v=0\text { on }{{\mathbb {R}}}\end{aligned}$$
(8.4)

has for each \(q\in \varGamma \) and \((\xi ,\lambda )\in {{\mathbb {B}}}(q)\) precisely n linearly independent solutions whose restrictions to \({{\mathbb {R}}}_+\) belong to \(C_0({{\mathbb {R}}}_+,{{\mathbb {C}}}^n)\). We denote their span by \(C_0(q,\xi ,\lambda )\). It is an n-dimensional linear subspace of \(C_0({{\mathbb {R}}}_+,{{\mathbb {C}}}^n)\).

Now we consider the initial value problem on the half-line:

$$\begin{aligned} \begin{aligned} A(\partial ;q,\xi ,\lambda )v&=0\text { on }{{\mathbb {R}}}_+,\\ B(\partial ;q,\xi ,\lambda )v(0)&=\eta \in {{\mathbb {C}}}^n. \end{aligned} \end{aligned}$$
(8.5)

Then, \(({{\mathcal {A}}},{{\mathcal {B}}})\) satisfies the uniform parameter-dependent Lopatinskii–Shapiro (LS) condition if problem (8.5) has for each \(\eta \in {{\mathbb {C}}}^n\) a unique solution

$$\begin{aligned} v=R(q,\xi ,\lambda )\eta \in C_0(q,\xi ,\lambda ) \end{aligned}$$
(8.6)

and

$$\begin{aligned} \Vert R(q,\xi ,\lambda )\Vert _{{{\mathcal {L}}}({{\mathbb {C}}}^n,C_0({{\mathbb {R}}}_+,{{\mathbb {C}}}^n))}\le c, \end{aligned}$$
(8.7)

unif. w.r.t. \(q\in \varGamma \) and \((\xi ,\lambda )\in {{\mathbb {B}}}(q)\).

The basic feature, which distinguishes the above definition from the usual form of the LS condition, is the requirement of the uniform bound (8.7). Without this requirement, the LS condition is much simpler to formulate (see, e.g., [5, 6, 22, 23, 32]) and to verify.

It is known that the LS condition is equivalent to the parameter-dependent version of the complementing condition of Agmon et al. [2] (see, for example, [27, VII§9] or [38, Sect. 10.1]). Using this version, it is possible to define a uniform complementing condition which is equivalent to (8.7) (see [3] and [4]). However, that condition is even more difficult to verify in concrete situations. We refer to [15] for a detailed exposition of all these facts. It should be noted that the uniformity condition (8.7) is fundamental for the following, since we will have to work with infinitely many linear model problems.

9 Model cases

For the proof of Theorem 7.1, we have to understand the model cases to which problem (7.2) reduces in local coordinates.

Until further notice, it is assumed that

$$\begin{aligned} \begin{aligned} \bullet \quad&\text {assumption }(7.3)\text { applies}.\\ \bullet \quad&{{\mathfrak {K}}}\text { is an }S\text {-adapted ur atlas for }M. \end{aligned} \end{aligned}$$

By Remark 3.1(b), we can assume that \(\mathop {{\mathrm{diam}}}\nolimits (U_{\kappa })<\varepsilon /2\) for \(\kappa \in {{\mathfrak {K}}}_S\) where \(\varepsilon \) is the width of the tubular neighborhood of S.

We can choose a family \(\{\,\pi _\kappa ,\chi \ ;\ \kappa \in {{\mathfrak {K}}}\,\}\) with the following properties:

$$\begin{aligned} \begin{aligned} {\mathrm{(i)}}\quad&\pi _\kappa \in {{\mathcal {D}}}\bigl (U_{\kappa },[0,1]\bigr )\text { for }\kappa \in {{\mathfrak {K}}}\text { and } {\textstyle \sum _\kappa }\pi _\kappa ^2(p)=1\text { for }p\in M.\\ {\mathrm{(ii)}}\quad&\Vert \kappa _*\pi _\kappa \Vert _{k,\infty }\le c(k),\ \kappa \in {{\mathfrak {K}}},\ k\in {{\mathbb {N}}}.\\ {\mathrm{(iii)}}\quad&\chi \in {{\mathcal {D}}}\bigl ((-1,1)^m,[0,1]\bigr ) \text { with }\chi |\mathop {{\mathrm{supp}}}\nolimits (\kappa _*\pi _\kappa )=1\text { for }\kappa \in {{\mathfrak {K}}}. \end{aligned} \end{aligned}$$
(9.1)

(See Lemma 3.2 in [10] or [15].) We fix an \(\tilde{\omega }\in {{\mathcal {D}}}\bigl ((-1,1)^m,[0,1]\bigr )\) which is equal to 1 on \(\mathop {{\mathrm{supp}}}\nolimits (\chi )\). Then,

$$\begin{aligned} g_\kappa :=\tilde{\omega }\kappa _*g+(1-\tilde{\omega })g_m \end{aligned}$$

is a Riemannian metric on \({{\mathbb {R}}}^m\) such that

$$\begin{aligned} g_\kappa \sim g_m ,\qquad \kappa \in {{\mathfrak {K}}}, \end{aligned}$$
(9.2)

and

$$\begin{aligned} \Vert g_\kappa \Vert _{k,\infty }\le c(k) ,\qquad \kappa \in {{\mathfrak {K}}},\quad k\in {{\mathbb {N}}}. \end{aligned}$$
(9.3)

This follows from (3.3). Furthermore, we set

$$\begin{aligned} a_\kappa :=\tilde{\omega }\kappa _*a+1-\tilde{\omega }. \end{aligned}$$
(9.4)

Note that

$$\begin{aligned} a_\kappa \ge \underline{\alpha } ,\qquad \kappa \in {{\mathfrak {K}}}. \end{aligned}$$
(9.5)
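For completeness, (9.5) is verified in one line; the only ingredients are \(0\le \tilde{\omega }\le 1\), assumption (7.1)(ii) in the form \(\kappa _*a\ge \underline{\alpha }\), and \(\underline{\alpha }\le 1\):

$$\begin{aligned} a_\kappa =\tilde{\omega }\,\kappa _*a+(1-\tilde{\omega }) \ge \tilde{\omega }\,\underline{\alpha }+(1-\tilde{\omega }) \ge \tilde{\omega }\,\underline{\alpha }+(1-\tilde{\omega })\,\underline{\alpha } =\underline{\alpha }. \end{aligned}$$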

We write \(\mathop {{\mathrm{grad}}}\nolimits _\kappa :=\mathop {{\mathrm{grad}}}\nolimits _{g_\kappa }\) and \(\mathop {{\mathrm{div}}}\nolimits _\kappa :=\mathop {{\mathrm{div}}}\nolimits _{g_\kappa }\) for \(\kappa \in {{\mathfrak {K}}}\). Then,

$$\begin{aligned} {{\mathcal {A}}}_\kappa u:=-\mathop {{\mathrm{div}}}\nolimits _\kappa (a_\kappa \mathop {{\mathrm{grad}}}\nolimits _\kappa u) ,\qquad u\in Q_\kappa ^m. \end{aligned}$$

Let \(\delta _\kappa :=\kappa _*\delta \). Then,

$$\begin{aligned} {{\mathcal {B}}}_\kappa u :=\delta _\kappa \bigl (\nu _\kappa \big |\gamma (a_\kappa \mathop {{\mathrm{grad}}}\nolimits _\kappa u)\bigr )_\kappa +(1-\delta _\kappa )\gamma u ,\qquad \kappa \in {{\mathfrak {K}}}_\varGamma , \end{aligned}$$

where \(\nu _\kappa \) is the inner normal on \(\partial {{\mathbb {H}}}^m\) with respect to \(({{\mathbb {H}}}^m,g_\kappa )\), and \((\cdot |\cdot )_\kappa =g_\kappa \). If \(\kappa \in {{\mathfrak {K}}}_S\), then

Using these notations, we consider the three model problems:

$$\begin{aligned} \partial _tu+{{\mathcal {A}}}_\kappa u=f_\kappa \text { on }{{\mathbb {R}}}^m\times J, \end{aligned}$$
(9.6)

and

$$\begin{aligned} \begin{aligned} \partial _tu+{{\mathcal {A}}}_\kappa u&=f_\kappa \quad \text { on }{{\mathbb {H}}}^m\times J,\\ {{\mathcal {B}}}_\kappa u&=\varphi _\kappa \quad \text { on }\partial {{\mathbb {H}}}^m\times J, \end{aligned} \end{aligned}$$
(9.7)

and

$$\begin{aligned} \begin{aligned} \partial _tu+{{\mathcal {A}}}_\kappa u&=f_\kappa \quad \text { on }({{\mathbb {R}}}^m\setminus \partial {{\mathbb {H}}}^m)\times J,\\ {{\mathcal {C}}}_\kappa u&=\psi _\kappa \quad \text { on }\partial {{\mathbb {H}}}^m\times J. \end{aligned} \end{aligned}$$
(9.8)

In the following two sections, we prove that each one of them, complemented by appropriate initial and compatibility conditions, enjoys a maximal regularity result, unif. w.r.t. \(\kappa \).

10 Continuity

First, we note that

$$\begin{aligned} \sqrt{g_\kappa }\sim 1 ,\qquad \kappa \in {{\mathfrak {K}}}, \end{aligned}$$
(10.1)

and, given \(k\in {{\mathbb {N}}}\),

$$\begin{aligned} \sum _{i=0}^k|\nabla _{\kappa }^iu|\sim \sum _{|\alpha |\le k}|\partial ^\alpha u| ,\qquad \kappa \in {{\mathfrak {K}}},\quad u\in C^k(Q_\kappa ^m), \end{aligned}$$
(10.2)

with \(\nabla _{\kappa }u:=\kappa _*\nabla \kappa ^*u\) (cf. [10, Lemma 3.1] or [15]).

We set

$$\begin{aligned} {{\mathbb {X}}}_\kappa := \left\{ \begin{array}{ll} {{\mathbb {R}}}^m, &{}\quad \text {if }\kappa \in {{\mathfrak {K}}}_0:={{\mathfrak {K}}}\setminus ({{\mathfrak {K}}}_\varGamma \cup {{\mathfrak {K}}}_S),\\ {{\mathbb {H}}}^m, &{}\quad \text {if }\kappa \in {{\mathfrak {K}}}_\varGamma ,\\ {{\mathbb {R}}}^m\setminus \partial {{\mathbb {H}}}^m, &{}\quad \text {if }\kappa \in {{\mathfrak {K}}}_S. \end{array} \right. \end{aligned}$$
(10.3)

Then,

$$\begin{aligned} W_{\kappa }^s:=W_{p}^s({{\mathbb {X}}}_\kappa ,g_\kappa ) ,\qquad \kappa \in {{\mathfrak {K}}}_0\cup {{\mathfrak {K}}}_\varGamma , \end{aligned}$$

and

$$\begin{aligned} {{\bar{W}}}_{\kappa }^s:={{\bar{W}}}_{p}^s({{\mathbb {R}}}^m\setminus \partial {{\mathbb {H}}}^m,\,g_\kappa ) ,\qquad \kappa \in {{\mathfrak {K}}}_S, \end{aligned}$$

where \(0\le s\le 2\). For the sake of a unified presentation,

$$\begin{aligned} {{\mathsf {W}}}_{\kappa }^s:= \left\{ \begin{array} {ll} W_\kappa ^s, &{}\quad \text {if }\kappa \in {{\mathfrak {K}}}\setminus {{\mathfrak {K}}}_S,\\ {{\bar{W}}}_{\kappa }^s, &{}\quad \text {if }\kappa \in {{\mathfrak {K}}}_S. \end{array} \right. \end{aligned}$$

If \({{\mathbb {X}}}\in \{{{\mathbb {R}}}^m,{{\mathbb {H}}}^m\}\), then \(W_{p}^s({{\mathbb {X}}}):=W_{p}^s({{\mathbb {X}}},g_m)\). It is a consequence of (10.1) and (10.2) that

$$\begin{aligned} {{\mathsf {W}}}_{\kappa }^k\doteq {{\mathsf {W}}}_{p}^k({{\mathbb {X}}}_\kappa ) \qquad {{\mathfrak {K}}}\text {-unif.}, \end{aligned}$$
(10.4)

where \({}\doteq {}\) stands for ‘equal except for equivalent norms.’

Since

$$\begin{aligned} {{\mathsf {W}}}_{p}^s({{\mathbb {X}}}) =\bigl ({{\mathsf {W}}}_{p}^k({{\mathbb {X}}}),{{\mathsf {W}}}_{p}^{k+1}({{\mathbb {X}}})\bigr )_{s-k,p} ,\qquad k<s<k+1, \end{aligned}$$

(cf. [13, Theorems VII.2.7.4 and VII.2.8.3, as well as (VII.3.6.3)]), it follows from definition (6.4) and from (10.4) that

$$\begin{aligned} {{\mathsf {W}}}_{\kappa }^s\doteq {{\mathsf {W}}}_{p}^s({{\mathbb {X}}}_\kappa ) \qquad {{\mathfrak {K}}}\text {-unif.} \end{aligned}$$
(10.5)

Due to (10.1) and (10.2), we get, with an analogous definition of \({\mathsf {BC}}\),

$$\begin{aligned} {\mathsf {BC}}_\kappa ^k :={\mathsf {BC}}^k({{\mathbb {X}}}_\kappa ,g_\kappa ) \doteq {\mathsf {BC}}^k({{\mathbb {X}}}_\kappa ):={\mathsf {BC}}^k({{\mathbb {X}}}_\kappa ,g_m) \qquad {{\mathfrak {K}}}\text {-unif.} \end{aligned}$$
(10.6)

Using this, (6.5), and (6.6), we infer that

$$\begin{aligned} {{\mathsf {W}}}_{\kappa }^{s/\varvec{2}}\doteq {{\mathsf {W}}}_{p}^{s/\varvec{2}}({{\mathbb {X}}}_\kappa \times J) ,\quad {\mathsf {BC}}_\kappa ^{k/\varvec{2}}\doteq {\mathsf {BC}}^{k/\varvec{2}}({{\mathbb {X}}}_\kappa \times J) \qquad {{\mathfrak {K}}}\text {-unif}. \end{aligned}$$
(10.7)

Next, we note that (3.1), (7.1), (9.4), and (10.2) imply

$$\begin{aligned} a_\kappa \in {\mathsf {BC}}_\kappa ^{1/\varvec{2}} \qquad {{\mathfrak {K}}}\text {-unif}. \end{aligned}$$
(10.8)

In local coordinates, \(\mathop {{\mathrm{grad}}}\nolimits u=g^{ij}\partial _ju\,\partial /\partial x^i\). Using this, (3.3), and (10.8), we deduce that

$$\begin{aligned} \Vert \mathop {{\mathrm{grad}}}\nolimits _\kappa a_\kappa \Vert _{{\mathsf {BC}}_\kappa (T{{\mathbb {X}}}_\kappa \times J)}\le c \qquad {{\mathfrak {K}}}\text {-unif.} \end{aligned}$$
(10.9)

Given a vector field \(Y=Y^i\partial /\partial x^i\), we have \(\mathop {{\mathrm{div}}}\nolimits Y=\sqrt{g}^{\,-1}\partial _i\bigl (\sqrt{g}\,Y^i\bigr )\). Combining this with the preceding estimates, we verify that

$$\begin{aligned} {{\mathcal {A}}}_\kappa \in {{\mathcal {L}}}({{\mathsf {W}}}_{\kappa }^{2/\varvec{2}},{{\mathsf {W}}}_{\kappa }^{0/\varvec{2}}) \qquad {{\mathfrak {K}}}\text {-unif.} \end{aligned}$$
(10.10)

Now we consider Sobolev–Slobodeckii spaces on \(\partial {{\mathbb {H}}}^m\simeq {{\mathbb {R}}}^{m-1}\). We set for \(\kappa \in {{\mathfrak {K}}}_\varGamma \). Then,

(10.11)

and

$$\begin{aligned} \partial W_{\kappa }:=(1-\delta _\kappa )\partial _0W_{\kappa }+\delta _\kappa \partial _1W_{\kappa } ,\qquad \kappa \in {{\mathfrak {K}}}_\varGamma . \end{aligned}$$

Suppose \(0<\sigma<s<1\). Then,

$$\begin{aligned} \begin{aligned} BC^{1/\varvec{2}}({{\mathbb {R}}}^{m-1}\times J)&=C\bigl (J,BC^1({{\mathbb {R}}}^{m-1})\bigr )\cap C^{1/2}\bigl (J,BC({{\mathbb {R}}}^{m-1})\bigr )\\&=C\bigl (J,BC^1({{\mathbb {R}}}^{m-1})\bigr )\cap C^{1/2}\bigl (J,BUC({{\mathbb {R}}}^{m-1})\bigr )\\&\hookrightarrow B\bigl (J,BUC^s({{\mathbb {R}}}^{m-1})\bigr ) \cap C^{s/2}\bigl (J,BUC({{\mathbb {R}}}^{m-1})\bigr )\\&\doteq BUC^{s/\varvec{2}}({{\mathbb {R}}}^{m-1}\times J) \hookrightarrow b_\infty ^{\sigma /\varvec{2}}({{\mathbb {R}}}^{m-1}\times J), \end{aligned} \end{aligned}$$
(10.12)

where the \(BUC^\rho \) are the usual Hölder spaces and \(b_\infty ^{\sigma /\varvec{2}}\) is an anisotropic little Besov space. Indeed, the first embedding follows from the mean value theorem and the use of the localized Hölder norm (cf. [13, (VII.3.7.1)]). For the norm equivalence, we refer to definition (VII.3.6.4) and Remark VII.3.6.4. The last embedding is implied by Lemma VII.2.2.3 and Theorem VII.7.3.4 of [13]. By Theorem VII.2.7.4 in [13],

$$\begin{aligned} \begin{aligned} {}&b_\infty ^{\sigma /\varvec{2}}({{\mathbb {R}}}^{m-1}\times J)\\&\quad {} \doteq \bigl (BUC^{2/\varvec{2}}({{\mathbb {R}}}^{m-1}\times J),\, BUC^{0/\varvec{2}}({{\mathbb {R}}}^{m-1}\times J)\bigr )_{\sigma /2,\infty }^0. \end{aligned} \end{aligned}$$
(10.13)

We deduce from (3.3) and (10.2) that

\({{\mathfrak {K}}}_\varGamma \)-unif. Now it follows from (10.12) and (10.13) that

$$\begin{aligned} BC_\kappa ^{1/\varvec{2}}(\partial {{\mathbb {H}}}^m\times J) \hookrightarrow b_{\infty ,\kappa }^{\sigma /\varvec{2}}(\partial {{\mathbb {H}}}^m\times J) \qquad {{\mathfrak {K}}}_\varGamma \text {-unif.} \end{aligned}$$

Since, trivially, \(\gamma \in {{\mathcal {L}}}\bigl (BC^{1/\varvec{2}}({{\mathbb {H}}}^m\times J), \,BC^{1/\varvec{2}}(\partial {{\mathbb {H}}}^m\times J)\bigr )\), it is now clear that

$$\begin{aligned} \gamma a_\kappa \in b_{\infty ,\kappa }^{\sigma /\varvec{2}}(\partial {{\mathbb {H}}}^m\times J) \qquad {{\mathfrak {K}}}_\varGamma \text {-unif.} \end{aligned}$$
(10.14)

In local coordinates,

$$\begin{aligned} \nu _\kappa =\frac{g_\kappa ^{1i}}{\sqrt{g_\kappa ^{11}}}\,\frac{\partial }{\partial x^i}. \end{aligned}$$
(10.15)
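It is straightforward to check that (10.15) defines the inner unit normal (a one-line verification, assuming the usual convention \({{\mathbb {H}}}^m=[0,\infty )\times {{\mathbb {R}}}^{m-1}\)): writing \(g:=g_\kappa \) and using \(g_{ij}g^{1j}=\delta _i^1\),

$$\begin{aligned} \Bigl (\nu _\kappa \,\Big |\,\frac{\partial }{\partial x^j}\Bigr )_{g_\kappa } =g_{ij}\,\frac{g^{1i}}{\sqrt{g^{11}}} =\frac{\delta _j^1}{\sqrt{g^{11}}} ,\qquad |\nu _\kappa |_{g_\kappa }^2 =g_{ij}\,\frac{g^{1i}g^{1j}}{g^{11}} =\frac{g^{11}}{g^{11}}=1, \end{aligned}$$

so \(\nu _\kappa \) is \(g_\kappa \)-orthogonal to \(\partial {{\mathbb {H}}}^m\), has unit length, and points into \({{\mathbb {H}}}^m\), since \(\nu _\kappa ^1=\sqrt{g_\kappa ^{11}}>0\).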

Hence, \(\delta _\kappa {{\mathcal {B}}}_\kappa u=\delta _\kappa b_\kappa ^i\gamma \partial _iu\) with \(b_\kappa ^i:=\gamma a_\kappa \,\nu _\kappa ^i\), where it follows from (3.3), (9.5), and \(\Vert a_\kappa \Vert _\infty \le c\) that

$$\begin{aligned} 1/c\le b_\kappa ^1=\gamma a_\kappa \sqrt{g_\kappa ^{11}}\le c \end{aligned}$$
(10.16)

and, from (10.14),

$$\begin{aligned} \Vert b_\kappa ^i\Vert _{b_{\infty ,\kappa }^{\sigma /\varvec{2}}(\partial {{\mathbb {H}}}^m\times J)}\le c ,\qquad 1\le i\le m, \end{aligned}$$

for \(\kappa \in {{\mathfrak {K}}}_\varGamma \). Thus, it is a consequence of (10.14), (10.16), and the boundary operator retraction theorem [13, Theorem VIII.2.2.1] that

$$\begin{aligned} {{\mathcal {B}}}_\kappa \text { is a }{{\mathfrak {K}}}_\varGamma \text {-uniform retraction}^{4} \text { from }W_{\kappa }^{2/\varvec{2}}\text { onto }\partial W_{\kappa }. \end{aligned}$$
(10.17)

Clearly, ‘\({{\mathfrak {K}}}_\varGamma \)-uniform’ means that there exists a coretraction \({{\mathcal {B}}}_\kappa ^c\) such that \(\Vert {{\mathcal {B}}}_\kappa \Vert \) and \(\Vert {{\mathcal {B}}}_\kappa ^c\Vert \) are \({{\mathfrak {K}}}_\varGamma \)-uniformly bounded.

Obviously,

$$\begin{aligned} \partial _SW_{\kappa }:=\partial _0W_{\kappa }\oplus \partial _1W_{\kappa } ,\qquad \kappa \in {{\mathfrak {K}}}_S. \end{aligned}$$
(10.18)

If, in the preceding arguments, we replace the boundary operator retraction theorem by Theorem VIII.2.3.3 of [13], we find that

$$\begin{aligned} {{\mathcal {C}}}_\kappa \text { is a }{{\mathfrak {K}}}_S\text {-uniform retraction from } {{\bar{W}}}_{\kappa }^{2/\varvec{2}}\text { onto }\partial _S W_{\kappa }. \end{aligned}$$
(10.19)

It follows from (10.5) that

$$\begin{aligned} \gamma _0{{\mathsf {W}}}_{\kappa }\doteq \gamma _0{{\mathsf {W}}}_{p}({{\mathbb {X}}}_\kappa ) ,\qquad \kappa \in {{\mathfrak {K}}}. \end{aligned}$$
(10.20)

The anisotropic trace theorem ( [13, Corollary VII.4.6.2, Theorems VIII.1.2.9 and VIII.1.3.1]) implies that

$$\begin{aligned} \gamma _0\in {{\mathcal {L}}}\bigl (W_{p}^{2/\varvec{2}}({{\mathbb {X}}}\times J),B_p^{2-2/p}({{\mathbb {X}}})\bigr ) ,\qquad {{\mathbb {X}}}\in \{{{\mathbb {R}}}^m,{{\mathbb {H}}}^m\}, \end{aligned}$$
(10.21)

is a retraction. Using Theorems VII.2.7.4 and VII.2.8.3, definition VII.3.6.3 and Remark VII.3.6.4 of [13], we get

$$\begin{aligned} B_p^{2-2/p}({{\mathbb {X}}})\doteq W_{p}^{2-2/p}({{\mathbb {X}}}). \end{aligned}$$
(10.22)

Now we infer from (10.7), (10.20)–(10.22), and (10.5) that

$$\begin{aligned} \gamma _0\text { is a }{{\mathfrak {K}}}\text {-uniform retraction from } {{\mathsf {W}}}_{\kappa }^{2/\varvec{2}}\text { onto }\gamma _0{{\mathsf {W}}}_{\kappa }. \end{aligned}$$
(10.23)

11 Maximal regularity

First, we rewrite \(({{\mathcal {A}}},{{\mathcal {B}}})\) in terms of covariant derivatives. For this, we define

$$\begin{aligned} a^\natural \in {\bar{BC}}^{1/\varvec{2}}\bigl (T_0^2(M\setminus S)\times J\bigr ) \end{aligned}$$

in local coordinates by

$$\begin{aligned} a^\natural :=ag^{ij}\frac{\partial }{\partial x^i}\otimes \frac{\partial }{\partial x^j}. \end{aligned}$$

Then, we get

(11.1)

where \(\varDelta \) is the Laplace–Beltrami operator (e.g., [36, (2.4.10)]). Consequently,

$$\begin{aligned} {{\mathfrak {s}}}{{\mathcal {A}}}(q,t,\xi )=a(q,t)\,|\xi |^2_{g^*(q)} ,\qquad q\in M\setminus S ,\quad \xi \in T_q^*(M\setminus S) ,\quad t\in J. \end{aligned}$$
(11.2)

For the boundary operator, we find

$$\begin{aligned} {{\mathcal {B}}}^1u=\gamma a(\nu ^\flat |\gamma \nabla u)_{g^*}. \end{aligned}$$
(11.3)

Hence,

$$\begin{aligned} {{\mathfrak {s}}}{{\mathcal {B}}}^1u(q,t,\xi )=a(q,t)\bigl (\nu ^\flat (q)\big |\xi \bigr )_{g^*(q)} ,\qquad q\in \varGamma ,\quad \xi \in T_q^*M ,\quad t\in J. \end{aligned}$$
(11.4)

Clearly, these formulas apply to any oriented Riemannian manifold, thus to \((\partial {{\mathbb {H}}}^m,g_\kappa )\), \(\kappa \in {{\mathfrak {K}}}_\varGamma \).

It follows from (9.5) and (11.2) that

$$\begin{aligned} {{\mathfrak {s}}}{{\mathcal {A}}}_\kappa (x,t,\xi ) =a_\kappa (x,t)\,|\xi |_{g_\kappa ^*(x)}^2\ge \underline{\alpha }\,|\xi |_{g_\kappa ^*(x)}^2 \end{aligned}$$

for  \(x\in {{\mathbb {X}}}_\kappa \),  \(\xi \in T_x^*{{\mathbb {X}}}_\kappa \),  \(t\in J\), and \(\kappa \in {{\mathfrak {K}}}\). Hence,

$$\begin{aligned} {{\mathcal {A}}}_\kappa \text { is uniformly normally elliptic, unif.\ w.r.t.\ }\kappa \in {{\mathfrak {K}}}\text { and }t\in J. \end{aligned}$$
(11.5)

We begin with the full-space problem.

Proposition 11.1

It holds

$$\begin{aligned} (\partial _t+{{\mathcal {A}}}_\kappa ,\,\gamma _0) \in {{\mathcal {L}}}{{\mathrm{is}}}(W_{\kappa }^{2/\varvec{2}},\,W_{\kappa }^{0/\varvec{2}}\times \gamma _0W_{\kappa }) \qquad {{\mathfrak {K}}}_0\text {-unif.}, \end{aligned}$$

that is,

$$\begin{aligned} \Vert (\partial _t+{{\mathcal {A}}}_\kappa ,\,\gamma _0)\Vert +\Vert (\partial _t+{{\mathcal {A}}}_\kappa ,\,\gamma _0)^{-1}\Vert \le c ,\qquad \kappa \in {{\mathfrak {K}}}_0. \end{aligned}$$

Proof

It is obvious from (10.10) and (10.23) that

$$\begin{aligned} (\partial _t+{{\mathcal {A}}}_\kappa ,\,\gamma _0) \in {{\mathcal {L}}}(W_{\kappa }^{2/\varvec{2}},\,W_{\kappa }^{0/\varvec{2}}\times \gamma _0 W_{\kappa }) \qquad {{\mathfrak {K}}}_0\text {-unif.} \end{aligned}$$

Due to (11.5), the assertion now follows from Corollary 9.7 in [16], Theorem III.4.10.8 in [7], and (the proof of) Theorem 7.1 in [8]. (See [15] for a different demonstration.)\(\square \)

Next, we study the case where \(\kappa \in {{\mathfrak {K}}}_\varGamma \). For this, we first establish the validity of the uniform LS condition. Henceforth, it is always assumed that

$$\begin{aligned} \zeta =(x,\xi ,\lambda )\text { with }x\in \partial {{\mathbb {H}}}^m\text { and }(\xi ,\lambda )\in {{\mathbb {B}}}(x). \end{aligned}$$
(11.6)

We fix any \(t\in J\) and omit it from the notation. The reader will easily check that all estimates are uniform with respect to \(t\in J\). From (11.2), we see that equation (8.4) has the form

$$\begin{aligned} \ddot{v}=\rho _\kappa ^2(\zeta )v\text { on }{{\mathbb {R}}}, \end{aligned}$$
(11.7)

where

$$\begin{aligned} \rho _\kappa (\zeta ) :=\sqrt{\frac{\lambda }{a_\kappa (x)}+|\xi |_{g_\kappa ^*(x)}^2}\in {{\mathbb {C}}}\end{aligned}$$
(11.8)

with the principal value of the square root.

Suppose \(|\xi |_{g_\kappa ^*(x)}^2\le 1/2\). Then, \(\zeta \in {{\mathbb {B}}}(x)\) implies \(|\rho _\kappa ^2(\zeta )|\ge 1/\bigl (2a_\kappa (x)\bigr )\). Otherwise, \(\mathop {{\mathrm{Re}}}\nolimits \rho _\kappa ^2(\zeta )\ge 1/2\). Thus, since \(\Vert a_\kappa \Vert _\infty \le c\), we find a \(\beta >0\) such that

$$\begin{aligned} \mathop {{\mathrm{Re}}}\nolimits \rho _\kappa (\zeta )\ge \beta ,\qquad \kappa \in {{\mathfrak {K}}}_\varGamma . \end{aligned}$$
(11.9)
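A short way to obtain (11.9) from the two cases just distinguished uses the identity \(\mathop {{\mathrm{Re}}}\nolimits \sqrt{z}=\sqrt{\bigl (|z|+\mathop {{\mathrm{Re}}}\nolimits z\bigr )/2}\) for the principal square root (a sketch; \(c\) denotes, as above, a uniform bound for \(a_\kappa \)):

$$\begin{aligned} \mathop {{\mathrm{Re}}}\nolimits \rho _\kappa (\zeta ) =\sqrt{\tfrac{1}{2}\bigl (|\rho _\kappa ^2(\zeta )|+\mathop {{\mathrm{Re}}}\nolimits \rho _\kappa ^2(\zeta )\bigr )} \ge \left\{ \begin{array}{ll} \sqrt{|\rho _\kappa ^2(\zeta )|/2}\ge 1/(2\sqrt{c}\,), &{}\quad \text {if }|\xi |_{g_\kappa ^*(x)}^2\le 1/2,\\ \sqrt{\mathop {{\mathrm{Re}}}\nolimits \rho _\kappa ^2(\zeta )}\ge 1/\sqrt{2}, &{}\quad \text {if }|\xi |_{g_\kappa ^*(x)}^2\ge 1/2, \end{array} \right. \end{aligned}$$

so that (11.9) holds with \(\beta :=\min \bigl (1/\sqrt{2},\,1/(2\sqrt{c}\,)\bigr )\).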

From \(a_\kappa \ge \underline{\alpha }\), we infer that \(|\rho _\kappa (\zeta )|\le c\) for \(\kappa \in {{\mathfrak {K}}}_\varGamma \). Set

$$\begin{aligned} v_\kappa (\zeta )(s):=e^{-\rho _\kappa (\zeta )s} ,\qquad s\ge 0. \end{aligned}$$
(11.10)

Then, \({{\mathbb {C}}}v_\kappa (\zeta )\) is the subspace of \(C_0({{\mathbb {R}}}_+,{{\mathbb {C}}})\) of decaying solutions of (11.7).

Let \(\kappa \in {{\mathfrak {K}}}_{\varGamma _0}\) so that \({{\mathcal {B}}}_\kappa =\gamma \), the Dirichlet operator on \(\partial {{\mathbb {H}}}^m\). Then (recall (8.6)),  \(R_\kappa (\zeta )\eta =\eta v_\kappa (\zeta )\). Thus, by (11.9),

$$\begin{aligned} \Vert R_\kappa (\zeta )\Vert _{{{\mathcal {L}}}({{\mathbb {C}}},C_0({{\mathbb {R}}}_+,{{\mathbb {C}}}))}\le 1 ,\qquad \kappa \in {{\mathfrak {K}}}_{\varGamma _0}. \end{aligned}$$

Now assume \(\kappa \in {{\mathfrak {K}}}_{\varGamma _1}\). Then, we see from (11.3) and (11.10) that

$$\begin{aligned} {{\mathcal {B}}}_\kappa (\partial ;\zeta )v_\kappa (\zeta )(0)=-ia_\kappa (x)\rho _\kappa (\zeta ). \end{aligned}$$
(11.11)
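Identity (11.11) is a one-line consequence of (11.4) and (11.10): since \(\xi \perp \nu ^\flat (x)\) and \(|\nu ^\flat (x)|_{g_\kappa ^*(x)}=1\) (the conormal being computed for \(g_\kappa \)), the boundary symbol acts on \(v_\kappa (\zeta )\) by

$$\begin{aligned} {{\mathcal {B}}}_\kappa (\partial ;\zeta )v_\kappa (\zeta )(0) =a_\kappa (x)\bigl (\nu ^\flat (x)\,\big |\,\xi +i\nu ^\flat (x)\partial \bigr )_{g_\kappa ^*(x)}e^{-\rho _\kappa (\zeta )s}\Big |_{s=0} =ia_\kappa (x)\,\partial _se^{-\rho _\kappa (\zeta )s}\Big |_{s=0} =-ia_\kappa (x)\rho _\kappa (\zeta ). \end{aligned}$$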

Consequently,

$$\begin{aligned} \Vert R_\kappa (\zeta )\Vert _{{{\mathcal {L}}}({{\mathbb {C}}},C_0({{\mathbb {R}}}_+,{{\mathbb {C}}}))} =\frac{1}{a_\kappa (x)\,|\rho _\kappa (\zeta )|} \le \frac{1}{\underline{\alpha }\beta } ,\qquad \kappa \in {{\mathfrak {K}}}_{\varGamma _1}. \end{aligned}$$

This proves that \(({{\mathcal {A}}}_\kappa ,{{\mathcal {B}}}_\kappa )\) satisfies the uniform parameter-dependent LS condition, unif. w.r.t. \(\kappa \in {{\mathfrak {K}}}_\varGamma \) and \(t\in J\).

Proposition 11.2

It holds

$$\begin{aligned} \bigl (\partial _t+{{\mathcal {A}}}_\kappa ,\,({{\mathcal {B}}}_\kappa ,\gamma _0)\bigr ) \in {{\mathcal {L}}}{{\mathrm{is}}}\bigl (W_{\kappa }^{2/\varvec{2}}, \,W_{\kappa }^{0/\varvec{2}} \oplus [\partial W_{\kappa }\oplus \gamma _0W_{\kappa }]_{{{\mathcal {B}}}_\kappa }\bigr ) \qquad {{\mathfrak {K}}}_\varGamma \text {-unif.} \end{aligned}$$

Proof

We deduce from (10.17), (10.23), and [13, Example VIII.1.8.6] that \([\partial W_{\kappa }\oplus \gamma _0W_{\kappa }]_{{{\mathcal {B}}}_\kappa }\) is a well-defined closed linear subspace of \(\partial W_{\kappa }\oplus \gamma _0W_{\kappa }\) and, using also (10.10),

$$\begin{aligned} \bigl (\partial _t+{{\mathcal {A}}}_\kappa ,\,({{\mathcal {B}}}_\kappa ,\gamma _0)\bigr ) \in {{\mathcal {L}}}\bigl (W_{\kappa }^{2/\varvec{2}}, \,W_{\kappa }^{0/\varvec{2}} \oplus [\partial W_{\kappa }\oplus \gamma _0W_{\kappa }]_{{{\mathcal {B}}}_\kappa }\bigr ) \qquad {{\mathfrak {K}}}_\varGamma \text {-unif.} \end{aligned}$$

The uniform LS condition now implies the remaining assertions; for this, we refer to [15].\(\square \)

Nonhomogeneous linear parabolic boundary value problems (of arbitrary order and in a Banach-space-valued setting) on Euclidean domains have been studied in [23]. It follows, in particular, from Proposition 6.4 therein that the isomorphism assertion is true for each \(\kappa \in {{\mathfrak {K}}}_\varGamma \). However, it is not obvious whether the \({{\mathfrak {K}}}_\varGamma \)-uniformity statement also follows from the results in [23]. For this, one would have to check carefully the dependence of all relevant estimates on the various parameters involved, which would be no easy task. (The same observation applies to Proposition 11.1.) In [15], we present an alternative proof which takes care of the needed uniform estimates.

Lastly, we assume that \(\kappa \in {{\mathfrak {K}}}_S\). We set, once more suppressing a fixed \(t\in J\),

$$\begin{aligned} a_\kappa ^1(x):=a_\kappa (x) ,\quad a_\kappa ^2(x):=a_\kappa (-x) ,\qquad x\in {{\mathbb {H}}}^m, \end{aligned}$$

and

$$\begin{aligned} {{\mathfrak {a}}}_\kappa :={\mathop {{\mathrm{diag}}}\nolimits }[a_\kappa ^1,a_\kappa ^2]:{{\mathbb {H}}}^m\rightarrow {{\mathbb {C}}}^{2\times 2}. \end{aligned}$$

Then,

$$\begin{aligned} {{\mathfrak {A}}}_\kappa {{\mathfrak {u}}}:=-\mathop {{\mathrm{div}}}\nolimits _\kappa ({{\mathfrak {a}}}_\kappa \mathop {{\mathrm{grad}}}\nolimits _\kappa {{\mathfrak {u}}}) ,\qquad {{\mathfrak {u}}}=(u^1,u^2). \end{aligned}$$

Furthermore, \({{\mathfrak {B}}}_\kappa =({{\mathfrak {B}}}_\kappa ^0,{{\mathfrak {B}}}_\kappa ^1)\), where

$$\begin{aligned} \begin{aligned} {{\mathfrak {B}}}_\kappa ^0{{\mathfrak {u}}}&:=\gamma u^1-\gamma u^2,\\ {{\mathfrak {B}}}_\kappa ^1{{\mathfrak {u}}}&:=\bigl (\nu _\kappa \big |\gamma (a_\kappa ^1\mathop {{\mathrm{grad}}}\nolimits _\kappa u^1+a_\kappa ^2\mathop {{\mathrm{grad}}}\nolimits _\kappa u^2) \bigr )_{g_\kappa } \end{aligned} \end{aligned}$$
(11.12)

on \(\partial {{\mathbb {H}}}^m\).

Clearly,

$$\begin{aligned} \sigma \bigl ({{\mathfrak {a}}}_\kappa (x)\bigr )\subset [\mathop {{\mathrm{Re}}}\nolimits z\ge \underline{\alpha }] ,\qquad x\in {{\mathbb {H}}}^m. \end{aligned}$$

Thus, \({{\mathfrak {A}}}_\kappa \) is uniformly normally elliptic on \({{\mathbb {H}}}^m\), \({{\mathfrak {K}}}_S\)-unif.

We define \(\rho _\kappa ^i\),  \(i=1,2\), by replacing \(a_\kappa \) in (11.8) by \(a_\kappa ^i\) and introduce \(v_\kappa ^i\) by changing \(\rho _\kappa \) in (11.10) to \(\rho _\kappa ^i\). Then,

$$\begin{aligned} {{\mathbb {C}}}v_\kappa ^1\oplus {{\mathbb {C}}}v_\kappa ^2 \end{aligned}$$

is the subspace of \(C_0({{\mathbb {R}}}_+,{{\mathbb {C}}}^2)\) of decaying solutions of

$$\begin{aligned} \bigl (\lambda +{{\mathfrak {s}}}{{\mathfrak {A}}}_\kappa (x,\,\xi +i\nu _\kappa (x)\partial )\bigr ){{\mathfrak {v}}}=0 ,\qquad x\in \partial {{\mathbb {H}}}^m ,\quad {{\mathfrak {v}}}=(v^1,v^2). \end{aligned}$$

From (11.11) and (11.12), we see that the initial conditions in (8.5) are in the present case

$$\begin{aligned} \begin{aligned} v^1(0)-v^2(0)&=\eta ^1,\\ a_\kappa ^1(x)\rho _\kappa ^1(x) v^1(0)+a_\kappa ^2(x)\rho _\kappa ^2(x)v^2(0)&=i\eta ^2. \end{aligned} \end{aligned}$$

Omitting \(x\in \partial {{\mathbb {H}}}^m\), the solution of this system is

$$\begin{aligned} \begin{aligned} v_\kappa ^1(0)&=v_\kappa ^2(0)+\eta ^1,\\ v_\kappa ^2(0)&=\frac{1}{a_\kappa ^1\rho _\kappa ^1+a_\kappa ^2\rho _\kappa ^2} (i\eta ^2-a_\kappa ^1\rho _\kappa ^1\eta ^1). \end{aligned} \end{aligned}$$

From this, the uniform boundedness of \(a_\kappa \) and \(\rho _\kappa ^i\), and \(\mathop {{\mathrm{Re}}}\nolimits (a_\kappa ^1\rho _\kappa ^1+a_\kappa ^2\rho _\kappa ^2)\ge \underline{\alpha }\beta \), it follows that \(({{\mathfrak {A}}}_\kappa ,{{\mathfrak {B}}}_\kappa )\) satisfies the uniform parameter-dependent LS condition, unif. w.r.t. \(\kappa \in {{\mathfrak {K}}}_S\) and \(t\in J\).
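In more detail, the uniform resolvent bound behind this conclusion can be sketched as follows (with \(\beta \) from (11.9), \(c\ge \Vert a_\kappa \Vert _\infty \), and \(|\rho _\kappa ^i|\le c\) as noted above): the coefficients satisfy

$$\begin{aligned} |v_\kappa ^2(0)| \le \frac{|\eta ^2|+a_\kappa ^1|\rho _\kappa ^1|\,|\eta ^1|}{\mathop {{\mathrm{Re}}}\nolimits (a_\kappa ^1\rho _\kappa ^1+a_\kappa ^2\rho _\kappa ^2)} \le \frac{|\eta ^2|+c^2|\eta ^1|}{\underline{\alpha }\beta } ,\qquad |v_\kappa ^1(0)|\le |v_\kappa ^2(0)|+|\eta ^1|, \end{aligned}$$

and, since \(\mathop {{\mathrm{Re}}}\nolimits \rho _\kappa ^i\ge \beta >0\), the decaying solution of (8.5) obeys \(\sup _{s\ge 0}\bigl |\bigl (R_\kappa (\zeta )\eta \bigr )(s)\bigr | \le c\bigl (|v_\kappa ^1(0)|+|v_\kappa ^2(0)|\bigr )\le c\,|\eta |\), uniformly in the indicated parameters.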

Proposition 11.3

It holds

$$\begin{aligned} \bigl (\partial _t+{{\mathcal {A}}}_\kappa ,\ ({{\mathcal {C}}}_\kappa ,\gamma _0)\bigr ) \in {{\mathcal {L}}}{{\mathrm{is}}}\bigl ({{\bar{W}}}_{\kappa }^{2/\varvec{2}}, \,{{\bar{W}}}_{\kappa }^{0/\varvec{2}} \oplus [\partial _S{{\bar{W}}}_{\kappa }\oplus \gamma _0{{\bar{W}}}_{\kappa }]_{{{\mathcal {C}}}_\kappa }\bigr ) \end{aligned}$$

unif. w.r.t. \(\kappa \in {{\mathfrak {K}}}_S\) and \(t\in J\).

Proof

Set \({{\mathfrak {u}}}(x):=\bigl (u(x),u(-x)\bigr )\) for \(x\in {{\mathbb {H}}}^m\) and \({{\bar{{{\mathfrak {W}}}}}}_\kappa ^s:={{\bar{W}}}_\kappa ^s\oplus {{\bar{W}}}_\kappa ^s\), etc. Then, the assertion is true iff

$$\begin{aligned} \bigl (\partial _t+{{\mathfrak {A}}}_\kappa ,\ ({{\mathfrak {B}}}_\kappa ,\gamma _0)\bigr ) \in {{\mathcal {L}}}{{\mathrm{is}}}\bigl ({{\bar{{{\mathfrak {W}}}}}}_\kappa ^{2/\varvec{2}}, \,{{\bar{{{\mathfrak {W}}}}}}_\kappa ^{0/\varvec{2}} \oplus [\partial _S{{\bar{{{\mathfrak {W}}}}}}_\kappa \oplus \gamma _0{{\bar{{{\mathfrak {W}}}}}}_\kappa ]_{{{\mathfrak {B}}}_\kappa }\bigr ) \qquad {{\mathfrak {K}}}_S\text {-unif.} \end{aligned}$$

By the preceding considerations, the proof of Proposition 11.2 applies verbatim to the system for \({{\mathfrak {u}}}\). This proves the claim.\(\square \)

12 Localizations

We presuppose (7.3) and fix an S-adapted atlas for M with \(\mathop {{\mathrm{diam}}}\nolimits (U_{\kappa })<\varepsilon /2\) for \(\kappa \in {{\mathfrak {K}}}_S\). Then,

$$\begin{aligned} {{\mathfrak {N}}}(\kappa ):=\{\,{\tilde{\kappa }}\in {{\mathfrak {K}}}\ ;\ U_{\kappa }\cap U_{\tilde{\kappa }}\ne \emptyset \,\} ,\qquad \kappa \in {{\mathfrak {K}}}, \end{aligned}$$

\({{\mathfrak {N}}}_\varGamma (\kappa ):={{\mathfrak {N}}}(\kappa )\cap {{\mathfrak {K}}}_\varGamma \), and \({{\mathfrak {N}}}_S(\kappa ):={{\mathfrak {N}}}(\kappa )\cap {{\mathfrak {K}}}_S\). By the finite multiplicity of \({{\mathfrak {K}}}\),

$$\begin{aligned} \mathop {{\mathrm{card}}}\nolimits \bigl ({{\mathfrak {N}}}(\kappa )\bigr )\le c ,\qquad \kappa \in {{\mathfrak {K}}}. \end{aligned}$$
(12.1)

We set for \(\kappa \in {{\mathfrak {K}}}\) and \({\tilde{\kappa }}\in {{\mathfrak {N}}}(\kappa )\)

$$\begin{aligned} S_{\kappa \tilde{\kappa }}u:=\kappa _*{\tilde{\kappa }}^*u=u\circ ({\tilde{\kappa }}\circ \kappa ^{-1}) ,\qquad u\in {{\mathsf {W}}}_{\tilde{\kappa }}^{0/\varvec{2}}. \end{aligned}$$

It follows from (3.1)(ii) that, given \(s\in [0,2]\),

$$\begin{aligned} S_{\kappa \tilde{\kappa }}\in {{\mathcal {L}}}({{\mathsf {W}}}_{\tilde{\kappa }}^{s/\varvec{2}},{{\mathsf {W}}}_{\kappa }^{s/\varvec{2}}) \qquad {{\mathfrak {K}}}\text {-unif.} \end{aligned}$$
(12.2)

Interpreting \({{\mathfrak {K}}}\) as an index set, we put

$$\begin{aligned} {\varvec{{{\mathsf {W}}}}}^{s/\varvec{2}}:=\prod _{\kappa \in {{\mathfrak {K}}}}{{\mathsf {W}}}_{\kappa }^{s/\varvec{2}}, \end{aligned}$$
(12.3)

endowed with the product topology. For \(\alpha \in \{0,\varGamma \}\), we set

$$\begin{aligned} {\varvec{W}}^{s/\varvec{2}}[\alpha ]:=\prod _{\kappa \in {{\mathfrak {K}}}_\alpha }W_{\kappa }^{s/\varvec{2}} ,\quad {{\bar{{\varvec{W}}}}}^{s/\varvec{2}}:=\prod _{\kappa \in {{\mathfrak {K}}}_S}{{\bar{W}}}_{\kappa }^{s/\varvec{2}}. \end{aligned}$$

Since \({{\mathfrak {K}}}\) is the disjoint union of \({{\mathfrak {K}}}_0\), \({{\mathfrak {K}}}_\varGamma \), and \({{\mathfrak {K}}}_S\),

$$\begin{aligned} {\varvec{{{\mathsf {W}}}}}^{s/\varvec{2}} ={\varvec{W}}^{s/\varvec{2}}[0]\oplus {\varvec{W}}^{s/\varvec{2}}[\varGamma ]\oplus {{\bar{{\varvec{W}}}}}^{s/\varvec{2}}. \end{aligned}$$
(12.4)

A similar definition and direct sum decomposition applies to \(\gamma _0{\varvec{{{\mathsf {W}}}}}\). We also set

$$\begin{aligned} \partial {\varvec{W}}:=\prod _{\kappa \in {{\mathfrak {K}}}_\varGamma }\partial W_{\kappa } ,\quad \partial _S{\varvec{W}}:=\prod _{\kappa \in {{\mathfrak {K}}}_S}\partial _S{{\bar{W}}}_{\kappa }. \end{aligned}$$

We define linear operators \({{\mathcal {R}}}\) and \({{\mathcal {R}}}^c\) by

$$\begin{aligned} {{\mathcal {R}}}{\varvec{u}}:=\sum _\kappa \pi _\kappa \kappa ^*u_\kappa ,\quad {{\mathcal {R}}}^cu:=\bigl (\kappa _*(\pi _\kappa u)\bigr )_{\kappa \in {{\mathfrak {K}}}} \end{aligned}$$
(12.5)

for \({\varvec{u}}=(u_\kappa )\in {\varvec{{{\mathsf {W}}}}}^{0/\varvec{2}}\) and \(u\in L_1\bigl (J,L_{1,\mathrm{loc}}(M\setminus S)\bigr )\), respectively. Note that the sum is locally finite and \(\pi _\kappa \) is identified with the multiplication operator \(v\mapsto \pi _\kappa v\).

We want to evaluate \({{\mathcal {A}}}\circ {{\mathcal {R}}}{\varvec{u}}\) for \({\varvec{u}}\in {\varvec{{{\mathsf {W}}}}}^{2/\varvec{2}}\). Observe

$$\begin{aligned} {{\mathcal {A}}}(\pi _\kappa u)=\pi _\kappa {{\mathcal {A}}}u+[{{\mathcal {A}}},\pi _\kappa ]u ,\qquad u\in W_{p}^{2/\varvec{2}}, \end{aligned}$$

the commutator being given by

$$\begin{aligned}{}[{{\mathcal {A}}},\pi _\kappa ]u =-(\mathop {{\mathrm{grad}}}\nolimits \pi _\kappa |a\mathop {{\mathrm{grad}}}\nolimits u)-\mathop {{\mathrm{div}}}\nolimits (au\mathop {{\mathrm{grad}}}\nolimits \pi _\kappa ). \end{aligned}$$
(12.6)
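For the reader's convenience, (12.6) follows from the product rules for \(\mathop {{\mathrm{grad}}}\nolimits \) and \(\mathop {{\mathrm{div}}}\nolimits \), namely \(\mathop {{\mathrm{grad}}}\nolimits (\pi _\kappa u)=\pi _\kappa \mathop {{\mathrm{grad}}}\nolimits u+u\mathop {{\mathrm{grad}}}\nolimits \pi _\kappa \) and \(\mathop {{\mathrm{div}}}\nolimits (\varphi Y)=\varphi \mathop {{\mathrm{div}}}\nolimits Y+(\mathop {{\mathrm{grad}}}\nolimits \varphi \,|\,Y)\):

$$\begin{aligned} [{{\mathcal {A}}},\pi _\kappa ]u ={{\mathcal {A}}}(\pi _\kappa u)-\pi _\kappa {{\mathcal {A}}}u =-\mathop {{\mathrm{div}}}\nolimits \bigl (a\pi _\kappa \mathop {{\mathrm{grad}}}\nolimits u+au\mathop {{\mathrm{grad}}}\nolimits \pi _\kappa \bigr ) +\pi _\kappa \mathop {{\mathrm{div}}}\nolimits (a\mathop {{\mathrm{grad}}}\nolimits u) =-(\mathop {{\mathrm{grad}}}\nolimits \pi _\kappa \,|\,a\mathop {{\mathrm{grad}}}\nolimits u)-\mathop {{\mathrm{div}}}\nolimits (au\mathop {{\mathrm{grad}}}\nolimits \pi _\kappa ). \end{aligned}$$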

Thus, we get from (12.5)

$$\begin{aligned} {{\mathcal {A}}}{{\mathcal {R}}}{\varvec{u}}=\sum _\kappa {{\mathcal {A}}}(\pi _\kappa \kappa ^*u_\kappa ) =\sum _\kappa \pi _\kappa {{\mathcal {A}}}(\kappa ^*u_\kappa )+\sum _\kappa [{{\mathcal {A}}},\pi _\kappa ]\kappa ^*u_\kappa . \end{aligned}$$
(12.7)

By \({{\mathcal {A}}}(\kappa ^*u_\kappa )=\kappa ^*{{\mathcal {A}}}_\kappa u_\kappa \), the first sum equals \({{\mathcal {R}}}{\varvec{{{\mathsf {A}}}}}{\varvec{u}}\), where \({\varvec{{{\mathsf {A}}}}}:={\mathop {{\mathrm{diag}}}\nolimits }[{{\mathcal {A}}}_\kappa ]\). Using \(1=\sum _{\tilde{\kappa }}\pi _{\tilde{\kappa }}^2\), we find

$$\begin{aligned} \begin{aligned} {} [{{\mathcal {A}}},\pi _\kappa ]\kappa ^*u_\kappa&=\sum _{\tilde{\kappa }}\pi _{\tilde{\kappa }}\pi _{\tilde{\kappa }}[{{\mathcal {A}}},\pi _\kappa ]\kappa ^*u_\kappa \\&=\sum _{\tilde{\kappa }}\pi _{\tilde{\kappa }}{\tilde{\kappa }}^* \bigl (({\tilde{\kappa }}_*\pi _{\tilde{\kappa }}){\tilde{\kappa }}_*[{{\mathcal {A}}},\pi _\kappa ]{\tilde{\kappa }}^* ({\tilde{\kappa }}_*\kappa ^*)u_\kappa \bigr )\\&=\sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}(\kappa )}\pi _{\tilde{\kappa }}{\tilde{\kappa }}^* \bigl (({\tilde{\kappa }}_*\pi _{\tilde{\kappa }})[{{\mathcal {A}}}_{\tilde{\kappa }},S_{\tilde{\kappa }\kappa }(\kappa _*\pi _\kappa )]S_{\tilde{\kappa }\kappa }u_\kappa \bigr ). \end{aligned} \end{aligned}$$
(12.8)

Set

$$\begin{aligned} {{\mathcal {A}}}_{\kappa \tilde{\kappa }}:=(\kappa _*\pi _\kappa ) \bigl [{{\mathcal {A}}}_\kappa ,S_{\kappa \tilde{\kappa }}({\tilde{\kappa }}_*\pi _{\tilde{\kappa }})\bigr ]S_{\kappa \tilde{\kappa }}\chi . \end{aligned}$$

Then, (12.2), (12.6), (9.1), (10.8), and \(\kappa _*\pi _\kappa =(\kappa _*\pi _\kappa )\chi \) imply

$$\begin{aligned} \Vert {{\mathcal {A}}}_{\kappa \tilde{\kappa }}v\Vert _{{{\mathsf {W}}}_{\kappa }^{0/\varvec{2}}} \le c\,\Vert \chi v\Vert _{{{\mathsf {W}}}_{{\tilde{\kappa }}}^{1/\varvec{2}}} ,\qquad {\tilde{\kappa }}\in {{\mathfrak {N}}}(\kappa ) ,\quad \kappa \in {{\mathfrak {K}}}. \end{aligned}$$
(12.9)

We define \({\varvec{{{\mathsf {A}}}}}_\kappa ^0:{\varvec{{{\mathsf {W}}}}}^{1/\varvec{2}}\rightarrow {{\mathsf {W}}}_{\kappa }^{0/\varvec{2}}\) by

$$\begin{aligned} {\varvec{{{\mathsf {A}}}}}_\kappa ^0{\varvec{u}}:=\sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}(\kappa )}{{\mathcal {A}}}_{\kappa \tilde{\kappa }}u_{\tilde{\kappa }},\qquad {\varvec{u}}\in {\varvec{{{\mathsf {W}}}}}^{1/\varvec{2}} ,\quad \kappa \in {{\mathfrak {K}}}. \end{aligned}$$

Then, we deduce from (12.1) and (12.9) that

$$\begin{aligned} \Vert {\varvec{{{\mathsf {A}}}}}_\kappa ^0{\varvec{u}}\Vert _{{{\mathsf {W}}}_{\kappa }^{0/\varvec{2}}} \le c\sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}(\kappa )}\Vert \chi u_{\tilde{\kappa }}\Vert _{{\varvec{{{\mathsf {W}}}}}_{\tilde{\kappa }}^{1/\varvec{2}}} ,\qquad u\in {\varvec{{{\mathsf {W}}}}}^{1/\varvec{2}} ,\quad \kappa \in {{\mathfrak {K}}}. \end{aligned}$$
(12.10)

Moreover, \({\varvec{{{\mathsf {A}}}}}^0:=({\varvec{{{\mathsf {A}}}}}_\kappa ^0)_{\kappa \in {{\mathfrak {K}}}}\).

Now we sum (12.8) over \(\kappa \in {{\mathfrak {K}}}\) and interchange the order of summation to find that the second sum in (12.7) equals \({{\mathcal {R}}}{\varvec{{{\mathsf {A}}}}}^0{\varvec{u}}\). Thus, in total,

$$\begin{aligned} {{\mathcal {A}}}{{\mathcal {R}}}={{\mathcal {R}}}({\varvec{{{\mathsf {A}}}}}+{\varvec{{{\mathsf {A}}}}}^0). \end{aligned}$$
(12.11)

Similar considerations lead to

$$\begin{aligned} {{\mathcal {R}}}^c{{\mathcal {A}}}=({\varvec{{{\mathsf {A}}}}}+\tilde{{\varvec{{{\mathsf {A}}}}}}{}^0){{\mathcal {R}}}^c. \end{aligned}$$
(12.12)

Here, \(\tilde{{\varvec{{{\mathsf {A}}}}}}{}^0=(\tilde{{\varvec{{{\mathsf {A}}}}}}{}_\kappa ^0)_{\kappa \in {{\mathfrak {K}}}}\) with \(\tilde{{\varvec{{{\mathsf {A}}}}}}{}_\kappa ^0\in {{\mathcal {L}}}({\varvec{{{\mathsf {W}}}}}^{1/\varvec{2}},{{\mathsf {W}}}_{\kappa }^{0/\varvec{2}})\) satisfying

$$\begin{aligned} \Vert \tilde{{\varvec{{{\mathsf {A}}}}}}{}_\kappa ^0{\varvec{u}}\Vert _{{{\mathsf {W}}}_{\kappa }^{0/\varvec{2}}} \le c\sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}(\kappa )}\Vert \chi u_{\tilde{\kappa }}\Vert _{{\varvec{{{\mathsf {W}}}}}_{\tilde{\kappa }}^{1/\varvec{2}}} ,\qquad {\varvec{u}}\in {\varvec{{{\mathsf {W}}}}}^{1/\varvec{2}} ,\quad \kappa \in {{\mathfrak {K}}}. \end{aligned}$$
(12.13)

We turn to \(\varGamma \) and define for \(\kappa \in {{\mathfrak {K}}}_\varGamma \). Then,

(12.14)

and

(12.15)

Observe that

$$\begin{aligned} {{\mathcal {B}}}(\pi _\kappa u)=(\gamma \pi _\kappa ){{\mathcal {B}}}u+[{{\mathcal {B}}},\pi _\kappa ]u \end{aligned}$$

and

Similarly as above, we find

$$\begin{aligned} {{\mathcal {B}}}{{\mathcal {R}}}={{\mathcal {R}}}_\varGamma ({\varvec{B}}+{\varvec{B}}^0), \end{aligned}$$
(12.16)

where \({\varvec{B}}:={\mathop {{\mathrm{diag}}}\nolimits }[{{\mathcal {B}}}_\kappa ]\) and \({\varvec{B}}^0=({\varvec{B}}_\kappa ^0)_{\kappa \in {{\mathfrak {K}}}_\varGamma }\) with \({\varvec{B}}_\kappa ^0:{\varvec{W}}^{1/\varvec{2}}[\varGamma ]\rightarrow \{0\}\oplus \partial _1W_{\kappa }\) satisfying

$$\begin{aligned} \Vert {\varvec{B}}_\kappa ^0{\varvec{u}}\Vert _{\partial _1W_{\kappa }} \le c\sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}_\varGamma (\kappa )}\Vert \chi u_{\tilde{\kappa }}\Vert _{W_{\tilde{\kappa }}^{1/\varvec{2}}} ,\qquad \kappa \in {{\mathfrak {K}}}_\varGamma . \end{aligned}$$
(12.17)

Analogously,

$$\begin{aligned} {{\mathcal {R}}}_\varGamma ^c{{\mathcal {B}}}=({\varvec{B}}+\tilde{{\varvec{B}}}{}^0){{\mathcal {R}}}^c, \end{aligned}$$
(12.18)

where \(\tilde{{\varvec{B}}}{}^0=(\tilde{{\varvec{B}}}{}_\kappa ^0)\) with \(\tilde{{\varvec{B}}}{}_\kappa ^0 :{\varvec{W}}^{1/\varvec{2}}[\varGamma ]\rightarrow \{0\}\oplus \partial _1W_{\kappa }\) is such that

$$\begin{aligned} \Vert \tilde{{\varvec{B}}}{}_\kappa ^0{\varvec{u}}\Vert _{\partial _1W_{\kappa }} \le c\sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}_\varGamma (\kappa )}\Vert \chi u_{\tilde{\kappa }}\Vert _{W_{\tilde{\kappa }}^{1/\varvec{2}}} ,\qquad \kappa \in {{\mathfrak {K}}}_\varGamma . \end{aligned}$$
(12.19)

Concerning the transmission interface S, we define \({{\mathcal {R}}}_S\) and \({{\mathcal {R}}}_S^c\) analogously to (12.14) and (12.15). Observe that

$$\begin{aligned}{}[{{\mathcal {C}}},\pi _\kappa ]u =\bigl (0,\llbracket (\nu _S|a\mathop {{\mathrm{grad}}}\nolimits \pi _\kappa )u \rrbracket \bigr ). \end{aligned}$$
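This can be checked exactly as (12.6); a sketch, under the assumption (consistent with (11.12)) that the transmission operator of Sect. 7 reads \({{\mathcal {C}}}u=\bigl (\llbracket u \rrbracket ,\,\llbracket (\nu _S\,|\,a\mathop {{\mathrm{grad}}}\nolimits u) \rrbracket \bigr )\): since \(\pi _\kappa \) is smooth across S, \(\llbracket \pi _\kappa u \rrbracket =\pi _\kappa \llbracket u \rrbracket \), while

$$\begin{aligned} \llbracket (\nu _S\,|\,a\mathop {{\mathrm{grad}}}\nolimits (\pi _\kappa u)) \rrbracket =\pi _\kappa \llbracket (\nu _S\,|\,a\mathop {{\mathrm{grad}}}\nolimits u) \rrbracket +\llbracket (\nu _S\,|\,a\mathop {{\mathrm{grad}}}\nolimits \pi _\kappa )\,u \rrbracket , \end{aligned}$$

so the zero-order component of the commutator vanishes and only the flux component displayed above remains.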

From this, it is now clear that

$$\begin{aligned} {{\mathcal {C}}}{{\mathcal {R}}}={{\mathcal {R}}}_S({\varvec{C}}+{\varvec{C}}^0) ,\quad {{\mathcal {R}}}_S^c{{\mathcal {C}}}=({\varvec{C}}+\tilde{{\varvec{C}}}{}^0){{\mathcal {R}}}^c, \end{aligned}$$
(12.20)

where \({\varvec{C}}:={\mathop {{\mathrm{diag}}}\nolimits }[{{\mathcal {C}}}_\kappa ]\), \({\varvec{C}}^0=({\varvec{C}}_\kappa ^0)\), and \(\tilde{{\varvec{C}}}{}^0=(\tilde{{\varvec{C}}}{}_\kappa ^0)\) with

$$\begin{aligned} \begin{aligned} \Vert {\varvec{C}}_\kappa ^0{\varvec{u}}\Vert _{\partial _1W_{\kappa }}&\le c\sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}_S(\kappa )}\Vert \chi u_{\tilde{\kappa }}\Vert _{{{\bar{W}}}_{\tilde{\kappa }}^{1/\varvec{2}}},\\ \Vert \tilde{{\varvec{C}}}{}_\kappa ^0{\varvec{u}}\Vert _{\partial _1W_{\kappa }}&\le c\sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}_S(\kappa )}\Vert \chi u_{\tilde{\kappa }}\Vert _{{{\bar{W}}}_{\tilde{\kappa }}^{1/\varvec{2}}} \end{aligned} \end{aligned}$$
(12.21)

for \(\kappa \in {{\mathfrak {K}}}_S\) and \({\varvec{u}}\in {{\bar{{\varvec{W}}}}}^{2/\varvec{2}}\).

The following consequences of the preceding results are needed to establish Theorem 7.1.

Lemma 12.1

Fix \(s\in (1,2)\) and \(q>p\) with \(1/p-1/q<(2-s)/(m+2)\). Then,

$$\begin{aligned} (v\mapsto \chi v) \in {{\mathcal {L}}}\bigl ({{\mathsf {W}}}_{\kappa }^{2/\varvec{2}},L_q(J,{{\mathsf {W}}}_{\kappa }^s) \cap W_{q}^{s/2}(J,{{\mathsf {W}}}_{\kappa }^0)\bigr ) \qquad {{\mathfrak {K}}}\text {-unif.} \end{aligned}$$

Proof

(1) Set \({{\mathbb {X}}}:={{\mathbb {R}}}^m\) and \(s_1:=s+(m+2)(1/p-1/q)<2\). By (VII.3.6.3) and Example VII.3.6.5 of [13]

$$\begin{aligned} W_{q}^{s/\varvec{2}} \doteq L_q(J,W_{q}^s)\cap W_{q}^{s/2}(J,L_q), \end{aligned}$$
(12.22)

where \(W_{q}^{s/\varvec{2}}=W_{q}^{s/\varvec{2}}({{\mathbb {X}}}\times J)\), etc. Hence, see [13, Theorem VII.2.2.4(iv)],

$$\begin{aligned} W_{p}^{2/\varvec{2}}\hookrightarrow W_{p}^{s_1/\varvec{2}} \hookrightarrow L_q(J,W_{q}^s)\cap W_{q}^{s/2}(J,L_q). \end{aligned}$$
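The second embedding in this chain is the anisotropic (parabolic) Sobolev embedding; the choice of \(s_1\) is made precisely so that the usual numerology, with the parabolic dimension \(m+2\) of \({{\mathbb {R}}}^m\times J\), balances (a heuristic check only; the precise statement is the quoted theorem):

$$\begin{aligned} s_1-\frac{m+2}{p} =s+(m+2)\Bigl (\frac{1}{p}-\frac{1}{q}\Bigr )-\frac{m+2}{p} =s-\frac{m+2}{q} ,\qquad s_1<2\ \text { by the assumption on }q. \end{aligned}$$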

Consequently, invoking (10.5),

$$\begin{aligned} W_{\kappa }^{2/\varvec{2}} \hookrightarrow L_q(J,W_{q,\kappa }^s)\cap W_{q}^{s/2}(J,L_{q,\kappa }) \qquad {{\mathfrak {K}}}_0\text {-unif.} \end{aligned}$$

(2) Since \(\mathop {{\mathrm{supp}}}\nolimits (\chi )\subset (-1,1)^m\), Hölder’s inequality implies

$$\begin{aligned} \Vert \chi v\Vert _{W_{p}^k}\le c\,\Vert \chi v\Vert _{W_{q}^k} ,\qquad v\in W_{q}^2 ,\quad k=0,1,2. \end{aligned}$$
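In more detail (a sketch): for \(w\in L_q\) with \(\mathop {{\mathrm{supp}}}\nolimits (w)\subset (-1,1)^m\), Hölder's inequality with the exponent pair \(\bigl (q/p,(q/p)'\bigr )\) gives

$$\begin{aligned} \int |w|^p\,dx \le \Bigl (\int _{(-1,1)^m}dx\Bigr )^{1-p/q}\Bigl (\int |w|^q\,dx\Bigr )^{p/q} ,\qquad \text {that is,}\quad \Vert w\Vert _{L_p}\le 2^{m(1/p-1/q)}\,\Vert w\Vert _{L_q}; \end{aligned}$$

applying this to \(w=\partial ^\alpha (\chi v)\), \(|\alpha |\le k\), yields the asserted estimate.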

Hence, by interpolating and then using (10.5) once more,

$$\begin{aligned} \Vert \chi v\Vert _{W_{\kappa }^\sigma }\le c\,\Vert \chi v\Vert _{W_{q,\kappa }^\sigma } \qquad {{\mathfrak {K}}}_0\text {-unif.} ,\quad \sigma \in \{0,s\}. \end{aligned}$$

From this, we get

$$\begin{aligned} \Vert \chi u\Vert _{L_q(J,W_{\kappa }^s)} \le c\,\Vert \chi u\Vert _{L_q(J,W_{q,\kappa }^s)} \qquad {{\mathfrak {K}}}_0\text {-unif.} \end{aligned}$$

and

$$\begin{aligned} \Vert \chi u\Vert _{W_{q}^{s/2}(J,W_{\kappa }^0)} \le c\,\Vert \chi u\Vert _{W_{q}^{s/2}(J,W_{q,\kappa }^0)} \qquad {{\mathfrak {K}}}_0\text {-unif.} \end{aligned}$$

Now the assertion follows in this case from step (1).

(3) The preceding arguments also hold if we replace \({{\mathbb {X}}}={{\mathbb {R}}}^m\) by \({{\mathbb {X}}}={{\mathbb {H}}}^m\). Then (see Remark 6.2), it also applies to \({{\mathbb {X}}}={{\mathbb {R}}}^m\setminus \partial {{\mathbb {H}}}^m\). Thus, the lemma is proved.\(\square \)

Given a function space \({{\mathfrak {F}}}\) defined on J, we write \({{\mathfrak {F}}}(\tau )\) for its restriction to \(J_\tau \), \(0<\tau \le T\).

Lemma 12.2

Let \(\hat{{\varvec{{{\mathsf {A}}}}}}_\kappa \in \{{\varvec{{{\mathsf {A}}}}}_\kappa ^0,\tilde{{\varvec{{{\mathsf {A}}}}}}{}_\kappa ^0\}\). There exists \(\varepsilon >0\) such that

$$\begin{aligned} \Vert \hat{{\varvec{{{\mathsf {A}}}}}}_\kappa {\varvec{u}}\Vert _{{{\mathsf {W}}}_{\kappa }^{0/\varvec{2}}(\tau )} \le c\tau ^\varepsilon \sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}(\kappa )}\Vert u_{\tilde{\kappa }}\Vert _{{{\mathsf {W}}}_{\tilde{\kappa }}^{2/\varvec{2}}(\tau )} ,\qquad {\varvec{u}}\in {\varvec{{{\mathsf {W}}}}}^{2/\varvec{2}}, \end{aligned}$$

unif. w.r.t. \(\kappa \in {{\mathfrak {K}}}\) and \(0<\tau \le T\).

Proof

Fix s and q as in Lemma 12.1 and set \(\varepsilon :=1/p-1/q\). We get from Hölder’s inequality, Lemma 12.1, (12.10), and (12.13)

$$\begin{aligned} \begin{aligned} \Vert \hat{{\varvec{{{\mathsf {A}}}}}}_\kappa {\varvec{u}}\Vert _{{{\mathsf {W}}}_{\kappa }^{0/\varvec{2}}(\tau )}&=\Vert \hat{{\varvec{{{\mathsf {A}}}}}}_\kappa {\varvec{u}}\Vert _{L_p(J_\tau ,{{\mathsf {W}}}_{\kappa }^0)} \le \tau ^\varepsilon \,\Vert \hat{{\varvec{{{\mathsf {A}}}}}}_\kappa {\varvec{u}}\Vert _{L_q(J_\tau ,{{\mathsf {W}}}_{\kappa }^0)}\\&\le c\tau ^\varepsilon \sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}(\kappa )} \Vert \chi u_{\tilde{\kappa }}\Vert _{L_q\cap {{\mathsf {W}}}_{\tilde{\kappa }}^{s/2}(\tau )} \le c\tau ^\varepsilon \sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}(\kappa )}\Vert u_{\tilde{\kappa }}\Vert _{{{\mathsf {W}}}_{\tilde{\kappa }}^{2/\varvec{2}}(\tau )} \end{aligned} \end{aligned}$$

for \(0<\tau \le T\), \({{\mathfrak {K}}}\)-unif., where \(\Vert \cdot \Vert _{L_q\cap {{\mathsf {W}}}_{\kappa }^{s/2}}\) is the norm in the image space occurring in Lemma 12.1.\(\square \)

The next lemma provides analogous estimates for the boundary and transmission operators.

Lemma 12.3

Let \(\hat{{\varvec{B}}}_\kappa \in \{{\varvec{B}}_\kappa ^0,\tilde{{\varvec{B}}}{}_\kappa ^0\}\) and \(\hat{{\varvec{C}}}_\kappa \in \{{\varvec{C}}_\kappa ^0,\tilde{{\varvec{C}}}{}_\kappa ^0\}\). There exists \(\varepsilon >0\) such that

$$\begin{aligned} \Vert \hat{{\varvec{B}}}_\kappa {\varvec{u}}\Vert _{\partial W_{\kappa }(\tau )} \le c\tau ^\varepsilon \sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}_\varGamma (\kappa )} \Vert u_{\tilde{\kappa }}\Vert _{W_{{\tilde{\kappa }}}^{2/\varvec{2}}(\tau )} \qquad {{\mathfrak {K}}}_\varGamma \text {-unif.} ,\quad {\varvec{u}}\in {\varvec{W}}^{2/\varvec{2}}[\varGamma ], \end{aligned}$$

and

$$\begin{aligned} \Vert \hat{{\varvec{C}}}_\kappa {\varvec{u}}\Vert _{\partial _SW_{\kappa }(\tau )} \le c\tau ^\varepsilon \sum _{{\tilde{\kappa }}\in {{\mathfrak {N}}}_S(\kappa )} \Vert u_{\tilde{\kappa }}\Vert _{{{\bar{W}}}_{{\tilde{\kappa }}}^{2/\varvec{2}}(\tau )} \qquad {{\mathfrak {K}}}_S\text {-unif.} ,\quad {\varvec{u}}\in {{\bar{{\varvec{W}}}}}^{2/\varvec{2}}, \end{aligned}$$

for \(0<\tau \le T\).

Proof

Let s, q, and \(\varepsilon \) be as in the preceding proof.

Given any Banach space E, Hölder’s inequality gives

$$\begin{aligned} \Vert u\Vert _{W_{p}^k(J_\tau ,E)} \le \tau ^\varepsilon \,\Vert u\Vert _{W_{q}^k(J_\tau ,E)} ,\qquad k=0,1. \end{aligned}$$

Hence, by interpolation (see [13, Theorems VII.2.7.4 and VII.7.3.4]),

$$\begin{aligned} \Vert u\Vert _{W_{p}^{(1-1/p)/2}(J_\tau ,E)} \le c\tau ^\varepsilon \,\Vert u\Vert _{W_{q}^{(1-1/p)/2}(J_\tau ,E)} ,\qquad 0<\tau <T. \end{aligned}$$
(12.23)

Set

$$\begin{aligned} {{\mathcal {L}}}_\kappa ^\sigma (\tau ) :=L_q\bigl (J_\tau ,W_{\kappa }^\sigma (\partial {{\mathbb {H}}}^m)\bigr ) \cap W_{q}^{\sigma /2}\bigl (J_\tau ,W_{\kappa }^0(\partial {{\mathbb {H}}}^m)\bigr ) ,\qquad \kappa \in {{\mathfrak {K}}}_\varGamma . \end{aligned}$$

Then, we get from (12.23)

$$\begin{aligned} \Vert \cdot \Vert _{\partial _1W_{\kappa }(\tau )} \le c\tau ^\varepsilon \,\Vert \cdot \Vert _{{{\mathcal {L}}}_\kappa ^{1-1/p}(\tau )} \qquad {{\mathfrak {K}}}_\varGamma \text {-unif.} ,\quad 0<\tau <T. \end{aligned}$$
(12.24)

It is clear from the structure of \(\hat{{\varvec{B}}}_\kappa \) and the mapping properties of \(\gamma \) that

$$\begin{aligned} \Vert \hat{\varvec{B}}_\kappa {\varvec{u}}\Vert _{{{\mathcal {L}}}_\kappa ^{1-1/p}(\tau )} \le c\,\Vert \chi {\varvec{u}}\Vert _{{{\mathcal {L}}}_\kappa ^1(\tau )} \qquad {{\mathfrak {K}}}_\varGamma \text {-unif.} \end{aligned}$$

Since \({{\mathcal {L}}}_\kappa ^s(\tau )\hookrightarrow {{\mathcal {L}}}_\kappa ^1(\tau )\) \({{\mathfrak {K}}}_\varGamma \)-unif. and unif. w.r.t. \(\tau \), the first assertion now follows from (12.24), Lemma 12.1, and the fact that \(\hat{\varvec{B}}_\kappa \) has its image in the closed linear subspace \(\partial _1W_{\kappa }(\tau )\) of \(\partial W_{\kappa }(\tau )\).

The proof of the second claim is similar.\(\square \)

13 Proof of Theorem 7.1

Let \({\varvec{E}}=\prod _{\alpha \in {{\mathsf {A}}}}E_\alpha \), where each \(E_\alpha \) is a Banach space and \({{\mathsf {A}}}\) is a countable index set. Then, \(\ell _p({\varvec{E}})\) is the Banach space of p-summable sequences in \({\varvec{E}}\).

We put

$$\begin{aligned} {{\mathbb {E}}}^i[\alpha ]:=\ell _p\bigl ({\varvec{W}}^{i/\varvec{2}}[\alpha ]\bigr ) ,\quad \gamma _0{{\mathbb {E}}}[\alpha ]:=\ell _p\bigl (\gamma _0{\varvec{W}}[\alpha ]\bigr ) ,\qquad \alpha \in \{0,\varGamma \}, \end{aligned}$$

and

$$\begin{aligned} {{\bar{{{\mathbb {E}}}}}}^i:=\ell _p\bigl ({{\bar{{\varvec{W}}}}}^{i/\varvec{2}}\bigr ) ,\quad \gamma _0{{\bar{{{\mathbb {E}}}}}}:=\ell _p(\gamma _0{{\bar{{\varvec{W}}}}}) \end{aligned}$$

for \(i=0,2\). Then,

$$\begin{aligned} {{\mathbb {E}}}^i:={{\mathbb {E}}}^i[0]\oplus {{\mathbb {E}}}^i[\varGamma ]\oplus {{\bar{{{\mathbb {E}}}}}}^i ,\quad i=0,2 ,\qquad \gamma _0{{\mathbb {E}}}:=\gamma _0{{\mathbb {E}}}[0]\oplus \gamma _0{{\mathbb {E}}}[\varGamma ]\oplus \gamma _0{{\bar{{{\mathbb {E}}}}}}. \end{aligned}$$

Moreover, \({{\mathbb {F}}}_\varGamma :=\ell _p(\partial {\varvec{W}})\), \({{\mathbb {F}}}_S:=\ell _p(\partial _S{{\bar{{\varvec{W}}}}})\).

Recall the definitions of \(({{\mathcal {R}}},{{\mathcal {R}}}^c)\), \(({{\mathcal {R}}}_\varGamma ,{{\mathcal {R}}}_\varGamma ^c)\), and \(({{\mathcal {R}}}_S,{{\mathcal {R}}}_S^c)\) in Sect. 12.

Proposition 13.1

\(({{\mathcal {R}}},{{\mathcal {R}}}^c)\) is an r-c pair for \(({{\mathbb {E}}}^k,{{\bar{W}}}^{k/\varvec{2}})\),  \(k=0,2\), and

$$\begin{aligned} ({{\mathcal {R}}}_\varGamma \oplus {{\mathcal {R}}}_S\oplus {{\mathcal {R}}},\,{{\mathcal {R}}}_\varGamma ^c\oplus {{\mathcal {R}}}_S^c\oplus {{\mathcal {R}}}^c) \end{aligned}$$

is an r-c pair for

$$\begin{aligned} ({{\mathbb {F}}}_\varGamma \oplus {{\mathbb {F}}}_S\oplus \gamma _0{{\bar{{{\mathbb {E}}}}}},\,\partial W\oplus \partial _SW\oplus \gamma _0{{\bar{W}}}). \end{aligned}$$

Proof

This is obtained from [10, Theorem 6.1] (also see [9, Theorem 9.3] or [15]).\(\square \)

Lemma 13.2

\(\gamma _0\) is a retraction from \({{\bar{W}}}^{2/\varvec{2}}\) onto \(\gamma _0{{\bar{W}}}\) and from \({{\mathbb {E}}}^2\) onto \(\gamma _0{{\bar{{{\mathbb {E}}}}}}\). Furthermore, \(\gamma _0{{\mathcal {R}}}={{\mathcal {R}}}\gamma _0\).

Proof

Due to (10.23), there exists \(\gamma _0^c\in {{\mathcal {L}}}(\gamma _0{{\bar{{{\mathbb {E}}}}}},{{\mathbb {E}}}^2)\) such that \((\gamma _0,\gamma _0^c)\) is an r-c pair for \(({{\mathbb {E}}}^2,\gamma _0{{\bar{{{\mathbb {E}}}}}})\). For the moment, we denote it by \(({{\bar{\gamma }}}_0,{{\bar{\gamma }}}_0^c)\). Then, it follows from Proposition 13.1 that

$$\begin{aligned} {{\mathcal {R}}}{{\bar{\gamma }}}_0{{\mathcal {R}}}^c\in {{\mathcal {L}}}({{\bar{W}}}^{2/\varvec{2}},\gamma _0{{\bar{W}}}) ,\quad {{\mathcal {R}}}{{\bar{\gamma }}}_0^c{{\mathcal {R}}}^c\in {{\mathcal {L}}}(\gamma _0{{\bar{W}}},{{\bar{W}}}^{2/\varvec{2}}). \end{aligned}$$

Since \(\gamma _0\) is the evaluation at \(t=0\) and \(({{\mathcal {R}}},{{\mathcal {R}}}^c)\) is independent of \(t\in J\), we see that \(\gamma _0{{\mathcal {R}}}={{\mathcal {R}}}{{\bar{\gamma }}}_0\). Hence, \(\gamma _0\in {{\mathcal {L}}}({{\bar{W}}}^{2/\varvec{2}},\gamma _0{{\bar{W}}})\) and \(\gamma _0^c:={{\mathcal {R}}}{{\bar{\gamma }}}_0^c{{\mathcal {R}}}^c\) is a continuous right inverse for \(\gamma _0\).\(\square \)

We write

$$\begin{aligned} \hat{{{\mathbb {A}}}}:={\varvec{{{\mathsf {A}}}}}+\hat{{\varvec{{{\mathsf {A}}}}}} \text { with } \hat{{\varvec{{{\mathsf {A}}}}}}:=[\hat{{\varvec{{{\mathsf {A}}}}}}_\kappa ]_{\kappa \in {{\mathfrak {K}}}} \end{aligned}$$

and, using (12.4), set \({\varvec{u}}=({\varvec{v}},{\varvec{w}},{\varvec{z}})\in {\varvec{{{\mathsf {W}}}}}^{2/\varvec{2}}\) and, analogously,

$$\begin{aligned} \hat{{{\mathbb {B}}}}{\varvec{u}}:=({\varvec{B}}+\hat{{\varvec{B}}}){\varvec{w}}\in \partial {\varvec{W}},\quad \hat{{{\mathbb {C}}}}{\varvec{u}}:=({\varvec{C}}+\hat{{\varvec{C}}}){\varvec{z}}\in \partial _S{\varvec{W}}. \end{aligned}$$

Moreover, \({{\mathbb {G}}}:={{\mathbb {F}}}_\varGamma \oplus {{\mathbb {F}}}_S\oplus \gamma _0{{\bar{{{\mathbb {E}}}}}}\) and \([{{\mathbb {G}}}]_{{\varvec{B}},{\varvec{C}}}\) is the linear subspace consisting of all \(({\varvec{\varphi }},\varvec{\psi },{\varvec{u}}_0)\) satisfying the compatibility conditions

$$\begin{aligned} {\varvec{B}}(0){\varvec{u}}_0 ={\varvec{\varphi }}(0) ,\quad {\varvec{C}}(0){\varvec{u}}_0 ={\varvec{\psi }}(0) \end{aligned}$$

if \(p>3\), with the corresponding modifications if \(p<3\). Analogous definitions apply to \([{{\mathbb {G}}}]_{\hat{{{\mathbb {B}}}},\hat{{{\mathbb {C}}}}}\).

Proposition 13.3

\(\bigl (\partial _t+\hat{{{\mathbb {A}}}},\,(\hat{{{\mathbb {B}}}},\hat{{{\mathbb {C}}}},\gamma _0)\bigr ) \in {{\mathcal {L}}}{{\mathrm{is}}}\bigl ({{\mathbb {E}}}^2,\,{{\mathbb {E}}}^0\oplus [{{\mathbb {G}}}]_{\hat{{{\mathbb {B}}}},\hat{{{\mathbb {C}}}}}\bigr )\).

Proof

(1) First, we prove that

$$\begin{aligned} {{\mathbb {L}}}:=\bigl (\partial _t+{\varvec{{{\mathsf {A}}}}},\,({\varvec{B}},{\varvec{C}},\gamma _0)\bigr ) \in {{\mathcal {L}}}{{\mathrm{is}}}\bigl ({{\mathbb {E}}}^2,\,{{\mathbb {E}}}^0\oplus [{{\mathbb {G}}}]_{{\varvec{B}},{\varvec{C}}}\bigr ). \end{aligned}$$
(13.1)

Since \({{\mathbb {L}}}\) has diagonal structure, the claim is a direct consequence of Propositions 11.1–11.3.

(2) Set \(\hat{{{\mathbb {L}}}}_0:=(\hat{{\varvec{{{\mathsf {A}}}}}},\hat{{\varvec{B}}},\hat{{\varvec{C}}},0)\). Then,

$$\begin{aligned} \hat{{{\mathbb {L}}}}:= \bigl (\partial _t+\hat{{{\mathbb {A}}}},\,(\hat{{{\mathbb {B}}}},\hat{{{\mathbb {C}}}},\gamma _0)\bigr )={{\mathbb {L}}}+\hat{{{\mathbb {L}}}}_0. \end{aligned}$$
(13.2)

It follows from (12.1) and Lemmas 12.2 and 12.3 that \(\hat{{{\mathbb {L}}}}_0\in {{\mathcal {L}}}({{\mathbb {E}}}^2,\,{{\mathbb {E}}}^0\oplus {{\mathbb {G}}})\) and there exists \(\varepsilon >0\) such that

$$\begin{aligned} \Vert \hat{{{\mathbb {L}}}}_0\Vert _{{{\mathcal {L}}}({{\mathbb {E}}}^2(\tau ),\,{{\mathbb {E}}}^0(\tau )\oplus {{\mathbb {G}}}(\tau ))} \le c\tau ^\varepsilon ,\qquad 0<\tau \le T. \end{aligned}$$
(13.3)

From this and step (1), we see that

$$\begin{aligned} \hat{{{\mathbb {L}}}}\in {{\mathcal {L}}}({{\mathbb {E}}}^2,\,{{\mathbb {E}}}^0\oplus {{\mathbb {G}}}). \end{aligned}$$
(13.4)

Write

$$\begin{aligned} {{\mathbb {E}}}_0^2:=\{\,{\varvec{v}}\in {{\mathbb {E}}}^2\ ;\ \gamma _0{\varvec{v}}=\varvec{0}\,\} ,\quad {{\mathbb {G}}}_0:={{\mathbb {F}}}_\varGamma \oplus {{\mathbb {F}}}_S\oplus \{0\}. \end{aligned}$$

By Lemma 13.2, we can choose a coretraction \(\gamma _0^c\) for \(\gamma _0\in {{\mathcal {L}}}({{\mathbb {E}}}^2,\gamma _0{{\bar{{{\mathbb {E}}}}}})\). Let \(({\varvec{f}},{\varvec{g}})\in {{\mathbb {E}}}^0\oplus [{{\mathbb {G}}}]_{\hat{{{\mathbb {B}}}},\hat{{{\mathbb {C}}}}}\) with \({\varvec{g}}=({\varvec{\varphi }},{\varvec{\psi }},{\varvec{u}}_0)\). Set \(\overline{{\varvec{u}}}:=\gamma _0^c{\varvec{u}}_0\). Then, \({\varvec{u}}\in {{\mathbb {E}}}^2\) satisfies \(\hat{{{\mathbb {L}}}}{\varvec{u}}=({\varvec{f}},{\varvec{g}})\) iff \({\varvec{v}}:={\varvec{u}}-\overline{{\varvec{u}}}\) is such that

$$\begin{aligned} \hat{{{\mathbb {L}}}}{\varvec{v}}=({\varvec{f}},{\varvec{g}})-\hat{{{\mathbb {L}}}}\overline{{\varvec{u}}} =:({\varvec{f}}_0,{\varvec{g}}_0). \end{aligned}$$

Note that \({\varvec{v}}\in {{\mathbb {E}}}_0^2\) and \({\varvec{g}}_0\in [{{\mathbb {G}}}_0]_{\hat{{{\mathbb {B}}}},\hat{{{\mathbb {C}}}}}\). Suppose \(p>3\). Given any \({\varvec{w}}\in {{\mathbb {E}}}_0^2\), we get

$$\begin{aligned} \gamma _0({\varvec{B}}{\varvec{w}})={\varvec{B}}(0)\gamma _0{\varvec{w}}=0=\hat{{{\mathbb {B}}}}(0)\gamma _0{\varvec{w}}=\gamma _0(\hat{{{\mathbb {B}}}}{\varvec{w}}). \end{aligned}$$

From this, the analogous relation for \({\varvec{C}}\) and \(\hat{{{\mathbb {C}}}}\), and (13.4), we infer that it suffices to prove that \(\hat{{{\mathbb {L}}}}:{{\mathbb {E}}}_0^2\rightarrow {{\mathbb {E}}}^0\oplus [{{\mathbb {G}}}_0]_{{\varvec{B}},{\varvec{C}}}\) is surjective and has a continuous inverse. Obvious modifications apply to \(p<3\).

(3) Let \({{\mathbb {F}}}:={{\mathbb {E}}}^0\oplus [{{\mathbb {G}}}_0]_{{\varvec{B}},{\varvec{C}}}\) and \({\varvec{h}}\in {{\mathbb {F}}}\). Suppose \({\varvec{u}}\in {{\mathbb {E}}}_0^2\) and set \({\varvec{v}}:={{\mathbb {L}}}{\varvec{u}}\in {{\mathbb {F}}}\). By step (1) and (13.2), the equation \(\hat{{{\mathbb {L}}}}{\varvec{u}}={\varvec{h}}\) is equivalent to \({\varvec{v}}+\hat{{{\mathbb {L}}}}_0{{\mathbb {L}}}^{-1}{\varvec{v}}={\varvec{h}}\). Observe that \(\hat{{{\mathbb {L}}}}_0{{\mathbb {L}}}^{-1}{\varvec{v}}\in {{\mathbb {F}}}\).

Due to (13.3), we can fix \(\overline{\tau }\in (0,T]\) such that \(\Vert \hat{{{\mathbb {L}}}}_0{{\mathbb {L}}}^{-1}\Vert _{{{\mathcal {L}}}({{\mathbb {F}}}(\overline{\tau }))}\le 1/2\). As is well known (e.g., [12, Lemma 12.2]), this implies that \(I+\hat{{{\mathbb {L}}}}_0{{\mathbb {L}}}^{-1}\in {{\mathcal {L}}}{{\mathrm{is}}}\bigl ({{\mathbb {F}}}(\overline{\tau }),{{\mathbb {F}}}(\overline{\tau })\bigr )\), and hence \(\hat{{{\mathbb {L}}}}\in {{\mathcal {L}}}{{\mathrm{is}}}\bigl ({{\mathbb {E}}}_0^2(\overline{\tau }),\,{{\mathbb {F}}}(\overline{\tau })\bigr )\).
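
For completeness, we recall the Neumann series underlying this standard perturbation argument: since \(\Vert \hat{{{\mathbb {L}}}}_0{{\mathbb {L}}}^{-1}\Vert _{{{\mathcal {L}}}({{\mathbb {F}}}(\overline{\tau }))}\le 1/2\),

$$\begin{aligned} \bigl (I+\hat{{{\mathbb {L}}}}_0{{\mathbb {L}}}^{-1}\bigr )^{-1} =\sum _{k=0}^\infty \bigl (-\hat{{{\mathbb {L}}}}_0{{\mathbb {L}}}^{-1}\bigr )^k \in {{\mathcal {L}}}\bigl ({{\mathbb {F}}}(\overline{\tau })\bigr ) ,\qquad \bigl \Vert \bigl (I+\hat{{{\mathbb {L}}}}_0{{\mathbb {L}}}^{-1}\bigr )^{-1}\bigr \Vert _{{{\mathcal {L}}}({{\mathbb {F}}}(\overline{\tau }))}\le 2, \end{aligned}$$

and, by (13.2), \(\hat{{{\mathbb {L}}}}=(I+\hat{{{\mathbb {L}}}}_0{{\mathbb {L}}}^{-1}){{\mathbb {L}}}\) on \({{\mathbb {E}}}_0^2(\overline{\tau })\).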

(4) If \(\overline{\tau }=T\), then we are done. Otherwise, we repeat this argument for the problem in \([0,\,T-\overline{\tau }]\) obtained by the time shift \(t\mapsto t-\overline{\tau }\) and with the initial value \({\varvec{u}}(\overline{\tau })\). After finitely many such steps, we reach T. The proposition is proved.\(\square \)

Proof of Theorem 7.1

Now we write \(({{\mathbb {A}}},{{\mathbb {B}}},{{\mathbb {C}}})\) for \((\hat{{{\mathbb {A}}}},\hat{{{\mathbb {B}}}},\hat{{{\mathbb {C}}}})\) if \((\hat{{\varvec{{{\mathsf {A}}}}}},\hat{{\varvec{B}}},\hat{{\varvec{C}}})\) equals \(({\varvec{{{\mathsf {A}}}}}^0,{\varvec{B}}^0,{\varvec{C}}^0)\), and \((\tilde{{{\mathbb {A}}}},\tilde{{{\mathbb {B}}}},\tilde{{{\mathbb {C}}}})\) otherwise. We also set \(G:=\partial W\oplus \partial _SW\oplus \gamma _0{{\bar{W}}}\). Then, Proposition 13.1 implies

$$\begin{aligned} {\vec {{{\mathcal {R}}}}:={{\mathcal {R}}}\oplus ({{\mathcal {R}}}_\varGamma \oplus {{\mathcal {R}}}_S\oplus {{\mathcal {R}}}) \in {{\mathcal {L}}}({{\mathbb {E}}}^k\oplus {{\mathbb {G}}},\,{{\bar{W}}}^{k/\varvec{2}}\oplus G)} \end{aligned}$$

and

$$\begin{aligned} {\vec {{{\mathcal {R}}}}^c:={{\mathcal {R}}}^c\oplus ({{\mathcal {R}}}_\varGamma ^c\oplus {{\mathcal {R}}}_S^c\oplus {{\mathcal {R}}}^c) \in {{\mathcal {L}}}({{\bar{W}}}^{k/\varvec{2}}\oplus G,\,{{\mathbb {E}}}^k\oplus {{\mathbb {G}}}) } \end{aligned}$$

for \(k=0,2\).

Let \(\bigl ({\varvec{u}},({\varvec{\varphi }},{\varvec{\psi }},{\varvec{u}}_0)\bigr )\in {{\mathbb {E}}}^2\oplus {{\mathbb {G}}}\) and write \(\bigl (u,(\varphi ,\psi ,u_0)\bigr )\) for its image under \({\vec {{{\mathcal {R}}}}}\). Then, we obtain from (12.16) and (12.20)

$$\begin{aligned} {{\mathcal {B}}}u={{\mathcal {B}}}{{\mathcal {R}}}{\varvec{u}}={{\mathcal {R}}}_\varGamma {{\mathbb {B}}}{\varvec{u}},\quad {{\mathcal {C}}}u={{\mathcal {C}}}{{\mathcal {R}}}{\varvec{u}}={{\mathcal {R}}}_S{{\mathbb {C}}}{\varvec{u}}. \end{aligned}$$

Suppose \(p>3\),  \({\varvec{u}}_0=\gamma _0{\varvec{u}}\), and \({{\mathbb {B}}}(0){\varvec{u}}_0={\varvec{\varphi }}(0)\). Then,

$$\begin{aligned} \begin{aligned} \varphi (0)&={{\mathcal {R}}}_\varGamma {\varvec{\varphi }}(0) ={{\mathcal {R}}}_\varGamma {{\mathbb {B}}}(0){\varvec{u}}_0 ={{\mathcal {R}}}_\varGamma \gamma _0({{\mathbb {B}}}{\varvec{u}})\\&=\gamma _0{{\mathcal {R}}}_\varGamma {{\mathbb {B}}}{\varvec{u}}=\gamma _0{{\mathcal {B}}}{{\mathcal {R}}}{\varvec{u}}=\gamma _0({{\mathcal {B}}}u) ={{\mathcal {B}}}(0)(\gamma _0u). \end{aligned} \end{aligned}$$

Since Lemma 13.2 and \({\varvec{u}}_0=\gamma _0{\varvec{u}}\) imply \(u_0=\gamma _0u\), we see that \({{\mathcal {B}}}(0)u_0=\varphi (0)\). Similarly, \({{\mathbb {C}}}(0){\varvec{u}}_0={\varvec{\psi }}(0)\) implies \({{\mathcal {C}}}(0)u_0=\psi (0)\). Thus, letting \({{\mathbb {F}}}:=[{{\mathbb {G}}}]_{{{\mathbb {B}}},{{\mathbb {C}}}}\) and \(F:=[G]_{{{\mathcal {B}}},{{\mathcal {C}}}}\), we have shown that

$$\begin{aligned} {\vec {{{\mathcal {R}}}}({{\mathbb {E}}}^2\oplus {{\mathbb {F}}})\subset {{\bar{W}}}^{2/\varvec{2}}\oplus F.} \end{aligned}$$

Consequently,

$$\begin{aligned} {\vec {{{\mathcal {R}}}}\in {{\mathcal {L}}}({{\mathbb {E}}}^2\oplus {{\mathbb {F}}},\,{{\bar{W}}}^{2/\varvec{2}}\oplus F).} \end{aligned}$$
(13.5)

We find analogously that

$$\begin{aligned} {\vec {{{\mathcal {R}}}}^c\in {{\mathcal {L}}}({{\bar{W}}}^{2/\varvec{2}}\oplus F,\,{{\mathbb {E}}}^2\oplus {{\mathbb {F}}}). } \end{aligned}$$
(13.6)

This holds for \(p>3\). The case \(p<3\) is similar.

Now we set

$$\begin{aligned} L:=\bigl (\partial _t+{{\mathcal {A}}},\,({{\mathcal {B}}},{{\mathcal {C}}},\gamma _0)\bigr ) ,\quad {{\mathbb {L}}}:=\bigl (\partial _t+{{\mathbb {A}}},\,({{\mathbb {B}}},{{\mathbb {C}}},\gamma _0)\bigr ). \end{aligned}$$

Define \(\tilde{{{\mathbb {L}}}}:=\bigl (\partial _t+\tilde{{{\mathbb {A}}}},\,(\tilde{{{\mathbb {B}}}},\tilde{{{\mathbb {C}}}},\gamma _0)\bigr )\). It is a consequence of (12.11), (12.12), (12.16), (12.18), (12.20), and the fact that \(({{\mathcal {R}}},{{\mathcal {R}}}^c)\) is independent of t that

$$\begin{aligned} {L{{\mathcal {R}}}=\vec {{{\mathcal {R}}}}{{\mathbb {L}}},\quad \vec {{{\mathcal {R}}}}^cL=\tilde{{{\mathbb {L}}}}\vec {{{\mathcal {R}}}}^c.} \end{aligned}$$
(13.7)

Proposition 13.3 guarantees

$$\begin{aligned} {{\mathbb {L}}},\tilde{{{\mathbb {L}}}}\in {{\mathcal {L}}}{{\mathrm{is}}}({{\mathbb {E}}}^2,\,{{\mathbb {E}}}^0\oplus {{\mathbb {F}}}). \end{aligned}$$
(13.8)

Suppose \(Lu=0\). Then, (13.7) and (13.8) imply \({{\mathcal {R}}}^cu=0\). Thus, \(u={{\mathcal {R}}}{{\mathcal {R}}}^cu=0\). This shows that L is injective.

Let \((f,g)\in {{\bar{W}}}^{0/\varvec{2}}\oplus F\). By (13.8), we find \({\varvec{u}}\in {{\mathbb {E}}}^2\) satisfying \({{{\mathbb {L}}}{\varvec{u}}=\vec {{{\mathcal {R}}}}^c(f,g)}\). Put

$$\begin{aligned} {u:={{\mathcal {R}}}{\varvec{u}}={{\mathcal {R}}}{{\mathbb {L}}}^{-1}\vec {{{\mathcal {R}}}}^c(f,g)\in {{\bar{W}}}^{2/\varvec{2}}.} \end{aligned}$$

Then, by (13.7) and (13.8),

$$\begin{aligned} {Lu=L{{\mathcal {R}}}{{\mathbb {L}}}^{-1}\vec {{{\mathcal {R}}}}^c(f,g) =\vec {{{\mathcal {R}}}}{{\mathbb {L}}}{{\mathbb {L}}}^{-1}\vec {{{\mathcal {R}}}}^c(f,g) =\vec {{{\mathcal {R}}}}\vec {{{\mathcal {R}}}}^c(f,g) =(f,g).} \end{aligned}$$

Hence, L is surjective and, by (13.5) and (13.6),

$$\begin{aligned} {L^{-1}={{\mathcal {R}}}{{\mathbb {L}}}^{-1}\vec {{{\mathcal {R}}}}^c \in {{\mathcal {L}}}({{\bar{W}}}^{0/\varvec{2}}\oplus F,\,{{\bar{W}}}^{2/\varvec{2}}). } \end{aligned}$$

This completes the proof.\(\square \)

Remarks 13.4

(a) Recall that some or all of \(\varGamma _0\), \(\varGamma _1\), and S may be empty. If \((\varGamma ,S)\ne (\emptyset ,\emptyset )\), then the result is new. Otherwise, it is a special case of the more general Theorem 1.23(ii) of [12].

(b) Theorem 7.1 remains valid for systems \(u=(u^1,\ldots ,u^n)\), provided the uniform Lopatinskii–Shapiro conditions apply. This is trivially the case if \(a\) is a diagonal matrix. \(\square \)

14 Membranes with boundary

Now we turn to the case of membranes intersecting \(\varGamma \) transversally. It is handled by reduction to the situation studied in the preceding section.

Theorem 14.1

Let (5.1) be satisfied. Then Theorem 7.1 applies with \((M,g)\) replaced by \((\hat{M},\hat{g})\).

Proof

This is immediate from Theorem 5.1.\(\square \)

In preparation for the proof of Theorem 2.1, we derive rather explicit representations of \(({{\mathcal {A}}},{{\mathcal {B}}},{{\mathcal {C}}})\) and the relevant function spaces in a tubular neighborhood of \(\varSigma =\partial S\) in \(\hat{M}\).

We use the notations of Sects. 4 and 5 and set

$$\begin{aligned} U:=U(\varepsilon ) ,\qquad \dot{N}:=N(\varepsilon )\big \backslash \bigl \{(0,0)\bigr \} ,\quad \tilde{g}:=g_2/\rho ^2. \end{aligned}$$

The Christoffel symbols \(\tilde{\varGamma }_{ij}^k\) for the metric \(\tilde{g}\) turn out to be

$$\begin{aligned} \tilde{\varGamma }_{11}^1=\tilde{\varGamma }_{12}^2=-\tilde{\varGamma }_{22}^1=-\rho ^{-1}\partial _1\rho ,\quad \tilde{\varGamma }_{22}^2=\tilde{\varGamma }_{12}^1=-\tilde{\varGamma }_{11}^2=-\rho ^{-1}\partial _2\rho . \end{aligned}$$
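
These are the standard Christoffel symbols of a conformal metric: assuming, as the formulas indicate, that \(g_2\) is the Euclidean metric \(\mathrm{d}x^2+\mathrm{d}y^2\) on \(\dot{N}\), we can write \(\tilde{g}=e^{2\phi }g_2\) with \(\phi :=-\log \rho \), and then

$$\begin{aligned} \tilde{\varGamma }_{ij}^k =\delta _j^k\,\partial _i\phi +\delta _i^k\,\partial _j\phi -\delta _{ij}\,\partial _k\phi ,\qquad \partial _i\phi =-\rho ^{-1}\partial _i\rho , \end{aligned}$$

which reproduces the list above.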

We set \(D:=(D_1,D_2)\) with \(D_i:=\rho \partial _i\) and \(\tilde{\nabla }:=\nabla _{\tilde{g}}\). Then, for \(u\in {{\bar{C}}}^2(\dot{N}\setminus G_\sigma )\) with \(G_\sigma :=\mathop {{\mathrm{graph}}}\nolimits (f_\sigma )\),

$$\begin{aligned} \begin{aligned} \rho ^2\tilde{\nabla }_{11}u&=D_1^2u-\partial _2\rho D_2u,&\quad \rho ^2\tilde{\nabla }_{12}u&=D_1D_2u+\partial _2\rho D_1u,\\ \rho ^2\tilde{\nabla }_{21}u&=D_2D_1u+\partial _1\rho D_2u,&\quad \rho ^2\tilde{\nabla }_{22}u&=D_2^2u-\partial _1\rho D_1u. \end{aligned} \end{aligned}$$
(14.1)
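
For instance, the first identity in (14.1) follows from \(\tilde{\nabla }_{ij}u=\partial _i\partial _ju-\tilde{\varGamma }_{ij}^k\partial _ku\) and the Christoffel symbols listed above:

$$\begin{aligned} \rho ^2\tilde{\nabla }_{11}u =\rho ^2\partial _1^2u+\rho \,\partial _1\rho \,\partial _1u-\rho \,\partial _2\rho \,\partial _2u =D_1^2u-\partial _2\rho \,D_2u, \end{aligned}$$

since \(D_1^2u=\rho \partial _1(\rho \partial _1u)=\rho ^2\partial _1^2u+\rho \,\partial _1\rho \,\partial _1u\); the remaining identities are verified in the same way.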

Hence, see (6.2) and (6.3),

$$\begin{aligned} |\tilde{\nabla }u|_{\tilde{g}^*}^2 =|Du|^2=|D_1u|^2+|D_2u|^2 \end{aligned}$$
(14.2)

and

$$\begin{aligned} |\tilde{\nabla }^2u|_{\tilde{g}_0^2}^2 =\rho ^4\Bigl ((\tilde{\nabla }_{11}u)^2+(\tilde{\nabla }_{12}u)^2 +(\tilde{\nabla }_{21}u)^2+(\tilde{\nabla }_{22}u)^2\Bigr ). \end{aligned}$$
(14.3)
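
Both (14.2) and (14.3) reflect the fact that the inverse metric is \(\tilde{g}^{ij}=\rho ^2\delta ^{ij}\) (with \(g_2\) Euclidean as above); for example,

$$\begin{aligned} |\tilde{\nabla }u|_{\tilde{g}^*}^2 =\tilde{g}^{ij}\,\partial _iu\,\partial _ju =\rho ^2\bigl ((\partial _1u)^2+(\partial _2u)^2\bigr ) =|D_1u|^2+|D_2u|^2, \end{aligned}$$

and \(|\tilde{\nabla }^2u|_{\tilde{g}_0^2}^2 =\tilde{g}^{ik}\tilde{g}^{jl}\,\tilde{\nabla }_{ij}u\,\tilde{\nabla }_{kl}u\) produces the factor \(\rho ^4\) in (14.3).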

Let

$$\begin{aligned}{}[D^2u]^2:=(D_1^2u)^2+(D_1D_2u)^2+(D_2D_1u)^2+(D_2^2u)^2 \end{aligned}$$

and

$$\begin{aligned} \langle u\rangle _\rho ^2 :=|u|^2+|Du|^2+[D^2u]^2+|\nabla _hu|_{h^*}^2+|\nabla _h^2u|_{h_0^2}^2. \end{aligned}$$

We write

$$\begin{aligned} {{\bar{{{\mathcal {W}}}}}}_{p}^2 =\bigl ({{\bar{{{\mathcal {W}}}}}}_{p}^2,\Vert \cdot \Vert _{{{\bar{{{\mathcal {W}}}}}}_{p}^2}\bigr ) \end{aligned}$$

for the space of all \(u\in L_{1,\mathrm{loc}}(\dot{N}\times \varSigma )\) for which the norm

$$\begin{aligned} \Vert u\Vert _{{{\bar{{{\mathcal {W}}}}}}_{p}^2} :=\Bigl (\int _\varSigma \int _{\dot{N}\setminus G_\sigma } \langle u\rangle _\rho ^p(x,y,\sigma )\,\frac{d(x,y)}{\rho ^2(x,y)} \,\mathrm{d}\mathop {{\mathrm{vol}}}\nolimits _\varSigma (\sigma )\Bigr )^{1/p} \end{aligned}$$

is finite. Moreover, \({{\bar{{{\mathcal {W}}}}}}_{p}^0\) is obtained by replacing  \(\langle \cdot \rangle _\rho \) by  \(\vert \cdot \vert \). It is then clear how to define the anisotropic spaces \({{\bar{{{\mathcal {W}}}}}}_{p}^{k/\varvec{2}}\),  \(k=0,2\).

Proposition 14.2

\(u\in {{\bar{W}}}_{p}^2(U\setminus S;\hat{g})\) iff \(\chi _*u\in {{\bar{{{\mathcal {W}}}}}}_{p}^2\).

Proof

First, we note that

$$\begin{aligned} \sqrt{\tilde{g}}=\rho ^{-2}. \end{aligned}$$
(14.4)
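
Indeed, with \(g_2\) the Euclidean metric as above,

$$\begin{aligned} \tilde{g}_{11}=\tilde{g}_{22}=\rho ^{-2} ,\quad \tilde{g}_{12}=\tilde{g}_{21}=0 ,\qquad \text {so that}\quad \det [\tilde{g}_{ij}]=\rho ^{-4} \text { and } \sqrt{\tilde{g}}=\rho ^{-2}. \end{aligned}$$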

It follows from (14.3) that

$$\begin{aligned} |\tilde{\nabla }^2v|_{\tilde{g}_0^2}^2 \le c\bigl (|Dv|^2+[D^2v]^2\bigr ). \end{aligned}$$

Consequently, since \(\nabla =\chi ^*(\tilde{\nabla }\oplus \nabla _{\varSigma })\chi _*\),

$$\begin{aligned} \Vert u\Vert _{{{\bar{W}}}_{p}^2(U\setminus S,\,\hat{g})} \le c\,\Vert \chi _*u\Vert _{{{\bar{{{\mathcal {W}}}}}}_p^2}. \end{aligned}$$
(14.5)

If \((u_j)\) is a converging sequence in \({{\bar{W}}}_{p}^2:={{\bar{W}}}_{p}^2(U\setminus S;\hat{g})\), then we infer from (14.1)–(14.3) that \((\chi _*u_j)\) converges in \({{\bar{{{\mathcal {W}}}}}}_{p}^2\). Hence, \(\chi _*u\in {{\bar{{{\mathcal {W}}}}}}_{p}^2\) if \(u\in {{\bar{W}}}_{p}^2\). From this, (14.5), and Banach’s homomorphism theorem, we obtain

$$\begin{aligned} \Vert \chi _*\!\cdot \Vert _{{{\bar{{{\mathcal {W}}}}}}_{p}^2} \sim \Vert \cdot \Vert _{{{\bar{W}}}_{p}^2}. \end{aligned}$$

This proves the claim.\(\square \)

Let \(\lambda :V_\lambda \rightarrow (-1,1)^{m-2}\), \(\sigma \mapsto z\) be a local chart for \(\varSigma \). Then,

$$\begin{aligned} \kappa :=({{\mathrm{id}}}_{\dot{N}}\times \lambda )\circ \chi :U_{\kappa }\rightarrow Q_\kappa ^m ,\quad q\mapsto (x,y,z)=(x^1,x^2,\ldots ,x^m) \end{aligned}$$

is a local chart for \(\hat{M}\) with \(U_{\kappa }\subset U\) and \(\kappa (U_{\kappa })=\dot{N}\times (-1,1)^{m-2}\). Set \(\tilde{h}:=\lambda _*h\). It follows from (5.4) that \(\kappa _*\hat{g}=\tilde{g}+\tilde{h}\).

Assume \(u\in C^2(\hat{M})\), put \(v:=\kappa _*u\in C^2(Q_\kappa ^m)\), and denote by \(e_1,\ldots ,e_m\) the standard basis of \({{\mathbb {R}}}^m\). Then,

$$\begin{aligned} \kappa _*(\mathop {{\mathrm{grad}}}\nolimits _{\hat{g}}u) =\mathop {{\mathrm{grad}}}\nolimits _{\tilde{g}}v+\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}v =\rho ^2(\partial _x ve_1+\partial _yve_2)+\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}v. \end{aligned}$$
(14.6)

Similarly, let \(X=X^i\partial /\partial x^i\in C^1(TU_{\kappa })\) and set

$$\begin{aligned} Y:=\kappa _*X=Y^ie_i=Y^1e_1+Y^2e_2+\tilde{Y}, \end{aligned}$$

where \(\tilde{Y}=Y^\alpha e_\alpha \) with \(\alpha \) running from 3 to m. Observe that v and Y depend on \((x,y,z)\). It follows that

$$\begin{aligned} \kappa _*(\mathop {{\mathrm{div}}}\nolimits _{\hat{g}}X) =\kappa _*\Bigl (\frac{1}{\sqrt{\hat{g}}}\,\partial _i\bigl (\sqrt{\hat{g}}\,X^i\bigr )\Bigr ) =\frac{1}{\sqrt{\kappa _*\hat{g}}}\,\partial _i\bigl (\sqrt{\kappa _*\hat{g}}\,Y^i\bigr ). \end{aligned}$$

Since \(\sqrt{\kappa _*\hat{g}}=\sqrt{\tilde{g}}\,\sqrt{\tilde{h}} =\rho ^{-2}\sqrt{\tilde{h}}\), we see that

$$\begin{aligned} \kappa _*(\mathop {{\mathrm{div}}}\nolimits _{\hat{g}}X) =\rho ^2\bigl (\partial _x(\rho ^{-2}Y^1)+\partial _y(\rho ^{-2}Y^2)\bigr ) +\mathop {{\mathrm{div}}}\nolimits _{\tilde{h}}\tilde{Y}. \end{aligned}$$
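
In the preceding step we used that \(\rho \) does not depend on the z variables, while \(\tilde{h}\) involves only these variables (as the splitting \(\kappa _*\hat{g}=\tilde{g}+\tilde{h}\) indicates); spelled out,

$$\begin{aligned} \frac{1}{\sqrt{\kappa _*\hat{g}}}\,\partial _i\bigl (\sqrt{\kappa _*\hat{g}}\,Y^i\bigr ) =\frac{\rho ^2}{\sqrt{\tilde{h}}}\Bigl (\partial _x\bigl (\rho ^{-2}\sqrt{\tilde{h}}\,Y^1\bigr ) +\partial _y\bigl (\rho ^{-2}\sqrt{\tilde{h}}\,Y^2\bigr )\Bigr ) +\rho ^2\,\frac{1}{\sqrt{\tilde{h}}}\,\partial _\alpha \bigl (\rho ^{-2}\sqrt{\tilde{h}}\,Y^\alpha \bigr ), \end{aligned}$$

where the first bracket reduces to \(\partial _x(\rho ^{-2}Y^1)+\partial _y(\rho ^{-2}Y^2)\) since \(\sqrt{\tilde{h}}\) is independent of \((x,y)\), and the last term equals \(\mathop {{\mathrm{div}}}\nolimits _{\tilde{h}}\tilde{Y}\) since \(\rho \) is independent of z.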

From this and (14.6), we obtain, letting \(\tilde{a}:=\kappa _*a\),

$$\begin{aligned} \begin{aligned} \kappa _*({{\mathcal {A}}}u)&=-\kappa _*\bigl (\mathop {{\mathrm{div}}}\nolimits _{\hat{g}}(a\mathop {{\mathrm{grad}}}\nolimits _{\hat{g}}u)\bigr )\\&=-\rho ^2\bigl (\partial _x(\tilde{a}\partial _xv)+\partial _y(\tilde{a}\partial _yv)\bigr ) -\mathop {{\mathrm{div}}}\nolimits _{\tilde{h}}(\tilde{a}\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}v). \end{aligned} \end{aligned}$$
(14.7)

As in (2.3), we define curvilinear derivatives by

$$\begin{aligned} \partial _\nu u:=\kappa ^*\partial _x(\kappa _*u) ,\quad \partial _\mu u:=\kappa ^*\partial _y(\kappa _*u). \end{aligned}$$

Then, it follows from (14.7) that

$$\begin{aligned} {{\mathcal {A}}}_Uu=-\rho ^2\bigl (\partial _\nu (a\partial _\nu u)+\partial _\mu (a\partial _\mu u)\bigr ) -\mathop {{\mathrm{div}}}\nolimits _\varSigma (a\mathop {{\mathrm{grad}}}\nolimits _\varSigma u), \end{aligned}$$
(14.8)

where \({{\mathcal {A}}}_U\) is the restriction of \({{\mathcal {A}}}\) to U. Due to (6.2), the regularity assumption for \(a\) stipulated in Theorem 2.1 means that \(a\in {\bar{BC}}^{1/\varvec{2}}\bigl ((\hat{M}\setminus S)\times J,\ \hat{g}+dt^2\bigr )\).

Analogously,

$$\begin{aligned} \kappa _*({{\mathcal {B}}}^1u)=\kappa _*\bigl (\nu \big |\gamma (a\mathop {{\mathrm{grad}}}\nolimits _{\hat{g}}u)\bigr )_{\hat{g}} =\Bigl (\kappa _*\nu \Big |\gamma \bigl (\tilde{a}(\mathop {{\mathrm{grad}}}\nolimits _{\tilde{g}}v +\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}v)\bigr )\Bigr )_{\tilde{g}+\tilde{h}}. \end{aligned}$$

From (10.15) and \((\kappa _*\hat{g})^{1i}=0\) for \(i\ge 2\), we deduce that \(\kappa _*\nu (0,y,z)=\rho (0,y)e_1\). Hence,

$$\begin{aligned} \kappa _*({{\mathcal {B}}}^1u)(y,z) =\rho (0,y)\tilde{a}(0,y,z)\partial _xv(0,y,z). \end{aligned}$$

This implies

$$\begin{aligned} {{\mathcal {B}}}_U^1u=\gamma (\rho a)\partial _\nu u \text { on }\varGamma \cap U. \end{aligned}$$
(14.9)

Next, we determine the first-order transmission operator on \(({{\hat{M}}},{{\hat{g}}})\). Recalling definition (4.4), we set \(s:=\chi ^*f\in C^\infty (U)\).

Proposition 14.3

Define

$$\begin{aligned} (\nu _S^1,\nu _S^2,\nu _S^3) :=(\partial _\nu s,-1,1) \Big /\sqrt{1+(\partial _\nu s)^2+|\mathop {{\mathrm{grad}}}\nolimits _\varSigma s|_\varSigma ^2}. \end{aligned}$$
(14.10)

Then, on \(S\cap U\),

$$\begin{aligned} (\nu _S|\mathop {{\mathrm{grad}}}\nolimits _{\hat{g}}u)_{\hat{g}} =\nu _S^1\partial _\nu u+\nu _S^2\partial _\mu u +\nu _S^3(\mathop {{\mathrm{grad}}}\nolimits _\varSigma s|\mathop {{\mathrm{grad}}}\nolimits _\varSigma u)_\varSigma . \end{aligned}$$

Proof

(1) It follows from (4.4) that

$$\begin{aligned} \tilde{S}:=\chi (S) =\bigl \{\,(x,f_\sigma (x),\sigma )\ ;\ 0<x<\varepsilon ,\ \sigma \in \varSigma \,\bigr \}. \end{aligned}$$

We write \(f(x,z):=f_{\lambda ^{-1}(z)}(x)\). Then, \((x,z)\mapsto F(x,z):=\bigl (x,f(x,z),z\bigr )\) is a local parametrization of \(\tilde{S}\), and

$$\begin{aligned} \partial F:=[\partial _jF^i] =\left[ \begin{array}{cc} 1 &{} 0\\ \partial _xf &{} \partial _zf\\ 0 &{} 1_{m-2} \end{array} \right] \in {{\mathbb {R}}}^{m\times (m-1)}. \end{aligned}$$

Hence, given \(({{\bar{x}}},{{\bar{z}}})\in (0,\varepsilon )\times (-1,1)^{m-2}\),

$$\begin{aligned} T_{({{\bar{x}}},{{\bar{z}}})}\tilde{S} =\bigl \{({{\bar{x}}},{{\bar{z}}})\bigr \}\times \partial F({{\bar{x}}},{{\bar{z}}}){{\mathbb {R}}}^{m-1} \subset T_{({{\bar{x}}},{{\bar{z}}})}{{\mathbb {R}}}^m. \end{aligned}$$

For \(\varXi :=(\xi ,\zeta )\in {{\mathbb {R}}}\times {{\mathbb {R}}}^{m-2}\) and \(\hat{\varXi }:=(\hat{\xi },\hat{\eta },\hat{\zeta })\in {{\mathbb {R}}}\times {{\mathbb {R}}}\times {{\mathbb {R}}}^{m-2}\) we find

$$\begin{aligned} \bigl (\hat{\varXi }\big |\partial F({{\bar{x}}},{{\bar{z}}})\varXi \bigr )_{\tilde{g}+\tilde{h}} =\alpha ({{\bar{x}}},{{\bar{z}}}) \Bigl (\hat{\xi }\xi +\hat{\eta }\bigl (\partial _xf({{\bar{x}}},{{\bar{z}}})\xi +\partial _zf({{\bar{x}}},{{\bar{z}}})\zeta \bigr )\Bigr )_{{{\mathbb {R}}}^2}+(\hat{\zeta }|\zeta )_{\tilde{h}}, \end{aligned}$$

where \(\alpha ({{\bar{x}}},{{\bar{z}}}):=\rho ^{-2}\bigl ({{\bar{x}}},f({{\bar{x}}},{{\bar{z}}})\bigr )\). Choose \(\hat{\xi }:=\partial _xf({{\bar{x}}},{{\bar{z}}})\), \(\hat{\eta }:=-1\), and \(\hat{\zeta }\) in \({{\mathbb {R}}}^{m-2}\) such that \((\hat{\zeta }|\tilde{\zeta })_{\tilde{h}} =\bigl \langle \partial _zf({{\bar{x}}},{{\bar{z}}}),\tilde{\zeta }\bigr \rangle _{{{\mathbb {R}}}^{m-2}}\) for all \(\tilde{\zeta }\in {{\mathbb {R}}}^{m-2}\), that is, \(\hat{\zeta }\) equals \(\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}f({{\bar{x}}},{{\bar{z}}})\). Then \(\bigl (({{\bar{x}}},{{\bar{z}}}),\hat{\varXi }\bigr )\perp T_{({{\bar{x}}},{{\bar{z}}})}\tilde{S}\). Now we define

$$\begin{aligned} (\tilde{\nu }^1,\tilde{\nu }^2,\tilde{\nu }^3) :=(\partial _xf,-1,1)\Big / \sqrt{1+(\partial _xf)^2+|\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}f|_{\tilde{h}}^2}. \end{aligned}$$

Then,

$$\begin{aligned} \tilde{\nu }:=\tilde{\nu }^1e_1+\tilde{\nu }^2e_2+\tilde{\nu }^3\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}f \end{aligned}$$

is the positive normal on \(\tilde{S}\), since \(\tilde{\nu }^1({{\bar{x}}},{{\bar{z}}})e_1+\tilde{\nu }^2({{\bar{x}}},{{\bar{z}}})e_2\) is a positive multiple of the positive normal of the graph of \(f_{{{\bar{\sigma }}}}\) at \(\bigl ({{\bar{x}}},f_{{{\bar{\sigma }}}}({{\bar{x}}})\bigr )\), where \({{\bar{\sigma }}}:=\lambda ^{-1}({{\bar{z}}})\) (cf. (5.9)).

(2) Using (14.6) and \(\kappa _*\nu _S=\tilde{\nu }\), we obtain

$$\begin{aligned} \begin{aligned} \kappa _*(\nu _S|\mathop {{\mathrm{grad}}}\nolimits _{\hat{g}}u)_{\hat{g}}&=(\kappa _*\nu _S|\mathop {{\mathrm{grad}}}\nolimits _{\tilde{g}}v+\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}v)_{\tilde{g}+\tilde{h}}\\&=\tilde{\nu }^1\partial _xv+\tilde{\nu }^2\partial _yv +\tilde{\nu }^3(\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}f|\mathop {{\mathrm{grad}}}\nolimits _{\tilde{h}}v)_{\tilde{h}}\\&=\kappa _*\bigl (\nu _S^1\partial _\nu u+\nu _S^2\partial _\mu u +\nu _S^3(\mathop {{\mathrm{grad}}}\nolimits _\varSigma s|\mathop {{\mathrm{grad}}}\nolimits _\varSigma u)_\varSigma \bigr ), \end{aligned} \end{aligned}$$

the \(\nu _S^i\) being given by (14.10). From this, the assertion is now clear.\(\square \)

Note that, see (14.4),

$$\begin{aligned} \chi _*(\mathrm{d}\mathop {{\mathrm{vol}}}\nolimits _{\varGamma \cap U})\sim \frac{\mathrm{d}y}{y^2}\,\mathrm{d}\mathop {{\mathrm{vol}}}\nolimits _\varSigma . \end{aligned}$$
(14.11)

Since, by (4.4),

$$\begin{aligned} \tilde{g}_{G_\sigma }(\tau ) =\frac{1+(\partial f_\sigma (x))^2}{x^2+(f_\sigma (x))^2}\,\mathrm{d}x^2\sim \frac{\mathrm{d}x^2}{x^2} \end{aligned}$$

uniformly with respect to \(\tau =\bigl (x,f_\sigma (x)\bigr )\in G_\sigma \) and \(\sigma \in \varSigma \), we see that

$$\begin{aligned} \chi _*(\mathrm{d}\mathop {{\mathrm{vol}}}\nolimits _{S\cap U})\sim \frac{\mathrm{d}x}{x^2}\,\mathrm{d}\mathop {{\mathrm{vol}}}\nolimits _\varSigma . \end{aligned}$$
(14.12)

On the basis of (14.11) and (14.12) it is possible to represent the trace spaces on \(\varGamma \setminus \varSigma \) and \(S\setminus \varSigma \) analogously to \({{\bar{{{\mathcal {W}}}}}}_{p}^{2/\varvec{2}}\). Details are left to the reader.

Proof of Theorem 2.1

The claim is an easy consequence of Theorems 4.4 and 7.1, (14.2), (14.3), Proposition 14.2, (14.8), (14.9), and Proposition 14.3.\(\square \)