1 Introduction

This paper is concerned with the evolution of compressible and incompressible viscous fluids separated by a sharp interface. Typical physical interpretations of our problem are the evolution of a bubble in an incompressible fluid flow, or of a drop in a volume of gas. The problem is formulated as follows. Let Ω_± be two domains. The region Ω_+ is occupied by a compressible barotropic viscous fluid and the region Ω_- by an incompressible viscous fluid. Let Γ_± and S_± be the boundaries of Ω_± such that Γ_± ∩ S_± = ∅. We assume that Γ_+ = Γ_- and S_+ ∩ S_- = ∅. One of S_±, or both, may be empty. Let Γ_t, S_t^-, and Ω_t^± be the time evolutions of Γ = Γ_+ = Γ_-, S_-, and Ω_±, respectively, where t is the time variable. We assume that the two fluids are immiscible, so that Ω_t^+ ∩ Ω_t^- = ∅ for any t ≥ 0. Moreover, we assume that no phase transitions occur, and for mathematical simplicity we do not take surface tension into account on the interface Γ_t or on the free boundary S_t^-. Thus, the motion of the fluids is governed by the following system of equations:

\[
\begin{cases}
\rho_+(\partial_t u_+ + u_+\cdot\nabla u_+) - \operatorname{Div} S_+(u_+, P(\rho_+)) = 0, \quad \partial_t\rho_+ + \operatorname{div}(\rho_+ u_+) = 0 & \text{in } \Omega_t^+,\\
\rho_-(\partial_t u_- + u_-\cdot\nabla u_-) - \operatorname{Div} S_-(u_-, \pi) = 0, \quad \operatorname{div} u_- = 0 & \text{in } \Omega_t^-,\\
S_+(u_+, P(\rho_+))\,n_t|_{\Gamma_t}^+ - S_-(u_-, \pi)\,n_t|_{\Gamma_t}^- = 0, \quad u_+|_{\Gamma_t}^+ - u_-|_{\Gamma_t}^- = 0,\\
u_+|_{S_+} = 0, \quad S_-(u_-, \pi)\,n_t^-|_{S_t^-} = 0
\end{cases}
\]
(1.1)

for t(0,T), subject to the initial conditions

\[
(u_+, \rho_+)|_{t=0} = (u_{+0}, \rho_{+0}) \quad\text{in } \Omega_+, \qquad u_-|_{t=0} = u_{-0} \quad\text{in } \Omega_-.
\]
(1.2)

Here, ∂_t = ∂/∂t, ρ_- is a positive constant denoting the mass density of the reference domain Ω_-, P is a pressure function, and u_± = (u_{±1}, …, u_{±N}) (N ≥ 2), ρ_+, and π are the unknown velocities, scalar mass density, and scalar pressure, respectively. Moreover, S_± are the stress tensors defined by

\[
S_+(u_+, \pi_+) = \mu_+ D(u_+) + (\nu_+ - \mu_+)\operatorname{div} u_+\, I - \pi_+ I, \qquad
S_-(u_-, \pi_-) = \mu_- D(u_-) - \pi_- I,
\]
(1.3)

where D(v) denotes the doubled strain tensor whose (i,j) component is D_{ij}(v) = ∂_i v_j + ∂_j v_i with ∂_i = ∂/∂x_i, and we set div v = Σ_{ℓ=1}^N ∂_ℓ v_ℓ and v·∇ = Σ_{j=1}^N v_j ∂_j for any vector of functions v = (v_1, …, v_N). Also, for any matrix field K with (i,j) component K_{ij}, the quantity Div K is the N-vector whose i-th component is Σ_{j=1}^N ∂_j K_{ij}. Finally, I stands for the N×N identity matrix, n_t is the unit normal to Γ_t pointing from Ω_t^- into Ω_t^+, n_t^- is the unit outward normal to S_t^-, and μ_± and ν_+ are the first and second viscosity coefficients, respectively, which are assumed to be constant and to satisfy the condition

μ ± >0, ν + >0,
(1.4)

and f | Γ t ± and f | S t are defined by

\[
f|_{\Gamma_t}^{\pm}(x_0) = \lim_{\substack{x\to x_0\\ x\in\Omega_t^{\pm}}} f(x) \quad\text{for } x_0\in\Gamma_t, \qquad
f|_{S_t^-}(x_0) = \lim_{\substack{x\to x_0\\ x\in\Omega_t^-}} f(x) \quad\text{for } x_0\in S_t^-.
\]

Aside from the dynamical system (1.1), further kinematic conditions on Γ t and S t are satisfied, which give

Γ t = { x R N x = x ( ξ , t ) ( ξ Γ + ) } , S t = { x R N x = x ( ξ , t ) ( ξ S ) } .
(1.5)

Here, x=x(ξ,t) is the solution to the Cauchy problem:

d x d t =u(x,t)(t>0),x | t = 0 =ξin  Ω ¯

with u(x,t) = u_+(x,t) for x ∈ Ω_+ and u(x,t) = u_-(x,t) for x ∈ Ω_-. This expresses the fact that the interface Γ_t and the free boundary S_t^- consist of the same fluid particles for all t > 0: particles do not leave them and no particles enter them from Ω_t^±. In particular, we exclude mass transport through the interface Γ_t, because we assume that the two fluids are immiscible.
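The Cauchy problem above can be made concrete by a small numerical sketch: for a prescribed velocity field one integrates dx/dt = u(x,t) to obtain the particle trajectories that carry Γ and S_- to Γ_t and S_t^-. The velocity field below is purely illustrative and is not taken from the paper.

```python
import numpy as np

def flow_map(xi, T, u, steps=200):
    """Integrate dx/dt = u(x, t), x(0) = xi, by the classical RK4 scheme.

    xi : (N,) initial particle position, u : callable (x, t) -> (N,) velocity.
    Returns an approximation of the particle position x(xi, T)."""
    x = np.asarray(xi, dtype=float)
    dt = T / steps
    t = 0.0
    for _ in range(steps):
        k1 = u(x, t)
        k2 = u(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = u(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = u(x + dt * k3, t + dt)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

# Illustrative solenoidal field (our choice): a rigid rotation in the plane.
u_rot = lambda x, t: np.array([-x[1], x[0]])

# Transport a point of the initial interface {x_2 = 0} for time T = pi/2.
print(flow_map(np.array([1.0, 0.0]), np.pi / 2, u_rot))  # approximately (0, 1)
```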

Denisova [1] proved a local in time unique existence theorem for problem (1.1) with surface tension on Γ_t under the assumptions that μ_+ < μ_- and μ_-/ρ_- < μ_+/R with some positive constant R, and that Ω_+ is bounded and Ω_- = R^N \ \overline{Ω_+}. Here, ρ_- is the positive constant describing the mass density of the reference body Ω_-. Thus, in [1], both of S_± are empty sets. The purpose of our study is to prove a local in time unique existence theorem in a general uniform domain under the assumption (1.4) only. In particular, our assumption on the viscosity coefficients is weaker than that of Denisova [1] and is the one widely accepted in the study of fluid dynamics.

As for related topics on the two-phase problem for viscous fluid flows, the incompressible-incompressible case has been studied in [2]–[11] and the compressible-compressible case in [12, 13], as far as the authors know.

To prove a local in time existence theorem for (1.1), we transform (1.1) to the equations in fixed domains Ω ± by using the Lagrange transform (cf. Denisova [1]), so that the key step is to prove the maximal regularity for the linearized problem

\[
\begin{cases}
\hat\gamma_0^+\,\partial_t u_+ - \operatorname{Div} S_+(u_+, \hat\gamma_2^+ p_+) = g_+, \quad \partial_t p_+ + \hat\gamma_1^+\operatorname{div} u_+ = f_+ & \text{in } \Omega_+,\\
\gamma_0^-\,\partial_t u_- - \operatorname{Div} S_-(u_-, p_-) = g_-, \quad \operatorname{div} u_- = f_- & \text{in } \Omega_-,\\
S_+(u_+, \hat\gamma_2^+ p_+)\,n|_{\Gamma}^+ - S_-(u_-, p_-)\,n|_{\Gamma}^- = h, \quad u_+|_{\Gamma}^+ - u_-|_{\Gamma}^- = 0,\\
u_+|_{S_+} = 0, \quad S_-(u_-, p_-)\,n_-|_{S_-} = h_-
\end{cases}
\]
(1.6)

for any t ∈ (0,T), subject to the initial conditions (1.2), where f|_Γ^±(x_0) = lim_{x→x_0, x∈Ω_±} f(x) for x_0 ∈ Γ. Here, γ_0^- is a positive constant and γ̂_i^+ (i = 0,1,2) are functions defined on Ω̄_+ such that

\[
\omega_0 \le \hat\gamma_i^+(x) \le \omega_1 \quad (x\in\overline{\Omega_+}), \qquad \nabla\hat\gamma_i^+ \in L_r(\Omega_+)
\]

for i = 0,1,2 with some positive constants ω_0 and ω_1 and some exponent r ∈ (N,∞), and γ_0^- is a positive number describing the mass density of the fluid occupying Ω_-. Our strategy for obtaining the maximal L_p-L_q result for (1.6) is to show the existence of an ℛ-bounded solution operator R(λ) for the corresponding generalized resolvent problem:

\[
\begin{cases}
\hat\gamma_0^+\lambda \hat u_+ - \operatorname{Div} S_+(\hat u_+, \hat\gamma_2^+\hat p_+) = \hat g_+, \quad \lambda \hat p_+ + \hat\gamma_1^+\operatorname{div}\hat u_+ = \hat f_+ & \text{in } \Omega_+,\\
\gamma_0^-\lambda \hat u_- - \operatorname{Div} S_-(\hat u_-, \hat p_-) = \hat g_-, \quad \operatorname{div}\hat u_- = \hat f_- & \text{in } \Omega_-,\\
S_+(\hat u_+, \hat\gamma_2^+\hat p_+)\,n|_{\Gamma}^+ - S_-(\hat u_-, \hat p_-)\,n|_{\Gamma}^- = \hat h, \quad \hat u_+|_{\Gamma}^+ - \hat u_-|_{\Gamma}^- = 0,\\
\hat u_+|_{S_+} = 0, \quad S_-(\hat u_-, \hat p_-)\,n_-|_{S_-} = \hat h_-.
\end{cases}
\]
(1.7)

Here, f ˆ denotes the Laplace transform of f with respect to t. In fact, solutions u ˆ ± and p ˆ ± are represented by

( u ˆ ± , p ˆ ± )=R(λ) ( f ˆ ± ( λ ) , g ˆ ± ( λ ) , h ˆ ( λ ) , h ˆ ( λ ) ) ,

so that roughly speaking, we can represent the solutions ( u ± (t), p ± (t)) to the non-stationary problem (1.6) by

( u ± ( t ) , p ± ( t ) ) = L 1 [ R ( λ ) ( f ˆ ± ( λ ) , g ˆ ± ( λ ) , h ˆ ( λ ) , h ˆ ( λ ) ) ] (t)

with Laplace inverse transform L 1 . Thus, we get the maximal L p - L q regularity result:

\[
\int_0^\infty e^{-p\gamma t}\Bigl\{\bigl\|(p_+(\cdot,t), \partial_t p_+(\cdot,t))\bigr\|_{W_q^1(\Omega_+)}
+ \sum_{\ell=\pm}\bigl(\|\partial_t u_\ell(\cdot,t)\|_{L_q(\Omega_\ell)} + \|u_\ell(\cdot,t)\|_{W_q^2(\Omega_\ell)}\bigr)\Bigr\}^p\,dt
\le C\,\bigl\{\text{suitable norms of the initial data and right members in (1.6)}\bigr\} \quad (1<p,q<\infty)
\]

for some positive constants γ and C, with the help of the Weis operator-valued Fourier multiplier theorem [14]. To construct an ℛ-bounded solution operator for (1.7), problem (1.7) is reduced locally to model problems in a neighborhood of an interface point, an interior point, or a boundary point by using the localization technique and a partition of unity. The model problems for the interior point and the boundary point have been studied, but the model problem for the interface point was studied only by Denisova [1] under some restriction on the viscosity coefficients. Moreover, she studied the problem in the L_2 framework, so that the Plancherel formula is applicable. Our final goal, however, is to treat the nonlinear problem (1.1) under (1.4) and (1.5) in the maximal L_p-L_q regularity class, so that we need different ideas. The core of our approach is to construct an ℛ-bounded solution operator for (1.7). Thus, in this paper we construct the ℛ-bounded solution operator for the model problem, and in the forthcoming paper [15] we construct an ℛ-bounded solution operator for (1.7) in a domain. Moreover, in [15] the maximal L_p-L_q regularity in a domain is derived automatically with the help of the Weis operator-valued Fourier multiplier theorem, so that a local in time unique existence theorem is proved by the usual contraction mapping principle based on the maximal L_p-L_q regularity.
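The passage from the ℛ-bounded resolvent representation to the time-dependent solution goes through the inverse Laplace transform along a Bromwich line. The toy computation below mimics u(t) = L^{-1}[R(λ)F̂(λ)](t) for a scalar resolvent of our own choosing; it only illustrates the structure of the argument, not the operator families of this paper.

```python
import numpy as np

def laplace_invert(R_hat, t, gamma=1.0, T=500.0, n=200_000):
    """Approximate u(t) = (1/2*pi*i) * int_{gamma-iT}^{gamma+iT} e^{lam*t} R_hat(lam) dlam
    by the trapezoidal rule on the line lam = gamma + i*tau."""
    tau = np.linspace(-T, T, n)
    lam = gamma + 1j * tau
    vals = np.exp(lam * t) * R_hat(lam)
    dtau = tau[1] - tau[0]
    integral = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dtau
    return integral.real / (2.0 * np.pi)

a = 2.0
R_hat = lambda lam: 1.0 / (lam + a)          # toy resolvent (our choice), not R(lambda) of (1.7)
t = 1.0
print(laplace_invert(R_hat, t), np.exp(-a * t))   # both approximately e^{-2}
```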

Now we formulate our problem studied in this paper and state the main results. Let R + N , R N , and R 0 N be the upper half-space, lower half-space and their boundary defined by

\[
\mathbb{R}_\pm^N = \{x=(x_1,\dots,x_N)\in\mathbb{R}^N \mid \pm x_N > 0\}, \qquad
\mathbb{R}_0^N = \{x=(x_1,\dots,x_N)\in\mathbb{R}^N \mid x_N = 0\}.
\]

In this paper, we consider the following model problem:

\[
\begin{cases}
\lambda u_+ - (\gamma_0^+)^{-1}\operatorname{Div} S_+(u_+, \gamma_2^+ p_+) = g_+, \quad \lambda p_+ + \gamma_1^+\operatorname{div} u_+ = f_+ & \text{in } \mathbb{R}_+^N,\\
\lambda u_- - (\gamma_0^-)^{-1}\operatorname{Div} S_-(u_-, p_-) = g_-, \quad \operatorname{div} u_- = 0 & \text{in } \mathbb{R}_-^N,\\
S_+(u_+, \gamma_2^+ p_+)\,n|_{x_N=0+} - S_-(u_-, p_-)\,n|_{x_N=0-} = h, \quad u_+|_{x_N=0+} - u_-|_{x_N=0-} = k & \text{on } \mathbb{R}_0^N.
\end{cases}
\]
(1.8)

Throughout the paper, n = (0,…,0,1), the quantities γ_0^±, γ_1^+, and γ_2^+ are fixed positive constants, and the condition (1.4) holds. Substituting the relation p_+ = (f_+ − γ_1^+ div u_+)λ^{-1} into the equations in (1.8), we have

\[
\begin{aligned}
&\lambda u_+ - (\gamma_0^+)^{-1}\operatorname{Div}\bigl[\mu_+ D(u_+) + (\nu_+ - \mu_+ + \gamma_1^+\gamma_2^+\lambda^{-1})\operatorname{div} u_+\,I\bigr]
 = g_+ - (\gamma_0^+)^{-1}\gamma_2^+\lambda^{-1}\nabla f_+,\\
&\bigl(\mu_+ D(u_+) + (\nu_+ - \mu_+ + \gamma_1^+\gamma_2^+\lambda^{-1})\operatorname{div} u_+\,I\bigr)n|_{x_N=0+} - S_-(u_-, p_-)\,n|_{x_N=0-}
 = h + \gamma_2^+\lambda^{-1} f_+ n.
\end{aligned}
\]

Thus, renaming g_+ − (γ_0^+)^{-1}γ_2^+λ^{-1}∇f_+ and h + γ_2^+λ^{-1}f_+ n as g_+ and h, respectively, and defining S_δ^+(u_+) by

S δ + ( u + )= μ + D( u + )+( ν + μ + +δ)div u + I,
(1.9)

mainly we consider the following problem:

\[
\begin{cases}
\lambda u_+ - (\gamma_0^+)^{-1}\operatorname{Div} S_\delta^+(u_+) = g_+ & \text{in } \mathbb{R}_+^N,\\
\lambda u_- - (\gamma_0^-)^{-1}\operatorname{Div} S_-(u_-, p_-) = g_-, \quad \operatorname{div} u_- = 0 & \text{in } \mathbb{R}_-^N,\\
S_\delta^+(u_+)\,n|_{x_N=0+} - S_-(u_-, p_-)\,n|_{x_N=0-} = h, \quad u_+|_{x_N=0+} - u_-|_{x_N=0-} = k & \text{on } \mathbb{R}_0^N.
\end{cases}
\]
(1.10)

Here, δ is not only γ_1^+γ_2^+λ^{-1} but is also allowed to be a suitable complex number. More precisely, we consider the following three cases for δ and λ:

(C1) δ = γ_1^+γ_2^+λ^{-1}, λ ∈ Σ_{ε,λ_0} ∩ K_ε.

(C2) δ ∈ Σ_ε with Re δ < 0, λ ∈ C with |λ| ≥ λ_0 and Re λ ≥ |Re δ/Im δ| |Im λ|.

(C3) δ ∈ Σ_ε with Re δ ≥ 0, λ ∈ C with |λ| ≥ λ_0 and Re λ ≥ −λ_0 |Im λ|.

Here, Σ_ε = {λ ∈ C∖{0} : |arg λ| ≤ π − ε} with 0 < ε < π/2, Σ_{ε,λ_0} = {λ ∈ Σ_ε : |λ| ≥ λ_0}, and

\[
K_\varepsilon = \bigl\{\lambda\in\mathbb{C} : \bigl(\operatorname{Re}\lambda + \gamma_1^+\gamma_2^+\nu_+^{-1} + \varepsilon\bigr)^2 + (\operatorname{Im}\lambda)^2 \ge \bigl(\gamma_1^+\gamma_2^+\nu_+^{-1} + \varepsilon\bigr)^2\bigr\}.
\]
(1.11)
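For quick reference, these three sets admit direct membership tests. The sketch below encodes them exactly as written above; in particular, the non-strict inequality in K_ε reflects our reading of (1.11), and all parameter values in the example are illustrative.

```python
import cmath

def in_Sigma(lam, eps):
    """lam in Sigma_eps = {lam != 0 : |arg lam| <= pi - eps}."""
    return lam != 0 and abs(cmath.phase(lam)) <= cmath.pi - eps

def in_Sigma_lam0(lam, eps, lam0):
    """lam in Sigma_{eps, lam0} = {lam in Sigma_eps : |lam| >= lam0}."""
    return in_Sigma(lam, eps) and abs(lam) >= lam0

def in_K(lam, eps, gamma1p, gamma2p, nup):
    """lam in K_eps: exterior (with boundary) of a disc touching the origin from the left."""
    c = gamma1p * gamma2p / nup + eps
    return (lam.real + c) ** 2 + lam.imag ** 2 >= c ** 2

# Example: lambda = 1 + i lies in Sigma_{eps, lam0} and in K_eps for these parameters.
print(in_Sigma_lam0(1 + 1j, 0.1, 0.5), in_K(1 + 1j, 0.1, 1.0, 1.0, 1.0))
```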

We define Γ ϵ , λ 0 by

\[
\Gamma_{\varepsilon,\lambda_0} =
\begin{cases}
\Sigma_{\varepsilon,\lambda_0}\cap K_\varepsilon & \text{in case (C1)},\\
\{\lambda\in\mathbb{C} : |\lambda|\ge\lambda_0,\ \operatorname{Re}\lambda \ge |\operatorname{Re}\delta/\operatorname{Im}\delta|\,|\operatorname{Im}\lambda|\} & \text{in case (C2)},\\
\{\lambda\in\mathbb{C} : |\lambda|\ge\lambda_0,\ \operatorname{Re}\lambda \ge -\lambda_0|\operatorname{Im}\lambda|\} & \text{in case (C3)}.
\end{cases}
\]
(1.12)

The case (C1) is used to prove the existence of an ℛ-bounded solution operator for (1.8), and the cases (C2) and (C3) are used for a homotopy argument in proving the exponential stability of the analytic semigroup in a bounded domain. Such a homotopy argument already appeared in [16] and [17] in the non-slip condition case. In (C2), we note that Im δ ≠ 0 when δ ∈ Σ_ε with Re δ < 0.

In case (C1), |δ| = |γ_1^+γ_2^+λ^{-1}| ≤ γ_1^+γ_2^+λ_0^{-1}. On the other hand, in cases (C2) and (C3) we assume that |δ| ≤ δ_0 for some δ_0 > 0. Thus, in all cases we assume that

\[
|\delta| \le \max\bigl(\gamma_1^+\gamma_2^+\lambda_0^{-1},\ \delta_0\bigr).
\]
(1.13)

We may include the case γ_1^+γ_2^+ = 0 in (1.9), which corresponds to the Lamé system. We may also consider the case div u_- = f_- in (1.8) under the condition that f_- ∈ W_q^1(R_-^N) and f_- = div F_- with some F_- ∈ L_q(R_-^N)^N. In fact, first we solve the equation div u_- = f_- in R_-^N, which transfers the problem to the case f_- = 0 (cf. Shibata [18], Section 3). Thus, for simplicity we only consider the case f_- = 0 in this paper.
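The reduction to f_- = 0 amounts to subtracting a particular vector field whose divergence is f_-, for instance ∇Δ^{-1}f_-. The sketch below computes such a field with the FFT on a periodic box, which is only a stand-in for R_-^N; it illustrates the idea and does not reproduce the construction of Shibata [18].

```python
import numpy as np

def divergence_corrector(f):
    """Given mean-zero f on a periodic grid, return u = grad(Laplace^{-1} f), so that div u = f."""
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)          # wavenumbers on the unit torus
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                                          # avoid 0/0; the zero mode vanishes anyway
    phih = -np.fft.fft2(f) / k2                             # Fourier side of Laplace^{-1} f
    ux = np.fft.ifft2(1j * kx * phih).real                  # d_x phi
    uy = np.fft.ifft2(1j * ky * phih).real                  # d_y phi
    return ux, uy

# Check div u = f spectrally for a smooth mean-zero f.
n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
ux, uy = divergence_corrector(f)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
div = np.fft.ifft2(1j * kx * np.fft.fft2(ux) + 1j * ky * np.fft.fft2(uy)).real
print(np.max(np.abs(div - f)))   # near machine precision
```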

Before stating our main results, we introduce several symbols and functional spaces used throughout the paper. For the differentiations of scalar f and N-vector g=( g 1 ,, g N ), we use the following symbols:

\[
\nabla f = (\partial_1 f, \dots, \partial_N f), \quad \nabla^2 f = (\partial_i\partial_j f \mid i,j=1,\dots,N), \quad
\nabla g = (\partial_i g_j \mid i,j=1,\dots,N), \quad \nabla^2 g = (\partial_i\partial_j g_k \mid i,j,k=1,\dots,N).
\]

For any Banach space X with norm ‖·‖_X, X^d denotes the d-fold product space of X, while its norm is denoted by ‖·‖_X instead of ‖·‖_{X^d} for the sake of simplicity. For any domain D, L_q(D) and W_q^m(D) denote the usual Lebesgue and Sobolev spaces, and ‖·‖_{L_q(D)} and ‖·‖_{W_q^m(D)} their norms, respectively. We set Ŵ_q^1(R_-^N) = {θ ∈ L_{q,loc}(R_-^N) : ∇θ ∈ L_q(R_-^N)^N}. For any two Banach spaces X and Y, L(X,Y) denotes the set of all bounded linear operators from X into Y, and Hol(U,X) denotes the set of all X-valued holomorphic functions defined on U. The letter C denotes a generic constant and C_{a,b,…} a constant depending on a, b, …; the values of C and C_{a,b,…} may change from line to line. ℕ and ℂ denote the sets of all natural numbers and complex numbers, respectively, and we set ℕ_0 = ℕ ∪ {0}. For any multi-index α = (α_1,…,α_N) ∈ ℕ_0^N, we set ∂_x^α = (∂/∂x_1)^{α_1}⋯(∂/∂x_N)^{α_N}.

We introduce the definition of ℛ-boundedness.

Definition 1.1

A family of operators 𝒯 ⊂ L(X,Y) is called ℛ-bounded on L(X,Y) if there exist constants C > 0 and q ∈ [1,∞) such that for any n ∈ ℕ, {T_j}_{j=1}^n ⊂ 𝒯, {x_j}_{j=1}^n ⊂ X, and any sequence {r_j(u)}_{j=1}^n of independent, symmetric, {−1,1}-valued random variables on [0,1], we have the inequality

{ 0 1 j = 1 n r j ( u ) T j x j Y q d u } 1 q C { 0 1 j = 1 n r j ( u ) x j X q d u } 1 q .

The smallest such C is called ℛ-bound of T, which is denoted by R L ( X , Y ) (T).
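Definition 1.1 can be explored numerically for a concrete finite family: the sketch below draws random signs r_j and estimates the ratio appearing in the defining inequality for one fixed choice of matrices T_j and vectors x_j (the true ℛ-bound is a supremum over all n and all such choices). The family of plane rotations used here is our illustrative example, not one of the operator families of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_R_bound(Ts, xs, trials=2000, q=2):
    """Monte Carlo estimate of the constant C in Definition 1.1 for one fixed choice
    of operators Ts = [T_1, ..., T_n] and vectors xs = [x_1, ..., x_n]."""
    n = len(Ts)
    num, den = 0.0, 0.0
    for _ in range(trials):
        r = rng.choice([-1.0, 1.0], size=n)          # independent symmetric signs
        num += np.linalg.norm(sum(r[j] * Ts[j] @ xs[j] for j in range(n))) ** q
        den += np.linalg.norm(sum(r[j] * xs[j] for j in range(n))) ** q
    return (num / den) ** (1.0 / q)

# A family of rotation matrices is uniformly bounded, hence R-bounded on a Hilbert space.
Ts = [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]) for t in (0.1, 0.7, 1.3)]
xs = [rng.standard_normal(2) for _ in Ts]
print(empirical_R_bound(Ts, xs))   # for q = 2 and isometries the ratio is close to 1
```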

The following theorem is our main result in this paper.

Theorem 1.2

Let1<q<, 0<ϵ<π/2and λ 0 >0. Let Γ ϵ , λ 0 be the sets defined in (1.12). Let X q and X q be the sets defined by

\[
\begin{aligned}
X_q &= \bigl\{(g_+, g_-, h, k) \mid g_\pm\in L_q(\mathbb{R}_\pm^N)^N,\ h\in W_q^1(\mathbb{R}^N)^N,\ k\in W_q^2(\mathbb{R}^N)^N\bigr\},\\
\mathcal{X}_q &= \bigl\{F=(F_1^+, F_1^-, F_2, F_3, F_4, F_5, F_6) \mid F_1^\pm\in L_q(\mathbb{R}_\pm^N)^N,\ F_2, F_5\in L_q(\mathbb{R}^N)^{N^2},\ F_3, F_6\in L_q(\mathbb{R}^N)^N,\ F_4\in L_q(\mathbb{R}^N)^{N^3}\bigr\}.
\end{aligned}
\]

Then there exist operator families

A ± (λ)Hol ( Γ ϵ , λ 0 , L ( X q , W q 2 ( R ± N ) N ) ) , P (λ)Hol ( Γ ϵ , λ 0 , L ( X q , W ˆ q 1 ( R N ) ) )

such that u_± = A_±(λ)F_λ(g_+, g_-, h, k) and p_- = P_-(λ)F_λ(g_+, g_-, h, k) solve problem (1.10) uniquely for any (g_+, g_-, h, k) ∈ X_q and λ ∈ Γ_{ε,λ_0}, where F_λ(g_+, g_-, h, k) = (g_+, g_-, ∇h, λ^{1/2}h, ∇²k, λ^{1/2}∇k, λk).

Moreover, there exists a constant C depending on ϵ, q, and N such that

\[
\begin{aligned}
&\mathcal{R}_{\mathcal{L}(\mathcal{X}_q,\,L_q(\mathbb{R}_\pm^N)^{\tilde N})}\bigl(\{(\tau\partial_\tau)^\ell (G_\lambda A_\pm(\lambda)) \mid \lambda\in\Gamma_{\varepsilon,\lambda_0}\}\bigr) \le C \quad (\ell=0,1),\\
&\mathcal{R}_{\mathcal{L}(\mathcal{X}_q,\,L_q(\mathbb{R}_-^N)^{N})}\bigl(\{(\tau\partial_\tau)^\ell (\nabla P_-(\lambda)) \mid \lambda\in\Gamma_{\varepsilon,\lambda_0}\}\bigr) \le C \quad (\ell=0,1)
\end{aligned}
\]
(1.14)

with Ñ = N³ + N² + 2N and λ = γ + iτ, where G_λ is the operator defined by G_λ u = (λu, γu, λ^{1/2}∇u, ∇²u).

Setting p + = λ 1 ( f + γ 1 + div u + ) in (1.8), we have the following theorem concerning problem (1.8) immediately with the help of Theorem 1.2.

Theorem 1.3

Let1<q<, 0<ϵ<π/2and λ 0 >0. Let Γ ϵ , λ 0 be the sets defined in (1.12). Set

Y q = { ( f + , g + , g , h , k ) f + W q 1 ( R + N ) , ( g + , g , h , k ) X q } , Y q = { ( F 0 , F 1 + , F 1 , F 2 , F 3 , F 4 , F 5 , F 6 ) F 0 W q 1 ( R + N ) , ( F 1 + , F 1 , F 2 , F 3 , F 4 , F 5 , F 6 ) X q } .

Then there exist operator families

P + ( λ ) Hol ( Λ ϵ , λ 0 , L ( Y q , W q 1 ( R + N ) ) ) , U ± ( λ ) Hol ( Λ ϵ , λ 0 , L ( Y q , W q 2 ( R ± N ) N ) ) , P ( λ ) Hol ( Λ ϵ , λ 0 , L ( Y q , W ˆ q 1 ( R N ) ) )

such that for any( f + , g + , g ,h,k) Y q andλ Λ ϵ , λ 0 ,

p + = P + ( λ ) F λ ( f + , g + , g , h , k ) , u ± = U ± ( λ ) F λ ( f + , g + , g , h , k ) , p = P ( λ ) F λ ( f + , g + , g , h , k )

solve problem (1.8) uniquely, where F_λ(f_+, g_+, g_-, h, k) = (f_+, g_+, g_-, ∇h, λ^{1/2}h, ∇²k, λ^{1/2}∇k, λk).

Moreover, there exists a constant C depending on ϵ, λ 0 , q, and N such that

\[
\begin{aligned}
&\mathcal{R}_{\mathcal{L}(\mathcal{Y}_q,\,W_q^1(\mathbb{R}_+^N)^2)}\bigl(\{(\tau\partial_\tau)^\ell \{(\lambda,\gamma)\mathcal{P}_+(\lambda)\} \mid \lambda\in\Gamma_{\varepsilon,\lambda_0}\}\bigr) \le C \quad (\ell=0,1),\\
&\mathcal{R}_{\mathcal{L}(\mathcal{Y}_q,\,L_q(\mathbb{R}_\pm^N)^{\tilde N})}\bigl(\{(\tau\partial_\tau)^\ell (G_\lambda\mathcal{U}_\pm(\lambda)) \mid \lambda\in\Gamma_{\varepsilon,\lambda_0}\}\bigr) \le C \quad (\ell=0,1),\\
&\mathcal{R}_{\mathcal{L}(\mathcal{Y}_q,\,L_q(\mathbb{R}_-^N)^{N})}\bigl(\{(\tau\partial_\tau)^\ell (\nabla\mathcal{P}_-(\lambda)) \mid \lambda\in\Gamma_{\varepsilon,\lambda_0}\}\bigr) \le C \quad (\ell=0,1).
\end{aligned}
\]
(1.15)

2 Solution formulas for the model problem

To prove Theorem 1.2, first we consider problem (1.10) with g ± =0 in this section as a model problem, that is, we consider the following equations:

\[
\begin{cases}
\lambda u_+ - (\gamma_0^+)^{-1}\operatorname{Div} S_\delta^+(u_+) = 0 & \text{in } \mathbb{R}_+^N,\\
\lambda u_- - (\gamma_0^-)^{-1}\operatorname{Div} S_-(u_-, p_-) = 0, \quad \operatorname{div} u_- = 0 & \text{in } \mathbb{R}_-^N,\\
S_\delta^+(u_+)\,n|_{x_N=0+} - S_-(u_-, p_-)\,n|_{x_N=0-} = h, \quad u_+|_{x_N=0+} - u_-|_{x_N=0-} = k & \text{on } \mathbb{R}_0^N.
\end{cases}
\]
(2.1)

Let v̂ = F_{x'}[v](ξ', x_N) denote the partial Fourier transform with respect to the tangential variable x' = (x_1,…,x_{N−1}), with dual variable ξ' = (ξ_1,…,ξ_{N−1}), defined by F_{x'}[v](ξ', x_N) = ∫_{R^{N−1}} e^{−ix'·ξ'} v(x', x_N) dx'. Using the formulas

\[
\operatorname{Div} S_\delta^+(u_+) = \mu_+\Delta u_+ + (\nu_+ + \delta)\nabla\operatorname{div} u_+, \qquad
\operatorname{Div} S_-(u_-, p_-) = \mu_-\Delta u_- - \nabla p_-
\]

and applying the partial Fourier transform to (2.1), we transfer problem (2.1) to the ordinary differential equations

\[
\begin{cases}
\lambda\hat u_{+j} + (\gamma_0^+)^{-1}\mu_+ |\xi'|^2\hat u_{+j} - (\gamma_0^+)^{-1}\mu_+ D_N^2\hat u_{+j} - (\gamma_0^+)^{-1}(\nu_++\delta)\,i\xi_j\bigl(i\xi'\cdot\hat u_+' + D_N\hat u_{+N}\bigr) = 0 & \text{for } x_N>0,\\
\lambda\hat u_{+N} + (\gamma_0^+)^{-1}\mu_+ |\xi'|^2\hat u_{+N} - (\gamma_0^+)^{-1}\mu_+ D_N^2\hat u_{+N} - (\gamma_0^+)^{-1}(\nu_++\delta)\,D_N\bigl(i\xi'\cdot\hat u_+' + D_N\hat u_{+N}\bigr) = 0 & \text{for } x_N>0,\\
\lambda\hat u_{-j} + (\gamma_0^-)^{-1}\mu_- |\xi'|^2\hat u_{-j} - (\gamma_0^-)^{-1}\mu_- D_N^2\hat u_{-j} + (\gamma_0^-)^{-1}\,i\xi_j\,\hat p_- = 0 & \text{for } x_N<0,\\
\lambda\hat u_{-N} + (\gamma_0^-)^{-1}\mu_- |\xi'|^2\hat u_{-N} - (\gamma_0^-)^{-1}\mu_- D_N^2\hat u_{-N} + (\gamma_0^-)^{-1}\,D_N\hat p_- = 0 & \text{for } x_N<0,\\
i\xi'\cdot\hat u_-' + D_N\hat u_{-N} = 0 & \text{for } x_N<0,
\end{cases}
\]
(2.2)

subject to the boundary conditions

\[
\begin{cases}
\mu_+\bigl(D_N\hat u_{+j} + i\xi_j\hat u_{+N}\bigr)\big|_{x_N=0+} - \mu_-\bigl(D_N\hat u_{-j} + i\xi_j\hat u_{-N}\bigr)\big|_{x_N=0-} = \hat h_j(0),\\
\bigl(2\mu_+ D_N\hat u_{+N} + (\nu_+-\mu_++\delta)\bigl(i\xi'\cdot\hat u_+' + D_N\hat u_{+N}\bigr)\bigr)\big|_{x_N=0+} - \bigl(2\mu_- D_N\hat u_{-N} - \hat p_-\bigr)\big|_{x_N=0-} = \hat h_N(0),\\
\hat u_{+J}(0+) - \hat u_{-J}(0-) = \hat k_J(0),
\end{cases}
\]
(2.3)

where D_N = d/dx_N and iξ'·v̂' = Σ_{ℓ=1}^{N−1} iξ_ℓ v̂_ℓ for v = (v_1,…,v_{N−1},v_N). Here and in the following, j and J run from 1 through N−1 and N, respectively. Applying the divergence to the equations for u_+ and u_- in (2.1), we have λ div u_+ − (γ_0^+)^{-1}(μ_+ + ν_+ + δ)Δ div u_+ = 0 in R_+^N and Δp_- = 0 in R_-^N, so that

\[
\bigl(\lambda - (\gamma_0^+)^{-1}(\mu_++\nu_++\delta)\Delta\bigr)\bigl(\lambda - (\gamma_0^+)^{-1}\mu_+\Delta\bigr)u_+ = 0 \quad\text{in } \mathbb{R}_+^N, \qquad
\bigl(\lambda - (\gamma_0^-)^{-1}\mu_-\Delta\bigr)\Delta u_- = 0 \quad\text{in } \mathbb{R}_-^N.
\]

Thus, the characteristic roots of (2.2) are

\[
A_+ = \sqrt{\gamma_0^+(\mu_++\nu_++\delta)^{-1}\lambda + A^2}, \qquad
B_\pm = \sqrt{\gamma_0^\pm(\mu_\pm)^{-1}\lambda + A^2}, \qquad A = |\xi'|.
\]
(2.4)
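A quick numerical sketch of (2.4) with illustrative parameter values of our own choosing: it checks the defining quadratic relation for A_+ and records the ratios of the real parts and moduli of A_+, B_± to |λ|^{1/2} + A, which stay in a fixed positive interval, in accordance with the estimates (4.1)-(4.2) proved in Section 4.

```python
import numpy as np

# Illustrative parameters (not from the paper).
mu_p, nu_p, mu_m = 1.0, 2.0, 3.0
g0_p, g0_m, delta = 1.0, 1.5, 0.3

def roots(lam, A):
    """Characteristic roots (2.4), taken with the principal branch of the square root."""
    Ap = np.sqrt(g0_p * lam / (mu_p + nu_p + delta) + A ** 2)
    Bp = np.sqrt(g0_p * lam / mu_p + A ** 2)
    Bm = np.sqrt(g0_m * lam / mu_m + A ** 2)
    return Ap, Bp, Bm

rng = np.random.default_rng(1)
ratios = []
for _ in range(1000):
    lam = rng.uniform(0.1, 10.0) * np.exp(1j * rng.uniform(-0.9, 0.9) * np.pi)  # lam in Sigma_eps
    A = rng.uniform(0.0, 10.0)
    Ap, Bp, Bm = roots(lam, A)
    assert abs(Ap ** 2 - (g0_p * lam / (mu_p + nu_p + delta) + A ** 2)) < 1e-9
    s = np.sqrt(abs(lam)) + A
    ratios += [Ap.real / s, Bp.real / s, Bm.real / s, abs(Ap) / s, abs(Bp) / s, abs(Bm) / s]

print(min(ratios), max(ratios))   # bounded away from 0 and bounded above, as in (4.1)-(4.2)
```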

To state our solution formulas of problem: (2.2)-(2.3), we introduce some classes of multipliers.

Definition 2.1

Let s be a real number and let Γ ϵ , λ 0 be the set defined in (1.12). Set

Γ ˜ ϵ , λ 0 = { ( λ , ξ ) λ = γ + i τ Γ ϵ , λ 0 , ξ = ( ξ 1 , , ξ N 1 ) R N 1 { 0 } } .

Let m(λ, ξ ) be a function defined on Γ ˜ ϵ , λ 0 .

  1. (1)

    m(λ, ξ ) is called a multiplier of order s with type 1 if for any multi-index κ =( κ 1 ,, κ N 1 ) N 0 N 1 and (λ, ξ ) Γ ˜ ϵ , λ 0 there exists a constant C κ depending on κ , λ 0 , ϵ, μ ± , ν + , γ 0 , and γ i + (i=0,1,2) such that we have the estimates

    | ξ κ m ( λ , ξ ) | C α ( | λ | 1 / 2 + A ) s | κ | , | ξ κ ( τ m τ ( λ , ξ ) ) | C κ ( | λ | 1 / 2 + A ) s | κ | .
    (2.5)
  1. (2)

    m(λ, ξ ) is called a multiplier of order s with type 2 if for any multi-index κ =( κ 1 ,, κ N 1 ) N 0 N 1 and (λ, ξ ) Γ ˜ ϵ , λ 0 there exists a constant C κ depending on κ , λ 0 , ϵ, μ ± , ν + , γ 0 , and γ i + (i=0,1,2) such that we have the estimates

    | ξ κ m ( λ , ξ ) | C κ ( | λ | 1 / 2 + A ) s A | κ | , | ξ κ ( τ m τ ( λ , ξ ) ) | C κ ( | λ | 1 / 2 + A ) s A | κ | .
    (2.6)

Let M s , i be the set of all multipliers of order s with type i (i=1,2).

Obviously, M s , i are vector spaces on ℂ. Moreover, by the fact | λ 1 / 2 + A | | α | A | α | and the Leibniz rule, we have the following lemma immediately.

Lemma 2.2

Let s 1 , s 2 be two real numbers. Then the following three assertions hold.

  1. (1)

    Given m i M s i , 1 (i=1,2), we have m 1 m 2 M s 1 + s 2 , 1 .

  2. (2)

    Given i M s i , i (i=1,2), we have 1 2 M s 1 + s 2 , 2 .

  3. (3)

    Given n i M s i , 2 (i=1,2), we have m 1 m 2 M s 1 + s 2 , 2 .

Remark 2.3

We see easily that i ξ j M 1 , 2 (j=1,,N1), A M 1 , 2 , and A 1 M 1 , 2 . Especially, i ξ j /A M 0 , 2 . Moreover, M s , 1 M s , 2 for any sR.

In this section we show the following solution formulas for problem (2.2)-(2.3):

u ˆ + J = k = 1 4 u ˆ J k + , u ˆ J = k = 1 3 u ˆ J k , p ˆ = e A x N = 1 N [ p , 0 h ˆ ( 0 ) + p , 1 k ˆ ( 0 ) ] , u ˆ J 1 ± = A M + ( x N ) = 1 N [ R J , 1 ± h ˆ ( 0 ) + R J , 0 ± k ˆ ( 0 ) ] , u ˆ J 2 ± = A e B ± x N = 1 N [ S J , 2 ± h ˆ ( 0 ) + S J , 1 ± k ˆ ( 0 ) ] , u ˆ J 3 ± = e B ± x N [ T J , 1 ± h ˆ J ( 0 ) + T J , 0 ± k ˆ J ( 0 ) ] , u ˆ j 4 + = 0 , u ˆ N 4 + = A + M + ( x N ) U N , 0 + k ˆ N ( 0 )
(2.7)

with

R J , 1 ± M 1 , 2 , R J , 0 ± M 0 , 2 , S J , 2 ± M 2 , 2 , S J , 1 ± M 1 , 2 , T J , 1 ± M 1 , 1 , T J , 0 ± M 0 , 1 , U N , 0 + M 0 , 1 , p , 0 M 0 , 2 , p , 1 M 1 , 2 .
(2.8)

Here and in the following, M ± ( x N ) denote the Stokes kernels defined by

\[
M_+(x_N) = \frac{e^{-B_+x_N} - e^{-A_+x_N}}{B_+ - A_+}, \qquad
M_-(x_N) = \frac{e^{B_-x_N} - e^{A x_N}}{B_- - A}.
\]
(2.9)
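The kernels (2.9) are divided differences of exponentials, and an equivalent integral form, M_+(x_N) = −x_N ∫_0^1 e^{−((1−θ)A_+ + θB_+)x_N} dθ, is the expression estimated later in the proof of Lemma 3.3. The sketch below verifies this identity numerically for illustrative values of A_+, B_+, x_N (our choice).

```python
import numpy as np

def M_plus(Ap, Bp, xN):
    """Stokes kernel (2.9) on the upper half-space."""
    return (np.exp(-Bp * xN) - np.exp(-Ap * xN)) / (Bp - Ap)

def M_plus_integral(Ap, Bp, xN, n=20000):
    """Equivalent form -x_N * int_0^1 exp(-((1 - t) A_+ + t B_+) x_N) dt (midpoint rule)."""
    t = (np.arange(n) + 0.5) / n
    return -xN * np.mean(np.exp(-((1 - t) * Ap + t * Bp) * xN))

Ap, Bp, xN = 1.2 + 0.8j, 2.5 + 0.3j, 0.7   # illustrative values with positive real parts
print(M_plus(Ap, Bp, xN), M_plus_integral(Ap, Bp, xN))   # agree to many digits
```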

From now on, we prove (2.7). We find solutions u ˆ ± J to problem (2.2)-(2.3) of the forms

u ˆ + J = α + J ( e B + x N e A + x N ) + β + J e B + x N , u ˆ J = α J ( e B x N e A x N ) + β J e B x N , p ˆ = γ e A x N .
(2.10)

Using the symbols B ± , we write (2.2) as follows:

{ μ + B + 2 u ˆ + j μ + D N 2 u ˆ + j ( ν + + δ ) i ξ j ( i ξ u ˆ + + D N u ˆ + N ) = 0 ( x N > 0 ) , μ + B + 2 u ˆ + N μ + D N 2 u ˆ + N ( ν + + δ ) D N ( i ξ u ˆ + + D N u ˆ + N ) = 0 ( x N > 0 ) , μ B 2 u ˆ j μ D N 2 u ˆ j + i ξ j p ˆ = 0 ( x N < 0 ) , μ B 2 u ˆ N μ D N 2 u ˆ N + D N p ˆ = 0 ( x N < 0 ) , i ξ u ˆ + D N u ˆ N = 0 ( x N < 0 ) .
(2.11)

Substituting the formulas of u ± J in (2.10) and (2.11) and equating the coefficients of e B ± x N , e A + x N , and e A x N , we have

μ + ( A + 2 B + 2 ) α + j + ( ν + + δ ) i ξ j ( i ξ α + A + α + N ) = 0 , μ + ( A + 2 B + 2 ) α + N ( ν + + δ ) A + ( i ξ α + A + α + N ) = 0 , i ξ α + α + N B + + i ξ β + β + N B + = 0 , μ ( A 2 B 2 ) α j + i ξ j γ = 0 , μ ( A 2 B 2 ) α N + A γ = 0 , i ξ α + α N B + i ξ β + β N B = 0 , i ξ α + A α N = 0 .
(2.12)

First, we represent i ξ α ± , α ± N and γ by i ξ β ± and β + N . Namely, it follows from (2.12) that

i ξ α + = A 2 A + B + A 2 ( i ξ β + B + β + N ) , α + N = A + A + B + A 2 ( i ξ β + B + β + N ) , i ξ α = A B A ( i ξ β + B β N ) , α N = 1 B A ( i ξ β + B β N ) , γ = μ ( A + B ) A ( i ξ β + B β N ) .
(2.13)

Substituting the relations

u ˆ ± J (0)= β ± J , N u ˆ + J (0)=( A + B + ) α + J B + β + J , N u ˆ J (0)=( B A) α J + B β J

into (2.3), we have

β + J = β J + k ˆ J ( 0 ) , μ + ( ( B + A + ) α + j + B + β + j i ξ j β + N ) + μ ( ( B A ) α j + B β j + i ξ j β N ) = h ˆ j ( 0 ) , 2 μ + ( ( B + A + ) α + N + B + β + N ) + ( ν + μ + + δ ) ( i ξ β + + ( B + A + ) α + N + B + β + N ) + 2 μ ( ( B A ) α N + B β N ) γ = h ˆ N ( 0 ) .
(2.14)

Using (2.14) and (2.13), we have

i ξ h ˆ ( 0 ) = L 11 + ( i ξ β + ) + L 11 ( i ξ β ) + L 12 + A β + N + L 12 A β N , A h ˆ N ( 0 ) = L 21 + ( i ξ β + ) + L 21 ( i ξ β ) + L 22 + A β + N + L 22 β N

with

L 11 + = μ + A + ( B + 2 A 2 ) A + B + A 2 , L 11 = μ ( A + B ) , L 12 + = μ + A ( 2 A + B + A 2 B + 2 ) A + B + A 2 , L 12 = μ ( B A ) , L 21 + = A { 2 μ + A + ( B + A + ) A + B + A 2 ( ν + μ + + δ ) A + 2 A 2 A + B + A 2 } , L 21 = μ ( B A ) , L 22 + = ( μ + + ν + + δ ) B + ( A + 2 A 2 ) A + B + A 2 , L 22 = μ ( A + B ) B .
(2.15)

As is seen in Section 4, we have

L 11 + M 1 , 1 , L 11 M 1 , 2 , L 12 ± M 1 , 2 , L 21 ± M 1 , 2 , L 22 + M 1 , 1 , L 22 M 2 , 2 .
(2.16)

Noting the relation β + J = β J + k ˆ J (0), and setting

L 11 = L 11 + + L 11 , L 12 = L 12 + + L 12 , L 21 = L 21 + + L 21 , L 22 = L 22 + A + L 22 , L = ( L 11 A L 12 L 21 L 22 ) ,
(2.17)

we have

L ( i ξ β β N ) = ( H ˆ ( 0 ) H ˆ N ( 0 ) )
(2.18)

with

H ˆ ( 0 ) = i ξ h ˆ ( 0 ) L 11 + i ξ k ˆ ( 0 ) L 12 + A k ˆ N ( 0 ) , H ˆ N ( 0 ) = A h ˆ N ( 0 ) L 21 + i ξ k ˆ ( 0 ) A L 22 + k ˆ N ( 0 ) .

By Lemma 2.2 and (2.16), we see that

L 11 M 1 , 2 , L 12 M 1 , 2 , L 21 M 1 , 2 , L 22 M 2 , 2 .
(2.19)

The most important fact of this paper is that det L ≠ 0 for any (λ, ξ') ∈ Γ̃_{ε,λ_0} and

( det L ) 1 M 3 , 2 .
(2.20)

This fact is proved in Section 5, which is the highlight of this paper. Since

L 1 = 1 det L ( L 22 A L 12 L 21 L 11 ) ,

we have

i ξ β = 1 det L ( L 22 H ˆ ( 0 ) A L 12 H ˆ N ( 0 ) ) i ξ β = 1 det L ( L 22 i ξ h ˆ ( 0 ) + A 2 L 12 h ˆ N ( 0 ) + ( A L 12 L 21 + L 11 + L 22 ) i ξ k ˆ ( 0 ) i ξ β = + ( A L 12 L 22 + L 12 + L 22 ) A k ˆ N ( 0 ) ) , β N = 1 det L ( L 11 H ˆ N ( 0 ) L 21 H ˆ ( 0 ) ) β N = 1 det L ( L 21 i ξ h ˆ ( 0 ) L 11 A h ˆ N ( 0 ) + ( L 11 + L 21 L 11 L 21 + ) i ξ k ˆ ( 0 ) β N = + ( L 12 + L 21 L 11 L 22 + ) A k ˆ N ( 0 ) ) .
(2.21)

Writing i ξ k ˆ (0)=A = 1 N 1 i ξ A k ˆ (0) and using the relations β + J = β J + k ˆ J (0), by (2.21), we have

i ξ β + B + β + N = B + k ˆ N ( 0 ) + = 1 N A ( P , 1 + h ˆ ( 0 ) + P , 0 + k ˆ ( 0 ) ) , i ξ β + B β N = = 1 N A ( P , 1 h ˆ ( 0 ) + P , 0 k ˆ ( 0 ) )
(2.22)

with

P , 1 + = ( L 22 + B + L 21 ) i ξ A det L , P N , 1 + = A L 12 + B + L 11 det L , P , 0 + = ( A L 12 L 21 + L 22 L 11 + B + ( L 11 + L 21 L 11 L 21 + ) ) i ξ A det L + i ξ A , P N , 0 + = A L 12 L 22 + L 12 + L 22 B + ( L 12 + L 21 L 11 L 22 + ) det L , P , 1 = ( L 22 B L 21 ) i ξ A det L , P N , 1 = A L 12 B L 11 det L , P , 0 = ( A L 12 L 21 + L 22 L 11 + + B ( L 11 + L 21 L 11 L 21 + ) ) i ξ A det L , P N , 0 = A L 12 L 22 + L 12 + L 22 + B ( L 12 + L 21 L 11 L 22 + ) det L

for =1,,N1. By Lemma 2.2, (2.16), (2.19), and (2.20), we have

P J , 1 ± M 1 , 2 , P J , 0 ± M 0 , 2 .
(2.23)

By (2.13) we have

p ˆ ( x N ) = μ ( A + B ) A ( i ξ β + B β N ) e A x N = μ ( A + B ) = 1 N ( P , 1 h ˆ ( 0 ) + P , 0 k ˆ ( 0 ) ) e A x N ,

so that setting p , 0 = μ (A+ B ) P , 1 and p , 1 = μ (A+ B ) P , 0 , we have the formula of p ˆ ( x N ) in (2.7).

By (2.12), we have

( B + A + ) α + j = ( ν + + δ ) i ξ j μ + ( A + + B + ) ( i ξ α + A + α + N ) , ( B + A + ) α + N = ( ν + + δ ) A + μ + ( A + + B + ) ( i ξ α + A + α + N ) , ( B A ) α j = i ξ j A ( i ξ β + B β N ) , ( B A ) α N = ( i ξ β + B β N ) .

Since i ξ α + A + α + N = A 2 A + 2 A + B + A 2 (i ξ β + B + β N ) as follows from (2.13), by (2.22) we have

( B + A + ) α + j = ( ν + + δ ) ( i ξ j ) B + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 k ˆ N ( 0 ) ( B + A + ) α + j = + ( ν + + δ ) i ξ j μ + ( A + + B + ) A 2 A + 2 A + B + A 2 A = 1 N ( P , 1 + h ˆ ( 0 ) + P , 0 + k ˆ ( 0 ) ) , ( B + A + ) α + N = ( ν + + δ ) A + B + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 k ˆ N ( 0 ) ( B + A + ) α + N = ( ν + + δ ) A + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 A = 1 N ( P , 1 + h ˆ ( 0 ) + P , 0 + k ˆ ( 0 ) ) , ( B A ) α j = i ξ j A A = 1 N ( P , 1 h ˆ ( 0 ) + P , 0 k ˆ ( 0 ) ) , ( B A ) α N = A = 1 N ( P , 1 h ˆ ( 0 ) + P , 0 k ˆ ( 0 ) )
(2.24)

for j=1,,N1. By (2.24) we have

( B + A + ) α + j = A [ = 1 N ( ν + + δ ) ( i ξ j ) P , 1 + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 h ˆ ( 0 ) ( B + A + ) α + j = + = 1 N 1 ( ν + + δ ) ( i ξ j ) P , 0 + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 k ˆ ( 0 ) ( B + A + ) α + j = + ( ν + + δ ) μ + ( A + + B + ) A 2 A + 2 A + B + A 2 ( i ξ j A B + + i ξ j P N , 0 + ) k ˆ N ( 0 ) ] , ( B + A + ) α + N = ( ν + + δ ) A + B + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 k ˆ N ( 0 ) ( B + A + ) α + N = A [ = 1 N ( ν + + δ ) A + P , 1 + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 h ˆ ( 0 ) ( B + A + ) α + N = + = 1 N ( ν + + δ ) A + P , 0 + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 k ˆ ( 0 ) ] .

Since ( e B + x N e A + x N ) α + J = M + ( x N )( B + A + ) α + J , setting

R j , 1 + = ( ν + + δ ) ( i ξ j ) P , 1 + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 , R j , 0 + = ( ν + + δ ) ( i ξ j ) P , 0 + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 , R j N , 0 + = ( ν + + δ ) μ + ( A + + B + ) A 2 A + 2 A + B + A 2 ( i ξ j A B + + i ξ j P N , 0 + ) , R N , 1 + = ( ν + + δ ) A + P , 1 + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 , R N , 0 + = ( ν + + δ ) A + P , 0 + μ + ( A + + B + ) A 2 A + 2 A + B + A 2 , U N , 0 + = ( ν + + δ ) B + μ + ( A + + B + ) A 2 A + 2 A + B + A 2

for =1,,N and j, =1,,N1, we have u ˆ J 1 + and u ˆ N 4 + in (2.7). As is seen in Section 4 below, we have

A + M 1 , 1 , B + M 1 , 1 , ( A + + B + ) 1 M 1 , 1 , A 2 A + 2 A + B + A 2 M 0 , 1 ,
(2.25)

which, combined with (2.23), furnishes R J , 1 + M 1 , 2 , R J , 0 + M 0 , 2 , and U N , 0 + M 0 , 1 .

Analogously, in view of (2.24) we set

R j , 1 = i ξ j A P , 1 , R j , 0 = i ξ j A P , 0 , R N , 1 = P , 1 , R N , 0 = P , 0

for =1,,N, and j=1,,N1, we have u J 1 ( x N ) in (2.7). By (2.23) and (2.25), we have R J , 1 M 1 , 2 and R J , 0 M 0 , 2 .

Using (2.21), we represent β N by

β N =A = 1 N ( Q , 2 h ˆ ( 0 ) + Q , 1 k ˆ ( 0 ) )
(2.26)

with

Q , 2 = L 21 i ξ A det L , Q N , 2 = L 11 det L , Q , 1 = ( L 11 + L 21 L 11 L 21 + ) i ξ A det L , Q N , 1 = ( L 12 + L 21 L 11 L 22 + ) det L

for =1,,N1. By Lemma 2.2, (2.16), (2.19), and (2.20), we have

Q J , 2 M 2 , 2 , Q J , 1 M 1 , 2 .
(2.27)

In particular, noting that β + N = k ˆ N (0)+ β N and setting S N , 2 ± = Q , 2 , S N , 1 ± = Q , 1 (=1,,N), T N , 1 ± =0, T N , 0 + =1 and T N , 0 =0, we have the u ˆ N 2 ± and u ˆ N 3 ± in (2.7), and by (2.27) S N , 2 ± M 2 , 2 , S N , 1 ± M 1 , 2 , T N , 1 ± M 1 , 1 , and T N , 0 ± M 0 , 1 for =1,,N.

From (2.14) it follows that

h ˆ j ( 0 ) = μ + B + β + j + μ B β j + μ + ( B + A + ) α + j + μ ( B A ) α j ( μ + β + N μ β N ) i ξ j .

Noting that β + J = β J + k ˆ J (0), we have

β ± j = 1 μ + B + + μ B h ˆ j ( 0 ) ± μ B μ + B + + μ B k ˆ j ( 0 ) + μ + i ξ j μ + B + + μ B k ˆ N ( 0 ) μ + ( B + A + ) μ + B + + μ B α + j μ ( B A ) μ + B + + μ B α j + ( μ + μ ) i ξ j μ + B + + μ B β N ,

which, combined with (2.24) and (2.26), furnishes

β ± j = 1 μ + B + + μ B h ˆ j ( 0 ) ± μ B μ + B + + μ B k ˆ j ( 0 ) + μ + i ξ j μ + B + + μ B k ˆ N ( 0 ) + ( ν + + δ ) i ξ j B + ( μ + B + + μ B ) ( B + + A + ) A 2 A + 2 A + B + A 2 k ˆ N ( 0 ) ( ν + + δ ) i ξ j ( μ + B + + μ B ) ( B + + A + ) A 2 A + 2 A + B + A 2 A = 1 N ( P , 1 + h ˆ ( 0 ) + P , 0 + k ˆ ( 0 ) ) + μ i ξ j ( μ + B + + μ B ) A A = 1 N ( P , 1 h ˆ ( 0 ) + P , 0 k ˆ ( 0 ) ) + ( μ + μ ) i ξ j ( μ + B + + μ B ) A = 1 N ( Q , 2 h ˆ ( 0 ) + Q , 1 k ˆ ( 0 ) ) .
(2.28)

Thus, we set

S j , 2 ± = ( ν + + δ ) ( i ξ j ) P , 1 + ( μ + B + + μ B ) ( A + + B + ) A 2 A + 2 A + B + A 2 S j , 2 ± = + μ ( i ξ j ) P , 1 ( μ + B + + μ B ) A + ( μ + μ ) ( i ξ j ) Q , 2 μ + B + + μ B , S j , 1 ± = ( ν + + δ ) ( i ξ j ) P , 0 + ( μ + B + + μ B ) ( A + + B + ) A 2 A + 2 A + B + A 2 S j , 1 ± = + μ ( i ξ j ) P , 0 ( μ + B + + μ B ) A + ( μ + μ ) ( i ξ j ) Q , 1 μ + B + + μ B , S j N , 1 ± = μ + i ξ j ( μ + B + + μ B ) A + ( ν + + δ ) ( i ξ j ) B + ( μ + B + + μ B ) ( A + + B + ) A A 2 A + 2 A + B + A 2 S j N , 1 ± = ( ν + + δ ) ( i ξ j ) P N , 0 + ( μ + B + + μ B ) ( A + + B + ) A 2 A + 2 A + B + A 2 S j N , 1 ± = + μ ( i ξ j ) P N , 0 ( μ + B + + μ B ) A + ( μ + μ ) ( i ξ j ) Q N , 1 μ + B + + μ B , T j , 1 ± = 1 μ + B + + μ B , T j , 0 + = μ B μ + B + + μ B , T j , 0 = μ + B + μ + B + + μ B ,

so that we have the u ˆ j 2 ± and u ˆ j 3 ± in (2.7). Moreover, as is seen in Section 4, we have

( μ + B + + μ B ) 1 M 1 , 1 ,
(2.29)

so that by (2.23), (2.25), (2.27), and (2.29) we have S j , 2 ± M 2 , 2 , S j , 1 ± M 1 , 2 , T j , 1 ± M 1 , 1 , and T j , 0 ± M 0 , 1 . This completes the proof of (2.7).

To construct our solution operator from the solution formulas in (2.7), first of all we observe that the following formulas due to Volevich hold:

a ( ξ , x N ) h ˆ (0)= 0 ± { ( N a ) ( ξ , x N + y N ) h ˆ ( y N ) + a ( ξ , x N + y N ) N h ˆ ( ξ , y N ) } d y N ,

where j =/ x j . Using the identity 1= γ 0 ± λ μ ± B ± 2 m = 1 N 1 ( i ξ m ) ( i ξ m ) B ± 2 , we write

a ( ξ , x N ) h ˆ ( ξ , 0 ) = 0 ± a ( ξ , x N + y N ) N h ˆ ( ξ , y N ) d y N 0 ± ( N a ) ( ξ , x N + y N ) γ 0 ± λ 1 / 2 μ ± B ± 2 λ 1 / 2 h ˆ ( ξ , y N ) d y N + = 1 N 1 0 ± ( N a ) ( ξ , x N + y N ) i ξ B ± 2 h ˆ ( ξ , y N ) d y N .

Let F ξ 1 denote the partial Fourier inverse transform with respect to ξ variable and let f 2 and f 3 =( f 31 ,, f 3 N ) be corresponding variables to λ 1 / 2 h and h=( 1 h,, N h). If we define A ± ( f 2 , f 3 ) by

A ± [ a ] ( f 2 , f 3 ) = 0 ± F ξ 1 [ a ( ξ , x N + y N ) f ˆ 3 N ( ξ , y N ) ] d y N 0 ± F 1 [ ( N a ) ( ξ , x N + y N ) γ 0 ± λ 1 / 2 μ ± B ± 2 f ˆ 2 ( ξ , y N ) ] d y N + = 1 N 1 0 ± F ξ 1 [ ( N a ) ( ξ , x N + y N ) i ξ B ± 2 f ˆ 3 ( , y N ) ] d y N ,
(2.30)

then we have

F ξ 1 [ a ( ξ , x N ) h ˆ ( ξ , 0 ) ] = A ± [a] ( λ 1 / 2 h , h ) .
(2.31)
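The Volevich formula above is simply the fundamental theorem of calculus applied to y_N ↦ a(ξ', x_N + y_N)ĥ(ξ', y_N), combined with decay as y_N → ±∞. Here is a one-dimensional numerical check on the upper half-line, with an illustrative kernel and datum of our own choosing (the ξ' dependence is suppressed).

```python
import numpy as np

a = lambda s: np.exp(-2.0 * s)                  # decaying kernel in the normal variable
da = lambda s: -2.0 * np.exp(-2.0 * s)          # its derivative (plays the role of d_N a)
h = lambda s: np.exp(-s ** 2)                   # boundary datum extended to y_N > 0
dh = lambda s: -2.0 * s * np.exp(-s ** 2)       # its derivative (plays the role of d_N h)

xN = 0.4
y = np.linspace(0.0, 20.0, 400001)
integrand = da(xN + y) * h(y) + a(xN + y) * dh(y)
dy = y[1] - y[0]
lhs = a(xN) * h(0.0)
rhs = -((integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dy)   # trapezoidal rule
print(lhs, rhs)    # both close to e^{-0.8}
```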

Analogously, using the identity 1= γ 0 ± λ μ ± B ± 2 m = 1 N 1 ( i ξ m ) ( i ξ m ) B ± 2 , we write

a ( ξ , x N ) k ˆ ( ξ , 0 ) = 0 ± a ( ξ , x N + y N ) [ γ 0 ± λ 1 / 2 N k ˆ ( ξ , y N ) μ ± B ± 2 = 1 N 1 i ξ N k ˆ ( ξ , y N ) B ± ] d y N 0 ± ( N a ) ( ξ , x N + y N ) [ γ 0 ± λ k ˆ ( ξ , y N ) μ ± B ± 2 = 1 N 1 k ˆ ( ξ , y N ) B ± 2 ] d y N .

Let f 4 , f 5 =( f 51 ,, f 5 N ) and f 6 =( f 6 m ,m=1,,N) be the corresponding variables to λk, λ 1 / 2 k and 2 k=( m k,m=1,,N). If we define B ± ( f 4 , f 5 , f 6 ) by

B ± [ a ] ( f 4 , f 5 , f 6 ) = 0 ± F ξ 1 [ a ( ξ , x N + y N ) { γ 0 ± f ˆ 5 N ( ξ , y N ) μ ± B ± 2 = 1 N 1 i ξ f ˆ 6 N ( ξ , y N ) B ± } ] d y N 0 ± F ξ 1 [ ( N a ) ( ξ , x N + y N ) { γ 0 ± f ˆ 4 ( ξ , y N ) μ ± B ± 2 = 1 N 1 f ˆ 6 ( ξ , y N ) B ± 2 } ] d y N ,
(2.32)

then we have

F ξ 1 [ a ( ξ , x N ) k ˆ ( ξ , 0 ) ] = B ± [a] ( λ k , λ 1 / 2 k , 2 k ) .
(2.33)

Let us define u J i + (i=1,2,3,4), u J i (i=1,2,3) and u J 4 + by u J i ± = F ξ 1 [ u ˆ J i ± ] (i=1,2,3) and u J 4 + = F ξ 1 [ u ˆ J 4 + ], respectively. Setting u J + = i = 1 4 u J i + , u J = i = 1 3 u J i and p = F ξ 1 [ p ˆ ], by (2.7) we see that u ± =( u 1 ± ,, u N ± ) and p satisfy (2.1). According to the formulas (2.30), (2.31), (2.32), and (2.33), we define our solution operators S J i + (λ) (i=1,2,3,4), S J i (λ) (i=1,2,3) and P (λ) of problem (2.1) such that

u J i ± = S J i ± ( λ ) ( λ 1 / 2 h , h , λ k , λ 1 / 2 k , 2 k ) on  R ± N ( i = 1 , 2 , 3 ) , u J 4 + = S J 4 + ( λ ) ( λ 1 / 2 h , h , λ k , λ 1 / 2 k , 2 k ) on  R + N , p = P ( λ ) ( λ 1 / 2 h , h , λ k , λ 1 / 2 k , 2 k ) on  R N
(2.34)

as follows: Note that

N M ± ( x N + y N ) = ( e ± B ± ( x N + y N ) + A ± M ± ( x N + y N ) ) , N e A ( x N + y N ) = A e A ( x N + y N ) , N e B ± ( x N + y N ) = B ± e B ± ( x N + y N ) ,
(2.35)

where we have set A =A. Let F 2 =( F 2 j j=1,,N), F 3 =( F 3 m ,m=1,,N), F 4 =( F 4 =1,,N), F 5 =( F 5 m ,m=1,,N) and F 6 =( F 6 m n ,m,n=1,,N) be the corresponding variables to λ 1 / 2 h=( λ 1 / 2 h 1 ,, λ 1 / 2 h N ), h=( h m ,m=1,,N), λk=(λ k 1 ,,λ k N ), λ 1 / 2 k=( λ 1 / 2 k m ,m=1,,N) and 2 k=( m k n ,m,n=1,,N), respectively. Then we define the operators S J 1 ± (λ), S J 2 ± (λ), S J 3 ± (λ), S N 4 + (λ) and P (λ) by

S J 1 ± ( λ ) ( F 2 , F 3 , F 4 , F 5 , F 6 ) = = 1 N { 0 ± F ξ 1 [ A M ± ( x N + y N ) [ R J , 1 ± F ˆ 3 N ( ξ , y N ) + R J , 0 ± γ 0 ± λ 1 / 2 μ ± B ± 2 F ˆ 5 N ( ξ , y N ) m = 1 N 1 R J , 0 ± ( i ξ m ) B ± 2 F ˆ 6 m N ( ξ , y N ) ] ] d y N ± 0 ± F ξ 1 [ ( A e B ± ( x N + y N ) + A A ± M ± ( x N + y N ) ) [ R J , 1 ± γ 0 ± λ 1 / 2 μ ± B ± 2 F ˆ 2 ( ξ , y N ) m = 1 N 1 R J , 1 ± ( i ξ m ) B ± 2 F ˆ 3 m ( ξ , y N ) + R J , 0 ± γ 0 ± μ ± B ± 2 F ˆ 4 ( ξ , y N ) m = 1 N 1 R J , 0 ± B ± 2 F ˆ 6 m m ( ξ , y N ) ] ] d y N } , S J 2 ± ( λ ) ( F 2 , F 3 , F 4 , F 5 , F 6 ) = = 1 N { 0 ± F ξ 1 [ A e B ± ( x N + y N ) [ S J , 2 ± F ˆ 3 N ( ξ , y N ) + S J , 1 ± γ 0 ± λ 1 / 2 μ ± B ± 2 F ˆ 5 N ( ξ , y N ) m = 1 N 1 S J , 1 ± ( i ξ m ) B ± 2 F ˆ 6 m N ( ξ , y N ) ] ] d y N ± 0 ± F ξ 1 [ B ± A e B ± ( x N + y N ) [ S J , 2 ± γ 0 ± λ 1 / 2 μ ± B ± 2 F ˆ 2 ( ξ , y N ) m = 1 N 1 S J , 2 ± ( i ξ m ) B ± 2 F ˆ 3 m ( ξ , y N ) + S J , 1 ± γ 0 ± μ ± B ± 2 F ˆ 4 ( ξ , y N ) m = 1 N 1 S J , 1 ± B ± 2 F ˆ 6 m m ( ξ , y N ) ] ] d y N } , S J 3 ± ( λ ) ( F 2 , F 3 , F 4 , F 5 , F 6 ) = { 0 ± F ξ 1 [ e B ± ( x N + y N ) [ T J , 1 ± F ˆ 3 N J ( ξ , y N ) + T J , 0 ± γ 0 ± λ 1 / 2 μ ± B ± 2 F ˆ 5 N J ( ξ , y N ) m = 1 N 1 T J , 0 ± ( i ξ m ) B ± 2 F ˆ 6 m N J ( ξ , y N ) ] ] d y N ± 0 ± F ξ 1 [ B ± e B ± ( x N + y N ) [ T J , 1 ± γ 0 ± λ 1 / 2 μ ± B ± 2 F ˆ 2 J ( ξ , y N ) m = 1 N 1 T J , 1 ± ( i ξ m ) B ± 2 F ˆ 3 m J ( ξ , y N ) + T J , 0 ± γ 0 ± μ ± B ± 2 F ˆ 4 J ( ξ , y N ) m = 1 N 1 T J , 0 ± B ± 2 F ˆ 6 m m J ( ξ , y N ) ] ] d y N } , S j 4 + ( λ ) ( F 2 , F 3 , F 4 , F 5 , F 6 ) = 0 , S N 4 + ( λ ) ( F 2 , F 3 , F 4 , F 5 , F 6 ) = { 0 F ξ 1 [ A + M + ( x N + y N ) [ U N , 0 + γ 0 + λ 1 / 2 μ + B + 2 F ˆ 5 N J ( ξ , y N ) m = 1 N 1 U N , 0 + ( i ξ m ) B + 2 F ˆ 6 m N J ( ξ , y N ) ] ] d y N 0 ± F ξ 1 [ A + ( e B + ( x N + y N ) + A + M + ( x N + y N ) ) × [ U N , 0 + γ 0 + μ + B + 2 F ˆ 4 J ( ξ , y N ) m = 1 N 1 U N , 0 + B + 2 F ˆ 6 m m J ( ξ , y N ) ] ] d y N } , P ( λ ) ( F 2 , F 3 , F 4 , F 5 , F 6 ) = = 1 N { 0 F ξ 1 [ e A ( x N + y N ) [ p , 0 F ˆ 3 N ( ξ , y N ) + p , 1 γ 0 λ 1 / 2 μ B 2 F ˆ 5 N ( ξ , y N ) m = 1 N 1 p , 1 ( i ξ m ) B 2 F ˆ 6 m N ( ξ , y N ) ] ] d y N + 0 F ξ 1 [ A e A ( x N + y N ) [ p , 0 γ 0 λ 1 / 2 μ B 2 F ˆ 2 ( ξ , y N ) m = 1 N 1 p J , 0 ( i ξ m ) B 2 F ˆ 3 m ( ξ , y N ) + p , 1 γ 0 μ B 2 F ˆ 4 ( ξ , y N ) m = 1 N 1 p , 1 B 2 F ˆ 6 m m ( ξ , y N ) ] ] d y N } .
(2.36)

Obviously, by (2.31) and (2.33), we have (2.34).

If we define operators S ± (λ) by

S + (λ) F = i = 1 4 ( S 1 i + ( λ ) F , , S N i + ( λ ) F ) , S (λ) F = i = 1 3 ( S 1 i ( λ ) F , , S N i + ( λ ) F )

with F =( F 2 , F 3 , F 4 , F 5 , F 6 ), respectively, by (2.34) we have

u ± = S ± (λ) ( λ 1 / 2 h , h , λ k , λ 1 / 2 k , 2 k ) .
(2.37)

Moreover, if we set

Z q ( R N ) = { ( F 2 , F 3 , F 4 , F 5 , F 6 ) F 2 , F 4 L q ( R N ) , F 3 , F 5 L q ( R N ) N 2 , F 6 L q ( R N ) N 3 } ,

then, using Lemma 3.1 and Lemma 3.2 in Section 3, we have

R L ( Z ( R N ) , L q ( R ± N ) 2 N + N 2 + N 3 ) ( { ( τ τ ) G λ S ± J ( λ ) λ Λ ϵ , λ 0 } ) C ( = 0 , 1 ) , R L ( Z ( R N ) , L q ( R ± N ) N ) ( { ( τ τ ) P ( λ ) λ Λ ϵ , λ 0 } ) C ( = 0 , 1 ) .
(2.38)

The estimates (2.38) are proved in Section 6 below.

3 Technical lemmas

To prove the ℛ-boundedness of solution operators, we use the following two lemmas. The first lemma is used to show the ℛ-boundedness of the compressible part and the second one to show that of the incompressible part.

Lemma 3.1

Let n 1 and n 2 be multipliers belonging to M 2 , 2 and M 1 , 1 , respectively. Let K i + (i=1,2,3,4) be operators defined by

K 1 + ( λ ) g = 0 F ξ 1 [ n 1 ( λ , ξ ) A A + M + ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , K 2 + ( λ ) g = 0 F ξ 1 [ n 1 ( λ , ξ ) A e B + ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , K 3 + ( λ ) g = 0 F ξ 1 [ n 2 ( λ , ξ ) A + M + ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , K 4 + ( λ ) g = 0 F ξ 1 [ n 2 ( λ , ξ ) e B + ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N .

Then there exists a constant C such that

R L ( L q ( R + N ) , L q ( R + N ) 1 + N + N 2 ) ( { ( τ τ ) G λ K i + ( λ ) λ Λ } ) C(=0,1,i=1,2,3,4).

Lemma 3.2

Let n 3 , n 4 , and n 5 be multipliers belonging to M 1 , 2 , M 2 , 2 and M 1 , 1 , respectively. Let K i + (i=1,2,3,4) be operators defined by

K 1 ( λ ) g = 0 F ξ 1 [ n 3 ( λ , ξ ) A M ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , K 2 ( λ ) g = 0 F ξ 1 [ n 4 ( λ , ξ ) A e B ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , K 3 ( λ ) g = 0 F ξ 1 [ n 4 ( λ , ξ ) A 2 M ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , K 4 ( λ ) g = 0 F ξ 1 [ n 5 ( λ , ξ ) e B ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N .

Then there exists a constant C such that

R L ( L q ( R N ) , L q ( R N ) 1 + N + N 2 ) ( { ( τ τ ) G λ K i ( λ ) λ Λ } ) C ( = 0 , 1 , i = 1 , 2 , 3 , 4 ) .
(3.1)

The assertions for K 1 + and K 2 + in Lemma 3.1 immediately follows from the following lemma.

Lemma 3.3

Letm(λ)be a multiplier belonging to M 0 , 2 . Let L 1 (λ)and L 2 (λ)be operators defined by

L 1 ( λ ) g = 0 F ξ 1 [ m ( λ , ξ ) A A + M + ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , L 2 ( λ ) g = 0 F ξ 1 [ m ( λ , ξ ) A e B + ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N .

Then we have

R L ( L q ( R + N ) ) ( { ( τ τ ) L i ( λ ) λ Λ } ) C(=0,1,i=1,2).

Proof

Set ψ i (λ,x)= F ξ 1 [m(λ, ξ )A N i (λ, ξ , x N )]( x ) with N 1 = A + M + ( x N ) and N 2 = e B + x N . As was seen in Shibata and Shimizu [19], Proof of Lemma 5.4], the lemma follows from the fact that

| ( τ τ ) ψ i (λ,x)|C | x | N (=0,1).
(3.2)

Thus, we prove (3.2). Using the following Bell formula for the derivatives of the composite function of f(t) and t=g( ξ ):

ξ κ f ( g ( ξ ) ) = = 1 | κ | f ( ) ( g ( ξ ) ) κ 1 + + κ = κ | κ i | 1 Γ κ 1 , , κ κ ( ξ κ 1 g ( ξ ) ) ( ξ κ g ( ξ ) )
(3.3)

with f ( ) (t)= d f(t)/d t and suitable coefficients Γ κ 1 , , κ κ , we have

| ξ κ [ 0 1 e ( ( 1 θ ) A + + θ B + ) x N d θ ] | C α ( | λ | 1 / 2 + A ) | κ | e c ( | λ | 1 / 2 + A ) x N

with some positive constant c independent of κ . Thus, we have

| ξ κ N i | C α ( | λ | 1 / 2 + A ) | κ | e ( c / 2 ) ( | λ | 1 / 2 + A ) x N .
(3.4)

To prove the estimate

| ψ i (λ,x)|C| x | N ,
(3.5)

using the identity e i x ξ = = 1 N 1 x i | x | 2 (i x ), we write

ψ i (λ,x)= = 1 N 1 i x | x | 2 1 ( 2 π ) N 1 R N 1 e i x ξ ξ ( m ( λ , ξ ) A N i ( λ , ξ , x N ) ) d ξ .

Since m M 0 , 2 , by (2.6) and (3.4) we have

| ξ κ ξ ( m ( λ , ξ ) A N i ( λ , ξ , x N ) ) | C κ A | κ | e ( c / 2 ) ( | λ | 1 / 2 + A ) x N .

Thus, by Theorem 2.2 due to Shibata and Shimizu [20], we have

| R N 1 e i x ξ ξ ( m ( λ , ξ ) A N i ( λ , ξ , x N ) ) d ξ |C| x | ( N 1 )

from which we have (3.5).

On the other hand, by (2.6) with κ =0 and (3.4) with κ =0, we have

| ψ i (λ,x)|C R N 1 | ξ | e ( c / 2 ) | ξ | x N d ξ .

Thus, using the change of variables x N ξ = η , we have | ψ i (λ,x)|C ( x N ) N , which, combined with (3.5), furnishes (3.2). Analogously, we have |τ τ ψ i (λ,x)|C | x | N , which completes the proof of Lemma 3.3. □

The assertions for K 3 + and K 4 + in Lemma 3.1 immediately follow from the following lemma.

Lemma 3.4

Letm(λ)be a multiplier belonging to M 1 , 2 . Let L 3 (λ)and L 4 (λ)be operators defined by

L 3 ( λ ) g = 0 F ξ 1 [ m ( λ , ξ ) A + M + ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , L 4 ( λ ) g = 0 F ξ 1 [ m ( λ , ξ ) e B + ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N .

Then we have

R L ( L q ( R + N ) ) ( { ( τ τ ) L i ( λ ) λ Λ } ) C(=0,1,i=3,4).

Proof

Set φ i (λ,x)= F ξ 1 [m(λ, ξ ) N i (λ, ξ , x N )]( x ) with N 1 = A + M + ( x N ) and N 2 = e B + x N . As was stated in the proof of Lemma 3.3, the lemma follows from the fact that

| ( τ τ ) φ i (λ,x)|C | x | N (=0,1).
(3.6)

First, we prove that

| φ i (λ,x)| C N | x | N .
(3.7)

By (2.5), (3.4), and the Leibniz rule, we have

| ξ κ ξ ( m ( λ , ξ ) N i ( λ , ξ , x N ) ) | C κ ( | λ | 1 / 2 + A ) | κ | e ( | λ | 1 / 2 + A ) x N C κ A | κ | e ( | λ | 1 / 2 + A ) x N ,

so that by Theorem 2.2 in Shibata and Shimizu [20] we have

| R N 1 e i x ξ ξ ( m ( λ , ξ ) N i ( λ , ξ , x N ) ) d ξ |C| x | ( N 1 ) .

Thus, employing the same argumentation as in the proof of Lemma 3.3, we have (3.7).

On the other hand, we have

| φ i ( λ , x ) | C R N 1 ( | λ | 1 / 2 + | ξ | ) e ( c / 2 ) ( | λ | 1 / 2 + | ξ | ) x N d ξ C ( 4 / ( c x N ) ) R N 1 e ( c / 4 ) ( | λ | 1 / 2 + | ξ | ) x N d ξ C ( 4 / ( c x N ) ) N R N 1 e ( c / 4 ) | λ | 1 / 2 x N e | η | d η .

Thus, we have (3.6) with =0. Analogously, we have (3.6) with =1, which completes the proof of Lemma 3.4. □

The assertions for K 4 (λ) in Lemma 3.2 follows from the same observation as in the proof of Lemma 3.1 for K 4 + (λ). The assertion for K 1 (λ), K 2 (λ) and K 3 (λ) in Lemma 3.2 follows from the following lemma due to Shibata and Shimizu [19], Lemma 5.4].

Lemma 3.5

Letm(λ)be a multiplier belonging to M 0 , 2 . Let L i (λ) (i=5,6,7) be operators defined by

L 5 ( λ ) g = 0 F ξ 1 [ m ( λ , ξ ) A 2 M ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , L 6 ( λ ) g = 0 F ξ 1 [ m ( λ , ξ ) A e A ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N , L 7 ( λ ) g = 0 F ξ 1 [ m ( λ , ξ ) A e B ( x N + y N ) g ˆ ( ξ , y N ) ] ( x ) d y N .

Then we have

R L ( L q ( R N ) ) ( { ( τ τ ) L i ( λ ) λ Λ } ) C(=0,1,i=5,6,7).

4 Some estimates of several multipliers

In this section, we estimate several multipliers. For this purpose, we start with the following lemma.

Lemma 4.1

Let0<ϵ<π/2, λ 0 >0, δ 0 >0ands0.

  1. (1)

    For any λ Σ ϵ , ξ R N and α,β>0, we have |αλ+β|(sin ϵ 2 )(α|λ|+β).

  2. (2)

    There exists a number σ(0,π) depending on s, μ + , ν + , γ 1 + , γ 2 + , λ 0 , δ 0 and ϵ such that

    ( s μ + + ν + + δ ) 1 λ Σ σ for any λ Γ ϵ , λ 0 .
  1. (3)

    There exist constants δ 1 and δ 2 depending on s, μ + , ν + , γ 1 + , γ 2 + , λ 0 , δ 0 and ϵ such that

    δ 1 ( | λ | + | ξ | 2 ) | ( s μ + + ν + + δ ) 1 λ+| ξ | 2 | δ 2 ( | λ | + | ξ | 2 )

for any(λ, ξ ) Γ ˜ ϵ , λ 0 = Γ ϵ , λ 0 ×( R N 1 {0}).

Remark 4.2

Lemma 4.1 was proved in Götz and Shibata [21], Lemma 3.1], so that we may omit its proof.
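Assertion (1) of Lemma 4.1 is elementary but used constantly below; the following random sampling over Σ_ε illustrates it (a sanity check only, not part of the proof).

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 0.3
m = 100000
r = rng.uniform(0.01, 100.0, m)
theta = rng.uniform(-(np.pi - eps), np.pi - eps, m)      # so that lam ranges over Sigma_eps
lam = r * np.exp(1j * theta)
alpha = rng.uniform(0.01, 10.0, m)
beta = rng.uniform(0.01, 10.0, m)
ratio = np.abs(alpha * lam + beta) / (alpha * np.abs(lam) + beta)
print(ratio.min(), np.sin(eps / 2.0))   # the sampled ratio never drops below sin(eps/2)
```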

First we estimate A + s , B ± s , ( A + + B + ) s and ( μ + B + + μ B ) s . For this purpose, we use the estimates

c ( | λ | 1 / 2 + A ) Re M 1 | M 1 | c ( | λ | 1 / 2 + A ) ( M 1 = A + , B ± )
(4.1)

for any (λ, ξ ) Γ ˜ ϵ , λ 0 = Γ ϵ , λ 0 ×( R N 1 {0}) with some positive constants c and c , which immediately follows from Lemma 4.1. Here and in the following, c and c denote some positive constants essentially depending on μ ± , ν + , γ 0 ± , γ + 1 , γ + 2 , ϵ, λ 0 and δ 0 . In particular, by (4.1) we have

c ( | λ | 1 / 2 + A ) Re M 2 | M 2 | c ( | λ | 1 / 2 + A ) ( M 2 = A + + B + , μ + B + + μ B )
(4.2)

for any (λ, ξ ) Γ ˜ ϵ , λ 0 . As was shown in Enomoto and Shibata [17], Lemma 4.3], using (4.1), (4.2), and the Bell formula (3.3), we see that

( M 3 ) s M s , 1 ( M 3 = A + , B + , A + + B + , μ + B + + μ B ).
(4.3)

Especially, we have (2.29).

Second, we estimate ( A + B + A 2 ) 1 . For this purpose, we write

1 A + B + A 2 = ( μ + + ν + + δ ) μ + γ 0 + ( 2 μ + + ν + + δ ) λ P ( λ , ξ ) with  P ( λ , ξ ) = A + B + + A 2 γ 0 + ( 2 μ + + ν + + δ ) 1 λ + A 2 .
(4.4)

By Lemma 4.1, (3.3), and (4.3) we have

A + B + + A 2 M 2 , 1 , ( γ 0 + ( 2 μ + + μ + + δ ) 1 λ + A 2 ) s M 2 s , 1 ,
(4.5)

so that by Lemma 2.2 we have

P M 0 , 1 .
(4.6)

Since A 2 A + 2 = γ 0 + ( μ + + ν + + δ ) 1 λ, by (4.4) and (4.6), we have A 2 A + 2 A + B + A 2 M 0 , 1 , which, combined with (4.3), furnishes (2.25).

Applying (4.4) to the formula in (2.15), we have

L 11 + = μ + ( μ + + ν + + δ ) 2 μ + + ν + + δ A + P , L 12 + = μ + A ( 2 μ + + ν + + δ 2 μ + + ν + + δ P ) , L 21 + = ( 2 μ + ( ν + + δ ) 2 μ + + ν + + δ A + B + + A + μ + ( ν + μ + + δ ) 2 μ + + ν + + δ ) A P , L 22 + = μ + ( μ + + ν + + δ ) 2 μ + + ν + + δ B + P .
(4.7)

Noting that A M 1 , 2 , by Lemma 2.2, (2.15), (4.3), (4.6), and (4.7), we have L 11 + M 1 , 1 , L 12 + M 1 , 2 , L 21 + M 1 , 2 , and L 22 + M 1 , 1 . In addition, since A M 1 , 2 and B M 1 , 2 , by Lemma 2.2 we have A± B M 1 , 2 and (A+ B ) B M 2 , 2 . Summing up, we have proved (2.16).

5 Analysis of Lopatinski determinant

In this section, we show the following lemma, which implies (2.20).

Lemma 5.1

Let L be the matrix defined in (2.17). Then there exists a positive constant ω depending solely on μ ± , ν + , ϵ, γ 0 ± , γ 1 + , γ 2 + , λ 0 , and δ 0 such that

|detL|ω ( | λ | 1 / 2 + A ) 3
(5.1)

for any(λ, ξ ) Γ ˜ ϵ , λ 0 .

Moreover, we have

| ξ κ { ( τ τ ) ( det L ) 1 } | C κ ( | λ | 1 / 2 + A ) 3 A | κ | (=0,1)
(5.2)

for any multi-index κ N 0 N 1 and(λ, ξ ) Γ ˜ ϵ , λ 0 . Namely, ( det L ) 1 M 3 , 2 .

Proof

Recalling (1.13) and setting δ 2 =max( δ 0 , γ 1 + γ 2 + λ 0 1 ), we have δ Σ ϵ and |δ| δ 2 . Moreover, by Lemma 4.1

( sin σ 2 ) (s μ + + ν + )|s μ + + ν + +δ|s μ + + ν + + δ 2
(5.3)

with s=0,1,2. To prove (5.1), first we consider the case R 1 | λ | 1 / 2 A with large R 1 1. Let P be the function defined in (4.4). By (4.4) we see easily that P=2+O( δ 3 ), that A + =A(1+O( δ 3 )), and that B ± =A(1+O( δ 3 )) when | γ 0 + ( μ + + ν + + δ ) 1 λ A 2 | γ 0 + ( ( sin σ 2 ) ( μ + 1 + ν + ) R 1 2 ) 1 δ 3 and | γ 0 ± ( μ ± ) 1 λ A 2 | γ 0 ± ( μ ± R 1 2 ) 1 δ 3 with very small positive number  δ 3 . Thus, by (4.7) we have

L 11 + = 2 μ + ( μ + + ν + + δ ) 2 μ + + ν + + δ A ( 1 + O ( δ 3 ) ) , A L 12 + = 2 ( μ + ) 2 2 μ + + ν + + δ A 2 ( 1 + O ( δ 3 ) ) , L 21 + = 2 ( μ + ) 2 2 μ + + ν + + δ A ( 1 + O ( δ 3 ) ) , A L 22 + = 2 μ + ( μ + + ν + + δ ) 2 μ + + ν + + δ A 2 ( 1 + O ( δ 3 ) ) .

On the other hand, we have B A= γ 0 λ μ 0 ( B + A ) =AO( δ 3 ), so that by (2.15) we have

L 11 = 2 μ A ( 1 + O ( δ 3 ) ) , A L 12 = A 2 O ( δ 3 ) , L 21 = A O ( δ 3 ) , L 22 = 2 μ A 2 ( 1 + O ( δ 3 ) ) .

Summing up, we have

L 11 = ( 2 μ + ( μ + + ν + + δ ) 2 μ + + ν + + δ + 2 μ ) A ( 1 + O ( δ 3 ) ) , A L 12 = 2 ( μ + ) 2 2 μ + + ν + + δ A 2 ( 1 + O ( δ 3 ) ) , L 21 = 2 ( μ + ) 2 2 μ + + ν + + δ A ( 1 + O ( δ 3 ) ) , L 22 = ( 2 μ + ( μ + + ν + + δ ) 2 μ + + ν + + δ + 2 μ ) A 2 ( 1 + O ( δ 3 ) ) ,

so that we have

det L = { ( 2 μ + ( μ + + ν + + δ ) 2 μ + + ν + + δ + 2 μ ) 2 ( 2 ( μ + ) 2 2 μ + + ν + + δ ) 2 } A 3 ( 1 + O ( δ 3 ) ) = 4 ( μ + + μ ) ( μ + ( ν + + δ ) 2 μ + + ν + + δ + μ ) A 3 ( 1 + O ( δ 3 ) ) .

Since μ + ( ν + + δ ) 2 μ + + ν + + δ + μ =1+O( | δ | 1 ) as |δ|, we have

| μ + ( ν + + δ ) 2 μ + + ν + + δ + μ | 1 2 + μ when |δ| K 0

with some large number K 0 depending on μ + and ν + . On the other hand, when δ Σ ϵ and |δ| K 0 , we write

μ + ( μ + + ν + + δ ) 2 μ + + ν + + δ + μ = μ + μ + ( μ + + μ ) ( μ + + ν + ) + δ ( μ + + μ ) 2 μ + + ν + + δ .

Since μ ± >0 and ν + >0 and δ Σ ϵ , by Lemma 4.1(1)

| μ + μ + ( μ + + μ ) ( μ + + ν + ) + δ ( μ + + μ ) | ( sin ϵ 2 ) ( | δ | ( μ + + μ ) + μ + μ + ( μ + + μ ) ( μ + + ν + ) ) ( sin ϵ 2 ) ( μ + μ + ( μ + + μ ) ( μ + + ν + ) ) .

On the other hand, |2 μ + + ν + +δ|2 μ + + ν + + δ 2 , so that we have

|detL|2( μ + + μ )min { ( sin ϵ 2 ) ( μ + μ + ( μ + + μ ) ( μ + + ν + ) ) 2 μ + + ν + + δ 2 , 1 2 + μ } A 3

provided that A | λ | 1 / 2 R 1 with some constant R 1 1 depending on μ ± , ν + , γ 0 ± , γ + 1 , γ 2 + , ϵ, λ 0 and δ 0 , which furnishes (5.1) when A | λ | 1 / 2 R 1 with any constant ω satisfying

0<ω2( μ + + μ )min { ( sin ϵ 2 ) ( μ + μ + ( μ + + μ ) ( μ + + ν + ) ) 2 μ + + ν + + δ 2 , 1 2 + μ } .
(5.4)

Secondly, we consider the case R 2 A | λ | 1 / 2 with large R 2 1. In this case, we have

A + = ( μ + + ν + + δ ) 1 / 2 ( γ 0 + λ ) 1 / 2 ( 1 + O ( δ 4 ) ) , B ± = ( μ ± ) 1 / 2 ( γ 0 ± λ ) 1 / 2 ( 1 + O ( δ 4 ) )

when |( μ + + ν + +δ) ( γ 0 + λ ) 1 A 2 | γ 0 + 1 ( μ + + ν + + δ 1 ) R 2 2 δ 4 and | μ ± ( γ 0 ± λ ) 1 A 2 | γ 0 ± 1 μ ± R 2 2 δ 4 with some very small positive number δ 4 . By (2.15)

L 11 = ( ( μ + γ 0 + ) 1 / 2 + ( μ γ 0 ) 1 / 2 ) λ 1 / 2 ( 1 + O ( δ 4 ) ) , A L 12 = λ O ( R 2 1 ) ( 1 + O ( δ 4 ) ) , L 21 = ( μ ) 1 / 2 ( γ 0 λ ) 1 / 2 ( 1 + O ( δ 4 ) ) , L 22 = γ 0 λ ( 1 + O ( δ 4 ) ) .

Thus, we have

|detL| γ 0 2 ( ( μ + γ 0 + ) 1 / 2 + ( μ γ 0 ) 1 / 2 ) | λ | 3 / 2

provided that R 2 A | λ | 1 / 2 with some constant R 2 1 depending on μ ± , ν + , γ 0 ± , γ 1 + , γ 2 + , ϵ, λ 0 , and δ 0 , which shows that (5.1) holds when R 2 A | λ | 1 / 2 with any constant ω satisfying

0<ω γ 0 2 ( ( μ + γ 0 + ) 1 / 2 + ( μ γ 0 ) 1 / 2 ) .
(5.5)

Thirdly, we consider the case R 2 1 | λ | 1 / 2 A R 1 | λ | 1 / 2 . Set

λ ˜ = λ ( | λ | 1 / 2 + A ) 2 , A ˜ = A | λ | 1 / 2 + A , A ˜ + = γ 0 + ( μ + + ν + + δ ) 1 λ ˜ + A ˜ 2 , B ˜ ± = γ 0 ± ( μ ± 1 ) 1 λ ˜ + A ˜ 2 , D ( R 1 , R 2 ) = { ( λ ˜ , A ˜ ) ( 1 + R 1 ) 2 | λ ˜ | R 2 2 ( 1 + R 2 ) 2 , ( 1 + R 2 ) 1 A ˜ R 1 ( 1 + R 1 ) 1 } .

If (λ, ξ ) satisfies the condition R 2 1 | λ | 1 / 2 A R 1 | λ | 1 / 2 , then ( λ ˜ , A ˜ )D( R 1 , R 2 ). We define L ˜ i j by replacing A + , A, and B ± by A ˜ + , A ˜ , and B ˜ ± in (2.15), respectively. Setting det L ˜ = L ˜ 11 L ˜ 22 A ˜ L ˜ 12 L ˜ 21 , we have

detL= ( | λ | 1 / 2 + A ) 3 det L ˜ .
(5.6)
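The reduction (5.6) ultimately rests on the fact that, with δ held fixed, each of A, A_+, B_± in (2.4) is positively homogeneous of degree one under the scaling (λ, ξ') ↦ (s²λ, sξ'). The sketch below confirms this numerically with illustrative parameters of our own choosing.

```python
import numpy as np

mu_p, nu_p, mu_m, g0_p, g0_m, delta = 1.0, 2.0, 3.0, 1.0, 1.5, 0.4 + 0.2j  # illustrative

def roots(lam, A):
    """A, A_+, B_+, B_- as in (2.4), principal square roots, delta fixed."""
    Ap = np.sqrt(g0_p * lam / (mu_p + nu_p + delta) + A ** 2)
    Bp = np.sqrt(g0_p * lam / mu_p + A ** 2)
    Bm = np.sqrt(g0_m * lam / mu_m + A ** 2)
    return np.array([A, Ap, Bp, Bm])

lam, A, s = 2.0 + 1.5j, 0.8, 3.7
# Degree-one homogeneity: roots(s^2 lam, s A) = s * roots(lam, A).
print(np.max(np.abs(roots(s ** 2 * lam, s * A) - s * roots(lam, A))))   # ~ machine precision
```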

First, we prove that det L ˜ 0 provided that ( λ ˜ , A ˜ )D( R 1 , R 2 ), λ ˜ Σ ϵ and |δ| δ 5 with some small δ 5 >0 by contradiction. Suppose that det L ˜ =0. By (5.6) detL=0, so that in view of (2.18) there exist w ± ( x N )= P ± ( e B ± x N e A ± x N )+ Q ± e B ± x N and p ( x N )= γ e A x N with A =A such that w ± ( x N )=( w ± 1 ( x N ),, w ± N ( x N ))(0,,0), and w ± ( x N ) and p ( x N ) satisfy (2.2) and (2.3) with h ˆ j (0)=0, h ˆ N (0)=0, and k ˆ J (0)=0, that is, they satisfy the following homogeneous equations:

γ 0 + λ w + j = 1 N 1 μ + i ξ ( i ξ j w + + i ξ w + j ) μ + N ( i ξ j w + N + N w + j ) ( ν + μ + + δ ) i ξ j ( i ξ w + + N w + N ) = 0 for  x N > 0 , γ 0 + λ w + N = 1 N 1 μ + i ξ ( N w + + i ξ w + N ) 2 μ + N 2 w + N ( ν + μ + + δ ) N ( i ξ w + + N w + N ) = 0 for  x N > 0 , γ 0 λ w j = 1 N 1 μ i ξ ( i ξ j w + i ξ w j ) μ N ( i ξ j w N + N w j ) + i ξ j p = 0 for  x N < 0 , γ 0 λ w N = 1 N 1 μ i ξ ( N w + i ξ w N ) 2 μ N 2 w N + N p = 0 for  x N < 0 , i ξ w + N w N = 0 for  x N < 0 , μ + ( N w + j + i ξ j w + N ) | x N = 0 + μ ( N w j + i ξ j w ) | x N = 0 = 0 , 2 μ + N w + N + ( ν + μ + + δ ) ( i ξ w + + N w + N ) | x N = 0 + ( 2 μ N w N p ) | x N = 0 = 0 .
(5.7)

Set ( a , b ) + = 0 a( x N ) b ( x N ) ¯ d x N , ( a , b ) = 0 a( x N ) b N ( x N ) ¯ d x N , and a ± = ( a , a ) ± 1 / 2 . Multiplying the equations in (5.7) by w ± J ¯ and using integration by parts and the jump conditions in (5.7), we have

0 = λ ( γ 0 + = 1 N w + 2 + γ 0 = 1 N w 2 ) + μ + [ j , k = 1 N 1 i ξ k w + j + 2 + i ξ w + + 2 + j = 1 N 1 N w + j + 2 + j = 1 N 1 ( i ξ j w + N , N w + N ) + + j = 1 N 1 i ξ j w + N + 2 + j = 1 N 1 ( N w + j , i ξ j w + N ) + + 2 N w + N + 2 ] + ( ν + μ + + δ ) [ i ξ w + + 2 + ( N w + N , i ξ w + ) + + ( i ξ w + , N w + N ) + + N w + N + 2 ] + μ [ j , k = 1 N 1 i ξ k w j 2 + i ξ w 2 + j = 1 N 1 N w j 2 + j = 1 N 1 ( i ξ j w N , N w N ) + j = 1 N 1 i ξ j w N 2 + j = 1 N 1 ( N w j , i ξ j w N ) + 2 N w N 2 ] = λ ( γ 0 + w + + 2 + γ 0 w 2 ) + μ + [ j , k = 1 N 1 i ξ k w + j + 2 + i ξ w + + 2 + j = 1 N 1 N w + j + i ξ j w + N + 2 + 2 N w + N + 2 ] + ( ν + μ + + δ ) N w + N + i ξ w + + 2 + μ [ j , k = 1 N 1 i ξ k w j 2 + i ξ w 2 + j = 1 N 1 N w j + i ξ j w N 2 + 2 N w N 2 ] .
(5.8)

Taking the real part and the imaginary part in (5.8), using the inequality

j , k = 1 N 1 i ξ j w + k + 2 + i ξ w + + 2 + 2 N w + N + 2 2 ( i ξ w + + 2 + N w + N + 2 ) N w + N + i ξ w + + 2 ,
(5.9)

and setting K= γ 0 + w + + 2 + γ 0 w 2 and L= N w + N + i ξ w + + 2 for short, we have

(Imλ)K+(Imδ)L=0,0(Reλ)K+( ν + +Reδ)L.
(5.10)

First, we consider the case δ=0. When Imλ0 or Imλ=0 and Reλ>0, we have K=0, that is, w ± =0. When Imλ=0 and Reλ0, it follows from λ Σ ϵ that λ=0. Choosing ϵ >0 in such a way that μ + ϵ >0 and ν + ϵ >0, by (5.8) with λ=0 and δ=0 and (5.9) we have

ϵ [ j , k = 1 N 1 i ξ k w + j + 2 + i ξ w + + 2 + 2 N w + N + 2 ] + μ + j = 1 N 1 N w + j + i ξ j w + N + 2 + ( ν + ϵ ) N w + N + i ξ w + + 2 + μ [ j = 1 N 1 N w j + i ξ j w N 2 + 2 N w N 2 ] 0 ,

which furnishes N w ± j + i ξ j w ± N ± =0 (j=1,,N1) and N w ± N ± =0. Since w ± J ( x N )0 as ± x N (J=1,,N), we have w ± =0, which contradicts w ± 0. Thus, we have det L ˜ 0 when δ=0, which implies that

c 1 =inf { | det L ˜ | ( λ ˜ , A ˜ ) D ( R 1 , R 2 ) , λ ˜ Σ ϵ , δ = 0 } >0.

Since A ˜ + = γ 0 + ( μ + + ν + ) 1 λ ˜ + A ˜ 2 +O(|δ|), there exists a δ 5 >0 such that

inf { | det L ˜ | ( λ ˜ , A ˜ ) D ( R 1 , R 2 ) , λ ˜ Σ ϵ , | δ | δ 5 } c 1 /2,

which, combined with (5.6), implies that

|detL| ω 1 ( | λ | 1 / 2 + A ) 3
(5.11)

with some positive number ω 1 provide that R 2 | λ | 1 / 2 A R 1 1 | λ | 1 / 2 and λC with |δ| δ 5 .

Finally, we consider the case where |δ| δ 5 . First, we consider the case (C1), that is, δ= γ 1 + γ 2 + λ 1 . In this case, it follows from |δ| δ 5 that |λ| γ 1 + γ 2 + δ 5 1 , so that we prove that detL0 directly provided that λ Λ ϵ , λ 0 = Σ ϵ , λ 0 K ϵ . Since Reδ= γ 1 + γ 2 + Reλ | λ | 2 and Imδ= γ 1 + γ 2 + Imλ | λ | 2 , by (5.10) we have

Imλ ( K γ 1 + γ 2 + | λ | 2 L ) =0,0(Reλ)K+ ( ν + + γ 1 + γ 2 + Re λ | λ | 2 ) L.
(5.12)

When Imλ=0, we have λ 0 Reλ=λ, so that K=0, that is, w ± =0. If Imλ0, by (5.12) K= γ 1 + γ 2 + | λ | 2 L, which, inserted into the second formula in (5.12), furnishes

0 ν + | λ | 2 ( | λ | 2 + 2 γ 1 + γ 2 + ν + 1 Re λ ) L = ν + | λ | 2 ( ( Re λ + γ + 1 γ + 2 ν + 1 ) 2 + ( Im λ ) 2 ( γ 1 + γ 2 + ν + 1 ) 2 ) L .

Since ( Re λ + γ + 1 γ + 2 ν + 1 ) 2 + ( Im λ ) 2 ( γ 1 + γ 2 + ν + 1 ) 2 >0 when λ K ϵ , we have L=0, which implies that K=0, that is, w ± =0. Summing up, we have obtained w ± =0, which contradicts w ± 0, and therefore we have detL0 when λ Σ ϵ , λ 0 K ϵ . Thus, we have

inf { | det L | λ Σ ϵ , λ 0 K ϵ , R 2 1 | λ | 1 / 2 A R 1 | λ | 1 / 2 , | λ | γ 1 + γ 2 + δ 5 1 } >0,

which, combined with (5.11), furnishes

|detL| ω 2 ( | λ | 1 / 2 + A ) 3

with some positive constant ω 2 provided that R 2 1 | λ | 1 / 2 A R 1 | λ | 1 / 2 and λ K ϵ Σ ϵ , λ 0 .

Secondly, we consider the case where δ Σ ϵ , δ 5 |δ| δ 0 , Reδ0, |λ| λ 0 and Reλ|Reδ/Imδ||Imλ|. Note that this case includes (C2) and Imδ0. We prove that det L ˜ 0 provided that ( λ ˜ , A ˜ )D( R 1 , R 2 ) and Re λ ˜ |Reδ/Imδ||Im λ ˜ | by contradiction. Suppose that det L ˜ =0, and then by (5.6), detL=0. Thus, we have (5.10). When Imλ=0 and Imδ0, we have L=0, so that 0(Reλ)K. When Reλ>0, we have K=0. When λ=0, by (5.8) with λ=0 and L=0, we have N w ± N ± =0 and N w ± j + i ξ j w ± N ± 2 =0. Since w ± J ( x N )0 as ± x N , we have w ± =0. Thus, we have w ± =0 when Imλ=0 and Imδ0. When Imλ0, Imδ0 and (Imλ)Imδ>0, we have K=0 by the first formula of (5.10). When Imλ0, Imδ0 and (Imλ)Imδ<0, we have K=|Imδ/Imλ|L by the first formula of (5.10), so that it follows from the second formula of (5.10) that 0( ν + +Reδ+(Reλ)|Imδ/Imλ|)L. Since Reδ=|Reδ| and since Reλ|Reδ/Imδ||Imλ|, as follows from Re λ ˜ |Reδ/Imδ||Im λ ˜ |, we have Reδ+(Reλ)|Imδ/Imλ||Reδ|+|Reδ|=0, which furnishes L=0. Thus, K=0. Summing up, we have proved that K=0, that is, w ± =0. But this contradicts w ± 0, and therefore det L ˜ 0. In particular,

c 2 = inf { | det L ˜ | ( λ ˜ , A ˜ ) D ( R 1 , R 2 ) , Re λ ˜ | Re δ / Im δ | | Im λ ˜ | , δ Σ ϵ , Re δ 0 , δ 5 | δ | δ 0 } > 0 ,

which, combined with (5.11) and (5.6), implies that

|detL| ω 3 ( | λ | 1 / 2 + A ) 3

with some positive constant ω 3 provided that R 2 | λ | 1 / 2 A R 1 1 | λ | 1 / 2 and δ Σ ϵ , Reδ<0, and Reλ|Reδ/Imδ||Imλ|.

Analogously, we have

|detL| ω 4 ( | λ | 1 / 2 + A ) 3

with some positive constant ω 4 provided that R 2 | λ | 1 / 2 A R 1 1 | λ | 1 / 2 and δ Σ ϵ , Reδ0, and Reλ λ 0 |Imλ|. Therefore, we have proved (5.1).

Since

detL= L 11 L 22 A L 12 L 21 ,

by (2.19), the Leibniz rule, the Bell formula (3.3) with f(t)=1/t, g( ξ )=detL, and (5.1), we have

| ξ κ ( det L ) 1 | C κ = 1 | κ | | det L | ( + 1 ) ( | λ | 1 / 2 + A ) 3 A | κ | C κ ( | λ | 1 / 2 + A ) 3 A | κ | ,

which shows (5.2) with =0. Analogously, we have (5.2) with =1, which completes the proof of Lemma 5.1. □

6 Proofs of main results

In this section, we prove Theorem 1.2. For this purpose, first of all we prove (2.38). For the multipliers appearing in S J 1 ± (λ) of (2.36), by Lemma 2.2, (2.8), and (4.3) we have

R J , 1 + A + M 2 , 2 , R J , 0 + γ 0 ± λ 1 / 2 μ + B + 2 A + M 2 , 2 , R J , 0 + ( i ξ m ) B + 2 A + M 2 , 2 , R J , 1 M 1 , 2 , R J , 0 γ 0 ± λ 1 / 2 μ B 2 M 1 , 2 , R J , 0 ( i ξ m ) B 2 M 1 , 2 , R J , 1 ± γ 0 ± λ 1 / 2 μ ± B ± 2 M 2 , 2 , R J , 1 ± ( i ξ m ) B ± 2 M 2 , 2 , R J , 1 ± λ 1 / 2 B ± 2 M 2 , 2 ,

so that by Lemma 3.1 with K 1 + (λ) and K 2 + (λ) and Lemma 3.2 with K 1 (λ) and K 2 (λ) we have

R L ( Z ( R N ) , L q ( R ± N ) 2 + N + N 2 ) ( { ( τ τ ) G λ S J 1 ± ( λ ) λ Λ ϵ , λ 0 } ) C(=0,1).

For the multipliers appearing in S J 2 ± (λ) of (2.36), by Lemma 2.2, (2.8), and (4.3) we have

S J , 2 ± M 2 , 2 , S J , 1 ± γ 0 ± λ 1 / 2 μ ± B ± 2 M 2 , 2 , S J , 1 ± ( i ξ m ) B ± 2 M 2 , 2 , B ± S J , 1 ± γ 0 ± λ 1 / 2 μ ± B ± 2 M 2 , 2 , B ± S J , 2 ± ( i ξ m ) B ± 2 M 2 , 2 , B ± S J , 1 ± γ 0 ± μ ± B ± 2 M 2 , 2 , B ± S J , 1 ± B ± 2 M 2 , 2 ,

so that by Lemma 3.1 with K 2 + (λ) and Lemma 3.2 with K 2 (λ) we have

R L ( Z ( R N ) , L q ( R ± N ) 2 + N + N 2 ) ( { ( τ τ ) G λ S J 2 ± ( λ ) λ Λ ϵ , λ 0 } ) C(=0,1).

For the multipliers appearing in S J 3 ± (λ) of (2.36), by Lemma 2.2, (2.8), and (4.3) we have

T J , 1 ± M 1 , 1 , T J , 0 ± γ 0 ± λ 1 / 2 μ ± B ± 2 M 1 , 1 , T J , 0 ± ( i ξ m ) B ± 2 M 1 , 1 , B ± T J , 1 ± γ 0 ± λ 1 / 2 μ ± B ± 2 M 1 , 1 , B ± T J , 0 ± γ 0 ± μ ± B ± 2 M 1 , 1 , B ± T J , 0 ± B ± 2 M 1 , 1 ,

so that by Lemma 3.1 with K 4 + (λ) and Lemma 3.2 with K 4 (λ) we have

R L ( Z ( R N ) , L q ( R ± N ) 2 + N + N 2 ) ( { ( τ τ ) G λ S J 3 ± ( λ ) λ Λ ϵ , λ 0 } ) C(=0,1).

For the multipliers appearing in S J 4 + (λ) of (2.36), by Lemma 2.2, (2.8), and (4.3) we have

U N , 0 + γ 0 + λ 1 / 2 μ + B + 2 M 1 , 1 , U N , 0 + ( i ξ m ) B + 2 M 1 , 1 , A + U N , 0 + γ 0 + μ + B + 2 M 1 , 1 , A + U N , 0 + B + 2 M 1 , 1 ,

so that by Lemma 3.1 with K 3 + (λ) and K 4 + (λ) we have

R L ( Z ( R N ) , L q ( R + N ) 2 + N + N 2 ) ( { ( τ τ ) G λ S J 4 + ( λ ) λ Λ ϵ , λ 0 } ) C(=0,1).

Finally, for the multipliers appearing in P (λ) of (2.36), by Lemma 2.2, (2.8), and (4.3) we have

p , 0 i ξ m A M 0 , 2 , p , 1 γ 0 λ 1 / 2 μ B 2 i ξ j A M 0 , 2 , p , 1 ( i ξ m ) B 2 i ξ j A M 0 , 2 , p , 0 M 0 , 2 , p , 1 γ 0 λ 1 / 2 μ B 2 M 0 , 2 , p , 1 ( i ξ m ) B 2 M 0 , 2

so that by Lemma 3.5 with L 6 (λ) we have

R L ( Z ( R N ) , L q ( R N ) N ) ( { ( τ τ ) P ( λ ) λ Λ ϵ , λ 0 } ) C(=0,1).

Summing up, we have proved (2.38).

To transfer the problem (1.10) to (2.1), we use the solutions w + and ( w , θ ) to the following equations:

{ λ w + γ 0 + 1 Div S δ + ( w + ) = f + in  R + N , S δ + ( w + ) n | x N = 0 + = 0 on  R 0 N ,
(6.1)
{ λ w γ 0 1 Div S ( w , θ ) = f , div w = 0 in  R N , S ( w , θ ) n | x N = 0 = 0 on  R 0 N ,
(6.2)

respectively. We know the following two theorems. The first theorem is due to Götz and Shibata [21], Theorem 2.5] and the second one is due to Shibata [18], Theorem 3.4].

Theorem 6.1

Let1<q<, 0<ϵ<π/2, δ 0 >0and λ 0 >0. Let Γ ϵ , λ 0 be the set defined in (1.12). Then there exists an operator family V + (λ)Hol( Γ ϵ , λ 0 ,L( L q ( R + N ) N , W q 2 ( R + N ) N ))such that for any f + L q ( R + N ) N andλ Γ ϵ , λ 0 , w + = V + (λ) f + is a unique solution to problem (6.1), and V + (λ)satisfies the following estimates:

R L ( L q ( R + N ) N , L q ( R + N ) 2 N + N 2 + N 3 ) ( { ( τ τ ) ( G λ V + ( λ ) ) λ Γ ϵ , λ 0 } ) C(=0,1)

with some constant C depending on ϵ, λ 0 , δ 0 , μ + , ν + , γ 0 + , γ + 1 , γ + 2 , q and N.

Theorem 6.2

Let1<q<and0<ϵ<π/2. Then there exist operator families

V ( λ ) Hol ( Σ ϵ , L ( L q ( R N ) N , W q 2 ( R N ) N ) ) , O ( λ ) Hol ( Σ ϵ , L ( L q ( R N ) N , W ˆ q 1 ( R N ) ) )

such that for anyλ Σ ϵ and f L q ( R N ) N , w = V (λ) f and θ = O (λ) f are unique solutions to problem (6.2), and V (λ)and O (λ)satisfy the following estimates:

R L ( L q ( R N ) N , L q ( R N ) 2 N + N 2 + N 3 ) ( { ( τ τ ) G λ V ( λ ) λ Σ ϵ } ) C ( = 0 , 1 ) , R L ( L q ( R N ) N , L q ( R N ) N ) ( { ( τ τ ) O ( λ ) λ Σ ϵ } ) C ( = 0 , 1 )

with some constant C depending on ϵ, μ , γ 0 , q and N.

The composition of two ℛ-bounded operators is ℛ-bounded, and the sum of two ℛ-bounded operators is also ℛ-bounded. Extending the operator V_+(λ) to R_-^N and the operators V_-(λ) and O_-(λ) to R_+^N by the P.L. Lions method, we see that the resulting operators are also ℛ-bounded, so that combining Theorem 6.1 and Theorem 6.2 with (2.38) yields Theorem 1.2. This completes the proof of Theorem 1.2.