Hopf Bifurcation for General 1D Semilinear Wave Equations with Delay

We consider boundary value problems for 1D autonomous damped and delayed semilinear wave equations of the type $$ \partial^2_t u(t,x)- a(x,\lambda)^2\partial_x^2u(t,x)= b(x,\lambda,u(t,x),u(t-\tau,x),\partial_tu(t,x),\partial_xu(t,x)), \; x \in (0,1) $$ with smooth coefficient functions $a$ and $b$ such that $a(x,\lambda)>0$ and $b(x,\lambda,0,0,0,0) = 0$ for all $x$ and $\lambda$. We state conditions ensuring Hopf bifurcation, i.e., existence, local uniqueness (up to time shifts), regularity (with respect to $t$ and $x$) and smooth dependence (on $\tau$ and $\lambda$) of small non-stationary time-periodic solutions, which bifurcate from the stationary solution $u=0$, and we derive a formula which determines the bifurcation direction with respect to the bifurcation parameter $\tau$. To this end, we transform the wave equation into a system of partial integral equations by means of integration along characteristics, and then we apply a Lyapunov-Schmidt procedure and a generalized implicit function theorem to this system. The main technical difficulties, which have to be managed, are typical for hyperbolic PDEs (with or without delay): small divisors and the "loss of derivatives" property. We do not use any properties of the corresponding initial-boundary value problem. In particular, our results are true also for negative delays $\tau$.


The problem
This paper concerns 1D autonomous damped and delayed semilinear wave equations of the general type $$ \partial^2_t u(t,x)- a(x,\lambda)^2\partial_x^2u(t,x)= b(x,\lambda,u(t,x),u(t-\tau,x),\partial_tu(t,x),\partial_xu(t,x)), \; x \in (0,1), \quad (1.1) $$ with one Dirichlet and one Neumann boundary condition $$ u(t,0) = \partial_x u(t,1) = 0. \quad (1.2) $$ The goal is to describe Hopf bifurcation, i.e., existence and local uniqueness (up to time shifts) of families (parametrized by $\tau$ and $\lambda$) of non-stationary time-periodic solutions to (1.1)-(1.2), which bifurcate from the stationary solution $u = 0$.
Our main result, stated in Theorem 2 below, is quite similar to Hopf bifurcation theorems for delayed ODEs (see, e.g., [7], [11, Chapter 5.5], [12, Chapter 11], [37,41]) and for delayed parabolic PDEs (see, e.g., [3,6,8,32], [44, Chapter 6]). However, the analysis of Hopf bifurcation for hyperbolic PDEs faces considerable complications compared to ODEs or parabolic PDEs (with or without delay). In the present paper we provide an approach for overcoming the following technical difficulties, which appear in dissipative hyperbolic PDEs and do not appear in ODEs or parabolic PDEs: First, the question whether a nondegenerate time-periodic solution to a dissipative nonlinear wave equation is locally unique (up to time shifts in the autonomous case) and whether it depends smoothly on the system parameters is much more delicate than for ODEs or parabolic PDEs (cf., e.g., [13,14]). One reason for this is the so-called loss of derivatives for hyperbolic PDEs. To overcome this difficulty, we use a generalized implicit function theorem [24, Theorem 2.2], which is applicable to abstract equations with a loss of derivatives property. Note that, for smoothness of the data-to-solution map of hyperbolic PDEs, it is necessary, in general, that the equation depends smoothly not only on the data and on the unknown function $u$, but also on the space variable $x$ (and the time variable $t$ in the non-autonomous case). This is completely different from what is known for parabolic PDEs (cf. [9]).
Second, the analysis of time-periodic solutions to hyperbolic PDEs usually encounters a complication known as the problem of small divisors [2,17,43]. Since Hopf bifurcations can be expected only in the so-called non-resonant case, where small divisors do not appear, we have to impose a condition (assumption (1.7) below) which prevents small divisors from arising. That condition has no counterpart in the case of ODEs or parabolic PDEs.
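To illustrate the phenomenon in the simplest possible setting (this is an illustration only, not part of the proofs): take $a(x,\lambda) \equiv 1$, so that the linear wave operator with the boundary conditions (1.2) has eigenfrequencies $(2j-1)\pi/2$, and suppose that a formal Fourier analysis of time-periodic solutions produces divisors of the form $k\omega - (2j-1)\pi/2$ (an assumed model form). The divisors never vanish, since $\pi$ is irrational, but they can become very small as the Fourier indices grow:

```python
import math

def min_divisor(omega, N):
    """Smallest |k*omega - (2j-1)*pi/2| over 1 <= k <= N and j >= 1.

    For a(x, lambda) = 1 and the boundary conditions u(t,0) = d_x u(t,1) = 0,
    the eigenfrequencies of the linear wave operator are (2j-1)*pi/2; the
    divisor form k*omega - (2j-1)*pi/2 is an assumed model for illustration."""
    best = float("inf")
    for k in range(1, N + 1):
        # index j whose frequency (2j-1)*pi/2 is closest to k*omega
        j = max(1, round((2 * k * omega / math.pi + 1) / 2))
        for jj in (j - 1, j, j + 1):
            if jj >= 1:
                best = min(best, abs(k * omega - (2 * jj - 1) * math.pi / 2))
    return best

# Nonzero, but typically much smaller when larger indices are admitted.
print(min_divisor(1.0, 10))
print(min_divisor(1.0, 10**5))
```

For instance, $k = 11$, $j = 4$ already gives $|11 - 7\pi/2| \approx 0.0044$; assumption (1.7) is exactly what rules out such near-resonances becoming harmful.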
And third, linear autonomous hyperbolic PDEs in one space dimension differ essentially from those in more than one space dimension: They satisfy the spectral mapping property (see [38] in $L^p$-spaces and, more importantly for applications to nonlinear problems, [29] in C-spaces) and they generate Riesz bases (see, e.g., [10,18]), which is not the case, in general, if the space dimension is larger than one (see the celebrated counter-example of M. Renardy in [40]). In higher space dimensions, therefore, the question of Fredholmness of the corresponding differential operators in appropriate spaces of time-periodic functions is highly difficult.
The main consequence (from the point of view of mathematical techniques) of the fact that the space dimension of (1.1), (1.2) is one consists in the following: We can use integration along characteristics in order to replace (1.1), (1.2) by a nonlinear partial integral equation (see [1] for the notion "partial integral equation"). After that, we can apply known Fredholmness properties to the linearized partial integral equation ([23], [24, Corollary 4.11]) and, hence, we can apply the Lyapunov-Schmidt reduction method to the nonlinear partial integral equation.
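The basic identity behind integration along characteristics can be tested on a single transport equation $\partial_t v + a\,\partial_x v = f$ with constant speed $a$: the solution is obtained by integrating $f$ along the straight characteristic through $(t,x)$. (The system (2.1) of the paper has two components and $x$-dependent speeds; the constant-speed scalar case and all names below are illustrative.)

```python
import math

a = 2.0                      # constant characteristic speed (illustrative)
v0 = lambda x: math.sin(x)   # initial data
f = lambda t, x: 1.0         # right-hand side

def v_characteristics(t, x, n=1000):
    """Integrate f along the characteristic through (t, x):
    v(t,x) = v0(x - a*t) + int_0^t f(s, x - a*(t - s)) ds  (midpoint rule)."""
    h = t / n
    integral = h * sum(f((i + 0.5) * h, x - a * (t - (i + 0.5) * h))
                       for i in range(n))
    return v0(x - a * t) + integral

# For f = 1 the exact solution is v(t,x) = v0(x - a*t) + t.
t, x = 0.7, 0.3
print(abs(v_characteristics(t, x) - (v0(x - a * t) + t)))  # quadrature error, ~ 0
```

Replacing the PDE by this integral representation is exactly what turns (2.1) into the partial integral equation (3.1), at the price of integral operators whose kernels involve the characteristic curves.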

Main results
Our goal is to investigate time-periodic solutions to (1.1)-(1.2). In order to work in spaces of functions with fixed time period $2\pi$, we put the frequency parameter $\omega$ explicitly into the equation by scaling the time variable $t$ and by introducing a new unknown function $u$ as follows: The problem (1.1)-(1.2) for the new unknown function $u$ and the unknown frequency $\omega$ reads $$ \omega^2 \partial_t^2 u(t,x) - a(x,\lambda)^2 \partial_x^2 u(t,x) = b(x,\lambda,u(t,x),u(t-\omega\tau,x),\omega\partial_t u(t,x),\partial_x u(t,x)), $$ $$ u(t,0) = \partial_x u(t,1) = 0, \quad u(t+2\pi,x) = u(t,x). \quad (1.3) $$
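Concretely, the rescaling works as follows: if $\tilde u$ is a $2\pi/\omega$-periodic solution to (1.1)-(1.2) and we set $u(t,x) := \tilde u(t/\omega, x)$, then $u$ is $2\pi$-periodic and the chain rule gives

```latex
\partial_t u(t,x) = \tfrac{1}{\omega}\,(\partial_t\tilde u)(t/\omega,x),\qquad
\partial_t^2 u(t,x) = \tfrac{1}{\omega^2}\,(\partial_t^2\tilde u)(t/\omega,x),\qquad
u(t-\omega\tau,x) = \tilde u(t/\omega-\tau,x),
```

so each time derivative in (1.1) acquires a factor $\omega$ and the delay $\tau$ becomes $\omega\tau$, which is exactly (1.3).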
Assumptions (A1)-(A3) below are standard for Hopf bifurcation. To formulate them, we consider the following eigenvalue problem for the linearization of (1.3) in $u = 0$, $\omega = 1$ and $\lambda = 0$: $$ \mu^2 u(x) - a(x,0)^2 u''(x) = \bigl(\partial_3 b(x) + e^{-\mu\tau}\partial_4 b(x) + \mu\,\partial_5 b(x)\bigr)u(x) + \partial_6 b(x)\,u'(x), \quad u(0) = u'(1) = 0. \quad (1.4) $$ Here $\mu \in \mathbb{C}$ and $u : [0,1] \to \mathbb{C}$, where $\partial_j b$ is the partial derivative of the function $b$ with respect to its $j$th variable, evaluated at $(x,0,0,0,0,0)$.
Our first assumption states that for a certain delay $\tau = \tau_0$ there exists a pair of purely imaginary geometrically simple eigenvalues of (1.4) (without loss of generality we may assume that the pair is $\mu = \pm i$): (A1) There exists $\tau_0 \in \mathbb{R}$ such that for $\mu = i$ and $\tau = \tau_0$ there exists exactly one (up to linear dependence) solution $u \ne 0$ to (1.4).
In descriptions of Hopf bifurcation phenomena one of the main questions is that of the so-called bifurcation direction, i.e. the question whether the bifurcating time-periodic solutions exist for bifurcation parameters (close to the bifurcation point) such that the stationary solution is unstable (in this case the Hopf bifurcation is called supercritical) or not. For ODEs and parabolic PDEs (with or without delay) it is known that, under reasonable additional assumptions, in the supercritical case the bifurcating time-periodic solutions are orbitally stable. For hyperbolic PDEs this relationship between bifurcation direction and stability is believed to be true as well, but rigorous proofs are not available up to now. More exactly, it is expected that the bifurcating non-stationary time-periodic solutions, which are described by Theorem 2, are orbitally stable if for all eigenvalues $\mu \ne \pm i$ of (1.4) with $\tau = \tau_0$ it holds $\operatorname{Re}\mu < 0$ and if $\rho\,\partial^2_\varepsilon\tau(0,0) > 0$. In any case, in Theorem 4 below we present a formula which shows how to calculate the number $\partial^2_\varepsilon\tau(0,0)$ by means of the eigenfunctions $u_0$ and $u^*$ and of the first three derivatives of the nonlinearity $b(x,0,\cdot,\cdot,\cdot,\cdot)$. It is known that such formulae may be quite complicated and not explicit (see, e.g., [16, Section 3.3], [20], [21, Theorem I.12.2], [22, Theorem 1.2(ii)], [28]). Therefore, in order to keep the technicalities simple, in Theorem 4 below we consider only nonlinearities of the type (1.11). Set $\beta^0_j(x) := \partial_3^3\beta_j(x,0,0)$ for $j = 1, 2, 3, 4$. Our result about the bifurcation direction reads as follows: Theorem 4 Let the assumptions of Theorem 2 and the conditions (1.10) and (1.11) be fulfilled. Then
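For orientation (this is the standard Hopf bifurcation picture, not a statement taken from Theorem 4 itself): the bifurcating solutions are parametrized by an amplitude $\varepsilon$, and the delay along the branch admits an expansion

```latex
\tau = \tau(\varepsilon) = \tau_0 + \tfrac{1}{2}\,\partial_\varepsilon^2\tau(0,0)\,\varepsilon^2 + o(\varepsilon^2),
\qquad \partial_\varepsilon\tau(0,0) = 0,
```

so the sign of $\partial_\varepsilon^2\tau(0,0)$ decides on which side of $\tau_0$ the non-stationary periodic solutions exist, i.e. the bifurcation direction.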

Remark 5
We do not know whether generalizations of Theorems 2 and 4 to higher space dimensions and/or to quasilinear equations exist and how they should look. Also, we do not know much about the initial-boundary value problems corresponding to (1.1). In particular, we do not know whether the bifurcation direction implies stability properties of the bifurcating time-periodic solutions (as is the case for ODEs or parabolic PDEs).
Our paper is organized as follows: In Subsection 1.3 we comment on some publications which are related to ours. In Section 2 we show that any solution to (1.3) creates a solution to a semilinear first-order 2 × 2 hyperbolic system, namely (2.1), and vice versa. In Section 3 we show (by using the method of integration along characteristics) that any solution to the first-order hyperbolic system (2.1) solves a system of partial integral equations, namely (3.1), and vice versa. Note that in Sections 2 and 3 we perform pure transformations, i.e., problem (1.3) is equivalent to problem (3.1). In particular, the technical difficulties of (1.3), like small divisors and loss of smoothness, are hidden in (3.1) as well. But it turns out that in (3.1) they can be handled more easily than in (1.3).
In Sections 4 and 5 we perform a Lyapunov-Schmidt procedure in order to reduce locally the system (3.1) with infinite-dimensional state parameter to a problem with two-dimensional state parameter. Here the main technical results are Lemma 10 about Fredholmness of the linearization of (3.1) and Lemma 20 about local unique solvability and smooth dependence of the infinite-dimensional part of the Lyapunov-Schmidt system. The proofs of those lemmas are much more complicated than the corresponding proofs for ODEs or parabolic PDEs (with or without delay).
In particular, in the proof of Lemma 10 (more exactly in the proof of Claim 4 there) we use assumption (1.7), and it turns out that the conclusions of Lemma 10 (and of Theorem 2 as well) are not true, in general, if (1.7) is not true.
In the proof of Lemma 20 we use a generalized implicit function theorem, which is a particular case of [24, Theorem 2.2] and concerns abstract parameter-dependent equations with a loss of smoothness property. This generalized implicit function theorem is presented in Subsection 5.1.
In Section 6 we put the solution of the infinite dimensional part of the Lyapunov-Schmidt system into the finite dimensional part and discuss the behavior of the resulting equation. This is completely analogous to what is known from Hopf bifurcation for ODEs and parabolic PDEs.
In Section 7 we prove Theorem 4 and give an example. Finally, in Section 8 we discuss boundary conditions other than (1.2).

Remarks on related work
The main methods for proving Hopf bifurcation theorems are, roughly speaking, center manifold reduction and Lyapunov-Schmidt reduction. In order to apply them to evolution equations, one needs a smooth center manifold for the corresponding semiflow (for center manifold reduction) or a Fredholm property of the linearized equation on spaces of periodic functions (for Lyapunov-Schmidt reduction). In [4,21] Hopf bifurcation theorems for abstract evolution equations are proved by means of Lyapunov-Schmidt reduction, and in [15,33,42] by means of center manifold reduction. In [4,21] it is assumed that the operator of the linearized equation is sectorial (see [4, Hypothesis (HL)] and [21, Hypothesis I.8.8]); hence this setting is not appropriate for hyperbolic PDEs. In [15,33,42] the assumptions concerning the linearized operator are more general, including non-sectorial operators. However, it is unclear whether our problem (1.1), (1.2) can be written as an abstract evolution equation satisfying those conditions.
In [42] it is shown that 1D semilinear damped wave equations without delay of the type $\partial_t^2 u = \partial_x^2 u - \gamma\partial_t u + f(u)$ with $f(0) = 0$, subjected to homogeneous Dirichlet boundary conditions, can be written as an abstract evolution equation satisfying the general assumptions of [42], and a corresponding Hopf bifurcation theorem is proved. But it turns out that nonlinearities of the type $f(u, \partial_x u)$ cannot be treated this way. In [25] a Hopf bifurcation theorem is stated without proof for second-order quasilinear hyperbolic systems without delay with arbitrary space dimension subjected to homogeneous Dirichlet boundary conditions. In [22] a Hopf bifurcation theorem for general semilinear first-order 1D hyperbolic systems without delay is proved by means of Lyapunov-Schmidt reduction, and applications to semiconductor laser modeling are described. In [30,34,35] the authors considered Hopf bifurcation for scalar linear first-order PDEs without delay of the type $(\partial_t + \partial_x + \mu)u = 0$ on the semi-axis $(0, \infty)$ with a nonlinear integral boundary condition at $x = 0$.
Concerning Hopf bifurcation for hyperbolic PDEs with delay, to the best of our knowledge there exist only the two results [26,27] of N. Kosovalić and B. Pigott. In [26] the authors consider 1D damped and delayed Sine-Gordon-like wave equations of the type (1.12). Because of the symmetry assumption on the nonlinearity $f$, the bifurcating time-periodic solutions can be determined by means of Fourier expansions. In [27] these results are generalized to equations on d-dimensional cubes, but locally unique bifurcating solution families can be described only for fixed prescribed spatial frequency vectors. Our results in the present paper extend those of [26] mainly in two respects: Our equation (1.1) is more general than (1.12) (and does not have any symmetry property, in general), and we allow the presence of the perturbation parameter $\lambda$. The symmetry assumptions of [26] allow one to use Fourier series techniques, while we use integration along characteristics.

Transformation of the second-order equation into a first-order system
In this section we show that any solution $u$ to (1.3) creates a solution $v = (v_1, v_2)$ to the first-order hyperbolic system (2.1) and vice versa. Here the nonlinear operator $B$ (cf. (2.16)) is defined in terms of partial integral operators $J_\lambda$ and of "pointwise" operators $K$ and $K_\lambda$. Lemma 7 For all $\omega, \tau, \lambda \in \mathbb{R}$ and $k = 2, 3, \dots$ the following is true: If $u$ is a $C^k$-smooth solution to (1.3), then $v$, defined by (2.5), is a solution to (2.1).
Conversely, if $v$ is a solution to (2.1) such that $\partial_t v$ exists and is continuous, then the corresponding function $u$ is $C^k$-smooth and a solution to (1.3).
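To indicate why a transformation of the type (2.5) diagonalizes the principal part, consider the model case of a constant coefficient $a$ and set, illustratively (the exact form of (2.5) may differ), $v_1 := \omega\partial_t u + a\partial_x u$ and $v_2 := \omega\partial_t u - a\partial_x u$. Then

```latex
(\omega\partial_t - a\partial_x)v_1
  = \omega^2\partial_t^2 u - a^2\partial_x^2 u,
\qquad
(\omega\partial_t + a\partial_x)v_2
  = \omega^2\partial_t^2 u - a^2\partial_x^2 u,
```

so by (1.3) each component $v_j$ satisfies a first-order equation along a single family of characteristics; for $x$-dependent $a(x,\lambda)$ additional lower-order terms appear, which is where the operators $K$ and $K_\lambda$ come from.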
Transformation of the first-order system into a system of partial integral equations

In this section we show (by using the method of integration along characteristics) that any solution to (2.1), i.e. to (2.15), solves the system of partial integral equations (3.1) and vice versa. Here the operators $B_1$ and $B_2$ are from (2.16), and the functions $c_1$, $c_2$ and $A$ are defined by (cf. (2.12) and (2.14)).
Lemma 8 For all $\omega, \tau, \lambda \in \mathbb{R}$ the following is true: If $v \in C_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2)$ is a solution to (3.1) and if $\partial_t v$ exists and is continuous, then $v$ belongs to $C^1_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2)$ and solves (2.1).
Similarly one shows the corresponding integral identity for the second component.
Further, from (3.1) and from the assumption that $\partial_t v$ exists and is continuous, it follows that $\partial_x v$ also exists and is continuous, i.e. $v \in C^1_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2)$. Now, let us verify the differential equations in (2.1), i.e. in (2.15). From (3.1) it follows, by differentiation along the characteristics, that the first equation of (2.15) is satisfied.
Similarly, the second equation of (2.15) is shown.

Lyapunov-Schmidt procedure
In this section we perform a Lyapunov-Schmidt procedure in order to reduce locally, for $v \approx 0$, $\omega \approx 1$, $\tau \approx \tau_0$ and $\lambda \approx 0$, the problem (3.1) with infinite-dimensional state parameter $(v, \omega)$ to a problem with a two-dimensional state parameter. For the sake of simplicity, we will write the problem (3.1) in a more abstract way. To that end, for $\omega, \lambda \in \mathbb{R}$ let us introduce operators $C(\omega, \lambda)$, $D(\omega, \lambda)$, $J(\omega, \tau, \lambda)$ and $K(\lambda)$. Using this notation, the system (3.1) reads $$ v = C(\omega,\lambda)v + D(\omega,\lambda)B(v,\omega,\tau,\lambda), \quad (4.3) $$ where the nonlinear operator $B$ is introduced in (2.16).
Remark 9 Also the first-order hyperbolic system (2.15) can be written in an abstract way; note that in the proof of Lemma 8 we showed the corresponding equivalences for all $\omega, \lambda \in \mathbb{R}$. It is easy to see that the operators $C(\omega, \lambda)$, $D(\omega, \lambda)$, $J(\omega, \tau, \lambda)$ and $K(\lambda)$ (cf. (2.18), (2.19)) are bounded with respect to $\omega$ and $\tau$ and locally bounded with respect to $\lambda$. But, unfortunately, the operators $C(\omega, \lambda)$ and $D(\omega, \lambda)$ do not depend continuously (in the sense of the uniform operator norm in $L(C_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2))$) on $\omega$ and $\lambda$, in general, and $J(\omega, \tau, \lambda)$ does not depend continuously on $\omega$ and $\tau$, in general. This is the main technical difficulty which we have to overcome in order to analyze the bifurcation problem (4.3). Note that this difficulty appears also in the case where $\tau$ is fixed to be zero (and $\lambda$ is used as the bifurcation parameter), i.e. in the case of Hopf bifurcation for semilinear wave equations without delay.
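The lack of norm continuity is already visible for the plain shift $(S(\omega)w)(t) = w(t - \omega)$ on continuous $2\pi$-periodic functions, which models the way $C(\omega,\lambda)$ depends on $\omega$: feeding in ever narrower spikes keeps $\|S(\omega) - S(0)\|$ away from $0$ as $\omega \to 0$. A minimal sketch (illustrative; these are not the operators of the paper):

```python
import math

def spike(delta):
    """2*pi-periodic tent function of height 1, supported on |t| <= delta (mod 2*pi)."""
    def w(t):
        t = (t + math.pi) % (2 * math.pi) - math.pi  # representative in [-pi, pi)
        return max(0.0, 1.0 - abs(t) / delta)
    return w

def shift_defect(omega):
    """sup_t |w(t - omega) - w(t)| for the spike with delta = omega: a lower
    bound for ||S(omega) - S(0)|| in operator norm, since ||w||_inf = 1."""
    w = spike(omega)
    # At t = omega the shifted spike attains its peak, w(omega - omega) = w(0) = 1,
    # while w(omega) = 0; hence the defect is 1, no matter how small omega is.
    return abs(w(omega - omega) - w(omega))

for omega in (1.0, 0.1, 1e-3, 1e-6):
    print(omega, shift_defect(omega))  # defect is always 1.0
```

Since the lower bound does not decay as $\omega \to 0$, the map $\omega \mapsto S(\omega)$ is discontinuous at $\omega = 0$ in the uniform operator norm, although $S(\omega)w \to w$ uniformly for each fixed continuous $w$ (strong continuity).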

Fredholmness of the linearization
We intend to show that the linearization of (4.3) at $v = 0$, i.e., the operator $I - C(\omega, \lambda) - D(\omega, \lambda)\partial_v B(0, \omega, \tau, \lambda)$, is Fredholm of index zero. Lemma 10 Let the condition (1.7) be fulfilled. Then there exists $\delta > 0$ such that for all $\omega, \tau, \lambda \in \mathbb{R}$ with $\omega \ne 0$ and $|\lambda| < \delta$ the operator $I - C(\omega, \lambda) - D(\omega, \lambda)\partial_v B(0, \omega, \tau, \lambda)$ is Fredholm of index zero from $C_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2)$ to itself. The main complication in the proof is caused by the fact that the operators $C(\omega, \lambda) + D(\omega, \lambda)\partial_v B(0, \omega, \tau, \lambda)$ are not completely continuous from the space $C_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2)$ into itself, in general. The proof will be divided into a number of claims.
Claim 1 states that $D(\omega, \lambda)J(\omega, \tau, \lambda)$ maps $C_{2\pi}$ into $C^1_{2\pi}$, with bounds which, for any $c > 0$, are uniform in the corresponding parameter ranges. Proof of Claim. The idea of the proof is to show that the composition of the two partial integral operators $D(\omega, \lambda)$ and $J(\omega, \tau, \lambda)$ is an integral operator mapping $C_{2\pi}$ into $C^1_{2\pi}$. Here we changed the integration variable $\xi$ to a new integration variable $\zeta$. Note that for $\omega \ne 0$ the inverse transformation $\xi = \xi_{\eta,t,\omega,\lambda}(\zeta)$ exists and depends smoothly on $\eta$, $t$, $\omega$, $\lambda$ and $\zeta$.
Obviously, the partial derivatives of (4.10) with respect to $t$ and $x$ exist, and their absolute values can be estimated from above by a constant times $\|v\|_\infty$. Moreover, as long as $\omega$ and $\lambda$ vary in the ranges $1/c \le \omega \le c$ and $|\lambda| \le c$, the constant may be chosen to be independent of $\omega$, $\tau$ and $\lambda$ (and to depend on $c$ only). The same can be shown for the remaining terms. Claim 1 is therefore proved for the first component $D_1(\omega, \lambda)J(\omega, \tau, \lambda)$. The same argument applies to the second component $D_2(\omega, \lambda)J(\omega, \tau, \lambda)$.
The analogous mapping property, again with uniform bounds for any $c > 0$, is the content of Claim 2. Proof of Claim. The proof is similar to the proof of Claim 1. We change the integration variable $\xi$ to a new variable $\zeta$ and denote by $\xi = \xi_{\eta,t,\omega,\lambda}(\zeta)$ the inverse transformation; then we proceed as in the proof of Claim 1.

Remark 11
In the proof of Claim 2 we used that the diagonal part of the operator $K(\lambda)$ vanishes. Indeed, if in place of (2.19) the diagonal part of $K(\lambda)$ did not vanish, then the composition would contain terms which are not differentiable with respect to $t$, in general, if $v_1$ is not differentiable with respect to $t$.
Claim 3 is proved similarly, again with uniform bounds for any $c > 0$. Proof of Claim. We change the integration variable $\xi$ to a new variable $\zeta$, where $\xi = \xi_{t,\omega,\lambda}(\zeta)$ is the inverse transformation. Again, we can proceed as in the proof of Claim 1.
We have to show that for all real numbers $\omega \ne 0$ and $\lambda \approx 0$ there exists a unique function $v \in C_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2)$ satisfying the equation (4.14) and that $\|v\|_\infty \le \mathrm{const}\,\|f\|_\infty$, where the constant does not depend on $\omega$, $\lambda$ and $f$. Equation (4.14) is satisfied if and only if certain integral identities hold for all $t \in \mathbb{R}$ and $x \in [0,1]$, with $\widetilde C(\omega, \lambda) \in L(C_{2\pi}(\mathbb{R}))$ defined accordingly. Hence, for all $\lambda \in [-\delta, \delta]$ the operator $I - \widetilde C(\omega, \lambda)$ is an isomorphism from $C_{2\pi}(\mathbb{R})$ to itself. Therefore, for all $\omega, \lambda \in \mathbb{R}$ with $|\lambda| \le \delta$ there exists exactly one solution $v_2(\cdot, 0) \in C_{2\pi}(\mathbb{R})$ to (4.18), with $\|v_2(\cdot,0)\|_\infty \le \mathrm{const}\,\|f\|_\infty$, where the constants do not depend on $\omega$, $\lambda$ and $f$. Inserting this solution into the right-hand side of (4.17) we get $v_2 \in C_{2\pi}(\mathbb{R} \times [0,1])$, and inserting this into the right-hand side of (4.15) we finally get an equation which is again of the type (4.19), but now with $\|\widetilde C(1, 0)\|_{L(C_{2\pi}(\mathbb{R}))} \le 1/c_0$. Hence, there exists $\delta > 0$ such that we can proceed as in the case $c_0 < 1$.
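The invertibility argument for $I - \widetilde C(\omega,\lambda)$ in the contractive case is the usual Neumann-series argument, with the bound $\|(I - \widetilde C)^{-1}\| \le 1/(1 - c_0)$. A finite-dimensional sketch of the same reasoning (matrix and data made up for illustration):

```python
def matvec(C, v):
    """Matrix-vector product for a small dense matrix given as nested lists."""
    return [sum(C[i][j] * v[j] for j in range(len(v))) for i in range(len(C))]

def inf_norm(v):
    return max(abs(x) for x in v)

# A 2x2 matrix with infinity norm c0 = 0.5 < 1, and some right-hand side f.
C = [[0.3, 0.2],
     [0.1, -0.4]]
f = [1.0, -2.0]

# Solve (I - C) v = f by the fixed-point iteration v <- C v + f,
# i.e. by summing the Neumann series v = f + C f + C^2 f + ...
v = [0.0, 0.0]
for _ in range(200):
    v = [cv + fi for cv, fi in zip(matvec(C, v), f)]

# v now solves (I - C) v = f up to rounding, and obeys ||v|| <= ||f|| / (1 - c0).
residual = inf_norm([vi - ri for vi, ri in
                     zip(v, [cv + fi for cv, fi in zip(matvec(C, v), f)])])
print(residual <= 1e-12, inf_norm(v) <= inf_norm(f) / (1 - 0.5))  # True True
```

Each iteration contracts the error by the factor $c_0$, which is exactly why the solution bound $\|v\|_\infty \le \|f\|_\infty/(1 - c_0)$ is independent of the particular data.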

We will use the following well-known result:
Theorem 13 Let U be a Banach space and K ∈ L(U ) be an operator such that K 2 is completely continuous. Then the operator I − K is Fredholm of index zero.
On account of Theorem 13, it remains to prove that the square of the operator $C(\omega, \lambda) + D(\omega, \lambda)\partial_v B(0, \omega, \tau, \lambda)$ is completely continuous. This now follows from Claims 2 and 3.

Remark 14
For proving Lemma 10 we did not need the estimates (4.9), (4.11)-(4.13) and (4.22). These estimates will be used in the proof of Lemma 20 below (more exactly, in the proof of Claim 3 there).

From now on we will use assumptions (A1)-(A3) and (1.7) of Theorem 2. In particular, we will fix a solution $u = u_0 \ne 0$ to (1.4) with $\tau = \tau_0$ and $\mu = i$ and a solution $u = u^* \ne 0$ to (1.6) fulfilling assumption (A3) (or, more precisely, (4.41) below). We will describe the kernel and the image of the operator $L$ by means of the eigenfunctions $u_0$ and $u^*$. To this end, we introduce two functions $v_0, v^*$:

Kernel and image of the linearization
On the other hand, if $u$ is a solution to (4.29), then for all $k \in \mathbb{Z}$ the corresponding functions solve the associated eigenvalue problems. In what follows we denote by "·" the Hermitian scalar product in $\mathbb{C}^2$, i.e. $v \cdot w := v_1\overline{w_1} + v_2\overline{w_2}$ for $v, w \in \mathbb{C}^2$, and correspondingly a scalar product $\langle v, w \rangle$ for continuous functions $v, w : \mathbb{R} \times [0,1] \to \mathbb{C}^2$. Moreover, we will work with the operator $A \in L(C^1_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2); C_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2))$, i.e. $A = A(1, 0)$ (cf. (4.5)), and its formal adjoint $A^*$. It is easy to verify that $\langle Av, w \rangle = \langle v, A^* w \rangle$ for all $v, w \in C^1_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2)$ which satisfy the boundary conditions in (2.1).

respectively. It follows that
Hence, in order to prove (4.30) it suffices to show (4.33). Taking into account the definitions of the operators $A^*$, $J^*$ and $K^*$ and of the function $v^*$ (cf. (4.27)), it is easy to see that (4.33) is satisfied if and only if, for any $x \in [0,1]$, two corresponding identities hold, where $v^*_1 = u^* + iU^*$ and $v^*_2 = u^* - iU^*$ are the components of the vector function $v^*$. Considering the sum and the difference of these two equations, and using (4.28) together with (2.14), the proof of (4.33) and, hence, of (4.30) is complete. It remains to prove (4.31). To this end, we introduce functions $w_0$, $w_1$ and $w_2$. Note that the equations (4.39) define the functions $w_1, w_2 \in C^1_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2)$ uniquely, as follows from Claim 4 in Section 4.1 (see also Remark 12). Combining (4.6) with (4.39), we obtain $Aw_1 = \operatorname{Re} w_0$, $Aw_2 = -\operatorname{Im} w_0$. Therefore, by (4.27) and (4.38), the right-hand side of (4.40) can be computed explicitly. Finally, we use (4.28) and the definition of $\sigma$ in (A3), argue similarly for the remaining term, and normalize the eigenfunctions $u_0$ and $u^*$ so that (4.41) holds, which yields (4.31), as desired.

The external Lyapunov-Schmidt equation
In this section we solve the so-called external Lyapunov-Schmidt equation (4.53) with respect to w ≈ 0 for u ≈ 0, ω ≈ 1, τ ≈ τ 0 and λ ≈ 0. More exactly, in Subsection 5.1 we present a generalized implicit function theorem, which will be used in Subsection 5.2 to solve equation (4.53).

A generalized implicit function theorem
In this subsection we present the generalized implicit function theorem, which is a particular case of [24, Theorem 2.2]. It concerns abstract parameter-dependent equations of the type (5.1). Here $F$ is a map from $W_0 \times P$ to $\widetilde W_0$, where $W_0$ and $\widetilde W_0$ are Banach spaces with norms $\|\cdot\|_0$ and $|\cdot|_0$, respectively, and $P$ is a finite-dimensional normed vector space with norm $\|\cdot\|$. Moreover, the structural conditions (5.2)-(5.6) below are supposed. We are going to state conditions on $F$ such that, similarly to the classical implicit function theorem, for all $p \approx 0$ there exists exactly one solution $w \approx 0$ to (5.1) and the data-to-solution map $p \mapsto w$ is smooth. Similarly to the classical implicit function theorem, we suppose that the partial derivative $\partial_w F$ exists. However, unlike the classical case, we do not suppose that $F(w, \cdot)$ is smooth for all $w \in W_0$. In our applications the map $(w, p) \mapsto \partial_w F(w, p)$ is not even continuous with respect to the uniform operator norm in $L(W_0; \widetilde W_0)$, in general. Hence, the difference between Theorem 17 below and the classical implicit function theorem is not a degeneracy of the partial derivatives $\partial_w F(w, p)$ (as in implicit function theorems of Nash-Moser type), but a degeneracy of the partial derivatives $\partial_p F(w, p)$ (which do not exist for all $w \in W_0$). Thus, we consider parameter-dependent equations which do not depend smoothly on the parameter, but whose solutions do depend smoothly on the parameter. For that, of course, some additional structure is needed, which will be described now.
Theorem 17 [24, Theorem 2.2] Suppose that the conditions (5.2)-(5.6) are fulfilled. Furthermore, assume that there exist $\varepsilon_0 > 0$ and $c > 0$ such that the estimates (5.7) and (5.8) hold for all $p \in P$ with $\|p\| \le \varepsilon_0$. Then there exist $\varepsilon \in (0, \varepsilon_0]$ and $\delta > 0$ such that for all $p \in P$ with $\|p\| \le \varepsilon$ there is a unique solution $w = \hat w(p)$ to (5.1) with $\|w\|_0 \le \delta$. Moreover, for all $k \in \mathbb{N}$ we have $\hat w(p) \in W_k$, and the map $p \in P \mapsto \hat w(p) \in W_k$ is $C^\infty$-smooth.

Remark 18
The maps $\varphi \in \mathbb{R} \mapsto S(\varphi) \in L(W_0)$ and $\varphi \in \mathbb{R} \mapsto S(\varphi) \in L(\widetilde W_0)$ are not continuous, in general. Nevertheless, since $P$ is supposed to be finite-dimensional, the map $\varphi \in \mathbb{R} \mapsto T(\varphi) \in L(P)$ is $C^\infty$-smooth. This is essential in the proof of Theorem 17 in [24].

Remark 19
In Theorem 17 we do not suppose that $\partial_w F(0, p)$ depends continuously on $p$ in the sense of the uniform operator norm in $L(W_0; \widetilde W_0)$. Hence, assumptions (5.7) and (5.8) cannot be replaced by their versions with $p = 0$, in general.
We have that $w = 0$ is a solution to (4.53) with $u = 0$, $\omega = 1$, $\tau = \tau_0$ and $\lambda = 0$. This suggests that Lemma 20 can be obtained from an appropriate implicit function theorem. Unfortunately, the classical implicit function theorem does not work here, because the left-hand side of (4.53) is not differentiable with respect to $\omega$, $\tau$ and $\lambda$ for every $w \in C_{2\pi}$. We will apply Theorem 17 instead.
Let us verify the assumptions of Theorem 17 in the setting (5.9). Note that $W_0$ and $\widetilde W_0$ are Banach spaces with the norm $\|\cdot\|_\infty$. Conditions (5.2), (5.3) and (5.7) are fulfilled, the last one being true due to Lemma 10. It remains to verify conditions (5.4)-(5.6) and (5.8).
The proof goes through two claims.
Claim 1. For all $l, m \in \mathbb{N}$ and $w \in W_{l+m}$ the map $(\omega, \lambda) \mapsto C(\omega, \lambda)w$ is $C^m$-smooth with values in $W_l$, with derivative bounds in which the constant $c_{lm}$ does not depend on $\omega$, $\lambda$ and $w$ for $\omega$ and $\lambda$ varying on bounded intervals.
Proof of Claim. Since $w(\cdot, x)$ is $C^l$-smooth, definition (4.1) implies that $C(\cdot, \cdot)w$ is $C^l$-smooth, and the derivatives can be calculated by the chain rule. It follows that $\|\partial_\omega[C(\omega, \lambda)w]\|_\infty \le \mathrm{const}\,\|w\|_1$, where the constant does not depend on $\omega$ and $\lambda$ (varying in bounded intervals) or on $w \in W_1$. Similarly one can handle $\partial_\lambda C(\omega, \lambda)w$ and higher-order derivatives, and similarly one can show (5.11).
Remark 21 In (5.12) the loss of derivatives property can be seen explicitly: Taking a derivative with respect to $\omega$ leads to a derivative with respect to $t$. The same happens in formulas (5.14), (5.16) and (5.17) below.
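Schematically, the mechanism is this: if a parameter-dependent operator acts as a shift along characteristics, say $(C(\omega)w)(t,x) = c(x)\,w(t - \omega\alpha(x), x)$ with fixed smooth functions $c$ and $\alpha$ (an illustrative model, not the exact formula (4.1)), then

```latex
\partial_\omega\bigl[(C(\omega)w)(t,x)\bigr]
  = -\,c(x)\,\alpha(x)\,(\partial_t w)\bigl(t - \omega\alpha(x),\, x\bigr),
```

so differentiating with respect to the parameter $\omega$ costs one $t$-derivative of $w$: the resulting estimate is in terms of $\|w\|_1$, not of $\|w\|_\infty$.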
Proof of Claim. Differentiation of (4.2) with respect to $\omega$ yields an expression whose terms can be estimated for $v, w \in W_1$. Furthermore, similarly to (2.17), the coefficients $\tilde b_k$ are defined appropriately (similarly to (2.12) and (2.14)), and they and $w(\cdot, x)$ are $C^l$-smooth; the derivatives can be calculated by the product and chain rules. Moreover, for $k = 3, 4, 5, 6$, the functions $\partial_\omega \tilde b_k$ are bounded as long as $\|v\|_1$, $\omega$, $\tau$ and $\lambda$ are bounded. Hence, we have $\|\partial_\omega[\partial_v B_j(v, \omega, \tau, \lambda)w]\|_\infty \le \mathrm{const}\,\|w\|_1$, where the constant does not depend on $\omega$, $\tau$, $\lambda$, $v$ and $w$ as long as $\|v\|_1$, $\|w\|_1$, $\omega$, $\tau$ and $\lambda$ are bounded. Similarly one shows $\|\partial_t[\partial_v B_j(v, \omega, \tau, \lambda)w]\|_\infty \le \mathrm{const}\,\|w\|_1$. Using (5.15) we get the analogous estimate for $\partial_\omega[D(\omega, \lambda)\partial_v B(v, \omega, \tau, \lambda)w]$, with a constant which does not depend on $\omega$, $\tau$, $\lambda$, $v$ and $w$ as long as $\|v\|_1$, $\|w\|_1$, $\omega$, $\tau$ and $\lambda$ are bounded. Similarly one shows the estimates (5.13) for $\partial_\tau[D(\omega, \lambda)\partial_v B(v, \omega, \tau, \lambda)w]$ and $\partial_\lambda[D(\omega, \lambda)\partial_v B(v, \omega, \tau, \lambda)w]$ and for the higher-order derivatives.
Proof of Claim. We will follow ideas which are used to prove coercivity estimates for singularly perturbed linear differential operators (see, e.g., [36, Lemma 1.3] and [39, Section 3]). Suppose the contrary. Then there exist sequences $w_n \in W$, $u_n \in \ker L$ and $(\omega_n, \tau_n, \lambda_n) \in \mathbb{R}^3$ such that $\|w_n\|_\infty = 1$ for all $n \in \mathbb{N}$, and $$ \|(I - P)(I - C(\omega_n, \lambda_n) - D(\omega_n, \lambda_n)(J(\omega_n, \tau_n, \lambda_n) + K(\lambda_n)))w_n\|_\infty \to 0 \text{ as } n \to \infty. \quad (5.20) $$ We have to construct a contradiction. For the sake of simpler writing we will use the following notation: $C_n := C(\omega_n, \lambda_n)$, $D_n := D(\omega_n, \lambda_n)(J(\omega_n, \tau_n, \lambda_n) + K(\lambda_n))$. Note that the operators $E_n$ are well defined due to Claim 4 from Section 4. Moreover, because of (4.13) it follows that $\|(I - E_n)w_n\|_\infty \to 0$ and, on account of (4.8) and (4.13), that $$ \|(I + E_n)(I - E_n)w_n\|_\infty = \|(I - E_n^2)w_n\|_\infty \to 0. \quad (5.22) $$ Let us show that the sequence $E_n^2 w_n$ is bounded in the space $C^1_{2\pi}$. A straightforward calculation, together with (4.8), (4.22), (5.18) and (5.23), shows that, to this end, it suffices to show that the operator sequences $D_n^2$, $D_n C_n$ and $R_n$ are bounded with respect to the uniform operator norm in $L(C_{2\pi}; C^1_{2\pi})$. Let us start with $D_n^2 = D(\omega_n, \lambda_n)(J(\omega_n, \tau_n, \lambda_n) + K(\lambda_n))D(\omega_n, \lambda_n)(J(\omega_n, \tau_n, \lambda_n) + K(\lambda_n))$.
Let us summarize: We showed that the sequence $E_n^2 w_n$ is bounded in $C^1_{2\pi}$. Because of the Arzelà-Ascoli theorem, without loss of generality we may assume that this sequence converges in $C_{2\pi}$ to some function $w_* \in C_{2\pi}$. Then (5.22) implies the convergence $\|w_n - w_*\|_\infty \to 0$ (5.25). In particular, $w_* \in W$.
In order to prove (5.26), we take arbitrary $w, h \in C_{2\pi}$ and calculate the corresponding limits. Similarly one shows, for all $h_1, h_2, h_3 \in C_{2\pi}$, the convergence of the associated terms. Therefore, from (4.49) and (5.18) we get the desired contradiction to (5.20). We have shown that Theorem 17 can be applied to equation (4.53) in the setting (5.9). This implies the following fact.
Inserting this into (7.1) and (7.2), we end up with exactly the formula asserted in Theorem 4 with $\sigma = 1$ (cf. (4.41)). Therefore, if this number is positive, then the Hopf bifurcation is supercritical.

Other boundary conditions
The results of Theorems 2 and 4 can be extended to boundary conditions other than (1.2), for example to two Dirichlet, two Robin (in particular, Neumann), or periodic boundary conditions. However, in those cases the transformation (2.5) is not appropriate anymore. Instead of (2.5), the following transformation can be used: $$ v_1(t,x) = \omega(\partial_t u(t,x) - u(t,x)) + a(x,\lambda)\partial_x u(t,x), \qquad v_2(t,x) = \omega(\partial_t u(t,x) - u(t,x)) - a(x,\lambda)\partial_x u(t,x). $$ More exactly, if $v \in C_{2\pi}(\mathbb{R} \times [0,1]; \mathbb{R}^2)$ satisfies (8.3) and if $\partial_t v$ exists and is continuous, then the function $u$, defined by (8.2), is $C^2$-smooth and satisfies the differential equation in (1.3) and the boundary condition (8.4).
Here $u_0$ and $u^*$ are eigenfunctions to the eigenvalue problems (1.4) (with $\mu = i$ and $\tau = \tau_0$) and (1.6), where in both eigenvalue problems the boundary conditions are changed to (8.4). With these eigenfunctions, the formulas for $\sigma$ and $\rho$ in (A3) and the formula for $\partial^2_\varepsilon\tau(0,0)$ in Theorem 4 remain unchanged.