1 Introduction

1.1 The Problem

This paper concerns the 1D autonomous damped and delayed semilinear wave equation of the general type

$$\begin{aligned} \partial _t^2u(t,x)- a(x,\lambda )^2\partial ^2_xu(t,x)=b(x,\lambda ,u(t,x),u(t-\tau ,x),\partial _tu(t,x), \partial _xu(t,x))\nonumber \\ \end{aligned}$$
(1.1)

with one Dirichlet and one Neumann boundary condition

$$\begin{aligned} u(t,0) = \partial _xu(t,1)=0. \end{aligned}$$
(1.2)

It is supposed that \(b(x,\lambda ,0,0,0,0)=0\) for all x and \(\lambda \), i.e., that \(u=0\) is a stationary solution to (1.1)–(1.2) for all \(\tau \) and \(\lambda \).

The goal is to describe Hopf bifurcation, i.e., existence and local uniqueness (up to time shifts) of families (parametrized by \(\tau \) and \(\lambda \)) of non-stationary time-periodic solutions to (1.1)–(1.2), which bifurcate from the stationary solution \(u=0\).

Our main result, stated in Theorem 2 below, is quite similar to Hopf bifurcation theorems for delayed ODEs (see, e.g., [8, 12, Chapter 5.5], [13, Chapter 11], [38, 42]) and for delayed parabolic PDEs (see, e.g., [4, 7, 9, 33, 45, Chapter 6]). However, the analysis of Hopf bifurcation for hyperbolic PDEs faces considerable complications compared to ODEs or parabolic PDEs (with or without delay). In the present paper we provide an approach for overcoming the following technical difficulties, which appear in dissipative hyperbolic PDEs and do not appear in ODEs or parabolic PDEs:

First, the question whether a nondegenerate time-periodic solution to a dissipative nonlinear wave equation is locally unique (up to time shifts in the autonomous case) and whether it depends smoothly on the system parameters is much more delicate than for ODEs or parabolic PDEs (cf., e.g., [14, 15]). One reason for that is the so-called loss of derivatives for hyperbolic PDEs. To overcome this difficulty, we use a generalized implicit function theorem [25, Theorem 2.2], which is applicable to abstract equations with a loss-of-derivatives property. Remark that for smoothness of the data-to-solution map of hyperbolic PDEs it is necessary, in general, that the equation depends smoothly not only on the data and on the unknown function u, but also on the space variable x (and on the time variable t in the non-autonomous case). This is completely different from what is known for parabolic PDEs (cf. [10]).

Second, the analysis of time-periodic solutions to hyperbolic PDEs usually encounters a complication known as the problem of small divisors [2, 18, 44]. Since Hopf bifurcations can be expected only in the so-called non-resonant case, where small divisors do not appear, we have to impose a condition (assumption (1.7) below) preventing small divisors from coming up. This condition has no counterpart in the case of ODEs or parabolic PDEs.

And third, linear autonomous hyperbolic PDEs with one space dimension differ essentially from those with more than one space dimension: They satisfy the spectral mapping property (see [39] in \(L^p\)-spaces and, more importantly for applications to nonlinear problems, [30] in C-spaces) and they generate Riesz bases (see, e.g., [11, 19]), which is not the case, in general, if the space dimension is larger than one (see the celebrated counter-example of M. Renardy in [41]). Therefore the question of Fredholmness of the corresponding differential operators in appropriate spaces of time-periodic functions is highly difficult.

The main consequence (from the point of view of mathematical techniques) of the fact that the space dimension of (1.1), (1.2) is one consists in the following: We can use integration along characteristics in order to replace (1.1), (1.2) by a nonlinear partial integral equation (see [1] for the notion “partial integral equation”). After that, we can apply known Fredholmness properties to the linearized partial integral equation ([24, 25, Corollary 4.11]) and, hence, we can apply the Lyapunov-Schmidt reduction method to the nonlinear partial integral equation.

Summarizing, the technical difficulties mentioned above are unusual from the point of view of ODEs and parabolic PDEs, although the goal is to get results which are quite usual for ODEs and parabolic PDEs. Therefore, the essential part of the present paper is quite technical. However, those technical difficulties appear quite naturally during the execution of a well-known and widely used algorithm of local bifurcation theory, the Lyapunov-Schmidt procedure. Mainly, they appear in the proof of the Fredholmness result (see Lemma 10 in Subsect. 4.1, where we use Nikolskii’s Fredholmness criterion given by Theorem 13) and in the proof of the unique solvability of the external Lyapunov-Schmidt equation (see Lemma 20, where we use our generalized implicit function Theorem 17). Roughly speaking, the technical difficulties appear because the abstract equation (4.3) (which is equivalent to our bifurcation problem (1.1)–(1.2)) does not depend smoothly on the control parameters \(\tau \) and \(\lambda \), nor on one of the state parameters, namely the frequency \(\omega \).

1.2 Main Results

Our goal is to investigate time-periodic solutions to (1.1)–(1.2). In order to work in spaces of functions with fixed time period \(2\pi \), we put the frequency parameter \(\omega \) explicitly into the equation by scaling the time variable t and by introducing a new unknown function u as follows:

$$\begin{aligned} u_{\mathrm{new}}(t,x):=u_{\mathrm{old}}\left( \frac{t}{\omega },x\right) . \end{aligned}$$

The problem (1.1)–(1.2) for the new unknown function u and the unknown frequency \(\omega \) reads

$$\begin{aligned} \left. \begin{array}{l} \omega ^2\partial _t^2u(t,x)- a(x,\lambda )^2\partial _x^2u(t,x)=b(x,\lambda ,u(t,x),u(t-\omega \tau ,x),\omega \partial _tu(t,x), \partial _xu(t,x)),\\ u(t,0) = \partial _xu(t,1)=0,\\ u(t+2\pi ,x)=u(t,x). \end{array} \right\} \nonumber \\ \end{aligned}$$
(1.3)
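The factor \(\omega ^2\) and the shifted delay argument \(t-\omega \tau \) in (1.3) come from the chain rule: writing \(u_{\mathrm{old}}(t,x)=u_{\mathrm{new}}(\omega t,x)\), one gets

$$\begin{aligned} \partial _tu_{\mathrm{old}}(t,x)=\omega \partial _tu_{\mathrm{new}}(\omega t,x),\quad \partial _t^2u_{\mathrm{old}}(t,x)=\omega ^2\partial _t^2u_{\mathrm{new}}(\omega t,x),\quad u_{\mathrm{old}}(t-\tau ,x)=u_{\mathrm{new}}(\omega t-\omega \tau ,x), \end{aligned}$$

so that inserting this into (1.1) and requiring \(u_{\mathrm{new}}\) to be \(2\pi \)-periodic in t yields (1.3).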

Throughout this paper we suppose (without mentioning it further) that

$$\begin{aligned} \begin{array}{l} a:[0,1]\times {\mathbb {R}} \rightarrow {\mathbb {R}} \text{ and } b:[0,1]\times {\mathbb {R}}^5 \rightarrow {\mathbb {R}} \text{ are } C^\infty -\hbox {smooth},\\ a(x,\lambda )>0 \text{ and } b(x,\lambda ,0,0,0,0)=0 \text{ for } \text{ all } x \in [0,1] \text{ and } \lambda \in {\mathbb {R}}. \end{array} \end{aligned}$$

Assumptions \(\mathbf{(A1)}\)–\(\mathbf{(A3)}\) below are standard for Hopf bifurcation. To formulate them, we consider the following eigenvalue problem for the linearization of (1.3) at \(u=0\), \(\omega =1\) and \(\lambda =0\):

$$\begin{aligned} \left. \begin{array}{l} \left( \mu ^2-b^0_5(x)\mu -b^0_4(x)e^{-\mu \tau }-b^0_3(x) \right) u(x)=a_0(x)^2u''(x)+b^0_6(x)u'(x),\\ u(0) = u'(1)=0. \end{array} \right\} \end{aligned}$$
(1.4)

Here \(\mu \in {\mathbb {C}}\) and \(u:[0,1] \rightarrow {\mathbb {C}}\) are eigenvalue and eigenfunction, respectively. The coefficients \(a_0\) and \(b^0_j\) in (1.4) are defined by

$$\begin{aligned} a_0(x):=a(x,0),\; b^0_j(x):=\partial _jb(x,0,0,0,0,0) \text{ for } j=3,4,5,6, \end{aligned}$$
(1.5)

where \(\partial _jb\) is the partial derivative of the function b with respect to its jth variable.

Our first assumption states that for a certain delay \(\tau =\tau _0\) there exists a pair of purely imaginary geometrically simple eigenvalues of (1.4) (without loss of generality we may assume that the pair is \(\mu =\pm i\)):

(A1):

There exists \(\tau _0\in {\mathbb {R}}\) such that for \(\mu =i\) and \(\tau =\tau _0\) there exists exactly one (up to linear dependence) solution \(u\ne 0\) to (1.4).

The second assumption is the so-called nonresonance condition:

(A2):

If \(u\not =0\) is a solution to (1.4) with \(\mu =ik, k \in {\mathbb {Z}}\) and \(\tau =\tau _0\), then \(k=\pm 1\).
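Conditions (A1) and (A2) can be explored numerically by shooting: for given \(\mu \) and \(\tau \), integrate (1.4) as an initial value problem in x with \(u(0)=0\), \(u'(0)=1\); then \(\mu \) is an eigenvalue if and only if \(u'(1)=0\). The following sketch illustrates this; the coefficient functions passed as arguments are placeholders (not data from the paper), and in the undamped constant-coefficient case \(a_0\equiv 1\), \(b^0_j\equiv 0\) one recovers the known eigenvalues \(\mu =i(\pi /2+k\pi )\), \(k \in {\mathbb {Z}}\).

```python
import numpy as np

def char_fn(mu, tau, a0=lambda x: 1.0,
            b3=lambda x: 0.0, b4=lambda x: 0.0,
            b5=lambda x: 0.0, b6=lambda x: 0.0, n=400):
    """Shooting evaluation of (1.4): integrate with u(0)=0, u'(0)=1
    by classical RK4 and return u'(1); mu is an eigenvalue of (1.4)
    iff the returned value vanishes."""
    def rhs(x, y):
        u, up = y
        c = mu**2 - b5(x)*mu - b4(x)*np.exp(-mu*tau) - b3(x)
        return np.array([up, (c*u - b6(x)*up) / a0(x)**2])
    y = np.array([0.0, 1.0], dtype=complex)
    h, x = 1.0/n, 0.0
    for _ in range(n):  # classical 4th-order Runge-Kutta step
        k1 = rhs(x, y)
        k2 = rhs(x + h/2, y + h/2*k1)
        k3 = rhs(x + h/2, y + h/2*k2)
        k4 = rhs(x + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y[1]  # u'(1)
```

Roots of this characteristic function in \(\mu \) can then be located, e.g., by Newton's method in the complex plane.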

The third assumption is the so-called transversality condition with respect to changes of the parameter \(\tau \). It states that for all \(\tau \approx \tau _0\) there exists exactly one eigenvalue \(\mu ={\hat{\mu }}(\tau )\approx i\) of (1.4) and that this eigenvalue crosses the imaginary axis transversally if \(\tau \) crosses \(\tau _0\). In order to formulate this more explicitly, we consider the adjoint problem to (1.4) with \(\mu =i\) and \(\tau =\tau _0\):

$$\begin{aligned} \left. \begin{array}{l} \left( -1+ib^0_5(x)-b^0_4(x)e^{i\tau _0}-b^0_3(x)\right) u(x) =(a_0(x)^2u(x))''-(b^0_6(x)u(x))',\\ u(0) = a_0(1)^2u'(1)+(2a_0(1)a_0'(1)-b_6^0(1))u(1)=0. \end{array} \right\} \end{aligned}$$
(1.6)

Because of assumption (A1) there exists exactly one (up to linear dependence) solution \(u\ne 0\) to (1.6). The transversality condition is the following:

(A3) For any solution \(u=u_0\ne 0\) to (1.4) with \(\tau =\tau _0\) and \(\mu =i\) and for any solution \(u=u_*\ne 0\) to (1.6) it holds

$$\begin{aligned} \sigma :=\int _0^1\left( 2i-b^0_5+\tau _0e^{-i\tau _0}b^0_4\right) u_0 \overline{u_*}dx \ne 0,\quad \rho :=\text{ Im }\left( \frac{e^{-i\tau _0}}{\sigma }\displaystyle \int _0^1b^0_4u_0\overline{u_*}dx\right) \ne 0. \end{aligned}$$

Remark that \(\text{ Re }\,{\hat{\mu }}'(\tau _0)=\rho \), and this real number does not depend on the choice of the eigenfunctions \(u_0\) and \(u_*\). The complex number \(\sigma \) depends on the choice of the eigenfunctions \(u_0\) and \(u_*\), but whether the condition \(\sigma \not = 0\) is satisfied does not depend on this choice.

Definition 1

  1. (i)

    We denote by \(C_{2\pi }({\mathbb {R}}\times [0,1])\) the space of all continuous functions \(u:{\mathbb {R}} \times [0,1] \rightarrow {\mathbb {R}}\) such that \(u(t+2\pi ,x)=u(t,x)\) for all \(t \in {\mathbb {R}}\) and \(x \in [0,1]\), with the norm

    $$\begin{aligned} \Vert u\Vert _\infty :=\max \{|u(t,x)|: t \in {\mathbb {R}}, \; x \in [0,1]\}. \end{aligned}$$
  2. (ii)

    For \(k \in {\mathbb {N}}\) we denote by \(C^k_{2\pi }({\mathbb {R}}\times [0,1])\) the space of all \(C^k\)-smooth \(u\in C_{2\pi }({\mathbb {R}}\times [0,1])\), with the norm \( \max \{\Vert \partial ^i_t\partial ^j_xu\Vert _\infty : 0 \le i+j\le k\}. \)

Now we are prepared to formulate our Hopf bifurcation theorem.

Theorem 2

Suppose that conditions \(\mathbf{(A1)}\)–\(\mathbf{(A3)}\) are fulfilled as well as

$$\begin{aligned} \int _0^1\frac{b^0_5(x)}{a_0(x)}dx \ne 0. \end{aligned}$$
(1.7)

Let \(u=u_0 \ne 0\) be a solution to (1.4) with \(\tau =\tau _0\) and \(\mu =i\), and let \(u=u_* \ne 0\) be a solution to (1.6). Then there exist \(\varepsilon _0>0\) and a \(C^\infty \)-map

$$\begin{aligned} ({\hat{u}},{{\hat{\omega }}},{{\hat{\tau }}}) : [0,\varepsilon _0]\times [-\varepsilon _0,\varepsilon _0]\rightarrow C^2_{2\pi }({\mathbb {R}} \times [0,1])\times {\mathbb {R}}^2 \end{aligned}$$

such that the following is true:

  1. (i)

    Existence: For all \((\varepsilon ,\lambda )\in (0,\varepsilon _0]\times [-\varepsilon _0,\varepsilon _0] \) the function \(u=\varepsilon {\hat{u}}(\varepsilon ,\lambda )\) is a non-stationary solution to (1.3) with \(\omega ={{\hat{\omega }}}(\varepsilon ,\lambda )\) and \(\tau ={{\hat{\tau }}}(\varepsilon ,\lambda )\).

  2. (ii)

    Asymptotic expansion: It holds

    $$\begin{aligned} {[}{\hat{u}}(0,0)](t,x)=\mathop {\mathrm {Re}}\nolimits u_0(x)\cos t - \mathop {\mathrm {Im}}\nolimits u_0(x)\sin t \text{ for } \text{ all } t \in {\mathbb {R}} \text{ and } x \in [0,1], \end{aligned}$$
    (1.8)

    \({{\hat{\omega }}}(0,0)=1\), \({{\hat{\tau }}}(0,0)=\tau _0\) and

    $$\begin{aligned} \partial _\varepsilon {{\hat{\omega }}}(0,\lambda )=\partial _\varepsilon {{\hat{\tau }}}(0,\lambda )=0 \text{ for } \text{ all } \lambda \in [-\varepsilon _0,\varepsilon _0]. \end{aligned}$$
    (1.9)
  3. (iii)

    Local uniqueness: There exists \(\delta >0\) such that for all solutions \((u,\omega ,\tau ,\lambda )\) to (1.3) with \(u \ne 0\) and \(\Vert u\Vert _\infty + |\omega -1| + |\tau -\tau _0|+|\lambda |<\delta \) there exist \(\varepsilon \in (0,\varepsilon _0]\) and \(\varphi \in {\mathbb {R}}\) such that \(\omega ={{\hat{\omega }}}(\varepsilon ,\lambda )\), \(\tau ={{\hat{\tau }}}(\varepsilon ,\lambda )\) and \(u(t,x)=\varepsilon [{\hat{u}}(\varepsilon ,\lambda )](t+\varphi ,x)\) for all \(t\in {\mathbb {R}}\) and \(x\in [0,1]\).

  4. (iv)

    Regularity: For all \(\varepsilon \in [0,\varepsilon _0]\), \(\lambda \in [-\varepsilon _0,\varepsilon _0]\) and \(k \in {\mathbb {N}}\) it holds \({\hat{u}}(\varepsilon ,\lambda ) \in C^k_{2\pi }({\mathbb {R}} \times [0,1])\).

  5. (v)

    Smooth dependence: The map \((\varepsilon ,\lambda ) \in [0,\varepsilon _0]\times [-\varepsilon _0,\varepsilon _0] \mapsto {\hat{u}}(\varepsilon ,\lambda ) \in C^k_{2\pi }({\mathbb {R}} \times [0,1])\) is \(C^\infty \)-smooth for any \(k \in {\mathbb {N}}\).

Remark 3

The parametrizations \(u=\varepsilon \hat{u}(\varepsilon ,\lambda )\), \(\omega ={\hat{\omega }}(\varepsilon ,\lambda )\) and \(\tau ={\hat{\tau }}(\varepsilon ,\lambda )\) depend on the choice of the eigenfunctions \(u_0\) and \(u_*\), in general, while the sign of \(\partial _\varepsilon ^2{\hat{\tau }}(0,0)\), which determines the bifurcation direction, does not.

In descriptions of Hopf bifurcation phenomena one of the main questions is that of the so-called bifurcation direction, i.e. the question whether the bifurcating time-periodic solutions exist for bifurcation parameters (close to the bifurcation point) for which the stationary solution is unstable; in this case the Hopf bifurcation is called supercritical. For ODEs and parabolic PDEs (with or without delay) it is known that, under reasonable additional assumptions, in the supercritical case the bifurcating time-periodic solutions are orbitally stable. For hyperbolic PDEs this relationship between bifurcation direction and stability is believed to be true as well, but rigorous proofs are not available up to now. More precisely, it is expected that the bifurcating non-stationary time-periodic solutions, which are described by Theorem 2, are orbitally stable if \(\mathop {\mathrm {Re}}\nolimits \mu <0\) for all eigenvalues \(\mu \not =\pm i\) of (1.4) with \(\tau =\tau _0\) and if

$$\begin{aligned} \rho \partial _\varepsilon ^2{\hat{\tau }}(0,0)>0. \end{aligned}$$

In any case, in Theorem 4 below we present a formula which shows how to calculate the number \(\partial _\varepsilon ^2{\hat{\tau }}(0,0)\) by means of the eigenfunctions \(u_0\) and \(u_*\) and of the first three derivatives of the nonlinearity \(b(x,0,\cdot ,\cdot ,\cdot ,\cdot )\). It is known that such formulae may be quite complicated and not explicit (see, e.g., [17, Section 3.3], [21, 22, Theorem I.12.2], [23, Theorem 1.2(ii)], [29]). Therefore, in order to keep the technicalities simple, in Theorem 4 below we consider only nonlinearities of the type

$$\begin{aligned} b(x,\lambda ,u_1,u_2,u_3,u_4)=\sum _{j=1}^4\beta _j(x,\lambda ,u_j) \end{aligned}$$
(1.10)

with \(C^\infty \)-functions \(\beta _j:[0,1]\times {\mathbb {R}}^2 \rightarrow {\mathbb {R}}\) such that

$$\begin{aligned} \beta _j(x,\lambda ,0)= \partial ^2_3\beta _j(x,0,0)=0 \text{ for } \text{ all } j=1,2,3,4,\; x \in [0,1] \text{ and } \lambda \in {\mathbb {R}}. \end{aligned}$$
(1.11)

Set

$$\begin{aligned} \beta _j^0(x):=\partial _3^3\beta _j(x,0,0) \text{ for } j=1,2,3,4. \end{aligned}$$

Our result about the bifurcation direction reads as follows:

Theorem 4

Let the assumptions of Theorem 2 and the conditions (1.10) and (1.11) be fulfilled. Then

$$\begin{aligned} \partial _\varepsilon ^2{\hat{\tau }}(0,0)=\frac{3}{8\rho } {\mathrm{Re}}\left( \frac{1}{\sigma }\int _0^1\left( (\beta ^0_1+\beta ^0_2e^{-i\tau _0} +i\beta ^0_3)|u_0|^2u_0+\beta ^0_4 |u'_0|^2u'_0\right) \overline{u_*}dx\right) . \end{aligned}$$
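Once the eigenfunctions are known on a grid, the numbers \(\sigma \) and \(\rho \) from (A3) and the Theorem 4 formula can be evaluated by quadrature. The sketch below is a minimal illustration; all sample arrays (u0, ustar, the coefficients) are hypothetical placeholders, not taken from any example in the paper. Consistently with Remark 3, rescaling the eigenfunctions changes the value only by a positive factor, so the sign (i.e. the bifurcation direction) is well defined.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (works for complex-valued arrays)."""
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

def tau_eps2(x, u0, u0p, ustar, b40, b50, tau0, beta0):
    """Evaluate sigma, rho from (A3) and the Theorem 4 formula for
    the second derivative of tau-hat at (0,0).
    u0, u0p, ustar: eigenfunction u0, its derivative, adjoint eigenfunction;
    b40, b50: coefficient arrays b_4^0, b_5^0; beta0: tuple of beta_j^0 arrays."""
    w = np.conj(ustar)
    sigma = trapz((2j - b50 + tau0*np.exp(-1j*tau0)*b40) * u0 * w, x)
    rho = np.imag(np.exp(-1j*tau0) / sigma * trapz(b40 * u0 * w, x))
    b1, b2, b3, b4 = beta0
    integ = ((b1 + b2*np.exp(-1j*tau0) + 1j*b3) * np.abs(u0)**2 * u0
             + b4 * np.abs(u0p)**2 * u0p) * w
    return 3.0 / (8.0 * rho) * np.real(trapz(integ, x) / sigma)
```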

Remark 5

We do not know if generalizations of Theorems 2 and 4 to higher space dimensions and/or to quasilinear equations exist and what they should look like.

Our paper is organized as follows:

In Subsect. 1.3 we comment on some publications related to ours.

In Sect. 2 we show that any solution to (1.3) yields a solution to a semilinear first-order \(2 \times 2\) hyperbolic system, namely (2.1), and vice versa. In Sect. 3 we show (by using the method of integration along characteristics) that any solution to the first-order hyperbolic system (2.1) solves a system of partial integral equations, namely (3.1), and vice versa. Remark that in Sects. 2 and 3 we perform pure transformations, i.e., problem (1.3) is equivalent to problem (3.1). In particular, the technical difficulties of (1.3), like small divisors and loss of smoothness, are hidden in (3.1) as well. But it turns out that in (3.1) they can be handled more easily than in (1.3).

In Sects. 4 and 5 we perform a Lyapunov-Schmidt procedure in order to locally reduce the system (3.1) with infinite-dimensional state parameter to a problem with two-dimensional state parameter. Here the main technical results are Lemma 10 about Fredholmness of the linearization of (3.1) and Lemma 20 about local unique solvability and smooth dependence of the infinite-dimensional part of the Lyapunov-Schmidt system. The proofs of those lemmas are much more complicated than the corresponding proofs for ODEs or parabolic PDEs (with or without delay).

In particular, in the proof of Lemma 10 (more exactly in the proof of Claim 4 there) we use assumption (1.7), and it turns out that the conclusions of Lemma 10 (and of Theorem 2 as well) are not true, in general, if (1.7) is not true.

In the proof of Lemma 20 we use a generalized implicit function theorem, which is a particular case of [25, Theorem 2.2] and concerns abstract parameter-dependent equations with a loss of smoothness property. This generalized implicit function theorem is presented in Subsect. 5.1.

In Sect. 6 we put the solution of the infinite dimensional part of the Lyapunov-Schmidt system into the finite dimensional part and discuss the behavior of the resulting equation. This is completely analogous to what is known from Hopf bifurcation for ODEs and parabolic PDEs.

In Sect. 7 we prove Theorem 4, and in Sect. 8 we give an example.

Finally, in Sect. 9 we discuss boundary conditions other than (1.2).

1.3 Remarks on Related Work

The main methods for proving Hopf bifurcation theorems are, roughly speaking, center manifold reduction and Lyapunov-Schmidt reduction. In order to apply them to evolution equations, one needs to have a smooth center manifold for the corresponding semiflow (for center manifold reduction) or a Fredholm property of the linearized equation on spaces of periodic functions (for Lyapunov-Schmidt reduction).

In [5, 22] Hopf bifurcation theorems for abstract evolution equations are proved by means of Lyapunov-Schmidt reduction, and in [16, 34, 43] by means of center manifold reduction. In [5, 22] it is assumed that the operator of the linearized equation is sectorial (see [5, Hypothesis (HL)] and [22, Hypothesis I.8.8]), hence this setting is not appropriate for hyperbolic PDEs. In [16, 34, 43] the assumptions concerning the linearized operator are more general, including non-sectorial operators. However, it is unclear if our problem (1.1), (1.2) can be written as an abstract evolution equation satisfying those conditions.

In [43] it is shown that 1D semilinear damped wave equations without delay of the type \(\partial _t^2u=\partial _x^2u-\gamma \partial _tu+f(u)\) with \(f(0)=0\), subjected to homogeneous Dirichlet boundary conditions, can be written as an abstract evolution equation satisfying the general assumptions of [43], and a corresponding Hopf bifurcation theorem is proved. But it turns out that nonlinearities of the type \(f(u,\partial _xu)\) cannot be treated this way. In [26] a Hopf bifurcation theorem is stated without proof for second-order quasilinear hyperbolic systems without delay with arbitrary space dimension subjected to homogeneous Dirichlet boundary conditions. In [23] a Hopf bifurcation theorem for general semilinear first-order 1D hyperbolic systems without delay is proved by means of Lyapunov-Schmidt reduction, and applications to semiconductor laser modeling are described. In [31, 35, 36] the authors considered Hopf bifurcation for scalar linear first-order PDEs without delay of the type \((\partial _t +\partial _x + \mu )u = 0\) on the semi-axis \((0,\infty )\) with a nonlinear integral boundary condition at \(x=0\).

In [3] small periodic forcings of an undamped linear autonomous wave equation are considered. Because of lack of damping, small divisors come up, and Nash-Moser iterations have to be used. However, Lyapunov-Schmidt reduction is applied there as well as in the present paper.

As far as Hopf bifurcation for hyperbolic PDEs with delay is concerned, to the best of our knowledge there exist only the two results [27, 28] of N. Kosovalić and B. Pigott. In [27] the authors consider 1D damped and delayed Sine-Gordon-like wave equations of the type

$$\begin{aligned} \partial _t^2u(t,x)-\partial _x^2u(t,x)+\partial _tu(t,x)+u(t-\tau ,x)=f(x,u(t-\tau ,x)) \end{aligned}$$
(1.12)

with \(f(-x,-u)=-f(x,u)\) and \(f(x,0)=\partial _uf(x,0)=0\). Because of the symmetry assumption on the nonlinearity f the bifurcating time-periodic solutions can be determined by means of Fourier expansions. In [28] these results are generalized to equations on d-dimensional cubes, but locally unique bifurcating solution families can be described for fixed prescribed spatial frequency vectors only.

Our results in the present paper extend those of [27] mainly in two respects: our equation (1.1) is more general than (1.12) (and, in general, does not have any symmetry property), and we allow the presence of the perturbation parameter \(\lambda \). The symmetry assumptions of [27] allow one to use Fourier series techniques, while we use integration along characteristics.

2 Transformation of the Second-order Equation into a First-order System

In this section we show that any solution u to (1.3) yields a solution \(v=(v_1,v_2)\) to the first-order hyperbolic system

$$\begin{aligned} \left. \begin{array}{l} \omega \partial _tv_1(t,x)-a(x,\lambda )\partial _xv_1(t,x)=[B(v,\omega ,\tau ,\lambda )](t,x),\\ \omega \partial _tv_2(t,x)+a(x,\lambda )\partial _xv_2(t,x)=[B(v,\omega ,\tau ,\lambda )](t,x),\\ v_1(t,0)+v_2(t,0)= v_1(t,1)-v_2(t,1)=0,\\ v(t+2\pi ,x)=v(t,x) \end{array} \right\} \end{aligned}$$
(2.1)

and vice versa. Here the nonlinear operator B is defined as

$$\begin{aligned}{}[B(v,\omega ,\tau ,\lambda )](t,x):= & {} b(x,\lambda ,[J_\lambda v](t,x),[J_\lambda v] (t-\omega \tau ,x), [Kv](t,x),[K_\lambda v](t,x))\nonumber \\&-\frac{1}{2}\partial _xa(x,\lambda )(v_1(t,x)-v_2(t,x)) \end{aligned}$$
(2.2)

with partial integral operators \(J_\lambda \) defined by

$$\begin{aligned} {[}J_\lambda v](t,x):=\frac{1}{2}\int _0^x\frac{v_1(t,\xi )-v_2(t,\xi )}{a(\xi ,\lambda )}d\xi \end{aligned}$$
(2.3)

and with “pointwise” operators K and \(K_\lambda \) defined by

$$\begin{aligned} Kv:=\frac{v_1+v_2}{2}, \; [K_\lambda v](t,x):=\frac{v_1(t,x)-v_2(t,x)}{2a(x,\lambda )}=[\partial _xJ_\lambda v](t,x).\nonumber \\ \end{aligned}$$
(2.4)
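The operators \(J_\lambda \), K and \(K_\lambda \) act on the x-variable only, for each fixed t, so they are cheap to discretize. A minimal sketch (with a hypothetical coefficient a and hypothetical grid functions \(v_1\), \(v_2\)):

```python
import numpy as np

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral of y over x, vanishing at x[0]."""
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(x))))

def J(v1, v2, a, x):     # (2.3): [J_lambda v](x) at a fixed time t
    return cumtrapz0((v1 - v2) / (2 * a), x)

def K(v1, v2):           # (2.4): Kv = (v1 + v2)/2
    return (v1 + v2) / 2

def K_lam(v1, v2, a):    # (2.4): K_lambda v = (v1 - v2)/(2a) = d/dx J_lambda v
    return (v1 - v2) / (2 * a)
```

The identity \(K_\lambda v=\partial _xJ_\lambda v\) from (2.4) then holds up to discretization error.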

Definition 6

  1. (i)

    We denote by \(C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) the space of all continuous functions \(v:{\mathbb {R}} \times [0,1] \rightarrow {\mathbb {R}}^2\) such that \(v(t+2\pi ,x)=v(t,x)\) for all \(t \in {\mathbb {R}}\) and \(x \in [0,1]\), with the norm

    $$\begin{aligned} \Vert v\Vert _\infty :=\max \{|v_1(t,x)|+|v_2(t,x)|: t \in {\mathbb {R}}, \; x \in [0,1]\}. \end{aligned}$$
  2. (ii)

    For \(k \in {\mathbb {N}}\) we denote by \(C^k_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) the space of all \(C^k\)-smooth functions \(v\in C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\), with the norm \(\max \{\Vert \partial ^i_t\partial ^j_xv\Vert _\infty : 0 \le i+j\le k\}\).

Lemma 7

For all \(\omega ,\tau ,\lambda \in {\mathbb {R}}\) and \(k=2,3,\ldots \) the following is true:

  1. (i)

    If \(u \in C_{2\pi }^k({\mathbb {R}}\times [0,1])\) is a solution to (1.3), then the function \(v \in C_{2\pi }^{k-1}({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\), which is defined by

    $$\begin{aligned} v_1:=\omega \partial _tu+a(x,\lambda ) \partial _xu, \;v_2:=\omega \partial _tu-a(x,\lambda ) \partial _xu,\nonumber \\ \end{aligned}$$
    (2.5)

    is a solution to (2.1).

  2. (ii)

    If \(v \in C_{2\pi }^{k-1}({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) is a solution to (2.1), then the function \(u \in C_{2\pi }^{k-1}({\mathbb {R}}\times [0,1])\), which is defined by

    $$\begin{aligned} u(t,x):=\frac{1}{2}\int _0^x\frac{v_1(t,\xi )-v_2(t,\xi )}{a(\xi ,\lambda )}d\xi , \end{aligned}$$
    (2.6)

    is \(C^k\)-smooth and a solution to (1.3).

Proof

(i) Let \(u \in C_{2\pi }^k({\mathbb {R}}\times [0,1])\) be a solution to (1.3), and let \(v \in C_{2\pi }^{k-1}({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) be defined by (2.5). From (2.5) follows

$$\begin{aligned} \partial _tv_1=\omega \partial ^2_tu+a\partial _t\partial _xu,&\partial _xv_1=\omega \partial _t\partial _xu+\partial _xa \partial _xu + a\partial _x^2u,\\ \partial _tv_2=\omega \partial ^2_tu-a\partial _t\partial _xu,&\partial _xv_2=\omega \partial _t\partial _xu-\partial _xa \partial _xu - a\partial _x^2u. \end{aligned}$$

Hence

$$\begin{aligned} \omega \partial _tu=\frac{v_1+v_2}{2}=Kv,\; \partial _xu=\frac{v_1-v_2}{2a}=K_\lambda v \end{aligned}$$
(2.7)

and

$$\begin{aligned} \omega ^2\partial _t^2u-a^2\partial _x^2u-a\partial _xa\partial _xu=\omega \partial _tv_1-a\partial _xv_1=\omega \partial _tv_2+a\partial _xv_2. \end{aligned}$$
(2.8)

From \(u(t,0)=\partial _xu(t,1)=0\) (cf. (1.3)) and (2.7) it follows that \(v_1(t,0)+v_2(t,0)=0\) and \(v_1(t,1)-v_2(t,1)=0\), i.e. the boundary conditions of (2.1). Further, from \(u(t,0)=0\) and (2.7) it also follows that \(u=J_\lambda v\). Hence, (2.7), (2.8) and the differential equation in (1.3) yield the differential equations in (2.1).

(ii) Let \(v \in C_{2\pi }^{k-1}({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) be a solution to (2.1), and let \(u \in C_{2\pi }^{k-1}({\mathbb {R}}\times [0,1])\) be defined by (2.6). From (2.1) and (2.6) it follows that

$$\begin{aligned} \partial _tu(t,x)=\int _0^x\frac{\partial _tv_1(t,\xi )-\partial _tv_2(t,\xi )}{2a(\xi ,\lambda )}d\xi = \int _0^x\frac{\partial _xv_1(t,\xi )+\partial _xv_2(t,\xi )}{2\omega }d\xi =\frac{v_1(t,x) +v_2(t,x)}{2\omega }. \end{aligned}$$

Hence, \(\partial _tu\) is \(C^{k-1}\)-smooth, and

$$\begin{aligned} \omega ^2\partial ^2_tu=\frac{\omega }{2}(\partial _tv_1+\partial _tv_2). \end{aligned}$$
(2.9)

Further, (2.6) yields

$$\begin{aligned} \partial _xu=\frac{v_1-v_2}{2a}=K_\lambda v, \end{aligned}$$
(2.10)

i.e. \(\partial _xu\) is \(C^{k-1}\)-smooth as well, hence u is \(C^{k}\)-smooth, and \(2(\partial _xa\partial _xu+a\partial _x^2u)=\partial _xv_1-\partial _xv_2\), i.e.

$$\begin{aligned} a^2\partial ^2_xu=\frac{a}{2}(\partial _xv_1-\partial _xv_2)-\frac{\partial _xa}{2}(v_1-v_2). \end{aligned}$$
(2.11)

But (2.1), (2.9) and (2.11) imply \(\omega ^2\partial ^2_tu-a^2\partial ^2_xu=B(v,\omega ,\tau ,\lambda )+\frac{1}{2}\partial _xa(v_1-v_2)\), i.e. the differential equation in (1.3). The boundary conditions in (1.3) follow from the boundary conditions in (2.1) and from (2.6) and (2.10). \(\square \)
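The correspondence between (2.5) and (2.6) is easy to check numerically at a fixed time t: build \(v=(v_1,v_2)\) from u by (2.5) and recover u by (2.6). A minimal sketch, with hypothetical sample data u and a hypothetical coefficient a:

```python
import numpy as np

def to_v(ut, ux, a, omega):   # (2.5): v1 = omega u_t + a u_x, v2 = omega u_t - a u_x
    return omega * ut + a * ux, omega * ut - a * ux

def to_u(v1, v2, a, x):       # (2.6): u(x) = int_0^x (v1 - v2)/(2a) dxi
    f = (v1 - v2) / (2 * a)   # equals u_x, so the integral recovers u (u(0)=0)
    return np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(x))))
```

The recovery works for any coefficient a, since \((v_1-v_2)/(2a)=\partial _xu\) by construction.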

Let us calculate the linearization of the operator B (cf. (2.2)) with respect to v at \(v=0\). To this end we use the following notation:

$$\begin{aligned} b_j(x,\lambda ):=\partial _jb(x,\lambda ,0,0,0,0) \text{ for } j=3,4,5,6. \end{aligned}$$
(2.12)

Remark that \(b_j(x,0)=b_j^0(x)\) (cf. (1.5)). We have

$$\begin{aligned}&[\partial _vB(0,\omega ,\tau ,\lambda )v](t,x)\nonumber \\&\quad =b_3(x,\lambda )[J_\lambda v](t,x)+b_4(x,\lambda )[J_\lambda v](t-\omega \tau ,x)+b_5(x,\lambda )[Kv](t,x)+ b_6(x,\lambda )[K_\lambda v](t,x)\nonumber \\&\;\;\;\;\;\;\;\;\;\;-\frac{1}{2}\partial _xa(x,\lambda )(v_1(t,x)-v_2(t,x))\nonumber \\&\quad =b_1(x,\lambda )v_1(t,x)+b_2(x,\lambda )v_2(t,x)+b_3(x,\lambda )[J_\lambda v](t,x)+b_4(x,\lambda )[J_\lambda v](t-\omega \tau ,x)\nonumber \\ \end{aligned}$$
(2.13)

with

$$\begin{aligned} \left. \begin{array}{rcl} b_1(x,\lambda )&{}:=&{}\displaystyle \frac{1}{2}\left( -\partial _xa(x,\lambda ) +b_5(x,\lambda )+\frac{b_6(x,\lambda )}{a(x,\lambda )}\right) ,\;\\ b_2(x,\lambda )&{}:=&{}\displaystyle \frac{1}{2}\left( \partial _xa(x,\lambda ) +b_5(x,\lambda )-\frac{b_6(x,\lambda )}{a(x,\lambda )}\right) . \end{array} \right\} \end{aligned}$$
(2.14)

For reasons that will become clear in Sects. 3 and 4 below, we rewrite system (2.1) in the following way:

$$\begin{aligned} \left. \begin{array}{l} \omega \partial _tv_1(t,x)-a(x,\lambda )\partial _xv_1(t,x)-b_1(x,\lambda )v_1(t,x) =[{{\mathcal {B}}}_1(v,\omega ,\tau ,\lambda )](t,x),\\ \omega \partial _tv_2(t,x)+a(x,\lambda )\partial _xv_2(t,x)-b_2(x,\lambda )v_2(t,x) =[{{\mathcal {B}}}_2(v,\omega ,\tau ,\lambda )](t,x),\\ v_1(t,0)+v_2(t,0)= v_1(t,1)-v_2(t,1)=0,\\ v(t+2\pi ,x)=v(t,x) \end{array} \right\} \qquad \end{aligned}$$
(2.15)

with

$$\begin{aligned} \left. \begin{array}{rcl} [{{\mathcal {B}}}_1(v,\omega ,\tau ,\lambda )](t,x)&{}:=&{}[B(v,\omega ,\tau ,\lambda )](t,x)-b_1(x, \lambda )v_1(t,x),\\ \displaystyle [{{\mathcal {B}}}_2(v,\omega ,\tau ,\lambda )](t,x)&{}:=&{}[B(v,\omega ,\tau ,\lambda )](t,x) -b_2(x,\lambda )v_2(t,x). \end{array} \right\} \end{aligned}$$
(2.16)

The operators \({{\mathcal {B}}}_1,{{\mathcal {B}}}_2:C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\times {\mathbb {R}}^3\rightarrow C_{2\pi }({\mathbb {R}} \times [0,1])\), introduced in (2.16), define an operator

$$\begin{aligned} {{\mathcal {B}}}:=({{\mathcal {B}}}_1,{{\mathcal {B}}}_2):C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\times {\mathbb {R}}^3\rightarrow C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2). \end{aligned}$$

Moreover, the operator \({{\mathcal {B}}}(\cdot ,\omega ,\tau ,\lambda ):C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\rightarrow C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) is \(C^\infty \)-smooth because the function b is supposed to be \(C^\infty \)-smooth, and

$$\begin{aligned} \partial _v{{\mathcal {B}}}(0,\omega ,\tau ,\lambda )={{\mathcal {J}}}(\omega ,\tau ,\lambda ) + {{\mathcal {K}}}(\lambda ) \end{aligned}$$
(2.17)

with operators \({{\mathcal {J}}}(\omega ,\tau ,\lambda ),{{\mathcal {K}}}(\lambda )\in {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2))\). Their components are defined by (cf. (2.13) and (2.16))

$$\begin{aligned}&[{{\mathcal {J}}}_1(\omega ,\tau ,\lambda )v](t,x)= [{{\mathcal {J}}}_2(\omega ,\tau ,\lambda )v](t,x):=b_3(x,\lambda ) [J_\lambda v](t,x)+b_4(x,\lambda ) [J_\lambda v](t-\omega \tau ,x) \nonumber \\&\quad = \displaystyle \frac{1}{2}\int _0^x\frac{b_3(x,\lambda )(v_1(t,\xi )-v_2(t,\xi )) +b_4(x,\lambda )(v_1(t-\omega \tau ,\xi )-v_2(t-\omega \tau ,\xi ))}{a(\xi ,\lambda )}d\xi \end{aligned}$$
(2.18)

and

$$\begin{aligned} \displaystyle [{{\mathcal {K}}}_1(\lambda )v](t,x)=b_2(x,\lambda )v_2(t,x),\; \displaystyle [{{\mathcal {K}}}_2(\lambda )v](t,x)=b_1(x,\lambda )v_1(t,x). \end{aligned}$$
(2.19)

Hence, the linearization with respect to v at \(v=0\) of the right-hand side of (2.15) has a special structure: It is the sum of the partial integral operator \({{\mathcal {J}}}(\omega ,\tau ,\lambda )\) and of the “pointwise” operator \({{\mathcal {K}}}(\lambda )\), which has vanishing diagonal part. This structure will be used in Subsect. 4.1 below, cf. Remark 11.

3 Transformation of the First-order System into a System of Partial Integral Equations

In this section we show (by using the method of integration along characteristics) that any solution to (2.1), i.e. to (2.15), solves the system of partial integral equations

$$\begin{aligned} \left. \begin{array}{l} v_1(t,x)+c_1(x,0,\lambda )v_2(t+\omega A(x,0,\lambda ),0)\\ =\displaystyle -\int _0^x\frac{c_1(x,\xi ,\lambda )}{a(\xi ,\lambda )}[{{\mathcal {B}}}_1(v,\omega ,\tau ,\lambda )](t+\omega A(x,\xi ,\lambda ),\xi )d\xi ,\\ v_2(t,x)-c_2(x,1,\lambda )v_1(t-\omega A(x,1,\lambda ),1)\\ =\displaystyle \int _x^1\frac{c_2(x,\xi ,\lambda )}{a(\xi ,\lambda )}[{{\mathcal {B}}}_2(v,\omega ,\tau ,\lambda )](t-\omega A(x,\xi ,\lambda ),\xi )d\xi \end{array} \right\} \end{aligned}$$
(3.1)

and vice versa. Here the operators \({{\mathcal {B}}}_1\) and \({{\mathcal {B}}}_2\) are from (2.16), and the functions \(c_1\), \(c_2\) and A are defined by (cf. (2.12) and (2.14))

$$\begin{aligned} c_1(x,\xi ,\lambda ):=\exp \int _x^\xi \frac{b_1(\eta ,\lambda )}{a(\eta ,\lambda )}d\eta ,\; c_2(x,\xi ,\lambda ):=\exp \int _\xi ^x\frac{b_2(\eta ,\lambda )}{a(\eta ,\lambda )}d\eta ,\; A(x,\xi ,\lambda ):=\int _\xi ^x\frac{d\eta }{a(\eta ,\lambda )}. \end{aligned}$$
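The elementary identities \(c_1(x,x,\lambda )=1\), \(A(x,x,\lambda )=0\), \(\partial _\xi A(x,\xi ,\lambda )=-1/a(\xi ,\lambda )\) and \(\partial _\xi c_1(x,\xi ,\lambda )=b_1(\xi ,\lambda )c_1(x,\xi ,\lambda )/a(\xi ,\lambda )\), which drive the integration along characteristics in Lemma 8 below, can be checked numerically. The sketch uses illustrative coefficients \(a\equiv 2\), \(b_1(x)=x\) (not data from the paper), chosen so that \(A\) and \(c_1\) have closed forms:

```python
import numpy as np

# Illustrative coefficients (not from the paper): a(x) = 2, b1(x) = x,
# chosen so that A and c1 have closed forms.
a  = lambda x: 2.0
b1 = lambda x: x
A  = lambda x, xi: (x - xi) / 2.0                 # int_xi^x dz/a(z)
c1 = lambda x, xi: np.exp((xi**2 - x**2) / 4.0)   # exp(int_x^xi b1/a)

x, xi, h = 0.7, 0.3, 1e-6

# c1(x,x) = 1 and A(x,x) = 0 (used for the telescoping in Lemma 8).
assert abs(c1(x, x) - 1.0) < 1e-12 and abs(A(x, x)) < 1e-12

# d/dxi A(x,xi) = -1/a(xi) and d/dxi c1(x,xi) = b1(xi)*c1(x,xi)/a(xi),
# checked by central finite differences.
dA  = (A(x, xi + h) - A(x, xi - h)) / (2.0 * h)
dc1 = (c1(x, xi + h) - c1(x, xi - h)) / (2.0 * h)
assert abs(dA + 1.0 / a(xi)) < 1e-8
assert abs(dc1 - b1(xi) * c1(x, xi) / a(xi)) < 1e-6
print("identities behind Lemma 8 verified")
```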

Lemma 8

For all \(\omega ,\tau ,\lambda \in {\mathbb {R}}\) the following is true:

  1. (i)

    If \(v \in C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) is a solution to (2.1), then it is a solution to (3.1).

  2. (ii)

    If \(v \in C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) is a solution to (3.1) and if \(\partial _tv\) exists and is continuous, then v belongs to \(C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) and solves (2.1).

Proof

(i) Let \(v\in C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) be given. Because of \(c_1(x,x,\lambda )=1\) and \(A(x,x,\lambda )=0\) we get

$$\begin{aligned}&v_1(t,x)-c_1(x,0,\lambda )v_1(t+\omega A(x,0,\lambda ),0)=\int _0^x\frac{d}{d\xi }\left( c_1(x,\xi ,\lambda )v_1(t+\omega A(x,\xi ,\lambda ),\xi )\right) d\xi \\&\quad =\int _0^x\partial _\xi c_1(x,\xi ,\lambda )v_1(t+\omega A(x,\xi ,\lambda ),\xi )d\xi \\&\;\;\;\;\;+\int _0^xc_1(x,\xi ,\lambda )\left( \partial _tv_1(t+\omega A(x,\xi ,\lambda ),\xi )\omega \partial _\xi A(x,\xi ,\lambda ) +\partial _xv_1(t+\omega A(x,\xi ,\lambda ),\xi )\right) d\xi . \end{aligned}$$

From \(\partial _\xi A(x,\xi ,\lambda )=-1/a(\xi ,\lambda )\) and \(\partial _\xi c_1(x,\xi ,\lambda )=b_1(\xi ,\lambda )c_1(x,\xi ,\lambda )/a(\xi ,\lambda )\) it follows that

$$\begin{aligned}&v_1(t,x)-c_1(x,0,\lambda )v_1(t+\omega A(x,0,\lambda ),0)\\&\quad =\int _0^x\frac{c_1(x,\xi ,\lambda )}{a(\xi ,\lambda )}\Big [-\omega \partial _tv_1(s,\xi ) +a(\xi ,\lambda )\partial _xv_1(s,\xi )+b_1(\xi ,\lambda )v_1(s,\xi )\Big ]_{s=t+\omega A(x,\xi ,\lambda )}d\xi . \end{aligned}$$

Similarly one shows that

$$\begin{aligned}&v_2(t,x)-c_2(x,1,\lambda )v_2(t-\omega A(x,1,\lambda ),1)=-\int _x^1\frac{d}{d\xi }\left( c_2(x,\xi ,\lambda )v_2(t-\omega A(x,\xi ,\lambda ),\xi )\right) d\xi \\&\quad =-\int _x^1\frac{c_2(x,\xi ,\lambda )}{a(\xi ,\lambda )}\Big [\omega \partial _tv_2(s,\xi ) +a(\xi ,\lambda )\partial _xv_2(s,\xi )-b_2(\xi ,\lambda )v_2(s,\xi )\Big ]_{s=t-\omega A(x,\xi ,\lambda )}d\xi . \end{aligned}$$

Together with the boundary conditions \([v_1+v_2]_{x=0}=[v_1-v_2]_{x=1}=0\) of (2.1), this yields that any solution v to (2.1), i.e. to (2.15), is a solution to (3.1).

(ii) Let \(v \in C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) be a solution to (3.1). The first equation of (3.1) yields \(v_1(t,0)=-c_1(0,0,\lambda )v_2(t+\omega A(0,0,\lambda ),0)=-v_2(t,0)\), i.e. the first boundary condition of (2.1). Similarly the second boundary condition of (2.1) follows from the second equation of (3.1).

Further, from (3.1) and from the assumption that \(\partial _tv\) exists and is continuous, it follows that \(\partial _xv\) also exists and is continuous, i.e. \(v\in C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\).

Now, let us verify the differential equations in (2.1), i.e. in (2.15). From (3.1) it follows that

$$\begin{aligned}&\left( \omega \partial _t-a(x,\lambda )\partial _x\right) \left( v_1(t,x)+c_1(x,0,\lambda )v_2(t+\omega A(x,0,\lambda ),0)\right) \nonumber \\&\quad =-\left( \omega \partial _t-a(x,\lambda )\partial _x\right) \int _0^x\frac{c_1(x,\xi ,\lambda )}{a(\xi ,\lambda )} [{{\mathcal {B}}}_1(v,\omega ,\tau ,\lambda )](t+\omega A(x,\xi ,\lambda ),\xi )d\xi .\qquad \end{aligned}$$
(3.2)

Because of \(\partial _xc_1(x,0,\lambda )=-b_1(x,\lambda )c_1(x,0,\lambda )/a(x,\lambda )\) and

$$\begin{aligned} \left( \omega \partial _t-a(x,\lambda )\partial _x\right) \varphi (t+\omega A(x,\xi ,\lambda ))=0 \text{ for } \text{ all } \varphi \in C^1({\mathbb {R}}) \end{aligned}$$

the left-hand side of (3.2) is \( \left( \omega \partial _t-a(x,\lambda )\partial _x\right) v_1(t,x)+b_1(x,\lambda )c_1(x,0,\lambda )v_2(t+\omega A(x,0,\lambda ),0), \) and the right-hand side of (3.2) is

$$\begin{aligned} -b_1(x,\lambda )\int _0^x\frac{c_1(x,\xi ,\lambda )}{a(\xi ,\lambda )} [{{\mathcal {B}}}_1(v,\omega ,\tau ,\lambda )](t+\omega A(x,\xi ,\lambda ),\xi )d\xi +[{{\mathcal {B}}}_1(v,\omega ,\tau ,\lambda )](t,x). \end{aligned}$$

Equating both sides of (3.2) and using the first equation of (3.1) once more to express the integral term, the first equation of (2.15) follows. Using \(\partial _xc_2(x,\xi ,\lambda )=b_2(x,\lambda )c_2(x,\xi ,\lambda )/a(x,\lambda )\), one gets similarly

$$\begin{aligned}&\left( \omega \partial _t+a(x,\lambda )\partial _x\right) \left( v_2(t,x)+c_2(x,0,\lambda )v_1(t-\omega A(x,0,\lambda ),0)\right) \\&\quad =\left( \omega \partial _t+a(x,\lambda )\partial _x\right) v_2(t,x) +b_2(x,\lambda )c_2(x,0,\lambda )v_1(t-\omega A(x,0,\lambda ),0)\\&\quad =- \left( \omega \partial _t+a(x,\lambda )\partial _x\right) \int _x^1\frac{c_2(x,\xi ,\lambda )}{a(\xi ,\lambda )} [{{\mathcal {B}}}_2(v,\omega ,\tau ,\lambda )](t-\omega A(x,\xi ,\lambda ),\xi )d\xi \\&\quad =b_2(x,\lambda )\int _x^1\frac{c_2(x,\xi ,\lambda )}{a(\xi ,\lambda )} [{{\mathcal {B}}}_2(v,\omega ,\tau ,\lambda )](t-\omega A(x,\xi ,\lambda ),\xi )d\xi +[{{\mathcal {B}}}_2(v,\omega ,\tau ,\lambda )](t,x), \end{aligned}$$

i.e. the second equation of (2.15) is shown. \(\square \)

4 Lyapunov-Schmidt Procedure

In this and the next sections we perform a Lyapunov-Schmidt procedure in order to reduce, locally for \(v \approx 0\), \(\omega \approx 1\), \(\tau \approx \tau _0\) and \(\lambda \approx 0\), the problem (3.1) with the infinite-dimensional state parameter \((v,\omega )\) to the problem (6.1) with the three-dimensional state parameter \((u,\omega )\).

For the sake of simplicity, we will write the problem (3.1) in a more abstract way. To this end, for \(\omega ,\lambda \in {\mathbb {R}}\) let us introduce operators \( {\mathcal {C}}(\omega ,\lambda ),{\mathcal {D}}(\omega ,\lambda )\in {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)) \) with components \({\mathcal {C}}_j(\omega ,\lambda ),{\mathcal {D}}_j(\omega ,\lambda )\in {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2);C_{2\pi }({\mathbb {R}} \times [0,1]))\), \(j=1,2\), which are defined by

$$\begin{aligned} \left. \begin{array}{rcl} {[}{\mathcal {C}}_1(\omega ,\lambda )v](x,t)&{}:=&{}-c_1(x,0,\lambda )v_2(t+\omega A(x,0,\lambda ),0),\\ \displaystyle [{\mathcal {C}}_2(\omega ,\lambda )v](x,t)&{}:=&{}c_2(x,1,\lambda )v_1(t-\omega A(x,1,\lambda ),1) \end{array} \right\} \end{aligned}$$
(4.1)

and

$$\begin{aligned} \left. \begin{array}{rcl} {[}{\mathcal {D}}_1(\omega ,\lambda )v](x,t)&{}:=&{} \displaystyle -\int _0^x\frac{c_1(x,\xi ,\lambda )}{a(\xi ,\lambda )}v_1(t+\omega A(x,\xi ,\lambda ),\xi )d\xi ,\\ \displaystyle [{\mathcal {D}}_2(\omega ,\lambda )v](x,t)&{}:=&{}\displaystyle \int _x^1\frac{c_2(x,\xi ,\lambda )}{a(\xi ,\lambda )}v_2(t-\omega A(x,\xi ,\lambda ),\xi )d\xi . \end{array} \right\} \end{aligned}$$
(4.2)

Using this notation, the system (3.1) reads

$$\begin{aligned} v={\mathcal {C}}(\omega ,\lambda )v+{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(v,\omega ,\tau ,\lambda ), \end{aligned}$$
(4.3)

where the nonlinear operator \({{\mathcal {B}}}\) is introduced in (2.16).

Remark 9

Also the first-order hyperbolic system (2.15) can be written in an abstract way, namely as

$$\begin{aligned} {{\mathcal {A}}}(\omega ,\lambda )v={{\mathcal {B}}}(v,\omega ,\tau ,\lambda ) \end{aligned}$$
(4.4)

with \({{\mathcal {A}}}(\omega ,\lambda ) \in {{\mathcal {L}}}(C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2);C_{2\pi } ({\mathbb {R}}\times [0,1];{\mathbb {R}}^2))\) defined by

$$\begin{aligned}{}[{{\mathcal {A}}}(\omega ,\lambda )v](t,x):= \left[ \begin{array}{c} \omega \partial _tv_1(t,x)-a(x,\lambda )\partial _xv_1(t,x)-b_1(x,\lambda )v_1(t,x)\\ \omega \partial _tv_2(t,x)+a(x,\lambda )\partial _xv_2(t,x)-b_2(x,\lambda )v_2(t,x) \end{array} \right] . \end{aligned}$$
(4.5)

Remark that in the proof of Lemma 8 we showed that for all \(\omega ,\lambda \in {\mathbb {R}}\) it holds

$$\begin{aligned} {{\mathcal {A}}}(\omega ,\lambda ){\mathcal {C}}(\omega ,\lambda )v= {{\mathcal {A}}}(\omega ,\lambda ){\mathcal {D}}(\omega ,\lambda )v-v=0 \text{ for } \text{ all } v \in C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2) \end{aligned}$$
(4.6)

and

$$\begin{aligned}&{\mathcal {D}}(\omega ,\lambda ){{\mathcal {A}}}(\omega ,\lambda )v=v-{\mathcal {C}}(\omega ,\lambda )v \text{ for } \text{ all } v \in C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2) \nonumber \\&\text{ with } [v_1+v_2]_{x=0}=[v_1-v_2]_{x=1}=0. \end{aligned}$$
(4.7)

It is easy to see that the operators \({\mathcal {C}}(\omega ,\lambda ),{\mathcal {D}}(\omega ,\lambda )\), \({{\mathcal {J}}}(\omega ,\tau ,\lambda )\) and \({{\mathcal {K}}}(\lambda )\) (cf. (2.18), (2.19)) are bounded with respect to \(\omega \) and \(\tau \) and locally bounded with respect to \(\lambda \), i.e., for any \(c>0\) it holds

$$\begin{aligned} \sup _{\omega ,\tau \in {\mathbb {R}}, |\lambda |\le c}\{\Vert {\mathcal {C}}(\omega ,\lambda )v\Vert _\infty +\Vert {\mathcal {D}}(\omega ,\lambda )v\Vert _\infty +\Vert {{\mathcal {J}}}(\omega ,\tau ,\lambda )v\Vert _\infty +\Vert {{\mathcal {K}}}(\lambda )v\Vert _\infty :\; \Vert v\Vert _\infty \le 1\}<\infty .\nonumber \\ \end{aligned}$$
(4.8)

But, unfortunately, the operators \({\mathcal {C}}(\omega ,\lambda )\) and \({\mathcal {D}}(\omega ,\lambda )\) do not, in general, depend continuously (in the sense of the uniform operator norm in \({{\mathcal {L}}}(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2))\)) on \(\omega \) and \(\lambda \), and \({{\mathcal {J}}}(\omega ,\tau ,\lambda )\) does not, in general, depend continuously on \(\omega \) and \(\tau \). This is the main technical difficulty which we have to overcome in order to analyze the bifurcation problem (4.3). Remark that this difficulty appears also in the case when \(\tau \) is fixed to be zero (and \(\lambda \) serves as the bifurcation parameter), i.e. in the case of Hopf bifurcation for semilinear wave equations without delay.
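The failure of norm-continuity can be made concrete: on \(C_{2\pi }({\mathbb {R}})\), a time shift \(S_h\), \((S_hv)(t):=v(t+h)\), is at operator-norm distance 2 from the identity for every \(h\ne 0\), since fast oscillating test functions turn an arbitrarily small shift into a large uniform error. A numerical illustration with the hypothetical test functions \(v_N(t)=\cos (Nt)\):

```python
import numpy as np

h = 1e-3                                    # a small but nonzero shift
t = np.linspace(0.0, 2.0 * np.pi, 200001)
gaps = {}
for N in (1, 10, 3141):                     # N*h is approximately pi for N = 3141
    # ||(S_h - I) v_N||_inf for v_N(t) = cos(N*t), ||v_N||_inf = 1
    gap = np.max(np.abs(np.cos(N * (t + h)) - np.cos(N * t)))
    gaps[N] = gap
    print(f"N = {N:5d}: ||(S_h - I) v_N||_inf = {gap:.4f}")
# For slowly oscillating v_N the shift is harmless, but for N*h close to pi
# the gap approaches 2 = 2*||v_N||_inf; hence ||S_h - I|| = 2 for every h != 0.
```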

It should be emphasized that equation (4.3) does not depend smoothly on \(\omega \), \(\tau \), and \(\lambda \). But after the Lyapunov-Schmidt reduction the equation (6.1), which is locally equivalent to (4.3), depends smoothly on all its parameters. In other words, the main difficulties of the present paper are overcome during the Lyapunov-Schmidt reduction; this is where the hard technical work is hidden.

4.1 Fredholmness of the Linearization

We intend to show that the linearization of (4.3) at \(v=0\), i.e., the operator

$$\begin{aligned} I-{\mathcal {C}}(\omega ,\lambda )-{\mathcal {D}}(\omega ,\lambda )\partial _v{{\mathcal {B}}}(0,\omega ,\tau ,\lambda ) =I-{\mathcal {C}}(\omega ,\lambda )-{\mathcal {D}}(\omega ,\lambda )({{\mathcal {J}}}(\omega ,\tau ,\lambda )+{{\mathcal {K}}}(\lambda )), \end{aligned}$$

is a Fredholm operator of index zero from the space \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) into itself.

Lemma 10

Let the condition (1.7) be fulfilled. Then there exists \(\delta >0\) such that for all \(\omega ,\tau ,\lambda \in {\mathbb {R}}\) with \(\omega \ne 0\) and \(|\lambda |<\delta \) the operator \(I-{\mathcal {C}}(\omega ,\lambda )-{\mathcal {D}}(\omega ,\lambda )\partial _v{{\mathcal {B}}}(0,\omega ,\tau ,\lambda )\) is a Fredholm operator of index zero from the space \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) into itself.

The main complication in the proof is caused by the fact that the operators \({\mathcal {C}}(\omega ,\lambda )+{\mathcal {D}}(\omega ,\lambda )\partial _v{{\mathcal {B}}}(0,\omega ,\tau ,\lambda )\) are not completely continuous from the space \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) into itself, in general.

The proof will be divided into a number of claims.

Claim 1

For all \(\omega ,\tau ,\lambda \in {\mathbb {R}}\) with \(\omega \ne 0\) and all \(v \in C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) we have \({\mathcal {D}}(\omega ,\lambda ){{\mathcal {J}}}(\omega ,\tau ,\lambda )v \in C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\), and for any \(c>0\) it holds

$$\begin{aligned} \sup _{1/c \le \omega \le c, \tau \in {\mathbb {R}}, |\lambda | \le c}\{\Vert \partial _t{\mathcal {D}}(\omega ,\lambda ){{\mathcal {J}}}(\omega ,\tau ,\lambda )v\Vert _\infty +\Vert \partial _x{\mathcal {D}}(\omega ,\lambda ){{\mathcal {J}}}(\omega ,\tau ,\lambda )v\Vert _\infty : \;\Vert v\Vert _\infty \le 1\}<\infty .\nonumber \\ \end{aligned}$$
(4.9)

Proof of Claim

The idea of the proof is to show that the composition of the two partial integral operators \({\mathcal {D}}(\omega ,\lambda )\) and \({{\mathcal {J}}}(\omega ,\tau ,\lambda )\) is an integral operator mapping \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) into \(C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\). Indeed, for \(v \in C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) we have

$$\begin{aligned}&[{\mathcal {D}}_1(\omega ,\lambda ){{\mathcal {J}}}(\omega ,\tau ,\lambda )v](t,x)\\&\quad =-\frac{1}{2}\int _0^x\left( \frac{c_1(x,\xi ,\lambda )b_3(\xi ,\lambda )}{a(\xi ,\lambda )}\int _0^\xi \frac{v_1(t+\omega A(\eta ,\xi ,\lambda ),\eta )-v_2(t+\omega A(\eta ,\xi ,\lambda ),\eta )}{a(\eta ,\lambda )}d\eta \right) d\xi \\&\qquad -\frac{1}{2}\int _0^x\left( \frac{c_1(x,\xi ,\lambda )b_4(\xi ,\lambda )}{a(\xi ,\lambda )} \int _0^\xi \frac{v_1(t-\omega \tau +\omega A(\eta ,\xi ,\lambda ),\eta )-v_2(t-\omega \tau +\omega A(\eta ,\xi ,\lambda ),\eta )}{a(\eta ,\lambda )}d\eta \right) d\xi ,\quad \end{aligned}$$

where

$$\begin{aligned}&\int _0^x\int _0^\xi \frac{c_1(x,\xi ,\lambda )b_3(\xi ,\lambda )v_1 (t+\omega A(\eta ,\xi ,\lambda ),\eta )}{a(\xi ,\lambda )a(\eta ,\lambda )}\,d\eta d\xi \nonumber \\&\quad =\int _0^x\int _\eta ^x\frac{c_1(x,\xi ,\lambda )b_3(\xi ,\lambda )v_1(t +\omega A(\eta ,\xi ,\lambda ),\eta )}{a(\xi ,\lambda )a(\eta ,\lambda )}\,d\xi d\eta \nonumber \\&\quad =-\frac{1}{\omega }\int _0^x\int _t^{t+\omega A(\eta ,x,\lambda )}\frac{c_1(x,\xi _{\eta ,t,\omega ,\lambda }(\zeta ),\lambda )b_3 (\xi _{\eta ,t,\omega ,\lambda }(\zeta ),\lambda ) v_1(\zeta ,\eta )}{a(\eta ,\lambda )}\,d\zeta d\eta .\qquad \quad \end{aligned}$$
(4.10)

Here we changed the integration variable \(\xi \) to a new integration variable

$$\begin{aligned} \zeta =\zeta _{\eta ,t,\omega ,\lambda }(\xi ):= t+\omega A(\eta ,\xi ,\lambda )=t+\omega \int _\xi ^\eta \frac{dz}{a(z,\lambda )},\; d\zeta =-\frac{\omega }{a(\xi ,\lambda )}\,d\xi . \end{aligned}$$

Note that for \(\omega \ne 0\) the inverse transformation \(\xi =\xi _{\eta ,t,\omega ,\lambda }(\zeta )\) exists and depends smoothly on \(\eta ,t,\omega ,\lambda \) and \(\zeta \).
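The change of variables behind (4.10) can be tested numerically. The sketch below uses the hypothetical coefficient \(a(z)=1+z/2\) (not from the paper), for which \(A(\eta ,\xi ,\lambda )\) and the inverse map \(\xi _{\eta ,t,\omega ,\lambda }\) are explicit; \(g\) and \(v\) are arbitrary smooth stand-ins for the coefficient factor and for \(v_1\):

```python
import numpy as np

def trap(y, x):                     # simple trapezoidal rule (signed)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Hypothetical coefficient: a(z) = 1 + z/2, so that
# A(eta, xi) = int_xi^eta dz/a(z) = 2*log((2 + eta)/(2 + xi)) is explicit.
a = lambda z: 1.0 + z / 2.0
A = lambda eta, xi: 2.0 * np.log((2.0 + eta) / (2.0 + xi))

g, v = np.cos, np.sin               # smooth stand-ins
eta, x, t, omega = 0.2, 0.9, 0.4, 1.3

# Left-hand side: the xi-integral as it appears before the substitution.
xi = np.linspace(eta, x, 20001)
I1 = trap(g(xi) * v(t + omega * A(eta, xi)), xi)

# Right-hand side: zeta = t + omega*A(eta, xi), d zeta = -(omega/a(xi)) d xi,
# with the explicit inverse xi_{eta,t,omega}(zeta).
xi_of = lambda z: (2.0 + eta) * np.exp(-(z - t) / (2.0 * omega)) - 2.0
zeta = np.linspace(t, t + omega * A(eta, x), 20001)
I2 = -trap(g(xi_of(zeta)) * a(xi_of(zeta)) * v(zeta), zeta) / omega

assert abs(I1 - I2) < 1e-6
print("change of variables verified:", I1, I2)
```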

Obviously, the partial derivatives of (4.10) with respect to t and x exist, and their absolute values can be estimated from above by a constant times \(\Vert v\Vert _\infty \). Moreover, as long as \(\omega \) and \(\lambda \) vary in the ranges \(1/c\le \omega \le c\) and \(|\lambda | \le c\), the constant may be chosen to be independent of \(\omega ,\tau \) and \(\lambda \) (and to depend on c only). The same can be shown for the terms

$$\begin{aligned} \int _0^x\int _0^\xi \frac{c_1(x,\xi ,\lambda )b_3(\xi ,\lambda )v_2(t+\omega A(\eta ,\xi ,\lambda ),\eta )}{a(\xi ,\lambda )a(\eta ,\lambda )}\,d\eta d\xi \end{aligned}$$

and

$$\begin{aligned} \int _0^x\int _0^\xi \frac{c_1(x,\xi ,\lambda )b_4(\xi ,\lambda )v_j(t-\omega \tau +\omega A(\eta ,\xi ,\lambda ),\eta )}{a(\xi ,\lambda )a(\eta ,\lambda )}\,d\eta d\xi , j=1,2. \end{aligned}$$

Claim 1 is therefore proved for the first component \({\mathcal {D}}_1(\omega ,\lambda ){{\mathcal {J}}}(\omega ,\tau ,\lambda )\). The same argument applies to the second component \({\mathcal {D}}_2(\omega ,\lambda ){{\mathcal {J}}}(\omega ,\tau ,\lambda )\).

Claim 2

For all \(\omega ,\tau ,\lambda \in {\mathbb {R}}\) with \(\omega \ne 0\) and all \(v \in C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) we have \({\mathcal {D}}(\omega ,\lambda ){{\mathcal {K}}}(\lambda ){\mathcal {D}}(\omega ,\lambda )v \in C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\), and for any \(c>0\) it holds

$$\begin{aligned} \sup _{1/c \le \omega \le c, \tau \in {\mathbb {R}}, |\lambda | \le c}\{\Vert \partial _t{\mathcal {D}}(\omega ,\lambda ){{\mathcal {K}}}(\lambda ){\mathcal {D}}(\omega ,\lambda )v\Vert _\infty +\Vert \partial _x{\mathcal {D}}(\omega ,\lambda ){{\mathcal {K}}}(\lambda ){\mathcal {D}}(\omega ,\lambda )v\Vert _\infty : \;\Vert v\Vert _\infty \le 1\}<\infty .\nonumber \\ \end{aligned}$$
(4.11)

Proof of Claim

The proof is similar to the proof of Claim 1. We have

$$\begin{aligned}&[{\mathcal {D}}_1(\omega ,\lambda ){{\mathcal {K}}}(\lambda ){\mathcal {D}}(\omega ,\lambda )v](t,x)\\&\quad = -\int _0^x\int _0^\xi \frac{c_1(x,\xi ,\lambda )c_2(\xi ,\eta ,\lambda )b_2 (\xi ,\lambda )}{a(\xi ,\lambda )a(\eta ,\lambda )} v_2(t+\omega A(x,\xi ,\lambda )-\omega A(\xi ,\eta ,\lambda ),\eta )\,d\eta d\xi \\&\quad = -\int _0^x\int _\eta ^x\frac{c_1(x,\xi ,\lambda )c_2(\xi ,\eta ,\lambda )b_2(\xi ,\lambda )}{a(\xi ,\lambda )a(\eta ,\lambda )} v_2(t+\omega A(x,\xi ,\lambda )-\omega A(\xi ,\eta ,\lambda ),\eta )\,d\xi d\eta \\&\quad = \frac{1}{2\omega }\int _0^x\int _{t+\omega A(x,\eta ,\lambda )}^{t-\omega A(x,\eta ,\lambda )} \frac{c_1(x,\xi _{\eta ,t,\omega ,\lambda }(\zeta ),\lambda )c_2(\xi _{\eta ,t,\omega ,\lambda } (\zeta ),\eta ,\lambda ) b_2(\xi _{\eta ,t,\omega ,\lambda }(\zeta ),\lambda )}{a(\eta ,\lambda )} v_2(\zeta ,\eta )\,d\zeta d\eta . \end{aligned}$$

Here we changed the integration variable \(\xi \) to

$$\begin{aligned} \zeta= & {} \zeta _{\eta ,t,\omega ,\lambda }(\xi ):= t+\omega (A(x,\xi ,\lambda )-A(\xi ,\eta ,\lambda ))=t+\omega \left( \int _\xi ^x\frac{dz}{a(z,\lambda )}+ \int _\xi ^\eta \frac{dz}{a(z,\lambda )}\right) ,\; d\zeta \\= & {} -\frac{2\omega }{a(\xi ,\lambda )}d\xi , \end{aligned}$$

and denoted by \(\xi =\xi _{\eta ,t,\omega ,\lambda }(\zeta )\) the inverse transformation. Now we proceed as in the proof of Claim 1.

Remark 11

In the proof of Claim 2 we used that the diagonal part of the operator \({{\mathcal {K}}}(\lambda )\) vanishes. Indeed, if in place of (2.19) we had, for example, \( [{{\mathcal {K}}}_1(\lambda )v](t,x)=v_1(t,x)+b_2(x,\lambda )v_2(t,x), \) then in \([{\mathcal {D}}_1(\omega ,\lambda ){{\mathcal {K}}}(\lambda ){\mathcal {D}}(\omega ,\lambda )v](t,x)\) there would appear the additional summand

$$\begin{aligned} -\int _0^x\int _\eta ^x\frac{c_1(x,\xi ,\lambda )c_2(\xi ,\eta ,\lambda )}{a(\xi ,\lambda )a(\eta ,\lambda )} v_1(t+\omega A(x,\xi ,\lambda )+\omega A(\xi ,\eta ,\lambda ),\eta )\,d\xi d\eta . \end{aligned}$$

Because of \(A(x,\xi ,\lambda )+A(\xi ,\eta ,\lambda )=A(x,\eta ,\lambda )\) this equals

$$\begin{aligned} -\int _0^x\int _\eta ^x\frac{c_1(x,\xi ,\lambda )c_2(\xi ,\eta ,\lambda )}{a(\xi ,\lambda )a(\eta ,\lambda )} v_1(t+\omega A(x,\eta ,\lambda ),\eta )\,d\xi d\eta , \end{aligned}$$

and this is not differentiable with respect to t, in general, if \(v_1\) is not differentiable with respect to t.

Claim 3

For all \(\omega ,\tau ,\lambda \in {\mathbb {R}}\) with \(\omega \ne 0\) and all \(v \in C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) we have \({\mathcal {D}}(\omega ,\lambda ){{\mathcal {K}}}(\lambda ){\mathcal {C}}(\omega ,\lambda )v \in C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\), and for any \(c>0\) it holds

$$\begin{aligned} \sup _{1/c \le \omega \le c, \tau \in {\mathbb {R}}, |\lambda | \le c}\{\Vert \partial _t{\mathcal {D}}(\omega ,\lambda ){{\mathcal {K}}}(\lambda ){\mathcal {C}}(\omega ,\lambda )v\Vert _\infty +\Vert \partial _x{\mathcal {D}}(\omega ,\lambda ){{\mathcal {K}}}(\lambda ){\mathcal {C}}(\omega ,\lambda )v\Vert _\infty : \;\Vert v\Vert _\infty \le 1\}<\infty .\nonumber \\ \end{aligned}$$
(4.12)

Proof of Claim

We have

$$\begin{aligned}&[{\mathcal {D}}_1(\omega ,\lambda ){{\mathcal {K}}}(\lambda ){\mathcal {C}}(\omega ,\lambda )v](t,x)\\&\quad =- \int _0^x\frac{c_1(x,\xi ,\lambda )c_2(\xi ,1,\lambda )b_2(\xi ,\lambda )}{a(\xi ,\lambda )} v_1(t+\omega A(x,\xi ,\lambda )-\omega A(\xi ,1,\lambda ),1)d\xi \\&\quad =\frac{1}{2\omega }\int _{t+\omega A(x,0,\lambda )-\omega A(0,1,\lambda )}^{t-\omega A(x,1,\lambda )}c_1(x,\xi _{t,\omega ,\lambda }(\zeta ),\lambda )c_2(\xi _{t,\omega ,\lambda }(\zeta ),1,\lambda ) b_2(\xi _{t,\omega ,\lambda }(\zeta ),\lambda ) v_1(\zeta ,1)d\zeta . \end{aligned}$$

Here we changed the integration variable \(\xi \) to

$$\begin{aligned} \zeta= & {} \zeta _{t,\omega ,\lambda }(\xi ):= t+\omega A(x,\xi ,\lambda )-\omega A(\xi ,1,\lambda )=t+\omega \int _\xi ^x\frac{dz}{a(z,\lambda )}+\omega \int _\xi ^1\frac{dz}{a(z,\lambda )},\; d\zeta \\= & {} -\frac{2\omega }{a(\xi ,\lambda )}d\xi , \end{aligned}$$

and \(\xi =\xi _{t,\omega ,\lambda }(\zeta )\) is the inverse transformation. Again, now we can proceed as in the proof of Claim 1.

Claim 4

Let the condition (1.7) be fulfilled. Then there exists \(\delta >0\) such that for all \(\omega ,\lambda \in {\mathbb {R}}\) with \(|\lambda |\le \delta \) the operator \(I-{\mathcal {C}}(\omega ,\lambda )\) is an isomorphism from \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) to itself. Moreover,

$$\begin{aligned} \sup _{\omega \in {\mathbb {R}}, |\lambda |\le \delta } \left\{ \Vert (I-{\mathcal {C}}(\omega ,\lambda ))^{-1}f\Vert _\infty :\; \Vert f\Vert _\infty \le 1\right\} <\infty . \end{aligned}$$
(4.13)

Proof of Claim

Take \(f \in C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\). We have to show that for all real numbers \(\omega \) and \(\lambda \) with \(\lambda \approx 0\) there exists a unique function \(v \in C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) satisfying the equation

$$\begin{aligned} (I-{\mathcal {C}}(\omega ,\lambda ))v=f \end{aligned}$$
(4.14)

and that \(\Vert v\Vert _\infty \le \text{ const }\Vert f\Vert _\infty \), where the constant does not depend on \(\omega ,\lambda \) and f. Equation (4.14) is satisfied if and only if for all \(t \in {\mathbb {R}}\) and \(x \in [0,1]\) it holds

$$\begin{aligned} v_1(t,x)= & {} -c_1(x,0,\lambda )v_2(t+\omega A(x,0,\lambda ),0)+f_1(t,x), \end{aligned}$$
(4.15)
$$\begin{aligned} v_2(t,x)= & {} c_2(x,1,\lambda )v_1(t-\omega A(x,1,\lambda ),1)+f_2(t,x). \end{aligned}$$
(4.16)

System (4.15), (4.16) is satisfied if and only if (4.15) is true and if it holds

$$\begin{aligned} v_2(t,x)= & {} c_2(x,1,\lambda )(-c_1(1,0,\lambda )v_2(t+\omega (A(1,0,\lambda )-A(x,1,\lambda )),0)\nonumber \\&+f_1(t-\omega A(x,1,\lambda ),1))+f_2(t,x), \end{aligned}$$
(4.17)

i.e., if and only if (4.15) and (4.17) are true and if

$$\begin{aligned} v_2(t,0)= & {} c_2(0,1,\lambda )(-c_1(1,0,\lambda )v_2(t+\omega (A(1,0,\lambda ) -A(0,1,\lambda )),0)\nonumber \\&+f_1(t-\omega A(0,1,\lambda ),1))+f_2(t,0). \end{aligned}$$
(4.18)

Equation (4.18) is a functional equation for the unknown function \(v_2(\cdot ,0)\). In order to solve this equation let us denote by \(C_{2\pi }({\mathbb {R}})\) the Banach space of all \(2\pi \)-periodic continuous functions \({\tilde{v}}:{\mathbb {R}} \rightarrow {\mathbb {R}}\) with the norm \(\Vert {\tilde{v}}\Vert _\infty :=\max \{|{\tilde{v}}(t)|: \; t \in {\mathbb {R}}\}\). Equation (4.18) is an equation in \(C_{2\pi }({\mathbb {R}})\) of the type

$$\begin{aligned} (I-{\widetilde{{\mathcal {C}}}}(\omega ,\lambda )){\tilde{v}}={\tilde{f}}(\omega ,\lambda ) \end{aligned}$$
(4.19)

with \({\tilde{v}},{\tilde{f}}\in C_{2\pi }({\mathbb {R}})\) defined by \({\tilde{v}}(t):=v_2(t,0)\) and

$$\begin{aligned}{}[{\tilde{f}}(\omega ,\lambda )](t):=c_2(0,1,\lambda )f_1(t-\omega A(0,1,\lambda ),1)+f_2(t,0) \end{aligned}$$
(4.20)

and with \({\widetilde{{\mathcal {C}}}}(\omega ,\lambda )\in {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}}))\) defined by

$$\begin{aligned} {[}{\widetilde{{\mathcal {C}}}}(\omega ,\lambda ){\tilde{v}}](t):=-c_1(1,0,\lambda )c_2(0,1,\lambda ) {\tilde{v}}(t+\omega (A(1,0,\lambda )-A(0,1,\lambda ))). \end{aligned}$$
(4.21)

From the definitions of the functions \(c_1\) and \(c_2\) it follows that

$$\begin{aligned} c_1(1,0,\lambda )c_2(0,1,\lambda )=\exp \int _0^1\frac{b_5(x,\lambda )}{a(x,\lambda )}dx, \end{aligned}$$

and assumption (1.7) yields

$$\begin{aligned} c_0:= c_1(1,0,0)c_2(0,1,0)\not =1. \end{aligned}$$

Now, we distinguish two cases.

Case 1: \(c_0<1\). Then there exists \(\delta >0\) such that for all \(\lambda \in [-\delta ,\delta ]\) it holds \(c_1(1,0,\lambda )c_2(0,1,\lambda )\le \frac{1+c_0}{2}<1\). Therefore

$$\begin{aligned} \Vert {\widetilde{{\mathcal {C}}}}(\omega ,\lambda )\Vert _{ {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}}))}\le \frac{1+c_0}{2}<1 \text{ for } \text{ all } \lambda \in [-\delta ,\delta ]. \end{aligned}$$

Hence, for all \(\lambda \in [-\delta ,\delta ]\) the operator \(I-{\widetilde{{\mathcal {C}}}}(\omega ,\lambda )\) is an isomorphism from \(C_{2\pi }({\mathbb {R}})\) to itself, and

$$\begin{aligned} \Vert (I-{\widetilde{{\mathcal {C}}}}(\omega ,\lambda ))^{-1}\Vert _{ {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}}))}\le \frac{1}{1-\frac{1+c_0}{2}}=\frac{2}{1-c_0}. \end{aligned}$$

Therefore, for all \(\omega ,\lambda \in {\mathbb {R}}\) with \(|\lambda | \le \delta \) there exists exactly one solution \(v_2(\cdot ,0)\in C_{2\pi }({\mathbb {R}})\) to (4.18), and

$$\begin{aligned} \Vert v_2(\cdot ,0)\Vert _\infty \le \text{ const }\Vert {\tilde{f}}(\omega ,\lambda )\Vert _\infty \le \text{ const }\Vert f\Vert _\infty , \end{aligned}$$

where the constants do not depend on \(\omega ,\lambda \) and f. Inserting this solution into the right-hand side of (4.17) we get \(v_2 \in C_{2\pi }({\mathbb {R}}\times [0,1])\), and inserting this into the right-hand side of (4.15) we get finally \(v_1 \in C_{2\pi }({\mathbb {R}}\times [0,1])\), i.e. the unique solution \(v=(v_1,v_2) \in C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) to (4.15), (4.16) such that \(\Vert v\Vert _\infty \le \text{ const }\Vert f\Vert _\infty \), where the constant does not depend on \(\omega ,\lambda \) and f.

Case 2: \(c_0>1.\) Then there exists \(\delta >0\) such that for all \(\lambda \in [-\delta ,\delta ]\) it holds \(c_1(1,0,\lambda )c_2(0,1,\lambda )\ge \frac{1+c_0}{2}>1\). Equation (4.18) is equivalent to

$$\begin{aligned} v_2(t,0)= & {} -\frac{v_2(t+\omega (A(0,1,\lambda )-A(1,0,\lambda )),0)}{c_1(1,0,\lambda )c_2(0,1,\lambda )}\nonumber \\&+\frac{f_1(t-\omega A(1,0,\lambda ),1)}{c_1(1,0,\lambda )}+\frac{f_2(t+\omega (A(0,1,\lambda ) -A(1,0,\lambda )),0)}{c_1(1,0,\lambda )c_2(0,1,\lambda )}. \end{aligned}$$

This equation is again of the type (4.19), but now with \(\Vert {\widetilde{{\mathcal {C}}}}(1,0)\Vert _{ {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}}))}\le 1/c_0\). Hence, there exists \(\delta >0\) such that

$$\begin{aligned} \Vert {\widetilde{{\mathcal {C}}}}(\omega ,\lambda )\Vert _{ {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}}))}\le \frac{2}{1+c_0}<1 \text{ for } \text{ all } \lambda \in [-\delta ,\delta ]. \end{aligned}$$

Therefore, we can proceed as in the case \(c_0<1\).
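The contraction argument in the proof of Claim 4 can be illustrated on a scalar model of (4.19): the unique solution is given by the Neumann series \(\sum _k {\widetilde{{\mathcal {C}}}}(\omega ,\lambda )^k{\tilde{f}}\). The factor q and the shift s below are illustrative choices, not data from the paper:

```python
import numpy as np

# Scalar model of (4.19): v(t) = q*v(t + s) + f(t) on C_2pi, with an
# illustrative contraction factor q (the c1*c2-type factor < 1 of Case 1).
q, s = 0.6, 0.8
f = np.cos                            # 2*pi-periodic right-hand side

def solve(t, N=200):
    """Truncated Neumann series (I - C)^{-1} f = sum_k q**k * f(t + k*s);
    the tail is bounded by q**N/(1 - q), mirroring the bound 2/(1 - c0)."""
    k = np.arange(N)
    return float(np.sum(q**k * f(t + k * s)))

ts = np.linspace(0.0, 2.0 * np.pi, 7)
residual = max(abs(solve(t) - q * solve(t + s) - f(t)) for t in ts)
assert residual < 1e-12
print("max residual of the functional equation:", residual)
```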

Remark 12

Definition (4.21) implies that \(\frac{d}{dt}{\widetilde{{\mathcal {C}}}}(\omega ,\lambda ){\tilde{v}}={\widetilde{{\mathcal {C}}}} (\omega ,\lambda )\frac{d}{dt}{\tilde{v}}\) for all \({\tilde{v}}\in C^1_{2\pi }({\mathbb {R}})\). This yields the estimate

$$\begin{aligned} \left\| {\widetilde{{\mathcal {C}}}}(\omega ,\lambda ){\tilde{v}}\right\| _\infty +\left\| \frac{d}{dt}{\widetilde{{\mathcal {C}}}}(\omega ,\lambda ){\tilde{v}}\right\| _\infty \le \left\| {\widetilde{{\mathcal {C}}}}(\omega ,\lambda )\right\| _{{{\mathcal {L}}}( C_{2\pi }({\mathbb {R}}))}(\Vert {\tilde{v}}\Vert _\infty +\Vert {\tilde{v}}'\Vert _\infty ) \text{ for } \text{ all } {\tilde{v}} \in C^1_{2\pi }({\mathbb {R}}). \end{aligned}$$

Hence, \((I-{\widetilde{{\mathcal {C}}}}(\omega ,\lambda ))^{-1}\) is a linear bounded operator from \(C^1_{2\pi }({\mathbb {R}})\) into \(C^1_{2\pi }({\mathbb {R}})\) for \(\lambda \approx 0\). It follows that, for given \(f \in C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\), the solution v to (4.14) belongs to \(C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) and, moreover,

$$\begin{aligned} \sup _{\omega \in {\mathbb {R}}, |\lambda | \le \delta }\Vert (I-{\mathcal {C}}(\omega ,\lambda ))^{-1}\Vert _{{{\mathcal {L}}}(C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2))}<\infty . \end{aligned}$$
(4.22)

Let us turn back to Fredholmness of the operator \(I-{\mathcal {C}}(\omega ,\lambda )-{\mathcal {D}}(\omega ,\lambda )({{\mathcal {J}}}(\omega ,\tau ,\lambda )+{{\mathcal {K}}}(\lambda ))\) from the space \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) into itself for \(\omega \ne 0\) and \(\lambda \approx 0\). Note that the space \(C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) is completely continuously embedded into the space \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\). By Claim 1, for given \(\omega \ne 0\), the operator \({\mathcal {D}}(\omega ,\lambda ){{\mathcal {J}}}(\omega ,\tau ,\lambda )\) is completely continuous from \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) into itself. Therefore, it remains to show that for \(\omega \ne 0\) and \(\lambda \approx 0\) the operator \(I-{\mathcal {C}}(\omega ,\lambda )-{\mathcal {D}}(\omega ,\lambda ){{\mathcal {K}}}(\lambda )\) is Fredholm of index zero from \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) into itself. By Claim 4, this is true whenever the operator \(I-(I-{\mathcal {C}}(\omega ,\lambda ))^{-1}{\mathcal {D}}(\omega ,\lambda ){{\mathcal {K}}}(\lambda )\) is Fredholm of index zero from \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) into itself, for \(\omega \ne 0\) and \(\lambda \approx 0\). For that we use the following Fredholmness criterion of S. M. Nikolskii (cf. e.g. [20, Theorem XIII.5.2]):

Theorem 13

Let U be a Banach space and \(K \in {{\mathcal {L}}}(U)\) be an operator such that \(K^2\) is completely continuous. Then the operator \(I-K\) is Fredholm of index zero.

On account of Theorem 13, it remains to prove the following statement.

Claim 5

For given \(\omega \ne 0\) and \(\lambda \approx 0\), the operator \( ((I-{\mathcal {C}}(\omega ,\lambda ))^{-1}{\mathcal {D}}(\omega ,\lambda ){{\mathcal {K}}}(\lambda ))^2\) is completely continuous from \(C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) into itself.

Proof of Claim

A straightforward calculation shows that

$$\begin{aligned} \left( (I-{\mathcal {C}})^{-1}{\mathcal {D}}{{\mathcal {K}}}\right) ^2= (I-{\mathcal {C}})^{-1}\left( ({\mathcal {D}}{{\mathcal {K}}})^2+{\mathcal {D}}{{\mathcal {K}}}{\mathcal {C}}(I-{\mathcal {C}})^{-1}{\mathcal {D}}{{\mathcal {K}}}\right) . \end{aligned}$$
(4.23)

The desired statement now follows from Claims  2 and 3.
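Identity (4.23) is purely algebraic (it only uses \((I-{\mathcal {C}})^{-1}=I+{\mathcal {C}}(I-{\mathcal {C}})^{-1}\)) and can be sanity-checked with matrices in place of the operators; the random data below are illustrative, the only requirement being that \(I-{\mathcal {C}}\) be invertible:

```python
import numpy as np

# Finite-dimensional sanity check of identity (4.23) with random matrices
# in place of the operators C, D, K.
rng = np.random.default_rng(0)
n = 6
C = 0.3 * rng.standard_normal((n, n))   # small norm, so I - C is invertible
D = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

I = np.eye(n)
R = np.linalg.solve(I - C, I)           # (I - C)^{-1}
X = R @ D @ K                           # (I - C)^{-1} D K

lhs = X @ X
rhs = R @ ((D @ K) @ (D @ K) + D @ K @ C @ R @ D @ K)
assert np.allclose(lhs, rhs)
print("identity (4.23) holds on random matrices")
```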

Remark 14

For proving Lemma 10 we did not need the estimates (4.9), (4.11)–(4.13) and (4.22). These estimates will be used in the proof of Lemma 20 below (more exactly, in the proof of Claim 7 there).

4.2 Kernel and Image of the Linearization

This subsection concerns the kernel and the image of the operator

$$\begin{aligned} {{\mathcal {L}}}:=I-{\mathcal {C}}-{\mathcal {D}}({{\mathcal {J}}}+{{\mathcal {K}}}), \end{aligned}$$
(4.24)

where

$$\begin{aligned} {\mathcal {C}}:={\mathcal {C}}(1,0),\;{\mathcal {D}}:={\mathcal {D}}(1,0),\;{{\mathcal {J}}}:={{\mathcal {J}}}(1,\tau _0,0), \text{ and } {{\mathcal {K}}}:= {{\mathcal {K}}}(0) \end{aligned}$$
(4.25)

(cf. (2.18), (2.19), (4.1) and (4.2)). From now on we will use assumptions \(\mathbf (A1)\)\(\mathbf (A3)\) and (1.7) of Theorem 2. In particular, we will fix a solution \(u=u_0\ne 0\) to (1.4) with \(\tau =\tau _0\) and \(\mu =i\) and a solution \(u=u_*\ne 0\) to (1.6) fulfilling assumption \(\mathbf (A3)\) (or, more precisely, (4.41) below).

We will describe the kernel and the image of the operator \({{\mathcal {L}}}\) by means of the eigenfunctions \(u_0\) and \(u_*\). To this end, we introduce two functions \(v_0,v_*:[0,1] \rightarrow {\mathbb {C}}^2\), two functions \({{\varvec{v}}}_0,{{\varvec{v}}}_*:{\mathbb {R}}\times [0,1] \rightarrow {\mathbb {C}}^2\) and four functions \(v_0^1,v_0^2,v_*^1,v_*^2:{\mathbb {R}}\times [0,1] \rightarrow {\mathbb {R}}^2\) by

$$\begin{aligned} v_0(x):= \left[ \begin{array}{c} iu_0(x)+a_0(x)u'_0(x)\\ iu_0(x)-a_0(x)u'_0(x) \end{array} \right] , \; {{\varvec{v}}}_0(t,x):=e^{it}v_0(x),\; v_0^1:=\text{ Re }\,{{\varvec{v}}}_0,\; v_0^2:=\text{ Im }\,{{\varvec{v}}}_0\nonumber \\ \end{aligned}$$
(4.26)

and

$$\begin{aligned} v_*(x):= \left[ \begin{array}{c} u_*(x)+iU_*(x)\\ u_*(x)-iU_*(x) \end{array} \right] , \; {{\varvec{v}}}_*(t,x) :=e^{it}v_*(x),\; v_*^1 :=\text{ Re }\,{{\varvec{v}}}_*,\; v_*^2:=\text{ Im }\,{{\varvec{v}}}_*,\nonumber \\ \end{aligned}$$
(4.27)

where

$$\begin{aligned} U_*(x):=\left( \frac{b_6^0(x)}{a_0(x)}-2a_0'(x)\right) u_*(x)-a_0(x)u'_*(x) +\frac{1}{a_0(x)}\int _x^1\left( b_3^0(\xi )+b_4^0(\xi )e^{i\tau _0}\right) u_*(\xi )d\xi .\nonumber \\ \end{aligned}$$
(4.28)

Lemma 15

If the conditions of Theorem 2 are fulfilled, then \(\ker {{\mathcal {L}}}= \text{ span }\{v^1_0,v^2_0\}\).

Proof

Because \(u_0\) is a solution to (1.4) with \(\tau =\tau _0\) and \(\mu =i\), the complex-valued function \( {{\varvec{u}}}_0(t,x):=e^{it}u_0(x) \) is a solution to the linear homogeneous problem

$$\begin{aligned} \left. \begin{array}{l} \partial _t^2u(t,x)- a_0(x)^2\partial _x^2u(t,x)\\ =b_3^0(x)u(t,x)+b_4^0(x)u(t-\tau _0,x)+b_5^0(x)\partial _tu(t,x) +b_6^0(x)\partial _xu(t,x),\\ u(t,0) = \partial _xu(t,1)=0,\;u(t+2\pi ,x)=u(t,x). \end{array} \right\} \end{aligned}$$
(4.29)

On the other hand, if u is a solution to (4.29), then for all \(k \in {\mathbb {Z}}\) the functions

$$\begin{aligned} {\tilde{u}}_k(x):=\frac{1}{2\pi }\int _0^{2\pi }u(t,x)e^{-ikt}dt \end{aligned}$$

satisfy the ODE \(\left( -k^2-b_3^0(x)-b_4^0(x)e^{-ik\tau _0}-ikb_5^0(x)\right) {\tilde{u}}_k(x)=a_0(x)^2{\tilde{u}}_k''(x)+b_6^0(x){\tilde{u}}_k'(x)\) with boundary conditions \({\tilde{u}}_k(0)={\tilde{u}}_k'(1)=0\). Assumptions \(\mathbf (A1)\) and \(\mathbf (A2)\) imply that \({\tilde{u}}_k=0\) for all \(k \in {\mathbb {Z}}\setminus \{\pm 1\}\) and \({\tilde{u}}_1=c u_0\) for some constant c, i.e., \(u\in \text{ span }\{{{\varvec{u}}}_0,\overline{{{\varvec{u}}}_0}\}\). In other words, \(\text{ span }\{{{\varvec{u}}}_0,\overline{{{\varvec{u}}}_0}\}\) consists of all solutions \(u:{\mathbb {R}} \times [0,1] \rightarrow {\mathbb {C}}\) to (4.29).
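The ODE for \({\tilde{u}}_k\) comes from the standard Fourier-coefficient computation: multiplying the differential equation in (4.29) by \(e^{-ikt}/2\pi \) and integrating over one period, the \(2\pi \)-periodicity of u yields

```latex
\frac{1}{2\pi }\int _0^{2\pi }\partial _t^2u(t,x)\,e^{-ikt}dt=-k^2{\tilde{u}}_k(x),\qquad
\frac{1}{2\pi }\int _0^{2\pi }\partial _tu(t,x)\,e^{-ikt}dt=ik\,{\tilde{u}}_k(x),\qquad
\frac{1}{2\pi }\int _0^{2\pi }u(t-\tau _0,x)\,e^{-ikt}dt=e^{-ik\tau _0}{\tilde{u}}_k(x),
```

the first two formulas by integration by parts in t together with periodicity, the third by the substitution \(t \mapsto t+\tau _0\).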

Now we apply Lemmas 7 and 8 with \(\omega =1\), \(\tau =\tau _0\), \(\lambda =0\) and with \(b(x,\lambda ,u_3,u_4,u_5,u_6)\) replaced by \(b_3^0(x)u_3+b_4^0(x)u_4+b_5^0(x)u_5+b_6^0(x)u_6\). We conclude that \(\text{ span }\{{{\varvec{v}}}_0,\overline{{{\varvec{v}}}_0}\}\) consists of all solutions \(v:{\mathbb {R}} \times [0,1] \rightarrow {\mathbb {C}}^2\) to the linear homogeneous equation \( v={\mathcal {C}} v-{\mathcal {D}}({{\mathcal {J}}}+{{\mathcal {K}}})v, \) where \({{\varvec{v}}}_0\) is defined by (4.26). As \(v_0^1=\text{ Re }{{\varvec{v}}}_0\) and \(v_0^2=\text{ Im }{{\varvec{v}}}_0\), the proof is complete. \(\square \)

In what follows we denote by “\(\cdot \)” the Hermitian scalar product in \({\mathbb {C}}^2\), i.e. \(v\cdot w:=v_1\overline{w_1}+v_2\overline{w_2}\) for \(v,w \in {\mathbb {C}}^2\). Further, for continuous functions \(v,w:[0,2\pi ]\times [0,1]\rightarrow {\mathbb {C}}^2\) we write

$$\begin{aligned}&\langle v,w \rangle :=\frac{1}{2\pi }\int _0^{2\pi }\int _0^1v(t,x)\cdot w(t,x)dxdt =\frac{1}{2\pi }\int _0^{2\pi }\int _0^1(v_1(t,x)\overline{w_1(t,x)}\\&\quad +v_2(t,x)\overline{w_2(t,x)})dxdt. \end{aligned}$$

Moreover, we will work with the operator \({{\mathcal {A}}}\in {{\mathcal {L}}}\left( C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2);C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\right) \), the components of which are defined by

$$\begin{aligned} \left. \begin{array}{rcl} \displaystyle [{{\mathcal {A}}}_1v](t,x)&{}:=&{}\partial _tv_1(t,x)-a_0(x)\partial _xv_1(t,x) -b^0_1(x)v_1(t,x),\\ \displaystyle [{{\mathcal {A}}}_2v](t,x)&{}:=&{}\partial _tv_2(t,x)+a_0(x)\partial _xv_2(t,x) -b^0_2(x)v_2(t,x) \end{array} \right\} \; b_j^0(x):=b_j(x,0), j=1,2, \end{aligned}$$

i.e. \({{\mathcal {A}}}={{\mathcal {A}}}(1,0)\) (cf. (4.5)), and its formal adjoint \({{\mathcal {A}}}^* \in {{\mathcal {L}}}\left( C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2);C_{2\pi }\right. \left. ({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\right) \), which is defined by

$$\begin{aligned}{}[{{\mathcal {A}}}_1^*v](t,x):= & {} -\partial _tv_1(t,x)+\partial _x(a_0(x)v_1(t,x))-b^0_1(x)v_1(t,x),\\ \displaystyle [{{\mathcal {A}}}^*_2v](t,x):= & {} -\partial _tv_2(t,x)-\partial _x(a_0(x)v_2(t,x)) -b^0_2(x)v_2(t,x). \end{aligned}$$

It is easy to verify that \(\langle {{\mathcal {A}}}v,w\rangle =\langle v,{{\mathcal {A}}}^*w\rangle \) for all \(v,w \in C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) which satisfy the boundary conditions in (2.1).
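For instance, for the first components this identity is just integration by parts in t and x: the t-boundary terms vanish by \(2\pi \)-periodicity, and the x-boundary term \([a_0v_1\overline{w_1}]_{x=0}^{x=1}\) vanishes due to the boundary conditions in (2.1), so that

```latex
\frac{1}{2\pi }\int _0^{2\pi }\!\!\int _0^1
\bigl(\partial _tv_1-a_0\partial _xv_1-b_1^0v_1\bigr)\,\overline{w_1}\,dx\,dt
=\frac{1}{2\pi }\int _0^{2\pi }\!\!\int _0^1
v_1\,\overline{\bigl(-\partial _tw_1+\partial _x(a_0w_1)-b_1^0w_1\bigr)}\,dx\,dt .
```

The computation for the second components is analogous.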

Lemma 16

If the conditions of Theorem 2 are fulfilled, then

$$\begin{aligned} {\mathrm{im}}\,{{\mathcal {L}}}=\{f \in C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2): \; \langle f,{{\mathcal {A}}}^*v_*^1\rangle =\langle f,{{\mathcal {A}}}^*v_*^2\rangle =0\}. \end{aligned}$$

Proof

It follows from Lemmas 10 and 15 that \(\text{ im }\,{{\mathcal {L}}}\) is a closed subspace of codimension two in \(C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\). Hence, it suffices to show that

$$\begin{aligned} \text{ im }\,{{\mathcal {L}}}\subseteq \{f \in C_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2): \; \langle f,{{\mathcal {A}}}^*v_*^1\rangle =\langle f,{{\mathcal {A}}}^*v_*^2\rangle =0\} \end{aligned}$$
(4.30)

and that

$$\begin{aligned} {{\mathcal {A}}}^*v_*^1 \text{ and } {{\mathcal {A}}}^*v_*^2 \text{ are } \text{ linearly } \text{ independent. } \end{aligned}$$
(4.31)

To prove (4.30), fix an arbitrary \(v \in C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\). There exists a sequence \(w^1,w^2,\ldots \in C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) such that \(\Vert v-w^k\Vert _\infty \rightarrow 0\) as \(k\rightarrow \infty \). Moreover, the functions \(({\mathcal {C}}+{\mathcal {D}}({{\mathcal {J}}}+{{\mathcal {K}}}))w^k\) satisfy the boundary conditions in (2.1). Also the function \(v_*^1\) satisfies the boundary conditions in (2.1). The last fact follows from the equalities \([(u_* +iU_*)+(u_* -iU_*)]_{x=0}=2u_*(0)=0\) and

$$\begin{aligned}{}[(u_* +iU_*)-(u_* -iU_*)]_{x=1}=2i\left[ \left( \frac{b_6^0}{a_0}-2a_0'\right) u_* -a_0u_*'\right] _{x=1}=0 \end{aligned}$$
(4.32)

because the eigenfunction \(u_*\) satisfies the boundary conditions in (1.6). Therefore, by (4.6),

$$\begin{aligned}&\langle ({\mathcal {C}}+{\mathcal {D}}({{\mathcal {J}}}+{{\mathcal {K}}}))w^k,{{\mathcal {A}}}^*v_*^1\rangle = \langle {{\mathcal {A}}}({\mathcal {C}}+{\mathcal {D}}({{\mathcal {J}}}+{{\mathcal {K}}}))w^k,v_*^1\rangle = \langle ({{\mathcal {J}}}+{{\mathcal {K}}})w^k,v_*^1\rangle \\&\quad = \langle w^k,({{\mathcal {J}}}^*+{{\mathcal {K}}}^*)v_*^1\rangle , \end{aligned}$$

where the operators \({{\mathcal {J}}}^*,{{\mathcal {K}}}^*\in {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2))\) are the formal adjoint operators to \({{\mathcal {J}}}\) and \({{\mathcal {K}}}\). Due to (2.18) and (2.19), they are given by the formulas

$$\begin{aligned} {[}{{\mathcal {J}}}^*_1w](t,x)= & {} - {[}{{\mathcal {J}}}^*_2w](t,x)=\frac{1}{2a_0(x)}\int _x^1(b_3^0(\xi )(w_1(t,\xi ) +w_2(t,\xi ))\\&+b_4^0(\xi )(w_1(t+\tau _0,\xi )+w_2(t+\tau _0,\xi )))d\xi \end{aligned}$$

and

$$\begin{aligned} {{\mathcal {K}}}^*_1w=b_1^0w_2,\; {{\mathcal {K}}}^*_2w=b_2^0w_1, \end{aligned}$$

respectively. It follows that

$$\begin{aligned}&\langle {{\mathcal {L}}}v,{{\mathcal {A}}}^*v_*^1\rangle =\langle (I-{\mathcal {C}}-{\mathcal {D}}({{\mathcal {J}}}+{{\mathcal {K}}}))v,{{\mathcal {A}}}^* v_*^1\rangle =\langle v,{{\mathcal {A}}}^* v_*^1\rangle -\lim _{k \rightarrow \infty }\langle ({\mathcal {C}}+{\mathcal {D}}({{\mathcal {J}}}+{{\mathcal {K}}})) w^k,{{\mathcal {A}}}^* v_*^1\rangle \\&\quad =\langle v,{{\mathcal {A}}}^* v_*^1\rangle -\lim _{k \rightarrow \infty }\langle ({{\mathcal {J}}}+{{\mathcal {K}}})w^k,v_*^1\rangle =\langle v,({{\mathcal {A}}}^*-{{\mathcal {J}}}^*-{{\mathcal {K}}}^*)v_*^1\rangle . \end{aligned}$$

Similarly, \(\langle {{\mathcal {L}}}v,{{\mathcal {A}}}^*v_*^2\rangle =\langle v,({{\mathcal {A}}}^*-{{\mathcal {J}}}^*-{{\mathcal {K}}}^*)v_*^2\rangle \). Hence, in order to prove (4.30) it suffices to show that

$$\begin{aligned} ({{\mathcal {A}}}^*-{{\mathcal {J}}}^*-{{\mathcal {K}}}^*){{\varvec{v}}}_*=0. \end{aligned}$$
(4.33)

Taking into account the definitions of the operators \({{\mathcal {A}}}^*\), \({{\mathcal {J}}}^*\) and \({{\mathcal {K}}}^*\) and of the function \(v_*\) (cf. (4.27)), it is easy to see that (4.33) is satisfied if and only if, for any \(x \in [0,1]\),

$$\begin{aligned}&\left[ -iv_{*1}+(a_0v_{*1})'-b_1^0(v_{*1}+v_{*2})\right] (x) =\frac{1}{2a_0(x)}\int _x^1(b_3^0(\xi )+b_4^0(\xi ) e^{i\tau _0})(v_{*1}(\xi )+v_{*2}(\xi ))d\xi ,\\&\left[ -iv_{*2}-(a_0v_{*2})'-b_2^0(v_{*1}+v_{*2})\right] (x) =-\frac{1}{2a_0(x)}\int _x^1(b_3^0(\xi )+b_4^0(\xi ) e^{i\tau _0})(v_{*1}(\xi )+v_{*2}(\xi ))d\xi , \end{aligned}$$

where \(v_{*1}=u_*+iU_*\) and \(v_{*2}=u_*-iU_*\) are the components of the vector function \(v_*\). Considering the sum and the difference of these two equations and taking into account that \(v_{*1}+v_{*2}=2u_*\) and \(v_{*1}-v_{*2}=2iU_*\), we get

$$\begin{aligned}&-iu_{*}(x)+i(a_0U_{*})'(x)-(b_1^0(x)+b_2^0(x))u_*(x)=0, \end{aligned}$$
(4.34)
$$\begin{aligned}&U_*(x)+(a_0u_{*})'(x)-(b_1^0(x)-b_2^0(x))u_*(x)=\frac{1}{a_0(x)} \int _x^1(b_3^0(\xi )+b_4^0(\xi )e^{i\tau _0})u_*(\xi )d\xi .\nonumber \\ \end{aligned}$$
(4.35)

Thus, (4.33) is equivalent to (4.34)–(4.35). In order to show (4.34), we use the equality \(b_1^0+b_2^0=b_5^0\) (cf. (2.14)) and note that (4.34) is equivalent to

$$\begin{aligned} (a_0U_*)'=(1-ib_5^0)u_*. \end{aligned}$$
(4.36)

On the other side, (4.28) yields

$$\begin{aligned} (a_0U_*)'=(b_6^0u_*)'-(a_0^2u_*)''-(b_3^0+b_4^0e^{i\tau _0})u_*. \end{aligned}$$
(4.37)

Inserting (4.37) into (4.36), we see that (4.36), and hence (4.34), holds whenever \(u_*\) solves the ordinary differential equation in (1.6); since \(u_*\) does so by assumption, (4.34) is satisfied.

Equation (4.35) is satisfied by the definition (4.28) of the function \(U_*\) and the equality \(b_1^0-b_2^0=-a_0'+b_6^0/a_0\) (cf. (2.14)). The proof of (4.33) and, hence, of (4.30) is therefore complete.

It remains to prove (4.31). To this end, we introduce functions \(w_0: [0,1] \rightarrow {\mathbb {C}}^2\), \({{\varvec{w}}}_0: {\mathbb {R}} \times [0,1] \rightarrow {\mathbb {C}}^2\) and \(w^1,w^2 \in C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) by

$$\begin{aligned} w_0:=\left[ \begin{array}{c} (i+\tau _0b_4^0e^{-i\tau _0})u_0+a_0u_0'\\ (i+\tau _0b_4^0e^{-i\tau _0})u_0-a_0u_0' \end{array} \right] , \quad {{\varvec{w}}}_0(t,x):=e^{it}w_0(x) \end{aligned}$$
(4.38)

and

$$\begin{aligned} (I-{\mathcal {C}})w^1={\mathcal {D}}\,\text{ Re }\,{{\varvec{w}}}_0,\quad (I-{\mathcal {C}})w^2=-{\mathcal {D}}\,\text{ Im }\,{{\varvec{w}}}_0. \end{aligned}$$
(4.39)

Note that the equations (4.39) define the functions \(w^1,w^2 \in C^1_{2\pi }({\mathbb {R}} \times [0,1];{\mathbb {R}}^2)\) uniquely, as follows from Claim 4 in Sect. 4.1 (see also Remark 12). Combining (4.6) with (4.39), we obtain

$$\begin{aligned} {{\mathcal {A}}}w^1=\text{ Re }\,{{\varvec{w}}}_0,\; {{\mathcal {A}}}w^2=-\text{ Im }\,{{\varvec{w}}}_0. \end{aligned}$$

Therefore,

$$\begin{aligned}&\langle w^1,{{\mathcal {A}}}^*{{\varvec{v}}}_* \rangle = \langle {{\mathcal {A}}}w^1,{{\varvec{v}}}_* \rangle = \langle \text{ Re }\,{{\varvec{w}}}_0,{{\varvec{v}}}_*\rangle \nonumber \\&\quad =\frac{1}{4\pi }\int _0^{2\pi }\int _0^1\left( e^{it}w_0(x)+e^{-it} \overline{w_0(x)}\right) \cdot e^{it}v_*(x)\,dxdt=\frac{1}{2}\int _0^1w_0(x)\cdot v_*(x)\,dx.\nonumber \\ \end{aligned}$$
(4.40)

By (4.27) and (4.38), the right hand side of (4.40) is equal to

$$\begin{aligned}&\frac{1}{2}\int _0^1\left( ((i+\tau _0b_4^0e^{-i\tau _0})u_0+a_0u_0') \cdot (\overline{u_*}-i\overline{U_*})+ ((i+\tau _0b_4^0e^{-i\tau _0})u_0-a_0u_0')\cdot (\overline{u_*} +i\overline{U_*})\right) dx\\&\quad =\int _0^1((i+\tau _0b_4^0e^{-i\tau _0})u_0\overline{u_*}-ia_0u_0' \overline{U_*})dx =\int _0^1((i+\tau _0b_4^0e^{-i\tau _0})u_0\overline{u_*}+iu_0(a_0 \overline{U_*})')\,dx. \end{aligned}$$

Finally, we use (4.28) and the definition of \(\sigma \) in (A3) to get

$$\begin{aligned} \langle w^1,{{\mathcal {A}}}^*v_*\rangle = \int _0^1(2i+\tau _0b_4^0e^{-i\tau _0}-b_5^0)u_0\overline{u_*}\,dx=\sigma . \end{aligned}$$

Similarly,

$$\begin{aligned}&\langle w^2,{{\mathcal {A}}}^*{{\varvec{v}}}_* \rangle = -\langle \text{ Im }\,{{\varvec{w}}}_0,{{\varvec{v}}}_*\rangle =-\frac{1}{4\pi i}\int _0^{2\pi }\int _0^1\left( e^{it}w_0-e^{-it}\overline{w_0}\right) \cdot e^{it}{v_*}\,dxdt\\&\quad =-\frac{1}{2i}\int _0^1w_0\cdot {v_*}\,dx=i\sigma . \end{aligned}$$

Now, we normalize the eigenfunctions \(u_0\) and \(u_*\) so that

$$\begin{aligned} \sigma =\int _0^1\left( 2i-b^0_5(x)+\tau _0e^{-i\tau _0}b^0_4(x) \right) u_0(x)\overline{u_*(x)}dx=1. \end{aligned}$$
(4.41)

It follows that

$$\begin{aligned} \langle w^j,{{\mathcal {A}}}^*v_*^k \rangle =\delta ^{jk}, \end{aligned}$$
(4.42)

which yields (4.31), as desired. \(\square \)

4.3 Splitting of Equation (4.3)

Given \(\varphi \in {\mathbb {R}}\), we introduce a time shift operator \(S_\varphi \in {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2))\) by

$$\begin{aligned}{}[S_\varphi v](t,x):=v(t+\varphi ,x). \end{aligned}$$
(4.43)

It is easy to verify that

$$\begin{aligned}&S_\varphi {{\mathcal {A}}}(\omega ,\lambda )={{\mathcal {A}}}(\omega ,\lambda )S_\varphi ,\; S_\varphi {\mathcal {C}}(\omega ,\lambda )={\mathcal {C}}(\omega ,\lambda )S_\varphi ,\; S_\varphi {\mathcal {D}}(\omega ,\lambda )={\mathcal {D}}(\omega ,\lambda )S_\varphi \end{aligned}$$
(4.44)

and

$$\begin{aligned}&S_\varphi {{\mathcal {B}}}(v,\omega ,\tau ,\lambda )={{\mathcal {B}}}(S_\varphi v,\omega ,\tau ,\lambda ) \end{aligned}$$
(4.45)

for all \(\varphi , \omega , \tau , \lambda \in {\mathbb {R}}\) and \(v \in C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\). It follows that \(S_\varphi {{\mathcal {L}}}={{\mathcal {L}}}S_\varphi \), in particular,

$$\begin{aligned} S_\varphi \ker {{\mathcal {L}}}=\ker {{\mathcal {L}}}, \; S_\varphi \text{ im }\,{{\mathcal {L}}}=\text{ im }\, {{\mathcal {L}}}. \end{aligned}$$
(4.46)

Since \(\ker {{\mathcal {L}}}\) is finite dimensional, there exists a topological complement \({{\mathcal {W}}}\) to \(\ker {{\mathcal {L}}}\) in \( C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\), i.e., a closed subspace transversal to \(\ker {{\mathcal {L}}}\). Since the map \(\varphi \in {\mathbb {R}} \mapsto S_\varphi \in {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2))\) is strongly continuous, \({{\mathcal {W}}}\) can be chosen to be invariant with respect to \(S_\varphi \), i.e.,

$$\begin{aligned} C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)=\ker {{\mathcal {L}}}\oplus {{\mathcal {W}}}\ \text{ and } \ S_\varphi {{\mathcal {W}}}={{\mathcal {W}}}\end{aligned}$$
(4.47)

(cf. [6, Theorem 2]). Further, let us introduce a projection operator \(P \in {{\mathcal {L}}}(C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2))\) by

$$\begin{aligned} Pv:=\langle v,{{\mathcal {A}}}^*v_*^1\rangle w^1+\langle v,{{\mathcal {A}}}^*v_*^2\rangle w^2, \end{aligned}$$
(4.48)

where the functions \(v_*^j\) and \(w^k\) are given by (4.27) and (4.39). The projection property \(P^2=P\) follows from (4.42). Moreover, Lemma 16 implies that

$$\begin{aligned} \ker P= \text{ im }\,{{\mathcal {L}}}. \end{aligned}$$
(4.49)
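Indeed, the projection property can be checked in one line from (4.42): for \(v \in C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\), the (real) coefficients \(\langle v,{{\mathcal {A}}}^*v_*^j\rangle \) pass through the scalar product, so that

```latex
P^2v=\sum _{k=1}^{2}\langle Pv,{{\mathcal {A}}}^*v_*^k\rangle w^k
=\sum _{j,k=1}^{2}\langle v,{{\mathcal {A}}}^*v_*^j\rangle \,
 \langle w^j,{{\mathcal {A}}}^*v_*^k\rangle \,w^k
=\sum _{k=1}^{2}\langle v,{{\mathcal {A}}}^*v_*^k\rangle w^k=Pv.
```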

Furthermore, from (4.38) it follows that \([S_\varphi {{\varvec{w}}}_0](t,x)={{\varvec{w}}}_0(t+\varphi ,x)=e^{i(t+\varphi )}w_0(x)=e^{i\varphi }{{\varvec{w}}}_0(t,x)\) and, hence,

$$\begin{aligned} S_\varphi \text{ Re }\,{{\varvec{w}}}_0=\cos \varphi \, \text{ Re }\,{{\varvec{w}}}_0-\sin \varphi \,\text{ Im }\,{{\varvec{w}}}_0\ \text{ and } \ S_\varphi \text{ Im }\,{{\varvec{w}}}_0=\cos \varphi \,\text{ Im }\,{{\varvec{w}}}_0+\sin \varphi \,\text{ Re }\,{{\varvec{w}}}_0. \end{aligned}$$

Similarly one shows that \( S_\varphi v_*^1=\cos \varphi \,v_*^1-\sin \varphi \,v_*^2\) and \( S_\varphi v_*^2=\cos \varphi \,v_*^2+\sin \varphi \,v_*^1. \) On account of (4.44), for every \(v \in C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) we obtain

$$\begin{aligned}&PS_\varphi v=\langle S_\varphi v,{{\mathcal {A}}}^*v_*^1\rangle w^1+\langle S_\varphi v,{{\mathcal {A}}}^*v_*^2\rangle w^2= \langle v,{{\mathcal {A}}}^*S_{-\varphi }v_*^1\rangle w^1+\langle v,{{\mathcal {A}}}^*S_{-\varphi }v_*^2 \rangle w^2\nonumber \\&\quad =\left( \cos \varphi \langle v,{{\mathcal {A}}}^*v_*^1\rangle +\sin \varphi \langle v,{{\mathcal {A}}}^*v_*^2\rangle \right) w^1 +\left( -\sin \varphi \langle v,{{\mathcal {A}}}^*v_*^1\rangle +\cos \varphi \langle v,{{\mathcal {A}}}^*v_*^2 \rangle \right) w^2\nonumber \\&\quad =\langle v,{{\mathcal {A}}}^*v_*^1\rangle \left( \cos \varphi \, w^1-\sin \varphi \,w^2\right) + \langle v,{{\mathcal {A}}}^*v_*^2\rangle \left( \sin \varphi \, w^1+\cos \varphi \,w^2\right) \nonumber \\&\quad =\langle v,{{\mathcal {A}}}^*v_*^1\rangle S_\varphi w^1+\langle v,{{\mathcal {A}}}^*v_*^2\rangle S_\varphi w^2= S_\varphi Pv. \end{aligned}$$
(4.50)

Finally, we use the ansatz (cf. (4.47))

$$\begin{aligned} v=u+w,\quad u \in \ker {{\mathcal {L}}},\; w \in {{\mathcal {W}}}\end{aligned}$$
(4.51)

and rewrite equation (4.3) as a system of two equations, namely

$$\begin{aligned}&P\left( (I-{\mathcal {C}}(\omega ,\lambda ))(u+w)-{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda )\right) =0, \end{aligned}$$
(4.52)
$$\begin{aligned}&(I-P)\left( (I-{\mathcal {C}}(\omega ,\lambda ))(u+w)-{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda )\right) =0. \end{aligned}$$
(4.53)

5 The External Lyapunov-Schmidt Equation

In this section we solve the so-called external Lyapunov-Schmidt equation (4.53) with respect to \(w \approx 0\) for \(u\approx 0\), \(\omega \approx 1\), \(\tau \approx \tau _0\) and \(\lambda \approx 0\). More exactly, in Subsect. 5.1 we present a generalized implicit function theorem, which will be used in Subsect. 5.2 to solve equation (4.53).

5.1 A Generalized Implicit Function Theorem

In this subsection we present the generalized implicit function theorem, which is a particular case of [25, Theorem 2.2]. It concerns abstract parameter-dependent equations of the type

$$\begin{aligned} F(w,p)=0. \end{aligned}$$
(5.1)

Here F is a map from \({{\mathcal {W}}}_0 \times {{\mathcal {P}}}\) to \(\widetilde{{\mathcal {W}}}_0\), \({{\mathcal {W}}}_0\) and \({\widetilde{{\mathcal {W}}}}_0\) are Banach spaces with norms \(\Vert \cdot \Vert _0\) and \(|\cdot |_0\), respectively, and \({{\mathcal {P}}}\) is a finite dimensional normed vector space with norm \(\Vert \cdot \Vert \). Moreover, it is supposed that

$$\begin{aligned} F(0,0)=0. \end{aligned}$$
(5.2)

We are going to state conditions on F such that, similarly to the classical implicit function theorem, for all \(p \approx 0\) there exists exactly one solution \(w \approx 0\) to (5.1) and that the data-to-solution map \(p \mapsto w\) is smooth. Similarly to the classical implicit function theorem, we suppose that

$$\begin{aligned} F(\cdot ,p) \in C^\infty ({{\mathcal {W}}}_0;{\widetilde{{\mathcal {W}}}}_0) \text{ for } \text{ all } p \in {{\mathcal {P}}}. \end{aligned}$$
(5.3)

However, unlike in the classical case, we do not suppose that \(F(w,\cdot )\) is smooth for all \(w \in {{\mathcal {W}}}_0\). In our applications the map \((w,p) \mapsto \partial _wF(w,p)\) is, in general, not even continuous with respect to the uniform operator norm in \({{\mathcal {L}}}({{\mathcal {W}}}_0;{\widetilde{{\mathcal {W}}}}_0)\). Hence, what distinguishes Theorem 17 below from the classical implicit function theorem is not a degeneracy of the partial derivatives \(\partial _wF(w,p)\) (as in implicit function theorems of Nash-Moser type), but a degeneracy of the partial derivatives \(\partial _pF(w,p)\) (which do not exist for all \(w \in {{\mathcal {W}}}_0\)).

Thus, we consider parameter-dependent equations which do not depend smoothly on the parameter, but whose solutions nevertheless depend smoothly on the parameter. For that, of course, some additional structure is needed, which we describe now.

Let \(\varphi \in {\mathbb {R}} \mapsto S(\varphi ) \in {{\mathcal {L}}}({{\mathcal {W}}}_0)\), \(\varphi \in {\mathbb {R}} \mapsto {\widetilde{S}}(\varphi ) \in {{\mathcal {L}}}(\widetilde{{{\mathcal {W}}}}_0)\), and \(\varphi \in {\mathbb {R}} \mapsto T(\varphi ) \in {{\mathcal {L}}}({{\mathcal {P}}})\) be strongly continuous groups of linear bounded operators on \({{\mathcal {W}}}_0\), \(\widetilde{{{\mathcal {W}}}}_0\) and \({{\mathcal {P}}}\), respectively. We suppose that

$$\begin{aligned} {\widetilde{S}}(\varphi )F(w,p)=F(S(\varphi )w,T(\varphi )p) \text{ for } \text{ all } \varphi \in {\mathbb {R}}, w \in {{\mathcal {W}}}_0 \text{ and } p \in {{\mathcal {P}}}. \end{aligned}$$
(5.4)

Furthermore, let \(A:D(A) \subseteq {{\mathcal {W}}}_0 \rightarrow {{\mathcal {W}}}_0\) be the infinitesimal generator of the \(C_0\)-group \(S(\varphi )\). For \(l\in {\mathbb {N}}\), let

$$\begin{aligned} {{\mathcal {W}}}_l:=D(A^l)=\{w \in {{\mathcal {W}}}_0: S(\cdot )w \in C^l({\mathbb {R}}; {{\mathcal {W}}_0})\} \end{aligned}$$

denote the domain of definition of the l-th power of A. Since A is closed, \({{\mathcal {W}}}_l\) is a Banach space with the norm

$$\begin{aligned} \Vert w\Vert _l:=\sum _{k=0}^l\Vert A^kw\Vert _0. \end{aligned}$$

We suppose that for all \(k,l\in {\mathbb {N}}\)

$$\begin{aligned} \partial ^k_wF(w,\cdot )(w_1,\ldots ,w_k) \in C^l({{\mathcal {P}}};{\widetilde{{\mathcal {W}}}}_0) \text{ for } \text{ all } w,w_1,\ldots ,w_k \in {{\mathcal {W}}}_l \end{aligned}$$
(5.5)

and, for all \(w,w_1,\ldots ,w_k \in {{\mathcal {W}}}_l\) and \(p,p_1,\ldots ,p_l \in {{\mathcal {P}}}\) with \(\Vert w\Vert _l+\Vert p\Vert \le 1\),

$$\begin{aligned} |\partial ^l_p\partial ^k_wF(w,p)(w_1,\ldots ,w_k,p_1,\ldots ,p_l)|_0 \le c_{kl}\Vert w_1\Vert _l\ldots \Vert w_k\Vert _l\,\Vert p_1\Vert \ldots \Vert p_l\Vert , \end{aligned}$$
(5.6)

where the constants \(c_{kl}\) do not depend on \(w,w_1,\ldots ,w_k,p,p_1,\ldots ,p_l\).

Theorem 17

[25, Theorem 2.2] Suppose that the conditions (5.2)–(5.6) are fulfilled. Furthermore, assume that there exist \(\varepsilon _0 >0\) and \(c>0\) such that for all \(p \in {{\mathcal {P}}}\) with \(\Vert p\Vert \le \varepsilon _0\)

$$\begin{aligned} \partial _wF(0,p) \text{ is } \text{ Fredholm } \text{ of } \text{ index } \text{ zero } \text{ from } {{\mathcal {W}}}_0 \text{ into } \widetilde{{\mathcal {W}}}_0 \end{aligned}$$
(5.7)

and

$$\begin{aligned} |\partial _wF(0,p)w|_0 \ge c\Vert w\Vert _0 \text{ for } \text{ all } w \in {{\mathcal {W}}}_0. \end{aligned}$$
(5.8)

Then there exist \(\varepsilon \in (0,\varepsilon _0]\) and \(\delta >0\) such that for all \(p \in {{\mathcal {P}}}\) with \(\Vert p\Vert \le \varepsilon \) there is a unique solution \(w=\hat{w}(p)\) to (5.1) with \(\Vert w\Vert _0 \le \delta \). Moreover, for all \(k\in {\mathbb {N}}\) we have \(\hat{w}(p) \in {{\mathcal {W}}}_k\), and the map \(p \in {{\mathcal {P}}}\mapsto \hat{w}(p) \in {{\mathcal {W}}}_k\) is \(C^\infty \)-smooth.

Remark 18

The maps \(\varphi \in {\mathbb {R}} \mapsto S(\varphi ) \in {{\mathcal {L}}}({{\mathcal {W}}}_0)\) and \(\varphi \in {\mathbb {R}} \mapsto {\widetilde{S}}(\varphi ) \in {{\mathcal {L}}}(\widetilde{{\mathcal {W}}}_0)\) are not continuous, in general. Nevertheless, since \({{\mathcal {P}}}\) is supposed to be finite dimensional, the map \(\varphi \in {\mathbb {R}} \mapsto T(\varphi ) \in {{\mathcal {L}}}({{\mathcal {P}}})\) is \(C^\infty \)-smooth. This is essential in the proof of Theorem 17 in [25].

Remark 19

In Theorem 17 we do not suppose that \(\partial _wF(0,p)\) depends continuously on p in the sense of the uniform operator norm in \({{\mathcal {L}}}({{\mathcal {W}}}_0;{\widetilde{{\mathcal {W}}}}_0)\). Hence, assumptions (5.7) and (5.8) cannot be replaced by their versions with \(p=0\), in general.

5.2 Solution of the external Lyapunov-Schmidt equation

In what follows, we use the following notation (for \(\varepsilon >0\) and \(k\in {\mathbb {N}}\)):

$$\begin{aligned}&{{\mathcal {U}}}_\varepsilon :=\{u \in \ker {{\mathcal {L}}}:\; \Vert u\Vert _\infty<\varepsilon \},\quad {{\mathcal {P}}}_\varepsilon :=\{(\omega ,\tau ,\lambda ) \in {\mathbb {R}}^3:\; |\omega -1|+|\tau -\tau _0|+|\lambda |<\varepsilon \},\\&C_{2\pi }:=C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2),\quad C^k_{2\pi }:=C^k_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2). \end{aligned}$$

We are going to solve the so-called external Lyapunov-Schmidt equation (4.53) with respect to \(w \approx 0\) for \(u\approx 0\), \(\omega \approx 1\), \(\tau \approx \tau _0\) and \(\lambda \approx 0\).

Lemma 20

Let the conditions of Theorem 2 be fulfilled. Then there exist \(\varepsilon >0\) and \(\delta >0\) such that for all \(u \in {{\mathcal {U}}}_\varepsilon \) and \((\omega ,\tau ,\lambda )\in {{\mathcal {P}}}_\varepsilon \) there is a unique solution \(w=\hat{w}(u,\omega ,\tau ,\lambda ) \in {{\mathcal {W}}}\) to (4.53) with \(\Vert w\Vert _\infty <\delta \). Moreover, for all \(k\in {\mathbb {N}}\) we have \(\hat{w}(u,\omega ,\tau ,\lambda ) \in C^k_{2\pi }\), and the map \((u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon \mapsto \hat{w}(u,\omega ,\tau ,\lambda ) \in C^k_{2\pi }\) is \(C^\infty \)-smooth.

We have that \(w=0\) is a solution to (4.53) with \(u=0\), \(\omega =1\), \(\tau =\tau _0\) and \(\lambda =0\). This suggests that Lemma 20 can be obtained from an appropriate implicit function theorem. Unfortunately, the classical implicit function theorem does not work here, because the left-hand side of (4.53) is not differentiable with respect to \(\omega \), \(\tau \) and \(\lambda \) for every \(w\in C_{2\pi }\). Instead, we will apply Theorem 17.

Let us verify the assumptions of Theorem  17 in the following setting:

$$\begin{aligned} \left. \begin{array}{l} {{\mathcal {W}}}_0={{\mathcal {W}}},\; {\widetilde{{{\mathcal {W}}}}}_0=\text{ im }\,(I-P),\; {{\mathcal {P}}}=\ker {{\mathcal {L}}}\times {\mathbb {R}}^3, \; p=(u,\omega -1,\tau -\tau _0,\lambda ),\\ F(w,p)=(I-P)\left( (I-{\mathcal {C}}(\omega ,\lambda ))(u+w)-{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda ) \right) . \end{array} \right\} \end{aligned}$$
(5.9)

Note that \({{\mathcal {W}}}_0\) and \({\widetilde{{{\mathcal {W}}}}}_0\) are Banach spaces with the norm \(\Vert \cdot \Vert _\infty \). Conditions (5.2), (5.3) and (5.7) are fulfilled, the last one being true due to Lemma 10.

It remains to verify conditions (5.4)–(5.6) and (5.8).

We begin with verifying (5.4). We take \(S_\varphi \) and \({\widetilde{S}}_\varphi \) to be the operator \(S_\varphi \) defined by (4.43), restricted to \({{\mathcal {W}}}_0\) and \({\widetilde{{{\mathcal {W}}}}}_0\), respectively. Let

$$\begin{aligned} T_\varphi (u,\omega ,\tau ,\lambda ):=(S_\varphi u,\omega ,\tau ,\lambda ). \end{aligned}$$

It follows from (4.50) that \(S_\varphi {\widetilde{{{\mathcal {W}}}}}_0={\widetilde{{{\mathcal {W}}}}}_0\). Taking into account (4.44) and (4.45), we get

$$\begin{aligned} {\widetilde{S}}_\varphi F(w,p)= & {} S_\varphi (I-P)\left( (I-{\mathcal {C}}(\omega ,\lambda ))(u+w) -{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda )\right) \nonumber \\= & {} (I-P)\left( (I-{\mathcal {C}}(\omega ,\lambda ))(S_\varphi u+S_\varphi w)-{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(S_\varphi u +S_\varphi w,\omega ,\tau ,\lambda )\right) \nonumber \\= & {} F(S_\varphi w,T_\varphi p), \end{aligned}$$
(5.10)

which gives (5.4).

To verify assumption (5.5), recall that the infinitesimal generator of the group \(S_\varphi \) is the differential operator \(A=\frac{d}{dt}\). Therefore,

$$\begin{aligned} {{\mathcal {W}}}^l=\{w \in {{\mathcal {W}}}: \; \partial _tw,\partial _t^2w,\ldots ,\partial _t^lw\in {{\mathcal {W}}}\},\; \Vert w\Vert _l=\sum _{j=0}^l\Vert \partial _t^jw\Vert _\infty \text{ for } w \in {{\mathcal {W}}}^l. \end{aligned}$$

We have

$$\begin{aligned}&\partial _w[(I-P)\left( (I-{\mathcal {C}}(\omega ,\lambda ))(u+w)-{\mathcal {D}}(\omega ,\lambda ) {{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda )\right) ]w_1\\&=(I-P)\left( I-{\mathcal {C}}(\omega ,\lambda )-{\mathcal {D}}(\omega ,\lambda )\partial _v{{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda ) \right) w_1 \end{aligned}$$

and

$$\begin{aligned}&\partial ^k_w(I-P)\left( (I-{\mathcal {C}}(\omega ,\lambda ))(u+w)-{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w, \omega ,\tau ,\lambda )\right) (w_1,\ldots ,w_k)\\&=-(I-P){\mathcal {D}}(\omega ,\lambda )\partial ^k_v{{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda )(w_1,\ldots ,w_k) \text{ for } k \ge 2. \end{aligned}$$

Taking into account that any \(u \in \ker {{\mathcal {L}}}\) is \(C^\infty \)-smooth and satisfies the equality \(\Vert \partial _t^ju\Vert _\infty =\Vert u\Vert _\infty \) (cf. Lemma 15), our task reduces to showing that for all \(k,l\in {\mathbb {N}}\) and all \(w,w_1,\ldots ,w_k \in {{\mathcal {W}}}^l\) the functions \({\mathcal {C}}(\omega ,\lambda )w\) and \({\mathcal {D}}(\omega ,\lambda )\partial ^k_v{{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda )(w_1,\ldots ,w_k)\) depend \(C^l\)-smoothly on \((\omega ,\tau ,\lambda )\) and that condition (5.6) is fulfilled.

The proof goes through two claims.

Claim 6

For all \(l,m\in {\mathbb {N}}\) and \(w \in {{\mathcal {W}}}^{l+m}\) the map \((\omega ,\lambda ) \in {\mathbb {R}}^2 \mapsto {\mathcal {C}}(\omega ,\lambda ) w \in C_{2\pi }\) is \(C^{l+m}\)-smooth. Moreover,

$$\begin{aligned} \Vert \partial ^l_\omega \partial ^m_\lambda {\mathcal {C}}(\omega ,\lambda ) w\Vert _\infty \le c_{lm}\Vert w\Vert _{l+m}, \end{aligned}$$
(5.11)

where the constant \(c_{lm}\) does not depend on \(\omega \), \(\lambda \) and w for \(\omega \) and \(\lambda \) varying on bounded intervals.

Proof of Claim

Since \(w(\cdot ,x)\) is \(C^l\)-smooth, definition (4.1) implies that \({\mathcal {C}}(\cdot ,\cdot )w\) is \(C^l\)-smooth, and the derivatives can be calculated by the chain rule. For example,

$$\begin{aligned} \partial _\omega [{\mathcal {C}}(\omega ,\lambda )w](t,x)= \left[ \begin{array}{c} -c_1(x,0,\lambda )\partial _tw_2(t+\omega A(x,0,\lambda ),0)A(x,0,\lambda )\\ -c_2(x,1,\lambda )\partial _tw_1(t-\omega A(x,1,\lambda ),1)A(x,1,\lambda ) \end{array} \right] . \end{aligned}$$
(5.12)

It follows that \(\Vert \partial _\omega [{\mathcal {C}}(\omega ,\lambda )w]\Vert _\infty \le \text{ const }\Vert w\Vert _1\), where the constant does not depend on \(\omega \) and \(\lambda \) (varying in bounded intervals) and on \(w \in {{\mathcal {W}}}^1\).

The derivative \(\partial _\lambda {\mathcal {C}}(\omega ,\lambda )w\) and the higher-order derivatives can be handled in the same way, which yields (5.11).

Remark 21

In (5.12) the loss of derivatives property can be seen explicitly: taking a derivative with respect to \(\omega \) leads to a derivative with respect to t. The same happens in formulas (5.14), (5.16) and (5.17) below.

Claim 7

For all \(k,l,m,n\in {\mathbb {N}}\), \(u \in \ker {{\mathcal {L}}}\) and \(w,w_1,\ldots ,w_k \in {{\mathcal {W}}}^{l+m+n}\), the map \((\omega ,\tau ,\lambda ) \in {\mathbb {R}}^3 \mapsto {\mathcal {D}}(\omega ,\lambda )\partial _v^k{{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda ) \in C_{2\pi }\) is \(C^{l+m+n}\)-smooth. Moreover,

$$\begin{aligned} \Vert \partial ^l_\omega \partial ^m_\tau \partial ^n_\lambda [{\mathcal {D}}(\omega ,\lambda ) \partial _v^k{{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda )(w_1,\ldots ,w_k)]\Vert _\infty \le c_{klmn}\Vert w_1\Vert _{l+m+n}\cdot \ldots \cdot \Vert w_k\Vert _{l+m+n},\nonumber \\ \end{aligned}$$
(5.13)

where the constant \(c_{klmn}\) does not depend on \(\omega \), \(\tau \), \(\lambda \), u and w for \(\Vert u\Vert _\infty \), \(\Vert w\Vert _{l+m+n}\), \(\omega \), \(\tau \) and \(\lambda \) varying in bounded intervals.

Proof of Claim

Differentiation of (4.2) with respect to \(\omega \) gives

$$\begin{aligned} \partial _\omega {\mathcal {D}}(\omega ,\lambda )w={\widetilde{{\mathcal {D}}}}(\omega ,\lambda )\partial _tw \text{ for } w \in {{\mathcal {W}}}^1, \end{aligned}$$
(5.14)

where

$$\begin{aligned} \begin{array}{cc} {[}{\widetilde{{\mathcal {D}}}}_1(\omega ,\lambda )w](t,x):= \displaystyle -\int _0^x\frac{c_1(x,\xi ,\lambda )}{a(\xi ,\lambda )}w_1(t+\omega A(x,\xi ,\lambda ),\xi )A(x,\xi ,\lambda )\,d\xi ,\\ {[}{\widetilde{{\mathcal {D}}}}_2(\omega ,\lambda )w](t,x):= \displaystyle -\int _x^1\frac{c_2(x,\xi ,\lambda )}{a(\xi ,\lambda )}w_2(t-\omega A(x,\xi ,\lambda ),\xi )A(x,\xi ,\lambda )\,d\xi . \end{array} \end{aligned}$$

Hence, for \(v,w \in {{\mathcal {W}}}^1\), it holds

$$\begin{aligned} \partial _\omega [{\mathcal {D}}(\omega ,\lambda )\partial _v{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )w] = {\widetilde{{\mathcal {D}}}}(\omega ,\lambda )\partial _t[\partial _v{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )w]+ {\mathcal {D}}(\omega ,\lambda )\partial _\omega [\partial _v{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )w].\nonumber \\ \end{aligned}$$
(5.15)

Furthermore, similarly to (2.17), we have

$$\begin{aligned} \partial _v{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )={\widetilde{{{\mathcal {J}}}}}(v,\omega ,\tau ,\lambda ) +{\widetilde{{{\mathcal {K}}}}}(v,\omega ,\tau ,\lambda ), \end{aligned}$$

where

$$\begin{aligned} \begin{array}{ll} {[}{\widetilde{{{\mathcal {J}}}}}_1(v,\omega ,\tau ,\lambda )w](t,x)=[{\widetilde{{{\mathcal {J}}}}}_2 (v,\omega ,\tau ,\lambda )w](t,x)\\ :={\tilde{b}}_3(t,x,v,\omega ,\tau ,\lambda )[J_\lambda w](t,x)+ {\tilde{b}}_4(t,x,v,\omega ,\tau ,\lambda )[J_\lambda w](t-\omega \tau ,x) \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{cc} {[}{\widetilde{{{\mathcal {K}}}}}_1(v,\omega ,\tau ,\lambda )w](t,x):={\tilde{b}}_2(t,x,v, \omega ,\tau ,\lambda )w_2(t,x),\\ {[}{\widetilde{{{\mathcal {K}}}}}_2(v,\omega ,\tau ,\lambda )w](t,x):={\tilde{b}}_1(t,x,v, \omega ,\tau ,\lambda )w_1(t,x). \end{array} \end{aligned}$$

Here the coefficients \({\tilde{b}}_k\) are defined appropriately (similarly to (2.12) and (2.14)), as follows:

$$\begin{aligned}&{\tilde{b}}_k(t,x,v,\omega ,\tau ,\lambda ):=\partial _kb(x,\lambda ,[J_\lambda v](t,x),[J_\lambda v](t-\omega \tau ,x),[Kv](t,x),\\&\quad [K_\lambda v](t,x)) \text{ for } k=3,4,5,6 \end{aligned}$$

and

$$\begin{aligned} \begin{array}{rcl} {\tilde{b}}_1(t,x,v,\omega ,\tau ,\lambda )&{}:=&{}\displaystyle \frac{1}{2} \left( -\partial _xa(x,\lambda ) +{\tilde{b}}_5(t,x,v,\omega ,\tau ,\lambda )+\frac{{\tilde{b}}_6(t,x,v, \omega ,\tau ,\lambda )}{a(x,\lambda )}\right) ,\;\\ {\tilde{b}}_2(t,x,v,\omega ,\tau ,\lambda )&{}:=&{}\displaystyle \frac{1}{2} \left( \partial _xa(x,\lambda ) +{\tilde{b}}_5(t,x,v,\omega ,\tau ,\lambda )-\frac{{\tilde{b}}_6(t,x,v,\omega , \tau ,\lambda )}{a(x,\lambda )}\right) . \end{array} \end{aligned}$$

Now, \([\partial _v{{\mathcal {B}}}_j(v,\cdot ,\cdot ,\cdot )w](\cdot ,x)\) is \(C^l\)-smooth because \(v(\cdot ,x)\) and \(w(\cdot ,x)\) are \(C^l\)-smooth. The derivatives can be calculated by the product and chain rules. In particular, for \(v,w \in {{\mathcal {W}}}^1\) we have

$$\begin{aligned} \partial _\omega \partial _v{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )=\partial _\omega {\widetilde{{{\mathcal {J}}}}} (v,\omega ,\tau ,\lambda )+\partial _\omega {\widetilde{{{\mathcal {K}}}}}(v,\omega ,\tau ,\lambda ), \end{aligned}$$

where

$$\begin{aligned}&{[}\partial _\omega {\widetilde{{{\mathcal {J}}}}}_j(v,\omega ,\tau ,\lambda )w](t,x)= \partial _\omega {\tilde{b}}_3(t,x,v,\omega ,\tau ,\lambda )[J_\lambda w](t,x)\nonumber \\&+ \partial _\omega {\tilde{b}}_4(t,x,v,\omega ,\tau ,\lambda )[J_\lambda w] (t-\omega \tau ,x)\nonumber \\&-\tau {\tilde{b}}_4(t,x,v,\omega ,\tau ,\lambda )[J_\lambda \partial _tw](t-\omega \tau ,x) \end{aligned}$$
(5.16)

and

$$\begin{aligned}&[\partial _\omega {\widetilde{{{\mathcal {K}}}}}_1(v,\omega ,\tau ,\lambda )w](t,x)=\frac{1}{2}\left( \partial _\omega {\tilde{b}}_5(t,x,v,\omega ,\tau ,\lambda )+\frac{\partial _\omega {\tilde{b}}_6(t,x,v,\omega ,\tau ,\lambda )}{a(x,\lambda )}\right) w_2(t,x),\\&[\partial _\omega {\widetilde{{{\mathcal {K}}}}}_2(v,\omega ,\tau ,\lambda )w](t,x)=\frac{1}{2}\left( \partial _\omega {\tilde{b}}_5(t,x,v,\omega ,\tau ,\lambda )-\frac{\partial _\omega {\tilde{b}}_6(t,x,v,\omega ,\tau ,\lambda )}{a(x,\lambda )}\right) w_1(t,x). \end{aligned}$$

Moreover, for \(k=3,4,5,6\),

$$\begin{aligned}&\partial _\omega {\tilde{b}}_k(t,x,v,\omega ,\tau ,\lambda )\nonumber \\&\quad = -\tau \partial _4\partial _kb(x,\lambda ,[J_\lambda v](t,x),[J_\lambda v](t-\omega \tau ,x),[Kv](t,x),[K_\lambda v](t,x))[J_\lambda \partial _t v](t-\omega \tau ,x).\nonumber \\ \end{aligned}$$
(5.17)

The functions \(\partial _\omega {\tilde{b}}_k\) are bounded as long as \(\Vert v\Vert _1\), \(\omega \), \(\tau \) and \(\lambda \) are bounded. Hence, we have

$$\begin{aligned} \Vert \partial _\omega [\partial _v{{\mathcal {B}}}_j(v,\omega ,\tau ,\lambda )w]\Vert _\infty \le \text{ const }\Vert w\Vert _1, \end{aligned}$$

where the constant does not depend on \(\omega \), \(\tau \), \(\lambda \), v and w as long as \(\Vert v\Vert _1\), \(\Vert w\Vert _1\), \(\omega \), \(\tau \) and \(\lambda \) are bounded. Similarly one shows \( \Vert \partial _t[\partial _v{{\mathcal {B}}}_j(v,\omega ,\tau ,\lambda )w]\Vert _\infty \le \text{ const }\Vert w\Vert _1. \) Using (5.15) we get

$$\begin{aligned} \Vert \partial _\omega [{\mathcal {D}}(\omega ,\lambda )\partial _v{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )w]\Vert _\infty \le \text{ const }\Vert w\Vert _1, \end{aligned}$$

where the constant does not depend on \(\omega \), \(\tau \), \(\lambda \), v and w as long as \(\Vert v\Vert _1\), \(\Vert w\Vert _1\), \(\omega \), \(\tau \) and \(\lambda \) are bounded. Similarly one shows the estimates (5.13) for \(\partial _\tau [{\mathcal {D}}(\omega ,\lambda )\partial _v{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )w]\) and \(\partial _\lambda [{\mathcal {D}}(\omega ,\lambda )\partial _v{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )w]\) and for the higher-order derivatives.

Finally, we verify assumption (5.8) of Theorem 17.

Claim 8

There exist \(\delta >0\) and \(c>0\) such that, for all \(u \in \ker {{\mathcal {L}}}\) and \(\omega ,\tau ,\lambda \in {\mathbb {R}}\) with \(\Vert u\Vert _\infty +|\omega -1|+|\tau -\tau _0|+|\lambda |<\delta \), it holds

$$\begin{aligned} \Vert (I-P)\left( I-{\mathcal {C}}(\omega ,\lambda )-{\mathcal {D}}(\omega ,\lambda )({{\mathcal {J}}}(\omega ,\tau ,\lambda )+{{\mathcal {K}}}(\lambda )) \right) w\Vert _\infty \ge c\Vert w\Vert _\infty \text{ for } \text{ all } w \in {{\mathcal {W}}}. \end{aligned}$$

Proof of Claim

We will follow ideas which are used to prove coercivity estimates for singularly perturbed linear differential operators (see, e.g., [37, Lemma 1.3] and [40, Sect. 3]).

Suppose the contrary. Then there exist sequences \(w_n \in {{\mathcal {W}}}\), \(u_n \in \ker {{\mathcal {L}}}\) and \((\omega _n,\tau _n,\lambda _n) \in {\mathbb {R}}^3\) such that

$$\begin{aligned}&\Vert w_n\Vert _\infty =1 \text{ for } \text{ all } n \in {\mathbb {N}}, \end{aligned}$$
(5.18)
$$\begin{aligned}&\Vert u_n\Vert _\infty +|\omega _n-1|+|\tau _n-\tau _0|+|\lambda _n| \rightarrow 0 \text{ as } n\rightarrow \infty \end{aligned}$$
(5.19)

and

$$\begin{aligned} \Vert (I-P)\left( I-{\mathcal {C}}(\omega _n,\lambda _n)-{\mathcal {D}}(\omega _n,\lambda _n)({{\mathcal {J}}}(\omega _n, \tau _n,\lambda _n)+{{\mathcal {K}}}(\lambda _n))\right) w_n\Vert _\infty \rightarrow 0 \text{ as } n\rightarrow \infty .\nonumber \\ \end{aligned}$$
(5.20)

We have to derive a contradiction.

For brevity, we will use the following notation:

$$\begin{aligned}&{\mathcal {C}}_n:={\mathcal {C}}(\omega _n,\lambda _n),\; {\mathcal {D}}_n:={\mathcal {D}}(\omega _n,\lambda _n)({{\mathcal {J}}}(\omega _n,\tau _n,\lambda _n)+{{\mathcal {K}}}(\lambda _n)),\\&{{\mathcal {E}}}_n:=(I-{\mathcal {C}}_n)^{-1}({\mathcal {D}}_n+P(I-{\mathcal {C}}_n-{\mathcal {D}}_n)). \end{aligned}$$

Note that the operators \({{\mathcal {E}}}_n\) are well defined due to Claim 4 from Sect. 4.1. By assumption (5.20), we have

$$\begin{aligned} \Vert (I-P)(I-{\mathcal {C}}_n-{\mathcal {D}}_n)w_n\Vert _\infty =\Vert (I-{\mathcal {C}}_n)(I-{{\mathcal {E}}}_n)w_n\Vert _\infty \rightarrow 0. \end{aligned}$$
(5.21)

Moreover, because of (4.13) it follows that \(\Vert (I-{{\mathcal {E}}}_n)w_n\Vert _\infty \rightarrow 0\) and, on account of (4.8) and (4.13), that

$$\begin{aligned} \Vert (I+{{\mathcal {E}}}_n)(I-{{\mathcal {E}}}_n)w_n\Vert _\infty =\Vert (I-{{\mathcal {E}}}_n^2)w_n\Vert _\infty \rightarrow 0. \end{aligned}$$
(5.22)

Let us show that the sequence \({{\mathcal {E}}}_n^2w_n\) is bounded in the space \(C^1_{2 \pi }\). A straightforward calculation shows that

$$\begin{aligned} {{\mathcal {E}}}_n^2=(I-{\mathcal {C}}_n)^{-1}\left( {\mathcal {D}}_n^2+{\mathcal {D}}_n{\mathcal {C}}_n(I-{\mathcal {C}}_n)^{-1} {\mathcal {D}}_n+{{\mathcal {R}}}_n\right) \end{aligned}$$
(5.23)

with

$$\begin{aligned} {{\mathcal {R}}}_n:={\mathcal {D}}_n(I-{\mathcal {C}}_n)^{-1}P(I-{\mathcal {C}}_n-{\mathcal {D}}_n)+P(I-{\mathcal {C}}_n-{\mathcal {D}}_n) (I-{\mathcal {C}}_n)^{-1}({\mathcal {D}}_n+P(I-{\mathcal {C}}_n-{\mathcal {D}}_n)).\nonumber \\ \end{aligned}$$
(5.24)
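For the reader's convenience, the calculation behind (5.23) can be spelled out as follows. Abbreviating \({{\mathcal {M}}}_n:={\mathcal {D}}_n+P(I-{\mathcal {C}}_n-{\mathcal {D}}_n)\), so that \({{\mathcal {E}}}_n=(I-{\mathcal {C}}_n)^{-1}{{\mathcal {M}}}_n\), we get

$$\begin{aligned} {{\mathcal {E}}}_n^2&=(I-{\mathcal {C}}_n)^{-1}{{\mathcal {M}}}_n(I-{\mathcal {C}}_n)^{-1}{{\mathcal {M}}}_n\\&=(I-{\mathcal {C}}_n)^{-1}\left( {\mathcal {D}}_n(I-{\mathcal {C}}_n)^{-1}{\mathcal {D}}_n +{\mathcal {D}}_n(I-{\mathcal {C}}_n)^{-1}P(I-{\mathcal {C}}_n-{\mathcal {D}}_n) +P(I-{\mathcal {C}}_n-{\mathcal {D}}_n)(I-{\mathcal {C}}_n)^{-1}{{\mathcal {M}}}_n\right) , \end{aligned}$$

and the identity \((I-{\mathcal {C}}_n)^{-1}=I+{\mathcal {C}}_n(I-{\mathcal {C}}_n)^{-1}\) turns the first summand into \({\mathcal {D}}_n^2+{\mathcal {D}}_n{\mathcal {C}}_n(I-{\mathcal {C}}_n)^{-1}{\mathcal {D}}_n\), while the remaining two summands constitute \({{\mathcal {R}}}_n\).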

From (4.8), (4.22), (5.18) and (5.23) it follows that, in order to show that \({{\mathcal {E}}}_n^2w_n\) is bounded in \(C^1_{2 \pi }\), it suffices to show that the operator sequences \({\mathcal {D}}_n^2\), \({\mathcal {D}}_n{\mathcal {C}}_n\) and \({{\mathcal {R}}}_n\) are bounded with respect to the uniform operator norm in \({{\mathcal {L}}}(C_{2\pi };C^1_{2\pi })\). Let us start with

$$\begin{aligned} {\mathcal {D}}_n^2={\mathcal {D}}(\omega _n,\lambda _n)({{\mathcal {J}}}(\omega _n,\tau _n,\lambda _n)+{{\mathcal {K}}}(\lambda _n)) {\mathcal {D}}(\omega _n,\lambda _n)({{\mathcal {J}}}(\omega _n,\tau _n,\lambda _n)+{{\mathcal {K}}}(\lambda _n)). \end{aligned}$$

This sequence is bounded in \({{\mathcal {L}}}(C_{2\pi };C^1_{2\pi })\) because of (4.8), (4.9) and (4.11). Then consider

$$\begin{aligned} {\mathcal {D}}_n{\mathcal {C}}_n={\mathcal {D}}(\omega _n,\lambda _n)({{\mathcal {J}}}(\omega _n,\tau _n,\lambda _n)+{{\mathcal {K}}}(\lambda _n)) {\mathcal {C}}(\omega _n,\lambda _n). \end{aligned}$$

This sequence is bounded in \({{\mathcal {L}}}(C_{2\pi };C^1_{2\pi })\) because of (4.8), (4.9) and (4.12). And finally, the operator sequence \({{\mathcal {R}}}_n\) is bounded in \({{\mathcal {L}}}(C_{2\pi };C^1_{2\pi })\) because of (4.8), (4.13), (4.22) and because the projection P belongs to \({{\mathcal {L}}}(C_{2\pi };C^1_{2\pi })\) (cf. (4.48)).

Let us summarize: we have shown that the sequence \({{\mathcal {E}}}_n^2w_n\) is bounded in \(C^1_{2 \pi }\). By the Arzelà-Ascoli theorem, without loss of generality we may assume that this sequence converges in \(C_{2 \pi }\) to some function \(w_* \in C_{2 \pi }\). Then (5.22) implies the convergence

$$\begin{aligned} \Vert w_n-w_*\Vert _\infty \rightarrow 0. \end{aligned}$$
(5.25)

In particular, \(w_* \in {{\mathcal {W}}}\).

If we had

$$\begin{aligned} {{\mathcal {L}}}w_*=(I-{\mathcal {C}}-{\mathcal {D}}({{\mathcal {J}}}+{{\mathcal {K}}}))w_*=0, \end{aligned}$$
(5.26)

then it would follow that \(w_* \in \ker {{\mathcal {L}}}\cap {{\mathcal {W}}}\) and, on account of (4.47), that \(w_*=0\), contradicting (5.18) and (5.25). Hence, it remains to prove (5.26).

In order to prove (5.26), we take arbitrary \(w,h \in C_{2\pi }\) and calculate

$$\begin{aligned}&\langle ({\mathcal {C}}_n-{\mathcal {C}})w,h\rangle \\&\quad =\frac{1}{2\pi }\int _0^{2\pi }\int _0^1\Big (\left[ -c_1(x,0,\lambda _n)w_2 (t+\omega _nA(x,0,\lambda _n),0)\right. \\&\qquad \left. +c_1(x,0,0)w_2(t+A(x,0,0),0)\right] h_1(t,x)\\&\;\;\;\;\;\;\;\;+\left[ c_2(x,1,\lambda _n)w_1(t-\omega _nA(x,1,\lambda _n),1) -c_2(x,1,0)w_1(t-A(x,1,0),1)\right] h_2(t,x)\Big )dxdt\\&\quad =\frac{1}{2\pi }\int _0^{2\pi }\int _0^1\Big (\left[ -c_1(x,0,\lambda _n)h_1(t -\omega _nA(x,0,\lambda _n),x)\right. \\&\qquad \left. +c_1(x,0,0)h_1(t-A(x,0,0),x)\right] w_2(t,0)\\&\;\;\;\;\;\;\;\;+\left[ c_2(x,1,\lambda _n)h_2(t+\omega _nA(x,1,\lambda _n),x) -c_2(x,1,0)h_2(t+A(x,1,0),x)\right] w_1(t,1)\Big )dxdt. \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{n \rightarrow \infty }\sup _{\Vert w\Vert _\infty \le 1}\langle ({\mathcal {C}}_n-{\mathcal {C}})w,h\rangle =0 \text{ for } \text{ all } h \in C_{2\pi }. \end{aligned}$$

Similarly one shows for all \(h_1,h_2,h_3 \in C_{2\pi }\) the convergence

$$\begin{aligned} \lim _{n \rightarrow \infty }\left( \sup _{\Vert w\Vert _\infty \le 1}\langle ({\mathcal {D}}_n-{\mathcal {D}})w,h_1\rangle + \sup _{\Vert w\Vert _\infty \le 1}\langle ({{\mathcal {J}}}_n-{{\mathcal {J}}})w,h_2\rangle +\sup _{\Vert w\Vert _\infty \le 1}\langle ({{\mathcal {K}}}_n-{{\mathcal {K}}})w,h_3\rangle \right) =0. \end{aligned}$$

Therefore, we get from (4.49), (5.18) and (5.21) that

$$\begin{aligned} 0= & {} \lim _{n \rightarrow \infty }\langle (I-P)(I-{\mathcal {C}}_n-{\mathcal {D}}_n)w_n,h \rangle =\langle (I-P)(I-{\mathcal {C}}-{\mathcal {D}}({{\mathcal {J}}}+{{\mathcal {K}}}))w_*,h\rangle \\= & {} \langle {{\mathcal {L}}}w_*,h \rangle \end{aligned}$$

for all \(h \in C_{2\pi }\), i.e., (5.26) is true.

We have shown that Theorem 17 can be applied to equation (4.53) in the setting (5.9). This implies the following fact.

Claim 9

There exist \(\varepsilon >0\) and \(\delta >0\) such that for all \(u \in {{\mathcal {U}}}_\varepsilon \) and \((\omega ,\tau ,\lambda )\in {{\mathcal {P}}}_\varepsilon \) there is a unique solution \(w=\hat{w}(u,\omega ,\tau ,\lambda ) \in {{\mathcal {W}}}\) to (4.53) with \(\Vert w\Vert _\infty <\delta \). Moreover, for all \(k\in {\mathbb {N}}\) the partial derivatives \(\partial _t^k\hat{w}(u,\omega ,\tau ,\lambda )\) exist and belong to \(C_{2\pi }\), and the map \((u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon \mapsto \partial _t^k\hat{w}(u,\omega ,\tau ,\lambda ) \in C_{2\pi }\) is \(C^\infty \)-smooth.

In order to finish the proof of Lemma 20 by using Claim 9, we have to show that for all \(k \in {\mathbb {N}}\) it holds

$$\begin{aligned} \hat{w}(u,\omega ,\tau ,\lambda ) \in C^k_{2\pi } \text{ for } \text{ all } (u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon \end{aligned}$$
(5.27)

and that the map

$$\begin{aligned} (u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon \mapsto \hat{w}(u,\omega ,\tau ,\lambda ) \in C^k_{2\pi } \text{ is } C^\infty \text{-smooth }. \end{aligned}$$
(5.28)

Let us first prove (5.27). We use induction with respect to k. For \(k=0\) condition (5.27) is true because of Claim 9.

In order to do the induction step we use that \(\hat{w}(u,\omega ,\tau ,\lambda )\in {{\mathcal {W}}}^{k+1}\) (because of Claim 9) and that \(\hat{w}(u,\omega ,\tau ,\lambda )\in C^k_{2\pi }\) (because of the induction assumption), and we have to show that \(\hat{w}(u,\omega ,\tau ,\lambda )\in C^{k+1}_{2\pi }\) (induction assertion). It holds

$$\begin{aligned} \hat{w}(u,\omega ,\tau ,\lambda )=F(\hat{w}(u,\omega ,\tau ,\lambda ),u,\omega ,\tau ,\lambda ) \text{ for } \text{ all } (u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon , \end{aligned}$$

where the map \(F:{{\mathcal {W}}}\times {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon \rightarrow {{\mathcal {W}}}\) is defined by

$$\begin{aligned} F(w,u,\omega ,\tau ,\lambda ):= & {} {\mathcal {C}}(\omega ,\lambda )w+{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda ) +(I-P)(I-{\mathcal {C}}(\omega ,\lambda ))u\\&-P{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda ). \end{aligned}$$

Hence, we have to show that

$$\begin{aligned} F(w,u,\omega ,\tau ,\lambda ) \in C^{k+1}_{2\pi } \text{ for } \text{ all } w \in {{\mathcal {W}}}^{k+1}\cap C^k_{2\pi } \text{ and } (u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon . \end{aligned}$$
(5.29)

Obviously, for all \(w \in C_{2\pi }\) and \( (u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon \) it holds \((I-P)(I-{\mathcal {C}}(\omega ,\lambda ))u \in C^{l}_{2\pi }\) and \(P{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda ) \in C^{l}_{2\pi }\) for any \(l \in {\mathbb {N}}\). Hence, in order to prove (5.29), it remains to show that

$$\begin{aligned} {\mathcal {C}}(\omega ,\lambda )w,{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda ) \in C^{k+1}_{2\pi } \text{ for } \text{ all } w \in {{\mathcal {W}}}^{k+1}\cap C^k_{2\pi } \text{ and } (u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon , \end{aligned}$$

or, equivalently, that

$$\begin{aligned} \left. \begin{array}{r} \partial _t[{\mathcal {C}}(\omega ,\lambda )w],\partial _x[{\mathcal {C}}(\omega ,\lambda )w], \partial _t[{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda )], \partial _x[{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+w,\omega ,\tau ,\lambda )] \in C^{k}_{2\pi }\\ \text{ for } \text{ all } w \in {{\mathcal {W}}}^{k+1}\cap C^k_{2\pi }. \end{array} \right\} \nonumber \\ \end{aligned}$$
(5.30)

Due to the definitions of \({{\mathcal {B}}}, {\mathcal {C}}\) and \({\mathcal {D}}\), given by the formulas (2.16), (4.1) and (4.2), it holds

$$\begin{aligned}&\partial _t[{\mathcal {C}}(\omega ,\lambda )w]={\mathcal {C}}(\omega ,\lambda )\partial _tw,\; \partial _t[{\mathcal {D}}(\omega ,\lambda )w]={\mathcal {D}}(\omega ,\lambda )\partial _tw,\\&\partial _x[{\mathcal {C}}(\omega ,\lambda )w]={\widetilde{{\mathcal {C}}}}(\omega ,\lambda )w +{\widehat{{\mathcal {C}}}}(\omega ,\lambda )\partial _tw,\; \partial _x[{\mathcal {D}}(\omega ,\lambda )w]={\widetilde{{\mathcal {D}}}}(\omega ,\lambda )w +{\widehat{{\mathcal {D}}}}(\omega ,\lambda )\partial _tw,\\&\partial _t[{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )]=\partial _v{{\mathcal {B}}}(v,\omega ,\tau ,\lambda )\partial _tv, \end{aligned}$$

where

$$\begin{aligned}&{[}{\widetilde{{\mathcal {C}}}}(\omega ,\lambda )w](t,x):= \left[ \begin{array}{c} -\partial _xc_1(x,0,\lambda )w_2(t+\omega A(x,0,\lambda ),0)\\ \partial _xc_2(x,1,\lambda )w_1(t-\omega A(x,1,\lambda ),1) \end{array} \right] ,\\&{[}{\widehat{{\mathcal {C}}}}(\omega ,\lambda )w](t,x):= \left[ \begin{array}{c} -\omega \partial _xA(x,0,\lambda )c_1(x,0,\lambda )w_2(t+\omega A(x,0,\lambda ),0)\\ -\omega \partial _xA(x,1,\lambda )c_2(x,1,\lambda )w_1(t-\omega A(x,1,\lambda ),1) \end{array}\right] ,\\&{[}{\widetilde{{\mathcal {D}}}}(\omega ,\lambda )w](t,x):= \left[ \begin{array}{c} \displaystyle -\frac{w_1(t,x)}{a(x,\lambda )}-\int _0^x \frac{\partial _xc_1(x,\xi ,\lambda )}{a(\xi ,\lambda )}w_1(t+\omega A(x,\xi ,\lambda ),\xi )d\xi \\ \displaystyle -\frac{w_2(t,x)}{a(x,\lambda )}+\int _x^1 \frac{\partial _xc_2(x,\xi ,\lambda )}{a(\xi ,\lambda )}w_2(t-\omega A(x,\xi ,\lambda ),\xi )d\xi \end{array} \right] ,\\&{[}{\widehat{{\mathcal {D}}}}(\omega ,\lambda )w](t,x):= \left[ \begin{array}{c} \displaystyle -\frac{\omega }{a(x,\lambda )}\int _0^x\frac{c_1(x,\xi ,\lambda )}{a(\xi ,\lambda )} w_1(t+\omega A(x,\xi ,\lambda ),\xi )d\xi \\ \displaystyle -\frac{\omega }{a(x,\lambda )}\int _x^1\frac{c_2(x,\xi ,\lambda )}{a(\xi ,\lambda )} w_2(t-\omega A(x,\xi ,\lambda ),\xi )d\xi \end{array} \right] . \end{aligned}$$

Hence, (5.30) is true.

Now, let us prove (5.28). Again, we use induction with respect to k. For \(k=0\), condition (5.28) is true due to Claim 9. For the induction step, we proceed as above. We have to show that the map

$$\begin{aligned} (u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon \mapsto \hat{w}(u,\omega ,\tau ,\lambda ) \in C^{k+1}_{2\pi } \text{ is } C^\infty -\text{ smooth } \end{aligned}$$
(5.31)

under the assumption that

$$\begin{aligned} (u,\omega ,\tau ,\lambda ) \in {{\mathcal {U}}}_\varepsilon \times {{\mathcal {P}}}_\varepsilon \mapsto \hat{w}(u,\omega ,\tau ,\lambda ) \in {{\mathcal {W}}}^{k+1}\cap C^{k}_{2\pi } \text{ is } C^\infty -\text{ smooth }. \end{aligned}$$
(5.32)

Thanks to (5.32), the maps

$$\begin{aligned}&(u,\omega ,\tau ,\lambda ) \mapsto {\mathcal {C}}(\omega ,\lambda )\partial _t\hat{w}(u,\omega , \tau ,\lambda ) \in C^{k}_{2\pi },\\&(u,\omega ,\tau ,\lambda ) \mapsto {\widetilde{{\mathcal {C}}}}(\omega ,\lambda )\hat{w}(u, \omega ,\tau ,\lambda ) \in C^{k}_{2\pi },\\&(u,\omega ,\tau ,\lambda ) \mapsto {\widehat{{\mathcal {C}}}}(\omega ,\lambda )\hat{w}(u,\omega , \tau ,\lambda ) \in C^{k}_{2\pi },\\&(u,\omega ,\tau ,\lambda ) \mapsto {\widetilde{{\mathcal {D}}}}(\omega ,\lambda ){{\mathcal {B}}}(u+\hat{w} (u,\omega ,\tau ,\lambda ),\omega ,\tau ,\lambda )\in C^{k}_{2\pi },\\&(u,\omega ,\tau ,\lambda ) \mapsto {\widehat{{\mathcal {D}}}}(\omega ,\lambda )\partial _v{{\mathcal {B}}}(u+\hat{w}(u,\omega ,\tau ,\lambda ),\omega ,\tau ,\lambda ) (\partial _tu+\partial _t\hat{w}(u,\omega ,\tau ,\lambda )) \in C^{k}_{2\pi } \end{aligned}$$

are \(C^\infty \)-smooth, which implies (5.31) as desired.

Remark 22

The uniqueness assertion of Claim 9 and equality (5.10) yield

$$\begin{aligned} S_\varphi \hat{w}(u,\omega ,\tau ,\lambda )=\hat{w}(S_\varphi u,\omega ,\tau ,\lambda ) \text{ for } \text{ all } \varphi \in {\mathbb {R}}, u \in {{\mathcal {U}}}_\varepsilon \text{ and } (\omega ,\tau ,\lambda )\in {{\mathcal {P}}}_\varepsilon . \end{aligned}$$
(5.33)

Moreover, the uniqueness assertion of Claim 9 along with the equality \({{\mathcal {B}}}(0,\omega ,\tau ,\lambda )=0\) implies that

$$\begin{aligned} \hat{w}(0,\omega ,\tau ,\lambda )=0 \text{ for } \text{ all } (\omega ,\tau ,\lambda )\in {{\mathcal {P}}}_\varepsilon . \end{aligned}$$
(5.34)

Finally, differentiating the identity

$$\begin{aligned} (I-P)\left( (I-{\mathcal {C}}(\omega ,\lambda ))(u+\hat{w}(u,\omega ,\tau ,\lambda )) -{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u+\hat{w}(u,\omega ,\tau ,\lambda ),\omega ,\tau ,\lambda )\right) =0 \end{aligned}$$

with respect to u at \(u=0\), \(\omega =1\), \(\tau =\tau _0\) and \(\lambda =0\), we conclude that \({{\mathcal {L}}}\partial _u\hat{w}(0,1,\tau _0,0)=0\), i.e., \(\partial _u\hat{w}(0,1,\tau _0,0) \in \ker {{\mathcal {L}}}\cap {{\mathcal {W}}}\), and hence

$$\begin{aligned} \partial _u\hat{w}(0,1,\tau _0,0)=0. \end{aligned}$$
(5.35)

6 The bifurcation equation

In this section we substitute the solution \(w=\hat{w}(u,\omega ,\tau ,\lambda )\) to (4.53) into (4.52) and solve the resulting so-called bifurcation equation

$$\begin{aligned} P\left( (I-{\mathcal {C}}(\omega ,\lambda ))(u+\hat{w}(u,\omega ,\tau ,\lambda ))-{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u +\hat{w}(u,\omega ,\tau ,\lambda ),\omega ,\tau ,\lambda )\right) =0\nonumber \\ \end{aligned}$$
(6.1)

with respect to \(\omega \approx 1\) and \(\tau \approx \tau _0\) for \(u\approx 0\) and \(\lambda \approx 0\). The definition (4.48) of the projection P shows that equation (6.1) is equivalent to

$$\begin{aligned} \langle (I-{\mathcal {C}}(\omega ,\lambda ))(u+\hat{w}(u,\omega ,\tau ,\lambda ))-{\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(u +\hat{w}(u,\omega ,\tau ,\lambda ),\omega ,\tau ,\lambda ),{{\mathcal {A}}}^*{{\varvec{v}}}_*\rangle =0.\nonumber \\ \end{aligned}$$
(6.2)

By Lemma 15, the variable \(u \in \ker {{\mathcal {L}}}\) in (6.2) can be replaced by \(\xi \in {\mathbb {C}}\), by using the ansatz

$$\begin{aligned} u=\text{ Re }\,(\xi {{\varvec{v}}}_0)=\xi _1v_0^1-\xi _2v_0^2,\; \xi =\xi _1+i\xi _2,\; \xi _1,\xi _2 \in {\mathbb {R}}. \end{aligned}$$
(6.3)

We, therefore, get the following equation:

$$\begin{aligned} F(\xi ,\omega ,\tau ,\lambda ):= & {} \langle (I-{\mathcal {C}}(\omega ,\lambda ))(\text{ Re }\,(\xi {{\varvec{v}}}_0)+\hat{w}(\text{ Re }\, (\xi {{\varvec{v}}}_0),\omega ,\tau ,\lambda )),{{\mathcal {A}}}^*{{\varvec{v}}}_*\rangle \nonumber \\&-\langle {\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(\text{ Re }\,(\xi {{\varvec{v}}}_0)+\hat{w}(\text{ Re }\, (\xi {{\varvec{v}}}_0),\omega ,\tau ,\lambda ),\omega ,\tau ,\lambda ),{{\mathcal {A}}}^*{{\varvec{v}}}_*\rangle =0.\nonumber \\ \end{aligned}$$
(6.4)

On account of (4.26) and (4.27), we have \(S_\varphi {{\varvec{v}}}_0=e^{i\varphi }{{\varvec{v}}}_0\) and \(S_\varphi {{\varvec{v}}}_*=e^{i\varphi }{{\varvec{v}}}_*\) and, hence, \(S_\varphi \text{ Re }\,(\xi {{\varvec{v}}}_0)=\text{ Re }\,(e^{i\varphi }\xi {{\varvec{v}}}_0)\). Now, (4.44), (4.45) and (5.33) yield

$$\begin{aligned} e^{i\varphi }F(\xi ,\omega ,\tau ,\lambda )=F(e^{i\varphi }\xi ,\omega ,\tau ,\lambda ). \end{aligned}$$
(6.5)
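Writing \(\xi =\varepsilon e^{i\varphi }\) with \(\varepsilon =|\xi |\ge 0\), equality (6.5) gives

$$\begin{aligned} F(\xi ,\omega ,\tau ,\lambda )=F(e^{i\varphi }\varepsilon ,\omega ,\tau ,\lambda ) =e^{i\varphi }F(\varepsilon ,\omega ,\tau ,\lambda ), \end{aligned}$$

so \(F(\xi ,\omega ,\tau ,\lambda )=0\) if and only if \(F(|\xi |,\omega ,\tau ,\lambda )=0\).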

Our task is, therefore, reduced to determining all solutions \(\xi \approx 0\), \(\omega \approx 1\), \(\tau \approx \tau _0\) and \(\lambda \approx 0\) with real non-negative \(\xi \). Since from now on \(\xi \) is a real parameter, we denote it by \(\varepsilon \). Equation (6.4) then reads

$$\begin{aligned} G(\varepsilon ,\omega ,\tau ,\lambda ):= & {} \langle (I-{\mathcal {C}}(\omega ,\lambda ))(\varepsilon v_0^1+\hat{w}(\varepsilon v_0^1,\omega ,\tau ,\lambda )), {{\mathcal {A}}}^*{{\varvec{v}}}_*\rangle \nonumber \\&-\langle {\mathcal {D}}(\omega ,\lambda ){{\mathcal {B}}}(\varepsilon v_0^1+\hat{w}(\varepsilon v_0^1,\omega ,\tau ,\lambda ),\omega ,\tau ,\lambda ),{{\mathcal {A}}}^*{{\varvec{v}}}_*\rangle =0. \end{aligned}$$
(6.6)

From \({{\mathcal {B}}}(0,\omega ,\tau ,\lambda )=0\) and (5.34) it follows that \(G(0,\omega ,\tau ,\lambda )\equiv 0\). This means that, to solve (6.6) with \(\varepsilon >0\), it suffices to solve the so-called scaled or restricted bifurcation equation

$$\begin{aligned} H(\varepsilon ,\omega ,\tau ,\lambda ):=\frac{1}{\varepsilon }G(\varepsilon ,\omega ,\tau ,\lambda )=\int _0^1 \partial _\varepsilon G(s\varepsilon ,\omega ,\tau ,\lambda )ds=0. \end{aligned}$$
(6.7)
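The second equality in (6.7) is the fundamental theorem of calculus combined with \(G(0,\omega ,\tau ,\lambda )=0\):

$$\begin{aligned} G(\varepsilon ,\omega ,\tau ,\lambda )=G(\varepsilon ,\omega ,\tau ,\lambda )-G(0,\omega ,\tau ,\lambda ) =\int _0^1\frac{d}{ds}G(s\varepsilon ,\omega ,\tau ,\lambda )\,ds =\varepsilon \int _0^1\partial _\varepsilon G(s\varepsilon ,\omega ,\tau ,\lambda )\,ds. \end{aligned}$$

In particular, H extends \(C^\infty \)-smoothly to \(\varepsilon =0\).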

In particular, on account of (4.6) and (6.6) it holds

$$\begin{aligned}&H(\varepsilon ,1,\tau _0,0)\nonumber \\&\quad = \int _0^1\left\langle \left( {{\mathcal {A}}}-\partial _v{{\mathcal {B}}}(s\varepsilon v_0^1+\hat{w}(s\varepsilon v_0^1,1,\tau _0,0),1,\tau _0,0)\right) (I+\partial _u\hat{w}(s\varepsilon v_0^1,1,\tau _0,0))v_0^1,{{\varvec{v}}}_*\right\rangle \, ds,\nonumber \\ \end{aligned}$$
(6.8)

and (5.34) and (5.35) yield

$$\begin{aligned} H(0,\omega ,\tau _0,0)= \langle {{\mathcal {A}}}\left( I-{\mathcal {C}}(\omega ,0) -{\mathcal {D}}(\omega ,0)\partial _v{{\mathcal {B}}}(0,\omega ,\tau _0,0)\right) v_0^1,{{\varvec{v}}}_*\rangle . \end{aligned}$$
(6.9)

By (5.34), (5.35), (6.8) and Lemma 15 we have \(H(0,1,\tau _0,0)=\langle {{\mathcal {L}}}v_0^1,{{\mathcal {A}}}^*{{\varvec{v}}}_*\rangle =0\). Hence, in order to solve (6.7) with respect to \(\omega \approx 1\) and \(\tau \approx \tau _0\) (for \(\varepsilon \approx 0\) and \(\lambda \approx 0\)) by using the classical implicit function theorem we have to show that

$$\begin{aligned} \left. \det \frac{\partial (\text{ Re }\,H,\text{ Im }\,H)}{\partial (\omega ,\tau )}\right| _{\varepsilon =\lambda =0, \omega =1, \tau =\tau _0} \ne 0. \end{aligned}$$
(6.10)

Let us calculate the partial derivatives in the Jacobian on the left-hand side of (6.10). Due to (4.24), (6.9) and Lemma 15 we have

$$\begin{aligned} \partial _\omega H(0,1,\tau _0,0)= & {} \left. \frac{d}{d\omega }\langle {{\mathcal {A}}}(\omega ,0)\left( I-{\mathcal {C}}(\omega ,0) -{\mathcal {D}}(\omega ,0)\partial _v{{\mathcal {B}}}(0,\omega ,\tau _0,0)\right) v_0^1,{{\varvec{v}}}_*\rangle \right| _{\omega =1}\nonumber \\= & {} \left. \frac{d}{d\omega }\langle \left( {{\mathcal {A}}}(\omega ,0) -{{\mathcal {J}}}(\omega ,\tau _0,0)-{{\mathcal {K}}}(0)\right) v_0^1,{{\varvec{v}}}_*\rangle \right| _{\omega =1}\nonumber \\= & {} \langle \left( \partial _\omega {{\mathcal {A}}}(1,0) -\partial _\omega {{\mathcal {J}}}(1,\tau _0,0)\right) v_0^1,{{\varvec{v}}}_*\rangle . \end{aligned}$$
(6.11)

Similarly one gets

$$\begin{aligned} \partial _\tau H(0,1,\tau _0,0)=-\langle \partial _\tau {{\mathcal {J}}}(1,\tau _0,0)v_0^1, {{\varvec{v}}}_*\rangle . \end{aligned}$$
(6.12)

On the other hand, (2.18) implies that for \(k=1,2\)

$$\begin{aligned}&[\partial _\omega {{\mathcal {J}}}_k(1,\tau _0,0)v](t,x)=-\frac{\tau _0}{2}b_4^0(x)\int _0^x \frac{\partial _tv_1(t-\tau _0,\xi )-\partial _tv_2(t-\tau _0,\xi )}{a_0(\xi )}d\xi ,\\&[\partial _\tau {{\mathcal {J}}}_k(1,\tau _0,0)v](t,x)=-\frac{1}{2}b_4^0(x)\int _0^x \frac{\partial _tv_1(t-\tau _0,\xi )-\partial _tv_2(t-\tau _0,\xi )}{a_0(\xi )}d\xi . \end{aligned}$$

Moreover, (4.5) yields \(\partial _\omega {{\mathcal {A}}}(1,0){{\varvec{v}}}_0=\partial _t{{\varvec{v}}}_0\). Hence, from (4.26) it follows that

$$\begin{aligned}{}[\partial _\omega {{\mathcal {A}}}(1,0)v_0^1](t,x)=\text{ Re }\,\partial _t{{\varvec{v}}}_0(t,x)= \text{ Re }\,\left( e^{it}\left[ \begin{array}{c} -u_0(x)+ia_0(x)u_0'(x)\\ -u_0(x)-ia_0(x)u_0'(x) \end{array} \right] \right) \end{aligned}$$

and, for \(k=1,2\), that

$$\begin{aligned}{}[\partial _\omega {{\mathcal {J}}}_k(1,\tau _0,0)v_0^1](t,x)= & {} \text{ Re }\, [\partial _\omega {{\mathcal {J}}}_k(1,\tau _0,0){{\varvec{v}}}_0](t,x)\\ {}= & {} -\frac{\tau _0}{2}b_4^0(x)\,\text{ Re }\,\left( ie^{i(t-\tau _0)}\int _0^x\frac{v_{01}(\xi ) -v_{02}(\xi )}{a_0(\xi )}d\xi \right) \\= & {} \tau _0b_4^0(x)\,\text{ Im }\left( e^{i(t-\tau _0)}u_0(x)\right) , \end{aligned}$$

where

$$\begin{aligned} v_{01}(x):=iu_0(x)+a_0(x)u_0'(x),\; v_{02}(x):=iu_0(x)-a_0(x)u_0'(x) \end{aligned}$$
(6.13)

are the components of the vector function \({{\varvec{v}}}_0\) (cf. (4.26)). Analogously one gets \( [\partial _\tau {{\mathcal {J}}}_k(1,\tau _0,0)v_0^1](t,x)=b_4^0(x)\,\text{ Im } \left( e^{i(t-\tau _0)}u_0(x)\right) . \) We insert this into (6.11) and (6.12) and get

$$\begin{aligned}&\partial _\omega H(0,1,\tau _0,0)\\&\quad =\frac{1}{2\pi }\int _0^{2\pi }\int _0^1\left( \text{ Re }\,\left( e^{it}\left[ \begin{array}{c} -u_0+ia_0u_0'\\ -u_0-ia_0u_0' \end{array} \right] \right) -\tau _0b_4^0\text{ Im }\left( e^{i(t-\tau _0)} \left[ \begin{array}{c} u_0\\ u_0 \end{array} \right] \right) \right) \cdot \left( e^{it}\left[ \begin{array}{c} u_*+iU_*\\ u_*-iU_* \end{array} \right] \right) \,dxdt\\&\quad =\frac{1}{2}\int _0^1 \left[ \begin{array}{c} (-1+i\tau _0b_4^0e^{-i\tau _0})u_0+ia_0u_0'\\ (-1-i\tau _0b_4^0e^{-i\tau _0})u_0-ia_0u_0' \end{array} \right] \cdot \left[ \begin{array}{c} u_*+iU_*\\ u_*-iU_* \end{array} \right] \,dx\\&\quad =\int _0^1\left( (-1+i\tau _0b_4^0e^{-i\tau _0})u_0\overline{u_*} +a_0u_0'\overline{U_*}\right) \,dx =\int _0^1\left( (-1+i\tau _0b_4^0e^{-i\tau _0})\overline{u_*}-(a_0 \overline{U_*})'\right) u_0\,dx. \end{aligned}$$

Here we used (4.32). By (4.36) and (4.41), it holds

$$\begin{aligned} \partial _\omega H(0,1,\tau _0,0)=\int _0^1\left( -2+i\tau _0b_4^0(x) e^{-i\tau _0}-ib_5^0(x)\right) u_0(x)\overline{u_*(x)}dx=i. \end{aligned}$$
(6.14)

Similarly,

$$\begin{aligned} \text{ Re }\,\partial _\tau H(0,1,\tau _0,0)= & {} -\frac{1}{2\pi }\text{ Re }\int _0^{2\pi }\int _0^1b_4^0\,\text{ Im } \left( e^{i(t-\tau _0)} \left[ \begin{array}{c} u_0\\ u_0 \end{array} \right] \right) \cdot \left( e^{it}\left[ \begin{array}{c} u_*+iU_*\\ u_*-iU_* \end{array} \right] \right) dxdt\nonumber \\= & {} \text{ Re }\left( ie^{-i\tau _0}\int _0^1b_4^0(x)u_0(x)\overline{u_*(x)}dx \right) =\rho . \end{aligned}$$
(6.15)

Hence, assumption (A2) implies that

$$\begin{aligned} \det \left[ \begin{array}{cc} \text{ Re }\,\partial _\omega H(0,1,\tau _0,0) &{} \text{ Im }\,\partial _\omega H(0,1,\tau _0,0)\\ \text{ Re }\,\partial _\tau H(0,1,\tau _0,0) &{} \text{ Im }\,\partial _\tau H(0,1,\tau _0,0) \end{array} \right] = - \text{ Re }\,\partial _\tau H(0,1,\tau _0,0)=-\rho \ne 0, \end{aligned}$$

i.e., (6.10) is true.

Now, the classical implicit function theorem can be applied to solve (6.7) with respect to \(\omega \approx 1\) and \(\tau \approx \tau _0\) for \(\varepsilon \approx 0\) and \(\lambda \approx 0\). We therefore conclude that there exist \(\varepsilon _0>0\) and \(C^\infty \)-smooth functions \({\hat{\omega }},{\hat{\tau }}:[-\varepsilon _0,\varepsilon _0]^2\rightarrow {\mathbb {R}}\) with \({\hat{\omega }}(0,0)=1\) and \({\hat{\tau }}(0,0)=\tau _0\) such that \((\varepsilon ,\omega ,\tau ,\lambda ) \approx (0,1,\tau _0,0)\) is a solution to (6.7) if and only if

$$\begin{aligned} \omega ={\hat{\omega }}(\varepsilon ,\lambda ),\quad \tau ={\hat{\tau }}(\varepsilon ,\lambda ). \end{aligned}$$
(6.16)

Moreover, equality (6.5) implies that \(F(-\xi ,\omega ,\tau ,\lambda )=-F(\xi ,\omega ,\tau ,\lambda )\), i.e., \(H(-\varepsilon ,\omega ,\tau ,\lambda )=H(\varepsilon ,\omega ,\tau ,\lambda )\). Thus, \({\hat{\omega }}(-\varepsilon ,\lambda )={\hat{\omega }}(\varepsilon ,\lambda )\) and \({\hat{\tau }}(-\varepsilon ,\lambda )={\hat{\tau }}(\varepsilon ,\lambda )\). This yields (1.9). Now, by (5.34) and (5.35), the corresponding solutions to (2.1), where \(\omega \) and \(\tau \) are given by (6.16), read

$$\begin{aligned} v=\varepsilon [\hat{v}(\varepsilon ,\lambda )](t,x):= & {} \varepsilon v_0^1(t,x)+[\hat{w} (\varepsilon v_0^1,{\hat{\omega }}(\varepsilon ,\lambda ),{\hat{\tau }}(\varepsilon ,\lambda ),\lambda )](t,x)\\= & {} \varepsilon \text{ Re }\left( e^{it}\left[ \begin{array}{c} iu_0(x)+a_0(x)u_0'(x)\\ iu_0(x)-a_0(x)u_0'(x) \end{array} \right] \right) +o(\varepsilon ), \end{aligned}$$

where \(o(\varepsilon )/\varepsilon \rightarrow 0\) for \(\varepsilon \rightarrow 0\) uniformly with respect to \(\lambda \in [-\varepsilon _0,\varepsilon _0]\). Finally, we take into account (2.6) and conclude that the solutions to (1.3), corresponding to \(\omega \) and \(\tau \), are defined by

$$\begin{aligned} u=\varepsilon {[}\hat{u}(\varepsilon ,\lambda )](t,x):=\frac{\varepsilon }{2}\int _0^x\frac{[\hat{v}_1(\varepsilon ,\lambda )] (t,\xi )-[\hat{v}_2(\varepsilon ,\lambda )](t,\xi )}{a(\xi ,\lambda )}d\xi =\varepsilon \text{ Re }\left( e^{it}u_0(x)\right) +o(\varepsilon ), \end{aligned}$$

which proves (1.8).

7 The Bifurcation Direction: Proof of Theorem 4

Differentiating the identity \( \text{ Re }\,H(\varepsilon ,{\hat{\omega }}(\varepsilon ,0),{\hat{\tau }}(\varepsilon ,0),0)\equiv 0 \) twice with respect to \(\varepsilon \) at \(\varepsilon =0\) and taking into account (1.9), (6.14) and (6.15), we get

$$\begin{aligned} \text{ Re }\,\partial _\varepsilon ^2 H(0,1,\tau _0,0)=\rho \partial _\varepsilon ^2{\hat{\tau }}(0,0). \end{aligned}$$
(7.1)

Furthermore, (6.8), (5.34) and (5.35) yield the equality

$$\begin{aligned}&\partial ^2_\varepsilon H(0,1,\tau _0,0)\nonumber \\&\quad =-\langle \partial ^3_v{{\mathcal {B}}}(0,1,\tau _0,0)(v_0^1,v_0^1,v_0^1)+2\partial ^2_v {{\mathcal {B}}}(0,1,\tau _0,0)(v_0^1, \partial ^2_u\hat{w}(0,1,\tau _0,0)(v_0^1,v_0^1)),{{\varvec{v}}}_*\rangle .\nonumber \\ \end{aligned}$$
(7.2)

Now we use the special structure (1.10), (1.11) of the nonlinearity b as it is assumed in Theorem 4. It follows from (1.10), (1.11), (2.2) and (2.16) that \(\partial ^2_v{{\mathcal {B}}}(0,1,\tau _0,0)=0\). Moreover, for \(j=1,2\), it holds

$$\begin{aligned}&[\partial ^3_v{{\mathcal {B}}}_j(0,1,\tau _0,0)(v_0^1,v_0^1,v_0^1)](t,x) =[\partial ^3_vB(0,1,\tau _0,0)(v_0^1,v_0^1,v_0^1)](t,x)\\&\quad =\beta _1^0(x)[J_0v_0^1](t,x)^3+\beta _2^0(x)[J_0v_0^1](t-\tau _0,x)^3 +\beta _3^0(x)[Kv_0^1](t,x)^3+\beta _4^0(x)[K_0v_0^1](t,x)^3. \end{aligned}$$

Furthermore, (2.3), (2.4), (4.26) and (6.13) yield

$$\begin{aligned}{}[J_0v_0^1](t,x)= & {} \text{ Re }\,[J_0{{\varvec{v}}}_0](t,x)=\frac{1}{2}\text{ Re } \left( e^{it}\int _0^x\frac{v_{01}(\xi )-v_{02}(\xi )}{a_0(\xi )}d\xi \right) = \text{ Re }\,(e^{it}u_0(x)),\\ {[Kv_0^1]}(t,x)= & {} \text{ Re }\,[K{{\varvec{v}}}_0](t,x)=\frac{1}{2}\text{ Re }\, (e^{it}(v_{01}(x)+v_{02}(x)))=-\text{ Im }\,(e^{it}u_0(x)),\\ {[K_0v_0^1]}(t,x)= & {} \partial _x[J_0v_0^1](t,x)=\text{ Re }\,(e^{it}u'_0(x)). \end{aligned}$$

Therefore,

$$\begin{aligned}&[\partial ^3_vB(0,1,\tau _0,0)(v_0^1,v_0^1,v_0^1)](t,x)\\&=\frac{1}{8}\left( \beta _1^0(x)(e^{it}u_0(x)+e^{-it}\overline{u_0(x)})^3 +\beta _2^0(x)(e^{i(t-\tau _0)}u_0(x)+e^{-i(t-\tau _0)}\overline{u_0(x)})^3 \right. \\&\left. \;\;\;\;\;-i\beta _3^0(x)(e^{it}u_0(x)-e^{-it}\overline{u_0(x)})^3 +\beta _4^0(x)(e^{it}u'_0(x)+e^{-it}\overline{u'_0(x)})^3\right) \\&=\frac{1}{8}e^{3it}\left( (\beta _1^0(x)+\beta _2^0(x)e^{-3i\tau _0} -i\beta _3^0(x))u_0(x)^3+\beta _4^0(x)u_0'(x)^3\right) \\&\;\;\;\;\;+\frac{3}{8}e^{it}\left( (\beta _1^0(x)+\beta _2^0(x)e^{-i \tau _0}+i\beta _3^0(x))u_0(x)^2\overline{u_0(x)}+\beta _4^0(x)u_0'(x)^2 \overline{u'_0(x)} \right) \\&\;\;\;\;\;+\frac{3}{8}e^{-it}\left( (\beta _1^0(x)+\beta _2^0(x)e^{i\tau _0} -i\beta _3^0(x))u_0(x)\overline{u_0(x)}^2+\beta _4^0(x)u_0'(x) \overline{u'_0(x)}^2 \right) \\&\;\;\;\;\;+\frac{1}{8}e^{-3it}\left( (\beta _1^0(x)+\beta _2^0(x)e^{3i\tau _0} +i\beta _3^0(x))\overline{u_0(x)}^3+\beta _4^0(x) \overline{u'_0(x)}^3 \right) . \end{aligned}$$

Inserting this into (7.1) and (7.2), we end up with the equality

$$\begin{aligned}&\partial _\varepsilon ^2{\hat{\tau }}(0,0)=\frac{1}{\rho }\text{ Re }\,\partial _\varepsilon ^2 H(0,1,\tau _0,0) =-\frac{1}{\rho }\langle \partial ^3_v{{\mathcal {B}}}(0,1,\tau _0,0)(v_0^1,v_0^1,v_0^1), v^1_*\rangle \\&\quad =\frac{1}{2\pi \rho }\int _0^{2\pi } \int _0^1 \left[ \begin{array}{c} [\partial ^3_vB(0,1,\tau _0,0)(v_0^1,v_0^1,v_0^1)](t,x)\\ \displaystyle [\partial ^3_vB(0,1,\tau _0,0)(v_0^1,v_0^1,v_0^1)](t,x) \end{array} \right] \cdot \text{ Re }\left( e^{it} \left[ \begin{array}{c} u_*(x)+iU_*(x)\\ u_*(x)-iU_*(x) \end{array} \right] \right) dxdt\\&\quad =\frac{1}{2\pi \rho }\text{ Re }\int _0^{2\pi } \int _0^1[\partial ^3_vB(0,1,\tau _0,0)(v_0^1,v_0^1,v_0^1)](t,x)e^{-it} \overline{u_*(x)}dxdt\\&\quad =\frac{3}{8\rho } \text{ Re }\left( \int _0^1\left( (\beta ^0_1(x)+\beta ^0_2(x)e^{-i\tau _0} +i\beta ^0_3(x))|u_0(x)|^2u_0(x)+\beta _4^0(x) |u'_0(x)|^2u'_0(x)\right) \overline{u_*(x)}dx\right) . \end{aligned}$$

This is exactly the desired formula in Theorem 4 with \(\sigma =1\) (cf. (4.41)).
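As an independent sanity check of the harmonic expansion above, one can verify numerically the \(e^{it}\) Fourier coefficient of the \(\beta _3^0\) term, the only harmonic that survives the pairing with \(\overline{u_*}\) in the final formula: for \(w(t)=(-\text{Im}(e^{it}z))^3\) the coefficient of \(e^{it}\) is \(\tfrac{3}{8}iz^2\bar{z}\). The following is a minimal numeric sketch (our own illustration, not part of the proof):

```python
import numpy as np

# Check: the e^{it} Fourier coefficient of (-Im(e^{it} z))^3 equals (3/8) i z^2 conj(z).
rng = np.random.default_rng(0)
z = complex(rng.normal(), rng.normal())   # stands in for u_0(x) at a fixed x

t = np.linspace(0.0, 2 * np.pi, 40001)
dt = t[1] - t[0]
w = (-np.imag(np.exp(1j * t) * z)) ** 3

# Fourier coefficient c_1 = (1/2pi) int_0^{2pi} w(t) e^{-it} dt (trapezoid rule,
# spectrally accurate for smooth periodic integrands)
f = w * np.exp(-1j * t)
c1 = np.sum((f[1:] + f[:-1]) * dt / 2) / (2 * np.pi)

c1_exact = 0.375j * z ** 2 * np.conj(z)
print(abs(c1 - c1_exact))
```

This confirms the factor \(i\beta _3^0\) in the \(e^{it}\) line of the expansion, which is the line entering the formula of Theorem 4.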

8 Example

Let us consider problem (1.1), (1.2) with

$$\begin{aligned} \tau _0= \frac{\pi }{2}, \; a_0(x)=\frac{4}{\pi ^2}, \; b_3^0(x)=b_6^0(x)=0,\; b_4^0(x)=b_5^0(x)=c(x) \text{ for } \text{ all } x \in [0,1] \end{aligned}$$

and a smooth function \(c:[0,1] \rightarrow {\mathbb {R}}\). The function \(u(x)= \sin \frac{\pi x}{2}\) then solves (1.4) with \(\mu =i\) and (1.6), and the choice \(u_0(x)=u_*(x)=\sin \frac{\pi x}{2}\) gives

$$\begin{aligned} \sigma= & {} -\int _0^1c(x) \left( \sin \frac{\pi x}{2}\right) ^2 dx+i\int _0^1\left( 2-i\frac{\pi }{2}e^{-i\frac{\pi }{2}}c(x)\right) \left( \sin \frac{\pi x}{2}\right) ^2 dx,\\ \rho= & {} -\frac{1}{|\sigma |^2}\int _0^1c(x)\left( \sin \frac{\pi x}{2}\right) ^2 dx. \end{aligned}$$

Hence, if \(\int _0^1c(x)\left( \sin \frac{\pi x}{2}\right) ^2 dx\not =0\), then all assumptions of Theorem 2 are satisfied. If, additionally, \(\beta _4^0(x)=0\) for all \(x \in [0,1]\), then

$$\begin{aligned} \frac{8}{3}|\sigma |^2\rho \,\partial _\varepsilon ^2{\hat{\tau }}(0,0)= & {} -\int _0^1c(x)\left( \sin \frac{\pi x}{2}\right) ^2 dx\int _0^1\beta _1^0(x) \left( \sin \frac{\pi x}{2}\right) ^4 dx\\&+\int _0^1\left( 2-\frac{\pi }{2}c(x)\right) \left( \sin \frac{\pi x}{2}\right) ^2 dx \int _0^1(\beta _3^0(x)-\beta _2^0(x))\left( \sin \frac{\pi x}{2}\right) ^4 dx. \end{aligned}$$

Therefore, if this number is positive, then the Hopf bifurcation is supercritical.
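For a concrete illustration, take the hypothetical choice \(c(x)\equiv 1\). Since \(e^{-i\pi /2}=-i\), the second integrand simplifies to \(2-\frac{\pi }{2}\), and the formulas above give \(\sigma =-\frac{1}{2}+i\left( 1-\frac{\pi }{4}\right) \) and \(\rho =-\frac{1}{2}|\sigma |^{-2}<0\), so assumption (A2) holds. A minimal numerical check (quadrature only; the choice \(c\equiv 1\) is ours):

```python
import numpy as np

# Numerical check of the sigma and rho formulas for the sample choice c(x) = 1.
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
trap = lambda f: np.sum(f[1:] + f[:-1]) * dx / 2      # trapezoid rule

s2 = np.sin(np.pi * x / 2) ** 2                       # sin(pi x / 2)^2
c = np.ones_like(x)                                   # assumed c(x) = 1

I_c = trap(c * s2)                                    # int_0^1 c sin^2 dx = 1/2
sigma = -I_c + 1j * trap((2 - 1j * (np.pi / 2) * np.exp(-1j * np.pi / 2) * c) * s2)
rho = -I_c / abs(sigma) ** 2

# closed form for c = 1: sigma = -1/2 + i(1 - pi/4), rho < 0
print(sigma, rho)
```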

9 Other Boundary Conditions

The results of Theorems 2 and 4 can be extended to boundary conditions other than (1.2), for example to two Dirichlet, two Robin (in particular, Neumann), or periodic boundary conditions. However, in those cases the transformation (2.5) is not appropriate anymore. Instead of (2.5), the following transformation can be used:

$$\begin{aligned} \left. \begin{array}{rcl} v_1(t,x)&{}=&{}\omega (\partial _tu(t,x)-u(t,x))+a(x,\lambda ) \partial _xu(t,x),\\ \;v_2(t,x)&{}=&{}\omega (\partial _tu(t,x)-u(t,x))-a(x,\lambda ) \partial _xu(t,x). \end{array} \right\} \end{aligned}$$
(9.1)

The inverse transformation is then given by

$$\begin{aligned} u(t,x)=\frac{e^t}{2\omega }\left( \int _0^te^{-s}(v_1(s,x)+v_2(s,x))ds -\frac{1}{1-e^{-2\pi }}\int _0^{2\pi }e^{-s}(v_1(s,x)+v_2(s,x))ds\right) .\nonumber \\ \end{aligned}$$
(9.2)

More precisely, if \(u \in C^2_{2\pi }({\mathbb {R}}\times [0,1])\) satisfies the second order differential equation in (1.3) and if \(v \in C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) is defined by (9.1), then v satisfies the first order system

$$\begin{aligned} \left. \begin{array}{l} \omega \partial _tv_1(t,x)-a(x,\lambda )\partial _xv_1(t,x)+\omega v_2(t,x)=[B(v,\omega ,\tau ,\lambda )](t,x)\\ \omega \partial _tv_2(t,x)+a(x,\lambda )\partial _xv_2(t,x)+\omega v_1(t,x)=[B(v,\omega ,\tau ,\lambda )](t,x), \end{array} \right\} \end{aligned}$$
(9.3)

with

$$\begin{aligned} \begin{array}{rcl} [B(v,\omega ,\tau ,\lambda )](t,x)&{}:=&{}\displaystyle b\left( x,\lambda ,[J v](t,x)/\omega ,[J v](t-\omega \tau ,x)/\omega , [(J+K)v](t,x),[K_\lambda v](t,x)\right) \\ &{}&{} \displaystyle -\frac{1}{2}\partial _xa(x,\lambda )(v_1(t,x)-v_2(t,x)),\\ {[J v]}(t,x)&{}:=&{}\displaystyle \frac{e^t}{2}\left( \int _0^te^{-s}(v_1(s,x)+v_2(s,x))ds-\frac{1}{1-e^{-2\pi }} \int _0^{2\pi }e^{-s}(v_1(s,x)+v_2(s,x))ds\right) \end{array} \end{aligned}$$

and with operators K and \(K_\lambda \) defined in (2.4). And vice versa, if \(v \in C^1_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) satisfies (9.3) and if \(u \in C^1_{2\pi }({\mathbb {R}}\times [0,1])\) is defined by (9.2), then u is \(C^2\)-smooth and satisfies the differential equation in (1.3). Note that up to now no boundary conditions have been used, only the periodicity in time.
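The identity behind (9.2) is that \(v_1+v_2=2\omega (\partial _tu-u)\) by (9.1) (the \(a\partial _xu\) terms cancel) and \(e^{-s}(\partial _su-u)=\frac{d}{ds}(e^{-s}u)\), so the inner integral telescopes, while the second term fixes the integration constant so that u is \(2\pi \)-periodic. A minimal numeric sketch at a fixed x, with the sample profile \(u(t)=\sin t\) and \(\omega =1\) (our illustrative choices):

```python
import numpy as np

# Check numerically that the averaging formula (9.2) inverts (9.1)
# for the 2*pi-periodic sample u(t) = sin(t), omega = 1.
omega = 1.0
n = 200001
s = np.linspace(0.0, 2 * np.pi, n)
ds = s[1] - s[0]
u = np.sin(s)
sum_v = 2 * omega * (np.cos(s) - np.sin(s))           # v1 + v2 = 2*omega*(u' - u)

f = np.exp(-s) * sum_v
# cumulative trapezoid rule: cum[k] approximates int_0^{s_k} f(s) ds
cum = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * ds / 2)))
full = cum[-1]                                        # int_0^{2*pi} f(s) ds

u_rec = np.exp(s) / (2 * omega) * (cum - full / (1 - np.exp(-2 * np.pi)))
err = np.max(np.abs(u_rec - u))
print(err)
```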

Now, for definiteness, suppose that u satisfies the Dirichlet boundary conditions

$$\begin{aligned} u(t,0)=u(t,1)=0. \end{aligned}$$
(9.4)

Then, according to (9.1), the function v satisfies

$$\begin{aligned} v_1(t,0)+v_2(t,0)=v_1(t,1)+v_2(t,1)=0. \end{aligned}$$
(9.5)

In this case the system (3.1) of partial integral equations reads

$$\begin{aligned} \left. \begin{array}{l} v_1(t,x)+c_1(x,0,\lambda )v_2(t+\omega A(x,0,\lambda ),0)\\ =\displaystyle -\int _0^x\frac{c_1(x,\xi ,\lambda )}{a(\xi ,\lambda )}[{{\mathcal {B}}}_1(v,\omega ,\tau ,\lambda )](t+\omega A(x,\xi ,\lambda ),\xi )d\xi ,\\ v_2(t,x)+c_2(x,1,\lambda )v_1(t-\omega A(x,1,\lambda ),1)\\ =\displaystyle \int _x^1\frac{c_2(x,\xi ,\lambda )}{a(\xi ,\lambda )}[{{\mathcal {B}}}_2(v,\omega ,\tau ,\lambda )](t-\omega A(x,\xi ,\lambda ),\xi )d\xi , \end{array} \right\} \end{aligned}$$
(9.6)

where the operator \({{\mathcal {B}}}\) is now defined by

$$\begin{aligned} \left. \begin{array}{rcl} [{{\mathcal {B}}}_1(v,\omega ,\tau ,\lambda )](t,x)&{}:=&{}[B(v,\omega ,\tau ,\lambda )](t,x)-b_1(x,\lambda )v_1(t,x) -\omega v_2(t,x),\\ \displaystyle [{{\mathcal {B}}}_2(v,\omega ,\tau ,\lambda )](t,x)&{}:=&{}[B(v,\omega ,\tau ,\lambda )](t,x) -b_2(x,\lambda )v_2(t,x)-\omega v_1(t,x), \end{array} \right\} \qquad \end{aligned}$$
(9.7)

where the functions \(b_1,b_2,c_1,c_2\) and A are introduced in Sects. 2 and 3.

More precisely, if \(v \in C_{2\pi }({\mathbb {R}}\times [0,1];{\mathbb {R}}^2)\) satisfies (9.6) and if \(\partial _tv\) exists and is continuous, then the function u, defined by (9.2), is \(C^2\)-smooth and satisfies the differential equation in (1.3) and the boundary condition (9.4).

The system (9.6) can be written, again, as an operator equation of the type (4.3) with operators \({\mathcal {C}}_1\) and \({\mathcal {D}}\) as in Sect. 4, with \({\mathcal {C}}_2\) slightly changed (cf. (4.1)) to

$$\begin{aligned}{}[{\mathcal {C}}_2(\omega ,\lambda )v](t,x)=-c_2(x,1,\lambda )v_1(t-\omega A(x,1,\lambda ),1) \end{aligned}$$

and with operator \({{\mathcal {B}}}\) from (9.7).

Now we can proceed as in Sects. 4–7. Specifically, the linearization of the operator \({{\mathcal {B}}}\) at \(v=0\) is, again, a sum of a partial integral operator and a pointwise operator with zero diagonal part (cf. (2.17)):

$$\begin{aligned} \partial _v{{\mathcal {B}}}(0,\omega ,\tau ,\lambda )={{\mathcal {J}}}(\omega ,\tau ,\lambda ) + {{\mathcal {K}}}(\omega ,\lambda ) \end{aligned}$$

with, for \(k=1,2\),

$$\begin{aligned}{}[{{\mathcal {J}}}_k(\omega ,\tau ,\lambda )v](t,x)= \left( \frac{b_3(x,\lambda )}{\omega }+b_5(x,\lambda )\right) [J v](t,x)+\frac{b_4(x,\lambda )}{\omega } [J v](t-\omega \tau ,x) \end{aligned}$$

(with functions \(b_3,b_4,b_5\) from (2.12)) and

$$\begin{aligned} \displaystyle [{{\mathcal {K}}}_1(\omega ,\lambda )v](t,x)=(b_2(x,\lambda )-\omega )v_2(t,x),\; \displaystyle [{{\mathcal {K}}}_2(\omega ,\lambda )v](t,x)=(b_1(x,\lambda )-\omega )v_1(t,x). \end{aligned}$$

The definition (4.26) of the function \(v_0\) has to be changed to

$$\begin{aligned} v_0(x):= \left[ \begin{array}{c} (i-1)u_0(x)+a_0(x)u'_0(x)\\ (i-1)u_0(x)-a_0(x)u'_0(x) \end{array} \right] , \end{aligned}$$

and similarly for \({{\varvec{v}}}_0\), \(v_0^1\) and \(v_0^2\). The definitions (4.27) of \(v_*\), \({{\varvec{v}}}_*\), \(v_*^1\) and \(v_*^2\) stay the same. The functions \(v_*^1\) and \(v_*^2\) satisfy the boundary conditions (9.5) because

$$\begin{aligned} v_{*1}(x)+v_{*2}(x)=2u_*(x)=0 \text{ for } x=0,1. \end{aligned}$$

Here \(u_0\) and \(u_*\) are eigenfunctions to the eigenvalue problems (1.4) (with \(\mu =i\) and \(\tau =\tau _0\)) and (1.6), where in both eigenvalue problems the boundary conditions are changed to (9.4). With these eigenfunctions, the formulas for \(\sigma \) and \(\rho \) in (A3) and the formula for \(\partial _\varepsilon ^2{\hat{\tau }}(0,0)\) in Theorem 4 remain unchanged.
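The modified \(v_0\) can be read off directly from (9.1): substituting the ansatz \(u(t,x)=\text{Re}(e^{it}u_0(x))\) with \(\omega =1\) gives \(\omega (\partial _tu-u)=\text{Re}\left( e^{it}(i-1)u_0\right) \). A minimal numeric sketch of this identity with sample data (the choices of \(u_0\) and \(a_0\) below are ours, made only for illustration, not the eigenfunctions of the problem):

```python
import numpy as np

# Check the modified definition of v_0 against the transformation (9.1):
# for u(t,x) = Re(e^{it} u0(x)) and omega = 1, formula (9.1) gives
# v1 = Re(e^{it}((i-1) u0 + a0 u0')), matching the displayed v_0.
x = np.linspace(0.0, 1.0, 101)
t = 0.7                                       # arbitrary fixed time
u0 = np.sin(np.pi * x / 2)                    # sample profile (illustrative)
u0p = (np.pi / 2) * np.cos(np.pi * x / 2)     # its derivative
a0 = 1 + x / 2                                # sample coefficient (illustrative)

u = np.real(np.exp(1j * t) * u0)
u_t = np.real(1j * np.exp(1j * t) * u0)       # d/dt Re(e^{it} u0) = Re(i e^{it} u0)
u_x = np.real(np.exp(1j * t) * u0p)

v1_direct = u_t - u + a0 * u_x                # (9.1) with omega = 1
v1_complex = np.real(np.exp(1j * t) * ((1j - 1) * u0 + a0 * u0p))
print(np.max(np.abs(v1_direct - v1_complex)))
```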