
Periodic Center Manifolds for DDEs in the Light of Suns and Stars

Journal of Dynamics and Differential Equations

Abstract

In this paper, we prove the existence of a periodic smooth finite-dimensional center manifold near a nonhyperbolic cycle in classical delay differential equations by using the Lyapunov–Perron method. The results are based on the rigorous functional analytic perturbation framework for dual semigroups (sun–star calculus). The generality of the dual perturbation framework ensures that the results extend to a much broader class of evolution equations.


Data Availability

Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.

References

  1. Bosschaert, M.M., Janssens, S.G., Kuznetsov, Yu.A.: Switching to nonhyperbolic cycles from codimension two bifurcations of equilibria of delay differential equations. SIAM J. Appl. Dyn. Syst. 19(1), 252–303 (2020). https://doi.org/10.1137/19m1243993


  2. Breda, D., Liessi, D.: Floquet theory and stability of periodic solutions of renewal equations. J. Dyn. Diff. Equat. 33(2), 677–714 (2020). https://doi.org/10.1007/s10884-020-09826-7


  3. Church, K., Liu, X.: Smooth centre manifolds for impulsive delay differential equations. J. Differ. Equat. 265(4), 1696–1759 (2018). https://doi.org/10.1016/j.jde.2018.04.021


  4. Church, K., Liu, X.: Computation of centre manifolds and some codimension-one bifurcations for impulsive delay differential equations. J. Differ. Equat. 267(6), 3852–3921 (2019). https://doi.org/10.1016/j.jde.2019.04.022


  5. Clément, P., Diekmann, O., Gyllenberg, M., Heijmans, H.J.A.M., Thieme, H.R.: Perturbation theory for dual semigroups II. Time-dependent perturbations in the sun-reflexive case. Proc. Royal Soc. Edinburgh: Sect. A Math. 109(1–2), 145–172 (1988). https://doi.org/10.1017/s0308210500026731


  6. Clément, P., Diekmann, O., Gyllenberg, M., Heijmans, H.J.A.M., Thieme, H.R.: Perturbation theory for dual semigroups I. The sun-reflexive case. Mathematische Annalen 277(4), 709–725 (1987). https://doi.org/10.1007/bf01457866


  7. Clément, P., Diekmann, O., Gyllenberg, M., Heijmans, H.J.A.M., Thieme, H.R.: Perturbation theory for dual semigroups III. Nonlinear Lipschitz continuous perturbations in the sun-reflexive case. In: Proceedings of Volterra Integrodifferential Equations in Banach Spaces and Applications 1987 (1989)

8. Clément, P., Diekmann, O., Gyllenberg, M., Heijmans, H.J.A.M., Thieme, H.R.: Perturbation theory for dual semigroups IV. The intertwining formula and the canonical pairing. Trends in Semigroup Theory and Applications (1989)

  9. Coleman, R.: Calculus on Normed Vector Spaces. Springer, New York (2012)


  10. Dhooge, A., Govaerts, W., Kuznetsov, Yu.A.: MATCONT: A MATLAB package for numerical bifurcation analysis of ODEs. ACM Trans. Math. Softw. 29(2), 141–164 (2003). https://doi.org/10.1145/779359.779362

  11. Diekmann, O., Getto, P., Gyllenberg, M.: Stability and bifurcation analysis of Volterra functional equations in the light of suns and stars. SIAM J. Math. Anal. 39(4), 1023–1069 (2008). https://doi.org/10.1137/060659211


12. Diekmann, O., van Gils, S.A.: The center manifold for delay equations in the light of suns and stars (1991). https://doi.org/10.1007/BFb0085429

  13. Diekmann, O., Gyllenberg, M.: Equations with infinite delay: blending the abstract and the concrete. J. Differ. Equat. 252(2), 819–851 (2012). https://doi.org/10.1016/j.jde.2011.09.038


14. Diekmann, O., Gyllenberg, M., Thieme, H.R.: Perturbation theory for dual semigroups V. Variation of constants formulas. In: Semigroup Theory and Evolution Equations: The Second International Conference, no. 135 in Lecture Notes in Pure and Applied Mathematics, pp. 107–123. Marcel Dekker (1991)

  15. Diekmann, O., Verduyn Lunel, S.M., van Gils, S.A., Walther, H.O.: Delay Equations. Springer, New York (1995)


  16. Engelborghs, K., Luzyanina, T., Roose, D.: Numerical bifurcation analysis of delay differential equations using DDE-BIFTOOL. ACM Trans. Math. Softw. 28(1), 1–21 (2002). https://doi.org/10.1145/513001.513002


  17. Hale, J.K., Verduyn Lunel, S.M.: Introduction to Functional Differential Equations. Springer, New York (1993)


18. Hille, E., Phillips, R.S.: Functional Analysis and Semi-groups. American Mathematical Society, Providence, R.I. (1957)


  19. Hupkes, H.J., Verduyn Lunel, S.M.: Center manifold theory for functional differential equations of mixed type. J. Dyn. Diff. Equat. 19(2), 497–560 (2006). https://doi.org/10.1007/s10884-006-9055-9


  20. Hupkes, H.J., Verduyn Lunel, S.M.: Center manifolds for periodic functional differential equations of mixed type. J. Differ. Equat. 245(6), 1526–1565 (2008). https://doi.org/10.1016/j.jde.2008.02.026


  21. Iooss, G.: Global characterization of the normal form for a vector field near a closed orbit. J. Differ. Equat. 76(1), 47–76 (1988). https://doi.org/10.1016/0022-0396(88)90063-0


  22. Iooss, G., Adelmeyer, M.: Topics in Bifurcation Theory and Applications, Advanced Series in Nonlinear Dynamics, vol. 3, second edn. World Scientific Publishing Co., Inc., River Edge, NJ (1998). https://doi.org/10.1142/3990

  23. Janssens, S.G.: A class of abstract delay differential equations in the light of suns and stars

  24. Janssens, S.G.: A class of abstract delay differential equations in the light of suns and stars. II

  25. Janssens, S.G.: On a normalization technique for codimension two bifurcations of equilibria of delay differential equations. Master’s thesis, Utrecht University (2010). http://dspace.library.uu.nl/handle/1874/312252

  26. Kuznetsov, Yu.A., Govaerts, W., Doedel, E.J., Dhooge, A.: Numerical periodic normalization for codim 1 bifurcations of limit cycles. SIAM J. Numer. Anal. 43(4), 1407–1435 (2005). https://doi.org/10.1137/040611306

  27. Kuznetsov, Yu.A.: Elements of Applied Bifurcation Theory, 4th edn. Springer, New York (2023)


  28. Riesz, F.: Démonstration nouvelle d’un théorème concernant les opérations fonctionnelles linéaires. Annales scientifiques de l’École normale supérieure 31, 9–14 (1914). https://doi.org/10.24033/asens.669


  29. Sieber, J., Engelborghs, K., Luzyanina, T., Samaey, G., Roose, D.: DDE-BIFTOOL Manual - Bifurcation analysis of delay differential equations

  30. Spek, L., Dijkstra, K., van Gils, S., Polner, M.: Dynamics of delayed neural field models in two-dimensional spatial domains. J. Differ. Equat. 317, 439–473 (2022). https://doi.org/10.1016/j.jde.2022.02.002


  31. Spek, L., van Gils, S.A., Kuznetsov, Yu.A., Polner, M.: Bifurcations of neural fields on the sphere (2022). https://doi.org/10.48550/ARXIV.2212.11785

  32. Spek, L., Kuznetsov, Yu.A., van Gils, S.A.: Neural field models with transmission delays and diffusion. J. Math. Neurosci. (2020). https://doi.org/10.1186/s13408-020-00098-5


  33. Szalai, R., Stépán, G.: Period doubling bifurcation and center manifold reduction in a time-periodic and time-delayed model of machining. J. Vib. Control 16(7–8), 1169–1187 (2010). https://doi.org/10.1177/1077546309341133


  34. Vanderbauwhede, A., van Gils, S.A.: Center manifolds and contractions on a scale of Banach spaces. J. Funct. Anal. 72(2), 209–224 (1987). https://doi.org/10.1016/0022-1236(87)90086-3


35. Witte, V.D., Govaerts, W., Kuznetsov, Yu.A., Meijer, H.: Analysis of bifurcations of limit cycles with Lyapunov exponents and numerical normal forms. Phys. D 269, 126–141 (2014). https://doi.org/10.1016/j.physd.2013.12.002


  36. Witte, V.D., Rossa, F.D., Govaerts, W., Kuznetsov, Yu.A.: Numerical periodic normalization for codim 2 bifurcations of limit cycles: computational formulas, numerical implementation, and examples. SIAM J. Appl. Dyn. Syst. 12(2), 722–788 (2013). https://doi.org/10.1137/120874904


Acknowledgements

The authors would like to thank Prof. Odo Diekmann (Utrecht University), Prof. Stephan van Gils (University of Twente), Dr. Kevin Church and Mattias Windmolders for helpful discussions and suggestions.

Funding

There is no funding for this project.

Author information


Contributions

Bram Lentjes and Len Spek wrote the main part of the text and the proofs of the theorems, propositions, lemmas and corollaries. Maikel M. Bosschaert provided many comments, helped with the proofs, and carried out an extensive review of the current manuscript. Yuri A. Kuznetsov formulated the problem, wrote several parts of the text, provided many suggestions, and carried out an extensive review of the current manuscript.

Corresponding author

Correspondence to Bram Lentjes.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Ethical Approval

This declaration is not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

A Spectral Decomposition

This appendix consists of two parts. In the first part, we will lift the spectral decomposition (Hypothesis 1) from X to \(X^{\odot \star }\) and in the second part we show that classical DDEs fulfill the requirements of Hypothesis 1 and Hypothesis 2.

A.1 Lifting the Spectral Decomposition from X to \(X^{\odot \star }\)

We consider the setting from the preface of Sect. 3 and prove that the spectral decomposition on X from Hypothesis 1 induces a spectral decomposition on \(X^\star \), \(X^\odot \) and most importantly on \(X^{\odot \star }\).

Proposition 6

Under the assumption of Hypothesis 1, the space \(X^{\star }\) and the backward evolutionary system \(U^\star \) have the following properties:

1.

    \(X^{\star }\) admits a direct sum decomposition

    $$\begin{aligned} X^\star = X^{\star }_{-}(s) \oplus X^{\star }_{0}(s) \oplus X^{\star }_{+}(s), \quad \forall s \in {\mathbb {R}}, \end{aligned}$$
    (28)

    where each summand is closed.

2.

    There exist three continuous time-dependent projectors \(P_{i}^{\star }: {\mathbb {R}} \rightarrow {\mathcal {L}}(X^{ \star })\) with \({{\,\textrm{ran}\,}}(P_i^{\star }(s)) = X_i^{ \star }(s)\) for any \(s \in {\mathbb {R}}\) and \(i \in \{-,0,+\}\).

3.

    There exists a constant \(N \ge 0\) such that \(\sup _{s \in {\mathbb {R}}}(\Vert P_{-}^{ \star }(s)\Vert + \Vert P_{0}^{ \star }(s)\Vert + \Vert P_{+}^{ \star }(s)\Vert ) = N < \infty \).

4.

    The projections are mutually orthogonal meaning that \(P_{i}^{ \star }(s)P_j^{ \star }(s) = 0\) for all \(i \ne j\) and \(s \in {\mathbb {R}}\) with \(i,j \in \{-,0,+\}\).

5.

    The projections commute with the backward evolutionary system: \(U^{ \star }(s,t)P_i^{ \star }(t) = P_i^{ \star }(s)U^{ \star }(s,t)\) for all \(i \in \{-,0,+\}\) and \(s \le t\).

6.

    Define the restrictions \(U_{i}^{ \star }(s,t): X_{i}^{ \star }(t) \rightarrow X_{i}^{ \star }(s)\) for \(i \in \{-,0,+\}\) and \(t \ge s\). The operators \(U_{0}^{ \star }(s,t)\) and \(U_{+}^{ \star }(s,t)\) are invertible and also forward evolutionary systems. Specifically, for any \(t,\tau ,s \in {\mathbb {R}}\) it holds

$$\begin{aligned} U_0^{ \star }(s,t) = U_0^{ \star }(s,\tau )U_0^{ \star }(\tau ,t), \quad U_+^{ \star }(s,t) = U_+^{ \star }(s,\tau )U_+^{ \star }(\tau ,t). \end{aligned}$$
    (29)
7.

    The decomposition (28) is an exponential trichotomy on \({\mathbb {R}}\) with the same constants as in Hypothesis 1.

Proof

We prove this proposition by separately showing that each statement holds. Throughout the proof, we assume that \(s \in {\mathbb {R}}\) is given.

1. It follows from parts 1 and 2 of Hypothesis 1 that, by taking duals,

$$\begin{aligned} X^\star = [{{\,\textrm{ran}\,}}(P_{-}(s))]^\star \oplus [{{\,\textrm{ran}\,}}(P_0(s))]^\star \oplus [{{\,\textrm{ran}\,}}(P_+(s))]^\star . \end{aligned}$$

If \(i \in \{-,0,+\}\), then it follows from [24, Lemma A.1] that the map \(\iota _i(s): {{\,\textrm{ran}\,}}(P_{i}(s)^\star ) \rightarrow [{{\,\textrm{ran}\,}}(P_{i}(s))]^\star \) defined as \(\iota _i(s) y^\star = y^\star |_{{{\,\textrm{ran}\,}}(P_{i}(s))}\) is an isometric isomorphism and \(P_{i}(s)^\star \in {\mathcal {L}}(X^\star )\). From this isometric isomorphism, the space \([X_{i}(s)]^\star = [{{\,\textrm{ran}\,}}(P_{i}(s))]^\star \) can be identified with \({{\,\textrm{ran}\,}}(P_{i}^\star (s)) =: X_{i}^{\star }(s)\) where we defined \(P_{i}^\star (s):= P_{i}(s) ^\star \) for any \(s \in {\mathbb {R}}\). Because \(P_{i}^\star (s)\) has closed range, \(X_{i}^\star (s)\) is closed.

2. It only remains to show that \(P_{i}^\star \) is continuous for each \(i \in \{-,0,+\}\). Consider \(h \in {\mathbb {R}}\), then

$$\begin{aligned} \Vert P_{i}^\star (s+h) - P_{i}^\star (s)\Vert = \Vert [P_{i}(s+h) - P_{i}(s)]^\star \Vert = \Vert P_{i}(s+h) - P_{i}(s)\Vert \rightarrow 0, \text{ as } h \rightarrow 0, \end{aligned}$$

because \(P_i\) is continuous by part 2 of Hypothesis 1.

3. Since \(\Vert P_i^\star (s)\Vert = \Vert P_i(s)^\star \Vert = \Vert P_i(s)\Vert \) we have that part 3 holds with the same constant N as in part 3 of Hypothesis 1.

4. Let \(i \ne j\), then \(P_i^\star (s)P_j^\star (s) = P_i(s)^\star P_j(s)^\star = (P_j(s) P_i(s))^\star = 0\) because \(P_j(s) P_i(s) = 0\) due to part 4 of Hypothesis 1.

5. Notice that for any \(s \le t\) we have that

$$\begin{aligned} U^\star (s,t)P_{i}^\star (t) = (P_i(t)U(t,s))^\star = (U(t,s)P_i(s))^\star = P_{i}^{\star }(s) U(t,s)^\star = P_{i}^{\star }(s) U^\star (s,t), \end{aligned}$$

where we used part 5 of Hypothesis 1 in the third equality.

6. The restrictions are well-defined. Because \(U_0(t,s)\) and \(U_{+}(t,s)\) are invertible we also have that \(U_0^\star (s,t) = U_0(t,s)^\star \) and \(U_{+}^\star (s,t) = U_{+}(t,s)^\star \) are invertible and so forward evolutionary systems. Let us now prove (29). Let \(t,\tau ,s \in {\mathbb {R}}\) be given, then

$$\begin{aligned} U_0^\star (s,t) = U_0(t,s)^\star = (U_0(t,\tau )U_0(\tau ,s))^\star = U_0(\tau ,s)^\star U_0(t,\tau )^\star = U_0^\star (s,\tau ) U_0^\star (\tau ,t), \end{aligned}$$

where we used (10) in the second equality. The proof for \(U_+^\star \) is analogous.

7. Let \(i = -\) and suppose that \(t \ge s\). Let \(x^\star \in X_{i}^\star (s) = {{\,\textrm{ran}\,}}(P_{i}^\star (s))\) be given. Since \(\iota _i(t)\) is an isometry for any \(t \in {\mathbb {R}}\),

$$\begin{aligned} \Vert U^\star (s,t)x^\star \Vert = \Vert \iota _i(s)[U^\star (s,t)x^\star ]\Vert = \sup _{ \begin{array}{c} x \in X_{i}(s) \\ \Vert x\Vert \le 1 \end{array}} |\langle U_i(t,s)x, x^\star \rangle | \le \Vert U_i(t,s)\Vert \ \Vert x^\star \Vert . \end{aligned}$$

Taking the supremum over all \(x^\star \) that satisfy \(\Vert x^\star \Vert \le 1\) we obtain \(\Vert U_i^\star (s,t)\Vert \le \Vert U_i(t,s)\Vert \) and this last part can be bounded by one of the three estimates in part 7 of Hypothesis 1. The cases for \(i \in \{0,+ \}\) are analogous. This completes the proof. \(\square \)

Proposition 7

Under the assumption of Hypothesis 1, the space \(X^{\odot }\) and the backward evolutionary system \(U^\odot \) have the following properties:

1.

    \(X^{\odot }\) admits a direct sum decomposition

    $$\begin{aligned} X^\odot = X^{\odot }_{-}(s) \oplus X^{\odot }_{0}(s) \oplus X^{\odot }_{+}(s), \quad \forall s \in {\mathbb {R}}, \end{aligned}$$
    (30)

    where each summand is closed.

2.

    There exist three continuous time-dependent projectors \(P_{i}^{\odot }: {\mathbb {R}} \rightarrow {\mathcal {L}}(X^{ \odot })\) with \({{\,\textrm{ran}\,}}(P_i^{\odot }(s)) = X_i^{ \odot }(s)\) for any \(s \in {\mathbb {R}}\) and \(i \in \{-,0,+\}\).

3.

    There exists a constant \(N \ge 0\) such that \(\sup _{s \in {\mathbb {R}}}(\Vert P_{-}^{ \odot }(s)\Vert + \Vert P_{0}^{ \odot }(s)\Vert + \Vert P_{+}^{ \odot }(s)\Vert ) = N < \infty \).

4.

    The projections are mutually orthogonal meaning that \(P_{i}^{ \odot }(s)P_j^{ \odot }(s) = 0\) for all \(i \ne j\) and \(s \in {\mathbb {R}}\) with \(i,j \in \{-,0,+\}\).

5.

The projections commute with the backward evolutionary system: \(U^{ \odot }(s,t)P_i^{ \odot }(t) = P_i^{ \odot }(s)U^{ \odot }(s,t)\) for all \(i \in \{-,0,+\}\) and \(s \le t\).

6.

    Define the restrictions \(U_{i}^{ \odot }(s,t): X_{i}^{ \odot }(t) \rightarrow X_{i}^{ \odot }(s)\) for \(i \in \{-,0,+\}\) and \(t \ge s\). The operators \(U_{0}^{ \odot }(s,t)\) and \(U_{+}^{ \odot }(s,t)\) are invertible and also forward evolutionary systems. Specifically, for any \(t,\tau ,s \in {\mathbb {R}}\) it holds

$$\begin{aligned} U_0^{ \odot }(s,t) = U_0^{ \odot }(s,\tau )U_0^{ \odot }(\tau ,t), \quad U_+^{ \odot }(s,t) = U_+^{ \odot }(s,\tau )U_+^{ \odot }(\tau ,t). \end{aligned}$$
    (31)
7.

    The decomposition (30) is an exponential trichotomy on \({\mathbb {R}}\) with the same constants as in Hypothesis 1.

Proof

Let \(s \in {\mathbb {R}}\) and \(i \in \{-,0,+\}\) be given. Notice directly that the Lipschitz continuity of B implies that \(U^\odot (s,t)\) is well-defined and \(X^\odot \)-invariant. We define for any s the map \(P_i^\odot (s):= P_i^\star (s) |_{X^\odot }\) and notice that part 6 of Proposition 6 implies that \(P_i^\star (s)\) maps \(X^\odot \) into itself. We denote the range of \(P_i^\odot (s)\) by \(X_i^\odot (s)\) and it is clear that

$$\begin{aligned} X_i^\odot (s) = X_i^\star (s) \cap X^\odot . \end{aligned}$$
(32)

Let us now prove the seven assertions.

1. Notice that \(X_i^\odot (s)\) is closed because \(X_i^\star (s)\) is closed (part 1 of Proposition 6) and \(X^\odot \) is closed. The result follows from (32).

2. As \(X^\odot \) is a subspace of \(X^\star \), we have for any \(h \in {\mathbb {R}}\) that

$$\begin{aligned} \Vert P_{i}^\odot (s+h) - P_{i}^\odot (s)\Vert = \Vert [P_{i}(s+h) - P_{i}(s)]^\odot \Vert \le \Vert [P_{i}(s+h) - P_{i}(s)]^\star \Vert \rightarrow 0, \text{ as } h \rightarrow 0, \end{aligned}$$

due to part 2 of Proposition 6. Hence, \(P_i^\odot \) is continuous.

3. This follows from part 3 of Proposition 6 because \(\Vert P_{i}^\odot (s)\Vert \le \Vert P_{i}^\star (s)\Vert \) due to the restriction.

4. This follows from part 4 of Proposition 6 due to the restriction.

5. This claim follows from part 5 of Proposition 6 together with the fact that \(U^\odot (s,t)\) is \(X^\odot \)-invariant.

6. For the well-definedness of the restriction, we have to check that \(U_i^\odot (s,t)\) takes values in \(X_i^\odot (s)\). Since \(U_i^\odot (s,t) = U_i^\star (s,t) |_{X^\odot }\) we get from part 6 of Proposition 6 that \(U_i^\odot (s,t)\) maps into \(X_i^\star (s)\). Because \(U^\odot (s,t)\) is \(X^\odot \)-invariant we also have that the restriction \(U_i^\odot (s,t)\) is \(X^\odot \)-invariant and so \(U_i^\odot (s,t)\) takes values in \(X^\odot \). To conclude, \(U_i^\odot (s,t)\) takes values in \(X_i^\star (s) \cap X^\odot = X_i^\odot (s)\) by (32). The remaining claims follow immediately because of the restriction.

7. Because of the restriction we have that \(\Vert U_{i}^\odot (s,t)\Vert = \Vert U_i(t,s)^\odot \Vert \le \Vert U_i(t,s)^\star \Vert = \Vert U_i^\star (s,t)\Vert \) and the right-hand side can now be estimated by the upper bounds given in part 7 of Proposition 6. \(\square \)

Proposition 8

Under the assumption of Hypothesis 1, the space \(X^{\odot \star }\) and the forward evolutionary system \(U^{\odot \star }\) have the following properties:

1.

    \(X^{\odot \star }\) admits a direct sum decomposition

    $$\begin{aligned} X^{\odot \star } = X^{\odot \star }_{-}(s) \oplus X^{\odot \star }_{0}(s) \oplus X^{\odot \star }_{+}(s), \quad \forall s \in {\mathbb {R}}, \end{aligned}$$
    (33)

    where each summand is closed.

2.

    There exist three continuous time-dependent projectors \(P_{i}^{\odot \star }: {\mathbb {R}} \rightarrow {\mathcal {L}}(X^{\odot \star })\) with \({{\,\textrm{ran}\,}}(P_i^{\odot \star }(s)) = X_i^{\odot \star }(s)\) for any \(s \in {\mathbb {R}}\) and \(i \in \{-,0,+\}\).

3.

    There exists a constant \(N \ge 0\) such that \(\sup _{s \in {\mathbb {R}}}(\Vert P_{-}^{\odot \star }(s)\Vert + \Vert P_{0}^{\odot \star }(s)\Vert + \Vert P_{+}^{\odot \star }(s)\Vert ) = N < \infty \).

4.

    The projections are mutually orthogonal meaning that \(P_{i}^{\odot \star }(s)P_j^{\odot \star }(s) = 0\) for all \(i \ne j\) and \(s \in {\mathbb {R}}\) with \(i,j \in \{-,0,+\}\).

5.

    The projections commute with the forward evolutionary system: \(U^{\odot \star }(t,s)P_i^{\odot \star }(s) = P_i^{\odot \star }(t)U^{\odot \star }(t,s)\) for all \(i \in \{-,0,+\}\) and \(t \ge s\).

6.

    Define the restrictions \(U_{i}^{\odot \star }(t,s): X_{i}^{\odot \star }(s) \rightarrow X_{i}^{\odot \star }(t)\) for \(i \in \{-,0,+\}\) and \(t \ge s\). The operators \(U_{0}^{\odot \star }(t,s)\) and \(U_{+}^{\odot \star }(t,s)\) are invertible and also backward evolutionary systems. Specifically, for any \(t,\tau ,s \in {\mathbb {R}}\) it holds

    $$\begin{aligned} U_0^{\odot \star }(t,s) = U_0^{\odot \star }(t,\tau )U_0^{\odot \star }(\tau ,s), \quad U_+^{\odot \star }(t,s) = U_+^{\odot \star }(t,\tau )U_+^{\odot \star }(\tau ,s). \end{aligned}$$
7.

    The decomposition (33) is an exponential trichotomy on \({\mathbb {R}}\) with the same constants as in Hypothesis 1.

Proof

Recall that \(X^{\odot }\) is a Banach space and \(U^\odot \) a backward evolutionary system on \(X^\odot \). Therefore, we can apply Proposition 6 with X replaced by \(X^\odot \) and U replaced by \(U^\odot \), by going over from a forward towards a backward evolutionary system. Hence, we obtain the desired result. \(\square \)

A.2 Verification of Hypothesis 1 and Hypothesis 2 for Classical DDEs

In order to verify both hypotheses, we have to construct three time-dependent projectors \(P_{i}\) with \(i \in \{-,0,+\}\). Before we do this, let us first define the time-dependent spectral projection (at time s) as \(P_\lambda (s) \in {\mathcal {L}}(X)\) with range \(E_\lambda (s)\) and kernel \(R_\lambda (s)\) that can be represented via the holomorphic functional calculus as the Dunford integral

$$\begin{aligned} P_\lambda (s):= \frac{1}{2 \pi i} \oint _{\partial C_\lambda } (zI-U(s+T,s))^{-1}dz, \end{aligned}$$

where \(C_\lambda \subset {\mathbb {C}}\) is a sufficiently small open disk centered at \(\lambda \), with boundary \(\partial C_\lambda \), such that \(\lambda \) is the only Floquet multiplier inside \(C_\lambda \). Recall from the compactness property of \(U(s+T,s)\) that the Floquet multipliers are isolated, and hence such a contour \(\partial C_\lambda \) in the complex plane can always be chosen.
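As a purely illustrative aside (not part of the paper's argument), the following Python sketch approximates such a Dunford integral numerically for a small matrix playing the role of the monodromy operator \(U(s+T,s)\); the matrix, the multiplier and the contour radius are assumptions made only for this example.

```python
# Minimal numerical sketch (illustrative only): approximate the Riesz/Dunford
# projector P_lambda = (1/(2*pi*i)) * \oint (zI - M)^{-1} dz for a small matrix M
# standing in for the monodromy operator U(s+T, s).  M, lam and r are assumptions.
import numpy as np

def riesz_projector(M, lam, r, n_nodes=400):
    """Trapezoidal-rule approximation of the contour integral over the circle
    of radius r centred at lam (the circle should enclose only this multiplier)."""
    dim = M.shape[0]
    P = np.zeros((dim, dim), dtype=complex)
    for k in range(n_nodes):
        theta = 2.0 * np.pi * k / n_nodes
        z = lam + r * np.exp(1j * theta)                  # point on the contour
        dz = 1j * r * np.exp(1j * theta) * (2.0 * np.pi / n_nodes)
        P += np.linalg.solve(z * np.eye(dim) - M, np.eye(dim)) * dz
    return P / (2.0j * np.pi)

# Toy "monodromy matrix" with Floquet multipliers 1.0 (critical) and 0.4 (stable).
M = np.array([[1.0, 1.0],
              [0.0, 0.4]])
P1 = riesz_projector(M, lam=1.0, r=0.3)
print(np.allclose(P1 @ P1, P1))      # P1 is (numerically) a projection
print(np.allclose(M @ P1, P1 @ M))   # P1 commutes with the monodromy matrix
```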

Proposition 9

The map \(P_\lambda : {\mathbb {R}} \rightarrow {\mathcal {L}}(X)\) is continuous and T-periodic.

Proof

Let a starting time \(s \in {\mathbb {R}}\) and an arbitrary \(h \in {\mathbb {R}}\) be given. Let \(C_\lambda \) be an open disk in \({\mathbb {C}}\) centered at \(\lambda \) whose boundary \(\partial C_\lambda \) is a circle with sufficiently small radius \(r > 0\), such that \(\lambda \) is the only Floquet multiplier in \(C_\lambda \). Hence,

$$\begin{aligned} \Vert P_\lambda (s+h) - P_\lambda (s) \Vert = \frac{1}{2 \pi } \bigg \Vert \oint _{\partial C_\lambda } \big [(zI-U(s+T + h,s + h))^{-1} - (zI-U(s+T,s))^{-1} \big ] \, dz\bigg \Vert , \end{aligned}$$

because the Floquet multipliers are independent of the starting time. Notice that the integrand is just a difference of resolvents and due to the second resolvent identity [18, Theorem 4.8.2] we notice that the integrand equals

$$\begin{aligned} R(z,h) [U(s+T+h,s+h) - U(s+T,s)] R(z,0), \quad \forall z \in \partial C_\lambda , \end{aligned}$$

where for any \(h \in {\mathbb {R}}\) the resolvent map \(R(\cdot ,h): \partial C_\lambda \rightarrow {\mathcal {L}}(X)\) is defined as \(R(z,h) := (zI-U(s+T + h,s + h))^{-1}\). Notice that \(R(\cdot ,h)\) indeed takes values in \({\mathcal {L}}(X)\) due to the bounded inverse theorem. Substituting this back into the expression above yields

$$\begin{aligned} \Vert P_\lambda (s+h) - P_\lambda (s) \Vert \le \frac{1}{2 \pi } \Vert U(s+T+h,s+h) - U(s+T,s) \Vert \oint _{\partial C_\lambda } \Vert R(z,h)\Vert \ \Vert R(z,0)\Vert \, dz. \end{aligned}$$

We claim that for any fixed \(h \in {\mathbb {R}}\) the map \(\partial C_\lambda \ni z \mapsto \Vert R(z,h)\Vert \in {\mathbb {R}}\) is continuous. Indeed, fix a \(h \in {\mathbb {R}}\) and choose \(u \in \partial C_\lambda \) such that \(|z-u| \rightarrow 0\), where \(|\cdot |\) denotes the modulus on \({\mathbb {C}}\). The reverse triangle inequality and the first resolvent identity [18, Theorem 4.8.1] imply

$$\begin{aligned} | \ \Vert R(u,h)\Vert - \Vert R(z,h)\Vert \ | \le |z-u| \ \Vert R(u,h)\Vert \ \Vert R(z,h)\Vert \rightarrow 0, \quad \text{ as } |z-u| \rightarrow 0. \end{aligned}$$

Since \(\partial C_\lambda \) is compact, we have that the image \(\{\Vert R(z,h)\Vert : z \in \partial C_\lambda \}\) is a compact subset of \({\mathbb {R}}\) and hence this set is bounded, say it is contained in the interval \([0,M_h]\) for some constant \(M_h > 0\), for each fixed \(h \in {\mathbb {R}}\). We obtain

$$\begin{aligned} \Vert P_\lambda (s+h) - P_\lambda (s) \Vert&\le rM_0 M_h \Vert U(s+T+h,s+h) - U(s+T,s) \Vert \rightarrow 0, \quad \text{ as } h \rightarrow 0, \end{aligned}$$

by [5, Lemma 5.2] since \((s+T,s) \in \Omega _{\mathbb {R}}\). The T-periodicity holds due to [15, Corollary XIII.2.2] and the fact that the Floquet multipliers are independent of the starting time [15, Theorem XIII.3.3]. \(\square \)

We also need the associated spectral projections on the unstable, center and stable subspaces. Denote the spectral projection on the unstable subspace (at time s) and the spectral projection on the center subspace (at time s) by the operators \(P_+(s) \in {\mathcal {L}}(X)\) with range \(X_{+}(s)\) and \(P_0(s) \in {\mathcal {L}}(X)\) with range \(X_0(s)\), defined as

$$\begin{aligned} P_{+}(s):= \sum _{\lambda \in \Lambda _+} P_\lambda (s), \quad P_{0}(s):= \sum _{\lambda \in \Lambda _0} P_{\lambda }(s). \end{aligned}$$

Define the spectral projection on the stable subspace (at time s) as \(P_{-}(s):= I - P_0(s) - P_+(s) \in {\mathcal {L}}(X)\); then \(P_{-}(s)\) is indeed the projection onto the stable subspace \(X_{-}(s)\), see [3, Lemma 7.2.2]. The proof of the following result is almost the same as that of [3, Theorem 7.2.1], but we give it for the sake of completeness.
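For illustration only, the following self-contained Python sketch mimics this construction for a diagonalizable toy monodromy matrix: the spectral projectors are built from eigenvectors, summed over the unstable and critical multipliers, and the stable projector is taken as the complement; the matrix and its multipliers are assumptions for the example.

```python
# Illustrative sketch (not the paper's method): build spectral projectors of a
# toy diagonalizable monodromy matrix from its eigenvectors, sum them over the
# unstable and critical Floquet multipliers, and define the stable projector as
# the complement, as in the text.  The matrix below is an assumption.
import numpy as np

S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])                       # change of basis
M = S @ np.diag([1.8, 1.0, 0.4]) @ np.linalg.inv(S)   # multipliers 1.8, 1.0, 0.4

w, V = np.linalg.eig(M)
Vinv = np.linalg.inv(V)
proj = [np.outer(V[:, i], Vinv[i, :]) for i in range(3)]  # rank-one spectral projectors

P_plus = sum(proj[i] for i in range(3) if abs(w[i]) > 1 + 1e-8)        # Lambda_+
P_zero = sum(proj[i] for i in range(3) if abs(abs(w[i]) - 1) <= 1e-8)  # Lambda_0
P_minus = np.eye(3) - P_plus - P_zero                                  # complement

print(np.allclose(P_minus @ P_minus, P_minus))          # P_- is a projection
print(np.allclose(P_plus @ P_zero, np.zeros((3, 3))))   # mutual orthogonality
```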

Proposition 10

The setting of (DDE) satisfies Hypothesis 1.

Proof

We verify the seven criteria step by step.

1. The decomposition (9) can also be used in the case where \(E_\lambda (s)\) is replaced with the finite-dimensional vector space \(X_{+}(s) \oplus X_{0}(s)\). Then, \(X = X_{+}(s) \oplus X_{0}(s) \oplus R(s)\) for some vector space R(s). We have to show \(R(s) = X_{-}(s)\). By the decomposition (9) we know that \(P_{+0}(s):= P_+(s) + P_0(s) \) is a projection with range \(X_{+0}(s) = X_{+}(s) \oplus X_{0}(s)\) and \(R(s) = \ker P_{+0}(s)\), and notice that \(R(s) = \cap _{\lambda \in \Lambda _{0+}} \ker (P_{\lambda }(s)) = X_{-}(s)\). The spaces \(X_{+}(s)\) and \(X_{0}(s)\) are automatically closed since they are finite-dimensional. To show that \(X_{-}(s)\) is closed, notice that for each \(\lambda \in \Lambda _{0+}\) the space \(R_{\lambda }(s)\) is closed and because a finite intersection of closed sets is closed, the result follows from (26).

2. For \(P_{+}\) and \(P_{0}\) the claim about the range follows immediately from their definition and the claim about \(P_{-}\) follows from the fact that \(P_{-}(s)\) is the projection on \(X_{-}(s)\). To show the continuity statement, recall from Proposition 9 that for any Floquet multiplier \(\lambda \), the map \(P_\lambda \) is continuous. As \(P_{+}\) and \(P_{0}\) are finite sums of such continuous projectors, it follows that both projectors are continuous. Since \(P_{-} = I - P_0 - P_{+}\) it follows that \(P_{-}\) is also continuous.

3. Since \(P_- + P_0 + P_+ = I\), we have \(\Vert P_{-}(t)\Vert \le 1 + \Vert P_{0}(t)\Vert + \Vert P_{+}(t)\Vert \) for all \(t \in {\mathbb {R}}\). By T-periodicity, it therefore remains to prove that \(t\mapsto \Vert P_{0}(t)\Vert \) and \(t \mapsto \Vert P_{+}(t)\Vert \) are uniformly bounded on [0, T]. We will only show the claim for \(P_{0}\) since the proof for \(P_{+}\) is similar.

Suppose for a moment that parts 5 and 7 are satisfied; they will be proven later, independently of this part. Assume that \(t \mapsto \Vert P_{0}(t)\Vert \) is not uniformly bounded on [0, T]. Then there exist sequences \((x_{n})_{n \in {\mathbb {N}}} \subset X\) and \((t_{n})_{n \in {\mathbb {N}}} \subset [0,T]\) such that \(\Vert x_n\Vert _{\infty } = 1\) and \(\Vert P_0(t_n)x_n\Vert _\infty \ge n\). Then for a given \(\varepsilon > 0\), there is a constant \(K_\varepsilon > 0\) such that

$$\begin{aligned} n&\le \Vert P_0(t_n)x_n\Vert _\infty \le \Vert U_{0}(t_n,T)\Vert \ \Vert P_0(T)\Vert \ \Vert U_0(T,t_n)\Vert \le K_\varepsilon ^2 e^{2 \varepsilon T} \Vert P_0(T)\Vert , \end{aligned}$$

which is a contradiction, since \(n \in {\mathbb {N}}\) can be taken arbitrarily large.

4. Let \(i,j \in \{-,0,+\}\) with \(i \ne j\) and let \(\varphi \in X\). By the decomposition proved in part 1 we have that \(\varphi = \varphi _{i}(s) + \varphi _{j}(s) + \varphi _{k}(s)\), where \(k \in \{-,0,+ \}\) is such that \(k \ne i\) and \(k \ne j\). Then from the interplay between the ranges and kernels of the projections it follows that

$$\begin{aligned} P_{i}(s)P_{j}(s) \varphi = P_{i}(s)P_{j}(s)[\varphi _{i}(s) + \varphi _{j}(s) + \varphi _{k}(s)] = P_{i}(s) \varphi _{j}(s) = 0, \end{aligned}$$

which proves this part.

5. It is proven in [15, Theorem XIII.3.3] that

$$\begin{aligned} P(t)U(t+T,s+jT) = U(t+T,s+jT)P(s) \end{aligned}$$
(34)

for \(j \in {\mathbb {N}}\) chosen in such a way that \(s+(j-1)T \le t < s+jT\) and for \(P \in \{P_{-},P_0,P_{+} \}\). Hence,

$$\begin{aligned} P(t)U(t,s)&= P(t)U(t,s+jT)U(s+jT,s) \\&= U(t,s+jT)U(s+T,s)^j P^j(s) = U(t,s)P(s), \end{aligned}$$

where we have used that P(s) is a projection that commutes with \(U(s+T,s)\). This last claim follows from setting \(s=t\) and \(j=1\) in (34) together with [15, Corollary XIII.2.2].

6. Notice that \(U_{+}(t,s)\) and \(U_0(t,s)\) are defined for all \(t,s \in {\mathbb {R}}\) because they are restricted to a finite-dimensional space. Since \(U_{+}(t,s) U_{+}(s,t) = I = U_{+}(s,t) U_{+}(t,s)\) we have that \(U_{+}(t,s)\) is invertible with inverse \(U_{+}(t,s)^{-1} = U_{+}(s,t)\). Similarly \(U_{0}(t,s)^{-1} = U_{0}(s,t)\). To show the remaining part, that is (10), we have six different cases depending on the location of \(t,\tau ,s \in {\mathbb {R}}\). This is a straightforward computation and will be omitted.

7. We will start with the center part; the stable and unstable parts then follow by similar reasoning. Let \(\varepsilon > 0\) and \(s \in {\mathbb {R}}\) be given. As the map \(t \mapsto U_0(t,s) \varphi \) is continuous for any \(\varphi \in X\) and \(t \ge s\), we know

$$\begin{aligned} \sup _{s \le t \le s+T} \Vert U_0(t,s)\varphi \Vert _\infty < \infty , \quad \forall \varphi \in X. \end{aligned}$$

By the principle of uniform boundedness, we get

$$\begin{aligned} \sup _{s \le t \le s+T} \Vert U_0(t,s)\Vert \le K, \end{aligned}$$

for some \(K > 0\). Because the spectrum of \(U_0(s+T,s)\) lies on the unit circle, we have by the spectral radius formula also known as the Gelfand-Beurling formula that

$$\begin{aligned} 1 = \max _{\lambda \in \sigma (U_0(s+T,s))} |\lambda | = \lim _{j \rightarrow \infty } \Vert U_0(s+T,s)^j\Vert ^{\frac{1}{j}} \end{aligned}$$

and so there exists an integer \(k_\varepsilon > 0\) such that \(\Vert U_0(s+T,s)^{k_\varepsilon }\Vert < 1+ \varepsilon T\) and denote

$$\begin{aligned} K_\varepsilon := K \max _{j=0,\dots ,k_\varepsilon -1} \Vert U_0(s+T,s)^j\Vert . \end{aligned}$$

Now, let \(m_t\) be the largest integer such that \(s+m_t k_\varepsilon T \le t\) and \(m_t^{\star } \in \{0,\dots ,k_\varepsilon - 1 \}\) the largest integer such that \(s+m_t k_\varepsilon T + m_t^\star T \le t\). Then,

$$\begin{aligned} U_0(t,s)&= U_0(t,s+m_tk_\varepsilon T + m_t^\star T)U_0(s+m_tk_\varepsilon T + m_t^\star T,s+m_tk_\varepsilon T)U_0(s+m_t k_\varepsilon T,s)\\&= U_0(t- m_tk_\varepsilon T - m_t^\star T,s)U_0(s+m_t^\star T,s)U_0(s+m_t k_\varepsilon T,s)\\&= U_0(t- m_tk_\varepsilon T - m_t^\star T,s) U_0(s+ T,s)^{m_t^\star }U_0(s+T,s)^{m_t k_\varepsilon }. \end{aligned}$$

We can make the estimate

$$\begin{aligned} \begin{aligned} \Vert U_0(t,s)\Vert \le K_\varepsilon \Vert U_0(s+T,s)^{k_\varepsilon }\Vert ^{m_t} \le K_\varepsilon [(1 + \varepsilon T)^{\frac{1}{\varepsilon T}}]^{\varepsilon (t-s)} \le K_{\varepsilon }e^{\varepsilon (t-s)}, \end{aligned} \end{aligned}$$

since the function \((0,\infty ) \ni x \mapsto (1+\frac{1}{x})^x \in {\mathbb {R}}\) is monotonically increasing. The proof is analogous when \(t \le s\) and so we obtain \(\Vert U_0(t,s)\Vert \le K_{\varepsilon }e^{\varepsilon |t-s|}\). The proofs for the stable and unstable parts are analogous. \(\square \)
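As a numerical illustration of the spectral radius argument in part 7 (again outside the proof), the sketch below uses a non-normal matrix with spectrum on the unit circle as a stand-in for \(U_0(s+T,s)\): its norm exceeds one, yet \(\Vert A^j\Vert ^{1/j} \rightarrow 1\), which is the content of the Gelfand–Beurling formula invoked above; the matrix is an assumption chosen for the example.

```python
# Illustrative sketch: Gelfand-Beurling formula for a non-normal matrix whose
# spectrum lies on the unit circle (a stand-in for U_0(s+T, s)).  Although
# ||A|| > 1, the powers satisfy ||A^j||^(1/j) -> 1, so they grow subexponentially.
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # spectrum {exp(+-i*theta)}
S = np.array([[1.0, 10.0],
              [0.0,  1.0]])                       # change of basis making A non-normal
A = S @ R @ np.linalg.inv(S)                      # same unit-circle spectrum

print(np.linalg.norm(A, 2))                       # norm well above 1
for j in (1, 5, 20, 100, 500):
    norm_j = np.linalg.norm(np.linalg.matrix_power(A, j), 2)
    print(j, norm_j ** (1.0 / j))                 # tends to 1 as j grows
```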

Denote for any Floquet multiplier \(\lambda \) and any \(s \in {\mathbb {R}}\) the time-dependent extended spectral projection \(P_\lambda ^{\odot \star }(s) \in {\mathcal {L}}(X^{\odot \star })\) with range \(jE_\lambda (s)\) and kernel \(R_\lambda ^{\odot \star }(s)\), where \(R_\lambda ^{\odot \star } (s)\) is called the extended complementary (generalized) subspace (at time s) coming from the decomposition \(X^{\odot \star } = jE_\lambda (s) \oplus R_\lambda ^{\odot \star }(s)\). Define the extended unstable subspace (at time s) and extended center subspace (at time s) as

$$\begin{aligned} X_{+}^{\odot \star }(s):= j(X_+(s)) = \bigoplus _{\lambda \in \Lambda _+} jE_{\lambda }(s), \quad X_0^{\odot \star }(s):= j(X_0(s)) = \bigoplus _{\lambda \in \Lambda _0} jE_{\lambda }(s), \end{aligned}$$

and notice via extended complementary (generalized) subspaces that the extended stable subspace (at time s) can be defined as

$$\begin{aligned} X_{-}^{\odot \star }(s):= \bigcap _{\lambda \in \Lambda _0 \cup \Lambda _+} R_{\lambda }^{\odot \star }(s). \end{aligned}$$

The construction of \(X_{+}^{\odot \star }(s)\) and \(X_{0}^{\odot \star }(s)\) directly shows that Hypothesis 2 is satisfied.

B Smoothness and Periodicity of the Center Manifold

This section of the appendix consists of three parts. Firstly, we show that the map \({\mathcal {C}}\) is not only fiberwise Lipschitz, but Lipschitz continuous in the second component, with a Lipschitz constant independent of the fiber. The proof of this claim is inspired by [3, Corollary 5.4.1.1]. Secondly, we prove via the theory of contractions on scales of Banach spaces, see [15, Section IX.6, Appendix IV] and [34], that the map \({\mathcal {C}}\) is \(C^k\)-smooth. To do this, we combine the ideas from [3, Section 8], [15, Section IX.7] and [20]. Lastly, under the assumption of T-periodicity of the time-dependent nonlinear perturbation R in the first component, we show that there exists a neighborhood of 0 in X on which the center manifold is T-periodic. The proof of this result is inspired by [3, Lemma 8.3.1 and Theorem 8.3.1].

Corollary 3

There exists a constant \(L > 0\) such that \(\Vert {\mathcal {C}}(t,\varphi ) - {\mathcal {C}}(t,\psi )\Vert \le L \Vert \varphi - \psi \Vert \) for all \(t \in {\mathbb {R}}\) and \(\varphi ,\psi \in X_0(t)\).

Proof

Let \(t \in {\mathbb {R}}\) and \(\varphi ,\psi \in X_0(t)\) be given. Notice that

$$\begin{aligned} {\mathcal {C}}(t,\varphi ) = u_t^\star (\varphi )(t) = [{\mathcal {G}}_t(u_t^\star (\varphi ),\varphi )](t) = \varphi + {\mathcal {K}}_t^\eta [{\tilde{R}}_{\delta ,t}(u_t^\star (\varphi ))](t). \end{aligned}$$

By Proposition 3, we know there exists a constant \(C_{\eta } > 0,\) independent of t, such that \(\Vert {\mathcal {K}}_t^\eta \Vert \le C_\eta \). Hence, from Corollary 1 and Theorem 1 we get

$$\begin{aligned} \Vert {\mathcal {C}}(t,\varphi ) - {\mathcal {C}}(t,\psi )\Vert&\le \Vert \varphi - \psi \Vert + \Vert {\mathcal {K}}_t^\eta [ {\tilde{R}}_{\delta ,t}(u_t^\star (\varphi )) - {\tilde{R}}_{\delta ,t}(u_t^\star (\psi ))](t)\Vert \\&\le \Vert \varphi - \psi \Vert + \Vert {\mathcal {K}}_t^\eta \Vert \sup _{s \in {\mathbb {R}}} \Vert [{\tilde{R}}_{\delta ,t}(u_t^\star (\varphi )) - {\tilde{R}}_{\delta ,t}(u_t^\star (\psi ))](s)\Vert e^{-\eta |t-s|} \\&\le \Vert \varphi - \psi \Vert + C_\eta \Vert {\tilde{R}}_{\delta ,t}(u_t^\star (\varphi )) - {\tilde{R}}_{\delta ,t}(u_t^\star (\psi ))\Vert _{\eta ,t} \\&\le (1 + 2C_\eta L_{R_{\delta }} K_\varepsilon )\Vert \varphi - \psi \Vert . \end{aligned}$$

Hence \(L = 1 + 2C_\eta L_{R_{\delta }}K_\varepsilon > 0\) is the Lipschitz constant we were looking for. \(\square \)

The following lemma will be important to prove smoothness of \({\mathcal {C}}\) and \({\mathcal {W}}^c\).

Lemma 2

([15, Lemma XII.6.6 and XII.6.7]) Let \(Y_0,Y,Y_1\) and \(\Lambda \) be Banach spaces with continuous embeddings \(J_0: Y_0 \hookrightarrow Y\) and \(J: Y \hookrightarrow Y_1\). Consider the fixed point problem \(y = f(y,\lambda )\) for \(f: Y \times \Lambda \rightarrow Y\). Suppose that the following conditions hold.

1.

    The function \(g: Y_0 \times \Lambda \rightarrow Y_1\) defined as \( g(y_0,\lambda ):= Jf(J_0y_0,\lambda )\) is of the class \(C^1\) and there exist mappings

    $$\begin{aligned} f^{(1)}&: J_0 Y_0 \times \Lambda \rightarrow {\mathcal {L}}(Y), \\ f_1^{(1)}&: J_0Y_0 \times \Lambda \rightarrow {\mathcal {L}}(Y_1), \end{aligned}$$

    such that

$$\begin{aligned} D_1 g(y_0,\lambda ) \xi = Jf^{(1)}(J_0y_0,\lambda )J_0 \xi , \quad \forall (y_0,\lambda ,\xi ) \in Y_0 \times \Lambda \times Y_0 \end{aligned}$$

    and

    $$\begin{aligned} Jf^{(1)}(J_0y_0,\lambda )y = f_1^{(1)}(J_0y_0,\lambda )Jy, \quad \forall (y_0,\lambda ,y) \in Y_0 \times \Lambda \times Y. \end{aligned}$$
2.

    There exists a \(\kappa \in [0,1)\) such that for all \(\lambda \in \Lambda \) the map \(f(\cdot ,\lambda ): Y \rightarrow Y\) is Lipschitz continuous with Lipschitz constant \(\kappa \), independent of \(\lambda \). Furthermore, for any \(\lambda \in \Lambda \) the maps \(f^{(1)}(\cdot ,\lambda )\) and \(f_1^{(1)}(\cdot ,\lambda )\) are uniformly bounded by \(\kappa \).

3.

    Under the previous condition, the unique fixed point \(\Psi : \Lambda \rightarrow Y\) satisfies \(\Psi (\lambda ) = f(\Psi (\lambda ),\lambda )\) and can be written as \(\Psi = J_0 \circ \Phi \) for some continuous \(\Phi : \Lambda \rightarrow Y_0\).

4.

    The function \(f_0: Y_0 \times \Lambda \rightarrow Y\) defined by \(f_0(y_0,\lambda ) = f(J_0y_0,\lambda )\) has continuous partial derivative

$$\begin{aligned} D_2 f_0: Y_0 \times \Lambda \rightarrow {\mathcal {L}}(\Lambda ,Y). \end{aligned}$$
5.

    The mapping \(Y_0 \times \Lambda \ni (y,\lambda ) \mapsto J \circ f^{(1)}(J_0y,\lambda ) \in {\mathcal {L}}(Y,Y_1)\) is continuous.

Then the map \(J \circ \Psi \) is of the class \(C^1\) and \(D(J \circ \Psi )(\lambda ) = J \circ {\mathcal {A}}(\lambda )\) for all \(\lambda \in \Lambda \), where \(A = {\mathcal {A}}(\lambda ) \in {\mathcal {L}}(\Lambda ,Y)\) is the unique solution of the fixed point equation

$$\begin{aligned} A = f^{(1)}(\Psi (\lambda ),\lambda ) A + D_2 f_0(\Psi (\lambda ),\lambda ), \end{aligned}$$

formulated in \({\mathcal {L}}(\Lambda ,Y)\).

An important observation about the dependence of \(u_s^\star \) on \(\delta \) is presented in the following lemma. To simplify the notation, we define the map \({\hat{P}}_0: {{\,\textrm{BC}\,}}_s^{\eta }({\mathbb {R}},X) \rightarrow {{\,\textrm{BC}\,}}_s^{\eta }({\mathbb {R}},X)\) pointwise as \(({\hat{P}}_0\varphi )(t):= P_0(t)\varphi (t) \in X_0(t)\) for all \(t \in {\mathbb {R}}\).

Lemma 3

If \(\delta > 0\) is sufficiently small, then \(\Vert (I-{\hat{P}}_0)u_s^\star (\varphi )\Vert _{0,s} < N\delta \).

Proof

Since \(u_s^\star (\varphi ) = {\mathcal {G}}_s(u_s^\star (\varphi ),\varphi ) = U(\cdot ,s)\varphi + {\mathcal {K}}_s^\eta ({\tilde{R}}_{\delta ,s}(u_s^\star (\varphi )))\) we have that

$$\begin{aligned} (I-{\hat{P}}_0)u_s^\star (\varphi ) = (I-{\hat{P}}_0)[{\mathcal {K}}_s^\eta ({\tilde{R}}_{\delta ,s}(u_s^\star (\varphi )))], \end{aligned}$$

because for any \(t \in {\mathbb {R}}\) we have that

$$\begin{aligned} [(I-{\hat{P}}_0)U(\cdot ,s)\varphi ](t) = U(t,s)\varphi - P_0(t)U(t,s)\varphi = 0, \end{aligned}$$

since \(U(t,s)\varphi = U_0(t,s)\varphi \in X_0(t)\) due to part 6 of Hypothesis 1 and the fact that \(\varphi \in X_0(s)\). It follows from the operator norm bounds in Proposition 3, and the bound for \({\tilde{R}}_{\delta ,s}\) in Corollary 1, that

$$\begin{aligned} \begin{aligned} \Vert (I-{\hat{P}}_0)u_s^\star (\varphi )\Vert _{0,s} = \Vert (I-{\hat{P}}_0)[{\mathcal {K}}_s^\eta ({\tilde{R}}_{\delta ,s}(u_s^\star (\varphi )))]\Vert _{0,s} \le 4 \delta \Vert j^{-1}\Vert K_\varepsilon N^2 L_{R_\delta } \bigg ( \frac{1}{-a} + \frac{1}{b} \bigg ), \end{aligned} \end{aligned}$$

which is strictly less than \(N\delta \) if we choose

$$\begin{aligned} L_{R_\delta } < \frac{1}{4 \Vert j^{-1}\Vert N K_\varepsilon } \bigg (\frac{1}{-a} + \frac{1}{b} \bigg )^{-1}, \end{aligned}$$

which is possible since \(L_{R_\delta } \rightarrow 0\) as \(\delta \downarrow 0\). \(\square \)

Let us introduce some notation. For a Banach space X, define the sets \({{\,\textrm{BC}\,}}_s^\infty ({\mathbb {R}},X):= \cup _{\eta > 0} {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X)\) and \({{\,\textrm{BC}\,}}_s^\infty ({\mathbb {R}},X^{\odot \star }):= \cup _{\eta > 0} {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X^{\odot \star })\) together with the space

$$\begin{aligned} V_s^\eta ({\mathbb {R}},X):= \{u \in {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X): \Vert (I-\hat{P_0})u\Vert _{0,s} < \infty \}, \end{aligned}$$

with the norm

$$\begin{aligned} \Vert u\Vert _{V_s^{\eta }}:= \Vert {\hat{P}}_0 u \Vert _{\eta ,s} + \Vert (I-{\hat{P}}_0)u\Vert _{0,s} \end{aligned}$$

such that \(V_s^\eta ({\mathbb {R}},X)\) becomes a Banach space and is continuously embedded in \({{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X)\). Define in addition for a sufficiently small \(\delta > 0\) the open set

$$\begin{aligned} V_{\delta ,s}^\eta ({\mathbb {R}},X):= \{ u \in V_s^\eta ({\mathbb {R}},X): \Vert (I-\hat{P_0})u\Vert _{0,s} < N\delta \}, \end{aligned}$$

and notice that this set is non-empty due to Lemma 3. Define similarly as before the set \(V_{\delta ,s}^\infty ({\mathbb {R}},X):= \cup _{\eta > 0} V_{\delta ,s}^\eta ({\mathbb {R}},X)\). For Banach spaces \(E,E_1,E_2,\dots ,E_p\) with \(p \ge 1\) we denote by \({\mathcal {L}}^p(E_1 \times \dots \times E_p, E)\) the Banach space of E-valued continuous p-linear maps defined on \(E_1 \times \dots \times E_p\). When there are p identical copies in this Cartesian product, we simply write \(E^p:= E \times \dots \times E\), where this notation will also be used when E is simply a set.

If we choose \(\delta \) as in Lemma 3, then the map \(u \mapsto {\tilde{R}}_{\delta ,s}(u)\) is of the class \(C^k\) on \(V_{\delta ,s}^\infty ({\mathbb {R}},X)\). For any pair of integers \(p,q \ge 0\) with \(p + q \le k\), notice that the norm \(\Vert D_1^p D_2^q R_{\delta ,s}(t,\varphi )\Vert \) is uniformly bounded on \({\mathbb {R}} \times V_{\delta ,s}^\infty ({\mathbb {R}},X)\). Hence, for any \(u \in V_{\delta ,s}^\infty ({\mathbb {R}},X)\) we can define the map \(R_{\delta ,s}^{(p,q)}(u): {{\,\textrm{BC}\,}}_s^\infty ({\mathbb {R}},X)^q \rightarrow {{\,\textrm{BC}\,}}_s^\infty ({\mathbb {R}},X^{\odot \star })\) as

$$\begin{aligned} R_{\delta ,s}^{(p,q)}(u)(v_1,\dots ,v_q)(t) := D_1^pD_2^q R_{\delta ,s}(t,u(t))(v_1(t),\dots ,v_q(t)), \quad \forall v_1,\dots ,v_q \in {{\,\textrm{BC}\,}}_s^\infty ({\mathbb {R}},X). \end{aligned}$$

The following two lemmas will be crucial for the proof of Theorem 5.

Lemma 4

([15, Lemma XII.7.3] and [20, Proposition 8.1]) Consider integers \(p \ge 0\) and \(q \ge 0\) with \(p+q \le k\) together with integers \(\mu _1,\dots ,\mu _q > 0\) such that \(\mu = \mu _1 + \dots +\mu _q\) and consider \( \eta> q \mu > 0\). Then,

$$\begin{aligned} {\tilde{R}}_{\delta ,s}^{(p,q)}(u) \in {\mathcal {L}}^q({{\,\textrm{BC}\,}}_s^{\mu _1}({\mathbb {R}},X) \times \dots \times {{\,\textrm{BC}\,}}_s^{\mu _q}({\mathbb {R}},X), {{\,\textrm{BC}\,}}^\eta ({\mathbb {R}},X^{\odot \star })), \quad \forall u \in V_{\delta ,s}^\infty ({\mathbb {R}},X). \end{aligned}$$

Furthermore, consider any \( 0 \le l \le k-(p+q)\) and \(\sigma > 0\). If \(\eta > q \mu + l \sigma \), then the map \({\tilde{R}}_{\delta ,s}^{(p,q)}: V_{\delta ,s}^\sigma ({\mathbb {R}},X) \rightarrow {\mathcal {L}}^q({{\,\textrm{BC}\,}}_s^{\mu _1}({\mathbb {R}},X) \times \dots \times {{\,\textrm{BC}\,}}_s^{\mu _q}({\mathbb {R}},X), {{\,\textrm{BC}\,}}^\eta ({\mathbb {R}},X^{\odot \star }))\) is \(C^l\)-smooth, with \(D^l {\tilde{R}}_{\delta ,s}^{(p,q)} = {\tilde{R}}_{\delta ,s}^{(p,q + l)}\).

Lemma 5

([15, Lemma XII.7.6] and [20, Proposition 8.2]) Consider integers \(p \ge 0\) and \(q \ge 0\) with \(p+q < k\) together with integers \(\mu _1,\dots ,\mu _q > 0\) such that \(\mu = \mu _1 + \dots +\mu _q\). Let \( \eta > q \mu + \sigma \) for some \(\sigma > 0\) and consider a \(C^1\)-smooth map \(\Phi _s: X_0(s) \rightarrow V_{\delta ,s}^\sigma ({\mathbb {R}},X)\). Then the map \({\tilde{R}}_{\delta ,s}^{(p,q)} \circ \Phi _s: X_0(s) \rightarrow {\mathcal {L}}^q({{\,\textrm{BC}\,}}_s^{\mu _1}({\mathbb {R}},X) \times \dots \times {{\,\textrm{BC}\,}}_s^{\mu _q}({\mathbb {R}},X), {{\,\textrm{BC}\,}}^\eta ({\mathbb {R}},X^{\odot \star }))\) is \(C^1\)-smooth with

$$\begin{aligned} D({\tilde{R}}_{\delta ,s}^{(p,q)} \circ \Phi _s)(\varphi )\psi (v_1,\dots ,v_q)(t) = {\tilde{R}}_{\delta ,s}^{(p,q+1)}(\Phi _s(\varphi ))(\Phi _s'(\varphi )\psi ,v_1,\dots ,v_q)(t), \quad \forall \psi \in X_0(s). \end{aligned}$$

So far we have only proven that the center manifold is Lipschitz continuous. Recall from Theorem 1 that we solved the fixed point problem \(u = {\mathcal {G}}_s(u,\varphi )\) for a given \(\varphi \in X_0(s)\) in the space \({{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X)\) for a given \(\eta \in (0,\min \{-a,b\})\). It turns out that the space \({{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X)\) is not well suited for studying additional smoothness of the center manifold. The idea is to work with another exponent, say \({\tilde{\eta }}\), chosen large enough to guarantee smoothness, but not so large that the contraction property is lost. Hence, a trade-off has to be made. To do this, choose an interval \([\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}] \subset (0,\min \{-a,b\})\) such that \(k \eta _{{{\,\textrm{min}\,}}} < \eta _{{{\,\textrm{max}\,}}}\) and choose \(\delta > 0\) small enough to guarantee that

$$\begin{aligned} L_{R_{\delta }}\Vert {\mathcal {K}}_s^\eta \Vert _{\eta ,s} < \frac{1}{4}, \quad \forall \eta \in [\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}], \ s \in {\mathbb {R}}, \end{aligned}$$

which is possible since \(L_{R_\delta } \rightarrow 0\) as \(\delta \downarrow 0\), as proven in Proposition 4. Following the proof of Theorem 1 again, we obtain for any \(\eta \in [\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}]\) a unique fixed point \(u_{\eta ,s}^\star : X_0(s) \rightarrow {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X)\) of the equation \(u = {\mathcal {G}}_s(u,\varphi )\). Denote for real numbers \(\eta _1 \le \eta _2\) the continuous embedding operator as \({\mathcal {J}}_s^{\eta _2,\eta _1}: {{\,\textrm{BC}\,}}_s^{\eta _1}({\mathbb {R}},X) \hookrightarrow {{\,\textrm{BC}\,}}_s^{\eta _2}({\mathbb {R}},X)\); then for \(\eta _1,\eta _2 \in [\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}]\) we have that \(u_{\eta _2,s}^\star = {\mathcal {J}}_s^{\eta _2,\eta _1} \circ u_{\eta _1,s}^\star \). These embedding operators will play the role of \(J_0\) and J defined in Lemma 2. The following proof adapts [3, Theorem 7.1.1], [15, Theorem IX.7.7] and [20, Theorem 7.1] to our setting.

Theorem 5

For each \(l \in \{1,\dots ,k\}\) and \(\eta \in (l\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}],\) the mapping \({\mathcal {J}}_s^{\eta ,\eta _{{{\,\textrm{min}\,}}}} \circ {u}_{\eta _{{{\,\textrm{min}\,}}},s}^\star : X_0(s) \rightarrow {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X)\) is of the class \(C^l\) provided that \(\delta > 0\) is sufficiently small.

Proof

We prove this by induction on l. Let \(l = 1\) and \(\eta \in (\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}]\). We show that Lemma 2 applies with the Banach spaces

$$\begin{aligned} Y_0 = V_{\delta ,s}^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X), \quad Y = {{\,\textrm{BC}\,}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X), \quad Y_1 = {{\,\textrm{BC}\,}}_s^{\eta }({\mathbb {R}},X), \quad \Lambda = X_0(s) \end{aligned}$$

and operators

$$\begin{aligned} f(u,\varphi )&= U(\cdot ,s)\varphi + {\mathcal {K}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\tilde{R}}_{\delta ,s}(u)), \quad \forall (u,\varphi ) \in {{\,\textrm{BC}\,}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X) \times X_0(s), \\ f^{(1)}(u,\varphi )&= {\mathcal {K}}_s^\eta \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u) \in {\mathcal {L}}({{\,\textrm{BC}\,}}_s^{\eta }({\mathbb {R}},X)), \quad \forall (u,\varphi ) \in V_{\delta ,s}^{\eta }({\mathbb {R}},X) \times X_0(s), \\ f_1^{(1)}(u,\varphi )&= {\mathcal {K}}_s^{\eta _{{{\,\textrm{min}\,}}}} \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u) \in {\mathcal {L}}({{\,\textrm{BC}\,}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X)), \quad \forall (u,\varphi ) \in V_{\delta ,s}^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X) \times X_0(s), \end{aligned}$$

with embeddings \(J = {\mathcal {J}}_s^{\eta , \eta _{{{\,\textrm{min}\,}}}}\) and \(J_0: V_{\delta ,s}^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X) \hookrightarrow {{\,\textrm{BC}\,}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X)\). To verify condition 1 of Lemma 2, we must show that the map

$$\begin{aligned} V_{\delta ,s}^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X) \times X_0(s) \ni (u,\varphi ) \mapsto g(u,\varphi ) = {\mathcal {J}}_s^{\eta , \eta _{{{\,\textrm{min}\,}}}} [U(\cdot ,s)\varphi + {\mathcal {K}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\tilde{R}}_{\delta ,s}(J_0u))] \in {{\,\textrm{BC}\,}}_s^{\eta }({\mathbb {R}},X) \end{aligned}$$

is \(C^1\)-smooth. Notice that the embedding operator J is \(C^1\)-smooth, as well as \(\varphi \mapsto U(\cdot ,s)\varphi \). Furthermore, from Lemma 4 the map \(J_0u \mapsto {\tilde{R}}_{\delta ,s}(J_0u)\) is \(C^1\)-smooth and hence g is \(C^1\)-smooth by the continuity of the linear embedding \(J_0\). Verification of the equalities \(D_1 g(y_0,\lambda ) \xi = Jf^{(1)}(J_0y_0,\lambda )J_0\) and \( Jf^{(1)}(J_0y_0,\lambda )y = f_1^{(1)}(J_0y_0,\lambda )Jy\) is straightforward.

Let us now verify condition 2. The Lipschitz claim follows immediately from the small Lipschitz constant for \(U(\cdot ,s)\varphi + {\mathcal {K}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\tilde{R}}_{\delta ,s}(u))\) by choosing \(\delta \) sufficiently small. Furthermore, the uniform boundedness claims hold because the embedding operators are bounded.

For condition 3, the unique fixed point is \(u_{\eta _{{{\,\textrm{min}\,}}},s}^\star = J_0 \circ \Phi \), where \(\Phi : X_0(s) \rightarrow V_{\delta ,s}^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X)\) is defined by \(\Phi (\varphi ):= u_{\eta _{{{\,\textrm{min}\,}}},s}^\star (\varphi )\) for all \(\varphi \in X_0(s)\). The map \(\Phi \) is well-defined due to Lemma 3 and is continuous due to Theorem 1.

To verify condition 4, we must check that the map

$$\begin{aligned} V_{\delta ,s}^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X) \times X_0(s) \ni (u,\varphi ) \mapsto f(J_0u,\varphi ) = U(\cdot ,s)\varphi + {\mathcal {K}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\tilde{R}}_{\delta ,s}(J_0u)) \in {{\,\textrm{BC}\,}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X) \end{aligned}$$

has continuous partial derivative in the second variable. This is clear since the map \(\varphi \mapsto f(J_0u,\varphi )\) is linear.

To verify condition 5, we have to check that the map

$$\begin{aligned} (u,\varphi ) \mapsto J \circ f^{(1)}(J_0u,\varphi ) = {\mathcal {J}}_s^{\eta ,\eta _{{{\,\textrm{min}\,}}}} \circ {\mathcal {K}}_s^\eta \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u) \end{aligned}$$

from \(V_{\delta ,s}^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X) \times X_0(s)\) to \({\mathcal {L}}(X_0(s),{{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X))\) is continuous. This again follows from the fact that the embedding operators are continuous and the smoothness of \({\tilde{R}}_{\delta ,s}\) from Lemma 4.

Since all conditions of Lemma 2 are satisfied, we conclude that \({\mathcal {J}}_s^{\eta ,\eta _{{{\,\textrm{min}\,}}}} \circ {u}_{\eta _{{{\,\textrm{min}\,}}},s}^\star \) is \(C^1\)-smooth and the Fréchet derivative \(D({\mathcal {J}}_s^{\eta ,\eta _{{{\,\textrm{min}\,}}}} \circ {u}_{\eta _{{{\,\textrm{min}\,}}},s}^\star ) \in {\mathcal {L}}(X_0(s), {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X))\) is the unique solution \(w^{(1)}\) of the equation

$$\begin{aligned} \begin{aligned} w^{(1)} = {\mathcal {K}}_s^{\eta _{{{\,\text {min}\,}}}} \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u_{s,\eta _{{{\,\text {min}\,}}}}^\star (\varphi ))w^{(1)} + U(\cdot ,s) =: F_{\eta _{{{\,\text {min}\,}}}}^{(1)}(w^{(1)},\varphi ), \end{aligned} \end{aligned}$$
(35)

where \(F_{\eta _{{{\,\textrm{min}\,}}}}^{(1)}: {\mathcal {L}}(X_0(s),{{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X)) \times X_0(s) \rightarrow {\mathcal {L}}(X_0(s),{{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X))\). Notice that \(F_{\eta _{{{\,\textrm{min}\,}}}}^{(1)}(\cdot ,\varphi )\) is a uniform contraction for each \(\eta \in [\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}]\) and hence has a unique fixed point \(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(1)}(\varphi ) \in {\mathcal {L}}(X_0(s),{{\,\textrm{BC}\,}}_s^{\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X)) \subseteq {\mathcal {L}}(X_0(s), {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X))\) for \(\eta \ge \eta _{{{\,\textrm{min}\,}}}\). Also, the mapping \(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(1)}: X_0(s) \rightarrow {\mathcal {L}}(X_0(s),{{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X))\) is continuous if \(\eta \in (\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}]\).

Now, consider any integer \(1 \le l < k\) and suppose that for all \(1 \le q \le l\) and all \( \eta \in (q \eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}})\) the mapping \({\mathcal {J}}_s^{\eta ,\eta _{{{\,\textrm{min}\,}}}} \circ {u}_{\eta _{{{\,\textrm{min}\,}}},s}^\star \) is \(C^q\)-smooth with \(D^q({\mathcal {J}}_s^{\eta ,\eta _{{{\,\textrm{min}\,}}}} \circ {u}_{\eta _{{{\,\textrm{min}\,}}},s}^\star ) = {\mathcal {J}}_s^{\eta ,\eta _{{{\,\textrm{min}\,}}}} \circ {u}_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(q)}\) and \({u}_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(q)}(\varphi ) \in {\mathcal {L}}^q(X_0(s)^q, {{\,\textrm{BC}\,}}_s^{q\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X))\) such that the mapping \({\mathcal {J}}_s^{\eta ,\eta _{{{\,\textrm{min}\,}}}} \circ {u}_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(q)}: X_0(s) \rightarrow {\mathcal {L}}^q(X_0(s)^q, {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X))\) is continuous for \(\eta \in (q\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}]\). Suppose also for any \(\varphi \in X_0(s)\) that \(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(l)}(\varphi )\) is the unique solution \(w^{(l)}\) of an equation of the form

$$\begin{aligned} w^{(l)} = {\mathcal {K}}_s^{l\eta _{{{\,\textrm{min}\,}}}} \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u_{s,\eta _{{{\,\textrm{min}\,}}}}^\star (\varphi ))w^{(l)} + H_{\eta _{{{\,\textrm{min}\,}}}}^{(l)}(\varphi ) =: F_{\eta _{{{\,\textrm{min}\,}}}}^{(l)}(w^{(l)},\varphi ), \end{aligned}$$

with \(H_{\eta _{{{\,\textrm{min}\,}}}}^{(1)}(\varphi ) = 0\) and for \(\nu \in [\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}]\) and \(l \ge 2\) the map \(H_{\nu }^{(l)}(\varphi )\) is a finite sum of terms of the form

$$\begin{aligned} {\mathcal {K}}_s^{l \nu } \circ {\tilde{R}}_{\delta ,s}^{(0,q)}(u_{\nu ,s}^\star (\varphi ))(u_{\nu ,s}^{\star ,(r_1)}(\varphi ),\dots , u_{\nu ,s}^{\star ,(r_q)}(\varphi )), \end{aligned}$$

with \(2 \le q \le l\) and \(1 \le r_i < l\) for \(i=1,\dots ,q\) such that \(r_1 + \dots + r_q = l\). Under these assumptions we have that the mapping \(F_{\eta }^{(l)}: {\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{l\eta }({\mathbb {R}},X)) \times X_0(s) \rightarrow {\mathcal {L}}^l(X_0(s)^l,{{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X))\) is a uniform contraction for all \(\eta \in [\eta _{{{\,\textrm{min}\,}}},\frac{1}{l}\eta _{{{\,\textrm{max}\,}}}]\) due to Lemma 4.
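
For instance, for \(l = 2\) the constraints \(2 \le q \le l\), \(r_i \ge 1\) and \(r_1 + \dots + r_q = l\) leave only \(q = 2\) with \(r_1 = r_2 = 1\), so that \(H_{\nu }^{(2)}(\varphi )\) consists of the single term

$$\begin{aligned} {\mathcal {K}}_s^{2 \nu } \circ {\tilde{R}}_{\delta ,s}^{(0,2)}(u_{\nu ,s}^\star (\varphi ))(u_{\nu ,s}^{\star ,(1)}(\varphi ), u_{\nu ,s}^{\star ,(1)}(\varphi )). \end{aligned}$$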

Fix some \(\eta \in ((l+1)\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}]\) and choose \(\eta _{{{\,\textrm{min}\,}}}< \sigma< (l+1)\sigma< \mu < \eta \). We show that Lemma 2 applies with the Banach spaces

$$\begin{aligned} \begin{aligned} Y_0&= {\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{l \sigma }({\mathbb {R}},X)), \quad Y = {\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{\mu }({\mathbb {R}},X)), \\ Y_1&= {\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{\eta }({\mathbb {R}},X)), \quad \Lambda = X_0(s), \end{aligned} \end{aligned}$$

and operators

$$\begin{aligned} f(u,\varphi )&= {\mathcal {K}}_s^\mu \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star }(\varphi ))u \\&\quad + H_{\mu /l}^{(l)}(\varphi ), \quad \forall (u,\varphi ) \in {\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{\mu }({\mathbb {R}},X)) \times X_0(s), \\ f^{(1)}(u,\varphi )&= {\mathcal {K}}_s^\mu \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star }(\varphi )) \in {\mathcal {L}}({\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{\mu }({\mathbb {R}},X))), \\ f_1^{(1)}(u,\varphi )&= {\mathcal {K}}_s^\eta \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star }(\varphi )) \in {\mathcal {L}}({\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{\eta }({\mathbb {R}},X))). \end{aligned}$$

We start with verifying condition 1. We have to check that the map

$$\begin{aligned} (u,\varphi ) \hspace{-2pt} \mapsto \hspace{-2pt} {\mathcal {J}}_s^{\eta ,\mu }[{\mathcal {K}}_s^\mu \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star }(\varphi ))u + H_{\mu /l}^{(l)}(\varphi )] \end{aligned}$$

from \({\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{l \sigma }({\mathbb {R}},X)) \times X_0(s)\) to \({\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{\eta }({\mathbb {R}},X))\) is \(C^1\)-smooth, where now \({\mathcal {J}}_s^{\eta ,\mu }: {\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{\mu }({\mathbb {R}},X)) \hookrightarrow {\mathcal {L}}^l(X_0(s)^l, {{\,\textrm{BC}\,}}_s^{\eta }({\mathbb {R}},X))\) is a continuous embedding. The mapping defined above is \(C^1\)-smooth in the first variable since it is linear in u. For the second variable, notice that the map \(\varphi \mapsto {\mathcal {J}}_s^{\eta ,\mu } \circ {\mathcal {K}}_s^\mu \circ {\tilde{R}}_{\delta ,s}^{(1)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star }(\varphi ))\) is \(C^1\) due to Lemma 5 with \(\mu > (l+1)\sigma \) and the \(C^1\)-smoothness of \(\varphi \mapsto {\mathcal {J}}_s^{\sigma ,\eta _{{{\,\textrm{min}\,}}}} \circ u_{\eta _{{{\,\textrm{min}\,}}},s}^\star (\varphi )\) with \(\sigma > \eta _{{{\,\textrm{min}\,}}}\). For the \(C^1\)-smoothness of \(\varphi \mapsto {\mathcal {J}}_s^{\eta ,\mu } \circ H_{\mu /l}^{(l)}(\varphi )\), we obtain differentiability from Lemma 5, and the derivative of \(\varphi \mapsto H_{\mu /l}^{(l)}(\varphi )\) is a finite sum of terms of the form

$$\begin{aligned} \begin{aligned}&{\mathcal {K}}_s^\mu \circ {\tilde{R}}_{\delta ,s}^{(0,q+1)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^\star (\varphi ))(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(r_1)}(\varphi ),\dots , u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(r_q)}(\varphi )) \\&+ \sum _{j=1}^q {\mathcal {K}}_s^\mu \circ {\tilde{R}}_{\delta ,s}^{(0,q)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^\star (\varphi ))(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(r_1)}(\varphi ),\dots ,u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(r_j + 1)}(\varphi ),\dots , u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(r_q)}(\varphi )) \end{aligned} \end{aligned}$$

and each \(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(r_j)}\) is a map from \(X_0(s)\) into \({\mathcal {L}}^{r_j}(X_0(s)^{r_j},{{\,\textrm{BC}\,}}_s^{r_j\sigma }({\mathbb {R}},X))\). Applying Lemma 4 with \(\mu > (l+1)\sigma \) ensures continuity of \(DH_{\mu /l}^{(l)}(\varphi )\) and hence also continuity of \({\mathcal {J}}_s^{\eta ,\mu }DH_{\mu /l}^{(l)}(\varphi )\). The remaining requirements of condition 1 are easily checked. Condition 4 can be proven similarly.

The Lipschitz condition and boundedness for condition 2 follow from the choice of \(\delta > 0\) made at the beginning and the uniform contractivity of \(H_{\mu /l}^{(l)}\) described above. Let us now prove condition 3. To this end, write

$$\begin{aligned} \begin{aligned} {\mathcal {K}}_s^\eta \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u_{\eta _{{{\,\text {min}\,}}},s}^{\star }(\varphi )) = {\mathcal {J}}_{s}^{\eta ,\mu } \circ {\mathcal {K}}_s^\mu \circ \tilde{R}_{\delta ,s}^{(0,1)}(u_{\eta _{{{\,\text {min}\,}}},s}^{\star }(\varphi )) \end{aligned} \end{aligned}$$

and apply Lemma 4 together with the \(C^1\)-smoothness of \(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star }\) to obtain continuity of \(\varphi \mapsto {\tilde{R}}_{\delta ,s}^{(0,1)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star }(\varphi ))\). This also proves condition 5. All the conditions from Lemma 2 are satisfied, and so we conclude that \(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(l)}: X_0(s) \rightarrow {\mathcal {L}}^l(X_0(s)^l,{{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X))\) is of class \(C^1\) with derivative \(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(l + 1)}(\varphi ) = Du_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(l)}(\varphi ) \in {\mathcal {L}}^{l+1}(X_0(s)^{l+1}, {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X))\) given by the unique solution \(w^{(l+1)}\) of the equation

$$\begin{aligned} w^{(l+1)} = {\mathcal {K}}_s^{\mu } \circ {\tilde{R}}_{\delta ,s}^{(0,1)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^\star (\varphi ))w^{(l+1)} + H_{\mu /(l+1)}^{(l+1)}(\varphi ), \end{aligned}$$

where \(H_{\mu /(l+1)}^{(l+1)}(\varphi ) = {\mathcal {K}}_s^\mu \circ {\tilde{R}}_{\delta ,s}^{(0,2)}(u_{\eta _{{{\,\textrm{min}\,}}},s}^\star (\varphi ))(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(l)}(\varphi ),u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(1)}(\varphi )) + DH_{\mu /l}^{(l)}(\varphi )\). Similar arguments as in the proof of the \(l=k=1\) case show that the unique fixed point satisfies \(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(l + 1)}(\varphi ) \in {\mathcal {L}}^{l+1}(X_0(s)^{l+1},{{\,\textrm{BC}\,}}_s^{(l+1)\eta _{{{\,\textrm{min}\,}}}}({\mathbb {R}},X))\). Hence, the map \({\mathcal {J}}_s^{\eta ,\eta _{{{\,\textrm{min}\,}}}} \circ {u}_{\eta _{{{\,\textrm{min}\,}}},s}^\star : X_0(s) \rightarrow {{\,\textrm{BC}\,}}_s^\eta ({\mathbb {R}},X)\) is of class \(C^{l + 1}\) if \(\eta \in ((l+1)\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}]\), which completes the proof. \(\square \)

We also show that each partial derivative of the center manifold in the second component is uniformly Lipschitz continuous. The proof is inspired by [3, Corollary 8.2.1.2].

Corollary 4

For each \(l \in \{0,\dots ,k\}\), there exists a constant \(L(l) > 0\) such that \(\Vert D_2^l {\mathcal {C}}(t,\varphi ) - D_2^l {\mathcal {C}}(t,\psi ) \Vert \le L(l) \Vert \varphi - \psi \Vert \) for all \(t \in {\mathbb {R}}\) and \(\varphi ,\psi \in X_0(t)\).

Proof

For \(l = 0\), the result is already proven in Corollary 3. Now let \(l \in \{1,\dots ,k\}\). Then, from the proof of Theorem 5 we see that \(u_{\eta _{{{\,\textrm{min}\,}}},s}^{\star ,(l)}\) is the unique solution of a fixed point problem whose right-hand side is a contraction with a Lipschitz constant L(l) independent of s. Using the same strategy as in the proof of Corollary 3, we obtain the desired result. \(\square \)

Corollary 5

The center manifold \({\mathcal {W}}^c\) is \(C^k\)-smooth and its tangent bundle is \(X_0\), i.e. \(D_2 {\mathcal {C}}(t,0)\varphi = \varphi \) for all \((t,\varphi ) \in X_0\).

Proof

Let \( \eta \in [\eta _{{{\,\textrm{min}\,}}},\eta _{{{\,\textrm{max}\,}}}] \subset (0,\min \{-a,b\})\) be such that \(k \eta _{{{\,\textrm{min}\,}}} < \eta _{{{\,\textrm{max}\,}}}\). Define for any \(t \in {\mathbb {R}}\) the evaluation map \({{\,\textrm{ev}\,}}_t: {{\,\textrm{BC}\,}}_t^\eta ({\mathbb {R}},X) \rightarrow X\) as \({{\,\textrm{ev}\,}}_t(f):= f(t)\). Then, for all \((t,\varphi ) \in X_0\) we get

$$\begin{aligned} {\mathcal {C}}(t,\varphi ) = {{\,\textrm{ev}\,}}_t(u_{\eta _{{{\,\textrm{min}\,}}},t}^{\star }(\varphi )) = {{\,\textrm{ev}\,}}_t({\mathcal {J}}_t^{\eta , \eta _{{{\,\textrm{min}\,}}}}u_{\eta _{{{\,\textrm{min}\,}}},t}^{\star }(\varphi )). \end{aligned}$$

It is clear that \({{\,\textrm{ev}\,}}_t \in {\mathcal {L}}({{\,\textrm{BC}\,}}_t^\eta ({\mathbb {R}},X), X)\) and hence it follows from Theorem 5 that \({\mathcal {C}}\) is of class \(C^k\). This shows that the center manifold \({\mathcal {W}}^c\) is \(C^k\)-smooth. Moreover,

$$\begin{aligned} D_2{\mathcal {C}}(t,0)\varphi = {{\,\textrm{ev}\,}}_t(D({\mathcal {J}}_t^{\eta , \eta _{{{\,\textrm{min}\,}}}} \circ u_{\eta _{{{\,\textrm{min}\,}}},t}^{\star })(0)\varphi ) = {{\,\textrm{ev}\,}}_t(u_{\eta _{{{\,\textrm{min}\,}}},t}^{\star ,(1)}(0)\varphi ). \end{aligned}$$

As \(D{\tilde{R}}_{\delta ,t}(0) = 0\) and \(u_{\eta _{{{\,\textrm{min}\,}}},t}^\star (0) = 0\) for all \(t \in {\mathbb {R}}\), we get from (35) that \(u_{\eta _{{{\,\textrm{min}\,}}},t}^{\star ,(1)}(0) = U(\cdot ,t)\) and so \(D_2{\mathcal {C}}(t,0)\varphi = {{\,\textrm{ev}\,}}_t(U(\cdot ,t)\varphi ) = \varphi \), as claimed. \(\square \)

It follows from the previous corollary that the local center manifold \({\mathcal {W}}_{{{\,\textrm{loc}\,}}}^c\) is also \(C^k\)-smooth and has \(X_0\) as its tangent bundle. Let us now turn to periodicity.

Theorem 6

If the time-dependent nonlinear perturbation \(R: {\mathbb {R}} \times X \rightarrow X^{\odot \star }\) is T-periodic in the first variable, then there exists a \(\delta > 0\) such that \({\mathcal {C}}(t+T,\varphi ) = {\mathcal {C}}(t,\varphi )\) for all \(t \in {\mathbb {R}}\) whenever \(\Vert \varphi \Vert < \delta \).

Proof

The proof of this theorem is essentially the same as that of [3, Lemma 8.3.1], which was obtained for impulsive DDEs. To obtain the result for classical DDEs, one has to ignore the discontinuous impulses, make the substitution \({\mathcal {R}}{\mathcal {C}}{\mathcal {R}} \rightarrow X\) and translate everything into the sun–star setting. \(\square \)

C Variation-of-Constants Formulas and One-to-One Correspondences

This section of the appendix consists of two subsections. In the first subsection, we study the interplay between solutions of inhomogeneous linear abstract ODEs and their associated inhomogeneous linear AIEs. In the second subsection, we prove that there is a one-to-one correspondence between solutions of (T-DDE) and (T-AIE) by using the results from Appendix C.1. This result is important when one applies the sun–star machinery to DDEs, see for example the local center manifold theorem for DDEs in Corollary 2.

1.1 C.1 Inhomogeneous Perturbations to Linear Abstract ODEs and AIEs

In this subsection, we work with the same tools and notation as presented in Sect. 2.2. Let \(J \subseteq {\mathbb {R}}\) be an interval and \(s \in J\) a starting time. Applying an inhomogeneous perturbation \(f: J \rightarrow X^{\odot \star }\) to the generator \(A^{\odot \star }(t)\) in (T-LAODE) yields

$$\begin{aligned} {\left\{ \begin{array}{ll} d^\star (j \circ u)(t) = A^{\odot \star }(t)ju(t) + f(t), \quad &{}t \ge s, \\ u(s) = \varphi , \quad &{} \varphi \in X, \end{array}\right. } \end{aligned}$$
(36)

which suggests the variation-of-constants formula

$$\begin{aligned} u(t) = U(t,s)\varphi + j^{-1} \int _s^t U^{\odot \star }(t,\tau ) f(\tau ) d\tau , \quad \varphi \in X. \end{aligned}$$
(37)

It is also possible to perturb the generator \(A_0^{\odot \star }\) by the time-dependent perturbation \(\varphi \mapsto B(t)\varphi + f(t)\). This yields

$$\begin{aligned} {\left\{ \begin{array}{ll} d^\star (j \circ u)(t) = A_0^{\odot \star }ju(t) + B(t)u(t) + f(t), \quad &{}t \ge s, \\ u(s) = \varphi , \quad &{} \varphi \in X, \end{array}\right. } \end{aligned}$$
(38)

which suggests the variation-of-constants formula

$$\begin{aligned} u(t) = T_0(t-s)\varphi + j^{-1} \int _s^t T_0^{\odot \star }(t-\tau ) [B(\tau )u(\tau ) + f(\tau )] d\tau , \quad \varphi \in X. \end{aligned}$$
(39)

Solutions to the linear problems above are defined similarly as in Sect. 2.2. It is clear by (5) that a solution to (36) is also a solution to (38) and vice versa. In this sense we call (36) and (38) equivalent. We would like to establish a similar equivalence between (37) and (39). When the perturbation B does not depend on time, one can work with integrated semigroups in the \(\odot \)-reflexive case to prove the equivalence between the inhomogeneous autonomous problems, see [7, Proposition 2.5] and [15, Lemma III.2.23]. However, even if we were to succeed in generalizing this approach to time-dependent systems, it would probably only work in a \(\odot \)-reflexive setting. To overcome this problem, we will generalize the non-\(\odot \)-reflexive approach by Janssens in [24, Section 3] to a time-dependent setting, while still assuming \(\odot \)-reflexivity. The non-\(\odot \)-reflexive case is still an open problem, see Sect. 4.

Before generalizing Janssens' approach to a time-dependent setting, notice that the weak\(^\star \) Riemann integral in (37) is well-defined when f is assumed to be continuous, see Lemma 1. Furthermore, the weak\(^\star \) Riemann integral in (39) is well-defined when f is assumed to be continuous because then the map \([s,t] \ni \tau \mapsto B(\tau )u(\tau ) + f(\tau ) \in X^{\odot \star }\) is continuous, see [5, Lemma 2.2]. This already indicates that continuity of f is a sufficient condition for the well-definedness of the variation-of-constants formulas.
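
As a purely illustrative remark, and under the simplifying assumption (not satisfied in the DDE setting of this paper) that \(X = {\mathbb {R}}^n\), j is the identity and all sun–star spaces coincide with X, the weak\(^\star \) integrals become ordinary Riemann integrals and (37) and (39) reduce to the classical pair

$$\begin{aligned} u(t) = U(t,s)\varphi + \int _s^t U(t,\tau ) f(\tau ) \, d\tau \quad \text{ and } \quad u(t) = e^{(t-s)A_0}\varphi + \int _s^t e^{(t-\tau )A_0}[B(\tau )u(\tau ) + f(\tau )] \, d\tau , \end{aligned}$$

where \(U(t,s)\) denotes the fundamental matrix solution of \(\dot{x}(t) = [A_0 + B(t)]x(t)\). In that elementary setting the equivalence of the two formulas is a standard exercise; Propositions 11–14 below establish precisely this equivalence in the present weak\(^\star \) framework.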

Before showing any equivalence between the four proposed problems above, let us first prove that at least one of them induces a unique solution on a subinterval of J. The following result is inspired by [24, Proposition 20].

Proposition 11

Let I be a compact subinterval of J. The following two statements hold.

  1. 1.

    For every \(\varphi \in X\) and \(f \in C(I,X^{\odot \star })\) there exists a unique solution \(u_{\varphi ,f}\) of (39) on I and the map

    $$\begin{aligned} X \times C(I,X^{\odot \star }) \ni (\varphi ,f) \mapsto u_{\varphi ,f} \in C(I,X) \end{aligned}$$

    is continuous.

  2. 2.

    If \(\varphi \in j^{-1} {\mathcal {D}}(A_0^{\odot \star })\) and \(f: I \rightarrow X^{\odot \star }\) is locally Lipschitz, then there exist sequences of Lipschitz functions \(u_m: I \rightarrow X\) and \(f_m: I \rightarrow X^{\odot \star }\) such that

    $$\begin{aligned} u_m(t) = T_0(t-s)\varphi + j^{-1} \int _s^t T_0^{\odot \star }(t-\tau ) [B(\tau )u_m(\tau ) + f_m(\tau )] d\tau , \quad \forall t \in I, \end{aligned}$$
    (40)

    and \(f_m \rightarrow f\) and \(u_m \rightarrow u_{\varphi ,f}\) as \(m \rightarrow \infty \), uniformly on I.

Proof

We show the first claim by a fixed point argument. Choose \(M \ge 1\) and \(\omega \in {\mathbb {R}}\) such that \(\Vert T_0(t)\Vert \le Me^{\omega t}\) for all \(t \ge 0\). On the space \(C(I,X)\), we introduce the one-parameter family of equivalent norms

$$\begin{aligned} \Vert u\Vert _\eta := \sup _{t \in I} e^{-\eta t} \Vert u(t)\Vert , \quad \eta \in {\mathbb {R}}, \end{aligned}$$

that makes \((C(I,X), \Vert \cdot \Vert _\eta )\) a Banach space for each \( \eta \in {\mathbb {R}}\). For each fixed \((\varphi ,f) \in X \times C(I,X^{\odot \star })\) define the operator \(K_{\varphi ,f}: C(I,X) \rightarrow C(I,X)\) as

$$\begin{aligned} (K_{\varphi ,f}u)(t):= T_0(t-s)\varphi + j^{-1} \int _s^t T_0^{\odot \star }(t-\tau ) [B(\tau )u(\tau ) + f(\tau )] d\tau , \quad \forall t \in I. \end{aligned}$$
(41)

Define \(W: \Omega _I \rightarrow {\mathbb {R}}\) as \(W(t,s):= \sup _{s \le \tau \le t} \Vert B(\tau )\Vert \) and set \(N:= \sup _{(t,s) \in \Omega _I} W(t,s)\), which is finite because I is compact and W is continuous. Let \(\eta > \omega \). Then for all \(u_1,u_2 \in C(I,X)\) and \(t \in I\) we get

$$\begin{aligned} e^{-\eta t} \Vert (K_{\varphi ,f}u_1)(t) - (K_{\varphi ,f}u_2)(t)\Vert&\le \Vert j^{-1}\Vert M N \int _s^t e^{-(\eta -\omega ) (t-\tau )} e^{-\eta \tau }\Vert u_1(\tau ) - u_2(\tau )\Vert d\tau \\&\le \Vert j^{-1}\Vert M N \Vert u_1 - u_2\Vert _{\eta } \int _s^t e^{-(\eta -\omega ) (t-\tau )} d\tau \\&= \frac{\Vert j^{-1}\Vert M N (1 - e^{-(t-s)(\eta -\omega )})}{\eta - \omega } \Vert u_1 - u_2\Vert _{\eta } \\&\le \frac{\Vert j^{-1}\Vert M N}{\eta - \omega } \Vert u_1 - u_2\Vert _{\eta }. \end{aligned}$$

If we choose \(\eta > \omega \) large enough such that \(\frac{\Vert j^{-1}\Vert M N}{\eta - \omega } \le \frac{1}{2}\), then \(K_{\varphi ,f}\) is a contraction on \(C(I,X)\) with respect to the \(\Vert \cdot \Vert _\eta \)-norm. The existence and uniqueness of \(u_{\varphi ,f}\) now follow from the Banach fixed point theorem. For a fixed \(u \in C(I,X)\), it follows that the map

$$\begin{aligned} X \times C(I,X^{\odot \star }) \ni (\varphi ,f) \mapsto K_{\varphi ,f}u \in C(I,X) \end{aligned}$$

is continuous. Since the contraction constant \(\frac{1}{2}\) is independent of \((\varphi ,f)\), the uniform contraction principle implies that the fixed point \(u_{\varphi ,f}\) depends continuously on \((\varphi ,f)\), which proves the first claim.

Let us now show the second assertion. Let \(\varphi \in j^{-1} {\mathcal {D}}(A_0^{\odot \star })\) and let f be locally Lipschitz. We will show that \(K_{\varphi ,f}\) maps \({{\,\textrm{Lip}\,}}(I,X)\) into itself, where \({{\,\textrm{Lip}\,}}(I,X)\) denotes the subspace of \(C(I,X)\) consisting of X-valued Lipschitz continuous functions defined on I. From the theory of Favard classes of \({\mathcal {C}}_0\)-semigroups and the important equalities [24, Equation (19)], it follows immediately that \(T_0(\cdot - s)\varphi \) is in \({{\,\textrm{Lip}\,}}(I,X)\). Let \(u \in {{\,\textrm{Lip}\,}}(I,X)\) be given. Since B is Lipschitz continuous and f is assumed to be locally Lipschitz, we know that \(t \mapsto B(t)u(t) + f(t)\) is locally Lipschitz on I and takes values in \(X^{\odot \star }\). Hence, the map \(v_1(\cdot ,s,B(\cdot )u+f): I \rightarrow j(X)\) defined by

$$\begin{aligned} v_1(t,s,B(\cdot )u+f):= \int _s^t T_0^{\odot \star }(t-\tau )[B(\tau )u(\tau )+f(\tau )] d\tau , \quad \forall t \in I, \end{aligned}$$

is weak\(^\star \) continuously differentiable and so locally Lipschitz by [24, Remark 16]. It follows that \(K_{\varphi ,f}u = T_0(\cdot - s)\varphi + j^{-1}v_1(\cdot ,s,B(\cdot )u+f)\) is in \({{\,\textrm{Lip}\,}}(I,X)\). Now, let \(u_0 \in {{\,\textrm{Lip}\,}}(I,X)\) be arbitrary. The sequence \((u_m)_{m \in {\mathbb {N}}}\) defined by

$$\begin{aligned} u_m:= K_{\varphi ,f} u_{m-1}, \quad m \ge 1, \end{aligned}$$

is contained in \({{\,\textrm{Lip}\,}}(I,X)\) and converges to \(u_{\varphi ,f}\) uniformly on I as \(m \rightarrow \infty \), since \(K_{\varphi ,f}\) is a contraction. We only have to show that there exists a sequence of \(X^{\odot \star }\)-valued Lipschitz continuous functions \((f_m)_{m \in {\mathbb {N}}}\) defined on I such that (40) holds. It follows from (41) that for any \(t \in I\) and \(m \ge 1\) we have

$$\begin{aligned} u_m(t)&= K_{\varphi ,f} u_{m-1}(t) \\&= T_0(t-s)\varphi + j^{-1} \int _s^t T_0^{\odot \star }(t-\tau )[B(\tau )u_m(\tau ) + f(\tau ) + B(\tau )[u_{m-1}(\tau ) - u_m(\tau )]] d\tau . \end{aligned}$$

If we define for any \(m \ge 1\) the functions \(f_m: I \rightarrow X^{\odot \star }\) as \(f_m:= f + B(\cdot )(u_{m-1} - u_m)\), then each \(f_m\) is Lipschitz continuous and \(f_m \rightarrow f\) uniformly on I because

$$\begin{aligned} \Vert f_m - f \Vert&\le \sup _{(t,s) \in \Omega _I}W(t,s) \Vert u_{m-1} - u_m\Vert \\&\le N [\Vert u_{m-1} - u_{\varphi ,f}\Vert + \Vert u_{\varphi ,f} - u_m\Vert ] \\&\rightarrow 0, \quad \text{ as } m \rightarrow \infty , \end{aligned}$$

as both \(u_{m-1}\) and \(u_m\) converge to \(u_{\varphi ,f}\) uniformly on I as \(m \rightarrow \infty \). \(\square \)
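
The contraction argument above is constructive and can be mimicked numerically. The following sketch is offered purely as an illustration in a one-dimensional toy setting where \(X = {\mathbb {R}}\), j is the identity and \(T_0(t) = e^{at}\), so that none of the sun–star subtleties are visible; the concrete choices of a, B, f, \(\eta \) and the grid are ad hoc and not taken from the paper. It iterates a discretised version of the map (41) and reports the successive differences of the iterates in the exponentially weighted norm \(\Vert \cdot \Vert _\eta \).

```python
import numpy as np

# Toy instance of the map (41) with X = R, j = id and T_0(t) = exp(a*t).
# All parameter values below are ad hoc choices for illustration only.
a = -0.5
B = lambda t: 0.3 * np.sin(t)      # time-dependent bounded perturbation
f = lambda t: np.cos(2.0 * t)      # continuous inhomogeneity
s, t_end, phi = 0.0, 5.0, 1.0      # starting time, right end of I, initial condition

ts = np.linspace(s, t_end, 1001)   # grid on the compact interval I = [s, t_end]
dt = ts[1] - ts[0]

def K(u):
    """Discretised version of (41): (K u)(t) = T0(t-s)*phi + int_s^t T0(t-tau)[B(tau)u(tau)+f(tau)] dtau."""
    g = B(ts) * u + f(ts)
    out = np.empty_like(u)
    for i, t in enumerate(ts):
        w = np.exp(a * (t - ts[:i + 1])) * g[:i + 1]      # weighted integrand on [s, t]
        out[i] = np.exp(a * (t - s)) * phi + dt * (np.sum(w) - 0.5 * (w[0] + w[-1]))
    return out

def weighted_norm(u, eta):
    """Exponentially weighted sup-norm ||u||_eta = sup_t exp(-eta*t)|u(t)| used in the proof."""
    return np.max(np.exp(-eta * ts) * np.abs(u))

eta = 5.0                          # any eta > omega chosen large enough yields a contraction
u = np.zeros_like(ts)              # arbitrary starting guess
for m in range(10):
    u_new = K(u)
    print(f"iteration {m + 1}: ||u_new - u||_eta = {weighted_norm(u_new - u, eta):.3e}")
    u = u_new
```

The printed differences shrink by a roughly constant factor, in line with the contraction estimate above.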

The next proposition shows under which conditions on \(\varphi \) and f the unique solution to the abstract integral equation (39), proven in Proposition 11, induces a solution to the abstract ordinary differential equation (38). The proof is inspired by [24, Corollary 19].

Proposition 12

Suppose that \(\varphi \in j^{-1} {\mathcal {D}}(A_0^{\odot \star })\) and \(f: J \rightarrow X^{\odot \star }\) is locally Lipschitz. If u is a locally Lipschitz solution of (39) on a subinterval I of J then u is a solution of (38) on I.

Proof

If we apply j to the abstract integral equation in (39), we get for any \(t \in I\) that

$$\begin{aligned} ju(t) = T_0^{\odot \star }(t-s)j\varphi + \int _s^t T_0^{\odot \star }(t-\tau ) [B(\tau )u(\tau ) + f(\tau )] d\tau . \end{aligned}$$
(42)

It follows from the theory of Favard classes for \({\mathcal {C}}_0\)-semigroups [23, Equation (19)] that \({\mathcal {D}}(A_0^{\odot \star })\) is \(T_0^{\odot \star }\)-invariant. Hence, the first term on the right-hand side takes values in \({\mathcal {D}}(A_0^{\odot \star })\), and it follows from a \(\odot \)-variant of [6, Theorem 2.1] that this term is weak\(^\star \) continuously differentiable with weak\(^\star \) derivative

$$\begin{aligned} d^\star (T_0^{\odot \star }(\cdot -s) j\varphi )(t) = A_0^{\odot \star } T_0^{\odot \star }(t-s)j\varphi . \end{aligned}$$

Now, f and u are locally Lipschitz continuous functions on \(I \subseteq J\), and B is Lipschitz continuous by the definition of a time-dependent bounded linear perturbation on J. Hence, \(g: I \rightarrow X^{\odot \star }\) defined by \(g(\tau ):= B(\tau )u(\tau ) + f(\tau )\) for all \(\tau \in I\) is locally Lipschitz. Define the function \(v_1(\cdot ,s,g): I \rightarrow j(X)\) as

$$\begin{aligned} v_1(t,s,g):= \int _s^t T_0^{\odot \star }(t-\tau ) g(\tau ) d \tau , \quad \forall t \in I. \end{aligned}$$

It is clear from [7, Proposition 2.2] (or [24, Proposition 18]) that \(v_1(\cdot ,s,g)\) is weak\(^\star \) continuously differentiable, takes values in \({\mathcal {D}}(A_0^{\odot \star })\) and has weak\(^\star \) derivative

$$\begin{aligned} d^\star (v_1(\cdot ,s,g))(t) = A_0^{\odot \star }v_1(t,s,g) + g(t). \end{aligned}$$

By linearity, it is clear from (42) that u takes values in \(j^{-1} {\mathcal {D}}(A_0^{\odot \star })\). Combining all these results yields

$$\begin{aligned} d^\star (j \circ u)(t)&= A_0^{\odot \star } T_0^{\odot \star }(t-s)j\varphi \\&+ A_0^{\odot \star } \int _s^t T_0^{\odot \star }(t-\tau ) [B(\tau )u(\tau ) + f(\tau )] d\tau + B(t)u(t) + f(t)\\&= A_0^{\odot \star }ju(t) + B(t)u(t) + f(t). \end{aligned}$$

This shows that \(j \circ u\) is weak\(^\star \) continuously differentiable and satisfies (38) on I since \(u(s) = \varphi \). We conclude that \(u: I \rightarrow X\) is a solution of (38) on I. \(\square \)

Let u be the solution of (38) on a subinterval I of J obtained from Proposition 12. Then u is also a solution of (36) on I by the equivalence between (36) and (38). Our next goal is to show that solutions of (36) are precisely given by the variation-of-constants formula (37). The proof is inspired by [24, Proposition 21].

Proposition 13

Suppose that \(f \in C(J,X^{\odot \star })\) and I is a subinterval of J. If u is a solution of (36) on I then u is given by (37).

Proof

Let \(t \in I\) be given with \(t > s\), where s denotes the starting time. Define the function \(w: [s,t] \rightarrow X^{\odot \star }\) by \(w(\tau ):= U^{\odot \star }(t,\tau )ju(\tau )\) for all \(\tau \in [s,t]\). We claim that w is weak\(^\star \) continuously differentiable with weak\(^\star \) derivative

$$\begin{aligned} d^\star w(\tau ) = U^{\odot \star }(t,\tau )d^{\star }(j \circ u)(\tau ) - U^{\odot \star }(t,\tau )A^{\odot \star }(\tau )ju(\tau ), \quad \forall \tau \in [s,t]. \end{aligned}$$
(43)

To show this claim, let \(\tau \in [s,t]\) and \(x^\odot \in X^\odot \) be given. For any \(h \in {\mathbb {R}}\) such that \(\tau + h \in [s,t]\) we have

$$\begin{aligned} \langle w(\tau +h) - w(\tau ), x^\odot \rangle&= \langle U^{\odot \star }(t,\tau + h)ju(\tau + h) - U^{\odot \star }(t,\tau )ju(\tau ),x^\odot \rangle \\&= \langle U^{\odot \star }(t,\tau + h)[ju(\tau +h) - ju(\tau )],x^\odot \rangle \\&+ \langle [U^{\odot \star }(t,\tau + h) - U^{\odot \star }(t,\tau )]ju(\tau ),x^\odot \rangle \\&= \langle ju(\tau +h) - ju(\tau ), U^{\odot }(\tau + h,t)x^\odot \rangle \\&+ \langle [U^{\odot \star }(t,\tau + h) - U^{\odot \star }(t,\tau )]ju(\tau ),x^\odot \rangle . \end{aligned}$$

Because \(U^\odot \) is a strongly continuous backward evolutionary system, we have that \(U^{\odot }(\tau + h,t)x^\odot \rightarrow U^{\odot }(\tau ,t)x^\odot \) in norm as \(h \rightarrow 0\). Moreover, from the definition of the weak\(^\star \) derivative we obtain

$$\begin{aligned} \frac{1}{h}(ju(\tau +h) - ju(\tau )) \rightarrow d^\star (j \circ u) (\tau ) \quad \text{ weakly}^\star \text{ as } h\rightarrow 0, \end{aligned}$$

provided we can show that the difference quotients remain bounded in the limit. Since u is a solution to (36), we know that \(j \circ u\) is weak\(^\star \) continuously differentiable and so locally Lipschitz continuous by [24, Remark 16]. Because \([s,t]\) is compact, \(j \circ u\) is Lipschitz continuous on \([s,t]\) and so the difference quotients remain bounded in the limit. Combining these two facts yields

$$\begin{aligned} \frac{1}{h}\langle ju(\tau +h) - ju(\tau ), U^{\odot }(\tau + h,t)x^\odot \rangle \rightarrow \langle d^\star (j \circ u) (\tau ), U^\odot (\tau ,t)x^\odot \rangle \quad \text{ as } h \rightarrow 0. \end{aligned}$$

Furthermore, since \(ju(\tau ) \in {\mathcal {D}}(A^{\odot \star }(\tau )) = {\mathcal {D}}(A_0^{\odot \star })\), it follows from [5, Theorem 5.5] that

$$\begin{aligned} \frac{1}{h} \langle [U^{\odot \star }(t,\tau + h) - U^{\odot \star }(t,\tau )]ju(\tau ),x^\odot \rangle \rightarrow \langle -U^{\odot \star }(t,\tau )A^{\odot \star }(\tau ) ju(\tau ), x^\odot \rangle \quad \text{ as } h \rightarrow 0. \end{aligned}$$

Consequently, we obtain

$$\begin{aligned} \frac{1}{h} \langle w(\tau +h) - w(\tau ), x^\odot \rangle \\ \rightarrow \langle U^{\odot \star }(t,\tau )d^{\star }(j \circ u)(\tau ) - U^{\odot \star }(t,\tau )A^{\odot \star }(\tau )ju(\tau ), x^{\odot } \rangle \quad \text{ as } h \rightarrow 0, \end{aligned}$$

which proves (43). Substituting the differential equation from (36) into (43) yields

$$\begin{aligned} d^\star w (\tau ) = U^{\odot \star }(t,\tau )f(\tau ), \quad \forall \tau \in [s,t], \end{aligned}$$

and so \(d^\star w\) is weak\(^\star \) continuous since f was assumed to be (norm) continuous. Now, for any \(x^\odot \in X^\odot \) we get

$$\begin{aligned} \langle ju(t) - U^{\odot \star }(t,s)ju(s),x^\odot \rangle&= \langle w(t), x^\odot \rangle - \langle w(s), x^\odot \rangle \\&= \int _s^t \langle d^\star w(\tau ), x^\odot \rangle d\tau = \langle \int _s^t U^{\odot \star }(t,\tau )f(\tau ) d\tau , x^\odot \rangle . \end{aligned}$$

As \(x^\odot \in X^{\odot }\) and \(t > s\) were arbitrary, we conclude that

$$\begin{aligned} ju(t) - U^{\odot \star }(t,s)ju(s) = \int _s^t U^{\odot \star }(t,\tau )f(\tau ) d\tau . \end{aligned}$$

and so

$$\begin{aligned} j[u(t) - U(t,s)u(s)] = \int _s^t U^{\odot \star }(t,\tau )f(\tau ) d\tau . \end{aligned}$$

By \(\odot \)-reflexivity of X with respect to \(T_0\), and recalling that j is an isomorphism onto its image \(X^{\odot \odot }\), we get

$$\begin{aligned} u(t) = U(t,s)u(s) + j^{-1} \int _s^t U^{\odot \star }(t,\tau )f(\tau ) d\tau , \quad \forall t \in I, \end{aligned}$$
(44)

which shows the claim since \(\varphi = u(s)\). The continuity of f, together with Lemma 1, ensures that the weak\(^\star \) integral takes values in j(X), so (44) is well-defined. \(\square \)

Let us now come full circle by proving that the unique solution of (39) is given by (37). The following result is inspired by [24, Theorem 22].

Proposition 14

Suppose that \(f \in C(J,X^{\odot \star })\) and I is a subinterval of J. The unique solution of (39) on I is given by (37).

Proof

Let us first assume that I is compact. From Proposition 11 we get a unique solution \(u_{\varphi ,f}: I \rightarrow X\) of (39) and, for \((\varphi ,f) \in j^{-1} {\mathcal {D}}(A_0^{\odot \star }) \times {{\,\textrm{Lip}\,}}(I,X^{\odot \star })\), sequences of Lipschitz functions \(u_m: I \rightarrow X\) and \(f_m: I \rightarrow X^{\odot \star }\) that satisfy (40). For each \(m \in {\mathbb {N}}\), let \({\hat{f}}_m: J \rightarrow X^{\odot \star }\) be a Lipschitz extension of \(f_m\) to J, so that \({\hat{f}}_m |_{I} = f_m\). Substituting f with \({\hat{f}}_m\) and u with \(u_m\) in Proposition 12 shows that each \(u_m\) is a solution to the initial value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} d^\star (j \circ u_m)(t) = A_0^{\odot \star }ju_m(t) + B(t)u_m(t) + {\hat{f}}_m(t), \quad t \in I, \\ u_m(s) = \varphi . \end{array}\right. } \end{aligned}$$

Recall from (5) that each \(u_m\) is then also a solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} d^\star (j \circ u_m)(t) = A^{\odot \star }(t)ju_m(t) + {\hat{f}}_m(t), \quad &{} t \in I, \\ u_m(s) = \varphi . \end{array}\right. } \end{aligned}$$

It follows from Proposition 13, with u replaced by \(u_m\) and f replaced by \({\hat{f}}_m\), that

$$\begin{aligned} u_m(t) = U(t,s)\varphi + j^{-1} \int _s^t U^{\odot \star }(t,\tau ) f_m(\tau ) d\tau , \quad \forall m \in {\mathbb {N}}, \ t \in I, \end{aligned}$$
(45)

since \({\hat{f}}_m\) restricted to I is precisely \(f_m\). Taking the limit as \(m \rightarrow \infty \) in (45), we obtain

$$\begin{aligned} u_{\varphi ,f}(t) = U(t,s)\varphi + j^{-1} \int _s^t U^{\odot \star }(t,\tau ) f(\tau ) d\tau , \quad \forall t \in I, \end{aligned}$$
(46)

for all \((\varphi , f) \in j^{-1} {\mathcal {D}}(A_0^{\odot \star }) \times {{\,\textrm{Lip}\,}}(I,X^{\odot \star })\). As \(j^{-1} {\mathcal {D}}(A_0^{\odot \star }) \times {{\,\textrm{Lip}\,}}(I,X^{\odot \star })\) is dense in \(X \times C(I,X^{\odot \star })\), the continuity statement from Proposition 11 implies that (46) also holds for all \(\varphi \in X\) and \(f \in C(I,X^{\odot \star })\). Hence, the unique solution of (39) on I is given by (37) on I. To extend this result to non-compact subintervals I of J, one can follow the same proof as in [24, Theorem 22]. \(\square \)

1.2 C.2 Equivalence Between (T-DDE) and (T-AIE)

Let us now prove the important one-to-one correspondence between solutions of (T-DDE) and (T-AIE). We impose weaker assumptions on the (nonlinear) time-dependent perturbations here, since full smoothness is not needed for the proof.

Theorem 7

Consider (T-DDE) with \(L \in C({\mathbb {R}},{\mathcal {L}}(X,{\mathbb {R}}^n))\) and \(G \in C({\mathbb {R}} \times X, {\mathbb {R}}^n)\).

  1. 1.

    Suppose that \(y: [s-h,t_\varphi ) \rightarrow {\mathbb {R}}^n\) is a solution of (T-DDE), then the function \(u_\varphi : [s,t_\varphi ) \rightarrow X\) defined by

    $$\begin{aligned} u_\varphi (t):= y_t, \quad \forall t \in [s,t_\varphi ), \end{aligned}$$

    is a solution of (T-AIE).

  2. 2.

    Suppose that \(u_\varphi : [s,t_\varphi ) \rightarrow X\) is a solution of (T-AIE), then the function \(y: [s-h,t_\varphi ) \rightarrow {\mathbb {R}}^n\) defined by

    $$\begin{aligned} y(t):= {\left\{ \begin{array}{ll} \varphi (t-s), \quad &{}s-h \le t \le s,\\ u_\varphi (t)(0), \quad &{}s \le t \le t_\varphi , \end{array}\right. } \end{aligned}$$

    is a solution of (T-DDE).

Proof

Before we start proving the first assertion, notice that the differential equation from (T-DDE) is equivalent to the integral equation

$$\begin{aligned} y(t) = \varphi (0) + \int _s^t L(\tau )y_\tau + G(\tau ,y_\tau ) d\tau , \quad t \ge s, \end{aligned}$$
(47)

due to the fundamental theorem of calculus. Let us start with proving the first assertion.

1. Notice that the right-hand side of the abstract integral equation in (39) with the continuous function \(f = R(\cdot ,u_\varphi (\cdot ))\) equals

$$\begin{aligned} T_0(t-s)\varphi + j^{-1} \int _s^t T_0^{\odot \star }(t-\tau ) [L(\tau )u_\varphi (\tau ) + G(\tau ,u_\varphi (\tau ))]r^{\odot \star } d\tau , \quad \forall t \in [s,t_\varphi ). \end{aligned}$$

It then follows from the action of the shift semigroup (23), the assumption \(u_\varphi (t) = y_t\) and [15, Lemma XII.3.3], where the map g in that lemma must be replaced by the map \(L(\cdot )u_\varphi (\cdot ) + G(\cdot ,u_\varphi (\cdot ))\), which is continuous since \(L \in C({\mathbb {R}},{\mathcal {L}}(X,{\mathbb {R}}^n))\), \( u_\varphi \in C([s,t_\varphi ),X)\) and \(G \in C({\mathbb {R}} \times X, {\mathbb {R}}^n)\), that this right-hand side evaluated at \(\theta \in [-h,0]\) equals

$$\begin{aligned}&(T_0(t-s)\varphi )(\theta ) + j^{-1} \bigg (\int _s^t T_0^{\odot \star }(t-\tau ) [L(\tau )u_\varphi (\tau ) + G(\tau ,u_\varphi (\tau ))]r^{\odot \star } d\tau \bigg )(\theta ) \\&= (T_0(t-s)\varphi )(\theta ) + \int _s^{\max \{s,t+\theta \}} L(\tau )u_\varphi (\tau ) + G(\tau ,u_\varphi (\tau )) d\tau \\&= (T_0(t-s)\varphi )(\theta ) + \int _s^{\max \{s,t+\theta \}} L(\tau )y_\tau + G(\tau ,y_\tau ) d\tau \\&= {\left\{ \begin{array}{ll} \varphi (t+\theta -s), \quad &{}s-h \le t+\theta \le s, \\ \varphi (0) + \int _s^{t+\theta } L(\tau )y_\tau + G(\tau ,y_\tau ) d\tau , \quad &{} s \le t + \theta \le t_\varphi , \end{array}\right. } \\&= y(t+\theta ) = u_\varphi (t)(\theta ), \end{aligned}$$

where the fourth equality holds due to (47). Hence, \(u_\varphi \) is a solution to (39) with \(f = R(\cdot ,u_\varphi (\cdot ))\). It follows from Proposition 14 that \(u_\varphi \) is then also a solution of (37) with \(f = R(\cdot ,u_\varphi (\cdot ))\), which is equivalent to saying that \(u_\varphi \) is a solution of (T-AIE).

2. Let us first prove that the function y is continuous on \([s-h,t_\varphi )\). As \(\varphi \in X\), it is clear that y is continuous for \(t \in [s-h,s]\). Since point evaluation at 0 acts continuously on X and \(u_\varphi \in C([s,t_\varphi ),X)\), it follows that y is continuous on \([s,t_\varphi )\). Since \(u_\varphi (s)(0) = \varphi (0)\), we have that \(y \in C([s-h,t_\varphi ),{\mathbb {R}}^n)\).

Our next goal is to show that y satisfies (T-DDE) or equivalently (47). Because \(u_\varphi \) is a solution of (37) with \(f = R(\cdot ,u_\varphi (\cdot ))\), we know from Proposition 14 that \(u_\varphi \) is then also a solution of (39) with \(f = R(\cdot ,u_\varphi (\cdot ))\). It follows from (23) and [15, Lemma XII.3.3] that

$$\begin{aligned} y(t)&= u_\varphi (t)(0) \\&=(T_0(t-s)\varphi )(0) + j^{-1} \bigg (\int _s^t T_0^{\odot \star }(t-\tau ) [L(\tau )u_\varphi (\tau ) + G(\tau ,u_\varphi (\tau ))]r^{\odot \star } d\tau \bigg )(0)\\&= \varphi (0) + \int _s^t L(\tau )u_\varphi (\tau ) + G(\tau ,u_\varphi (\tau )) d\tau . \end{aligned}$$

It remains to show that \(u_\varphi (\tau ) = y_\tau \) for all \(\tau \in [s,t_\varphi )\), because then we have shown that y indeed satisfies (47). Let \(\theta \in [-h,0]\) be given. If \(\tau + \theta \in [s-h,s]\), then we have that

$$\begin{aligned} y_\tau (\theta ) = y(\tau + \theta ) = \varphi (\tau + \theta - s) = (T_0(\tau - s)\varphi )(\theta ) = u_\varphi (\tau )(\theta ), \end{aligned}$$

due to (23). When \(\tau + \theta \in [s,t_\varphi )\), it again follows from (23) and [15, Lemma XII.3.3] that

$$\begin{aligned} y_\tau (\theta )&= y(\tau + \theta )\\&= u_\varphi (\tau + \theta )(0)\\&= (T_0(\tau + \theta -s)\varphi )(0) \\&+ j^{-1} \bigg (\int _s^{\tau + \theta } T_0^{\odot \star }(\tau + \theta - \sigma ) [L(\sigma )u_\varphi (\sigma ) + G(\sigma ,u_\varphi (\sigma ))]r^{\odot \star } d\sigma \bigg )(0)\\&= \varphi (0) + \int _s^{\tau + \theta } L(\sigma )u_\varphi (\sigma ) + G(\sigma ,u_\varphi (\sigma )) d\sigma \\&= (T_0(\tau -s)\varphi )(\theta ) + j^{-1} \bigg (\int _s^{\tau } T_0^{\odot \star }(\tau - \sigma ) [L(\sigma )u_\varphi (\sigma ) + G(\sigma ,u_\varphi (\sigma ))]r^{\odot \star } d\sigma \bigg )(\theta )\\&=u_\varphi (\tau )(\theta ), \end{aligned}$$

and so \(y_\tau = u_\varphi (\tau )\) for all \(\tau \in [s,t_\varphi )\). To conclude,

$$\begin{aligned} y(t) = \varphi (0) + \int _s^t L(\tau )y_\tau + G(\tau ,y_\tau ) d\tau , \end{aligned}$$

and so y satisfies the differential equation of (T-DDE). By the history property and the fact that \(\varphi \in X\), it follows by the method of steps applied to (47) that \(y \in C^1([s,t_\varphi ),{\mathbb {R}}^n)\). This shows that y is indeed a solution to (T-DDE). \(\square \)
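
Although the theorem itself is purely functional-analytic, the correspondence it describes is easy to observe numerically. The following sketch is offered only as an illustration: the scalar right-hand side, the delay and the crude forward Euler discretisation are arbitrary choices and play no role in the theory. It integrates a scalar instance of (T-DDE), builds the history segments \(y_t\) that constitute the corresponding solution \(u_\varphi \) of (T-AIE), and checks the integrated identity (47) at the final time.

```python
import numpy as np

# Scalar toy instance of (T-DDE): y'(t) = L(t)y_t + G(t, y_t) with a single delay h.
# All concrete choices below are ad hoc and for illustration only.
h, s, t_end, dt = 1.0, 0.0, 4.0, 1e-2
L = lambda t, seg: -0.8 * seg(-h)              # linear part acting on the segment y_t
G = lambda t, seg: 0.1 * np.sin(seg(0.0))      # nonlinear part acting on the segment y_t
phi = lambda theta: np.cos(theta)              # initial history phi in X = C([-h,0], R)

n_hist = int(round(h / dt))                    # grid points in the history interval [s-h, s]
n_fwd = int(round((t_end - s) / dt))           # grid points in the forward interval [s, t_end]
ts = s - h + dt * np.arange(n_hist + n_fwd + 1)
y = np.empty_like(ts)
y[:n_hist + 1] = phi(ts[:n_hist + 1] - s)      # y(t) = phi(t - s) on [s - h, s]

def segment(i):
    """History segment y_{t_i}, i.e. theta -> y(t_i + theta) for theta in [-h, 0]."""
    return lambda theta: np.interp(ts[i] + theta, ts[:i + 1], y[:i + 1])

for i in range(n_hist, len(ts) - 1):           # forward Euler; essentially the method of steps
    y[i + 1] = y[i] + dt * (L(ts[i], segment(i)) + G(ts[i], segment(i)))

# Check the integral identity (47): y(t) = phi(0) + int_s^t [L(tau)y_tau + G(tau,y_tau)] dtau.
end = len(ts) - 1
rhs = phi(0.0) + dt * sum(L(ts[i], segment(i)) + G(ts[i], segment(i)) for i in range(n_hist, end))
print(f"y(t_end) = {y[end]:.6f}, right-hand side of (47) = {rhs:.6f}")
```

Up to rounding, the two printed numbers coincide, reflecting the fact that the segments \(y_t\) are precisely the values \(u_\varphi (t)\) appearing in part 1 of the proof.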
