1 Introduction

In this paper, we consider the wave equation

$$\begin{aligned} u_{tt}-c(t)\Delta u=0, \end{aligned}$$
(1.1)

and its abstract version

$$\begin{aligned} u''(t)+c(t)Au(t)=0, \end{aligned}$$
(1.2)

where A is a linear nonnegative self-adjoint operator with domain D(A) in some real Hilbert space H. We always assume that the coefficient c(t), which in the model (1.1) represents the square of the propagation speed, is defined in some time interval \((0,T_{0})\), and satisfies the strict hyperbolicity assumption

$$\begin{aligned} 0<\mu _{1}\le c(t)\le \mu _{2} \quad \forall t\in (0,T_{0}). \end{aligned}$$
(1.3)

We investigate the regularity of solutions to (1.2) with initial data

$$\begin{aligned} u(0)=u_{0}, \quad u'(0)=u_{1}. \end{aligned}$$
(1.4)

We recall that problem (1.2)–(1.4) admits a unique solution for large classes of initial data, even if the coefficient c(t) is just in \(L^{1}((0,T_{0}))\), without sign conditions. Nevertheless, in general this solution is very weak, in the sense that it lives in a huge space of hyperdistributions, even if initial data are smooth.

Here we are interested in solutions with more “space” regularity. In order to state the definitions in the abstract setting we recall that, for every real number \(\beta \), the operator \(A^{\beta }\) is defined in a suitable domain \(D(A^{\beta })\), which in the concrete case corresponds to the Sobolev space \(H^{2\beta }\) (distributions if \(\beta <0\)).

Definition 1.1

(Well-posedness vs derivative loss)

  • (No derivative loss). Problem (1.2)–(1.4) is said to be well-posed with no derivative loss if, for every pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\), the unique solution satisfies

    $$\begin{aligned} (u(t),u'(t))\in D(A^{\beta +1/2})\times D(A^{\beta }) \quad \forall t\in [0,T_{0}]. \end{aligned}$$
  • (Arbitrarily small derivative loss). Problem (1.2)–(1.4) is said to be well-posed with (at most an) arbitrarily small derivative loss if, for every pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\), the unique solution satisfies

    $$\begin{aligned} (u(t),u'(t))\in D(A^{\beta -\varepsilon +1/2})\times D(A^{\beta -\varepsilon }) \quad \forall t\in [0,T_{0}] \quad \forall \varepsilon >0. \end{aligned}$$

    The arbitrarily small derivative loss does actually happen if there exists a pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\) such that the unique solution satisfies

    $$\begin{aligned} (u(t),u'(t))\not \in D(A^{\beta +1/2})\times D(A^{\beta }) \quad \forall t\in (0,T_{0}]. \end{aligned}$$
  • (Finite derivative loss). Problem (1.2)–(1.4) is said to be well-posed with (at most a) finite derivative loss if there exists a positive real number \(\delta \) such that, for every pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\), the unique solution satisfies

    $$\begin{aligned} (u(t),u'(t))\in D(A^{\beta -\delta +1/2})\times D(A^{\beta -\delta }) \quad \forall t\in [0,T_{0}]. \end{aligned}$$

    The finite derivative loss does actually happen if there exist a function \(\delta :(0,T_{0}]\rightarrow (0,+\infty )\), and a pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\), such that the unique solution satisfies

    $$\begin{aligned} (u(t),u'(t))\not \in D(A^{\beta -\delta (t)+1/2})\times D(A^{\beta -\delta (t)}) \quad \forall t\in (0,T_{0}]. \end{aligned}$$
  • (Infinite derivative loss). Problem (1.2)–(1.4) is said to exhibit an infinite derivative loss if there exists a pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\) such that the unique solution satisfies

    $$\begin{aligned} (u(t),u'(t))\not \in D(A^{-\gamma +1/2})\times D(A^{-\gamma }) \quad \forall \gamma >0,\quad \forall t\in (0,T_{0}]. \end{aligned}$$

Due to the linearity of the equation, none of the definitions stated above depends on the choice of \(\beta \). In words, no derivative loss means that all solutions live in the same space as the initial data, while finite derivative loss means that the “space regularity” of the solution for positive times is less than the corresponding regularity of the initial data. The parameter \(\delta \) measures this loss of regularity, which is a true loss of derivatives in the concrete case where the domains of powers of A are actually Sobolev spaces. The arbitrarily small derivative loss is a condition in between no derivative loss and finite derivative loss: in this case solutions for positive times do not remain in the same space as the initial data, but they do remain in all spaces with smaller exponents. Finally, the infinite derivative loss is a dramatic loss of regularity: in the concrete case it means the existence of solutions whose initial data have any given Sobolev regularity, and nevertheless for positive times they are not even distributions.

It is well known that the derivative loss of solutions depends on the time-regularity of the coefficient c(t), and in particular on its oscillatory behavior. This regularity has been measured in different ways in the literature. Let us mention some of them.

Modulus of continuity Let us assume that c(t) is continuous in the closed interval \([0,T_{0}]\), and let \(\omega :[0,+\infty )\rightarrow [0,+\infty )\) be a function such that

$$\begin{aligned} |c(t)-c(s)|\le \omega (|t-s|) \quad \forall (t,s)\in [0,T_{0}]^{2}. \end{aligned}$$
(1.5)

Any function \(\omega \) with this property is called a modulus of continuity for c(t) in \([0,T_{0}]\).

The relation between the modulus of continuity of the coefficient and the regularity of solutions was investigated for the first time by F. Colombini, E. De Giorgi and S. Spagnolo in the seminal paper [4]. The result was then refined and extended in many subsequent papers (see for example [2, 5, 8, 9]). Concerning the derivative loss of solutions, the situation is summarized in Table 1, where the assumptions in the first column refer to the behavior of \(\omega (\sigma )\) as \(\sigma \rightarrow 0^{+}\).

Table 1 Modulus of continuity of c(t) vs derivative loss

Now we know that all the results stated in Table 1 are residually optimal: namely, for every modulus of continuity \(\omega \), the set of coefficients c(t) that are \(\omega \)-continuous and for which problem (1.2)–(1.4) does exhibit the prescribed derivative loss is residual in the sense of Baire category (see [12, 13]).

Singular behavior of the derivative at the origin Let us assume that c(t) is differentiable for positive times, and let \(\theta :(0,+\infty )\rightarrow (0,+\infty )\) be a nonincreasing function such that

$$\begin{aligned} |c'(t)|\le \theta (t) \quad \forall t\in (0,T_{0}]. \end{aligned}$$
(1.6)

We point out that now c(t) is not required to be continuous at \(t=0\), and \(\theta (t)\) is allowed to diverge as \(t\rightarrow 0^{+}\); actually, this is the interesting case. The effect of this singular behavior of \(c'(t)\) at \(t=0\) was studied by Colombini et al. [6] in the case where \(\theta (t)\sim 1/t^{\beta }\). Concerning the regularity of solutions, the situation is summarized in Table 2, where the assumptions in the first column refer to the behavior of \(\theta (t)\) as \(t\rightarrow 0^{+}\). Note that, due to the strict hyperbolicity condition, the divergence of the integral of \(|c'(t)|\) implies a highly oscillatory behavior of c(t).

Table 2 Singular behavior of \(c'(t)\) vs derivative loss

The optimality of many points in Table 2 remained an open problem for almost two decades. In particular, the result of the last row was proved in [14, Theorem 2.9], while the optimality of the results of the second and third rows follows, respectively, from Corollaries 2.6 and 2.7 of the present paper.

Singular behavior of the first two derivatives at the origin A natural way to extend the results of the previous paragraph is to consider the first two derivatives of the coefficient c(t), with the hope that a bound on \(|c'(t)|\) and \(|c''(t)|\) can prevent c(t) from oscillating too fast and yield a smaller derivative loss. A first result in this direction was obtained by Yamazaki in [22]. The assumption is that c(t) is twice differentiable for positive times and satisfies, up to multiplicative constants, the estimates

$$\begin{aligned} |c'(t)|\le \frac{1}{t} \quad \text {and}\quad |c''(t)|\le \frac{1}{t^{2}} \end{aligned}$$

for every \(t\in (0,T_{0}]\). Under these assumptions she proved that problem (1.2)–(1.4) is well-posed with no derivative loss.

Some years later, Colombini et al. [7] assumed that, up to multiplicative constants, the coefficient c(t) satisfies

$$\begin{aligned} |c'(t)|\le \frac{|\log t|}{t} \quad \text {and}\quad |c''(t)|\le \left( \frac{\log t}{t}\right) ^{2} \end{aligned}$$

in a right neighborhood of the origin. Under these assumptions they proved that problem (1.2)–(1.4) is well-posed with finite derivative loss (see also [15, 16]).

These results were recently extended and unified in [14], where it is assumed that

$$\begin{aligned} |c'(t)|\le \frac{\varphi (t)}{t} \quad \text {and}\quad |c''(t)|\le \left( \frac{\varphi (t)}{t}\right) ^{2}\exp (\psi (t)), \end{aligned}$$

where \(\varphi :(0,T_{0})\rightarrow (0,+\infty )\) and \(\psi :(0,T_{0})\rightarrow (0,+\infty )\) are suitable nonincreasing and continuous functions. The results of [14] are summarized in Table 3, where the first column refers to the behavior as \(t\rightarrow 0^{+}\).

Table 3 Singular behavior of \(c'(t)\) and \(c''(t)\) vs derivative loss

We observe that in the case where \(\varphi (t)\) and \(\psi (t)\) are constant functions this is exactly the result of [22], while in the case where \(\varphi (t)\sim |\log t|\) and \(\psi (t)\) is constant this is exactly the result of [7]. Again, the derivative loss prescribed by Table 3 is residually optimal.

Modulus of continuity and first derivatives In this paper we combine the assumptions on the modulus of continuity and on the first derivative. More precisely, we assume that c(t) is continuous in the closed interval \([0,T_{0}]\), differentiable in the half-open interval \((0,T_{0}]\), and that it satisfies both (1.5) and (1.6) for suitable functions \(\omega \) and \(\theta \). The case where \(\omega (\sigma )\sim \sigma ^{\alpha }\) and \(\theta (t)\sim 1/t^{\beta }\) was considered by Colombini et al. [6] (see also [1]), while more subtle examples were considered by Colombini et al. [7] and by Del Santo et al. [10] (see also [19]). Here we unify and improve some of their results, both on the positive and on the negative side. More importantly, we show that all those special examples fit into a common framework.

Our main result is that the key quantity

$$\begin{aligned} m(\lambda ):=\min \left\{ \lambda \,\omega \left( \frac{1}{\lambda }\right) s+ \int _{s}^{T_{0}}\theta (t)\,dt:s\in [0,T_{0}]\right\} \quad \forall \lambda >0 \end{aligned}$$
(1.7)

determines the derivative loss of solutions to problem (1.2)–(1.4) according to Table 4, where the first column refers to the behavior of \(m(\lambda )\) as \(\lambda \rightarrow +\infty \).

Table 4 Modulus of continuity and singular behavior of \(c'(t)\) vs derivative loss

As usual, all results are residually optimal.
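Before describing the techniques, it is instructive to see the key quantity (1.7) in action. The following minimal numerical sketch (our own illustration, based on hypothetical sample choices of \(\omega \) and \(\theta \)) evaluates \(m(\lambda )\) on a grid for \(\omega (\sigma )=\sigma ^{1/2}\) and \(\theta (t)=1/t^{2}\) with \(T_{0}=1\); in this case an explicit computation gives \(m(\lambda )=2\lambda ^{1/4}-1\), in accordance with the exponent \(\lambda ^{(1-\alpha )(\beta -1)/\beta }\) of Remark 2.13.

```python
import numpy as np

T0 = 1.0

def m(lam, omega, theta, n=200_000):
    # Evaluate (1.7) by brute force: minimize over a grid of s in (0, T0],
    # computing the tail integrals of theta with the trapezoid rule.
    s = np.linspace(T0 / n, T0, n)
    th = theta(s)
    cum = np.concatenate([[0.0], np.cumsum((th[:-1] + th[1:]) / 2 * np.diff(s))])
    tail = cum[-1] - cum  # approximates int_s^{T0} theta(t) dt
    return np.min(lam * omega(1 / lam) * s + tail)

# Hypothetical sample data: omega(sigma) = sigma^(1/2), theta(t) = 1/t^2.
# The minimum is attained at s = lambda^(-1/4), and m(lambda) = 2*lambda^(1/4) - 1.
for lam in [1e2, 1e4, 1e6]:
    print(f"lambda = {lam:.0e}   m = {m(lam, np.sqrt, lambda t: t**-2):9.3f}"
          f"   exact = {2 * lam ** 0.25 - 1:9.3f}")
```

Since in this example \(m(\lambda )\gg \log \lambda \), the corresponding derivative loss is infinite in the Sobolev scale, even though the problem remains well-posed in suitable Gevrey spaces (see Remark 2.13).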

Overview of the technique—upper bound for the derivative loss From the technical point of view, it is well-known that the spectral theorem for self-adjoint nonnegative operators reduces the abstract equation (1.2) to the family of ordinary differential equations

$$\begin{aligned} u_{\lambda }''(t)+\lambda ^{2}c(t)u_{\lambda }(t)=0, \end{aligned}$$
(1.8)

where \(\lambda \) is a positive real parameter. In particular, if one can prove that solutions to (1.8) satisfy an estimate of the form

$$\begin{aligned} u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2}\le \left( u_{\lambda }'(0)^{2}+\lambda ^{2}u_{\lambda }(0)^{2}\right) \exp (\phi _{+}(\lambda ,t)) \quad \forall t\in [0,T_{0}], \end{aligned}$$
(1.9)

where \(\phi _{+}(\lambda ,t)\) is a function independent of initial data, then the behavior of \(\phi _{+}(\lambda ,t)\) as \(\lambda \rightarrow +\infty \) determines the maximum possible derivative loss of solutions to (1.2)–(1.4) according to Table 5.

Table 5 Energy growth for solutions to (1.8) vs derivative loss

When c(t) is \(\omega \)-continuous, the approximated energy estimates introduced in [4] allow one to show that (1.9) holds true with

$$\begin{aligned} \phi _{+}(\lambda ,t)\sim \lambda \,\omega \left( \frac{1}{\lambda }\right) t, \end{aligned}$$

and this explains all the results of Table 1 and much more, for example the well-posedness in Gevrey spaces in the case of Hölder continuous coefficients.

In a different direction, when c(t) is of class \(C^{1}\) in the closed interval \([0,T_{0}]\), the classical hyperbolic estimates give that (1.9) holds true with (note that in this case there is no dependence on \(\lambda \))

$$\begin{aligned} \phi _{+}(\lambda ,t)\sim \int _{0}^{t}|c'(\tau )|\,d\tau . \end{aligned}$$

In this paper \(|c'(t)|\) is not necessarily integrable in a right neighborhood of the origin, and therefore we need to mix the two techniques: we use an estimate of the first type in some initial interval [0, s], followed by an estimate of the second type in the remaining interval [s, t], where (1.6) provides a control on the derivative. When we optimize with respect to s, we conclude that (1.9) now holds true with \(\phi _{+}(\lambda ,t)\sim m(\lambda )\), with \(m(\lambda )\) given by (1.7). This is enough to conclude that the derivative loss of solutions to (1.2) is at most the one given in Table 4. We refer to statement (1) of Theorem 2.3 for the details.

Overview of the technique—road map to counterexamples The main contribution of this paper is the construction of solutions that exhibit a prescribed derivative loss. This is a much more delicate issue, since it requires showing that estimates of the form (1.9) are in some sense optimal. In statement (2) of Theorem 2.3 the idea is to look for coefficients c(t) such that solutions to (1.8) satisfy

$$\begin{aligned} u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2}\ge \left( u_{\lambda }'(0)^{2}+\lambda ^{2}u_{\lambda }(0)^{2}\right) \exp (\phi _{-}(\lambda ,t)) \quad \forall t\in [0,T_{0}], \end{aligned}$$
(1.10)

at least on a sequence \(\lambda _{n}\rightarrow +\infty \), with \(\phi _{-}(\lambda ,t)\sim m(\lambda )\) for positive times. These coefficients are called “universal activators” because the same coefficient induces the exponential growth of a sequence of solutions. They are the fundamental tool in the construction of counterexamples, as we show in Proposition 4.4 for operators that admit an unbounded sequence of eigenvalues, and in Proposition 4.5 for general unbounded self-adjoint operators, which is the case, for example, of the concrete wave equation (1.1) on the whole space or on an exterior domain.

Following the path introduced in [12,13,14], the existence of universal activators is reduced to the existence of families of “asymptotic activators”, namely families \(\{c_{\lambda }(t)\}\) of coefficients such that solutions to

$$\begin{aligned} u_{\lambda }''(t)+\lambda ^{2}c_{\lambda }(t)u_{\lambda }(t)=0 \end{aligned}$$
(1.11)

satisfy (1.10) when \(\lambda \) is large enough.

We stress the difference between universal activators, where the same coefficient produces an exponential growth for a sequence of solutions, and asymptotic activators, which achieve the same exponential growth by choosing a different coefficient for different values of \(\lambda \) (note that in (1.11) the coefficient does depend on \(\lambda \)).

The main point proved in [14] is that the existence of sufficiently many families of asymptotic activators, within a certain class of coefficients, implies the existence of a residual set of universal activators in the same class. This is some sort of “nonlinear uniform boundedness principle” (where nonlinear refers to the map coefficient \(\mapsto \) solutions): if there exist sufficiently many families of objects that show asymptotically the optimality of some estimate, then there are residually many objects that show directly the optimality of the same estimate. Equivalently, if we cannot estimate the norm of \((u(t),u'(t))\) in some space Y in terms of the norm of \((u_{0},u_{1})\) in some space X, then there exist solutions such that \((u_{0},u_{1})\) lies in the space X, but \((u(t),u'(t))\) does not lie in the space Y for positive times.

Finally, asymptotic activators are produced starting from the usual building blocks, introduced for the first time in [4], and then modified and adapted in the subsequent literature. The key observation is that

$$\begin{aligned} u_{\lambda }(t):= \frac{1}{\lambda }\sin (\lambda t) \exp \left( \frac{1}{8}\int _{0}^{t}\varepsilon (s)\sin ^{2}(\lambda s)\,ds\right) \end{aligned}$$
(1.12)

grows exponentially and solves (1.11) with

$$\begin{aligned} c_{\lambda }(t):=1-\frac{\varepsilon (t)}{4\lambda }\sin (2\lambda t) -\frac{\varepsilon '(t)}{8\lambda ^{2}}\sin ^{2}(\lambda t) -\frac{\varepsilon (t)^{2}}{64\lambda ^{2}}\sin ^{4}(\lambda t). \end{aligned}$$
(1.13)
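This identity can be verified by direct differentiation. The following minimal symbolic check (a sketch of ours, relying on sympy, with a generic smooth \(\varepsilon \)) confirms that (1.12) solves (1.11) with the coefficient (1.13):

```python
import sympy as sp

t, s, lam = sp.symbols('t s lambda', positive=True)
eps = sp.Function('epsilon')

# Building block (1.12)
E = sp.exp(sp.Integral(eps(s) * sp.sin(lam * s)**2, (s, 0, t)) / 8)
u = sp.sin(lam * t) / lam * E

# Coefficient (1.13)
c = (1 - eps(t) / (4 * lam) * sp.sin(2 * lam * t)
       - eps(t).diff(t) / (8 * lam**2) * sp.sin(lam * t)**2
       - eps(t)**2 / (64 * lam**2) * sp.sin(lam * t)**4)

# Residual of equation (1.11): it should vanish identically
residual = u.diff(t, 2) + lam**2 * c * u
print(sp.simplify(sp.expand_trig(residual)))  # expected output: 0
```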

When showing the optimality of the results of Table 1, it is enough to choose \(\varepsilon (t)\) to be independent of t and equal to \(\lambda \,\omega (1/\lambda )\), up to multiplicative constants. In this way the integral in the exponential term of (1.12) grows as a multiple of \(\lambda \,\omega (1/\lambda )\), as required, and it is possible to control the modulus of continuity of the coefficient because \(c_{\lambda }(t)\) makes oscillations of order \(\omega (1/\lambda )\) in intervals with length of order \(1/\lambda \) (namely the period of the trigonometric terms).

In this paper we consider the minimizer \(s_{\lambda }\) in the minimum problem (1.7), and we check which of the two summands is bigger for \(s=s_{\lambda }\). When it is the first one, we define again \(\varepsilon (t)\) as \(\lambda \,\omega (1/\lambda )\) (see Proposition 4.9). When it is the second one, namely the integral, one would like to choose \(\varepsilon (t)=\theta (t)\), so that the integral in the exponential term of (1.12) grows as the integral of \(\theta (t)\). This choice has several disadvantages, mainly because we need to control \(c_{\lambda }'(t)\): the presence of \(\varepsilon '(t)\) in (1.13) would force us to assume that \(\theta (t)\) is twice differentiable, with a lot of control on its derivatives.

In Proposition 4.10, we overcome this difficulty by choosing \(\varepsilon (t)\) equal to a piecewise constant approximation of \(\theta (t)\), changing the constant whenever the trigonometric terms vanish. In this way \(c_{\lambda }(t)\) remains Lipschitz continuous, the term with \(\varepsilon '(t)\) disappears, and the integral in (1.12) is equivalent to a Riemann sum for the integral of \(\theta (t)\). The lack of this type of construction, which becomes fundamental when the growth of \(\theta (t)\) is close enough to 1/t, is probably the reason why the previous results in the literature were not optimal.
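To make the Riemann-sum mechanism concrete, here is a small numerical sketch (sample data of ours; an illustration of the idea, not the actual construction of Proposition 4.10). We freeze \(\varepsilon \) on the half-periods \([k\pi /\lambda ,(k+1)\pi /\lambda )\), where \(\sin (\lambda t)\) vanishes, and compare the integral appearing in (1.12) with half of the integral of \(\theta (t)\); since \(\sin ^{2}\) has mean 1/2 on each half-period, the two quantities agree up to a bounded error.

```python
import numpy as np

def trap(y, x):
    # plain trapezoid rule
    return float(np.sum((y[:-1] + y[1:]) / 2 * np.diff(x)))

lam, T0 = 500.0, 1.0
theta = lambda t: 1.0 / t            # sample rate with divergent integral near 0

a = np.pi / lam                      # start after the first half-period
t = np.linspace(a, T0, 400_000)
k = np.floor(lam * t / np.pi)
eps = theta(k * np.pi / lam)         # piecewise constant approximation of theta

lhs = trap(eps * np.sin(lam * t)**2, t)   # integral appearing in (1.12)
rhs = 0.5 * trap(theta(t), t)             # half of the integral of theta
print(lhs, rhs)  # close: the gap is at most theta(a)*pi/(2*lam) = 1/2 here
```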

Related problems and future perspectives We hope that the methods of this paper, more than the results themselves, could be useful in dealing with analogous problems. Certainly the nonlinear uniform boundedness principle, namely the general path from asymptotic to universal activators, can be used to show in an efficient way the optimality of many positive results. Indeed, some of the known counterexamples are still stated in the form of the impossibility of a certain energy estimate, in the spirit of asymptotic activators, and not as examples of solutions that actually lose derivatives (see for example [2, Theorem 2.3 and Theorem 2.5], or [11, Theorem 2.7]).

In a different direction, we are confident that our techniques could also shed some light on a related problem studied in the last two decades in a series of papers by Reissig and Smith [21], Colombini [3], Hirosawa [17, 18], and Ebert et al. [11]. They consider the wave equation (1.1) with a smooth coefficient c(t) defined for all positive times, with two types of assumptions: the decay of some derivatives of c(t) as \(t\rightarrow +\infty \), and a “stabilization condition”, namely some integral control on \(|c(t)-c_{\infty }|\), where \(c_{\infty }\) is a suitable constant. They are interested in what they call “generalized energy conservation”, namely the boundedness of the ratio between the energy at time t and the energy at time 0. There are still some gaps between the positive results and the counterexamples (see for example [11, Table 1 and Table 2]). The analogy with this paper is plausible: the decay of derivatives at infinity should correspond to the blow-up at the origin, the stabilization condition could correspond to the modulus of continuity, and the generalized energy conservation to well-posedness with no derivative loss.

Structure of the paper This paper is organized as follows. In Sect. 2, we state our main results and some consequences, and we comment on them. In Sect. 3, we prove the positive part, namely the energy estimates from above that yield a bound from above for the derivative loss. In Sect. 4, we present the construction of the asymptotic activators, and how they lead to our counterexamples. Finally, in Sect. 5, we prove two corollaries concerning two special cases.

2 Statements

2.1 Notations and main result

Let us start by introducing some terminology and notation.

Definition 2.1

(Modulus of continuity)

A modulus of continuity is a function \(\omega :[0,+\infty )\rightarrow [0,+\infty )\) such that

  • \(\omega (\sigma )>0\) for \(\sigma >0\), and \(\omega (\sigma )\rightarrow 0\) as \(\sigma \rightarrow 0^{+}\),

  • the function \(\sigma \mapsto \omega (\sigma )\) is nondecreasing,

  • the function \(\sigma \mapsto \sigma /\omega (\sigma )\) is nondecreasing.

A function \(c:[0,T_{0}]\rightarrow \mathbb {R}\) is called \(\omega \)-continuous if it satisfies (1.5) with this modulus of continuity \(\omega \).

Definition 2.2

(Classes of coefficients) Let \(T_{0}\), \(\mu _{1}\), \(\mu _{2}\) be positive real numbers with \(\mu _{2}>\mu _{1}\). Let \(\omega \) be a modulus of continuity, and let \(\theta :(0,T_{0})\rightarrow (0,+\infty )\) be a continuous and nonincreasing function.

We call \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) the set of functions \(c\in W^{1,\infty }_{loc}((0,T_{0}))\) that satisfy (1.3) and (1.5) in the pointwise sense, and (1.6) in the almost everywhere sense.

We observe that \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) is a complete metric space with respect to the distance induced by the norm of \(L^{\infty }((0,T_{0}))\). We observe also that the elements of this space are continuous in \([0,T_{0}]\), and therefore pointwise values c(t) are well defined.

We are now ready to state our main result.

Theorem 2.3

(Main energy estimates)

Let \(T_{0}\), \(\mu _{1}\), \(\mu _{2}\) be positive real numbers with \(\mu _{2}>\mu _{1}\). Let \(\omega \) be a modulus of continuity, and let \(\theta :(0,T_{0})\rightarrow (0,+\infty )\) be a continuous and nonincreasing function. For every positive real number \(\lambda \), let us define \(m(\lambda )\) as in (1.7). Let us consider the classes of coefficients introduced in Definition 2.2, and let us introduce the constants

$$\begin{aligned} M_{1}:=\left( \frac{\max \{1,\mu _{2}\}}{\min \{1,\mu _{1}\}}\right) ^{2}, \quad M_{2}:=\frac{1}{\mu _{1}}+\frac{1}{\sqrt{\mu _{1}}}, \quad M_{3}:=\frac{1}{128\mu _{2}}\min \left\{ 1,\frac{\sqrt{\mu _{1}}}{\pi }\right\} .\nonumber \\ \end{aligned}$$
(2.1)

Then the following statements hold true.

  1. (1)

    (Energy estimate from above). For every coefficient \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) and every positive real number \(\lambda \) it turns out that every solution to (1.8) satisfies

    $$\begin{aligned} u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2}\le M_{1}\left( u_{\lambda }'(0)^{2}+\lambda ^{2}u_{\lambda }(0)^{2}\right) \exp (M_{2}\,m(\lambda )) \quad \forall t\in [0,T_{0}]. \nonumber \\ \end{aligned}$$
    (2.2)
  2. (2)

    (Energy estimate from below). Let us assume that \(m(\lambda )\rightarrow +\infty \) as \(\lambda \rightarrow +\infty \).

    Then, for every sequence of positive real numbers \(\lambda _{n}\rightarrow +\infty \), the set of coefficients \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) such that the solutions to (1.8) with initial data \(u_{\lambda }(0)=0\), \(u_{\lambda }'(0)=1\) satisfy

    $$\begin{aligned} \limsup _{n\rightarrow +\infty }\left( |u_{\lambda _{n}}'(t)|^{2}+\lambda _{n}^{2}|u_{\lambda _{n}}(t)|^{2}\right) \exp \left( -M_{3}\,m(\lambda _{n})\right) \ge 1 \quad \forall t\in (0,T_{0}] \end{aligned}$$

    is residual.

The energy estimates of Theorem 2.3 can be applied to the abstract wave equation, yielding the following result.

Theorem 2.4

(Derivative loss for the abstract wave equation)

Let \(T_{0}\), \(\mu _{1}\), \(\mu _{2}\), \(\omega \), \(\theta \), and \(m(\lambda )\) be as in Theorem 2.3. Let us consider the classes of coefficients introduced in Definition 2.2. Let us consider the abstract equation (1.2), where A is a linear nonnegative self-adjoint operator in some real Hilbert space H.

Then the following statements hold true.

  1. (1)

    (Estimate from above for the derivative loss). For every \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) the derivative loss of solutions to problem (1.2)–(1.4) is at most the one prescribed by Table 4.

  2. (2)

    (Optimality of the derivative loss). If the operator A is unbounded, then the set of coefficients \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for which problem (1.2)–(1.4) exhibits exactly the derivative loss prescribed by Table 4 is residual.

Remark 2.5

One could define the class \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) also by considering only coefficients c(t) that are of class \(C^{1}\) for positive times, so that (1.6) can now be required in the pointwise sense. In this case a structure of complete metric space is induced by the norm

$$\begin{aligned} \Vert c\Vert _{\theta }:=\max \{|c(t)|:t\in [0,T_{0}]\}+\sup \left\{ \frac{|c'(t)|}{\theta (t)}:t\in (0,T_{0})\right\} . \end{aligned}$$

All the previous results hold true also in this restricted class. This is actually the approach that was carried out in [14], and it delivers a residual (in this new space) class of counterexamples that are of class \(C^{1}\) for positive times. On the other hand, as explained in [14, Section 4.5], it is always possible to produce counterexamples of class \(C^{\infty }\).

2.2 Some examples

Let us discuss the consequences of Theorem 2.4 in some special cases. A first result is that well-posedness with no derivative loss holds true in the class of coefficients \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) if and only if either \(\theta (t)\) guarantees that c(t) has bounded variation, or \(\omega (\sigma )\) guarantees that c(t) is Lipschitz continuous.

Corollary 2.6

Let us consider the same setting of Theorem 2.4.

Let us assume that the operator A is unbounded, and that

$$\begin{aligned} \lim _{\sigma \rightarrow 0^{+}}\frac{\sigma }{\omega (\sigma )}=0 \quad \text {and}\quad \int _{0}^{T_{0}}\theta (t)\,dt=+\infty . \end{aligned}$$
(2.3)

Then the set of coefficients \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for which problem (1.2)–(1.4) exhibits at least an arbitrarily small derivative loss is residual.

Let us examine now the case where \(\theta (t)\sim 1/t\). According to Table 2 this assumption guarantees that the derivative loss is at most finite. Now we can show that the derivative loss is actually arbitrarily small if c(t) is \(\alpha \)-Hölder continuous for every \(\alpha \in (0,1)\), and this assumption is optimal.

Corollary 2.7

Let us consider the same setting of Theorem 2.4.

  1. (1)

    (Arbitrarily small derivative loss). Let us assume that the modulus of continuity satisfies

    $$\begin{aligned} \forall \alpha \in (0,1) \quad \lim _{\sigma \rightarrow 0^{+}}\frac{\omega (\sigma )}{\sigma ^{\alpha }}=0, \end{aligned}$$
    (2.4)

    and the function \(\theta (t)\) satisfies

    $$\begin{aligned} \limsup _{t\rightarrow 0^{+}}t\cdot \theta (t)<+\infty . \end{aligned}$$
    (2.5)

    Then, for every propagation speed \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\), problem (1.2)–(1.4) has at most an arbitrarily small derivative loss.

  2. (2)

    (Finite derivative loss). Let us assume that the operator A is unbounded, that the modulus of continuity satisfies

    $$\begin{aligned} \exists \alpha \in (0,1) \quad \liminf _{\sigma \rightarrow 0^{+}}\frac{\omega (\sigma )}{\sigma ^{\alpha }}>0, \end{aligned}$$
    (2.6)

    and that the function \(\theta (t)\) satisfies

    $$\begin{aligned} \liminf _{t\rightarrow 0^{+}}t\cdot \theta (t)>0. \end{aligned}$$
    (2.7)

    Then the set of propagation speeds \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for which problem (1.2)–(1.4) has at least a finite derivative loss is residual.

Finally, let us examine the case where \(\theta (t)\gg 1/t\). In this case problem (1.2)–(1.4) can exhibit any type of derivative loss, depending on the modulus of continuity \(\omega \). The modulus of continuity that guarantees a finite derivative loss is always stronger than any Hölder modulus \(\sigma ^{\alpha }\) with \(\alpha \in (0,1)\), but weaker than the classical \(\sigma |\log \sigma |\) that guarantees a finite derivative loss even without any assumption on \(c'(t)\). In Table 6 we display, for some special choices of \(\theta (t)\), the moduli of continuity that guarantee that \(m(\lambda )\sim \log \lambda \), and hence a finite derivative loss. As usual, these choices of \(\omega (\sigma )\) represent the threshold between the arbitrarily small derivative loss and the infinite derivative loss.

Table 6 Examples of finite derivative loss when \(\theta (t)\gg 1/t\)

Remark 2.8

The reader might be puzzled by the multiple continuity moduli that appear in the first row, or by the fact that the continuity modulus is the same in the third and fourth row, where in addition \(\theta (t)\) depends on \(\beta \) but \(\omega (\sigma )\) does not. This is due to the bizarre behavior of the minimum problem (1.7). For example, if \(c_{1}\), \(c_{2}\), R are positive real numbers such that

$$\begin{aligned} \theta (t)\le c_{1}\frac{|\log t|}{t} \quad \text {and}\quad \omega (\sigma )\le c_{2}\,\sigma \exp \left( R|\log \sigma |^{1/2}\right) , \end{aligned}$$

then one can check that there exists a constant \(c_{3}\), depending on \(c_{1}\), \(c_{2}\), R, such that

$$\begin{aligned} m(\lambda )\le c_{3}\log \lambda , \end{aligned}$$

which is enough to guarantee a finite derivative loss. Of course different values of \(c_{1}\), \(c_{2}\), R yield different values of \(c_{3}\), and therefore a slower or faster derivative loss, but in any case a finite one.
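To see where the logarithm comes from, here is a sketch of the computation (assuming for simplicity \(T_{0}\le 1\) and \(\lambda \) large). Choosing \(s_{\lambda }:=\exp \left( -R(\log \lambda )^{1/2}\right) \) in (1.7), the first summand satisfies

$$\begin{aligned} \lambda \,\omega \left( \frac{1}{\lambda }\right) s_{\lambda }\le c_{2}\exp \left( R(\log \lambda )^{1/2}\right) \cdot \exp \left( -R(\log \lambda )^{1/2}\right) =c_{2}, \end{aligned}$$

while the second summand satisfies

$$\begin{aligned} \int _{s_{\lambda }}^{T_{0}}\theta (t)\,dt\le c_{1}\int _{s_{\lambda }}^{T_{0}}\frac{|\log t|}{t}\,dt\le \frac{c_{1}}{2}(\log s_{\lambda })^{2}=\frac{c_{1}R^{2}}{2}\log \lambda , \end{aligned}$$

so that \(m(\lambda )\le c_{2}+\frac{1}{2}c_{1}R^{2}\log \lambda \), and every constant \(c_{3}>\frac{1}{2}c_{1}R^{2}\) works for \(\lambda \) large enough.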

In the case of the third and fourth row, we refer to Lemma A.1 in the Appendix for the details.

Remark 2.9

Some of the choices of Table 6 are considered in the previous literature, but without obtaining the optimal result. For example, the modulus \(\omega (\sigma )\) of the first row is considered in [19, Section 4.5], where a finite derivative loss is obtained with the correct condition \(\theta (t)\sim |\log t|/t\), and in [7, Example 1.2(ii)], where an infinite derivative loss is obtained, but only with the stronger condition \(\theta (t)\gg |\log t|^{2}/t\). The modulus \(\omega (\sigma )\) of the second row is considered in [7, Example 1.2(i)], where an infinite derivative loss is obtained, but only with the stronger condition \(\theta (t)\gg 1/t^{1+\beta }\cdot \log ^{1+\beta }(|\log t|)\).

Similarly, the choices of \(\theta (t)\) of the second and the fourth row (the latter in the special case \(\beta =2\)) are considered in [10, Example 2.1 and Example 2.2], and in both cases a finite derivative loss is obtained with a stronger modulus of continuity, and there is no mention of infinite and arbitrarily small derivative loss.

2.3 Comments

We conclude by speculating on our main results.

Remark 2.10

(Limit cases)

The cases where one prescribes only the modulus of continuity \(\omega (\sigma )\), or only the blow-up rate \(\theta (t)\) of the derivative, can be included in Theorems 2.3 and  2.4 as special limit cases.

If we prescribe only the modulus of continuity \(\omega (\sigma )\), we can think that \(\theta (t)\equiv +\infty \), and therefore imagine that in (1.7) the minimum is attained when \(s=T_{0}\). We obtain that \(m(\lambda )=\lambda \,\omega (1/\lambda )T_{0}\), which explains the results of Table 1.

If we prescribe only \(\theta (t)\), and of course also the strict hyperbolicity condition, we can extend the notion of modulus of continuity in order to include the borderline case in which \(\omega (\sigma )\equiv \mu _{2}-\mu _{1}\), and then define

$$\begin{aligned} m(\lambda )=\min \left\{ (\mu _{2}-\mu _{1})\lambda \,s+\int _{s}^{T_{0}}\theta (t)\,dt:s\in [0,T_{0}]\right\} . \end{aligned}$$

The minimum is attained when \(\theta (s)=(\mu _{2}-\mu _{1})\lambda \), and with some standard calculus we obtain all the results of Table 2.
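As an illustration of this standard calculus, in the model case \(\theta (t)=1/t^{\beta }\) with \(\beta >1\) (the situation of Table 2) the condition \(\theta (s_{\lambda })=(\mu _{2}-\mu _{1})\lambda \) gives \(s_{\lambda }\sim \lambda ^{-1/\beta }\) up to multiplicative constants, and therefore

$$\begin{aligned} m(\lambda )\sim (\mu _{2}-\mu _{1})\lambda \cdot \lambda ^{-1/\beta }+\frac{1}{\beta -1}\,\lambda ^{(\beta -1)/\beta }\sim \lambda ^{(\beta -1)/\beta }, \end{aligned}$$

which is exactly the exponent that delivers well-posedness in Gevrey spaces of order \(\beta /(\beta -1)\) (compare with the second bullet point of Remark 2.13).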

Remark 2.11

(Quantitative estimate of the derivative loss) In the cases where \(m(\lambda )\sim \log \lambda \), and more precisely

$$\begin{aligned} 0<\liminf _{\lambda \rightarrow +\infty }\frac{m(\lambda )}{\log \lambda }\le \limsup _{\lambda \rightarrow +\infty }\frac{m(\lambda )}{\log \lambda }<+\infty , \end{aligned}$$

the liminf and limsup above provide, respectively, an estimate from below and from above for the finite derivative loss, namely for the constant \(\delta \) that appears in the definition.

Remark 2.12

(Progressive vs instantaneous derivative loss)

There is a subtle difference in the derivative loss between the case where only the modulus of continuity is prescribed, and the cases where we assume c(t) to be differentiable for positive times. For the sake of simplicity, let us limit ourselves to the finite derivative loss.

In the case where one prescribes only the modulus of continuity \(\omega (\sigma )\sim \sigma |\log \sigma |\), the finite derivative loss is in general progressive in the sense that, if \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\) for some \(\beta \), then \((u(t),u'(t))\in D(A^{\beta -\delta t+1/2})\times D(A^{\beta -\delta t})\) for positive times. In words, this means that the derivative loss increases with time, and tends to 0 as \(t\rightarrow 0^{+}\).

When c(t) is of class \(C^{1}\) for positive times, then any form of derivative loss is instantaneous, and in particular a finite derivative loss does not tend to 0 as \(t\rightarrow 0^{+}\) (but of course now \(\omega (\sigma )\gg \sigma |\log \sigma |\)). After the initial loss of regularity, in this case there is no further loss of derivatives, in the sense that the implication

$$\begin{aligned} (u(t_{0}),u'(t_{0}))\in D(A^{\gamma +1/2})\times D(A^{\gamma }) \quad \Longrightarrow \quad (u(t),u'(t))\in D(A^{\gamma +1/2})\times D(A^{\gamma }) \end{aligned}$$

holds true for every \(0<t_{0}\le t\le T_{0}\) and every \(\gamma \). In words, this means that the singular behavior of c(t) at the origin is responsible for the instantaneous loss of derivatives but, after the initial loss, the regularity of solutions is preserved by the smoothness of c(t).

Remark 2.13

(Well-posedness in Gevrey spaces)

The consequences of Theorem 2.3 go far beyond Theorem 2.4, and in particular beyond the classification of derivative loss according to Table 4. Indeed, the behavior of \(m(\lambda )\) as \(\lambda \rightarrow +\infty \) provides a sharp “measure” of the derivative loss, even when it is infinite or arbitrarily small. The formalization of this idea relies on the notion of generalized Gevrey spaces, or Gevrey distributions (we refer to [13, Definition 2.2] for more details on the abstract functional setting for abstract wave equations). Just to give some examples, let us stick to standard Gevrey spaces. The positive side is represented by well-posedness results, as follows.

  • If we assume that \(\omega (\sigma )=\sigma ^{\alpha }\) for some \(\alpha \in (0,1)\), and we have no information about the derivative, then we can assume (as explained in Remark 2.10) that \(m(\lambda )\sim \lambda \,\omega (1/\lambda )=\lambda ^{1-\alpha }\), which implies the classical result of [4] according to which the problem is well-posed in Gevrey spaces of order \(s\le (1-\alpha )^{-1}\).

  • If we assume that \(\theta (t)=1/t^{\beta }\) for some \(\beta >1\), and we have no information about the modulus of continuity, then we obtain (as explained in Remark 2.10) that \(m(\lambda )\sim \lambda ^{(\beta -1)/\beta }\), which implies the classical result according to which the problem is well-posed in Gevrey spaces of order \(s\le \beta /(\beta -1)\) (see [6, Theorem 2]).

  • If we assume both conditions, namely that \(\omega (\sigma )=\sigma ^{\alpha }\) and \(\theta (t)=1/t^{\beta }\), then with some standard calculus (sketched below) we obtain that \(m(\lambda )\sim \lambda ^{(1-\alpha )(\beta -1)/\beta }\), and therefore the problem is well-posed in Gevrey spaces of order \(s<\beta (\beta -1)^{-1}(1-\alpha )^{-1}\) (see [6, Theorem 3]). This is one more example of “collaboration” between the modulus of continuity and the control on the derivative in order to provide well-posedness results for less regular data.
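Here is the computation behind the last bullet point (a sketch, dropping multiplicative constants): the function \(s\mapsto \lambda ^{1-\alpha }s+s^{1-\beta }/(\beta -1)\) is minimized when \(\lambda ^{1-\alpha }=s^{-\beta }\), namely for \(s_{\lambda }=\lambda ^{-(1-\alpha )/\beta }\), and for this choice the two summands have the same size, so that

$$\begin{aligned} m(\lambda )\sim \lambda ^{1-\alpha }s_{\lambda }+\frac{s_{\lambda }^{1-\beta }}{\beta -1}\sim \lambda ^{(1-\alpha )(\beta -1)/\beta }. \end{aligned}$$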

The negative side is a derivative loss from Gevrey spaces to Gevrey distributions. For example, in the case where \(\omega (\sigma )=\sigma ^{\alpha }\) with \(\alpha \in (0,1)\), and there is no information on the derivative, there exist solutions whose initial data are in the Gevrey space of order s for every \(s>(1-\alpha )^{-1}\), and such that for positive times they do not belong to the space of Gevrey distributions of order s for every \(s>(1-\alpha )^{-1}\).

3 Energy estimates from above

In this section, we prove statement (1) of Theorem 2.3, which implies in a standard way also statement (1) of Theorem 2.4.

To begin with, let us extend c(t) to the whole half-line \(t\ge 0\) by setting

$$\begin{aligned} \widehat{c}(t):={\left\{ \begin{array}{ll} c(t) &{} \text {if }t\le T_{0}, \\ c(T_{0}) &{} \text {if }t\ge T_{0}. \end{array}\right. } \end{aligned}$$

For every \(\varepsilon >0\) let us set

$$\begin{aligned} c_{\varepsilon }(t):=\frac{1}{\varepsilon }\int _{t}^{t+\varepsilon }\widehat{c}(s)\,ds \quad \forall t\ge 0. \end{aligned}$$

Then it turns out that \(c_{\varepsilon }\in C^{1}([0,T_{0}])\) and satisfies the following estimates

$$\begin{aligned} \mu _{1}\le c_{\varepsilon }(t)\le \mu _{2}, \quad |c_{\varepsilon }(t)-c(t)|\le \omega (\varepsilon ), \quad |c_{\varepsilon }'(t)|\le \frac{\omega (\varepsilon )}{\varepsilon } \end{aligned}$$
(3.1)

for every \(t\in [0,T_{0}]\). Following [4], we consider the usual Kovaleskyan energy

$$\begin{aligned} E_{\lambda }(t):=u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2}, \end{aligned}$$
(3.2)

the usual hyperbolic energy

$$\begin{aligned} F_{\lambda }(t):=u_{\lambda }'(t)^{2}+\lambda ^{2}c(t)u_{\lambda }(t)^{2}, \end{aligned}$$
(3.3)

and the approximated hyperbolic energy

$$\begin{aligned} F_{\varepsilon ,\lambda }(t):=u_{\lambda }'(t)^{2}+\lambda ^{2}c_{\varepsilon }(t)u_{\lambda }(t)^{2}. \end{aligned}$$
(3.4)

These energies are equivalent in the sense that

$$\begin{aligned} \min \{1,\mu _{1}\}E_{\lambda }(t)\le F_{\lambda }(t)\le \max \{1,\mu _{2}\}E_{\lambda }(t), \end{aligned}$$
(3.5)

and

$$\begin{aligned} \min \{1,\mu _{1}\}E_{\lambda }(t)\le F_{\varepsilon ,\lambda }(t)\le \max \{1,\mu _{2}\}E_{\lambda }(t) \end{aligned}$$
(3.6)

for every admissible value of the parameters. What we need in (2.2) is an estimate of the Kovaleskyan energy (3.2). To this end, for every \(s\in (0,T_{0})\) we estimate the approximated hyperbolic energy in [0, s], and the standard hyperbolic energy in \([s,T_{0}]\).

The time-derivative of (3.4) is

$$\begin{aligned} F_{\varepsilon ,\lambda }'(t)=c_{\varepsilon }'(t)\lambda ^{2}u_{\lambda }(t)^{2}+\lambda ^{2}(c_{\varepsilon }(t)-c(t))\cdot 2u_{\lambda }(t)u_{\lambda }'(t), \end{aligned}$$

from which we deduce that

$$\begin{aligned} F_{\varepsilon ,\lambda }'(t)\le \frac{|c_{\varepsilon }'(t)|}{c_{\varepsilon }(t)}F_{\varepsilon ,\lambda }(t)+ \lambda \frac{|c_{\varepsilon }(t)-c(t)|}{c_{\varepsilon }(t)^{1/2}}F_{\varepsilon ,\lambda }(t). \end{aligned}$$

Integrating this differential inequality, and taking (3.1) into account, we deduce that

$$\begin{aligned} F_{\varepsilon ,\lambda }(t)\le F_{\varepsilon ,\lambda }(0) \exp \left\{ \left( \frac{\omega (\varepsilon )}{\mu _{1}\varepsilon }+\lambda \,\frac{\omega (\varepsilon )}{\sqrt{\mu _{1}}}\right) t\right\} \quad \forall t\in [0,T_{0}]. \end{aligned}$$

Setting \(\varepsilon :=1/\lambda \), and recalling (3.6), this implies that

$$\begin{aligned} E_{\lambda }(t)\le \sqrt{M_{1}}\,E_{\lambda }(0) \exp \left\{ M_{2}\,\lambda \,\omega \left( \frac{1}{\lambda }\right) s\right\} \quad \forall t\in [0,s], \end{aligned}$$
(3.7)

where \(M_{1}\) and \(M_{2}\) are defined by (2.1). The time-derivative of (3.3) is

$$\begin{aligned} F_{\lambda }'(t)=\lambda ^{2}c'(t)|u_{\lambda }(t)|^{2}\le \frac{|c'(t)|}{c(t)}F_{\lambda }(t)\le \frac{\theta (t)}{\mu _{1}}F_{\lambda }(t) \quad \forall t\in (0,T_{0}]. \end{aligned}$$

Integrating this differential inequality we deduce that

$$\begin{aligned} F_{\lambda }(t)\le F_{\lambda }(s)\exp \left( \frac{1}{\mu _{1}}\int _{s}^{t}\theta (\tau )\,d\tau \right) \le F_{\lambda }(s)\exp \left( M_{2}\int _{s}^{T_{0}}\theta (\tau )\,d\tau \right) \end{aligned}$$

for every \(t\in [s,T_{0}]\). Recalling the equivalence (3.5), and estimate (3.7) with \(t=s\), we conclude that

$$\begin{aligned} E_{\lambda }(t)\le & {} \sqrt{M_{1}}\,E_{\lambda }(s)\exp \left( M_{2}\int _{s}^{T_{0}}\theta (\tau )\,d\tau \right) \\\le & {} M_{1}\, E_{\lambda }(0) \exp \left\{ M_{2}\left( \lambda \,\omega \left( \frac{1}{\lambda }\right) s+\int _{s}^{T_{0}}\theta (\tau )\,d\tau \right) \right\} \end{aligned}$$

for every \(t\in [s,T_{0}]\). On the other hand, the same estimate holds true also for \(t\in [0,s]\) because of (3.7). Optimizing with respect to s we obtain exactly (2.2). \(\Box \)
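To illustrate these energy estimates numerically, the following sketch (sample values of ours, not part of the proof) integrates (1.8) for the coefficient (1.13) with a constant \(\varepsilon \), so that the \(\varepsilon '\) term disappears; for these values the coefficient is strictly hyperbolic and Lipschitz continuous, and the solution is exactly the building block (1.12), whose Kovaleskyan energy grows roughly like \(\exp (\varepsilon t/8)\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sample values (our own choice)
lam, eps, T0 = 200.0, 40.0, 1.0

def c(t):
    # coefficient (1.13) with constant eps (the eps' term vanishes)
    return (1.0 - eps / (4 * lam) * np.sin(2 * lam * t)
                - eps**2 / (64 * lam**2) * np.sin(lam * t)**4)

sol = solve_ivp(lambda t, y: [y[1], -lam**2 * c(t) * y[0]],
                (0.0, T0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
E = sol.y[1]**2 + lam**2 * sol.y[0]**2   # Kovaleskyan energy (3.2)
print("measured E(T0)/E(0) =", E[-1] / E[0])
print("expected exp(eps*T0/8) =", np.exp(eps * T0 / 8))
```

By statement (1) of Theorem 2.3, this growth is of course capped by \(M_{1}\exp (M_{2}\,m(\lambda ))\) for any admissible class \(\mathcal{PS}\) containing this coefficient.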

4 Counterexamples

In this section, we prove statement (2) of Theorem 2.3, and we show how that statement leads to the counterexamples required for the optimality part in Theorem 2.4.

4.1 Asymptotic and universal activators

Let us begin by summarizing the theory developed in [14, Section 4.1]. In the sequel we consider solutions to the family of ordinary differential equations

$$\begin{aligned} u_{\lambda }''(t)+\lambda ^{2}c_{\lambda }(t)u_{\lambda }(t)=0, \end{aligned}$$
(4.1)

with initial data

$$\begin{aligned} u_{\lambda }(0)=0, \quad u_{\lambda }'(0)=1. \end{aligned}$$
(4.2)

We point out that in (4.1) the propagation speed depends on the parameter \(\lambda \). When the propagation speed is fixed, we consider the equation

$$\begin{aligned} v_{\lambda }''(t)+\lambda ^{2}c(t)v_{\lambda }(t)=0, \end{aligned}$$
(4.3)

with initial data

$$\begin{aligned} v_{\lambda }(0)=0, \quad v_{\lambda }'(0)=1. \end{aligned}$$
(4.4)

Let us recall our notion of activators (compare with [13, 14]).

Definition 4.1

(Universal activators of a sequence)

Let \(T_{0}\) be a positive real number, let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function, and let \(\{\lambda _{n}\}\) be a sequence of positive real numbers such that \(\lambda _{n}\rightarrow +\infty \).

A universal activator of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \) is a coefficient \(c\in L^{1}((0,T_{0}))\) such that the corresponding sequence \(\{v_{\lambda _{n}}(t)\}\) of solutions to (4.3)–(4.4) satisfies

$$\begin{aligned} \limsup _{n\rightarrow +\infty }\left( |v_{\lambda _{n}}'(t)|^{2}+\lambda _{n}^{2}|v_{\lambda _{n}}(t)|^{2}\right) \exp \left( -\phi (\lambda _{n})\right) \ge 1 \quad \forall t\in (0,T_{0}]. \end{aligned}$$
(4.5)

Definition 4.2

(Asymptotic activators)

Let \(T_{0}\) be a positive real number, and let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function.

A family of asymptotic activators with rate \(\phi \) is a family of coefficients \(\{c_{\lambda }(t)\}\subseteq L^{1}((0,T_{0}))\) with the property that, for every \(\delta \in (0,T_{0})\), there exist two positive constants \(M_{\delta }\) and \(\lambda _{\delta }\) such that the corresponding family \(\{u_{\lambda }(t)\}\) of solutions to (4.1)–(4.2) satisfies

$$\begin{aligned} |u_{\lambda }'(t)|^{2}+\lambda ^{2}|u_{\lambda }(t)|^{2}\ge M_{\delta }\exp \left( 2\phi (\lambda )\right) \quad \forall t\in [\delta ,T_{0}],\quad \forall \lambda \ge \lambda _{\delta }. \end{aligned}$$
(4.6)

The coefficient 2 in the exponential of (4.6) could be replaced by any number greater than 1. The following result shows that families of asymptotic activators are the basic tool in the construction of universal activators. It is the “nonlinear uniform boundedness principle” that we mentioned in the introduction, namely “the existence of many (\(\lambda \) dependent) bad coefficients for every \(\lambda \) implies the existence of a single (\(\lambda \) independent) coefficient that is bad for many \(\lambda \)’s”. This is the point where the Baire category theorem discloses its power. For a proof, we refer to [14, Proposition 4.5].

Proposition 4.3

(From asymptotic to universal activators)

Let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function such that \(\phi (\lambda )\rightarrow +\infty \) as \(\lambda \rightarrow +\infty \). Let \(T_{0}\) be a positive real number, and let \(\mathcal{PS}\subseteq C^{0}([0,T_{0}])\) be a closed subset (with respect to uniform convergence).

Let us assume that there exists a dense subset \({\mathcal {D}}\subseteq \mathcal{PS}\) such that for every \(c\in {\mathcal {D}}\) there exists a family of asymptotic activators \(\{c_{\lambda }\}\subseteq \mathcal{PS}\) with rate \(\phi \) such that \(c_{\lambda }\rightarrow c\).

Then, for every unbounded sequence \(\{\lambda _{n}\}\) of positive real numbers, the set of elements in \(\mathcal{PS}\) that are universal activators of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \) is residual in \(\mathcal{PS}\) (and in particular nonempty).

Finally, the following statement clarifies the crucial connection between universal activators and derivative loss. In order to show the strategy, we start by proving the result in the special case where A admits an unbounded sequence of eigenvalues (see [14, Proposition 4.3]).

Proposition 4.4

(Universal activators vs derivative loss—Model case)

Let H be a Hilbert space, and let A be a nonnegative self-adjoint operator on H. Let us assume that there exists a sequence \(\{e_{n}\}\) of orthonormal vectors in H, and an unbounded sequence of positive real numbers \(\{\lambda _{n}\}\) such that \(Ae_{n}=\lambda _{n}^{2}e_{n}\) for every positive integer n.

Let \(T_{0}\) be a positive real number, let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function, and let \(c\in L^{1}((0,T_{0}))\) be a universal activator of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \). Let us assume also that

$$\begin{aligned} \phi (\lambda _{n})\ge n \quad \forall n\ge 1. \end{aligned}$$
(4.7)

Then the asymptotic behavior of \(\{\phi (\lambda _{n})\}\) determines the derivative loss of solutions to (1.2)–(1.4) according to the following scheme

$$\begin{aligned} \lim _{n\rightarrow +\infty }\phi (\lambda _{n})=+\infty&\quad \leadsto \quad \text {(at least) arbitrarily small derivative loss}, \end{aligned}$$
(4.8)
$$\begin{aligned} \liminf _{n\rightarrow +\infty }\frac{\phi (\lambda _{n})}{\log \lambda _{n}}>0&\quad \leadsto \quad \text {(at least) finite derivative loss}, \end{aligned}$$
(4.9)
$$\begin{aligned} \lim _{n\rightarrow +\infty }\frac{\phi (\lambda _{n})}{\log \lambda _{n}}=+\infty&\quad \leadsto \quad \text {infinite derivative loss}. \end{aligned}$$
(4.10)

Proof

To begin with, we observe that (4.7) implies that

$$\begin{aligned} \sum _{n=1}^{\infty }\exp \left( -\eta \phi (\lambda _{n})\right) <+\infty \quad \forall \eta >0. \end{aligned}$$
(4.11)

For every positive integer n, let us set

$$\begin{aligned} a_{n}:=\exp \left( -\frac{\phi (\lambda _{n})}{4}\right) , \end{aligned}$$
(4.12)

and let us consider problem (1.2)–(1.4) with initial data

$$\begin{aligned} u_{0}:=0, \quad u_{1}:=\sum _{n=1}^{\infty }a_{n}e_{n}. \end{aligned}$$

It is well-known that the unique solution is given by (a priori this series converges just in the sense of ultradistributions)

$$\begin{aligned} u(t):=\sum _{n=1}^{\infty }a_{n}v_{\lambda _{n}}(t)e_{n} \quad \forall t\in [0,T_{0}], \end{aligned}$$

where \(\{v_{\lambda }(t)\}\) is the family of solutions to (4.3)–(4.4). In particular, for every choice of the real numbers \(\beta \) and \(\gamma \) it turns out that

$$\begin{aligned} \sum _{n=1}^{\infty }\lambda _{n}^{4\beta }a_{n}^{2}= \sum _{n=1}^{\infty }\exp \left( 4\beta \log \lambda _{n}-\frac{\phi (\lambda _{n})}{2}\right) , \end{aligned}$$
(4.13)

and

$$\begin{aligned}&\sum _{n=1}^{\infty }\lambda _{n}^{-4\gamma }a_{n}^{2} \left( |v_{\lambda _{n}}'(t)|^{2}+\lambda _{n}^{2}|v_{\lambda _{n}}(t)|^{2}\right) \nonumber \\&\quad =\sum _{n=1}^{\infty }\left( |v_{\lambda _{n}}'(t)|^{2}+\lambda _{n}^{2}|v_{\lambda _{n}}(t)|^{2}\right) \exp (-\phi (\lambda _{n}))\cdot \exp \left( \frac{\phi (\lambda _{n})}{2}-4\gamma \log \lambda _{n}\right) .\nonumber \\ \end{aligned}$$
(4.14)

Now we discuss the regularity of initial data by exploiting (4.13) and (4.11), and the regularity of the corresponding solution u(t) by exploiting (4.14) and condition (4.5) in the definition of universal activator. We distinguish three scenarios.

  • Under the assumption in (4.8) we observe that (4.13) converges when \(\beta =0\), while (4.14) does not converge when \(\gamma =0\). It follows that \((u_{0},u_{1})\in D(A^{1/2})\times H\), while \((u(t),u'(t))\not \in D(A^{1/2})\times H\) for every \(t\in (0,T_{0}]\), which shows that in this case the solution u(t) exhibits at least an arbitrarily small derivative loss.

  • Under the assumption in (4.9), let \(\delta \in (0,+\infty )\) denote the liminf of \(\phi (\lambda _{n})/\log \lambda _{n}\). In this case we observe that (4.13) converges for every \(\beta <\delta /8\), while (4.14) diverges for every \(\gamma <\delta /8\). As a consequence, the derivative loss of the solution u(t) is at least \(\delta /4\).

  • Under the assumption in (4.10) we observe that (4.13) converges for every \(\beta \in \mathbb {R}\), and in particular \(u_{1}\in D(A^{\infty })\), while (4.14) diverges for every \(\gamma \in \mathbb {R}\), which implies that the solution u(t) has an infinite derivative loss.

\(\square \)

In the following result we extend the construction of counterexamples to general unbounded self-adjoint operators.

Proposition 4.5

(Universal activators vs derivative loss—general case)

Let H be a separable Hilbert space, and let A be a nonnegative self-adjoint operator on H. Let \(T_{0}\) be a positive real number, and let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function.

Let us assume that the operator A is unbounded, and that \(\phi (\lambda )\rightarrow +\infty \) as \(\lambda \rightarrow +\infty \).

Then there exists a sequence of positive real numbers \(\lambda _{n}\rightarrow +\infty \) with the following property. If \(c\in L^{1}((0,T_{0}))\) is a universal activator of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \), then the asymptotic behavior of \(\{\phi (\lambda _{n})\}\) determines the derivative loss of solutions to (1.2)–(1.4) according to the scheme (4.8) through (4.10).

Proof

We imitate the proof of Proposition 4.4 by exploiting the general form of the spectral theorem and a reinforced version of universal activators.

Definition of the sequence \(\{\lambda _{n}\}\) According to the spectral theorem for self-adjoint operators (see for example [20, Theorem VIII.4]) there exist a finite measure space \((M,\mu )\), an isometric bijective map \(\mathscr {F}:H\rightarrow L^{2}(M,\mu )\), and a measurable function \(\lambda :M\rightarrow [0,+\infty )\) such that the operator A on H acts as the multiplication operator by \(\lambda ^{2}\) in \(L^{2}(M,\mu )\).

More precisely, to every vector \(w\in H\) one associates the “generalized Fourier transform” \(\widehat{w}(\xi ):=[\mathscr {F}(w)](\xi )\in L^{2}(M,\mu )\) in such a way that

  • \(w\in D(A^{\beta })\) if and only if \((1+\lambda (\xi )^{2})^{\beta }\,\widehat{w}(\xi )\in L^{2}(M,\mu )\),

  • if \(w\in D(A)\) then \([\mathscr {F}(Aw)](\xi )=\lambda (\xi )^{2}\,\widehat{w}(\xi )\) for \(\mu \)-almost every \(\xi \in M\).

Since A is unbounded, the function \(\lambda (\xi )\) is essentially unbounded, and therefore there exists a sequence of positive real numbers \(\lambda _{n}\rightarrow +\infty \) such that

$$\begin{aligned} \mu (\{\xi \in M:\lambda (\xi )\in [\lambda _{n}-r,\lambda _{n}+r]\})>0 \quad \forall n\ge 1 \quad \forall r>0. \end{aligned}$$

Up to passing to a subsequence, we can also assume that the sequence \(\{\lambda _{n}\}\) is strictly increasing and (4.7) holds true.

A “stronger” property of universal activators Let \(\{v_{\lambda }(t)\}\) denote the family of solutions to (4.3)–(4.4). We claim that there exists a sequence \(\{r_{n}\}\) of positive real numbers such that the intervals \([\lambda _{n}-r_{n},\lambda _{n}+r_{n}]\) are pairwise disjoint and, if we set

$$\begin{aligned} I_{n}(t):=\inf \left\{ v_{\lambda }'(t)^{2}+\lambda ^{2}v_{\lambda }(t)^{2}: \lambda \in [\lambda _{n}-r_{n},\lambda _{n}+r_{n}]\right\} , \end{aligned}$$
(4.15)

then it turns out that

$$\begin{aligned} \limsup _{n\rightarrow +\infty }I_{n}(t)\exp \left( -\phi (\lambda _{n})\right) \ge 1 \quad \forall t\in (0,T_{0}]. \end{aligned}$$
(4.16)

To this end, it is enough to observe that the map

$$\begin{aligned} (0,+\infty )\ni \lambda \mapsto v_{\lambda }(t)\in C^{1}([0,T_{0}]) \end{aligned}$$

is continuous, and then choose \(r_{n}\) in such a way that

$$\begin{aligned} v_{\lambda }'(t)^{2}+\lambda ^{2}v_{\lambda }(t)^{2}\ge v_{\lambda _{n}}'(t)^{2}+\lambda _{n}^{2}v_{\lambda _{n}}(t)^{2}-1 \end{aligned}$$

for every \(t\in [0,T_{0}]\) and every \(\lambda \in [\lambda _{n}-r_{n},\lambda _{n}+r_{n}]\), and in particular

$$\begin{aligned} I_{n}(t)\ge v_{\lambda _{n}}'(t)^{2}+\lambda _{n}^{2}v_{\lambda _{n}}(t)^{2}-1 \quad \forall n\ge 1, \end{aligned}$$

so that now (4.16) follows from (4.5) because \(\phi (\lambda _{n})\rightarrow +\infty \). Up to reducing \(r_{n}\) if necessary, we can also assume that

$$\begin{aligned} \frac{\lambda _{n}}{2}\le \lambda _{n}-r_{n}<\lambda _{n}+r_{n}\le 2\lambda _{n} \quad \forall n\ge 1. \end{aligned}$$
(4.17)

Construction of counterexamples Let \(\{\lambda _{n}\}\) be any sequence as in the first paragraph, and let \(c\in L^{1}((0,T_{0}))\) be any universal activator of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \). We need to show that problem (1.2)–(1.4) exhibits the prescribed derivative loss. To this end, for every positive integer n we define \(r_{n}\) as in the previous paragraph, we consider the set

$$\begin{aligned} M_{n}:=\left\{ \xi \in M:\lambda (\xi )\in [\lambda _{n}-r_{n},\lambda _{n}+r_{n}]\right\} , \end{aligned}$$

which has positive measure, and we call \(\widehat{w}_{n}(\xi )\) the characteristic function of \(M_{n}\) (namely \(\widehat{w}_{n}(\xi )=1\) if \(\xi \in M_{n}\), and \(\widehat{w}_{n}(\xi )=0\) otherwise). Then we define \(a_{n}\) as in (4.12), we set

$$\begin{aligned} {\widehat{u}}_{1}(\xi ):=\sum _{n=1}^{\infty }\frac{a_{n}}{\mu (M_{n})^{1/2}}\,\widehat{w}_{n}(\xi ), \end{aligned}$$

and we consider problem (1.2)–(1.4) with initial data \(u_{0}=0\) and \(u_{1}:=\mathscr {F}^{-1}(\widehat{u}_{1}(\xi ))\). It is well-known that the “generalized Fourier transform” of the solution is

$$\begin{aligned} \widehat{u}(t,\xi ):=[\mathscr {F}(u(t))](\xi )=\widehat{u}_{1}(\xi )\cdot v_{\lambda (\xi )}(t), \end{aligned}$$

where \(\{v_{\lambda }(t)\}\) is again the family of solutions to (4.3)–(4.4).

Let us examine the regularity of \(u_{1}\) and of the pair \((u(t),u'(t))\). As for the regularity of \(u_{1}\), for every real number \(\beta \ge 0\) it turns out that

$$\begin{aligned} u_{1}\in D(A^{\beta }) \quad \Longleftrightarrow \quad \int _{M}\lambda (\xi )^{4\beta }\,\widehat{u}_{1}(\xi )^{2}\,d\mu (\xi )<+\infty . \end{aligned}$$

On the other hand, since the sets \(M_{n}\) are pairwise disjoint, we obtain that

$$\begin{aligned}&\int _{M}\lambda (\xi )^{4\beta }\,\widehat{u}_{1}(\xi )^{2}\,d\mu (\xi )= \sum _{n=1}^{\infty }\int _{M_{n}}\lambda (\xi )^{4\beta }\,\widehat{u}_{1}(\xi )^{2}\,d\mu (\xi ) \\&\quad =\sum _{n=1}^{\infty }\frac{a_{n}^{2}}{\mu (M_{n})}\int _{M_{n}}\lambda (\xi )^{4\beta }\,d\mu (\xi ) \le \sum _{n=1}^{\infty }a_{n}^{2}(\lambda _{n}+r_{n})^{4\beta } \le 2^{4\beta }\sum _{n=1}^{\infty }a_{n}^{2}\lambda _{n}^{4\beta }, \end{aligned}$$

where in the last step we exploited the estimate from above in (4.17), and analogously

$$\begin{aligned} \int _{M}\lambda (\xi )^{4\beta }\,\widehat{u}_{1}(\xi )^{2}\,d\mu (\xi )\ge \frac{1}{2^{4\beta }}\sum _{n=1}^{\infty }a_{n}^{2}\lambda _{n}^{4\beta }, \end{aligned}$$

so that in conclusion

$$\begin{aligned} u_{1}\in D(A^{\beta }) \quad \Longleftrightarrow \quad \sum _{n=1}^{\infty }a_{n}^{2}\lambda _{n}^{4\beta }<+\infty . \end{aligned}$$

As for the regularity of u(t), we observe that for every real number \(\gamma \ge 0\) it turns out that \((u(t),u'(t))\in D(A^{-\gamma +1/2})\times D(A^{-\gamma })\) if and only if

$$\begin{aligned} \int _{M}\lambda (\xi )^{-4\gamma }\,\widehat{u}_{1}(\xi )^{2} \left( v_{\lambda (\xi )}'(t)^{2}+\lambda (\xi )^{2}\,v_{\lambda (\xi )}(t)^{2}\right) \,d\mu (\xi )<+\infty . \end{aligned}$$

Recalling (4.15), the last integral can be rewritten and estimated as follows:

$$\begin{aligned}&\sum _{n=1}^{\infty }\frac{a_{n}^{2}}{\mu (M_{n})}\int _{M_{n}} \lambda (\xi )^{-4\gamma }\left( v_{\lambda (\xi )}'(t)^{2}+\lambda (\xi )^{2}\,v_{\lambda (\xi )}(t)^{2}\right) \,d\mu (\xi ) \\&\quad \ge \sum _{n=1}^{\infty }(\lambda _{n}+r_{n})^{-4\gamma }a_{n}^{2}I_{n}(t) \\&\quad \ge \frac{1}{2^{4\gamma }}\sum _{n=1}^{\infty }\lambda _{n}^{-4\gamma }a_{n}^{2}I_{n}(t) \\&\quad = \frac{1}{2^{4\gamma }}\sum _{n=1}^{\infty }I_{n}(t) \exp (-\phi (\lambda _{n}))\cdot \exp \left( \frac{\phi (\lambda _{n})}{2}-4\gamma \log \lambda _{n}\right) , \end{aligned}$$

where in the last inequality we exploited the estimate from above in (4.17).

Thanks to (4.16), at this point all conclusions follow as in Proposition 4.4. \(\square \)
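The mechanism of this divergence can be made concrete with a small numerical experiment. The following Python sketch assumes, purely for illustration, the rate \(\phi (\lambda )=(\log \lambda )^{2}\), the frequencies \(\lambda _{n}=e^{n}\), and coefficients with \(a_{n}^{2}=\exp (-\phi (\lambda _{n})/2)\), a choice consistent with the exponent appearing in the last display (the actual definition of \(a_{n}\) is the one in (4.12), not reproduced here); it is not part of the proof.

```python
import math

# Hypothetical data (not prescribed by the paper): rate phi(lambda) = (log lambda)^2,
# frequencies lambda_n = e^n, and a_n with a_n^2 = exp(-phi(lambda_n)/2).
beta, gamma = 10.0, 10.0

print("  n   log(data term)   log(loss term)")
for n in (10, 50, 100, 200):
    log_lam = float(n)              # log(lambda_n) = n
    phi_n = log_lam ** 2            # phi(lambda_n) = n^2
    # n-th term (in log) of the series sum a_n^2 lambda_n^{4 beta},
    # which decides whether u_1 belongs to D(A^beta):
    log_data = -phi_n / 2 + 4 * beta * log_lam
    # n-th term (in log) of the lower bound for the energy series, where
    # I_n(t) is replaced by exp(phi(lambda_n)) according to (4.16):
    log_loss = phi_n / 2 - 4 * gamma * log_lam
    print(f"{n:4d} {log_data:15.1f} {log_loss:15.1f}")

# The data terms decay like exp(-n^2/2), so u_1 lies in D(A^beta) for every
# beta, while the loss terms grow like exp(+n^2/2), so the energy series
# diverges for every gamma: an infinite derivative loss.
```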

4.2 Building block and dense subset

From the general theory we know that it is enough to show that every coefficient in a suitable dense subset can be approximated by families of asymptotic activators. In this subsection we identify this dense subset, and then we describe the starting point of the construction of the approximating families of asymptotic activators.

Definition 4.6

Let \(T_{0}\), \(\mu _{1}\), \(\mu _{2}\), \(\omega \), \(\theta \) be as in Definition 2.2. We call \({\mathcal {D}}\) the set of all functions \(c_{*}\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for which there exist real numbers \(T_{1}\), \(\gamma \), and \(\eta \) (that might depend on \(c_{*}\)) with

$$\begin{aligned} T_{1}\in (0,T_{0}), \quad \gamma ^{2}\in (\mu _{1},\mu _{2}), \quad \eta \in (0,1) \end{aligned}$$

such that

$$\begin{aligned} c_{*}(t)=\gamma ^{2} \quad \forall t\in [0,T_{1}] \end{aligned}$$

and

$$\begin{aligned} |c_{*}(t)-c_{*}(s)|\le (1-\eta )\omega (|t-s|) \quad \forall (t,s)\in [0,T_{0}]^{2}. \end{aligned}$$
(4.18)

When we want to emphasize the parameters we write \(c_{*}\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\).

In words, the elements of \({\mathcal {D}}\) are constant in a right neighborhood of the origin, and in this neighborhood they saturate neither the strict hyperbolicity condition nor the inequality in the definition of \(\omega \)-continuity. As one can easily guess, the result is that these special coefficients are dense in the classes introduced in Definition 2.2.

Proposition 4.7

(Density)

The set \({\mathcal {D}}\) is dense in \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for every admissible choice of the parameters.

Proof

Let c(t) be any element of \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\). For every \(\varepsilon \in (0,1)\), with \(\varepsilon <T_{0}\), let us set

$$\begin{aligned} c_{\varepsilon }(t):={\left\{ \begin{array}{ll} (1-\varepsilon )c(\varepsilon )+\varepsilon \,\dfrac{\mu _{2}+\mu _{1}}{2} &{} \text {if }t\in [0,\varepsilon ], \\ (1-\varepsilon )c(t)+\varepsilon \,\dfrac{\mu _{2}+\mu _{1}}{2} &{} \text {if }t\in [\varepsilon ,T_{0}]. \end{array}\right. } \end{aligned}$$

Then it turns out that \(c_{\varepsilon }\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\) with

$$\begin{aligned} T_{1}:=\varepsilon , \quad \gamma :=\sqrt{c_{\varepsilon }(0)}=\sqrt{c_{\varepsilon }(\varepsilon )}, \quad \eta :=\varepsilon , \end{aligned}$$

and that \(c_{\varepsilon }(t)\rightarrow c(t)\) uniformly in \([0,T_{0}]\). \(\square \)
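This regularization is elementary, and it can be checked numerically on a concrete example. The following Python sketch uses the hypothetical data \(\mu _{1}=1\), \(\mu _{2}=4\), \(T_{0}=1\) and \(c(t)=2+\sin (5t)\) (none of these choices is prescribed by the paper).

```python
import numpy as np

# Sanity check of the density construction on one admissible example.
mu1, mu2, T0 = 1.0, 4.0, 1.0
c = lambda t: 2.0 + np.sin(5.0 * t)

def c_eps(t, eps):
    # Constant on [0, eps], then an affine average pulling c towards
    # (mu1 + mu2)/2, exactly as in the displayed definition.
    return (1.0 - eps) * c(np.maximum(t, eps)) + eps * (mu1 + mu2) / 2.0

t = np.linspace(0.0, T0, 20001)
for eps in (0.1, 0.01, 0.001):
    print(eps, np.max(np.abs(c_eps(t, eps) - c(t))))
# The sup norm of c_eps - c tends to 0: the regularized coefficients
# converge uniformly, while being constant near t = 0 and staying
# strictly between mu1 and mu2.
```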

The following lemma is essentially taken from [4]. We state and prove it here because we need the exact values of the constants.

Lemma 4.8

(Basic block)

Let \(\varepsilon \), \(\gamma \), \(\lambda \) be positive real numbers such that

$$\begin{aligned} \varepsilon \le 8\gamma ^{3}\lambda . \end{aligned}$$
(4.19)

Let \((a,b)\subseteq \mathbb {R}\) be an interval whose endpoints satisfy

$$\begin{aligned} \frac{a\gamma \lambda }{2\pi }\in \mathbb {N}\quad \text {and}\quad \frac{b\gamma \lambda }{2\pi }\in \mathbb {N}. \end{aligned}$$
(4.20)

For every \(t\in [a,b]\) let us set

$$\begin{aligned} \varphi _{\varepsilon ,\gamma ,\lambda }(t) :=\frac{\varepsilon }{4\gamma \lambda }\sin (2\gamma \lambda t)+ \frac{\varepsilon ^{2}}{64\gamma ^{4}\lambda ^{2}}\sin ^{4}(\gamma \lambda t) \end{aligned}$$
(4.21)

and

$$\begin{aligned} w_{\varepsilon ,\gamma ,\lambda }(t):=\frac{1}{\gamma \lambda }\sin (\gamma \lambda t) \exp \left( \frac{\varepsilon }{16\gamma ^{2}}(t-a)-\frac{\varepsilon }{32\gamma ^{3}\lambda }\sin (2\gamma \lambda t)\right) . \end{aligned}$$
(4.22)

Then the following statements hold true.

  1. (1)

    For every \(t\in [a,b]\) it turns out that

    $$\begin{aligned} |\varphi _{\varepsilon ,\gamma ,\lambda }(t)|\le \frac{\varepsilon }{2\gamma \lambda } \quad \text {and}\quad \left| \varphi _{\varepsilon ,\gamma ,\lambda }'(t)\right| \le \varepsilon . \end{aligned}$$
    (4.23)
  2. (2)

    For every modulus of continuity \(\omega \) it turns out that

    $$\begin{aligned} \left| \varphi _{\varepsilon ,\gamma ,\lambda }(t)-\varphi _{\varepsilon ,\gamma ,\lambda }(s)\right| \le \varepsilon \cdot \max \left\{ 1,\frac{\pi }{\gamma }\right\} \cdot \left[ \lambda \,\omega \left( \frac{1}{\lambda }\right) \right] ^{-1}\cdot \omega (|t-s|) \nonumber \\ \end{aligned}$$
    (4.24)

    for every s and t in \([a,b]\).

  3. (3)

    The function \(w_{\varepsilon ,\gamma ,\lambda }(t)\) satisfies the differential equation

    $$\begin{aligned} w_{\varepsilon ,\gamma ,\lambda }''(t)+ \lambda ^{2}\left( \gamma ^{2}-\varphi _{\varepsilon ,\gamma ,\lambda }(t)\right) \cdot w_{\varepsilon ,\gamma ,\lambda }(t)=0 \end{aligned}$$

    for every \(t\in [a,b]\), with “initial” data \(w_{\varepsilon ,\gamma ,\lambda }(a)=0\), \(w_{\varepsilon ,\gamma ,\lambda }'(a)=1\), and “final” data

    $$\begin{aligned} w_{\varepsilon ,\gamma ,\lambda }(b)=0, \quad w_{\varepsilon ,\gamma ,\lambda }'(b)=\exp \left( \frac{\varepsilon (b-a)}{16\gamma ^{2}}\right) . \end{aligned}$$

Proof

Let us start with statement (1). From definition (4.21) it follows that

$$\begin{aligned} |\varphi _{\varepsilon ,\gamma ,\lambda }(t)|\le \frac{\varepsilon }{4\gamma \lambda }\left( 1+\frac{\varepsilon }{16\gamma ^{3}\lambda }\right) \quad \text {and}\quad |\varphi _{\varepsilon ,\gamma ,\lambda }'(t)|\le \frac{\varepsilon }{2}\left( 1+\frac{\varepsilon }{8\gamma ^{3}\lambda }\right) , \end{aligned}$$

and these estimates imply (4.23) because of assumption (4.19).

As for statement (3), it is just a (lengthy) computation.

It remains to prove statement (2). Let t and s be in \([a,b]\). Since the function \(\varphi _{\varepsilon ,\gamma ,\lambda }\) is periodic with period \(\pi /(\gamma \lambda )\), there exist \(t_{1}\) and \(s_{1}\) in \([a,b]\) such that

$$\begin{aligned} \varphi _{\varepsilon ,\gamma ,\lambda }(t)=\varphi _{\varepsilon ,\gamma ,\lambda }(t_{1}), \quad \varphi _{\varepsilon ,\gamma ,\lambda }(s)=\varphi _{\varepsilon ,\gamma ,\lambda }(s_{1}), \end{aligned}$$

and

$$\begin{aligned} |t_{1}-s_{1}|\le \frac{\pi }{\gamma \lambda }, \quad |t_{1}-s_{1}|\le |t-s|. \end{aligned}$$
(4.25)

From the second estimate in (4.23) we know that \(\varphi _{\varepsilon ,\gamma ,\lambda }\) is Lipschitz continuous with Lipschitz constant less than or equal to \(\varepsilon \), and in particular

$$\begin{aligned} \left| \varphi _{\varepsilon ,\gamma ,\lambda }(t)-\varphi _{\varepsilon ,\gamma ,\lambda }(s)\right|= & {} \left| \varphi _{\varepsilon ,\gamma ,\lambda }(t_{1})-\varphi _{\varepsilon ,\gamma ,\lambda }(s_{1})\right| \nonumber \\\le & {} \varepsilon \cdot |t_{1}-s_{1}| \nonumber \\= & {} \varepsilon \cdot \frac{|t_{1}-s_{1}|}{\omega (|t_{1}-s_{1}|)}\cdot \omega (|t_{1}-s_{1}|) \nonumber \\\le & {} \varepsilon \cdot \frac{\pi }{\gamma \lambda }\left[ \omega \left( \frac{\pi }{\gamma \lambda }\right) \right] ^{-1} \cdot \omega (|t-s|), \end{aligned}$$
(4.26)

where in the last step we exploited that the functions \(\omega (\sigma )\) and \(\sigma /\omega (\sigma )\) are nondecreasing, and inequalities (4.25). Now we distinguish two cases.

  • If \(\pi /\gamma \ge 1\), then \(\omega (\pi /(\gamma \lambda ))\ge \omega (1/\lambda )\) because of the monotonicity of \(\omega (\sigma )\). It follows that

    $$\begin{aligned} \frac{\pi }{\gamma \lambda }\left[ \omega \left( \frac{\pi }{\gamma \lambda }\right) \right] ^{-1}\le \frac{\pi }{\gamma }\cdot \frac{1}{\lambda }\left[ \omega \left( \frac{1}{\lambda }\right) \right] ^{-1}, \end{aligned}$$

    and therefore in this case (4.24) follows from (4.26).

  • If \(\pi /\gamma \le 1\), then \(\pi /(\gamma \lambda )\le 1/\lambda \), and exploiting again the monotonicity of \(\sigma /\omega (\sigma )\) we obtain that

    $$\begin{aligned} \frac{\pi }{\gamma \lambda }\left[ \omega \left( \frac{\pi }{\gamma \lambda }\right) \right] ^{-1}\le \frac{1}{\lambda }\left[ \omega \left( \frac{1}{\lambda }\right) \right] ^{-1}, \end{aligned}$$

    and therefore also in this case (4.24) follows from (4.26).

\(\square \)
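Since the computation behind statement (3) is omitted, it may be reassuring to verify it numerically. The following Python sketch integrates the differential equation directly and compares the result with the closed form (4.22); the values of \(\varepsilon \), \(\gamma \), \(\lambda \), a, b are hypothetical, chosen only so that (4.19) and (4.20) hold.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical check of Lemma 4.8, statement (3), with hypothetical parameters.
gam, lam, eps = 1.5, 40.0, 2.0                 # eps <= 8 gam^3 lam, i.e. (4.19)
a = 3 * 2 * np.pi / (gam * lam)                # a gam lam / (2 pi) = 3, cf. (4.20)
b = 12 * 2 * np.pi / (gam * lam)               # b gam lam / (2 pi) = 12

phi = lambda t: (eps / (4 * gam * lam)) * np.sin(2 * gam * lam * t) \
    + (eps**2 / (64 * gam**4 * lam**2)) * np.sin(gam * lam * t) ** 4

w = lambda t: np.sin(gam * lam * t) / (gam * lam) * np.exp(
    eps * (t - a) / (16 * gam**2)
    - eps * np.sin(2 * gam * lam * t) / (32 * gam**3 * lam))

# Integrate w'' + lam^2 (gam^2 - phi(t)) w = 0 with w(a) = 0, w'(a) = 1 ...
sol = solve_ivp(lambda t, y: [y[1], -lam**2 * (gam**2 - phi(t)) * y[0]],
                (a, b), [0.0, 1.0], rtol=1e-10, atol=1e-12,
                dense_output=True)

# ... and compare with the closed form (4.22) and the final data.
ts = np.linspace(a, b, 1000)
print("max |ode - closed form| =", np.max(np.abs(sol.sol(ts)[0] - w(ts))))
print("w(b)  =", sol.y[0, -1], " (expected 0)")
print("w'(b) =", sol.y[1, -1],
      " (expected", np.exp(eps * (b - a) / (16 * gam**2)), ")")
```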

4.3 Proof of Theorem 2.3, statement (2)

It remains to prove that, for every coefficient \(c_{*}(t)\) in the dense subset \({\mathcal {D}}\) described in Definition 4.6, there exists a family of asymptotic activators that converges uniformly to \(c_{*}(t)\). This result is proved in Proposition 4.11, and the proof relies on two preliminary general constructions, which we introduce in the following two propositions.

Proposition 4.9

(\(\omega \)-construction)

Let us assume that \(c_{*}\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\) for some admissible values of the parameters, and let us set

$$\begin{aligned} \nu _{1}:=\min \left\{ 1,\frac{\sqrt{\mu _{1}}}{\pi }\right\} . \end{aligned}$$
(4.27)

Let \(\lambda \) be a positive real number such that

$$\begin{aligned} \nu _{1}\,\omega \left( \frac{1}{\lambda }\right) \le 8\gamma ^{3}, \quad \frac{\nu _{1}}{2\gamma }\,\omega \left( \frac{1}{\lambda }\right) \le \min \left\{ \gamma ^{2}-\mu _{1},\mu _{2}-\gamma ^{2}\right\} . \end{aligned}$$
(4.28)

Let \((a,b)\subseteq (0,T_{1})\) be an interval whose endpoints satisfy (4.20) and

$$\begin{aligned} \omega (b)\le \eta \,\omega (T_{1}-b), \quad \nu _{1}\lambda \,\omega \left( \frac{1}{\lambda }\right) \le \theta (b). \end{aligned}$$
(4.29)

Finally, let us consider the function \(\varphi _{\varepsilon ,\gamma ,\lambda }(t)\) defined in (4.21) with

$$\begin{aligned} \varepsilon :=\nu _{1}\lambda \,\omega \left( \frac{1}{\lambda }\right) , \end{aligned}$$

and let us define

$$\begin{aligned} c_{\lambda }(t):={\left\{ \begin{array}{ll} c_{*}(t) &{} \text {if }t\in [0,a]\cup [b,T_{0}], \\ \gamma ^{2}-\varphi _{\varepsilon ,\gamma ,\lambda }(t)\quad &{} \text {if }t\in [a,b]. \end{array}\right. } \end{aligned}$$

Then the following statements hold true.

  1. (1)

    The function \(c_{\lambda }\) belongs to \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) and satisfies

    $$\begin{aligned} |c_{\lambda }(t)-c_{*}(t)|\le \frac{\nu _{1}}{2\gamma }\,\omega \left( \frac{1}{\lambda }\right) \quad \forall t\in [0,T_{0}]. \end{aligned}$$
    (4.30)
  2. (2)

    The solution \(u_{\lambda }(t)\) to problem (4.1)–(4.2) satisfies

    $$\begin{aligned}&u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2} \nonumber \\&\quad \ge \min \left\{ 1,\frac{1}{\mu _{2}}\right\} \exp \left( -\frac{1}{\mu _{1}}\int _{T_{1}}^{T_{0}}\theta (t)\,dt\right) \exp \left( \frac{\nu _{1}}{8\mu _{2}}\lambda \,\omega \left( \frac{1}{\lambda }\right) (b-a)\right) \nonumber \\ \end{aligned}$$
    (4.31)

    for every \(t\in [b,T_{0}]\).

Proof

To begin with, we observe that the first inequality in (4.28) is equivalent to \(\varepsilon \le 8\gamma ^{3}\lambda \), and therefore \(\varepsilon \), \(\gamma \), \(\lambda \) satisfy the assumptions of Lemma 4.8.

Statement (1) Let us start by proving (4.30). To this end, we can assume that \(t\in [a,b]\), because otherwise \(c_{\lambda }(t)\) and \(c_{*}(t)\) coincide. When \(t\in [a,b]\), from the first estimate in (4.23) we obtain that

$$\begin{aligned} |c_{\lambda }(t)-c_{*}(t)|= |\varphi _{\varepsilon ,\gamma ,\lambda }(t)|\le \frac{\varepsilon }{2\gamma \lambda }= \frac{\nu _{1}}{2\gamma }\,\omega \left( \frac{1}{\lambda }\right) , \end{aligned}$$

which proves (4.30).

Now let us prove that \(c_{\lambda }\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\). As for the strict hyperbolicity condition, we can limit ourselves to the interval \([a,b]\), where it follows from (4.30) and the second condition in (4.28) because \(c_{*}(t)=\gamma ^{2}\) in \((a,b)\) (since \((a,b)\subseteq (0,T_{1})\)).

As for the estimate on the derivative, again we can limit ourselves to the interval \([a,b]\). In this case from the second estimate in (4.23) and assumption (4.29) we obtain that

$$\begin{aligned} |c_{\lambda }'(t)|= |\varphi _{\varepsilon ,\gamma ,\lambda }'(t)|\le \varepsilon = \nu _{1}\lambda \,\omega \left( \frac{1}{\lambda }\right) \le \theta (b)\le \theta (t), \end{aligned}$$

where the last inequality follows from the monotonicity of \(\theta (t)\).

Finally, let us check the \(\omega \)-continuity of \(c_{\lambda }(t)\). To this end, we consider t and s in \([0,T_{0}]\), we assume without loss of generality that \(s<t\), and we distinguish some cases according to the position of t and s.

  • If \(a\le s<t\le b\), then we exploit (4.24), and from our definition (4.27) of \(\nu _{1}\) we obtain that

    $$\begin{aligned} |c_{\lambda }(t){-}c_{\lambda }(s)|= & {} |\varphi _{\varepsilon ,\gamma ,\lambda }(t){-}\varphi _{\varepsilon ,\gamma ,\lambda }(s)|\nonumber \\\le & {} \nu _{1}\cdot \max \left\{ 1,\frac{\pi }{\gamma }\right\} \omega (t-s)\le \omega (t-s).\nonumber \\ \end{aligned}$$
    (4.32)
  • If \(b\le s<t\le T_{0}\), then the \(\omega \)-continuity of \(c_{\lambda }\) follows from the \(\omega \)-continuity of \(c_{*}\).

  • If \(s\in [a,b]\) and \(t\in [T_{1},T_{0}]\), then

    $$\begin{aligned} |c_{\lambda }(t)-c_{\lambda }(s)|\le & {} |c_{\lambda }(t)-c_{\lambda }(b)|+|c_{\lambda }(b)-c_{\lambda }(s)| \\= & {} |c_{*}(t)-c_{*}(b)|+|\varphi _{\varepsilon ,\gamma ,\lambda }(b)-\varphi _{\varepsilon ,\gamma ,\lambda }(s)| \\\le & {} (1-\eta )\omega (t-b)+\omega (b-s) \\\le & {} (1-\eta )\omega (t-b)+\omega (b), \end{aligned}$$

    where we exploited that \(c_{*}\) satisfies (4.18) in \([b,T_{0}]\), and the fact that \(c_{\lambda }\) satisfies (4.32) in [sb]. At this point we exploit the first inequality in (4.29) and we conclude that

    $$\begin{aligned} |c_{\lambda }(t)-c_{\lambda }(s)|\le \omega (t-b)+\eta [\omega (T_{1}-b)-\omega (t-b)]\le \omega (t-b)\le \omega (t-s). \end{aligned}$$
  • The cases where at least one variable lies in \([0,a]\cup [b,T_{1}]\) are either trivial or can be easily reduced to the previous ones.

Statement (2) Let us examine now the solution to problem (4.1)–(4.2). In the interval [0, a] the solution is given by the explicit formula

$$\begin{aligned} u_{\lambda }(t)=\frac{1}{\gamma \lambda }\sin (\gamma \lambda t), \end{aligned}$$

and hence, since \(\gamma \lambda a\) is an integer multiple of \(2\pi \), it follows that \(u_{\lambda }(a)=0\) and \(u_{\lambda }'(a)=1\).

In the interval \([a,b]\) the solution is given by the explicit formula \(u_{\lambda }(t)=w_{\varepsilon ,\gamma ,\lambda }(t)\), where \(w_{\varepsilon ,\gamma ,\lambda }\) is defined by (4.22). Since \(\gamma \lambda b\) is an integer multiple of \(2\pi \), from the explicit formula we obtain that

$$\begin{aligned} u_{\lambda }'(b)^{2}+\lambda ^{2}\gamma ^{2}u_{\lambda }(b)^{2}= & {} \exp \left( \frac{\nu _{1}}{8\gamma ^{2}}\lambda \,\omega \left( \frac{1}{\lambda }\right) (b-a)\right) \nonumber \\ {}\ge & {} \exp \left( \frac{\nu _{1}}{8\mu _{2}}\lambda \,\omega \left( \frac{1}{\lambda }\right) (b-a)\right) . \end{aligned}$$
(4.33)

Finally, in the interval \([b,T_{0}]\) we consider the classical hyperbolic energy

$$\begin{aligned} F_{\lambda }(t):=u_{\lambda }'(t)^{2}+\lambda ^{2}c_{\lambda }(t)u_{\lambda }(t)^{2}. \end{aligned}$$

In the usual way we obtain that

$$\begin{aligned} u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2}\ge \min \left\{ 1,\frac{1}{\mu _{2}}\right\} F_{\lambda }(t), \end{aligned}$$
(4.34)

and

$$\begin{aligned} F_{\lambda }'(t)=\lambda ^{2}c_{\lambda }'(t)u_{\lambda }(t)^{2}\ge -\frac{|c_{\lambda }'(t)|}{c_{\lambda }(t)}\,F_{\lambda }(t)= -\frac{|c_{*}'(t)|}{c_{*}(t)}\,F_{\lambda }(t)\ge -\frac{|c_{*}'(t)|}{\mu _{1}}\,F_{\lambda }(t). \end{aligned}$$

Since \(c_{*}'(t)=0\) in \([b,T_{1}]\), and \(|c_{*}'(t)|\le \theta (t)\) in \([T_{1},T_{0}]\), integrating this differential inequality we obtain that

$$\begin{aligned} F_{\lambda }(t)\ge F_{\lambda }(b)\exp \left( -\frac{1}{\mu _{1}}\int _{T_{1}}^{T_{0}}\theta (\tau )\,d\tau \right) \quad \forall t\in [b,T_{0}]. \end{aligned}$$
(4.35)

Since \(F_{\lambda }(b)\) is given by (4.33), at this point (4.31) follows from (4.34) and (4.35). \(\square \)
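The growth (4.33) produced by the \(\omega \)-construction can also be observed numerically. In the following Python sketch the data \(\omega (\sigma )=\sqrt{\sigma }\), \(\mu _{1}=1\), \(\gamma =1.5\), \(\lambda =200\) and the endpoints a, b are hypothetical, chosen only so that (4.20) holds; the sketch is an illustration, not part of the proof.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical illustration of the omega-construction.
mu1, gam, lam = 1.0, 1.5, 200.0
nu1 = min(1.0, np.sqrt(mu1) / np.pi)            # definition (4.27)
eps = nu1 * lam * np.sqrt(1.0 / lam)            # eps = nu1 lam omega(1/lam)
a = 5 * 2 * np.pi / (gam * lam)                 # endpoints satisfying (4.20)
b = 60 * 2 * np.pi / (gam * lam)

phi = lambda t: (eps / (4 * gam * lam)) * np.sin(2 * gam * lam * t) \
    + (eps**2 / (64 * gam**4 * lam**2)) * np.sin(gam * lam * t) ** 4
c_lam = lambda t: gam**2 - (phi(t) if a <= t <= b else 0.0)

# u'' + lam^2 c_lam(t) u = 0 on [0, b], with u(0) = 0, u'(0) = 1.
sol = solve_ivp(lambda t, y: [y[1], -lam**2 * c_lam(t) * y[0]],
                (0.0, b), [0.0, 1.0], rtol=1e-10, atol=1e-12)
u_b, v_b = sol.y[0, -1], sol.y[1, -1]
print("energy at b     :", v_b**2 + lam**2 * gam**2 * u_b**2)
print("predicted (4.33):", np.exp(eps * (b - a) / (8 * gam**2)))
```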

Proposition 4.10

(\(\theta \)-construction)

Let us assume that \(c_{*}\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\) for some admissible values of the parameters, and let us set

$$\begin{aligned} \nu _{2}:=\frac{1}{2}\min \left\{ 1,\frac{\sqrt{\mu _{1}}}{\pi }\right\} . \end{aligned}$$
(4.36)

Let \(\lambda \) be a positive real number such that

$$\begin{aligned} \nu _{2}\,\omega \left( \frac{1}{\lambda }\right) \le 8\gamma ^{3}, \quad \frac{\nu _{2}}{2\gamma }\,\omega \left( \frac{1}{\lambda }\right) \le \min \left\{ \gamma ^{2}-\mu _{1},\mu _{2}-\gamma ^{2}\right\} . \end{aligned}$$
(4.37)

Let \((a,b)\subseteq (0,T_{1})\) be an interval whose endpoints satisfy (4.20) and

$$\begin{aligned} \omega (b)\le \eta \,\omega (T_{1}-b), \quad \lambda \,\omega \left( \frac{1}{\lambda }\right) \ge \theta (a). \end{aligned}$$
(4.38)

Let us set \(t_{0}:=a\) and \(k:=\gamma \lambda (b-a)/(2\pi )\), and then let us define

$$\begin{aligned} t_{i}:=a+\frac{2\pi }{\gamma \lambda }\cdot i, \quad \text {and}\quad \varepsilon _{i}:=\nu _{2}\,\theta (t_{i}) \quad \quad \forall i\in \{1,\ldots ,k\}. \end{aligned}$$

Finally, let us define

$$\begin{aligned} c_{\lambda }(t):={\left\{ \begin{array}{ll} c_{*}(t) &{} \text {if }t\in [0,a]\cup [b,T_{0}], \\ \gamma ^{2}-\varphi _{\varepsilon _{i},\gamma ,\lambda }(t)\quad &{} \text { if } t\in [t_{i-1},t_{i}] \text { for some }i\in \{1,\ldots ,k\}. \end{array}\right. } \end{aligned}$$

Then the following statements hold true.

  1. (1)

    The function \(c_{\lambda }\) belongs to \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) and satisfies

    $$\begin{aligned} |c_{\lambda }(t)-c_{*}(t)|\le \frac{\nu _{2}}{2\gamma }\,\omega \left( \frac{1}{\lambda }\right) \quad \forall t\in [0,T_{0}]. \end{aligned}$$
    (4.39)
  2. (2)

    The solution to problem (4.1)–(4.2) satisfies

    $$\begin{aligned}&u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2} \nonumber \\&\quad \ge \min \left\{ 1,\frac{1}{\mu _{2}}\right\} \exp \left( -\frac{1}{\mu _{1}}\int _{T_{1}}^{T_{0}}\theta (t)\,dt-2\pi \right) \cdot \exp \left( \frac{\nu _{2}}{8\mu _{2}}\int _{a}^{b}\theta (\tau )\,d\tau \right) \end{aligned}$$
    (4.40)

    for every \(t\in [b,T_{0}]\).

Proof

We follow the same path as in the case of Proposition 4.9. To begin with, from the monotonicity of \(\theta \) and the second assumption in (4.38) we obtain that

$$\begin{aligned} \varepsilon _{i}= \nu _{2}\,\theta (t_{i})\le \nu _{2}\,\theta (a)\le \nu _{2}\,\lambda \,\omega \left( \frac{1}{\lambda }\right) \quad \forall i\in \{1,\ldots ,k\}, \end{aligned}$$
(4.41)

and therefore from the first inequality in (4.37) we deduce that \(\varepsilon _{i}\le 8\gamma ^{3}\lambda \), and in particular the assumptions of Lemma 4.8 are satisfied in every interval \([t_{i-1},t_{i}]\).

Statement (1) Let us start by proving (4.39). To this end, we can assume that \(t\in [a,b]\), because otherwise \(c_{\lambda }(t)\) and \(c_{*}(t)\) coincide. When \(t\in [t_{i-1},t_{i}]\) for some \(i\in \{1,\ldots ,k\}\), from the first estimate in (4.23) we obtain that

$$\begin{aligned} |c_{\lambda }(t)-c_{*}(t)|= |\varphi _{\varepsilon _{i},\gamma ,\lambda }(t)|\le \frac{\varepsilon _{i}}{2\gamma \lambda }. \end{aligned}$$
(4.42)

Plugging (4.41) into (4.42) we obtain (4.39).

Now let us prove that \(c_{\lambda }\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\). As for the strict hyperbolicity condition, we can limit ourselves to the interval \([a,b]\), where it follows from (4.39) because of the second condition in (4.37).

As for the estimate on the derivative, again we can limit ourselves to the interval \([a,b]\). When \(t\in [t_{i-1},t_{i}]\) for some \(i\in \{1,\ldots ,k\}\) we apply the second estimate in (4.23) and we obtain that

$$\begin{aligned} |c_{\lambda }'(t)|= |\varphi _{\varepsilon _{i},\gamma ,\lambda }'(t)|\le \varepsilon _{i}= \nu _{2}\,\theta (t_{i})\le \theta (t_{i})\le \theta (t), \end{aligned}$$

where the last inequality follows from the monotonicity of \(\theta (t)\).

Finally, let us check the \(\omega \)-continuity of \(c_{\lambda }(t)\). To this end, we consider t and s in \([0,T_{0}]\), we assume without loss of generality that \(s<t\), and we distinguish some cases according to the position of t and s.

  • If \(t_{i-1}\le s<t\le t_{i}\) for some \(i\in \{1,\ldots ,k\}\), then from (4.24) we obtain that

    $$\begin{aligned}&|c_{\lambda }(t)-c_{\lambda }(s)|\nonumber \\&\quad = |\varphi _{\varepsilon _{i},\gamma ,\lambda }(t)-\varphi _{\varepsilon _{i},\gamma ,\lambda }(s)|\le \varepsilon _{i}\cdot \max \left\{ 1,\frac{\pi }{\gamma }\right\} \cdot \left[ \lambda \,\omega \left( \frac{1}{\lambda }\right) \right] ^{-1} \cdot \omega (t-s). \end{aligned}$$

    At this point we exploit (4.41) and our definition (4.36) of \(\nu _{2}\), and we conclude that

    $$\begin{aligned} |c_{\lambda }(t)-c_{\lambda }(s)|\le \nu _{2}\max \left\{ 1,\frac{\pi }{\gamma }\right\} \cdot \omega (t-s)\le \frac{1}{2}\,\omega (t-s). \end{aligned}$$
  • If \(s\in [t_{i-1},t_{i}]\) and \(t\in [t_{j-1},t_{j}]\) for some \(1\le i<j\le k\), then from the previous case (and thanks to the factor 1/2) we deduce that

    $$\begin{aligned} |c_{\lambda }(t)-c_{\lambda }(s)|\le & {} |c_{\lambda }(t)-\gamma ^{2}|+|\gamma ^{2}-c_{\lambda }(s)| \\= & {} |c_{\lambda }(t)-c_{\lambda }(t_{j-1})|+|c_{\lambda }(t_{i})-c_{\lambda }(s)| \\\le & {} \frac{1}{2}\,\omega (t-t_{j-1})+\frac{1}{2}\,\omega (t_{i}-s) \\\le & {} \omega (t-s). \end{aligned}$$
  • All other possibilities for s and t can be dealt with as in the case of the \(\omega \)-construction.

Statement (2) Let us examine the solution to problem (4.1)–(4.2). As in the case of the \(\omega \)-construction, in the interval [0, a] we have an explicit formula for the solution, from which we deduce that \(u_{\lambda }(a)=0\) and \(u_{\lambda }'(a)=1\). Then in the interval \([t_{0},t_{1}]\) the solution is given by the explicit formula \(u_{\lambda }(t)=w_{\varepsilon _{1},\gamma ,\lambda }(t)\), where \(w_{\varepsilon ,\gamma ,\lambda }\) is defined by (4.22). Since \(\gamma \lambda t_{1}\) is an integer multiple of \(2\pi \), from the explicit formula we obtain that

$$\begin{aligned} u_{\lambda }(t_{1})=0, \quad u_{\lambda }'(t_{1})=\exp \left( \frac{\varepsilon _{1}}{16\gamma ^{2}}(t_{1}-t_{0})\right) = \exp \left( \frac{\nu _{2}}{16\gamma ^{2}}\,\theta (t_{1})(t_{1}-t_{0})\right) . \end{aligned}$$

In the interval \([t_{1},t_{2}]\) the solution is given by the explicit formula \(u_{\lambda }(t)=\alpha w_{\varepsilon _{2},\gamma ,\lambda }(t)\), with \(\alpha =u_{\lambda }'(t_{1})\), and therefore

$$\begin{aligned} u_{\lambda }(t_{2})=0, \quad u_{\lambda }'(t_{2})= \exp \left( \frac{\nu _{2}}{16\gamma ^{2}}\,\theta (t_{1})(t_{1}-t_{0})+ \frac{\nu _{2}}{16\gamma ^{2}}\,\theta (t_{2})(t_{2}-t_{1})\right) . \end{aligned}$$

At this point by finite induction we find that

$$\begin{aligned} u_{\lambda }(b)=u_{\lambda }(t_{k})=0, \quad u_{\lambda }'(b)=u_{\lambda }'(t_{k})= \exp \left( \frac{\nu _{2}}{16\gamma ^{2}}\sum _{i=1}^{k}\theta (t_{i})(t_{i}-t_{i-1})\right) .\nonumber \\ \end{aligned}$$
(4.43)

Since \(t_{i}-t_{i-1}\) does not depend on i, from the monotonicity of \(\theta (t)\) we deduce that

$$\begin{aligned} \sum _{i=1}^{k}\theta (t_{i})(t_{i}-t_{i-1})= & {} \frac{2\pi }{\gamma \lambda }\sum _{i=1}^{k}\theta (t_{i})\ge \frac{2\pi }{\gamma \lambda }\sum _{i=1}^{k-1}\theta (t_{i})\ge \int _{t_{1}}^{b}\theta (t)\,dt \\= & {} \int _{a}^{b}\theta (t)\,dt-\int _{a}^{t_{1}}\theta (t)\,dt\ge \int _{a}^{b}\theta (t)\,dt-\frac{2\pi }{\gamma \lambda }\,\theta (a). \end{aligned}$$

Recalling the second condition in (4.38), and the first condition in (4.37), we obtain that

$$\begin{aligned} \sum _{i=1}^{k}\theta (t_{i})(t_{i}-t_{i-1})\ge \int _{a}^{b}\theta (t)\,dt-\frac{2\pi }{\gamma }\,\omega \left( \frac{1}{\lambda }\right) \ge \int _{a}^{b}\theta (t)\,dt-\frac{16\pi \gamma ^{2}}{\nu _{2}}. \end{aligned}$$

Plugging this estimate into (4.43) we conclude that

$$\begin{aligned} u_{\lambda }'(b)^{2}+\gamma ^{2}\lambda ^{2}u_{\lambda }(b)^{2}\ge \exp \left( \frac{\nu _{2}}{8\mu _{2}}\int _{a}^{b}\theta (t)\,dt-2\pi \right) . \end{aligned}$$

At this point in the interval \([b,T_{0}]\) we consider the hyperbolic energy as in the case of the \(\omega \)-construction and we obtain (4.40). \(\square \)
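The Riemann-sum estimate used above admits a quick numerical confirmation. In the following Python sketch the choices \(\theta (t)=1/t\) (nonincreasing), \(\gamma =1.5\), \(\lambda =500\), \(k=400\) are hypothetical.

```python
import numpy as np

# Check of the Riemann-sum lower bound in the theta-construction.
gam, lam, k = 1.5, 500.0, 400
theta = lambda t: 1.0 / t
step = 2 * np.pi / (gam * lam)                 # t_i - t_{i-1}
a = 10 * step                                  # a gam lam / (2 pi) = 10, cf. (4.20)
b = a + k * step
t = a + step * np.arange(1, k + 1)             # t_1, ..., t_k = b

riemann = step * np.sum(theta(t))              # sum theta(t_i)(t_i - t_{i-1})
lower = np.log(b / a) - step * theta(a)        # int_a^b theta - step * theta(a)
print(riemann, ">=", lower)                    # the claimed inequality holds
```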

Proposition 4.11

(Asymptotic activators for initially constant coefficients)

Let us assume that \(c_{*}\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\) for some admissible values of the parameters, and let us define \(m(\lambda )\) as in (1.7), and \(M_{3}\) as in (2.1).

Let us assume that \(m(\lambda )\rightarrow +\infty \) as \(\lambda \rightarrow +\infty \).

Then there exists a family of asymptotic activators \(\{c_{\lambda }(t)\}\subseteq \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) with rate \(\phi (\lambda ):=M_{3}\,m(\lambda )\) such that \(c_{\lambda }(t)\rightarrow c_{*}(t)\) uniformly in \([0,T_{0}]\).

Proof

The strategy is the following. For every \(\lambda \) large enough we define a coefficient \(c_{\lambda }\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) by modifying \(c_{*}\) in some interval \((a_{\lambda },b_{\lambda })\) according to the constructions described in Propositions 4.9 and 4.10. We show that \(b_{\lambda }\rightarrow 0\), and that for \(\lambda \) large enough it turns out that

$$\begin{aligned} |c_{\lambda }(t)-c_{*}(t)|\le \frac{\nu _{2}}{\gamma }\,\omega \left( \frac{1}{\lambda }\right) \quad \forall t\in [0,T_{0}], \end{aligned}$$
(4.44)

where \(\nu _{2}\) is defined by (4.36), and the solutions \(u_{\lambda }(t)\) to problem (4.1)–(4.2) satisfy

$$\begin{aligned} u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2}\ge M_{4}\exp (2M_{3}\,m(\lambda )) \quad \forall t\in [b_{\lambda },T_{0}], \end{aligned}$$
(4.45)

where

$$\begin{aligned} M_{4}:=\min \left\{ 1,\frac{1}{\mu _{2}}\right\} \exp \left( -\frac{1}{\mu _{1}}\int _{T_{1}}^{T_{0}}\theta (t)\,dt-2\pi -\frac{\omega (1)}{8\mu _{2}}\right) . \end{aligned}$$

If we prove these claims, then from (4.44) it follows that \(c_{\lambda }\rightarrow c_{*}\) uniformly in \([0,T_{0}]\), while (4.45) and the fact that \(b_{\lambda }\rightarrow 0^{+}\) imply that \(\{c_{\lambda }(t)\}\) is a family of asymptotic activators with rate \(\phi (\lambda ):=M_{3}\,m(\lambda )\).

In order to define \(c_{\lambda }\) we distinguish two cases. To begin with, we observe that \(\omega (\sigma )\) and \(\theta (t)\) satisfy (2.3), because otherwise \(m(\lambda )\) would be bounded independently of \(\lambda \).

Let

$$\begin{aligned} \psi (s):=\lambda \,\omega \left( \frac{1}{\lambda }\right) s+\int _{s}^{T_{0}}\theta (t)\,dt \end{aligned}$$

denote the function whose minimum is \(m(\lambda )\). Due to the second condition in (2.3), the minimum is never attained at \(s=0\). Moreover, since \(\psi '(s)=\lambda \,\omega (1/\lambda )-\theta (s)\), from the first condition in (2.3) we deduce that \(\psi '(T_{0})>0\) when \(\lambda \) is large enough, and therefore for these values of \(\lambda \) the minimum is not attained at \(s=T_{0}\) either. Therefore, when \(\lambda \) is large enough the minimum is attained at some point \(s_{\lambda }\in (0,T_{0})\) where \(\psi '(s_{\lambda })=0\), and hence

$$\begin{aligned} \theta (s_{\lambda })=\lambda \,\omega \left( \frac{1}{\lambda }\right) . \end{aligned}$$

Exploiting again the first condition in (2.3) we deduce that the right-hand side tends to \(+\infty \), and hence \(s_{\lambda }\rightarrow 0^{+}\). Now let \(\Lambda _{\omega }\) denote the set of all \(\lambda >0\) such that

$$\begin{aligned} \lambda \,\omega \left( \frac{1}{\lambda }\right) s_{\lambda }\ge \frac{1}{2}\,m(\lambda ), \end{aligned}$$
(4.46)

and let \(\Lambda _{\theta }\) denote the set of remaining \(\lambda \)’s, for which necessarily it turns out that

$$\begin{aligned} \int _{s_{\lambda }}^{T_{0}}\theta (t)\,dt\ge \frac{1}{2}\,m(\lambda ). \end{aligned}$$

We are now ready to define \(c_{\lambda }(t)\) in the two cases.

Case \(\lambda \in \Lambda _{\omega }\) For every \(\lambda \in \Lambda _{\omega }\) we set (here \(\lfloor \alpha \rfloor \) denotes the greatest integer less than or equal to \(\alpha \))

$$\begin{aligned} a=a_{\lambda }:= \frac{2\pi }{\gamma \lambda }\left\lfloor \frac{\gamma \lambda }{4\pi }\,s_{\lambda }-2\right\rfloor , \quad b=b_{\lambda }:= \frac{4\pi }{\gamma \lambda }\left\lfloor \frac{\gamma \lambda }{4\pi }\,s_{\lambda }\right\rfloor , \end{aligned}$$

and we define the coefficient \(c_{\lambda }(t)\) according to the \(\omega \)-construction of Proposition 4.9. Let us check that the assumptions are satisfied if \(\lambda \in \Lambda _{\omega }\) is large enough. To begin with, we observe that \(a_{\lambda }>0\) because from (4.46) we deduce that \(\lambda s_{\lambda }\rightarrow +\infty \) as \(\lambda \rightarrow +\infty \) within \(\Lambda _{\omega }\). In addition, it turns out that

$$\begin{aligned} 0<a_{\lambda }<b_{\lambda }\le s_{\lambda }, \end{aligned}$$

so that in particular \(b_{\lambda }\rightarrow 0^{+}\) and \((a_{\lambda },b_{\lambda })\subseteq (0,T_{1})\) for large \(\lambda \). Moreover, (4.20) is almost trivial from the definition, while the two inequalities in (4.28) are again satisfied when \(\lambda \) is large because \(\omega (\sigma )\rightarrow 0\) as \(\sigma \rightarrow 0^{+}\). Finally, the first inequality in (4.29) is true for \(\lambda \) large because \(b_{\lambda }\rightarrow 0\), while the second inequality in (4.29) is satisfied because

$$\begin{aligned} \theta (b_{\lambda })\ge \theta (s_{\lambda })=\lambda \,\omega \left( \frac{1}{\lambda }\right) \ge \nu _{1}\lambda \,\omega \left( \frac{1}{\lambda }\right) . \end{aligned}$$

At this point we can use the conclusions of Proposition 4.9. From (4.30) we obtain immediately (4.44) in this case (note that \(\nu _{1}=2\nu _{2}\)). Finally, we observe that

$$\begin{aligned} b_{\lambda }-a_{\lambda }\ge \frac{s_{\lambda }}{2}, \end{aligned}$$

and therefore

$$\begin{aligned} \lambda \,\omega \left( \frac{1}{\lambda }\right) (b_{\lambda }-a_{\lambda })\ge \frac{1}{2}\lambda \,\omega \left( \frac{1}{\lambda }\right) s_{\lambda }\ge \frac{1}{4}\,m(\lambda ), \end{aligned}$$

so that (4.31) implies (4.45) in this case.

Case \(\lambda \in \Lambda _{\theta }\) For every \(\lambda \in \Lambda _{\theta }\) we define \(\widehat{s}_{\lambda }\) in such a way that

$$\begin{aligned} \int _{s_{\lambda }}^{\widehat{s}_{\lambda }}\theta (t)\,dt= \frac{1}{2}\int _{s_{\lambda }}^{T_{0}}\theta (t)\,dt\ge \frac{1}{4}m(\lambda ), \end{aligned}$$

and we observe that \(\widehat{s}_{\lambda }\rightarrow 0^{+}\) because the integral of \(\theta (t)\) in \((0,T_{1})\) is divergent. Then we set (here \(\lceil \alpha \rceil \) denotes the smallest integer greater than or equal to \(\alpha \))

$$\begin{aligned} a=a_{\lambda }:= \frac{2\pi }{\gamma \lambda }\left\lceil \frac{\gamma \lambda }{2\pi }\,s_{\lambda }\right\rceil , \quad b=b_{\lambda }:= \frac{2\pi }{\gamma \lambda }\left\lceil \frac{\gamma \lambda }{2\pi }\,\widehat{s}_{\lambda }\right\rceil , \end{aligned}$$

and we define the coefficient \(c_{\lambda }(t)\) according to the \(\theta \)-construction of Proposition 4.10.

With these notations it turns out that

$$\begin{aligned} 0< s_{\lambda }\le a_{\lambda }\le b_{\lambda } \quad \text {and}\quad \widehat{s}_{\lambda }\le b_{\lambda }\le \widehat{s}_{\lambda }+\frac{2\pi }{\gamma \lambda }, \end{aligned}$$

so that in particular \(b_{\lambda }\rightarrow 0^{+}\) and \((a_{\lambda },b_{\lambda })\subseteq (0,T_{1})\) for large \(\lambda \). Moreover, (4.20) is almost trivial from the definition, while the two inequalities in (4.37) are again satisfied when \(\lambda \) is large because \(\omega (\sigma )\rightarrow 0\) as \(\sigma \rightarrow 0^{+}\). Finally, the first inequality in (4.38) is true for \(\lambda \) large because \(b_{\lambda }\rightarrow 0\), while the second inequality in (4.38) is satisfied because

$$\begin{aligned} \theta (a_{\lambda })\le \theta (s_{\lambda })= \lambda \,\omega \left( \frac{1}{\lambda }\right) . \end{aligned}$$

At this point we can use the conclusions of Proposition 4.10. From (4.39) we obtain immediately (4.44) in this case. Finally, we observe that

$$\begin{aligned} \int _{a_{\lambda }}^{b_{\lambda }}\theta (t)\,dt\ge & {} \int _{s_{\lambda }+2\pi /(\gamma \lambda )}^{\widehat{s}_{\lambda }}\theta (t)\,dt= \int _{s_{\lambda }}^{\widehat{s}_{\lambda }}\theta (t)\,dt- \int _{s_{\lambda }}^{s_{\lambda }+2\pi /(\gamma \lambda )}\theta (t)\,dt \\\ge & {} \frac{1}{4}\,m(\lambda )- \frac{2\pi }{\gamma \lambda }\,\theta (s_{\lambda })\ge \frac{1}{4}\,m(\lambda )-\frac{2\pi }{\gamma }\,\omega \left( \frac{1}{\lambda }\right) , \end{aligned}$$

and therefore

$$\begin{aligned} \int _{a_{\lambda }}^{b_{\lambda }}\theta (t)\,dt\ge \frac{1}{4}\,m(\lambda )-\frac{2\pi \omega (1)}{\gamma } \end{aligned}$$

when \(\lambda \) is large enough. Recalling that \(\nu _{2}\le \gamma /(2\pi )\), at this point (4.40) implies (4.45) in this case. \(\square \)
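The dichotomy between \(\Lambda _{\omega }\) and \(\Lambda _{\theta }\) can be explored numerically. The following Python sketch minimizes \(\psi (s)\) for the hypothetical data \(T_{0}=1\), \(\omega (\sigma )=\sqrt{\sigma }\), \(\theta (t)=1/(2t)\); none of these choices comes from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustration of the dichotomy Lambda_omega / Lambda_theta.
T0 = 1.0
omega = lambda s: np.sqrt(s)
int_theta = lambda s: 0.5 * np.log(T0 / s)     # int_s^T0 theta(t) dt

def minimize_psi(lam):
    psi = lambda s: lam * omega(1.0 / lam) * s + int_theta(s)
    res = minimize_scalar(psi, bounds=(1e-12, T0), method="bounded")
    return res.x, res.fun                      # (s_lambda, m(lambda))

for lam in (1e2, 1e4, 1e6):
    s_lam, m_lam = minimize_psi(lam)
    omega_part = lam * omega(1.0 / lam) * s_lam    # the test (4.46)
    print(f"lam={lam:.0e}  s_lam={s_lam:.3e}  m={m_lam:.2f}  "
          f"in Lambda_omega: {omega_part >= m_lam / 2}")
# With these choices the omega-part stays equal to 1/2 while m(lambda) grows,
# so every large lambda lands in Lambda_theta, and it is the theta-construction
# that produces the asymptotic activators.
```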

5 Proof of Corollaries

Proof of Corollary 2.6

It is enough to show that \(m(\lambda )\rightarrow +\infty \). Let \(\lambda _{n}\rightarrow +\infty \) be any sequence of positive real numbers. For every positive integer n, let us choose

$$\begin{aligned} s_{n}\in {\text {argmin}}\left\{ \lambda _{n}\,\omega \left( \frac{1}{\lambda _{n}}\right) s+ \int _{s}^{T_{0}}\theta (t)\,dt:s\in [0,T_{0}]\right\} . \end{aligned}$$

Up to subsequences (not relabeled) we can assume that \(s_{n}\rightarrow s_{\infty }\in [0,T_{0}]\). If \(s_{\infty }=0\), then \(m(\lambda _{n})\rightarrow +\infty \) due to the second term in the minimum and the second assumption in (2.3). If \(s_{\infty }>0\), then \(m(\lambda _{n})\rightarrow +\infty \) due to the first term in the minimum and the first assumption in (2.3). \(\square \)

Proof of Corollary 2.7, Statement (1) From assumption (2.5) we deduce that there exists a positive real number M such that \(\theta (t)\le M/t\) for every \(t\in (0,T_{0}]\), and hence

$$\begin{aligned} \lambda \,\omega \left( \frac{1}{\lambda }\right) s+\int _{s}^{T_{0}}\theta (t)\,dt\le \lambda \,\omega \left( \frac{1}{\lambda }\right) s+M(\log T_{0}-\log s). \end{aligned}$$

In particular, setting \(s=\lambda ^{\alpha -1}\) we obtain that

$$\begin{aligned} m(\lambda )\le \lambda ^{\alpha }\omega \left( \frac{1}{\lambda }\right) +M(\log T_{0}+(1-\alpha )\log \lambda ). \end{aligned}$$

Now we divide by \(\log \lambda \) and we let \(\lambda \rightarrow +\infty \). Due to assumption (2.4) we deduce that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\frac{m(\lambda )}{\log \lambda }\le M(1-\alpha ) \end{aligned}$$

for every \(\alpha \in (0,1)\). Finally, letting \(\alpha \rightarrow 1^{-}\) we conclude that actually \(m(\lambda )/\log \lambda \rightarrow 0\), which implies well-posedness with at most an arbitrarily small derivative loss.
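This behavior of \(m(\lambda )\) is easy to observe numerically. In the following Python sketch we take the hypothetical data \(\omega (\sigma )=\sigma (1+\log (1/\sigma ))\) (nondecreasing on (0, 1]) and \(\theta (t)=M/t\); this modulus satisfies \(\lambda ^{\alpha }\omega (1/\lambda )\rightarrow 0\) for every \(\alpha \in (0,1)\), which is enough for the step above that invokes (2.4).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Numerical illustration of statement (1): m(lambda)/log(lambda) -> 0.
T0, M = 1.0, 1.0
omega = lambda s: s * (1.0 + np.log(1.0 / s))

def m(lam):
    # m(lambda) = min over s of  lam omega(1/lam) s + M int_s^T0 dt/t.
    psi = lambda s: lam * omega(1.0 / lam) * s + M * np.log(T0 / s)
    return minimize_scalar(psi, bounds=(1e-15, T0), method="bounded").fun

for lam in (1e3, 1e6, 1e9, 1e12):
    print(f"lam={lam:.0e}   m(lam)/log(lam) = {m(lam) / np.log(lam):.4f}")
# The ratio decreases to 0, in agreement with the arbitrarily small
# derivative loss established above.
```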

Proof of Corollary 2.7, Statement (2) From assumptions (2.6) and (2.7) we deduce that there exist positive real numbers \(m_{1}\) and \(m_{2}\) such that

$$\begin{aligned} \omega (\sigma )\ge m_{1}\,\sigma ^{\alpha } \qquad \quad \text {and}\quad \qquad \theta (t)\ge \frac{m_{2}}{t}, \end{aligned}$$

and in particular

$$\begin{aligned} \lambda \,\omega \left( \frac{1}{\lambda }\right) s+\int _{s}^{T_{0}}\theta (t)\,dt\ge m_{1}\lambda ^{1-\alpha }s+m_{2}(\log T_{0}-\log s). \end{aligned}$$

Minimizing the right-hand side with respect to s (the minimum is attained at \(s=m_{2}/(m_{1}\lambda ^{1-\alpha })\)) we conclude that

$$\begin{aligned} m(\lambda )\ge m_{2}\left( 1+\log T_{0}+\log (m_{1}\lambda ^{1-\alpha })-\log m_{2}\right) , \end{aligned}$$

and hence

$$\begin{aligned} \liminf _{\lambda \rightarrow +\infty }\frac{m(\lambda )}{\log \lambda }\ge m_{2}(1-\alpha )>0, \end{aligned}$$

which implies that the derivative loss is actually finite, and proportional to \(1-\alpha \), for a residual set of coefficients. \(\square \)
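The elementary minimization used in the last step can also be double-checked symbolically. A minimal Python/SymPy sketch, with symbols mirroring those in the text:

```python
import sympy as sp

# Symbolic check of the minimization used in statement (2).
s, lam, m1, m2, alpha, T0 = sp.symbols("s lam m1 m2 alpha T0", positive=True)
f = m1 * lam**(1 - alpha) * s + m2 * (sp.log(T0) - sp.log(s))

s_star = sp.solve(sp.Eq(f.diff(s), 0), s)[0]
print(s_star)                                   # m2 / (m1 lam^(1 - alpha))
print(sp.expand_log(f.subs(s, s_star), force=True))
# equals m2 (1 + log T0 + log(m1 lam^(1 - alpha)) - log m2), as in the text.
```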