Abstract
We consider an abstract wave equation with a propagation speed that depends only on time. We assume that the propagation speed is differentiable for positive times and continuous up to the origin, but with a first derivative that is potentially singular at the origin. We examine the derivative loss of solutions, and in particular we investigate which conditions on the modulus of continuity and on the behavior of the derivative at the origin yield, respectively, no derivative loss, an arbitrarily small derivative loss, a finite derivative loss, or an infinite derivative loss. As expected, we obtain that stronger assumptions on the modulus of continuity can compensate for weaker assumptions on the growth of the derivative, and vice versa. Suitable counterexamples show that our results are sharp. Indeed, we prove that, for every set of conditions, the class of propagation speeds that satisfy the given conditions, and for which the corresponding equation exhibits a derivative loss as large as possible, is nonempty and in fact residual in the sense of Baire category.
1 Introduction
In this paper, we consider the wave equation
and its abstract version
where A is a linear nonnegative self-adjoint operator with domain D(A) in some real Hilbert space H. We always assume that the coefficient c(t), which in the model (1.1) represents the square of the propagation speed, is defined in some time interval \((0,T_{0})\), and satisfies the strict hyperbolicity assumption
We investigate the regularity of solutions to (1.2) with initial data
We recall that problem (1.2)–(1.4) admits a unique solution for large classes of initial data, even if the coefficient c(t) is just in \(L^{1}((0,T_{0}))\), without sign conditions. Nevertheless, in general this solution is very weak, in the sense that it lives in a huge space of hyperdistributions, even if initial data are smooth.
Here we are interested in solutions with more “space” regularity. In order to state the definitions in the abstract setting we recall that, for every real number \(\beta \), the operator \(A^{\beta }\) is defined in a suitable domain \(D(A^{\beta })\), which in the concrete case corresponds to the Sobolev space \(H^{2\beta }\) (distributions if \(\beta <0\)).
Definition 1.1
(Well-posedness vs derivative loss)
-
(No derivative loss). Problem (1.2)–(1.4) is said to be well-posed with no derivative loss if, for every pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\), the unique solution satisfies
$$\begin{aligned} (u(t),u'(t))\in D(A^{\beta +1/2})\times D(A^{\beta }) \quad \forall t\in [0,T_{0}]. \end{aligned}$$ -
(Arbitrarily small derivative loss). Problem (1.2)–(1.4) is said to be well-posed with (at most an) arbitrarily small derivative loss if, for every pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\), the unique solution satisfies
$$\begin{aligned} (u(t),u'(t))\in D(A^{\beta -\varepsilon +1/2})\times D(A^{\beta -\varepsilon }) \quad \forall t\in [0,T_{0}] \quad \forall \varepsilon >0. \end{aligned}$$The arbitrarily small derivative loss does actually happen if there exists a pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\) such that the unique solution satisfies
$$\begin{aligned} (u(t),u'(t))\not \in D(A^{\beta +1/2})\times D(A^{\beta }) \quad \forall t\in (0,T_{0}]. \end{aligned}$$ -
(Finite derivative loss). Problem (1.2)–(1.4) is said to be well-posed with (at most a) finite derivative loss if there exists a positive real number \(\delta \) such that, for every pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\), the unique solution satisfies
$$\begin{aligned} (u(t),u'(t))\in D(A^{\beta -\delta +1/2})\times D(A^{\beta -\delta }) \quad \forall t\in [0,T_{0}]. \end{aligned}$$The finite derivative loss does actually happen if there exist a function \(\delta :(0,T_{0}]\rightarrow (0,+\infty )\), and a pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\), such that the unique solution satisfies
$$\begin{aligned} (u(t),u'(t))\not \in D(A^{\beta -\delta (t)+1/2})\times D(A^{\beta -\delta (t)}) \quad \forall t\in (0,T_{0}]. \end{aligned}$$ -
(Infinite derivative loss). Problem (1.2)–(1.4) is said to exhibit an infinite derivative loss if there exists a pair of initial data \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\) such that the unique solution satisfies
$$\begin{aligned} (u(t),u'(t))\not \in D(A^{-\gamma +1/2})\times D(A^{-\gamma }) \quad \forall \gamma >0,\quad \forall t\in (0,T_{0}]. \end{aligned}$$
Due to the linearity of the equation, none of the definitions stated above depends on the choice of \(\beta \). In words, no derivative loss means that all solutions live in the same space as the initial data, while finite derivative loss means that the “space regularity” of the solution for positive times is lower than the corresponding regularity of the initial data. The parameter \(\delta \) measures this loss of regularity, which is a true loss of derivatives in the concrete case where the domains of the powers of A are actually Sobolev spaces. The arbitrarily small derivative loss is a condition in between no derivative loss and finite derivative loss: in this case solutions for positive times do not remain in the same space as the initial data, but they remain in all spaces with smaller exponents. Finally, the infinite derivative loss is a dramatic loss of regularity: in the concrete case it means the existence of solutions whose initial data have any given Sobolev regularity, and nevertheless the solutions are not even distributions for positive times.
It is well known that the derivative loss of solutions depends on the time-regularity of the coefficient c(t), and in particular on its oscillatory behavior. This regularity has been measured in different ways in the literature. Let us mention some of them.
Modulus of continuity Let us assume that c(t) is continuous in the closed interval \([0,T_{0}]\), and let \(\omega :[0,+\infty )\rightarrow [0,+\infty )\) be a function such that
Any function \(\omega \) with this property is called a modulus of continuity for c(t) in \([0,T_{0}]\).
The relation between the modulus of continuity of the coefficient and the regularity of solutions was first investigated by F. Colombini, E. De Giorgi and S. Spagnolo in the seminal paper [4]. The result was then refined and extended in many subsequent papers (see for example [2, 5, 8, 9]). Concerning the derivative loss of solutions, the situation is summarized in Table 1, where the assumptions in the first column refer to the behavior of \(\omega (\sigma )\) as \(\sigma \rightarrow 0^{+}\).
Now we know that all the results stated in Table 1 are residually optimal, namely for every modulus of continuity \(\omega \) the set of coefficients c(t) that are \(\omega \)-continuous, and for which problem (1.2)–(1.4) does exhibit the prescribed derivative loss, is residual in the sense of Baire category (see [12, 13]).
Singular behavior of the derivative at the origin Let us assume that c(t) is differentiable for positive times, and let \(\theta :(0,+\infty )\rightarrow (0,+\infty )\) be a nonincreasing function such that
We point out that now c(t) is not required to be continuous at \(t=0\), and \(\theta (t)\) is allowed to diverge as \(t\rightarrow 0^{+}\); actually, this is the interesting case. The effect of this singular behavior of \(c'(t)\) at \(t=0\) was studied by Colombini et al. [6] in the case where \(\theta (t)\sim 1/t^{\beta }\). Concerning the regularity of solutions, the situation is summarized in Table 2, where the assumptions in the first column refer to the behavior of \(\theta (t)\) as \(t\rightarrow 0^{+}\). Note that, due to the strict hyperbolicity condition, the divergence of the integral of \(|c'(t)|\) implies a highly oscillatory behavior of c(t).
The optimality of many points in Table 2 remained an open problem for almost two decades. In particular, the result of the last row was proved in [14, Theorem 2.9], while the optimality of the results of the second and third row follows, respectively, from Corollaries 2.6 and 2.7 of the present paper.
Singular behavior of the first two derivatives at the origin A natural way to extend the results of the previous paragraph is to consider the first two derivatives of the coefficient c(t), with the hope that a bound on \(|c'(t)|\) and \(|c''(t)|\) can prevent c(t) from oscillating too fast and yield a smaller derivative loss. A first result in this direction was obtained by Yamazaki in [22]. The assumption is that c(t) is twice differentiable for positive times and satisfies, up to multiplicative constants, the estimates
for every \(t\in (0,T_{0}]\). Under these assumptions she proved that problem (1.2)–(1.4) is well-posed with no derivative loss.
Some years later, Colombini et al. [7] assumed that, up to multiplicative constants, the coefficient c(t) satisfies
in a right neighborhood of the origin. Under these assumptions they proved that problem (1.2)–(1.4) is well-posed with finite derivative loss (see also [15, 16]).
These results were extended and unified recently in [14], where it is assumed that
where \(\varphi :(0,T_{0})\rightarrow (0,+\infty )\) and \(\psi :(0,T_{0})\rightarrow (0,+\infty )\) are suitable nonincreasing and continuous functions. The results of [14] are summarized in Table 3, where the first column refers to the behavior as \(t\rightarrow 0^{+}\).
We observe that in the case where \(\varphi (t)\) and \(\psi (t)\) are constant functions this is exactly the result of [22], while in the case where \(\varphi (t)\sim |\log t|\) and \(\psi (t)\) is constant this is exactly the result of [7]. Again, the derivative loss prescribed by Table 3 is residually optimal.
Modulus of continuity and first derivatives In this paper we combine the assumptions on the modulus of continuity and on the first derivative. More precisely, we assume that c(t) is continuous in the closed interval \([0,T_{0}]\), differentiable in the half-open interval \((0,T_{0}]\), and that it satisfies both (1.5) and (1.6) for suitable functions \(\omega \) and \(\theta \). The case where \(\omega (\sigma )\sim \sigma ^{\alpha }\) and \(\theta (t)\sim 1/t^{\beta }\) was considered by Colombini et al. [6] (see also [1]), while more subtle examples were considered by Colombini et al. [7] and by Del Santo et al. [10] (see also [19]). Here we unify and improve some of their results, both on the positive and on the negative side. More importantly, we show that all those special examples fit into a common framework.
Our main result is that the key quantity
determines the derivative loss of solutions to problem (1.2)–(1.4) according to Table 4, where the first column refers to the behavior of \(m(\lambda )\) as \(\lambda \rightarrow +\infty \).
As usual, all results are residually optimal.
Overview of the technique—upper bound for the derivative loss From the technical point of view, it is well-known that the spectral theorem for self-adjoint nonnegative operators reduces the abstract equation (1.2) to the family of ordinary differential equations
where \(\lambda \) is a positive real parameter. In particular, if one can prove that solutions to (1.8) satisfy an estimate of the form
where \(\phi _{+}(\lambda ,t)\) is a function independent of initial data, then the behavior of \(\phi _{+}(\lambda ,t)\) as \(\lambda \rightarrow +\infty \) determines the maximum possible derivative loss of solutions to (1.2)–(1.4) according to Table 5.
When c(t) is \(\omega \)-continuous, the approximated energy estimates introduced in [4] make it possible to show that (1.9) holds true with
and this explains all the results of Table 1 and much more, for example the well-posedness in Gevrey spaces in the case of Hölder continuous coefficients.
In a different direction, when c(t) is of class \(C^{1}\) in the closed interval \([0,T_{0}]\), the classical hyperbolic estimates give that (1.9) holds true with (note that in this case there is no dependence on \(\lambda \))
In this paper \(|c'(t)|\) is not necessarily integrable in a right neighborhood of the origin, and therefore we need to mix the two techniques, in the sense that we use an estimate of the first type in some initial interval [0, s], followed by an estimate of the second type in the remaining interval [s, t], where (1.6) provides a control on the derivative. When we optimize with respect to s we conclude that now (1.9) holds true with \(\phi _{+}(\lambda ,t)\sim m(\lambda )\), with \(m(\lambda )\) given by (1.7). This is enough to conclude that the derivative loss of solutions to (1.2) is at most the one given in Table 4. We refer to statement (1) of Theorem 2.3 for the details.
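To illustrate how the minimization over s produces \(m(\lambda )\), here is a small numerical sketch. It assumes, consistently with the limit cases discussed in Remark 2.10, that (1.7) has the form \(m(\lambda )=\min _{0<s\le T_{0}}\{\lambda \,\omega (1/\lambda )s+\int _{s}^{T_{0}}\theta (t)\,dt\}\); the helper names and parameter values are illustrative, not taken from the paper.

```python
import math

# ASSUMED form of the key quantity (1.7), consistent with Remark 2.10:
#   m(λ) = min over s in (0, T0] of  λ·ω(1/λ)·s + ∫_s^{T0} θ(t) dt.
# `int_theta(s)` must return the tail integral of θ over [s, T0].
def m(lam, omega, int_theta, T0=1.0, n=4000):
    best = float("inf")
    for i in range(1, n + 1):
        s = T0 * (i / n) ** 4  # grid points accumulating near s = 0
        best = min(best, lam * omega(1.0 / lam) * s + int_theta(s))
    return best

# Illustrative choice: ω(σ) = σ^(1/2), θ(t) = 1/t^2, T0 = 1,
# so the tail integral is ∫_s^1 t^(-2) dt = 1/s - 1.
# Remark 2.13 predicts m(λ) ~ λ^((1-α)(β-1)/β) = λ^(1/4).
omega = lambda s: math.sqrt(s)
int_theta = lambda s: 1.0 / s - 1.0
m1, m2 = m(1e4, omega, int_theta), m(1e8, omega, int_theta)
exponent = math.log(m2 / m1) / math.log(1e8 / 1e4)
print(exponent)  # numerically close to the predicted 1/4
```

With these illustrative choices the measured exponent is close to \((1-\alpha )(\beta -1)/\beta =1/4\) for \(\alpha =1/2\) and \(\beta =2\), in agreement with the third item of Remark 2.13.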
Overview of the technique—road map to counterexamples The main contribution of this paper is the construction of solutions that exhibit a prescribed derivative loss. This is a much more delicate issue, since it requires showing that estimates of the form (1.9) are in some sense optimal. In statement (2) of Theorem 2.3 the idea is to look for coefficients c(t) such that solutions to (1.8) satisfy
at least on a sequence \(\lambda _{n}\rightarrow +\infty \), with \(\phi _{-}(\lambda ,t)\sim m(\lambda )\) for positive times. These coefficients are called “universal activators” because the same coefficient induces the exponential growth of a sequence of solutions. They are the fundamental tool in the construction of counterexamples, as we show in Proposition 4.4 for operators that admit an unbounded sequence of eigenvalues, and in Proposition 4.5 for general unbounded self-adjoint operators, which is the case, for example, of the concrete wave equation (1.1) on the whole space or on an exterior domain.
Following the path introduced in [12,13,14], the existence of universal activators is reduced to the existence of families of “asymptotic activators”, namely families \(\{c_{\lambda }(t)\}\) of coefficients such that solutions to
satisfy (1.10) when \(\lambda \) is large enough.
We stress the difference between universal activators, where the same coefficient produces an exponential growth for a sequence of solutions, and asymptotic activators, which achieve the same exponential growth by choosing a different coefficient for different values of \(\lambda \) (note that in (1.11) the coefficient does depend on \(\lambda \)).
The main point proved in [14] is that the existence of sufficiently many families of asymptotic activators, within a certain class of coefficients, implies the existence of a residual set of universal activators in the same class. This is some sort of “nonlinear uniform boundedness principle” (where nonlinear refers to the map coefficient \(\mapsto \) solutions): if there exist sufficiently many families of objects that show asymptotically the optimality of some estimate, then there are residually many objects that show directly the optimality of the same estimate. Equivalently, if we cannot estimate the norm of \((u(t),u'(t))\) in some space Y in terms of the norm of \((u_{0},u_{1})\) in some space X, then there exist solutions such that \((u_{0},u_{1})\) lies in the space X, but \((u(t),u'(t))\) does not lie in the space Y for positive times.
Finally, asymptotic activators are produced starting from the usual building blocks, introduced for the first time in [4], and then modified and adapted in the subsequent literature. The key observation is that
grows exponentially and solves (1.11) with
When showing the optimality of the results of Table 1, it is enough to choose \(\varepsilon (t)\) independent of t and equal to \(\lambda \,\omega (1/\lambda )\), up to multiplicative constants. In this way the integral in the exponential term of (1.12) grows as a multiple of \(\lambda \,\omega (1/\lambda )\), as required, and it is possible to control the modulus of continuity of the coefficient because \(c_{\lambda }(t)\) oscillates with amplitude of order \(\omega (1/\lambda )\) on intervals of length of order \(1/\lambda \) (namely the period of the trigonometric terms).
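The oscillation mechanism described above can be illustrated numerically with a classical parametric-resonance computation. This is a simplified stand-in for the actual building blocks of (1.11), not the paper's construction: the coefficient \(1+\delta \cos (2\lambda t)\), the parameter values, and the predicted growth rate \(\delta \lambda /4\) for the amplitude come from standard averaging arguments.

```python
import math

# Parametric-resonance sketch (in the spirit of the building blocks of [4]):
# integrate u'' + λ² c(t) u = 0 with c(t) = 1 + δ cos(2λt), a coefficient
# oscillating with amplitude δ on intervals of length of order 1/λ.
# Averaging predicts amplitude growth like exp(δλt/4); parameters illustrative.
def energy_after(lam, delta, T, dt=1e-3):
    """RK4 integration of u'' + lam^2 (1 + delta*cos(2*lam*t)) u = 0,
    from u(0)=0, u'(0)=1; returns E(T) = u'(T)^2 + lam^2 u(T)^2."""
    def f(t, u, v):
        return v, -lam**2 * (1.0 + delta * math.cos(2.0 * lam * t)) * u
    t, u, v = 0.0, 0.0, 1.0
    for _ in range(int(round(T / dt))):
        k1u, k1v = f(t, u, v)
        k2u, k2v = f(t + dt/2, u + dt/2*k1u, v + dt/2*k1v)
        k3u, k3v = f(t + dt/2, u + dt/2*k2u, v + dt/2*k2v)
        k4u, k4v = f(t + dt, u + dt*k3u, v + dt*k3v)
        u += dt/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
    return v**2 + lam**2 * u**2

E_osc = energy_after(lam=20.0, delta=0.4, T=5.0)   # oscillating coefficient
E_flat = energy_after(lam=20.0, delta=0.0, T=5.0)  # constant coefficient
print(E_flat, E_osc)  # E_flat stays ~1, E_osc grows by many orders of magnitude
```

With the constant coefficient the energy is conserved, while the coefficient oscillating at period \(\pi /\lambda \) drives the energy up by many orders of magnitude, which is the kind of exponential growth that the activators exploit.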
In this paper we consider the minimizer \(s_{\lambda }\) of the minimum problem (1.7), and we check which of the two summands is bigger for \(s=s_{\lambda }\). When it is the first one, we define again \(\varepsilon (t)\) as \(\lambda \,\omega (1/\lambda )\) (see Proposition 4.9). When it is the second one, namely the integral, one would like to choose \(\varepsilon (t)=\theta (t)\), so that the integral in the exponential term of (1.12) grows as the integral of \(\theta (t)\). This choice has several disadvantages, mainly because we need to control \(c_{\lambda }'(t)\), and therefore the presence of \(\varepsilon '(t)\) in (1.13) forces us to assume that \(\theta (t)\) is twice differentiable, with substantial control on its derivatives.
In Proposition 4.10, we overcome this difficulty by choosing \(\varepsilon (t)\) equal to a piecewise constant approximation of \(\theta (t)\), changing the constant whenever the trigonometric terms vanish. In this way \(c_{\lambda }(t)\) remains Lipschitz continuous, the term with \(\varepsilon '(t)\) disappears, and the integral in (1.12) is equivalent to a Riemann sum for the integral of \(\theta (t)\). The lack of this type of construction, which becomes fundamental when the growth of \(\theta (t)\) is close enough to 1/t, is probably the reason why the previous results in the literature were not optimal.
Related problems and future perspectives We hope that the methods of this paper, more than the results themselves, can be useful for analogous problems. Certainly the nonlinear uniform boundedness principle, namely the general path from asymptotic to universal activators, can be used to show in an efficient way the optimality of many positive results. Indeed, some of the known counterexamples are still stated in the form of the impossibility of a certain energy estimate, in the spirit of asymptotic activators, and not as examples of solutions that actually lose derivatives (see for example [2, Theorem 2.3 and Theorem 2.5], or [11, Theorem 2.7]).
In a different direction, we are confident that our techniques could also shed some light on a related problem studied in the last two decades in a series of papers by Reissig and Smith [21], Colombini [3], Hirosawa [17, 18], and Ebert et al. [11]. They consider the wave equation (1.1) with a smooth coefficient c(t) defined for all positive times, with two types of assumptions: the decay of some derivatives of c(t) as \(t\rightarrow +\infty \), and a “stabilization condition”, namely some integral control on \(|c(t)-c_{\infty }|\), where \(c_{\infty }\) is a suitable constant. They are interested in what they call “generalized energy conservation”, namely the boundedness of the ratio between the energy at time t and the energy at time 0. There are still some gaps between the positive results and the counterexamples (see for example [11, Table 1 and Table 2]). The analogy with this paper is plausible: the decay of derivatives at infinity should correspond to the blow-up at the origin, the stabilization condition could correspond to the modulus of continuity, and the generalized energy conservation to well-posedness with no derivative loss.
Structure of the paper This paper is organized as follows. In Sect. 2, we state our main results and some consequences, and we comment on them. In Sect. 3, we prove the positive part, namely the energy estimates from above that yield a bound from above for the derivative loss. In Sect. 4, we present the construction of the asymptotic activators, and how they lead to our counterexamples. Finally, in Sect. 5, we prove two corollaries concerning two special cases.
2 Statements
2.1 Notations and main result
Let us start by introducing some terminology and some notations.
Definition 2.1
(Modulus of continuity)
A modulus of continuity is a function \(\omega :[0,+\infty )\rightarrow [0,+\infty )\) such that
-
\(\omega (\sigma )>0\) for \(\sigma >0\), and \(\omega (\sigma )\rightarrow 0\) as \(\sigma \rightarrow 0^{+}\),
-
the function \(\sigma \mapsto \omega (\sigma )\) is nondecreasing,
-
the function \(\sigma \mapsto \sigma /\omega (\sigma )\) is nondecreasing.
A function \(c:[0,T_{0}]\rightarrow \mathbb {R}\) is called \(\omega \)-continuous if it satisfies (1.5) for some modulus of continuity \(\omega \).
Definition 2.2
(Classes of coefficients) Let \(T_{0}\), \(\mu _{1}\), \(\mu _{2}\) be positive real numbers with \(\mu _{2}>\mu _{1}\). Let \(\omega \) be a modulus of continuity, and let \(\theta :(0,T_{0})\rightarrow (0,+\infty )\) be a continuous and nonincreasing function.
We call \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) the set of functions \(c\in W^{1,\infty }_{loc}((0,T_{0}))\) that satisfy (1.3) and (1.5) in the pointwise sense, and (1.6) in the almost everywhere sense.
We observe that \(\mathcal{PS}\mathcal{}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) is a complete metric space with respect to the distance induced by the norm of \(L^{\infty }((0,T_{0}))\). We observe also that the elements of this space are continuous in \([0,T_{0}]\), and therefore pointwise values c(t) are well defined.
We are now ready to state our main result.
Theorem 2.3
(Main energy estimates)
Let \(T_{0}\), \(\mu _{1}\), \(\mu _{2}\) be positive real numbers with \(\mu _{2}>\mu _{1}\). Let \(\omega \) be a modulus of continuity, and let \(\theta :(0,T_{0})\rightarrow (0,+\infty )\) be a continuous and nonincreasing function. For every positive real number \(\lambda \), let us define \(m(\lambda )\) as in (1.7). Let us consider the classes of coefficients introduced in Definition 2.2, and let us introduce the constants
Then the following statements hold true.
-
(1)
(Energy estimate from above). For every coefficient \(c\in \mathcal{PS}\mathcal{}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) and every positive real number \(\lambda \) it turns out that every solution to (1.8) satisfies
$$\begin{aligned} u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2}\le M_{1}\left( u_{\lambda }'(0)^{2}+\lambda ^{2}u_{\lambda }(0)^{2}\right) \exp (M_{2}\,m(\lambda )) \quad \forall t\in [0,T_{0}]. \nonumber \\ \end{aligned}$$(2.2) -
(2)
(Energy estimate from below). Let us assume that \(m(\lambda )\rightarrow +\infty \) as \(\lambda \rightarrow +\infty \).
Then, for every sequence of positive real numbers \(\lambda _{n}\rightarrow +\infty \), the set of coefficients \(c\in \mathcal{PS}\mathcal{}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) such that the solutions to (1.8) with initial data \(u_{\lambda }(0)=0\), \(u_{\lambda }'(0)=1\) satisfy
$$\begin{aligned} \limsup _{n\rightarrow +\infty }\left( |u_{\lambda _{n}}'(t)|^{2}+\lambda _{n}^{2}|u_{\lambda _{n}}(t)|^{2}\right) \exp \left( -M_{3}\,m(\lambda _{n})\right) \ge 1 \quad \forall t\in (0,T_{0}] \end{aligned}$$is residual.
The energy estimates of Theorem 2.3 can be applied to the abstract wave equation, yielding the following result.
Theorem 2.4
(Derivative loss for the abstract wave equation)
Let \(T_{0}\), \(\mu _{1}\), \(\mu _{2}\), \(\omega \), \(\theta \), and \(m(\lambda )\) be as in Theorem 2.3. Let us consider the classes of coefficients introduced in Definition 2.2. Let us consider the abstract equation (1.2), where A is a linear nonnegative self-adjoint operator in some real Hilbert space H.
Then the following statements hold true.
-
(1)
(Estimate from above for the derivative loss). For every \(c\in \mathcal{PS}\mathcal{}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) the derivative loss of solutions to problem (1.2)–(1.4) is at most the one prescribed by Table 4.
-
(2)
(Optimality of the derivative loss). If the operator A is unbounded, then the set of coefficients \(c\in \mathcal{PS}\mathcal{}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for which problem (1.2)–(1.4) exhibits exactly the derivative loss prescribed by Table 4 is residual.
Remark 2.5
One could also define the class \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) by considering only coefficients c(t) that are of class \(C^{1}\) for positive times, so that (1.6) can now be required in the pointwise sense. In this case a structure of complete metric space is induced by the norm
All the previous results hold true also in this restricted class. This is actually the approach that was carried out in [14], and it delivers a residual (in this new space) class of counterexamples that are of class \(C^{1}\) for positive times. On the other hand, as explained in [14, Section 4.5], it is always possible to produce counterexamples of class \(C^{\infty }\).
2.2 Some examples
Let us discuss the consequences of Theorem 2.4 in some special cases. A first result is that well-posedness with no derivative loss holds true in the class of coefficients \(\mathcal{PS}\mathcal{}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) if and only if either \(\theta (t)\) guarantees that c(t) has bounded variation, or \(\omega (\sigma )\) guarantees that c(t) is Lipschitz continuous.
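Both sufficient conditions can be read directly from the key quantity (1.7). Assuming, consistently with the limit cases discussed in Remark 2.10, that (1.7) has the form \(m(\lambda )=\min _{0<s\le T_{0}}\{\lambda \,\omega (1/\lambda )s+\int _{s}^{T_{0}}\theta (t)\,dt\}\), a sketch of the argument is:

```latex
% Sketch under the assumed form of (1.7): bounded m(λ), combined with the
% energy estimate (2.2), yields no derivative loss.
\[
\theta\in L^{1}((0,T_{0}))
\ \Longrightarrow\
m(\lambda)\le\lim_{s\to 0^{+}}\left(\lambda\,\omega(1/\lambda)\,s
+\int_{s}^{T_{0}}\theta(t)\,dt\right)=\int_{0}^{T_{0}}\theta(t)\,dt,
\]
\[
\omega(\sigma)\le L\sigma
\ \Longrightarrow\
m(\lambda)\le\lambda\,\omega(1/\lambda)\,T_{0}\le L\,T_{0}
\quad\text{(choice } s=T_{0}\text{)}.
\]
```

In both cases \(m(\lambda )\) stays bounded as \(\lambda \rightarrow +\infty \), so the exponential factor in (2.2) is bounded uniformly in \(\lambda \), which rules out any derivative loss.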
Corollary 2.6
Let us consider the same setting as in Theorem 2.4.
Let us assume that the operator A is unbounded, and that
Then the set of coefficients \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for which problem (1.2)–(1.4) exhibits at least an arbitrarily small derivative loss is residual.
Let us now examine the case where \(\theta (t)\sim 1/t\). According to Table 2, this assumption guarantees that the derivative loss is at most finite. Now we can show that the derivative loss is actually arbitrarily small if c(t) is \(\alpha \)-Hölder continuous for every \(\alpha \in (0,1)\), and that this assumption is optimal.
Corollary 2.7
Let us consider the same setting as in Theorem 2.4.
-
(1)
(Arbitrarily small derivative loss). Let us assume that the modulus of continuity satisfies
$$\begin{aligned} \forall \alpha \in (0,1) \quad \lim _{\sigma \rightarrow 0^{+}}\frac{\omega (\sigma )}{\sigma ^{\alpha }}=0, \end{aligned}$$(2.4)and the function \(\theta (t)\) satisfies
$$\begin{aligned} \limsup _{t\rightarrow 0^{+}}t\cdot \theta (t)<+\infty . \end{aligned}$$(2.5) Then, for every propagation speed \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\), problem (1.2)–(1.4) has at most an arbitrarily small derivative loss.
-
(2)
(Finite derivative loss). Let us assume that the operator A is unbounded, that the modulus of continuity satisfies
$$\begin{aligned} \exists \alpha \in (0,1) \quad \liminf _{\sigma \rightarrow 0^{+}}\frac{\omega (\sigma )}{\sigma ^{\alpha }}>0, \end{aligned}$$(2.6)and that the function \(\theta (t)\) satisfies
$$\begin{aligned} \liminf _{t\rightarrow 0^{+}}t\cdot \theta (t)>0. \end{aligned}$$(2.7) Then the set of propagation speeds \(c\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for which problem (1.2)–(1.4) has at least a finite derivative loss is residual.
Finally, let us examine the case where \(\theta (t)\gg 1/t\). In this case problem (1.2)–(1.4) can exhibit any type of derivative loss, depending on the modulus of continuity \(\omega \). The modulus of continuity that guarantees a finite derivative loss is always stronger than any Hölder modulus \(\sigma ^{\alpha }\) with \(\alpha \in (0,1)\), but weaker than the classical \(\sigma |\log \sigma |\), which guarantees a finite derivative loss even without any assumption on \(c'(t)\). In Table 6 we display, for some special choices of \(\theta (t)\), the moduli of continuity that guarantee that \(m(\lambda )\sim \log \lambda \), and hence a finite derivative loss. As usual, these choices of \(\omega (\sigma )\) represent the threshold between an arbitrarily small derivative loss and an infinite derivative loss.
Remark 2.8
The reader might be puzzled by the multiple continuity moduli that appear in the first row, or by the fact that the continuity modulus is the same in the third and fourth row, where in addition \(\theta (t)\) depends on \(\beta \) but \(\omega (\sigma )\) does not. This is due to the bizarre behavior of the minimum problem (1.7). For example, if \(c_{1}\), \(c_{2}\), R are positive real numbers such that
then one can check that there exists a constant \(c_{3}\), depending on \(c_{1}\), \(c_{2}\), R, such that
which is enough to guarantee a finite derivative loss. Of course different values of \(c_{1}\), \(c_{2}\), R yield different values of \(c_{3}\), and therefore a slower or faster derivative loss, but in any case a finite one.
In the case of the third and fourth row, we refer to Lemma A.1 in the Appendix for the details.
Remark 2.9
Some of the choices of Table 6 are considered in the previous literature, but without obtaining the optimal result. For example, the modulus \(\omega (\sigma )\) of the first row is considered in [19, Section 4.5], where a finite derivative loss is obtained with the correct condition \(\theta (t)\sim |\log t|/t\), and in [7, Example 1.2(ii)], where an infinite derivative loss is obtained, but only with the stronger condition \(\theta (t)\gg |\log t|^{2}/t\). The modulus \(\omega (\sigma )\) of the second row is considered in [7, Example 1.2(i)], where an infinite derivative loss is obtained, but only with the stronger condition \(\theta (t)\gg 1/t^{1+\beta }\cdot \log ^{1+\beta }(|\log t|)\).
Similarly, the choices of \(\theta (t)\) of the second and the fourth row (the latter in the special case \(\beta =2\)) are considered in [10, Example 2.1 and Example 2.2]; in both cases a finite derivative loss is obtained with a stronger modulus of continuity, and there is no mention of infinite or arbitrarily small derivative loss.
2.3 Comments
We conclude with some comments on our main results.
Remark 2.10
(Limit cases)
The cases where one prescribes only the modulus of continuity \(\omega (\sigma )\), or only the blow-up rate \(\theta (t)\) of the derivative, can be included in Theorems 2.3 and 2.4 as special limit cases.
If we prescribe only the modulus of continuity \(\omega (\sigma )\), we can think that \(\theta (t)\equiv +\infty \), and therefore imagine that in (1.7) the minimum is attained when \(s=T_{0}\). We obtain that \(m(\lambda )=\lambda \,\omega (1/\lambda )T_{0}\), which explains the results of Table 1.
If we prescribe only \(\theta (t)\), and of course also the strict hyperbolicity condition, we can extend the notion of modulus of continuity to include the borderline case in which \(\omega (\sigma )\equiv \mu _{2}-\mu _{1}\), and then define
The minimum is attained when \(\theta (s)=(\mu _{2}-\mu _{1})\lambda \), and with some standard calculus we obtain all the results of Table 2.
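For instance, for \(\theta (t)=1/t^{\beta }\) with \(\beta >1\), the standard calculus alluded to above can be sketched as follows, under the form of \(m(\lambda )\) recalled in this remark (with the constant modulus \(\omega \equiv \mu _{2}-\mu _{1}\)):

```latex
% The condition θ(s) = (μ₂-μ₁)λ identifies the minimizer, at which the two
% summands have the same order of magnitude.
\[
m(\lambda)=\min_{0<s\le T_{0}}\left\{(\mu_{2}-\mu_{1})\lambda\,s
+\int_{s}^{T_{0}}\frac{dt}{t^{\beta}}\right\},
\qquad
s_{\lambda}=\bigl((\mu_{2}-\mu_{1})\lambda\bigr)^{-1/\beta},
\]
\[
m(\lambda)\sim(\mu_{2}-\mu_{1})\lambda\,s_{\lambda}
+\frac{s_{\lambda}^{\,1-\beta}}{\beta-1}
\sim\left(1+\frac{1}{\beta-1}\right)
\bigl((\mu_{2}-\mu_{1})\lambda\bigr)^{(\beta-1)/\beta}.
\]
```

This gives \(m(\lambda )\sim \lambda ^{(\beta -1)/\beta }\), in agreement with the Gevrey well-posedness of order \(\beta /(\beta -1)\) recalled in Remark 2.13.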
Remark 2.11
(Quantitative estimate of the derivative loss) In the cases where \(m(\lambda )\sim \log \lambda \), and more precisely
the liminf and limsup above provide, respectively, an estimate from below and from above for the finite derivative loss, namely for the constant \(\delta \) that appears in the definition.
Remark 2.12
(Progressive vs instantaneous derivative loss)
There is a subtle difference in the derivative loss between the case where only the modulus of continuity is prescribed, and the cases where we assume c(t) to be differentiable for positive times. For the sake of simplicity, let us limit ourselves to the finite derivative loss.
In the case where one prescribes only the modulus of continuity \(\omega (\sigma )\sim \sigma |\log \sigma |\), the finite derivative loss is in general progressive in the sense that, if \((u_{0},u_{1})\in D(A^{\beta +1/2})\times D(A^{\beta })\) for some \(\beta \), then \((u(t),u'(t))\in D(A^{\beta -\delta t+1/2})\times D(A^{\beta -\delta t})\) for positive times. In words, this means that the derivative loss increases with time, and tends to 0 as \(t\rightarrow 0^{+}\).
When c(t) is of class \(C^{1}\) for positive times, any form of derivative loss is instantaneous, and in particular a finite derivative loss does not tend to 0 as \(t\rightarrow 0^{+}\) (but of course now \(\omega (\sigma )\gg \sigma |\log \sigma |\)). After the initial loss of regularity, in this case there is no further loss of derivatives, in the sense that the implication
holds true for every \(0<t_{0}\le t\le T_{0}\) and every \(\gamma \). In words, this means that the singular behavior of c(t) in the origin is responsible for the instantaneous loss of derivatives but, after the initial loss, the regularity of solutions is preserved by the smoothness of c(t).
Remark 2.13
(Well-posedness in Gevrey spaces)
The consequences of Theorem 2.3 go far beyond Theorem 2.4, and in particular beyond the classification of derivative loss according to Table 4. Indeed, the behavior of \(m(\lambda )\) as \(\lambda \rightarrow +\infty \) provides a sharp “measure” of the derivative loss, even when it is infinite or arbitrarily small. The formalization of this idea relies on the notion of generalized Gevrey spaces, or Gevrey distributions (we refer to [13, Definition 2.2] for more details on the abstract functional setting for abstract wave equations). Just to give some examples, let us stick to standard Gevrey spaces. The positive side is represented by well-posedness results, as follows.
-
If we assume that \(\omega (\sigma )=\sigma ^{\alpha }\) for some \(\alpha \in (0,1)\), and we have no information about the derivative, then we can assume (as explained in Remark 2.10) that \(m(\lambda )\sim \lambda \,\omega (1/\lambda )=\lambda ^{1-\alpha }\), which implies the classical result of [4] according to which the problem is well-posed in Gevrey spaces of order \(s\le (1-\alpha )^{-1}\).
-
If we assume that \(\theta (t)=1/t^{\beta }\) for some \(\beta >1\), and we have no information about the modulus of continuity, then we obtain (as explained in Remark 2.10) that \(m(\lambda )\sim \lambda ^{(\beta -1)/\beta }\), which implies the classical result according to which the problem is well-posed in Gevrey spaces of order \(s\le \beta /(\beta -1)\) (see [6, Theorem 2]).
-
If we require both conditions, namely that \(\omega (\sigma )=\sigma ^{\alpha }\) and \(\theta (t)=1/t^{\beta }\), then with some standard calculus we obtain that \(m(\lambda )\sim \lambda ^{(1-\alpha )(\beta -1)/\beta }\), and therefore the problem is well-posed in Gevrey spaces of order \(s<\beta (\beta -1)^{-1}(1-\alpha )^{-1}\) (see [6, Theorem 3]). This is one more example of “collaboration” between the modulus of continuity and the control on the derivative in order to provide well-posedness results for less regular data.
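The “standard calculus” mentioned in the last item can be sketched as follows (our computation, with constants suppressed, assuming as above that the quantity minimized in (1.7) is \(\lambda \,\omega (1/\lambda )s+\int _{s}^{T_{0}}\theta (t)\,dt\)).

```latex
% With omega(sigma) = sigma^alpha and theta(t) = 1/t^beta (beta > 1):
\[
\psi(s) \;\approx\; \lambda^{1-\alpha}\,s + \frac{s^{1-\beta}}{\beta-1},
\qquad
\psi'(s_{\lambda})=0 \;\Longrightarrow\; s_{\lambda} \approx \lambda^{-(1-\alpha)/\beta},
\]
% so that both terms of psi(s_lambda) have the same size, namely
\[
m(\lambda) \;\approx\; \lambda^{1-\alpha}\, s_{\lambda}
\;\approx\; \lambda^{(1-\alpha)(\beta-1)/\beta},
\]
% which yields well-posedness in Gevrey spaces of order
% s < beta (beta-1)^{-1} (1-alpha)^{-1}.
```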
The negative side is a derivative loss from Gevrey spaces to Gevrey distributions. For example, in the case where \(\omega (\sigma )=\sigma ^{\alpha }\) with \(\alpha \in (0,1)\), and there is no information on the derivative, there exist solutions whose initial data are in the Gevrey space of order s for every \(s>(1-\alpha )^{-1}\), and such that for positive times they do not belong to the space of Gevrey distributions of order s for every \(s>(1-\alpha )^{-1}\).
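The exponent \((1-\alpha )(\beta -1)/\beta \) can also be checked numerically. The following sketch is ours and assumes that, up to multiplicative constants, \(m(\lambda )=\min _{s}\{\lambda \,\omega (1/\lambda )s+\int _{s}^{T_{0}}\theta (t)\,dt\}\); it minimizes this expression on a grid and estimates the growth exponent of \(m(\lambda )\) as a log-log slope.

```python
import math

# Hypothetical numerical check (not from the paper): with omega(sigma) = sigma**alpha
# and theta(t) = t**(-beta), the minimum defining m(lambda) should grow like
# lambda**((1 - alpha) * (beta - 1) / beta).

def m(lam, alpha, beta, T0=1.0, grid=200000):
    """Minimize lam**(1-alpha) * s + integral_s^T0 t**(-beta) dt over a grid of s."""
    best = float("inf")
    for i in range(1, grid + 1):
        s = T0 * i / grid
        tail = (s**(1 - beta) - T0**(1 - beta)) / (beta - 1)  # closed form, beta > 1
        best = min(best, lam**(1 - alpha) * s + tail)
    return best

alpha, beta = 0.5, 2.0
expected = (1 - alpha) * (beta - 1) / beta  # = 0.25
# estimate the exponent as a log-log slope between two large values of lambda
slope = (math.log(m(1e8, alpha, beta)) - math.log(m(1e6, alpha, beta))) / math.log(1e2)
print(round(slope, 2))  # close to the expected exponent
```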
3 Energy estimates from above
In this section, we prove statement (1) of Theorem 2.3, which implies in a standard way also statement (1) of Theorem 2.4.
To begin with, let us extend c(t) to the whole half-line \(t\ge 0\) by setting
For every \(\varepsilon >0\) let us set
Then it turns out that \(c_{\varepsilon }\in C^{1}([0,T_{0}])\) and satisfies the following estimates
for every \(t\in [0,T_{0}]\). Following [4], we consider the usual Kovaleskyan energy
the usual hyperbolic energy
and the approximated hyperbolic energy
These energies are equivalent in the sense that
and
for every admissible value of the parameters. What we need in (2.2) is an estimate of the Kovaleskyan energy (3.2). To this end, for every \(s\in (0,T_{0})\) we estimate the approximated hyperbolic energy in [0, s], and the standard hyperbolic energy in \([s,T_{0}]\).
The time-derivative of (3.4) is
from which we deduce that
Integrating this differential inequality, and taking (3.1) into account, we deduce that
Setting \(\varepsilon :=1/\lambda \), and recalling (3.6), this implies that
where \(M_{1}\) and \(M_{2}\) are defined by (2.1). The time-derivative of (3.3) is
Integrating this differential inequality we deduce that
for every \(t\in [s,T_{0}]\). Recalling the equivalence (3.5), and estimate (3.7) with \(t=s\), we conclude that
for every \(t\in [s,T_{0}]\). On the other hand, the same estimate holds true also for \(t\in [0,s]\) because of (3.7). Optimizing with respect to s we obtain exactly (2.2). \(\Box \)
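Schematically (our summary, with all multiplicative constants collapsed into \(C_{i}\)), the two estimates combine as follows: on [0, s] the approximated energy grows at most like \(\exp (C_{1}\lambda \,\omega (1/\lambda )s)\), while on \([s,T_{0}]\) the hyperbolic energy grows at most like \(\exp (C_{2}\int _{s}^{T_{0}}\theta (\tau )\,d\tau )\), and hence

```latex
\[
E(t) \;\le\; C\,E(0)\,
\exp\!\Bigl(C_{3}\Bigl\{\lambda\,\omega(1/\lambda)\,s
+\int_{s}^{T_{0}}\theta(\tau)\,d\tau\Bigr\}\Bigr)
\qquad\text{for every } s\in(0,T_{0}),
\]
% and minimizing the right-hand side over s gives a bound of the
% form C E(0) exp(C_3 m(lambda)), which is the shape of (2.2).
```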
4 Counterexamples
In this section, we prove statement (2) of Theorem 2.3, and we show how that statement leads to the counterexamples required for the optimality part in Theorem 2.4.
4.1 Asymptotic and universal activators
Let us begin by summarizing the theory developed in [14, section 4.1]. In the sequel we consider solutions to the family of ordinary differential equations
with initial data
We point out that in (4.1) the propagation speed depends on the parameter \(\lambda \). When the propagation speed is fixed, we consider equation
with initial data
Let us recall our notion of activators (compare with [13, 14]).
Definition 4.1
(Universal activators of a sequence)
Let \(T_{0}\) be a positive real number, let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function, and let \(\{\lambda _{n}\}\) be a sequence of positive real numbers such that \(\lambda _{n}\rightarrow +\infty \).
A universal activator of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \) is a coefficient \(c\in L^{1}((0,T_{0}))\) such that the corresponding sequence \(\{v_{\lambda _{n}}(t)\}\) of solutions to (4.3)–(4.4) satisfies
Definition 4.2
(Asymptotic activators)
Let \(T_{0}\) be a positive real number, and let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function.
A family of asymptotic activators with rate \(\phi \) is a family of coefficients \(\{c_{\lambda }(t)\}\subseteq L^{1}((0,T_{0}))\) with the property that, for every \(\delta \in (0,T_{0})\), there exist two positive constants \(M_{\delta }\) and \(\lambda _{\delta }\) such that the corresponding family \(\{u_{\lambda }(t)\}\) of solutions to (4.1)–(4.2) satisfies
The coefficient 2 in the exponential of (4.6) could be replaced by any number greater than 1. The following result shows that families of asymptotic activators are the basic tool in the construction of universal activators. It is the “nonlinear uniform boundedness principle” that we mentioned in the introduction, namely “the existence of many (\(\lambda \) dependent) bad coefficients for every \(\lambda \) implies the existence of a single (\(\lambda \) independent) coefficient that is bad for many \(\lambda \)’s”. This is the point where the Baire category theorem discloses its power. For a proof, we refer to [14, Proposition 4.5].
Proposition 4.3
(From asymptotic to universal activators)
Let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function such that \(\phi (\lambda )\rightarrow +\infty \) as \(\lambda \rightarrow +\infty \). Let \(T_{0}\) be a positive real number, and let \(\mathcal{PS}\subseteq C^{0}([0,T_{0}])\) be a closed subset (with respect to uniform convergence).
Let us assume that there exists a dense subset \({\mathcal {D}}\subseteq \mathcal{PS}\) such that for every \(c\in {\mathcal {D}}\) there exists a family of asymptotic activators \(\{c_{\lambda }\}\subseteq \mathcal{PS}\) with rate \(\phi \) such that \(c_{\lambda }\rightarrow c\).
Then, for every unbounded sequence \(\{\lambda _{n}\}\) of positive real numbers, the set of elements in \(\mathcal{PS}\) that are universal activators of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \) is residual in \(\mathcal{PS}\) (and in particular nonempty).
Finally, the following statement clarifies the crucial connection between universal activators and derivative loss. In order to show the strategy, we start by proving the result in the special case where A admits an unbounded sequence of eigenvalues (see [14, Proposition 4.3]).
Proposition 4.4
(Universal activators vs derivative loss—Model case)
Let H be a Hilbert space, and let A be a nonnegative self-adjoint operator on H. Let us assume that there exists a sequence \(\{e_{n}\}\) of orthonormal vectors in H, and an unbounded sequence of positive real numbers \(\{\lambda _{n}\}\) such that \(Ae_{n}=\lambda _{n}^{2}e_{n}\) for every positive integer n.
Let \(T_{0}\) be a positive real number, let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function, and let \(c\in L^{1}((0,T_{0}))\) be a universal activator of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \). Let us assume also that
Then the asymptotic behavior of \(\{\phi (\lambda _{n})\}\) determines the derivative loss of solutions to (1.2)–(1.4) according to the following scheme
Proof
To begin with, we observe that (4.7) implies that
For every positive integer n, let us set
and let us consider problem (1.2)–(1.4) with initial data
It is well-known that the unique solution is given by (a priori this series converges just in the sense of ultradistributions)
where \(\{v_{\lambda }(t)\}\) is the family of solutions to (4.3)–(4.4). In particular, for every choice of the real numbers \(\beta \) and \(\gamma \) it turns out that
and
Now we discuss the regularity of initial data by exploiting (4.13) and (4.11), and the regularity of the corresponding solution u(t) by exploiting (4.14) and condition (4.5) in the definition of universal activator. We distinguish three scenarios.
-
Under the assumption in (4.8) we observe that (4.13) converges when \(\beta =0\), while (4.14) does not converge when \(\gamma =0\). It follows that \((u_{0},u_{1})\in D(A^{1/2})\times H\), while \((u(t),u'(t))\not \in D(A^{1/2})\times H\) for every \(t\in (0,T_{0}]\), which shows that in this case the solution u(t) exhibits at least an arbitrarily small derivative loss.
-
Under the assumption in (4.9), let \(\delta \in (0,+\infty )\) denote the liminf of \(\phi (\lambda _{n})/\log \lambda _{n}\). In this case we observe that (4.13) converges for every \(\beta <\delta /8\), while (4.14) fails to converge for every \(\gamma <\delta /8\). As a consequence, the derivative loss of the solution u(t) is at least \(\delta /4\).
-
Under the assumption in (4.10) we observe that (4.13) converges for every \(\beta \in \mathbb {R}\), and in particular \(u_{1}\in D(A^{\infty })\), while (4.14) fails to converge for every \(\gamma \in \mathbb {R}\), which implies that the solution u(t) has an infinite derivative loss.
\(\square \)
In the following result we extend the construction of counterexamples to general unbounded self-adjoint operators.
Proposition 4.5
(Universal activators vs derivative loss—general case)
Let H be a separable Hilbert space, and let A be a nonnegative self-adjoint operator on H. Let \(T_{0}\) be a positive real number, and let \(\phi :(0,+\infty )\rightarrow (0,+\infty )\) be a function.
Let us assume that the operator A is unbounded, and that \(\phi (\lambda )\rightarrow +\infty \) as \(\lambda \rightarrow +\infty \).
Then there exists a sequence of positive real numbers \(\lambda _{n}\rightarrow +\infty \) with the following property. If \(c\in L^{1}((0,T_{0}))\) is a universal activator of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \), then the asymptotic behavior of \(\{\phi (\lambda _{n})\}\) determines the derivative loss of solutions to (1.2)–(1.4) according to the scheme (4.8) through (4.10).
Proof
We imitate the proof of Proposition 4.4 by exploiting the general form of the spectral theorem and a reinforced version of universal activators.
Definition of the sequence \(\{\lambda _{n}\}\). According to the spectral theorem for self-adjoint operators (see for example [20, Theorem VIII.4]) there exist a finite measure space \((M,\mu )\), an isometric bijective map \(\mathscr {F}:H\rightarrow L^{2}(M,\mu )\), and a measurable function \(\lambda :M\rightarrow [0,+\infty )\) such that the operator A on H acts as the multiplication operator by \(\lambda ^{2}\) in \(L^{2}(M,\mu )\).
More precisely, every vector \(w\in H\) is associated with its “generalized Fourier transform” \(\widehat{w}(\xi ):=\mathscr {F}(w)\in L^{2}(M,\mu )\) in such a way that
-
\(w\in D(A^{\beta })\) if and only if \((1+\lambda (\xi )^{2})^{\beta }\,\widehat{w}(\xi )\in L^{2}(M,\mu )\),
-
if \(w\in D(A)\) then \([\mathscr {F}(Aw)](\xi )=\lambda (\xi )^{2}\,\widehat{w}(\xi )\) for \(\mu \)-almost every \(\xi \in M\).
Since A is unbounded, the function \(\lambda (\xi )\) is essentially unbounded, and therefore there exists a sequence of positive real numbers \(\lambda _{n}\rightarrow +\infty \) such that
Up to passing to a subsequence, we can also assume that the sequence \(\{\lambda _{n}\}\) is strictly increasing and (4.7) holds true.
A “stronger” property of universal activators. Let \(\{v_{\lambda }(t)\}\) denote the family of solutions to (4.3)–(4.4). We claim that there exists a sequence \(\{r_{n}\}\) of positive real numbers such that the intervals \([\lambda _{n}-r_{n},\lambda _{n}+r_{n}]\) are pairwise disjoint and, if we set
then it turns out that
To this end, it is enough to observe that the map
is continuous, and then choose \(r_{n}\) in such a way that
for every \(t\in [0,T_{0}]\) and every \(\lambda \in [\lambda _{n}-r_{n},\lambda _{n}+r_{n}]\), and in particular
so that now (4.16) follows from (4.5) because \(\phi (\lambda _{n})\rightarrow +\infty \). Up to reducing \(r_{n}\) if necessary, we can also assume that
Construction of counterexamples. Let \(\{\lambda _{n}\}\) be any sequence as in the first paragraph, and let \(c\in L^{1}((0,T_{0}))\) be any universal activator of the sequence \(\{\lambda _{n}\}\) with rate \(\phi \). We need to show that problem (1.2)–(1.4) exhibits the prescribed derivative loss. To this end, for every positive integer n we define \(r_{n}\) as in the previous paragraph, we consider the set
which has positive measure, and we call \(\widehat{w}_{n}(\xi )\) the characteristic function of \(M_{n}\) (namely \(\widehat{w}_{n}(\xi )=1\) if \(\xi \in M_{n}\), and \(\widehat{w}_{n}(\xi )=0\) otherwise). Then we define \(a_{n}\) as in (4.12), we set
and we consider problem (1.2)–(1.4) with initial data \(u_{0}=0\) and \(u_{1}:=\mathscr {F}^{-1}(\widehat{u}_{1}(\xi ))\). It is well-known that the “generalized Fourier transform” of the solution is
where \(\{v_{\lambda }(t)\}\) is again the family of solutions to (4.3)–(4.4).
Let us examine the regularity of \(u_{1}\) and of the pair \((u(t),u'(t))\). As for the regularity of \(u_{1}\), for every real number \(\beta \ge 0\) it turns out that
On the other hand, since the sets \(M_{n}\) are pairwise disjoint, we obtain that
where in the last step we exploited the estimate from above in (4.17), and analogously
so that in conclusion
As for the regularity of u(t), we observe that for every real number \(\gamma \ge 0\) it turns out that \((u(t),u'(t))\in D(A^{-\gamma +1/2})\times D(A^{-\gamma })\) if and only if
Recalling (4.15), the last integral can be rewritten and estimated as follows
where in the last inequality we exploited the estimate from above in (4.17).
Thanks to (4.16), at this point all conclusions follow as in Proposition 4.4. \(\square \)
4.2 Building block and dense subset
From the general theory, we know that we need to show that asymptotic activators can approximate all coefficients in a dense subset. In this subsection, we identify this dense subset, and then we describe the starting point of the construction of the approximating family of asymptotic activators.
Definition 4.6
Let \(T_{0}\), \(\mu _{1}\), \(\mu _{2}\), \(\omega \), \(\theta \) be as in Definition 2.2. We call \({\mathcal {D}}\) the set of all functions \(c_{*}\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for which there exist real numbers \(T_{1}\), \(\gamma \), and \(\eta \) (that might depend on \(c_{*}\)) with
such that
and
When we want to emphasize the parameters we write \(c_{*}\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\).
In words, the elements of \({\mathcal {D}}\) are constant in a right neighborhood of the origin, and in this neighborhood they saturate neither the strict hyperbolicity condition nor the inequality in the definition of \(\omega \)-continuity. As one can easily guess, the result is that these special coefficients are dense in the classes introduced in Definition 2.2.
Proposition 4.7
(Density)
The set \({\mathcal {D}}\) is dense in \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) for every admissible choice of the parameters.
Proof
Let c(t) be any element of \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\). For every \(\varepsilon \in (0,1)\), with \(\varepsilon <T_{0}\), let us set
Then it turns out that \(c_{\varepsilon }\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\) with
and that \(c_{\varepsilon }(t)\rightarrow c(t)\) uniformly in \([0,T_{0}]\). \(\square \)
The following lemma is essentially taken from [4]. We state and prove it because we need the exact values of the constants.
Lemma 4.8
(Basic block)
Let \(\varepsilon \), \(\gamma \), \(\lambda \) be positive real numbers such that
Let \((a,b)\subseteq \mathbb {R}\) be an interval whose endpoints satisfy
For every \(t\in [a,b]\) let us set
and
Then the following statements hold true.
-
(1)
For every \(t\in [a,b]\) it turns out that
$$\begin{aligned} |\varphi _{\varepsilon ,\gamma ,\lambda }(t)|\le \frac{\varepsilon }{2\gamma \lambda } \quad \text {and}\quad \left| \varphi _{\varepsilon ,\gamma ,\lambda }'(t)\right| \le \varepsilon . \end{aligned}$$(4.23) -
(2)
For every modulus of continuity \(\omega \) it turns out that
$$\begin{aligned} \left| \varphi _{\varepsilon ,\gamma ,\lambda }(t)-\varphi _{\varepsilon ,\gamma ,\lambda }(s)\right| \le \varepsilon \cdot \max \left\{ 1,\frac{\pi }{\gamma }\right\} \cdot \left[ \lambda \,\omega \left( \frac{1}{\lambda }\right) \right] ^{-1}\cdot \omega (|t-s|) \nonumber \\ \end{aligned}$$(4.24) for every s and t in [a, b].
-
(3)
The function \(w_{\varepsilon ,\gamma ,\lambda }(t)\) satisfies the differential equation
$$\begin{aligned} w_{\varepsilon ,\gamma ,\lambda }''(t)+ \lambda ^{2}\left( \gamma ^{2}-\varphi _{\varepsilon ,\gamma ,\lambda }(t)\right) \cdot w_{\varepsilon ,\gamma ,\lambda }(t)=0 \end{aligned}$$for every \(t\in [a,b]\), with “initial” data \(w_{\varepsilon ,\gamma ,\lambda }(a)=0\), \(w_{\varepsilon ,\gamma ,\lambda }'(a)=1\), and “final” data
$$\begin{aligned} w_{\varepsilon ,\gamma ,\lambda }(b)=0, \quad w_{\varepsilon ,\gamma ,\lambda }'(b)=\exp \left( \frac{\varepsilon (b-a)}{16\gamma ^{2}}\right) . \end{aligned}$$
Proof
Let us start with statement (1). From definition (4.21) it follows that
and these estimates imply (4.23) because of assumption (4.19).
As for statement (3), it is just a (lengthy) computation.
It remains to prove statement (2). Let t and s be in [a, b]. Since the function \(\varphi _{\varepsilon ,\gamma ,\lambda }\) is periodic with period \(\pi /(\gamma \lambda )\), there exist \(t_{1}\) and \(s_{1}\) in [a, b] such that
and
From the second estimate in (4.23) we know that \(\varphi _{\varepsilon ,\gamma ,\lambda }\) is Lipschitz continuous with Lipschitz constant less than or equal to \(\varepsilon \), and in particular
where in the last step we exploited that the functions \(\omega (\sigma )\) and \(\sigma /\omega (\sigma )\) are nondecreasing, and inequalities (4.25). Now we distinguish two cases.
-
If \(\pi /\gamma \ge 1\), then \(\omega (\pi /(\gamma \lambda ))\ge \omega (1/\lambda )\) because of the monotonicity of \(\omega (\sigma )\). It follows that
$$\begin{aligned} \frac{\pi }{\gamma \lambda }\left[ \omega \left( \frac{\pi }{\gamma \lambda }\right) \right] ^{-1}\le \frac{\pi }{\gamma }\cdot \frac{1}{\lambda }\left[ \omega \left( \frac{1}{\lambda }\right) \right] ^{-1}, \end{aligned}$$ -
If \(\pi /\gamma \le 1\), then \(\pi /(\gamma \lambda )\le 1/\lambda \), and exploiting again the monotonicity of \(\sigma /\omega (\sigma )\) we obtain that
$$\begin{aligned} \frac{\pi }{\gamma \lambda }\left[ \omega \left( \frac{\pi }{\gamma \lambda }\right) \right] ^{-1}\le \frac{1}{\lambda }\left[ \omega \left( \frac{1}{\lambda }\right) \right] ^{-1}. \end{aligned}$$
In both cases, combining this bound with the previous estimate we obtain (4.24).
\(\square \)
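The mechanism behind Lemma 4.8 is a parametric resonance: perturbing the coefficient at twice the frequency \(\gamma \lambda \) of the unperturbed oscillator forces selected solutions to grow exponentially. The following numerical sketch is ours and uses a plain sinusoidal perturbation \(c(t)=\gamma ^{2}-\delta \sin (2\gamma \lambda t)\) rather than the exact function \(\varphi _{\varepsilon ,\gamma ,\lambda }\) of (4.21); it integrates \(u''+\lambda ^{2}c(t)u=0\) and compares the final energy with the constant-coefficient case.

```python
import math

def final_energy(lam, gamma, delta, T=1.0, n=20000):
    """RK4 integration of u'' + lam^2 (gamma^2 - delta sin(2 gamma lam t)) u = 0
    with u(0) = 0, u'(0) = 1; returns the energy u'^2 + (lam gamma u)^2 at time T."""
    dt = T / n

    def f(t, u, v):
        c = gamma**2 - delta * math.sin(2.0 * gamma * lam * t)
        return v, -lam**2 * c * u

    t, u, v = 0.0, 0.0, 1.0
    for _ in range(n):
        k1u, k1v = f(t, u, v)
        k2u, k2v = f(t + dt / 2, u + dt / 2 * k1u, v + dt / 2 * k1v)
        k3u, k3v = f(t + dt / 2, u + dt / 2 * k2u, v + dt / 2 * k2v)
        k4u, k4v = f(t + dt, u + dt * k3u, v + dt * k3v)
        u += dt / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return v**2 + (lam * gamma * u)**2

E_const = final_energy(lam=100.0, gamma=1.0, delta=0.0)  # constant speed: energy conserved
E_res = final_energy(lam=100.0, gamma=1.0, delta=0.2)    # resonant perturbation: growth
print(E_const, E_res)
```

With these (hypothetical) parameters the unperturbed energy stays equal to 1, while the resonant perturbation produces a large exponential amplification, mirroring the final datum \(w_{\varepsilon ,\gamma ,\lambda }'(b)=\exp (\varepsilon (b-a)/(16\gamma ^{2}))\) of the lemma.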
4.3 Proof of Theorem 2.3, statement (2)
It remains to prove that, for every coefficient \(c_{*}(t)\) in the dense subset \({\mathcal {D}}\) described in Definition 4.6, there exists a family of asymptotic activators that converges uniformly to \(c_{*}(t)\). This result is proved in Proposition 4.11, and the proof relies on two preliminary general constructions, which we introduce in the following two propositions.
Proposition 4.9
(\(\omega \)-construction)
Let us assume that \(c_{*}\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\) for some admissible values of the parameters, and let us set
Let \(\lambda \) be a positive real number such that
Let \((a,b)\subseteq (0,T_{1})\) be an interval whose endpoints satisfy (4.20) and
Finally, let us consider the function \(\varphi _{\varepsilon ,\gamma ,\lambda }(t)\) defined in (4.21) with
and let us define
Then the following statements hold true.
-
(1)
The function \(c_{\lambda }\) belongs to \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) and satisfies
$$\begin{aligned} |c_{\lambda }(t)-c_{*}(t)|\le \frac{\nu _{1}}{2\gamma }\,\omega \left( \frac{1}{\lambda }\right) \quad \forall t\in [0,T_{0}]. \end{aligned}$$(4.30) -
(2)
The solution \(u_{\lambda }(t)\) to problem (4.1)–(4.2) satisfies
$$\begin{aligned}&u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2} \nonumber \\&\quad \ge \min \left\{ 1,\frac{1}{\mu _{2}}\right\} \exp \left( -\frac{1}{\mu _{1}}\int _{T_{1}}^{T_{0}}\theta (t)\,dt\right) \exp \left( \frac{\nu _{1}}{8\mu _{2}}\lambda \,\omega \left( \frac{1}{\lambda }\right) (b-a)\right) \nonumber \\ \end{aligned}$$(4.31) for every \(t\in [b,T_{0}]\).
Proof
To begin with, we observe that the first inequality in (4.28) is equivalent to \(\varepsilon \le 8\gamma ^{3}\lambda \), and therefore \(\varepsilon \), \(\gamma \), \(\lambda \) satisfy the assumptions of Lemma 4.8.
Statement (1). Let us start by proving (4.30). To this end, we can assume that \(t\in [a,b]\), because otherwise \(c_{\lambda }(t)\) and \(c_{*}(t)\) coincide. When \(t\in [a,b]\), from the first estimate in (4.23) we obtain that
which proves (4.30).
Now let us prove that \(c_{\lambda }\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\). As for the strict hyperbolicity condition, we can limit ourselves to the interval [a, b], where it follows from (4.30) and the second condition in (4.28), because \(c_{*}(t)=\gamma ^{2}\) in (a, b) (since \((a,b)\subseteq (0,T_{1})\)).
As for the estimate on the derivative, again we can limit ourselves to the interval [a, b]. In this case from the second estimate in (4.23) and assumption (4.29) we obtain that
where the last inequality follows from the monotonicity of \(\theta (t)\).
Finally, let us check the \(\omega \)-continuity of \(c_{\lambda }(t)\). To this end, we consider t and s in \([0,T_{0}]\), we assume without loss of generality that \(s<t\), and we distinguish some cases according to the position of t and s.
-
If \(a\le s<t\le b\), then we exploit (4.24), and from our definition (4.27) of \(\nu _{1}\) we obtain that
$$\begin{aligned} |c_{\lambda }(t){-}c_{\lambda }(s)|= & {} |\varphi _{\varepsilon ,\gamma ,\lambda }(t){-}\varphi _{\varepsilon ,\gamma ,\lambda }(s)|\nonumber \\\le & {} \nu _{1}\cdot \max \left\{ 1,\frac{\pi }{\gamma }\right\} \omega (t-s)\le \omega (t-s).\nonumber \\ \end{aligned}$$(4.32) -
If \(b\le s<t\le T_{0}\), then the \(\omega \)-continuity of \(c_{\lambda }\) follows from the \(\omega \)-continuity of \(c_{*}\).
-
If \(s\in [a,b]\) and \(t\in [T_{1},T_{0}]\), then
$$\begin{aligned} |c_{\lambda }(t)-c_{\lambda }(s)|\le & {} |c_{\lambda }(t)-c_{\lambda }(b)|+|c_{\lambda }(b)-c_{\lambda }(s)| \\= & {} |c_{*}(t)-c_{*}(b)|+|\varphi _{\varepsilon ,\gamma ,\lambda }(b)-\varphi _{\varepsilon ,\gamma ,\lambda }(s)| \\\le & {} (1-\eta )\omega (t-b)+\omega (b-s) \\\le & {} (1-\eta )\omega (t-b)+\omega (b), \end{aligned}$$where we exploited that \(c_{*}\) satisfies (4.18) in \([b,T_{0}]\), and the fact that \(c_{\lambda }\) satisfies (4.32) in [s, b]. At this point we exploit the first inequality in (4.29) and we conclude that
$$\begin{aligned} |c_{\lambda }(t)-c_{\lambda }(s)|\le \omega (t-b)+\eta [\omega (T_{1}-b)-\omega (t-b)]\le \omega (t-b)\le \omega (t-s). \end{aligned}$$ -
The cases where at least one variable lies in \([0,a]\cup [b,T_{1}]\) are either trivial or can be easily reduced to the previous ones.
Statement (2). Let us now examine the solution to problem (4.1)–(4.2). In the interval [0, a] the solution is given by the explicit formula
and hence, since \(\gamma \lambda a\) is an integer multiple of \(2\pi \), it follows that \(u_{\lambda }(a)=0\) and \(u_{\lambda }'(a)=1\).
In the interval [a, b] the solution is given by the explicit formula \(u_{\lambda }(t)=w_{\varepsilon ,\gamma ,\lambda }(t)\), where \(w_{\varepsilon ,\gamma ,\lambda }\) is defined by (4.22). Since \(\gamma \lambda b\) is an integer multiple of \(2\pi \), from the explicit formula we obtain that
Finally, in the interval \([b,T_{0}]\) we consider the classical hyperbolic energy
In the usual way we obtain that
and
Since \(c_{*}'(t)=0\) in \([b,T_{1}]\), and \(|c_{*}'(t)|\le \theta (t)\) in \([T_{1},T_{0}]\), integrating this differential inequality we obtain that
Since \(F_{\lambda }(b)\) is given by (4.33), at this point (4.31) follows from (4.34) and (4.35). \(\square \)
Proposition 4.10
(\(\theta \)-construction)
Let us assume that \(c_{*}\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\) for some admissible values of the parameters, and let us set
Let \(\lambda \) be a positive real number such that
Let \((a,b)\subseteq (0,T_{1})\) be an interval whose endpoints satisfy (4.20) and
Let us set \(t_{0}:=a\) and \(k:=\gamma \lambda (b-a)/(2\pi )\), and then let us define
Finally, let us define
Then the following statements hold true.
-
(1)
The function \(c_{\lambda }\) belongs to \(\mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) and satisfies
$$\begin{aligned} |c_{\lambda }(t)-c_{*}(t)|\le \frac{\nu _{2}}{2\gamma }\,\omega \left( \frac{1}{\lambda }\right) \quad \forall t\in [0,T_{0}]. \end{aligned}$$(4.39) -
(2)
The solution to problem (4.1)–(4.2) satisfies
$$\begin{aligned} u_{\lambda }'(t)^{2}+\lambda ^{2}u_{\lambda }(t)^{2} \\ \ge \min \left\{ 1,\frac{1}{\mu _{2}}\right\} \exp \left( -\frac{1}{\mu _{1}}\int _{T_{1}}^{T_{0}}\theta (t)\,dt-2\pi \right) \cdot \exp \left( \frac{\nu _{2}}{8\mu _{2}}\int _{a}^{b}\theta (\tau )\,d\tau \right) \end{aligned}$$(4.40) for every \(t\in [b,T_{0}]\).
Proof
We follow the same path as in the case of Proposition 4.9. To begin with, from the monotonicity of \(\theta \) and the second assumption in (4.38) we obtain that
and therefore from the first inequality in (4.37) we deduce that \(\varepsilon _{i}\le 8\gamma ^{3}\lambda \), and in particular the assumptions of Lemma 4.8 are satisfied in every interval \([t_{i-1},t_{i}]\).
Statement (1). Let us start by proving (4.39). To this end, we can assume that \(t\in [a,b]\), because otherwise \(c_{\lambda }(t)\) and \(c_{*}(t)\) coincide. When \(t\in [t_{i-1},t_{i}]\) for some \(i\in \{1,\ldots ,k\}\), from the first estimate in (4.23) we obtain that
Plugging (4.41) into (4.42) we obtain (4.39).
Now let us prove that \(c_{\lambda }\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\). As for the strict hyperbolicity condition, we can limit ourselves to the interval [a, b], where it follows from (4.39) because of the second condition in (4.37).
As for the estimate on the derivative, again we can limit ourselves to the interval [a, b]. When \(t\in [t_{i-1},t_{i}]\) for some \(i\in \{1,\ldots ,k\}\) we apply the second estimate in (4.23) and we obtain that
where the last inequality follows from the monotonicity of \(\theta (t)\).
Finally, let us check the \(\omega \)-continuity of \(c_{\lambda }(t)\). To this end, we consider t and s in \([0,T_{0}]\), we assume without loss of generality that \(s<t\), and we distinguish some cases according to the position of t and s.
-
If \(t_{i-1}\le s<t\le t_{i}\) for some \(i\in \{1,\ldots ,k\}\), then from (4.24) we obtain that
$$\begin{aligned}&|c_{\lambda }(t)-c_{\lambda }(s)|\nonumber \\&\quad = |\varphi _{\varepsilon _{i},\gamma ,\lambda }(t)-\varphi _{\varepsilon _{i},\gamma ,\lambda }(s)|\le \varepsilon _{i}\cdot \max \left\{ 1,\frac{\pi }{\gamma }\right\} \cdot \left[ \lambda \,\omega \left( \frac{1}{\lambda }\right) \right] ^{-1} \cdot \omega (t-s). \end{aligned}$$At this point we exploit (4.41) and our definition (4.36) of \(\nu _{2}\), and we conclude that
$$\begin{aligned} |c_{\lambda }(t)-c_{\lambda }(s)|\le \nu _{2}\max \left\{ 1,\frac{\pi }{\gamma }\right\} \cdot \omega (t-s)\le \frac{1}{2}\,\omega (t-s). \end{aligned}$$ -
If \(s\in [t_{i-1},t_{i}]\) and \(t\in [t_{j-1},t_{j}]\) for some \(1\le i<j\le k\), then from the previous case (and thanks to the factor 1/2) we deduce that
$$\begin{aligned} |c_{\lambda }(t)-c_{\lambda }(s)|\le & {} |c_{\lambda }(t)-\gamma ^{2}|+|\gamma ^{2}-c_{\lambda }(s)| \\= & {} |c_{\lambda }(t)-c_{\lambda }(t_{j-1})|+|c_{\lambda }(t_{i})-c_{\lambda }(s)| \\\le & {} \frac{1}{2}\,\omega (t-t_{j-1})+\frac{1}{2}\,\omega (t_{i}-s) \\\le & {} \omega (t-s). \end{aligned}$$ -
All other possibilities for s and t can be dealt with as in the case of the \(\omega \)-construction.
Statement (2). Let us examine the solution to problem (4.1)–(4.2). As in the case of the \(\omega \)-construction, in the interval [0, a] we have an explicit formula for the solution, from which we deduce that \(u_{\lambda }(a)=0\) and \(u_{\lambda }'(a)=1\). Then in the interval \([t_{0},t_{1}]\) the solution is given by the explicit formula \(u_{\lambda }(t)=w_{\varepsilon _{1},\gamma ,\lambda }(t)\), where \(w_{\varepsilon ,\gamma ,\lambda }\) is defined by (4.22). Since \(\gamma \lambda t_{1}\) is an integer multiple of \(2\pi \), from the explicit formula we obtain that
In the interval \([t_{1},t_{2}]\) the solution is given by the explicit formula \(u_{\lambda }(t)=\alpha w_{\varepsilon _{2},\gamma ,\lambda }(t)\), with \(\alpha =u_{\lambda }'(t_{1})\), and therefore
At this point by finite induction we find that
Since \(t_{i}-t_{i-1}\) does not depend on i, from the monotonicity of \(\theta (t)\) we deduce that
Recalling the second condition in (4.38), and the first condition in (4.37), we obtain that
Plugging this estimate into (4.43) we conclude that
At this point in the interval \([b,T_{0}]\) we consider the hyperbolic energy as in the case of the \(\omega \)-construction and we obtain (4.40). \(\square \)
Proposition 4.11
(Asymptotic activators for initially constant coefficients)
Let us assume that \(c_{*}\in {\mathcal {D}}(T_{0},\mu _{1},\mu _{2},\omega ,\theta ;T_{1},\gamma ,\eta )\) for some admissible values of the parameters, and let us define \(m(\lambda )\) as in (1.7), and \(M_{3}\) as in (2.1).
Let us assume that \(m(\lambda )\rightarrow +\infty \) as \(\lambda \rightarrow +\infty \).
Then there exists a family of asymptotic activators \(\{c_{\lambda }(t)\}\subseteq \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) with rate \(\phi (\lambda ):=M_{3}\,m(\lambda )\) such that \(c_{\lambda }(t)\rightarrow c_{*}(t)\) uniformly in \([0,T_{0}]\).
Proof
The strategy is the following. For every \(\lambda \) large enough we define a coefficient \(c_{\lambda }\in \mathcal{PS}(T_{0},\mu _{1},\mu _{2},\omega ,\theta )\) by modifying \(c_{*}\) in some interval \((a_{\lambda },b_{\lambda })\) according to the constructions described in Propositions 4.9 and 4.10. We show that \(b_{\lambda }\rightarrow 0\), and that for \(\lambda \) large enough it turns out that
where \(\nu _{2}\) is defined by (4.36), and the solutions \(u_{\lambda }(t)\) to problem (4.1)–(4.2) satisfy
where
If we prove these claims, then from (4.44) it follows that \(c_{\lambda }\rightarrow c_{*}\) uniformly in \([0,T_{0}]\), while (4.45) and the fact that \(b_{\lambda }\rightarrow 0^{+}\) imply that \(\{c_{\lambda }(t)\}\) is a family of asymptotic activators with rate \(\phi (\lambda ):=M_{3}\,m(\lambda )\).
In order to define \(c_{\lambda }\) we distinguish two cases. To begin with, we observe that \(\omega (\sigma )\) and \(\theta (t)\) satisfy (2.3), because otherwise \(m(\lambda )\) would be bounded independently of \(\lambda \).
Let
$$\begin{aligned} \psi (s):=\lambda \,s\,\omega (1/\lambda )+\int _{s}^{T_{0}}\theta (t)\,dt \end{aligned}$$
denote the function whose minimum is \(m(\lambda )\). Due to the second condition in (2.3), the minimum is never attained in \(s=0\). Moreover, since \(\psi '(s)=\lambda \,\omega (1/\lambda )-\theta (s)\), from the first condition in (2.3) we deduce that \(\psi '(T_{0})>0\) when \(\lambda \) is large enough, and therefore for these values of \(\lambda \) the minimum is not attained at \(s=T_{0}\) either. Therefore, when \(\lambda \) is large enough the minimum is attained at some point \(s_{\lambda }\in (0,T_{0})\) where \(\psi '(s_{\lambda })=0\), and hence
$$\begin{aligned} \theta (s_{\lambda })=\lambda \,\omega (1/\lambda ). \end{aligned}$$
Exploiting again the first condition in (2.3) we deduce that the right-hand side tends to \(+\infty \), and hence \(s_{\lambda }\rightarrow 0^{+}\). Now let \(\Lambda _{\omega }\) denote the set of all \(\lambda >0\) such that
and let \(\Lambda _{\theta }\) denote the set of remaining \(\lambda \)’s, for which necessarily it turns out that
We are now ready to define \(c_{\lambda }(t)\) in the two cases.
Case \(\lambda \in \Lambda _{\omega }\). For every \(\lambda \in \Lambda _{\omega }\) we set (here \(\lfloor \alpha \rfloor \) denotes the greatest integer less than or equal to \(\alpha \))
and we define the coefficient \(c_{\lambda }(t)\) according to the \(\omega \)-construction of Proposition 4.9. Let us check that the assumptions are satisfied if \(\lambda \in \Lambda _{\omega }\) is large enough. To begin with, we observe that \(a_{\lambda }>0\) because from (4.46) we deduce that \(\lambda s_{\lambda }\rightarrow +\infty \) when \(\lambda \rightarrow +\infty \) remaining inside \(\Lambda _{\omega }\). In addition, it turns out that
so that in particular \(b_{\lambda }\rightarrow 0^{+}\) and \((a_{\lambda },b_{\lambda })\subseteq (0,T_{1})\) for large \(\lambda \). Moreover, (4.20) is almost trivial from the definition, while the two inequalities in (4.28) are again satisfied when \(\lambda \) is large because \(\omega (\sigma )\rightarrow 0\) as \(\sigma \rightarrow 0^{+}\). Finally, the first inequality in (4.29) is true for \(\lambda \) large because \(b_{\lambda }\rightarrow 0\), while the second inequality in (4.29) is satisfied because
At this point we can use the conclusions of Proposition 4.9. From (4.30) we obtain immediately (4.44) in this case (note that \(\nu _{1}=2\nu _{2}\)). Finally, we observe that
and therefore
so that (4.31) implies (4.45) in this case.
Case \(\lambda \in \Lambda _{\theta }\). For every \(\lambda \in \Lambda _{\theta }\) we define \(\widehat{s}_{\lambda }\) in such a way that
and we observe that \(\widehat{s}_{\lambda }\rightarrow 0^{+}\) because the integral of \(\theta (t)\) in \((0,T_{1})\) is divergent. Then we set (here \(\lceil \alpha \rceil \) denotes the smallest integer greater than or equal to \(\alpha \))
and we define the coefficient \(c_{\lambda }(t)\) according to the \(\theta \)-construction of Proposition 4.10.
With these notations it turns out that
so that in particular \(b_{\lambda }\rightarrow 0^{+}\) and \((a_{\lambda },b_{\lambda })\subseteq (0,T_{1})\) for large \(\lambda \). Moreover, (4.20) is almost trivial from the definition, while the two inequalities in (4.37) are again satisfied when \(\lambda \) is large because \(\omega (\sigma )\rightarrow 0\) as \(\sigma \rightarrow 0^{+}\). Finally, the first inequality in (4.38) is true for \(\lambda \) large because \(b_{\lambda }\rightarrow 0\), while the second inequality in (4.38) is satisfied because
At this point we can use the conclusions of Proposition 4.10. From (4.39) we obtain immediately (4.44) in this case. Finally, we observe that
and therefore
when \(\lambda \) is large enough. Recalling that \(\nu _{2}\le \gamma /(2\pi )\), at this point (4.40) implies (4.45) in this case. \(\square \)
5 Proof of Corollaries
Proof of Corollary 2.6
It is enough to show that \(m(\lambda )\rightarrow +\infty \). Let \(\lambda _{n}\rightarrow +\infty \) be any sequence of positive real numbers. For every positive integer n, let us choose
Up to subsequences (not relabeled) we can assume that \(s_{n}\rightarrow s_{\infty }\in [0,T_{0}]\). If \(s_{\infty }=0\), then \(m(\lambda _{n})\rightarrow +\infty \) due to the second term in the minimum and the second assumption in (2.3). If \(s_{\infty }>0\), then \(m(\lambda _{n})\rightarrow +\infty \) due to the first term in the minimum and the first assumption in (2.3). \(\square \)
Proof of Corollary 2.7—Statement (1). From assumption (2.5) we deduce that there exists a positive real number M such that \(\theta (t)\le M/t\) for every \(t\in (0,T_{0}]\), and hence
In particular, setting \(s=\lambda ^{\alpha -1}\) we obtain that
Now we divide by \(\log \lambda \) and we let \(\lambda \rightarrow +\infty \). Due to assumption (2.4) we deduce that
for every \(\alpha \in (0,1)\). Finally, letting \(\alpha \rightarrow 1^{-}\) we conclude that actually \(m(\lambda )/\log \lambda \rightarrow 0\), which implies well-posedness with at most an arbitrarily small derivative loss.
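Written out under the assumption that \(m(\lambda )\) is the minimum of the function \(\psi (s)=\lambda \,s\,\omega (1/\lambda )+\int _{s}^{T_{0}}\theta (t)\,dt\) appearing in the proof of Proposition 4.11, the computation is the following sketch: the choice \(s=\lambda ^{\alpha -1}\) gives
$$\begin{aligned} m(\lambda )\le \lambda ^{\alpha }\,\omega (1/\lambda )+\int _{\lambda ^{\alpha -1}}^{T_{0}}\frac{M}{t}\,dt =\lambda ^{\alpha }\,\omega (1/\lambda )+M(1-\alpha )\log \lambda +M\log T_{0}, \end{aligned}$$
so that, if assumption (2.4) forces the first term to be negligible with respect to \(\log \lambda \), then \(\limsup _{\lambda \rightarrow +\infty }m(\lambda )/\log \lambda \le M(1-\alpha )\), and letting \(\alpha \rightarrow 1^{-}\) yields the claim.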
Proof of Corollary 2.7—Statement (2). From assumptions (2.6) and (2.7) we deduce that there exist positive real numbers \(m_{2}\) and \(m_{1}\) such that
and in particular
Minimizing the right-hand side with respect to s we conclude that
and hence
which implies that the derivative loss is actually finite, and proportional to \(1-\alpha \), for a residual set of coefficients. \(\square \)
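The minimization invoked here has the following elementary shape (a sketch with hypothetical constants \(A=A(\lambda )>0\) and \(B>0\) standing for the quantities produced by (2.6) and (2.7), which are not reproduced above): for
$$\begin{aligned} f(s):=As+B\log \frac{1}{s} \qquad \text {one has}\qquad f'(s)=A-\frac{B}{s}, \end{aligned}$$
so the minimum is attained at \(s=B/A\), with
$$\begin{aligned} \min _{s>0}f(s)=B+B\log \frac{A}{B}, \end{aligned}$$
and therefore a bound of the form \(\psi (s)\le As+B\log (1/s)+C\) produces an estimate for \(m(\lambda )\) that is logarithmic in \(A\).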
Change history
22 July 2022
Missing Open Access funding information has been added in the Funding Note.
Acknowledgements
Both authors are members of the Italian “Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni” (GNAMPA) of the “Istituto Nazionale di Alta Matematica” (INdAM).
Funding
Open access funding provided by Università di Pisa within the CRUI-CARE Agreement.
Ethics declarations
Conflict of interest
On behalf of all authors, Massimo Gobbino states that there is no conflict of interest.
Data sharing
Data sharing not applicable because the article describes entirely theoretical research.
Communicated by Y. Giga.
A Appendix
Lemma A.1
Let \(\omega :(0,1/2)\rightarrow (0,+\infty )\) be any function, and let \(\theta :(0,T_0)\rightarrow (0,+\infty )\) be a measurable function. For every \(\lambda >0\) let us define
$$\begin{aligned} m(\lambda ):=\min _{s\in (0,T_{0}]}\psi (s), \qquad \text {where}\quad \psi (s):=\lambda \,s\,\omega (1/\lambda )+\int _{s}^{T_{0}}\theta (t)\,dt. \end{aligned}$$
Then the following statements hold true.
(1) (Estimate from above). Let us assume that there exist a real number \(\beta \) (no sign condition), and two real numbers \(c_1>0\) and \(c_2>0\), such that
$$\begin{aligned} \omega (\sigma )\le c_1\, \sigma |\log \sigma |\cdot \log (|\log \sigma |) \qquad \forall \sigma \in (0,1/2), \end{aligned}$$ (A.1)
and
$$\begin{aligned} \theta (t)\le c_2\,\frac{1}{t^\beta }e^{1/t} \qquad \forall t\in (0,T_0). \end{aligned}$$ (A.2)
Then there exists a constant \(c_3>0\), depending on \(c_1\), \(c_2\), \(\beta \), such that
$$\begin{aligned} m(\lambda )\le c_3\log \lambda \end{aligned}$$
whenever \(\lambda \) is large enough (how large depends on \(c_1\), \(c_2\), \(\beta \), \(T_0\)).
(2) (Estimate from below). Let us assume that there exist a real number \(\beta \) (no sign condition), and two real numbers \(d_1>0\) and \(d_2>0\), such that
$$\begin{aligned} \omega (\sigma )\ge d_1\, \sigma |\log \sigma |\cdot \log (|\log \sigma |) \qquad \forall \sigma \in (0,1/2), \end{aligned}$$ (A.3)
and
$$\begin{aligned} \theta (t)\ge d_2\,\frac{1}{t^\beta }e^{1/t} \qquad \forall t\in (0,T_0). \end{aligned}$$ (A.4)
Then there exists a constant \(d_3>0\), depending on \(d_1\), \(d_2\), \(\beta \), such that
$$\begin{aligned} m(\lambda )\ge d_3\log \lambda \end{aligned}$$
whenever \(\lambda \) is large enough (how large depends on \(d_1\), \(d_2\), \(\beta \), \(T_0\)).
Proof
Let us define \(g_{1}:(0,T_{0})\rightarrow (0,+\infty )\) and \(g_{2}:(0,T_{0})\rightarrow (0,+\infty )\) by
We observe that
and that \(g_1\) and \(g_2\) are functions of class \(C^1\) with derivatives
and
In particular, the function \(g_2(s)\) is decreasing in a right neighborhood of the origin. Moreover, from de l'Hôpital's rule it follows that
Since \(g_1(s)\) is strictly positive in \((0,T_0/2)\), with a rather standard argument we deduce that there exist two real constants \(\gamma _1>0\) and \(\gamma _2>0\) (depending on \(\beta \) and \(T_{0}\)) such that
Conclusion—estimate from above Under assumptions (A.1) and (A.2) we know that
for every \(s\in (0,T_0/2)\), provided that \(\lambda \ge 1\). Now let us set
We observe that \(s_\lambda \in (0,T_0/2)\) when \(\lambda \) is large enough, and therefore
Now we observe that there exists a constant \(\gamma _3\), depending on \(\beta \), such that
when \(\lambda \) is large enough, and this completes the estimate from above.
Conclusion—estimate from below We need to show that there exists a constant \(d_3>0\) such that
provided that \(\lambda \) is large enough.
To this end, we observe that every \(s\in (0,T_0)\) satisfies either
or
In the first case it is enough to consider the first term in the definition of \(\psi (s)\), and assumption (A.3), in order to deduce that
In the second case it is enough to consider the second term in the definition of \(\psi (s)\) (namely the integral), and assumption (A.4), in order to deduce that
On the other hand, when \(\lambda \) is large enough we know that the values of s involved in the second case are small enough, and therefore we are in the region where \(g_1(s)\ge \gamma _1 g_2(s)\) and \(g_2\) is decreasing. It follows that
and we conclude by observing that there exists a constant \(\gamma _4>0\), again depending on \(\beta \), such that
when \(\lambda \) is large enough. \(\square \)
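As a numerical sanity check (not part of the proof), the following sketch evaluates \(m(\lambda )\) under the assumption, consistent with the proof of Proposition 4.11, that \(m(\lambda )=\min _{s}\bigl (\lambda \,s\,\omega (1/\lambda )+\int _{s}^{T_{0}}\theta (t)\,dt\bigr )\), with the borderline choices \(\omega (\sigma )=\sigma |\log \sigma |\log (|\log \sigma |)\) and \(\theta (t)=e^{1/t}\) (the case \(\beta =0\)), and the hypothetical value \(T_{0}=1/2\). Lemma A.1 then predicts that \(m(\lambda )/\log \lambda \) stays bounded between two positive constants.

```python
import math

T0 = 0.5  # hypothetical choice of the time horizon

def omega(sigma):
    # omega(sigma) = sigma * |log sigma| * log(|log sigma|), sigma in (0, 1/2)
    L = abs(math.log(sigma))
    return sigma * L * math.log(L)

def theta(t):
    # theta(t) = e^{1/t}, i.e. the case beta = 0 in (A.2)/(A.4)
    return math.exp(1.0 / t)

def integral_theta(s, n=1000):
    # midpoint rule for the integral of theta over [s, T0]
    h = (T0 - s) / n
    return h * sum(theta(s + (k + 0.5) * h) for k in range(n))

def m(lam, grid=200):
    # m(lambda) = min over s of  lambda * s * omega(1/lambda) + int_s^{T0} theta(t) dt
    # (the function called psi in the proof of Proposition 4.11), minimized on a grid
    w = omega(1.0 / lam)
    best = float("inf")
    for k in range(1, grid):
        s = 0.02 + (T0 - 0.02) * k / grid
        best = min(best, lam * s * w + integral_theta(s))
    return best

# Lemma A.1 predicts m(lambda) comparable to log(lambda) from above and from below.
ratios = [m(10.0 ** p) / (p * math.log(10.0)) for p in (4, 6, 8, 10)]
print(ratios)
```

In this borderline regime the computed ratios remain of order one over several decades of \(\lambda \), illustrating both estimates of the lemma simultaneously.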
Remark A.2
A sharper estimate of the minimum point of \(\psi (s)\) is given by
This unique value could be used both in the estimate from above, and as a separator between the two regions in the estimate from below.
Ghisi, M., Gobbino, M. Optimal derivative loss for abstract wave equations. Math. Ann. 386, 455–494 (2023). https://doi.org/10.1007/s00208-022-02403-x