A unified observability result for non-autonomous observation problems

A final-state observability result in the Banach space setting for non-autonomous observation problems is obtained that covers and extends all previously known results in this context, while providing a streamlined proof that follows the established Lebeau-Robbiano strategy.


Introduction
Observability and null-controllability results for (non-)autonomous Cauchy problems are relevant especially in the field of control theory of partial differential equations and have recently attracted a lot of attention in the literature.Here, the most common approach towards final-state observability is a so-called Lebeau-Robbiano strategy, which combines a suitable uncertainty principle with a corresponding dissipation estimate for the evolution family describing the evolution of the system, see (essUCP) and (DE) below, respectively.Certain null-controllability results can then be inferred from final-state observability via a standard duality argument, see, e.g., [4] for more information and also [9] for an holistic overview of duality theory for control systems.
Such a Lebeau-Robbiano strategy has been considered, for instance, in [1,2,4,8,10,11,13,14], see also [5] for a review of other related results in this context.The two most general results in this direction so far are [4,Theorem 3.3] and [1,Theorem 13], each highlighting different aspects and exhibiting certain advantages and disadvantages over the other, both with regard to hypotheses and the asserted conclusion, see the discussion below.The aim of the present work is to present a unified extension of both mentioned results, taking the best of each, thus allowing to apply the Lebeau-Robbiano strategy to a broader range of observation problems and, at the same time, providing a streamlined proof.

Lebeau-Robbiano strategy for non-autonomous observation problems
For the reader's convenience, let us fix the following notational setup.Hypothesis 2.1.Let X and Y be Banach spaces, T > 0, E ⊆ [0, T ] be measurable with positive Lebesgue measure, and (U (t, s)) 0≤s≤t≤T be an exponentially bounded evolution family on X.
Here, we denote by L(X, Y ) the space of bounded operators from X to Y .Also recall that (U (t, s)) 0≤s≤t≤T ⊆ L(X) := L(X, X) is called an evolution family of bounded operators on X if It is called exponentially bounded if there exist M ≥ 1 and ω ∈ R such that for all 0 ≤ s ≤ t ≤ T we have the bound U (t, s) L(X) ≤ M e ω(t−s) .
Evolution families are oftentimes used to describe the evolution of non-autonomous Cauchy problems, see, e.g., [4,Section 2] and the references cited therein.The family (C(t)) t∈[0,T ] in the mapping t → C(t)U (t, 0)x 0 Y can be understood as observation operators through which the state of the system is observed at each time t ≥ 0. In the context of L p -spaces, these are often chosen as multiplication operators by characteristic functions for some (time-dependent) sensor sets, see, e.g., Example 2.5 below.
The following theorem now covers and extends all known previous results in this context, see the discussion below.
Theorem 2.2.Assume Hypothesis 2.1.Let (P λ ) λ>0 be a family in L(X) such that for some constants d 0 , d 1 , γ 1 > 0 we have Suppose also that for some constants Then, there exists a constant C obs > 0 such that for each r ∈ [1, ∞] and all x 0 ∈ X we have the final-state observability estimate Moreover, if for some interval (τ 1 , τ 2 ) ⊆ [0, T ] with τ 1 < τ 2 we have |(τ 1 , τ 2 ) ∩ E| = τ 2 − τ 1 , then, depending on the value of r, the constant C obs can be bounded as The above theorem represents the established Lebeau-Robbiano strategy, in which an uncertainty principle (essUCP), here with respect to the given family (P λ ) λ>0 and uniform only on the subset E ⊆ [0, T ], and a corresponding dissipation estimate in the form (DE) are used as an input; it should be emphasized that the requirement γ 1 < γ 2 is essential here.The output in the form of (OBS) then constitutes a so-called final-state observability estimate for the evolution family (U (t, s)) 0≤s≤t≤T ⊆ L(X) with respect to the family (C(t)) t∈[0,T ] of observation operators.The corresponding constant C obs in (OBS) is called observability constant.An explicit form of the constants C 1 , C 2 , C 3 in (2.2) is given in Remark 3.1 below for easier reference.
Discussion and extensions.We first comment on two minor extensions of Theorem 2.2.
Remark 2.3.(1) It becomes clear from the proof, see (3.7) below, that instead of the polynomial blow-up in the dissipation estimate (DE) for small differences t − s one can also allow a certain (sub-)exponential blow-up.More precisely, one may replace the term max{1, (t − s) −γ4 } by a factor of the form exp(c(t − s) Let us now compare Theorem 2.2 to earlier results in the literature.
Remark 2.4.(1) In the particular case where [τ 1 , τ 2 ] = [0, T ], the bound on C obs in (2.2) is completely consistent, except perhaps for some minor differences in the explicit form of the constants C 1 , C 2 , C 3 in Remark 3.1 below, with all bounds obtained earlier for E = [0, T ] in [3,8,14] in the autonomous case (i.e.C(t) ≡ C and the evolution family being actually a semigroup) and in [4] in the non-autonomous case.
(2) Theorem 2.2 covers [4, Theorem 3.3], while allowing a polynomial blow-up for small differences t − s in the dissipation estimate (DE).Such a blow-up has first been considered in [1, Theorem 13], but under much more restrictive assumptions, see item (4) below.Moreover, in contrast to [4, Theorem 3.3], Theorem 2.2 requires the uncertainty relation (essUCP) only on a subset of [0, T ] of positive measure, instead of the whole interval [0, T ], and thus allows far more general families of observation operators.These families also need to be uniformly bounded only on this measurable subset and not on the whole interval [0, T ].
(3) The results from [3, Theorem A.1] and [8, Theorem 2.1] formulate a variant of our Theorem 2.2 in the autonomous case with E = [0, T ], but assume (essUCP) and (DE) only for λ > λ * with some λ * ≥ 0. Our current formulation with the whole range λ > 0, just as in [4, Theorem 3.3], is not really a restriction to that.Indeed, by a change of variable, one may then simply consider the family (P λ+λ * ) λ>0 instead, with a straightforward adaptation of the parameters d 0 and d 1 in (essUCP).In this sense, Theorem 2.2 completely covers the results from [3] and [8]. ( By a change of variable, namely via considering C(• + τ 1 ) on [0, T − τ 1 ] and the evolution family U (t + τ 1 , s + τ 1 ) 0≤s<t≤T −τ1 , one may replace U (T, 0), U (t, 0), and E in (OBS) by U (T, τ 1 ), U (t, τ 1 ), and (τ 1 , T ) ∩ E respectively; note that (essUCP) and (DE) then remain valid with the same constants.In this sense, Theorem 2.2 entirely covers [1, Theorem 13], while leaving the Hilbert space setting and not requiring strong continuity or contractivity of the evolution family.At the same time, our bound on C obs in (2.2) contains an additional prefactor 1/(τ 2 − τ 1 ) 1/r in front of the exponential term, which significantly changes the asymptotics of the estimate as τ 2 − τ 1 (and thus also T ) gets large.Such improved asymptotics has proved extremely useful in the past, for instance, when considering homogenization limits as in [14].
In order to support the above comparison, we briefly revisit [4, Theorem 4.8] in the following example.
Example 2.5.Let a be a uniformly strongly elliptic polynomial of degree m Theorem 4.4] that there is an exponentially bounded evolution family (U p (t, s)) 0≤s≤t≤T in L p (R d ) associated to a. Let (Ω(t)) t∈[0,T ] be a family of measurable subsets of R d such that the mapping [0 [4,Lemma 4.7], so that Hypothesis 2.1 with C(t) = 1 Ω(t) is satisfied for every choice of measurable E ⊆ [0, T ] with positive measure.Moreover, a dissipation estimate as in (DE) with γ 2 = m (but without blow-up, i.e. γ 4 = 0) was established in the proof of [4,Theorem 4.8] with P λ being some smooth frequency cutoffs.It remains to consider a corresponding essential uncertainty principle (essUCP).
Suppose that the family (Ω(t)) t∈[0,T ] of subsets is uniformly thick on E in the sense that there are L, ρ > 0 such that for all x ∈ R d and all t ∈ E we have Following the proof of [4, Theorem 4.8], we see that an essential uncertainty principle as in (essUCP) holds with γ 1 = 1 < γ 2 , so that Theorem 2.2 can be applied.Here, the set E for which (2.3) needs to hold could be any measurable subset of [0, T ] with positive measure, for instance, E = [0, T ] \ Q (satisfying |E| = T ) or even some fractal set, say of Cantor-Smith-Volterra type.
In particular, this allows for completely arbitrary choices of measurable Ω(t) for t / ∈ E, even Ω(t) = ∅.By contrast, these choices for Ω(t) would not be allowed in [4,Theorem 4.8], where (2.3) is required to hold for all x ∈ R d and all t ∈ [0, T ] and is thus much more restrictive on the choice of (Ω(t)) t∈[0,T ] .
Remark 2.6.In the situation of Example 2.5 with p < ∞, it was shown in [4, Theorem 4.10] that an observability estimate as in (OBS) can hold with r < ∞ only if the family (Ω(t)) t∈[0,T ] is mean thick in the sense that for some L, ρ > 0 we have It is easy to see that families which are uniformly thick on a subset of [0, T ] of positive measure as in (2.3) are also mean thick in the above sense (with possibly different parameters), but the converse need not be true.A corresponding example in R is the family (Ω(t)) t∈[0,T ] with Ω(t) = (0, ∞) for t ≤ T /2 and Ω(t) = (−∞, 0) for T /2 < t ≤ T .It is yet unclear whether such choices also lead to an observability estimate as in (OBS) or anything similar.In this sense, Example 2.5 and Theorem 2.2 still leave a gap between necessary and sufficient conditions on the family (Ω(t)) t∈[0,T ] towards final-state observability.

Proof of Theorem 2.2
Our proof of Theorem 2.2 is a streamlined adaptation of earlier approaches, especially of that from [4] and its predecessors [14], [8], and [1].It avoids the interpolation argument in [4] and is thus much more direct and, at the same time, requires an uncertainty relation only on a measurable subset of [0, T ] of positive measure.
Proof of Theorem 2.2.Let us fix x 0 ∈ X.For 0 ≤ t ≤ T we abbreviate By Hölder's inequality we clearly have with the usual convention 1/∞ = 0. Hence, estimate (OBS) for r > 1 follows from the one for r = 1 by multiplying the corresponding constant C obs by |E| 1− 1 r ≤ max{1, |E|}.It therefore suffices to show (OBS) for r = 1, which in the new notation reads Upon possibly removing from E a set of measure zero, we may assume without loss of generality that (essUCP) holds with ess inf replaced by inf and that C(•) is uniformly bounded on E. Let us then show that there exist constants c 1 , c 2 > 0 such that for all 0 ≤ s < t ≤ T with t ∈ E and all ε ∈ (0, 1) we have To this end, let ε ∈ (0, 1) and fix 0 ≤ s < t ≤ T with t ∈ E. For λ > 0 we introduce The uncertainty relation (essUCP) and the uniform boundedness of C(•) on E then give where λ , the latter implies that for all λ > 0 we have (3.4) , and inserting this into the preceding estimate (3.4) yields for all λ > 0 that (3.5) Let us maximize f (λ) with respect to λ.In light of γ 2 > γ 1 by hypothesis, a straightforward calculation reveals that f takes its maximal value on (0, ∞) at the point Taking into account the relation γ1 γ2−γ1 + 1 = γ2 γ2−γ1 , we observe that We may therefore estimate f (λ) as Moreover, using the elementary bound ξ α ≤ e αξ for α, ξ > 0, we have Inserting this and the preceding bound on f (λ) into (3.5),we arrive for all λ > 0 at We finally choose λ > 0 such that ε = e − d 3 2 λ γ 2 (t−s) γ 3 , which shows that (3.3) is valid; note that indeed neither c 1 nor c 2 depend on s or t.Since the evolution family (U (t, s)) 0≤s≤t≤T is exponentially bounded by hypothesis, there exist M ≥ 1 and ω ∈ R such that Setting ω + := max{ω, 0}, this in particular implies for each m ∈ N and all t ∈ (ℓ m+1 , ℓ m ) that which in light of (3.9) satisfies Combining (3.11) Observing that by (3.10) and the choice of q, multiplying (3.13) by δ m and rearranging terms yields for all t ∈ (ξ m , ℓ m ) ∩ E, m ∈ N. Taking into account (3.12), integrating the latter with respect to t ∈ (ξ m , ℓ m ) ∩ E leads to for all m ∈ N. Note here that the exponential boundedness of the evolution family guarantees that the sequence (F (ℓ m )) m∈N is bounded.Since also δ m → 0 and ℓ m → ℓ as m → ∞, summing the last inequality over all m ∈ N implies by a telescoping sum argument that which can be rewritten as Now, we have F (T ) ≤ M e ω(T −ℓ1) F (ℓ 1 ) by using once more the exponential boundedness of the evolution family, which shows that (3.2) holds with We may then simply choose ℓ = τ 1 and ℓ 1 = τ 2 in the above reasoning, leading to δ 1 = (1−q)(τ 2 −τ 1 ).For r ≥ 1, in light of (3.1) with E replaced by (τ 1 , τ 2 ) ∩ E, we conclude that C obs in (OBS) can be bounded as in (2.2), which completes the proof.
For organizational purposes, we extract from the above proof the following more explicit bound on the observability constant.