1 Introduction and results

Uncertainty principles are frequently used in control theory to prove observability for certain abstract Cauchy problems. Often this is done via the so-called Lebeau–Robbiano method, where an uncertainty principle for elements of a spectral subspace, a so-called spectral inequality, is combined with a dissipation estimate, see [5, 11, 21, 26]. Such spectral inequalities have been studied for several differential operators, see, e.g., [4, 6, 7, 8, 9, 10, 17, 20, 22] and the references cited therein. Suitable dissipation estimates that also allow one to treat semigroups generated by certain quadratic differential operators in the sense of Hörmander [12] (that is, by operators associated to homogeneous quadratic polynomials via Weyl quantization) were provided in [3, 5, 7, 17].

A different approach was introduced in [16], based on [19]. It allows one to derive observability estimates from uncertainty principles with error term established for functions in the range of the semigroup associated to the abstract Cauchy problem. In the situation of [16], these uncertainty principles are established using Gelfand–Shilov smoothing effects. By the latter we mean that for the strongly continuous contraction semigroup \((\mathcal {T}(t))_{t\ge 0}\) on \(L^2(\mathbb {R}^d),\) there exist constants \(C \ge 1\), \(t_0 \in (0,1)\), \(\nu >0,\) and \(0<\mu \le 1\) with \(\nu +\mu \ge 1\), and \(r_1 \ge 0\), \(r_2 > 0\), such that for all \(g\in L^2(\mathbb {R}^d),\) we have

$$\begin{aligned} \Vert (1+|x|^2)^{n/2}\partial ^\beta \mathcal {T}(t)g\Vert _{L^2(\mathbb {R}^d)} \le \frac{C^{1+n+|\beta |}}{t^{r_1+r_2(n+|\beta |)}} (n!)^\nu (|\beta |!)^\mu \Vert g\Vert _{L^2(\mathbb {R}^d)} \end{aligned}$$
(1.1)

for all \(t\in (0,t_0)\) and all \(n\in \mathbb {N}\), \(\beta \in \mathbb {N}_0^d\).

In this context, we prove the following variant of [16, Theorem 2.3].

Theorem 1.1

Suppose that \(f \in C^\infty (\mathbb {R}^d)\) satisfies

$$\begin{aligned} \Vert (1+|x|^2)^{n/2}\partial ^\beta f\Vert _{L^2(\mathbb {R}^d)} \le D_1D_2^{n+|\beta |}(n!)^{\nu }(|\beta |!)^\mu , \quad n\in \mathbb {N}_0,\beta \in \mathbb {N}_0^d , \end{aligned}$$
(1.2)

with some \(D_1 > 0,\) \(D_2 \ge 1,\) \(\nu \ge 0,\) and \(0 \le \mu < 1\). Moreover,  let \(\delta \in [0,1]\) with \(s = \delta \nu +\mu < 1,\) and let \(\rho :\mathbb {R}^d \rightarrow (0,\infty )\) be a measurable function satisfying

$$\begin{aligned} \rho (x) \le R(1+|x|^2)^{\delta /2} \quad \text {for all}\ x\in \mathbb {R}^d \end{aligned}$$
(1.3)

with some \(R \ge 1\) and

$$\begin{aligned} \rho (x) \le \eta |x| \quad \text {for all}\ |x| \ge r_0 \end{aligned}$$
(1.4)

with some \(\eta \in (0,1)\) and some \(r_0 \ge 1\).

Then,  for every measurable set \(\omega \subset \mathbb {R}^d\) satisfying

$$\begin{aligned} \frac{|B(x,\rho (x))\cap \omega |}{|B(x,\rho (x))|} \ge \gamma \quad \text {for all} \ x\in \mathbb {R}^d \end{aligned}$$
(1.5)

with some \(\gamma \in (0,1)\) and for every \(\varepsilon \in (0,1],\) we have

$$\begin{aligned} \Vert f\Vert _{L^2(\mathbb {R}^d)}^2 \le \mathrm {e}^{K\cdot \bigl ( 1 + \log \frac{1}{\varepsilon } + D_2^{2/(1-s)}\bigr )} \Vert f\Vert _{L^2(\omega )}^2 + \varepsilon D_1^2 , \end{aligned}$$
(1.6)

where \(K \ge 1\) is a constant depending on \(\gamma , R,r_0,\eta ,\nu ,s,\) and the dimension d.

Here, \(B(x,\rho (x))\) denotes the open Euclidean ball of radius \(\rho (x) > 0\) centered at x. Note that (1.4) is automatically satisfied if \(\delta < 1\) with, say, \(\eta = 1/2\) and \(r_0 \ge (4R)^{1/(1-\delta )}\).
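To see why (1.4) becomes automatic in this case, note that for \(|x| \ge r_0 \ge (4R)^{1/(1-\delta )} \ge 4\) one has \(\rho (x) \le R(2|x|^2)^{\delta /2} \le \sqrt{2}R|x|^{\delta } \le \sqrt{2}R|x|/r_0^{1-\delta } \le (\sqrt{2}/4)|x| < |x|/2\). The following script is a numeric sanity check of this observation, not part of the argument; the helper name and the sampled values of R and \(\delta \) are our own choices, and we test the extremal weight \(\rho (x) = R(1+|x|^2)^{\delta /2}\) permitted by (1.3).

```python
# Numeric sanity check (illustration only; the helper name check and the
# sampled values of R and delta are our own) that for delta < 1 the
# largest weight permitted by (1.3), rho(x) = R(1+|x|^2)^(delta/2),
# already satisfies rho(x) <= |x|/2 once |x| >= r0 = (4R)^(1/(1-delta)).
def check(R, delta, num=1000):
    r0 = (4 * R) ** (1 / (1 - delta))
    for i in range(num):
        x = r0 * (1 + i)  # sample radii |x| = r0, 2*r0, ..., num*r0
        rho = R * (1 + x * x) ** (delta / 2)
        assert rho <= x / 2, (R, delta, x)

for R in (1.0, 2.0, 10.0):
    for delta in (0.0, 0.5, 0.9):
        check(R, delta)
print("ok")
```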

The estimate (1.6) differs from the usual form of an uncertainty principle by the appearance of the term \(\varepsilon D_1^2\). We call this the error term since it can be chosen arbitrarily small.

In [16], the same result is proved, but under more technical assumptions, namely that \(\rho \) is a Lipschitz contraction with a uniform positive lower bound. On the other hand, the case \(s = 1\), which also allows \(\mu = 1\), is treated in [16] but lies beyond the scope of the method we discuss here; note, however, that [16] does not present any application in terms of observability for this case.

Our proof, as well as the one in [16], follows the approach from [13, 14]. The main idea of the latter is to localize certain Bernstein-type inequalities on so-called good elements of some covering of \(\mathbb {R}^d\). Since in the setting of Theorem 1.1 no Bernstein-type inequality is at our disposal, the definition of good elements replaces, in some sense, the missing Bernstein-type inequalities. The proof then reduces to a local estimate for (quasi-)analytic functions. For the latter, [16] uses an estimate for quasianalytic functions proved in [23], see also the \(L^2\)-version in [16, Proposition 5.10], together with a suitable estimate for the so-called Bang degree. By contrast, we rely on the more standard approach for (complex) analytic functions from [13, 14] and estimate Taylor expansions around suitable points. This is combined with ideas introduced in [7, 8] that exploit the quadratic decay guaranteed by (1.2) in order to reduce the considerations to a bounded subset of \(\mathbb {R}^d\). In this way, we obtain a more streamlined proof while dispensing with the technical assumptions of [16] mentioned above.

If \((\mathcal {T}(t))_{t\ge 0}\) is a strongly continuous contraction semigroup on \(L^2(\mathbb {R}^d)\) satisfying the Gelfand–Shilov smoothing effects (1.1), then \(f = \mathcal {T}(t)g\) with \(g \in L^2(\mathbb {R}^d)\) and \(t \in (0,t_0)\) satisfies (1.2) with

$$\begin{aligned} D_1 = \frac{C}{t^{r_1}}\Vert g\Vert _{L^2(\mathbb {R}^d)} \quad \text {and} \quad D_2 = \frac{C}{t^{r_2}} . \end{aligned}$$

Thus, choosing the parameter \(\varepsilon \) in Theorem 1.1 appropriately, we are able to apply [19, Lemma 2.1] in literally the same way as in the proof of [16, Theorem 2.11] and thereby obtain the following observability result, which reproduces [16, Theorem 2.11]. We omit the proof for brevity.

Corollary 1.2

Suppose that \((\mathcal {T}(t))_{t\ge 0}\) is a strongly continuous contraction semigroup satisfying (1.1) with some constants \(C \ge 1,\) \(\nu \ge 0,\) \(0\le \mu < 1,\) \(r_1\ge 0,\) \(r_2 > 0,\) and let \(\delta \in [0,1]\) and \(\rho :\mathbb {R}^d \rightarrow (0,\infty )\) be as in Theorem 1.1. Then

$$\begin{aligned} \Vert \mathcal {T}(T)g\Vert _{L^2(\mathbb {R}^d)}^2 \le N\exp \biggl (\frac{N}{T^{\frac{2r_2}{1-s}}} \biggr )\int \limits _0^T \Vert \mathcal {T}(t)g\Vert _{L^2(\omega )}^2 \mathop {}\!\mathrm {d}t , \quad g \in L^2(\mathbb {R}^d) ,\ T > 0 , \end{aligned}$$

for every measurable set \(\omega \subset \mathbb {R}^d\) satisfying (1.5) with some \(\gamma \in (0,1)\). Here,  \(N \ge 1\) is a constant depending on \(\gamma , R,r_0,\eta ,\nu ,s,C,r_2,\) and the dimension d.

As shown in [2, Corollary 2.2] and [16, Lemma 5.2], the semigroup generated by the (negative) fractional (an-)isotropic Shubin operator \(-((-\Delta )^m + |x|^{2k})^\theta \) with \(k,m\in \mathbb {N}\) and \(\theta > 1/(2m)\) satisfies (1.1) with

$$\begin{aligned} \nu = \max \Bigl \{ \frac{1}{2k\theta } , \frac{m}{k+m} \Bigr \} \quad \text {and} \quad \mu = \max \Bigl \{ \frac{1}{2m\theta } , \frac{k}{k+m} \Bigr \} . \end{aligned}$$

Hence, Corollary 1.2 can be applied, which reproduces [16, Corollary 2.12]. It should, however, be mentioned that in the case \(\nu = \mu = 1/2\) of Corollary 1.2 (a particular instance being a Shubin operator with \(k=m\) and \(\theta =1\)), stronger results than Corollary 1.2 in terms of the conditions on \(\omega \) are available, see [7, Theorem 3.5]. More precisely, in this case the density \(\gamma \) is allowed to be variable and to exhibit a certain subexponential decay, so that \(\omega \) may even have finite measure. In the present setting, we are able to obtain a variant of Theorem 1.1 where the density \(\gamma \) is allowed to exhibit polynomial decay, but the result seems not to be sufficient to give observability as in Corollary 1.2, see Theorem 2.5 and Remark 2.6 below. If even \(k=m=1\), the Shubin operator corresponds to the harmonic oscillator, for which a sharper observability constant can also be obtained. Indeed, [7, Theorem 6.1] shows that the observability constant can then be chosen to vanish as \(T\rightarrow \infty \). We expect that such results also hold for general Shubin operators. However, this would require setting up a suitable spectral inequality for these operators, which seems out of reach at the moment.
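Since \(\theta > 1/(2m)\), the formulas above always give \(\mu < 1\), and \(\nu + \mu \ge m/(k+m) + k/(k+m) = 1\) as required in (1.1); for \(k = m\) and \(\theta = 1\) they reduce to \(\nu = \mu = 1/2\). The following script is a quick numeric confirmation over a sample grid (an illustration only; the helper shubin_exponents and the chosen grid are our own, not from [2] or [16]).

```python
from fractions import Fraction

# Quick numeric confirmation (illustration only; the helper
# shubin_exponents and the parameter grid are our own choices) of the
# exponents quoted above for the fractional Shubin operator.
def shubin_exponents(k, m, theta):
    nu = max(1 / (2 * k * theta), Fraction(m, k + m))
    mu = max(1 / (2 * m * theta), Fraction(k, k + m))
    return nu, mu

for k in range(1, 6):
    for m in range(1, 6):
        for theta in (Fraction(2, 3), Fraction(1), Fraction(3)):
            if theta <= Fraction(1, 2 * m):
                continue  # hypothesis theta > 1/(2m) fails
            nu, mu = shubin_exponents(k, m, theta)
            assert mu < 1            # needed to apply Corollary 1.2
            assert nu + mu >= 1      # as required in (1.1)

assert shubin_exponents(2, 2, Fraction(1)) == (Fraction(1, 2), Fraction(1, 2))
print("ok")
```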

2 Proof of Theorem 1.1

Let \(\varepsilon \in (0,1]\), and choose

$$\begin{aligned} r := \frac{D_2}{\sqrt{\varepsilon /2}} \ge 1 , \end{aligned}$$
(2.1)

so that

$$\begin{aligned} \sup _{x\in \mathbb {R}^d \setminus B(0,r)} \frac{1}{1+|x|^2} \le \frac{\varepsilon }{2D_2^2} . \end{aligned}$$

Then (1.2) with \(n = 1\) and \(\beta = 0\) implies that

$$\begin{aligned} \Vert f\Vert _{L^2(\mathbb {R}^d \setminus B(0,r))}^2 \le \frac{\varepsilon }{2} \cdot \frac{\Vert (1+|x|^2)^{1/2} f\Vert _{L^2(\mathbb {R}^d)}^2}{D_2^2} \le \frac{\varepsilon D_1^2}{2} . \end{aligned}$$
(2.2)
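The step from (1.2) to (2.2) is a Chebyshev-type argument: for \(|x| > r\) one has \(|f(x)|^2 \le (1+|x|^2)|f(x)|^2/(1+r^2)\) pointwise, so the tail mass of f is controlled by the weighted norm. The following is a minimal numeric illustration in dimension \(d = 1\); the sample function and quadrature parameters are our own choices, not part of the proof.

```python
import math

# Numeric illustration (not part of the proof) of the Chebyshev-type step
# behind (2.2): for |x| > r one has |f(x)|^2 <= (1+|x|^2)|f(x)|^2/(1+r^2),
# so the tail mass of f is controlled by the weighted norm.  We check this
# in d = 1 for the sample function f(x) = exp(-x^2) via a midpoint sum.
def tail_bound_holds(r, f=lambda x: math.exp(-x * x), L=20.0, n=40000):
    h = 2 * L / n
    xs = [-L + (i + 0.5) * h for i in range(n)]
    tail = sum(f(x) ** 2 for x in xs if abs(x) > r)
    weighted = sum((1 + x * x) * f(x) ** 2 for x in xs)
    return tail * h <= weighted * h / (1 + r * r)

assert all(tail_bound_holds(r) for r in (0.5, 1.0, 2.0, 5.0))
print("ok")
```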

We abbreviate \(w(x) = w_\delta (x) = (1+|x|^2)^{\delta /2}\). If \(\delta < 1\), we infer from [16, Lemma 5.3] that (1.2) implies

$$\begin{aligned} \Vert w^n\partial ^\beta f\Vert _{L^2(\mathbb {R}^d)} \le D_1\tilde{D}_2^{n+|\beta |}(n!)^{\delta \nu }(|\beta |!)^\mu , \quad n\in \mathbb {N}_0,\beta \in \mathbb {N}_0^d, \end{aligned}$$
(2.3)

with \(\tilde{D}_2 = 8^\nu \mathrm {e}^\nu D_2 \ge 1\). If \(\delta = 1\), then (2.3) agrees with (1.2) with \(\tilde{D}_2 = D_2 \ge 1\). In the remainder of the proof, we therefore work only with (2.3).

The proof of the theorem now proceeds along the following lines: Inspired by [9], the estimates (2.3) imply, in particular, that f is analytic, see Lemma A.1. Moreover, by (2.2), the \(L^2\)-mass of f outside of the ball B(0, r) can be subsumed into the error term, that is, the second summand on the right-hand side of (1.6). Using Besicovitch’s covering theorem, the region B(0, r) is then covered by (at most countably many) balls \(B(x,\rho (x))\) satisfying \(B(x,\rho (x))\cap B(0,r) \ne \emptyset \). Based on (2.3) and following [13, 14] and [16], these balls are classified into good and bad ones: on good balls, local Bernstein-type estimates are available, while the contribution of the bad balls can again be subsumed into the error term, see Lemma 2.1. Following again [13, 14], on good balls the local Bernstein-type estimates allow us to bound the Taylor expansions of f around suitably chosen points, which by analyticity of f leads to local estimates of the desired form, see Lemmas 2.2–2.4. Summing over all good balls finally concludes the proof.

We consider balls \(B(x,\rho (x))\) with \(B(x,\rho (x))\cap B(0,r) \ne \emptyset \). The latter requires that \(|x|-\rho (x) < r\) and, thus, \(|x| < r_0\) or \((1-\eta )|x| \le |x| - \rho (x) < r\), that is, \(|x| < r/(1-\eta )\). By Besicovitch’s covering theorem, see, e.g., [18, Theorem 2.7], there are \(\mathcal {K}_0 \subset \mathbb {N}\) and a collection of points \((y_k)_{k \in \mathcal {K}_0}\) with \(| y_k | < \max \{ r_0 , r/(1-\eta ) \}\) such that the family of balls \(Q_k = B(y_k,\rho (y_k))\), \(k \in \mathcal {K}_0\), gives an essential covering of \(B(0,\max \{ r_0 , r/(1-\eta ) \})\) with overlap at most \(\kappa \ge 1\). Here, the proof of [18, Theorem 2.7] and a simple calculation show that \(\kappa \) can be chosen as \(\kappa = K_{\mathrm {Bes}}^d\) with a universal constant \(K_{\mathrm {Bes}} \ge 1\). With \(Q_0 := \mathbb {R}^d \setminus \bigcup _{k \in \mathcal {K}_0} Q_k\) and \(\mathcal {K}:= \mathcal {K}_0 \cup \{0\}\), the family \((Q_k)_{k \in \mathcal {K}}\) thus gives an essential covering of \(\mathbb {R}^d\) with overlap at most \(\kappa = K_{\mathrm {Bes}}^d \ge 1\).

2.1 Good and bad balls

Similarly as in [16], we now define the so-called good elements of the covering. We do this in such a way that we have some localized Bernstein-type inequality on all good elements. More precisely, we say that \(Q_k\), \(k \in \mathcal {K}_0\), is good with respect to f if for all \(m \in \mathbb {N}_0,\) we have

$$\begin{aligned} \sum _{|\beta |=m} \frac{1}{\beta !}\Vert w^m \partial ^\beta f\Vert _{L^2(Q_k)}^2 \le \frac{2\kappa }{\varepsilon } \cdot \frac{2^{m+1} d^m q_m^2}{m!} \Vert f\Vert _{L^2(Q_k)}^2 , \end{aligned}$$

where

$$\begin{aligned} q_m = \tilde{D}_2^{2m}(m!)^{\delta \nu +\mu } = \tilde{D}_2^{2m}(m!)^s . \end{aligned}$$

We call \(Q_k\), \(k \in \mathcal {K}_0\), bad if it is not good.

Although we cannot show that the mass of f on the good balls covers some fixed fraction of the mass of f on the whole of \(\mathbb {R}^d\), inequality (2.3) nevertheless implies that the mass of f on the bad balls is bounded by \(\varepsilon D_1^2/2\). Hence, the contribution of the bad elements can likewise be subsumed into the error term. This is summarized in the following result.

Lemma 2.1

We have

$$\begin{aligned} \Vert f\Vert _{L^2(\mathbb {R}^d)}^2 \le \Vert f\Vert _{L^2(\bigcup _{k :Q_k\, {\mathrm {good}}} Q_k)}^2 + \varepsilon D_1^2 . \end{aligned}$$

Proof

Since

$$\begin{aligned} \Vert f\Vert _{L^2(\mathbb {R}^d)}^2 \le \Vert f\Vert _{L^2(\bigcup _{k :Q_k\, {\mathrm {good}}} Q_k)}^2 + \Vert f\Vert _{L^2(\bigcup _{k :Q_k\, {\mathrm {bad}}} Q_k)}^2 + \Vert f\Vert _{L^2(Q_0)}^2 , \end{aligned}$$

it suffices to show that

$$\begin{aligned} \sum _{k :Q_k\, {\mathrm {bad}}} \Vert f\Vert _{L^2(Q_k)}^2 + \Vert f\Vert _{L^2(Q_0)}^2 \le \varepsilon D_1^2 . \end{aligned}$$
(2.4)

To this end, we first note that \(Q_0 \subset \mathbb {R}^d \setminus B(0,r)\) and, thus, \(\Vert f\Vert _{L^2(Q_0)}^2 \le \varepsilon D_1^2/2\) by estimate (2.2). Let now \(Q_k\), \(k \in \mathcal {K}_0\), be bad, that is, there is \(n \in \mathbb {N}_0\) such that

$$\begin{aligned} \Vert f\Vert _{L^2(Q_k)}^2&\le \frac{\varepsilon }{2\kappa } \cdot \frac{n!}{2^{n+1} d^n} \sum _{|\beta |=n} \frac{1}{\beta !} \Bigl ( \frac{\Vert w^n \partial ^\beta f\Vert _{L^2(Q_k)}}{q_n} \Bigr )^2\\&\le \frac{\varepsilon }{2\kappa } \sum _{m=0}^\infty \frac{m!}{2^{m+1} d^m} \sum _{|\beta |=m} \frac{1}{\beta !} \Bigl ( \frac{\Vert w^m \partial ^\beta f\Vert _{L^2(Q_k)}}{q_m} \Bigr )^2 . \end{aligned}$$

Summing over all bad \(Q_k\) with \(k\in \mathcal {K}_0\), using that the covering has overlap at most \(\kappa \), and using (2.3) then gives

$$\begin{aligned} \sum _{\begin{array}{c} k\in \mathcal {K}_0\\ Q_k\, {\mathrm {bad}} \end{array}} \Vert f\Vert _{L^2(Q_k)}^2 \le \frac{\varepsilon }{2} \cdot D_1^2 \sum _{m=0}^\infty \frac{m!}{2^{m+1} d^m} \sum _{|\beta |=m} \frac{1}{\beta !} = \frac{\varepsilon }{2} \cdot D_1^2 \sum _{m=0}^\infty \frac{1}{2^{m+1}} = \frac{\varepsilon }{2} D_1^2 , \end{aligned}$$

where we used that \(\sum _{|\beta |=m} 1/\beta ! = d^m/m!\). This proves (2.4).

\(\square \)
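The identity \(\sum _{|\beta |=m} 1/\beta ! = d^m/m!\) used at the end of the proof is the multinomial theorem applied to \((1+\dots +1)^m = d^m\). The following brute-force check for small d and m is an illustration only; the helper name lhs is ours.

```python
import math
from itertools import product

# Numeric check (illustration only; the helper name lhs is ours) of the
# identity used in the proof of Lemma 2.1:
#   sum_{|beta| = m} 1/beta!  =  d^m / m!   for beta in N_0^d,
# which is the multinomial theorem applied to (1 + ... + 1)^m = d^m.
def lhs(d, m):
    return sum(
        1.0 / math.prod(math.factorial(b) for b in beta)
        for beta in product(range(m + 1), repeat=d)
        if sum(beta) == m
    )

for d in (1, 2, 3):
    for m in range(6):
        assert abs(lhs(d, m) - d ** m / math.factorial(m)) < 1e-12
print("ok")
```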

Now, as in [9], see also [10, 13, 14], we use the definition of good elements to extract a pointwise estimate for the derivatives of f.

Lemma 2.2

Let \(Q_k\) be good. Then there is \(x_k \in Q_k\) such that for all \(\beta \in \mathbb {N}_0^d\) with \(|\beta | = m \in \mathbb {N}_0,\) we have

$$\begin{aligned} |\partial ^\beta f(x_k)| \le \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} \cdot 2^{m+1} d^{m/2} \cdot C(k,m)^{1/2} \cdot \frac{\Vert f\Vert _{L^2(Q_k)}}{\sqrt{|Q_k|}} \end{aligned}$$
(2.5)

with

$$\begin{aligned} C(k,m) = q_m^2\sup _{x \in Q_k} w(x)^{-2m} . \end{aligned}$$
(2.6)

Proof

Assume, to the contrary, that for every \(x \in Q_k\) there is \(m = m(x) \in \mathbb {N}_0\) such that

$$\begin{aligned} \sum _{|\beta |=m} \frac{1}{\beta !}|\partial ^\beta f(x)|^2 > \frac{2\kappa }{\varepsilon } \cdot \frac{4^{m+1}d^m}{m!} \cdot C(k,m) \cdot \frac{\Vert f\Vert _{L^2(Q_k)}^2}{|Q_k|} . \end{aligned}$$

Reordering the terms and summing over all \(m \in \mathbb {N}_0\) in order to get rid of the x-dependence, we then obtain

$$\begin{aligned} \frac{\Vert f\Vert _{L^2(Q_k)}^2}{|Q_k|} < \frac{\varepsilon }{2\kappa } \sum _{m=0}^\infty \frac{m!}{4^{m+1}d^m C(k,m)} \sum _{|\beta |=m} \frac{1}{\beta !} |\partial ^\beta f(x)|^2 \end{aligned}$$
(2.7)

for all \(x \in Q_k\). We observe that

$$\begin{aligned} \Vert \partial ^\beta f\Vert _{L^2(Q_k)}^2 = \Vert w^{-m}w^m\partial ^\beta f\Vert _{L^2(Q_k)}^2 \le \sup _{x \in Q_k} w(x)^{-2m} \cdot \Vert w^m\partial ^\beta f\Vert _{L^2(Q_k)}^2 . \end{aligned}$$

Thus, integrating (2.7) over \(x \in Q_k\) and using that \(Q_k\) is good gives

$$\begin{aligned} \Vert f\Vert _{L^2(Q_k)}^2&< \frac{\varepsilon }{2\kappa } \sum _{m=0}^\infty \frac{m!}{4^{m+1}d^m C(k,m)} \sum _{|\beta |=m} \frac{1}{\beta !} \Vert \partial ^\beta f\Vert _{L^2(Q_k)}^2\\&\le \Vert f\Vert _{L^2(Q_k)}^2 \sum _{m=0}^\infty 2^{-m-1} = \Vert f\Vert _{L^2(Q_k)}^2 , \end{aligned}$$

leading to a contradiction. Hence, there is \(x_k \in Q_k\) such that for all \(\beta \in \mathbb {N}_0^d\) with \(|\beta | = m,\) we have

$$\begin{aligned} |\partial ^\beta f(x_k)|^2 \le \sum _{|\beta |=m} \frac{m!}{\beta !} |\partial ^\beta f(x_k)|^2 \le \frac{2\kappa }{\varepsilon } \cdot 4^{m+1}d^m \cdot C(k,m) \cdot \frac{\Vert f\Vert _{L^2(Q_k)}^2}{|Q_k|} , \end{aligned}$$

and taking square roots proves the claim.

\(\square \)

2.2 The local estimate

In order to estimate f on each (good) \(Q_k\), \(k \in \mathcal {K}_0\), we use a complex-analytic local estimate that goes back to [13, 14, 24]. It was later used in [9] and implicitly also in [4, 10, 17, 27]. We rely here on a particular case of the formulation in [9].

Lemma 2.3

(see [9, Lemma 3.5]). Let \(k\in \mathcal {K}_0,\) and suppose that the function \(f|_{Q_k} :Q_k \rightarrow \mathbb {C}\) has a bounded analytic extension \(F :Q_k + D_{8\rho (x_k)} \rightarrow \mathbb {C},\) where \(D_{8\rho (x_k)} = D(0,8\rho (x_k))\times \dots \times D(0,8\rho (x_k))\subset \mathbb {C}^d\) is the complex polydisk of radius \(8\rho (x_k)\).

Then,  for every measurable set \(\omega \subset \mathbb {R}^d\) with \(|Q_k\cap \omega | > 0,\) we have

$$\begin{aligned} \Bigl ( 24d2^d\cdot \frac{|Q_k|}{|Q_k\cap \omega |}\Bigr )^{1+4\log M_k/\log 2} \Vert f\Vert _{L^2(Q_k \cap \omega )}^2 \ge \Vert f\Vert _{L^2(Q_k)}^2 \end{aligned}$$

with

$$\begin{aligned} M_k := \frac{\sqrt{|Q_k|}}{\Vert f\Vert _{L^2(Q_k)}} \cdot \sup _{z \in Q_k + D_{8\rho (x_k)}} |F(z)| \ge 1. \end{aligned}$$

On good \(Q_k\), the hypotheses of Lemma 2.3 are indeed satisfied, and the pointwise estimates (2.5) can be used to obtain a suitable upper bound for the quantity \(M_k\).

Lemma 2.4

Let \(Q_k\) be good. Then the restriction \(f|_{Q_k}\) has a bounded analytic extension \(F_k :Q_k + D_{8\rho (x_k)} \rightarrow \mathbb {C},\) and \(M_k\) as in Lemma 2.3 satisfies

$$\begin{aligned} \log M_k \le \log (2K') + \frac{1}{2}\log \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr ) + D^{1/(1-s)} , \end{aligned}$$

where \(K' \ge 1\) is a constant depending only on s and where

$$\begin{aligned} D = 40d^{3/2}\tilde{D}_2^2 R\max \{ r_0 , (1-\eta )^{-1} \} . \end{aligned}$$

Proof

Let \(x_k \in Q_k\) be a point as in Lemma 2.2. For every \(z \in x_k + D_{10\rho (x_k)},\) we then have

$$\begin{aligned} \sum _{\beta \in \mathbb {N}_0^d}&\frac{|\partial ^\beta f(x_k)|}{\beta !} |(z-x_k)^\beta |\\&\le \sum _{m\in \mathbb {N}_0} \sum _{|\beta |=m} \frac{1}{\beta !} \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} 2^{m+1} d^{m/2} C(k,m)^{1/2} (10\rho (x_k))^{|\beta |} \frac{\Vert f\Vert _{L^2(Q_k)}}{\sqrt{|Q_k|}}\\&= 2\Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} \frac{\Vert f\Vert _{L^2(Q_k)}}{\sqrt{|Q_k|}} \sum _{m\in \mathbb {N}_0} C(k,m)^{1/2} \frac{(20d^{3/2}\rho (x_k))^m}{m!} , \end{aligned}$$

where for the last equality we again used that \(\sum _{|\beta | = m} 1/\beta ! = d^m/m!\). Taking into account that \(Q_k + D_{8\rho (x_k)} \subset x_k + D_{10\rho (x_k)}\) and that f is analytic by Lemma A.1, this shows that the Taylor expansion of f around \(x_k\) defines a bounded analytic extension \(F_k :Q_k + D_{8\rho (x_k)} \rightarrow \mathbb {C}\) of f and that

$$\begin{aligned} M_k \le 2 \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} \sum _{m=0}^\infty C(k,m)^{1/2} \frac{(20d^{3/2}\rho (x_k))^m}{m!} . \end{aligned}$$
(2.8)

Now, suppose first that \(|x_k| \le r_0\) with \(r_0 \ge 1\) as in (1.4). Then

$$\begin{aligned} \rho (x_k) \le R(1+r_0^2)^{\delta /2} \le R(1+r_0^2)^{1/2} \le 2Rr_0 . \end{aligned}$$

Using (2.8), (2.6), and the definition of \(q_m\), it follows that

$$\begin{aligned} M_k&\le 2 \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} \sum _{m=0}^\infty q_m \cdot \sup _{x\in Q_k} \frac{1}{w(x)^m}\cdot \frac{(40d^{3/2}Rr_0)^m}{m!}\\&\le 2 \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} \sum _{m=0}^\infty \frac{(40d^{3/2}\tilde{D}_2^2 Rr_0)^m}{(m!)^{1-s}} , \end{aligned}$$

where we have taken into account that \(w(x) \ge 1\) for all \(x\in \mathbb {R}^d\).

On the other hand, if \(|x_k| \ge r_0\), then for all \(x \in Q_k,\) we have by (1.4) the lower bound \(|x| \ge |x_k| - \rho (x_k) \ge (1-\eta )|x_k| > 0\) and, thus,

$$\begin{aligned} \frac{\rho (x_k)}{w(x)} \le \frac{Rw(x_k)}{w(x)} \le 2R \Bigl (\frac{|x_k|}{|x|}\Bigr )^\delta \le \frac{2R}{(1-\eta )^{\delta }} \le \frac{2R}{1-\eta } . \end{aligned}$$

Using again (2.8) and (2.6) then gives

$$\begin{aligned} M_k&\le 2 \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} \sum _{m=0}^\infty q_m \cdot \sup _{x \in Q_k} \frac{\rho (x_k)^m}{w(x)^m} \cdot \frac{(20d^{3/2})^m}{m!}\\&\le 2 \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} \sum _{m=0}^\infty \frac{(40d^{3/2}\tilde{D}_2^2R/(1-\eta ) )^m}{(m!)^{1-s}} . \end{aligned}$$

We conclude that in both cases \(|x_k| \le r_0\) and \(|x_k| \ge r_0,\) we have

$$\begin{aligned} M_k&\le 2 \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} \sum _{m=0}^\infty \frac{(40d^{3/2}\tilde{D}_2^2 R\max \{ r_0 , (1-\eta )^{-1} \})^m}{(m!)^{1-s}}\\&= 2 \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2} \sum _{m=0}^\infty \frac{D^m}{(m!)^{1-s}} . \end{aligned}$$

We estimate the series using the asymptotics

$$\begin{aligned} \sum _{m = 0}^\infty \frac{x^m}{(m!)^p} = \frac{\mathrm {e}^{px^{1/p}}}{p^{1/2}(2\pi x^{1/p})^{(p-1)/2}}\Bigl \{ 1 + O\Bigl ( \frac{1}{x^{1/p}} \Bigr ) \Bigr \} \qquad (p \in (0,4],\ x \rightarrow \infty ) \end{aligned}$$

derived in [25, Chapter 8, Eq. (8.07)]. This yields

$$\begin{aligned} \sum _{m=0}^\infty \frac{D^m}{(m!)^{1-s}} \le K'\mathrm {e}^{D^{1/(1-s)}} , \end{aligned}$$

where \(K'\) is a constant depending only on s. Hence,

$$\begin{aligned} M_k \le 2K'\Bigl ( \frac{2\kappa }{\varepsilon } \Bigr )^{1/2}\mathrm {e}^{D^{1/(1-s)}} \end{aligned}$$

and therefore

$$\begin{aligned} \log M_k \le \log (2K') + \frac{1}{2}\log \Bigl ( \frac{2\kappa }{\varepsilon } \Bigr ) + D^{1/(1-s)} . \end{aligned}$$

\(\square \)
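The series bound used in the proof can be sanity-checked numerically: writing \(p = 1-s \in (0,1)\), the series \(\sum _m x^m/(m!)^p\) grows like \(\mathrm {e}^{px^{1/p}}\) up to an algebraic factor, so for moderate x it is sandwiched between \(\mathrm {e}^{px^{1/p}}\) and \(\mathrm {e}^{x^{1/p}}\), consistent with the bound by \(K'\mathrm {e}^{D^{1/(1-s)}}\). The short log-space evaluation below is an illustration only; the parameter values are arbitrary choices of ours.

```python
import math

# Numeric sanity check (illustration only) of the series bound above:
# with p = 1 - s, the series sum_m x^m/(m!)^p grows like e^{p x^{1/p}}
# up to an algebraic factor, so for the sampled values it is sandwiched
# between e^{p x^{1/p}} and e^{x^{1/p}}.  Terms are evaluated in
# log-space via lgamma to avoid overflow; p and x are arbitrary choices.
def series(x, p, terms=400):
    return sum(math.exp(m * math.log(x) - p * math.lgamma(m + 1))
               for m in range(terms))

p, x = 0.5, 5.0
s_val = series(x, p)
assert math.exp(p * x ** (1 / p)) <= s_val <= math.exp(x ** (1 / p))
print("ok")
```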

We are now in a position to prove our main result.

Proof of Theorem 1.1

By hypothesis, we have \(|Q_k| / |Q_k\cap \omega | \le 1/\gamma \). Combining this with Lemma 2.3 and the estimate for \(\log M_k\) derived in Lemma 2.4, we obtain for all good \(Q_k\) that

$$\begin{aligned} \Bigl ( \frac{24d\cdot 2^d}{\gamma } \Bigr )^{5+\bigl (4\log (K')+2\log (\frac{2\kappa }{\varepsilon }) + 4D^{1/(1-s)}\bigr )/\log 2} \Vert f\Vert _{L^2(Q_k \cap \omega )}^2 \ge \Vert f\Vert _{L^2(Q_k)}^2 . \end{aligned}$$

In particular,

$$\begin{aligned} \Bigl (\frac{1}{\gamma }\Bigr )^{K''\cdot \bigl (1+\log \frac{1}{\varepsilon } + D_2^{2/(1-s)}\bigr )} \Vert f\Vert _{L^2(Q_k \cap \omega )}^2 \ge \Vert f\Vert _{L^2(Q_k)}^2 , \end{aligned}$$
(2.9)

where \(K'' \ge 1\) is a constant depending on \(R,r_0,\eta ,\nu ,s\), and the dimension d. Summing over all good \(Q_k\) gives

$$\begin{aligned} \sum _{k:Q_k\ \text {good}} \Vert f\Vert _{L^2(Q_k)}^2&\le \Bigl (\frac{1}{\gamma }\Bigr )^{K''\cdot \bigl (1+\log \frac{1}{\varepsilon } + D_2^{2/(1-s)}\bigr )} \sum _{k:Q_k\ \text {good}} \Vert f\Vert _{L^2(Q_k\cap \omega )}^2\\&\le \kappa \Bigl (\frac{1}{\gamma }\Bigr )^{K''\cdot \bigl (1+\log \frac{1}{\varepsilon } + D_2^{2/(1-s)}\bigr )}\Vert f\Vert _{L^2(\omega )}^2 . \end{aligned}$$

Together with Lemma 2.1 this proves the theorem with \(K \le K''\log (1/\gamma ) + \log \kappa \).

\(\square \)

A slight adaptation of the proof of Theorem 1.1 allows us to consider a variable density \(\gamma = \gamma (x)\) with a polynomial decay.

Theorem 2.5

Let \(D_1,D_2,\mu ,\nu ,\delta ,\) and \(\rho \) be as in Theorem 1.1 above, and suppose that f satisfies (1.2). Let \(\omega \subset \mathbb {R}^d\) be any measurable set satisfying

$$\begin{aligned} \frac{|B(x,\rho (x))\cap \omega |}{|B(x,\rho (x))|} \ge \frac{\gamma _0}{1+|x|^a} \quad \text {for all} \quad x\in \mathbb {R}^d \end{aligned}$$

with some \(\gamma _0\in (0,1)\) and some \(a > 0\).

Then

$$\begin{aligned} \Vert f\Vert _{L^2(\mathbb {R}^d)}^2 \le \mathrm {e}^{K\cdot (1+\log \frac{1}{\varepsilon }+D_2^{2/(1-s)})^2} \Vert f\Vert _{L^2(\omega )}^2 + \varepsilon D_1^2 , \end{aligned}$$
(2.10)

where \(K \ge 1\) is a constant depending on \(\gamma _0,R,r_0,\eta ,\nu ,s,a,\) and the dimension d.

Proof

We have \(|Q_k| / |Q_k\cap \omega | \le (1+|y_k|^a) / \gamma _0\), and it is easily checked that

$$\begin{aligned} 1+|y_k|^a \le \frac{2^{1+a}r_0^a}{(1-\eta )^a}r^a \end{aligned}$$

for all \(k\in \mathcal {K}_0\). Since r is as in (2.1), this shows

$$\begin{aligned} \log ( 1+|y_k|^a ) \le \tilde{K}\Bigl (1 + \log \frac{1}{\varepsilon } + \log D_2\Bigr ) , \end{aligned}$$

where \(\tilde{K} \ge 1\) is a constant depending on \(r_0, \eta \), and a. In light of the inequality \(\log D_2 \le D_2^{2/(1-s)}\), the theorem now follows in the same way as Theorem 1.1.

\(\square \)
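The elementary bound in the proof follows since \(r, r_0 \ge 1\) and \(1/(1-\eta ) > 1\) give \(\max \{ r_0 , r/(1-\eta ) \} \le r_0 r/(1-\eta )\), whence \(1 + |y_k|^a \le 2\bigl (r_0 r/(1-\eta )\bigr )^a \le 2^{1+a}r_0^a r^a/(1-\eta )^a\). The spot check below is a numeric illustration only; the sampled values are our own.

```python
# Numeric spot check (illustration only; the sampled values are our own)
# of the elementary bound used in the proof of Theorem 2.5:
#   |y| < max{r0, r/(1-eta)}  implies
#   1 + |y|^a <= 2^(1+a) * r0^a * r^a / (1-eta)^a,
# valid whenever r, r0 >= 1 and eta in (0,1).
def bound_ok(y, a, r, r0, eta):
    assert abs(y) < max(r0, r / (1 - eta))
    return 1 + abs(y) ** a <= 2 ** (1 + a) * r0 ** a * r ** a / (1 - eta) ** a

samples = [(0.5, 1.0, 1.0, 1.0, 0.5),
           (9.9, 2.0, 5.0, 1.0, 0.5),
           (19.0, 0.5, 2.0, 10.0, 0.9)]
assert all(bound_ok(*t) for t in samples)
print("ok")
```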

Remark 2.6

The exponent in (2.10) depends on \(\varepsilon \) essentially through \((\log (1/\varepsilon ))^2\), as compared to just \(\log (1/\varepsilon )\) in (1.6) of Theorem 1.1. To the best of our knowledge, the proof of the observability estimate from [16, 19] does not go through with this kind of dependence.