Proof II: Lower Bounds

Part of the book series: Pseudo-Differential Operators (PDO, volume 14)

Abstract

In this chapter we give a lower bound on \(\ln \det S_{\delta ,z}\) which is valid with high probability, and then, using also the upper bounds of Chap. 16, we conclude the proof of Theorem 15.3.1 with the help of Theorem 12.1.2.


Notes

  1. We could also replace \(1_{D(0,1)}(z-w)\) with a smooth cutoff \(\chi (z-w)\), where \(1_{D(0,1)}\le \chi \in C_0^\infty (D(0,1+\theta ))\) for some 0 < θ ≪ 1, in order to make \(\epsilon \) continuous in z.

  2. Indeed,

    $$\displaystyle \begin{aligned}\begin{aligned}t_k(E_{-+}^\delta ) &\le t_k(E_{-+}^0 )+\| E_{-+}^\delta -E_{-+}^0\|\le t_k(E_{-+}^0)+\delta +2\frac{\delta ^2}{\tau _0}\\ &\le t_k(E_{-+}^0)+2\delta \le t_k(E_{-+}^0)+\tau _0\le 2\tau _0 . \end{aligned} \end{aligned}$$

References

  1. I.C. Gohberg, M.G. Krein, Introduction to the Theory of Linear Non-selfadjoint Operators. Translations of Mathematical Monographs, vol. 18 (AMS, Providence, 1969)

  2. M. Hager, J. Sjöstrand, Eigenvalue asymptotics for randomly perturbed non-selfadjoint operators. Math. Ann. 342(1), 177–243 (2008). http://arxiv.org/abs/math/0601381

  3. A. Melin, J. Sjöstrand, Determinants of pseudodifferential operators and complex deformations of phase space. Methods Appl. Anal. 9(2), 177–238 (2002). https://arxiv.org/abs/math/0111292

  4. J. Sjöstrand, Resonances for bottles and trace formulae. Math. Nachr. 221, 95–149 (2001)

  5. J. Sjöstrand, Eigenvalue distribution for non-self-adjoint operators with small multiplicative random perturbations. Ann. Fac. Sci. Toulouse 18(4), 739–795 (2009). http://arxiv.org/abs/0802.3584

  6. J. Sjöstrand, Eigenvalue distribution for non-self-adjoint operators on compact manifolds with small multiplicative random perturbations. Ann. Fac. Sci. Toulouse 19(2), 277–301 (2010). http://arxiv.org/abs/0809.4182


17.A Appendix: Grushin Problems and Singular Values


We mainly consider the case of an unbounded operator

$$\displaystyle \begin{aligned}P=P_0+V, \end{aligned}$$

where \(P_0\) is an elliptic differential operator on X and \(V\in L^\infty (X)\). The underlying Hilbert space is \(\mathcal {H}=L^2(X)=H^0\) and we will view P as an operator from \(H^m\) to \(H^0\). The dual of \(H^m\) is \(H^{-m}\) and we shall keep in mind the variational point of view with the triple

$$\displaystyle \begin{aligned}H^m\subset H^0\subset H^{-m}.\end{aligned}$$

Consider \(P^*P+w:H^m\to H^{-m}\) for \(w\ge 0\), where \(P^*P\) is defined in the variational sense. For \(u\in H^m\), we have

$$\displaystyle \begin{aligned} ((P^*P+w)u|u)=\Vert Pu\Vert ^2+w\Vert u\Vert ^2. \end{aligned} $$
(17.A.1)

Proposition 17.A.1

If \(w\in \,]0,+\infty [\), then \(P^*P+w:H^m\to H^{-m}\) is bijective with bounded inverse.

Proof

The injectivity is clear since \(((P^*P+w)u|u)\ge w\Vert u\Vert ^2\), \(u\in H^m\), and combining this with the standard elliptic apriori estimate, \(\|u\|{ }_{H^m}\le C(\|Pu\| +\|u\| )\), we get for some constant \(C_w>0\)

$$\displaystyle \begin{aligned}\Vert u\Vert _{H^m}^2\le C_w((P^*P+w)u|u)\le C_w\Vert (P^*P+w)u\Vert _{H^{-m}}\Vert u\Vert _{H^m}, \end{aligned}$$

implying

$$\displaystyle \begin{aligned}\Vert u\Vert _{H^m}\le C_w\Vert (P^*P+w)u\Vert _{H^{-m}}. \end{aligned}$$

From this estimate and the fact that \((P^*P+w)^*=P^*P+w\), when we take the adjoint in the sense of bounded operators \(H^m\to H^{-m}\), it is standard to get the desired conclusion. □

Notice that when P is injective, then by ellipticity and compactness, we have \(\| u\| _{H^m}\le C\Vert Pu\Vert \), \(u\in H^m\), for some C > 0 and we get the conclusion of the proposition also when w = 0.

The operator \((P^*P+w)^{-1}\) induces a compact self-adjoint operator \(H^0\to H^0\). The range consists of all \(u\in H^m\) such that \(Pu\in H^m\). The spectral theorem for compact self-adjoint operators tells us that there is an orthonormal basis of eigenfunctions \(e_1,e_2,\dots \) in \(H^0\) such that

$$\displaystyle \begin{aligned} (P^*P+w)^{-1}e_j=\mu _j(w)^2e_j,\quad \mu _1(w)\ge \mu _2(w)\ge \cdots >0, \end{aligned} $$
(17.A.2)

where \(\mu _j(w)\to 0\) when \(j\to +\infty \). Clearly \(e_j\in H^m\), \(Pe_j\in H^m\), so we can apply \(P^*P+w\) to (17.A.2) and get

$$\displaystyle \begin{aligned} (P^*P+w)e_j=\mu _j(w)^{-2}e_j, \end{aligned} $$
(17.A.3)

which we write as

$$\displaystyle \begin{aligned} P^*Pe_j=\big(\mu _j(w)^{-2}-w\big)e_j. \end{aligned} $$
(17.A.4)

Here \(t_j^2:=\mu _j(w)^{-2}-w\ge 0\), so we have found an orthonormal basis \(e_1,e_2,\dots \in H^0\) with \(e_j,Pe_j\in H^m\) such that

$$\displaystyle \begin{aligned} P^*Pe_j=t_j^2e_j,\quad 0\le t_1\le t_2\le \cdots . \end{aligned} $$
(17.A.5)

It is easy to check that \(t_j^2\) are independent of w, that e j can be chosen independent of w, and we have

$$\displaystyle \begin{aligned}\mu _j(w)^2=\frac{1}{t_j^2+w}. \end{aligned}$$
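In finite dimensions this relation is simply the singular value decomposition read through the spectral theorem, and it can be checked numerically. The following minimal sketch (not part of the text; it assumes numpy is available and uses a random matrix in place of P) compares the eigenvalues of \((P^*P+w)^{-1}\) with \(1/(t_j^2+w)\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# t_1 <= t_2 <= ... : singular values of P in increasing order
t = np.sort(np.linalg.svd(P, compute_uv=False))

for w in [0.1, 1.0, 10.0]:
    A = np.linalg.inv(P.conj().T @ P + w * np.eye(n))   # (P*P + w)^{-1}
    mu2 = np.sort(np.linalg.eigvalsh(A))[::-1]          # eigenvalues mu_j(w)^2, decreasing
    assert np.allclose(mu2, 1.0 / (t ** 2 + w))         # mu_j(w)^2 = 1/(t_j^2 + w)
```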

From Proposition 17.A.1 and its proof we know that \((P^*P+w)^{-1}\) is self-adjoint as a bounded operator: \(H^{-m}\to H^m\). Consider \(P^*P+w\) as a closed unbounded operator in \(L^2\) with domain \(\mathcal {D}_{\mathrm {sa}}=\{ u\in H^m:\ (P^*P+w)u\in L^2\}\). Then \(\mathcal {D}_{\mathrm {sa}}=(P^*P+w)^{-1}(L^2)\), which is dense in \(H^m\) and hence dense in \(L^2\). \(P^*P+w\) (or equivalently \(P^*P\)) is closed: If \(u_j\in \mathcal {D}_{\mathrm {sa}}\), \(v_j=(P^*P+w)u_j\), \(u_j\to u\), \(v_j\to v\) in \(L^2\), then \(v_j\to v\) in \(H^{-m}\), \(u_j=(P^*P+w)^{-1}v_j\to (P^*P+w)^{-1}v\) in \(H^m\), hence \(u=(P^*P+w)^{-1}v\) and since \(v\in L^2\), we get \(u\in \mathcal {D}_{\mathrm {sa}}\). Similar arguments show that \(P^*P+w\) is self-adjoint. We also know that \(P^*P\) has a purely discrete spectrum and that \(\{ e_j\}_{j=1}^\infty \) is an orthonormal basis of eigenfunctions.

We have the max-min principle

(17.A.6)

where L varies in the set of closed subspaces of H 0 that are also contained in H m. Similarly, from (17.A.3), we have the mini-max principle

(17.A.7)

where L varies in the set of closed subspaces of H 0. When 0∉σ(P), so that P : H m → H 0 is bijective, we can extend (17.A.7) to the case w = 0 and then, as we have seen, \(\mu _j(0)^2=t_j^{-2}\).

Now assume

$$\displaystyle \begin{aligned} P:H^m\to H^0\text{ is a Fredholm operator of index }0. \end{aligned} $$
(17.A.8)

The discussion above applies also to \(PP^*\) when \(P\) is viewed as an operator \(H^0\to H^{-m}\) so that \(P^*:H^m\to H^0\). Put

$$\displaystyle \begin{aligned}0\le \widetilde{t}_1\le \widetilde{t}_2\le \cdots ,\quad \text{where }\widetilde{t}_j^{\,2}\text{ are the eigenvalues of }PP^*. \end{aligned}$$

Then as in (17.A.5) we have an orthonormal basis \(f_1,f_2,\dots \) in \(H^0\) with \(f_j,P^*f_j\in H^m\) such that

$$\displaystyle \begin{aligned} PP^*f_j=\widetilde{t}_j^{\,2}f_j. \end{aligned} $$
(17.A.9)

Proposition 17.A.2

We have \(\widetilde {t}_j=t_j\) and we can choose f j so that

$$\displaystyle \begin{aligned} Pe_j=t_jf_j,\quad P^*f_j=t_je_j. \end{aligned} $$
(17.A.10)

Proof

We have \(\ker (P^*P)=\ker P\) and \(\ker (PP^*)=\ker P^*\), when P and \(P^*\) are viewed as operators \(H^m\to H^0\). Notice however that by elliptic regularity, the kernel of \(P^*:H^0\to H^{-m}\) is the same as the one of \(P^*:H^m\to H^0\). Since P is Fredholm of index 0, the kernels of P and \(P^*\) have the same dimension, and consequently

$$\displaystyle \begin{aligned}\#\{ j:\ t_j=0\} =\dim \ker P=\dim \ker P^*=\#\{ j:\ \widetilde{t}_j=0\} . \end{aligned}$$
Let \(t_0^2=t_{j_0}^2\) be a non-vanishing eigenvalue of \(P^*P\) of multiplicity \(k_0\), so that

$$\displaystyle \begin{aligned}t_{j_0-1}<t_{j_0}=\cdots=t_{j_0+k_0-1}<t_{j_0+k_0} \end{aligned}$$

for some \(j_0,k_0\in \mathbf {N}\) and with the convention that the first inequality is absent when \(j_0=1\). If \(u\in \ker (P^*P-t_0^2)\), we know that \(u,Pu\in H^m\), \(Pu\ne 0\) and we notice that

$$\displaystyle \begin{aligned}PP^*(Pu)=P(P^*Pu)=t_0^2Pu. \end{aligned}$$

Thus \(v:=Pu\in H^m\) is non-zero and satisfies

$$\displaystyle \begin{aligned}PP^*v=t_0^2v, \end{aligned}$$

so P gives an injective map from \(\ker (P^*P-t_0^2)\) into \(\ker (PP^*-t_0^2)\).

By the same argument \(P^*\) is injective from \(\ker (PP^*-t_0^2)\) to \(\ker (P^*P-t_0^2)\), so the two spaces have the same dimension. It follows that \(\widetilde {t}_j=t_j\) for all j.

Let \(e_j\), \(j_0\le j\le j_0+k_0-1\), be an orthonormal basis for \(\ker (P^*P-t_0^2)\) and put \(f_j=t_0^{-1}Pe_j\). Then

$$\displaystyle \begin{aligned}(f_j|f_k)=t_0^{-2}(Pe_j|Pe_k)=t_0^{-2}(P^*Pe_j|e_k)=(e_j|e_k)=\delta _{jk}, \end{aligned}$$

so \(f_j\), \(j_0\le j\le j_0+k_0-1\), form an orthonormal basis for \(\ker (PP^*-t_0^2)\). Also notice that

$$\displaystyle \begin{aligned}P^*f_j=t_0^{-1}P^*Pe_j=t_0e_j, \end{aligned}$$

and we get (17.A.10) in the non-trivial case when t j≠0. □

Write \(t_j(P)=t_j\), so that \(t_j(P^*)=t_j\) by the proposition. When P has a bounded inverse let \(s_1(P^{-1})\ge s_2(P^{-1})\ge \cdots \) be the singular values of the inverse (as a compact operator in \(L^2\)). We have

$$\displaystyle \begin{aligned} s_j(P^{-1})=\frac{1}{t_j(P)}. \end{aligned} $$
(17.A.11)
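For matrices, (17.A.10) and (17.A.11) amount to reading the singular value decomposition in two ways. A small numerical sketch (an illustration only, assuming numpy; a random matrix stands in for the operator P):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
P = rng.standard_normal((n, n))

U, s, Vh = np.linalg.svd(P)         # P = U diag(s) V^*, s decreasing
t = s[::-1]                         # t_1 <= ... <= t_n
e = Vh.T[:, ::-1]                   # e_j: eigenvectors of P^* P (columns)
f = U[:, ::-1]                      # f_j: eigenvectors of P P^*

assert np.allclose(P @ e, f * t)    # P e_j   = t_j f_j   (17.A.10)
assert np.allclose(P.T @ f, e * t)  # P^* f_j = t_j e_j   (17.A.10)

s_inv = np.linalg.svd(np.linalg.inv(P), compute_uv=False)   # decreasing
assert np.allclose(s_inv, 1.0 / t)                          # s_j(P^{-1}) = 1/t_j(P)  (17.A.11)
```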

Let \(1\le N<\infty \) and let \(R_+:H^m\to {\mathbf {C}}^N\), \(R_-:{\mathbf {C}}^N\to H^0\) be bounded operators. Assume that

$$\displaystyle \begin{aligned} \mathcal{P}=\left(\begin{array}{ccc}P &R_-\\ R_+ &0 \end{array}\right): H^m\times {\mathbf{C}}^N\to H^0\times {\mathbf{C}}^N \end{aligned} $$
(17.A.12)

is bijective with a bounded inverse

$$\displaystyle \begin{aligned} \mathcal{E}=\left(\begin{array}{ccc}E &E_+\\E_- &E_{-+} \end{array}\right). \end{aligned} $$
(17.A.13)

Recall that P has a bounded inverse precisely when E −+ does, and when this happens we have the relations,

$$\displaystyle \begin{aligned} P^{-1}=E-E_+E_{-+}^{-1}E_-,\quad E_{-+}^{-1}=-R_+P^{-1}R_-. \end{aligned} $$
(17.A.14)
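These identities come from block Gaussian elimination in (17.A.12)–(17.A.13) and are easy to test on matrices. A minimal sketch (numpy assumed; random P and \(R_\pm \) in place of the operators, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 6, 2
P  = rng.standard_normal((n, n))
Rp = rng.standard_normal((N, n))          # R_+
Rm = rng.standard_normal((n, N))          # R_-

# Grushin matrix (17.A.12) and its inverse, split into blocks as in (17.A.13)
G = np.block([[P, Rm], [Rp, np.zeros((N, N))]])
Einv = np.linalg.inv(G)
E, Ep, Em, Emp = Einv[:n, :n], Einv[:n, n:], Einv[n:, :n], Einv[n:, n:]

Pinv = np.linalg.inv(P)
assert np.allclose(Pinv, E - Ep @ np.linalg.inv(Emp) @ Em)   # P^{-1} = E - E_+ E_{-+}^{-1} E_-
assert np.allclose(np.linalg.inv(Emp), -Rp @ Pinv @ Rm)      # E_{-+}^{-1} = -R_+ P^{-1} R_-
```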

Cf. Sects. 3.2, 5.3, Chap. 6, Sect. 8.1 and Chap. 13. Recall ([48] and Proposition 8.2.2) that if \(A,B:\mathcal {H}_1\to \mathcal {H}_2\) and \(C:\mathcal {H}_2\to \mathcal {H}_3\) are bounded operators, where \(\mathcal {H}_j\) are complex Hilbert spaces, then we have the general estimates

$$\displaystyle \begin{aligned} s_{n+k-1}(A+B)\le s_n(A)+s_k(B), \end{aligned} $$
(17.A.15)
$$\displaystyle \begin{aligned} s_{n+k-1}(CA)\le s_n(C)s_k(A), \end{aligned} $$
(17.A.16)

In particular, for k = 1, we get

$$\displaystyle \begin{aligned}s_n(CA)\le \Vert C\Vert s_n(A),\ s_n(CA)\le s_n(C)\Vert A\Vert,\ s_n(A+B)\le s_n(A)+\Vert B\Vert . \end{aligned}$$
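These estimates are easy to sanity-check on random matrices; a short sketch (numpy assumed, with the indices shifted to 0-based, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
sv = lambda M: np.linalg.svd(M, compute_uv=False)    # singular values, decreasing

A, B = rng.standard_normal((7, 5)), rng.standard_normal((7, 5))
C = rng.standard_normal((4, 7))
sA, sB, sC, sAB, sCA = sv(A), sv(B), sv(C), sv(A + B), sv(C @ A)

for n in range(1, 5):
    for k in range(1, 5):
        if n + k - 2 < len(sAB):
            assert sAB[n + k - 2] <= sA[n - 1] + sB[k - 1] + 1e-12   # (17.A.15)
        if n + k - 2 < len(sCA):
            assert sCA[n + k - 2] <= sC[n - 1] * sA[k - 1] + 1e-12   # (17.A.16)
```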

Applying this to the second part of (17.A.14), we get

$$\displaystyle \begin{aligned}s_k(E_{-+}^{-1})\le \Vert R_-\Vert \Vert R_+\Vert s_k(P^{-1}),\quad 1\le k\le N \end{aligned}$$

whence

$$\displaystyle \begin{aligned} t_k(P)\le \Vert R_-\Vert \Vert R_+\Vert t_k(E_{-+}),\quad 1\le k\le N. \end{aligned} $$
(17.A.17)

By a perturbation argument, we see that this holds also in the case when P, E −+ are non-invertible.

Similarly, from the first part of (17.A.14) we get

$$\displaystyle \begin{aligned}s_k(P^{-1})\le \Vert E\Vert +\Vert E_+\Vert \Vert E_-\Vert s_k(E_{-+}^{-1}), \end{aligned}$$

leading to

$$\displaystyle \begin{aligned} t_k(P)\ge \frac{t_k(E_{-+})}{\| E\| t_k(E_{-+})+\Vert E_+\Vert\Vert E_-\Vert}. \end{aligned} $$
(17.A.18)

Again this can be extended to the not necessarily invertible case by means of small perturbations.
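On the finite-dimensional model Grushin problem used above, both bounds can be checked directly. A sketch (numpy assumed, random data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 6, 3
P  = rng.standard_normal((n, n))
Rp = rng.standard_normal((N, n))
Rm = rng.standard_normal((n, N))

G = np.linalg.inv(np.block([[P, Rm], [Rp, np.zeros((N, N))]]))
E, Ep, Em, Emp = G[:n, :n], G[:n, n:], G[n:, :n], G[n:, n:]

t  = lambda M: np.sort(np.linalg.svd(M, compute_uv=False))  # t_1 <= t_2 <= ...
op = lambda M: np.linalg.norm(M, 2)                         # operator norm
tP, tE = t(P), t(Emp)

for k in range(1, N + 1):
    upper = op(Rm) * op(Rp) * tE[k - 1]                            # (17.A.17)
    lower = tE[k - 1] / (op(E) * tE[k - 1] + op(Ep) * op(Em))      # (17.A.18)
    assert lower - 1e-12 <= tP[k - 1] <= upper + 1e-12
```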

Generalizing Sect. 3.2 (as in [56]), we get a natural construction of a Grushin problem associated to a given operator. Let P = P 0 : H m → H 0 be a Fredholm operator of index 0 as above. Choose N so that t N+1(P 0) is strictly positive. In the following we sometimes write t j instead of t j(P 0) for short.

Recall that \(t_j^2\) are the first eigenvalues both for \({P^0}^*P^0\) and \(P^0{P^0}^*\). Let \(e_1,\dots ,e_N\) and \(f_1,\dots ,f_N\) be corresponding orthonormal systems of eigenvectors of \({P^0}^*P^0\) and \(P^0{P^0}^*\), respectively. They can be chosen so that

$$\displaystyle \begin{aligned} P^0e_j=t _jf_j,\quad {P^0}^*f_j=t_je_j. \end{aligned} $$
(17.A.19)

Define \(R_+:L^2\to {\mathbf {C}}^N\) and \(R_-:{\mathbf {C}}^N\to L^2\) by

$$\displaystyle \begin{aligned}R_+u(j)=(u|e_j),\quad R_-u_-=\sum_1^Nu_-(j)f_j.\end{aligned} $$
(17.A.20)

It is easy to see that the Grushin problem

$$\displaystyle \begin{aligned} \left\{ \begin{array}{ll}P^0u+R_-u_-=v,\\ R_+u=v_+, \end{array} \right. \end{aligned} $$
(17.A.21)

has a unique solution (u, u ) ∈ L 2 ×C N for every (v, v +) ∈ L 2 ×C N, given by

$$\displaystyle \begin{aligned} \left\{\begin{array}{ll} u=E^0v+E_+^0v_+,\\ u_-=E_-^0v+E_{-+}^0v_+, \end{array}\right.\end{aligned} $$
(17.A.22)

where

$$\displaystyle \begin{aligned}\begin{cases} E^0_+v_+=\sum_1^Nv_+(j)e_j,& E^0_-v(j)=(v|f_j),\\ E^0_{-+}=-\mathrm{diag}\,(t _j),& \Vert E^0\Vert \le {1\over t_{N+1}}. \end{cases} \end{aligned} $$
(17.A.23)

\(E^0\) can be viewed as the inverse of \(P^0\) as an operator from the orthogonal complement \((e_1,e_2,\dots ,e_N)^\perp \) to \((f_1,f_2,\dots ,f_N)^\perp \).

We notice that in this case the norms of \(R_+\) and \(R_-\) are equal to 1, so (17.A.17) tells us that \(t_k(P^0)\le t_k(E^0_{-+})\) for 1 ≤ k ≤ N, but of course the expression for \(E^0_{-+}\) in (17.A.23) implies equality.
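For a matrix \(P^0\), this Grushin problem can be written down explicitly from the singular value decomposition. The sketch below (numpy assumed, illustrative only) builds \(R_\pm \) as in (17.A.20) and checks the formulas of (17.A.23):

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 7, 3
P0 = rng.standard_normal((n, n))

U, s, Vh = np.linalg.svd(P0)
t = s[::-1]                        # t_1 <= t_2 <= ...
e = Vh.T[:, ::-1]                  # e_j: eigenvectors of (P^0)^* P^0
f = U[:, ::-1]                     # f_j: eigenvectors of P^0 (P^0)^*

Rp = e[:, :N].T                    # (R_+ u)(j) = (u|e_j)
Rm = f[:, :N]                      # R_- u_-    = sum_j u_-(j) f_j

G = np.linalg.inv(np.block([[P0, Rm], [Rp, np.zeros((N, N))]]))
E0, Emp0 = G[:n, :n], G[n:, n:]

assert np.allclose(Emp0, -np.diag(t[:N]))              # E^0_{-+} = -diag(t_j)
assert np.linalg.norm(E0, 2) <= 1.0 / t[N] + 1e-12     # ||E^0|| <= 1/t_{N+1}
assert np.isclose(np.linalg.norm(Rp, 2), 1.0)          # ||R_+|| = 1
assert np.isclose(np.linalg.norm(Rm, 2), 1.0)          # ||R_-|| = 1
```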

Let \(Q\in \mathcal {L}(H^0,H^0) \) and put \(P^\delta =P^0-\delta Q\) (where we sometimes put a minus sign in front of the perturbation for notational convenience). We are particularly interested in the case when \(Q=Q_\omega :u\mapsto qu\) is the operator of multiplication by a function q. Here δ > 0 is a small parameter. Choose \(R_\pm \) as in (17.A.20). Then if \(\delta <t_{N+1}\) and ∥Q∥≤ 1, the perturbed Grushin problem

$$\displaystyle \begin{aligned} \left\{ \begin{array}{ll}P^\delta u+R_-u_-=v,\\ R_+u=v_+, \end{array} \right. \end{aligned} $$
(17.A.24)

is well posed and has the solution

$$\displaystyle \begin{aligned} \left\{\begin{array}{ll} u=E^\delta v+E_+^\delta v_+,\\ u_-=E_-^\delta v+E_{-+}^\delta v_+, \end{array} \right.\end{aligned} $$
(17.A.25)

where

$$\displaystyle \begin{aligned} \mathcal{E}^\delta =\left(\begin{array}{ccc}E^\delta &E_+^\delta \\ E_-^\delta &E_{-+}^\delta \end{array}\right) \end{aligned} $$
(17.A.26)

is obtained from \(\mathcal {E}^0\) by the formula

$$\displaystyle \begin{aligned} \mathcal{E}^\delta =\mathcal{E}^0\left( 1-\delta \left(\begin{array}{ccc}Q E^0 & Q E_+^0 \\ 0&0 \end{array}\right)\right) ^{-1}. \end{aligned} $$
(17.A.27)

Using the Neumann series, we get

$$\displaystyle \begin{aligned} \begin{aligned} &E_{-+}^\delta =E_{-+}^0+E_-^0\delta Q(1-E^0\delta Q)^{-1}E_+^0 \\&=E_{-+}^0+\delta E_-^0 Q E_+^0+ \delta^2 E_-^0 Q E^0 Q E_+^0+ \delta^3 E_-^0 Q (E^0 Q )^2 E_+^0+\cdots \end{aligned} \end{aligned} $$
(17.A.28)

We also get

$$\displaystyle \begin{aligned} E^\delta =E^0(1-\delta QE^0)^{-1} =E^0+\sum_{k=1}^{\infty }\delta ^kE^0(QE^0)^k, \end{aligned} $$
(17.A.29)
$$\displaystyle \begin{aligned} E_+^\delta =(1-E^0\delta Q)^{-1}E_+^0 =E_+^0+\sum_{k=1}^{\infty }\delta ^k(E^0Q)^kE_+^0, \end{aligned} $$
(17.A.30)
$$\displaystyle \begin{aligned} E_-^\delta =E_-^0(1-\delta QE^0)^{-1} =E_-^0+\sum_{k=1}^{\infty }\delta ^kE_-^0(QE^0)^k. \end{aligned} $$
(17.A.31)

The leading perturbation in \(E_{-+}^\delta \) is δM, where \(M =E_-^0 Q E_+^0: {\mathbf {C}}^N\to {\mathbf {C}}^N\) has the matrix

$$\displaystyle \begin{aligned} M(\omega )_{j,k}=(Q e_k|f_j), \end{aligned} $$
(17.A.32)

which in the multiplicative case reduces to

$$\displaystyle \begin{aligned} M(\omega )_{j,k}=\int q (x)e_k(x)\overline{f_j(x)}dx. \end{aligned} $$
(17.A.33)

Put τ 0 = t N+1(P 0) and recall the assumption

$$\displaystyle \begin{aligned} \Vert Q\Vert \le 1. \end{aligned} $$
(17.A.34)

Then, if δ ≤ τ 0∕2, the new Grushin problem is well posed with an inverse \(\mathcal {E}^{\delta }\) given in (17.A.26)–(17.A.31). We get

$$\displaystyle \begin{aligned} \Vert E^\delta \Vert \le \frac{1}{1-\frac{\delta }{\tau _0}} \Vert E^0\Vert \le \frac{2}{\tau _0},\quad \| E_\pm ^\delta \| \le \frac{1}{1-\frac{\delta }{\tau _0}}\le 2, \end{aligned} $$
(17.A.35)
$$\displaystyle \begin{aligned} \Vert E_{-+}^\delta -(E_{-+}^0+\delta E_-^0QE_+^0)\Vert \le \frac{\delta ^2}{\tau _0}\frac{1}{1-\frac{\delta }{\tau_0}}\le 2\frac{\delta ^2}{\tau _0}. \end{aligned} $$
(17.A.36)

Using this in (17.A.17), (17.A.18) together with the fact that \(t_k(E_{-+}^\delta ) \le 2\tau _0\) (see Note 2), we get

$$\displaystyle \begin{aligned} \frac{t_k(E^\delta _{-+})}{8}\le t_k(P^\delta )\le t_k(E^\delta _{-+}). \end{aligned} $$
(17.A.37)
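On the same finite-dimensional model, the whole perturbative chain (17.A.24)–(17.A.37) can be traced numerically. The sketch below (numpy assumed, illustrative only) perturbs the Grushin problem built from the SVD of a random \(P^0\) and checks the first-order accuracy (17.A.36) and the two-sided bound (17.A.37):

```python
import numpy as np

rng = np.random.default_rng(6)
n, N = 7, 3
P0 = rng.standard_normal((n, n))
U, s, Vh = np.linalg.svd(P0)
t, e, f = s[::-1], Vh.T[:, ::-1], U[:, ::-1]
Rp, Rm = e[:, :N].T, f[:, :N]                 # R_+, R_- as in (17.A.20)
tau0 = t[N]                                   # tau_0 = t_{N+1}(P^0)

Q = rng.standard_normal((n, n))
Q /= np.linalg.norm(Q, 2)                     # ||Q|| = 1
delta = 0.25 * tau0                           # delta <= tau_0 / 2

def blocks(P):
    # Blocks of the inverse Grushin matrix (17.A.26) for a given P
    G = np.linalg.inv(np.block([[P, Rm], [Rp, np.zeros((N, N))]]))
    return G[:n, :n], G[:n, n:], G[n:, :n], G[n:, n:]

E0, Ep0, Em0, Emp0 = blocks(P0)
Empd = blocks(P0 - delta * Q)[3]              # E^delta_{-+} for P^delta = P^0 - delta Q

# (17.A.36): E^delta_{-+} = E^0_{-+} + delta E^0_- Q E^0_+ up to an error <= 2 delta^2 / tau_0
err = np.linalg.norm(Empd - (Emp0 + delta * Em0 @ Q @ Ep0), 2)
assert err <= 2 * delta ** 2 / tau0 + 1e-12

# (17.A.37): t_k(E^delta_{-+}) / 8 <= t_k(P^delta) <= t_k(E^delta_{-+}), 1 <= k <= N
tP = np.sort(np.linalg.svd(P0 - delta * Q, compute_uv=False))
tE = np.sort(np.linalg.svd(Empd, compute_uv=False))
assert np.all(tE / 8 - 1e-12 <= tP[:N]) and np.all(tP[:N] <= tE + 1e-12)
```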

Notes

This chapter concludes the proof of Theorem 15.3.1 and we have largely followed [137, 139]. Notice that the probabilistic lower bounds on the singular values of certain matrices in Sects. 17.1 and 17.2 play a crucial role. It would be very interesting, and probably very important, to enrich this passage with different, perhaps more efficient methods. That would make it possible to generalize Theorem 15.3.1 and treat larger classes of operators and perturbations.


Cite this chapter

Sjöstrand, J. (2019). Proof II: Lower Bounds. In: Non-Self-Adjoint Differential Operators, Spectral Asymptotics and Random Perturbations. Pseudo-Differential Operators, vol 14. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-10819-9_17
