Correction to: Journal of Fourier Analysis and Applications (2017) 23:530–571, https://doi.org/10.1007/s00041-016-9478-6

Our paper [1] contains an error in the proof of Proposition 8.1. More precisely, the estimate claimed in Eq. (8.3) is erroneously motivated. In the following we state and prove Proposition 8.1 correctly.

We need the Hermite functions

$$\begin{aligned} h_\alpha (x) = \pi ^{-\frac{d}{4}}(-1)^{|\alpha |} (2^{|\alpha |}\alpha !)^{-\frac{1}{2}}e^{\frac{|x|^2}{2}} \partial ^\alpha e^{-|x|^2}, \quad x \in \mathbf {R}^{d}, \quad \alpha \in \mathbf {N}^{d}, \end{aligned}$$

and formal series expansions

$$\begin{aligned} f = \sum _{\alpha \in \mathbf {N}^{d}} c_\alpha h_\alpha \end{aligned}$$

where \(\{ c_\alpha \}\) is a sequence of coefficients defined by \(c_\alpha = c_\alpha (f) = (f,h_\alpha )\). The Hermite functions \(\{ h_\alpha \}_{\alpha \in \mathbf {N}^{d}}\) form an orthonormal basis for \(L^2(\mathbf {R}^{d})\).

Langenbruch [4, Theorem 3.4] has shown that the family of Hilbert sequence spaces

$$\begin{aligned} \ell _{s,r}^2 = \ell _{s,r}^2(\mathbf {N}^{d}) = \left\{ \{c_\alpha \}: \Vert c_\alpha \Vert _{\ell _{s,r}^2} = \left( \sum _{\alpha \in \mathbf {N}^{d}} |c_\alpha |^2 e^{2 r |\alpha |^{\frac{1}{2s}}} \right) ^{\frac{1}{2}} < \infty \right\} \end{aligned}$$

for \(r > 0\) yields a family of seminorms for \(\Sigma _s(\mathbf {R}^{d})\) that is equivalent to the family (2.3) for all \(h > 0\), when \(t = s > \frac{1}{2}\). Thus \(\Sigma _{s}(\mathbf {R}^{d})\) can be identified topologically as the projective limit

$$\begin{aligned} \Sigma _{s}(\mathbf {R}^{d}) = \bigcap _{r>0} \left\{ \sum _{\alpha \in \mathbf {N}^{d}} c_\alpha h_\alpha : \ \{c_\alpha \} \in \ell _{s,r}^2 \right\} . \end{aligned}$$
(1)
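
For instance, each Hermite function belongs to \(\Sigma _{s}(\mathbf {R}^{d})\): the coefficient sequence of \(h_\beta \) is \(c_\alpha (h_\beta ) = \delta _{\alpha \beta }\), so that

$$\begin{aligned} \Vert c_\alpha (h_\beta ) \Vert _{\ell _{s,r}^2} = \left( \sum _{\alpha \in \mathbf {N}^{d}} |\delta _{\alpha \beta }|^2 e^{2 r |\alpha |^{\frac{1}{2s}}} \right) ^{\frac{1}{2}} = e^{r |\beta |^{\frac{1}{2s}}} < \infty , \quad r > 0. \end{aligned}$$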

Lemma

If \(s > \frac{1}{2}\) and \(h > 0\) then

$$\begin{aligned} \sup _{x \in \mathbf {R}} \left| \partial ^k \left( e^{- \frac{1}{2} x^2} \right) \right| \leqslant C_{h,s} h^k k!^s, \quad k \in \mathbf {N}, \end{aligned}$$

where \(C_{h,s} > 0\).

Proof

It is clear that

$$\begin{aligned} \partial ^k \left( e^{- \frac{1}{2} x^2} \right) = p_{k} (x) \, e^{- \frac{1}{2} x^2} \end{aligned}$$
(2)

where \(p_{k}\) is a polynomial of degree \(k \in \mathbf {N}\). By induction one can prove the formula

$$\begin{aligned} p_{k} (x) = k! \sum _{m=0}^{\lfloor k/2 \rfloor } \frac{x^{k-2m} (-1)^{k-m}}{m!(k-2m)! 2^m}. \end{aligned}$$
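
For instance, for \(k = 2\) the formula gives \(p_{2} (x) = x^2 - 1\), which agrees with the direct computation

$$\begin{aligned} \partial ^2 \left( e^{- \frac{1}{2} x^2} \right) = \partial \left( -x \, e^{- \frac{1}{2} x^2} \right) = (x^2 - 1) \, e^{- \frac{1}{2} x^2}. \end{aligned}$$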

Since \(k! \leqslant 2^k (k-2m)! (2m)!\) and \((2m)! \leqslant 4^m (m!)^2\), which both follow from the binomial theorem, we can estimate

$$\begin{aligned} |p_{k} (x)| \leqslant \sum _{m=0}^{\lfloor k/2 \rfloor } \frac{|x|^{k-2m} (2m)!}{m! 2^{m-k}} \leqslant \sum _{m=0}^{\lfloor k/2 \rfloor } |x|^{k-2m} m! 2^{m+k}. \end{aligned}$$

Combining this with \(m! = m!^{2s -\varepsilon }\), where \(\varepsilon = 2s-1 > 0\), we obtain for any \(h > 0\) and \(b > 0\)

$$\begin{aligned} |p_{k} (x)| h^{-k} k!^{-s}&\leqslant \sum _{m=0}^{\lfloor k/2 \rfloor } |x|^{k-2m} m!^{2s - \varepsilon } 2^{m+k} h^{-k} k!^{-s} \\&= \sum _{m=0}^{\lfloor k/2 \rfloor } \left( \frac{ \left( \frac{b}{s} |x| ^{\frac{1}{s}} \right) ^{k-2m}}{(k-2m)!}\right) ^s \left( \frac{b}{s} \right) ^{s(2m-k)} \left( \frac{(k-2m)! m!^2}{k!}\right) ^s \frac{2^{m+k} h^{-k}}{m!^\varepsilon } \\&\leqslant e^{b |x|^{\frac{1}{s}} } \left( 2 \left( \frac{s}{b} \right) ^s h^{-1} \right) ^{k} \sum _{m=0}^{\lfloor k/2 \rfloor } \left( \frac{ \left( 2 \left( \frac{b}{s} \right) ^{2s} \right) ^{\frac{m}{\varepsilon }}}{m!}\right) ^\varepsilon \\&\leqslant e^{\varepsilon \left( 2 \left( \frac{b}{s} \right) ^{2s} \right) ^{\frac{1}{\varepsilon }} } e^{b |x|^{\frac{1}{s}} } \left( 4 \left( \frac{s}{b} \right) ^s h^{-1} \right) ^{k} \\&= C_{h,s} \, e^{s \, 4^{\frac{1}{s}} h^{-\frac{1}{s}} |x|^{\frac{1}{s}} }, \end{aligned}$$

where in the last equality we pick \(b = s \, 4^{\frac{1}{s}} h^{-\frac{1}{s}}\), so that \(4 \left( \frac{s}{b} \right) ^{s} h^{-1} = 1\), and \(C_{h,s} > 0\). Thus, since \(\frac{1}{s} < 2\),

$$\begin{aligned} \sup _{x \in \mathbf {R}} \left| \partial ^k \left( e^{- \frac{1}{2} x^2} \right) \right| \lesssim h^k k!^s \sup _{x \in \mathbf {R}} e^{s \, 4^{\frac{1}{s}} h^{-\frac{1}{s}} |x|^{\frac{1}{s}} - \frac{1}{2} x^2} \leqslant C_{h,s} h^k k!^s, \quad k \in \mathbf {N}, \end{aligned}$$

for a new constant \(C_{h,s} > 0\). \(\square \)

The corrected result concerns operators with Schwartz kernel of the oscillatory integral form

$$\begin{aligned} K_T (x,y) = (2 \pi )^{-(d+N)/2} \sqrt{\det \left( \begin{array}{ll} p_{\theta \theta }''/i &{} p_{\theta y}'' \\ p_{x \theta }'' &{} i p_{x y}'' \end{array} \right) } \int _{\mathbf {R}^{N}} e^{i p(x,y,\theta )} d\theta \in \mathscr {S}'(\mathbf {R}^{2d}) \end{aligned}$$
(8.1)

where \(x,y \in \mathbf {R}^{d}\). Here p is a quadratic form on \(\mathbf {R}^{2d+N}\) associated with the positive Lagrangian that is defined by the twisted graph of the matrix \(T \in {\text {Sp}}(d,\mathbf {C})\), which is assumed to be positive in the sense of [3, Eq. 5.10]. The kernel is, modulo sign, independent of the form p and the dimension N, thanks to the factor in front of the integral [3, p. 444].

Proposition 8.1

Suppose \(T \in {\text {Sp}}(d,\mathbf {C})\) is positive and let \(\mathscr {K}_T: \mathscr {S}(\mathbf {R}^{d}) \rightarrow \mathscr {S}'(\mathbf {R}^{d})\) be the continuous linear operator having Schwartz kernel \(K_T \in \mathscr {S}'(\mathbf {R}^{2d})\) defined by (8.1). For \(s > 1/2\) the operator \(\mathscr {K}_T\) is continuous on \(\Sigma _s(\mathbf {R}^{d})\) and \(\mathscr {K}_T\) extends uniquely to a continuous operator on \(\Sigma _s'(\mathbf {R}^{d})\).

Proof

By [3, Proposition 5.10] (cf. [2]) the matrix T can be factorized as

$$\begin{aligned} T = \chi _1 T_0 \chi _2 \end{aligned}$$

where \(\chi _1, \chi _2 \in {\text {Sp}}(d,\mathbf {R})\), \(T_0 \in {\text {Sp}}(d,\mathbf {C})\) is positive and \((y,\eta ) = T_0 (x,\xi )\), \(x,\xi , y, \eta \in \mathbf {R}^{d}\), where for each \(1 \leqslant j \leqslant d\) we have either

$$\begin{aligned} \left( \begin{array}{l} y_j \\ \eta _j \end{array} \right) = \left( \begin{array}{cc} \cosh \tau _j &{} - i \sinh \tau _j \\ i \sinh \tau _j &{} \cosh \tau _j \end{array} \right) \left( \begin{array}{l} x_j \\ \xi _j \end{array} \right) \end{aligned}$$
(3)

with \(\tau _j \geqslant 0\), or

$$\begin{aligned} \left( \begin{array}{l} y_j \\ \eta _j \end{array} \right) = \left( \begin{array}{cc} 1 &{} 0\\ i &{} 1 \end{array} \right) \left( \begin{array}{l} x_j \\ \xi _j \end{array} \right) . \end{aligned}$$
(4)

By [3, Proposition 5.9] we have

$$\begin{aligned} \mathscr {K}_T = \pm \mu (\chi _1) \mathscr {K}_{T_0} \mu (\chi _2). \end{aligned}$$

According to Proposition 4.4, \(\mu (\chi _j)\) is continuous on \(\Sigma _s(\mathbf {R}^{d})\), so it remains to show that \(\mathscr {K}_{T_0}\) is continuous on \(\Sigma _s(\mathbf {R}^{d})\).

The matrix \(T_0\) can be factorized as

$$\begin{aligned} T_0 = T_1 T_2 \cdots T_d \end{aligned}$$

where the matrices \(T_j\), \(1 \leqslant j \leqslant d\), commute pairwise and have the following structure. We have \((y,\eta ) = T_j (x,\xi )\), where \((y_k,\eta _k) = (x_k,\xi _k)\) for \(k \in \{1, 2, \cdots , d\} \setminus \{ j \}\), and either (3) holds for some \(\tau _j \geqslant 0\), or (4) holds.

Again by [3, Proposition 5.9]

$$\begin{aligned} \mathscr {K}_{T_0} = \pm \mathscr {K}_{T_1} \mathscr {K}_{T_2} \cdots \mathscr {K}_{T_d} \end{aligned}$$

and thus it suffices to show that \(\mathscr {K}_{T_j}\), for each of the two stated types, is continuous on \(\Sigma _s(\mathbf {R}^{d})\). In order to do that we first identify the operators \(\mathscr {K}_{T_j}\), cf. [2, p. 297].

Suppose \((y,\eta ) = T_j (x,\xi )\) where \((y_k,\eta _k) = (x_k,\xi _k)\), \(k \in \{1, 2, \cdots , d\} \setminus \{ j\}\).

Case (i) Suppose (3) holds for some \(\tau _j \geqslant 0\). Define the symmetric block matrix

$$\begin{aligned} Q_j = \frac{1}{2} \left( \begin{array}{cc} e_j e_j^t &{} 0\\ 0 &{} e_j e_j^t \end{array} \right) \in \mathbf {R}^{2d \times 2d} \end{aligned}$$

where \(e_j \in \mathbf {R}^{d}\) denotes the standard basis vector whose entries are zero except for position j, which is one. With \(F_j = \mathcal {J}Q_j\), a short calculation shows that

$$\begin{aligned} e^{- 2 i \tau _j F_j} = T_j \end{aligned}$$

which reveals that \(\mathscr {K}_{T_j}\) is the solution operator to the initial value Cauchy problem (5.1) when the Hamiltonian Weyl symbol is defined by

$$\begin{aligned} q_j(x,\xi ) = \langle (x,\xi ), Q_j(x, \xi ) \rangle = \frac{1}{2} \left( x_j^2 + \xi _j^2 \right) , \quad (x,\xi ) \in T^* \mathbf {R}^{d}, \end{aligned}$$

at time \(\tau _j\), that is, \(\mathscr {K}_{T_j} = e^{- \tau _j q_j^w(x,D)}\). Here \(q_j^w(x,D) = \frac{1}{2}( x_j^2 + D_j^2)\) is one half of the Hermite operator (harmonic oscillator) acting in the variable \(x_j\).
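
To verify that \(e^{- 2 i \tau _j F_j} = T_j\), note that if \(\mathcal {J}\) denotes the standard symplectic matrix with the convention \(\mathcal {J} = \left( \begin{array}{cc} 0 &{} I\\ -I &{} 0 \end{array} \right) \) (the convention assumed here), then \(F_j = \mathcal {J}Q_j\) is zero outside the \((x_j, \xi _j)\)-block and equals \(\frac{1}{2} \left( \begin{array}{cc} 0 &{} 1\\ -1 &{} 0 \end{array} \right) \) on that block. Hence, on the \((x_j, \xi _j)\)-block,

$$\begin{aligned} e^{- 2 i \tau _j F_j} = \exp \left( \begin{array}{cc} 0 &{} - i \tau _j\\ i \tau _j &{} 0 \end{array} \right) = \left( \begin{array}{cc} \cosh \tau _j &{} - i \sinh \tau _j \\ i \sinh \tau _j &{} \cosh \tau _j \end{array} \right) , \end{aligned}$$

which is (3), while \(e^{- 2 i \tau _j F_j}\) is the identity on the remaining variables.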

The Hermite functions are eigenfunctions of the operator \(q_j^w(x,D)\), and

$$\begin{aligned} \frac{1}{2}( x_j^2 + D_j^2) h_\alpha = \left( \alpha _j + \frac{1}{2} \right) h_\alpha , \quad \alpha \in \mathbf {N}^{d} \end{aligned}$$

(cf. e.g. [5]). By the uniqueness of the solution to the Cauchy problem (5.1) we have

$$\begin{aligned} \mathscr {K}_{T_j} h_\alpha = e^{- \frac{\tau _j}{2}( x_j^2 + D_j^2)} h_\alpha = e^{- \tau _j \left( \alpha _j + \frac{1}{2} \right) } h_\alpha , \quad \alpha \in \mathbf {N}^{d}. \end{aligned}$$

Using the seminorms on \(\Sigma _s(\mathbf {R}^{d})\) defined by the Hilbert sequence spaces \(\ell _{s,r}^2(\mathbf {N}^{d})\), cf. (1), and the orthonormality of \(\{ h_\beta \}_{\beta \in \mathbf {N}^{d}} \subseteq L^2(\mathbf {R}^{d})\), we obtain for \(f \in \Sigma _s(\mathbf {R}^{d})\) and \(\alpha \in \mathbf {N}^{d}\)

$$\begin{aligned} (\mathscr {K}_{T_j} f , h_\alpha ) = \sum _{\beta \in \mathbf {N}^{d}} (f, h_\beta ) (\mathscr {K}_{T_j} h_\beta , h_\alpha ) = (f, h_\alpha ) e^{- \tau _j \left( \alpha _j + \frac{1}{2} \right) }, \end{aligned}$$

and hence, since \(\tau _j \geqslant 0\), for any \(r > 0\)

$$\begin{aligned} \Vert (\mathscr {K}_{T_j} f, h_\alpha )_{\alpha \in \mathbf {N}^{d}} \Vert _{\ell _{s,r}^2}^2&= \sum _{\alpha \in \mathbf {N}^{d}} | (\mathscr {K}_{T_j} f, h_\alpha ) |^2 e^{2 r |\alpha |^{\frac{1}{2s}}} \\&= \sum _{\alpha \in \mathbf {N}^{d}} | ( f, h_\alpha ) |^2 e^{- \tau _j \left( 2 \alpha _j + 1 \right) } e^{2 r |\alpha |^{\frac{1}{2s}}} \\&\leqslant \sum _{\alpha \in \mathbf {N}^{d}} | ( f, h_\alpha ) |^2 e^{2 r |\alpha |^{\frac{1}{2s}}} \\&= \Vert (f, h_\alpha )_{\alpha \in \mathbf {N}^{d}} \Vert _{\ell _{s,r}^2}^2. \end{aligned}$$

This shows the continuity \(\mathscr {K}_{T_j}: \Sigma _s(\mathbf {R}^{d}) \rightarrow \Sigma _s(\mathbf {R}^{d})\).

Case (ii) Suppose (4) holds. Define the symmetric block matrix

$$\begin{aligned} Q_j = \frac{1}{2} \left( \begin{array}{cc} e_j e_j^t &{} 0\\ 0 &{} 0 \end{array} \right) \in \mathbf {R}^{2d \times 2d} \end{aligned}$$

and \(F_j = \mathcal {J}Q_j\). Then

$$\begin{aligned} e^{- 2 i F_j} = T_j \end{aligned}$$

which implies that \(\mathscr {K}_{T_j}\) is the solution operator to the initial value Cauchy problem (5.1) when the Hamiltonian Weyl symbol is defined by

$$\begin{aligned} q_j(x,\xi ) = \langle (x,\xi ), Q_j(x, \xi ) \rangle = \frac{1}{2} x_j^2, \end{aligned}$$

at time \(t = 1\), that is, \(\mathscr {K}_{T_j} = e^{- q_j^w(x,D)}\). Since \(q_j^w(x,D) f(x) = \frac{x_j^2}{2} f(x)\) we have \(\mathscr {K}_{T_j} f (x) = e^{- \frac{1}{2} x_j^2} f(x)\), so \(\mathscr {K}_{T_j}\) is multiplication by a Gaussian in the variable \(x_j\).
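
To verify that \(e^{- 2 i F_j} = T_j\), note that with the same convention for \(\mathcal {J}\) as in Case (i), \(F_j = \mathcal {J}Q_j\) is zero outside the \((x_j, \xi _j)\)-block, equals \(\frac{1}{2} \left( \begin{array}{cc} 0 &{} 0\\ -1 &{} 0 \end{array} \right) \) on that block, and satisfies \(F_j^2 = 0\). Hence, on the \((x_j, \xi _j)\)-block,

$$\begin{aligned} e^{- 2 i F_j} = I - 2 i F_j = \left( \begin{array}{cc} 1 &{} 0\\ i &{} 1 \end{array} \right) , \end{aligned}$$

which is (4), while \(e^{- 2 i F_j}\) is the identity on the remaining variables.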

From the Lemma we obtain, for any \(\alpha , \beta \in \mathbf {N}^{d}\) and any \(h > 0\), using the seminorms (2.3),

$$\begin{aligned} \left| x^\beta D^\alpha \left( e^{- \frac{1}{2} x_j^2} f(x)\right) \right|&\leqslant \sum _{\gamma _j \leqslant \alpha _j} \left( {\begin{array}{c}\alpha _j\\ \gamma _j\end{array}}\right) \left| D^{\gamma _j} \left( e^{- \frac{1}{2} x_j^2} \right) \right| \left| x^\beta D^{\alpha -\gamma _j e_j } f(x) \right| \\&\lesssim \Vert f\Vert _{\mathcal {S}_{s,h}} \sum _{\gamma _j \leqslant \alpha _j} \left( {\begin{array}{c}\alpha _j\\ \gamma _j\end{array}}\right) h^{\gamma _j} \gamma _j!^s (\beta ! (\alpha -\gamma _j e_j)!)^s h^{|\beta | + |\alpha |-\gamma _j} \\&\leqslant \Vert f\Vert _{\mathcal {S}_{s,h}} h^{|\alpha +\beta |} (\beta ! \alpha !)^s 2^{\alpha _j} \\&\leqslant \Vert f\Vert _{\mathcal {S}_{s,h}} (2 h)^{|\alpha +\beta |} (\beta ! \alpha !)^s, \quad x \in \mathbf {R}^{d}, \end{aligned}$$

and thus

$$\begin{aligned} \left\| e^{- \frac{1}{2} x_j^2} f \right\| _{\mathcal {S}_{s,h}} \lesssim \left\| f \right\| _{\mathcal {S}_{s,h/2}}. \end{aligned}$$

We have shown the continuity of \(\mathscr {K}_{T_j}: \Sigma _s(\mathbf {R}^{d}) \rightarrow \Sigma _s(\mathbf {R}^{d})\). \(\square \)