1 Introduction

In this note, we discuss the orthogonality of the generalized eigenspaces associated to a general Ornstein–Uhlenbeck operator \({\mathscr {L}}\) in \(\mathbb {R}^N\).

Recently, the authors started studying some harmonic analysis issues in a nonsymmetric Gaussian context [1,2,3]. In particular, the Ornstein–Uhlenbeck semigroup \(\big ({\mathscr {H}}_t\big )_{t>0}\) generated by \({\mathscr {L}}\) is not assumed to be self-adjoint in \(L^2(\gamma _\infty )\); here \(\gamma _\infty \) denotes the unique invariant probability measure under the action of the semigroup, and will be specified later.

In this general framework, the Ornstein–Uhlenbeck operator \({\mathscr {L}}\) admits a complete system of generalized eigenfunctions; see [8]. But without self-adjointness, the orthogonality of distinct eigenspaces of \({\mathscr {L}}\) is not guaranteed. In fact, while the kernel of \({\mathscr {L}}\) is always orthogonal to the other generalized eigenspaces of \({\mathscr {L}}\) in \(L^2(\gamma _\infty )\), the question of orthogonality between generalized eigenspaces associated to nonzero eigenvalues is more delicate. As expected, the spectral properties of the drift matrix B, introduced in Section 2, play a prominent role here. Indeed, we prove in Section 3 that if B has a unique eigenvalue, then any two generalized eigenfunctions of \({\mathscr {L}}\) corresponding to different eigenvalues are orthogonal in \(L^2(\gamma _\infty )\).

Then in Sections 4 and 5, we exhibit two examples showing, respectively, that if B admits two distinct eigenvalues, the generalized eigenspaces associated to \({\mathscr {L}}\) may or may not be orthogonal. The last section also contains a result which relates the orthogonality of the eigenspaces of \({\mathscr {L}}\) to that of the eigenspaces of the drift matrix, under some restrictions.

In the following, the symbol \(I_k\) will denote the identity matrix of size k, and we omit the subscript when the size is obvious. We will write \(\langle \cdot , \cdot \rangle \) for scalar products both in \(\mathbb {R}^N\) and in \(L^2(\gamma _\infty )\). By \(\mathbb {N}\) we mean \(\{0, 1, \dots \}\).

2 The Ornstein–Uhlenbeck operator

In this section, we specify the definition of the Ornstein–Uhlenbeck operator \({\mathscr {L}}\) and recall some known facts concerning its spectrum.

We consider the Ornstein–Uhlenbeck semigroup \(\big ( {\mathscr {H}}_t^{Q,B} \big )_{t> 0}\), defined for all bounded continuous functions f on \(\mathbb {R}^N\), \(N\ge 1\), and all \(t>0\) by the Kolmogorov formula

$$\begin{aligned} {\mathscr {H}}_t^{Q,B} f(x)= \int f(e^{tB}x-y)\,d\gamma _t (y)\,, \quad x\in \mathbb {R}^N\,, \end{aligned}$$

(see [6] and [7, Theorem 9.1.1]). Here B is a real \(N\times N\) matrix whose eigenvalues have negative real parts, and Q is a real, symmetric, and positive-definite \(N\times N\) matrix. Then we introduce the covariance matrices

$$\begin{aligned} Q_t=\int _0^t e^{sB}\,Q\, e^{sB^*}ds\,,\qquad t\in (0,+\infty ], \end{aligned}$$

all symmetric and positive definite. Finally, the normalized Gaussian measures \(\gamma _t\) are defined for \(t\in (0,+\infty ]\) by

$$\begin{aligned} d\gamma _t(x) = (2\pi )^{-\frac{N}{2}} (\det Q_t)^{-\frac{1}{2} } e^{-\frac{1}{2} \langle Q_t^{-1}x,x\rangle }\,dx. \end{aligned}$$

As mentioned above, \(\gamma _\infty \) is the unique invariant probability measure of the Ornstein–Uhlenbeck semigroup.
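For concrete pairs (Q, B), the matrix \(Q_\infty \) can be obtained without computing the integral: differentiating \(s \mapsto e^{sB}\,Q\,e^{sB^*}\) and integrating over \((0,\infty )\) shows that \(X = Q_\infty \) solves the Lyapunov equation \(BX + XB^* = -Q\). The following sketch, assuming numpy and not part of the argument below, solves this equation by vectorization.

```python
import numpy as np

# Sketch: X = Q_infinity solves the Lyapunov equation B X + X B^T = -Q.
# In row-major vectorization, vec(B X) = (B (x) I) vec(X) and
# vec(X B^T) = (I (x) B) vec(X), so we solve one linear system.
def q_infinity(B, Q):
    N = B.shape[0]
    I = np.eye(N)
    A = np.kron(B, I) + np.kron(I, B)
    return np.linalg.solve(A, -Q.reshape(-1)).reshape(N, N)

# Example: for Q = I_2 and the matrix B of Section 4, one gets Q_infinity = I_2 / 2.
B = np.array([[-1.0, 1.0], [-1.0, -1.0]])
Q_inf = q_infinity(B, np.eye(2))
```

The linear system is uniquely solvable because the eigenvalues of the Kronecker sum are the pairwise sums of eigenvalues of B, all with negative real parts.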

The Ornstein–Uhlenbeck operator is the infinitesimal generator of the semigroup \(\big ( {\mathscr {H}}_t^{Q,B} \big )_{t> 0}\,\), and it is explicitly given by

$$\begin{aligned} {\mathscr {L}}^{Q,B} f(x)= \frac{1}{2} {{\,\mathrm{tr}\,}}\big ( Q\nabla ^2 f\big )(x)+ \langle Bx, \nabla f(x)\rangle \,,\qquad f\in {\mathscr {S}} (\mathbb {R}^N), \end{aligned}$$

where \(\nabla \) is the gradient and \(\nabla ^2\) the Hessian.

By convention, we abbreviate \( {\mathscr {H}}_t^{Q,B} \) and \({\mathscr {L}}^{Q,B} \) to \({\mathscr {H}}_t\) and \({\mathscr {L}}\), respectively. We can thus write \({\mathscr {H}}_t=e^{t{\mathscr {L}}}\).

In [8, Theorem 3.1], it is verified that the spectrum of \({\mathscr {L}}\) is the set

$$\begin{aligned} \left\{ \sum _{j=1}^r n_j \lambda _j\,:\, n_j\in {\mathbb {N}} \right\} , \end{aligned}$$
(1)

where \( \lambda _1, \ldots , \lambda _r\) are the eigenvalues of the drift matrix B. In particular, 0 is an eigenvalue of \({\mathscr {L}}\), and the corresponding eigenspace \(\ker {\mathscr {L}}\) is one-dimensional and consists of all constant functions, as proved in [8, Section 3].
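To illustrate (1), the following sketch, assuming numpy and using a sample drift matrix chosen only for illustration (its eigenvalues are \(-1 \pm i\)), enumerates low-order elements of this set.

```python
import numpy as np
from itertools import product

# Sample drift matrix (hypothetical choice), eigenvalues -1 +/- i.
B = np.array([[-1.0, 1.0], [-1.0, -1.0]])
lam = np.linalg.eigvals(B)

# All sums n1*lam1 + n2*lam2 with n1, n2 in {0, ..., 3}: a finite
# portion of the spectrum (1).
spectrum = [n1 * lam[0] + n2 * lam[1] for n1, n2 in product(range(4), repeat=2)]
```

In particular 0 (with all \(n_j = 0\)) is always in the set, in accordance with the statement about \(\ker {\mathscr {L}}\) above.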

We also recall that, given a linear operator T on some \(L^2\) space, a number \(\lambda \in \mathbb {C}\) is a generalized eigenvalue of T if there exists a nonzero \(u \in L^2\) such that \((T - \lambda I)^k \,u=0\) for some positive integer k. Then u is called a generalized eigenfunction, and such functions u span the generalized eigenspace corresponding to \(\lambda \). As already mentioned, it is known from [8, Section 3] that the Ornstein–Uhlenbeck operator \({\mathscr {L}}\) admits a complete system of generalized eigenfunctions, that is, the linear span of its generalized eigenfunctions is dense in \(L^2(\gamma _\infty )\). It is also known that all generalized eigenfunctions of \( {\mathscr {L}}\) are polynomials; see [7, Theorem 9.3.20].

2.1 Use of Hermite polynomials

As proved in [9], a suitable linear change of coordinates in \(\mathbb {R}^N\) makes \(Q=I\) and \(Q_\infty \) diagonal. When applying this change, we adhere to the notation introduced in [4, Lemma 1], where the following facts can also be found. Let \({\mathbf {H}}_n\) denote the space of Hermite polynomials of degree n in these coordinates, adapted by means of a dilation to \(\gamma _\infty \) in the sense that the \({\mathbf {H}}_n\) are mutually orthogonal in \(L^2(\gamma _\infty )\) (they are called \(H_{\lambda ,k}\) in [4]). The classical Hermite expansion (called the Itô–Wiener decomposition in [4]) says that \(L^2(\gamma _\infty )\) is the closure of the direct sum of the \({\mathbf {H}}_n\); we refer to [10, p. 64] for a proof in dimension one, and the extension to higher dimensions is straightforward. In other words, we can decompose any function \(u\in L^2(\gamma _\infty )\) as

$$\begin{aligned} u = \sum _j u_j \end{aligned}$$
(2)

with \(u_j \in {\mathbf {H}}_j\) and convergence in \(L^2(\gamma _\infty )\). Further, each \({\mathbf {H}}_n\) is invariant under \({\mathscr {L}}\); see [4, Proposition 1].

The Hermite decomposition implies, in particular, that each generalized eigenfunction of \({\mathscr {L}}\) with a nonzero eigenvalue is orthogonal to the space of constant functions, that is, to the kernel of \({\mathscr {L}}\). Nevertheless, we give here a proof of this fact which is independent of Hermite polynomials.

Lemma 2.1

Let \(\lambda \ne 0\). If \(u \in L^2(\gamma _\infty )\) and \((\mathscr {L} - \lambda )^k \,u = 0\) for some \(k \in \{ 1, 2,\dots \}\), then \(\int u\,d\gamma _\infty = 0\).

Proof

We argue by induction on k. The statement is trivial for \(k = 0\), since then \(u = 0\). Assume now that it holds for some \(k \ge 0\) and that \((\mathscr {L} - \lambda )^{k+1}\, u = 0\).

Then

$$\begin{aligned} \mathscr {L}(\mathscr {L} - \lambda )^k\, u = \lambda (\mathscr {L} - \lambda )^k\, u, \end{aligned}$$

and thus for any \(t>0\),

$$\begin{aligned} e^{t \mathscr {L}}(\mathscr {L} - \lambda )^k \,u = e^{t\lambda } (\mathscr {L} - \lambda )^k\, u. \end{aligned}$$

These operators commute, so

$$\begin{aligned} (\mathscr {L} - \lambda )^k\, e^{t \mathscr {L}} u =(\mathscr {L} - \lambda )^k \,e^{t\lambda } u, \end{aligned}$$

that is,

$$\begin{aligned} (\mathscr {L} - \lambda )^k \, (e^{t \mathscr {L}} u - e^{t\lambda } u) = 0. \end{aligned}$$

The induction assumption now implies that

$$\begin{aligned} \int (e^{t \mathscr {L}} u - e^{t\lambda } u)\,d\gamma _\infty = 0. \end{aligned}$$

Since \(\gamma _\infty \) is invariant under the semigroup, this means that

$$\begin{aligned} \int u \,d\gamma _\infty = e^{t\lambda } \int u \,d\gamma _\infty \end{aligned}$$

for all \(t>0\). Since \(\lambda \ne 0\), we can choose \(t>0\) such that \(e^{t\lambda } \ne 1\), and thus the integral vanishes. \(\square \)

3 The case when B has only one eigenvalue

Proposition 3.1

If the drift matrix B has only one eigenvalue, then any two generalized eigenfunctions of \({\mathscr {L}}\) with different eigenvalues are orthogonal with respect to \(\gamma _\infty \).

Let \(\lambda \) be the unique eigenvalue of B, which is necessarily real and negative, since any nonreal eigenvalues of the real matrix B would occur in conjugate pairs. We first state a lemma and use it to prove the proposition. Recall that any generalized eigenfunction of \({\mathscr {L}}\) is a polynomial.

Lemma 3.2

Let u be a generalized eigenfunction of \({\mathscr {L}}\) which is a polynomial of degree \(n \ge 0\). Then the corresponding eigenvalue is \(n\lambda \).

Proof of Proposition 3.1

Let u be a generalized eigenfunction of \({\mathscr {L}}\), thus satisfying \(({\mathscr {L}} - \mu )^k\, u = 0\) for some \(\mu \in \mathbb {C}\) and some positive integer k. Passing to the coordinates from Subsection 2.1, we can decompose u as in (2), where the sum is now finite. Since then

$$\begin{aligned} \sum _j ({\mathscr {L}} - \mu )^k u_j = 0 \end{aligned}$$

and each term here is in the corresponding \( {\mathbf {H}}_j\), all the terms are 0. Thus each nonzero \(u_j\) is a generalized eigenfunction of \({\mathscr {L}}\) with eigenvalue \(\mu \), and Lemma 3.2 forces \(\mu = j\lambda \). Since \(\lambda \ne 0\), this is possible for at most one j, so there is only one nonzero term in the decomposition of u. Thus \(u \in {\mathbf {H}}_n\), where n is the polynomial degree of u.

Lemma 3.2 then implies that two generalized eigenfunctions with different eigenvalues are of different degrees and thus belong to different \({\mathbf {H}}_n\). The desired orthogonality now follows from that of the \({\mathbf {H}}_n\). \(\square \)

Proof of Lemma 3.2

Let u be a generalized eigenfunction of \({\mathscr {L}}\) of polynomial degree n. We denote the corresponding eigenvalue by \(\mu \). Decomposing u as in (2), we see that the sum runs over \(j\le n\) and that the term \(u_n\) is nonzero and a generalized eigenfunction of \({\mathscr {L}}\) with eigenvalue \(\mu \). Choosing m maximal with \(({\mathscr {L}} - \mu )^m u_n \ne 0\), we obtain an eigenfunction with the same eigenvalue. This function is in \( {\mathbf {H}}_n\) and thus a polynomial of degree n. As a result, we can assume that u is actually an eigenfunction of \({\mathscr {L}}\) when proving the lemma.

We now choose coordinates in \(\mathbb {R}^N\) that give a Jordan decomposition of B. This means that \(B = \lambda I + R\), where \(R = (R_{i,j})\) is a matrix with nonzero entries only in the first subdiagonal. More precisely, \(R_{i,i-1} = 1\) for \(i \in P\), where P is a subset of \(\{2,\dots ,N\}\), and all other entries of R vanish.

We write \({\mathscr {L}} = {\mathscr {S}}+{\mathscr {B}}\), where

$$\begin{aligned} {\mathscr {B}} f(x) = \langle Bx, \nabla f(x)\rangle , \end{aligned}$$

and \({\mathscr {S}}\) is the remaining, second-degree part of \({\mathscr {L}}\). Notice that, when applied to polynomials, \({\mathscr {B}}\) preserves the degree whereas \({\mathscr {S}}\) decreases it by 2. So if v is the nth-degree part of u, we must have \({\mathscr {B}} v = \mu v\).

We let \({\mathscr {B}}\) act on a monomial \(x^\alpha \), where \(\alpha \in \mathbb {N}^N\) is a multiindex of length \(|\alpha | = n\), getting

$$\begin{aligned} {\mathscr {B}} x^\alpha&= \sum _j \lambda x_j\, \frac{\partial x^\alpha }{\partial x_j} + \sum _{i \in P} x_{i-1}\, \frac{\partial x^\alpha }{\partial x_i} \\&= \lambda \sum _j \alpha _j \,x^\alpha + \sum _{i \in P} \alpha _i \,\frac{x_{i-1} }{x_i} \,x^\alpha = \lambda n\, x^\alpha + \sum _{i \in P} \alpha _i \, x^{\alpha ^{(i)}}, \end{aligned}$$

where \(\alpha ^{(i)} = \alpha + e_{i-1} - e_i\) for \(i \in P\). Here \(\{e_{j}\}_{j=1}^N\) denotes the standard basis in \(\mathbb {R}^N\). Thus the restriction of \({\mathscr {B}}\) to the space of homogeneous polynomials of degree n is given as \(\lambda n I + {\mathscr {R}}\), where \({\mathscr {R}}\) is the linear operator that maps \(x^\alpha \) to \( \sum _{i \in P} \alpha _i \, x^{\alpha ^{(i)}}\).

We claim that the only eigenvalue of \({\mathscr {R}}\) is 0. If so, the only eigenvalue of the restriction of \({\mathscr {B}}\) mentioned above is \(\lambda n\), which would prove the lemma since \({\mathscr {B}} v = \mu v\).

In order to prove this claim, we define for any \(\alpha \in \mathbb {N}^N\) with \(|\alpha | = n\),

$$\begin{aligned} V(\alpha ) = \sum _{j=1}^N j\, \alpha _j. \end{aligned}$$

Clearly \(V(\alpha ^{(i)}) = V(\alpha ) - 1\). We select a basis in the linear space of all homogeneous polynomials of degree n consisting of all monomials \(x^\alpha \) with \(|\alpha | = n\), enumerated in such a way that V is nondecreasing. The definition of \({\mathscr {R}}\) now shows that its matrix with respect to this basis is upper triangular with zeros on the diagonal. The claim follows, and so does the lemma. \(\square \)
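The triangularity argument can be checked mechanically in a small case. The sketch below, assuming numpy and using the hypothetical sample values \(N = 3\), \(P = \{2,3\}\), \(n = 2\), builds the matrix of \({\mathscr {R}}\) in the V-ordered monomial basis and verifies that it is strictly upper triangular, hence nilpotent.

```python
import numpy as np
from itertools import product

# Hypothetical small case: N = 3, full Jordan block (P = {2, 3}), degree n = 2.
N, n, P = 3, 2, [2, 3]                    # P in the 1-based notation of the text

# Multi-indices alpha with |alpha| = n, ordered so that V is nondecreasing.
alphas = [a for a in product(range(n + 1), repeat=N) if sum(a) == n]
alphas.sort(key=lambda a: sum((j + 1) * a[j] for j in range(N)))   # V(alpha)
index = {a: k for k, a in enumerate(alphas)}

# Matrix of R: x^alpha -> sum over i in P of alpha_i * x^{alpha + e_{i-1} - e_i}.
R = np.zeros((len(alphas), len(alphas)))
for a in alphas:
    for i in P:
        if a[i - 1] > 0:                  # alpha_i > 0, otherwise the term vanishes
            b = list(a); b[i - 1] -= 1; b[i - 2] += 1
            R[index[tuple(b)], index[a]] += a[i - 1]

assert np.allclose(np.tril(R), 0)         # strictly upper triangular
assert np.allclose(np.linalg.matrix_power(R, len(alphas)), 0)   # nilpotent
```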

4 B has two distinct eigenvalues: a first example

The following example shows that the generalized eigenspaces of the Ornstein–Uhlenbeck operator may be orthogonal even in the case when B has more than one eigenvalue. We show that \(\mathscr {L}\), while not being self-adjoint, is normal; then the orthogonality of its eigenspaces follows from the spectral theorem.

In two dimensions, we let

$$\begin{aligned} Q=I_2 \;\;\text { and } \;\;B = \begin{pmatrix} -1 & 1 \\ -1 & -1\end{pmatrix} \end{aligned}$$
(3)

whose eigenvalues are \(-1 \pm i\).

One finds that

$$\begin{aligned} e^{sB} = e^{-s} \begin{pmatrix} \cos s & \sin s \\ -\sin s & \cos s\end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} e^{sB}\,e^{sB^*} = e^{-2s}\,I_2, \end{aligned}$$

so that

$$\begin{aligned} Q_\infty = \frac{1}{2}\,I_2, \qquad \qquad Q_\infty ^{-1} = 2\,I_2. \end{aligned}$$

We write

$$\begin{aligned} B=-I_2+R, \qquad \text {where }\qquad R=\begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix}. \end{aligned}$$

Since \(R=-R^*\), [8, Proposition 2.1] implies that \(\mathscr {L}\) is normal (observe that \(I_2 = \frac{1}{2}\, D_{1/\lambda }\) in the notation of [8]).

However, we give below a brief, direct proof of this fact, independent of the change of variables adopted in [8, 9]. In the following, we write

$$\begin{aligned} \mathscr {L}=\mathscr {L}^0+\mathscr {R}, \end{aligned}$$

where

$$\begin{aligned} \mathscr {L}^0=\frac{1}{2}\, \Delta - \langle x, \nabla \rangle \end{aligned}$$

is the standard Ornstein–Uhlenbeck operator and so self-adjoint in \(L^2(\gamma _\infty )\). Further,

$$\begin{aligned} \mathscr {R}= \langle Rx, \nabla \rangle \end{aligned}$$

is seen to be an antisymmetric operator in \(L^2(\gamma _\infty )\). This leads to

$$\begin{aligned}{}[\mathscr {L}, \mathscr {L}^*]= [\mathscr {L}^0+ \mathscr {R}, \mathscr {L}^0- \mathscr {R}] =-\mathscr {L}^0 \mathscr {R}+\mathscr {R} \mathscr {L}^0-\mathscr {L}^0 \mathscr {R}+\mathscr {R}\mathscr {L}^0 =2[\mathscr {R}, \mathscr {L}^0]. \end{aligned}$$

If we write \(\partial _i\) for \(\partial _{x_i},\;i=1,2\), this amounts to

$$\begin{aligned} 2\left[ x_2\,\partial _1 -x_1\,\partial _2\, ,\, \frac{1}{2} \,\Delta -x_1\,\partial _1-x_2\,\partial _2\right] . \end{aligned}$$

A straightforward computation shows that this vanishes, and so \(\mathscr {L}\) is normal.
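This vanishing can also be confirmed symbolically; the sketch below, assuming sympy, applies the commutator to a sample polynomial.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# R = x2 d_1 - x1 d_2, the rotation generator.
def R_op(f):
    return x2 * sp.diff(f, x1) - x1 * sp.diff(f, x2)

# L0 = (1/2) Laplacian - <x, grad>, the standard Ornstein-Uhlenbeck operator.
def L0(f):
    return sp.Rational(1, 2) * (sp.diff(f, x1, 2) + sp.diff(f, x2, 2)) \
        - x1 * sp.diff(f, x1) - x2 * sp.diff(f, x2)

# The commutator [R, L0] applied to a sample polynomial vanishes.
f = x1**3 * x2 + x1 * x2**2 + x1**2 - 5 * x2
comm = sp.expand(R_op(L0(f)) - L0(R_op(f)))
assert comm == 0
```

The operator identity behind this check is that the rotation generator commutes both with the Laplacian and with the Euler operator \(\langle x, \nabla \rangle \).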

The spectral theorem for normal operators now implies the following result.

Proposition 4.1

With \(N=2\), let Q and B be as in (3). Then each generalized eigenfunction of \(\mathscr {L}\) is an eigenfunction. Moreover, any two eigenfunctions of \(\mathscr {L}\) with different eigenvalues are orthogonal with respect to \(\gamma _\infty \).

5 B has two distinct eigenvalues: a second example

In this section, we exhibit a class of drift matrices B with two different eigenvalues (which, in contrast to those in the example in Section 4, are real), but such that the generalized eigenspaces associated to the corresponding Ornstein–Uhlenbeck operator \({\mathscr {L}}\) are not orthogonal.

In \(\mathbb {R}^2\), we consider \(Q=I_2\) and

$$\begin{aligned} B = \begin{pmatrix} -a+d &\quad 0 \\ c &\quad -a-d \end{pmatrix}, \end{aligned}$$
(4)

with \(a>d>0\) and \(c\ne 0\). To compute the exponential of sB, we write \(B= -aI + M\), where

$$\begin{aligned} M = \begin{pmatrix} d &\quad 0 \\ c &\quad -d\end{pmatrix}. \end{aligned}$$

Since \(MM = d^2 I\), we get for \(s>0\),

$$\begin{aligned} \exp (sB)= e^{-as}\,\left( \cosh (sd)\, I + d^{-1} \, \sinh (sd)\, M \right) \,. \end{aligned}$$

This leads to

$$\begin{aligned} \exp (sB)\,\exp (sB^*) =&\, e^{-2as} \begin{pmatrix} e^{2sd} &\quad \frac{c}{d} \, e^{sd} \sinh (sd) \\ \frac{c}{d} \, e^{sd} \sinh (sd) &\quad \frac{c^2}{d^2} \, \sinh ^2 (sd) + e^{-2sd} \end{pmatrix}. \end{aligned}$$

Integrating this matrix over \(0<s<\infty \), we obtain

$$\begin{aligned} Q_\infty &=\begin{pmatrix} \frac{1}{2(a-d)} &\quad \frac{c}{4a(a-d)} \\ \frac{c}{4a(a-d)} &\quad \frac{c^2}{4a(a-d)(a+d)} + \frac{1}{2(a+d)} \end{pmatrix} , \end{aligned}$$

and so

$$\begin{aligned} \frac{1}{2}\,Q_\infty ^{-1} &=\frac{1}{c^2+4a^2} \begin{pmatrix} 2a[c^2+2a(a-d)] & -2ac(a+d) \\ -2ac(a+d) & 4a^2(a+d) \end{pmatrix}. \end{aligned}$$

The invariant measure \(\gamma _\infty \) is thus proportional to

$$\begin{aligned}&\exp \left( - \frac{2a[c^2+2a(a-d)]}{c^2+4a^2}\, x_1^2 +\frac{4ac(a+d)}{c^2+4a^2}\, x_1 x_2 - \frac{4a^2(a+d)}{c^2+4a^2}\, x_2^2\right) dx \\&\quad = \exp \big (- {(a-d)}\, x_1^2\big )\; \exp \left( -\frac{a+d}{c^2+4a^2} \left( c x_1-2a x_2 \right) ^2\right) dx. \end{aligned}$$

Writing \(z_1 =\sqrt{a-d}\, x_1\) and \(z_2 =\sqrt{\frac{a+d}{c^2+4a^2}}\, \big (2a x_2-c x_1\big )\) and recalling that \(\gamma _\infty \) is a probability measure, we see that

$$\begin{aligned} d\gamma _\infty&= \pi ^{-1}\,\exp \big (- z_1^2- z_2^2\big )\, dz. \end{aligned}$$
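As a sanity check on these computations, the sketch below, assuming numpy and using the hypothetical sample values \(a = 2\), \(d = 1\), \(c = 3\), verifies that \(M^2 = d^2 I\), compares the closed form for \(\exp (sB)\) with a plain Taylor series, and checks the entries of \(Q_\infty \) against the Lyapunov equation \(BQ_\infty + Q_\infty B^* = -I\), which follows from the integral formula for \(Q_\infty \).

```python
import numpy as np

# Hypothetical sample values satisfying a > d > 0, c != 0.
a, d, c = 2.0, 1.0, 3.0
M = np.array([[d, 0.0], [c, -d]])
B = -a * np.eye(2) + M
assert np.allclose(M @ M, d**2 * np.eye(2))

# Closed form for exp(sB) versus a plain Taylor series for the exponential.
s = 0.7
closed = np.exp(-a * s) * (np.cosh(s * d) * np.eye(2) + np.sinh(s * d) * M / d)
series, term = np.eye(2), np.eye(2)
for k in range(1, 40):
    term = term @ (s * B) / k
    series = series + term
assert np.allclose(closed, series)

# The displayed entries of Q_infinity satisfy B X + X B^T = -I.
Q_inf = np.array([
    [1 / (2 * (a - d)), c / (4 * a * (a - d))],
    [c / (4 * a * (a - d)), c**2 / (4 * a * (a - d) * (a + d)) + 1 / (2 * (a + d))],
])
assert np.allclose(B @ Q_inf + Q_inf @ B.T, -np.eye(2))
```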

To find some eigenfunctions of \({\mathscr {L}}\), we consider polynomials in \(x_1,x_2\) of degree 2. One finds that

$$\begin{aligned} v_1&= x_1^2 -\frac{1}{2(a-d)},\\ v_2&= x_1^2 -\frac{2d}{c}\, x_1x_2 -\frac{1}{2a},\\ v_3&= x_1^2 -\frac{4d}{c}\,x_1x_2 + \frac{4d^2}{c^2}\, x_2^2 -\frac{c^2+4d^2}{2c^2(a+d)} \end{aligned}$$

are eigenfunctions, with eigenvalues \(-2(a-d),\, -2a\), and \(-2(a+d)\), respectively.
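That \(v_1, v_2, v_3\) are indeed eigenfunctions with the stated eigenvalues can be verified symbolically; a sketch assuming sympy:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
a, c, d = sp.symbols('a c d', positive=True)

# L = (1/2) Laplacian + <Bx, grad> with B as in (4).
def L(f):
    lap = sp.diff(f, x1, 2) + sp.diff(f, x2, 2)
    drift = ((-a + d) * x1) * sp.diff(f, x1) \
        + (c * x1 + (-a - d) * x2) * sp.diff(f, x2)
    return sp.Rational(1, 2) * lap + drift

v1 = x1**2 - 1 / (2 * (a - d))
v2 = x1**2 - 2 * d / c * x1 * x2 - 1 / (2 * a)
v3 = x1**2 - 4 * d / c * x1 * x2 + 4 * d**2 / c**2 * x2**2 \
    - (c**2 + 4 * d**2) / (2 * c**2 * (a + d))

# Check L v = lambda v for each stated eigenvalue.
for v, lam in [(v1, -2 * (a - d)), (v2, -2 * a), (v3, -2 * (a + d))]:
    assert sp.simplify(L(v) - lam * v) == 0
```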

Any two of these polynomials turn out not to be orthogonal with respect to the invariant measure, as follows by straightforward computations. We sketch one example.

One simply multiplies \(v_1\) and \(v_3\) and rewrites the product in terms of \(z_1\) and \(z_2\). When integrating with respect to \(\gamma _\infty \), one can neglect all terms of odd order in \(z_1\) or \(z_2\). Writing “\(\mathrm { odd}\)” for such terms, we find that the product is

$$\begin{aligned}&\frac{1 }{a^2}\, z_1^4 + \frac{d^2 (c^2+4a^2)}{a^2c^2 (a^2-d^2)}\,z_1^2 z_2^2 -\Big [\frac{c^2+4d^2}{2c^2(a^2-d^2)} +\frac{1}{2a^2} \Big ]\, z_1^2 \\&-\frac{d^2(c^2+4a^2)}{2a^2c^2(a^2-d^2)}\,z_2^2 + \frac{c^2+4d^2}{4c^2(a^2-d^2)} + \mathrm { odd}. \end{aligned}$$

Integrating and simplifying, we get

$$\begin{aligned} \int&v_1v_3 \,d\gamma _\infty = \frac{1 }{2a^2 } \,>\,0, \end{aligned}$$

so \(v_1\) and \(v_3\) are not orthogonal.
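The value of this integral can be double-checked numerically by Gauss–Hermite quadrature in the coordinates \(z_1, z_2\); a sketch assuming numpy, with the hypothetical sample values \(a = 2\), \(d = 1\), \(c = 1\):

```python
import numpy as np

# Hypothetical sample values with a > d > 0, c != 0.
a, d, c = 2.0, 1.0, 1.0

# Gauss-Hermite nodes and weights for the weight function e^{-z^2};
# dgamma_infinity = pi^{-1} e^{-z1^2 - z2^2} dz in the z-coordinates above.
nodes, weights = np.polynomial.hermite.hermgauss(8)
z1, z2 = np.meshgrid(nodes, nodes, indexing='ij')
w = np.outer(weights, weights) / np.pi

# Invert the change of variables: x1 from z1, then x2 from z2.
x1 = z1 / np.sqrt(a - d)
x2 = (c * x1 + np.sqrt((c**2 + 4 * a**2) / (a + d)) * z2) / (2 * a)

v1 = x1**2 - 1 / (2 * (a - d))
v3 = x1**2 - 4 * d / c * x1 * x2 + 4 * d**2 / c**2 * x2**2 \
    - (c**2 + 4 * d**2) / (2 * c**2 * (a + d))

integral = np.sum(w * v1 * v3)
assert abs(integral - 1 / (2 * a**2)) < 1e-10
```

Since the integrand is a polynomial of degree 4 in each variable, the 8-point rule is exact up to rounding.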

Remark 5.1

Now let \(d=a/2\) in this example. Then the fourth-degree polynomial

$$\begin{aligned} v_4 = x_1^4 - \frac{6}{a}\,x_1^2 + \frac{3}{a^2} \end{aligned}$$

is an eigenfunction of \({\mathscr {L}}\) with eigenvalue \(-2a\), like \(v_2\). Thus eigenfunctions of different polynomial degrees can have the same eigenvalue. This shows that for an eigenfunction u, the sum in (2) may consist of more than one term, and a (generalized) eigenspace need not be contained in one \({\mathbf {H}}_n\).
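A quick symbolic check of this claim, in a sketch assuming sympy:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
a, c = sp.symbols('a c', positive=True)
d = a / 2                                   # the choice made in this remark

# L = (1/2) Laplacian + <Bx, grad> with B as in (4) and d = a/2.
def L(f):
    lap = sp.diff(f, x1, 2) + sp.diff(f, x2, 2)
    drift = ((-a + d) * x1) * sp.diff(f, x1) \
        + (c * x1 + (-a - d) * x2) * sp.diff(f, x2)
    return sp.Rational(1, 2) * lap + drift

# v4 is an eigenfunction with eigenvalue -2a.
v4 = x1**4 - 6 / a * x1**2 + 3 / a**2
assert sp.simplify(L(v4) + 2 * a * v4) == 0
```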

The eigenvalues of the matrix B defined in (4) are \(-a\pm d\), and it is easily seen that the corresponding eigenspaces are not orthogonal in \(\mathbb {R}^2\). This turns out to be related to the non-orthogonality of the eigenspaces of \({\mathscr {L}}\), at least in two dimensions, in the following way.

Proposition 5.2

Let \(N=2\) and \(Q = I\), and assume that B has two different, real eigenvalues. Then the generalized eigenspaces of \({\mathscr {L}}\) are orthogonal in \(L^2(\gamma _\infty )\) if and only if the two eigenspaces of B are orthogonal in \(\mathbb {R}^2\).

Proof

To begin with, we consider a coordinate change \({\widetilde{x}} = Hx\), where H is an orthogonal matrix. Simple computations show that the operator \({\mathscr {L}}^{Q,B}\) is transformed to \({\mathscr {L}}^{{\widetilde{Q}}, {\widetilde{B}}}\) in the new coordinates, with \({\widetilde{Q}} =HQH^*\) and \({\widetilde{B}} =HBH^*\); cf. [9, p. 474]. In our case, \({\widetilde{Q}} = Q = I\). The eigenvalues of B and the angle between its eigenvectors will not change.

To prove the proposition, assume first that the (real) eigenvectors of B are orthogonal in \(\mathbb {R}^2\). Then B is symmetric since it can be diagonalized by means of an orthogonal change of coordinates as just described. This implies that \({\mathscr {L}}\) is symmetric ([7, Proposition 9.3.10]), so that the orthogonality of its eigenspaces is trivial.

Next, we assume that the eigenvectors of B are not orthogonal in \(\mathbb {R}^2\). By Schur’s decomposition theorem (see [5, Theorem 2.3.1]), there exists an orthogonal change of coordinates which makes B lower triangular, though not diagonal. We are thus in the situation described in (4). As we have seen, some eigenspaces of \({\mathscr {L}}\) are then not orthogonal with respect to the invariant measure. \(\square \)

Finally, we remark that the “if” part of this proposition easily extends to arbitrary dimension N, under the assumption that B has N distinct real eigenvalues with mutually orthogonal eigenspaces.