On the orthogonality of generalized eigenspaces for the Ornstein--Uhlenbeck operator

We study the orthogonality of the generalized eigenspaces of an Ornstein--Uhlenbeck operator $\mathcal L$ in $\mathbb{R}^N$, with drift given by a real matrix $B$ whose eigenvalues have negative real parts. If $B$ has only one eigenvalue, we prove that any two distinct generalized eigenspaces of $\mathcal L$ are orthogonal with respect to the invariant Gaussian measure. Then we show by means of two examples that if $B$ admits distinct eigenvalues, the generalized eigenspaces of $\mathcal L$ may or may not be orthogonal.


Introduction
In this note we discuss the orthogonality of the generalized eigenspaces associated to a general Ornstein-Uhlenbeck operator $\mathcal L$ in $\mathbb R^N$.
Recently, the authors started studying some harmonic analysis issues in a nonsymmetric Gaussian context [1,2,3]. In particular, the Ornstein-Uhlenbeck semigroup $(\mathcal H_t)_{t>0}$ generated by $\mathcal L$ is not assumed self-adjoint in $L^2(\gamma_\infty)$; here $\gamma_\infty$ denotes the unique probability measure invariant under the action of the semigroup, and it will be specified later.
In this general framework, the Ornstein-Uhlenbeck operator $\mathcal L$ admits a complete system of generalized eigenfunctions; see [8]. But without self-adjointness, the orthogonality of distinct eigenspaces of $\mathcal L$ is not guaranteed. In fact, while the kernel of $\mathcal L$ is always orthogonal to the other generalized eigenspaces of $\mathcal L$ in $L^2(\gamma_\infty)$, the question of orthogonality between generalized eigenspaces associated to nonzero eigenvalues is more delicate. As expected, the spectral properties of $B$ play a prominent role here. Indeed, we prove in Section 3 that if $B$ has a unique eigenvalue, then any two generalized eigenfunctions of $\mathcal L$ corresponding to different eigenvalues are orthogonal in $L^2(\gamma_\infty)$.
Then in Sections 4 and 5 we exhibit two examples showing, respectively, that if B admits two distinct eigenvalues, the generalized eigenspaces associated to L may or may not be orthogonal. The last section also contains a result which relates orthogonality of the eigenspaces of L to that of the eigenspaces of the drift matrix, under some restrictions.
In the following, the symbol $I_k$ will denote the identity matrix of size $k$, and we omit the subscript when the size is obvious. We will write $\langle \cdot, \cdot \rangle$ for scalar products both in $\mathbb R^N$ and in $L^2(\gamma_\infty)$. By $\mathbb N$ we mean $\{0, 1, \dots\}$.

2. The Ornstein-Uhlenbeck operator
In this section, we specify the definition of the Ornstein-Uhlenbeck operator L and recall some known facts concerning its spectrum.
We consider the Ornstein-Uhlenbeck semigroup $(\mathcal H_t^{Q,B})_{t>0}$, given for all bounded continuous functions $f$ in $\mathbb R^N$, $N \ge 1$, and all $t > 0$ by the Kolmogorov formula
$$\mathcal H_t^{Q,B} f(x) = \int f\big(e^{tB}x - y\big)\, d\gamma_t(y)$$
(see [6]). Here $B$ is a real $N \times N$ matrix whose eigenvalues have negative real parts, and $Q$ is a real, symmetric and positive-definite $N \times N$ matrix. Then we introduce the covariance matrices
$$Q_t = \int_0^t e^{sB}\, Q\, e^{sB^*}\, ds, \qquad t \in (0, +\infty],$$
all symmetric and positive definite. Finally, the normalized Gaussian measures $\gamma_t$ are defined for $t \in (0, +\infty]$ by
$$d\gamma_t(x) = (2\pi)^{-N/2} (\det Q_t)^{-1/2}\, e^{-\frac 12 \langle Q_t^{-1} x,\, x \rangle}\, dx.$$
As mentioned above, $\gamma_\infty$ is the unique invariant probability measure of the Ornstein-Uhlenbeck semigroup.
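As a numerical sanity check (not part of the original text), one can approximate $Q_\infty = \int_0^\infty e^{sB} Q e^{sB^*} ds$ for an arbitrarily chosen admissible drift and verify that it solves the Lyapunov equation $BQ_\infty + Q_\infty B^* = -Q$, which characterizes the invariant covariance; the drift below is an assumption of this sketch.

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (valid for diagonalizable M)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

def Q_t(B, Q, t, steps=10000):
    """Trapezoidal approximation of Q_t = ∫_0^t e^{sB} Q e^{sB*} ds."""
    s = np.linspace(0.0, t, steps + 1)
    vals = []
    for si in s:
        E = expm(si * B)
        vals.append(E @ Q @ E.T)
    vals = np.array(vals)
    w = np.full(steps + 1, t / steps)
    w[0] = w[-1] = t / (2 * steps)       # trapezoid endpoint weights
    return (vals * w[:, None, None]).sum(axis=0)

# Arbitrarily chosen drift with eigenvalues -2, -1 (negative real parts), Q = I_2.
B = np.array([[-2.0, 1.0], [0.0, -1.0]])
Q = np.eye(2)

Qinf = Q_t(B, Q, 30.0)   # the integrand decays exponentially, so t = 30 ≈ ∞
print(np.allclose(B @ Qinf + Qinf @ B.T, -Q, atol=1e-3))  # Lyapunov equation holds
```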
The Ornstein-Uhlenbeck operator is the infinitesimal generator of the semigroup $(\mathcal H_t^{Q,B})_{t>0}$, and it is explicitly given by
$$\mathcal L^{Q,B} f(x) = \tfrac 12\, \operatorname{tr}\big(Q\, \nabla^2 f\big)(x) + \langle Bx, \nabla f(x) \rangle,$$
where $\nabla$ is the gradient and $\nabla^2$ the Hessian. By convention, we abbreviate $\mathcal H_t^{Q,B}$ and $\mathcal L^{Q,B}$ to $\mathcal H_t$ and $\mathcal L$, respectively. We can thus write $\mathcal H_t = e^{t\mathcal L}$.
In [8, Theorem 3.1] it is verified that the spectrum of $\mathcal L$ is the set
$$\Big\{ \sum_{j=1}^r n_j \lambda_j : \; n_j \in \mathbb N \Big\},$$
where $\lambda_1, \dots, \lambda_r$ are the eigenvalues of the drift matrix $B$. In particular, $0$ is an eigenvalue of $\mathcal L$, and the corresponding eigenspace $\ker \mathcal L$ is one-dimensional and consists of all constant functions, as proved in [8, Section 3].
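As a small numerical illustration of this spectral description (not from the original text), one can realize $\mathcal L$ on polynomials of degree at most $m$ and check that its eigenvalues are exactly the sums $n_1\lambda_1 + n_2\lambda_2$. The sketch assumes $N = 2$, $Q = I$ and a diagonal drift $B = \operatorname{diag}(\lambda_1, \lambda_2)$, for which $\mathcal L x^\alpha = (\alpha_1\lambda_1 + \alpha_2\lambda_2)\, x^\alpha + \tfrac 12 \alpha_1(\alpha_1 - 1)\, x^{\alpha - 2e_1} + \tfrac 12 \alpha_2(\alpha_2 - 1)\, x^{\alpha - 2e_2}$.

```python
import numpy as np

lam = (-1.0, -2.5)   # hypothetical eigenvalues of a diagonal drift B
m = 3                # truncation degree

# Monomials x1^i x2^j with i + j <= m, graded so the matrix is triangular.
mono = [(i, j) for i in range(m + 1) for j in range(m + 1 - i)]
pos = {a: k for k, a in enumerate(mono)}

# Matrix of L = ½Δ + <Bx, ∇> on these monomials.
M = np.zeros((len(mono), len(mono)))
for (i, j), k in pos.items():
    M[k, k] = i * lam[0] + j * lam[1]                 # <Bx, ∇> preserves the degree
    if i >= 2:
        M[pos[(i - 2, j)], k] += 0.5 * i * (i - 1)    # ½ ∂²/∂x1² lowers it by 2
    if j >= 2:
        M[pos[(i, j - 2)], k] += 0.5 * j * (j - 1)    # ½ ∂²/∂x2²

eig = sorted(np.linalg.eigvals(M).real)
expected = sorted(i * lam[0] + j * lam[1] for (i, j) in mono)
print(np.allclose(eig, expected))   # eigenvalues are the sums n1·λ1 + n2·λ2
```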
We also recall that, given a linear operator $T$ on some $L^2$ space, a number $\lambda \in \mathbb C$ is a generalized eigenvalue of $T$ if there exists a nonzero $u \in L^2$ such that $(T - \lambda I)^k u = 0$ for some positive integer $k$. Then $u$ is called a generalized eigenfunction, and those $u$ span the generalized eigenspace corresponding to $\lambda$. As already recalled, it is known from [8, Section 3] that the Ornstein-Uhlenbeck operator $\mathcal L$ admits a complete system of generalized eigenfunctions, that is, the linear span of the generalized eigenfunctions is dense in $L^2(\gamma_\infty)$.

2.1. Use of Hermite polynomials

As proved in [9], a suitable linear change of coordinates in $\mathbb R^N$ makes $Q = I$ and $Q_\infty$ diagonal. When applying this, we adhere to the notation introduced in Lemma 1 in [4], where also the following facts can be found. Let $\mathcal H_n$ denote the space of Hermite polynomials of degree $n$ in these coordinates, adapted by means of a dilation to $\gamma_\infty$ in the sense that the $\mathcal H_n$ are mutually orthogonal in $L^2(\gamma_\infty)$ (they are written $H_{\lambda,k}$ in [4]). The classical Hermite expansion (called the Itô-Wiener decomposition in [4]) says that $L^2(\gamma_\infty)$ is the closure of the direct sum of the $\mathcal H_n$; we refer to [10, p. 64] for a proof in dimension one and note that the extension to higher dimension is trivial. In other words, we can decompose any function $f \in L^2(\gamma_\infty)$ as

(2.1) $\qquad f = \sum_{n=0}^{\infty} f_n, \qquad f_n \in \mathcal H_n.$

The Hermite decomposition implies, in particular, that each generalized eigenfunction of $\mathcal L$ with a nonzero eigenvalue is orthogonal to the space of constant functions, that is, to the kernel of $\mathcal L$. Nevertheless, we provide here a proof of this fact which is independent of Hermite polynomials.
Proposition. Let $\lambda \neq 0$, and let $u \in L^2(\gamma_\infty)$ satisfy $(\mathcal L - \lambda)^k u = 0$ for some $k \in \mathbb N$. Then $\int u \, d\gamma_\infty = 0$.

Proof. We argue by induction on $k$. The implication is trivial if we set $k = 0$, so assume it holds for some $k \ge 0$ and that $(\mathcal L - \lambda)^{k+1} u = 0$.
Then $\mathcal L (\mathcal L - \lambda)^k u = \lambda (\mathcal L - \lambda)^k u$, and thus for any $t > 0$
$$\mathcal H_t (\mathcal L - \lambda)^k u = e^{t\lambda} (\mathcal L - \lambda)^k u.$$
These operators commute, so
$$(\mathcal L - \lambda)^k \big( \mathcal H_t u - e^{t\lambda} u \big) = 0,$$
and the induction hypothesis gives $\int \big( \mathcal H_t u - e^{t\lambda} u \big)\, d\gamma_\infty = 0$. Since $\gamma_\infty$ is invariant under the semigroup, this means that $\int u \, d\gamma_\infty = e^{t\lambda} \int u \, d\gamma_\infty$ for all $t > 0$. Thus the integral vanishes.
3. The case when B has only one eigenvalue

Proposition 3.1. If the drift matrix $B$ has only one eigenvalue, then any two generalized eigenfunctions of $\mathcal L$ with different eigenvalues are orthogonal with respect to $\gamma_\infty$.
We let $\lambda$ be the unique eigenvalue of $B$, which is necessarily real and negative. It is known that all generalized eigenfunctions of $\mathcal L$ are polynomials; see [7, Theorem 9.3.20].
We first state a lemma and use it to prove the proposition.
Lemma 3.2. Let u be a generalized eigenfunction of L which is a polynomial of degree n ≥ 0. Then the corresponding eigenvalue is nλ.
Proof of Proposition 3.1. Let $u$ be a generalized eigenfunction of $\mathcal L$, thus satisfying $(\mathcal L - \mu)^k u = 0$ for some $\mu \in \mathbb C$ and $k \in \mathbb N$. Applying the coordinates from Subsection 2.1, we can decompose $u = \sum_j u_j$ as in (2.1), where the sum is now finite. Since each $\mathcal H_j$ is invariant under $\mathcal L$ in these coordinates (see [4]), we have
$$0 = (\mathcal L - \mu)^k u = \sum_j (\mathcal L - \mu)^k u_j,$$
and each term here is in the corresponding $\mathcal H_j$, so all the terms are $0$. Hence each nonzero $u_j$ is a generalized eigenfunction of $\mathcal L$ with eigenvalue $\mu$. But this is compatible with Lemma 3.2 only if there is only one nonzero term in the decomposition of $u$. Thus $u \in \mathcal H_n$, where $n$ is the polynomial degree of $u$. Lemma 3.2 then implies that two generalized eigenfunctions with different eigenvalues are of different degrees and thus belong to different $\mathcal H_n$. The desired orthogonality now follows from that of the $\mathcal H_n$.
Proof of Lemma 3.2. Let $u$ be a generalized eigenfunction of $\mathcal L$ of polynomial degree $n$. We denote the corresponding eigenvalue by $\mu$. Decomposing $u$ as in (2.1), we see that this sum runs over $j \le n$ and that the term $u_n$ is nonzero and a generalized eigenfunction of $\mathcal L$ with eigenvalue $\mu$. For some $m$, the function $(\mathcal L - \mu)^m u_n$ will then be an eigenfunction with the same eigenvalue. This function is in $\mathcal H_n$ and thus a polynomial of degree $n$. As a result, we can assume that $u$ is actually an eigenfunction of $\mathcal L$, when proving the lemma.
We now choose coordinates in $\mathbb R^N$ that give a Jordan decomposition of $B$. This means that $B = \lambda I + R$, where $R = (R_{i,j})$ is a matrix with nonzero entries only in the first subdiagonal. More precisely, $R_{i,i-1} = 1$ for $i \in P$, where $P$ is a subset of $\{2, \dots, N\}$, and all other entries of $R$ vanish.
We write $\mathcal L = \mathcal S + \mathcal B$, where
$$\mathcal B f(x) = \langle Bx, \nabla f(x) \rangle$$
and $\mathcal S$ is the remaining, second-order part of $\mathcal L$. Notice that, when applied to polynomials, $\mathcal B$ preserves the degree whereas $\mathcal S$ decreases it by $2$. So if $v$ is the $n$th-degree part of $u$, we must have $\mathcal B v = \mu v$.
We let $\mathcal B$ act on a monomial $x^\alpha$, where $\alpha \in \mathbb N^N$ is a multiindex, getting
$$\mathcal B x^\alpha = \lambda |\alpha|\, x^\alpha + \sum_{i \in P} \alpha_i\, x^{\alpha^{(i)}},$$
where $\alpha^{(i)} = \alpha + e_{i-1} - e_i$ for $i \in P$. Here $\{e_j\}_{j=1}^N$ denotes the standard basis in $\mathbb R^N$. Thus the restriction of $\mathcal B$ to the space of homogeneous polynomials of degree $n$ is given as $\lambda n I + R$, where $R$ here denotes the linear operator that maps $x^\alpha$ to $\sum_{i \in P} \alpha_i x^{\alpha^{(i)}}$.
We claim that the only eigenvalue of R is 0. If so, the only eigenvalue of the restriction of B mentioned above is λn, which would prove the lemma since Bv = µv.
In order to prove this claim, we define for any $\alpha \in \mathbb N^N$ with $|\alpha| = n$ the weight
$$V(\alpha) = \sum_{i=1}^N i\, \alpha_i,$$
and observe that $V(\alpha^{(i)}) = V(\alpha) - 1$ for $i \in P$. We select a basis in the linear space of all homogeneous polynomials of degree $n$ consisting of all monomials $x^\alpha$ with $|\alpha| = n$, enumerated in such a way that $V$ is nondecreasing. The definition of $R$ now shows that its matrix with respect to this basis is upper triangular with zeros on the diagonal. The claim follows, and so does the lemma.
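The triangularity argument can be illustrated numerically (a sketch, not part of the proof). Assuming, for concreteness, a single Jordan block, the code below builds the matrix of the operator $R \colon x^\alpha \mapsto \sum_{i \in P} \alpha_i x^{\alpha^{(i)}}$ on the monomials of degree $n$, ordered by a nondecreasing weight, and verifies that it is strictly upper triangular and hence nilpotent; indices are 0-based here.

```python
import numpy as np
from itertools import combinations_with_replacement

N, n = 3, 4                 # dimension and homogeneity degree (arbitrary choice)
P = range(1, N)             # 0-based version of P = {2, ..., N}: one Jordan block

def multiindices(N, n):
    """All α ∈ N^N with |α| = n, sorted by the weight V(α) = Σ (i+1) α_i."""
    out = []
    for comb in combinations_with_replacement(range(N), n):
        a = [0] * N
        for i in comb:
            a[i] += 1
        out.append(tuple(a))
    return sorted(out, key=lambda a: sum((i + 1) * a[i] for i in range(N)))

alphas = multiindices(N, n)
pos = {a: j for j, a in enumerate(alphas)}

# Matrix of R: x^α ↦ Σ_{i∈P} α_i x^{α^(i)}, with α^(i) = α + e_{i-1} - e_i.
M = np.zeros((len(alphas), len(alphas)))
for j, a in enumerate(alphas):
    for i in P:
        if a[i] > 0:
            b = list(a)
            b[i] -= 1
            b[i - 1] += 1
            M[pos[tuple(b)], j] += a[i]

print(np.allclose(np.tril(M), 0.0))                              # strictly upper triangular
print(np.allclose(np.linalg.matrix_power(M, len(alphas)), 0.0))  # hence nilpotent
```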

4. B has two distinct eigenvalues: a first example
The following example shows that the generalized eigenspaces of the Ornstein-Uhlenbeck operator may be orthogonal even in the case when B has more than one eigenvalue.
In $\mathbb R^2$ we take $Q = I_2$ and the drift matrix

(5.1 below will be analogous) $\qquad$ (4.1) $\qquad B = \begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix},$

whose eigenvalues are $-1 \pm i$.

Proposition. Let $Q = I_2$ and let $B$ be given by (4.1). Then the generalized eigenspaces of $\mathcal L$ are mutually orthogonal in $L^2(\gamma_\infty)$.

Proof. One finds that
$$e^{sB} = e^{-s} \begin{pmatrix} \cos s & \sin s \\ -\sin s & \cos s \end{pmatrix}$$
and $e^{sB} e^{sB^*} = e^{-2s} I_2$, so that $Q_\infty = \int_0^\infty e^{-2s}\, ds \; I_2 = \frac 12 I_2$. The invariant measure is therefore

(4.2) $\qquad d\gamma_\infty(x) = \pi^{-1} e^{-|x|^2}\, dx.$

Since $Q = I_2$ and $Q_\infty$ is diagonal, we are in the situation treated in [4]; see also Subsection 2.1. But [4] defines the analog of our $\mathcal L$ (denoted $\mathcal A$) without the coefficient $1/2$ in front of the second-order term $\Delta$. We will adapt to the definitions and notation of [4]. Therefore, we replace $\mathcal L$ by $\mathcal A = 2\mathcal L = \Delta + \langle \tilde B x, \nabla \rangle$, where $\tilde B = 2B$. Obviously, $\mathcal A$ and $\mathcal L$ have the same (generalized) eigenfunctions.
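The closed form of $e^{sB}$ for this rotation-type drift can be checked numerically (a sanity check, not part of the proof):

```python
import numpy as np

B = np.array([[-1.0, 1.0], [-1.0, -1.0]])   # the drift (4.1)

def expm(M):
    """Matrix exponential via (complex) eigendecomposition."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

for s in (0.3, 1.0, 2.7):
    R = np.array([[np.cos(s), np.sin(s)], [-np.sin(s), np.cos(s)]])
    print(np.allclose(expm(s * B), np.exp(-s) * R))   # e^{sB} = e^{-s} · rotation
    print(np.allclose(expm(s * B) @ expm(s * B).T,
                      np.exp(-2 * s) * np.eye(2)))    # e^{sB} e^{sB*} = e^{-2s} I_2
```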
From [4] we will also need the diagonal matrix denoted $D_\lambda$, which in the case considered is $D_\lambda = \frac 12 I_2$. The invariant measure is the same for the semigroups $e^{t\mathcal A}$ and $e^{t\mathcal L}$ by uniqueness, and given by (4.2).
For $k = (k_1, k_2) \in \mathbb N^2$ we introduce the two-dimensional Hermite polynomials
$$H_k(x) = H_{k_1}(x_1)\, H_{k_2}(x_2),$$
where $H_{k_i}$ is the classical Hermite polynomial. Here we should point out that [4] uses dilated Hermite polynomials denoted $H_{\lambda,k}$. But in our case there are no dilations, since the dilation factors are $\sqrt{2\lambda_i} = 1$, $i = 1, 2$. The $H_k$ are mutually orthogonal in $L^2(\gamma_\infty)$, and the normalized polynomials $\tilde H_k = H_k / \|H_k\|_{L^2(\gamma_\infty)}$ are orthonormal. The space $\mathcal H_n$ is generated by the Hermite polynomials $H_k$ of degree $k_1 + k_2 = n$, for $n \in \mathbb N$, and $\mathcal H_n$ is invariant under $\mathcal A$. As in [4, Section 4], we split the partial differential operator $\mathcal A$ as $\mathcal A = \mathcal A_1 + \mathscr L$ (the latter not to be confused with the Ornstein-Uhlenbeck operator $\mathcal L$), with
$$\mathcal A_1 = \Delta - \langle D_\lambda^{-1} x, \nabla \rangle \qquad \text{and} \qquad \mathscr L = \langle Cx, \nabla \rangle,$$
where $x = (x_1, x_2)$ and $C = (c_{i,j})$ is the skew-symmetric matrix
$$C = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix}.$$
Then each polynomial in $\mathcal H_n$ is an eigenfunction of $\mathcal A_1$ with eigenvalue $-2n$. This follows from the differential equation satisfied by the Hermite polynomials; see [4, formula (3.5)].
The effect of $\mathscr L$ on Hermite polynomials can be read off from a formula on page 711 of [4], namely that following the words "we find that". We adapt this formula to the normalized polynomials $\tilde H_{k_1,k_2}$ with $k_1 + k_2 = n$, and choose $\ell_i = k_i \pm 1$. Here the scalar product written $\langle \cdot, \cdot \rangle$ is taken in $L^2(\gamma_\infty)$. We also use the trivial observation that $c_{1,2} \lambda_2 = 1$ and $c_{2,1} \lambda_1 = -1$.
The result is
$$\mathscr L\, \tilde H_{k_1,k_2} = 2\sqrt{k_1(k_2+1)}\; \tilde H_{k_1-1,\,k_2+1} - 2\sqrt{(k_1+1)k_2}\; \tilde H_{k_1+1,\,k_2-1},$$
where terms involving a negative index are understood to vanish. This describes the restriction of $\mathscr L$ to $\mathcal H_n$ completely. In $\mathcal H_n$ we use the orthonormal basis $\tilde H_{n-\kappa,\kappa}$, $\kappa = 0, 1, \dots, n$. A consequence of the preceding formula is that the matrix $\mathscr L^{(n)} = (\mathscr L^{(n)}_{i,j})$ of the restriction of $\mathscr L$ with respect to this basis, with rows and columns indexed by $\kappa = 0, \dots, n$, is given by
$$\mathscr L^{(n)}_{\kappa+1,\,\kappa} = 2\sqrt{(n-\kappa)(\kappa+1)} = -\mathscr L^{(n)}_{\kappa,\,\kappa+1}, \qquad \kappa = 0, \dots, n-1,$$
all other entries of the matrix being $0$. Thus $\mathscr L^{(n)}$ is skew-symmetric. The restriction to $\mathcal H_n$ of the operator $\mathcal A = \mathcal A_1 + \mathscr L$ has matrix $-2n I_{n+1} + \mathscr L^{(n)}$, and it follows that $\mathcal A$ is a normal operator in $\mathcal H_n$. The spectral theorem for normal operators implies that $\mathcal A$ can be diagonalized in $\mathcal H_n$ by means of a unitary change of coordinates. Since the spaces $\mathcal H_n$, $n \in \mathbb N$, are mutually orthogonal, this proves the proposition.
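As a numerical cross-check (not part of the original), one can assemble the matrix of the first-order operator on $\mathcal H_n$ by Gauss-Hermite quadrature and verify the skew-symmetry and normality claims. The sketch assumes $C = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix}$, i.e. the operator $f \mapsto 2x_2 \partial_1 f - 2x_1 \partial_2 f$ (consistent with $c_{1,2}\lambda_2 = 1$, $c_{2,1}\lambda_1 = -1$), and uses the physicists' Hermite polynomials with weight $e^{-x^2}$.

```python
import numpy as np
from numpy.polynomial.hermite import hermval, hermder, hermgauss
from math import factorial, sqrt, pi

nodes, weights = hermgauss(30)   # Gauss-Hermite nodes for the weight e^{-x²}

def h(k, x):                     # H_k(x), physicists' convention
    c = np.zeros(k + 1); c[k] = 1.0
    return hermval(x, c)

def dh(k, x):                    # H_k'(x)
    c = np.zeros(k + 1); c[k] = 1.0
    return hermval(x, hermder(c))

def inner(f, g):
    """<f, g> in L²(γ_∞), dγ_∞ = π⁻¹ e^{-|x|²} dx, by tensor quadrature."""
    X1, X2 = np.meshgrid(nodes, nodes, indexing="ij")
    W = np.outer(weights, weights) / pi
    return float(np.sum(W * f(X1, X2) * g(X1, X2)))

def norm_H(k1, k2):              # norm of H_{k1} H_{k2} in L²(γ_∞)
    return sqrt(2.0 ** (k1 + k2) * factorial(k1) * factorial(k2))

n = 4
basis = [(n - kap, kap) for kap in range(n + 1)]

# Matrix of f ↦ 2 x2 ∂1 f - 2 x1 ∂2 f on H_n in the orthonormal Hermite basis.
Ln = np.zeros((n + 1, n + 1))
for j, (k1, k2) in enumerate(basis):
    Lf = lambda x1, x2, k1=k1, k2=k2: (
        2 * x2 * dh(k1, x1) * h(k2, x2) - 2 * x1 * h(k1, x1) * dh(k2, x2)
    ) / norm_H(k1, k2)
    for i, (l1, l2) in enumerate(basis):
        g = lambda x1, x2, l1=l1, l2=l2: h(l1, x1) * h(l2, x2) / norm_H(l1, l2)
        Ln[i, j] = inner(Lf, g)

A = -2 * n * np.eye(n + 1) + Ln                  # restriction of A to H_n
print(np.allclose(Ln, -Ln.T, atol=1e-8))         # skew-symmetric
print(np.allclose(A @ A.T, A.T @ A, atol=1e-8))  # A is normal on H_n
```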

5. B has two distinct eigenvalues: a second example
In this section we exhibit a class of drift matrices B with two different eigenvalues (which, in contrast with those in the example in Section 4, are real), but such that the generalized eigenspaces associated to the corresponding Ornstein-Uhlenbeck operator L are not orthogonal.
In $\mathbb R^2$ we consider $Q = I_2$ and

(5.1) $\qquad B = \begin{pmatrix} -(a-d) & 0 \\ c & -(a+d) \end{pmatrix},$

where $a > d > 0$ and $c \neq 0$, and so
$$Q_\infty = \begin{pmatrix} \dfrac{1}{2(a-d)} & \dfrac{c}{4a(a-d)} \\[2mm] \dfrac{c}{4a(a-d)} & \dfrac{c^2 + 2a(a-d)}{4a(a-d)(a+d)} \end{pmatrix}.$$
Writing
$$z_1 = \sqrt{a-d}\; x_1 \qquad \text{and} \qquad z_2 = \sqrt{\frac{a+d}{c^2+4a^2}}\, \big( 2a x_2 - c x_1 \big)$$
and recalling that $\gamma_\infty$ is a probability measure, we see that
$$d\gamma_\infty = \pi^{-1}\, e^{-z_1^2 - z_2^2}\, dz_1\, dz_2.$$
To find some eigenfunctions of $\mathcal L$, we consider polynomials in $x_1, x_2$ of degree $2$. One finds that
$$v_1 = x_1^2 - \frac{1}{2(a-d)}, \qquad v_2 = x_1 x_2 - \frac{c}{2d}\, x_1^2 + \frac{c}{4ad}, \qquad v_3 = x_2^2 - \frac{c}{d}\, x_1 x_2 + \frac{c^2}{4d^2}\, x_1^2 - \frac{c^2 + 4d^2}{8d^2(a+d)}$$
are eigenfunctions, with eigenvalues $-2(a-d)$, $-2a$ and $-2(a+d)$, respectively. Any two of these polynomials turn out not to be orthogonal with respect to the invariant measure, as follows by straightforward computations. We sketch one example.
One simply multiplies $v_1$ and $v_3$ and rewrites the product in terms of $z_1$ and $z_2$. Doing so, one can neglect all terms of odd order in $z_1$ or $z_2$, since these integrate to $0$ with respect to $\gamma_\infty$; only the even monomials $1$, $z_1^2$, $z_2^2$, $z_1^4$ and $z_1^2 z_2^2$ contribute. Integrating and simplifying, we get $\langle v_1, v_3 \rangle = c^2/(8a^2d^2) \neq 0$, so $v_1$ and $v_3$ are not orthogonal.
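These computations can be double-checked by exact coefficient bookkeeping together with Isserlis' (Wick's) theorem: for $x \sim N(0,\Sigma)$ and quadratics $v = x^{T}Ax + \delta$, $w = x^{T}Cx + \varepsilon$ one has $\mathbb E[v] = \operatorname{tr}(A\Sigma) + \delta$ and $\mathbb E[vw] = \mathbb E[v]\,\mathbb E[w] + 2\operatorname{tr}(A\Sigma C\Sigma)$. The sketch below uses hypothetical parameter values, a lower-triangular drift with diagonal $-(a-d)$, $-(a+d)$ and subdiagonal entry $c$, and one admissible normalization of $v_1, v_2, v_3$ and $Q_\infty$; all of these are assumptions of the sketch, derivable from this setting but not quoted verbatim from the original.

```python
import numpy as np

# Hypothetical parameters with a > d > 0, c ≠ 0.
a, d, c = 2.0, 0.5, 1.0

def L_quad(v):
    """Action of L = ½Δ + <Bx,∇>, B = [[-(a-d), 0], [c, -(a+d)]], on the
    quadratic α x1² + β x1x2 + γ x2² + δ encoded as v = (α, β, γ, δ)."""
    al, be, ga, de = v
    return np.array([
        -2 * (a - d) * al + c * be,   # x1² coefficient
        -2 * a * be + 2 * c * ga,     # x1 x2 coefficient
        -2 * (a + d) * ga,            # x2² coefficient
        al + ga,                      # constant term, from ½Δ
    ])

# One admissible normalization of the three quadratic eigenfunctions.
v1 = np.array([1.0, 0.0, 0.0, -1 / (2 * (a - d))])
v2 = np.array([-c / (2 * d), 1.0, 0.0, c / (4 * a * d)])
v3 = np.array([c**2 / (4 * d**2), -c / d, 1.0,
               -(c**2 + 4 * d**2) / (8 * d**2 * (a + d))])

for v, mu in [(v1, -2 * (a - d)), (v2, -2 * a), (v3, -2 * (a + d))]:
    print(np.allclose(L_quad(v), mu * v))   # each eigenpair checks out

# Covariance of the invariant measure for this drift.
S = np.array([[1 / (2 * (a - d)), c / (4 * a * (a - d))],
              [c / (4 * a * (a - d)),
               (c**2 + 2 * a * (a - d)) / (4 * a * (a - d) * (a + d))]])

def sym(v):                       # quadratic part of v as a symmetric matrix
    al, be, ga, _ = v
    return np.array([[al, be / 2], [be / 2, ga]])

def ip(v, w):                     # <v, w> in L²(γ_∞), via Isserlis' theorem
    Ev = np.trace(sym(v) @ S) + v[3]
    Ew = np.trace(sym(w) @ S) + w[3]
    return Ev * Ew + 2 * np.trace(sym(v) @ S @ sym(w) @ S)

print(ip(v1, v3))                 # nonzero: v1 and v3 are not orthogonal
```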
Remark 5.1. Let now $d = a/2$ in this example. Then the eigenvalues of $B$ are $\lambda_1 = -a/2$ and $\lambda_2 = -3a/2$, so that $4\lambda_1 = \lambda_1 + \lambda_2 = -2a$. Accordingly, there is a fourth-degree polynomial which is an eigenfunction of $\mathcal L$ with eigenvalue $-2a$, the eigenvalue of the second-degree eigenfunction $v_2$. Thus eigenfunctions of different polynomial degrees can have the same eigenvalue. This shows that for an eigenfunction $u$, the sum in (2.1) may consist of more than one term, and a (generalized) eigenspace need not be contained in one $\mathcal H_n$.
The eigenvalues of the matrix $B$ defined in (5.1) are $-a \pm d$, and it is easily seen that the corresponding eigenspaces are not orthogonal in $\mathbb R^2$. This turns out to be related to the non-orthogonality of the eigenspaces of $\mathcal L$, at least in two dimensions, in the following way.

Proposition. Let $N = 2$, let $Q = I_2$, and let $B$ have two distinct real (negative) eigenvalues. Then the generalized eigenspaces of $\mathcal L$ are mutually orthogonal in $L^2(\gamma_\infty)$ if and only if the eigenspaces of $B$ are orthogonal in $\mathbb R^2$.

Proof. To begin with, we consider a coordinate change $\tilde x = Hx$, where $H$ is an orthogonal matrix. Simple computations show that the operator $\mathcal L^{Q,B}$ is transformed to $\mathcal L^{\tilde Q, \tilde B}$ in the new coordinates, with $\tilde Q = HQH^*$ and $\tilde B = HBH^*$; cf. [9, p. 474]. In our case, $\tilde Q = Q = I$. The eigenvalues of $B$ and the angle between its eigenvectors will not change.
To prove the proposition, assume first that the (real) eigenvectors of $B$ are orthogonal in $\mathbb R^2$. Then $B$ is symmetric, since it can be diagonalized by means of an orthogonal change of coordinates as just described. This implies that $\mathcal L$ is symmetric ([7, Proposition 9.3.10]), so that the orthogonality of its eigenspaces is trivial.
Next, we assume that the eigenvectors of $B$ are not orthogonal in $\mathbb R^2$. By Schur's decomposition theorem (see [5, Theorem 2.3.1]) there exists an orthogonal change of coordinates which makes $B$ lower triangular, though not diagonal. We are thus in the situation described in (5.1). As we have seen, some eigenspaces of $\mathcal L$ are then not orthogonal with respect to the invariant measure.