Christoffel–Darboux Kernels in Several Real Variables

The Christoffel–Darboux kernels for orthogonal polynomials in several real variables are investigated in the context of the three-term relation, reformulated for this purpose. As illustrative examples of orthogonality we discuss two simple algebraic cases: the unit circle and the Bernoulli lemniscate.

Considerable attention is paid to the Christoffel–Darboux formula itself, which surprisingly reduces an expression involving several initial orthogonal polynomials to a formula involving only the two last ones (except that for several variables things get more complicated).
A nice by-product we include is a very concise version of a theorem stated in [2], which appears here as Theorem 2. The list of references at the end could easily be made at least twice as long; if needed, the interested reader is encouraged to consult the references in [2].

Introduction
Let $\mathbb N = \{0, 1, 2, \ldots\}$. We make use of the Kronecker delta $\delta_{ab}$, which is equal to $1$ if $a = b$ and $0$ otherwise, where $a$ and $b$ are any objects. All measures on $\mathbb R^d$ are assumed to be Borel measures with all moments finite, i.e. $\int_{\mathbb R^d} \|x\|^n \,\mathrm d\mu(x) < \infty$ for all $n \geq 0$. Let $\mathcal P_d$ stand for the space of all complex polynomials in $d$ (real) variables, and $\mathcal P_d^k$ for the space of all complex polynomials in $d$ variables of degree at most $k$. We say that a measure $\mu$ on $\mathbb R^d$ orthonormalizes a sequence $\{p_k\}_{k=0}^\infty \subset \mathcal P_d$ if $\int_{\mathbb R^d} p_k \bar p_l \,\mathrm d\mu = \delta_{kl}$ for all $k, l \in \mathbb N$, where $\bar p$ is defined by $\bar p(x) = \overline{p(x)}$, $x \in \mathbb R^d$, $p \in \mathcal P_d$.
Let us recall the Favard theorem on orthogonal polynomials in one variable. The following version of this theorem is a direct consequence of Theorem 53 in [2] (cf. [1]).
Theorem 1. If $\{p_k\}_{k=0}^\infty$ is a sequence of real polynomials in one variable such that $p_0 = 1$ and $\deg p_k = k$ for all $k \in \mathbb N$, then the following two conditions are equivalent: (i) there exists a measure $\mu$ on $\mathbb R$ which orthonormalizes $\{p_k\}_{k=0}^\infty$; (ii) for every $k \in \mathbb N$ there exist $a_k \in \mathbb R$ and $b_k \in \mathbb R$ such that
$$X p_k = a_k p_{k+1} + b_k p_k + a_{k-1} p_{k-1},$$
where $a_{-1} \stackrel{\text{def}}{=} 1$ and $p_{-1} \stackrel{\text{def}}{=} 0$.

Condition (ii) is customarily called the three-term recurrence relation. When considering a multi-variable version of this theorem, successful attempts were undertaken by Kowalski [3] and then further developed by Xu [13], yet they formulated their versions only for full polynomial bases of $\mathcal P_d$. This clearly excludes some interesting measures, e.g. the Lebesgue measure on the unit circle in $\mathbb R^2$, as in this case every polynomial is orthogonal to $x^2 + y^2 - 1$. It was discovered in [2] that this deficiency can be overcome by introducing equality modulo an ideal instead of dealing with ordinary equality in the three-term recurrence relation. The aforesaid ideal in the case of the circle would be the set of all polynomials vanishing on the circle.
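For a concrete illustration of (ii) $\Rightarrow$ (i), one can generate a polynomial family from prescribed recurrence coefficients and verify orthonormality numerically. The sketch below uses the coefficients of the orthonormal Chebyshev polynomials, an illustrative choice of ours: $b_k = 0$, $a_0 = 1/\sqrt 2$, $a_k = 1/2$ for $k \geq 1$, with the measure $\mathrm d\mu = \mathrm dx/(\pi\sqrt{1-x^2})$ on $[-1,1]$; numpy is assumed.

```python
import numpy as np

# Recurrence coefficients of the orthonormal Chebyshev polynomials
# (weight dmu = dx / (pi*sqrt(1-x^2)) on [-1,1]):
#   X p_k = a_k p_{k+1} + b_k p_k + a_{k-1} p_{k-1},  with b_k = 0,
#   a_0 = 1/sqrt(2) and a_k = 1/2 for k >= 1 (a_{-1} = 1, p_{-1} = 0).
N = 6
a = [1 / np.sqrt(2)] + [0.5] * N
b = [0.0] * (N + 1)

def p(k, x):
    """Evaluate p_k at x via the three-term recurrence."""
    pm1, pk = np.zeros_like(x), np.ones_like(x)   # p_{-1} = 0, p_0 = 1
    for i in range(k):
        prev = a[i - 1] if i > 0 else 1.0          # a_{-1} = 1 (times p_{-1} = 0)
        pm1, pk = pk, (x * pk - b[i] * pk - prev * pm1) / a[i]
    return pk

# Gauss-Chebyshev quadrature: nodes cos((2i-1)pi/(2n)), equal weights 1/n.
n = 200
nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
gram = np.array([[np.mean(p(k, nodes) * p(l, nodes)) for l in range(N)]
                 for k in range(N)])
assert np.allclose(gram, np.eye(N), atol=1e-10)   # the measure orthonormalizes {p_k}
```

The quadrature is exact for polynomials of degree below $2n$, so the Gram matrix is the identity up to rounding, which is precisely condition (i) for this family.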
We now proceed to introduce the main notions required to state the Favard theorem in several variables. Let $V \subset \mathcal P_d$ be a proper ideal and let $\Pi_V : \mathcal P_d \to \mathcal P_d/V$ be the canonical quotient map. We additionally assume that $V$ is a $*$-ideal, i.e. $\bar p \in V$ whenever $p \in V$. We will say that a set of polynomials is linearly independent over $V$ (or $V$-linearly independent) if $\Pi_V$ maps this set injectively onto a linearly independent subset of $\mathcal P_d/V$. For $k \geq 1$ let $d_V(k) \stackrel{\text{def}}{=} \dim \Pi_V(\mathcal P_d^k) - \dim \Pi_V(\mathcal P_d^{k-1})$ and let $d_V(0) \stackrel{\text{def}}{=} 1$. In most cases $d_V(k) \geq 1$ for all $k \geq 0$, in which case we set $\kappa_V = \infty$; otherwise $\kappa_V = \max\{k \geq 1 : d_V(k) \neq 0\}$. A rigid $V$-basis of $\mathcal P_d$ is, roughly speaking, a sequence $\{Q_k\}_{k=0}^{\kappa_V}$ of column polynomials such that $Q_k$ has $d_V(k)$ entries, and the entries of all the $Q_k$'s are $V$-linearly independent with $\Pi_V$-images spanning $\mathcal P_d/V$ (see [2] for the precise definition); by formula (8) in [2] all polynomials in $Q_k$ may be taken of degree $k$. By [2] such bases always exist and, moreover, they can be built from the monomials. If $p, q \in \mathcal P_d$, then the notation $p \overset{V}{=} q$ means that $p + V = q + V$ (or, equivalently, $p - q \in V$). If $P$ and $Q$ are column polynomials, then we write $P \overset{V}{=} Q$ if the columns are of the same size and all the corresponding entries are equal modulo $V$. If $\{Q_k\}_{k=0}^{\kappa_V}$ is a rigid $V$-basis of $\mathcal P_d$, $p \in \mathcal P_d$ and $\deg p = k \leq \kappa_V$, then there are unique scalar row vectors $C_0, \ldots, C_k$ such that $p \overset{V}{=} \sum_{i=0}^k C_i Q_i$. Given a linear functional $L : \mathcal P_d \to \mathbb C$, we apply it to matrices of polynomials entrywise: $L([p_{k,l}]) \stackrel{\text{def}}{=} [L(p_{k,l})]$, where $p_{k,l} \in \mathcal P_d$. This way we can make sense of $L(P Q^*)$, where $P$ and $Q$ are column polynomials (not necessarily of the same size). Finally, we say that $L$ orthonormalizes a rigid $V$-basis $\{Q_k\}_{k=0}^{\kappa_V}$ if $L(Q_k Q_l^*) = \delta_{kl} I$ for all $k, l$, where $I$ stands for the identity matrix of appropriate size (which in these settings is equal to $d_V(k)$). A linear functional $L : \mathcal P_d \to \mathbb C$ is called positive definite if $L(p \bar p) \geq 0$ for all $p \in \mathcal P_d$.
We are now in a position to state our generalization of the Favard theorem, which is in the flavour of [2].
Theorem 2. Let $V \subset \mathcal P_d$ be a proper $*$-ideal and let $\{Q_k\}_{k=0}^{\kappa_V}$ be a rigid $V$-basis of real polynomials with $Q_0 = 1$. Then the following conditions are equivalent:

(A) there exists a positive definite $L : \mathcal P_d \to \mathbb C$ which orthonormalizes $\{Q_k\}_{k=0}^{\kappa_V}$ and such that $V \subset \ker L$;

(B) for every $j = 1, \ldots, d$ there exist systems of scalar matrices $\{A_{k,j}\}_{k=0}^\infty$ and $\{B_{k,j}\}_{k=0}^\infty$ of appropriate sizes such that
$$x_j Q_k \overset{V}{=} A_{k,j} Q_{k+1} + B_{k,j} Q_k + A_{k-1,j}^\intercal Q_{k-1} \tag{2}$$
for all $k \in \mathbb N$, $k \leq \kappa_V$, and $j = 1, \ldots, d$, where $A_{-1,j} = 1$ and $Q_{-1} = 0$.

Proof. This is a specialized version of Theorem 36 in [2]: conditions (A) and (B) therein are simplified by our assumption that $\{Q_k\}_{k=0}^{\kappa_V}$ is a rigid $V$-basis of $\mathcal P_d$; in particular, the lengthy condition (B) therein reduces to (B-i) only.
Remark 3. As follows from Theorem 36 in [2], the matrix obtained by stacking $A_{k,1}, \ldots, A_{k,d}$ must necessarily be injective for all $k \geq 0$. Even more, all the matrices $A_{k,j}$ and $B_{k,j}$ must be real. To see this, one can take adjoints² of both sides in (2) and then transpose them to deduce that (2) holds with $(A_{k,j}^*)^\intercal$ and $(B_{k,j}^*)^\intercal$ in place of $A_{k,j}$ and $B_{k,j}$, respectively, which employs the assumption that all the $Q_k$ are real column polynomials. Since $\{Q_k\}_{k=0}^{\kappa_V}$ is a rigid $V$-basis, the matrices in (2) are unique, thus $(A_{k,j}^*)^\intercal = A_{k,j}$ and $(B_{k,j}^*)^\intercal = B_{k,j}$, which proves our claim. In particular, (2) can be written with real matrices only. Actually, all the matrices $B_{k,j}$ are symmetric. Indeed, if we multiply both sides of (2) by $Q_k^\intercal$ and apply the functional $L$ appearing in Theorem 2, then, after omitting the terms which vanish by orthonormality, we get $B_{k,j} = L(x_j Q_k Q_k^\intercal)$. It now suffices to justify the symmetry of the left-hand side: $L(x_j Q_k Q_k^\intercal)^\intercal = L\big(x_j (Q_k Q_k^\intercal)^\intercal\big) = L(x_j Q_k Q_k^\intercal)$. Note that this idea provides another way of showing that the matrices $A_{k,j}$ and $B_{k,j}$ are real.
As is easily seen, the case when $\kappa_V < \infty$ (in other terms, $\dim \mathcal P_d/V < \infty$) corresponds to a finite number of linearly independent polynomials over $V$; thus in this case we can have only a finite number of orthogonal polynomials whenever $V \subset \ker L$. Indeed, one can readily verify that under the latter assumption any set $C \subset \mathcal P_d$ such that $L(p \bar q) = \delta_{pq}$, $p, q \in C$, must be $V$-linearly independent, thus its cardinality is no greater than $\dim \mathcal P_d/V$.
In the single-variable case Theorem 2 leads to the Favard theorem (Theorem 1), due to the well-known fact that every positive definite linear functional on $\mathcal P_1$ can be represented as an integral with respect to a nonnegative Borel measure on $\mathbb R$ (see the discussion concerning statement (50) in [2]; see also [7] and [1]). Disappointingly, albeit challengingly so, positive definiteness does not guarantee the existence of such a representing measure in the several-variable case (again, see the comment on (50) in [2]). A substantial part of [2] is devoted to conditions ensuring the existence of representing measures in the multi-variable case.

Christoffel–Darboux kernel in the case of real variables
Let $V \subset \mathcal P_d$ be a proper $*$-ideal and let $\{Q_k\}_{k=0}^{\kappa_V}$ be a rigid $V$-basis of $\mathcal P_d$. For every $n \in \mathbb N$, $n \leq \kappa_V$, we define the associated Christoffel–Darboux kernel by the formula
$$K_n(x, y) = \sum_{k=0}^n Q_k(x)^\intercal\, \overline{Q_k(y)}, \qquad x, y \in \mathbb R^d. \tag{3}$$
We use the shorthand notation $\bigcup_{k=0}^n Q_k$ for the set of all entries of all the column polynomials $Q_k$, $k = 0, \ldots, n$. It is worth noticing that each $K_n$ enjoys the reproducing property in the following sense (cf. formula (1.2.37) in [8]).
²The adjoint of a complex matrix $[a_{kj}]_{k,j=1}^n$ is given by $[\overline{a_{jk}}]_{k,j=1}^n$.

Proposition 4. If $\{Q_k\}_{k=0}^{\kappa_V}$ is a rigid $V$-basis of $\mathcal P_d$ which is orthonormal with respect to an inner product $\langle \cdot, \cdot \rangle$ in $\mathcal P_d$, and $X_n$ denotes the linear span of $\bigcup_{k=0}^n Q_k$, then $K_n$ is a reproducing kernel for $X_n$, which means that for every $p \in X_n$ we have
$$p(y) = \langle p, K_n(\cdot, y) \rangle, \qquad y \in \mathbb R^d.$$
Proof. Let us write $\bigcup_{k=0}^n Q_k = \{q_0, \ldots, q_m\}$. The polynomial $p \in X_n$ may be written uniquely as $p = \sum_{k=0}^m a_k q_k$ with some scalars $a_k$. Hence
$$\langle p, K_n(\cdot, y) \rangle = \sum_{l=0}^m q_l(y)\, \langle p, q_l \rangle = \sum_{l=0}^m a_l q_l(y) = p(y),$$
which is the desired conclusion.
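The reproducing property is easy to test numerically in the one-variable case $V = \{0\}$. The sketch below uses the orthonormal Chebyshev family (an illustrative choice of ours) and Gauss–Chebyshev quadrature for the inner product; numpy is assumed.

```python
import numpy as np

# One-variable illustration of the reproducing property (V = {0}, d = 1):
# with the orthonormal Chebyshev family p_k, the kernel
# K_n(x, y) = sum_{k<=n} p_k(x) p_k(y) reproduces every polynomial of
# degree <= n under <f, g> = int f g dmu, dmu = dx / (pi*sqrt(1-x^2)).
def cheb_on(k, x):
    # orthonormal Chebyshev: p_0 = 1, p_k = sqrt(2)*cos(k*arccos x) for k >= 1
    return np.ones_like(x) if k == 0 else np.sqrt(2) * np.cos(k * np.arccos(x))

n, m = 4, 400
nodes = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))  # Gauss-Chebyshev

def K(x, y):
    return sum(cheb_on(k, x) * cheb_on(k, np.asarray(y)) for k in range(n + 1))

q = lambda x: 3 * x**4 - x + 0.5          # any polynomial of degree <= n
for y in (-0.7, 0.0, 0.3):
    inner = np.mean(q(nodes) * K(nodes, y))   # <q, K_n(., y)> by quadrature
    assert abs(inner - q(y)) < 1e-10          # reproducing property
```

Since the quadrature is exact for the polynomial integrands involved, the identity $\langle q, K_n(\cdot, y)\rangle = q(y)$ holds up to rounding.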
If $H_1$, $H_2$ are linear subspaces of $\mathcal P_d$, then we identify the algebraic tensor product $H_1 \otimes H_2$ with the linear space spanned by all polynomials of the form $(x, y) \mapsto p(x) q(y)$ with $p \in H_1$ and $q \in H_2$. We may now formulate a multi-variable version of the Christoffel–Darboux formula.
Theorem 5. Let $V \subset \mathcal P_d$ be a proper $*$-ideal and let $\{Q_k\}_{k=0}^{\kappa_V}$ be a rigid $V$-basis of real polynomials with $Q_0 = 1$. Then, for all $n \in \mathbb N$, $n \leq \kappa_V$, and $j = 1, \ldots, d$, the polynomial $(x_j - y_j) K_n(x, y)$ is congruent, modulo the ideal $V_2$ of $\mathcal P_{2d}$ given by $V_2 = V \otimes \mathcal P_d + \mathcal P_d \otimes V$ (under the identification $\mathcal P_d \otimes \mathcal P_d \simeq \mathcal P_{2d}$), to the expression
$$\big(A_{n,j} Q_{n+1}(x)\big)^\intercal Q_n(y) - Q_n(x)^\intercal A_{n,j} Q_{n+1}(y). \tag{4}$$
Proof. Fix $j \in \{1, \ldots, d\}$ and $n \in \mathbb N$, $n \leq \kappa_V$. Let the expression (4) be denoted by $\tilde K_n(x, y)$. Applying (B), supported by Remark 3, for a fixed $s \in \{0, \ldots, n\}$ we get
$$\tilde K_s(x, y) - \tilde K_{s-1}(x, y) = (x_j - y_j)\, Q_s(x)^\intercal Q_s(y) + r_s(x)^\intercal w_s(y) + \tilde w_s(x)^\intercal \tilde r_s(y),$$
where $r_s$ and $\tilde r_s$ are column polynomials equal to $0$ modulo $V$; remembering that the $B_{k,j}$ are real and symmetric (see Remark 3), the terms involving them cancel out. Technically, this procedure works only for $s \geq 1$, but it is easy to see that the term $\tilde K_{s-1}$ for $s = 0$ is equal to zero, since $Q_{-1} = 0$. Summing over $s = 0, \ldots, n$ we complete the proof.
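In the classical setting ($d = 1$, $V = \{0\}$) the telescoping argument above yields the ordinary Christoffel–Darboux identity, which can be tested numerically. The sketch below does so for the orthonormal Chebyshev family (our choice of example), for which $a_n = 1/2$ when $n \geq 1$; numpy is assumed.

```python
import numpy as np

# Classical Christoffel-Darboux identity in one variable (the V = {0} case):
#   (x - y) K_n(x, y) = a_n [p_{n+1}(x) p_n(y) - p_n(x) p_{n+1}(y)],
# checked for the orthonormal Chebyshev family, where a_n = 1/2 for n >= 1.
p = lambda k, x: 1.0 if k == 0 else np.sqrt(2) * np.cos(k * np.arccos(x))

n, a_n = 5, 0.5
rng = np.random.default_rng(0)
for x, y in rng.uniform(-1, 1, size=(20, 2)):
    K = sum(p(k, x) * p(k, y) for k in range(n + 1))        # K_n(x, y)
    rhs = a_n * (p(n + 1, x) * p(n, y) - p(n, x) * p(n + 1, y))
    assert abs((x - y) * K - rhs) < 1e-9
```

The identity holds exactly for this family, so the residual is pure rounding error at every sampled pair $(x, y)$.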
The assertion of Theorem 5 can be written briefly as
$$(x_j - y_j) K_n(x, y) \overset{V_2}{=} \big(A_{n,j} Q_{n+1}(x)\big)^\intercal Q_n(y) - Q_n(x)^\intercal A_{n,j} Q_{n+1}(y). \tag{5}$$
Note that in the case $V = \{0\}$ the above equation coincides with the ordinary one. We say that $K_n$ defined by (3) satisfies the $j$-th Christoffel–Darboux formula if (5) holds true with some scalar matrix $A_{n,j}$.

Lemma 6. Let $X$ and $Y$ be real or complex vector spaces, and let $X_0$ and $Y_0$ be their linear subspaces, respectively. Then there is a linear isomorphism
$$(X/X_0) \otimes (Y/Y_0) \to (X \otimes Y)/(X_0 \otimes Y + X \otimes Y_0)$$
mapping $(x + X_0) \otimes (y + Y_0)$ to $x \otimes y + (X_0 \otimes Y + X \otimes Y_0)$.

Proof. The authors are aware that this lemma is commonly regarded as a straightforward consequence of the Yoneda lemma, a well-established result in category theory. However, we have not been able to find any direct reference in the mathematical literature (apart from fishy websites), hence we propose our short proof without recourse to any "abstract nonsense." For convenience, we abbreviate $X_0 \otimes Y + X \otimes Y_0$ to $T(X_0, Y_0)$. We begin by showing that the mapping
$$(x + X_0,\, y + Y_0) \mapsto x \otimes y + T(X_0, Y_0)$$
is well defined: if $x_1 + X_0 = x_2 + X_0$ and $y_1 + Y_0 = y_2 + Y_0$, then $x_1 \otimes y_1 - x_2 \otimes y_2 = (x_1 - x_2) \otimes y_1 + x_2 \otimes (y_1 - y_2) \in T(X_0, Y_0)$, thus $x_1 \otimes y_1$ and $x_2 \otimes y_2$ are equal modulo $T(X_0, Y_0)$ and the mapping is well defined. Since it is bilinear, the universal factorization property for tensor products yields the linear mapping
$$\Phi : (X/X_0) \otimes (Y/Y_0) \to (X \otimes Y)/T(X_0, Y_0), \qquad \Phi\big((x + X_0) \otimes (y + Y_0)\big) = x \otimes y + T(X_0, Y_0).$$
We now proceed to define a mapping which will promptly turn out to be the inverse of $\Phi$. Consider the bilinear mapping $(x, y) \mapsto (x + X_0) \otimes (y + Y_0)$. The bilinearity of this mapping and the universal factorization property lead to the linear mapping
$$\Psi : X \otimes Y \to (X/X_0) \otimes (Y/Y_0), \qquad \Psi(x \otimes y) = (x + X_0) \otimes (y + Y_0).$$
Passing to the quotient space, the mapping $\Psi$ induces another mapping $\Psi_1$ with the property that $\Psi_1(x \otimes y + T(X_0, Y_0)) = (x + X_0) \otimes (y + Y_0)$ for all $x \in X$ and $y \in Y$. It turns out that $\Phi \circ \Psi_1$ and $\Psi_1 \circ \Phi$ are the identity mappings on $(X \otimes Y)/T(X_0, Y_0)$ and $(X/X_0) \otimes (Y/Y_0)$, respectively, which can be verified directly on the generators.
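Lemma 6 can be sanity-checked numerically in finite dimensions by comparing dimensions: $\dim\big((X \otimes Y)/T(X_0, Y_0)\big) = \dim(X/X_0) \cdot \dim(Y/Y_0)$. The sketch below (with randomly chosen, hence generically independent, spanning vectors; numpy assumed) computes the codimension of $T(X_0, Y_0)$ via a rank computation.

```python
import numpy as np

# Finite-dimensional sanity check of Lemma 6: for subspaces X0 <= X, Y0 <= Y,
#   dim (X (x) Y) / (X0 (x) Y + X (x) Y0)  =  dim(X/X0) * dim(Y/Y0).
# Model X = R^4, Y = R^5, with random (generic) subspaces X0, Y0.
rng = np.random.default_rng(1)
dx, dy, dx0, dy0 = 4, 5, 2, 3
X0 = rng.standard_normal((dx, dx0))   # columns span X0 (generically rank dx0)
Y0 = rng.standard_normal((dy, dy0))   # columns span Y0
I_x, I_y = np.eye(dx), np.eye(dy)

# Spanning vectors of X0 (x) Y + X (x) Y0 inside R^{dx*dy}, via Kronecker products
span = np.hstack([np.kron(X0, I_y), np.kron(I_x, Y0)])
codim = dx * dy - np.linalg.matrix_rank(span)
assert codim == (dx - dx0) * (dy - dy0)   # 20 - 16 = 2 * 2
```

The rank of the spanning family is $\dim X_0 \cdot \dim Y + \dim X \cdot \dim Y_0 - \dim X_0 \cdot \dim Y_0$ for generic subspaces, which makes the codimension count match the lemma.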
The following corollary shows that the ideal $V_2$ is well chosen with regard to tensor products.
Corollary 7. Let $V$ be a $*$-ideal in $\mathcal P_d$. Then there is a unique linear isomorphism $(\mathcal P_d/V) \otimes (\mathcal P_d/V) \to \mathcal P_{2d}/V_2$ which maps $(p + V) \otimes (q + V)$ to $p \otimes q + V_2$ for all $p, q \in \mathcal P_d$.

Proof. This is Lemma 6 translated to the case of a $*$-ideal treated as a linear subspace of $\mathcal P_d$. The identification of $\mathcal P_d \otimes \mathcal P_d$ with $\mathcal P_{2d}$ is well known.
Corollary 8. Let $V \subset \mathcal P_d$ be a proper $*$-ideal and let $\{Q_k\}_{k=0}^{\kappa_V}$ be a rigid $V$-basis of $\mathcal P_d$. If $P_1$ and $P_2$ are column polynomials such that
$$P_1(x)^\intercal Q_k(y) \overset{V_2}{=} Q_j(x)^\intercal P_2(y) \tag{6}$$
with some integers $k, j \geq 0$, then there exists a unique scalar matrix $E$ such that $P_1 \overset{V}{=} E^\intercal Q_j$ and $P_2 \overset{V}{=} E Q_k$.

Proof. It is well known that if $\{e_n\}_{n=0}^{\kappa}$ with $\kappa \in \mathbb N \cup \{\infty\}$ is a basis of a vector space $X$, then the system $\{e_m \otimes e_n\}_{m,n=0}^{\kappa}$ forms a basis of $X \otimes X$. Hence, the system $\{Q_k\}_{k=0}^{\kappa_V}$ induces the basis of $(\mathcal P_d/V) \otimes (\mathcal P_d/V)$ composed of all elements of the form $(p + V) \otimes (q + V)$, where $p$ appears in some column $Q_j$ and $q$ does in some column $Q_k$. By Corollary 7 the related basis of $\mathcal P_{2d}/V_2$ consists of all elements of the form $p \otimes q + V_2$ with the same way of choosing the polynomials $p$ and $q$.
By our assumption $P_1$ can be expressed as $P_1 \overset{V}{=} \sum_{n=0}^N C_n Q_n$ with unique scalar matrices $C_n$, and likewise $P_2 \overset{V}{=} \sum_{n=0}^N D_n Q_n$ with unique scalar matrices $D_n$ (adding some zero terms we may have the same $N$ for both $P_1$ and $P_2$). Substituting this into (6) we get
$$\sum_{n=0}^N Q_n(x)^\intercal C_n^\intercal Q_k(y) \overset{V_2}{=} \sum_{n=0}^N Q_j(x)^\intercal D_n Q_n(y).$$
Since the left-hand side is a sum of polynomials of the form $p(x) q(y)$ with $q$ from the column $Q_k$, by linear independence modulo $V_2$ we infer that $D_n = 0$ for all $n$ but $n = k$. Similarly, all $C_n = 0$ except for $n = j$. It follows that $P_1 \overset{V}{=} C_j Q_j$ and $P_2 \overset{V}{=} D_k Q_k$. Moreover, the equation (6) takes the form
$$Q_j(x)^\intercal C_j^\intercal Q_k(y) \overset{V_2}{=} Q_j(x)^\intercal D_k Q_k(y),$$
which by linear independence forces $C_j^\intercal = D_k$. The proof is complete.
As in one variable, we would like to emphasize that there is a deeper connection between the Christoffel–Darboux formula and the three-term recurrence relation. In fact, the aforesaid formula implies the relation.

Theorem 9. Let $V \subset \mathcal P_d$ be a proper $*$-ideal, let $\{Q_k\}_{k=0}^{\kappa_V}$ be a rigid $V$-basis of real polynomials with $Q_0 = 1$, and let $j \in \{1, \ldots, d\}$. Let $N \in \mathbb N$, $N \leq \kappa_V$. Assume that $K_n$ satisfies the $j$-th Christoffel–Darboux formula (5) for all $n = 0, \ldots, N$ with some scalar matrices $A_{n,j}$. Then
$$x_j Q_k \overset{V}{=} A_{k,j} Q_{k+1} + B_{k,j} Q_k + A_{k-1,j}^\intercal Q_{k-1}, \qquad k = 0, \ldots, N,$$
with some scalar matrices $B_{k,j}$ (with the convention $Q_{-1} = 0$ and $A_{-1,j} = 0$).
Proof. Let $\tilde K_n$ stand for the right-hand side of (5), $n = 0, \ldots, N$. By our assumption
$$\tilde K_n(x, y) = (x_j - y_j) K_n(x, y) + r_n(x)^\intercal w_n(y) + \tilde w_n(x)^\intercal \tilde r_n(y)$$
with column polynomials $r_n, w_n, \tilde r_n, \tilde w_n$ such that $r_n$ and $\tilde r_n$ are equal to $0$ modulo $V$. For simplicity we write $R_n(x, y) = r_n(x)^\intercal w_n(y) + \tilde w_n(x)^\intercal \tilde r_n(y)$. Since $K_k(x, y) - K_{k-1}(x, y) = Q_k(x)^\intercal Q_k(y)$, we obtain
$$(x_j - y_j)\, Q_k(x)^\intercal Q_k(y) = \tilde K_k(x, y) - \tilde K_{k-1}(x, y) - R_k(x, y) + R_{k-1}(x, y)$$
for $k = 0, \ldots, N$. This, written explicitly by means of (5) and rearranged so as to match the pattern of (6), allows us to apply Corollary 8, which gives the desired conclusion.

Examples
One example could be simply rewritten from [11], where the author considered the tensor product of two families of orthogonal polynomials in one variable, more specifically the Krawtchouk polynomials and the Charlier ones. Digesting this example one may grasp a general background idea that allows one to construct what may be called "product polynomials", i.e. the tensor product of orthogonal polynomials in a single variable. Noticing this, we prefer to direct our attention to two-variable polynomials which are not obtained this way.
We now focus on $\mathcal P_2/V$, where $V$ is the ideal of all polynomials vanishing on the unit circle $\mathbb T$ centered at $0$. In order to establish the three-term recurrence relation we will employ the well-known system $\{z^n\}_{n \in \mathbb Z}$, which is orthonormal with respect to the normalized Lebesgue measure $m$ on $\mathbb T$. One can easily check that the system
$$\{1\} \cup \{z^n, \bar z^n : n \geq 1\} \tag{7}$$
is also orthonormal with respect to $m$. What is more, every member of $\mathbb C[z, \bar z]$ is equal modulo $V$ to a linear combination of elements of the system. Indeed, since $z \bar z \overset{V}{=} 1$, for $k > l$ we may write $z^k \bar z^l \overset{V}{=} z^{k-l}$, so we have expressed $z^k \bar z^l$ linearly by means of the system (7). The same can be done for $k < l$, as $z^k \bar z^l \overset{V}{=} \bar z^{l-k}$. Since the case $k = l$ is trivial, this proves our claim.
These polynomials give rise to the real polynomials $1$, $\sqrt 2\, \mathrm{Re}\, z^n$, $\sqrt 2\, \mathrm{Im}\, z^n$, $n \geq 1$. We arrange them in a column form:
$$Q_0 = 1, \qquad Q_n = \begin{bmatrix} \sqrt 2\, \mathrm{Re}\, z^n \\ \sqrt 2\, \mathrm{Im}\, z^n \end{bmatrix}, \quad n \geq 1.$$
By the properties of the system (7) and the identification $z = x_1 + i x_2$, the system $\{Q_k\}_{k=0}^\infty$ is a rigid $V$-basis of $\mathcal P_2$ orthonormalized by $L_m(p) = \int_{\mathbb T} p \,\mathrm d m$. By Theorem 2 the system $\{Q_k\}_{k=0}^\infty$ satisfies the three-term recurrence relation (2). Following the idea from Remark 3, we may compute the matrices $A_{k,j}$ and $B_{k,j}$ with the help of the formulas $A_{k,j} = L_m(x_j Q_k Q_{k+1}^\intercal)$ and $B_{k,j} = L_m(x_j Q_k Q_k^\intercal)$. Leaving the simple though slightly tedious computations to the reader, one may arrive at
$$A_{0,1} = \begin{bmatrix} \tfrac{1}{\sqrt 2} & 0 \end{bmatrix}, \quad A_{0,2} = \begin{bmatrix} 0 & \tfrac{1}{\sqrt 2} \end{bmatrix}, \qquad A_{k,1} = \frac 12 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad A_{k,2} = \frac 12 \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad k \geq 1,$$
and $B_{k,j} = 0$ for all $k \geq 0$, $j = 1, 2$. Theorem 5 is now ready to be applied for writing the Christoffel–Darboux formula, where all the involved objects can be explicitly written.
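The computations for the circle can be verified numerically. In the sketch below the real rigid $V$-basis is taken to be $Q_0 = 1$, $Q_k = [\sqrt 2 \cos kt \;\; \sqrt 2 \sin kt]^\intercal$ on $\mathbb T$ (a natural choice, assumed here), and the candidate matrices $A_{k,1} = \tfrac12 I$, $B_{k,1} = 0$ for $k \geq 2$ are checked against the three-term recurrence; numpy is assumed.

```python
import numpy as np

# Numerical check of the three-term recurrence on the unit circle T.
# With Q_0 = 1 and Q_k = [sqrt(2) cos(k t), sqrt(2) sin(k t)]^T on T
# (our assumed real basis), and the candidate matrix A_{k,1} = (1/2) I,
# one has  x_1 Q_k = A_{k,1} Q_{k+1} + A_{k-1,1}^T Q_{k-1}  on T
# (the middle term drops out because B_{k,1} = 0).
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
x1 = np.cos(t)                               # x_1 restricted to T

def Q(k):
    return np.vstack([np.sqrt(2) * np.cos(k * t), np.sqrt(2) * np.sin(k * t)])

A1 = 0.5 * np.eye(2)                         # candidate A_{k,1}, k >= 1
for k in range(2, 6):
    lhs = x1 * Q(k)                          # x_1 Q_k evaluated on T
    rhs = A1 @ Q(k + 1) + A1.T @ Q(k - 1)    # recurrence with B_{k,1} = 0
    assert np.allclose(lhs, rhs, atol=1e-12)
```

The identity $\cos t \cos kt = \tfrac12(\cos(k+1)t + \cos(k-1)t)$ (and its sine analogue) is what makes the check pass exactly, which is the trigonometric content of the recurrence on the circle.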
Let us now turn to a more interesting example of the Bernoulli lemniscate $B$, i.e. the set of all points $(x_1, x_2) \in \mathbb R^2$ satisfying $(x_1^2 + x_2^2)^2 = x_1^2 - x_2^2$. In the case of polynomials in one complex variable this curve was discussed in [5] (cf. [6]). Our approach seems to be separate from that of [5], not to mention that there is no apparent relation between orthogonality of polynomials in complex and real variables (the preceding case of the circle has just been a lucky coincidence). A parametric description of $B$ leads to a formula defining a measure $m$ on $B$, where $f$ is a Borel function bounded on $B$ and $\alpha_m > 0$ is chosen so that $m$ is normalized. The measure $m$ enjoys the properties (a), (b) and (c): (a) is a direct consequence of the formula for $m$, while (b) and (c) result from the form of the parametrization. The ideal $V$ related to $B$ consists of all polynomials $p \in \mathcal P_2$ vanishing on $B$ or, equivalently, divisible by $(X_1^2 + X_2^2)^2 - X_1^2 + X_2^2$; in particular,
$$X_1^2 - X_2^2 \overset{V}{=} (X_1^2 + X_2^2)^2. \tag{9}$$
As a consequence, if $p \in \mathcal P_2$, then there exist $q_j \in \mathcal P_1$, $j = 1, 2, 3, 4$, such that
$$p \overset{V}{=} q_1(X_1^2 + X_2^2) + X_1\, q_2(X_1^2 + X_2^2) + X_2\, q_3(X_1^2 + X_2^2) + X_1 X_2\, q_4(X_1^2 + X_2^2).$$
The crucial remark to make here is that every $p \in \mathcal P_2$ is equal modulo $V$ to a linear combination of the polynomials $p^{j,k}_l := X_1^j X_2^k (X_1^2 + X_2^2)^l$ with $j, k = 0, 1$ and integers $l \geq 0$. It follows that if $j_1, j_2, k_1, k_2 = 0, 1$, $(j_1, k_1) \neq (j_2, k_2)$ and $q_1, q_2 \in \mathcal P_1$, then
$$X_1^{j_1} X_2^{k_1} q_1(X_1^2 + X_2^2) \cdot X_1^{j_2} X_2^{k_2} q_2(X_1^2 + X_2^2) \overset{V}{=} X_1^r X_2^s\, q(X_1^2 + X_2^2), \tag{10}$$
with some $q \in \mathcal P_1$ and $r, s = 0, 1$ such that $r + s > 0$. Since the right-hand side of (10) is a function satisfying one of the properties (a), (b) and (c) listed on p. 10, we infer that $L_m$ vanishes on it for all $j_1, j_2, k_1, k_2, q_1, q_2$ already chosen. In other words, the polynomials $X_1^{j_1} X_2^{k_1} q_1(X_1^2 + X_2^2)$ and $X_1^{j_2} X_2^{k_2} q_2(X_1^2 + X_2^2)$ are $L_m$-orthogonal. In particular, the polynomials $p^{j_1,k_1}_{l_1}$ and $p^{j_2,k_2}_{l_2}$ are $L_m$-orthogonal, provided $j_1, j_2, k_1, k_2$ are as above and $l_1, l_2 \geq 0$.
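For concreteness, the defining equation of $B$ can be checked against a polar parametrization. The parametrization $r^2 = \cos 2t$ used below is our own assumption (a standard one for the lemniscate), not necessarily the one employed above; numpy is assumed.

```python
import numpy as np

# One common polar parametrization of the Bernoulli lemniscate
# (x^2 + y^2)^2 = x^2 - y^2, namely r^2 = cos(2t):
#   x = sqrt(cos 2t) * cos t,  y = sqrt(cos 2t) * sin t,  |t| < pi/4,
# together with the mirror branch t -> t + pi. We verify the defining equation.
t = np.linspace(-np.pi / 4 + 1e-6, np.pi / 4 - 1e-6, 1000)
r = np.sqrt(np.cos(2 * t))
x, y = r * np.cos(t), r * np.sin(t)
assert np.allclose((x**2 + y**2) ** 2, x**2 - y**2, atol=1e-12)
```

Indeed, $x^2 + y^2 = \cos 2t$ and $x^2 - y^2 = \cos 2t\,(\cos^2 t - \sin^2 t) = \cos^2 2t$, so both sides equal $\cos^2 2t$ identically.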
Fix any $j, k = 0, 1$ and consider the linear functional $L_{j,k}(q) \stackrel{\text{def}}{=} L_m\big(X_1^{2j} X_2^{2k}\, q(X_1^2 + X_2^2)\big)$, $q \in \mathcal P_1$. It is evident that $L_{j,k}(|q|^2) > 0$ for any $q \neq 0$, thus there exists a sequence of real orthogonal polynomials $\{q^{j,k}_l\}_{l=0}^\infty$ such that $\deg q^{j,k}_l = l$, $l \geq 0$, and $L_{j,k}(q^{j,k}_{l_1} q^{j,k}_{l_2}) = \delta_{l_1 l_2}$.
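Each functional $L_{j,k}$ is determined by its moments on $\mathcal P_1$, so the polynomials $q^{j,k}_l$ can be produced by Gram–Schmidt in the corresponding inner product. The sketch below illustrates the procedure for a stand-in moment functional ($L(X^n) = 1/(n+1)$, i.e. Lebesgue measure on $[0,1]$, an assumption of this example; the actual moments of $L_{j,k}$ would be plugged in instead); numpy is assumed.

```python
import numpy as np

# Generic construction of the one-variable orthonormal polynomials q_l for a
# positive-definite moment functional L, by (modified) Gram-Schmidt on the
# monomials; L(p q) = p_coeffs @ H @ q_coeffs for the Hankel moment matrix H.
def orthonormalize(moments, deg):
    """Return rows q[l] of monomial coefficients with L(q_i q_j) = delta_ij."""
    H = np.array([[moments[i + j] for j in range(deg + 1)]
                  for i in range(deg + 1)])
    q = np.eye(deg + 1)                       # start from the monomials X^l
    for l in range(deg + 1):
        for i in range(l):                    # subtract projections (MGS)
            q[l] -= (q[l] @ H @ q[i]) * q[i]
        q[l] /= np.sqrt(q[l] @ H @ q[l])      # normalize in the L-inner product
    return q

moments = [1.0 / (n + 1) for n in range(11)]  # stand-in: L(X^n) on [0, 1]
deg = 5
q = orthonormalize(moments, deg)
H = np.array([[moments[i + j] for j in range(deg + 1)] for i in range(deg + 1)])
G = q @ H @ q.T                               # Gram matrix of the q_l under L
assert np.allclose(G, np.eye(deg + 1), atol=1e-7)
```

The resulting $q_l$ satisfy $\deg q_l = l$ by construction, and the Gram matrix being the identity is exactly the orthonormality $L(q_{l_1} q_{l_2}) = \delta_{l_1 l_2}$.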
As usual for orthogonal polynomials in one variable, the sequence is determined uniquely up to a unimodular factor of each $q^{j,k}_l$. It is a matter of routine to verify that the family of all polynomials
$$W^{j,k}_l := X_1^j X_2^k\, q^{j,k}_l(X_1^2 + X_2^2), \qquad j, k = 0, 1, \ l \geq 0, \tag{11}$$
is $L_m$-orthogonal, and even $L_m$-orthonormal. We now arrange this family in the column form $\{Q_n\}_{n=0}^\infty$. At first glance this does not look like a good idea leading to a rigid $V$-basis of $\mathcal P_2$, because some coordinate polynomials in $Q_n$ are not of degree $n$ if $n \geq 2$. It turns out that for all $n \geq 0$ every coordinate polynomial of $Q_n$ is equal modulo $V$ to a polynomial of degree exactly $n$. This follows from the equation
$$\Pi_V(\mathcal P_2^k) = \operatorname{lin} \Pi_V\Big(\bigcup_{n=0}^k Q_n\Big)^{3}, \qquad k \geq 0, \tag{12}$$
which we are now going to prove by induction. The instances of $k = 0$ and $k = 1$ are trivial, while for $k = 2$ a direct computation of the degrees does the job. Let us now assume that $k \geq 3$. To prove the inclusion "$\subset$" fix a monomial $X_1^\alpha X_2^\beta$ such that $\alpha + \beta \leq k$. By the induction hypothesis we may focus only on the case when $\alpha + \beta = k$. Writing $\alpha = 2a + \mu$ and $\beta = 2b + \nu$ with $\mu, \nu = 0, 1$, in virtue of (9) we get
$$X_1^\alpha X_2^\beta \overset{V}{=} \sum_{l=0}^{a+b} a_l W^{\mu,\nu}_l$$
with some coefficients $a_l \in \mathbb C$. Considering all four possible cases of $\mu, \nu = 0, 1$, we now show that every $W^{\mu,\nu}_l$ in the above sum is a coordinate in one of $Q_n$, $n = 0, \ldots, k$. Indeed, if $k = 2s - 1$, then $(\mu, \nu)$ is equal to $(1, 0)$ or $(0, 1)$, so the resulting polynomials $W^{\mu,\nu}_l$ with $l = 0, \ldots, 2s - 2$ are coordinate polynomials in one of $Q_1, Q_3, \ldots, Q_{2s-1}$. In turn, if $k = 2s$, then $(\mu, \nu)$ is equal to $(0, 0)$ or $(1, 1)$. In the case when $(\mu, \nu) = (0, 0)$ we obtain $W^{0,0}_l$ with $l = 0, \ldots, 2s$, which appear in the columns $Q_0, Q_2, \ldots, Q_{2s}$. The remaining case $(\mu, \nu) = (1, 1)$ can be done in a similar way. This proves the desired inclusion.

To verify the reverse inclusion, fix $\mu, \nu = 0, 1$ and take any coordinate polynomial $W^{\mu,\nu}_l$ in $Q_k$. By the induction hypothesis it suffices to show that $W^{\mu,\nu}_l + V \in \Pi_V(\mathcal P_2^k)$. The polynomial $q^{\mu,\nu}_l$ can be written as $q^{\mu,\nu}_l = \sum_{j=0}^l b_j X^j$ with some coefficients $b_j \in \mathbb C$, and therefore $W^{\mu,\nu}_l = \sum_{j=0}^l b_j X_1^\mu X_2^\nu (X_1^2 + X_2^2)^j$. For an even number $j$ we get
$$X_1^\mu X_2^\nu (X_1^2 + X_2^2)^j \overset{V}{=} X_1^\mu X_2^\nu (X_1^2 - X_2^2)^{j/2}, \tag{13}$$
and the latter polynomial belongs to $\mathcal P_2^k$. If in turn $j$ is odd, we have
$$X_1^\mu X_2^\nu (X_1^2 + X_2^2)^j \overset{V}{=} X_1^\mu X_2^\nu (X_1^2 + X_2^2)(X_1^2 - X_2^2)^{(j-1)/2}, \tag{14}$$
where we are led to a polynomial of degree $\mu + \nu + j + 1$, seemingly exceeding $k$. However, if $j < l$, then the last polynomial in (14) is of degree less than or equal to $k$. In the remaining case, when $j = l$ is odd, one may notice that whenever $W^{\mu,\nu}_l$ appears in $Q_k$, then $\mu + \nu + l < k$, and the resulting polynomial in (14) is again a member of $\mathcal P_2^k$. We have thus proved (12).

³See the notation introduced at the beginning of Section 2.

Since all the $W^{j,k}_l$ are $L_m$-orthonormal, we deduce that they are linearly independent modulo $V$, so by (12) the elements $W^{j,k}_l + V$ form a basis of $\mathcal P_2/V$. It follows that the system $\{Q_n\}_{n=0}^\infty$ meets all the requirements of a rigid $V$-basis except for the condition on the degrees of the coordinate polynomials of $Q_n$. However, this can easily be overcome by the above discussion, where we have shown how the coordinate polynomials can be replaced by polynomials of the proper degree preserving equality modulo $V$. Since our goal is to write the three-term recurrence relation as in Theorem 2, which is an equation modulo $V$,
we do not really have to bother about the degrees of the polynomials in $Q_n$. As we already know, $L_m\big(X_j W^{\mu_1,\nu_1}_{l_1} W^{\mu_2,\nu_2}_{l_2}\big)$ is equal to zero except in two cases, (i) and (ii), singled out by parity considerations; hence $B_{k,j} = 0$ for all admissible $k$ and $j$. We now consider $A_{k,j}$. Fix $s \geq 2$ and introduce the auxiliary column polynomial $R^{\mu,\nu}_l = [q^{\mu,\nu}_{l-1} \;\; q^{\mu,\nu}_l]^\intercal$, $l \geq 1$, $\mu, \nu = 0, 1$. Thus we may write
$$Q_{2s-1} = \begin{bmatrix} X_1\, R^{1,0}_{2s-2}(X_1^2 + X_2^2) \\ X_2\, R^{0,1}_{2s-2}(X_1^2 + X_2^2) \end{bmatrix} \quad \text{and} \quad Q_{2s} = \begin{bmatrix} R^{0,0}_{2s}(X_1^2 + X_2^2) \\ X_1 X_2\, R^{1,1}_{2s-2}(X_1^2 + X_2^2) \end{bmatrix}, \qquad s \geq 2,$$
where $R^{\mu,\nu}_l(X_1^2 + X_2^2)$ means substituting $X_1^2 + X_2^2$ into both coordinate polynomials of $R^{\mu,\nu}_l$. In virtue of (i) and (ii) we get a block formula for $A_{k,j}$ with one of the blocks equal to $0$.
We encourage the reader to derive similar formulas for $A_{k,j}$ with $k = 0, 1$ and $j = 1, 2$, the cases not covered above. If we now employed a one-variable orthonormalization procedure, we could consecutively compute all the matrices $A_{k,j}$. It is worth noting that one may derive an integral representation for all the functionals $L_{j,k}$, $j, k = 0, 1$. The weight function for $L_{0,0}$ can be related to the weight mentioned in [12, p. 37] (formula (2.9.1)); to see this it is enough to perform the change of variable $t = 1 - x$ under the integral. In turn, the orthogonal polynomials associated with $L_{j,k}$ with $j, k = 0, 1$, $j + k > 0$, can be derived from those of $L_{0,0}$ by Theorem 2.5 in [12].

The case considered in [5] concerns complex orthogonality of polynomials in a single complex variable with respect to an arbitrary admissible functional $L$ (let the term "admissible" remain mysterious for the time being). This allows us to treat this case in a slightly sketchy way. We will show that the system (15) forms a rigid $V$-basis of $\mathcal P_2$. By the discussion in [2, Section 4] the system $\{\Sigma_n\}_{n=0}^\infty$ is a proper candidate for a rigid $V$-basis of $\mathcal P_2$, because its structure is the same as that of the rigid $V$-basis $\{Q_n\}_{n=0}^\infty$ constructed above (i.e. the lengths of consecutive columns in $\{\Sigma_n\}_{n=0}^\infty$ are equal to $d_V(n)$). Hence, it suffices to show that for every $j, k \geq 0$ we have
$$X_1^j X_2^k + V \in \operatorname{lin} \Pi_V\Big(\bigcup_{n=0}^{j+k} \Sigma_n\Big). \tag{16}$$
If $j + k \leq 3$, then (16) is satisfied in an apparent way, because then $X_1^j X_2^k$ is a member of $\Sigma_{j+k}$. Assume that $j + k \geq 4$ and $k \geq 4$, which covers all the monomials of degree $j + k$ outside of $\Sigma_{j+k}$. Since $X_1^2 - X_2^2 \overset{V}{=} (X_1^2 + X_2^2)^2$, we infer that
