On multivariate orthogonal polynomials and elementary symmetric functions

We study families of multivariate orthogonal polynomials with respect to the symmetric weight function in $d$ variables
$$B_\gamma(\mathtt{x}) = \prod_{i=1}^{d} \omega(x_i) \prod_{i<j} |x_i - x_j|^{2\gamma+1}, \qquad \gamma > -1,$$
where $\omega(t)$ is a univariate weight function in $t \in (a,b)$ and $\mathtt{x} = (x_1, x_2, \ldots, x_d)$ with $x_i \in (a,b)$.
Applying the change of variables from $x_i$, $i = 1, 2, \ldots, d$, to $u_r$, $r = 1, 2, \ldots, d$, where $u_r$ is the $r$-th elementary symmetric function, we obtain the domain region in terms of the discriminant of the polynomial having $x_i$, $i = 1, 2, \ldots, d$, as its zeros, and in terms of the corresponding Sturm sequence. Choosing the univariate weight function as the Hermite, Laguerre, or Jacobi weight function, we obtain the representation in terms of the variables $u_r$ of the partial differential operators for which the respective generalized multivariate Hermite, Laguerre, and Jacobi orthogonal polynomials are eigenfunctions. Finally, we present explicitly the partial differential operators for the generalized Hermite, Laguerre, and Jacobi polynomials in $d = 2$ and $d = 3$ variables.


Introduction
In 1974 (see [8, 9]), Koornwinder considered the family of orthogonal polynomials $p^{\alpha,\beta,\gamma}_{n,k}(u,v)$, with $n \geqslant k \geqslant 0$, obtained by orthogonalization of the sequence $1, u, v, u^2, uv, v^2, u^3, u^2v, \ldots$ with respect to the weight function
$$(1-u+v)^{\alpha}\,(1+u+v)^{\beta}\,(u^2-4v)^{\gamma}, \qquad \alpha, \beta, \gamma > -1, \quad \alpha+\gamma+3/2 > 0, \quad \beta+\gamma+3/2 > 0,$$
on the region bounded by the lines $1-u+v = 0$ and $1+u+v = 0$ and by the parabola $u^2 - 4v = 0$ (see Fig. 1). In the special case $\gamma = -1/2$, the orthogonal polynomials $p^{\alpha,\beta,-1/2}_{n,k}(u,v)$ can be obtained explicitly from symmetrized products of the univariate Jacobi polynomials $P^{(\alpha,\beta)}_{n}(x)$ via the change of variables $u = x+y$, $v = xy$. Koornwinder obtained two explicit linear partial differential operators $D^{\alpha,\beta,\gamma}_1$ and $D^{\alpha,\beta,\gamma}_2$, of orders two and four, respectively, such that the polynomials $p^{\alpha,\beta,\gamma}_{n,k}(u,v)$ are their common eigenfunctions. In fact, $D^{\alpha,\beta,\gamma}_1$ and $D^{\alpha,\beta,\gamma}_2$ generate the algebra of differential operators having the polynomials $p^{\alpha,\beta,\gamma}_{n,k}(u,v)$ as eigenfunctions. These polynomials are not classical in the Krall and Sheffer sense [10], since the corresponding eigenvalues of $D^{\alpha,\beta,\gamma}_1$ depend on both $n$ and $k$.
In several variables, one finds different extensions of Koornwinder's polynomials connected with symmetric multivariate weight functions constructed from classical univariate weights. In fact, the so-called generalized classical orthogonal polynomials are multivariate polynomials which are orthogonal with respect to the weight functions $B_\gamma(\mathtt{x})$, with $\omega(t)$ being one of the classical weight functions (Hermite, Laguerre, or Jacobi) on the real line. The multivariate Hermite, Laguerre, and Jacobi families associated with the weight functions $B_\gamma(\mathtt{x})$ were introduced by Lassalle [11–13] and Macdonald [16] as a generalization of a previously known special case in which the parameter $\gamma$ is fixed at the value $0$ [7]. Later, these multivariate generalizations of the classical Hermite, Laguerre, and Jacobi polynomials appeared as the polynomial part of the eigenfunctions of certain Schrödinger operators for Calogero–Sutherland-type quantum systems [1]. In fact, if we denote by $\ell$ the second-order differential operator having the classical univariate orthogonal polynomials as eigenfunctions, then the multivariate Hermite, Laguerre, and Jacobi polynomials are eigenfunctions of differential operators built from $\ell$ acting on each variable. Lassalle expressed the generalized classical orthogonal polynomials in terms of the basis of symmetric monomials
$$m_{\lambda}(\mathtt{x}) = \sum x_1^{\lambda_1} x_2^{\lambda_2} \cdots x_d^{\lambda_d}. \tag{1.1}$$
Here the summation in (1.1) is over the orbit of $\lambda$ with respect to the action of the symmetric group $S_d$, which permutes the vector components $x_1, x_2, \ldots, x_d$ (see [11–13]).
Rather than studying the eigenfunctions of $H$ in terms of the monomial symmetric polynomials, previous studies (see [16]) have shown that it is convenient to change basis from the monomial symmetric polynomials to the Jack polynomials, that is, the unique (up to normalization) symmetric eigenfunctions of a certain distinguished differential operator. In this work, we consider $\omega(t)$ a univariate weight function in $t \in (a, b)$. For $\gamma > -1$, we define a symmetric weight function in $d$ variables on the hypercube $(a,b)^d$ as
$$B_\gamma(\mathtt{x}) = \prod_{i=1}^{d} \omega(x_i) \prod_{i<j} |x_i - x_j|^{2\gamma+1},$$
and we work with the $r$-th elementary symmetric functions $u_r$ of $x_1, \ldots, x_d$ (defined in Section 2). In [2], the change of variables $\mathtt{x} = (x_1, x_2, \ldots, x_d) \mapsto \mathtt{u} = (u_1, u_2, \ldots, u_d)$ was considered to construct multivariate Gaussian cubature formulae in the cases $\gamma = \pm \frac{1}{2}$.
This construction is based on the common zeroes of multivariate quasi-orthogonal polynomials, which turns out to be expressed in terms of Jacobi polynomials (see also [3]).
Our main goal is the study of multivariate orthogonal polynomials in the variable $\mathtt{u}$ associated with the weight function $W_\gamma(\mathtt{u})$ obtained from the change of variables $\mathtt{x} \mapsto \mathtt{u}$. Obviously, generalized classical orthogonal polynomials are included in our study.
To this end, in Section 2, some basic definitions will be introduced and some properties of the derivatives of elementary symmetric functions will be obtained.
In Section 3, we analyze the structure of the domain of the weight function $W_\gamma(\mathtt{u})$, that is, the image of the map $\mathtt{x} \mapsto \mathtt{u}$. Orthogonal polynomials with respect to $W_\gamma(\mathtt{u})$ are defined in Section 4.
Finally, in Section 5, generalized classical orthogonal polynomials are considered. Our main result states that, under the change of variables $\mathtt{x} \mapsto \mathtt{u}$, the differential operators $H_H$, $H_L$, and $H_J$ can be represented as linear partial differential operators of the form
$$\sum_{r,s=1}^{d} a_{rs}(\mathtt{u})\, \frac{\partial^2}{\partial u_r \partial u_s} + \sum_{r=1}^{d} b_r(\mathtt{u})\, \frac{\partial}{\partial u_r},$$
where $a_{rs}(\mathtt{u})$, for $r, s = 1, \ldots, d$, are polynomials of degree $2$ in $\mathtt{u}$, and $b_r(\mathtt{u})$, for $r = 1, \ldots, d$, are polynomials of degree $1$ in $\mathtt{u}$. These operators have the multivariate orthogonal polynomials with respect to $W_\gamma(\mathtt{u})$ as eigenfunctions. In particular, we explicitly give the representation of these operators in the cases $d = 2$ and $d = 3$.

Definitions and first properties
If $\nu = (\nu_1, \nu_2, \ldots, \nu_d)$ is a $d$-tuple of non-negative integers $\nu_i$, we call $\nu$ a multi-index of degree $|\nu| = \nu_1 + \nu_2 + \cdots + \nu_d$. We order the multi-indexes by means of the graded reverse lexicographical order, that is, $\nu \prec \mu$ if and only if $|\nu| < |\mu|$ or, in the case $|\nu| = |\mu|$, the first nonzero entry of $\nu - \mu$ is positive.

If $\nu$ is a multi-index and $\mathtt{x} = (x_1, x_2, \ldots, x_d) \in \mathbb{R}^d$, we denote by $\mathtt{x}^{\nu}$ the monomial $x_1^{\nu_1} x_2^{\nu_2} \cdots x_d^{\nu_d}$, which has total degree $|\nu|$. A polynomial $P$ in $d$ variables is a finite linear combination of monomials, $P(\mathtt{x}) = \sum_{\nu} c_{\nu}\, \mathtt{x}^{\nu}$. The total degree of $P$ is defined as the highest degree of its monomials.
Following [14], the $r$-th elementary symmetric function $u_r$ is the sum of all products of $r$ different variables $x_i$, i.e.,
$$u_r = \sum_{1 \leqslant i_1 < i_2 < \cdots < i_r \leqslant d} x_{i_1} x_{i_2} \cdots x_{i_r}, \qquad r = 1, 2, \ldots, d,$$
and $u_0 = 1$. The elementary symmetric functions $u_r$, $r = 1, 2, \ldots, d$, are harmonic homogeneous polynomials of degree $r$ and can be obtained from the generating polynomial of degree $d$ in the variable $t$,
$$P(t) = \prod_{i=1}^{d} (1 + x_i t) = \sum_{r=0}^{d} u_r\, t^r. \tag{2.2}$$
For a given multivariate function $f$, we will denote by $\partial_k f$ the partial derivative of $f$ with respect to the variable $x_k$. In this work, we deal frequently with partial derivatives of the elementary symmetric functions. The following lemma provides recursive and closed expressions for $\partial_k u_r$:
$$\partial_k u_r = u_{r-1} - x_k\, \partial_k u_{r-1} = \sum_{j=0}^{r-1} (-x_k)^j\, u_{r-1-j}.$$
Proof: Taking partial derivatives in (2.2), we get $\partial_k P(t) = t \prod_{i \ne k} (1 + x_i t) = \sum_{r=1}^{d} \partial_k u_r\, t^r$. Next, multiply by $(1 + x_k t)$ in the above equality and compare coefficients of $t^r$ to obtain the recursion.
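The recursion and the closed form in the lemma can be verified symbolically; the sketch below (the helper `u(r)` is my own, not from the paper) checks both identities for $d = 4$:

```python
# Symbolic check of the lemma: d_k u_r = u_{r-1} - x_k d_k u_{r-1}
#                                      = sum_{j=0}^{r-1} (-x_k)^j u_{r-1-j}.
import itertools
import sympy as sp

d = 4
x = sp.symbols(f'x1:{d + 1}')

def u(r):
    """r-th elementary symmetric function of x1..xd, with u_0 = 1."""
    if r == 0:
        return sp.Integer(1)
    return sp.Add(*[sp.Mul(*c) for c in itertools.combinations(x, r)])

for k in range(d):
    for r in range(1, d + 1):
        lhs = sp.diff(u(r), x[k])
        recursive = u(r - 1) - x[k] * sp.diff(u(r - 1), x[k])
        closed = sum((-x[k]) ** j * u(r - 1 - j) for j in range(r))
        assert sp.expand(lhs - recursive) == 0
        assert sp.expand(lhs - closed) == 0
```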

The domain
Since $B_\gamma$ is obviously symmetric in the variables $x_1, x_2, \ldots, x_d$, it suffices to consider its restriction to the domain $\Delta$ given by
$$\Delta = \{\mathtt{x} \in (a,b)^d : a < x_1 < x_2 < \cdots < x_d < b\}.$$
Let $E(t)$ be the monic polynomial of degree $d$ in the variable $t$ having $x_1, x_2, \ldots, x_d$ as its zeros,
$$E(t) = \prod_{i=1}^{d} (t - x_i) = \sum_{r=0}^{d} (-1)^r u_r\, t^{d-r}. \tag{3.2}$$
Let us consider the mapping $\mathtt{x} \mapsto \mathtt{u} = (u_1, u_2, \ldots, u_d)$ and the corresponding Jacobian matrix $T = (\partial_k u_r)_{r,k=1}^{d}$. Using (2.4) and subtracting suitable combinations of columns in $|T|$, we get that $|T|$ equals the Vandermonde determinant
$$V = \prod_{1 \leqslant i < j \leqslant d} (x_j - x_i). \tag{3.3}$$
As is well known, the existence of $d$ different roots of the polynomial $E(t)$ defined in (3.2) (namely $x_i$, for $i = 1, \ldots, d$) is equivalent to the positivity of $D(\mathtt{u})$, the discriminant of $E(t)$. Moreover, the fact that all these different roots are contained in the interval $(a,b)$ can be characterized in terms of the corresponding Sturm sequence (see [17, p. 30]). Consider the polynomials $p_0(t) = E(t)$ and $p_1(t) = E'(t)$, and construct a sequence $\{p_k(t)\}_{k=0}^{d}$ with the help of Euclid's algorithm for the greatest common divisor of $E$ and $E'$,
$$p_{k-1}(t) = q_k(t)\, p_k(t) - m_k\, p_{k+1}(t),$$
where $m_k$ is a positive constant for $k = 1, \ldots, d-1$.
Since the roots of $E(t)$ are simple, $p_d(t)$ is a nonzero constant. Sturm's theorem states that if $v(t)$ is the number of sign changes in the sequence $\{p_0(t), p_1(t), \ldots, p_d(t)\}$, then the number of roots of $p_0(t)$ (without taking multiplicities into account) confined between $a$ and $b$ is equal to $v(a) - v(b)$. In particular, all $d$ roots lie in $(a,b)$ if and only if $\{p_0(b), p_1(b), \ldots, p_d(b)\}$ has no sign changes and $\{p_0(a), p_1(a), \ldots, p_d(a)\}$ has exactly $d$ sign changes.
In [4], explicit expressions for the polynomials in a Sturm sequence were provided. These explicit representations were given in terms of the $d$ different roots of the first polynomial in the sequence, $p_0(t)$ ($x_i$, for $i = 1, \ldots, d$, in our case). In particular, the author shows that the constant value of $p_d(t)$ coincides with the discriminant of $p_0(t)$ up to a positive multiplicative factor. Therefore, the condition $D(\mathtt{u}) > 0$ is already encoded in the sign conditions on the Sturm sequence. Consequently, the following result holds.

Proposition 3.1 The region $\Omega$, defined by the sign conditions on the Sturm sequence $\{p_k\}_{k=0}^{d}$ at the endpoints $a$ and $b$, is the image of $\Delta$ under the mapping $\mathtt{x} = (x_1, x_2, \ldots, x_d) \mapsto \mathtt{u}$. As a consequence, the orthogonality measure and its support in terms of the coordinates $u_1, \ldots, u_d$ can be obtained explicitly using the resultant $R(E, E')$ combined with a simple algorithm.
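The Sturm-sequence root count underlying Proposition 3.1 can be sketched in a few lines. The implementation below is my own illustration (helper names and the sample polynomial are not from the paper): it builds $\{p_k\}$ by Euclid's algorithm with sign flips and counts distinct roots in $(a,b]$ as $v(a) - v(b)$.

```python
# Illustrative Sturm-sequence root counter (assumes E is squarefree and
# neither a nor b is a root of E).
import sympy as sp

t = sp.symbols('t')

def sturm_sequence(E):
    """p_0 = E, p_1 = E', p_{k+1} = -rem(p_{k-1}, p_k)."""
    seq = [sp.Poly(E, t), sp.Poly(sp.diff(E, t), t)]
    while seq[-1].degree() > 0:
        seq.append(-sp.rem(seq[-2], seq[-1]))
    return seq

def sign_changes(seq, point):
    signs = [sp.sign(v) for v in (p.eval(point) for p in seq) if v != 0]
    return sum(1 for s, s_next in zip(signs, signs[1:]) if s != s_next)

def roots_in(E, a, b):
    """Number of distinct real roots of E in (a, b], by Sturm's theorem."""
    seq = sturm_sequence(E)
    return sign_changes(seq, a) - sign_changes(seq, b)

# E(t) = (t - 1/4)(t - 1/2)(t - 3/4) has all three roots inside (0, 1).
E = sp.expand((t - sp.Rational(1, 4)) * (t - sp.Rational(1, 2))
              * (t - sp.Rational(3, 4)))
print(roots_in(E, 0, 1))  # 3
```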

The case d = 2
Let $\omega$ be a weight function defined on $(a,b)$. For $\gamma > -1$, let us define a weight function of two variables on the domain $\Delta = \{(x_1, x_2) : a < x_1 < x_2 < b\}$,
$$B_\gamma(x_1, x_2) = \omega(x_1)\, \omega(x_2)\, |x_1 - x_2|^{2\gamma+1}.$$
Let us consider the mapping $\mathtt{x} \mapsto \mathtt{u}$ defined by $u_1 = x_1 + x_2$ and $u_2 = x_1 x_2$. Then $E(t) = t^2 - u_1 t + u_2$, and the Jacobian of the change of variables is $|x_1 - x_2|$.
Expressed in terms of the variable $\mathtt{u}$, the discriminant of the polynomial $E(t)$ is $D(\mathtt{u}) = u_1^2 - 4u_2$.
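Both facts for $d = 2$ (the Jacobian and the discriminant) are easy to confirm symbolically; a minimal check, not from the paper:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u1, u2 = x1 + x2, x1 * x2

# Jacobian matrix of (u1, u2) with respect to (x1, x2);
# its determinant is x1 - x2, so the Jacobian factor is |x1 - x2|.
T = sp.Matrix([[sp.diff(u1, x1), sp.diff(u1, x2)],
               [sp.diff(u2, x1), sp.diff(u2, x2)]])
assert sp.expand(T.det() - (x1 - x2)) == 0

# Discriminant of E(t) = t^2 - u1 t + u2 equals (x1 - x2)^2.
assert sp.expand(u1**2 - 4 * u2 - (x1 - x2)**2) == 0
```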

The Sturm sequence reads $p_0(t) = E(t)$, $p_1(t) = E'(t) = 2t - u_1$, and $p_2(t) = \tfrac{1}{4}(u_1^2 - 4u_2)$. In the Jacobi case, we have $(a,b) = (-1,1)$ and $\omega(t) = (1-t)^{\alpha}(1+t)^{\beta}$, with $\alpha, \beta > -1$. In fact, this is the case originally considered by Koornwinder (see [8]). Then, using Proposition 3.1, the mapping $\mathtt{x} \mapsto \mathtt{u}$ is a bijection between $\Delta$ and the domain $\Omega$ given by
$$\Omega = \{(u_1, u_2) : 1 - u_1 + u_2 > 0,\ 1 + u_1 + u_2 > 0,\ u_1^2 - 4u_2 > 0\},$$
which is depicted in Fig. 1.
In the Laguerre case, we have $(a,b) = (0, +\infty)$ and $\omega(t) = t^{\alpha} e^{-t}$, with $\alpha > -1$. Therefore, using again Proposition 3.1, the domain $\Omega$ is given by
$$\Omega = \{(u_1, u_2) : u_1 > 0,\ u_2 > 0,\ u_1^2 - 4u_2 > 0\},$$
the region described in Fig. 2.

The case d = 3
For $d = 3$, we set $\mathtt{x} = (x_1, x_2, x_3)$ and $\mathtt{u} = (u_1, u_2, u_3)$, with
$$u_1 = x_1 + x_2 + x_3, \qquad u_2 = x_1 x_2 + x_1 x_3 + x_2 x_3, \qquad u_3 = x_1 x_2 x_3.$$
Then $E(t) = t^3 - u_1 t^2 + u_2 t - u_3$, and the discriminant $D(\mathtt{u})$ can be expressed in terms of the elementary symmetric functions as
$$D(\mathtt{u}) = u_1^2 u_2^2 - 4 u_2^3 - 4 u_1^3 u_3 + 18\, u_1 u_2 u_3 - 27 u_3^2.$$
The Sturm sequence is obtained as before from $p_0(t) = E(t)$ and $p_1(t) = E'(t)$ via Euclid's algorithm, and the region $\Omega$ for $d = 3$ is described by the resulting sign conditions at the endpoints. The region $\Omega$ is depicted in Fig. 4. This picture has been obtained from the parametric representation of the images, under the map defined by (2.1), of the four triangular faces of the domain $\Delta$. The set $\Omega$ is a solid limited by two flat faces and two curved faces. The first thing to notice is that $\Omega$ is invariant under the change of variables $(u_1, u_2, u_3) \to (-u_1, u_2, -u_3)$. In the image, the brown face is part of the plane $p_0(1) = 0$. There is another symmetrical flat face contained in the plane $-p_0(-1) = 0$. The two flat faces intersect in the line segment from $A = (1, -1, -1)$ to $B = (-1, -1, 1)$. The other line segment bounding the brown region (which is the intersection of the planes $p_0(1) = 0$ and $p_1(1) = 0$) is the line segment from $A = (1, -1, -1)$ to $C = (3, 3, 1)$. The third boundary part of the brown region is the arc from $B$ to $C$ of a parabola. Figure 5 shows the projection of $\Omega$ onto the $u_1 u_3$ plane. Notice the two triangles, sharing one edge, each having one parabolic side, namely, part of the parabolas $u_3 = \frac{1}{4}(u_1 - 1)^2$ and $u_3 = -\frac{1}{4}(u_1 + 1)^2$.
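The cubic discriminant above, and the fact that it vanishes at the vertices $A$ and $C$ (where $E(t)$ acquires a multiple root), can be checked directly with a computer algebra system:

```python
import sympy as sp

t, u1, u2, u3 = sp.symbols('t u1 u2 u3')
E = t**3 - u1 * t**2 + u2 * t - u3

# Discriminant of the cubic in terms of the elementary symmetric functions.
D = sp.discriminant(E, t)
expected = (u1**2 * u2**2 - 4 * u2**3 - 4 * u1**3 * u3
            + 18 * u1 * u2 * u3 - 27 * u3**2)
assert sp.expand(D - expected) == 0

# At C = (3, 3, 1), E(t) = (t - 1)^3 has a triple root, so D = 0 there;
# at A = (1, -1, -1), E(t) = (t - 1)^2 (t + 1) has a double root.
assert D.subs({u1: 3, u2: 3, u3: 1}) == 0
assert D.subs({u1: 1, u2: -1, u3: -1}) == 0
```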

Orthogonal polynomials
Under the mapping defined by (2.1), the weight function $B_\gamma$, given in (3.1), becomes a weight function defined on the domain $\Omega$ by
$$W_\gamma(\mathtt{u}) = \prod_{i=1}^{d} \omega(x_i)\, D(\mathtt{u})^{\gamma},$$
where the $x_i$ are the roots of $E(t)$. Now, it is possible to define the polynomials orthogonal with respect to $W_\gamma(\mathtt{u})$ on $\Omega$.

Proposition 4.1 Define monic polynomials $P^{(\gamma)}_{\nu}(\mathtt{u}) = \mathtt{u}^{\nu} + \sum_{\mu \prec \nu} c_{\mu}\, \mathtt{u}^{\mu}$, under the graded reverse lexicographic order $\prec$, satisfying the orthogonality conditions
$$\int_{\Omega} P^{(\gamma)}_{\nu}(\mathtt{u})\, \mathtt{u}^{\mu}\, W_\gamma(\mathtt{u})\, d\mathtt{u} = 0 \qquad \text{for } \mu \prec \nu.$$
Then these polynomials are uniquely determined and are mutually orthogonal with respect to $W_\gamma(\mathtt{u})$.
Proof Since the graded reverse lexicographic order $\prec$ is a total order, we may apply the Gram–Schmidt orthogonalization process to the monomials so ordered; uniqueness then follows from the fact that $P^{(\gamma)}_{\nu}(\mathtt{u})$ has leading coefficient $1$.
In the cases $\gamma = \pm 1/2$, a family of orthogonal polynomials in the variable $\mathtt{u}$ can be given explicitly in terms of orthogonal polynomials of one variable (see [2] and [3, p. 155]).
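The Gram–Schmidt construction in the proof of Proposition 4.1 can be illustrated in the simplest setting. The sketch below is my own example, assuming $d = 2$, $\gamma = 1/2$ and $\omega \equiv 1$, so that $B_\gamma(x_1, x_2) = (x_1 - x_2)^2$ is a polynomial and, by symmetry, all inner products can be taken over the full square $(-1,1)^2$ (proportional to those over $\Delta$); the helper `ip` is not from the paper.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u1, u2 = x1 + x2, x1 * x2
B = (x1 - x2) ** 2          # B_gamma for gamma = 1/2 and omega ≡ 1

def ip(f, g):
    """Inner product over the full square (proportional to the one over Delta)."""
    return sp.integrate(f * g * B, (x1, -1, 1), (x2, -1, 1))

# Gram-Schmidt on the ordered monomials 1, u1, u2:
P10 = u1 - ip(u1, 1) / ip(1, 1)
P01 = u2 - ip(u2, 1) / ip(1, 1) - ip(u2, P10) / ip(P10, P10) * P10

assert sp.simplify(P10 - u1) == 0                        # u1 already orthogonal to 1
assert sp.simplify(P01 - (u2 + sp.Rational(1, 3))) == 0  # monic in u2
assert ip(P01, 1) == 0 and sp.simplify(ip(P01, P10)) == 0
```

By parity, all inner products of odd total degree in $(x_1, x_2)$ vanish, which is why $P_{(1,0)} = u_1$ comes out unchanged while $u_2$ picks up the constant correction $1/3$.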

Generalized classical orthogonal polynomials
In this section, multivariate orthogonal polynomials are considered associated with the weight functions $B_\gamma(\mathtt{x})$ in which $\omega(t)$ is the Hermite weight $e^{-t^2}$ on $(-\infty, \infty)$, the Laguerre weight $t^{\alpha} e^{-t}$, $\alpha > -1$, on $(0, +\infty)$, or the Jacobi weight $(1-t)^{\alpha}(1+t)^{\beta}$, $\alpha, \beta > -1$, on $(-1, 1)$. We are going to obtain the representation of the differential operators $H_H$, $H_L$, and $H_J$ under the change of variables $\mathtt{x} \mapsto \mathtt{u}$.
For $h = 0, 1, 2$, let us define the operators
$$D_h = \sum_{i=1}^{d} x_i^h\, \partial_i^2, \qquad E_h = \sum_{i=1}^{d} x_i^h\, \partial_i, \qquad F_h = \sum_{i=1}^{d} x_i^h \sum_{k \ne i} \frac{1}{x_i - x_k}\, \partial_i,$$
so that $H_H$, $H_L$, and $H_J$ are linear combinations of $D_h$, $E_h$, and $F_h$; for instance, $H_H = D_0 - 2E_1 + (2\gamma+1)F_0$. Under the change of variables $\mathtt{x} \mapsto \mathtt{u}$, the second-order coefficients involve the sums $\sum_{i=1}^{d} x_i^h\, \partial_i u_r\, \partial_i u_s$, and since $\partial_i^2 u_r = 0$, no first-order terms arise from $D_h$. Moreover, Euler's identity for the homogeneous polynomials $u_r$ gives $\sum_{i=1}^{d} x_i\, \partial_i u_r = r\, u_r$.
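The two identities used at this step, harmonicity $\partial_i^2 u_r = 0$ (each $u_r$ is multilinear) and Euler's identity $\sum_i x_i \partial_i u_r = r u_r$, are quick to confirm symbolically; a check for $d = 3$, not from the paper:

```python
import itertools
import sympy as sp

d = 3
x = sp.symbols(f'x1:{d + 1}')
u = {r: sp.Add(*[sp.Mul(*c) for c in itertools.combinations(x, r)])
     for r in range(1, d + 1)}

for r in range(1, d + 1):
    for xi in x:
        # each u_r is multilinear, so every pure second derivative vanishes
        assert sp.diff(u[r], xi, 2) == 0
    # Euler's identity for the homogeneous polynomial u_r of degree r
    assert sp.expand(sum(xi * sp.diff(u[r], xi) for xi in x) - r * u[r]) == 0
```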

To obtain the representation of the operator $F_h$, let us consider the Vandermonde determinant $V$ defined in (3.3). One can see that
$$\sum_{k \ne i} \frac{1}{x_i - x_k} = \frac{\partial_i V}{V},$$
and therefore
$$\sum_{i=1}^{d} \sum_{k \ne i} \frac{1}{x_i - x_k} = 0,$$
since every element $1/(x_i - x_k)$ in the above sum appears twice with opposite sign.
On the other hand, since $V$ is a homogeneous polynomial of total degree $d(d-1)/2$, again Euler's identity for homogeneous polynomials gives
$$\sum_{i=1}^{d} x_i\, \partial_i V = \frac{d(d-1)}{2}\, V.$$
Finally, using (5.2) twice, we conclude the representation for $h = 1$; for $h = 2$, using (2.3) and (5.9), we obtain the corresponding expression, where for the last equality the two last identities in the proof for $h = 1$ were used.
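The three identities used in this argument (the logarithmic derivative of $V$, the pairwise cancellation, and Euler's identity for $V$) can be confirmed symbolically, e.g. for $d = 3$:

```python
import itertools
import sympy as sp

d = 3
x = sp.symbols(f'x1:{d + 1}')
# Vandermonde determinant V = prod_{i<j} (x_j - x_i)
V = sp.prod([x[j] - x[i] for i, j in itertools.combinations(range(d), 2)])

# d_i V / V = sum_{k != i} 1/(x_i - x_k)
for i in range(d):
    rhs = sum(1 / (x[i] - x[k]) for k in range(d) if k != i)
    assert sp.simplify(sp.diff(V, x[i]) / V - rhs) == 0

# the double sum cancels pairwise
total = sum(1 / (x[i] - x[k]) for i in range(d) for k in range(d) if k != i)
assert sp.simplify(total) == 0

# Euler's identity: V is homogeneous of degree d(d-1)/2
euler = sum(x[i] * sp.diff(V, x[i]) for i in range(d))
assert sp.expand(euler - sp.Rational(d * (d - 1), 2) * V) == 0
```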
In this way, we have shown that, under the change of variables $\mathtt{x} \mapsto \mathtt{u}$ defined by (2.1), the differential operators $H_H$, $H_L$, and $H_J$ can be represented as linear partial differential operators of the form
$$\sum_{r,s=1}^{d} a_{rs}(\mathtt{u})\, \frac{\partial^2}{\partial u_r \partial u_s} + \sum_{r=1}^{d} b_r(\mathtt{u})\, \frac{\partial}{\partial u_r}.$$
The generalized classical orthogonal polynomials are usually defined with respect to the dominance partial ordering. Since the dominance partial ordering is not a total order, a priori, the polynomials defined in this way could seem different from the polynomials defined in (4.2). However, that they are still equal was first proved by Heckman [5, Theorem 8.3] using very deep methods. Much easier proofs were given by Macdonald [15, (11.11)] and Heckman [6, Corollary 3.12].

The case d = 2
For $d = 2$, using Propositions 5.1, 5.2, and 5.4, we can easily deduce the explicit expressions of the differential operators $H_H$, $H_L$, and $H_J$ under the change of variables $\mathtt{x} \mapsto \mathtt{u}$.
In the Jacobi case, the operator $H_J$ for $d = 2$ can be written explicitly in the variables $(u_1, u_2)$, and we thereby recover the differential operator given by Koornwinder in [8].
Denoting the corresponding orthogonal polynomials, for $d = 2$, by $P^{(\alpha,\beta,\gamma)}_{n-k,k}(\mathtt{u}) = u_1^{n-k} u_2^{k} + \cdots$, we obtain the associated eigenvalue relations. In the Hermite case, the explicit expression of the differential operator $H_H = D_0 - 2E_1 + (2\gamma+1)F_0$ for $d = 2$ can likewise be written in the variables $(u_1, u_2)$; denoting the orthogonal polynomials by $H^{(\gamma)}_{n-k,k}(\mathtt{u}) = u_1^{n-k} u_2^{k} + \cdots$, we obtain the corresponding eigenvalue relations. In the Laguerre case, the explicit expression of the differential operator for $d = 2$ follows analogously, and, for $d = 3$, the explicit expression of the differential operator is given by