1 Introduction

In 1974 (see [8, 9]), Koornwinder considered the family of orthogonal polynomials \(p_{n,k}^{\alpha ,\beta , \gamma }( u, v)\), with \(n \geqslant k \geqslant 0\), obtained by orthogonalization of the sequence \(1, u, v, u^{2}, uv, v^{2}, u^{3}, u^{2}v, \ldots\) with respect to the weight function \((1-u+v)^{\alpha }(1+u+v)^{\beta }(u^{2}-4v)^{\gamma }\) for \(\alpha ,\beta , \gamma> -1, \alpha + \gamma +3/2> 0, \beta +\gamma +3/2 > 0\), on the region bounded by the lines \(1 -u+ v = 0\) and \(1+u+ v = 0\) and by the parabola \(u^{2}-4v= 0\) (see Fig. 1). In the special case \(\gamma = -1/2\), orthogonal polynomials \(p_{n,k}^{\alpha ,\beta ,-1/2}( u, v)\) can be explicitly obtained by the identity

$$\begin{aligned} p_{n,k}^{\alpha ,\beta , -1/2}( u, v)= P_{n}^{(\alpha ,\beta )}(x)P_{k}^{(\alpha ,\beta )}(y)+P_{k}^{(\alpha ,\beta )}(x)P_{n}^{(\alpha ,\beta )}(y) \end{aligned}$$

and the change of variables \(u = x+ y, v= xy\), where \(P_{n}^{(\alpha ,\beta )}(x)\) are the Jacobi polynomials in one variable. The author obtained two explicit linear partial differential operators \(D_{1}^{\alpha ,\beta , \gamma }\) and \(D_{2}^{\alpha ,\beta , \gamma }\), of orders two and four, respectively, such that the polynomials \(p_{n,k}^{\alpha ,\beta , \gamma }(u,v)\) are their common eigenfunctions. In fact, \(D_{1}^{\alpha ,\beta , \gamma }\) and \(D_{2}^{\alpha ,\beta , \gamma }\) were the generators of the algebra of differential operators having the polynomials \(p_{n,k}^{\alpha ,\beta , \gamma }(u,v)\) as eigenfunctions. The polynomials \(p_{n,k}^{\alpha ,\beta , \gamma }( u, v)\) are not classical in the sense of Krall and Sheffer [10], since the corresponding eigenvalues of \(D_{1}^{\alpha ,\beta , \gamma }\) depend on both n and k.

In several variables, we find different extensions of Koornwinder's polynomials connected with symmetric multivariate weight functions constructed from classical univariate weights. In fact, the so-called generalized classical orthogonal polynomials are multivariable polynomials which are orthogonal with respect to the weight functions

$$\begin{aligned} B_{\gamma }(\mathtt {x}) = \prod \limits _{i=1}^{d} \omega (x_{i}) \prod \limits _{i<j} |x_{i}-x_{j}|^{2\gamma +1}, \end{aligned}$$

with \(\omega (t)\) being one of the classical weight functions (Hermite, Laguerre, or Jacobi) on the real line.

The multivariable Hermite, Laguerre, and Jacobi families associated with the weight functions \(B_{\gamma }(\mathtt {x})\) were introduced by Lassalle [11,12,13] and Macdonald [16] as a generalization of the previously known special case in which the parameter \(\gamma\) is fixed at the value 0 [7]. Later, these multivariable generalizations of the classical Hermite, Laguerre, and Jacobi polynomials appeared as the polynomial part of the eigenfunctions of certain Schrödinger operators for Calogero-Sutherland-type quantum systems [1]. In fact, if we denote by

$$\begin{aligned} L(p(t)) = \phi (t) p^{\prime \prime }(t) + \psi (t) p^{\prime }(t) \end{aligned}$$

the second-order differential operator having the classical orthogonal polynomials as eigenfunctions, then the multivariable Hermite, Laguerre, and Jacobi polynomials are eigenfunctions of the differential operators

$$\begin{aligned} \mathcal {H}_{\gamma } = \sum \limits _{i=1}^{d} \left( \phi (x_{i}){\partial _{i}^{2}} + \psi (x_{i}) \partial _{i} + (2\gamma +1) \sum \limits _{k\ne i} \frac{\phi (x_{i})}{x_{i}-x_{k}}\partial _{i}\right) . \end{aligned}$$

Lassalle expressed the generalized classical orthogonal polynomials in terms of the basis of symmetric monomials

$$\begin{aligned} m_{\lambda } (x) = \sum \limits _{\sigma \in \mathcal {S}_{d} (\lambda )} x_{\sigma (1)}^{\lambda _{1}} \cdots x_{\sigma (d)}^{\lambda _{d}}, \end{aligned}$$
(1.1)

with \(\lambda \in \mathbb {Z}^{d}\) satisfying \(\lambda _{1} \geqslant \lambda _{2} \geqslant \ldots \geqslant \lambda _{d} \geqslant 0\). Here the summation in (1.1) is over the orbit of \(\lambda\) with respect to the action of the symmetric group \(\mathcal {S}_{d}\) which permutes the vector components \(x_{1}, x_{2}, \ldots , x_{d}\) (see [11,12,13]).
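
For concreteness, the orbit sum in (1.1) is easy to reproduce symbolically. The following snippet is a minimal sketch using sympy (ours, not part of the original text; the helper name m_lambda is an assumption) that builds \(m_{\lambda }\) by summing \(\mathtt {x}^{\lambda }\) over the distinct permutations of the exponent vector:

```python
from itertools import permutations
import sympy as sp

def m_lambda(lam, x):
    # sum x^{sigma(lam)} over the orbit of lam; set() keeps each permutation once
    return sp.Add(*[sp.prod(xi**e for xi, e in zip(x, p))
                    for p in set(permutations(lam))])

x = sp.symbols('x1 x2 x3')
print(sp.expand(m_lambda((2, 1, 0), x)))
# x1**2*x2 + x1**2*x3 + x1*x2**2 + x1*x3**2 + x2**2*x3 + x2*x3**2 (six terms)
```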

Rather than studying the eigenfunctions of \(\mathcal {H}_{\gamma }\) in terms of the monomial symmetric polynomials, previous studies (see [16]) have shown that it is convenient to change basis from the monomial symmetric polynomials to the Jack polynomials, that is, the unique (up to normalization) symmetric eigenfunctions of the operator

$$\begin{aligned} \mathcal {J}_{\alpha } = \sum \limits _{i=1}^{d} \left( {x_{i}^{2}} {\partial _{i}^{2}} + \frac{2}{\alpha } \sum \limits _{k\ne i} \frac{{x_{i}^{2}}}{x_{i}-x_{k}}\partial _{i}\right) . \end{aligned}$$

In this work, we will consider \(\omega (t)\) to be a univariate weight function on \(t \in (a,b)\). For \(\gamma >-1\), we define a symmetric weight function in d variables on the hypercube \((a,b)^{d}\) as

$$\begin{aligned} B_{\gamma }(\mathtt {x}) = \prod \limits _{i=1}^{d} \omega (x_{i}) \prod \limits _{i<j} |x_{i}-x_{j}|^{2\gamma +1}, \quad \mathtt {x} \in (a,b)^{d}, \end{aligned}$$

where \(\mathtt {x} = (x_{1},x_{2}, \ldots , x_{d})\), with \(x_{i} \in (a,b), i= 1, 2, \ldots , d.\) Next, we apply the change of variables

$$\begin{aligned} \mathtt {x}=(x_{1},x_{2}, \ldots , x_{d}) \mapsto \mathtt {u} = (u_{1},u_{2}, \ldots , u_{d}) \end{aligned}$$

where \(u_{r}\) is the r-th elementary symmetric function, defined by

$$\begin{aligned} u_{r} = \sum \limits _{1\leqslant k_{1}< k_{2}< \cdots < k_{r} \leqslant d} x_{k_{1}} x_{k_{2}} \cdots x_{k_{r}}, \quad 1 \leqslant r \leqslant d. \end{aligned}$$

In [2], the change of variables \(\mathtt {x}=(x_{1},x_{2}, \ldots , x_{d}) \mapsto \mathtt {u} = (u_{1},u_{2}, \ldots , u_{d})\) was considered in order to construct multivariate Gaussian cubature formulae in the case \(\gamma = \pm \dfrac{1}{2}\). This construction is based on the common zeroes of multivariate quasi-orthogonal polynomials, which turn out to be expressible in terms of Jacobi polynomials (see also [3]).

Our main goal is the study of multivariate orthogonal polynomials in the variable \(\mathtt {u}\) associated with the weight function \(W_{\gamma }(\mathtt {u})\) obtained from the change of variables \(\mathtt {x} \mapsto \mathtt {u}\). Obviously, generalized classical orthogonal polynomials are included in our study.

To this end, in Section 2, some basic definitions will be introduced and some properties of the derivatives of elementary symmetric functions will be obtained.

In Section 3, we analyze the structure of the domain of the weight function \(W_{\gamma }(\mathtt {u})\), that is, the image of the map \(\mathtt {x} \mapsto \mathtt {u}\). Orthogonal polynomials with respect to \(W_{\gamma }(\mathtt {u})\) are defined in Section 4.

Finally, in Section 5, generalized classical orthogonal polynomials are considered. Our main result states that, under the change of variables \(\mathtt {x} \mapsto \mathtt {u}\), the differential operators \(\mathcal {H}_{\gamma }^{H}, \mathcal {H}_{\gamma }^{L}\) and \(\mathcal {H}_{\gamma }^{J}\) can be represented as linear partial differential operators in the form

$$\begin{aligned} \sum \limits _{r=1}^{d} \sum \limits _{s=1}^{d} a_{rs}(\mathtt {u}) \frac{\partial ^{2}}{\partial u_{r} \partial u_{s}} + \sum \limits _{r=1}^{d} b_{r}(\mathtt {u}) \frac{\partial }{\partial u_{r}}, \end{aligned}$$

where \(a_{rs}(\mathtt {u})\) for \(r,s = 1, \ldots , d\) are polynomials of degree 2 in \(\mathtt {u}\) and \(b_{r}(\mathtt {u})\) for \(r = 1, \ldots , d\) are polynomials of degree 1 in \(\mathtt {u}\). Those operators have the multivariate orthogonal polynomials with respect to \(W_{\gamma }(\mathtt {u})\) as eigenfunctions. In particular, we explicitly give the representation of these operators in the cases \(d= 2\) and \(d=3\).

2 Definitions and first properties

Let \(d\geqslant 1\) denote the number of variables. If \(\alpha = (\alpha _{1}, \alpha _{2}, \ldots , \alpha _{d}) \in \mathbb {N}^{d}_{0}\), \(\mathbb {N}_{0} := \mathbb {N} \cup \{0\}\), is a d-tuple of non-negative integers \(\alpha _{i}\), we call \(\alpha\) a multi-index of degree \(|\alpha | = \alpha _{1} + \alpha _{2} + \cdots + \alpha _{d}\). We order the multi-indices by means of the graded reverse lexicographical order, that is, \(\alpha \prec \beta\) if and only if \(|\alpha | < |\beta |\) or, in the case \(|\alpha | = |\beta |\), the first nonzero entry of \(\alpha - \beta\) is positive.

A multi-index \(\lambda = (\lambda _{1}, \lambda _{2}, \ldots , \lambda _{d}) \in \mathbb {N}^{d}_{0}\) will be called a partition if \(\lambda _{1} \geqslant \lambda _{2} \geqslant \ldots \geqslant \lambda _{d} \geqslant 0\).

Observe that for every multi-index \(\mu = (\mu _{1}, \mu _{2}, \ldots , \mu _{d})\) there exists a unique partition \(\lambda = (\lambda _{1}, \lambda _{2}, \ldots , \lambda _{d})\) satisfying

$$\begin{aligned} \mu _{1} = \lambda _{1} - \lambda _{2}, \mu _{2}= \lambda _{2} - \lambda _{3}, \ldots , \mu _{d} =\lambda _{d}. \end{aligned}$$

If \(\alpha\) is a multi-index and \(\mathtt {x} = (x_{1}, x_{2}, \dots , x_{d}) \in \mathbb {R}^{d}\), we denote by \(\mathtt {x}^{\alpha }\) the monomial \(x_{1}^{\alpha _{1}} x_{2}^{\alpha _{2}} \ldots x_{d}^{\alpha _{d}}\) which has total degree \(|\alpha |\). A polynomial P in d variables is a finite linear combination of monomials \(P(\mathtt {x}) = \sum _{\alpha } c_{\alpha } \mathtt {x}^{\alpha }\). The total degree of P is defined as the highest degree of its monomials.

Following [14], the r-th elementary symmetric function \(u_{r}\) is the sum of all products of r different variables \(x_{i}\), i.e.,

$$\begin{aligned} u_{r} = \sum \limits _{1\leqslant k_{1}< k_{2}< \cdots < k_{r} \leqslant d} x_{k_{1}} x_{k_{2}} \cdots x_{k_{r}}, \quad 1 \leqslant r \leqslant d, \end{aligned}$$
(2.1)

and \(u_{0} = 1\). The elementary symmetric functions \(u_{r}\), \(r=1,2,\ldots ,d\), are harmonic homogeneous polynomials of degree r and can be obtained from the generating polynomial of degree d in the variable t, P(t), defined by

$$\begin{aligned} P(t) := \prod \limits _{i=1}^{d} (1 + x_{i} t) = \sum \limits _{r=0}^{d} u_{r} t^{r}. \end{aligned}$$
(2.2)
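
As a quick illustration (a sympy sketch of ours, not code from the paper), the identity (2.2) can be checked by expanding the generating polynomial and reading off the coefficients of the powers of t:

```python
import sympy as sp

d = 3
t = sp.symbols('t')
x = sp.symbols(f'x1:{d + 1}')                 # x1, x2, x3
P = sp.expand(sp.prod(1 + xi*t for xi in x))  # generating polynomial of (2.2)
for r in range(d + 1):
    print(r, P.coeff(t, r))                   # the coefficient of t**r is u_r
# 0 1
# 1 x1 + x2 + x3
# 2 x1*x2 + x1*x3 + x2*x3
# 3 x1*x2*x3
```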

For a given multivariate function f, we will denote by \(\partial _{k} f\) the partial derivative of f with respect to the variable \(x_{k}\). In this work, we will deal frequently with partial derivatives of the elementary symmetric functions. The following lemma provides some recursive and closed expressions for \(\partial _{k} u_{r}\).

Lemma 2.1

For \(r = 1, 2, \ldots , d\), partial derivatives of the elementary symmetric functions satisfy

$$\begin{aligned} \partial _{k} u_{r}= & {} u_{r-1} - x_{k} \partial _{k} u_{r-1}, \quad k = 1, 2, \ldots , d,\end{aligned}$$
(2.3)
$$\begin{aligned} \partial _{k} u_{r}= & {} \sum \limits _{i=0}^{r-1} (-1)^{i} {x_{k}^{i}} u_{r-1-i}, \quad k = 1, 2, \ldots , d, \end{aligned}$$
(2.4)
$$\begin{aligned} \partial _{i} \partial _{k} u_{r}= & {} - \frac{\partial _{i} u_{r} - \partial _{k} u_{r}}{x_{i} - x_{k}}, \quad k \ne i, \quad k, i = 1, 2, \ldots , d. \end{aligned}$$
(2.5)

Proof

Taking the partial derivative with respect to \(x_{k}\) in (2.2), we get

$$\begin{aligned} \partial _{k} P(t) = t \prod \limits _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{d} (1 + x_{j} t) = \sum \limits _{r=0}^{d} \partial _{k} u_{r} t^{r}. \end{aligned}$$

Next, multiply the above equality by \((1 + x_{k} t)\) to obtain

$$\begin{aligned} (1 + x_{k} t)\partial _{k} P(t) = t \prod \limits _{j=1}^{d} (1 + x_{j} t) = t P(t) = (1 + x_{k} t)\sum \limits _{r=0}^{d} \partial _{k} u_{r} t^{r}, \end{aligned}$$

and (2.3) follows by equating the coefficients of \(t^{r}\) on both sides of the last equality. Next, (2.4) is obtained by iterating (2.3).

Finally, taking partial derivatives in (2.3), for \(k \ne i\), we get

$$\begin{aligned} \partial _{k} \partial _{i} u_{r+1} = \partial _{k} u_{r} - x_{i} \partial _{k} \partial _{i} u_{r}, \end{aligned}$$

interchanging the roles of k and i, we obtain

$$\begin{aligned} \partial _{i} \partial _{k} u_{r+1} = \partial _{i} u_{r} - x_{k} \partial _{i} \partial _{k} u_{r}, \end{aligned}$$

and therefore, subtracting both identities and solving for \(\partial _{i} \partial _{k} u_{r}\), (2.5) follows.
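
The three identities of Lemma 2.1 can also be verified directly by computer algebra. Below is a small sympy sketch (ours, not from the paper) that checks (2.3)-(2.5) for d = 4:

```python
import sympy as sp
from itertools import combinations

d = 4
x = sp.symbols(f'x1:{d + 1}')

def u(r):  # r-th elementary symmetric function, with u_0 = 1
    return sp.Add(*[sp.prod(c) for c in combinations(x, r)]) if r else sp.Integer(1)

for r in range(1, d + 1):
    for k in range(d):
        lhs = sp.diff(u(r), x[k])
        assert sp.expand(lhs - (u(r - 1) - x[k]*sp.diff(u(r - 1), x[k]))) == 0  # (2.3)
        assert sp.expand(lhs - sum((-1)**i * x[k]**i * u(r - 1 - i)
                                   for i in range(r))) == 0                     # (2.4)
    for i, k in combinations(range(d), 2):
        mixed = sp.diff(u(r), x[i], x[k])
        quot = -(sp.diff(u(r), x[i]) - sp.diff(u(r), x[k]))/(x[i] - x[k])
        assert sp.cancel(mixed - quot) == 0                                     # (2.5)
print("Lemma 2.1 verified for d = 4")
```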

3 The domain

Given a univariate weight function \(\omega (t)\) on \(t \in (a,b)\) (where \(a = -\infty\) and \(b = \infty\) are allowed), consider the variable \(\mathtt {x} = (x_{1},x_{2}, \ldots , x_{d})\), with \(x_{i} \in (a,b).\) For \(\gamma >-1\), we define a weight function in d variables on the hypercube \((a,b)^{d}\) as

$$\begin{aligned} B_{\gamma }(\mathtt {x}) = \prod \limits _{i=1}^{d} \omega (x_{i}) \prod _{i<j} |x_{i}-x_{j}|^{2\gamma +1}, \quad \mathtt {x} \in (a,b)^{d}. \end{aligned}$$
(3.1)

Since \(B_{\gamma }\) is obviously symmetric in the variables \(x_{1},x_{2}, \ldots , x_{d}\), it suffices to consider its restriction to the domain \(\Delta\) given by

$$\begin{aligned} \Delta = \{ \mathtt {x} : a< x_{1}< x_{2}< \cdots< x_{d} <b \}. \end{aligned}$$

Let E(t) be the monic polynomial of degree d in the variable t having \(x_{i}, i=1, 2, \ldots , d\), as its roots. From (2.2), E(t) satisfies

$$\begin{aligned} E(t) := \prod \limits _{i=1}^{d} (t - x_{i}) = \sum \limits _{r=0}^{d} (-1)^{r} u_{r} t^{d-r}. \end{aligned}$$
(3.2)

Let us consider the mapping

$$\begin{aligned} \mathtt {x}=(x_{1},x_{2}, \ldots , x_{d}) \mapsto \mathtt {u} = (u_{1},u_{2}, \ldots , u_{d}) \end{aligned}$$

and the corresponding Jacobian matrix

$$\begin{aligned} T = \left( \partial _{k} u_{r}\right) _{1\leqslant k, r \leqslant d}. \end{aligned}$$

Using (2.4) and subtracting suitable combinations of columns in \(\left| T \right|\), we get

$$\begin{aligned} V:=\left| T \right|= & {} \left| \sum \limits _{i=0}^{r-1} (-1)^{i} {x_{k}^{i}} u_{r-1-i} \right| _{1\leqslant k,r \leqslant d} \nonumber \\= & {} \left| (-1)^{i-1} x_{k}^{i-1} \right| _{1\leqslant i, k \leqslant d} \nonumber \\= & {} \prod \limits _{1 \leqslant i < k \leqslant d} (x_{i} - x_{k}), \end{aligned}$$
(3.3)

the Vandermonde determinant. Thus, the determinant of the matrix \(TT^{t}\) can be given as

$$\begin{aligned} D(\mathtt {u}):=V^{2} = \det (TT^{t}) = \prod _{1 \leqslant i < k \leqslant d} (x_{i} - x_{k})^{2}. \end{aligned}$$

It turns out that \(D(\mathtt {u})\) coincides with the discriminant (see [17, p. 23]) of the polynomial E(t). In this way, \(D(\mathtt {u})\) can be expressed in terms of the elementary symmetric functions since the discriminant can be obtained from the resultant (see [17, section 1.3.1]) of E and its derivative \(E^{\prime }\) in the following way:

$$\begin{aligned} D(\mathtt {u})=(-1)^{{\frac{d(d-1)}{2}}} R(E,E^{\prime }), \end{aligned}$$

with

$$\begin{aligned} R(E,E^{\prime })&= \left| \begin{array}{cccccccc} a_{0} &{} a_{1} &{} \ldots &{} a_{d} &{} &{} &{} \\ &{} a_{0} &{} a_{1} &{} \ldots &{} a_{d} &{} &{} &{} \\ &{} &{} \ddots &{} \ddots &{} &{} \ddots &{} \\ &{} &{} &{} a_{0} &{} a_{1} &{} \ldots &{} a_{d} &{} \\ b_{0} &{} b_{1} &{} \ldots &{} b_{d-1} &{} &{} &{} \\ &{} b_{0} &{} b_{1} &{} \ldots &{} b_{d-1} &{} &{} \\ &{} &{} \ddots &{} \ddots &{} &{} \ddots &{} \\ &{} &{} &{}b_{0} &{} b_{1} &{} \ldots &{} b_{d-1} \end{array}\right| . \end{aligned}$$

where \(a_{i}=(-1)^{i}u_{i}\) for \(i=0,\ldots ,d\), and \(b_{i}=(-1)^{i}(d-i)u_{i}\) for \(i=0,\ldots ,d-1\).
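
For instance, for d = 3 the discriminant can be obtained directly from the resultant, as the following sympy sketch shows (an illustration of ours under the conventions above, not code from the paper):

```python
import sympy as sp

t, u1, u2, u3 = sp.symbols('t u1 u2 u3')
E = t**3 - u1*t**2 + u2*t - u3              # the polynomial (3.2) for d = 3
D = sp.expand((-1)**(3*2//2) * sp.resultant(E, sp.diff(E, t), t))
print(D)
# u1**2*u2**2 - 4*u1**3*u3 - 4*u2**3 + 18*u1*u2*u3 - 27*u3**2 (up to term order)
```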

As it is well known, the existence of d different roots of the polynomial E(t) defined in (3.2) (\(x_{i}\) for \(i = 1, \ldots , d\)) is equivalent to the positivity of \(D(\mathtt {u})\), the discriminant of E(t). Moreover, the fact that all these different roots are contained in the interval (a, b) can be characterized in terms of the corresponding Sturm sequence (see [17, p. 30]). Consider the polynomials \(p_{0}(t)=E(t)\) and \(p_{1}(t) = E^{\prime }(t)\), and let us construct a sequence \(\{p_{k}(t)\}_{k=0}^{d}\) with the help of Euclid's algorithm for the greatest common divisor of E and \(E^{\prime }\):

$$\begin{aligned} p_{0}(t)= & {} E(t),\\ p_{1}(t)= & {} E^{\prime }(t),\\&\cdots&\\ p_{k-1}(t)= & {} q_{k}(t) p_{k}(t) - {m_{k}} p_{k+1}(t),\\&\cdots&\\ p_{d-1}(t)= & {} q_{d}(t) p_{d}(t), \end{aligned}$$

where \(m_{k}\) is a positive constant for \(k = 1, \ldots , d-1\).

Since the roots of E(t) are simple, \(p_{d}(t)\) is a nonzero constant. Sturm’s theorem states that if v(t) is the number of sign changes in the sequence

$$\begin{aligned} \{p_{0}(t), p_{1}(t), \ldots , p_{d}(t)\}, \end{aligned}$$

then the number of roots of \(p_{0}(t)\) (without taking multiplicities into account) contained in (a, b) is equal to \({v(a)-v(b)}\). If all the roots of E(t) satisfy \(a< x_{1}< x_{2}< \cdots< x_{d} <b\), then, according to Sturm's theorem, the sequence \(\{p_{0}(b), p_{1}(b), \ldots , p_{d}(b)\}\) has no sign changes and \(\{p_{0}(a), p_{1}(a), \ldots , p_{d}(a)\}\) has exactly d sign changes.
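
sympy ships a Sturm-sequence routine, so the root count \(v(a)-v(b)\) is easy to experiment with; the sketch below (ours, with an arbitrarily chosen cubic) counts the roots in \((-1,1)\):

```python
import sympy as sp

t = sp.symbols('t')
E = sp.expand((t + sp.Rational(1, 2))*(t - sp.Rational(1, 4))*(t - sp.Rational(3, 4)))
seq = sp.sturm(E)                     # p_0, p_1, ..., p_d

def v(c):  # number of sign changes of the sequence at t = c
    signs = [s for s in (sp.sign(p.subs(t, c)) for p in seq) if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a*b < 0)

print(v(-1) - v(1))   # 3: all three roots lie in (-1, 1)
```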

In [4], explicit expressions for the polynomials in a Sturm sequence were provided. These explicit representations were given in terms of the d different roots of the first polynomial in the sequence \(p_{0}(t)\) (\(x_{i}\) for \(i = 1, \ldots , d\) in our case). In particular, the author shows that the constant value of \(p_{d}(t)\) coincides with the discriminant of \(p_{0}(t)\) up to a positive multiplicative factor. Therefore, the condition \(D(\mathtt {u})>0\) is equivalent to \(p_{d}(t)>0\).

Consequently, the following result holds.

Proposition 3.1

The region

$$\begin{aligned} \Omega = \{ \mathtt {u} : D(\mathtt {u})>0, p_{k} (b)>0, (-1)^{d-k} p_{k}(a) >0, k = 0 , 1, \ldots , d-1\}, \end{aligned}$$

is the image of \(\Delta\) under the mapping \(\mathtt {x} = (x_{1}, x_{2}, \ldots , x_{d}) \mapsto \mathtt {u} = (u_{1}, u_{2}, \ldots , u_{d})\) defined by (2.1).

As a consequence, the orthogonality measure and its support in terms of the coordinates \(u_{1},\ \ldots ,\ u_{d}\) can be obtained explicitly using the determinant \(R(E, E^{\prime })\) combined with a simple algorithm.

3.1 The case \(d=2\)

Let \(\omega\) be a weight function defined on (a, b). For \(\gamma > -1\), let us define a weight function of two variables,

$$\begin{aligned} B_{\gamma } (x_{1}, x_{2}) := \omega (x_{1})\omega (x_{2})|x_{1} - x_{2}|^{2 \gamma +1}, \end{aligned}$$

defined on the domain \(\Delta\) given by

$$\begin{aligned} \Delta := \{(x_{1}, x_{2}) : {a}< x_{1}< x_{2} < {b}\}. \end{aligned}$$

Let us consider the mapping \(\mathtt {x} \mapsto \mathtt {u}\) defined by

$$\begin{aligned} u_{1} = x_{1} + x_{2},&\quad u_{2} = x_{1} x_{2}. \end{aligned}$$

Then, \(E(t) = t^{2} - u_{1} t + u_{2}\) and the Jacobian of the change of variables is \(|x_{1} - x_{2}|\).

Expressed in terms of the variable \(\mathtt {u}\), the discriminant of the polynomial E(t) is

$$\begin{aligned} D(\mathtt {u}) = - \left| \begin{array}{ccc} 1 &{} -u_{1} &{} u_{2} \\ 2 &{} -u_{1} &{} 0 \\ 0 &{} 2 &{} -u_{1} \end{array} \right| = {u_{1}^{2}}-4 u_{2}. \end{aligned}$$

The Sturm sequence reads

$$\begin{aligned} p_{0}(t)= & {} t^{2} - u_{1} t + u_{2},\\ p_{1}(t)= & {} 2 t - u_{1},\\ p_{2}(t)= & {} \frac{1}{4}({u_{1}^{2}}-4 u_{2}). \end{aligned}$$
Fig. 1: Domain \(\Omega\) in the Jacobi case for \(d=2\)

In the Jacobi case, we have \((a,b) = (-1,1)\) and \(\omega (t) = (1-t)^{\alpha }(1+t)^{\beta }\), with \(\alpha>-1, \beta >-1\). In fact, this is the case originally considered by Koornwinder (see [8]). Then, using Proposition 3.1, the mapping \(\mathtt {x} \mapsto \mathtt {u}\) is a bijection between \(\Delta\) and the domain \(\Omega\) given by

$$\begin{aligned} \Omega := \{(u_{1}, u_{2}) : 1 + u_{1} + u_{2}> 0, 1 - u_{1} + u_{2}> 0, 2> u_{1}> -2, {u_{1}^{2}} > 4u_{2}\} \end{aligned}$$

which is depicted in Fig. 1.

Fig. 2: Domain \(\Omega\) in the Laguerre case for \(d=2\)

In the Laguerre case, we have \((a,b) = (0,+\infty )\) and \(\omega (t) = t^{\alpha }e^{-t}\), with \(\alpha >-1\). Therefore, using again Proposition 3.1, the domain \(\Omega\) is given by

$$\begin{aligned} \Omega := \{(u_{1}, u_{2}) : u_{1}> 0, u_{2}> 0, {u_{1}^{2}} > 4u_{2}\}, \end{aligned}$$

the region described in Fig. 2.

Fig. 3: Domain \(\Omega\) in the Hermite case for \(d=2\)

In the Hermite case, we have \((a,b) = (-\infty ,+\infty )\) and \(\omega (t) = e^{-t^{2}}\). The domain \(\Omega\) is

$$\begin{aligned} \Omega := \{(u_{1}, u_{2}) : {u_{1}^{2}} > 4u_{2}\} \end{aligned}$$

as we show in Fig. 3.

3.2 The case \(d=3\)

For \(d=3\), we set \(\mathtt {x} = (x_{1},x_{2},x_{3})\) and \(\mathtt {u} = (u_{1},u_{2},u_{3})\), with

$$\begin{aligned} u_{1}= & {} x_{1} + x_{2} + x_{3}, \\ u_{2}= & {} x_{1} x_{2} + x_{1} x_{3} + x_{2} x_{3}, \\ u_{3}= & {} x_{1} x_{2} x_{3}. \end{aligned}$$
Fig. 4: The domain \(\Omega\) in the case \(d=3\)

Then, \(E(t) = t^{3} - u_{1} t^{2} + u_{2} t -u_{3}\) and the discriminant \(D(\mathtt {u})\) can be expressed in terms of the elementary symmetric functions

$$\begin{aligned} D(\mathtt {u})= & {} - \left| \begin{array}{ccccc} 1 &{} -u_{1} &{} u_{2} &{} -u_{3} &{} 0\\ 0 &{} 1 &{} -u_{1} &{} u_{2} &{} -u_{3}\\ 3 &{} -2u_{1} &{} u_{2} &{} 0 &{} 0 \\ 0 &{} 3 &{} -2u_{1} &{} u_{2} &{} 0 \\ 0 &{} 0 &{} 3 &{} -2u_{1} &{} u_{2} \end{array}\right| \\= & {} {u_{1}^{2}} {u_{2}^{2}} -4 {u_{1}^{3}} u_{3} - 4 {u_{2}^{3}} -27 {u_{3}^{2}} + 18u_{1}u_{2}u_{3}. \end{aligned}$$

The Sturm sequence reads

$$\begin{aligned} p_{0}(t)= & {} t^{3} - u_{1} t^{2} + u_{2} t - u_{3},\\ p_{1}(t)= & {} 3 t^{2} - 2 u_{1} t + u_{2},\\ p_{2}(t)= & {} \frac{1}{9}\left( (2 {u_{1}^{2}}-6 u_{2}) t - u_{1} u_{2} + 9 u_{3}\right) ,\\ p_{3}(t)= & {} \frac{9}{4\left( {u_{1}^{2}}-3u_{2}\right) ^{2}}\left( {u_{1}^{2}} {u_{2}^{2}} -4 {u_{1}^{3}} u_{3} - 4 {u_{2}^{3}} -27 {u_{3}^{2}} + 18u_{1}u_{2}u_{3}\right) . \end{aligned}$$

Finally, the region \(\Omega\) for \(d=3\) can be described by the following inequalities:

$$\begin{aligned} D(\mathtt {u})= & {} {u_{1}^{2}} {u_{2}^{2}} -4 {u_{1}^{3}} u_{3} - 4 {u_{2}^{3}} -27 {u_{3}^{2}} + 18u_{1}u_{2}u_{3}> 0, \\ p_{0}(1)= & {} 1 - u_{1} + u_{2} - u_{3}> 0, \\ -p_{0}(-1)= & {} 1 + u_{1} + u_{2} + u_{3}> 0, \\ p_{1}(1)= & {} 3 - 2 u_{1} + u_{2}> 0, \\ p_{1}(-1)= & {} 3 + 2 u_{1} + u_{2}> 0, \\ p_{2}(1)= & {} 2 {u_{1}^{2}}-6 u_{2} - u_{1} u_{2} + 9 u_{3}> 0, \\ -p_{2}(-1)= & {} 2 {u_{1}^{2}}-6 u_{2} + u_{1} u_{2} - 9 u_{3}> 0. \end{aligned}$$
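
As a numerical sanity check (a sketch of ours, not from the paper), one can map a point of \(\Delta\) to \(\mathtt {u}\) and verify the seven inequalities above:

```python
import sympy as sp

x1, x2, x3 = -sp.Rational(1, 2), sp.Integer(0), sp.Rational(1, 2)  # a point of Delta
u1, u2, u3 = x1 + x2 + x3, x1*x2 + x1*x3 + x2*x3, x1*x2*x3
ineqs = [
    u1**2*u2**2 - 4*u1**3*u3 - 4*u2**3 - 27*u3**2 + 18*u1*u2*u3,   # D(u)
    1 - u1 + u2 - u3,                  # p0(1)
    1 + u1 + u2 + u3,                  # -p0(-1)
    3 - 2*u1 + u2,                     # p1(1)
    3 + 2*u1 + u2,                     # p1(-1)
    2*u1**2 - 6*u2 - u1*u2 + 9*u3,     # p2(1), up to a positive factor
    2*u1**2 - 6*u2 + u1*u2 - 9*u3,     # -p2(-1), up to a positive factor
]
print(all(q > 0 for q in ineqs))       # True
```
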
Fig. 5: The projection of the domain \(\Omega\) on the \(u_{1} u_{3}\) plane

The region \(\Omega\) is depicted in Fig. 4. This picture has been obtained from the parametric representation of the images under the map defined by (2.1) of the four triangular faces of the domain \(\Delta\) given by

$$\begin{aligned} \Delta = \{\mathtt {x} : -1< x_{1}< x_{2}< x_{3} <1\}. \end{aligned}$$

\(\Omega\) is a solid bounded by two flat faces and two curved faces. First, notice that \(\Omega\) is invariant under the change of variables \((u_{1},u_{2},u_{3}) \rightarrow (-u_{1},u_{2},-u_{3})\). In the image, the brown face is part of the plane \(p_{0}(1) = 0\). There is another symmetrical flat face contained in the plane \(-p_{0}(-1) = 0\). The two flat faces intersect in the line segment from \(A = (1,-1,-1)\) to \(B = (-1,-1,1)\). The other line segment bounding the brown region (which is the intersection of the planes \(p_{0}(1) = 0\) and \(p_{1}(1) = 0\)) is the line segment from \(A = (1,-1,-1)\) to \(C = (3, 3, 1)\). The third boundary piece of the brown region is the arc from B to C of a parabola touching the boundary line segments at the endpoints B and C. The orange curved faces are the parts of the quartic surface \(D(\mathtt {u}) = 0\) bounded by the line segments AC and BD, where \(D = (-3,3,-1)\) is the image of C under the symmetry above (along these segments the surface touches the planes \(p_{0}(1) = 0\) and \(-p_{0}(-1) = 0\), respectively), and by the parabola segments CB (where the surface intersects the plane \(p_{0}(1) = 0\)) and DA (where the surface intersects the plane \(-p_{0}(-1) = 0\)).

Figure 5 shows the projection of \(\Omega\) on the \(u_{1} u_{3}\) plane. Notice the two triangles, which share one edge and each have one parabolic side, namely part of the parabolas \(u_{3} = \frac{1}{4}(u_{1}-1)^{2}\) and \(u_{3} = -\frac{1}{4}(u_{1}+1)^{2}\), respectively.

4 Orthogonal polynomials

Under the mapping defined by (2.1), the weight function \(B_{\gamma }\), given in (3.1), becomes a weight function defined on the domain \(\Omega\) by

$$\begin{aligned} W_{\gamma }(\mathtt {u}) = \prod \limits _{i=1}^{d} \omega (x_{i}) D(\mathtt {u})^{\gamma }, \quad \mathtt {u} \in \Omega . \end{aligned}$$
(4.1)

Note that \(\prod _{i=1}^{d} \omega (x_{i})\) is a symmetric function of \(\mathtt {x}\), so it can indeed be expressed in terms of \(\mathtt {u}\); moreover, the factor \(\prod _{i<j}|x_{i}-x_{j}|^{2\gamma +1} = |V|^{2\gamma +1}\) in (3.1) combines with the Jacobian of the mapping \(\mathtt {x} \mapsto \mathtt {u}\), for which \(d\mathtt {u} = |V|\, d\mathtt {x}\), to produce the factor \(D(\mathtt {u})^{\gamma }\). Now, it is possible to define the polynomials orthogonal with respect to \(W_{\gamma }(\mathtt {u})\) on \(\Omega .\)

Proposition 4.1

Define monic polynomials \(P_{\mu }^{(\gamma )}(\mathtt {u})\) under the graded reverse lexicographic order \(\prec\),

$$\begin{aligned} P_{\mu }^{(\gamma )}(\mathtt {u}) = \mathtt {u}^{\mu } + \sum \limits _{\alpha \prec \mu } c_{\alpha }\, \mathtt {u}^{\alpha } \end{aligned}$$
(4.2)

that satisfy the orthogonality condition

$$\begin{aligned} \int _{\Omega } P_{\mu }^{(\gamma )}(\mathtt {u}) \mathtt {u}^{\alpha } W_{\gamma }(\mathtt {u}) d\mathtt {u} = 0, \end{aligned}$$

for \(\alpha \prec \mu\), then these polynomials are uniquely determined and are mutually orthogonal with respect to \(W_{\gamma }(\mathtt {u})\).

Proof

Since the graded reverse lexicographic order \(\prec\) is a total order, the polynomials \(P_{\mu }^{(\gamma )}(\mathtt {u})\) can be constructed by applying the Gram-Schmidt orthogonalization process to the monomials so ordered, and the uniqueness follows from the fact that \(P_{\mu }^{(\gamma )}(\mathtt {u})\) has leading coefficient 1.
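
To make the Gram-Schmidt construction concrete, here is a small sketch (ours; it assumes the Hermite weight with d = 2 and \(\gamma = 1/2\), where all moments are exact Gaussian integrals) computing the monic polynomial \(P_{(0,1)}^{(1/2)}\). Inner products in \(\mathtt {u}\) are pulled back to the \(\mathtt {x}\)-plane, where the constant Jacobian and covering factors cancel in the Gram-Schmidt quotients:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u1, u2 = x1 + x2, x1*x2
B = sp.exp(-x1**2 - x2**2)*(x1 - x2)**2          # B_{1/2}(x), Hermite case

def ip(p, q):  # inner product, pulled back to the x-plane
    return sp.integrate(p*q*B, (x1, -sp.oo, sp.oo), (x2, -sp.oo, sp.oo))

ortho = []
for b in [sp.Integer(1), u1, u2]:                     # monomials in increasing order
    b = b - sum(ip(b, q)/ip(q, q)*q for q in ortho)   # monic Gram-Schmidt step
    ortho.append(sp.expand(b))
print(ortho[2])   # x1*x2 + 1/2, i.e., P_{(0,1)}^{(1/2)}(u) = u2 + 1/2
```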

In the cases \(\gamma = \pm 1/2\), a family of orthogonal polynomials in the variable \(\mathtt {u}\) can be given explicitly in terms of orthogonal polynomials of one variable (see [2] and [3, p. 155]).

Proposition 4.2

Let \(\{p_{k} \}_{k\geqslant 0}\) be the sequence of monic orthogonal polynomials with respect to \(\omega\) on (a, b). For \(\gamma = -1/2\), \(n \in \mathbb {N}_{0}\), and \(\mu = (\mu _{1}, \mu _{2},\ldots , \mu _{d})\) satisfying \(0 \leqslant \mu _{1} \leqslant \mu _{2} \leqslant \ldots \leqslant \mu _{d} = n\), we define

$$\begin{aligned} P_{\mu }^{(-1/2)}(\mathtt {u}) = \sum _{\sigma \in \mathcal {S}_{d}} p_{\mu _{1}}(x_{\sigma (1)}) p_{\mu _{2}}(x_{\sigma (2)})\cdots p_{\mu _{d}}(x_{\sigma (d)}) \end{aligned}$$
(4.3)

where \(\mathtt {x}\) and \(\mathtt {u}\) are related by (2.1), and the sum on the right-hand side of (4.3) runs over all distinct permutations \(\sigma\) in the symmetric group \(\mathcal {S}_{d}\). Then, \(P_{\mu }^{(-1/2)}(\mathtt {u})\) is an orthogonal polynomial of degree n in the variable \(\mathtt {u}\).

For \(\gamma = 1/2\), \(n \in \mathbb {N}_{0}\), and \(\mu = (\mu _{1}, \mu _{2},\ldots , \mu _{d})\) satisfying \(0 \leqslant \mu _{1}< \mu _{2}< \ldots < \mu _{d} = n + d -1\), we define

$$\begin{aligned} P_{\mu }^{(1/2)}(\mathtt {u}) = \frac{1}{V} \left| \begin{array}{ccccc} p_{\mu _{1}}(x_{1}) &{} p_{\mu _{1}}(x_{2}) &{} \cdots &{} p_{\mu _{1}}(x_{d}) \\ p_{\mu _{2}}(x_{1}) &{} p_{\mu _{2}}(x_{2}) &{} \cdots &{} p_{\mu _{2}}(x_{d}) \\ \vdots &{} \vdots &{} &{} \vdots \\ p_{\mu _{d}}(x_{1}) &{} p_{\mu _{d}}(x_{2}) &{} \cdots &{} p_{\mu _{d}}(x_{d}) \end{array}\right| , \end{aligned}$$

where \(\mathtt {x}\) and \(\mathtt {u}\) are related by (2.1). Then, \(P_{\mu }^{(1/2)}(\mathtt {u})\) is an orthogonal polynomial of degree n in the variable \(\mathtt {u}\).
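
The construction for \(\gamma = -1/2\) is easily reproduced symbolically. The sketch below (ours, using monic Legendre polynomials as the \(p_{k}\) and d = 2) symmetrizes the product (4.3) and rewrites it in the elementary symmetric functions via sympy's symmetrize, whose formal symbols \(s_{1} = x_{1}+x_{2}\) and \(s_{2} = x_{1}x_{2}\) are exactly \(u_{1}\) and \(u_{2}\):

```python
import sympy as sp
from sympy.polys.polyfuncs import symmetrize

x1, x2, t = sp.symbols('x1 x2 t')
p = {0: sp.Integer(1), 1: t, 2: t**2 - sp.Rational(1, 3)}  # monic Legendre p_0, p_1, p_2

mu = (1, 2)
P = (p[mu[0]].subs(t, x1)*p[mu[1]].subs(t, x2)
     + p[mu[0]].subs(t, x2)*p[mu[1]].subs(t, x1))          # formula (4.3) for d = 2
sym, rem, defs = symmetrize(sp.expand(P), x1, x2, formal=True)
print(sym, rem, defs)
# s1*s2 - s1/3, 0, [(s1, x1 + x2), (s2, x1*x2)]  ->  P = u1*u2 - u1/3
```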

5 Generalized classical orthogonal polynomials

In this section, we consider multivariable orthogonal polynomials associated with the weight functions

$$\begin{aligned} B_{\gamma }^{H}(\mathtt {x})= & {} \prod \limits _{i=1}^{d} e^{-{x_{i}^{2}}} \prod \limits _{i<j} |x_{i}-x_{j}|^{2\gamma +1}, \quad \mathtt {x} \in \mathbb {R}^{d},\\ B_{\gamma }^{L}(\mathtt {x})= & {} \prod \limits _{i=1}^{d} x_{i}^{\alpha }e^{-x_{i}} \prod _{i<j} |x_{i}-x_{j}|^{2\gamma +1}, \quad \mathtt {x} \in (0,+\infty )^{d},\\ B_{\gamma }^{J}(\mathtt {x})= & {} \prod \limits _{i=1}^{d} (1-x_{i})^{\alpha }(1+x_{i})^{\beta } \prod \limits _{i<j} |x_{i}-x_{j}|^{2\gamma +1}, \quad \mathtt {x} \in (-1,1)^{d}, \end{aligned}$$

with \(\alpha , \beta , \gamma >-1\).

Under the change of variables \(\mathtt {x} \mapsto \mathtt {u}\) defined by (2.1) the corresponding weight functions \(W_{\gamma }(\mathtt {u})\), as defined in (4.1), are given by

$$\begin{aligned} W_{\gamma }^{H}(\mathtt {u})= & {} e^{-{u_{1}^{2}}+2u_{2}} D(\mathtt {u})^{\gamma }, \\ W_{\gamma }^{L}(\mathtt {u})= & {} u_{d}^{\alpha }e^{-u_{1}} D(\mathtt {u})^{\gamma },\\ W_{\gamma }^{J}(\mathtt {u})= & {} (1-u_{1}+u_{2}-\cdots +(-1)^{d} u_{d} )^{\alpha }(1+u_{1}+u_{2}+\cdots +u_{d})^{\beta } D(\mathtt {u})^{\gamma }, \end{aligned}$$

with \(\alpha , \beta , \gamma >-1\).

The multivariable Hermite, Laguerre, and Jacobi families associated with the weight functions \(B_{\gamma }^{H}(\mathtt {x}), B_{\gamma }^{L}(\mathtt {x})\), and \(B_{\gamma }^{J}(\mathtt {x})\) (see [1, (2.1)]), respectively, are eigenfunctions of the differential operators

$$\begin{aligned} \mathcal {H}_{\gamma }^{H}= & {} \sum \limits _{i=1}^{d} \left( {\partial _{i}^{2}} - 2 x_{i} \partial _{i} + (2\gamma +1) \sum \limits _{k\ne i} \frac{1}{x_{i}-x_{k}}\partial _{i}\right) ,\\ \mathcal {H}_{\gamma }^{L}= & {} \sum \limits _{i=1}^{d} \left( x_{i}{\partial _{i}^{2}} + (\alpha +1-x_{i}) \partial _{i} + (2\gamma +1) \sum \limits _{k\ne i} \frac{x_{i}}{x_{i}-x_{k}}\partial _{i}\right) ,\\ \mathcal {H}_{\gamma }^{J}= & {} \sum \limits _{i=1}^{d} \left( (1-{x_{i}^{2}}) {\partial _{i}^{2}} + (\beta - \alpha - (\alpha +\beta +2)x_{i}) \partial _{i} + (2\gamma +1) \sum \limits _{k\ne i} \frac{1-{x_{i}^{2}}}{x_{i}-x_{k}}\partial _{i}\right) , \end{aligned}$$

with \(\alpha , \beta , \gamma >-1\).

We are going to obtain the representation of the differential operators \(\mathcal {H}_{\gamma }^{H}, \mathcal {H}_{\gamma }^{L}\), and \(\mathcal {H}_{\gamma }^{J}\) under the change of variables \(\mathtt {x} \mapsto \mathtt {u}\).

For \(h = 0, 1, 2\), let us define the operators

$$\begin{aligned} \mathcal {D}_{h}= & {} \sum _{i=1}^{d} {x_{i}^{h}} {\partial _{i}^{2}}, \\ \mathcal {E}_{h}= & {} \sum _{i=1}^{d} {x_{i}^{h}} \partial _{i},\\ \mathcal {F}_{h}= & {} \sum _{i=1}^{d} \sum _{k\ne i} \frac{{x_{i}^{h}}}{x_{i}-x_{k}}\partial _{i}, \end{aligned}$$

then

$$\begin{aligned} \mathcal {H}_{\gamma }^{H}= & {} \mathcal {D}_{0} - 2 \mathcal {E}_{1} + (2\gamma +1) \mathcal {F}_{0}, \\ \mathcal {H}_{\gamma }^{L}= & {} \mathcal {D}_{1} + (\alpha +1) \mathcal {E}_{0} -\mathcal {E}_{1} + (2\gamma +1) \mathcal {F}_{1},\\ \mathcal {H}_{\gamma }^{J}= & {} \mathcal {D}_{0} - \mathcal {D}_{2} + (\beta - \alpha ) \mathcal {E}_{0}- (\alpha +\beta +2)\mathcal {E}_{1} + (2\gamma +1) (\mathcal {F}_{0} - \mathcal {F}_{2}). \end{aligned}$$

Under the change of variables \(\mathtt {x} \mapsto \mathtt {u}\), we get

$$\begin{aligned} \partial _{i} = \sum \limits _{r=1}^{d} \partial _{i} u_{r} \frac{\partial }{\partial u_{r}}, \end{aligned}$$
(5.1)

and, since \({\partial _{i}^{2}} u_{r} = 0\), we obtain

$$\begin{aligned} {\partial _{i}^{2}} = \sum \limits _{r=1}^{d} \sum \limits _{s=1}^{d} \partial _{i} u_{r} \partial _{i} u_{s} \frac{\partial ^{2}}{\partial u_{r} \partial u_{s}}. \end{aligned}$$

Proposition 5.1

The operator \(\mathcal {E}_{h}\) satisfies

$$\begin{aligned} \mathcal {E}_{0}= & {} \sum \limits _{r=1}^{d} (d-r+1) u_{r-1} \frac{\partial }{\partial u_{r}}, \end{aligned}$$
(5.2)
$$\begin{aligned} \mathcal {E}_{1}= & {} \sum \limits _{r=1}^{d} r u_{r} \frac{\partial }{\partial u_{r}}. \end{aligned}$$
(5.3)

Proof

From (5.1), we have

$$\begin{aligned} \mathcal {E}_{h} = \sum _{i=1}^{d} {x_{i}^{h}} \partial _{i} = \sum \limits _{r=1}^{d} \left( \sum \limits _{i=1}^{d} {x_{i}^{h}} \partial _{i} u_{r}\right) \frac{\partial }{\partial u_{r}}. \end{aligned}$$

For \(h=0\), using (2.3) and Euler’s identity for homogeneous polynomials, we get

$$\begin{aligned} \sum _{i=1}^{d} \partial _{i} u_{r} = \sum \limits _{i=1}^{d} u_{r-1} - \sum \limits _{i=1}^{d} x_{i} \partial _{i} u_{r-1} = (d-r+1) u_{r-1}, \end{aligned}$$

which gives (5.2). Identity (5.3) follows in the same way, since for \(h=1\) we get

$$\begin{aligned} \sum \limits _{i=1}^{d} x_{i} \partial _{i} u_{r} = r u_{r}. \end{aligned}$$
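
The two coefficient identities used above, \(\sum _{i} \partial _{i} u_{r} = (d-r+1)u_{r-1}\) and \(\sum _{i} x_{i}\partial _{i} u_{r} = r u_{r}\), can be spot-checked symbolically; the following sketch (ours) does so for d = 4:

```python
import sympy as sp
from itertools import combinations

d = 4
x = sp.symbols(f'x1:{d + 1}')
u = lambda r: (sp.Add(*[sp.prod(c) for c in combinations(x, r)])
               if r else sp.Integer(1))

for r in range(1, d + 1):
    assert sp.expand(sum(sp.diff(u(r), xi) for xi in x) - (d - r + 1)*u(r - 1)) == 0
    assert sp.expand(sum(xi*sp.diff(u(r), xi) for xi in x) - r*u(r)) == 0
print("coefficients of (5.2)-(5.3) verified for d = 4")
```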

Proposition 5.2

The operator \(\mathcal {D}_{h}\) can be represented as

$$\begin{aligned} \mathcal {D}_{h} = \sum \limits _{i=1}^{d} {x_{i}^{h}} {\partial _{i}^{2}} = \sum \limits _{r=1}^{d} \sum \limits _{s=1}^{d} a_{rs}^{h}(\mathtt {u}) \frac{\partial ^{2}}{\partial u_{r} \partial u_{s}}, \end{aligned}$$

where the coefficients

$$\begin{aligned} a_{rs}^{h}(\mathtt {u}) = \sum \limits _{i=1}^{d} {x_{i}^{h}} \partial _{i} u_{r} \partial _{i} u_{s}, \end{aligned}$$

satisfy

$$\begin{aligned} a_{rs}^{0}(\mathtt {u})= & {} (d-s+1) u_{r-1} u_{s-1} - (d-r+2) u_{r-2} u_{s} + a_{r-1\,s+1}^{0}(\mathtt {u}),\end{aligned}$$
(5.4)
$$\begin{aligned} a_{rs}^{0}(\mathtt {u})= & {} (d-s+1) u_{r-1} u_{s-1} + \sum _{j=2}^{r} (r-s-2j+2) u_{r-j} u_{s+j-2},\end{aligned}$$
(5.5)
$$\begin{aligned} a_{rs}^{1}(\mathtt {u})= & {} (d-s+1) u_{r} u_{s-1} - a_{r+1\,s}^{0}(\mathtt {u}),\end{aligned}$$
(5.6)
$$\begin{aligned} a_{rs}^{2}(\mathtt {u})= & {} (-d+r+s) u_{r} u_{s} + a_{r+1\,s+1}^{0}(\mathtt {u}), \end{aligned}$$
(5.7)

taking into account that \(a_{rs}^{h}(\mathtt {u})=0\) for \(r\leqslant 0\), \(s \leqslant 0\), \(r > d\), or \(s > d\), and that \(u_{j} = 0\) for \(j < 0\) or \(j > d\). Obviously, we have \(a_{rs}^{h}(\mathtt {u}) = a_{sr}^{h}(\mathtt {u})\), so we may assume that \(r \leqslant s\).

Proof

For \(h=0\), using (2.3) and (5.2), we deduce

$$\begin{aligned} a_{rs}^{0}(\mathtt {u})= & {} \sum \limits _{i=1}^{d} \partial _{i} u_{r} \partial _{i} u_{s} = \sum \limits _{i=1}^{d} ( u_{r-1} - x_{i} \partial _{i} u_{r-1}) \partial _{i} u_{s} \\= & {} (d-s+1) u_{r-1} u_{s-1} - \sum \limits _{i=1}^{d} \partial _{i} u_{r-1} x_{i} \partial _{i} u_{s} \\= & {} (d-s+1) u_{r-1} u_{s-1} - \sum \limits _{i=1}^{d} \partial _{i} u_{r-1} (u_{s} - \partial _{i} u_{s+1}) \\= & {} (d-s+1) u_{r-1} u_{s-1} - (d-r+2) u_{r-2} u_{s} + \sum \limits _{i=1}^{d} \partial _{i} u_{r-1} \partial _{i} u_{s+1}, \end{aligned}$$

and the recurrence formula (5.4) follows. Expression (5.5) can be obtained by iterating (5.4).

For \(h=1\), from (2.3) and (5.2), we obtain

$$\begin{aligned} a_{rs}^{1}(\mathtt {u})= & {} \sum \limits _{i=1}^{d} x_{i} \partial _{i} u_{r} \partial _{i} u_{s} = \sum \limits _{i=1}^{d} ( u_{r} - \partial _{i} u_{r+1}) \partial _{i} u_{s} \\= & {} (d-s+1) u_{r} u_{s-1} - \sum \limits _{i=1}^{d} \partial _{i} u_{r+1} \partial _{i} u_{s}. \end{aligned}$$

Hence, (5.6) follows.

For \(h=2\), (5.7) can be obtained in the same way

$$\begin{aligned} a_{rs}^{2}(\mathtt {u})= & {} \sum \limits _{i=1}^{d} x_{i} \partial _{i} u_{r} x_{i} \partial _{i} u_{s} = \sum \limits _{i=1}^{d} ( u_{r} - \partial _{i} {u_{r+1}}) ( u_{s} - \partial _{i} {u_{s+1}}) \\= & {} d u_{r} u_{s} - (d-s) u_{r} u_{s} - (d-r) u_{r} u_{s} + a_{r+1\,s+1}^{0}(\mathtt {u}). \end{aligned}$$
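
Formulas (5.5)-(5.7) can be verified against the defining sums \(a_{rs}^{h}(\mathtt {u}) = \sum _{i} x_{i}^{h}\, \partial _{i} u_{r}\, \partial _{i} u_{s}\). Here is a sympy sketch (ours, for d = 3, with the stated conventions that \(a_{rs}^{h}\) and \(u_{j}\) vanish outside their ranges):

```python
import sympy as sp
from itertools import combinations

d = 3
x = sp.symbols(f'x1:{d + 1}')

def u(r):  # u_r, with u_0 = 1 and u_r = 0 outside 0 <= r <= d
    if r == 0:
        return sp.Integer(1)
    if r < 0 or r > d:
        return sp.Integer(0)
    return sp.Add(*[sp.prod(c) for c in combinations(x, r)])

def a(h, r, s):  # the defining sum, with the boundary convention a = 0
    if min(r, s) < 1 or max(r, s) > d:
        return sp.Integer(0)
    return sum(xi**h * sp.diff(u(r), xi) * sp.diff(u(s), xi) for xi in x)

for r in range(1, d + 1):
    for s in range(r, d + 1):   # r <= s, as in Proposition 5.2
        rhs0 = (d - s + 1)*u(r - 1)*u(s - 1) + sum(
            (r - s - 2*j + 2)*u(r - j)*u(s + j - 2) for j in range(2, r + 1))
        assert sp.expand(a(0, r, s) - rhs0) == 0                                # (5.5)
        assert sp.expand(a(1, r, s) - ((d - s + 1)*u(r)*u(s - 1)
                                       - a(0, r + 1, s))) == 0                  # (5.6)
        assert sp.expand(a(2, r, s) - ((r + s - d)*u(r)*u(s)
                                       + a(0, r + 1, s + 1))) == 0              # (5.7)
print("(5.5)-(5.7) verified for d = 3")
```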

To obtain the representation of the operator \(\mathcal {F}_{h}\), let us consider the Vandermonde determinant V defined in (3.3). One can see that

$$\begin{aligned} \frac{1}{V} \partial _{i} V = \sum _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{1}{x_{i}-x_{k}} \end{aligned}$$

and therefore

$$\begin{aligned} \frac{1}{V} \sum \limits _{i=1}^{d} \partial _{i} V = \sum \limits _{i=1}^{d} \sum _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{1}{x_{i}-x_{k}} = 0, \end{aligned}$$
(5.8)

since every element \(1/(x_{i}-x_{k})\) in the above sum appears twice with opposite sign.

On the other hand, since V is a homogeneous symmetric polynomial of total degree \(d(d-1)/2\), Euler's identity for homogeneous polynomials again gives

$$\begin{aligned} \frac{1}{V} \sum \limits _{i=1}^{d} x_{i} \partial _{i} V = \sum \limits _{i=1}^{d} \sum _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{x_{i}}{x_{i}-x_{k}} = \frac{d(d-1)}{2} = \left( {\begin{array}{c}d\\ 2\end{array}}\right) . \end{aligned}$$
(5.9)

Lemma 5.3

For \(r=1,2,\ldots ,d\), we have

$$\begin{aligned} \frac{1}{V} \sum \limits _{i=1}^{d} \partial _{i} V \partial _{i} u_{r} = - \left( {\begin{array}{c}d+2-r\\ 2\end{array}}\right) u_{r-2}. \end{aligned}$$

Proof

Using (2.5) and \({\partial _{i}^{2}} u_{r} = 0\) for \(i = 1, 2, \ldots , d\), we get

$$\begin{aligned} \frac{1}{V} \sum \limits _{i=1}^{d} \partial _{i} V \partial _{i} u_{r}= & {} \sum _{i=1}^{d} \sum _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{\partial _{i} u_{r}}{x_{i}-x_{k}} = \sum _{i=1}^{d} \sum _{\begin{array}{c} k=i+1 \end{array}}^{d} \frac{\partial _{i} u_{r} - \partial _{k} u_{r}}{x_{i}-x_{k}}\\= & {} - \sum \limits _{i=1}^{d} \sum _{\begin{array}{c} k=i+1 \end{array}}^{d} \partial _{i} \partial _{k} u_{r} = - \frac{1}{2}\sum \limits _{i=1}^{d} \sum _{\begin{array}{c} k=1 \end{array}}^{d} \partial _{i} \partial _{k} u_{r}. \end{aligned}$$

Finally, using (5.2) twice, we conclude

$$\begin{aligned} \frac{1}{V} \sum \limits _{i=1}^{d} \partial _{i} V \partial _{i} u_{r}= & {} - \frac{1}{2}\sum \limits _{i=1}^{d} \partial _{i} \left( \sum \limits _{k=1}^{d} \partial _{k} u_{r}\right) = - \frac{1}{2}\sum \limits _{i=1}^{d} \partial _{i} \left( (d-r+1) u_{r-1}\right) \\= & {} - \left( {\begin{array}{c}d+2-r\\ 2\end{array}}\right) u_{r-2}, \quad r=1,2,\ldots ,d. \end{aligned}$$

Proposition 5.4

The operator \(\mathcal {F}_{h}\) satisfies

$$\begin{aligned} \mathcal {F}_{0}= & {} - \sum \limits _{r=1}^{d} \left( {\begin{array}{c}d+2-r\\ 2\end{array}}\right) u_{r-2} \frac{\partial }{\partial u_{r}}, \\ \mathcal {F}_{1}= & {} \sum \limits _{r=1}^{d} \left( {\begin{array}{c}d+1-r\\ 2\end{array}}\right) u_{r-1} \frac{\partial }{\partial u_{r}}, \\ \mathcal {F}_{2}= & {} \sum \limits _{r=1}^{d} \left( \left( {\begin{array}{c}d\\ 2\end{array}}\right) - \left( {\begin{array}{c}d-r\\ 2\end{array}}\right) \right) u_{r} \frac{\partial }{\partial u_{r}}, \end{aligned}$$

where \(\left( {\begin{array}{c}d-r\\ 2\end{array}}\right) =0\), for \(r=d\) or \(r=d-1\).

Proof

First, for \(h=0\), we have

$$\begin{aligned} \mathcal {F}_{0}= & {} \sum \limits _{i=1}^{d} \sum _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{1}{x_{i}-x_{k}}\partial _{i} = \sum \limits _{r=1}^{d} \left( \sum \limits _{i=1}^{d} \sum _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{1}{x_{i}-x_{k}}\partial _{i} u_{r}\right) \frac{\partial }{\partial u_{r}} \\= & {} \sum \limits _{r=1}^{d} \left( \frac{1}{V} \sum \limits _{i=1}^{d} \partial _{i} V \partial _{i} u_{r}\right) \frac{\partial }{\partial u_{r}} = - \sum \limits _{r=1}^{d} \left( {\begin{array}{c}d+2-r\\ 2\end{array}}\right) u_{r-2} \frac{\partial }{\partial u_{r}}. \end{aligned}$$

For \(h=1\), using (2.3), (5.8), and Lemma 5.3, we have

$$\begin{aligned} \mathcal {F}_{1}= & {} \sum \limits _{i=1}^{d} \sum \limits _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{x_{i}}{x_{i}-x_{k}}\partial _{i} = \sum \limits _{r=1}^{d} \left( \sum _{i=1}^{d} \sum _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{x_{i}}{x_{i}-x_{k}}\partial _{i} u_{r}\right) \frac{\partial }{\partial u_{r}} \\= & {} \sum \limits _{r=1}^{d} \left( \frac{1}{V} \sum \limits _{i=1}^{d} \partial _{i} V x_{i} \partial _{i} u_{r}\right) \frac{\partial }{\partial u_{r}} \\= & {} \sum \limits _{r=1}^{d} \left( \frac{1}{V} \sum \limits _{i=1}^{d} \partial _{i} V (u_{r} - \partial _{i} u_{r+1})\right) \frac{\partial }{\partial u_{r}} \\= & {} \sum \limits _{r=1}^{d} \left( {\begin{array}{c}d+1-r\\ 2\end{array}}\right) u_{r-1} \frac{\partial }{\partial u_{r}}. \end{aligned}$$

Finally, for \(h=2\), using (2.3) and (5.9), we have

$$\begin{aligned} \mathcal {F}_{2}= & {} \sum \limits _{i=1}^{d} \sum _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{{x_{i}^{2}}}{x_{i}-x_{k}}\partial _{i} = \sum \limits _{r=1}^{d} \left( \sum \limits _{i=1}^{d} \sum _{\begin{array}{c} k=1 \\ k\ne i \end{array}}^{d} \frac{{x_{i}^{2}}}{x_{i}-x_{k}}\partial _{i} u_{r}\right) \frac{\partial }{\partial u_{r}} \\= & {} \sum \limits _{r=1}^{d} \left( \frac{1}{V} \sum \limits _{i=1}^{d} x_{i} \partial _{i} V x_{i} \partial _{i} u_{r}\right) \frac{\partial }{\partial u_{r}} \\= & {} \sum \limits _{r=1}^{d} \left( \frac{1}{V} \sum \limits _{i=1}^{d} x_{i} \partial _{i} V (u_{r} - \partial _{i} u_{r+1})\right) \frac{\partial }{\partial u_{r}} \\= & {} \sum \limits _{r=1}^{d} \left( \left( {\begin{array}{c}d\\ 2\end{array}}\right) u_{r} - \frac{1}{V} \sum \limits _{i=1}^{d} x_{i} \partial _{i} V \partial _{i} u_{r+1}\right) \frac{\partial }{\partial u_{r}} \\= & {} \sum \limits _{r=1}^{d} \left( \left( {\begin{array}{c}d\\ 2\end{array}}\right) - \left( {\begin{array}{c}d-r\\ 2\end{array}}\right) \right) u_{r} \frac{\partial }{\partial u_{r}}, \end{aligned}$$

where \(\left( {\begin{array}{c}d-r\\ 2\end{array}}\right) =0\), for \(r=d\) or \(r=d-1\). For the last equality, the two last equalities in the proof for \(h = 1\) were used.
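
The binomial coefficients in Proposition 5.4 can likewise be checked by computing \(\frac{1}{V}\sum _{i} x_{i}^{h}\, \partial _{i} V\, \partial _{i} u_{r}\) directly; a sympy sketch (ours, for d = 3):

```python
import sympy as sp
from itertools import combinations
from math import comb

d = 3
x = sp.symbols(f'x1:{d + 1}')
V = sp.prod(x[i] - x[k] for i in range(d) for k in range(i + 1, d))
u = lambda r: (sp.Add(*[sp.prod(c) for c in combinations(x, r)])
               if 0 < r <= d else sp.Integer(1 if r == 0 else 0))

def coeff(h, r):  # coefficient of d/du_r in F_h
    s = sum(xi**h * sp.diff(V, xi) * sp.diff(u(r), xi) for xi in x)
    return sp.cancel(s / V)

for r in range(1, d + 1):
    assert sp.expand(coeff(0, r) + comb(d + 2 - r, 2)*u(r - 2)) == 0           # F_0
    assert sp.expand(coeff(1, r) - comb(d + 1 - r, 2)*u(r - 1)) == 0           # F_1
    assert sp.expand(coeff(2, r) - (comb(d, 2) - comb(d - r, 2))*u(r)) == 0    # F_2
print("Proposition 5.4 verified for d = 3")
```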

In this way, we have shown that, under the change of variables \(\mathtt {x} \mapsto \mathtt {u}\) defined by (2.1), the differential operators \(\mathcal {H}_{\gamma }^{H}, \mathcal {H}_{\gamma }^{L}\) and \(\mathcal {H}_{\gamma }^{J}\) can be represented as linear partial differential operators in the form

$$\begin{aligned} \mathcal {M}_{\gamma } = \sum \limits _{r=1}^{d} \sum \limits _{s=1}^{d} a_{rs}(\mathtt {u}) \frac{\partial ^{2}}{\partial u_{r} \partial u_{s}} + \sum \limits _{r=1}^{d} b_{r}(\mathtt {u}) \frac{\partial }{\partial u_{r}}, \end{aligned}$$

where \(a_{rs}(\mathtt {u})\), for \(r,s = 1, \ldots , d\) are polynomials of degree 2 in \(\mathtt {u}\) and \(b_{r}(\mathtt {u})\), for \(r = 1, \ldots , d\) are polynomials of degree 1 in \(\mathtt {u}\).

Remark 5.5

It is well known that, in the \(\mathtt {x}\) variable, it is possible to derive formulas for the Laguerre and Hermite cases by taking limits of formulas in the Jacobi case (see [1, (2.18)-(2.19)]). Similar results hold for the \(\mathtt {u}\) variable.

Next, it will be proved that the polynomials defined in (4.2) are eigenfunctions of \(\mathcal {M}_{\gamma }\). The proof is based on two lemmas.

Lemma 5.6

Let \(\mathtt {u}^{\mu } = u_{1}^{\mu _{1}}\ldots u_{d}^{\mu _{d}}\) be a multivariate monomial. Then

$$\begin{aligned} \mathcal {M}_{\gamma } \mathtt {u}^{\mu } = c(\mu ) \mathtt {u}^{\mu } + \text {l.o.m.}, \end{aligned}$$

where \(\text {l.o.m.}\) stands for monomials of lower order in the graded reverse lexicographical order, and \(c(\mu ) \in \mathbb {R}\).

Proof

This result easily follows from Propositions 5.1, 5.2, and 5.4.

Lemma 5.7

For arbitrary polynomials \(p(\mathtt {u})\) and \(q(\mathtt {u})\), it holds that

$$\begin{aligned} \int _{\Omega } \mathcal {M}_{\gamma }p(\mathtt {u})\, q(\mathtt {u}) \, W_{\gamma }(\mathtt {u})\, \text {d}\mathtt {u} = \int _{\Omega } p(\mathtt {u})\, \mathcal {M}_{\gamma }q(\mathtt {u}) \, W_{\gamma }(\mathtt {u})\, \text {d}\mathtt {u}. \end{aligned}$$

Proof

Integration by parts provides the self-adjoint character of the differential operators \(\mathcal {H}_{\gamma }^{H}, \mathcal {H}_{\gamma }^{L}\) and \(\mathcal {H}_{\gamma }^{J}\) for symmetric polynomials in the corresponding domains (see [16]). The result follows after the change of variables \(\mathtt {x} \mapsto \mathtt {u}\).

Theorem 5.8

Let \(p_{\mu }(\mathtt {u})\) be one of the monic orthogonal polynomials defined by (4.2). Then,

$$\begin{aligned} \mathcal {M}_{\gamma } p_{\mu }(\mathtt {u}) = c(\mu )\, p_{\mu }(\mathtt {u}). \end{aligned}$$

Proof

By Lemma 5.6, the function \(\mathcal {M}_{\gamma } p_{\mu }(\mathtt {u})\) is a polynomial in \(\mathtt {u}\) whose leading term is \(c(\mu ) \mathtt {u}^{\mu }\). Let \(\mu ^{\prime } \prec \mu\); then it follows from Lemmas 5.6 and 5.7 that

$$\begin{aligned} \int _{\Omega } \mathcal {M}_{\gamma }p_{\mu }(\mathtt {u})\, \mathtt {u}^{\mu ^{\prime }} \, W_{\gamma }(\mathtt {u})\, \text {d}\mathtt {u} = \int _{\Omega } p_{\mu }(\mathtt {u})\, \mathcal {M}_{\gamma }\mathtt {u}^{\mu ^{\prime }} \, W_{\gamma }(\mathtt {u})\, \text {d}\mathtt {u} = 0. \end{aligned}$$

Hence, \(\mathcal {M}_{\gamma } p_{\mu }(\mathtt {u})\) is a polynomial with leading term \(c(\mu ) \mathtt {u}^{\mu }\) which is orthogonal to all polynomials of lower order, so \(\mathcal {M}_{\gamma } p_{\mu }(\mathtt {u}) = c(\mu ) p_{\mu }(\mathtt {u})\).

Remark 5.9

If we write \(\mu = (\lambda _{1}-\lambda _{2}, \lambda _{2}-\lambda _{3}, \ldots , \lambda _{d})\), then the expression \(\text {l.o.m.}\) in Lemma 5.6 can also be taken to mean monomials whose associated partition is lower in the dominance partial ordering of the \(\lambda\), i.e., \(\lambda ^{\prime } \leqslant \lambda\) if and only if \(\lambda ^{\prime }_{1} \leqslant \lambda _{1}, \lambda ^{\prime }_{1} + \lambda ^{\prime }_{2} \leqslant \lambda _{1} + \lambda _{2}, \ldots , \lambda ^{\prime }_{1} + \cdots + \lambda ^{\prime }_{d} \leqslant \lambda _{1} + \cdots + \lambda _{d}\).

Accordingly, the orthogonal polynomials \(p_{\mu }\) can also be characterized as \(p_{\mu } = u^{\mu }+ \text {l.o.m.}\) (with \(\text {l.o.m.}\) having the same meaning as above) such that they are orthogonal to all \(p_{\mu ^{\prime }}\) with corresponding \(\lambda ^{\prime }\) less than \(\lambda\) (corresponding to \(\mu\)) in the dominance partial ordering. Since the dominance partial ordering is not a total order, a priori, the polynomials \(p_{\mu }\) defined in this way could seem different from the polynomials \(p_{\mu }\) defined in (4.2). However, that they are still equal was first proved by Heckman [5, Theorem 8.3] by using very deep methods. Much easier proofs were given by Macdonald [15, (11.11)] and Heckman [6, Corollary 3.12].

5.1 The case \(d=2\)

For \(d=2\), using Propositions 5.1, 5.2, and 5.4, we can easily deduce the explicit expression of the differential operators \(\mathcal {H}_{\gamma }^{H}, \mathcal {H}_{\gamma }^{L}\), and \(\mathcal {H}_{\gamma }^{J}\) under the change of variables \(\mathtt {x} \mapsto \mathtt {u}\).

In the Jacobi case, the operator

$$\begin{aligned} \mathcal {H}_{\gamma }^{J} = \mathcal {D}_{0} - \mathcal {D}_{2} + (\beta - \alpha ) \mathcal {E}_{0}- (\alpha +\beta +2)\mathcal {E}_{1} + (2\gamma +1) (\mathcal {F}_{0} - \mathcal {F}_{2}), \end{aligned}$$

for \(d=2\), can be written as follows

$$\begin{aligned} \mathcal {H}_{\gamma }^{J}= & {} (-{u_{1}^{2}} + 2u_{2} +2) \frac{\partial ^{2}}{\partial {u_{1}^{2}}} + 2u_{1}(1-u_{2}) \frac{\partial ^{2}}{\partial u_{1} \partial u_{2}} + ({u_{1}^{2}}-2{u_{2}^{2}} -2u_{2}) \frac{\partial ^{2}}{\partial {u_{2}^{2}}} \\&\quad + [-(\alpha + \beta +2\gamma +3)u_{1} +2(\beta -\alpha )] \frac{\partial }{\partial u_{1}} \\&\quad +[(\beta -\alpha )u_{1} - (2\alpha +2\beta + 2\gamma +5)u_{2} -(2\gamma +1)]\frac{\partial }{\partial u_{2}}, \end{aligned}$$

and therefore we recover the differential operator given by Koornwinder in [8].

Denoting the corresponding orthogonal polynomial, for \(d=2\), by \(P_{n-k,k}^{(\alpha ,\beta ,\gamma )}(\mathtt {u}) = u_{1}^{n-k}{u_{2}^{k}} + \cdots\), we get

$$\begin{aligned} \mathcal {H}_{\gamma }^{J} P_{n-k,k}^{(\alpha ,\beta ,\gamma )}(\mathtt {u}) =-[n(n+\alpha +\beta +2\gamma +2)+k(k+\alpha +\beta +1) ]P_{n-k,k}^{(\alpha ,\beta ,\gamma )}(\mathtt {u}). \end{aligned}$$
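
The eigenvalue can be checked on a concrete case (a sketch of ours): for \(\alpha = \beta = 0\), \(\gamma = -1/2\), the polynomial \(u_{1}u_{2} - u_{1}/3\) (the symmetrized monic Legendre product with \(\mu = (1,2)\), i.e., \(n = 2\), \(k = 1\)) should have eigenvalue \(-[2(2+2\gamma +2) + 1\cdot 2] = -8\):

```python
import sympy as sp

u1, u2 = sp.symbols('u1 u2')
a, b, g = 0, 0, -sp.Rational(1, 2)

def HJ(F):  # the displayed d = 2 Jacobi operator
    return ((-u1**2 + 2*u2 + 2)*sp.diff(F, u1, 2)
            + 2*u1*(1 - u2)*sp.diff(F, u1, u2)
            + (u1**2 - 2*u2**2 - 2*u2)*sp.diff(F, u2, 2)
            + (-(a + b + 2*g + 3)*u1 + 2*(b - a))*sp.diff(F, u1)
            + ((b - a)*u1 - (2*a + 2*b + 2*g + 5)*u2 - (2*g + 1))*sp.diff(F, u2))

P = u1*u2 - u1/3
print(sp.expand(HJ(P) + 8*P))   # 0, so HJ(P) = -8*P as claimed
```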

In the Hermite case, the explicit expression of the differential operator

$$\begin{aligned} \mathcal {H}_{\gamma }^{H} = \mathcal {D}_{0} - 2 \mathcal {E}_{1} + (2\gamma +1) \mathcal {F}_{0}, \end{aligned}$$

for \(d=2\), is given by

$$\begin{aligned} \mathcal {H}_{\gamma }^{H} = 2 \frac{\partial ^{2}}{\partial {u_{1}^{2}}} + 2u_{1} \frac{\partial ^{2}}{\partial u_{1} \partial u_{2}} + ({u_{1}^{2}}-2u_{2})\frac{\partial ^{2}}{\partial {u_{2}^{2}}} - 2 u_{1} \frac{\partial }{\partial u_{1}} - (4u_{2} + 2\gamma +1)\frac{\partial }{\partial u_{2}}. \end{aligned}$$

Denoting the orthogonal polynomial, for \(d=2\), by \(H_{n-k,k}^{(\gamma )}(\mathtt {u}) = u_{1}^{n-k}{u_{2}^{k}} + \cdots\), we get

$$\begin{aligned} \mathcal {H}_{\gamma }^{H} H_{n-k,k}^{(\gamma )}(\mathtt {u}) =-2(n+k) H_{n-k,k}^{(\gamma )}(\mathtt {u}). \end{aligned}$$
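
One can also confirm the whole Hermite representation for d = 2 symbolically, by comparing \(\mathcal {H}_{\gamma }^{H}\) applied in the \(\mathtt {x}\) variables with the displayed \(\mathtt {u}\)-space operator (a sympy sketch of ours; v1, v2 stand for \(u_{1}, u_{2}\)):

```python
import sympy as sp

x1, x2, g, v1, v2 = sp.symbols('x1 x2 gamma v1 v2')

def H_x(f):  # H_gamma^H in the x variables for d = 2
    return sum(sp.diff(f, xi, 2) - 2*xi*sp.diff(f, xi)
               + (2*g + 1)/(xi - xj)*sp.diff(f, xi)
               for xi, xj in [(x1, x2), (x2, x1)])

def H_u(F):  # the explicit u-space representation displayed above
    return (2*sp.diff(F, v1, 2) + 2*v1*sp.diff(F, v1, v2)
            + (v1**2 - 2*v2)*sp.diff(F, v2, 2)
            - 2*v1*sp.diff(F, v1) - (4*v2 + 2*g + 1)*sp.diff(F, v2))

F = v1**2*v2 + 3*v2**2                       # an arbitrary polynomial in u
f = F.subs([(v1, x1 + x2), (v2, x1*x2)])     # the same polynomial in x
print(sp.cancel(H_x(f) - H_u(F).subs([(v1, x1 + x2), (v2, x1*x2)])))   # 0
```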

In the Laguerre case, the explicit expression of the differential operator

$$\begin{aligned} \mathcal {H}_{\gamma }^{L}&= \mathcal {D}_{1} + (\alpha +1) \mathcal {E}_{0} -\mathcal {E}_{1} + (2\gamma +1) \mathcal {F}_{1}, \end{aligned}$$

for \(d=2\), is given by

$$\begin{aligned} \mathcal {H}_{\gamma }^{L}= & {} u_{1} \frac{\partial ^{2}}{\partial {u_{1}^{2}}} +4u_{2} \frac{\partial ^{2}}{\partial u_{1} \partial u_{2}} + u_{1}u_{2}\frac{\partial ^{2}}{\partial {u_{2}^{2}}} + \left[ 2 \alpha + 2\gamma +3 - u_{1} \right] \frac{\partial }{\partial u_{1}} \\&+ \left[ (\alpha +1) u_{1} - 2u_{2} \right] \frac{\partial }{\partial u_{2}} . \end{aligned}$$

Again, denoting the orthogonal polynomial, for \(d=2\), by \(L_{n-k,k}^{(\gamma )}(\mathtt {u}) = u_{1}^{n-k}{u_{2}^{k}} + \cdots\), we get

$$\begin{aligned} \mathcal {H}_{\gamma }^{L} L_{n-k,k}^{(\gamma )}(\mathtt {u}) = -(n+k) L_{n-k,k}^{(\gamma )}(\mathtt {u}). \end{aligned}$$

5.2 The case \(d=3\)

For \(d=3\), using Propositions 5.1, 5.2, and 5.4, we can easily deduce the explicit expression of the differential operators \(\mathcal {H}_{\gamma }^{H}, \mathcal {H}_{\gamma }^{L}\), and \(\mathcal {H}_{\gamma }^{J}\) under the change of variables \(\mathtt {x} \mapsto \mathtt {u}\).

In the Jacobi case, the operator

$$\begin{aligned} \mathcal {H}_{\gamma }^{J} = \mathcal {D}_{0} - \mathcal {D}_{2} + (\beta - \alpha ) \mathcal {E}_{0}- (\alpha +\beta +2)\mathcal {E}_{1} + (2\gamma +1) (\mathcal {F}_{0} - \mathcal {F}_{2}), \end{aligned}$$

for \(d=3\), can be written as follows

$$\begin{aligned} \mathcal {H}_{\gamma }^{J}= & {} (-{u_{1}^{2}} + 2u_{2} + 3) \frac{\partial ^{2}}{\partial {u_{1}^{2}}} + 2(2u_{1}-u_{1}u_{2}+3u_{3}) \frac{\partial ^{2}}{\partial u_{1} \partial u_{2}} \\&\quad + 2(u_{2}-u_{1}u_{3}) \frac{\partial ^{2}}{\partial u_{1} \partial u_{3}} + 2({u_{1}^{2}}-{u_{2}^{2}} -u_{2} +u_{1}u_{3}) \frac{\partial ^{2}}{\partial {u_{2}^{2}}} \\&\quad + 2(u_{1}u_{2}-3u_{3}-2u_{2}u_{3}) \frac{\partial ^{2}}{\partial u_{2} \partial u_{3}} + ({u_{2}^{2}}-2u_{1}u_{3}-3{u_{3}^{2}}) \frac{\partial ^{2}}{\partial {u_{3}^{2}}} \\&\quad + \left[ - (\alpha +\beta +4\gamma +4) u_{1} + 3(\beta - \alpha )\right] \frac{\partial }{\partial u_{1}} \\&\quad + \left[ - (2\alpha +2\beta +6\gamma +7) u_{2} + 2(\beta - \alpha )u_{1} -3(2\gamma +1) \right] \frac{\partial }{\partial u_{2}} \\&\quad + \left[ - (3\alpha +3\beta +6\gamma +9) u_{3} + (\beta - \alpha )u_{2} -(2\gamma +1)u_{1} \right] \frac{\partial }{\partial u_{3}}. \end{aligned}$$

In the Hermite case, the explicit expression of the differential operator

$$\begin{aligned} \mathcal {H}_{\gamma }^{H} = \mathcal {D}_{0} - 2 \mathcal {E}_{1} + (2\gamma +1) \mathcal {F}_{0}, \end{aligned}$$

for \(d=3\), is given by

$$\begin{aligned} \mathcal {H}_{\gamma }^{H}= & {} 3 \frac{\partial ^{2}}{\partial {u_{1}^{2}}} + 4u_{1} \frac{\partial ^{2}}{\partial u_{1} \partial u_{2}} + 2u_{2} \frac{\partial ^{2}}{\partial u_{1} \partial u_{3}} + 2({u_{1}^{2}}-u_{2})\frac{\partial ^{2}}{\partial {u_{2}^{2}}} \\&+ 2(u_{1}u_{2}-3u_{3}) \frac{\partial ^{2}}{\partial u_{2} \partial u_{3}} + ({u_{2}^{2}}-2u_{1}u_{3})\frac{\partial ^{2}}{\partial {u_{3}^{2}}} - 2 u_{1} \frac{\partial }{\partial u_{1}} \\&- [4u_{2} + 3(2\gamma +1)]\frac{\partial }{\partial u_{2}} -[6 u_{3} + (2\gamma +1)u_{1} ] \frac{\partial }{\partial u_{3}}. \end{aligned}$$

In the Laguerre case, the explicit expression of the differential operator

$$\begin{aligned} \mathcal {H}_{\gamma }^{L} = \mathcal {D}_{1} + (\alpha +1) \mathcal {E}_{0} -\mathcal {E}_{1} + (2\gamma +1) \mathcal {F}_{1}, \end{aligned}$$

for \(d=3\), is given by

$$\begin{aligned} \mathcal {H}_{\gamma }^{L}= & {} u_{1}\frac{\partial ^{2}}{\partial {u_{1}^{2}}} + 4u_{2} \frac{\partial ^{2}}{\partial u_{1} \partial u_{2}} + 6u_{3} \frac{\partial ^{2}}{\partial u_{1} \partial u_{3}} + (u_{1}u_{2}+3u_{3})\frac{\partial ^{2}}{\partial {u_{2}^{2}}} \\&+ 4u_{1}u_{3} \frac{\partial ^{2}}{\partial u_{2} \partial u_{3}} + u_{2}u_{3}\frac{\partial ^{2}}{\partial {u_{3}^{2}}} + [3\alpha +6\gamma +6 -u_{1}] \frac{\partial }{\partial u_{1}} \\&+[(2\alpha +2\gamma +3)u_{1} -2u_{2} ]\frac{\partial }{\partial u_{2}} + [(\alpha +1)u_{2} -3u_{3}] \frac{\partial }{\partial u_{3}}. \end{aligned}$$