On a Symmetric Generalization of Bivariate Sturm–Liouville Problems

A new class of partial differential equations having symmetric orthogonal solutions is presented. The general equation is given, and orthogonality is obtained using the Sturm–Liouville approach. Conditions on the polynomial coefficients that yield admissible partial differential equations are stated. The general case is analyzed in detail, providing the orthogonality weight function, the three-term recurrence relations for the monic orthogonal polynomial solutions, as well as the explicit form of these monic orthogonal polynomial solutions, which are solutions of an admissible and potentially self-adjoint linear second-order partial differential equation of hypergeometric type.


Introduction
Classical univariate orthogonal polynomials (of a continuous variable) can be introduced and characterized in different ways. One possible approach is to present them as solutions of a second-order differential equation of hypergeometric type [29]

a(x) z''(x) + b(x) z'(x) + μ_n z(x) = 0, (1.1)

where a(x) and b(x) are polynomials of degree at most two and one, respectively, and μ_n is a constant, usually referred to as the eigenvalue. The hypergeometric property means that if z(x) is a solution of (1.1), then any derivative of z(x) is a solution of an equation of the same type. The characterization problem has been analyzed by several authors [9]. As is well known, the Favard theorem [31] links orthogonality and the three-term recurrence relation, which in the univariate situation reads as

x p_n(x) = r_n p_{n+1}(x) + s_n p_n(x) + t_n p_{n-1}(x), t_n ≠ 0.
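For instance (a quick numerical check using NumPy's hermite_e basis, not part of the original text): the probabilists' Hermite polynomials He_n solve (1.1) with a(x) = 1, b(x) = −x and μ_n = n, and, being monic, satisfy the recurrence above with r_n = 1, s_n = 0 and t_n = n:

```python
import numpy as np
from numpy.polynomial import hermite_e


def he(n, x):
    """Evaluate the probabilists' (monic) Hermite polynomial He_n at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermite_e.hermeval(x, c)


x = np.linspace(-2.0, 2.0, 7)
for n in range(1, 6):
    lhs = x * he(n, x)
    # three-term recurrence: r_n = 1, s_n = 0, t_n = n
    rhs = he(n + 1, x) + n * he(n - 1, x)
    assert np.allclose(lhs, rhs)
```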
Classical univariate orthogonal polynomials (of a continuous variable and orthogonal with respect to a positive weight function) are essentially the Jacobi, Laguerre and Hermite polynomials, which can be expressed in terms of hypergeometric functions. Moreover, the differential equation (1.1) can be discretized in several ways, giving rise to classical orthogonal polynomials of a discrete variable and classical orthogonal polynomials on nonuniform lattices [29]. All of them can be expressed in terms of hypergeometric series [14].
There are also several approaches in the multivariate context. Some authors have focused the analysis on the partial differential equation [15], but it is possible to introduce multivariate analogues of classical orthogonal polynomials through other approaches [2,32–34]. A systematic analysis of the orthogonal polynomial solutions of some partial differential equations has been done by Suetin [30]. Let Π_n be the set of polynomials of total degree n in the multivariate case. There are three main concepts on the partial differential equation to be considered:
1. Admissible: There exists a sequence of eigenvalues {μ_n} (n = 0, 1, . . . ) such that for each nonnegative integer n there are exactly n + 1 linearly independent solutions in Π_n and the equation has no non-trivial solutions in Π_k for k < n;
2. Hypergeometric: If y(x_1, . . . , x_n) is a solution of the partial differential equation, then
∂^{r_1 + r_2 + ··· + r_n} y(x_1, . . . , x_n) / (∂x_1^{r_1} ∂x_2^{r_2} ··· ∂x_n^{r_n})
is a solution of an equation of the same type;
3. Potentially self-adjoint: From the partial differential equation, it is possible to introduce an operator D. The operator (and therefore the equation) is potentially self-adjoint in a domain Ω if there exists a positive real function ρ in this domain such that the operator ρD is self-adjoint in Ω.
Potentially self-adjoint second-order partial differential equations have been deeply analyzed by Suetin [30] (see Chapter V). The concept of hypergeometric equation was introduced by Lyskova [17,18] under the name basic class. Admissible equations go back to the works of Krall and Sheffer [15]. In recent works [5], all these conditions have been imposed on the partial differential equations, obtaining the algebraic and differential properties (as well as the weight functions) of orthogonal polynomial solutions of bivariate second-order linear partial differential equations which are admissible, potentially self-adjoint and of hypergeometric type. This analysis has been performed also for the discrete case [4] as well as for their q-analogues [6]. Furthermore, Lee [16] characterized centrally symmetric orthogonal polynomials satisfying an admissible partial differential equation with eigenvalues μ_n = an(n − 1) + gn. Lee [16] shows that these centrally symmetric orthogonal polynomials are either the product of Hermite polynomials, which are solutions of the partial differential equation [15]

u_xx + u_yy − x u_x − y u_y = −n u,

or the circle polynomials, which are solutions of the partial differential equation [15]

(x^2 − 1) u_xx + 2xy u_xy + (y^2 − 1) u_yy + g x u_x + g y u_y = n(n + g − 1) u.
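The first of Lee's two equations can be checked symbolically; the sketch below (the helper He is a hypothetical name, built from the standard recurrence) verifies that every product He_j(x) He_{n−j}(y) of probabilists' Hermite polynomials solves u_xx + u_yy − x u_x − y u_y = −n u, so the equation indeed has n + 1 independent solutions of total degree n:

```python
import sympy as sp

x, y = sp.symbols('x y')


def He(n, v):
    """Probabilists' Hermite He_n via the recurrence He_{k+1} = v He_k - k He_{k-1}."""
    p_prev, p = sp.Integer(1), v
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, sp.expand(v * p - k * p_prev)
    return p


n = 5
for j in range(n + 1):
    u = He(j, x) * He(n - j, y)
    # residual of  u_xx + u_yy - x u_x - y u_y + n u  must vanish identically
    residual = (sp.diff(u, x, 2) + sp.diff(u, y, 2)
                - x * sp.diff(u, x) - y * sp.diff(u, y) + n * u)
    assert sp.expand(residual) == 0
```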
On the other hand, the very classical second-order linear differential equation (1.1) has been symmetrically extended [19–22] in the following way. Let ψ_n(−x) = (−1)^n ψ_n(x) be a sequence of symmetric functions satisfying the equation

A(x) ψ_n''(x) + B(x) ψ_n'(x) + (μ_n C(x) + D(x) + κ_n E(x)) ψ_n(x) = 0, (1.2)

where μ_n is a sequence of constants and

κ_n = (1 − (−1)^n)/2. (1.3)

In [20], for such an equation, a basic class of symmetric orthogonal polynomials with four free parameters is introduced and its standard properties, such as the orthogonality relation, the three-term recurrence relation and so on, are investigated; its special sub-cases are four main sequences of symmetric orthogonal polynomials, namely the generalized ultraspherical polynomials [10,11], the generalized Hermite polynomials [11,31] and two other sequences of symmetric polynomials, which are finitely orthogonal on (−∞, ∞). In [21,22], using the extended Sturm–Liouville theorem for symmetric functions, two basic classes of symmetric orthogonal functions, with five and six parameters, are obtained for second-order differential equations in special forms of (1.2). Furthermore, using a generalization of Sturm–Liouville problems in discrete spaces, a basic class of symmetric orthogonal polynomials of a discrete variable with four free parameters, which generalizes all classical discrete symmetric orthogonal polynomials, is introduced in [23,24]. For some other papers, we refer to [3,25–27]. Motivated by these papers in the univariate case, we now turn to the bivariate case: we consider a class of partial differential equations that will be shown to have symmetric solutions, using the Sturm–Liouville theorem to prove their orthogonality.
The main aim of this work is to present a symmetric generalization of previous works, obtaining a new class of partial differential equations having symmetric orthogonal solutions. For this class, conditions for the equation to be admissible are given and the general case is analyzed in detail. We would like to point out that, to the best of our knowledge, this type of partial differential equation, together with its orthogonal polynomial solutions, has not been considered before in the literature. The interest of these new orthogonal polynomial families is not only theoretical: since they satisfy a partial differential equation, it might be possible to consider orthogonal polynomial expansions for related partial differential equations, e.g., via the so-called Navima algorithm [7,13], to obtain numerical approximations of the solutions of other partial differential equations.
The structure of this work is as follows. In Sect. 2, the new symmetric generalization to the bivariate case is presented, and orthogonality of the polynomial solutions is derived. Section 3 is devoted to analyzing the conditions needed to have an admissible partial differential equation. Under these conditions several properties are obtained, namely: the three-term recurrence relations of the orthogonal polynomial solutions, the domain of orthogonality, as well as the explicit form of the monic orthogonal polynomial solutions, which are solutions of an admissible and potentially self-adjoint linear second-order partial differential equation of hypergeometric type.
The Symmetric Generalization to the Bivariate Case

Consider the second-order linear partial differential equation

K(x, y) u_xx + L(x, y) u_xy + M(x, y) u_yy + Q(x, y) u_x + O(x, y) u_y + (μ_n ϕ(x, y) + S(x, y) + κ_n T(x, y)) u = 0, (2.1)

where κ_n is defined in (1.3). We shall impose that for each n = 0, 1, 2, . . . the latter equation has symmetric polynomial solutions u_n(x, y) of total degree n, so that u_n(−x, −y) = (−1)^n u_n(x, y). Following [30], let us define the linear operator D associated with (2.1), as in (2.2). Let us assume that (2.2) is not self-adjoint and that there exists a function ρ(x, y) such that ρD is self-adjoint in a certain domain Ω. Under these conditions, (2.1) is potentially self-adjoint in the domain Ω if (2.3) holds true, which determines ρ as in (2.4). Since K, L, and M are even polynomials and Q, O are odd polynomials, the symmetry conditions (2.5) hold. Thus, the function ρ(x, y) defined in (2.4) satisfies the symmetry relation ρ(−x, −y) = ρ(x, y). Under these conditions, it is possible to write (2.1) in self-adjoint form as (2.6). Let us write (2.6) for u(x, y) = u_n(x, y), obtaining (2.7), and for u(x, y) = u_m(x, y), obtaining (2.8). If we multiply (2.7) by u_m(x, y) and (2.8) by u_n(x, y), subtract them, and then integrate both sides over Ω, the orthogonality of the symmetric polynomial solutions follows.
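The subtraction-and-integration step is the bivariate analogue of the classical Sturm–Liouville argument. Schematically, writing the principal part of the self-adjoint form as a divergence with a generic coefficient matrix A (a sketch of the structure only, not the exact coefficients of (2.6)), and with ρ the symmetrizing factor, the two equations give

```latex
% A: matrix of second-order coefficients in the self-adjoint form; rho: symmetrizing factor
u_m \,\nabla\!\cdot\!\bigl(\rho\,\mathbf{A}\,\nabla u_n\bigr)
  - u_n \,\nabla\!\cdot\!\bigl(\rho\,\mathbf{A}\,\nabla u_m\bigr)
  = \bigl[(\mu_m-\mu_n)\,\varphi + (\kappa_m-\kappa_n)\,T\bigr]\,\rho\,u_n u_m .
% Integrating over Omega, the left-hand side reduces to boundary terms that vanish:
(\mu_n-\mu_m)\iint_{\Omega} \rho\,\varphi\,u_n u_m \,\mathrm{d}x\,\mathrm{d}y
  + (\kappa_n-\kappa_m)\iint_{\Omega} \rho\,T\,u_n u_m \,\mathrm{d}x\,\mathrm{d}y = 0 .
```

If n and m have the same parity, then κ_n = κ_m and, since μ_n ≠ μ_m, the first integral must vanish; if the parities differ, the integrands ρϕ u_n u_m and ρT u_n u_m are odd under (x, y) → (−x, −y), so both integrals vanish over the symmetric domain Ω. In either case the solutions are orthogonal with respect to the weight ρϕ on Ω.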

The Class of Admissible Partial Differential Equations
Let us assume that K, L and M are polynomials of total degree 4, Q and O are polynomials of total degree 3, ϕ and T are polynomials of total degree 2, and S is identically zero, such that the symmetry conditions (2.5) hold true. In this section, we shall give conditions on the coefficients of the above polynomials for (2.1) to be an admissible partial differential equation, that is, a partial differential equation having n + 1 linearly independent polynomial solutions for each nonnegative integer n. Let x = (x, y) ∈ R^2. For each nonnegative integer n, let x_n be the column vector of the monomials x^{n−k} y^k, k = 0, 1, . . . , n, whose elements are arranged in graded lexicographical order [12, p. 32]:

x_n = (x^n, x^{n−1} y, . . . , x^{n−k} y^k, . . . , y^n)^T. (3.1)

Let Π_n^2 be the space of all polynomials in two variables x = (x, y) of total degree at most n. Then, any element of Π_n^2 can be expressed as a finite sum of terms of the form c_k x^{n−k} y^k, where the c_k are real constants.
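As a small illustration (a SymPy sketch; monomial_vector is a hypothetical helper name), the column vector x_n of (3.1) can be generated directly in graded lexicographic order:

```python
import sympy as sp

x, y = sp.symbols('x y')


def monomial_vector(n):
    """Column vector x_n of (3.1): monomials x^(n-k) y^k, k = 0..n, in graded lex order."""
    return sp.Matrix([x**(n - k) * y**k for k in range(n + 1)])


# e.g. x_2 = (x^2, x*y, y^2)^T
assert list(monomial_vector(2)) == [x**2, x*y, y**2]
```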
If we apply the partial differential operator to x_n defined in (3.1), we obtain, under the stated restrictions, the eigenvalue μ_n given in (3.2). If we impose that the partial differential equation has a monic polynomial solution, further conditions on the coefficients appear; without loss of generality we can assume ϕ_0 = 1. Under these conditions, and with μ_n defined in (3.2), equation (2.1) is admissible, i.e., for each nonnegative integer n the equation has precisely n + 1 linearly independent solutions in the form of polynomials of total degree n and has no non-trivial solutions in the set of polynomials whose total degree is less than n. Hence,

ζ(x, y) = (ϕ_3 + x^2 + xy + y^2)^2 × (−2 k_0 l_5 xy − k_0 ϕ_3 (k_0 y^2 + m_5) + (k_0 m_5 x^2 + k_5)(k_0 y^2 + m_5) − l_5^2),

η(x, y) = (ϕ_3 + x^2 + xy + y^2) [ϕ_3 (k_0^2 y^2 (2x + y) + m_5 (−k_0 x + k_0 y + q_0 x)) + l_5 y (ϕ_3 (3 k_0 − q_0) + k_0 (7x^2 + 5xy + 3y^2) − q_0 (x^2 + xy + y^2))].

Under these conditions the equation becomes potentially self-adjoint, since (2.3) holds true. The orthogonality domain Ω is defined by ζ(x, y) ≥ 0. For instance, if we fix m_5 = 1, k_5 = 4 k_0 ϕ_3, k_0 = −1, ϕ_3 = −1 (so that k_5 = 4), q_0 = −7 and l_5 = 0, then the domain is the region where ζ(x, y) ≥ 0, the orthogonality weight function reduces accordingly, and the polynomials are orthogonal to any polynomial of lower total degree, as has been shown.

Three-Term Recurrence Relations for the Monic Orthogonal Polynomial Solutions
Let us assume that the partial differential equation has a monic symmetric polynomial solution of the form P_n = x_n + G_{n,n−2} x_{n−2} + · · · , where x_n is defined in (3.1). Let L_{n,1} and L_{n,2} be the matrices of size (n + 1) × (n + 2) defined in (3.4) by x x_n = L_{n,1} x_{n+1} and y x_n = L_{n,2} x_{n+1}. Then,

x^2 x_n = L_{n,1} L_{n+1,1} x_{n+2},  y^2 x_n = L_{n,2} L_{n+1,2} x_{n+2},  L_{n,2} L_{n+1,1} = L_{n,1} L_{n+1,2},

and, for j = 1, 2, L_{n,j} L_{n,j}^T = I_{n+1}, where I_{n+1} stands for the identity matrix of size n + 1. Moreover, companion matrices of size (n + 1) × n are introduced for n ≥ 1. If we apply the second-order partial differential equation (2.1) to x_n, we obtain that the matrix G_{n,n−2} ∈ M(n + 1, n − 1) defined in (3.5) is tridiagonal, with entries given in (3.6). As is well known [12], for n ≥ 0 there exist unique matrices A_{n,j} of size (n + 1) × (n + 2), B_{n,j} of size (n + 1) × (n + 1), and C_{n,j} of size (n + 1) × n, such that

x_j P_n = A_{n,j} P_{n+1} + B_{n,j} P_n + C_{n,j} P_{n−1}, j = 1, 2, (3.7)

with the initial conditions P_{−1} = 0 and P_0 = 1, where we have denoted x_1 = x and x_2 = y. In the symmetric case, B_{n,j} is zero. From [5, Theorem 4.2], in this symmetric case we have: in the monic case, the explicit expressions of the matrices A_{n,j} and C_{n,j} (j = 1, 2) appearing in (3.7) are given in terms of the matrices L_{n,j} introduced in (3.4) and the coefficients of the matrix G_{n,n−2} defined in (3.5), which are given in (3.6).
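The identities for L_{n,1} and L_{n,2} can be verified numerically. The sketch below (function name L is a hypothetical choice) constructs the (n + 1) × (n + 2) matrices that realize multiplication of the monomial vector x_n by x and by y in the graded lexicographic ordering of (3.1):

```python
import numpy as np


def L(n, j):
    """Matrix L_{n,j} of size (n+1) x (n+2): x * x_n = L(n,1) x_{n+1}, y * x_n = L(n,2) x_{n+1}.

    With x_n = (x^n, x^{n-1} y, ..., y^n)^T, multiplying the k-th entry x^{n-k} y^k
    by x gives entry k of x_{n+1}; multiplying it by y gives entry k + 1.
    """
    M = np.zeros((n + 1, n + 2))
    for k in range(n + 1):
        M[k, k + (j - 1)] = 1.0
    return M


n = 4
# orthogonality-type identities: L_{n,j} L_{n,j}^T = I_{n+1}
assert np.allclose(L(n, 1) @ L(n, 1).T, np.eye(n + 1))
assert np.allclose(L(n, 2) @ L(n, 2).T, np.eye(n + 1))
# commutation identity: L_{n,2} L_{n+1,1} = L_{n,1} L_{n+1,2}
assert np.allclose(L(n, 2) @ L(n + 1, 1), L(n, 1) @ L(n + 1, 2))
```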

Explicit Form of the Monic Orthogonal Polynomial Solutions
Next, we obtain the explicit form of the monic polynomial solutions of the partial differential equation (2.1) for the specific form of the polynomials (3.3).

Proof
The result follows by direct substitution of the partial derivatives of the explicit expression (3.8) into (3.9) and equating the coefficients.
Since

∂/∂x (τ^{(r,s)}(x, y) ζ(x, y)) = ∂/∂y (η^{(r,s)}(x, y) ζ(x, y)),

the equation for the partial derivatives (3.9) is again potentially self-adjoint, as well as hypergeometric, since the partial derivatives of any order of any solution of (3.9) are solutions of an equation of the same type.