1 Motivation

Quadratic decomposition of univariate orthogonal polynomials is a well-known subject in the classical theory of orthogonal polynomials. It appears, for instance, in the first chapter of [3], and describes the problem of identifying the even and odd parts of a symmetric orthogonal polynomial sequence. In fact, if \(\{S_n\}_{n\geqslant 0}\) is an orthogonal polynomial sequence (OPS) satisfying \(S_n(-x) = (-1)^n\,S_n(x)\), for \(n\geqslant 0\), then the odd-degree polynomials contain only odd powers and the even-degree polynomials contain only even powers, and hence there exist two further families of polynomials \(\{P_n\}_{n\geqslant 0}\) and \(\{Q_n\}_{n\geqslant 0}\) such that

$$\begin{aligned} S_{2n}(x) = P_n(x^2),{} & {} S_{2n+1}(x) = x \, Q_n(x^2), \end{aligned}$$
(1.1)

for \(n\geqslant 0\). From this relation, the families of polynomials \(\{P_n\}_{n\geqslant 0}\) and \(\{Q_n\}_{n\geqslant 0}\) inherit orthogonality properties, directly related to the quadratic decomposition and to a Christoffel modification of the original weight function ([3, p. 41]). Moreover, the converse holds, that is, a symmetric orthogonal polynomial sequence \(\{S_n\}_{n\geqslant 0}\) can be constructed by using (1.1) (Theorem 8.1 in [3]). As the most important example of quadratic decomposition, we highlight the well-known relation between the classical Hermite orthogonal polynomials in the role of \(\{S_n\}_{n\geqslant 0} = \{H_n\}_{n\geqslant 0}\), and the classical Laguerre orthogonal polynomials as \(\{P_n\}_{n\geqslant 0} = \{L^{(-1/2)}_n\}_{n\geqslant 0}\) and \(\{Q_n\}_{n\geqslant 0} = \{L^{(1/2)}_n\}_{n\geqslant 0}\). In addition, the coefficients of the three term recurrence relations for the families of orthogonal polynomials involved in (1.1) are related by means of Bäcklund type relations. Following [3, Theorem 9.1], suppose that the respective recurrences are

$$\begin{aligned} P_n(x)&= (x-c_n)P_{n-1}(x) - \lambda _n\,P_{n-2}(x), \quad n=1, 2, \ldots \\ Q_n(x)&= (x-d_n)Q_{n-1}(x) - \nu _n\,Q_{n-2}(x), \quad n=1, 2, \ldots \\ S_n(x)&= x\, S_{n-1}(x) - \gamma _n\,S_{n-2}(x), \quad n=1, 2, \ldots \end{aligned}$$

with

$$\begin{aligned} P_{-1}(x) =Q_{-1}(x) = S_{-1}(x) = 0, \quad P_{0}(x) =Q_{0}(x) = S_{0}(x) = 1, \end{aligned}$$

and

$$\begin{aligned} \lambda _n\ne 0,\quad \nu _n\ne 0, \quad \gamma _n\ne 0. \end{aligned}$$

Then the following Bäcklund relations between the coefficients hold

$$\begin{aligned} \begin{aligned}&c_1 = \gamma _2, \quad c_n = \gamma _{2n-1} + \gamma _{2n}, \quad \lambda _n = \gamma _{2n-2}\gamma _{2n-1}, \quad n\geqslant 2,\\&d_n = \gamma _{2n} + \gamma _{2n+1}, \quad \nu _n = \gamma _{2n-1}\gamma _{2n}, \quad n\geqslant 1, \quad (\gamma _0=1). \end{aligned} \end{aligned}$$
(1.2)
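This classical example can be checked numerically. The following sketch (our illustration; the function names are ours) implements the monic Hermite recurrence for the weight \(e^{-x^2}\), for which \(\gamma_n = (n-1)/2\), derives \(c_n\), \(\lambda_n\), \(d_n\), \(\nu_n\) from (1.2), and verifies the decomposition (1.1) at sample points:

```python
import numpy as np

def eval_S(n, x, gamma):
    # S_n = x*S_{n-1} - gamma(n)*S_{n-2},  with S_{-1} = 0, S_0 = 1
    s_prev, s = np.zeros_like(x), np.ones_like(x)
    for m in range(1, n + 1):
        s_prev, s = s, x * s - gamma(m) * s_prev
    return s

def eval_PQ(n, x, cn, ln):
    # P_n = (x - c_n)*P_{n-1} - lambda_n*P_{n-2},  with P_{-1} = 0, P_0 = 1
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    for m in range(1, n + 1):
        p_prev, p = p, (x - cn(m)) * p - ln(m) * p_prev
    return p

gamma = lambda n: (n - 1) / 2.0              # monic Hermite, weight e^{-x^2}

# Baecklund relations (1.2), with gamma_0 = 1
c   = lambda n: gamma(2 * n) if n == 1 else gamma(2 * n - 1) + gamma(2 * n)
lam = lambda n: (1.0 if n == 1 else gamma(2 * n - 2)) * gamma(2 * n - 1)
d   = lambda n: gamma(2 * n) + gamma(2 * n + 1)
nu  = lambda n: gamma(2 * n - 1) * gamma(2 * n)

x = np.linspace(-2, 2, 7)
for n in range(5):
    assert np.allclose(eval_S(2 * n, x, gamma), eval_PQ(n, x**2, c, lam))
    assert np.allclose(eval_S(2 * n + 1, x, gamma), x * eval_PQ(n, x**2, d, nu))
print("quadratic decomposition (1.1) and relations (1.2) verified")
```

The coefficients recovered here, \(c_n = 2n - 3/2\) and \(d_n = 2n - 1/2\), are consistent with the monic Laguerre recurrences for the parameters \(\alpha = \mp 1/2\).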

Several aspects of the quadratic decomposition of univariate orthogonal polynomials have already been considered in the literature; see, for instance, [2, 4, 7, 11, 12, 14,15,16,17,18,19] and the references therein. In [15, 16] the quadratic decomposition (1.1) was generalized to polynomial sequences that are not necessarily symmetric; in [13], by using arbitrary polynomials of degree 2 and 1 in place of \(x^2\) and x, respectively, in (1.1), with special attention to the quadratic transformation \(x^2 - 1\) relating Gegenbauer and Jacobi polynomials ([10]); and even by means of a simple cubic decomposition, as we can read, for instance, in [5]. Bivariate symmetric orthogonal polynomials as solutions of second-order linear partial difference equations are analysed in [8, 20]. In those papers, the authors also study symmetric generalizations, obtaining a new class of partial differential equations having symmetric orthogonal solutions in the bivariate case.

In [21, 22] the author found connections between even orthogonal polynomials on the ball and simplex polynomials in d variables. Following [22, Section 4], let

$$\begin{aligned} W^{\textbf{B}}(x_1,x_2,\ldots , x_d)= W(x_1^2,x_2^2, \ldots , x_d^2) \end{aligned}$$

be a weight function defined on the unit ball of \({\mathbb {R}}^d\), and let

$$\begin{aligned} W^{\textbf{T}}(u_1,u_2,\ldots , u_d) = \frac{1}{\sqrt{u_1\,u_2\,\cdots \,u_d}} \, W( {u_1}, {u_2},\ldots , {u_d} ),{} & {} (u_1,u_2,\ldots , u_d)\in {\textbf{T}}^d, \end{aligned}$$

where \({\textbf{T}}^d\) is the unit simplex on \({\mathbb {R}}^d\). For \(n\geqslant 0\) and a multi-index \(\alpha \in {\mathbb {N}}^d_0\), let \(S_{2n,\alpha }(x_1,x_2, \ldots ,x_d)\) be an orthogonal polynomial associated with the weight function \(W^{\textbf{B}}\) that has even degree in each of its variables. Then Y. Xu proved that it can be written in terms of orthogonal polynomials on the simplex as

$$\begin{aligned} S_{2n,\alpha }(x_1,x_2, \ldots , x_d)=P_{n,\alpha } (x^2_1,x^2_2, \ldots , x^2_d),{} & {} |\alpha | = n, \end{aligned}$$

where \(P_{n,\alpha } (x^2_1,x^2_2, \ldots , x^2_d)\) is, for each \(n \geqslant 0\), an orthogonal polynomial of total degree n on the simplex associated with \(W^{\textbf{T}}\). In this way, there exists an important partial connection between classical ball polynomials and simplex ones, which also appears in [9, Section 9].

Inspired by these relations, we analyze the leftover polynomials in this procedure, i.e., we want to know the properties of the polynomials with odd powers that were left out of the above identification. We succeed in doing so in a general framework, showing that these polynomials are related to new families of bivariate orthogonal polynomials resulting from a Christoffel modification that we explicitly identify. Hence we obtain a complete answer in the case \(d = 2\), which generalizes the one given by T. Chihara in [3].

The paper is organized as follows. In Sect. 2 we state the basic tools and results that we will need throughout the paper. Sect. 3 is devoted to symmetric monic orthogonal polynomial sequences, starting with their basic properties and the explicit form of the polynomials. In Sect. 4 we analyze the quadratic decomposition process: given a symmetric monic orthogonal polynomial sequence, we separate it into four families of polynomials in a zip way, and we deduce the inherited orthogonality properties of each of the four families, obtaining that they are Christoffel modifications of the quadratic transformation of the original weight function. As a converse result, we construct a symmetric monic orthogonal polynomial sequence from four such families.

In Sect. 5 we give relations between the matrix coefficients of the three term relations of the involved families, obtaining Bäcklund-type relations that extend the univariate case (1.2). In addition, the matrix coefficients of the Christoffel transformations for the four families of orthogonal polynomials are given in terms of the matrix coefficients of the three term relations for the symmetric polynomials. These matrices enable us to reinterpret the block Jacobi matrix associated with the four orthogonal polynomial sequences in terms of an \(\pmb {{\textsf{L}}} \pmb {{\textsf{U}}}\) or \(\pmb {{\textsf{U}}} \pmb {{\textsf{L}}}\) representation.

Finally, in the last section, we complete the study started in [21, 22], describing explicitly the four families of orthogonal polynomials on the simplex deduced from a symmetric polynomial system orthogonal on the ball. The fact that four weight functions are needed to pass from the triangle to the ball is known (cf. [9, Section 9]). The explicit expressions of the recurrence matrices for this specific case will be the subject of a future work.

2 Basic Facts

For each \(n\geqslant 0\), let \(\Pi _n\) denote the linear space of bivariate polynomials of total degree not greater than n, and let \(\Pi =\bigcup _{n\geqslant 0} \Pi _n\). A polynomial \(p(x,y)\in \Pi _n\) is a linear combination of monomials, i.e.,

$$\begin{aligned} p(x,y) = \sum _{m=0}^{n}\sum _{i=0}^m a_{m-i,i} x^{m-i}y^{i}, \qquad a_{m-i,i}\in {\mathbb {R}}, \end{aligned}$$

and it is of total degree n if \( \sum _{i=0}^n |a_{n-i,i}| >0\). The polynomial p(x, y) has partial degree h in x if h is its degree in the variable x, regarding the variable y as a constant. In the same way, p(x, y) has partial degree k in y if k is its degree in the variable y, regarding x as a constant. For simplicity, we will omit the word total when working with the total degree.

A polynomial p(x, y) of degree n is called centrally symmetric if \(p(-x,-y) = (-1)^n \, p(x,y).\) As a consequence, if the degree of the polynomial is even, then it only contains monomials of even degree, and if the degree is odd, then it only contains monomials of odd degree.

We will say that a polynomial of partial degree h in x is x-symmetric if

$$\begin{aligned} p(-x,y) = (-1)^h \, p(x,y),{} & {} \forall (x,y). \end{aligned}$$

Therefore, if h is even, p(x, y) only contains even powers of x, and if h is odd, it only contains odd powers of x. Analogously, we define the y-symmetry for a polynomial of partial degree k in y as

$$\begin{aligned} p(x,-y) = (-1)^k \, p(x,y),{} & {} \forall (x,y). \end{aligned}$$

A y-symmetric polynomial of partial degree k in y contains only odd powers of y when k is odd, and only even powers of y when k is even.

Obviously, if a polynomial is x-symmetric and y-symmetric, then it is centrally symmetric. In this case, if the polynomial has total degree n, with partial degree h in x and partial degree k in y, and \(n=h+k\), then if h is odd (respectively, if k is odd), the polynomial only contains odd powers of x (respectively, odd powers of y), and if h is even (respectively, if k is even), it only contains even powers of x (respectively, even powers of y). The converse is not true. For instance, the polynomial \(p(x,y) = x^3 y^5 + x^4 y^4\) is centrally symmetric, since \(p(-x,-y) = p(x,y)\), but it is not x-symmetric, since \(p(-x,y) \ne (-1)^4 p(x,y)\).
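This counterexample is easy to confirm numerically; a minimal sketch:

```python
import numpy as np

p = lambda x, y: x**3 * y**5 + x**4 * y**4   # the example polynomial

rng = np.random.default_rng(0)
x, y = rng.standard_normal(50), rng.standard_normal(50)

# centrally symmetric of total degree 8: p(-x,-y) = (-1)^8 p(x,y)
assert np.allclose(p(-x, -y), p(x, y))
# but not x-symmetric: partial degree in x is 4, yet p(-x,y) != p(x,y)
assert not np.allclose(p(-x, y), p(x, y))
# and not y-symmetric: partial degree in y is 5, yet p(x,-y) != -p(x,y)
assert not np.allclose(p(x, -y), -p(x, y))
print("p is centrally symmetric but neither x- nor y-symmetric")
```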

For each \(n\geqslant 0\), let \({\mathbb {X}}_n\) denote the \((n+1)\times 1\) column vector

$$\begin{aligned} {\mathbb {X}}_n= \begin{bmatrix} x^n&x^{n-1} y&\cdots&xy^{n-1}&y^n \end{bmatrix}^{\top }, \end{aligned}$$

where, as usual, the superscript \(\top \) denotes the transpose. Then \(\{{\mathbb {X}}_n\}_{n\geqslant 0}\) is called the canonical basis of \(\Pi \). As in [6], for \(n \geqslant 0\), we denote by

$$\begin{aligned} {{\,\textrm{L}\,}}_{n,1}= \left[ \begin{array}{@{}c|c@{}} \begin{matrix} 1 \\ &{} \ddots \\ &{} &{} 1 \end{matrix} &{} \begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} \end{array} \right] , \qquad {{\,\textrm{L}\,}}_{n,2}= \left[ \begin{array}{@{}c|c@{}} \begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} &{} \begin{matrix} 1 \\ &{} \ddots \\ &{} &{} 1 \end{matrix} \end{array} \right] , \end{aligned}$$

such that    \( {{\,\textrm{L}\,}}_{n,1}\,{\mathbb {X}}_{n+1}=x\,{\mathbb {X}}_{n}\)    and    \({{\,\textrm{L}\,}}_{n,2}\,{\mathbb {X}}_{n+1}=y\,{\mathbb {X}}_{n}\).
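A minimal sketch of these matrices and of the two identities above (the function names are ours):

```python
import numpy as np

def L(n, k):
    """(n+1) x (n+2) matrix: L(n,1) = [I | 0] and L(n,2) = [0 | I]."""
    m = np.zeros((n + 1, n + 2))
    i = np.arange(n + 1)
    m[i, i + (k - 1)] = 1.0
    return m

def X(n, x, y):
    """Canonical column vector [x^n, x^(n-1) y, ..., y^n]^T."""
    return np.array([x**(n - i) * y**i for i in range(n + 1)])

x, y, n = 1.3, -0.7, 4
assert np.allclose(L(n, 1) @ X(n + 1, x, y), x * X(n, x, y))
assert np.allclose(L(n, 2) @ X(n + 1, x, y), y * X(n, x, y))
print("L matrices act on the canonical basis as announced")
```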

For \(i, j = 0,1\) and \(n\geqslant 0\), we introduce the matrices \({{\,\textrm{J}\,}}_{n}^{(i,j)}\) of dimension \((n+1)\times (2n+1+i+j)\) in the following way

$$\begin{aligned} {{\,\textrm{J}\,}}_{n}^{(i,j)} = \begin{bmatrix} j_{h,k}^{(n,i,j)} \end{bmatrix}_{h=0,\,k=0}^{n,\;2n+i+j}, \end{aligned}$$

such that \(j_{h,2h+j}^{(n,i,j)}=1\), for \(0\leqslant h\leqslant n\), and the other elements are zero. In particular,

$$\begin{aligned} {{\,\textrm{J}\,}}_{n}^{(0,0)}&= [e_1, \texttt{0}, e_2, \texttt{0}, \ldots , e_{n+1}]&{{\,\textrm{J}\,}}_{n}^{(1,0)}&= [e_1, \texttt{0}, e_2, \texttt{0}, \ldots , e_{n+1}, \texttt{0}] \\ {{\,\textrm{J}\,}}_{n}^{(0,1)}&= [\texttt{0},e_1, \texttt{0}, e_2, \texttt{0}, \ldots , e_{n+1}]&{{\,\textrm{J}\,}}_{n}^{(1,1)}&= [\texttt{0},e_1, \texttt{0}, e_2, \texttt{0}, \ldots , e_{n+1},\texttt{0}] , \end{aligned}$$

where \(e_i = [0, \ldots , 1, \ldots , 0]^\top \), for \(i= 1, 2, \ldots , n+1\), denotes the \(i\)-th coordinate vector, with 1 in position \(i\), and \(\texttt{0} = [0, \ldots , 0]^\top \) denotes a column of zeros.

Observe that the \({{\,\textrm{J}\,}}\)-matrices are obtained from the identity matrices by inserting columns of zeros. The purpose of these matrices is to extract the entries in odd or even positions of a vector of adequate size. Their transposes insert zeros into a vector in the odd or even positions.

A simple computation allows us to prove the next result.

Lemma 2.1

For \(n\geqslant 0\) and \(k=1,2\), the following relations hold:

$$\begin{aligned} {{\,\textrm{J}\,}}_{n}^{(0,0)} \, {{\,\textrm{L}\,}}_{2n,k} = {{\,\textrm{J}\,}}_{n}^{(2-k,k-1)},{} & {} {{\,\textrm{J}\,}}_{n}^{(1,1)} \, {{\,\textrm{L}\,}}_{2n+2,k} = {{\,\textrm{L}\,}}_{n,k}\, {{\,\textrm{J}\,}}_{n+1}^{(k-1,2-k)}, \\ {{\,\textrm{J}\,}}_{n}^{(k-1,2-k)} \, {{\,\textrm{L}\,}}_{2n+1,k} = {{\,\textrm{J}\,}}_{n}^{(1,1)},{} & {} {{\,\textrm{J}\,}}_{n}^{(2-k,k-1)} \, {{\,\textrm{L}\,}}_{2n+1,k} = {{\,\textrm{L}\,}}_{n,k}\, {{\,\textrm{J}\,}}_{n+1}^{(0,0)}. \end{aligned}$$
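Lemma 2.1 can be checked mechanically. In the sketch below (our function names), note that dimension counting forces the index \((2-k,k-1)\) in the fourth identity:

```python
import numpy as np

def L(n, k):
    """(n+1) x (n+2) matrix: L(n,1) = [I | 0] and L(n,2) = [0 | I]."""
    m = np.zeros((n + 1, n + 2))
    i = np.arange(n + 1)
    m[i, i + (k - 1)] = 1.0
    return m

def J(n, i, j):
    """(n+1) x (2n+1+i+j) matrix with entries j_{h,2h+j} = 1."""
    m = np.zeros((n + 1, 2 * n + 1 + i + j))
    h = np.arange(n + 1)
    m[h, 2 * h + j] = 1.0
    return m

for n in range(4):
    for k in (1, 2):
        assert np.array_equal(J(n, 0, 0) @ L(2*n, k), J(n, 2 - k, k - 1))
        assert np.array_equal(J(n, 1, 1) @ L(2*n + 2, k),
                              L(n, k) @ J(n + 1, k - 1, 2 - k))
        assert np.array_equal(J(n, k - 1, 2 - k) @ L(2*n + 1, k), J(n, 1, 1))
        assert np.array_equal(J(n, 2 - k, k - 1) @ L(2*n + 1, k),
                              L(n, k) @ J(n + 1, 0, 0))
print("Lemma 2.1 verified for n = 0..3")
```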

2.1 Orthogonal Polynomial Systems (OPS)

Let \(\{ {{\,\textrm{P}\,}}_{n,m}(x,y): 0\leqslant m \leqslant n, n\geqslant 0\}\) denote a basis of \(\Pi \) such that, for a fixed \(n\geqslant 0\), \(\deg {{\,\textrm{P}\,}}_{n,m}(x,y) = n\), and the set \(\{{{\,\textrm{P}\,}}_{n,m}(x,y): 0\leqslant m \leqslant n\}\) contains \(n+1\) linearly independent polynomials of total degree exactly n. We can write the vector of polynomials

$$\begin{aligned} \mathbb {P}_n= \begin{bmatrix} {{\,\textrm{P}\,}}_{n,0}(x,y)&{{\,\textrm{P}\,}}_{n,1}(x,y)&\cdots&{{\,\textrm{P}\,}}_{n,n}(x,y) \end{bmatrix}^{\top }. \end{aligned}$$

The sequence of polynomial vectors of increasing size \(\{\mathbb {P}_n\}_{n\geqslant 0}\) is called a polynomial system (PS), and it is a basis of \(\Pi \). We say that it is a monic polynomial system if every entry has the form

$$\begin{aligned} P_{n,j} (x,y) = x^{n-j}\,y^j + \sum _{m=0}^{n-1}\sum _{i=0}^m a_{m-i,i} x^{m-i}y^{i},{} & {} 0\leqslant j \leqslant n. \end{aligned}$$

Let W(x, y) be a weight function defined on a domain \( \Omega \subset {\mathbb {R}}^2\), and suppose that every moment exists,

$$\begin{aligned} \mu _{h,k} = \int _{\Omega } x^h\,y^k \, W(x,y) \,\textrm{d} x \textrm{d} y < +\infty ,{} & {} h, k \geqslant 0 . \end{aligned}$$

As usual, we define the inner product

$$\begin{aligned} ( p, q ) = \int _{ \Omega } p(x,y)\,q(x,y)\, W(x,y) \, \textrm{d} x \textrm{d} y,{} & {} p, q \in \Pi , \end{aligned}$$
(2.1)

and recall how the inner product acts on polynomial matrices. Let \({{\,\textrm{A}\,}}= \begin{bmatrix} a_{i,j}(x,y) \end{bmatrix}_{i,j=1}^{h,k}\) and \({{\,\textrm{B}\,}}= \begin{bmatrix} b_{i,j}(x,y) \end{bmatrix}_{i,j=1}^{l,k}\) be two polynomial matrices. The action of (2.1) on polynomial matrices is defined as the \(h \times l\) matrix (cf. [6]),

$$\begin{aligned} ( {{\,\textrm{A}\,}}, {{\,\textrm{B}\,}}) = \int _{ \Omega } {{\,\textrm{A}\,}}(x,y)\, {{\,\textrm{B}\,}}(x,y)^\top \, W(x,y) \, \textrm{d} x \textrm{d} y = \begin{bmatrix} \displaystyle \int _{ \Omega } c_{i,j}(x,y) W(x,y) \, \textrm{d} x \textrm{d} y \end{bmatrix}_{i,j=1}^{h,l}, \end{aligned}$$

where \({{\,\textrm{C}\,}}= {{\,\textrm{A}\,}}\cdot {{\,\textrm{B}\,}}^\top = \begin{bmatrix} c_{i,j}(x,y) \end{bmatrix}_{i,j=1}^{h,l}\).
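As an illustration of this matrix inner product, assume the constant weight \(W \equiv 1\) on \(\Omega = [-1,1]^2\) (our example, not from the text); the matrices \(({{\,\textrm{A}\,}}, {{\,\textrm{B}\,}})\) can then be computed by tensor Gauss–Legendre quadrature:

```python
import numpy as np

# 10-point Gauss-Legendre rule on [-1,1]; exact for the polynomial entries below.
nodes, weights = np.polynomial.legendre.leggauss(10)

def inner(A, B):
    """(A, B) = integral of A * B^T * W over [-1,1]^2, W = 1.
    A and B are callables returning matrices at a point (x, y)."""
    G = 0.0
    for xi, wi in zip(nodes, weights):
        for yj, wj in zip(nodes, weights):
            G = G + wi * wj * A(xi, yj) @ B(xi, yj).T
    return G

X1 = lambda x, y: np.array([[x], [y]])              # canonical vector, degree 1
X2 = lambda x, y: np.array([[x*x], [x*y], [y*y]])   # canonical vector, degree 2

print(inner(X1, X1))   # 2x2 Gram matrix: diag(4/3, 4/3)
print(inner(X1, X2))   # 2x3 matrix of odd moments: all zero
```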

A PS \(\{\mathbb {P}_n\}_{n\geqslant 0}\) is an orthogonal polynomial system (OPS) with respect to \((\cdot , \cdot )\) if

$$\begin{aligned} ( \mathbb {P}_n, \mathbb {P}_m ) = {\left\{ \begin{array}{ll} \texttt{0}_{(n+1) \times (m+1)}, &{} n\ne m,\\ {\textbf{P}}_n, &{} n=m, \end{array}\right. } \end{aligned}$$

where \({\textbf{P}}_n\) is a positive-definite symmetric matrix of size \(n+1\), and \(\texttt{0}_{(n+1) \times (m+1)}\), or \(\texttt{0}\) for short, is the zero matrix of adequate size. It was proved in [6] that there exists a unique monic orthogonal polynomial system associated with W(x, y), which we will call a MOPS for short.

In this work we will use Christoffel modifications of a weight function, given by multiplication by a polynomial of degree 1. In the next lemma we recall the relations between the monic OPS involved (see [1]).

Lemma 2.2

Let W(x, y) be a weight function defined on a domain \(\Omega \subset {\mathbb {R}}^2\), and let \(\lambda (x,y) = a\,x + b\,y\) be a polynomial with \(|a| + |b| >0\), such that \(W^\mathbf *(x,y) = \lambda (x,y)\,W(x,y)\) is again a weight function on \(\Omega \). Let \(\{{\mathbb {P}}_n\}_{n\geqslant 0}\) and \(\{{\mathbb {P}}^\mathbf *_n\}_{n\geqslant 0}\) be the respective monic OPS. Then, for all \(n \geqslant 1\),

$$\begin{aligned} \mathbb {P}_{n}&= \mathbb {P}^\mathbf *_{n} + {{\,\textrm{M}\,}}_{n}\,\mathbb {P}^\mathbf *_{n-1},\\ \lambda (x,y)\,\mathbb {P}^\mathbf *_{n}&= \big ( a {{\,\textrm{L}\,}}_{n,1} + b {{\,\textrm{L}\,}}_{n,2} \big ) \,\mathbb {P}_{n+1} + {{\,\textrm{N}\,}}_{n}\,\mathbb {P}_{n}, \end{aligned}$$

where

$$\begin{aligned} {{\,\textrm{M}\,}}_{n} = {\textbf{P}}_n\,(a\,{{\,\textrm{L}\,}}_{n-1,1}^\top + b\,{{\,\textrm{L}\,}}_{n-1,2}^\top )\,({\textbf{P}}_{n-1}^\mathbf *)^{-1},{} & {} {{\,\textrm{N}\,}}_{n} = {\textbf{P}}_n^\mathbf *\,{\textbf{P}}_{n}^{-1}, \end{aligned}$$

and

$$\begin{aligned} {\textbf{P}}_n = \int _{\Omega } \mathbb {P}_n\,\mathbb {P}_n^\top \,W(x,y)\, \textrm{d}x \textrm{d}y, \qquad {\textbf{P}}^\mathbf *_n = \int _{\Omega } \mathbb {P}^\mathbf *_n\,(\mathbb {P}^\mathbf *_n)^\top \,W^\mathbf {*}(x,y)\, \textrm{d}x \textrm{d}y , \end{aligned}$$

are non-singular matrices of size \((n+1)\).

3 Symmetric Monic Orthogonal Polynomial Sequences

A weight function W(x, y) defined on \(\Omega \subset {\mathbb {R}}^2\) is called centrally symmetric (cf. [6, p. 76]) if it satisfies

$$\begin{aligned} (x,y)\in \Omega \Rightarrow (-x,-y) \in \Omega{} & {} \text {and}{} & {} W(-x,-y) = W(x,y),{} & {} \forall (x,y)\in \Omega . \end{aligned}$$

Therefore, by a natural change of variables, we get

$$\begin{aligned} \mu _{h,k} = \int _{\Omega } x^h y^k W(x,y)\,\textrm{d}x \textrm{d}y = \int _{\Omega } (-x)^h(-y)^kW(-x,-y)\,\textrm{d}x \textrm{d}y = (-1)^{h+k} \mu _{h,k}, \end{aligned}$$

and then \(\mu _{h,k} =0\) whenever \(h+k\) is odd.

We introduce an additional definition of symmetry.

Definition 3.1

We say that a weight function W(x, y) is x-symmetric if

$$\begin{aligned} (x,y)\in \Omega \Rightarrow (-x,y) \in \Omega ,{} & {} \text {and}{} & {} W(-x,y) = W(x,y),{} & {} \forall (x,y)\in \Omega . \end{aligned}$$

Analogously, the weight function is y-symmetric if

$$\begin{aligned} (x,y)\in \Omega \Rightarrow (x,-y) \in \Omega ,{} & {} \text {and}{} & {} W(x,-y) = W(x,y),{} & {} \forall (x,y)\in \Omega . \end{aligned}$$

An x-symmetric and y-symmetric weight function is called \(x\,y\)-symmetric.

Obviously, if W(x, y) is \(x\,y\)-symmetric then it is centrally symmetric. As a consequence, if W(x, y) is \(x\,y\)-symmetric, then \(\mu _{h,k} =0\) whenever at least one of h or k is odd.

Let \(\{\mathbb {S}_n\}_{n\geqslant 0}\) be the MOPS associated with an \(x\,y\)-symmetric weight function, satisfying

$$\begin{aligned} (\mathbb {S}_n,\mathbb {S}_m) = \int _{\Omega } \mathbb {S}_n(x,y)\,\mathbb {S}_m(x,y)^\top \,W(x,y)\,\textrm{d}x \textrm{d}y = {\left\{ \begin{array}{ll} \texttt{0} , &{} n\ne m,\\ {\textbf{S}}_n , &{} n=m, \end{array}\right. } \end{aligned}$$
(3.1)

where \({\textbf{S}}_n\) is an \((n+1)\times (n+1)\) positive-definite symmetric matrix.

Lemma 3.2

If the explicit expression of every vector polynomial is given by

$$\begin{aligned} \mathbb {S}_n(x,y) = \begin{bmatrix} S_{n,0}(x,y)&S_{n,1}(x,y)&S_{n,2}(x,y)&\cdots&S_{n,n}(x,y) \end{bmatrix}^\top , \end{aligned}$$

where

$$\begin{aligned} S_{n,k}(x,y)&= \sum _{i=0}^{\lfloor (n-k)/2\rfloor } \sum _{j=0}^{\lfloor k/2\rfloor } a_{i,j}^{n,k} \, x^{n-k-2i}\,y^{k-2j},{} & {} 0\leqslant k \leqslant n , \end{aligned}$$

with \(a_{0,0}^{n,k} = 1\), then the polynomials are \(x\,y\)-symmetric, that is,

$$\begin{aligned} S_{n,k}(x,y) = (-1)^n S_{n,k}(-x,-y) = (-1)^{n-k} S_{n,k}(-x,y) = (-1)^k S_{n,k}(x,-y). \end{aligned}$$

When a bivariate polynomial \(S_{n,k}(x,y)\) is \(x\,y\)-symmetric, it has the same parity in each variable, i.e., if the partial degree in the first variable x is even (respectively, odd), then all powers of x are even (respectively, odd), and analogously, if the partial degree in the second variable y is even (respectively, odd), then all powers of y are even (respectively, odd).
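For a concrete check, take the product weight \(W(x,y) = e^{-x^2-y^2}\) (an assumed example): its MOPS has entries \(S_{n,k}(x,y) = {\widehat{H}}_{n-k}(x)\,{\widehat{H}}_k(y)\), products of monic Hermite polynomials, and the symmetry relations of Lemma 3.2 can be verified numerically:

```python
import numpy as np

def hermite_monic(n, t):
    """Monic Hermite polynomial for weight e^{-t^2}: H_{m+1} = t H_m - (m/2) H_{m-1}."""
    h_prev, h = np.zeros_like(t), np.ones_like(t)
    for m in range(n):
        h_prev, h = h, t * h - (m / 2.0) * h_prev
    return h

def S(n, k, x, y):
    # entry S_{n,k} of the MOPS for the product weight e^{-x^2-y^2}
    return hermite_monic(n - k, x) * hermite_monic(k, y)

rng = np.random.default_rng(1)
x, y = rng.standard_normal(40), rng.standard_normal(40)
for n in range(6):
    for k in range(n + 1):
        s = S(n, k, x, y)
        assert np.allclose(S(n, k, -x, -y), (-1)**n * s)
        assert np.allclose(S(n, k, -x, y), (-1)**(n - k) * s)
        assert np.allclose(S(n, k, x, -y), (-1)**k * s)
print("Lemma 3.2 symmetries hold for the product Hermite example")
```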

Therefore, the vector polynomial \(\mathbb {S}_{n}(x,y)\) can be separated in a zip way, attending to the parity of the powers of x and y in its entries. In fact, for even, respectively odd degree, we get

$$\begin{aligned} \mathbb {S}_{2n} = \begin{bmatrix} S_{2n,0} \\ 0 \\ S_{2n,2} \\ 0 \\ \vdots \\ 0 \\ S_{2n,2n} \end{bmatrix} + \begin{bmatrix} 0 \\ S_{2n,1} \\ 0 \\ S_{2n,3} \\ \vdots \\ S_{2n,2n-1} \\ 0 \end{bmatrix}, \quad \mathbb {S}_{2n+1} = \begin{bmatrix} S_{2n+1,0} \\ 0 \\ S_{2n+1,2} \\ 0 \\ \vdots \\ S_{2n+1,2n}\\ 0\end{bmatrix} + \begin{bmatrix} 0 \\ S_{2n+1,1} \\ 0 \\ S_{2n+1,3} \\ \vdots \\ 0\\ S_{2n+1,2n+1} \end{bmatrix}.\nonumber \\ \end{aligned}$$
(3.2)

Lemma 3.3

We can express the monic orthogonal polynomial vectors as

$$\begin{aligned} \mathbb {S}_{2n}(x,y)= & {} \mathbb {P}_n^{(0,0)}(x^2, y^2) + x\,y\,\mathbb {P}_{n-1}^{(1,1)}(x^2, y^2), \end{aligned}$$
(3.3)
$$\begin{aligned} \mathbb {S}_{2n+1}(x,y)= & {} x\, \mathbb {P}_n^{(1,0)}(x^2, y^2) + y\,\mathbb {P}_{n}^{(0,1)}(x^2, y^2), \end{aligned}$$
(3.4)

where, for \(n\geqslant 0\),

  • \(\mathbb {P}_n^{(0,0)}(x^2, y^2)\) is a vector of size \((2n+1)\times 1\) whose odd entries are independent monic polynomials of exact degree n on \((x^2,\,y^2)\), and its even entries are zeroes,

  • \(\mathbb {P}_{n}^{(1,1)}(x^2, y^2)\) is a vector of size \((2n+3)\times 1\) whose even entries are independent monic polynomials of exact degree n on \((x^2,\,y^2)\), and its odd entries are zeroes,

  • \(\mathbb {P}_n^{(1,0)}(x^2, y^2)\) is a vector of size \((2n+2)\times 1\) whose odd entries are independent monic polynomials of exact degree n on \((x^2,\,y^2)\), and its even entries are zeroes,

  • \(\mathbb {P}_n^{(0,1)}(x^2, y^2)\) is a vector of size \((2n+2)\times 1\) whose even entries are independent monic polynomials of exact degree n on \((x^2,\,y^2)\), and its odd entries are zeroes.

These families will be called big vector polynomials associated with \(\{\mathbb {S}_n\}_{n\geqslant 0}\). We must observe that the big families are formed by vectors of polynomials in the variables \((x^2, y^2)\), containing independent polynomials interlaced with zeros.

Our objective is to extract the odd entries in the vectors \(\mathbb {P}^{(i,0)}_{n}(x, y)\), and the even entries in \(\mathbb {P}^{(i,1)}_{n}(x, y)\), for \(i=0,1\).

Lemma 3.4

For \(n\geqslant 0\), and \(i, j = 0,1\), we define the \((n+1)\times 1\) vector of polynomials

$$\begin{aligned} {\widehat{\mathbb {P}}}_n^{(i,j)}(x,y) = {{\,\textrm{J}\,}}_{n}^{(i,j)}\mathbb {P}_n^{(i,j)}(x,y),{} & {} n\geqslant 0. \end{aligned}$$

Then, its entries are independent polynomials of exact degree n, and therefore, the sequences of vectors of polynomials \(\{{\widehat{\mathbb {P}}}_n^{(i,j)}\}_{n\geqslant 0}\) are polynomial systems.

4 Quadratic Decomposition Process

Taking into account Lemma 3.2, we start by studying the inherited orthogonality properties of the polynomial systems \(\{{\widehat{\mathbb {P}}}_n^{(i,j)}\}_{n\geqslant 0}\), for \(i,j=0,1\).

Theorem 4.1

Let \(\{\mathbb {S}_n\}_{n\geqslant 0}\) be an \(x\,y\)-symmetric monic orthogonal polynomial system associated with a weight function W(x, y) defined on a domain \(\Omega \subset {\mathbb {R}}^2\). Then, the four families of polynomials \(\{{\widehat{\mathbb {P}}}_n^{(i,j)}\}_{n\geqslant 0}\), for \(i,j=0,1\), defined in terms of the big ones by (3.3), (3.4) and \({\widehat{\mathbb {P}}}_n^{(i,j)} = {{\,\textrm{J}\,}}_{n}^{(i,j)}\,\mathbb {P}_n^{(i,j)}\), are monic orthogonal polynomial systems (MOPS) associated, respectively, with the weight functions

$$\begin{aligned} W^{(0,0)}(x,y) =&\dfrac{1}{4}\,\dfrac{1}{\sqrt{x\,y}}\,W(\sqrt{x},\sqrt{y}) , \\ W^{(1,0)}(x,y) =&\dfrac{1}{4}\,\sqrt{\dfrac{x}{y}}\,W(\sqrt{x},\sqrt{y})=x\, W^{(0,0)}(x,y) , \\ W^{(0,1)}(x,y) =&\dfrac{1}{4}\,\sqrt{\dfrac{y}{x}}\,W(\sqrt{x},\sqrt{y})=y\,W^{(0,0)}(x,y) , \\ W^{(1,1)}(x,y) =&\dfrac{1}{4}\,\sqrt{x\,y}\,W(\sqrt{x},\sqrt{y})=x\,y\,W^{(0,0)}(x,y) , \end{aligned}$$

for all \( (x,y) \in \Omega ^\mathbf * = \{(x,y)\in {\mathbb {R}}^2: x, y \geqslant 0, (\sqrt{x}, \sqrt{y})\in \Omega \}.\)

Proof

From expression (3.3) and the \(x\,y\)-symmetry of the inner product (3.1), we get

$$\begin{aligned} (\mathbb {S}_{2n}(x,y), \mathbb {S}_{2m} (x,y))&= (\mathbb {P}_n^{(0,0)}(x^2, y^2), \mathbb {P}_m^{(0,0)}(x^2, y^2)) \\&\quad + (x\,y\,\mathbb {P}_{n-1}^{(1,1)}(x^2, y^2), x\,y\,\mathbb {P}_{m-1}^{(1,1)}(x^2, y^2)), \end{aligned}$$

On the one hand, if \(n\ne m\), then \((\mathbb {S}_{2n}(x,y),\mathbb {S}_{2m}(x,y)) = \texttt{0}\) if and only if

$$\begin{aligned} (\mathbb {P}_n^{(0,0)}(x^2, y^2), \mathbb {P}_m^{(0,0)}(x^2, y^2)) = \texttt{0},{} & {} (x\,y\,\mathbb {P}_{n-1}^{(1,1)}(x^2, y^2), x\,y\,\mathbb {P}_{m-1}^{(1,1)}(x^2, y^2)) = \texttt{0}, \end{aligned}$$

because of the positivity of the inner product.

On the other hand, if \(n = m\), then \((\mathbb {S}_{2n}(x,y),\mathbb {S}_{2n}(x,y)) = {\textbf{S}}_{2n}\), a symmetric positive-definite matrix, and defining the matrices

$$\begin{aligned} {\textbf{P}}^{(0,0)}_n&= (\mathbb {P}_n^{(0,0)}(x^2, y^2), \mathbb {P}_n^{(0,0)}(x^2, y^2)) = \int _{\Omega } \mathbb {P}^{(0,0)}_n(x^2,y^2)\mathbb {P}^{(0,0)}_n(x^2,y^2)^\top W(x,y)\textrm{d}x \textrm{d}y \\ {\textbf{P}}^{(1,1)}_{n-1}&= (x\,y\,\mathbb {P}_{n-1}^{(1,1)}(x^2, y^2), x\,y\,\mathbb {P}_{n-1}^{(1,1)}(x^2, y^2)) \\&= \int _{\Omega } \mathbb {P}^{(1,1)}_{n-1}(x^2,y^2)\mathbb {P}^{(1,1)}_{n-1}(x^2,y^2)^\top x^2 y^2 W(x,y)\textrm{d}x \textrm{d}y, \end{aligned}$$

they are symmetric of size \((2n+1)\times (2n+1)\), since W(xy) is a weight function on \(\Omega \), and \(x^2\,y^2\,W(x,y)\) is a positive definite Christoffel perturbation. Therefore,

$$\begin{aligned} {\textbf{S}}_{2n} = {\textbf{P}}^{(0,0)}_n + {\textbf{P}}^{(1,1)}_{n-1}. \end{aligned}$$

In order to recover a MOPS, we need to make a change of variables and multiply by a suitable \({{\,\textrm{J}\,}}\)-matrix to shrink the vectors to an adequate size. Hence, we define the change of variables \(u= x^2\), \(v=y^2\), and the integration domain becomes \(\Omega ^\mathbf * = \{(u,v)\in {\mathbb {R}}^2: u, v \geqslant 0, (\sqrt{u}, \sqrt{v})\in \Omega \}.\)

Then, the PS \(\{{\widehat{\mathbb {P}}}_{n}^{(0,0)}\}_{n\geqslant 0} = \{{{\,\textrm{J}\,}}_{n}^{(0,0)}\,\mathbb {P}_{n}^{(0,0)}(u,v)\}_{n\geqslant 0}\) is orthogonal in the form

$$\begin{aligned} ({\widehat{\mathbb {P}}}_{n}^{(0,0)},&{\widehat{\mathbb {P}}}_{n}^{(0,0)})^{(0,0)} = \dfrac{1}{4}\int _{\Omega ^\mathbf *} {\widehat{\mathbb {P}}}_{n}^{(0,0)}(u,v){\widehat{\mathbb {P}}}_{n}^{(0,0)}(u,v)^\top W^{(0,0)}(u,v)\textrm{d} u \textrm{d} v\\&= \dfrac{1}{4} {{\,\textrm{J}\,}}_{n}^{(0,0)} \int _{\Omega ^\mathbf *} \mathbb {P}_{n}^{(0,0)}(u,v)\mathbb {P}_{n}^{(0,0)}(u,v)^\top \dfrac{1}{\sqrt{u\,v}}\,W(\sqrt{u},\sqrt{v})\textrm{d} u \textrm{d} v ({{\,\textrm{J}\,}}_{n}^{(0,0)})^\top \\&= {{\,\textrm{J}\,}}_{n}^{(0,0)} \int _{\Omega } \mathbb {P}_{n}^{(0,0)}(x^2,y^2)\mathbb {P}_{n}^{(0,0)}(x^2,y^2)^\top \,W(x, y) \, \textrm{d} x \textrm{d} y ({{\,\textrm{J}\,}}_{n}^{(0,0)})^\top \\&= {{\,\textrm{J}\,}}_{n}^{(0,0)}\,{\textbf{P}}_n^{(0,0)}\, ({{\,\textrm{J}\,}}_{n}^{(0,0)})^\top = \widehat{{\textbf{P}}}_n^{(0,0)},\\ ({\widehat{\mathbb {P}}}_{n}^{(0,0)},&{\widehat{\mathbb {P}}}_{m}^{(0,0)})^{(0,0)} = \texttt{0}. \end{aligned}$$

Moreover, \(\widehat{{\textbf{P}}}_n^{(0,0)}\) is a symmetric full-rank matrix of size \((n+1)\), since \(\{{\widehat{\mathbb {P}}}_{n}^{(0,0)}\}_{n\geqslant 0}\) is a PS.

Acting in the same way on \(\{{\widehat{\mathbb {P}}}_{n}^{(1,1)}\}_{n\geqslant 0} = \{{{\,\textrm{J}\,}}_{n}^{(1,1)}\,\mathbb {P}_{n}^{(1,1)}(u,v)\}_{n\geqslant 0}\), we can prove the orthogonality relations

$$\begin{aligned} ({\widehat{\mathbb {P}}}_{n}^{(1,1)},&{\widehat{\mathbb {P}}}_{n}^{(1,1)})^{(1,1)} = \dfrac{1}{4}\int _{\Omega ^\mathbf *} {\widehat{\mathbb {P}}}_{n}^{(1,1)}(u,v){\widehat{\mathbb {P}}}_{n}^{(1,1)}(u,v)^\top \,W^{(1,1)}(u,v)\textrm{d} u \textrm{d} v\\&= \dfrac{1}{4} {{\,\textrm{J}\,}}_{n}^{(1,1)} \int _{\Omega ^\mathbf *} \mathbb {P}_{n}^{(1,1)}(u,v)\mathbb {P}_{n}^{(1,1)}(u,v)^\top \,\sqrt{u\,v}\,W(\sqrt{u},\sqrt{v})\textrm{d} u \textrm{d} v ({{\,\textrm{J}\,}}_{n}^{(1,1)})^\top \\&= {{\,\textrm{J}\,}}_{n}^{(1,1)}\,\int _{\Omega } \mathbb {P}_{n}^{(1,1)}(x^2,y^2)\,\mathbb {P}_{n}^{(1,1)}(x^2,y^2)^\top \, x^2\,y^2\,W(x,y) \textrm{d}x \textrm{d}y\,({{\,\textrm{J}\,}}_{n}^{(1,1)})^\top \\&= {{\,\textrm{J}\,}}_{n}^{(1,1)}\,{\textbf{P}}_{n}^{(1,1)}\, ({{\,\textrm{J}\,}}_{n}^{(1,1)})^\top = \widehat{{\textbf{P}}}_{n}^{(1,1)},\\ ({\widehat{\mathbb {P}}}_{n}^{(1,1)},&{\widehat{\mathbb {P}}}_{m}^{(1,1)})^{(1,1)} = \texttt{0}. \end{aligned}$$

Now we take the inner product of two odd-degree symmetric polynomial vectors; using (3.4) and the \(x\,y\)-symmetry, we obtain

$$\begin{aligned} (\mathbb {S}_{2n+1}(x,y),\mathbb {S}_{2m+1}(x,y))= & {} (x\, \mathbb {P}_n^{(1,0)}(x^2, y^2), x\, \mathbb {P}_m^{(1,0)}(x^2, y^2)) \\{} & {} + (y\,\mathbb {P}_{n}^{(0,1)}(x^2, y^2), y\,\mathbb {P}_{m}^{(0,1)}(x^2, y^2)). \end{aligned}$$

Using the same reasoning as in the even case, and defining the PS \(\{{\widehat{\mathbb {P}}}_{n}^{(1-j,j)}\}_{n\geqslant 0}\) \(= \{{{\,\textrm{J}\,}}_{n}^{(1-j,j)}\,\mathbb {P}_{n}^{(1-j,j)}(u,v)\}_{n\geqslant 0} \), for \(j=0,1\), we prove that they are orthogonal, as

$$\begin{aligned} ({\widehat{\mathbb {P}}}_{n}^{(1-j,j)},{\widehat{\mathbb {P}}}_{n}^{(1-j,j)})^{(1-j,j)}&= \dfrac{1}{4}\int _{\Omega ^\mathbf *} {\widehat{\mathbb {P}}}_{n}^{(1-j,j)}(u,v){\widehat{\mathbb {P}}}_{n}^{(1-j,j)}(u,v)^\top W^{(1-j,j)}(u,v) \textrm{d} u \textrm{d} v\\&= {{\,\textrm{J}\,}}_{n}^{(1-j,j)} \,{\textbf{P}}_n^{(1-j,j)}\, ({{\,\textrm{J}\,}}_{n}^{(1-j,j)})^\top = \widehat{{\textbf{P}}}_{n}^{(1-j,j)},\\ ({\widehat{\mathbb {P}}}_{n}^{(1-j,j)},{\widehat{\mathbb {P}}}_{m}^{(1-j,j)})^{(1-j,j)}&=\texttt{0}, \end{aligned}$$

which ends the proof. \(\square \)
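The change of variables underlying this proof can be illustrated numerically. For the assumed product weight \(W(x,y)=e^{-x^2-y^2}\), integrals of monomials \(u^a v^b\) against \(W^{(0,0)}(u,v)=\frac{1}{4}(uv)^{-1/2}e^{-u-v}\) over the quarter plane equal \(\frac{1}{4}\int_{{\mathbb {R}}^2} x^{2a}y^{2b}\,W(x,y)\,\textrm{d}x\textrm{d}y\), which a Gauss–Hermite rule can evaluate:

```python
import numpy as np
from math import gamma, isclose

# 30-node Gauss-Hermite rule for the weight e^{-x^2} on the real line.
x, w = np.polynomial.hermite.hermgauss(30)

for a, b in [(0, 0), (1, 0), (1, 1), (2, 3)]:
    # exact value of int u^a v^b W00(u,v) du dv over the quarter plane:
    # (1/4) * Gamma(a + 1/2) * Gamma(b + 1/2)
    lhs = 0.25 * gamma(a + 0.5) * gamma(b + 0.5)
    # same integral after u = x^2, v = y^2, as a product of Hermite sums
    rhs = 0.25 * (w @ x**(2 * a)) * (w @ x**(2 * b))
    assert isclose(lhs, rhs, rel_tol=1e-10)
print("change of variables u = x^2, v = y^2 checked on monomials")
```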

In a similar way we can prove the converse result.

Fig. 1: Relation between the four weight functions and the corresponding polynomial systems

Theorem 4.2

Let \({\widehat{W}}(x,y)\) be a weight function defined on \(\Omega ^\mathbf *\subset {\mathbb {R}}^2_+ = \{(x,y)\in {\mathbb {R}}^2: x, y \geqslant 0\}\), and let \(\{{\widehat{\mathbb {P}}}^{(0,0)}_n\}_{n\geqslant 0}\) be the corresponding monic OPS. Let \(\{{\widehat{\mathbb {P}}}^{(1,0)}_n\}_{n\geqslant 0}\), \(\{{\widehat{\mathbb {P}}}^{(0,1)}_n\}_{n\geqslant 0}\), and \(\{{\widehat{\mathbb {P}}}^{(1,1)}_n\}_{n\geqslant 0}\) be the respective MOPS associated with the modifications of the weight function (Fig. 1)

$$\begin{aligned} W^{(1,0)}(x,y) = x\,{\widehat{W}}(x,y),{} & {} W^{(0,1)}(x,y) = y\,{\widehat{W}}(x,y),{} & {} W^{(1,1)}(x,y) = x\,y\,{\widehat{W}}(x,y). \end{aligned}$$

Define the family of vector polynomials \(\{{\mathbb {S}}_n\}_{n\geqslant 0}\) by means of (3.3) and (3.4), where \(\{\mathbb {P}_n^{(i,j)} = ({{\,\textrm{J}\,}}_{n}^{(i,j)})^\top \, {\widehat{\mathbb {P}}}^{(i,j)}_n\}_{n\geqslant 0}\), for \(i,j=0,1\). Then \(\{{\mathbb {S}}_n\}_{n\geqslant 0}\) is an \(x\,y\)-symmetric monic orthogonal polynomial system associated with the weight function

$$\begin{aligned} W(x,y) = 4\,|x|\,|y|\,{\widehat{W}}(x^2, y^2) ,{} & {} (x,y) \in \Omega = \{(x,y)\in {\mathbb {R}}^2: (x^2, y^2) \in \Omega ^\mathbf *\}. \end{aligned}$$

As a consequence of Theorem 4.2, and since \(W^{(1,0)}(x,y)\), \(W^{(0,1)}(x,y)\), and \(W^{(1,1)}(x,y)\) are Christoffel modifications of the original weight function, following [1] and Lemma 2.2 there exist matrices of adequate size such that short relations hold between these families of orthogonal polynomials.

In the next section we will describe explicitly those relations.

5 Bäcklund-Type Relations

Orthogonal polynomials in two variables satisfy a three term relation in each variable (cf. [6]), written in vector form with matrix coefficients. In this section we relate the matrix coefficients of the three term relations for the monic orthogonal polynomial sequences involved in Theorems 4.1 and 4.2.

If \(\{{\mathbb {S}}_n\}_{n\geqslant 0}\) is a MOPS associated with a centrally symmetric weight function, the three term relation takes a simple form. In fact, [6, Theorem 3.3.10] states that a measure is centrally symmetric if, and only if, the corresponding orthogonal polynomials satisfy the three term relations

$$\begin{aligned} {\left\{ \begin{array}{ll} x \, \mathbb {S}_n(x,y) = {{\,\textrm{L}\,}}_{n,1} \, \mathbb {S}_{n+1}(x,y) + \Gamma _{n,1}\,\mathbb {S}_{n-1}(x,y), \\ y \, \mathbb {S}_n(x,y) = {{\,\textrm{L}\,}}_{n,2} \, \mathbb {S}_{n+1}(x,y) + \Gamma _{n,2}\,\mathbb {S}_{n-1}(x,y), \end{array}\right. } \end{aligned}$$
(5.1)

for \(n\geqslant 0\), where \(\mathbb {S}_{-1}(x,y)=0\), \(\Gamma _{-1,k}=0\), and

$$\begin{aligned} \Gamma _{n,k} = {\textbf{S}}_n\, {{\,\textrm{L}\,}}_{n-1,k}^{\top }\, {\textbf{S}}_{n-1}^{-1},{} & {} n\geqslant 1,{} & {} k=1,2, \end{aligned}$$

are matrices of size \((n+1)\times n\) with \(\textrm{rank}\, \Gamma _{n,k} = n\), for \(k=1,2\).
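As a concrete illustration (ours, not from the paper), consider the centrally symmetric product weight \(e^{-x^2-y^2}\) on \({\mathbb {R}}^2\), whose monic OPS are products of monic Hermite polynomials. Assuming the graded lexicographic ordering, so that \({{\,\textrm{L}\,}}_{n,1}=[\,\textrm{I}_{n+1}\mid \texttt{0}\,]\), both (5.1) and the formula for \(\Gamma _{n,1}\) can be verified symbolically:

```python
import sympy as sp

x, y = sp.symbols('x y')

def h(m, t):
    # monic Hermite polynomials for exp(-t^2): t*h_k = h_{k+1} + (k/2) h_{k-1}
    ps = [sp.Integer(1), t]
    for k in range(1, m):
        ps.append(sp.expand(t*ps[k] - sp.Rational(k, 2)*ps[k-1]))
    return ps[m]

# vector of monic bivariate OPS for the product weight exp(-x^2 - y^2)
S = lambda n: sp.Matrix([h(n - j, x)*h(j, y) for j in range(n + 1)])
# the Gram matrix S_n is diagonal; sqrt(pi) factors cancel in Gamma, so m!/2^m suffices
H = lambda n: sp.diag(*[sp.factorial(n - j)/2**(n - j)*sp.factorial(j)/2**j
                        for j in range(n + 1)])
L = lambda n: sp.Matrix.hstack(sp.eye(n + 1), sp.zeros(n + 1, 1))  # L_{n,1}

n = 3
Gamma = H(n)*L(n - 1).T*H(n - 1).inv()  # Gamma_{n,1} = S_n L_{n-1,1}^T S_{n-1}^{-1}
assert (x*S(n) - L(n)*S(n + 1) - Gamma*S(n - 1)).expand() == sp.zeros(n + 1, 1)
assert Gamma.rank() == n  # full rank, as stated
```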

The four systems of monic orthogonal polynomials \(\{\widehat{{\mathbb {P}}}^{(i,j)}_n\}_{n\geqslant 0}\), with \(i,j=0,1\), involved in Theorems 4.1 and 4.2, satisfy the three term relations

$$\begin{aligned} {\left\{ \begin{array}{ll} x \, {\widehat{\mathbb {P}}}^{(i,j)} _n(x,y) = {{\,\textrm{L}\,}}_{n,1} \, {\widehat{\mathbb {P}}}^{(i,j)} _{n+1}(x,y) + {\widehat{{{\,\textrm{D}\,}}}}_{n,1}^{(i,j)}\,{\widehat{\mathbb {P}}}^{(i,j)} _n(x,y) + {\widehat{{{\,\textrm{C}\,}}}}_{n,1}^{(i,j)}\,{\widehat{\mathbb {P}}}_{n-1}^{(i,j)}(x,y), \\ y \, {\widehat{\mathbb {P}}}^{(i,j)} _n(x,y) = {{\,\textrm{L}\,}}_{n,2} \, {\widehat{\mathbb {P}}}^{(i,j)} _{n+1}(x,y) + {\widehat{{{\,\textrm{D}\,}}}}_{n,2}^{(i,j)}\,{\widehat{\mathbb {P}}}^{(i,j)} _n(x,y) + {\widehat{{{\,\textrm{C}\,}}}}_{n,2}^{(i,j)}\,{\widehat{\mathbb {P}}}_{n-1}^{(i,j)}(x,y), \end{array}\right. } \end{aligned}$$

where \({\widehat{\mathbb {P}}}^{(i,j)} _{-1}=0\), \({\widehat{{{\,\textrm{C}\,}}}}_{-1,k}^{(i,j)}=0\), \({\widehat{{{\,\textrm{D}\,}}}}_{n,k}^{(i,j)}\) and \({\widehat{{{\,\textrm{C}\,}}}}_{n,k}^{(i,j)}\) are matrices of respective sizes \((n+1)\times (n+1)\) and \((n+1)\times n\), such that

$$\begin{aligned} {\widehat{{{\,\textrm{D}\,}}}}_{n,1}^{(i,j)}\, \widehat{{\textbf{P}}}_{n}^{(i,j)}&= ( x\,{\widehat{\mathbb {P}}}^{(i,j)} _n,\, {\widehat{\mathbb {P}}}^{(i,j)} _{n})^{(i,j)},{} & {} {\widehat{{{\,\textrm{D}\,}}}}_{n,2}^{(i,j)}\, \widehat{{\textbf{P}}}_{n}^{(i,j)} = ( y\,{\widehat{\mathbb {P}}}^{(i,j)} _n,\, {\widehat{\mathbb {P}}}^{(i,j)} _{n})^{(i,j)}, \\ {\widehat{{{\,\textrm{C}\,}}}}_{n,1}^{(i,j)}\, \widehat{{\textbf{P}}}_{n-1}^{(i,j)}&= \widehat{{\textbf{P}}}_{n}^{(i,j)}\, {{\,\textrm{L}\,}}_{n-1,1}^{\top },{} & {} {\widehat{{{\,\textrm{C}\,}}}}_{n,2}^{(i,j)}\, \widehat{{\textbf{P}}}_{n-1}^{(i,j)} = \widehat{{\textbf{P}}}_{n}^{(i,j)}\, {{\,\textrm{L}\,}}_{n-1,2}^{\top } , \end{aligned}$$

where \(\widehat{{\textbf{P}}}_{n}^{(i,j)} = ({\widehat{\mathbb {P}}}^{(i,j)} _n,\,{\widehat{\mathbb {P}}}^{(i,j)} _{n})^{(i,j)}\). In addition, the \((n+1)\times n\) matrices \({\widehat{{{\,\textrm{C}\,}}}}_{n,k}^{(i,j)}\) have full rank n, for \(i,j=0,1\) and \(k=1,2\).

Suppose that the \(x\,y\)-symmetric monic polynomial system \(\{{\mathbb {S}}_n\}_{n\geqslant 0}\) and the four families of MOPS are related by (3.3) and (3.4), where \(\{\mathbb {P}_n^{(i,j)} = ({{\,\textrm{J}\,}}_{n}^{(i,j)})^\top \, {\widehat{\mathbb {P}}}^{(i,j)}_n\}_{n\geqslant 0}\), for \(i,j=0,1\), are the respective families of big polynomials.

Theorem 5.1

(Bäcklund-type relations) Under the above conditions, the following relations hold, for all \(n\geqslant 0\) and \(k=1,2\),

$$\begin{aligned} {\widehat{{{\,\textrm{D}\,}}}}_{n,k}^{(0,0)}&= {{\,\textrm{J}\,}}_{n}^{(0,0)}\,[{{\,\textrm{L}\,}}_{2n,k}\,\Gamma _{2n+1,k} + \Gamma _{2n,k}\,{{\,\textrm{L}\,}}_{2n-1,k}]\,({{\,\textrm{J}\,}}_{n}^{(0,0)})^\top , \\ {\widehat{{{\,\textrm{C}\,}}}}_{n,k}^{(0,0)}&= {{\,\textrm{J}\,}}_{n}^{(0,0)}\,\Gamma _{2n,k}\,\Gamma _{2n-1,k}\,({{\,\textrm{J}\,}}_{n-1}^{(0,0)})^\top ,\\ {\widehat{{{\,\textrm{D}\,}}}}_{n,k}^{(1,1)}&= {{\,\textrm{J}\,}}_{n}^{(1,1)}\,[{{\,\textrm{L}\,}}_{2n+2,k}\,\Gamma _{2n+3,k} + \Gamma _{2n+2,k}\,{{\,\textrm{L}\,}}_{2n+1,k}]\,({{\,\textrm{J}\,}}_{n}^{(1,1)})^\top , \\ {\widehat{{{\,\textrm{C}\,}}}}_{n,k}^{(1,1)}&= {{\,\textrm{J}\,}}_{n}^{(1,1)}\,\Gamma _{2n+2,k}\,\Gamma _{2n+1,k}\,({{\,\textrm{J}\,}}_{n-1}^{(1,1)})^\top ,\\ {\widehat{{{\,\textrm{D}\,}}}}_{n,k}^{(1,0)}&= {{\,\textrm{J}\,}}_{n}^{(1,0)}\,[{{\,\textrm{L}\,}}_{2n+1,k}\,\Gamma _{2n+2,k} + \Gamma _{2n+1,k}\,{{\,\textrm{L}\,}}_{2n,k}]\,({{\,\textrm{J}\,}}_{n}^{(1,0)})^\top , \\ {\widehat{{{\,\textrm{C}\,}}}}_{n,k}^{(1,0)}&= {{\,\textrm{J}\,}}_{n}^{(1,0)}\,\Gamma _{2n+1,k}\,\Gamma _{2n,k}\,({{\,\textrm{J}\,}}_{n-1}^{(1,0)})^\top ,\\ {\widehat{{{\,\textrm{D}\,}}}}_{n,k}^{(0,1)}&= {{\,\textrm{J}\,}}_{n}^{(0,1)}\,[{{\,\textrm{L}\,}}_{2n+1,k}\,\Gamma _{2n+2,k} + \Gamma _{2n+1,k}\,{{\,\textrm{L}\,}}_{2n,k}]\,({{\,\textrm{J}\,}}_{n}^{(0,1)})^\top , \\ {\widehat{{{\,\textrm{C}\,}}}}_{n,k}^{(0,1)}&= {{\,\textrm{J}\,}}_{n}^{(0,1)}\,\Gamma _{2n+1,k}\,\Gamma _{2n,k}\,({{\,\textrm{J}\,}}_{n-1}^{(0,1)})^\top . \end{aligned}$$

with the convention that matrices with negative indices are taken to be zero matrices.

Remark 5.2

For \(i=0,1\), observe that left multiplication by \({{\,\textrm{J}\,}}_{n}^{(i,0)}\) eliminates the even rows of a matrix, while left multiplication by \({{\,\textrm{J}\,}}_{n}^{(i,1)}\) eliminates the odd rows. Analogously, right multiplication by \(({{\,\textrm{J}\,}}_{n}^{(i,0)})^\top \) eliminates the even columns, and right multiplication by \(({{\,\textrm{J}\,}}_{n}^{(i,1)})^\top \) eliminates the odd columns.
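The matrices \({{\,\textrm{J}\,}}_{n}^{(i,j)}\) are defined in Sect. 2, outside this excerpt; as a toy illustration of the mechanism only (with hypothetical sizes and 0-based indexing), a row-selection matrix acts as follows:

```python
import numpy as np

# toy selection matrices on 5 rows: Jeven keeps rows 0, 2, 4 and Jodd keeps rows 1, 3
Jeven = np.eye(5)[[0, 2, 4], :]
Jodd = np.eye(5)[[1, 3], :]
A = np.arange(25).reshape(5, 5)

# left multiplication selects rows; right multiplication by the transpose selects columns
assert (Jeven @ A == A[[0, 2, 4], :]).all()
assert (A @ Jodd.T == A[:, [1, 3]]).all()
```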

We divide the proof into several lemmas, starting with a useful one for symmetric polynomials.

Lemma 5.3

Let \(p_{i,j}(x,y), q_{i,j}(x,y)\), \(i,j=0,1\), be polynomials of the same parity order. If

$$\begin{aligned} \begin{bmatrix} p_{0,0}(x,y) &{} p_{0,1}(x,y)\\ p_{1,0}(x,y) &{} p_{1,1}(x,y) \end{bmatrix} \begin{bmatrix} 1\\ x \end{bmatrix} = \begin{bmatrix} q_{0,0}(x,y) &{} q_{0,1}(x,y)\\ q_{1,0}(x,y) &{} q_{1,1}(x,y) \end{bmatrix} \begin{bmatrix} 1 \\ x \end{bmatrix} , \end{aligned}$$

then

$$\begin{aligned} \begin{bmatrix} p_{0,0}(x,y) &{} p_{0,1}(x,y)\\ p_{1,0}(x,y) &{} p_{1,1}(x,y) \end{bmatrix} = \begin{bmatrix} q_{0,0}(x,y) &{} q_{0,1}(x,y)\\ q_{1,0}(x,y) &{} q_{1,1}(x,y) \end{bmatrix}. \end{aligned}$$

Next, we deduce the relations between the big families of polynomials.

Lemma 5.4

The four big families of polynomials \(\{\mathbb {P}^{(i,j)}_n\}_{n\geqslant 0}\), for \(i,j=0,1\), defined by (3.3) and (3.4), are related by the expressions:

$$\begin{aligned}{} & {} \mathbb {P}^{(0,0)}_{n}(x, y) = {{\,\textrm{L}\,}}_{2n,k}\,\mathbb {P}^{(2-k,k-1)}_{n}(x, y) + \Gamma _{2n,k}\,\mathbb {P}^{(2-k,k-1)}_{n-1}(x, y) , \end{aligned}$$
(5.2)
$$\begin{aligned}{} & {} x_k\,\mathbb {P}^{(1,1)}_{n-1}(x, y) = {{\,\textrm{L}\,}}_{2n,k}\,\mathbb {P}^{(k-1,2-k)}_{n}(x, y) + \Gamma _{2n,k}\,\mathbb {P}^{(k-1,2-k)}_{n-1}(x, y), \end{aligned}$$
(5.3)
$$\begin{aligned}{} & {} \mathbb {P}^{(k-1,2-k)}_{n}(x, y) = {{\,\textrm{L}\,}}_{2n+1,k}\,\mathbb {P}^{(1,1)}_{n}(x, y) + \Gamma _{2n+1,k}\,\mathbb {P}^{(1,1)}_{n-1}(x, y), \end{aligned}$$
(5.4)
$$\begin{aligned}{} & {} x_k\,\mathbb {P}^{(2-k,k-1)}_{n}(x, y) = {{\,\textrm{L}\,}}_{2n+1,k}\,\mathbb {P}^{(0,0)}_{n+1}(x, y) + \Gamma _{2n+1,k}\,\mathbb {P}^{(0,0)}_{n}(x, y), \end{aligned}$$
(5.5)

for \(k=1,2\) and denoting \(x_1=x, x_2 = y\) for brevity.

Proof

The expressions (3.3) and (3.4) can be rewritten in matrix form as

$$\begin{aligned} \begin{bmatrix} \mathbb {S}_{2n}(x,y) \\ \mathbb {S}_{2n+1}(x,y) \end{bmatrix}= & {} \begin{bmatrix} \mathbb {P}^{(0,0)}_{n}(x^2,y^2) &{} y\,\mathbb {P}^{(1,1)}_{n-1}(x^2,y^2) \\ y\,\mathbb {P}^{(0,1)}_{n}(x^2,y^2) &{} \mathbb {P}^{(1,0)}_{n}(x^2,y^2) \end{bmatrix}\! \begin{bmatrix} 1\\ x \end{bmatrix} \end{aligned}$$
(5.6)
$$\begin{aligned} \begin{bmatrix} \mathbb {S}_{2n}(x,y) \\ \mathbb {S}_{2n+1}(x,y) \end{bmatrix}= & {} \begin{bmatrix} \mathbb {P}^{(0,0)}_{n}(x^2,y^2) &{} x\,\mathbb {P}^{(1,1)}_{n-1}(x^2,y^2) \\ x\,\mathbb {P}^{(1,0)}_{n}(x^2,y^2) &{} \mathbb {P}^{(0,1)}_{n}(x^2,y^2) \end{bmatrix}\! \begin{bmatrix} 1\\ y \end{bmatrix}. \end{aligned}$$
(5.7)

We can write the first three term relation in (5.1) in the form

$$\begin{aligned} x \! \begin{bmatrix} \mathbb {S}_{2n} \\ \mathbb {S}_{2n+1} \end{bmatrix} = \begin{bmatrix} \texttt{0} &{} \texttt{0} \\ {{\,\textrm{L}\,}}_{2n+1,1} &{} \texttt{0} \end{bmatrix} \! \begin{bmatrix} \mathbb {S}_{2n+2} \\ \mathbb {S}_{2n+3} \end{bmatrix} + \begin{bmatrix} \texttt{0} &{} {{\,\textrm{L}\,}}_{2n,1} \\ \Gamma _{2n+1,1} &{} \texttt{0} \end{bmatrix}\! \begin{bmatrix} \mathbb {S}_{2n} \\ \mathbb {S}_{2n+1} \end{bmatrix} + \begin{bmatrix} \texttt{0} &{} \Gamma _{2n,1} \\ \texttt{0} &{} \texttt{0} \end{bmatrix}\! \begin{bmatrix} \mathbb {S}_{2n-2} \\ \mathbb {S}_{2n-1} \end{bmatrix} \end{aligned}$$

where we have omitted the arguments \((x,y)\) for simplicity. Substituting (5.6), we get

$$\begin{aligned}{} & {} x \! \begin{bmatrix} \mathbb {P}^{(0,0)}_{n} &{} y\,\mathbb {P}^{(1,1)}_{n-1}\\ y\,\mathbb {P}^{(0,1)}_{n} &{} \mathbb {P}^{(1,0)}_{n} \end{bmatrix}\! \begin{bmatrix} 1\\ x \end{bmatrix} = \left\{ \begin{bmatrix} \texttt{0} &{} \texttt{0} \\ {{\,\textrm{L}\,}}_{2n+1,1} &{} \texttt{0} \end{bmatrix}\! \begin{bmatrix} \mathbb {P}^{(0,0)}_{n+1} &{} y\,\mathbb {P}^{(1,1)}_{n} \\ y\,\mathbb {P}^{(0,1)}_{n+1} &{} \mathbb {P}^{(1,0)}_{n+1} \end{bmatrix} \right. \\{} & {} \quad \left. + \begin{bmatrix} \texttt{0} &{} {{\,\textrm{L}\,}}_{2n,1} \\ \Gamma _{2n+1,1} &{} \texttt{0} \end{bmatrix}\! \begin{bmatrix} \mathbb {P}^{(0,0)}_{n} &{} y\,\mathbb {P}^{(1,1)}_{n-1} \\ y\,\mathbb {P}^{(0,1)}_{n} &{} \mathbb {P}^{(1,0)}_{n} \end{bmatrix} + \begin{bmatrix} \texttt{0} &{} \Gamma _{2n,1} \\ \texttt{0} &{} \texttt{0} \end{bmatrix}\! \begin{bmatrix} \mathbb {P}^{(0,0)}_{n-1} &{} y\,\mathbb {P}^{(1,1)}_{n-2} \\ y\,\mathbb {P}^{(0,1)}_{n-1} &{} \mathbb {P}^{(1,0)}_{n-1} \end{bmatrix} \right\} \! \begin{bmatrix} 1\\ x \end{bmatrix} . \end{aligned}$$

where we have omitted the arguments \((x^2, y^2)\) of the big polynomials for brevity. Now, since

$$\begin{aligned} x \!\begin{bmatrix} 1\\ x \end{bmatrix} = \begin{bmatrix} 0 &{} 1 \\ x^2 &{} 0 \end{bmatrix}\! \begin{bmatrix} 1\\ x \end{bmatrix}, \end{aligned}$$

and applying Lemma 5.3, we deduce

$$\begin{aligned}{} & {} \begin{bmatrix} \mathbb {P}^{(0,0)}_{n} &{} y\,\mathbb {P}^{(1,1)}_{n-1}\\ y\,\mathbb {P}^{(0,1)}_{n} &{} \mathbb {P}^{(1,0)}_{n} \end{bmatrix}\! \begin{bmatrix} 0 &{} 1 \\ x^2 &{} 0 \end{bmatrix} = \begin{bmatrix} \texttt{0} &{} \texttt{0} \\ {{\,\textrm{L}\,}}_{2n+1,1} &{} \texttt{0} \end{bmatrix} \! \begin{bmatrix} \mathbb {P}^{(0,0)}_{n+1} &{} y\,\mathbb {P}^{(1,1)}_{n} \\ y\,\mathbb {P}^{(0,1)}_{n+1} &{} \mathbb {P}^{(1,0)}_{n+1} \end{bmatrix} \\{} & {} \quad + \begin{bmatrix} \texttt{0} &{} {{\,\textrm{L}\,}}_{2n,1} \\ \Gamma _{2n+1,1} &{} \texttt{0} \end{bmatrix}\! \begin{bmatrix} \mathbb {P}^{(0,0)}_{n} &{} y\,\mathbb {P}^{(1,1)}_{n-1} \\ y\,\mathbb {P}^{(0,1)}_{n} &{} \mathbb {P}^{(1,0)}_{n} \end{bmatrix} + \begin{bmatrix} \texttt{0} &{} \Gamma _{2n,1} \\ \texttt{0} &{} \texttt{0} \end{bmatrix}\! \begin{bmatrix} \mathbb {P}^{(0,0)}_{n-1} &{} y\,\mathbb {P}^{(1,1)}_{n-2} \\ y\,\mathbb {P}^{(0,1)}_{n-1} &{} \mathbb {P}^{(1,0)}_{n-1} \end{bmatrix} . \end{aligned}$$

We finally arrive at

$$\begin{aligned}{} & {} \begin{bmatrix} x^2\,y\,\mathbb {P}^{(1,1)}_{n-1} &{} \mathbb {P}^{(0,0)}_{n}\\ x^2\,\mathbb {P}^{(1,0)}_{n} &{} y\,\mathbb {P}^{(0,1)}_{n} \end{bmatrix} = \begin{bmatrix} \texttt{0} &{} \texttt{0} \\ {{\,\textrm{L}\,}}_{2n+1,1}\,\mathbb {P}^{(0,0)}_{n+1} &{} {{\,\textrm{L}\,}}_{2n+1,1}\,y\,\mathbb {P}^{(1,1)}_{n} \end{bmatrix} \\{} & {} \quad + \begin{bmatrix} {{\,\textrm{L}\,}}_{2n,1}\,y\,\mathbb {P}^{(0,1)}_{n} &{} {{\,\textrm{L}\,}}_{2n,1}\,\mathbb {P}^{(1,0)}_{n} \\ \Gamma _{2n+1,1}\,\mathbb {P}^{(0,0)}_{n} &{} \Gamma _{2n+1,1}\,y\,\mathbb {P}^{(1,1)}_{n-1} \end{bmatrix}\ + \begin{bmatrix} \Gamma _{2n,1}\,y\,\mathbb {P}^{(0,1)}_{n-1} &{} \Gamma _{2n,1}\,\mathbb {P}^{(1,0)}_{n-1} \\ \texttt{0} &{} \texttt{0} \end{bmatrix}. \end{aligned}$$

Then, after a convenient simplification and renaming the variables \((x^2,y^2)\) as \((x,y)\), we deduce the expressions (5.2), (5.4), (5.3), and (5.5) for \(k=1\). The same discussion can be carried out for the second variable using (5.7), giving the case \(k=2\). \(\square \)

The identities in Lemma 5.4 can be used to deduce three term relations for the big polynomial families. At first sight, (5.8)–(5.11) look like three term relations for the bivariate polynomials \(\{\mathbb {P}_n^{(i,j)}\}_{n\geqslant 0}\), \(i,j=0,1\), but these big families are not polynomial systems.

Lemma 5.5

The families of big bivariate polynomials \(\{\mathbb {P}_n^{(i,j)}\}_{n\geqslant 0}\), for \(i,j=0,1\), satisfy the relations

$$\begin{aligned} x_k\,\mathbb {P}^{(0,0)}_{n}= & {} {{\,\textrm{L}\,}}_{2n,k} {{\,\textrm{L}\,}}_{2n+1,k}\mathbb {P}^{(0,0)}_{n+1} + [{{\,\textrm{L}\,}}_{2n,k} \Gamma _{2n+1,k} + \Gamma _{2n,k}{{\,\textrm{L}\,}}_{2n-1,k}]\mathbb {P}^{(0,0)}_{n} \nonumber \\{} & {} + \Gamma _{2n,k} \Gamma _{2n-1,k}\mathbb {P}^{(0,0)}_{n-1}, \end{aligned}$$
(5.8)
$$\begin{aligned} x_k\,\mathbb {P}^{(1,1)}_{n-1}= & {} {{\,\textrm{L}\,}}_{2n,k} {{\,\textrm{L}\,}}_{2n+1,k}\mathbb {P}^{(1,1)}_{n} + [ {{\,\textrm{L}\,}}_{2n,k} \Gamma _{2n+1,k} + \Gamma _{2n,k} {{\,\textrm{L}\,}}_{2n-1,k}]\mathbb {P}^{(1,1)}_{n-1} \nonumber \\{} & {} + \Gamma _{2n,k}\Gamma _{2n-1,k}\mathbb {P}^{(1,1)}_{n-2},\end{aligned}$$
(5.9)
$$\begin{aligned} x_k\,\mathbb {P}^{(1,0)}_{n}= & {} {{\,\textrm{L}\,}}_{2n+1,k} {{\,\textrm{L}\,}}_{2n+2,k}\mathbb {P}^{(1,0)}_{n+1} + [{{\,\textrm{L}\,}}_{2n+1,k} \Gamma _{2n+2,k} + \Gamma _{2n+1,k} {{\,\textrm{L}\,}}_{2n,k}]\mathbb {P}^{(1,0)}_{n} \nonumber \\{} & {} + \Gamma _{2n+1,k} \Gamma _{2n,k}\mathbb {P}^{(1,0)}_{n-1}, \end{aligned}$$
(5.10)
$$\begin{aligned} x_k\,\mathbb {P}^{(0,1)}_{n}= & {} {{\,\textrm{L}\,}}_{2n+1,k} {{\,\textrm{L}\,}}_{2n+2,k}\mathbb {P}^{(0,1)}_{n+1} + [ {{\,\textrm{L}\,}}_{2n+1,k} \Gamma _{2n+2,k} + \Gamma _{2n+1,k} {{\,\textrm{L}\,}}_{2n,k}]\mathbb {P}^{(0,1)}_{n} \nonumber \\{} & {} + \Gamma _{2n+1,k} \Gamma _{2n,k}\mathbb {P}^{(0,1)}_{n-1}, \end{aligned}$$
(5.11)

for \(k=1,2\) and \(x_1=x, x_2 = y\).

Proof

For \(k=1,2\), the relations are obtained by multiplying (5.2) by \(x_k\) and using (5.5); by substituting (5.2) into (5.5); by replacing (5.4) in (5.3); and by multiplying (5.4) by \(x_k\) and substituting (5.3). \(\square \)
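In the scalar (univariate) case these relations collapse to the Bäcklund relations recalled in Sect. 1. For the monic Hermite polynomials one has \({{\,\textrm{L}\,}}_{m,k}=1\) and \(\Gamma _m = m/2\), and (5.8) then predicts the recurrence coefficients \(\Gamma _{2n}+\Gamma _{2n+1} = 2n+\tfrac{1}{2}\) and \(\Gamma _{2n}\,\Gamma _{2n-1} = n(n-\tfrac{1}{2})\) for the monic Laguerre polynomials \(L^{(-1/2)}_n\) of the quadratic decomposition. A symbolic check (our own illustration):

```python
import sympy as sp

x = sp.Symbol('x')
mh = lambda n: sp.expand(sp.hermite(n, x)/2**n)  # monic Hermite
ml = lambda n: sp.expand((-1)**n*sp.factorial(n)
                         * sp.assoc_laguerre(n, -sp.Rational(1, 2), x))  # monic Laguerre(-1/2)

# quadratic decomposition: H_{2n}(x) = P_n(x^2) with P_n monic Laguerre(-1/2)
for n in range(4):
    assert sp.expand(mh(2*n) - ml(n).subs(x, x**2)) == 0

# scalar version of (5.8): x P_n = P_{n+1} + (G(2n)+G(2n+1)) P_n + G(2n) G(2n-1) P_{n-1}
G = lambda m: sp.Rational(m, 2)
for n in range(1, 4):
    rhs = ml(n + 1) + (G(2*n) + G(2*n + 1))*ml(n) + G(2*n)*G(2*n - 1)*ml(n - 1)
    assert sp.expand(x*ml(n) - rhs) == 0
```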

From the three term relations for the big polynomials obtained in Lemma 5.5, we can deduce the three term relations for the small ones by multiplication by an adequate \({{\,\textrm{J}\,}}\)-matrix. In fact, multiplying, respectively, (5.8) by \({{\,\textrm{J}\,}}_{n}^{(0,0)}\), (5.9) by \({{\,\textrm{J}\,}}_{n}^{(1,1)}\), (5.10) by \({{\,\textrm{J}\,}}_{n}^{(1,0)}\), and (5.11) by \({{\,\textrm{J}\,}}_{n}^{(0,1)}\), and making use of Lemma 2.1, we arrive at the following result.

Lemma 5.6

The families of small bivariate polynomials \(\{{\widehat{\mathbb {P}}}_n^{(i,j)}\}_{n\geqslant 0}\), for \(i,j=0,1\), satisfy the three term relations

$$\begin{aligned} x_k\,{\widehat{\mathbb {P}}}^{(0,0)}_{n}= & {} {{\,\textrm{L}\,}}_{n,k}{\widehat{\mathbb {P}}}^{(0,0)}_{n+1} + {{\,\textrm{J}\,}}_n^{(0,0)} [ {{\,\textrm{L}\,}}_{2n,k} \Gamma _{2n+1,k} + \Gamma _{2n,k}{{\,\textrm{L}\,}}_{2n-1,k} ]({{\,\textrm{J}\,}}_n^{(0,0)})^\top {\widehat{\mathbb {P}}}^{(0,0)}_{n} \nonumber \\{} & {} + {{\,\textrm{J}\,}}_n^{(0,0)}\Gamma _{2n,k} \Gamma _{2n-1,k}({{\,\textrm{J}\,}}_{n-1}^{(0,0)})^\top {\widehat{\mathbb {P}}}^{(0,0)}_{n-1}, \end{aligned}$$
(5.12)
$$\begin{aligned} x_k\,{\widehat{\mathbb {P}}}^{(1,1)}_{n-1}= & {} {{\,\textrm{L}\,}}_{n-1,k} {\widehat{\mathbb {P}}}^{(1,1)}_{n} + {{\,\textrm{J}\,}}_{n-1}^{(1,1)}[ {{\,\textrm{L}\,}}_{2n,k} \Gamma _{2n+1,k} + \Gamma _{2n,k} {{\,\textrm{L}\,}}_{2n-1,k}]({{\,\textrm{J}\,}}_{n-1}^{(1,1)})^\top {\widehat{\mathbb {P}}}^{(1,1)}_{n-1} \nonumber \\{} & {} + {{\,\textrm{J}\,}}_{n-1}^{(1,1)}\Gamma _{2n,k}\Gamma _{2n-1,k}({{\,\textrm{J}\,}}_{n-2}^{(1,1)})^\top {\widehat{\mathbb {P}}}^{(1,1)}_{n-2}, \end{aligned}$$
(5.13)
$$\begin{aligned} x_k\,{\widehat{\mathbb {P}}}^{(1,0)}_{n}= & {} {{\,\textrm{L}\,}}_{n,k}{\widehat{\mathbb {P}}}^{(1,0)}_{n+1} + {{\,\textrm{J}\,}}_n^{(1,0)} [{{\,\textrm{L}\,}}_{2n+1,k} \Gamma _{2n+2,k} + \Gamma _{2n+1,k} {{\,\textrm{L}\,}}_{2n,k}]({{\,\textrm{J}\,}}_n^{(1,0)})^\top {\widehat{\mathbb {P}}}^{(1,0)}_{n} \nonumber \\{} & {} + {{\,\textrm{J}\,}}_n^{(1,0)} \Gamma _{2n+1,k} \Gamma _{2n,k}({{\,\textrm{J}\,}}_{n-1}^{(1,0)})^\top {\widehat{\mathbb {P}}}^{(1,0)}_{n-1}, \end{aligned}$$
(5.14)
$$\begin{aligned} x_k\,{\widehat{\mathbb {P}}}^{(0,1)}_{n}= & {} {{\,\textrm{L}\,}}_{n,k} {\widehat{\mathbb {P}}}^{(0,1)}_{n+1} + {{\,\textrm{J}\,}}_n^{(0,1)}[ {{\,\textrm{L}\,}}_{2n+1,k} \Gamma _{2n+2,k} + \Gamma _{2n+1,k} {{\,\textrm{L}\,}}_{2n,k}]({{\,\textrm{J}\,}}_{n}^{(0,1)})^\top {\widehat{\mathbb {P}}}^{(0,1)}_{n} \nonumber \\{} & {} + {{\,\textrm{J}\,}}_n^{(0,1)}\Gamma _{2n+1,k} \Gamma _{2n,k}({{\,\textrm{J}\,}}_{n-1}^{(0,1)})^\top {\widehat{\mathbb {P}}}^{(0,1)}_{n-1}, \end{aligned}$$
(5.15)

for \(k=1,2\) and \(x_1=x, x_2 = y\).

Now, the Bäcklund-type relations contained in Theorem 5.1 are proved by identifying coefficients.

As we have shown in Theorems 4.1 and 4.2, the small polynomial systems \(\{{\widehat{\mathbb {P}}}_n^{(i,j)}\}_{n\geqslant 0}\), for \(i+j \geqslant 1\), are associated with Christoffel modifications of the weight function of the first family \(\{{\widehat{\mathbb {P}}}_n^{(0,0)}\}_{n\geqslant 0}\). Then, by Lemma 2.2, there exist short relations between these families. Lemma 5.4 also allows us to deduce short relations for the small polynomial systems, multiplying by the adequate \({{\,\textrm{J}\,}}\)-matrix and using Lemma 2.1. The next result gives the coefficients in terms of the matrix coefficients of the three term relations of \(\{{\mathbb {S}}_n\}_{n\geqslant 0}\).

Corollary 5.7

The families of small MOPS are related by

$$\begin{aligned} {\widehat{\mathbb {P}}}^{(0,0)}_{n}(x, y)&= {\widehat{\mathbb {P}}}^{(2-k,k-1)}_{n}(x, y) + {\widehat{\Gamma }}_{n,k}^{(0,0)}\,{\widehat{\mathbb {P}}}^{(2-k,k-1)}_{n-1}(x, y), \\ x_k\, {\widehat{\mathbb {P}}}^{(1,1)}_{n-1}(x, y)&= {{\,\textrm{L}\,}}_{n-1,k}\,{\widehat{\mathbb {P}}}^{(k-1,2-k)}_{n}(x, y) + {\widehat{\Gamma }}_{n,k}^{(0,1)}\,{\widehat{\mathbb {P}}}^{(k-1,2-k)}_{n-1}(x, y), \\ {\widehat{\mathbb {P}}}^{(k-1,2-k)}_{n}(x, y)&= {\widehat{\mathbb {P}}}^{(1,1)}_{n}(x, y) + {\widehat{\Gamma }}_{n,k}^{(1,1)}\, {\widehat{\mathbb {P}}}^{(1,1)}_{n-1}(x, y), \\ x_k\,{\widehat{\mathbb {P}}}^{(2-k,k-1)}_{n}(x, y)&= {{\,\textrm{L}\,}}_{n,k}\, {\widehat{\mathbb {P}}}^{(0,0)}_{n+1}(x, y) + {\widehat{\Gamma }}_{n,k}^{(1,0)}\,{\widehat{\mathbb {P}}}^{(0,0)}_{n}(x, y), \end{aligned}$$

where

$$\begin{aligned}&{\widehat{\Gamma }}_{n,k}^{(0,0)} = {{\,\textrm{J}\,}}_{n}^{(0,0)}\,\Gamma _{2n,k}\,({{\,\textrm{J}\,}}_{n-1}^{(1,0)})^\top ,&{\widehat{\Gamma }}_{n,k}^{(0,1)} = {{\,\textrm{J}\,}}_{n-1}^{(1,1)}\,\Gamma _{2n,k}\,({{\,\textrm{J}\,}}_{n-1}^{(0,1)})^\top , \\&{\widehat{\Gamma }}_{n,k}^{(1,1)} = {{\,\textrm{J}\,}}_{n}^{(0,1)}\,\Gamma _{2n+1,k}\,({{\,\textrm{J}\,}}_{n-1}^{(1,1)})^\top ,&{\widehat{\Gamma }}_{n,k}^{(1,0)} = {{\,\textrm{J}\,}}_{n}^{(1,0)}\,\Gamma _{2n+1,k}\,({{\,\textrm{J}\,}}_{n}^{(0,0)})^\top . \end{aligned}$$
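The univariate prototype of these short relations is the classical Christoffel relation for monic Laguerre polynomials, \(p^{(\alpha )}_n = p^{(\alpha +1)}_n + n\,p^{(\alpha +1)}_{n-1}\), connecting a family with the family orthogonal with respect to the \(x\)-modified weight. A quick symbolic check (our own illustration):

```python
import sympy as sp

x, a = sp.symbols('x alpha')
# monic Laguerre polynomials: p_n = (-1)^n n! L_n^{(alpha)}
p = lambda n, a_: sp.expand((-1)**n*sp.factorial(n)*sp.assoc_laguerre(n, a_, x))

# short relation between the original and the Christoffel-modified family
for n in range(1, 6):
    assert sp.expand(p(n, a) - p(n, a + 1) - n*p(n - 1, a + 1)) == 0
```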

These matrices \({\widehat{\Gamma }}\)’s enable us to reinterpret the block Jacobi matrices associated with the polynomial sequences \({\widehat{\mathbb {P}}}\)’s in terms of \(\pmb {{\textsf{L}}} \pmb {{\textsf{U}}}\) or \(\pmb {{\textsf{U}}} \pmb {{\textsf{L}}}\) representations. In fact, for \(k=1,2\), defining the block matrices

$$\begin{aligned} \pmb {{\textsf{L}}}^{0}_k = \begin{bmatrix} {\text {I}} \\ {\widehat{\Gamma }}_{1,k}^{(0,0)} &{} {\text {I}} \\ &{} {\widehat{\Gamma }}_{2,k}^{(0,0)} &{} {\text {I}} \\ &{} &{} \ddots &{} \ddots \end{bmatrix},{} & {} \pmb {{\textsf{L}}}^{1}_k = \begin{bmatrix} {\text {I}} \\ {\widehat{\Gamma }}_{1,k}^{(1,1)} &{} {\text {I}} \\ &{} {\widehat{\Gamma }}_{2,k}^{(1,1)} &{} {\text {I}} \\ &{} &{} \ddots &{} \ddots \end{bmatrix},\\ \pmb {{\textsf{U}}}^{0}_k = \begin{bmatrix} {\widehat{\Gamma }}_{1,k}^{(0,1)} &{} {\text {L}}_{0,k} \\ &{} {\widehat{\Gamma }}_{2,k}^{(0,1)} &{} {\text {L}}_{1,k} \\ &{} &{} \ddots &{} \ddots \end{bmatrix},{} & {} \pmb {{\textsf{U}}}^{1}_k = \begin{bmatrix} {\widehat{\Gamma }}_{1,k}^{(1,0)} &{} {\text {L}}_{0,k} \\ &{} {\widehat{\Gamma }}_{2,k}^{(1,0)} &{} {\text {L}}_{1,k} \\ &{} &{} \ddots &{} \ddots \end{bmatrix}, \end{aligned}$$

we recover the recurrence relations (5.12), (5.13), (5.14), and (5.15), respectively, as

$$\begin{aligned} x_k \, \pmb {{\mathcal {P}}}^{(0,0)}&= x_k \, \pmb {{\textsf{L}}}^{0}_k \, \pmb {{\mathcal {P}}}^{(2-k,k-1)} = \pmb {{\textsf{L}}}^{0}_k \, \pmb {{\textsf{U}}}^{1}_k \, \pmb {{\mathcal {P}}}^{(0,0)} , \\ x_k \, \pmb {{\mathcal {P}}}^{(1,1)}&= \pmb {{\textsf{U}}}^{0}_k \, \pmb {{\mathcal {P}}}^{(k-1,2-k)} = \pmb {{\textsf{U}}}^{0}_k \, \pmb {{\textsf{L}}}^{1}_k \, \pmb {{\mathcal {P}}}^{(1,1)} , \\ x_k \, \pmb {{\mathcal {P}}}^{(2-k, k-1)}&= \pmb {{\textsf{U}}}^{1}_k \, \pmb {{\mathcal {P}}}^{(0,0)} = \pmb {{\textsf{U}}}^{1}_k \, \pmb {{\textsf{L}}}^{0}_k \, \pmb {{\mathcal {P}}}^{(2-k, k-1)} , \\ x_k \, \pmb {{\mathcal {P}}}^{(k-1,2-k)}&= x_k \,\pmb {{\textsf{L}}}^{1}_k \, \pmb {{\mathcal {P}}}^{(1,1)} = \pmb {{\textsf{L}}}^{1}_k \, \pmb {{\textsf{U}}}^{0}_k \, \pmb {{\mathcal {P}}}^{(k-1,2-k)} , \end{aligned}$$

denoting \(x_1=x\), \(x_2=y\), and, for \(i,j=0,1\), the column vector \(\pmb {{\mathcal {P}}}^{(i,j)}\) is defined as

$$\begin{aligned} \pmb {{\mathcal {P}}}^{(i,j)} = \begin{bmatrix} (\widehat{{\mathbb {P}}}^{(i,j)}_0)^\top&(\widehat{\mathbb P}^{(i,j)}_1)^\top&\cdots \end{bmatrix}^\top . \end{aligned}$$
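The univariate analogue of this \(\pmb {{\textsf{L}}}\pmb {{\textsf{U}}}\)/\(\pmb {{\textsf{U}}}\pmb {{\textsf{L}}}\) interplay is the Darboux transformation of a Jacobi matrix: factoring the monic Laguerre Jacobi matrix as \(\textsf{L}\textsf{U}\) and multiplying the factors in reverse order yields the Jacobi matrix of the Christoffel-modified weight, \(\alpha \mapsto \alpha +1\). A sketch with truncated matrices (our own illustration; the truncation only distorts the last row and column):

```python
import sympy as sp

a = sp.Symbol('alpha')
N = 6

def jac(a_, N_):
    # truncated monic Laguerre Jacobi matrix: x p_n = p_{n+1} + (2n+1+a) p_n + n(n+a) p_{n-1}
    J = sp.zeros(N_, N_)
    for n in range(N_):
        J[n, n] = 2*n + 1 + a_
        if n + 1 < N_:
            J[n, n + 1] = 1
            J[n + 1, n] = (n + 1)*(n + 1 + a_)
    return J

J0 = jac(a, N)
# bidiagonal LU factors via the standard tridiagonal recursion
u = [J0[0, 0]]
L, U = sp.eye(N), sp.zeros(N, N)
for n in range(N):
    U[n, n] = u[n]
    if n + 1 < N:
        U[n, n + 1] = 1
        L[n + 1, n] = sp.cancel(J0[n + 1, n]/u[n])
        u.append(sp.cancel(J0[n + 1, n + 1] - L[n + 1, n]))
assert (L*U - J0).expand() == sp.zeros(N, N)

# UL is the Jacobi matrix for alpha + 1, away from the truncation boundary
J1 = (U*L).expand()
assert (J1[:N-1, :N-1] - jac(a + 1, N)[:N-1, :N-1]).expand() == sp.zeros(N - 1, N - 1)
```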

6 A Case Study

In this section we consider weight functions that can be represented as \(W(x,y) = {\widetilde{W}}(x^2, y^2)\), which are by construction \(x\,y\)-symmetric. We completely describe the connection between bivariate polynomials orthogonal with respect to a \(x\,y\)-symmetric weight function defined on the unit ball of \({\mathbb {R}}^2\),

$$\begin{aligned} {\textbf{B}}^2 = \{(x,y)\in {\mathbb {R}}^2: x^2 + y^2 \leqslant 1\}, \end{aligned}$$

and bivariate orthogonal polynomials defined on the simplex

$$\begin{aligned} {\textbf{T}}^2 = \{(x,y)\in {\mathbb {R}}^2: x, y \geqslant 0, x+y\leqslant 1\}, \end{aligned}$$

completing the discussion started by Y. Xu in [21, 22] for the ball polynomials of even degree in each of their variables.

Following [22, Section 4], let \(W^{\textbf{B}}(x,y)= W(x^2,y^2)\) be a weight function defined on the unit ball of \({\mathbb {R}}^2\), and let

$$\begin{aligned} W^{\textbf{T}}(u,v) = \frac{1}{\sqrt{u\,v}} \, W(u,v),{} & {} (u, v)\in {\textbf{T}}^2. \end{aligned}$$
(6.1)

Observe that \(W^{\textbf{B}}(x,y)\) is a \(x\,y\)-symmetric weight function defined on \({\textbf{B}}^2\).

For \(n\geqslant 0\) and \(0\leqslant k \leqslant n\), let \(S_{2n,2k}(x, y)\) be an orthogonal polynomial associated with the weight function \(W^{\textbf{B}}\), of even degree in each of its variables. Then Y. Xu proved that it can be written in terms of orthogonal polynomials on the simplex as

$$\begin{aligned} S_{2n,2k}(x,y) = P_{n,k} (x^2,y^2), \end{aligned}$$

where \(P_{n,k} (x,y)\) is an orthogonal polynomial of total degree n associated with \(W^{\textbf{T}}\).
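This relation can be checked directly in the lowest nontrivial degree for the Lebesgue weight on the ball, \(W^{\textbf{B}} \equiv 1\), for which \(W^{\textbf{T}}(u,v) = (u\,v)^{-1/2}\). The moment formulas below (polar coordinates on the ball, the Dirichlet integral on the simplex) and the sketch itself are our own illustration:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# even moments on the unit ball: int_{B^2} x^a y^b dx dy, with a and b even
bm = lambda a, b: (sp.Rational(2, a + b + 2)*sp.gamma(sp.Rational(a + 1, 2))
                   * sp.gamma(sp.Rational(b + 1, 2))/sp.gamma(sp.Rational(a + b + 2, 2)))
# simplex moments for W^T(u,v) = (u v)^(-1/2), via the Dirichlet integral
tm = lambda a, b: (sp.gamma(a + sp.Rational(1, 2))*sp.gamma(b + sp.Rational(1, 2))
                   / sp.gamma(a + b + 2))

# degree-2 monic ball polynomials, even in both variables (parity kills the other terms)
S20 = x**2 - bm(2, 0)/bm(0, 0)
S22 = y**2 - bm(0, 2)/bm(0, 0)
# degree-1 monic simplex polynomials
P10 = u - tm(1, 0)/tm(0, 0)
P11 = v - tm(0, 1)/tm(0, 0)

assert sp.simplify(S20 - P10.subs(u, x**2)) == 0  # S_{2,0}(x,y) = P_{1,0}(x^2, y^2)
assert sp.simplify(S22 - P11.subs(v, y**2)) == 0  # S_{2,2}(x,y) = P_{1,1}(x^2, y^2)
```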

We can answer the question of what happens with the remaining polynomials; that is, we can give explicitly the shape of all the polynomials orthogonal with respect to \(W^{\textbf{B}}(x,y)\). Following our results, these polynomials are related to new families of bivariate orthogonal polynomials, resulting from Christoffel modifications that we will explicitly identify.

Let \(\{{\mathbb {S}}_n\}_{n\geqslant 0}\) be the monic orthogonal polynomial system associated with the \(x\,y\)-symmetric weight function \(W^{\textbf{B}}(x,y)\), satisfying (3.1).

If the explicit expression of every monic vector polynomial is given by

$$\begin{aligned} \mathbb {S}_n(x,y) = \begin{bmatrix} S_{n,0}(x,y)&S_{n,1}(x,y)&S_{n,2}(x,y)&\cdots&S_{n,n}(x,y) \end{bmatrix}^\top , \end{aligned}$$

then every polynomial \(S_{n,k}(x,y)\), for \(0\leqslant k \leqslant n\), is \(x\,y\)-symmetric by Lemma 3.2. As we have proved, the vector of polynomials \(\mathbb {S}_{n}(x,y)\) can be separated, in a zip way (cf. (3.2)), according to the parity of the powers of x and y in its entries.

We deduce four families: \(\{S_{2n,2k}(x,y): 0\leqslant k \leqslant n\}_{n\geqslant 0}\), \(\{S_{2n,2k+1}(x,y): 0 \leqslant k \leqslant n-1\}_{n\geqslant 0}\), \(\{S_{2n+1,2k}(x,y): 0\leqslant k \leqslant n\}_{n\geqslant 0}\), and \(\{S_{2n+1,2k+1}(x,y): 0\leqslant k \leqslant n\}_{n\geqslant 0}\). Only the first family was identified in [21, 22] under the transformation \((x^2,y^2) \mapsto (x,y)\) as a family of polynomials orthogonal on \({\textbf{T}}^2\) with respect to the weight function (6.1). We observe that the second family has \(x\,y\) as a common factor, the third family has x as a common factor, and the fourth family has y as a common factor.

Working as in Sect. 4, we separate the symmetric monic orthogonal polynomial vectors as it was shown in Lemma 3.3

$$\begin{aligned} \mathbb {S}_{2n}(x,y)&= \mathbb {P}_n^{(0,0)}(x^2, y^2) + x\,y\,\mathbb {P}_{n-1}^{(1,1)}(x^2, y^2),\\ \mathbb {S}_{2n+1}(x,y)&= x\, \mathbb {P}_n^{(1,0)}(x^2, y^2) + y\,\mathbb {P}_{n}^{(0,1)}(x^2, y^2) . \end{aligned}$$

After deleting all zeros in the above vectors of polynomials and substituting the variables \((x^2,\,y^2)\) by \((x,\,y)\), we proved in Theorem 4.1 that:

\(\{{\widehat{\mathbb {P}}}_n^{(0,0)}\}_{n\geqslant 0} = \{{{\,\textrm{J}\,}}_{n}^{(0,0)}\,\mathbb {P}_n^{(0,0)}\}_{n\geqslant 0}\) is a MOPS associated with the weight function

$$\begin{aligned} W^{(0,0)}(x,y) = \dfrac{1}{\sqrt{x\,y}}\,W^{{\textbf{B}}}(\sqrt{x},\sqrt{y}), \end{aligned}$$

\(\{{\widehat{\mathbb {P}}}_n^{(2-k,k-1)}\}_{n\geqslant 0} = \{{{\,\textrm{J}\,}}_{n}^{(2-k,k-1)}\,\mathbb {P}_n^{(2-k,k-1)}\}_{n\geqslant 0}\), for \(k=1,2\), are the MOPS associated with the Christoffel modifications

$$\begin{aligned} W^{(1,0)}(x,y) = x \, W^{(0,0)}(x,y){} & {} \text {and}{} & {} W^{(0,1)}(x,y) = y \, W^{(0,0)}(x,y) . \end{aligned}$$

\(\{{\widehat{\mathbb {P}}}_n^{(1,1)}\}_{n\geqslant 0} = \{{{\,\textrm{J}\,}}_{n}^{(1,1)}\,\mathbb {P}_n^{(1,1)}\}_{n\geqslant 0}\) is a MOPS associated with the Christoffel modification

$$\begin{aligned} W^{(1,1)}(x,y) = x\,y\, W^{(0,0)}(x,y), \end{aligned}$$

for all \( (x,y) \in {\textbf{T}}^2\).
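For instance, take \(W^{\textbf{B}} \equiv 1\) (Lebesgue measure on the ball), so that, up to a constant factor, \(W^{(0,0)}(u,v) = (u\,v)^{-1/2}\) and \(W^{(1,0)}(u,v) = u^{1/2}\,v^{-1/2}\) on \({\textbf{T}}^2\). Then the first odd ball polynomial with leading monomial \(x^3\) should factor as \(x\) times a simplex polynomial orthogonal with respect to \(W^{(1,0)}\). A symbolic check in the lowest degree (our own illustration):

```python
import sympy as sp

x, u = sp.symbols('x u')

# even moments on the unit ball: int_{B^2} x^a y^b dx dy, with a and b even
bm = lambda a, b: (sp.Rational(2, a + b + 2)*sp.gamma(sp.Rational(a + 1, 2))
                   * sp.gamma(sp.Rational(b + 1, 2))/sp.gamma(sp.Rational(a + b + 2, 2)))
# simplex moments for the modified weight u^(1/2) v^(-1/2), via the Dirichlet integral
tm = lambda a, b: (sp.gamma(a + sp.Rational(3, 2))*sp.gamma(b + sp.Rational(1, 2))
                   / sp.gamma(a + b + 3))

# monic ball polynomial with leading monomial x^3; parity leaves only the extra term in x
S30 = x**3 - bm(4, 0)/bm(2, 0)*x
# monic simplex polynomial of degree 1 for W^{(1,0)}
P10 = u - tm(1, 0)/tm(0, 0)

assert sp.simplify(S30 - x*P10.subs(u, x**2)) == 0  # S_{3,0}(x,y) = x * P_1(x^2, y^2)
```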

Therefore, we have described the complete relation between orthogonal polynomials on the ball and orthogonal polynomials on the simplex.

The fact that four weight functions are needed to pass from the triangle to the ball is known; see, for example, Sect. 9 of [9]. The explicit expressions of the matrices of the recurrence relations for this specific case will be the subject of future work.