On a generalization of Schur theorem concerning resultants

Let $K$ be a field and put $\mathcal{A}:=\{(i,j,k,m)\in \mathbb{N}^{4}:\ i\le j\ \text{and}\ m\le k\}$. For any given $A\in \mathcal{A}$ we consider the sequence of polynomials $(r_{A,n}(x))_{n\in \mathbb{N}}$ defined by the recurrence
$$r_{A,n}(x)=f_{n}(x)r_{A,n-1}(x)-v_{n}x^{m}r_{A,n-2}(x),\quad n\ge 2,$$
where the initial polynomials $r_{A,0}, r_{A,1}\in K[x]$ are of degree $i$ and $j$ respectively, and $f_{n}\in K[x]$, $n\ge 2$, is of degree $k$ with variable coefficients. The aim of the paper is to prove a formula for the resultant $\operatorname{Res}(r_{A,n}(x),r_{A,n-1}(x))$. Our result extends the classical Schur formula, which is recovered for $A=(0,1,1,0)$. As an application we obtain a formula for the resultant $\operatorname{Res}(r_{A,n},r_{A,n-2})$, where $(r_{A,n})_{n\in \mathbb{N}}$ is the sequence of orthogonal polynomials corresponding to a symmetric moment functional.


Introduction
Let N denote the set of non-negative integers and N+ the set of positive integers; for given k ∈ N+ we write N≥k for the set of integers ≥ k.
Let K be a field and consider the polynomials F, G ∈ K[x]. The resultant Res(F, G) of the polynomials F, G is an element of K which encodes information about possible common roots. More precisely, Res(F, G) = 0 if and only if the polynomials F, G have a common factor of positive degree. The computation of resultants is, in general, a difficult task. Of special interest is the computation of resultants of pairs of polynomials which are interesting from either a number-theoretic or analytic point of view. A classical result is the computation of the resultant of two cyclotomic polynomials Φ_m, Φ_n. More precisely, Apostol proved that for m > n ≥ 1 we have Res(Φ_m, Φ_n) = p^{ϕ(n)} if m/n is a power of a prime p, and 1 otherwise, where ϕ is the Euler phi function [1]. On the other side, we have a result of Schur which allows the computation of resultants of consecutive terms in a sequence (r_n(x))_{n∈N} of polynomials defined by a linear recurrence of degree two. More precisely, let r_0(x) = 1, r_1(x) = a_1 x + b_1 and define r_n(x) = (a_n x + b_n)r_{n−1}(x) − c_n r_{n−2}(x), n ≥ 2, with a_n, b_n, c_n ∈ C satisfying a_n c_n ≠ 0. Under these assumptions, we have the following compact formula proved by Schur [9] (see also [10, p. 143]):
$$\operatorname{Res}(r_{n}, r_{n-1}) = (-1)^{\frac{n(n-1)}{2}}\prod_{j=1}^{n-1}a_{j}^{2(n-j)}\prod_{k=2}^{n}c_{k}^{k-1}.$$
In fact, Schur obtained a slightly different result, i.e., he obtained an expression for $\prod_{i=1}^{n} r_{n-1}(x_{i,n})$, where x_{i,n} is the ith root of the polynomial r_n. The importance of the Schur method lies in its applications to the computation of discriminants of orthogonal polynomials. Indeed, Favard proved that each family of orthogonal polynomials corresponds to the sequence (r_n(x))_{n∈N} for suitably chosen sequences (a_n)_{n∈N}, (b_n)_{n∈N} and (c_n)_{n∈N} (for the proof of this important theorem see [2, Theorem 4.4]). Computations of discriminants of certain classes of orthogonal polynomials can be found in [10, Theorem 6.71].
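Schur's formula can be checked numerically for small n; the sympy sketch below uses arbitrary sample coefficients, and the helper names (`schur_sequence`, `schur_rhs`) are ours, introduced only for illustration.

```python
# Numerical check of Schur's resultant formula for the three-term
# recurrence r_n = (a_n x + b_n) r_{n-1} - c_n r_{n-2}.
# The coefficient values below are arbitrary sample choices.
from math import prod
from sympy import symbols, resultant, expand

x = symbols('x')

def schur_sequence(a, b, c, n):
    """Return [r_0, ..., r_n] for r_0 = 1, r_1 = a[1]*x + b[1]."""
    r = [1, a[1]*x + b[1]]
    for k in range(2, n + 1):
        r.append(expand((a[k]*x + b[k])*r[k-1] - c[k]*r[k-2]))
    return r

def schur_rhs(a, b, c, n):
    """(-1)^(n(n-1)/2) * prod_{j<n} a_j^{2(n-j)} * prod_{k=2..n} c_k^{k-1}."""
    sign = (-1)**(n*(n - 1)//2)
    return sign * prod(a[j]**(2*(n - j)) for j in range(1, n)) \
                * prod(c[k]**(k - 1) for k in range(2, n + 1))

# sample coefficients, indexed from 1 (index 0 unused)
a = [None, 2, 3, 1, 2]
b = [None, 1, -1, 4, 0]
c = [None, None, 5, 2, 3]

r = schur_sequence(a, b, c, 4)
for n in range(2, 5):
    assert resultant(r[n], r[n-1], x) == schur_rhs(a, b, c, n)
```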
The method of Schur was generalized by Gishe and Ismail [4]. As an application, the authors reproved and generalized the result of Dilcher and Stolarsky from [3] concerning the resultant of certain linear combinations of Chebyshev polynomials of the first and the second kind. All these results were recently extended by Sawa and Uchida [8, Theorem 3.1] by a clever application of the Schur method. However, in all mentioned results there is a strong assumption on the considered sequences of polynomials, i.e., the degree of the nth term needs to equal n. Thus, it is natural to ask whether the method of Schur can be generalized to other families of recursively defined polynomials. Of special interest is the situation when the polynomial multiplying r_{n−1} in the recurrence defining the sequence (r_n(x))_{n∈N} is of degree ≥ 2. Moreover, one can ask whether the initial polynomials r_0, r_1 can have degrees not necessarily equal to 0 and 1 respectively. The aim of this note is to offer such a generalization and apply it to get some new resultant formulas. For the precise statement of our generalization and the main result, we refer the reader to Sect. 3.
Let us describe the content of the paper in some detail. In Sect. 2 we present a reminder of basic properties of the notion of resultant. In Sect. 3 we prove the main result of the paper, i.e., the expression for the resultant of consecutive terms of the sequence (r_{A,n})_{n∈N} (Theorem 3.1). Finally, in the last section, we present some applications of our main result. In particular, under some mild assumptions on the coefficients of the recurrence defining the sequence (r_{A,n})_{n∈N}, we present the expression for the resultant of the polynomials r_{A,n}, r_{A,n−2}.

Reminder on basic properties of resultants
Let K be a field and consider the polynomials F, G ∈ K[x] given by
$$F(x) = a_{n}x^{n} + a_{n-1}x^{n-1} + \cdots + a_{1}x + a_{0},\qquad G(x) = b_{m}x^{m} + b_{m-1}x^{m-1} + \cdots + b_{1}x + b_{0}.$$
The resultant of the polynomials F, G is defined as
$$\operatorname{Res}(F, G) = a_{n}^{m}b_{m}^{n}\prod_{i=1}^{n}\prod_{j=1}^{m}(\alpha_{i} - \beta_{j}),$$
where α_1, …, α_n and β_1, …, β_m are the roots of F and G respectively (viewed in an appropriate field extension of K). There is an alternative formula in terms of a certain determinant. More precisely, Res(F, G) is the element of K given by the determinant of the (m + n) × (m + n) Sylvester matrix
$$\begin{pmatrix}
a_{n} & a_{n-1} & \cdots & a_{0} & & \\
 & \ddots & & & \ddots & \\
 & & a_{n} & a_{n-1} & \cdots & a_{0} \\
b_{m} & b_{m-1} & \cdots & b_{0} & & \\
 & \ddots & & & \ddots & \\
 & & b_{m} & b_{m-1} & \cdots & b_{0}
\end{pmatrix},$$
whose first m rows contain shifted copies of the coefficients of F and whose last n rows contain shifted copies of the coefficients of G. The expression of a resultant as the determinant of the Sylvester matrix allows one to consider it for polynomials with coefficients in commutative rings (even with zero divisors). However, in the sequel we concentrate on the case when the considered polynomials have coefficients in a field K.
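The determinant description can be turned into a short computation; the following sympy sketch builds the Sylvester matrix from the coefficient lists of F and G (sample polynomials chosen arbitrarily) and compares its determinant with sympy's built-in resultant.

```python
# Build the (m+n) x (m+n) Sylvester matrix of two polynomials and check
# that its determinant agrees with sympy's built-in resultant.
from sympy import Matrix, Poly, symbols, resultant

x = symbols('x')

def sylvester(F, G):
    f, g = Poly(F, x), Poly(G, x)
    n, m = f.degree(), g.degree()
    fc, gc = f.all_coeffs(), g.all_coeffs()   # highest degree first
    rows = []
    for i in range(m):                         # m shifted copies of F's coefficients
        rows.append([0]*i + fc + [0]*(m - 1 - i))
    for i in range(n):                         # n shifted copies of G's coefficients
        rows.append([0]*i + gc + [0]*(n - 1 - i))
    return Matrix(rows)

F = 2*x**3 - x + 4
G = x**2 + 3*x - 1
assert sylvester(F, G).det() == resultant(F, G, x)
```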
We collect basic properties of the resultant of the polynomials F, G:
$$\operatorname{Res}(F, G) = (-1)^{\deg F\cdot \deg G}\operatorname{Res}(G, F),\qquad \operatorname{Res}(F, G_{1}G_{2}) = \operatorname{Res}(F, G_{1})\operatorname{Res}(F, G_{2}).$$
Moreover, if F(x) = a_0 is a constant polynomial then, unless F = G = 0, we have Res(F, G) = a_0^{deg G}. The proofs of the above properties can be found in [6, Chapter 3]. Finally, we recall an important result concerning the formula for the resultant of the polynomials G and F, provided that F(x) = q(x)G(x) + r(x). More precisely, we have the following.
Lemma 2.1 Let F, G, q, r ∈ K[x] satisfy F(x) = q(x)G(x) + r(x) with G ≠ 0 and r ≠ 0. Then we have the formula
$$\operatorname{Res}(G, F) = \operatorname{lc}(G)^{\deg F - \deg r}\operatorname{Res}(G, r),$$
where lc(G) denotes the leading coefficient of G. The proof of the above lemma can be found in [7] (see also [3]). For possible generalizations of the notion of resultant to polynomials in several variables we refer the reader to [5].
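The reduction lemma is easy to test on concrete data; in the sympy sketch below, F and G are arbitrary sample polynomials and lc(G) denotes the leading coefficient of G.

```python
# Check of the reduction lemma: if F = q*G + r with r = F mod G, then
# Res(G, F) = lc(G)^(deg F - deg r) * Res(G, r).
from sympy import symbols, div, LC, degree, resultant

x = symbols('x')
F = 3*x**4 + x**2 - 2*x + 7
G = 2*x**2 + x - 1

q, r = div(F, G, x)                    # polynomial division F = q*G + r
lhs = resultant(G, F, x)
rhs = LC(G, x)**(degree(F, x) - degree(r, x)) * resultant(G, r, x)
assert lhs == rhs
```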

Generalization of Schur theorem
In this section we state and prove the main result of this paper: the generalization of the Schur theorem. Let K be a field. We define the set
$$\mathcal{A}:=\{(i,j,k,m)\in \mathbb{N}^{4}:\ i\le j\ \text{and}\ m\le k\},$$
and for given A = (i, j, k, m) ∈ 𝒜 we consider the sequence of polynomials (r_{A,n}(x))_{n∈N} defined in the following way:
$$r_{A,0}(x)=\sum_{s=0}^{i}p_{s}x^{s},\qquad r_{A,1}(x)=\sum_{s=0}^{j}q_{s}x^{s},\qquad r_{A,n}(x)=f_{n}(x)r_{A,n-1}(x)-v_{n}x^{m}r_{A,n-2}(x),\ n\ge 2,$$
where $f_{n}(x)=\sum_{s=0}^{k}a_{n,s}x^{s}$. We assume that p_s, q_s, v_n, a_{n,s} ∈ K (in the appropriate range of the parameters s, n) and p_i q_j a_{n,k} ≠ 0 for each n ∈ N≥2. Moreover, we assume that a_{2,k}q_i − v_2 p_i ≠ 0. Theorem 3.1 Under the above assumptions on A, r_{A,0}, r_{A,1} and f_n for n ∈ N≥2 we have the following formula. Proof First of all, note that from the assumptions on i, j, k, m, the assumption a_{2,k}q_i − v_2 p_i ≠ 0 and a simple use of the recurrence relation defining the sequence (r_{A,n})_{n∈N}, the leading term L_n of the polynomial r_{A,n} satisfies L_2 = a_{2,k}q_j if i < j or m < k, L_2 = a_{2,k}q_j − v_2 p_i if i = j and m = k, and L_n = a_{n,k}L_{n−1} for n ≥ 3, and it is non-zero. In consequence, we see that deg r_{A,n} = (n − 1)k + j for n ≥ 2. In order to give the value of the constant term, say C_n, of r_{A,n}, i.e., the value r_{A,n}(0), we consider two cases: m > 0 and m = 0. If m > 0, then by simple induction one can prove that C_n = a_{n,0}C_{n−1}, and thus $C_{n} = q_{0}\prod_{s=2}^{n}a_{s,0}$. If m = 0, then the value C_n = r_{A,n}(0) satisfies the recurrence relation C_n = a_{n,0}C_{n−1} − v_n C_{n−2}. In the generality we are dealing with here, we cannot give an exact form of C_n, and in fact we will not need it.
We are ready to prove our theorem. However, in order to simplify the proof a bit, we first compute the resultant of the polynomials r_{A,2}(x) and r_{A,1}(x). We have the following chain of equalities (2.5), where in the last equality we used the identity Res(r_{A,1}, x) = r_{A,1}(0) = q_0. Now let us assume that n ≥ 3 and consider the polynomials r_{A,n}(x) and r_{A,n−1}(x). We have a similar chain of equalities. Note that the first five equalities are true for all m ∈ N, not only for m > 0. We will need this observation later.
If m > 0, then from the above computations we obtain a recurrence relation for the value of R_n = Res(r_{A,n}, r_{A,n−1}). We consider the case i < j ∨ (i = j ∧ m < k) first. By simple iteration of the above recurrence together with the expression for R_2, we obtain a closed formula, and after simplification of the resulting expression we get the first formula from the statement of our theorem with T_A = q_j. Performing exactly the same reasoning as above, we get the formula from the statement in the case when i = j and m = k with T_A = (a_{2,k}q_i − v_2 p_i)/a_{2,k}.
Let us come back to the case m = 0. We put R̃_n = Res(r_{A,n}(x), r_{A,n−1}(x)). First of all, let us note that, performing exactly the same reasoning as in the computation of R_2 in the case m > 0, we easily get an expression for R̃_2. Note that R̃_2 is equal to R_2 with m replaced by 0.
Let n ≥ 3. In order to find a recurrence relation for R̃_n we follow exactly the same approach as in the case of R_n; in particular, R̃_n is expressed in terms of R̃_{n−1}. Again, from our reasoning, we see that R̃_n is equal to R_n with m replaced by 0, where we take into account the convention that r_{A,n−1}(0)^0 = 1 for any value of r_{A,n−1}(0). In particular, we allow r_{A,n−1}(0) to be 0.
Summing up, our formula for Res(r A,n , r A,n−1 ) from the statement of our theorem holds for each m ∈ N.
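To illustrate the setup of Theorem 3.1, the following sympy sketch builds a sample sequence r_{A,n} for A = (1, 2, 2, 1) with arbitrary concrete coefficients and verifies the degree and leading-term behavior established above; the consecutive resultants, whose values are governed by Theorem 3.1, are then available by direct computation.

```python
# Sample instance of the recurrence of Theorem 3.1 with A = (i, j, k, m)
# = (1, 2, 2, 1); all concrete coefficients are arbitrary choices.
from sympy import symbols, expand, degree, LC, resultant

x = symbols('x')
i_, j_, k_, m_ = 1, 2, 2, 1           # a sample tuple in the set A (i <= j, m <= k)

r0 = 2*x + 1                           # r_{A,0}, degree i
r1 = x**2 + 3*x + 1                    # r_{A,1}, degree j

def f(n):                              # f_n of degree k with n-dependent coefficients
    return (n + 1)*x**2 + n*x + 1

r = [r0, r1]
for n in range(2, 6):
    r.append(expand(f(n)*r[n-1] - (n + 2)*x**m_*r[n-2]))   # v_n = n + 2

for n in range(2, 6):
    assert degree(r[n], x) == (n - 1)*k_ + j_       # deg r_{A,n} = (n-1)k + j
    assert LC(r[n], x) == (n + 1)*LC(r[n-1], x)     # L_n = a_{n,k} L_{n-1}
    R = resultant(r[n], r[n-1], x)                  # value predicted by Theorem 3.1
```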

Remark 3.2
The formula for Res(r_{A,n}, r_{A,n−1}) presented in Theorem 3.1 is not the most general one. Indeed, one can consider a slightly more general recurrence and obtain a similar result. More precisely, for given A ∈ 𝒜 one can consider the sequence (g_{A,n}(x))_{n∈N} defined in the following way:
$$g_{A,n}(x)=f_{n}(x)g_{A,n-1}(x)-v_{n}h(x)g_{A,n-2}(x),\ n\ge 2,$$
where the initial polynomials are of degree i and j with leading coefficients satisfying p_i q_j ≠ 0, and h ∈ K[x] is of degree m with leading coefficient b_m satisfying a_{n,k}b_m ≠ 0 for each n ∈ N≥2. In particular, h is fixed and does not depend on n. Moreover, in order to guarantee the good behavior of the degree of the polynomial g_{A,n}, we need to assume a_{2,k}q_i − v_2 b_m p_i ≠ 0 for the given k, i, m. With the above definitions and assumptions, we get the equalities deg g_{A,0} = i, deg g_{A,1} = j, and for n ≥ 2 we have deg g_{A,n} = (n − 1)k + j. Thus we see that the leading term L_{A,n} of the polynomial g_{A,n} satisfies L_{A,n} = a_{n,k}L_{A,n−1} for n ≥ 3. Now, if we put G_n = Res(g_{A,n}, g_{A,n−1}) then, using essentially the same reasoning as in the proof of Theorem 3.1, we get a recurrence relation for the sequence (G_n)_{n∈N+} involving the factor Res(g_{A,n−1}, h)G_{n−1}. By an independent computation we get the value of G_2 and then an explicit formula for G_n. However, in order to compute Res(g_{A,n}, g_{A,n−1}) with the help of the above formula, we need to know the value of Res(g_{A,s}, h) for each s = 1, …, n − 1, which in general is a difficult task (due to the complicated and essentially unknown form of the coefficients of g_{A,s}). We have a simple expression for Res(g_{A,s}, h) only in the case when h(x) = x^m. This is exactly the case presented in Theorem 3.1.

Applications
In this section we offer some applications of Theorem 3.1. We consider the sequence (r_n)_{n∈N} governed by the recurrence: r_0(x) = 1, r_1(x) = a_1 x + b_1 and
$$r_{n}(x) = (a_{n}x + b_{n})r_{n-1}(x) - c_{n}x^{m}r_{n-2}(x),\quad n\ge 2, \tag{4.1}$$
where a_n, b_n, c_n ∈ K with a_n c_n ≠ 0 for n ∈ N+, and m ∈ {0, 1}. For m = 0 we get the recurrence considered by Schur. In this case the result of Schur gives an expression for the resultant of the polynomials r_n and r_{n−1}. Now, we show that under some assumptions on the sequences (a_n)_{n∈N+}, (b_n)_{n∈N+} one can get a nice expression for the resultant of the polynomials r_n, r_{n−2}. More precisely, we prove the following. Theorem 4.1 Let m ∈ {0, 1}. Let a_n, b_n, c_n ∈ K for n ∈ N+ and suppose that a_n c_n ≠ 0. Let us consider the sequence of polynomials (r_n(x))_{n∈N} defined by (4.1) and suppose that for each n ≥ 2 we have a_{n−2}b_n = a_n b_{n−2}. Moreover, let us put d_n = a_n/a_{n−2}. Then, if m = 0 the following formulas hold:

M. Ulas
If m = 1 we have corresponding formulas for Res(r_{2n}, r_{2(n−1)}) and Res(r_{2n+1}, r_{2n−1}). Proof In order to apply Theorem 3.1 for the computation of Res(r_{2n}, r_{2(n−1)}) and Res(r_{2n+1}, r_{2n−1}), we need to express r_n in terms of r_{n−2} and r_{n−4}. First, solving (4.1) with respect to r_{n−1} we get
$$r_{n-1} = \frac{1}{a_{n}x + b_{n}}\left(r_{n} + c_{n}x^{m}r_{n-2}\right),\qquad r_{n-3} = \frac{1}{a_{n-2}x + b_{n-2}}\left(r_{n-2} + c_{n-2}x^{m}r_{n-4}\right).$$
Next, from the relation (4.1) with n replaced by n − 1 and the above expressions we get
$$\frac{1}{a_{n}x + b_{n}}\left(r_{n} + c_{n}x^{m}r_{n-2}\right) = (a_{n-1}x + b_{n-1})r_{n-2} - \frac{c_{n-1}x^{m}}{a_{n-2}x + b_{n-2}}\left(r_{n-2} + c_{n-2}x^{m}r_{n-4}\right). \tag{4.2}$$

Observe now that the condition a_n b_{n−2} = a_{n−2}b_n implies that the expression
$$\frac{a_{n}x + b_{n}}{a_{n-2}x + b_{n-2}} = \frac{a_{n}(a_{n}x + b_{n})}{a_{n}a_{n-2}x + a_{n}b_{n-2}} = \frac{a_{n}(a_{n}x + b_{n})}{a_{n-2}(a_{n}x + b_{n})} = \frac{a_{n}}{a_{n-2}} = d_{n}$$
does not depend on x. Thus, the relation (4.2) can be rewritten in the following equivalent form:
$$r_{n} = h_{n}(x)r_{n-2} - c_{n-1}c_{n-2}d_{n}x^{2m}r_{n-4}, \tag{4.3}$$
where h_n(x) = a_{n−1}a_n x^2 + (a_n b_{n−1} + a_{n−1}b_n)x − (c_n + c_{n−1}d_n)x^m + b_{n−1}b_n.
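The skipped-term recurrence (4.3) can be verified symbolically; the sympy sketch below uses arbitrary sample coefficients with b_n = 0 (so that a_{n−2}b_n = a_n b_{n−2} holds automatically) and the exponent 2m on x that the elimination of r_{n−1} and r_{n−3} produces.

```python
# Symbolic check that r_n = h_n(x) r_{n-2} - c_{n-1} c_{n-2} d_n x^(2m) r_{n-4}
# under the condition a_{n-2} b_n = a_n b_{n-2}; sample data with b_n = 0.
from sympy import symbols, expand, Rational

x = symbols('x')
m = 1
a = [None, 2, 3, 1, 2, 4, 3, 5]        # arbitrary nonzero values
b = [None] + [0]*7                     # symmetric choice b_n = 0
c = [None, None, 5, 2, 3, 1, 2, 4]

r = [1, a[1]*x + b[1]]
for n in range(2, 8):
    r.append(expand((a[n]*x + b[n])*r[n-1] - c[n]*x**m*r[n-2]))

for n in range(4, 8):
    d = Rational(a[n], a[n-2])          # d_n = a_n / a_{n-2}, kept exact
    h = (a[n-1]*a[n]*x**2 + (a[n]*b[n-1] + a[n-1]*b[n])*x
         - (c[n] + c[n-1]*d)*x**m + b[n-1]*b[n])
    assert expand(h*r[n-2] - c[n-1]*c[n-2]*d*x**(2*m)*r[n-4] - r[n]) == 0
```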
First we consider the case m = 0. Having the above recurrence relation (4.3), it is an easy task to get the expression for Res(r_{2n}, r_{2(n−1)}). Indeed, we replace n by 2n and apply Theorem 3.1 to the polynomial r_{A,n}(x) := r_{2n}(x), n ∈ N, with the appropriate choice of A and of the coefficient data. After necessary simplifications we get the expression from the statement of the theorem.
Next, we note that r_1(x) = a_1 x + b_1 and from the identity a_1 b_3 = a_3 b_1 we get r_3(x) ≡ 0 (mod a_1 x + b_1). In consequence, from the relation (4.3) we immediately get that r_{2n+1} ≡ 0 (mod a_1 x + b_1) for each n ∈ N. Thus, in order to apply Theorem 3.1 we write r_{A,n}(x) := r_{2n+1}(x)/(a_1 x + b_1) for n ∈ N. After necessary simplifications we get the first part of our theorem.
In the case m = 1 we perform exactly the same reasoning. We replace n by 2n and apply Theorem 3.1 to the polynomial r_{A,n}(x) := r_{2n}(x), n ∈ N, with the appropriate choice of A and of the coefficient data. Finally, in order to obtain the last formula from the statement of our theorem, we note that r_1(x) = a_1 x + b_1 and from the identity a_1 b_3 = a_3 b_1 we get r_3(x) ≡ 0 (mod a_1 x + b_1). In consequence, from the relation (4.3) we immediately get that r_{2n+1} ≡ 0 (mod a_1 x + b_1) for each n ∈ N. Thus, in order to apply Theorem 3.1 we write r_{A,n}(x) := r_{2n+1}(x)/(a_1 x + b_1) for n ∈ N. After necessary simplifications we get our last formula.

Remark 4.2
The condition a_n b_{n−2} = a_{n−2}b_n for n ∈ N≥3 seems to be quite strong. However, it is clear that for b_n = 0 this condition is satisfied. Notice that in this case we deal with an important class of orthogonal polynomials: those which correspond to symmetric moment functionals. We recall the necessary definitions. Let (μ_n)_{n∈N} be a sequence of complex numbers and let L be a complex-valued function defined on C[x] satisfying the conditions
$$L[x^{n}] = \mu_{n},\qquad L[\alpha_{1}p_{1}(x) + \alpha_{2}p_{2}(x)] = \alpha_{1}L[p_{1}(x)] + \alpha_{2}L[p_{2}(x)]$$
for each n ∈ N, α_1, α_2 ∈ C and p_1, p_2 ∈ C[x]. Such an L is called the moment functional determined by the sequence (μ_n)_{n∈N}.
The moment functional is used in the definition of orthogonal polynomials. Indeed, the sequence (Q_n(x))_{n∈N} is an orthogonal polynomial sequence with respect to L if deg Q_n = n, L[Q_m(x)Q_n(x)] = 0 for m ≠ n, and L[Q_n^2(x)] ≠ 0 for each n ∈ N. The moment functional is called symmetric if all of its moments of odd order are 0, i.e., L[x^{2n+1}] = 0 for n ∈ N. This is equivalent to the condition b_n = 0 for n ≥ 1 (see [2, Theorem 4.3]) and guarantees the existence of our compact formula given in Theorem 4.1.
This condition is satisfied by the Legendre, Hermite, Chebyshev, Bessel, Lommel and many other sequences of orthogonal polynomials (see [2, Chapter V]). We present three illustrative examples.

Example 4.3
The sequence (P_n(x))_{n∈N} of Legendre polynomials is given by P_0(x) = 1, P_1(x) = x and the recurrence relation
$$nP_{n}(x) = (2n-1)xP_{n-1}(x) - (n-1)P_{n-2}(x),\quad n\ge 2.$$
In particular, we have a_n = (2n − 1)/n, b_n = 0 and c_n = (n − 1)/n. It is clear that a_{n−2}b_n = a_n b_{n−2} (= 0) for n ≥ 2, and after necessary simplifications we get the formulas for Res(P_{2n}(x), P_{2(n−1)}(x)) and Res(P_{2n+1}(x), P_{2n−1}(x)), with the convention that 0^0 = 1. As a simple consequence of our computations, we get that the polynomials P_n, P_{n−2} are co-prime or their only common root is x = 0, for each n ∈ N≥2.
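The Legendre computations above can be checked directly in sympy: Bonnet's recursion holds, consecutive even-index polynomials are coprime, and consecutive odd-index polynomials share only the root x = 0.

```python
# Checks for the Legendre example: the three-term recurrence and the
# coprimality dichotomy for P_n and P_{n-2}.
from sympy import symbols, legendre, resultant, expand

x = symbols('x')
P = lambda n: legendre(n, x)

# Bonnet's recursion n P_n = (2n-1) x P_{n-1} - (n-1) P_{n-2}, here for n = 4
assert expand(4*P(4) - 7*x*P(3) + 3*P(2)) == 0

# even indices: nonzero resultant, so P_4 and P_2 are coprime
assert resultant(P(4), P(2), x) != 0

# odd indices: both polynomials vanish at x = 0, so the resultant is 0
assert P(5).subs(x, 0) == 0 and P(3).subs(x, 0) == 0
assert resultant(P(5), P(3), x) == 0
```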
The sequence (H_n(x))_{n∈N} of Hermite polynomials is given by H_0(x) = 1, H_1(x) = 2x and the recurrence relation H_n(x) = 2xH_{n−1}(x) − 2(n − 1)H_{n−2}(x) for n ≥ 2; in particular, we have a_n = 2, b_n = 0, c_n = 2(n − 1). It is clear that a_{n−2}b_n = a_n b_{n−2} (= 0) for n ≥ 2 and we get that d_n = 1. In consequence, after necessary simplifications, we get the formulas Res(H_{2n}(x), H_{2(n−1)}(x)) = 2^{7n(n−1)} ⋯ and the companion formula for Res(H_{2n+1}(x), H_{2n−1}(x)). The sequence (V_n(x))_{n∈N} is not a sequence of orthogonal polynomials. It is clear that V_0(x) = 1, V_1(x) = 2(x + 1). For n ≥ 2, the recurrence relation
$$V_{n}(x) = \frac{2(2n-1)}{n}(x+1)V_{n-1}(x) - \frac{16(n-1)}{n}xV_{n-2}(x)$$
holds. This relation can be proved easily by induction on n with the help of the recurrence satisfied by the sequence of central binomial coefficients $(\binom{2n}{n})_{n\in \mathbb{N}}$. We omit the details. Now, in order to get the formula for Res(V_n(x), V_{n−1}(x)), it is enough to apply Theorem 3.1 with A = (0, 1, 1, 1), q_0 = q_1 = 2, a_{n,0} = a_{n,1} = 2(2n − 1)/n and v_n = 16(n − 1)/n.
After necessary simplifications we get the formula Res(V_n(x), V_{n−1}(x)) = 2^{3n(n−1)} ⋯. Note that the sequence of polynomials (V_n(x))_{n∈N} (or, to be more precise, the recurrence relation defining the sequence) also satisfies the assumptions of Theorem 4.1. Thus, one can also compute the value of the resultant Res(V_n(x), V_{n−2}(x)).