Polynomial Total Positivity and High Relative Accuracy Through Schur Polynomials

In this paper, Schur polynomials are used to provide a bidiagonal decomposition of polynomial collocation matrices. The symmetry of Schur polynomials is exploited to analyze the total positivity, on some unbounded intervals, of a relevant class of polynomial bases. The proposed factorization is used to achieve relative errors of the order of the unit round-off when solving algebraic problems involving the collocation matrices of relevant polynomial bases, such as the Hermite basis. Numerical experiments illustrate the accuracy obtained when using the findings of the paper.


Introduction
Schur polynomials are homogeneous symmetric polynomials with integer coefficients that arise in many different contexts. They are indexed by partitions and generalize the classes of elementary symmetric and complete homogeneous symmetric polynomials. In fact, the degree k Schur polynomials in j variables form a linear basis for the space of homogeneous degree k symmetric polynomials in j variables. When defined by Jacobi's bi-alternant formula, Schur polynomials are expressed as a quotient of alternating polynomials, i.e., polynomials that change sign under any transposition of the variables.
Schur polynomials have been classically studied in Combinatorics and Algebra. They play a relevant role in the study of symmetric functions, in Representation theory and in Enumerative combinatorics (see [21] and the references therein). In recent years, they have also been used in Computer Science, for Quantum computation [11] and in Geometric complexity theory [12].
A relevant topic in Numerical Linear Algebra is the design and analysis of procedures to get accurate solutions of algebraic problems for totally positive matrices, that is, matrices whose minors are all non-negative. In particular, many fundamental problems in interpolation and approximation require linear algebra computations related to totally positive collocation matrices. For example, these matrices arise when imposing Lagrange interpolation conditions on a given basis of a vector space of functions, at sequences of parameters in the domain.
Let us note that many problems related to interpolation, numerical quadrature or least squares approximation can be formulated in terms of collocation matrices of a given basis. For example, interesting problems motivated by the use of the moving least squares method applied in image analysis are addressed in [17]. On the other hand, let us recall that in the interactive design of parametric curves and surfaces, shape preserving properties are closely related to the total positivity of the collocation matrices of the considered bases.
Unfortunately, collocation matrices may become ill-conditioned as their dimensions increase and then standard routines, implementing traditional numerical methods, do not obtain accurate solutions when computing the eigenvalues, the singular values or the inverse matrices. For this reason, it is very interesting to achieve computations to high relative accuracy (HRA computations), whose relative errors are of the order of the machine precision. In recent years, HRA computations have been achieved for collocation matrices of different polynomial bases (see [2,3,5,7,19]).
The total positivity of a given matrix can be characterized through the sign of the pivots and multipliers of its Neville elimination. The HRA computation of these pivots and multipliers provides a bidiagonal factorization for totally positive matrices, leading to HRA algorithms for the resolution of the aforementioned algebraic problems (cf. [8-10]). As shown in Sect. 2, the pivots and multipliers of the Neville elimination can be expressed as quotients of minors with consecutive columns of the considered matrix. For collocation matrices of a given basis, these minors are alternating functions of the domain parameters and then can be expressed in terms of a basis of symmetric functions.
The previous observation is at the core of this paper and we shall exploit it when considering collocation matrices of polynomial bases. In this case, the preferred basis of symmetric functions is formed by Schur polynomials, in terms of which the pivots and the multipliers are naturally expressed.
HRA computations have been achieved for some polynomial collocation matrices by considering the bidiagonal factorization of Vandermonde matrices and that of a change of basis matrix between the considered and the monomial bases (see [2,3,19]). In contrast, the explicit expression for the bidiagonal factorization of any polynomial collocation matrix is deduced in this paper. Furthermore, the achieved formulae for the pivots and multipliers in terms of Schur polynomials, together with some known properties of these symmetric functions, allow us to fully characterize the total positivity on unbounded intervals of relevant polynomial bases, and to achieve HRA computations when solving algebraic problems involving their collocation matrices.
The layout of this paper is as follows. Section 2 recalls basic aspects related to total positivity, HRA and Schur polynomials. In addition, the Neville elimination procedure is also described. In Sect. 3, the pivots and multipliers of the Neville elimination of polynomial collocation matrices are explicitly expressed in terms of Schur polynomials. Section 4 focuses on polynomial bases obtained by multiplying the monomial basis by a nonsingular lower triangular matrix. A necessary and sufficient condition for the total positivity of these polynomial bases on unbounded intervals with positive or negative parameters is also obtained. Taking into account the results of this section, bidiagonal factorizations for collocation matrices of well-known polynomial bases are provided in Sect. 5. Section 6 illustrates the accurate results obtained when solving algebraic problems with collocation matrices of Hermite polynomials. To the best of the authors' knowledge, such precise calculations have not yet been achieved with Hermite matrices. Finally, some conclusions and final remarks are collected in Sect. 7.

Notations and Auxiliary Results
Given k, n ∈ N with k ≤ n, A = (a_{i,j})_{1≤i,j≤n}, and increasing sequences α = {α_1, ..., α_k}, β = {β_1, ..., β_k} of positive integers less than or equal to n, A[α|β] denotes the k × k submatrix of A containing rows and columns of places α and β, respectively, that is,

A[α|β] := (a_{α_i, β_j})_{1≤i,j≤k}.

Let (u_0, ..., u_n) be a basis of a given space U(I) of functions defined on the real set I. The collocation matrix of (u_0, ..., u_n) at a sequence t_1 < ⋯ < t_{n+1} of parameters in I is

M := (u_{j-1}(t_i))_{1≤i,j≤n+1}.

We say that (u_0, ..., u_n) is totally positive or TP (respectively, strictly totally positive or STP) if, for any t_1 < ⋯ < t_{n+1} in I, the corresponding collocation matrix is TP (respectively, STP), that is, all its minors are nonnegative (respectively, positive).

High Relative Accuracy, Total Positivity and Neville Elimination
An important topic in Numerical Linear Algebra is the design and analysis of algorithms adapted to the structure of TP matrices, allowing the resolution of related algebraic problems with relative errors of the order of the unit round-off (or machine precision), that is, to high relative accuracy (HRA). Algorithms avoiding inaccurate cancellations can be performed to HRA (see page 52 in [4]). In this case, we say that they satisfy the non-inaccurate cancellation condition, namely the NIC condition: they only compute multiplications, divisions, and additions of numbers with the same sign. Moreover, if the floating-point arithmetic is well implemented, the subtraction of initial data can also be allowed without losing HRA (see page 53 in [4]).
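The NIC condition can be illustrated with a minimal floating-point sketch of our own (not taken from [4]): subtracting two nearly equal computed quantities of the same sign destroys all relative accuracy, while same-sign additions remain safe.

```python
# Minimal illustration (our own, not from [4]): subtracting nearly equal
# computed quantities of the same sign loses all relative accuracy,
# which is exactly the kind of cancellation the NIC condition forbids.
x = 1.0e16
a = (x + 1.0) - x      # exact value is 1.0, but 1.0 is absorbed by x
b = (x + 1.0) + x      # same-sign addition keeps a small relative error
print(a)               # 0.0: total loss of relative accuracy
print(b)               # 2e+16: relative error of the order of the unit round-off
```

The first result has relative error 1 (100%), whereas the second is exact to within the unit round-off; HRA algorithms are built so that only operations of the second kind occur on computed data.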
Nowadays, bidiagonal factorizations are very useful to derive accurate algorithms for performing computations with TP matrices. In fact, the parameterization of TP matrices leading to HRA algorithms is provided by their bidiagonal factorization, which is in turn very closely related to the Neville elimination (cf. [8-10]).
The essence of the Neville elimination is to obtain an upper triangular matrix from a given A = (a_{i,j})_{1≤i,j≤n+1}, by adding to each row an appropriate multiple of the previous one. In particular, the Neville elimination of A consists of n major steps that define matrices A^{(1)} := A and A^{(r)} = (a^{(r)}_{i,j})_{1≤i,j≤n+1} for r = 2, ..., n+1, so that A^{(n+1)} is an upper triangular matrix. In more detail, A^{(r+1)} is computed from A^{(r)} according to the following formula:

a^{(r+1)}_{i,j} := a^{(r)}_{i,j} - (a^{(r)}_{i,r} / a^{(r)}_{i-1,r}) a^{(r)}_{i-1,j}, if r+1 ≤ i ≤ n+1 and a^{(r)}_{i-1,r} ≠ 0, and a^{(r+1)}_{i,j} := a^{(r)}_{i,j} otherwise.

The entry

p_{i,j} := a^{(j)}_{i,j}, 1 ≤ j ≤ i ≤ n+1,   (3)

is called the (i, j) pivot of the Neville elimination of A and p_{i,i} the i-th diagonal pivot. If all the pivots of the Neville elimination are nonzero, Lemma 2.6 of [8] implies that

p_{i,j} = det A[i-j+1, ..., i | 1, ..., j] / det A[i-j+1, ..., i-1 | 1, ..., j-1], 1 ≤ j ≤ i ≤ n+1,   (4)

where the denominator is taken as 1 for j = 1, and the element

m_{i,j} := p_{i,j} / p_{i-1,j}, 1 ≤ j < i ≤ n+1,   (5)

is called the (i, j) multiplier of the Neville elimination of A. The complete Neville elimination of the matrix A consists of performing its Neville elimination to obtain the upper triangular matrix U := A^{(n+1)} and, next, the Neville elimination of the lower triangular matrix U^T. If the complete Neville elimination of the matrix A can be performed with no row and column exchanges, the multipliers of the complete Neville elimination of A are the multipliers of the Neville elimination of A (respectively, of A^T) if i ≥ j (respectively, j ≥ i) (see [10]).
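As a hedged illustration (our own sketch, not one of the HRA algorithms discussed later), the elimination just described can be coded directly; the function below performs the Neville elimination of a matrix with nonzero pivots and returns the pivots and multipliers (0-based indices in the code).

```python
import numpy as np

def neville(A):
    """Neville elimination of a square matrix (assuming no zero pivots and
    no row exchanges).  Returns the pivots P[i, j] = a^{(j)}_{i,j} and the
    multipliers M[i, j] = P[i, j] / P[i-1, j], all with 0-based indices."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    P = np.zeros((n, n))
    M = np.zeros((n, n))
    for j in range(n):                       # j-th major step
        P[j:, j] = A[j:, j]                  # record the pivots of column j
        for i in range(n - 1, j, -1):        # eliminate bottom-up, each row
            M[i, j] = A[i, j] / A[i - 1, j]  # using only the previous row
            A[i, j:] -= M[i, j] * A[i - 1, j:]
    return P, M

# Diagonal pivots of the Vandermonde matrix at nodes 1, 2, 3:
P, M = neville([[1., 1., 1.], [1., 2., 4.], [1., 3., 9.]])
print(np.diag(P))     # [1. 1. 2.], i.e. p_{i,i} = prod_{k<i} (t_i - t_k)
```

The recorded pivots agree with the quotients of minors in (4), which is how they are obtained to HRA in the rest of the paper.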
Neville elimination is a nice and efficient tool to analyze the total positivity of a given matrix. This fact is shown in the following characterization, which can be derived from Theorem 4.1 and Corollary 5.5 of [8] and the arguments of p. 116 of [10].

Theorem 1 A given matrix A is STP (resp. nonsingular TP) if and only if its complete Neville elimination can be performed without row and column exchanges, the multipliers of the Neville elimination of A and A T are positive (resp. nonnegative), and the diagonal pivots of the Neville elimination of A are positive.
Furthermore, a nonsingular TP matrix A ∈ R^{(n+1)×(n+1)} admits a decomposition of the form

A = F_n F_{n-1} ⋯ F_1 D G_1 ⋯ G_{n-1} G_n,   (6)

where F_i (respectively, G_i) is the TP, lower (respectively, upper) triangular bidiagonal matrix with unit diagonal whose only other nonzero entries are the subdiagonal (respectively, superdiagonal) entries

(F_i)_{k,k-1} = m_{k,k-i}, (G_i)_{k-1,k} = m̃_{k,k-i}, k = i+1, ..., n+1,   (7)

and D ∈ R^{(n+1)×(n+1)} is the diagonal matrix whose diagonal elements are the diagonal pivots p_{i,i} > 0, i = 1, ..., n+1, of the Neville elimination of A in (3) (see Theorem 4.2 and the arguments of p. 116 of [10]). The entries m_{i,j} of the matrices F_i in (7) are the multipliers of the Neville elimination of A. Furthermore, the entries m̃_{i,j} of the matrices G_i in (7) are the multipliers of the Neville elimination of A^T.
By defining BD(A) = (BD(A)_{i,j})_{1≤i,j≤n+1}, with

BD(A)_{i,j} := m_{i,j} if i > j, p_{i,i} if i = j, m̃_{j,i} if i < j,   (8)

the decomposition (6) of a nonsingular TP matrix A can be stored (cf. [13]). If the entries of BD(A) can be computed to HRA, using the algorithms presented in [14], problems such as the computation of A^{-1}, of the eigenvalues and singular values of A, as well as the resolution of linear systems of equations Ax = b, for vectors b whose entries have alternating signs, can be performed to HRA. The implementation of those algorithms can be found through the link [16]. The corresponding functions are TNInverseExpand (applying the algorithm proposed in [20]), TNEigenValues, TNSingularValues, and TNSolve, respectively. All these functions require the matrix BD(A) as the input argument.
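To make the compact storage concrete, the following sketch (our own; the entry placement inside each F_i and G_i follows the usual convention of [13,16] and is an assumption here) rebuilds A from BD(A) by explicitly forming the bidiagonal factors of (6).

```python
import numpy as np

def bd_to_matrix(BD):
    """Rebuild A = F_n ... F_1 D G_1 ... G_n from the compact form BD(A):
    diagonal pivots on the diagonal, multipliers m_{i,j} below it and
    co-multipliers (from the Neville elimination of A^T) above it.
    The placement of the entries inside each bidiagonal factor follows
    the standard TNTools convention and is our assumption here."""
    BD = np.asarray(BD, dtype=float)
    N = BD.shape[0]
    A = np.diag(np.diag(BD))                 # start with D
    for i in range(1, N):                    # A <- D G_1 ... G_{N-1}
        G = np.eye(N)
        for r in range(i, N):
            G[r - 1, r] = BD[r - i, r]       # superdiagonal co-multipliers
        A = A @ G
    for i in range(1, N):                    # A <- F_{N-1} ... F_1 A
        F = np.eye(N)
        for r in range(i, N):
            F[r, r - 1] = BD[r, r - i]       # subdiagonal multipliers
        A = F @ A
    return A

# BD of the Vandermonde matrix at nodes 1, 2, 3 (pivots 1, 1, 2):
BD = np.array([[1., 1., 1.],
               [1., 1., 2.],
               [1., 1., 2.]])
print(bd_to_matrix(BD))   # recovers the Vandermonde matrix at nodes 1, 2, 3
```

Reassembling A this way is only a consistency check; the point of the paper is to operate directly on BD(A) without ever forming A.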

Basic Properties of Schur Polynomials
Given a partition λ := (λ_1, ..., λ_m), with λ_1 ≥ ⋯ ≥ λ_m ≥ 0, of size |λ| := λ_1 + ⋯ + λ_m and length l(λ) given by its number of positive entries, the corresponding Schur polynomial in the variables t_1, ..., t_{n+1} (with λ padded with zeros so that it has n+1 parts) can be defined by Weyl's formula,

S_λ(t_1, ..., t_{n+1}) := det( t_i^{λ_j + n+1-j} )_{1≤i,j≤n+1} / det( t_i^{n+1-j} )_{1≤i,j≤n+1}.   (9)

Schur polynomials are symmetric functions in their arguments. In addition, we now list other well-known properties that will be used in this article (for more details, interested readers are referred to [18]).
(iv) As λ runs over all the partitions of size |λ|, the corresponding Schur polynomials provide a basis for the space of symmetric homogeneous polynomials of degree |λ|. When considering all partitions, Schur polynomials provide a basis of symmetric functions.
As an alternative to Weyl's formula (9), Schur polynomials can also be expressed in terms of monomials as follows:

S_λ(t_1, ..., t_{n+1}) = Σ_μ K_{λ,μ} t_1^{μ_1} ⋯ t_{n+1}^{μ_{n+1}},

where μ = (μ_1, ..., μ_{n+1}) is a weak composition of |λ| and the K_{λ,μ} are non-negative integers depending on the integer partitions λ and μ. The numbers K_{λ,μ} are called Kostka numbers and can be combinatorially calculated by counting the semistandard Young tableaux (SSYT) that can be constructed with shape λ and weight μ. An important simple property is that the Kostka numbers K_{λ,μ} do not depend on the order of the entries of μ (cf. [21]). Apart from the general properties mentioned above, there are some specific facts involving Schur polynomials that will be needed in this paper, which can be deduced by taking into account the way that SSYT are defined. On the other hand, for a general partition λ and k = k_1 + ⋯ + k_{n+1}, the one variable polynomial (∂_{t_1}^{k_1} ⋯ ∂_{t_{n+1}}^{k_{n+1}} S_λ)(t, ..., t) is either 0, or its degree is |λ| - k.
Finally, let us observe that for any rectangular partition λ := (k, ..., k) with l(λ) = j,

S_λ(t_1, ..., t_j) = (t_1 ⋯ t_j)^k.

The simplicity of this Schur polynomial, which contains a single monomial term, lies in the fact that the number of rows of the corresponding partition coincides with the number of variables of the polynomial. In this case, it is easy to see that there is only one SSYT available, namely the one that satisfies (12). Let us also observe that Algorithm 5.2 of [6] evaluates Schur polynomials at positive parameters to HRA.
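A direct (non-HRA) way to evaluate Schur polynomials is to implement Weyl's quotient of determinants in plain floating point; the sketch below is only illustrative, since the HRA evaluation requires the subtraction-free Algorithm 5.2 of [6]. It also checks the single-monomial property for rectangular partitions.

```python
import numpy as np

def schur(lam, t):
    """Evaluate S_lambda(t_1, ..., t_m) via the bialternant formula (9):
    det(t_i^(lam_j + m - j)) / det(t_i^(m - j)), with lam padded by zeros
    to m parts.  Plain floating point, so only an illustration; the HRA
    evaluation is Algorithm 5.2 of [6]."""
    t = np.asarray(t, dtype=float)
    m = len(t)
    lam = list(lam) + [0] * (m - len(lam))
    num = np.array([[ti ** (lam[j] + m - 1 - j) for j in range(m)] for ti in t])
    den = np.array([[ti ** (m - 1 - j) for j in range(m)] for ti in t])
    return np.linalg.det(num) / np.linalg.det(den)

print(schur([1], [2.0, 3.0]))       # S_(1)(t1, t2) = t1 + t2 = 5 (up to rounding)
print(schur([2, 2], [2.0, 3.0]))    # rectangular partition: (t1 t2)^2 = 36
```

Evaluating at negated arguments also illustrates the sign rule S_λ(-t_1, ..., -t_m) = (-1)^{|λ|} S_λ(t_1, ..., t_m), used later for negative data.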

The Factorization of Collocation Matrices of Polynomial Bases in Terms of Schur Functions
Let (p_0, ..., p_n) be a basis of the space P_n(I) of polynomials of degree not greater than n defined on I, described by

(p_0, ..., p_n)^T = A (m_0, ..., m_n)^T,

with m_i(t) := t^i, i = 0, ..., n, and a nonsingular change of basis matrix A := (a_{i,j})_{1≤i,j≤n+1}. For a given sequence of parameters {t_i}_{i=1}^{n+1} on I, the following result provides the multipliers and the diagonal pivots of the Neville elimination of the collocation matrix M_p := (p_{j-1}(t_i))_{1≤i,j≤n+1} in terms of Schur polynomials and minors of A.
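Before stating the result, note that the collocation matrix factors through the Vandermonde matrix: since p_{j-1}(t) = Σ_k a_{j,k+1} t^k, we have M_p = V A^T with V[i, k] = t_i^k. The sketch below (our own illustration) builds M_p this way.

```python
import numpy as np

def collocation(A, t):
    """Collocation matrix M_p = (p_{j-1}(t_i)) of the polynomial basis given
    by (p_0, ..., p_n)^T = A (m_0, ..., m_n)^T with m_i(t) = t^i.
    Since p_{j-1}(t) = sum_k a_{j,k+1} t^k, this is M_p = V A^T, where
    V is the Vandermonde matrix V[i, k] = t_i^k."""
    A = np.asarray(A, dtype=float)
    V = np.vander(np.asarray(t, dtype=float), A.shape[1], increasing=True)
    return V @ A.T

# For the monomial basis (A = I) this reproduces the Vandermonde matrix:
print(collocation(np.eye(3), [1.0, 2.0, 3.0]))
```

This factorization is exactly why the minors of M_p expand into sums of Schur polynomials weighted by minors of A, as the proof below shows.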
Proof Using (4), the computation of the minors of M_p ∈ R^{(n+1)×(n+1)} with consecutive rows and initial consecutive columns will allow us to determine the corresponding pivots p_{i,j} and multipliers m_{i,j}, 1 ≤ j ≤ i ≤ n+1. Taking into account properties of determinants, we can write the minors as sums of determinants, where the sums are taken over all j-tuples (l_1, ..., l_j) with l_i = 1, ..., n+1, for i = 1, ..., j.
Let us notice that any j-tuple (l_1, ..., l_j) with a repeated integer will not contribute to the sum, since the corresponding determinant vanishes, as can be seen in (24). For this reason, we will only consider tuples (l_1, ..., l_j) with different entries. Then, the sum (24) can be organized by choosing permutations of the entries so that they are increasingly ordered. Taking into account these considerations, we obtain (25), where S_j denotes the group of permutations of j elements. The function sgn(σ) is the totally antisymmetric irreducible representation of the permutation group S_j, which is 1-dimensional and equals 1 if σ is an even permutation and -1 if σ is an odd permutation.
Recall that even (odd) permutations are the ones which can be written as a product of an even (odd) number of transpositions. Now, from the definition (9) of the Schur polynomials, of Q_{i,j} in (22), and from (25), we can write the minors in the form (26), from which we derive the expressions for the pivots p_{i,j} of the Neville elimination. Consequently, for i = j, identities (19) are deduced. Finally, using (5) and (26), the multipliers m_{i,j}, 1 ≤ j < i ≤ n+1, can be written as in (20). Now, let us derive identities (21) for m̃_{i,j}. Again, using properties of determinants, the minors of M_p^T with initial consecutive columns and consecutive rows can be written as sums over all j-tuples (l_1, ..., l_j) with l_r = 1, ..., n+1, r = 1, ..., j, obtaining (28). Now, taking into account (28), the definition (9) of the Schur polynomials and the definition (23), we derive (29). Finally, using (29) and (5), the multipliers m̃_{i,j}, 1 ≤ j < i ≤ n+1, can be written as in (21).
As a consequence of Theorem 2, the decomposition (6) of any collocation matrix of a polynomial TP basis on I can be expressed in terms of Schur polynomials.
Since the Schur polynomials at positive parameters are positive, if I ⊆ (0, ∞) and the minors providing m_{i,j} and m̃_{i,j} are strictly positive, we can clearly guarantee m_{i,j} > 0, m̃_{i,j} > 0 and, by Theorem 1, M_p ∈ R^{(n+1)×(n+1)} is STP.

On the Total Positivity of a Relevant Class of Polynomials
Many relevant bases of the polynomial space P_n(I) are formed by polynomials q_0, ..., q_n such that deg q_i = i, i = 0, ..., n, and then

q_i(t) = Σ_{j=0}^{i} b_{i+1,j+1} t^j, i = 0, ..., n.   (31)

The change of basis matrix B = (b_{i,j})_{1≤i,j≤n+1} such that

(q_0, ..., q_n)^T = B (m_0, ..., m_n)^T   (32)

is then nonsingular lower triangular. Taking into account the triangular structure of the matrix in (32), the following result restates Theorem 2 for bases (31), providing the pivots and the multipliers of the Neville elimination of their collocation matrices

M_q := (q_{j-1}(t_i))_{1≤i,j≤n+1}   (34)

at nodes {t_i}_{i=1}^{n+1}.

Theorem 3 Let (q_0, ..., q_n) be a basis of P_n(I) such that the matrix B satisfying (32) is nonsingular lower triangular. Given {t_i}_{i=1}^{n+1} on I, the diagonal pivots (4) and the multipliers (5) of the Neville elimination of the matrix M_q ∈ R^{(n+1)×(n+1)} in (34) are given by

p_{i,i} = b_{i,i} ∏_{k=1}^{i-1} (t_i - t_k), i = 1, ..., n+1,   (35)

m_{i,j} = ∏_{k=i-j+1}^{i-1} (t_i - t_k) / ∏_{k=i-j}^{i-2} (t_{i-1} - t_k), 1 ≤ j < i ≤ n+1,   (36)

and the multipliers m̃_{i,j} of the Neville elimination of M_q^T are given by (37). The sum in (38) is taken over all increasing sequences l_1 < ⋯ < l_j.

Proof Since B is nonsingular lower triangular, the linear combinations Q_{i,j} in (22) contain a single term, namely, the one corresponding to the sequence l_r = r for r = 1, ..., j. This sequence corresponds to the Schur polynomial of the zero partition, and the pivots and multipliers given in (19) and (20) reduce to the expressions (35) and (36), respectively.
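The diagonal-pivot formula of Theorem 3 can be checked numerically through the minor quotients (4); the short script below (our own verification, with an arbitrary nonsingular lower triangular B) confirms that p_{i,i} = b_{i,i} ∏_{k<i} (t_i - t_k).

```python
import numpy as np

# Numerical check of the diagonal pivots of Theorem 3 via the minor
# quotients (4): p_{i,i} = det M[1..i|1..i] / det M[1..i-1|1..i-1].
rng = np.random.default_rng(0)
n1 = 4
B = np.tril(rng.uniform(0.5, 2.0, (n1, n1)))   # nonsingular lower triangular
t = np.array([0.5, 1.0, 1.7, 2.4])             # increasing nodes
M = np.vander(t, n1, increasing=True) @ B.T    # collocation matrix of (q_0,...,q_3)

for i in range(1, n1 + 1):
    num = np.linalg.det(M[:i, :i])
    den = np.linalg.det(M[:i - 1, :i - 1]) if i > 1 else 1.0
    formula = B[i - 1, i - 1] * np.prod(t[i - 1] - t[:i - 1])
    assert np.isclose(num / den, formula)      # pivots match the formula
print("diagonal pivots of Theorem 3 verified")
```

The check works because every leading principal minor of M_q factors into a principal minor of B times a Vandermonde determinant, so the quotient collapses to b_{i,i} times a product of node differences.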
The bidiagonal factorization (6), described by (35), (36) and (37), is now illustrated with a simple example: the collocation matrix of a polynomial basis (q_0, q_1, q_2) at a sequence of parameters {t_1, t_2, t_3} can be decomposed as the product of two lower bidiagonal matrices, a diagonal matrix and two upper bidiagonal matrices. Taking into account Theorem 3, the following result provides a useful characterization of the polynomial bases (31) which are STP on intervals (τ, ∞), τ ≥ 0, in terms of the sign of the diagonal entries of the nonsingular lower triangular change of basis matrix B in (32).
The analysis of the sign of the multipliers m̃_{i,j} needs a closer look. As in (38), let us consider the linear combination Q_{i,j} of Schur polynomials; then, the highest degree among the Schur polynomials in (40) is achieved in the term with maximum sum l_1 + ⋯ + l_j, as long as det B[i-j+1, ..., i | l_1, ..., l_j] ≠ 0. Since B is a nonsingular lower triangular matrix, the maximum of this sum is attained for (l_1, ..., l_j) = (i-j+1, ..., i), and the Schur polynomial in (40) with the highest degree is S_{(i-j,...,i-j)}, with deg S_{(i-j,...,i-j)} = j(i-j). Moreover, by inspection of (40), we see that the remaining part of Q_{i,j} is a linear combination of Schur polynomials S_λ with a degree lower than j(i-j), labelled by partitions λ = (λ_1, ..., λ_j) with λ_1 ≤ i-j. Then, by the use of (13), we deduce the corresponding degree bounds for the derivatives, for any r = 1, ..., j. These considerations allow us to ensure that the polynomials

P_{(i,j,k_1,...,k_j)}(t) := (∂_{t_1}^{k_1} ⋯ ∂_{t_j}^{k_j} Q_{i,j})(t, ..., t)

have positive leading coefficient. For this reason, for each P_{(i,j,k_1,...,k_j)}, there exists a largest root r_{i,j,k_1,...,k_j} such that P_{(i,j,k_1,...,k_j)}(t) > 0 and P_{(i,j,0,...,0)}(t) > 0 for t > r_{i,j,k_1,...,k_j}. Now, we can define τ as the maximum of all these roots. The multivariate polynomial Q_{i,j} can be written by its Taylor expansion around (τ, ..., τ), where t_r = τ + δ_r, r = 1, ..., j, and, by (37), the corresponding multiplier m̃_{i,j} can be written as a quotient of such expansions; by (46), we deduce that m̃_{i,j} > 0.
Finally, taking into account that for any sequence τ < t_1 < ⋯ < t_{n+1} the multipliers m_{i,j}, m̃_{i,j} and the diagonal pivots of the Neville elimination of the collocation matrix M_q are positive, we deduce, by Theorem 1, that M_q is an STP matrix and conclude that the basis (q_0, ..., q_n) is STP on the interval (τ, ∞). Now, using a reasoning similar to that of Theorem 4, the following result characterizes the total positivity of collocation matrices M_q ∈ R^{(n+1)×(n+1)} corresponding to negative parameters.
Theorem 5 Let (q_0, ..., q_n) be a polynomial basis such that the matrix B = (b_{i,j})_{1≤i,j≤n+1} satisfying (32) is nonsingular lower triangular. Then, there exists τ ≤ 0 such that the collocation matrix (34) at any decreasing sequence 0 ≥ τ > t_1 > ⋯ > t_{n+1} is STP if and only if sign(b_{i,i}) = (-1)^{i-1}, i = 1, ..., n+1.

Proof For the direct implication, consider 0 ≥ τ > t_1 > ⋯ > t_{n+1} such that the collocation matrix (34) is STP. Taking into account that the diagonal pivots p_{i,i} of its Neville elimination are positive and satisfy identities (35), we deduce that sign(b_{i,i}) = (-1)^{i-1}, i = 1, ..., n+1.

Now, suppose that sign(b_{i,i}) = (-1)^{i-1}, i = 1, ..., n+1, and consider any sequence t_{n+1} < ⋯ < t_1 < 0. On the right-hand side of (35) there are i-1 negative factors t_i - t_k and so sign(p_{i,i}) = sign(b_{i,i}) (-1)^{i-1} > 0. Moreover, using (36), we deduce that the multipliers m_{i,j} are positive, since they can be written as in (36) with the same number of negative factors t_i - t_k in the numerator as in the denominator. Now, define

R_{i,j}(t_1, ..., t_j) := sign(c_{i,j}) (-1)^{j(i-j)} Q_{i,j}(t_1, ..., t_j),

where Q_{i,j}(t_1, ..., t_j) is defined in (40) and c_{i,j} denotes its leading coefficient. The multivariate polynomial R_{i,j}(t_1, ..., t_j) is defined in such a way that its leading term is positive. Moreover, the sign of the leading term of any k-derivative of R_{i,j} is (-1)^k. Consequently, the polynomials P_{(i,j,k_1,...,k_j)}(t), defined as in the proof of Theorem 4 but through the derivatives of R_{i,j}, are positive for sufficiently negative values of t. For this reason, there exists a smallest root r_{i,j,k_1,...,k_j} such that P_{(i,j,k_1,...,k_j)}(t) > 0 and P_{(i,j,0,...,0)}(t) > 0 for t < r_{i,j,k_1,...,k_j}. Now, we can define τ as the minimum of all these roots. Using (50), R_{i,j} can be written by its Taylor expansion around (τ, ..., τ) as follows:

R_{i,j}(t_1, ..., t_j) = P_{(i,j,0,...,0)}(τ) + Σ_{k_1+⋯+k_j≥1} ( P_{(i,j,k_1,...,k_j)}(τ) / (k_1! ⋯ k_j!) ) δ_1^{k_1} ⋯ δ_j^{k_j},

where t_r = τ + δ_r, r = 1, ..., j. Given t_j < ⋯ < t_1 < τ ≤ 0, we have δ_r = t_r - τ < 0 for r = 1, ..., j, and then

R_{i,j}(t_1, ..., t_j) > 0.   (53)

Finally, since

sign( c_{i,j} c_{i-1,j-1} c_{i-2,j-1} c_{i-1,j} ) (-1)^{j(i-j)-(j-1)(i-j)+(j-1)(i-j-1)-j(i-j-1)} = 1,

by (48), the multiplier m̃_{i,j} can be expressed as a quotient of the polynomials R_{i,j}. Thus, by (53), m̃_{i,j} > 0 for any t_{n+1} < ⋯ < t_1 < τ.

Let us observe that, using Theorems 4 and 5, a simple inspection of the sign of the leading coefficients of the polynomial bases (31) provides a criterion to establish their total positivity on intervals contained in (0, ∞) and (-∞, 0), respectively. It turns out that these bases may be TP on wider intervals; in fact, we give some examples of this behavior in Sects. 5.3 and 5.4. The location of such intervals requires a further account of the basis and does not fall within the scope of this paper. Nevertheless, using Theorem 3, a deeper study of the sign of the pivots and multipliers can be carried out to deduce the range where a specific polynomial basis is TP.

Finally, let us notice that, under the conditions provided by Theorem 4 or by Theorem 5, any STP collocation matrix M_q ∈ R^{(n+1)×(n+1)} in (34) admits a decomposition (6) such that

M_q = F_n F_{n-1} ⋯ F_1 D G_1 ⋯ G_{n-1} G_n,

where F_i (respectively, G_i), i = 1, ..., n, are the lower (respectively, upper) triangular bidiagonal matrices described in (7) and D = diag(p_{1,1}, p_{2,2}, ..., p_{n+1,n+1}). The diagonal entries p_{i,i}, 1 ≤ i ≤ n+1, can be obtained from (35). The off-diagonal entries m_{i,j} and m̃_{i,j}, 1 ≤ j < i ≤ n+1, can be obtained from (36) and (37), respectively. Let us recall that, by Theorem 7.2 of [6], Algorithm 5.2 of [6] computes the values of Schur functions at positive data to HRA. Moreover, by (10),

S_λ(-t_1, ..., -t_{n+1}) = (-1)^{|λ|} S_λ(t_1, ..., t_{n+1}).

Therefore, Algorithm 5.2 of [6] can also be used to compute the values of Schur functions at negative data to HRA. In addition, let us observe that, by Theorems 4 and 5, the pivots p_{i,i} and multipliers m_{i,j} of the Neville elimination of any STP collocation matrix M_q ∈ R^{(n+1)×(n+1)} in (34) can be computed to HRA. Moreover, the multipliers m̃_{i,j} of the Neville elimination of M_q can be computed to HRA if the change of basis matrix B satisfying (32) is TP. In the case that the matrix B is not TP, there may appear subtractions of Schur functions. Although, strictly speaking, the values of the Schur functions cannot be considered as initial (exact) data, since they are computed to HRA they still lead to an excellent accuracy, as we shall show in Sect. 6.

Total Positivity and Factorizations of Well-Known Polynomial Bases
Now, we shall use the results in previous sections to analyze the total positivity of relevant polynomial bases and provide the decomposition (6) of their collocation matrices.

Recursive Polynomial Bases
Given values b_1, ..., b_{n+1}, with b_i ≠ 0, i = 1, ..., n+1, let us consider the polynomial basis (p_0, ..., p_n) defined by (56). Clearly, the change of basis matrix B such that (p_0, ..., p_n)^T = B (m_0, ..., m_n)^T, with m_i := t^i, i = 0, ..., n, is a nonsingular lower triangular matrix of the form (57). Let us note that the only non-zero minors of B are those described in (58). Taking into account (58), the decomposition (6) of the collocation matrix of (p_0, ..., p_n) at {t_i}_{i=1}^{n+1} is given by Theorem 3. Let us notice that, since the matrix (57) is nonsingular lower triangular, the total positivity criteria depending on the diagonal entries b_i ≠ 0, i = 1, ..., n+1, given by Theorems 4 and 5, apply. Section 6 will show accurate computations when solving algebraic problems with collocation matrices of recursive polynomial bases whose leading coefficients satisfy either the sign condition of Theorem 4 or that of Theorem 5. Let us observe that the collocation matrices of the bases (56) can be considered as particular cases of the Cauchy-polynomial Vandermonde matrices defined in [23], for the case l = 0. In Theorem 1 of [23], the authors give explicit formulas, in terms of Schur functions, for the determinants used to obtain the pivots and the multipliers of the Neville elimination of A. It can be checked that, from these expressions with l = 0, one can recover the formulas for the pivots and multipliers obtained in this paper.

Hermite Polynomials
The physicist's Hermite basis of P_n is the system of Hermite polynomials (H_0, ..., H_n) defined by the recurrence (61), namely H_0(t) := 1, H_1(t) := 2t and H_{k+1}(t) := 2t H_k(t) - 2k H_{k-1}(t) for k ≥ 1. The change of basis matrix A between the Hermite basis (61) and the monomial basis, such that (H_0, ..., H_n)^T = A (m_0, ..., m_n)^T, is the nonsingular lower triangular matrix A = (a_{i,j})_{1≤i,j≤n+1} whose entries a_{i,j} are the monomial coefficients of the Hermite polynomials. Let us observe that, from Theorem 3, we can obtain the decomposition (6) of the collocation matrices of the Hermite polynomial basis (H_0, ..., H_n). The diagonal entries satisfy a_{i,i} = 2^{i-1} > 0, i = 1, 2, ..., n+1. Therefore, by Theorem 4, there exists a lower bound τ ≥ 0 such that the Hermite polynomial basis (H_0, ..., H_n) is STP on (τ, ∞). Now, using well-known properties of the roots of Hermite polynomials, we shall deduce a lower bound for τ, which is an increasing function of the dimension of the basis.
Let us recall that the roots of the Hermite polynomials are simple and real. Then we can write t_{1,n} < ⋯ < t_{n,n}, where t_{i,n}, i = 1, ..., n, are the n roots of the n-degree Hermite polynomial H_n. In [22], an upper bound (63) for the largest root t_{n,n} is provided. On the other hand, since Hermite polynomials satisfy H_n'(t) = 2n H_{n-1}(t), the roots of H_n and H_{n-1} interlace, that is,

t_{1,n} < t_{1,n-1} < t_{2,n} < ⋯ < t_{n-1,n-1} < t_{n,n}.   (64)

Taking into account (63) and (64), we can bound the largest root of every H_k, k ≤ n, and, by (37) and the resulting inequality (65), we deduce that m̃_{n+1,1} is negative for parameters below this bound. Let us illustrate the bidiagonal decomposition (6) of collocation matrices of Hermite bases with a simple example for (H_0, H_1, H_2).
It can be easily checked that for √ 2/2 < t 1 < t 2 < t 3 , the collocation matrix is STP.Section 6 will show accurate computations when solving algebraic problems with collocation matrices of physicist's Hermite polynomial bases.
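This claim can be tested by brute force (our own sketch, using the physicist's Hermite polynomials from numpy): for nodes greater than √2/2, all minors of the 3 × 3 collocation matrix turn out to be positive.

```python
import numpy as np
from itertools import combinations
from numpy.polynomial import hermite as H

t = np.array([0.8, 1.1, 1.5])           # nodes with sqrt(2)/2 < t1 < t2 < t3
# M[i, j] = H_j(t_i): physicist's Hermite collocation matrix
M = np.column_stack([H.hermval(t, np.eye(3)[j]) for j in range(3)])

minors = [np.linalg.det(M[np.ix_(r, c)])
          for k in (1, 2, 3)
          for r in combinations(range(3), k)
          for c in combinations(range(3), k)]
print(min(minors) > 0)                  # True: every minor is positive, M is STP
```

Such exhaustive minor checks are only feasible for tiny dimensions; for larger matrices, Theorem 1 reduces strict total positivity to the positivity of the pivots and multipliers of the Neville elimination.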

Bessel Polynomials
The Bessel basis of P_n is the polynomial system (B_0, ..., B_n) defined by (66). The change of basis matrix A between the Bessel basis (66) and the monomial basis, with (B_0, ..., B_n)^T = A (m_0, ..., m_n)^T, is the nonsingular lower triangular matrix A = (a_{i,j})_{1≤i,j≤n+1} given in [3]. In [3], the total positivity of A is proved and the pivots and multipliers of its Neville elimination are provided. As a consequence, accurate computations for collocation matrices of (B_0, ..., B_n) at parameters 0 < t_0 < t_1 < ⋯ < t_n are derived by considering that they are the product of a Vandermonde matrix and the matrix A. The bidiagonal decomposition of Vandermonde matrices can be found in [1,5]. Now, using Theorem 3, we can obtain the explicit bidiagonal decomposition (6) of the Bessel collocation matrices. Let us illustrate it with a simple example for (B_0, B_1, B_2).
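As a hedged check (our own sketch, assuming that (66) refers to the classical Krall-Frink Bessel polynomials B_n(t) = Σ_k (n+k)! / ((n-k)! k! 2^k) t^k), the change of basis matrix A can be assembled explicitly and its total positivity, proved in [3], confirmed by brute-force inspection of all minors for a small dimension.

```python
import numpy as np
from math import factorial
from itertools import combinations

# Assumed definition (Krall-Frink): B_n(t) = sum_k (n+k)!/((n-k)! k! 2^k) t^k
n1 = 4
A = np.zeros((n1, n1))
for n in range(n1):
    for k in range(n + 1):
        A[n, k] = factorial(n + k) / (factorial(n - k) * factorial(k) * 2 ** k)
print(A[3])                                   # coefficients of B_3: 1, 6, 15, 15

minors = [np.linalg.det(A[np.ix_(r, c)])
          for s in range(1, n1 + 1)
          for r in combinations(range(n1), s)
          for c in combinations(range(n1), s)]
print(min(minors) >= -1e-12)                  # True: all minors nonnegative, A is TP
```

Note that A, being lower triangular, has many vanishing minors, so it is TP but not STP; the strict total positivity only appears in the collocation matrices at positive increasing nodes.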

Laguerre Polynomials
Given α > -1, the generalized Laguerre basis is (L_0^{(α)}, ..., L_n^{(α)}), defined by (67). The change of basis matrix between the generalized Laguerre basis (67) and the monomial basis, with (L_0^{(α)}, ..., L_n^{(α)})^T = A (m_0, ..., m_n)^T, is the nonsingular lower triangular matrix whose entries are expressed in terms of falling factorials, where, for a real x and a positive integer k, x^{(k)} := x(x-1) ⋯ (x-k+1) and x^{(0)} := 1. In [2], computations to HRA for algebraic problems with the collocation matrices of (L_0^{(α)}, ..., L_n^{(α)}) are achieved. These computations are obtained through the bidiagonal decomposition of A and the well-known bidiagonal decomposition of the Vandermonde matrices. Now, using Theorem 3, we can obtain the explicit bidiagonal decomposition (6) of the Laguerre collocation matrices. Let us illustrate it with a simple example for (L_0^0, L_1^0, L_2^0).
It can be easily checked that, for t_3 < t_2 < t_1 < 2 - √2, the collocation matrix of the three dimensional Laguerre basis is STP. That means that, for this dimension, the results obtained in [2] for the total positivity of collocation matrices with parameters in (-∞, 0) can be extended to the larger interval (-∞, 2 - √2) and then, even to sequences of parameters t_3 < t_2 < t_1 with different signs.
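This extension can also be observed numerically (our own sketch, using the standard Laguerre polynomials with α = 0 from numpy): for decreasing nodes below 2 - √2, all minors of the 3 × 3 collocation matrix are positive.

```python
import numpy as np
from itertools import combinations
from numpy.polynomial import laguerre as L

t = np.array([0.5, 0.0, -0.5])          # t1 > t2 > t3, all below 2 - sqrt(2)
# M[i, j] = L_j(t_i): Laguerre (alpha = 0) collocation matrix at decreasing nodes
M = np.column_stack([L.lagval(t, np.eye(3)[j]) for j in range(3)])

minors = [np.linalg.det(M[np.ix_(r, c)])
          for k in (1, 2, 3)
          for r in combinations(range(3), k)
          for c in combinations(range(3), k)]
print(min(minors) > 0)                  # True: M is STP despite mixed-sign nodes
```

Note that the nodes here include both positive and negative values, illustrating the remark that the interval of total positivity extends beyond (-∞, 0).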

Jacobi Polynomials
Given α, β ∈ R, the basis of Jacobi polynomials of the space P_n of polynomials of degree less than or equal to n is (J_0^{(α,β)}, ..., J_n^{(α,β)}), defined by (68). The change of basis matrix between the Jacobi basis (68) and the basis (v_0, ..., v_n) defined in (69) is the lower triangular matrix A = (a_{i,j})_{1≤i,j≤n+1} given by (70). In [19], the total positivity on (1, ∞) of the Jacobi basis (J_0^{(α,β)}, ..., J_n^{(α,β)}) is deduced. In addition, the pivots and multipliers of the Neville elimination of the change of basis matrix A in (70) are provided. Using the bidiagonal decomposition of A, computations to HRA are achieved. Defining v_i(t) := (1 + 2t)^i, i = 0, ..., n, and using Theorem 3, we can write the bidiagonal decomposition (6) of the Jacobi collocation matrices. Let us illustrate it with a simple example for (J_0^{(1,2)}, J_1^{(1,2)}, J_2^{(1,2)}).
It can be easily checked that, for (√2 - 3)/7 < t_1 < t_2 < t_3, the collocation matrix is STP. Therefore, the total positivity of the Jacobi polynomial basis (J_0^{(1,2)}, J_1^{(1,2)}, J_2^{(1,2)}) holds on an interval larger than (1, ∞). Finally, let us observe that the bidiagonal decomposition (6) of the collocation matrices of Legendre, Gegenbauer, first and second kind Chebyshev and rational Jacobi polynomials can be derived by considering Theorem 3.

Numerical Experiments
To ease the understanding of the numerical experiments carried out, using Theorem 3, we provide the pseudocode of Algorithm 1 for computing the matrix form BD(M_q) (8) storing the bidiagonal decomposition (6) of the collocation matrix M_q (34). Let us observe that Algorithm 1 calls the Matlab function Schurpoly, available in [15], for the computation of Schur polynomials.
To test our algorithm, we have considered STP collocation matrices M q of recursive polynomial bases (56), as well as Hermite polynomial bases (61) with dimension n + 1 = 5, 10, 15.Moreover, using the bidiagonal decompositions B D(M q ) given by Algorithm 1 and the Matlab functions available in the software library TNTools in [14], we have solved fundamental problems in Numerical Linear Algebra involving the considered matrices.
In addition, for analyzing the accuracy of the obtained results, we have calculated the relative errors taking the solutions obtained in Mathematica with a 100-digit arithmetic as the exact solution of the considered algebraic problems.
We have also obtained in Mathematica the 2-norm condition number of all considered matrices. In Tables 1, 2 and 3, this conditioning is depicted. It can be easily observed that the conditioning drastically increases with the size of the matrices. Due to their ill-conditioning, standard routines do not obtain accurate solutions, since they can suffer from inaccurate cancellations. In contrast, the accurate algorithms that we have developed in this paper exploit the structure of the considered matrices obtaining, as we will see, accurate numerical results.

HRA computation of the singular values and eigenvalues. Given B = BD(A) to HRA, the Matlab functions TNSingularValues(B) and TNEigenValues(B) are available in [16] and compute the singular values and eigenvalues, respectively, of a matrix A to HRA. Their computational cost is O(n^3) (see [13]).
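The growth of the conditioning can be reproduced with a small sketch of our own (nodes and dimensions chosen arbitrarily for illustration; the exact figures of Tables 1-3 come from the matrices of the experiments):

```python
import numpy as np
from numpy.polynomial import hermite as H

# 2-norm condition numbers of Hermite collocation matrices at equispaced
# nodes in [1, 2]: the conditioning grows drastically with the dimension.
conds = []
for n1 in (5, 10, 15):
    t = np.linspace(1.0, 2.0, n1)
    M = np.column_stack([H.hermval(t, np.eye(n1)[j]) for j in range(n1)])
    conds.append(np.linalg.cond(M))
    print(n1, conds[-1])
```

This is why the relative errors of standard commands such as eig, svd or inv degrade with the dimension, while the bidiagonal approach does not.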
Algorithm 2 uses BD(M_q) provided by Algorithm 1 to compute the eigenvalues and singular values of a collocation matrix M_q.
For all considered matrices, we have compared the smallest eigenvalues and singular values obtained using Algorithm 2 with those computed by the Matlab commands eig and svd. The values provided by Mathematica using 100-digit arithmetic have been taken as the exact solutions of the algebraic problem, and the relative error e of each approximation has been computed as e := |a − ã|/|a|, where a denotes the smallest eigenvalue (or singular value) computed in Mathematica and ã the corresponding value computed in Matlab.
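As a minimal sketch of this error measure (the function name is ours), the relative error compares an approximation against a high-precision reference:

```python
def relative_error(exact, approx):
    """e := |a - a_tilde| / |a|, the measure used for the smallest
    eigenvalue and singular value (exact value from 100-digit arithmetic)."""
    return abs(exact - approx) / abs(exact)

# An approximation of a tiny value carrying four correct digits
print(relative_error(1.0e-12, 1.0001e-12))  # approximately 1e-4
```

Because the quantities of interest are the smallest eigenvalues and singular values, even a small absolute error in an ill-conditioned computation translates into a large relative error.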
Looking at the results (see Tables 4 and 5), we notice that our approach is able to compute the smallest eigenvalue and singular value accurately, regardless of the ill-conditioning of the matrices. In contrast, the Matlab commands eig and svd return results that become inaccurate as the dimension of the collocation matrices increases.

HRA computation of the inverse matrix. Given B = BD(A) to HRA, the Matlab function TNInverseExpand(B), available in [16], returns A^{-1} to HRA, requiring O(n^2) arithmetic operations (see [20]).
Algorithm 3 uses BD(M_q) provided by Algorithm 1 to compute the inverse of a collocation matrix M_q. For all considered matrices, we have compared the inverses obtained using Algorithm 3 and the Matlab command inv. To assess the accuracy of these two methods, we have compared both Matlab approximations with the inverse matrix A^{-1} computed by Mathematica using 100-digit arithmetic, computing the corresponding relative error as e = ‖Ã^{-1} − A^{-1}‖/‖A^{-1}‖, where Ã^{-1} denotes the Matlab approximation.
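A building block behind inversion from BD(A) is that each unit bidiagonal factor has an explicit, subtraction-free inverse. The following sketch (our illustration only, not the TNInverseExpand algorithm of [20]) computes the inverse of a single unit lower bidiagonal factor from its subdiagonal entries:

```python
from fractions import Fraction

def inv_unit_lower_bidiagonal(l):
    """Inverse of the unit lower bidiagonal matrix with subdiagonal l[0..n-2].
    (L^{-1})_{ij} = (-1)^{i-j} * l[j] * ... * l[i-1] for i >= j,
    built column by column with running products: O(n^2) entries in total."""
    n = len(l) + 1
    inv = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        inv[j][j] = Fraction(1)
        prod = Fraction(1)
        for i in range(j + 1, n):
            prod *= -l[i - 1]
            inv[i][j] = prod
    return inv

# Subdiagonal (2, 3): L = [[1,0,0],[2,1,0],[0,3,1]]
print(inv_unit_lower_bidiagonal([2, 3]))
```

Inverting A itself then amounts to combining the inverses of the 2n − 1 bidiagonal factors of (6) in reverse order.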
The achieved relative errors are shown in Table 6. We observe that our algorithm provides very accurate results, in contrast to the inaccurate results obtained when using the Matlab command inv.

HRA computation of the solution of the linear system Mc = d. Given B = BD(A) to HRA and a vector d with alternating signs, the Matlab function TNSolve(B, d), available in [16], returns the solution of the linear system Ac = d to HRA. It requires O(n^2) arithmetic operations (see [16]).
Algorithm 4 uses BD(M_q) provided by Algorithm 1 to compute the solution of the linear system M_q c = d, where d = ((−1)^{i+1} d_i)_{1≤i≤n+1} and d_i, i = 1, . . ., n + 1, are random nonnegative integer values.
For all considered collocation matrices, we have compared the solution obtained using Algorithm 4 with that of the Matlab command \. The solution provided by Mathematica using 100-digit arithmetic has been taken as the exact solution c. Then, we have computed in Mathematica the relative error of the Matlab approximation c̃ as e = ‖c̃ − c‖/‖c‖. Table 7 shows the relative errors. We clearly see that, regardless of the dimension of the problem, the proposed algorithm preserves accuracy, as opposed to the results obtained with the Matlab command \.
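The O(n^2) cost of such a solver is explained by the fact that each of the 2n − 1 bidiagonal factors in (6) can be solved against in O(n) operations. A sketch of the two elementary solves involved (our illustration only, not the TNSolve code of [16]):

```python
def solve_unit_lower_bidiagonal(l, d):
    """Solve L x = d, L unit lower bidiagonal with subdiagonal l; O(n)
    forward substitution with one multiplication per step."""
    x = list(d)
    for i in range(1, len(d)):
        x[i] = d[i] - l[i - 1] * x[i - 1]
    return x

def solve_unit_upper_bidiagonal(u, d):
    """Solve U x = d, U unit upper bidiagonal with superdiagonal u; O(n)
    back substitution."""
    n = len(d)
    x = list(d)
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - u[i] * x[i + 1]
    return x
```

Chaining these solves through all bidiagonal factors, plus one O(n) diagonal solve, gives the overall O(n^2) operation count; the alternating signs in d ensure that no cancellation occurs along the way for a totally positive factorization.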

Conclusions and Final Remarks
The field of symmetric functions brings new tools to tackle known algebraic problems related to collocation matrices. We expect that further studies in this line of research will involve representations of the permutation groups, partitions and other bases of symmetric functions, that is, the relevant concepts that arise whenever an action of the permutation group can be defined on a given setting.
Using the proposed Schur computation of the pivots and multipliers of the Neville elimination, the bidiagonal factorization (6) of polynomial collocation matrices can be obtained accurately. The efficiency of this procedure depends on the number of Schur polynomials involved and on the computational cost of their evaluation. For some bases, such as those in (56), the number of nonzero minors decreases significantly, resulting in more efficient calculations.
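As a reference point for that evaluation cost, the following sketch evaluates a Schur polynomial directly through Jacobi's bialternant formula in exact arithmetic (a naive factorial-cost determinant, suitable only for very small n; the function Schurpoly of [15] should be used in practice):

```python
from fractions import Fraction
from itertools import permutations

def det(A):
    """Leibniz-formula determinant; fine for the tiny matrices used here."""
    n = len(A)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):           # sign via inversion count
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= A[i][perm[i]]
        total += sign * prod
    return total

def schur(lam, x):
    """Jacobi's bialternant: s_lambda(x) = det(x_j^{lam_i+n-i}) / det(x_j^{n-i}),
    a quotient of two alternating determinants."""
    n = len(x)
    lam = list(lam) + [0] * (n - len(lam))   # pad the partition with zeros
    num = [[Fraction(x[j]) ** (lam[i] + n - 1 - i) for j in range(n)] for i in range(n)]
    den = [[Fraction(x[j]) ** (n - 1 - i) for j in range(n)] for i in range(n)]
    return det(num) / det(den)

print(schur((1,), [2, 3]))     # s_(1)(x1, x2) = x1 + x2 = 5
print(schur((2, 1), [1, 2, 3]))
```

The denominator is the Vandermonde determinant, so for pairwise distinct positive nodes both determinants are nonzero and the quotient is well defined, which is what makes the collocation setting natural for Schur polynomials.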

Table 1
Condition number of collocation matrices of recursive polynomial bases (56) at t