1 Introduction

Schur polynomials are homogeneous symmetric polynomials with integer coefficients that arise in many different contexts. They are indexed by partitions and generalize the class of elementary symmetric and complete homogeneous symmetric polynomials. In fact, the degree k Schur polynomials in j variables form a linear basis for the space of homogeneous degree k symmetric polynomials in j variables. When defined by Jacobi’s bi-alternant formula, Schur polynomials are expressed as a quotient of alternating polynomials, i.e. polynomials that change sign under any transposition of the variables.

Schur polynomials have been classically studied in Combinatorics and Algebra. They play a relevant role in the study of symmetric functions, in Representation theory and in Enumerative combinatorics (see [21] and the references therein). In recent years, they have also been used in Computer Science, for Quantum computation [11] and in Geometric complexity theory [12].

A relevant topic in Numerical Linear Algebra is the design and analysis of procedures to get accurate solutions of algebraic problems for totally positive matrices, that is, matrices whose minors are all nonnegative. In particular, many fundamental problems in interpolation and approximation require linear algebra computations related to totally positive collocation matrices. For example, these matrices arise when imposing Lagrange interpolation conditions on a given basis of a vector space of functions, at sequences of parameters in the domain.

Let us note that many problems related to interpolation, numerical quadrature or least squares approximation can be formulated in terms of collocation matrices of a given basis. For example, interesting problems motivated by the use of the moving least squares method applied in image analysis are addressed in [17]. On the other hand, let us recall that in the interactive design of parametric curves and surfaces, shape preserving properties are closely related to the total positivity of the collocation matrices of the considered bases.

Unfortunately, collocation matrices may become ill-conditioned as their dimensions increase and then standard routines, implementing the best traditional numerical methods, do not obtain accurate solutions when computing eigenvalues, singular values or inverse matrices. For this reason, it is of great interest to achieve computations to high relative accuracy (HRA computations), whose relative errors are of the order of the machine precision. In recent years, HRA computations for collocation matrices of different polynomial bases have been achieved (see [2, 3, 5, 7, 19]).

The total positivity of a given matrix can be characterized through the sign of the pivots and multipliers of its Neville elimination. The HRA computation of these pivots and multipliers provides a bidiagonal factorization for totally positive matrices, leading to HRA algorithms for the resolution of the aforementioned algebraic problems (cf. [8,9,10]). As shown in Sect. 2, the pivots and multipliers of the Neville elimination can be expressed as quotients of minors with consecutive columns of the considered matrix. For collocation matrices of a given basis, these minors are alternating functions of the domain parameters and can therefore be expressed in terms of a basis of symmetric functions.

The previous observation is at the core of this paper and we shall exploit it when considering collocation matrices of polynomial bases. In this case, the preferred basis of symmetric functions is formed by Schur polynomials, in terms of which the pivots and the multipliers are naturally expressed.

HRA computations have been achieved for some polynomial collocation matrices by considering the bidiagonal factorization of Vandermonde matrices and that of the change of basis matrix between the considered basis and the monomial basis (see [2, 3, 19]). In contrast, in this paper we deduce the explicit expression of the bidiagonal factorization of any polynomial collocation matrix. Furthermore, the obtained formulae for the pivots and multipliers in terms of Schur polynomials, together with some known properties of these symmetric functions, allow us to fully characterize the total positivity on unbounded intervals of relevant polynomial bases, and to achieve HRA computations when solving algebraic problems involving their collocation matrices.

The layout of this paper is as follows. Section 2 recalls basic aspects of total positivity, HRA and Schur polynomials, and also describes the Neville elimination procedure. In Sect. 3, the pivots and multipliers of the Neville elimination of polynomial collocation matrices are explicitly expressed in terms of Schur polynomials. Section 4 focuses on polynomial bases obtained by multiplying the monomial basis by a nonsingular lower triangular matrix. A necessary and sufficient condition for the total positivity of these polynomial bases on unbounded intervals with positive or negative parameters is also obtained. Taking into account the results of this section, bidiagonal factorizations for collocation matrices of well-known polynomial bases are provided in Sect. 5. Section 6 illustrates the accurate results obtained when solving algebraic problems with collocation matrices of Hermite polynomials; to the best of the authors’ knowledge, such precise computations had not previously been achieved with Hermite collocation matrices. Finally, some conclusions and final remarks are collected in Sect. 7.

2 Notations and Auxiliary Results

Given \(k,n\in \mathbb {N}\) with \(k\le n \), \(A=(a_{i,j})_{1\le i,j\le n }\), and increasing sequences \(\alpha =\{\alpha _1,\ldots ,\alpha _k\}\), \(\beta =\{\beta _1,\ldots ,\beta _k\}\) of positive integers less than or equal to n, \(A[\alpha |\beta ]\) denotes the \(k\times k\) submatrix of A containing rows and columns of places \(\alpha \) and \(\beta \), respectively, that is, \(A[\alpha |\beta ]{:}{=} (a_{\alpha _i,\beta _j})_{1\le i,j\le k}\).

Let \((u_0, \ldots , u_n)\) be a basis of a given space U(I) of functions defined on the real set I. The collocation matrix at a sequence \(\{t_{i}\}_{i=1}^{n+1}\subset I\) is

$$\begin{aligned} M =\big ( u_{j-1}(t_i) \big )_{1\le i,j\le n+1}. \end{aligned}$$

We say that \((u_0, \ldots , u_n)\) is totally positive or TP (respectively, strictly totally positive or STP), if for any \(t_1< \cdots < t_{n+1}\) in I, the corresponding collocation matrix is TP (respectively, STP), that is, all its minors are nonnegative (respectively, positive).

2.1 High Relative Accuracy, Total Positivity and Neville Elimination

An important topic in Numerical Linear Algebra is the design and analysis of algorithms adapted to the structure of TP matrices and allowing the resolution of related algebraic problems, achieving relative errors of the order of the unit round-off (or machine precision), that is, to high relative accuracy (HRA).

Algorithms avoiding inaccurate cancellations can be performed to HRA (see page 52 in [4]). We then say that they satisfy the non-inaccurate cancellation (NIC) condition: they only compute multiplications, divisions, and additions of numbers with the same sign. Moreover, if the floating-point arithmetic is well implemented, subtractions of initial data can also be allowed without losing HRA (see page 53 in [4]).

Nowadays, bidiagonal factorizations are very useful to achieve accurate algorithms for performing computations with TP matrices. In fact, the parameterization of TP matrices leading to HRA algorithms is provided by their bidiagonal factorization, which is in turn very closely related to the Neville elimination (cf. [8,9,10]).

The essence of the Neville elimination is to obtain an upper triangular matrix from a given \( A=(a_{i,j})_{1\le i,j\le n+1}\), by adding to each row an appropriate multiple of the previous one. In particular, the Neville elimination of A consists of n major steps that define matrices \(A^{(1)}{:}{=}A\) and \( A^{(r)} =(a_{i,j}^{(r)})_{1\le i,j\le n+1}\), such that,

$$\begin{aligned} a_{i,j}^{(r)}=0, \quad 1\le j\le r-1, \quad j< i\le n+1, \end{aligned}$$
(1)

for \(r=2,\ldots , n+1\), so that \(A^{(n+1)} \) is an upper triangular matrix. In more detail, \(A^{(r+1)} \) is computed from \(A^{(r)} \) according to the following formula

$$\begin{aligned} a_{i,j}^{(r+1)}{:}{=}{\left\{ \begin{array}{ll} a^{(r)}_{i,j}, \quad &{}\text {if} \ 1\le i \le r, \\ a^{(r)}_{i,j}- \frac{a_{i,r}^{(r)}}{ a_{i-1,r}^{(r)} } a^{(r)}_{i-1,j}, \quad &{}\text {if } r+1\le i,j\le n+1, \text { and } a_{i-1,r}^{(r)}\ne 0,\\ a^{(r)}_{i,j}, \quad &{}\text {if} \quad r+1\le i\le n+1, \text { and } a_{i-1,r}^{(r)}= 0. \end{array}\right. } \end{aligned}$$
(2)

The entry

$$\begin{aligned} p_{i,j} {:}{=} a_{i,j}^{(j)}, \end{aligned}$$
(3)

for \(1\le j\le i\le n+1\), is called the (i, j) pivot of the Neville elimination of A and \(p_{i,i}\) the i-th diagonal pivot. If all the pivots of the Neville elimination are nonzero, Lemma 2.6 of [8] implies that

$$\begin{aligned} p_{i,1}=a_{i,1},\quad i=1,\ldots ,n+1,\qquad p_{i,j}= \frac{ \det A[i-j+1,\ldots , i|1,\ldots ,j]}{\det A[i-j+1,\ldots , i-1|1,\ldots ,j-1]}, \end{aligned}$$
(4)

for \(1 < j \le i\le n+1\). Furthermore, the value

$$\begin{aligned} m_{i,j} {:}{=} \frac{p_{i,j}}{p_{i-1,j}}, \end{aligned}$$

(5)

for \(1\le j < i\le n+1\), is called the (i, j) multiplier.
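
For readers who want to experiment with the elimination (2) and the quantities (3)–(5), the following minimal Python sketch computes the pivots and multipliers of a numeric matrix in plain floating-point arithmetic. It assumes that no zero pivots appear (so no row exchanges are needed) and, of course, it does not incorporate the HRA safeguards discussed in this section.

```python
import numpy as np

def neville_pivots(A):
    """Pivots p[i, j] and multipliers m[i, j] (0-based indices) of the
    Neville elimination (2)-(5); assumes all pivots are nonzero."""
    A = np.array(A, dtype=float)
    n1 = A.shape[0]
    p = np.zeros((n1, n1))
    m = np.zeros((n1, n1))
    for r in range(n1):                 # step r eliminates column r
        p[r:, r] = A[r:, r]             # p_{i,j} = a_{i,j}^{(j)}, cf. (3)
        for i in range(n1 - 1, r, -1):  # bottom-up: each row uses the previous one
            m[i, r] = A[i, r] / A[i - 1, r]     # m_{i,j} = p_{i,j}/p_{i-1,j}, cf. (5)
            A[i, r:] -= m[i, r] * A[i - 1, r:]  # the update (2)
    return p, m

# Vandermonde example at nodes (1, 2, 4): diagonal pivots 1, 1, 6.
p, m = neville_pivots([[1, 1, 1], [1, 2, 4], [1, 4, 16]])
print(np.diag(p), m[2, 1])   # [1. 1. 6.] 2.0
```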

The complete Neville elimination of the matrix A consists of performing its Neville elimination to obtain the upper triangular matrix \(U{:}{=}A^{(n+1)}\) and next, the Neville elimination of the lower triangular matrix \(U^T\). If the complete Neville elimination of the matrix A can be performed with no row and column exchanges, the multipliers of the complete Neville elimination of A are the multipliers of the Neville elimination of A (respectively, of \(A^T\)) if \(i \ge j\) (respectively, \(j \ge i\)) (see [10]).

Neville elimination is a nice and efficient tool to analyze the total positivity of a given matrix. This fact is shown in the following characterization, which can be derived from Theorem 4.1, Corollary 5.5 of [8] and the arguments of p. 116 of [10].

Theorem 1

A given matrix A is STP (resp. nonsingular TP) if and only if its complete Neville elimination can be performed without row and column exchanges, the multipliers of the Neville elimination of A and \(A^T\) are positive (resp. nonnegative), and the diagonal pivots of the Neville elimination of A are positive.

Furthermore, a nonsingular TP matrix \(A \in {\mathbb {R}}^{(n+1)\times (n+1)}\) admits a decomposition of the form

$$\begin{aligned} A=F_nF_{n-1}\cdots F_1 D G_1G_2 \cdots G_n, \end{aligned}$$
(6)

where \(F_i\in {\mathbb {R}}^{(n+1)\times (n+1)}\) (respectively, \(G_i\in {\mathbb {R}}^{(n+1)\times (n+1)}\)) is the TP, lower (respectively, upper) triangular bidiagonal matrix given by

$$\begin{aligned} \small { F_i=\left( \begin{array}{cccccccccc} 1 \\ &{} \ddots \\ &{} &{} 1 \\ &{} &{} m_{i+1,1} &{} 1 \\ &{} &{} &{} \ddots &{} \ddots \\ &{} &{} &{} &{} m_{n+1,n+1-i} &{} 1 \end{array} \right) ,\qquad G_i^T=\left( \begin{array}{cccccccccc} 1 \\ &{} \ddots \\ &{} &{} 1 \\ &{} &{} {\widetilde{m}}_{i+1,1} &{} 1 \\ &{} &{} &{} \ddots &{} \ddots \\ &{} &{} &{} &{} {\widetilde{m}}_{n+1,n+1-i} &{} 1 \end{array} \right) , } \end{aligned}$$
(7)

and \(D \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is the diagonal matrix whose diagonal elements are the diagonal pivots, \(p_{i,i}>0\), \(i=1,\ldots ,n+1\), of the Neville elimination of A in (3) (see Theorem 4.2 and the arguments of p. 116 of [10]).

The entries \(m_{i,j}\) of the matrix \(F_i\) in (7) are the multipliers of the Neville elimination of A. Furthermore, the entries \({\widetilde{m}}_{j,i}\) of the matrix \(G_i\) in (7) are the multipliers of the Neville elimination of \(A^T\).

By defining \(BD(A)=( BD(A)_{i,j})_{1\le i,j\le n+1} \), with

$$\begin{aligned} BD(A)_{i,j}{:}{=}{\left\{ \begin{array}{ll} m_{i,j}, &{} \text {if } i>j, \\ p_{i,i}, &{} \text {if } i=j, \\ {\widetilde{m}}_{ j,i}, &{} \text {if } i<j, \\ \end{array}\right. } \end{aligned}$$
(8)

the decomposition (6) of a nonsingular TP matrix A can be stored (cf. [13]). If the entries of BD(A) can be computed to HRA, using the algorithms presented in [14], problems such as the computation of \(A^{-1}\), of the eigenvalues and singular values of A, as well as the resolution of linear systems of equations \(Ax=b\), for vectors b whose entries have alternating signs, can be performed to HRA. The implementation of those algorithms is available through the link [16]. The corresponding functions are named TNInverseExpand (applying the algorithm proposed in [20]), TNEigenValues, TNSingularValues, and TNSolve, respectively. All these functions require the matrix BD(A) as their input argument.
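
To make the storage scheme (8) concrete, the following sketch (a plain NumPy illustration, not one of the HRA routines of [16]) rebuilds \(A\) from \(BD(A)\) by forming the bidiagonal factors of (7) explicitly and multiplying out (6). Note that forming the product explicitly may reintroduce the cancellations that the factored form avoids, so it is only a checking device.

```python
import numpy as np

def expand_bd(bd):
    """Rebuild A = F_n...F_1 * D * G_1...G_n from BD(A), cf. (6)-(8)."""
    bd = np.asarray(bd, dtype=float)
    n1 = bd.shape[0]                       # n1 = n + 1
    a = np.diag(np.diag(bd))               # D holds the diagonal pivots p_{i,i}
    for i in range(1, n1):                 # i = 1, ..., n
        f, g = np.eye(n1), np.eye(n1)
        for k in range(i + 1, n1 + 1):     # 1-based rows k = i+1, ..., n+1
            f[k - 1, k - 2] = bd[k - 1, k - i - 1]   # m_{k,k-i} in F_i, cf. (7)
            g[k - 2, k - 1] = bd[k - i - 1, k - 1]   # ~m_{k,k-i} in G_i
        a = f @ a @ g                      # accumulates F_n...F_1 D G_1...G_n
    return a
```

The factors are applied in the order \(F_i \cdot (\text {previous product}) \cdot G_i\), so that after the loop the product is exactly in the ordering of (6).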

2.2 Basic Properties of Schur Polynomials

Given a partition \(\lambda {:}{=}(\lambda _1,\lambda _2,\ldots ,\lambda _p)\) of size \(|\lambda |{:}{=}\lambda _1+\cdots +\lambda _p\) and length \(l(\lambda ){:}{=}p\), such that \(\lambda _1\ge \lambda _2 \ge \cdots \ge \lambda _p\ge 0\), Jacobi's definition of the corresponding Schur polynomial in \(n+1\) variables, with \(\lambda \) padded with zeros so that it has \(n+1\) entries, is given by Weyl's formula,

$$\begin{aligned} S_\lambda (t_1,\dots ,t_{n+1}){:}{=} \det \left[ \begin{array}{cccc} t_{1}^{\lambda _1+n} & t_{2}^{\lambda _1+n} &\ldots & t_{n+1}^{\lambda _1+n} \\ t_{1}^{\lambda _2+n-1} & t_{2}^{\lambda _2+n-1} & \ldots & t_{n+1}^{\lambda _2+n-1} \\ \vdots & \vdots & \ddots & \vdots \\ t_1^ {\lambda _{ n+1} } &t_2^ {\lambda _{ n+1} } & \ldots & t_{n+1}^{\lambda _{n+1}} \end{array} \right] \Big / \det \left[ \begin{array}{cccc} t_{1}^{n} & t_{2}^{n} &\ldots & t_{n+1}^{n} \\ t_{1}^{n-1} & t_{2}^{n-1} & \ldots & t_{n+1}^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \ldots & 1 \end{array} \right] , \end{aligned}$$
(9)

The Schur polynomial labeled by the empty partition (or by any partition of zeros) is, by convention, \(S_{(0,\dots ,0)}(t_1,\ldots ,t_{n+1}){:}{=}1\), and is commonly denoted \(S_\emptyset (t_1,\ldots ,t_{n+1})\).
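
As a quick illustration of (9), the bialternant quotient can be evaluated symbolically. The following sympy sketch pads \(\lambda \) with zeros up to the number of variables and forms both alternants with the same (decreasing) ordering of the exponents.

```python
from sympy import Matrix, symbols, factor

def schur(lam, ts):
    """S_lambda(t_1, ..., t_m) via the bialternant formula (9)."""
    m = len(ts)
    lam = tuple(lam) + (0,) * (m - len(lam))          # pad with zeros
    alt = lambda mu: Matrix(m, m, lambda i, j: ts[j] ** (mu[i] + m - 1 - i)).det()
    return factor(alt(lam) / alt((0,) * m))           # denominator: Vandermonde

t1, t2 = symbols('t1 t2')
print(schur((2, 1), (t1, t2)))   # t1*t2*(t1 + t2)
print(schur((1, 1), (t1, t2)))   # t1*t2: a single monomial, cf. (15) below
```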

Schur polynomials are symmetric functions in their arguments. In addition, we now list other well-known properties that will be used in this article (for more details, interested readers are referred to [18]).

(i) \(S_\lambda (t_1,\ldots ,t_{n+1})>0\) for positive values of \( t_i\), \(i=1,\ldots ,n+1\).

(ii) \(S_\lambda (t_1,\ldots ,t_{n+1})=0\) if \(l(\lambda )>n+1\).

(iii) \(S_\lambda (t_1,\ldots ,t_{n+1})\) is a homogeneous polynomial of degree \(|\lambda |\), that is,

    $$\begin{aligned} S_\lambda (\alpha \, t_1,\alpha \, t_2,\ldots ,\alpha \, t_{n+1})=\alpha ^{|\lambda |} S_\lambda (t_1,t_2,\ldots ,t_{n+1}). \end{aligned}$$
    (10)
(iv) As \(\lambda \) runs over all partitions of a given size, the corresponding Schur polynomials provide a basis for the space of homogeneous symmetric polynomials of that degree. When considering all partitions, Schur polynomials provide a basis of the space of symmetric functions.

As an alternative to Weyl’s formula (9), Schur polynomials can also be expressed in terms of monomials as follows

$$\begin{aligned} S_\lambda (t_1,\dots ,t_{n+1})=\sum _{\mu }K_{\lambda ,\mu \,}t_1^{\mu _1}\cdots t_{n+1}^{\mu _{n+1}}, \end{aligned}$$
(11)

where \(\mu =( \mu _1,\ldots , \mu _{n+1}) \) is a weak composition of \(|\lambda |\) and the \(K_{\lambda ,\mu }\) are non-negative integers depending on \(\lambda \) and \(\mu \). The numbers \(K_{\lambda ,\mu }\) are called Kostka numbers and can be calculated combinatorially by counting the semistandard Young tableaux (SSYT) of shape \(\lambda \) and weight \(\mu \). An important simple property is that the Kostka numbers \(K_{\lambda ,\mu }\) do not depend on the order of the entries of \(\mu \) (cf. [21]).
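
For instance, for \(\lambda =(2,1)\) in two variables, the only SSYT of shape \(\lambda \) have weights \((2,1)\) and \((1,2)\), so \(K_{(2,1),(2,1)}=K_{(2,1),(1,2)}=1\) and (11) gives \(S_{(2,1)}(t_1,t_2)=t_1^2t_2+t_1t_2^2\), in agreement with the bialternant quotient (9).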

Apart from the general properties mentioned above, there are some specific facts involving Schur polynomials that will be needed in this paper. Taking into account the way SSYT are defined, the following basic properties can be deduced:

$$\begin{aligned} K_{\lambda ,\lambda }=1, \end{aligned}$$
(12)
$$\begin{aligned} \frac{\partial ^{k}S_{\lambda }}{\partial t_r^k}(t_1,\dots ,t_{n+1})=0,\quad \text {if } k>\lambda _1. \end{aligned}$$
(13)

On the other hand, for a general partition \(\lambda \) and \(k=k_1+\cdots +k_{n+1}\), the one variable polynomial

$$\begin{aligned} \frac{\partial ^k S_{\lambda }}{\partial t_1^{k_1}\cdots \partial t_{n+1}^{k_{n+1}}} (t,\dots ,t), \end{aligned}$$
(14)

is either 0, or its degree is \(|\lambda |-k\).

Finally, let us observe that for any rectangular partition \(\lambda {:}{=}(\ell , \ldots , \ell )\), with \(l(\lambda )=j\),

$$\begin{aligned} S_{\lambda }(t_1,\dots ,t_{j})= t_1^{\ell }\cdots t_{j}^{\ell }, \quad j\in \mathbb {N}. \end{aligned}$$
(15)

The simplicity of this Schur polynomial, which consists of a single monomial, lies in the fact that the number of rows of the corresponding partition coincides with the number of variables of the polynomial. In this case, there is only one available SSYT, namely the one of shape \(\lambda \) and weight \(\lambda \) counted by (12).

Let us also observe that Algorithm 5.2 of [6] evaluates Schur polynomials at positive parameters to HRA.

3 The Factorization of Collocation Matrices of Polynomial Bases in Terms of Schur Functions

Let \( (p_{0},\ldots ,p_{n})\) be a basis of the space \( {{\textbf{P}}}^n(I)\) of polynomials of degree not greater than n defined on I, described by

$$\begin{aligned} p_{i-1}(t)=\sum _{j=1}^{n+1} a_{i,j}t^{j-1},\quad t\in I, \quad i=1,\dots , n+1. \end{aligned}$$
(16)

For a given sequence of parameters \( \{ t_{i}\}_{i=1}^{n+1}\) on I, the following result provides the multipliers and the diagonal pivots of the Neville elimination of the collocation matrix

$$\begin{aligned} M_{p}{:}{=}\big ( p_{j-1}(t_i) \big )_{1\le i,j\le n+1}, \end{aligned}$$
(17)

in terms of Schur polynomials and minors of the change of basis matrix \(A{:}{=}(a_{i,j})_{1\le i,j\le n+1}\), such that

$$\begin{aligned} (p_{0},p_{1}, \ldots ,p_{n})^{T}=A(m_{0},m_{1}, \ldots ,m_{n})^T, \end{aligned}$$
(18)

with \(m_i(t){:}{=}t^i\), \(i=0,\ldots ,n\).

Theorem 2

Let \((p_0,\ldots ,p_n)\) be a basis of \({{\textbf{P}}}^n(I)\) and A be the matrix satisfying (18). Given \( \{ t_{i}\}_{i=1}^{n+1}\) on I, the diagonal pivots (4) and the multipliers (5) of the Neville elimination of the matrix \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (17) are given by

$$\begin{aligned} p_{1,1}= a_{1,1}, \qquad p_{i,i} = \frac{ Q_{i,i} }{Q_{i-1,i-1}} \prod _{1\le k<i} (t_i-t_k),\quad 1< i\le n+1, \end{aligned}$$
(19)
$$\begin{aligned} m_{i,1} = 1, \qquad m_{i,j} = \frac{ Q_{i,j} \, Q_{i-2,j-1} }{ Q_{i-1,j} \,Q_{i-1,j-1}}\, \frac{\prod \limits _{i-j+1\le k<i} (t_i-t_k) }{\prod \limits _{i-j\le k<i-1} (t_{i-1}-t_k) }, \quad 1< j<i\le n+1, \end{aligned}$$
(20)
$$\begin{aligned} {\widetilde{m}}_{i,1} = \frac{\sum \limits _{l_1} a_{i,l_{1}} S_{(l_{1}-1)}(t_1)}{\sum \limits _{l_1} a_{i-1,l_{1}} S_{(l_{1}-1)}(t_1)}= \frac{p_{i-1}(t_{1})}{p_{i-2}(t_{1})}, \quad {\widetilde{m}}_{i,j} = \frac{ {\widetilde{Q}}_{i,j} \, {\widetilde{Q}}_{i-2,j-1}}{{\widetilde{Q}}_{i-1,j} \,{\widetilde{Q}}_{i-1,j-1} }, \quad 1< j<i\le n+1, \end{aligned}$$
(21)

with

$$\begin{aligned} Q_{i,j}{:}{=}\sum \limits _{l_1<\dots <l_{j}} \det A{[1,\dots ,j\,|\,l_1,\dots , l_j]}\, S_{(l_j-j,\dots ,l_1-1)}(t_{i-j+1},\dots ,t_i), \end{aligned}$$
(22)
$$\begin{aligned} {\widetilde{Q}}_{i,j}{:}{=}\sum \limits _{l_1<\dots <l_{j}} \det A{[i-j+1,\dots ,i\,|\,l_1,\dots , l_{j}]}\, S_{(l_j-j,\dots ,l_1-1)}(t_1,\ldots ,t_{j}). \end{aligned}$$
(23)

The sums in (22) and (23) are taken over all strictly increasing sequences \(l_{1}<\dots <l_{j}\) with \( l_{r}\in \{1,\ldots ,n+1\}\), \(r=1,\ldots ,j\).

Proof

Using (4), the computation of the minors of \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) with consecutive rows and initial consecutive columns will allow us to determine the corresponding pivots \(p_{i,j}\) and multipliers \(m_{i,j}\), \(1\le j\le i\le n+1\). Taking into account properties of determinants, we can write

$$\begin{aligned} \det M_p{[i-j+1,\dots ,i\,|\,1,\dots , j]}&=\sum _{(l_1,\dots ,l_j)}\det \left( \begin{array}{ccc} a_{1,l_1}t_{i-j+1}^{l_1-1} &{} \ldots &{} a_{j,l_j}t_{i-j+1}^{l_j-1} \\ \vdots &{} \ddots &{} \vdots \\ a_{1,l_1}t_i^{l_1-1} &{} \ldots &{} a_{j,l_j}t_i^{l_j-1} \end{array} \right) \nonumber \\&=\sum _{(l_1,\dots ,l_j)} a_{1,l_1}\cdots a_{j,l_j} \det \left( \begin{array}{ccc} t_{i-j+1}^{l_1-1} &{} \ldots &{} t_{i-j+1}^{l_j-1} \\ \vdots &{} \ddots &{} \vdots \\ t_i^{l_1-1} &{} \ldots &{} t_i^{l_j-1} \end{array} \right) , \end{aligned}$$
(24)

where the sums are taken over all j-tuples \((l_1,\dots ,l_j)\) with \( l_{r}\in \{1,\ldots ,n+1\}\), for \(r=1,\dots ,j\).

Let us notice that any j-tuple \((l_1,\dots ,l_j)\) with a repeated integer will not contribute to the sum, since the corresponding determinant in (24) vanishes. For this reason, we only consider tuples \((l_1,\dots ,l_j)\) with distinct entries. Then, the sum (24) can be reorganized by grouping each tuple with the permutation that sorts its entries, so that \(l_1<\dots <l_{j}\).

Taking into account these considerations, we have

$$\begin{aligned}&\det M_{p}{[i-j+1,\dots ,i\,|\,1,\dots , j]}\nonumber \\&\quad = \sum _{l_1<\dots<l_{j}} \sum _{\sigma \in S_j}a_{1,l_{\sigma (1)}}\cdots a_{j,l_{\sigma (j)}} \det \left( \begin{array}{ccc} t_{i-j+1}^{l_{\sigma (1)}-1} &{} \ldots &{} t_{i-j+1}^{l_{\sigma (j)}-1} \\ \vdots &{} \ddots &{} \vdots \\ t_i^{l_{\sigma (1)}-1} &{} \ldots &{} t_i^{l_{\sigma (j)}-1} \end{array} \right) \nonumber \\&\quad =\sum _{l_1<\dots<l_{j}} \sum _{\sigma \in S_j}a_{1,l_{\sigma (1)}}\cdots a_{j,l_{\sigma (j)}} \det \left( \begin{array}{ccc} t_{i-j+1}^{l_1-1} &{} \ldots &{} t_{i-j+1}^{l_j-1} \\ \vdots &{} \ddots &{} \vdots \\ t_i^{l_1-1} &{} \ldots &{} t_i^{l_j-1} \end{array} \right) \text {sgn}(\sigma )\nonumber \\&\quad =\sum _{l_1<\dots <l_{j}} \det A{[1,\dots ,j\,|\,l_1,\dots , l_j]} \det \left( \begin{array}{ccc} t_{i-j+1}^{l_1-1} &{} \ldots &{} t_{i-j+1}^{l_j-1} \\ \vdots &{} \ddots &{} \vdots \\ t_i^{l_1-1} &{} \ldots &{} t_i^{l_j-1} \end{array} \right) , \end{aligned}$$
(25)

where \(S_j\) denotes the group of permutations of j elements. The function \(\text {sgn}(\sigma )\) is the totally antisymmetric (one-dimensional) irreducible representation of the permutation group \(S_j\); it equals 1 if \(\sigma \) is an even permutation and \(-1\) if \(\sigma \) is an odd permutation. Recall that even (odd) permutations are the ones which can be written as a product of an even (odd) number of transpositions.

Now, from the definition (9) of the Schur polynomials, the definition (22) of \(Q_{i,j}\), and (25), we can write

$$\begin{aligned} \frac{ \det M_p{[i-j+1,\dots ,i\,|\,1,\dots , j]} }{ \det M_{j,t_{i-j+1},\ldots ,t_{i}} }= Q_{i,j},\qquad \frac{ \det M_p{[i-j+1,\dots ,i-1\,|\,1,\dots , j-1]}}{\det M_{j-1,t_{i-j+1},\ldots ,t_{i-1}}}= Q_{i-1,j-1}, \end{aligned}$$

where \(M_{k,s_{1},\ldots ,s_{k}} {:}{=} \left( s_j^{i-1} \right) _{1\le i,j \le k}\) denotes the Vandermonde matrix at the nodes \(s_1,\ldots ,s_k\). Taking into account that

$$\begin{aligned} \frac{\det M_{j,t_{i-j+1},\ldots ,t_{i}}}{\det M_{j-1,t_{i-j+1},\ldots ,t_{i-1}}}=\prod _{i-j+1\le k<i}(t_i-t_k), \end{aligned}$$

we derive that the pivots \(p_{i,j}\) of the Neville elimination satisfy

$$\begin{aligned} p_{i,1} = a_{i,1}, \quad p_{i,j} = \frac{Q_{i,j} }{Q_{i-1,j-1} } \prod _{i-j+1\le k<i} (t_i-t_k). \end{aligned}$$
(26)

Consequently, for \(i=j\), identities (19) are deduced. Finally, using (5) and (26), the multipliers \(m_{i,j}\), \(1\le j<i\le n+1\), can be written as in (20).

Now, let us derive identities (21) for \({\widetilde{m}}_{ij}\). Again, using properties of determinants, the minors of \(M_{p}^T\) with initial consecutive columns and consecutive rows can be written as follows

$$\begin{aligned} \det M_{p}^T{[i-j+1,\dots ,i\,|\,1,\dots , j]}= & {} \sum _{(l_1,\dots ,l_j)}\det \left( \begin{array}{ccc} a_{i-j+1,l_1}t_1^{l_1-1} &{} \ldots &{} a_{i-j+1,l_1}t_j^{l_1-1} \\ \vdots &{} \ddots &{} \vdots \\ a_{i,l_j}t_1^{l_j-1} &{} \ldots &{} a_{i,l_j}t_j^{l_j-1} \end{array} \right) \nonumber \\= & {} \sum _{(l_1,\dots ,l_j)} a_{i-j+1,l_1}\cdots a_{i,l_j} \det \left( \begin{array}{ccc} t_1^{l_1-1} &{} \ldots &{} t_j^{l_1-1} \\ \vdots &{} \ddots &{} \vdots \\ t_1^{l_j-1} &{} \ldots &{} t_j^{l_j-1} \end{array} \right) ,\nonumber \\ \end{aligned}$$
(27)

where the previous sums are taken over all j-tuples \((l_1,\dots ,l_j)\) with \(l_{r}=1,\ldots ,n+1\), \(r=1,\ldots ,j\). So, we can write

$$\begin{aligned}{} & {} \det M_{p}^T{[i-j+1,\dots ,i\,|\,1,\dots , j]}\nonumber \\{} & {} \quad = \sum _{l_1<\dots<l_{j}} \sum _{\sigma \in S_j}a_{i-j+1,l_{\sigma (1)}}\cdots a_{i,l_{\sigma (j)}} \det \left( \begin{array}{ccc} t_1^{l_{\sigma (1)}-1} &{} \ldots &{} t_j^{l_{\sigma (1)}-1} \\ \vdots &{} \ddots &{} \vdots \\ t_1^{l_{\sigma (j)}-1} &{} \ldots &{} t_j^{l_{\sigma (j)}-1} \end{array} \right) \nonumber \\{} & {} \quad =\sum _{l_1<\dots<l_{j}} \sum _{\sigma \in S_j}a_{i-j+1,l_{\sigma (1)}}\cdots a_{i,l_{\sigma (j)}} \det \left( \begin{array}{ccc} t_1^{l_1-1} &{} \ldots &{} t_j^{l_1-1} \\ \vdots &{} \ddots &{} \vdots \\ t_1^{l_j-1} &{} \ldots &{} t_j^{l_j-1} \end{array} \right) \text {sgn}(\sigma )\nonumber \\{} & {} \quad =\sum _{l_1<\dots <l_{j}} \det A{[i-j+1,\dots ,i\,|\,l_1,\dots , l_j]} \det \left( \begin{array}{ccc} t_1^{l_1-1} &{} \ldots &{} t_j^{l_1-1} \\ \vdots &{} \ddots &{} \vdots \\ t_1^{l_j-1} &{} \ldots &{} t_j^{l_j-1} \end{array} \right) .\nonumber \\ \end{aligned}$$
(28)

Now, taking into account (28), the definition (9) of the Schur polynomials and the definition (23) of \( {\widetilde{Q}}_{i,j}\), we deduce

$$\begin{aligned} \frac{ \det M_{p}^T{[i-j+1,\dots ,i\,|\,1,\dots , j]} }{\det M_{j,\, t_1,\ldots ,t_j}} = {\widetilde{Q}}_{i,j}. \end{aligned}$$

Using the following identity,

$$\begin{aligned} \frac{\det M_{j,t_1,\ldots ,t_{j}}}{\det M_{j-1,t_1,\ldots ,t_{j-1}}}=\prod _{1\le k<j} (t_j-t_k), \end{aligned}$$

we derive

$$\begin{aligned} {\widetilde{p}}_{i,j} = \frac{ {\widetilde{Q}}_{i,j} }{ {\widetilde{Q}}_{i-1,j-1} } \prod _{1\le k<j} (t_j-t_k). \end{aligned}$$
(29)

Finally, using (29) and (5), the multipliers \({\widetilde{m}}_{i,j}\), \(1\le j<i\le n+1\), of the Neville elimination of \(M_p^T\) can be written as in (21). \(\square \)
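
The sums (22)–(23) run over finitely many column selections and are straightforward to transcribe. As an illustration, the following self-contained sympy sketch evaluates \(Q_{i,j}\); \({\widetilde{Q}}_{i,j}\) is obtained analogously by extracting rows \(i-j+1,\dots ,i\) of \(A\) and using the nodes \(t_1,\dots ,t_j\).

```python
from itertools import combinations
from sympy import Matrix, factor

def schur(lam, ts):
    """S_lambda via the bialternant quotient (9)."""
    m = len(ts)
    lam = tuple(lam) + (0,) * (m - len(lam))
    alt = lambda mu: Matrix(m, m, lambda r, c: ts[c] ** (mu[r] + m - 1 - r)).det()
    return factor(alt(lam) / alt((0,) * m))

def Q(A, t, i, j):
    """Q_{i,j} of (22); A a sympy Matrix, t the node list, i and j 1-based."""
    n1 = A.shape[0]
    total = 0
    for ls in combinations(range(1, n1 + 1), j):       # columns l_1 < ... < l_j
        minor = A.extract(list(range(j)), [l - 1 for l in ls]).det()
        if minor != 0:
            lam = [ls[j - 1 - r] - (j - r) for r in range(j)]  # (l_j-j, ..., l_1-1)
            total += minor * schur(lam, t[i - j:i])            # t_{i-j+1}, ..., t_i
    return factor(total)
```

For the monomial basis (\(A\) the identity) only the selection \(l_r=r\) survives, so \(Q_{i,j}=1\) and (19) recovers the classical expression of the Vandermonde pivots as products of node differences.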

As a consequence of Theorem 2, the decomposition (6) of any collocation matrix of a TP polynomial basis can be expressed in terms of Schur polynomials.

Corollary 1

Let \(I\subseteq \mathbb {R}\) and \((p_{0},\ldots ,p_{n})\) be a TP basis of \( {{\textbf{P}}}^n(I)\). For any sequence of parameters \(t_1<\cdots <t_{n+1}\) on I, the collocation matrix \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (17) admits a factorization of the form (6) such that

$$\begin{aligned} M_{p}=F_nF_{n-1}\cdots F_1 D G_1 G_2 \cdots G_n, \end{aligned}$$
(30)

where \(F_i\) (respectively, \(G_i\)), \(i=1,\ldots ,n\), are the lower (respectively, upper) triangular bidiagonal matrices described in (7) and \(D=\textrm{diag}\left( p_{1,1}, p_{2,2},\ldots , p_{n+1,n+1}\right) \). The diagonal entries \(p_{i,i}\), \(1\le i\le n+1\), can be obtained from (19). The off-diagonal entries \(m_{i,j}\) and \({\widetilde{m}}_{i,j}\), \(1\le j<i\le n+1\), can be obtained from (20) and (21), respectively.

In addition, if \(I\subseteq (0,\infty )\) and the minors providing \(m_{i,j}\) and \({\widetilde{m}}_{i,j}\) are positive, then \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is STP.

Proof

Since \((p_{0},\ldots ,p_{n})\) is TP on I, the collocation matrix \(M_p \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is nonsingular and TP and, by Theorem 1, its complete Neville elimination can be performed without row and column exchanges. Moreover, \(m_{i,j}\ge 0\), \({\widetilde{m}}_{i,j}\ge 0\), \(1\le j<i\le n+1\) and, by Theorem 2, these values satisfy (20) and (21), respectively. In addition, \(p_{i,i}>0\), \(1\le i\le n+1\), and these pivots satisfy (19).

Since the Schur polynomials at positive parameters are positive, if \(I\subseteq (0,\infty )\) and the minors providing \(m_{i,j}\) and \({\widetilde{m}}_{i,j}\) are strictly positive, we can clearly guarantee \(m_{i,j}>0\), \({\widetilde{m}}_{i,j}>0\) and, by Theorem 1, \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is STP. \(\square \)

4 On the Total Positivity of a Relevant Class of Polynomials

Many relevant bases of the polynomial space \({{\textbf{P}}}^n(I)\) are formed by polynomials \(q_{0},\ldots ,q_{n}\) such that \(\text {deg } q_i =i\), \(i=0,\ldots ,n\), and then

$$\begin{aligned} q_{i-1}(t){:}{=}\sum _{j=1}^{i} b_{i,j}t^{j-1}, \quad i=1,\dots , n+1. \end{aligned}$$
(31)

The change of basis matrix \(B=(b_{i,j})_{1\le i,j\le n+1}\) such that

$$\begin{aligned} (q_{0}, q_{1}, \ldots ,q_{n})^{T}=B(m_{0}, m_{1}, \ldots ,m_{n})^T, \end{aligned}$$
(32)

with \( m_{i}(t){:}{=}t^i\), \(i=0,\ldots ,n\), is nonsingular lower triangular, that is,

$$\begin{aligned} b_{i,j}=0 \quad \text {if} \quad j>i, \quad i=1,\dots , n+1. \end{aligned}$$
(33)

Taking into account the triangular structure of the matrix in (32), the following result restates Theorem 2 for bases (31), providing the pivots and the multipliers of the Neville elimination of their collocation matrices at nodes \(\{t_{i}\}_{i=1}^{n+1}\),

$$\begin{aligned} M_{q}{:}{=}\big ( q_{j-1}(t_i) \big )_{1\le i,j\le n+1}. \end{aligned}$$
(34)

Theorem 3

Let \((q_0,\ldots ,q_n)\) be a basis of \({\textbf{P}}^n(I)\) such that the matrix B satisfying (32) is nonsingular lower triangular. Given \( \{ t_{i}\}_{i=1}^{n+1}\) on I, the diagonal pivots (4) and the multipliers (5) of the Neville elimination of the matrix \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (34) are given by

$$\begin{aligned} p_{1,1}= & {} b_{1,1}, \quad p_{i,i}=b_{i,i}\prod _{1\le k<i} (t_i-t_k), \quad 1< i\le n+1, \end{aligned}$$
(35)
$$\begin{aligned} m_{i,1}= & {} 1, \quad m_{i,j}=\frac{\prod \limits _{i-j+1\le k<i} (t_i-t_k) }{\prod \limits _{i-j\le k<i-1} (t_{i-1}-t_k) }, \quad 1< j<i\le n+1, \end{aligned}$$
(36)
$$\begin{aligned} {\widetilde{m}}_{i,1}= & {} \frac{\sum \limits _{l_1} b_{i,l_{1}} S_{(l_{1}-1)}(t_1)}{\sum \limits _{l_1} b_{i-1,l_{1}} S_{(l_{1}-1)}(t_1)}= \frac{q_{i-1}(t_{1})}{q_{i-2}(t_{1})},\quad {\widetilde{m}}_{i,j} = \frac{ {\widetilde{Q}}_{i,j}\, {\widetilde{Q}}_{i-2,j-1} }{{\widetilde{Q}}_{i-1,j} \,{\widetilde{Q}}_{i-1,j-1} }, \quad 1< j<i\le n+1, \end{aligned}$$
(37)

with

$$\begin{aligned} {\widetilde{Q}}_{i,j} {:}{=} \sum \limits _{l_1<\dots <l_{j}} \det B{[i-j+1,\dots ,i\,|\,l_1,\dots , l_{j}]} S_{(l_j-j,\dots ,l_1-1)}(t_1,\ldots ,t_{j}). \end{aligned}$$
(38)

The sum in (38) is taken over all strictly increasing sequences \(l_{1}<\dots <l_{j}\) with \( l_{r}\in \{1,\ldots ,n+1\}\), \(r=1,\ldots ,j\).

Proof

Since B is nonsingular lower triangular, the linear combination defining \(Q_{i,j}\) in (22) contains a single term, namely the one corresponding to the sequence \(l_r=r\), \(r=1,\ldots ,j\). This sequence corresponds to the Schur polynomial \(S_\emptyset =1\) and so \(Q_{i,j}=b_{1,1}\cdots b_{j,j}\). Then,

$$\begin{aligned} \frac{ Q_{i,j} \, Q_{i-2,j-1} }{Q_{i-1,j} \,Q_{i-1,j-1} } =1,\quad \frac{Q_{i,i}}{Q_{i-1,i-1}}=b_{i,i}, \end{aligned}$$
(39)

and the pivots and multipliers given in (19) and (20) reduce to the expressions (35) and (36), respectively. \(\square \)

The bidiagonal factorization (6), described by (35), (36) and (37), is now illustrated with a simple example. The collocation matrix of the polynomial basis \((b_{1,1},\, b_{2,1}+b_{2,2}t,\, b_{3,1}+b_{3,2}t+b_{3,3}t^2)\) of \({{\textbf{P}}}^2\) at a sequence of parameters \(\{t_{1}, t_2, t_3\} \) can be decomposed as follows

$$\begin{aligned} {M_{q}}=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 &\frac{ t_{3}-t_{2} }{ t_{2}-t_{1} }& 1 \\ \end{array}\right) \left( \begin{array}{ccc} b_{1,1} & 0 & 0 \\ 0 & b_{2,2}(t_{2}-t_{1}) & 0 \\ 0 &0 & b_{3,3}(t_{3}-t_{1})(t_{3}-t_{2})\\ \end{array}\right) \left( \begin{array}{ccc} 1 & {\widetilde{m}}_{21} & 0 \\ 0 & 1 & {\widetilde{m}}_{32} \\ 0 &0 & 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & {\widetilde{m}}_{31} \\ 0 &0 & 1 \\ \end{array}\right) , \end{aligned}$$

where

$$\begin{aligned} {\widetilde{m}}_{2,1}= & {} \frac{b_{2,1}S_{(0)}(t_{1})+b_{2,2}S_{(1)}(t_{1})}{b_{1,1}S_{(0)}(t_{1})}, \\ {\widetilde{m}}_{3,1}= & {} \frac{b_{3,1}S_{(0)}(t_{1})+b_{3,2}S_{(1)}(t_{1})+b_{3,3}S_{(2)}(t_{1})}{b_{2,1}S_{(0)}(t_{1})+b_{2,2}S_{(1)}(t_{1})}, \\ {\widetilde{m}}_{3,2}= & {} \frac{b_{1,1}S_{(0)}(t_{1}){\widetilde{Q}}_{3,2}}{\det {{B}[1,2|1,2]}S_{(0,0)}(t_{1},t_{2})(b_{2,1}S_{(0)}(t_{1})+b_{2,2}S_{(1)}(t_{1}))}, \end{aligned}$$

with

$$\begin{aligned} {\widetilde{Q}}_{3,2}= & {} \det {{B}[2,3|1,2]}S_{(0,0)}(t_{1},t_{2})+\det {{B}[2,3|1,3]}S_{(1,0)}(t_{1},t_{2})\\{} & {} +\det {{B}[2,3|2,3]}S_{(1,1)}(t_{1},t_{2}). \end{aligned}$$
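
The diagonal factor \(b_{3,3}(t_3-t_1)(t_3-t_2)\) above can be double-checked directly from the determinantal formula (4); here is a short numeric sketch with hypothetical values for the coefficients \(b_{i,j}\) and the nodes.

```python
import numpy as np

# Hypothetical lower triangular change of basis (32) and nodes.
B = np.array([[1.0, 0.0, 0.0],
              [0.5, 2.0, 0.0],
              [1.0, 1.0, 3.0]])
t = np.array([0.3, 0.7, 1.2])

V = np.vander(t, 3, increasing=True)     # V[i, k] = t_i^k
Mq = V @ B.T                             # (M_q)_{i,j} = q_{j-1}(t_i), cf. (34)

# p_{3,3} = det M_q[1,2,3 | 1,2,3] / det M_q[1,2 | 1,2], cf. (4):
p33 = np.linalg.det(Mq) / np.linalg.det(Mq[:2, :2])
print(np.isclose(p33, B[2, 2] * (t[2] - t[0]) * (t[2] - t[1])))   # True
```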

Taking into account Theorem 3, the following result provides a useful characterization of the polynomial bases (31), which are STP on intervals \((\tau ,\infty )\), \(\tau \ge 0\), in terms of the sign of the diagonal entries of the nonsingular lower triangular change of basis matrix B in (32).

Theorem 4

Let \( (q_0,\ldots ,q_n)\) be a polynomial basis such that the matrix \(B=(b_{i,j})_{1\le i,j\le n+1}\), satisfying (32), is nonsingular lower triangular. Then, there exists \(\tau \ge 0\) such that \((q_0,\ldots ,q_n)\) is STP on \((\tau ,\infty )\) if and only if \(b_{i,i}>0\), for \(i=1,\ldots ,n+1\).

Proof

First, suppose that \((q_0,\ldots ,q_n)\) is STP on \((\tau ,\infty )\) for some \(\tau \ge 0\). Then there exist nodes \(\tau<t_{1}<\cdots <t_{n+1}\) such that the diagonal pivots \(p_{i,i}\), \(i=1,\ldots ,n+1\), of the Neville elimination of

$$\begin{aligned} M_{q} {:}{=}\big ( q_{j-1}(t_i) \big )_{1\le i,j\le n+1} \end{aligned}$$

are positive. Taking into account that \(p_{i,i}\) satisfy identities (35), we deduce that \(b_{i,i}>0 \), \(i=1,\ldots ,n+1\).

Now, consider that \(b_{i,i}>0 \), for \(i=1,\ldots ,n+1\). Then, for any sequence \(0<t_{1}<\cdots <t_{n+1}\), the positivity of the multipliers \(m_{i,j}\), \(1\le j<i\le n+1\), of the Neville elimination of the collocation matrix \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is deduced from (36). Moreover, taking into account that the diagonal pivots are given by (35), we also conclude that \(p_{i,i}>0\), \(i=1,\ldots ,n+1\).

The analysis of the sign of the multipliers \({\widetilde{m}}_{i,j}\) needs a closer look. As in (38), let us define

$$\begin{aligned} {\widetilde{Q}}_{i,j}(t_1,\ldots ,t_j){:}{=}\sum _{l_1<\dots <l_{j}} \det B{[i-j+1,\dots ,i\,|\,l_1,\dots , l_j]} S_{(l_j-j,\dots ,l_1-1)}(t_1,\ldots ,t_j). \nonumber \\ \end{aligned}$$
(40)

Clearly,

$$\begin{aligned} \text {deg } S_{(l_j-j,\dots ,l_1-1)} =\sum _{k=1}^j l_k \, -\frac{j(j+1)}{2} \end{aligned}$$
(41)

and then, the highest degree among the Schur polynomials in (40) is achieved in the term with maximum sum \(l_1+\dots +l_j\), as long as \(\det B{[i-j+1,\dots ,i\,|\,l_1,\dots , l_j]}\ne 0\). Since B is a nonsingular lower triangular matrix, the maximum for the sum \(l_1+ \dots +l_j\) is obtained for columns

$$\begin{aligned} (l_1,l_2,\ldots ,l_j)= (i-j+1,\ldots , i). \end{aligned}$$

Since

$$\begin{aligned} c_{i,j}{:}{=} \det B{[i-j+1,\dots ,i\,|\,i-j+1,\dots , i\,]} =b_{i-j+1,i-j+1}\,b_{i-j+2,i-j+2}\cdots b_{i,i}>0, \end{aligned}$$
(42)

the Schur polynomial in (40) with the highest degree is

$$\begin{aligned} S_{(i-j,\dots ,i-j)}(t_1,\dots ,t_j)=t_1^{i-j}\cdots t_j^{i-j}, \end{aligned}$$
(43)

and \(\text {deg } S_{(i-j,\dots ,i-j)} =j(i-j)\). Moreover, by inspection of (40), we see that

$$\begin{aligned} {\widetilde{Q}}_{i,j}(t_1,\ldots ,t_j)- c_{i,j}t_1^{i-j}\cdots t_j^{i-j} \end{aligned}$$

is a linear combination of Schur polynomials \(S_{\lambda }\) with a degree lower than \(j(i-j)\), labelled by partitions \(\lambda =(\lambda _1,\dots ,\lambda _j)\) with \(\lambda _1\le i-j\). Then, by the use of (13), we deduce that

$$\begin{aligned} \frac{\partial ^k S_{\lambda }}{\partial t_1^{k_1}\cdots \partial t_j^{k_j}}(t_1,\dots ,t_j)=0 \quad \text {if } \quad k_r>i-j, \end{aligned}$$
(44)

for any \(r=1,\ldots ,j\). These considerations ensure that the polynomials

$$\begin{aligned} P_{(i,j,0,\dots ,0)}(t) {:}{=} {\widetilde{Q}}_{i,j}(t,\ldots ,t), \quad P_{(i,j,k_1,\dots ,k_j)}(t) {:}{=} \frac{\partial ^k {\widetilde{Q}}_{i,j}}{\partial t_1^{k_1}\cdots \partial t_j^{k_j}} (t,\dots ,t), \end{aligned}$$

with \( \text {deg } P_{(i,j,k_1,\dots ,k_j)} =j(i-j)-k=j(i-j)-(k_1+\cdots +k_j) \), have positive leading coefficients. For this reason, each \(P_{(i,j,k_1,\dots ,k_j)}\) has a largest real root \(r_{i, j, k_1,\dots , k_j}\), so that \(P_{(i,j,k_1,\dots ,k_j)}(t)> 0\) for \(t > r_{i,j,k_1,\dots ,k_j}\).

Now, we can define

$$\begin{aligned} \tau {:}{=}\max \left\{ 0, \max _{i,j,k_1,\dots ,k_j} \{r_{i,j,k_1,\dots ,k_j}|\,k=k_1+\dots +k_j,\,1\le k\le j(i-j)\} \right\} . \nonumber \\ \end{aligned}$$
(45)

The multivariate polynomial \({\widetilde{Q}}_{i,j}\) can be written by its Taylor expansion around \((\tau ,\ldots ,\tau )\),

$$\begin{aligned} {\widetilde{Q}}_{i,j}(t_1,\dots ,t_j)= P_{(i,j,0,\dots ,0)}(\tau )+\sum _{k=1}^{j(i-j)}\sum _{k_1+\cdots +k_j=k}\frac{1}{k_1!\cdots k_j!}\delta _1^{k_1}\cdots \delta _j^{k_j} P_{(i,j,k_1,\dots ,k_j)}(\tau ) \end{aligned}$$

where \(t_r=\tau +\delta _r\), \(r=1,\ldots ,j\), and

$$\begin{aligned} {\widetilde{Q}}_{i,j}(t_1,\dots ,t_j)>0,\quad 0\le \tau<t_1<\cdots <t_j. \end{aligned}$$
(46)

Given \(0 \le \tau< t_{1}<\cdots <t_{n+1}\), by (37), the corresponding multiplier \({\widetilde{m}}_{i,j}\) can be written as

$$\begin{aligned} {\widetilde{m}}_{i,j}=\frac{{\widetilde{Q}}_{i,j}(t_1,\ldots ,t_j){\widetilde{Q}}_{i-2,j-1}(t_1,\ldots ,t_{j-1})}{ {\widetilde{Q}}_{i-1,j-1}(t_1,\ldots ,t_{j-1}){\widetilde{Q}}_{i-1,j}(t_1,\ldots ,t_j)}, \end{aligned}$$
(47)

and, by (46), we deduce that \({\widetilde{m}}_{i,j}>0\).

Finally, taking into account that for any sequence \(\tau<t_{1}<\cdots <t_{n+1}\), the multipliers \(m_{i,j}\), \({\widetilde{m}}_{i,j}\) and the diagonal pivots of the Neville elimination of the collocation matrix \(M_{q}\) are positive, we deduce by Theorem 1, that \(M_{q}\) is a STP matrix and conclude that the basis \( (q_0,\ldots ,q_n)\) is STP on the interval \((\tau , \infty )\). \(\square \)

Now, using a reasoning similar to that of Theorem 4, the following result characterizes the strict total positivity of collocation matrices \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) corresponding to negative parameters.

Theorem 5

Let \( (q_0,\ldots ,q_n)\) be a polynomial basis such that the matrix \(B=(b_{i,j})_{1\le i,j\le n+1}\), satisfying (32), is nonsingular lower triangular. Then, there exists \(\tau \le 0\) such that the collocation matrix (34) at any decreasing sequence \(0 \ge \tau> t_{1}>\cdots >t_{n+1}\) is STP if and only if \(\text {sign} (b_{i,i}) =(-1)^{i-1} \), \(i=1,\ldots ,n+1\).

Proof

For the direct implication, consider \(0 \ge \tau> t_{1}>\cdots >t_{n+1}\) such that the collocation matrix (34) is STP. Taking into account that the diagonal pivots of its Neville elimination, \(p_{i,i}\) are positive and satisfy identities (35), we deduce that \(\text {sign} (b_{i,i}) =(-1)^{i-1} \), \(i=1,\ldots ,n+1\).

Now, suppose that \(\text {sign} (b_{i,i}) =(-1)^{i-1} \), \(i=1,\ldots ,n+1\), and consider any sequence \( t_{n+1}<\cdots<t_{1}<0\). On the right hand side of (35) there are \(i-1\) negative factors \( (t_i-t_k) \) and so, \(\text {sign}(p_{i,i})= \text {sign}(b_{i,i})\,(-1)^{i-1}>0\). Moreover, from (36) we deduce that the multipliers \(m_{i,j}\) are positive, since the numerator and the denominator each contain \(j-1\) negative factors.

Now, define

$$\begin{aligned} R_{i,j}(t_1,\dots ,t_j){:}{=}\text {sign}(c_{i,j})(-1)^{j(i-j)}{\widetilde{Q}}_{i,j}(t_1,\dots ,t_j), \end{aligned}$$
(48)

where \({\widetilde{Q}}_{i,j}(t_1,\ldots ,t_j)\) is defined in (40), and

$$\begin{aligned} c_{i,j}=\det B{[i-j+1,\dots ,i\,|\,i-j+1,\dots , i\,]}=b_{i-j+1,i-j+1}\,b_{i-j+2,i-j+2}\cdots b_{i,i}.\nonumber \\ \end{aligned}$$
(49)

The multivariate polynomial \(R_{i,j}(t_1,\dots ,t_j)\) is defined in such a way that its leading term, \(|c_{i,j}| (-1)^{j(i-j)} t_1^{i-j}\cdots t_j^{i-j} \), is positive for \( t_{n+1}<\cdots<t_{1}<0\). Moreover, the sign of the leading term of any k-derivative of \(R_{i,j} \) is \((-1)^k\). Consequently, the leading terms of the polynomials defined as

$$\begin{aligned} P_{(i,j,0,\dots ,0)}(t){:}{=}R_{i,j}(t,\ldots ,t), \quad P_{(i,j,k_1,\dots ,k_j)}(t) {:}{=} (-1)^k\frac{\partial ^k R_{i,j}}{\partial t_1^{k_1}\cdots \partial t_j^{k_j}} (t,\dots ,t), \end{aligned}$$
(50)

with degree \( \text {deg } P_{(i,j,k_1,\dots ,k_j)} =j(i-j)-k=j(i-j)-(k_1+\cdots +k_j)\), take positive values if \(t<0\).

For this reason, there exists a smallest root \(r_{i,j,k_1,\dots ,k_j}\) such that \(P_{(i,j,k_1,\dots ,k_j)}(t)> 0\) and \(P_{(i,j,0,\dots ,0)}(t)>0\) for \(t < r_{i,j,k_1\dots ,k_j}\).

Now, we can define

$$\begin{aligned} \tau {:}{=}\min \left\{ 0, \min _{i,j,k_1,\dots ,k_j} \{r_{i,j,k_1,\dots ,k_j}|\,k=k_1+\dots +k_j,\,1\le k\le j(i-j)\} \right\} . \end{aligned}$$
(51)

Using (50), \(R_{i,j}\) can be written by its Taylor expansion around \((\tau ,\dots , \tau )\), as follows,

$$\begin{aligned}{} & {} R_{i,j}(t_1,\dots ,t_j)= P_{(i,j,0,\dots ,0)}(\tau )\nonumber \\ {}{} & {} \quad +\sum _{k=1}^{j(i-j)}\sum _{k_1+\cdots +k_j=k}\frac{1}{k_1!\cdots k_j!}\delta _1^{k_1}\cdots \delta _j^{k_j} (-1)^k P_{(i,j,k_1,\dots ,k_j)}(\tau ) \end{aligned}$$
(52)

where \(t_r=\tau +\delta _r\), \(r=1,\ldots ,j\).

Given \(t_{j}<\cdots<t_{1}<\tau \le 0\), we have \( \delta _r=t_r-\tau <0\), for \(r=1,\dots ,j\), and then

$$\begin{aligned} R_{i,j}(t_1,\dots ,t_j)>0. \end{aligned}$$
(53)

Finally, since

$$\begin{aligned} \text {sign}\big (c_{i,j}c_{i-1,j-1}c_{i-2,j-1}c_{i-1,j}\big )(-1)^{j(i-j)-(j-1)(i-j)+(j-1)(i-j-1)-j(i-j-1)}=1, \end{aligned}$$

by (48), the multiplier \({\widetilde{m}}_{i,j}\) can be expressed as

$$\begin{aligned} {\widetilde{m}}_{i,j}=\frac{R_{i,j}(t_1,\ldots ,t_j)R_{i-2,j-1}(t_1,\ldots ,t_{j-1})}{ R_{i-1,j-1}(t_1,\ldots ,t_{j-1})R_{i-1,j}(t_1,\ldots ,t_j)}. \end{aligned}$$
(54)

Thus, by (53), \({\widetilde{m}}_{i,j}>0\) for any \( t_{n+1}<\cdots<t_{1}<\tau \le 0\). \(\square \)

Let us observe that, using Theorems 4 and 5, a simple inspection of the sign of the leading coefficients of the polynomials in (31) provides a criterion to establish their total positivity on intervals contained in \((0,\infty )\) and \((-\infty ,0)\), respectively. It turns out that these bases may be TP on wider intervals. In fact, we give some examples of this behavior in Sects. 5.3 and 5.4. The location of such intervals requires a deeper analysis of the basis and does not fall within the scope of this paper. Even so, using Theorem 3, a careful study of the sign of the pivots and multipliers can be carried out to determine the range where a specific polynomial basis is TP.

Finally, let us notice that under the conditions provided by Theorem 4 or by Theorem 5, any STP collocation matrix \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (34) admits a decomposition (6) such that

$$\begin{aligned} M_{q}=F_nF_{n-1}\cdots F_1 D G_1 G_2 \cdots G_n, \end{aligned}$$
(55)

where \(F_i\) (respectively, \(G_i\)), \(i=1,\ldots ,n\), are the lower (respectively, upper) triangular bidiagonal matrices described in (7) and \(D=\textrm{diag}\left( p_{1,1}, p_{2,2},\ldots , p_{n+1,n+1}\right) \). The diagonal entries \(p_{i,i}\), \(1\le i\le n+1\), can be obtained from (35). The off-diagonal entries \(m_{i,j}\) and \({\widetilde{m}}_{i,j}\), \(1\le j<i\le n+1\), can be obtained from (36) and (37), respectively.

Let us recall that, by Theorem 7.2 of [6], Algorithm 5.2 of [6] computes the values of Schur functions for positive data to HRA. Moreover, by (10),

$$\begin{aligned} S_{\lambda }(-t_{1},\ldots , -t_{j}) =(-1)^{|\lambda |}S_{\lambda }(t_{1},\ldots ,t_{j}). \end{aligned}$$

Therefore, Algorithm 5.2 of [6] can also be used to compute the values of Schur functions for negative data to HRA. In addition, let us observe that, by Theorems 4 and 5, the diagonal pivots \(p_{i,i}\) and the multipliers \(m_{i,j}\) of the Neville elimination of any STP collocation matrix \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (34) can be computed to HRA, since they are products and quotients of factors with constant sign. The multipliers \({\widetilde{m}}_{i,j}\) of the Neville elimination of \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) can also be computed to HRA whenever the change of basis matrix B satisfying (32) is TP. If B is not TP, subtractions of Schur function values may appear. Although, strictly speaking, these values cannot be considered as initial (exact) data, since they are computed to HRA they still lead to excellent accuracy, as we shall show in Sect. 6.
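
Putting (35)–(38) together, the matrix \(BD(M_q)\) required as input by the routines of [16] can be assembled directly from \(B\) and the nodes. The sketch below does so symbolically, evaluating the Schur polynomials through the plain bialternant quotient instead of Algorithm 5.2 of [6]; it therefore illustrates the formulas but does not by itself provide the HRA guarantees discussed above. The example matrix and nodes at the end are hypothetical.

```python
from itertools import combinations
from sympy import Matrix, Rational, factor, prod

def schur(lam, ts):
    """S_lambda via the bialternant quotient (9)."""
    m = len(ts)
    lam = tuple(lam) + (0,) * (m - len(lam))
    alt = lambda mu: Matrix(m, m, lambda r, c: ts[c] ** (mu[r] + m - 1 - r)).det()
    return factor(alt(lam) / alt((0,) * m))

def Qt(B, t, i, j):
    """~Q_{i,j} of (38), with 1-based i, j."""
    n1 = B.shape[0]
    total = 0
    for ls in combinations(range(1, n1 + 1), j):
        minor = B.extract(list(range(i - j, i)), [l - 1 for l in ls]).det()
        if minor != 0:
            lam = [ls[j - 1 - r] - (j - r) for r in range(j)]
            total += minor * schur(lam, t[:j])
    return factor(total)

def bd_collocation(B, t):
    """BD(M_q) of (8) assembled via (35)-(38) for a lower triangular B."""
    n1 = B.shape[0]
    bd = Matrix.zeros(n1, n1)
    for i in range(1, n1 + 1):           # diagonal pivots, cf. (35)
        bd[i - 1, i - 1] = B[i - 1, i - 1] * prod(t[i - 1] - t[k] for k in range(i - 1))
    for i in range(2, n1 + 1):
        bd[i - 1, 0] = 1                                           # m_{i,1} = 1, cf. (36)
        bd[0, i - 1] = factor(Qt(B, t, i, 1) / Qt(B, t, i - 1, 1)) # ~m_{i,1}, cf. (37)
        for j in range(2, i):
            num = prod(t[i - 1] - t[k] for k in range(i - j, i - 1))
            den = prod(t[i - 2] - t[k] for k in range(i - j - 1, i - 2))
            bd[i - 1, j - 1] = num / den                           # m_{i,j}, cf. (36)
            bd[j - 1, i - 1] = factor(Qt(B, t, i, j) * Qt(B, t, i - 2, j - 1)
                                      / (Qt(B, t, i - 1, j) * Qt(B, t, i - 1, j - 1)))  # (37)
    return bd

# Hypothetical 3x3 lower triangular B and rational nodes:
B = Matrix([[1, 0, 0], [0, 2, 0], [-2, 0, 4]])
t = [Rational(4, 5), 1, Rational(6, 5)]
print(bd_collocation(B, t))   # all entries turn out positive here, so M_q is STP by Theorem 1
```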

5 Total Positivity and Factorizations of Well-Known Polynomial Bases

Now, we shall use the results in previous sections to analyze the total positivity of relevant polynomial bases and provide the decomposition (6) of their collocation matrices.

5.1 Recursive Polynomial Bases

Given values \(b_1,\ldots , b_{n+1}\), such that \(b_i\ne 0\), \(i=1,\ldots ,n+1\), let us consider the polynomial basis \(({\widetilde{p}}_0,\dots , {\widetilde{p}}_{n})\), such that

$$\begin{aligned} {\widetilde{p}}_0(t){:}{=}b_1, \quad {\widetilde{p}}_{i}(t) {:}{=} b_{i+1}t^i + {\widetilde{p}}_{i-1}(t), \quad i=1,\ldots ,n. \end{aligned}$$
(56)

Clearly, the change of basis matrix such that \(({\widetilde{p}}_0,\dots , {\widetilde{p}}_{n})^T=B (m_{0},\ldots ,m_{n})^T \), with \(m_{i}{:}{=}t^i\), \(i=0,\ldots ,n\), is a nonsingular lower triangular matrix of the following form

$$\begin{aligned} B=\left( \begin{array}{cccccc} b_{1} &\quad 0&\quad 0&\quad 0&\quad \ldots &\quad 0 \\ b_{1} &\quad b_2 &\quad 0&\quad 0&\quad \ldots &\quad 0 \\ b_{1} &\quad b_2 &\quad b_3&\quad 0&\quad \ldots &\quad 0 \\ \vdots &\quad \vdots &\quad \vdots &\quad \vdots &\quad \ddots &\quad \vdots \\ b_{1} &\quad b_2 &\quad b_3&\quad b_4&\quad \dots &\quad b_{n+1} \end{array} \right) . \end{aligned}$$
(57)

Let us note that the only non-zero minors of B are

$$\begin{aligned} \det B[i-j+1,\ldots ,i\,|\,l_1,i-j+2,i-j+3,\ldots ,i]= b_{l_1} \prod _{k=2}^j b_{i-j+k}, \quad l_1=1,\ldots ,i-j+1. \end{aligned}$$
(58)

Taking into account (58), the decomposition (6) of the collocation matrix of \(({\widetilde{p}}_0,\dots , {\widetilde{p}}_{n})\) at \(\{ t_{i}\}_{i=1}^{n+1}\) is given by Theorem 3 where

$$\begin{aligned} {\widetilde{Q}}_{i,j} = \sum \limits _{l_1=1}^{i-j+1}b_{l_1}\prod \limits _{k=2}^j b_{i-j+k} S_{\underbrace{(i-j,\ldots ,i-j}_{j-1 \text { times}},l_1-1)}(t_1,\ldots ,t_{j}). \end{aligned}$$
(59)

Let us notice that since the matrix (57) is nonsingular lower triangular, the criteria of total positivity depending on the diagonal entries \(b_i\ne 0\), \(i=1,2\ldots ,n+1\), given by Theorems 4 and 5, apply.
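
The structure (58) of the non-zero minors is easy to confirm exhaustively for small dimensions; here is a sympy sketch for \(n=3\) with symbolic \(b_i\), enumerating all minors with consecutive rows.

```python
from itertools import combinations
from sympy import Matrix, symbols, simplify, prod

b = symbols('b1:5')                                       # b_1, ..., b_4
n1 = 4
B = Matrix(n1, n1, lambda i, j: b[j] if j <= i else 0)    # the matrix (57)

for j in range(1, n1 + 1):                    # order of the minor
    for i in range(j, n1 + 1):                # consecutive rows i-j+1, ..., i
        for cols in combinations(range(n1), j):           # 0-based column choices
            minor = B.extract(list(range(i - j, i)), list(cols)).det()
            if simplify(minor) != 0:
                # per (58), the columns must be (l_1, i-j+2, ..., i):
                assert list(cols[1:]) == list(range(i - j + 1, i))
                expected = b[cols[0]] * prod(b[c] for c in cols[1:])
                assert simplify(minor - expected) == 0
print("all non-zero minors match (58)")
```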

Section 6 will show accurate computations when solving algebraic problems with collocation matrices of recursive polynomial bases with leading coefficients satisfying either \( b_{i}>0\) or \(\text {sign} (b_{i}) =(-1)^{i-1} \), \(i=1,2 \ldots ,n+1\).

Let us observe that the collocation matrices of the bases (56) can be considered as particular cases of the Cauchy-polynomial Vandermonde matrices defined in [23],

$$\begin{aligned} A=\left( \begin{array}{ccccccc} \frac{1}{t_{1}+d_{1}} & \cdots & \frac{1}{t_{1}+d_{l}} & b_{1} & b_{1}+b_{2}t_{1} & \cdots & \sum _{k=0}^{n-l}b_{k+1}t_{1}^k\\ \frac{1}{t_{2}+d_{1}} & \cdots & \frac{1}{t_{2}+d_{l}} & b_{1} & b_{1}+b_{2}t_{2} & \cdots & \sum _{k=0}^{n-l}b_{k+1}t_{2}^k\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots &\vdots \\ \frac{1}{t_{n+1}+d_{1}} & \cdots & \frac{1}{t_{n+1}+d_{l}} & b_{1} & b_{1}+b_{2}t_{n+1} & \cdots & \sum _{k=0}^{n-l}b_{k+1}t_{n+1}^k \end{array}\right) , \end{aligned}$$
(60)

for the case \(l=0\).

In Theorem 1 of [23], the authors give explicit formulas, in terms of Schur functions, for the determinants used to obtain the pivots and the multipliers of the Neville elimination of A. It can be checked that, setting \(l=0\) in these expressions, one recovers the formulas for the pivots and multipliers obtained in this paper.

5.2 Hermite Polynomials

The physicist’s Hermite basis of \({{\textbf{P}}}^n\) is the system of Hermite polynomials \((H_{0},\ldots ,H_{n})\) defined by

$$\begin{aligned} H_i(t){:}{=}i!\sum _{k=0}^{[\frac{i}{2}]} \frac{(-1)^k}{k!(i-2k)!}(2t)^{i-2k}, \quad i=0,\ldots ,n. \end{aligned}$$
(61)

The change of basis matrix A between the Hermite basis (61) and the monomial basis, such that \((H_{0}, \ldots ,H_{n})^{T}=A(m_{0}, \ldots ,m_{n})^T\), is the nonsingular lower triangular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) described by

$$\begin{aligned} a_{i,j}{:}{=}{\left\{ \begin{array}{ll} (-1)^{(i-j)/2}\, \frac{(i-1)!\, 2^{j-1}}{\left( \frac{i-j}{2}\right) !\,(j-1)!}, \quad &{}\text {if } i\ge j \text { and } i-j \text { even}, \\ 0, \quad &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$
(62)

Let us observe that, from Theorem 3, we can obtain the decomposition (6) of the collocation matrices of the Hermite polynomial basis \(({H}_{0},\ldots , {H}_n)\).

The diagonal entries satisfy \(a_{i,i}= 2^{i-1} >0\), \(i=1,2, \ldots , n+1\). Therefore, by Theorem 4, there exists a lower bound \(\tau \ge 0\) such that the Hermite polynomial basis \((H_{0},\ldots ,H_{n})\) is STP on \((\tau ,\infty )\).

Now, using well-known properties of the roots of Hermite polynomials, we shall deduce a lower bound for \(\tau \), which is an increasing function of the dimension of the basis.

Let us recall that the roots of the Hermite polynomials are simple and real. Then we can write \( t_{1,n}<\cdots <t_{n,n}\), where \(t_{i,n}\), \(i=1,\ldots ,n\), are the n roots of the n-degree Hermite polynomial \(H_n\). In [22], it is shown that the largest root of \(H_n\) satisfies

$$\begin{aligned} \sqrt{\frac{n-1}{2}}<t_{n,n}<\sqrt{2n-2}. \end{aligned}$$
(63)

On the other hand, since Hermite polynomials satisfy the following property

$$\begin{aligned} H_n'(t)=2n\, H_{n-1}(t), \end{aligned}$$

the roots of \(H_n\) and \(H_{n-1}\) interlace, that is,

$$\begin{aligned} t_{1,n}<t_{1,n-1}<t_{2,n}<t_{2,n-1}<\cdots<t_{n-1,n}<t_{n-1,n-1}<t_{n,n}. \end{aligned}$$
(64)

Taking into account (63) and (64), we can write

$$\begin{aligned} \sqrt{\frac{n-2}{2}}<t_{n-1, n-1}<t_{n,n}<\sqrt{2n-2}. \end{aligned}$$
(65)

By (37) and (65), \({\widetilde{m}}_{n+1,1}=H_{n}(t_{1})/H_{n-1}(t_{1})\) is negative for \(t_1\) satisfying \(\sqrt{\frac{n-2}{2}}<t_{n-1,n-1}<t_1<t_{n,n}\), since then \(H_{n}(t_{1})<0<H_{n-1}(t_{1})\). Therefore,

$$\begin{aligned} \sqrt{\frac{n-2}{2}}<\tau . \end{aligned}$$

Let us illustrate the bidiagonal decomposition (6) of collocation matrices of Hermite bases with a simple example. For \((H_{0},H_{1}, H_2)\),

$$\begin{aligned}{} & {} \left( \begin{array}{ccc} 1& 2t_{1} & 4t_{1}^2-2 \\ 1 &2t_{2} & 4t_{2}^2-2 \\ 1& 2t_{3} & 4t_{3}^2-2 \\ \end{array}\right) \\{} & {} \quad =\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 &\frac{t_{3}-t_{2}}{t_{2}-t_{1}}& 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2(t_{2}-t_{1}) & 0 \\ 0 &0 & 4(t_{3}-t_{1})(t_{3}-t_{2})\\ \end{array}\right) \left( \begin{array}{ccc} 1 & {\widetilde{m}}_{21} & 0 \\ 0 & 1 & {\widetilde{m}}_{32} \\ 0 &0 & 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & {\widetilde{m}}_{31} \\ 0 &0 & 1 \\ \end{array}\right) \end{aligned}$$

where

$$\begin{aligned} {\widetilde{m}}_{2,1}= & {} 2S_{(1)}(t_{1}), \\ {\widetilde{m}}_{3,1}= & {} \frac{-2+4S_{(2)}(t_{1})}{2S_{(1)}(t_{1})},\\ {\widetilde{m}}_{3,2}= & {} \frac{4+8S_{(1,1)}(t_{1},t_{2})}{4S_{(1)}(t_{1})}. \end{aligned}$$

It can be easily checked that for \(\sqrt{2}/2<t_{1}< t_{2}< t_{3}\), the collocation matrix is STP. Section 6 will show accurate computations when solving algebraic problems with collocation matrices of physicist’s Hermite polynomial bases.
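
Both the root bounds used above and the STP claim for the \(3\times 3\) example are easy to probe numerically; the following NumPy sketch (the node values are illustrative) checks (63) for several degrees and then verifies the positivity of every minor of the collocation matrix.

```python
from itertools import combinations
import numpy as np
from numpy.polynomial.hermite import hermroots

# Largest root t_{n,n} of the physicists' Hermite polynomial H_n:
# (63) is strict for n >= 3 (for n = 2 the lower bound is attained).
for n in range(3, 11):
    roots = np.sort(hermroots([0] * n + [1]))    # H_n in the Hermite basis
    assert np.sqrt((n - 1) / 2) < roots[-1] < np.sqrt(2 * n - 2)

# Collocation matrix of (H_0, H_1, H_2) at nodes above sqrt(2)/2:
t = np.array([0.75, 1.0, 1.5])
M = np.column_stack([np.ones(3), 2 * t, 4 * t**2 - 2])

# STP check: every minor (orders 1, 2, 3) must be positive.
for k in (1, 2, 3):
    for rows in combinations(range(3), k):
        for cols in combinations(range(3), k):
            assert np.linalg.det(M[np.ix_(rows, cols)]) > 0
print("all minors positive: M is STP")
```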

5.3 Bessel Polynomials

The Bessel basis of \({{\textbf{P}}}^n\) is the polynomial system \((B_{0},\ldots ,B_{n})\) with

$$\begin{aligned} B_i(t){:}{=}\sum _{k=0}^{i} \frac{(i+k)!}{2^k(i-k)!k!}t^k, \quad i=0,\ldots ,n. \end{aligned}$$
(66)

The change of basis matrix A between the Bessel basis (66) and the monomial basis, with \((B_{0}, \ldots ,B_{n})^{T}=A(m_{0}, \ldots ,m_{n})^T\), is the nonsingular lower triangular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) such that

$$\begin{aligned} a_{i,j}= \frac{ (i+j-2)! }{ 2^{j-1} (i-j)! (j-1)!}, \quad i \ge j. \end{aligned}$$

In [3], the total positivity of A is proved and the pivots and multipliers of its Neville elimination are provided. As a consequence, accurate computations when considering collocation matrices of \((B_{0},\ldots ,B_{n})\) for \((0<) t_0<t_1<\cdots <t_n\) are derived by considering that they are the product of a Vandermonde matrix and the matrix A. The bidiagonal decomposition of Vandermonde matrices can be found in [1, 5].

Now, using Theorem 3, we can obtain the explicit bidiagonal decomposition (6) of the Bessel collocation matrices. Let us illustrate it with a simple example. For \((B_{0},B_{1}, B_2)\),

$$\begin{aligned}{} & {} \left( \begin{array}{ccc} 1&{} 1+t_{1} &{} 1+3t_{1}+ 3t_{1}^2 \\ 1&{} 1+t_{2} &{} 1+3t_{2}+ 3t_{2}^2 \\ 1&{} 1+t_{3} &{} 1+3t_{3}+ 3t_{3}^2\\ \end{array}\right) \\{} & {} \quad =\left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0&{} 1 &{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 \\ 0 &{}\frac{t_{3}-t_{2}}{t_{2}-t_{1}}&{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} t_{2}-t_{1} &{} 0 \\ 0 &{}0 &{} 3(t_{3}-t_{1})(t_{3}-t_{2})\\ \end{array}\right) \left( \begin{array}{cccc} 1 &{} {\widetilde{m}}_{21} &{} 0 \\ 0 &{} 1 &{} {\widetilde{m}}_{32} \\ 0 &{}0 &{} 1 \\ \end{array}\right) \left( \begin{array}{cccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} {\widetilde{m}}_{31} \\ 0 &{}0 &{} 1 \\ \end{array}\right) \end{aligned}$$

where

$$\begin{aligned} {\widetilde{m}}_{2,1}= & {} 1+S_{(1)}(t_{1}),\\ {\widetilde{m}}_{3,1}= & {} \frac{1+3S_{(1)}(t_{1})+3S_{(2)}(t_{1})}{1+S_{(1)}(t_{1})}, \\ {\widetilde{m}}_{3,2}= & {} \frac{2+3S_{(1,0)}(t_{1},t_{2})+3S_{(1,1)}(t_{1},t_{2})}{1+S_{(1)}(t_{1})}. \end{aligned}$$

It can be easily checked that for \( -1<t_{1}< t_{2}< t_{3}\) with \(t_{2}>-1+ \frac{1}{3(1+t_{1})}\), the matrix is STP.

5.4 Laguerre Polynomials

Given \(\alpha >-1\), the generalized Laguerre basis is \((L^{(\alpha )}_ 0,\ldots , L^{(\alpha )}_ n)\) with

$$\begin{aligned} L^{(\alpha )}_ i(x){:}{=}\sum _{k=0}^{i} (-1)^k {i+\alpha \atopwithdelims ()i-k} \frac{x^k }{k!}, \quad i=0,\ldots ,n. \end{aligned}$$
(67)

The change of basis matrix between the generalized Laguerre basis (67) and the monomial basis, with \((L^{(\alpha )}_ 0,\ldots , L^{(\alpha )}_ n)^{T}=A(m_{0}, \ldots ,m_{n})^T\), is the lower triangular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) such that

$$\begin{aligned} a_{i,j}= {i-1 \atopwithdelims ()j-1} \frac{ (-1)^{j-1}( i-1 +\alpha ) \cdots (\alpha + 1) }{ ( i-1)!(j-1+\alpha )^{j-1) } }, \quad i \ge j, \end{aligned}$$

where, for a real x and a positive integer k, the falling factorial is

$$\begin{aligned} x^{0)}{:}{=}1, \quad \text {and} \quad x^{k)}{:}{=} x(x-1)(x -2) \cdots (x-k + 1). \end{aligned}$$

In [2], HRA computations for algebraic problems with the collocation matrices of \((L^{(\alpha )}_ 0,\ldots , L^{(\alpha )}_ n)\) at \((0>)\, t_0>t_1>\cdots >t_n\) are achieved. These computations are obtained through the bidiagonal decomposition of A and the well-known bidiagonal decomposition of the Vandermonde matrices.

Now, using Theorem 3, we can obtain the explicit bidiagonal decomposition (6) of the Laguerre collocation matrices. Let us illustrate it with a simple example. For \(({L}^{(0)}_{0},{L}^{(0)}_{1},{L}^{(0)}_2)\),

$$\begin{aligned}{} & {} \left( \begin{array}{ccc} 1&{} 1-t_{1} &{} 1-2t_{1}+ (1/2)t_{1}^2 \\ 1&{} 1-t_{2} &{} 1-2t_{2}+ (1/2)t_{2}^2 \\ 1&{} 1-t_{3} &{} 1-2t_{3}+ (1/2)t_{3}^2\\ \end{array}\right) \\{} & {} \quad =\left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0&{} 1 &{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 \\ 0 &{}\frac{t_{3}-t_{2}}{t_{2}-t_{1}}&{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} -(t_{2}-t_{1}) &{} 0 \\ 0 &{}0 &{} (1/2)(t_{3}-t_{1})(t_{3}-t_{2})\\ \end{array}\right) \left( \begin{array}{cccc} 1 &{} {\widetilde{m}}_{21} &{} 0 \\ 0 &{} 1 &{} {\widetilde{m}}_{32} \\ 0 &{}0 &{} 1 \\ \end{array}\right) \left( \begin{array}{cccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} {\widetilde{m}}_{31} \\ 0 &{}0 &{} 1 \\ \end{array}\right) \end{aligned}$$

where

$$\begin{aligned} {\widetilde{m}}_{2,1}= & {} 1-S_{(1)}(t_{1}), \\ {\widetilde{m}}_{3,1}= & {} \frac{1-2S_{(1)}(t_{1})+(1/2)S_{(2)}(t_{1})}{1-S_{(1)}(t_{1})}, \\ {\widetilde{m}}_{3,2}= & {} \frac{-1+(1/2)S_{(1,0)}(t_{1},t_{2})-(1/2)S_{(1,1)}(t_{1},t_{2})}{-1+S_{(1)}(t_{1})}. \end{aligned}$$

It can be easily checked that, for \(t_{3}< t_{2}< t_{1}<2-\sqrt{2}\), the collocation matrix of the three-dimensional Laguerre basis is STP. This means that, for this dimension, the total positivity results obtained in [2] for sequences of parameters in \((-\infty , 0)\) extend to the larger interval \((-\infty , 2-\sqrt{2})\), and hence even to sequences of parameters \(t_{3}< t_{2}< t_{1}\) whose elements have different signs.
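<br>
Indeed, the bound \(2-\sqrt{2}\) stems from the positivity of the numerator of \({\widetilde{m}}_{3,1}\):

$$\begin{aligned} 1-2t_{1}+\tfrac{1}{2}t_{1}^{2}>0 \iff t_{1}^{2}-4t_{1}+2>0 \iff t_{1}<2-\sqrt{2} \quad \text {or} \quad t_{1}>2+\sqrt{2}, \end{aligned}$$

where the branch \(t_{1}>2+\sqrt{2}\) is excluded by the condition \({\widetilde{m}}_{2,1}=1-S_{(1)}(t_{1})>0\); the positivity of the remaining pivots and multipliers then follows from \(t_{3}<t_{2}<t_{1}\).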

5.5 Jacobi Polynomials

Given \(\alpha , \beta \in {\mathbb {R}}\), the basis of Jacobi polynomials of the space \({{\textbf{P}}}^n\) of polynomials of degree less than or equal to n is \((J_{0}^{(\alpha ,\beta )},\ldots ,J_{n}^{(\alpha ,\beta )})\) with

$$\begin{aligned} J_i^{(\alpha ,\beta )}(t){:}{=}\frac{\Gamma (\alpha +i+1)}{i!\,\Gamma (\alpha +\beta +i+1)} \sum _{k=0}^{i}\left( {\begin{array}{c}i\\ k\end{array}}\right) \frac{\Gamma (\alpha +\beta +i+k+1)}{\Gamma (\alpha +k+1)} \left( \frac{ t-1}{ 2}\right) ^k, \quad i=0,\ldots ,n. \end{aligned}$$
(68)

The change of basis matrix between the Jacobi basis (68) and the basis \((v_0,\ldots ,v_n)\) with

$$\begin{aligned} v_k(t){:}{=}\left( \frac{ t-1}{ 2}\right) ^k,\quad k=0,\ldots ,n, \end{aligned}$$
(69)

is the lower triangular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) such that

$$\begin{aligned} a_{i,j}{:}{=}{\left\{ \begin{array}{ll} \frac{1}{(j-1)!(i-j)!}\prod _{k=j}^{i-1}(\alpha +k)\prod _{k=1}^{j-1}(\alpha +\beta +i+k-1), \quad &{}\text {if} \quad i\ge j, \\ 0, \quad &{}\text {if} \quad i< j. \end{array}\right. } \end{aligned}$$
(70)
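<br>
As a small consistency check (ours, with the parameters of the example below), the matrix A in (70) can be built and applied to the basis (69):

```matlab
% Build A from (70) for alpha=1, beta=2 and evaluate the Jacobi basis
% at a sample point through the basis (69).
alpha = 1; beta = 2; n = 2; A = zeros(n+1);
for i = 1:n+1
  for j = 1:i
    A(i,j) = prod(alpha+(j:i-1))*prod(alpha+beta+i+(1:j-1)-1) ...
             /(factorial(j-1)*factorial(i-j));
  end
end
% Here A = [1 0 0; 2 5 0; 3 18 21], matching the coefficients of the
% collocation matrix in the example below.
t = 0.3; v = (((t-1)/2).^(0:n))';
J = A*v;   % values J_0^{(1,2)}(t), ..., J_n^{(1,2)}(t)
```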

In [19] the total positivity on \((1,\infty )\) of the Jacobi basis \((J_{0}^{(\alpha ,\beta )},\ldots ,J_{n}^{(\alpha ,\beta )})\) is deduced. In addition, the pivots and multipliers of the Neville elimination of the change of basis matrix A in (70) are provided. Using the bidiagonal decomposition of A, computations to HRA are achieved.

Defining \({\widetilde{J}}_{i}^{(\alpha ,\beta )}(t){:}{=}{J}_{i}^{(\alpha ,\beta )}(1+2t) \), \(i=0,\ldots ,n\), and using Theorem 3, we can write the bidiagonal decomposition (6) of the Jacobi collocation matrices. For \(({\widetilde{J}}_{0}^{(1,2)},{\widetilde{J}}_{1}^{(1,2)},{\widetilde{J}}_{2}^{(1,2)} )\),

$$\begin{aligned}{} & {} \left( \begin{array}{ccc} 1&{} 2+5t_{1} &{} 3+18t_{1}+ 21t_{1}^2 \\ 1&{} 2+5t_{2} &{} 3+18t_{2}+ 21t_{2}^2 \\ 1&{} 2+5t_{3} &{} 3+18t_{3}+ 21t_{3}^2 \\ \end{array}\right) \\{} & {} \quad =\left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0&{} 1 &{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 \\ 0 &{}\frac{t_{3}-t_{2}}{t_{2}-t_{1}}&{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 5(t_{2}-t_{1}) &{} 0 \\ 0 &{}0 &{} 21(t_{3}-t_{1})(t_{3}-t_{2})\\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} {\widetilde{m}}_{21} &{} 0 \\ 0 &{} 1 &{} {\widetilde{m}}_{32} \\ 0 &{}0 &{} 1 \\ \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} {\widetilde{m}}_{31} \\ 0 &{}0 &{} 1 \\ \end{array}\right) \end{aligned}$$

where

$$\begin{aligned} {\widetilde{m}}_{2,1}= & {} 2+5S_{(1)}(t_{1}),\\ {\widetilde{m}}_{3,1}= & {} \frac{3+18S_{(1)}(t_{1})+21S_{(2)}(t_{1})}{2+5S_{(1)}(t_{1})}, \\ {\widetilde{m}}_{3,2}= & {} \frac{21+42S_{(1,0)}(t_{1},t_{2})+105S_{(1,1)}(t_{1},t_{2})}{5(2+5S_{(1)}(t_{1}))}. \end{aligned}$$

It can be easily checked that for \((\sqrt{2} - 3)/7<t_{1}<t_{2}<t_{3}\) the collocation matrix is STP. Therefore, the Jacobi polynomial basis \((J_{0}^{(1,2)},J_{1}^{(1,2)},J_{2}^{(1,2)} )\) is STP on \(((1+2\sqrt{2})/7,\infty )\approx (0.547,\infty )\), which strictly contains the interval \((1,\infty )\) on which total positivity was deduced in [19].
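<br>
Here the bound on \(t_{1}\) comes from the positivity of the numerator of \({\widetilde{m}}_{3,1}\):

$$\begin{aligned} 3+18t_{1}+21t_{1}^{2}>0 \iff t_{1}<\frac{-3-\sqrt{2}}{7} \quad \text {or} \quad t_{1}>\frac{\sqrt{2}-3}{7}, \end{aligned}$$

where the first branch is excluded by \({\widetilde{m}}_{2,1}=2+5S_{(1)}(t_{1})>0\), and the numerator of \({\widetilde{m}}_{3,2}\) is positive for all \((\sqrt{2}-3)/7<t_{1}<t_{2}\).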

Finally, let us observe that the bidiagonal decomposition (6) of the collocation matrices of Legendre, Gegenbauer, first and second kind Chebyshev and rational Jacobi polynomials can also be derived by considering Theorem 3.

6 Numerical Experiments

To ease the understanding of the numerical experimentation carried out, we provide the pseudocode of Algorithm 1, which uses Theorem 3 to compute the matrix form \(BD(M_{q})\) (8) storing the bidiagonal decomposition (6) of the collocation matrix \(M_{q}\) (34). Let us observe that Algorithm 1 calls the Matlab function Schurpoly, available in [15], for the computation of Schur polynomials.

Algorithm 1 Computation of the bidiagonal decomposition of the collocation matrix \(M_{q} \in \mathbb R^{(n+1)\times (n+1)}\) (34)
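<br>
For orientation, the quantities stored in \(BD(M_{q})\) coincide with the pivots and multipliers of the Neville elimination of \(M_{q}\) and of its transpose. The following generic floating-point sketch (ours; unlike Algorithm 1, it does not achieve HRA since it does not use the Schur expressions of Theorem 3) computes these quantities:

```matlab
% Generic Neville elimination: multipliers below the diagonal,
% multipliers of the transposed elimination above it, pivots on the
% diagonal; valid when no zero pivots occur (e.g. for STP matrices).
function B = NevilleBD(A)
  n = size(A,1); B = zeros(n); M = A;
  for j = 1:n-1                 % eliminate below the diagonal,
    for i = n:-1:j+1            % subtracting adjacent rows
      B(i,j) = M(i,j)/M(i-1,j);
      M(i,:) = M(i,:) - B(i,j)*M(i-1,:);
    end
  end
  for i = 1:n-1                 % repeat on the columns of the
    for j = n:-1:i+1            % remaining upper triangular matrix
      B(i,j) = M(i,j)/M(i,j-1);
      M(:,j) = M(:,j) - B(i,j)*M(:,j-1);
    end
  end
  B(1:n+1:end) = diag(M);       % the diagonal pivots
end
```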

To test our algorithm, we have considered STP collocation matrices \(M_{q}\) of recursive polynomial bases (56), as well as Hermite polynomial bases (61), with dimensions \(n+1=5,10,15\). Moreover, using the bidiagonal decompositions \(BD(M_{q})\) given by Algorithm 1 and the Matlab functions available in the software library TNTools in [14], we have solved fundamental problems in Numerical Linear Algebra involving the considered matrices.

In addition, to analyze the accuracy of the obtained results, we have calculated relative errors, taking the solutions computed in Mathematica with 100-digit arithmetic as the exact solutions of the considered algebraic problems.

We have also obtained in Mathematica the 2-norm condition number of all considered matrices. This conditioning is shown in Tables 1, 2 and 3. It can be easily observed that the conditioning increases drastically with the size of the matrices. Due to this ill-conditioning, standard routines do not obtain accurate solutions, since they can suffer from inaccurate cancellations. In contrast, the accurate algorithms developed in this paper exploit the structure of the considered matrices and obtain, as we will see, accurate numerical results.

Table 1 Condition number of collocation matrices of recursive polynomial bases (56) at \(t_{i}=i/(n+1)\) with \(b_{i}=i\) (left), \(t_{i}=1+i/(n+1)\) with \(b_{i}=(i+1)\) (center) and \(t_{i}=2+i/(n+1)\) with \(b_{i}=i+2\) (right), \(i=1,\ldots , n+1\)
Table 2 Condition number of collocation matrices of recursive polynomial bases (56) at \(t_{i}=-i/(n+1)\) with \(b_{i}=(-i)^{i-1}\) (left), \(t_{i}=-1-i/(n+1)\) with \(b_{i}=(-i-1)^{i-1}\) (center) and \(t_{i}=-2-i/(n+1)\) with \(b_{i}=(-i-2)^{i-1}\) (right), \(i=1,\ldots , n+1\)
Table 3 Condition number of collocation matrices of Hermite polynomial bases (61) at \(t_{i}=4+i/(n+1)\) (left), \(t_{i}=5+i/(n+1)\) (center) and \(t_{i}=8+i/(n+1)\) (right), \(i=1,\ldots , n+1\)

HRA computation of the singular values and eigenvalues. Given \(B=BD(A)\) to HRA, the Matlab functions TNSingularValues(B) and TNEigenValues(B), available in [16], compute the singular values and eigenvalues, respectively, of a matrix A to HRA. Their computational cost is \(O(n^3)\) (see [13]).

Algorithm 2 uses \(BD(M_{q})\) provided by Algorithm 1 to compute the eigenvalues and singular values of a collocation matrix \(M_{q}\).

Algorithm 2 Computation of the singular values and eigenvalues of \(M_{q}\)
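<br>
In Matlab notation, Algorithm 2 reduces to the following sketch, where AlgorithmOne is a placeholder name for an implementation of Algorithm 1 returning \(BD(M_{q})\):

```matlab
B  = AlgorithmOne(t, q);   % bidiagonal decomposition BD(M_q) (8)
ev = TNEigenValues(B);     % eigenvalues of M_q to HRA
sv = TNSingularValues(B);  % singular values of M_q to HRA
```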

For all considered matrices, we have compared the smallest eigenvalues and singular values obtained using Algorithm 2 with those computed by the Matlab commands eig and svd. The values provided by Mathematica using 100-digit arithmetic have been considered as the exact solution of the algebraic problem, and the relative error e of each approximation has been computed as \(e{:}{=} \vert a-{\tilde{a}} \vert / \vert a \vert \), where a denotes the smallest eigenvalue or singular value computed in Mathematica and \({\tilde{a}}\) the corresponding value computed in Matlab.

Looking at the results (see Tables 4 and 5), we notice that our approach computes the smallest eigenvalue and singular value accurately, regardless of the ill-conditioning of the matrices. In contrast, the Matlab commands eig and svd return results that lose accuracy as the dimension of the collocation matrices increases.

Table 4 Relative errors when computing the lowest eigenvalue of collocation matrices of recursive polynomial bases (56) at \(t_{i}=i/(n+1)\) with \(b_{i}=i\), \(i=1,\ldots , n+1\) (left), recursive polynomial bases (56) at \(t_{i}=-i/(n+1)\) with \(b_{i}=(-i)^{i-1}\), \(i=1,\ldots , n+1\) (center) and Hermite polynomial bases (61) at \(t_{i}=8+i/(n+1)\), \(i=1,\ldots , n+1\) (right)
Table 5 Relative errors when computing the lowest singular value of collocation matrices of recursive polynomial bases (56) at \(t_{i}=i/(n+1)\) with \(b_{i}=i\), \(i=1,\ldots , n+1\) (left), recursive polynomial bases (56) at \(t_{i}=-i/(n+1)\) with \(b_{i}=(-i)^{i-1}\), \(i=1,\ldots , n+1\) (center) and Hermite polynomial bases (61) at \(t_{i}=8+i/(n+1)\), \(i=1,\ldots , n+1\) (right)

HRA computation of the inverse matrix. Given \(B=BD(A)\) to HRA, the Matlab function TNInverseExpand(B), available in [16], returns \(A^{-1}\) to HRA, requiring \(O(n^2)\) arithmetic operations (see [20]).

Algorithm 3 uses \(BD(M_{q})\) provided by Algorithm 1 to compute the inverse of a collocation matrix \(M_{q}\).

Algorithm 3 Computation of the inverse of \(M_{q}\)
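<br>
Under the same assumptions as in the sketch of Algorithm 2, Algorithm 3 reads:

```matlab
B    = AlgorithmOne(t, q);   % bidiagonal decomposition BD(M_q) (8)
Minv = TNInverseExpand(B);   % inverse of M_q to HRA, O(n^2) operations
```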

For all considered matrices, we have compared the inverses obtained using Algorithm 3 and the Matlab command inv. To assess the accuracy of these two methods, we have compared both Matlab approximations with the inverse matrix \(A ^{-1}\) computed by Mathematica using 100-digit arithmetic, computing the corresponding relative error as \(e=\Vert A ^{-1}-{\widetilde{A}} ^{-1} \Vert /\Vert A ^{-1}\Vert \).

The achieved relative errors are shown in Table 6. We observe that our algorithm provides very accurate results in contrast to the inaccurate results obtained when using the Matlab command inv.

Table 6 Relative errors when computing the inverse of collocation matrices of recursive polynomial bases (56) at \(t_{i}=1+i/(n+1)\) with \(b_{i}=i+1\), \(i=1,\ldots , n+1\) (left), recursive polynomial bases (56) at \(t_{i}=-1-i/(n+1)\) with \(b_{i}=(-i-1)^{i-1}\), \(i=1,\ldots , n+1\) (center) and Hermite polynomial bases (61) at \(t_{i}=5+i/(n+1)\), \(i=1,\ldots , n+1\) (right)

HRA computation of the solution of the linear system \( Mc=d\). Given \(B=BD(A)\) to HRA and a vector d with alternating signs, the Matlab function TNSolve(B,d), available in [16], returns the solution of the linear system \(Ac=d\) to HRA, requiring \(O(n^2)\) arithmetic operations (see [16]).

Algorithm 4 uses \(BD(M_{q})\) provided by Algorithm 1 to compute the solution of the linear system \(M_{q}c=d\), where \({ d }=((-1)^{i+1}d_i)_{1\le i\le n+1}\) and \(d_i\), \(i=1,\ldots ,n+1\), are random nonnegative integer values.

Algorithm 4 Computation of the solution of the linear system \(M_{q}c=d\)
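<br>
A sketch of Algorithm 4 under the same assumptions, with the alternating-sign data vector built explicitly, is:

```matlab
d = (-1).^(0:n)' .* randi(100, n+1, 1);  % d = ((-1)^(i+1) d_i), d_i >= 0
B = AlgorithmOne(t, q);                  % bidiagonal decomposition BD(M_q) (8)
c = TNSolve(B, d);                       % solution of M_q c = d to HRA
```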

For all considered collocation matrices, we have compared the solution obtained using Algorithm 4 with that given by the Matlab command \(\setminus \). The solution provided by Mathematica using 100-digit arithmetic has been considered as the exact solution c. Then, we have computed the relative error of the Matlab approximation \({\tilde{c}}\) as \(e=\Vert c-{{\tilde{c}}}\Vert / \Vert c\Vert \).

In Table 7 we show the relative errors. We clearly see that, regardless of the dimension of the problem, the proposed algorithm preserves the accuracy, as opposed to the results obtained with the Matlab command \(\setminus \).

Table 7 Relative errors of the approximations to the solution of the linear systems \(M_{q}c = d\), where \({ d }=((-1)^{i+1}d_i)_{1\le i\le n+1}\) and \(d_i\), \(i=1,\ldots ,n+1\), are random nonnegative integer values, and \(M_{q}\) collocation matrices of recursive polynomial bases (56) at \(t_{i}=2+i/(n+1)\) with \(b_{i}=i+2\), \(i=1,\ldots , n+1\) (left), recursive polynomial bases (56) at \(t_{i}=-2-i/(n+1)\) with \(b_{i}=(-i-2)^{i-1}\), \(i=1,\ldots , n+1\) (center) and Hermite polynomial bases (61) at \(t_{i}=4+i/(n+1)\), \(i=1,\ldots , n+1\) (right)

7 Conclusions and Final Remarks

The field of symmetric functions brings new tools to tackle known algebraic problems related to collocation matrices. We expect that further studies in this line of research will involve representations of the permutation groups, partitions and other bases of symmetric functions, that is, all the relevant concepts that come up whenever an action of the permutation group can be defined on a given setup.

Using the proposed Schur computation of the pivots and the multipliers of the Neville elimination, the bidiagonal factorization (6) of polynomial collocation matrices can be obtained accurately. The efficiency of this procedure depends on the number of involved Schur polynomials and on the computational cost of their evaluation. For some bases, such as those in (56), the number of nonzero minors decreases significantly, resulting in more efficient computations.