Abstract
In this paper, Schur polynomials are used to provide a bidiagonal decomposition of polynomial collocation matrices. The symmetry of Schur polynomials is exploited to analyze the total positivity on some unbounded intervals of a relevant class of polynomial bases. The proposed factorization is used to achieve relative errors of the order of the unit round-off when solving algebraic problems involving the collocation matrix of relevant polynomial bases, such as the Hermite basis. The numerical experimentation illustrates the accurate results obtained when using the findings of the paper.
1 Introduction
Schur polynomials are homogeneous symmetric polynomials with integer coefficients that arise in many different contexts. They are indexed by partitions and generalize the class of elementary symmetric and complete homogeneous symmetric polynomials. In fact, the degree k Schur polynomials in j variables form a linear basis for the space of homogeneous degree k symmetric polynomials in j variables. When defined by Jacobi’s bi-alternant formula, Schur polynomials are expressed as a quotient of alternating polynomials, i.e. polynomials that change sign under any transposition of the variables.
Schur polynomials have been classically studied in Combinatorics and Algebra. They have long played a relevant role in the study of symmetric functions, in Representation theory and in Enumerative combinatorics (see [21] and the references therein). In recent years, they have also been used in Computer Science, for Quantum computation [11] and in Geometric complexity theory [12].
A relevant topic in Numerical Linear Algebra is the design and analysis of procedures to get accurate solutions of algebraic problems for totally positive matrices, that is, matrices whose minors are all nonnegative. In particular, many fundamental problems in interpolation and approximation require linear algebra computations related to totally positive collocation matrices. For example, these matrices arise when imposing Lagrange interpolation conditions on a given basis of a vector space of functions, at sequences of parameters in the domain.
Let us note that many problems related to interpolation, numerical quadrature or least squares approximation can be formulated in terms of collocation matrices of a given basis. For example, interesting problems motivated by the use of the moving least squares method applied in image analysis are addressed in [17]. On the other hand, let us recall that in the interactive design of parametric curves and surfaces, shape preserving properties are closely related to the total positivity of the collocation matrices of the considered bases.
Unfortunately, collocation matrices may become ill-conditioned as their dimensions increase and then standard routines, implementing the best traditional numerical methods, do not obtain accurate solutions when computing the eigenvalues, the singular values or the inverse matrices. For this reason, it is very interesting to achieve computations to high relative accuracy (HRA computations), whose relative errors are of the order of the machine precision. In recent years, HRA computations for collocation matrices of different polynomial bases have been achieved (see [2, 3, 5, 7, 19]).
The total positivity of a given matrix can be characterized through the sign of the pivots and multipliers of its Neville elimination. The HRA computation of these pivots and multipliers provides a bidiagonal factorization for totally positive matrices, leading to HRA algorithms for the resolution of the aforementioned algebraic problems (cf. [8,9,10]). As shown in Sect. 2, the pivots and multipliers of the Neville elimination can be expressed as quotients of minors with consecutive columns of the considered matrix. For collocation matrices of a given basis, these minors are alternating functions of the domain parameters and then can be expressed in terms of a basis of symmetric functions.
The previous observation is at the core of this paper and we shall exploit it when considering collocation matrices of polynomial bases. In this case, the preferred basis of symmetric functions is formed by Schur polynomials with which the pivots and the multipliers are naturally expressed.
HRA computations have been achieved for some polynomial collocation matrices by considering the bidiagonal factorization of Vandermonde matrices and that of a change of basis matrix between the considered and the monomial bases (see [2, 3, 19]). In contrast, the explicit expression for the bidiagonal factorization of any polynomial collocation matrix is deduced in this paper. Furthermore, the achieved formulae for the pivots and multipliers in terms of Schur polynomials, together with some known properties of these symmetric functions allow us to fully characterize the total positivity on unbounded intervals of relevant polynomial bases, and achieve HRA computations when solving algebraic problems involving their collocation matrices.
The layout of this paper is as follows. Section 2 recalls basic aspects related to total positivity, HRA and Schur polynomials. In addition, the Neville elimination procedure is also described. In Sect. 3, the pivots and multipliers of the Neville elimination of polynomial collocation matrices are explicitly expressed in terms of Schur polynomials. Section 4 focuses on polynomial bases obtained by multiplying the monomial basis by a nonsingular lower triangular matrix. A necessary and sufficient condition for the total positivity of these polynomial bases on unbounded intervals with positive or negative parameters is also obtained. Taking into account the results of this section, bidiagonal factorizations for collocation matrices of well-known polynomial bases are provided in Sect. 5. Section 6 illustrates the accurate results obtained when solving algebraic problems with collocation matrices of Hermite polynomials. To the best of the authors' knowledge, such precise calculations have not yet been achieved with Hermite matrices. Finally, some conclusions and final remarks are collected in Sect. 7.
2 Notations and Auxiliary Results
Given \(k,n\in \mathbb {N}\) with \(k\le n \), \(A=(a_{i,j})_{1\le i,j\le n }\), and increasing sequences \(\alpha =\{\alpha _1,\ldots ,\alpha _k\}\), \(\beta =\{\beta _1,\ldots ,\beta _k\}\) of positive integers less than or equal to n, \(A[\alpha |\beta ]\) denotes the \(k\times k\) submatrix of A containing rows and columns of places \(\alpha \) and \(\beta \), respectively, that is, \(A[\alpha |\beta ]{:}{=} (a_{\alpha _i,\beta _j})_{1\le i,j\le k}\).
Let \((u_0, \ldots , u_n)\) be a basis of a given space U(I) of functions defined on the real set I. The collocation matrix at a sequence \(\{t_{i}\}_{i=1}^{n+1}\subset I\) is
We say that \((u_0, \ldots , u_n)\) is totally positive or TP (respectively, strictly totally positive or STP), if for any \(t_1< \cdots < t_{n+1}\) in I, the corresponding collocation matrix is TP (respectively, STP), that is all its minors are nonnegative (respectively, positive).
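The definitions above can be illustrated numerically. The following sketch (with hypothetical helper names `collocation_matrix` and `all_minors`) builds the collocation matrix of a basis at given nodes and brute-forces all its minors; for the monomial basis at increasing positive nodes, the resulting Vandermonde matrix is STP, so every minor is positive.

```python
import itertools
import numpy as np

def collocation_matrix(basis, nodes):
    """Collocation matrix (u_{j-1}(t_i))_{1<=i,j<=n+1} of a list of
    functions `basis` at a sequence of `nodes`, as defined above."""
    return np.array([[u(t) for u in basis] for t in nodes])

def all_minors(A):
    """Brute-force generator of all minors of A (only viable for tiny A):
    determinants of all square submatrices, of every order."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for rows in itertools.combinations(range(n), k):
            for cols in itertools.combinations(range(n), k):
                yield np.linalg.det(A[np.ix_(rows, cols)])
```

For instance, with the basis \((1, t, t^2, t^3)\) at nodes \(0.5<1<2<3\), every minor of the resulting \(4\times 4\) Vandermonde matrix is positive.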
2.1 High Relative Accuracy, Total Positivity and Neville Elimination
An important topic in Numerical Linear Algebra is the design and analysis of algorithms adapted to the structure of TP matrices and allowing the resolution of related algebraic problems, achieving relative errors of the order of the unit round-off (or machine precision), that is, to high relative accuracy (HRA).
Algorithms avoiding inaccurate cancellations can be performed to HRA (see page 52 in [4]). We then say that they satisfy the non-inaccurate cancellation (NIC) condition: they only compute multiplications, divisions, and additions of numbers with the same sign. Moreover, if the floating-point arithmetic is well implemented, the subtraction of initial data can also be allowed without losing HRA (see page 53 in [4]).
Nowadays, bidiagonal factorizations are very useful to achieve accurate algorithms for performing computations with TP matrices. In fact, the parameterization of TP matrices leading to HRA algorithms is provided by their bidiagonal factorization, which is in turn very closely related to the Neville elimination (cf. [8,9,10]).
The essence of the Neville elimination is to obtain an upper triangular matrix from a given \( A=(a_{i,j})_{1\le i,j\le n+1}\), by adding to each row an appropriate multiple of the previous one. In particular, the Neville elimination of A consists of n major steps that define matrices \(A^{(1)}{:}{=}A\) and \( A^{(r)} =(a_{i,j}^{(r)})_{1\le i,j\le n+1}\), such that,
for \(r=2,\ldots , n+1\), so that \(A^{(n+1)} \) is an upper triangular matrix. In more detail, \(A^{(r+1)} \) is computed from \(A^{(r)} \) according to the following formula
The entry
for \(1\le j\le i\le n+1\), is called the (i, j) pivot of the Neville elimination of A and \(p_{i,i}\) the i-th diagonal pivot. If all the pivots of the Neville elimination are nonzero, Lemma 2.6 of [8] implies that
for \(1 < j \le i\le n\). Furthermore, the value
for \(1\le j < i\le n+1\), is called the (i, j) multiplier.
The complete Neville elimination of the matrix A consists of performing its Neville elimination to obtain the upper triangular matrix \(U{:}{=}A^{(n+1)}\) and next, the Neville elimination of the lower triangular matrix \(U^T\). If the complete Neville elimination of the matrix A can be performed with no row and column exchanges, the multipliers of the complete Neville elimination of A are the multipliers of the Neville elimination of A (respectively, of \(A^T\)) if \(i \ge j\) (respectively, \(j \ge i\)) (see [10]).
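The procedure just described can be sketched as follows; this is a minimal illustration (the function name is ours), assuming no row exchanges are needed, as holds for the nonsingular TP matrices considered later. It can be checked that, for a Vandermonde matrix at increasing nodes, the diagonal pivots reduce to \(p_{i,i}=\prod _{k<i}(t_i-t_k)\).

```python
import numpy as np

def neville_elimination(A):
    """Neville elimination of A, assumed to require no row exchanges.
    Returns (pivots, mult): pivots[i, j] is the (i, j) pivot of (4) and
    mult[i, j] the (i, j) multiplier of (5), both in 0-based indexing."""
    A = np.array(A, dtype=float)
    N = A.shape[0]
    pivots = np.zeros((N, N))
    mult = np.zeros((N, N))
    for j in range(N):
        pivots[j:, j] = A[j:, j]        # column j of A^{(j+1)} before elimination
        for i in range(N - 1, j, -1):   # subtract a multiple of the previous row, bottom-up
            mult[i, j] = A[i, j] / A[i - 1, j]
            A[i, :] -= mult[i, j] * A[i - 1, :]
    return pivots, mult
```

For example, for the Vandermonde matrix at nodes \(1<2<4\), the diagonal pivots are \(1\), \(2-1=1\) and \((4-1)(4-2)=6\).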
Neville elimination is a nice and efficient tool to analyze the total positivity of a given matrix. This fact is shown in the following characterization, which can be derived from Theorem 4.1, Corollary 5.5 of [8] and the arguments of p. 116 of [10].
Theorem 1
A given matrix A is STP (resp. nonsingular TP) if and only if its complete Neville elimination can be performed without row and column exchanges, the multipliers of the Neville elimination of A and \(A^T\) are positive (resp. nonnegative), and the diagonal pivots of the Neville elimination of A are positive.
Furthermore, a nonsingular TP matrix \(A \in {\mathbb {R}}^{(n+1)\times (n+1)}\) admits a decomposition of the form
where \(F_i\in {\mathbb {R}}^{(n+1)\times (n+1)}\) (respectively, \(G_i\in {\mathbb {R}}^{(n+1)\times (n+1)}\)) is the TP, lower (respectively, upper) triangular bidiagonal matrix given by
and \(D \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is the diagonal matrix whose diagonal elements are the diagonal pivots, \(p_{i,i}>0\), \(i=1,\ldots ,n+1\), of the Neville elimination of A in (3) (see Theorem 4.2 and the arguments of p. 116 of [10]).
The entries \(m_{i,j}\) of the matrix \(F_i\) in (7) are the multipliers of the Neville elimination of A. Furthermore, the entries \({\widetilde{m}}_{j,i}\) of the matrix \(G_i\) in (7) are the multipliers of the Neville elimination of \(A^T\).
By defining \(BD(A)=( BD(A)_{i,j})_{1\le i,j\le n+1} \), with
the decomposition (6) of a nonsingular TP matrix A can be stored (cf. [13]). If the entries of BD(A) can be computed to HRA, using the algorithms raised in [14], problems such as the computation of \(A^{-1}\), of the eigenvalues and singular values of A, as well as the resolution of linear systems of equations \(Ax=b\), for vectors b whose entries have alternating signs, can be performed to HRA. One can find the implementation of those algorithms through the link [16]. The name of the corresponding functions is TNInverseExpand (applying the algorithm proposed in [20]), TNEigenValues, TNSingularValues, and TNSolve, respectively. All these functions require the matrix BD(A) as the input argument.
2.2 Basic Properties of Schur Polynomials
Given a partition \(\lambda {:}{=}(\lambda _1,\lambda _2,\ldots ,\lambda _p)\) of size \(|\lambda |{:}{=}\lambda _1+\cdots +\lambda _p\) and length \(l(\lambda ){:}{=}p\), such that \(\lambda _1\ge \lambda _2 \ge \cdots \ge \lambda _p\ge 0\), Jacobi's definition of the corresponding Schur polynomial in \(n+1\) variables is given by Weyl's formula,
Schur polynomials labeled by empty partitions are, by convention, \(S_{(0,\dots ,0)}(t_1,\ldots ,t_{n+1}){:}{=}1\). These polynomials are commonly denoted as \(S_\emptyset (t_1,\ldots ,t_{n+1})\).
Schur polynomials are symmetric functions in their arguments. In addition, we now list other well-known properties that will be used in this article (for more details, interested readers are referred to [18]).
-
(i)
\(S_\lambda (t_1,\ldots ,t_{n+1})>0\) for positive values of \( t_i\), \(i=1,\ldots ,n+1\).
-
(ii)
\(S_\lambda (t_1,\ldots ,t_{n+1})=0\) if \(l(\lambda )>n+1\).
-
(iii)
\(S_\lambda (t_1,\ldots ,t_{n+1})\) is a homogeneous function of degree \(|\lambda |\), that is,
$$\begin{aligned} S_\lambda (\alpha \, t_1,\alpha \, t_2,\ldots ,\alpha \, t_{n+1})=\alpha ^{|\lambda |} S_\lambda (t_1,t_2,\ldots ,t_{n+1}). \end{aligned}$$
(10)
-
(iv)
As \(\lambda \) runs over all the partitions of size \(|\lambda |\), the corresponding Schur polynomials form a basis for the space of symmetric homogeneous polynomials of degree \(|\lambda |\). When considering all partitions, Schur polynomials provide a basis of the space of symmetric functions.
As an alternative to Weyl’s formula (9), Schur polynomials can also be expressed in terms of monomials as follows
where \(\mu =( \mu _1,\ldots , \mu _{n+1}) \) is a weak composition of \(|\lambda |\) and \(K_{\lambda ,\mu }\) are non-negative integers depending on \(\lambda \) and \(\mu \). The numbers \(K_{\lambda ,\mu }\) are called Kostka numbers and can be computed combinatorially by counting the semistandard Young tableaux (SSYT) of shape \(\lambda \) and weight \(\mu \). An important simple property is that the Kostka numbers \(K_{\lambda ,\mu }\) do not depend on the order of the entries of \(\mu \) (cf. [21]).
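The bialternant definition (9) can be evaluated directly as a quotient of generalized Vandermonde determinants. The sketch below is purely illustrative: this ratio of determinants suffers cancellation and is not an HRA algorithm.

```python
import numpy as np

def schur_bialternant(lam, t):
    """Evaluate the Schur polynomial S_lam(t_1,...,t_m) via Jacobi's
    bialternant formula: a generalized Vandermonde determinant divided
    by the classical one. Illustration only, not an HRA algorithm."""
    t = np.asarray(t, dtype=float)
    m = len(t)
    lam = list(lam) + [0] * (m - len(lam))     # pad the partition with zeros
    num = np.array([[ti ** (lam[j] + m - 1 - j) for j in range(m)] for ti in t])
    den = np.array([[ti ** (m - 1 - j) for j in range(m)] for ti in t])
    return np.linalg.det(num) / np.linalg.det(den)
```

For instance, \(S_{(2,1)}(t_1,t_2)=t_1t_2(t_1+t_2)\), so \(S_{(2,1)}(2,3)=30\); this is consistent with the Kostka expansion, since \(K_{(2,1),(2,1)}=1\) and \(K_{(2,1),(1,1,1)}=2\), and the \((1,1,1)\)-weight monomials vanish in two variables.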
Apart from the general properties above mentioned, there are some specific facts involving Schur polynomials that will be needed in this paper. Taking into account the way that SSYT are defined, the following basic properties can be deduced:
On the other hand, for a general partition \(\lambda \) and \(k=k_1+\cdots +k_{n+1}\), the one variable polynomial
is either 0, or its degree is \(|\lambda |-k\).
Finally, let us observe that for any rectangular partition \(\lambda {:}{=}(\ell , \ldots , \ell )\), with \(l(\lambda )=j\),
The simplicity of this Schur polynomial, which contains a single monomial term, lies in the fact that the number of rows of the corresponding partition coincides with the number of variables of the polynomial. In this case, it is easy to see that there is only one SSYT available, namely the one that satisfies (12).
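This single-monomial identity can be checked numerically. The sketch below (a self-contained illustration, with the same caveats about cancellation as before) compares the bialternant quotient for a rectangular partition against the monomial \((t_1\cdots t_j)^{\ell }\).

```python
import numpy as np

def schur_rect_check(ell, j, t):
    """For the rectangular partition (ell,...,ell) of length j, evaluated in
    exactly j variables, the Schur polynomial collapses to the single
    monomial (t_1 ... t_j)**ell; return both sides for comparison."""
    t = np.asarray(t, dtype=float)
    lam = [ell] * j
    num = np.array([[ti ** (lam[k] + j - 1 - k) for k in range(j)] for ti in t])
    den = np.array([[ti ** (j - 1 - k) for k in range(j)] for ti in t])
    bialternant = np.linalg.det(num) / np.linalg.det(den)
    return bialternant, np.prod(t) ** ell
```

For example, \(S_{(2,2)}(2,3)=(2\cdot 3)^2=36\).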
Let us also observe that Algorithm 5.2 of [6] evaluates Schur polynomials at positive parameters to HRA.
3 The Factorization of Collocation Matrices of Polynomial Bases in Terms of Schur Functions
Let \( (p_{0},\ldots ,p_{n})\) be a basis of the space \( {{\textbf{P}}}^n(I)\) of polynomials of degree not greater than n defined on I, described by
For a given sequence of parameters \( \{ t_{i}\}_{i=1}^{n+1}\) on I, the following result provides the multipliers and the diagonal pivots of the Neville elimination of the collocation matrix
in terms of Schur polynomials and minors of the change of basis matrix \(A{:}{=}(a_{i,j})_{1\le i,j\le n+1}\), such that
with \(m_i(t){:}{=}t^i\), \(i=0,\ldots ,n\).
Theorem 2
Let \((p_0,\ldots ,p_n)\) be a basis of \({{\textbf{P}}}^n(I)\) and A be the matrix satisfying (18). Given \( \{ t_{i}\}_{i=1}^{n+1}\) on I, the diagonal pivots (4) and the multipliers (5) of the Neville elimination of the matrix \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (17) are given by
with
The sums in (20) and (21) are taken over all strictly increasing sequences \(l_{1}<\dots <l_{j}\) with \( l_{r}=1,\ldots ,n+1\), \(r=1,\ldots ,j\).
Proof
Using (4), the computation of the minors of \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) with consecutive rows and initial consecutive columns will allow us to determine the corresponding pivots \(p_{i,j}\) and multipliers \(m_{i,j}\), \(1\le j\le i\le n+1\). Taking into account properties of determinants, we can write
where the sums are taken over all j-tuples \((l_1,\dots ,l_j)\) with \( l_{i}=1,\ldots ,n+1\), for \(i=1,\dots ,j\).
Let us notice that any j-tuple \((l_1,\dots ,l_j)\) with a repeated integer will not contribute to the sum, since the corresponding determinant vanishes, as can be seen in (24). For this reason, we will only consider \((l_1,\dots ,l_j)\) with distinct entries. Then, the sum (24) can be organized by choosing permutations of the entries such that \(l_1<\dots <l_{j}\).
Taking into account these considerations, we have
where \(S_j\) denotes the group of permutations of j elements. The function sgn(\(\sigma \)) is the totally antisymmetric irreducible representation of the permutation group \(S_j\), which is 1-dimensional and equals 1 if \(\sigma \) is an even permutation and \(-1\) if \(\sigma \) is an odd permutation. Recall that even (odd) permutations are those which can be written as a product of an even (odd) number of transpositions.
Now, from the definition of the Schur polynomials (see (9)), of \(Q_{i,j}\) in (22), and (25), we can write
where \(M_{n+1,t_{1},\ldots ,t_{n+1}} {:}{=} \left( t_j^{i-1} \right) _{1\le i,j \le n+1}\). Taking into account that
we derive that the pivots \(p_{i,j}\) of the Neville elimination satisfy
Consequently, for \(i=j\), identities (19) are deduced. Finally, using (5) and (26), the multipliers \(m_{i,j}\), \(1\le j<i\le n+1\), can be written as in (20).
Now, let us derive identities (21) for \({\widetilde{m}}_{ij}\). Again, using properties of determinants, the minors of \(M_{p}^T\) with initial consecutive columns and consecutive rows can be written as follows
where the previous sums are taken over all j-tuples \((l_1,\dots ,l_j)\) with \(l_{r}=1,\ldots ,n+1\), \(r=1,\ldots ,j\). So, we can write
Now, taking into account (28), the definition (9) of the Schur polynomials and the definition (23) of \( {\widetilde{Q}}_{i,j}\), we deduce
Using the following identity,
we derive
Finally, using (29) and (5), the multipliers \({\tilde{m}}_{i,j}\), \(1\le j<i\le n+1\), can be written as in (21). \(\square \)
As a consequence of Theorem 2, the decomposition (6) of any collocation matrix of a TP polynomial basis on I can be expressed in terms of Schur polynomials.
Corollary 1
Let \(I\subseteq \mathbb {R}\) and \((p_{0},\ldots ,p_{n})\) be a TP basis of \( {{\textbf{P}}}^n(I)\). For any sequence of parameters \(t_1<\cdots <t_{n+1}\) on I, the collocation matrix \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (17) admits a factorization of the form (6) such that
where \(F_i\) (respectively, \(G_i\)), \(i=1,\ldots ,n\), are the lower (respectively, upper) triangular bidiagonal matrices described in (7) and \(D=\textrm{diag}\left( p_{1,1}, p_{2,2},\ldots , p_{n+1,n+1}\right) \). The diagonal entries \(p_{i,i}\), \(1\le i\le n+1\), can be obtained from (19). The off-diagonal entries \(m_{i,j}\) and \({\widetilde{m}}_{i,j}\), \(1\le j<i\le n+1\), can be obtained from (20) and (21), respectively.
In addition, if \(I\subseteq (0,\infty )\) and the minors providing \(m_{i,j}\) and \({\widetilde{m}}_{i,j}\) are positive, then \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is STP.
Proof
Since \((p_{0},\ldots ,p_{n})\) is TP on I, the collocation matrix \(M_p \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is nonsingular and TP and, by Theorem 1, its complete Neville elimination can be performed without row and column exchanges. Moreover, \(m_{i,j}\ge 0\), \({\widetilde{m}}_{i,j}\ge 0\), \(1\le j<i\le n+1\) and, by Theorem 2, these values satisfy (20) and (21), respectively. In addition, \(p_{i,i}>0\), \(1\le i\le n+1\), and satisfy (19).
Since the Schur polynomials at positive parameters are positive, if \(I\subseteq (0,\infty )\) and the minors providing \(m_{i,j}\) and \({\widetilde{m}}_{i,j}\) are strictly positive, we can clearly guarantee \(m_{i,j}>0\), \({\widetilde{m}}_{i,j}>0\) and, by Theorem 1, \(M_{p} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is STP. \(\square \)
4 On the Total Positivity of a Relevant Class of Polynomials
Many relevant bases of the polynomial space \({{\textbf{P}}}^n(I)\) are formed by polynomials \(q_{0},\ldots ,q_{n}\) such that \(\text {deg } q_i =i\), \(i=0,\ldots ,n\), and then
The change of basis matrix \(B=(b_{i,j})_{1\le i,j\le n+1}\) such that
with \( m_{i}(t){:}{=}t^i\), \(i=0,\ldots ,n\), is nonsingular lower triangular, that is,
Taking into account the triangular structure of the matrix in (32), the following result restates Theorem 2 for bases (31), providing the pivots and the multipliers of the Neville elimination of their collocation matrices at nodes \(\{t_{i}\}_{i=1}^{n+1}\).
Theorem 3
Let \((q_0,\ldots ,q_n)\) be a basis of \({\textbf{P}}^n(I)\) such that the matrix B satisfying (32) is nonsingular lower triangular. Given \( \{ t_{i}\}_{i=1}^{n+1}\) on I, the diagonal pivots (4) and the multipliers (5) of the Neville elimination of the matrix \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (34) are given by
with
The sum in (38) is taken over all strictly increasing sequences \(l_{1}<\dots <l_{j}\) with \( l_{r}=1,\ldots ,n+1\), \(r=1,\ldots ,j\).
Proof
Since B is nonsingular lower triangular, the linear combinations \(Q_{i,j}\) in (22) contain a single term, namely, the corresponding to the sequence \(l_r=r\) for \(r=1,\ldots ,j\). This sequence corresponds to the Schur polynomial \(S_\emptyset =1\). So, \(Q_{i,j}=b_{1,1}\cdots b_{j,j}\). Then,
and the pivots and multipliers given in (19) and (20) reduce to the expressions (35) and (36), respectively. \(\square \)
The bidiagonal factorization (6), described by (35), (36) and (37), is now illustrated with a simple example. The collocation matrix of the polynomial basis \((b_{1,1},\; b_{2,1}+b_{2,2}t,\; b_{3,1}+b_{3,2}t+b_{3,3}t^2)\) of \({{\textbf{P}}}^2\) at a sequence of parameters \(\{t_{1}, t_2, t_3\} \) can be decomposed as follows
where
with
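The pivot formula of Theorem 3 can also be verified numerically. The sketch below assumes, as the proofs in the next results indicate, that the diagonal pivots (35) reduce to \(p_{i,i}=b_{i,i}\prod _{k<i}(t_i-t_k)\); the matrix B and the nodes are arbitrary illustrative choices.

```python
import numpy as np

def neville_pivots(M):
    """Diagonal pivots of the Neville elimination (no row exchanges assumed)."""
    M = np.array(M, dtype=float)
    N = M.shape[0]
    for j in range(N - 1):
        for i in range(N - 1, j, -1):
            M[i, :] -= (M[i, j] / M[i - 1, j]) * M[i - 1, :]
    return np.diag(M)

# illustrative choices: a lower triangular change of basis B and increasing nodes
B = np.array([[1.0, 0.0, 0.0],
              [2.0, 3.0, 0.0],
              [1.0, 4.0, 5.0]])
t = np.array([1.0, 2.0, 4.0])
V = np.vander(t, 3, increasing=True)       # Vandermonde matrix (t_i^{j-1})
Mq = V @ B.T                               # collocation matrix of (q_0, q_1, q_2)
pivots = neville_pivots(Mq)
expected = [B[i, i] * np.prod([t[i] - t[k] for k in range(i)]) for i in range(3)]
```

Here the pivots are \(1\), \(3(2-1)=3\) and \(5(4-1)(4-2)=30\).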
Taking into account Theorem 3, the following result provides a useful characterization of the polynomial bases (31), which are STP on intervals \((\tau ,\infty )\), \(\tau \ge 0\), in terms of the sign of the diagonal entries of the nonsingular lower triangular change of basis matrix B in (32).
Theorem 4
Let \( (q_0,\ldots ,q_n)\) be a polynomial basis such that the matrix \(B=(b_{i,j})_{1\le i,j\le n+1}\), satisfying (32), is nonsingular lower triangular. Then, there exists \(\tau \ge 0\) such that \((q_0,\ldots ,q_n)\) is STP on \((\tau ,\infty )\) if and only if \(b_{i,i}>0\), for \(i=1,\ldots ,n+1\).
Proof
First, suppose that \((q_0,\ldots ,q_n)\) is STP on \((\tau ,\infty )\) for some \(\tau \ge 0\). Then there exist nodes \(0<t_{1}<\cdots <t_{n+1}\) such that the diagonal pivots \(p_{i,i}\), \(i=1,\ldots ,n+1\), of the Neville elimination of
are positive. Taking into account that \(p_{i,i}\) satisfy identities (35), we deduce that \(b_{i,i}>0 \), \(i=1,\ldots ,n+1\).
Now, consider that \(b_{i,i}>0 \), for \(i=1,\ldots ,n+1\). Then, for any sequence \(0<t_{1}<\cdots <t_{n+1}\), the positivity of the multipliers \(m_{i,j}\), \(1\le j<i\le n+1\), of the Neville elimination of the collocation matrix \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) is deduced from (36). Moreover, taking into account that the diagonal pivots are given by (35), we also conclude that \(p_{i,i}>0\), \(i=1,\ldots ,n+1\).
The analysis of the sign of the multipliers \({\widetilde{m}}_{i,j}\) needs a closer look. As in (38), let us define
Clearly,
and then, the highest degree among the Schur polynomials in (40) is achieved in the term with maximum sum \(l_1+\dots +l_j\), as long as \(\det B{[i-j+1,\dots ,i\,|\,l_1,\dots , l_j]}\ne 0\). Since B is a nonsingular lower triangular matrix, the maximum for the sum \(l_1+ \dots +l_j\) is obtained for columns
Since
the Schur polynomial in (40) with the highest degree is
and \(\text {deg } S_{(i-j,\dots ,i-j)} =j(i-j)\). Moreover, by inspection of (40), we see that
is a linear combination of Schur polynomials \(S_{\lambda }\) with a degree lower than \(j(i-j)\), labelled by partitions \(\lambda =(\lambda _1,\dots ,\lambda _j)\) with \(\lambda _1\le i-j\). Then, by the use of (13), we deduce that
for any \(r=1,\ldots ,j\). These considerations allow us to assure that the polynomials
with \( \text {deg } P_{(i,j,k_1,\dots ,k_j)} =j(i-j)-k=j(i-j)-(k_1+\cdots +k_j) \), have positive leading coefficients. For this reason, for each \(P_{(i,j,k_1,\dots ,k_j)}\), there exists a largest root \(r_{i,j,k_1,\dots ,k_j}\) such that \(P_{(i,j,k_1,\dots ,k_j)}(t)> 0\) and \(P_{(i,j,0,\dots ,0)}(t)>0\) for \(t > r_{i,j,k_1,\dots ,k_j}\).
Now, we can define
The multivariate polynomial \({\widetilde{Q}}_{i,j}\) can be written by its Taylor expansion around \((\tau ,\ldots ,\tau )\),
where \(t_r=\tau +\delta _r\), \(r=1,\ldots ,j\), and
Given \(0 \le \tau< t_{1}<\cdots <t_{n+1}\), by (37), the corresponding multiplier \({\widetilde{m}}_{i,j}\) can be written as
and, by (46), we deduce that \({\widetilde{m}}_{i,j}>0\).
Finally, taking into account that for any sequence \(\tau<t_{1}<\cdots <t_{n+1}\), the multipliers \(m_{i,j}\), \({\widetilde{m}}_{i,j}\) and the diagonal pivots of the Neville elimination of the collocation matrix \(M_{q}\) are positive, we deduce, by Theorem 1, that \(M_{q}\) is an STP matrix and conclude that the basis \( (q_0,\ldots ,q_n)\) is STP on the interval \((\tau , \infty )\). \(\square \)
Now, using a reasoning similar to that of Theorem 4, the following result provides a necessary and sufficient condition for the strict total positivity of collocation matrices \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) corresponding to negative parameters.
Theorem 5
Let \( (q_0,\ldots ,q_n)\) be a polynomial basis such that the matrix \(B=(b_{i,j})_{1\le i,j\le n+1}\), satisfying (32), is nonsingular lower triangular. Then, there exists \(\tau \le 0\) such that the collocation matrix (34) at any decreasing sequence \(0 \ge \tau> t_{1}>\cdots >t_{n+1}\) is STP if and only if \(\text {sign} (b_{i,i}) =(-1)^{i-1} \), \(i=1,\ldots ,n+1\).
Proof
For the direct implication, consider \(0 \ge \tau> t_{1}>\cdots >t_{n+1}\) such that the collocation matrix (34) is STP. Taking into account that the diagonal pivots \(p_{i,i}\) of its Neville elimination are positive and satisfy identities (35), we deduce that \(\text {sign} (b_{i,i}) =(-1)^{i-1} \), \(i=1,\ldots ,n+1\).
Now, suppose that \(\text {sign} (b_{i,i}) =(-1)^{i-1} \), \(i=1,\ldots ,n+1\), and consider any sequence \( t_{n+1}<\cdots<t_{1}<0\). On the right hand side of (35) there are \(i-1\) negative factors \( t_i-t_k \) and so, \(\text {sign}(p_{i,i})= \text {sign}(b_{i,i})\,(-1)^{i-1}>0\). Moreover, the multipliers \(m_{i,j}\) are positive since, by (36), they can be written with j negative factors in the numerator and j negative factors in the denominator.
Now, define
where \({\widetilde{Q}}_{i,j}(t_1,\ldots ,t_j)\) is defined in (40), and
The multivariate polynomial \(R_{i,j}(t_1,\dots ,t_j)\) is defined in such a way that its leading term, \(|c_{i,j}| (-1)^{j(i-j)} t_1^{i-j}\cdots t_j^{i-j} \), is positive for \( t_{n+1}<\cdots<t_{1}<0\). Moreover, the sign of the leading term of any derivative of \(R_{i,j}\) of total order k is \((-1)^k\). Consequently, the leading terms of the polynomials defined as
with degree \( \text {deg } P_{(i,j,k_1,\dots ,k_j)} =j(i-j)-k=j(i-j)-(k_1+\cdots +k_j)\), take positive values if \(t<0\).
For this reason, there exists a smallest root \(r_{i,j,k_1,\dots ,k_j}\) such that \(P_{(i,j,k_1,\dots ,k_j)}(t)> 0\) and \(P_{(i,j,0,\dots ,0)}(t)>0\) for \(t < r_{i,j,k_1,\dots ,k_j}\).
Now, we can define
Using (50), \(R_{i,j}\) can be written by its Taylor expansion around \((\tau ,\dots , \tau )\), as follows,
where \(t_r=\tau +\delta _r\), \(r=1,\ldots ,j\).
Given \(t_{j}<\cdots<t_{1}<\tau \le 0\), we have \( \delta _r=t_r-\tau <0\), for \(r=1,\dots ,j\), and then
Finally, since
by (48), the multiplier \({\widetilde{m}}_{i,j}\) can be expressed as
Thus, by (53), \({\widetilde{m}}_{i,j}>0\) for any \( t_{n+1}<\cdots<t_{1}<\tau \le 0\). \(\square \)
Let us observe that, using Theorems 4 and 5, a simple inspection of the sign of the leading coefficients of the polynomial bases (31) provides a criterion to establish their total positivity on intervals contained in \((0,\infty )\) and \((-\infty ,0)\), respectively. It turns out that these bases may be TP on wider intervals; in fact, we give some examples of this behavior in Sects. 5.3 and 5.4. The location of such intervals requires a more detailed analysis of the basis and falls outside the scope of this paper. Nevertheless, using Theorem 3, a deeper study of the sign of the pivots and multipliers can be carried out to deduce the range where a specific polynomial basis is TP.
Finally, let us notice that under the conditions provided by Theorem 4 or by Theorem 5, any STP collocation matrix \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (34) admits a decomposition (6) such that
where \(F_i\) (respectively, \(G_i\)), \(i=1,\ldots ,n\), are the lower (respectively, upper) triangular bidiagonal matrices described in (7) and \(D=\textrm{diag}\left( p_{1,1}, p_{2,2},\ldots , p_{n+1,n+1}\right) \). The diagonal entries \(p_{i,i}\), \(1\le i\le n+1\), can be obtained from (35). The off-diagonal entries \(m_{i,j}\) and \({\widetilde{m}}_{i,j}\), \(1\le j<i\le n+1\), can be obtained from (36) and (37), respectively.
Let us recall that, by Theorem 7.2 of [6], Algorithm 5.2 of [6] computes the values of Schur functions for positive data to HRA. Moreover, by (10),
Therefore, Algorithm 5.2 of [6] can also be used to compute the values of Schur functions at negative data to HRA. In addition, let us observe that, by Theorems 4 and 5, the pivots \(p_{i,i}\) and multipliers \(m_{i,j}\) of the Neville elimination of any STP collocation matrix \(M_{q} \in {\mathbb {R}}^{(n+1)\times (n+1)}\) in (34) can be computed to HRA. Moreover, the multipliers \({\widetilde{m}}_{i,j}\) of the Neville elimination of \(M_{q}\) can be computed to HRA if the change of basis matrix B satisfying (32) is TP. If B is not TP, subtractions of Schur functions may appear. Although, strictly speaking, the values of the Schur functions cannot be considered as initial (exact) data, since they are computed to HRA they still lead to excellent accuracy, as we shall show in Sect. 6.
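The reduction of negative data to positive data via the homogeneity property (10) with \(\alpha =-1\) can be sketched as follows. The bialternant evaluation used here is for illustration only; it is not the HRA Algorithm 5.2 of [6].

```python
import numpy as np

def schur(lam, t):
    """Bialternant evaluation of a Schur polynomial (illustration only;
    this ratio of determinants is not an HRA algorithm)."""
    t = np.asarray(t, dtype=float)
    m = len(t)
    lam = list(lam) + [0] * (m - len(lam))
    num = np.array([[ti ** (lam[j] + m - 1 - j) for j in range(m)] for ti in t])
    den = np.array([[ti ** (m - 1 - j) for j in range(m)] for ti in t])
    return np.linalg.det(num) / np.linalg.det(den)

# homogeneity with alpha = -1: S_lam(-t) = (-1)^{|lam|} S_lam(t),
# so an evaluation at positive data also handles negative parameters
lam, t = (2, 1), np.array([2.0, 3.0])
lhs = schur(lam, -t)
rhs = (-1) ** sum(lam) * schur(lam, t)
```

Here \(S_{(2,1)}(-2,-3)=(-1)^{3}S_{(2,1)}(2,3)=-30\).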
5 Total Positivity and Factorizations of Well-Known Polynomial Bases
Now, we shall use the results in previous sections to analyze the total positivity of relevant polynomial bases and provide the decomposition (6) of their collocation matrices.
5.1 Recursive Polynomial Bases
Given values \(b_1,\ldots , b_{n+1}\), such that \(b_i\ne 0\), \(i=1,\ldots ,n+1\), let us consider the polynomial basis \(({\widetilde{p}}_0,\dots , {\widetilde{p}}_{n})\), such that
Clearly, the change of basis matrix such that \(({\widetilde{p}}_0,\dots , {\widetilde{p}}_{n})^T=B (m_{0},\ldots ,m_{n})^T \), with \(m_{i}{:}{=}t^i\), \(i=0,\ldots ,n\), is a nonsingular lower triangular matrix of the following form
Let us note that the only non-zero minors of B are
Taking into account (58), the decomposition (6) of the collocation matrix of \(({\widetilde{p}}_0,\dots , {\widetilde{p}}_{n})\) at \(\{ t_{i}\}_{i=1}^{n+1}\) is given by Theorem 3 where
Let us notice that, since the matrix (57) is nonsingular lower triangular, the total positivity criteria depending on the diagonal entries \(b_i\ne 0\), \(i=1,2,\ldots ,n+1\), given by Theorems 4 and 5 apply.
Section 6 will show accurate computations when solving algebraic problems with collocation matrices of recursive polynomial bases whose leading coefficients satisfy either \( b_{i}>0\) or \(\text {sign} (b_{i}) =(-1)^{i-1} \), \(i=1,2,\ldots ,n+1\).
Let us observe that the collocation matrices of the bases (56) can be considered as particular cases of the Cauchy-polynomial Vandermonde matrices defined in [23],
for the case \(l=0\).
In Theorem 1 of [23], the authors give explicit formulas in terms of Schur functions of the determinants used for obtaining the pivots and the multipliers of the Neville elimination of A. It can be checked that from these expressions with \(l=0\), one can get the formula of the pivots and multipliers obtained in this paper.
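For readers less familiar with Neville elimination, the following minimal Python sketch (exact rational arithmetic; the Vandermonde example is illustrative, and no row exchanges are needed because the matrix is STP) computes its pivots and multipliers, which are all positive exactly in the totally positive case:

```python
from fractions import Fraction

def neville_elimination(A):
    """Neville elimination of a square matrix: each entry below the
    diagonal is annihilated by subtracting a multiple of the row
    immediately above it (not of the pivot row, as in Gaussian
    elimination).  Returns the diagonal pivots and the multipliers.
    Assumes no row exchanges are needed (true for STP matrices)."""
    A = [[Fraction(x) for x in row] for row in A]
    n = len(A)
    mult = {}
    for k in range(n - 1):             # column being annihilated
        for i in range(n - 1, k, -1):  # bottom-up, adjacent rows
            m = A[i][k] / A[i - 1][k]
            mult[(i, k)] = m
            A[i] = [a - m * b for a, b in zip(A[i], A[i - 1])]
    pivots = [A[i][i] for i in range(n)]
    return pivots, mult

# collocation (Vandermonde) matrix of the monomial basis at 1, 2, 3
V = [[1, 1, 1], [1, 2, 4], [1, 3, 9]]
p, m = neville_elimination(V)
assert p == [1, 1, 2]
# all pivots and multipliers positive, as expected for an STP matrix
assert all(x > 0 for x in p) and all(x > 0 for x in m.values())
```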
5.2 Hermite Polynomials
The physicist’s Hermite basis of \({{\textbf{P}}}^n\) is the system of Hermite polynomials \((H_{0},\ldots ,H_{n})\) defined by
The change of basis matrix A between the Hermite basis (61) and the monomial basis, such that \((H_{0}, \ldots ,H_{n})^{T}=A(m_{0}, \ldots ,m_{n})^T\), is the nonsingular lower triangular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) described by
Let us observe that, from Theorem 3, we can obtain the decomposition (6) of the collocation matrices of the Hermite polynomial basis \(({H}_{0},\ldots , {H}_n)\).
The diagonal entries satisfy \(a_{i,i}= 2^{i-1} >0\), \(i=1,2, \ldots , n+1\). Therefore, by Theorem 4, there exists a lower bound \(\tau \ge 0\) such that the Hermite polynomial basis \((H_{0},\ldots ,H_{n})\) is STP on \((\tau ,\infty )\).
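The entries of A can be generated from the classical three-term recurrence \(H_{k+1}(t)=2tH_k(t)-2kH_{k-1}(t)\), with \(H_0=1\) and \(H_1(t)=2t\). A short Python sketch (the function name is ours) that also confirms the leading coefficients \(2^{i-1}\):

```python
def hermite_coeffs(n):
    """Monomial coefficient rows (constant term first) of the
    physicist's Hermite polynomials H_0, ..., H_n, generated via
    H_{k+1}(t) = 2t H_k(t) - 2k H_{k-1}(t)."""
    rows = [[1], [0, 2]]  # H_0 = 1, H_1 = 2t
    for k in range(1, n):
        new = [0] + [2 * c for c in rows[k]]  # 2t * H_k
        for j, c in enumerate(rows[k - 1]):
            new[j] -= 2 * k * c               # - 2k * H_{k-1}
        rows.append(new)
    return rows

A = hermite_coeffs(4)
assert A[2] == [-2, 0, 4]            # H_2(t) = 4t^2 - 2
assert A[3] == [0, -12, 0, 8]        # H_3(t) = 8t^3 - 12t
assert A[4] == [12, 0, -48, 0, 16]   # H_4(t) = 16t^4 - 48t^2 + 12
# diagonal entries of the change of basis matrix: a_{i,i} = 2^{i-1}
assert [r[-1] for r in A] == [1, 2, 4, 8, 16]
```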
Now, using well-known properties of the roots of Hermite polynomials, we shall deduce a lower bound for \(\tau \), which is an increasing function of the dimension of the basis.
Let us recall that the roots of the Hermite polynomials are simple and real. Then we can write \( t_{1,n}<\cdots <t_{n,n}\), where \(t_{i,n}\), \(i=1,\ldots ,n\), are the n roots of the n-degree Hermite polynomial \(H_n\). In [22], it is shown that the largest root of \(H_n\) satisfies
On the other hand, since Hermite polynomials satisfy the following property
the roots of \(H_n\) and \(H_{n-1}\) interlace, that is,
Taking into account (63) and (64), we can write
By (37) and (65), we deduce that \({\widetilde{m}}_{n+1,1}\) is negative for \(t_1\) satisfying \(\sqrt{\frac{n-2}{2}}<t_{n-1,n-1}<t_1<t_{n,n}\). Therefore,
Let us illustrate the bidiagonal decomposition (6) of collocation matrices of Hermite bases with a simple example. For \(({\tilde{H}}_{0},{\tilde{H}}_{1}, {\tilde{H}}_2)\),
where
It can be easily checked that for \(\sqrt{2}/2<t_{1}< t_{2}< t_{3}\), the collocation matrix is STP. Section 6 will show accurate computations when solving algebraic problems with collocation matrices of physicist’s Hermite polynomial bases.
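The STP claim in this \(3\times 3\) example can be verified by brute force, checking the positivity of every minor of the collocation matrix at sample nodes above the threshold \(\sqrt{2}/2\) (a naive exact check for tiny matrices, not the paper's method):

```python
from fractions import Fraction
from itertools import combinations

def minor(M, rows, cols):
    """Determinant of the submatrix with the given row/column indices."""
    if len(rows) == 1:
        return M[rows[0]][cols[0]]
    return sum((-1) ** j * M[rows[0]][cols[j]]
               * minor(M, rows[1:], cols[:j] + cols[j + 1:])
               for j in range(len(cols)))

def is_STP(M):
    """Strict total positivity: every minor of every order is positive."""
    n = len(M)
    return all(minor(M, r, c) > 0
               for k in range(1, n + 1)
               for r in combinations(range(n), k)
               for c in combinations(range(n), k))

# collocation matrix of (H_0, H_1, H_2) = (1, 2t, 4t^2 - 2)
H = lambda t: [1, 2 * t, 4 * t * t - 2]
assert is_STP([H(Fraction(t)) for t in (1, 2, 3)])      # nodes > sqrt(2)/2
assert not is_STP([H(Fraction(t)) for t in (0, 1, 2)])  # H_2(0) = -2 < 0
```

The same `is_STP` check applies verbatim to the Bessel, Laguerre and Jacobi examples of the following subsections, once their collocation matrices are assembled.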
5.3 Bessel Polynomials
The Bessel basis of \({{\textbf{P}}}^n\) is the polynomial system \((B_{0},\ldots ,B_{n})\) with
The change of basis matrix A between the Bessel basis (66) and the monomial basis, with \((B_{0}, \ldots ,B_{n})^{T}=A(m_{0}, \ldots ,m_{n})^T\), is the nonsingular lower triangular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) such that
In [3], the total positivity of A is proved and the pivots and multipliers of its Neville elimination are provided. As a consequence, accurate computations when considering collocation matrices of \((B_{0},\ldots ,B_{n})\) for \((0<) t_0<t_1<\cdots <t_n\) are derived by considering that they are the product of a Vandermonde matrix and the matrix A. The bidiagonal decomposition of Vandermonde matrices can be found in [1, 5].
Now, using Theorem 3, we can obtain the explicit bidiagonal decomposition (6) of the Bessel collocation matrices. Let us illustrate it with a simple example. For \(({\tilde{B}}_{0},{\tilde{B}}_{1}, {\tilde{B}}_2)\),
where
It can be easily checked that for \( -1<t_{1}< t_{2}< t_{3}\) with \(t_{2}>-1+ \frac{1}{3(1+t_{1})}\), the matrix is STP.
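For reference, a common normalization of the Bessel polynomials \(y_k(t)=\sum _{j=0}^{k}\frac{(k+j)!}{(k-j)!\,j!\,2^{j}}t^{j}\) (which may differ from the paper's basis by constant factors) gives integer monomial coefficients that can be generated directly; the function name below is ours:

```python
from math import factorial

def bessel_coeffs(n):
    """Monomial coefficients (constant term first) of the Bessel
    polynomials y_0, ..., y_n in a common normalization:
    y_k(t) = sum_{j=0}^{k} (k+j)! / ((k-j)! j! 2^j) t^j."""
    return [[factorial(k + j) // (factorial(k - j) * factorial(j) * 2 ** j)
             for j in range(k + 1)]
            for k in range(n + 1)]

A = bessel_coeffs(3)
assert A[1] == [1, 1]          # y_1(t) = 1 + t
assert A[2] == [1, 3, 3]       # y_2(t) = 1 + 3t + 3t^2
assert A[3] == [1, 6, 15, 15]  # y_3(t) = 1 + 6t + 15t^2 + 15t^3
```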
5.4 Laguerre Polynomials
Given \(\alpha >-1\), the generalized Laguerre basis is \((L^{(\alpha )}_ 0,\ldots , L^{(\alpha )}_ n)\) with
The change of basis matrix between the generalized Laguerre basis (67) and the monomial basis, with \((L^{(\alpha )}_ 0,\ldots , L^{(\alpha )}_ n)^{T}=A(m_{0}, \ldots ,m_{n})^T\), is the lower triangular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) such that
where, for a real x and a positive integer k, the falling factorial is
In [2], computations to HRA for algebraic problems with the collocation matrix of \((L^{(\alpha )}_ 0,\ldots , L^{(\alpha )}_ n)\) at \((0>) t_0>t_1>\cdots >t_n\) are achieved. These computations are obtained through the bidiagonal decomposition of A and the well-known bidiagonal decomposition of the Vandermonde matrices.
Now, using Theorem 3, we can obtain the explicit bidiagonal decomposition (6) of the Laguerre collocation matrices. Let us illustrate it with a simple example. For \(({L}^{0}_{0},{L}^{0}_{1},{L}^{0}_2)\),
where
It can be easily checked that, for \(t_{3}< t_{2}< t_{1}<2-\sqrt{2}\), the collocation matrix of the three-dimensional Laguerre basis is STP. This means that, for this dimension, the results obtained in [2] for the total positivity of collocation matrices on parameters in \((-\infty , 0)\) can be extended to the larger interval \((-\infty , 2-\sqrt{2})\), and even to sequences of parameters \(t_{3}< t_{2}< t_{1}\) with different signs.
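The monomial coefficients used in this example follow from the classical formula \(L^{(\alpha )}_k(t)=\sum _{j=0}^{k}(-1)^{j}\binom{k+\alpha }{k-j}\frac{t^{j}}{j!}\). A quick exact sketch for integer \(\alpha \) (the function name is ours):

```python
from fractions import Fraction
from math import comb, factorial

def laguerre_coeffs(n, alpha=0):
    """Monomial coefficients (constant term first) of the generalized
    Laguerre polynomials L_0^a, ..., L_n^a for integer alpha:
    L_k^a(t) = sum_{j=0}^{k} (-1)^j C(k+a, k-j) t^j / j!"""
    return [[Fraction((-1) ** j * comb(k + alpha, k - j), factorial(j))
             for j in range(k + 1)]
            for k in range(n + 1)]

A = laguerre_coeffs(2)
assert A[1] == [1, -1]                    # L_1(t) = 1 - t
assert A[2] == [1, -2, Fraction(1, 2)]    # L_2(t) = 1 - 2t + t^2/2
```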
5.5 Jacobi Polynomials
Given \(\alpha , \beta \in {\mathbb {R}}\), the basis of Jacobi polynomials of the space \({{\textbf{P}}}^n\) of polynomials of degree less than or equal to n is \((J_{0}^{(\alpha ,\beta )},\ldots ,J_{n}^{(\alpha ,\beta )})\) with
The change of basis matrix between the Jacobi basis (68) and the basis \((v_0,\ldots ,v_n)\) with
is the lower triangular matrix \(A=(a_{i,j})_{1\le i,j\le n+1}\) such that
In [19] the total positivity on \((1,\infty )\) of the Jacobi basis \((J_{0}^{(\alpha ,\beta )},\ldots ,J_{n}^{(\alpha ,\beta )})\) is deduced. In addition, the pivots and multipliers of the Neville elimination of the change of basis matrix A in (70) are provided. Using the bidiagonal decomposition of A, computations to HRA are achieved.
Defining \({\widetilde{J}}_{i}^{(\alpha ,\beta )}(t){:}{=}{J}_{i}^{(\alpha ,\beta )}(1+2t) \), \(i=0,\ldots ,n\), using Theorem 3, we can write the bidiagonal decomposition (6) of the Jacobi collocation matrices. For \(({\widetilde{J}}_{0}^{(1,2)},{\widetilde{J}}_{1}^{(1,2)},{\widetilde{J}}_{2}^{(1,2)} )\),
where
It can be easily checked that for \((\sqrt{2} - 3)/7<t _{1}<t _{2}<t _{3}\) the collocation matrix is STP. Therefore, the Jacobi polynomial basis \((J_{0}^{(1,2)},J_{1}^{(1,2)},J_{2}^{(1,2)} )\) is STP on \(((1+2\sqrt{2})/7,\infty )\); since \((1+2\sqrt{2})/7\approx 0.547<1\), this interval strictly contains \((1,\infty )\).
Finally, let us observe that the bidiagonal decomposition (6) of the collocation matrices of Legendre, Gegenbauer, first and second kind Chebyshev and rational Jacobi polynomials can be derived by considering Theorem 3.
6 Numerical Experiments
In order to facilitate the understanding of the numerical experiments carried out, we provide, using Theorem 3, the pseudocode of Algorithm 1 for computing the matrix form \(BD(M_{q})\) (8) storing the bidiagonal decomposition (6) of the collocation matrix \(M_{q}\) (34). Let us observe that Algorithm 1 calls the Matlab function Schurpoly, available in [15], for the computation of Schur polynomials.
To test our algorithm, we have considered STP collocation matrices \(M_{q}\) of recursive polynomial bases (56), as well as Hermite polynomial bases (61) with dimension \(n+1=5,10,15\). Moreover, using the bidiagonal decompositions \(BD(M_{q})\) given by Algorithm 1 and the Matlab functions available in the software library TNTools in [14], we have solved fundamental problems in Numerical Linear Algebra involving the considered matrices.
In addition, in order to analyze the accuracy of the obtained results, we have calculated the relative errors taking the solutions obtained in Mathematica with 100-digit arithmetic as the exact solutions of the considered algebraic problems.
We have also obtained in Mathematica the 2-norm condition number of all considered matrices. In Tables 1, 2 and 3, this conditioning is depicted. It can be easily observed that the conditioning drastically increases with the size of the matrices. Due to this ill-conditioning, standard routines do not obtain accurate solutions since they can suffer from inaccurate cancellations. In contrast, the accurate algorithms developed in this paper exploit the structure of the considered matrices, obtaining, as we will see, accurate numerical results.
HRA computation of the singular values and eigenvalues. Given \(B=BD(A)\) to HRA, the Matlab functions TNSingularValues(B) and TNEigenValues(B), available in [16], compute the singular values and eigenvalues, respectively, of the matrix A to HRA. Their computational cost is \(O(n^3)\) (see [13]).
Algorithm 2 uses \(BD(M_{q})\) provided by Algorithm 1 to compute the eigenvalues and singular values of a collocation matrix \(M_{q}\).
For all considered matrices, we have compared the smallest eigenvalues and singular values obtained using Algorithm 2 with those computed by the Matlab commands eig and svd. The values provided by Mathematica using 100-digit arithmetic have been considered as the exact solution of the algebraic problem and the relative error e of each approximation has been computed as \(e{:}{=} \vert a-{\tilde{a}} \vert / \vert a \vert \), where a denotes the smallest eigenvalue or singular value computed in Mathematica and \({\tilde{a}}\) the smallest eigenvalue or singular value computed in Matlab.
Looking at the results (see Tables 4, 5), we notice that our approach is able to accurately compute the smallest eigenvalue and singular value, regardless of the ill-conditioning of the matrices. In contrast, the Matlab commands eig and svd return results that become inaccurate as the dimension of the collocation matrices increases.
HRA computation of the inverse matrix. Given \(B=BD(A)\) to HRA, the Matlab function TNInverseExpand(B), available in [16], returns \(A^{-1}\) to HRA, requiring \(O(n^2)\) arithmetic operations (see [20]).
Algorithm 3 uses \(BD(M_{q})\) provided by Algorithm 1 to compute the inverse of a collocation matrix \(M_{q}\).
For all considered matrices, we have compared the inverses obtained using Algorithm 3 and the Matlab command inv. To assess the accuracy of these two methods, we have compared both Matlab approximations with the inverse matrix \(A ^{-1}\) computed by Mathematica using 100-digit arithmetic, using the formula \(e=\Vert A ^{-1}-{\widetilde{A}} ^{-1} \Vert /\Vert A ^{-1}\Vert \) for the corresponding relative error.
The achieved relative errors are shown in Table 6. We observe that our algorithm provides very accurate results in contrast to the inaccurate results obtained when using the Matlab command inv.
HRA computation of the solution of the linear system \( Mc=d\). Given \(B=BD(A)\) to HRA and a vector d with alternating signs, the Matlab function TNSolve(B, d), available in [16], returns the solution of the linear system \(Ac=d\) to HRA, requiring \(O(n^2)\) arithmetic operations (see [16]).
Algorithm 4 uses \(BD(M_{q})\) provided by Algorithm 1, to compute the solution of the linear system \(M_{q}c=d\), where \({ d }=((-1)^{i+1}d_i)_{1\le i\le n+1}\) and \(d_i\), \(i=1,\ldots ,n+1\), are random nonnegative integer values.
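The idea behind this \(O(n^2)\) solver can be sketched as follows: with A given as an ordered product of bidiagonal factors, each factor is inverted against the current right-hand side in \(O(n)\) operations. The Python sketch below uses illustrative factor data (not the bidiagonal decomposition of any matrix in the paper) and is not the actual TNSolve implementation:

```python
from fractions import Fraction

def tn_solve(factors, d):
    """Solve A c = d, where A is an ordered product of bidiagonal
    factors; each factor is inverted against the right-hand side in
    O(n), so the whole solve costs O(n^2)."""
    c = [Fraction(x) for x in d]
    n = len(c)
    for kind, v in factors:
        if kind == 'L':        # unit lower bidiagonal, subdiagonal v
            for i in range(1, n):
                c[i] -= v[i - 1] * c[i - 1]
        elif kind == 'D':      # diagonal of pivots v
            c = [ci / vi for ci, vi in zip(c, v)]
        else:                  # 'U': unit upper bidiagonal, superdiagonal v
            for i in range(n - 2, -1, -1):
                c[i] -= v[i] * c[i + 1]
    return c

def dense(kind, v, n):
    """Assemble a bidiagonal factor as a dense matrix (for checking)."""
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for i, x in enumerate(v):
        if kind == 'L':
            M[i + 1][i] = Fraction(x)
        elif kind == 'D':
            M[i][i] = Fraction(x)
        else:
            M[i][i + 1] = Fraction(x)
    return M

n = 3
factors = [('L', [1, 1]), ('D', [1, 1, 2]), ('U', [1, 2])]
A = dense(*factors[0], n)          # assemble A as the product of factors
for kind, v in factors[1:]:
    B = dense(kind, v, n)
    A = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
d = [1, -1, 1]                     # alternating signs, as in the text
c = tn_solve(factors, d)
assert [sum(A[i][j] * c[j] for j in range(n)) for i in range(n)] == d
```

Because each bidiagonal solve involves a single subtraction per entry, alternating signs in d ensure that no subtractive cancellation occurs when the multipliers and pivots are nonnegative, which is the structural reason behind the HRA guarantee.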
For all considered collocation matrices, we have compared the solution obtained using Algorithm 4 and the Matlab command \(\setminus \). The solution provided by Mathematica using 100-digit arithmetic has been considered as the exact solution c. Then, we have computed in Mathematica the relative error of the approximation \({\tilde{c}}\) computed with Matlab, using the formula \(e=\Vert c-{{\tilde{c}}}\Vert / \Vert c\Vert \).
In Table 7 we show the relative errors. We clearly see that, regardless of the dimension of the problem, the proposed algorithm preserves the accuracy, as opposed to the results obtained with the Matlab command \(\setminus \).
7 Conclusions and Final Remarks
The field of symmetric functions brings new tools to tackle known algebraic problems related to collocation matrices. We expect that further studies in this line of research will involve representations of the permutation group, partitions and other bases of symmetric functions, that is, the relevant concepts which arise whenever an action of the permutation group can be defined on a given setting.
Using the proposed Schur computation of the pivots and the multipliers of the Neville elimination, the bidiagonal factorization (6) of polynomial collocation matrices can be obtained accurately. The efficiency of this procedure depends on the number of the involved Schur polynomials and the computational cost of their evaluation. For some bases, such as those in (56), the number of nonzero minors decreases significantly, resulting in more efficient computations.
Data Availability
Enquiries about data availability should be directed to the authors.
References
Björck, Å., Pereyra, V.: Solution of Vandermonde systems of equations. Math. Comput. 24, 893–903 (1970)
Delgado, J., Orera, H., Peña, J.M.: Accurate computations with Laguerre matrices. Numer. Linear Algebra Appl. 26, 2217 (2019)
Delgado, J., Orera, H., Peña, J.M.: Accurate algorithms for Bessel matrices. J. Sci. Comput. 80, 1264–1278 (2019)
Demmel, J., Gu, M., Eisenstat, S., Slapnicar, I., Veselic, K., Drmac, Z.: Computing the singular value decomposition with high relative accuracy. Linear Algebra Appl. 299, 21–80 (1999)
Demmel, J., Koev, P.: The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 27, 42–52 (2005)
Demmel, J., Koev, P.: Accurate and efficient evaluation of Schur and Jack functions. Math. Comput. 75(253), 223–239 (2006)
Demmel, J., Koev, P.: Accurate SVDs of polynomial Vandermonde matrices involving orthonormal polynomials. Linear Algebra Appl. 417, 382–396 (2006)
Gasca, M., Peña, J.M.: Total positivity and Neville elimination. Linear Algebra Appl. 165, 25–44 (1992)
Gasca, M., Peña, J.M.: A matricial description of Neville elimination with applications to total positivity. Linear Algebra Appl. 202, 33–53 (1994)
Gasca, M., Peña, J.M.: On factorizations of totally positive matrices. In: Gasca, M., Micchelli, C.A. (eds.) Total Positivity and Its Applications, pp. 109–130. Kluwer Academic Publishers, Dordrecht (1996)
Hallgren, S., Russell, A., Ta-Shma, A.: Normal subgroup reconstruction and quantum computation using group representations. In: Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, pp. 627–635 (2000)
Ikenmeyer, C., Panova, G.: Rectangular Kronecker coefficients and Plethysms in geometric complexity theory. Adv. Math. 319, 40–66 (2017)
Koev, P.: Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 27, 1–23 (2005)
Koev, P.: Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 29, 731–751 (2007)
Koev, P.: https://math.mit.edu/~plamen/software/schurpoly.m. Accessed 16 May 2022
Koev, P.: https://math.mit.edu/~plamen/software/TNTools.m. Accessed 26 June 2022
Lee, Y.J., Micchelli, C.A.: On collocation matrices for interpolation and approximation. J. Approx. Theory 174, 148–181 (2013)
Macdonald, I.G.: Symmetric Functions and Hall Polynomials, 2nd edn. Oxford University Press, Oxford (1998)
Mainar, E., Peña, J.M., Rubio, B.: Accurate computations with collocation and Wronskian matrices of Jacobi polynomials. J. Sci. Comput. 87, 77–107 (2021)
Marco, A., Martinez, J.J.: Accurate computation of the Moore–Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 350, 299–308 (2019)
Stanley, R.: Enumerative Combinatorics, vol. 2. Cambridge University Press, Cambridge (1999)
Szegő, G.: Orthogonal Polynomials, vol. 23. American Mathematical Society, Providence (1939)
Yang, Z., Huang, R., Zhu, W.: Accurate computations for eigenvalues of products of Cauchy-polynomial-Vandermonde matrices. Numer. Algorithms 85, 329–351 (2020)
Acknowledgements
We thank the anonymous referees for their helpful comments and suggestions, which have improved this paper.
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work was partially supported by Spanish research Grants PGC2018-096321-B-I00 (MCIU/AEI) and RED2022-134176-T (MCI/AEI) and by Gobierno de Aragón (E41_23R).
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Code Availability
The Matlab codes employed to run the numerical experiments are available upon request.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Díaz, P., Mainar, E. & Rubio, B. Polynomial Total Positivity and High Relative Accuracy Through Schur Polynomials. J Sci Comput 97, 10 (2023). https://doi.org/10.1007/s10915-023-02323-1