1 Introduction

The resolution of interpolation or approximation problems in a vector space of functions usually requires linear algebra computations with collocation or Wronskian matrices of a given basis of the space. For example, these matrices appear when imposing Lagrange or Taylor interpolation conditions on the considered basis functions. Unfortunately, as the dimension increases, these matrices may become very ill-conditioned, and so standard routines implementing the best traditional numerical methods cannot obtain accurate solutions for the considered problems.

Taking into account the previous considerations, an important topic in numerical linear algebra is the achievement of computations to high relative accuracy (HRA computations), whose relative errors are of the order of the machine precision. In recent years, HRA computations with totally positive collocation and Wronskian matrices of different polynomial bases have been achieved (see [7, 8, 12, 13, 24,25,26]).

Vandermonde matrices have relevant applications in interpolation and numerical quadrature (see [15, 30]). These matrices are known to be totally positive at increasing sequences of positive parameters, and the HRA resolution of related algebraic problems has been achieved by considering a bidiagonal factorization of them (see [4, 12] and references therein). Let us observe that Vandermonde matrices can be considered the change of basis matrices between monomial and Lagrange polynomial bases.

The polynomial basis used in the Newton interpolation formula is called the Newton basis. In this paper, for a given sequence of nodes, not necessarily distinct, we shall factorize the change of basis matrices between the monomial and the Newton bases of the same dimension. These matrices have a triangular structure, and their total positivity will be fully characterized in terms of the sign of the considered nodes. Also, HRA calculations with the change of basis matrices will be achieved as long as all nodes have the same sign even though the matrices are not totally positive. Among other applications, the proposed factorization will be used to obtain HRA computations with Wronskian matrices of Newton bases.

Furthermore, this paper shows that second-kind Stirling numbers can be considered divided differences of monomial polynomials at sets of nodes formed by the first consecutive nonnegative integers. Then, the change of basis matrices between the corresponding Newton and monomial bases are Stirling matrices, that is, triangular matrices whose entries are given in terms of Stirling numbers. On the other hand, these matrices allow us to define the Touchard polynomial bases.

Touchard polynomials are also called exponential polynomials and generalize the Bell polynomials for the enumeration of permutations whose cycles possess certain properties. Algebraic, combinatorial, and probabilistic properties of these polynomials are described in [6, 29, 31, 33]. In this paper, the total positivity of Touchard polynomial bases is proved, and a procedure to achieve HRA computations with their collocation and Wronskian matrices is provided.

This paper is organized as follows. Section 2 recalls basic aspects related to total positivity and HRA. The Neville elimination procedure, used to derive the parameterization of totally positive matrices leading to HRA algorithms, is also described. In addition, conditions that guarantee HRA computations for non-totally positive matrices are provided. Section 3 focuses on the change of basis matrices relating monomial and Newton polynomial bases and shows some applications of these matrices. Their total positivity is characterized in terms of the sign of the considered nodes, and the bidiagonal decompositions providing HRA computations are also obtained. These findings are applied to achieve accurate computations with Wronskian matrices of Newton bases in Section 4 and with Stirling matrices in Section 5. The total positivity of Touchard polynomial bases is proved in Section 6. Moreover, Touchard Wronskian matrices are proved to be totally positive for positive parameters, guaranteeing the HRA resolution of related algebraic problems. Finally, Section 7 shows the accurate computations obtained when solving relevant algebraic problems with collocation and Wronskian matrices of Touchard polynomials.

2 Notations, basic concepts, and auxiliary results

Given a basis \((b_0, \ldots , b_n)\) of a space U(I) of functions defined on \(I\subseteq \mathbb {R}\) and a sequence of values \(t_0, \ldots , t_{n}\) on I, the corresponding collocation matrix is

$$\begin{aligned} M \left[ \begin{array}{c} b_0, \ldots , b_n \\ t_0,\ldots , t_{n} \end{array} \right] := \big ( b_{j-1 }(t_{i-1}) \big )_{1\le i,j\le n+1}. \end{aligned}$$
(1)

If the functions are n times continuously differentiable at \(t\in I\), we can define the Wronskian matrix at t as follows:

$$\begin{aligned} W(b_0, \ldots , b_n)(t) :=(b_{j-1}^{(i-1)}(t))_{i,j=1, \ldots , n+1}, \end{aligned}$$

where the i-th derivative of b at the value t is denoted by \(b^{(i)}(t)\).

A matrix is said to be totally positive or TP if all its minors are nonnegative, and strictly totally positive or STP if all its minors are positive (see [1]). In the literature, TP and STP matrices are also called totally nonnegative and totally positive matrices, respectively (see [14, 20]). Interesting applications of TP and STP matrices can be found in [1, 14, 32]. Let us recall that, by Theorem 3.1 of [1], the product of TP matrices is again a TP matrix.

An important topic in numerical linear algebra is the design and analysis of algorithms adapted to the structure of TP matrices, allowing the resolution of related algebraic problems with relative errors of the order of the machine precision, that is, algorithms to high relative accuracy (HRA).

A real value \(y\ne 0\) is said to be computed to HRA whenever the computed value \( \widetilde{y}\) satisfies

$$\begin{aligned} \frac{ \vert y-\widetilde{y} \vert }{ \vert y \vert } < k u, \end{aligned}$$

where u is the unit round-off (or machine precision) and \(k>0\) is a constant independent of the arithmetic precision. Algorithms that avoid inaccurate cancellations can be performed to HRA (see page 52 in [11]). We then say that they satisfy the non-inaccurate cancellation condition, namely the NIC condition: they only compute multiplications, divisions, and additions of numbers with the same sign. Moreover, if the floating-point arithmetic is well implemented, the subtraction of initial data can also be allowed without losing HRA (see page 53 in [11]).

Nowadays, bidiagonal factorizations are very useful for deriving accurate algorithms for computations with TP matrices. In fact, the parameterization of TP matrices leading to HRA algorithms is provided by their bidiagonal factorization, which is in turn very closely related to Neville elimination, NE hereafter (cf. [16,17,18]).

The essence of NE is to obtain, from a given matrix \( A=(a_{i,j})_{1\le i,j\le n+1}\), an upper triangular matrix by adding to each row a multiple of the previous one. In particular, the NE of A consists of n major steps defining matrices \(A^{(1)}:=A\) and \( A^{(r)} =(a_{i,j}^{(r)})_{1\le i,j\le n+1}\) such that

$$\begin{aligned} a_{i,j}^{(r)}=0, \quad 1\le j\le r-1, \quad j< i\le n+1, \end{aligned}$$
(2)

\(r=2,\ldots , n+1\), so that \(U:=A^{(n+1)} \) is upper triangular. In more detail, \(A^{(r+1)} \) is computed from \(A^{(r)} \) as follows

$$\begin{aligned} a_{i,j}^{(r+1)}:=\left\{ \begin{array}{ll} a^{(r)}_{i,j}, & \text{if } 1\le i \le r, \\[4pt] a^{(r)}_{i,j}- \dfrac{a_{i,r}^{(r)}}{ a_{i-1,r}^{(r)} }\, a^{(r)}_{i-1,j}, & \text{if } r+1\le i,j\le n+1 \text{ and } a_{i-1,r}^{(r)}\ne 0,\\[8pt] a^{(r)}_{i,j}, & \text{if } r+1\le i\le n+1 \text{ and } a_{i-1,r}^{(r)}= 0. \end{array}\right. \end{aligned}$$
(3)

The entry

$$\begin{aligned} p_{i,j} := a_{i,j}^{(j)}, \quad 1\le j\le i\le n+1, \end{aligned}$$
(4)

is the (i, j) pivot and \(p_{i,i}\) is called the i-th diagonal pivot of the NE of the matrix A. Furthermore, the value

$$\begin{aligned} m_{i,j}:=\left\{ \begin{array}{ll} a_{i,j}^{(j)} / a_{i-1,j}^{(j)}={ p_{i,j} }/{ p_{i-1,j}}, & \text{if } a_{i-1,j}^{(j)} \ne 0,\\[4pt] 0, & \text{if } a_{i-1,j}^{(j)} = 0, \end{array}\right. \end{aligned}$$
(5)

for \(1\le j < i\le n+1\), is called the (i, j) multiplier of the NE of A. The complete Neville elimination (CNE) of a matrix A can be performed whenever no row swaps are needed in the NE of the matrices A and \(U^T\). In this case, the multipliers of the CNE of A are the multipliers of the NE of A if \(i \ge j\) and the multipliers of the NE of \(A^T\) if \(j \ge i\) (see [18]).
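For illustration, the following Matlab-style sketch (the function name NevilleBD is ours) carries out the NE of A and of \(A^T\) as in (3), collecting the diagonal pivots (4) and the multipliers (5); the output arranges them as in the matrix BD(A) introduced in (9) below. It assumes that the CNE of A can be performed with no row or column swaps.

function B = NevilleBD(A)
% Sketch: Neville elimination of A and A', storing the diagonal
% pivots and the multipliers as in the matrix BD(A) of (9).
n1 = size(A,1);
B = zeros(n1);
for pass = 1:2                        % pass 1: NE of A; pass 2: NE of A'
    M = A;
    if pass == 2, M = A'; end
    for r = 1:n1-1
        for i = n1:-1:r+1             % bottom-up, as in (3)
            m = 0;
            if M(i-1,r) ~= 0
                m = M(i,r)/M(i-1,r);  % multiplier (5)
                M(i,:) = M(i,:) - m*M(i-1,:);
            end
            if pass == 1, B(i,r) = m; else, B(r,i) = m; end
        end
    end
    if pass == 1, B(1:n1+1:end) = diag(M); end  % diagonal pivots (4)
end
end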

The total positivity of a matrix cannot be immediately deduced from its definition. However, the following result, derived from Corollary 5.5 of [16] and the reasoning on p. 116 of [18], shows that NE characterizes the classes of STP and nonsingular TP matrices.

Theorem 1

A given matrix A is STP (respectively, nonsingular TP) if and only if its CNE can be performed with no row and column swaps, the diagonal pivots of the NE of A are positive and the multipliers of the NE of A and \(A^T\) are positive (respectively, nonnegative).

In fact, the total positivity of \(A\in \mathbb R^{(n+1)\times (n+1)}\) can be studied by analyzing a bidiagonal factorization of the matrix. In this sense, by Theorem 4.2 and the arguments on p. 116 of [18], a nonsingular TP matrix A admits a factorization of the form

$$\begin{aligned} A=F_nF_{n-1}\cdots F_1 D G_1 G_2\cdots G_n, \end{aligned}$$
(6)

where \(F_i\in \mathbb R^{(n+1)\times (n+1)}\) (respectively, \(G_i\in \mathbb R^{(n+1)\times (n+1)}\)) is the TP, lower (respectively, upper) triangular bidiagonal matrix given by

$$\begin{aligned} F_i=\left( \begin{array}{cccccc} 1 \\ & \ddots \\ & & 1 \\ & & m_{i+1,1} & 1 \\ & & & \ddots & \ddots \\ & & & & m_{n+1,n+1-i} & 1 \end{array} \right) ,\quad G_i^T=\left( \begin{array}{cccccc} 1 \\ & \ddots \\ & & 1 \\ & & \widetilde{m}_{i+1,1} & 1 \\ & & & \ddots & \ddots \\ & & & & \widetilde{m}_{n+1,n+1-i} & 1 \end{array} \right) , \end{aligned}$$
(7)

and \(D=\textrm{diag}\left( p_{1,1},\ldots , p_{n+1,n+1}\right) \) has positive diagonal entries. The diagonal elements of D are the positive diagonal pivots of the NE of the matrix A, and the entries \(m_{i,j}\) and \(\widetilde{m}_{i,j}\) are the multipliers of the NE of the matrices A and \(A^T\), respectively. Under certain conditions, the factorization (6) is unique, and in [2], more general classes of matrices satisfying the bidiagonal factorization were obtained.

Remark 1

The NE of a nonsingular and TP matrix A also provides a bidiagonal factorization of the matrix \(A^{-1}\). In fact, by considering (6), a bidiagonal decomposition for \(A^{-1}\) can be computed as follows:

$$\begin{aligned} A^{-1}= \widehat{G}_1 \widehat{G}_{2} \cdots \widehat{G}_n D^{-1}\widehat{F}_n \widehat{F}_{n-1}\cdots \widehat{F}_1, \end{aligned}$$
(8)

where \(\widehat{F}_i\) (respectively, \(\widehat{G}_i\)) is the lower (respectively, upper) triangular bidiagonal matrix with the form described by (7), which is obtained by replacing the off-diagonal entries \(\{m_{i+1,1},\ldots , m_{n+1,n+1-i}\} \) and \(\{\widetilde{m}_{i+1,1},\ldots , \widetilde{m}_{n+1,n+1-i}\} \) by the values \(\{-m_{i+1,i},\ldots ,-m_{n+1,i} \}\) and \(\{-\widetilde{m}_{i+1,i},\ldots ,- \widetilde{m}_{n+1,i}\}\), respectively (see Theorem 2.2 of [27]), that is

$$\begin{aligned} \widehat{F}_i=\left( \begin{array}{cccccc} 1 \\ & \ddots \\ & & 1 \\ & & -m_{i+1,i} & 1 \\ & & & \ddots & \ddots \\ & & & & -m_{n+1,i} & 1 \end{array} \right) ,\quad \widehat{G}_i^T=\left( \begin{array}{cccccc} 1 \\ & \ddots \\ & & 1 \\ & & -\widetilde{m}_{i+1,i} & 1 \\ & & & \ddots & \ddots \\ & & & & -\widetilde{m}_{n+1,i} & 1 \end{array} \right) . \end{aligned}$$

In the sequel, we shall use the matrix notation introduced in [19], allowing us to store the bidiagonal factorization (6) of A, as well as the bidiagonal factorization (8) of \(A^{-1}\), by means of a matrix \(BD(A)=(BD(A)_{i,j})_{1\le i,j\le n+1}\), whose diagonal entries are the diagonal pivots of the NE of A and the entries above and below its diagonal are the multipliers of the NE of \(A^T\) and A, that is,

$$\begin{aligned} BD(A)_{i,j}:=\left\{ \begin{array}{ll} m_{i,j}, & \text{if } i>j, \\ p_{i,i}, & \text{if } i=j, \\ \widetilde{m}_{j,i}, & \text{if } i<j. \end{array}\right. \end{aligned}$$
(9)

If the pivots and multipliers, and so BD(A), are given to HRA, then the Matlab functions TNEigenValues, TNSingularValues, TNInverseExpand, and TNSolve, available in the software library TNTools [21], take BD(A) as input argument and compute to HRA the eigenvalues and singular values of A, the inverse matrix \(A^{-1}\) (using the algorithm presented in [27]), and even the solution of linear systems \(Ax=b\) for vectors b with alternating signs.
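For instance, assuming \(B = BD(A)\) has been computed to HRA, these routines would be invoked along the following lines (a sketch; the calling conventions follow [21]):

lambda = TNEigenValues(B);      % eigenvalues of A to HRA
sigma  = TNSingularValues(B);   % singular values of A to HRA
Ainv   = TNInverseExpand(B);    % inverse of A to HRA, via [27]
x      = TNSolve(B, b);         % solution of A*x = b to HRA,
                                % for b with alternating signs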

The following result provides conditions that guarantee that a given matrix is the inverse of a TP matrix. Under these conditions, the solution of algebraic problems for non-TP matrices can also be obtained to HRA.

Theorem 2

Let \(A\in \mathbb {R}^{(n+1)\times (n+1)} \) and \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\). If the CNE of the matrix A can be performed with no row and column swaps, the diagonal pivots of the NE of A are positive and the multipliers of the NE of A and \(A^T\) are nonpositive, then A is the inverse of a TP matrix and the matrix \(A_J:=JAJ\) is nonsingular TP.

Moreover, if the computation of the mentioned diagonal pivots and multipliers satisfies the NIC condition, the eigenvalues and singular values of A, its inverse matrix \(A^{-1}\), as well as the solution of \(A{ x}= { b}\), where the entries of the vector \({ b} = (b_1, \ldots , b_{n+1})^T\) have the same sign, can be obtained to HRA.

Proof

Under the considered hypotheses,

$$\begin{aligned} A=F_nF_{n-1}\cdots F_1 D G_1G_2 \cdots G_n, \end{aligned}$$
(10)

where the matrices \(F_i\) (respectively, \(G_i\)), \(i=1,\ldots ,n\), are lower (respectively, upper) triangular bidiagonal, and have the structure described by (7). The diagonal elements of D are positive and the off-diagonal entries \(m_{i,j}\) and \(\widetilde{m}_{i,j}\) of the bidiagonal factors are nonpositive.

Let us notice that \(A^{-1}\) can be factorized as in (8) and the bidiagonal matrices \(\widehat{F}_i\) and \(\widehat{G}_i\) are TP since \(-m_{i,j}\ge 0\) and \(-\widetilde{m}_{i,j}\ge 0\). Consequently, we can deduce that \(A^{-1}\) is a TP matrix because it is the product of TP matrices. So, using Theorem 3.3 of [1], we can derive that \(A_J\) is TP. In fact,

$$\begin{aligned} BD(A_J)_{i,j}:=\left\{ \begin{array}{ll} -m_{i,j}, & \text{if } i>j, \\ p_{i,i}, & \text{if } i=j, \\ -\widetilde{m}_{j,i}, & \text{if } i<j, \end{array}\right. \end{aligned}$$
(11)

has nonnegative entries.

If the computation of \(p_{i,i}\), \(m_{i,j}\), and \(\widetilde{m}_{i,j}\) satisfies the NIC condition, (11) can be computed to HRA. This fact guarantees that the eigenvalues and singular values of \(A_J\), the inverse matrix \(A_J^{-1}\), and the solution of \(A_J { x}= { d}\), where the entries of the vector \({ d} = (d_1, \ldots , d_{n+1})^T\) have alternating signs, can also be obtained to HRA (see Section 3 of [12]).

Since J is a unitary matrix, we deduce that the eigenvalues and singular values of A coincide with those of \(A_J\), and therefore, using (11), their computation to HRA can be also achieved.

For the accurate computation of the inverse matrix \(A^{-1}\), we consider that \(A_J^{-1} \) can be computed to HRA. Since \( A^{-1}=JA_J^{-1}J\), by means of an appropriate change of sign of the elements of \(A_J^{-1}\), we can also compute the matrix \(A^{-1}\) to HRA. Finally, for the linear system \(A{ x}= { d}\), since the entries of J d have alternating signs, we can compute to HRA the solution \({ y}\in \mathbb R^{n+1}\) of the system \(A_J { y}= J { d} \), and then \({ x}=J{y}\).

\(\square \)

3 Matrix conversion between Newton and monomial bases

The Lagrange formula of the polynomial interpolant p of a function f at nodes \(t_0,\ldots , t_n\), with \(t_i\ne t_j\) for \(i\ne j\), is obtained when the interpolant p is expressed in terms of the Lagrange basis \((\ell _0,\ldots , \ell _n)\) of the space \({\textbf{P}}^n\) of polynomials of degree not greater than n,

$$\begin{aligned} p(t)=\sum _{i=0}^n f(t_i) \ell _i(t),\qquad \ell _i(t):=\prod _{j\ne i} \frac{t-t_j}{t_i-t_j},\quad i=0,\ldots ,n. \end{aligned}$$
(12)

Denote \(f_i:=f(t_i)\) for \(i=0,\ldots ,n\). The polynomial p can also be written in terms of the monomial basis \((m_0,\ldots , m_n)\) of \({\textbf{P}}^n\),

$$\begin{aligned} p(t)=\sum _{i=0}^n c_i m_i(t),\qquad m_i(t):= t^i,\quad i=0,\ldots ,n, \end{aligned}$$
(13)

and, in this representation, the coefficients are the solution of the linear system

$$\begin{aligned} M \left[ \begin{array}{c} m_0, \ldots , m_n \\ t_0,\ldots , t_{n} \end{array} \right] { c}={ f}, \end{aligned}$$

where \({ c}=(c_0, \ldots , c_n)^T \) and \({ f}=(f_0, \ldots , f_n)^T \). Then, we can write

$$\begin{aligned} {c}= V^{-1} { f}, \end{aligned}$$
(14)

where \(V:= M \left[ \begin{array}{c} m_0, \ldots , m_n \\ t_0,\ldots , t_{n} \end{array} \right] \) is the Vandermonde matrix at the nodes \(t_0,\ldots , t_n\). Taking into account (12), (13) and (14), we have

$$\begin{aligned} p(t)= (\ell _0(t),\ldots , \ell _n(t)) { f}= (m_0(t),\ldots , m_n(t)) V^{-1} { f} \end{aligned}$$

and deduce that

$$\begin{aligned} (m_0,\ldots , m_n) = (\ell _0,\ldots , \ell _n) V. \end{aligned}$$
(15)

This means that the Vandermonde matrix V is the change of basis matrix between the monomial and the Lagrange basis of the polynomial space \({\textbf{P}}^n\).

This section is devoted to achieving HRA computations in algebraic problems related to the change of basis matrix between the monomial basis and the Newton basis corresponding to nodes that are not necessarily distinct.

Given nodes \(t_0\le \cdots \le t_n\), the Newton form of the polynomial interpolant p of a function f at \(t_0,\ldots , t_n\) is obtained when p is written as follows:

$$\begin{aligned} p(t)=\sum _{i=0}^{n}[t_0,\ldots ,t_i ]f w_i(t), \end{aligned}$$
(16)

where \([t_0,\ldots ,t_i ]f\) denotes the divided difference of f at nodes \(t_0,\ldots ,t_i\) and

$$\begin{aligned} w_0(t):=1,\quad w_i(t):=(t-t_0)\cdots (t-t_{i-1}),\quad i=1,\ldots ,n. \end{aligned}$$
(17)

The polynomial basis \((w_0,\ldots , w_n)\) is the Newton basis of the space \({\textbf{P}}^n\) determined by the nodes \(t_0,\ldots , t_{n-1}\). The Newton polynomial (16) is sometimes called Newton’s divided differences interpolation polynomial because its coefficients can be obtained using Newton’s divided differences method.

Let us recall that, if f is n times continuously differentiable on \([t_0,t_n]\), the divided differences \([t_0,\ldots ,t_i ]f \), \(i=0,\ldots ,n\), can be obtained using the following recursion

$$\begin{aligned}{}[t_i, \ldots , t_{i+k}]f =\left\{ \begin{array}{ll} \dfrac{ [t_{i+1}, \ldots , t_{i+k}]f- [t_i,\ldots , t_{i+k-1}]f}{t_{i+k}-t_i}, & \text{if } t_{i+k}\ne t_i,\\[8pt] \dfrac{ f^{(k)}(t_i)}{ k! }, & \text{if } t_{i+k}= t_i. \end{array}\right. \end{aligned}$$

Moreover, given two functions f and g defined on an interval containing the nodes \(t_0,\ldots ,t_n\), the following Leibniz-type formula for divided differences is satisfied

$$\begin{aligned}{}[t_0, \ldots , t_{n}](fg)= \sum _{k=0}^n [t_0, \ldots , t_{k}] f [t_k, \ldots , t_{n}] g. \end{aligned}$$
(18)

This formula has played a relevant role in deriving recurrence relations for B-spline functions (cf. [3]).

Since \(m_i(t)=t^i\), \(i=0,\ldots ,n\), coincides with its interpolant at \(t_0,\ldots , t_n\), taking into account the Newton formula (16) for \(m_i\), \(i=0,\ldots ,n\), we deduce that the change of basis matrix U, satisfying

$$\begin{aligned} (m_0,\ldots ,m_n)=(w_0,\ldots ,w_n)U, \end{aligned}$$
(19)

is upper triangular. In addition, \(U=(u_{i,j})_{1\le i,j \le n+1}\) with \( u_{i,j}= [t_0,\ldots , t_{i-1}]m_{j-1}\), that is,

$$\begin{aligned} U=\left( \begin{array}{ccccc} 1 & [t_0]m_1 & [t_0]m_2 & \cdots & [t_0]m_n \\ 0 & 1 & [t_0,t_1]m_2 & \cdots & [t_0,t_1]m_n \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & [t_0,\ldots ,t_{n-1}]m_n \\ 0 & 0 & \cdots & 0 & 1 \end{array} \right) . \end{aligned}$$
(20)

Let us notice that, by induction, it can be checked that

$$\begin{aligned}{}[t_0,\ldots , t_{i}]m_{j}= \sum _{\alpha _0+\cdots +\alpha _{i}=j-i} t_0^{\alpha _0} \cdots t_{i}^{\alpha _{i}},\quad j>i. \end{aligned}$$

On the other hand, the collocation matrix of the Newton basis \((w_0,\ldots ,w_n)\) at nodes \(t_0,\ldots ,t_n\) is a lower triangular matrix \(L=(l_{i,j})_{1\le i,j\le n+1}\) whose entries

$$\begin{aligned} l_{i,j}= w_{j-1}(t_{i-1})=\prod _{k=0}^{j-2}(t_{i-1}-t_{k}),\quad j\le i, \end{aligned}$$

satisfy the following recurrences

$$\begin{aligned} l_{i,1}=1, \quad l_{i,j+1}=l_{i,j}(t_{i-1}-t_{j-1}),\quad j=1,\ldots ,i-1,\quad i=1,\ldots ,n+1. \end{aligned}$$
(21)
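As a minimal sketch, the recurrence (21) can be implemented as follows, assuming the nodes are stored in a vector t of length \(n+1\), so that t(i) holds \(t_{i-1}\):

n1 = numel(t);                   % n1 = n+1
L = zeros(n1); L(:,1) = 1;       % l_{i,1} = w_0(t_{i-1}) = 1
for i = 2:n1
    for j = 1:i-1
        L(i,j+1) = L(i,j)*(t(i)-t(j));  % l_{i,j+1} = l_{i,j}(t_{i-1}-t_{j-1})
    end
end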

Moreover, taking into account (15) and (19), we obtain the following Crout factorization of Vandermonde matrices

$$\begin{aligned} V= M \left[ \begin{array}{c} \ell _0,\ldots , \ell _n \\ t_0,\ldots ,t_{n}\end{array} \right] V= M \left[ \begin{array}{c} m_0,\ldots ,m_n \\ t_0,\ldots ,t_{n}\end{array} \right] = M \left[ \begin{array}{c} w_0,\ldots ,w_n \\ t_0,\ldots ,t_{n}\end{array} \right] U=LU. \end{aligned}$$

This factorization can be used to solve linear Vandermonde systems \(Vx=f\) by considering the systems \(Ld=f\) and \(Ux=d\). Note that, in Lagrange interpolation problems, the vectors \(d:=(d_1,\ldots ,d_{n+1})^T\) and \(f:=(f_1,\ldots ,f_{n+1})^T\) with \(d_i:=[t_0,\ldots ,t_{i-1}]f\) and \(f_i:=f(t_{i-1})\), \(i=1,\ldots ,n+1\), are related by

$$\begin{aligned} Ld=f. \end{aligned}$$

So, the matrix U relates the vector solution x with an intermediate vector d of divided differences (see [5]).
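A minimal sketch of this procedure for distinct nodes follows: the lower triangular system \(Ld=f\) is solved by the usual in-place divided-difference recurrence, so that d(i) ends up holding \([t_0,\ldots ,t_{i-1}]f\), the coefficients of the Newton form (16); here t and f are the vectors of nodes and function values.

t = t(:); d = f(:); n = numel(t)-1;    % nodes and data as column vectors
for k = 1:n                            % k-th order divided differences
    d(k+1:n+1) = (d(k+1:n+1)-d(k:n)) ./ (t(k+1:n+1)-t(1:n+1-k));
end
% d(i) now holds [t_0,...,t_{i-1}]f, the Newton coefficients in (16)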

The following result provides the pivots and multipliers of the NE of the matrix U in (20) and of its inverse \(U^{-1}\). Their decompositions of the form (6) are obtained, and their total positivity is analyzed.

Theorem 3

Let U be the change of basis matrix between the monomial and the Newton basis (17). Then,

$$\begin{aligned} U= G_1 \cdots G_{n}, \end{aligned}$$
(22)

where \(G_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices whose structure is described by (7) and their off-diagonal entries are

$$\begin{aligned} \widetilde{m}_{i,j} = t_{j-1}, \quad 1\le j< i \le n+1. \end{aligned}$$
(23)

Moreover,

$$\begin{aligned} U^{-1}= \widehat{G}_1 \cdots \widehat{G}_{n}, \end{aligned}$$
(24)

where \( \widehat{G}_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries

$$\begin{aligned} \widetilde{m}_{i,j} = -t_{i-j-1}, \quad 1\le j< i \le n+1. \end{aligned}$$
(25)

Proof

Let us define the lower triangular matrix \(L:=U^T\). Clearly, \(L=(l_{i,j} )_{1\le i,j\le n+1}\) with

$$\begin{aligned} l_{i,j} = [t_0,\ldots , t_{j-1}]m_{i-1}, \quad 1\le j\le i\le n+1. \end{aligned}$$
(26)

Now, let \(L^{(1)}:=L\) and, for \(r=2,\ldots , n+1\), let \(L^{(r)} =(l_{i,j}^{(r)})_{1\le i,j \le n+1}\) be the matrix obtained after \(r-1\) steps of the NE of L. By induction on r, we shall deduce that

$$\begin{aligned} l_{i,j}^{(r)}= [t_{r-1},\ldots , t_{j-1}]m_{i-r}, \quad r\le j\le i\le n+1. \end{aligned}$$
(27)

First, taking into account formula (26), identities (27) clearly hold for \(r=1\). If (27) holds for some \(r\in \{1,\ldots ,n\}\), we have that

$$\begin{aligned} \frac{l_{i, r}^{(r)}}{l_{i-1, r}^{( r)}}= \frac{ [t_{r-1}]m_{i-r}}{ [t_{r-1}]m_{i-r-1} }=\frac{ t_{r-1}^{i-r}}{ t_{r-1}^{i-r-1} }=t_{r-1}. \end{aligned}$$
(28)

Since \(l_{i,j}^{(r+1)}=l_{i,j}^{(r)}- \left( {l_{i,r}^{(r)}}/{l_{i-1,r}^{(r)}} \right) l_{i-1,j}^{(r)}\), taking into account (27), (28) and applying the Leibniz-type rule for divided differences (18) to \(m_j(t)=t\, m_{j-1}(t)\), we can write

$$\begin{aligned} l_{i,j}^{(r+1)} = [t_{r-1},\ldots , t_{j-1}]m_{i-r}- t_{r-1} [t_{r-1},\ldots , t_{j-1}]m_{i-r-1} = [t_{r},\ldots , t_{j-1}]m_{i-r-1}, \end{aligned}$$

corresponding to the identity (27) for \(r+1\). Now, from (4) and (27), the pivots \(p_{i,j}\) of the NE of L satisfy

$$\begin{aligned} p_{i,j}=l_{i,j}^{(j)}=[t_{j-1}]m_{i-j}=t_{j-1}^{i-j}. \end{aligned}$$
(29)

Consequently, the diagonal pivots are \(p_{i,i}=1\), \(i=1,\ldots ,n+1\), and the multipliers satisfy

$$\begin{aligned} m_{i,j}= \frac{ p_{i ,j}}{ p_{i-1,j}}= \frac{ t_{j-1}^{i-j} }{ t_{j-1}^{i-j-1}} =t_{j-1}, \quad 1\le j<i \le n+1. \end{aligned}$$
(30)

Then,

$$\begin{aligned} U^T= F_{n} \cdots F_{1}, \end{aligned}$$
(31)

and the off-diagonal elements \(m_{i,j}\) of \(F_{i}\), \(i=1,\ldots ,n\), are given by (30). Taking into account that \(U=L^T\), we have

$$\begin{aligned} U= F_{1}^T \cdots F_{n}^T, \end{aligned}$$

and defining \( G_i:= F_{i}^T\), \(i=1,\ldots ,n\), the factorization (22) for U is obtained. Taking into account (30), formula (23) for the off-diagonal entries \(\widetilde{m}_{i,j}\) is confirmed. Finally, taking into account Remark 1, the factorization (24) for \(U^{-1}\) can be deduced.

\(\square \)

The factorization (6) of the matrix U in (20), as well as the factorization (8) of \(U^{-1}\), can be stored by defining \(BD(U)=(BD(U)_{i,j})_{1\le i,j\le n+1}\) with

$$\begin{aligned} BD(U)_{i,j} = \left\{ \begin{array}{ll} t_{i-1} , & \text{if } 1\le i<j\le n+1, \\ 1, & \text{if } 1\le i=j\le n+1, \\ 0, & \text{elsewhere}. \end{array}\right. \end{aligned}$$
(32)
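A sketch of the construction of (32) from the nodes \(t_0,\ldots ,t_{n-1}\), stored in a vector t (the function name is ours):

function B = BDNewtonToMonomial(t)
% Matrix form (32) of the bidiagonal factorization (22) of U in (20).
n1 = numel(t)+1;                 % U is (n+1)x(n+1)
B = eye(n1);                     % diagonal pivots are all 1
for i = 1:n1-1
    B(i,i+1:n1) = t(i);          % BD(U)_{i,j} = t_{i-1} for i < j
end
end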

Taking into account the factorization derived in Theorem 3, computations to HRA with the matrices U and \(U^{-1}\) can be achieved when the interpolation nodes do not change sign.

Corollary 4

The matrix U in (20) (respectively, \(U^{-1}\)) is TP if and only if the nodes of the Newton basis (17) satisfy \( t_i\ge 0\) (respectively, \( t_i\le 0\)), \(i=0,\ldots ,n-1\).

Moreover, if U (respectively, \(U^{-1}\)) is TP, its bidiagonal factorization (22) (respectively, (24)) can be computed to HRA. Consequently, the eigenvalues and singular values of U (respectively, \(U^{-1}\)), as well as the solution of the linear systems \(U{ x}= { b}\) (respectively, \(U^{-1}{ x}= { b}\)), where the entries of \({ b} = (b_0, \ldots , b_n)^T\) have alternating signs, can be obtained to HRA.

Proof

If U (respectively, \(U^{-1}\)) is TP, by Theorem 1, we can guarantee that its CNE can be performed with no row and column swaps, and the multipliers are nonnegative. From Theorem 3, the decomposition (6) of U (respectively, \(U^{-1}\)) is given in (22) (respectively, in (24)), and we deduce that the nodes are nonnegative (respectively, nonpositive).

If \(t_i\ge 0\) (respectively, \(t_i\le 0\)), \(i=0,\ldots ,n-1\), the off-diagonal entries of the bidiagonal matrix factors in (22) (respectively, in (24)) are nonnegative. Then, we can derive that the bidiagonal matrix factors are TP and conclude that U (respectively, \(U^{-1}\)) is TP since it is the product of TP matrices. In addition, the computation of the bidiagonal factorization (6) satisfies the NIC condition, and so, it can be computed to HRA. This fact guarantees the computation of the mentioned algebraic problems to HRA (see Section 3 of [12]).

\(\square \)

Furthermore, using Theorems 2 and 3, HRA computations with the matrix U can also be obtained when considering the matrices JUJ and \(JU^{-1}J\).

Corollary 5

Let U be the matrix in (20). Then, JUJ (respectively, \(JU^{-1}J\)) is TP if and only if the nodes of the Newton basis (17) satisfy \( t_i\le 0\) (respectively, \( t_i\ge 0\)), \(i=0,\ldots ,n-1\).

Moreover, if JUJ (respectively, \(JU^{-1}J\)) is TP, its decomposition (22) (respectively, (24)) can be computed to HRA. Consequently, the eigenvalues and singular values of U (respectively, of \(U^{-1}\)), as well as the solution of the linear systems \(U{ x}= { b}\) (respectively, \(U^{-1}{ x}= { b}\)), where the entries of \({ b} = (b_0, \ldots , b_n)^T\) have the same sign, can be obtained to HRA.

4 Accurate computations with Wronskian matrices of Newton bases

Let us recall that Corollary 1 of [23] provides the following factorization (6) of \(W:=W(m_{0},\ldots ,m_{n})(t)\), the Wronskian matrix of the monomial basis \((m_{0},\ldots ,m_{n})\) in (13),

$$\begin{aligned} W(m_{0},\ldots ,m_{n})(t)=D G_{1} \cdots G_{n}, \end{aligned}$$
(33)

where \(D=\text {diag}\{0!,1!,\dots ,n!\}\) and \(G_{i }\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries

$$\begin{aligned} \widetilde{m}_{k,k-i} =t,\quad i+1\le k\le n+1. \end{aligned}$$

For the matrix representation BD(W) of (33), we have

$$\begin{aligned} BD(W)_{i,j}:=\left\{ \begin{array}{ll} t, & \text{if } i<j, \\ (i-1)!, & \text{if } i=j, \\ 0 , & \text{if } i>j. \end{array}\right. \end{aligned}$$
(34)

Clearly, the computation of BD(W) satisfies the NIC condition, and therefore, this matrix can be computed to HRA. In addition, taking into consideration the sign of the entries of BD(W), one can derive that the Wronskian matrix of the monomial basis is TP for any \(t>0\). In [23], using (33), computations to HRA when solving algebraic problems related to \(W(m_{0},\ldots ,m_{n})(t)\), \(t>0\), have been achieved.
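A sketch of the construction of (34) (the function name is ours):

function B = BDWronskianMonomial(t, n)
% Matrix form (34) of the bidiagonal factorization (33) of
% W(m_0,...,m_n)(t); it has nonnegative entries for t > 0.
B = diag(factorial(0:n));        % diagonal pivots 0!,1!,...,n!
B = B + t*triu(ones(n+1),1);     % entries above the diagonal equal t
end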

For \(t<0\), taking into account (34), we clearly see that the hypotheses of Theorem 2 hold and deduce that HRA computations can also be obtained when considering the Wronskian matrix \(W(m_{0},\ldots ,m_{n})(t)\).

Corollary 6

Let \(W:=W(m_{0},\ldots ,m_{n})(t)\) be the Wronskian matrix of the monomial polynomial basis in (13) and \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\). Given \(t<0\), the matrix \(W_J:=J W J\) is TP and its bidiagonal factorization (6) is

$$\begin{aligned} W_J=D \widehat{G}_{1} \cdots \widehat{G}_{n}, \end{aligned}$$
(35)

where \(D=\text {diag}\{0!,1!,\dots ,n!\}\) and \( \widehat{G}_{i}\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries

$$\begin{aligned} \widetilde{m}_{k,k-i} =-t,\quad i+1\le k\le n+1. \end{aligned}$$

The bidiagonal decomposition (35) can be computed to HRA. Consequently, the eigenvalues and singular values of W, the matrix \(W^{-1}\), as well as the solution of linear systems \(W{ x}= { d}\), where the entries of d have the same sign, can be obtained to HRA.

Taking into account the factorizations obtained for \(W(m_{0},\ldots ,m_{n})(t)\) and for the change of basis matrix between the monomial and the Newton bases, the total positivity of Wronskian matrices of the Newton basis can be analyzed, and their bidiagonal factorization can be derived.

Theorem 7

Let \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\) and \(W:=W(w_{0},\ldots ,w_{n})(t)\) be the Wronskian matrix of the Newton basis \((w_{0},\ldots ,w_{n})\) defined in (17). Then,

  (a) If \(t_i\le 0\), \(i=0,\ldots ,n-1\), then W is TP for any \(t > 0\) and its factorization (6) can be computed to HRA.

  (b) If \(t_i \ge 0\), \(i=0,\ldots ,n-1\), then \(W_J:=JW J\) is TP for any \(t< 0\) and its factorization (6) can be computed to HRA.

Proof

Let \((m_0,\ldots ,m_n)\) be the monomial basis of \({\textbf{P}}^n\) and U be the change of basis matrix such that \((m_0,\ldots ,m_n)=(w_0,\ldots ,w_n) {U}\) (see (20)). Then,

$$\begin{aligned} W=W(m_0,\ldots ,m_n)(t) U^{-1}. \end{aligned}$$
(36)

If \(t_i\le 0\), \(i=0,\ldots ,n-1\), by Corollary 4, \(U^{-1}\) is TP and its decomposition (24) can be computed to HRA. If \(t>0\), by Corollary 1 of [23], \(W(m_{0},\ldots ,m_{n})(t)\) is TP. Taking into account that the product of TP matrices is TP, we deduce that W is TP and that its factorization (6) can also be computed to HRA (see (34) for \(BD(W(m_{0},\ldots ,m_{n})(t))\)).

If \(t_i\ge 0\), \(i=0,\ldots ,n-1\), using Corollary 5, we deduce that \(JU^{-1} J\) is TP and its factorization (24) can be obtained to HRA. Moreover, if \(t<0\), using Corollary 6, we deduce that \(JW(m_0,\ldots ,m_n)(t)J\) is TP and its bidiagonal factorization (6) can be computed to HRA. Since \(J=J^{-1}\), from (36), we can write

$$\begin{aligned} JWJ =( J W(m_0,\ldots ,m_n)(t)J) (J U^{-1}J), \end{aligned}$$
(37)

and deduce that JWJ is TP because it can be written as the product of TP matrices.

Using Algorithm 5.1 of [20], if the decomposition (6) of two nonsingular TP matrices is provided to HRA, then the decomposition (6) of the product is computed to HRA. Consequently, the decomposition (6) of W for the case \(t_i\le 0\), \(i=0,\ldots ,n-1\), and \(t > 0\) as well as the decomposition (6) of \(W_J\) for the case \(t_i\ge 0\), \(i=0,\ldots ,n-1\) and \(t < 0\) can be obtained to HRA.

\(\square \)

5 Applications to Stirling matrices

Stirling numbers of the first kind arise in combinatorics when analyzing permutations. They can be seen as the coefficients s(n, k), \(n,k\in \mathbb {N}\cup \{0\}\), \(k\le n\), in the expansion of the falling factorial, defined as

$$\begin{aligned} (x)_0 := 1, \quad (x)_n:=x(x-1)\cdots (x-n+1), \quad n\in \mathbb {N}, \end{aligned}$$
(38)

in terms of powers of the variable x, that is,

$$\begin{aligned} (x)_n=\sum _{k=0}^n s(n,k) x^k. \end{aligned}$$
(39)

First kind Stirling numbers can be computed using the relation

$$\begin{aligned} s(n+1,k)=-n\cdot s(n,k)+s(n,k-1), \end{aligned}$$
(40)

with

$$\begin{aligned} s(0,0) := 1, \qquad s(0, n)= s(n,0):=0, \quad n\in \mathbb {N}. \end{aligned}$$

Since \(\mathrm{sign}(s(n,k))=(-1)^{n-k}\), Stirling numbers of the first kind are also called signed Stirling numbers. The absolute values of the first kind Stirling numbers are known as unsigned Stirling numbers and are usually denoted by c(n, k) or \(\begin{bmatrix}n\\ k\end{bmatrix}\). These numbers satisfy

$$\begin{aligned} c(n,k)=\begin{bmatrix}n\\ k\end{bmatrix}={(-1)}^{n-k}s(n,k), \end{aligned}$$
(41)

and can be seen as the coefficients in the expansion of the rising factorial:

$$\begin{aligned} x^{\bar{n}}:=x(x+1)\cdots (x+n-1)= \sum _{k=0}^n c(n,k) x^k. \end{aligned}$$
(42)

Unsigned Stirling numbers can be computed using the relation

$$\begin{aligned} \begin{bmatrix}n+1\\ k\end{bmatrix}=n\begin{bmatrix}n\\ k\end{bmatrix}+\begin{bmatrix}n\\ k-1\end{bmatrix}, \end{aligned}$$
(43)

with

$$\begin{aligned} \begin{bmatrix}0\\ 0\end{bmatrix}:=1,\begin{bmatrix}n\\ 0\end{bmatrix}:=\begin{bmatrix}0\\ n\end{bmatrix}=0,\;\;n\in \mathbb {N}. \end{aligned}$$

The Stirling numbers of the second kind, denoted by \({\displaystyle S(n,k)}\) or \(\begin{Bmatrix}n\\k\end{Bmatrix}\), count the number of partitions of a set of size n into k disjoint non-empty subsets. They can also be characterized as the coefficients arising when one expresses powers of an indeterminate x in terms of the falling factorials (38), that is,

$$\begin{aligned} \sum _{{k=0}}^{n}\begin{Bmatrix}n\\k\end{Bmatrix}(x)_{k}=x^{n}. \end{aligned}$$
(44)

Second kind Stirling numbers can be computed using the relation

$$\begin{aligned} \begin{Bmatrix}n+1\\k\end{Bmatrix}=k\begin{Bmatrix}n\\k\end{Bmatrix}+\begin{Bmatrix}n\\k-1\end{Bmatrix}, \end{aligned}$$
(45)

with

$$\begin{aligned} \left\{ \begin{array}{cc}{n} \\ {n}\end{array}\right\} :=1,\; n\ge 0, \qquad \left\{ \begin{array}{cc}n \\ {0}\end{array}\right\} =\left\{ \begin{array}{cc}{0} \\ {n}\end{array}\right\} :=0,\quad n\in \mathbb {N}, \end{aligned}$$

or by the explicit formula

$$\begin{aligned} \begin{Bmatrix}n\\k\end{Bmatrix}=\frac{1}{k!}\sum \limits ^{k}_{i=0}(-1)^{i}\begin{pmatrix}k\\ i\end{pmatrix}(k-i)^{n}, \;\;k=0,\dots , n. \end{aligned}$$
(46)
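As an illustration, the explicit formula (46) can be coded directly; note that its alternating sum is prone to cancellation in floating point arithmetic for large n, which is a reason to prefer the recurrence (45) in that setting (the function name is ours):

function S = Stirling2(n, k)
% Second kind Stirling number S(n,k) by the explicit formula (46).
S = 0;
for i = 0:k
    S = S + (-1)^i * nchoosek(k,i) * (k-i)^n;   % alternating sum
end
S = S/factorial(k);
end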

Let us observe that, when considering the nodes \(t_i:=i\) for \(i=0,\ldots ,n-1\), the corresponding Newton basis (17) satisfies

$$\begin{aligned} w_i(t)= (t)_i, \quad i=0,\ldots ,n, \end{aligned}$$

and, taking into account (44), Stirling numbers of the second kind can be seen as divided differences of monomials with respect to the set of nodes formed by the first consecutive nonnegative integers. In particular,

$$\begin{aligned}{}[0,1,\dots , k]m_{n}=\begin{Bmatrix}n\\k\end{Bmatrix}, \end{aligned}$$

where \(m_i(t)=t^i\), \(i=0,\ldots ,n\).

Moreover, the corresponding change of basis matrix U between the monomial and the Newton basis corresponding to \(t_i:=i\), \(i=0,\ldots , n-1\), and satisfying (19), is upper triangular and \(U=(u_{i,j})_{1\le i,j \le n+1}\), with

$$\begin{aligned} u_{i,j}= \left\{ \begin{array}{ll} [0,\ldots , {i-1}]m_{j-1}= \begin{Bmatrix}j-1\\ i-1\end{Bmatrix}, & \text{if } 1\le i\le j \le n+1, \\ 0, & \text{elsewhere}. \end{array}\right. \end{aligned}$$
(47)

We shall say that the matrix U, whose entries are given in (47), is the \((n+1)\times (n+1)\) second kind Stirling matrix. As a direct consequence of Theorem 3 and Corollary 4, we can deduce a bidiagonal decomposition providing HRA computations with second kind Stirling matrices.

Theorem 8

The second kind Stirling matrix U described by (47) is TP and admits the following decomposition

$$\begin{aligned} U= G_1 \cdots G_{n}, \end{aligned}$$
(48)

where \(G_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries

$$\begin{aligned} \widetilde{m}_{i,j} = j-1, \quad 1\le j< i \le n+1. \end{aligned}$$
(49)

Moreover, the decomposition (48) can be computed to HRA. Consequently, the eigenvalues and singular values of U, as well as the solution of the linear systems \(U{ x}= { b}\), where the entries of b have alternating signs, can be obtained to HRA.

Furthermore, using (39), we can also deduce that the inverse of the second kind Stirling matrix U described by (47) is the upper triangular \(\widetilde{U}:=U^{-1}=(\widetilde{u}_{i,j})_{1\le i,j \le n+1}\) such that

$$\begin{aligned} \widetilde{u}_{i,j}= \left\{ \begin{array}{ll} s(j-1,i-1), & \text{if } 1\le i\le j \le n+1, \\ 0, & \text{elsewhere}, \end{array}\right. \end{aligned}$$
(50)

where s(n, k) denotes the corresponding (signed) first kind Stirling number provided by (40). We shall say that this matrix is the \((n+1)\times (n+1)\) signed Stirling matrix. Using Theorems 2 and 3, we have the following result.

Theorem 9

Let \(J:=\text {diag}((-1)^{i-1} )_{1\le i\le n+1}\) and \(\widetilde{U}\) be the signed Stirling matrix described by (50). Then, \(J\widetilde{U}J\) is TP and admits the following factorization

$$\begin{aligned} J\widetilde{U}J= {G}_1 \cdots {G}_{n}, \end{aligned}$$
(51)

where \(G_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices with the structure described by (7) and off-diagonal entries

$$\begin{aligned} \widetilde{m}_{i,j} = i-j-1, \quad 1\le j< i \le n+1. \end{aligned}$$

Moreover, the decomposition (51) can be computed to HRA. Consequently, the eigenvalues and singular values of \(\widetilde{U}\), as well as the solution of the linear systems \(\widetilde{U}{ x}= { b}\), where the entries of b have the same sign, can be obtained to HRA.

Finally, let us observe that the matrix \(\widehat{U}:=J\widetilde{U}J= (\widehat{u}_{i,j})_{1\le i,j \le n+1} \) in Theorem 9 satisfies

$$\begin{aligned} \widehat{u}_{i,j}= \left\{ \begin{array}{ll} (-1)^{i+j}\widetilde{u}_{i,j}= (-1)^{i+j} s(j-1,i-1)= \begin{bmatrix}j-1\\ i-1\end{bmatrix}, & \text{if } 1\le i\le j \le n+1, \\ 0, & \text{elsewhere}, \end{array}\right. \end{aligned}$$
(52)

where \(\begin{bmatrix}n\\ k\end{bmatrix}\) denotes the unsigned Stirling numbers of the first kind that can be computed by the recurrence relation (43). We shall say that this matrix is the \((n+1)\times (n+1)\) unsigned Stirling matrix. Then, we have the following result.

Corollary 10

The unsigned Stirling matrix \(\widehat{U} \) described by (52) is TP and admits the following decomposition

$$\begin{aligned} \widehat{U }= {G}_1 \cdots {G}_{n}, \end{aligned}$$
(53)

where \( {G}_i\), \(i=1,\ldots ,n\), are upper triangular bidiagonal matrices of the form (7), whose off-diagonal entries are

$$\begin{aligned} \widetilde{m}_{i,j} = i-j-1, \quad 1\le j< i \le n+1. \end{aligned}$$

Moreover, the decomposition (53) can be computed to HRA. Consequently, eigenvalues and singular values of \(\widehat{U}\), as well as the solution of the linear systems \(\widehat{U}{ x}= { b}\), where the entries of b have alternating signs, can be obtained to HRA.

In order to conclude this section, let us notice that bidiagonal factorizations of matrices formed by other types of Stirling numbers such as Jacobi-Stirling or q-Stirling numbers can be found in [10] and [9], respectively.

6 Total positivity and HRA computations with Touchard bases

The Touchard polynomials are also called the exponential polynomials or Bell polynomials and comprise a polynomial sequence defined by

$$\begin{aligned} T_{n}(x)=\sum _{{k=0}}^{n}S(n,k)x^{k}=\sum _{{k=0}}^{n}\begin{Bmatrix}n\\k\end{Bmatrix}x^{k}, \end{aligned}$$
(54)

where \(S(n,k)=\begin{Bmatrix}n\\k\end{Bmatrix}\) is the Stirling number of the second kind in (44) (cf. [33]). We shall say that the basis \((T_0,\ldots , T_n)\) of \({\textbf{P}}^n\) is the \((n+1)\)-dimensional Touchard basis.

We clearly have

$$\begin{aligned} (T_{0}, \ldots ,T_{n}) = (m_{0}, \ldots ,m_{n})U, \end{aligned}$$
(55)

where \((m_{0},\ldots ,m_{n})\) is the monomial basis of \({\textbf{P}}^n\) and \(U=(u_{i,j})_{1\le i,j\le n+1}\) is the \((n+1)\times (n+1)\) second kind Stirling matrix, that is,

$$\begin{aligned} {u}_{i,j}= \left\{ \begin{array}{ll} S(j-1,i-1), & \text{if } 1\le i\le j \le n+1, \\ 0, & \text{elsewhere}, \end{array}\right. \end{aligned}$$
(56)

(see (47)). Let us recall that the monomial basis \((m_{0},\ldots ,m_{n})\) of \({\textbf{P}}^n\) is STP on \((0,+\infty )\) and so, given \(0<t_0< \cdots <t_{n}\), the corresponding collocation matrix

$$\begin{aligned} V:=\left( t_{i-1}^{j-1} \right) _{1\le i,j \le n+1}, \end{aligned}$$
(57)

is STP (see Section 3 of [19]). V is the Vandermonde matrix at the considered nodes.

Taking into account the total positivity of the Vandermonde matrices at positive nodes in increasing ordering and the total positivity of the second kind Stirling matrices, we can deduce the total positivity of Touchard bases, as well as factorizations providing computations to HRA when considering their collocation matrices.

Theorem 11

The basis \((T_{0},\ldots ,T_{n})\) of Touchard polynomials defined in (54) is STP on \((0,\infty )\). Moreover, given \(0< t_{0}< \cdots < t_{n }\), the collocation matrix

$$\begin{aligned} T:=\left( T_{j-1}(t_{i-1})\right) _{1\le i,j \le n+1}, \end{aligned}$$
(58)

and its bidiagonal factorization (6) can be computed to HRA.

Proof

Given \(0< t_{0}< \cdots < t_{n }\), by formula (55), the collocation matrix (58) of the Touchard basis satisfies

$$\begin{aligned} T=V U, \end{aligned}$$
(59)

where V is the Vandermonde matrix (57) and U is the \((n+1)\times (n+1)\) second kind Stirling matrix described by (56).

It is well known that V is STP for \(0< t_{0}< \cdots < t_{n}\) and that its decomposition (6) can be computed to HRA (see [19] or Theorem 3 of [22]). By Theorem 8, U is a nonsingular TP matrix, and its decomposition (6) can also be computed to HRA. Therefore, T is an STP matrix since it is the product of an STP matrix and a nonsingular TP matrix (see Theorem 3.1 of [1]). Moreover, using Algorithm 5.1 of [20], if the decompositions (6) of two nonsingular TP matrices are provided to HRA, then the decomposition (6) of their product can be obtained to HRA. Consequently, T and its decomposition (6) can be obtained to HRA.

\(\square \)

Now, we can also analyze the total positivity of Wronskian matrices of Touchard bases.

Theorem 12

Let \((T_{0},\ldots ,T_{n})\) be the Touchard polynomial basis in (54). For any \(t> 0\), the Wronskian matrix \(W:=W(T_{0},\ldots ,T_{n})(t)\) is nonsingular and TP. Furthermore, W and its bidiagonal decomposition (6) can be computed to HRA.

Proof

Using formula (55), it can be checked that

$$\begin{aligned} W(T_0, \ldots , T_n)(t) = W(m_{0},\ldots ,m_{n})(t)\,U. \end{aligned}$$
(60)

Following the reasoning in the proof of Theorem 11, the result readily follows.

\(\square \)

7 Numerical experiments

In order to facilitate the understanding of the numerical experiments carried out, we provide the pseudocode of several algorithms. Firstly, using Theorem 8, we present Algorithm 1 for computing to HRA the matrix form BD(U) (48) of the bidiagonal decomposition of the second kind Stirling matrix U in (47). Furthermore, we also provide the pseudocode of Algorithms 2 and 3 for computing to HRA the matrix form (9) of the bidiagonal decomposition of the collocation and Wronskian matrices of Touchard bases. Taking into account (59), Algorithm 2 requires BD(U) and the bidiagonal decomposition of the Vandermonde matrix implemented in the Matlab function TNVandBD available in [21]. In addition, following (60), Algorithm 3 requires BD(U) and the bidiagonal decomposition (34) of the Wronskian matrix of the monomial basis. Finally, let us observe that both algorithms call the Matlab function TNProduct available in [21]. Let us recall that, given \(A=BD(F)\) and \(B =BD(G)\) to HRA, TNProduct(A,B) computes \( BD(F\cdot G)\) to HRA. The computational cost of the mentioned function and algorithms is \(O(n^3)\) arithmetic operations.

Algorithm 1. HRA computation of the bidiagonal decomposition of the second kind Stirling matrix U in (47).
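The following Matlab-style sketch is consistent with Theorem 8 (the function name BDStirling2 is ours):

function B = BDStirling2(n)
% Matrix form BD(U) of the bidiagonal decomposition (48)-(49) of the
% (n+1)x(n+1) second kind Stirling matrix U in (47).
B = eye(n+1);                    % diagonal pivots are all 1
for i = 1:n
    B(i,i+1:n+1) = i-1;          % entries above the diagonal, cf. (49)
end
end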

Algorithm 2. HRA computation of the bidiagonal decomposition of the collocation matrix T of Touchard bases (54).
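A sketch consistent with (59), combining the TNTools functions TNVandBD and TNProduct of [21] with the BDStirling2 helper sketched above (the function name is ours):

function BT = BDTouchardCollocation(t)
% BD(T) for the collocation matrix (58) at increasing nodes t > 0.
BV = TNVandBD(t);                % BD of the Vandermonde matrix V (57)
BU = BDStirling2(numel(t)-1);    % BD of the second kind Stirling matrix
BT = TNProduct(BV, BU);          % BD(T) = BD(V*U) to HRA, cf. (59)
end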

Algorithm 3. HRA computation of the bidiagonal decomposition of the Wronskian matrix W of Touchard bases (54).
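A sketch consistent with (60), reusing the BDStirling2 helper above together with the matrix form (34) (the function name is ours):

function BW = BDTouchardWronskian(t, n)
% BD(W) for the Wronskian matrix of the Touchard basis at t > 0.
BWm = diag(factorial(0:n)) + t*triu(ones(n+1),1); % BD (34) of W(m_0,...,m_n)(t)
BW  = TNProduct(BWm, BDStirling2(n));             % BD(W) = BD(W(m)*U), cf. (60)
end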

Let us illustrate with a simple example the bidiagonal decompositions obtained by Algorithms 2 and 3 for the collocation and Wronskian matrices of Touchard bases. For \(n+1=10\), Algorithm 1 computes the following matrix storing the bidiagonal decomposition of the second kind Stirling matrix U:

$$\begin{aligned} BD(U)=\tiny{ \left( \begin{array}{cccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
0 & 0 & 0 & 1 & 3 & 3 & 3 & 3 & 3 & 3 \\
0 & 0 & 0 & 0 & 1 & 4 & 4 & 4 & 4 & 4 \\
0 & 0 & 0 & 0 & 0 & 1 & 5 & 5 & 5 & 5 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 6 & 6 & 6 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 7 & 7 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 8 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{array}\right) .} \end{aligned}$$

Let us now consider the sequence of parameters \(\widetilde{\textbf{t}}:=[2,4,6,8,10,12,14,16,18,20]\). The bidiagonal factorization of the corresponding Vandermonde matrix V computed by TNVandBD is as follows:

$$\begin{aligned} BD(V)=\tiny{\left( \begin{array}{cccccccccc}
1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
1 & 2 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 \\
1 & 1 & 8 & 6 & 6 & 6 & 6 & 6 & 6 & 6 \\
1 & 1 & 1 & 48 & 8 & 8 & 8 & 8 & 8 & 8 \\
1 & 1 & 1 & 1 & 384 & 10 & 10 & 10 & 10 & 10 \\
1 & 1 & 1 & 1 & 1 & 3840 & 12 & 12 & 12 & 12 \\
1 & 1 & 1 & 1 & 1 & 1 & 46080 & 14 & 14 & 14 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 645120 & 16 & 16 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 10321920 & 18 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 185794560
\end{array}\right) }. \end{aligned}$$

Using BD(V) and BD(U), Algorithm 2 computes \(BD(V\cdot U)=BD(T)\) for the bidiagonal factorization of the collocation matrix T at the parameters \(\widetilde{\textbf{t}}\) of the \((n+1)\)-dimensional Touchard basis, obtaining the following:

$$\begin{aligned} BD(T)=\tiny{\left( \begin{array}{cccccccccc}
1 & 2 & 3 & 11/3 & 47/11 & 227/47 & 1215/227 & 2369/405 & 2018/319 & 1839/271 \\
1 & 2 & 4 & 16/3 & 537/88 & 3835/562 & 3317/443 & 12529/1545 & 3280/377 & 13381/1444 \\
1 & 1 & 8 & 6 & 61/8 & 3285/389 & 3429/371 & 928/93 & 14123/1324 & 7653/676 \\
1 & 1 & 1 & 48 & 8 & 604/61 & 1687/157 & 6995/603 & 3061/247 & 2771/211 \\
1 & 1 & 1 & 1 & 384 & 10 & 1838/151 & 13191/1013 & 2492/179 & 2229/151 \\
1 & 1 & 1 & 1 & 1 & 3840 & 12 & 13271/919 & 16903/1106 & 20906/1289 \\
1 & 1 & 1 & 1 & 1 & 1 & 46080 & 14 & 1437/86 & 2823/161 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 645120 & 16 & 911/48 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 10321920 & 18 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 185794560
\end{array}\right) }. \end{aligned}$$

On the other hand, let \(t:=2\). The bidiagonal factorization of the Wronskian matrix \(\widetilde{W}\) of the monomial basis at t can be represented by the following:

$$\begin{aligned} BD(\widetilde{W})=\tiny{ \left( \begin{array}{cccccccccc}
1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
0 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
0 & 0 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
0 & 0 & 0 & 6 & 2 & 2 & 2 & 2 & 2 & 2 \\
0 & 0 & 0 & 0 & 24 & 2 & 2 & 2 & 2 & 2 \\
0 & 0 & 0 & 0 & 0 & 120 & 2 & 2 & 2 & 2 \\
0 & 0 & 0 & 0 & 0 & 0 & 720 & 2 & 2 & 2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 5040 & 2 & 2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 40320 & 2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 362880
\end{array}\right) }. \end{aligned}$$

Using \(BD(\widetilde{W})\) and BD(U), Algorithm 3 computes \( BD(\widetilde{W} \cdot U)=BD(W)\) for the bidiagonal factorization of the Wronskian matrix W of the \((n+1)\)-dimensional Touchard basis at t:

$$\begin{aligned} BD(W)=\tiny{ \left( \begin{array}{cccccccccc}
1 & 2 & 3 & 11/3 & 47/11 & 227/47 & 1215/227 & 2369/405 & 2018/319 & 1839/271 \\
0 & 1 & 2 & 10/3 & 216/55 & 1953/431 & 967/190 & 3209/572 & 1398/229 & 1079/164 \\
0 & 0 & 2 & 2 & 19/5 & 2885/684 & 484/101 & 2340/437 & 787/134 & 70/11 \\
0 & 0 & 0 & 6 & 2 & 84/19 & 2059/450 & 1787/354 & 23184/4127 & 6737/1097 \\
0 & 0 & 0 & 0 & 24 & 2 & 109/21 & 3383/668 & 893/168 & 3593/613 \\
0 & 0 & 0 & 0 & 0 & 120 & 2 & 662/109 & 2043/356 & 1427/252 \\
0 & 0 & 0 & 0 & 0 & 0 & 720 & 2 & 2325/331 & 1201/182 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 5040 & 2 & 3491/436 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 40320 & 2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 362880
\end{array}\right) }. \end{aligned}$$

To test the accuracy in floating point arithmetic provided by the proposed bidiagonal factorizations, for different dimensions \(n+1=5,6, \ldots , 20\), we have solved algebraic problems related to collocation matrices \(T_n\) of Touchard bases with \(t_{i}=1+(i+1)/(n+1)\), \(i=0,\ldots , n\), and Wronskian matrices \(W_n\) of Touchard bases at \(t=20\). In order to analyze the accuracy of the results, when calculating the relative errors, we have considered the solutions obtained in Mathematica using 100-digit arithmetic as the exact solutions.

In addition, we have also computed the 2-norm condition number of all considered matrices. In Fig. 1, the conditioning obtained in Mathematica is depicted. It can be easily observed that the conditioning drastically increases with the size of the matrices. Due to the ill-conditioning of these matrices, standard routines do not obtain accurate solutions because they can suffer from inaccurate cancellations. In contrast, the algorithms using the factorizations obtained in this paper exploit the structure of the considered matrices, obtaining, as we will see, numerical results to HRA.

Fig. 1. The 2-norm conditioning of collocation matrices \(T_n\) at \(t_{i}=1+(i+1)/(n+1)\), \(i=0,\ldots , n\), and Wronskian matrices \(W_n\) at \(t=20\) of Touchard bases.

Computation of eigenvalues and singular values. Given \(B=BD(A)\) to HRA, the Matlab functions TNEigenValues(B) and TNSingularValues(B) available in [21] compute the eigenvalues and singular values of the matrix A to HRA. Their computational cost is \(O(n^3)\) arithmetic operations (see [19]).

Algorithm 4 uses the bidiagonal decompositions provided by Algorithms 2 and 3 to compute to HRA the eigenvalues and singular values of collocation matrices and the singular values of Wronskian matrices of Touchard bases. Let us note that the eigenvalues of the Wronskian matrices of Touchard bases are exact, since these matrices are upper triangular with diagonal entries \(0!,1!,\ldots ,n!\).

Algorithm 4. HRA computation of the eigenvalues of \(T_n\) and singular values of \(T_n\) and \(W_n\).
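A sketch of Algorithm 4, where BT and BW denote the bidiagonal decompositions produced by Algorithms 2 and 3:

eigT = TNEigenValues(BT);        % eigenvalues of T_n to HRA
svT  = TNSingularValues(BT);     % singular values of T_n to HRA
svW  = TNSingularValues(BW);     % singular values of W_n to HRA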

Fig. 2. Relative error in the approximations to the lowest eigenvalue of \(T_n\) with \(t_{i}=1+(i+1)/(n+1)\), \(i=0,\ldots , n\).

Fig. 3. Relative error in the approximations to the smallest singular value of \(T_n\) with \(t_{i}=1+(i+1)/(n+1)\), \(i=0,\ldots , n\), and of \(W_n\) at \(t=20\).

Let us observe that ill-conditioned matrices have extremely small singular values. Moreover, small relative perturbations in the entries of a totally positive matrix can produce enormous relative perturbations in its small eigenvalues and singular values. So, traditional methods for computing the eigenvalues or singular values of an ill-conditioned TP matrix only guarantee relative accuracy in the computation of the largest ones (cf. [19]). In this context, we have compared the smallest eigenvalue and singular value obtained using Algorithm 4 and the Matlab commands eig and svd. The values provided by Mathematica using 100-digit arithmetic have been considered the exact solution of the algebraic problem, and the relative error e of each approximation has been computed as \(e:= \vert a-\tilde{a} \vert / \vert a \vert \), where a denotes the smallest eigenvalue or singular value computed in Mathematica and \(\tilde{a}\) the corresponding value computed in Matlab.

In Figs. 2 and 3, the relative errors are shown. Note that our approach accurately computes the smallest eigenvalue and singular value regardless of the 2-norm condition number of the considered matrices. In contrast, the Matlab commands eig and svd return results that are not accurate at all.

Computation of inverses. Given \(B=BD(A)\) to HRA, the function TNInverseExpand(B) available in [21] returns \(A^{-1}\) to HRA, requiring \(O(n^2)\) arithmetic operations (see [27]).

Algorithm 5 uses the bidiagonal decompositions provided by Algorithms 2 and 3 to compute the inverses of these matrices to HRA.

Algorithm 5. Computation of the inverse of \(T_n\) and \(W_n\) to HRA.
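A sketch of Algorithm 5, with BT and BW as above:

invT = TNInverseExpand(BT);      % inverse of T_n to HRA, via [27]
invW = TNInverseExpand(BW);      % inverse of W_n to HRA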

For all considered matrices, we have compared the inverses obtained using Algorithm 5 and the Matlab command inv. To assess the accuracy of these two methods, we have compared both Matlab approximations with the inverse matrix \(A^{-1}\) computed by Mathematica using 100-digit arithmetic, taking into account the formula \(e=\Vert A^{-1}-\widetilde{A}^{-1} \Vert /\Vert A^{-1}\Vert \) for the corresponding relative error.

The achieved relative errors are shown in Fig. 4. Note that, in contrast to the Matlab command inv, our algorithm provides HRA results.

Fig. 4. Relative error of the approximations to the inverse of \(T_n\) with \(t_{i}=1+(i+1)/(n+1)\), \(i=0,\ldots , n\), and of \(W_n\) at \(t=20\).

Resolution of linear systems. Given \(B=BD(A)\) to HRA and a vector d with alternating signs, the Matlab function TNSolve(B,d) available in [21] returns the solution c of \(Ac=d\) to HRA. It requires \(O(n^2)\) arithmetic operations (see [21]).

Algorithm 6 uses the bidiagonal decomposition provided by Algorithms 2 and 3 to compute to HRA the solution of the linear systems \(T_nc=d\) and \(W_nc=d\) where \({ d }=((-1)^{i+1}d_i)_{1\le i\le n+1}\) and \(d_i\), \(i=1,\ldots ,n+1\), are random nonnegative integer values.

For all considered matrices, we have compared the solution obtained using Algorithm 6 with that of the Matlab command \(\setminus \). The solution provided by Mathematica using 100-digit arithmetic has been considered the exact solution c. Then, we have computed in Mathematica the relative error of the Matlab approximation \(\tilde{c}\), taking into account the formula \(e=\Vert c-\tilde{c}\Vert / \Vert c\Vert \).

Algorithm 6. Resolution of the linear systems \(T_nc=d\) and \(W_nc=d\) to HRA.
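A sketch of Algorithm 6; the range of the random integers \(d_i\) is our choice for illustration:

d  = (-1).^(0:n)' .* randi(10, n+1, 1);  % alternating-sign right-hand side
cT = TNSolve(BT, d);                     % solution of T_n*c = d to HRA
cW = TNSolve(BW, d);                     % solution of W_n*c = d to HRA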

As opposed to the results obtained with the command \(\setminus \), the proposed algorithm preserves the accuracy for all the considered dimensions. Figure 5 illustrates the relative errors.

Fig. 5. Relative error of the approximations to the solution of the linear systems \(T_nc=d\) and \(W_nc=d\), where \({ d }=((-1)^{i+1}d_i)_{1\le i\le n+1}\) and \(d_i\), \(i=1,\ldots ,n+1\), are random nonnegative integer values.

8 Conclusions

In this paper, we have focused on the change of basis matrices between the monomial and the Newton bases corresponding to a given sequence of nodes, showing that their total positivity can be characterized in terms of the sign of the nodes. If the nodes have the same sign, using the bidiagonal factorization (6) provided by Theorem 3, algebraic problems related to these matrices can be solved to high relative accuracy, even though the matrices do not possess the total positivity property. As an interesting application, the Stirling numbers of the second kind can be considered divided differences of monomial polynomials at sets of nodes formed by the first consecutive nonnegative integers. Then, Stirling matrices can be considered particular cases of the above-mentioned change of basis matrices, and consequently, algorithms to high relative accuracy have been derived for the resolution of algebraic problems with collocation and Wronskian matrices of Touchard polynomial bases.