Total positivity of some polynomial matrices that enumerate labeled trees and forests, I. Forests of rooted labeled trees

We consider the lower-triangular matrix of generating polynomials that enumerate $k$-component forests of rooted trees on the vertex set $[n]$ according to the number of improper edges (generalizations of the Ramanujan polynomials). We show that this matrix is coefficientwise totally positive and that the sequence of its row-generating polynomials is coefficientwise Hankel-totally positive. More generally, we define the generic rooted-forest polynomials by introducing also a weight $m! \, \phi_m$ for each vertex with $m$ proper children. We show that if the weight sequence $\phi$ is Toeplitz-totally positive, then the two foregoing total-positivity results continue to hold. Our proofs use production matrices and exponential Riordan arrays.


Introduction and statement of results
It is well known that the number of forests of rooted trees on $n$ labeled vertices is $f_n = (n+1)^{n-1}$, and that the number of forests of rooted trees on $n$ labeled vertices having $k$ components (i.e. $k$ trees) is
$$f_{n,k} \;=\; \binom{n-1}{k-1}\, n^{n-k} \;=\; \binom{n}{k}\, k\, n^{n-k-1} \qquad (1.1)$$
(to be interpreted as $\delta_{k0}$ when $n = 0$). In particular, the number of rooted trees on $n$ labeled vertices is $f_{n,1} = n^{n-1}$. The first few $f_{n,k}$ and $f_n$ are

n \ k    0     1     2     3     4   |   f_n
0        1                           |     1
1        0     1                     |     1
2        0     2     1               |     3
3        0     9     6     1         |    16
4        0    64    48    12     1   |   125

[91, A061356/A137452 and A000272]. By adding a new vertex 0 and connecting it to the roots of all the trees, we see that $f_n$ is also the number of (unrooted) trees on $n+1$ labeled vertices, and that $f_{n,k}$ is the number of (unrooted) trees on $n+1$ labeled vertices in which some specified vertex (here vertex 0) has degree $k$.
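These counts are easy to spot-check by computer. The following sketch (our own illustration, not part of the paper; all function names are ours) enumerates rooted forests on $[n]$ by brute force, representing a forest by its parent map, and compares against the closed form (1.1):

```python
from math import comb
from itertools import product

def rooted_forests(n):
    """Enumerate rooted forests on [n] as parent maps:
    par[i] == 0 means vertex i+1 is a root, else par[i] is its parent."""
    for par in product(range(n + 1), repeat=n):
        if any(par[i] == i + 1 for i in range(n)):
            continue  # no vertex is its own parent
        ok = True
        for v in range(1, n + 1):   # following parents must reach 0 (acyclicity)
            seen = set()
            while v != 0 and ok:
                if v in seen:
                    ok = False
                seen.add(v)
                v = par[v - 1]
            if not ok:
                break
        if ok:
            yield par

def f(n, k):
    """Closed form (1.1) for the number of k-component rooted forests on [n]."""
    if n == 0 or k == 0:
        return 1 if n == k else 0
    return comb(n - 1, k - 1) * n ** (n - k)

# the two closed forms in (1.1) agree (the case k = n is trivial)
for n in range(1, 8):
    for k in range(1, n):
        assert comb(n - 1, k - 1) * n ** (n - k) == comb(n, k) * k * n ** (n - k - 1)

# brute force agrees with (1.1), and row sums give f_n = (n+1)^{n-1}
for n in range(0, 6):
    counts = {}
    for par in rooted_forests(n):
        k = sum(1 for p in par if p == 0)   # roots = components
        counts[k] = counts.get(k, 0) + 1
    for k in range(n + 1):
        assert counts.get(k, 0) == f(n, k)
    if n >= 1:
        assert sum(counts.values()) == (n + 1) ** (n - 1)
```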
The unit-lower-triangular matrix $(f_{n,k})_{n,k\ge 0}$ has the exponential generating function
$$\sum_{n=0}^\infty \sum_{k=0}^n f_{n,k}\, \frac{t^n}{n!}\, x^k \;=\; e^{x T(t)} \,, \qquad (1.2)$$
where
$$T(t) \;=\; \sum_{n=1}^\infty n^{n-1}\, \frac{t^n}{n!} \qquad (1.3)$$
is the tree function [31]. An equivalent statement is that the unit-lower-triangular matrix $(f_{n,k})_{n,k\ge 0}$ is [8] the exponential Riordan array [9,34,35] $\mathcal{R}[F,G]$ with $F(t) = 1$ and $G(t) = T(t)$; we will discuss this connection in Section 3.1.
The principal purpose of this paper is to prove the total positivity of some matrices related to (and generalizing) $f_n$ and $f_{n,k}$. Recall first that a finite or infinite matrix of real numbers is called totally positive (TP) if all its minors are nonnegative, and strictly totally positive (STP) if all its minors are strictly positive. Background information on totally positive matrices can be found in [40,48,71,95]; they have applications to many areas of pure and applied mathematics. Our first result is the following:

Theorem 1.1.
(a) The unit-lower-triangular matrix $F = (f_{n,k})_{n,k\ge 0}$ is totally positive.
(b) The Hankel matrix $(f_{n+n'+1,1})_{n,n'\ge 0}$ is totally positive.
It is known [49,95] that a Hankel matrix of real numbers is totally positive if and only if the underlying sequence is a Stieltjes moment sequence, i.e. the moments of a positive measure on $[0,\infty)$. And it is also known that $(f_{n+1,1})_{n\ge 0} = ((n+1)^n)_{n\ge 0}$ is a Stieltjes moment sequence. So Theorem 1.1(b) is equivalent to this known result. But our proof here is combinatorial and linear-algebraic, not analytic.
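For illustration, the Hankel-total positivity can be probed numerically on a finite truncation. The sketch below (our own, with a naive cofactor determinant) verifies that all minors of the leading $5 \times 5$ Hankel matrix of $((n+1)^n)_{n\ge 0}$ are nonnegative:

```python
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion; fine for these tiny matrices."""
    n = len(M)
    if n == 0:
        return 1
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

a = [(n + 1) ** n for n in range(10)]                  # f_{n+1,1} = (n+1)^n
H = [[a[i + j] for j in range(5)] for i in range(5)]   # leading 5x5 Hankel block

for k in range(1, 6):                                  # all minors, sizes 1..5
    for rows in combinations(range(5), k):
        for cols in combinations(range(5), k):
            assert det([[H[i][j] for j in cols] for i in rows]) >= 0
```

Of course this checks only a truncation; the point of the paper is to prove the property for the infinite matrix.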
However, this is only the beginning of the story, because our main interest [113,114,117] is not with sequences and matrices of real numbers, but rather with sequences and matrices of polynomials (with integer or real coefficients) in one or more indeterminates $\mathbf{x}$: in applications they will typically be generating polynomials that enumerate some combinatorial objects with respect to one or more statistics. We equip the polynomial ring $\mathbb{R}[\mathbf{x}]$ with the coefficientwise partial order: that is, we say that $P$ is nonnegative (and write $P \succeq 0$) in case $P$ is a polynomial with nonnegative coefficients. We then say that a matrix with entries in $\mathbb{R}[\mathbf{x}]$ is coefficientwise totally positive if all its minors are polynomials with nonnegative coefficients; and we say that a sequence $\mathbf{a} = (a_n)_{n\ge 0}$ with entries in $\mathbb{R}[\mathbf{x}]$ is coefficientwise Hankel-totally positive if its associated infinite Hankel matrix $H_\infty(\mathbf{a}) = (a_{n+n'})_{n,n'\ge 0}$ is coefficientwise totally positive. Most generally, we can consider sequences and matrices with entries in an arbitrary partially ordered commutative ring; total positivity and Hankel-total positivity are then defined in the obvious way (see Section 2.1). Coefficientwise Hankel-total positivity of a sequence of polynomials $(P_n(x))_{n\ge 0}$ implies the pointwise Hankel-total positivity (i.e. the Stieltjes moment property) for all $x \ge 0$, but it is vastly stronger.

Footnote 4: Including combinatorics [12-14,46,112], stochastic processes [71,72], statistics [71], the mechanics of oscillatory systems [48,49], the zeros of polynomials and entire functions [5,38,65,71,73,95], spline interpolation [52,71,108], Lie theory [45,82-84] and cluster algebras [43,44], the representation theory of the infinite symmetric group [10,126], the theory of immanants [121], planar discrete potential theory [32,42] and the planar Ising model [81], and several other areas [52].

Footnote 5: I trust that there will be no confusion between my use of the letter $F$ for the matrix $(f_{n,k})_{n,k\ge 0}$ or its generalizations, and also for the power series $F(t)$ in an exponential Riordan array $\mathcal{R}[F,G]$. The meaning should be unambiguous from the context.

Footnote 6: The integral representation [11] [70, Corollary 2.4]
$$\frac{(n+1)^n}{n!} \;=\; \frac{1}{\pi} \int_0^\pi \left( \frac{\sin\nu}{\nu}\, e^{\nu \cot\nu} \right)^{n+1} d\nu$$
shows that $(n+1)^n/n!$ is a Stieltjes moment sequence. Moreover, $n! = \int_0^\infty x^n e^{-x}\, dx$ is a Stieltjes moment sequence. Since the entrywise product of two Stieltjes moment sequences is easily seen to be a Stieltjes moment sequence, it follows that $(n+1)^n$ is a Stieltjes moment sequence. But I do not know any simple formula (i.e. one involving only a single integral over a real variable) for its Stieltjes integral representation.
Returning now to the matrix $F = (f_{n,k})_{n,k\ge 0}$, let us define its row-generating polynomials in the usual way:
$$F_n(x) \;=\; \sum_{k=0}^n f_{n,k}\, x^k . \qquad (1.4)$$
More generally, let us define its binomial partial row-generating polynomials
$$F_{n,k}(x) \;=\; \sum_{l=k}^n f_{n,l} \binom{l}{k} x^{l-k} . \qquad (1.5)$$
These polynomials have the explicit expressions
$$F_n(x) \;=\; x\,(x+n)^{n-1} \,, \qquad (1.6)$$
$$F_{n,k}(x) \;=\; \binom{n}{k}\,(x+k)\,(x+n)^{n-k-1} \,, \qquad (1.7)$$
as can easily be verified by expanding the right-hand sides. The $F_n(x)$ are a specialization of the celebrated Abel polynomials $A_n(x;a) = x(x-an)^{n-1}$ [47,89,102,105] to $a = -1$, while the $F_{n,k}(x)$ can be found in [96,127]. From (1.5) we see that $F_{n,k}(x)$ is a polynomial of degree $n-k$ with nonnegative integer coefficients, with leading coefficient $\binom{n}{k}$; in particular, $F_{n,n}(x) = 1$. Moreover, $F_{n,0}(x) = F_n(x)$ [because $\binom{l}{0} = 1$] and $F_{n,k}(0) = f_{n,k}$ [because $\binom{k}{k} = 1$]. So the matrix $F(x) = (F_{n,k}(x))_{n,k\ge 0}$ is a unit-lower-triangular matrix, with entries in $\mathbb{Z}[x]$, that has the row-generating polynomials $F_n(x)$ in its zeroth column and that reduces to $F = (f_{n,k})_{n,k\ge 0}$ when $x = 0$. Because of the presence of the binomial coefficients $\binom{l}{k}$ in (1.5), we call $F(x)$ the binomial row-generating matrix of the matrix $F$. Please note that the definition (1.5) can be written as a matrix factorization
$$F(x) \;=\; F\, B_x \,, \qquad (1.8)$$

Footnote 7: Let us remark that the ordinary row-generating matrix of a lower-triangular matrix, that is, (1.5) without the factors $\binom{l}{k}$, has been introduced recently by several authors [16,87,131]. I do not know whether the binomial row-generating matrix has been used previously, but I suspect that it has been.
where $B_x$ is the weighted binomial matrix
$$(B_x)_{lk} \;=\; \binom{l}{k}\, x^{l-k} \qquad (1.9)$$
(note that it too is unit-lower-triangular); this factorization will play a central role in our proofs. Our second result is then:

Theorem 1.2.
(a) The unit-lower-triangular polynomial matrix $F(x) = (F_{n,k}(x))_{n,k\ge 0}$ is coefficientwise totally positive.
(b) The polynomial sequence $\mathbf{F} = (F_n(x))_{n\ge 0}$ is coefficientwise Hankel-totally positive. [That is, the Hankel matrix $H_\infty(\mathbf{F}) = (F_{n+n'}(x))_{n,n'\ge 0}$ is coefficientwise totally positive.]

It is not difficult to see (see Lemma 2.3 below) that the matrix $B_x$ is coefficientwise totally positive; and it is an immediate consequence of the Cauchy-Binet formula that the product of two (coefficientwise) totally positive matrices is (coefficientwise) totally positive. So Theorem 1.2(a) is actually an immediate consequence of Theorem 1.1(a) together with (1.8) and Lemma 2.3. But Theorem 1.2(b) will take more work.
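As a quick cross-check of the Abel specialization $F_n(x) = x(x+n)^{n-1}$ against the definition $F_n(x) = \sum_k f_{n,k} x^k$, one can compare values at small integer points (a spot check of ours, not a proof):

```python
from math import comb

def f(n, k):
    """Closed form (1.1) for the number of k-component rooted forests on [n]."""
    if n == 0 or k == 0:
        return 1 if n == k else 0
    return comb(n - 1, k - 1) * n ** (n - k)

def F(n, x):
    """Row-generating polynomial F_n(x) = sum_k f_{n,k} x^k, evaluated at x."""
    return sum(f(n, k) * x ** k for k in range(n + 1))

# Abel specialization: F_n(x) should equal x (x + n)^{n-1}
for n in range(1, 8):
    for x in range(0, 10):
        assert F(n, x) == x * (x + n) ** (n - 1)
```

Agreement at these points is consistent with the polynomial identity; in particular $F_n(1) = f_n = (n+1)^{n-1}$.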
But this is still not the end of the story, because we want to generalize these polynomials further by adding further variables. First let us agree that the vertices of our forest $F$ of rooted trees will henceforth be labeled by the totally ordered set $[n] = \{1,2,\ldots,n\}$. Given a rooted tree $T \in F$ and two vertices $i,j$ of $T$, we say that $j$ is a descendant of $i$ if the unique path from the root of $T$ to $j$ passes through $i$. (Note in particular that every vertex is a descendant of itself.) Now let $e = ij$ be an edge of $T$, ordered so that $j$ is a descendant of $i$; then $i$ is the parent of $j$, and $j$ is a child of $i$. We say that the edge $e = ij$ is improper if there exists a descendant of $j$ (possibly $j$ itself) that is lower-numbered than $i$; otherwise we say that $e = ij$ is proper. Now let $f_{n,k,m}$ be the number of forests of rooted trees on the vertex set $[n]$ that have $k$ components and $m$ improper edges (note that $0 \le m \le n-k$, since a forest with $k$ components has $n-k$ edges). And introduce the generating polynomial that gives a weight $y$ for each improper edge and a weight $z$ for each proper edge:
$$f_{n,k}(y,z) \;=\; \sum_{m=0}^{n-k} f_{n,k,m}\, y^m z^{n-k-m} . \qquad (1.10)$$
The first few $f_{n,k}(y,z)$ are

n \ k   0   1                                   2                      3         4
0       1
1       0   1
2       0   z + y                               1
3       0   2z^2 + 4zy + 3y^2                   3z + 3y                1
4       0   6z^3 + 18z^2 y + 25zy^2 + 15y^3     11z^2 + 22zy + 15y^2   6z + 6y   1

Clearly $f_{n,k}(y,z)$ is a homogeneous polynomial of degree $n-k$ with nonnegative integer coefficients; it is a polynomial refinement of $f_{n,k}$ in the sense that $f_{n,k}(1,1) = f_{n,k}$. (Of course, it was redundant to introduce the two variables $y$ and $z$ instead of just one of them; we did it because it makes the formulae more symmetric.) In particular, the polynomials $f_{n,1}(y,z)$ enumerate rooted trees according to the number of improper edges; they are homogenized versions of the celebrated Ramanujan polynomials [21,37,62,63,69,80,98,111,128] [91, A054589].
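The improper-edge statistic is easy to get wrong, so a brute-force check of the small cases in the table above may be helpful. The sketch below (our own code; names are ours) enumerates rooted forests on $[3]$ via parent maps and tallies components and improper edges:

```python
from itertools import product

def forests(n):
    """Rooted forests on [n] as parent maps (par[i] == 0 means i+1 is a root)."""
    for par in product(range(n + 1), repeat=n):
        if any(par[i] == i + 1 for i in range(n)):
            continue
        ok = True
        for v in range(1, n + 1):
            seen = set()
            while v != 0 and ok:
                if v in seen:
                    ok = False
                seen.add(v)
                v = par[v - 1]
            if not ok:
                break
        if ok:
            yield par

def improper_count(par, n):
    """Edges (parent(j), j) that are improper: some descendant of j
    (including j itself) is lower-numbered than parent(j)."""
    desc = {j: {j} for j in range(1, n + 1)}
    for v in range(1, n + 1):          # add v to the descendant set of each ancestor
        u = par[v - 1]
        while u != 0:
            desc[u].add(v)
            u = par[u - 1]
    return sum(1 for j in range(1, n + 1)
               if par[j - 1] != 0 and min(desc[j]) < par[j - 1])

# tally forests on [3] by (number of components, number of improper edges)
counts = {}
for par in forests(3):
    key = (sum(1 for p in par if p == 0), improper_count(par, 3))
    counts[key] = counts.get(key, 0) + 1

# row n = 3 of the table: f_{3,1}(y,z) = 2z^2 + 4zy + 3y^2, f_{3,2}(y,z) = 3z + 3y
assert counts[(1, 0)] == 2 and counts[(1, 1)] == 4 and counts[(1, 2)] == 3
assert counts[(2, 0)] == 3 and counts[(2, 1)] == 3
assert counts[(3, 0)] == 1
```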
The unit-lower-triangular matrix $(f_{n,k}(y,z))_{n,k\ge 0}$ is also the exponential Riordan array $\mathcal{R}[F,G]$ with $F(t) = 1$ and with a $G(t)$ that can be expressed in terms of the tree function $T(t)$ of (1.3); we will show this in Section 3.2.
It should, however, be remarked that the coefficient matrix of the Ramanujan polynomials, $R = (r(n,m))_{n,m\ge 0}$, is not totally positive: the lower-left $7 \times 7$ minor of the leading $9 \times 9$ matrix is $-3709251874944000$. Now we can again introduce row-generating polynomials and binomial partial row-generating polynomials: we generalize (1.4) and (1.5) by defining
$$F_n(x,y,z) \;=\; \sum_{k=0}^n f_{n,k}(y,z)\, x^k \qquad (1.11)$$
and
$$F_{n,k}(x,y,z) \;=\; \sum_{l=k}^n f_{n,l}(y,z) \binom{l}{k} x^{l-k} . \qquad (1.12)$$
Thus, $F_n(x,y,z)$ is the generating polynomial for forests of rooted trees on the vertex set $[n]$, with a weight $x$ for each component and a weight $y$ (resp. $z$) for each improper (resp. proper) edge; and $F_{n,k}(x,y,z)$ is the generating polynomial for forests of rooted trees on the vertex set $[n]$ with $k$ distinguished components, with a weight $x$ for each undistinguished component and a weight $y$ (resp. $z$) for each improper (resp. proper) edge. Note that $F_n(x,y,z)$ [resp. $F_{n,k}(x,y,z)$] is a homogeneous polynomial of degree $n$ (resp. $n-k$) in $x,y,z$. Our third result is then:

Theorem 1.3.
(a) The unit-lower-triangular polynomial matrix $F(x,y,z) = (F_{n,k}(x,y,z))_{n,k\ge 0}$ is coefficientwise totally positive (jointly in $x,y,z$).
(b) The polynomial sequence $\mathbf{F} = (F_n(x,y,z))_{n\ge 0}$ is coefficientwise Hankel-totally positive (jointly in $x,y,z$).
(c) The polynomial sequence $(f_{n+1,1}(y,z))_{n\ge 0}$ is coefficientwise Hankel-totally positive (jointly in $y,z$).
Here part (c) is an easy consequence of part (b), obtained by restricting to $n \ge 1$, dividing by $x$, and taking $x \to 0$. We remark that Chen et al. [20, Corollary 3.3] have proven that the sequence $(f_{n+1,1}(y,z))_{n\ge 0}$ of Ramanujan polynomials is coefficientwise strongly log-convex (i.e. coefficientwise Hankel-totally positive of order 2). Theorem 1.3(c) is thus an extension of this result to prove coefficientwise Hankel-total positivity of all orders.
But this is still not the end of the story, because we can add even more variables: in fact, an infinite set. Given a rooted tree $T$ on a totally ordered vertex set and vertices $i,j \in T$ such that $j$ is a child of $i$, we say that $j$ is a proper child of $i$ if the edge $e = ij$ is proper (that is, $j$ and all its descendants are higher-numbered than $i$). Now let $\boldsymbol{\phi} = (\phi_m)_{m\ge 0}$ be indeterminates, and let $f_{n,k}(y,\boldsymbol{\phi})$ be the generating polynomial for $k$-component forests of rooted trees on the vertex set $[n]$ with a weight $\widehat{\phi}_m \overset{\rm def}{=} m!\,\phi_m$ for each vertex with $m$ proper children and a weight $y$ for each improper edge. (We will see later why it is convenient to introduce the factors $m!$ in this definition. Observe also that the variable $z$ is now redundant, because it would simply scale $\phi_m \to z^m \phi_m$.) We call the polynomials $f_{n,k}(y,\boldsymbol{\phi})$ the generic rooted-forest polynomials. Here $\boldsymbol{\phi} = (\phi_m)_{m\ge 0}$ are in the first instance indeterminates, so that $f_{n,k}(y,\boldsymbol{\phi})$ belongs to the polynomial ring $\mathbb{Z}[y,\boldsymbol{\phi}]$; but we can then, if we wish, substitute specific values for $\boldsymbol{\phi}$ in any commutative ring $R$, leading to values $f_{n,k}(y,\boldsymbol{\phi}) \in R[y]$. (Similar substitutions can of course also be made for $y$.) When doing this we will use the same notation $f_{n,k}(y,\boldsymbol{\phi})$, as the desired interpretation for $\boldsymbol{\phi}$ should be clear from the context. The polynomial $f_{n,k}(y,\boldsymbol{\phi})$ is quasi-homogeneous of degree $n-k$ when $\phi_m$ is assigned weight $m$ and $y$ is assigned weight 1. It follows from this quasi-homogeneity that the variable $y$ is now in principle redundant, since it can be absorbed into $\boldsymbol{\phi}$: namely, if we define a rescaled $\widetilde{\boldsymbol{\phi}}$ by $\widetilde{\phi}_m = y^{-m} \phi_m$, then
$$f_{n,k}(y,\boldsymbol{\phi}) \;=\; y^{n-k}\, f_{n,k}(1,\widetilde{\boldsymbol{\phi}}) . \qquad (1.17)$$
However, we prefer to retain the redundant variable $y$, in order to avoid the division by $y$ inherent in (1.17); in particular, this facilitates the study of the limiting case $y = 0$. The lower-triangular matrix $(f_{n,k}(y,\boldsymbol{\phi}))_{n,k\ge 0}$ is also an exponential Riordan array $\mathcal{R}[F,G]$ with $F(t) = 1$, as we will show in Section 3.3; but this time the function $G(t)$ is rather more complicated.
Now define the row-generating polynomials
$$F_n(x,y,\boldsymbol{\phi}) \;=\; \sum_{k=0}^n f_{n,k}(y,\boldsymbol{\phi})\, x^k$$
and the binomial partial row-generating polynomials
$$F_{n,k}(x,y,\boldsymbol{\phi}) \;=\; \sum_{l=k}^n f_{n,l}(y,\boldsymbol{\phi}) \binom{l}{k} x^{l-k} .$$
Thus, $F_n(x,y,\boldsymbol{\phi})$ is the generating polynomial for forests of rooted trees on the vertex set $[n]$, with a weight $x$ for each component, $y$ for each improper edge, and $m!\,\phi_m$ for each vertex with $m$ proper children; and $F_{n,k}(x,y,\boldsymbol{\phi})$ is the generating polynomial for forests of rooted trees on the vertex set $[n]$ with $k$ distinguished components, with a weight $x$ for each undistinguished component, $y$ for each improper edge, and $m!\,\phi_m$ for each vertex with $m$ proper children. Our fundamental result is then the following:

Theorem 1.4. Let $R$ be a partially ordered commutative ring, and let $\boldsymbol{\phi} = (\phi_m)_{m\ge 0}$ be a sequence in $R$ that is Toeplitz-totally positive of order $r$. Then:
(a) The lower-triangular polynomial matrix $F(x,y,\boldsymbol{\phi}) = (F_{n,k}(x,y,\boldsymbol{\phi}))_{n,k\ge 0}$ is coefficientwise totally positive of order $r$ (jointly in $x,y$).
(b) The polynomial sequence $\mathbf{F} = (F_n(x,y,\boldsymbol{\phi}))_{n\ge 0}$ is coefficientwise Hankel-totally positive of order $r$ (jointly in $x,y$).
(c) The polynomial sequence $(f_{n+1,1}(y,\boldsymbol{\phi}))_{n\ge 0}$ is coefficientwise Hankel-totally positive of order $r$ (jointly in $y$).
(The concept of Toeplitz-total positivity in a partially ordered commutative ring will be explained in detail in Section 2.1. Total positivity of order $r$ means that the minors of size $\le r$ are nonnegative.) Here (a) and (b) are once again the key results; (c) is an easy consequence of (b), obtained by restricting to $n \ge 1$, dividing by $x$, and taking $x \to 0$. Specializing Theorem 1.4 to $r = \infty$, $R = \mathbb{Q}$ and $\phi_m = z^m/m!$ (which is indeed Toeplitz-totally positive: see (2.1) below), we recover Theorem 1.3. Theorem 1.4 generalizes the main result of our recent paper [92] on the generic Lah polynomials, to which it reduces when $y = 0$; we will explain this connection in Section 5.
The main tool in our proofs is the theory of production matrices [33,34] as applied to total positivity [117], combined with the theory of exponential Riordan arrays [9,34,35]. Therefore, in Section 2 we review some facts about total positivity, production matrices, and exponential Riordan arrays that will play a central role in our arguments. This development culminates in Theorem 2.20; it is the fundamental theoretical result that underlies all our proofs. In Section 3 we show that the matrices $(f_{n,k})_{n,k\ge 0}$, $(f_{n,k}(y,z))_{n,k\ge 0}$ and $(f_{n,k}(y,\boldsymbol{\phi}))_{n,k\ge 0}$ are exponential Riordan arrays $\mathcal{R}[F,G]$ with $F = 1$, and we compute their generating functions $G$. In Section 4 we prove Theorems 1.1-1.4, by exhibiting the production matrices for $F$, $F(x)$, $F(x,y,z)$ and $F(x,y,\boldsymbol{\phi})$ and proving that these production matrices are coefficientwise totally positive. In Section 5 we discuss the connection with the generic Lah polynomials that were introduced in [92]. Finally, in Section 6 we pose some open problems.
A sequel devoted to a different (but closely related) class of polynomials enumerating rooted labeled trees, written in collaboration with Xi Chen, will appear elsewhere [25].
Note added: Some related ideas concerning total positivity and exponential Riordan arrays can be found in a recent paper of Zhu [132].

Preliminaries
Here we review some definitions and results from [92,117] that will be needed in the sequel. We also include a brief review of exponential Riordan arrays [9,34,35] and Lagrange inversion [53]. The key result in this section, obtained by straightforward combination of the others, is Theorem 2.20.

Partially ordered commutative rings and total positivity
In this paper all rings will be assumed to have an identity element 1 and to be nontrivial ($1 \ne 0$).
A partially ordered commutative ring is a pair (R, P) where R is a commutative ring and P is a subset of R satisfying (a) 0, 1 ∈ P.
(b) If $a,b \in P$, then $a+b \in P$ and $ab \in P$.
We call $P$ the nonnegative elements of $R$, and we define a partial order on $R$ (compatible with the ring structure) by writing $a \le b$ as a synonym for $b-a \in P$. Please note that, unlike the practice in real algebraic geometry [15,78,85,97], we do not assume here that squares are nonnegative; indeed, this property fails completely for our prototypical example, the ring of polynomials with the coefficientwise order, since for instance $(1-x)^2 = 1 - 2x + x^2$ is not coefficientwise nonnegative.

A (finite or infinite) matrix with entries in a partially ordered commutative ring is called totally positive (TP) if all its minors are nonnegative; it is called totally positive of order $r$ (TP$_r$) if all its minors of size $\le r$ are nonnegative. It follows immediately from the Cauchy-Binet formula that the product of two TP (resp. TP$_r$) matrices is TP (resp. TP$_r$). This fact is so fundamental to the theory of total positivity that we shall henceforth use it without comment.
We say that a sequence $\mathbf{a} = (a_n)_{n\ge 0}$ with entries in a partially ordered commutative ring is Hankel-totally positive (resp. Hankel-totally positive of order $r$) if its associated infinite Hankel matrix $H_\infty(\mathbf{a}) = (a_{i+j})_{i,j\ge 0}$ is TP (resp. TP$_r$). We say that $\mathbf{a}$ is Toeplitz-totally positive (resp. Toeplitz-totally positive of order $r$) if its associated infinite Toeplitz matrix $T_\infty(\mathbf{a}) = (a_{i-j})_{i,j\ge 0}$ (where $a_n \overset{\rm def}{=} 0$ for $n < 0$) is TP (resp. TP$_r$).

When $R = \mathbb{R}$, Hankel- and Toeplitz-total positivity have simple analytic characterizations. A sequence $(a_n)_{n\ge 0}$ of real numbers is Hankel-totally positive if and only if it is a Stieltjes moment sequence [49, Théorème 9] [95, section 4.6]. And a sequence $(a_n)_{n\ge 0}$ of real numbers is Toeplitz-totally positive if and only if its ordinary generating function can be written as
$$\sum_{n=0}^\infty a_n t^n \;=\; C\, e^{\gamma t}\, t^m \prod_{i=1}^\infty \frac{1 + \alpha_i t}{1 - \beta_i t}$$
with $m \in \mathbb{N}$, $C, \gamma, \alpha_i, \beta_i \ge 0$, $\sum_i \alpha_i < \infty$ and $\sum_i \beta_i < \infty$: this is the celebrated Aissen-Schoenberg-Whitney-Edrei theorem [71, Theorem 5.3, p. 412]. However, in a general partially ordered commutative ring $R$, the concepts of Hankel- and Toeplitz-total positivity are more subtle.
We will need a few easy facts about the total positivity of special matrices: Lemma 2.1 (Bidiagonal matrices). Let A be a matrix with entries in a partially ordered commutative ring, with the property that all its nonzero entries belong to two consecutive diagonals. Then A is totally positive if and only if all its entries are nonnegative.
Proof. The nonnegativity of the entries (i.e. TP$_1$) is obviously a necessary condition for TP. Conversely, for a matrix of this type it is easy to see that every nonzero minor is simply a product of some entries.
Lemma 2.2 (Toeplitz matrix of powers). Let $R$ be a partially ordered commutative ring, let $x \in R$, and consider the infinite Toeplitz matrix
$$T_x \;=\; T_\infty\big((x^n)_{n\ge 0}\big) \;=\; \begin{pmatrix} 1 & & & \\ x & 1 & & \\ x^2 & x & 1 & \\ x^3 & x^2 & x & 1 \\ \vdots & & & & \ddots \end{pmatrix} .$$
Then every minor of $T_x$ is either zero or else a power of $x$.
In particular, if x is an indeterminate, then T x is totally positive in the ring Z[x] equipped with the coefficientwise order.
Proof. Consider a submatrix $A = (T_x)_{IJ}$ with rows $I = \{i_1 < \ldots < i_k\}$ and columns $J = \{j_1 < \ldots < j_k\}$. We will prove by induction on $k$ that $\det A$ is either zero or a power of $x$. It is trivial if $k = 0$ or 1. If $A_{12} = A_{22} = 0$, then $A_{1s} = A_{2s} = 0$ for all $s \ge 2$ by definition of $T_x$, and $\det A = 0$. If $A_{12}$ and $A_{22}$ are both nonzero, then the first column of $A$ is $x^{j_2 - j_1}$ times the second column, and again $\det A = 0$. Finally, if $A_{12} = 0$ and $A_{22} \ne 0$ (by definition of $T_x$ this is the only other possibility), then $A_{1s} = 0$ for all $s \ge 2$; we then replace the first column of $A$ by the first column minus $x^{j_2 - j_1}$ times the second column, so that the new first column has $x^{i_1 - j_1}$ in its first entry (or zero if $i_1 < j_1$) and zeroes elsewhere. Then $\det A$ equals $x^{i_1 - j_1}$ (or zero if $i_1 < j_1$) times the determinant of its last $k-1$ rows and columns, so the claim follows from the inductive hypothesis.
See also Example 2.10 below for a second proof of the total positivity of T x , using production matrices.
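A computational spot-check of Lemma 2.2 (our own illustration, not part of the proof): evaluating the indeterminate at $x = 2$, every minor of a truncation of $T_x$ should come out to be 0 or a power of 2.

```python
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion; fine for these tiny matrices."""
    n = len(M)
    if n == 0:
        return 1
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_power(v, base):
    """True iff v == base**m for some m >= 0."""
    while v > 1 and v % base == 0:
        v //= base
    return v == 1

x, N = 2, 6                                   # specialize the indeterminate x to 2
T = [[x ** (i - j) if i >= j else 0 for j in range(N)] for i in range(N)]

for k in range(1, 5):
    for rows in combinations(range(N), k):
        for cols in combinations(range(N), k):
            d = det([[T[i][j] for j in cols] for i in rows])
            assert d == 0 or is_power(d, 2)   # every minor is 0 or a power of 2
```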

Lemma 2.3 (Binomial matrix).
In the ring $\mathbb{Z}$, the binomial matrix $B = \big( \binom{n}{k} \big)_{n,k\ge 0}$ is totally positive. More generally, the weighted binomial matrix $B_{x,y} = \big( \binom{n}{k}\, x^{n-k} y^k \big)_{n,k\ge 0}$ is totally positive in the ring $\mathbb{Z}[x,y]$ equipped with the coefficientwise order.
Proof. The binomial matrix $B$ is well known to be totally positive (an ab initio proof is given in Example 2.11 below). Now $B_{x,y} = D B D'$ where $D = \mathrm{diag}\big((x^n)_{n\ge 0}\big)$ and $D' = \mathrm{diag}\big((x^{-k} y^k)_{k\ge 0}\big)$. By Cauchy-Binet, $B_{x,y}$ is totally positive in the ring $\mathbb{Z}[x,x^{-1},y]$ equipped with the coefficientwise order. But because $B$ is lower-triangular, the elements of $B_{x,y}$ actually lie in the subring $\mathbb{Z}[x,y]$.
See also Example 2.11 below for an ab initio proof of Lemma 2.3 using production matrices.
Finally, let us show that the sufficiency half of the Aissen-Schoenberg-Whitney-Edrei theorem holds (with a slight modification to avoid infinite products) in a general partially ordered commutative ring. We give two versions, depending on whether or not it is assumed that the ring $R$ contains the rationals:

Lemma 2.4 (Sufficient condition for Toeplitz-total positivity). Let $R$ be a partially ordered commutative ring, let $N$ be a nonnegative integer, and let $\alpha_1,\ldots,\alpha_N$, $\beta_1,\ldots,\beta_N$ and $C$ be nonnegative elements in $R$. Define the sequence $\mathbf{a} = (a_n)_{n\ge 0}$ in $R$ by
$$\sum_{n=0}^\infty a_n t^n \;=\; C \prod_{i=1}^N \frac{1 + \alpha_i t}{1 - \beta_i t} .$$
Then the Toeplitz matrix $T_\infty(\mathbf{a})$ is totally positive.
Of course, it is no loss of generality to have the same number N of alphas and betas, since some of the α i or β i could be zero.
Lemma 2.5 (Sufficient condition for Toeplitz-total positivity, with rationals). Let $R$ be a partially ordered commutative ring containing the rationals, let $N$ be a nonnegative integer, and let $\alpha_1,\ldots,\alpha_N$, $\beta_1,\ldots,\beta_N$, $\gamma$ and $C$ be nonnegative elements in $R$. Define the sequence $\mathbf{a} = (a_n)_{n\ge 0}$ in $R$ by
$$\sum_{n=0}^\infty a_n t^n \;=\; C\, e^{\gamma t} \prod_{i=1}^N \frac{1 + \alpha_i t}{1 - \beta_i t} .$$
Then the Toeplitz matrix $T_\infty(\mathbf{a})$ is totally positive.
Proof of Lemma 2.4. We make a series of elementary observations:
1) The sequence $\mathbf{a} = (1,\alpha,0,0,0,\ldots)$, corresponding to the generating function $A(t) = 1 + \alpha t$, is Toeplitz-totally positive if and only if $\alpha \ge 0$. The "only if" is trivial, and the "if" follows from Lemma 2.1 because the Toeplitz matrix $T_\infty(\mathbf{a})$ is bidiagonal.
2) The sequence $\mathbf{a} = (\beta^n)_{n\ge 0}$, corresponding to the generating function $A(t) = 1/(1 - \beta t)$, is Toeplitz-totally positive whenever $\beta \ge 0$: here $T_\infty(\mathbf{a}) = T_\beta$, and by Lemma 2.2 every minor of $T_\beta$ is either zero or a power of $\beta$.
3) If $\mathbf{a}$ and $\mathbf{b}$ are sequences with ordinary generating functions $A(t)$ and $B(t)$, then the convolution $\mathbf{c} = \mathbf{a} * \mathbf{b}$, defined by $c_n = \sum_{k=0}^n a_k b_{n-k}$, has ordinary generating function $C(t) = A(t)\, B(t)$; moreover, the Toeplitz matrix $T_\infty(\mathbf{c})$ is simply the matrix product $T_\infty(\mathbf{a})\, T_\infty(\mathbf{b})$. It thus follows from the Cauchy-Binet formula that if $\mathbf{a}$ and $\mathbf{b}$ are Toeplitz-totally positive, then so is $\mathbf{c}$.
4) A Toeplitz-totally positive sequence can be multiplied by a nonnegative constant C, and it is still Toeplitz-totally positive.
Combining these observations proves the lemma.
Proof of Lemma 2.5. We add to the proof of Lemma 2.4 the following additional observation:
5) The sequence $\mathbf{a} = (\gamma^n/n!)_{n\ge 0}$, corresponding to the generating function $A(t) = e^{\gamma t}$, is Toeplitz-totally positive if and only if $\gamma \ge 0$. The "only if" is again trivial, and the "if" follows from Lemma 2.3 because $\gamma^{n-k}/(n-k)! = \binom{n}{k}\, \gamma^{n-k} \times k!/n!$ and hence $T_\infty(\mathbf{a}) = D^{-1} B_{\gamma,1} D$ where $D = \mathrm{diag}\big((n!)_{n\ge 0}\big)$.

Production matrices
The method of production matrices [33,34] has become in recent years an important tool in enumerative combinatorics. In the special case of a tridiagonal production matrix, this construction goes back to Stieltjes' [122,123] work on continued fractions: the production matrix of a classical S-fraction or J-fraction is tridiagonal. In the present paper, by contrast, we shall need production matrices that are lower-Hessenberg (i.e. vanish above the first superdiagonal) but are not in general tridiagonal. We therefore begin by reviewing briefly the basic theory of production matrices. The important connection of production matrices with total positivity will be treated in the next subsection.
Let P = (p ij ) i,j≥0 be an infinite matrix with entries in a commutative ring R. In order that powers of P be well-defined, we shall assume that P is either row-finite (i.e. has only finitely many nonzero entries in each row) or column-finite.
Let us now define an infinite matrix $A = (a_{nk})_{n,k\ge 0}$ by
$$a_{nk} \;=\; (P^n)_{0k}$$
(in particular, $a_{0k} = \delta_{0k}$). Writing out the matrix multiplications explicitly, we have
$$a_{nk} \;=\; \sum_{i_1,\ldots,i_{n-1}} p_{0 i_1}\, p_{i_1 i_2}\, p_{i_2 i_3} \cdots p_{i_{n-2}\, i_{n-1}}\, p_{i_{n-1}\, k} \,, \qquad (2.6)$$
so that $a_{nk}$ is the total weight for all $n$-step walks in $\mathbb{N}$ from $i_0 = 0$ to $i_n = k$, in which the weight of a walk is the product of the weights of its steps, and a step from $i$ to $j$ gets a weight $p_{ij}$. Yet another equivalent formulation is to define the entries $a_{nk}$ by the recurrence
$$a_{n+1,k} \;=\; \sum_i a_{ni}\, p_{ik} \qquad (2.7)$$
with the initial condition $a_{0k} = \delta_{0k}$. We call $P$ the production matrix and $A$ the output matrix, and we write $A = O(P)$. Note that if $P$ is row-finite, then so is $O(P)$; if $P$ is lower-Hessenberg, then $O(P)$ is lower-triangular; if $P$ is lower-Hessenberg with invertible superdiagonal entries, then $O(P)$ is lower-triangular with invertible diagonal entries; and if $P$ is unit-lower-Hessenberg (i.e. lower-Hessenberg with entries 1 on the superdiagonal), then $O(P)$ is unit-lower-triangular. In all the applications in this paper, $P$ will be lower-Hessenberg.
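The defining recurrence $a_{n+1,k} = \sum_i a_{ni}\, p_{ik}$ translates directly into code. The sketch below (our own; function names are ours) builds the first rows of the output matrix $O(P)$ from a production matrix given as a function $p(i,k)$, and checks, for the upper-bidiagonal choice $P = xI + y\Delta$ with $x = 2$, $y = 3$, that the output is the weighted binomial matrix:

```python
from math import comb

def output_matrix(p, nrows):
    """First nrows rows of A = O(P): a_{0k} = delta_{0k} and
    a_{n+1,k} = sum_i a_{ni} p(i,k), where p(i,k) gives the entries of P."""
    ncols = nrows + 1                      # enough columns for lower-Hessenberg P
    A = [[1 if k == 0 else 0 for k in range(ncols)]]
    for _ in range(nrows - 1):
        prev = A[-1]
        A.append([sum(prev[i] * p(i, k) for i in range(ncols))
                  for k in range(ncols)])
    return A

# P = x I + y Delta with x = 2, y = 3: the output matrix should be the
# weighted binomial matrix with entries C(n,k) x^{n-k} y^k
x, y = 2, 3
A = output_matrix(lambda i, k: x * (i == k) + y * (k == i + 1), 6)
for n in range(6):
    for k in range(n + 1):
        assert A[n][k] == comb(n, k) * x ** (n - k) * y ** k
    assert all(A[n][k] == 0 for k in range(n + 1, 7))
```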
The matrix $P$ can also be interpreted as the adjacency matrix for a weighted directed graph on the vertex set $\mathbb{N}$ (where the edge $ij$ is omitted whenever $p_{ij} = 0$). Then $P$ is row-finite (resp. column-finite) if and only if every vertex has finite out-degree (resp. finite in-degree).
This iteration process can be given a compact matrix formulation. Let us define the augmented production matrix
$$\widetilde{P} \;\overset{\rm def}{=}\; \begin{pmatrix} \mathbf{e}^{\rm T} \\ P \end{pmatrix} , \qquad (2.8)$$
where $\mathbf{e}^{\rm T} = (1,0,0,\ldots)$. Then the recurrence (2.7) together with the initial condition $a_{0k} = \delta_{0k}$ can be written as
$$A \;=\; \begin{pmatrix} 1 & \mathbf{0} \\ \mathbf{0} & A \end{pmatrix} \widetilde{P} . \qquad (2.9)$$
This identity can be iterated to give the factorization
$$A \;=\; \cdots\, (I_2 \oplus \widetilde{P})\, (I_1 \oplus \widetilde{P})\; \widetilde{P} \,, \qquad (2.10)$$
where $I_k$ is the $k \times k$ identity matrix; and conversely, (2.10) implies (2.9).

Now let $\Delta = (\delta_{i+1,j})_{i,j\ge 0}$ be the matrix with 1 on the superdiagonal and 0 elsewhere. Then for any matrix $M$ with rows indexed by $\mathbb{N}$, the product $\Delta M$ is simply $M$ with its zeroth row removed and all other rows shifted upwards. (Some authors use a special notation for this shifted matrix.) The recurrence (2.7) can then be written as
$$\Delta A \;=\; A P . \qquad (2.11)$$
It follows that if $A$ is a row-finite matrix that has a row-finite inverse $A^{-1}$ and has first row $a_{0k} = \delta_{0k}$, then $P = A^{-1} \Delta A$ is the unique matrix such that $A = O(P)$. This holds, in particular, if $A$ is lower-triangular with invertible diagonal entries and $a_{00} = 1$; then $A^{-1}$ is lower-triangular and $P = A^{-1} \Delta A$ is lower-Hessenberg. And if $A$ is unit-lower-triangular, then $P = A^{-1} \Delta A$ is unit-lower-Hessenberg.

We shall repeatedly use the following easy facts:

Lemma 2.6 (Production matrix of a product). Let $P = (p_{ij})_{i,j\ge 0}$ be a row-finite matrix (with entries in a commutative ring $R$), with output matrix $A = O(P)$; and let $B = (b_{ij})_{i,j\ge 0}$ be a lower-triangular matrix with invertible (in $R$) diagonal entries. Then
$$AB \;=\; b_{00}\, O(B^{-1} P B) .$$
That is, up to a factor $b_{00}$, the matrix $AB$ has production matrix $B^{-1} P B$.
Proof. Since $P$ is row-finite, so is $A = O(P)$; then the matrix products $AB$ and $B^{-1} P B$ arising in the lemma are well-defined. Now
$$\Delta(AB) \;=\; (\Delta A)\, B \;=\; A P B .$$
But $B$ is lower-triangular with invertible diagonal entries, so $B$ is invertible and $APB = (AB)(B^{-1} P B)$. Since the zeroth row of $AB$ is $b_{00}\,(1,0,0,\ldots)$, the matrix $b_{00}^{-1}\, AB$ satisfies the recurrence and initial condition defining $O(B^{-1} P B)$, which proves the claim.

Lemma 2.7 (Production matrix of a down-shifted matrix). Let $P = (p_{ij})_{i,j\ge 0}$ be a row-finite or column-finite matrix (with entries in a commutative ring $R$), with output matrix $A = O(P)$; and let $c$ be an element of $R$. Now define the matrix $Q = (q_{ij})_{i,j\ge 0}$ by
$$q_{ij} \;=\; \begin{cases} c & \text{if } i = 0 \text{ and } j = 1 \\ p_{i-1,\,j-1} & \text{if } i,j \ge 1 \\ 0 & \text{otherwise} \end{cases} \qquad (2.16)$$
Then
$$O(Q) \;=\; \begin{pmatrix} 1 & \mathbf{0} \\ \mathbf{0} & c\, O(P) \end{pmatrix} .$$

Proof. We use (2.6) and its analogue for $Q$:
$$(Q^{n+1})_{0,\,k+1} \;=\; \sum_{i_1,\ldots,i_n} q_{0 i_1}\, q_{i_1 i_2} \cdots q_{i_n,\, k+1} . \qquad (2.17)$$
In (2.17), the only nonzero contributions come from $i_1 = 1$, with $q_{01} = c$; and then we must also have $i_2, \ldots, i_n \ge 1$, so that (2.17) reduces to $c\, (P^n)_{0k} = c\, a_{nk}$. Since moreover the zeroth row and zeroth column of $O(Q)$ are easily seen to be $(1,0,0,\ldots)$, the claim follows.

Production matrices and total positivity
Let $P = (p_{ij})_{i,j\ge 0}$ be a matrix with entries in a partially ordered commutative ring $R$. We will use $P$ as a production matrix; let $A = O(P)$ be the corresponding output matrix. As before, we assume that $P$ is either row-finite or column-finite.
When P is totally positive, it turns out [117] that the output matrix O(P ) has two total-positivity properties: firstly, it is totally positive; and secondly, its zeroth column is Hankel-totally positive. Since [117] is not yet publicly available, we shall present briefly here (with proof) the main results that will be needed in the sequel.
The fundamental fact that drives the whole theory is the following:

Proposition 2.8 (Minors of the output matrix). Every $k \times k$ minor of the output matrix $A = O(P)$ can be written as a sum of products of minors of size $\le k$ of the production matrix $P$.
In this proposition the matrix elements $\mathbf{p} = \{p_{ij}\}_{i,j\ge 0}$ should be interpreted in the first instance as indeterminates: for instance, we can fix a row-finite or column-finite set $S \subseteq \mathbb{N} \times \mathbb{N}$ and define the matrix $P^S = (p^S_{ij})_{i,j\in\mathbb{N}}$ with entries
$$p^S_{ij} \;=\; \begin{cases} p_{ij} & \text{if } (i,j) \in S \\ 0 & \text{if } (i,j) \notin S \end{cases}$$
Then the entries (and hence also the minors) of both $P$ and $A$ belong to the polynomial ring $\mathbb{Z}[\mathbf{p}]$, and the assertion of Proposition 2.8 makes sense. Of course, we can subsequently specialize the indeterminates $\mathbf{p}$ to values in any commutative ring $R$.
Proof of Proposition 2.8. For any infinite matrix $X = (x_{ij})_{i,j\ge 0}$, let us write $X_N = (x_{ij})_{0\le i \le N-1,\, j\ge 0}$ for the submatrix consisting of the first $N$ rows (and all the columns) of $X$. Every $k \times k$ minor of $A$ is of course a $k \times k$ minor of $A_N$ for some $N$, so it suffices to prove that the claim about minors holds for all the $A_N$. But this is easy: the fundamental identity (2.9) implies
$$A_{N+1} \;=\; \begin{pmatrix} 1 & \mathbf{0} \\ \mathbf{0} & A_N \end{pmatrix} \widetilde{P} .$$
So the result follows by induction on $N$, using the Cauchy-Binet formula.
If we now specialize the indeterminates p to values in some partially ordered commutative ring R, we can immediately conclude: Theorem 2.9 (Total positivity of the output matrix). Let P be an infinite matrix that is either row-finite or column-finite, with entries in a partially ordered commutative ring R. If P is totally positive of order r, then so is A = O(P ).
Remarks. 1. In the case $R = \mathbb{R}$, Theorem 2.9 is due to Karlin [50]. Related results have appeared in [2,23,24,26,79,129,130]. However, all of these results concerned only special cases: [2,24,79,129] treated the case in which the production matrix $P$ is tridiagonal; [130] treated a (special) case in which $P$ is upper bidiagonal; [23] treated the case in which $P$ is the production matrix of a Riordan array; and [26,50] treated (implicitly) the case in which $P$ is upper-triangular and Toeplitz. But the argument is in fact completely general, as we have just seen; there is no need to assume any special form for the matrix $P$.
3. A slightly different version of this proof was presented in [92,93]. The simplified reformulation given here, using the augmented production matrix, is due to Mu and Wang [88].

Example 2.10 (Toeplitz matrix of powers). Let $P = x\, e_{00} + \Delta$, where $e_{00}$ denotes the matrix with entry 1 in position $(0,0)$ and 0 elsewhere; by Lemma 2.1, $P$ is TP in the ring $\mathbb{Z}[x]$ equipped with the coefficientwise order. An easy computation shows that $O(x\, e_{00} + \Delta) = T_x$: a walk from 0 to $k$ in $n$ steps must stay at 0 for its first $n-k$ steps (each with weight $x$) and then ascend. So Theorem 2.9 gives a second proof of the total positivity of $T_x$.

Example 2.11 (Binomial matrix). Let $P$ be the upper-bidiagonal Toeplitz matrix $xI + y\Delta$, where $x$ and $y$ are indeterminates. By Lemma 2.1, $P$ is TP in the ring $\mathbb{Z}[x,y]$ equipped with the coefficientwise order. An easy computation shows that $O(xI + y\Delta) = B_{x,y}$, the weighted binomial matrix with entries $(B_{x,y})_{nk} = \binom{n}{k} x^{n-k} y^k$. So Theorem 2.9 implies that $B_{x,y}$ is TP in the ring $\mathbb{Z}[x,y]$ equipped with the coefficientwise order. This gives an ab initio proof of Lemma 2.3.

Now consider the zeroth column of the output matrix:
$$O_0(P) \;\overset{\rm def}{=}\; \big( (P^n)_{00} \big)_{n\ge 0} . \qquad (2.20)$$
Then the Hankel matrix of $O_0(P)$ has matrix elements
$$(P^{n+n'})_{00} \;=\; \sum_k (P^n)_{0k}\, (P^{n'})_{k0} \;=\; \sum_k O(P)_{nk}\, O(P^{\rm T})_{n'k} \,. \qquad (2.21)$$
(Note that the sum over $k$ has only finitely many nonzero terms: if $P$ is row-finite, then there are finitely many nonzero $(P^n)_{0k}$, while if $P$ is column-finite, there are finitely many nonzero $(P^{n'})_{k0}$.) We have therefore proven:

Lemma 2.12 (Identity for Hankel matrix of the zeroth column). Let $P$ be a row-finite or column-finite matrix with entries in a commutative ring $R$. Then
$$H_\infty(O_0(P)) \;=\; O(P)\, O(P^{\rm T})^{\rm T} .$$

Combining Proposition 2.8 with Lemma 2.12 and the Cauchy-Binet formula, we obtain:

Corollary 2.13 (Hankel minors of the zeroth column). Every $k \times k$ minor of the infinite Hankel matrix $H_\infty(O_0(P)) = ((P^{n+n'})_{00})_{n,n'\ge 0}$ can be written as a sum of products of the minors of size $\le k$ of the production matrix $P$.
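The Hankel identity of Lemma 2.12 holds for an arbitrary production matrix. Here is a quick numeric illustration (our own) with the all-ones tridiagonal matrix, whose zeroth output column is the Motzkin numbers:

```python
N = 12   # truncation large enough that boundary effects never reach entry (0,0)
P = [[1 if abs(i - j) <= 1 else 0 for j in range(N)] for i in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

powers = [[[int(i == j) for j in range(N)] for i in range(N)]]
for _ in range(8):
    powers.append(matmul(powers[-1], P))

# zeroth column of the output matrix: (P^n)_{00} are the Motzkin numbers
assert [powers[n][0][0] for n in range(7)] == [1, 1, 2, 4, 9, 21, 51]

# the Hankel identity: (P^{n+n'})_{00} = sum_k (P^n)_{0k} (P^{n'})_{k0}
for n in range(5):
    for m in range(5):
        assert powers[n + m][0][0] == sum(powers[n][0][k] * powers[m][k][0]
                                          for k in range(N))
```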
And specializing the indeterminates p to nonnegative elements in a partially ordered commutative ring, in such a way that P is row-finite or column-finite, we deduce:

Theorem 2.14 (Hankel-total positivity of the zeroth column). Let P = (p_{ij})_{i,j≥0} be an infinite row-finite or column-finite matrix with entries in a partially ordered commutative ring R, and define the infinite Hankel matrix H_∞(O_0(P)) = ((P^{n+n′})_{00})_{n,n′≥0}. If P is totally positive of order r, then so is H_∞(O_0(P)).
One might hope that Theorem 2.14 could be strengthened to show not only Hankel-TP of the zeroth column of the output matrix A = O(P), but in fact Hankel-TP of the row-generating polynomials A_n(x) for all x ≥ 0 (at least when R = R), or even more strongly, coefficientwise Hankel-TP of the row-generating polynomials. Alas, this hope is vain, for these properties do not hold in general:

Example 2.15 (Failure of Hankel-TP of the row-generating polynomials). Let P = e_{00} + ∆ be the upper-bidiagonal matrix with 1 on the superdiagonal and 1, 0, 0, 0, . . . on the diagonal; by Lemma 2.1 it is TP. Then A = O(P) is the lower-triangular matrix with all entries 1 (see Example 2.10), so that A_n(x) = Σ_{k=0}^n x^k. Since A_0(x) A_2(x) − A_1(x)^2 = −x, the sequence (A_n(x))_{n≥0} is not even log-convex (i.e. Hankel-TP_2) for any real number x > 0.
Nevertheless, in one important special case -which includes all the matrices arising in the present paper -the total positivity of the production matrix does imply the coefficientwise Hankel-TP of the row-generating polynomials of the output matrix: see Theorem 2.20 below.

An identity for B_x^{−1} P B_x
An important role will be played later in this paper by a simple but remarkable identity [92, Lemma 3.6] for B_x^{−1} P B_x, where B_x is the x-binomial matrix and P is a particular diagonal similarity transform (by factorials) of a lower-Hessenberg Toeplitz matrix:

Lemma 2.16. Let φ = (φ_m)_{m≥0} and x be indeterminates, and work in the ring Z[φ, x]. Define the lower-Hessenberg matrix P = (p_{ij})_{i,j≥0} by p_{ij} = (i!/(j−1)!) φ_{i−j+1} (with the conventions 1/(−1)! := 0 and φ_m := 0 for m < 0), define the unit-lower-triangular x-binomial matrix B_x by (B_x)_{nk} = \binom{n}{k} x^{n−k}, and let ∆ be the matrix with 1 on the superdiagonal and 0 elsewhere. Then

B_x^{−1} P B_x = P (I + x∆^T). (2.25)

In [92] we proved (2.25) by a computation using a binomial sum. Here is a simpler proof: Write P = D T_∞(φ) D^{−1} ∆, where D = diag((i!)_{i≥0}). Since B_x = D T_∞((x^n/n!)_{n≥0}) D^{−1} and lower-triangular Toeplitz matrices commute, it follows that D T_∞(φ) D^{−1} and B_x commute. On the other hand, the classic recurrence for binomial coefficients implies B_x^{−1} ∆ B_x = xI + ∆ = ∆(I + x∆^T), since ∆∆^T = I. Combining these two facts yields (2.25).
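The identity (2.25) can be checked symbolically on truncated matrices; here is a short sympy sketch (illustrative, with φ_0, …, φ_5 and x as indeterminates), using the standard fact that B_x^{−1} = B_{−x}:

```python
# Symbolic check of B_x^{-1} P B_x = P (I + x Delta^T) on truncations.
import sympy as sp

N = 6
x = sp.Symbol('x')
phi = sp.symbols('phi0:6')

def p(i, j):
    # p_{ij} = (i!/(j-1)!) phi_{i-j+1}, zero for j = 0 or j > i+1
    if j == 0 or j > i + 1:
        return 0
    return sp.factorial(i) / sp.factorial(j - 1) * phi[i - j + 1]

P = sp.Matrix(N, N, p)
Bx = sp.Matrix(N, N, lambda n, k: sp.binomial(n, k) * x**(n - k))
Bx_inv = sp.Matrix(N, N, lambda n, k: sp.binomial(n, k) * (-x)**(n - k))
Delta = sp.Matrix(N, N, lambda i, j: 1 if j == i + 1 else 0)

lhs = Bx_inv * P * Bx
rhs = P * (sp.eye(N) + x * Delta.T)

# truncation affects only the last row/column, so compare the leading block
diff = (lhs - rhs)[:N - 1, :N - 1]
assert all(sp.expand(e) == 0 for e in diff)
print("Lemma 2.16 verified on the leading block")
```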

A lemma on diagonal scaling
Given a lower-triangular matrix A = (a_{nk})_{n,k≥0} with entries in a commutative ring R, let us define the matrix Ā = (ā_{nk})_{n,k≥0} by ā_{nk} = (n!/k!) a_{nk}; this is well-defined since a_{nk} ≠ 0 only when n ≥ k, in which case n!/k! is an integer. If R contains the rationals, we can of course write Ā = DAD^{−1} where D = diag((n!)_{n≥0}). And if R is a partially ordered commutative ring that contains the rationals and A is TP_r, then we deduce immediately from Ā = DAD^{−1} that also Ā is TP_r. The following simple lemma [92, Lemma 3.7] shows that this conclusion holds even when R does not contain the rationals:

Lemma 2.17. Let A = (a_{ij})_{i,j≥0} be a lower-triangular matrix with entries in a partially ordered commutative ring R, and let d = (d_i)_{i≥1} be a sequence of nonnegative elements of R. Define the lower-triangular matrix A ∘ d by (A ∘ d)_{nk} = d_{k+1} d_{k+2} ⋯ d_n a_{nk}. If A is totally positive of order r, then so is A ∘ d. (One first proves this (a) when the entries of A and d are indeterminates, equipped with the coefficientwise order; the version (b) stated here then follows from (a) by specializing indeterminates.)

The special case Ā = A ∘ d corresponds to taking d_i = i.

Exponential Riordan arrays
Let R be a commutative ring containing the rationals, and let F(t) = Σ_{n=0}^∞ f_n t^n/n! and G(t) = Σ_{n=1}^∞ g_n t^n/n! be formal power series with coefficients in R; note that g_0 = 0. Then the exponential Riordan array [9,34,35] R[F,G] is the infinite lower-triangular matrix whose kth column has exponential generating function F(t) G(t)^k/k!; that is, (R[F,G])_{nk} = (n!/k!) [t^n] F(t) G(t)^k. We shall use an easy but important result that is sometimes called the fundamental theorem of exponential Riordan arrays (FTERA): Let b = (b_n)_{n≥0} be a sequence with exponential generating function B(t) = Σ_{n=0}^∞ b_n t^n/n!. Considering b as a column vector and letting R[F,G] act on it by matrix multiplication, we obtain a sequence R[F,G]b whose exponential generating function is F(t) B(G(t)).

Proof. We compute

(R[F,G]b)_n = Σ_{k=0}^n (n!/k!) [t^n] F(t) G(t)^k b_k = n! [t^n] F(t) Σ_{k=0}^∞ b_k G(t)^k/k! = n! [t^n] F(t) B(G(t)).
We can now determine the production matrix of an exponential Riordan array R[F,G]:

Theorem 2.19 (Production matrices of exponential Riordan arrays). Let L be a lower-triangular matrix (with entries in a commutative ring R containing the rationals) with invertible diagonal entries and L_{00} = 1, and let P = L^{−1}∆L be its production matrix. Then L is an exponential Riordan array if and only if P = (p_{nk})_{n,k≥0} has the form

p_{nk} = (n!/k!) (z_{n−k} + k a_{n−k+1}) (2.33)

for some sequences a = (a_n)_{n≥0} and z = (z_n)_{n≥0} in R (with the convention a_m = z_m = 0 for m < 0).
More precisely, L = R[F,G] if and only if P is of the form (2.33) where the ordinary generating functions A(s) = Σ_{n=0}^∞ a_n s^n and Z(s) = Σ_{n=0}^∞ z_n s^n are connected to F(t) and G(t) by

G′(t) = A(G(t)),  F′(t) = F(t) Z(G(t)), (2.34)

or equivalently

A(s) = G′(Ḡ(s)),  Z(s) = F′(Ḡ(s))/F(Ḡ(s)), (2.35)

where Ḡ denotes the compositional inverse of G.

Proof (mostly contained in [9, pp. 217-218]). Suppose that L = R[F,G]. The hypotheses on L imply that f_0 = 1 and that g_1 is invertible in R; so G(t) has a compositional inverse Ḡ. Now let P = (p_{nk})_{n,k≥0} be a matrix; its column exponential generating functions are, by definition, P_k(t) = Σ_{n=0}^∞ p_{nk} t^n/n!. Applying the FTERA to each column of P, we see that R[F,G]P is a matrix whose column exponential generating functions are F(t) P_k(G(t)). On the other hand, ∆R[F,G] is R[F,G] with its zeroth row removed and all other rows shifted upwards, so it has column exponential generating functions (d/dt)[F(t) G(t)^k/k!]. Comparing these two results, we see that

F(t) P_k(G(t)) = (d/dt)[F(t) G(t)^k/k!], (2.37)

or in other words

P_k(s) = Z(s) s^k/k! + A(s) s^{k−1}/(k−1)!,

where a = (a_n)_{n≥0} and z = (z_n)_{n≥0} are given by (2.35); extracting coefficients yields (2.33). Conversely, suppose that P = (p_{nk})_{n,k≥0} has the form (2.33). Define F(t) and G(t) as the unique solutions (in the formal-power-series ring R[[t]]) of the differential equations (2.34) with initial conditions F(0) = 1 and G(0) = 0. Then running the foregoing computation backwards shows that ∆R[F,G] = R[F,G]P.
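Theorem 2.19 is easy to test on a classical example; the following Python sketch (ours, purely illustrative) uses the exponential Riordan array R[1, e^t − 1], the Stirling subset triangle. Here G′(t) = e^t = 1 + G(t), so A(s) = 1 + s and Z(s) = 0, and formula (2.33) gives p_{n,n} = n, p_{n,n+1} = 1, all other entries zero, which is the classical recurrence S(n+1,k) = k S(n,k) + S(n,k−1):

```python
# Illustration of Theorem 2.19 for R[1, e^t - 1]: the production matrix
# built from (2.33) with a_0 = a_1 = 1 outputs the Stirling subset triangle.
from math import comb, factorial

N = 7
P = [[n if k == n else 1 if k == n + 1 else 0 for k in range(N)]
     for n in range(N)]

# output matrix: row n+1 = (row n) * P
rows = [[1] + [0] * (N - 1)]
for n in range(N - 1):
    prev = rows[-1]
    rows.append([sum(prev[j] * P[j][k] for j in range(N)) for k in range(N)])

def stirling2(n, k):
    # Stirling subset number S(n,k) via the standard inclusion-exclusion formula
    return sum((-1) ** (k - j) * comb(k, j) * j ** n
               for j in range(k + 1)) // factorial(k)

assert all(rows[n][k] == stirling2(n, k) for n in range(N) for k in range(N))
print("output of this production matrix = Stirling subset triangle")
```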
The exponential Riordan arrays arising in the present paper will all have F(t) = 1: these are said to belong to the associated subgroup (or Lagrange subgroup). Such matrices (sometimes with the zeroth row and column removed) are also known as Jabotinsky matrices [64] or convolution matrices [75]. Their entries are also identical to the partial Bell polynomials [30, pp. 133-137]: (R[1,G])_{nk} = B_{n,k}(g_1, g_2, . . .), where G(t) = Σ_{n=1}^∞ g_n t^n/n!. Let us also observe that the matrices P occurring in Lemma 2.16 are precisely the production matrices (2.33) with z = 0 (and a = φ): that is, they are the production matrices of exponential Riordan arrays R[F,G] with F(t) = 1. This observation allows us to improve Theorem 2.14, from Hankel-total positivity of the zeroth column to coefficientwise Hankel-total positivity of the row-generating polynomials, for the special case of exponential Riordan arrays R[F,G] with F(t) = 1:

Theorem 2.20 (Hankel-TP for row-generating polynomials of exponential Riordan array). Let R be a partially ordered commutative ring containing the rationals; let A = (a_{nk})_{n,k≥0} = R[1,G] be an exponential Riordan array of the associated subgroup, with entries in R and with invertible diagonal elements; let A_n(x) = Σ_{k=0}^n a_{nk} x^k be its row-generating polynomials; and let P = A^{−1}∆A be its production matrix.
If P is totally positive of order r in the ring R, then the sequence (A n (x)) n≥0 of row-generating polynomials is Hankel-totally positive of order r in the ring R[x] equipped with the coefficientwise order.
Proof. The row-generating polynomials A_n(x) form the zeroth column of the binomial row-generating matrix AB_x. By Lemma 2.6, the production matrix of AB_x is B_x^{−1} P B_x. By Theorem 2.19, the production matrix P = (p_{nk})_{n,k≥0} has the form

p_{nk} = (n!/(k−1)!) a_{n−k+1} (2.40)

for some sequence a = (a_n)_{n≥0} in R. By Lemma 2.16, we have B_x^{−1} P B_x = P (I + x∆^T). By Lemma 2.1, the matrix I + x∆^T is totally positive in the ring Z[x] equipped with the coefficientwise order; and by hypothesis, the matrix P is totally positive of order r in the ring R. It follows that B_x^{−1} P B_x is totally positive of order r in the ring R[x] equipped with the coefficientwise order. Theorem 2.14 then implies that the sequence (A_n(x))_{n≥0} of row-generating polynomials is Hankel-totally positive of order r in the ring R[x] equipped with the coefficientwise order.

Lagrange inversion
We will use Lagrange inversion in the following form [53]: If Φ(u) is a formal power series with coefficients in a commutative ring R containing the rationals, then there exists a unique formal power series f(t) with zero constant term satisfying

f(t) = t Φ(f(t)),

and it is given by

[t^n] f(t) = (1/n) [u^{n−1}] Φ(u)^n for n ≥ 1;

and more generally, if H(u) is any formal power series, then

[t^n] H(f(t)) = (1/n) [u^{n−1}] H′(u) Φ(u)^n for n ≥ 1.

In particular, taking H(u) = u^k with integer k ≥ 0, we have

[t^n] f(t)^k = (k/n) [u^{n−k}] Φ(u)^n. (2.45)

3 The matrices (f_{n,k})_{n,k≥0}, (f_{n,k}(y,z))_{n,k≥0} and (f_{n,k}(y,φ))_{n,k≥0} as exponential Riordan arrays

In this section we show that the matrices (f_{n,k})_{n,k≥0}, (f_{n,k}(y,z))_{n,k≥0} and (f_{n,k}(y,φ))_{n,k≥0} are exponential Riordan arrays R[F,G] with F = 1, and we compute their generating functions G. Much of the contents of the first two subsections is known [8,37], but we think it useful to bring it all together in one place; it will motivate our generalization in Section 3.3 and will play a key role in the remainder of the paper.

3.1
The matrix (f_{n,k})_{n,k≥0}

We recall that f_{n,k} is defined combinatorially as the number of k-component forests of rooted trees on a total of n labeled vertices. Such a forest can be constructed as follows: partition the vertex set V into subsets V_1, . . . , V_k of cardinalities n_i = |V_i| ≥ 1; construct a rooted tree on each subset V_i; and finally divide by k!, because the k! orderings of the subsets give rise to the same forest. It follows that

f_{n,k} = (1/k!) Σ_{n_1,…,n_k ≥ 1; n_1+⋯+n_k = n} \binom{n}{n_1,…,n_k} f_{n_1,1} ⋯ f_{n_k,1}. (3.1)

In terms of the column exponential generating functions F_k(t) = Σ_{n≥0} f_{n,k} t^n/n!, we have

F_k(t) = F_1(t)^k/k!. (3.3)

It follows from (3.3) and (2.30) that the matrix (f_{n,k})_{n,k≥0} is the exponential Riordan array R[F,G] with F(t) = 1 and G(t) = F_1(t).
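The convolution identity (3.1) is easy to check by brute force; the following Python sketch (illustrative only) compares its right-hand side, with f_{n,1} = n^{n−1}, against the closed form (1.1):

```python
# Brute-force check of (3.1): the convolution of f_{n,1} = n^(n-1)
# reproduces f_{n,k} = binomial(n-1, k-1) * n^(n-k) from (1.1).
from math import comb, factorial
from itertools import product

def f(n, k):
    # closed form (1.1), with f_{0,k} = delta_{k,0}
    if n == 0:
        return 1 if k == 0 else 0
    return comb(n - 1, k - 1) * n ** (n - k)

def rhs(n, k):
    # (1/k!) * sum over compositions n_1 + ... + n_k = n (n_i >= 1)
    # of the multinomial coefficient times prod_i n_i^(n_i - 1)
    total = 0
    for parts in product(range(1, n + 1), repeat=k):
        if sum(parts) != n:
            continue
        term = factorial(n)
        for p in parts:
            term //= factorial(p)
        for p in parts:
            term *= p ** (p - 1)
        total += term
    assert total % factorial(k) == 0
    return total // factorial(k)

assert all(f(n, k) == rhs(n, k) for n in range(1, 7) for k in range(1, n + 1))
print("identity (3.1) checked for all 1 <= k <= n <= 6")
```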
On the other hand, a rooted tree on n labeled vertices can be obtained by choosing a root and then forming a forest of rooted trees on the remaining n−1 labeled vertices: thus

f_{n,1} = n Σ_{k≥0} f_{n−1,k}. (3.4)

Multiplying by t^n/n! and summing over n ≥ 1, we get

F_1(t) = t Σ_{k≥0} F_1(t)^k/k! (3.5a)
       = t e^{F_1(t)}. (3.5b)

This is the well-known functional equation for the exponential generating function of rooted trees.
We can now (as is also well known 12) apply Lagrange inversion to the functional equation (3.5b) to compute f_{n,k}. Using (2.45) with Φ(u) = e^u, we have

[t^n] F_1(t)^k = (k/n) [u^{n−k}] e^{nu} = (k/n) n^{n−k}/(n−k)!,

and hence, using (3.3),

f_{n,k} = (n!/k!) [t^n] F_1(t)^k = \binom{n−1}{k−1} n^{n−k},

in agreement with (1.1). This is, of course, one of the many classic proofs of (1.1). In particular, for k = 1 we have F_1(t) = Σ_{n≥1} n^{n−1} t^n/n!, which is the celebrated tree function T(t) [31].
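This Lagrange-inversion computation can be replayed with a computer algebra system; here is a short sympy sketch (ours, not part of the argument) that solves f = t e^f by iteration and compares coefficients with (2.45):

```python
# Check of (2.45) for Phi(u) = e^u: the solution of f = t*e^f is the tree
# function, and [t^n] f^k = (k/n) n^(n-k)/(n-k)!.
import sympy as sp

t = sp.Symbol('t')
N = 7

# solve f = t*exp(f) by iteration; each pass is correct to one more order
f = sp.Integer(0)
for _ in range(N):
    f = sp.series(t * sp.exp(f), t, 0, N).removeO()

for n in range(1, N):
    for k in range(1, n + 1):
        fk = sp.series(f**k, t, 0, N).removeO()
        coeff = fk.coeff(t, n)                          # [t^n] f^k
        lagrange = sp.Rational(k, n) * n**(n - k) / sp.factorial(n - k)
        assert coeff == lagrange
print("Lagrange inversion checked: n!/k! [t^n] f^k = C(n-1,k-1) n^(n-k)")
```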
All this is, of course, extremely well known (except possibly for the interpretation as an exponential Riordan array, which is known [8] but perhaps not as well known as it should be). It is, however, a useful warm-up for the generalization in which we introduce the variables y and z, to which we now turn.

3.2
The matrix (f_{n,k}(y,z))_{n,k≥0}

Recall that f_{n,k}(y,z) is defined combinatorially as the generating polynomial for k-component forests of rooted trees on the vertex set [n], in which each improper edge gets a weight y and each proper edge gets a weight z. The reasoning leading to the identity (3.1) generalizes without any change whatsoever to f_{n,k}(y,z): the point is that each set V_i is order-isomorphic to [n_i] (by labeling the vertices in increasing order), so that the meaning of "proper edge" is unaltered. Therefore

f_{n,k}(y,z) = (1/k!) Σ_{n_1,…,n_k ≥ 1; n_1+⋯+n_k = n} \binom{n}{n_1,…,n_k} f_{n_1,1}(y,z) ⋯ f_{n_k,1}(y,z).

In terms of the column exponential generating functions F_k(t; y,z) = Σ_{n≥0} f_{n,k}(y,z) t^n/n!, we have F_k(t;y,z) = F_1(t;y,z)^k/k!. Therefore, the matrix (f_{n,k}(y,z))_{n,k≥0} is the exponential Riordan array R[F,G] with F(t) = 1 and G(t) = F_1(t;y,z). This fact will play a key role in the remainder of the paper.
Of course, it still remains to calculate the exponential generating function F 1 (t; y, z). This calculation is not at all trivial, but it was done a quarter-century ago by Dumont and Ramamonjisoa [37]; we need only translate their results to our notation.
Let T_n denote the set of rooted trees on the vertex set [n], and let T^{[i]}_n (resp. T^i_n) denote the subset of trees in which the vertex i is the root (resp. a leaf). For each of these classes, define the generating polynomial in which each improper (resp. proper) edge gets a weight y (resp. z); in particular, let R_n(y,z) and A_n(y,z) denote the generating polynomials for the corresponding classes of trees, with exponential generating functions

R(t; y, z) = F_1(t; y, z) = Σ_{n≥1} R_n(y,z) t^n/n! (3.14)

and A(t; y, z) = Σ_n A_n(y,z) t^n/n!. Solving the differential equation ∂R/∂t = e^{zR}/(1 − yR) of Proposition 3.1(d) with the initial condition R(0; y, z) = 0, we obtain: The series R(t; y, z) satisfies the functional equation (3.17) and hence has the solution (3.18). For completeness, let us outline briefly the elegant proof of Proposition 3.1, due to Jiang Zeng, that was presented in [37, section 7]:

Sketch of proof of Proposition 3.1. (a) Consider a tree T ∈ T^{[1]}_{n+1}, and suppose that the root vertex 1 has k (≥ 0) children. All k edges emanating from the root vertex are proper. Deleting these edges and the vertex 1, one obtains a partition of {2, . . . , n+1} into blocks B_1, . . . , B_k and a rooted tree T_j on each block B_j. Standard enumerative arguments then yield the relation (a) for the exponential generating functions.
(b) Consider a tree T ∈ T^1_{n+1} with root r, and let r_1, r_2, . . . , r_l, 1 (l ≥ 0) be the path in T from the root r_1 = r to the leaf vertex 1. 13 All l edges of this path are improper. Deleting these edges and the vertex 1, one obtains an ordered partition of {2, . . . , n+1} into blocks B_1, . . . , B_l and a rooted tree (T_j, r_j) on each block. Standard enumerative arguments then yield the relation (b) for the exponential generating functions.
(c) In a tree T ∈ T_n, focus on the vertex 1 (which might be the root, a leaf, both or neither). Let T′ be the subtree rooted at 1, and let T″ be the tree obtained from T by deleting all the vertices of T′ except the vertex 1 (it thus has the vertex 1 as a leaf). The vertex set [n] is then partitioned as {1} ∪ V′ ∪ V″, where {1} ∪ V′ is the vertex set of T′ and {1} ∪ V″ is the vertex set of T″; and T is obtained by joining T′ and T″ at the common vertex 1. Standard enumerative arguments then yield the relation (c) for the exponential generating functions.

Remarks. 1. Dumont and Ramamonjisoa also gave [37, sections 2-5] a second (and very interesting) proof of Proposition 3.1, based on a context-free grammar [19] and its associated differential operator.
2. We leave it as an open problem to find a direct combinatorial proof of the functional equation (3.17), without using the differential equation of Proposition 3.1(d).
3. The polynomials R n also arise [69] as derivative polynomials for the tree function: in the notation of [69] we have R n (y, 1) = G n (y − 1). The formula (3.18) is then equivalent to [69, Theorem 4.2, equation for G n ].

3.3
The matrix (f_{n,k}(y,φ))_{n,k≥0}

Recall that f_{n,k}(y,φ) is defined combinatorially as the generating polynomial for k-component forests of rooted trees on the vertex set [n], in which each improper edge gets a weight y and each vertex with m proper children gets a weight φ̂_m := m! φ_m. The reasoning leading to the identity (3.1) again generalizes verbatim to f_{n,k}(y,φ), so that

f_{n,k}(y,φ) = (1/k!) Σ_{n_1,…,n_k ≥ 1; n_1+⋯+n_k = n} \binom{n}{n_1,…,n_k} f_{n_1,1}(y,φ) ⋯ f_{n_k,1}(y,φ). (3.19)
In terms of the column exponential generating functions F_k(t; y, φ) = Σ_{n≥0} f_{n,k}(y,φ) t^n/n!, we have F_k(t; y, φ) = F_1(t; y, φ)^k/k!. Therefore, the matrix (f_{n,k}(y,φ))_{n,k≥0} is the exponential Riordan array R[F,G] with F(t) = 1 and G(t) = F_1(t; y, φ). We now show how Proposition 3.1 can be generalized to incorporate the additional indeterminates φ = (φ_m)_{m≥0}. For a rooted tree T on a totally ordered vertex set, we define pc_m(T) to be the number of vertices of T with m proper children. We define T_n, T^{[i]}_n and T^i_n as before, and then define the obvious generalizations of (3.11)-(3.16). Let us also define the generating function

Φ(u) = Σ_{m≥0} φ_m u^m. (3.28)

We then have the following generalization (Proposition 3.3) of Proposition 3.1; in particular, part (d) yields the differential equation ∂R/∂t = Φ(R)/(1 − yR) with R(0; y, φ) = 0.

Proof. The proof is identical to that of Proposition 3.1, with the following modifications: (a) Consider a tree T ∈ T^{[1]}_{n+1} in which the root vertex 1 has k children. Since all k edges emanating from the root vertex are proper, we get an additional factor φ̂_k over and above what was seen in Proposition 3.1. Therefore, the exponential function in Proposition 3.1 is replaced here by the generating function Φ.
(b) Consider a tree T ∈ T^1_{n+1} with root r, where r_1, r_2, . . . , r_l, 1 is the path in T from the root r_1 = r to the leaf vertex 1. Since all l edges of this path are improper, the weights associated to the vertices r_1, r_2, . . . , r_l in T are identical to those associated to these vertices in the trees (T_j, r_j); therefore no modification is required. However, the tree T contains a leaf vertex 1 that is not present in any of the trees (T_j, r_j), so we get an additional factor φ̂_0 = φ_0.
(c) In a tree T ∈ T_n, focus on the vertex 1 and define T′ and T″ as before. Since T″ has the vertex 1 as a leaf but T need not, a factor of φ̂_0 = φ_0 needs to be removed from the right-hand side.
Let us give a name to the function appearing on the right-hand side of the differential equation in Proposition 3.3(d):

Φ(u)/(1 − yu) = Σ_{m≥0} (φ * y^N)_m u^m, where y^N := (y^n)_{n≥0}. (3.30)

It follows from Proposition 3.3(d) that the generating function R(t; y, φ), and hence the generic rooted-forest polynomials f_{n,k}(y,φ), depends on the indeterminates y, φ only via the combination φ * y^N. Otherwise put, if (y′, φ′) and (y″, φ″) are two specializations of y, φ to values in a commutative ring R that satisfy φ′ * (y′)^N = φ″ * (y″)^N, then f_{n,k}(y′, φ′) = f_{n,k}(y″, φ″) for all n, k ≥ 0. We leave it as an open problem to find a bijective proof of this fact, possibly by bijection to a "canonical" specialization such as y = 0, i.e. a bijective proof of

f_{n,k}(y, φ) = f_{n,k}(0, φ * y^N) (3.31)

(see also Section 5 below).
Remark. One might hope to generalize Proposition 3.3, and thus also Theorem 1.4, by refining the counting of improper edges, as follows: Let φ = (φ_m)_{m≥0} and ξ = (ξ_l)_{l≥0} be indeterminates, and let f_{n,k}(ξ, φ) be the generating polynomial for k-component forests of rooted trees on the vertex set [n] with a weight m! φ_m ξ_l for each vertex that has m proper children and l improper children. Our polynomials f_{n,k}(y,φ) thus correspond to the special case ξ_l = y^l. One might then hope that Proposition 3.3 could be generalized to this case, with 1/(1 − yR) replaced by Ξ(R), where Ξ(u) = Σ_{l=0}^∞ ξ_l u^l. Indeed, Proposition 3.3(a,c) do extend to this situation; but Proposition 3.3(b) does not, because the "global" counting of improper edges implicit in the proof does not correspond to the "local" counting of improper edges (assigning them all to the parent vertex) adopted in this definition of f_{n,k}(ξ, φ). And in fact, the resulting polynomials are different: the differential equation R′(t) = Φ(R) Ξ(R) yields one answer for the 3-vertex terms, while the counting of the nine 3-vertex trees with the specified weights yields another. The terms corresponding to trees with two improper edges are thus different: ξ_1^2 + 2ξ_2 from the differential equation, and 2ξ_1^2 + ξ_2 from the counting. We leave it as an open problem to find a different way of "localizing" the improper edges that would provide a combinatorial interpretation for the polynomials defined by the differential equation R′(t) = Φ(R) Ξ(R).

Proof of Theorems 1.1-1.4
We will prove Theorems 1.1-1.4 by explicitly exhibiting the production matrices for F , F (x), F (x, y, z) and F (x, y, φ) and then proving that these production matrices are coefficientwise totally positive. By Theorems 2.9 and 2.14, this will prove the claimed results.
It suffices of course to prove Theorem 1.4, since Theorems 1.1-1.3 are contained in it as special cases: take φ m = z m /m! to get Theorem 1.3; then take y = z = 1 to get Theorem 1.2; and finally take x = 0 to get Theorem 1.1. However, we shall find it convenient to work our way up, starting with Theorem 1.1 and then gradually adding extra parameters.

4.1
The matrix (f_{n,k})_{n,k≥0} and its production matrix

Let F = (f_{n,k})_{n,k≥0} be the unit-lower-triangular matrix defined by (1.1). Straightforward computation gives for the first few rows of its production matrix P = F^{−1}∆F:

[ 0   1              ]
[ 0   2   1          ]
[ 0   5   4   1      ]
[ 0  16  15   6   1  ]

This suggests the following result:

Proposition 4.1 (Production matrix for F). Let F = (f_{n,k})_{n,k≥0} be the unit-lower-triangular matrix defined by (1.1). Then its production matrix P = (p_{jk})_{j,k≥0} = F^{−1}∆F has matrix elements

p_{nk} = (n!/(k−1)!) Σ_{l=0}^{n−k+1} 1/l! (4.2)

(with the convention 1/(−1)! := 0, so that p_{n0} = 0, and with the empty sum for k > n+1 equal to zero).

We will give two proofs of Proposition 4.1: a first proof using the theory of exponential Riordan arrays, and a second proof by direct computation using Abel-type identities.
First Proof of Proposition 4.1. It was shown in Section 3.1 that the matrix (f_{n,k})_{n,k≥0} is the exponential Riordan array with F(t) = 1 and G(t) = T(t), where T(t) = Σ_{n≥1} n^{n−1} t^n/n! is the tree function. Differentiation of the functional equation T(t) = t e^{T(t)} gives T′(t) = e^{T(t)}/(1 − T(t)), so that G′(t) = A(G(t)) with A(s) = e^s/(1 − s) and Z(s) = 0; hence a_n = Σ_{l=0}^n 1/l! and z_n = 0. Inserting this into (2.33) yields (4.2).

Second Proof of Proposition 4.1. The identity FP = ∆F can be verified columnwise by an Abel-type summation: applying it with a_j = p_{jk} and b_n = \binom{n+1}{k} k (n+1)^{n−k} at fixed k ≥ 0, we see that (4.6) is equivalent to (4.8). A bit of algebra shows that the right-hand side of (4.8) can be rewritten as (4.9), where l = j+1−k and N = n+1−k. But Cauchy's formula [99, p. 21] implies that the right-hand side of (4.9b) equals p_{nk} as defined in (4.2).
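Proposition 4.1 is also easy to check directly with exact rational arithmetic; here is a short Python sketch (ours, illustrative only):

```python
# Exact check of Proposition 4.1: compute P = F^{-1} Delta F over the
# rationals and compare with (4.2), p_{n,k} = (n!/(k-1)!) sum_{l<=n-k+1} 1/l!.
from fractions import Fraction
from math import comb, factorial

N = 7

def f(n, k):
    if n == 0:
        return 1 if k == 0 else 0
    return comb(n - 1, k - 1) * n ** (n - k) if k >= 1 else 0

F = [[Fraction(f(n, k)) for k in range(N)] for n in range(N)]
DeltaF = [F[n + 1][:] if n + 1 < N else [Fraction(0)] * N for n in range(N)]

# solve F X = Delta F by forward substitution (F is unit lower triangular)
X = [[Fraction(0)] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        X[i][j] = DeltaF[i][j] - sum(F[i][m] * X[m][j] for m in range(i))

def p_formula(n, k):
    if k == 0 or k > n + 1:
        return Fraction(0)
    return Fraction(factorial(n), factorial(k - 1)) * \
        sum(Fraction(1, factorial(l)) for l in range(n - k + 2))

# the last row of X is affected by truncation, so check rows 0..N-2
assert all(X[n][k] == p_formula(n, k) for n in range(N - 1) for k in range(N))
print("Proposition 4.1 verified for rows 0..5 of the production matrix")
```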

Corollary 4.2 (Production matrix for F′). Let F′ = (f_{n+1,k+1})_{n,k≥0} = ∆F∆^T be the unit-lower-triangular matrix obtained from F by deleting its zeroth row and column. Then its production matrix P′ = (p′_{jk})_{j,k≥0} = (F′)^{−1}∆F′ is obtained from P by deleting its zeroth row and column, i.e. P′ = ∆P∆^T, and hence has matrix elements

p′_{nk} = ((n+1)!/k!) Σ_{l=0}^{n−k+1} 1/l!.

Proof. Apply Lemma 2.7 to Proposition 4.1, with the matrix (4.1)/(4.2) playing the role of Q.
We remark that the elements of F′ are f_{n+1,k+1} = \binom{n}{k} (n+1)^{n−k}.
Let us now introduce the sequence ψ = (ψ_m)_{m≥0} of positive rational numbers given by

ψ_m = Σ_{l=0}^m 1/l! (4.12)

and the corresponding lower-triangular Toeplitz matrix T_∞(ψ), with entries (T_∞(ψ))_{nk} = ψ_{n−k} and the convention ψ_m := 0 for m < 0. Then the production matrix (4.2) can be written as

P = D T_∞(ψ) D^{−1} ∆ (4.14)

where D = diag((i!)_{i≥0}). Moreover, this production matrix has a nice factorization into simpler matrices:

Proposition 4.3 (Factorization of the production matrix). We have

P = B_1 (D T_1 D^{−1}) ∆, (4.15)

where B_1 is the binomial matrix and T_1 is the Toeplitz matrix of powers (2.2) specialized to y = 1.

Proof. We have ψ = a * b where a_n = 1/n! and b_n = 1, and hence T_∞(ψ) = T_∞(a) T_∞(b); since D T_∞(a) D^{−1} = B_1, the factorization (4.15) follows from (4.14).

Remarks. 1. Since P′ = ∆P∆^T, this also implies a corresponding factorization for P′.

2. It follows from (4.15) that the augmented production matrix P̃, obtained by prepending the row (1, 0, 0, . . .) to P, is given by (4.17).

The sequence ψ has the ordinary generating function

Ψ(s) = Σ_{m=0}^∞ ψ_m s^m = e^s/(1 − s).

Since this generating function is of the form (2.1), it follows that the sequence ψ is Toeplitz-totally positive. (This can equivalently be seen by observing that ψ = a * b, where a_n = 1/n! and b_n = 1 are both Toeplitz-totally positive.) In view of (4.14), this proves:

Proposition 4.4 (Total positivity of the production matrix P). The matrix P defined by (4.2) is totally positive.

Corollary 4.5 (Total positivity of the production matrix P′). The matrix P′ = ∆P∆^T is totally positive.

Equivalently, we can observe that the total positivity of P and P′ follows from the factorizations (4.15)/(4.17) together with Lemmas 2.2 and 2.3.
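The Toeplitz-total positivity of ψ can be probed numerically; the following Python sketch (illustrative only) checks all minors of size at most 3 of a truncation of T_∞(ψ) in exact arithmetic:

```python
# Numerical probe: all minors of size <= 3 of a truncation of T_inf(psi),
# psi_m = sum_{l<=m} 1/l!, are nonnegative.
from fractions import Fraction
from math import factorial
from itertools import combinations

N = 6
psi = [sum(Fraction(1, factorial(l)) for l in range(m + 1)) for m in range(N)]
T = [[psi[n - k] if n >= k else Fraction(0) for k in range(N)] for n in range(N)]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

assert all(det([[T[r][c] for c in cols] for r in rows]) >= 0
           for size in (1, 2, 3)
           for rows in combinations(range(N), size)
           for cols in combinations(range(N), size))
print("all minors of size <= 3 of T(psi) are nonnegative")
```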
Proof of Theorem 1.1. Applying Theorem 2.9 to the matrix P and using Propositions 4.1 and 4.4, we deduce Theorem 1.1(a). Similarly, applying Theorem 2.14 to the matrix P′ and using Corollaries 4.2 and 4.5, we deduce Theorem 1.1(b).

The matrix (F n,k (x)) n,k≥0 and its production matrix
We now turn our attention to the matrix F(x) = (F_{n,k}(x))_{n,k≥0} of binomial partial row-generating polynomials defined by (1.5). The matrix factorization F(x) = F B_x [cf. (1.8)] implies, by Lemma 2.6, that the production matrix of F(x) is B_x^{−1} P B_x, where P is the production matrix of F as determined in the preceding subsection [cf. (4.2)]. By Lemma 2.16, B_x^{−1} P B_x = P (I + x∆^T); together with Proposition 4.4, this implies that B_x^{−1} P B_x is totally positive in the ring Z[x] equipped with the coefficientwise order.

4.3
The matrices (f_{n,k}(y,z))_{n,k≥0} and (F_{n,k}(x,y,z))_{n,k≥0} and their production matrices

We now generalize the results of the preceding two subsections to include the indeterminates y and z. The key result is the following:

Proposition 4.7 (Production matrix for F(y,z)). Let F(y,z) = (f_{n,k}(y,z))_{n,k≥0} be the unit-lower-triangular matrix defined by (1.10). Then its production matrix P(y,z) = (p_{nk}(y,z))_{n,k≥0} = F(y,z)^{−1}∆F(y,z) has matrix elements

p_{nk}(y,z) = (n!/(k−1)!) Σ_{l=0}^{n−k+1} y^{n−k+1−l} z^l/l!. (4.20)

This time we have only a proof using exponential Riordan arrays:

Proof of Proposition 4.7.
It was shown in Section 3.2 that the matrix (f_{n,k}(y,z))_{n,k≥0} is the exponential Riordan array with F(t) = 1 and G(t) = R(t;y,z), where R(t;y,z) solves the differential equation of Proposition 3.1(d) with initial condition R(0;y,z) = 0. Applying Theorem 2.19 and comparing this differential equation with (2.34), we see that Z(s) = 0 and A(s) = e^{zs}/(1 − ys), which implies

z_n = 0 and a_n = Σ_{l=0}^n y^{n−l} z^l/l!. (4.21)

Inserting this into (2.33) yields (4.20).
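The differential equation of Proposition 3.1(d) can be solved order by order with sympy; the following sketch (ours, illustrative only) verifies two known specializations: n! [t^n] R equals n^{n−1} at y = z = 1 (all rooted trees) and (n−1)! at y = 0, z = 1 (increasing trees, all of whose edges are proper):

```python
# Picard iteration for R'(t) = exp(z*R)/(1 - y*R), R(0) = 0, and checks of
# the specializations R_n(1,1) = n^(n-1) and R_n(0,1) = (n-1)!.
import sympy as sp

t, y, z = sp.symbols('t y z')
N = 6

R = sp.Integer(0)
for _ in range(N + 1):
    rhs = sp.series(sp.exp(z * R) / (1 - y * R), t, 0, N).removeO()
    R = sp.expand(sp.integrate(rhs, t))   # one more correct order per pass

for n in range(1, N + 1):
    Rn = sp.expand(sp.factorial(n) * R.coeff(t, n))
    assert sp.expand(Rn.subs({y: 1, z: 1}) - n ** (n - 1)) == 0
    assert sp.expand(Rn.subs({y: 0, z: 1}) - sp.factorial(n - 1)) == 0
print("R_n(1,1) = n^(n-1) and R_n(0,1) = (n-1)! verified for n <=", N)
```

For instance, the coefficient R_2(y,z) = y + z reflects the two rooted trees on two vertices, one with a proper and one with an improper edge.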
Let us now introduce the sequence ψ(y,z) = (ψ_m(y,z))_{m≥0} of polynomials with nonnegative rational coefficients given by

ψ_m(y,z) = Σ_{l=0}^m y^{m−l} z^l/l! (4.22)

and the corresponding lower-triangular Toeplitz matrix T_∞(ψ(y,z)). Then (4.20) can be written as

P(y,z) = D T_∞(ψ(y,z)) D^{−1} ∆ (4.23)

where D = diag((i!)_{i≥0}); the elements of these matrices lie in the ring Q[y,z]. Moreover, this production matrix has a nice factorization into simpler matrices:

Proposition 4.8 (Factorization of the production matrix). We have

P(y,z) = B_z (D T_y D^{−1}) ∆, (4.24)

where B_z is the weighted binomial matrix (1.9), T_y is the Toeplitz matrix of powers (2.2), and D = diag((i!)_{i≥0}).
Proof. We have ψ(y,z) = a * b where a_n = z^n/n! and b_n = y^n, and hence T_∞(ψ(y,z)) = T_∞(a) T_∞(b); since D T_∞(a) D^{−1} = B_z, the factorization (4.24) follows from (4.23).

Remark. It follows from (4.24) that the augmented production matrix P̃(y,z), obtained by prepending the row (1, 0, 0, . . .) to P(y,z), is given by (4.26). Two interpretations of (4.26)/(2.10) in terms of digraphs are given by Gilmore [55].
The sequence ψ(y,z) has the ordinary generating function

Ψ(s; y, z) := Σ_{m=0}^∞ ψ_m(y,z) s^m = e^{zs}/(1 − ys). (4.27)

Since this generating function is of the form (2.4), Lemma 2.5 implies that the sequence ψ(y,z) is coefficientwise Toeplitz-totally positive. (This can equivalently be seen by observing that ψ(y,z) = a * b, where a_n = z^n/n! and b_n = y^n are both coefficientwise Toeplitz-totally positive.) In other words, the Toeplitz matrix T_∞(ψ(y,z)) is totally positive in the ring Q[y,z] equipped with the coefficientwise order. It follows from (4.23) that the same goes for P(y,z). But the elements of P(y,z) actually lie in the ring Z[y,z] ⊆ Q[y,z]. We have therefore proven:

Proposition 4.9 (Total positivity of the production matrix for F(y,z)). The matrix P(y,z) = (p_{jk}(y,z))_{j,k≥0} defined by (4.20) is totally positive in the ring Z[y,z] equipped with the coefficientwise order.
Equivalently, the total positivity of P (y, z) follows from the factorization (4.24) together with Lemmas 2.2 and 2.3.
We now consider the matrix F(x,y,z) = (F_{n,k}(x,y,z))_{n,k≥0} of binomial partial row-generating polynomials defined by (1.15). The matrix factorization F(x,y,z) = F(y,z) B_x implies, by Lemma 2.6, that the production matrix of F(x,y,z) is B_x^{−1} P(y,z) B_x, where P(y,z) is the production matrix of F(y,z) [cf. (4.20)]. But Lemma 2.16 shows that

B_x^{−1} P(y,z) B_x = P(y,z)(I + x∆^T). (4.28)

This, together with Proposition 4.9, immediately implies:

Proposition 4.10 (Total positivity of the production matrix for F(x,y,z)). The matrix B_x^{−1} P(y,z) B_x defined by (4.28) is totally positive in the ring Z[x,y,z] equipped with the coefficientwise order.

Once again, an equivalent way of stating this proof of Theorem 1.3(b) is that we have applied Theorem 2.20 to the matrices F(y,z) and P(y,z).
Now suppose that φ is specialized to be a sequence, with values in a partially ordered commutative ring R, that is Toeplitz-totally positive of order r. Then the sequence φ is obviously Toeplitz-TP_r in the ring R[y] equipped with the coefficientwise order. And by Lemma 2.2, the sequence y^N := (y^n)_{n≥0} is Toeplitz-TP in the ring R[y] equipped with the coefficientwise order. It follows that their convolution φ * y^N is Toeplitz-TP_r in the ring R[y] equipped with the coefficientwise order. On the other hand, the matrix P(y,φ) of (4.29), with entries p_{jk}(y,φ) = (j!/(k−1)!) (φ * y^N)_{j−k+1}, is obtained from the Toeplitz matrix T_∞(φ * y^N) ∆ by the diagonal-scaling operation defined in Section 2.5. Lemma 2.17 then implies that the matrix P(y,φ) is TP_r in the ring R[y] equipped with the coefficientwise order. We have therefore proven:

Proposition 4.12 (Total positivity of the production matrix for F(y,φ)). Fix 1 ≤ r ≤ ∞. Let R be a partially ordered commutative ring, and let φ = (φ_m)_{m≥0} be a sequence in R that is Toeplitz-totally positive of order r. Then the matrix P(y,φ) = (p_{jk}(y,φ))_{j,k≥0} defined by (4.29) is totally positive of order r in the ring R[y] equipped with the coefficientwise order.
Remark 4.13. If the ring R contains the rationals, then we have the factorization where D = diag (i!) i≥0 , by analogy with Propositions 4.3 and 4.8.
The matrix factorization F(x,y,φ) = F(y,φ) B_x implies, by Lemma 2.6, that the production matrix of F(x,y,φ) is B_x^{−1} P(y,φ) B_x, where P(y,φ) is the production matrix of F(y,φ) [cf. (4.29)]. But Lemma 2.16 shows that B_x^{−1} P(y,φ) B_x = P(y,φ)(I + x∆^T). This, together with Proposition 4.12, immediately implies:

Proposition 4.14 (Total positivity of the production matrix for F(x,y,φ)). Fix 1 ≤ r ≤ ∞. Let R be a partially ordered commutative ring, and let φ = (φ_m)_{m≥0} be a sequence in R that is Toeplitz-totally positive of order r. Then the matrix B_x^{−1} P(y,φ) B_x defined by (4.29) and (1.9) is totally positive of order r in the ring R[x,y] equipped with the coefficientwise order.
Proof of Theorem 1.4. Applying Theorem 2.9 to the matrix B −1 x P (y, φ)B x and using Propositions 4.11 and 4.14, we deduce Theorem 1.4(a).
Similarly, applying Theorem 2.14 to the matrix B −1 x P (y, φ)B x and using Propositions 4.11 and 4.14, we deduce Theorem 1.4(b).
Then Theorem 1.4(c) follows from Theorem 1.4(b) by noting the identity (4.33). Once again, an equivalent way of stating this proof of Theorem 1.4(b) is that we have applied Theorem 2.20 to the matrices F(y,φ) and P(y,φ).

Connection with the generic Lah polynomials
In a recent paper [92] we introduced the generic Lah polynomials, which are defined as follows: Recall first [119, pp. 294-295] that an ordered tree (also called plane tree) is a rooted tree in which the children of each vertex are linearly ordered. An unordered forest of ordered trees is an unordered collection of ordered trees. An increasing ordered tree is an ordered tree in which the vertices carry distinct labels from a linearly ordered set (usually some set of integers) in such a way that the label of each child is greater than the label of its parent; otherwise put, the labels increase along every path downwards from the root. An unordered forest of increasing ordered trees is an unordered forest of ordered trees with the same type of labeling. Now let φ = (φ_m)_{m≥0} be indeterminates, and let L_{n,k}(φ) be the generating polynomial for unordered forests of increasing ordered trees on the vertex set [n], having k components (i.e. k trees), in which each vertex with m children gets a weight φ_m. Clearly L_{n,k}(φ) is a homogeneous polynomial of degree n with nonnegative integer coefficients; it is also quasi-homogeneous of degree n − k when φ_m is assigned weight m. The first few polynomials L_{n,k}(φ) [specialized for simplicity to φ_0 = 1] are easily computed from this definition. Now let x be an additional indeterminate, and define the row-generating polynomials L_n(φ, x) = Σ_{k=0}^n L_{n,k}(φ) x^k. Then L_n(φ, x) is quasi-homogeneous of degree n when φ_i is assigned weight i and x is assigned weight 1. We call L_{n,k}(φ) and L_n(φ, x) the generic Lah polynomials, and we call the lower-triangular matrix L = (L_{n,k}(φ))_{n,k≥0} the generic Lah triangle. Here φ = (φ_i)_{i≥0} are in the first instance indeterminates, so that L_{n,k}(φ) ∈ Z[φ] and L_n(φ, x) ∈ Z[φ, x]; but we can then, if we wish, substitute specific values for φ in any commutative ring R, leading to values L_{n,k}(φ) ∈ R and L_n(φ, x) ∈ R[x].
We can relate the generic Lah polynomials L_{n,k}(φ) to the generic rooted-forest polynomials f_{n,k}(y,φ), as follows: First of all, the fact that we chose to define the generic Lah polynomials in terms of ordered trees is unimportant. Since the vertices of our trees are labeled, the children of each vertex are distinguishable; therefore, for each unordered labeled tree and each vertex with m children, there are m! possible orderings of those children. It follows that the generic Lah polynomials, defined initially as a sum over unordered forests of increasing ordered trees with a weight φ_m for each vertex with m children, can equivalently be defined as a sum over unordered forests of increasing unordered trees with a weight φ̂_m = m! φ_m for each vertex with m children.
(This is why we inserted the factors m! into our definition of the generic rooted-forest polynomials.) We shall henceforth reinterpret the generic Lah polynomials in this manner, as a sum over unordered forests of increasing unordered trees. Now, the generic Lah polynomials are defined as a sum over forests of increasing trees on the vertex set [n], while the generic rooted-forest polynomials are defined as a sum over forests of arbitrary trees on the vertex set [n] with a weight y for each improper edge. Furthermore, the generic Lah polynomials are defined as giving a weight φ̂_m for each vertex with m children, while the generic rooted-forest polynomials are defined as giving a weight φ̂_m for each vertex with m proper children. But a tree is increasing if and only if all its edges are proper! Therefore, by setting y = 0 in the generic rooted-forest polynomials, we ensure that the sum runs precisely over forests of increasing trees, and we also ensure that all the children at each vertex are proper. It follows that the generic Lah polynomials L_{n,k}(φ) are equal to the generic rooted-forest polynomials f_{n,k}(y,φ) specialized to y = 0:

Proposition 5.1 (Generic Lah polynomials as specialization of generic rooted-forest polynomials). We have L_{n,k}(φ) = f_{n,k}(0, φ).
In [92, Proposition 1.4] we determined the production matrix $P = (p_{ij})_{i,j \ge 0}$ for the generic Lah triangle $L = (L_{n,k}(\phi))_{n,k \ge 0}$; that result is precisely Proposition 4.11 of the present paper specialized to $y = 0$. So the generic rooted-forest polynomials are a generalization of the generic Lah polynomials, to which they reduce when $y = 0$. On the other hand, the generic rooted-forest polynomials are also a specialization of the generic Lah polynomials, since (3.31) and Proposition 5.1 immediately imply:

Proposition 5.2 (Generic rooted-forest polynomials as specialization of generic Lah polynomials). We have $f_{n,k}(y,\phi) = L_{n,k}(\phi \ast y^{\mathbb{N}})$.
We leave it as an open problem to find a direct (ideally bijective) proof of Proposition 5.2.
Proposition 5.2 can also be interpreted in the language of exponential Riordan arrays. As remarked in [92, Section 8], the generic Lah triangle $L = (L_{n,k}(\phi))_{n,k\ge 0}$ is in fact the general exponential Riordan array $R[F,G]$ of the "associated subgroup" $F = 1$, expressed in terms of its A-sequence $a = \phi$ (cf. Theorem 2.19). That is, the theory of the generic Lah triangle is equivalent to the theory of exponential Riordan arrays of the "associated subgroup" $R[1,G]$. So, since the generic rooted-forest triangle is indeed an exponential Riordan array of the associated subgroup (Section 3.3), it must be a specialization of the generic Lah triangle.
Let us remark, finally, that [92, Section 3.1] introduced a generalization of the generic Lah triangle, called the refined generic Lah triangle, in which the weight for a vertex with $m$ children now depends also on a quantity called its "level" [92, Definition 3.1]. The production matrix of the refined generic Lah triangle was determined in [92, Proposition 3.2]; the proof employed a bijection from ordered forests of increasing ordered trees to a set of labeled reversed partial Łukasiewicz paths. It would be interesting to know whether that construction can be generalized to $y \neq 0$, i.e. to forests of trees that are not necessarily increasing.

Open problems
We conclude by proposing some open problems, which are variants or generalizations of the results found here.
An equivalent statement is that, if we define $P_{n,k}(a,b) = [x^k] \, P_n(x;a,b)$ for $n,k \ge 0$ (6.6), the unit-lower-triangular matrix $(P_{n,k}(a,b))_{n,k\ge 0}$ is an exponential Riordan array $R[F,G]$ with $F(t) = 1$. The identity (6.5) goes back in fact (in a slightly different notation) to Rothe [104] in 1793 and Pfaff [94] in 1795. Rothe's identity is usually expressed in terms of the polynomials $R_n(x;h,w)$, which obviously satisfy $R_n(x;h,w) = P_n(x;h+w,w)$. See [56-59, 68, 99, 109, 115, 124] for further discussion. It is worth observing that the polynomials $P_n$ and $P_{n,k}$ are symmetric in $a \leftrightarrow b$; that $P_n$ is homogeneous of degree $n$ in $x,a,b$; and that $P_{n,k}$ is homogeneous of degree $n-k$ in $a,b$. Note also the explicit formula for $P_{n,1}(a,b)$ [74], and that $P_{n,n-1}(a,b) = \binom{n}{2}(a+b)$. The triangular array $(P_{n,k}(a,b))_{n,k\ge 0}$ begins as shown in (6.9). Furthermore, and most importantly, we see from (1.6)/(6.1) that $P_n(x;1,1) = x(x+n)^{n-1} = F_n(x)$, and hence that $P_{n,k}(1,1) = f_{n,k}$ [cf. (1.1)]. It follows that the polynomials $P_{n,k}(a,b)$ enumerate forests of rooted trees on the vertex set $[n]$ with $k$ components according to some bivariate statistic.
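As a sanity check on the specialization $P_{n,k}(1,1) = f_{n,k}$ and on the formula (1.1), here is a brute-force Python verification for small $n$: it counts rooted forests directly (encoding them as acyclic parent maps) and compares with $f_{n,k} = \binom{n-1}{k-1} n^{n-k}$ and with the row-generating polynomial $F_n(x) = x(x+n)^{n-1}$ evaluated at integer points. The encoding and names are ours.

```python
from itertools import product
from math import comb

def is_forest(parent):
    """parent[v-1] = 0 marks a root; check that every vertex reaches 0 (no cycles)."""
    n = len(parent)
    for v in range(1, n + 1):
        seen, u = set(), v
        while u != 0:
            if u in seen:
                return False
            seen.add(u)
            u = parent[u - 1]
    return True

def forests_by_components(n):
    """Brute-force census of rooted forests on [n] by number of components."""
    counts = {}
    for parent in product(range(n + 1), repeat=n):
        if is_forest(parent):
            k = parent.count(0)   # number of roots = number of components
            counts[k] = counts.get(k, 0) + 1
    return counts

def f(n, k):
    """f_{n,k} = binom(n-1, k-1) * n^(n-k), cf. (1.1)."""
    return comb(n - 1, k - 1) * n ** (n - k)
```

The row sums also recover $f_n = (n+1)^{n-1}$, the total number of rooted forests on $[n]$.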
Some years after Rothe, Pfaff and Schläfli, in 2006, to be precise, Gessel and Seo [54], in a very interesting paper, reintroduced the polynomials (6.1) as enumerators of forests of rooted trees, and gave two versions of this bivariate statistic, as follows: Recall first that an edge $e = ij$ in a forest $F$, ordered so that $j$ is a child of $i$, is called a proper edge if all the descendants of $j$, including $j$ itself, are higher-numbered than $i$; and in this case we say that $j$ is a proper child of $i$. These were the key concepts in the present paper. We now define a related but different concept: we say that a vertex $i$ is a proper vertex if all the descendants of $i$, other than $i$ itself, are higher-numbered than $i$. (Equivalently, a vertex is proper in case all of its children are proper children.) Note that every leaf is proper, and that the smallest-numbered vertex in each tree is proper. Let us write $\mathrm{propv}(F)$ for the number of proper vertices in the forest $F$. Writing $\mathcal{F}_{n,k}$ for the set of forests of rooted trees on the vertex set $[n]$ with $k$ components, Gessel-Seo's first combinatorial interpretation is the identity (6.10) of [54, Theorem 6.1]. Gessel and Seo [54] gave two proofs of (6.10): one using exponential generating functions, the other partly combinatorial. A fully bijective proof was given by Seo and Shin [110]. Note that the symmetry $a \leftrightarrow b$ is far from obvious in (6.10); a combinatorial explanation was recently given by Hou [66].
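The definitions above are easy to mechanize, and the two stated facts (every leaf is proper, and the smallest-numbered vertex in each tree is proper) can be checked exhaustively for small $n$. The following Python sketch, with our own encoding of forests as parent maps, does this on $[4]$.

```python
from itertools import product

def rooted_forests(n):
    """Yield all rooted forests on [n] as parent tuples:
    parent[v-1] = 0 means v is a root; every vertex must reach 0 (acyclicity)."""
    for parent in product(range(n + 1), repeat=n):
        if all(_reaches_zero(parent, v) for v in range(1, n + 1)):
            yield parent

def _reaches_zero(parent, v):
    seen = set()
    while v != 0:
        if v in seen:
            return False
        seen.add(v)
        v = parent[v - 1]
    return True

def descendants(parent, v):
    """Descendants of v (including v itself)."""
    n = len(parent)
    des = {v}
    for _ in range(n):  # iterate to closure; depth is at most n
        des |= {u for u in range(1, n + 1) if parent[u - 1] in des}
    return des

def proper_vertices(parent):
    """Proper vertices: i such that every descendant of i other than i
    is higher-numbered than i."""
    n = len(parent)
    return {i for i in range(1, n + 1)
            if all(d > i for d in descendants(parent, i) - {i})}

def propv(parent):
    return len(proper_vertices(parent))
```

For instance, the tree on $[2]$ rooted at 1 has $\mathrm{propv} = 2$, while the tree rooted at 2 has $\mathrm{propv} = 1$ (vertex 2 has the lower-numbered descendant 1).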
Yet another related concept is as follows: We say that an edge $e = ij$ in a forest $F$, ordered so that $j$ is a child of $i$, is an ascent if $i < j$ and a descent if $i > j$. Let us write $\mathrm{asc}(F)$ [resp. $\mathrm{des}(F)$] for the number of ascents (resp. descents) in the forest $F$. Gessel-Seo's second combinatorial interpretation, a special case of a result found earlier by Eğecioğlu and Remmel [39], is the identity (6.11) of [54, Theorem 9.1]. Note that the symmetry $a \leftrightarrow b$ is manifest in (6.11): it suffices to relabel the vertices $i \to n+1-i$. See also Drake [36, Example 1.7.2] for another combinatorial interpretation of the polynomials $P_{n,1}(a,b)$.
Finally, it follows from either (6.10) or (6.11), using arguments identical to those used in Sections 3.1 and 3.2, that the unit-lower-triangular matrix $(P_{n,k}(a,b))_{n,k\ge 0}$ is an exponential Riordan array $R[F,G]$ with $F(t) = 1$. As noted earlier, this implies (and is in fact equivalent to) the Rothe-Pfaff-Schläfli identity (6.5).
By analogy with Theorem 1.3, I conjecture the following (Conjecture 6.1):

(a) The unit-lower-triangular polynomial matrix $P(a,b) = (P_{n,k}(a,b))_{n,k\ge 0}$ is coefficientwise totally positive (jointly in $a,b$).
(b) The polynomial sequence $\boldsymbol{P} = (P_n(x;a,b))_{n\ge 0}$ is coefficientwise Hankel-totally positive (jointly in $x,a,b$).
I have verified part (a) up to 15 × 15, and part (b) up to 11 × 11; part (c) is an immediate consequence of part (b). Conjecture 6.1 of course implies analogous statements for the Rothe polynomials (6.7), but not conversely.
In view of the approach used in Section 4 to prove Theorems 1.1-1.4, it is natural to try to employ the same production-matrix method to prove Conjecture 6.1. Alas, this does not work: straightforward computation of the first few rows shows that the production matrix is not even coefficientwise TP$_2$! Clearly, new techniques will be needed to prove Conjecture 6.1, if indeed it is true.

We can also take Conjecture 6.1(c) one step further. Note first that $P_n(1;1,1) = f_n = (n+1)^{n-1}$ is a Stieltjes moment sequence: it is the product of the Stieltjes moment sequences $(n+1)^n$ (see footnote 6 above) and $1/(n+1)$. This known fact is a specialization of the claim in Conjecture 6.1(b) that the sequence of polynomials $P_n(x;a,b)$ is coefficientwise Hankel-totally positive. On the other hand, we also know a stronger fact: not only is $(n+1)^{n-1}$ a Stieltjes moment sequence, but so is $(n+1)^{n-1}/n!$, since it is the product of the Stieltjes moment sequences $(n+1)^n/n!$ (see again footnote 6) and $1/(n+1)$. (This latter fact is stronger, because multiplication by $n!$ preserves the Stieltjes moment property.) This suggests asking whether the sequence of polynomials $P_n(x;a,b)/n!$ is coefficientwise Hankel-totally positive. The answer is negative; indeed, this sequence is not even coefficientwise log-convex, as a computation of the first few terms shows. So Conjecture 6.1(b) does not have an analogue involving division by $n!$. But Conjecture 6.1(c) may:

Conjecture 6.2 (Hankel-TP for the Schläfli-Gessel-Seo polynomials, bis).
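For contrast, the production-matrix computation itself is elementary, and on the forest triangle $F = (f_{n,k})_{n,k\ge 0}$ of Theorem 1.1 (where the method does work) it produces a matrix whose truncations pass the TP$_2$ test. The Python sketch below solves $M_{n+1,\cdot} = \sum_j M_{n,j} P_{j,\cdot}$ for $P$ by forward substitution; the encoding is ours, and the symbolic computation for $P(a,b)$ that exhibits the failure reported above is not reproduced here.

```python
from fractions import Fraction
from math import comb

def forest_triangle(N):
    """(f_{n,k})_{0 <= n,k < N} with f_{n,k} = binom(n-1,k-1) n^(n-k), f_{0,0} = 1."""
    M = [[0] * N for _ in range(N)]
    M[0][0] = 1
    for n in range(1, N):
        for k in range(1, n + 1):
            M[n][k] = comb(n - 1, k - 1) * n ** (n - k)
    return M

def production_matrix(M):
    """Solve row_{n+1}(M) = sum_j M[n][j] * row_j(P) for P by forward
    substitution (valid since M is unit lower triangular).  For an N x N
    input, the (N-1) x (N-1) result is exact."""
    N = len(M)
    P = [[Fraction(0)] * (N - 1) for _ in range(N - 1)]
    for n in range(N - 1):
        for k in range(N - 1):
            s = sum(M[n][j] * P[j][k] for j in range(n))
            P[n][k] = (Fraction(M[n + 1][k]) - s) / M[n][n]
    return P
```

Running this on the $7 \times 7$ forest triangle gives a production matrix beginning with rows $(0,1)$, $(0,2,1)$, $(0,5,4,1)$, $(0,16,15,6,1)$, all of whose $2 \times 2$ minors are nonnegative at this truncation.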

I have verified parts (a) and (b) up to 11 × 11.
Remark. Among the Conjectures 6.1(c), 6.2(a) and 6.2(b), Conjecture 6.2(a) is "morally" the strongest, because one would expect that multiplication by the Stieltjes moment sequences $n!$ or $1/(n+1)$ would preserve coefficientwise Hankel-total positivity (as it does for Stieltjes moment sequences of real numbers). But, rather surprisingly, it turns out [117] that this is not a general property: there exist coefficientwise Hankel-TP sequences $(p_n(x))_{n\ge 0}$ in the polynomial ring $\mathbb{R}[x]$ for which $(n!\, p_n(x))_{n\ge 0}$ and $(p_n(x)/(n+1))_{n\ge 0}$ are not coefficientwise Hankel-TP. So Conjectures 6.1(c), 6.2(a) and 6.2(b) need to be considered separately.
Gilmore's [55] method is very different from the one used here: he uses planar networks and the Lindström-Gessel-Viennot lemma (along the lines of [13]), not production matrices. Indeed, the production matrix contains entries with negative coefficients, such as $-q^2 - q^3 + 3q^5 + 6q^6 + 5q^7 + 3q^8 + q^9$, so it is not even coefficientwise TP$_1$; and numerical tests strongly suggest that it is pointwise TP (that is, for a real number $q$) only when $q = 0$ or $q = 1$. Finally, the production-matrix method cannot work here because the generalization of Theorem 1.1(b) is false: the row-generating polynomials $F_n(x,q) = \sum_{k=0}^n f_{n,k}(q)\, x^k$ (6.21) are not coefficientwise Hankel-TP. Indeed, the $3 \times 3$ Hankel minor involves the coefficient $-q^2 + 3q^4 + 8q^5 + 12q^6 + 12q^7 + 8q^8 + 4q^9 + q^{10}$, and is coefficientwise nonnegative in $x$ (for real $q$) only when $q = 1$.

But the Hankel-TP of the row-generating polynomials can possibly be restored by inserting a simple additional factor: let us define $\widehat{f}_{n,k}(q) \stackrel{\rm def}{=} q^{k(k-1)/2}\, f_{n,k}(q)$ (6.23) and then $\widehat{F}_n(x,q) = \sum_{k=0}^n \widehat{f}_{n,k}(q)\, x^k$ (6.24). Note that this $k$-dependent factor does not change the total positivity of the lower-triangular matrix (it corresponds to right-multiplication by a diagonal matrix of monomials), but it does change the row-generating polynomials. It is not difficult to show, using the $q$-binomial theorem [4, Theorem 3.3], that $\widehat{F}_n(x,q)$ has a product formula revealing it as a kind of "$q$-Abel polynomial" [28,67]; this suggests that the numbers $\widehat{f}_{n,k}(q)$, and not $f_{n,k}(q)$, may be the most natural $q$-generalization of the forest numbers. We certainly need $q \ge 1$ in order to have coefficientwise Hankel-TP$_2$ (or even pointwise Hankel-TP$_2$ for large positive $x$), even if we restrict to the subsequence with $n \ge 1$.
But computations by Tomack Gilmore and myself suggest that we might have coefficientwise Hankel-TP (in x) of all orders whenever q ≥ 1, and that this might even hold coefficientwise (in the two variables) after a change of variables q = 1 + r.
I have verified this conjecture up to 10 × 10.
(a) The unit-lower-triangular polynomial matrix $P(y,a,b,q) = (P_{n,k}(y,a,b,q))_{n,k\ge 0}$ is coefficientwise totally positive (jointly in $y,a,b,q$).
I have verified this conjecture up to 13 × 13.

Functional digraphs by number of components
Let $\psi_{n,k}$ be the number of functional digraphs on the vertex set $[n]$ with $k$ (weakly connected) components; obviously $\sum_{k=0}^n \psi_{n,k} = n^n$. Define the row-generating polynomials $\Psi_n(y) = \sum_{k=0}^n \psi_{n,k}\, y^k$; we refer to the $\Psi_n(y)$ as the functional-digraph polynomials. We then have:

Conjecture 6.9 (Total positivities for the functional-digraph polynomials).
I have verified part (a) up to 17 × 17, part (b) up to 13 × 13, and parts (c) and (d) up to 500 × 500 (by computing the classical S-fraction); of course part (c) is also an immediate consequence of either (b) or (d).
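The numbers $\psi_{n,k}$ are easy to tabulate by brute force for small $n$: enumerate all $n^n$ maps $f\colon [n] \to [n]$ and count the weakly connected components of the digraph with edges $v \to f(v)$. The Python sketch below (our own encoding, using a small union-find) does this; the column $\psi_{n,1}$ of connected functional digraphs begins $1, 3, 17, 142, \ldots$ [91, A001865].

```python
from itertools import product

def psi_counts(n):
    """psi_{n,k}: number of functional digraphs f: [n] -> [n] whose underlying
    graph (edges v -- f(v)) has k weakly connected components."""
    counts = {}
    for f in product(range(n), repeat=n):
        parent = list(range(n))           # union-find structure
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path compression
                v = parent[v]
            return v
        for v in range(n):
            parent[find(v)] = find(f[v])  # merge v with its image f(v)
        k = sum(1 for v in range(n) if find(v) == v)
        counts[k] = counts.get(k, 0) + 1
    return counts
```

As a check, the row sums are $n^n$, and $\psi_{n,n} = 1$ (only the identity map has $n$ components).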
The production matrix $P = \Psi^{-1} \Delta \Psi$ is not totally positive: this failure can already be seen in its $9 \times 9$ leading principal submatrix.

Functional digraphs by number of cyclic vertices and number of components
We can now combine the polynomials of the two preceding subsections into a single bivariate polynomial: let $\Psi_n(x,y)$ be the generating polynomial for functional digraphs on the vertex set $[n]$ with a weight $x$ for each cyclic vertex and a weight $y$ for each component. Thus $\Psi_n(x,1) = F^{\rm ord}_n(x)$ and $\Psi_n(1,y) = \Psi_n(y)$. We refer to the $\Psi_n(x,y)$ as the bivariate functional-digraph polynomials; they have the exponential generating function $\sum_{n=0}^\infty \Psi_n(x,y)\, t^n/n! = [1 - xT(t)]^{-y}$.

From these bivariate polynomials we can form two different lower-triangular matrices: $\Psi^X = (\psi^X_{n,k}(y))_{n,k\ge 0}$, where $\psi^X_{n,k}(y)$ is the coefficient of $x^k$ in $\Psi_n(x,y)$, and $\Psi^Y = (\psi^Y_{n,k}(x))_{n,k\ge 0}$, where $\psi^Y_{n,k}(x)$ is the coefficient of $y^k$. The matrix $\Psi^X$ is not an exponential Riordan array; but $\Psi^Y$ is the exponential Riordan array $R[F,G]$ with $F(t) = 1$ and $G(t) = -\log[1 - xT(t)]$. On the other hand, $\Psi^X$ is obtained from the forest triangle $F = (f_{n,k})_{n,k\ge 0}$ by right-multiplication by the diagonal matrix ${\rm diag}\big((y^{\overline{k}})_{k\ge 0}\big)$, where $y^{\overline{k}} \stackrel{\rm def}{=} y(y+1)\cdots(y+k-1)$. The coefficientwise total positivity (in $y$) of the matrix $\Psi^X$ is thus an immediate consequence of Theorem 1.1(a); and since its $k = 1$ column is $y$ times that of the forest triangle, its coefficientwise Hankel-total positivity is equivalent to Theorem 1.1(b). For the rest, we have the following conjectures:

Conjecture 6.10 (Total positivities for the bivariate functional-digraph polynomials).
(c) The polynomial sequence $\Psi^Y = (\psi^Y_{n+1,1})_{n\ge 0}$ is coefficientwise Hankel-totally positive (in $x$). Here $\psi^Y_{n,k}(x)$ enumerates functional digraphs on the vertex set $[n]$ with $k$ components, with a weight $x$ for each cyclic vertex. In particular, the polynomials in the $k = 1$ column, which have exponential generating function $G(t) = -\log[1 - xT(t)]$, enumerate connected functional digraphs on the vertex set $[n]$ with a weight $x$ for each cyclic vertex; equivalently, they enumerate cyclically ordered forests of rooted trees on the vertex set $[n]$ with a weight $x$ for each tree.

The triangular array $(f_{n,k}(w))_{n,k\ge 0}$ begins as shown in (6.44); for instance, $f_{4,1}(w) = 64 + 48w + 12w^2 + w^3 = (w+4)^3$ and $f_{5,1}(w) = 625 + 500w + 150w^2 + 20w^3 + w^4 = (w+5)^4$. The polynomial $f_{n,1}(w) = (w+n)^{n-1}$ has at least two combinatorial interpretations: it is (a) the generating polynomial for rooted trees on the vertex set $[n]$ with a weight $1+w$ for each root descent (i.e. child of the root that is lower-numbered than the root) [17,18,116]; and it is also (b) the generating polynomial for unrooted trees on the vertex set $[n+1]$ with a weight $w$ for each neighbor of the vertex 1 except one [see (1.6) and the sentence preceding (1.2)].
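Interpretation (a) is easy to test by brute force: enumerate rooted trees on $[n]$ (as acyclic parent maps), count root descents, and compare the resulting generating polynomial with $(w+n)^{n-1}$ at integer values of $w$. The Python sketch below (our own encoding) does this for small $n$.

```python
from itertools import product

def root_descent_polynomial_at(n, w):
    """Evaluate at an integer w the generating polynomial for rooted trees
    on [n] with a weight 1 + w for each root descent (child of the root
    that is lower-numbered than the root)."""
    total = 0
    for root in range(1, n + 1):
        others = [v for v in range(1, n + 1) if v != root]
        for ps in product(range(1, n + 1), repeat=n - 1):
            parent = dict(zip(others, ps))
            if not all(_reaches(parent, v, root) for v in others):
                continue          # a cycle avoiding the root: not a tree
            descents = sum(1 for v in others if parent[v] == root and v < root)
            total += (1 + w) ** descents
    return total

def _reaches(parent, v, root):
    seen = set()
    while v != root:
        if v in seen:
            return False
        seen.add(v)
        v = parent[v]
    return True
```

For instance, on $[2]$ the tree rooted at 1 contributes 1 and the tree rooted at 2 contributes $1+w$, so the total is $2 + w = (w+2)^1$, in agreement with $f_{2,1}(w)$.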
Since in case (a) the trees have size $n$, it follows that $f_{n,k}(w)$ counts $k$-component forests of rooted trees on the vertex set $[n]$ with a weight $1+w$ for each root descent. Defining, as usual, the row-generating polynomials $F_n(x,w) = \sum_{k=0}^n f_{n,k}(w)\, x^k$, we have:

Conjecture 6.11 (Total positivities for the bivariate forest polynomials).

(a) The unit-lower-triangular matrix $F = (f_{n,k}(w))_{n,k\ge 0}$ is coefficientwise totally positive (in $w$).
(b) The polynomial sequence $\boldsymbol{F} = (F_n(x,w))_{n\ge 0}$ is coefficientwise Hankel-totally positive (in $x,w$).
I have verified part (a) up to $16 \times 16$, and part (b) up to $11 \times 11$; of course, part (c) is an immediate consequence of part (b). In fact, more seems to be true. Suppose that we make the shift $w = -1 + w'$. The resulting matrix is not coefficientwise TP$_2$ in the variable $w'$, or even pointwise TP$_2$ at $w' = 0$, since $(1+w')(3+3w') - 1 \cdot (4+4w'+w'^2) = -1 + 2w' + 2w'^2$ (6.47). So part (a) of Conjecture 6.11 does not extend to the shifted matrix. But parts (b) and (c) do appear to extend:

Conjecture 6.12 (Total positivities for the bivariate forest polynomials, bis).
I have verified part (b) up to $11 \times 11$; of course, part (c) is an immediate consequence of part (b). In fact, Xi Chen and I have recently proven Conjecture 6.12(c) [and hence also the weaker Conjecture 6.11(c)]: that is,

Theorem 6.13 (Chen and Sokal [25]). The polynomial sequence $\big((w'+n)^n\big)_{n\ge 0}$ is coefficientwise Hankel-totally positive (in $w'$).
The proof, which uses production-matrix methods similar to those used in the present paper, but for exponential Riordan arrays $R[F,G]$ with $F \neq 1$, will appear elsewhere [25]. This proof does not, however, seem to extend to Conjecture 6.12(b).

Some refinements of the Ramanujan and rooted-forest polynomials
Let us now return to the generalized Ramanujan polynomials $f_{n,k}(y,z)$ defined in (1.10), which enumerate forests of rooted trees according to the number of improper and proper edges, and the corresponding matrix $F(y,z) = (f_{n,k}(y,z))_{n,k\ge 0}$. We saw in Propositions 4.7 and 4.8 that the production matrix of $F(y,z)$ is $P(y,z) = B_z D T_y D^{-1} \Delta$ (6.50), which is totally positive by Lemmas 2.2 and 2.3. Observe now that the Toeplitz matrix of powers $T_y$ is simply a special case of the inverse bidiagonal matrix $T(\boldsymbol{y})$ with entries $T(\boldsymbol{y})_{ij} = y_{j+1}\, y_{j+2} \cdots y_i$ for $0 \le j \le i$. (We call it the "inverse bidiagonal matrix" because it is the inverse of the lower-bidiagonal matrix that has 1 on the diagonal and $-y_1, -y_2, \ldots$ on the subdiagonal.) It is easy to prove [117] that the inverse bidiagonal matrix $T(\boldsymbol{y})$ is totally positive, coefficientwise in the indeterminates $\boldsymbol{y} = (y_i)_{i\ge 1}$. So let $F(\boldsymbol{y},z) = (f_{n,k}(\boldsymbol{y},z))_{n,k\ge 0}$ be the output matrix corresponding to the production matrix $P(\boldsymbol{y},z) = B_z D T(\boldsymbol{y}) D^{-1} \Delta$ (6.52). We then have the following generalization of Theorem 1.3(a,c):

Theorem 6.14 (Total positivity of the refined Ramanujan polynomials).
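The inverse-bidiagonal relationship is easy to verify directly: multiplying the bidiagonal matrix against the matrix of products $y_{j+1} \cdots y_i$ telescopes to the identity. The Python sketch below (our own naming) checks this numerically, together with the specialization $y_i = y$ that recovers the Toeplitz matrix of powers $T_y$.

```python
def bidiagonal(y):
    """(len(y)+1) x (len(y)+1) lower-bidiagonal matrix: 1 on the diagonal,
    -y_1, -y_2, ... on the subdiagonal."""
    N = len(y) + 1
    B = [[int(i == j) for j in range(N)] for i in range(N)]
    for i in range(1, N):
        B[i][i - 1] = -y[i - 1]
    return B

def inverse_bidiagonal(y):
    """T(y)_{ij} = y_{j+1} y_{j+2} ... y_i for 0 <= j <= i (empty product = 1)."""
    N = len(y) + 1
    T = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i + 1):
            p = 1
            for m in range(j + 1, i + 1):
                p *= y[m - 1]     # y_m, with y indexed from 1
            T[i][j] = p
    return T

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]
```

The check $(BT)_{ik} = T_{ik} - y_i T_{i-1,k} = 0$ for $k < i$ is exactly the telescoping just mentioned.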
Proof. Since the production matrix $P(\boldsymbol{y},z)$ is totally positive, part (a) follows immediately from Theorem 2.9. Moreover, Lemma 2.7 implies that $\widehat{P}(\boldsymbol{y},z) \stackrel{\rm def}{=} \Delta P(\boldsymbol{y},z) \Delta^{\rm T} = \Delta\, B_z D T(\boldsymbol{y}) D^{-1}$ (6.53) is the production matrix for $\widehat{F}(\boldsymbol{y},z) = \Delta F(\boldsymbol{y},z) \Delta^{\rm T}$. Since $\widehat{P}(\boldsymbol{y},z)$ is totally positive, part (c) follows from Theorem 2.14, because the zeroth column of $\widehat{F}(\boldsymbol{y},z)$ is $(f_{n+1,1}(\boldsymbol{y},z))_{n\ge 0}$.
By contrast, the analogue of Theorem 1.3(b) does not hold in this generality: that is, the row-generating polynomials of $F(\boldsymbol{y},z)$ are not coefficientwise Hankel-totally positive. Indeed, I have been unable to find any interesting specializations of the $\boldsymbol{y}$ (other than $y_i = y$ for all $i$) for which this coefficientwise Hankel-total positivity holds. For example, if we take $y_i = q^i$, then the $3 \times 3$ Hankel determinant is a degree-4 polynomial in $x$ whose coefficient of $x^4$ is $-q^2(q-1)^2 + 3q(q-1)^2 z$ (6.54), which is not coefficientwise nonnegative in $z$ for any real number $q \neq 0, 1$. Even so, Theorem 6.14 shows that the polynomials $f_{n,k}(\boldsymbol{y},z)$ are of some interest. What do they count? Obviously they enumerate forests of rooted trees according to the number of proper edges together with some refinement of the improper edges into classes $1, 2, 3, \ldots$ with weights $y_1, y_2, y_3, \ldots$. What are these classes?

Problem 6.15 (Interpretation of the refined Ramanujan polynomials). Find a combinatorial interpretation of the refined Ramanujan polynomials $f_{n,k}(\boldsymbol{y},z)$.
Alternatively, in the production matrix $P(y,z) = B_z D T_y D^{-1} \Delta$ or $P(y,\phi) = (D T_\infty(\phi) D^{-1})(D T_y D^{-1}) \Delta$, we could replace $T_y$ by a more general Toeplitz matrix $T_\infty(\xi)$. Since $P(\xi,\phi) = P(\phi \ast \xi, 0)$, it is immediate that the polynomials $f_{n,k}(\xi,\phi)$ generated by the production matrix $P(\xi,\phi)$ possess all the properties asserted in Theorem 1.4 whenever both $\phi$ and $\xi$ are Toeplitz-totally positive. Furthermore, since $f_{n,k}(\xi,\phi) = L_{n,k}(\phi \ast \xi)$ by Proposition 5.1, these polynomials have a trivial interpretation as enumerating forests of increasing rooted trees with a weight $m!\,(\phi \ast \xi)_m$ for each vertex with $m$ children. But we would like, rather, an interpretation in terms of forests of general (not necessarily increasing) rooted trees that reduces to our original definition of the rooted-forest polynomials $f_{n,k}(y,\phi)$ when $\xi$ is the sequence of powers of $y$:

Problem 6.16 (Interpretation of the refined rooted-forest polynomials). Find a combinatorial interpretation of the polynomials $f_{n,k}(\xi,\phi)$ in which each vertex with $m$ proper children gets a weight $m!\,\phi_m$ and weights $\xi$ are somehow assigned to the improper children/edges.
In this context, see the Remark at the end of Section 3.3.