Abstract
Cumulants are a notion that comes from classical probability theory; they are an alternative to moments. We adapt the probabilistic concept of cumulants to the setup of a linear space equipped with two multiplication structures. We present an algebraic formula which expresses a product that involves both multiplications as a sum of products of cumulants. In our approach, beside cumulants, we make use of standard combinatorial tools such as forests and their colourings. We also show that the resulting statement can be understood as an analogue of Leonov–Shiryaev’s formula. This purely combinatorial presentation leads to some conclusions about the structure constants of Jack characters.
1 Introduction
1.1 Cumulants in probability theory
One of the classical problems in probability theory is to describe the joint distribution of a family \(\left( X_i \right) \) of random variables in the most convenient way. A common solution to this problem is to use the family of moments, i.e. the expected values of products of the form
It has been observed that in many problems it is more convenient to make use of the cumulants [7, 9], defined as the coefficients of the expansion of the logarithm of the multidimensional Laplace transform around zero:
where the terms on the right-hand side should be understood as a formal power series in the variables \(t_1, \ldots , t_n\). The map \(\kappa \) is linear with respect to each of its arguments.
There are some good reasons for claiming the advantage of cumulants over moments. One of them is that the convolution of measures corresponds to the product of the Laplace transforms or, in other words, to the sum of the logarithms of the Laplace transforms. It follows that cumulants behave in a very simple way with respect to convolution: they linearize it.
Cumulants also admit a combinatorial description. One can show that the expression (1) is equivalent to the following system of equations, called the moment–cumulant formula:
which should hold for any choice of the random variables \(X_1, \ldots , X_n\) whose moments are all finite. The above sum runs over the set partitions \(\nu \) of the set \(\left[ n\right] = \lbrace 1,\ldots , n \rbrace \), and the product runs over the blocks of the partition \(\nu \).
Example 1.1
For three random variables, the corresponding moment expands as follows:
Observe that the moment–cumulant formula defines the cumulant \(\kappa \big (X_1,\ldots ,X_n\big )\) by induction on the number of arguments n.
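Since everything that follows relies on this inductive definition, it may help to see it executed. The following Python sketch (an illustration of the classical moment–cumulant recursion, not part of the paper's framework) computes joint cumulants from an arbitrary moment functional; the test data are the moments of a Poisson distribution with parameter 1 (the Bell numbers), all of whose cumulants equal 1.

```python
def set_partitions(items):
    """Enumerate all set partitions of a list; each partition is a list
    of blocks, each block a tuple of the original items."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for sub in set_partitions(rest):
        yield [(first,)] + sub                            # `first` alone
        for i, block in enumerate(sub):                   # or joined to a block
            yield sub[:i] + [(first,) + block] + sub[i + 1:]

def cumulant(moment, indices):
    """Joint cumulant of the variables at `indices`, defined inductively by
    the moment-cumulant formula: the moment equals the sum over all set
    partitions of products of block cumulants, hence kappa is the moment
    minus the contributions of all partitions with at least two blocks."""
    result = moment(tuple(indices))
    for part in set_partitions(list(indices)):
        if len(part) == 1:
            continue              # the one-block partition is kappa itself
        prod = 1
        for block in part:
            prod *= cumulant(moment, block)
        result -= prod
    return result

# Poisson(1): the k-th moment is the k-th Bell number; every cumulant is 1.
bell = [1, 1, 2, 5, 15, 52]
poisson_moment = lambda idx: bell[len(idx)]
print([cumulant(poisson_moment, tuple(range(n))) for n in range(1, 5)])  # -> [1, 1, 1, 1]
```

For three variables this recursion collects exactly the five set partitions of \([3]\), matching the expansion of Example 1.1.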
1.2 Conditional cumulants
Let \(\mathcal {A}\) and \(\mathcal {B}\) be commutative unital algebras, and let \(\mathbb {E}: \mathcal {A}\longrightarrow \mathcal {B}\) be a unital linear map. We say that \(\mathbb {E}\) is a conditional expected value. For any tuple \(x_1,\ldots , x_n \in \mathcal {A}\), we define their conditional cumulant as
where the terms on the right-hand side should be understood as in Eq. (1). In this general approach, cumulants give a way of measuring the discrepancy between the algebraic structures of \(\mathcal {A}\) and \(\mathcal {B}\).
1.3 Framework
In this paper, we are interested in the following particular case. We assume that \(\mathcal {A}\) is a linear space equipped with two commutative multiplication structures, which correspond to two products: \(\cdot \) and \(*\). Together with each multiplication, \(\mathcal {A}\) forms a commutative algebra. We call such a structure an algebra with two multiplications. We also assume that the mapping \(\mathbb {E}\) is the identity map on \(\mathcal {A}\):
In this case, the cumulants measure the discrepancy between these two multiplication structures on \(\mathcal {A}\). This situation arises naturally in many branches of algebraic combinatorics, for example in the case of Macdonald cumulants [5, 6] and cumulants of Jack characters [3, 24].
Since the mapping \(\mathbb {E}\) is the identity, we can define cumulants of cumulants and further compositions of them. The terminology of cumulants of cumulants was introduced in [19] and further developed in [14] (where they are called nested cumulants) in a slightly different situation of an inclusion of algebras \(\mathcal {C}\subseteq \mathcal {B}\subseteq \mathcal {A}\) and conditional expectations \(\mathcal {A}{\mathop {\longrightarrow }\limits ^{\mathbb {E}_1}}\mathcal {B}{\mathop {\longrightarrow }\limits ^{\mathbb {E}_2}} \mathcal {C}\).
As we already mentioned in Sect. 1.1, cumulants also admit a combinatorial description via the moment–cumulant formula. When \(\mathbb {E}\) is the identity map, (3) is equivalent to the following system of equations:
which should hold for any elements \(a_i \in \mathcal {A}\) (the product on the right-hand side is the \(\cdot \)-product). The above sum runs over the set partitions \(\nu \) of the set \(\left[ n\right] \), and the product runs over all blocks b of the partition \(\nu \).
Let A be a multiset consisting of elements of the algebra \(\mathcal {A}\). To simplify notation, for any partition \(\nu \) of a multiset A we introduce the corresponding cumulant \(\kappa _\nu \) as the product:
We denote by \(\mathcal {P} \left( {A} \right) \) the set of all partitions of A. With this notation, the moment–cumulant formula has the following form:
Example 1.2
Given three elements \(a_{1} ,a_{2} ,a_{3} \in \mathcal {A}\), we have:
The equations above provide explicit formulas for \(\kappa \big (a_1 ,a_2\big )\) and \(\kappa \big (a_1 , a_2,a_3\big )\) in terms of the two multiplications considered on \( \mathcal {A}\):
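To make these formulas concrete, here is a toy algebra with two multiplications in Python: coefficient vectors in \(\mathbb {R}[x]/(x^N)\), with \(\cdot \) the truncated polynomial product and \(*\) the coefficientwise (Hadamard) product. Both products are our own illustrative choice, not taken from the paper, and we assume the convention of (4), in which a \(*\)-product expands into \(\cdot \)-products of block cumulants; in particular \(\kappa (a_1, a_2) = a_1 * a_2 - a_1 \cdot a_2\).

```python
from functools import reduce

N = 6  # work in R[x]/(x^N): elements are coefficient vectors of length N

def dot(f, g):
    """The .-product: polynomial multiplication truncated at degree N."""
    h = [0] * N
    for i in range(N):
        for j in range(N - i):
            h[i + j] += f[i] * g[j]
    return h

def star(f, g):
    """The *-product: coefficientwise (Hadamard) multiplication."""
    return [fi * gi for fi, gi in zip(f, g)]

def set_partitions(items):
    """Enumerate all set partitions of a list of indices."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for sub in set_partitions(rest):
        yield [(first,)] + sub
        for i, block in enumerate(sub):
            yield sub[:i] + [(first,) + block] + sub[i + 1:]

def kappa(args):
    """Cumulant of the identity map, via the moment-cumulant formula (4):
    the *-product of the arguments equals the sum, over set partitions,
    of .-products of block cumulants; solve for the one-block term."""
    if len(args) == 1:
        return list(args[0])
    result = reduce(star, args)
    for part in set_partitions(list(range(len(args)))):
        if len(part) == 1:
            continue
        term = reduce(dot, [kappa([args[i] for i in b]) for b in part])
        result = [r - t for r, t in zip(result, term)]
    return result

a1 = [1, 2, 0, 0, 0, 0]
a2 = [3, 4, 0, 0, 0, 0]
print(kappa([a1, a2]))  # kappa(a1,a2) = a1 * a2 - a1 . a2 -> [0, -2, -8, 0, 0, 0]
```

Swapping the roles of `star` and `dot` in `kappa` would produce the cumulants \(\kappa ^*\) of Sect. 1.6.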
1.4 The main result
The purpose of this paper is to present an algebraic formula which expresses a product involving the two multiplications on a linear space \(\mathcal {A}\):
as a sum of products of only one type of multiplication.
We use the following notation. We denote by \(A_1 , \ldots ,A_n\) multisets consisting of elements of \(\mathcal {A}\). We denote by \( A = A_1 \cup \cdots \cup A_n\) the multiset corresponding to the sum of all multisets \(A_i\). We also use the following notation for elements of \(A_i\):
Hence, the multiset A consists of the following elements:
Due to the combinatorial nature of this result, we now introduce the definitions of mixing reduced forests and their cumulants. We begin with the following definition.
Definition 1.3
Consider a forest F whose leaves are labelled by elements of an algebra \(\mathcal {A}\). We denote by A the multiset consisting of labels of all leaves. If each node (vertex which is not a leaf) of F has at least two descendants, we call F a reduced forest with leaves in A. We denote the set of such forests by \({\mathcal {F}} \left( {A} \right) \) (see Fig. 1).
Note that the forests considered in this definition are not required to be planar (see also the discussion in Sect. 1.10).
To a reduced forest \(F \in {\mathcal {F}} \left( {A} \right) \), we associate a cumulant \(\kappa _F\) in the following way:
Definition 1.4
Consider a reduced forest \(F \in {\mathcal {F}} \left( {A} \right) \). Denote by \(a_v \) the label of a leaf v. For any vertex \(v \in F\), we define inductively the quantities \(\kappa _v\) as follows:
where \(v_1 , \ldots , v_n\) are the descendants of v. For the whole forest F, we define the cumulant \(\kappa _F\) to be the product:
where \(V_i\) are the roots of all trees in F (see Fig. 1).
Finally, we introduce the class of mixing forests and the associated quantity \(w_F\).
Definition 1.5
Let us consider a multiset \(A = A_1 \cup \cdots \cup A_n\) and a reduced forest \(F \in {\mathcal {F}} \left( {A} \right) \). We say that F is mixing for the division \(A_1 ,\ldots ,A_n\) (or mixing for short) if for each vertex v whose descendants are all leaves, those descendants are elements of at least two distinct multisets \(A_i\) and \(A_j\). We denote by \(\overline{\mathcal {F}}(A)\) the set of all reduced mixing forests.
For a reduced mixing forest F, we define the quantity \(w_F\) to be the number of vertices in F minus the number of leaves (see Fig. 1).
We are ready to formulate the main result of this paper.
Theorem 1.6
(The main result) Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Let A be the sum of those multisets. Then:
Example 1.7
Figure 1 presents all reduced forests F on the multiset \(A =\{ a_1^1 , a_2^1 , a_1^2 \}\). Six of them are mixing. By the statement of the theorem, we have
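The counts in this example can be verified by brute force. The sketch below (with our own ad hoc Python encoding of non-planar rooted forests, not taken from the paper) enumerates all reduced forests on the three labelled leaves, checks the mixing condition of Definition 1.5, and computes \(w_F\); it finds eight reduced forests in total, six of which are mixing, in agreement with the example.

```python
from itertools import product

def set_partitions(items):
    """Enumerate all set partitions of a list of leaf labels."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for sub in set_partitions(rest):
        yield [(first,)] + sub
        for i, block in enumerate(sub):
            yield sub[:i] + [(first,) + block] + sub[i + 1:]

def trees(leaves):
    """All reduced rooted (non-planar) trees on the given leaves: a single
    leaf, or a root whose children are trees on a partition into >= 2 blocks."""
    if len(leaves) == 1:
        yield leaves[0]
        return
    for part in set_partitions(leaves):
        if len(part) < 2:
            continue
        for children in product(*(list(trees(list(b))) for b in part)):
            yield ("node", children)

def forests(leaves):
    """All reduced forests: a partition of the leaves plus a tree per block."""
    for part in set_partitions(leaves):
        yield from product(*(list(trees(list(b))) for b in part))

def w(forest):
    """w_F = (#vertices) - (#leaves), i.e. the number of internal vertices."""
    def internal(t):
        return 0 if isinstance(t, str) else 1 + sum(internal(c) for c in t[1])
    return sum(internal(t) for t in forest)

def is_mixing(forest, group):
    """Definition 1.5: every internal vertex whose children are all leaves
    must see leaves from at least two distinct multisets A_i."""
    def ok(t):
        if isinstance(t, str):
            return True
        kids = t[1]
        if all(isinstance(c, str) for c in kids) and len({group[c] for c in kids}) < 2:
            return False
        return all(ok(c) for c in kids)
    return all(ok(t) for t in forest)

group = {"a11": 1, "a21": 1, "a12": 2}   # A_1 = {a11, a21}, A_2 = {a12}
all_forests = list(forests(list(group)))
print(len(all_forests))                                    # -> 8
print(sum(1 for f in all_forests if is_mixing(f, group)))  # -> 6
```

The two non-mixing forests are exactly those containing a vertex whose children are the two leaves of \(A_1\).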
1.5 Leonov–Shiryaev’s formula
In 1959, Leonov and Shiryaev [15, Equation IV.d] presented a formula for a cumulant of products of random variables:
in terms of simple cumulants. This formula was first proved by Leonov and Shiryaev [15], a more direct proof was given by Speed [19]. The technique of Leonov and Shiryaev was used in many situations [13, 22] and was further developed in other papers: Krawczyk and Speicher [11, 17] found the free analogue of the formula; the formula was further generalized to the partial cumulants [18, Proposition 10.11].
We briefly present the original formula stated by Leonov and Shiryaev in the framework of an algebra with two multiplications. We use the same notation for the multisets \(A_1 , \ldots ,A_n\) and their sum \( A = A_1 \cup \cdots \cup A_n\) as in Sect. 1.4.
We introduce the notion of strongly mixing partitions (in the literature also called indecomposable partitions).
Definition 1.8
Consider a multiset \(A =A_1 \cup \cdots \cup A_n\). A partition \(\lambda =\{\lambda _1 ,\lambda _2 \}\) is called a column partition if for each multiset \(A_i\) we have: either \(A_i\subseteq \lambda _1\) or \(A_i\subseteq \lambda _2\).
A partition \(\nu = \{\nu _1 ,\ldots , \nu _q \}\) is called a strongly mixing partition for the division \(A =A_1 \cup \cdots \cup A_n\) (or strongly mixing for short) if there is no column partition \(\lambda \) such that for any i either \(\nu _i \subseteq \lambda _1\), or \(\nu _i \subseteq \lambda _2\) (see Fig. 2).
We denote by \(\hat{\mathcal {P}} \left( {A} \right) \) the set of all strongly mixing partitions of a set A.
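Strong mixing can be tested mechanically: a partition fails to be strongly mixing exactly when some column partition separates its blocks, which (as a standard reformulation of indecomposability, not stated in this form in the paper) amounts to connectivity of the bipartite graph joining each multiset \(A_i\) to the blocks it intersects. A Python sketch of this test, with our own encoding of partitions as lists of sets:

```python
def is_strongly_mixing(blocks, groups):
    """Definition 1.8, reformulated: a partition (list of blocks) of
    A = A_1 u ... u A_n is strongly mixing iff the bipartite graph joining
    each multiset A_i to every block it intersects is connected; a column
    partition separating the blocks exists exactly when it is not."""
    nodes = [("A", i) for i in range(len(groups))] + \
            [("blk", j) for j in range(len(blocks))]

    def neighbours(node):
        kind, k = node
        if kind == "A":
            return [("blk", j) for j, b in enumerate(blocks) if groups[k] & b]
        return [("A", i) for i, g in enumerate(groups) if g & blocks[k]]

    seen, stack = {nodes[0]}, [nodes[0]]       # depth-first search
    while stack:
        for nb in neighbours(stack.pop()):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == len(nodes)

A = [{1, 2}, {3, 4}]                              # A_1 = {1,2}, A_2 = {3,4}
print(is_strongly_mixing([{1, 3}, {2, 4}], A))    # -> True
print(is_strongly_mixing([{1, 2}, {3, 4}], A))    # -> False: (A_1, A_2) is a column partition
```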
We can now express Leonov–Shiryaev’s formula using the notations and notions relevant to the work done in this paper.
Theorem 1.9
(Leonov–Shiryaev’s formula)
where the sum on the right-hand side runs over all strongly mixing partitions of the set A.
Example 1.10
By Leonov–Shiryaev’s formula, the cumulant \( \kappa \left( a_1^1 \cdot a_2^1 , a_1^2\right) \) expands as follows:
1.6 Analogue of Leonov–Shiryaev’s formula
Leonov–Shiryaev’s formula relates a cumulant of products with certain products of cumulants. In the situation considered in this paper, where the conditional expected value is the identity mapping, we can define two types of cumulants, and for each of them Leonov–Shiryaev’s formula holds. We now present a third formula, which is a mix of those two.
Consider the identity map:
between commutative unital algebras \( \left( \mathcal {A}, \cdot \right) \) and \( \left( \mathcal {A}, *\right) \). Equation (4) defines cumulants \(\kappa \) of the identity mapping. Observe that we can also consider the inverse mapping, namely the map:
This mapping gives us a way to define cumulants [according to (4)], which we denote by \(\kappa ^*\).
We present below Leonov–Shiryaev’s formula for both mappings mentioned above:
and
where the sums in both equalities run over all strongly mixing partitions of the multiset \(A =\{a_i^j : i\in [n], j\in [k_i] \}\). Observe that in each equality the cumulants on both sides are of the same type, while the multiplications are of different types. In our formula, we will mix the types of cumulants on the two sides but keep the same multiplication.
To present our result, we introduce a class of strongly mixing forests \(\hat{\mathcal {F}} \left( {A} \right) \).
Definition 1.11
Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Consider a reduced mixing forest \(F \in \overline{\mathcal {F}} \left( {A} \right) \) consisting of trees \(T_1 , \ldots ,T_s\). Denote by \(a_v\in A\) the label of a leaf \(v\) of \(F\). We define a partition \({\nu }_F\) of the multiset A as follows:
We say that a mixing reduced forest \(F \in {\mathcal {F}} \left( {A} \right) \) is strongly mixing if the partition \(\nu _F\) is a strongly mixing partition. We denote the set of such forests by \(\hat{\mathcal {F}} \left( {A} \right) \).
Remark 1.12
Observe that the class of all strongly mixing forests \(\hat{\mathcal {F}} \left( {A} \right) \) is a subclass of all mixing forests \(\overline{\mathcal {F}} \left( {A} \right) \), which itself is a subclass of all reduced forests \({\mathcal {F}} \left( {A} \right) \), i.e.:
This is analogous to the natural order between the classes of strongly mixing partitions \(\hat{\mathcal {P}} \left( {A} \right) \), mixing partitions \(\overline{\mathcal {P}} \left( {A} \right) \) and partitions \(\mathcal {P} \left( {A} \right) \):
We can reformulate Theorem 1.6 as follows.
Theorem 1.13
(Analogue of Leonov–Shiryaev’s formula) Consider an algebra \(\mathcal {A}\) with two multiplicative structures \(\cdot \) and \(*\). Denote by \(\kappa \) and \(\kappa ^*\) cumulants related to the identity map on \(\mathcal {A}\) as we described above. Then, the following formula holds:
where \(\hat{\mathcal {F}} \left( {A} \right) \) is a set consisting of strongly mixing reduced forests.
Example 1.14
Figure 1 presents all reduced forests on the multiset \(A =\{ a_1^1 , a_2^1 , a_1^2 \}\). Six of them are mixing, and five of them are strongly mixing. Thus,
Proof
In (4), we presented the moment–cumulant formula for the cumulants \(\kappa \) related to the map \( \left( \mathcal {A}, \cdot \right) {\mathop {\longrightarrow }\limits ^{{{\,\mathrm{id}\,}}}} \left( \mathcal {A}, *\right) \). A similar expression for the cumulants \(\kappa ^*\) related to the inverse map \(\left( \mathcal {A}, *\right) {\mathop {\longrightarrow }\limits ^{{{\,\mathrm{id}\,}}^{-1}}} \left( \mathcal {A}, \cdot \right) \) is of the following form:
We express the \(\cdot \)-product \(\left( a^1_1 *\cdots *a_{k_1}^1 \right) \cdots \left( a^n_1 *\cdots *a_{k_n}^n \right) \) via the moment–cumulant formula given by the equation above:
From Theorem 1.6, we can express the left-hand side of this equation in another way:
Observe that we can split the summation of \(\left( -1\right) ^{w_F} \kappa _F\) over all mixing reduced forests \(F\in \overline{\mathcal {F}} \left( {A} \right) \) into a \(*\)-product of summations over all strongly mixing reduced forests:
where, for each block b of the partition \(\nu \), the sets \(A^b :=\bigcup _{i\in b} A_i\) form a division of the set A.
Observe that the quantities
satisfy the system of equations given by the moment–cumulant formula (8), which has a unique solution. This yields the statement of the theorem. \(\square \)
Remark 1.15
The above equation is still valid when we replace \(\kappa \) (which is hidden in the \(\kappa _F\) terms) with \(\kappa ^*\) and simultaneously replace \(*\)-products with \(\cdot \)-products.
1.7 Approximate factorization property
In many cases, cumulants are quantities of very small degree. The following definition makes this statement precise [24, Definition 1.8].
Definition 1.16
Let \(\mathcal {A}\) and \(\mathcal {B}\) be filtered unital algebras, and let \(\mathbb {E}: \mathcal {A}\longrightarrow \mathcal {B}\) be a unital linear map. Let \(\kappa \) be the corresponding cumulants. We say that \(\mathbb {E}\) has the approximate factorization property if for all choices of \(a_1, \ldots , a_l \in \mathcal {A}\) we have that
Observation 1.17
Let us go back to the case when \(\mathbb {E}\) is the identity map on an algebra \(\mathcal {A}\) with two multiplications. Suppose that the identity map
satisfies the approximate factorization property.
Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Let A be the sum of those multisets. Then, for any forest \(F \in {\mathcal {F}} \left( {A} \right) \) consisting of f trees, there is the following restriction on the degree of cumulants:
where \(|A|\) is the number of elements in A.
Proof
We analyse the definition of \(\kappa _F\) (Definition 1.4). For any vertex \(v \in F\), we defined the quantities \(\kappa _v\). Using the approximate factorization property, observe that
Going from the root r to the leaves, we obtain:
where \(n_r\) is the number of leaves in a tree rooted in r, and \(v_i\) for \(i\in [n_r]\) are leaves of this tree.
The cumulants \(\kappa _F\) were defined as follows:
where \(V_i\) are the roots of all trees in F, and hence \(\deg \kappa _F \le \sum _i \deg \kappa _{V_i}\). The statement of this observation now follows. \(\square \)
1.8 Application: Jack characters and their structure constants
Jack characters provide dual information about Jack polynomials, which are a ‘simple’ version of Macdonald polynomials [16]. Connections of Jack polynomials with various fields of mathematics and physics have been established (read more in [23]). A better understanding of Jack characters might shed some light on Jack polynomials. It seems that behind Jack characters stands a combinatorics of maps, i.e. graphs on surfaces [4].
Jack characters \({{\,\mathrm{Ch}\,}}_\pi \) form a natural family (indexed by partitions \(\pi \)) of functions on the set \(\mathbb {Y}\) of Young diagrams. One can introduce two different multiplicative structures on the linear space spanned by Jack characters.
The \(*\)-product is given by concatenation of partitions:
For any partitions \(\pi \) and \(\sigma \), one can uniquely express the pointwise product of the corresponding Jack characters
in the linear basis of Jack characters. The coefficients \(g_{\pi ,\sigma }^\mu (\delta )\in \mathbb {Q}[\delta ]\) in this expansion are called structure constants. Each of them is a polynomial in the deformation parameter \(\delta \), on which Jack characters depend implicitly. The existence of such polynomials was proven in [2]. There are several combinatorial conjectures about structure coefficients [24] and some partial results [1, 12]. Structure constants are closely related to structure coefficients introduced by Goulden and Jackson in [8].
Śniady considers the algebra of Jack characters as a graded algebra, with the gradation given by the notion of \(\alpha \)-polynomial functions [23]. Jack characters are \(\alpha \)-polynomial functions of the following degrees:
Śniady gave explicit formulas for the top-degree homogeneous part of Jack characters. We briefly sketch below how we use the result presented in this paper in order to find the top-degree coefficients of the structure constants.
Consider two integer partitions \(\pi =(\pi _1 ,\ldots ,\pi _n)\) and \(\sigma =(\sigma _1 ,\ldots ,\sigma _l)\) and the relevant multiset \(A=A_1 \cup A_2\) given by:
Together with the \(\cdot \)-product and the \(*\)-product described above, the linear space spanned by Jack characters becomes an algebra with two multiplications. Via (4), we can introduce cumulants as a way of measuring the discrepancy between those two types of multiplications. Recently, the approximate factorization property of cumulants of Jack characters was proven [24].
Lemma 1.18
(Reformulation of the main result) Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Let A be the sum of those multisets. Then:
where \(*\) and \(\cdot \) are two different multiplications on \(\mathcal {A}\) and \(\nu = \lbrace \nu _1, \ldots , \nu _{|\nu |} \rbrace \) is a partition of A.
Proof
Theorem 1.6 presents \(\left( a^1_1 *\cdots *a_{k_1}^1 \right) \cdots \left( a^n_1 *\cdots *a_{k_n}^n \right) \) as a sum over reduced mixing forests of the cumulants associated with those forests. The leaves of F are labelled by elements of \(\mathcal {A}\); thus, we denote by A the multiset consisting of those labels. Observe that each reduced mixing forest F splits naturally into a collection of trees \(T_1, \ldots , T_s\). Each \(T_i\) possesses the property of being reduced and mixing. The division of F into \(T_1, \ldots , T_s\) determines a partition \(\nu = \lbrace \nu _1, \ldots , \nu _{s} \rbrace \) of the set A, namely \(\nu _i \subset A\) consists of all labels of leaves of \(T_i\). The cumulant \(\kappa _F\) is equal to:
by the definition. Moreover, \( \left( -1\right) ^{w_{F}} = \prod _{i=1}^s \left( -1\right) ^{w_{T_i}} \). \(\square \)
Theorem 1.19
With notation presented above, for any two partitions \(\pi \) and \(\sigma \), the following decomposition is valid
where \(\nu = \lbrace \nu _1, \ldots , \nu _{|\nu |} \rbrace \) is a partition of A and \(\overline{\mathcal {T}} \left( {\nu _i} \right) \) denotes the set of all reduced mixing trees on \(\nu _i \subseteq A\).
Moreover, there is the following restriction on the degree of products of cumulants:
where \( |\nu | \) is the number of parts of the partition \(\nu \).
The statement above is based on Lemma 1.18; the bound on the degree follows immediately from Observation 1.17.
The division given in (9) is a tool for capturing the structure constants \(g_{\pi ,\sigma }^\mu \). It provides a way for an induction on the number \(\ell (\sigma ) +\ell (\pi )\). More precisely, we express \(\kappa _T\) in the linear basis of Jack characters inductively, according to the number of leaves. In a forthcoming paper [1], we give an explicit combinatorial interpretation for the coefficients of high-degree monomials in the deformation parameter \(\delta \).
1.9 How to prove the main theorem?
Theorem 1.6 is a straightforward conclusion from two propositions which we present in this section. In our opinion, they are interesting in themselves.
We begin by introducing a gap-free vertex colouring on forests \(F \in {\mathcal {F}} \left( {A} \right) \).
Definition 1.20
For a reduced forest F with leaves in a multiset \(A = A_1 \cup \cdots \cup A_n \), we say that c is a gap-free vertex colouring with length r if

c is a colouring by the numbers \(\{ 0,\ldots ,r\}\) and each colour is used at least once;

each leaf is coloured by 0;

the colours are strictly increasing on any path from a leaf to the root.
We denote by \(|c| :=r\) the length of c. We call such a colouring c weakly mixing if it satisfies one of the following additional conditions:

(1)
either there exists a vertex coloured by 1 with at least two descendants, each of which belongs to a distinct multiset \(A_i\),

(2)
or the colouring c does not use the colour 1 at all.
We denote by \(\mathcal {C}_F\) the set of all gap-free and weakly mixing colourings of a forest F.
The following result is a juggling of the concept of cumulants. We present its proof in Sect. 2.
Proposition 1.21
Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Let A be the sum of those multisets. Then,
In Sect. 3, we will show that summing over all colourings \(c \in \mathcal {C}_F\) for a reduced forest \(F \in {\mathcal {F}} \left( {A} \right) \) gives a surprisingly simple number. This result is presented in the proposition below.
Proposition 1.22
Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Let A be the sum of those multisets. Then, for any reduced forest \(F \in {\mathcal {F}} \left( {A} \right) \), the following holds:
Observe that by combining Propositions 1.21 and 1.22, we obtain the statement of Theorem 1.6.
1.10 Related work and free probability theory
A noncommutative probability space is a pair \(\left( \mathcal {A}, \phi \right) \) consisting of a unital algebra \(\mathcal {A}\) and a linear form \(\phi \) on \(\mathcal {A}\) such that \(\phi (1) =1\), which is called a noncommutative expectation [20, 21]. Functions
are called free moments.
Roland Speicher introduced the free cumulant functional [20] in free probability theory. It is related to the lattice of non-crossing partitions of the set [n] in the same way in which the classical cumulant functional is related to the lattice of all partitions of that set.
Definition 1.23
A partition \(\nu \in \mathcal {P}([n])\) is non-crossing if there is no quadruple of elements \(i<j<k<l\) such that \(i\sim _{\nu } k\), \(j\sim _{\nu } l\), and \(\lnot ( i\sim _{\nu } j )\), where “\(\sim _{\nu }\)” denotes the relation of being in the same block of the partition \(\nu \).
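This condition is easy to test directly. The following sketch (an illustration with our own encoding of partitions as lists of tuples) checks Definition 1.23 by scanning quadruples and, as a sanity check, recovers the counts for n = 4: of the 15 set partitions (a Bell number), 14 are non-crossing (a Catalan number), the unique crossing one being \(\{\{1,3\},\{2,4\}\}\).

```python
from itertools import combinations

def set_partitions(items):
    """Enumerate all set partitions of a list."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for sub in set_partitions(rest):
        yield [(first,)] + sub
        for i, block in enumerate(sub):
            yield sub[:i] + [(first,) + block] + sub[i + 1:]

def is_noncrossing(blocks):
    """Definition 1.23: no i < j < k < l with i, k in one block and
    j, l together in a different block."""
    block_of = {x: idx for idx, b in enumerate(blocks) for x in b}
    for i, j, k, l in combinations(sorted(block_of), 4):
        if block_of[i] == block_of[k] != block_of[j] == block_of[l]:
            return False
    return True

parts = list(set_partitions([1, 2, 3, 4]))
print(len(parts))                                  # -> 15 (the Bell number B_4)
print(sum(1 for p in parts if is_noncrossing(p)))  # -> 14 (the Catalan number C_4)
```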
The free cumulants are defined implicitly by the system of equations
where the sum runs over all non-crossing set partitions of [n]; compare to (3). Möbius inversion over the lattice of non-crossing partitions gives the formula for the free cumulants in terms of the free moments.
Josuat-Vergès et al. [10, Theorem 4.2] give formulas for free cumulants in terms of Schröder trees, i.e. reduced plane trees for which the rightmost subtree is a leaf. To each such tree, they associate a term constructed by the mapping \(\phi \). At first glance, the formula seems to be related to the formula we give in the current paper. However, the reasons for the appearance of reduced trees in the two papers are different. In their work, reducedness of trees is a natural property appearing while recovering the free cumulants from the free moments. In our case, where \(\mathbb {E}\) or \(\phi \) is the identity, we have \(\kappa \left( a\right) =a\) for any element \(a\in \mathcal {A}\). As a result, the trees that we are considering are reduced. The notion of free cumulants is based on non-crossing partitions, which is reflected in the planarity of the trees they consider.
There are other approaches to freeness than considering a linear form \(\phi \) on an algebra \(\mathcal {A}\). The standard and most common approach is to additionally require that \(\mathcal {A}\) is a \(\mathcal {B}\)-module or that \(\mathcal {B} \subseteq \mathcal {A}\) is a subalgebra. The mapping \(\phi : \mathcal {A}\longrightarrow \mathcal {B}\) satisfies the bimodule map property:
for any \(a \in \mathcal {A}\) and \(b_1, b_2 \in \mathcal {B}\). There are several slightly different approaches, e.g. free products with amalgamation over \(\mathcal {B}\) [25], where cumulants and moments are operator-valued multiplicative functions [21, Definition 2.1.1].
Our work is based on the idea of taking the identity map between elements of an algebra with two different multiplications (\(\phi \equiv {{\,\mathrm{id}\,}}\)). Such a situation does not arise naturally in free probability theory, where \(\phi \) is usually either a linear form or a bimodule map on an algebra \(\mathcal {A}\). This raises the question of whether it is still possible to define cumulants naturally in the setup of noncommutative algebras with two different products.
2 Proof of Proposition 1.21
In this section, we shall prove Proposition 1.21. We use the same notation as in Sect. 1.4. We denote by \(A_1 , \ldots ,A_n\) multisets consisting of elements of \(\mathcal {A}\). We denote by \( A = A_1 \cup \cdots \cup A_n\) the multiset which is the sum of all multisets \(A_i\). We also use the following notation for the elements of \(A_i\):
Hence, the multiset A consists of the following elements:
Additionally, we denote the set of all partitions of A by \(\mathcal {P} \left( {A} \right) \).
We say that a partition \(\nu =\{ \nu _1 ,\ldots , \nu _l \}\) is a mixing partition if there exists an index \(i \in [l]\) such that \(\nu _i \not \subseteq A_j\) for all \(j \in [n]\). The partition \(\nu \) presented in Fig. 2 is a mixing partition. We denote by \(\overline{\mathcal {P}} \left( {A} \right) \) the set of all mixing partitions of A.
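The splitting of \(\mathcal {P} \left( {A} \right) \) into mixing partitions and products of partitions of the individual \(A_i\), used below, can be checked on a small example. The sketch below (our own illustrative encoding: blocks as tuples, multisets \(A_i\) as sets) counts mixing partitions for \(A_1=\{1,2\}\), \(A_2=\{3\}\): of the 5 partitions of A, exactly \(B(2)\,B(1)=2\) are non-mixing, leaving 3 mixing partitions.

```python
def set_partitions(items):
    """Enumerate all set partitions of a list."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for sub in set_partitions(rest):
        yield [(first,)] + sub
        for i, block in enumerate(sub):
            yield sub[:i] + [(first,) + block] + sub[i + 1:]

def is_mixing_partition(blocks, groups):
    """A partition is mixing iff some block is contained in none of the A_j."""
    return any(not any(set(b) <= g for g in groups) for b in blocks)

groups = [{1, 2}, {3}]                # A_1 = {1,2}, A_2 = {3}
parts = list(set_partitions([1, 2, 3]))
mixing = [p for p in parts if is_mixing_partition(p, groups)]
# Non-mixing partitions are exactly pairs (partition of A_1, partition of A_2):
# Bell(2) * Bell(1) = 2 of them, so 5 - 2 = 3 partitions of A are mixing.
print(len(parts), len(mixing))        # -> 5 3
```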
2.1 Outline of the proof
Firstly, we express the left-hand side of (10) as a sum of cumulants, where the sum runs over all mixing partitions \(\nu \in \overline{\mathcal {P}} \left( {A} \right) \); see (11). By applying inductively the procedure (12) described below, we replace the summation over all mixing partitions \(\nu \in \overline{\mathcal {P}} \left( {A} \right) \) by a sum over all nested upward sequences of partitions; see Definition 2.2. Then, we construct a bijection between such sequences and reduced forests \(F \in {\mathcal {F}} \left( {A} \right) \) equipped with gap-free, weakly mixing colourings \(c\in \mathcal {C}_F\) (see Definitions 1.3, 1.20). Later on, we will prove that the weighted sum over all gap-free colourings for a fixed forest is equal either to 0 or to \(\pm 1\).
2.2 Cumulants of mixing partitions
Observe that the following equality of the sets holds:
where the elements of the Cartesian product \(\mathcal {P} \left( {A_1} \right) \times \cdots \times \mathcal {P} \left( {A_n} \right) \) are understood as partitions of the multiset \(A =A_1 \cup \cdots \cup A_n\).
We apply the moment–cumulant formula given in (4):
We split all partitions \(\mathcal {P} \left( {A} \right) \) into two categories: mixing partitions \(\overline{\mathcal {P}} \left( {A} \right) \) and products of partitions \(\mathcal {P} \left( {A_i} \right) \). In this way:
From the equations above, we obtain the following formula:
2.3 Cumulants of upward sequences of partitions
Each cumulant on the right-hand side of (11) is a \(\cdot \)-product of simple cumulants. We use the moment–cumulant formula in the form given below
to replace \(\cdot \)-products by \(*\)-products and \(\cdot \)-products consisting of a strictly smaller number of components.
For each cumulant on the right-hand side of (11), we apply the procedure (12). As an output, we get one term which is a \(*\)-product of cumulants and several terms of the form of a \(\cdot \)-product of cumulants. Observe that in each term of the second type the number of factors is strictly smaller than before applying the procedure. We apply this procedure to them iteratively as long as we have \(\cdot \)-terms in our expansion. In the end, we get a sum of terms given by \(*\)-products of cumulants.
Example 2.1
Let us express \(\left( a_1^1 *a_2^2 \right) \cdot a_1^2\) using the procedure described above:
To formalize our idea, we define nested upward sequences and their cumulants.
Definition 2.2
A sequence of partitions \(\omega = \left( \nu ^1 \nearrow \cdots \nearrow \nu ^r \right) \) is said to be upward if
for any \(1\le i \le r-1\) and \(\nu ^1\) is a partition of the multiset A. Moreover, if for each i the partition \(\nu ^{i+1}\) is nontrivial, i.e. \(\nu ^{i+1} \ne \left\{ \nu ^{i} \right\} \), the sequence is said to be nested. We define the length of an upward sequence of partitions \(\omega =\left( \nu ^1 \nearrow \cdots \nearrow \nu ^r\right) \) as r, and we denote \(|\omega | =r\).
Let us provide a simple example.
Example 2.3
Consider a 5element multiset \(A= \lbrace a_1, \ldots , a_5 \rbrace \) and the following nested upward sequences of partitions \(\omega _1 = (\nu ^1 \nearrow \nu ^2 )\) and \(\omega _2 = (\nu ^1 \nearrow \nu ^2 \nearrow \nu ^3 )\), where:
We introduce the following technical notation (similar to the definition of a cumulant \(\kappa _\nu \)) for an \(\ell \)-element partition \(\nu \):
Definition 2.4
Let \(\nu \) be a partition of a multiset \(A =\{a_1 , \ldots ,a_n \}\). Consider an upward sequence of partitions \(\omega = \left( \nu ^1 \nearrow \cdots \nearrow \nu ^r \right) \) such that
We define the cumulant associated with the sequence \(\omega \) as follows:
Example 2.5
The cumulants \(\kappa _{\omega _1}\) and \(\kappa _{\omega _2}\) associated with the nested upward sequences of partitions \(\omega _1\) and \(\omega _2\), respectively, from Example 2.3 are of the following forms:
where we used the property \(\kappa (x) =x\).
Definition 2.6
Consider a multiset \(A = A_1 \cup \cdots \cup A_n \). Denote by \( \mathcal {N} \left( {A} \right) \) the set of all nested upward sequences of partitions \(\omega = \left( \nu ^1 \nearrow \cdots \nearrow \nu ^r\right) \) such that \(\nu ^1 \in \overline{\mathcal {P}} \left( {A} \right) \) is a mixing partition.
Proposition 2.7
Consider a multiset \(A=A_1 \cup \cdots \cup A_n\). Then,
Proof
Apply the procedure (12) iteratively to the left-hand side of Proposition 2.7. Observe that applying this iterative procedure is nothing else but summing over all nested upward sequences of partitions \(\omega = \left( \nu ^1 \nearrow \cdots \nearrow \nu ^r\right) \). The sign of the term is determined by the number of iterations. The partition \(\nu ^1\) describes the first application of the procedure (this is why \(\nu ^1 \in \overline{\mathcal {P}} \left( {A} \right) \)), the partition \(\nu ^2\) the second, and so on. \(\square \)
Observe that different nested upward sequences of partitions \(\omega \) may lead to the same cumulant \(\kappa _\omega \). The following example illustrates this phenomenon.
Example 2.8
Let \(A_1 = \{ a_1^1 ,a_2^1 \}\) and \(A_2 =\{a_1^2 ,a_2^2 \}\). Consider \(\omega _1, \omega _2, \omega _3 \in \mathcal {N} \left( {A} \right) \) given by \(\omega _1 =\left( \nu ^1_1 \nearrow \nu ^2_1 \right) \) and \(\omega _2 = \left( \nu ^1_2 \nearrow \nu ^2_2\nearrow \nu ^3_2 \right) \) and \(\omega _3 = \left( \nu ^1_3 \nearrow \nu ^2_3\nearrow \nu ^3_3 \right) \), where:
Observe that all sequences \(\omega _1, \omega _2, \omega _3\) lead to the same term \(\kappa \Big ( \kappa \big (a_1^1 ,a_2^2\big ),\kappa \big (a_2^1 ,a_1^2 \big ) \Big )\) up to the sign. Moreover, they are the only ones which lead to this specific nested cumulant. Observe that
$$\begin{aligned} (-1)^{|\omega _1|} + (-1)^{|\omega _2|} + (-1)^{|\omega _3|} = 1 - 1 - 1 = -1. \end{aligned}$$
Pointedly, the weights given by \( (-1)^{|\omega _i|} \) sum up to \(-1\). We will see that this is a universal fact; more specifically, such weights sum up to \(\pm 1\) in general.
2.4 Reduced forests and their colourings
To each nested upward sequence of partitions \(\omega =\left( \nu ^1\nearrow \cdots \nearrow \nu ^r\right) \), we shall assign a certain rooted forest with a colouring. We construct a bijection between the sequences from \(\mathcal {N} \left( {A} \right) \) and the relevant rooted forests equipped with colourings.
Definition 2.9
Let \(\omega =\left( \nu ^1 \nearrow \cdots \nearrow \nu ^r\right) \) be a nested sequence of partitions. Denote the elements of partition \(\nu ^i\) by \(\nu ^i = \{ \nu ^i_1,\ldots ,\nu ^i_{k_i} \}\). Let \(\nu ^1 =\{ \nu ^1_1,\ldots , \nu ^1_{k_1} \}\) be a partition of \(A =A_1 \cup \cdots \cup A_n\). We associate with \(\omega \) a rooted forest with coloured vertices by the following procedure:

The elements of A are leaves of the forest. We colour each of them by 0.

For each element \(\nu ^i_j\), where \(1\le i \le r\) and \(1\le j \le k_i\), we create a vertex and colour it by i.

We join \(\nu ^{i-1}_{j'}\) and \(\nu ^i_j\) if \(\nu ^{i-1}_{j'} \subseteq \nu ^i_j\). Similarly, we join \(a\in A \) and \(\nu ^1_j\) if \(a \in \nu ^1_j\).

We delete each vertex \(v\) which has only one descendant, and we join that descendant to the parent of \(v\).
We denote by \(\Phi _1(\omega )\) the forest and by \(\Phi _2(\omega )\) the colouring associated with \(\omega \).
Example 2.10
The coloured forests associated with \(\omega _1, \omega _2, \omega _3 \) from Example 2.8 are presented in Fig. 3. Since each of \(\omega _1, \omega _2, \omega _3 \) ends with a single-element partition, all three obtained forests are trees.
The forest described in Definition 2.9 consists of \(k_r\) rooted trees, where \(k_r\) is the number of elements in \(\nu ^r\), namely \(\nu ^r = \{ \nu ^r_1,\ldots ,\nu ^r_{k_r} \}\). The nestedness of \(\omega \) translates to the fact that each colour is used. Except for leaves, each vertex has at least two descendants. This leads to the definitions of a reduced forest and of a gap-free, weakly mixing colouring. We mentioned their definitions in the introduction (see Definitions 1.3 and 1.20).
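The construction of Definition 2.9 can be sketched in code. The encoding below is ours, not the paper's: a partition is a list of `frozenset`s of leaf labels (each \(\nu^i\) flattened down to leaves), a leaf is a pair `(0, label)`, and an inner vertex is a pair `(colour, children)`.

```python
def build_forest(A, seq):
    """Sketch of the construction from Definition 2.9.

    A   -- list of leaf labels.
    seq -- nested upward sequence (nu^1, ..., nu^r); each nu^i is given,
           after flattening, as a list of frozensets of leaf labels, and
           every block of nu^i is a union of blocks of nu^(i-1).
    A tree is encoded as (colour, label) for a leaf and
    (colour, [children]) for an inner vertex.
    """
    def vertex(block, i):
        # vertex for a block of nu^(i+1); its colour is i + 1
        if i == 0:
            kids = [(0, a) for a in sorted(block)]      # leaves, colour 0
        else:
            kids = [vertex(b, i - 1) for b in seq[i - 1] if b <= block]
        if len(kids) == 1:
            return kids[0]   # step 4: delete a vertex with one descendant
        return (i + 1, kids)

    return [vertex(block, len(seq) - 1) for block in seq[-1]]
```

For the sequence \(\omega_1\) of Example 2.8 this produces a single tree: a root coloured 2 with two children coloured 1, each holding two leaves.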
Lemma 2.11
There exists a bijection \(\Phi \) between the set \(\mathcal {N} \left( {A} \right) \) of nested upward sequences starting with a mixing partition and the set of pairs \((F, c)\) consisting of a reduced forest \(F\in {\mathcal {F}} \left( {A} \right) \) together with a gap-free, weakly mixing colouring \(c \in \mathcal {C}_F\) of length \(r \ge 1\):
For any nested upward sequence starting with a mixing partition \(\omega \in \mathcal {N} \left( {A} \right) \), the following equality of cumulants holds
where \(\kappa ^{}_{\Phi _1 (\omega )}\) is a cumulant of a reduced forest \(\Phi _1 (\omega )\); see Definition 1.4.
Moreover, \(|\omega| =|\Phi _2(\omega )| \), i.e. the length of the nested upward sequence is equal to the length of the corresponding colouring.
Proof
Definition 2.9 shows how to associate a reduced forest \(F:= \Phi _1 (\omega )\) with the gap-free colouring \(c:=\Phi _2 (\omega )\) to a nested upward sequence \(\omega \). The construction is done in such a way that \(|c|=|\omega| \). For the reverse direction, the algorithm is easily reproducible. The condition that a nested upward sequence \(\omega = \left( \nu ^1 \nearrow \cdots \nearrow \nu ^r \right) \in \mathcal {N} \left( {A} \right) \) starts with a mixing partition \(\nu ^1 \in \overline{\mathcal {P}} \left( {A} \right) \) translates to the condition of \(c\) being a weakly mixing colouring (Definition 1.20).
In Definition 1.4, we introduced the cumulant \(\kappa ^{}_{F} \) for a forest \(F \in {\mathcal {F}} \left( {A} \right) \). There is an exact correspondence between this expression and the one given in Definition 2.4. \(\square \)
We are ready to prove Proposition 1.21 which is the purpose of this section. Let us recall its statement:
Proposition 1.21 Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Let \(A\) be a sum of those multisets. Then,
Proof
Combining Formula (11) and Proposition 2.7 leads to the following expression:
We identify the product term on the right-hand side of the equation above with the only reduced forest of length \(r =0\). Indeed, there is just one reduced forest of length \(r =0\) and just one gap-free, weakly mixing vertex colouring \(c\) of it, namely the forest \(F\) consisting of separated vertices \(a\in A\), each coloured by 0. The term \(\left( a^1_1 *\cdots *a_{k_1}^1 \right) *\cdots *\left( a^n_1 *\cdots *a_{k_n}^n \right) \) is equal to the corresponding cumulant \(\kappa _F\).
We replace the sum term on the right-hand side of the equation above according to the bijection, given in Lemma 2.11, between sequences \(\omega \in \mathcal {N} \left( {A} \right) \) and reduced forests of length \(r\ge 1\) with gap-free, weakly mixing colourings. \(\square \)
3 Proof of Proposition 1.22
In this section, we shall prove Proposition 1.22. For a given reduced forest \(F\), we investigate the sum
$$\begin{aligned} \sum _{c \in \mathcal {C}_F} (-1)^{|c|} \end{aligned}$$
over all gap-free, weakly mixing colourings of \(F\), which occur in Proposition 1.21.
3.1 Parameter \(w_F\) of a reduced forest \(F\in {\mathcal {F}} \left( {A} \right) \)
We introduce an invariant \(w_F\) which determines the coefficient of a cumulant \(\kappa _F\). This definition was already mentioned in Sect. 1.4; we recall it below and then extend it slightly:
Definition 1.5. Let us consider a multiset \(A = A_1 \cup \cdots \cup A_n\) and a reduced forest \(F \in {\mathcal {F}} \left( {A} \right) \). We say that \(F\) is mixing for the division \(A_1 ,\ldots ,A_n\) (or mixing, for short) if for each vertex \(v\) whose descendants are all leaves, those descendants are elements of at least two distinct multisets \(A_i\) and \(A_j\). Denote by \(\overline{\mathcal {F}}(A)\) the set of all reduced mixing forests.
For a reduced mixing forest F, we define the quantity \(w_F\) to be the number of vertices in F minus the number of leaves (see Fig. 1).
If \(F\) is not mixing, we define \(w_F := \infty \). We may also introduce the number \(w_F\) inductively, according to the height of a forest.
Definition 3.1
Let \(T\) be a reduced tree. The height of a tree \(T\) is the maximum distance between its root and one of its leaves. The height of a forest \(F\) is the height of the highest tree in \(F\). We denote this quantity by \(h(F)\).
Definition 3.2
(Definition equivalent to Definition 1.5) Let F be a reduced forest. We define the number \(w_F \in \mathbb {N} \cup \{\infty \}\) inductively on the number of vertices in F. Whenever F is a forest consisting of subtrees \(T_1, \ldots ,T_r\), we have
For F being a tree, we denote by \(F'\) the forest obtained by deleting the root of F and we have
Example 3.3
Let \(A=A_1 \cup A_2 \cup A_3\). In Fig. 4, we give an example of two forests (in particular, trees) and we compute the two corresponding numbers \(w_F\). Observe that the number \(w_F\) depends on the labels of the leaves of \(F\).
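The inductive description of \(w_F\) can be sketched as follows. The encoding is a hypothetical one of ours: a leaf is the index \(i\) of the multiset \(A_i\) its label belongs to, and an inner vertex is the tuple of its subtrees.

```python
INF = float("inf")

def w_tree(tree):
    """Sketch of Definitions 1.5/3.2 for a single tree.

    A leaf is an int (the index of its multiset A_i); an inner vertex
    is a tuple of subtrees.  Returns the number of inner vertices, i.e.
    vertices minus leaves, or infinity when the tree is not mixing.
    """
    if isinstance(tree, int):              # a single leaf
        return 0
    if all(isinstance(k, int) for k in tree) and len(set(tree)) < 2:
        return INF    # all descendants are leaves of one multiset A_i
    return sum(w_tree(k) for k in tree) + 1    # w_T = w_{T'} + 1

def w_forest(forest):
    # w_F is additive over the trees T_1, ..., T_r of the forest
    return sum(w_tree(t) for t in forest)
```

For instance, a tree whose two inner children cover leaves from multisets \(\{1,2\}\) and \(\{2,3\}\) has \(w = 3\), while replacing the second child by two leaves of the same multiset makes the tree non-mixing, \(w = \infty\).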
3.2 The proof of Proposition 1.22
Let us recall the statement of the proposition.
Proposition 1.22 Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Let \(A\) be a sum of those multisets. Then, for any reduced forest \(F \in {\mathcal {F}} \left( {A} \right) \), the following holds:
The proof of Proposition 1.22 is divided into two cases: either a forest F is not mixing, i.e. \(w_F =\infty \) (Lemma 3.4), or a forest F is mixing, i.e. \(w_F \ne \infty \) (Lemma 3.5). The next two subsections establish these two cases.
3.3 Proof of the not mixing case
Lemma 3.4
Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Let \(A\) be a sum of those multisets. For any reduced forest \(F\) which is not mixing, we have
$$\begin{aligned} \sum _{c \in \mathcal {C}_F} (-1)^{|c|} = 0. \end{aligned}$$
Proof
Since \(F\) is not mixing, there exists a vertex \(v\) such that all of its descendants are leaves and all of them belong to a particular multiset \(A_i\) for some \(i\in [n]\). Consider the following partition of the set \(\mathcal {C}_F\):
where each \(\mathcal {C}^1_i\) consists of all \(c \in \mathcal {C}_F\) with \(|c|=i\) such that the vertex \(v\) is coloured by its own colour, and \(\mathcal {C}^2_i\) consists of all \(c \in \mathcal {C}_F\) with \(|c|=i\) such that there is another vertex coloured by the same colour as the vertex \(v\). We express the sum over all \(c \in \mathcal {C}_F\) as follows:
We will show the equipotency of the sets \(\mathcal {C}_i^1\) and \(\mathcal {C}_{i-1}^2\), from which it follows that the sum above is equal to 0 and the statement of the lemma is true.
Let us construct a bijection between \(\mathcal {C}_i^1\) and \(\mathcal {C}_{i-1}^2\). Take any \(c\in \mathcal {C}_i^1\). Suppose that the vertex \(v\) is coloured by \(k\). Observe that \(k \ge 2\). Indeed, if \(v\) were coloured by 1, it would be the only vertex of this colour. Then, the only 1-coloured vertex would have descendants belonging to just one multiset \(A_i\), which contradicts the fact that \(c\in \mathcal {C}_F\) (i.e. \(c\) is a weakly mixing colouring). From \(c\in \mathcal {C}_i^1\), we construct \(c' \in \mathcal {C}_{i-1}^2\) as follows:

(1)
keep the colours of the vertices coloured by \(1,\ldots , k-1\) unchanged,

(2)
change the colours of the vertices coloured by \(k,\ldots , i\) to \(k-1,\ldots , i-1\), respectively.
This procedure is reversible. Indeed, take \(c' \in \mathcal {C}_{i-1}^2\) and suppose that the vertex \(v\) is coloured by \(k\) for some \(k \ge 1\). Then, \(c \in \mathcal {C}_{i}^1\) can be recovered by the following procedure:

(1)
do not change the colours of the vertices coloured by \(1,\ldots , k-1\),

(2)
do not change the colour of v,

(3)
change the colours of the vertices coloured by \(k,\ldots , i-1\) to \(k+1,\ldots , i\), respectively (excluding the vertex \(v\)).
\(\square \)
3.4 How to prove the mixing case?
We will prove the following lemma.
Lemma 3.5
Let \(A_1, \ldots , A_n \) be multisets consisting of elements of \(\mathcal {A}\). Let \(A\) be a sum of those multisets. For any reduced mixing forest \(F\), we have:
$$\begin{aligned} \sum _{c \in \mathcal {C}_F} (-1)^{|c|} = (-1)^{w_F}. \end{aligned}$$
To prove the lemma above, we show a bijection between gap-free colourings of reduced trees \(T \in {\mathcal {T}} \left( {A} \right) \) and gap-free colourings of reduced forests \(F\in {\mathcal {F}} \left( {A} \right) \) which are not trees (see Remark 3.6). Using this bijection, we can restrict the proof of Lemma 3.5 to the particular case of trees. For reduced trees and their gap-free colourings, we define a projection of such colourings (see Definition 3.7). We make use of the notion of projection in Lemma 3.8. The proof of Lemma 3.5 is done by induction on the height of the tree \(T\), and it is presented in Sect. 3.7.
3.5 Restriction to the trees
Remark 3.6
There is a natural bijection \(f\) between all reduced trees \(T \in {\mathcal {T}} \left( {A} \right) \) and all reduced forests \(F\in {\mathcal {F}} \left( {A} \right) \) which are not trees. This bijection is obtained by deleting the root of \(T\) (see Fig. 1). Moreover, for a given reduced tree \(T\in {\mathcal {T}} \left( {A} \right) \), there is an obvious bijection \(f_T\) between all gap-free colourings of \(T\) and all gap-free colourings of the corresponding reduced forest \(f(T)\); the bijection is obtained by keeping the colours of the non-deleted vertices; hence,
Additionally, the bijection \(f_T\) preserves the property of being a weakly mixing colouring.
The above statement allows us to prove Lemma 3.5 just for the case of trees \(T\in {\mathcal {T}} \left( {A} \right) \) and to conclude the statement for all forests \(F\in {\mathcal {F}} \left( {A} \right) \). Indeed, suppose that the statement of Lemma 3.5 holds for trees. Consider a mixing forest \(F\in {\mathcal {F}} \left( {A} \right) \) which is not a tree. Then, the tree \(T:=f^{-1} (F) \in {\mathcal {T}} \left( {A} \right) \) is also mixing; hence, we can use the statement of Lemma 3.5. Observe that \(w_F = w_T - 1\). Using the bijections \(f\) and \(f_T\), we get the following equality:
which is the statement of Lemma 3.5 for the mixing forest \(F\in {\mathcal {F}} \left( {A} \right) \).
3.6 Projection of a gapfree colouring
For any reduced tree \(T\in {\mathcal {T}} \left( {A} \right) \), we consider the subtrees \(T_1 ,\ldots , T_k\) formed by deleting the root of \(T\). The number \(k\) is equal to the degree of the root. Each subtree \(T_i\) is also a reduced tree. Any gap-free colouring \(c\) also induces subcolourings \(\bar{c}_1,\ldots , \bar{c}_k\) on \(T_1 ,\ldots , T_k\). Observe that subcolourings obtained this way are not necessarily gap-free. However, there is a canonical way to make them gap-free.
Definition 3.7
Let \(T\) be a reduced tree with a gap-free colouring \(c\). Let \(\bar{c}_1,\ldots , \bar{c}_k\) be the induced colourings on the subtrees \(T_1 ,\ldots ,T_k\) formed by deleting the root of \(T\). For each \(i \in [k]\), let \(j^i_0<\cdots < j^i_l\) be the sequence of colours used in the colouring \(\bar{c}_i\). By replacing each \(j^i_n\) by \(n\) in the colouring \(\bar{c}_i\), we obtain a gap-free colouring, which we denote by \(c_i\). We say that \(c_i\) is the \(i\)th projection of the colouring \(c\) and denote it by \({{\,\mathrm{p}\,}}_i (c): =c_i\); see Fig. 5.
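The relabelling of Definition 3.7 is a one-liner; a minimal sketch, assuming the colours of the vertices of one subtree \(T_i\) are given as a plain list:

```python
def project(colours):
    """Sketch of the projection from Definition 3.7.

    `colours` lists the colours of the vertices of one subtree T_i in
    the induced colouring; the colours j_0 < ... < j_l actually used
    are replaced by 0, ..., l, which makes the colouring gap-free.
    """
    rank = {j: n for n, j in enumerate(sorted(set(colours)))}
    return [rank[c] for c in colours]
```

For example, the induced colouring `[0, 2, 2, 5]` uses the colours \(0<2<5\) and projects to the gap-free colouring `[0, 1, 1, 2]`; a colouring that is already gap-free is left unchanged.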
Lemma 3.8
Let \(T\) be a reduced mixing tree of height \(h(T) \ge 2\). Denote by \(T_1, \ldots ,T_r\) all subtrees obtained by deleting the root of \(T\). Let \(c_1, \ldots ,c_r\) be gap-free colourings of \(T_1, \ldots ,T_r\), respectively. Then, the following equality holds:
$$\begin{aligned} \sum _{\begin{array}{c} c \in \mathcal {C}_T \\ {{\,\mathrm{p}\,}}_i (c) = c_i \end{array}} (-1)^{|c|} = (-1)^{|c_1| + \cdots + |c_r| + 1}. \end{aligned}$$
Proof
Whenever \(T\) is a mixing tree, any gap-free colouring \(c\) belongs to \(\mathcal {C}_T\). Indeed, take any vertex \(v\) coloured by 1. Clearly, its descendants are leaves labelled by elements of at least two distinct multisets \(A_i\) and \(A_j \) (by the assumption that \(T\) is mixing). The existence of such a vertex implies that \(c\in \mathcal {C}_T\).
The proof is divided into three steps: firstly, we construct a bijection between the gap-free colourings \(c\) projecting onto \(c_1 ,\ldots , c_r\) and certain integer paths in \(\mathbb {N}^r\); secondly, we introduce a generating function of these paths and characterize it by a recursion on the endpoints and a boundary condition; finally, we find a function satisfying those conditions.
\(\textit{Step 1}\) Let us recall that any gap-free colouring \(c\) of \(T\) induces colourings \(\overline{c}_i \) on \(T_i\), and further gap-free colourings \(c_i\) of \(T_i\) (Definition 3.7). We shall construct a bijection between all gap-free colourings \(c\) of \(T\) projecting on \(c_1,\ldots ,c_r\) and all integer paths \(\rho \) such that:

\(\rho \) connects \((0, \ldots , 0) \) and \(\left( |c_1|, \ldots ,|c_r|\right) \in \mathbb {N}^r\),

each step of \(\rho \) is of the following form: \(\left( k^{n}_1, \ldots ,k^{n}_r\right) \in \{0,1\}^r \setminus \left\{ \left( 0,\ldots ,0 \right) \right\} .\)
Denote the class of such paths by \(\mathcal {P}_{ |c_1|, \ldots ,|c_r|}\). Moreover, the construction is done in such a way that \(|c| =|\rho| +1\), where by \(|\rho| \) we denote the number of steps in \(\rho \).
For a gapfree colouring c, we construct a path \(\rho \) starting from \((0, \ldots , 0) \in \mathbb {N}^r\) by the following procedure: the nth step of \(\rho \) is of the form \(\left( k^n_1, \ldots , k^n_r \right) \) where:
An example of such a path is presented in Fig. 5.
The procedure described above is reversible. Indeed, take a path \(\rho \) between \(\left( 0, \ldots , 0\right) \) and \( \left( |c_1|, \ldots , |c_r|\right) \in \mathbb {N}^r\). Suppose that the \(n\)th step is of the form:
We can assign to the path \(\rho \) a colouring \(c\) by the following procedure. Let \(\left( x_1,\ldots ,x_r \right) \) be the endpoint of \(\rho \) after the \(n\)th step. We colour each vertex \(v\in T_i\) by \(n\) if \(v\) was coloured by \(x_i\) in the colouring \(c_i\) and \(k_i^n \ne 0 \). We colour the root by \(|\rho|  +1\).
\(\textit{Step 2}\) The bijection from \(\textit{Step 1}\) was constructed in such a way that \(|c| = |\rho|  +1\). Observe that
Let us define a function \(F : \mathbb {Z}^r \longrightarrow \mathbb {Z}\):
$$\begin{aligned} F(x_1, \ldots , x_r) = \sum _{\rho \in \mathcal {P}_{x_1, \ldots , x_r}} (-1)^{|\rho |}, \end{aligned}$$
where \(\mathcal {P}_{x_1, \ldots , x_r}\) denotes the class of paths described in \(\textit{Step 1}\) with the endpoint \((x_1, \ldots , x_r)\).
Observe that:

I.
for all \((x_1, \ldots , x_r) \not \in \mathbb {N}^r\), \(F(x_1, \ldots , x_r) =0 \),

II.
\(F(0, \ldots , 0) =1\),

III.
for all \((x_1, \ldots , x_r) \in \mathbb {N}^r \setminus (0,\ldots ,0)\), the function F satisfies the following recursive formula:
$$\begin{aligned} F(x_1, \ldots , x_r) = - \mathop {\sum }\limits _{\begin{array}{c} X \subseteq [r] \\ X\ne \emptyset \end{array}}^{} F(\overline{x}^X_1, \ldots , \overline{x}^X_r), \end{aligned}$$
where \(\overline{x}^X_i = \left\{ \begin{array}{ll} x_i &{} \text {if } i\not \in X, \\ x_i - 1 &{} \text {if } i\in X. \end{array} \right. \)
Let us briefly comment on this observation. There are no paths connecting \((0, \ldots , 0)\) with points \((x_1, \ldots ,x_r) \not \in \mathbb {N}^r\), since every step has non-negative coordinates (Observation I). There is just one path connecting the point \((0,\ldots ,0)\) to itself: the empty path. Its length is equal to 0 (Observation II). Consider all possibilities for the last step in a path \(\rho \). It is equivalent to choosing a set of indices \(X \subseteq [r] \), \(X\ne \emptyset \), and summing over all paths ending in \((\overline{x}^X_1,\ldots ,\overline{x}^X_r)\) multiplied by \(-1\), because we count the sign of the path (Observation III).
The function \(F\) is uniquely determined by Observations I–III. The recursive formula gives us a way to compute \(F(x_1,\ldots ,x_r)\) inductively according to \(\mathop {\sum }\limits _{i=1}^{r} x_i \). The first and the second observation give us the starting point for the induction, namely the values \(F(x_1,\ldots ,x_r)\) for \(\mathop {\sum }\limits _{i=1}^{r} x_i =0\).
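The recursion of Observations I–III is easy to check numerically. The sketch below assumes that each step of a path contributes a sign \(-1\) (the minus signs were lost in the rendered formulas); under that convention, the recursion reproduces the sign function \((-1)^{x_1+\cdots +x_r}\) on \(\mathbb {N}^r\), which anticipates the function \(G\) of Step 3.

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def F(x):
    """Signed path count of Step 2, computed from Observations I-III.

    Each recursive call removes the last step of the path -- a nonzero
    0/1-vector supported on a nonempty subset X of coordinates -- and
    contributes a factor -1, the sign of one extra step."""
    if any(xi < 0 for xi in x):        # Observation I: no such paths
        return 0
    if all(xi == 0 for xi in x):       # Observation II: the empty path
        return 1
    r = len(x)
    total = 0
    for mask in range(1, 2 ** r):      # nonempty subsets X of [r]
        total += F(tuple(xi - ((mask >> i) & 1) for i, xi in enumerate(x)))
    return -total                      # Observation III

# the recursion reproduces the sign function (-1)^(x_1 + ... + x_r)
for x in product(range(4), repeat=3):
    assert F(x) == (-1) ** sum(x)
```

The memoization matters: without it, the recursion over all nonempty subsets branches exponentially in the total coordinate sum.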
\(\textit{Step 3}\) We show now that the function \(G : \mathbb {Z}^r \longrightarrow \mathbb {Z}\) given by
$$\begin{aligned} G(x_1, \ldots , x_r) = \left\{ \begin{array}{ll} (-1)^{x_1 + \cdots + x_r} &{} \text {if } (x_1, \ldots , x_r) \in \mathbb {N}^r, \\ 0 &{} \text {otherwise,} \end{array} \right. \end{aligned}$$
satisfies all three properties I, II, III mentioned in \(\textit{Step 2.}\) Hence, those two functions F and G are equal. By connecting the results of each step, we get the statement of the lemma:
We shall show that the function \(G\) satisfies the three properties I, II, III. Clearly, it satisfies I and II. In order to show that the recursive formula also holds, take any \((x_1,\ldots ,x_r) \) with \(x_i \ge 0 \) for all \(i\) and \(\mathop {\sum }\limits _{i=1}^{r} x_i >0\).
Define the set \(Y \subseteq [r]\) consisting of the indices \(i\) with a positive coordinate of the point \((x_1,\ldots ,x_r) \); more precisely, \(i \in Y\) if \(x_i >0 \) and \(i \not \in Y\) if \(x_i =0 \).
In order to show that G satisfies III, we have to show the vanishing of the following sum:
Observe that the summands of the sum over \(X \not \subseteq Y\) are equal to 0. From \(X \not \subseteq Y\) it follows that there exists \(i \in [r]\) such that \(i\in X\) and \(i\not \in Y\). It means that \(\overline{x}^X_i =-1\) and, by definition, \(G(\overline{x}^X_1,\ldots ,\overline{x}^X_r) =0\).
Observe that the summands in the sum over \(X \subseteq Y\) are of the form \( \prod _{i=1}^r \left( -1 \right) ^{\overline{x}^X_i}\). Indeed, from \(X \subseteq Y\) it follows that \(\overline{x}^X_i \ge 0\) for all \(i \in [r]\). Thus, we have
3.7 Proof of Lemma 3.5
Remark 3.9
Let \(T\) be a reduced mixing tree such that \(h(T) \ge 2\). Let \(T_1, \ldots , T_r\) be the subtrees obtained from \(T\) by deleting the root. For any colouring \(c \in \mathcal {C}_T\), the projection \(c_i :={{\,\mathrm{p}\,}}_i (c)\) is in \( \mathcal {C}_{T_i}\) for each \(i\in [r]\).
Indeed, by definition, \(c_i ={{\,\mathrm{p}\,}}_i (c)\) are gap-free colourings of \(T_i\). Take any vertex \(v\) coloured by 1 in \(c_i\). Its descendants are leaves. Since \(w_T \ne \infty \), also \(w_{T_i} \ne \infty \). That means that the descendants of \(v\) belong to at least two distinct multisets. The existence of such a vertex implies that \(c_i\in \mathcal {C}_{T_i}\).
Proof of Lemma 3.5
We will use induction on the height of tree \(T\in {\mathcal {T}} \left( {A} \right) \).
We cannot begin with a tree of height 0, namely a one-vertex tree \(T =\bullet \), because there is no such tree if \(|A| \ge 2\).
Induction base We begin with a tree \(T\) of height one, namely one consisting just of the root and leaves. There is exactly one gap-free colouring, of length 1: the leaves are coloured by 0 and the root by 1. The claim follows immediately.
Induction step Let \(n \ge 2\). Suppose now that the statement of Lemma 3.5 holds for any tree \(T\) of height \(h(T) \le n-1\). We will show that the statement holds also for any tree of height equal to \(n \ge 2\). Indeed, take such a tree \(T\). Denote by \(T_1 , \ldots , T_r\) its subtrees obtained from \(T\) by deleting the root. Clearly, for every \(T_i\), we have \(h(T_i) \le n-1\), so we can use the induction hypothesis for them. We have:
which proves the statement for any tree of height equal to n. \(\square \)
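The statement of Lemma 3.5 can also be checked by brute force on small trees. The displayed formulas were lost in rendering, so the sketch below assumes the reconstructed sign identity \(\sum _{c \in \mathcal {C}_T} (-1)^{|c|} = (-1)^{w_T}\) for a mixing tree \(T\), and the admissibility rule suggested by the bijection of Lemma 2.11: inner vertices get colours \(1,\ldots ,m\) with every parent strictly exceeding its children, all colours used, and (for a mixing tree) every such colouring belonging to \(\mathcal {C}_T\). The tree encoding is the same hypothetical one as before: a leaf is the index of its multiset, an inner vertex a tuple of subtrees.

```python
from itertools import product

def internal_count(tree):
    # number of inner vertices (= w_T for a mixing tree); a leaf is an int
    if isinstance(tree, int):
        return 0
    return 1 + sum(internal_count(k) for k in tree)

def colouring_signs(tree):
    """Brute-force sum of (-1)^|c| over gap-free colourings of a
    mixing reduced tree, where |c| is the number of colours used."""
    nodes, edges = [], []           # edges: (parent_index, child_index)

    def walk(t, parent):
        if isinstance(t, int):      # leaves carry the fixed colour 0
            return
        idx = len(nodes)
        nodes.append(t)
        if parent is not None:
            edges.append((parent, idx))
        for k in t:
            walk(k, idx)

    walk(tree, None)
    n = len(nodes)
    total = 0
    for colours in product(range(1, n + 1), repeat=n):
        if any(colours[p] <= colours[ch] for p, ch in edges):
            continue                # parent colour must exceed child colour
        m = max(colours)
        if set(colours) != set(range(1, m + 1)):
            continue                # gap-free: colours 1..m all used
        total += (-1) ** m
    return total
```

On the tree of Example 2.8 (a root over two inner vertices, each with leaves from two multisets) the three admissible colourings give \(1 - 1 - 1 = -1 = (-1)^3\), matching \(w_T = 3\).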
References
Burchardt, A.: The top-degree part in the Matchings-Jack Conjecture. arXiv:1803.09330 (2018)
Dołęga, M., Féray, V.: Gaussian fluctuations of Young diagrams and structure constants of Jack characters. Duke Math. J. 165(7), 1193–1282 (2016)
Dołęga, M., Féray, V.: Cumulants of Jack symmetric functions and the \(b\)-conjecture. Trans. Amer. Math. Soc. 369, 9015–9039 (2017)
Dołęga, M., Féray, V., Śniady, P.: Jack polynomials and orientability generating series of maps. Sém. Lothar. Combin. 70, B70j (2013)
Dołęga, M.: Strong factorization property of Macdonald polynomials and higher-order Macdonald’s positivity conjecture. J. Algebraic Combin. 46(1), 135–163 (2017)
Dołęga, M.: Top degree part in \(b\)-conjecture for unicellular bipartite maps. Electron. J. Combin. 24(3), 39 (2017)
Fisher, R.A.: Moments and Product Moments of Sampling Distributions. Proc. Lond. Math. Soc. S2–30(1), 199 (1928)
Goulden, I.P., Jackson, D.M.: Connection coefficients, matchings, maps and combinatorial conjectures for Jack symmetric functions. Trans. Amer. Math. Soc. 348(3), 873–892 (1996)
Hald, A.: T.N. Thiele’s contributions to statistics. Int. Stat. Rev. 49(1), 1–20 (1981)
Josuat-Vergès, M., Menous, F., Novelli, J.-C., Thibon, J.-Y.: Free cumulants, Schröder trees, and operads. Adv. Appl. Math. 88, 92–119 (2017)
Krawczyk, B., Speicher, R.: Combinatorics of free cumulants. J. Combin. Theory Ser. A 90(2), 267–292 (2000)
Kanunnikov, A.L., Vassilieva, E.A.: On the matchings-Jack conjecture for Jack connection coefficients indexed by two single part partitions. Electron. J. Combin. 23(1), 30 (2016)
Lehner, F.: Cumulants in noncommutative probability I. Noncommutative exchangeability systems. Math. Z. 248, 67–100 (2004)
Lehner, F.: Free nested cumulants and an analogue of a formula of Brillinger. Probab. Math. Stat. 33, 327–339 (2013)
Leonov, V.P., Shiryaev, A.N.: On a method of semi-invariants. Theory Probab. Appl. 4, 319–329 (1959)
Lapointe, L., Vinet, L.: A Rodrigues formula for the Jack polynomials and the Macdonald–Stanley conjecture. Int. Math. Res. Not. 1995(9), 419–424 (1995)
Mingo, J.A., Speicher, R., Tan, E.: Second order cumulants of products. Trans. Amer. Math. Soc. 361(9), 4751–4781 (2009)
Nica, A., Speicher, R.: Lectures on the Combinatorics of Free Probability. London Mathematical Society Lecture Note Series, vol. 335. Cambridge University Press, Cambridge (2006)
Speed, T.P.: Cumulants and partition lattices. Aust. J. Stat. 25(2), 378–388 (1983)
Speicher, R.: Multiplicative functions on the lattice of noncrossing partitions and free convolution. Math. Ann. 298(4), 611–628 (1994)
Speicher, R.: Combinatorial theory of the free product with amalgamation and operatorvalued free probability theory. Mem. Amer. Math. Soc. 132(627), x+88 (1998)
Sesay, S.A.O., Subba Rao, T.: Yule–Walker type difference equations for higher-order moments and cumulants for bilinear time series models. J. Time Series Anal. 9(4), 385–401 (1988)
Śniady, P.: Top degree of Jack characters and enumeration of maps. arXiv:1506.06361v2 (2015)
Śniady, P.: Structure coefficients for Jack characters: approximate factorization property. arXiv:1603.04268 (2016)
Voiculescu, D.: Operations on certain noncommutative operatorvalued random variables. Astérisque 232, 243–275 (1995). Recent advances in operator algebras (Orléans, 1992)
Research supported by Narodowe Centrum Nauki, Grant number 2014/15/B/ST1/00064.
Cite this article
Burchardt, A. Algebras with two multiplications and their cumulants. J. Algebr. Comb. 52, 157–186 (2020). https://doi.org/10.1007/s10801-019-00898-3
Keywords
 Algebras with two multiplications
 Cumulants
 Leonov–Shiryaev’s formula
 Jack characters
Mathematics Subject Classification
 Primary 05E40
 Secondary 05C30
 05E05