1 Introduction

Multiple zeta values are defined for \(r\geqslant 1\) and \(k_1\geqslant 2, k_2,\dots ,k_r \geqslant 1\) by

$$\begin{aligned} \zeta (k_1,\ldots ,k_r) =\sum _{m_1>\cdots>m_r>0} \frac{1}{m_1^{k_1}\cdots m_r^{k_r}}. \end{aligned}$$
(1.1)

We call the number \(k_1+\dots +k_r\) its weight and r its depth. By \({\mathcal {Z}}\), we denote the \(\mathbb {Q}\)-algebra of all multiple zeta values. There are two ways of expressing the product of multiple zeta values, and both can be written in terms of quasi-shuffle products ( [22]). The relations obtained from the two product expressions, together with some regularization process, are referred to as the extended double shuffle relations of multiple zeta values ( [24]). Conjecturally these give all algebraic relations among multiple zeta values. Multiple zeta values have various different connections to modular forms. For example, in the case \(r=1\) multiple zeta values are the Riemann zeta values, which also appear as the constant term of Eisenstein series. In [20] the authors defined double Eisenstein series, which have double zeta values ((1.1) in the case \(r=2\)) as their constant terms, and which in some sense give a natural depth two version of Eisenstein series. This raised the question of whether these objects also satisfy some of the extended double shuffle relations. Partial answers were given in [20] and in arbitrary depths for so-called multiple Eisenstein series in [1, 2] and [9]. In this work, we present a new approach and lift Eisenstein series with rational coefficients in a purely combinatorial way to q-series which we call combinatorial multiple Eisenstein series. This provides a new framework for relating modular forms and multiple zeta values. We discuss the relations satisfied by these q-series and give an interpretation of them as a variant of the extended double shuffle relations.

We first recall the extended double shuffle relations for multiple zeta values before explaining how the combinatorial multiple Eisenstein series fit into the picture. Consider the alphabet \(L_z=\{z_k \mid k\geqslant 1\}\) and let \({\mathfrak {H}}^1 = \mathbb {Q}\langle L_z \rangle \) be the free algebra over \(L_z\). Define a product on \(\mathbb {Q}L_z\) by \(z_{i} \diamond z_{j} = z_{i+j}\) for all \(i,j\geqslant 1\). The corresponding quasi-shuffle product (3.1) \(*= *_\diamond \) is usually called harmonic or stuffle product. Let \({\mathfrak {H}}^0\) be the subalgebra of \({\mathfrak {H}}^1\) generated by all words not starting in \(z_1\). Due to the usual power series multiplication, the linear map defined on the generators by

$$\begin{aligned} \begin{aligned} \zeta : {\mathfrak {H}}^0&\longrightarrow {\mathcal {Z}}\\ z_{k_1}\dots z_{k_r}&\longmapsto \zeta (k_1,\ldots ,k_r) \end{aligned} \end{aligned}$$
(1.2)

gives an algebra homomorphism from \(({\mathfrak {H}}^0,*)\) to \({\mathcal {Z}}\). This homomorphism can be extended to a homomorphism \(\zeta ^*:{\mathfrak {H}}^1\rightarrow {\mathcal {Z}}\), so we obtain elements \(\zeta ^*(k_1,\dots ,k_r) \in \mathbb {R}\) for all \(k_1,\dots ,k_r \geqslant 1\) called the stuffle regularized multiple zeta values (see [24]). In the case \(k_1 \geqslant 2\) these coincide with the multiple zeta values (1.1) and they are uniquely determined by this property together with \(\zeta ^*(1) = 0\) and the fact that the \(\mathbb {Q}\)-linear map \(\zeta ^*: {\mathfrak {H}}^1 \rightarrow {\mathcal {Z}}\) defined on the generators by \(z_{k_1}\dots z_{k_r} \mapsto \zeta ^*(k_1,\dots ,k_r)\) is an algebra homomorphism from \(({\mathfrak {H}}^1,*)\) to \({\mathcal {Z}}\).
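For example, these two properties already determine the regularized values in low weight: since \(z_1 * z_2 = z_1 z_2 + z_2 z_1 + z_3\) and \(\zeta ^*(1)=0\), one obtains, using Euler's relation \(\zeta (2,1)=\zeta (3)\),

$$\begin{aligned} 0 = \zeta ^*(1)\,\zeta ^*(2) = \zeta ^*(1,2)+\zeta ^*(2,1)+\zeta ^*(3) \quad \Longrightarrow \quad \zeta ^*(1,2) = -\zeta (2,1)-\zeta (3) = -2\zeta (3)\,. \end{aligned}$$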

Next, consider the alphabet given by the two letters \(L_{xy}=\{x,y\}\) and write \({\mathfrak {H}}=\mathbb {Q}\langle L_{xy} \rangle \). Define the product \(a\diamond b=0\) for all \(a,b\in \mathbb {Q}L_{xy}\); then the corresponding quasi-shuffle product \(*_\diamond \) is the shuffle product, denoted by \(\shuffle \). Via the identification \(z_k = x^{k-1}y\) we can view \({\mathfrak {H}}^1\) and \({\mathfrak {H}}^0\) as subalgebras of the shuffle algebra \(({\mathfrak {H}},\shuffle )\); we have \({\mathfrak {H}}^1 = \mathbb {Q}{\textbf {1}} + {\mathfrak {H}}y\) and \({\mathfrak {H}}^0 = \mathbb {Q}{\textbf {1}} + x {\mathfrak {H}}y\). Due to the iterated integral expression of multiple zeta values, one obtains that the map (1.2) gives an algebra homomorphism from \(({\mathfrak {H}}^0,\shuffle )\) to \({\mathcal {Z}}\). There is also a unique extension of the map \(\zeta \) to an algebra homomorphism \(\zeta ^{\shuffle }:({\mathfrak {H}}^1,\shuffle )\rightarrow {\mathcal {Z}}\), given by the shuffle regularized multiple zeta values and satisfying \(\zeta ^{\shuffle }(1)=0\). These two regularizations differ and their difference can be described explicitly (see [24, Theorem 1]). For example, we have in depth two for all \(k_1,k_2\geqslant 1\)

$$\begin{aligned} \begin{aligned} \zeta ^*(k_1)\zeta ^*(k_2)&=\zeta ^*(k_1,k_2)+\zeta ^*(k_2,k_1)+\zeta ^*(k_1+k_2)\\&=\sum _{j=1}^{k_1+k_2-1}\left( \left( {\begin{array}{c}j-1\\ k_1-1\end{array}}\right) +\left( {\begin{array}{c}j-1\\ k_2-1\end{array}}\right) \right) \zeta ^*(j,k_1+k_2-j) + \delta _{k_1+k_2,2} \zeta ^*(2) \,, \end{aligned} \end{aligned}$$
(1.3)

where \(\delta \) denotes the Kronecker delta. We call these equations obtained by comparing products of shuffle- and stuffle-regularized multiple zeta values the extended double shuffle equations (see Definition 3.5 for a precise definition in terms of generating series).
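For instance, comparing the two expressions in (1.3) for \(k_1=k_2=2\) gives

$$\begin{aligned} \zeta (2)^2 = 2\zeta (2,2)+\zeta (4) = 2\zeta (2,2)+4\zeta (3,1)\,, \end{aligned}$$

and hence the well-known relation \(\zeta (4)=4\zeta (3,1)\).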

Besides the multiple zeta values there are other objects satisfying the extended double shuffle equations. In particular, it is known that there exist (non-trivial) rational solutions to the extended double shuffle equations, i.e. numbers \(\beta (k_1,\ldots ,k_r)\in \mathbb {Q}\) for \(k_1\geqslant 2,\ k_2,\ldots ,k_r\geqslant 1\) together with corresponding stuffle and shuffle regularized maps \(\beta ^*\) and \(\beta ^{\shuffle }\). In this article, we focus on the stuffle regularized objects and thus write, by abuse of notation, \(\beta =\beta ^*\). We restrict to rational solutions which in depth one are given for even \(k\geqslant 2\) by

$$\begin{aligned} \beta (k) = \frac{\zeta (k)}{(2\pi i)^k} = - \frac{B_k}{2 k!} \end{aligned}$$
(1.4)

and for odd \(k\geqslant 1\) by \(\beta (k) = 0\). These rational numbers also appear as the constant terms in the Fourier expansion of the Eisenstein series, defined for \(k\geqslant 1\) by

$$\begin{aligned} G(k) = \beta (k)+ \frac{1}{(k-1)!}\sum _{\textrm{d},m\geqslant 1} \textrm{d}^{k-1} q^{md} \in \mathbb {Q}\llbracket q \rrbracket \,. \end{aligned}$$
(1.5)

For even \(k\geqslant 4\) these are, when viewed as functions in \(\tau \in \mathbb {H}= \{ \tau \in \mathbb {C}\mid {\text {Im}}(\tau ) > 0 \}\) with \(q=e^{2\pi i\tau }\), modular forms of weight k for the full modular group. In our context, they can also be seen as interpolations between \(\zeta (k)\) and \(\beta (k)\), i.e. the depth one objects of the two solutions of the extended double shuffle equations mentioned above. More precisely, we have \(\lim _{q\rightarrow 0} G(k) = \beta (k)\) and \(\lim _{q\rightarrow 1}(1-q)^k G(k) = \zeta (k)\), where the latter is a consequence of (2.4).

In this paper, we generalize this idea to arbitrary depths and lift a rational solution \(\beta \) satisfying (1.4) to objects \(G(k_1,\dots ,k_r)\in \mathbb {Q}\llbracket q \rrbracket \), which we call combinatorial multiple Eisenstein series. In the case \(r=1\) they are exactly given by the Eisenstein series (1.5). The combinatorial multiple Eisenstein series interpolate between \(\zeta ^*\) and \(\beta \) in arbitrary depths, i.e. we have for \(k_1,\dots ,k_r \geqslant 1\) (Proposition 6.17)

$$\begin{aligned} \begin{aligned} \lim _{q\rightarrow 0} G(k_1,\dots ,k_r)&= \beta (k_1,\dots ,k_r),\\ {\lim _{q\rightarrow 1}}^{*}(1-q)^{k_1+\dots +k_r} G(k_1,\dots ,k_r)&= \zeta ^*(k_1,\dots ,k_r)\,, \end{aligned} \end{aligned}$$
(1.6)

where the \(\lim ^*\) indicates that we need to do some regularization in the case \(k_1 = 1\) (see (6.7)). The construction of the combinatorial multiple Eisenstein series depends on the choice of the rational solution to the extended double shuffle equations \(\beta \), though most of their properties are independent of this choice as we have already seen in (1.6). Moreover, the combinatorial multiple Eisenstein series can also be viewed as a map \(G: {\mathfrak {H}}^1 \rightarrow \mathbb {Q}\llbracket q \rrbracket \) satisfying for \(w,v \in {\mathfrak {H}}^1\) an analogue of the extended double shuffle equations. For example as an analogue of (1.3) we have for \(k_1,k_2\geqslant 1\) (Proposition 6.7)

$$\begin{aligned} G(k_1)G(k_2)&=G(k_1,k_2)+G(k_2,k_1)+G(k_1+k_2)\nonumber \\&=\sum _{j=1}^{k_1+k_2-1}\left( \left( {\begin{array}{c}j-1\\ k_1-1\end{array}}\right) +\left( {\begin{array}{c}j-1\\ k_2-1\end{array}}\right) \right) G(j,k_1+k_2-j) +R_G(k_1,k_2) \,, \end{aligned}$$
(1.7)

where the q-series \(R_G(k_1,k_2)\) is given by

$$\begin{aligned} R_G(k_1,k_2) = {\left\{ \begin{array}{ll} \frac{(k_1+k_2-3)!}{(k_1-1)!(k_2-1)!}q \frac{d}{dq} G(k_1+k_2-2) &{} k_1+k_2\geqslant 3\\ G(2) &{} k_1+k_2=2 \end{array}\right. }\,. \end{aligned}$$
(1.8)

Observe that we have \(\lim _{q\rightarrow 0} R_G(k_1,k_2)=\delta _{k_1+k_2,2}\beta (2)\) and \(\lim _{q\rightarrow 1} (1-q)^{k_1+k_2} R_G(k_1,k_2)=\delta _{k_1+k_2,2}\zeta (2)\). In particular, the formula (1.7) gives an explicit expression for \(q \frac{d}{dq} G(k)\) in terms of combinatorial double Eisenstein series by choosing \(k_2=2\). This actually works in arbitrary depths: for any \(w\in {\mathfrak {H}}^1\) an analogous formula holds (Corollary 6.31).

This is a nice example of the fact that derivatives are an obstacle to the combinatorial multiple Eisenstein series satisfying the extended double shuffle relations: the corresponding expression does not vanish in general, but it is exactly given by a derivative, so in particular its constant term (and also its limit for \(q\rightarrow 1\)) vanishes.
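Concretely, in depth one, specializing (1.7) and (1.8) to \(k_2=2\) (so that \(R_G(k_1,2)=q\frac{d}{dq}G(k_1)\)) yields for \(k\geqslant 1\)

$$\begin{aligned} q\frac{d}{dq} G(k) = G(k,2)+G(2,k)+G(k+2) - \sum _{j=1}^{k+1}\left( \left( {\begin{array}{c}j-1\\ k-1\end{array}}\right) +\left( {\begin{array}{c}j-1\\ 1\end{array}}\right) \right) G(j,k+2-j)\,. \end{aligned}$$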

In order to deal with derivatives and to include them in the algebraic setup, we consider objects depending on double indices. More precisely, we introduce combinatorial bi-multiple Eisenstein series \(G\genfrac(){}{}{k_1,\dots ,k_r}{\textrm{d}_1,\dots ,\textrm{d}_r} \in \mathbb {Q}\llbracket q \rrbracket \) defined for \(k_1,\dots ,k_r\geqslant 1\) and \(d_1,\dots ,d_r\geqslant 0\). The sum \(k_1+\dots +k_r+d_1+\dots +d_r\) is called its weight. The combinatorial multiple Eisenstein series are recovered in the special case \(d_1=\dots =d_r=0\):

$$\begin{aligned} G(k_1,\dots ,k_r) = G\genfrac(){}{}{k_1,\dots ,k_r}{0,\dots ,0}. \end{aligned}$$

In general one can think of the combinatorial bi-multiple Eisenstein series as some kind of ‘partial derivatives’ of the combinatorial multiple Eisenstein series, since we have (Proposition 6.29)

$$\begin{aligned} q \frac{\textrm{d}}{\textrm{d}q} G\genfrac(){}{}{k_1,\dots ,k_r}{\textrm{d}_1,\dots ,\textrm{d}_r} = \sum _{i=1}^r k_i G\genfrac(){}{}{k_1,\dots ,k_i+1,\dots ,k_r}{\textrm{d}_1,\dots ,\textrm{d}_i+1,\dots ,\textrm{d}_r}\,. \end{aligned}$$

With this the extra term in (1.8) can be written as \( R_G(k_1,k_2) = \left( {\begin{array}{c}k_1+k_2-2\\ k_1-1\end{array}}\right) G\genfrac(){}{}{k_1+k_2-1}{1}\). For example, as an analogue of the double shuffle equation (which also holds for the rational solution \(\beta \))

$$\begin{aligned} \zeta (2,1)\zeta (3)&= \zeta (3,2,1)+\zeta (2,3,1)+\zeta (2,1,3)+\zeta (5,1)+\zeta (2,4)\\&= 5\zeta (3,2,1)+2\zeta (2,3,1)+\zeta (2,1,3)+2\zeta (3,1,2)+9\zeta (4,1,1)+\zeta (2,2,2) \end{aligned}$$

the combinatorial multiple Eisenstein series satisfy

$$\begin{aligned} G(2,1) G(3)&= G(3,2,1)+G(2,3,1)+G(2,1,3)+G(5,1)+G(2,4)\nonumber \\&= 5G(3,2,1)+2G(2,3,1)+G(2,1,3)+2G(3,1,2)+9G(4,1,1)\nonumber \\&\quad +G(2,2,2) + 3 G \genfrac(){}{}{4,1}{1,0}+ 3 G\genfrac(){}{}{3,2}{0,1} + G\genfrac(){}{}{2,3}{0,1}\,. \end{aligned}$$
(1.9)

The additional terms \(3\,G \genfrac(){}{}{4,1}{1,0}+ 3\,G\genfrac(){}{}{3,2}{0,1} + G\genfrac(){}{}{2,3}{0,1}\) in (1.9) vanish for both limits \(q\rightarrow 0\) and \(q\rightarrow 1\) (after multiplying with \((1-q)^6\)).

In some special cases, the combinatorial bi-multiple Eisenstein series are modular (Proposition 6.13) and the same holds for some linear combinations (Proposition 6.19). But in general, the combinatorial bi-multiple Eisenstein series do not satisfy the modularity condition and it is not clear which linear combinations of them do.

In contrast to the case of multiple zeta values, we do not describe the relations of the combinatorial bi-multiple Eisenstein series in terms of two different product expressions. Instead, we consider a bi-version of the stuffle product and a second family of relations given by the invariance under a certain involution. This involution has a natural origin coming from the theory of partitions ( [1, 2, 4, 12]) and it can be described nicely in terms of generating series. Therefore, we work entirely with generating series for the construction of the combinatorial bi-multiple Eisenstein series. For \(r\geqslant 1\) these are denoted by

$$\begin{aligned} \mathfrak {G}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} = \sum _{\begin{array}{c} k_1,\dots ,k_r \geqslant 1 \\ \textrm{d}_1,\dots ,\textrm{d}_r\geqslant 0 \end{array}}G\genfrac(){}{}{k_1,\dots ,k_r}{\textrm{d}_1,\dots ,\textrm{d}_r} X_1^{k_1-1}\cdots X_r^{k_r-1} \frac{Y_1^{\textrm{d}_1}}{\textrm{d}_1!} \cdots \frac{Y_r^{\textrm{d}_r}}{\textrm{d}_r!} \,, \end{aligned}$$

and in general a collection of such generating series for all r is called a bimould (Definition 3.7). To describe the bi-analogue of the stuffle product we consider the alphabet \(L_z^\mathrm{{bi}}=\{z^k_d\mid k\geqslant 1,d\geqslant 0\}\) and define the quasi-shuffle product \(*=*_\diamond \) on \(\mathbb {Q}\langle L_z^\mathrm{{bi}}\rangle \) by \(z^{k_1}_{d_1}\diamond z^{k_2}_{d_2}=z^{k_1+k_2}_{d_1+d_2}\). Then we call the bimould \(\mathfrak {G}\) symmetril (Definition 3.8) if the linear map, defined on the generators by

$$\begin{aligned} z^{k_1}_{\textrm{d}_1}\dots z^{k_r}_{\textrm{d}_r}\mapsto G\genfrac(){}{}{k_1,\ldots ,k_r}{\textrm{d}_1,\ldots ,\textrm{d}_r}, \end{aligned}$$

gives an algebra homomorphism from \((\mathbb {Q}\langle L_z^\mathrm{{bi}}\rangle ,*)\) to \(\mathbb {Q}\llbracket q\rrbracket \). The bimould \(\mathfrak {G}\) is called swap invariant (Definition 3.10), if it satisfies for all \(r\geqslant 1\) the functional equation

$$\begin{aligned} \mathfrak {G}\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} = \mathfrak {G}\genfrac(){}{}{Y_1+\dots +Y_r,Y_1+\dots +Y_{r-1},\dots ,Y_1+Y_2,Y_1}{X_r, X_{r-1}-X_r,\dots ,X_2-X_3,X_1-X_2}\,, \end{aligned}$$

which implies linear relations among combinatorial bi-multiple Eisenstein series in homogeneous weight. The main result of this work is the following.

Theorem

(Theorem 6.5, Proposition 6.17) Let \(\beta \) be a \(\mathbb {Q}\)-valued solution to the extended double shuffle equations, which is in depth one given by (4.1). Then there exists a \(\mathbb {Q}\llbracket q \rrbracket \)-valued bimould \(\mathfrak {G}\), which is symmetril, swap invariant and whose coefficients in depth one are the Eisenstein series (1.5), i.e.

$$\begin{aligned} \mathfrak {G}\genfrac(){}{}{X}{0}=\sum _{k\geqslant 1} G(k)X^{k-1}. \end{aligned}$$

The coefficients of the bimould \(\mathfrak {G}\) interpolate between (stuffle regularized) multiple zeta values and the given \(\mathbb {Q}\)-valued solution \(\beta \), i.e., they satisfy (1.6).

From this theorem we get that the combinatorial multiple Eisenstein series \(G(k_1,\ldots ,k_r)\) satisfy the stuffle product formula. By combining symmetrility and swap invariance of \(\mathfrak {G}\), we get that the combinatorial (bi-)multiple Eisenstein series also satisfy an analogue of the shuffle product formula. This is made explicit in depth two in Proposition 6.7.

The construction of the bimould \(\mathfrak {G}\) is inspired by the calculation of the Fourier expansion of the multiple Eisenstein series \(\mathbb {G}\) introduced by Gangl-Kaneko-Zagier ( [2, 20]). We recall this calculation (Theorem 2.1) in Sect. 2. The following diagram provides a rough overview of how the building blocks of our constructions (right-hand side) are related to the classical building blocks of multiple Eisenstein series (left-hand side).

[Figure a: overview diagram relating the classical building blocks of multiple Eisenstein series to the building blocks of our construction]

In particular, the bimould \(\mathfrak {G}\) is constructed out of four bimoulds \(\mathfrak {b}\), \(\mathfrak {g}^{*}\), \(\mathfrak {L}_m\) and \(\mathfrak {g}\), whose constructions are all inspired by the corresponding objects/statements in Sect. 2. We show that the bimoulds \(\mathfrak {g}^{*}\) and \(\mathfrak {b}\) are symmetril (Proposition 6.22, Corollary 4.7), hence the same holds for the bimould \(\mathfrak {G}\) (by Proposition 3.9). On the other hand, the bimould \(\mathfrak {G}\) is a sum of swap invariant bimoulds \(\mathfrak {G}_j\) (Theorem 6.26, Proposition 6.27), thus \(\mathfrak {G}\) is also swap invariant.

By (1.6) combinatorial multiple Eisenstein series can also be interpreted as q-analogues of multiple zeta values. Our notion of weight is compatible with the weight of quasi-modular forms, and both product expressions of the combinatorial bi-multiple Eisenstein series (given in Proposition 6.7 for depth two) are homogeneous in weight. As far as the authors know, combinatorial multiple Eisenstein series provide the first model of q-analogues of multiple zeta values with this property. In particular, this might give a positive answer to a question raised by Okounkov in [26], since the space \(\textsf{qMZV}\) introduced there is exactly spanned by all \(G(k_1,\dots ,k_r)\) with \(k_1,\dots ,k_r\geqslant 2\). Moreover, we show (Proposition 6.15) that the combinatorial bi-multiple Eisenstein series span the space of q-analogues of multiple zeta values \({\mathcal {Z}}_q\) considered in [1, 2, 4] and [7].

Conjecturally, all algebraic relations among combinatorial bi-multiple Eisenstein series are consequences of combining the symmetrility and the swap invariance (Remark 6.11). Since these relations are all in homogeneous weight, we expect, in particular, that the algebra of combinatorial bi-multiple Eisenstein series is graded by weight.

In [5] the authors introduce the algebra of formal multiple Eisenstein series \(\mathcal {G}^f\), which is given by \((\mathbb {Q}\langle L_z^\mathrm{{bi}}\rangle ,*)\) modulo the relations coming from the swap invariance. In this algebra one can also define a projection to the space of formal multiple zeta values, which can be seen as a formal version of (1.6). Further, it is shown that the \(\mathfrak {sl}_2\)-action from quasi-modular forms can be extended to this algebra. By the above-mentioned conjecture, the algebra of combinatorial bi-multiple Eisenstein series \(\mathcal {G}^{\textrm{bi}}\) (Definition 6.10) should be isomorphic to the algebra of formal multiple Eisenstein series and therefore \(\mathcal {G}^{\textrm{bi}}\) should also be an \(\mathfrak {sl}_2\)-algebra.

A similar formal algebraic approach is used independently in the thesis of the second named author [14]. There, another quasi-shuffle algebra is considered together with an involution of a simpler shape than the swap operator. It is shown in [15, Theorem 7.10] that this weight-graded algebra is isomorphic to the weight-graded algebra of formal multiple Eisenstein series. The description in terms of this other quasi-shuffle algebra seems to be a good starting point to proceed as in [27], i.e. to give a generalization of the pro-unipotent affine group scheme \({\text {DM}}\) and the double shuffle Lie algebra \(\mathfrak {dm}_0\).

Finally we remark that the name of the combinatorial multiple Eisenstein series was inspired by the combinatorial double Eisenstein series \(Z_{k_1,k_2}\) introduced in [20, (17)]. These differ slightly from our \(G(k_1,k_2)\), but they can be related using [8, Proposition 2.5] and adding the constant term \(\beta (k_1,k_2)\). Combinatorial multiple Eisenstein series might also have a connection to iterated integrals of quasi-modular forms ( [25]).

2 Multiple Eisenstein series

In this section, we recall multiple Eisenstein series and the calculation of their Fourier expansion. Details can be found in [1, 2], and [9]. This gives a motivation and an explanation for our construction of combinatorial multiple Eisenstein series in Sect. 6.

For \(k_1 \geqslant 3, k_2,\dots ,k_r \geqslant 2\) and \(\tau \in \mathbb {H}\) the multiple Eisenstein series are defined by

$$\begin{aligned} \mathbb {G}_{k_1,\dots ,k_r}(\tau ) := \sum _{\begin{array}{c} \lambda _1 \succ \dots \succ \lambda _r \succ 0\\ \lambda _i \in \mathbb {Z}\tau + \mathbb {Z} \end{array}} \frac{1}{\lambda _1^{k_1} \dots \lambda _r^{k_r}} \,, \end{aligned}$$
(2.1)

where the order \(\succ \) on the lattice \(\mathbb {Z}\tau + \mathbb {Z}\) is defined by \(m_1 \tau + n_1 \succ m_2 \tau + n_2\) iff \(m_1 > m_2\) or \(m_1 = m_2 \wedge n_1 > n_2\). Since \(\mathbb {G}_{k_1,\dots ,k_r}(\tau + 1) = \mathbb {G}_{k_1,\dots ,k_r}(\tau )\) the multiple Eisenstein series possess a Fourier expansion, i.e. an expansion in \(q=e^{2\pi i \tau }\), which was calculated in [20] for the \(r=2\) case and for arbitrary depth by the first author ( [2]). In depth one we have for \(k\geqslant 3\)

$$\begin{aligned} \mathbb {G}_k(\tau ) = \sum _{\begin{array}{c} \lambda \in \mathbb {Z}\tau + \mathbb {Z}\\ \lambda \succ 0 \end{array}} \frac{1}{\lambda ^k } = \sum _{ \begin{array}{c} m> 0 \\ \vee \, ( m=0 \wedge n>0) \end{array}} \frac{1}{(m\tau +n)^k } = \zeta (k)+ \sum _{m>0} \underbrace{\sum _{n\in \mathbb {Z}}\frac{1}{(m\tau +n)^k}}_{{=: \Psi _k(m\tau ) } }\,. \end{aligned}$$

For even \(k\geqslant 4\) these are just the classical Eisenstein series, which are modular forms for the full modular group. When k is even, these differ from the Eisenstein series (1.5) defined in the introduction just by a factor of \((2\pi i)^k\). We refer to \(\Psi _k(\tau )\) as the monotangent function ( [10]), which satisfies for \(k\geqslant 2\) the Lipschitz formula

$$\begin{aligned} \Psi _k(\tau ) = \sum _{n \in \mathbb {Z}} \frac{1}{(\tau +n)^k} = \frac{(-2\pi i)^{k}}{(k-1)!} \sum _{\textrm{d}>0} \textrm{d}^{k-1} q^d \,. \end{aligned}$$
(2.2)

This gives

$$\begin{aligned} \mathbb {G}_k(\tau )&= \zeta (k) + \sum _{m>0} \Psi _k(m\tau ) = \zeta (k) + \frac{(-2\pi i)^{k}}{(k-1)!}\sum _{\begin{array}{c} m>0\\ d >0 \end{array}} \textrm{d}^{k-1} q^{m d} =: \zeta (k) + (-2\pi i)^k g(k) \,. \end{aligned}$$

Here the g(k) are the generating series of the divisor sums, and in higher depths multiple versions of these q-series appear, which are defined for \(k_1,\dots ,k_r \geqslant 1\) by

$$\begin{aligned} g(k_1,\dots ,k_r)= \sum _{\begin{array}{c} m_1> \dots> m_r> 0\\ n_1, \dots , n_r > 0 \end{array}} \frac{n_1^{k_1-1}}{(k_1-1)!} \dots \frac{n_r^{k_r-1}}{(k_r-1)!} q^{m_1 n_1 + \dots + m_r n_r } \in \mathbb {Q}\llbracket q \rrbracket \,. \end{aligned}$$
(2.3)

These q-series were studied in detail in [2, 6] and they can be seen as q-analogues of multiple zeta values since one can show that for \(k_1 \geqslant 2\)

$$\begin{aligned} \lim \limits _{q\rightarrow 1}(1-q)^{k_1+\dots +k_r }g(k_1,\ldots ,k_r) = \zeta (k_1,\dots ,k_r)\,. \end{aligned}$$
(2.4)
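For concreteness, the following is a minimal brute-force sketch in Python (the helper name g_coefficients is ours, not from the references) which computes the coefficients of (2.3) up to a fixed power of q; in depth one, g(2) is the generating series of the divisor sums \(\sigma _1(n)\).

```python
from fractions import Fraction
from math import factorial

def g_coefficients(ks, N):
    """Coefficients of the q-series g(k_1,...,k_r) from (2.3) up to q^N,
    by brute force over m_1 > ... > m_r > 0 and n_1,...,n_r > 0."""
    r = len(ks)
    coeffs = [Fraction(0)] * (N + 1)

    def rec(i, m_bound, q_exp, factor):
        # i: current position, m_bound: strict upper bound for m_i,
        # q_exp: accumulated exponent of q, factor: accumulated coefficient
        if i == r:
            coeffs[q_exp] += factor
            return
        k = ks[i]
        for m in range(1, m_bound):
            if q_exp + m > N:          # even n_i = 1 would exceed q^N
                break
            for n in range(1, (N - q_exp) // m + 1):
                rec(i + 1, m, q_exp + m * n,
                    factor * Fraction(n ** (k - 1), factorial(k - 1)))

    rec(0, N + 2, 0, Fraction(1))
    return coeffs

# depth one: g(2) = sum_{n>0} sigma_1(n) q^n, i.e. coefficients 1, 3, 4, 7, 6, 12, ...
print([str(c) for c in g_coefficients([2], 6)[1:]])
# depth two: the first coefficients of g(3, 1)
print([str(c) for c in g_coefficients([3, 1], 8)])
```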

In the Fourier expansion of (multiple) Eisenstein series, the q-series g always appear together with a power of \(-2\pi i\) and therefore we set for \(k_1,\dots ,k_r \geqslant 1\)

$$\begin{aligned} \hat{g}(k_1,\dots ,k_r) := (-2\pi i)^{k_1+\dots + k_r} g(k_1,\dots ,k_r) \in \mathbb {Q}[\pi i]\llbracket q \rrbracket \,. \end{aligned}$$

A multiple version of \(\mathbb {G}_k(\tau ) = \zeta (k) + \hat{g}(k)\) is given by the following.

Theorem 2.1

(\(r=1,2\) [20], \(r\geqslant 1\) [2]) For \(k_1 \geqslant 3, k_2,\dots ,k_r \geqslant 2\) there exist explicit \(\alpha ^{k_1,\dots ,k_r}_{l_1,\dots ,l_r,j} \in \mathbb {Z}\), such that for \(q=e^{2\pi i \tau }\) we have

$$\begin{aligned}&\mathbb {G}_{k_1,\dots ,k_r}(\tau ) = \zeta (k_1,\dots ,k_r) +\sum _{\begin{array}{c} 0< j < r\\ l_1+\dots +l_r = k_1+\dots +k_r\\ l_1\geqslant 2,l_2,\dots ,l_r\geqslant 1 \end{array}} \alpha ^{k_1,\dots ,k_r}_{l_1,\dots ,l_r,j}\, \zeta (l_1,\dots ,l_j) \hat{g}(l_{j+1},\dots ,l_r) \\&\quad + \hat{g}(k_1,\dots ,k_r)\,. \end{aligned}$$

In particular, \(\mathbb {G}_{k_1,\dots ,k_r}(\tau ) = \zeta (k_1,\dots ,k_r)+ \sum _{n> 0} a_{k_1,\dots ,k_r}(n) q^n\) for some \(a_{k_1,\dots ,k_r}(n) \in {\mathcal {Z}}[\pi i]\).

We sketch the proof of Theorem 2.1 in the following and then give an explicit example at the end of the section. First, observe that for \(k_1,\dots ,k_r \geqslant 2\), by the Lipschitz formula (2.2), the q-series \(\hat{g}\) can be written as an ordered sum over monotangent functions

$$\begin{aligned} \hat{g}(k_1,\dots ,k_r) = \sum _{m_1> \dots> m_r > 0} \Psi _{k_1}(m_1 \tau ) \cdots \Psi _{k_r}(m_r \tau ) \,. \end{aligned}$$
(2.5)

In general the multiple Eisenstein series can be written as ordered sums over multitangent functions ( [10]), which are for \(k_1,\dots ,k_r \geqslant 2\) and \(\tau \in \mathbb {H}\) defined by

$$\begin{aligned} \Psi _{k_1,\ldots ,k_r}(\tau ) := \sum _{\begin{array}{c} n_1>\cdots >n_r \\ n_i \in \mathbb {Z} \end{array}} \frac{1}{(\tau +n_1)^{k_1}\cdots (\tau +n_r)^{k_r}}. \end{aligned}$$
(2.6)

These functions were originally introduced by Ecalle and then studied in detail by Bouillot in [10]. To write \(\mathbb {G}_{k_1,\dots ,k_r}(\tau )\) in terms of these functions, one splits the summation in the definition (2.1) into \(2^r\) parts, corresponding to the different cases where either \(m_i = m_{i+1}\) or \(m_i > m_{i+1}\) for \(\lambda _i = m_i \tau + n_i\) and \(i=1,\dots ,{r}\) (\(\lambda _{r+1}=0\)). Then one can check that the multiple Eisenstein series can be written as

$$\begin{aligned} \mathbb {G}_{k_1,\dots ,k_r}(\tau ) = \sum _{j=0}^r \hat{g}^*(k_1,\dots ,k_j) \zeta (k_{j+1},\dots ,k_r)\,, \end{aligned}$$
(2.7)

where the q-series \(\hat{g}^*\) are given as ordered sums over multitangent functions by

$$\begin{aligned} \hat{g}^*(k_1,\dots ,k_r) := \sum _{\begin{array}{c} 1 \leqslant j \leqslant r\\ 0 = r_0< r_1< \dots< r_{j-1} < r_j = r\\ m_1> \dots> m_j > 0 \end{array}} \prod _{i=1}^j \Psi _{k_{r_{i-1}+1},\ldots ,k_{r_i}}(m_i \tau )\,. \end{aligned}$$
(2.8)

Further, one can show ( [1, Construction 6.7]) that the q-series \(\hat{g}^*\) satisfy the harmonic product formula, e.g. \(\hat{g}^*(k_1) \hat{g}^*(k_2) = \hat{g}^*(k_1,k_2) + \hat{g}^*(k_2,k_1) + \hat{g}^*(k_1+k_2)\). We will generalize this construction later in terms of generating series (Lemma 6.21) and then use an analogue of (2.7) as the definition for the combinatorial multiple Eisenstein series. To obtain the statement in Theorem 2.1 one then uses the following theorem.

Theorem 2.2

[10, Theorem 6] For \(k_1,\dots ,k_r \geqslant 2\) with \(k=k_1+\dots +k_r\) the multitangent function can be written as

$$\begin{aligned} \Psi _{k_1,\dots ,k_r}(\tau )&= \sum _{\begin{array}{c} 1\leqslant j \leqslant r\\ l_1+\dots +l_r = k \end{array}} (-1)^{l_1+\dots +l_{j-1}+k_j+k} \prod _{\begin{array}{c} 1\leqslant i \leqslant r\\ i\ne j \end{array}}\left( {\begin{array}{c}l_i-1\\ k_i-1\end{array}}\right) \\&\quad \times \zeta (l_1,\dots ,l_{j-1}) \,\Psi _{l_j}(\tau )\, \zeta (l_r,l_{r-1},\dots ,l_{j+1})\,. \end{aligned}$$

Moreover, the terms containing \(\Psi _1(\tau )\) vanish.

This theorem can be proven by using partial fraction decomposition (see Example 2.3) and then using the shuffle product to show that the coefficient of \(\Psi _1(\tau )\) vanishes.

Applying Theorem 2.2 to (2.8) and using (2.5), we see that the \(\hat{g}^*\) can be written as \({\mathcal {Z}}\)-linear combinations of the \(\hat{g}\). This proves Theorem 2.1, since one can also show that all the appearing multiple zeta values have the correct depth.

Example 2.3

We give one explicit example in depth two. In this case, (2.7) reads

$$\begin{aligned} \mathbb {G}_{k_1,k_2}(\tau ) = \zeta (k_1,k_2) + \hat{g}^*(k_1) \zeta (k_2) + \hat{g}^*(k_1,k_2)\,, \end{aligned}$$

where \(\hat{g}^*(k_1)=\sum _{m_1>0} \Psi _{k_1}(m_1\tau ) = \hat{g}(k_1)\) and

$$\begin{aligned} \hat{g}^*(k_1,k_2)&= \sum _{m_1>0} \Psi _{k_1,k_2}(m_1 \tau ) + \sum _{m_1>m_2>0} \Psi _{k_1}(m_1 \tau )\Psi _{k_2}(m_2 \tau )\\&= \sum _{m_1>0} \Psi _{k_1,k_2}(m_1 \tau ) + \hat{g}(k_1,k_2)\,. \end{aligned}$$

Considering the special case \((k_1,k_2)=(3,2)\) one sees by partial fraction decomposition

$$\begin{aligned} \Psi _{3,2}(\tau )&= \sum _{n_1> n_2} \frac{1}{(\tau +n_1)^3 (\tau +n_2)^2} \\&= \sum _{n_1> n_2} \left( \frac{1}{(n_1-n_2)^2 (\tau +n_1)^3} +\frac{2}{(n_1-n_2)^3 (\tau +n_1)^2} + \frac{3}{(n_1-n_2)^4 (\tau +n_1)} \right) \\&\quad +\sum _{n_1 > n_2} \left( \frac{1}{(n_1-n_2)^3 (\tau +n_2)^2} - \frac{3}{(n_1-n_2)^4 (\tau +n_2)} \right) = 3 \zeta (3) \Psi _2(\tau ) + \zeta (2) \Psi _3(\tau ) \,, \end{aligned}$$

and therefore \(\hat{g}^*(3,2) = 3 \zeta (3) \hat{g}(2) + \zeta (2) \hat{g}(3) + \hat{g}(3,2)\). In total we get

$$\begin{aligned} \mathbb {G}_{3,2}(\tau ) =\zeta (3,2) + 3 \zeta (3) \hat{g}(2) + 2 \zeta (2) \hat{g}(3) + \hat{g}(3,2) \,. \end{aligned}$$

3 Moulds, bimoulds and quasi-shuffle products

First, we recall some basic facts on quasi-shuffle products ( [11, 22, 23]). Let L be a countable set whose elements we refer to as letters. A monic monomial in the non-commutative polynomial ring \(\mathbb {Q}\langle L \rangle \) is called a word and we denote the empty word by \({\textbf {1}}\). Suppose we have a commutative and associative product \(\diamond \) on the vector space \(\mathbb {Q}L\). Then the quasi-shuffle product \(*_\diamond \) on \(\mathbb {Q}\langle L \rangle \) is defined as the \(\mathbb {Q}\)-bilinear product, which satisfies \({\textbf {1}} *_\diamond w = w *_\diamond {\textbf {1}} = w\) for any word \(w\in \mathbb {Q}\langle L \rangle \) and

$$\begin{aligned} a w *_\diamond b v = a (w *_\diamond b v) + b (a w *_\diamond v) + (a \diamond b) (w *_\diamond v) \end{aligned}$$
(3.1)

for any letters \(a,b \in L\) and words \(w, v \in \mathbb {Q}\langle L \rangle \). This gives a commutative \(\mathbb {Q}\)-algebra \((\mathbb {Q}\langle L \rangle , *_\diamond )\), which is called quasi-shuffle algebra. Moreover, one can equip this algebra with the structure of a Hopf algebra [22, Section 3], where the coproduct is given for \(w \in \mathbb {Q}\langle L \rangle \) by the deconcatenation coproduct

$$\begin{aligned} \Delta (w) = \sum _{uv = w} u \otimes v\,. \end{aligned}$$
(3.2)
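For instance, for letters \(a,b,c \in L\) the recursion (3.1) gives

$$\begin{aligned} ab *_\diamond c = abc + acb + cab + a(b\diamond c) + (a\diamond c)b\,, \end{aligned}$$

and the deconcatenation coproduct (3.2) of the word \(ab\) is \(\Delta (ab) = {\textbf {1}} \otimes ab + a \otimes b + ab \otimes {\textbf {1}}\).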

A well-known example is the shuffle Hopf algebra. Define the product \(\diamond \) on \(\mathbb {Q}L\) by \(a\diamond b=0\) for all \(a,b \in L\). Then the corresponding quasi-shuffle product \(*_\diamond \) on \(\mathbb {Q}\langle L \rangle \) is the shuffle product, usually denoted by \(\shuffle \). The antipode in the shuffle Hopf algebra is given by

$$\begin{aligned} S(a_1\dots a_r)=(-1)^r a_r\dots a_1, \qquad a_1,\ldots ,a_r\in L, \end{aligned}$$

so the defining property of S yields the following relations in \(\mathbb {Q}\langle L \rangle \).

Lemma 3.1

For any non-empty word \(w=a_1\dots a_r\) in \(\mathbb {Q}\langle L \rangle \) we have

$$\begin{aligned} \sum _{i=0}^{r} (-1)^{i}\, \left( a_i a_{i-1}\cdots a_1\right) \shuffle \left( a_{i+1}\cdots a_r\right) = 0\,. \end{aligned}$$
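For example, for \(r=2\) this gives \(a_1a_2 - a_1 \shuffle a_2 + a_2a_1 = 0\), i.e. the shuffle product of two letters is \(a_1 \shuffle a_2 = a_1a_2 + a_2a_1\).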

To work with quasi-shuffle products it is convenient to consider generating series. For this, we recall the notion of moulds and bimoulds, which were introduced by Ecalle. We refer to the article [11] for a good overview of mould theory and a thorough list of references for the original works of Ecalle.

Definition 3.2

Let \(\mathcal {A}\) be a unital \(\mathbb {Q}\)-algebra. A family \(Z=(Z^{(r)})_{r\geqslant 0}\) with \(Z^{(0)}\in \mathcal {A}\) and \(Z^{(r)} \in \mathcal {A}\llbracket X_1,\dots ,X_r\rrbracket \) for \(r\geqslant 1\) is called a mould with values in \(\mathcal {A}\).

Given a mould \(Z=(Z^{(r)})_{r\geqslant 0}\) we call the \(Z^{(r)}\) the depth r part of Z. All moulds considered in this article satisfy \(Z^{(0)}=1\), so we usually just give the depth \(r\geqslant 1\) parts when defining moulds. Since the depth is clear from the number of variables we just write \(Z(X_1,\dots ,X_r)\) instead of \(Z^{(r)}(X_1,\dots ,X_r)\) in the following. Let \(Z=(Z^{(r)})_{r \geqslant 0}\) be an \(\mathcal {A}\)-valued mould, then we define for \(r \geqslant 1\) and \(k_1,\dots ,k_r\geqslant 1\) the elements \(z(k_1,\dots ,k_r) \in \mathcal {A}\) as the coefficients of its depth r part

$$\begin{aligned} Z(X_1,\dots ,X_r)=:\sum _{k_1,\dots ,k_r\geqslant 1} z(k_1,\dots ,k_r) X_1^{k_1-1} \dots X_r^{k_r-1} \,. \end{aligned}$$
(3.3)

Consider the set of letters \(L_z=\{z_k \mid k\geqslant 1\}\). Then for any commutative and associative product \(\diamond \) on \(\mathbb {Q}L_z\) we obtain a quasi-shuffle algebra \((\mathbb {Q}\langle L_z \rangle ,*_\diamond )\).

Definition 3.3

Let \(\mathcal {A}\) be a unital \(\mathbb {Q}\)-algebra, Z an \(\mathcal {A}\)-valued mould with coefficients z as defined in (3.3), and \(\diamond \) a commutative and associative product on \(\mathbb {Q}L_z\).

  1. (i)

    The mould Z is called \(\diamond \)-symmetril if the coefficient map \(\varphi _Z:\mathbb {Q}\langle L_z \rangle \rightarrow \mathcal {A}\) given on the generators by \(\varphi _Z({\textbf {1}})=1\) and

    $$\begin{aligned} z_{k_1}\dots z_{k_r}\mapsto z(k_1,\ldots ,k_r) \end{aligned}$$

    is an algebra homomorphism from \((\mathbb {Q}\langle L_z \rangle ,*_\diamond )\) to \(\mathcal {A}\).

  2. (ii)

    If \(\diamond \) is given by \(z_{k_1} \diamond z_{k_2} = z_{k_1+k_2}\), then we call a \(\diamond \)-symmetril mould symmetril.

  3. (iii)

    If the product \(\diamond \) is given by \(z_{k_1} \diamond z_{k_2} = 0\), then we call a \(\diamond \)-symmetril mould symmetral.

Let \(Z_1\) and \(Z_2\) be moulds with values in \(\mathcal {A}\). The product \(Z_1 \times Z_2\) is the mould given by

$$\begin{aligned} (Z_1 \times Z_2)(X_1,\dots ,X_r) = \sum _{j=0}^r Z_1(X_1,\dots ,X_j)Z_2(X_{j+1},\dots ,X_r)\,. \end{aligned}$$

Since we usually have \(Z^{(0)}=1\), the first term (\(j=0\)) in the above sum is simply given by \(Z_2(X_1,\ldots ,X_r)\) and the last term (\(j=r\)) is given by \(Z_1(X_1,\ldots ,X_r)\). Equipped with this product the space of all (\(\mathcal {A}\)-valued) moulds becomes a non-commutative \(\mathbb {Q}\)-algebra.

Definition 3.4

  1. (i)

    For a mould Z we define the mould \(Z^\sharp \) by

    $$\begin{aligned} Z^\sharp (X_1,\dots ,X_r)&= Z(X_1+\dots +X_r,\dots ,X_1+X_2,X_1). \end{aligned}$$
  2. (ii)

    Let \(F=\sum _{r\geqslant 0} a_r T^r \in \mathcal {A}\llbracket T\rrbracket \) be a formal power series with coefficients in \(\mathcal {A}\). We can view F as a mould with values in \(\mathcal {A}\), which we also denote by F and which is in depth \(r\geqslant 0\) defined by \(F^{(r)}(X_1,\dots ,X_r)=a_r\). We call such a mould a constant mould.

    Also notice that the product of two constant moulds is exactly the constant mould coming from the product of their power series.

  3. (iii)

    For an \(\mathcal {A}\)-valued mould Z with coefficients (3.3) we define the constant mould \(\Gamma ^Z\) by

    $$\begin{aligned} \Gamma ^Z := \sum _{r=0}^\infty \gamma ^Z_r T^r := \exp \left( \sum _{n=2}^\infty \frac{(-1)^n}{n} z(n) T^n \right) \,. \end{aligned}$$
    (3.4)
  4. (iv)

    For an \(\mathcal {A}\)-valued mould Z define the mould

    $$\begin{aligned} Z_\gamma := Z^\sharp \times \Gamma ^Z, \end{aligned}$$

    i.e. explicitly we have

    $$\begin{aligned} Z_\gamma (X_1,\dots ,X_r) =&\sum _{j=0}^{r} \gamma ^Z_{j} Z(X_1+\dots +X_{r-j}, \dots , X_1+X_2,X_1) \,. \end{aligned}$$

    Moreover, we define its coefficients \(z_\gamma (k_1,\dots ,k_r)\in \mathcal {A}\) by

    $$\begin{aligned} Z_\gamma (X_1,\dots ,X_r) =:&\sum _{k_1,\dots ,k_r\geqslant 1} z_\gamma (k_1,\dots ,k_r) X_1^{k_1-1} \dots X_r^{k_r-1}\,. \end{aligned}$$

Conversely, the mould Z can also be written in terms of \(Z_\gamma \)

$$\begin{aligned} Z(X_1,\dots ,X_r) = \sum _{j=0}^r \tilde{\gamma }^Z_j Z_\gamma (X_r, X_{r-1}-X_r,\dots ,X_{j+1}-X_{j+2})\,, \end{aligned}$$
(3.5)

where (by using [23, (32)] for the last equation) we have

$$\begin{aligned} \sum _{k=0}^\infty \tilde{\gamma }^Z_k T^k := \sum _{k=0}^\infty z(\underbrace{1,\dots ,1}_k) T^k = \exp \left( \sum _{n=2}^\infty \frac{(-1)^{n+1}}{n} z(n) T^n \right) \,. \end{aligned}$$
(3.6)
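Expanding the two exponentials, the first of these coefficients are

$$\begin{aligned} \gamma ^Z_0=1,\quad \gamma ^Z_1=0,\quad \gamma ^Z_2=\frac{z(2)}{2},\quad \gamma ^Z_3=-\frac{z(3)}{3}, \qquad \tilde{\gamma }^Z_0=1,\quad \tilde{\gamma }^Z_1=0,\quad \tilde{\gamma }^Z_2=-\frac{z(2)}{2},\quad \tilde{\gamma }^Z_3=\frac{z(3)}{3}\,; \end{aligned}$$

in particular, (3.6) gives \(z(1,1)=-\frac{1}{2}z(2)\).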

Definition 3.5

Let \(\mathcal {A}\) be a unital \(\mathbb {Q}\)-algebra and Z an \(\mathcal {A}\)-valued mould. We say that the mould Z satisfies the extended double shuffle equations if the mould Z is symmetril and the mould \(Z_\gamma \) is symmetral.

Example 3.6

For a mould Z, the extended double shuffle equations in depth two are

$$\begin{aligned} \begin{aligned} Z(X_1) Z(X_2)&= Z(X_1,X_2) + Z(X_2,X_1) + \frac{Z(X_1)- Z(X_2)}{X_1- X_2}\,,\\ Z_\gamma (X_1) Z_\gamma (X_2)&= Z_\gamma (X_1,X_2)+Z_\gamma (X_2,X_1)\\&=Z(X_1+X_2,X_1) + Z(X_1+X_2,X_2) + \gamma _2^Z\,. \end{aligned} \end{aligned}$$
(3.7)

The motivating example for these equations is the mould of (stuffle regularized) multiple zeta values \(\mathfrak {z}\), whose depth r part is defined by

$$\begin{aligned} \mathfrak {z}(X_1,\dots ,X_r) = \sum _{k_1,\dots ,k_r\geqslant 1} \zeta ^*(k_1,\dots ,k_r) X_1^{k_1-1} \dots X_r^{k_r-1} \,. \end{aligned}$$

The mould \(\mathfrak {z}\) satisfies the extended double shuffle equations ( [17, 24]) and the corresponding relations obtained for multiple zeta values are exactly the extended double shuffle relations mentioned in the introduction.

Definition 3.7

Let \(\mathcal {A}\) be a unital \(\mathbb {Q}\)-algebra. A bimould with values in \(\mathcal {A}\) is a family \(B=(B^{(r)})_{r\geqslant 0}\) with \(B^{(0)} \in \mathcal {A}\) and \(B^{(r)} \in \mathcal {A}\llbracket X_1,\dots ,X_r,Y_1,\dots ,Y_r\rrbracket \) for \(r\geqslant 1\).

As in the case of moulds, we call \(B^{(r)}\) the depth r part of B, and since the depth is clear from the number of variables we just write \(B\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r}\) instead of \(B^{(r)}\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r}\) in the following. Moreover, all appearing bimoulds also satisfy \(B^{(0)}=1\), hence we often restrict to the case \(r\geqslant 1\) when defining bimoulds.

For bimoulds, we consider the alphabet

$$\begin{aligned} L_z^\mathrm{{bi}}=\{z^k_d \mid k\geqslant 1, d\geqslant 0\}, \end{aligned}$$

which can be seen as a generalization of \(L_z\). For a commutative and associative product \(\diamond \) on \(\mathbb {Q}L_z^\mathrm{{bi}}\) we obtain a quasi-shuffle algebra \((\mathbb {Q}\langle L_z^\mathrm{{bi}}\rangle ,*_\diamond )\).

Definition 3.8

Let \(\mathcal {A}\) be a unital \(\mathbb {Q}\)-algebra, B an \(\mathcal {A}\)-valued bimould, and \(\diamond \) a commutative and associative product on \(\mathbb {Q}L_z^\mathrm{{bi}}\).

  1. (i)

    If the coefficients \(b\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} \in \mathcal {A}\) of B in depth r are given by

    $$\begin{aligned} B\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r}= \sum _{\begin{array}{c} k_1,\dots ,k_r \geqslant 1 \\ d_1,\dots ,d_r\geqslant 0 \end{array}}b\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} X_1^{k_1-1}\cdots X_r^{k_r-1} \frac{Y_1^{d_1}}{d_1!} \cdots \frac{Y_r^{d_r}}{d_r!}, \end{aligned}$$

    then we define the coefficient map of B as the \(\mathbb {Q}\)-linear map \(\varphi _B:\mathbb {Q}\langle L_z^\mathrm{{bi}}\rangle \rightarrow \mathcal {A}\) given on the generators by \(\varphi _B({\textbf {1}}) = 1\) and

    $$\begin{aligned} z^{k_1}_{d_1}\dots z^{k_r}_{d_r} \mapsto b\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r}\,. \end{aligned}$$
  2. (ii)

    The bimould B is called \(\diamond \)-symmetril if the coefficient map is a \(\mathbb {Q}\)-algebra homomorphism

    $$\begin{aligned} \varphi _B: (\mathbb {Q}\langle L_z^\mathrm{{bi}}\rangle ,*_\diamond ) \rightarrow \mathcal {A}. \end{aligned}$$
  3. (iii)

    If \(\diamond \) is given by \(z^{k_1}_{d_1} \diamond z^{k_2}_{d_2} = z^{k_1+k_2}_{d_1+d_2}\), then we call a \(\diamond \)-symmetril bimould symmetril.

  4. (iv)

    If the product \(\diamond \) is given by \(z^{k_1}_{d_1} \diamond z^{k_2}_{d_2} = 0\), then we call a \(\diamond \)-symmetril bimould symmetral.

If B is symmetril, then in depth two we have as an analogue of the first equation in (3.7)

$$\begin{aligned} B\genfrac(){}{}{X_1}{Y_1}B\genfrac(){}{}{X_2}{Y_2}&=B\genfrac(){}{}{X_1,X_2}{Y_1,Y_2}+ B\genfrac(){}{}{X_2,X_1}{Y_2,Y_1} + \frac{B\genfrac(){}{}{X_1}{Y_1+Y_2}- B\genfrac(){}{}{X_2}{Y_1+Y_2}}{X_1-X_2} \,. \end{aligned}$$
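Comparing the coefficient of \(X_1^{k_1-1}X_2^{k_2-1}\frac{Y_1^{d_1}}{d_1!}\frac{Y_2^{d_2}}{d_2!}\) on both sides, this is exactly the image under \(\varphi _B\) of the quasi-shuffle product of two letters:

$$\begin{aligned} b\genfrac(){}{}{k_1}{d_1}\,b\genfrac(){}{}{k_2}{d_2} = b\genfrac(){}{}{k_1,k_2}{d_1,d_2}+b\genfrac(){}{}{k_2,k_1}{d_2,d_1}+b\genfrac(){}{}{k_1+k_2}{d_1+d_2}\,. \end{aligned}$$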

Similarly to the case of moulds, we define the product of two bimoulds B and C as the bimould \(B \times C\) given by

$$\begin{aligned} (B \times C)\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} = \sum _{j=0}^r B\genfrac(){}{}{X_1,\dots ,X_j}{Y_1,\dots ,Y_j} C\genfrac(){}{}{X_{j+1},\dots ,X_r}{Y_{j+1},\dots ,Y_r}\,. \end{aligned}$$

Proposition 3.9

If B and C are \(\diamond \)-symmetril (bi)moulds then \(B \times C\) is \(\diamond \)-symmetril.

Proof

Let \(\varphi _B, \varphi _C\) be the coefficient maps of the (bi)moulds B and C and write \(m: \mathcal {A}\otimes \mathcal {A}\rightarrow \mathcal {A}\) for the multiplication on \(\mathcal {A}\). Then we see by definition that the coefficient map of \(B \times C\) is the convolution product of \(\varphi _B\) and \(\varphi _C\), i.e.

$$\begin{aligned} \varphi _{B \times C} = m \circ (\varphi _B \otimes \varphi _C) \circ \Delta \,, \end{aligned}$$

where \(\Delta \) is the coproduct (3.2) on \((\mathbb {Q}\langle L \rangle ,*_\diamond )\) for \(L=L_z\) or \(L=L_z^\mathrm{{bi}}\). This shows that \(\varphi _{B \times C}: (\mathbb {Q}\langle L \rangle ,*_\diamond ) \rightarrow \mathcal {A}\) is an algebra homomorphism if \(\varphi _B\) and \(\varphi _C\) are and therefore \(B \times C\) is \(\diamond \)-symmetril. \(\square \)

There is another important property of bimoulds, which is closely related to the conjugation of partitions (see [1, 3, 4] and [12]).

Definition 3.10

A bimould B is called swap invariant if for all \(r\geqslant 1\)

$$\begin{aligned} B\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} = B\genfrac(){}{}{Y_1+\dots +Y_r,Y_1+\dots +Y_{r-1},\dots ,Y_1+Y_2,Y_1}{X_r, X_{r-1}-X_r,\dots ,X_2-X_3,X_1-X_2}\,. \end{aligned}$$
(3.8)

An explicit formula for the coefficients on the right-hand side of (3.8) can be found in [5, Remark 3.14], where the swap is denoted by the involution \(\iota \) and the coefficients b are denoted by \(P_\textrm{s}\).
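In depth one, for instance, (3.8) reads \(B\genfrac(){}{}{X_1}{Y_1} = B\genfrac(){}{}{Y_1}{X_1}\), which in terms of coefficients gives for all \(k\geqslant 1\) and \(d\geqslant 0\)

$$\begin{aligned} \frac{1}{d!}\, b\genfrac(){}{}{k}{d} = \frac{1}{(k-1)!}\, b\genfrac(){}{}{d+1}{k-1}\,, \end{aligned}$$

so swap invariance already forces linear relations in homogeneous weight \(k+d\), e.g. \(b\genfrac(){}{}{3}{1} = \frac{1}{2}\,b\genfrac(){}{}{2}{2}\).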

4 From moulds to bimoulds and the bimould \(\mathfrak {b}\)

Let \(\mathfrak {b}\) be a \(\mathbb {Q}\)-valued mould, which satisfies the extended double shuffle equations and is in depth one given by

$$\begin{aligned} \begin{aligned} \mathfrak {b}(X) =-\sum _{k\geqslant 2} \frac{B_k}{2k!}X^{k-1} = \sum _{m\geqslant 1} \frac{\zeta (2m)}{(2\pi i)^{2m}} X^{2m-1} = \frac{1}{2} \left( \frac{1}{X} - \frac{1}{e^X-1}-\frac{1}{2}\right) \,. \end{aligned} \end{aligned}$$
(4.1)

In particular, the coefficients \(\beta \) of \(\mathfrak {b}\) (as defined in (3.3)) are a \(\mathbb {Q}\)-valued solution to the extended double shuffle equations.

Remark 4.1

Such a mould \(\mathfrak {b}\) satisfying the extended double shuffle equations exists by the work of Racinet [27] or by combining the work of Drinfeld [16] and Furusho [18]. We give a short explanation of how to obtain such an element. In [27, section IV], the space \({\text {DM}}_\lambda (\mathbb {Q})\subset \mathbb {Q}\langle \langle L_x\rangle \rangle \) is introduced, where \(L_x=\{x_0,x_1\}\). It is then shown that the space \({\text {DM}}_\lambda (\mathbb {Q})\) is non-empty, so we can choose an element for \(\lambda =\beta (2)=-\frac{1}{24}\), i.e. \(b\in {\text {DM}}_{-\frac{1}{24}}(\mathbb {Q})\). There is a canonical projection \(\pi _z:\mathbb {Q}\langle \langle L_x\rangle \rangle \rightarrow \mathbb {Q}\langle \langle L_z\rangle \rangle \), which is given on the generators by \(x_0^{k-1}x_1\mapsto z_k\) and maps each word ending in \(x_0\) to 0. So applying the map \(z_{k_1}\dots z_{k_r}\mapsto X_1^{k_1-1}\dots X_r^{k_r-1}\) to the depth r component of the element

$$\begin{aligned} b_*=\exp \left( \sum _{n\geqslant 2} \frac{(-1)^{n-1}}{n}(\pi _z(b)\mid z_n) z_1^n\right) \pi _z(b) \end{aligned}$$

yields a family of generating series \(\mathfrak {b}(X_1,\dots ,X_r) \in \mathbb {Q}\llbracket X_1,\ldots ,X_r\rrbracket \), which defines a mould \(\mathfrak {b}\) satisfying the extended double shuffle equations.

In [16, §5], the space \({\text {M}}_\mu (\mathbb {Q})\) of associators is defined for each \(\mu \in \mathbb {Q}\). It is shown that the space \({\text {M}}_\mu (\mathbb {Q})\) is non-empty, thus we choose an element \(b\in {\text {M}}_1(\mathbb {Q})\). By [18, Cor 0.4], there is an embedding \({\text {M}}_1(\mathbb {Q})\hookrightarrow {\text {DM}}_{-\frac{1}{24}}(\mathbb {Q})\) (the definition of \({\text {DM}}_\lambda (\mathbb {Q})\) in [18] slightly differs from the original one given in [27], thus one has to be careful with the signs). So take the image of b under the embedding and proceed as before to obtain a mould \(\mathfrak {b}\) with values in \(\mathbb {Q}\) satisfying the extended double shuffle equations.

Neither of the above approaches provides an explicit construction of such a solution; this is an open problem so far. In low depths, there exist explicit rational solutions ( [13, 17, 20]), which then give possible candidates for \(\mathfrak {b}(X_1,\dots ,X_r)\) in the case \(r\leqslant 3\). See also [3, Section 3] for an explicit expression of the bimould \(\mathfrak {b}\) in depth two coming from the solution presented in [20], or see [8] for how to directly construct such a bimould in depth two out of a power series satisfying the Fay identity (e.g. a variant of \(\coth \)).

The mould \(\mathfrak {b}\) is not unique; there are different choices starting in weight 8. In the following, we fix a mould \(\mathfrak {b}\) with values in \(\mathbb {Q}\), which satisfies the extended double shuffle equations and which is given by (4.1) in depth one. In particular, everything we define in the following depends on this choice.

The following gives natural constructions to obtain bimoulds out of moulds.

Definition 4.2

  1. (i)

    For a mould Z we define the two bimoulds \(X^Z\) and \(Y^Z\) by

    $$\begin{aligned} X^Z\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} := Z(X_{1},\dots ,X_r),\qquad Y^Z \genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} := Z(Y_{1},\dots ,Y_r) \end{aligned}$$
  2. (ii)

    For a mould Z we define the bimould \(B^Z\) by

    $$\begin{aligned} \begin{aligned} B^Z= Y^{Z_\gamma } \times X^Z\,, \end{aligned} \end{aligned}$$
    (4.2)

    so explicitly we have

    $$\begin{aligned} \begin{aligned} B^Z\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r}&= \sum _{j=0}^{r}Z_\gamma (Y_1,\dots ,Y_{j}) Z(X_{j+1},\dots ,X_r) \\&= \sum _{0\leqslant i\leqslant j \leqslant r} \gamma ^Z_i Z(Y_1+\dots +Y_{j-i},\dots ,Y_1+Y_2,Y_{1}) Z(X_{j+1},\dots ,X_r) \,, \end{aligned} \end{aligned}$$
    (4.3)

    where the coefficients \(\gamma ^Z_i\) are given by (3.4).
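In low depths, (4.3) reads explicitly (using \(\gamma ^Z_0=1\) and \(\gamma ^Z_1=0\))

$$\begin{aligned} B^Z\genfrac(){}{}{X_1}{Y_1}&= Z(X_1)+Z(Y_1)\,,\\ B^Z\genfrac(){}{}{X_1,X_2}{Y_1,Y_2}&= Z(X_1,X_2)+Z(Y_1)Z(X_2)+Z(Y_1+Y_2,Y_1)+\gamma ^Z_2\,. \end{aligned}$$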

This construction will be used to obtain a bimould version of \(\mathfrak {b}\) in Definition 4.6. We show that \(B^Z\) is always a swap invariant bimould and that, if Z satisfies the extended double shuffle equations, it is additionally symmetril.

Proposition 4.3

For any mould Z the bimould \(B^Z\) is swap invariant.

Proof

The swap of \(B^Z\) is given by

$$\begin{aligned} B^Z&\genfrac(){}{}{Y_1+\dots +Y_r,Y_1+\dots +Y_{r-1},\dots ,Y_1+Y_2,Y_1}{X_r, X_{r-1}-X_r,\dots ,X_2-X_3,X_1-X_2}\\&= \sum _{0\leqslant i\leqslant j \leqslant r} \gamma _i Z(X_{r-j+i+1},X_{r-j+i+2},\dots ,X_{r}) Z(Y_1+\dots +Y_{r-j},\dots ,Y_1) \,. \end{aligned}$$

Making the change of variables \(j'=r-j+i\), we see that the above sum equals

$$\begin{aligned} \sum _{0\leqslant i\leqslant j' \leqslant r} \gamma _i Z(X_{j'+1},X_{j'+2},\dots ,X_{r}) Z(Y_1+\dots +Y_{j'-i},\dots ,Y_1) = B^Z\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} \,. \end{aligned}$$

\(\square \)

Proposition 4.4

If a mould Z satisfies the extended double shuffle equations, then the bimould \(B^Z\) is symmetril.

Proof

Let Z satisfy the extended double shuffle equations, so Z is a symmetril mould and \(Z_\gamma \) is a symmetral mould. We immediately obtain that \(X^Z\) is a symmetril bimould and \(Y^{Z_\gamma }\) is a symmetral bimould. If a bimould does not depend on the variables \(X_i\), symmetrility is equivalent to symmetrality. In particular, the bimould \(Y^{Z_\gamma }\) is also symmetril. In (4.2) we see that \(B^Z =Y^{Z_\gamma } \times X^Z\), so by Proposition 3.9 also \(B^Z\) is symmetril. \(\square \)

Remark 4.5

In [5], the relationship between the classical extended double shuffle equations and the relations of the coefficients of swap invariant and symmetril bimoulds will be explained in detail. In particular, the authors show that in the special case of moulds our Definition 3.5 coincides with the classical notion of extended double shuffle equations used in [24] and [27].

Definition 4.6

For the fixed \(\mathbb {Q}\)-valued mould \(\mathfrak {b}\) we define its corresponding bimould by \(\mathfrak {b}= B^\mathfrak {b}\). By abuse of notation we denote the mould and the bimould by \(\mathfrak {b}\), since it becomes clear from the set of variables which one is meant. Explicitly, we have

$$\begin{aligned} \mathfrak {b}\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} = \sum _{0\leqslant i\leqslant j \leqslant r} \gamma _i \mathfrak {b}(Y_1+\dots +Y_{j-i},\dots ,Y_1+Y_2,Y_{1}) \mathfrak {b}(X_{j+1},\dots ,X_r) \,, \end{aligned}$$

where \(\gamma _k=\gamma ^\mathfrak {b}_k\) with the notation in (3.4), i.e. with (4.1) we have

$$\begin{aligned} \sum _{k=0}^\infty \gamma _k X^k = \exp \left( \sum _{n=2}^\infty \frac{(-1)^n}{n} \beta (n) X^n \right) = \exp \left( \sum _{n=2}^\infty \frac{(-1)^{n+1}}{n} \frac{B_n}{2 n!} X^n \right) \,. \end{aligned}$$
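Expanding this exponential, the first of these constants are

$$\begin{aligned} \gamma _0 = 1, \quad \gamma _1 = \gamma _3 = 0, \quad \gamma _2 = -\frac{1}{48}, \quad \gamma _4 = \frac{1}{2560}\,; \end{aligned}$$

in particular, all \(\gamma _k\) with odd k vanish.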

Corollary 4.7

The bimould \(\mathfrak {b}\) is swap invariant and symmetril.

Proof

This is just a special case of Propositions 4.3 and 4.4. \(\square \)

5 The bimould \(\mathfrak {g}\)

For \(m\geqslant 1\), we define the following power series in \(\mathbb {Q}\llbracket q \rrbracket \llbracket X,Y\rrbracket \)

$$\begin{aligned} L_m\genfrac(){}{}{X}{Y} = \frac{e^{X+mY}q^m}{1-e^X q^m}\,, \end{aligned}$$
(5.1)

which will be used in the construction of combinatorial multiple Eisenstein series and which is the building block of the following bimould.
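Expanding the geometric series, (5.1) can be written as

$$\begin{aligned} L_m\genfrac(){}{}{X}{Y} = e^{mY}\sum _{n\geqslant 1} e^{nX} q^{mn} = \sum _{\begin{array}{c} k\geqslant 1\\ d\geqslant 0 \end{array}} \left( \sum _{n\geqslant 1} \frac{n^{k-1} m^{d}}{(k-1)!}\, q^{mn}\right) X^{k-1}\frac{Y^d}{d!}\,, \end{aligned}$$

which explains the shape of the coefficients (5.2) below.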

Definition 5.1

We define the bimould \(\mathfrak {g}\) with values in \(\mathbb {Q}\llbracket q \rrbracket \) by

$$\begin{aligned} \mathfrak {g}\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} = \sum _{m_1> \dots> m_r > 0} L_{m_1}\genfrac(){}{}{X_1}{Y_1} \dots L_{m_r}\genfrac(){}{}{X_r}{Y_r} \,. \end{aligned}$$

Proposition 5.2

( [1, Theorem 2.3]) The bimould \(\mathfrak {g}\) is swap invariant.

The coefficients g of \(\mathfrak {g}\) as defined in Definition 3.8 (i) are explicitly given by

$$\begin{aligned} g\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} = \sum _{\begin{array}{c} m_1>\dots>m_r>0\\ n_1,\dots ,n_r>0 \end{array}} \frac{n_1^{k_1-1} m_1^{d_1}}{(k_1-1)!} \cdots \frac{n_r^{k_r-1} m_r^{d_r}}{(k_r-1)!} q^{m_1 n_1 + \dots + m_r n_r}\,. \end{aligned}$$
(5.2)

These coefficients are generalizations of the q-series defined in (2.3). The coefficient of \(q^n\) is given by the sum over all \(m_1 n_1 +\dots +m_r n_r=n\) with \(m_1>\dots>m_r>0, n_1,\dots ,n_r>0\), i.e. all partitions of n with r different parts \(m_1,\dots ,m_r\) and multiplicities \(n_1,\dots ,n_r\). This sum is invariant under the conjugation of partitions, which on the level of the generating series \(\mathfrak {g}\) corresponds exactly to the swap invariance (3.8). This describes a combinatorial proof of Proposition 5.2. Moreover, see [3] for the interpretation of the coefficients of the bimould \(\mathfrak {g}\) as a generalization of the generating series of classical divisor-sums and their derivatives.

The bimould \(\mathfrak {g}\) is not symmetril, but we can define a product \(\hat{\diamond }\) such that it becomes \(\hat{\diamond }\)-symmetril. For this, we need the following property of the series \(L_m\) defined in (5.1).

Lemma 5.3

For all \(m\geqslant 1\) we have

$$\begin{aligned} L_m\genfrac(){}{}{X_1}{Y_1}L_m\genfrac(){}{}{X_2}{Y_2} =&\frac{L_m\genfrac(){}{}{X_1}{Y_1+Y_2} - L_m\genfrac(){}{}{X_2}{Y_1+Y_2}}{X_1-X_2} \nonumber \\&+\left( 2 \mathfrak {b}(X_2-X_1) - \frac{1}{2}\right) L_m\genfrac(){}{}{X_1}{Y_1+Y_2} + \left( 2 \mathfrak {b}(X_1-X_2) - \frac{1}{2}\right) L_m\genfrac(){}{}{X_2}{Y_1+Y_2}\,,\qquad \end{aligned}$$
(5.3)

where \(\mathfrak {b}(X)=-\sum _{k\geqslant 2} \frac{B_k}{2k!}X^{k-1}\) is the depth one part of the mould \(\mathfrak {b}\) defined in (4.1).

Proof

By direct calculation one checks that \(L_m\genfrac(){}{}{X}{Y} = \frac{e^{X+mY}q^m}{1-e^X q^m}\) satisfies

$$\begin{aligned} L_m\genfrac(){}{}{X_1}{Y_1} L_m\genfrac(){}{}{X_2}{Y_2} = \frac{1}{e^{X_1-X_2}-1} L_m\genfrac(){}{}{X_1}{Y_1+Y_2} + \frac{1}{e^{X_2-X_1}-1} L_m\genfrac(){}{}{X_2}{Y_1+Y_2}, \end{aligned}$$

which gives the above formula by using \(\sum _{n=0}^\infty B_n \frac{X^n}{n!} = \frac{X}{e^X-1}\) and the parity of \(\mathfrak {b}\). \(\square \)

From this lemma, one can deduce the quasi-shuffle product satisfied by the coefficients g of \(\mathfrak {g}\). Explicitly, define for \(k_1,k_2,j \geqslant 1\) the rational numbers

$$\begin{aligned} \lambda ^{k_1,k_2}_j = -\left( (-1)^{k_1} \left( {\begin{array}{c}k_1+k_2-1-j\\ k_2 -j\end{array}}\right) + (-1)^{k_2} \left( {\begin{array}{c}k_1+k_2-1-j\\ k_1-j\end{array}}\right) \right) \frac{B_{k_1+k_2-j}}{(k_1+k_2-j)!} \, \end{aligned}$$

and define the commutative and associative product \(\hat{\diamond }\) on \(\mathbb {Q}L_z^\mathrm{{bi}}\) by

$$\begin{aligned} z^{k_1}_{d_1} \hat{\diamond }z^{k_2}_{d_2} = z^{k_1+k_2}_{d_1+d_2} +\sum _{j=1}^{k_1+k_2-1} \lambda ^{k_1,k_2}_j z^{j}_{d_1+d_2} \,. \end{aligned}$$
(5.4)

Proposition 5.4

The bimould \(\mathfrak {g}\) is \(\hat{\diamond }\)-symmetril.

Proof

This follows from [1, Theorem 3.6] and is a consequence of Lemma 5.3. For example, in lowest depth we have

$$\begin{aligned} \begin{aligned} \mathfrak {g}\genfrac(){}{}{X_1}{Y_1} \mathfrak {g}\genfrac(){}{}{X_2}{Y_2}&= \left( \sum _{m_1>m_2>0}+ \sum _{m_2>m_1>0}+\sum _{m_1=m_2>0}\right) L_{m_1}\genfrac(){}{}{X_1}{Y_1} L_{m_2}\genfrac(){}{}{X_2}{Y_2} \\&\overset{(5.3)}{=}\ \mathfrak {g}\genfrac(){}{}{X_1,X_2}{Y_1,Y_2} + \mathfrak {g}\genfrac(){}{}{X_2,X_1}{Y_2,Y_1} + \frac{\mathfrak {g}\genfrac(){}{}{X_1}{Y_1+Y_2} - \mathfrak {g}\genfrac(){}{}{X_2}{Y_1+Y_2}}{X_1-X_2} \\&\quad + \left( 2 \mathfrak {b}(X_2-X_1) - \frac{1}{2}\right) \mathfrak {g}\genfrac(){}{}{X_1}{Y_1+Y_2} + \left( 2 \mathfrak {b}(X_1-X_2) - \frac{1}{2}\right) \mathfrak {g}\genfrac(){}{}{X_2}{Y_1+Y_2}\,. \end{aligned} \end{aligned}$$
(5.5)

Considering the coefficients of (5.5) one sees that \(\hat{\diamond }\) in (5.4) gives \(\varphi _\mathfrak {g}(z^{k_1}_{d_1} *_{\hat{\diamond }}z^{k_2}_{d_2}) = \varphi _\mathfrak {g}(z^{k_1}_{d_1}) \varphi _\mathfrak {g}(z^{k_2}_{d_2})\). The general case can be proven by induction over the depth. \(\square \)
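As a sanity check, the depth one case \(\varphi _\mathfrak {g}(z^{k_1}_{d_1})\varphi _\mathfrak {g}(z^{k_2}_{d_2}) = \varphi _\mathfrak {g}(z^{k_1}_{d_1} *_{\hat{\diamond }} z^{k_2}_{d_2})\) can also be verified numerically on truncated q-expansions. The following is a minimal Python sketch (the helper names are ours; the Bernoulli numbers are taken in the convention \(\sum _{n=0}^\infty B_n \frac{X^n}{n!} = \frac{X}{e^X-1}\) used in the proof above), based only on (5.2) and (5.4).

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(n):
    """Bernoulli numbers with sum_{n>=0} B_n X^n/n! = X/(e^X - 1), so B_1 = -1/2."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B[n]

def binom(n, k):
    """Binomial coefficient, set to zero outside the usual range."""
    return comb(n, k) if 0 <= k <= n else 0

def g_bi(ks, ds, N):
    """Coefficients up to q^N of the bi-version (5.2) of the q-series g."""
    r, coeffs = len(ks), [Fraction(0)] * (N + 1)

    def rec(i, m_bound, q_exp, factor):
        if i == r:
            coeffs[q_exp] += factor
            return
        k, d = ks[i], ds[i]
        for m in range(1, m_bound):
            if q_exp + m > N:
                break
            for n in range(1, (N - q_exp) // m + 1):
                rec(i + 1, m, q_exp + m * n,
                    factor * Fraction(n ** (k - 1) * m ** d, factorial(k - 1)))

    rec(0, N + 2, 0, Fraction(1))
    return coeffs

def lam(k1, k2, j):
    """The rational numbers lambda^{k1,k2}_j defined above."""
    n = k1 + k2 - j
    return -((-1) ** k1 * binom(n - 1, k2 - j)
             + (-1) ** k2 * binom(n - 1, k1 - j)) * bernoulli(n) / factorial(n)

def check(k1, d1, k2, d2, N=12):
    """Compare both sides of the depth one quasi-shuffle relation up to q^N."""
    a, b = g_bi([k1], [d1], N), g_bi([k2], [d2], N)
    lhs = [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N + 1)]
    rhs = [x + y + z for x, y, z in zip(g_bi([k1, k2], [d1, d2], N),
                                        g_bi([k2, k1], [d2, d1], N),
                                        g_bi([k1 + k2], [d1 + d2], N))]
    for j in range(1, k1 + k2):
        rhs = [x + lam(k1, k2, j) * y for x, y in zip(rhs, g_bi([j], [d1 + d2], N))]
    return lhs == rhs

print(check(1, 0, 1, 0), check(2, 0, 1, 0), check(3, 1, 2, 0))  # all three should print True
```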

Proposition 5.4 shows the relationship between the bimould \(\mathfrak {g}\) and the depth one part of the mould \(\mathfrak {b}\). This will play a crucial role in the construction of the combinatorial multiple Eisenstein series.

6 Combinatorial multiple Eisenstein series

In this section, we introduce combinatorial (bi-)multiple Eisenstein series, which are the coefficients of the bimould \(\mathfrak {G}\). Before we can give the definition of \(\mathfrak {G}\) we need to introduce three other bimoulds \(\tilde{\mathfrak {b}}\), \(\mathfrak {L}_m\), and \(\mathfrak {g}^{*}\), which all depend on a fixed choice of a symmetril and swap-invariant bimould \(\mathfrak {b}\) given in Definition 4.6.

6.1 The bimoulds \(\tilde{\mathfrak {b}}\), \(\mathfrak {L}_m\), and \(\mathfrak {g}^{*}\)

Similarly to Definition 3.4 (ii), we can view the power series \(\exp \big (-\frac{T}{2}\big )\in \mathbb {Q}\llbracket T \rrbracket \) as a constant bimould. Moreover, define for any mould Z the mould \(Z^{-}\) for each \(r\geqslant 1\) by

$$\begin{aligned} Z^{-}(X_1,\ldots ,X_r)=Z(-X_1,\ldots ,-X_r). \end{aligned}$$

With this we define the following analogue of \(\mathfrak {b}= B^\mathfrak {b}= Y^{\mathfrak {b}_\gamma } \times X^\mathfrak {b}\).

Definition 6.1

Define the bimould \(\tilde{\mathfrak {b}}\) as a product of bimoulds

$$\begin{aligned} \tilde{\mathfrak {b}}=Y^{\mathfrak {b}_\gamma ^{-}}\times \exp \left( -\frac{T}{2}\right) \times X^{\mathfrak {b}}, \end{aligned}$$
(6.1)

i.e. for each \(r\geqslant 1\) we have

$$\begin{aligned} \tilde{\mathfrak {b}}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r}= \sum _{i=0}^r \frac{(-1)^i}{2^i i!} \mathfrak {b}\genfrac(){}{}{X_{i+1},\ldots ,X_r}{-Y_1,\ldots , -Y_{r-i}}. \end{aligned}$$

For \(m\geqslant 1\), let \(\mathfrak {L}_m\) be the bimould given in depth \(r\geqslant 1\) by

$$\begin{aligned} \mathfrak {L}_m\genfrac(){}{}{X_1,\ldots , X_r}{Y_1,\ldots , Y_r}=\sum _{j=1}^r \mathfrak {b}\genfrac(){}{}{X_1-X_j,\ldots ,X_{j-1}-X_j}{Y_1,\ldots ,Y_{j-1}}L_m\genfrac(){}{}{X_j}{Y_1+\dots + Y_r} \tilde{\mathfrak {b}}\genfrac(){}{}{X_r-X_j,\ldots , X_{j+1}-X_j}{Y_r,\ldots ,Y_{j+1}}\,. \end{aligned}$$

Observe that the depth one part of \(\mathfrak {L}_m\) is exactly given by the series \(L_m\) defined in (5.1).

Remark 6.2

(i) The bimould \(\mathfrak {L}_m\) can also be defined by using the flexion markers introduced in [17] (cf. [10, Section 7.5.3]).

(ii) The definition of \(\mathfrak {L}_m\) is inspired by the calculation of the Fourier expansion of multiple Eisenstein series. First observe that the power series \(L_m\) can be seen as the generating series of the monotangent functions for \(Y=0\). Namely, define the ‘combinatorial version’ of the monotangent function \(\Psi ^{\text {comb}}_k(\tau ) = \frac{1}{(k-1)!} \sum _{d>0} d^{k-1} q^d\) for \(k\geqslant 1\) simply by using the right-hand side of the Lipschitz formula (2.2). Then we see that

$$\begin{aligned} \sum _{k\geqslant 1} \Psi ^{\text {comb}}_k(m \tau ) X^{k-1}&= \sum _{k\geqslant 1}\frac{1}{(k-1)!} \sum _{d>0} d^{k-1} q^{md} X^{k-1} = \sum _{d>0} e^{dX} q^{md} \\&= \frac{e^X q^m}{1-e^X q^m} = L_m\genfrac(){}{}{X}{0}\,. \end{aligned}$$

So in analogy to Theorem 2.2, the \(\mathfrak {L}_m\) can be seen as the generating series of the combinatorial version of the multitangent functions. In particular, the trifactorization of the mould of multitangent functions (used to prove Theorem 2.2) in [10, Theorem 5 & 6] is similar to our definition of \(\mathfrak {L}_m\): the mould consisting of multiple zeta values in [10] is replaced in the definition of \(\mathfrak {L}_m\) by the mould \(\mathfrak {b}\). Moreover, we omit the constant term of \(\Psi _1(\tau )\), which is necessary for our construction of combinatorial multiple Eisenstein series for arbitrary indices (see the discussion before Remark 6.14 in [1]).
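
The identity \(\sum _{k\geqslant 1} \Psi ^{\text {comb}}_k(m \tau ) X^{k-1} = L_m\genfrac(){}{}{X}{0}\) can also be checked directly on truncated power series. The following sketch does this for the sample value \(m=2\) (the value of m and the truncation orders are arbitrary choices), expanding the closed form \(\frac{e^X q^m}{1-e^X q^m}\) as a geometric series in \(e^X q^m\):

```python
from fractions import Fraction
from math import factorial

m = 2            # sample value of m
K, D = 8, 20     # work modulo X^K and q^D

def mul(a, b):
    """Multiply two series sum_{i,j} c[i][j] X^i q^j, truncated at X^K, q^D."""
    c = [[Fraction(0)] * D for _ in range(K)]
    for i1 in range(K):
        for j1 in range(D):
            if a[i1][j1]:
                for i2 in range(K - i1):
                    for j2 in range(D - j1):
                        c[i1 + i2][j1 + j2] += a[i1][j1] * b[i2][j2]
    return c

# u = e^X q^m as a truncated series
u = [[Fraction(0)] * D for _ in range(K)]
for i in range(K):
    u[i][m] = Fraction(1, factorial(i))

# closed form: e^X q^m / (1 - e^X q^m) = u + u^2 + u^3 + ...
rhs = [[Fraction(0)] * D for _ in range(K)]
power = [row[:] for row in u]
for _ in range(D // m + 1):
    for i in range(K):
        for j in range(D):
            rhs[i][j] += power[i][j]
    power = mul(power, u)

# generating series of the combinatorial monotangent functions:
# the coefficient of X^(k-1) q^(m*d) is d^(k-1)/(k-1)!
lhs = [[Fraction(0)] * D for _ in range(K)]
for k in range(1, K + 1):
    for d in range(1, D):
        if m * d < D:
            lhs[k - 1][m * d] += Fraction(d ** (k - 1), factorial(k - 1))

print(lhs == rhs)  # expected: True
```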

Definition 6.3

We define the bimould \(\mathfrak {g}^{*}\) in depth \(r\geqslant 1\) by

$$\begin{aligned} \mathfrak {g}^{*}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} = \sum _{\begin{array}{c} 1 \leqslant j \leqslant r\\ 0 = r_0< r_1< \dots< r_{j-1} < r_j = r\\ m_1> \dots> m_j > 0 \end{array}} \prod _{i=1}^j \mathfrak {L}_{m_i} \genfrac(){}{}{X_{r_{i-1}+1},\ldots ,X_{r_i}}{Y_{r_{i-1}+1},\ldots ,Y_{r_i}}\,. \end{aligned}$$

The definition of the bimould \(\mathfrak {g}^{*}\) is inspired by the definition of the classical \(g^*\) in (2.8). In particular, we show that the bimould \(\mathfrak {g}^{*}\) is symmetril (Proposition 6.22).

6.2 The bimould \(\mathfrak {G}\) and combinatorial (bi-)multiple Eisenstein series

In analogy to (2.7) we define the bimould \(\mathfrak {G}\) as the product of \(\mathfrak {g}^{*}\) and \(\mathfrak {b}\).

Definition 6.4

  1. (i)

    We define the bimould \(\mathfrak {G}\) by

    $$\begin{aligned} \mathfrak {G}= \mathfrak {g}^{*}\times \mathfrak {b}\,. \end{aligned}$$
  2. (ii)

    For \(k_1,\dots ,k_r \geqslant 1\) and \(d_1,\dots ,d_r\geqslant 0\) we define the combinatorial bi-multiple Eisenstein series \(G\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} \in \mathbb {Q}\llbracket q \rrbracket \) as the coefficients of the bimould \(\mathfrak {G}\),

    $$\begin{aligned} \sum _{\begin{array}{c} k_1,\dots ,k_r \geqslant 1 \\ d_1,\dots ,d_r\geqslant 0 \end{array}}G\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} X_1^{k_1-1}\cdots X_r^{k_r-1} \frac{Y_1^{d_1}}{d_1!} \cdots \frac{Y_r^{d_r}}{d_r!} := \mathfrak {G}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} \,. \end{aligned}$$

    The number \(k_1+\dots +k_r+d_1+\dots +d_r\) is called its weight and r its depth.

  3. (iii)

    In the special case \(d_1=\dots =d_r=0\) we define the combinatorial multiple Eisenstein series for \(k_1,\dots ,k_r \geqslant 1\) by

    $$\begin{aligned} G(k_1,\dots ,k_r) := G\genfrac(){}{}{k_1,\dots ,k_r}{0,\dots ,0}\,. \end{aligned}$$

    The mould given by their generating series is also denoted by \(\mathfrak {G}\), i.e.

    $$\begin{aligned} \mathfrak {G}(X_1,\ldots ,X_r) := \mathfrak {G}\genfrac(){}{}{X_1,\ldots ,X_r}{0,\ldots , 0}\,. \end{aligned}$$

The main result of this work is the following.

Theorem 6.5

The bimould \(\mathfrak {G}\) is symmetril and swap invariant.

Proof

We will show that \(\mathfrak {g}^{*}\) is symmetril (Proposition 6.22). Since \(\mathfrak {b}\) is also symmetril, we deduce by Proposition 3.9 that \(\mathfrak {G}= \mathfrak {g}^{*}\times \mathfrak {b}\) is symmetril. For swap invariance we will show that \(\mathfrak {G}\) can be written as a sum of swap invariant bimoulds \(\mathfrak {G}_j\) (Proposition 6.27, Theorem 6.26) and therefore \(\mathfrak {G}\) is itself swap invariant. \(\square \)

Before presenting the necessary results for the proof of Theorem 6.5, we give some examples and consequences.

Example 6.6

  1. (i)

    In depth \(r=1\) we have \(\mathfrak {g}^{*}\genfrac(){}{}{X}{Y}=\mathfrak {g}\genfrac(){}{}{X}{Y}\) and therefore

    $$\begin{aligned} \mathfrak {G}\genfrac(){}{}{X}{Y}=\mathfrak {b}\genfrac(){}{}{X}{Y}+\mathfrak {g}\genfrac(){}{}{X}{Y}. \end{aligned}$$

    So the coefficients are for \(k\geqslant 1,d\geqslant 0\) given by

    $$\begin{aligned} G\genfrac(){}{}{k}{d}&= -\delta _{d,0} \frac{B_k}{2 k!} - \delta _{k,1}\frac{B_{d+1}}{2 (d+1)} + \frac{1}{(k-1)!}\sum _{m,n\geqslant 1} m^d n^{k-1} q^{mn}. \end{aligned}$$
    (6.2)

    We see that for \(k>d\geqslant 0\) the combinatorial bi-multiple Eisenstein series \(G\genfrac(){}{}{k}{d}\) is essentially the d-th derivative of the Eisenstein series \(G(k-d)\), since

    $$\begin{aligned} G\genfrac(){}{}{k}{d}&= \frac{(k-d-1)!}{(k-1)!} \left( q \frac{d}{dq}\right) ^d G(k-d)\,. \end{aligned}$$

    The swap invariance in depth one just states \(\mathfrak {G}\genfrac(){}{}{X}{Y}=\mathfrak {G}\genfrac(){}{}{Y}{X}\). On the level of coefficients this gives \(G\genfrac(){}{}{k}{d} = \frac{d!}{(k-1)!} G\genfrac(){}{}{d+1}{k-1}\), which can also be obtained from (6.2). A small numerical check of these depth one identities is sketched after this example.

  2. (ii)

    In depth \(r=2\) the bimould \(\mathfrak {G}\) is given by

    $$\begin{aligned} \mathfrak {G}\genfrac(){}{}{X_1, X_2}{Y_1,Y_2} = \mathfrak {g}^{*}\genfrac(){}{}{X_1,X_2}{Y_1,Y_2} + \mathfrak {g}^{*}\genfrac(){}{}{X_1}{Y_1} \mathfrak {b}\genfrac(){}{}{X_2}{Y_2}+\mathfrak {b}\genfrac(){}{}{X_1,X_2}{Y_1,Y_2} \end{aligned}$$

    and from the definition of \(\mathfrak {g}^{*}\) and \(\mathfrak {L}_m\) we get

    $$\begin{aligned} \mathfrak {g}^{*}\genfrac(){}{}{X_1,X_2}{Y_1,Y_2}&= \sum _{m_1> m_2>0} \mathfrak {L}_{m_1}\genfrac(){}{}{X_1}{Y_1}\mathfrak {L}_{m_2}\genfrac(){}{}{X_2}{Y_2} + \sum _{m>0} \mathfrak {L}_m\genfrac(){}{}{X_1, X_2}{Y_1, Y_2}\\&= \mathfrak {g}\genfrac(){}{}{X_1,X_2}{Y_1,Y_2} + \mathfrak {b}\genfrac(){}{}{X_1-X_2}{Y_1} \mathfrak {g}\genfrac(){}{}{X_2}{Y_1+Y_2} + \mathfrak {g}\genfrac(){}{}{X_1}{Y_1+Y_2}\tilde{\mathfrak {b}}\genfrac(){}{}{X_2-X_1}{Y_2}\,. \end{aligned}$$

    This gives an explicit expression of \(G\genfrac(){}{}{k_1,k_2}{d_1,d_2}\) in terms of the \(\beta \) and the q-series g, which we omit here.

  3. (iii)

    In depth \(r=2\) the swap invariance reads \(\mathfrak {G}\genfrac(){}{}{X_1,X_2}{Y_1,Y_2}=\mathfrak {G}\genfrac(){}{}{Y_1+Y_2,Y_1}{X_2,X_1-X_2}\), which gives for \(k_1,k_2\geqslant 1, d_1,d_2\geqslant 0\)

    $$\begin{aligned} G\genfrac(){}{}{k_1,k_2}{d_1,d_2} = \sum _{\begin{array}{c} 0 \leqslant a \leqslant d_1\\ 0 \leqslant b \leqslant k_2-1 \end{array}} \frac{(-1)^b}{a!\, b!} \frac{d_1!}{(k_1-1)!} \frac{(d_2+a)!}{(k_2-1-b)!} \,\,G\genfrac(){}{}{d_2+1+a,\,d_1+1-a}{k_2-1-b,\,k_1-1+b}\,.\nonumber \\ \end{aligned}$$
    (6.3)

    In Proposition 6.7 this formula is used to give an analogue of the shuffle product formula for combinatorial multiple Eisenstein series.

  4. (iv)

    We saw in Example 2.3 that \(\mathbb {G}_{3,2}\) is given by

    $$\begin{aligned} \mathbb {G}_{3,2}(\tau ) =\zeta (3,2) + 3 \zeta (3) \hat{g}(2) + 2 \zeta (2) \hat{g}(3) + \hat{g}(3,2) \,. \end{aligned}$$

    In comparison, we get

    $$\begin{aligned} G(3,2) = \beta (3,2) + 2 \beta (2) g(3) + g(3,2) = g(3,2)-\frac{1}{12}g(3) \,. \end{aligned}$$

    Notice that \(\beta (3)=0\), so this is exactly the expression for \(\mathbb {G}_{3,2}\) after replacing \(\zeta \) by the rational numbers \(\beta \) and \(\hat{g}\) by g.

  5. (v)

    The combinatorial multiple Eisenstein series G(2, 1, 1) is given by

    $$\begin{aligned} G(2,1,1)&= \beta (2,1,1)+\frac{1}{6} g(2) -g(2,1)+g(2,1,1)\,. \end{aligned}$$

    By duality \(\beta (4) = \beta (2,1,1)\), but one can check that \(G(4) \ne G(2,1,1)\), i.e. the combinatorial multiple Eisenstein series do not satisfy the duality relations. Another example of this is \(G(3) \ne G(2,1)\), since

    $$\begin{aligned} \sum _{n>0} \frac{n q^n}{(1-q^n)^2}= q \frac{d}{dq} G(1) = G(3) - G(2,1)\,. \end{aligned}$$
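
The depth one statements of Example 6.6 (i), i.e. the swap relation \(G\genfrac(){}{}{k}{d} = \frac{d!}{(k-1)!} G\genfrac(){}{}{d+1}{k-1}\) and the derivative formula \(q\frac{d}{dq} G\genfrac(){}{}{k}{d} = k\, G\genfrac(){}{}{k+1}{d+1}\) (see Sect. 6.5), can be verified directly on the q-expansion (6.2). The following sketch does this for small indices; the truncation order and the index ranges are arbitrary choices, and the Bernoulli numbers are taken with the convention \(B_1=-\frac{1}{2}\).

```python
from fractions import Fraction
from math import comb, factorial

D = 40  # truncate q-expansions at O(q^D)

def bernoulli(n):
    """Bernoulli numbers with the convention B_1 = -1/2."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for k in range(1, n + 1):
        B[k] = -sum(Fraction(comb(k + 1, j)) * B[j] for j in range(k)) / (k + 1)
    return B[n]

def G1(k, d):
    """q-expansion of the depth-one series G(k;d) from (6.2)."""
    c = [Fraction(0)] * D
    if d == 0:
        c[0] -= bernoulli(k) / (2 * factorial(k))
    if k == 1:
        c[0] -= bernoulli(d + 1) / (2 * (d + 1))
    for m in range(1, D):
        for n in range(1, D):
            if m * n >= D:
                break
            c[m * n] += Fraction(m**d * n**(k - 1), factorial(k - 1))
    return c

def qddq(c):  # the operator q d/dq on a q-expansion
    return [i * a for i, a in enumerate(c)]

for k in range(1, 5):
    for d in range(0, 4):
        # swap invariance in depth one
        assert G1(k, d) == [Fraction(factorial(d), factorial(k - 1)) * a for a in G1(d + 1, k - 1)]
        # derivative formula q d/dq G(k;d) = k G(k+1;d+1)
        assert qddq(G1(k, d)) == [k * a for a in G1(k + 1, d + 1)]
print("depth-one swap and derivative identities hold up to O(q^%d)" % D)
```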

As an analogue of the double shuffle equations of multiple zeta values in depth two (1.3), the combinatorial bi-multiple Eisenstein series satisfy the following.

Proposition 6.7

For \(k_1,k_2\geqslant 1, d_1,d_2 \geqslant 0\) we have

$$\begin{aligned}\begin{aligned} G\genfrac(){}{}{k_1}{d_1} G\genfrac(){}{}{k_2}{d_2}&= G\genfrac(){}{}{k_1,k_2}{d_1,d_2} +G\genfrac(){}{}{k_2,k_1}{d_2,d_1} +G\genfrac(){}{}{k_1+k_2}{d_1+d_2} \\&= \sum _{\begin{array}{c} l_1+l_2=k_1+k_2\\ e_1+e_2=d_1+d_2\\ l_1,l_2\geqslant 1, e_1,e_2\geqslant 0 \end{array}} \left( \left( {\begin{array}{c}l_1-1\\ k_1-1\end{array}}\right) \left( {\begin{array}{c}d_1\\ e_1\end{array}}\right) (-1)^{d_1-e_1} + \left( {\begin{array}{c}l_1-1\\ k_2-1\end{array}}\right) \left( {\begin{array}{c}d_2\\ e_1\end{array}}\right) (-1)^{d_2-e_1} \right) G\genfrac(){}{}{l_1,l_2}{e_1,e_2} \\&\quad +\frac{d_1! d_2!}{(d_1+d_2+1)!}\left( {\begin{array}{c}k_1+k_2-2\\ k_1-1\end{array}}\right) G\genfrac(){}{}{k_1+k_2-1}{d_1+d_2+1}\,. \end{aligned} \end{aligned}$$

Proof

The first equality is just a direct consequence of the symmetrility. To show the second equality, first use the swap invariance to get \( G\genfrac(){}{}{k_1}{d_1} G\genfrac(){}{}{k_2}{d_2} = \frac{d_1! d_2!}{(k_1-1)!(k_2-1)!} G\genfrac(){}{}{d_1+1}{k_1-1} G\genfrac(){}{}{d_2+1}{k_2-1}\) and then evaluate this product by the first equality. Applying the swap invariance again, in depth one and in depth two (6.3), then yields the result. \(\square \)
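
For example, for \(k_1=k_2=1\) and \(d_1=d_2=0\) the two evaluations of the product read

$$\begin{aligned} G\genfrac(){}{}{1}{0}^2 = 2\, G\genfrac(){}{}{1,1}{0,0} + G\genfrac(){}{}{2}{0} = 2\, G\genfrac(){}{}{1,1}{0,0} + G\genfrac(){}{}{1}{1}\,, \end{aligned}$$

which is consistent, since \(G\genfrac(){}{}{2}{0} = G\genfrac(){}{}{1}{1}\) by the swap invariance in depth one.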

Remark 6.8

Proposition 6.7 shows that the combinatorial bi-multiple Eisenstein series in depth \(\leqslant 2\) give a realization of the formal double Eisenstein space introduced in [8]. This space is defined by formal symbols satisfying exactly the relations in Proposition 6.7.

One can obtain an analogue of the double shuffle relations in arbitrary depth with the same argument as in the proof of Proposition 6.7. For example, the equation (1.9) is obtained in this way.

Example 6.9

Evaluating G(2)G(2, 1) in the two different ways described above and writing out the Fourier expansion yields:

$$\begin{aligned} 0&= G(2,2,1) + 6 G(3,1,1) - G(2,3) - G(4,1) + 2 G\genfrac(){}{}{3,1}{1,0} + G\genfrac(){}{}{2,2}{0,1} \\&= \beta (2,2,1) + 6 \beta (3,1,1) - \beta (2,3) - \beta (4,1) \\&\quad + \left( 2\beta (2)-\beta (2)^2+12 \beta (1,1)+4 \beta (1,3)+6 \beta (2,2)+12 \beta (3,1) - \frac{1}{6}\right) q \\&\quad + \left( 6 \beta (2)-2 \beta (2)^2+60 \beta (1,1)+8 \beta (1,3)+12 \beta (2,2)+24 \beta (3,1) - 1\right) \,q^2\\&\quad + \left( 4 \beta (2)-2 \beta (2)^2+120 \beta (1,1)+8 \beta (1,3)+12 \beta (2,2)+24 \beta (3,1)- \frac{7}{3} \right) q^3 \\&\quad + O(q^4)\,. \end{aligned}$$

From this equation we can obtain relations among the coefficients \(\beta \) in lower weight without using their explicit expression. We get \(\beta (2) = -\frac{1}{24}\), \(\beta (1,1) = \frac{1}{48}\) and \(2 \beta (1, 3) + 3 \beta (2, 2) + 6 \beta (3, 1) = \frac{1}{1152} = \frac{1}{2} \beta (2)^2\). It might be interesting to understand in general which relations among the \(\beta \) can be obtained from the relations among the G.

Definition 6.10

The \(\mathbb {Q}\)-vector space spanned by all combinatorial bi-multiple Eisenstein series is defined by

$$\begin{aligned} \mathcal {G}^{\textrm{bi}}= \mathbb {Q}+ \big \langle G\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} \mid r \geqslant 1, k_1,\dots ,k_r \geqslant 1, d_1,\dots ,d_r\geqslant 0 \big \rangle _\mathbb {Q}\,, \end{aligned}$$

and the homogeneous subspace of weight \(k\geqslant 0\) is given by \(\mathcal {G}^{\textrm{bi}}_0=\mathbb {Q}\) and for \(k\geqslant 1\) by

$$\begin{aligned} \mathcal {G}^{\textrm{bi}}_k = \big \langle G\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} \in \mathcal {G}^{\textrm{bi}}\mid k_1+\dots +k_r+d_1+\dots +d_r=k \big \rangle _\mathbb {Q}\,. \end{aligned}$$

The subspace spanned by all combinatorial multiple Eisenstein series is denoted by

$$\begin{aligned} \mathcal {G}= \mathbb {Q}+ \big \langle G(k_1,\dots ,k_r)\mid r \geqslant 1, k_1,\dots ,k_r \geqslant 1 \big \rangle _\mathbb {Q}\,, \end{aligned}$$

and we set \(\mathcal {G}_k = \mathcal {G}\cap \mathcal {G}^{\textrm{bi}}_k\).

Remark 6.11

We expect that all relations among the combinatorial multiple Eisenstein series come from the swap invariance and symmetrility. In particular, this would imply that the combinatorial bi-multiple Eisenstein series are graded by weight, i.e. we expect \(\mathcal {G}^{\textrm{bi}}\overset{?}{=}\ \bigoplus _{k\geqslant 0} \mathcal {G}^{\textrm{bi}}_k\).

Proposition 6.12

Both \(\mathcal {G}^{\textrm{bi}}\) and \(\mathcal {G}\) are \(\mathbb {Q}\)-algebras containing the algebra of (quasi-) modular forms with rational coefficients, given by \(\mathbb {Q}[G(2),G(4),G(6)]\).

Proof

This follows immediately from the symmetrility of \(\mathfrak {G}\) and (6.2). It can also be obtained from Proposition 6.15 (i) below. \(\square \)

Proposition 6.13

For \(k\geqslant 1, d\geqslant 0\) with \(k+d\) even, \(G\genfrac(){}{}{k,\dots ,k}{d,\dots ,d}\) is a quasi-modular form.

Proof

By a classical result for quasi-shuffle algebras ( [23, (32)]), the generating series of \(G\genfrac(){}{}{k,\dots ,k}{d,\dots ,d}\) can be written as

$$\begin{aligned} 1+ \sum _{r=1}^{\infty } G\genfrac(){}{}{\overbrace{k,\dots ,k}^r}{d,\dots ,d} T^r = \exp \left( \sum _{r=1}^{\infty } (-1)^{r-1} G\genfrac(){}{}{r k}{r d} \frac{T^r}{r} \right) \,. \end{aligned}$$
(6.4)

By (6.2), the \(G\genfrac(){}{}{r k}{r d}\) are quasi-modular for \(k+d\) even. Therefore by (6.4), the \(G\genfrac(){}{}{k,\dots ,k}{d,\dots ,d}\) are also quasi-modular forms of weight \(r(k+d)\) and depth (in the sense of quasi-modular forms) at most \(r\cdot \min (d,k-1)\). \(\square \)
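
For instance, comparing the coefficients of \(T^2\) in (6.4) gives

$$\begin{aligned} G\genfrac(){}{}{k,k}{d,d} = \frac{1}{2}\, G\genfrac(){}{}{k}{d}^2 - \frac{1}{2}\, G\genfrac(){}{}{2k}{2d}\,, \end{aligned}$$

so for \(k+d\) even this is a quasi-modular form of weight \(2(k+d)\); the case \(k=d=1\) is spelled out in Example 6.14 below.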

Example 6.14

We give one explicit example for \(k=d=1\) and \(r=2\):

$$\begin{aligned} G\genfrac(){}{}{1,1}{1,1}&= \beta (2,2) + 2 \beta (3,1) + \beta (2) g \genfrac(){}{}{1}{1} - \frac{1}{2}g\genfrac(){}{}{1}{2} + g\genfrac(){}{}{1,1}{1,1}\\&= \frac{1}{1152} -\frac{1}{24} g(2) - g(3) + g(2,2) + 2 g(3,1) = \frac{1}{2} G\genfrac(){}{}{1}{1}^2 - \frac{1}{2}G\genfrac(){}{}{2}{2} \\&= \frac{1}{2} G(2)^2 - \frac{1}{2} q \frac{d}{dq}G(2)\,. \end{aligned}$$

Here the first equality comes from the definition of \( G\genfrac(){}{}{1,1}{1,1} \), the second equality follows from the explicit values for \(\beta \), which are unique up to weight 7, and the swap invariance of g. The third equality comes from (6.4), but could also be obtained from Proposition 5.4. For the last equality we used the swap invariance of G and (6.2).

For some indices one can also give an explicit formula for the G in terms of the q-series g, e.g. in the case \(k=2, d=0\) one can show that

$$\begin{aligned} \sum _{r\geqslant 0} G(\overbrace{2,\dots , 2}^r) T^{2r+1}= \sum _{r\geqslant 0} g(\overbrace{2,\dots ,2}^r) \left( 2 \sin \left( \frac{T}{2}\right) \right) ^{2r+1}\,. \end{aligned}$$

To obtain this formula one shows that the generating series over all \(r\geqslant 1\) of the coefficients of \(X_1 \dots X_r\) in \(\mathfrak {L}_m\genfrac(){}{}{X_1,\dots ,X_r}{0,\dots ,0}\) has a product expression in terms of the \(L_m\). Using the Weierstrass product expression of \(\sin \) together with our construction then yields the claim after some calculations. It would be interesting to know if in general there are similar expressions for \(G(2k,\dots ,2k)\) in analogy to the explicit evaluations of \(\zeta (2k,\dots ,2k)\) for \(k\geqslant 1\).
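
As a quick consistency check, comparing the coefficients of \(T^3\) on both sides gives \(G(2) = -\frac{1}{24} + g(2)\), since \(2\sin \big (\frac{T}{2}\big ) = T - \frac{T^3}{24} + \dots \); this agrees with \(G(2) = \beta (2) + g(2)\) and \(\beta (2) = -\frac{1}{24}\).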

By construction the combinatorial bi-multiple Eisenstein series G can be written as rational linear combinations of the q-series g defined in (5.2). The following proposition shows that the converse is also true.

Proposition 6.15

For all \(k_1,\dots ,k_r \geqslant 1\) and \(d_1,\dots ,d_r\geqslant 0\) we have

$$\begin{aligned} g\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} \in \mathcal {G}^{\textrm{bi}}\,. \end{aligned}$$

In particular, the combinatorial bi-multiple Eisenstein series span the space \({\mathcal {Z}}_q\) of q-analogues of multiple zeta values defined in [7].

Proof

In depth one we have by definition \(G\genfrac(){}{}{k}{d}=g\genfrac(){}{}{k}{d}+\mu \) for some \(\mu \in \mathbb {Q}\) (see (6.2)), and thus \(g\genfrac(){}{}{k}{d}\in \mathcal {G}^{\textrm{bi}}\) for all \(k\geqslant 1, d\geqslant 0\). Moreover, we obtain immediately from the construction of \(\mathfrak {G}\) that for all \(k_1,\ldots ,k_r\geqslant 1\) and \(d_1,\ldots ,d_r\geqslant 0\)

$$\begin{aligned} G\genfrac(){}{}{k_1,\ldots ,k_r}{d_1,\ldots ,d_r}=g\genfrac(){}{}{k_1,\ldots ,k_r}{d_1,\ldots ,d_r}+(\text {terms only involving g of smaller depths and weights}). \end{aligned}$$

So induction on the depth shows that each q-series g is a \(\mathbb {Q}\)-linear combination of combinatorial bi-multiple Eisenstein series. The last statement follows from [7, Theorem 1], where it is shown that the q-series g span the space \({\mathcal {Z}}_q\), i.e. we get \(\mathcal {G}^{\textrm{bi}}={\mathcal {Z}}_q\). \(\square \)

Remark 6.16

  1. (i)

    The same argument as in Proposition 6.15 also shows \(g(k_1,\ldots ,k_r)\in \mathcal {G}\) for all \(k_1,\ldots ,k_r\geqslant 1\). The converse also holds, i.e. every combinatorial multiple Eisenstein series is a \(\mathbb {Q}\)-linear combination of the single indexed g (this follows from equation (6.15)). This is in contrast to the shuffle ( [9]) and stuffle ( [1]) regularized multiple Eisenstein series, where double indexed g are needed when \(k_j=1\) for some j. In particular, we have \(\mathcal {G}= {\mathcal {Z}}^\circ _q\) where \({\mathcal {Z}}^\circ _q\) is defined in [7]. As a consequence of this, one would expect \(\mathcal {G}\overset{?}{=}\ \mathcal {G}^{\textrm{bi}}\). This was first conjectured in [1] (see also [7, Conjecture 5]).

  2. (ii)

    As explained in Remark 6.11, we expect that \(\mathcal {G}^{\textrm{bi}}\) is graded by weight. By Proposition 6.15, the dimensions of the homogeneous spaces \(\mathcal {G}^{\textrm{bi}}_k\) should coincide with the conjectured dimensions of the weight-graded parts of \({\mathcal {Z}}_q\) given in [7, Conjecture 3] (and similarly for the associated depth graded parts).

We now explain why the combinatorial multiple Eisenstein series interpolate between the rational solution \(\beta \) and multiple zeta values. In the case \(k_1\geqslant 2, k_2,\dots ,k_r\geqslant 1\) we get as a direct consequence of the proof of Proposition 6.15 and (2.4) that

$$\begin{aligned} \lim \limits _{q\rightarrow 1}(1-q)^{k_1+\dots +k_r } G(k_1,\ldots ,k_r) = \zeta (k_1,\dots ,k_r)\,. \end{aligned}$$
(6.5)

The limit is independent of the choice of the rational solution \(\mathfrak {b}\) to the double shuffle equations, since the limit \(q\rightarrow 1\) only picks up the highest depth term of the q-series g in G. In the case \(k_1=1\) this limit does not exist, but we can consider a regularized limit, which we describe now. Using the notation of Sect. 3 we can, as in the introduction, view the combinatorial multiple Eisenstein series as a \(\mathbb {Q}\)-linear map defined on the generators by

$$\begin{aligned} \begin{aligned} G: {\mathfrak {H}}^1&\longrightarrow \mathcal {G}\\ w= z_{k_1}\dots z_{k_r}&\longmapsto G(w) := G(k_1,\dots ,k_r)\,. \end{aligned} \end{aligned}$$
(6.6)

Since \(\mathfrak {G}\) is symmetril, G gives an algebra homomorphism from \(({\mathfrak {H}}^1,*)\) to \(\mathcal {G}\). Due to \({\mathfrak {H}}^1 = {\mathfrak {H}}^0[z_1]\) (cf. [24, Proposition 1]) we can write \(w=z_{k_1}\dots z_{k_r} \in {\mathfrak {H}}^1\) for any \(k_1,\dots ,k_r\geqslant 1\) uniquely as \(w = \sum _{j=0}^r w_j * z_1^{*j}\) with \(w_j \in {\mathfrak {H}}^0\). Then we define the regularized version of the limit (6.5) as

$$\begin{aligned} {\lim _{q\rightarrow 1}}^*(1-q)^{k_1+\dots +k_r} G(k_1,\dots ,k_r) := \lim _{q\rightarrow 1}(1-q)^{k_1+\dots +k_r} G(w_0)=\zeta (w_0)\,. \end{aligned}$$
(6.7)

Notice that if \(k_1\geqslant 2\), then \(w=w_0\) and thus (6.7) is equal to (6.5).
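
The convergence in (6.5) can be illustrated numerically in the simplest case \(G(2)\), where \((1-q)^2 G(2)\) should approach \(\zeta (2)=\frac{\pi ^2}{6}\approx 1.6449\) as \(q\rightarrow 1\). The following sketch uses the depth one expansion (6.2); the sample values of q and the truncation are arbitrary choices, and the convergence is slow (the error is roughly of order \(1-q\)).

```python
from math import pi

def G2(q):
    """Numerical value of G(2) = -1/24 + sum_{m,n>=1} n q^(mn), cf. (6.2)."""
    terms = int(40 / (1 - q))  # beyond this, q^n is negligible
    return -1 / 24 + sum(n * q**n / (1 - q**n) for n in range(1, terms))

print("zeta(2) =", pi**2 / 6)
for q in (0.9, 0.99, 0.999):
    print(q, (1 - q)**2 * G2(q))
```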

Proposition 6.17

For any \(k_1,\dots ,k_r\geqslant 1\) we have

$$\begin{aligned} \lim _{q\rightarrow 0} G(k_1,\dots ,k_r)&= \beta (k_1,\dots ,k_r),\\ {\lim _{q\rightarrow 1}}^*(1-q)^{k_1+\dots +k_r} G(k_1,\dots ,k_r)&= \zeta ^*(k_1,\dots ,k_r)\,. \end{aligned}$$

Proof

First notice that the constant terms of the combinatorial multiple Eisenstein series are by construction the coefficients in \(\mathfrak {b}\genfrac(){}{}{X_1,\dots ,X_r}{0,\dots ,0}\). The bimould \(\mathfrak {b}\) was defined by the mould \(\mathfrak {b}\) (Definition 4.6), which satisfies the extended double shuffle equations. Since the mould \(\mathfrak {b}_\gamma \) is symmetral and \(\mathfrak {b}_\gamma (0)=0\) (as \(\beta (1)=0\)), we inductively get \(\mathfrak {b}_\gamma (0,\dots ,0)=0\). We deduce \(\mathfrak {b}\genfrac(){}{}{X_1,\dots ,X_r}{0,\dots ,0} = \mathfrak {b}(X_1,\dots ,X_r)\) which shows the first equation. For the second equation observe that the stuffle regularized multiple zeta values \(\zeta ^*\) are essentially defined in the same way as we constructed the regularized limit (6.7). Then since G and \(\zeta ^*\) are algebra homomorphisms, we obtain the claim from (6.5). \(\square \)

Remark 6.18

In general one can make sense of the limit of \((1-q)^{k_1+\dots +k_r+d_1+\dots +d_r} G\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r}\) as \(q \rightarrow 1\). In [5] the authors introduce bi-multiple zeta values \(\zeta \genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} \in {\mathcal {Z}}\) ( [5, Definition 4.18] after setting \(T=0\)), which are essentially given by the regularized limit of \((1-q)^{k_1+\dots +k_r+d_1+\dots +d_r} g\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r}\) as \(q\rightarrow 1\) (similar to (6.7)). Using the notion of degree and weight limit introduced in [5], one can check (by Proposition 6.15) that all the other terms in \(G\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r}\) have lower degree than \(g\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r}\), so they do not contribute to the weight limit ( [5, Definition 4.3]). Therefore the regularized limit of \((1-q)^{k_1+\dots +k_r+d_1+\dots +d_r}G\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r}\) as \(q \rightarrow 1\) is exactly given by \(\zeta \genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r}\). Assuming that \(\mathcal {G}^{\textrm{bi}}= \bigoplus _{k\geqslant 0} \mathcal {G}^{\textrm{bi}}_k\) (Remark 6.11) one would then get that the map \(G\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} \mapsto \zeta \genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r}\) gives a \(\mathbb {Q}\)-algebra homomorphism from \(\mathcal {G}^{\textrm{bi}}\) to \({\mathcal {Z}}\).

Proposition 6.19

If for some \(\epsilon _{k_1,\dots ,k_r}\in \mathbb {Q}\) with \(r\geqslant 1\) and \(k_1+\dots +k_r = k\geqslant 4\) the q-series

$$\begin{aligned} \sum _{\begin{array}{c} 1 \leqslant r \leqslant k\\ k_1+\dots +k_r=k \end{array}} \epsilon _{k_1,\dots ,k_r} G(k_1,\dots ,k_r) \end{aligned}$$

is a modular form of weight k (after setting \(q=e^{2\pi i\tau }\)), then we have

$$\begin{aligned} \sum _{\begin{array}{c} 1 \leqslant r \leqslant k\\ k_1+\dots +k_r=k \end{array}} \epsilon _{k_1,\dots ,k_r} \zeta (k_1,\dots ,k_r) = (2\pi i)^k \sum _{\begin{array}{c} 1 \leqslant r \leqslant k\\ k_1+\dots +k_r=k \end{array}} \epsilon _{k_1,\dots ,k_r} \beta (k_1,\dots ,k_r)\,. \end{aligned}$$

Proof

If \(f(\tau )= a_0 + \sum _{n\geqslant 1} a_n q^n\) is modular of weight k then \(f(-\frac{1}{\tau }) = \tau ^k f(\tau )\), i.e.

$$\begin{aligned} \lim _{q\rightarrow 1} (1-q)^k f(q)&= \lim _{\tau \rightarrow 0} ((2\pi i \tau )^k + O(\tau ^{k+1})) f(\tau ) = \lim _{\tau \rightarrow 0} (2\pi i)^k f\left( -\frac{1}{\tau }\right) \\ {}&= \lim _{\tau \rightarrow i\infty } (2\pi i)^k f(\tau ) = (2\pi i)^k a_0\,. \end{aligned}$$

The statement then follows from Proposition 6.17. \(\square \)
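
For example, \(G(4)\) is a modular form of weight 4, and indeed

$$\begin{aligned} \zeta (4) = \frac{\pi ^4}{90} = (2\pi i)^4 \cdot \frac{1}{1440} = (2\pi i)^4\, \beta (4)\,, \end{aligned}$$

since by (6.2) the constant term of G(4) is \(\beta (4) = -\frac{B_4}{2\cdot 4!} = \frac{1}{1440}\).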

Notice that the converse of Proposition 6.19 is not true, since \(\zeta (2,1,1) = (2\pi i)^4 \beta (2,1,1)\), but, as seen in Example 6.6, G(2, 1, 1) is not a multiple of G(4).

6.3 Symmetrility of \(\mathfrak {L}_m\), \(\mathfrak {g}^{*}\) and \(\mathfrak {G}\)

In this subsection, we give the proofs of the symmetrility of the previously defined bimoulds.

Lemma 6.20

For all \(m\geqslant 1\) the bimould \(\mathfrak {L}_m\) is symmetril.

Proof

By replacing \(q^m = e^{-T}\) in \(L_m\genfrac(){}{}{X}{0} = \frac{e^Xq^m}{1-e^X q^m}\), we obtain a new series

$$\begin{aligned} L_T(X) = \frac{e^{X-T}}{1-e^{X-T}} = 2 \mathfrak {b}(X-T) - \frac{1}{X-T} - \frac{1}{2}.\end{aligned}$$

For each \(r\geqslant 1\), we define a multiple bi-version of \(L_T\) as

$$\begin{aligned} \mathfrak {L}_T\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\ldots ,Y_r} = \sum _{j=1}^r \mathfrak {b}\genfrac(){}{}{X_1-X_j,\dots ,X_{j-1}-X_j}{Y_1,\ldots ,Y_{j-1}} L_T(X_j) \tilde{\mathfrak {b}}\genfrac(){}{}{X_r-X_j,\dots ,X_{j+1}-X_j}{-Y_r,\ldots ,-Y_{j+1}}\,. \end{aligned}$$

Then after the change of variables \(q^m = e^{-T}\) and multiplication with \(\exp (m(Y_1+\dots +Y_r))\), we obtain precisely the bimould \(\mathfrak {L}_m\). Moreover, let \(\mathfrak {b}_T, \tilde{\mathfrak {b}}_T\) and \(M_T\) be the bimoulds given in depth \(r\geqslant 1\) by

$$\begin{aligned} \mathfrak {b}_T\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\ldots ,Y_r}&= \mathfrak {b}\genfrac(){}{}{X_1-T,\ldots ,X_r-T}{Y_1,\ldots ,Y_r}, \qquad \tilde{\mathfrak {b}}_T\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\ldots ,Y_r} = \tilde{\mathfrak {b}}\genfrac(){}{}{X_r-T,\ldots ,X_1-T}{Y_r,\ldots ,Y_1}\,, \\ M_T\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots ,Y_r}&={\left\{ \begin{array}{ll} \frac{1}{T-X_1}, &{} \text {if } r=1 \\ 0, &{} \text {else}\end{array}\right. }\,. \end{aligned}$$

We show that the bimould \(\mathfrak {L}_T\) has the following product representation

$$\begin{aligned} \mathfrak {L}_T = \mathfrak {b}_T \times M_T \times \tilde{\mathfrak {b}}_T. \end{aligned}$$
(6.8)

Since all bimoulds on the right hand side of the equation are symmetril, by Proposition 3.9 also \(\mathfrak {L}_T\) is a symmetril bimould. Substituting back \(e^{-T}=q^m\) and multiplying by \(\exp (m(Y_1+\dots +Y_r))\) gives the symmetrility of the bimould \(\mathfrak {L}_m\). In depth one, we compute

$$\begin{aligned} \begin{aligned} \mathfrak {b}_T\genfrac(){}{}{X}{0} + M_T\genfrac(){}{}{X}{0} + \tilde{\mathfrak {b}}_T\genfrac(){}{}{X}{0} =2\mathfrak {b}(X-T)-\frac{1}{2}+\frac{1}{T-X}=L_T(X). \end{aligned} \end{aligned}$$
(6.9)

Substituting (6.9) into the left-hand side of (6.8), we have to show that in any given depth \(r\geqslant 1\)

$$\begin{aligned} \begin{aligned}&\sum _{j=1}^r \mathfrak {b}\genfrac(){}{}{X_1-X_j,\ldots ,X_{j-1}-X_j}{Y_1,\ldots ,Y_{j-1}}\mathfrak {b}_T\genfrac(){}{}{X_j}{0}\tilde{\mathfrak {b}}\genfrac(){}{}{X_r-X_j,\ldots ,X_{j+1}-X_j}{-Y_r,\ldots ,-Y_{j+1}}\\&\qquad +\sum _{j=1}^r \mathfrak {b}\genfrac(){}{}{X_1-X_j,\ldots ,X_{j-1}-X_j}{Y_1,\ldots ,Y_{j-1}}\tilde{\mathfrak {b}}_T\genfrac(){}{}{X_j}{0}\tilde{\mathfrak {b}}\genfrac(){}{}{X_r-X_j,\ldots ,X_{j+1}-X_j}{-Y_r,\ldots ,-Y_{j+1}}\\&\qquad +\sum _{j=1}^r \frac{1}{T-X_j} \mathfrak {b}\genfrac(){}{}{X_1-X_j,\ldots ,X_{j-1}-X_j}{Y_1,\ldots ,Y_{j-1}}\tilde{\mathfrak {b}}\genfrac(){}{}{X_r-X_j,\ldots ,X_{j+1}-X_j}{-Y_r,\ldots ,-Y_{j+1}} \\&\qquad - \sum _{j=0}^r\mathfrak {b}_T\genfrac(){}{}{X_1,\ldots ,X_j}{Y_1,\ldots ,Y_j}\tilde{\mathfrak {b}}_T\genfrac(){}{}{X_{j+1},\ldots ,X_r}{Y_{j+1},\ldots , Y_r} \\&\qquad -\sum _{j=1}^r \frac{1}{T-X_j} \mathfrak {b}_T\genfrac(){}{}{X_1,\ldots ,X_{j-1}}{Y_1,\ldots ,Y_{j-1}}\tilde{\mathfrak {b}}_T\genfrac(){}{}{X_{j+1},\ldots ,X_r}{Y_{j+1},\ldots ,Y_r} \\&\quad =0. \end{aligned} \end{aligned}$$
(6.10)

Rewrite this equation in terms of the mould \(\mathfrak {b}\) by inserting the Definitions 4.6 and 6.1. Then apply symmetrility to the terms \(\mathfrak {b}(X_a-X_j,\ldots ,X_{j-1}-X_j)\) and \(-\mathfrak {b}(T-X_j)\) for \(a\in \{1,\ldots ,j\}\) in the first row and to the terms \(-\mathfrak {b}(T-X_j)\) and \(\mathfrak {b}(X_{r-b}-X_j,\ldots ,X_{j+1}-X_j)\) for \(b\in \{0,\ldots ,r-j\}\) in the second row (after making use of the identity \(\mathfrak {b}(X)=-\mathfrak {b}(-X)\)). Finally, rewrite the equation in terms of the mould \(\mathfrak {b}_\gamma \) by using (3.5). Since the mould \(\mathfrak {b}_\gamma \) is symmetral, it satisfies by Lemma 3.1

$$\begin{aligned} \sum _{j=a-1}^{r-b} (-1)^j \mathfrak {b}_\gamma (X_j,X_{j-1},\ldots ,X_a)\mathfrak {b}_\gamma (X_{j+1},X_{j+2},\ldots ,X_{r-b})=0 \end{aligned}$$
(6.11)

for all \(1\leqslant a\leqslant r-b\leqslant r\). Frequently applying the relation (6.11) proves the above equation except for the terms, where no mould \(\mathfrak {b}\) depending on some of the variables \(X_1,\ldots ,X_r\) appears. To show that these terms also vanish, we use an explicit expression for the generating series of \(\tilde{\gamma }_k = \tilde{\gamma }^\mathfrak {b}_k \) defined in (3.6)

$$\begin{aligned} \tilde{\gamma }(X) = \sum _{k=0}^\infty \tilde{\gamma }^\mathfrak {b}_k X^k := \sum _{k=0}^\infty \beta (\underbrace{1,\dots ,1}_k) X^k = \exp \left( \sum _{n=2}^\infty \frac{(-1)^{n+1}}{n} \beta (n) X^n \right) \,. \end{aligned}$$

The following expression for the Gamma function (cf. [24, (2.1)])

$$\begin{aligned} e^{\gamma X} \Gamma (1+X) = \exp \left( \sum _{n=2}^\infty \frac{(-1)^{n}}{n} \zeta (n) X^n \right) \, \end{aligned}$$

together with the equality \(\beta (n) = \frac{\zeta (n)}{(2\pi i)^n}\) for n even, gives

$$\begin{aligned} \tilde{\gamma }(X)^2 = \frac{1}{ \Gamma (1+\frac{X}{2\pi i})\Gamma (1-\frac{X}{2\pi i})} = \frac{2}{X} \sinh \left( \frac{X}{2}\right) = \frac{ e^{\frac{X}{2}} - e^{-\frac{X}{2}}}{X} \,. \end{aligned}$$
(6.12)

Using the definition (6.1) of \(\tilde{\mathfrak {b}}\) as a product of three moulds, one can write the remaining terms in (6.10) as products of moulds, which are then shown to cancel by using the explicit formula (6.12) together with a straightforward calculation. \(\square \)
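
The identity (6.12) can also be checked directly on power series. The following sketch verifies it up to order 12, using that for even n the depth one coefficient is \(\beta (n) = \frac{\zeta (n)}{(2\pi i)^n} = -\frac{B_n}{2\cdot n!}\) (cf. (6.2)); the truncation order is an arbitrary choice.

```python
import sympy as sp

X = sp.symbols('X')
N = 12  # truncation order

# tilde-gamma(X)^2 = exp(2 * sum_{n even >= 2} (-1)^(n+1)/n * beta(n) * X^n),
# with beta(n) = -B_n/(2*n!) for even n
expo = 2 * sum(sp.Rational((-1)**(n + 1), n) * (-sp.bernoulli(n) / (2 * sp.factorial(n))) * X**n
               for n in range(2, N, 2))
lhs = sp.series(sp.exp(expo), X, 0, N).removeO()

# right-hand side of (6.12): (e^(X/2) - e^(-X/2)) / X
rhs = sp.series((sp.exp(X / 2) - sp.exp(-X / 2)) / X, X, 0, N).removeO()

print(sp.expand(lhs - rhs) == 0)  # expected: True
```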

Lemma 6.21

Let \(B_m\) be a family of bimoulds which are \(\diamond \)-symmetril for all \(m\geqslant 1\). Then the bimould \(C_M\) defined by

$$\begin{aligned} C_M\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} = \sum _{\begin{array}{c} 1 \leqslant j \leqslant r\\ 0 = r_0< r_1< \dots< r_{j-1} < r_j = r\\ M> m_1> \dots> m_j > 0 \end{array}} \prod _{i=1}^j B_{m_i} \genfrac(){}{}{X_{r_{i-1}+1},\ldots ,X_{r_i}}{Y_{r_{i-1}+1},\ldots ,Y_{r_i}} \end{aligned}$$

is \(\diamond \)-symmetril for all \(M\geqslant 1\).

Proof

We have \(C_1\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots ,Y_r}=0\) for \(r\geqslant 1\) and \(C_1^{(0)}=1\), so \(C_1\) is a \(\diamond \)-symmetril bimould. Moreover, a direct calculation gives \(C_{M+1} = B_M\times C_M\). Therefore, induction on M and Proposition 3.9 yield the \(\diamond \)-symmetrility of the bimould \(C_{M}\) for all \(M\geqslant 1\). \(\square \)

Proposition 6.22

The bimould \(\mathfrak {g}^{*}\) is symmetril.

Proof

Choosing \(B_m = \mathfrak {L}_m\) in Lemma 6.21 and taking the limit \(M\rightarrow \infty \) gives the bimould \(\mathfrak {g}^{*}\). By Lemma 6.20 the bimoulds \(\mathfrak {L}_m\) are symmetril for all \(m\geqslant 1\), thus we obtain that \(\mathfrak {g}^{*}\) is symmetril. \(\square \)

Remark 6.23

The bimould \(\mathfrak {g}^{*}\) can be seen as a variant of the bimould \(\mathfrak {g}\) which is symmetril instead of \(\hat{\diamond }\)-symmetril. It should be remarked that this correction is completely different from the one obtained by using the maps \(\log \) and \(\exp \) from [22] and [23], which enable one to switch between different quasi-shuffle products over the same alphabet. This other approach is illustrated in [1, Remark 6.6].

6.4 Swap invariance

Lemma 6.24

The bimould \(\tilde{\mathfrak {b}}\) satisfies

$$\begin{aligned} \tilde{\mathfrak {b}}\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} = \tilde{\mathfrak {b}}\genfrac(){}{}{-Y_1-\dots -Y_r,-Y_1-\dots -Y_{r-1},\dots ,-Y_1-Y_2,-Y_1}{-X_r, -X_{r-1}+X_r,\dots ,-X_2+X_3,-X_1+X_2}\,, \end{aligned}$$

i.e. it is nearly swap invariant up to some additional signs.

Proof

Using the swap invariance of \(\mathfrak {b}\) (Corollary 4.7) we get

$$\begin{aligned}&\tilde{\mathfrak {b}}\genfrac(){}{}{Y_1+\dots +Y_r,\dots ,Y_1+Y_2,Y_1}{X_r, X_{r-1}-X_r,\dots ,X_1-X_2} = \sum _{i=0}^r \frac{(-1)^i}{2^i i!} \mathfrak {b}\genfrac(){}{}{Y_1+\dots +Y_{r-i},\ldots ,Y_1}{-X_r,\ldots , -X_{i+1} + X_{i+2}} \\&\quad = \sum _{i=0}^r \frac{(-1)^i}{2^i i!} \mathfrak {b}\genfrac(){}{}{-X_{i+1},\ldots ,-X_r}{Y_1,\ldots , Y_{r-i}} = \tilde{\mathfrak {b}}\genfrac(){}{}{-X_1,\dots ,-X_r}{-Y_1,\dots ,-Y_r}\,. \end{aligned}$$

\(\square \)

Definition 6.25

For \(j\geqslant 0\) we define the bimould \(\mathfrak {G}_j=(\mathfrak {G}^{(r)}_j)_{r\geqslant 0}\) as follows. In the case \(j=0\) we set \(\mathfrak {G}_0=\mathfrak {b}\) and in general \(\mathfrak {G}^{(r)}_j=0\) for \(j>r\). If \(1\leqslant j \leqslant r\) we define

$$\begin{aligned} \mathfrak {G}_j\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} = \sum _{\begin{array}{c} 0 = r_0< r_1< \dots < r_j \leqslant r\\ m_1>\dots>m_j>0 \end{array}} \prod _{i=1}^j \mathfrak {L}_{m_i}\genfrac(){}{}{X_{r_{i-1}+1}, \dots , X_{r_i}}{Y_{r_{i-1}+1}, \dots , Y_{r_i}} \mathfrak {b}\genfrac(){}{}{X_{{r_j}+1},\dots ,X_r}{Y_{{r_j}+1},\dots ,Y_r}\,. \end{aligned}$$

Notice that we have \(\mathfrak {G}^{(r)}_r=\mathfrak {g}^{(r)}\) for any \(r\geqslant 1\), i.e. the \(\mathfrak {G}_j\) can be seen as an interpolation between the swap invariant bimoulds \(\mathfrak {b}\) and \(\mathfrak {g}\). Using the swap invariance of \(\mathfrak {b}\) and \(\mathfrak {g}\) we get the following more general result.

Theorem 6.26

The bimould \(\mathfrak {G}_j\) is swap invariant for any \(j\geqslant 0\).

Proof

Since \(\mathfrak {G}_0=\mathfrak {b}\) we can assume \(1\leqslant j \leqslant r\) in the following. For \(1\leqslant a\leqslant b\leqslant r\) we use the notation \(X_{a}^b:=X_a - X_b\) and \(Y_a^b:= Y_a + \dots + Y_b\). The \(\mathfrak {L}_{m_i}\) can then be written as

$$\begin{aligned} \mathfrak {L}_{m_i}\genfrac(){}{}{X_{r_{i-1}+1}, \dots , X_{r_i}}{Y_{r_{i-1}+1}, \dots , Y_{r_i}} = \sum _{r_{i-1} < n_i \leqslant r_i} \mathfrak {b}\genfrac(){}{}{X_{r_{i-1}+1}^{n_i},\ldots ,X_{n_i-1}^{n_i}}{Y_{r_{i-1}+1},\ldots ,Y_{n_i-1}}L_{m_i}\genfrac(){}{}{X_{n_i}}{Y_{r_{i-1}+1}^{r_i}} \tilde{\mathfrak {b}}\genfrac(){}{}{X_{r_i}^{n_i},\ldots , X_{n_i+1}^{n_i}}{Y_{r_i},\ldots ,Y_{n_i+1}}\,. \end{aligned}$$

By the definition of the bimould \(\mathfrak {g}\) as a sum over the \(L_m\) from (5.1), we therefore obtain

$$\begin{aligned} \mathfrak {G}_j\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} = \sum _{0 = r_0< n_1 \leqslant r_1< \dots < n_j \leqslant r_j \leqslant r} C^{r_1,\dots ,r_j}_{n_1,\dots ,n_j}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} \mathfrak {g}\genfrac(){}{}{X_{n_1}, \dots , X_{n_j}}{Y_{1}^{r_1}, \dots , Y_{r_{j-1}+1}^{r_j}} \,,\nonumber \\ \end{aligned}$$
(6.13)

where

$$\begin{aligned} C^{r_1,\dots ,r_j}_{n_1,\dots ,n_j}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} = \prod _{i=1}^j \mathfrak {b}\genfrac(){}{}{X_{r_{i-1}+1}^{n_i},\ldots ,X_{n_i-1}^{n_i}}{Y_{r_{i-1}+1},\ldots ,Y_{n_i-1}} \tilde{\mathfrak {b}}\genfrac(){}{}{X_{r_i}^{n_i},\ldots , X_{n_i+1}^{n_i}}{Y_{r_i},\ldots ,Y_{n_i+1}}\mathfrak {b}\genfrac(){}{}{X_{{r_j}+1},\dots ,X_r}{Y_{{r_j}+1},\dots ,Y_r}. \end{aligned}$$
(6.14)

We want to check that \(\mathfrak {G}_j\) satisfies (3.8), i.e. that it is invariant under the simultaneous change of variables \(X_i \rightarrow Y_1+\dots +Y_{r-i+1}=Y_1^{r-i+1}\) and \(Y_i \rightarrow X_{r-i+1}-X_{r-i+2} = X_{r-i+1}^{r-i+2}\) for \(i=1,\dots ,r\) (here \(X_{r+1}:=0\)), which implies \(X_a^b \rightarrow Y_{r-b+2}^{r-a+1}\) and \(Y_a^b \rightarrow X_{r-b+1}^{r-a+2}\). Applying this change of variables we get

$$\begin{aligned} \mathfrak {G}_j\genfrac(){}{}{Y_1^r,\ldots ,Y_1^2,Y_1^1}{X_r^{r+1},X_{r-1}^r,\ldots , X_1^2} = \sum _{0 = r_0< n_1 \leqslant r_1< \dots < n_j \leqslant r_j \leqslant r}C^{r_1,\dots ,r_j}_{n_1,\dots ,n_j}\genfrac(){}{}{Y_1^r,\ldots ,Y_1^2,Y_1^1}{X_r^{r+1},X_{r-1}^r,\ldots , X_1^2} \mathfrak {g}\genfrac(){}{}{Y_1^{r-n_1+1}, \dots , Y_1^{r-n_j+1}}{X_{r-r_1+1}^{r+1}, \dots , X_{r-r_{j}+1}^{r-r_{j-1}+1}} \,. \end{aligned}$$
(6.15)

Using the swap invariance of \(\mathfrak {g}\) together with the change of summation variables \(n'_i:= r - r_{j-i+1}+1\), \(r'_i:= r - n_{j-i+1} +1\) we see, after noticing that the sum over \(0< n_1 \leqslant r_1< \dots < n_j \leqslant r_j \leqslant r\) is the same as the sum over \(0< n'_1 \leqslant r'_1< \dots < n'_j \leqslant r'_j \leqslant r\), that

$$\begin{aligned} \mathfrak {G}_j\genfrac(){}{}{Y_1^r,\ldots ,Y_1^2,Y_1^1}{X_r^{r+1},X_{r-1}^r,\ldots , X_1^2} = \sum _{0 = r'_0< n'_1 \leqslant r'_1< \dots < n'_j \leqslant r'_j \leqslant r} C^{r_1,\dots ,r_j}_{n_1,\dots ,n_j}\genfrac(){}{}{Y_1^r,\ldots ,Y_1^2,Y_1^1}{X_r^{r+1},X_{r-1}^r,\ldots , X_1^2} \mathfrak {g}\genfrac(){}{}{X_{n'_1}, \dots , X_{n'_j}}{Y_{1}^{r'_1}, \dots , Y_{r'_{j-1}+1}^{r'_j}}\,. \end{aligned}$$

It remains to show that

$$\begin{aligned} C^{r_1,\dots ,r_j}_{n_1,\dots ,n_j}\genfrac(){}{}{Y_1^r,\ldots ,Y_1^2,Y_1^1}{X_r^{r+1},X_{r-1}^r,\ldots , X_1^2} = C^{r'_1,\dots ,r'_j}_{n'_1,\dots ,n'_j}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r}\,, \end{aligned}$$

but this follows by using the swap invariance of \(\mathfrak {b}\) (Corollary 4.7) and the negative swap invariance for \(\tilde{\mathfrak {b}}\) (Lemma 6.24) in (6.14) together with reversing the product, i.e. changing \(i\rightarrow j-i\). The factor outside the product then becomes the first factor in the product and the former first factor gives the factor outside the product. \(\square \)

Proposition 6.27

For \(r\geqslant 1\) we have

$$\begin{aligned} \mathfrak {G}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} = \sum _{j=0}^r \mathfrak {G}_j\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r}. \end{aligned}$$

Proof

This follows directly from the definitions of \(\mathfrak {g}^{*}\), \(\mathfrak {G}\), and \(\mathfrak {G}_j\) (Definitions 6.3, 6.4 and 6.25). \(\square \)

Remark 6.28

In [9] it was shown that the Fourier expansion of the multiple Eisenstein series \(\mathbb {G}\) can be described by using the Goncharov coproduct ( [21]). The explicit calculation of this coproduct has strong similarities with (6.13). Also Example 6.6 (iv) shows that there might be a connection between our construction and the Goncharov coproduct. In particular, one might expect, in accordance with the results in [9], that \(G(k_1,\dots ,k_r)\) for \(k_1,\dots ,k_r \geqslant 2\) is given by the corresponding convolution product of the coefficient maps \(\varphi _\mathfrak {b}\) and \(\varphi _\mathfrak {g}\). A natural question is then whether the formula (6.13) can be interpreted as the depth j part of some convolution product with respect to some coproduct in this ‘bi-setup’, which might be a natural generalization of the Goncharov coproduct.

6.5 Derivatives

Taking the derivative in (6.2) gives \( q \frac{\textrm{d}}{\textrm{d}q} G\genfrac(){}{}{k}{d} = k G\genfrac(){}{}{k+1}{d+1}\), which is a special case of the following formula in arbitrary depths.

Proposition 6.29

For \(k_1,\dots ,k_r \geqslant 1\) and \(d_1,\dots ,d_r\geqslant 0\) we have

$$\begin{aligned} q \frac{\textrm{d}}{\textrm{d}q} G\genfrac(){}{}{k_1,\dots ,k_r}{d_1,\dots ,d_r} = \sum _{i=1}^r k_i G\genfrac(){}{}{k_1,\dots ,k_i+1,\dots ,k_r}{d_1,\dots ,d_i+1,\dots ,d_r}\,. \end{aligned}$$
(6.16)

Proof

First notice that (6.16) is equivalent to

$$\begin{aligned} q \frac{d}{dq} \mathfrak {G}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} = \sum _{i=1}^r \frac{\partial }{\partial X_i}\frac{\partial }{\partial Y_i} \mathfrak {G}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r}\,. \end{aligned}$$
(6.17)

Since \( q \frac{d}{{d}q} L_m\genfrac(){}{}{X}{Y} = \frac{\partial }{\partial X}\frac{\partial }{\partial Y}L_m\genfrac(){}{}{X}{Y}\), equation (6.17) is also satisfied by \(\mathfrak {g}\) (see [1, Proposition 4.2]). By (6.13) we then see that (6.17) is already satisfied by each \(\mathfrak {G}_j\), since

$$\begin{aligned} q \frac{\textrm{d}}{\textrm{d}q} \mathfrak {G}_j\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r}&= \sum _{0 = r_0< n_1 \leqslant r_1< \dots< n_j \leqslant r_j \leqslant r} C^{r_1,\dots ,r_j}_{n_1,\dots ,n_j}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} q \frac{\textrm{d}}{\textrm{d}q} \mathfrak {g}\genfrac(){}{}{X_{n_1}, \dots , X_{n_j}}{Y_{1}^{r_1}, \dots , Y_{r_{j-1}+1}^{r_j}} \\&= \sum _{0 = r_0< n_1 \leqslant r_1< \dots< n_j \leqslant r_j \leqslant r} C^{r_1,\dots ,r_j}_{n_1,\dots ,n_j}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} \sum _{i=1}^j \frac{\partial }{\partial X_{n_i}}\frac{\partial }{\partial Y_{n_i}} \mathfrak {g}\genfrac(){}{}{X_{n_1}, \dots , X_{n_j}}{Y_{1}^{r_1}, \dots , Y_{r_{j-1}+1}^{r_j}} \\&= \sum _{i=1}^r \frac{\partial }{\partial X_i}\frac{\partial }{\partial Y_i}\sum _{0 = r_0< n_1 \leqslant r_1< \dots < n_j \leqslant r_j \leqslant r} C^{r_1,\dots ,r_j}_{n_1,\dots ,n_j}\genfrac(){}{}{X_1,\ldots ,X_r}{Y_1,\ldots , Y_r} \mathfrak {g}\genfrac(){}{}{X_{n_1}, \dots , X_{n_j}}{Y_{1}^{r_1}, \dots , Y_{r_{j-1}+1}^{r_j}} \,. \end{aligned}$$

In the last equality we used that (by Definition 4.6) both \(\mathfrak {b}\) and \(\tilde{\mathfrak {b}}\), and therefore \(C^{r_1,\dots ,r_j}_{n_1,\dots ,n_j}\), vanish under any \(\frac{\partial }{\partial X_i}\frac{\partial }{\partial Y_i}\) and that the \(\mathfrak {g}\) terms are independent of \(X_{i}\) if \(i \not \in \{n_1,\dots ,n_j\}\), so they vanish under \(\frac{\partial }{\partial X_i}\frac{\partial }{\partial Y_i}\) in these cases. Since \(\mathfrak {G}\) is the sum of the \(\mathfrak {G}_j\) (Proposition 6.27), we obtain the formula (6.17). \(\square \)

Proposition 6.30

For \(k_1,\dots ,k_r \geqslant 1\) we have

$$\begin{aligned} \begin{aligned} q \frac{\textrm{d}}{\textrm{d}q} G(k_1,\dots ,k_r) =&\,G(2) G(k_1,\dots ,k_r) - \sum _{\begin{array}{c} 1 \leqslant j \leqslant r\\ a+b = k_j+2 \end{array}} (a-1) G(k_1,\dots ,k_{j-1},a,b,k_{j+1},\dots ,k_r) \\&- \sum _{\begin{array}{c} 1 \leqslant i < j \leqslant r\\ a+b = k_j+1 \end{array}} k_i G(k_1,\dots ,k_i+1,\dots ,k_{j-1},a,b,k_{j+1},\dots ,k_r)\\&- \sum _{\begin{array}{c} 1 \leqslant i \leqslant r \end{array}} k_i G(k_1,\dots ,k_i+1,\dots ,k_r,1) - G(k_1,\dots ,k_r,2)\,. \end{aligned} \end{aligned}$$
(6.18)

In particular, the space \(\mathcal {G}\) is closed under \(q \frac{d}{dq}\).

Proof

Since \(\mathfrak {G}\) is symmetril, we have

$$\begin{aligned} \mathfrak {G}\genfrac(){}{}{X}{Y} \mathfrak {G}\genfrac(){}{}{X_1,\dots ,X_r}{Y_1,\dots ,Y_r} =&\sum _{j=0}^r \mathfrak {G}\genfrac(){}{}{X_1,\dots ,X_{j},X,X_{j+1},\dots ,X_r}{Y_1,\dots ,Y_{j},Y,Y_{j+1},\dots ,Y_r}\\&+ \sum _{j=1}^r \frac{1}{X-X_j} \left( \mathfrak {G}\genfrac(){}{}{X_1,\dots ,X,\dots ,X_r}{Y_1,\dots ,Y + Y_j,\dots ,Y_r} - \mathfrak {G}\genfrac(){}{}{X_1,\dots ,X_j,\dots ,X_r}{Y_1,\dots ,Y + Y_j,\dots ,Y_r} \right) \,. \end{aligned}$$

Sending all \(X_j \rightarrow X\), taking the derivative with respect to Y and setting \(Y=0\) gives by Proposition  6.29

$$\begin{aligned} q \frac{\textrm{d}}{\textrm{d}q} \mathfrak {G}\genfrac(){}{}{X,\dots ,X}{Y_1,\dots ,Y_r} = \frac{\partial }{\partial Y} \left( \mathfrak {G}\genfrac(){}{}{X}{Y} \mathfrak {G}\genfrac(){}{}{X,\dots ,X}{Y_1,\dots ,Y_r} - \sum _{j=0}^r\mathfrak {G}\genfrac(){}{}{X,\dots ,X}{Y_1,\dots ,Y_j,Y,Y_{j+1},\dots ,Y_r} \right) _{|Y=0}\,. \end{aligned}$$

Using the swap invariance and renaming the variables we obtain

$$\begin{aligned} q\frac{\textrm{d}}{\textrm{d}q} \mathfrak {G}\genfrac(){}{}{X_1,\dots ,X_r}{Y,0,\dots ,0}&= \frac{\partial }{\partial X} \left( \mathfrak {G}\genfrac(){}{}{X}{Y} \mathfrak {G}\genfrac(){}{}{X_1,\dots ,X_r}{Y,0,\dots ,0} \right) _{|X=0}\\&\quad - \frac{\partial }{\partial X} \left( \sum _{j=1}^{r+1} \mathfrak {G}\genfrac(){}{}{X_1+X,\dots ,X_{j}+X,X_{j},X_{j+1},\dots ,X_r}{Y,0,\dots ,0} \right) _{|X=0}\,, \end{aligned}$$

where in the last sum we set \(X_{r+1}:=0\). The case \(Y=0\) yields the result by calculating the coefficients of the right-hand side. \(\square \)

With the interpretation of G as an algebra homomorphism from \(({\mathfrak {H}}^1,*)\) to \(\mathcal {G}\) in (6.6), we can give the following reinterpretation and consequence of Proposition 6.30.

Corollary 6.31

  1. (i)

    For \(w\in {\mathfrak {H}}^1\) the derivative of G(w) is given by

    $$\begin{aligned} q \frac{d}{dq} G(w) = G\big ( z_2 * w - z_2 \shuffle w \big )\,, \end{aligned}$$

    where \(\shuffle \) denotes the shuffle product on \({\mathfrak {H}}^1\) (with \(z_k = x^{k-1}y\)).

  2. (ii)

    Let \(h:{\mathfrak {H}}^1 \rightarrow {\mathfrak {H}}^1\) be the linear map defined on the generators by

    $$\begin{aligned} h(z_{k_1}\dots z_{k_r}) = z_2 * z_{k_1}\dots z_{k_r} - z_2 \shuffle z_{k_1}\dots z_{k_r}\,. \end{aligned}$$

    Then for any \(v,w \in {\mathfrak {H}}^1\) we have

    $$\begin{aligned} h(w *v) - h(w) *v - w *h(v) \in \ker G\,. \end{aligned}$$

Proof

The equation in (i) is a direct consequence of Proposition 6.30 since the sums on the right-hand side of (6.18) correspond exactly to those indices which appear in the shuffle product of \(z_2 = xy\) with \(z_{k_1}\dots z_{k_r}=x^{k_1-1}y \dots x^{k_r-1}y\). For (ii) we use that G is an algebra homomorphism and \(q \frac{d}{dq}\) is a derivation on \(\mathbb {Q}\llbracket q \rrbracket \). By (i) we have \( q \frac{d}{dq} G(w)=G(h(w))\) and therefore get \(G( h(w *v) - h(w) *v - w *h(v)) =0\). \(\square \)

Notice that the map h is not a derivation on \(({\mathfrak {H}}^1,*)\), i.e. the relations we obtain among the combinatorial multiple Eisenstein series from Corollary 6.31 (ii) are non-trivial. For example, for \(v=w=z_1\) we get \(G(4) = 2\,G(2,2) - 2\,G(3,1)\). This is the first relation, and the only one in weight 4, among combinatorial multiple Eisenstein series, since the q-series \(g(k_1,\dots ,k_r)\) do not satisfy relations in lower weights (see [6, (1.9)]). It would be interesting to see if one can describe \(h(w *v) - h(w) *v - w *h(v)\) explicitly for arbitrary \(w,v \in {\mathfrak {H}}^1\) and if this can be used to obtain relations among combinatorial multiple Eisenstein series similar to Corollary 6.31 (ii).
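
With h as in Corollary 6.31 (ii), the weight 4 relation above can be reproduced by a short computation with words. The following sketch implements the stuffle product on index tuples and the shuffle product on words in x, y (as an illustration; the encoding of words as tuples and strings is our own choice) and computes \(h(w *v) - h(w) *v - w *h(v)\) for \(v=w=z_1\):

```python
# Linear combinations of words z_{k_1}...z_{k_r} are dicts {(k_1,...,k_r): coefficient}.

def add(a, b, c=1):
    """In place: a += c*b for linear combinations."""
    for w, x in b.items():
        a[w] = a.get(w, 0) + c * x
    return a

def stuffle(u, v):
    """Quasi-shuffle (harmonic) product with z_i <> z_j = z_{i+j}."""
    if not u: return {v: 1}
    if not v: return {u: 1}
    out = {}
    add(out, {(u[0],) + w: c for w, c in stuffle(u[1:], v).items()})
    add(out, {(v[0],) + w: c for w, c in stuffle(u, v[1:]).items()})
    add(out, {(u[0] + v[0],) + w: c for w, c in stuffle(u[1:], v[1:]).items()})
    return out

def shuffle_xy(u, v):
    """Shuffle product of words in the letters x, y (as strings)."""
    if not u: return {v: 1}
    if not v: return {u: 1}
    out = {}
    add(out, {u[0] + w: c for w, c in shuffle_xy(u[1:], v).items()})
    add(out, {v[0] + w: c for w, c in shuffle_xy(u, v[1:]).items()})
    return out

def to_xy(k):    # z_{k_1}...z_{k_r}  ->  x^{k_1-1} y ... x^{k_r-1} y
    return ''.join('x' * (ki - 1) + 'y' for ki in k)

def to_z(word):  # inverse of to_xy (all words occurring here end in y)
    return tuple(len(block) + 1 for block in word.split('y')[:-1])

def shuffle(u, v):
    out = {}
    for w, c in shuffle_xy(to_xy(u), to_xy(v)).items():
        add(out, {to_z(w): c})
    return out

def h(w):        # h(w) = z_2 * w - z_2 sh w, cf. Corollary 6.31 (ii)
    return add(dict(stuffle((2,), w)), shuffle((2,), w), -1)

def lin(f, comb):  # linear extension of a map defined on words
    out = {}
    for w, c in comb.items():
        add(out, f(w), c)
    return out

w = v = (1,)
expr = lin(h, stuffle(w, v))
add(expr, lin(lambda u: stuffle(u, v), h(w)), -1)
add(expr, lin(lambda u: stuffle(w, u), h(v)), -1)
print({word: c for word, c in expr.items() if c})
# expected (up to ordering): {(2, 2): 2, (3, 1): -2, (4,): -1},
# i.e. the relation G(4) = 2 G(2,2) - 2 G(3,1)
```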

Remark 6.32

In [19] the Alekseev-Torossian associator, whose coefficients satisfy the extended double shuffle equations, is computed. It turns out that in depth 1 it also satisfies the additional conditions given in (4.1) (compare [19, Example 4.1]). In general, the coefficients of the AT associator are not rational, but replacing the rational solution \(\mathfrak {b}\) to the extended double shuffle equations by the AT associator gives another family of q-series whose generating series are symmetril and swap invariant.