A tale of two shuffle algebras

As a quantum affinization, the quantum toroidal algebra is defined in terms of its "left" and "right" halves, which both admit shuffle algebra presentations. In the present paper, we take an orthogonal viewpoint, and give shuffle algebra presentations for the "top" and "bottom" halves instead, starting from the evaluation representation of the quantum affine group and its usual R-matrix. An upshot of this construction is a new topological coproduct on the quantum toroidal algebra, which extends the Drinfeld-Jimbo coproduct on the horizontal quantum affine subalgebra.

1. Introduction

1.1. The affine quantum group U_q(ṡl_n) = U_q(ŝl_n) (hats will be replaced by dots in the present paper) has the following two presentations:

• it is the quantum affinization of U_q(sl_n)
• it is the Drinfeld-Jimbo quantum group whose Dynkin diagram is an n-cycle

However, the two presentations above yield different bialgebra structures on U_q(ṡl_n), which is evidenced by the fact that the coproduct in the first bullet is only topological (i.e. ∆ is an infinite sum, which only makes sense in a certain completion). Moreover, the two bullets above yield different triangular decompositions of U_q(ṡl_n) into positive, Cartan, and negative halves:

(1.1) U_q(ṡl_n) ≅ U_q^→(ṡl_n) ⊗ (Cartan subalgebra) ⊗ U_q^←(ṡl_n)

(1.2) U_q(ṡl_n) ≅ U_q^↑(ṡl_n) ⊗ (Cartan subalgebra) ⊗ U_q^↓(ṡl_n)

The two decompositions above are quite different: the positive subalgebra U_q^→(ṡl_n) of (1.1) is generated by Drinfeld's elements e_{i,k} over all 1 ≤ i < n and k ∈ Z, while the positive subalgebra U_q^↑(ṡl_n) of (1.2) is generated by the Drinfeld-Jimbo elements {e_i}_{i∈Z/nZ}. The connection between these two presentations was given in [1].
1.2. Our U_{q,q}^←(gl_n) and U_{q,q}^→(gl_n) are the Borel subalgebras of the quantum toroidal algebra, which decomposes as:

(1.3) U_{q,q}(gl_n) ≅ U_{q,q}^←(gl_n) ⊗ U_{q,q}^→(gl_n)

and they explicitly arise as a unipotent part tensored with a diagonal part, see (1.4) and (1.5). The usual topological coproduct of U_{q,q}(gl_n) preserves the subalgebras (1.4) and (1.5), and extends the usual coproduct on the subalgebra U_q(ġl_1)^n ⊂ U_{q,q}(gl_n). The main goal of the present paper is to define another decomposition into subalgebras:

(1.6) U_{q,q}(gl_n) ≅ U_{q,q}^↑(gl_n) ⊗ U_{q,q}^↓(gl_n)

We will explicitly construct the tensor factors of (1.6) in (1.7) and (1.8), in terms of the horizontal subalgebra:

(1.9) U_q(ġl_n) ≅ U_q^≥(ġl_n) ⊗ U_q^≤(ġl_n)

which will be the quantum group in the RTT presentation ([8]). Moreover, we endow U_{q,q}(gl_n) with a new topological coproduct which preserves the subalgebras (1.7), (1.8), and extends the usual (Drinfeld-Jimbo) coproduct on U_q(ġl_n) ⊂ U_{q,q}(gl_n).
To represent all these decompositions pictorially, we will recall that the quantum toroidal algebra is graded by Z^n × Z, where Z^n is the root lattice of U_q(ṡl_n) and Z is the affinization direction. Then the following picture indicates the various subalgebras of U_{q,q}(gl_n), by displaying which degrees they live in:

[Figure: the grading of U_{q,q}(gl_n) and its various subalgebras]

In the particular case n = 1, the quantum toroidal algebra is isomorphic to the well-known Ding-Iohara-Miki algebra ([4], [17]), a.k.a. the elliptic Hall algebra ([2], [25]), which has an action of SL_2(Z) by automorphisms. In particular, the decomposition (1.6) is obtained from the decomposition (1.3) by applying the automorphism corresponding to rotation by 90 degrees. However, for general n, the algebras featuring in the two decompositions are not isomorphic to each other, which is sensible given the fact that the grading axes Z^n and Z are quite different.
1.3. To describe the subalgebras U_{q,q}^←(gl_n) and U_{q,q}^→(gl_n) of (1.3), let us consider the vector space:

(1.10) S^+ ⊂ ⊕_{(d_1,...,d_n)∈N^n} Q(q, q^{1/n})(..., z_{i1}, ..., z_{i,d_i}, ...)^{Sym}

of rational functions which satisfy the wheel conditions (as in [9], [11]): namely, such rational functions have at most simple poles at z_{ia} q^2 = z_{i+1,b} (for all i, a, b), and the residue at such a pole is divisible by z_{ia'} − z_{i+1,b} and z_{ia} − z_{i+1,b'} for all a' ≠ a and b' ≠ b. The vector space (1.10) is called a shuffle algebra, akin to the classical construction of Feigin and Odesskii concerning certain elliptic algebras. Explicitly, the product on (1.10) is constructed using the function (3.70), see Definition 3.20. It was shown in [6] that there is an algebra homomorphism:

U_{q,q}^→(gl_n) → S^+

which was shown to be an isomorphism in [21]. Similarly, U_{q,q}^←(gl_n) ≅ S^− = (S^+)^{op}.
To describe the subalgebras U_{q,q}^↑(gl_n) and U_{q,q}^↓(gl_n) which appear in (1.6), we will introduce a new kind of shuffle algebra (let V be an n-dimensional vector space):

(1.11) A^+ ⊂ ⊕_{k∈N} End(V^{⊗k}) ⊗ Q(q, q^{1/n})(z_1, ..., z_k)

where the algebra structure on the RHS is constructed using the R-matrix (4.1), see Propositions 4.5 and 4.6. By definition, the subspace (1.11) precisely consists of End(V^{⊗k})-valued rational functions which have at most simple poles at z_a q^2 = z_b (for all a, b), and whose residue at such a pole satisfies the conditions outlined in Definition 4.8. The subalgebra A^− is defined similarly, but with q^{-1}q^{-n} instead of q.
Theorem 1.4. There exist injective algebra homomorphisms (1.12) and (1.13), from A^+ and A^− into U_{q,q}(gl_n). Denoting the images of these maps by U_{q,q}^↑(gl_n) and U_{q,q}^↓(gl_n) yields the decomposition (1.6). Moreover, there exist topological coproducts on the subalgebras (1.12) and (1.13), which extend the Drinfeld-Jimbo coproduct on the horizontal subalgebra (1.9), and realize U_{q,q}(gl_n) as the Drinfeld double of its subalgebras (1.12) and (1.13). We refer the reader to Subsection 3.16 for a more explicit statement of the Theorem.
In upcoming work, we will show how the presentation of U ↑ q,q (gl n ) as a shuffle algebra naturally arises from the study of vertex operators for the quantum group (1.9) and can be used to study the K-theory of moduli spaces of parabolic sheaves.
We note that the authors of [12] have already realized U_{q,q}(gl_n) as a Drinfeld double by generators and relations, but there seem to be fundamental differences between the subalgebra called B in loc. cit. and our U_{q,q}^↑(gl_n) ⊂ U_{q,q}(gl_n). For example, in affinization degree 1, the former algebra has elements parametrized by the positive roots of U_q(ṡl_n), while the latter has elements parametrized by all roots of U_q(ṡl_n). Therefore, we do not make any claims concerning the connection of our work with loc. cit.
We emphasize the fact that U ↑ q,q (gl n ) is not the same as the "vertical subalgebra" that was studied in [10] and numerous other works. The latter construction has to do with U q (ṡl n ) presented as the affinization of U q (sl n ) and thus implicitly breaks the symmetry among the vertices of the cyclic quiver. Meanwhile, our construction takes the "horizontal subalgebra" U q (ġl n ) and its evaluation representation V = C n (z) as an input, and outputs half of the quantum toroidal algebra.
More generally, starting from a quantum group U_q(g) and a representation V endowed with a unitary R-matrix, one may ask if the double shuffle algebra:

(1.14) D = an appropriate subalgebra of ⊕_{k=0}^∞ End(V^{⊗k}) (defined as in Section 2)

is related to the quantum group U_q(ġ). Theorem 1.4 deals with the case g = ġl_n and V = C^n(z). If something along these lines is true in affine types other than A, we venture to speculate that the algebra (1.14) might be related to the extended Yangians of [28], appropriately q-deformed and doubled. Such a realization of quantum affinizations is to be expected from the works [16], [23], whose authors have realized Yangians and their q-deformations inside endomorphism rings of tensor products of certain geometrically defined representations of g.
1.5. The structure of the present paper is the following:

• In Section 2, we construct a shuffle algebra A^+ starting from a vector space V and a unitary R-matrix ∈ End(V^{⊗2}) (see also [19]). By adding certain elements, we construct the extended shuffle algebra A^+, which admits a coproduct. From two such extended shuffle algebras, we construct their Drinfeld double A.
• In Section 3, we recall the quantum group U q,q (gl n ) and its PBW presentation from [22]. This will allow us to construct the decomposition (1.6) as algebras.
• In Section 4, we construct a version of the shuffle algebra of Section 2 that corresponds to the R-matrix with spectral parameter (4.1), thus yielding (1.11).
• In Section 5, we construct the extended version of the shuffle algebra of Section 4, endow it with a topological coproduct, and construct a PBW basis of it.
• In Section 6, we construct a bialgebra pairing between two copies of the extended shuffle algebras of Section 5. The corresponding Drinfeld double will precisely match U q,q (gl n ), thus completing the proof of Theorem 1.4.
I would like to thank Pavel Etingof, Sachin Gautam, Victor Kac, Andrei Okounkov, and Alexander Tsymbaliuk for many valuable conversations, and all their help along the years. I gratefully acknowledge the NSF grants DMS-1600375, DMS-1760264 and DMS-1845034, as well as support from the Alfred P. Sloan Foundation.
1.6. Given a finite-dimensional vector space V, we will often write elements X ∈ End(V^{⊗k}) as X_{1...k} in order to point out the set of indices of X. If V = C^n, then:

(1.15) X = Σ_{i_1,...,i_k,j_1,...,j_k} X^{j_1...j_k}_{i_1...i_k} · E_{i_1 j_1} ⊗ ··· ⊗ E_{i_k j_k}

for certain coefficients X^{j_1...j_k}_{i_1...i_k}, where E_{ij} ∈ End(V) denotes the matrix with entry 1 on row i and column j, and 0 everywhere else. For any permutation σ ∈ S(k), we write σXσ^{-1} for the conjugation of X by the operator σ ∈ End(V^{⊗k}) which permutes the tensor factors (therefore, the effect of conjugating (1.15) by σ is to replace the indices i_1, ..., j_k by i_{σ(1)}, ..., j_{σ(k)}). Moreover, we will write:

(1.17) X = X_{1...i} ⊗ X_{i+1...k}

if we wish to set apart the first i tensor factors from the last k − i tensor factors of X. There is an implicit summation in the right-hand side of (1.17), which we will not write down, much like Sweedler notation. For any a ∈ N and any X ∈ End(V), we will write X_a ∈ End(V^{⊗k}) for the operator X applied in the a-th tensor factor (the number k ≥ a will always be clear from context). More generally, for any X ∈ End(V^{⊗k}) and any collection of distinct natural numbers a_1, ..., a_k, write:

X_{a_1...a_k} ∈ End(V^{⊗N})

(the number N ≥ a_1, ..., a_k will always be clear from context) for the image of X under the map End(V^{⊗k}) → End(V^{⊗N}) that sends the i-th factor of the domain to the a_i-th factor of the codomain, and acts as the identity in all factors ∉ {a_1, ..., a_k}.
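The notation X_{a_1...a_k} above can be made concrete in a few lines of numpy. The sketch below (the helper name `place_factors` is ours, not the paper's) implements the map End(V^{⊗k}) → End(V^{⊗N}) just described, for V = C^d:

```python
import numpy as np

def place_factors(X, positions, N, d):
    """Image of X in End((C^d)^{(x)k}) under the map End((C^d)^{(x)k}) -> End((C^d)^{(x)N})
    sending the i-th tensor factor of the domain to the positions[i]-th factor
    of the codomain (1-indexed), acting as the identity in all other factors."""
    k = len(positions)
    # first let X act on factors 1..k, and the identity on the remaining N-k factors
    Y = np.kron(X, np.eye(d ** (N - k)))
    # then permute tensor factors so that factor i lands at positions[i]
    used = [p - 1 for p in positions]
    rest = [p for p in range(N) if p not in used]
    dest = used + rest
    T = Y.reshape([d] * (2 * N))
    # row axes 0..N-1 and column axes N..2N-1 are permuted in the same way
    T = np.moveaxis(T, list(range(2 * N)), dest + [N + p for p in dest])
    return T.reshape(d ** N, d ** N)

E12 = np.array([[0., 1.], [0., 0.]])
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [5., 7.]])

# sanity check: (E12)_2 on V^{(x)2} is Id (x) E12
assert np.allclose(place_factors(E12, [2], 2, 2), np.kron(np.eye(2), E12))
# (A (x) B)_{31} = A_3 B_1 = B (x) Id (x) A on V^{(x)3}
lhs = place_factors(np.kron(A, B), [3, 1], 3, 2)
rhs = place_factors(B, [1], 3, 2) @ place_factors(A, [3], 3, 2)
assert np.allclose(lhs, rhs)
print("factor placement checks passed")
```

The second assertion illustrates the defining property of the notation: placing a pure tensor A ⊗ B into factors (3, 1) is the same as composing the separate placements of A and B.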

2. Shuffle algebras and R-matrices
2.1. The main goal of the present Section is to study shuffle algebras associated to the data contained in the four bullets below:

• a vector space V, assumed finite-dimensional for simplicity
• an element (R-matrix) R ∈ Aut(V^{⊗2}) satisfying the Yang-Baxter equation:

(2.1) R_{12} R_{13} R_{23} = R_{23} R_{13} R_{12}

• an element R̄ ∈ Aut(V^{⊗2}) satisfying (2.2)-(2.3)
• a scalar f such that (2.4) holds

The present Section will be concerned with generalities in the context above, while Section 4 deals with a particular setting, namely that of the R-matrix with spectral parameter (4.1), together with R̄(x) = R_{21}(x^{-1} q^{-2}), for a parameter q. Many Propositions in the current Section have counterparts in Section 4, and we will only prove such statements once.
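For a concrete instance of condition (2.1), one can verify the Yang-Baxter equation numerically for the standard R-matrix of U_q(sl_2); this is a textbook example chosen for illustration, not the spectral-parameter R-matrix (4.1) used later in the paper:

```python
import numpy as np

q = 1.7  # a generic numeric value of the parameter
# standard R-matrix of U_q(sl_2) on C^2 (x) C^2,
# in the basis v1(x)v1, v1(x)v2, v2(x)v1, v2(x)v2
R = np.array([
    [q, 0,       0, 0],
    [0, 1, q - 1/q, 0],
    [0, 0,       1, 0],
    [0, 0,       0, q],
])

I2 = np.eye(2)
P = np.eye(4)[[0, 2, 1, 3]]                    # flip of the two tensor factors

R12 = np.kron(R, I2)                           # R in factors 1, 2 of V^{(x)3}
R23 = np.kron(I2, R)                           # R in factors 2, 3
R13 = np.kron(I2, P) @ R12 @ np.kron(I2, P)    # conjugate R12 by the flip of factors 2, 3

# the Yang-Baxter equation (2.1): R12 R13 R23 = R23 R13 R12
assert np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12)
print("Yang-Baxter equation holds")
```

Any numeric value of q works here, since the Yang-Baxter equation for this R-matrix is an identity of rational functions in q.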
2.2. We will represent the tensor product V^{⊗k} as k labeled dots on a vertical line, and certain elements of End(V^{⊗k}) will be represented as braids between two collections of labeled dots situated on parallel vertical lines. Specifically, the crossings below represent either the automorphism R or the automorphism R̄, with indices given by the labels of the strands (which are inherited from the labels of their endpoints). The strands are drawn either straight or squiggly, to indicate whether the picture in question refers to R or to R̄. Compositions are always read left-to-right; for example, the equivalence of braids pictured above underlies the Yang-Baxter relation (2.1). We will identify braids connected by Reidemeister III type moves such as this one.
2.3. We will now recall the construction of Section 5.2 of [19] (itself a dual version of the construction of [8]) and present it in the language of shuffle algebras. We will then construct an extended shuffle algebra which admits a coproduct and bialgebra pairing, and then define the corresponding Drinfeld double.
Proposition 2.4. For V, R, R̄ as in Subsection 2.1, the assignment (2.6)¹ yields an associative algebra structure on the vector space ⊕_{k=0}^∞ End(V^{⊗k}), with unit 1 ∈ End(V^{⊗0}). We will call (2.6) the "shuffle product".

¹ The meaning of the indexing sets in the three products of R's or R̄'s is that the factors in such a product are taken in any order so that (a_i, b_j) is to the left of (a_{i'}, b_{j'}) if i < i', and to the right of (a_{i'}, b_{j'}) if j < j'. The underbraces saying "only if a_i < b_j" or "only if a_i > b_j" under such a product mean that only those pairs of indices (a_i, b_j) satisfying the respective inequalities are included.
We note that the second line of (2.6) can be represented by the following braid, depicted here for k = 2, l = 2 and (a 1 , a 2 ) = (1, 3), (b 1 , b 2 ) = (2, 4): The proof of Proposition 2.4, namely that the multiplication defined above is associative, is a straightforward consequence of the following equivalence of braids: Indeed, in the top picture, one can pull the straight red strands to the left of the blue-green crossings, and the squiggly red strands below the blue-green crossings.
This procedure is simply a succession of the Reidemeister III moves of Figures 2, 3 and 4, which in the end produces the bottom picture.
2.5. The symmetrization of a tensor X ∈ End(V^{⊗k}) is defined by formula (2.8), where σXσ^{-1} refers to the permutation of the tensor factors of X in accordance with σ, and R_σ ∈ End(V^{⊗k}) is any braid connecting the i-th endpoint on the right with the σ(i)-th endpoint on the left.
Choosing one braid lift of σ over another amounts to the ambiguity of choosing R_{ab} over R̄^{-1}_{ba} for any crossing between the strands labeled a and b. Since (2.4) says that these two endomorphisms differ by a scalar, the ambiguity does not affect (2.8).
A tensor Y ∈ End(V^{⊗k}) is called symmetric if it satisfies (2.9). It is easy to see that any symmetrization (2.8) is a symmetric tensor.
Proposition 2.6. The shuffle product of Proposition 2.4 preserves the vector space A^+ ⊂ ⊕_{k=0}^∞ End(V^{⊗k}) consisting of symmetric tensors. We will call A^+ the shuffle algebra.
Since the symmetrization of any tensor is symmetric, this concludes the proof.
By analogy with formula (2.14), one has (2.15).

2.7. Let us fix a basis v_1, ..., v_n of V and write E_{ij} for the elementary matrix with a single 1 at the intersection of row i and column j, and 0 otherwise.
Definition 2.8. Consider the extended shuffle algebra A^+, obtained by adjoining new generators s_{ij}, t_{ij} to the shuffle algebra. In order to concisely state the relations, it makes sense to package the new generators s_{ij}, t_{ij} into generating functions S and T, and the required relations take the form (2.16)-(2.17), as well as (2.18)-(2.20). The latter formulas should be interpreted as identities in A^+ ⊗ End(V), where the latter copy of V is the one represented by the index 0.²

Proposition 2.9. The assignments displayed above make A^+ into a bialgebra, where for all X = X_{1...k} in the (non-extended) shuffle algebra with k ≥ 1, we set ε(X) = 0 and define ∆(X) by formula (2.23).

Proof. The facts that the counit extends to an algebra homomorphism, and that it interacts appropriately with the coproduct, are easy to see. One then checks by direct computation that the coproduct extends to an algebra homomorphism, for instance that ∆(S_1 S_2 R) = ∆(R S_2 S_1); an analogous argument shows that ∆(T_1 T_2 R) = ∆(R T_2 T_1), and a similar computation handles (2.18). Let us now apply the coproduct to the left-hand side of (2.19). In the next-to-last row of the resulting expression, we may apply (2.18) in order to move the T's to the right and the S's to the left. Afterwards, we apply (2.19) and (2.20) to move the S's to the left of A_{a_1...a_d} and the T's to the right of B_{b_1...}.
Finally, we may use (2.16) and (2.17) to move the outermost products of R_{a_i b_j} past the S's and the T's, at the cost of re-ordering the latter; the right-hand side is then simply ∆ applied to the RHS of (2.6), as we needed to prove.
2.10. Let us consider two copies of the extended shuffle algebra, denoted A^+ and A^−, defined as in the previous Subsections with respect to the same R, but with R̄ replaced as in (2.25), where End(V^{⊗k}) →^{†_s} End(V^{⊗k}) denotes the transposition of the s-th tensor factor. It is an elementary exercise to show that if properties (2.2)-(2.3) hold for R̄^+ = R̄, then they also hold for R̄^− given by formula (2.25). We will now define a pairing (2.26), which respects the bialgebra structure in the sense of properties (2.28) and (2.29). We will henceforth write X^± for the copy of X ∈ End(V^{⊗k}) in A^±. The analogous notation will apply to S^±, T^± ∈ A^± ⊗ End(V).
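The transposition of the s-th tensor factor appearing in (2.25) admits a short numpy sketch, assuming V = C^d (the helper name `partial_transpose` is ours):

```python
import numpy as np

def partial_transpose(X, s, k, d):
    """Transpose the s-th tensor factor (1-indexed) of X in End((C^d)^{(x)k}),
    i.e. the dagger-with-subscript-s operation of (2.25)."""
    T = X.reshape([d] * (2 * k))
    # axes 0..k-1 are row indices, axes k..2k-1 are column indices;
    # swapping the s-th row index with the s-th column index transposes that factor
    T = np.swapaxes(T, s - 1, k + s - 1)
    return T.reshape(d ** k, d ** k)

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [5., 7.]])

# transposing the only factor of a single copy is the usual transpose
assert np.allclose(partial_transpose(A, 1, 1, 2), A.T)
# on A (x) B, transposing the first factor gives A^T (x) B
assert np.allclose(partial_transpose(np.kron(A, B), 1, 2, 2), np.kron(A.T, B))
# the operation is an involution
X = np.kron(A, B)
assert np.allclose(partial_transpose(partial_transpose(X, 2, 2, 2), 2, 2, 2), X)
print("partial transpose checks passed")
```

The involution property in the last assertion is the elementary fact underlying the claim that properties (2.2)-(2.3) for R̄^+ transfer to R̄^−.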
Proposition 2.11. The assignments (2.30)-(2.31), together with the formula (2.32) for all X, Y ∈ End(V^{⊗k}), generate a bialgebra pairing.

Proof. The data provided are sufficient to completely define the pairing, in virtue of (2.28) and (2.29). What we need to check is that the defining relations of the extended shuffle algebras, namely (2.16)-(2.20) and the definition of the shuffle product in (2.6), are preserved by the pairing. For (2.16), one computes the pairing of S_2^+ S_1^+ R_{12} with S_3^− in two ways using (2.28), and checks that the answers agree. We leave the analogous formulas when (2.16) is replaced by (2.17), or when the roles of S and T are switched, or when the roles of the two arguments of the pairing are switched, as exercises to the interested reader. The relation (2.18) is treated by a similar computation; the analogous formulas when S_3^− is replaced by T_3^−, or when the roles of the arguments of the pairing are switched, are again left as exercises. To prove that (2.19) pairs correctly with elements of A^−, note that (2.23) computes ∆(X_{1...k}) modulo summands whose second tensor factor has a non-zero number of indices on either side of the ⊗ sign. Then we have the equality (2.33), where Tr_{V^{⊗k}} denotes the trace with respect to the indices 1, ..., k only (therefore the expression in question is valued in End(V), corresponding to the index 0). The equality between the two rows of (2.33) is proved as follows: because both sides are bilinear in the tensors X_{1...k} and Y_{1...k}, it suffices to prove that they are equal when X_{1...k} and Y_{1...k} are pure tensor products of elementary matrices E_{i_a j_a} and E_{i'_a j'_a}, for arbitrary i_a, j_a, i'_a, j'_a ∈ {1, ..., n}. In this case, the equality between the two rows of (2.33) is a straightforward exercise, which is performed by expanding S_a^± and T_a^± in terms of the elementary matrices E_{ij}, and using (2.29), (2.30), (2.31). Similarly, one can show an analogous trace formula, because ε(S_0) = Id. Because the trace is invariant under cyclic permutations, the right-hand side of the expression above equals the right-hand side of (2.33), thus showing that relation (2.19) is preserved by the pairing. The proof that (2.20) is preserved by the pairing is analogous, and left as an exercise to the interested reader.
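The partial trace Tr_{V^{⊗k}} appearing in (2.33), which traces over the indices 1, ..., k and leaves an operator in the index-0 copy of V, can likewise be sketched in numpy (the name `trace_out_first` is a hypothetical helper, and V = C^d is an assumption):

```python
import numpy as np
import string

def trace_out_first(X, k, d):
    """Trace out the first k tensor factors of X in End((C^d)^{(x)(k+1)}),
    leaving an element of End(C^d) in the last ("index 0") factor."""
    m = k + 1
    T = X.reshape([d] * (2 * m))
    row = string.ascii_lowercase[:m]                 # output indices
    col = list(string.ascii_lowercase[m:2 * m])      # input indices
    for a in range(k):
        col[a] = row[a]                              # contract factor a with itself
    spec = row + ''.join(col) + '->' + row[-1] + col[-1]
    return np.einsum(spec, T)

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [5., 7.]])

# tracing the first factor of A (x) B gives Tr(A) * B
assert np.allclose(trace_out_first(np.kron(A, B), 1, 2), np.trace(A) * B)
# tracing k = 2 factors of the identity on (C^2)^{(x)3} gives 2^2 times the identity
assert np.allclose(trace_out_first(np.eye(8), 2, 2), 4 * np.eye(2))
print("partial trace checks passed")
```

The first assertion is the basic multiplicativity that makes expressions like (2.33) bilinear in X and Y, which is exactly how the proof above reduces to elementary matrices.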
Before proving that the pairing (2.32) intertwines the shuffle product with the coproduct, let us show that the trace pairing is symmetric, in the sense of (2.34) and (2.35), for all symmetric tensors X, Y ∈ End(V^{⊗k}) and all tensors A, B ∈ End(V^{⊗k}). Indeed, (2.34) follows from the fact that for all σ ∈ S(k) the relevant traces agree, the key point being the conjugation invariance of the trace. Property (2.35) is proved likewise. As a consequence of (2.15) and (2.34), proving formula (2.28) for a = A^+, b = B^+ and c = Y^− boils down to the equality (2.36), which we will now prove. Expanding ∆(Y^−), we obtain an expression in which the ellipsis denotes summands whose second tensor factor has a non-zero number of indices on either side of the ⊗ sign. Because of this, formula (2.29) when one of b and c is either S^− or T^− (which we have already checked yields a consistent bialgebra pairing) implies that the RHS of (2.36) is precisely equal to the LHS of (2.36), as required. Similarly, proving (2.29) for a = X^+, b = A^−, c = B^− boils down to the equality (2.37), which is proved by analogy with (2.36), and we skip the details.
2.12. Proposition 2.11 allows us to define the Drinfeld double:

(2.38) A = A^+ ⊗ A^{−,op,coop}

such that A^+ ≅ A^+ ⊗ 1 and A^{−,op,coop} ≅ 1 ⊗ A^{−,op,coop} are sub-bialgebras of A, and the commutation of elements coming from the two factors is governed by (2.39), for all a ∈ A^+ and b ∈ A^{−,op,coop}.
Proposition 2.13. We have the formulas (2.40)-(2.41) in A, where ·^+ = · and ·^− = ·^{op} (the opposite multiplication in A); finally, we also have (2.42)-(2.43).

Proof. Let us prove the first formula in (2.40), and leave the second one and (2.41) as exercises for the interested reader. Formula (2.39) for a = S_2^+ and b = S_1^−, together with (2.30) to evaluate the pairing, implies precisely the first formula in (2.40).
Let us now prove (2.42) and leave the analogous formula (2.43) as an exercise. We will do so in the case ± = +, as ± = − just involves the opposite of all relations.
where the rightmost ellipsis stands for terms which have a non-zero number of indices on either side of the ⊗ sign, so they pair trivially with S^−. Meanwhile, ∆(S^−) is given by (2.46). Applying (2.39) for a = X^+_{1...k} and b = S^−_0 yields the claimed identity; then (2.44) follows by applying (2.39) for a = E^+_{ij} and b = E^−_{i'j'}.
3. Quantum toroidal gl_n

3.1. Fix n > 1 and let ṡl_n be the affine Kac-Moody Lie algebra of type A_{n−1}. The corresponding Drinfeld-Jimbo quantum group³ is defined as an associative unital algebra, modulo the fact that c is central, as well as relations which hold for all i, j ∈ Z/nZ and s, s' ∈ {1, ..., n}. We extend the notation ψ_s to all s ∈ Z by setting ψ_{s+n} = cψ_s. We also consider the q-deformed Heisenberg algebra, generated by a central element c together with the elements p_{±k} of (3.6), which satisfy a Heisenberg commutation relation. Then we will consider the algebra U_q(ġl_n) of (3.8), which serves as an affine q-version of the Lie algebra isomorphism gl_n ≅ sl_n ⊕ gl_1.

³ We note that the algebra defined below is slightly larger than the usual quantum group, since the Cartan part of the latter is generated by the ratios ψ_i ψ_j^{−1}, instead of ψ_1^{±1}, ..., ψ_n^{±1} themselves.

3.2. We can make U_q(ġl_n) into a bialgebra using the counit ε(c) = 1, ε(ψ_s) = 1, ε(x_i^±) = 0, ε(p_k) = 0, and the coproduct given by ∆(c) = c ⊗ c together with explicit formulas on the remaining generators. Moreover, the sub-bialgebras of (3.16) are endowed with a bialgebra pairing, generated by properties (2.28), (2.29) together with its values on the generators; all other pairings between generators are 0. It is well-known that U_q(ġl_n) is the Drinfeld double corresponding to the datum (3.16) (modulo the identification of the symbols ψ_s in the two factors of (3.16)). The algebra U_q(ġl_n) is Z^n-graded.

3.3. We will now give a different incarnation of the bialgebra (3.8), which is obtained by combining the results of [3] with those of [5]. Let R(x) be given by (4.1).

Theorem 3.4.
There is an algebra isomorphism (3.17) between U_q(ġl_n) and an algebra E defined by explicit quadratic relations, where Z_1 = Z ⊗ Id and Z_2 = Id ⊗ Z for any symbol Z. The isomorphism (3.17) is given explicitly on the generators. In the following Subsections, we will explain where p_{±k} ∈ U_q(ġl_n) go under the isomorphism (3.17). To do so, we need some additional structure on the algebra E.

3.5.
The algebra E is a bialgebra with respect to a coproduct in which · denotes matrix multiplication, and c_1 = c ⊗ 1 and c_2 = 1 ⊗ c. If we define elements f_{±[i;j)} as in (3.26), for all 1 ≤ i ≤ n and i ≤ j ∈ Z (and extend the notation to all integers i, j by the rule f_{[i+n;j+n)} = f_{[i;j)}), then the coproduct of these elements takes the form (3.27)-(3.28), where ψ_s is defined for every s ∈ Z by ψ_{s+n} = cψ_s. Moreover, E is Z^n-graded. The subalgebras E^± are graded by ±N^n, and we will write E_{±d} for their graded pieces, for all d ∈ N^n; the dimensions of these graded pieces are given by an explicit formula. Recall that δ = (1, ..., 1). As shown in [21], there is a unique (up to scalar) element (3.30) in degree ±kδ, for any k ∈ N, which is primitive with respect to the coproduct (3.27)-(3.28). Here, the word "primitive" should be taken in a quantum group sense. Sending the generators (3.6) of U_q(ġl_1) ⊂ U_q(ġl_n) to the same-named elements (3.30) of E allows one to define the isomorphism (3.17). To summarize:

Definition 3.6. The elements f_{±[i;j)} ∈ E^± are called root vectors; among them, one singles out the simple generators and the imaginary generators, which are jointly called the primitive generators of E.

3.7.
Primitive elements of E ≅ U_q(ġl_n) are only determined up to constant multiple. In the present Subsection, we will show how to define certain linear functionals that will completely determine such constant multiples. If we consider the subalgebras E^≥ and E^≤, generated by E^± together with the elements ψ_s for s ∈ Z, then it is easy to observe that E^≥ and E^≤ are sub-bialgebras with respect to the coproduct (3.27)-(3.28). In fact, an explicit assignment on generators produces a bialgebra pairing (3.33). Moreover, the Drinfeld double of the datum above coincides with the algebra E, as can be seen by comparing the quadratic relation (2.39) in the case at hand with the quadratic relation (3.21). As a consequence of the bialgebra property, one obtains the identity (3.34). Let q be a formal parameter, and consider the linear map α, which is multiplicative due to the bialgebra property of (3.32). Define the linear maps (3.35) as the coefficients of the map α, appropriately renormalized. Then the multiplicativity of α translates into the property (3.37). As a consequence of (3.34), it is elementary to see that the normalizations (3.38)-(3.39) hold, where the delta function is defined in the usual way. It is clear that the linear maps (3.35) are completely determined by (3.38)-(3.39).
Remark 3.9. Similarly, we may define linear maps α_{−[i;j)} : E_{−[i;j)} → Q(q, q^{1/n}) satisfying the analogue of (3.37), and completely determined by the normalization (3.40). In defining the isomorphism (3.17), we may renormalize the elements (3.30) by multiplying them by arbitrary non-zero scalars. We make the choice of [22], namely (3.41), for all k > 0 and i ∈ Z/nZ. The fact that the right-hand side of (3.41) does not depend on i is a consequence of the Z/nZ-invariance of the elements p_{±k}, see [21].
3.10. It is easy to note that the bialgebra U_q(ġl_n) ≅ E possesses an antipode.⁵ In terms of the root generators, we may write the antipode explicitly (let Q_+ = q^{1/n} and Q_− = (q q^{1/n})^{−1}), where the auxiliary elements f̄_{±[i;j)} are inductively defined in terms of the f_{±[i;j)}. Alternatively, the elements f̄_{±[i;j)} are completely determined by their coproduct and by their values under the linear maps (3.35).

⁵ The antipode is a bialgebra anti-automorphism A : E → E satisfying certain compatibility properties with the product, coproduct and pairing. We choose to write A(x) instead of the more common S(x), so as to not confuse the antipode with the series S(x) of Subsection 3.3.

The root vectors of (3.45) are parametrized by a triple (r, u, v). However, we choose to replace this triple by (i, j) ∈ Z²/(n, n)Z, defined by:

(3.46) i = r + au and j = r + av

and we will therefore use analogous notation for the counterparts of the root generators (3.26) and (3.42), respectively. The numerator a of µ may be negative, in which case it may happen that i > j in formula (3.46); if this happens, we make a suitable convention. Moreover, formulas (3.46) imply that k := (j − i)/µ is an integer. The resulting grading makes E_µ into a Z^n × Z graded algebra, and we write E_{µ|d} for its degree-d graded pieces. The following is an obvious consequence of Subsections 3.5-3.10.
Proposition 3.12. For any µ, the algebra E_µ has a coproduct ∆_µ, for which the analogues of the formulas above hold,⁶ as well as linear maps analogous to (3.35).

3.13. As in Definition 3.6, we obtain simple and imaginary generators for all µ ∈ Q ⊔ {∞}, which are all primitive for the coproduct ∆_µ and satisfy the analogues of the properties in the preceding Subsections. We may use the notation p^{(±µln)}_{±lδ,r} = p^µ_{±lδ,r} to emphasize the degree in which these elements live. Finally, let us write ī for the residue of i in the set {1, ..., n}.
3.16. We owe our reader a definition of the quantum toroidal algebra U_{q,q}(gl_n), as well as an explanation of the isomorphism (3.56); these will be provided in Subsections 3.17 and 3.25, respectively. However, we are now poised to provide some more information on the content of our main theorem. Consider the subalgebras D^+, D^0, D^− ⊂ D. In fact, D^± are generated by certain primitive generators (this is a consequence of relations (3.59)-(3.60), as we will show in Proposition 3.32). We must also impose commutation relations between simple and imaginary generators with upper index ∈ {−1, 0, 1}. By analogy with Proposition 6.10, relations (3.57)-(3.60) and (3.63)-(3.65) imply that we have an isomorphism of vector spaces:

(3.66) D ≅ D^+ ⊗ D^0 ⊗ D^−

If k = 0 in the formulas above, then we require i < j, while if k = 0 in (3.59) (respectively (3.60)), then we require i < j (respectively l > 0).
Intuitively, this means any product of elements from D + , D 0 , D − can be "straightened", i.e. written as a sum of products of elements from D + , D 0 , D − , in this order.
The subalgebra Υ(D^0) = U_q(ġl_n) is the horizontal subalgebra of the quantum toroidal algebra. Then the factorization (3.66) implies (1.6). Moreover, there exist algebra isomorphisms between D^± and A^±, where A^± are the shuffle algebras that will be defined in Sections 4 and 6. In Section 5, we will use this shuffle algebra interpretation to define topological coproducts on the extended subalgebras (1.7) and (1.8), by requiring ∆ to extend the Drinfeld-Jimbo coproduct on Υ(D^0) = U_q(ġl_n). In Section 6, we will define a bialgebra pairing A^+ ⊗ A^− → Q(q, q^{1/n}), which extends the bialgebra pairing (3.33). The quantum toroidal algebra U_{q,q}(gl_n) is the Drinfeld double of the halves (1.7) and (1.8) with respect to the datum above.
3.17. Affinizations of quantum groups are defined by replacing each generator x_i^± of Subsection 3.1 by an infinite family of generators x_{i,k}^±, for all k ∈ Z. To define affinizations explicitly, let us consider variables z as being colored by an integer i. Then we may define the color-dependent rational function ζ(z/w), for any variables z, w of colors i and j respectively.
Definition 3.18. The quantum toroidal algebra U_{q,q}(gl_n) is generated by symbols x^±_{i,k}, packaged into generating series in w^{±k}, subject to relations that hold for all choices of ±, ±', i, j, where the variables z and w have colors i and j, respectively, for the purpose of defining the rational function ζ. Note that we extend the index i to arbitrary integers, by applying a periodicity convention. We consider the subalgebras U^±_{q,q}(gl_n) ⊂ U_{q,q}(gl_n) generated by {x^±_{i,k}} for i ∈ Z/nZ and k ∈ Z.
3.19. Let us now recall the classical shuffle algebra realization of U^±_{q,q}(gl_n). Consider variables z_{ia} of color i, for various i ∈ {1, ..., n} and a ∈ N. We call a function R(z_{11}, ..., z_{1d_1}, ..., z_{n1}, ..., z_{nd_n}) color-symmetric if it is symmetric in z_{i1}, ..., z_{id_i} for each i ∈ {1, ..., n}. Depending on the context, the symbol "Sym" will refer either to color-symmetric functions in the variables z_{ia}, or to the symmetrization operation. The following construction arose in the context of quantum groups in [6], by analogy with the work of Feigin-Odesskii on certain elliptic algebras. Consider the vector space (3.78) of color-symmetric rational functions, and endow it with an associative algebra structure by setting R * R' equal to the symmetrization of the product of R and R', multiplied by the appropriate factors of ζ. The algebras S^± are graded by ±N^n × Z, where ±N^n records the numbers of variables (d_1, ..., d_n) of each color, while the Z-component k denotes the homogeneous degree of R. We will write S_{±d} for the graded pieces of S^± with respect to the ±N^n direction only.
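The symmetrized product just described can be illustrated in the simplest case n = 1 (one color). The sketch below uses a sample ζ chosen only for illustration (the paper's color-dependent ζ of Subsection 3.17 would replace it); the point of the example is that associativity of the ζ-shuffle product is a formal consequence of the symmetrization formula, for any choice of rational ζ:

```python
import itertools
import sympy as sp

q = sp.symbols('q')
Z = sp.symbols('z1:6')   # a pool of variables z1, ..., z5

def zeta(x):
    # sample zeta factor (an assumption for illustration only)
    return (x - q**2) / (x - 1)

def shuffle(f, k, g, l):
    """zeta-shuffle product of symmetric f(z_1..z_k) and g(z_1..z_l):
    a sum over shuffles of f * g * (product of zeta factors)."""
    total = sp.Integer(0)
    for S in itertools.combinations(range(k + l), k):
        T = [a for a in range(k + l) if a not in S]
        fS = f.subs({Z[i]: Z[S[i]] for i in range(k)}, simultaneous=True)
        gT = g.subs({Z[i]: Z[T[i]] for i in range(l)}, simultaneous=True)
        term = fS * gT
        for a in S:
            for b in T:
                term *= zeta(Z[a] / Z[b])
        total += term
    return total

# associativity (f*g)*h = f*(g*h) on a small example
f, g, h = Z[0], Z[0]**2, sp.Integer(1)
lhs = shuffle(shuffle(f, 1, g, 1), 2, h, 1)
rhs = shuffle(f, 1, shuffle(g, 1, h, 1), 2)
assert sp.cancel(sp.together(lhs - rhs)) == 0
print("shuffle product is associative on this example")
```

Both groupings expand to the same sum over ordered partitions of the variables into three blocks, with ζ factors between the blocks, which is why the check succeeds regardless of the precise ζ.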
The subalgebras S^± coincide with the Q(q, q^{1/n})-vector subspaces of (3.78) consisting of rational functions R(..., z_{ia}, ...) whose numerator r is an arbitrary Laurent polynomial which vanishes at certain specializations of the variables, for any i ∈ {1, ..., n}. This vanishing property is the natural analogue of the wheel conditions studied in [11]. By convention, the symbols z_{n+1,a} are defined in terms of the color-1 variables (the colors being taken modulo n). It is a well-known fact (see [6]) that S^± ≅ U^±_{q,q}(gl_n). In order to obtain the entire quantum toroidal algebra and not just its halves, one defines the double shuffle algebra S as in (3.79); therefore, there is an isomorphism (3.80) between S and the quantum toroidal algebra. The specific relations that appear in (3.79) can be recovered from the fact that S is the Drinfeld double of its halves (3.81) and (3.82). Indeed, the algebras (3.81) and (3.82) are endowed with topological coproducts (3.83)-(3.84), and there exists a pairing (3.85) between the two halves, defined for any R^+ ∈ S_d and R^− ∈ S_{−d} (we refer the reader to [21] or [22] for details).
Remark 3.23. To think of (3.83) as a tensor, we expand the right-hand side in non-negative powers of z_{ia}/z_{i'a'} for a ≤ e_i and e_{i'} < a', thus obtaining an infinite sum of monomials. In each of these monomials, we put the symbols ϕ^+_{i,d} to the very left of the expression, then all powers of z_{ia} with a ≤ e_i, then the ⊗ sign, and finally all powers of z_{ia} with a > e_i. The powers of the central element c̄_1 = c̄ ⊗ 1 are placed in the first tensor factor. The resulting expression will be a power series, and therefore lies in a completion of S^≥ ⊗ S^≥. The same argument applies to (3.84).
Remark 3.24. In formula (3.85), the integral is defined in such a way that the variable z_{ia} traces a contour which surrounds z_{ib} q^2, z_{i−1,b} and z_{i+1,b} q^{−2} for all i ∈ {1, ..., n} and all a, b (a particular choice of contours which achieves this aim is explained in Proposition 3.8 of [21]).
3.25. In [21], [22], we defined root vectors of S, which allowed us to establish an isomorphism between S and the algebra D of (3.56). Combining this with (3.80) will yield Theorem 3.14. For any i < j and µ ∈ Q such that (j−i)/µ ∈ Z, define: ∀i ≤ j. In order to think of the RHS of (3.86) and (3.87) as shuffle elements, we relabel the variables z i , ..., z j−1 according to the following rule ∀a ∈ {i, ..., i+n−1}: z a , z a+n , z a+2n , ... ↦ zā 1 , zā 2 q −2 , zā 3 q −4 , ... and thus the RHS of (3.86) and (3.87) lie in the vector space (3.78). If (j−i)/µ ∉ Z, the LHS of (3.86) and (3.87) are defined to be 0. We will occasionally write: and B (±k) Sketch of the proof of Theorem 3.14 (see [21], [22] for the full proof): the subalgebra: is isomorphic to E µ of (3.45) for all µ ∈ Q ∞ , by sending: Similarly, T 0 := S 0 is isomorphic to a tensor product of n Heisenberg algebras. The assignments (3.88) and (3.89) allow us to construct simple and imaginary generators ∈ S, by repeating the discussion in the beginning of Subsection 3.13. In loc. cit., we showed that these simple and imaginary generators ∈ S satisfy relations (3.59)-(3.60) and (3.63)-(3.65) instead of the p's, hence: is an algebra homomorphism. If we fix linear bases {v i µ } of E µ for all µ, it is not hard to show that ordered products of the v i µ 's in increasing order of µ ∈ Q ∞ form a linear basis of D. In [21], we showed that ordered products of the Φ(v i µ ) also form a linear basis of S, which implies that Φ is an isomorphism.
Proposition 3.28. Under the substitution: for all 1 ≤ i, j ≤ n and d ∈ Z, the following relations hold in D: Similarly, under the substitution: ±[i;j) are all multiples of each other: the following relations hold in D (recall that · op denotes the opposite product): where r and r′ are the coefficients of the power series expansions: Equating the coefficients of ... ⊗ E uv w k in the two sides of (3.110) yields the identity: 1≤•, * ,x,y≤n which is a relation in the algebra D, upon the substitution (3.101).
Proof. We will only prove relation (3.102), and leave the analogous formulas (3.103)-(3.109) as exercises to the interested reader. Let us rewrite (3.102) as: Noting that A −1 (S(w)) = S(wc) −1 , the formula above reads: where for any e ∈ U ≥ q (ġl n ), we write: Indeed, when e = S 2 (w), the right-hand sides of (3.113) and (3.114) match the LHS and RHS of (3.111), respectively, due to (3.32) and (3.34). It is easy to see that the operations ♠ and ♣ are additive in e, and moreover: Since the series coefficients of S + (w) generate U + q (ġl n ), (3.112) implies that: [lδ,1) (whose coproduct is ∆(e) = e ⊗ 1 + c −l ⊗ e), relation (3.115) reads: [lδ,1) , T 1 (z) Formulas (3.36) and (3.41) imply that the pairings in the right-hand side of the formula above are equal to q l and q −l , respectively, so we have: Plugging in X 1 (z) =
3.29. We will now prove a useful Lemma about the structure of the algebra E + of Subsection 3.5. Note that relation (3.19) is equivalent to a collection of quadratic relations, which are indexed by their degree d ∈ N n : In fact, E + could be defined as the algebra generated by all f [i;j) modulo the ideal generated by all LHS d . In Section 5, we will find ourselves in the situation of having an algebra B + and wanting to construct an algebra isomorphism Υ : E + ∼ = B + . Of course, the straightforward way to do this is to construct elements: and directly check that Υ(LHS d ) = 0, ∀d ∈ N n . However, such a check will not be possible in our situation, and we will instead rely on some additional structure:
Lemma 3.30. Assume B + is an N n -graded Q(q)-algebra, such that: To show that Υ is an algebra homomorphism, we must show that LHS d = 0, which we will prove by induction on d ∈ N n . The base case d = 0 is trivial, so we will only prove the induction step. We have: where the middle terms denoted by the ellipsis are equal to Υ ⊗ Υ applied to the middle terms of ∆(LHS d ).
Since the latter are 0 (as LHS d = 0 in E + ), we conclude that LHS d is primitive. Moreover, the analogues of (3.37), (3.38), (3.40) imply: Therefore, assumption (3.117) implies that LHS d = 0 for all d, thus establishing the fact that Υ is a well-defined algebra homomorphism. To show that Υ is injective, assume that its kernel is non-zero. Since Υ preserves degrees, we may choose 0 ≠ x ∈ E + of minimal degree d ∈ N n such that Υ(x) = 0. Since Υ preserves the coproduct and is injective in degrees < d (by the minimality of d), we conclude that x is primitive. However, since Υ intertwines the linear maps α [i;j) with their counterparts on B + , we conclude that x is also annihilated by the linear maps α [i;j) , hence x = 0.
Since an injective linear map of finite-dimensional vector spaces Φ :
Proposition 3.32. The Q(q, q^{1/n})-algebra D ± is generated by the elements: lδ,r ) lies in the subalgebra generated by the elements (3.118), for all choices of indices i, j, k, l, r. We will prove this statement by induction on k (respectively k′). Let us choose a lattice triangle of minimal size with the vector (j − i, k) (resp. (nl, k′)) as an edge: In the case of the picture on the left, namely that of the element p (k) [i;j) , we have: If a ≡ j − i modulo n, then relation (3.59) gives us: while if a ≢ j − i modulo n, then relation (3.60) gives us: In either of the two formulas above, the induction hypothesis implies that the left-hand side lies in the algebra generated by the elements (3.118). Therefore, so does the right-hand side, and the induction step is complete.
The case of the picture on the right, namely that of the element p (k′) lδ,r , is proved analogously. We will therefore only sketch the main idea, and leave the details to the interested reader. For any s ≡ r mod g, (3.60) implies: where µ = k′/(nl). By the induction hypothesis, the left-hand side of the expression above lies in the subalgebra generated by the elements (3.118), hence so does the right-hand side, which we henceforth denote RHS. Clearly, RHS ∈ B + µ , and it can therefore be expressed as a sum of products of the simple and imaginary generators: RHS = α · p µ lδ,r + sum of products of more than one generator of B + µ It was shown in the last paragraph of [22] that the coefficient α above is non-zero. By the induction hypothesis, all products of more than one simple or imaginary generator lie in the subalgebra generated by (3.118). Since the quantity RHS also lies in this subalgebra, we conclude that p µ lδ,r does too.
4. The shuffle algebra with spectral parameters
4.1. We will now generalize the construction of Section 2 by allowing the coefficients of matrices in End(V ⊗k ) to be rational functions. We will recycle all the notations from Section 2, so let V be an n-dimensional vector space and consider: given by: For an arbitrary parameter q, we define: which is explicitly given by: It is well-known that R(x) satisfies the Yang-Baxter equation with parameter: and it is easy to show that R(z) satisfies the following analogue of (2.2)-(2.3): Finally, we note that the R-matrix (4.1) is (almost) unitary, in the sense that: where: It is easy to see that the rational function f (x) could have been absorbed into the definition of R(x), but we prefer our current conventions, in order for R(x) to have a simple pole at x = 1.
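The Yang-Baxter equation mentioned above can be checked numerically. The sketch below uses a standard unnormalized form of the trigonometric gl_n R-matrix; this is an assumption on our part, since the paper's R(x) of (4.1) differs from it at least by a scalar normalization (chosen there so that R(x) has a simple pole at x = 1).

```python
import numpy as np

def E(i, j, n):
    """Matrix unit E_ij acting on C^n."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def R(x, q, n):
    """A standard unnormalized trigonometric gl_n R-matrix (a stand-in for
    the paper's (4.1), which differs by a scalar factor)."""
    r = np.zeros((n * n, n * n))
    for i in range(n):
        r += (x * q - 1 / q) * np.kron(E(i, i, n), E(i, i, n))
        for j in range(n):
            if i != j:
                r += (x - 1) * np.kron(E(i, i, n), E(j, j, n))
                r += (q - 1 / q) * (x if i > j else 1) * np.kron(E(i, j, n), E(j, i, n))
    return r

def swap(n):
    """Permutation operator (12) on C^n tensor C^n."""
    return sum(np.kron(E(i, j, n), E(j, i, n)) for i in range(n) for j in range(n))

def yang_baxter_error(x, y, q, n):
    """Max-norm of R12(x) R13(xy) R23(y) - R23(y) R13(xy) R12(x) on (C^n)^{x3}."""
    I = np.eye(n)
    P23 = np.kron(I, swap(n))
    R12 = lambda z: np.kron(R(z, q, n), I)
    R23 = lambda z: np.kron(I, R(z, q, n))
    R13 = lambda z: P23 @ R12(z) @ P23   # conjugate the (1,2)-action by the (2,3)-swap
    lhs = R12(x) @ R13(x * y) @ R23(y)
    rhs = R23(y) @ R13(x * y) @ R12(x)
    return np.abs(lhs - rhs).max()
```

For generic numerical values of the spectral parameters and of q, the error is zero up to floating-point noise, for n = 2 as well as n = 3.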

4.2. We will represent elements of End(V ⊗k )(z 1 , ..., z k ) as braids on k strands. The only difference between the present setup and that of Section 2 is that each strand carries not only a label i ∈ {1, ..., k} but also a variable z i . With this in mind, we make the convention that the endomorphism corresponding to a positive crossing of strands labeled i and j, endowed with variables z i and z j respectively, is: Because of (4.2), we can represent both R and R̄ as crossings of braids of the same kind (i.e. we do not need the dichotomy of straight strands versus squiggly strands of Subsection 2.2) if we remember to change the variable on one of our strands. We will always write the variable next to each strand. For example, the braids in Figure 11 (braids decorated with variables) represent the following compositions ∈ End(V ⊗2 )(z 1 , z 2 ), respectively. The variable does not change along a strand, except at a box.

Note that R(x) has a pole at x = 1, hence: where (12) denotes the permutation operator of the two factors. Pictorially, the endomorphism (4.8) will be represented by two black dots indicating a color change (recall that the color encodes the index ∈ {1, ..., k} of a strand) along two strands: Figure 12. Black dots can slide past arbitrary strands The equality above means that one can move the black dots as far left or as far right as we wish, no matter how many other strands we pass over or under. Explicitly, the equality in Figure 12 reads: Finally, we note that due to formula (4.6), we can always change a crossing in a braid, at the cost of multiplying by the function (4. yields an associative algebra structure on the vector space: (V ⊗0 ). We call (4.9) the "shuffle product".
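The statement that the pole of R(x) at x = 1 produces the permutation operator can be checked numerically. In the unnormalized form used below (an assumption; the paper's R(x) of (4.1) differs by a scalar factor that carries the actual pole), the statement becomes the regular value R(1) = (q − q⁻¹)·(12):

```python
import numpy as np

def E(i, j, n):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def R_unnorm(x, q, n):
    # standard unnormalized trigonometric gl_n R-matrix, a stand-in for the
    # paper's (4.1) up to scalar normalization
    r = np.zeros((n * n, n * n))
    for i in range(n):
        r += (x * q - 1 / q) * np.kron(E(i, i, n), E(i, i, n))
        for j in range(n):
            if i != j:
                r += (x - 1) * np.kron(E(i, i, n), E(j, j, n))
                r += (q - 1 / q) * (x if i > j else 1) * np.kron(E(i, j, n), E(j, i, n))
    return r

n, q = 3, 0.8
P = sum(np.kron(E(i, j, n), E(j, i, n)) for i in range(n) for j in range(n))
# At x = 1 the "diagonal" entries x - 1 vanish and only the permutation survives.
```

This is exactly the mechanism exploited by the black-dot calculus: specializing two neighboring variables turns the crossing into a (scalar multiple of the) permutation of the two strands.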
Proposition 4.6. The shuffle product above preserves the vector space: consisting of tensors X = X 1...k (z 1 , ..., z k ) which simultaneously satisfy: for some x ∈ End
• X is symmetric, in the sense that: where R σ (z 1 , ..., z k ) is any braid lift of the permutation σ, and:
Proof. The fact that the shuffle product (4.9) preserves the vector space of symmetric tensors is proved word-for-word like Proposition 2.6. Therefore, it remains to prove that if A and B only have simple poles at z i = z j q 2 , then A * B has the same property. Since R ab (z) has a simple pole at z = 1, a priori A * B could have simple poles at z i = z j , so it remains to show that the residues at these poles vanish.
Pictorially, the RHS of (4.13) may be represented as follows: Figure 14. The RHS of (4.13) for u = 2, λ 1 = 4, λ 2 = 3 Note the symbol "blue over blue" to the right of Figure 14. Given two colors γ 1 and γ 2 , placing γ 1 over γ 2 is a prescription indicating that the braid in question be multiplied by the product of f (y/y′), where y (respectively y′) ranges over all variables on strands whose left endpoint has color γ 1 (respectively γ 2 ), and the leftmost endpoint with variable y is above the leftmost endpoint with variable y′.
Remark 4.9. We will call (4.13) the wheel conditions in the current matrix-valued setting, because the E 11 ⊗ ... ⊗ E 11 coefficient of any X satisfying (4.13) is a symmetric rational function in z 1 , ..., z k that satisfies the wheel conditions of [9].
Remark 4.10. Note that when k = 1, the wheel condition (4.13) is vacuous, but it is already non-trivial for k = 2 (as opposed to the n = 1 case of loc. cit.).
Proposition 4.11. The vector subspace A + of Definition 4.8 is preserved by the shuffle product (and will henceforth be called the "shuffle algebra").
Proof. Assume that two matrix-valued rational functions A and B in k and l variables, respectively, satisfy the wheel condition (4.13). To prove that their shuffle product A * B also satisfies the wheel condition, we must take the iterated residue of the right-hand side of (4.9) at: z cs = y s , z cs+1 = y s q 2 , ..., z c s+1 −1 = y s q 2(λs−1) for any composition k + l = λ 1 + ... + λ u . We will show that, at such a specialization, each summand in the RHS of (4.9) has the form predicated in the RHS of (4.13), so we henceforth fix a shuffle a 1 < ... < a k , b 1 < ... < b l . Because R(z) has a simple pole at z = q −2 , the only way such a shuffle can have a non-zero residue is if: for some choice of r s ∈ {c s , ..., c s+1 − 1} for all s ∈ {1, ..., u}. We will indicate this choice by using the following colors for the strands of our braids: red for c s ; blue for c s + 1, ..., r s − 1; purple for r s ; green for r s + 1, ..., c s+1 − 1. With this in mind, the summand of (4.9) corresponding to our chosen shuffle is represented by the following braid (to keep the pictures reasonable, we will only depict the case u = 2, but the modifications that lead to the general case are straightforward; although we only depict a single blue and green strand in each of the u groups, the reader may obtain the general case by replacing each of them with any number of parallel blue and green strands, respectively): Figure 15.
The black dots in the middle of the braid appear because the variables on the braids in question are set equal to each other in the iterated residue. By sliding the black dots as far to the right as possible (which is allowed, due to Figure 12), we obtain: Figure 16.
One readily notices that certain pairs of braids are twisted twice around each other, and these twists can be canceled up to a factor of f (y/y′) (due to the identity in Figure 13), where y and y′ are the variables on the braids in question. Keeping in mind that the variables on the red strands are modified to the right of the red boxes, this yields the following braid: Figure 17.
Note that the black dots on the right side of the braid above yield the same permutation as the black dots on the right side of the braid in Figure 14, due to the following identity: (12) in the symmetric group S(k). Therefore, the final braid above is precisely of the form predicated in the right-hand side of (4.13), which concludes our proof.
Proposition 4.12. For any X ∈ A + and any composition k = λ 1 + ... + λ u , the tensor Y = X (λ1,...,λu) that appears in (4.13) has at most simple poles at: (4.14) y s q 2d − y t q −2 and y s q 2d − y t q 2λt for all 1 ≤ s < t ≤ u and any 0 ≤ d < λ s . Moreover, if λ s = λ t then: Proof. Let us first prove the statement about the poles of Y . In the course of this proof, all poles will be counted with multiplicities, in the sense that whenever we refer to a "set of poles", the reader should read this as the "multiset of poles". Because of the first bullet of Proposition 4.6, which determines the allowable poles of X ∈ A + , the left-hand side of (4.13) has a simple pole at: On the other hand, the right-hand side of (4.13) has a double pole at: because of the f factors, and a simple pole at: because of the simple pole of R(z) at z = 1. Eliminating the multiset of poles in (4.17) and (4.18) from the multiset of poles in (4.16) yields the allowable poles of Y (y 1 , ..., y u ), and it is elementary to see that they are precisely of the form (4.14).
As for (4.15), we will prove it pictorially. To keep the pictures legible, we will only show the case s = 1, t = u = 2, but the interested reader may easily generalize the argument. Because of property (4.10), the tensor X(y 1 , y 1 q 2 , ..., y 2 , y 2 q 2 , ...) (which is represented by a braid akin to Figure 14) is also equal to the following braid: Figure 18.
(we ignore the scalar-valued rational functions f in the diagrams above, as they commute with all the braids involved). The braid called R σ interchanges the two collections of λ s = λ t braids corresponding to the variables y s q 2 * and y t q 2 * . Although we could choose the crossings of R σ arbitrarily, the choice we make above is that the two red strands cross above all other ones, then the two blue strands next to the red strands cross above all remaining ones, then the two blue strands next to the previous blue strands cross, etc. By virtue of Figure 12, we may move the black dots to the very right of the picture above, obtaining the braid in Figure 19. Then we pull the red strands as far up as possible, and notice that the blue strands are all unlinked, thus yielding the braid in Figure 20. The red strands in the diagram above correspond to the endomorphism: )(y s , y t ) which we may equate with Y due to the braid equivalences described above.
4.13. The shuffle algebra A + has a "vertical" and a "horizontal" grading: where δ = (1, ..., 1) and the grading on End(V ) is defined by: We will find it convenient to extend the notation E ij to all i, j ∈ Z, according to: where ī denotes the residue class of i in the set {1, ..., n}. Then the grading (4.20) makes sense for arbitrary integer indices E iaja , and formula (4.21) also makes sense. (More generally, we will extend the notation above to a k-fold tensor, by the rule: for all integers i, j.) We will denote the graded pieces of the shuffle algebra by: and refer to (d, k) as the degree of homogeneous elements. Finally, we write: for any d = (d 1 , ..., d n ) ∈ Z n and refer to the number: as the slope of the matrix-valued rational function f (z 1 , ..., We will consider the partial ordering on Z n given by:
5. The extended shuffle algebra with spectral parameters
5.1. We will now replicate Subsection 2.7 in the situation of the R-matrix (4.1).
Definition 5.2. Consider the extended shuffle algebra: In order to concisely state the relations, it makes sense to package the new generators s [i;j) into the generating function: We impose the following analogues of relations (2.16) and (2.19): Note that the grading on A + extends to one on A + , by setting:
5.3. We may also consider the elements t [i;j) ∈ A + defined by (3.92), where: Then it is easy to see that (5.1), (5.2) imply analogues of (2.17), (2.18), (2.20): Therefore, we conclude that the series S(x) and T (x) satisfy the same relations as in Definition 2.8 (modified in order to account for the variables z i ), even though the s's and the t's are not independent of each other anymore. We will write: and note that formulas (5.1) imply that: where ⟨·, ·⟩ is the bilinear form on Z n × Z n given by ⟨ς i , ς j ⟩ = δ j i − δ j−1 i .
5.4. Consider the following topological coproduct on the algebra A + , which is the natural analogue of the coproduct studied in Proposition 2.9: ) by (5.3) and: for all X 1...k (z 1 , ..., z k ) ∈ A + ⊂ A + . The fact that ∆ defined above gives rise to a coassociative coalgebra structure which respects the algebra structure is proved by analogy with Proposition 2.9, and we leave the details to the interested reader.
Remark 5.5. Because S(x) is a power series in x, the coproduct defined above takes values in a completion of A + ⊗ A + . Specifically, to make sense of the second line of (5.10), we must expand the rational function: for z 1 , ..., z i z i+1 , ..., z k , then collect all tensors of the form X 1...i (z 1 , ..., z i ) to the left of ⊗, and all tensors of the form X i+1,...,k (z i+1 , ..., z k ) to the right of ⊗.
5.6. Given X ∈ A, we will represent its degree deg X = (d, k) on a two-dimensional lattice, via the projection (d, k) ↦ (|d|, k), and hence assign to X the lattice point (|d|, k). Similarly, we will assign to the tensor X 1 ⊗ X 2 the two-segment broken path:

Figure 21. The hinge of a tensor
The intersection of the arrows, namely (|d 1 |, k 1 ), is called the hinge of X 1 ⊗ X 2 .
Definition 5.7. Let µ ∈ Q. We let A + ≤µ ⊂ A + be the set of those X such that: (5.11) ∆(X) = ∆ µ (X) + (anything) ⊗ (slope < µ) where ∆ µ (X) consists only of summands X 1 ⊗ X 2 with slope X 2 = µ, as in (4.24). In terms of the pictorial definitions of hinges above, X ∈ A + ≤µ if and only if all summands in ∆(X) have hinge at slope |d|/k ≤ µ as measured from the origin.
It is easy to see that A + ≤µ is a vector space. Let us define its graded pieces: Proposition 5.8. For any µ ∈ Q, the subspace A + ≤µ is a subalgebra of A + , and: for all X, Y ∈ A + ≤µ .
Proof. Note that degree is multiplicative, i.e. (assuming the LHS is non-zero): deg(X * Y ) = deg X + deg Y ; this implies the conclusion.
5.9. Our reason for introducing the slope subalgebras is that {A ≤µ|d,k } µ∈Q yield a filtration of A d,k by finite-dimensional vector spaces.
Lemma 5.10. The dimension of A ≤µ|d,k as a vector space over Q(q, q 1 n ) is at most the number of unordered collections: where: In (5.72), we will show that the dimension of A ≤µ|d,k is equal to the number of unordered collections (5.13). The argument below follows that of [9], [20], [21].
Proof. To any partition λ = (λ 1 ≤ ... ≤ λ u ) of k ∈ N, we associate the linear map: ϕ λ : A ≤µ|d,k −→ End(V ⊗u )(y 1 , ..., y u ), X ↦ X (λ1,...,λu) of (4.13). Consider the dominance ordering λ > λ′ on partitions, and define: ≤µ|d,k = A ≤µ|d,k , then the desired bound on dim A ≤µ|d,k would follow from: [i s ; j s ) = d and j s − i s ≤ µλ s for all s ∈ {1, ..., u} for any λ = (λ 1 ≤ ... ≤ λ u ). By Proposition 4.12, any Y ∈ Im ϕ λ is of the form: However, if Y = ϕ λ (X) for some X ∈ A λ ≤µ|d,k , we claim that Y is a Laurent polynomial. Indeed, let us show that Y is regular at y s q 2d − y t q −2 . We have: where λ′ is obtained from λ by replacing λ s and λ t by λ s − d − 1 and λ t + d + 1. The right-hand side of the expression above vanishes because X ∈ A λ ≤µ|d,k and λ′ > λ. Similarly, one can show that the residue of Y vanishes at y s q 2d − y t q 2λt for any s < t and 0 ≤ d < λ s , and this precisely implies that Y is a Laurent polynomial: Since the matrices R and R̄ have total degree 0, the horizontal degree of Y is equal to that of X, namely d, so we conclude that the only summands with non-zero coefficient in (5.16) satisfy: Finally, the slope condition on X implies an analogous slope condition on Y : in each variable y s , we have: nh s + α s − β s ≤ µλ s Therefore, the number of coefficients that one gets to choose in (5.16) is at most the number of collections (j s = α s + nh s , i s = β s ) satisfying the conditions in the right-hand side of (5.14). The reason why we need to take unordered collections is the symmetry property of Y proved in Proposition 4.12.
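The induction in this proof runs over the dominance ordering on partitions, which is a standard combinatorial notion. A minimal stdlib sketch (the paper lists parts in increasing order λ 1 ≤ ... ≤ λ u ; the function below normalizes by sorting):

```python
from itertools import accumulate

def dominates(lam, mu):
    """Dominance order on partitions of the same integer k: lam >= mu iff
    every partial sum of lam (parts sorted decreasingly, padded with zeros)
    is >= the corresponding partial sum of mu."""
    if sum(lam) != sum(mu):
        return False
    L = list(accumulate(sorted(lam, reverse=True) + [0] * len(mu)))
    M = list(accumulate(sorted(mu, reverse=True) + [0] * len(lam)))
    return all(a >= b for a, b in zip(L, M))
```

The one-part partition (k) is the maximum of this ordering (which is why it serves as the base case of reverse-dominance inductions such as the one in Lemma 5.15 below), while the ordering is only partial: for example, (3, 3) and (4, 1, 1) are incomparable.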
Proposition 5.12. For any A ∈ A d,k and B ∈ A e,l , we have: where e n is the last component of the vector e = (e 1 , ..., e n ) ∈ Z n .
Proof. The proof is precisely the u = 1 case of the proof of Proposition 4.11, since the equality of braids therein indicates the fact that: The fact that B ∈ A e,l implies the homogeneity property: B(z 1 ξ, ..., z l ξ) = ξ en B(z 1 , ..., z l ) Since R-matrices are invariant under rescaling the variables, the rational function B (l) (y) of (4.13) also satisfies B (l) (yξ) = ξ en B (l) (y). Then (5.18) implies (5.17).
For all (i, j) ∈ Z 2 /(n, n)Z, define the linear maps: ). As a consequence of Proposition 5.12, we have: Corollary 5.13. For any k, l ∈ N and (i, j) ∈ Z 2 /(n, n)Z, we have: Proof. The corollary is an immediate consequence of (5.17) and (5.19). The only thing we need to check is that the power of q is the same in the left- and right-hand sides of (5.20), which happens due to the elementary identity: . Particular importance will be given to the subalgebra: As a consequence of Proposition 5.8, the leading order term ∆ µ of (5.11) restricts to a coproduct on the enhanced subalgebra: s s∈Z relation (5.8) Lemma 5.15. If X ∈ B + µ is primitive with respect to the coproduct ∆ µ , and: then X = 0.
Proof.
The assumption X ∈ B + µ implies that deg X = (d, k) with: (5.23) |d| = µk As we observed in the proof of Lemma 5.10, it suffices to show that ϕ λ (X) = 0 for all partitions λ, which we will do in reverse dominance ordering of the partition λ. The base case is when λ = (k), which is satisfied because ϕ (k) (X) = 0 is precisely the content of the assumption (5.22). For a general partition λ, we may invoke the induction hypothesis to conclude that ϕ λ′ (X) = 0 for all partitions λ′ > λ, and in this case ϕ λ (X) takes the form of (5.16). However, the fact that X is a primitive element requires every summand appearing in the RHS of (5.16) to satisfy: nh s + α s − β s < µλ s for all 1 ≤ s ≤ u (we have u = l(λ) > 1, since we are dealing with the case λ ≠ (k)). However, relation (5.23) forces the following identity: This yields a contradiction, hence ϕ λ (X) = 0, thus completing the induction step.
5.16. We will now construct particular elements of B + µ , which together with Lemma 5.10 will yield a PBW basis of the shuffle algebra, leading to the proof of Theorem 1.4. Consider the following notion of symmetrization, analogous to (2.8) and (4.10): where R σ is the product of the R ij (z i /z j ) associated to any braid lift of σ. For instance: lifts the longest element of S(k). Consider the matrix-valued rational functions: which have, up to scalar, the same simple pole and residue as R(x) of formula (4.2): Moreover, it is easy to check the following identity: Proposition 5.17. For any (i, j) ∈ Z 2 /(n, n)Z and µ ∈ Q such that (j−i)/µ ∈ N, set: s a−1 s a q 2s a n (recall (4.22)) where s a = ⌈j − µa⌉ and s̄ a = ⌊j − µa⌋. Then F µ [i;j) , F̄ µ [i;j) ∈ B + µ , and: We will often write F (k) if j − i = µk (and the analogous notation for F̄ ) in order to emphasize the fact that this shuffle element has vertical degree k.
Proof. Note that if Q, Q̄ were replaced by R in formulas (5.30), (5.31), then the right-hand sides of the aforementioned formulas would precisely equal: The fact that the shuffle elements (5.34) satisfy the wheel conditions is simply a consequence of iterating Proposition 4.11 k − 1 times. As far as the elements F µ [i;j) , F̄ µ [i;j) are concerned, the fact that they satisfy the wheel conditions is proved similarly to the fact that (5.34) satisfies the wheel conditions: this is because Proposition 4.11 uses the fact that Res x=q −2 R(x) is a multiple of the permutation matrix, and we have already seen in (5.28) that the residues of Q(x), Q̄(x), R(x) are the same up to scalar. We leave the details to the interested reader.
Let us prove that the shuffle elements (5.30) and (5.31) lie in B + µ . We will only prove the former case, since the latter case is analogous. We have: .., z k )X 1...k (z 1 , ..., z k ) where X is the expression on the second line of (5.30). To this end, for any l ∈ {0, ..., k} we need to look at the first l tensor factors of Sym R ω k X and isolate the terms of minimal |hdeg|. If Y is a k-tensor, we will henceforth use the phrase "initial degree of Y " instead of "total |hdeg| of the first l factors of Y ". Because: for all indices a and b, we obtain the following easy (but very useful) fact: (either on the left or on the right) cannot decrease the minimal initial degree of Y .
in the discussion above (specifically (5.36) and (5.37)), these factors only contribute a diagonal matrix to the terms of minimal initial degree. Specifically, if: (above, y l = s l = x l+1 in all summands with non-zero coefficient) then the terms of minimal initial degree in R ω k X yield the following value for the coproduct (5.11): where the various ψ ±1 a factors arise from the diagonal terms of the series S(x), T (x) (see (5.10), (5.11)). Using (5.8), we may move the product of ψ's on the left to join the product of ψ's in the middle, at the cost of cancelling the powers of q: As a consequence of the following straightforward claim: Claim 5.19. The quantity (5.44) is a sum of tensors E xuyu ⊗ ... ⊗ E xvyv where: #{x u ≡ r mod n} − #{y u ≡ r mod n} = #{c u ≡ r mod n} − #{d u ≡ r mod n} for any r ∈ Z/nZ.
5.20. According to Lemma 5.15, the elements F µ [i;j) , F̄ µ [i;j) ∈ B + µ are completely determined by the coproduct relations (5.32) and (5.33), together with their value under the linear functionals (5.19). Let us therefore compute the latter: Proposition 5.21. For any (i, j) ∈ Z 2 /(n, n)Z and µ ∈ Q such that (j−i)/µ ∈ N, we have: Proof. Recall that F = F µ [i;j) is given by the symmetrization (5.35), namely: where R σ is an arbitrary braid which lifts the permutation σ. Of the k! summands in the right-hand side, only the one corresponding to the identity permutation is involved in the iterated residue of F at z k = z k−1 q 2 ,..., z 2 = z 1 q 2 , hence we obtain: (to compute this, we must specialize z a = yq 2a−2 in (5.49)). Note that: due to (4.3). Since R is given by (4.2) and Res x=q −2 Q(x) = q −1 · (12), we have: If we move the permutations (b − 1, b) all the way to the right, then (5.49) equals: With this in mind, (4.13) implies that: Given formula (5.19) and the elementary identity: we conclude (5.47). Formula (5.48) is proved analogously.

5.22. Let a ∈ Z and b ∈ N be coprime, set g = gcd(n, a) and: Recall the discussion in Subsection 3.11, in which the algebra: (5.51) E µ = U q (ġl n/g ) ⊗g was made to be Z n × Z-graded, and its root vectors were indexed as: Comparing (3.27) and (3.38) with (5.32) and (5.47), respectively, Lemma 3.30 implies that there exists an algebra homomorphism: Lemma 5.10 for |d| = kµ implies that: The right-hand side is precisely the dimension of the algebra (5.51) in degree d, so we may invoke Corollary 3.31 to conclude that the map Υ µ is an isomorphism.
5.23. Since B + µ is isomorphic to the algebra E + µ , we may present it instead in terms of simple and imaginary generators (see Subsection 3.13 for the notation): Since Υ µ preserves the maps α [u;v) , we have: for all (u, v) ∈ Z 2 /(n, n)Z. By (5.47) and (5.48), the simple generators are given by: if gcd(j − i, µ(j − i)) = 1, but we do not know a closed formula for the imaginary generators (5.55). We will sometimes use the notation: if j − i = µk, nl = µk′, in order to emphasize the fact that deg P
Theorem 5.24. We have an algebra isomorphism: where D + is defined in (3.61).
5.25. Formula (5.53) establishes the fact that the simple and imaginary generators of B + µ for fixed µ satisfy the commutation relations between the simple and imaginary generators of E + µ , for the same µ. We will now study the commutation relations between such shuffle elements for "nearby" µ. To this end, we start with the following: [i;j) + tensors with hinge strictly right of T + for the picture on the left ψv ψj P for the picture on the right powers of q to the minimal initial degree of R ω k X. Specifically, let: (we note that x 1 = j, y β = s β = x β+1 , y α = s α = x α+1 , y k = i in all summands above that have non-zero coefficient). Then m.i.d. ∆(R ω k X), which differs by certain powers of ψ ±1 s from m.i.d. R ω k X, is given by: Before we move on, we must explain three issues concerning the expression above: the power of q labeled "conjugation", why y α = x α+1 = s α were increased by 1, and why the factor −q −2 arose. The first issue, namely the power of q, appeared from the diagonal terms of arbitrary conjugation matrices R σ and R −1 σ as in (5.24), where σ is any permutation which switches the variables {β + 1, ..., α} and {α + 1, ...k} (this is because in the definition of the coproduct, the variables to the left of the ⊗ sign must all have smaller indices than the variables to the right of the ⊗ sign). The latter two issues happened because of the presence of Q α,α+1 (z α /z α+1 ) in the fourth squiggle. Because Q α,α+1 (∞) has 0 on the diagonal, its contribution of minimal initial degree comes from the immediately off-diagonal terms, which are: Therefore, for any indices u and v, we have: Using (5.8), we may move certain ψ factors around in (5.60), in order to cancel the powers of q with underbraces beneath: As a consequence of Claim 5.18, we may write the expression above as: Symmetrizing the expression above with respect to all permutations σ ∈ S(k) which fix the set A gives rise to m.i.d. ∆(F ). 
To obtain the expression on the second line of (5.58), it remains to establish the formulas below (let u = s α + 1 and v = s β ): To prove the formulas above, we start with an easy computation: Claim 5.27. We have the identity: Let us first show how the Claim allows us to complete the proof of the Proposition. Because of (5.65), formula (5.64) is equivalent to: which are all straightforward consequences of our assumption on the triangle T . This completes the proof of formula (5.58) for the picture on the left in Figure  22 (as we said, the case of the picture on the right is analogous, and left to the interested reader). As for Claim 5.27, we start with the following identity: for all i, j, t ∈ Z. Indeed, by plugging in (5.26)-(5.27), formula (5.67) reads: which is elementary. Identity (5.65) follows by k − 1 applications of (5.67).
Proof of Theorem 5.24: We have already shown that, for fixed µ, the assignment: v) (RHS of (3.59)) ∀u, v. Therefore, Lemma 5.15 implies that the equality (3.59) would follow from: LHS of (3.59) ∈ B + (j+nl−i)/(k+k′) We may depict the degree vectors of the elements P (k) [i;j+nl) as: We need to show that all the hinges of summands of ∆(LHS of (3.59)) lie to the right of the vector (j − i + nl, k + k′). Since the coproduct is multiplicative, the hinges of ∆(XY ) are all among the sums of hinges of ∆(X) and ∆(Y ), as vectors in Z 2 . By definition, the hinges of ∆(P (k) [i;j) ) and ∆(P (k′) lδ,r ) lie to the right of the vectors (j − i, k) and (nl, k′), respectively. The sum of any two such hinges lies to the right of the parallelogram in the picture, except for the sum of the two hinges below: has a hinge at (j − i, k) ∆(P [i;j) ) both have a hinge at the point (j − i, k), but the corresponding summand in both coproducts is: We conclude that this summand vanishes in ∆(LHS of (3.59)), which therefore has all the hinges to the right of (j − i + nl, k + k′). This completes the proof of (3.59).
Let us now prove (3.60) by induction on k + k′ (the base case k + k′ = 1 is trivial). Recall that µ = (j + j′ − i − i′)/(k + k′), and let us represent the degrees of the P's as in the picture. We have the following formulas, courtesy of Proposition 5.26:

where the ellipsis denotes terms whose hinges lie to the right of the line of slope µ (this convention will remain in force for the remainder of this proof), and • denotes the natural number which makes the two sides of the expressions above have the same vertical degree. Letting LHS denote the left-hand side of (3.60), we have:

Relation (5.8) allows us to move ψ's around, and to show that the tensor on the first line which only contains P's vanishes, while the expression on the second line yields:

If we are only interested in the leading term ∆_µ of the coproduct, we may neglect the terms represented by the ellipsis. The induction hypothesis allows us to replace the term in square brackets by the RHS of (3.60) for (i, j, k) → (u, v, •):

Let c denote the constant in the round brackets on the second line above. By (5.32) and (5.33), notice that the formula above matches ∆_µ(RHS). By Lemma 5.15, to prove that LHS = RHS, it suffices to show that the two sides of equation (3.60) take the same values under the maps (5.19). To this end:

The equality between the right-hand sides of (5.69) and (5.70) was established in Claim 4.3 of [22]. This concludes the proof of (3.60), so there exists an algebra homomorphism Υ^+ as in (5.68). In the remainder of this proof, we need to show that Υ^+ is an isomorphism. As explained in [22] (following the similar argument of [2]), one may use relations (3.59) and (3.60) to express an arbitrary product of the generators (5.54) and (5.55) as a linear combination of products of the form:

(formula (3.29) implies that the number of products (5.71) with µ bounded above is precisely equal to the number in the RHS of (5.72)).
Combining this with Lemma 5.10, we conclude that the products (5.71) actually form a linear basis of A^+, and therefore A^+ is generated by the elements (5.54) and (5.55). This implies that Υ^+ is an isomorphism, since we showed in [22] that the products (5.71) also form a linear basis of D^+.
Let us now prove Claim 5.28. We assume that the basis vectors x^{(i)}_µ of B^+_µ are ordered in non-decreasing order of |hdeg|, i.e.:

Now suppose we have a non-trivial linear relation among the various products (5.71). We may rewrite this hypothetical relation as:

where all terms in the RHS have i′ < i. Since the coproduct is multiplicative, all the hinges of ∆(XY) are sums of hinges of ∆(X) and hinges of ∆(Y), as vectors in Z². Therefore, ∆(LHS of (5.74)) has a single summand with hinge at the lattice point:

and the corresponding summand is precisely:

(5.76) ∆(LHS of (5.74)) = ...
where ψ stands for a certain (unimportant) product of ψ_a^{±1}'s. Meanwhile, the coproduct of the RHS of (5.74) can only have a hinge at the lattice point (5.75) if equality is achieved in (5.73). The corresponding summand in the coproduct is:

Since x^{(i)}_µ cannot be expressed as a linear combination of the x^{(i′)}_µ with i′ < i, the right-hand sides of expressions (5.76) and (5.77) cannot be equal. This contradiction implies that there can be no relation (5.74), which proves Claim 5.28.
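The argument just given is an instance of the standard leading-term principle: a family of vectors, each possessing a distinguished coordinate (here, the distinguished hinge summand) on which the earlier members of the family vanish, is automatically linearly independent. A minimal numerical sketch, with made-up vectors standing in for the products (5.71):

```python
import numpy as np

# Toy version of the leading-term argument in Claim 5.28: each row has a
# "leading" coordinate (its pivot) at which all later rows vanish, so no
# nontrivial linear combination can cancel every leading coordinate.
vectors = np.array([
    [1, 5, 7, 0],   # leading coordinate 0
    [0, 2, 4, 9],   # leading coordinate 1
    [0, 0, 3, 8],   # leading coordinate 2
], dtype=float)

rank = np.linalg.matrix_rank(vectors)
print(rank)  # full rank: 3, i.e. the rows are linearly independent
```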
Corollary 5.29. The algebra A^+ is generated by the vdeg = 1 elements:

The corollary is an immediate consequence of Proposition 3.32 and Theorem 5.24.

6. The double shuffle algebra with spectral parameters
In the previous section, we constructed the extended shuffle algebra corresponding to the R-matrix (4.1). We will now take two such extended shuffle algebras and construct their double, as was done in Subsections 2.10 and 2.12 for R-matrices without spectral parameters.
6.1. Let q_+ = q and q_- = q^{−n}q^{−1}. If R_+(x) = R(x) is given by (4.2), then R_−(x) is given by:

Note that we have the equality:

Definition 6.2. The shuffle algebra A^− is defined just like in Definition 4.8, using q_- instead of q, and with the multiplication (4.9) using R_- instead of R.
6.5. We must now prove an analogue of Proposition 2.11; the main difficulty in doing so is (2.32): given X, Y ∈ End(V^{⊗k})(z_1, ..., z_k), we can still define the trace of XY, but the answer will be a rational function in z_1, ..., z_k. To obtain a number, one must integrate out the variables z_1, ..., z_k, and the choice of contours will be crucial.
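The sensitivity to the choice of contours can be illustrated numerically: the same integrand gives different answers depending on which poles the contour encloses. The sketch below (a generic single-variable toy, not the actual pairing integrals) evaluates (1/2πi) ∮ dz/z over two different circles:

```python
import numpy as np

# Toy illustration of why the choice of contours matters: the contour integral
# (1/2*pi*i) * \oint dz/z equals 1 over the unit circle (which encloses the
# pole at 0) but 0 over a circle centered at 3 (which does not).
def contour_integral(f, center, radius, n=20000):
    """Approximate (1/2*pi*i) * \\oint f(z) dz over a circle, by a Riemann sum."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2 * np.pi / n)
    return np.sum(f(z) * dz) / (2j * np.pi)

print(contour_integral(lambda z: 1 / z, 0.0, 1.0))   # ≈ 1: pole enclosed
print(contour_integral(lambda z: 1 / z, 3.0, 1.0))   # ≈ 0: pole not enclosed
```

The Riemann sum converges very quickly here because the integrand restricted to the circle is a smooth periodic function.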
Let us consider the following expressions, for any σ ∈ S(k):

Explicitly, the product in R_σ is taken by following the crossings in the positive braid lifting the permutation σ, while R_σ is defined as σ R_{σ^{−1}} σ^{−1}. It is elementary to prove the following equation, for all σ ∈ S(k):

where ω_k denotes the longest permutation. We may use R_σ^{−1} instead of R_σ in (5.24), because they both lift the permutation σ, and thus we obtain:

for all I_1, ..., I_k ∈ End(V)[z^{±1}] ⊂ A^±. By Corollary 5.29, any element of A^± is a linear combination of the shuffle elements (6.8), for various I_1, ..., I_k.
Note that the pairing ⟨X^+, Y^−⟩ is only non-zero for pairs of elements of opposite degrees, i.e. X^+ ∈ A_{d,k} and Y^− ∈ A_{−d,−k} for various (d, k) ∈ N × Z^n.
Proof. Formula (6.10) is well-defined as a linear functional in the second argument, while (6.11) is well-defined as a linear functional in the first argument. Therefore, to show that (6.9) is well-defined as a linear functional in both arguments, we only need to show that (6.10) and (6.11) produce the same result when X^± is of the form (6.8) (this statement implicitly uses Corollary 5.29, which states that any element in A^± is a linear combination of the elements (6.8)). To this end, we have:

1/(q − q^{−1})^k · I^+_1 * ...

for any A, B ∈ End(V). Taking the residue at z = q^{−2}, we obtain:

(6.18) Tr_{V⊗V}((12) A_2 · R^−_{12}(q^{−2}) · B_1) = 0

We may generalize the formula above to:

(6.19) Tr_{V^{⊗k}}((ij) A_{1...î...k} · R^−_{ij}(q^{−2}) · B_{1...ĵ...k}) = 0

for all i ≠ j and A, B ∈ End(V^{⊗(k−1)}). Formula (6.18) implies (6.19), because none of the indices other than the i-th and the j-th plays any role in the vanishing of the trace.
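The operation "taking the residue at z = q^{−2}" simply extracts the coefficient of (z − q^{−2})^{−1}. A generic symbolic sketch, applied to a made-up rational function (not an actual matrix coefficient of R^−(z)):

```python
import sympy as sp

z, q = sp.symbols('z q')

# Generic illustration of extracting a residue at z = q**-2, the operation
# used above to pass to (6.18). The rational function f is a hypothetical
# example with a simple pole at z = q**-2.
f = (q - 1/q) / (z - q**-2)
res = sp.residue(f, z, q**-2)
print(res)
```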
We will use the identity (6.19) to prove that the residue of (6.15) at z_i q_+^2 = z_j vanishes. Let us consider the expression on the second line of (6.15) for σ = ω_k and take its residue at z_i q_+^2 = z_j; the corresponding quantity is precisely represented by the top left braid in the picture on the next page, if we make the convention that the variable on each strand is multiplied by q_+^2 and q_−^2, respectively, as soon as it reaches each of the two boxes on that strand. All the braids on the top row, as well as the leftmost braid on the bottom row, are equivalent to each other due to Reidemeister moves and the move in Figure 12. The leftmost and middle braids on the bottom row do not represent equal endomorphisms of V^{⊗k}, but they are equal upon taking the trace (since Tr(AB) = Tr(BA)). Finally, the middle and rightmost braids on the bottom row are equal due to Reidemeister moves. Because of the identity (6.19), the bottom right braid has zero trace, thus yielding the required conclusion.
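The only algebraic input needed to identify the traces of the leftmost and middle braids on the bottom row is the cyclicity of the trace, Tr(AB) = Tr(BA), which is easy to check numerically (the random matrices below are stand-ins for the braid endomorphisms, not the actual R-matrices):

```python
import numpy as np

# Numerical check of the cyclicity of the trace, Tr(AB) = Tr(BA), used above
# to equate the traces of two braids that differ by a cyclic rotation.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))

assert np.isclose(np.trace(A @ B), np.trace(B @ A))
print("Tr(AB) == Tr(BA) verified numerically")
```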
Strictly speaking, the argument just given covers the case i = 2, j = 4 and k = 5, but it is clear that we may replace the strands labeled 1, 3, 5 by any number of parallel strands, thus yielding the situation of arbitrary i, j, k. More crucial is the fact that we have only shown the vanishing of the residue at z_i q_+^2 = z_j of the σ = ω_k summand of (6.15). The case of general σ would require one to insert the positive braid representing the permutation σω_k in the middle of the braids above, and the positive braid representing the permutation σ^{−1}ω_k at the bottom of the braids above. The braid moves involved are analogous to the ones just performed, the idea being to move the crossing between the blue and green strands to the very left of all other crossings. We leave the visual depiction of this fact to the interested reader, but we stress that we only need to check the vanishing of the residue for those i < j such that σ^{−1}(i) > σ^{−1}(j). This implies that the green and blue strands do not cross other than the two times already depicted in the braids above, which is what allows the argument to carry through.

Proof. The proof follows that of Proposition 2.11 very closely, so we will only sketch the main ideas and leave the details to the interested reader. Take any:

a, b ∈ {X^+, S^+(x), T^+(x), for X ∈ A^+}
c ∈ {X^−, S^−(x), T^−(x), for X ∈ A^−}

and define ⟨ab, c⟩ to be the RHS of (2.28). Then if Σ_i a_i b_i = 0 holds in A^+, we must show that the pairing thus defined is 0. If at least one of a, b, c is either S^±(x) or T^±(x), then the statement in question is proved just like in Proposition 2.11, if one is careful to expand x around ∞^{±1} (since (6.10)-(6.11) also involve integrals, we need to stipulate that x should be closer to ∞^{±1} than any of the variables z_1, ..., z_k).
The remaining case is when a, b, c are all in A^±, and we must prove that:

Proof. Formulas (6.27) and (6.28) are proved just like (2.42) and (2.43) (the presence of the variable c is due to the twist in the coproduct (6.7)). Finally, (6.29) is proved precisely like (2.44), and we leave the details to the interested reader.
Proposition 6.10. We have an isomorphism of vector spaces A = A^+ ⊗ A^0 ⊗ A^−.
Proof. The Proposition says that, for all a ∈ A^− ⊕ A^0 and b ∈ A^+ ⊕ A^0, we have:

with |vdeg x^+_i| ≤ |vdeg a| and |vdeg x^−_i| ≤ |vdeg b| for all i. We will prove this fact by induction on d = min(|vdeg a|, |vdeg b|). The base case d = 0 follows from (6.5)-(6.6) and (6.27)-(6.28), while the case |vdeg a| = |vdeg b| = 1 follows from (6.29). Without loss of generality, we may use Proposition 3.32 to reduce to the case a = a_1 a_2, where |vdeg a_1|, |vdeg a_2| ∈ {1, ..., |vdeg a| − 1}. Therefore, the induction hypothesis implies that:

We may apply the induction hypothesis to write a_1 x^+_i as in the RHS of (6.30), obtaining:

ab = Σ_{i,j} y^+_j y^0_j y^−_j x^0_i x^−_i

and finally, formulas (6.27)-(6.28) allow us to move y^−_j to the very right of the expression, thus completing the induction step.
Proof of Theorem 1.4. Theorem 5.24 (and its analogue when A^+ is replaced by A^−) gives rise to algebra isomorphisms:

Υ^± : D^± → A^±

Moreover, D^0 ≅ A^0, because they have the same generators (the series S^±, T^±) satisfying the same relations. Therefore, we obtain an isomorphism of vector spaces:

To show that Υ is an algebra isomorphism, one needs to show that:

for all a ∈ A^− ⊕ A^0 and b ∈ A^+ ⊕ A^0, and we will do so by induction on |vdeg a| + |vdeg b|. The base cases follow by comparing (3.65) and Proposition 3.28 with (6.29), (6.5)-(6.6) and (6.27)-(6.28). As for the induction step, it is proved akin to that of Proposition 6.10, by using Corollary 5.29 to conclude that either a or b can be written as the product of two elements whose |vdeg| is strictly smaller. We