Determinant Form of Correlators in High Rank Integrable Spin Chains via Separation of Variables

In this paper we take further steps towards developing the separation of variables program for integrable spin chains with gl(N) symmetry. By finding, for the first time, the matrix elements of the SoV measure explicitly we were able to compute correlation functions and wave function overlaps in a simple determinant form. In particular, we show how an overlap between on-shell and off-shell algebraic Bethe states can be written as a determinant. Another result, particularly useful for AdS/CFT applications, is an overlap between two Bethe states with different twists, which also takes a determinant form in our approach. Our results also extend our previous works in collaboration with A. Cavaglia and D. Volin to general values of the spin, including the SoV construction in the higher-rank non-compact case for the first time.


Introduction
Integrability has been found to appear more and more frequently in modern theoretical and mathematical physics, in a wide range of problems, sometimes very unexpectedly (see the ongoing seminar series [1]). To a large extent this is because these models hide beautiful and very rich mathematical structures. They are universal enough to accommodate the complicated combinatorics of Feynman diagrams in 4D and 3D gauge theories and at the same time describe the motion of classically extended objects in curved spaces, making the AdS/CFT correspondence almost manifest in many cases [2]. One of the main features of integrable models is separability of variables: the possibility of choosing a rather non-trivial coordinate system in which the dynamics of the system simplifies dramatically and often can be solved exactly. At the quantum level this frequently implies the existence of a separation of variables (SoV) basis in the Hilbert space, in which the wave function factorises into simple universal blocks. Ultimately, this factorisation should allow one to compute non-trivial expectation values of various observables with qualitatively much less computational effort than solving the same problem in a direct brute-force way.
SoV methods for quantum spin chains were pioneered by Sklyanin in [3][4][5][6][7]. The generalisation to higher rank was also initiated by Sklyanin [8] and later extended by Smirnov [9], but its explicit realization for models such as the Heisenberg XXX spin chain took more time and required new tricks to be developed. One of the crucial steps was made in [10], where the basis factorising the stationary wave functions of the spin chain was built explicitly and it was also understood into which blocks the wave function factorises. These findings started a new wave of results in the subject [11-14, 16-19, 73] which remains very active. A large class of integrable models can be related in one way or another to a generalisation or deformation of the Heisenberg spin chain; even a model as complicated and powerful as planar N = 4 SYM is essentially a version of the XXX spin chain with PSU(2,2|4) symmetry [20]. This makes the problem of understanding how separation of variables works in models with high rank symmetry pivotal for progress in a number of directions.
In order to compute non-trivial expectation values between two stationary spin chain states one needs an additional ingredient: the measure or, equivalently, the scalar product in the SoV basis. For gl(2) (rank 1) models it was found in numerous examples, but for higher rank it remained unknown for a long time. Only recently was it obtained for spin chains of any rank, in the series of papers [21] and [22]. The result for the scalar product was shown to take a very compact determinant form, and is also in perfect agreement with the general structure anticipated earlier from a semiclassical picture [23,24]. This finding also encouraged the development of an alternative approach to obtaining the measure via recursion relations for its elements in [25], which to a large extent was shown to be equivalent. In this paper we extend the results of [21,22] to non-compact spin chains with spin s, and also show how to use the measure to compute some very non-trivial overlaps and expectation values.
The SoV program has a strong motivation from the perspective of the AdS/CFT correspondence. At the moment it is well understood how to compute, using integrability, the exact non-perturbative spectrum of anomalous dimensions in planar N = 4 super Yang-Mills theory in 4D [26], and there are more examples of non-trivial QFTs which can be studied. At the same time much less is known about correlation functions: for very long operators there is the powerful hexagon approach [27], which, however, fails at a certain loop order. There are strong indications that in order to access truly non-perturbative correlation functions one should apply SoV methods [28][29][30][31], and in this paper we show how very similar objects can be efficiently computed in the SoV framework. Related overlaps were studied for spin chains in [32][33][34][35], and determinant representations for them are not known beyond the rank-2 sl(3) case. Here we derive a (different) determinant form for the overlaps at any rank of the symmetry group.
The paper is organised as follows. In Section 2 we discuss some generalities of the Heisenberg spin chain which will be used throughout the main text and explain how to implement the SoV procedure for sl(2) spin chains. In Section 3 we extend the previous sl(2) results to the sl(3) case and discuss in depth the details and subtleties which appear beyond rank 1, with the main focus being the construction of the SoV bases and the corresponding wave functions. In Section 4 we explain the functional approach to scalar products and demonstrate that this formalism matches what is expected from the operatorial construction of the wave functions in the SoV bases. In Section 5 we explain how to extract the measure in the SoV bases directly from the integral formalism and give an explicit formula for its matrix elements. In Section 6 we extend the previous construction of SoV states and wave functions to the sl(N) case and explain the technical aspects omitted from previous sections. In Section 7 we apply our new techniques to the computation of various observables, such as overlaps and correlation functions, and show that they take a simple determinant form. Afterwards, in the Outlook section, we present various interesting avenues for future research. We also include various appendices to supplement the main text.

Warm-up: Notations and sl(2) Example
In this section we collect our main definitions which we will be using in the remainder of the text.

Heisenberg spin chain generalities
We begin by introducing the Heisenberg spin chain. Throughout the text we will assume that the spin chain is built out of spins in some highest-weight (HW) representation of sl(N), with the same spin s at each site, which we define below in terms of the highest weight of the representation. Let E_ab be the generators of sl(N), satisfying

[E_ab, E_cd] = δ_cb E_ad − δ_ad E_cb .   (2.1)

We use the HW representation where E_ab|0⟩ = 0 for b > a, E_11|0⟩ = −s|0⟩ and E_bb|0⟩ = +s|0⟩ for b > 1, which we refer to as the spin s representation (the explicit form of the generators can be found in Appendix F). This class of representations is a generalisation of the symmetric powers of the defining representation, which are labelled by Young diagrams with a single row. Such finite-dimensional representations are obtained by setting s = 0, −1/2, −1, ... . Highest-weight representations of sl(N) can be constructed on the space of polynomials in some number of variables. In general this number is N(N−1)/2 [36], but for the class of representations we consider it reduces to N−1. Hence, our spin chain of length L has L(N−1) degrees of freedom.
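The commutation relations (2.1) are easy to check directly in the defining (matrix-unit) realization (E_ab)_{ij} = δ_ia δ_jb. The following numerical sketch is ours, not the paper's; it is just a consistency check of the relations as written:

```python
import numpy as np

# Verify the commutation relations (2.1),
#   [E_ab, E_cd] = delta_cb E_ad - delta_ad E_cb,
# in the defining matrix-unit realization (E_ab)_{ij} = delta_ia delta_jb.
N = 4

def E(a, b):
    m = np.zeros((N, N))
    m[a, b] = 1.0
    return m

def comm(A, B):
    return A @ B - B @ A

for a in range(N):
    for b in range(N):
        for c in range(N):
            for d in range(N):
                lhs = comm(E(a, b), E(c, d))
                rhs = (c == b) * E(a, d) - (a == d) * E(c, b)
                assert np.allclose(lhs, rhs)
```

The same relations are what the polynomial realizations used throughout the paper must reproduce.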
In terms of the Lax operator L(u) and a constant twist matrix Λ_ab we define the monodromy matrix

T_ab(u) = [Λ L_1(u − θ_1) ⋯ L_L(u − θ_L)]_ab ,   (2.3)

where θ_α are the inhomogeneities. The transfer matrix is then the following operator acting on the spin chain,

T(u) = Σ_c T_cc(u) .   (2.4)

The key property leading to the integrability of the model is that [T(u), T(v)] = 0, so the coefficients of the operator T(u) in the u expansion are integrals of motion. However, T(u), which is the transfer matrix in the fundamental representation, is a polynomial of degree L and does not contain the complete set of mutually commuting operators. To complete the set of mutually commuting integrals of motion we have to additionally introduce the transfer matrices in all anti-symmetric representations, which we denote T_{a,1}. The corresponding Lax operator L_{a,1}(u) is defined in (2.5) by antisymmetrising a copies of L over the multi-indices b̄ := {b_1, b_2, ..., b_a} and, similarly, c̄, and in the same way one defines the twist matrix in the anti-symmetric representation, Λ_{a,1}. We then define the transfer matrix in the antisymmetric representation T_{a,1} by using the corresponding L_{a,1} and Λ_{a,1} in place of the original fundamental representation building blocks in (2.3). Defined in this way, T_{a,1}(u) is a polynomial of degree a × L. However, for a > 1 one usually finds that T_{a,1}(u) contains trivial factors, see Appendix F. In particular, T_{N,1}(u) is simply proportional to the unit operator and is called the quantum determinant. After removing trivial factors, each T_{a,1} for a = 1, ..., N−1 can be reduced to a polynomial of degree L, and hence the total number of non-trivial conserved charges is L(N−1), precisely matching the number of degrees of freedom of the system and implying complete integrability.
Twist matrix. The purpose of twisting is to remove degeneracies in the spectrum of the integrals of motion. For this, a sufficient condition is that the twist matrix Λ has pairwise distinct eigenvalues λ_1, ..., λ_N. In principle the twist matrix could be any diagonalisable matrix with these eigenvalues, but for the purpose of this paper we assume the particular form

Λ_ij = (−1)^(j−1) χ_j δ_{i1} + δ_{i,j+1} ,   (2.8)

where the χ_j denote the elementary symmetric polynomials in the λ's,

∏_{j=1}^{N} (t + λ_j) = Σ_{j=0}^{N} χ_j t^{N−j} ,   (2.9)

and we further constrain the eigenvalues by λ_1 ⋯ λ_N = 1. One can perform a similarity transformation, simultaneously in the physical and auxiliary spaces, to bring the twist to diagonal form; the matrix which diagonalises Λ is the simple Vandermonde-type matrix

S_ij = λ_i^{N−j+1} .   (2.10)

However, for the purpose of separation of variables (2.8) is the most convenient choice, as we will see (see also [13,19]).
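Equation (2.8) is a companion-type matrix, so its characteristic polynomial is built directly from the χ_j. As a quick sanity check of (2.8)-(2.9) (this snippet is ours, not the paper's), one can build Λ numerically from eigenvalues with unit product and confirm that its spectrum is exactly λ_1, ..., λ_N:

```python
import numpy as np

# Build the companion twist matrix (2.8),
#   Lambda_ij = (-1)^(j-1) chi_j delta_{i1} + delta_{i,j+1},
# with chi_j the elementary symmetric polynomials of lambda_1..lambda_N,
# and check that its eigenvalues are exactly the lambda's.
lam = np.array([2.0, 3.0, 1.5])
lam = np.append(lam, 1.0 / np.prod(lam))   # enforce lambda_1 ... lambda_N = 1
N = len(lam)

# prod_j (t + lambda_j) = sum_j chi_j t^(N-j)  =>  np.poly(-lam) = [chi_0, ..., chi_N]
chi = np.poly(-lam)

Lam = np.zeros((N, N))
for j in range(1, N + 1):
    Lam[0, j - 1] = (-1) ** (j - 1) * chi[j]   # first row: alternating chi's
for i in range(2, N + 1):
    Lam[i - 1, i - 2] = 1.0                    # subdiagonal of 1's

assert np.allclose(np.sort(np.linalg.eigvals(Lam).real), np.sort(lam))
```

The trace of Λ is χ_1 = Σ_i λ_i, as expected for a matrix with eigenvalues λ_i.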
Wave functions. We will frequently refer to the eigenvectors of the transfer matrices simply as wave functions, and denote them |Ψ⟩ for the right eigenvectors and ⟨Ψ| for the left eigenvectors. We denote the corresponding eigenvalues as T_{a,1}(u), i.e.

T_{a,1}(u)|Ψ⟩ = T_{a,1}(u)|Ψ⟩ ,  ⟨Ψ|T_{a,1}(u) = T_{a,1}(u)⟨Ψ| .   (2.11)

Spectrum and Q-functions. The expressions for the transfer matrix eigenvalues can be conveniently written in terms of the so-called Q-functions or Q-polynomials. We define the Q-functions to be "twisted" polynomials,

Q_{i_1...i_m}(u) = (λ_{i_1} ⋯ λ_{i_m})^{iu} ∏_{k=1}^{M_{i_1...i_m}} (u − u_k^{i_1...i_m}) .   (2.12)

The numbers u_k^{i_1,...,i_m} are the Bethe roots, and they can be found, for example, from the Bethe ansatz (BA) equations, which follow the pattern

Q_θ(u_k^1 − is)/Q_θ(u_k^1 + is) = − [Q_1(u_k^1 + i)/Q_1(u_k^1 − i)] [Q_12(u_k^1 − i/2)/Q_12(u_k^1 + i/2)] ,  k = 1, ..., M_1 ,   (2.13)

1 = − [Q_1(u_k^12 − i/2)/Q_1(u_k^12 + i/2)] [Q_12(u_k^12 + i)/Q_12(u_k^12 − i)] [Q_123(u_k^12 − i/2)/Q_123(u_k^12 + i/2)] ,  k = 1, ..., M_12 ,

...

and we have introduced the Baxter polynomial Q_θ(u) = ∏_{α=1}^{L}(u − θ_α). As the above BA equations originate from a nesting procedure, they contain some arbitrariness: at the m-th step of nesting one can choose one of the N−m "vacua". This arbitrariness results in the existence of a large number of equivalent BAs related by the duality relations

Q_{I,i}(u + i/2) Q_{I,j}(u − i/2) − Q_{I,j}(u + i/2) Q_{I,i}(u − i/2) ∝ Q_I(u) Q_{I,i,j}(u) ,   (2.14)

where I is a multi-index containing 1. We also assume the boundary condition Q_{12...N} = 1 in the recursion relation (2.14). By evaluating (2.14) at u = u_k^{I,i} + i/2 and u = u_k^{I,i} − i/2 and then dividing the results by each other, we obtain the general form of the BA for the auxiliary Bethe roots,

−1 = [Q_I^[−1] Q_{I,i}^[+2] Q_{I,i,j}^[−1]] / [Q_I^[+1] Q_{I,i}^[−2] Q_{I,i,j}^[+1]] at u = u_k^{I,i} .   (2.15)

Above we introduced the standard notation

f^± = f(u ± i/2) ,  f^±± = f(u ± i) ,  f^[a] = f(u + ia/2) .   (2.16)

Having the Q-functions defined, one can express all eigenvalues T_{a,1}(u) in terms of Q's; in particular, the eigenvalue of the fundamental transfer matrix is given by a sum over the N nesting levels of ratios of Q-functions (2.17).
The expression (2.17) is indeed a polynomial of degree L when the BA equations (2.13) and (2.15) are satisfied. General expressions for T_{a,1}(u) can be found in Appendix C.

Warm-up example: the sl(2) case
To give a simple example of the known construction described above, and to set the stage for the more complicated and original sl(3) and sl(N) cases, in this section we consider the simplest sl(2) case, which is very well known in the existing literature; see for example [37,38].
Representation. We are considering general non-compact HW representations, where each site carries the spin s representation. The sl(2) raising and lowering operators are given respectively by

E_12 = ∂_x ,  E_21 = −x² ∂_x − 2s x ,   (2.18)

and the Cartan generators are

E_11 = −x ∂_x − s ,  E_22 = x ∂_x + s .   (2.19)

The representation space is then the space C[x] of polynomials in x, which is spanned by the monomials x^n, n ≥ 0. For generic s this space is irreducible and infinite-dimensional. However, for special values of s, in particular when s ∈ {0, −1/2, −1, ...}, the representation becomes reducible with a finite-dimensional irreducible part. It is obvious that the raising operator annihilates the constant state, and so the highest-weight state is simply given by the polynomial 1.
Scalar product. We define the scalar product ⟨·|·⟩ on our Hilbert space by introducing an orthonormal basis e_n, n = 0, 1, 2, ..., and imposing ⟨e_n|e_m⟩ = δ_nm. Naively one would take e_n = x^n as a normalised basis; however, this would not result in the correct conjugation properties for the generators. Instead one can define e_n = c_n x^n and require the matrix of the operator E_21 to be minus the transpose of the matrix of the operator E_12, i.e. E_21 = −E_12^T, where transposition is defined in the usual way, ⟨Ψ_1|O Ψ_2⟩ = ⟨O^T Ψ_1|Ψ_2⟩. Together with the requirement e_0 = 1 this fixes c_n, and we find

c_n = √( Γ(n + 2s) / (Γ(n+1) Γ(2s)) ) .   (2.20)

At this point we should mention that our scalar product does not involve any complex conjugation, and it is linear in both arguments. To promote it to Hermitian conjugation we need to impose that s is real. Furthermore, in order for Hermitian conjugation to lift to the spin chain Hilbert space one should make certain choices on the reality of the parameters of the model, such as the inhomogeneities θ_α and the twists λ_i. Instead, we will view our scalar product as simply defining the action of a dual vector on a vector.
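The normalisation (2.20) can be checked directly on a truncation of the representation. The sketch below is ours; it assumes the polynomial realization E_12 = ∂_x, E_21 = −(x²∂_x + 2sx) stated in our conventions, and verifies E_21 = −E_12^T in the e_n basis:

```python
import sympy as sp

# Check that the basis e_n = c_n x^n with
#   c_n = sqrt(Gamma(n + 2s) / (Gamma(n+1) Gamma(2s)))
# makes E_21 = -E_12^T, assuming the polynomial realization
#   E_12 = d/dx,  E_21 = -(x^2 d/dx + 2 s x)   (our conventions).
s = sp.Rational(3, 2)   # a sample generic spin
nmax = 6                # truncate the infinite-dimensional representation

c = [sp.sqrt(sp.gamma(n + 2 * s) / (sp.gamma(n + 1) * sp.gamma(2 * s)))
     for n in range(nmax + 1)]

# E_12 x^n = n x^(n-1)        =>  (E_12)_{n-1,n} =  n c_n / c_{n-1}
# E_21 x^n = -(n+2s) x^(n+1)  =>  (E_21)_{n+1,n} = -(n + 2s) c_n / c_{n+1}
E12 = sp.zeros(nmax + 1)
E21 = sp.zeros(nmax + 1)
for n in range(1, nmax + 1):
    E12[n - 1, n] = sp.simplify(n * c[n] / c[n - 1])
for n in range(nmax):
    E21[n + 1, n] = sp.simplify(-(n + 2 * s) * c[n] / c[n + 1])

assert all(sp.simplify(e) == 0 for e in (E12.T + E21))
```

Both matrix elements reduce to ±√(n(n + 2s − 1)), which is the standard "symmetrised" form of the sl(2) ladder operators.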
Bethe ansatz and Baxter equation. There is only one non-trivial transfer matrix in the anti-symmetric representations, the fundamental one, whose eigenvalue is given by

T(u) = [ Q_θ(u + is) Q_1(u + i) + Q_θ(u − is) Q_1(u − i) ] / Q_1(u) .

The above expression becomes polynomial when the BA equations are satisfied,

Q_θ(u_k + is) Q_1(u_k + i) + Q_θ(u_k − is) Q_1(u_k − i) = 0 ,  k = 1, ..., M .

Twist and the ground state. For the sl(2) case the expression (2.8) is simply

Λ = ( (λ_1 + λ_2 , −1), (1 , 0) ) ,

where we used χ_1 = λ_1 + λ_2 and χ_2 = λ_1 λ_2 = 1. For a diagonal twist the ground state, corresponding to Q_1(u) = λ_1^{iu}, would be just a constant polynomial. However, since our twist is non-trivial, the constant polynomial gets transformed into

|Ω⟩ = ∏_{α=1}^{L} λ_1^{iθ_α − s} (1 + x_α/λ_1)^{−2s} .   (2.24)

The normalisation here is chosen for later convenience, as we will see soon. Whereas these states are clearly not polynomial, they can be expanded into an infinite series. The Hilbert space should be understood as a completed space of polynomials, where such analytic functions, regular at the origin, are included. The scalar products of such states are computed as a limit of the scalar product of the truncated series, which additionally imposes (for s > 0) that the convergence radius of the series should be ≥ 1. In our particular case convergence is guaranteed for |λ_1| > 1, which we assume to be satisfied⁸. Then the overlap of ⟨Ω| with |Ω⟩ is

⟨Ω|Ω⟩ = ∏_{α=1}^{L} λ_1^{2iθ_α − 2s} (1 − λ_1^{−2})^{−2s} .   (2.25)

Excited states. The excited states (those with non-trivial Bethe roots) can be obtained by consecutive action of the B(u) = T_12(u) operator,

|Ψ⟩ ∝ B(u_1) B(u_2) ⋯ B(u_M) |Ω⟩ .   (2.26)

In the case of sl(2), the left eigenvectors of the transfer matrix can be built in the same way⁹,

⟨Ψ| ∝ ⟨Ω| B(u_1) B(u_2) ⋯ B(u_M) .

SoV basis. Another advantage of the non-diagonal twist (2.8) is that B(u) is diagonalisable. Furthermore, an important property of our twist, which we will use below, is that the B(u) operator does not actually depend on λ. Indeed, if we denote the untwisted¹⁰ monodromy matrix elements by T_ij, then the twisted monodromy matrix is given by

T̃_ij(u) = Σ_k T_ik(u) Λ_kj ,

and hence we see that T̃_12 = −T_11. In fact, this is how this twist was initially introduced in [13].
The eigenvectors of B(u) form the left and right SoV bases, which we denote ⟨x| and |x⟩. The B(u) operator has a very simple spectrum,

B(u)|x⟩ = −∏_{α=1}^{L} (u − θ_α − i n_α − i s)|x⟩ = −∏_{α=1}^{L} (u − x_α)|x⟩ ,  n_α = 0, 1, 2, ... ,   (2.29)

so that the x_α take the values x_α = θ_α + i s + i n_α, with n_α = 0, 1, 2, ... .
The SoV states can be uniquely labelled by the non-negative integers n_α, so one can also denote |x⟩ = |n_1, ..., n_L⟩. The SoV states |x⟩ are homogeneous polynomials of degree Σ_α n_α; we will also refer to this number as the SoV charge. There is only one state of SoV charge zero, which we call the SoV vacuum and denote |0⟩. Finally, we define b(u) = −B(u) in order to make this operator a monic polynomial.

⁸ Typically our results are analytic in the twist, so one may be able to go to other regimes by a careful analytic continuation.
⁹ In the earlier literature the operator C(u) = T_21(u) was used to create the left eigenstates. The reason for this was that, in the case of diagonal twist, B(u) would annihilate the left ground state. One can of course start from our current construction and diagonalise the twist by a global rotation; then, however, one will get a B_good(u) operator, which is a linear combination of all 4 matrix elements T_ab, as described in [10,39].
¹⁰ Corresponding to the case where the twist matrix is the identity operator.

Normalisation of the wavefunctions and SoV states. The reason we mainly use the non-diagonal twist (2.8) in this paper is to make the SoV basis simple. In particular, the SoV states are simply polynomials and the SoV vacuum is a constant. To fix the normalisation we define

|0⟩ = 1 ,  ⟨0| = 1 .   (2.31)

The transfer matrix ground states (2.24) are already normalised so that

⟨Ω|0⟩ = ⟨0|Ω⟩ = ∏_{α=1}^{L} λ_1^{iθ_α − s} .

We fix the normalisation of the excited states by

|Ψ⟩ = ∏_{k=1}^{M} b(u_k)|Ω⟩ ,  ⟨Ψ| = ⟨Ω| ∏_{k=1}^{M} b(u_k) .   (2.33)

It remains to fix the normalisation of the excited SoV states ⟨x| and |x⟩. We notice that, as a consequence of (2.33) and (2.29), the overlap ⟨x|Ψ⟩ is equal to ⟨x|Ω⟩ times a product of factors (x_α − u_k) over all α and k, and similarly for ⟨Ψ|x⟩. We fix the remaining scale of the SoV states by requiring

⟨x|Ω⟩ = ⟨Ω|x⟩ = ∏_{α=1}^{L} λ_1^{i x_α} .   (2.35)

Even though this normalisation of the SoV basis does not look very natural, it actually makes the SoV states independent of λ_1, as was initially shown in [13].
For example, for the case L = 1 the SoV states have to be proportional to x_1^{n_1}; fixing the coefficient according to (2.35) we get

⟨x|_{L=1} = (−x_1)^{n_1} ,   (2.36)

which indeed does not depend on the twist. For the most general proof of this see [13], and some details also in Appendix A. Finally, in this normalisation we get

⟨x|Ψ⟩ = ⟨Ψ|x⟩ = ∏_{α=1}^{L} Q_1(x_α) ,   (2.37)

which indeed shows that in the SoV basis the wave functions factorise into a product of Q-functions. For the general sl(N) case we will use analogous conventions for the normalisations.
Measure and scalar products in the SoV basis. Since the normalisation of the SoV states is completely fixed, their overlap could be a non-trivial number. As ⟨x| and |x⟩ are left and right eigenstates of the same operator B(u), states with different eigenvalues are orthogonal. The overlap ⟨x|x⟩ = M_x^{−1} is non-trivial and is given by the explicit expression (2.38), which involves the Pochhammer symbol (f)_s = Γ(f+s)/Γ(f) and the Vandermonde determinant

∆(x) = ∏_{α<β} (x_α − x_β) .

This result will be derived in Section 4.1 using an integral representation of the scalar product. At the same time, we expect it to match the overlaps ⟨x|x⟩ computed directly, and we have checked this for all states for L = 1, and also for states with SoV charge ≤ 2 for L = 2. For example, by explicitly diagonalising the B(u) operator for the L = 2 case and considering the states with n_1 = 1, n_2 = 0, we find that the right eigenstate |x⟩ and the left eigenstate ⟨x| are degree-one polynomials in x_1, x_2 with coefficients c_1, c_2, where θ_12 = θ_1 − θ_2. The normalisations c_1, c_2 are fixed by requiring (2.35), which gives c_1 = −1, c_2 = iθ_12/(2s − iθ_12). Finally, computing the inverse of their overlap, we arrive at the completeness relation

1 = Σ_x |x⟩ M_x ⟨x| ,

which is crucial for the computation of various scalar products, as we will see below. As an example, we can write the overlap of left and right transfer matrix eigenstates ⟨Ψ_A| and |Ψ_B⟩, corresponding to Q-functions Q_1^A and Q_1^B, as

⟨Ψ_A|Ψ_B⟩ = Σ_x M_x ∏_{α=1}^{L} Q_1^A(x_α) Q_1^B(x_α) ,

where we used the SoV wavefunctions (2.37).
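For reference, the Vandermonde determinant entering the measure has both a product form and a determinant form; the following quick numerical check is ours:

```python
import numpy as np

# Delta(x) = prod_{alpha < beta} (x_alpha - x_beta) equals the determinant
#   det_{alpha,beta} [ x_alpha^(L - beta) ]
# with decreasing powers across the columns.
x = np.array([0.7, 2.3, -1.1, 3.4])
L = len(x)

prod_form = np.prod([x[a] - x[b] for a in range(L) for b in range(a + 1, L)])
V = np.array([[xa ** (L - 1 - b) for b in range(L)] for xa in x])
assert np.isclose(prod_form, np.linalg.det(V))
```

It is this determinant structure of ∆ that ultimately feeds into the determinant form of the measure discussed next.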
Overlaps of off-shell states. Another representation for the measure (2.38) is

M_{n_1,...,n_L} = d(n_1, ..., n_L) / d(0, ..., 0) ,

where d(n_1, ..., n_L) is an explicit L×L determinant; this representation is equivalent to (2.38). The fact that the measure can be written as a determinant is quite significant, as it implies that some overlaps can be written as determinants as well.
For example, let us demonstrate that the overlap of any two states ⟨Φ| and |Θ⟩ which satisfy the separability condition, i.e.

⟨Φ|x⟩ = ∏_{α=1}^{L} F(x_α) ,  ⟨x|Θ⟩ = ∏_{α=1}^{L} G(x_α)

for some functions F and G of one variable, can be written in the form of a determinant. Indeed,

⟨Φ|Θ⟩ = Σ_x M_x ∏_{α=1}^{L} F(x_α) G(x_α) ,

which, by the determinant form of the measure, collapses to a single L×L determinant divided by d_0 = d(0, ..., 0). Examples of such states are off-shell algebraic Bethe states with two different twists,

⟨Φ| = ⟨Ω'| b(v_1) ⋯ b(v_{K_1}) ,  |Θ⟩ = b(w_1) ⋯ b(w_{K_2}) |Ω⟩ ,

with ⟨Ω'| the left ground state for a different twist. In particular, for the simplest case K_1 = K_2 = 0 and L = 1 we obtain the overlap of two ground states with different twists, which correctly extends the relation (2.25).
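The mechanism by which the SoV sum over factorised overlaps collapses to a single determinant is the standard Andréief (Cauchy-Binet) identity. The small numerical illustration below, with random data, is ours and is not the paper's formula; it only demonstrates the summation identity itself:

```python
import math
from itertools import product

import numpy as np

# Discrete Andreief / Cauchy-Binet identity:
#   (1/L!) sum_{n_1..n_L} det[f_i(n_j)] det[g_i(n_j)] = det[ sum_n f_i(n) g_j(n) ]
# This is the structure that turns sums over separated variables of two
# factorised determinants into a single L x L determinant.
rng = np.random.default_rng(1)
L, nvals = 3, 5
f = rng.normal(size=(L, nvals))   # f[i, n] plays the role of F-type factors
g = rng.normal(size=(L, nvals))   # g[i, n] plays the role of G-type factors

lhs = sum(np.linalg.det(f[:, list(ns)]) * np.linalg.det(g[:, list(ns)])
          for ns in product(range(nvals), repeat=L)) / math.factorial(L)
rhs = np.linalg.det(f @ g.T)
assert np.isclose(lhs, rhs)
```

Tuples with repeated values contribute vanishing determinants, and each set of L distinct values is counted L! times with matching signs, which is why the free sum reproduces Cauchy-Binet.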
The main goal of this paper is to show how to generalise these types of results to the sl(N) case. In the next section we extend our consideration to sl(3), mostly following the same steps as in this section.

sl(3) spin chain
In this section we describe our general construction for the case of sl(3). For brevity we will leave the proof of certain technical details until we describe the general sl(N) case. The main purpose of this section is to demonstrate the main ideas and our techniques. In comparison to the sl(2) case there will be new ingredients involved, such as the C(u) operator. Also, unlike in the sl(2) case, the SoV measure is unavoidably non-diagonal¹³ in the general case, but it nevertheless leads to simple determinant expressions for various correlators and overlaps, as we will see.

Representation
As we explained in the previous section, in this paper we consider HW representations on the space of polynomials. More specifically, we consider the representation of spin s, which we define in terms of operators acting on the space C[x, y] of polynomials in two variables as follows. Raising operators:

E_12 = ∂_x ,  E_13 = ∂_y ,  E_23 = x ∂_y .

Lowering operators:

E_21 = −x (x ∂_x + y ∂_y + 2s) ,  E_31 = −y (x ∂_x + y ∂_y + 2s) ,  E_32 = y ∂_x .

Cartan generators: it is convenient to repackage the sl(3) Cartan generators into the gl(3) Cartan generators

E_11 = −x ∂_x − y ∂_y − s ,  E_22 = x ∂_x + s ,  E_33 = y ∂_y + s .

In this way the generators satisfy the commutation relations (2.1). The HW state is simply the constant polynomial |0⟩ = 1, and the diagonal generators E_aa have the eigenvalues {−s, +s, +s} on the HW state. The eigenstates of the Cartan generators are homogeneous polynomials in x and y, and the lowering generators E_21 and E_31 increase the degree by 1 (while E_32 preserves it).

¹³ Different SoV bases which lead to a diagonal measure were constructed in [25], but to our knowledge these bases do not diagonalise any well-defined operators such as B and C, nor can the measure be efficiently extracted from the Baxter equation.
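The realization written above is our reconstruction (the paper's exact conventions are fixed in Appendix F); it can be checked against the commutation relations (2.1) and the stated highest-weight eigenvalues with a few lines of sympy:

```python
import sympy as sp

# Check that the polynomial realization of the gl(3) generators on C[x, y]
# satisfies [E_ab, E_cd] = delta_cb E_ad - delta_ad E_cb, and that the
# Cartan generators have eigenvalues (-s, s, s) on the HW state |0> = 1.
x, y, s = sp.symbols('x y s')

E = {(1, 2): lambda f: sp.diff(f, x),
     (1, 3): lambda f: sp.diff(f, y),
     (2, 3): lambda f: x * sp.diff(f, y),
     (3, 2): lambda f: y * sp.diff(f, x),
     (2, 1): lambda f: -x * (x * sp.diff(f, x) + y * sp.diff(f, y) + 2 * s * f),
     (3, 1): lambda f: -y * (x * sp.diff(f, x) + y * sp.diff(f, y) + 2 * s * f),
     (1, 1): lambda f: -x * sp.diff(f, x) - y * sp.diff(f, y) - s * f,
     (2, 2): lambda f: x * sp.diff(f, x) + s * f,
     (3, 3): lambda f: y * sp.diff(f, y) + s * f}

delta = lambda a, b: 1 if a == b else 0
test_fn = (1 + x + 2 * y) ** 3          # generic polynomial test function

for (a, b) in E:
    for (c, d) in E:
        lhs = E[(a, b)](E[(c, d)](test_fn)) - E[(c, d)](E[(a, b)](test_fn))
        rhs = delta(c, b) * E[(a, d)](test_fn) - delta(a, d) * E[(c, b)](test_fn)
        assert sp.expand(lhs - rhs) == 0

# highest-weight state: raising operators annihilate 1, Cartans give (-s, s, s)
assert E[(1, 2)](sp.Integer(1)) == 0 and E[(2, 3)](sp.Integer(1)) == 0
assert [E[(a, a)](sp.Integer(1)) for a in (1, 2, 3)] == [-s, s, s]
```

The same check applied to a single monomial x^n y^k also reproduces the conjugation properties used in the scalar product below.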
Lax operators. For sl(3) there are two non-trivial Lax operators L_{a,1} in the anti-symmetric representations. Denoting L_{1,1} simply as L, it is an easy calculation to show directly from the definition of L_{a,1} (2.5) that L_{2,1}(u) is expressed in terms of L^t, where t denotes the transpose of L written as a 3×3 matrix.

Scalar product
Like in the sl(2) case, we define the scalar product by introducing an orthonormal basis

e_{n,k} = x^n y^k √( Γ(n + k + 2s) / (Γ(n+1) Γ(k+1) Γ(2s)) )   (3.10)

and define the bracket ⟨·|·⟩ by

⟨e_{nk}, e_{n'k'}⟩ = δ_{nn'} δ_{kk'} .   (3.11)

As any polynomial can be expressed as a finite linear combination of the e_{nk}, this defines the scalar product on the space of all polynomials in x and y. It also defines the scalar product between a polynomial and any function analytic at the origin. In order for the scalar product between two analytic functions to be finite, one should impose some constraints on the convergence radius. More precisely, we need to require that the limit of the scalar products between the two truncated expansions is finite. Like in the sl(2) case, the factor of gamma functions in (3.10) is needed to ensure that the generators E_ab are either self-conjugate or anti-self-conjugate to E_ba. This requirement fixes (3.10) completely (up to an overall real factor). Similarly to the sl(2) case, we find the following conjugation properties of the generators: E_12^T = −E_21, E_13^T = −E_31 and E_23^T = +E_32. Finally, since the Cartan generators act diagonally, we also have E_aa^T = +E_aa for a = 1, 2, 3.

Transfer matrix and Integrability
Having the representation defined, we follow the general steps outlined in Section 2.1. In this section we explicitly write some of the expressions from Section 2.1 and give a few more details specific to the sl(3) case. For the case N = 3 the twist matrix (2.8) becomes

Λ = ( (χ_1, −χ_2, χ_3), (1, 0, 0), (0, 1, 0) ) ,   (3.12)

with χ_3 = λ_1 λ_2 λ_3 = 1. As before, it can be brought to diagonal form and has eigenvalues λ_1, λ_2 and λ_3 = 1/(λ_1 λ_2). The transfer matrix (2.3) is a differential operator in the 2L variables x_α, y_α, α = 1, ..., L. The complete set of conserved quantities is contained in the two non-trivial transfer matrices, in the fundamental, T_{1,1}(u), and anti-fundamental, T_{2,1}(u), representations. The transfer matrix T_{3,1}(u), corresponding to the totally antisymmetric representation, does not contain any new conserved quantities, but is a non-trivial function of u,

T_{3,1}(u) = Q_θ(u − is + i) Q_θ(u + is − i) Q_θ(u + is) I ,   (3.13)

where I denotes the identity operator.
Bethe Ansatz and the Transfer Matrix Eigenvalues. The set of Bethe Ansatz equations (BAE) relevant for our discussion is the one written above, where the twisted Baxter polynomials are

Q_1 = λ_1^{iu} ∏_{k=1}^{M_1} (u − u_k^1) ,  Q_12 = (λ_1 λ_2)^{iu} ∏_{k=1}^{M_12} (u − u_k^12) ,  Q_13 = (λ_1 λ_3)^{iu} ∏_{k=1}^{M_13} (u − u_k^13)

(from (2.14) one should have M_1 = M_12 + M_13). Note that for the purposes of finding the spectrum the first two equations are usually sufficient. However, for the SoV construction we describe below one also needs to find Q_13, which is a dualised Baxter polynomial corresponding to an alternative nesting path in the nested BAE terminology.
Once the Q-functions are known, the eigenvalues of the transfer matrices are given by simple expressions (e.g. (2.17)), for which it is convenient to introduce a compact notation together with the corresponding Baxter equations and analyticity requirements. This set of requirements provides a way to define the Bethe roots, alternative to the Bethe ansatz: one first finds the Baxter polynomials satisfying the above equations and the analyticity requirements.
Ground state wave function. The ground state of the transfer matrix, i.e. the state corresponding to the trivial Baxter polynomials with M_1 = M_12 = M_13 = 0, is particularly simple. If we were considering a diagonal twist, it would simply be a constant polynomial. As our twist is non-diagonal, but diagonalisable, the ground state can be obtained as a result of a rotation of the constant function with a GL(3) group element, and is a non-trivial function, like in the sl(2) case (2.24). Instead of diagonalising the twist and rotating the ground state, it is simpler to construct T_{1,1}(u) explicitly for the length L = 1 case first. Then, requiring that |Ω_{L=1}⟩ is an eigenstate with the eigenvalue corresponding to trivial Q-functions, we obtain two first-order PDEs on |Ω_{L=1}⟩, fixing it uniquely up to a constant factor. Similarly, one can find the left ground state ⟨Ω_{L=1}|. Here the non-trivial overall normalisation is chosen so as to simplify the main results later on. These functions are analytic near the origin and can be expanded into a series in x and y. Note that their scalar product, obtained as a limit of the product of the truncated series expansions, is a non-trivial number, which we denote N_1,

⟨Ω_{L=1}|Ω_{L=1}⟩ = N_1 .   (3.26)

For general L the ground state is simply a tensor product of L copies of |Ω_{L=1}⟩ (or ⟨Ω_{L=1}|, for the left eigenvector).
Having the ground state explicitly will allow us to build the excited states by means of the creation operator B(u) [8,10], as we describe in the next section.

Wave functions and SoV
In this section we explain how to construct excited states of the transfer matrices by the action of a creation operator B(u) on the ground state, in analogy with the sl(2) case (2.26). The B(u) operator was first proposed in the context of SoV by Sklyanin in the seminal paper [8]. It was much later, in [10], that it was realised that the same operator can be used to diagonalise the transfer matrix and explicitly build its eigenstates. We will review this construction in this section and explain how it leads to the separation of variables.
Another operator, which was recently shown in [22] to also play a key role in the SoV construction, is the C(u) operator, which will be used to produce the dual SoV basis in the next section. Both of these operators have a similar structure in terms of the monodromy matrix elements:

B(u) = T_23(u) T_12(u−i) T_23(u) − T_23(u) T_22(u−i) T_13(u) − T_13(u) T_11(u−i) T_23(u) + T_13(u) T_21(u−i) T_13(u) ,   (3.27)

and

C(u) = T_23(u) T_12(u) T_23(u+i) − T_23(u) T_22(u) T_13(u+i) − T_13(u) T_11(u) T_23(u+i) + T_13(u) T_21(u) T_13(u+i) .   (3.28)

A simple observation, which one can immediately make from the form of the B and C operators, is that due to the particular choice of the twist matrix (3.12) neither of them depends on the twist eigenvalues λ_a, which can be checked by a direct calculation similarly to the sl(2) case [13].
Creating excited states with B(u). The key formula,

|Ψ⟩ ∝ ∏_{k=1}^{M_1} B(u_k^1) |Ω⟩ ,   (3.29)

where |Ψ⟩ is the transfer matrix eigenvector with eigenvalues as in (3.19) and the u_k^1 are the (momentum-carrying) Bethe roots corresponding to this state, was first found in [10] for the fundamental representation s = −1/2 and is valid for general s. Note that B(u) is built out of T_ab(u) and thus is a differential operator. Hence (3.29) implies that, once the momentum-carrying Bethe roots u_k are found (from the TQ-relations or from the Bethe ansatz equations), one can immediately build the corresponding eigenvector in terms of partial derivatives of the ground state, in full analogy with the sl(2) algebraic Bethe ansatz construction. This is a huge simplification in comparison with the old nested Bethe ansatz construction [41], which involves all the auxiliary roots and is a hybrid between the algebraic and coordinate Bethe ansatz constructions for sl(2).
In order to fix the normalisation of |Ψ⟩, it is convenient to extract a trivial scalar factor¹⁹ from the B(u) operator,

B(u) = −Q_θ(u + is − i) b(u) .   (3.30)

The remaining operator b(u) is a polynomial of degree 2L. After that we define

|Ψ⟩ = ∏_{k=1}^{M_1} b(u_k^1) |Ω⟩ ,   (3.31)

exactly like in the sl(2) case.

Eigenvalues of B(u).
Another key observation of [10], which generalises to our case, is that the eigenvalues of the operator B(u) are very simple. Because B(u), and thus b(u) (3.30), commute with themselves for different u, their eigenvalues are also polynomials,

⟨x| b(u) = ∏_{α=1}^{L} ∏_{a=1}^{2} (u − x_{α,a}) ⟨x| ,  x_{α,a} = θ_α + i s + i n_{α,a} ,   (3.32)

where the n_{α,a} are integers such that 0 ≤ n_{α,2} ≤ n_{α,1}. For convenience we also introduce x_{α,0} = θ_α + i s. As we have still not defined the normalisation of the SoV states ⟨x|, we fix it by requiring

⟨x|Ω⟩ = ∏_{α=1}^{L} ∏_{a=1}^{2} λ_1^{i x_{α,a}} .   (3.37)

¹⁹ The presence of this trivial factor stems from the fact that we consider a special class of representations of sl(3) on 2 variables instead of the 3 needed for generic representations, see [13,19].
²⁰ In order to find the eigenvectors one needs to know how various operators act on states to the right. As B(u) and C(u) are built out of the elements of the monodromy matrix, which in turn is built out of the generators of sl(3), we can easily transpose these operators by flipping the signs of some of the generators accordingly. As a result, the action of these operators to the right is also by partial derivatives. Equation (3.32) is simply a set of PDEs on the function ⟨x|. We will give some explicit examples below in Section 3.5.1.

which then results in²¹

Ψ(x) = ⟨x|Ψ⟩ = ∏_{α=1}^{L} ∏_{a=1}^{2} Q_1(x_{α,a}) ,

which is the very essence of separation of variables: our wave function is now explicitly a product of one-dimensional factors. Let us again emphasise the importance of the normalisation (3.37). It was shown in [13] that this normalisation ensures that there is no dependence on the twist eigenvalues λ_1, λ_2 in the SoV states ⟨x|. We will demonstrate this later on an explicit example for L = 2.
SoV charge operator. While B(u) and C(u) commute with themselves for different arguments u, they do not commute with each other, and it is this property which leads to a non-diagonal measure for sl(3). However, as we see from the definitions (3.27) and (3.28), they only differ by shifts in u and thus commute at large u. Whereas the leading coefficient is simply a constant, the subleading coefficients of B(u) and C(u) contain the same non-trivial operator N, which thus commutes with both B(u) and C(u) at any u. We refer to this operator as the SoV charge [22]; in the proper normalisation it satisfies

[N, B(u)] = 0 , [N, C(u)] = 0 ,

and so counts the excitations above the SoV vacuum. It is straightforward to find the explicit form of the operator N directly from its definition. We can deduce from this that the SoV states have to be homogeneous polynomials in x_α, y_α, with each x_α contributing one unit of SoV charge whereas each y_α adds two units.
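As a consistency check of this grading (a sketch under the assumption, used again in section 5, that the b(u) eigenvalues factorise as ⟨x| b(u) = ∏_{α,a}(u − x_{α,a}) ⟨x| with x_{α,a} = θ_α + is + i n_{α,a}), the eigenvalue Σ_{α,a} n_{α,a} of N can be read off from the subleading coefficient of b(u):

```latex
\langle x|\,b(u) \;=\; \prod_{\alpha=1}^{L}\prod_{a=1}^{2}\big(u-x_{\alpha,a}\big)\,\langle x|
\;=\;\Big[\,u^{2L}-u^{2L-1}\!\sum_{\alpha,a}x_{\alpha,a}+\dots\Big]\langle x|\,,
\qquad
\sum_{\alpha,a}x_{\alpha,a} \;=\; 2\sum_{\alpha=1}^{L}\theta_\alpha \;+\; 2iLs \;+\; i\sum_{\alpha,a}n_{\alpha,a}\,,
```

so, after subtracting the state-independent constant 2Σ_α θ_α + 2iLs, the u^{2L−1} coefficient of b(u) is proportional to Σ_{α,a} n_{α,a}, in agreement with N ⟨x| = (Σ_{α,a} n_{α,a}) ⟨x|.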
Eigenstates of the B(u) operator. As shown above, the SoV states are polynomials. Since the ground state ⟨0| (i.e. the left eigenstate of B(u) with all n_{α,a} = 0) has SoV charge 0 according to (3.41), it must be a constant function. Furthermore, its normalisation is fixed by (3.37), which implies ⟨0| = 1 .
(3.43) ²¹We recall that we define the Baxter polynomials with the twist factors included, Q_1(u) = λ_1^{iu} ∏_k (u − u_k). All the other SoV states ⟨x| can be obtained by the consecutive action of transfer matrices on the SoV ground state ⟨0|. One way to see this is to observe that the eigenvalue of T_{2,1}(u) at the special values of the spectral parameter u = θ_α + is − i/2 simplifies (see (3.19)).²² Similar relations hold for the transfer matrices in higher representations. Denoting by T_{a,s} the transfer matrix corresponding to the rectangular a×s Young diagram and introducing the SoV creation operators A_{α,s}, we obtain, using (3.38) for the last equality, that the state ⟨0| ∏_{α,a} A_{α,n_{α,a}} has the same overlaps as ⟨x| with all eigenstates |Ψ⟩ of the transfer matrix, which of course means that they are equal²³:

⟨x| = ⟨0| ∏_{α,a} A_{α,n_{α,a}} . (3.48)

As A_{α,s} is obtained from the transfer matrix in the 2×s representation, which itself can be built out of s copies of the Lax operators L^{2,1}_{a,b}(u) at each site, we conclude that A_{α,s} is a partial differential operator of order s×L with polynomial coefficients, which makes equation (3.48) very convenient for building the SoV states.

Dual SoV states
In our construction the left eigenstates ⟨Ψ| are not related to the right eigenstates of the monodromy matrix in an obvious way. Consequently, the basis which separates the states ⟨Ψ| has to be built from scratch. ²²A simple way to verify this relation is by using the Hirota identity, which states that the eigenvalues of the transfer matrices in rectangular representations T_{a,s} satisfy T_{a,s}(u + i/2) T_{a,s}(u − i/2) = T_{a+1,s}(u) T_{a−1,s}(u) + T_{a,s+1}(u) T_{a,s−1}(u). Using the known eigenvalues T_{a,0} = 1, T_{0,s} = 1, T_{4,s} = 0 together with the antisymmetric transfer matrices T_{1,1}, T_{2,1}, T_{3,1}, one can find all T_{a,s}(u) recursively from the Hirota identity. Alternatively, one can use the Wronskian solution of the Hirota identity, which gives T_{a,s} explicitly in terms of the 3 Q-functions. ²³Note that it is manifest from this construction that the SoV states ⟨x| are rational functions of the spin s, since the transfer matrices are polynomial functions of the Lax operators (2.2), which are themselves polynomial functions of s.
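The recursive solution of the Hirota identity mentioned in the footnote can be sketched as follows (a toy sympy illustration with placeholder seed polynomials, not data from an actual spin chain):

```python
# Toy sympy sketch (our illustration, not the paper's code) of solving the
# Hirota identity for rectangular transfer-matrix eigenvalues,
#   T[a,s](u+i/2)*T[a,s](u-i/2) = T[a+1,s](u)*T[a-1,s](u) + T[a,s+1](u)*T[a,s-1](u),
# given the sl(3) boundary data T[a,0] = T[0,s] = 1, T[4,s] = 0 and the first
# column T[a,1]. The seed polynomials below are arbitrary placeholders chosen
# only to exercise the recursion, not actual spin-chain eigenvalues.
import sympy as sp

u = sp.symbols('u')

cache = {}
first_column = {1: u**2 + 1, 2: u**2 + 2, 3: sp.Integer(1)}  # hypothetical seeds

def T(a, s):
    """Eigenvalue T_{a,s}(u), generated recursively from the Hirota identity."""
    if s == 0 or a == 0:
        return sp.Integer(1)
    if a == 4:                      # for sl(3) the 4-row transfer matrices vanish
        return sp.Integer(0)
    if s == 1:
        return first_column[a]
    if (a, s) not in cache:
        # solve the Hirota identity for T_{a,s} in terms of the (s-1)-th column
        num = (T(a, s - 1).subs(u, u + sp.I / 2) * T(a, s - 1).subs(u, u - sp.I / 2)
               - T(a + 1, s - 1) * T(a - 1, s - 1))
        cache[(a, s)] = sp.cancel(sp.expand(num) / T(a, s - 2))
    return cache[(a, s)]

T12 = sp.expand(T(1, 2))
print(T12)   # a degree-4 polynomial built from the seed data
```

The recursion fills the (a, s) lattice column by column, exactly as described in the footnote; with genuine seed eigenvalues the same code would reproduce the physical T_{a,s}.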
In this section we build the right SoV basis |y⟩ as an eigenbasis of the C(u) operator. Like the original B(u) operator, it commutes with itself, [C(u), C(v)] = 0. We will again see that the spectrum of the operator C(u) is very simple. For example, the right SoV ground state |0⟩ is the only state with SoV charge N = 0, so it again must be a constant. We fix its normalisation so that |0⟩ = 1, or equivalently by the value of ⟨Ω|0⟩, where we introduced some notation for future convenience. The eigenvectors of C(u) can be constructed in a similar way to those of B(u), using the transfer matrices in antisymmetric representations as building blocks. Namely, we define the combinations (3.51), labelled by m_1 ≥ m_2, where µ'_a = 2 for a = 1, . . . , m_2 and is 1 otherwise. The combination (3.51) is reminiscent of the Cherednik–Bazhanov–Reshetikhin formula [42,43] for the transfer matrix in an irrep with Young diagram µ (with µ' the transpose of µ, see Fig. 3), which states (3.52). In (3.51) we had to replace i by −i in the shift of the argument; the reason for this replacement will become clear in section 6. As in the case of the eigenvalues of the operator B(u), we introduce "creation operators", which are simply properly normalised combinations of the integrals of motion (3.51), where as before µ'_1 = 2 for m_2 > 0 and 1 otherwise. The C(u)-operator eigenvectors |y⟩ are then built by acting with these creation operators, with eigenvalues again of factorised polynomial form, where we introduced the notation²⁴ for the corresponding eigenvalues y_{α,a}. The SoV charge operator N (3.42) appears in the subleading coefficient of the 1/u expansion of C(u), and so its eigenvalue is given by the sum of all m_{α,a},

N |y⟩ = Σ_{α,a} m_{α,a} |y⟩ . (3.57)

Finally, let us state the analogue of the relation (3.38) for the contraction of a transfer-matrix eigenstate ⟨Ψ| with an eigenstate of the C(u) operator.
For that we notice that ⟨Ψ| also diagonalises D_{α,m_1,m_2}, with an eigenvalue expressed through the Q-functions. We normalise ⟨Ψ| by fixing the value of ⟨Ψ|0⟩, after which we get a factorised expression for the wave function in the SoV basis, ⟨Ψ|y⟩. In particular, for the ground state we can read off the corresponding normalisation ⟨Ω|y⟩ of the right SoV states. Even though this normalisation looks rather complicated, as in the case of ⟨x| it ensures that there is no λ_a dependence in the |y⟩ states either (for a general proof see [13]). We will see some explicit examples below in section 3.5.1.

Overlap of the SoV states
In order to be able to use the factorised representation of the wave function, one also needs to know the measure in the SoV basis. We will see that, unlike in the sl(2) case, the left and right SoV states are not orthogonal to each other. Nevertheless, we can write an analogue of the sl(2) completeness relation (2.44) as

1 = Σ_{x,y} |y⟩ M_{y,x} ⟨x| ,

where M_{y,x} is an infinite set of nontrivial coefficients that form the SoV measure, a key part of the whole construction. As a matrix, it is the inverse of the infinite matrix of overlaps ⟨x|y⟩.
Knowledge of the matrix M_{y,x} would, in particular, allow the calculation of overlaps between two Bethe states Ψ_A and Ψ_B. The overlap matrix has in fact a nice and simple structure. First, due to the existence of the SoV charge operator it is block-diagonal. Second, each block is a triangular matrix for a particular ordering of the states ⟨x| and |y⟩. More precisely, the left and right SoV states are in one-to-one correspondence, as both are labelled by a set of 2L integers constrained by 0 ≤ n_{α,2} ≤ n_{α,1} and 0 ≤ m_{α,2} ≤ m_{α,1}. In section 5.2 we show that the overlap matrix becomes upper triangular when we order both sets of SoV states lexicographically with the words (n_{1,1}, n_{2,1}, . . . , n_{L,1}, n_{L,2}, . . . , n_{L,N−1}), and the same for the m's. We will see that in general it is a rather sparse matrix with elements accumulating near the diagonal. In section 5.2 we derive the general form of this matrix, giving an explicit relation for its matrix elements using an integral representation, generalising the results of [21,22].
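The underlying combinatorics of the block structure can be illustrated as follows (our own toy check, not the paper's code):

```python
# Toy combinatorics illustration: enumerate the left SoV labels
# (n_{1,1}, n_{1,2}, n_{2,1}, n_{2,2}) of the sl(3), L = 2 chain, subject to
# 0 <= n_{a,2} <= n_{a,1}, group them by the SoV charge N = sum of all n's,
# and sort each block lexicographically -- the ordering in which the overlap
# matrix becomes upper triangular. Block sizes give the dimensions of the
# diagonal blocks of <x|y>.
from itertools import product

L = 2

def sov_states(charge):
    """All admissible labels (n11, n12, n21, n22) with total SoV charge `charge`."""
    states = [s for s in product(range(charge + 1), repeat=2 * L)
              if s[1] <= s[0] and s[3] <= s[2] and sum(s) == charge]
    return sorted(states)  # lexicographic ordering

for N in range(5):
    block = sov_states(N)
    print(f"charge {N}: {len(block)} states")
```

For instance, at charge 2 the block consists of 5 states, so the corresponding diagonal block of the overlap matrix is 5×5.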
In the next section we report some experimental observations coming from length-two spin chains.

Length-two data
We explicitly realised the above construction for a spin chain of length L = 2. In particular, we computed all SoV states ⟨x| and |y⟩ up to charge N = 6 analytically.
At charge 2 the states become more complicated. We see that if we assign homogeneity weight 1 to x_α and weight 2 to y_α, the eigenstates are homogeneous polynomials of weight equal to the SoV charge. We also note that the SoV basis does not have any dependence on the twist eigenvalues λ_i, as anticipated previously.
In a similar way, the eigenstates of C(u) are labelled by the 4 integers m_{α,a}, and we denote them accordingly. For the right SoV vacuum we have a constant state; at SoV charge 1, and then at level 2, the states again become more complicated. We notice once again that both left and right SoV states are homogeneous polynomials of degree equal to the SoV charge. Let us give some examples of the overlaps. For SoV charges 0 and 1 we get a very simple expression; for charge 2 it is similarly simple, but we also get non-diagonal elements, ⟨x|y⟩|_{N=2} (3.75). In order to be able to compare with section 5.2, where we find an analytic expression for the measure elements, we need the inverse of the matrix (3.75); its entries are rational functions of s and θ_{12} = θ_1 − θ_2, involving factors such as −s, (2s+1)(2s ∓ iθ_{12})(∓iθ_{12} + 2s + 1) and θ_{12}(θ_{12} ± i). The overlaps of the SoV states do not depend on the twist eigenvalues λ_a, which shows the universality of these coefficients. As advertised previously, the matrix M_{y,x} has upper triangular form, i.e. all elements with m_{α,1} > n_{α,1} (for at least one α) are zero; furthermore, if m_{α,1} = n_{α,1}, one still gets zero whenever m_{α,2} > n_{α,2}. We computed explicitly the overlaps between the SoV states up to SoV charge 6. In figure 1 we indicate the non-zero elements with squares.
In section 5 we will explain how to obtain the overlap coefficients bypassing the explicit construction of the SoV states.

Integral orthogonality relations
In this section we describe the method of [21,28] for finding the SoV measure factor M_{y,x}, bypassing the explicit calculation of the overlaps of the SoV states and the subsequent matrix inversion. We first derive the so-called orthogonality relation, and then use it in the next section to find the matrix elements of the SoV measure explicitly.
The idea of the method of [21,28] is the following: imagine we knew the measure; then we would be able to compute the scalar products between left and right eigenvectors of the transfer matrix in terms of the Q-functions of the corresponding states. Due to the orthogonality of eigenstates with different eigenvalues, it then follows that a certain combination of Q-functions, corresponding to any two different states, vanishes; at the same time, the same combination built from the Q-functions of a single state should be non-zero. Without knowing about the SoV framework, it may even be surprising that such combinations exist. Conversely, if we find a combination of Q-functions with these properties which can be interpreted as an SoV product with some state-independent measure, it will most likely be unique, and thus should reproduce the SoV measure up to an overall factor.
In [22] it was shown, in the finite-dimensional case, how to build such combinations of Q-functions satisfying the orthogonality properties. The orthogonality relations were then interpreted as a system of linear equations for the measure matrix elements M_{y,x}, and by counting the equations it was argued that they fix the measure factors uniquely up to an overall factor. In the infinite-dimensional case it is harder to make a totally rigorous argument, as the system of equations becomes infinite. However, we will see that the dependence on the spin can be factorised, and thus it is sufficient to prove this statement in the finite-dimensional case only. Furthermore, the existence of the SoV charge implies that the measure factor is block-diagonal with each block of finite size, which indeed helps to extend the previous proof to the case of general spin s. We also explicitly verified our result for short lengths.
In this section we generalise the results of [21] for sl(N) spin chains in the simplest infinite-dimensional representation (i.e. s = 1/2 in our notation) to general values of s. We first discuss the sl(2) case and then move on to sl(3). Finally, in appendix C we give the generalisation to any sl(N)²⁵.

The sl(2) case
Before considering the non-trivial sl(3) case, we first re-derive the known sl(2) results in a way suitable for the generalisation in the next section. We follow the derivation of [21], which we extend to the case of general s > 0.

Integral form of the scalar product
In the sl(2) case the only nontrivial polynomial Q-function (with a twist factor) is Q_1, which satisfies the Baxter equation (4.1), following from (2.21) or (2.17). Here τ_1(u) = T_{1,1}(u) is the eigenvalue of the transfer matrix with the fundamental representation in the auxiliary space. Let us note that for general s only one of the two solutions of this equation is regular, namely the one of the form Q_1(u) = λ_1^{iu} × (polynomial in u). The main idea of our approach is to introduce a scalar product on functions of one variable with respect to which the difference operator Ô defining the Baxter equation (4.1) is "self-adjoint". Here D is the shift operator (4.3), with D^{±2} shifting the argument by ±i. We write this scalar product as

⟨⟨f g⟩⟩_α = ∫_{−∞}^{+∞} du µ_α(u) f(u) g(u) , (4.4)

and the self-adjointness property is

⟨⟨f Ô g⟩⟩_α = ⟨⟨g Ô f⟩⟩_α , (4.5)

where f and g are arbitrary twisted polynomials²⁶. This requirement constrains the integration measure µ_α. In fact we will find several such measure factors, and the index α labels the different possible choices. Writing out the l.h.s. of (4.5) explicitly as in (4.6), for the s = 1/2 case studied in [21] it was sufficient to assume that µ_α is i-periodic. Then one can shift the integration contour in each term of (4.6) up or down by i (assuming there are no poles which could give additional contributions) so as to remove the shifts of the argument from g. As a result one finds precisely the same operator Ô, now acting on f, which proves the self-adjointness property (4.5).
For generic s the i-periodicity of µ_α is obviously not sufficient. Assuming that we can shift the contour by ±i without picking up extra pole contributions, we find that µ_α has to satisfy a first-order difference equation, (4.7). Its general solution is of the form µ_α = p_α ε, (4.8), where ε is a fixed particular solution and the p_α are i-periodic functions. The factor ε is chosen so that it is analytic for all Im u > −s (assuming the θ_β are real), has poles at u = θ_β − is − in, n ≥ 0, zeros at u = θ_β + is − in, n ≥ 1, and behaves at infinity as a power ≃ u^{1−2s}. It remains to determine the i-periodic factors p_α. They have to be chosen such that 1) the integral is convergent, and 2) no extra poles contribute to the integral when we shift the contour.
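Since (4.7) and (4.8) are not reproduced above, the following is our schematic reconstruction of the type of relation involved (an assumption-laden sketch, with Q_θ(u) = ∏_β (u − θ_β)): shifting the contour by ±i exchanges the two shift terms of Ô provided the measure obeys a first-order difference equation, and a ratio of Gamma functions solves it,

```latex
\frac{\mu_\alpha(u+i)}{\mu_\alpha(u)}
  \;=\; \frac{Q_\theta(u+is)}{Q_\theta(u+i-is)}\,,
\qquad
\varepsilon(u) \;=\; \prod_{\beta=1}^{L}
  \frac{\Gamma\big(s-i(u-\theta_\beta)\big)}{\Gamma\big(1-s-i(u-\theta_\beta)\big)}\,.
```

Indeed, this ε has poles exactly at u = θ_β − is − in (n ≥ 0), zeros at u = θ_β + is − in (n ≥ 1), and reduces to ε = 1 at s = 1/2, consistent with the i-periodic measure of [21]; the precise form is of course fixed by the paper's own equation (4.7).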
For simplicity, let us assume s > 0 to ensure that the poles of ε lie below the real axis. Consider first the term p_α ε Q_θ^{[−2s]} g^{−−}: we need to make sure there are no poles in the strip 0 ≤ Im u ≤ 1. The only pole could come from the p_α factor; however, since there is always one zero u = θ_β + is − in, for some n ≥ 0, inside this strip, coming from ε × Q_θ^{[−2s]}, we can still allow p_α to have poles at u = θ_β + is − im, m ∈ Z. Similarly, in the term µ_α Q_θ^{[+2s]} g^{++} there should be no poles in the strip −1 ≤ Im u ≤ 0; this time ε has a dangerous pole at θ_β − is, but it is luckily cancelled by the factor Q_θ^{[+2s]}, and, as for the previous term, we can still allow p_α to have poles at u = θ_β + is − im.
Further constraints on p_α come from the convergence requirement. Assuming that both f and g behave as λ_1^{iu} u^t at infinity, and taking −π < arg λ_1 < 0 for definiteness, we see that in order for the integral to converge, p_α has to decay exponentially, faster than λ_1^{iu}, at u → +∞, while at the same time not growing faster than λ_1^{−iu} at u → −∞. Since, furthermore, we are only allowed to have simple poles at θ_β + is − im, the most general i-periodic functions with these properties span an L-dimensional space. Thus we conclude that there are L linearly independent measures with the specified properties, which we denote as

µ_α = ε / (1 − e^{2π(u − θ_α − is)}) , α = 1, . . . , L . (4.10)

Note that for this choice of basis, for any given α the poles of the measure µ_α in the upper half-plane are at u = is + in + θ_α with n = 0, 1, . . . . For the case s = 1/2 the expression (4.10) reproduces the result of [21].
Finally, let us point out that for the case 0 < arg λ_1 < π we would simply have to flip the sign in the exponent in the denominator of (4.10) to ensure convergence.
Orthogonality. Having established the self-adjointness property (4.5), we can now use standard arguments from linear algebra to deduce orthogonality relations for the Q-functions corresponding to different states²⁷. Consider two different transfer-matrix eigenstates labelled A and B, so that Ô_A Q^A_1 = 0 and Ô_B Q^B_1 = 0; then as a consequence of (4.5) we have ⟨⟨Q^A_1 (Ô_A − Ô_B) Q^B_1⟩⟩_α = 0 for each α. The only difference between the operators Ô_A and Ô_B comes from the transfer matrices, whose eigenvalues are polynomials in u whose nontrivial coefficients I^A_α are the eigenvalues of the integrals of motion. Thus (4.12) gives a linear system of equations on the differences of these integrals of motion,

Σ_{β=1}^{L} (I^A_β − I^B_β) ⟨⟨Q^A_1 u^{β−1} Q^B_1⟩⟩_α = 0 , α = 1, . . . , L . (4.14)

As the set of coefficients of τ_1 distinguishes the spin-chain state uniquely, at least one of the differences I^A_β − I^B_β has to be nonzero. This means that for A ≠ B the determinant of the linear system (4.14) should vanish,

det_{αβ} ⟨⟨Q^A_1 u^{β−1} Q^B_1⟩⟩_α = 0 . (4.15)

One can consider this identity as an orthogonality relation between the SoV wave functions, since the above determinant has exactly the right form! In the next section we clarify more precisely the link with the explicit construction of the SoV basis from section 2.2.
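The linear-algebra step can be spelled out schematically in the double-bracket notation of (4.4) (our paraphrase of the argument):

```latex
0 \;=\; \langle\langle Q^A\,\hat O_B\,Q^B\rangle\rangle_\alpha
      \;-\;\langle\langle Q^B\,\hat O_A\,Q^A\rangle\rangle_\alpha
  \;\overset{(4.5)}{=}\; \langle\langle Q^A\,\big(\hat O_B-\hat O_A\big)\,Q^B\rangle\rangle_\alpha
  \;=\; \sum_{\beta=1}^{L}\big(I^B_\beta-I^A_\beta\big)\,
        \langle\langle Q^A\,u^{\beta-1}\,Q^B\rangle\rangle_\alpha\,,
```

where the first two brackets vanish because Ô_A Q^A = Ô_B Q^B = 0, self-adjointness (4.5) converts the second bracket into ⟨⟨Q^A Ô_A Q^B⟩⟩_α, and in the last step the shift-operator parts of Ô_B − Ô_A cancel, leaving only the difference of the transfer-matrix polynomials. Since some difference I^B_β − I^A_β is nonzero for A ≠ B, the L×L homogeneous system has a nontrivial null vector, and its determinant must vanish.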

Comparison to the SoV basis construction
Let us demonstrate that the orthogonality property (4.15) is directly related to the SoV basis we constructed in section 2.2. Since the determinant (4.15) vanishes when A and B label different transfer-matrix eigenstates, we expect to identify it with the scalar product up to an overall state-independent constant factor N (which we can always introduce by rescaling the integration measure (4.10)). Let us relate the determinant to the SoV-basis representation of this scalar product given in (2.45). Each of the brackets in (4.15) is an integral over the real line whose integrand has asymptotics dictated by the measure and by the Q-functions, which are of the form λ_1^{iu} × polynomial. In section 2.2 we assumed |λ_1| > 1 in order to ensure that the states we constructed actually lie inside our Hilbert space (see the discussion after (2.24)); here this condition again plays a key role, as it allows us to close the integration contour in the upper half-plane. This means that the integral reduces to a sum over the poles of the measure at u = θ_α + is + in, n = 0, 1, 2, . . . . As a result, we find (4.17), where the sum runs over integers n_α ≥ 0 with x_α = θ_α + is + i n_α, and the coefficients M'_{n_1,...,n_L} are combinations of residues of the integration measure. We see that the arguments of the Q-functions in (4.17) are precisely the eigenvalues of the separated variables given by (2.30) in section 2.2, and that the expressions in the brackets match the SoV wave functions (2.37). It is now clear that (4.17) has exactly the same form as the scalar product ⟨Ψ_A|Ψ_B⟩ between two transfer-matrix eigenstates given in (2.45), with N × M'_{n_1,...,n_L} in place of the measure M_{n_1,...,n_L}. Thus we identify the coefficients in (4.17), obtained from evaluating the integrals, with the (inverse) overlaps of the SoV basis elements: M_{n_1,...,n_L} = N × M'_{n_1,...,n_L} (4.18).
In fact, when A and B run over all pairs of distinct eigenstates of the transfer matrix, (4.17) has to vanish, and we thus get an (infinite) system of linear equations that should be satisfied both by N × M'_{n_1,...,n_L} and by the inverse overlaps M_{n_1,...,n_L}. One can expect its solution to be unique up to an overall normalisation, which leads to (4.18). Since the overlap of the two SoV vacua is ⟨0|0⟩ = 1 in our conventions, we must have M_{0,...,0} = 1, and this fixes the coefficient N in terms of a combination of residues. We can then compute M'_{n_1,...,n_L} quite directly, again in terms of residues, reproducing the result (2.38) presented in section 2.2.

The sl(3) case
In this section we generalise the derivation of the integral form of the scalar product and of the orthogonality relations for the Q-functions to the sl(3) case.
For sl(3) we have two Baxter equations, (3.21) and (3.22), which were discussed in section 3.2. As in the previous section, we rewrite them in terms of two difference operators. One can then see that for large enough s there are no poles to pick up at all, and thus the l.h.s. of (4.24) is indeed zero. At the same time, since all terms under the integral are analytic functions of s, so is the integral. We thus conclude that the poles must cancel whenever they are present. In particular, for the case s = 1/2 the cancellation of the poles was explicitly verified in [21]; we extend this consideration to the case s > 0 in Appendix E.
Lastly, let us comment on the convergence of the integrals in (4.24) at large u. As already mentioned, below we will use this equation when f has one of two types²⁸ of large-u behaviour: either ≃ (λ_1 λ_2)^{iu} u^t or ≃ (λ_1 λ_3)^{iu} u^t. Similarly to the sl(2) case (see the discussion above (4.10)), we assume for definiteness that

0 < arg λ_2 − arg λ_1 < π , 0 < arg λ_3 − arg λ_1 < π . (4.25)

These conditions ensure that the integral in (4.24) converges for both choices of asymptotics of f. Also, as for sl(2), if e.g. the first inequality in (4.25) is violated, we should redefine µ_α for the case f ≃ (λ_1 λ_2)^{iu} u^t by flipping the sign in the exponent in the denominator of (4.10); similarly, when f ≃ (λ_1 λ_3)^{iu} u^t, we redefine the measure in the same way if the second inequality in (4.25) is violated.
Integral orthogonality relation for the Q-functions. We are now ready to derive the orthogonality relations for the Q-functions, in analogy with what was done for sl(2) in section 4.
where χ_a(λ) is a state-independent coefficient, simply the character of sl(3) in the a-th antisymmetric representation evaluated on the twist eigenvalues λ_1, λ_2, λ_3. With this notation, (4.26) gives (4.28) with a = 1, 2, where we introduced the multi-index (b, β), which takes 2L different values. Since for two different states at least one of the differences I^A_{b,β−1} − I^B_{b,β−1} has to be nonzero, we find ²⁸We also remind that in our conventions λ_3 = 1/(λ_1 λ_2).
that the determinant of the linear system (4.28) for these quantities should vanish, giving the relation (4.29). This is the key orthogonality relation, which generalises the sl(2) relation (4.15) to the sl(3) case.
This time, however, it is less obvious that (4.29) has the form of an SoV product with some measure factor. In section 5 we show that (4.29) indeed takes exactly the form one gets for the scalar product of two wave functions in the SoV basis built above in section 3. As in the sl(2) case, one can also argue that the number of orthogonality relations in (4.29) is large enough to guarantee that any element of the SoV overlap matrix ⟨y|x⟩ can be deduced from it. Indeed, since the entries of the matrix ⟨y|x⟩^{−1} are rational functions²⁹ of the spin s, we can explicitly solve for each block of fixed SoV charge of ⟨y|x⟩^{−1} by considering the finite-dimensional case s ∈ {0, −1/2, −1, . . . } with −s large enough, and then analytically continue the result for that block to general values of s. We will do this calculation in section 5.

General sl(N) case
The integral form of the scalar product obtained above for sl(2) and sl(3) spin chains can be generalised quite directly to any sl(N). In this section we just present the result for the orthogonality relation, while the details of the derivation are given in Appendix C. The result is almost identical to that obtained for the s = 1/2 case in [21] and reads (4.30): the vanishing of the determinant, over the multi-indices (a, α) and (b, β), of the corresponding double-brackets of Q-functions. Here the indices take values a, b = 1, . . . , N and α, β = 1, . . . , L. We remind that here D is the shift operator defined in (4.3). The only place where the s-dependence enters this expression is through the definition of the double-brackets (4.4), which contain the s-dependent factor µ_α given in (4.10). In (4.30) we also introduced the notation for Q-functions with upper indices,

Q^a = ε^{a b_1 . . . b_{N−1}} Q_{b_1 . . . b_{N−1}} ,

where ε is the fully antisymmetric tensor with ε^{12...N} = 1, and the indices are chosen as {b_1, . . . , b_{N−1}} = {1, . . . , N}\{a} with b_1 < · · · < b_{N−1}. For example, in the sl(3) case the functions Q^a appearing in (4.30) are Q^2 = −Q_{13} and Q^3 = Q_{12}, so that (4.30) reduces to the sl(3) result we gave above in (4.28).
In the next section we show how the relation (4.30) leads to an explicit expression for the SoV measure M_{y,x}.

Explicit formula for the SoV measure
In this section we establish the relation between the operatorial SoV approach, discussed in section 3 for the example of sl(3), and the integral orthogonality relation derived in the previous section 4. As a result, we obtain an explicit³⁰ formula for the SoV measure for general sl(N).

Comparison with the SoV construction for the L = 2 case
In order to see how the relation with the SoV approach works, we first study in detail the case of short length, L = 2, for the sl(3) spin chain. In the following sections we discuss spin chains of arbitrary length and then the general sl(N) case.

(5.3)
Note that all terms in the integral, except t_2({u_{α,a}}), are symmetric under permutations of the u_{α,a} for each α separately. Computing the determinant explicitly, we observe that up to permutations we have the equation (5.4), where ≃ indicates that the equality holds up to permutations. We also introduced the notation

F^{s_1,s_2}_α = Q_{12}(u_{α1} + is_1 + i/2) Q_{13}(u_{α2} + is_2 + i/2) − Q_{13}(u_{α1} + is_1 + i/2) Q_{12}(u_{α2} + is_2 + i/2) . (5.5)

³⁰The measure was implicitly obtained in [21,22], and later in [25].
We assume that the twists satisfy |λ_1| > |λ_2| > |λ_3|, as in section 3 (where this condition ensured that the states we built actually lie in the Hilbert space). This means that we can close the integration contour in the upper half-plane for all the integrals in (5.2), and evaluate them by picking up the poles of µ_α (defined in (4.10) and (4.8)) at

u = θ_α + is + in , n ≥ 0 .
At these points the factor µ_α has simple poles, with residues given by products of the Pochhammer-type factors defined in (2.39). For example, consider the first term in (5.4), which gives a contribution to the result in (5.2) which we can write as a sum over residues, (5.8)–(5.9), with

R^I_{n_{11} n_{12} n_{21} n_{22}} = (x_{11} − x_{21})(x_{12} − x_{22}) ∏_{α,a} [r_{α,n_{αa}} Q_1(x_{αa})] (5.10)
× [Q_{12}(x_{11} + i/2) Q_{13}(x_{12} − i/2) − Q_{13}(x_{11} + i/2) Q_{12}(x_{12} − i/2)] (5.11)
× [Q_{12}(x_{21} + i/2) Q_{13}(x_{22} − i/2) − Q_{13}(x_{21} + i/2) Q_{12}(x_{22} − i/2)] (5.12)

and x_{αa} = θ_α + is + i n_{α,a}. This already has a form familiar from the SoV approach (3.63) if we identify y_{α1} = x_{α1}, y_{α2} = x_{α2} − i, which gives (5.13). The normalisation factor can be fixed by requiring that for all n_{α,a} = 0 the r.h.s. gives the identity, resulting in (5.14). This is already a highly non-trivial check, as we should be able to reproduce all diagonal elements of the matrices in (3.74) and (3.75). For example, taking n_{2,1} = 2 and all other n_{α,a} = 0, we get from (5.13)

M^{0020}_{0020} = (θ_1 − θ_2 − 2i)/(θ_1 − θ_2) · r_{2,2}/r_{2,0} = −s (2s+1) (2s + iθ_{12}) (iθ_{12} + 2s + 1) / (θ_{12} (θ_{12} − i)) , (5.15)

which perfectly reproduces the (3,3) element of the matrix inverse to (3.75)! Here we introduced the notation M^{m_{11} m_{12} m_{21} m_{22}}_{n_{11} n_{12} n_{21} n_{22}} = M_{y,x}, where x and y are associated to the n's and m's in the usual way, (3.33) and (3.56). Now notice that for the SoV eigenvalues we have to impose the inequalities (3.34), whereas the sum in (5.8) runs over all n_{α,a} ≥ 0. To account for this, let us split the sum in (5.8) into 4 parts, depending on whether n_{α1} ≥ n_{α2} or n_{α1} < n_{α2} for each α = 1, 2. Only the part with n_{α1} ≥ n_{α2} for both α directly corresponds to (5.13). Now consider the case n_{11} < n_{12} and n_{21} ≥ n_{22}. As this violates the inequality (3.34), we relabel the summation indices, n_{12} ↔ n_{11}.
After this relabelling and slight rearrangements in (5.10) we get

R^I_{n_{12} n_{11} n_{21} n_{22}} = −(x_{12} − x_{21})(x_{11} − x_{22}) ∏_{α,a} [r_{α,n_{α,a}} Q_1(x_{αa})] (5.16)
× [Q_{12}(x_{11} − i/2) Q_{13}(x_{12} + i/2) − Q_{13}(x_{11} − i/2) Q_{12}(x_{12} + i/2)] (5.17)
× [Q_{12}(x_{21} + i/2) Q_{13}(x_{22} − i/2) − Q_{13}(x_{21} + i/2) Q_{12}(x_{22} − i/2)] , (5.18)

which under the identification y_{11} = x_{11} − i, y_{12} = x_{12}, meaning in terms of the n's and m's that m_{11} = n_{11} − 1, m_{12} = n_{12} + 1 and m_{2a} = n_{2a}, leads to

M^{n_{11}−1, n_{12}+1, n_{21}, n_{22}}_{n_{11}, n_{12}, n_{21}, n_{22}} = −(1/N) (x_{12} − x_{21})(x_{11} − x_{22}) ∏_{α,a} r_{α,n_{α,a}} . . . .

It is clear that this time each ordering of the n's contributes in the same way, so we can remove the factor 1/4 and assume n_{11} > n_{12} and n_{21} > n_{22} (when n_{11} = n_{12} or n_{21} = n_{22} we simply get zero). Repeating the same argument as before, we deduce

M^{n_{11}, n_{12}+1, n_{21}−1, n_{22}}_{n_{11}, n_{12}, n_{21}, n_{22}} = −(1/N) (x_{12} − x_{21})(x_{11} − x_{22}) ∏_{α,a} r_{α,n_{α,a}} (5.21)

and finally the last term in (5.4) gives the element M^{n_{11}−1, n_{12}, n_{21}, n_{22}+1}_{n_{11}, n_{12}, n_{21}, n_{22}}. We see that the structure of the result is very suggestive. In the next section we generalise the above derivation to general length L.

General-L expression for the sl(3) measure
In order to generalise the derivation of the previous section to any L, our starting point is again the determinant on the l.h.s. of (4.29), d_L = det_{(a,α),(b,β)}(· · ·). To evaluate it we use the identity (5.26), valid for two arbitrary tensors F and G with 3 indices, where the indices run over a, b ∈ [1, . . . , K] and α, β ∈ [1, . . . , L]. On the r.h.s. of (5.26) we sum over all permutations σ of the L copies of the ranges of numbers 1, . . . , K, with σ_{α,b} denoting the number at location b + (α − 1)K. We also indicated that on the r.h.s. the determinant is computed for the L×L matrix whose columns are labelled by β and whose rows are labelled by the pairs (α, b) such that σ_{α,b} = a (there are L such pairs). The relation (5.26) is easy to verify, and we will use it in the derivation below.
In application to our determinant (5.25), we get (5.27), where s_{α,a} = 1 − σ_{α,a} and ∆_b is the Vandermonde determinant built out of those u_{α,a} for which σ_{α,a} = b.
Finally, in order to bring this close to the SoV form, we use the fact that all terms in (5.24) (except for t_L) are invariant under u_{α,1} ↔ u_{α,2}. For t_L, interchanging u_{α,1} ↔ u_{α,2} is equivalent to interchanging Q_{12}(u_{α,1} + . . . ) with Q_{13}(u_{α,1} + . . . ), and Q_{13}(u_{α,2} + . . . ) with Q_{12}(u_{α,2} + . . . ), while changing the overall sign of t_L, as is clear from the initial determinant form of t_L. That is, this is equivalent to antisymmetrising Q_{12} and Q_{13} in the last factor under the product in (5.27), which then allows us to rewrite the result in terms of F^{s_{α,1}, s_{α,2}}_α: sym_{u_{α,1} ↔ u_{α,2}} t_L({u_{α,a}}) is given by (5.28). The expression (5.28) is now ready to go under the integration (5.24). Closing the contour in the upper half-plane and evaluating the integrals by residues, we pick up the poles coming from the factors µ_α at u_{α,a} = θ_α + is + i n_{α,a}, where the n_{α,a} ≥ 0 are otherwise unconstrained. By construction, the integrand is now invariant under u_{α,1} ↔ u_{α,2} for every value of α = 1, . . . , L, and so the residues are also symmetric under n_{α,1} ↔ n_{α,2}. Using this symmetry, we can remove the factor 1/2^L and impose 0 ≤ n_{α,2} ≤ n_{α,1}. The only potential problem could be that contributions with n_{α,2} = n_{α,1} are taken into account multiple times; we will see in a moment that they are not.

General expression for the measure
The general sl(N) case is almost completely clear after the previous two derivations. We start from the integral orthogonality relation (4.30) and pull out the L(N−1) integrations, the factors of Q_1(u_{α,a}) and the measure factors µ_α(u_{α,a}). Then we apply the identity (5.26) to the remaining integrand t_L({u_{α,a}}) and symmetrise over permutations of the u_{α,a} for each given α, which is equivalent to interchanging the indices of the Q's. Similarly to the sl(3) example, we arrive at the result (5.31) for sym_{{u_{α,1},...,u_{α,N−1}}} t_L({u_{α,a}}), where σ is a permutation of the (N−1)L numbers 1, 2, . . . , N−1, 1, 2, . . . . We also introduced the generalisation of (5.5) as an (N−1)×(N−1) determinant, (5.32), where s_{α,a} = 1 − σ_{α,a} ≤ 0. As in the sl(3) case, we then close the contour in the upper half-plane (which, strictly speaking, requires |λ_1| > |λ_2| > · · · > |λ_N|) and rewrite the integral (5.30) as a sum over the poles at u_{α,a} = θ_α + is + i n_{α,a} with n_{α,a} ≥ 0. At first, let us assume that for a given α all the n_{α,a} are distinct; then we can restrict ourselves to n_{α,1} ≥ n_{α,2} ≥ · · · ≥ n_{α,N−1} ≥ 0 using the symmetry of the integrand, removing the factors (N−1)! from the denominator of (5.31). If some of the n's coincide, the number of equivalent permutations of the n_{α,a} is smaller than (N−1)!, and we have to compensate for the overcounting by dividing, for each α, by the symmetry factor M_α defined in (5.33). Thus, up to an overall factor, we get (5.34), where x_{α,a} = θ_α + is + i n_{α,a} and y_{α,0} = θ_α + is. In analogy with sl(3) we define

y_{α,a} = y_{α,0} + i m_{α,a} − i(a − 1) , m_{α,1} ≥ · · · ≥ m_{α,N−1} ≥ 0 ,

with correspondingly s⁰_{α,a} = 1 − a. To match the two labellings we simply have to order the numbers n_{α,a} + s_{α,a}. Some observations are in order. Firstly, all the {n_{α,a} + s_{α,a}}_{a=1}^{N−1} must be distinct, as otherwise the determinant vanishes; this means that their ordering produces a unique permutation ρ^α. Secondly, the numbers on the l.h.s.
satisfy the strict inequalities m_{α,1} + s^0_{α,1} > m_{α,2} + s^0_{α,2} > ··· > m_{α,N−1} + s^0_{α,N−1} ≥ 2 − N, while at the same time n_{α,a} + s_{α,a} ≥ 2 − N, meaning that we will always be able to find a unique set {m_{α,a}} satisfying the required inequalities for any n_{α,a} and s_{α,a}. Explicitly, for each α we find

m_{α,a} = (sort{n_{α,b} + s_{α,b}})_{α,a} − s^0_{α,a},

where we introduced the notation sort for a function which implements the sorting permutation ρ^α_a. We need to keep track of the signature of the permutation ρ^α, as this affects the sign of the result. Thus we conclude that we get the contribution to the measure M^σ_{y,x} evaluated at m_{α,a} = (sort{n_{α,b} − σ_{α,b}})_{α,a} + a. In practice we will have to determine, for given properly ordered sets {n_{α,a}} and {m_{α,a}}, the value of the corresponding M_{y,x}. For that we have to sum over all possible permutations σ for which the relation between the m's and n's holds. Finally, we can simplify our result a bit by noticing that when we have a degeneracy in n_{α,a} we also have several σ's which give the same result, and their number is exactly M_α. So instead of summing over all σ's, it is simpler to sum over all inequivalent permutations of n_{α,a} (within each α). Denoting such permutations by k, we then get

M_{y,x} = ∑_{k = perm_α n} sign(σ) ( ∏_{a=1}^{N−1} Δ(x_{σ^{-1}(a)}) / Δ({θ_a}) ) ∏_{a=1}^{N−1} ( r_{α,n_{α,a}} / r_{α,0} ) |_{σ_{α,a} = k_{α,a} − m_{α,a} + a} .

(5.39)
To have all notations summarised in one place, let us recall that σ_{α,a} should be one of the ((N−1)L)!/(L!)^{N−1} permutations of the numbers 1, 2, 3, ..., N−1 repeated L times; otherwise we define sign(σ) = 0. We also define x_{σ^{-1}(a)} = {x_{α,b} : σ_{α,b} = a}. The signature sign(σ) is ±1 depending on the number of elementary permutations needed to bring the ordered set u_{σ^{-1}(1)} ∪ u_{σ^{-1}(2)} ∪ ··· ∪ u_{σ^{-1}(N−1)} to the canonical order u_{1,1}, u_{1,2}, ..., u_{L,N−1}. Whereas sign(σ) by itself could be ambiguous due to the different possible orderings inside σ^{-1}(a), its combination with the Vandermonde determinants Δ(x_{σ^{-1}(a)}) is well defined. Finally, r_{α,n}, which is defined in (2.39), is the only s-dependent factor.
In appendix G we give a simple Mathematica code which computes the measure element for given n's and m's, implementing (5.39).
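Complementing the Mathematica implementation in appendix G, here is a small Python sketch of just the sorting step (my own illustration; the function and variable names are not from the paper). For one site α it orders the numbers n_a + s_a in decreasing order, tracks the signature of the sorting permutation ρ, and reads off m_a = (sort{n_b + s_b})_a − s^0_a with s^0_a = 1 − a.

```python
# Hypothetical illustration of the sorting step for a single site alpha:
# order n_a + s_a (a = 1..N-1) in decreasing order, record the signature
# of the sorting permutation, and read off m_a.

def sort_with_sign(vals):
    """Return (sorted_desc, sign), where sign is the signature of the
    permutation bringing vals to decreasing order (0 if values repeat,
    in which case the determinant in the measure vanishes)."""
    idx = sorted(range(len(vals)), key=lambda b: -vals[b])
    sorted_vals = [vals[b] for b in idx]
    if len(set(vals)) < len(vals):
        return sorted_vals, 0  # degenerate configuration
    # signature via the inversion count of the sorting permutation
    inv = sum(1 for i in range(len(idx)) for j in range(i + 1, len(idx))
              if idx[i] > idx[j])
    return sorted_vals, (-1) ** inv

def m_from_n(n, s):
    """m_a = (sort{n_b + s_b})_a - s0_a, with s0_a = 1 - a (a = 1..N-1)."""
    sorted_vals, sign = sort_with_sign([na + sa for na, sa in zip(n, s)])
    m = [v - (1 - (a + 1)) for a, v in enumerate(sorted_vals)]
    return m, sign
```

For instance, for N = 3 with n = (0, 2) and s = (0, −1) the sorted values are (1, 0), the sorting permutation is a single transposition, and m = (1, 1).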

Extension to sl(N) spin chains
The integral representation gives a sharp indication of what the spectrum of the SoV operators in the sl(N) case should be and how they should give rise to the measure factor, for which we produced a prediction in the previous section. Here we extend the results described in section 4 for sl(3) to sl(N). We will also fill some gaps in the previous discussion. In particular, we demonstrate how to diagonalise the B and C operators by introducing new commutation relations between them and certain transfer matrices, generalising the relation first obtained in [13].
We begin by reviewing the key tools and relations and then proceed with the generalisation.

Quantum minors
[Figure: the matrix of SoV measure elements for SoV states up to SoV charge N = 10, with all non-zero elements denoted by a yellow pixel. Within each fixed SoV charge block the states x and y are ordered lexicographically according to the words (n_{1,1}, n_{2,1}, n_{1,2}, n_{2,2}, n_{1,3}, n_{2,3}). The matrix is upper triangular and has a fractal-like self-repetitive structure.]

A useful tool when dealing with the higher-rank sl(N) case are the so-called quantum minors, which are certain antisymmetric combinations of monodromy matrix elements, where "quantum" refers to the presence of extra shifts which disappear in the classical limit. The quantum minors T^{i_1...i_a}_{j_1...j_a}(u), a = 1, 2, ..., N, are defined in (6.1) as antisymmetrised sums,
where the sum is over all elements σ of the permutation group S_a of a elements. Note that these objects (6.1) can also be identified as elements of the monodromy matrices in antisymmetric representations. In other words, in (6.1) the fusion procedure [44] is performed directly at the level of the monodromy matrix instead of the Lax operators as in (2.5). The transfer matrices in antisymmetric representations T_{a,1}(u) are then obtained as sums of quantum minors of a given size. Transfer matrices in all other representations can be obtained through the recursive use of the Hirota identity if the representation is rectangular, or by means of the CBR formula [42,43] for a representation corresponding to a generic Young diagram μ = (μ_1, ..., μ_N), where as usual we have the constraints μ_1 ≥ μ_2 ≥ ··· ≥ μ_N, and we remind the reader that the transposed Young diagram μ′ is defined as in Figure 3.
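In the classical (shift-free) limit a quantum minor reduces to an ordinary minor; a minimal Python sketch (my own illustration, not the paper's code) checks that the antisymmetrised sum over S_a of products of matrix elements reproduces a numerical determinant.

```python
# Sketch (assumption: classical limit, i.e. no i-shifts): the antisymmetrised
# sum over sigma in S_a of sign(sigma) * prod_k M[i_sigma(k)][j_k] reproduces
# the ordinary a x a minor det(M[rows, cols]).
from itertools import permutations

def perm_sign(p):
    # signature via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return (-1) ** inv

def prod(xs):
    r = 1
    for x in xs:
        r *= x
    return r

def minor(M, rows, cols):
    a = len(rows)
    return sum(perm_sign(p) *
               prod(M[rows[p[k]]][cols[k]] for k in range(a))
               for p in permutations(range(a)))
```

For example, for M = [[1,2,3],[4,5,6],[7,8,10]] the 2×2 minor over rows (1,2) and columns (1,3) evaluates to 1·6 − 4·3 = −6.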

Separated variables, B(u) and C(u)
We now turn to the construction of the SoV bases ⟨x| and |y⟩, obtained by diagonalising the operators B(u) and C(u) respectively. The B operator B(u) is defined as in [8–10], where j_k = {j^1_k, ..., j^k_k}, k = 1, 2, ..., N−2, is a multi-index and we sum over all configurations with 1 ≤ j^1_k < j^2_k < ··· < j^k_k ≤ N−1. The sl(N) C operator is defined by an analogous expression. We see that the only difference between B and C is the order in which the minors appear and the associated shifts, and so the two operators coincide in the classical limit and constitute two different quantisations of the classical separated variables [45]. We will see later how this definition of the C operator, initially found in [22] for the su(3) case, comes about.
Untwisted operators and lowest-weight state. In order to progress we need to introduce the untwisted monodromy matrix elements T_ij(u), i.e.
If the representation is finite-dimensional, i.e. −2s ∈ ℕ, there exists a lowest-weight state ⟨0̄| in addition to the highest-weight state |0⟩. The untwisted operators T_ij have a simple action on both of these states, in particular

⟨0̄|T_ij(v) = 0, i > j,   ⟨0̄|T_kk(v) = Q_θ^{[2s]}⟨0̄|, k = 1, ..., N−1,   ⟨0̄|T_NN(v) = Q_θ^{[−2s]}⟨0̄|,   (6.8)

and

T_ij(v)|0⟩ = 0, i > j,   T_11(v)|0⟩ = Q_θ^{[−2s]}|0⟩,   T_kk(v)|0⟩ = Q_θ^{[2s]}|0⟩, k ≥ 2.   (6.9)

In [13] the B operator was shown to have a very simple form in terms of the untwisted monodromy matrix elements T_ij when we use the companion twist matrix (2.8). In particular, the quantum minors appearing in (6.5) were shown to factorise, which is easy to verify by direct calculation. Note that since we have chosen det Λ = 1, it follows immediately that B (and also C) is independent of the twist eigenvalues, and hence, in a properly chosen normalisation, so are their eigenvectors. Both B and C can be written explicitly in terms of the untwisted operators T_ij(u). We can also verify that ⟨0̄|B(u) ∝ ⟨0̄| and

C(u)|0⟩ = (−1)^{…} ∏_{k=1}^{N−1} ( Q_θ^{[2s+2(k−1)]} )^{N−1−k} |0⟩,   (6.14)

as follows immediately from (6.8) and (6.9). Like in the sl(3) case, we subsequently define b(u) and c(u) by removing trivial "non-dynamical" factors, writing C(u) = c(u) × (−1)^{…} and similarly for B(u). As in the sl(3) case, the operators c and b are polynomials, as will also be clear from what follows.

Commutation relation.
The key relation derived in [13], which allows us to show that B indeed generates separated variables, is its commutation relation (6.17) with the transfer matrices T_μ(u),³¹ in which the function f_μ(u, v) and the operator R′(u, v) appear; here h_μ denotes the height (number of non-zero rows) of the Young diagram μ, and "..." in R′(u, v) refers to non-zero terms irrelevant for the rest of our discussion. The key idea is that if ⟨Λ| is an eigenstate of B and ⟨Λ|T_{j1}(v) = 0, j = 1, ..., N, for some v, then the remainder term R′(u, v) in (6.17) vanishes, and so ⟨Λ|T_μ(v) will be a new eigenstate of B.
As a result of the properties (6.8), the natural state to start with to build up the eigenvectors of B is the lowest-weight state ⟨0̄|. Unfortunately, for generic values of the spin s we do not have a lowest-weight state, meaning the relation (6.17) is not directly applicable. On the other hand, as we will see, (6.17) can be modified to allow us to diagonalise B starting from the highest-weight state instead. In order to gain some intuition for the required modifications, we will first demonstrate how this relation can be used in the compact case s ∈ {0, −1/2, −1, ...} to diagonalise B, and then extend to general s.
Vanishing of the remainder term in the compact case. In this subsection we assume that s ∈ {0, −1/2, −1, ...} and hence a lowest-weight state ⟨0̄| with the properties (6.8) exists. As we have just stated, we need a state ⟨Λ| and a point v such that ⟨Λ|T_{j1}(v) = 0, and so we can take ⟨Λ| = ⟨0̄| and v = θ_α − is. Hence, the commutation relation (6.17) reduces to

⟨0̄| T_μ(θ_α − i(2s + μ_1 − μ′_1)) B(u) = f_μ(u, θ_α − is) ⟨0̄| B(u) T_μ(θ_α − i(2s + μ_1 − μ′_1)).   (6.19)

We can then replace B with b and use the fact that ⟨0̄| is an eigenvector of b with eigenvalue (Q_θ^{[2s]}(u))^{N−1} to conclude that ⟨0̄| T_μ(θ_α − i(2s + μ_1 − μ′_1)) is again an eigenvector of b, with the eigenvalue given in (6.20). What is not clear from this construction is whether ⟨0̄| T_μ(θ_α − i(2s + μ_1 − μ′_1)) is actually non-zero. It was explained in [19] how this requirement places strong constraints on the choice of Young diagram μ. Specifically, if μ is contained in the Young diagram [−2s, 0, ..., 0] describing the physical space, then T_μ(θ_α − i(2s + μ_1 − μ′_1)) has no vanishing eigenvalues, and so ⟨0̄| T_μ(θ_α − i(2s + μ_1 − μ′_1)) is non-zero. On the other hand, if μ is not contained in this diagram, then T_μ(θ_α − i(2s + μ_1 − μ′_1)) vanishes identically. Hence we conclude that our new eigenstate should be of the form

⟨0̄| T_{1,s′}(θ_α − i(2s + s′ − 1))   for any s′ ∈ {0, 1, ..., −2s}.   (6.21)

It was also demonstrated in [13] that we can construct a whole family of states in this manner. In particular, the remainder term will continue to vanish as long as a transfer matrix associated to a given θ_α is not used more than N−1 times [13]. The conclusion is that each vector of the form (6.22) is an eigenstate of b, as long as we choose each s^α_j to satisfy s^α_j ≤ −2s.

³¹ Strictly speaking, this relation as we have written it only holds when applied to states which are annihilated by T_{j1}(v) for some v. This will not affect any of our arguments, as we will only ever use it on such states. The precise details can be found in [13].
Since the transfer matrices commute, we are also free to arrange the order of the product so that s^α_1 ≤ s^α_2 ≤ ··· ≤ s^α_{N−1} for each α, and so by (6.20) we conclude that (6.22) is an eigenvector of b with the corresponding product of eigenvalues. Finally, we make one more remark regarding the independence of the SoV states of the twist eigenvalues. Clearly, the state ⟨0̄| is independent of the twist. Furthermore, it was demonstrated in [13] that every transfer matrix T_μ(v) with our choice of twist takes a particular factorised form; see Appendix A for more details. The twist part drops out when we act on states which are killed by T_{j1}(v), which is precisely the requirement that T_μ generate a new eigenstate of B, and hence B eigenstates constructed in this way are independent of the twist eigenvalues. Of course, we already know that this is the case since B is independent of the twist, but in addition, eigenvectors constructed as in (6.22) do not introduce twist into the normalisation. In this section we needed the lowest-weight state to construct the eigenstates of the B(u) operator. The existence of this state is only guaranteed for particular values of s. In the next section we will extend this construction to all values of s, circumventing the requirement of a lowest-weight state.
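As a simple consistency check on this counting (my own sketch, not from the paper): per site, the labels −2s ≥ s^α_1 ≥ ··· ≥ s^α_{N−1} ≥ 0 should be as numerous as the dimension C(−2s+N−1, N−1) of the symmetric representation [−2s, 0, ..., 0] of gl(N).

```python
# Sketch: count the per-site SoV labels K >= s_1 >= ... >= s_{N-1} >= 0
# (with K = -2s boxes) by brute force and compare with the dimension of
# the symmetric representation [K, 0, ..., 0], which is binomial(K+N-1, N-1).
from itertools import product
from math import comb

def count_sov_labels(N, K):
    """Number of weakly decreasing (N-1)-tuples with entries in 0..K."""
    return sum(1 for t in product(range(K + 1), repeat=N - 1)
               if all(t[i] >= t[i + 1] for i in range(N - 2)))

def dim_symmetric_rep(N, K):
    return comb(K + N - 1, N - 1)
```

For instance, for sl(3) with −2s = 2 both counts give 6 labels per site.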

The *-map
Now that we have reviewed how to diagonalise B in the compact case we turn our attention to the non-compact case and further explain how similar techniques can be used to diagonalise C.
We introduce the following map, which we call the *-map, which acts³² on monodromy matrix elements T_ij(u) as

T_ij(u + a) ↦ T_ij(u − a)   (6.25)

and furthermore reverses the order of products,

T_ij(u + a) T_kl(u + b) ↦ T_kl(u − b) T_ij(u − a),   (6.26)

for any a, b ∈ ℂ.
We will now discuss the key properties of this map, in particular how it acts on B and on the transfer matrices. First we will find how it acts on quantum minors, since these are the building blocks for both of these objects. From the definition (6.1) we find the transformation of the quantum minors (6.27), (6.28), and hence the map relates B(u) ↦ C(u)! This means that all results for the B(u) operator can be translated into new relations for C(u), which will allow us to diagonalise C(u) in the next section.
Next we examine the transfer matrices T_μ(u). Let us denote their image under the *-map by T*_μ(u). We first start with the transfer matrices in antisymmetric representations T_{a,1}(u). Since these transfer matrices are defined as sums of quantum minors, it follows immediately from (6.27), (6.28) that T*_{a,1}(u) = T_{a,1}(u), and so the full set of conserved charges is invariant under the *-map. Since transfer matrices in all other representations are built from the T_{a,1}(u), the image T*_μ(u) of a general transfer matrix is defined accordingly (6.31). Finally, let us recall that the remainder term R′(u, v) appearing in the commutation relation (6.17) is given by ∑_{j=1}^{N} T_{j1}(v) × (...) (6.32), and so under the *-map it becomes R̃′(u, v) = ∑_{j=1}^{N} (...) × T_{j1}(v). Combining these results, we immediately find that the commutation relation (6.17) transforms under the *-map to produce a new commutation relation (6.33), where g_μ(u, v) is defined analogously to f_μ(u, v). In the next subsection we will explain how to use this relation to construct the right SoV states |y⟩, the eigenstates of C(u).
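To make the action of the *-map concrete, here is a small Python sketch (a toy encoding of my own, not the paper's formalism): a product of factors T_ij(u+a) is stored as a list of (i, j, shift) tuples, and the map reverses the list while flipping the signs of the shifts. Applied twice it returns the original word, i.e. it is an involution on such words.

```python
# Toy model of the *-map: represent a product T_{i1 j1}(u+a1) ... T_{ik jk}(u+ak)
# as a list of (i, j, a) tuples. The map sends each T_ij(u+a) to T_ij(u-a)
# and reverses the order of the product.

def star(word):
    """Apply the *-map: reverse the product and flip every shift a -> -a."""
    return [(i, j, -a) for (i, j, a) in reversed(word)]
```

For example, star applied to T_{12}(u+1) T_{31}(u) T_{22}(u−2) gives T_{22}(u+2) T_{31}(u) T_{12}(u−1).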

Constructing the SoV bases
Diagonalising C. We will now explain how to diagonalise C using the commutation relation (6.33). By (6.9) we have that

T_{j1}(θ_α + is)|0⟩ = 0,   (6.35)

which itself implies that R̃′(u, θ_α + is)|0⟩ = 0. Applying (6.33) to this state then produces the relation (6.36), involving T*_μ(θ_α + (i/2)(2s + μ_1 − μ′_1)) C(u)|0⟩, where T*_μ is defined in (6.31). Replacing C with c and using the fact that |0⟩ is an eigenvector of c with eigenvalue ∏_{k=1}^{N−1} Q_θ^{[−2s+2(k−1)]}, we find the corresponding statement (6.37) for T*_μ(θ_α + (i/2)(2s + μ_1 − μ′_1)). In terms of the different states we can construct, the situation can be shown to be identical to that of diagonalising B for the case where the physical space carries symmetric powers of the anti-fundamental representation, which was discussed in [13], but now using the transfer matrices T*. Since the situation is identical, we simply state the final result: the set of vectors (6.38), built by acting on |0⟩ with the operators T*_{μ^α}, are eigenvectors of C for any choice of the Young diagrams μ^α = (μ^α_1, ..., μ^α_{N−1}, 0) and are always non-zero (see Appendix B). The eigenvalues of c on these excited states are then deduced immediately from (6.37) and take the form (6.39), where we have defined y_{α,j} = θ_α + i(s + μ^α_j + 1 − j), with μ^α_j = m^α_j in the notation of Section 3. The operator T*_{μ^α}(θ_α + is + (i/2)(μ^α_1 − (μ^α)′_1)) is diagonalised by any eigenvector ⟨Ψ| of the transfer matrix, with eigenvalue expressed through Q-functions, where we have omitted certain Q_θ-dependent factors which we reabsorb into the definition of the SoV basis, similarly to what was done in the sl(3) case (3.54); like for sl(3), we can then choose to normalise ⟨Ψ| accordingly.

Diagonalising B. We already mentioned that we would have some trouble diagonalising B, since our commutation relation (6.17) requires the presence of a lowest-weight state ⟨0̄|. Let us return to the compact case one more time. We already explained that the set of vectors (6.22) are eigenstates of b. We can gain some further insight by rewriting these transfer matrices using Q-operators [46–50].
The Q-operators Q_i(u) and Q^i(u) are defined as the operators which have the Q-functions Q_i(u) and Q^i(u) corresponding to the transfer matrix eigenstate |Ψ⟩ as their eigenvalues. The main relation we will make use of is (6.46), stating that a certain transfer matrix evaluation equals, up to non-zero multiples of Q_θ [19], the ratio

Q_1(θ_α − i(s + s^α_j)) / Q_1(θ_α − is).   (6.46)

This relation allows us to rewrite the above set of vectors (6.44) as (6.47). The highest-weight state can then be obtained by choosing all s^α_j = −2s. This simple rewriting has actually helped us quite a lot: we see that we can start from the highest-weight state ⟨0| and move back down towards the lowest-weight state by acting with operators of the form

Q_1(θ_α + i(s − s^α_j)) / Q_1(θ_α + is),   s^α_j = 0, 1, ..., −2s,   (6.49)

and we will be able to obtain all B eigenvectors in this way. Explicitly, the B eigenvectors can be written as in (6.50), where now −2s ≥ s^α_1 ≥ ··· ≥ s^α_{N−1} ≥ 0. We can now analytically continue in the spin s from {0, −1/2, ...} to general values. Then in (6.50) the constraint −2s ≥ s^α_1 ≥ ··· ≥ s^α_{N−1} ≥ 0 should reduce to s^α_1 ≥ ··· ≥ s^α_{N−1} ≥ 0, with no upper limit on the value of s^α_1, and we expect this set of vectors to exhaust all eigenvectors of B. In order to verify this, it would be convenient to be able to generate the vectors (6.50) using transfer matrices instead of Q-operators, since the former are usually easier to work with. Hence, we need some transfer matrix which at some value of the spectral parameter becomes the ratio (6.49). Luckily, it is not hard to work out that the transfer matrix in question is given by (6.51), and we derive a new commutation relation between B and T_{N−1,s}, alternative to (6.17), which reads

T_{N−1,s}(v − (i/2)(N − s − 1)) B(u) = h(u, v) B(u) T_{N−1,s}(v − (i/2)(N − s − 1)) + ∑_{j=1}^{N} T_{1,j}(v) × (...).

(6.52)
This relation is a special case of a more general relation involving B and the transfer matrices constructed from the inverse monodromy matrix T^{−1}(u), see Appendix A, but for our purposes this relation is enough. Now we can use (6.52) directly to diagonalise B starting from ⟨0|, avoiding analytic continuation. As in the previously described construction starting from the lowest-weight state, it is possible to apply a transfer matrix corresponding to a given θ_α at most N−1 times with the remainder term still vanishing, meaning that all eigenvectors of b can be constructed by acting on ⟨0| with products of

T_{N−1,s^α_j}(θ_α + is − (i/2)(N − s^α_j − 1)),   (6.53)

with the corresponding eigenvalue following from the above, where s^α_j = n_{α,j} in the notation of Section 3. The fact that this normalisation ensures that the states ⟨x| are independent of the twist eigenvalues is clarified in Appendix A.

Reduction of SoV bases to the compact case
We close this section by demonstrating how the SoV bases we have constructed behave when we restrict ourselves to the compact case with s ∈ {0, −1/2, −1, ...}. In this scenario our infinite-dimensional irreducible space becomes reducible, with a finite-dimensional irreducible part. As we will see, the SoV basis vectors corresponding to the irreducible part remain non-zero and all others vanish.
Since the SoV bases are polynomial functions of the spin s when we use the differential operator realisation, like in the sl(3) case there is no problem with simply setting s to some negative half-integer value. Using the results of Appendix B, only a finite number of transfer matrices will remain non-zero when we do this. For the right SoV basis |y⟩ defined by the action of the transfer matrices on |0⟩ (6.55), the state will vanish for any configuration with μ^α_1 > −2s but will be non-zero as long as μ^α_1 ≤ −2s, and hence the number of non-zero states precisely matches the dimension of the finite-dimensional irreducible part of the representation, see [13,19].
Similarly, for B, the transfer matrix (6.51) can be shown to be non-zero for any s in the non-compact case, and if we reduce to the compact case it remains non-zero only if 0 ≤ s^α_j ≤ −2s, and we reach the same conclusion.

Determinant representations for overlaps and expectation values
In this section we extend the previous results by deriving SoV-based determinant representations for overlaps and expectation values of various operators.

Defining the det-product and its relation to SoV
Here we discuss the main tools for computing some physical observables with the help of the SoV approach developed in the previous sections. For simplicity we will mostly demonstrate the method on the sl(3) example, but in all cases the generalisation to sl(N) is quite clear.
In particular, in this section we consider the overlaps between the states of chains with different twists. Such overlaps were recently considered in the context of the AdS/CFT correspondence [31] and can be interpreted as 3-point correlation functions involving so-called color-twist operators.
The key observation is that the SoV states, in the non-diagonal frame (2.8), are not sensitive to the twist of the monodromy matrix. In other words, the same SoV basis will separate the wave function for any values of the twist eigenvalues λ_a. This implies that the integral representation we derived in the previous section for the states of the same spin chain can be, very non-trivially, used to compute overlaps between the eigenstates of different transfer matrices.
For what follows it will be convenient to introduce the notation for what we call the det-product (7.1), where the notation with double brackets, which initially referred to an integration (4.4), we now understand more generally as a sum over the poles of the factor μ_α. So in this section we re-define

(( f(u) ))_α = ∑_{n=0}^{∞} r_{α,n} f(x_{α,0} + i n).
The normalisation factor is defined analogously. Thus for the case when G and F in (7.1) are Q-functions describing two spin chain states, the det-product gives the overlap of these states presented above in (4.29). Notice that with the expression (7.1) one can follow all the same steps as in section 5 to arrive at a result built from the antisymmetrised combinations

[ G_{α,2}(y_{α,1} + i/2) G_{α,3}(y_{α,2} + i/2) − G_{α,3}(y_{α,1} + i/2) G_{α,2}(y_{α,2} + i/2) ].
We will see that a number of correlators can be expressed in terms of the det-product. Even though we found an explicit expression for the SoV measure M_{y,x}, our ultimate goal is to bring the correlator to a determinant form, rather than to a sum over the SoV states. We will show how in some important cases the explicit form of the measure is not needed, as the result takes the form of the det-product.
In order for two states Θ and Φ to have a scalar product which can be written in the det-product form, we have to require what we call the separability property from these states (7.5): the overlap ⟨x|Θ⟩ should factorise into a product over α of single-variable functions F_α evaluated at the x_{α,a}, and the overlap with the right SoV basis should factorise into antisymmetrised combinations of the form

[ G_{α,2}(y_{α,1} + i/2) G_{α,3}(y_{α,2} + i/2) − G_{α,3}(y_{α,1} + i/2) G_{α,2}(y_{α,2} + i/2) ].
If that is the case, then as a consequence of the completeness of both SoV bases {x} and {y}, and due to the relation (7.4), we immediately get the det-product representation of the overlap. In what follows we explore a few examples where (7.5) does hold. One immediate example is when both states are transfer matrix eigenstates. In this case, of course, we simply have F_α = Q_1 and G_{α,c} = Q_{1,c}, so that the overlap takes the form (7.8). In the above expression the left and right wave functions are normalised according to our conventions from section 3.

Overlaps between wave functions with different twists
Another quite obvious example where the separability property (7.5) is satisfied for both states, but which gives a much less trivial overlap than (7.8), is the case when both states are eigenstates of transfer matrices with different sets of twist eigenvalues λ_a and λ̄_a. As we emphasised before, the SoV states do not depend on the λ's and thus should separate wavefunctions corresponding to spin chains with arbitrary twist eigenvalues λ_a (provided the twist matrix is of the form (2.8)). Thus we conclude that the overlap between the states of the spin chains with different twist eigenvalues can be written in det-product form as

⟨Ψ_{λ̄_a}|Ψ_{λ_a}⟩ = ( Q̄_{12}, Q̄_{13} | Q_1 ).   (7.9)

In the above expression we still assume that the states are normalised in agreement with our conventions. However, we can also form a normalisation-independent combination, for example

⟨Ψ_{λ̄_a}|Ψ_{λ_a}⟩ ⟨Ψ_{λ_a}|Ψ_{λ̄_a}⟩ / ( ⟨Ψ_{λ̄_a}|Ψ_{λ̄_a}⟩ ⟨Ψ_{λ_a}|Ψ_{λ_a}⟩ ).

Examples of the non-trivial overlaps. The simplest example of a non-trivial overlap is the overlap between two ground states corresponding to two different twists. Since our twist is non-diagonal, the corresponding ground states can be obtained by acting with a suitable global rotation on the constant polynomial. This is discussed in detail in section 3, where we explicitly found the ground states (3.25) and (3.24). In order to demonstrate that equation (7.9) holds for the simplest length-1 spin chain, we first compute the l.h.s. by expanding both functions up to some fixed order, computing the scalar product between the two resulting polynomials and then sending the expansion order to infinity, as we did previously in (3.26) for two equal twists. We find the generalisation of (3.26).³³ Now we can try to reproduce this result using the det-product of the corresponding Q-functions, i.e.

Q^{λ_a}_1 = λ_1^{iu},   Q̄^{λ̄_a}_{1,1+a} = λ̄_1^{iu} λ̄_{1+a}^{iu}.   (7.14)

Evaluating the sum over residues, we recover the same expression.
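The convergence condition |λ_1| > |λ_2| for these truncated expansions can be probed numerically; the following sketch (hand-picked twist values, purely my illustration) truncates a geometric-type sum like those behind (7.13) and compares with the closed form, which exists precisely when |λ̄/λ| < 1.

```python
# Toy check (hand-picked twists, not from the paper): the truncated series
# sum_{n=0}^{M} q^n with q = lam_bar / lam converges to 1/(1-q) iff |q| < 1,
# mirroring the condition |lambda_1| > |lambda_2| for the sums over poles.
def truncated_geometric(q, M):
    return sum(q ** n for n in range(M + 1))

lam, lam_bar = 2.0, 0.5          # |lam| > |lam_bar|: convergent case
q = lam_bar / lam
approx = truncated_geometric(q, 200)
exact = 1.0 / (1.0 - q)
```

For |λ̄| ≥ |λ| the truncated sums grow without bound instead, matching the footnote's restriction on the twist eigenvalues.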
Probing the transition matrix. The overlap between two eigenstates of the transfer matrix in different frames is sl(N) invariant. This means that one can diagonalise either one of the two twist matrices (2.8). The matrix which relates the two frames that diagonalise one of these two twist matrices has the general form (7.17), valid for sl(N).

³³ For the limit of the truncated series to exist one requires |λ_1| > |λ_2| and |λ_1| > |λ_3|, which also coincides with the condition for the convergence of the sum over poles in the r.h.s. of (7.9) (generalising a similar condition for the equal-twists case discussed in equation (5.6)).

(7.18)
Let us now focus on the fundamental representation, i.e. s = −1/2, and assume that |Ω̄⟩ is in the diagonalised frame. We know that for the diagonal twist the ground state is simply the highest-weight state |Ω̄⟩ = e_1, whereas the other state reads |Ω⟩ = S^{−1}|Ω̄⟩ = S^{−1}_{11} e_1 + S^{−1}_{21} e_2 + S^{−1}_{31} e_3. Similarly, for the left states ⟨Ω̄| = e_1 and ⟨Ω| = ⟨Ω̄|S = S_{11} e_1 + S_{21} e_2 + S_{31} e_3, from which we would expect that for s = −1/2 we should get the combination S_{11} S^{−1}_{11}, which is indeed the case as we see from (7.18). Note that one can further interchange the order of the eigenvalues, changing the vacua accordingly, to deduce any combination of the form S_{ab} S^{−1}_{ba}, a, b = 1, 2, 3. One can invert the logic and verify that the knowledge of all S_{ab} S^{−1}_{ba}, a, b = 1, 2, ..., N, allows one to reconstruct S_{ab} modulo the transformation S → D_1 · S · D_2, where D_1, D_2 are two independent diagonal matrices. The diagonal matrices commute with the twist matrices, and they reflect the freedom in the definition of S in the first place.
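This reconstruction freedom can be illustrated numerically; in the following sketch (with hand-picked, well-conditioned matrices of my own choosing) the products S_ab (S^{-1})_ba are checked to be invariant under S → D_1 S D_2 for diagonal D_1, D_2.

```python
# Sketch: the products S_ab * (S^{-1})_ba are invariant under S -> D1.S.D2
# for diagonal D1, D2, so knowing all of them determines S only modulo
# that transformation. The matrices below are toy inputs.
import numpy as np

def products(S):
    """Matrix of the combinations S_ab * (S^{-1})_ba."""
    Sinv = np.linalg.inv(S)
    n = S.shape[0]
    return np.array([[S[a, b] * Sinv[b, a] for b in range(n)] for a in range(n)])

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
D1 = np.diag([1.0, 2.0, 3.0])
D2 = np.diag([0.5, 1.0, 2.0])
P_before = products(S)
P_after = products(D1 @ S @ D2)
```

Note also that the sum of all entries of this product matrix equals Tr(S S^{-1}) = N, a quick sanity check.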

On-shell off-shell overlap
In this section we explore the effect of the action of the B(u) or C(u) operators on the states. Assume the state |Θ⟩ is separated by the SoV basis as in (7.5), and recall that b(w) is the nontrivial part of the B(w) operator defined in (3.30). We see that the action of b(w) simply translates into the replacement F_α(u) → (w − u) F_α(u). It is clear that there is potential to generalise this further. We can define "local" operators b_α so that b(u) factorises into them, where b_α(u) is a polynomial of degree N−1 in u, diagonalised by ⟨x|, with the spectrum ∏_{a=1}^{N−1}(u − x_{α,a}). Repeating the same calculation as in (7.20), we see that b_β(w) acts on F_α as³⁴

b_β(w) ∘ F_α(u) = (w − u)^{δ_{βα}} F_α(u).

To summarise, this means that multiple actions of any b_β(w) operators do not spoil the separability property of the wave function, so we can compute a set of rather non-trivial form factors in determinant form. A particularly important case is the following state:

|Ψ⟩_off-shell = b(v_1) ... b(v_k)|Ω⟩,   (7.24)

which, in analogy with sl(2), one could call the off-shell Bethe state. To distinguish it from some other off-shell Bethe states existing in the literature, one could call it the algebraic off-shell Bethe state, as opposed to the hybrid coordinate-algebraic way of building eigenstates of the transfer matrix in the nested Bethe ansatz approach. It follows immediately from (7.23) that the overlap between (7.24) and any separable state, and in particular an eigenstate ⟨Φ| of the transfer matrix, is of determinant form. Note that for this to be true it is not required that the {v_k} are Bethe roots solving the Bethe ansatz equations. As we described before in section 3.3, when the parameters {v_k} do satisfy the Bethe ansatz equations, the state |Ψ⟩_off-shell does actually become an eigenstate of the transfer matrix.

Form factors of derivatives of the transfer matrices
In this section we show how our integral SoV approach leads to determinant representations for a large class of diagonal form factors, extending the results of [21] from s = 1/2 to generic s. While the extension is almost straightforward, we present the key steps here to make the discussion self-contained. We first consider the sl(3) case, but the generalisation to sl(N) is immediate, as we will explain shortly. We also show how to compute matrix elements of some local operators from this data. The form factors we consider are the diagonal matrix elements of the derivatives of the integrals of motion (coefficients of the transfer matrices) defined in (4.27), where p is a parameter of the model (either an inhomogeneity θ_α or a twist λ_a). While the spectrum of the model is under good control and one could in principle compute the derivative on the r.h.s. of (7.28) directly (as a ratio of finite differences), here we rather wish to express it in terms of Q-functions evaluated at one fixed value of p, and it is nontrivial that such an expression exists at all. We will see that the result has the rather natural form of a ratio of two determinants, with the denominator corresponding to the norm (4.29) and the numerator given by the same expression with an extra insertion that we interpret as describing the operator ∂_p Î_{a,α−1} we consider. In the AdS/CFT context, correlators of this kind are also important, as they correspond to 3-point functions with marginal operators [51]. If we consider a small variation of our parameter p → p + δp, the Q-functions Q_{a+1} as well as the operator Ô† in the Baxter equation (4.21) will change, but the equation will remain satisfied, so that (Ô† + δÔ†)(Q_{a+1} + δQ_{a+1}) = 0. Using that the original Q-function satisfies Ô† Q_{a+1} = 0, and dropping the terms quadratic in variations, we have

0 = ( Q_1 (Ô† + δÔ†)(Q_{a+1} + δQ_{a+1}) )_α = ( Q_1 Ô† δQ_{a+1} )_α + ( Q_1 δÔ† Q_{a+1} )_α.
(7.29) Using now the self-adjointness property in the form of (4.22), we see that the first term vanishes, so that we get ( Q_1 δÔ† Q_{a+1} )_α = 0. The variation of Ô† can be written out explicitly; here we denote by I_{b,L} the leading coefficient in the transfer matrices of (4.27), so that I_{b,L} = χ_b(λ_1, λ_2, λ_3). Plugging this into (7.30), we get a linear system for the variations ∂_p I_{b,β−1} of the form

∑_{(b,β)} m_{(a,α),(b,β)} (−1)^{b+1} ∂_p I_{b,β−1} = y_{(a,α)},   y_{(a,α)} = ( Q_1 Ŷ_p ∘ Q_{a+1} )_α,   (7.33)

where

m_{(a,α),(b,β)} = ( Q_1 u^{β−1} D^{−2b+3} ∘ Q_{a+1} )_α   (7.34)

is the same matrix appearing in the sl(3) scalar product (4.29) with the two states taken to be the same. We can write the solution of (7.33) using Cramer's formulas as (7.35), where the numerator matrix is m_{(a,α),(b,β)} with the column (b′, β′) replaced by y_{(a,α)} defined in (7.33). This gives a determinant representation for the variation of the integrals of motion and the form factor (7.28). From the discussion in appendix C it is clear that the result (7.35) extends immediately to the sl(N) case, provided that instead of (7.32) we use its sl(N) counterpart (7.36), with the indices a, b, ... now taking values from 1 to N−1, while the matrix m_{(a,α),(b,β)} is given by (4.30). Let us also note that we can derive a similar determinant representation for the values of the I_{a,α} themselves. For that we simply repeat the steps above, starting now from the identity

( Q_1 Ô† Q_{a+1} )_α = 0   (7.37)

rather than (7.30). This gives a linear system for the set of I_{a,α−1} with α = 1, ..., L, whose solution reads

I_{b′,β′−1} = (−1)^{b′+1} det_{(a,α),(b,β)} m̃_{(a,α),(b,β)} / det_{(a,α),(b,β)} m_{(a,α),(b,β)},   (7.38)

where now m̃_{(a,α),(b,β)} is the matrix m_{(a,α),(b,β)} with the column (b′, β′) replaced by z_{(a,α)}, defined (similarly to (7.33)) as z_{(a,α)} = ( Q_1 Ẑ ∘ Q_{a+1} )_α. Notice that the determinant in the denominator of the result (7.38) for the quantities I_{a,α−1} is the same as the one appearing for their variations given by (7.35) (and is the overlap of the state with itself).
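The Cramer step can be sketched numerically; below is a minimal Python illustration (a generic toy matrix, nothing specific to the SoV brackets) that solves a linear system by replacing one column at a time and taking determinant ratios, checked against a direct solve.

```python
# Sketch of the Cramer's-rule step used in (7.35)/(7.38): solve m x = y by
# replacing one column of m at a time with y and taking determinant ratios.
import numpy as np

def cramer_solve(m, y):
    det_m = np.linalg.det(m)
    x = np.empty(len(y))
    for j in range(len(y)):
        mj = m.copy()
        mj[:, j] = y  # column j replaced by the inhomogeneity
        x[j] = np.linalg.det(mj) / det_m
    return x

m = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([3.0, 5.0])
```

In the actual application the matrix entries are the SoV brackets (7.34), so both determinants are overlaps of the type computed throughout this section.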

Example: local spin expectation value
One of the key quantities of interest in spin chains are correlators of "local" operators, i.e. those that act on a particular spin chain site, in contrast to "global" operators such as the transfer matrix. While certain maps from local to global operators are well known (see the reviews [53,54]), here we will demonstrate that our approach offers yet another way to access local quantities. Namely, there is a remarkable relation between a subset of local operators and the derivatives of the integrals of motion ∂Î_{a,β}/∂θ_α, whose expectation values we computed in the previous section.
The main idea is that by taking the derivative in θ_α we can single out the α-th spin chain site. To make this precise, let us write explicitly the large-u expansion of the transfer matrix with fundamental representation in the auxiliary space, defined in (2.3), (2.4), using the form of the Lax matrix from (2.2),

T_{1,1}(u) = u^L Tr(Λ) + u^{L-1} Σ_{α=1}^{L} ( i Tr(E^{(α)t} Λ) - θ_α Tr(Λ) ) + O(u^{L-2}) .   (7.41)

The trace here is taken over the auxiliary space, and E^{(α)} is an N×N matrix whose element at position (a,b) is the operator E_{a,b} (the sl(N) generator) acting on the α-th site of the spin chain. Note that E in this expression is transposed w.r.t. the indices a, b, as we indicated with the superscript t (this is due to the form of the Lax matrix (2.2)). We see that in (7.41) we have a sum of local operators over all sites of the spin chain. Now we notice that when we differentiate the transfer matrix in θ_γ, the Lax operator at position γ in its definition is simply replaced by minus the identity matrix, so as a result we get the transfer matrix for the spin chain with the γ-th site removed. This means that the derivative is given by the same expression (7.41) but with the sum taken over all sites except one, as in (7.42). By combining this with (7.41) we can therefore extract the contribution from the site γ alone, as in (7.43). Taking the coefficient of u^{L-1} in this relation, we finally get

Tr(E^{(α)t} Λ) = -i θ_α Tr(Λ) - i Î_{1,L-1} - i ∂_{θ_α} Î_{1,L-2} .   (7.44)

We remind the reader that the Î_{a,α} are the operator coefficients in the expansion of the transfer matrices in (4.27). We see that (7.44) is a relation between a local operator acting on the α-th site (in the l.h.s.) and a global operator acting on all sites (in the r.h.s.). Sandwiching this relation between left and right transfer matrix eigenstates |Ψ⟩ and ⟨Ψ|, we find that the expectation value is given by

⟨Ψ| Tr(E^{(α)t} Λ) |Ψ⟩ / ⟨Ψ|Ψ⟩ = -i ⟨Ψ| ∂_{θ_α} Î_{1,L-2} |Ψ⟩ / ⟨Ψ|Ψ⟩ - i I_{1,L-1} - i θ_α Tr(Λ) .   (7.45)

Let us note that this expression does not depend on the normalisation of the states |Ψ⟩. The only nontrivial correlator in the r.h.s.
is the first term, which is given by the determinant (7.35) derived above in the SoV approach. Thus we find a compact result for the expectation value of the local operator Tr(E^{(α)t} Λ). We can also repeat a similar argument starting from the transfer matrices in the a-th antisymmetric sl(N) representation in the auxiliary space. Using the results of appendix F, we find that the generalization of (7.44) reads

Σ_{j=1}^{a} Tr( (E^{(α)t} - s)(-Λ)^j ) χ_{a-j} = (iθ_α + s) Î_{a,L} + i Î_{a,L-1} + i ∂_{θ_α} Î_{a,L-2} ,   (7.46)

where a = 1, 2, ..., N-1 and we recall that χ_j is the character defined in (2.9). This gives a (lower triangular) system of N-1 linear equations from which we can extract the expectation values of the local operators Tr(E^{(α)t} Λ^j) for j = 1, 2, ..., N-1.
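Extracting the Tr(E^{(α)t} Λ^j) from a lower-triangular system of this kind is standard forward substitution, solving for the operators one after another; a schematic numerical sketch (with a random triangular system standing in for the actual one) is:

```python
import numpy as np

def forward_substitute(A, b):
    """Solve A @ x = b for lower-triangular A by forward substitution,
    the way a lower-triangular system like (7.46) determines the
    expectation values one after another."""
    n = len(b)
    x = np.zeros(n)
    for j in range(n):
        x[j] = (b[j] - A[j, :j] @ x[:j]) / A[j, j]
    return x

rng = np.random.default_rng(1)
A = np.tril(rng.normal(size=(3, 3))) + 3 * np.eye(3)  # well-conditioned
b = rng.normal(size=3)
assert np.allclose(forward_substitute(A, b), np.linalg.solve(A, b))
```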
To illustrate the structure of this linear system, let us give as an example the expectation values of the operators in the r.h.s. of (7.46) for the sl(3) case with a = 2 and L = 2, taking α = 1. The first term is simply the character, I_{2,2} = χ_2, while the other two are given by the determinants presented in (7.35) and (7.38), with the common denominator N = det_{(a,α),(b,β)} (Q_1 u^{β-1} D^{-2b+3} ∘ Q^{a+1})_α. Let us describe more explicitly the local operator Tr(E^{(α)t} Λ) which enters (7.45). Notice that this equation holds as long as |Ψ⟩ and ⟨Ψ| are eigenstates of the transfer matrix constructed as in (2.3), (2.4) with the twist given by Λ. In our current notation Λ is the non-diagonal companion twist matrix (2.8), so the operator Tr(E^{(α)t} Λ) is quite nontrivial. At the same time, the corresponding states |Ψ⟩ are somewhat unusual, as they are built starting from a nontrivial ground state (e.g. (2.24) for sl(2)). Alternatively, we can go to a more standard frame by performing a global rotation that diagonalizes the twist matrix, Λ → Λ_diag. Then the states |Ψ⟩ also get rotated, |Ψ⟩ → |Ψ'⟩, so that e.g. the ground state becomes simply the trivial state |0⟩ (given by (2.31) for sl(2)). The value of the r.h.s. of (7.45) is the same in either frame^35. In the new frame, our local operator becomes a simple combination of the Cartan elements.
As an example, for sl(2) the local operator reduces (up to a scalar) to the usual spin projection operator E_{11} - E_{22}. Here we used that for sl(N) in our representation the operator Σ_{a=1}^{N} E_{aa} acts as a scalar. For sl(N) we can compute in this way the expectation values of all N operators E_{aa} on a given site by considering (7.46) for a = 1, ..., N-1 together with the condition (7.53). We note that form factors of exactly the type we can compute here are important e.g. in Landau-Lifshitz models [52], and it would be interesting to further explore their properties. Let us also point out that expectation values of operators like ∂T(u)/∂θ are not straightforwardly accessible by traditional methods of the algebraic Bethe ansatz, but appear as natural objects in the SoV approach. We believe that exploring the interrelations between the SoV and more standard methods should open the way to computing a still larger class of correlators in the future.

Outlook
In this paper, for the first time, we found an explicit expression for the SoV measure for spin chains in highest-weight representations of sl(N) with general spin s. This was done by carefully analysing and bringing together two different approaches: the operatorial SoV approach [8,10-13] and the functional SoV approach [21].
The knowledge of the SoV measure has unlocked for us the possibility of computing a number of nontrivial scalar products, overlaps and form factors, for which we derived new determinant representations. These results include in particular overlaps of states with different twists and on-shell/off-shell type overlaps. Having direct access to the elements of the measure opens the way to an in-depth study of a great variety of important quantities in higher rank sl(N) models. Some future applications may include form factors, correlators and g-functions of the types studied via SoV in [55-58] for rank-one cases. Our results should also facilitate studying the thermodynamic limit for higher rank spin chains in the SoV framework, extending the existing su(2) results (see e.g. [59]).

^35 Since the eigenvalues of the transfer matrices do not change under this rotation.
Another class of objects which can be computed using the functional SoV approach, generalising the initial observation in [21], are the form factors of derivatives of the transfer matrices w.r.t. external parameters such as twists or inhomogeneities. As we discussed in section 7.4.1, this type of form factor includes in particular local spin operators, i.e. Cartan generators acting on one site of the chain. These are still to be fully understood within the operatorial SoV approach and could be relevant for the exact calculation of correlation functions in N = 4 SYM.
Most of these results seem highly nontrivial to obtain within traditional algebraic nested Bethe ansatz methods, e.g. [60,61]. Trying to merge these methods together promises a fruitful interplay.
Our results have already allowed us to compute rather exotic overlaps involving states with different twists. We are hopeful that they could find applications in N = 4 SYM, where similar objects have emerged already [28,31,62]. Our results could also give further clues about the type of algebraic structures that may appear in the N = 4 SYM context.
We leave for future investigation the generalisation of our results to non-highest-weight representations. One should keep in mind that there are certain complications on this way: none of the Q-functions Q_i, the constituent blocks of the SoV wave function, are polynomial anymore, and furthermore the spectrum of conserved charges will no longer be a discrete set of points. These additional features should have a certain effect on the SoV construction as well, and there are still some mysteries to uncover. This direction is particularly important as it has applications to the Fishnet/Fishchain theories [63,64]. SoV-type methods adapted to the principal series representations have already led to a variety of interesting results in this context [65-68] (see also [69]).
Another interesting direction is developing SoV for higher rank spin chains with different symmetry groups such as so(N), where progress was made recently in [70,71]. Other natural extensions include the supersymmetric case (see [18,72] for related results), boundary problems, and q-deformations (see [73,74] for recent work). We hope to come back to some of these problems in future work.
(grant agreement No. 865075) EXACTC. The work of P.R. is supported in part by a Nordita Visiting PhD Fellowship and by SFI and Royal Society grant UF160578.

A Transfer matrices and antipode
In this appendix we will derive the relation (6.52); in fact, we will derive a more general form of it. Our main tool will be the so-called Yangian antipode map S, which sends the monodromy matrix T(u) to its inverse,

S(T(u)) = T^{-1}(u) .   (A.1)

For ease of notation we will also denote S(u) = T^{-1}(u). Note that this map extends in an obvious way to the twisted case: if, as before, T(u) denotes the twisted monodromy matrix T(u) = T(u)Λ, then the antipode sends T(u) ↦ S(u) := Λ^{-1} S(u). We will now derive a new commutation relation which intertwines B and S_μ, the transfer matrices constructed from S(u) in the same way as T_μ is constructed from T(u), and which reduces to (6.52) for a special choice of μ.
In order to derive this relation we will need some properties of the antipode map, which we now describe. We will need to perform fusion with the inverted monodromy S(u). The original monodromy matrix T satisfies the RTT relation, which acts on two copies a and b of the auxiliary space C^N and the physical Hilbert space, with the R-matrix

R_ab(u) = u 1_ab + i P_ab ,

where 1_ab and P_ab denote the identity operator and the permutation operator on C^N ⊗ C^N, respectively. Note that when v = u + i the R-matrix R_ab(u-v) = R_ab(-i) becomes an antisymmetriser, and this is the reason why the quantum minors (6.1) are constructed by taking products with the shift in each subsequent T(u) increased by i. Inverting the RTT relation, we obtain the analogous relation for S(u). Since R_ab(u) = u 1_ab + i P_ab, we have that R^{-1}(u) ∝ u 1_ab - i P_ab up to a scalar factor, so fusion for S is performed in exactly the same way as for T up to changing the sign of the shifts, which leads to the corresponding definition of quantum minors S[i_1...i_n | j_1...j_n](u) constructed from S. Since the transfer matrices T_μ(u) corresponding to generic Young diagrams μ can be expressed in terms of T_{a,1} using the CBR formula, it follows that all S_μ(u) can be expressed in terms of S_{a,1}(u) in the same way. It is important to stress that the transfer matrices S_μ(u) do not give us a different set of conserved charges, or indeed any new ones: all transfer matrices constructed from S(u) can be written as simple expressions in the transfer matrices obtained from T(u). Denoting by I = {i_1, i_2, ..., i_N} and J = {j_1, j_2, ..., j_N} two permutations of {1, 2, ..., N}, the following relation between T_{N,1}(u + (i/2)(N-1)), S and the corresponding quantum minors can be shown to hold [75]. This then implies a relation between transfer matrices (A.10). Since all S_ξ and all T_ξ respectively can be constructed as polynomials in these antisymmetric transfer matrices, it follows that

[S_{μ_1}(u_1), T_{μ_2}(u_2)] = 0   (A.11)

for any Young diagrams μ_1 and μ_2.
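The antisymmetriser property of the R-matrix at u = -i is immediate to verify explicitly; a small numerical sketch for N = 2, purely illustrative of the statement R(-i) = -i(1 - P), is:

```python
import numpy as np

N = 2
Id = np.eye(N * N)
# permutation operator on C^N ⊗ C^N: P (x ⊗ y) = y ⊗ x
P = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        P[j * N + i, i * N + j] = 1

def R(u):
    return u * Id + 1j * P           # R_ab(u) = u 1_ab + i P_ab

A = (Id - P) / 2                     # antisymmetriser on C^N ⊗ C^N
assert np.allclose(R(-1j), -2j * A)  # R(-i) = -i(1 - P), i.e. ∝ antisymmetriser
```

This degeneration of the R-matrix at the special shift is exactly what makes the fused quantum minors project onto antisymmetric combinations.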
We will introduce one final map, usually denoted by ω [75], obtained by first applying the *-map and then the antipode S; it is trivial to check, using the definitions, that it satisfies the relation (A.13) used below. We will now consider the commutation relation (6.17), which we repeat here for convenience as (A.14), with the remainder term R_1(u,v). As was stated earlier in the paper, the relation (A.14) is, strictly speaking, not correct as we have written it; the correct version is given by (A.15). The objects T^∅_μ are null-twist transfer matrices, obtained from T_μ(u) by sending all twist eigenvalues^36 λ_i to 0. It was demonstrated in [13] that all T^∅_μ(u) have the form (A.16), and so when applied to an eigenstate of B which is annihilated by T_{j1}(v), (A.15) is equivalent to (A.14).
The new relation we will derive then reads as (A.18), where S^∞_μ are transfer matrices obtained from S_μ(u) by sending the twist eigenvalues λ_i → ∞, in analogy with the null-twist transfer matrices.
Let us recall that the twist matrix Λ has the explicit form Λ_ij = (-1)^{j-1} χ_j δ_{i1} + δ_{i,j+1}, and let us define Λ^∅_ij = δ_{i,j+1}, the null-twist limit of Λ. Next, it is easy to work out that the inverse matrix Λ^{-1} is given by (A.20), and in the same way we define Λ^∞_ij = δ_{i+1,j}. The main property we will use below is that Λ^∅ and Λ^∞ are related by a simple change of basis. Let K denote the matrix with K_ij = δ_{N+1-i,j}. Then

K Λ^∅ K^{-1} = Λ^∞ .   (A.21)

Now let us consider the transfer matrix in the fundamental representation, T^∅_{1,1} = tr(T(u) Λ^∅). Unfortunately we cannot apply the antipode map to T^∅_{1,1}, as the twist Λ^∅ is not invertible. On the other hand, we can bring T^∅_{1,1} to S^∞_{1,1} by first performing a change of basis by K and then applying the ω map to the untwisted monodromy. Consider the change of basis T_ij → T_{N+1-i,N+1-j}, which can be implemented with K. More precisely, we transform

tr(T(u) Λ^∅) → tr(K T(u) K^{-1} Λ^∅) ,   (A.22)

which, because of the cyclicity of the trace, is equivalent to sending Λ^∅ → K Λ^∅ K^{-1} = Λ^∞! If we then apply ω to the untwisted monodromy T, we obtain precisely S^∞_{1,1}. It is a straightforward calculation to show that this procedure goes through for all transfer matrices T^∅_μ, by first considering transfer matrices in antisymmetric representations T_{a,1} and then considering T_μ together with the CBR formula (6.4), being careful to take into account the various shifts.

^36 In this section, for transparency, we do not impose det Λ = 1 and leave all λ_1, ..., λ_N free.
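The change-of-basis identity K Λ^∅ K^{-1} = Λ^∞ is easy to check explicitly; a small numerical sketch for N = 4 (illustrative only) is:

```python
import numpy as np

N = 4
# null-twist and infinite-twist limits of the companion twist matrix
L_null = np.zeros((N, N))
L_inf = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i == j + 1:
            L_null[i, j] = 1   # Λ^∅_{ij} = δ_{i,j+1} (subdiagonal)
        if i + 1 == j:
            L_inf[i, j] = 1    # Λ^∞_{ij} = δ_{i+1,j} (superdiagonal)

# anti-diagonal change of basis K_{ij} = δ_{N+1-i,j}
K = np.fliplr(np.eye(N))
assert np.allclose(K @ L_null @ np.linalg.inv(K), L_inf)
```

The matrix K simply reverses the order of the basis vectors, which swaps the sub- and superdiagonal shift matrices.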
To summarise, the net effect of the change of basis K followed by ω is to map the transfer matrices T^∅_μ to S^∞_μ, up to scalar quantum determinant factors. It now remains to see what happens to B when we apply K followed by ω. For simplicity of the calculation we will consider the sl(3) case, with higher rank following immediately.
Recall the explicit form of B in terms of untwisted monodromy entries T_ij given in (6.11). Applying the change of basis K followed by ω, and then using the relation (A.13), we obtain, up to an overall factor of the scalar quantum determinant, B^{[-2]}; the same computation goes through for higher rank in exactly the same way^37. Now let us examine the remainder term R_1(u,v) = Σ_{j=1}^{N} T_{j1}(v) × (...) in (A.14). Applying the same transformations results in Σ_{j=1}^{3} ω(T_{4-j,3}(v)) × (...), which works out to be, up to quantum determinant factors, a term of the same form with R_3(u,v) = Σ_{j=1}^{N} T_{1j}(v) × (...). Next, if we recall that the transfer matrices T^∅_μ(v) have the form (A.16), then on eigenstates ⟨Λ| of B which satisfy ⟨Λ| T_{j1}(v) = 0 we can replace T^∅_μ in (A.15) with T_μ, resulting in (A.14). In a similar fashion one can show the analogous statement for S, and so on eigenstates ⟨Λ| of B which satisfy ⟨Λ| T_{1j}(v) = 0 we can replace S^∞_μ in (A.28) with S_μ, resulting in (A.31). Now, in order to derive (6.52) we specialise to Young diagrams μ with a single row, so that μ_1 = s and μ'_1 = 1; in this case the transfer matrix S_μ is denoted S_{1,s}, and (A.31) specialises accordingly. By using the CBR formula (A.8) together with (A.10) we can easily derive a relation expressing S_{1,s} through T_{N-1,s} and T_{N,1} factors. Since the T_{N,1} factors are just scalar multiples of the identity operator, we can then replace S_{1,s}(v + (i/2)(s+1)) in (A.31) with T_{N-1,s}(v - (i/2)(N-s-1)) to finally obtain (6.52).

^37 Actually, one finds that the order of the minors will be reversed. For sl(3) this does not matter, since all of the minors commute. For higher rank this is no longer the case, but it can easily be checked, using the method of [13], that the commutation relation (6.17) is invariant under reversing the order of the minors in B, meaning one could apply the mentioned sequence of transformations to that commutation relation and the end result will contain precisely B.

B Eigenstates of B and C form a basis
In this appendix we prove that the eigenstates ⟨x| of B and |y⟩ of C indeed form a basis of the Hilbert space, which we remind the reader is the space of functions analytic at the origin. We will demonstrate this in two parts. First, we show that all of the vectors ⟨x| and |y⟩ constructed in the main text are non-zero; the fact that they are linearly independent then follows since each ⟨x| and |y⟩ corresponds to a unique eigenvalue of B and C respectively. Second, we show that every element of the Hilbert space admits a series representation in ⟨x| and |y⟩.

B.1 ⟨x| and |y⟩ are non-zero
In the main text we constructed the SoV bases by the action of certain transfer matrices evaluated at special points. We need to check that the resulting states are not identically zero, i.e. that the required transfer matrices do not have vanishing eigenvalues at the required points. To do this we use the method developed in [19]: one checks that the transfer matrices in question have non-vanishing eigenvalues for a specific value of the twist, and hence for generic twist. We use the fact that, for length L = 1, when the twist matrix is the identity matrix all transfer matrices are just scalar multiples of the identity operator, for which we can easily verify that the required transfer matrices are non-vanishing. At higher values of L the statement can be reduced to the L = 1 case by taking the limit where the inhomogeneities are widely separated [19]. In order to prove the above claims we will use so-called quantum semi-standard Young tableaux [19,76-79]. A semi-standard Young tableau T of shape μ is a Young diagram μ filled with numbers from {1, 2, ..., N} such that the numbers in each row weakly decrease and the numbers in each column strictly decrease. Let g ∈ GL(N) have eigenvalues λ_1, ..., λ_N. Then the character χ_μ(g) of g in the representation μ can be written as a sum over all tableaux T of shape μ:

χ_μ(g) = Σ_T Π_{(a,s)∈μ} λ_{#(a,s)} ,

where #(a,s) denotes the number in position (a,s) of the tableau T. Similar expressions exist for transfer matrices. In order to describe them, let us consider generic highest-weight representations with gl(N) highest weight ν_1, ..., ν_N, and denote R_k(u) = u - θ + is + iν_k, k = 1, ..., N. Then the transfer matrix T_μ(u) can also be written as a sum over tableaux of shape μ, with each box (a,s) contributing a suitably shifted factor R_{#(a,s)}, according to the rule (B.2). In [19], T_μ(θ_α - is - iν_N - (i/2)(μ_1 - μ'_1)) was proven to be non-zero under certain conditions on μ, which were precisely the cases needed for constructing the SoV basis.
Namely, let ν̄ denote the "reduced" diagram of ν, i.e. ν̄ is the Young diagram [ν̄_1, ..., ν̄_N] where ν̄_j := ν_j - ν_N; then T_μ(θ_α - is - iν_N - (i/2)(μ_1 - μ'_1)) is non-zero provided μ is a subdiagram of ν̄. Let us examine how this rule changes under the action of the *-map, i.e. when are the transfer matrices T̄_μ(v) necessary for constructing |y⟩ non-zero? To understand this we should first understand what happens with the transfer matrices in antisymmetric representations T_{a,1}. Writing out the sum (B.2) for these and applying the *-map, we can deduce from the CBR formula [42,43] that all of the transfer matrices T̄_μ(u) can be written as a sum over tableaux T' of shape μ in which the entries strictly increase (instead of decrease) in each column and weakly increase in each row, which is equivalent to permuting the weights {ν_1, ..., ν_N} ↦ {ν_N, ..., ν_1}. Notice also the difference in the sign of the shift in R_{#(a,s)} in (B.6) compared to (B.2). Hence, T̄_μ has the interpretation of a Young diagram describing the lowest weight of the representation instead of the highest. For the classical character χ_μ(g) there is no difference, but T_μ and T̄_μ correspond to two different quantizations of this character. Alternatively, instead of swapping the weights and the signs of the shifts, we can interpret both of these changes as having flipped the Young diagram upside down and backwards, while keeping the same rules for associating shifts to boxes after assigning zero shift to the bottom-right corner. Now consider the *-reduced diagram ν̄* = {ν̄*_1, ..., ν̄*_N}, where we define ν̄*_j := ν_j - ν_1. Since all of the entries of ν̄* are either zero or negative, we can view the diagram as having been flipped upside down and backwards. Then the requirement on μ for T̄_μ(θ_α - is - iν_1 + (i/2)(μ_1 - μ'_1)) to be non-zero is totally analogous to the original case, namely that μ should be a subdiagram of ν̄* after μ has been flipped upside down and backwards as well, see Figure B.1.
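The tableau sum for the classical character can be cross-checked against the Weyl (bialternant) determinant formula; a small sketch for GL(3) and shape μ = (2,1) follows (we enumerate tableaux in the weakly-increasing convention, which gives the same character as the decreasing convention used in the text):

```python
import numpy as np
from itertools import product

def ssyt(shape, N):
    """Enumerate semi-standard Young tableaux of the given shape with
    entries in 1..N: rows weakly increase, columns strictly increase."""
    cells = [(r, c) for r, row in enumerate(shape) for c in range(row)]
    for fill in product(range(1, N + 1), repeat=len(cells)):
        T = dict(zip(cells, fill))
        rows_ok = all(T[(r, c)] <= T.get((r, c + 1), N) for (r, c) in cells)
        cols_ok = all(T[(r, c)] < T.get((r + 1, c), N + 1) for (r, c) in cells)
        if rows_ok and cols_ok:
            yield T

def char_tableaux(shape, lam):
    """Character as a sum over tableaux: χ_μ = Σ_T Π λ_{#(a,s)}."""
    return sum(np.prod([lam[v - 1] for v in T.values()])
               for T in ssyt(shape, len(lam)))

def char_weyl(shape, lam):
    """Weyl character (bialternant) formula for GL(N)."""
    N = len(lam)
    mu = list(shape) + [0] * (N - len(shape))
    num = np.linalg.det([[l ** (mu[j] + N - 1 - j) for j in range(N)] for l in lam])
    den = np.linalg.det([[l ** (N - 1 - j) for j in range(N)] for l in lam])
    return num / den

lam = [1.3, 0.7, 2.1]
assert np.isclose(char_tableaux((2, 1), lam), char_weyl((2, 1), lam))
```

The quantum tableau sum (B.2) replaces each eigenvalue λ_{#(a,s)} by a shifted factor R_{#(a,s)}, which is exactly the refinement exploited in the non-vanishing argument.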
Restricting to the compact case of interest, where ν_1 = -2s, ν_2 = 0 = ... = ν_N, this implies that the transfer matrix is non-zero as long as we restrict μ to be of height at most N-1 and width at most 2s. When we extend to general s this latter restriction is no longer present, and the only condition is that μ is of height at most N-1; hence all |y⟩ are non-zero. Finally, we examine ⟨x|, which is constructed using the transfer matrix T_{N-1,s}(θ_α + is + (i/2)(N-s-1)) (together with an overall shift of i(s-N+1)), which admits the sum over tableaux (B.2). Writing out this sum, any tableau containing 1 must contain it in the bottom-right corner, which comes with a shift of -i(s-N+1). Hence, when T_{N-1,s}(θ_α + is + (i/2)(N-s-1)) is expanded in a sum over tableaux, every tableau containing 1 contributes a factor R_1(θ_α + is) = 0, and so only tableaux which do not contain 1 can contribute; in fact there is a unique such tableau. In the compact case, it can easily be worked out that this single contribution is non-vanishing as long as s ≤ -2s, and in the non-compact case it is always non-vanishing. Hence, all ⟨x| are non-zero.
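The uniqueness of the surviving tableau is easy to confirm by brute force; a small sketch (again in the standard increasing-SSYT convention, with entries restricted so as to exclude 1) for N = 3 and a rectangle of height N-1:

```python
from itertools import product

def ssyt(shape, entries):
    """Semi-standard tableaux of the given shape: rows weakly increase,
    columns strictly increase, entries drawn from `entries`."""
    cells = [(r, c) for r, row in enumerate(shape) for c in range(row)]
    out = []
    for fill in product(entries, repeat=len(cells)):
        T = dict(zip(cells, fill))
        if all(T[rc] <= T.get((rc[0], rc[1] + 1), 10**9) for rc in cells) and \
           all(T[rc] <  T.get((rc[0] + 1, rc[1]), 10**9) for rc in cells):
            out.append(T)
    return out

N = 3
shape = (2, 2)                                    # height N-1, width 2
assert len(ssyt(shape, range(1, N + 1))) > 1      # many tableaux in total
assert len(ssyt(shape, range(2, N + 1))) == 1     # unique once 1 is excluded
```

With entries restricted to {2, ..., N}, each column of height N-1 is forced to be the full strictly increasing string 2, 3, ..., N, so the tableau is unique, which is the combinatorial fact used above.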

B.2 Series representation
Now that we have demonstrated that all ⟨x| and |y⟩ are non-zero, we need to show that every element f of the Hilbert space can be expanded as a (possibly infinite) linear combination of the ⟨x| with finite coefficients c_x, and similarly for |y⟩.
The key point is that both the orthonormal basis of monomials and the ⟨x| are eigenvectors of the SoV charge operator (3.3). The charge operator gives the Hilbert space H the structure of a graded space, H = ⊕_{s≥0} H_s, where H_s is the subspace of SoV charge s. It is then a simple counting exercise to verify that the number of ⟨x| contained in H_s precisely matches the number of basis monomials, each of which is contained in precisely one H_s. Hence, the ⟨x| of charge s form a basis of the finite-dimensional space H_s. By definition, f admits a series expansion in the orthonormal basis of monomials, and this series is absolutely convergent, so it can be rearranged in any order we like. We can then arrange it in order of increasing charge and write f = Σ_s f_s, where f_s denotes the projection of f onto H_s. Each f_s can then be written as a finite linear combination of ⟨x| of charge s. Precisely the same argument goes through for |y⟩, completing the proof.
C Measure for sl(N) from the Baxter equations

In this appendix we extend the results of sections 4.1 and 4.2 for the integral form of the orthogonality relation to the generic sl(N) case. We present the derivation in a more algebraic way, largely following the one done in [21] for the s = 1/2 case.
In order to concisely obtain the transfer matrix eigenvalues in the sl(N) case, it is convenient to use the generating functional described in [47] (see [80] for a review), which in our case can be written as (C.1), where the Λ_n are "quantum eigenvalues" given in terms of Q-functions with the multi-index J_i = 12...i, so that e.g. Q_{J_2} = Q_{12} (recall also that Q_{12...N} = 1 in our conventions). The Q-functions which appear here are all twisted polynomials, i.e. polynomials times exponents. This functional provides the (nontrivial part of the) eigenvalues of the transfer matrices τ_k with k-th antisymmetric representation of sl(N) in the auxiliary space as the coefficients of powers D^n in its expansion (C.3). Let us note also that the first and last terms are state-independent (C.4). All the other τ_k are also polynomials (of degree L) as long as the Bethe ansatz equations are satisfied; this can be shown by following the same argument as in [21]. Their relation to T_{a,1} (the eigenvalues of T_{a,1} defined in section 2.1) is

T_{a,1} = τ_a^{[a-1]} Π_{k=1}^{a-1} Q_θ^{[2s-2k+a-1]} .   (C.5)

Using this functional we can write the Baxter equations in a very compact form. Following [22], let us introduce the notation W⃗ and W⃖ for the action of the shift operators to the right and to the left (C.6). Then we see from the last factor in (C.1) that W⃗ Q_N^{[+N]} = 0. Similarly to [21], one can show that this is true for all twisted polynomial Q's with one upper index (which we remind the reader are defined in (4.31)),

W⃗ Q^{(a+1)[+N]} = 0 ,   a = 1, ..., N-1 .   (C.7)

Moreover, from the form of the first factor in (C.1) we find that, when acting to the left, W annihilates Q_1 if we multiply it by the same function ε we used before in (4.8), as in (C.8). As a result, we can write the N-th order Baxter equations satisfied by Q^a and Q_1 as (C.7) and (C.8). Due to (C.3), the first of these equations can be written in the more explicit form

Ô† Q^{a+1} = 0 ,   a = 1, ..., N-1 ,   (C.9)

where we introduced the difference operator Ô† through

Ô† Q^a = τ_0 Q^{a[+N]} - τ_1 Q^{a[+N-2]} + ... + (-1)^{N-1} τ_{N-1} Q^{a[-N+2]} + (-1)^N τ_N Q^{a[-N]} = 0 .   (C.10)

This makes it clear that in particular for N = 3 we recover the Baxter equation (4.21) described before.

C.1 Orthogonality
As for the sl(3) case, in order to derive the orthogonality relations for Q-functions we will prove the key relation

(Q_1 Ô† f)_α = 0 ,   (C.11)

where we take the measure μ_α in the definition of the bracket (see (4.4)) to be the same function (4.10) as for sl(3) and sl(2). Here f is a regular function with the same large-u asymptotics as any of the Q^{a+1} functions (a = 1, ..., N-1). To demonstrate (C.11), we start from its l.h.s. and use that Ô† = W⃗ D^N; then we transfer the shifts of the argument away from f by moving the integration contour, as in (C.12). As a result we get the operator W⃖ acting to the left on the combination μ_α Q_1, and this gives zero due to (C.8), thus proving (C.11). In (C.12) we have also indicated that when we move the integration contour there could be extra contributions from the poles of μ_α (located at u = θ_β - is - in and u = θ_α + is + in with β = 1, ..., L and n = 0, 1, ...). However, when s is positive and large enough there are no poles in the region where the contour is being moved. Since all the terms under the integral are analytic as functions of s, so is the whole expression, and thus the poles do not give any extra contribution^38.
Let us also comment on the convergence of the integrals. Since we take f to have the same asymptotics as any of the Q^{a+1}, the integrals in (C.12) will be finite if

0 < arg λ_c - arg λ_1 < 2π ,   c = 2, ..., N .   (C.13)

Here, for definiteness, we assume that these inequalities hold. If that is not the case, one should modify the integration measure in the same way as for the sl(2) and sl(3) spin chains (see the discussion after (4.25)).
Having the property (C.11), we can derive orthogonality relations for different states exactly as in the sl(2) and sl(3) cases. For sl(N) the transfer matrix eigenvalues τ_a have the form

τ_a = χ_a(λ_1, ..., λ_N) u^L + Σ_{α=1}^{L} I_{a,α} u^{α-1} ,   (C.14)

where the leading term is the character of the twist matrix in the a-th antisymmetric representation, and the I_{a,α} are the eigenvalues of the integrals of motion. Using (C.11) for two different states A and B, we obtain a linear system (C.15), (C.16). Requiring that this linear system has a nontrivial solution leads to

det_{(a,α),(b,β)} m_{(a,α),(b,β)} = 0 ,   m_{(a,α),(b,β)} = (Q^A_1 u^{β-1} D^{-2b+N} ∘ Q^{B,a+1})_α .   (C.17)
This is the orthogonality condition that we presented in the main text in (4.30).

D Scalar product for compact su(N) spin chains and analytic continuation in the spin
In this appendix we discuss how our results for the scalar product can be analytically continued from s > 0 to negative values of s, as well as the reduction to the compact spin chain case.

^38 One can also verify directly that the pole contributions cancel by adapting the derivation from [21].

D.1 Analytic continuation to s < 0
Let us recall that when deriving the integral form of the scalar product (e.g. (4.30) for sl(N)) we assumed that s > 0. All these scalar products are written in terms of the bracket (4.4), an integral along the real line with the measure (4.10), which has poles at u = θ_β - is + in (for all β = 1, ..., L) and u = θ_α + is + in with integer n ≥ 0. One possibility for defining the analytic continuation of this integral to s < 0 would be to keep the contour always slightly below the pole at u = θ_α + is; but even then, when we go from s > 0 to s < 0, the poles at u = θ_β - is cross the integration contour, and one should also subtract their contribution. As we further decrease the value of s, more and more poles cross the contour, making the result somewhat cumbersome. A simpler approach is to first rewrite the integral (for s > 0) as a sum over the poles in the upper half-plane at u = θ_α + is + in with n = 0, 1, ..., as we discussed in sections 4.1.2 and 5, by closing the contour in the upper half-plane. The sum over these poles is itself analytic in s and thus can be used directly for s < 0. Thus, for s < 0 we understand the bracket in the scalar product as the sum over the poles at u = θ_α + is + in. Being analytic in s, it retains all its key features and ensures that the scalar product (4.30) vanishes for different states A, B.
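The contour-closing step is the standard residue computation; as a toy illustration (with a simple rational integrand in place of the actual SoV measure), one can check numerically that the integral over the real line equals the sum over the upper-half-plane poles:

```python
import numpy as np
from scipy.integrate import quad

# toy integrand: three simple poles, two of them in the upper half-plane
a1, a2 = 0.3 + 0.5j, -0.2 + 1.5j   # upper half-plane poles
b = 0.1 - 0.7j                      # lower half-plane pole

def f(u):
    return 1.0 / ((u - a1) * (u - a2) * (u - b))

# direct integral along the real line
re = quad(lambda u: f(u).real, -300, 300, limit=400)[0]
im = quad(lambda u: f(u).imag, -300, 300, limit=400)[0]
direct = re + 1j * im

# closing the contour upwards: 2πi × (sum of upper-half-plane residues)
residues = 1 / ((a1 - a2) * (a1 - b)) + 1 / ((a2 - a1) * (a2 - b))
assert abs(direct - 2j * np.pi * residues) < 1e-3
```

For the actual measure (4.10) the residue sum runs over the infinite tower u = θ_α + is + in, but the principle is the same, and it is this pole sum that admits the analytic continuation in s.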

D.2 Reduction to the compact su(N) case
Having clarified the construction for s < 0, we can now explore the particularly interesting case when s takes negative half-integer values, s = -1/2, -1, -3/2, .... In this case the representation of sl(N) on the spin chain sites becomes reducible and acquires a finite-dimensional irreducible subspace, corresponding to the (-2s)-th symmetric power of the fundamental irrep of sl(N). As discussed at the end of section 6, our construction of the SoV basis then provides a basis precisely for this subspace, so that our model reduces to a finite-dimensional compact su(N) spin chain.
We expect that accordingly the scalar products (defined in terms of the sum over poles as we just discussed) should reduce from an infinite to a finite sum over the values that label the SoV basis. Nicely, for the sl(2) case we see at once that this sum truncates for s a negative half-integer, as all but the first several elements of the SoV measure vanish, since (2.38) gives zero for n_β ≥ -2s. The same is true for the sl(N) case, as one can see from the explicit result for the SoV measure (5.39), since the factor r_{α,n} defined in (2.39) vanishes for n ≥ -2s. As a result, we see that our SoV measure works perfectly in the finite-dimensional case as well.
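The truncation mechanism is of the Pochhammer type: a rising-factorial-like factor kills all terms with n ≥ -2s once s is a negative half-integer. A small sketch, using the rising factorial (2s+1)_n purely as a stand-in for the actual measure factor r_{α,n} of (2.39), which has the same vanishing pattern:

```python
from sympy import Rational, rf  # rf = rising factorial (Pochhammer symbol)

def toy_factor(s, n):
    """Illustrative Pochhammer-type factor (2s+1)_n; NOT the paper's
    r_{α,n}, just a minimal model with the same vanishing pattern."""
    return rf(2 * s + 1, n)

for s in [Rational(-1, 2), -1, Rational(-3, 2)]:
    cutoff = int(-2 * s)
    # the factor vanishes exactly from n = -2s onwards, truncating the sum
    assert all(toy_factor(s, n) != 0 for n in range(cutoff))
    assert all(toy_factor(s, n) == 0 for n in range(cutoff, cutoff + 4))
```

For s = -1/2 only the n = 0 term survives, for s = -1 the terms n = 0, 1 survive, and so on, matching the dimension count of the finite-dimensional subspace.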
For various applications it is interesting to write the scalar products in an integral form in the finite-dimensional case as well. This was done for su(3) originally in [22] for the case s = -1/2 (i.e. the fundamental representation). We can now almost immediately extend this result to any su(N) and any s = -1/2, -1, ....
Let us note that for these values of s the measure (4.10) we used so far in the integral form of the scalar product simplifies and becomes

μ_α(u) = const × 1/(1 + e^{2π(u-θ_α)}) × 1/Π_{k=-s}^{s} Q_θ(u+ik) ,

where the product goes over k = -s, -s+1, ..., s. Let us redefine it by multiplying with an i-periodic function that removes the infinite set of poles coming from the first factor, and also removes the poles of the Q_θ factors associated to all θ_β with β ≠ α.

We assume for simplicity that s > 0 and that the inhomogeneities θ_β are real. The poles of the integrand are located at u = ±(is + in) + θ_β with β ≠ α and at u = is + in + θ_α, with n ≥ 0. Let us first consider the case 0 < s < 1/2, starting with the first type of poles, i.e. those associated with θ_β, β ≠ α. The first term in (E.1) (the term with f^{[-3]}) has no pole at the potentially dangerous points u = is + θ_β and u = is + i + θ_β which we are crossing, and thus gives no contribution. The second term in (E.1) (the term with f^-) also has no pole at u = is + θ_β. The third term (the term with f^+) gives a contribution from the pole at u = θ_β - is that we denote r_1, with

r_1 = f(θ_β - is + i/2) Q_1(θ_β - is) τ_1(θ_β - is) × i e^{2πθ_α + 4iπs} / ( (e^{2πθ_α + 4iπs} - e^{2πθ_β}) Γ(1-2s) ) × Π_{γ≠β} Γ(i(θ_γ - θ_β)) / Γ(-2s + i(θ_γ - θ_β) + 1) .   (E.2)
Using (E.5) we find that r_3 + r_4 = 0, so these contributions cancel against each other. Finally, we can check in a similar way that the contributions from the 3rd and 4th terms in (E.1) also cancel each other.
Thus we see that when 0 < s < 1/2 the poles give no contribution. The case s = 1/2 was considered in [21], and the poles similarly cancel. Finally, let us consider the case s > 1/2. Then the 2nd and 3rd terms in (E.1) do not have poles in the relevant region at all. For the 1st and 4th terms the poles trivially vanish due to their accompanying Q_θ factors. Thus there are no contributions from any poles. Let us also mention that when s > 3/2 we see at once that all potential poles are absent from the relevant strip −3/2 ≤ Im u ≤ 3/2 in which we are moving the contour.
As a result, we have shown that all the contributions from poles cancel nontrivially once we invoke relations between various transfer matrices and Q-functions.

F Oscillator representation for sl(N), and relations for generators
In the main text we noticed that the transfer matrices in anti-symmetric representations have trivial non-dynamical factors (3.20) and (C.3). This can be traced back to some simple relations satisfied by the generators of sl(N) in our specific representation.
Firstly, there are the linear and quadratic Casimirs, which are easy to determine by acting on the HW state:
$$
E_{aa} = (N-2)s\,, \qquad E_{ab}E_{ba} = Ns^2 - 2Ns + 2s\,, \qquad {\rm (F.1)}
$$
where the repeated indices a and b are summed over. Another relation, which we found to be very useful, is
$$
E_{a,c}E_{b,d} - E_{b,c}E_{a,d}
= s\left(E_{a,c}\delta_{b,d} - E_{b,c}\delta_{a,d}\right)
+ (s-1)\left(E_{b,d}\delta_{a,c} - E_{a,d}\delta_{b,c}\right)
+ (s-1)s\left(\delta_{a,d}\delta_{b,c} - \delta_{a,c}\delta_{b,d}\right)\,. \qquad {\rm (F.2)}
$$
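Both relations in (F.1) and the identity (F.2) can be sanity-checked numerically in the simplest case, the fundamental representation s = −1/2, where one can realise the generators as E_{ab} = e_{ab} + s δ_{ab}, with e_{ab} the matrix units on C^N (an illustrative realisation consistent with (F.1); the representation used in the text is more general). A short Python check:

```python
import numpy as np

def gens(N, s=-0.5):
    """E_ab = e_ab + s*delta_ab on C^N: the fundamental rep at s = -1/2."""
    E = np.zeros((N, N, N, N))
    for a in range(N):
        for b in range(N):
            E[a, b, a, b] = 1.0           # matrix unit e_ab
        E[a, a] += s * np.eye(N)          # diagonal shift by s
    return E

def check(N, s=-0.5):
    E, I = gens(N, s), np.eye(N)
    kd = lambda i, j: float(i == j)
    # (F.1): E_aa = (N-2)s  and  E_ab E_ba = N s^2 - 2 N s + 2 s  (summed)
    lin = sum(E[a, a] for a in range(N))
    quad = sum(E[a, b] @ E[b, a] for a in range(N) for b in range(N))
    ok = np.allclose(lin, (N - 2) * s * I)
    ok &= np.allclose(quad, (N * s**2 - 2 * N * s + 2 * s) * I)
    # (F.2), checked for all choices of the free indices a, b, c, d
    for a in range(N):
        for b in range(N):
            for c in range(N):
                for d in range(N):
                    lhs = E[a, c] @ E[b, d] - E[b, c] @ E[a, d]
                    rhs = (s * (E[a, c] * kd(b, d) - E[b, c] * kd(a, d))
                           + (s - 1) * (E[b, d] * kd(a, c) - E[a, d] * kd(b, c))
                           + (s - 1) * s * (kd(a, d) * kd(b, c)
                                            - kd(a, c) * kd(b, d)) * I)
                    ok &= np.allclose(lhs, rhs)
    return bool(ok)

assert check(2) and check(3) and check(4)
```

Note that (F.2) holds identity by identity (for every choice of a, b, c, d as operators), not merely after tracing, which is what makes it so useful for reducing products of generators.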
This is easy to verify for sl(3), using the explicit form of the generators (3.1), (3.2), (3.5). For general sl(N) we used the oscillator representation of [81,82]. For completeness we write it below in our conventions, where
$$
h = -2s - \sum_{i=1}^{N-1} b_i^\dagger b_i \;, \qquad \left[b_i, b_j^\dagger\right] = \delta_{ij}\;.
$$
One of the consequences of the relation (F.2) is that the Lax operator L_{a,1} (2.5) is linear in the generators and produces trivial scalar factors; this can be seen explicitly for L_{2,1}, where we raised some indices to indicate antisymmetrisation more easily. The general expression, which can be deduced as a consequence of (F.2), shows that the last factor agrees with (C.3). Removing the scalar factor and performing the shift in accordance with (C.3), we see that τ_a(u) is built out of "reduced" L-operators.

In section 7.4.1 we are interested in the coefficient of u^{L−1} in the combination τ_a(u) + u ∂_{θ_α} τ_a(u), namely I_{a,L−1} + ∂_{θ_α} I_{a,L−2}, which gives a combination of generators acting on one site of the chain. From (F.9) we immediately obtain
$$
I_{a,L-1} + \partial_{\theta_\alpha} \hat I_{a,L-2}
= \left(-\theta_\alpha - i(a-1)s\right)\delta^{d_1}_{[e_1}\cdots\delta^{d_a}_{e_a]} + \dots
= (i\theta_\alpha + s)\, I_{a,L} + i\, I_{a,L-1} + i\,\partial_{\theta_\alpha} I_{a,L-2}\;, \qquad {\rm (F.10)}\text{--}{\rm (F.13)}
$$
which is what we use in section 7.4.1.

G Mathematica implementation of general measure elements
In this appendix we give a simple implementation of our general formula (5.39). The code is purely for demonstration purposes and is not particularly optimised for long chains. First, we introduce some notations in Mathematica
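As a language-agnostic warm-up (an illustrative Python analogue, not the authors' Mathematica code), the basic building block appearing throughout the measure is the polynomial Q_θ(u) = ∏_α (u − θ_α) constructed from the inhomogeneities, which vanishes at each θ_α:

```python
from functools import reduce

def Q_theta(u, thetas):
    """Q_theta(u) = prod over alpha of (u - theta_alpha)."""
    return reduce(lambda acc, t: acc * (u - t), thetas, 1)

thetas = [0.1, 0.7, -0.4]   # sample inhomogeneities (illustrative values)

assert Q_theta(0.7, thetas) == 0    # zero at each inhomogeneity
assert Q_theta(1.0, thetas) != 0    # generic points are non-zero
```

Shifted copies Q_θ(u + ik) of this polynomial are what supply (or cancel) the poles discussed in appendix E.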